Log Analyzer Pro

Open multi-GB logs in VS Code powered by Rust

Stop switching to the terminal to read large logs. This VS Code extension uses a native Rust backend and memory-mapping to open gigabyte-sized files instantly. Includes virtual scrolling, regex search, smart filtering, and live "tail -f" updates.
Free

Artem Molchanov
Hey Product Hunt! 👋 I’m Artyom, the maker of Log Analyzer Pro.

I built this tool to scratch my own itch. I love VS Code, but we all know the pain of accidentally opening a massive server.log file. The editor freezes, RAM spikes, and you’re forced to kill the process and switch to a terminal to use less or tail. I wanted the speed of CLI tools but with the comfort of my IDE (mouse scroll, copy-paste, highlighting).

💡 How it works: instead of loading the file into VS Code’s memory (which is Electron/JS based and slow for big data), I built a Rust sidecar.

  • It uses memory-mapped I/O, so opening a 10GB file takes virtually zero RAM.

  • It builds a line index instantly, allowing you to scroll to line 5,000,000 without lag.

  • The frontend uses virtual scrolling to render only what you see.

✨ Key features:

  • Zero lag: open multi-gigabyte files in milliseconds.

  • Smart filtering: filter for ERROR or WARN but keep the real line numbers visible (essential for debugging).

  • Follow mode: a built-in tail -f that auto-scrolls when new lines appear.

  • Search: regex and plain-text search that doesn’t freeze the UI.
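The line-index idea above can be sketched in a few lines of Rust. This is a minimal illustration, not the extension's actual code: it assumes the memory-mapped file is already visible as a `&[u8]` slice (e.g. via a crate like memmap2) and records the byte offset where each line starts, so jumping to any line number is a single array lookup.

```rust
/// Record the byte offset at which each line starts: offset 0, plus the
/// position right after every '\n'. (A file ending in '\n' yields one
/// trailing empty "line"; a real viewer would special-case that.)
fn build_line_index(data: &[u8]) -> Vec<usize> {
    let mut index = vec![0];
    for (i, &b) in data.iter().enumerate() {
        if b == b'\n' {
            index.push(i + 1);
        }
    }
    index
}

/// Fetch line `n` (0-based) in O(1) using the index -- no scanning,
/// which is what makes jumping to line 5,000,000 feel instant.
fn line<'a>(data: &'a [u8], index: &[usize], n: usize) -> &'a [u8] {
    let start = index[n];
    let end = if n + 1 < index.len() {
        index[n + 1] - 1 // stop before the '\n'
    } else {
        data.len()
    };
    &data[start..end]
}
```

With mmap, the `data` slice costs no RAM up front; only the `Vec<usize>` index (8 bytes per line) lives in memory.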
Agbaje Olajide

This is a brilliant solution to a universal dev pain point. Using a Rust sidecar with memory-mapped I/O to bypass Electron's limits is the perfect technical approach.

A key question for production use: How does the extension handle actively writing log files? Does the "Follow Mode" (tail -f) update the index and virtual scrolling in real-time without performance degradation, or does it require periodic re-indexing?


Artem Molchanov

@olajiggy321 Follow Mode uses periodic polling with incremental change detection, but it does a full re-index when the file changes. It's not true real-time streaming, but it's good enough for most production scenarios.

Artem Molchanov

@olajiggy321

  • Polls every 500ms (not true streaming)

  • Compares file size first (cheap) — skips re-indexing if unchanged

  • When file grows: full re-mmap + full re-index of the entire file

  • With full re-indexing, cost scales with file size: the indexer does ~500-1000 MB/s, so a ~100MB file re-indexes in ~0.1-0.2s per poll, while a 10GB file growing constantly would take several seconds per refresh

Performance implications:

  • Works great for files up to ~5-10GB with moderate write rates

  • For extremely high write rates (thousands of lines/sec) or massive files (50GB+), there could be noticeable lag

  • The polling model means max 500ms latency for new lines

Potential improvement: Could implement incremental indexing (only scan appended bytes), but current implementation is "good enough" for 99% of use cases.
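The poll logic described above, plus the incremental-indexing improvement (scan only appended bytes), could look roughly like this in Rust. The names (`PollAction`, `poll_action`, `extend_line_index`) are illustrative, not taken from the extension:

```rust
#[derive(Debug, PartialEq)]
enum PollAction {
    Unchanged,            // size identical: skip re-indexing entirely
    Append { from: u64 }, // file grew: only bytes [from..) are new
    Truncated,            // file shrank (e.g. log rotation): full rebuild
}

/// Decide what a poll tick should do from the cheap size check alone
/// (the size would come from something like std::fs::metadata).
fn poll_action(last_size: u64, current_size: u64) -> PollAction {
    if current_size == last_size {
        PollAction::Unchanged
    } else if current_size > last_size {
        PollAction::Append { from: last_size }
    } else {
        PollAction::Truncated
    }
}

/// Incremental indexing: extend an existing line-start index by scanning
/// only the appended bytes, instead of re-scanning the whole file.
fn extend_line_index(index: &mut Vec<usize>, data: &[u8], old_len: usize) {
    for i in old_len..data.len() {
        if data[i] == b'\n' {
            index.push(i + 1); // the next line starts after the newline
        }
    }
}
```

The `Truncated` arm matters in practice: rotated logs shrink, and an append-only index would be silently wrong without it.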

Agbaje Olajide

@let_molchanov 
Thanks for the exceptionally detailed and transparent breakdown—that level of technical honesty is rare and appreciated. The polling model with full re-index makes perfect sense for the "good enough for 99% of use cases" goal.

I have a small, practical idea related to managing user expectations around that performance trade-off that you could implement on your own.

If you're open to a suggestion, what's the best way to share it? (Email, DM, etc.)

Igor Kruze

Hey Artyom!

Great Job!

Could I use your plugin for searching large files of a different type?
I'm interested in JSON

Artem Molchanov

@igor_kruze Yes, you can open any text file, including JSON, via the 'Open with Log Analyzer Pro' command. It's particularly useful for the NDJSON/JSON Lines format (one JSON object per line), which is common in log aggregation systems. For pretty-printed JSON, it works as a basic text viewer, without JSON-specific features like syntax highlighting or tree navigation.
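Since NDJSON is handled as plain text, the "filter but keep real line numbers" behavior mentioned in the launch post applies to it directly. A minimal sketch of that kind of filter (plain substring match; `filter_lines` is an illustrative name, not the extension's API):

```rust
/// Filter lines containing `needle` while keeping the original 1-based
/// line numbers, so filtered output still maps back to the real file.
fn filter_lines<'a>(text: &'a str, needle: &str) -> Vec<(usize, &'a str)> {
    text.lines()
        .enumerate()
        .filter(|(_, l)| l.contains(needle))
        .map(|(i, l)| (i + 1, l))
        .collect()
}
```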
