Launching today

Glassbrain
Visual trace replay for AI apps to fix bugs in one click
46 followers
Glassbrain captures every step of your AI app as an interactive visual trace tree. Click any node, swap the input, replay instantly without redeploying. Snapshot mode stores deterministic replays. Live mode hits your actual stack. Auto-generated fix suggestions reference exact trace data with one-click copy. Diff view shows exactly what changed. Shareable replay links let your team debug together. Works with OpenAI and Anthropic. Two lines of code to integrate. Free tier: 1K traces/month.
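A hedged sketch of what "two lines of code to integrate" could look like. `wrapAnthropic` is the wrapper Glassbrain names in this thread, but the stub below (and the fake client type) is illustrative, not the real SDK:

```typescript
// Stub standing in for Glassbrain's real wrapper and for the Anthropic
// client — names and shapes here are assumptions for illustration only.
type Client = { complete: (prompt: string) => Promise<string> };

const traces: Array<{ prompt: string; response: string }> = [];

// Pass every call through unchanged, recording it as a trace entry.
function wrapAnthropic(client: Client): Client {
  return {
    complete: async (prompt) => {
      const response = await client.complete(prompt);
      traces.push({ prompt, response }); // captured for the trace tree
      return response;
    },
  };
}

// The integration itself stays two lines: construct the client, then wrap it.
const base: Client = { complete: async (p) => `reply to: ${p}` }; // fake client
const client = wrapAnthropic(base);
```

The point of the wrapper pattern is that application code keeps calling `client.complete` as before; capture happens transparently.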

@sai_ram_muthineni The replay part is the hook for me. Finding a bad run is one thing. Getting back to it cleanly is usually where time disappears. Does replay end up replacing manual log digging for most teams?
Glassbrain
@artem_kosilov Hey Artem, exactly right. Finding the bad run is maybe 10% of the pain. The other 90% is reproducing it, isolating which node broke, and testing a fix without breaking everything else.
Replay handles all of that. You click the node that failed, change the input, and fire it again right there. No log digging, no redeploying, no "let me reproduce this in staging."
On top of that, Glassbrain also auto-suggests fixes that reference the exact trace data, so you're not starting from scratch when figuring out what to change. For most teams, yeah, that whole workflow just goes away.
The replay without redeploying part is what got me. Does it work with any LLM framework or do you need to set up a specific SDK? Asking because I'm on a custom Claude API setup and always dread the debug process.
Glassbrain
@abhra_das1 Hey Abhra! So you do need the SDK, but honestly it's two lines. Just wrap your Anthropic client with wrapAnthropic and you're good to go. No framework, no setup headache.
The replay thing works because Glassbrain snapshots your exact call (prompt, params, model version, all of it) so when something breaks you just go into the dashboard, tweak the input, and fire a real call right there. Never touch your codebase. For someone who dreads the debug process this is kind of the whole point. Give the free tier a shot, would love to hear how it goes with your setup!
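The snapshot idea described above can be sketched like this. This is an illustrative model, not Glassbrain's actual internals; all names are hypothetical:

```typescript
// The wrapper records the exact call (prompt, params, model) next to the
// response, so a failed run can be replayed later with a tweaked input.
interface Snapshot {
  params: Record<string, unknown>; // model, temperature, the exact prompt...
  response: string;
}

type Call = (params: Record<string, unknown>) => Promise<string>;

function snapshotting(call: Call, store: Snapshot[]): Call {
  return async (params) => {
    const response = await call(params);
    store.push({ params, response }); // full call captured for replay
    return response;
  };
}

// Replay a stored snapshot with a tweaked input. With a live client supplied,
// a real call is fired ("live mode"); without one, the recorded response is
// returned deterministically ("snapshot mode").
async function replay(
  snap: Snapshot,
  tweak: Record<string, unknown>,
  live?: Call
): Promise<string> {
  const params = { ...snap.params, ...tweak };
  return live ? live(params) : snap.response;
}
```

Because the full parameter set is stored, "tweak the input and fire again" is just a merge over the captured params; nothing in the codebase has to change.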
Two lines of code to integrate is the right move. Half the reason I avoid adding observability to my projects is the setup overhead. The visual trace tree vs. walls of JSON logs is a real upgrade. Quick question - does it handle multi-step chains where one node calls another model mid-pipeline, or is it mainly single-call tracing?
Glassbrain
@thenomadcode Hey Christophe, yeah it handles multi-step chains. The SDK wraps your Anthropic/OpenAI client, so every call from every step gets captured automatically. Whether it's a single completion or a 10-step agent pipeline with retrieval, tool calls, and nested LLM calls, each node shows up in the tree with its own inputs, outputs, latency, and tokens.
You get the full execution graph, so when something breaks three steps deep you can see exactly which model call produced the bad output and replay from that specific node. It also picks up LangChain and LlamaIndex pipelines since those wrap the same underlying clients.
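The nested-capture behavior described here can be sketched as a small tracer that builds the execution graph. Again, this is a hypothetical model of the technique, not Glassbrain's implementation:

```typescript
// Each step (retrieval, tool call, LLM call) becomes a node; calls made
// while a step is active become its children, yielding the trace tree.
interface TraceNode {
  name: string;
  input?: unknown;
  output?: unknown;
  children: TraceNode[];
}

class Tracer {
  root: TraceNode = { name: "run", children: [] };
  private stack: TraceNode[] = [this.root];

  async step<T>(name: string, input: unknown, fn: () => Promise<T>): Promise<T> {
    const node: TraceNode = { name, input, children: [] };
    this.stack[this.stack.length - 1].children.push(node); // attach to current step
    this.stack.push(node); // nested calls land under this node
    try {
      const output = await fn();
      node.output = output;
      return output;
    } finally {
      this.stack.pop(); // step finished; restore the parent as current
    }
  }
}
```

A 10-step pipeline just nests `step` calls, so the tree falls out of ordinary control flow: when step three produces the bad output, its node holds the exact input and output, which is what makes replaying from that specific node possible.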