RunStack

Build, test, and run LLM workflows — visually.

RunStack lets you design AI workflows with a visual editor, then run them from your app using an API. You can chain prompts, add variants, compare outputs, and turn the whole workflow into a reusable, production-ready endpoint. If you want to design AI workflows visually and ship them to production, this is for you.

Martin
Maker
📌
Hey Hunters 👋

I built RunStack after struggling with the gap between prompt experimentation and production usage. Testing prompts in notebooks or chats is easy — but once you want to:

- compare variants
- reuse workflows
- or call them from an app

things get messy fast.

RunStack lets you design and test LLM workflows visually, then expose them as an API you can call from your product or backend. You can:

- run multiple prompt variants
- compare outputs and costs
- select the best result
- and trigger the same workflow with real data via API

It started as an internal dev tool and grew into something I use daily. I'd love feedback from people building:

- AI agents
- internal AI tools
- production LLM pipelines

I'll be here all day — happy to answer technical questions or discuss tradeoffs 🙌
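
To give a sense of what triggering a workflow from a backend might look like, here's a minimal sketch. The endpoint URL, auth scheme, and payload shape are all illustrative assumptions; RunStack's actual API contract may differ.

```python
import requests

# Sketch of triggering a published workflow over HTTP. The URL, header,
# and payload shape below are hypothetical; check RunStack's docs for
# the real contract.
RUNSTACK_ENDPOINT = "https://api.runstack.example/v1/workflows/wf_123/run"
API_KEY = "your-api-key"

def run_workflow(inputs: dict) -> dict:
    """Send real data to the workflow and return its output."""
    resp = requests.post(
        RUNSTACK_ENDPOINT,
        headers={"Authorization": f"Bearer {API_KEY}"},
        json={"inputs": inputs},
        timeout=60,
    )
    resp.raise_for_status()
    return resp.json()

print(run_workflow({"topic": "Product Hunt launch recap"}))
```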
kxbnb

Nice work on RunStack! The visual workflow builder looks clean, and exposing workflows as an API is a smart approach for production use.

One thing I've run into when debugging LLM workflows in prod - you often need to see the actual API requests going out, not just what your workflow thinks it's sending. When a prompt chain fails intermittently, the issue is usually in the data hitting the wire.

We built [toran.sh](https://toran.sh) for exactly this - a transparent proxy that shows you real-time requests/responses to external APIs. Works well alongside tools like this for debugging when things don't behave as expected.
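
The general pattern here is to point your LLM client's base URL at the proxy so every request and response crosses it and can be inspected. A minimal sketch with the OpenAI Python client; the proxy URL is a placeholder, and toran.sh's actual setup may differ.

```python
from openai import OpenAI

# Generic debugging pattern: route traffic through a transparent proxy
# that records every request/response to the upstream API.
# "proxy.example" is hypothetical; consult the proxy's docs for real setup.
client = OpenAI(
    base_url="https://proxy.example/v1",  # proxy forwards to api.openai.com
    api_key="your-api-key",
)

resp = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "ping"}],
)
print(resp.choices[0].message.content)
```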

Curious how you're handling observability for the API calls inside workflows?

Marius Pon

@martysalade Love the UI!! Do you plan on adding operators other than LLMs? For example, the ability to manipulate an LLM's output before sending it to another one, for more deterministic results?
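
For reference, the kind of deterministic operator Marius describes could be as simple as a plain function between two LLM calls that parses and validates the first model's output before the next prompt sees it. A sketch, with `call_llm` as a hypothetical stand-in for any model client:

```python
import json

def call_llm(prompt: str) -> str:
    """Hypothetical stand-in for any LLM client call; stubbed for the sketch."""
    return '{"entities": ["Acme Corp", "Berlin", "acme corp"]}'

def extract_entities(raw: str) -> list[str]:
    """Deterministic operator between two LLM steps: parse and validate
    the first model's JSON output before handing it to the next prompt."""
    data = json.loads(raw)  # fails loudly on malformed model output
    entities = data.get("entities", [])
    if not isinstance(entities, list):
        raise ValueError("expected 'entities' to be a list")
    # Normalize deterministically so downstream prompts are stable.
    return sorted({e.strip().lower() for e in entities if isinstance(e, str)})

raw = call_llm("Extract the named entities from the text as JSON.")
entities = extract_entities(raw)
followup = f"Write one sentence about each of: {', '.join(entities)}"
print(followup)
```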