
RunStack
Build, test, and run LLM workflows — visually.
15 followers
RunStack lets you design AI workflows with a visual editor, then run them from your app using an API. You can chain prompts, add variants, compare outputs, and turn the whole workflow into a reusable, production-ready endpoint. If you want to design AI workflows visually and ship them to production, this is for you.
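For a sense of what that looks like from the application side, here is a minimal sketch of calling a published workflow over HTTP. The endpoint path, workflow ID, and `RUNSTACK_API_KEY` variable are assumptions for illustration, not RunStack's documented API:

```python
import os
import requests

# Hypothetical endpoint and workflow ID; the real RunStack API shape may differ.
RUNSTACK_API = "https://api.runstack.example/v1/workflows"
WORKFLOW_ID = "wf_summarize_ticket"

def run_workflow(inputs: dict) -> dict:
    """Call a published workflow endpoint and return its output payload."""
    resp = requests.post(
        f"{RUNSTACK_API}/{WORKFLOW_ID}/run",
        headers={"Authorization": f"Bearer {os.environ['RUNSTACK_API_KEY']}"},
        json={"inputs": inputs},
        timeout=60,
    )
    resp.raise_for_status()
    return resp.json()

if __name__ == "__main__":
    result = run_workflow({"ticket_text": "Customer cannot reset their password."})
    print(result)
```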

Nice work on RunStack! The visual workflow builder looks clean, and exposing workflows as an API is a smart approach for production use.
One thing I've run into when debugging LLM workflows in prod - you often need to see the actual API requests going out, not just what your workflow thinks it's sending. When a prompt chain fails intermittently, the issue is usually in the data hitting the wire.
We built [toran.sh](https://toran.sh) for exactly this - a transparent proxy that shows you real-time requests/responses to external APIs. Works well alongside tools like this for debugging when things don't behave as expected.
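The general pattern (not specific to toran.sh) is to point the LLM client at a proxy so the real wire traffic is visible. With the OpenAI Python SDK that is just an alternate base URL; the local address below is an assumption for illustration:

```python
from openai import OpenAI

# Assumption: a transparent proxy runs on localhost:8080, forwarding requests
# to api.openai.com while logging each request/response pair.
# OPENAI_API_KEY is read from the environment as usual.
client = OpenAI(base_url="http://localhost:8080/v1")

# This call now passes through the proxy, so you can inspect the exact payload
# the workflow step sends, not just what you think it sends.
response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "Summarize this support ticket..."}],
)
print(response.choices[0].message.content)
```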
Curious how you're handling observability for the API calls inside workflows?
@martysalade love the UI!! Do you plan to add operators other than LLMs? For example, the ability to manipulate an LLM's output before sending it to another one, for more deterministic results?
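For context, the kind of operator being asked about is typically a plain-code transform sitting between two LLM steps. A minimal sketch, assuming an OpenAI-backed workflow and illustrative field names:

```python
import json
from openai import OpenAI

client = OpenAI()

# Step 1: an LLM step asked to emit structured JSON.
raw = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": 'Return {"name": ..., "issue": ...} as JSON for: ...'}],
).choices[0].message.content

# Step 2: a deterministic, non-LLM operator: parse and normalize the output.
# A real pipeline would validate the schema and handle parse failures.
fields = json.loads(raw)
issue = fields["issue"].strip().lower()

# Step 3: the cleaned value feeds the next LLM step.
followup = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": f"Draft a reply about: {issue}"}],
).choices[0].message.content
print(followup)
```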