Deploy and scale your AI agents with Kodosumi, the open-source runtime built for developers. Fast, scalable, and fully free to use.
This is the 2nd launch from Kodosumi.
Kodosumi 1.0.0
The distributed runtime environment that manages and executes agentic services at enterprise scale.




Sokosumi
Congrats on the launch, team! 🚀 As developers tackle ever-more complex AI stacks, Kodosumi stands out for simplifying agent deployment and scaling. Can you share how your distributed runtime optimizes routing for different agentic services (LLMs, workflows, providers, etc.), and what efficiency gains you’ve seen for latency, cost, or throughput at scale?
Also curious about your foundation for stateful, memory-aware agents—how does Kodosumi’s approach to context management help agents maintain cross-session memory or support more advanced workflows?
@sneh_shah Kodosumi is framework-agnostic, so how you route your agentic service depends on the framework you choose. What Kodosumi brings to the table is start + stream + final-response control. Scaling builds on top of Ray as the means of distributed execution. The same goes for a cross-session memory store: this should ship with your agent framework and use in-memory or persistent stores, as ADK does with its session services, for example.
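The start + stream + final-response control flow described above can be sketched framework-free in plain Python. This is an illustrative toy, not Kodosumi's actual API or its Ray wiring: `run_agent`, `start`, and the event tuples are hypothetical names chosen for the example.

```python
import queue
import threading

def run_agent(prompt, events):
    """A toy 'agentic service': emits intermediate events,
    then a final response, onto a queue."""
    for step in range(3):
        events.put(("event", f"step {step}: working on {prompt!r}"))
    events.put(("final", f"done with {prompt!r}"))

def start(prompt):
    """start: launch the service in the background and hand
    back its event stream."""
    events = queue.Queue()
    threading.Thread(target=run_agent, args=(prompt, events)).start()
    return events

# stream: consume intermediate events as they arrive,
# stopping once the final response shows up.
stream = start("summarize report")
received = []
while True:
    kind, payload = stream.get()
    received.append((kind, payload))
    if kind == "final":
        break
```

In a distributed runtime, the background thread would be a task on a Ray cluster and the queue a streamed result channel, but the control shape (kick off, observe progress, collect the final answer) is the same.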
Kodosumi is an amazing product!
Heyho frens!
This is a big upgrade. We experience that daily: drafting an agent is one thing, but really putting it to work and integrating it into a company is a totally different issue. Ray clusters are magic, but with Kodosumi they become available to everyone, super fast and easy. This upgrade was desperately expected and brings Kodo to the latest state-of-the-a(i)rt. Thx so much, team.
Just imagine what v4.0 will look like!?