Launching today
Gradient Bang
Massively multiplayer game played by talking to an LLM
Gradient Bang is a new kind of software: AI-native, built from the ground up to use LLMs everywhere. The game has a dynamic user interface driven by an LLM, conversational voice input, and to win you have to manage a fleet of AI subagents. You can even program your own subagents and run them in Vercel Sandboxes. Built with Pipecat, Daily WebRTC, Supabase, Vercel.
Daily.co
Gradient Bang is a massively multiplayer, completely LLM-driven game. Come play Gradient Bang with us. See if you can catch me on the leaderboard.
This whole thing started because I wanted to explore a bunch of things I’m currently obsessed with, in an application of non-trivial size, that felt both new and old at the same time.
So … a retro-style space trading game built entirely around interacting with and managing multiple LLMs. Factorio, but instead of clicking, you talk to your ship AI and figure out how to make money, make friends, and make havoc for your enemies.
Some of the things we’ve been thinking about as we hack on Gradient Bang:
- Sub-agent orchestration
- Managing very, very, very long LLM contexts, including episodic memory across user sessions
- World events and large volumes of structured data input as part of human/agent conversations
- Dynamic user interfaces, driven/created on the fly by LLMs
- And, of course, voice as primary input
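To make the first two themes concrete, here is a minimal, illustrative sketch (not Gradient Bang's actual code; all names are made up) of one common pattern for episodic memory across sessions: store a short summary per session, then retrieve only the most relevant summaries by word overlap so the live LLM context stays small.

```python
# Illustrative sketch only -- not Gradient Bang's implementation.
# Pattern: keep one short summary per past session, and before a new
# session recall only the episodes relevant to the current topic.

class EpisodicMemory:
    def __init__(self):
        self.episodes = []  # list of (session_id, summary) tuples

    def record(self, session_id, summary):
        self.episodes.append((session_id, summary))

    def recall(self, query, k=2):
        """Return up to k summaries sharing the most words with the query."""
        query_words = set(query.lower().split())
        scored = [
            (len(query_words & set(summary.lower().split())), summary)
            for _, summary in self.episodes
        ]
        scored.sort(key=lambda pair: pair[0], reverse=True)
        return [summary for score, summary in scored[:k] if score > 0]


memory = EpisodicMemory()
memory.record("s1", "player traded ore at station Kepler for 400 credits")
memory.record("s2", "player allied with the Nova corporation")
memory.record("s3", "enemy fleet spotted near the Kepler gate")

# Only the Kepler-related episodes make it into the new session's context.
context = memory.recall("what happened at Kepler station")
```

A real system would summarize with an LLM and retrieve with embeddings rather than word overlap, but the shape (compact episodes in, selective recall out) is the same.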
If you’ve been building coding harnesses, or writing Open Claw agents, or doing pretty much anything that pushes the boundaries of AI-native development these days, you’re probably thinking about these things too!
The game is entirely open source. So if you want to see how we built it, you can clone the repo and start asking Claude/Codex about the code. If you want to add a feature, submit a PR.
New today: design your own corporation ship agents, run them in a Vercel Sandbox, and bring them into the game. Think you can make your pair trading loops faster? That's going to give you a pretty big advantage in the game. Want to run with unlimited corp ship compute using open source models? You can do that now!
See the Vercel Sandbox subagents starter repo here: https://github.com/pipecat-ai/gradient-bang/tree/main/deployment/vercel
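The starter repo above defines the real agent interface; purely as an illustration of the kind of policy logic a custom corp ship agent might run, here is a toy pair-trading decision function (the field names and the plan format are invented for this sketch):

```python
# Toy corp-ship trading policy. The data shapes here are made up for
# illustration -- see the starter repo for the actual agent interface.

def pick_trade(prices_by_station, cargo_options):
    """Pick the buy-low/sell-high commodity pair with the widest spread.

    prices_by_station: {station: {commodity: price}}
    cargo_options: commodities the ship is able to carry
    """
    best = None
    for commodity in cargo_options:
        quotes = [
            (prices[commodity], station)
            for station, prices in prices_by_station.items()
            if commodity in prices
        ]
        if len(quotes) < 2:
            continue  # need two stations to make a pair trade
        low_price, buy_at = min(quotes)
        high_price, sell_at = max(quotes)
        spread = high_price - low_price
        if best is None or spread > best["spread"]:
            best = {"commodity": commodity, "buy_at": buy_at,
                    "sell_at": sell_at, "spread": spread}
    return best


prices = {
    "Kepler": {"ore": 10, "fuel": 30},
    "Vega": {"ore": 25, "fuel": 28},
}
plan = pick_trade(prices, ["ore", "fuel"])  # ore: buy Kepler, sell Vega
```

The point of the bring-your-own-agent feature is exactly this: the faster and smarter you make this loop, the better your corp performs.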
A multiplayer game driven by LLM prompts sounds like absolute chaos in the best way. How do you handle the latency issues that usually come with real-time LLM interactions?
Daily.co
@rivra_dev I'm so glad you asked. The entire game is built on Pipecat, the open source framework for realtime AI. Pipecat is the most widely used library for building voice agents and realtime video avatars.
We use very low-latency models. The game supports a number of model options, but the current public game server uses Deepgram for speech-to-text and Gradium for text-to-speech.
We also built a new Pipecat library for the long-running subagents that need to share context with each other and with the voice agent, called Pipecat Subagents. But this library code has turned out to be so useful that we're working on integrating it into Pipecat core directly.
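Pipecat Subagents itself lives in the repo; as a rough sketch of the underlying pattern (independent of the actual Pipecat API, with invented names), the idea is that long-running subagents and the voice agent all read from and append to one shared conversation log:

```python
# Rough sketch of the shared-context pattern -- NOT the Pipecat
# Subagents API. Several agents append to a single shared log, so the
# voice agent's next prompt includes what the subagents have done.

class SharedContext:
    def __init__(self):
        self.messages = []

    def append(self, agent, content):
        self.messages.append({"agent": agent, "content": content})

    def view(self):
        # Every agent builds its LLM prompt from the same shared history.
        return [f'{m["agent"]}: {m["content"]}' for m in self.messages]


ctx = SharedContext()
ctx.append("voice-agent", "scout the Vega system")
ctx.append("scout-ship", "arrived at Vega; two hostile ships detected")
# The voice agent can now answer the player using the scout's report.
```

The real library has to handle concurrency, context-window budgets, and per-agent views, which is where most of the interesting work is.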
I wrote a long guide to building voice agents, which covers a lot of the "hard parts" about latency, interruption handling, context management, etc: https://voiceaiandvoiceagents.com/
Launching together today makes Product Hunt even more exciting. Love the product; it really caught my attention.
Cheering for fellow makers today, and would love to support each other. Wishing you a fantastic launch 🚀
Daily.co
@erenasiroglu Love it!