Launched this week
Gradient Bang
Massively multiplayer game played by talking to an LLM
171 followers
Gradient Bang is a new kind of software: AI-native, built from the ground up to use LLMs everywhere. The game has a dynamic user interface driven by an LLM, conversational voice input, and to win you have to manage a fleet of AI subagents. You can even program your own subagents and run them in Vercel Sandboxes. Built with Pipecat, Daily WebRTC, Supabase, Vercel.
Daily.co
Gradient Bang is a massively multiplayer, completely LLM-driven game. Come play Gradient Bang with us. See if you can catch me on the leaderboard.
This whole thing started because I wanted to explore a bunch of things I’m currently obsessed with, in an application of non-trivial size, that felt both new and old at the same time.
So … a retro-style space trading game built entirely around interacting with and managing multiple LLMs. Factorio, but instead of clicking, you talk to your ship AI and figure out how to make money, make friends, and make havoc for your enemies.
Some of the things we’ve been thinking about as we hack on Gradient Bang:
- Sub-agent orchestration
- Managing very, very, very long LLM contexts, including episodic memory across user sessions
- World events and large volumes of structured data input as part of human/agent conversations
- Dynamic user interfaces, driven/created on the fly by LLMs
- And, of course, voice as primary input
If you’ve been building coding harnesses, or writing OpenClaw agents, or doing pretty much anything that pushes the boundaries of AI-native development these days, you’re probably thinking about these things too!
The game is entirely open source. So if you want to see how we built it, you can clone the repo and start asking Claude/Codex about the code. If you want to add a feature, submit a PR.
New today: design your own corp ship agents, run them in a Vercel Sandbox, and bring them into the game. Think you can make your pair trading loops faster? That's going to give you a pretty big advantage in the game. Want to run with unlimited corp ship compute using open source models? You can do that now!
See the Vercel Sandbox subagents starter repo here: https://github.com/pipecat-ai/gradient-bang/tree/main/deployment/vercel
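At its simplest, a sandboxed corp ship agent is a poll/plan/act loop. The sketch below is purely illustrative: `plan_action` stands in for an LLM call (or any policy you like), and `get_state`/`submit_action` are hypothetical stand-ins for the game API, not actual Gradient Bang endpoints.

```python
# Hypothetical corp ship agent loop. All names here are illustrative
# assumptions, not the real Gradient Bang API.

def plan_action(state):
    """Stand-in for an LLM call that picks the next move: a naive
    buy-low / sell-high policy over the current port."""
    if state["cargo"] == 0 and state["port_price"] < state["avg_price"]:
        return {"type": "buy", "qty": state["holds"]}
    if state["cargo"] > 0 and state["port_price"] > state["avg_price"]:
        return {"type": "sell", "qty": state["cargo"]}
    return {"type": "move", "to": state["nearest_port"]}

def run_agent(get_state, submit_action, max_ticks=100):
    """Poll game state, pick an action, submit it, repeat until done."""
    for _ in range(max_ticks):
        state = get_state()
        if state.get("done"):
            break
        submit_action(plan_action(state))
```

The point of the sandbox model is that this loop is yours: tighten the policy, swap in a different model, or run it against open source inference you host yourself.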
@kwindla was chasing that leaderboard for a bit, feeling guilty about token/electricity usage - I was "strong" :) I use games as an ADHD harness so I can stay focused on one 'work task' at a time, and this was useful for that for a bit, thank you! It also has changed the way I think about designing similar systems, so thank you again for that! Definitely a game every developer and GTM specialist should play today to understand how they can use AI to create autonomous systems to grow their company.
Daily.co
@seth_caldwell I think of this game as a canvas for exploring the things we'll have to figure out as we evolve our software to use LLMs pervasively. As you say, there are a ton of interesting systems design problems to solve. For example, subagents mean thinking through a lot of traditional distributed systems problems, but in new ways, because sharing context between multiple LLMs is a new challenge. And for dynamic UI generation, I think it's pretty clear we'll need new front-end toolkits, in addition to new ways to think about prompting and training LLMs.
Product Hunt
Daily.co
@curiouskitty This is such a great question. In Gradient Bang there are two layers:
1. Everything you do in the game happens through talking to an LLM, or an LLM giving a "task" to another LLM.
2. Every action the LLMs take in the game runs through a traditional, deterministic game server. That's built on @Supabase: all game state is stored in the database, and edge functions do all the kinds of locking/etc that you do in a game server codebase.
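The deterministic layer boils down to: every LLM-proposed action is validated against current state under a lock, so concurrent agents can't double-spend or race each other. Here's a toy sketch of that idea in Python; the real game uses Supabase edge functions and Postgres, not an in-memory dict, and these names are illustrative.

```python
# Toy sketch of a deterministic action handler. Illustrative only: the
# actual game server is built on Supabase edge functions and Postgres.

import threading

class GameServer:
    def __init__(self, credits):
        self.lock = threading.Lock()
        self.credits = dict(credits)  # player -> balance

    def buy(self, player, cost):
        """Atomically validate and apply a purchase. Invalid actions are
        rejected with a reason (which can be fed back to the LLM), not
        raised as errors."""
        with self.lock:
            if self.credits.get(player, 0) < cost:
                return {"ok": False, "reason": "insufficient credits"}
            self.credits[player] -= cost
            return {"ok": True, "balance": self.credits[player]}
```

In the real system the lock is the database's job (row locks inside an edge function), but the shape is the same: the LLM proposes, the deterministic server disposes.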
I will say that as we've rewritten the Gradient Bang codebase a few times, a pretty consistent pattern is that we've deleted "traditional" code and replaced it with LLM inference. For example, there's now very little traditional error handling in the core game code. Mostly, errors just get passed back into the LLM and we ask the LLM to figure out what to do from context.
A multiplayer game driven by LLM prompts sounds like absolute chaos in the best way. How do you handle the latency issues that usually come with real-time LLM interactions?
Daily.co
@rivra_dev I'm so glad you asked. The entire game is built on Pipecat, the open source framework for realtime AI. Pipecat is the most widely used library for building voice agents and realtime video avatars.
We use models that are very low-latency. The game supports a number of options for models, but the current public game server is using Deepgram for speech-to-text and Gradium for text-to-speech.
We also built a new Pipecat library for the long-running subagents that need to share context with each other and with the voice agent, called Pipecat Subagents. But this library code has turned out to be so useful that we're working on integrating it into Pipecat core directly.
I wrote a long guide to building voice agents, which covers a lot of the "hard parts" about latency, interruption handling, context management, etc: https://voiceaiandvoiceagents.com/
@rivra_dev @kwindla Yes, one of my current challenges is deciding if a voice input is a "command" or a "question" to the agent, and establishing which "state" we are in as a result. The architectural question is when to go with LLMs that are voice in/out directly, and when to use STT/TTS. I'll check out your doc!
Daily.co
@kr1v Yes, definitely. The biggest challenges are:
1. The core architecture for running lots of LLM inference loops in parallel while partially sharing context between the "subagents".
2. Designing a game that strikes the right balance between the LLMs doing whatever they do and gameplay being reliable. We designed a bit of LLM unpredictability into the game: the expectation is that your ship AI is not perfect. Sometimes it does great, sometimes it makes mistakes. You, the player, are supposed to manage that and learn how to give it tasks. But we definitely don't have this balance right yet.
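One way to read "partially sharing context": each subagent keeps a private message history, plus a shared append-only event log that gets merged into every agent's prompt. This is an illustrative sketch of that idea, not the Pipecat Subagents implementation; all class and method names are assumptions.

```python
# Illustrative partial-context-sharing sketch: private history per agent,
# plus a shared event log visible to all agents. Not the Pipecat
# Subagents API.

class SharedLog:
    """Append-only event log visible to every subagent."""
    def __init__(self):
        self.events = []

    def post(self, author, text):
        self.events.append(f"[{author}] {text}")

class Subagent:
    def __init__(self, name, shared):
        self.name = name
        self.shared = shared
        self.private = []  # context only this agent sees

    def observe(self, text):
        self.private.append(text)

    def announce(self, text):
        self.shared.post(self.name, text)

    def build_prompt(self):
        # Shared events first, then this agent's own history.
        return self.shared.events + self.private
```

The design question is where to draw the line: everything shared blows up every agent's context window, nothing shared means agents can't coordinate.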
Looks cool! Love that players can ship their own subagents into sandboxes. Curious how you keep the game balanced when some players can write tighter loops or throw way more compute at their corp ships than others?
Daily.co
@louislecat The real answer is that we don't know. Everything about this game is an experiment! You can already automate the game with Claude Code or Codex or OpenClaw. So the "humans vs agents" thing is already a live question. There have been a few flame wars about that in the Discord channel.
Cekura
love this! Do you have future plans of launching more such games?
Daily.co
@shashij_gupta No! But, then again, we didn't really plan to build this one. It was just a side project that turned out to be so much fun that we kept working on it, and other people got excited about it too.
What do you think? Should we all start an open source games project, together, and build a bunch more of these things? :-)
Daily.co
Gradient Bang is built on Pipecat, the leading open-source Python framework for building real-time voice and multimodal agents. We're hosting a hackathon on May 30th at YC along with our friends at Cekura, NVIDIA, AWS, and Twilio. Come join us!
https://events.ycombinator.com/HW0opxy78