From the team behind Gatsby, Mastra is a framework for building AI-powered apps and agents with workflows, memory, streaming, evals, tracing, and Studio, an interactive UI for dev and testing. Start building: npm create mastra@latest
This is the 3rd launch from Mastra.

Mastra Code
Launching today
If you’ve used a coding agent, you know the pain. You’re deep into a feature and everything is clicking. Then the context window fills up, it compacts, and key details are lost.
Mastra Code is different. Powered by our state-of-the-art observational memory, it watches, reflects, and compresses context without losing important details.
The result: long-running coding sessions that stay precise, letting you build faster, merge sooner, and ship more.






Mastra
Paul from Mastra here, excited to launch Mastra Code 🎉
Mastra Code is a CLI-based coding agent built on the observational memory feature we shipped earlier this month. Our team uses it as their daily coding driver, and we think you’ll love it.
What problem does Mastra Code solve?
A lot of coding agents feel great at first, until the context window fills up. When this happens, the agent compacts past conversation into a summary, often dropping important details and forcing you to repeat work, re-explain context, or double-check what it actually remembers. That gets frustrating fast.
Mastra Code is different. It's powered by our state-of-the-art observational memory, which watches your conversations, generates observations, and reflects on them to compress context without losing important information.
The result is simple: long-running coding sessions that remember what matters so you can build faster, merge sooner, and ship more.
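Mastra hasn't published the internals of observational memory here, but the watch/observe/reflect loop described above can be sketched conceptually. Everything below is a hypothetical toy, not Mastra's actual API: the `ObservationalMemory` class, the keyword-based weighting heuristic, and all names are invented for illustration.

```typescript
// Toy sketch of an observe/reflect memory loop (hypothetical, not Mastra's API).
type Message = { role: "user" | "assistant"; content: string };
type Observation = { fact: string; weight: number };

class ObservationalMemory {
  private observations: Observation[] = [];

  // Watch each turn and record it as an observation. A real system would use a
  // model to extract durable facts; this sketch uses a naive keyword heuristic
  // to up-weight rare-but-critical constraints ("deprecated", "must", ...).
  observe(msg: Message): void {
    const critical = /\b(must|never|always|deprecated|decided)\b/i.test(msg.content);
    this.observations.push({ fact: msg.content, weight: critical ? 2 : 1 });
  }

  // Reflect: compress by keeping only the highest-weight observations, so the
  // context shrinks without dropping the one-off constraints that matter.
  reflect(maxFacts: number): string[] {
    return [...this.observations]
      .sort((a, b) => b.weight - a.weight)
      .slice(0, maxFacts)
      .map((o) => o.fact);
  }
}
```

The point of the sketch is the shape of the idea: compression is driven by inferred importance rather than recency, which is why a constraint mentioned once early in a session can survive many rounds of compression.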
Get started
Install mastracode globally with your package manager of choice:
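The launch page omits the actual commands. Assuming the package is published as mastracode (unverified; confirm against the official docs), a global install would look like:

```shell
# Package name "mastracode" is assumed -- check the Mastra docs for the exact name.
npm install -g mastracode
# or with pnpm / yarn:
pnpm add -g mastracode
yarn global add mastracode
```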
If you prefer not to install packages globally, you can use npx:
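Assuming the same mastracode package name as above, a one-off run without a global install would be:

```shell
# Runs the CLI without installing it globally (package name assumed).
npx mastracode
```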
What it's like to use
Mastra Code runs directly in your terminal, with no browser or IDE plugin required. With most coding agents, you spend time managing context windows, splitting work across threads, or saving notes before compaction hits. With Mastra Code, there is no compaction pause and no noticeable degradation, even in long-running sessions.
You can throw anything at it: planning, brainstorming, web or code research, writing features, and fixing bugs. Over time, memory fades into the background so you can focus on building.
After a few days, you realize you're no longer tracking context windows or restructuring work to avoid compaction. Observational memory quietly remembers what matters as sessions grow. Once you experience that, it is hard to go back.
We'd love your feedback. Drop questions below and we will be here answering all day.
"Never compacts" is the right framing — the real failure mode is mid-session context amnesia, not the obvious stuff. The hard part is that standard summarizers preserve what's recent or frequent, but the detail that derails a session is usually the rare constraint mentioned once: a deprecated API, a specific architectural decision made three hours ago. Does the observational memory layer use explicit signals (test failures, user corrections) to weight those rare-but-critical details, or is importance inference more heuristic?
wow this is just AWESOME!