Pieces for Developers
On-device AI development assistant for your entire workflow
4.7•35 reviews•2.5K followers
Pieces™ is the first AI-powered Long-Term Memory for developers, capturing everything you work on across desktop apps, IDEs, browsers, and terminals. It remembers code, notes, links, and conversations for 9+ months, making past work instantly searchable and accessible. With Workstream Activity, instant recall, and deep linking, Pieces keeps you in flow.
This is the 4th launch from Pieces for Developers.

Pieces Long-Term Memory Agent
Ever wish you had an AI tool that remembered what you worked on, with who, and when across your entire desktop? Pieces Long-Term Memory Agent captures, preserves, and resurfaces historical workflow details, so you can pick up where you left off.
Pieces for Developers
Definitely a new innovation in the AI space, where everyone is just chasing raw compute and reasoning but no one is catering to what would truly make AI personalised for the end user: the context. All the best for the launch.
Pieces for Developers
@nikhil_l Personalized AI is the logical next step with LLMs - Pieces Long-Term Memory makes that happen!
You'd never hire a personal assistant who woke up every morning with amnesia! AI shouldn't be any different.
Thanks for the support and hope you're loving LTM-2!!
Pieces for Developers
Exciting times for Pieces! 🚀 The team is building a game-changer with LTM-2, redefining AI memory.
Pieces for Developers
@hanna_stechenko2 This is probably the coolest update we have had yet! 🥳
Congrats on the launch! Curious about how you see recent developments in MCP impacting or complementing the offering. Seems like things are heading in that direction and could open up a lot of opportunities for Pieces.
Pieces for Developers
Thanks @steve_caldwell2, sorry, I wasn't able to catch what MCP is. Can you shed more light on what MCP means? Happy to answer your question.
@steve_caldwell2 @ialimustufa MCP = Model Context Protocol (https://www.anthropic.com/news/model-context-protocol), a standard for how AIs can fetch data. It's good, but not comparable to Pieces.
@ialimustufa @henry_rausch Thanks for providing the additional context there Henry. I'm not sure I'd call it "comparable" to Pieces, but it certainly seems like a protocol that Pieces could leverage to quickly connect to many more data sources. It's kind of the hot girl at the AI context dance right now.
Pieces for Developers
@henry_rausch @steve_caldwell2 Great question Steve! We don't currently use MCP, but we're excited about any standardization of data sources. With LTM-2 we're currently focused more on the contextual data you're explicitly working with, but in the future we definitely plan to augment the context for a query with these external data sources if we feel it will provide a more useful answer!
Pieces for Developers
@henry_rausch @steve_caldwell2 @ialimustufa Funnily enough, I've been playing with MCP recently as a side project. We have a Python SDK, so it wouldn't be too hard to implement this yourself for now using it alongside the MCP Python SDK.
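For anyone who wants to experiment before an official integration lands, here is a rough sketch of how that glue could look: exposing a memory search as an MCP tool via the FastMCP helper in recent versions of the MCP Python SDK. The search_pieces_memories function is a placeholder for whatever call you would actually make through the Pieces Python SDK; only the MCP plumbing reflects the real package.

```python
# Hypothetical sketch: expose a Pieces-style memory search as an MCP tool.
# The FastMCP usage matches recent versions of the `mcp` Python SDK; the
# Pieces lookup below is a stub; replace it with a real Pieces Python SDK call.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("pieces-ltm")


def search_pieces_memories(query: str, limit: int = 5) -> list[str]:
    """Placeholder for a Pieces SDK call that searches stored workflow memories."""
    return [f"(stub) memory matching '{query}' #{i + 1}" for i in range(limit)]


@mcp.tool()
def recall(query: str, limit: int = 5) -> str:
    """Search long-term workflow memory and return the top matches as text."""
    hits = search_pieces_memories(query, limit)
    return "\n".join(hits) if hits else "No matching memories found."


if __name__ == "__main__":
    # Serves over stdio so MCP-aware clients (e.g. Claude Desktop) can connect.
    mcp.run()
```

Point an MCP-aware client at a script like this and it can call the recall tool whenever it needs historical context.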
Pieces for Developers
If your current AI assistant were a real person, you'd FIRE them.
No idea what you worked on yesterday
Makes you manually give them all your information
And even forgets your name!!!
Cutting-edge LLMs (as great as they are) have the memory of a goldfish. 🐟
Pieces is the first AI that remembers EVERYTHING you do.
"Who asked me about that API bug last quarter and how did we solve it?" - is a question that would make ChatGPT break down into tears. Pieces can answer it, show you the links you clicked, find emails where you talked about it, and summarizes the entire thing so you can jump right back into your work with ZERO context swtiching.
Stop wasting time using assistants that don't grow with you.
All you context, all your memories, all your AI models. - All in one place.
Pieces for Developers
@jackross Exactly! LTM-2 is the major upgrade in AI assistants that we have all been waiting for. Thank you for the support Jack!
Pieces for Developers
Can’t wait to integrate Workstream Activities into my workflow! I’ll never need to use the ChatGPT interface again 😎
Pieces for Developers
@sam_parks_at_pieces Workstream Activities are a game-changer for sure 😎
MGX (MetaGPT X)
Really intrigued by Pieces' approach to long-term memory! While the memory chunking system looks promising, I'm curious about how you handle memory contamination issues. When multiple conversations or contexts overlap, how do you maintain clarity and prevent incorrect information bleed?
Also wondering about the memory cleanup process - is there a way to identify and remove potentially contaminated or outdated memory blocks? Would love to hear more about your solution to these challenges, as memory pollution has been a significant hurdle in long-term memory implementations.
Pieces for Developers
@zongze_x Thanks for the great technical question; you are spot on. The issue of memory contamination is complex and a core challenge in designing features like this one. I suspect you can appreciate that writing a full answer here is tough, but it would make an excellent topic for a technical article (watch this space). At a high level, our approach to identifying and minimizing contamination happens at three levels:
On entry: we are very selective about what is added to the LTM. By analysing where the user's focus is and how what they are focusing on currently relates to the big picture of their workstream, we can prevent a lot of corruption at the source.
On roll-up: when we roll up memories into periodic summaries, our agent looks for narratives and themes across workflow elements. When we find contradictions, we resolve them by comparing those narratives to cut out random chatter and keep the focus on core tasks.
At query time: when you interact with your workstream data, through the copilot or the summaries, those interactions are used to infer which aspects are useful and truthful and which are not, allowing us to elevate quality information while demoting the noise.
Additionally, signals from all of these levels are used to periodically clean contamination from your stored memories. It's a work in progress but I have found the LTM to be much more resistant to context corruption than other solutions out there.
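To make those three levels a little more concrete, here is a deliberately simplified toy sketch of that kind of pipeline: entry filtering, roll-up pruning, and query-time reranking from feedback. Every name, score, and threshold is illustrative only; this is not Pieces' actual implementation.

```python
# Toy illustration of a three-stage memory-hygiene pipeline.
# All names, scores, and thresholds are made up for illustration.
from dataclasses import dataclass, field


@dataclass
class Memory:
    text: str
    relevance: float       # how related the capture was to the active task (0..1)
    quality: float = 0.5   # nudged up/down by feedback at query time
    feedback: list[int] = field(default_factory=list)


class MemoryStore:
    def __init__(self, entry_threshold: float = 0.4):
        self.entry_threshold = entry_threshold
        self.memories: list[Memory] = []

    # Level 1, on entry: only keep captures that relate to the current focus.
    def ingest(self, text: str, relevance: float) -> bool:
        if relevance < self.entry_threshold:
            return False    # filtered out at the source
        self.memories.append(Memory(text, relevance))
        return True

    # Level 2, on roll-up: keep the higher-relevance half as the period's
    # narrative and drop the lower-relevance chatter.
    def roll_up(self) -> str:
        self.memories.sort(key=lambda m: m.relevance, reverse=True)
        self.memories = self.memories[: max(1, len(self.memories) // 2)]
        return " | ".join(m.text for m in self.memories)

    # Level 3, at query time: rank by relevance weighted by learned quality.
    def query(self, top_k: int = 3) -> list[Memory]:
        ranked = sorted(self.memories, key=lambda m: m.relevance * m.quality, reverse=True)
        return ranked[:top_k]

    # Interaction signals feed back into quality, demoting noisy memories over time.
    def record_feedback(self, memory: Memory, helpful: bool) -> None:
        memory.feedback.append(1 if helpful else -1)
        memory.quality = min(1.0, max(0.0, memory.quality + (0.1 if helpful else -0.1)))
```

The real signals are obviously far richer, but the shape is the same: filter early, consolidate periodically, and let usage re-rank what surfaces.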
Nonilion
Hey 👋 super cool launch. It's a beautiful coincidence: I just finished a paper on long-term memory as a weight in a new form of transformer architecture. The paper is still in review, but your launch is fun and practical.
All the best 👍
Pieces for Developers
@themisty Hey Krishna! Thank you for checking out the launch! Your paper sounds really interesting, I'm sure a lot of people from my team would enjoy reading it. Where will it be published?
Nonilion
@elliezub Dear Ellie, I am excited to have readers interested already :) it will be on arXiv, fingers crossed. Still under heavy review lol.
Pieces for Developers
@themisty Sounds great! Looks like we are connected on Linkedin now, so hopefully you will post about it once it's published. Can't wait to read it!
Pieces for Developers
@themisty Can't wait to read your paper Krishna! Thanks for the support as always!!
Nonilion
@elliezub Thanks, will definitely ping the team. In the meantime, feel free to check out my product, Nonilion; I wouldn't mind some solid feedback 🙌