There's a lot of noise about what's breaking in AI.
But here's something we don't celebrate enough:
Systems today fail less than they did even a few months ago.
Agents recover from interruptions. Workflows resume where they left off. Context carries more reliably across chains. Tooling ecosystems are maturing faster than anyone expected.
We've officially benchmarked Dropstone against the AGCI Benchmark, a framework designed to evaluate adaptive and long-term cognitive intelligence in artificial systems.
Unlike traditional reasoning tests, AGCI measures how well systems learn, remember, and reason across sessions, essentially assessing continuity, adaptability, and persistent understanding.
We built Dropstone because we were tired of starting from zero every time we opened an AI coding tool.
Dropstone learns, remembers, and evolves with your projects, building a persistent understanding of your codebase, architecture, and workflow. It's designed to grow with you, not reset after every session.
We've just launched it on Product Hunt and would love your thoughts on how memory should shape the next generation of developer tools. Your feedback will help us refine what's coming next.