p/tollecode
by
Bernard Ossei-Agyei
Agents can now remember your workspace context, acting like a persistent brain for your project, which makes long-running engineering tasks, multi-step workflows, and delegated agent work far more consistent.
Tollecode is a local-first AI coding assistant built for real engineering work:
File reads
Shell commands
Sub-agents
Full control over execution
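The post doesn't show how Tollecode's workspace memory works internally, but the idea of a persistent per-project "brain" can be sketched in a few lines. This is a toy illustration, not Tollecode's actual implementation; the class name, file name, and keys are all invented:

```python
import json
from pathlib import Path

class WorkspaceMemory:
    """Toy sketch of a persistent per-project context store:
    facts survive across sessions because they live on disk."""

    def __init__(self, path=".agent_memory.json"):
        self.path = Path(path)
        self.facts = json.loads(self.path.read_text()) if self.path.exists() else {}

    def remember(self, key, value):
        self.facts[key] = value
        self.path.write_text(json.dumps(self.facts, indent=2))

    def recall(self, key, default=None):
        return self.facts.get(key, default)

# A fresh "session" rereads the same file, so context persists.
mem = WorkspaceMemory("/tmp/demo_memory.json")
mem.remember("build_cmd", "cargo test --workspace")
mem2 = WorkspaceMemory("/tmp/demo_memory.json")
print(mem2.recall("build_cmd"))
```

The point is only that memory outlives the process: a second agent (or sub-agent) constructing the store sees what the first one wrote.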
p/introduce-yourself
Sourav Das
Hey Product Hunt!
I'm Sourav, a builder from India. I spent 6 years building AI products for others. Memoair is my first startup.
p/mnexium-ai
marius ndini
We just shipped multi-provider support in @Mnexium AI so you can change LLMs without resetting conversations, user context, or memories.
When teams switch providers, they usually lose everything: conversation history, user context, and accumulated memories.
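One common way to avoid that loss (not necessarily how Mnexium does it) is to keep a single provider-neutral transcript and adapt it to each provider's wire format at call time. The adapter shapes below follow the publicly documented OpenAI and Gemini chat payloads; the transcript contents are made up:

```python
# Keep one neutral transcript; adapt it to each provider's wire
# format at call time, so switching LLMs never resets the thread.
transcript = [
    ("user", "Summarize our last decision."),
    ("assistant", "You chose PostgreSQL over MongoDB."),
]

def as_openai(turns):
    # OpenAI-style chat payload: a list of role/content dicts.
    return [{"role": role, "content": text} for role, text in turns]

def as_gemini(turns):
    # Gemini-style payload: the assistant role is called "model"
    # and text lives under "parts".
    role_map = {"user": "user", "assistant": "model"}
    return [{"role": role_map[role], "parts": [{"text": text}]}
            for role, text in turns]

print(as_openai(transcript)[1]["role"])   # assistant
print(as_gemini(transcript)[1]["role"])   # model
```

Because the transcript itself never changes shape, user context and memories can live alongside it in the same neutral store.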
p/blocpad
Mihir Kanzariya
If you use AI dev tools daily, you've probably felt this:
You start a new session and immediately have to re-explain:
what the project is
what you already tried
why certain decisions exist
what not to repeat
Not because the AI is bad. Because the workflow forgets.
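Those four things you keep re-explaining can live in a small structured file that gets prepended to every session. A minimal sketch, assuming nothing about Blocpad's actual format (the file shape and field names here are invented):

```python
# Keep the answers to the four recurring questions in one place
# and turn them into a session preamble instead of retyping them.
PROJECT_BRIEF = {
    "what": "CLI that syncs notes to S3",
    "tried": ["rsync wrapper (too slow)", "inotify watcher"],
    "decisions": {"language": "Go, for single-binary deploys"},
    "avoid": ["polling every second"],
}

def session_preamble(brief):
    lines = [f"Project: {brief['what']}"]
    lines += [f"Already tried: {t}" for t in brief["tried"]]
    lines += [f"Decision: {k} -> {v}" for k, v in brief["decisions"].items()]
    lines += [f"Do not repeat: {a}" for a in brief["avoid"]]
    return "\n".join(lines)

print(session_preamble(PROJECT_BRIEF))
```

Prepending this text to a fresh session is crude, but it makes the workflow, rather than the human, the thing that remembers.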
p/ai-context-flow
Yuze
We launched, we won, we almost lost ourselves. This is the honest story of building AI Context Flow after the spotlight faded.
Six months ago, we launched on Product Hunt. We were excited and nervous in equal measure, not sure what a community that has seen everything, every single day, would think of our product. Now, looking back, six months felt both incredibly long and impossibly short. A lot happened. Just consider how many AI launches have come and gone in that time.
p/pretty-prompt
Ilai Szpiezak
Not all context is equal.
So we stopped treating it that way.
Most AI tools store everything the same:
Writing rules
Personal background
Preferences
Temporary notes
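Treating those context types differently might look like tagging each memory with a tier and giving tiers different retention rules. This is only an illustration of the idea, not Pretty Prompt's design; the tier names mirror the list above and the TTL value is made up:

```python
# Sketch: tag each memory with a tier so retention can differ.
# A TTL of None means the entry never expires.
TIERS = {
    "writing_rules": None,
    "background": None,
    "preferences": None,
    "temp_note": 3600,  # seconds; discarded after an hour
}

def is_live(entry, now):
    ttl = TIERS[entry["tier"]]
    return ttl is None or now - entry["created"] < ttl

store = [
    {"tier": "writing_rules", "text": "Use Oxford commas", "created": 0},
    {"tier": "temp_note", "text": "Fix typo in intro", "created": 0},
]
now = 7200  # two "hours" after creation
live = [e["text"] for e in store if is_live(e, now)]
print(live)  # ['Use Oxford commas']
```

The rule persists forever; the temporary note quietly ages out, which is exactly the "not all context is equal" point.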
p/dropstone-2
Santosh Arron
Every AI coding tool today suffers from the same flaw: amnesia. It forgets what you've done, what you're building, and how you think.
hira siddiqui
On November 25, AI Context Flow was #1 Product of the Day and #1 Productivity Tool of the Week. It was surreal.
Since then, we have been building in public, together with this amazing community here.
You believed in this before it was polished. You gave us feedback when it was rough. You kept asking for more and that pushed us to build more, and we delivered more.
p/general
Nika
In a discussion forum with @monatruong_murror, we talked about how AI can help us learn things that aren't naturally familiar to us, like programming.
The biggest challenge was, and still is: getting AI to guide you toward a solution, instead of just giving you the answer.
Kyan
Hey makers!
Lately, I've been looking closely at how independent builders and small teams are managing AI knowledge bases. It feels like the default "industry standard" is to immediately reach for a complex RAG pipeline and a heavy, paid Vector Database.
But I'm starting to wonder if we are over-engineering this for 90% of standard use cases.
Vector DBs are incredibly powerful for massive scale, but for smaller or non-massive datasets, they can be expensive, complex to query, and act as complete black boxes. If a search returns a weird chunk, diagnosing it is often a nightmare.
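For the "90% of standard use cases" point, it's worth showing how far plain lexical search gets you on a small corpus. A minimal stdlib-only sketch with tf-idf-style weighting, where every score can be printed and inspected (the documents are invented examples):

```python
import math
import re
from collections import Counter

# Minimal lexical search over a small doc set: no embeddings,
# no vector DB, and every score is a number you can inspect.
docs = {
    "deploy.md": "how to deploy the api server with docker",
    "auth.md": "password reset and session token rotation",
    "billing.md": "stripe webhooks and invoice retries",
}

def tokens(text):
    return re.findall(r"[a-z]+", text.lower())

def score(query, doc, corpus):
    q, d = tokens(query), Counter(tokens(doc))
    n = len(corpus)
    s = 0.0
    for term in q:
        df = sum(1 for body in corpus.values() if term in tokens(body))
        if df:
            s += d[term] * math.log(1 + n / df)  # tf-idf-style weight
    return s

def search(query, corpus):
    # Return the best-matching document key.
    return max(corpus, key=lambda k: score(query, corpus[k], corpus))

print(search("docker deploy", docs))  # deploy.md
```

If this returns a weird result, you can print the per-term scores and see exactly why, which is the "no black box" property the post is asking for. Vector search earns its complexity when queries and documents share meaning but not vocabulary.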
One of the biggest pain points in AI chatbots has been their forgetfulness: having to repeat the same context over and over again. AI memory aims to solve this by allowing models like ChatGPT and the newly launched Gemini to retain past interactions.
But how well do these memory features work? Which AI, ChatGPT or Gemini, handles memory better? And more importantly, does AI memory provide more value in personal use or enterprise settings?
Yesterday, Meta announced the release of Llama 4, a new collection of AI models in its Llama family.
(It consists of Llama 4 Scout, Llama 4 Maverick, and Llama 4 Behemoth.)
Historically, OpenAI's ChatGPT has been on the market the longest.
Gabe Perez
People are talking about MCP so much, and it feels like a secret hack to unlock AI/LLM capabilities and make them do more with other tools and software. Can anyone help me explain MCP to my mom in as few words as possible? Preferably avoiding saying "Model Context Protocol".
p/vibecoding
Hassan Jahan
I keep seeing advice like "use this model for the easy stuff and that one for complex problems." But it makes me wonder: what really counts as a complex problem for an LLM?
For us, complex usually means lots of steps, deep reasoning, or tricky knowledge. But for AI, the definition might be different. Some things that feel easy for us can be surprisingly hard for models, while things that seem tough for us (like scanning huge datasets quickly) might be trivial for them.
Ilia Pluzhnikov
I've been exploring MCP, an open standard from @Anthropic that aims to simplify AI integrations.
In theory, this should make it easier to connect AI with databases, task managers, or even development tools. But I'm curious to know how well it actually works in practice.
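The core mechanic is small: a server advertises typed tools and a client calls them by name over JSON-RPC messages (MCP's spec uses methods like `tools/list` and `tools/call`). Here is a rough sketch of that shape without the real SDK; the tool names and handlers are invented examples, and real MCP adds schemas, transports, and error handling on top:

```python
# Sketch of the idea behind an MCP-style tool server: a registry
# of named tools, plus a dispatcher for list/call requests.
TOOLS = {
    "query_db": {
        "description": "Run a read-only SQL query",
        "handler": lambda args: f"rows for: {args['sql']}",
    },
    "create_task": {
        "description": "Add a task to the tracker",
        "handler": lambda args: f"created task '{args['title']}'",
    },
}

def handle(request):
    # Dispatch a JSON-RPC-style request dict to the right tool.
    if request["method"] == "tools/list":
        return sorted(TOOLS)
    if request["method"] == "tools/call":
        tool = TOOLS[request["params"]["name"]]
        return tool["handler"](request["params"]["arguments"])

print(handle({"method": "tools/list"}))
print(handle({"method": "tools/call",
              "params": {"name": "create_task",
                         "arguments": {"title": "demo"}}}))
```

The appeal is that the AI side only needs to speak this one protocol; the database, task manager, or dev tool lives behind the handler.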
p/flamme-ai
An 🪐
When we launched Flamme back in 2022, the vision was simple:
Help couples stay and grow in love every day.
p/cursor
Tijs Teulings
I'm fascinated by the ability to extend what Cursor can do with MCP features but there are so many out there, with some of questionable pedigree, that I'm having a hard time finding the gems.
I've tried a few, but so far I've only gotten good usage from the Think tool, which allows Cursor to basically jot down notes on its process that it can then refer to later, theoretically allowing more context than just the context window: https://github.com/DannyMac180/m... Since I've installed it, Cursor seems to use it a lot, but it's hard to gauge how much it helps in practice. I'm glad the AI likes it though :)
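The mechanism behind a "think" tool like that is tiny: the model writes notes outside the context window and rereads them later. A toy sketch of the idea, assuming nothing about the linked tool's actual interface (class and method names are invented):

```python
# Rough sketch of a scratchpad tool: the model appends notes as
# it works and can search them back later, outside the context
# window that would otherwise be its only memory.
class Scratchpad:
    def __init__(self):
        self.notes = []

    def think(self, note):
        self.notes.append(note)
        return "noted"

    def recall(self, keyword=""):
        # Empty keyword returns everything.
        return [n for n in self.notes if keyword in n]

pad = Scratchpad()
pad.think("user prefers tabs over spaces")
pad.think("tests live in tests/unit")
print(pad.recall("tests"))  # ['tests live in tests/unit']
```

Exposed as an MCP tool, both `think` and `recall` become calls the assistant can choose to make, which matches the "jot down notes on its process" behavior described above.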