
LLM Memory with RAG... what's your take?
Trying to solve a problem: LLMs forget everything between conversations. I keep re-researching the same API docs, competitor info, and company knowledge. Tokens add up... Time wasted... What are people actually using? I see a few approaches:

- RAG setups
- Vector databases
- Just saving to Notion/Obsidian manually

We built something that lets Claude save research to collections, then query them...
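For anyone weighing the RAG / vector database options above, the "save research to collections, query them later" pattern is small enough to sketch. This is a minimal sketch, assuming Chroma as the store (the post doesn't say what the authors actually built on); the collection name, document ids, and snippet text are made-up placeholders.

```python
# Minimal "research memory" sketch: save snippets to a named collection,
# then query them later instead of re-researching the same sources.
# Assumes the chromadb package; all names and snippets here are illustrative.
import chromadb

client = chromadb.Client()  # in-memory; use chromadb.PersistentClient(path=...) to keep data between sessions
collection = client.get_or_create_collection(name="research")

# Save: store research snippets with ids and light metadata (topic, source).
collection.add(
    ids=["api-docs-001", "competitor-002"],
    documents=[
        "The /v2/search endpoint paginates with a cursor param, 100 results per page.",
        "Competitor X prices their API at $0.50 per 1K requests with a free tier.",
    ],
    metadatas=[{"topic": "api-docs"}, {"topic": "competitors"}],
)

# Query: pull the closest snippets back out and paste them into the LLM prompt
# as context, instead of re-fetching and re-summarizing the original sources.
results = collection.query(query_texts=["how does search pagination work?"], n_results=2)
for doc in results["documents"][0]:
    print(doc)
```

The specific store matters less than the split: writes happen as you research, reads happen at prompt time, so the tokens spent summarizing a source are paid once rather than every conversation.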


Launching Needle AI Agents Tomorrow
Hi everyone 👋 We’re launching Needle AI Agents tomorrow! Our goal is to help teams automate processes and build AI agents with the context of their own data sources (Google Drive, Notion, Slack, etc.).

I’d love to hear just one thing: what’s your biggest pain point when setting up AI workflows?

We’re happy to answer any questions about the tech, roadmap, or how we built it!
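For readers unfamiliar with the "agents with context of your own data sources" idea, here is a generic sketch of that workflow: fetch relevant documents from a connected source, then answer with an LLM grounded in them. This is not Needle's API (the post doesn't show one); the connector and LLM calls are stubbed placeholders.

```python
# Generic sketch of an agent that answers from your own data sources.
# NOT Needle's API: fetch_documents and call_llm are hypothetical stubs.
from dataclasses import dataclass

@dataclass
class Document:
    source: str   # e.g. "google-drive", "notion", "slack"
    text: str

def fetch_documents(query: str) -> list[Document]:
    # Placeholder connector: a real setup would search the Drive/Notion/Slack
    # APIs (or a service that syncs them) and return matching snippets.
    return [Document("notion", "Onboarding checklist: create account, assign mentor, schedule 1:1.")]

def call_llm(prompt: str) -> str:
    # Placeholder for any chat-completion call.
    return "LLM answer goes here"

def answer_with_context(question: str) -> str:
    docs = fetch_documents(question)
    context = "\n".join(f"[{d.source}] {d.text}" for d in docs)
    prompt = f"Answer using only this context:\n{context}\n\nQuestion: {question}"
    return call_llm(prompt)

print(answer_with_context("What are the steps in our onboarding process?"))
```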

