my co-founder and I are building an AI agent because at our last startup we just couldn't keep up with support.
we tried every chatbot out there. they all felt robotic, and customers hated them.
hiring more people was too slow and too expensive.
so we put together this AI chatbot (think Intercom Fin but deeper) that trains on your old tickets, learns your tone, doesn't hallucinate, and can actually answer questions like a real support rep.
Why does it feel like every developer in the Bay Area announces ANOTHER MCP library for writing AI clients/servers? "The USB-C of AI," they say. To me it feels like the nightmare of AI, because everything moves so fast and everyone keeps building more stuff (and barely maintaining it).
There are already a lot of AI frameworks: LangChain, LlamaIndex, AutoGen, CrewAI, DSPy, OpenDevin, Semantic Kernel... They're all trying to answer one basic question:
"What's the best way to reliably interact with an LLM so that you can bolt things like memory, tools, and MCP onto your AI agent?"
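Under the hood, every one of those frameworks wraps roughly the same loop: keep a message history (the "memory"), let the model either answer or request a tool call, run the tool, and feed the result back in. Here's a toy sketch of that loop; `fake_llm`, `TOOLS`, and the message shapes are all hypothetical stand-ins for a real model API and tool registry, not any framework's actual interface:

```python
# Toy sketch of the agent loop these frameworks wrap. fake_llm is a
# hypothetical stand-in for a real chat-completion API call.

def fake_llm(messages):
    """Pretend model: asks for a tool once, then answers using its result."""
    if any(m["role"] == "tool" for m in messages):
        return {"content": f"It is {messages[-1]['content']} where you are."}
    return {"tool": "get_time", "args": {}}

TOOLS = {"get_time": lambda: "12:00"}  # trivial tool registry

def run_agent(user_input, max_steps=5):
    messages = [{"role": "user", "content": user_input}]  # the "memory"
    for _ in range(max_steps):
        reply = fake_llm(messages)
        if "tool" in reply:
            # model requested a tool: execute it, append the result, loop again
            result = TOOLS[reply["tool"]](**reply["args"])
            messages.append({"role": "tool", "content": result})
        else:
            return reply["content"]  # model produced a final answer
    return "step limit reached"

print(run_agent("what time is it?"))  # -> It is 12:00 where you are.
```

Swap `fake_llm` for a real API call and `TOOLS` for MCP-exposed tools and you have the skeleton most of these libraries add abstraction on top of.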
I've always found it painful to turn a simple training or explainer video into a full learning module: quizzes, logic, SCORM exports... it's never really rapid.