We used LangChain to orchestrate prompt flows, memory, and tool use across our AI Agents. It gave us the flexibility to chain LLM calls with infrastructure logic, without reinventing the wheel. Solid framework for building structured, agentic AI systems.
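The chained prompt → LLM → tool pattern described above can be sketched in plain Python. This is not Microtica's actual code and does not call LangChain itself; `fake_llm`, `list_services`, and `run_chain` are hypothetical stand-ins for the model call, an infrastructure tool, and the orchestration step that LangChain would normally provide.

```python
from typing import Callable

def fake_llm(prompt: str) -> str:
    # Placeholder for a real LLM call (e.g. Claude invoked via LangChain).
    # Returns a canned "tool request" on the first call, a final answer
    # once a tool result is present in the prompt.
    if "TOOL" not in prompt:
        return "TOOL:list_services"
    return "All services healthy."

def list_services() -> str:
    # Hypothetical infrastructure tool the agent is allowed to invoke.
    return "api, worker, frontend"

TOOLS: dict[str, Callable[[], str]] = {"list_services": list_services}

def run_chain(user_input: str) -> str:
    # Step 1: the first LLM call may request a tool instead of answering.
    reply = fake_llm(user_input)
    if reply.startswith("TOOL:"):
        tool_name = reply.split(":", 1)[1]
        tool_output = TOOLS[tool_name]()
        # Step 2: feed the tool result back to the LLM for a final answer.
        reply = fake_llm(f"{user_input}\nTOOL RESULT: {tool_output}")
    return reply

print(run_chain("Check the status of my services"))
```

In a real LangChain setup the same loop is handled by the framework's agent and tool abstractions, so the infrastructure logic plugs in as tool functions rather than hand-rolled control flow.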
We used Cursor during the development of the Microtica UI, especially for quickly implementing Tailwind components, refactoring logic across React views, and staying in flow while iterating on the developer experience.
Claude is the core LLM powering our AI Agents. We chose it for its ability to handle complex DevOps instructions, generate accurate infrastructure as code, and follow up with context-aware questions. It's reliable for infrastructure-focused use cases.