We use OpenAI’s models for querying saved content. When you ask a question, the system retrieves the most relevant snippets and passes them to the API to generate clear, contextual answers, even for long PDFs, transcripts, messy web pages, and lengthy videos or podcasts.
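Here's a rough sketch of what that generation step can look like with the official openai Node SDK in TypeScript. It assumes the relevant snippets have already been retrieved; the model name, the `Snippet` shape, and the `answerQuestion` helper are illustrative, not our exact code.

```ts
import OpenAI from "openai";

const client = new OpenAI(); // reads OPENAI_API_KEY from the environment

interface Snippet {
  source: string; // e.g. a PDF name, URL, or transcript id
  text: string;
}

export async function answerQuestion(
  question: string,
  snippets: Snippet[]
): Promise<string> {
  // Fold the retrieved snippets into the prompt so the model answers in context.
  const context = snippets
    .map((s, i) => `[${i + 1}] (${s.source})\n${s.text}`)
    .join("\n\n");

  const completion = await client.chat.completions.create({
    model: "gpt-4o-mini", // placeholder model name
    messages: [
      {
        role: "system",
        content:
          "Answer the user's question using only the provided snippets. " +
          "If the answer is not in the snippets, say so.",
      },
      { role: "user", content: `Snippets:\n${context}\n\nQuestion: ${question}` },
    ],
  });

  return completion.choices[0].message.content ?? "";
}
```

Because the question always travels with its retrieved context, the same call works whether the source was a two-page PDF or a three-hour podcast transcript.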
The ragobble front-end is built in React for a fast, modular UI. It powers everything from drag-and-drop file uploads to live search results and seamless transitions between saved content and AI responses.
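As a sketch of the upload side, here's roughly what a drag-and-drop drop zone looks like in React. The component name and the /api/upload endpoint are placeholders, not our actual routes.

```tsx
import { useState, type DragEvent } from "react";

export function UploadDropZone() {
  const [status, setStatus] = useState<"idle" | "uploading" | "done">("idle");

  async function handleDrop(e: DragEvent<HTMLDivElement>) {
    e.preventDefault(); // keep the browser from opening the dropped file
    setStatus("uploading");

    const form = new FormData();
    for (const file of Array.from(e.dataTransfer.files)) {
      form.append("files", file);
    }

    await fetch("/api/upload", { method: "POST", body: form });
    setStatus("done");
  }

  return (
    <div
      onDragOver={(e) => e.preventDefault()} // required so onDrop fires
      onDrop={handleDrop}
    >
      {status === "idle" && "Drag files here to save them"}
      {status === "uploading" && "Uploading…"}
      {status === "done" && "Saved — ask a question about it below"}
    </div>
  );
}
```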
LangChain orchestrates content retrieval, chunking, embedding, and generation. It lets us combine different tools (like vector search + OpenAI) into a clean pipeline without managing every integration manually.
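Here's a simplified TypeScript sketch of that pipeline with LangChain.js. Package entrypoints and model names vary across LangChain versions, and the in-memory vector store stands in for whatever store you'd run in production, so treat this as illustrative rather than our actual configuration.

```ts
import { RecursiveCharacterTextSplitter } from "@langchain/textsplitters";
import { MemoryVectorStore } from "langchain/vectorstores/memory";
import { OpenAIEmbeddings, ChatOpenAI } from "@langchain/openai";

export async function buildIndex(rawText: string) {
  // 1. Chunk the saved content into overlapping pieces.
  const splitter = new RecursiveCharacterTextSplitter({
    chunkSize: 1000,
    chunkOverlap: 100,
  });
  const docs = await splitter.createDocuments([rawText]);

  // 2. Embed the chunks and keep them in a vector store.
  return MemoryVectorStore.fromDocuments(docs, new OpenAIEmbeddings());
}

export async function ask(store: MemoryVectorStore, question: string) {
  // 3. Retrieve the chunks most similar to the question.
  const hits = await store.similaritySearch(question, 4);
  const context = hits.map((d) => d.pageContent).join("\n\n");

  // 4. Generate an answer grounded in the retrieved context.
  const llm = new ChatOpenAI({ model: "gpt-4o-mini" }); // placeholder model name
  const reply = await llm.invoke(
    `Answer using only this context:\n${context}\n\nQuestion: ${question}`
  );
  return reply.content;
}
```

In production the in-memory store would be swapped for a hosted vector database, but the shape of the pipeline stays the same: chunk, embed, retrieve, generate.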