Robert Floria

RAG-powered Global Assistant

One of the most interesting upgrades I recently shipped in Derisqo was a RAG-powered Global Assistant.

The app already lets users upload meetings and documents, but I wanted the AI to answer from the user’s actual workspace context — not just from a generic model response.

So I built a retrieval pipeline for the workspace-wide assistant: meetings and documents are processed, chunked, embedded, stored in pgvector, and semantically retrieved whenever the user asks a question in global chat.
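The retrieval step can be sketched roughly like this. In production the embeddings live in pgvector inside Postgres; here the store is approximated in memory with cosine similarity, and the chunk texts and vectors are made-up placeholders:

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

def retrieve(query_embedding, store, top_k=3):
    """Rank stored chunks by similarity to the query and return the best k."""
    scored = [(cosine_similarity(query_embedding, emb), chunk)
              for chunk, emb in store]
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [chunk for _, chunk in scored[:top_k]]

# Toy store of (chunk text, embedding) pairs -- real embeddings come from a model.
store = [
    ("Meeting: Q3 roadmap review", [0.9, 0.1, 0.0]),
    ("Document: onboarding checklist", [0.1, 0.8, 0.1]),
    ("Meeting: incident postmortem", [0.2, 0.1, 0.9]),
]
print(retrieve([0.85, 0.15, 0.05], store, top_k=1))
```

With pgvector the same ranking is a single `ORDER BY embedding <=> query` over an indexed column, so the in-memory loop above is only a stand-in for the semantics.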

In practice, the assistant searches across a user’s meetings and documents, pulls the most relevant context, and uses that retrieved information to generate a more grounded response.

A few things I focused on:
∙ Speaker-aware chunking for meeting transcripts
∙ Section-aware chunking for documents
∙ Cross-resource semantic retrieval
∙ Prompt grounding with retrieved context and source labels
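The speaker-aware chunking idea from the first bullet can be sketched as follows, assuming a hypothetical transcript format where each line is `Speaker: text`. The point is that chunk boundaries follow speaker turns, so a retrieved chunk never mixes voices:

```python
def chunk_by_speaker(transcript_lines, max_chars=200):
    """Group consecutive lines from the same speaker into one chunk,
    starting a new chunk on a speaker change or when the size cap is hit."""
    chunks, current, current_speaker = [], [], None
    for line in transcript_lines:
        speaker, _, text = line.partition(": ")
        if speaker != current_speaker or sum(len(t) for t in current) > max_chars:
            if current:
                chunks.append((current_speaker, " ".join(current)))
            current, current_speaker = [], speaker
        current.append(text)
    if current:
        chunks.append((current_speaker, " ".join(current)))
    return chunks

transcript = [
    "Alice: Let's review the launch risks.",
    "Alice: The main one is the data migration.",
    "Bob: Agreed, and we still need a rollback plan.",
]
for speaker, text in chunk_by_speaker(transcript):
    print(speaker, "->", text)
```

Section-aware chunking for documents works the same way, just keyed on headings instead of speakers.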

The difference is subtle but powerful: answers are no longer generic — they’re context-aware and traceable back to your own data.
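The traceability comes from the prompt-grounding step: each retrieved chunk is prefixed with a source label before it reaches the model. A minimal sketch, with hypothetical names and placeholder content:

```python
def build_grounded_prompt(question, retrieved):
    """Assemble a prompt where each retrieved chunk carries its source
    label, so the model can attribute its answer to specific resources."""
    context = "\n".join(f"[{source}] {text}" for source, text in retrieved)
    return (
        "Answer using only the context below and cite the source labels.\n\n"
        f"Context:\n{context}\n\n"
        f"Question: {question}"
    )

retrieved = [
    ("Meeting: Q3 roadmap review", "Launch slips two weeks."),
    ("Document: release checklist", "QA sign-off is required before launch."),
]
print(build_grounded_prompt("When is the launch?", retrieved))
```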
