We tap Gemini 2.5 when we need strong multimodal understanding and cross-checking of insights, running it alongside other models so Userology's reports are more robust than any single-model stack.
OpenAI powers some of our heaviest language work — especially nuanced, long-form research analysis — thanks to its strong reasoning, reliability, and ecosystem of tools built for production use.
We use Llama as part of Userology’s research engine for fast, structured UX work like tagging, clustering, and pattern-finding in sessions. Because it’s open-weight, we get tighter control over performance, routing, and data privacy than most closed models, which makes our multi-model stack more flexible for different customer needs.
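The per-task routing that makes a multi-model stack flexible can be sketched as a simple task-to-model lookup. The model names and task categories below are hypothetical placeholders for illustration, not Userology's actual configuration:

```python
# Hypothetical sketch of task-based model routing in a multi-model stack.
# Task categories and model names are illustrative only.

ROUTES = {
    "tagging": "llama",              # fast, structured UX work
    "clustering": "llama",
    "longform_analysis": "openai",   # nuanced, long-form reasoning
    "multimodal_review": "gemini",   # cross-checking across modalities
}

def route(task_type: str, default: str = "openai") -> str:
    """Pick a model for a task type; fall back to a default for unknown tasks."""
    return ROUTES.get(task_type, default)

print(route("tagging"))           # -> llama
print(route("multimodal_review")) # -> gemini
```

Keeping routing as data rather than branching logic makes it easy to re-point a task category at a different model (or a customer-specific deployment) without touching code paths.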
LiveKit lets us handle real-time audio/video and screen streaming for research sessions without fighting brittle infra, so we can focus on better participant experiences instead of WebRTC wizardry.