• 5 reviews
We use LiveKit as the orchestrator for everything voice-related in our platform — from real-time conversations to AI-driven phone calls. It handles signaling, session management, and audio streaming across our voice AI and telephony systems. The flexibility and performance have been critical to delivering reliable, low-latency voice experiences. Thanks to the LiveKit team for building an infrastructure layer we can trust.
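To make the session-management pattern concrete, here is a stdlib-only sketch of the kind of bookkeeping a signaling layer performs for voice rooms. The `VoiceSession` and `SessionOrchestrator` names are our own illustrations, not LiveKit's API:

```python
from dataclasses import dataclass, field

@dataclass
class VoiceSession:
    """Toy stand-in for a voice room: tracks who is on one call."""
    room: str
    participants: set = field(default_factory=set)

class SessionOrchestrator:
    """Illustrative registry that routes joins and leaves, as a signaling
    layer would, and tears down rooms once they empty out."""
    def __init__(self):
        self.sessions = {}

    def join(self, room, participant):
        session = self.sessions.setdefault(room, VoiceSession(room))
        session.participants.add(participant)
        return session

    def leave(self, room, participant):
        session = self.sessions.get(room)
        if session:
            session.participants.discard(participant)
            if not session.participants:
                del self.sessions[room]  # reclaim empty rooms
```

In a real deployment this state lives inside the media server; the sketch only shows the join/leave lifecycle the review refers to.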
What's great
flexibility (2), real-time communication (6), low-latency (4), audio streaming (2), signaling and session management (2)
49 views
• 5 reviews
We use Langchain to orchestrate our entire agent framework — from the AI inbox and SDR copilots to outbound flows and long-term memory. It helps manage complex tool use, contextual reasoning, and multi-turn interactions across our platform. Langchain has been critical to scaling structured LLM workflows with modular, maintainable logic. Grateful to the Langchain team and open-source contributors (part of our team) pushing this ecosystem forward.
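The multi-turn tool-use loop described above can be sketched in plain Python. Everything here — the `lookup_contact` tool, the `fake_llm` stub, and `agent_loop` — is our own illustration of the pattern, not Langchain's API:

```python
def lookup_contact(name):
    # Hypothetical CRM lookup tool with a canned result.
    return {"name": name, "title": "VP Sales"}

TOOLS = {"lookup_contact": lookup_contact}

def fake_llm(messages):
    """Stub standing in for an LLM: request a tool first, then answer
    once the tool's output is in the conversation context."""
    if not any(m["role"] == "tool" for m in messages):
        return {"tool": "lookup_contact", "arg": "Ada"}
    return {"answer": "Drafted a follow-up for Ada (VP Sales)"}

def agent_loop(question, max_turns=4):
    """Multi-turn loop: the model proposes a step, tools run, and the
    results feed back into context until it produces a final answer."""
    messages = [{"role": "user", "content": question}]
    for _ in range(max_turns):
        step = fake_llm(messages)
        if "answer" in step:
            return step["answer"]
        result = TOOLS[step["tool"]](step["arg"])
        messages.append({"role": "tool", "content": result})
    return None
```

An agent framework replaces the stub with a real model call and a larger tool registry, but the plan/act/observe cycle is the same shape.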
What's great
community support (3), agentic workflow support (18), scalable AI development (6), modular toolset (3), context-aware reasoning (3)
46 views
• 5 reviews
We use Deepgram to transcribe live AI-driven training calls, where SDRs and sales reps practice conversations with an AI agent impersonating real prospects. The fast, accurate transcription is essential — it enables our coaching system to deliver instant feedback and correction right after each mock call. This real-time loop helps reps improve faster, and we're thankful to the Deepgram team for enabling it.
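The instant-feedback loop can be illustrated with a toy coaching rule over streaming transcript chunks. The `coach` and `feedback_loop` functions are our own sketch, assuming transcription arrives as text chunks; they are not part of Deepgram's SDK:

```python
FILLERS = {"um", "uh", "like"}

def coach(transcript_chunk):
    """Toy coaching rule: flag filler words in one transcript chunk."""
    words = transcript_chunk.lower().split()
    hits = [w for w in words if w.strip(",.") in FILLERS]
    return f"Avoid fillers: {', '.join(hits)}" if hits else None

def feedback_loop(chunks):
    """Consume streaming transcript chunks and collect instant feedback,
    mimicking coaching that fires right after each utterance."""
    notes = []
    for chunk in chunks:
        note = coach(chunk)
        if note:
            notes.append(note)
    return notes
```

In production the chunks would come from a live transcription stream and the rule would be a richer model, but the shape of the loop is the same.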
What's great
fast performance (12), real-time transcription (8), high accuracy (18)
58 views
• 5 reviews
We use Groq for ultra-fast inference when analyzing millions of contact records and enriching them with AI. It enables us to run deep research and structured reasoning at speeds that would be impossible on standard GPU setups. This level of performance lets us deliver intelligent outputs in real time, even at scale. We're grateful to the Groq team for building the kind of infrastructure that makes this possible.
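Enriching millions of records against a low-latency inference endpoint is essentially a fan-out problem. Below is a stdlib-only sketch: `enrich` is a hypothetical stand-in for one fast classification call, and `enrich_all` parallelizes it with a thread pool; neither is Groq's API:

```python
from concurrent.futures import ThreadPoolExecutor

def enrich(record):
    """Stand-in for a single fast-inference call that tags one contact
    record with a derived attribute."""
    title = record["title"]
    seniority = "exec" if ("VP" in title or "Chief" in title) else "ic"
    return {**record, "seniority": seniority}

def enrich_all(records, workers=8):
    """Fan enrichment out across a worker pool, as you would against a
    low-latency inference endpoint; map preserves input order."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(enrich, records))
```

Swapping the stub for a real API call keeps the same structure; the endpoint's speed then sets the throughput ceiling rather than the orchestration code.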
What's great
fast performance (12), real-time interaction (3)
26 views




