43 reviews
Perplexity helps Logic handle tasks that need real-time web context. When our automations need current information, Perplexity delivers accurate results without the hallucination headaches.
Gemini gives us another strong option for routing complex tasks. Fast response times and competitive pricing mean we can offer our customers more flexibility in how their automations run.
The OpenAI API powers much of Logic's intelligent routing. Reliable, fast, and the models keep getting better. Their infrastructure handles scale without breaking a sweat.
Claude Code helped us build Logic's automation engine. It can hold our entire codebase in context, so it gets what we're actually trying to do. Massively sped up our development.
Plaid is obviously the industry standard for bank linking, and we've been impressed by just how many institutions it now supports.
Alpaca's Broker API is the best in the business. We wouldn't have been able to launch when we did without their amazing new rebalancing API.
We use LangChain to orchestrate our entire agent framework, from the AI inbox and SDR copilots to outbound flows and long-term memory. It helps manage complex tool use, contextual reasoning, and multi-turn interactions across our platform. LangChain has been critical to scaling structured LLM workflows with modular, maintainable logic. Grateful to the LangChain team and open-source contributors (part of our team) pushing this ecosystem forward.
What's great
community support (3), agentic workflow support (18), scalable AI development (6), modular toolset (3), context-aware reasoning (3)
We use Deepgram to transcribe live AI-driven training calls, where SDRs and sales reps practice conversations with an AI agent impersonating real prospects. The fast, accurate transcription is essential: it enables our coaching system to deliver instant feedback and correction right after each mock call. This real-time loop helps reps improve faster, and we're thankful to the Deepgram team for enabling it.
What's great
fast performance (12), real-time transcription (8), high accuracy (18)
We use LiveKit as the orchestrator for everything voice-related in our platform, from real-time conversations to AI-driven phone calls. It handles signaling, session management, and audio streaming across our voice AI and telephony systems. The flexibility and performance have been critical to delivering reliable, low-latency voice experiences. Thanks to the LiveKit team for building an infrastructure layer we can trust.
What's great
flexibility (2), real-time communication (6), low-latency (4), audio streaming (2), signaling and session management (2)
We use Groq for ultra-fast inference when analyzing millions of contact records and enriching them with AI. It enables us to run deep research and structured reasoning at speeds that would be impossible on standard GPU setups. This level of performance lets us deliver intelligent outputs in real time, even at scale. We're grateful to the Groq team for building the kind of infrastructure that makes this possible.
What's great
fast performance (12), real-time interaction (3)
We chose Rust for SurrealDB because performance and safety matter. Rust's memory safety guarantees and zero-cost abstractions let us build a fast, scalable database engine without sacrificing reliability.
What's great
fast performance (12), scalability (3), memory safety (6), zero-cost abstractions (2)
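The "zero-cost abstractions" point is concrete: a declarative Rust iterator pipeline compiles down to essentially the same machine code as a hand-written loop, so readability costs nothing at runtime. A minimal sketch (hypothetical function names, not SurrealDB code):

```rust
// Sum the squares of the even numbers in a slice, two ways.

// Declarative iterator pipeline: filter, map, sum.
fn sum_even_squares_iter(xs: &[i64]) -> i64 {
    xs.iter()
        .filter(|&&x| x % 2 == 0)
        .map(|&x| x * x)
        .sum()
}

// Explicit loop doing the same work by hand.
fn sum_even_squares_loop(xs: &[i64]) -> i64 {
    let mut total = 0;
    for &x in xs {
        if x % 2 == 0 {
            total += x * x;
        }
    }
    total
}

fn main() {
    let data = [1, 2, 3, 4, 5, 6];
    // 2*2 + 4*4 + 6*6 = 56
    assert_eq!(sum_even_squares_iter(&data), 56);
    assert_eq!(sum_even_squares_loop(&data), 56);
    // rustc optimizes both functions to equivalent machine code:
    // the iterator abstraction imposes no runtime overhead.
}
```

Both versions return the same result; with optimizations enabled, the compiler lowers the iterator chain to the same tight loop as the manual version.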
Quick, scalable, and simple to implement.
What's great
rapid prototyping (5), scalable AI development (6)
