
OpenAI powers the embeddings layer underneath Claro's validation engine.
Every document, source, and entity gets embedded; those embeddings drive semantic similarity for entity resolution, source-to-query relevance in retrieval scoring, and one of the four signals in our confidence formula. Embeddings are the quiet foundation of "does this source actually answer this question" at 100k-row scale.
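The relevance signal above boils down to cosine similarity between embedding vectors. A minimal sketch, with toy 3-dimensional vectors standing in for the much larger vectors an embeddings endpoint would actually return (the helper and the numbers are illustrative, not Claro's code):

```python
import math

def cosine_similarity(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Toy stand-ins for real embedding vectors; production vectors
# come back from an embeddings API with hundreds of dimensions.
query_vec  = [0.2, 0.7, 0.1]
source_vec = [0.25, 0.68, 0.05]

# Scores near 1.0 mean the source is semantically close to the query.
relevance = cosine_similarity(query_vec, source_vec)
```

The same scalar can feed both retrieval ranking and the confidence formula, which is what makes a single embeddings layer so leveraged.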

Claro runs on Google Cloud. BigQuery is our analytical backbone: we store and query millions of rows of supplier data, enrichment history, and confidence metadata there. On top of that, the full data science stack (Vertex AI, notebooks, model training environments) is where our team runs experiments, benchmarks new validation approaches, and iterates on scoring logic. GCP gives us the throughput for parallel LLM calls (multi-model consensus), the storage for customer knowledge bases, and the flexibility to orchestrate long-running enrichment jobs across thousands of rows.
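The multi-model consensus pattern mentioned above can be sketched as a parallel fan-out plus a majority vote. The model names and the `ask_model` stub here are hypothetical placeholders; in production each call would hit a different provider's API:

```python
from collections import Counter
from concurrent.futures import ThreadPoolExecutor

# Hypothetical stub: a real version would call out to each model's API.
def ask_model(model: str, claim: str) -> str:
    canned = {"model-a": "valid", "model-b": "valid", "model-c": "invalid"}
    return canned[model]

def consensus(claim: str, models: list[str]) -> str:
    # Fan the same claim out to every model in parallel.
    with ThreadPoolExecutor(max_workers=len(models)) as pool:
        votes = list(pool.map(lambda m: ask_model(m, claim), models))
    # The majority verdict becomes the consensus answer.
    verdict, _ = Counter(votes).most_common(1)[0]
    return verdict

verdict = consensus("Supplier X is ISO 9001 certified",
                    ["model-a", "model-b", "model-c"])
```

Because each model call is independent, latency stays close to the slowest single call rather than the sum, which is what makes consensus affordable at row scale.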

Claro runs on Supabase. Auth, Postgres, storage, and realtime — all in one place, zero infra babysitting. When you're building a validation layer where every cell needs citations, provenance, and audit history, Postgres + RLS is the right foundation. Saved us months.
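To make "every cell needs citations, provenance, and audit history" concrete, here is a pure-Python sketch of what a per-cell record might carry. The field names are illustrative, not Claro's actual schema, and the example URLs are placeholders:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class CellProvenance:
    """Illustrative per-cell record: a value plus where it came from."""
    value: str
    citation: str            # quoted span from the cited source
    source_url: str
    audit_log: list[str] = field(default_factory=list)

    def update(self, new_value: str, citation: str, source_url: str) -> None:
        # Keep the full change history so every cell stays auditable.
        stamp = datetime.now(timezone.utc).isoformat()
        self.audit_log.append(f"{stamp}: {self.value!r} -> {new_value!r}")
        self.value, self.citation, self.source_url = new_value, citation, source_url

cell = CellProvenance("Acme GmbH", "'Acme GmbH, Berlin'",
                      "https://example.com/register")
cell.update("Acme AG", "'renamed Acme AG in 2023'",
            "https://example.com/filing")
```

In Postgres this maps naturally to a cells table joined to an append-only audit table, with row-level security scoping every read and write to the owning tenant.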




