We just shipped Custom AI for CodeCritic on Pro and Enterprise plans.
If you want to run AI code review against your own LLM endpoint instead of our default platform models, you can now connect any public HTTPS API that speaks the familiar OpenAI-compatible chat contract (/v1/chat/completions). That matters for teams that care about vendor choice, predictable token economics, or routing reviews through an approved corporate model.
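For reference, the OpenAI-compatible contract boils down to a JSON POST against your endpoint. A minimal sketch of the request body CodeCritic-style tooling would send (the base URL, model ID, key, and helper name below are placeholders, not our internals):

```python
import json

# Placeholder values -- substitute your provider's base URL, model ID, and key.
BASE_URL = "https://llm.example.com"  # must be public HTTPS
MODEL_ID = "my-code-model"
API_KEY = "sk-..."  # sent as "Authorization: Bearer <key>"

def build_review_request(code_snippet: str) -> dict:
    """Build the JSON body for POST {BASE_URL}/v1/chat/completions."""
    return {
        "model": MODEL_ID,
        "messages": [
            {"role": "system", "content": "You are a code reviewer."},
            {"role": "user", "content": f"Review this code:\n{code_snippet}"},
        ],
    }

body = build_review_request("def add(a, b): return a - b")
print(json.dumps(body, indent=2))
```

Any endpoint that accepts this shape and returns the standard `choices[0].message.content` response should work.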
What you get
BYO LLM for code review - point CodeCritic at your provider's base URL, pick a model ID, and store your API key securely in Settings → Integrations (encrypted on our side).
Same product surface - pasted-code reviews, GitHub PR flows, and the API and automation all keep working. When Custom AI is active and valid, reviews can run against your endpoint without drawing on the platform review quota you're used to from the dashboard (see live plan details in-app).
Enterprise-friendly story - a good fit when policy mandates a self-hosted or contracted LLM only, as long as the endpoint is reachable over HTTPS and compatible with the OpenAI-style chat API.
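Before flipping Custom AI on, it's worth sanity-checking that your base URL meets the reachability requirement above. A minimal sketch of that check (the function name is ours, not part of the product; it only validates the URL shape, not network reachability):

```python
from urllib.parse import urlparse

def is_valid_custom_endpoint(base_url: str) -> bool:
    """Check a base URL is plausible for Custom AI: HTTPS scheme with a host."""
    parsed = urlparse(base_url)
    return parsed.scheme == "https" and bool(parsed.netloc)

print(is_valid_custom_endpoint("https://llm.example.com"))  # True
print(is_valid_custom_endpoint("http://llm.example.com"))   # False: not HTTPS
```

If the URL passes, the remaining question is compatibility: the endpoint must accept the OpenAI-style chat request at `/v1/chat/completions`.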