Launched this week

ICME API
Cryptographic guardrails for AI Agents
AI agents are becoming powerful enough to move real money, sign contracts, and make consequential decisions autonomously. Agentic security has not kept up. We built cryptographically secure guardrails for AI agents using formal verification and zero knowledge proofs.

We started with a simple question: why are AI agent guardrails still just prompts or observability dashboards?
Prompt-based guardrails and LLM judges are fundamentally contestable: a sufficiently clever input can social-engineer any system that leaves the final judgment call to a model. We wanted a decision that is math, not opinion.
Observability is reactive: humans are slow to verify and need sleep.
Why can't guardrails be based on math and cryptography instead?
It turns out they can, and they cover over 99.9% of edge cases.
The ARc paper (https://arxiv.org/pdf/2511.09008) showed that natural-language policies can be compiled to SMT-LIB formal logic and checked by a solver: the answer is SAT or UNSAT, not a confidence score. We built an API on top of that, then wrapped it in zero-knowledge proofs (https://github.com/ICME-Lab/jolt-atlas) so verification is succinct (under 1s), the policy stays private, and every decision produces a cryptographic audit trail.
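To make the compilation target concrete, here is a minimal sketch of what an SMT-LIB query for one guardrail check might look like. The policy text ("transfers over $1000 must go to an allowlisted recipient"), variable names, and thresholds are illustrative assumptions, not ICME's actual compiler output:

```python
# Hypothetical sketch: a natural-language policy rendered as an SMT-LIB
# query for one concrete proposed action. Feeding this to an SMT solver
# yields sat (action compliant) or unsat (action blocked) -- a binary
# verdict, not a confidence score.

def compile_check(amount_usd: int, recipient_allowlisted: bool) -> str:
    """Assert the policy plus the concrete action; `sat` means compliant."""
    return "\n".join([
        "(declare-const amount Int)",
        "(declare-const allowlisted Bool)",
        "; policy: amount <= 1000 OR recipient is allowlisted",
        "(assert (or (<= amount 1000) allowlisted))",
        "; concrete action proposed by the agent",
        f"(assert (= amount {amount_usd}))",
        f"(assert (= allowlisted {'true' if recipient_allowlisted else 'false'}))",
        "(check-sat)",
    ])

# A $5000 transfer to a non-allowlisted address: the solver returns unsat.
print(compile_check(5000, False))
```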
The x402 payment flow felt like the right fit: $0.10 USDC on Base per guardrail check, no subscription needed. This kind of primitive makes sense as agents start transacting with each other directly.
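The per-check flow follows the HTTP 402 pattern: the first request is refused with payment terms, the client settles, then retries with proof of payment. A toy sketch of that round trip, where the endpoint behavior, header name, and response fields are all assumptions rather than the real API:

```python
# Illustrative x402-style round trip. A real client would settle $0.10
# USDC on Base and attach a verifiable payment proof; here the server and
# the proof are stand-ins to show the request/402/retry shape.

def guardrail_server(headers: dict) -> tuple[int, dict]:
    """Toy server: demand payment unless a payment proof header is attached."""
    if "X-Payment" not in headers:
        return 402, {"price": "0.10", "asset": "USDC", "network": "base"}
    return 200, {"verdict": "ALLOW", "proof": "zk-attestation-bytes"}

# 1. First request: no payment attached, server quotes the price.
status, body = guardrail_server({})
assert status == 402

# 2. Client settles the quoted amount (stand-in string, not a real tx).
payment_proof = f"paid:{body['price']}:{body['asset']}"

# 3. Retry with proof: guardrail verdict plus cryptographic attestation.
status, body = guardrail_server({"X-Payment": payment_proof})
print(status, body["verdict"])
```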
Happy to answer questions about the ARc compilation, the ZK circuit, the payment flow, or anything else.