Launching today
Faramesh

Establishing the Standard for Agentic Execution Control

The first deterministic governance and control system for AI agent actions, providing policy-based access control, approval workflows, and comprehensive observability.

Amjad Fatmi
Maker
We spent most of 2025 watching teams try to 'prompt' their way out of agentic hallucinations. It doesn't work. If you give an LLM-based agent access to a production database or a shell, you are always one 'ignore previous instructions' jailbreak away from a catastrophic incident.

We believe Faramesh is the first project to treat the 'Agent-to-System' bridge as a distributed-systems problem rather than a prompting problem. The hardest part to get right was deterministic canonicalization. LLMs are inherently messy: one model might send {"power": 100.0} while another sends {"power": 100} for the same tool call. To build a reliable Action Authorization Boundary (AAB), we had to ensure that the same semantic intent produces a stable cryptographic hash every single time. Without this, you cannot have reliable RBAC, audit trails, or fail-closed security for autonomous agents.

We've open-sourced the core logic today and would love for the community to tear apart our approach in canonicalization.py. We're specifically interested in whether people think this architecture could eventually be standardized into a formal 'Agentic Firewall' protocol.
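To make the canonicalization idea concrete, here is a minimal sketch of the kind of normalization the comment describes: collapse integral floats, sort keys, serialize compactly, and hash. The function names (`_normalize`, `canonical_hash`) and the SHA-256 choice are my assumptions for illustration, not the actual contents of canonicalization.py.

```python
import hashlib
import json

def _normalize(value):
    # Recursively normalize values so semantically identical tool calls
    # canonicalize to the same structure. An integral float like 100.0
    # is collapsed to the int 100 (hypothetical rule for this sketch).
    if isinstance(value, float) and value.is_integer():
        return int(value)
    if isinstance(value, dict):
        return {k: _normalize(v) for k, v in value.items()}
    if isinstance(value, list):
        return [_normalize(v) for v in value]
    return value

def canonical_hash(tool_call: dict) -> str:
    # Serialize with sorted keys and no extra whitespace, then hash,
    # so key order and formatting never change the digest.
    canonical = json.dumps(
        _normalize(tool_call),
        sort_keys=True,
        separators=(",", ":"),
        ensure_ascii=True,
    )
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()

# Two messy-but-equivalent tool calls hash identically:
a = canonical_hash({"tool": "set_power", "args": {"power": 100.0}})
b = canonical_hash({"args": {"power": 100}, "tool": "set_power"})
assert a == b
```

A stable digest like this is what lets a policy engine key RBAC rules and audit-log entries off the action itself rather than the model's surface formatting.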