Launching today
layerAI

Decide Before AI Acts.

Hi! I'm building a permission decision layer for AI agents that sits between tool calls and execution. It's already live and being tested. I'm exploring whether this could help teams working on agent and tool safety, especially where agents interact with external tools. If you think it's relevant, would you be open to taking a quick look and giving feedback? Or, if it's more appropriate, would you feel comfortable introducing me to someone on your team? No pressure at all 🙂
layerAI gallery images
Free

Thanakorn Nuekchana
I built this after seeing how quickly AI agents were being connected to tools and APIs without a clear permission checkpoint before execution.

The problem I'm trying to address is simple: agents can act too freely, calling tools, modifying data, or accessing resources without a centralized decision layer. Teams often end up building ad-hoc safeguards that are inconsistent, expensive to maintain, and outside their core focus.

My approach is a system that sits outside the agent's logic: a managed decision layer that evaluates intent before action, without executing anything itself. During development and launch, I focused on keeping integration minimal (a single SDK call), keeping policies centrally managed, and preserving privacy by not storing payloads.
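To make the integration shape concrete, here is a minimal sketch of what a pre-execution permission check could look like from the agent side. Everything in it (the endpoint URL, field names, and the allow/reason response shape) is an assumption for illustration, not the actual layerAI SDK.

```python
# Illustrative sketch only: the endpoint, fields, and response shape
# are assumptions, not the actual layerAI SDK.

import requests

DECISION_ENDPOINT = "https://example.com/decide"  # hypothetical decision-layer endpoint


def request_decision(agent_id: str, tool: str, action: str, resource: str) -> dict:
    """Ask the external decision layer whether this tool call may run.

    Only intent metadata is sent; the tool payload itself is not shared,
    matching the no-payload-storage approach described above.
    """
    response = requests.post(
        DECISION_ENDPOINT,
        json={
            "agent_id": agent_id,
            "tool": tool,
            "action": action,
            "resource": resource,
        },
        timeout=5,
    )
    response.raise_for_status()
    # Assumed response shape, e.g. {"allow": false, "reason": "policy: no prod writes"}
    return response.json()


def call_tool_with_check(agent_id: str, tool: str, action: str, resource: str, execute):
    """Gate an agent's tool call on the decision layer's verdict."""
    decision = request_decision(agent_id, tool, action, resource)
    if not decision.get("allow", False):
        raise PermissionError(f"Blocked by policy: {decision.get('reason', 'denied')}")
    # Execution stays inside the agent; the decision layer never runs the tool.
    return execute()


if __name__ == "__main__":
    # Example: an agent wants to delete rows in a production database.
    call_tool_with_check(
        agent_id="support-bot",
        tool="postgres",
        action="delete",
        resource="prod.users",
        execute=lambda: print("tool executed"),
    )
```

The design choice this sketch reflects is that the layer only returns a decision; execution stays inside the agent, so the layer itself never touches the tool or its payload.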