Thanakorn Nuekchana left a comment
I built this after seeing how quickly AI agents were being connected to tools and APIs without a clear permission checkpoint before execution. The problem I’m trying to address is simple: agents can act too freely — calling tools, modifying data, or accessing resources without a centralized decision layer. Teams often end up building ad-hoc safeguards that are inconsistent, expensive to...

layerAI
Decide Before AI Acts.
Hi — I'm building a permission decision layer for AI agents that sits between tool calls and execution.
It’s already live and being tested. I’m exploring whether this could help teams working on agent/tool safety, especially where agents call out to external tools (there’s a rough sketch of the general pattern at the end of this note).
If you think it’s relevant, would you be open to taking a quick look and giving feedback?
Or, if more appropriate, would you feel comfortable introducing me to someone on your team?
No pressure at all 🙂
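
For anyone curious what I mean by a permission decision layer, here is a minimal sketch of the general pattern in Python. It's illustrative only, not layerAI's actual API: check_permission, guarded_call, and the allow-list are hypothetical stand-ins for a centralized decision service.

```python
from typing import Any, Callable

# Hypothetical allow-list standing in for a real, centralized policy service.
ALLOWED_TOOLS = {"search_docs", "read_calendar"}

def check_permission(tool_name: str, args: dict) -> bool:
    """Decide whether the agent may execute this tool call."""
    return tool_name in ALLOWED_TOOLS

def guarded_call(tool_name: str, tool_fn: Callable[..., Any], **kwargs: Any) -> Any:
    """Run the permission decision before the tool actually executes."""
    if not check_permission(tool_name, kwargs):
        raise PermissionError(f"Tool call '{tool_name}' denied by policy")
    return tool_fn(**kwargs)

# Example tools: one read-only, one write-capable.
def search_docs(query: str) -> str:
    return f"results for {query}"

def delete_record(record_id: str) -> str:
    return f"deleted {record_id}"

print(guarded_call("search_docs", search_docs, query="agent safety"))
# guarded_call("delete_record", delete_record, record_id="42")  # raises PermissionError
```

The point is simply that every tool call passes through one explicit decision before it executes, instead of each integration rolling its own ad-hoc checks.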

