Hi, I'm building a permission decision layer for AI agents that sits between tool calls and execution.
It's live and being tested now. I'm exploring whether it could help teams working on agent/tool safety, especially where agents call external tools.
If it seems relevant, would you be open to taking a quick look and sharing feedback?
Or, if it's a better fit, would you be comfortable introducing me to someone on your team?
No pressure at all 🙂