All activity
Guardio is a proxy that sits between your AI agent system and the external world. It intercepts and evaluates messages flowing to and from MCP tools and other APIs before they reach the real servers. You can enforce policies (allow, block, sanitize), require approval, and observe activity - all through a plugin system.
Guardio: The proxy that sits between AI Agent and the external world
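The allow/block/sanitize policy flow described above can be sketched as a small Python pipeline. This is a hedged illustration only: the `Verdict`, `ToolCall`, and `evaluate` names are hypothetical and are not Guardio's actual API.

```python
from dataclasses import dataclass
from enum import Enum
from typing import Callable, List


class Verdict(Enum):
    # The three policy outcomes described in the project summary.
    ALLOW = "allow"
    BLOCK = "block"
    SANITIZE = "sanitize"


@dataclass
class ToolCall:
    # A message on its way to an MCP tool or external API.
    tool: str
    arguments: dict


Policy = Callable[[ToolCall], Verdict]


def evaluate(call: ToolCall, policies: List[Policy]) -> Verdict:
    """Run each policy plugin in order; the first non-ALLOW verdict wins."""
    for policy in policies:
        verdict = policy(call)
        if verdict is not Verdict.ALLOW:
            return verdict
    return Verdict.ALLOW


# Example policy plugin: block any file-deletion tool call outright.
def no_deletes(call: ToolCall) -> Verdict:
    if call.tool == "delete_file":
        return Verdict.BLOCK
    return Verdict.ALLOW


print(evaluate(ToolCall("delete_file", {"path": "/tmp/x"}), [no_deletes]))
```

A real proxy would sit on the network path and apply such policies before forwarding the request; the "first non-ALLOW verdict wins" rule is just one plausible way to compose plugins.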
Radosław Szymkiewicz left a comment
I always struggle with guaranteeing that my AI agent won't do anything that could break my system. An AI agent's behavior is never 100% guaranteed - there is always a small chance that it might send tons of emails, accidentally delete a file, or cause other unintended side effects. Guardio solves this issue. You put a proxy in front of all your AI agents and create policies that the agents...
Guardio: The proxy that sits between AI Agent and the external world
Multi AI Agent Test Framework. Test agentic AI systems before deployment.
Maia Framework: UI dashboard for reviewing and debugging tests
Radosław Szymkiewicz left a comment
Introducing the Multi AI Agent Test Framework - a framework that tests agentic systems before deployment. Check how your AI agents behave by testing, debugging, asserting, and validating what they generate. See everything on a nice UI dashboard. An open-source project still in its MVP phase, but growing fast!
Maia Framework: UI dashboard for reviewing and debugging tests
Radosław Szymkiewicz left a comment
New functionality has arrived in Maia - you can now use the dashboard to visualize tests! See more at: https://github.com/radoslaw-sz/maia?tab=readme-ov-file#test-dashboard
Maia Test Framework: A pytest-based framework for testing multi AI agent systems
A pytest-based framework for testing multi AI agent (mAIa) systems. It provides a flexible and extensible platform for creating and running complex multi-agent simulations and capturing the results. - radoslaw-sz/maia
Maia Test Framework: A pytest-based framework for testing multi AI agent systems
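Since the framework is pytest-based, a multi-agent test plausibly looks like an ordinary pytest function that drives a simulated conversation and asserts on what the agents generated. The sketch below is purely illustrative: `run_conversation` and the `echo_agent` stub are hypothetical helpers, not Maia's actual API.

```python
# Hypothetical sketch of a multi-agent test in pytest style.
# `echo_agent` stands in for a real LLM-backed agent, and
# `run_conversation` is an assumed driver, not part of Maia.
from typing import Callable, List


def echo_agent(message: str) -> str:
    # Deterministic stub agent: echoes whatever it receives.
    return f"echo: {message}"


def run_conversation(agents: List[Callable[[str], str]],
                     prompt: str, turns: int = 2) -> List[str]:
    """Drive a fixed number of turns and capture every message."""
    transcript = [prompt]
    for i in range(turns):
        agent = agents[i % len(agents)]
        transcript.append(agent(transcript[-1]))
    return transcript


def test_agents_respond_to_prompt():
    transcript = run_conversation([echo_agent], "hello", turns=2)
    # Assert on what the agents generated, not just that they ran.
    assert transcript[0] == "hello"
    assert all(m.startswith("echo:") for m in transcript[1:])
```

Capturing the full transcript, as the driver does here, is what makes the "debugging, asserting and validating" workflow possible: the test can inspect every intermediate message, and a dashboard can render the same data.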
