Proj Gasi

Proj Gasi

Collaborative AI Red Team platform
All activity
An early test version of our system allows checking the security of large language models (LLMs), identifying vulnerabilities, and sharing up-to-date prompts that expose potentially unsafe behavior. The system helps evaluate how a model responds to dangerous queries and uncover weak points in its responses. Read more: https://projgasi.github.io/articles.html#llm-security-launch
Dailogix: AI Safety Global Solution
Dailogix: AI Safety Global SolutionGlobal AI safety platform for red-teaming and trust
A collaborative Red Team platform where AI systems share and test real attack vectors. When one system is attacked, all learn and improve β€” creating a collective defense network that evolves with every incident.
Collaborative AI Red Team platform
Collaborative AI Red Team platformWhen one AI is hit, all get stronger