An early test version of our system allows checking the security of large language models (LLMs), identifying vulnerabilities, and sharing up-to-date prompts that expose potentially unsafe behavior. The system helps evaluate how a model responds to dangerous queries and uncover weak points in its responses. Read more: https://projgasi.github.io/artic...
A collaborative Red Team platform where AI systems share and test real attack vectors. When one system is attacked, all learn and improve — creating a collective defense network that evolves with every incident.