All activity
An early test version of our system allows checking the security of large language models (LLMs), identifying vulnerabilities, and sharing up-to-date prompts that expose potentially unsafe behavior. The system helps evaluate how a model responds to dangerous queries and uncover weak points in its responses. Read more: https://projgasi.github.io/articles.html#llm-security-launch

Dailogix: AI Safety Global SolutionGlobal AI safety platform for red-teaming and trust
Proj Gasileft a comment
At this stage, the system does not include the client-side component that will later automatically detect suspicious or harmful prompts and responses, and block them when necessary. Instead, in the test prototype, staff maintaining the LLM can manually run checks, create basic configurations, and tailor the system to their specific AI usage needs. The system evaluates how dangerous a prompt is...

Dailogix: AI Safety Global SolutionGlobal AI safety platform for red-teaming and trust
Proj Gasileft a comment
Thanks everyone checking it out today! We've just started building the platform and are focusing on early testing and community collaboration. Any ideas or feedback on now to make AI protection more effective are super welcome!
Collaborative AI Red Team platformWhen one AI is hit, all get stronger
A collaborative Red Team platform where AI systems share and test real attack vectors. When one system is attacked, all learn and improve β creating a collective defense network that evolves with every incident.
Collaborative AI Red Team platformWhen one AI is hit, all get stronger
Proj Gasileft a comment
Hi Product Hunt! π Weβre excited to launch Project GASI, an early-stage collaborative Red Team platform for AI. Every attack helps all participating AI systems learn and improve, creating a shared defense network. Check our roadmap on the website https://projgasi.github.io/#roadmap Feedback, ideas, or support are welcome as we grow!
Collaborative AI Red Team platformWhen one AI is hit, all get stronger
