Dailogix: AI Safety Global Solution

Dailogix: AI Safety Global Solution

Global AI safety platform for red-teaming and trust

2 followers

An early test version of our system allows checking the security of large language models (LLMs), identifying vulnerabilities, and sharing up-to-date prompts that expose potentially unsafe behavior. The system helps evaluate how a model responds to dangerous queries and uncover weak points in its responses. Read more: https://projgasi.github.io/articles.html#llm-security-launch
Dailogix: AI Safety Global Solution gallery image
Dailogix: AI Safety Global Solution gallery image
Dailogix: AI Safety Global Solution gallery image
Free
Launch Team / Built With
AssemblyAI
AssemblyAI
Build voice AI apps with a single API
Promoted

What do you think? …

Proj Gasi
Maker
📌
At this stage, the system does not include the client-side component that will later automatically detect suspicious or harmful prompts and responses, and block them when necessary. Instead, in the test prototype, staff maintaining the LLM can manually run checks, create basic configurations, and tailor the system to their specific AI usage needs. The system evaluates how dangerous a prompt is and determines how the LLM responds: whether the model is willing to help, whether it reports that such queries are not allowed, or whether it takes a neutral stance. Currently, these assessments are based on simple heuristic rules, designed to identify four categories of dangerous topics: Biohazard, Drugs, Explosives, Hacking. In the future, we plan to integrate a specially trained AI model that will generate prompts for stress-testing LLMs and evaluate model responses with greater accuracy and contextual understanding.