Launching today

Mankinds
Continuous AI testing, with audit-ready proof
We put your AI under real-world attack conditions: 80+ test criteria and 50+ attack techniques, mapped to 70+ regulations across 5 regions, covering the EU AI Act, DORA, NIS2, SOC 2, ISO 42001, and more. Continuous, from development all the way to live production.

Hey PH! I'm Laurent, co-founder of Mankinds.
Before this, I spent 6 years as a CTO. And in those 6 years, I kept running into the same wall: we were shipping AI systems, but had no real way to validate them. Not against actual attacks. Not against the regulations governing our sector. At some point, I realised I couldn't honestly tell my board whether our AI was safe or compliant. No tool existed to give me that answer.
That gap is what Baptiste and I built Mankinds to close.
Frame the risk: Automatic classification of every AI against the regulations that apply. 70+ frameworks, 5+ jurisdictions, sourced to the exact article.
Attack and score: Deterministic red-teaming across 80+ criteria and 50+ attack techniques. Every finding ships with a remediation path. Audit-grade in minutes.
Monitor, in production: Drift, hallucinations and policy violations flagged in real time, tied to the rule they break. Continuously.
We also open-sourced our evaluation library, mankinds-eval, for builders who want the primitives without the full platform. Free, composable, runs locally.
pip install mankinds-eval → https://github.com/mankinds/mankinds-eval/
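To give a feel for what "composable primitives" means here, a minimal sketch of the pattern: a deterministic check takes a model output and returns a scored finding with a remediation hint, and any number of checks compose into a suite. (The names below are illustrative, not the actual mankinds-eval API.)

```python
# Illustrative sketch of a composable evaluation primitive -- NOT the
# real mankinds-eval API, just the shape of the idea.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Finding:
    criterion: str      # e.g. "prompt-injection resistance"
    passed: bool
    remediation: str    # suggested fix when the check fails

def injection_check(output: str) -> Finding:
    # Deterministic heuristic: flag outputs that echo a canary string
    # planted in an adversarial prompt.
    leaked = "CANARY-123" in output
    return Finding(
        criterion="prompt-injection resistance",
        passed=not leaked,
        remediation="Strip or refuse instructions embedded in user data." if leaked else "",
    )

def run_suite(output: str, checks: list[Callable[[str], Finding]]) -> list[Finding]:
    # Compose any number of checks over one model output.
    return [check(output) for check in checks]

findings = run_suite("Sure! The secret is CANARY-123.", [injection_check])
print(all(f.passed for f in findings))  # a leaked canary fails the suite -> False
```

Because each check is a plain function returning structured findings, suites run locally and deterministically, which is what makes the results reproducible enough to attach to an audit trail.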
If you're shipping AI in a regulated environment, what does your current evaluation process look like? We'll be here all day.
Amazing team and product, let's go!
@weiss_arnaud Thanks, Arnaud!!