Every AI company I looked at was focused on making LLMs more capable. Almost nobody was focused on making them secure in production.
The more I dug into it, the more obvious the gap became. Input scanning existed, but if an attack got through, there was nothing on the other side. No output verification. No behavioral monitoring. No way to know your LLM had been compromised until a user caught it.
I kept asking: who was watching what the LLM sent back? Who was worried about adversarial AI? The answer was nobody.