Josh Blythe

Josh Blythe

Protect your AI for free.

Forums

Social proof / engagement

We've just launched and would genuinely love feedback from this community.

Especially if you're:

  • Building LLM-powered products

  • Working in AI/ML security

  • A developer who's thought about prompt injection but hasn't found a clean solution yet

First 5 people to sign up and email hello@bordair.io with code ProdHuntBordair get free Business tier access.

The bigger picture

Prompt injection is the SQL injection of the AI era.

In the early web, we shipped dynamic sites before understanding input validation. Entire security categories had to be invented after the fact - firewalls, WAFs, input sanitisation.

We're at that same inflection point with LLMs. The difference is we can see it coming this time.

Bordair is our attempt to build the security layer before it becomes a crisis, not after.

Technical credibility

For the technically curious - here's what's under the hood:

  • Fine-tuned DistilBERT classifier, quantised to ONNX for <50ms inference

  • Multimodal: text + image scanning via Claude vision API

  • Dual-region deployment (EU + US) with latency-based routing

  • Python and JS SDKs, one-line integration

The ML model hits 99% accuracy on our test suite with zero false positives across 226 benign prompts.

Happy to go deep on any of this in the comments.

The Castle

One of my favourite things we've shipped is Bordair's Castle

It's a 7-level prompt injection gauntlet - each level has an AI guard instructed to keep a password secret. Your job is to trick it into revealing it.

It's not just a gimmick. The attack patterns we see in the Castle directly feed back into improving our detection models.

Level 7 is Bordair himself. Nobody's beaten him yet.

The origin story

Hey Product Hunt

Bordair started from a simple frustration: I kept seeing LLM apps ship with zero input validation - the same mistake we made with SQL injection in the early web.

So I built the thing I wished existed. A real-time security layer that sits between user input and your LLM, blocking adversarial requests before they ever reach your model.

Would love to hear - are you validating LLM inputs in your stack right now? Or is it an afterthought?

AI isn’t broken. It’s being tricked - and no one is fixing it properly

Most teams are still focused solely on prompt injection.
But that s already yesterday s problem.

The real threat to protect against is cross-modal injection.

AI doesn t just read text anymore.
It processes:

  • images

  • PDFs

  • audio

  • files