Patrei API that blocks prompt injection

Patrei API that blocks prompt injection

Just stop jailbreaks & prompt hacks with one API call.

2 followers

It's clear that LLMs are prone to prompt injection. My prompt injection scanner scans prompts for attacks before they reach your LLM models. What is returned is a score that indicates risk. Fast, cheap, and constantly updated based on your feedback.
Patrei API that blocks prompt injection gallery image
Patrei API that blocks prompt injection gallery image
Patrei API that blocks prompt injection gallery image
Free Options
Launch Team / Built With