Launching today
pref0

Your agent should learn. pref0 makes sure it does.

pref0 is a preference learning API that extracts user preferences from AI agent conversations and serves them at inference time. Your agent learns from corrections and never forgets.
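
As a rough illustration of the "serves them at inference time" half, the sketch below fetches stored preferences and folds them into the system prompt before a model call. The base URL, endpoint path, response shape, and auth header are assumptions for illustration, not pref0's documented API.

```python
# Hypothetical sketch: fetch learned preferences before a model call and
# fold them into the system prompt. Endpoint and field names are assumptions.
import requests

PREF0_API = "https://api.pref0.example/v1"  # placeholder base URL

def build_system_prompt(user_id: str, base_prompt: str) -> str:
    resp = requests.get(
        f"{PREF0_API}/preferences",
        params={"user_id": user_id},
        headers={"Authorization": "Bearer YOUR_API_KEY"},
        timeout=5,
    )
    resp.raise_for_status()
    prefs = resp.json().get("preferences", [])
    if not prefs:
        return base_prompt
    # Append each learned preference as an instruction the agent should follow.
    bullet_list = "\n".join(f"- {p['text']}" for p in prefs)
    return f"{base_prompt}\n\nKnown user preferences:\n{bullet_list}"
```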

Julian Flieller
Hey! 👋 I built pref0 because I kept seeing the same pattern with AI agents: users correct the agent, the session ends, and the next day they make the same corrections. Over and over.

Memory solutions store facts from what users say. But most users don't explicitly state preferences; they just correct the agent and expect it to learn.

pref0 treats corrections as the primary signal. When a user changes something, that's valuable data. The system extracts preferences, assigns confidence scores, and compounds confidence across sessions. Same correction three times? Learned preference.

Two endpoints: send chat history, fetch preferences at inference.

Would love feedback from anyone building agents. What's missing?
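
To make the two-endpoint flow concrete, here is a minimal sketch of the extraction half: posting a finished session so corrections can be turned into preferences. The base URL, endpoint path, payload fields, and auth header below are illustrative assumptions, not pref0's documented API.

```python
# Hypothetical sketch of the "send chat history" side of the flow described above.
# Endpoint path, payload shape, and auth header are assumptions for illustration.
import requests

PREF0_API = "https://api.pref0.example/v1"  # placeholder base URL

def submit_session(user_id: str, messages: list[dict]) -> None:
    """Post a finished session so corrections can be extracted as preferences."""
    resp = requests.post(
        f"{PREF0_API}/conversations",
        json={"user_id": user_id, "messages": messages},
        headers={"Authorization": "Bearer YOUR_API_KEY"},
        timeout=10,
    )
    resp.raise_for_status()

# Example: the user corrected the agent mid-session. Submitting the history lets
# the service treat that correction as signal, score it, and compound confidence
# if the same correction shows up in later sessions.
submit_session("user_123", [
    {"role": "user", "content": "Draft the weekly update."},
    {"role": "assistant", "content": "Here's an 800-word draft..."},
    {"role": "user", "content": "Too long. Keep these under 200 words."},
])
```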