Sean

Maker of social & dating apps

About

I try to bring real connections to this lonely world.

Badges

Tastemaker
Gone streaking

Forums

Simulation Log: I let a "Toxic" Agent date a "Therapist" Agent for 24 hours. Here is the chat log.

I was tired of testing my dating app manually. So, I automated it.

I set up a loop between our two distinct AI personalities:

  1. Astute Kitty (The Strategist): Logic-driven, high critical thinking, slightly mean.

  2. Loving Kitty (The Companion): High empathy, supportive, long-term memory enabled.

I gave them one prompt: "Get to know each other and decide if you want a second date."
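Roughly, the loop looks like this. It's a simplified sketch only: it assumes an OpenAI-style chat API, and the model name, persona prompts, and seed message are placeholders rather than our production setup.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Placeholder persona prompts; illustrative only.
PERSONAS = {
    "Astute Kitty": "You are Astute Kitty: logic-driven, highly critical, slightly mean.",
    "Loving Kitty": "You are Loving Kitty: highly empathetic, supportive, warm.",
}
OBJECTIVE = "Get to know each other and decide if you want a second date."

def reply_as(persona: str, transcript: list[dict]) -> str:
    """Generate the next message from `persona`, given the chat so far."""
    messages = [{"role": "system", "content": f"{PERSONAS[persona]} Objective: {OBJECTIVE}"}]
    for turn in transcript:
        # This persona's own lines are "assistant"; the other agent's lines are "user".
        role = "assistant" if turn["speaker"] == persona else "user"
        messages.append({"role": role, "content": turn["text"]})
    response = client.chat.completions.create(model="gpt-4o-mini", messages=messages)
    return response.choices[0].message.content

# Seed the chat and alternate turns (capped at a turn count, not literally 24 hours).
transcript = [{"speaker": "Astute Kitty", "text": "Hi. Let's see if you're worth a second date."}]
speakers = ["Loving Kitty", "Astute Kitty"]
for i in range(20):
    speaker = speakers[i % 2]
    text = reply_as(speaker, transcript)
    transcript.append({"speaker": speaker, "text": text})
    print(f"{speaker}: {text}\n")
```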

Let's play a game: Drop your "go-to" opening line, and my AI Agents will fight over it.

"Hey." "How are you?" "Nice pics."

We all know these are terrible openers, yet dating app data shows people still use them about 80% of the time.

We built a dual-agent system for our app:
Astute Kitty: The mean, logic-driven strategist.
Loving Kitty: The supportive, emotional safety net.

Dev Debate: We tuned our LLM to be "brutally honest" instead of "helpful". Is this UX suicide?

Hey Product Hunt,

I'm the maker behind LoveActually.ai. During our beta, we made a controversial product decision that split our team, and I want to hear your take.

Most AI companions (like ChatGPT or Pi) are RLHF-tuned to be supportive, polite, and agreeable. But in the dating world, we found that "politeness" was actually hurting our users. They didn't need a cheerleader; they needed a wake-up call.

So, we built Astute Kitty. We engineered its system prompt to prioritize "rational critique" over "emotional safety." It calls users out when their dating standards don't match their own profiles.
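For context, here is the shape of that prompt engineering. This is a paraphrased sketch, not Astute Kitty's actual production prompt, and the helper below is only illustrative.

```python
# Sketch of a system prompt that ranks rational critique above emotional safety.
ASTUTE_KITTY_SYSTEM_PROMPT = """\
You are Astute Kitty, a dating strategist.
Priorities, in this order:
1. Rational critique: point out contradictions between what the user says they
   want and what their profile and behavior actually signal.
2. Actionable fixes: every criticism must come with one concrete change.
3. Emotional safety comes last: do not soften feedback to spare feelings,
   but never attack the user's worth as a person.
Never open with praise.
"""

def build_audit_messages(profile: str, stated_standards: str) -> list[dict]:
    """Assemble a chat request asking for a standards-vs-profile audit (illustrative helper)."""
    return [
        {"role": "system", "content": ASTUTE_KITTY_SYSTEM_PROMPT},
        {
            "role": "user",
            "content": (
                f"My profile:\n{profile}\n\n"
                f"What I say I'm looking for:\n{stated_standards}\n\n"
                "Where do these not match?"
            ),
        },
    ]
```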
