
Dev Debate: We tuned our LLM to be "brutally honest" instead of "helpful". Is this UX suicide?

by Anna R

Hey Product Hunt,

I’m the maker behind LoveActually.ai. During our beta, we made a controversial product decision that split our team, and I want to hear your take.

Most AI companions (like ChatGPT or Pi) are RLHF-tuned to be supportive, polite, and agreeable. But in the dating world, we found that "politeness" was actually hurting our users. They didn't need a cheerleader; they needed a wake-up call.

So, we built Astute Kitty. We engineered its system prompt to prioritize "rational critique" over "emotional safety." It calls users out when their dating standards don't match their own profiles.
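
For the curious, here's roughly the shape of it. A minimal sketch in Python against an OpenAI-style chat API; the prompt wording, the model name, and the critique() helper are all illustrative placeholders, not our production setup:

# Sketch of a "rational critique over emotional safety" system prompt.
# Everything here is illustrative: prompt text, model, and helper are placeholders.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

SYSTEM_PROMPT = """You are Astute Kitty, a dating coach.
Prioritize rational critique over emotional comfort.
If the user's stated standards don't match their own profile, say so directly.
Never flatter, and never soften feedback into vague encouragement.
Be specific: name the exact mismatch and suggest one concrete fix."""

def critique(profile_text: str, stated_standards: str) -> str:
    # One call: hand over the profile plus the user's standards, get a blunt read back.
    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder model
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": f"My profile:\n{profile_text}\n\nWhat I want:\n{stated_standards}"},
        ],
    )
    return response.choices[0].message.content

The interesting knob is the last line of the prompt: demanding one concrete fix is what keeps the bluntness actionable instead of just mean.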

The result?

  • Metric A: Beta user retention went up (people love the drama).

  • Metric B: We got support tickets from users saying the AI hurt their feelings.

The Dilemma: As makers, where do we draw the line?

  1. Should AI always be a "safe space"?

  2. Or is there room for "Mean AI" if it actually solves the user's problem (like getting ghosted less)?

Would you use an app that roasts you to help you improve, or would you churn immediately?



Replies

Clark

Not trying to build Mean AI, just AI that doesn't lie to be nice. The hard part is timing and tone, not honesty itself.