Steal our "Personality Stack": How we engineered an AI to be intentionally "Mean" vs. "Kind"

by Anna R
Most AI wrappers feel the same because everyone uses the same "You are a helpful assistant" system prompt.

For our project (LoveActually), we needed two extreme opposites:

Astute Kitty: High logic, critical, concise (The Strategist).

Loving Kitty: High empathy, verbose, memory-retentive (The Supporter).


We found that simply saying "Be mean" doesn't work—the model fights its RLHF safety rails.

Here is the framework we used to break the "Helpful Bot" syndrome:

For the "Mean" Agent: We force a "Logic First, Empathy Last" output structure. We instruct the model to analyze the user's input for contradictions specifically before generating a reply.


For the "Kind" Agent: We use a RAG (Retrieval-Augmented Generation) layer that specifically pulls "Vulnerability Tags" from past conversations (e.g., "User felt sad about X last week") to force continuity.

The Takeaway: If your AI Agent feels boring, stop tweaking the temperature. Start tweaking the constraints.

I’m happy to answer questions about:

  • Managing context windows for "Long-term Memory".

  • Balancing "Roast" levels so users don't churn.

Let's talk prompts. 👇
