Imed Radhouani

Co-founder and CTO - Rankfender


You're a product builder. Should you also be a writer?

You're building a product.
Your focus is code, features, user experience.
Not meta descriptions.
Not FAQ schema.
Not internal linking.

But content still needs to get done. Docs, landing pages, blog posts, metadata. And if you ignore it, nobody finds your product.

So you have a choice. Spend hours on content yourself. Hire someone who doesn't understand your product. Or let an OS handle it.

We're building ROSE (Rankfender Fullstack Optimization Engine) as a Git-based library. An SDK you install directly into your repo. It runs on every commit. Checks your metadata. Validates your heading structure. Suggests internal links. Even auto-fixes the small stuff.
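For a feel of what a commit-time check like that involves, here's a minimal sketch (our illustration only, not the actual ROSE SDK; it uses just the stdlib HTML parser and checks three of the basics mentioned above):

```python
from html.parser import HTMLParser

class SeoAudit(HTMLParser):
    """Collects the tags a commit-time check would inspect."""
    def __init__(self):
        super().__init__()
        self.title_parts, self.h1_count, self.has_meta_desc = [], 0, False
        self._in_title = False

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "title":
            self._in_title = True
        elif tag == "h1":
            self.h1_count += 1
        elif tag == "meta" and attrs.get("name") == "description":
            self.has_meta_desc = bool(attrs.get("content"))

    def handle_endtag(self, tag):
        if tag == "title":
            self._in_title = False

    def handle_data(self, data):
        if self._in_title:
            self.title_parts.append(data)

def audit(html: str) -> list[str]:
    """Return a list of metadata problems; empty means the page passes."""
    p = SeoAudit()
    p.feed(html)
    issues = []
    title = "".join(p.title_parts).strip()
    if not 10 <= len(title) <= 60:
        issues.append(f"title length {len(title)} outside 10-60 chars")
    if not p.has_meta_desc:
        issues.append("missing meta description")
    if p.h1_count != 1:
        issues.append(f"expected exactly one <h1>, found {p.h1_count}")
    return issues
```

Wired into a pre-commit hook, a non-empty list would block the commit; the thresholds here are illustrative defaults.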

Rankfender · p/rankfender · Imed Radhouani · 10d ago

We spent 6 months building for enterprise. Nobody bought it.

We thought we were ready.

Bigger deals. Fewer customers. Better margins. That was the dream.

So we built enterprise features. SSO. Advanced permissions. Audit logs. A whole new pricing tier starting at $2,000/month.

We spent 6 months. Three engineers. One dedicated product manager. Endless meetings about "enterprise readiness."

What's something you measured that completely changed how you build product?

For months, we were building features based on what users said they wanted. Feature requests.
Sales calls. "It would be great if you added X."

We built X. Nobody used it.

So we stopped trusting what people said and started tracking what they actually did.

The dataset

What's the one SEO myth you believed for way too long?

I'll start.

I believed that "keyword density" mattered. I spent hours making sure our target keyword appeared exactly 3-4 times per 500 words. I used tools that highlighted which words were "under-optimized." I even rewrote paragraphs to squeeze in one more mention.
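For context, this is roughly the metric those old tools computed (a toy version; "3-4 times per 500 words" works out to about 0.6-0.8 occurrences per 100 words):

```python
import re

def keyword_density(text: str, keyword: str) -> float:
    """Occurrences of `keyword` per 100 words -- the number the old tools chased."""
    words = re.findall(r"[a-z']+", text.lower())
    hits = len(re.findall(re.escape(keyword.lower()), text.lower()))
    return 100 * hits / max(len(words), 1)
```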

Turns out that hasn't been a real ranking factor for over a decade. Google's RankBrain (2015) and BERT (2019) made keyword density obsolete. These models understand context, synonyms, and user intent. They don't need you to say "best CRM for small business" five times. They know that "top CRM for startups" means the same thing.

What actually matters is topic coverage. Does your page answer the question completely? Do you cover related subtopics that a user would expect to see? Do you use natural language that matches how people actually ask questions?

Google isn't anti-AI. It's anti-AI slop.

Everyone is panicking about the March 2026 Core Update.
It started rolling out on March 27 and will take up to two weeks to complete.
The spam update hit just three days earlier and finished in 19.5 hours, the fastest spam update on record.

But here's what the data actually says.

JetDigitalPro analyzed 600,000 web pages across the update period. The correlation between AI usage and ranking penalties was 0.011, effectively zero. Google isn't penalizing AI content. It's penalizing low-value content that happens to be AI-generated.

Websites relying on mass-produced AI output without human oversight saw traffic drops of 60-80%. Affiliate sites were hit hardest: 71% saw negative impacts.

SEO used to be human-driven. GEO is model-driven. Do humans still matter?

For 20 years, SEO was a human game.
You wrote for people, optimized for Google's crawlers, and built backlinks by convincing other humans to link to you.
The inputs were human. The outputs were human.

GEO is different. You're optimizing for language models that extract and synthesize. The inputs are structured data, schema markup, comparison tables. The outputs are citations, not clicks.

So where does the human fit now?

What the data says about AI's performance:

We asked 5 AI models the same 1,000 questions. How often do you think they agreed?

We built a model to generate 1,000 questions that people actually ask.
Not random prompts.
We scraped 50,000 real user queries from search logs, forum threads, and support tickets across 12 industries.
We clustered them by intent and generated 1,000 representative questions.

We asked those same 1,000 questions to 5 AI models: ChatGPT (GPT-4), Gemini (Ultra), Perplexity (Pro), Claude (4.5 Sonnet), and Llama (3).
We ran the experiment daily for 30 days. We tracked every citation at the source level.

The goal: measure citation overlap.
How often do these models cite the same source for the same question?
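One natural way to score that overlap is pairwise Jaccard similarity over the sets of sources each model cited for a question (our illustration; the post doesn't specify the exact formula used):

```python
from itertools import combinations

def jaccard(a: set[str], b: set[str]) -> float:
    """Intersection over union of two citation sets."""
    return len(a & b) / len(a | b) if a | b else 1.0

def mean_citation_overlap(citations: dict[str, set[str]]) -> float:
    """Average pairwise Jaccard overlap for one question.
    `citations` maps model name -> set of cited domains."""
    pairs = list(combinations(citations.values(), 2))
    return sum(jaccard(a, b) for a, b in pairs) / len(pairs)

# Hypothetical single-question example with three models:
example = {
    "chatgpt": {"a.com", "b.com"},
    "gemini": {"a.com", "c.com"},
    "claude": {"a.com", "b.com"},
}
```

Averaging this score across all 1,000 questions and 30 days gives a single overlap figure per model pair.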

The dataset:

Rankfender · p/rankfender · Imed Radhouani · 17d ago

We gave AI our entire competitor tracking data and asked it to predict who would beat us.

Six months ago, we ran an experiment with our own data.

At Rankfender, we track 5 of our own competitors across 8 AI systems. We log their share of voice, citation velocity, content gaps, and platform variance. Months of raw numbers sitting in a dashboard.
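"Share of voice" can be read as each brand's fraction of all citations observed across AI answers. A minimal sketch, assuming a simple log shape (one list of cited brands per answer; field names are ours, not Rankfender's schema):

```python
from collections import Counter

def share_of_voice(citation_log: list[list[str]]) -> dict[str, float]:
    """citation_log holds one list of cited brands per AI answer.
    Returns each brand's share of all citations observed."""
    counts = Counter(brand for answer in citation_log for brand in answer)
    total = sum(counts.values())
    return {brand: n / total for brand, n in counts.items()}
```

Tracked over time, the month-over-month change in this number is one way to express "citation velocity."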

I pulled 6 months of data and fed it into Claude. One question: "Based on this, who is most likely to overtake us in the next 6 months? Show your work. Use the data. Don't summarize. Give me the numbers."

The answer changed how I think about competition.

Product Hunt · p/producthunt · Mike Kerzhner · 18d ago

Vote selling on Product Hunt

Every day, after launching, makers are contacted on LinkedIn and X by people offering to sell votes. As the Product Hunt team, we are very much aware of this and really hate it. We have systems in place to neutralize this type of gaming. Every vote counts for a different number of points on Product Hunt. A couple examples:

  • An account with a recently created gmail address and no history of quality contributions on Product Hunt: this vote will count for 0 points. Yes, this might be a well intentioned user, but we take a conservative approach to protect the community. If the account has a company email or applies for verification on Product Hunt, that's a different story.

  • An account with a company email address linked to a legitimate LinkedIn account with a history of meaningful contributions on Product Hunt: this vote carries significant weight.
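Product Hunt doesn't publish its real formula, but a toy function illustrates the shape of the policy described in the two examples above (every name and number here is hypothetical):

```python
def vote_weight(has_company_email: bool, is_verified: bool,
                quality_contributions: int, account_age_days: int) -> float:
    """Toy weighting in the spirit of the policy: brand-new accounts with no
    history count for nothing; established, verified accounts count fully."""
    brand_new = account_age_days < 7 and quality_contributions == 0
    if brand_new and not (has_company_email or is_verified):
        return 0.0  # conservative default for throwaway accounts
    weight = 0.2  # baseline for any account with some signal
    if has_company_email or is_verified:
        weight += 0.4
    weight += min(quality_contributions, 20) * 0.02  # history caps at +0.4
    return round(weight, 2)
```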

A couple questions for the community:

  • Are there specific accounts on Product Hunt that you suspect participate in vote selling? You can reply here or email report@producthunt.co

  • What would you want to see us do differently here?

We gave AI our entire product roadmap and asked it to predict our failure points. It was brutal.

We ran an experiment 2 weeks ago.

Control group: a two-hour roadmap review meeting. Six people in a room (virtual). We debated features. We argued about timelines. We discussed dependencies. We left feeling productive.

Test group: We fed the same roadmap into Claude. No slides. No politics. No one trying to protect their pet project. Just the raw plan. The prompt: "Analyze this roadmap. Identify the three most likely failure points. Use first principles reasoning. Assume we will follow your recommendations without ego. If you need more data, ask for it."

The results were not symmetrical.

What's something AI is actually terrible at that nobody talks about?

I'll take the hit.

AI has no idea when someone is politely furious.

You know the email. "Hi team, just circling back on this again as I haven't heard anything. Thanks for your attention to this matter." Reads like a sweet grandma wrote it.

A human reads that and thinks "oh no, they are about to burn the building down." AI reads it and thinks "great sentiment, very positive, 98% satisfaction score."
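A toy lexicon scorer shows why: every individual word in that email is polite, so a bag-of-words model scores it positive. The fury lives in the pragmatics ("again", "haven't heard anything"), not the vocabulary. (Tiny illustrative lexicon; real sentiment models are larger but share the blind spot.)

```python
POLARITY = {"thanks": 1, "great": 1, "attention": 1, "hi": 1,
            "hate": -1, "angry": -1, "terrible": -1}  # toy lexicon

def naive_sentiment(text: str) -> int:
    """Bag-of-words polarity sum -- blind to passive aggression."""
    return sum(POLARITY.get(w.strip(".,!?").lower(), 0) for w in text.split())

email = ("Hi team, just circling back on this again as I haven't heard anything. "
         "Thanks for your attention to this matter.")
```

The politely furious email scores positive; nothing in the word list registers the escalation.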

Help us not build the wrong thing (4 upcoming features)

Hey PH Community!
We've been heads-down building. Four new things are in the works. I want to know which one matters most to you.

RASE v1.0: App Store Intelligence

Tracks how your mobile app appears in AI answers (ChatGPT, Perplexity) and in store search. If you build apps, this tells you where you're visible and where you're invisible.

What's something you built that you thought was genius and nobody used?

Three months. Two developers. One feature nobody used.

I knew it was bad when I checked the analytics and saw that the only person who used it more than once was me. And even I stopped after the second week.

Here's how I knew it was a waste of time. Not in hindsight. In the moment. I just ignored the signs.

The first sign: I couldn't explain it in one sentence.

Product Hunt · p/producthunt · Gabe Perez · 22d ago

Introducing Randomized Leaderboard Day on Product Hunt!

If you're launching today, the leaderboard is about to get a lot more interesting.

We are running a Randomized Day to give products launching more of an opportunity to get seen!

The Mechanics

To level the playing field, we are cycling the homepage layout throughout the day.

The Loop: this cycle repeats every 30 minutes, all day long.

Rankfender · p/rankfender · Imed Radhouani · 21d ago

We let Claude write 100% of our code for 7 days. Here's what broke first.

Last week we did something stupid.

We paused all human coding. Gave Claude (Anthropic) access to our GitHub repo. Told it to build new features, fix bugs, and ship.

No human review. No guardrails. Just Claude and our codebase.

For 7 days, it ran the engineering team.


We're launching RCGE v2.2 soon. Help us not build something you'll hate.

We're enhancing Rankfender's Content Generation Engine (RCGE) and v2.2 is coming in the next few weeks. Before we lock things in, we want to know what actually matters to people who use content generation tools.

Here's what RCGE already does:

  • Intelligence. It analyzes the top 10 ranking articles for any keyword and identifies patterns. What structure do they use? What headers? What formatting? What makes them get cited by AI? Then it builds a brief based on what actually works, not guesswork.

  • Structure control. You can add, remove, and reorganize H2s before generation. No fixed templates. You decide the flow.

  • Inline images. Generated articles include images, not just text walls.

  • Regeneration. Mess up one paragraph? Regenerate just that part. Not the whole article.
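The "intelligence" step above (pulling heading patterns from top-ranking pages) can be sketched like this, assuming you already have the pages' HTML in hand (fetching and ranking are out of scope, and this is our illustration, not RCGE's implementation):

```python
from html.parser import HTMLParser

class HeadingOutline(HTMLParser):
    """Extracts the H1-H3 outline a brief-builder would compare across pages."""
    def __init__(self):
        super().__init__()
        self.outline, self._current = [], None

    def handle_starttag(self, tag, attrs):
        if tag in ("h1", "h2", "h3"):
            self._current = tag

    def handle_endtag(self, tag):
        if tag == self._current:
            self._current = None

    def handle_data(self, data):
        if self._current and data.strip():
            self.outline.append((self._current, data.strip()))

def outline(html: str) -> list[tuple[str, str]]:
    """Return the page's heading outline as (level, text) pairs."""
    p = HeadingOutline()
    p.feed(html)
    return p.outline
```

Run this over the top 10 pages for a keyword and the recurring H2s across outlines are the "patterns" a brief would be built from.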

What we're adding in v2.2:

What's the worst advice you've ever gotten about marketing your product?

I'll go first.

Someone told me: "Just be consistent. Post every day. The algorithm rewards consistency."

So I did.

For six months, I posted every single day. Sometimes at 7am. Sometimes at 10pm. Weekends included. I wrote about our product, our features, our roadmap. I followed all the "best practices": hook in the first line, three takeaways, call to action at the end.

When does AI content cross the line from helpful to spammy?

We spent the last 4 months tracking 473 pieces of AI-generated content across our own site and customer sites. 218 got cited by ChatGPT or Perplexity. 255 got ignored. 12 got flagged in reader feedback as "low quality" or "clearly AI."

We wanted to understand what separates the ones that work from the ones that don't. Here's what the data showed.

The content that got cited

Three things stood out.

We Asked 3,000 B2B Buyers How They Use AI to Pick Vendors. Here's What They Told Us.

We surveyed 3,000 B2B buyers who used AI assistants (ChatGPT, Perplexity, Gemini) in their last vendor selection process.

The goal was simple: find out exactly what they asked and what actually influenced their shortlist.

The results reveal a massive gap between how companies market themselves and how buyers actually discover them through AI.