You're building a product. Your focus is code, features, user experience. Not meta descriptions. Not FAQ schema. Not internal linking.
But content still needs to get done. Docs, landing pages, blog posts, metadata. And if you ignore it, nobody finds your product.
So you have a choice. Spend hours on content yourself. Hire someone who doesn't understand your product. Or let an SDK handle it.
We're building ROSE (Rankfender Fullstack Optimization Engine) as a Git-based library: an SDK you install directly into your repo. It runs on every commit. Checks your metadata. Validates your heading structure. Suggests internal links. Even auto-fixes the small stuff.
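To make the heading check concrete, here is a minimal sketch of what validating heading structure on commit might look like, assuming Markdown content. The function name and behavior are illustrative, not the actual ROSE API:

```python
import re

def check_heading_structure(markdown: str) -> list[str]:
    """Flag heading-level jumps, e.g. an H1 followed directly by an H3."""
    issues = []
    prev_level = 0
    for line in markdown.splitlines():
        match = re.match(r"^(#{1,6})\s", line)
        if not match:
            continue
        level = len(match.group(1))
        if prev_level and level > prev_level + 1:
            issues.append(f"H{prev_level} followed by H{level}: {line.strip()}")
        prev_level = level
    return issues
```

Wired into a pre-commit hook, a non-empty return value would fail the commit and surface the offending lines.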
I believed that "keyword density" mattered. I spent hours making sure our target keyword appeared exactly 3-4 times per 500 words. I used tools that highlighted which words were "under-optimized." I even rewrote paragraphs to squeeze in one more mention.
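The metric itself is trivial, which should have been a clue. A sketch of the density calculation I was chasing, using naive whitespace tokenization and ignoring punctuation:

```python
def keyword_density(text: str, keyword: str) -> float:
    """Occurrences of a keyword phrase per 100 words of text."""
    words = text.lower().split()
    kw = keyword.lower().split()
    n = len(kw)
    # Slide a window over the text and count exact phrase matches.
    hits = sum(1 for i in range(len(words) - n + 1) if words[i:i + n] == kw)
    return 100 * hits / max(len(words), 1)
```

Three mentions per 500 words is a density of 0.6. That was the number I spent hours tuning.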
Turns out that hasn't been a real ranking factor for over a decade. Google's RankBrain (2015) and BERT (2019) made keyword density obsolete. These models understand context, synonyms, and user intent. They don't need you to say "best CRM for small business" five times. They know that "top CRM for startups" means the same thing.
What actually matters is topic coverage. Does your page answer the question completely? Do you cover related subtopics that a user would expect to see? Do you use natural language that matches how people actually ask questions?
Everyone is panicking about the March 2026 Core Update. It started rolling out on March 27 and will take up to two weeks to complete. The spam update hit just three days earlier and finished in 19.5 hours, the fastest spam update on record.
But here's what the data actually says.
JetDigitalPro analyzed 600,000 web pages across the update period. The correlation between AI usage and ranking penalties was 0.011, effectively zero. Google isn't penalizing AI content. It's penalizing low-value content that happens to be AI-generated.
Websites relying on mass-produced AI output without human oversight saw traffic drops of 60-80%. Affiliate sites were hit hardest: 71% saw negative impacts.
For 20 years, SEO was a human game. You wrote for people, optimized for Google's crawlers, and built backlinks by convincing other humans to link to you. The inputs were human. The outputs were human.
GEO is different. You're optimizing for language models that extract and synthesize. The inputs are structured data, schema markup, comparison tables. The outputs are citations, not clicks.
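"Structured data" here is concrete. FAQ schema, for instance, is a JSON-LD block that answer engines can lift verbatim. A minimal generator sketch, following the schema.org FAQPage shape (illustrative; not a Rankfender feature):

```python
import json

def faq_jsonld(pairs: list[tuple[str, str]]) -> str:
    """Build FAQPage JSON-LD from (question, answer) pairs per schema.org."""
    data = {
        "@context": "https://schema.org",
        "@type": "FAQPage",
        "mainEntity": [
            {
                "@type": "Question",
                "name": question,
                "acceptedAnswer": {"@type": "Answer", "text": answer},
            }
            for question, answer in pairs
        ],
    }
    return json.dumps(data, indent=2)
```

Embed the output in a `<script type="application/ld+json">` tag and the page's Q&A becomes machine-readable input instead of prose a model has to guess at.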
We built a model to generate 1,000 questions that people actually ask. Not random prompts. We scraped 50,000 real user queries from search logs, forum threads, and support tickets across 12 industries. We clustered them by intent and generated 1,000 representative questions.
We asked those same 1,000 questions to 5 AI models: ChatGPT (GPT-4), Gemini (Ultra), Perplexity (Pro), Claude (4.5 Sonnet), and Llama (3). We ran the experiment daily for 30 days. We tracked every citation at the source level.
The goal: measure citation overlap. How often do these models cite the same source for the same question?
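One straightforward way to score that is Jaccard similarity over the sets of domains each model cites for a given question. A sketch of such a metric (the actual pipeline may differ):

```python
def citation_overlap(cites_a: set[str], cites_b: set[str]) -> float:
    """Jaccard similarity between two models' cited sources for one question."""
    if not cites_a and not cites_b:
        return 0.0
    return len(cites_a & cites_b) / len(cites_a | cites_b)

def mean_overlap(per_question: list[tuple[set[str], set[str]]]) -> float:
    """Average pairwise overlap across all questions."""
    return sum(citation_overlap(a, b) for a, b in per_question) / len(per_question)
```

A score of 1.0 means two models cite exactly the same sources; 0.0 means no source in common.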
Six months ago, we ran an experiment with our own data.
At Rankfender, we tracked 5 of our competitors across 8 AI systems. We logged their share of voice, citation velocity, content gaps, and platform variance. Months of raw numbers sitting in a dashboard.
I pulled 6 months of data and fed it into Claude. One question: "Based on this, who is most likely to overtake us in the next 6 months? Show your work. Use the data. Don't summarize. Give me the numbers."
Every day, after launching, makers are contacted on LinkedIn and X by people offering to sell votes. As the Product Hunt team, we are very much aware of this and really hate it. We have systems in place to neutralize this type of gaming. Every vote counts for a different number of points on Product Hunt. A couple examples:
An account with a recently created Gmail address and no history of quality contributions on Product Hunt: this vote will count for 0 points. Yes, this might be a well-intentioned user, but we take a conservative approach to protect the community. If the account has a company email or applies for verification on Product Hunt, that's a different story.
An account with a company email address linked to a legitimate LinkedIn account with a history of meaningful contributions on Product Hunt: this vote carries significant weight.
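Read together, the two examples imply a weighting function along these lines. This is purely illustrative; Product Hunt's real signals and numbers are not public:

```python
def vote_weight(has_company_email: bool, verified: bool,
                account_age_days: int, quality_contributions: int) -> float:
    """Illustrative vote weight based on the account signals described above."""
    fresh_and_anonymous = (account_age_days < 30 and quality_contributions == 0
                           and not (has_company_email or verified))
    if fresh_and_anonymous:
        return 0.0  # new account, no history, no verification: counts for nothing
    weight = 1.0
    if has_company_email or verified:
        weight += 1.0  # identity signal
    weight += min(quality_contributions, 50) * 0.05  # contribution history, capped
    return weight
```

The point isn't the exact numbers. It's that identical-looking upvotes can carry very different weight, which is what makes bought votes worthless.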
A couple questions for the community:
Are there specific accounts on Product Hunt that you suspect participate in vote selling? You can reply here or email report@producthunt.co
What would you want to see us do differently here?
Control group: a two-hour roadmap review meeting. Six people in a room (virtual). We debated features. We argued about timelines. We discussed dependencies. We left feeling productive.
Test group: We fed the same roadmap into Claude. No slides. No politics. No one trying to protect their pet project. Just the raw plan. The prompt: "Analyze this roadmap. Identify the three most likely failure points. Use first principles reasoning. Assume we will follow your recommendations without ego. If you need more data, ask for it."
You know the email. "Hi team, just circling back on this again as I haven't heard anything. Thanks for your attention to this matter." Reads like a sweet grandma wrote it.
A human reads that and thinks "oh no, they are about to burn the building down." AI reads it and thinks "great sentiment, very positive, 98% satisfaction score."
Hey PH Community! We've been heads down building. Four new things in the works. I want to know which one matters most to you.
RASE v1.0: App Store Intelligence
Tracks how your mobile app appears in AI answers (ChatGPT, Perplexity) and in store search. If you build apps, this tells you where you're visible and where you're invisible.
Three months. Two developers. One feature nobody used.
I knew it was bad when I checked the analytics and saw that the only person who used it more than once was me. And even I stopped after the second week.
Here's how I knew it was a waste of time. Not in hindsight. In the moment. I just ignored the signs.
The first sign: I couldn't explain it in one sentence.
We're enhancing Rankfender's Content Generation Engine (RCGE) and v2.2 is coming in the next few weeks. Before we lock things in, we want to know what actually matters to people who use content generation tools.
Here's what RCGE already does:
Intelligence. It analyzes the top 10 ranking articles for any keyword and identifies patterns. What structure do they use? What headers? What formatting? What makes them get cited by AI? Then it builds a brief based on what actually works, not guesswork.
Structure control. You can add, remove, and reorganize H2s before generation. No fixed templates. You decide the flow.
Inline images. Generated articles include images, not just text walls.
Regeneration. Mess up one paragraph? Regenerate just that part. Not the whole article.
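The "patterns" step in the Intelligence feature can be pictured as frequency-mining section headers across the top-ranked pages. A toy sketch, not the RCGE implementation:

```python
from collections import Counter

def common_headers(articles: list[list[str]], min_share: float = 0.5) -> list[str]:
    """Return headers appearing in at least `min_share` of the top articles.

    `articles` is one list of H2 texts per top-ranked page.
    """
    counts = Counter()
    for headers in articles:
        counts.update({h.lower() for h in headers})  # count once per article
    threshold = min_share * len(articles)
    return [h for h, c in counts.most_common() if c >= threshold]
```

Headers that recur across most of the top 10 are strong candidates for the generated brief; one-off headers are noise.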
Someone told me: "Just be consistent. Post every day. The algorithm rewards consistency."
So I did.
For six months, I posted every single day. Sometimes at 7am. Sometimes at 10pm. Weekends included. I wrote about our product, our features, our roadmap. I followed all the "best practices": hook in the first line, three takeaways, call to action at the end.
We spent the last 4 months tracking 473 pieces of AI-generated content across our own site and customer sites. 218 got cited by ChatGPT or Perplexity. 255 got ignored. 12 got flagged in reader feedback as "low quality" or "clearly AI."
We wanted to understand what separates the ones that work from the ones that don't. Here's what the data showed.