I've been researching how autonomous agents (Claude, Google Jarvis) traverse websites, and I'm noticing a massive gap. Sites that score 100 on Google Lighthouse still break AI agent traversal because of DOM bloat and missing ARIA semantics.
We just launched a tool today to quantify this "Agentic Friction," but I'm curious what the community thinks: should front-end devs keep designing purely for human UI, or should they strip down their DOMs so AI agents can actually read and cite them? Where is the balance?
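To make "friction" concrete, here's the kind of crude check I mean. This is NOT our tool's actual audit, just a sketch: the auditAgenticFriction name and the thresholds are made up, and it only uses standard DOM APIs you can paste into a browser console.

```typescript
// Rough heuristic, run in a browser console. Large node counts plus missing
// landmarks / structured data are the signals I keep seeing on pages that
// agents fail to parse. Threshold values below are arbitrary examples.
function auditAgenticFriction(doc: Document = document) {
  const nodeCount = doc.querySelectorAll("*").length;
  const landmarks = doc.querySelectorAll(
    "main, nav, header, footer, article, [role]"
  ).length;
  const jsonLd = doc.querySelectorAll(
    'script[type="application/ld+json"]'
  ).length;

  return {
    nodeCount,   // DOM bloat: a huge tree is expensive for an agent to walk
    landmarks,   // semantic / ARIA structure the agent can anchor on
    jsonLd,      // machine-readable facts it can cite directly
    likelyFriction: nodeCount > 1500 && (landmarks === 0 || jsonLd === 0),
  };
}

console.log(auditAgenticFriction());
```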
I've been analyzing top agency and SaaS websites over the last few months, and the data is pretty staggering. Traditional SEO (optimizing for Google's passive crawler, building backlinks, keyword stuffing) feels completely disconnected from how users actually find software now: through generative engines.
We noticed that sites ranking #1 on traditional Google Search are often completely ignored by agents like OpenAI Operator or Claude because of "Agentic Friction": heavy DOM bloat, a lack of semantic tags, and missing JSON-LD.
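For anyone who hasn't touched JSON-LD: this is roughly what those pages are missing, a small machine-readable block an agent can quote without walking the whole DOM. Sketch only; the SoftwareApplication fields and values below are placeholders, not a recommended schema.

```typescript
// Placeholder schema.org markup injected at runtime. In practice you'd render
// it server-side, but the shape of the data is the point here.
const schema = {
  "@context": "https://schema.org",
  "@type": "SoftwareApplication",
  name: "ExampleApp",                        // placeholder product name
  applicationCategory: "BusinessApplication",
  description: "One-sentence summary an agent can cite verbatim.",
  offers: { "@type": "Offer", price: "0", priceCurrency: "USD" },
};

const tag = document.createElement("script");
tag.type = "application/ld+json";
tag.textContent = JSON.stringify(schema);
document.head.appendChild(tag);
```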
For founders launching today: Are you still investing in traditional SEO, or have you completely shifted your strategy to Generative Engine Optimization (GEO)?
(P.S. We just launched GEO Stellar today to actually measure this friction. Would love to know what frameworks you're all using to stay visible to AI!)
We simulated how autonomous AI agents navigate the websites of the top 100 US SEO agencies, and 83% had never configured robots.txt for AI crawlers. GEO Stellar runs the same audit on your site: it exposes why AI agents can't cite you and generates the exact code to fix it. Built by a cybersecurity researcher. Now in Alpha.
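If you want to spot-check your own site before trying the audit, a bare-bones version of that robots.txt check looks like this (Node 18+ for the global fetch; the user-agent list is the publicly documented AI crawlers, and the checkRobots helper is just illustrative, not our actual audit code):

```typescript
// Does a site's robots.txt mention the common AI crawler user agents at all?
// Absence doesn't block crawling; it just means the site never considered them.
const AI_AGENTS = ["GPTBot", "ClaudeBot", "Google-Extended", "PerplexityBot"];

async function checkRobots(site: string) {
  const res = await fetch(new URL("/robots.txt", site));
  const body = res.ok ? await res.text() : "";
  return AI_AGENTS.map((agent) => ({
    agent,
    mentioned: body.toLowerCase().includes(agent.toLowerCase()),
  }));
}

checkRobots("https://example.com").then(console.table);
```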
#Alpha #SEO #GEO #AISearch #AgenticAI #GenerativeEngineOptimization