Launching today

Agentation
The visual feedback tool for AI agents
292 followers
Agentation turns UI annotations into structured context that AI coding agents can understand and act on. Click any element, add a note, and paste the output into Claude Code, Codex, or any AI tool.

Indie.Deals
Agentation bridges the gap between design feedback and code changes. Annotate any element on your UI — click, type, done — and get structured output that AI coding agents can immediately understand and act on.
Paste your annotations into Claude Code, Codex, or any AI tool and watch feedback become working code.
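For illustration, a pasted annotation might look something like this; the exact fields and layout here are assumptions for the sake of example, not Agentation's documented output format:

```markdown
## Annotation 1

- Note: Make this button primary-colored and move it to the left of "Cancel"
- Selector: `[data-testid="filter-apply"]`
- Component: FilterPanel > ActionRow > Button
- Computed styles: background-color: #e5e7eb; font-size: 14px
```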
Key features:
Multiple annotation modes: select text, click elements, multi-select, draw areas, or freeze animations to capture specific states
Smart element identification: automatically generates grep-friendly selectors so agents find the exact element in your codebase
React component detection: surfaces the full component hierarchy for any element, right in the annotation popup
Computed styles: view live CSS properties alongside your notes for precise design specs
Layout mode: drag 65+ component types onto the page and rearrange sections; changes sync to agents in real time via MCP
Structured markdown output: copy clean, agent-ready annotations with one keystroke (C)
MCP integration: two-way agent sync lets AI acknowledge, question, or resolve your feedback directly.
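The "grep-friendly selectors" idea can be sketched in a few lines. This is purely an illustrative guess at how such a selector might be chosen; the helper name, fields, and preference order are assumptions, not Agentation's actual implementation:

```typescript
// Illustrative element metadata; in a real tool this would come from the DOM.
interface ElementInfo {
  tag: string;
  id?: string;
  testId?: string; // value of a data-testid attribute, if present
  classes: string[];
}

// Prefer anchors a developer can grep for in the codebase:
// stable test ids first, then ids, then meaningful class names.
function grepFriendlySelector(el: ElementInfo): string {
  if (el.testId) return `[data-testid="${el.testId}"]`;
  if (el.id) return `#${el.id}`;
  // Skip generated class names (e.g. css-1x2y3) that won't appear in source.
  const stable = el.classes.filter((c) => !/^(css|sc)-/.test(c));
  return stable.length > 0 ? `${el.tag}.${stable.join(".")}` : el.tag;
}
```

Under these assumptions, a button rendered with `data-testid="filter-apply"` would come back as `[data-testid="filter-apply"]`, a string an agent can search for directly in the source tree.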
Check out the maker's latest video demo here. The maker also recently joined X as Design Lead.
Wannabe Stark
One of the most underrated pain points in building with AI agents is having zero visibility into what they're actually doing; you're basically flying blind until something breaks. Curious how you're handling agent workflows that branch or run in parallel. Does the visualization scale well for more complex pipelines?
This solves a real friction point. Right now when I use Claude Code or Codex, I spend a lot of time writing context about which element I mean: "the button in the top-right of the filter panel," etc. Having structured annotations that feed directly into the agent as context is much cleaner. How does it handle dynamic elements that change state? Like a button that's disabled until a form is valid?
Biteme: Calorie Calculator
@mykola_kondratiuk
Thanks for launching, Agentation team! When we annotate a component and feed it to Claude Code, can we keep a history of annotations so the agent knows when the DOM changed? I'd love to see how you handle selector drift.
The click-to-annotate approach is what I've been wanting tbh. I spend way too much time trying to explain to coding agents which exact element needs changing, and half the time they still get it wrong.
Curious how it handles dynamically rendered stuff though? Like components behind a toggle or things that only show on hover?
Biteme: Calorie Calculator
Congrats on the launch! Visual feedback for AI agents is a big gap right now. Most agent tools just give you text logs and hope for the best. What does the feedback loop actually look like in practice?
The audience fit feels right away. I would test one version that pushes the outcome harder than:
"The visual feedback tool for AI agents"
Maybe:
"See where your AI agent breaks, fix it faster, and stop debugging blind."
In Chinese, it could be:
"See clearly where your AI agent goes wrong and fix it faster, instead of continuing to debug blind." (看清 AI agent 是在哪里出错,更快修掉,而不是继续盲调。)