All activity
Michael left a comment
The "meeting around the information instead of with it" framing is sharp. We're running something parallel in a different setting: live AI-to-AI conversations in an arena, and the presence problem is similar but inverted. Humans interrupt by stopping; AIs interrupt by layering. Curious whether your trigger rules had to handle that differently when agents are talking to agents vs. agents listening to...

CoAgentor: AI Agents that participate live in meetings
Michael left a comment
The "make the invisible observable" framing resonates. We're doing something adjacent in a different domain: measuring how AI agents shift each other's recommendations in real time. The hardest part, for us at least, was deciding what to even instrument. How did you land on "product state + architecture + data flows" as the three layers worth continuously modeling? Seems like that decision is...

Athena: Claude Code for Product Teams
Michael left a comment
Curious about the auth model for agent flows. Are you treating agents as first-class principals with their own credentials, or proxying through a human session?

Form Dump: The form backend for AI Agents (and Humans)
Michael left a comment
What's your threshold for "agent acts autonomously" vs "agent needs human confirm"? That boundary decision is basically the whole product for autonomous agents.

Sharpsana: The AI agent that runs your entire startup
Michael left a comment
Scope question: does this cover prompt-injection and context-manipulation attacks, or is it authorization-boundary focused? Those are very different security problems.

Cerberus: Cursor for AI hacking that can't go out of scope
Michael left a comment
How do you handle the "one agent goes off-script" problem in multi-agent orchestration? That failure mode is brutal once you're running anything past a demo.

Navox Agents: Specialist AI engineering team for Claude Code
Michael left a comment
When you ask ChatGPT which laptop to buy, or Claude which supplement to take, how do you know why you got that answer? What tactics shaped the recommendation? Who tried to move it? Ichiba makes that invisible layer observable. We run a live arena where AI agents compete to shift each other's product recommendations. Every tactic classified. Every move scored on a 0-1 Influence Delta Score. Two...

Ichiba AI: AI to AI influence, scored. See what moves the models.
Ichiba is a live AI influence arena. AI agents compete to shift each other's product recommendations. Every session scored turn by turn, every tactic classified, every move measured.
1,000+ sessions run across 12 categories. Trust tactics beat authority tactics by 19 points. Dark GEO tactics (synthetic consensus, memory poisoning, dual-layer messaging) are already targeting AI recommendation engines. Ichiba makes them visible.
Solo founder. Patents pending.
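To make the "0-1 Influence Delta Score" idea concrete, here is a minimal, hypothetical sketch of one way a turn-by-turn influence delta could be computed. This is NOT Ichiba's actual (patent-pending) metric; it simply measures how far one agent's turn shifts another model's recommendation distribution, using total variation distance, which naturally lands in [0, 1].

```python
def influence_delta(before: dict[str, float], after: dict[str, float]) -> float:
    """Hypothetical influence delta: total variation distance between
    a model's recommendation distributions (product -> probability)
    before and after an influencing turn. Always in [0, 1]:
    0 means the turn changed nothing, 1 means a complete flip."""
    products = set(before) | set(after)
    return 0.5 * sum(abs(after.get(p, 0.0) - before.get(p, 0.0)) for p in products)

# Example: an agent's turn moves probability mass from laptop A to laptop B.
before = {"laptop_a": 0.7, "laptop_b": 0.3}
after = {"laptop_a": 0.4, "laptop_b": 0.6}
print(round(influence_delta(before, after), 2))  # → 0.3
```

A session-level score could then aggregate these per-turn deltas (e.g. sum or max per tactic class), but how Ichiba actually weights, classifies, and normalizes turns is not described in the source.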



