AI agents today either parse screenshots (expensive, lossy) or make blind tool calls (no context).
SLOP fixes this: apps expose a semantic state tree that AI subscribes to with incremental updates.
Actions are contextual: they live on the nodes they affect and appear/disappear as state changes.
Ships with a 13-doc spec, 14 SDK packages (TypeScript, Python, Rust, Go), React/Vue/Svelte/Solid/Angular adapters, Chrome extension, desktop app, and CLI. MIT licensed.
State-first, not action-first.
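To make the state-first idea concrete, here is a minimal sketch of what a semantic state tree with contextual actions might look like. All names (`StateNode`, `Patch`, `applyPatch`, the `login-form` example) are illustrative assumptions, not taken from the SLOP spec itself.

```typescript
// Hypothetical sketch of a SLOP-style semantic state tree.
// Node and patch shapes are assumptions for illustration only.

interface Action {
  name: string;          // e.g. "submit"
  description: string;
}

interface StateNode {
  id: string;
  kind: string;          // semantic role, e.g. "form", "field"
  state: Record<string, unknown>;
  actions: Action[];     // contextual: only actions valid right now
  children: StateNode[];
}

// An incremental update: replace the state and/or actions of one node.
interface Patch {
  nodeId: string;
  state?: Record<string, unknown>;
  actions?: Action[];
}

function applyPatch(root: StateNode, patch: Patch): StateNode {
  if (root.id === patch.nodeId) {
    return {
      ...root,
      state: patch.state ?? root.state,
      actions: patch.actions ?? root.actions,
    };
  }
  return { ...root, children: root.children.map(c => applyPatch(c, patch)) };
}

// A form whose "submit" action only exists once the input is valid.
const tree: StateNode = {
  id: "login-form",
  kind: "form",
  state: { valid: false },
  actions: [],
  children: [{
    id: "email",
    kind: "field",
    state: { value: "" },
    actions: [{ name: "setValue", description: "Type into the field" }],
    children: [],
  }],
};

// The app publishes patches as the user types; the AI's subscribed view
// updates incrementally, and a new contextual action appears on the form.
let view = applyPatch(tree, { nodeId: "email", state: { value: "a@b.co" } });
view = applyPatch(view, {
  nodeId: "login-form",
  state: { valid: true },
  actions: [{ name: "submit", description: "Submit the login form" }],
});

console.log(view.actions.map(a => a.name)); // ["submit"]
```

The point of the sketch: the AI never guesses from pixels or fires actions blind; it holds a live view of the tree, and the set of valid actions changes with state.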

SLOP: State Layer for Observable Programs
A protocol for AI to observe and interact with app state
Diego Carlino left a comment
Hey Product Hunt! I built SLOP because I was frustrated with how AI agents interact with apps — either parsing pixels from screenshots or making blind tool calls with zero awareness of what's actually happening. SLOP is state-first: apps publish what they are, AI subscribes and acts in context. The spec is 13 docs, SDKs ship in 4 languages, and there's a Chrome extension that works today. Would...
