Assemble
One /go command for AI work that remembers — zero runtime
89 followers
Assemble is an open-source configuration generator for AI work: /go, memory, spec-driven workflows, and zero runtime across 21 platforms.

Assemble
Hey Product Hunt,
I’m Rénald, founder of Cohesium AI.
I built Assemble because I was tired of AI tools that sound helpful but stay generic. A code review becomes a polite summary. A security audit becomes a reformatted checklist. A multi-step project starts strong, then falls apart as soon as context gets longer or the work gets more complex.
So I built what I actually needed: a structured AI work system, not just another assistant.
With Assemble, you type /go and describe what you need. From there, it routes the task by difficulty, keeps useful cross-session memory, and switches into a spec-driven workflow when the work is complex. For bigger delivery, it can even move execution into a board with review and test stages.
What makes it different from most agent frameworks:
• it’s a configuration generator, not a runtime
• zero daemon, zero SDK, zero dependencies, zero lock-in
• native configs for 21 platforms including Cursor, Claude Code, Codex, Gemini CLI, Copilot, and Windsurf
• it works beyond coding too: docs, contracts, proposals, email, and client operations
The Marvel framework isn’t branding — it’s a prompt-engineering choice. In testing, it gave us stronger role identity, better consistency, and less generic output than traditional agent setups.
And because LLMs naturally agree too easily, Assemble bakes in structural dissent: Deadpool challenges assumptions by default, and Doctor Doom escalates high-stakes decisions.
A real turning point for me: a client project that was supposed to take 2 days turned into 10 days of failed attempts with generic AI tools. With Assemble, it took 30 minutes.
If you try it, I’d genuinely love your feedback — especially on the workflows, platforms, and specialist roles you’d want next.
MIT licensed. Open source. Built for real work.
RiteKit Company Logo API
Congrats on the launch! This looks really impressive. I'm curious about the memory component - when you say it "remembers," does that persist across different AI platforms automatically, or do users need to configure how context flows between integrations? Also, how does the zero runtime constraint work with platforms that have inherent latency?
Assemble
@osakasaul Thanks, appreciate it.
For memory, Assemble keeps things simple: it uses Markdown files (.md) as the persistence layer. So yes, memory can persist across platforms and LLMs, because it’s not tied to any provider’s hidden internal state.
That continuity comes from portable, readable files rather than model-specific memory.
On the zero runtime side, Assemble doesn’t add a daemon or always-on orchestration layer between the user and the target platform. The logic lives in the config and files, so execution still happens natively in the tool you use — which means latency stays the platform’s latency.
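To make that concrete, here is a sketch of what a cross-session memory file could look like. The path, headings, and entries are purely illustrative, not Assemble's actual layout:

```markdown
<!-- .assemble/memory/project.md (hypothetical path) -->
# Project memory

## Decisions
- 2024-05-12: chose PostgreSQL over SQLite for the client DB (scale requirement)

## Conventions
- API errors follow the RFC 7807 problem+json format

## Open questions
- Pending client sign-off on the billing flow
```

Because it is plain Markdown, any platform that can read files can load the same memory; nothing is tied to one provider's session state.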
If helpful, I can also explain how we separate persistent memory from session context.
The 'spec-driven workflows' piece — how do you actually write a spec? Is there a schema or format Assemble expects, or is it more freeform? Trying to understand if this requires upfront investment to define the spec correctly, or if you can start loose and tighten it later.
Also, 21 platforms is a lot to claim parity across. Does the `/go` command actually behave consistently on all of them, or are some platforms more first-class than others? Like does it work the same on Claude Projects as it does on, say, a GitHub Copilot workflow — or are there meaningful differences in what gets supported?
Assemble
@sounak_bhattacharya
For specs:
No enforced schema: you write plain Markdown in whatever structure you like.
For COMPLEX tasks, Assemble follows a spec-driven methodology:
1. SPECIFY → spec.md
2. PLAN → plan.md
3. TASKS → tasks.md
4. IMPLEMENT → _board.yaml + Kanban pipeline
5. CLOSE → _quality.md
But the spec format itself is up to you. Assemble generates configs, not user specs.
In practice: you can start loose (objective + bullet points) and tighten as you go. No upfront investment required.
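For instance, a loose starter spec might be nothing more than an objective and a few bullets; everything below is hypothetical content, since Assemble doesn't enforce a format:

```markdown
# spec.md: invoice export feature

## Objective
Let users export monthly invoices as PDF.

## Must have
- Export button on the invoices page
- One PDF per invoice, A4 layout

## Out of scope (for now)
- Bulk export, custom templates
```

You can tighten this later: split must-haves into plan.md steps, then tasks.md items, as the COMPLEX workflow progresses.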
───
For the 21 platforms:
Source parity, not result parity.
The repo lists what gets generated per platform:
• First-class (agents + skills + workflows): Cursor, Claude Code CLI, Gemini CLI, Windsurf, Cline, Roo Code, etc.
• Lighter config: Codex (AGENTS.md only), Pi (AGENTS.md + SYSTEM.md), GitHub Copilot (instructions only)
The /go command is interpreted by the LLM via generated configs. On Claude Code or Cursor, Jarvis can chain agents, manage Kanban, etc. On Codex or Copilot, you get the same personas but without the workflow structure — the runtime does what it can.
Bottom line: one .assemble/ source, variable results depending on target platform. No magic, but no multi-repo maintenance either.
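As a rough illustration of "one source, variable results" (the file names below show the pattern, not an exact listing of what the repo generates):

```text
.assemble/                          # single source of truth
  agents/
  workflows/
# generated per target platform:
.cursor/rules/                      # first-class: agents + skills + workflows
CLAUDE.md                           # first-class
AGENTS.md                           # lighter: Codex reads this single file
.github/copilot-instructions.md     # instructions only
```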