The 'spec-driven workflows' piece — how do you actually write a spec? Is there a schema or format Assemble expects, or is it more freeform? Trying to understand if this requires upfront investment to define the spec correctly, or if you can start loose and tighten it later.
Also, 21 platforms is a lot to claim parity across. Does the `/go` command actually behave consistently on all of them, or are some platforms more first-class than others? Like, does it work the same on Claude Projects as it does on, say, a GitHub Copilot workflow — or are there meaningful differences in what gets supported?
Congrats on the launch! This looks really impressive. I'm curious about the memory component - when you say it "remembers," does that persist across different AI platforms automatically, or do users need to configure how context flows between integrations? Also, how does the zero runtime constraint work with platforms that have inherent latency?
About Assemble on Product Hunt
“One /go command for AI work that remembers — zero runtime”
Assemble launched on Product Hunt on April 19th, 2026 and earned 94 upvotes and 5 comments, placing #12 on the daily leaderboard. Assemble is an open-source configuration generator for AI work: /go, memory, spec-driven workflows, and zero runtime across 21 platforms.
Assemble was featured in Open Source (68.3k followers), Developer Tools (511.1k followers), Artificial Intelligence (466.4k followers) and GitHub (41.2k followers) on Product Hunt. Together, these topics include over 183.6k products, making this a competitive space to launch in.
Who hunted Assemble?
Assemble was hunted by fmerian. A “hunter” on Product Hunt is the community member who submits a product to the platform — uploading the images, the link, and tagging the makers behind it. Hunters typically write the first comment explaining why a product is worth attention, and their followers are notified the moment they post. Around 79% of featured launches on Product Hunt are self-hunted by their makers, but a well-known hunter still acts as a signal of quality to the rest of the community.
Hey Product Hunt,
I’m Rénald, founder of Cohesium AI.
I built Assemble because I was tired of AI tools that sound helpful but stay generic. A code review becomes a polite summary. A security audit becomes a reformatted checklist. A multi-step project starts strong, then falls apart as soon as context gets longer or the work gets more complex.
So I built what I actually needed: a structured AI work system, not just another assistant.
With Assemble, you type /go and describe what you need. From there, it routes the task by difficulty, keeps useful cross-session memory, and switches into a spec-driven workflow when the work is complex. For larger deliveries, it can even move execution onto a board with review and test stages.
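To make the routing idea concrete, here is a minimal sketch of difficulty-based dispatch. Everything in it (the heuristics, thresholds, and tier names) is invented for illustration; it is not Assemble's actual implementation.

```python
# Hypothetical sketch of difficulty-based routing for a /go request.
# The scoring heuristic and tier names are illustrative assumptions,
# not Assemble's real code.

def estimate_difficulty(task: str) -> int:
    """Toy heuristic: longer, multi-step requests score higher."""
    score = 0
    if len(task) > 200:
        score += 1  # long briefs tend to hide multiple sub-tasks
    if any(kw in task.lower() for kw in ("migrate", "refactor", "audit")):
        score += 1  # certain verbs usually imply complex work
    if task.count("\n") > 3:
        score += 1  # multi-line briefs suggest multi-step delivery
    return score

def route(task: str) -> str:
    """Pick a workflow tier for a request."""
    difficulty = estimate_difficulty(task)
    if difficulty == 0:
        return "direct-answer"  # simple: answer inline
    if difficulty == 1:
        return "spec-driven"    # complex: write a spec first
    return "board"              # large delivery: review/test stages

print(route("Fix the typo in the README"))  # prints "direct-answer"
```

A real router would presumably lean on the model itself to classify the task rather than string heuristics, but the shape is the same: classify first, then pick the workflow.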
What makes it different from most agent frameworks:
• it’s a configuration generator, not a runtime
• zero daemon, zero SDK, zero dependencies, zero lock-in
• native configs for 21 platforms including Cursor, Claude Code, Codex, Gemini CLI, Copilot, and Windsurf
• it works beyond coding too: docs, contracts, proposals, email, and client operations
The Marvel framework isn’t branding — it’s a prompt-engineering choice. In testing, it gave us stronger role identity, better consistency, and less generic output than traditional agent setups.
And because LLMs naturally agree too easily, Assemble bakes in structural dissent: Deadpool challenges assumptions by default, and Doctor Doom escalates high-stakes decisions.
A real turning point for me: a client project that was supposed to take 2 days turned into 10 days of failed attempts with generic AI tools. With Assemble, it took 30 minutes.
If you try it, I’d genuinely love your feedback — especially on the workflows, platforms, and specialist roles you’d want next.
MIT licensed. Open source. Built for real work.