This product has not been featured by Product Hunt yet, so it is not yet shown by default on their landing page.
[Dashboard: product upvotes, comments, upvote speed, and overall ranking vs the next 3 launches — data still loading]
Remoroo
Runs overnight experiments on your code locally
Remoroo is an autonomous experimentation agent for code. Give it a repo, a metric, and a time budget. It edits code and configs, runs tests, benchmarks, and training jobs, keeps changes that improve results, reverts the rest, and leaves you with verified patches, benchmark deltas, and artifacts in the morning. Built for teams doing long-running engineering and research workflows.
About Remoroo on Product Hunt
“Runs overnight experiments on your code locally”
Remoroo was submitted on Product Hunt, where it earned 8 upvotes and 1 comment, placing #48 on the daily leaderboard.
On the analytics side, Remoroo competes within Software Engineering, Developer Tools and Artificial Intelligence — topics that collectively have 1M followers on Product Hunt. The dashboard above tracks how Remoroo performed against the three products that launched closest to it on the same day.
Who hunted Remoroo?
Remoroo was hunted by Adham Ghazali. A “hunter” on Product Hunt is the community member who submits a product to the platform, uploading the images and link and tagging the makers behind it. Hunters typically write the first comment explaining why a product is worth attention, and their followers are notified the moment they post. Around 79% of featured launches on Product Hunt are self-hunted by their makers, but a well-known hunter still acts as a signal of quality to the rest of the community. See the full all-time top hunters leaderboard to discover who is shaping the Product Hunt ecosystem.
For a complete overview of Remoroo including community comment highlights and product details, visit the product overview.
Hey Product Hunt.
I’m Adham, founder of Remoroo.
I built this because too much engineering work still looks the same: tweak code, wait for a long run, get worse results, revert, try again. The hard part often is not writing the patch. It is babysitting the experiment loop and proving whether anything actually improved.
So we built Remoroo to run that loop autonomously on your machine. You give it a repo, a measurable goal, and a time budget. It plans, edits code and configs, runs training jobs, tests, or benchmarks, evaluates against the metric, keeps changes that improve results, and reverts the rest.
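To make the loop concrete, here is a minimal sketch of the keep/revert logic described above. This is not Remoroo's actual implementation or API: `Patch`, `propose`, and the toy repo/metric are hypothetical stand-ins, and the real system would be editing files and running long benchmarks rather than nudging a number.

```python
import time
import random

class Patch:
    """Hypothetical stand-in for an edit to code or config."""
    def __init__(self, key, delta):
        self.key, self.delta = key, delta
    def apply(self, repo):
        repo[self.key] += self.delta
    def revert(self, repo):
        repo[self.key] -= self.delta

def experiment_loop(repo, metric, propose, budget_s=0.05):
    """Run proposed patches until the time budget is spent:
    keep changes that improve the metric, revert the rest."""
    best = metric(repo)
    kept = []
    deadline = time.monotonic() + budget_s
    while time.monotonic() < deadline:
        patch = propose(repo)
        patch.apply(repo)           # edit code/configs
        score = metric(repo)        # run tests/benchmarks and measure
        if score > best:            # improvement: keep the change
            best = score
            kept.append(patch)
        else:                       # no improvement: roll it back
            patch.revert(repo)
    return kept, best

# Toy example: maximize -(x - 3)^2 by randomly nudging one "config" value.
repo = {"x": 0.0}
metric = lambda r: -(r["x"] - 3.0) ** 2
propose = lambda r: Patch("x", random.uniform(-1, 1))
kept, best = experiment_loop(repo, metric, propose)
print(round(best, 2))
```

Because every rejected patch is reverted, the repo state at the end always matches the best score found, which is the "leaves you with verified patches" property in miniature.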
The big shift for us while building it was realizing we did not want another coding copilot. We wanted something that could take ownership of long-running experiments and leave behind proof: verified patches, benchmark deltas, and artifacts you can inspect in the morning.
We would especially love feedback from teams working on ML, research, robotics, performance tuning, or any workflow where a single experiment can take minutes or hours.
What would make you trust an agent to run repeated experiments on your codebase?