Over the past few months, we've been completely rebuilding cubic's AI review engine. Today we're excited to announce cubic 2.0, the most accurate AI code reviewer available. cubic helps teams read, trust, and merge AI-generated code in real repos. It is optimized for accuracy and low noise, and it goes beyond PR comments with a CLI, AI docs, and PR description updates. Used by 100+ orgs including Cal.com, n8n, Granola, and Linux Foundation projects.
Tried out Cubic and like the idea of AI assisting with PR reviews. How do you make sure the feedback stays high-signal and doesn’t turn into noise, especially for experienced dev teams already using tools like Copilot?
Curious to hear how you’re thinking about this in real-world workflows.
Cool project, and I can see how you could easily extend it: add the ability to run full technical audits. You connect Git, it audits the entire project, and it generates a report with recommendations in three groups: critical, standard, and minor. If that report is high quality, I think you'll have a huge number of clients!
Wow, cubic looks amazing! The updated AI review engine sounds like a game changer. How does it handle reviewing auto-generated code, specifically, to avoid reinforcing potential biases? Super keen to try this out!
The focus on accuracy over noise makes sense—most AI reviewers I've seen lean too far in one direction. I'm curious how cubic handles codebases with mixed AI-generated and human-written code. Does it adjust review depth based on the origin of the code, or treat all changes uniformly?
Framing review as a workflow, not just a PR bot, really resonates. Curious which piece ends up being most valuable in practice. The incremental checks, the CLI, or the config-as-code?
Upvoted! We face the same struggle at Dashform—the real pain isn't syntax, but those subtle logic hallucinations that look correct at a glance.
A small question: does Cubic specifically target those 'confident but wrong' errors, beyond just style checks?
Rooting for you guys! Happy to support fellow teams pushing the boundaries of AI dev tools.
About cubic 2.0 on Product Hunt
“Code reviews for the AI era”
cubic 2.0 launched on Product Hunt on January 12th, 2026 and earned 143 upvotes and 8 comments, placing #8 on the daily leaderboard.
cubic 2.0 was featured in Software Engineering (42.4k followers), Developer Tools (511.7k followers) and Artificial Intelligence (467.3k followers) on Product Hunt. Together, these topics include over 163.2k products, making this a competitive space to launch in.
Who hunted cubic 2.0?
cubic 2.0 was hunted by Garry Tan. A “hunter” on Product Hunt is the community member who submits a product to the platform — uploading the images, the link, and tagging the makers behind it. Hunters typically write the first comment explaining why a product is worth attention, and their followers are notified the moment they post. Around 79% of featured launches on Product Hunt are self-hunted by their makers, but a well-known hunter still acts as a signal of quality to the rest of the community. See the full all-time top hunters leaderboard to discover who is shaping the Product Hunt ecosystem.
Want to see how cubic 2.0 stacked up against nearby launches in real time? Check out the live launch dashboard for upvote speed charts, proximity comparisons, and more analytics.
Hey Hunters, I’m Paul, the founder of cubic.
If you’ve tried AI code review tools before, you’ve probably seen both failure modes:
1. they miss the important stuff
2. they comment so much that you stop reading
We built cubic because review is now the bottleneck. AI made it easy to produce code. It did not make it easy to trust a big diff in a complex repo.
Over the last few months we’ve been iterating hard on the engine, and the change is big enough that we’re calling it cubic 2.0. It’s faster, more accurate, and noticeably less noisy than it was a few months ago.
The other thing we learned is that “a GitHub bot that comments on PRs” is not enough anymore. Review is a workflow, not a feature, so we built the pieces around it too:
- incremental checks on every push
- PR descriptions that stay accurate
- wiki docs that stay in sync
- `cubic.yaml` for config-as-code
- and a CLI so you can run review before you push
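For readers wondering what config-as-code might look like here, a minimal sketch of a `cubic.yaml` is below. Every key name in it is invented for illustration; cubic's actual schema may differ, so treat this as a shape, not documentation:

```yaml
# Hypothetical cubic.yaml sketch. Key names are illustrative only,
# not cubic's real configuration schema.
review:
  # Paths the reviewer should skip entirely
  ignore:
    - "**/*.generated.ts"
    - "vendor/**"
  # Prefer fewer, higher-confidence comments
  noise_level: low
rules:
  # Plain-language review rules the team wants enforced
  - "Flag missing error handling on external API calls"
  - "Do not comment on formatting; CI handles it"
```

Checking a file like this into the repo would let the whole team share one review policy instead of per-developer settings.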
If you try it, I’d love blunt feedback:
- What did it catch that you actually cared about?
- What should it stop commenting on?
I’ll be here in the comments!