Browser Arena is an open-source benchmark that tests 7 cloud browser providers on speed, reliability, and cost. Same tests, same EC2 instances, 1,000+ runs each. All results and code are public: deploy on Railway and reproduce every number yourself.
Congrats Sam! The Browser Functions concept is interesting. How close to zero is the latency when your serverless code runs colocated with the browser vs a normal Lambda?
Love the focus on reproducibility. Btw, do you think open benchmarks will become the norm in AI infra, or will most companies still optimize for perception over truth?
Really interesting approach to benchmarking cloud browser providers. I have been evaluating a few of these services for running automated workflows and the lack of standardized benchmarks has made it difficult to compare them objectively. The fact that all results and code are public is a huge plus. Do you plan to add latency benchmarks for dynamic page interactions, or is the focus mainly on static page loads and rendering?
Wait, so you built a benchmark where your own product doesn't win 100% of the time? This level of honesty is illegal in Silicon Valley :)
What happens when an agent session goes rogue mid-task? Curious if there are circuit breakers built into the session management layer or if the human has to manually kill it.
Hey PH! It’s Lucas, CTO of Notte 👋
We built Browser Arena to make it easier for people to compare cloud browser solutions using fair, reproducible metrics.
Check it out and we’d love to hear what you think!
About Browser Arena on Product Hunt
“Open-source benchmarks for cloud browser infrastructure”
Browser Arena launched on Product Hunt on April 8th, 2026 and earned 183 upvotes and 29 comments, placing #6 on the daily leaderboard.
Browser Arena was featured in Open Source (68.3k followers), Developer Tools (511k followers), Artificial Intelligence (466.2k followers) and GitHub (41.2k followers) on Product Hunt. Together, these topics include over 182.5k products, making this a competitive space to launch in.
Who hunted Browser Arena?
Browser Arena was hunted by Garry Tan. A “hunter” on Product Hunt is the community member who submits a product to the platform, uploading the images and the link and tagging the makers behind it. Hunters typically write the first comment explaining why a product is worth attention, and their followers are notified the moment they post. Around 79% of featured launches on Product Hunt are self-hunted by their makers, but a well-known hunter still acts as a signal of quality to the rest of the community.
Hey PH! I'm Sam from Notte. 👋
We built Browser Arena because we were tired of seeing benchmarks in the AI agent/browser infra space that couldn't be reproduced. Companies claiming SOTA performance based on cherry-picked runs, undisclosed infrastructure, and small sample sizes.
So we built an answer: an open-source benchmark suite that tests every major cloud browser provider under identical conditions.
What makes it different❓
1,000 runs per provider
Same AWS infrastructure, same test, same Playwright version
Full VM metadata published (region, instance type, RTT)
Median, P90, P95 — not just "best case" (metric aggregation sketched after this list)
Error rates and failure breakdowns included
Cost per session calculated from real pricing
MIT licensed. Clone it and run it yourself.
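To make the reporting concrete, here is a rough Python sketch of how raw per-run timings might be rolled up into the published numbers. It is illustrative only: the `summarize` helper, the example durations, and the $0.05/minute price are made-up assumptions, not Browser Arena's actual code or any provider's real pricing.

```python
# Illustrative roll-up of raw run timings into median/P90/P95, error rate, and cost per session.
# The example durations and the $0.05/minute price are placeholders, not real benchmark data.
from statistics import median, quantiles

def summarize(durations_ms: list[float], failures: int, price_per_minute_usd: float = 0.05) -> dict:
    pct = quantiles(durations_ms, n=100)              # 99 percentile cut points
    total_runs = len(durations_ms) + failures
    return {
        "median_ms": round(median(durations_ms), 1),
        "p90_ms": round(pct[89], 1),                  # 90th percentile
        "p95_ms": round(pct[94], 1),                  # 95th percentile
        "error_rate": failures / total_runs,
        # Simple per-minute billing model; some providers bill per session instead.
        "cost_per_session_usd": round(median(durations_ms) / 60_000 * price_per_minute_usd, 5),
    }

print(summarize([850.0, 920.0, 1010.0, 1300.0, 2400.0], failures=1))
```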
The benchmark ⚡
The benchmark runs a minimal session lifecycle (create → connect → navigate → release), both sequentially and concurrently (up to 16 parallel sessions).
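For a sense of what one iteration measures, here is a minimal Python/Playwright sketch of that lifecycle. It is an approximation, not the actual harness: the `PROVIDER_WS_URL` endpoint, the timing field names, and the CDP-based connect step are assumptions and will differ per provider.

```python
# Sketch of one benchmark iteration: create -> connect -> navigate -> release.
# PROVIDER_WS_URL and the timing fields are illustrative, not Browser Arena's real API.
import asyncio
import time

from playwright.async_api import async_playwright

PROVIDER_WS_URL = "wss://example-provider/cdp?token=..."  # hypothetical remote endpoint
TARGET_URL = "https://example.com"
CONCURRENCY = 16  # the suite tests sequential runs and up to 16 parallel sessions

async def run_once() -> dict:
    timings = {}
    start = time.perf_counter()
    async with async_playwright() as pw:
        # "create + connect": attach to a remote browser over CDP
        browser = await pw.chromium.connect_over_cdp(PROVIDER_WS_URL)
        timings["connect_ms"] = (time.perf_counter() - start) * 1000

        # "navigate": load one page and wait for the load event
        page = await browser.new_page()
        nav_start = time.perf_counter()
        await page.goto(TARGET_URL, wait_until="load")
        timings["navigate_ms"] = (time.perf_counter() - nav_start) * 1000

        # "release": close the remote session
        await browser.close()
    timings["total_ms"] = (time.perf_counter() - start) * 1000
    return timings

async def run_concurrent(n: int = CONCURRENCY) -> list[dict]:
    # Fire n sessions in parallel; exceptions are counted as failed runs.
    results = await asyncio.gather(*(run_once() for _ in range(n)), return_exceptions=True)
    return [r for r in results if isinstance(r, dict)]

if __name__ == "__main__":
    print(asyncio.run(run_once()))
```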
Notte performs well in the results. But we didn't build this to win (we don't in all cases); we built it so the results mean something. If another provider is faster, the leaderboard shows it. That's the point.
Run it on your own infra and tell us if your numbers differ.
Would love your feedback on what benchmarks you'd want to see next! 🌸