Open-source benchmarks for cloud browser infrastructure
Browser Arena is an open-source benchmark that tests 7 cloud browser providers on speed, reliability, and cost. Same tests, same EC2 instances, 1,000+ runs each. All results and code are public - deploy on Railway and reproduce every number yourself.
About Browser Arena on Product Hunt
“Open-source benchmarks for cloud browser infrastructure”
Browser Arena launched on Product Hunt on April 8th, 2026 and earned 183 upvotes and 29 comments, placing #6 on the daily leaderboard.
On the analytics side, Browser Arena competes within Open Source, Developer Tools, Artificial Intelligence and GitHub — topics that collectively have 1.1M followers on Product Hunt.
Who hunted Browser Arena?
Browser Arena was hunted by Garry Tan. A “hunter” on Product Hunt is the community member who submits a product to the platform — uploading the images, the link, and tagging the makers behind it. Hunters typically write the first comment explaining why a product is worth attention, and their followers are notified the moment they post. Around 79% of featured launches on Product Hunt are self-hunted by their makers, but a well-known hunter still acts as a signal of quality to the rest of the community. See the full all-time top hunters leaderboard to discover who is shaping the Product Hunt ecosystem.
For a complete overview of Browser Arena including community comment highlights and product details, visit the product overview.
Hey PH! I'm Sam from Notte. 👋
We built Browser Arena because we were tired of seeing benchmarks in the AI agent/browser infra space that couldn't be reproduced: companies claiming SOTA performance based on cherry-picked runs, undisclosed infrastructure, and small sample sizes.
So we built an answer: an open-source benchmark suite that tests every major cloud browser provider under identical conditions.
What makes it different❓
1,000 runs per provider
Same AWS infrastructure, same test, same Playwright version
Full VM metadata published (region, instance type, RTT)
Median, P90, P95 — not just "best case" (see the aggregation sketch after this list)
Error rates and failure breakdowns included
Cost per session calculated from real pricing
MIT licensed. Clone it and run it yourself.
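How those aggregates fall out of the raw runs is straightforward. Here is a minimal sketch of the aggregation in Python, assuming per-run records of (latency, success) and an illustrative per-minute price; the record layout and the $0.05/min rate are placeholders, not Browser Arena's actual output schema or any provider's real pricing.

from statistics import median, quantiles

# Hypothetical per-run records (latency_ms, succeeded); a real analysis
# would load these from the benchmark's published result files.
runs = [(812.4, True), (901.0, True), (1340.2, True), (15000.0, False),
        (778.9, True), (995.5, True), (2100.7, True), (860.3, True)]

ok = sorted(ms for ms, succeeded in runs if succeeded)
error_rate = 1 - len(ok) / len(runs)

# quantiles(n=100) returns the 1st..99th percentile cut points, so index 89
# is P90 and index 94 is P95; "inclusive" keeps cut points inside the
# observed range instead of extrapolating past the min/max.
pcts = quantiles(ok, n=100, method="inclusive")
p50, p90, p95 = median(ok), pcts[89], pcts[94]

# Cost per session = per-minute price x median session duration.
# The rate below is an assumed placeholder, not any provider's real pricing.
PRICE_PER_MINUTE_USD = 0.05
cost_per_session = PRICE_PER_MINUTE_USD * (p50 / 1000 / 60)

print(f"median={p50:.0f}ms  p90={p90:.0f}ms  p95={p95:.0f}ms  "
      f"errors={error_rate:.1%}  cost=${cost_per_session:.5f}/session")

Computing percentiles over successful runs only, with failures reported separately as an error rate, keeps a provider's timeouts from either inflating its latency numbers or hiding inside them.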
The benchmark ⚡
Minimal session lifecycle (create → connect → navigate → release), run both sequentially and concurrently (up to 16 parallel sessions).
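To make that concrete, here is a minimal sketch of one timed session in Python Playwright, assuming a provider that exposes a CDP websocket endpoint. CDP_URL is a placeholder, most providers require a session-create API call before connecting, and the real harness may sequence these steps differently.

import asyncio
import time
from playwright.async_api import async_playwright

CDP_URL = "wss://provider.example/session"  # placeholder, not a real provider endpoint

async def one_session(pw) -> float:
    # create + connect: a real provider usually needs a session-create API
    # call first; here that step is folded into the CDP connect.
    t0 = time.monotonic()
    browser = await pw.chromium.connect_over_cdp(CDP_URL)
    page = await browser.new_page()
    await page.goto("https://example.com")   # navigate
    await browser.close()                    # release
    return (time.monotonic() - t0) * 1000    # full lifecycle latency, ms

async def main():
    async with async_playwright() as pw:
        # Sequential mode: sessions one at a time.
        sequential = [await one_session(pw) for _ in range(3)]
        # Concurrent mode: up to 16 sessions in parallel.
        concurrent = await asyncio.gather(*(one_session(pw) for _ in range(16)))
        print("sequential:", sequential)
        print("concurrent:", sorted(concurrent))

asyncio.run(main())

Timing the whole lifecycle, rather than navigation alone, is what surfaces the session-creation and connection overhead that differs most between providers.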
Notte performs well in the results. But we didn't build this to win (we don't in all cases); we built it so the results mean something. If another provider is faster, the leaderboard shows it. That's the point.
Run it on your own infra and tell us if your numbers differ.
Would love your feedback on what benchmarks you'd want to see next! 🌸