Flowsery is a privacy-first web analytics platform built for teams that care about revenue, not just pageviews. Track the channels, pages, funnels, and user journeys that actually drive conversions, all in one clean dashboard. Flowsery comes with revenue tracking, advanced bot filtering, funnels, goal tracking, and visitor journey analysis out of the box, so you can see what is working and where people drop off.
Hey Hunters 👋, this is not AI written so you can keep on reading 😂
I needed analytics for my side projects. My first instinct was PostHog - great, I use it to this day - but too complicated for the simple stuff I wanted: country, origin, UTMs, per-user attribution, entry page, pages, revenue. Later I discovered PostHog events are immutable, and I couldn't remove my fake test data without writing manual SQL filters all over the place, so I started looking for alternatives.
First I found Plausible - all great, but no per-user attribution. Next was DataFast, which I'd seen on Twitter; it looked like exactly what I needed.
So I installed DataFast, added a first-party proxy so tracking would reach all visitors, and it turned out I was actually collecting much more. I'm not sure whether Plausible supported a proxy setup, but I remember not being able to set one up, so I kept DataFast.
Fast forward a couple of months. Traffic increased, and now I needed to pay $40/m - while my whole infra costs $150/m including front-end, back-end, and emails. The greedy developer in me said nah, I'm not paying ~$500/yr for analytics. I thought about moving to an alternative, but I'd lose all existing data, revenue attribution, and referrers, so I decided to build it myself!
I opened Claude Code, wrote one prompt, and it was done… jk, I'm not an 18yo from Twitter, so I'm not skilled enough to make Claude one-shot a website.
First challenge: getting data from DataFast. No export option (RED FLAG), so I wrote a long script that paginates through all exposed endpoints, collects and transforms data, and creates SQL to run against my DB.
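For the curious, the export boiled down to something like this - a minimal sketch, where the page-based fetching and the `pageviews` schema are illustrative stand-ins, not DataFast's real API or my actual table layout:

```python
from typing import Callable, Iterator


def export_events(fetch_page: Callable[[int], list[dict]]) -> Iterator[str]:
    """Paginate through an endpoint and yield SQL INSERT statements.

    fetch_page(page) returns a list of event dicts, or [] when exhausted.
    Column names are illustrative, not the real DataFast schema.
    """
    page = 0
    while True:
        rows = fetch_page(page)
        if not rows:
            break
        for row in rows:
            # Escape single quotes the crude way; fine for a one-off migration.
            vals = ", ".join(
                "'" + str(row[k]).replace("'", "''") + "'"
                for k in ("visitor_id", "path", "ts")
            )
            yield f"INSERT INTO pageviews (visitor_id, path, ts) VALUES ({vals});"
        page += 1
```

In practice each `fetch_page` call hits one of the exposed endpoints; writing the statements to a file lets you review them before running anything against the DB.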
For context I have a microservices architecture - queues, Kafka, Redis, sockets, gateway, auth - all done with established patterns. Front-end is a monorepo with shared components, features, forms, services. So all I needed was the "core" analytics feature.
In a weekend I had a semi-working front-end with some data on the backend. Ugly dashboard, bunch of services, new database, no actual tracking. Simple, a couple of days and I'm done…
Turned out the data from DataFast was quite broken and lacked many values. Connecting goals, revenue, and visitors became a nightmare. I connected my read-only DB via MCP, got a read-only key from my payment processor, and tediously re-attributed data to match DataFast. It took multiple days and still wasn't 100% right, since DataFast didn't expose all the needed data - but 95% right, so I moved on.
Next I reviewed the backend boilerplate Claude wrote and had to completely refactor it - Claude did attribution with direct calls to Postgres, so every visitor meant a round trip to the database…
So I created a caching layer with custom flushes. Events go to Redis first, flushed to DB every ~30 seconds. Instead of bombarding the DB per visitor, it writes a modest query every other second at scale. Flush uses a distributed Redis lock, so with multiple instances, only one machine flushes at a time - no duplicates, no race conditions. Each flush processes data in chunks of 5,000 records per SQL statement (Postgres parameter limits), and failed chunks get re-buffered back to Redis with a retry counter, up to 5 retries. Even if the DB hiccups mid-flush, no data is silently lost.
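A stripped-down sketch of that flush logic - the Redis buffer, lock, and DB writer are abstracted behind plain callables here, so this only shows the chunking and retry accounting, not the real service code:

```python
CHUNK_SIZE = 5000   # stay under Postgres parameter limits per statement
MAX_RETRIES = 5


def flush(buffer, write_chunk, requeue):
    """Drain buffered events to the DB in fixed-size chunks.

    buffer:      list of event dicts, each carrying a '_retries' counter.
    write_chunk: persists one chunk; raises on DB failure.
    requeue:     pushes failed events back to the buffer (Redis in prod).
    Returns (written, requeued, dropped) counts. In prod this runs under
    a distributed Redis lock so only one instance flushes at a time.
    """
    written = requeued = dropped = 0
    for i in range(0, len(buffer), CHUNK_SIZE):
        chunk = buffer[i:i + CHUNK_SIZE]
        try:
            write_chunk(chunk)
            written += len(chunk)
        except Exception:
            retryable = []
            for ev in chunk:
                ev["_retries"] = ev.get("_retries", 0) + 1
                if ev["_retries"] <= MAX_RETRIES:
                    retryable.append(ev)   # back to Redis for the next flush
                else:
                    dropped += 1           # gave up after MAX_RETRIES attempts
            requeue(retryable)
            requeued += len(retryable)
    return written, requeued, dropped
```

The key property is that a failed chunk never just disappears: it either goes back to the buffer with its retry counter bumped, or is explicitly counted as dropped.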
ClickHouse would have resolved that, but I didn't want to swap vendors - the Redis setup is scalable on its own.
Next, extracting data. LLMs have no idea about the heap - everything was loaded into memory and iterated over. With 100k+ events the heap spikes and the server dies, so I rewrote it with optimized queries, pagination, and batched requests. I also added a pre-aggregated daily rollup table - for historical queries without filters, the system reads from a compact summary table instead of scanning millions of raw sessions and pageviews. A simple optimization, but it made the dashboard feel instant for date ranges that don't include today.
Back to the front-end. Charts are underwhelming to work with, so I spent time perfecting them. I'm a sucker for nice UI and couldn't leave them non-animated. Another thing bugging me about DataFast was their terrible filter system - unusable. The pristine example is PostHog, so I ported that approach. And rate limits - when I'd move back 3 days in DataFast, I'd get rate limited?! I checked the network tab: 20 concurrent requests PER DAY of data (Red Flag). Moving to yesterday? Think the previous requests get aborted? Nope - another 20, and one more day makes 60 concurrent requests and a rate limit. I haven't seen a missing abort signal in a production app in ages (Red Flag). I kept that in mind as a hint of how bad their attribution actually is.
I optimized front-end requests down to 5, batched for the dashboard data, plus aborts when moving fast between filters and views. My app was flying. Coming back to DataFast felt nightmarish.
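The abort pattern, sketched here with asyncio cancellation standing in for the browser's AbortController (the class name and shape are mine, not the actual front-end code): only the latest request survives, and superseded ones are cancelled instead of piling up against the rate limiter.

```python
import asyncio


class LatestOnly:
    """Run one dashboard fetch at a time; starting a new one cancels
    the in-flight request instead of letting stale responses pile up."""

    def __init__(self):
        self._task = None

    async def fetch(self, coro):
        if self._task and not self._task.done():
            self._task.cancel()                # abort the stale request
        self._task = asyncio.ensure_future(coro)
        try:
            return await self._task
        except asyncio.CancelledError:
            return None                        # superseded by a newer fetch
```

Usage: wrap every filter/view change in `latest_only.fetch(load_dashboard(filters))`; rapid clicking then produces exactly one completed request.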
Time to test attributions! Seed scripts fine, payment attribution fine, fresh data every day. UI good, UX good, time to create a tracking script, add it to websites, compare, and… nothing worked. Had to fix CORS, endpoints, queries. After playing around - everything worked!
So I compared attributions, and… I had ~30-50% fewer sessions. Fuming, checking logs, checking the DB. The answer was simple - I had added Arcjet to the public endpoint, and it got to work: 100k requests in a couple of days. Oops - had to turn it off, since at that volume it would bankrupt me. Started looking deeper.
Turned out DataFast has ABSOLUTELY ZERO BOT PROTECTION (Red Flag). Datacenter IPs? Passed. Null user-agent? Passed. Resolution 10x10000? Welcome aboard. I read Arcjet's blog posts, implemented their suggestions, and ended up blocking 96% of the bots DataFast let through. How?
The main one: checking the userAgent, filtering obvious bots and non-existent displays. Trickier was analyzing IPs and blocking datacenter ones. I spent days on that - the best I managed was a MaxMind DB of IP ranges to block datacenter traffic (at first I blocked my own infra and had 0 attributions). Then I proxied the user's real IP through Cloudflare to my Fly backend, compared, and filtered.
While doing that I wondered how DataFast handles this, and… they don't (RED FLAG). Benefit of the doubt - it might have been my mess-up, since you need to proxy the real IP and it's not well documented. Essentially ALL the users I tracked were attributed to the closest Cloudflare CDN node… I double-checked: turns out I "regularly take trips" to Germany (I'm in Poland), because my traffic was routed through a German edge… Most of the tracking via DataFast was useless garbage, so I had to do better.
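A minimal sketch of the fix, assuming Cloudflare sits in front of the backend. `CF-Connecting-IP` is Cloudflare's documented header for the original client address; the two ranges below are from Cloudflare's published edge list, but in production you'd load the full, current list and the helper name is mine:

```python
import ipaddress

# Two of Cloudflare's published edge ranges; the real list changes,
# so fetch it from Cloudflare rather than hardcoding it.
CLOUDFLARE_RANGES = [
    ipaddress.ip_network("173.245.48.0/20"),
    ipaddress.ip_network("103.21.244.0/22"),
]


def client_ip(headers: dict, peer_ip: str) -> str:
    """Resolve the real visitor IP behind Cloudflare.

    Behind a CDN, the socket peer is just the nearest edge node, so naive
    geo lookups place every visitor at the CDN PoP (hence my "trips" to
    Germany). Cloudflare forwards the original address in CF-Connecting-IP;
    trust it only when the peer really is a Cloudflare edge, otherwise the
    header can be spoofed by anyone hitting the endpoint directly.
    """
    peer = ipaddress.ip_address(peer_ip)
    from_cloudflare = any(peer in net for net in CLOUDFLARE_RANGES)
    forwarded = headers.get("CF-Connecting-IP")
    if from_cloudflare and forwarded:
        return forwarded
    return peer_ip
```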
I added non-obvious bot signals too - bounces, zero engagement plus weird screen sizes, weird browser versions, dozens of params. I attach a bot score to every session, so now I have a toggle that filters out the "probably bots". The most obvious ones are hard-filtered without even reaching the DB.
One thing I'm happy about - the bot scorer is import-aware. Since all DataFast imported sessions have zero behavioral metrics (never tracked scroll depth, engagement time, or interactions), the scorer detects these and uses a separate algorithm looking only at fingerprint anomalies like screen dimensions, instead of penalizing them for missing data they never had.
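A toy version of such a scorer - the weights, thresholds, and field names are made up for illustration (the real one uses dozens of params), but it shows the import-aware split: behavioral signals only apply to sessions that could ever have had them.

```python
def bot_score(session: dict) -> float:
    """Heuristic bot score in [0, 1]; higher = more bot-like.

    Imported sessions (which never collected scroll depth, engagement
    time, or interactions) are scored only on fingerprint anomalies,
    so they aren't penalized for data they never had.
    """
    score = 0.0

    ua = (session.get("user_agent") or "").lower()
    if not ua or any(b in ua for b in ("bot", "crawl", "spider", "headless")):
        score += 0.6                       # missing or self-declared bot UA

    width, height = session.get("screen", (0, 0))
    if not (200 <= width <= 8000 and 200 <= height <= 8000):
        score += 0.3                       # non-existent display (e.g. 10x10000)

    if session.get("datacenter_ip"):
        score += 0.4                       # flagged via a MaxMind-style range DB

    if not session.get("imported"):
        # Behavioral signals: only for sessions tracked live.
        if session.get("engagement_ms", 0) == 0 and session.get("pageviews", 1) <= 1:
            score += 0.2                   # instant bounce with zero engagement

    return min(score, 1.0)
```

Sessions above some threshold get hard-filtered before the DB; the rest keep their score so the dashboard toggle can hide "probably bots" without destroying data.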
And that's pretty much it. Backend ready, optimized, stress tested (died, had to bump up RAM). Front-end nice with good UX. So what were my savings?
The cost of a new microservice is $25/m. So $39 - $25 = $14/m in savings…
Took me around a month (not full-time, on and off). Absolutely genius idea on my part, replace every SaaS and never look back.
About Flowsery on Product Hunt
“Revenue-first analytics with real user journeys”
Flowsery was submitted on Product Hunt and earned 4 upvotes and 1 comment, placing #93 on the daily leaderboard.
Flowsery was featured in API (98k followers), Analytics (171.4k followers) and Artificial Intelligence (466.2k followers) on Product Hunt. Together, these topics include over 110.1k products, making this a competitive space to launch in.
Who hunted Flowsery?
Flowsery was hunted by Taras Shynkarenko.