Vet

Keep your coding agents honest

Open Source
Developer Tools
Artificial Intelligence
GitHub
Hunted by Alexander Tibbets

Vet is a fast, local code review tool open-sourced by the Imbue team. It’s concise where others are verbose, and it catches more relevant issues. Vet verifies your coding agent's work against your conversation history to ensure the agent's actions align with your requests. It catches the silent failures: features half-implemented, tests claimed but never run. It also reviews full PRs, catching logic errors, unhandled edge cases, and deviations from stated goals.

Top comment

👋 Hey Product Hunt! We're open-sourcing Vet: a fast and local code review tool built for developers using AI coding agents. A common problem: when you're using an agent to write code, it can hit a wall and silently swap in fake data instead of telling you. You ask it to write tests, it tells you they pass, but it never ran them. You may not notice until later, or at all. Vet verifies your coding agent's work by considering your conversation history to ensure the agent's actions align with your requests. It catches logic errors, unhandled edge cases, and deviations from stated goals with high precision. Vet uses your existing API keys, works with local models, and has zero telemetry. Run from the CLI, CI, or as an agent skill. Eager to answer questions!
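The maker comment above says Vet can run from the CLI or in CI. As a rough sketch of what a CI gate around a local review tool could look like (the `vet` command name and the `review --base main` flags here are assumptions for illustration, not taken from the Vet docs; check the Imbue repository for the real CLI surface):

```shell
# Hypothetical pre-merge gate for a CI job.
# REVIEW_CMD and the `review --base main` invocation are illustrative
# assumptions, not the documented Vet CLI.
REVIEW_CMD="${REVIEW_CMD:-vet}"

review_gate() {
  if ! command -v "$REVIEW_CMD" >/dev/null 2>&1; then
    # Skip gracefully on machines where the tool is not installed.
    echo "review tool '$REVIEW_CMD' not found; skipping gate"
    return 0
  fi
  # A nonzero exit from the reviewer fails the CI job and blocks the merge.
  "$REVIEW_CMD" review --base main
}
```

In a pipeline you would call `review_gate` as its own step; since the tool runs locally with your own API keys, nothing leaves the runner beyond whatever model endpoint you configure.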

Comment highlights

@mrtibbets interesting problem to tackle.

One of the biggest trust issues with coding agents is exactly this: silent assumptions and fake outputs that look correct on the surface.

I like the idea of verifying the agent against the conversation history. However, I'm curious about two things. How does Vet handle long threads with multiple iterations of instructions? And do you see this becoming a layer that sits between the developer and the agent permanently, almost like an AI code auditor?

The silent failures framing is sharp — half-implemented features and unclaimed value are the real activation killers in most B2B products, not churn from explicit dissatisfaction. Curious how Vet surfaces these gaps: is it correlating usage data against the expected activation path, or more of a qualitative signal from user sessions? The distinction matters because one tells you what's broken and the other tells you why. Would be interested in how this fits into a team's existing analytics stack — does it layer on top of tools like Mixpanel or Amplitude, or replace them for the activation layer?

I tried this with Clawdbot and it successfully caught a 'silent failure' where the agent skipped a test. I can't get it to call Vet every time, though. How can I prompt it to always call Vet before reporting a task as complete?

Curious how Vet handles the audit trail when an agent makes changes across multiple repos: do you log at the diff level, or capture the full agent reasoning chain too? Trying to figure out where the boundary between "agent decision" and "human accountability" sits in your model.

The 'catches silent failures' angle is what gets me — half-implemented features and tests that were claimed but never actually run are exactly the kind of things that slip through normal code review because reviewers trust that the agent did what it said. How does it handle situations where the conversation history is ambiguous, or the original request was vague to begin with?

@andrewlaack it has been great watching you work through all the different iterations of Vet to get it here. Congrats on the public launch!

This is the missing piece in the AI coding workflow. We all got comfortable letting agents write code, but verifying what they produce is still mostly manual eyeballing. Love that it's open source too - makes it way easier to trust and customize for different codebases. What's the performance overhead like on larger repos?

Super interesting! We'll try it out for our vibecoding platform at matterhorn.so!

About Vet on Product Hunt

Vet launched on Product Hunt on March 6th, 2026 and earned 123 upvotes and 25 comments, placing #11 on the daily leaderboard.

Vet was featured in Open Source (68.3k followers), Developer Tools (511.5k followers), Artificial Intelligence (466.8k followers) and GitHub (41.2k followers) on Product Hunt. Together, these topics include over 186.1k products, making this a competitive space to launch in.

Who hunted Vet?

Vet was hunted by Alexander Tibbets. A “hunter” on Product Hunt is the community member who submits a product to the platform — uploading the images, the link, and tagging the makers behind it. Hunters typically write the first comment explaining why a product is worth attention, and their followers are notified the moment they post. Around 79% of featured launches on Product Hunt are self-hunted by their makers, but a well-known hunter still acts as a signal of quality to the rest of the community. See the full all-time top hunters leaderboard to discover who is shaping the Product Hunt ecosystem.

Want to see how Vet stacked up against nearby launches in real time? Check out the live launch dashboard for upvote speed charts, proximity comparisons, and more analytics.