
MiniMax M2.7

Self-evolving AI model powering autonomous agents

API
Open Source
Artificial Intelligence

Hunted by Rohan Chaubey

MiniMax M2.7 is a self-evolving AI model that helped build its own capabilities. It can create agent harnesses, collaborate via Agent Teams, and handle complex tasks like coding, debugging, and research. With strong SWE-Pro performance and reduced intervention time, it moves beyond static AI into systems that continuously learn, adapt, and execute complex work with minimal human input. Available via API and MiniMax Agent for builders pushing AI-native workflows.

Top comment

MiniMax M2.7 is an AI agent model pushing toward self-evolving systems: not just assisting with work, but actively improving how it works.

Current AI still needs heavy human orchestration across research, engineering, and workflows. M2.7 builds and optimizes its own agent harness, using memory, self-feedback, and iterative loops to improve performance over time.


What’s different is the self-evolution loop — it can analyze failures, modify its own setup, and re-run experiments autonomously. That’s a big shift from static models.
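To make the loop concrete, here is a minimal toy sketch of an analyze-modify-rerun cycle like the one described above. All names (`run_experiment`, `analyze_failure`, `AgentConfig`) are illustrative placeholders, not MiniMax's actual API, and the "experiment" is simulated rather than calling a real model.

```python
from dataclasses import dataclass, field

@dataclass
class AgentConfig:
    """Toy agent-harness configuration the loop is allowed to mutate."""
    max_steps: int = 5
    fixes: list = field(default_factory=list)

def run_experiment(config: AgentConfig, task: str) -> bool:
    # Stand-in for running the task with the current harness; here,
    # success requires either a larger step budget or a known fix.
    return config.max_steps >= 8 or "retry-on-timeout" in config.fixes

def analyze_failure(config: AgentConfig, task: str) -> str:
    # Stand-in for the self-feedback step: diagnose why the run failed.
    return "retry-on-timeout" if config.max_steps < 8 else "none"

def self_evolve(task: str, max_iters: int = 3) -> AgentConfig:
    config = AgentConfig()
    for _ in range(max_iters):
        if run_experiment(config, task):
            break                    # task succeeded; stop evolving
        fix = analyze_failure(config, task)
        config.fixes.append(fix)     # modify its own setup, then re-run
    return config

config = self_evolve("flaky integration test")
print(config.fixes)  # fixes the loop applied before succeeding
```

The point of the sketch is the control flow, not the stubs: the agent observes a failure, produces a structured diagnosis, edits its own configuration, and re-runs, with a hard iteration cap as a basic safety brake.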

Key features:

  • Agent Teams for multi-agent collaboration

  • Complex skill execution with high adherence

  • Strong performance across software engineering + office workflows

  • End-to-end project delivery + real-world debugging

Benefits: Faster experimentation, reduced manual effort, and AI that acts more like a junior researcher/operator than just a tool.

Great for developers, researchers, and teams building AI-native workflows or automating complex tasks.


How far do you think self-evolving agents can go before humans are only setting goals and everything else runs autonomously?

I hunt the latest and greatest launches in tech, SaaS, and AI. Follow @rohanrecommends to be notified.

Comment highlights

Ok this is awesome...just gave it a try on a specific use case and totally worked as intended. Well done. I'm going to keep using this.

"Self-evolving" is doing a lot of work in the tagline - curious what that actually means in practice. Is M2.7 updating weights from deployment feedback, or is it more like improved fine-tuning pipelines between releases? The autonomous agents use case is where I keep hitting model limitations - mostly around tool use consistency across long sessions. Does this address that specifically or is it more general capability improvement?

Big believer in this direction. The real shift is not better models; it's systems that can execute and improve over time.

What’s interesting here is the move toward agent teams and real task execution. That’s where most things still break today.

Curious how you’re thinking about consistency, memory, and control as the system evolves, especially for enterprise use.

If this works as described, it's not just another AI product; it's a step toward autonomous execution. @mazula95 @rohanrecommends

I've been using MiniMax 2.5 in my product and the bar is really high already - can't wait to try 2.7

Congratulations on the launch 🎉, we've seen great results with 2.5, and we added 2.7 to agenhq.com already 🚀

This direction feels inevitable.
Once agents start improving their own workflows, it stops being just a tool and becomes more like a system you’re managing.

The part I keep thinking about is control.
If the system keeps evolving its own setup, how do you keep things predictable in production?

Especially for real workflows, stability often matters more than raw capability.

@MiniMax is cooking. They launched M2.5 last month with SOTA performance in coding (SWE-Bench Verified 80.2%), and they're pushing it forward (again) with M2.7, which has an 88% win-rate vs M2.5.

Mind-blowing.

Oh and pro tip: you can give it a spin for free in @Kilo Code and @KiloClaw ✌️

Self-evolving AI is the right direction for any prediction system where the underlying distribution changes continuously. Our football analytics model faces exactly this — features that predicted match outcomes well last season (possession stats, pressing intensity) need reweighting as teams adapt tactically. A static model doesn't flag when its feature importance has drifted, so you only discover the problem in retrospect.

The 'analyze failures, modify setup, re-run' loop you describe is essentially formalizing what good data scientists do manually between seasons. The self-feedback mechanism is what's interesting — the system needs to know not just that it failed, but why it failed in a way that suggests a structural fix vs a data quality issue.

The hard tradeoff in real-time prediction contexts: how does M2.7 balance exploration (trying new configurations) vs exploitation (keeping outputs stable while a process is live)? In a sports context, you can't be A/B testing model architectures mid-match. Curious if the self-evolution loop has a 'freeze' mode for production stability.

The long-term memory feature is what makes this interesting to me. Most AI agents today are essentially stateless – you start fresh every session and lose all the context you've built up. An agent that actually remembers your preferences and past tasks over weeks could be a real productivity unlock.

How does the memory work in practice? Is there a way to review or edit what the agent has stored about you, or is it a black box? Being able to curate that memory layer would make a big difference for trust, especially when connecting it to workplace tools.

About MiniMax M2.7 on Product Hunt

Self-evolving AI model powering autonomous agents

MiniMax M2.7 launched on Product Hunt on March 19th, 2026, earning 394 upvotes and 12 comments and finishing as #2 Product of the Day.

MiniMax M2.7 was featured in API (98k followers), Open Source (68.3k followers) and Artificial Intelligence (466.2k followers) on Product Hunt. Together, these topics include over 107.2k products, making this a competitive space to launch in.

Who hunted MiniMax M2.7?

MiniMax M2.7 was hunted by Rohan Chaubey. A “hunter” on Product Hunt is the community member who submits a product to the platform — uploading the images, the link, and tagging the makers behind it. Hunters typically write the first comment explaining why a product is worth attention, and their followers are notified the moment they post. Around 79% of featured launches on Product Hunt are self-hunted by their makers, but a well-known hunter still acts as a signal of quality to the rest of the community. See the full all-time top hunters leaderboard to discover who is shaping the Product Hunt ecosystem.

Reviews

MiniMax M2.7 has received 3 reviews on Product Hunt with an average rating of 5.00/5. Read all reviews on Product Hunt.

Want to see how MiniMax M2.7 stacked up against nearby launches in real time? Check out the live launch dashboard for upvote speed charts, proximity comparisons, and more analytics.