# Xiaomi MiMo-V2.5 series open-sourced & Orbit 100 trillion token plan launched

Today, we are officially open-sourcing the Xiaomi MiMo-V2.5 series under the MIT license, which permits commercial inference deployment and secondary training with no additional authorization.

## Open license, fully open source

The MiMo-V2.5 series models began public testing on April 23rd. We thank all users for their enthusiastic feedback and encouragement during this period.

This series includes two models, both supporting a 1-million-token context window:

- MiMo-V2.5-Pro: Designed for complex task scenarios, deeply optimized for Agent and Coding applications. It ranks first among open-source models globally on the GDPVal-AA and ClawEval leaderboards.

- MiMo-V2.5: A native full-modal model supporting text, image, video, and audio understanding, with powerful Agent capabilities.

![image](https://platform.xiaomimimo.com/static/VZxrbdHSUoqx63x5RtycLBnznfe.cb0a305d.png)

We deeply understand that the true value of a model lies not in its leaderboard rankings, but in how efficiently it helps developers solve real-world problems. On the ClawEval leaderboard, MiMo-V2.5 sits at the optimal frontier of task completion rate and token efficiency.

![image](https://platform.xiaomimimo.com/static/BhnNbzCq5oBDXCxWPOcc9umtnPe.aeb7e48a.jpeg)

After refinement and verification during the public beta, the series has further improved in intelligence and stability and has reached the bar for release.

Today, we are releasing the MiMo-V2.5 model weights to global developers under the MIT License. At the same time, we are collaborating with chip manufacturers and inference framework teams to provide adaptation code, as our contribution to the open-source community and developer ecosystem.

The weights of both models (including the Base model) have been fully open-sourced under the permissive MIT license, allowing free commercial use, secondary training, and fine-tuning without additional authorization.

> Model weight collection: [https://huggingface.co/collections/XiaomiMiMo/mimo-v25](https://huggingface.co/collections/XiaomiMiMo/mimo-v25)
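
As a minimal sketch of getting started with the open weights, the snippet below pulls a checkpoint from the collection with `huggingface_hub`. The repository id shown is an assumption for illustration; check the collection page for the actual published model names.

```python
# Minimal sketch: download MiMo-V2.5 weights from the Hugging Face collection.
# NOTE: the repo_id below is a hypothetical example; verify the real repository
# names on the collection page before running.
from huggingface_hub import snapshot_download

local_dir = snapshot_download(repo_id="XiaomiMiMo/MiMo-V2.5")  # hypothetical repo id
print(f"Weights downloaded to: {local_dir}")
```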

For more details, see the model blog:

https://mimo.xiaomi.com/index#blog

## MiMo Orbit Program

We believe that the value of open source lies not only in publishing weights, but, more importantly, in building the ecosystem together.

To this end, we are officially launching the MiMo Orbit Program. 

The MiMo Orbit Program has two parts: the **Creator Trillion Token Incentive Program** for AI builders and the **Agent Ecosystem Co-construction Initiative** for Agent framework teams.

### Creator Trillion Token Incentive Program 

![image](https://platform.xiaomimimo.com/static/QzbXbqNIlou0rYxl8NjcZgQKn9f.49f56c9c.png)

Xiaomi MiMo will distribute free tokens to users worldwide: a total of **100 trillion (100T) tokens** over 30 days, with distribution ending once all tokens have been claimed.

Participation is by application. Approved applicants can receive up to the Max-tier Token Plan, which includes 1.6 billion Credits and is worth 659 yuan.

**Event Time**

From 00:00 on April 28, 2026, to 00:00 on May 28, 2026, Beijing Time

**Participation Method**

You can apply via the link or QR code below. We will carefully review each application and match benefits to your usage scenarios and needs. Successful applicants will receive a follow-up email from us.

Application URL: [100t.xiaomimimo.com](http://100t.xiaomimimo.com)

Application QR Code: 

![image](https://platform.xiaomimimo.com/static/ZDwPbXuHSoURfdxMjW7cxqnmnMe.c8e38d08.png)

### Agent Ecosystem Co-construction Initiative

Xiaomi MiMo provides dedicated support to Agent framework teams worldwide. We will offer limited-time free access for Agent frameworks, so that your users can try the MiMo series of models with zero barriers.

During ecosystem adaptation, we have worked closely with Agent framework vendors such as OpenCode, Hermes Agent, and KiloCode, and received a great deal of positive feedback and recognition.

![image](https://platform.xiaomimimo.com/static/BFeBbuvrVoxBXfxmpkPcSSASnCx.40e64d37.png) ![image](https://platform.xiaomimimo.com/static/GByLbKMNGoOqutxZcybcY9D8nOt.eda23ad9.png) ![image](https://platform.xiaomimimo.com/static/GjbfbWQbho1sE2xayLIc3TCunSd.116ef429.png) ![image](https://platform.xiaomimimo.com/static/WlzwbOfDXorn7OxU45UczO0xnae.4621aefb.png)

We welcome like-minded Agent framework developers and vendors to contact us: [business-mimo@xiaomi.com](mailto:business-mimo@xiaomi.com)

## Chip ecosystem and inference framework adaptation

MiMo-V2.5-Pro completed integration and adaptation with multiple chip manufacturers on the first day of its open-source release. A partial list of partners follows:

- Alibaba T-Head

> The T-Head Zhenwu 810E achieves deep adaptation through its fully self-developed AI software stack.

- Amazon Web Services

> Amazon Web Services (AWS) completed in-depth adaptation of MiMo-V2.5-Pro on its self-developed Trainium2 chip, the Neuron SDK, and the vLLM inference framework, making the model globally available on day one of open-sourcing. The next-generation 3nm Trainium3 will further unlock the model's Agentic performance.

- AMD

> AMD, building on the ROCm open-source software stack, provides Day-0 adaptation and comprehensive optimization support for MiMo-V2.5-Pro, helping developers and enterprise users deploy the model efficiently and bring it into production.

- Baidu Kunlun Chip

> Kunlun Chip's self-developed architecture ensures stable, efficient operation of the models on its platform through low-level operator optimization and software-hardware co-acceleration, building a solid compute foundation for upper-layer applications.

- Suiyuan Technology

> Suiyuan Technology performs in-depth optimization with its self-developed Yusuan TopsRider software stack. MiMo-V2.5-Pro has been fully adapted to the Suiyuan L600, running stably with high throughput and low latency and maintaining excellent performance in complex tasks and long-sequence scenarios.

- Muxi 

> The Muxi Xiyun C Series relies on the fully self-developed MXMACA software stack to provide end-to-end native support from Triton syntax down to the Muxi GPU instruction set, delivering improved performance.

- Tianshu Zhixin

> Tianshu Zhixin achieves Day-0 deep adaptation of the models, relying on fully self-developed software and hardware to deliver high-quality compute, with efficient adaptation and easy migration that unlock model performance and ensure stable operation.

In addition, the MiMo-V2.5 series models have also completed Day-0 adaptation for the mainstream inference frameworks SGLang and vLLM.

![image](https://platform.xiaomimimo.com/static/F6upbwMkIol7iex8R0Sc81XYnhh.3167ebd9.png)
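
Both frameworks expose an OpenAI-compatible endpoint, so a self-hosted MiMo-V2.5 deployment can be queried with the standard OpenAI SDK. Below is a minimal sketch under assumed settings: the server listens on localhost:8000 and serves the model under the hypothetical name `XiaomiMiMo/MiMo-V2.5-Pro` (e.g. launched with `vllm serve` or SGLang's launch_server); adjust both to your setup.

```python
# Minimal sketch: query a self-hosted MiMo-V2.5 model through the
# OpenAI-compatible API that vLLM and SGLang both expose.
# Assumptions: server at localhost:8000, model registered under the
# hypothetical name "XiaomiMiMo/MiMo-V2.5-Pro".
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

response = client.chat.completions.create(
    model="XiaomiMiMo/MiMo-V2.5-Pro",  # must match the name the server registered
    messages=[
        {"role": "user", "content": "Summarize the MiMo Orbit Program in one sentence."}
    ],
)
print(response.choices[0].message.content)
```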

From the first-generation model to today's full open-sourcing of MiMo-V2.5, every step of MiMo's growth has been inseparable from the community's feedback and collaboration.

We will continue to invest in iterating model capabilities and improving the ecosystem, working with global developers to bring Agents into every application scenario.
