
MiniMax M2.7 on VM0. Multilingual at ×0.1

Strong multilingual reasoning at one-tenth of Sonnet's credit cost. Generous timeout for long thinking steps.

200K tokens · Text / Code · Prompt cache

MiniMax M2.7 is the cheap multilingual workhorse in the lineup. Reach for it when the agent's primary language isn't English and unit cost matters: multilingual reply drafting, mixed-language support triage, scheduled summarisation over non-English corpora. It's not trying to outscore Sonnet on English benchmarks; it's keeping multilingual production traffic affordable.

Vendor list price is $0.30 / $1.20 per 1M tokens, and the API is Anthropic-compatible. VM0 sets a 50-minute API timeout for the MiniMax provider so long thinking steps complete reliably. Reach for Sonnet 4.6 on English tool-use and Haiku 4.5 on latency-critical replies.

What is MiniMax M2.7?

Available since the M2 series launch · Latest text reasoning model in MiniMax's M2 series.

MiniMax M2.7 is from MiniMax, an AI lab with a multilingual and multimodal product line. The text reasoning side is what's exposed on VM0; MiniMax's image and voice products are separate offerings on the lab's platform.

On VM0, M2.7 is the default model on the MiniMax API-key provider. The Built-in lineup carries it at ×0.1, one of the lowest multipliers in the catalogue, making it the default cheap-but-credible reasoner for multilingual workloads.

VM0's MiniMax provider sets a 50-minute API timeout and disables non-essential traffic, so long thinking steps complete reliably without dropping connections.

What's notable about MiniMax M2.7

Headline architecture and capability features.

M2.7 exposes an Anthropic-compatible API surface with a 200K-token context window and multilingual coverage. It runs at api.minimax.io.
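Because the surface is Anthropic-compatible, a request is just the familiar messages payload posted to the MiniMax endpoint. A minimal sketch follows; the exact endpoint path and model id are assumptions for illustration (only the api.minimax.io host is stated above), so check the provider's docs for the real values.

```python
import json

# Hypothetical values for illustration -- verify against MiniMax's docs.
API_URL = "https://api.minimax.io/v1/messages"  # assumed Anthropic-style path
MODEL_ID = "minimax-m2.7"                        # assumed model id

def build_request(prompt: str, max_tokens: int = 1024) -> dict:
    """Anthropic-style messages payload for a single-turn request."""
    return {
        "model": MODEL_ID,
        "max_tokens": max_tokens,
        "messages": [{"role": "user", "content": prompt}],
    }

payload = build_request("Résume ce fil de tickets en trois points.")
body = json.dumps(payload).encode()

# To actually send it (assuming a key in MINIMAX_API_KEY), something like:
#   import os, urllib.request
#   req = urllib.request.Request(API_URL, data=body, headers={
#       "content-type": "application/json",
#       "x-api-key": os.environ["MINIMAX_API_KEY"],
#       "anthropic-version": "2023-06-01",
#   })
#   resp = urllib.request.urlopen(req, timeout=3000)  # 50-minute fuse, matching VM0
```

The commented-out send uses a 3000-second client timeout to mirror VM0's 50-minute provider setting; on your own infrastructure you'd pick whatever fuse your thinking steps need.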

Specs at a glance

Family: MiniMax M2 series
Modalities: Text, code
Languages: Multilingual
Context window: 200K tokens
Prompt caching: Supported (Anthropic-compatible)
Available on VM0: Since launch
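Since prompt caching is listed as Anthropic-compatible, a large shared prefix (a style guide, a glossary) can in principle be marked cacheable with an Anthropic-style cache marker. This is a sketch under that assumption; the spec above only says caching is supported, so whether MiniMax honours `cache_control` in exactly this shape should be verified, and the model id is hypothetical.

```python
# Sketch: marking a large shared system prefix cacheable, Anthropic-style.
def cached_system_block(text: str) -> dict:
    """A system content block flagged for prompt caching."""
    return {
        "type": "text",
        "text": text,
        "cache_control": {"type": "ephemeral"},  # Anthropic's cache marker
    }

request = {
    "model": "minimax-m2.7",  # hypothetical model id
    "max_tokens": 512,
    "system": [cached_system_block("…long multilingual style guide…")],
    "messages": [{"role": "user", "content": "Draft a reply in German."}],
}
```

On repeat requests the cached prefix bills at the cache-read rate rather than the full input rate, which is what makes high-volume flows with a fat shared prompt cheap.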

MiniMax M2.7 benchmarks

MiniMax publishes fewer head-to-head benchmark numbers than Anthropic, Moonshot, or DeepSeek, so this section is deliberately sparse rather than padded. Pick M2.7 based on language profile and cost positioning rather than chasing leaderboards.

English multi-tool routing (VM0 internal): Below Sonnet 4.6

MiniMax M2.7 pricing

Provider list price, per 1M tokens.

Input: $0.30
Output: $1.20
Cache read: $0.06
Cache write: $0.38
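The table above is enough to estimate a job's vendor cost. A small worked example, using a made-up summarisation batch for the token counts:

```python
# List prices from the table above, in dollars per 1M tokens.
PRICE = {"input": 0.30, "output": 1.20, "cache_read": 0.06, "cache_write": 0.38}

def job_cost(input_tok, output_tok, cache_read_tok=0, cache_write_tok=0):
    """Vendor cost in dollars for one job; arguments are raw token counts."""
    return (
        input_tok * PRICE["input"]
        + output_tok * PRICE["output"]
        + cache_read_tok * PRICE["cache_read"]
        + cache_write_tok * PRICE["cache_write"]
    ) / 1_000_000

# Hypothetical overnight batch: 5M fresh input tokens, 0.5M output,
# 20M tokens served from the prompt cache.
cost = job_cost(5_000_000, 500_000, cache_read_tok=20_000_000)
# → $3.30: 1.50 (input) + 0.60 (output) + 1.20 (cache reads)
```

Note how the cache-read rate ($0.06) makes the 20M cached tokens cost less than the 5M fresh ones, which is the whole economics of caching a fat shared prefix.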

How MiniMax M2.7 behaves in practice

Observed behaviour from production agent runs.

Multilingual

Stronger on multilingual flows than the Anthropic family. The natural pick when the agent's primary language isn't English.

Reasoning

Solid for general agent work; below Sonnet 4.6 and Kimi K2.6 on the hardest tool-routing edge cases.

Latency

Slower than Haiku 4.5; the 50-minute VM0 timeout means very long thinking steps survive without dropping.

Best agent tasks for MiniMax M2.7

The multilingual customer agent that sounds native

Drafting replies, triaging tickets, holding multilingual chat threads where the conversation switches between languages mid-message. M2.7's training emphasised multilingual coverage, so the output reads more naturally for non-English-speaking customers than the same prompt routed through an English-first model would.

The overnight summariser running over multilingual content

Last quarter's customer conversations, a year of bilingual support tickets, a stack of multilingual regulatory documents — bulk summarisation jobs where speed isn't critical but unit cost matters a lot. M2.7's vendor price keeps the cost of "summarise everything" workflows low enough that they can run on every batch instead of every other week.

The thinking job that needs a long fuse

Multi-step reasoning passes that genuinely take ten minutes or more — deep research, document analysis, planning chains. VM0's MiniMax provider runs with a 50-minute API timeout (and disables non-essential traffic), so those long thinking steps complete cleanly instead of getting cut off and forcing a retry.

When to skip MiniMax M2.7

Skip M2.7 on English-first multi-tool agents where Sonnet 4.6 is more reliable, and on latency-critical replies where Haiku 4.5 is faster.

MiniMax M2.7 vs other models

MiniMax M2.7 vs Kimi K2.6

Kimi K2.6 (×0.3) has stronger reasoning and tool-use. M2.7 (×0.1) is one-third the cost and has a stronger multilingual profile. Default to Kimi for general work; reach for MiniMax for cheap multilingual background jobs.

MiniMax M2.7 vs DeepSeek V4 Flash

Both are sub-Haiku in cost. V4 Flash is faster and even cheaper (×0.02) but with weaker reasoning. M2.7 is the better pick when the work needs more than one-shot reasoning.

MiniMax M2.7 vs GLM-5.1

GLM-5.1 (×0.4) is more capable on long-context English-language work. M2.7 (×0.1) is much cheaper and the right pick when language profile and budget dominate.

Bottom line: should you use MiniMax M2.7?

The cheap multilingual default. Use it when language profile and budget call for it; reach for Kimi K2.6 or Sonnet 4.6 when raw quality matters.

Frequently asked questions

What's the API timeout?

VM0 sets a 50-minute timeout for the MiniMax provider, plus a flag to suppress non-essential traffic. Long thinking steps complete reliably.

Does MiniMax M2.7 support image input?

M2.7 on VM0 is the text reasoning model. MiniMax sells multimodal products separately; image and voice generation aren't part of the VM0 Built-in agent surface today.

Why is the multiplier so low (×0.1)?

Vendor list price is genuinely low ($0.30/$1.20 per 1M) and VM0 prices the model accordingly. Use it as a cheap multilingual workhorse, not a reasoning replacement for Sonnet.


Using MiniMax M2.7 on VM0

Two ways to access MiniMax M2.7 on VM0

VM0 supports MiniMax M2.7 as a Built-in model billed in VM0 credits, and through bring-your-own with a MiniMax API key. The Built-in path uses VM0 Managed routing and the credit multiplier explained below; the bring-your-own path bills you directly with the upstream vendor and skips the VM0 credit conversion entirely.

VM0's recommendation

VM0 positions MiniMax M2.7 as a cost-saving option rather than a core agent model. Use it to optimise unit cost on non-core work, such as bulk classification, pre-filters, scheduled multilingual summarisation, or pinned legacy agents, while keeping Claude Opus 4.7, Claude Opus 4.6, or Claude Sonnet 4.6 on the steps that decide the run.

Credits and the ×0.1 multiplier

Every Built-in model on VM0 is priced as a multiple of Claude Sonnet 4.6, which sits at the ×1 credit baseline. MiniMax M2.7 bills at ×0.1 credits. The multiplier is what shows up on your VM0 invoice; the vendor list price in the pricing table above is what the upstream provider charges before VM0 converts it into credits.

A step on M2.7 therefore costs 0.1× the credits of an equivalent step on Sonnet 4.6, putting it well below the credit baseline and making it the natural pick for high-volume background work where cost-per-step matters more than peak reasoning quality.
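The credit arithmetic is a straight multiplication. A minimal sketch, where the 40-credit Sonnet step is an invented figure purely for illustration:

```python
# Sonnet 4.6 is the ×1 credit baseline; M2.7 bills at ×0.1.
SONNET_MULTIPLIER = 1.0
M2_7_MULTIPLIER = 0.1

def step_credits(baseline_credits: float, multiplier: float) -> float:
    """Credits billed for a step, relative to the Sonnet 4.6 baseline."""
    return baseline_credits * multiplier

# If a given step would bill 40 credits on Sonnet 4.6 (hypothetical figure):
on_sonnet = step_credits(40, SONNET_MULTIPLIER)  # 40.0 credits
on_m27 = step_credits(40, M2_7_MULTIPLIER)       # 4.0 credits
```

The same token volume routed through M2.7 instead of Sonnet 4.6 consumes a tenth of the credits, which is why it suits high-volume background steps.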

Available on VM0 since launch.