
Ming Flash Omni 2.0

Ming-flash-omni 2.0 is InclusionAI and Ant Group’s flagship omni-MLLM, built on the Ling-2.0 Mixture-of-Experts (MoE) architecture with 100B total parameters and 6B active per token. It supports any-to-any tasks such as contextual ASR, text-to-image generation, unified audio and music generation, and multimodal reasoning, and reaches state-of-the-art performance among open omni-modal models across vision, audio, and image-editing benchmarks.
Released: February 11, 2026

Overview

Ming-flash-omni 2.0 is an open sparse-MoE omni-modal model that unifies text, image, video, and audio understanding and generation, using a Ling-2.0 Mixture-of-Experts backbone with 100B total parameters and roughly 6B active per token.
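As a rough illustration of the sparse-MoE idea behind the 100B-total / 6B-active split, the toy router below activates only the top-k experts per token, so per-token compute scales with the active subset rather than the full parameter count. This is a minimal sketch: the expert count, top-k value, and expert sizes are hypothetical and are not Ming's or Ling-2.0's actual configuration.

```python
# Toy top-k MoE routing sketch (hypothetical sizes, not Ming's real config).

NUM_EXPERTS = 32                    # hypothetical number of experts
TOP_K = 2                           # experts activated per token
PARAMS_PER_EXPERT = 3_000_000_000   # hypothetical parameters per expert


def route(router_scores, k=TOP_K):
    """Return indices of the k experts with the highest router scores."""
    ranked = sorted(range(len(router_scores)),
                    key=lambda i: router_scores[i], reverse=True)
    return ranked[:k]


# Only TOP_K of NUM_EXPERTS expert blocks run for any given token,
# so the active parameter count is a small fraction of the total.
total_params = NUM_EXPERTS * PARAMS_PER_EXPERT
active_params = TOP_K * PARAMS_PER_EXPERT
```

For example, `route([0.1, 0.9, 0.3, 0.5])` selects experts 1 and 3, and the active fraction here is `TOP_K / NUM_EXPERTS` of the total parameters, mirroring how a 100B-parameter MoE can cost only ~6B parameters of compute per token.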

About InclusionAI

InclusionAI is Ant Group’s open-source AI research organization, which develops and releases model families such as Ling and Ming.


Tools using Ming Flash Omni 2.0

No tools found for this model yet.

Last updated: February 25, 2026