Overview
The smallest of the Ministral models, yet still robust, edge-optimized for ultra-low-resource environments. Despite its compact size (~3 GB), it delivers strong language and vision capabilities, outperforming older 7B models. It runs entirely in the browser via WebGPU, making it ideal for IoT devices, mobile apps, and offline assistants.
Description
Ministral 3 3B represents a breakthrough in compact AI, delivering surprisingly strong performance in a tiny package that requires just 2 GB of RAM. Despite being the smallest model in the family, it outperforms Mistral 7B on most benchmarks and offers full multimodal capabilities, including vision understanding. The model runs 100% locally in the browser via WebGPU and achieves up to 385 tokens/second on an RTX 5090. Available in Base, Instruct, and Reasoning variants under the Apache 2.0 license, it is well suited for edge devices, mobile applications, real-time translation, lightweight content generation, and internet-free smart assistants. Its efficiency enables AI deployment in robotics, IoT devices, and any scenario requiring privacy-first, offline-capable intelligence.
About Mistral AI
Mistral AI is a Paris-based company that develops artificial intelligence and machine learning solutions, including open-weight large language models.
Industry:
Technology, Information and Internet
Company Size:
11-50
Location:
Paris, FR