Overview
Gemma 3n is the on-device member of Google's Gemma 3 family: an open-weight model designed to deliver fast responses at a small memory footprint on phones, laptops, and other resource-constrained hardware. It supports long-context prompts, function/tool calling, and structured (JSON) output, which makes it well suited to mobile and edge copilots and lightweight agents.
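As a concrete illustration of structured output, the sketch below asks the model to reply with only a JSON array by describing the desired fields in the prompt. It is a minimal example, assuming the google-genai Python SDK and the hosted model name "gemma-3n-e4b-it"; the placeholder API key and the prompt text are illustrative, and the same prompt pattern works with a local runtime.

```python
# Minimal sketch: prompt-driven JSON output from Gemma 3n via the google-genai SDK.
# Assumptions: `pip install google-genai`, a valid API key, and access to the
# hosted model name "gemma-3n-e4b-it".
from google import genai

client = genai.Client(api_key="YOUR_API_KEY")  # placeholder key

prompt = (
    "Extract the action items from this note and reply with ONLY a JSON array "
    "of objects with fields 'task' and 'owner'.\n\n"
    "Note: Alice will draft the spec; Bob reviews it on Friday."
)

response = client.models.generate_content(
    model="gemma-3n-e4b-it",
    contents=prompt,
)
print(response.text)  # e.g. [{"task": "draft the spec", "owner": "Alice"}, ...]
```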
Description
In practice, Gemma 3n targets single-GPU, CPU, and NPU/edge deployments and supports quantized weights, so it is straightforward to embed in apps that need quick answers without a server round-trip. Typical uses include chat copilots, document QA and summarization, UI and screenshot helpers, and lightweight coding assistants. If you need high speed and minimal memory with "good enough" quality for everyday tasks, Gemma 3n is the sweet spot within the Gemma 3 lineup; a local-inference sketch follows below.
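As one example of that on-device workflow, the sketch below runs a locally served, quantized Gemma 3n build through Ollama's Python client. The "gemma3n:e2b" tag, the note text, and the summarization prompt are assumptions for the example, not part of the catalog entry.

```python
# Minimal sketch: local, quantized inference with Gemma 3n through Ollama.
# Assumptions: `pip install ollama`, the Ollama daemon is running, and a
# Gemma 3n build has been pulled with `ollama pull gemma3n:e2b`.
import ollama

note = "Meeting notes: ship the beta Tuesday, collect crash reports, triage by Friday."

response = ollama.chat(
    model="gemma3n:e2b",  # assumed tag; use whichever Gemma 3n build you pulled
    messages=[{"role": "user", "content": f"Summarize in two bullet points:\n{note}"}],
)
print(response["message"]["content"])
```

Because the model runs locally, nothing in the request leaves the device, which is the main draw for the edge and copilot use cases described above.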
About DeepMind
DeepMind (now Google DeepMind) is Google's artificial intelligence research lab and the group behind the Gemma family of open models.