Gemma models
Browse all models from this model family.
- TranslateGemma (by Google): a Gemma 3-based open translation suite, with 4B, 12B, and 27B models that translate across 55 languages and can also translate text inside images, from phones up to cloud servers. Released 1 month ago.
- MedGemma 1.5 4B (by Google): an updated 4B-parameter medical vision-language model that improves CT, MRI, and histopathology understanding while remaining compute-efficient for offline and cloud healthcare text and imaging workflows. Released 1 month ago.
- Gemma Scope 2 (by Google): Google DeepMind's open interpretability suite of sparse autoencoders and transcoders for Gemma 3, letting researchers inspect features and circuits across all layers to study safety-relevant behavior. Released 1 month ago.
- Gemma 3n (by Google): the "nano" edition of the Gemma 3 family, an open-weight, on-device-friendly model tuned for fast reasoning and coding at low memory cost. It supports long-context prompts, function/tool calling, and structured (JSON) outputs, making it well suited to mobile/edge copilots and lightweight agents (a minimal structured-output sketch appears after the list). Released 9 months ago.
- PaliGemma 2 (by Google): Google's next-generation open-weight vision-language model in the Gemma family. It takes images (documents, charts, screenshots, photos) plus text and answers in text, with stronger OCR, grounded visual reasoning, multi-image understanding, and easy fine-tuning for real applications on a single GPU or edge devices. Released 1 year ago.
- PaliGemma (by Google): Google's open-weight vision-language model in the Gemma family. It takes images (screenshots, documents, charts) plus text and answers in text, making it well suited to OCR, captioning, VQA, and UI/document understanding. Lightweight and fine-tunable, it runs on a single GPU and supports quantization for edge deployment (a minimal inference sketch appears after the list). Released 1 year ago.
- CodeGemma (by Google): Google's code-specialized Gemma variant, tuned for software tasks such as code generation, completion (including fill-in-the-middle), refactoring, debugging, and docstring and test creation. It is efficient, runs on a single GPU or locally with quantization, and supports tool/function calling and structured (JSON) outputs (a fill-in-the-middle sketch appears after the list). Released 1 year ago.
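The structured (JSON) output support mentioned for Gemma 3n can be exercised with an ordinary prompt-and-parse loop. Below is a minimal sketch, assuming the Hugging Face transformers text-generation pipeline and the instruction-tuned checkpoint id google/gemma-3n-E2B-it; both the checkpoint id and the pipeline task are assumptions, and depending on your transformers version the multimodal Gemma 3n checkpoints may instead be routed through the image-text-to-text pipeline.

```python
import json
from transformers import pipeline

# Assumed checkpoint id; any instruction-tuned Gemma 3n variant should behave similarly.
generator = pipeline("text-generation", model="google/gemma-3n-E2B-it", device_map="auto")

messages = [
    {
        "role": "user",
        "content": (
            "Extract the fields from this sentence and reply with JSON only, "
            "using the keys 'city' and 'temperature_c': "
            "'It is currently 18 degrees Celsius in Zurich.'"
        ),
    }
]

result = generator(messages, max_new_tokens=64)
reply = result[0]["generated_text"][-1]["content"]  # last turn is the model's reply

# Strip optional Markdown fences before parsing, since chat models sometimes add them.
reply = reply.strip().removeprefix("```json").removesuffix("```").strip()
print(json.loads(reply))  # e.g. {"city": "Zurich", "temperature_c": 18}
```

For production agents, grammar-constrained decoding is more reliable than prompt-only JSON, but the prompt-and-parse pattern above is the lightest-weight way to use the feature.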
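The PaliGemma entries advertise OCR, captioning, and VQA on a single GPU. Below is a minimal inference sketch, assuming the transformers PaliGemma classes and the mixed-task checkpoint google/paligemma-3b-mix-224; the checkpoint id, the local image path, and the task-prefix prompt style are assumptions based on the public PaliGemma release, and PaliGemma 2 checkpoints use different ids.

```python
import torch
from PIL import Image
from transformers import AutoProcessor, PaliGemmaForConditionalGeneration

model_id = "google/paligemma-3b-mix-224"  # assumed mixed-task checkpoint
processor = AutoProcessor.from_pretrained(model_id)
model = PaliGemmaForConditionalGeneration.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

image = Image.open("invoice.png")  # any local document or photo (hypothetical file)
# PaliGemma prompts use task prefixes such as "answer en", "caption en", "ocr".
prompt = "answer en What is the total amount due?"

inputs = processor(text=prompt, images=image, return_tensors="pt").to(model.device)
with torch.no_grad():
    output = model.generate(**inputs, max_new_tokens=32)

# Decode only the newly generated tokens, not the echoed prompt.
answer = processor.decode(output[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True)
print(answer)
```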
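CodeGemma's fill-in-the-middle completion is driven by special control tokens placed around the gap to be filled. Below is a minimal sketch, assuming the base completion checkpoint google/codegemma-2b and the FIM control tokens documented with the CodeGemma release; the checkpoint is gated on Hugging Face, so an accepted license and authentication are required.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "google/codegemma-2b"  # assumed base completion checkpoint (gated)
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16, device_map="auto")

# Fill-in-the-middle: the model generates the code that belongs between the
# <|fim_prefix|> and <|fim_suffix|> segments, starting after <|fim_middle|>.
prompt = (
    "<|fim_prefix|>def mean(values):\n"
    '    """Return the arithmetic mean of a list of numbers."""\n'
    "    return <|fim_suffix|>\n<|fim_middle|>"
)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
with torch.no_grad():
    output = model.generate(**inputs, max_new_tokens=32)

# Keep only the newly generated middle segment.
completion = tokenizer.decode(output[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True)
print(completion)  # expected to resemble: sum(values) / len(values)
```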
