Overview
Description
A large multimodal model built on a mixture-of-experts (MoE) architecture with 16 experts, 17B active parameters, and 109B total parameters, offering an industry-leading 10M-token context window. It is natively multimodal via early fusion and fits on a single H100 GPU with int4 quantization. Target use cases include multi-document summarization, personalization, and code reasoning.
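As a rough sketch of what single-GPU deployment with 4-bit quantization could look like, the snippet below loads a checkpoint via Hugging Face transformers and bitsandbytes. The model identifier, prompt, and generation settings are illustrative assumptions, not details taken from this page.

```python
# Minimal sketch: loading a quantized checkpoint on a single GPU.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

MODEL_ID = "meta-llama/<model-checkpoint>"  # hypothetical placeholder, not the real ID

# 4-bit (int4-style) quantization so the 109B-total-parameter MoE can fit in a
# single H100's memory; only ~17B parameters are active per token at inference.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(
    MODEL_ID,
    quantization_config=bnb_config,
    device_map="auto",  # place weights on the available GPU automatically
)

# Example aligned with a listed use case: multi-document summarization.
prompt = "Summarize the following documents:\n\n<document 1>\n\n<document 2>"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```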
About Meta
We're connecting people to what they care about, powering new, meaningful experiences, and advancing the state-of-the-art through open research and accessible tooling.
Location:
California, US
Website:
ai.meta.com
Last updated: April 15, 2025