Phi models
- Phi Silica: Microsoft's on-device small language model (~3.3B parameters) built to run locally on Copilot+ PC NPUs. It is pre-tuned, 4-bit quantized, streams quickly (~230 ms to first token, up to ~20 tok/s), and currently offers a 2K-token context window, with 4K coming. Available to developers via the Windows App SDK's Phi Silica APIs. (Microsoft · text · released 1y ago)
- Phi-3 Medium: Microsoft's ~14B-parameter open-weight LLM tuned for strong instruction following, reasoning, and coding. It is MIT-licensed, supports tool/function calling and structured (JSON) outputs, and runs well on a single high-memory GPU or on efficient multi-GPU setups. (Microsoft · text · released 1y ago)
- Phi-3 Small: Microsoft's ~7B-parameter open-weight LLM tuned for instruction following, reasoning, and coding. It is compact enough for single-GPU or edge deployment and supports tool/function calling and structured (JSON) outputs under a permissive license. (Microsoft · text · released 1y ago)
- Phi-3-Vision: Microsoft's compact, open-weight multimodal model that understands images and text and answers in text. Optimized for documents, charts, UI screenshots, diagrams, and photos, it delivers strong OCR and visual reasoning in a small footprint suitable for single-GPU or edge deployment. (text · released 4mo ago)
- Phi-3.5-mini: Microsoft's compact open-weight LLM (~3.8B parameters) tuned for strong instruction following, reasoning, and coding. It supports long context (up to 128K tokens), function calling, and JSON outputs, and is MIT-licensed for easy use on Azure AI or locally. (Microsoft · text · released 1y ago)
- Phi-3.5-MoE: Microsoft's open-weight Mixture-of-Experts model (16×3.8B experts, ~6.6B active parameters) built for strong reasoning and coding with a 128K-token context window. It is MIT-licensed and available in Azure AI Foundry and on Hugging Face. (text · released 1y ago)
- Phi-4-mini-reasoning: a lightweight open model in the Phi-4 family, trained on high-quality, reasoning-dense synthetic data and further fine-tuned for more advanced math reasoning. Supports a 128K-token context length. (Microsoft · text · released 9mo ago)
- Phi-4-reasoning-plus: an open-weight reasoning model derived from Phi-4, trained with chain-of-thought supervised fine-tuning and additional reinforcement learning on high-quality synthetic and filtered public-domain data (math, science, coding), with safety alignment. It delivers stronger reasoning in a small model, at the cost of roughly 50% longer outputs and higher latency. (Microsoft · text · released 9mo ago)
- Phi-4-reasoning: an open-weight model fine-tuned from Phi-4 with chain-of-thought SFT and reinforcement learning, trained on high-quality synthetic and filtered public-domain data (math, science, coding) plus safety alignment, aimed at delivering strong reasoning in a compact model. (Microsoft · text · released 9mo ago)
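Several of the entries above note support for structured (JSON) outputs through hosted endpoints. As a minimal sketch, assuming an OpenAI-compatible chat-completions endpoint (as offered by hosts such as Azure AI Foundry) and a hypothetical model id, the request body for a JSON-constrained reply might be assembled like this; the model id, endpoint, and sampling settings are assumptions to adapt to your deployment:

```python
import json

def build_chat_request(model: str, user_prompt: str, json_output: bool = False) -> dict:
    """Build an OpenAI-style chat-completions request body.

    The exact model id and endpoint URL vary by host (Azure AI Foundry,
    a local inference server, etc.); adjust both for your deployment.
    """
    body = {
        "model": model,
        "messages": [
            {"role": "system", "content": "You are a helpful assistant."},
            {"role": "user", "content": user_prompt},
        ],
        "temperature": 0.2,
    }
    if json_output:
        # Ask the server to constrain the reply to valid JSON.
        body["response_format"] = {"type": "json_object"}
    return body

# Hypothetical model id; check your host's catalog for the exact name.
request = build_chat_request(
    "Phi-3.5-mini-instruct",
    "List three prime numbers as JSON.",
    json_output=True,
)
print(json.dumps(request, indent=2))
```

The body would then be POSTed to the host's chat-completions route with an API key; when `response_format` is omitted, the model replies in free-form text instead.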
