Overview
Phi-3 Medium is Microsoft’s ~14B open-weight LLM tuned for strong instruction following, reasoning, and coding. It’s MIT-licensed, supports tool/function calling and structured (JSON) outputs, and runs well on a single high-memory GPU or efficient multi-GPU setups.
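Since the model is text-in/text-out, a prompt ultimately reaches it as a single formatted string. The sketch below builds a prompt in the Phi-3 chat format; the `<|user|>`/`<|assistant|>`/`<|end|>` special tokens are an assumption based on the family's instruction-tuned checkpoints, and in practice you would let the tokenizer's chat template do this for you.

```python
def build_phi3_prompt(messages):
    """Render a list of {"role": ..., "content": ...} dicts into the
    Phi-3 chat format (assumed special tokens: <|system|>, <|user|>,
    <|assistant|>, <|end|>)."""
    parts = []
    for msg in messages:
        parts.append(f"<|{msg['role']}|>\n{msg['content']}<|end|>\n")
    parts.append("<|assistant|>\n")  # generation continues from here
    return "".join(parts)

prompt = build_phi3_prompt([
    {"role": "user", "content": "Summarize RAG in one sentence."},
])
print(prompt)
```

With the Hugging Face tokenizer for the model, `tokenizer.apply_chat_template(messages, add_generation_prompt=True)` produces the canonical version of this string.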
Description
Phi-3 Medium is a dense ~14B-parameter model in Microsoft’s Phi-3 family, positioned above the lightweight Mini and Small variants to balance output quality against cost and latency. It is a text-in/text-out, instruction-tuned model that performs reliably on practical reasoning, multilingual tasks, and software-engineering assistance. It supports function/tool calling and JSON-formatted outputs, and it integrates cleanly with RAG and agent frameworks. Open weights under a permissive MIT license make it easy to deploy on Azure AI or run locally; common workflows apply 8- or 4-bit quantization and LoRA adapters to fit tighter memory budgets without large quality drops. Typical uses include chat assistants and copilots, document QA and summarization, analytics over long reports, and robust coding helpers, especially where you want a step up from 3–7B models without the inference cost of very large LLMs.
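A common pattern behind the JSON-formatted outputs mentioned above is to validate the model's reply before using it downstream. The sketch below shows that validation step with the standard library only; the model reply is a hard-coded, hypothetical stand-in, and `required_keys` is an illustrative check, not part of any Phi-3 API.

```python
import json

def parse_model_json(raw, required_keys):
    """Validate a model's JSON reply: strip the code fences some models
    wrap JSON in, parse the body, and check the expected keys exist."""
    text = raw.strip()
    if text.startswith("```"):
        # Drop the opening fence line (possibly "```json") and the
        # trailing closing fence.
        text = text.split("\n", 1)[1].rsplit("```", 1)[0]
    data = json.loads(text)
    missing = [k for k in required_keys if k not in data]
    if missing:
        raise ValueError(f"missing keys: {missing}")
    return data

# Hypothetical model reply, standing in for an actual generation.
reply = '```json\n{"title": "Q3 report", "sentiment": "positive"}\n```'
print(parse_model_json(reply, ["title", "sentiment"]))
```

On a parse or validation failure, a typical loop re-prompts the model with the error message appended, which usually converges within a retry or two.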
About Microsoft
Location: Redmond, WA, US
Website: news.microsoft.com
Last updated: September 22, 2025