Phi 3.5 MoE

Model family: Phi
Phi-3.5-MoE is a small, efficient MoE LLM from Microsoft. It routes tokens across 16 experts (~3.8B each) with top-2 gating (~6.6B active parameters), delivering solid reasoning and code performance at low latency. The model is trained on high-quality, reasoning-dense data and post-trained with SFT plus RL-style techniques (e.g., PPO/DPO) for instruction following and safety. It supports a 128K context window, multilingual use, and an MIT open-weight license. Builders can deploy it serverlessly in Azure AI or run/fine-tune locally (e.g., LoRA). Typical uses include long-context QA and summarization, coding assistants, math, and multilingual applications in constrained environments.
New Text Gen 7
Released: August 22, 2024
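
The routing described above, 16 experts with only the top 2 activated per token, can be illustrated with a short sketch. This is a simplified illustration of top-2 gating in PyTorch; the layer names, hidden dimensions, and expert MLP shape are assumptions for clarity, not Microsoft's actual implementation.

```python
# Minimal sketch of top-2 expert routing (illustrative, not the Phi-3.5-MoE code).
import torch
import torch.nn as nn
import torch.nn.functional as F

class Top2MoELayer(nn.Module):
    def __init__(self, d_model: int, d_ff: int, num_experts: int = 16):
        super().__init__()
        self.gate = nn.Linear(d_model, num_experts, bias=False)  # router
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(d_model, d_ff), nn.GELU(), nn.Linear(d_ff, d_model))
            for _ in range(num_experts)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (tokens, d_model). The router scores every expert per token, but
        # only the top-2 experts are evaluated, so only a fraction of the total
        # parameters is active for any given token.
        logits = self.gate(x)                            # (tokens, num_experts)
        weights, idx = torch.topk(logits, k=2, dim=-1)   # top-2 gating
        weights = F.softmax(weights, dim=-1)
        out = torch.zeros_like(x)
        for slot in range(2):
            for e, expert in enumerate(self.experts):
                mask = idx[:, slot] == e
                if mask.any():
                    out[mask] += weights[mask, slot].unsqueeze(-1) * expert(x[mask])
        return out
```
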

Overview

Phi-3.5-MoE is Microsoft’s open-weight Mixture-of-Experts model (16×3.8B experts, ~6.6B active parameters) built for strong reasoning and coding with a 128K-token context window. It is MIT-licensed and available in Azure AI Foundry and on Hugging Face.
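
Because the weights are open, a local run with the Hugging Face `transformers` library might look like the sketch below. The repository id `microsoft/Phi-3.5-MoE-instruct`, dtype, and generation settings are assumptions; check the model card for the exact id and recommended configuration.

```python
# Sketch: loading the open weights from Hugging Face and generating a reply.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "microsoft/Phi-3.5-MoE-instruct"  # assumed repo id; verify on the model card
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,   # only ~6.6B parameters are active per token,
    device_map="auto",            # but all 16 experts must still fit in memory
    trust_remote_code=True,
)

messages = [{"role": "user", "content": "Summarize the benefits of MoE models."}]
inputs = tokenizer.apply_chat_template(
    messages, return_tensors="pt", add_generation_prompt=True
).to(model.device)
outputs = model.generate(inputs, max_new_tokens=200)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```
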

About Gentext Group

Location: Washington, US

Tools using Phi 3.5 MoE

No tools found for this model yet.

Last updated: February 3, 2026