Qwen models
Browse all models from this model family.
- Code-focused Qwen model family aimed at code generation, reasoning, and fixing across multiple parameter sizes. By Alibaba · Coding · Released 5d ago
- Qwen3.5-397B-A17B: a native vision-language model from Alibaba's Qwen team with 397B parameters (17B active per token). In the Qwen3.5-Plus hosted variant it supports up to 1M-token multimodal context, with strong reasoning, coding, and agentic tool use for large-scale deployment. By Alibaba · Multimodal · Released 7d ago
- Qwen-Coder-Qoder: a reinforced code model based on Qwen-Coder, custom-trained for the Qoder agentic coding platform to improve end-to-end programming performance inside its IDE and CLI workflows. By Alibaba · Coding · Released 20d ago
- Multilingual forced-alignment model that aligns speech and transcripts in 11 languages, predicting timestamps for arbitrary units in up to 5 minutes of audio with accuracy surpassing previous end-to-end aligners. By Alibaba · Audio · Released 26d ago
- Qwen3 Max Instruct: the trillion-parameter chat and tool-use version of Qwen3-Max, ranking near the top of LMArena and scoring about 69.6 on SWE-Bench Verified and 74.8 on Tau2 Bench for real-world coding and agent tasks. By Alibaba · Text · Released 29d ago
- Qwen3 Max Thinking: the heavy-reasoning variant of Qwen3-Max that pairs a code interpreter with parallel test-time computation to reach perfect scores on AIME 25 and HMMT and tackle the hardest multi-step math and logic tasks. By Alibaba · Text · Released 1mo ago
- Qwen's open text-to-speech model supporting multilingual speech generation with custom-voice capability. By Alibaba · Audio · Released 1mo ago
- Qwen-Image-2512: Qwen's December 2025 update to its 20B text-to-image foundation model, with more realistic humans, finer natural textures, and much better Chinese-English text rendering, released as open-source Apache-2.0 weights. By Alibaba · Image · Released 1mo ago
- Qwen-Image-Edit-2511: Qwen's 20B image-edit diffusion model for natural-language photo editing, upgrading 2509 with reduced image drift, stronger character and multi-person consistency, and built-in LoRA styles for stable, production-ready edits. By Alibaba · Image · Released 2mo ago
- Qwen3-TTS-VC-Flash: Qwen's VoiceClone voice-conversion model that clones any speaker from about 3 seconds of audio, then revoices speech in that identity across 10 languages with low word-error rates. By Alibaba · Audio · Released 2mo ago
- Qwen3-TTS-VD-Flash: Alibaba Qwen's voice-design TTS model that creates fully custom voices from natural-language instructions, letting users control timbre, rhythm, emotion, and persona for expressive, multilingual speech via the Qwen API. By Alibaba · Audio · Released 2mo ago
- Qwen DeepResearch 2511: a multi-step research model that plans queries, searches the web, reads pages and PDFs, extracts facts, and writes answers with citations. It supports long context, tool and function calling, and structured JSON logs for traceable results. By Alibaba · Text · Released 3mo ago
- Qwen3 VL Flash: Alibaba's fast vision-language model. It reads images with text, handles OCR and layout, explains charts and screenshots, and returns grounded answers or JSON. It is tuned for low latency, long context, tool calling, and cost-efficient multimodal assistants. By Alibaba · Text · Released 4mo ago
- Qwen3-Max: Alibaba's flagship Qwen3 model for high-accuracy reasoning, coding, and multilingual tasks. It supports long-context prompts, tool/function calling, and reliable JSON outputs, making it a strong fit for enterprise copilots, RAG pipelines, and agentic workflows. By Alibaba · Text · Released 5mo ago
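The tool/function calling mentioned for Qwen3-Max typically arrives over an OpenAI-compatible chat-completions API. As a minimal sketch, this builds such a request payload; the model id `qwen3-max` and the `get_weather` tool are illustrative assumptions, not confirmed values, so check your provider's docs for the real identifiers:

```python
import json

# Sketch of an OpenAI-compatible chat-completions request for a model like
# Qwen3-Max. The model id ("qwen3-max") and the tool are assumptions for
# illustration only.
def build_tool_call_request(user_prompt: str) -> dict:
    # A single function tool the model may choose to call instead of
    # answering directly.
    get_weather_tool = {
        "type": "function",
        "function": {
            "name": "get_weather",  # hypothetical tool name
            "description": "Look up current weather for a city.",
            "parameters": {
                "type": "object",
                "properties": {"city": {"type": "string"}},
                "required": ["city"],
            },
        },
    }
    return {
        "model": "qwen3-max",  # assumed model id
        "messages": [{"role": "user", "content": user_prompt}],
        "tools": [get_weather_tool],
        "tool_choice": "auto",  # let the model decide when to call the tool
    }

payload = build_tool_call_request("What's the weather in Hangzhou?")
print(json.dumps(payload, indent=2))
```

Sending this payload to the provider's chat-completions endpoint would return either a plain answer or a `tool_calls` entry naming `get_weather` with JSON arguments.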
- Qwen3-VL-235B-A22B: Alibaba's flagship MoE vision-language model. It takes images (docs, charts, screenshots, photos) plus text and returns grounded answers with strong OCR, layout understanding, and multi-image reasoning. The MoE routing activates ~22B parameters per token for frontier quality at practical latency, with long context, function/tool calling, and reliable JSON outputs. By Alibaba · Text · Released 5mo ago
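Vision-language models like the one above usually accept images through the widely used OpenAI-compatible content-parts message format. A minimal sketch, with the image URL and question as placeholders:

```python
# Sketch of a multimodal user message in the OpenAI-compatible vision format,
# mixing an image part and a text part. The URL below is a placeholder.
def build_vision_message(image_url: str, question: str) -> dict:
    return {
        "role": "user",
        "content": [
            {"type": "image_url", "image_url": {"url": image_url}},
            {"type": "text", "text": question},
        ],
    }

msg = build_vision_message(
    "https://example.com/chart.png",
    "Extract the data in this chart as JSON.",
)
print(msg)
```

The model then answers over both parts, e.g. returning the chart's values as structured JSON when asked to.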
- Qwen3 Coder Flash: the speed-tuned coding variant. It reads repos, writes diffs and tests, and supports long context, tool calling, and strict JSON or patch outputs. By Alibaba · Text · Released 5mo ago
- Qwen3-ASR-Flash: Alibaba Qwen's all-in-one speech-recognition model, built on Qwen3-Omni to stream low-latency transcripts in 11 languages, robust to noise, music, and code-switching, with optional text prompts to bias recognition. By Alibaba · Text · Released 5mo ago
- Qwen3-30B-A3B: Alibaba's open-weight MoE LLM (Apache-2.0) with 30.5B parameters and ~3.3B activated. It supports toggleable "thinking" vs. non-thinking modes, strong agent/tool calling, 100+ languages, and a 32K native context (≈131K with YaRN). Newer 2507 variants raise context to 256K. By Alibaba · Text · Released 6mo ago
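The "32K native, ≈131K with YaRN" context figures above come from a RoPE-scaling override applied at load time. A minimal sketch of such a block, using the Hugging Face transformers-style field names; exact keys can vary by inference runtime, so treat this as illustrative rather than a confirmed config:

```python
# Sketch of a YaRN rope-scaling override extending a 32,768-token native
# context by 4x to ~131K tokens, matching the figures quoted above.
# Field names follow the Hugging Face transformers convention and may
# differ in other runtimes.
NATIVE_CTX = 32_768
TARGET_CTX = 131_072

rope_scaling = {
    "rope_type": "yarn",
    # Scaling factor = target context / native context.
    "factor": TARGET_CTX / NATIVE_CTX,
    "original_max_position_embeddings": NATIVE_CTX,
}

print(rope_scaling["factor"])  # 4.0
```

Because YaRN scaling can degrade quality on short inputs, runtimes generally recommend enabling it only when prompts actually exceed the native window.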
- Qwen3-235B-A22B: Alibaba's flagship open-source MoE LLM (Apache-2.0): 235B total parameters with 22B activated (128 experts, 8 active). It uniquely toggles thinking vs. non-thinking modes, supports 100+ languages, and excels at agentic tool use. Context is 32K native (≈131K with YaRN); the Instruct-2507 variant offers 256K native and up to ~1M tokens. By Alibaba · Text · Released 6mo ago
- Qwen3 Coder: Alibaba's code-focused Qwen3 variant. It handles repo-aware generation, completion, debugging, and test creation, with long context, tool and function calling, and strict JSON or diff outputs for IDEs and agents. By Alibaba · Text · Released 7mo ago
