
DeepSeek models

Browse all models from this model family.

  • Second-generation DeepSeek OCR model, “Visual Causal Flow,” aimed at more human-like visual encoding, with dynamic resolution support and strong document-to-Markdown and layout-aware OCR for images and PDFs.
    Released 26d ago
  • DeepSeek-V3.2-Speciale is a 685B-parameter research-only variant of DeepSeek-V3.2 that pushes open-weight reasoning ability to the limit, but disables tool calling and is intended purely for experimentation rather than everyday agent use.
    Released 2mo ago
  • DeepSeek V3.2 is the core, stable 3.2-series model designed as a strong general-purpose LLM. It builds on the DeepSeek V3 architecture and delivers balanced performance across reasoning, writing, coding, and multilingual tasks.
    Released 2mo ago
  • DeepSeek-Math-V2 is a math-specialized LLM built on DeepSeek-V3.2-Exp-Base, trained to generate and verify step-by-step proofs. It uses a learned verifier as a reward model so the generator learns to fix its own reasoning, reaching gold-medal-level scores on contests such as IMO 2025 and CMO 2024 and a near-perfect Putnam 2024 result with scaled test-time compute.
    Released 2mo ago
  • LLM-centric OCR model using “Contexts Optical Compression” to explore visual-text compression, offering fast streaming and batch OCR for images and PDFs via vLLM and Transformers runtimes (see the Transformers sketch after this list).
    Released 4mo ago
  • DeepSeek V3.2 Exp is an experimental build of the DeepSeek V3 line, tuned for deeper reasoning and stronger coding while keeping latency practical. It supports long context, function/tool calling, and schema-faithful JSON output (a tool-calling sketch follows this list), making it a good fit for RAG, agents, and repo-scale tasks when you want extra accuracy.
    Released 4mo ago
  • DeepSeek-V3.1-Terminus is DeepSeek’s flagship reasoning model, tuned for difficult analysis, math, and coding. It supports very long context, function/tool calling, reliable JSON outputs, and an optional extended-thinking mode—ideal for enterprise RAG, agents, and high-stakes workflows.
    Released 5mo ago
  • DeepSeek R1 is a reasoning-first large language model built to solve complex problems with explicit multi-step thinking. It excels at math, coding, and logical analysis, supports long context, tool/function calling, and structured JSON outputs, and can trade latency for higher accuracy via extended "thinking" budgets (see the reasoning sketch after this list).
    Released 1y ago
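For the OCR entries above, here is a minimal sketch of running DeepSeek-OCR through the Hugging Face Transformers runtime. The model id, the prompt format, and the custom infer() helper follow the model card's trust_remote_code implementation and are assumptions here; check the card for the current signature before relying on them.

```python
# Minimal sketch: document-to-Markdown OCR with DeepSeek-OCR via Transformers.
# Assumptions: the model id, the <image>/<|grounding|> prompt format, and the
# custom model.infer() helper come from the model card's remote code and may
# change; a CUDA GPU is assumed.
import torch
from transformers import AutoModel, AutoTokenizer

model_id = "deepseek-ai/DeepSeek-OCR"
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModel.from_pretrained(model_id, trust_remote_code=True, use_safetensors=True)
model = model.eval().cuda().to(torch.bfloat16)

prompt = "<image>\n<|grounding|>Convert the document to markdown."
result = model.infer(              # helper exposed by the model's remote code (assumption)
    tokenizer,
    prompt=prompt,
    image_file="scan.png",         # hypothetical input image
    output_path="ocr_out/",        # directory where Markdown/layout results are written
)
print(result)
```

The listing also mentions a vLLM runtime for streaming and batch OCR; that path uses the same weights but its exact invocation is not shown here.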

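The V3.2 Exp, V3.1-Terminus, and R1 entries all advertise function/tool calling and structured JSON output. DeepSeek serves these models through an OpenAI-compatible API at api.deepseek.com, so a standard OpenAI-SDK tool-calling request applies; the served model name "deepseek-chat" and the weather tool below are illustrative assumptions.

```python
# Minimal sketch: tool calling against DeepSeek's OpenAI-compatible endpoint.
from openai import OpenAI

client = OpenAI(api_key="YOUR_DEEPSEEK_API_KEY", base_url="https://api.deepseek.com")

tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",                     # hypothetical tool for illustration
        "description": "Look up current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

resp = client.chat.completions.create(
    model="deepseek-chat",                         # assumption: served name for the V3.x line
    messages=[{"role": "user", "content": "What's the weather in Hangzhou?"}],
    tools=tools,
)

msg = resp.choices[0].message
if msg.tool_calls:                                 # the model may answer directly instead
    call = msg.tool_calls[0]
    print(call.function.name, call.function.arguments)  # proposed call with JSON arguments
else:
    print(msg.content)
```

For structured output without tools, DeepSeek's API docs also describe a JSON mode (response_format={"type": "json_object"}) on the same endpoint.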

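The R1 entry's extended "thinking" is exposed through the same API: per DeepSeek's documentation, the reasoning model returns its chain of thought in a separate reasoning_content field alongside the final answer. The served model name "deepseek-reasoner" is taken from those docs and treated as an assumption here; a minimal sketch:

```python
# Minimal sketch: requesting explicit multi-step reasoning from the R1-style endpoint.
from openai import OpenAI

client = OpenAI(api_key="YOUR_DEEPSEEK_API_KEY", base_url="https://api.deepseek.com")

resp = client.chat.completions.create(
    model="deepseek-reasoner",   # assumption: served name for the reasoning (R1-style) model
    messages=[{"role": "user", "content": "Is 1000003 prime? Explain briefly."}],
)

msg = resp.choices[0].message
# The chain of thought arrives separately from the answer (field name per DeepSeek's docs).
print(getattr(msg, "reasoning_content", None))
print(msg.content)
```

Harder problems consume more of the model's thinking budget before the answer arrives, which is the latency-for-accuracy trade-off described in the R1 entry.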
