
Intern S1

Intern-S1 is InternLM’s most advanced open multimodal reasoning model, targeting AI4Science. It combines a 235B MoE language backbone with a 6B InternViT vision encoder and is further pretrained on 5T multimodal tokens, including 2.5T+ scientific tokens. This allows it to handle natural language, figures, molecules, protein sequences, and time-series signals, delivering leading results on scientific, math, and general multimodal benchmarks. A smaller Intern-S1-mini variant offers similar capabilities with an 8B LLM and a 0.3B vision encoder.

Released: August 21, 2025

Overview

Intern-S1 is a scientific multimodal foundation model built on a 235B-parameter Qwen3 MoE LLM plus a 6B vision encoder, trained on 5T multimodal tokens with over half from scientific domains.
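The headline numbers above can be summarized in a small sketch. The `ModelSpec` class below is purely illustrative (it is not part of any InternLM or Hugging Face API); the figures are the ones stated on this page, and the mini variant's pretraining-token counts are not published here, so only its parameter counts are shown.

```python
# Illustrative summary of Intern-S1's published specs.
# ModelSpec is a hypothetical helper, not an InternLM API.
from dataclasses import dataclass


@dataclass
class ModelSpec:
    llm_params_b: float       # language backbone parameters, in billions
    vision_params_b: float    # vision encoder parameters, in billions
    pretrain_tokens_t: float  # further-pretraining tokens, in trillions
    science_tokens_t: float   # scientific tokens among them, in trillions

    @property
    def total_params_b(self) -> float:
        # Rough total: MoE backbone plus vision encoder.
        return self.llm_params_b + self.vision_params_b

    @property
    def science_fraction(self) -> float:
        # Share of pretraining tokens drawn from scientific domains.
        return self.science_tokens_t / self.pretrain_tokens_t


intern_s1 = ModelSpec(
    llm_params_b=235.0, vision_params_b=6.0,
    pretrain_tokens_t=5.0, science_tokens_t=2.5,
)

print(intern_s1.total_params_b)   # 241.0
print(intern_s1.science_fraction) # 0.5 — "over half from scientific domains"

# Intern-S1-mini parameter counts (token mix not stated on this page):
mini_llm_params_b, mini_vision_params_b = 8.0, 0.3
```

This also makes the "over half" claim explicit: 2.5T of the 5T pretraining tokens are scientific, i.e. a fraction of 0.5.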

About InternLM

Welcome to the InternLM (Intern Large Models) organization. The Intern-series large models (Chinese name: 书生) are developed by Shanghai AI Laboratory, and we keep open-sourcing high-quality LLMs/MLLMs as well as toolchains for development and application.

Tools using Intern S1

No tools found for this model yet.

Last updated: February 5, 2026