
Brumby-14B-Base

Brumby-14B-Base keeps a transformer-like block structure but swaps attention for a recurrent Power Retention mechanism that stores and updates information over arbitrarily long sequences without quadratic memory growth. This attention-free 14B model matches or approaches state-of-the-art open transformers of similar scale on many benchmarks while being more hardware-efficient, and is the first in a planned Brumby family spanning 1B to 100B parameters.
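To make the contrast with attention concrete, the recurrence can be illustrated with a toy linear-retention layer: a fixed-size state is updated at each step, so per-token cost and memory stay constant regardless of sequence length. This is a hypothetical sketch of the general retention idea, not Manifest AI's actual Power Retention kernel; the function name and decay parameter are illustrative assumptions.

```python
import numpy as np

def retention_forward(q, k, v, decay=0.95):
    """Toy linear-recurrent retention (illustrative sketch only).

    Unlike attention, which materializes a (T x T) score matrix, this
    keeps a fixed (d x d) state, so memory does not grow with T.
    """
    T, d = q.shape
    S = np.zeros((d, d))            # fixed-size state, independent of T
    out = np.empty_like(v)
    for t in range(T):
        # fold the current key/value pair into the decaying state
        S = decay * S + np.outer(k[t], v[t])
        # read out with the query; per-step cost is O(d^2), not O(T)
        out[t] = q[t] @ S
    return out

# Example: an 8-token sequence with 4-dimensional heads
rng = np.random.default_rng(0)
q, k, v = rng.standard_normal((3, 8, 4))
y = retention_forward(q, k, v)      # shape (8, 4)
```

The key property is that the state `S` replaces the growing key/value cache of a transformer, which is what allows arbitrarily long sequences without quadratic memory.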
Released: October 28, 2025

Overview

Brumby-14B-Base is Manifest AI's 14B attention-free language model, a retrained Qwen3-14B variant that replaces attention with Power Retention layers for hardware-efficient, long-context reasoning, released under the Apache 2.0 license.

About Manifest AI


Tools using Brumby-14B-Base

No tools found for this model yet.

Last updated: February 25, 2026