LoRA (Low-Rank Adaptation)

[ˈlɔːrə loʊ ræŋk ædæpˈteɪʃən]
Machine Learning
Last updated: April 4, 2025

Definition

A parameter-efficient fine-tuning (PEFT) technique that freezes the base model's weights and trains small low-rank matrices injected into its layers.

Detailed Explanation

LoRA fine-tunes a large pre-trained model without updating its original weights. Instead of learning a full update to a weight matrix W, it keeps W frozen and learns an additive low-rank update ΔW = BA, where B and A are two small matrices whose shared inner dimension (the rank r) is far smaller than the dimensions of W. Only A and B are trained, which reduces the number of trainable parameters by orders of magnitude, and the adapted layer simply computes Wx + BAx. After training, BA can be merged into W, so the adapted model adds no inference latency.
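
As a concrete illustration, here is a minimal sketch of the idea in PyTorch. The class name LoRALinear and the default rank and alpha values are illustrative choices for this sketch, not part of any particular library's API:

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """A frozen linear layer plus a trainable low-rank update (illustrative sketch)."""

    def __init__(self, base: nn.Linear, rank: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        # Freeze the pre-trained weights: they receive no gradients.
        for p in self.base.parameters():
            p.requires_grad = False
        # Trainable low-rank factors: delta_W = B @ A, with rank r << min(d, k).
        # A starts as small random values, B as zeros, so delta_W = 0 at the
        # start of training and the model initially behaves like the base model.
        self.A = nn.Parameter(torch.randn(rank, base.in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(base.out_features, rank))
        self.scaling = alpha / rank

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Frozen path (W x) plus the scaled low-rank update (B A x).
        return self.base(x) + (x @ self.A.T @ self.B.T) * self.scaling
```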

Use Cases

Efficiently adapting large pre-trained models (LLMs, vision models) to specific tasks; cutting fine-tuning time, memory, and cost; and serving many task-specific adaptations of a single base model, since each adapter stores only its small low-rank matrices (see the sketch after this paragraph).
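
Because the base weights stay frozen, several adapters can share them. A hypothetical continuation of the sketch above shows why per-task storage is tiny:

```python
import torch.nn as nn

# One shared frozen base layer, two task-specific adapters
# (reuses the illustrative LoRALinear class defined above).
base = nn.Linear(768, 768)
summarize = LoRALinear(base, rank=8)   # would be trained on summarization data
translate = LoRALinear(base, rank=8)   # would be trained on translation data

# Each adapter trains only its A and B matrices:
# 2 * 8 * 768 = 12,288 parameters, roughly 2% of the 768 * 768 = 589,824
# frozen weights they both share.
```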

Related Terms