Definition
A parameter-efficient fine-tuning (PEFT) technique that freezes the base model's weights and trains small low-rank matrices injected into its layers.
Detailed Explanation
Rather than updating a pre-trained weight matrix W directly, this technique (popularized as LoRA, Low-Rank Adaptation) freezes W and learns an additive update ΔW = B·A, where A and B are matrices of rank r chosen to be far smaller than the layer's dimensions. Only A and B are trained, so the number of updated parameters per layer drops from d_out × d_in to r × (d_in + d_out). Because the update is additive, it can be merged into W after training, adding no inference latency.
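The decomposition above can be sketched in a few lines of numpy. This is a hypothetical minimal example, not any library's actual API: the dimensions, the zero-initialization of B, and the alpha scaling factor are illustrative assumptions (zero-initializing one factor is a common convention so the adapted model starts out identical to the base model).

```python
import numpy as np

rng = np.random.default_rng(0)
d_in, d_out, r = 8, 4, 2                   # rank r is much smaller than d_in, d_out

W = rng.normal(size=(d_out, d_in))         # frozen pre-trained weight (never updated)
A = rng.normal(size=(r, d_in)) * 0.01      # trainable down-projection
B = np.zeros((d_out, r))                   # trainable up-projection, zero-initialized

def lora_forward(x, W, A, B, alpha=1.0):
    """y = x W^T + (alpha/r) * x A^T B^T.

    The full d_out x d_in update matrix B @ A is never materialized;
    only the two small factors are stored and trained.
    """
    r = A.shape[0]
    return x @ W.T + (alpha / r) * (x @ A.T @ B.T)

x = rng.normal(size=(3, d_in))
y = lora_forward(x, W, A, B)

# Trainable parameters here: r*(d_in + d_out) = 24 vs. d_in*d_out = 32 for a
# full update; at transformer scale the ratio is orders of magnitude larger.
```

With B zero-initialized, the adapter contributes nothing at first, so fine-tuning starts exactly from the base model's behavior.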
Use Cases
Efficiently adapting large pre-trained models (LLMs, vision models) to specific tasks; cutting fine-tuning time, memory, and cost; and serving multiple task-specific adaptations of a single base model by swapping in small per-task adapter weights.
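The multi-task use case can be sketched as one frozen base weight shared across several small adapters. The task names, initialization scales, and `forward` signature below are illustrative assumptions, not a real serving API:

```python
import numpy as np

rng = np.random.default_rng(1)
d_in, d_out, r = 8, 4, 2
W = rng.normal(size=(d_out, d_in))   # single frozen base weight, shared by all tasks

def make_adapter():
    # One trained (A, B) pair per task; each stores only r*(d_in + d_out) numbers.
    return (rng.normal(size=(r, d_in)) * 0.1,
            rng.normal(size=(d_out, r)) * 0.1)

adapters = {"task_a": make_adapter(), "task_b": make_adapter()}

def forward(x, task=None, alpha=1.0):
    """Base forward pass, plus the low-rank delta for the selected task (if any)."""
    out = x @ W.T
    if task is not None:
        A, B = adapters[task]
        out = out + (alpha / r) * (x @ A.T @ B.T)
    return out

x = rng.normal(size=(2, d_in))
base_out = forward(x)                 # base model, no adapter
task_out = forward(x, task="task_a")  # same base, task-specific behavior
```

Only the small adapter pairs differ between tasks, so many adaptations can be kept in memory or on disk for the cost of a single full model.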