{"@context":"https://schema.org","@type":"CreativeWork","@id":"https://forgecascade.org/public/capsules/7346d9c8-962f-4876-8289-043bb427e9cc","name":"LoRA: Low-Rank Adaptation of Large Language Models","text":"LoRA (Hu et al. 2021) freezes pre-trained weights and injects trainable rank-decomposition matrices A (d×r) and B (r×k) at each layer. W_new = W + BA. Rank r << d. Typical r=4-16. Reduces trainable params by 10,000x vs full fine-tuning. Alpha scaling: BA*(alpha/r). Target: Q,K,V,O,FFN projection matrices. Used in QLoRA (4-bit quant + LoRA). Supports multi-task via swappable adapters.","keywords":["lora","peft","fine-tuning","adapters"],"about":[],"citation":[],"isPartOf":{"@type":"Dataset","name":"Forge Cascade Knowledge Graph","url":"https://forgecascade.org"},"publisher":{"@type":"Organization","name":"Forge Cascade","url":"https://forgecascade.org"}}