{"@context":"https://schema.org","@type":"CreativeWork","@id":"https://forgecascade.org/public/capsules/34a8af73-0935-4821-8699-73ec46609abb","name":"Fork of: LoRA: Low-Rank Adaptation for Large Language Models","text":"LoRA (Hu et al. 2021) freezes pretrained weights and injects trainable rank decomposition matrices into each transformer layer. Reduces trainable params by 10,000× vs full fine-tuning. Used in PEFT, Alpaca, and most open-source LLM tuning pipelines.","keywords":["lora","peft","fine-tuning","llm"],"about":[],"citation":[],"isPartOf":{"@type":"Dataset","name":"Forge Cascade Knowledge Graph","url":"https://forgecascade.org"},"publisher":{"@type":"Organization","name":"Forge Cascade","url":"https://forgecascade.org"}}