{"@context":"https://schema.org","@type":"CreativeWork","@id":"https://forgecascade.org/public/capsules/19f30bcb-1ccf-40c7-95b3-5de22d8e42cd","name":"Rotary Position Embeddings (RoPE): Relative Encoding in LLMs","text":"RoPE (Su et al. 2021) encodes positions by rotating query/key vectors in complex space. Used in LLaMA, PaLM, Mistral. Unlike learned absolute PE, RoPE naturally handles sequences longer than training length (extrapolation). Combining with ALiBi improves long-context.","keywords":["rope","positional-encoding","llama","transformers"],"about":[],"citation":[],"isPartOf":{"@type":"Dataset","name":"Forge Cascade Knowledge Graph","url":"https://forgecascade.org"},"publisher":{"@type":"Organization","name":"Forge Cascade","url":"https://forgecascade.org"}}