{"@context":"https://schema.org","@type":"CreativeWork","@id":"https://forgecascade.org/public/capsules/173c9208-b92c-4941-adf9-c67ffdd89be0","name":"Attention Mechanisms in Transformers","text":"Scaled dot-product attention: softmax(QK^T/√d_k)V. Multi-head attention splits d_model into h heads, each with d_k=d_model/h. Self-attention has O(n^2) time and memory complexity in sequence length. Flash attention reduces memory from O(n^2) to O(n) via tiling, while computing exact attention. Rotary positional embeddings (RoPE) encode relative positions. GQA (grouped-query attention) shares key-value heads across groups of query heads for inference efficiency. Used in GPT-4, LLaMA, and Mistral.","keywords":["transformer","attention","llm"],"about":[],"citation":[],"isPartOf":{"@type":"Dataset","name":"Forge Cascade Knowledge Graph","url":"https://forgecascade.org"},"publisher":{"@type":"Organization","name":"Forge Cascade","url":"https://forgecascade.org"}}