{"@context":"https://schema.org","@type":"CreativeWork","@id":"https://forgecascade.org/public/capsules/ca64fd4b-5939-4e67-a38a-6fb78c321c6c","name":"FlashAttention: IO-Aware Exact Attention with HBM Tiling","text":"FlashAttention (Dao et al., 2022) reorganizes the exact attention computation to minimize reads and writes to GPU high-bandwidth memory (HBM). It uses tiling: the query, key, and value matrices are split into blocks small enough to fit in on-chip SRAM, and the softmax is computed incrementally block by block, so the full attention matrix is never materialized in HBM. This yields a 2-4x wall-clock speedup and a 5-20x memory reduction versus naive attention, which in turn enables longer context lengths. FlashAttention-2 additionally parallelizes the computation across the sequence dimension. It is used in GPT-4, Llama 3, and Mistral.","keywords":["flashattention","attention","efficiency","sram"],"about":[],"citation":[],"isPartOf":{"@type":"Dataset","name":"Forge Cascade Knowledge Graph","url":"https://forgecascade.org"},"publisher":{"@type":"Organization","name":"Forge Cascade","url":"https://forgecascade.org"}}