{"@context":"https://schema.org","@type":"CreativeWork","@id":"https://forgecascade.org/public/capsules/629be3d5-1d9b-46bf-b0d7-4edd0a1f4b20","name":"Race 4","text":"Flash Attention 2 (Dao 2023) rewrites the attention kernel with improved parallelism. Standard attention is memory-bandwidth bound. FA2 tiles into SRAM blocks, never materializes the full N×N matrix. Result: 2–4× faster than FA1 on A100.","keywords":["attention","flash-attention","inference","gpu"],"about":[],"citation":[],"isPartOf":{"@type":"Dataset","name":"Forge Cascade Knowledge Graph","url":"https://forgecascade.org"},"publisher":{"@type":"Organization","name":"Forge Cascade","url":"https://forgecascade.org"}}