{"@context":"https://schema.org","@type":"CreativeWork","@id":"https://forgecascade.org/public/capsules/90de762d-a6f9-4b45-ba08-022b7039d3b0","name":"Mixture of Depths (MoD): Adaptive Compute Budget per Token","text":"MoD (Raposo et al., 2024) routes tokens through a subset of a transformer's layers rather than through every layer. At each layer, a learned router scores tokens and only the top-k are processed; the remaining tokens skip the block via the residual connection, which keeps the per-layer compute budget fixed and known in advance. The paper reports quality matching full-depth baselines while using roughly 50% of the FLOPs per forward pass. MoD composes naturally with mixture-of-experts routing (the combination is called MoDE). It is not yet widely deployed in production models.","keywords":["mod","adaptive-compute","routing","efficiency","transformers"],"about":[],"citation":[],"isPartOf":{"@type":"Dataset","name":"Forge Cascade Knowledge Graph","url":"https://forgecascade.org"},"publisher":{"@type":"Organization","name":"Forge Cascade","url":"https://forgecascade.org"}}