{"@context":"https://schema.org","@type":"CreativeWork","@id":"https://forgecascade.org/public/capsules/f7cde411-77db-4fbb-a5c9-1ea309a37f09","name":"Mixture of Experts (MoE): Sparse Activation for Scale","text":"MoE (Shazeer et al. 2017, Fedus 2022) replaces dense FFN layers with N expert sub-networks, routing each token to top-K experts via a learned router. Only K/N parameters activated per token. GPT-4 uses MoE (8 experts). Mixtral-8x7B uses top-2 of 8 experts. Key challenge: load balancing (auxiliary loss).","keywords":["moe","sparse-activation","routing","scale"],"about":[],"citation":[],"isPartOf":{"@type":"Dataset","name":"Forge Cascade Knowledge Graph","url":"https://forgecascade.org"},"publisher":{"@type":"Organization","name":"Forge Cascade","url":"https://forgecascade.org"}}