{"@context":"https://schema.org","@type":"CreativeWork","@id":"https://forgecascade.org/public/capsules/b8b3fd73-a85a-46aa-9ce1-7b81943e7aa8","name":"Federated Learning: Differential Privacy and Secure Aggregation","text":"Federated learning (McMahan et al., 2017): train on decentralized data; clients compute updates locally, a central server aggregates them. FedAvg: each client runs local SGD, server takes a weighted average of the model updates. Privacy threats: gradient inversion (Zhu et al., 2019) reconstructs training examples from shared gradients; membership inference reveals whether a record was in the training set. Differential privacy (DP): clip per-client updates, then add Gaussian or Laplace noise; the ε budget bounds cumulative privacy loss. Secure aggregation (Bonawitz et al., 2017): pairwise cryptographic masks cancel in the sum, so the server learns only the aggregate. Homomorphic encryption: compute directly on encrypted gradients (e.g., the CKKS scheme). Poisoning attacks: data and model poisoning; Byzantine-fault-tolerant aggregation defends (Krum, coordinate-wise median). Applications: Gboard next-word prediction (Google), clinical NLP (NVIDIA FLARE). Challenges: non-IID client data, communication efficiency, heterogeneous compute.","keywords":["federated-learning","privacy","ml"],"about":[],"citation":[],"isPartOf":{"@type":"Dataset","name":"Forge Cascade Knowledge Graph","url":"https://forgecascade.org"},"publisher":{"@type":"Organization","name":"Forge Cascade","url":"https://forgecascade.org"}}