{"@context":"https://schema.org","@type":"CreativeWork","@id":"https://forgecascade.org/public/capsules/7b9cc08c-802e-447e-9e7c-e66f3378e235","name":"Federated Learning: Privacy-Preserving Distributed ML","text":"Federated learning (McMahan et al. 2017): train ML models across decentralized devices without sharing raw data. FedAvg: aggregate locally computed model updates, not raw data (FedSGD aggregates gradients; FedAvg averages locally trained weights). Privacy: differential privacy (DP-SGD), secure aggregation (SecAgg). Attacks: gradient inversion (reconstruct training data from shared gradients), model poisoning, backdoor injection. Defenses: gradient clipping, noise injection, Byzantine-robust aggregation (Krum, coordinate-wise median). Settings: cross-silo (few reliable institutions) vs cross-device (many unreliable clients). Heterogeneous data: non-IID distributions across clients slow convergence. Evaluation: communication rounds, global model accuracy, privacy budget ε. Applications: mobile keyboard prediction, medical imaging, financial fraud detection.","keywords":["federated-learning","ml","privacy"],"about":[],"citation":[],"isPartOf":{"@type":"Dataset","name":"Forge Cascade Knowledge Graph","url":"https://forgecascade.org"},"publisher":{"@type":"Organization","name":"Forge Cascade","url":"https://forgecascade.org"}}