Privacy-Preserving Machine Learning with Federated and Split Learning Architectures

Authors

  • Minal Junaid, Chenab Institute of Information Technology
  • Sania Naveed, Chenab Institute of Information Technology

Keywords

Privacy-preserving machine learning, Federated learning, Split learning, Distributed AI, Data security, Collaborative training

Abstract

The growing integration of artificial intelligence (AI) into sectors such as healthcare, finance, and industrial automation has heightened the need for privacy-preserving machine learning (PPML) techniques. Traditional machine learning models typically require centralized data collection, raising concerns about data breaches, misuse, and non-compliance with data protection laws. Federated Learning (FL) and Split Learning (SL) have emerged as compelling alternatives that keep user data local while enabling collaborative training across distributed devices and institutions. This paper explores the core principles, comparative strengths, and architectural variants of FL and SL in privacy-sensitive contexts. Through detailed experimentation on healthcare and financial datasets, we evaluate the efficiency, communication overhead, model performance, and privacy guarantees of these approaches. The results demonstrate that while FL offers scalability and client-side control, SL provides stronger data privacy at the cost of increased computational load on central servers. We conclude that hybrid and adaptive combinations of these models may offer the most robust route toward secure, decentralized AI systems.
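The collaborative-training idea the abstract describes can be illustrated with a minimal federated averaging (FedAvg) sketch: each client trains on its private shard, and the server aggregates only the resulting weights. The toy linear-regression task, client count, and all names here are illustrative assumptions, not the paper's actual experimental setup.

```python
# Minimal FedAvg sketch (illustrative assumptions throughout):
# clients train locally on private data; only weights are shared and averaged.
import numpy as np

rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])  # hypothetical ground-truth model

def local_sgd(w, X, y, lr=0.1, epochs=20):
    """One client's local training on its private shard (plain gradient descent)."""
    w = w.copy()
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w -= lr * grad
    return w

# Three clients, each holding a private shard that never leaves the device.
shards = []
for _ in range(3):
    X = rng.normal(size=(50, 2))
    y = X @ true_w + rng.normal(scale=0.05, size=50)
    shards.append((X, y))

w_global = np.zeros(2)
for _round in range(10):                       # communication rounds
    local_weights = [local_sgd(w_global, X, y) for X, y in shards]
    w_global = np.mean(local_weights, axis=0)  # server averages weights only

print(w_global)  # converges toward true_w without raw data leaving any client
```

Split learning differs in that each client computes only the first few layers and sends intermediate activations, rather than full weight updates, to the server, which is where the server-side computational load noted above comes from.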

Published

2025-05-01