FL-AsiaCCS’25 Workshop on Secure and Efficient Federated Learning
https://federated-learning.org/fl-asiaccs-2025/
Since its inception in 2016, Federated Learning (FL) has become a popular framework for collaboratively training machine learning models across multiple devices while ensuring that user data remains on those devices to preserve privacy. With the exponential growth of data and the increasing diversity of data types, coupled with limited computational resources, improving the efficiency of FL training is more urgent than ever. This challenge is further amplified by the growing popularity of training and fine-tuning large-scale models, such as Large Language Models (LLMs), which demand significant computational power. In addition, as FL is deployed in increasingly complex and heterogeneous environments, strengthening security and ensuring data privacy are essential to maintaining user trust. This workshop aims to bring together academics and industry experts to discuss future directions of federated learning research, along with practical setups and promising extensions of baseline approaches, with a special focus on enhancing both training efficiency and security in FL. By addressing these critical issues, we aim to pave the way for more sustainable and secure FL deployments that can effectively meet the requirements of modern AI applications.
The Workshop on Secure and Efficient Federated Learning aims to provide a platform for discussing the key challenges of federated learning and how they can be addressed simultaneously. Given the growing concern over data leakage in modern distributed systems and the need to train large-scale models with limited resources, the security and efficiency of federated learning are the central focus of this workshop.
Topics of interest include, but are not limited to:
- Coded Federated Learning
- Communication Efficiency in Federated Learning
- Federated Learning in Heterogeneous Networks
- Federated Learning of Large Language Models
- Privacy-Preserving Techniques for Federated Learning
- Scalable and Robust Federated Learning
- Security Attacks and Defenses in Federated Learning
- Trusted Execution Environments for Federated Learning
- Verifiable Federated Learning
Papers in double-blind ACM format, up to six pages including all text, figures, and references, can be submitted via EDAS at