This repository is primarily based on our survey paper 📚🔍:
Synergizing Foundation Models and Federated Learning: A Survey
Unlike smaller models, Foundation Models (FMs) 🧠, such as LLMs and VLMs, are built upon vast amounts of training data 📊. While general-purpose FMs can be trained on public data, domain-specific FMs require proprietary data for pre-training and fine-tuning, which raises privacy concerns 🔒. Federated Learning (FL) 🤝💻, a compelling privacy-preserving approach, enables collaborative learning across distributed datasets while keeping raw data on the devices that own it 🛡️. Synergizing FMs and FL 🧠➕🌍 offers a promising way to address the data-availability and privacy challenges of FM development, potentially transforming large-scale machine learning in sensitive domains.
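
For readers new to FL, the aggregation step at the heart of most methods listed below is federated averaging (FedAvg). Below is a minimal NumPy sketch; the `fedavg` helper and toy numbers are ours for illustration, not code from the survey:

```python
import numpy as np

def fedavg(client_weights, client_sizes):
    """Aggregate client parameters, weighted by local dataset size."""
    total = sum(client_sizes)
    return sum(w * (n / total) for w, n in zip(client_weights, client_sizes))

# Toy example: three clients, each holding a locally trained parameter vector.
clients = [np.array([1.0, 2.0]), np.array([3.0, 4.0]), np.array([5.0, 6.0])]
sizes = [100, 200, 700]  # local training-set sizes; raw data never leaves the clients
print(fedavg(clients, sizes))  # -> [4.2 5.2]
```

The server only ever sees model parameters, never the clients' training samples; that is the privacy property the papers below build on and probe.
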
🙏 If you find this survey useful for your research, please consider citing:
@misc{li2024synergizing,
      title={Synergizing Foundation Models and Federated Learning: A Survey},
      author={Shenghui Li and Fanghua Ye and Meng Fang and Jiaxu Zhao and Yun-Hin Chan and Edith C. -H. Ngai and Thiemo Voigt},
      year={2024},
      eprint={2406.12844},
      archivePrefix={arXiv}
}
Table of Contents

- Knowledge Distillation & Transfer
- Communication Efficiency
- Prompt Tuning
- Low-Rank Adaptation (LoRA)
- Personalization
- Black-Box Prompt Tuning
- Privacy Attacks
- Privacy Protection
- Backdoor & Safety Threats
- Speech Recognition
- Recommendation
- Related Surveys
- Frameworks & Tools

Knowledge Distillation & Transfer

Title | Venue | Date | GitHub |
---|---|---|---|
FedPFT: Federated Proxy Fine-Tuning of Foundation Models | IJCAI | 2024-08 | |
FedMKT: Federated Mutual Knowledge Transfer for Large and Small Language Models | arXiv | 2024-06 | |
Federated Domain-Specific Knowledge Transfer on Large Language Models Using Synthetic Data | arXiv | 2024-05 | |
FedDAT: An Approach for Foundation Model Finetuning in Multi-Modal Heterogeneous Federated Learning | AAAI | 2024-03 |

Communication Efficiency

Title | Venue | Date | GitHub |
---|---|---|---|
FedPFT: Federated Proxy Fine-Tuning of Foundation Models | IJCAI | 2024-08 | |
FedSelect: Customized Selection of Parameters for Fine-Tuning during Personalized Federated Learning | CVPR | 2024-06 | |
Federated LoRA with Sparse Communication | arXiv | 2024-06 | |
Only Send What You Need: Learning to Communicate Efficiently in Federated Multilingual Machine Translation | WWW | 2024-05 |
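
Several entries above (e.g., the sparse-communication LoRA work and "Only Send What You Need") reduce uplink traffic by transmitting only part of each update. A generic top-k sparsification sketch, assuming a plain NumPy update vector rather than any specific paper's method:

```python
import numpy as np

def topk_sparsify(update, k):
    """Keep only the k largest-magnitude entries of an update."""
    idx = np.argsort(np.abs(update))[-k:]   # indices of the k largest entries
    return idx, update[idx]                 # only this (index, value) pair is sent

update = np.array([0.1, -2.0, 0.03, 1.5, -0.2])
print(topk_sparsify(update, k=2))           # -> indices [3, 1], values [1.5, -2.0]
```
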

Prompt Tuning

Title | Venue | Date | GitHub |
---|---|---|---|
Unlocking the Potential of Prompt-Tuning in Bridging Generalized and Personalized Federated Learning | CVPR | 2024-06 | |
Visual Prompt Based Personalized Federated Learning | TMLR | 2024-02 | |
Efficient Model Personalization in Federated Learning via Client-Specific Prompt Generation | ICCV | 2023-10 | |
Learning Federated Visual Prompt in Null Space for MRI Reconstruction | CVPR | 2023-06 |
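
As rough intuition for this line of work: prompt tuning trains a small matrix of "virtual token" embeddings that is prepended to the input while the backbone stays frozen, so in FL only the prompt needs to be exchanged. A minimal sketch with toy dimensions of our choosing:

```python
import numpy as np

rng = np.random.default_rng(0)
vocab_emb = rng.standard_normal((1000, 16))        # frozen token embedding table (toy sizes)
soft_prompt = rng.standard_normal((4, 16)) * 0.02  # trainable: 4 virtual tokens

def embed_with_prompt(token_ids):
    # Only soft_prompt is trained and exchanged; the backbone stays frozen.
    return np.concatenate([soft_prompt, vocab_emb[token_ids]], axis=0)

x = embed_with_prompt(np.array([7, 42, 3]))
print(x.shape)  # (7, 16): 4 prompt vectors + 3 token embeddings
```
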

Low-Rank Adaptation (LoRA)

Title | Venue | Date | GitHub |
---|---|---|---|
Federated LoRA with Sparse Communication | arXiv | 2024-06 | |
Save It All: Enabling Full Parameter Tuning for Federated Large Language Models via Cycle Block Gradient Descent | arXiv | 2024-06 | |
Promoting Data and Model Privacy in Federated Learning through Quantized LoRA | arXiv | 2024-06 | |
Only Send What You Need: Learning to Communicate Efficiently in Federated Multilingual Machine Translation | WWW | 2024-05 | |
SLoRA: Federated Parameter Efficient Fine-Tuning of Language Models | FL@FM-NeurIPS | 2023-12 |
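
The common core of these methods is low-rank adaptation: a frozen weight W is augmented with a trainable rank-r product A @ B, so clients train and communicate only the small factors. A hedged NumPy sketch with toy sizes and one common zero-init convention; details differ per paper:

```python
import numpy as np

d, k, r = 768, 768, 8                      # hidden sizes and LoRA rank (toy choice)
rng = np.random.default_rng(0)
W = rng.standard_normal((d, k))            # frozen pre-trained weight
A = rng.standard_normal((d, r)) * 0.01     # trainable down-projection
B = np.zeros((r, k))                       # trainable up-projection; zero-init keeps
                                           # W + A @ B == W before any training

def forward(x):
    # Effective weight is W + A @ B; clients train and exchange only A and B.
    return x @ (W + A @ B)

print(forward(rng.standard_normal((2, d))).shape)             # (2, 768)
print(f"full: {d * k:,} params vs. LoRA: {d * r + r * k:,}")  # 589,824 vs. 12,288
```

The roughly 50x reduction in trainable (and hence communicated) parameters is why LoRA variants dominate this category.
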

Personalization

Title | Venue | Date | GitHub |
---|---|---|---|
On the Client Preference of LLM Fine-tuning in Federated Learning | arXiv | 2024-07 | |
FedSelect: Customized Selection of Parameters for Fine-Tuning during Personalized Federated Learning | CVPR | 2024-06 | |
FedCLIP: Fast Generalization and Personalization for CLIP in Federated Learning | IEEE DEB | 2023-03 |

Black-Box Prompt Tuning

Title | Venue | Date | GitHub |
---|---|---|---|
FedBPT: Efficient Federated Black-box Prompt Tuning for Large Language Models | ICML | 2024-07 | |
Personalized Federated Learning for Text Classification with Gradient-Free Prompt Tuning | NAACL | 2024-06 | |
ZooPFL: Exploring Black-box Foundation Models for Personalized Federated Learning | arXiv | 2023-10 | |
Efficient Federated Prompt Tuning for Black-box Large Pre-trained Models | arXiv | 2023-10 |
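
Black-box tuning assumes clients can query the model's loss but not backpropagate through it, so gradients must be estimated from function evaluations alone. A minimal two-point zeroth-order estimator, sketched under that assumption (not any specific paper's estimator):

```python
import numpy as np

def zo_gradient(loss_fn, theta, mu=1e-3, rng=np.random.default_rng(0)):
    """Two-point gradient estimate from loss queries alone (no backprop)."""
    u = rng.standard_normal(theta.shape)
    # Finite difference along a random direction u; this estimates the gradient
    # of a smoothed version of loss_fn, which suffices for SGD-style updates.
    return (loss_fn(theta + mu * u) - loss_fn(theta - mu * u)) / (2 * mu) * u

loss = lambda t: float(np.sum(t ** 2))           # toy loss with true gradient 2 * theta
print(zo_gradient(loss, np.array([1.0, -2.0])))  # noisy estimate of [2., -4.]
```
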

Privacy Attacks

Title | Venue | Date | GitHub |
---|---|---|---|
Analysis of Privacy Leakage in Federated Large Language Models | AISTATS | 2024-05 | |
Recovering Private Text in Federated Learning of Language Models | NeurIPS | 2022-11 |
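
To make the threat concrete: for a linear model trained on a single example, the shared weight gradient is a scalar multiple of the private input, and the bias gradient leaks that scalar, so the server can reconstruct the input exactly. A toy demonstration (our own example, far simpler than the text-recovery attacks above):

```python
import numpy as np

w, b = np.array([0.5, -1.0]), 0.2
x_private, y = np.array([2.0, 3.0]), 1.0

err = (w @ x_private + b) - y      # residual of the squared loss
grad_w = 2 * err * x_private       # weight gradient the client would share
grad_b = 2 * err                   # bias gradient leaks the scalar factor

print(grad_w / grad_b)             # -> [2. 3.]: x_private, recovered by the server
```
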

Privacy Protection

Title | Venue | Date | GitHub |
---|---|---|---|
Promoting Data and Model Privacy in Federated Learning through Quantized LoRA | arXiv | 2024-06 | |
Improving LoRA in Privacy-preserving Federated Learning | ICLR | 2024-05 | |
Differentially Private Low-Rank Adaptation of Large Language Model Using Federated Learning | arXiv | 2023-12 | |
Federated Learning of Gboard Language Models with Differential Privacy | ACL | 2023-07 |
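
A recurring defense in these papers is differential privacy: clip each client update to bound its sensitivity, then add calibrated Gaussian noise before aggregation. A minimal sketch of that clip-and-noise step, with illustrative constants and no privacy-budget accounting:

```python
import numpy as np

def privatize(update, clip_norm=1.0, noise_mult=1.0, rng=np.random.default_rng(0)):
    """Clip an update to bound its sensitivity, then add Gaussian noise."""
    clipped = update * min(1.0, clip_norm / np.linalg.norm(update))
    return clipped + rng.normal(0.0, noise_mult * clip_norm, update.shape)

print(privatize(np.array([3.0, 4.0])))  # norm 5 is clipped to norm 1, then noised
```
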

Backdoor & Safety Threats

Title | Venue | Date | GitHub |
---|---|---|---|
Emerging Safety Attack and Defense in Federated Instruction Tuning of Large Language Models | arXiv | 2024-06 | |
Unveiling Backdoor Risks Brought by Foundation Models in Heterogeneous Federated Learning | PAKDD | 2024-04 | |
Backdoor Threats from Compromised Foundation Models to Federated Learning | FL@FM-NeurIPS | 2023-10 |

Speech Recognition

Title | Venue | Date | GitHub |
---|---|---|---|
Communication-Efficient Personalized Federated Learning for Speech-to-Text Tasks | ICASSP | 2024-03 | |
Joint Federated Learning and Personalization for on-Device ASR | ASRU | 2023-12 | |
Importance of Smoothness Induced by Optimizers in Fl4Asr: Towards Understanding Federated Learning for End-To-End ASR | ASRU | 2023-12 | |
Federated Learning for Speech Recognition: Revisiting Current Trends Towards Large-Scale ASR | FL@FM-NeurIPS | 2023-12 |

Recommendation

Title | Venue | Date | GitHub |
---|---|---|---|
Federated Adaptation for Foundation Model-based Recommendations | IJCAI | 2024-08 | |
Navigating the Future of Federated Recommendation Systems with Foundation Models | arXiv | 2024-06 | |
Prompt-enhanced Federated Content Representation Learning for Cross-domain Recommendation | WWW | 2024-05 | |
Federated Recommendation via Hybrid Retrieval Augmented Generation | arXiv | 2024-03 | |
LLM-based Federated Recommendation | arXiv | 2024-02 | |
TransFR: Transferable Federated Recommendation with Pre-trained Language Models | arXiv | 2024-02 |

Related Surveys

Title | Venue | Date | GitHub |
---|---|---|---|
A Survey on Efficient Federated Learning Methods for Foundation Model Training | IJCAI | 2024-08 | |
Federated Foundation Models: Privacy-Preserving and Collaborative Learning for Large Models | LREC-COLING | 2024-05 | |
Advances and Open Challenges in Federated Learning with Foundation Models | arXiv | 2024-04 | |
The Role of Federated Learning in a Wireless World with Foundation Models | IEEE WC | 2024 | |
When Foundation Model Meets Federated Learning: Motivations, Challenges, and Future Directions | arXiv | 2023-06

Frameworks & Tools

Title | Venue | Date | GitHub | Developed by |
---|---|---|---|---|
FederatedScope: A Flexible Federated Learning Platform for Heterogeneity | VLDB | 2023-09 | ||
FedLab: A Flexible Federated Learning Framework | JMLR | 2023-01 | ||
OpenFL: the open federated learning library | PMB | 2022-10 | ||
NVIDIA FLARE: Federated Learning from Simulation to Real-World | FL@NeurIPS | 2022-07 | ||
FedScale: Benchmarking Model and System Performance of Federated Learning at Scale | ICML | 2022-07 | ||
Scalable federated machine learning with FEDn | CCGrid | 2022-05 | ||
FLUTE: A Scalable Extensible Framework for High-Performance Federated Learning Simulations | FL@NeurIPS | 2022-03 | ||
FATE: An Industrial Grade Platform for Collaborative Learning With Data Protection | JMLR | 2021-08 | ||
Pysyft: A library for easy federated learning | FLS | 2021-06 | ||
Flower: A friendly federated learning research framework | arXiv | 2020-07 | |
FedML: A Research Library and Benchmark for Federated Machine Learning | SpicyFL | 2020-07 |
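
As a taste of what these frameworks look like in practice, here is a hedged sketch of a minimal Flower client. It assumes Flower 1.x's classic `NumPyClient` API (`start_numpy_client` is deprecated in newer releases in favor of `flwr.client.start_client`), and the toy one-parameter "model" is ours:

```python
import numpy as np
import flwr as fl

LOCAL_DATA = np.array([1.0, 2.0, 3.0])  # this client's private samples

class MeanClient(fl.client.NumPyClient):
    """Toy one-parameter model: FedAvg over clients yields the global mean."""

    def get_parameters(self, config):
        return [np.zeros(1)]

    def fit(self, parameters, config):
        # "Local training": replace the received parameter with the local mean.
        theta = np.array([LOCAL_DATA.mean()])
        return [theta], len(LOCAL_DATA), {}

    def evaluate(self, parameters, config):
        loss = float(np.mean((LOCAL_DATA - parameters[0]) ** 2))
        return loss, len(LOCAL_DATA), {}

# Server side (run in a separate process):
#   fl.server.start_server(config=fl.server.ServerConfig(num_rounds=3))
fl.client.start_numpy_client(server_address="127.0.0.1:8080", client=MeanClient())
```
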