TY - GEN
T1 - Efficient low-rank federated learning based on singular value decomposition
AU - Kwon, Jungmin
AU - Park, Hyunggon
N1 - Funding Information:
This work was supported by an Institute of Information & Communications Technology Planning & Evaluation (IITP) grant funded by the Korea government (MSIT) (No. 2021-0-00739, Development of Distributed/Cooperative AI based 5G+ Network Data Analytics Functions and Control Technology) and in part by the National Research Foundation of Korea (NRF) grant funded by the Korea government (MSIT) (No. NRF-2020R1A2B5B01002528).
Publisher Copyright:
© 2022 ACM.
PY - 2022/10/3
Y1 - 2022/10/3
N2 - In this paper, we propose a low-rank federated learning (FL) algorithm based on singular value decomposition (SVD). The SVD factorizes the global parameters that need to be exchanged between a global server and clients for distributed model training, significantly reducing the associated communication cost. Experimental results confirm that the number of transmissions is significantly reduced while maintaining the accuracy performance of the local model using the approximately recovered parameters.
AB - In this paper, we propose a low-rank federated learning (FL) algorithm based on singular value decomposition (SVD). The SVD factorizes the global parameters that need to be exchanged between a global server and clients for distributed model training, significantly reducing the associated communication cost. Experimental results confirm that the number of transmissions is significantly reduced while maintaining the accuracy performance of the local model using the approximately recovered parameters.
KW - edge network
KW - federated learning
KW - low-rank matrix
KW - singular value decomposition
UR - http://www.scopus.com/inward/record.url?scp=85139624057&partnerID=8YFLogxK
U2 - 10.1145/3492866.3561258
DO - 10.1145/3492866.3561258
M3 - Conference contribution
AN - SCOPUS:85139624057
T3 - Proceedings of the International Symposium on Mobile Ad Hoc Networking and Computing (MobiHoc)
SP - 285
EP - 286
BT - MobiHoc 2022 - Proceedings of the 2022 23rd International Symposium on Theory, Algorithmic Foundations, and Protocol Design for Mobile Networks and Mobile Computing
PB - Association for Computing Machinery
T2 - 23rd ACM International Symposium on Mobile Ad Hoc Networking and Computing, MobiHoc 2022
Y2 - 17 October 2022 through 20 October 2022
ER -