Abstract
Deep learning has been widely adopted across many fields, driving rapid advances in state-of-the-art technologies such as natural language processing, speech recognition, image classification, feature extraction, and machine translation. As massive datasets and intricate tasks call for ever larger neural networks, the numbers of layers and parameters have grown enormously, yielding strong performance at the cost of compute-intensive training. To make large-scale deep neural networks (DNNs) scalable to resource-constrained devices and to accelerate learning, several parallelization approaches have been investigated, notably under the umbrella of federated learning. In this survey, we introduce four parallelism methods: data parallelism, model parallelism, hybrid parallelism, and pipeline parallelism.
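As a rough illustration of the first of these methods, the sketch below simulates synchronous data parallelism in plain NumPy: each of several hypothetical workers holds a full replica of the model weights, computes a gradient on its own shard of the batch, and an averaged gradient (an all-reduce in a real distributed setting) drives one synchronized update. The linear model and the names `grad` and `data_parallel_step` are illustrative assumptions, not taken from the survey itself.

```python
import numpy as np

def grad(w, X, y):
    # Gradient of the mean squared error 0.5 * ||Xw - y||^2 / n w.r.t. w,
    # for a hypothetical single-layer linear model (illustration only).
    return X.T @ (X @ w - y) / len(y)

def data_parallel_step(w, X, y, n_workers=4, lr=0.1):
    # Data parallelism: every worker keeps a full copy of the model (w)
    # but sees only a shard of the batch. Averaging the equal-size shard
    # gradients reproduces the full-batch gradient; in a real cluster this
    # average is computed by an all-reduce before the synchronized update.
    X_shards = np.array_split(X, n_workers)
    y_shards = np.array_split(y, n_workers)
    grads = [grad(w, Xs, ys) for Xs, ys in zip(X_shards, y_shards)]
    return w - lr * np.mean(grads, axis=0)

rng = np.random.default_rng(0)
X = rng.normal(size=(64, 3))
true_w = np.array([1.0, -2.0, 0.5])
y = X @ true_w
w = np.zeros(3)
for _ in range(200):
    w = data_parallel_step(w, X, y)
print(w)  # converges toward true_w
```

Model, hybrid, and pipeline parallelism differ in what is partitioned: model parallelism splits the network's layers or tensors across devices, hybrid parallelism combines that with sharded data, and pipeline parallelism stages micro-batches through layer partitions.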
| Field | Value |
|---|---|
| Original language | English |
| Pages (from-to) | 1604-1617 |
| Number of pages | 14 |
| Journal | Journal of Korean Institute of Communications and Information Sciences |
| Volume | 46 |
| Issue number | 10 |
| State | Published - Oct 2021 |
Bibliographical note
Publisher Copyright: © 2021, Korean Institute of Communications and Information Sciences. All rights reserved.
Keywords
- Data Parallelism
- Deep Learning
- Federated Learning
- Hybrid Parallelism
- Model Parallelism
- Parallel Deep Learning
- Pipeline Parallelism