A Survey on Parallel Deep Learning

Jinyi Yoon, Jiho Lee, Nayoung Han, Hyungjune Lee

Research output: Contribution to journal › Article › peer-review

Abstract

Deep learning has been widely used in various fields, driving drastic advances in state-of-the-art technologies such as natural language processing, speech recognition, image classification, feature extraction, and machine translation. As massive data and intricate tasks necessitate larger neural networks, the number of layers and parameters in these networks has become tremendous, yielding great performance at the cost of compute-intensive training. To make large-scale deep neural networks (DNNs) scalable over resource-constrained devices and to accelerate learning, several parallelization approaches have been investigated under the name of federated learning. In this survey, we introduce four parallelism methods: data parallelism, model parallelism, hybrid parallelism, and pipeline parallelism.
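As a minimal illustration of the first of these methods, the sketch below simulates data parallelism for a simple linear model: every worker holds a full replica of the model weights, computes a gradient on its own shard of the batch, and the shard gradients are averaged (an "all-reduce") before a synchronized update. The function names, the toy linear model, and the serial simulation of workers are illustrative assumptions, not part of the survey itself.

```python
import numpy as np

def local_gradient(w, X, y):
    # Mean-squared-error gradient for a linear model y_hat = X @ w,
    # computed by one worker on its local data shard.
    pred = X @ w
    return 2.0 * X.T @ (pred - y) / len(y)

def data_parallel_step(w, X, y, n_workers=4, lr=0.1):
    # Split the global batch into one shard per worker.
    X_shards = np.array_split(X, n_workers)
    y_shards = np.array_split(y, n_workers)
    # Each worker computes a gradient on its shard (simulated serially here;
    # in practice these run concurrently on separate devices).
    grads = [local_gradient(w, Xs, ys)
             for Xs, ys in zip(X_shards, y_shards)]
    # All-reduce: average the shard gradients, then apply the same
    # update to every model replica.
    g = np.mean(grads, axis=0)
    return w - lr * g

# Toy regression problem to exercise the step.
rng = np.random.default_rng(0)
X = rng.normal(size=(64, 3))
true_w = np.array([1.0, -2.0, 0.5])
y = X @ true_w
w = np.zeros(3)
for _ in range(200):
    w = data_parallel_step(w, X, y)
```

With equally sized shards, the averaged shard gradients equal the full-batch gradient, so the parallel step matches sequential training exactly; in model parallelism, by contrast, the network itself (not the data) would be partitioned across devices.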

Original language: English
Pages (from-to): 1604-1617
Number of pages: 14
Journal: Journal of Korean Institute of Communications and Information Sciences
Volume: 46
Issue number: 10
DOIs
State: Published - Oct 2021

Bibliographical note

Publisher Copyright:
© 2021, Korean Institute of Communications and Information Sciences. All rights reserved.

Keywords

  • Data Parallelism
  • Deep Learning
  • Federated Learning
  • Hybrid Parallelism
  • Model Parallelism
  • Parallel Deep Learning
  • Pipeline Parallelism
