On-demand computation offloading architecture in fog networks

Yeonjin Jin, Hyung June Lee

Research output: Contribution to journal › Article › peer-review

11 Scopus citations

Abstract

With the advent of the Internet-of-Things (IoT), end-devices have come to serve as sensors, gateways, or local storage equipment. Because of their scarce resources, cloud-based computing is currently a necessary companion. However, the raw data collected at these devices must be uploaded to a cloud server, consuming a significant amount of network bandwidth. In this paper, we propose an on-demand computation offloading architecture for fog networks that solicits available resources from nearby edge devices and distributes a suitable amount of computation tasks to them. The proposed architecture aims to finish a required computation job within a designated deadline with reduced network overhead. Our work consists of three elements: (1) resource provider network formation by classifying nodes as stem or leaf depending on network stability, (2) task allocation based on each node’s resource availability and soliciting status, and (3) task redistribution in preparation for possible network and computation losses. Simulation-based validation in the iFogSim simulator demonstrates that our work achieves a high task completion rate within the designated deadline while drastically reducing unnecessary network overhead, by selecting only a few effective edge devices as computation delegates for locally networked computation.
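
To make the three elements above concrete, the following is a minimal, illustrative sketch in Java (the language of the iFogSim simulator used for validation). It is not the authors' implementation: the EdgeNode class, the STABILITY_THRESHOLD cutoff, the capacity-proportional allocation rule, and the redistribution step are assumptions chosen only to show the flow of stem/leaf classification, task allocation, and task redistribution.

```java
// Illustrative sketch only; all names and policies here are assumptions.
import java.util.ArrayList;
import java.util.List;

public class OffloadingSketch {

    static final double STABILITY_THRESHOLD = 0.8; // assumed stem-vs-leaf cutoff

    /** Hypothetical edge node with a link-stability score and spare capacity. */
    static class EdgeNode {
        final String id;
        final double stability;      // e.g., fraction of recent beacons received
        final double spareCapacity;  // available computation units
        double assignedTasks = 0;
        boolean alive = true;

        EdgeNode(String id, double stability, double spareCapacity) {
            this.id = id;
            this.stability = stability;
            this.spareCapacity = spareCapacity;
        }

        boolean isStem() { return stability >= STABILITY_THRESHOLD; }
    }

    /** (1) Provider network formation: keep only stable "stem" nodes as delegates. */
    static List<EdgeNode> formProviderNetwork(List<EdgeNode> neighbors) {
        List<EdgeNode> stems = new ArrayList<>();
        for (EdgeNode n : neighbors) {
            if (n.isStem()) stems.add(n);
        }
        return stems;
    }

    /** (2) Task allocation: split the job among stems in proportion to spare capacity. */
    static void allocate(double totalTasks, List<EdgeNode> stems) {
        double totalCapacity = 0;
        for (EdgeNode n : stems) totalCapacity += n.spareCapacity;
        for (EdgeNode n : stems) {
            n.assignedTasks = totalTasks * (n.spareCapacity / totalCapacity);
        }
    }

    /** (3) Task redistribution: move the load of lost nodes onto surviving stems. */
    static void redistribute(List<EdgeNode> stems) {
        double lost = 0;
        List<EdgeNode> survivors = new ArrayList<>();
        for (EdgeNode n : stems) {
            if (n.alive) {
                survivors.add(n);
            } else {
                lost += n.assignedTasks;
                n.assignedTasks = 0;
            }
        }
        if (lost > 0 && !survivors.isEmpty()) {
            double carried = 0;
            for (EdgeNode n : survivors) carried += n.assignedTasks;
            allocate(carried + lost, survivors);
        }
    }

    public static void main(String[] args) {
        List<EdgeNode> neighbors = List.of(
                new EdgeNode("A", 0.95, 40),
                new EdgeNode("B", 0.90, 20),
                new EdgeNode("C", 0.50, 60)); // leaf: too unstable to delegate to
        List<EdgeNode> stems = formProviderNetwork(neighbors);
        allocate(120, stems);
        stems.get(1).alive = false;           // simulate losing node B
        redistribute(stems);
        for (EdgeNode n : stems) {
            System.out.printf("%s: %.1f tasks%n", n.id, n.assignedTasks);
        }
    }
}
```

In this toy run, node C is excluded as a leaf, the 120 tasks are split 80/40 between A and B by capacity, and after B is lost its share is reassigned to A. The actual paper additionally accounts for each node's soliciting status and deadline constraints, which this sketch omits.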

Original language: English
Article number: 1076
Journal: Electronics (Switzerland)
Volume: 8
Issue number: 10
DOIs
State: Published - Oct 2019

Bibliographical note

Funding Information:
This work was supported by Samsung Research Funding Center of Samsung Electronics under Project Number SRFC-IT1803-00.

Publisher Copyright:
© 2019 by the authors. Licensee MDPI, Basel, Switzerland.

Keywords

  • Computation offloading
  • Edge computing
  • Fog networks
  • In-network resource allocation
