TY - JOUR
T1 - On-demand computation offloading architecture in fog networks
AU - Jin, Yeonjin
AU - Lee, Hyung June
N1 - Funding Information:
This work was supported by the Samsung Research Funding Center of Samsung Electronics under Project Number SRFC-IT1803-00.
Publisher Copyright:
© 2019 by the authors. Licensee MDPI, Basel, Switzerland.
PY - 2019/10
Y1 - 2019/10
AB - With the advent of the Internet-of-Things (IoT), end-devices have come to serve as sensors, gateways, or local storage equipment. Because of their scarce resources, cloud-based computing is currently a necessary companion. However, the raw data collected at devices must be uploaded to a cloud server, consuming a significant amount of network bandwidth. In this paper, we propose an on-demand computation offloading architecture for fog networks that solicits available resources from nearby edge devices and distributes a suitable amount of computation tasks to them. The proposed architecture aims to finish a required computation job within a specified deadline at reduced network overhead. Our work consists of three elements: (1) resource provider network formation, which classifies nodes as stem or leaf depending on network stability; (2) task allocation based on each node’s resource availability and soliciting status; and (3) task redistribution in preparation for possible network and computation losses. Simulation-based validation in the iFogSim simulator demonstrates that our work achieves a high task completion rate within a designated deadline while drastically reducing unnecessary network overhead, by selecting only a few effective edge devices as computation delegates via locally networked computation.
KW - Computation offloading
KW - Edge computing
KW - Fog networks
KW - In-network resource allocation
UR - http://www.scopus.com/inward/record.url?scp=85073451036&partnerID=8YFLogxK
DO - 10.3390/electronics8101076
M3 - Article
AN - SCOPUS:85073451036
SN - 2079-9292
VL - 8
JO - Electronics (Switzerland)
JF - Electronics (Switzerland)
IS - 10
M1 - 1076
ER -