Dynamic security-level maximization for stabilized parallel deep learning architectures in surveillance applications

Joongheon Kim, Yeong Jong Mo, Woojoo Lee, Daehun Nyang

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

9 Scopus citations

Abstract

This paper introduces a new surveillance platform equipped with multiple parallel deep learning frameworks. The frameworks perform face recognition on image and video streams from CCTV cameras in security applications. Each framework has its own accuracy (related to recognition performance) and operation time (related to system stability), and the two are in a tradeoff relationship. Based on this system architecture, a new dynamic control algorithm is proposed that selects one deep learning framework at each decision point so as to maximize the time-average security level (i.e., machine learning accuracy for recognition and classification) while preserving system stability. The performance of the proposed algorithm was evaluated, and the evaluation verifies that it achieves the desired performance.
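The "Lyapunov Optimization" keyword suggests the dynamic control follows the standard drift-plus-penalty pattern: given a queue backlog of pending frames, pick the framework that best trades accuracy (the "penalty" to maximize) against processing time (which drives the backlog). The sketch below is an illustrative reconstruction under that assumption; the framework list, the tradeoff parameter `V`, and the function `select_framework` are hypothetical and not taken from the paper.

```python
# Hypothetical drift-plus-penalty framework selection (illustrative values only).
# Each entry: (accuracy, per-frame processing time). Higher accuracy costs more time.
FRAMEWORKS = [
    (0.80, 1.0),   # fast, lower accuracy
    (0.90, 2.0),
    (0.97, 4.0),   # slow, higher accuracy
]

def select_framework(queue_backlog: float, V: float) -> int:
    """Return the index of the framework maximizing V*accuracy - Q*time.

    V tunes the accuracy/stability tradeoff: large V favors accuracy,
    while a large queue backlog Q pushes the choice toward fast frameworks,
    keeping the queue (and hence the system) stable.
    """
    return max(
        range(len(FRAMEWORKS)),
        key=lambda i: V * FRAMEWORKS[i][0] - queue_backlog * FRAMEWORKS[i][1],
    )
```

With an empty queue the rule picks the most accurate (slowest) framework; as the backlog grows it switches to faster, less accurate ones, which is exactly the time-average accuracy vs. stability tradeoff the abstract describes.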

Original language: English
Title of host publication: Proceedings - 2017 IEEE Symposium on Privacy-Aware Computing, PAC 2017
Publisher: Institute of Electrical and Electronics Engineers Inc.
Pages: 192-193
Number of pages: 2
ISBN (Electronic): 9781538610275
DOIs
State: Published - 4 Dec 2017
Event: 1st IEEE Symposium on Privacy-Aware Computing, PAC 2017 - Washington, United States
Duration: 1 Aug 2017 – 3 Aug 2017

Publication series

Name: Proceedings - 2017 IEEE Symposium on Privacy-Aware Computing, PAC 2017
Volume: 2017-January

Conference

Conference: 1st IEEE Symposium on Privacy-Aware Computing, PAC 2017
Country/Territory: United States
City: Washington
Period: 1/08/17 – 3/08/17

Keywords

  • Deep Learning
  • Lyapunov Optimization
  • Security

