TY - JOUR
T1 - AUToSen
T2 - Deep-learning-based implicit continuous authentication using smartphone sensors
AU - Abuhamad, Mohammed
AU - Abuhmed, Tamer
AU - Mohaisen, David
AU - Nyang, Daehun
N1 - Funding Information:
Manuscript received October 2, 2019; revised January 20, 2020; accepted February 4, 2020. Date of publication February 24, 2020; date of current version June 12, 2020. This work was supported in part by NRF under Grant NRF-2016K1A1A2912757, in part by the Collaborative Seed Award Program of Cyber Florida, and in part by the Ewha Womans University Research Grant 2020. (Corresponding authors: Tamer Abuhmed; David Mohaisen; DaeHun Nyang.) Mohammed Abuhamad is with the Department of Computer Engineering, Inha University, Incheon 22212, South Korea, and also with the Department of Computer Science, University of Central Florida, Orlando, FL 32816 USA.
Publisher Copyright:
© 2014 IEEE.
PY - 2020/6
Y1 - 2020/6
N2 - Smartphones have become crucial to our daily activities and are increasingly loaded with our personal information, used to perform sensitive tasks such as mobile banking and communication and to store private photos and files. Therefore, there is a high demand for usable authentication techniques that prevent unauthorized access to sensitive information. In this article, we propose AUToSen, a deep-learning-based active authentication approach that exploits the sensors in consumer-grade smartphones to authenticate a user. Unlike conventional approaches, AUToSen uses deep learning to identify a user's distinct behavior from the embedded sensors, with and without the user's interaction with the smartphone. We investigate different deep-learning architectures for modeling and capturing users' behavioral patterns for the purpose of authentication. Moreover, we explore how much sensory data is required to accurately authenticate users. We evaluate AUToSen on a real-world data set comprising sensor data from the smartphones of 84 participants, collected using our data-collection application. The experiments show that AUToSen operates accurately using readings of only three sensors (accelerometer, gyroscope, and magnetometer) at a high authentication frequency, e.g., one authentication attempt every 0.5 s. Using one second of sensory data enables an authentication F1-score of approximately 98%, a false acceptance rate (FAR) of 0.95%, a false rejection rate (FRR) of 6.67%, and an equal error rate (EER) of 0.41%, while using half a second of sensory data enables an F1-score of 97.52%, a FAR of 0.96%, an FRR of 8.08%, and an EER of 0.09%. Moreover, we investigate the effects of using different sensory data at variable sampling periods on the performance of the authentication models under various settings and learning architectures.
KW - Active authentication
KW - Continuous authentication
KW - Deep-learning-based authentication
KW - Mobile sensing
KW - Smartphone authentication
UR - http://www.scopus.com/inward/record.url?scp=85086591597&partnerID=8YFLogxK
U2 - 10.1109/JIOT.2020.2975779
DO - 10.1109/JIOT.2020.2975779
M3 - Article
AN - SCOPUS:85086591597
SN - 2327-4662
VL - 7
SP - 5008
EP - 5020
JO - IEEE Internet of Things Journal
JF - IEEE Internet of Things Journal
IS - 6
M1 - 9007368
ER -