Abstract
Machine learning is becoming increasingly popular in a variety of modern technologies. However, research has demonstrated that machine learning models are vulnerable to adversarial examples in their inputs. Potential attacks include poisoning datasets by perturbing input samples to mislead a machine learning model into producing undesirable results. Such perturbations are often subtle and imperceptible from a human's perspective. This paper investigates two methods of verifying the visual fidelity of image-based datasets by detecting perturbations made to the data using QR codes. In the first method, a verification string is stored for each image in a dataset; these verification strings can be used to determine whether an image in the dataset has been perturbed. In the second method, only a single verification string is stored and used to verify whether an entire dataset is intact.
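The abstract does not give the paper's QR-code construction, so the sketch below is only a rough illustration of the two verification modes it describes: a per-image verification string versus a single string for the whole dataset. The use of SHA-256 digests as verification strings, and all function names, are assumptions made for illustration rather than the authors' method, which embeds verification data via QR codes.

```python
import hashlib
from pathlib import Path


def image_verification_string(image_path: Path) -> str:
    """Method 1 (illustrative): derive a verification string for one image.
    Here it is simply a SHA-256 digest of the raw image bytes; the paper
    instead works with QR-code-based verification data."""
    return hashlib.sha256(image_path.read_bytes()).hexdigest()


def dataset_verification_string(image_paths: list[Path]) -> str:
    """Method 2 (illustrative): a single verification string for the whole
    dataset, built by hashing the per-image strings in a fixed order."""
    combined = hashlib.sha256()
    for path in sorted(image_paths):
        combined.update(image_verification_string(path).encode())
    return combined.hexdigest()


def verify_image(image_path: Path, stored: str) -> bool:
    """Flag an image as perturbed if its current string no longer matches."""
    return image_verification_string(image_path) == stored


def verify_dataset(image_paths: list[Path], stored: str) -> bool:
    """Flag the dataset as tampered with if the single stored string differs."""
    return dataset_verification_string(image_paths) == stored
```

In this reading, the stored strings would be generated once when a dataset is published and checked again before the data is used for training.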
Original language | English |
---|---|
Title of host publication | Machine Learning for Cyber Security - 2nd International Conference, ML4CS 2019, Proceedings |
Editors | Xiaofeng Chen, Xinyi Huang, Jun Zhang |
Publisher | Springer Verlag |
Pages | 320-335 |
Number of pages | 16 |
ISBN (Print) | 9783030306182 |
DOIs | |
State | Published - 2019 |
Event | 2nd International Conference on Machine Learning for Cyber Security, ML4CS 2019 - Xi'an, China |
Duration | 19 Sep 2019 → 21 Sep 2019 |
Publication series
Name | Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) |
---|---|
Volume | 11806 LNCS |
ISSN (Print) | 0302-9743 |
ISSN (Electronic) | 1611-3349 |
Conference
Conference | 2nd International Conference on Machine Learning for Cyber Security, ML4CS 2019 |
---|---|
Country/Territory | China |
City | Xi'an |
Period | 19/09/19 → 21/09/19 |
Bibliographical note
Publisher Copyright: © 2019, Springer Nature Switzerland AG.
Keywords
- Adversarial machine learning
- Cyber security
- QR code
- Visual fidelity
- Watermarking