Abstract
Machine learning is becoming increasingly popular in modern technology and has been adopted in a wide range of application areas. However, researchers have demonstrated that machine learning models are vulnerable to adversarial examples in their inputs, which has given rise to a field of research known as adversarial machine learning. Potential adversarial attacks include poisoning datasets by perturbing input samples to mislead machine learning models into producing undesirable results. While such perturbations are often subtle and imperceptible to a human observer, they can greatly degrade the performance of machine learning models. This paper presents two methods of verifying the visual fidelity of image-based datasets by using QR codes to detect perturbations in the data. In the first method, a verification string is stored for each image in a dataset; these verification strings can be used to determine whether an individual image has been perturbed. In the second method, only a single verification string is stored, which can be used to verify whether an entire dataset is intact.
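The abstract does not detail how the QR-code-based verification strings are constructed. The sketch below illustrates only the two verification workflows described above (a per-image string versus a single dataset-level string), substituting a plain SHA-256 digest for the paper's verification string; all function names are hypothetical and not taken from the paper.

```python
import hashlib
from typing import Dict, List

def image_verification_string(image_bytes: bytes) -> str:
    # Hypothetical per-image verification string: a SHA-256 digest of the
    # raw image bytes. Any perturbation of the image changes the digest.
    return hashlib.sha256(image_bytes).hexdigest()

def dataset_verification_string(images: List[bytes]) -> str:
    # Hypothetical dataset-level string: hash the concatenation of the
    # per-image digests so that one value covers the whole dataset.
    combined = hashlib.sha256()
    for img in images:
        combined.update(image_verification_string(img).encode())
    return combined.hexdigest()

def verify_images(images: List[bytes], stored: Dict[int, str]) -> List[int]:
    # Method 1: return the indices of images whose current verification
    # string no longer matches the stored one (i.e. possibly perturbed).
    return [i for i, img in enumerate(images)
            if image_verification_string(img) != stored[i]]

def verify_dataset(images: List[bytes], stored: str) -> bool:
    # Method 2: a single comparison indicates whether the entire dataset
    # is intact.
    return dataset_verification_string(images) == stored
```

The trade-off between the two workflows is granularity versus storage: the per-image strings pinpoint which samples were perturbed, while the single dataset-level string only confirms or refutes the integrity of the dataset as a whole.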
Original language | English
---|---
Article number | 102834
Journal | Journal of Network and Computer Applications
Volume | 173
DOIs |
State | Published - 1 Jan 2021
Bibliographical note
Funding Information: The authors would like to acknowledge the support of the NSW Cybersecurity Network grant and the National Natural Science Foundation of China grants (Nos. 61572382 and 61702401) that were awarded for this project.
Publisher Copyright:
© Elsevier Ltd
Keywords
- Adversarial machine learning
- Cyber security
- QR code
- Visual fidelity
- Watermarking