ABSTRACT
A concerning weakness of deep neural networks is their susceptibility to adversarial attacks. While methods exist to detect these attacks, they incur significant drawbacks and ignore external features that could aid in the detection task. In this work, we propose SPN Dash, a method for detecting adversarial attacks based on the integrity of the sensor pattern noise embedded in submitted images. Through experiments, we show that SPN Dash detects the addition of adversarial noise with up to 94% accuracy for images of size $256\times256$. Our analysis shows that SPN Dash is robust to image scaling, as well as to a small amount of image compression. This performance is on par with state-of-the-art neural network-based detectors, while incurring an order of magnitude less computational and memory overhead.
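The core idea can be illustrated with a minimal sketch. The paper's exact pipeline is not given in the abstract, so the following assumes a standard SPN workflow: estimate the sensor pattern noise as the residual left by a denoising filter, average residuals from trusted images into a camera fingerprint, and flag a submitted image whose residual no longer correlates with that fingerprint (function names and the threshold are illustrative, not the authors' API).

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def extract_spn(image, sigma=1.0):
    """Estimate sensor pattern noise as the high-frequency residual
    left after subtracting a Gaussian-denoised copy of the image."""
    return image - gaussian_filter(image, sigma)

def build_fingerprint(images, sigma=1.0):
    """Average residuals over many trusted captures so scene content
    cancels out and the fixed sensor pattern remains."""
    return np.mean([extract_spn(img, sigma) for img in images], axis=0)

def ncc(a, b):
    """Normalized cross-correlation between two noise residuals."""
    a = a - a.mean()
    b = b - b.mean()
    return float(np.sum(a * b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def spn_consistent(image, fingerprint, threshold=0.05, sigma=1.0):
    """Accept an image only if its residual still correlates with the
    camera fingerprint; adversarial perturbation degrades this score."""
    return ncc(extract_spn(image, sigma), fingerprint) >= threshold
```

In this sketch an attack is detected when the correlation score drops below the threshold; a deployed detector would calibrate that threshold on genuine captures from the device in question.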
Index Terms
- SPN Dash - Fast Detection of Adversarial Attacks on Mobile via Sensor Pattern Noise Fingerprinting