References:

[1] Behzadan, V. and Munir, A. (2017). Whatever does not kill deep reinforcement learning, makes it stronger. arXiv:1712.09344.
[2] Biggio, B. and Roli, F. (2018). Wild patterns: Ten years after the rise of adversarial machine learning. arXiv:1712.03141.
[3] Biggio, B., Corona, I., and Maiorca, D., et al. (2017). Evasion attacks against machine learning at test time. arXiv:1708.06131.
[4] Biggio, B., Fumera, G., and Roli, F. (2014). Pattern recognition systems under attack: Design issues and research challenges. IJPRAI 28(7).
[5] Biggio, B., Nelson, B., and Laskov, P. (2012). Poisoning attacks against support vector machines. In: 29th ICML.
[6] Carlini, N. and Wagner, D. (2017). Towards evaluating the robustness of neural networks. arXiv:1608.04644.
[7] Goodfellow, I. J., Shlens, J., and Szegedy, C. (2014). Explaining and harnessing adversarial examples. arXiv:1412.6572.
[8] Gu, S. and Rigazio, L. (2014). Towards deep neural network architectures robust to adversarial examples. arXiv:1412.5068.
[9] Harder, P., Pfreundt, F.-J., and Keuper, M., et al. (2021). SpectralDefense: Detecting adversarial attacks on CNNs in the Fourier domain. arXiv:2103.03000.
[10] Ilahi, I., Usama, M., and Qadir, J., et al. (2020). Challenges and countermeasures for adversarial attacks on deep reinforcement learning. arXiv:2001.09684.
[11] Kos, J. and Song, D. (2017). Delving into adversarial attacks on deep policies. arXiv:1705.06452.
[12] Krizhevsky, A. (2009). Learning multiple layers of features from tiny images. Computer Science Department, University of Toronto, Tech. Rep.
[13] Krizhevsky, A., Sutskever, I., and Hinton, G. E. (2017). ImageNet classification with deep convolutional neural networks. Communications of the ACM 60(6): pp. 84-90.
[14] Kurakin, A., Goodfellow, I., and Bengio, S. (2016). Adversarial examples in the physical world. arXiv:1607.02533.
[15] Lee, K., Lee, K., and Lee, H., et al. (2018). A simple unified framework for detecting out-of-distribution samples and adversarial attacks. arXiv:1807.03888.
[16] Li, B., Chen, C., and Wang, W., et al. (2019). Certified adversarial robustness with additive noise. arXiv:1809.03113.
[17] Li, Z., Feng, C., and Zheng, J., et al. (2020). Towards adversarial robustness via feature matching. IEEE.
[18] Lin, Y.-C., Liu, M.-Y., and Sun, M., et al. (2017). Detecting adversarial attacks on neural network policies with visual foresight. arXiv:1710.00814.
[19] Liu, A., Liu, X., and Zhang, C., et al. (2020). Training robust deep neural networks via adversarial noise propagation. arXiv:1909.09034.
[20] Ma, X., Li, B., and Wang, Y., et al. (2018). Characterizing adversarial subspaces using local intrinsic dimensionality. arXiv:1801.02613.
[21] Madry, A., Makelov, A., and Schmidt, L., et al. (2019). Towards deep learning models resistant to adversarial attacks. arXiv:1706.06083.
[22] Muñoz-González, L., Biggio, B., and Demontis, A., et al. (2018). Towards poisoning of deep learning algorithms with back-gradient optimization. In: AISec '17, ACM, pp. 27-38.
[23] Papernot, N., McDaniel, P., and Goodfellow, I., et al. (2017). Practical black-box attacks against machine learning. arXiv:1602.02697.
[24] Russakovsky, O., Deng, J., and Su, H., et al. (2015). ImageNet large scale visual recognition challenge. International Journal of Computer Vision 115(3): pp. 211-252.
[25] Shafique, M., Naseer, M., and Theocharides, T., et al. (2020). Robust machine learning systems: Challenges, current trends, perspectives, and the road ahead. IEEE Design and Test, Vol. 37, Issue 2.
[26] Simonyan, K. and Zisserman, A. (2014). Very deep convolutional networks for large-scale image recognition. arXiv:1409.1556.
[27] Tramèr, F., Zhang, F., and Juels, A., et al. (2016). Stealing machine learning models via prediction APIs. arXiv:1609.02943.
[28] Zhang, K., Zuo, W., and Chen, Y., et al. (2016). Beyond a Gaussian denoiser: Residual learning of deep CNN for image denoising. arXiv:1608.03981.