National Chengchi University Institutional Repository (NCCUR): Item 140.119/136842
    Please use this persistent URL to cite or link to this item: https://nccur.lib.nccu.edu.tw/handle/140.119/136842


    Title: 對抗神經網路之執行路徑差異分析研究
    Differential Analysis on Adversarial Neural Network Executions
    Author: Chen, Yi-Chun (陳怡君)
    Contributors: Yu, Fang (郁方); Chen, Yi-Chun (陳怡君)
    Keywords: Adversarial machine learning
    Explainable AI
    Neural network execution
    Differential analysis
    Date: 2021
    Date uploaded: 2021-09-02 15:51:45 (UTC+8)
    Abstract: While Convolutional Neural Networks (CNNs) have been widely adopted in state-of-the-art machine learning applications since their success in image recognition, the study of adversarial machine learning has shown that one can manipulate the input to a CNN model and lead it to produce incorrect results.
    In this study, we explore the execution differences between adversarial and normal samples, with the aim of deriving an explainable method that analyzes, from the perspective of program execution, how adversarial methods attack CNN models.
    We target the CNN models of Keras Applications, namely VGG16, VGG19, ResNet50, InceptionV3, and Xception, to conduct our differential analysis of normal and adversarial examples. By compositing synthesized adversarial patches onto normal images, we can successfully attack all five models and twist their image recognition results.
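    The following minimal sketch illustrates this attack setup under simplifying assumptions: a patch trained elsewhere is pasted onto a normal image, and a pre-trained VGG16 from Keras Applications is asked for its top-1 prediction before and after. The file names cat.jpg and patch.npy and the paste position are hypothetical placeholders, not the data used in the thesis.
    ```python
    # A minimal sketch of the attack setup described above, not the thesis's
    # exact pipeline. "cat.jpg" and "patch.npy" are hypothetical placeholders:
    # a normal image and a separately trained adversarial patch.
    import numpy as np
    from tensorflow.keras.applications.vgg16 import (
        VGG16, preprocess_input, decode_predictions)
    from tensorflow.keras.preprocessing import image

    model = VGG16(weights="imagenet")

    img = image.img_to_array(image.load_img("cat.jpg", target_size=(224, 224)))
    patch = np.load("patch.npy")                  # e.g. a 50x50x3 patch
    y0, x0 = 20, 20                               # arbitrary paste position
    patched = img.copy()
    patched[y0:y0 + patch.shape[0], x0:x0 + patch.shape[1]] = patch

    # Compare the model's top-1 prediction on the clean and the patched image.
    for name, arr in [("original", img), ("patched", patched)]:
        preds = model.predict(preprocess_input(np.expand_dims(arr.copy(), 0)))
        print(name, decode_predictions(preds, top=1)[0])
    ```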
    We leverage a Python profiler to trace function executions at runtime, recording their arguments, return values, and execution times.
    By profiling the Python program execution, we can compare adversarial and normal executions at a deep opcode level against different criteria: counting the number of function calls, discovering differing arguments and return values, and measuring performance differences such as time and memory consumption.
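    A simplified stand-in for this profiling step, using only Python's standard library, is sketched below; the tracer actually used in the thesis (which also records deep opcode-level events) may differ in detail. It collects per-function call counts and cumulative times so that a normal run and an adversarial run can be diffed afterwards.
    ```python
    # A simplified stand-in for the profiling step, using only the standard
    # library; the thesis's own tracer may differ in detail.
    import sys
    import time
    from collections import Counter, defaultdict

    call_counts = Counter()                 # (filename, function) -> calls
    cumulative_time = defaultdict(float)    # (filename, function) -> seconds
    _started = {}

    def _hook(frame, event, arg):
        key = (frame.f_code.co_filename, frame.f_code.co_name)
        if event == "call":
            call_counts[key] += 1
            _started[id(frame)] = time.perf_counter()
        elif event == "return":
            t0 = _started.pop(id(frame), None)
            if t0 is not None:
                cumulative_time[key] += time.perf_counter() - t0

    def trace(fn, *args, **kwargs):
        """Run fn under the hook and return its call-count / timing profile."""
        call_counts.clear(); cumulative_time.clear(); _started.clear()
        sys.setprofile(_hook)
        try:
            fn(*args, **kwargs)
        finally:
            sys.setprofile(None)
        return dict(call_counts), dict(cumulative_time)

    def diff_counts(normal, adversarial):
        """Functions whose call counts differ between the two profiles."""
        keys = set(normal) | set(adversarial)
        return {k: adversarial.get(k, 0) - normal.get(k, 0)
                for k in keys if adversarial.get(k, 0) != normal.get(k, 0)}
    ```
    Combined with the Keras sketch above, trace(model.predict, ...) could be run once on the clean batch and once on the patched batch, and the two resulting call-count profiles compared with diff_counts to see which functions are executed a different number of times.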
    We report the execution differences between various adversarial and normal samples against Keras Applications and an object detection application.
    From these differences, we are able to derive effective rules that distinguish original images from images with inserted objects, but we find no effective rules that distinguish adversarial patches, which twist the recognition results, from ordinary inserted objects.
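    As a toy illustration of how such call-count rules could be applied (the baseline values, function names, and margin below are hypothetical, not findings of the thesis), a run's profile can be compared against a clean baseline and flagged whenever a count deviates beyond a margin:
    ```python
    # A toy illustration of applying a call-count rule; the baseline values,
    # function names, and margin are hypothetical, not the thesis's findings.
    def deviations(baseline, observed, margin=0):
        """Functions whose observed call counts deviate from the baseline."""
        names = set(baseline) | set(observed)
        return {n: observed.get(n, 0) - baseline.get(n, 0)
                for n in names
                if abs(observed.get(n, 0) - baseline.get(n, 0)) > margin}

    clean_profile   = {"resize_images": 1, "convolution_op": 16}
    patched_profile = {"resize_images": 2, "convolution_op": 16}

    # Flag a run as "image with inserted object" if any count deviates.
    print(deviations(clean_profile, patched_profile))   # {'resize_images': 1}
    ```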
    Description: Master's thesis
    National Chengchi University
    Department of Management Information Systems
    108356016
    Source: http://thesis.lib.nccu.edu.tw/record/#G0108356016
    Type: thesis
    DOI: 10.6814/NCCU202101295
    Appears in Collections: [Department of Management Information Systems] Theses

    Files in This Item: 601601.pdf (14708 KB, Adobe PDF)

