    Please use this identifier to cite or link to this item: https://nccur.lib.nccu.edu.tw/handle/140.119/153166


Title: Influence-Guided Concolic Testing on Transformer
Authors: Wang, Yu (汪妤)
Contributors: Yu, Fang (郁方); Wang, Yu (汪妤)
Keywords: Automatic unit testing; Concolic testing; Dynamic concolic testing; Python
    Date: 2024
    Issue Date: 2024-09-04 14:06:57 (UTC+8)
Abstract: We present a novel influence-guided concolic testing methodology aimed at generating critical test inputs that comprehensively explore neural network behaviors. The approach systematically examines decision paths in order of their influence, prioritizing status changes of essential neurons to optimize concolic testing performance. By directing the SMT solver to concentrate on the constraints most likely to uncover vulnerabilities, our method applies concolic testing to significant decision paths, improving the efficiency of automatic input synthesis for network vulnerability detection. Preliminary experiments demonstrate the efficacy of the approach, showing substantial progress in testing CNN and Transformer models for adversarial examples. Finally, we propose a method for synthesizing critical decision paths and identify common decision logic shared among successful attacks.
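The abstract describes the core loop only at a high level, so the following is a minimal, self-contained sketch of the idea on a toy ReLU network, assuming numpy and the z3-solver package are available. The influence measure used here (|output weight × activation|) and all names (forward, influence, target) are illustrative stand-ins, not the thesis's actual implementation: a concrete run fixes each neuron's activation branch, neurons are ranked by influence, and the SMT solver is asked to flip the branch of the highest-ranked neuron while keeping the others on their concrete branches.

```python
import numpy as np
import z3

# Toy single-hidden-layer ReLU network: y = W2 . relu(W1 @ x).
W1 = np.array([[1.0, -2.0],
               [0.5,  1.5],
               [-1.0, 1.0]])
W2 = np.array([2.0, -1.0, 0.5])

def forward(x):
    pre = W1 @ x                 # hidden pre-activations
    act = np.maximum(pre, 0.0)   # ReLU
    return pre, act, W2 @ act

# Concrete run on a seed input fixes each ReLU's branch (active/inactive).
x0 = np.array([1.0, 0.5])
pre, act, _ = forward(x0)

# Influence proxy: |output weight * activation| per hidden neuron.
# The thesis's actual influence computation may differ.
influence = np.abs(W2 * act)
target = int(np.argmax(influence))   # most influential hidden neuron

# Symbolic input and symbolic hidden pre-activations for Z3.
xs = [z3.Real(f"x{i}") for i in range(2)]
pres = [sum(float(W1[j, i]) * xs[i] for i in range(2)) for j in range(3)]

s = z3.Solver()
for j in range(3):
    concrete_branch = pres[j] > 0 if pre[j] > 0 else pres[j] <= 0
    flipped_branch = pres[j] <= 0 if pre[j] > 0 else pres[j] > 0
    # Negate only the most influential neuron's branch; pin the rest.
    s.add(flipped_branch if j == target else concrete_branch)

if s.check() == z3.sat:
    m = s.model()
    print("new test input:", [m.eval(v) for v in xs])
```

In a full concolic loop, the synthesized input would be executed concretely, checked for a change in the model's prediction, and fed back to explore further high-influence paths.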
Description: Master's thesis, National Chengchi University, Department of Management Information Systems (111356048)
    Source URI: http://thesis.lib.nccu.edu.tw/record/#G0111356048
    Data Type: thesis
Appears in Collections: [Department of Management Information Systems] Theses

    Files in This Item:

604801.pdf (2368 KB, Adobe PDF)


    All items in 政大典藏 are protected by copyright, with all rights reserved.

