National Chengchi University Institutional Repository (NCCUR): Item 140.119/153152
Please use this permanent URL to cite or link to this item: https://nccur.lib.nccu.edu.tw/handle/140.119/153152


Title: Dynamic Concolication for Automatic Deep Network Testing
Authors: Chiang, Chi-Rui (蔣其叡)
Contributors: Yu, Fang (郁方)
Chiang, Chi-Rui (蔣其叡)
Keywords: Concolic Testing
Automatic Unit Testing
Python
Dynamic Concolication
NCCU
Date: 2024
Upload time: 2024-09-04 14:04:08 (UTC+8)
Abstract: Concolic testing, which combines concrete testing and symbolic execution, has proven highly effective in identifying software vulnerabilities. This paper focuses on applying PyCT, a concolic testing tool, to the automated generation of unit tests and their required inputs. Our objective is not only to perform concolic testing on the target program but also to employ Dynamic Subroutine Tracking (DST) to wrap the subroutines and external libraries called by the target program for symbolic execution, thereby checking for potential vulnerabilities in their interactions.
    The motivation behind this approach is to address the issue of premature downgrading of concolic variables during testing, e.g., due to unsupported operations, which can hinder subsequent testing from using symbolic expressions.
By upgrading the inputs of the current execution and its subroutines to concolic variables, we mitigate the impact of premature downgrading, ensuring more comprehensive concolic testing coverage.
    We also incorporate fuzzing techniques in DST when encountering input types that cannot be upgraded to concolic variables.
Experimental results demonstrate the effectiveness of our approach in enhancing concolic testing for various Python libraries, showing improved testing coverage and the detection of potential vulnerabilities. Our method can generate extensive test cases for target libraries with minimal initial effort.
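The premature-downgrading problem and the DST-style re-upgrading described in the abstract can be sketched roughly as follows. This is an illustrative sketch only: every class and function name here (ConcolicInt, upgrade, traced) is a hypothetical stand-in, not PyCT's actual API.

```python
# Sketch of concolic-variable tracking; names are hypothetical, not PyCT's API.

class ConcolicInt:
    """An int paired with a symbolic expression. An unsupported operation
    returns a result with no expression attached ("downgrading")."""

    def __init__(self, value, expr=None):
        self.value = value                # concrete value used for execution
        self.expr = expr or str(value)    # symbolic expression for the solver

    def __add__(self, other):
        if isinstance(other, ConcolicInt):
            return ConcolicInt(self.value + other.value,
                               f"({self.expr} + {other.expr})")
        if isinstance(other, int):
            return ConcolicInt(self.value + other, f"({self.expr} + {other})")
        return NotImplemented  # unsupported operand: symbolic info is lost


def upgrade(arg, name):
    """DST-style re-wrapping: promote a plain input back to a concolic
    variable so later operations stay symbolic."""
    if isinstance(arg, ConcolicInt):
        return arg
    if isinstance(arg, int):
        return ConcolicInt(arg, name)
    return arg  # types that cannot be upgraded would fall back to fuzzing


def traced(fn):
    """Wrap a subroutine so its arguments are upgraded before the call,
    mimicking DST's wrapping of subroutines and external libraries."""
    def wrapper(*args):
        upgraded = [upgrade(a, f"{fn.__name__}_arg{i}")
                    for i, a in enumerate(args)]
        return fn(*upgraded)
    return wrapper


@traced
def subroutine(x):
    return x + 1  # stays symbolic because x was re-upgraded at the call


result = subroutine(41)
print(result.value, result.expr)  # 42 (subroutine_arg0 + 1)
```

Without the `traced` wrapper, passing a plain `int` into `subroutine` would execute concretely and the path constraint on `x` would be unavailable to the solver; the wrapper restores it, which is the effect the abstract attributes to DST.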
    參考文獻: Ahmadilivani, M. H., Taheri, M., Raik, J., Daneshtalab, M., and Jenihhin, M. (2023). A
    systematic literature review on hardware reliability assessment methods for deep neural
    networks.
Araki, L. Y. and Peres, L. M. (2018). A systematic review of concolic testing with application of test criteria. In Proceedings of the 20th International Conference on Enterprise Information Systems - Volume 2: ICEIS, pages 121–132. INSTICC, SciTePress.
    Bai, T., Huang, S., Huang, Y., Wang, X., Xia, C., Qu, Y., and Yang, Z. (2024). Criticalfuzz:
    A critical neuron coverage-guided fuzz testing framework for deep neural networks.
    Information and Software Technology, 172:107476.
    Ball, T. and Daniel, J. (2015). Deconstructing dynamic symbolic execution. In Irlbeck,
    M., Peled, D. A., and Pretschner, A., editors, Dependable Software Systems Engineering,
    volume 40 of NATO Science for Peace and Security Series, D: Information and
    Communication Security, pages 26–41. IOS Press.
    Cadar, C. and Sen, K. (2013). Symbolic execution for software testing: three decades
    later. Commun. ACM, 56(2):82–90.
Caniço, A. B. and Santos, A. L. (2023). Witter: A library for white-box testing of introductory programming algorithms. In Proceedings of the 2023 ACM SIGPLAN International Symposium on SPLASH-E, SPLASH-E 2023, page 69–74, New York, NY, USA. Association for Computing Machinery.
    Chen, Y.-F., Tsai, W.-L., Wu, W.-C., Yen, D.-D., and Yu, F. (2021). Pyct: A python
    concolic tester. In Oh, H., editor, Programming Languages and Systems, pages 38–46,
    Cham. Springer International Publishing.
    Gopinath, D., Wang, K., Zhang, M., Pasareanu, C. S., and Khurshid, S. (2018). Symbolic
    execution for deep neural networks.
    Gu, J., Luo, X., Zhou, Y., and Wang, X. (2022). Muffin: Testing deep learning libraries
    via neural architecture fuzzing.
    Huang, J.-t., Zhang, J., Wang, W., He, P., Su, Y., and Lyu, M. R. (2022). Aeon: A method
    for automatic evaluation of nlp test cases. In Proceedings of the 31st ACM SIGSOFT
    International Symposium on Software Testing and Analysis, ISSTA 2022, page 202–
    214, New York, NY, USA. Association for Computing Machinery.
Ji, P., Feng, Y., Liu, J., Zhao, Z., and Chen, Z. (2022). Asrtest: Automated testing for deep-neural-network-driven speech recognition systems. In Proceedings of the 31st ACM SIGSOFT International Symposium on Software Testing and Analysis, ISSTA 2022, page 189–201, New York, NY, USA. Association for Computing Machinery.
    Khan, M. (2011). Different approaches to black box testing technique for finding errors.
    International Journal of Software Engineering Applications, 2.
    Klees, G., Ruef, A., Cooper, B., Wei, S., and Hicks, M. (2018). Evaluating fuzz testing.
    Li, R., Yang, P., Huang, C.-C., Sun, Y., Xue, B., and Zhang, L. (2022). Towards practical
    robustness analysis for dnns based on pac-model learning. In Proceedings of the 44th
    International Conference on Software Engineering, ICSE ’22, page 2189–2201, New
    York, NY, USA. Association for Computing Machinery.
Liu, Z., Feng, Y., and Chen, Z. (2021). Dialtest: Automated testing for recurrent-neural-network-driven dialogue systems. In Proceedings of the 30th ACM SIGSOFT International Symposium on Software Testing and Analysis, ISSTA 2021, page 115–126, New York, NY, USA. Association for Computing Machinery.
    Manès, V. J., Han, H., Han, C., Cha, S. K., Egele, M., Schwartz, E. J., and Woo, M. (2021).
    The art, science, and engineering of fuzzing: A survey. IEEE Transactions on Software
    Engineering, 47(11):2312–2331.
    Sen, K., Marinov, D., and Agha, G. (2005). Cute: a concolic unit testing engine for c. In
    Proceedings of the 10th European Software Engineering Conference Held Jointly with
    13th ACM SIGSOFT International Symposium on Foundations of Software Engineering,
    ESEC/FSE-13, page 263–272, New York, NY, USA. Association for Computing
    Machinery.
    Wang, S., Shrestha, N., Subburaman, A. K., Wang, J., Wei, M., and Nagappan, N. (2021a).
    Automatic unit test generation for machine learning libraries: How far are we? In
    2021 IEEE/ACM 43rd International Conference on Software Engineering (ICSE), pages
    1548–1560.
    Wang, Z., You, H., Chen, J., Zhang, Y., Dong, X., and Zhang, W. (2021b). Prioritizing
    test inputs for deep neural networks via mutation analysis. In 2021 IEEE/ACM 43rd
    International Conference on Software Engineering (ICSE), pages 397–409.
    Xia, C. S., Dutta, S., Misailovic, S., Marinov, D., and Zhang, L. (2023). Balancing effectiveness
    and flakiness of non-deterministic machine learning tests. In 2023 IEEE/ACM
    45th International Conference on Software Engineering (ICSE), pages 1801–1813.
    Xie, D., Li, Y., Kim, M., Pham, H. V., Tan, L., Zhang, X., and Godfrey, M. W. (2022).
    Docter: Documentation-guided fuzzing for testing deep learning api functions. ISSTA
    2022, page 176–188, New York, NY, USA. Association for Computing Machinery.
    Yang, C., Deng, Y., Yao, J., Tu, Y., Li, H., and Zhang, L. (2023). Fuzzing automatic
    differentiation in deep-learning libraries.
    Yu, F., Chi, Y.-Y., and Chen, Y.-F. (2024a). Constraint-based adversarial example synthesis.
    Yu, F., Chi, Y.-Y., and Chen, Y.-F. (2024b). Constraint-based adversarial example synthesis.
    Zhang, J. and Li, J. (2020). Testing and verification of neural-network-based safety-critical
    control software: A systematic literature review. Information and Software Technology,
    123:106296.
    Zhang, X., Sun, N., Fang, C., Liu, J., Liu, J., Chai, D., Wang, J., and Chen, Z. (2021).
    Predoo: Precision testing of deep learning operators. In Proceedings of the 30th ACM
    SIGSOFT International Symposium on Software Testing and Analysis, ISSTA 2021,
    page 400–412, New York, NY, USA. Association for Computing Machinery.
    Zhao, X., Qu, H., Xu, J., Li, X., Lv, W., and Wang, G.-G. (2023). A systematic review of
    fuzzing. Soft Comput., 28(6):5493–5522.
Description: Master's thesis
National Chengchi University
Department of Management Information Systems
111356024
Source: http://thesis.lib.nccu.edu.tw/record/#G0111356024
Data type: thesis
Appears in Collections: [Department of Management Information Systems] Theses

Files in This Item:

File | Description | Size | Format | Browse count
602401.pdf | | 735Kb | Adobe PDF | 0 | View/Open

