政大機構典藏 - National Chengchi University Institutional Repository (NCCUR): Item 140.119/152418


    Please use this permanent URL to cite or link to this item: https://nccur.lib.nccu.edu.tw/handle/140.119/152418


    Title: 使用量化簡化加速神經網絡的自動測試
    Expedite Automatic Testing on Neural Networks using Quantization Simplification
    Authors: 施瑋昱 (Shi, Wei-Yu)
    Contributors: 郁方 (Yu, Fang); 施瑋昱 (Shi, Wei-Yu)
    Keywords: Quantization Simplification; Neural Network Security; Concolic Testing
    Date: 2024
    Upload time: 2024-08-05 12:08:21 (UTC+8)
    Abstract: Automatic testing plays a critical role in realizing neural network security. In this research, we apply dynamic symbolic execution, a.k.a. concolic testing, a systematic testing framework that combines concrete execution with symbolic execution, to neural network models, exploring inference execution paths by generating test inputs that trigger different behaviors of the models.
    In particular, we propose integrating concolic testing with ternary simplification to expedite the acquisition of adversarial instances. Ternary simplification is a specific quantization technique that confines a model's computation to addition and subtraction operations to reduce computational complexity; we show how it enables concolic testing to explore more branches in deep neural networks.
    We evaluate the proposed approach on CNN, LSTM, and Transformer models, finding adversarial examples in previously unsolvable networks. This research contributes a fresh perspective on utilizing quantization simplification techniques, facilitating automatic constraint-based input synthesis for adversarial attacks on neural network models.
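    The role of ternary simplification can be made concrete with a small sketch. The NumPy code below illustrates threshold-based ternarization, assuming the common mean-magnitude threshold rule from the ternary-weight-network literature; the function names and the 0.7 factor are illustrative assumptions, not the thesis's exact procedure. Once weights are restricted to {-1, 0, +1}, every multiply in a dense layer collapses to an addition, a subtraction, or a skip.

```python
import numpy as np

def ternarize(w: np.ndarray, delta_scale: float = 0.7) -> np.ndarray:
    """Map real-valued weights to {-1, 0, +1} via a mean-magnitude threshold.

    delta = delta_scale * mean(|w|) is a common heuristic in ternary-weight
    networks; the exact rule used in the thesis may differ.
    """
    delta = delta_scale * np.mean(np.abs(w))
    return np.where(w > delta, 1.0, np.where(w < -delta, -1.0, 0.0))

def ternary_dense(x: np.ndarray, w_t: np.ndarray) -> np.ndarray:
    """Dense layer with ternary weights: multiplication-free inference.

    Each output is the sum of inputs whose weight is +1 minus the sum of
    inputs whose weight is -1; zero weights are skipped entirely.
    """
    out = np.zeros(w_t.shape[1])
    for j in range(w_t.shape[1]):
        out[j] = x[w_t[:, j] == 1.0].sum() - x[w_t[:, j] == -1.0].sum()
    return out

# Sanity check: the add/subtract form matches an ordinary matrix product.
rng = np.random.default_rng(0)
w = rng.normal(size=(8, 4))
x = rng.normal(size=8)
w_t = ternarize(w)
assert np.allclose(ternary_dense(x, w_t), x @ w_t)
```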
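    The concolic-testing loop the abstract refers to can likewise be illustrated in a few lines: execute the program on a concrete input, record the branch conditions the execution followed as symbolic constraints, then negate one of them and ask an SMT solver for an input that takes the other side. The toy below uses the z3 solver on a hand-written two-branch function; it is a sketch of the general idea, not the PyCT implementation used in the thesis.

```python
import z3

def program(x: int) -> str:
    """Toy program under test with a nested branch."""
    if x > 10:
        if x * 3 - 5 > 40:
            return "deep"
        return "shallow"
    return "other"

# A concrete run with x = 12 follows the path (x > 10) and not (3x - 5 > 40).
x = z3.Int("x")
path = [x > 10, z3.Not(x * 3 - 5 > 40)]

# Negate the last branch condition to steer the next run down the other side.
solver = z3.Solver()
solver.add(path[:-1])
solver.add(z3.Not(path[-1]))
if solver.check() == z3.sat:
    new_input = solver.model()[x].as_long()
    print(new_input, program(new_input))  # e.g. prints: 16 deep
```

    Repeating this negate-and-solve step over the recorded branches is what systematically drives execution down unexplored paths. Restricting the model to additions and subtractions keeps the resulting path constraints simple, which is plausibly why the simplification helps the solver reach branches that were previously out of reach.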
    Description: Master's thesis
    National Chengchi University
    Department of Management Information Systems
    111356051
    Source: http://thesis.lib.nccu.edu.tw/record/#G0111356051
    Data type: thesis
    Appears in Collections: [Department of Management Information Systems] Theses

    Files in This Item:

    File: 605101.pdf
    Size: 2199 KB
    Format: Adobe PDF
    Views: 0


    All items in NCCUR are protected by copyright, with all rights reserved.



    Copyright Announcement
    1. The digital content of this website is part of the National Chengchi University Institutional Repository, provided free of charge for academic research and public education on a non-commercial basis. Please use it in a proper and reasonable manner and respect the rights of copyright owners. For commercial use, please obtain authorization from the copyright owner in advance.
    2. This website has been made with every effort to avoid infringing the rights of copyright owners. If you believe that any material on the website nevertheless infringes copyright, please notify our staff (nccur@nccu.edu.tw); the work will be removed from the repository immediately while your claim is investigated.