National Chengchi University Institutional Repository (NCCUR): Item 140.119/146895
    Please use this permanent URL to cite or link to this item: https://nccur.lib.nccu.edu.tw/handle/140.119/146895


    Title: A Study on Controllable Robustness Training of Neural Networks (神經網路的可控制穩健性訓練研究)
    Controllable Robustness Training
    Authors: Hu, Yu-Chi (胡育騏)
    Contributors: Yu, Fang (郁方)
    Hu, Yu-Chi (胡育騏)
    Keywords: Abstract interpretation
    Adversarial training
    Logic rule
    Robustness
    Date: 2023
    Date Uploaded: 2023-09-01 14:55:10 (UTC+8)
    Abstract: In the era of big data, neural network techniques have made breakthrough progress and allow systems to be built that would be difficult for humans to implement directly. However, a network's prediction accuracy and its robustness against external perturbations and attacks have become important concerns: training for robustness with adversarial examples or abstract interpretation can reduce accuracy and training performance on the original task. To balance this trade-off, we propose controllable robustness training, in which the neural network model is controlled with rule representations during adversarial training. The adversarial training loss is treated as a loss on the rule, which separates robustness training from the training of the original task. The rule strength, expressed as a ratio between the two losses, can be adjusted at test time to balance accuracy and robustness in how the model learns the rules and constraints. We demonstrate that controlling the contribution of robustness training achieves a better balance between the accuracy and the robustness of neural networks.
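    The abstract describes the mechanism only at a high level: the adversarial loss is treated as a separate rule loss, and a rule-strength coefficient lets accuracy and robustness be traded off at test time without retraining. The following is a minimal sketch of that idea, not the thesis's implementation; it assumes PyTorch, a one-step FGSM attack as the robustness (rule) signal, and a toy classifier that takes the rule strength alpha as an extra input, and all names (AlphaNet, fgsm_example, train_step) are hypothetical.

```python
# Minimal sketch, assuming PyTorch and FGSM; the thesis's actual models,
# rule encoder, and attack are not specified here.
import torch
import torch.nn as nn
import torch.nn.functional as F


class AlphaNet(nn.Module):
    """Toy classifier that receives the rule strength alpha as an extra input,
    so one set of weights can be queried with different alpha at test time."""

    def __init__(self, in_dim: int = 784, n_classes: int = 10):
        super().__init__()
        self.body = nn.Sequential(
            nn.Linear(in_dim + 1, 128), nn.ReLU(), nn.Linear(128, n_classes)
        )

    def forward(self, x: torch.Tensor, alpha: float) -> torch.Tensor:
        a = torch.full((x.size(0), 1), float(alpha), device=x.device)
        return self.body(torch.cat([x, a], dim=1))


def fgsm_example(model, x, y, alpha, eps=0.1):
    """One-step FGSM perturbation, used here as the robustness ('rule') signal."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv, alpha), y)
    (grad,) = torch.autograd.grad(loss, x_adv)
    return (x_adv + eps * grad.sign()).clamp(0.0, 1.0).detach()


def train_step(model, optimizer, x, y):
    """Task loss and rule (adversarial) loss are kept separate; alpha is sampled
    each step so the model learns to operate across rule strengths."""
    alpha = torch.rand(1).item()                          # rule strength in [0, 1]
    x_adv = fgsm_example(model, x, y, alpha)
    task_loss = F.cross_entropy(model(x, alpha), y)       # original-task loss
    rule_loss = F.cross_entropy(model(x_adv, alpha), y)   # loss on the rule
    loss = (1.0 - alpha) * task_loss + alpha * rule_loss
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return task_loss.item(), rule_loss.item()


if __name__ == "__main__":
    torch.manual_seed(0)
    model = AlphaNet()
    opt = torch.optim.SGD(model.parameters(), lr=0.1)
    x, y = torch.rand(64, 784), torch.randint(0, 10, (64,))
    for _ in range(5):
        train_step(model, opt, x, y)
    # Test-time control: same weights, different rule strengths.
    with torch.no_grad():
        logits_accurate = model(x, alpha=0.0)   # favor original-task accuracy
        logits_robust = model(x, alpha=1.0)     # favor robustness
    print(logits_accurate.shape, logits_robust.shape)
```

    In this sketch, querying the trained weights with alpha = 0 favors clean-input accuracy, while alpha closer to 1 favors robustness to perturbed inputs; the thesis's actual rule encoding and loss-ratio scheme may differ.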
    Description: Master's thesis
    National Chengchi University
    Department of Management Information Systems
    110356042
    Source: http://thesis.lib.nccu.edu.tw/record/#G0110356042
    Data Type: thesis
    Appears in Collections: [Department of Management Information Systems] Theses

    Files in This Item:

    File          Size     Format     Views
    604201.pdf    3946 KB  Adobe PDF  20

