    Please use this identifier to cite or link to this item: https://nccur.lib.nccu.edu.tw/handle/140.119/155974


    Title: 結合彈性壓縮技術之機器遺忘機制
    Machine Unlearning Mechanisms with Flexible Compression Schemes
    Authors: 羅鈺涵
    Lo, Yu-Han
    Contributors: 廖文宏
    Liao, Wen-Hung
    羅鈺涵
    Lo, Yu-Han
    Keywords: 深度學習
    神經網路
    模型壓縮
    機器遺忘
    可解釋性人工智慧
    Deep Learning
    Neural Network
    Model Compression
    Machine Unlearning
    XAI
    Date: 2025
    Issue Date: 2025-03-03 14:04:12 (UTC+8)
    Abstract: 隨著機器學習的蓬勃發展,特別是生成式人工智慧模型的興起,AI 技術已從專業領域延伸至大眾生活。然而,機器學習模型仰賴大量訓練資料的特性,引發智慧財產權與個人隱私的重要議題。歐盟的《一般資料保護規則》(GDPR) 和美國《加州消費者隱私保護法》(CCPA) 明確規範資料遺忘權,但在深度學習時代,單純刪除原始資料已不足以保護隱私,因為這些資料在訓練過程中已深植於模型參數中。

    為了有效解決這個問題,本論文提出一套結合彈性壓縮技術的機器遺忘機制。這種方法不僅能快速且有效地從模型中移除指定資訊,更能透過模型壓縮與稀疏化技術,大幅降低計算成本並提升效率。我們的方法透過零值填充和重新稀疏化訓練的方式,實現靈活的漸進式遺忘,使模型能夠在多輪遺忘操作後仍維持良好性能。

    與傳統方法相比,本研究提出的彈性壓縮遺忘機制在運算效率與隱私保護之間取得更好的平衡。透過對模型參數的動態稀疏化和彈性壓縮,不僅能有效移除遺忘資料集的相關資訊,還能防止潛在的成員推斷攻擊和模型反轉攻擊。這種方法特別適合計算資源受限且需要高度隱私保護的應用場景,為實際部署提供一個兼具效率與安全性的解決方案。
    With the rapid advancement of machine learning, particularly the emergence of generative AI models, AI technology has expanded from professional domains into everyday life. However, machine learning models' reliance on extensive training data raises significant concerns regarding intellectual property rights and personal privacy. While the European Union's General Data Protection Regulation (GDPR) and the California Consumer Privacy Act (CCPA) explicitly regulate the right to be forgotten, merely deleting the original data is insufficient for privacy protection in the deep learning era, as this data becomes deeply embedded in model parameters during training.

    To address this challenge, this thesis proposes a novel machine unlearning mechanism integrated with flexible compression techniques. This approach not only enables swift and effective removal of specified information from models but also significantly reduces computational costs and improves efficiency through model compression and sparsification techniques. Our method achieves flexible progressive unlearning through zero-value filling and re-sparsification training, enabling models to maintain high performance even after multiple rounds of unlearning operations.
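
    The abstract does not spell out implementation details, but the minimal PyTorch sketch below illustrates one plausible reading of "zero-value filling followed by re-sparsification training": weights most implicated by the forget set are scored by accumulated gradient magnitude and set to zero, and the model is then briefly fine-tuned on the retained data with the zeroed positions held at zero. The scoring rule, the 10% zeroing ratio, and the loader names (`forget_loader`, `retain_loader`) are illustrative assumptions, not the thesis's exact procedure.

    ```python
    # Hedged sketch of pruning-style unlearning: zero-fill forget-relevant
    # weights, then re-sparsify by fine-tuning on retained data only.
    import torch
    import torch.nn.functional as F

    def zero_fill_unlearn(model, forget_loader, retain_loader,
                          zero_ratio=0.1, lr=1e-3, retrain_epochs=1, device="cpu"):
        model.to(device)

        # 1. Score each weight matrix by gradient magnitude accumulated on the forget set.
        scores = {n: torch.zeros_like(p)
                  for n, p in model.named_parameters() if p.dim() > 1}
        model.train()
        for x, y in forget_loader:
            x, y = x.to(device), y.to(device)
            model.zero_grad()
            F.cross_entropy(model(x), y).backward()
            for n, p in model.named_parameters():
                if n in scores and p.grad is not None:
                    scores[n] += p.grad.abs()

        # 2. Zero-value filling: set the most forget-relevant weights to zero.
        with torch.no_grad():
            for n, p in model.named_parameters():
                if n not in scores:
                    continue
                k = max(1, int(zero_ratio * p.numel()))
                threshold = scores[n].flatten().topk(k).values.min()
                p.masked_fill_(scores[n] >= threshold, 0.0)

        # 3. Re-sparsification training: fine-tune briefly on retained data,
        #    keeping zeroed positions at zero so the sparsity pattern persists.
        masks = {n: (p != 0).float()
                 for n, p in model.named_parameters() if n in scores}
        opt = torch.optim.SGD(model.parameters(), lr=lr)
        for _ in range(retrain_epochs):
            for x, y in retain_loader:
                x, y = x.to(device), y.to(device)
                opt.zero_grad()
                F.cross_entropy(model(x), y).backward()
                opt.step()
                with torch.no_grad():
                    for n, p in model.named_parameters():
                        if n in masks:
                            p.mul_(masks[n])
        return model
    ```

    In this sketch, holding the mask fixed during fine-tuning is what allows repeated rounds of unlearning to accumulate sparsity rather than undo it, which is one way a compression scheme can make progressive unlearning cheap.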

    Compared to traditional approaches, the proposed flexible compression unlearning mechanism achieves a better balance between computational efficiency and privacy protection. Through dynamic sparsification and flexible compression of model parameters, it not only removes information related to the forget set effectively but also helps prevent potential membership inference and model inversion attacks. The method is particularly suitable for applications with limited computational resources that require strong privacy protection, providing a practical deployment solution that combines efficiency and security.
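
    Resistance to membership inference is commonly evaluated with a simple confidence-threshold attack: after unlearning, the model's confidence on forget-set samples should be indistinguishable from its confidence on data it never saw. The sketch below is such a baseline check, not necessarily the exact attack considered in the thesis; the loader names and the use of scikit-learn's `roc_auc_score` are assumptions for illustration.

    ```python
    # Baseline confidence-based membership inference check (illustrative only).
    import torch
    import torch.nn.functional as F
    from sklearn.metrics import roc_auc_score

    @torch.no_grad()
    def confidences(model, loader, device="cpu"):
        # Collect the model's softmax confidence on the true label of each sample.
        model.eval()
        out = []
        for x, y in loader:
            probs = F.softmax(model(x.to(device)), dim=1)
            out.append(probs[torch.arange(len(y)), y.to(device)].cpu())
        return torch.cat(out)

    def membership_auc(model, forget_loader, unseen_loader, device="cpu"):
        # Treat forget-set samples as "members" (1) and unseen test samples as
        # "non-members" (0); AUC near 0.5 means the attack cannot tell them apart.
        c_forget = confidences(model, forget_loader, device)
        c_unseen = confidences(model, unseen_loader, device)
        labels = torch.cat([torch.ones_like(c_forget), torch.zeros_like(c_unseen)])
        scores = torch.cat([c_forget, c_unseen])
        return roc_auc_score(labels.numpy(), scores.numpy())
    ```

    An AUC close to 0.5 indicates the attacker cannot separate forgotten samples from unseen ones, while values well above 0.5 suggest residual memorization of the forget set.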
    Description: Master's thesis
    National Chengchi University
    Department of Computer Science
    112753208
    Source URI: http://thesis.lib.nccu.edu.tw/record/#G0112753208
    Data Type: thesis
    Appears in Collections: [Department of Computer Science] Theses

    Files in This Item:

    File: 320801.pdf (36493 Kb, Adobe PDF)

