Please use this identifier to cite or link to this item: https://nccur.lib.nccu.edu.tw/handle/140.119/157883
Title: | A Privacy-Preserving Mechanism for Medical Image Deep Learning Based on Singular Value Decomposition
Authors: | 王思詠 Wang, Sih-Yong |
Contributors: | 左瑞麟 Tso, Ray-Lin; 王思詠 Wang, Sih-Yong
Keywords: | Medical Image Obfuscation; Privacy-Preserving Deep Learning; Singular Value Decomposition; Random Projection; Adversarial Reconstruction
Date: | 2025 |
Issue Date: | 2025-07-01 15:49:44 (UTC+8) |
Abstract: | Despite the remarkable success of deep learning in medical image analysis, serious privacy concerns persist because models are susceptible to reconstruction and inversion attacks. To address these vulnerabilities, we propose a novel and efficient multi-stage obfuscation framework that enables privacy-preserving deep learning on medical images without compromising utility. Our method combines Singular Value Decomposition (SVD) with quantization, Gaussian noise injection, and random projection to disrupt both the statistical and structural patterns of an image, thereby mitigating adversarial reconstruction risks. Unlike prior SVD- or PCA-based schemes, which retain identifiable components that attackers can exploit, our design achieves strong obfuscation of sensitive image features while maintaining high model accuracy. Experimental results on brain tumor MRI and polycystic ovary syndrome (PCOS) ultrasound datasets show that obfuscated images classified with ResNet50, MobileNetV2, and InceptionV3 retain over 85% classification accuracy. Furthermore, the framework significantly reduces reconstructability, yielding SSIM scores below 0.07 and negative PSNR values in adversarial settings, including the Leading Bit Attack (LBA) and the Minimum Difference Attack (MDA). This study bridges the gap between data security and clinical AI utility, offering a scalable, lightweight, and attack-resilient solution for privacy-preserving deep learning in medical imaging.
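The record does not include the thesis's implementation details, so the following NumPy sketch only illustrates the kind of four-stage obfuscation the abstract names: truncated SVD, quantization, Gaussian noise injection, and random projection. The function name `obfuscate` and all parameters (rank `k`, quantization step `q`, noise scale `sigma`) are illustrative assumptions, not values from the thesis.

```python
import numpy as np

def obfuscate(image: np.ndarray, k: int = 32, q: float = 0.05,
              sigma: float = 0.1, seed: int = 0) -> np.ndarray:
    """Illustrative multi-stage obfuscation sketch; parameters are
    assumptions for this example, not the thesis's settings."""
    rng = np.random.default_rng(seed)

    # Stage 1: truncated SVD keeps only the top-k singular components.
    U, S, Vt = np.linalg.svd(image.astype(np.float64), full_matrices=False)
    low_rank = U[:, :k] @ np.diag(S[:k]) @ Vt[:k, :]

    # Stage 2: quantization coarsens pixel statistics.
    quantized = np.round(low_rank / q) * q

    # Stage 3: Gaussian noise injection perturbs residual structure.
    noisy = quantized + rng.normal(0.0, sigma, size=quantized.shape)

    # Stage 4: random projection mixes rows with a Gaussian matrix,
    # scrambling the spatial layout an attacker would try to recover.
    n = noisy.shape[0]
    R = rng.normal(0.0, 1.0 / np.sqrt(n), size=(n, n))
    return R @ noisy

# Example: obfuscate a normalized 224x224 grayscale stand-in image.
img = np.random.rand(224, 224)  # stand-in for an MRI slice
obf = obfuscate(img)
print(obf.shape)  # (224, 224): shape is preserved
```

Because the output keeps the input's shape, off-the-shelf classifiers such as ResNet50 can consume it without architectural changes; in this sketch the projection matrix plays the role of a secret key, though the record does not state how the thesis manages it.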
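To make the reported numbers concrete (SSIM below 0.07, negative PSNR): PSNR is \(10 \log_{10}(\text{peak}^2 / \text{MSE})\), so it turns negative exactly when the mean squared reconstruction error exceeds the squared dynamic range. A minimal check follows, with `psnr` as a local helper and SSIM taken from scikit-image; the arrays are synthetic stand-ins, not the thesis's data.

```python
import numpy as np
from skimage.metrics import structural_similarity  # pip install scikit-image

def psnr(reference: np.ndarray, reconstruction: np.ndarray,
         peak: float = 1.0) -> float:
    # PSNR in dB; negative once MSE exceeds peak**2.
    mse = np.mean((reference - reconstruction) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)

rng = np.random.default_rng(0)
ref = rng.random((224, 224))                 # stand-in for an original image
rec = ref + rng.normal(0.0, 1.5, ref.shape)  # stand-in for a failed reconstruction

# For float inputs, data_range must cover the values of both arrays.
dr = float(max(ref.max(), rec.max()) - min(ref.min(), rec.min()))
print(psnr(ref, rec))                                  # about -3.5 dB: negative
print(structural_similarity(ref, rec, data_range=dr))  # near 0: structure destroyed
```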
Description: | Master's thesis, National Chengchi University, Master's Program in Information Security, 112791003
Source URI: | http://thesis.lib.nccu.edu.tw/record/#G0112791003 |
Data Type: | thesis |
Appears in Collections: | [Master's Program in Information Security] Theses
Files in This Item: | 100301.pdf (968Kb, Adobe PDF)
All items in 政大典藏 are protected by copyright, with all rights reserved.