    Please use this identifier to cite or link to this item: https://nccur.lib.nccu.edu.tw/handle/140.119/158476


    Title: 基於客戶端梯度分析的聯邦學習算法 : 以標籤翻轉為例
    Federated Learning with Client Gradient Analysis : A Case Study on Label-Flipping Attacks
    Authors: 黃思穎
    Huang, Sih-Ying
    Contributors: 蔡子傑
    Tsai, Tzu-Chieh
    黃思穎
    Huang, Sih-Ying
Keywords: Federated Learning
Label Flipping
Attack
Machine Learning
Decentralized Learning
Data Privacy
    Date: 2025
    Issue Date: 2025-08-04 13:57:57 (UTC+8)
Abstract: Federated learning is a distributed machine learning technique that enables collaborative training of models across multiple client devices without centrally transmitting raw data, thereby effectively preserving data privacy. This decentralized computing architecture not only reduces the risk of single points of failure but also enhances the system's stability and security. During the federated learning process, each client updates the model on its local data and then sends the updated local model parameters to a central server, where they are aggregated into a global model. Since only model parameters are transmitted instead of raw data, this method significantly reduces communication costs while effectively lowering the risk of data leakage, making it a crucial technology in the field of privacy-preserving machine learning.
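To make the training round described above concrete, here is a minimal Python sketch of one federated-averaging round on a toy linear-regression task. It illustrates the general pattern only, not the thesis's implementation; the names local_update and federated_round, the learning rate, and the client counts are assumptions chosen for the demo.

# Minimal sketch of one federated-averaging round, as described above.
# All names and hyperparameters are illustrative assumptions, not the
# thesis's actual implementation.
import numpy as np

def local_update(global_weights, client_data, lr=0.1):
    """One gradient step of linear regression on a client's local data."""
    X, y = client_data
    preds = X @ global_weights
    grad = X.T @ (preds - y) / len(y)   # mean-squared-error gradient
    return global_weights - lr * grad   # updated local weights

def federated_round(global_weights, clients):
    """Each client trains locally; the server averages the returned weights."""
    updates = [local_update(global_weights, data) for data in clients]
    return np.mean(updates, axis=0)     # FedAvg-style aggregation

rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])
clients = []
for _ in range(5):                      # five clients, each with private data
    X = rng.normal(size=(50, 2))
    y = X @ true_w + rng.normal(scale=0.1, size=50)
    clients.append((X, y))

w = np.zeros(2)
for _ in range(100):                    # raw data never leaves the clients
    w = federated_round(w, clients)
print("learned weights:", w)

Note that only the weight vectors cross the network in this sketch; each client's (X, y) pair stays local, which is the privacy property the abstract emphasizes.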
However, federated learning still faces many challenges, one of which is attacks from malicious clients. Prior research has mainly focused on the impact of non-independent and identically distributed (Non-IID) data on model accuracy, such as clients holding imbalanced data volumes or different data types. In open federated learning environments, however, malicious clients may deliberately mount label-flipping attacks, altering the labels of certain classes to incorrect ones and thereby harming the convergence and accuracy of the federated model. Label-flipping attacks are a type of model poisoning attack: they not only degrade model performance but can also cause severe misclassifications in critical categories, undermining the practicality and trustworthiness of federated learning.
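As a concrete picture of the attack just described, the sketch below shows how a malicious client could rewrite the labels of one class as another before local training. The function name flip_labels, the class indices, and the flip rate are illustrative assumptions, not details taken from the thesis.

# Hedged illustration of a label-flipping attack: a malicious client
# relabels a source class as a chosen target class before training.
# The class indices here are arbitrary assumptions for the demo.
import numpy as np

def flip_labels(labels, source_class, target_class, flip_rate=1.0):
    """Return a copy of `labels` with `flip_rate` of the `source_class`
    entries relabeled as `target_class`."""
    labels = labels.copy()
    idx = np.where(labels == source_class)[0]
    n_flip = int(len(idx) * flip_rate)
    chosen = np.random.default_rng(0).choice(idx, size=n_flip, replace=False)
    labels[chosen] = target_class
    return labels

y = np.array([0, 1, 1, 2, 1, 0, 2, 1])
y_poisoned = flip_labels(y, source_class=1, target_class=2)
print(y_poisoned)   # every class-1 label now reads as class 2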
This study addresses label flipping by malicious clients and proposes a novel client selection strategy to enhance the robustness of federated learning under attack. The strategy identifies and excludes potentially malicious clients by analyzing each client's historical contribution and model update trends, thereby mitigating the impact of label flipping on the global model. Furthermore, this strategy is compared with two existing federated learning methods under varying degrees of label-flipping attack and different data distributions, with analyses of its impact on model convergence speed and generalization capability. Experimental results demonstrate that the proposed method effectively improves the stability and performance of federated learning in adversarial environments, further enhancing its application value in privacy-preserving scenarios.
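The abstract does not spell out the selection rule, so the sketch below only illustrates the general shape of such a strategy: score each client's update against the aggregate trend, keep a per-client score history, and exclude persistent outliers. The cosine-similarity score, the threshold, and the window size are assumptions made for illustration; the thesis defines its actual criteria.

# Minimal sketch of a history-based client-selection filter of the kind
# the abstract describes. The scoring rule (cosine similarity to the mean
# update) and the threshold are assumptions for illustration only.
import numpy as np

class ClientSelector:
    def __init__(self, n_clients, threshold=0.0, window=5):
        self.history = [[] for _ in range(n_clients)]  # per-client score log
        self.threshold = threshold
        self.window = window

    def score(self, updates):
        """Cosine similarity of each client's update to the mean update."""
        mean_u = np.mean(updates, axis=0)
        return [
            float(u @ mean_u / (np.linalg.norm(u) * np.linalg.norm(mean_u) + 1e-12))
            for u in updates
        ]

    def select(self, updates):
        """Keep clients whose recent average score stays above the threshold."""
        for i, s in enumerate(self.score(updates)):
            self.history[i].append(s)
        kept = []
        for i, hist in enumerate(self.history):
            recent = hist[-self.window:]
            if np.mean(recent) > self.threshold:
                kept.append(i)
        return kept

selector = ClientSelector(n_clients=4)
updates = [np.array([1.0, 1.0]), np.array([1.1, 0.9]),
           np.array([0.9, 1.2]), np.array([-1.0, -1.0])]  # last one adversarial
print(selector.select(updates))  # -> [0, 1, 2]; the outlier is excluded

In this toy run the fourth client pushes the model in the opposite direction of the others, its similarity score goes negative, and it is dropped from aggregation, which is the qualitative behavior the proposed strategy aims for.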
Description: Master's thesis
National Chengchi University
Department of Computer Science
112753110
    Source URI: http://thesis.lib.nccu.edu.tw/record/#G0112753110
    Data Type: thesis
Appears in Collections: [Department of Computer Science] Theses

    Files in This Item:

File: 311001.pdf (1176 Kb, Adobe PDF)


    All items in 政大典藏 are protected by copyright, with all rights reserved.

