National Chengchi University Institutional Repository (NCCUR): Item 140.119/148474
    Please use this persistent URL to cite or link to this item: https://nccur.lib.nccu.edu.tw/handle/140.119/148474


    Title: 優化非獨立同分佈的聯邦學習:基於相似性的聚合方法
    Optimizing Federated Learning on Non-IID Data: Aggregation Approaches Based on Similarity
    Author: 吳仁凱 (Wu, Ren-Kai)
    Contributors: 蔡子傑 (Tsai, Tzu-Chieh); 吳仁凱 (Wu, Ren-Kai)
    Keywords: Federated Learning; Personalized Federated Learning; Clustering; Non-Independent Identically Distributed (Non-IID); Data Privacy
    Date: 2023
    Date Uploaded: 2023-12-01 10:33:44 (UTC+8)
    Abstract: With the continued advancement of information technology and artificial intelligence, data analysis and privacy protection have become increasingly important. Federated learning, a new machine learning architecture, satisfies data-privacy requirements by allowing decentralized data to remain where it is generated while still enabling collaborative model training. As data grows and becomes more dispersed, however, federated learning still faces many challenges, especially when the data are non-independent and identically distributed (Non-IID). Multi-center federated learning is a promising solution, and this study examines its performance under different data distributions, focusing on the ability of the FedSEM algorithm to learn personalized models.
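    Multi-center federated learning, of which FedSEM is an instance, maintains several center models and alternates between assigning each client to its best-fitting center and re-averaging each center from its assigned clients. The thesis's exact procedure is not reproduced here, so the following is only a minimal sketch of that assign-and-update loop, assuming flattened NumPy weight vectors and squared distance as the fit measure; the function names are illustrative.

```python
import numpy as np

def assign_clients(client_weights, center_weights):
    """Assign each client model to the nearest center model (squared L2 distance)."""
    assignments = []
    for w in client_weights:
        dists = [np.sum((w - c) ** 2) for c in center_weights]
        assignments.append(int(np.argmin(dists)))
    return assignments

def update_centers(client_weights, assignments, old_centers):
    """Recompute each center as the mean of its assigned client models;
    keep the previous center when no client was assigned to it."""
    new_centers = []
    for k, old in enumerate(old_centers):
        members = [w for w, a in zip(client_weights, assignments) if a == k]
        new_centers.append(np.mean(members, axis=0) if members else old)
    return new_centers
```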
    For comparison with the FedAVG algorithm, all federated learning algorithms were run with the same number of communication rounds and the same target prediction accuracy, using the accuracy of the global model on each client's local task as the evaluation metric, together with four different data-partitioning strategies. These settings help clarify the concrete impact of data distribution on federated learning.
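    For context, the FedAVG baseline mentioned above aggregates client models by a data-size-weighted average of their parameters (McMahan et al. [1]). A minimal sketch, assuming flattened NumPy weight vectors and per-client sample counts; the names are illustrative:

```python
import numpy as np

def fedavg_aggregate(client_weights, client_sizes):
    """Data-size-weighted average of client model parameters (FedAvg).

    client_weights: list of 1-D NumPy arrays, one flattened model per client
    client_sizes:   list of local sample counts, aligned with client_weights
    """
    total = float(sum(client_sizes))
    global_weights = np.zeros_like(client_weights[0])
    for w, n in zip(client_weights, client_sizes):
        global_weights += (n / total) * w
    return global_weights

# Example: three clients with unequal amounts of local data
clients = [np.array([1.0, 2.0]), np.array([3.0, 0.0]), np.array([0.0, 1.0])]
sizes = [100, 50, 50]
print(fedavg_aggregate(clients, sizes))  # -> [1.25 1.25]
```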
    This study also examines the K-means clustering algorithm in detail, analyzing its strengths and weaknesses in practical use. Although K-means is simple and fast, it requires the number of clusters to be set in advance and cannot detect outliers. To address these issues, the study introduces density-based clustering algorithms such as DBSCAN, which discovers the number of clusters automatically and identifies noise, although determining its optimal parameters remains a significant challenge.
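    To make the contrast concrete, here is a small scikit-learn sketch of the two clustering options discussed: K-means needs the cluster count up front and assigns every point to some cluster, while DBSCAN infers the number of clusters from density and labels low-density points as noise (-1). The synthetic feature matrix and parameter values below are illustrative only, not the settings used in the thesis.

```python
import numpy as np
from sklearn.cluster import KMeans, DBSCAN

# Illustrative feature matrix: one flattened model update per client (row)
rng = np.random.default_rng(0)
client_updates = np.vstack([
    rng.normal(0.0, 0.1, size=(10, 5)),   # one group of similar clients
    rng.normal(2.0, 0.1, size=(10, 5)),   # a second group
    rng.normal(8.0, 0.1, size=(1, 5)),    # an outlier client
])

# K-means: cluster count must be chosen in advance; every client gets a cluster
kmeans_labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(client_updates)

# DBSCAN: no preset cluster count; points in low-density regions are labeled -1 (noise)
dbscan_labels = DBSCAN(eps=1.0, min_samples=3).fit_predict(client_updates)

print("K-means labels:", kmeans_labels)
print("DBSCAN labels: ", dbscan_labels)   # the outlier client should appear as -1
```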
    In Non-IID scenarios, the global model's prediction performance on client tasks degrades and convergence slows. To address this, the study proposes a similarity-based aggregation method aimed at improving federated learning performance under Non-IID conditions. Simulation results demonstrate the method's effectiveness in extreme Non-IID scenarios and show clear advantages over existing methods.
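    The abstract does not give the exact aggregation formula, so the following is only a generic illustration of the idea behind similarity-based aggregation: weight each peer's model by its cosine similarity to the target client when forming that client's personalized model. The weighting scheme and all names are assumptions made for illustration, not the thesis's definition.

```python
import numpy as np

def cosine_similarity(a, b):
    """Cosine similarity between two flattened model-weight vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def similarity_weighted_models(client_weights):
    """Build one personalized model per client by averaging all client models,
    weighted by their (non-negative) cosine similarity to that client's model."""
    personalized = []
    for w_i in client_weights:
        sims = np.array([max(cosine_similarity(w_i, w_j), 0.0) for w_j in client_weights])
        sims = sims / sims.sum()  # self-similarity is 1, so the sum is positive
        personalized.append(sum(s * w_j for s, w_j in zip(sims, client_weights)))
    return personalized
```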
    In summary, this study explores the challenges of multi-center federated learning in depth and introduces optimization strategies and clustering algorithms to improve communication and training efficiency. The findings provide a useful reference for understanding and optimizing federated learning, as well as a proof of concept for future research and practical applications.
    References: [1] McMahan, B., Moore, E., Ramage, D., Hampson, S., & y Arcas, B. A. (2017, April). Communication-efficient learning of deep networks from decentralized data. In Artificial intelligence and statistics (pp. 1273-1282). PMLR.
    [2] Yang, Q., Liu, Y., Chen, T., & Tong, Y. (2019). Federated machine learning: Concept and applications. ACM Transactions on Intelligent Systems and Technology (TIST), 10(2), 1-19.
    [3] Kairouz, P., McMahan, H. B., Avent, B., Bellet, A., Bennis, M., Bhagoji, A. N., ... & Zhao, S. (2021). Advances and open problems in federated learning. Foundations and Trends® in Machine Learning, 14(1–2), 1-210.
    [4] Li, Q., Diao, Y., Chen, Q., & He, B. (2022, May). Federated learning on non-IID data silos: An experimental study. In 2022 IEEE 38th International Conference on Data Engineering (ICDE) (pp. 965-978). IEEE.
    [5] Zhao, Y., Li, M., Lai, L., Suda, N., Civin, D., & Chandra, V. (2018). Federated learning with non-IID data. arXiv preprint arXiv:1806.00582.
    [6] Karimireddy, S. P., Kale, S., Mohri, M., Reddi, S., Stich, S., & Suresh, A. T. (2020, November). Scaffold: Stochastic controlled averaging for federated learning. In International conference on machine learning (pp. 5132-5143). PMLR.
    [7] Tan, A. Z., Yu, H., Cui, L., & Yang, Q. (2022). Towards personalized federated learning. IEEE Transactions on Neural Networks and Learning Systems.
    [8] Long, G., Xie, M., Shen, T., Zhou, T., Wang, X., & Jiang, J. (2022). Multi-center federated learning: clients clustering for better personalization. World Wide Web, 1-20.
    [9] Li, T., Sahu, A. K., Zaheer, M., Sanjabi, M., Talwalkar, A., & Smith, V. (2020). Federated optimization in heterogeneous networks. Proceedings of Machine Learning and Systems, 2, 429-450.
    [10] Briggs, C., Fan, Z., & Andras, P. (2020, July). Federated learning with hierarchical clustering of local updates to improve training on non-IID data. In 2020 International Joint Conference on Neural Networks (IJCNN) (pp. 1-9). IEEE.
    [11] Ghosh, A., Chung, J., Yin, D., & Ramchandran, K. (2020). An efficient framework for clustered federated learning. Advances in Neural Information Processing Systems, 33, 19586-19597.
    [12] Hartigan, J. A., & Wong, M. A. (1979). Algorithm AS 136: A K-Means Clustering Algorithm. Journal of the Royal Statistical Society. Series C (Applied Statistics), 28(1), 100–108. https://doi.org/10.2307/2346830
    [13] Ester, M., Kriegel, H. P., Sander, J., & Xu, X. (1996, August). A density-based algorithm for discovering clusters in large spatial databases with noise. In KDD (Vol. 96, No. 34, pp. 226-231).
    Description: Master's thesis
    National Chengchi University
    Department of Computer Science
    110753157
    Source: http://thesis.lib.nccu.edu.tw/record/#G0110753157
    Data Type: thesis
    Appears in Collections: [Department of Computer Science] Theses

    Files in This Item:

    315701.pdf (5021 KB, Adobe PDF)

