Please use this identifier to cite or link to this item: https://nccur.lib.nccu.edu.tw/handle/140.119/155969
Title: 量化技術賦能之高效安全的區塊鏈聯邦學習系統 (Efficient and Secure Blockchain-Based Federated Learning System Empowered by Quantization Techniques)
Authors: 劉育佑 Liu, Yu-You
Contributors: 張宏慶 Jang, Hung-Chin; 劉育佑 Liu, Yu-You
Keywords: Federated Learning; Blockchain; Smart Contract; Quantization; Deep Learning
Date: 2025
Issue Date: 2025-03-03 14:03:15 (UTC+8)
Abstract:
Machine Learning (ML), as a data-driven technology, has been widely applied across various fields. However, traditional ML methods often face performance bottlenecks when handling high-dimensional data or complex nonlinear problems. Deep Learning (DL), a subfield of ML, leverages multi-layer neural network architectures to overcome the limitations of traditional approaches: it effectively captures latent patterns and features in high-dimensional data and plays a crucial role in modern applications. Nevertheless, the adoption and advancement of DL still face multiple challenges, including high computational resource requirements, the complexity of model updates and migration, and the urgent need for user data privacy protection. To address the privacy concern, Federated Learning (FL) has emerged as a promising solution.

FL is a decentralized ML framework that enables multiple devices or servers to collaboratively train a shared model without directly transmitting user data. To further enhance the security of FL, researchers have proposed integrating it with Blockchain (BC) technology. With its immutability and decentralization, blockchain effectively mitigates attacks from malicious clients. However, blockchain-based FL systems also face challenges, particularly the substantial resource consumption required for model storage and transmission; quantization has been recognized as a potential solution to this issue.

Quantization converts the floating-point weights and biases of a neural network into low-bit-width numerical representations, thereby reducing model size, accelerating inference, and lowering energy consumption. This study explores how quantization can improve the efficiency of blockchain-based FL systems. We simulate a blockchain environment using Ethereum and Ganache, implement smart contracts in Solidity, and use the InterPlanetary File System (IPFS) for model storage. We quantize FL model parameters with the PyTorch framework, employing both Post-Training Quantization (PTQ) and Quantization-Aware Training (QAT), and compare their performance against a non-quantized FL system.

The experimental simulations evaluate system performance and transmission costs across different datasets and numbers of training rounds. By incorporating quantization, the system significantly reduces transmission and storage costs while maintaining model accuracy, providing a practical optimization strategy for the integration of blockchain and FL.
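As context for the collaborative training described above, the following is a minimal FedAvg-style aggregation sketch in PyTorch (the canonical FL algorithm due to McMahan et al.). The function name `fedavg` and the assumption that clients return full `state_dict`s are illustrative, not details taken from the thesis.

```python
import copy
import torch

def fedavg(client_states, client_sizes):
    """Weighted average of client state_dicts, as in FedAvg (McMahan et al.)."""
    total = float(sum(client_sizes))
    avg = copy.deepcopy(client_states[0])
    for key in avg:
        # Weight each client's tensor by its share of the total training data.
        avg[key] = sum(
            s[key].float() * (n / total) for s, n in zip(client_states, client_sizes)
        )
    return avg

# Toy usage: two clients with identical architectures but different weights.
net_a = torch.nn.Linear(4, 2)
net_b = torch.nn.Linear(4, 2)
global_state = fedavg([net_a.state_dict(), net_b.state_dict()], [600, 400])
net_a.load_state_dict(global_state)  # the aggregated global model
```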
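The two quantization paths compared in the study correspond to standard PyTorch APIs. Below is a minimal sketch assuming a small fully connected stand-in model (the thesis's actual architectures are not reproduced here); the `fbgemm` backend and default qconfig are illustrative choices.

```python
import torch
import torch.ao.quantization as tq

class QATNet(torch.nn.Module):
    """Stand-in model with quant/dequant stubs at the float/int8 boundary."""
    def __init__(self):
        super().__init__()
        self.quant = tq.QuantStub()      # float -> int8 at the input
        self.fc1 = torch.nn.Linear(784, 128)
        self.relu = torch.nn.ReLU()
        self.fc2 = torch.nn.Linear(128, 10)
        self.dequant = tq.DeQuantStub()  # int8 -> float at the output

    def forward(self, x):
        return self.dequant(self.fc2(self.relu(self.fc1(self.quant(x)))))

# --- Post-Training Quantization (PTQ), dynamic variant ---
# Converts Linear weights to int8 after training; no retraining required.
ptq_model = tq.quantize_dynamic(
    QATNet().eval(), {torch.nn.Linear}, dtype=torch.qint8
)

# --- Quantization-Aware Training (QAT) ---
# Inserts fake-quantize ops so training adapts to int8 rounding error.
net = QATNet().train()
net.qconfig = tq.get_default_qat_qconfig("fbgemm")
qat_net = tq.prepare_qat(net)
# ... run the usual (federated) local training loop on qat_net here ...
int8_net = tq.convert(qat_net.eval())
```

Either path yields a model whose serialized size is roughly a quarter of the float32 original, which is what drives the transmission and storage savings reported in the study.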
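The storage path in the abstract (model weights on IPFS, only the content hash recorded on chain) might look roughly like the sketch below. The `ipfshttpclient` and `web3` libraries, the contract function `submitModel`, and the address/ABI placeholders are illustrative assumptions, not the thesis's actual interfaces.

```python
import ipfshttpclient
from web3 import Web3

CONTRACT_ADDRESS = "0x..."  # placeholder: address of the deployed contract
CONTRACT_ABI = [...]        # placeholder: ABI emitted by the Solidity compiler

# Local IPFS daemon and a Ganache test chain on their default ports.
ipfs = ipfshttpclient.connect()
w3 = Web3(Web3.HTTPProvider("http://127.0.0.1:8545"))

# 1. Add the (quantized) model file to IPFS; only the small CID goes on chain,
#    which is what keeps per-round transaction costs low.
cid = ipfs.add("quantized_model.pt")["Hash"]

# 2. Record the CID via the smart contract (submitModel is hypothetical).
contract = w3.eth.contract(address=CONTRACT_ADDRESS, abi=CONTRACT_ABI)
tx_hash = contract.functions.submitModel(cid).transact({"from": w3.eth.accounts[0]})
w3.eth.wait_for_transaction_receipt(tx_hash)
```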
Description: Master's thesis, Department of Computer Science, National Chengchi University (student ID 111753145)
Source URI: http://thesis.lib.nccu.edu.tw/record/#G0111753145
Data Type: thesis
Appears in Collections: [Department of Computer Science] Theses
Files in This Item:
File: 314501.pdf | Size: 5691 KB | Format: Adobe PDF