Please use this identifier to cite or link to this item: https://nccur.lib.nccu.edu.tw/handle/140.119/147097
Title: | Explainable Deep Learning-Based Recommendation Systems: Enhancing the Service Performance of a Public-Sector Online Subsidy Platform
Authors: | Zhuang, Jun-Yan
Contributors: | Hu, Yuh-Jong; Zhuang, Jun-Yan
Keywords: | Explainable AI; Neural Collaborative Filtering; SHAP; Personalized Recommendation System
Date: | 2023 |
Issue Date: | 2023-09-01 15:39:58 (UTC+8) |
Abstract: | This research addresses the underutilization of a government subsidy platform and supports the digital transformation of Taiwan's small and medium-sized enterprises (SMEs) through a personalized recommendation system. The researchers designed and trained a deep learning-based collaborative filtering recommendation system that learns from multiple user and item features and presents each user with the five products they are most likely to be interested in. To strengthen interpretability, the SHAP module is combined with the deep learning model so that its predictions become more transparent, and OpenAI's ChatGPT, a large language model (LLM), generates plain-language explanations that help users understand why an item was recommended, further improving user satisfaction. Data collected after the system went live show that, compared with random recommendations, the combined system significantly improved the platform's usage efficiency. Overall, the results indicate that pairing a deep learning recommendation system with an explainable AI module and LLM-generated explanations can effectively improve the efficiency and accuracy with which SMEs select cloud solutions on the government's resource distribution platform, thereby supporting the digital transformation of Taiwan's SMEs.
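As context for the abstract above, the following is a minimal sketch of a neural collaborative filtering (NCF) scorer that produces top-five recommendations for one user. It is illustrative only: the embedding size, layer widths, and data schema are assumptions, not the thesis's actual architecture.

```python
# Minimal NCF sketch: embed user and item ids, score pairs with an MLP,
# and return the five highest-scoring items for a user. All dimensions
# and names here are illustrative assumptions.
import torch
import torch.nn as nn

class NCF(nn.Module):
    def __init__(self, n_users: int, n_items: int, emb_dim: int = 32):
        super().__init__()
        self.user_emb = nn.Embedding(n_users, emb_dim)
        self.item_emb = nn.Embedding(n_items, emb_dim)
        # MLP over the concatenated user/item embeddings, ending in one
        # interaction score per (user, item) pair.
        self.mlp = nn.Sequential(
            nn.Linear(2 * emb_dim, 64), nn.ReLU(),
            nn.Linear(64, 32), nn.ReLU(),
            nn.Linear(32, 1),
        )

    def forward(self, user_ids: torch.Tensor, item_ids: torch.Tensor) -> torch.Tensor:
        x = torch.cat([self.user_emb(user_ids), self.item_emb(item_ids)], dim=-1)
        return self.mlp(x).squeeze(-1)

def top5_for_user(model: NCF, user_id: int, n_items: int) -> list[int]:
    """Score every item for one user and return the five highest-scoring item ids."""
    model.eval()
    with torch.no_grad():
        items = torch.arange(n_items)
        users = torch.full_like(items, user_id)
        scores = model(users, items)
    return torch.topk(scores, k=5).indices.tolist()
```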
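Next, a hedged sketch of how SHAP can be attached to a recommendation scorer so that each feature's contribution to a recommendation score becomes visible. The tabular feature matrix, the `score_fn` placeholder, and the choice of the model-agnostic KernelExplainer are assumptions for illustration; the thesis's actual feature set and explainer configuration may differ.

```python
# Attach SHAP to a black-box recommendation scorer over a hypothetical
# (user, item) feature matrix and attribute the score to each feature.
import numpy as np
import shap

def score_fn(features: np.ndarray) -> np.ndarray:
    """Placeholder scoring function standing in for the trained recommender.

    Each row is one (user, item) feature vector; returns a relevance score.
    """
    weights = np.linspace(0.1, 1.0, features.shape[1])
    return features @ weights

# Hypothetical background sample and instances to explain.
background = np.random.rand(100, 8)
to_explain = np.random.rand(5, 8)

# Model-agnostic KernelExplainer: estimates each feature's contribution
# (SHAP value) to the recommendation score of every explained row.
explainer = shap.KernelExplainer(score_fn, background)
shap_values = explainer.shap_values(to_explain)
print(shap_values.shape)  # (5, 8): one contribution per feature per instance
```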
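Finally, a sketch of turning SHAP attributions into the plain-language explanations the abstract describes, using an LLM. The prompt wording, the `gpt-3.5-turbo` model choice, and the `explain_recommendation` helper are hypothetical; only the general pattern (feed feature contributions to a chat completion and return the generated reason) is taken from the abstract.

```python
# Convert SHAP feature contributions into a user-friendly explanation via the
# OpenAI chat completions API. Prompt and model choice are illustrative.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def explain_recommendation(item_name: str, contributions: dict[str, float]) -> str:
    """Ask the LLM to restate feature contributions as a plain-language reason."""
    ranked = sorted(contributions.items(), key=lambda kv: abs(kv[1]), reverse=True)
    prompt = (
        f"The system recommended '{item_name}' to a small business. "
        f"Feature contributions (SHAP values): {ranked}. "
        "In two sentences of plain language, explain why this item was recommended."
    )
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

# Example call with hypothetical feature contributions.
print(explain_recommendation(
    "cloud accounting package",
    {"industry_match": 0.42, "company_size": 0.17, "budget_fit": -0.05},
))
```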
Description: | Master's thesis, National Chengchi University, In-Service Master's Program in Computer Science, 110971009
Source URI: | http://thesis.lib.nccu.edu.tw/record/#G0110971009 |
Data Type: | thesis |
Appears in Collections: | [In-Service Master's Program in Computer Science] Theses
Files in This Item: | 100901.pdf (1692 KB, Adobe PDF)