政大機構典藏-National Chengchi University Institutional Repository(NCCUR):Item 140.119/147097
    Please use this permanent URL to cite or link to this item: https://nccur.lib.nccu.edu.tw/handle/140.119/147097


    Title: Explainable Deep Learning-Based Recommendation Systems: Enhancing the Services of a Public Sector Subsidy Online Platform
    Authors: Zhuang, Jun-Yan
    Contributors: Hu, Yuh-Jong; Zhuang, Jun-Yan
    Keywords: Explainable AI; Neural Collaborative Filtering; SHAP; Personalized Recommendation System
    Date: 2023
    Uploaded: 2023-09-01 15:39:58 (UTC+8)
    Abstract: This study addresses the low utilization of a government subsidy platform by using a personalized recommendation system to help Taiwan's small and medium-sized enterprises (SMEs) with their digital transformation. To this end, we designed and trained a deep-learning-based collaborative filtering recommendation system that learns from multiple user and item features and presents each user with the five products they are most likely to find relevant.
    To strengthen interpretability, SHAP is combined with the deep learning model so that its predictions become more transparent, and OpenAI's large language model ChatGPT generates plain-language explanations that help users understand why an item was recommended, further improving user satisfaction.
    Data gathered after the system went live show that, compared with purely random recommendations, the combined system significantly improved platform usage efficiency. Overall, the results indicate that pairing a deep-learning recommendation system with an explainable AI module and LLM-generated explanations effectively improves the efficiency and accuracy with which SMEs select cloud solutions on a government resource distribution platform, thereby supporting the digital transformation of Taiwan's SMEs.
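    The pipeline the abstract describes — score every candidate item for a user, keep the top five, and attach a feature-level reason — can be sketched in plain Python. This is a minimal illustration, not the thesis's actual system: the trained neural collaborative filtering model is replaced by a toy dot-product scorer over hypothetical user/item feature vectors, the item catalogue and feature names are invented for the example, and the SHAP + ChatGPT step is stood in for by a crude per-feature contribution printout.

    ```python
    import heapq

    # Hypothetical learned representations; in the thesis these would come
    # from a trained neural collaborative filtering (NCF) model.
    USER_FEATURES = {"u1": {"cloud_storage": 0.9, "accounting": 0.2, "ecommerce": 0.7}}
    ITEM_FEATURES = {
        "backup-saas":   {"cloud_storage": 1.0, "accounting": 0.0, "ecommerce": 0.1},
        "erp-lite":      {"cloud_storage": 0.2, "accounting": 0.9, "ecommerce": 0.3},
        "web-store-kit": {"cloud_storage": 0.1, "accounting": 0.1, "ecommerce": 1.0},
        "pos-cloud":     {"cloud_storage": 0.3, "accounting": 0.6, "ecommerce": 0.8},
        "crm-basic":     {"cloud_storage": 0.4, "accounting": 0.3, "ecommerce": 0.5},
        "mail-hosting":  {"cloud_storage": 0.6, "accounting": 0.1, "ecommerce": 0.2},
    }

    def score(user, item):
        """Stand-in for the NCF forward pass: per-feature products, summed."""
        u, v = USER_FEATURES[user], ITEM_FEATURES[item]
        contributions = {f: u[f] * v[f] for f in u}
        return sum(contributions.values()), contributions

    def top_k(user, k=5):
        """Rank all candidate items and keep the k highest-scoring ones."""
        scored = [(score(user, item)[0], item) for item in ITEM_FEATURES]
        return [item for _, item in heapq.nlargest(k, scored)]

    def explain(user, item):
        """Crude single-feature attribution; the thesis instead computes SHAP
        values and has an LLM render them into natural-language explanations."""
        _, contributions = score(user, item)
        top_feature = max(contributions, key=contributions.get)
        return f"'{item}' was recommended mainly because of '{top_feature}'."

    recs = top_k("u1")
    print(recs)                      # five item ids, best first
    print(explain("u1", recs[0]))
    ```

    The design mirrors the described serving flow: one scoring call per candidate, a top-5 cut, and an explanation generated only for items actually shown to the user.
    
    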
    Description: Master's thesis
    National Chengchi University
    In-service Master's Program, Department of Computer Science
    110971009
    Source: http://thesis.lib.nccu.edu.tw/record/#G0110971009
    Type: thesis
    Appears in collections: [In-service Master's Program, Department of Computer Science] Theses

    Files in this item:

    File: 100901.pdf (1,692 KB, Adobe PDF)


    All items in NCCUR are protected by copyright, with all rights reserved.



    Copyright Announcement
    1. The digital content of this website is part of the National Chengchi University Institutional Repository. It provides free access for academic research and public education on a non-commercial basis. Please use the content in a proper and reasonable manner and respect the rights of copyright owners. For commercial use, please obtain authorization from the copyright owner in advance.

    2. NCCU Institutional Repository is maintained to protect the interests of copyright owners. If you believe that any material on this website infringes copyright, please contact our staff (nccur@nccu.edu.tw). We will remove the work from the repository and investigate your claim.
    DSpace Software Copyright © 2002-2004 MIT & Hewlett-Packard / Enhanced by NTU Library IR team