Please use this permanent URL to cite or link to this item: https://nccur.lib.nccu.edu.tw/handle/140.119/159417


    Title: Enhancing Knowledge Graph Prompting for Large Language Models Based on Densely Connected Subgraph Retrieval
    (結合高密度連結子圖檢索以提升大型語言模型的知識圖譜提示能力)
    Author: Chen, Pin-Yu
    Contributors: Shan, Man-Kwan; Chen, Pin-Yu
    Keywords: Knowledge Graph; Subgraph Retrieval; Medical Question Answering; Prompt Engineering; Large Language Model
    Date: 2025
    Upload time: 2025-09-01 16:58:06 (UTC+8)
    Abstract: Large language models (LLMs) have demonstrated remarkable capabilities in natural language understanding and generation, yet their effectiveness in professional domains such as healthcare and law remains limited by challenges in complex reasoning and explainability. To address these limitations, we propose a knowledge-graph-assisted reasoning framework that uses Densely Connected Subgraph Retrieval to extract structurally cohesive and semantically relevant subgraphs from an external knowledge graph. Within these subgraphs, reasoning paths are identified through a combination of Shortest Path Search and Importance-Based Path Search, then rendered in natural language to guide LLMs via prompt engineering. We evaluate the approach on a medical question-answering task built from doctor-patient consultation dialogues (patient chief complaints paired with physician replies), assessing reasoning quality and explainability with GPT-4o Ranking and BERTScore. Experimental results show that our method improves both the reasoning quality and the interpretability of LLM outputs, underscoring its potential for high-stakes professional applications.
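    The retrieve-then-prompt pipeline summarized in the abstract can be sketched roughly as follows. This is an illustrative reconstruction, not the thesis's implementation: the adjacency-dict graph representation, the toy medical triples, the min-degree peeling rule (in the spirit of the greedy community-search algorithm of Sozio and Gionis [6]), and the prompt template are all assumptions made for demonstration.

```python
# Hypothetical sketch of: (1) dense subgraph retrieval around query entities,
# (2) shortest-path selection inside the subgraph, (3) verbalizing the path
# as evidence for an LLM prompt. All graph contents are invented.
from collections import deque

def peel_dense_subgraph(adj, query_nodes, min_degree=2):
    """Greedily remove non-query nodes whose degree drops below min_degree,
    keeping a densely connected subgraph around the query entities."""
    H = {u: set(vs) for u, vs in adj.items()}
    changed = True
    while changed:
        changed = False
        for u in list(H):
            if u not in query_nodes and len(H[u]) < min_degree:
                for v in H.pop(u):   # detach u from its remaining neighbors
                    H[v].discard(u)
                changed = True
    return H

def shortest_path(H, src, dst):
    """Plain BFS shortest path inside the retained subgraph."""
    prev = {src: None}
    queue = deque([src])
    while queue:
        u = queue.popleft()
        if u == dst:                 # reconstruct the path back to the source
            path = []
            while u is not None:
                path.append(u)
                u = prev[u]
            return path[::-1]
        for v in H.get(u, ()):
            if v not in prev:
                prev[v] = u
                queue.append(v)
    return None

def verbalize(path, rels):
    """Render a node path as a natural-language evidence sentence."""
    hops = [f"{u} {rels[(u, v)]} {v}" for u, v in zip(path, path[1:])]
    return "; ".join(hops) + "."

# Toy knowledge-graph fragment (hypothetical triples, undirected for brevity).
triples = [
    ("fever", "is a symptom of", "influenza"),
    ("cough", "is a symptom of", "influenza"),
    ("influenza", "is treated with", "oseltamivir"),
    ("fever", "can lead to", "dehydration"),
]
adj, rels = {}, {}
for u, r, v in triples:
    adj.setdefault(u, set()).add(v)
    adj.setdefault(v, set()).add(u)
    rels[(u, v)] = rels[(v, u)] = r

H = peel_dense_subgraph(adj, query_nodes={"fever", "oseltamivir"})
path = shortest_path(H, "fever", "oseltamivir")
prompt = ("Answer the patient's question using this evidence: "
          + verbalize(path, rels))
# path is ["fever", "influenza", "oseltamivir"]; low-degree nodes such as
# "cough" and "dehydration" are peeled away before path search.
```

    In the full approach, multiple candidate paths would additionally be ranked by node importance (e.g., PageRank-style scores, cf. [21]) before verbalization; the sketch keeps only the BFS shortest path for brevity.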
    參考文獻: [1] Z. Ji, N. Lee, R. Frieske, T. Yu, D. Su, Y. Xu, E. Ishii, Y. Bang, A. Madotto and P. Fung, “Survey of Hallucination in Natural Language Generation,” ACM Computing Surveys, vol. 55, no. 12, pp. 1-38, 2023.
    [2] S. Pan, L. Luo, Y. Wang, C. Chen, J. Wang and X. Wu, “Unifying Large Language Models and Knowledge Graphs: A Roadmap,” IEEE Transactions on Knowledge and Data Engineering, vol. 36, no. 7, pp. 3580-3599, July 2024.
    [3] L. Luo, Y. Li, G. Haffari, and S. Pan, “Reasoning on Graphs: Faithful and Interpretable Large Language Model Reasoning,” International Conference on Learning Representations, Vienna, Austria, 2024.
    [4] Y. Wen, Z. Wang and J. Sun, “MindMap: Knowledge Graph Prompting Sparks Graph of Thoughts in Large Language Models,” 62nd Annual Meeting of the Association for Computational Linguistics, Bangkok, Thailand, 2024.
    [5] D. Edge, H. Trinh, N. Cheng, J. Bradley, A. Chao, A. Mody and S. Truitt, “From Local to Global: A Graph RAG Approach to Query-Focused Summarization,” arXiv preprint arXiv:2404.16130, 2024.
    [6] M. Sozio and A. Gionis, “The Community-Search Problem and How to Plan a Successful Cocktail Party,” 16th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (KDD’10), pp. 939–948, Washington, DC, USA, 2010.
    [7] J. Wei, X. Wang, D. Schuurmans, M. Bosma, B. Ichter, F. Xia, E. Chi, Q. Le and D. Zhou, “Chain-of-Thought Prompting Elicits Reasoning in Large Language Models,” 36th Conference on Neural Information Processing Systems (NeurIPS 2022), New Orleans, LA, USA, 2022.
    [8] T. Kojima, S. Gu, M. Reid, Y. Matsuo and Y. Iwasawa, “Large Language Models are Zero-Shot Reasoners,” 36th Conference on Neural Information Processing Systems (NeurIPS 2022), New Orleans, LA, USA, 2022.
    [9] X. Wang, J. Wei, D. Schuurmans, Q. Le, E. Chi, S. Narang, A. Chowdhery and D. Zhou, “Self-Consistency Improves Chain of Thought Reasoning in Language Models,” arXiv preprint arXiv:2203.11171, 2023.
    [10] X. Xu, C. Tao, T. Shen, C. Xu, H. Xu, G. Long and J. Lou, “Re-Reading Improves Reasoning in Large Language Models,” 2024 Conference on Empirical Methods in Natural Language Processing, pp. 15549-15575, Miami, FL, USA, 2024.
    [11] S. Yao, D. Yu, J. Zhao, I. Shafran, T. Griffiths, Y. Cao and K. Narasimhan, “Tree of Thoughts: Deliberate Problem Solving with Large Language Models,” 37th Conference on Neural Information Processing Systems (NeurIPS 2023), New Orleans, LA, USA, 2023.
    [12] M. Besta, N. Blach, A. Kubicek, R. Gerstenberger, M. Podstawski, L. Gianinazzi, J. Gajda, T. Lehmann, H. Niewiadomski, P. Nyczyk and T. Hoefler, “Graph of Thoughts: Solving Elaborate Problems with Large Language Models,” 38th AAAI Conference on Artificial Intelligence, vol. 38, no. 16, Vancouver, Canada, 2024.
    [13] J. Sun, C. Xu, L. Tang, S. Wang, C. Lin, Y. Gong, L. Ni, H. Shum and J. Guo, “Think-On-Graph: Deep and Responsible Reasoning of Large Language Model on Knowledge Graph,” 12th International Conference on Learning Representations, Vienna, Austria, 2024.
    [14] B. Jiang, Y. Wang, Y. Luo, D. He, P. Cheng and L. Gao, “Reasoning on Efficient Knowledge Paths: Knowledge Graph Guides Large Language Model for Domain Question Answering,” 2024 IEEE International Conference on Knowledge Graph (ICKG), Abu Dhabi, United Arab Emirates, 2024.
    [15] M. Jia, J. Duan, Y. Song and J. Wang, “medIKAL: Integrating Knowledge Graphs as Assistants of LLMs for Enhanced Clinical Diagnosis on EMRs,” arXiv preprint arXiv:2406.14326, 2024.
    [16] L. Wei, G. Xiao and M. Balazinska, “RACOON: An LLM-based Framework for Retrieval-Augmented Column Type Annotation with a Knowledge Graph,” arXiv preprint arXiv:2409.14556, 2024.
    [17] M. Dehghan, M. Alomrani, S. Bagga, D. Alfonso-Hermelo, K. Bibi, A. Ghaddar, Y. Zhang, X. Li, J. Hao, Q. Liu, J. Lin, B. Chen, P. Parthasarathi, M. Biparva and M. Rezagholizadeh, “EWEK-QA: Enhanced Web and Efficient Knowledge Graph Retrieval for Citation-based Question Answering Systems,” arXiv preprint arXiv:2406.10393, 2024.
    [18] W. Xie, X. Liang, Y. Liu, K. Ni, H. Cheng and Z. Hu, “WeKnow-RAG: An Adaptive Approach for Retrieval-Augmented Generation Integrating Web Search and Knowledge Graphs,” arXiv preprint arXiv:2408.07611, 2024.
    [19] V. Sanh, L. Debut, J. Chaumond and T. Wolf, “DistilBERT, a Distilled Version of BERT: Smaller, Faster, Cheaper and Lighter,” arXiv preprint arXiv:1910.01108, 2019.
    [20] Y. Li, Z. Li, K. Zhang, R. Dan, S. Jiang and Y. Zhang, “ChatDoctor: A Medical Chat Model Fine-Tuned on a Large Language Model Meta-AI (LLaMA) Using Medical Domain Knowledge,” Cureus, vol. 15, no. 6, e40895, 2023.
    [21] S. Brin and L. Page, “The Anatomy of a Large-Scale Hypertextual Web Search Engine,” Computer Networks and ISDN Systems, vol. 30, no. 1-7, pp. 107–117, 1998.
    Description: Master's thesis
    National Chengchi University
    Department of Computer Science
    112753204
    Source: http://thesis.lib.nccu.edu.tw/record/#G0112753204
    Data type: thesis
    Appears in Collections: [Department of Computer Science] Theses

    Files in this item:

    File: 320401.pdf (4844 KB, Adobe PDF)


    All items in the NCCU Institutional Repository are protected by copyright.



    Copyright Announcement
    1. The digital content of this website is part of the National Chengchi University Institutional Repository. It provides free access for academic research and public education on a non-commercial basis. Please use it in a proper and reasonable manner and respect the rights of copyright owners. For commercial use, please obtain authorization from the copyright owner in advance.

    2. The NCCU Institutional Repository strives to protect the interests of copyright owners. If you believe that any material on the website infringes copyright, please contact our staff (nccur@nccu.edu.tw). We will remove the work from the repository and investigate your claim.