政大機構典藏-National Chengchi University Institutional Repository(NCCUR):Item 140.119/153374
    Please use this identifier to cite or link to this item: https://nccur.lib.nccu.edu.tw/handle/140.119/153374


    Title: 基於圖預訓練與提示詞學習於推薦系統
    Graph-based Pre-training and Prompting for Recommendation Systems
    Authors: 張立暘
    Chang, Li-Yang
    Contributors: 蔡銘峰
    Tsai, Ming-Feng
    張立暘
    Chang, Li-Yang
    Keywords: Recommendation Systems
    Pre-trained Models
    Multimodal Recommendation Systems
    Prompt Learning
    Graph Neural Networks
    Cold-start Recommendation
    Date: 2024
    Issue Date: 2024-09-04 14:58:56 (UTC+8)
    Abstract: This study investigates the application of multimodal pre-trained models to recommendation systems, in particular the use of graph pre-training and prompt learning to improve recommendation performance. We propose a novel approach that combines graph neural networks (GNNs), which capture graph-structural information, with natural language processing (NLP) techniques, which model long-range dependencies in text. The resulting pre-trained model effectively captures deep semantic and structural information from multimodal data such as text and graph structure, yielding a high-quality pre-trained model for downstream recommendation tasks.
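The graph side of such a pre-training pipeline can be illustrated with a minimal sketch. This is not the thesis's implementation; it only shows the symmetric-normalized neighbourhood propagation used by GCN/LightGCN-style encoders, applied to a hypothetical toy user-item interaction matrix:

```python
import numpy as np

# Toy interaction matrix (hypothetical data): 2 users x 3 items,
# R[u, i] = 1 if user u interacted with item i.
R = np.array([[1, 1, 0],
              [0, 1, 1]], dtype=float)
n_users, n_items = R.shape

# Adjacency matrix of the bipartite user-item graph.
A = np.zeros((n_users + n_items, n_users + n_items))
A[:n_users, n_users:] = R
A[n_users:, :n_users] = R.T

# Symmetric normalization D^{-1/2} A D^{-1/2}, the propagation rule
# used by GCN/LightGCN-style graph encoders.
deg = A.sum(axis=1)
d_inv_sqrt = np.where(deg > 0, deg ** -0.5, 0.0)
A_norm = d_inv_sqrt[:, None] * A * d_inv_sqrt[None, :]

# One propagation layer: each node's embedding becomes a
# degree-weighted average of its neighbours' embeddings.
rng = np.random.default_rng(0)
E0 = rng.normal(size=(n_users + n_items, 4))  # initial embeddings
E1 = A_norm @ E0                              # embeddings after one hop
```

Stacking several such layers lets user embeddings absorb information from multi-hop item neighbourhoods, which is what the graph pre-training stage exploits.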

    A central focus of our work is prompt learning, which comes in two forms: discrete (hard) prompts and continuous (soft) prompts. Discrete prompts guide the model toward specific outputs through fixed, hand-designed words or phrases, whereas continuous prompts fine-tune the model by combining learned embedding vectors with the pre-trained model's input layer. We find that continuous prompts offer clear advantages in flexibility and adaptability, especially in complex and varied recommendation scenarios. As a parameter-efficient fine-tuning (PEFT) technique, they reduce the resources required for fine-tuning, while transfer learning lets us exploit the general knowledge encoded in the pre-trained model and apply it to recommendation tasks. Together, these techniques enable our system to perform well in both cold-start and general recommendation scenarios.
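Continuous (soft) prompting can be sketched in a few lines. This is a simplified illustration, not the thesis's code: the sizes and names (`d_model`, `n_prompt`, `soft_prompt`) are hypothetical, and the model is reduced to its input layer. The point is only that the trainable prompt vectors are prepended to a frozen input sequence, so fine-tuning touches a small set of parameters:

```python
import numpy as np

# Hypothetical sizes: 8-dim model, 4 learnable prompt vectors,
# a 6-token input sequence.
d_model, n_prompt, seq_len = 8, 4, 6

rng = np.random.default_rng(42)
token_emb = rng.normal(size=(seq_len, d_model))    # frozen token embeddings
soft_prompt = rng.normal(size=(n_prompt, d_model))  # trainable parameters

# Continuous prompting: prepend the learned vectors to the input
# sequence before it enters the frozen pre-trained model. During
# fine-tuning, gradients update only `soft_prompt`.
model_input = np.concatenate([soft_prompt, token_emb], axis=0)
```

A discrete (hard) prompt would instead prepend the embeddings of fixed, hand-chosen tokens; the soft variant replaces them with free vectors learned end-to-end, which is what gives it the extra flexibility.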

    Experimental results show that the recommendation system based on graph pre-training and prompt learning achieves good scores on multiple evaluation metrics, whereas traditional recommendation models cannot be applied in cold-start settings at all. In the cold-start scenario in particular, our method substantially improves metrics such as Hit Rate, Mean Average Precision (MAP), Recall, and Normalized Discounted Cumulative Gain (NDCG), demonstrating strong adaptability to new users and new items. Our method also performs well in the general recommendation scenario, especially when the graph encoder is used, highlighting the potential of graph-structured data for capturing user-item relationships.
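The ranking metrics mentioned above have standard binary-relevance definitions; a minimal sketch (with hypothetical item IDs, not the thesis's evaluation code) is:

```python
import numpy as np

def hit_rate_at_k(ranked, relevant, k):
    """1.0 if any relevant item appears in the top-k list, else 0.0."""
    return float(any(item in relevant for item in ranked[:k]))

def recall_at_k(ranked, relevant, k):
    """Fraction of relevant items retrieved in the top-k."""
    hits = sum(1 for item in ranked[:k] if item in relevant)
    return hits / len(relevant)

def ndcg_at_k(ranked, relevant, k):
    """Binary-relevance NDCG@k: position-discounted gain over the ideal."""
    dcg = sum(1.0 / np.log2(pos + 2)
              for pos, item in enumerate(ranked[:k]) if item in relevant)
    idcg = sum(1.0 / np.log2(pos + 2)
               for pos in range(min(len(relevant), k)))
    return dcg / idcg

ranked = ["b", "a", "d", "c"]   # model's top-ranked items (hypothetical)
relevant = {"a", "c"}           # ground-truth items for the user
print(hit_rate_at_k(ranked, relevant, 3))  # 1.0
print(recall_at_k(ranked, relevant, 3))    # 0.5
```

These are computed per user and averaged; NDCG additionally rewards placing relevant items near the top of the list rather than merely retrieving them.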

    In summary, by combining graph pre-training with prompt learning, this study realizes a novel multimodal recommendation system and demonstrates the potential of these techniques to improve recommendation quality and adaptability. Future work will focus on further optimizing these techniques and exploring additional application scenarios, in order to provide stronger technical support and theoretical guidance for the development of recommendation systems.
    Description: Master's thesis
    National Chengchi University
    Department of Computer Science
    110753140
    Source URI: http://thesis.lib.nccu.edu.tw/record/#G0110753140
    Data Type: thesis
    Appears in Collections: [Department of Computer Science] Theses

    Files in This Item:

    File: 314001.pdf (2454 KB, Adobe PDF) — View/Open


    All items in 政大典藏 are protected by copyright, with all rights reserved.


    Copyright Announcement
    1. The digital content of this website is part of the National Chengchi University Institutional Repository. It provides free access to academic research and public education for non-commercial use. Please utilize it in a proper and reasonable manner and respect the rights of copyright owners. For commercial use, please obtain authorization from the copyright owner in advance.

    2. NCCU Institutional Repository is made to protect the interests of copyright owners. If you believe that any material on the website infringes copyright, please contact our staff (nccur@nccu.edu.tw). We will remove the work from the repository and investigate your claim.