National Chengchi University Institutional Repository (NCCUR): Item 140.119/137294
    Please use this identifier to cite or link to this item: https://nccur.lib.nccu.edu.tw/handle/140.119/137294


    Title: User Embedding Transformation on Cross-domain Preference Ranking for Recommender Systems
    Authors: 陳先灝
    Chen, Hsien-Hao
    Contributors: 蔡銘峰
    Tsai, Ming-Feng
    陳先灝
    Chen, Hsien-Hao
    Keywords: Recommendation System
    Recommender System
    Machine Learning
    Cross-Domain Recommendation
    Cold-start
    Date: 2021
    Issue Date: 2021-10-01 10:05:46 (UTC+8)
    Abstract: With the development of online service platforms such as e-commerce and video-streaming services, major service providers' demand for technologies that accurately capture user preferences has kept growing. Recommender systems are the core technology behind such capabilities, and how to devise solutions that meet specific needs amid ever-changing real-world problems has become a main direction of related research in recent years.

    In this work, we are particularly concerned with the cold-start problem in recommender systems. The cold-start problem arises from data scarcity in specific situations, for example when new users or new items enter the system. Because of its difficulty and its inevitability in practical applications, it has long been a challenging problem in recommender-system research.

    One effective way to alleviate this problem is to use knowledge from a related domain to compensate for the missing data in the target domain, an approach known as cross-domain recommendation. The main idea is to run recommendation algorithms over multiple domains, profile each user's personal preferences from them, and then use this information to supplement the data missing in the target domain, thereby mitigating the cold-start problem.

    In this thesis, we propose Cross-domain Preference Ranking (CPR), a method based on user embedding transformation. CPR lets a user draw information from items in both the source domain and the target domain, learns representations from this combined signal, and transforms them into a vector that expresses the user's own preferences. Through this transformation, CPR not only exploits source-domain information effectively but also directly updates the user and item representations in the target domain, thereby improving the recommendation results in the target domain.
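    The abstract does not spell out the exact architecture, so the following is only a minimal sketch of the general idea under stated assumptions: a user profile aggregated from source-domain and target-domain item embeddings is linearly transformed into a preference vector and trained with a BPR-style pairwise ranking objective. The names, dimensions, mean aggregation, and single transformation matrix W below are illustrative assumptions, not the method actually proposed in the thesis.

```python
# Hypothetical sketch: user embedding transformation for cross-domain
# preference ranking, trained with a BPR-style pairwise ranking loss.
import numpy as np

rng = np.random.default_rng(0)
n_src_items, n_tgt_items, dim = 50, 40, 16

# Randomly initialised item embeddings for the source and target domains.
src_item_emb = rng.normal(scale=0.1, size=(n_src_items, dim))
tgt_item_emb = rng.normal(scale=0.1, size=(n_tgt_items, dim))

# Linear transformation from the concatenated (source, target) profile
# to the user's preference vector.
W = rng.normal(scale=0.1, size=(2 * dim, dim))


def user_embedding(src_hist, tgt_hist):
    """Aggregate the items a user interacted with in each domain, then transform."""
    src_part = src_item_emb[src_hist].mean(axis=0) if src_hist else np.zeros(dim)
    tgt_part = tgt_item_emb[tgt_hist].mean(axis=0) if tgt_hist else np.zeros(dim)
    return np.concatenate([src_part, tgt_part]) @ W


def bpr_step(src_hist, tgt_hist, pos, neg, lr=0.05):
    """One BPR-style update: the observed target item should score above a sampled one.

    The user profile is treated as fixed within the step (a common simplification),
    and no regularisation is applied.
    """
    global W
    src_part = src_item_emb[src_hist].mean(axis=0) if src_hist else np.zeros(dim)
    tgt_part = tgt_item_emb[tgt_hist].mean(axis=0) if tgt_hist else np.zeros(dim)
    profile = np.concatenate([src_part, tgt_part])
    u = profile @ W
    diff = tgt_item_emb[pos] - tgt_item_emb[neg]
    g = 1.0 / (1.0 + np.exp(u @ diff))     # gradient factor of -log(sigmoid(u . diff))
    tgt_item_emb[pos] += lr * g * u        # pull the positive item towards the user
    tgt_item_emb[neg] -= lr * g * u        # push the sampled negative item away
    W += lr * g * np.outer(profile, diff)  # adjust the transformation itself


# A "cold-start" user: rich source-domain history, a single target-domain interaction.
src_hist, tgt_hist = [1, 5, 9, 20], [3]
for _ in range(200):
    neg = int(rng.integers(n_tgt_items))
    if neg != 3:
        bpr_step(src_hist, tgt_hist, pos=3, neg=neg)

scores = user_embedding(src_hist, tgt_hist) @ tgt_item_emb.T
print("top-5 recommended target items:", np.argsort(-scores)[:5])
```

    Even with an empty or nearly empty target-domain history, the source-domain part of the profile still yields a usable preference vector, which is the intuition behind applying such a transformation to cold-start users.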
    To demonstrate the capability of CPR, we evaluate it on six industrial-scale datasets under differentiated settings (all target-domain users, cold-start users, and shared users), with state-of-the-art cross-domain and single-domain recommendation algorithms as baselines. The results show that CPR not only improves the overall recommendation performance in the target domain but also achieves strong results for the targeted cold-start users.
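    For concreteness, here is one plausible way to partition users into the three evaluation cohorts named above. The cohort definitions, in particular treating users with source-domain history but no target-domain history as cold-start, are assumptions for illustration and may differ from the exact protocol used in the thesis.

```python
# Hypothetical cohort construction for the three evaluation settings.
# src_interactions / tgt_interactions map a user id to the items that user
# interacted with in the source / target domain respectively.

def build_cohorts(src_interactions, tgt_interactions):
    target_all = set(tgt_interactions)  # every user seen in the target domain
    shared = {u for u in src_interactions if u in tgt_interactions}
    cold_start = {u for u in src_interactions if u not in tgt_interactions}
    return {"all-target": target_all, "shared": shared, "cold-start": cold_start}


# Toy usage with made-up user and item ids.
src = {"u1": [1, 2], "u2": [3], "u3": [4, 5]}
tgt = {"u1": [10], "u3": [11, 12], "u4": [13]}
print({k: sorted(v) for k, v in build_cohorts(src, tgt).items()})
# {'all-target': ['u1', 'u3', 'u4'], 'shared': ['u1', 'u3'], 'cold-start': ['u2']}
```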
    Description: Master's thesis
    National Chengchi University
    Department of Computer Science
    108753107
    Source URI: http://thesis.lib.nccu.edu.tw/record/#G0108753107
    Data Type: thesis
    DOI: 10.6814/NCCU202101563
    Appears in Collections: [Department of Computer Science] Theses

    Files in This Item:

    File: 310701.pdf (1786 KB, Adobe PDF)

