References

[1] J. Atwood and D. Towsley. Diffusion-convolutional neural networks, 2016.
[2] O. Barkan and N. Koenigstein. Item2vec: Neural item embedding for collaborative
filtering, 2017.
[3] T. B. Brown, B. Mann, N. Ryder, M. Subbiah, J. Kaplan, P. Dhariwal, A. Neelakantan, P. Shyam, G. Sastry, A. Askell, S. Agarwal, A. Herbert-Voss, G. Krueger, T. Henighan, R. Child, A. Ramesh, D. M. Ziegler, J. Wu, C. Winter, C. Hesse, M. Chen, E. Sigler, M. Litwin, S. Gray, B. Chess, J. Clark, C. Berner, S. McCandlish, A. Radford, I. Sutskever, and D. Amodei. Language models are few-shot learners, 2020.
[4] C.-M. Chen, M.-F. Tsai, Y.-C. Lin, and Y.-H. Yang. Query-based music recommendations via preference embedding. In Proceedings of the 10th ACM Conference on Recommender Systems, RecSys ’16, pages 79–82, New York, NY, USA, 2016. Association for Computing Machinery.
[5] C.-M. Chen, T.-H. Wang, C.-J. Wang, and M.-F. Tsai. Smore: Modularize graph embedding for recommendation. In Proceedings of the 13th ACM Conference on Recommender Systems, RecSys ’19, pages 582–583, New York, NY, USA, 2019. Association for Computing Machinery.
[7] G. de Souza Pereira Moreira, S. Rabhi, J. M. Lee, R. Ak, and E. Oldridge. Transformers4rec: Bridging the gap between nlp and sequential / session-based recommendation. In Proceedings of the 15th ACM Conference on Recommender Systems, RecSys ’21, pages 143–153, New York, NY, USA, 2021. Association for Computing Machinery.
[8] J. Devlin, M.-W. Chang, K. Lee, and K. Toutanova. Bert: Pre-training of deep bidirectional transformers for language understanding, 2019.
[9] M. Douze, A. Guzhva, C. Deng, J. Johnson, G. Szilvasy, P.-E. Mazaré, M. Lomeli, L. Hosseini, and H. Jégou. The Faiss library, 2024.
[10] S. Geng, S. Liu, Z. Fu, Y. Ge, and Y. Zhang. Recommendation as language processing (rlp): A unified pretrain, personalized prompt & predict paradigm (p5), 2023.
[11] A. Grover and J. Leskovec. node2vec: Scalable feature learning for networks, 2016.
[12] W. L. Hamilton, R. Ying, and J. Leskovec. Inductive representation learning on large
graphs, 2018.
[13] X. He, K. Deng, X. Wang, Y. Li, Y. Zhang, and M. Wang. Lightgcn: Simplifying
and powering graph convolution network for recommendation, 2020.
[14] X. He, L. Liao, H. Zhang, L. Nie, X. Hu, and T.-S. Chua. Neural collaborative
filtering, 2017.
[15] T. N. Kipf and M. Welling. Semi-supervised classification with graph convolutional
networks, 2017.
[16] O. Kuchaiev and B. Ginsburg. Training deep autoencoders for collaborative filtering,
2017.
[17] J. Li, M. Wang, J. Li, J. Fu, X. Shen, J. Shang, and J. McAuley. Text is all you need: Learning language representations for sequential recommendation, 2023.
[18] W. Li, Y. Zhang, Y. Sun, W. Wang, W. Zhang, and X. Lin. Approximate nearest neighbor search on high dimensional data — experiments, analyses, and improvement (v1.0), 2016.
[19] T. Mikolov, K. Chen, G. Corrado, and J. Dean. Efficient estimation of word representations in vector space, 2013.
[20] J. Ni, J. Li, and J. McAuley. Justifying recommendations using distantly-labeled reviews and fine-grained aspects. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 188–197, Hong Kong, China, Nov. 2019. Association for Computational Linguistics.
[21] B. Perozzi, R. Al-Rfou, and S. Skiena. Deepwalk: Online learning of social representations. In Proceedings of the 20th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, KDD ’14. ACM, Aug. 2014.
[22] A. Radford, J. W. Kim, C. Hallacy, A. Ramesh, G. Goh, S. Agarwal, G. Sastry, A. Askell, P. Mishkin, J. Clark, G. Krueger, and I. Sutskever. Learning transferable visual models from natural language supervision, 2021.
[23] A. Radford, K. Narasimhan, T. Salimans, and I. Sutskever. Improving language understanding by generative pre-training, 2018.
[24] A. Radford, J. Wu, R. Child, D. Luan, D. Amodei, and I. Sutskever. Language models are unsupervised multitask learners, 2019.
[25] C. Raffel, N. Shazeer, A. Roberts, K. Lee, S. Narang, M. Matena, Y. Zhou, W. Li, and P. J. Liu. Exploring the limits of transfer learning with a unified text-to-text transformer, 2023.
[26] S. Rendle. Factorization machines. In 2010 IEEE International Conference on Data
Mining, pages 995–1000, 2010.
[27] S. Rendle, C. Freudenthaler, Z. Gantner, and L. Schmidt-Thieme. Bpr: Bayesian personalized ranking from implicit feedback, 2012.
[28] S. Ruder. An overview of gradient descent optimization algorithms, 2017.
[29] B. Sarwar, G. Karypis, J. Konstan, and J. Riedl. Item-based collaborative filtering recommendation algorithms. In Proceedings of the 10th International Conference on World Wide Web, WWW ’01, pages 285–295, New York, NY, USA, 2001. Association for Computing Machinery.
[30] F. Sun, J. Liu, J. Wu, C. Pei, X. Lin, W. Ou, and P. Jiang. Bert4rec: Sequential recommendation with bidirectional encoder representations from transformer, 2019.
[31] J. Tang, M. Qu, M. Wang, M. Zhang, J. Yan, and Q. Mei. Line: Large-scale information network embedding. In Proceedings of the 24th International Conference on World Wide Web, WWW ’15. International World Wide Web Conferences Steering Committee, May 2015.
[32] A. van den Oord, Y. Li, and O. Vinyals. Representation learning with contrastive predictive coding, 2019.
[33] A. Vaswani, N. Shazeer, N. Parmar, J. Uszkoreit, L. Jones, A. N. Gomez, L. Kaiser, and I. Polosukhin. Attention is all you need, 2023.
[34] Z. Wen and Y. Fang. Augmenting low-resource text classification with graph-grounded pre-training and prompting, 2023.
[35] J. Weston, H. Yee, and R. J. Weiss. Learning to rank recommendations with the k-order statistic loss. In Proceedings of the 7th ACM Conference on Recommender Systems, RecSys ’13, pages 245–248, New York, NY, USA, 2013. Association for Computing Machinery.
[36] J.-H. Yang, C.-M. Chen, C.-J. Wang, and M.-F. Tsai. Hop-rec: High-order proximity for implicit recommendation. In Proceedings of the 12th ACM Conference on Recommender Systems, RecSys ’18, pages 140–144, New York, NY, USA, 2018. Association for Computing Machinery.
[37] K. Zhou, J. Yang, C. C. Loy, and Z. Liu. Learning to prompt for vision-language models. International Journal of Computer Vision, 130(9):2337–2348, July 2022.