Reference: | [1] J. Bergstra, N. Casagrande, D. Erhan, D. Eck, and B. K´egl. Aggregate features and adaboost for music classification. Machine Learning, 65(2-3):473–484, 2006. [2] M. M. Bradley and P. J. Lang. Affective norms for english words (anew): Instruction manual and affective ratings. Technical report, Technical Report C-1, The Center for Research in Psychophysiology, University of Florida, 1999. [3] M. Brysbaert and B. New. Moving beyond kuˇcera and francis: A critical evaluation of current word frequency norms and the introduction of a new and improved word frequency measure for american english. Behavior Research Methods, 41(4):977– 990, 2009. [4] C.-C. Chang and C.-J. Lin. LIBSVM: A library for support vector machines. ACM Transactions on Intelligent Systems and Technology, 2:27:1–27:27, 2011. Software available at http://www.csie.ntu.edu.tw/˜cjlin/libsvm. [5] C. Cortes and V. Vapnik. Support-vector networks. Machine Learning, 20(3):273– 297, 1995. [6] A. Esuli and F. Sebastiani. Sentiwordnet: A publicly available lexical resource for opinion mining. In Proceedings of the 5th Conference on Language Resources and Evaluation, pages 417–422, 2006. [7] Y. Feng, Y. Zhuang, and Y. Pan. Popular music retrieval by detecting mood. In Proceedings of the 26th Annual International ACM SIGIR Conference on Research and Development in Informaion Retrieval, pages 375–376. ACM, 2003. [8] S. Hallam, I. Cross, and M. Thaut. Oxford handbook of music psychology. Oxford University Press, 2008. [9] T. Hofmann. Probabilistic latent semantic indexing. In Proceedings of the 22nd Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, pages 50–57. ACM, 1999. [10] X. Hu and J. S. Downie. Improving mood classification in music digital libraries by combining lyrics and audio. In Proceedings of the 10th Annual Joint Conference on Digital Libraries, pages 159–168. ACM, 2010. [11] X. Hu and J. S. Downie. When lyrics outperform audio for music mood classification: a feature analysis. In Proceedings of International Society of Music Information Retrieval Conference, pages 1–6, 2010. [12] X. Hu, J. S. Downie, and A. F. Ehmann. Lyric text mining in music mood classification. American Music, 183(5,049):2–209, 2009. [13] Y. Hu, X. Chen, and D. Yang. Lyric-based song emotion detection with affective lexicon and fuzzy clustering method. In Proceedings of International Society of Music Information Retrieval Conference, pages 123–128, 2009. [14] R. Kempter, V. Sintsova, C. Musat, and P. Pu. Emotionwatch: Visualizing finegrained emotions in event-related tweets. In Proceedings of the 8th International AAAI Conference on Weblogs and Social Media, 2014. [15] L.-W. Ku, Y.-T. Liang, and H.-H. Chen. Opinion extraction, summarization and tracking in news and blog corpora. In Proceedings of AAAI spring symposium: Computational approaches to analyzing weblogs, pages 100–107, 2006. [16] C. Laurier, J. Grivolla, and P. Herrera. Multimodal music mood classification using audio and lyrics. In Proceedings of the 7th International Conference on Machine Learning and Applications, pages 688–693. IEEE, 2008. [17] C. Laurier and P. Herrera. Audio music mood classification using support vector machine. [18] J. H. Lee and J. S. Downie. Survey of music information needs, uses, and seeking behaviours: Preliminary findings. In Proceedings of the 5th International Conference on Music Information Retrieval, pages 441–446, 2004. [19] T. Li and M. Ogihara. Content-based music similarity search and emotion detection. In Proceedings of IEEE International Conference on Acoustics, Speech, and Signal Processing, volume 5, pages V–705. IEEE, 2004. [20] M. I. Mandel and D. P. Ellis. Song-level features and support vector machines for music classification. In Proceedings of International Conference on Music Information Retrieval, pages 594–599, 2005. [21] L. Martin and P. Pu. Prediction of helpful reviews using emotions extraction. In Proceedings of the 28th AAAI Conference on Artificial Intelligence, 2014. [22] R. Mayer, R. Neumayer, and A. Rauber. Rhyme and style features for musical genre classification by song lyrics. 2008. [23] M. F. Mckinney and J. Breebaart. Features for audio and music classification. In Proceedings of International Conference on Music Information Retrieval, 2003. [24] R. Plutchik. The nature of emotions. American Scientist, 89:344, 2001. [25] J. F. Y. W. Robert J Ellis, Zhe Xing. Quantifying lexical novelty in song lyrics. In Proceedings of the 16th International Society for Music Information Retrieval Conference, 2015. [26] J. A. Russell. Affective space is bipolar. Journal of Personality and Social Psychology, 37(3):345–356, 1979. [27] J. A. Russell. A circumplex model of affect. Journal of personality and social psychology, 39(6):1161–1178, 1980. [28] P. Saari and T. Eerola. Semantic computing of moods based on tags in social media of music. IEEE Transactions on Knowledge and Data Engineering, 26(10):2548– 2560, 2014. [29] K. R. Scherer. What are emotions? and how can they be measured? Social Science Information, 44(4):695–729, 2005. [30] V. Sintsova, C.-C. Musat, and P. Pu. Fine-grained emotion recognition in olympic tweets based on human computation. In Proceedings of the 4thWorkshop on Computational Approaches to Subjectivity, Sentiment and Social Media Analysis, number EPFL-CONF-197185, 2013. [31] P. J. Stone, D. C. Dunphy, and M. S. Smith. The general inquirer: A computer approach to content analysis. 1966. [32] G. Tzanetakis. Music analysis, retrieval and synthesis of audio signals marsyas. In Proceedings of the 17th ACM International Conference on Multimedia, pages 931– 932. ACM, 2009. [33] M. Van Zaanen and P. Kanters. Automatic mood classification using tf*idf based on lyrics. In Proceedings of the 11th International Society of Music Information Retrieval Conference, pages 75–80, 2010. [34] Y.-H. Yang, Y.-C. Lin, H.-T. Cheng, I.-B. Liao, Y.-C. Ho, and H. H. Chen. Toward multi-modal music emotion classification. In Proceedings of Pacific-Rim Conference on Multimedia, pages 70–79. Springer, 2008. [35] Y.-H. Yang and J.-Y. Liu. Quantitative study of music listening behavior in a social and affective context. IEEE Transactions on Multimedia, 15(6):1304–1315, 2013. |