National Chengchi University Institutional Repository (NCCUR): Item 140.119/94859


    Please use this persistent URL to cite or link to this item: https://nccur.lib.nccu.edu.tw/handle/140.119/94859


    Title: 基於個人電腦使用者操作情境之音樂推薦
    Context-based Music Recommendation for Desktop Users
    Author: 謝棋安
    Hsieh, Chi An
    Contributors: 沈錳坤
    Shan, Man Kwan
    謝棋安
    Hsieh, Chi An
    Keywords: 音樂
    推薦
    情境
    個人電腦
    MAML
    演算法
    music
    recommendation
    context
    desktop
    MAML
    algorithm
    Date: 2009
    Date uploaded: 2016-05-09 12:02:32 (UTC+8)
    Abstract: 隨著電腦音樂技術的蓬勃發展,合乎情境需求的音樂若能被自動推薦給使用者,將是知識工作者所樂見的。我們提出了定義使用者操作情境的情境塑模,並利用累計專注視窗的轉變,找出使用者的操作情境。同時,我們也提出了音樂推薦塑模,依據使用者的操作情境與聆聽的音樂,分析探勘情境與音樂特徵間的關聯特性,利用探勘出的關聯推薦適合情境的音樂給使用者。在此音樂推薦塑模中,我們採用Content-based Recommendation的作法。我們分析音樂的特徵值,並發展MAML(Multi-attribute Multi-label)的分類演算法以及Probability Measure二種方法來探勘情境屬性與音樂特徵間的關聯特性。根據探勘出的關聯特性,找出適合情境的音樂特徵,再從音樂資料庫中推薦符合音樂特徵的音樂給使用者。本論文的符合使用者操作情境的音樂推薦系統是利用Windows Hook API實作。經實驗證明,本論文方法在符合情境的音樂推薦上,擁有近七成準確率。
    With the development of digital music technology, knowledge workers would welcome a recommendation system that automatically suggests music matching their operating context on the desktop. A context model and a context identification algorithm are proposed to define the user's operating context and to detect context transitions from changes of the focused window. Two association discovery mechanisms, the MAML (Multi-attribute Multi-label) classification algorithm and PM (Probability Measure), are proposed to discover the relationships between context attributes and music features. Based on the discovered associations, the proposed content-based recommendation mechanism selects music from the music database whose features match the user's current operating context. The context-based recommendation system is implemented with the Windows Hook API. Experimental results show that nearly 70% accuracy can be achieved.
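    The abstract above outlines two technical steps: identifying the operating context from transitions of the focused window (implemented with the Windows Hook API), and recommending music whose features are associated with that context. As a rough, hypothetical illustration of the first step (a polling stand-in, not the thesis's actual hook-based implementation), the following Python sketch reports a context transition whenever the foreground window changes on Windows:

        # Hypothetical sketch: observe focused-window transitions on Windows.
        # The thesis hooks window events through the Windows Hook API; this
        # simplified stand-in only polls GetForegroundWindow via ctypes.
        import ctypes
        import time

        user32 = ctypes.windll.user32  # Windows-only

        def focused_window_title() -> str:
            """Return the title of the currently focused (foreground) window."""
            hwnd = user32.GetForegroundWindow()
            buffer = ctypes.create_unicode_buffer(256)
            user32.GetWindowTextW(hwnd, buffer, 256)
            return buffer.value

        def watch_focus_transitions(poll_seconds: float = 1.0) -> None:
            """Print a line whenever the focused window changes."""
            previous = None
            while True:
                current = focused_window_title()
                if current != previous:
                    print(f"focus transition: {previous!r} -> {current!r}")
                    previous = current
                time.sleep(poll_seconds)

    For the second step, the abstract describes mining associations between context attributes and music features (via the MAML classifier and the Probability Measure) and then recommending matching music from the database. The sketch below only shows where such associations plug into recommendation, using a plain conditional-probability score over listening history; the context labels, music-feature classes, and scoring rule are illustrative assumptions, not the thesis's MAML or PM formulations:

        # Hypothetical sketch: score music-feature classes per context from
        # co-occurrence counts, then recommend tracks of the top-scoring class.
        from collections import Counter, defaultdict

        # Listening history as (context label, music-feature class) pairs.
        # Context labels would come from the focused-window context model;
        # music classes would come from extracted audio features.
        history = [
            ("coding",   "calm-instrumental"),
            ("coding",   "calm-instrumental"),
            ("coding",   "upbeat-electronic"),
            ("browsing", "upbeat-electronic"),
            ("writing",  "calm-instrumental"),
        ]

        def learn_associations(history):
            """Estimate P(music class | context) from co-occurrence counts."""
            per_context = defaultdict(Counter)
            for context, music_class in history:
                per_context[context][music_class] += 1
            return {context: {cls: n / sum(counts.values()) for cls, n in counts.items()}
                    for context, counts in per_context.items()}

        def recommend(model, context, library, top_n=3):
            """Return up to top_n tracks whose class scores highest for the context."""
            scores = model.get(context, {})
            ranked_classes = sorted(scores, key=scores.get, reverse=True)
            picks = []
            for cls in ranked_classes:
                picks.extend(track for track, c in library if c == cls)
            return picks[:top_n]

        library = [("track_a.mp3", "calm-instrumental"),
                   ("track_b.mp3", "upbeat-electronic"),
                   ("track_c.mp3", "calm-instrumental")]

        model = learn_associations(history)
        print(recommend(model, "coding", library))  # ['track_a.mp3', 'track_c.mp3', 'track_b.mp3']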
    References: [1] 高臺茜、倪珮晶,華語文網路言論負向情緒用詞檢核軟體研發,第三屆全球華文網路教育研討會(ICICE2003)論文集,2003。
    [2] G. D. Abowd, A. K. Dey, P. J. Brown, N. Davies, M. Smith, and P. Steggles, “Towards a Better Understanding of Context and Context-Awareness,” Proceedings of IEEE International Symposium on Handheld and Ubiquitous Computing, 1999.
    [3] M. G. Brown, “Supporting User Mobility,” Proceedings of International Federation for Information Processing on Mobile Communications, 1996.
    [4] R. Cai, C. Zhang, C. Wang, L. Zhang, and W. Y. Ma, “MusicSense: Contextual Music Recommendation Using Emotional Allocation Modeling,” Proceedings of ACM International Conference on Multimedia, 2007.
    [5] R. Cai, C. Zhang, L. Zhang, and W. Y. Ma, “Scalable Music Recommendation by Search,” Proceedings of ACM International Conference on Multimedia, 2007.
    [6] P. Cano, M. Koppenberger, and N. Wack, “Content-based Music Audio Recommendation,” Proceedings of ACM International Conference on Multimedia, 2005.
    [7] H. C. Chen and A. L. P. Chen, “A Music Recommendation System Based on Music Data Grouping and User Interests,” Proceedings of ACM International Conference on Information and Knowledge Management, 2001.
    [8] W. W. Cohen and W. Fan, “Web-collaborative Filtering: Recommending Music by Crawling the Web,” International Journal of Computer and Telecommunications Networking, Volume 33, Issue 1-6, 2000.
    [9] J. R. Cooperstock, K. Tanikoshi, G. Beirne, T. Narine, and W. Buxton, “Evolution of a Reactive Environment,” Proceedings of ACM International Conference on Human Factors in Computing Systems, 1995.
    [10] B. Cui, L. Liu, C. Pu, J. Shen, and K. L. Tan, “QueST: Querying Music Databases by Acoustic and Textual Features,” Proceedings of ACM International Conference on Multimedia, 2007.
    [11] A. K. Dey, “Understanding and Using Context,” Personal and Ubiquitous Computing, Volume 5, Issue 1, 2001.
    [12] S. Elrod, G. Hall, R. Costanza, M. Dixon, and J. D. Rivières, “Responsive Office Environments,” Communications of the ACM, Volume 36, Issue 7, 1993.
    [13] S. Fickas, G. Kortuem, and Z. Segall, “Software Organization for Dynamic and Adaptable Wearable Systems,” Proceedings of IEEE International Symposium on Wearable Computers, 1997.
    [14] R. Hull, P. Neaves, and J. Bedford-Roberts, “Towards Situated Computing,” Proceedings of IEEE International Symposium on Wearable Computers, 1997.
    [15] J. S. Jang, Audio Signal Processing and Recognition, http://neural.cs.nthu.edu.tw/jang/books/audioSignalProcessing/index.asp.
    [16] D. Kirovski and H. Attias, “Beat-id: Identifying Music via Beat Analysis,” Proceedings of IEEE Workshop on Multimedia Signal Processing, 2002.
    [17] P. Knees, E. Pampalk, and G. Widmer, “Artist Classification with Web-based Data,” Proceedings of International Symposium on Music Information Retrieval, 2004.
    [18] F. F. Kuo, M. F. Chiang, M. K. Shan, and S. Y. Lee, “Emotion-based Music Recommendation by Association Discovery from Film Music,” Proceedings of ACM International Conference on Multimedia, 2005.
    [19] Q. Li, B. M. Kim, D. H. Guan, and D. W. Oh, “A Music Recommender Based on Audio Features,” Proceedings of ACM International Conference on Research and Development in Information Retrieval, 2004.
    [20] G. C. Li, K. Y. Liu, and Y. K. Zhang, “Identifying Chinese Word and Processing Different Meaning Structures,” Journal of Chinese Information Processing, Volume 2, 1988.
    [21] N. Y. Liang, “Knowledge of Chinese Word Segmentation,” Journal of Chinese Information Processing, Volume 4, Issue 2, 1990.
    [22] B. Liu, W. Hsu, and Y. Ma, “Integrating Classification and Association Rule Mining,” Proceedings of ACM International Conference on Knowledge Discovery and Data Mining, 1998.
    [23] B. Logan, “Music Recommendation from Song Sets,” Proceedings of ACM International Conference on Music Information Retrieval, 2004.
    [24] J. MacQueen, “Some Methods for Classification and Analysis of Multivariate Observations,” Proceedings of Berkeley Symposium on Mathematical Statistics and Probability, 1967.
    [25] D. McEnnis, C. McKay, I. Fujinaga, and P. Depalle, “jAudio: a Feature Extraction Library,” Proceedings of International Conference on Music Information Retrieval, 2005.
    [26] J. Y. Pan, H. J. Yang, C. Faloutsos, and P. Duygulu, “Automatic Multimedia Cross-modal Correlation Discovery,” Proceedings of ACM International Conference on Knowledge Discovery and Data Mining, 2004.
    [27] H. S. Park, J. O. Yoo, and S. B. Cho, “A Context-Aware Music Recommendation System Using Fuzzy Bayesian Networks with Utility Theory,” Proceedings of Fuzzy Systems and Knowledge Discovery, 2007.
    [28] J. Pascoe, “Adding Generic Contextual Capabilities to Wearable Computers,” Proceedings of IEEE International Symposium on Wearable Computers, 1998.
    [29] J. Rekimoto, Y. Ayatsuka, and K. Hayashi, “Augment-able Reality: Situated Communication through Physical and Digital Spaces,” Proceedings of IEEE International Symposium on Wearable Computers, 1998.
    [30] B. Schilit, N. Adams, and R. Want, “Context-Aware Computing Applications,” Proceedings of IEEE Workshop on Mobile Computing Systems and Applications, 1994.
    [31] B. Schilit and M. Theimer, “Disseminating Active Map Information to Mobile Hosts,” IEEE Network, Volume 8, Issue 5, 1994.
    [32] U. Shardanand, “Social Information Filtering for Music Recommendation,” Master Thesis, Electrical Engineering and Computer Science, Massachusetts Institute of Technology, 1994.
    [33] J. Shen, L. Li, and T. G. Dietterich, “Real-Time Detection of Task Switches of Desktop Users,” Proceedings of International Joint Conference on Artificial Intelligence, 2007.
    [34] T. Li, M. Ogihara, and Q. Li, “A Comparative Study on Content-Based Music Genre Classification,” Proceedings of ACM SIGIR Conference on Research and Development in Information Retrieval, 2003.
    [35] R. E. Thayer, The Biopsychology of Mood and Arousal, Oxford University Press, ISBN 0195051629, 1989.
    [36] F. A. Thabtah and P. I. Cowling, “A Greedy Classification Algorithm Based on Association Rule,” Applied Soft Computing, Volume 7, Issue 3, 2007.
    [37] F. A. Thabtah, P. I. Cowling, and P. Yonghong, “MMAC: a New Multi-Class, Multi-Label Associative Classification Approach,” Proceedings of IEEE International Conference on Data Mining, 2004.
    [38] G. Tzanetakis and P. Cook, “Musical Genre Classification of Audio Signals,” IEEE Transactions on Speech and Audio Processing, Volume 10, Issue 5, 2002.
    [39] Y. H. Yang, Y. C. Lin, Y. F. Su, and H. H. Chen, “A Regression Approach to Music Emotion Recognition,” IEEE Transactions on Audio, Speech, and Language Processing, Volume 16, Issue 2, 2007.
    [40] Y. Yang and G. Webb, “Proportional k-Interval Discretization for Naive-Bayes Classifiers,” Proceedings of European Conference on Machine Learning, 2001.
    [41] K. Yoshii, M. Goto, K. Komatani, T. Ogata, and H. Okuno, “Hybrid Collaborative and Content-based Music Recommendation Using Probabilistic Model with Latent User Preferences,” Proceedings of International Conference on Music Information Retrieval, 2006.
    [42] Feeling Words: http://eqi.org/fw.htm
    [43] Last.fm. http://www.last.fm/.
    Description: Master's thesis
    National Chengchi University
    Department of Computer Science
    95753007
    Source: http://thesis.lib.nccu.edu.tw/record/#G0095753007
    Data type: thesis
    Appears in Collections: [Department of Computer Science] Theses
