Please use this identifier to cite or link to this item: https://nccur.lib.nccu.edu.tw/handle/140.119/112677
Title: | Background Music Recommendation for Video by Incorporating Temporal Sequence of Local Features
Authors: | 林鼎崴 Lin, Ting Wei |
Contributors: | 沈錳坤 Shan, Man-Kwan; 林鼎崴 Lin, Ting Wei
Keywords: | Automatic video soundtrack generation; data mining; correlation model; background music recommendation
Date: | 2017 |
Issue Date: | 2017-09-13 14:47:48 (UTC+8) |
Abstract: | Background music plays an important role in making user-generated video more colorful and attractive. One current line of research on automatic background music recommendation is the correlation-based approach, in which a correlation model between visual and music features is learned from training data and used to recommend background music for a query video. Because existing correlation-based approaches consider global features only, in this work we propose to integrate the temporal sequence of local features, along with global features, into the correlation modeling process. The local features are derived from segmented audiovisual clips and represent the local variation of features; their temporal sequence is then transformed and incorporated into the correlation model. Cross-Modal Factor Analysis, Multiple-type Latent Semantic Analysis, Canonical Correlation Analysis, Kernel Canonical Correlation Analysis, Deep Canonical Correlation Analysis, Partial Least Squares, and Partial Least Squares Regression are investigated for correlation modeling, which recommends background music as a ranked list. In the experiments, we first compare, for each algorithm, the accuracy obtained with only global features, only local features, and both combined; we then compare the results for different numbers of clips and Fourier coefficients.
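The correlation-modeling step the abstract describes (learn a shared space between visual and music features, then rank candidate tracks for a query video) can be illustrated with a minimal sketch. This is not the thesis code: the feature vectors are synthetic placeholders, and plain linear CCA stands in for the full family of methods (MLSA, CFA, KCCA, DCCA, PLS/PLSR) the thesis evaluates.

```python
import numpy as np

def _inv_sqrt(C, eps=1e-8):
    """Inverse square root of a symmetric PSD matrix (used for whitening)."""
    w, V = np.linalg.eigh(C)
    w = np.clip(w, eps, None)
    return V @ np.diag(w ** -0.5) @ V.T

def fit_cca(X, Y, n_components):
    """Fit linear CCA: projections maximizing correlation between two views."""
    mx, my = X.mean(axis=0), Y.mean(axis=0)
    Xc, Yc = X - mx, Y - my
    n = X.shape[0]
    Cxx, Cyy, Cxy = Xc.T @ Xc / n, Yc.T @ Yc / n, Xc.T @ Yc / n
    Sx, Sy = _inv_sqrt(Cxx), _inv_sqrt(Cyy)
    U, _, Vt = np.linalg.svd(Sx @ Cxy @ Sy)
    Wx = Sx @ U[:, :n_components]      # video-side projection
    Wy = Sy @ Vt.T[:, :n_components]   # music-side projection
    return Wx, Wy, mx, my

def rank_music(video_vec, music_feats, Wx, Wy, mx, my):
    """Rank candidate tracks for one video by cosine similarity in the shared space."""
    q = (video_vec - mx) @ Wx
    M = (music_feats - my) @ Wy
    sims = M @ q / (np.linalg.norm(M, axis=1) * np.linalg.norm(q) + 1e-12)
    return np.argsort(-sims)  # best match first

# Toy demo: music features are an exact linear function of video features,
# so the paired track should come back as the top recommendation.
rng = np.random.default_rng(0)
video = rng.normal(size=(60, 6))          # e.g. global + transformed local features
music = video @ rng.normal(size=(6, 4))   # perfectly correlated music features
Wx, Wy, mx, my = fit_cca(video, music, n_components=3)
ranking = rank_music(video[7], music, Wx, Wy, mx, my)
```

With perfectly correlated views, the canonical projections of a video and its paired track coincide, so `ranking[0]` recovers the paired track; real features would of course be noisier, which is why the thesis compares several correlation models and feature combinations.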
Reference: |
[1] Y. Altun, I. Tsochantaridis, and T. Hofmann, Hidden Markov support vector machines. International Conference on Machine Learning, 2003.
[2] G. Andrew, R. Arora, J. A. Bilmes, and K. Livescu, Deep canonical correlation analysis. International Conference on Machine Learning (ICML), 2013.
[3] M. Cristani, A. Pesarin, C. Drioli, V. Murino, A. Rodà, M. Grapulin, and N. Sebe, Toward an automatically generated soundtrack from low-level cross-modal correlations for automotive scenarios. Proceedings of the 18th ACM International Conference on Multimedia, 2010.
[4] S. Deerwester, S. T. Dumais, G. W. Furnas, T. K. Landauer, and R. Harshman, Indexing by latent semantic analysis. Journal of the American Society for Information Science, 41(6), 1990.
[5] A. Hanjalic and L. Q. Xu, Affective video content representation and modeling. IEEE Transactions on Multimedia, 7(1), 2005.
[6] D. R. Hardoon, S. Szedmak, and J. Shawe-Taylor, Canonical correlation analysis: An overview with application to learning methods. Neural Computation, 16(12), 2004.
[7] H. Wold, Path models with latent variables: The NIPALS approach. In H. M. Blalock et al., editors, Quantitative Sociology: International Perspectives on Mathematical and Statistical Model Building, 1975.
[8] G. E. Hinton, S. Osindero, and Y. W. Teh, A fast learning algorithm for deep belief nets. Neural Computation, 18(7), 2006.
[9] F. F. Kuo, M. F. Chiang, M. K. Shan, and S. Y. Lee, Emotion-based music recommendation by association discovery from film music. Proceedings of the 13th Annual ACM International Conference on Multimedia, 2005.
[10] F. F. Kuo, M. K. Shan, and S. Y. Lee, Background music recommendation for video based on multimodal latent semantic analysis. 2013 IEEE International Conference on Multimedia and Expo, 2013.
[11] D. Li, N. Dimitrova, M. Li, and I. K. Sethi, Multimedia content processing through cross-modal association. Proceedings of the 11th ACM International Conference on Multimedia, 2003.
[12] J. C. Lin, W. L. Wei, and H. M. Wang, EMV-matchmaker: Emotional temporal course modeling and matching for automatic music video generation. Proceedings of the 23rd ACM International Conference on Multimedia, 2015.
[13] J. C. Lin, W. L. Wei, and H. M. Wang, DEMV-matchmaker: Emotional temporal course representation and deep similarity matching for automatic music video generation. IEEE International Conference on Acoustics, Speech and Signal Processing, 2016.
[14] L. Lu, D. Liu, and H. J. Zhang, Automatic mood detection and tracking of music audio signals. IEEE Transactions on Audio, Speech, and Language Processing, 14(1), 2006.
[15] L. R. Rabiner and B. Gold, Theory and Application of Digital Signal Processing. Englewood Cliffs, NJ, Prentice-Hall, Inc., 1975.
[16] R. Rosipal and N. Krämer, Overview and recent advances in partial least squares. In Subspace, Latent Structure and Feature Selection, Springer, 34-51, 2006.
[17] E. M. Schmidt and Y. E. Kim, Prediction of time-varying musical mood distributions from audio. Proceedings of the 11th International Society for Music Information Retrieval Conference, 2010.
[18] R. R. Shah, Y. Yu, and R. Zimmermann, Advisor: Personalized video soundtrack recommendation by late fusion with heuristic rankings. Proceedings of the 22nd ACM International Conference on Multimedia, 2014.
[19] R. R. Shah, Y. Yu, and R. Zimmermann, User preference-aware music video generation based on modeling scene moods. Proceedings of the 5th ACM Multimedia Systems Conference (MMSys), 2014.
[20] H. Su, F. F. Kuo, C. H. Chiu, Y. J. Chou, and M. K. Shan, MediaEval 2013: Soundtrack selection for commercials based on content correlation modeling. MediaEval Benchmarking Initiative for Multimedia Evaluation, 2013.
[21] R. E. Thayer, The Biopsychology of Mood and Arousal. Oxford University Press, 1990.
[22] H. Tong, C. Faloutsos, and J. Y. Pan, Fast random walk with restart and its applications. Proceedings of the 6th IEEE International Conference on Data Mining (ICDM), 2006.
[23] J. C. Wang, Y. H. Yang, I. H. Jhuo, Y. Y. Lin, and H. M. Wang, The acousticvisual emotion Gaussians model for automatic generation of music video. Proceedings of the 20th ACM International Conference on Multimedia, 2012.
[24] X. Wang, J. T. Sun, Z. Chen, and C. Zhai, Latent semantic analysis for multiple-type interrelated data objects. Proceedings of the 29th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, 2006.
[25] Y. Yu, Z. Shen, and R. Zimmermann, Automatic music soundtrack generation for outdoor videos from contextual sensor information. Proceedings of the 20th ACM International Conference on Multimedia, 2012.
Description: | Master's thesis, National Chengchi University, Department of Computer Science, 103753008
Source URI: | http://thesis.lib.nccu.edu.tw/record/#G0103753008 |
Data Type: | thesis |
Appears in Collections: | [Department of Computer Science] Theses
Files in This Item:
File | Size | Format
300801.pdf | 3809 KB | Adobe PDF
All items in 政大典藏 are protected by copyright, with all rights reserved.