    Please use this identifier to cite or link to this item: https://nccur.lib.nccu.edu.tw/handle/140.119/38537


Title: 基於電影拍攝手法之電影場景情緒探勘 (Emotion Discovery of Movie Content Based on Film Grammar)
Authors: 廖家慧 (Liao, Chia Hui)
Contributors: 沈錳坤 (Shan, Man Kwan)
廖家慧 (Liao, Chia Hui)
Keywords: 內涵式分析 (content-based analysis)
拍攝手法 (film grammar)
電影場景 (movie scene)
視聽覺特徵 (audiovisual features)
情緒 (emotion)
affective classification
    Date: 2007
    Issue Date: 2010-04-09 13:17:50 (UTC+8)
Abstract: In today's digital era, movies have become part of everyday life, and content-based analysis of movie data has become an important research topic. Film grammar tells us that the audiovisual features of a movie are closely tied to the emotions it conveys. In this research, we therefore mine the associations between audiovisual features and emotions so that the emotions of movie scenes can be annotated automatically.

First, the training scenes are manually labeled with emotions. Six classes of features are then extracted from every scene: color, lighting, tempo, close-ups, audio, and subtitle text. Finally, a graph-based approach, the Mixed Media Graph algorithm, is adapted to mine the associations between these features and the emotions of scenes, enabling automatic emotion annotation. Experiments show that the accuracy reaches up to 70%.
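To make the final step concrete, the following is a minimal Python sketch (not the thesis code) of the Mixed Media Graph idea: scenes, feature tokens, and emotion labels become nodes of a single graph, and a random walk with restarts from an unlabeled scene ranks the emotion nodes, in the spirit of Pan et al. [24]. The feature tokens, scene names, restart probability, and iteration count below are illustrative assumptions, not values taken from the thesis.

    import numpy as np

    def build_graph(scenes):
        """Scenes, feature tokens, and emotion labels become graph nodes;
        edges link each scene to its features and, for training scenes,
        to its manually labeled emotion."""
        nodes, index = [], {}
        def nid(name):
            if name not in index:
                index[name] = len(nodes)
                nodes.append(name)
            return index[name]
        edges = []
        for scene, info in scenes.items():
            s = nid(scene)
            for feat in info["features"]:
                edges.append((s, nid(feat)))
            if "emotion" in info:
                edges.append((s, nid(info["emotion"])))
        A = np.zeros((len(nodes), len(nodes)))
        for u, v in edges:
            A[u, v] = A[v, u] = 1.0
        col = A.sum(axis=0)
        col[col == 0] = 1.0           # guard against isolated nodes
        return A / col, index, nodes  # column-stochastic adjacency

    def rwr(W, start, restart=0.65, iters=100):
        """Random walk with restarts from the query scene node."""
        e = np.zeros(W.shape[0])
        e[start] = 1.0
        p = e.copy()
        for _ in range(iters):
            p = (1.0 - restart) * (W @ p) + restart * e
        return p

    # Toy example with hypothetical feature tokens: annotate one unlabeled scene.
    scenes = {
        "scene1": {"features": ["color:warm", "tempo:fast", "audio:loud"],
                   "emotion": "emotion:joy"},
        "scene2": {"features": ["color:dark", "tempo:slow", "audio:quiet"],
                   "emotion": "emotion:sadness"},
        "query":  {"features": ["color:warm", "tempo:fast"]},  # to be annotated
    }
    W, index, nodes = build_graph(scenes)
    scores = rwr(W, index["query"])
    emotion_ids = [i for i, n in enumerate(nodes) if n.startswith("emotion:")]
    print(nodes[max(emotion_ids, key=lambda i: scores[i])])   # -> emotion:joy

The restart probability (0.65 here) controls how far the walk diffuses from the query scene before jumping back; higher values keep the ranking more local to the query's immediate features. The actual graph construction and parameters in the thesis may differ.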
Reference: [1] B. Adams, C. Dorai, and S. Venkatesh, “Toward Automatic Extraction of Expressive Elements from Motion Pictures: Tempo,” IEEE Transactions on Multimedia, Vol. 4, No. 4, pp. 472-481, December 2002.
    [2] D. Arijon, Grammar of the Film Language. CA: Silman-James Press, 1976.
[3] C. J. C. Burges, “A Tutorial on Support Vector Machines for Pattern Recognition,” Data Mining and Knowledge Discovery, Vol. 2, No. 2, pp. 121-167, 1998.
    [4] A. R. Damasio, The Feeling of What Happens: Body and Emotion in the Making of Consciousness. New York: Harcourt Brace, 1999.
    [5] R. Dietz and A. Lang, “Affective Agents: Effects of Agent Affect on Arousal, Attention, Liking and Learning,” Proceedings of Cognitive Technology Conference, San Francisco, CA, 1999.
[6] N. Dimitrova, J. Martino, H. Elenbaas, and L. Agnihotri, “Color SuperHistograms for Video Representation,” IEEE International Conference on Image Processing (ICIP ’99), Kobe, Japan, Vol. 3, pp. 314-318, October 1999.
    [7] P. Ekman, “Universals and Cultural Differences in the Judgments of Facial Expressions of Emotion,” Journal of Personality and Social Psychology, Vol. 54, No. 4, pp. 712-717, October 1987.
    [8] L. Giannetti, Understanding Movies, 10th ed. Englewood Cliffs, New Jersey: Prentice Hall, 2005.
[9] A. Hanjalic and L. Q. Xu, “Extracting Moods from Pictures and Sounds: Towards Truly Personalized TV,” IEEE Signal Processing Magazine, Vol. 23, No. 2, pp. 90-100, March 2006.
[10] A. Hanjalic and L. Q. Xu, “Affective Video Content Representation and Modeling,” IEEE Transactions on Multimedia, Vol. 7, No. 1, pp. 143-154, February 2005.
[11] A. Hanjalic and L. Q. Xu, “User-oriented Affective Video Content Analysis,” Proceedings of IEEE Workshop on Content-Based Access of Image and Video Libraries (CBAIVL), Kauai, Hawaii, pp. 50-57, December 2001.
    [12] H. B. Kang, “Affective Content Retrieval from Video with Relevance Feedback,” International Conference on Asian Digital Libraries, Kuala Lumpur, Malaysia, pp. 243-252, December 2003.
    [13] H. B. Kang, “Affective Content Detection using HMMs,” Proceedings of ACM International Conference on Multimedia, Berkeley, California, U.S.A, pp. 259-262, November 2003.
    [14] G. Kirouac, Les émotions: Monographies de psychologie. Sillery: Presses de l’Université du Québec, 1992.
    [15] F. F. Kuo, M. F. Chiang, M. K. Shan, and S. Y. Lee, “Emotion-based Music Recommendation by Association Discovery from Film Music,” Proceedings of ACM International Conference on Multimedia, Singapore, pp. 507-510, November 2005.
[16] Y. Li, S. H. Lee, C. H. Yeh, and C. C. J. Kuo, “Techniques for Movie Content Analysis and Skimming,” IEEE Signal Processing Magazine, Vol. 23, No. 2, pp. 79-89, March 2006.
[17] L. Lu, H. Jiang, and H. J. Zhang, “A Robust Audio Classification and Segmentation Method,” Proceedings of ACM International Conference on Multimedia, Ottawa, Ontario, Canada, pp. 203-211, September 2001.
[18] L. Lu, H. J. Zhang, and H. Jiang, “Content Analysis for Audio Classification and Segmentation,” IEEE Transactions on Speech and Audio Processing, Vol. 10, No. 7, pp. 504-516, October 2002.
    [19] F. H. Mahnke, Color, Environmental and Human Response. New York: Van Nostrand Reinhold, 1996.
    [20] S. Moncrieff, C. Dorai, and S. Venkatesh, “Affect Computing in Film through Sound Energy Dynamics,” Proceedings of ACM International Conference on Multimedia, Ottawa, Ontario, Canada, pp. 525-527, September 2001.
    [21] A. Ortony, G. Clore, and A. Collins, The Cognitive Structure of Emotions. New York: Oxford University Press, 1988.
    [22] C. E. Osgood, G. J. Suci, and P. H. Tannenbaum, The Measurement of Meaning. Urbana, IL: University of Illinois Press, 1957.
    [23] J. Y. Pan, H. J. Yang, P. Duygulu, and C. Faloutsos, “Automatic Image Captioning,” Proceedings of IEEE International Conference on Multimedia and Expo (ICME ’04), Taipei, Taiwan, pp. 1987-1990, June 2004.
[24] J. Y. Pan, H. J. Yang, C. Faloutsos, and P. Duygulu, “Automatic Multimedia Cross-modal Correlation Discovery,” Proceedings of ACM International Conference on Knowledge Discovery and Data Mining (SIGKDD ’04), Seattle, Washington, pp. 653-658, August 2004.
    [25] D. S. Park, J. S. Park, and J. H. Han, “Image Indexing Using Color Histogram in the CIELUV Color Space,” Proceedings of 5th Japan-Korea Workshop on Computer Vision, Korea, pp. 126-132, 1999.
    [26] G. Peeters, “A Large Set of Audio Features for Sound Description (Similarity and Classification),” in the CUIDADO project. Technical report, Ircam, Paris, France, April 2004.
[27] Z. Rasheed, Y. Sheikh, and M. Shah, “On the Use of Computable Features for Film Classification,” IEEE Transactions on Circuits and Systems for Video Technology (CSVT), Vol. 15, No. 1, pp. 52-64, January 2005.
    [28] J. A. Russell and A. Mehrabian, “Evidence for a Three-Factor Theory of Emotions,” Journal of Research in Personality, Vol. 11, pp. 273-294, 1977.
[29] J. Saunders, “Real-Time Discrimination of Broadcast Speech/Music,” Proceedings of IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP ’96), Atlanta, GA, Vol. 2, pp. 993-996, May 1996.
[30] E. Scheirer and M. Slaney, “Construction and Evaluation of a Robust Multifeature Speech/Music Discriminator,” Proceedings of IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP ’97), Munich, Germany, Vol. 2, pp. 1331-1334, April 1997.
    [31] R. E. Thayer, The Biopsychology of Mood and Arousal. New York: Oxford University Press, 1989.
    [32] H. L. Wang and L. F. Cheong, “Affective Understanding in Film,” IEEE Transactions on Circuits and Systems for Video Technology (CSVT), Vol. 16, No. 6, pp. 689-704, June 2006.
[33] C. Y. Wei, N. Dimitrova, and S. F. Chang, “Color-Mood Analysis of Films on Syntactic and Psychological Models,” Proceedings of IEEE International Conference on Multimedia and Expo (ICME ’04), Taipei, Taiwan, pp. 831-834, June 2004.
    [34] H. Zettl, Sight Sound Motion: Applied Media Aesthetics, 3rd ed. Belmont, CA: Wadsworth Publishing Company, 1998.
[35] Intel Open Source Computer Vision Library (OpenCV), http://www.intel.com/technology/computing/opencv/index.htm
    [36] http://eqi.org/fw.htm
Description: 碩士 (Master's thesis)
國立政治大學 (National Chengchi University)
資訊科學學系 (Department of Computer Science)
94753027
96
    Source URI: http://thesis.lib.nccu.edu.tw/record/#G0094753027
    Data Type: thesis
Appears in Collections: [資訊科學系 Department of Computer Science] 學位論文 (Theses)

    Files in This Item:

File | Size | Format
302701.pdf | 43Kb | Adobe PDF
302702.pdf | 58Kb | Adobe PDF
302703.pdf | 99Kb | Adobe PDF
302704.pdf | 18Kb | Adobe PDF
302705.pdf | 16Kb | Adobe PDF
302706.pdf | 18Kb | Adobe PDF
302707.pdf | 248Kb | Adobe PDF
302708.pdf | 221Kb | Adobe PDF
302709.pdf | 71Kb | Adobe PDF
302710.pdf | 14Kb | Adobe PDF
302711.pdf | 22Kb | Adobe PDF


All items in the NCCU Institutional Repository (政大典藏) are protected by copyright, with all rights reserved.

