Please use this identifier to cite or link to this item:
https://nccur.lib.nccu.edu.tw/handle/140.119/29692
Title: | 以角色的社會網絡為基礎之電影敘事結構分析 Film Narrative Exploration Based on Social Network Analysis of Characters |
Authors: | 余孟芝 Yu, Meng Chih |
Contributors: | 沈錳坤 Shan, Man Kwan; 余孟芝 Yu, Meng Chih |
Keywords: | Video segmentation; Film narrative structure analysis; Social networks |
Date: | 2008 |
Issue Date: | 2009-09-11 16:04:16 (UTC+8) |
Abstract: | With the growth of the entertainment industry, and advances in digital video analysis and storage technologies, users can utilize DVD chapter indexes for quick access, retrieval, and browsing of movie content. Development of automatic movie content analysis is therefore important.
In this thesis, we focus on romance and relationship movies, whose narratives center on the relations between people. We propose a novel method for film narrative exploration based on social network analysis of characters. The method consists of four steps. First, we perform scene change detection to segment a movie into scenes. Second, we extract the characters appearing in each scene as its feature model, using a face recognition system. Third, because character importance affects segmentation, we weight each character by its centrality in the social network. Finally, we compute the cosine similarity between scenes based on their character features, and segment the movie into story units using a sequential clustering algorithm.
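The character-weighting and scene-similarity steps above can be sketched as follows. This is a minimal illustration, not the thesis's actual implementation: the per-scene cast lists are hypothetical, and normalized degree centrality stands in for whichever centrality measure the thesis uses.

```python
from itertools import combinations

def degree_centrality(scenes):
    """Build a character co-occurrence network from per-scene cast lists
    and return each character's normalized degree centrality."""
    neighbors = {}
    for cast in scenes:
        for a, b in combinations(set(cast), 2):
            neighbors.setdefault(a, set()).add(b)
            neighbors.setdefault(b, set()).add(a)
    n = len(neighbors)
    # Normalized degree centrality: degree / (n - 1)
    return {c: len(adj) / (n - 1) for c, adj in neighbors.items()}

def scene_vector(cast, weights):
    """Represent a scene as a centrality-weighted character vector."""
    return {c: weights.get(c, 0.0) for c in set(cast)}

def cosine_similarity(u, v):
    """Cosine similarity between two sparse character vectors."""
    dot = sum(u[c] * v[c] for c in u if c in v)
    nu = sum(x * x for x in u.values()) ** 0.5
    nv = sum(x * x for x in v.values()) ** 0.5
    return dot / (nu * nv) if nu and nv else 0.0

# Hypothetical cast lists for five consecutive scenes.
scenes = [["Ann", "Ben"], ["Ann", "Ben", "Cat"], ["Cat", "Dan"],
          ["Dan", "Eve"], ["Ann", "Ben"]]
w = degree_centrality(scenes)
v0, v1 = scene_vector(scenes[0], w), scene_vector(scenes[1], w)
print(round(cosine_similarity(v0, v1), 3))  # → 0.686
```

Scenes sharing weighted characters score high (scenes 0 and 1 share Ann and Ben), while scenes with disjoint casts score zero, which is what drives the story-unit boundaries in the final step.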
We conduct experiments on four romance and relationship movies, evaluating how well the movies are segmented into story units. Experimental results show that our proposed story unit segmentation method based on social network analysis of characters achieves accuracy between 0.63 and 0.94. |
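The final step, grouping consecutive scenes into story units, can be illustrated with a simple greedy threshold variant of sequential clustering. This is an assumption for illustration only; the thesis's actual sequential clustering algorithm and the threshold value are not specified here.

```python
def segment_story_units(similarities, threshold=0.5):
    """Greedy sequential segmentation: place a story-unit boundary
    between scenes i and i+1 whenever their similarity falls below
    the threshold. similarities[i] compares scene i with scene i+1."""
    units, current = [], [0]
    for i, sim in enumerate(similarities):
        if sim < threshold:
            units.append(current)   # close the current story unit
            current = []
        current.append(i + 1)
    units.append(current)
    return units

# Hypothetical scene-to-scene similarities for six scenes.
sims = [0.9, 0.2, 0.8, 0.7, 0.1]
print(segment_story_units(sims))  # → [[0, 1], [2, 3, 4], [5]]
```

Because the clustering is sequential, only adjacent scenes can be merged, so each story unit is a contiguous run of scenes, matching how narratives unfold in time.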
Description: | Master's thesis, National Chengchi University, Department of Computer Science, 95753043, 97 |
Source URI: | http://thesis.lib.nccu.edu.tw/record/#G0095753043 |
Data Type: | thesis |
Appears in Collections: | [Department of Computer Science] Theses
All items in 政大典藏 are protected by copyright, with all rights reserved.