    Please use this identifier to cite or link to this item: https://nccur.lib.nccu.edu.tw/handle/140.119/95268


    Title: VizStory: 視覺化數位童話故事
    VizStory: Visualization of Digital Narrative for Fairy Tales
    Authors: 黃詰仁
    Huang, Chieh-Jen
    Contributors: 沈錳坤
    Shan, Man-Kwan
    黃詰仁
    Huang, Chieh-Jen
    Keywords: fairy tales (童話)
    visualization (視覺化)
    digital narrative (數位敘事)
    image retrieval (影像檢索)
    Date: 2009
    Issue Date: 2016-05-09 15:29:10 (UTC+8)
    Abstract: In today's society, images appear everywhere: in newspapers and magazines, on websites, and in children's picture books, where they reinforce the reader's impression of the text. To most people, such images are often more attractive than the words around them. In particular, presenting the text of a fairy tale visually, through images, can better capture children's attention.
    This thesis therefore studies visualization techniques that transform the text of a fairy tale into images. Exploiting characteristics of fairy tales such as narrative structure and characters, we segment a story according to its plot, extract keywords that represent each segment's topic as well as the story as a whole, and use Web image search engines to retrieve an initial image set. Finally, a suitable image is selected for each segment to achieve the visualization. Experimental results show that the proposed technique reconstructs the narrative structure of fairy tales with an accuracy of about 70%.
    Stories are often accompanied by images that reinforce their effect. In particular, most fairy tales written for children are illustrated with images to attract children's interest.
    This thesis focuses on a story visualization technology that transforms the text of a fairy tale into a series of visual images. The proposed technology is developed based on the narrative structure of fairy tales. First, the input fairy tale is divided into segments in accordance with the plot of the story. Then, global keywords for the whole story and segment keywords for each segment are extracted. Moreover, expanded keywords, which are important but infrequent in each segment, are discovered. These three types of keywords are fed into a Web image search engine to find an initial image set. Finally, the proposed system filters the irrelevant images out of the initial image set and selects a representative image for each segment. Experiments show that the proposed method achieves about 70% accuracy in reconstructing the narrative structures of fairy tales.
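    The keyword-driven pipeline described in the abstract can be sketched roughly as follows. This is a minimal illustration, not the thesis's actual method: all function names are hypothetical, and the plot-based segmentation, expanded-keyword discovery, Web image search, and image filtering steps are replaced here by simple frequency heuristics.

    ```python
    import re
    from collections import Counter

    # Illustrative stopword list; the real system would use a proper NLP toolkit.
    STOPWORDS = {"the", "a", "an", "and", "of", "to", "in", "was", "it",
                 "she", "he", "her", "his", "with", "that", "on", "at"}

    def tokenize(text):
        """Lowercase word tokens with stopwords removed."""
        return [w for w in re.findall(r"[a-z]+", text.lower()) if w not in STOPWORDS]

    def segment_story(story, n_segments):
        """Stand-in for plot-based segmentation: split sentences into even chunks."""
        sents = [s for s in re.split(r"(?<=[.!?])\s+", story.strip()) if s]
        size = max(1, -(-len(sents) // n_segments))  # ceiling division
        return [" ".join(sents[i:i + size]) for i in range(0, len(sents), size)]

    def global_keywords(story, k=3):
        """Global keywords: most frequent content words across the whole story."""
        return [w for w, _ in Counter(tokenize(story)).most_common(k)]

    def segment_keywords(segment, story, k=2):
        """Segment keywords: words frequent in a segment but not already global."""
        g = set(global_keywords(story))
        counts = Counter(w for w in tokenize(segment) if w not in g)
        return [w for w, _ in counts.most_common(k)]

    def build_image_queries(story, n_segments=3):
        """One image-search query (a keyword list) per story segment."""
        return [global_keywords(story) + segment_keywords(seg, story)
                for seg in segment_story(story, n_segments)]
    ```

    In the thesis, each such query would be sent to a Web image search engine, and the returned candidates would then be filtered so that one representative image remains per segment.
    
    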
    Reference: [1] 林文寶, 徐守濤, 蔡尚志, and 陳正治, Children's Literature (兒童文學), National Open University, 1993.
    [2] 林逸君 and 徐雅惠, A Comparative Study of Presentation Techniques in Traditional and Subversive Fairy-Tale Picture Books: The Case of "The Three Little Pigs," Master's thesis, National Taichung University of Education, 2003.
    [3] 林守為, Children's Literature (兒童文學), Wu-Nan Book, 1988.
    [4] 陳正治, A Study of Fairy-Tale Writing (童話寫作研究), Wu-Nan Book, 1990.
    [5] 傅林統, Thought and Technique in Children's Literature (兒童文學的思想與技巧), Fu-Chun Culture, 1990.
    [6] 蔡尚志, Principles and Techniques of Fairy-Tale Creation (童話創作的原理與技巧), Wu-Nan Book, 1996.
    [7] A. Z. Broder, D. Carmel, M. Herscovici, A. Soffer, and J. Zien, “Efficient Query Evaluation using a Two-level Retrieval Process,” Proc. of ACM International Conference on Information and Knowledge Management CIKM, 2003.
    [8] R. Cilibrasi and P. M. B. Vitanyi, “The Google Similarity Distance,” IEEE Transactions on Knowledge and Data Engineering, Vol. 19, No. 3, 2007.
    [9] B. Coyne and R. Sproat, “WordsEye: An Automatic Text-to-Scene Conversion System,” Proc. of ACM International Conference on Computer Graphics and Interactive Techniques SIGGRAPH, 2001.
    [10] R. Datta, D. Joshi, J. Li, and J. Z. Wang, “Image Retrieval: Ideas, Influences, and Trends of the New Age,” ACM Computing Surveys, Vol. 40, No. 1, 2008.
    [11] R. Datta, J. Li, and J. Z. Wang, “Content-based Image Retrieval: Approaches and Trends of the New Age,” Proc. of the ACM International Workshop on Multimedia Information Retrieval MIR, 2005.
    [12] E. Frank, G. W. Paynter, I. H. Witten, C. Gutwin, and C. G. Nevill-Manning, “Domain-specific Keyphrase Extraction,” Proc. of the International Joint Conference on Artificial Intelligence, 1999.
    [13] A. B. Goldberg, X. Zhu, C. R. Dyer, M. Eldawy, and L. Heng, “Easy as ABC? Facilitating Pictorial Communication via Semantically Enhanced Layout,” Proc. of Conference on Computational Natural Language Learning CoNLL, 2008.
    [14] M. A. Hearst, “TextTiling: Segmenting Text into Multi-paragraph Subtopic Passages,” Computational Linguistics, Vol. 23, No. 1, 1997.
    [15] Y. Itabashi and Y. Masunaga, “Correlating Scenes as Series of Document Sentences with Images – Taking The Tale of Genji as an Example,” Proc. of IEEE International Conference on Data Engineering ICDE, 2005.
    [16] R. Johansson, A. Berglund, M. Danielsson, and P. Nugues, “Automatic Text-to-Scene Conversion in the Traffic Accident Domain,” Proc. of International Joint Conference on Artificial Intelligence IJCAI, 2005.
    [17] D. Joshi, J. Z. Wang, and J. Li, “The Story Picturing Engine – A System for Automatic Text Illustration,” ACM Transactions on Multimedia Computing, Communications and Applications, Vol. 2, No. 1, 2006.
    [18] D. Joshi, J. Z. Wang, and J. Li, “The Story Picturing Engine: Finding Elite Images to Illustrate a Story Using Mutual Reinforcement,” Proc. of ACM International Workshop on Multimedia Information Retrieval MIR, 2004.
    [19] D. Kauchak and F. Chen, “Feature-Based Segmentation of Narrative Documents,” Proc. of ACL Workshop on Feature Engineering for Machine Learning in Natural Language Processing, 2005.
    [20] H. Kozima and T. Furugori, “Segmenting Narrative Text into Coherent Scenes,” Literary and Linguistic Computing, Vol. 9, No. 1, 1993.
    [21] D. Lowe, “Distinctive Image Features from Scale-invariant Keypoints,” International Journal of Computer Vision, Vol. 60, No. 2, 2004.
    [22] M. de Marneffe, B. MacCartney, and C. D. Manning, “Generating Typed Dependency Parses from Phrase Structure Parses,” Proc. of the International Conference on Language Resources and Evaluation LREC, 2006.
    [23] O. Medelyan and I. H. Witten, “Thesaurus Based Automatic Keyphrase Indexing,” Proc. of the Joint Conference on Digital Libraries JCDL, 2006.
    [24] R. Mihalcea and B. Leong, “Toward Communicating Simple Sentences Using Pictorial Representations,” Proc. of the Conference of the Association for Machine Translation in the Americas AMTA, 2005.
    [25] G. Miller, R. Beckwith, C. Fellbaum, and K. Miller, “Introduction to WordNet: An On-line Lexical Database,” International Journal of Lexicography, Vol. 3, No. 4, 1990.
    [26] S. Osiński and D. Weiss, “A Concept-Driven Algorithm for Clustering Search Results,” IEEE Intelligent Systems, Vol. 20, No. 3, 2005.
    [27] S. Osiński and D. Weiss, “Lingo: Search Results Clustering Algorithm Based on Singular Value Decomposition,” Proc. of International Joint Conference on Intelligent Information Systems IIS, 2004.
    [28] J. Y. Pan, H. J. Yang, C. Faloutsos, and P. Duygulu, “Automatic Multimedia Cross-modal Correlation Discovery,” Proc. of ACM International Conference on Knowledge Discovery and Data Mining SIGKDD, 2004.
    [29] J. Y. Pan, H. J. Yang, P. Duygulu, and C. Faloutsos, “Automatic Image Captioning,” Proc. of IEEE International Conference on Multimedia and Expo ICME, 2004.
    [30] J. Y. Pan, H. J. Yang, C. Faloutsos, and P. Duygulu, “GCap: Graph-based Automatic Image Captioning,” Proc. of International Workshop on Multimedia Data and Document Engineering, 2004.
    [31] J. Y. Pan, H. J. Yang, C. Faloutsos, and P. Duygulu, “Cross-modal Correlation Mining Using Graph Algorithms,” in X. Zhu and I. Davidson (Eds.), Knowledge Discovery and Data Mining: Challenges and Realities with Real World Data, Idea Group, Hershey, PA, 2006.
    [32] I. H. Witten, G. W. Paynter, E. Frank, C. Gutwin, and C. G. Nevill-Manning, “KEA: Practical Automatic Keyphrase Extraction,” Proc. of the ACM Conference on Digital Libraries DL, 1999.
    [33] L. Wu, X.-S. Hua, N. Yu, W.-Y. Ma, and S. Li, “Flickr Distance,” Proc. of ACM International Conference on Multimedia MM, 2008.
    [34] X. Zhu, A. B. Goldberg, M. Eldawy, C. R. Dyer, and B. Strock, “A Text-to-Picture Synthesis System for Augmenting Communication,” Proc. of the AAAI Conference on Artificial Intelligence AAAI, 2007.
    Description: Master's thesis
    National Chengchi University (國立政治大學)
    Department of Computer Science (資訊科學學系)
    96753011
    Source URI: http://thesis.lib.nccu.edu.tw/record/#G0096753011
    Data Type: thesis
    Appears in Collections: [Department of Computer Science 資訊科學系] Theses (學位論文)
