    Please use this identifier to cite or link to this item: https://nccur.lib.nccu.edu.tw/handle/140.119/156080


    Title: Spoiler Alert: Automatically Detecting Spoilers as Part of Narrative Structure (劇透預警:以敘事結構自動偵測劇透)
    Authors: Yu, Ying-Pei (余盈蓓)
    Contributors: Chang, Yu-Yun (張瑜芸); Yu, Ying-Pei (余盈蓓)
    Keywords: Spoiler Detection
    Narrative Extraction
    Narrative Theory
    Deep Learning
    Online Movie Reviews
    Date: 2025
    Issue Date: 2025-03-03 15:25:57 (UTC+8)
    Abstract: Grounded in narrative theory, this study proposes an approach that combines a deep learning model with narrative linguistic features to improve spoiler detection in Chinese online reviews. Spoilers, a drawback that comes with the convenience of social media and online forums, have become a matter of public concern: when users share their thoughts on fictional works such as books and movies, they may accidentally reveal important plot points that other users do not wish to know in advance. To keep users from developing negative feelings toward a platform after encountering unexpected movie content, major platforms offer spoiler warnings that let reviewers tag their own posts as containing spoilers. Because tagging is voluntary and the definition of a spoiler varies from person to person, however, there is still considerable room for improvement. This study therefore proposes a BERT model, grounded in narrative theory, that automatically detects spoilers in reviews.
    The model fine-tunes pre-trained BERT with the narrative-specific linguistic features identified in narrative theory, extracted through thematic and structural analysis, to improve its detection of Chinese spoilers. The results show that BERT trained with these narrative features (F-score: 0.75; correctly predicted spoilers: 256) is more sensitive to spoilers than BERT trained without them (F-score: 0.74; correctly predicted spoilers: 238). Analysis of BERT's self-attention mechanism indicates that verb and noun types, core argument (dependency) relations, post-posed temporal words, aspect markers, and pronouns are especially influential for spoiler detection. In addition, the analysis of misclassified cases suggests that the degree of text subjectivity affects spoiler classification, pointing to a feasible direction for future research.
    Overall, by combining narrative theory from a linguistic perspective with deep learning techniques, this study offers a way to improve automatic spoiler detection for Chinese and shows that spoiler detection can be treated as a narrative extraction task.
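    A minimal sketch may make the approach described above concrete. The thesis does not publish code on this page, so the checkpoint name "bert-base-chinese", the six-dimensional feature vector, and the classifier head below are illustrative assumptions rather than details taken from the thesis; the idea shown is concatenating narrative features with BERT's pooled [CLS] representation before classification.

        # Sketch (assumed details, not the thesis's released code): BERT
        # fine-tuned with auxiliary narrative features for spoiler detection.
        import torch
        import torch.nn as nn
        from transformers import BertModel, BertTokenizer

        class NarrativeBert(nn.Module):
            def __init__(self, num_narrative_feats, model_name="bert-base-chinese"):
                super().__init__()
                self.bert = BertModel.from_pretrained(model_name)
                hidden = self.bert.config.hidden_size  # 768 for bert-base
                # The classifier sees the pooled [CLS] vector plus the features.
                self.classifier = nn.Sequential(
                    nn.Linear(hidden + num_narrative_feats, 256),
                    nn.ReLU(),
                    nn.Dropout(0.1),
                    nn.Linear(256, 2),  # spoiler vs. non-spoiler
                )

            def forward(self, input_ids, attention_mask, narrative_feats):
                out = self.bert(input_ids=input_ids, attention_mask=attention_mask)
                combined = torch.cat([out.pooler_output, narrative_feats], dim=-1)
                return self.classifier(combined)  # (batch, 2) logits

        tokenizer = BertTokenizer.from_pretrained("bert-base-chinese")
        model = NarrativeBert(num_narrative_feats=6)
        enc = tokenizer(["主角最後死了。"], return_tensors="pt",
                        padding=True, truncation=True, max_length=128)
        # Hypothetical feature vector, e.g. counts of: verbs, nouns, core
        # argument relations, post-posed temporal words, aspect markers, pronouns.
        feats = torch.tensor([[3.0, 2.0, 1.0, 0.0, 1.0, 0.0]])
        logits = model(enc["input_ids"], enc["attention_mask"], feats)

    The self-attention analysis reported in the abstract can be approximated by asking the encoder to return its attention maps; again a sketch under the same assumptions, not the thesis's actual procedure:

        # Average the last layer's heads and read the [CLS] row to see which
        # tokens the classification position attends to.
        with torch.no_grad():
            out = model.bert(enc["input_ids"],
                             attention_mask=enc["attention_mask"],
                             output_attentions=True)
        cls_attn = out.attentions[-1][0].mean(dim=0)[0]  # (seq_len,)
        for tok, w in zip(tokenizer.convert_ids_to_tokens(enc["input_ids"][0]),
                          cls_attn.tolist()):
            print(f"{tok}\t{w:.3f}")

    Concatenating hand-crafted features with the pooled output is only one common way to inject linguistic knowledge into a fine-tuned encoder; the thesis may encode its narrative features differently (for example, as token-level annotations).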
    Description: Master's thesis
    National Chengchi University
    Graduate Institute of Linguistics
    110555009
    Source URI: http://thesis.lib.nccu.edu.tw/record/#G0110555009
    Data Type: thesis
    Appears in Collections: [Graduate Institute of Linguistics] Theses

    Files in This Item:

    500901.pdf (8007 KB, Adobe PDF)

