    Please use this identifier to cite or link to this item: https://nccur.lib.nccu.edu.tw/handle/140.119/150225


    Title: 應用情感分析在台灣立法院委員會會議發言
    Application of Sentiment Analysis in the Committee Speeches of Taiwan Legislative Yuan Members
    Authors: 蔡佳穎
    Tsai, Chia-Ying
    Contributors: 蔡炎龍 (Tsai, Yen-Lung)
    邱訪義 (Chiou, Fang-Yi)
    蔡佳穎 (Tsai, Chia-Ying)
    Keywords: 自然語言處理 (NLP)
    BERT
    情感分析 (Sentiment Analysis)
    部門聲譽 (Agency Reputation)
    議會發言 (Parliamentary Speeches)
    Date: 2024
    Issue Date: 2024-03-01 13:59:38 (UTC+8)
    Abstract: Transformer的架構在自然語言處理領域中具有重要的貢獻,其自注意機制和多頭注意力機制的設計使模型能夠更好地捕捉句子中的語義資訊。例如,BERT和GPT等模型均採用了Transformer的架構。在本文中,我們採用了 BERT模型,針對台灣立法委員在委員會會議中對各個部門的質詢發言進行情感分析。透過對這些發言的分類,我們統整了不同情感的數量後,再去計算負面情感的比率,以深入分析不同部門在四年期間聲譽的變化情況。
    The Transformer architecture has made significant contributions to the field of natural language processing: its self-attention and multi-head attention mechanisms allow models to capture semantic information within sentences more effectively, and models such as BERT and GPT are built on this architecture. In this study, we used the BERT model to perform sentiment analysis on parliamentary speeches by Taiwanese legislators during committee meetings, focusing on interpellations directed at various government agencies. By categorizing these speeches, we aggregated the counts of each sentiment class and computed the ratio of negative sentiment, offering an in-depth analysis of how the reputations of different agencies changed over a four-year period.
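    The aggregation step described in the abstract — tallying the sentiment labels predicted by the fine-tuned BERT classifier and computing each agency's negative-sentiment ratio — can be sketched as follows. This is a minimal illustration, not the thesis code: the function name, label strings, and example agencies are hypothetical, and the speeches are assumed to have already been classified.

    ```python
    from collections import Counter, defaultdict

    def negative_ratio_by_agency(labeled_speeches):
        """Compute the share of negative speeches per agency.

        labeled_speeches: iterable of (agency, sentiment) pairs, where
        sentiment is a label ('positive', 'neutral', or 'negative')
        already predicted by the sentiment classifier.
        """
        counts = defaultdict(Counter)
        for agency, sentiment in labeled_speeches:
            counts[agency][sentiment] += 1
        # Counter returns 0 for missing labels, so agencies with no
        # negative speeches get a ratio of 0.0 rather than an error.
        return {
            agency: c["negative"] / sum(c.values())
            for agency, c in counts.items()
        }

    # Hypothetical pre-classified interpellation records
    data = [
        ("Ministry of Education", "negative"),
        ("Ministry of Education", "positive"),
        ("Ministry of Education", "negative"),
        ("Ministry of Health", "neutral"),
        ("Ministry of Health", "negative"),
    ]
    ratios = negative_ratio_by_agency(data)
    ```

    Grouping by (agency, year) instead of agency alone would yield the per-year reputation trajectory the abstract describes.
    
    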
    Reference: [1] L Jason Anastasopoulos and Andrew B Whitford. Machine learning for public administration research, with application to organizational reputation. Journal of Public Administration Research and Theory, 29(3):491–510, 2019.
    [2] Luca Bellodi. A dynamic measure of bureaucratic reputation: New data for new theory. American Journal of Political Science, 67(4):880–897, 2023.
    [3] Leo Breiman. Random forests. Machine learning, 45:5–32, 2001.
    [4] Daniel P Carpenter and George A Krause. Reputation and public administration. Public Administration Review, 72(1):26–32, 2012.
    [5] Kakia Chatsiou and Slava Jankin Mikhaylov. Deep learning for political science. arXiv preprint arXiv:2005.06540, 2020.
    [6] Corinna Cortes and Vladimir Vapnik. Support-vector networks. Machine Learning, 20:273–297, 1995.
    [7] Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. BERT: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805, 2018.
    [8] Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, et al. An image is worth 16x16 words: Transformers for image recognition at scale. arXiv preprint arXiv:2010.11929, 2020.
    [9] Niels D Goet. Measuring polarization with text analysis: Evidence from the UK House of Commons, 1811–2015. Political Analysis, 27(4):518–539, 2019.
    [10] Anmol Gulati, James Qin, Chung-Cheng Chiu, Niki Parmar, Yu Zhang, Jiahui Yu, Wei Han, Shibo Wang, Zhengdong Zhang, Yonghui Wu, et al. Conformer: Convolution-augmented transformer for speech recognition. arXiv preprint arXiv:2005.08100, 2020.
    [11] Sepp Hochreiter and Jürgen Schmidhuber. Long short-term memory. Neural computation, 9(8):1735–1780, 1997.
    [12] Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. Roberta: A robustly optimized bert pretraining approach. arXiv preprint arXiv:1907.11692, 2019.
    [13] Andrew McCallum, Kamal Nigam, et al. A comparison of event models for naive bayes text classification. In AAAI-98 workshop on learning for text categorization, volume 752, pages 41–48. Madison, WI, 1998.
    [14] Tomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. Efficient estimation of word representations in vector space. arXiv preprint arXiv:1301.3781, 2013.
    [15] Tomas Mikolov, Quoc V Le, and Ilya Sutskever. Exploiting similarities among languages for machine translation. arXiv preprint arXiv:1309.4168, 2013.
    [16] Viet-An Nguyen, Jordan Boyd-Graber, Philip Resnik, and Kristina Miler. Tea party in the house: A hierarchical ideal point topic model and its application to republican legislators in the 112th congress. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 1438–1448, 2015.
    [17] Sjors Overman, Madalina Busuioc, and Matthew Wood. A multidimensional reputation barometer for public agencies: A validated instrument. Public Administration Review, 80(3):415–425, 2020.
    [18] Jeffrey Pennington, Richard Socher, and Christopher D Manning. Glove: Global vectors for word representation. In Proceedings of the 2014 conference on empirical methods in natural language processing (EMNLP), pages 1532–1543, 2014.
    [19] Andrew Peterson and Arthur Spirling. Classification accuracy as a substantive quantity of interest: Measuring polarization in Westminster systems. Political Analysis, 26(1):120–128, 2018.
    [20] Daniel Preoţiuc-Pietro, Ye Liu, Daniel Hopkins, and Lyle Ungar. Beyond binary labels: Political ideology prediction of twitter users. In Proceedings of the 55th annual meeting of the association for computational linguistics (volume 1: long papers), pages 729–740, 2017.
    [21] Xin Rong. word2vec parameter learning explained. arXiv preprint arXiv:1411.2738, 2014.
    [22] David E Rumelhart, Geoffrey E Hinton, and Ronald J Williams. Learning representations by back-propagating errors. Nature, 323(6088):533–536, 1986.
    [23] James Sanders, Giulio Lisi, and Cheryl Schonhardt-Bailey. Themes and topics in parliamentary oversight hearings: A new direction in textual data analysis. Statistics, Politics and Policy, 8(2):153–194, 2017.
    [24] Alper Kursat Uysal. An improved global feature selection scheme for text classification. Expert Systems with Applications, 43:82–92, 2016.
    [25] Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. Attention is all you need. Advances in neural information processing systems, 30, 2017.
    [26] Qian Zhang, Han Lu, Hasim Sak, Anshuman Tripathi, Erik McDermott, Stephen Koo, and Shankar Kumar. Transformer transducer: A streamable speech recognition model with transformer encoders and RNN-T loss. In ICASSP 2020 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 7829–7833. IEEE, 2020.
    Description: Master's thesis
    國立政治大學 (National Chengchi University)
    應用數學系 (Department of Applied Mathematics)
    110751002
    Source URI: http://thesis.lib.nccu.edu.tw/record/#G0110751002
    Data Type: thesis
    Appears in Collections: [應用數學系] 學位論文 (Department of Applied Mathematics: Theses)

    Files in This Item:

    File: 100201.pdf | Size: 4486Kb | Format: Adobe PDF

