Please use this permanent URL to cite or link to this item: https://nccur.lib.nccu.edu.tw/handle/140.119/159459


Title: 生成式AI新聞可信度比較研究:基於台灣讀者對兩岸關係感知態度
The Credibility of AI-Generated News in Taiwan's Cross-Strait Discourse: A Study on Readers with Different Attitudes toward Cross-Strait Relationships
Author: Tsai, Hsin-Yu (蔡欣妤)
Contributors: Hou, Tsung-Yu (侯宗佑); Tsai, Hsin-Yu (蔡欣妤)
Keywords: Generative artificial intelligence
AI news
Political orientation
Credibility
Cross-strait issues
Artificial intelligence
News credibility
Trustworthiness
Political communication
Taiwan-China relations
Automated journalism
Date: 2025
Upload time: 2025-09-01 17:09:59 (UTC+8)
Abstract: In an era when generative news reporting is increasingly common, how AI-generated news content shapes audiences' judgments of news credibility, especially in highly politically sensitive contexts, has become a pressing question. This study adopts a mixed-methods design, combining a 2×2×2 experiment with follow-up interviews to examine how the actual author of cross-strait political news (AI or human journalist), the labeled author, political-stance congruence, and participants' attitudes toward China affect perceived news credibility.
The results show that credibility judgments were significantly affected by the labeled author rather than the actual author. News labeled as AI-written was generally judged less credible, whereas news labeled as written by a human journalist received higher trust. Although congruence between an article's political stance and a reader's own political orientation made no significant difference to credibility, participants' overall attitude toward China moderated the relationship between labeled authorship and credibility: those with anti-China attitudes placed greater trust in news labeled as human-written.
The qualitative data further revealed that participants relied heavily on author labels when evaluating news and held cautious but varied views of AI's capacity to handle political news. Some participants felt AI might offer a relatively neutral voice in Taiwan's political media environment, while others noted that the data sources and algorithmic logic underlying AI ultimately reflect human ideologies and biases, making full objectivity unattainable.
This study extends the understanding of AI news in political communication, highlighting that audiences' trust in human-journalist labels remains central to the construction of news credibility, and that the credibility of AI news on politically sensitive issues is still constrained by the interplay of social context and audiences' political identities.
In the age of automated journalism, the increasing use of AI-generated content raises important questions about credibility perception, particularly in politically sensitive contexts. This mixed-methods study examines how news authorship (AI vs. human), labeled authorship, political stance similarity, and participants' attitudes toward China affect perceived news credibility. Using a 2×2×2 between-subjects experiment and follow-up interviews, we found that labeled authorship had a significant influence on credibility perceptions, whereas actual authorship did not. Articles labeled as written by AI were perceived as less credible than those labeled as written by human journalists. While political stance similarity did not significantly affect credibility ratings, participants' attitudes toward China moderated the effect of labeled authorship. Specifically, those with anti-China attitudes perceived human-labeled news as more credible, whereas those with pro-China attitudes were more accepting of AI-labeled news. Qualitative findings further revealed that participants relied on author labels when making credibility judgments and expressed mixed views on AI's capacity to handle ideologically charged topics. This study contributes to the understanding of AI's role in political news by highlighting the persistent influence of perceived human authorship on credibility, and how that influence varies with readers' political attitudes.
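The 2×2×2 between-subjects design and the reported moderation pattern can be sketched with simulated data. All effect sizes, the 7-point-style rating scale, and the cell size of 50 below are illustrative assumptions, not figures from the thesis; the sketch only mirrors the qualitative shape of the findings (labels matter, actual authorship does not, attitude toward China moderates the label effect).

```python
import random
import statistics

random.seed(42)

def simulated_rating(labeled_ai, actual_ai, anti_china):
    # Hypothetical credibility rating, built to mirror the reported
    # pattern: the author label matters, actual authorship does not,
    # and anti-China attitude strengthens the human-label advantage.
    # The numeric effect sizes are invented for illustration.
    base = 4.5
    if labeled_ai:
        base -= 0.6            # penalty for a "written by AI" label
        if anti_china:
            base -= 0.4        # moderation by attitude toward China
    # actual_ai has no effect here, matching the null finding
    return base + random.gauss(0, 0.8)

# 50 simulated participants in each of the 8 cells of the 2x2x2 design
cell_means = {}
for labeled_ai in (False, True):
    for actual_ai in (False, True):
        for anti_china in (False, True):
            ratings = [simulated_rating(labeled_ai, actual_ai, anti_china)
                       for _ in range(50)]
            cell_means[(labeled_ai, actual_ai, anti_china)] = statistics.mean(ratings)

# Main effect of labeled authorship, collapsing the other two factors
human_labeled = statistics.mean(m for cell, m in cell_means.items() if not cell[0])
ai_labeled = statistics.mean(m for cell, m in cell_means.items() if cell[0])
print(f"human-labeled mean: {human_labeled:.2f}  AI-labeled mean: {ai_labeled:.2f}")
```

In a real analysis this contrast would be tested with a factorial ANOVA or a regression with interaction terms, where the moderation appears as a significant label × attitude interaction.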
Description: Master's thesis
National Chengchi University
Master's Program in Global Communication and Innovation Technology
112ZM1020
Source: http://thesis.lib.nccu.edu.tw/record/#G0112ZM1020
Data type: thesis
Appears in collections: [Master's Program in Global Communication and Innovation Technology] Theses

Files in this item:

102001.pdf (4301 KB, Adobe PDF)


All items in the NCCU Institutional Repository are protected by copyright.



Copyright Announcement
1. The digital content of this website is part of the National Chengchi University Institutional Repository. It provides free access for academic research and public education on a non-commercial basis. Please use it in a proper and reasonable manner and respect the rights of copyright owners. For commercial use, please obtain authorization from the copyright owner in advance.

2. The NCCU Institutional Repository is maintained so as to protect the interests of copyright owners. If you believe that any material on this website infringes copyright, please contact the site staff (nccur@nccu.edu.tw); the work will be removed from the repository while the claim is investigated.