    Please use this identifier to cite or link to this item: https://nccur.lib.nccu.edu.tw/handle/140.119/159294


    Title: 運用 DISARM 框架之代理人多臂賭博機演算法於資訊操作的證據探勘
    Evidence Discovery in Information Operations via Agent-Based Multi-Armed Bandits Using the DISARM Framework
    Authors: 曾仲毅
    Zeng, Chung-Yi
    Contributors: 沈錳坤
    Shan, Man-Kwan
    曾仲毅
    Zeng, Chung-Yi
Keywords: Information Operations
Disinformation
DISARM Framework
Multi-Armed Bandits
Large Language Models
    Date: 2025
    Issue Date: 2025-09-01 16:18:49 (UTC+8)
Abstract: Generative artificial intelligence has turbocharged information operation (IO) campaigns, enabling adversaries to fabricate persuasive falsehoods faster than human analysts can react. Countering this threat requires automating the investigative process. Large language models (LLMs) are a double-edged sword: the same technology that manufactures disinformation can also power autonomous LLM agents. Yet their reasoning must be steered so that outputs align with expert tradecraft and rapidly surface evidence of manipulation.
    This thesis presents a self-directed evidence discovery framework that operationalizes the DISARM taxonomy, an open-source, analyst-curated catalogue of IO attack techniques adopted by practitioners for its fine-grained vocabulary. Building on DISARM's structure ensures that automated findings map cleanly onto terminology already used in professional workflows. On top of this foundation, agents target high-level attack techniques and translate them into concrete, verifiable evidence discovery tasks, allowing the system to prioritize where to look and to evaluate each round's findings to guide subsequent evidence discovery.
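The iterative prioritization described above resembles a classic multi-armed bandit loop. As a minimal sketch only (not the thesis's actual implementation), UCB1 could select which DISARM technique to investigate next; the technique IDs and the random stand-in for evidence scoring below are hypothetical placeholders:

```python
import math
import random

# Hypothetical: treat a few DISARM technique IDs as bandit arms.
ARMS = ["T0023", "T0068", "T0084"]

def ucb1_select(counts, rewards, t):
    """Pick the arm maximizing mean reward plus the UCB1 exploration bonus."""
    for i, n in enumerate(counts):
        if n == 0:  # play every arm once before applying the bound
            return i
    return max(
        range(len(counts)),
        key=lambda i: rewards[i] / counts[i] + math.sqrt(2 * math.log(t) / counts[i]),
    )

def run(n_rounds=25, seed=0):
    rng = random.Random(seed)
    counts = [0] * len(ARMS)
    rewards = [0.0] * len(ARMS)
    for t in range(1, n_rounds + 1):
        i = ucb1_select(counts, rewards, t)
        # Stand-in for "run an evidence-discovery task and score the finding";
        # the real system would score an LLM agent's evidence instead.
        reward = rng.random()
        counts[i] += 1
        rewards[i] += reward
    return counts

print(run())
```

Arms with higher average evidence quality are pulled more often, while the logarithmic bonus keeps occasionally revisiting neglected techniques.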
    When applied to YouTube videos related to IO in the 2024 Taiwanese presidential election, the framework uncovered 12 high-quality evidence items within 25 iterations, outperforming a weighted-random baseline (9 items) and achieving Cohen's κ = 0.65 agreement with human analysts. These results demonstrate that DISARM can be automated by adaptive, LLM-driven agents, offering IO researchers a scalable and auditable assistant that narrows the widening gap between rapidly evolving IO tactics and limited analytic capacity.
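Agreement with the analysts is reported as Cohen's κ, which corrects raw agreement for chance. A small self-contained sketch of the statistic; the agent/analyst labels here are hypothetical, not the thesis's data:

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two raters labeling the same items (nominal labels)."""
    assert len(rater_a) == len(rater_b) and rater_a
    n = len(rater_a)
    # Observed agreement: fraction of items with identical labels.
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Chance agreement: expected overlap given each rater's label frequencies.
    ca, cb = Counter(rater_a), Counter(rater_b)
    p_e = sum(ca[label] * cb[label] for label in set(ca) | set(cb)) / (n * n)
    return (p_o - p_e) / (1 - p_e)

# Toy example: agent vs. analyst judging evidence items as valid/invalid.
agent   = ["valid", "valid", "invalid", "valid", "invalid", "valid"]
analyst = ["valid", "invalid", "invalid", "valid", "invalid", "valid"]
print(round(cohens_kappa(agent, analyst), 2))  # → 0.67
```

Values above roughly 0.6 are conventionally read as substantial agreement, which is how the reported κ = 0.65 should be interpreted.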
    Description: Master's thesis
    National Chengchi University
    Executive Master's Program in Computer Science (資訊科學系碩士在職專班)
    111971016
    Source URI: http://thesis.lib.nccu.edu.tw/record/#G0111971016
    Data Type: thesis
    Appears in Collections: [資訊科學系碩士在職專班] Theses

    Files in This Item:

    File: 101601.pdf  Size: 1934Kb  Format: Adobe PDF
