National Chengchi University Institutional Repository (NCCUR): Item 140.119/157856
    Please use this identifier to cite or link to this item: https://nccur.lib.nccu.edu.tw/handle/140.119/157856


    Title: Human-AI Collaboration Cycle in the Development Stage of an AI-enabled System (AI賦能系統開發階段之人智協作循環)
    Authors: Weng, Pi-Yang (翁必揚)
    Contributors: Tsaih, Rua-Huan (蔡瑞煌); Chang, Hsin-Lu (張欣綠); Weng, Pi-Yang (翁必揚)
    Keywords: Data engagement
    Domain expert
    Explainable artificial intelligence (XAI)
    Comprehensibility
    Trust
    Date: 2025
    Issue Date: 2025-07-01 15:32:34 (UTC+8)
    Abstract: In this study, we present a framework termed the Human-AI Collaboration Cycle, which encompasses four components: data engagement, user comprehensibility, user trust, and user adoption. The framework positions data engagement as an essential first step in the development stage of an AI system and indicates the necessity of collaboration between domain experts and AI systems. We propose four hypotheses: (H1) AI systems with XAI achieve a greater enhancement of user trust than those without XAI; (H2) for users with deeper data engagement, model interpretability leads to a higher level of user comprehensibility, which in turn enhances user trust; (H3) for users with deeper data engagement, instance explainability leads to a higher level of user comprehensibility, which in turn enhances user trust; and (H4) for users with deeper data engagement, AI systems that provide both model interpretability and instance explainability lead to a higher level of user comprehensibility, and hence greater user trust, than AI systems implementing only one XAI approach in isolation. To test these hypotheses, we carried out a field experiment in a hospital. The results show that XAI is a useful tool for facilitating the enhancement of user trust in an AI-enabled system, and that for users with deeper data engagement, XAI does improve the interpretability of the model and the explainability of its outputs, which in turn enhances user trust. We conclude that data engagement is essential and that the Human-AI Collaboration Cycle framework can serve as a methodology for continuously enhancing user trust in an AI-enabled system.
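    The abstract distinguishes two XAI notions: model interpretability (a global account of how a model weighs its inputs) and instance explainability (a local account of why one particular case received its prediction). The Python sketch below is a minimal illustration of that distinction only, not the thesis implementation; it assumes a scikit-learn workflow, and the breast-cancer dataset and logistic-regression model are stand-ins chosen for self-containment, not the data or model studied in the thesis.

# Illustrative sketch only: contrasts "model interpretability" (global feature
# weights of an inherently interpretable model) with "instance explainability"
# (a local explanation of one prediction). The breast-cancer dataset is a
# stand-in for clinical data; it is not the data used in the thesis.
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

scaler = StandardScaler().fit(X_train)
model = LogisticRegression(max_iter=1000).fit(scaler.transform(X_train), y_train)

# Model interpretability: the global weight each feature carries in the model,
# sorted by absolute magnitude.
global_weights = sorted(zip(X.columns, model.coef_[0]),
                        key=lambda w: abs(w[1]), reverse=True)
print("Top global features:", global_weights[:5])

# Instance explainability: per-feature contribution (weight * standardized
# value) to the log-odds of a single test case, i.e. why *this* prediction.
case = scaler.transform(X_test.iloc[[0]])[0]
local_contrib = sorted(zip(X.columns, model.coef_[0] * case),
                       key=lambda c: abs(c[1]), reverse=True)
print("Top local contributions for this case:", local_contrib[:5])

    In this linear setting the local contributions are exact; for the black-box models that the XAI literature typically targets, post-hoc methods such as LIME or SHAP would play the analogous local-explanation role.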
    Description: Doctoral dissertation
    National Chengchi University
    Department of Management Information Systems
    109356507
    Source URI: http://thesis.lib.nccu.edu.tw/record/#G0109356507
    Data Type: thesis
    Appears in Collections:[Department of MIS] Theses

    Files in This Item:

    650701.pdf (2484 KB, Adobe PDF)

