    Please use this identifier to cite or link to this item: https://nccur.lib.nccu.edu.tw/handle/140.119/148508


    Title: The effect of anthropomorphism and attribution plausibility on human trust in virtual agent
    Authors: Tsai, Jia-Shiun
    Contributors: Yihsiu Chen
    Po-Liang Chen

    Tsai, Jia-Shiun
    Keywords: Anthropomorphism
    Plausibility
    Attribution
    Virtual Agent
    Trust
    Human-AI Interaction
    Explainable AI
    Date: 2023
    Issue Date: 2023-12-01 10:49:35 (UTC+8)
    Abstract: Human-Computer Interaction (HCI) is becoming more complex with the rapid development of computing machinery. Diversified interfaces such as chatbots, robots, and virtual agents are changing how humans interact with systems. Previously, humans used computers as tools that remained stable and largely predictable. Today, with Artificial Intelligence (AI), computers often take the form of an 'agent' with purpose, motivation, and intentions, and humans begin to treat computers as teammates while collaborating. Trust between teammates is essential for team building, and thus also vital to Human-Agent Interaction (HAI). Explainable AI (XAI) research aims to improve the trustworthiness and transparency of AI-based systems by allowing human users to understand the behavior of their AI partner. At the same time, humans also infer the implicit causes behind computers' explicit behaviors. This process of understanding and interpreting observed activity is called "attribution"; it is a natural human ability that has long been studied in the social sciences.

    Can AI exhibit a human-like attribution ability? Prior studies have shown that human-like qualities affect human trust in AI. However, human-like qualities can be discussed from many perspectives: human-like appearance, human-like body movement, a human-like mental model, and so on. Will humans trust an AI more if it can perform an implicit human-like ability such as attribution? Further, how are different human-like qualities linked? When an AI agent looks like a human, is it expected to perform human-like abilities better? This study addressed these questions with an online experiment built around two factors: 1. agent appearance (human-like or machine-like) and 2. AI attribution ability (plausible or implausible). The main setup was an AI Judge trained to allocate responsibility for car accidents; participants reviewed the AI Judge's decisions and the accompanying explanations, then rated its performance and their own trust in it.
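    The 2x2 between-subjects design described above can be sketched as follows. This is a minimal illustrative sketch, not the study's analysis code or data: all trust ratings below are invented for illustration, and the simple-effect/interaction contrast is one common way to probe an expectation-violation pattern in a factorial design.

    ```python
    # Hypothetical sketch of the 2x2 design: factor A = agent appearance
    # (human-like vs. machine-like), factor B = attribution plausibility
    # (plausible vs. implausible). Ratings are invented, not the study's data.
    from statistics import mean

    # Illustrative mean trust ratings per condition (e.g., on a 1-7 scale).
    trust = {
        ("human-like", "plausible"):     [6.1, 5.8, 6.3],
        ("human-like", "implausible"):   [2.9, 3.1, 2.7],
        ("machine-like", "plausible"):   [5.6, 5.4, 5.9],
        ("machine-like", "implausible"): [3.8, 4.0, 3.6],
    }

    cell_means = {cond: mean(vals) for cond, vals in trust.items()}

    # Simple effect of plausibility within each appearance: how much trust
    # is lost when the attribution is implausible.
    penalty = {
        app: cell_means[(app, "plausible")] - cell_means[(app, "implausible")]
        for app in ("human-like", "machine-like")
    }

    # Interaction contrast: a positive value means the implausibility penalty
    # is larger for the human-like agent (the expectation-violation pattern).
    interaction = penalty["human-like"] - penalty["machine-like"]
    ```

    With these made-up numbers the human-like agent pays a larger trust penalty for implausible attributions, which is the shape of the interaction the abstract reports.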

    It was found that the tendency to anthropomorphize leads humans to expect AI to demonstrate human-like abilities. Trust decreased when the AI demonstrated an implausible attribution ability. Further, a human-form appearance raised human expectations and caused trust to drop more, and faster, when those expectations were not met. The study frames Human-AI Interaction (HAII) research from the human user's perspective by incorporating concepts from social psychology, bridging HCI and social science research.
    Reference: Ahmad, M. (2021). Software as a medical device: Regulating ai in healthcare via responsible ai. Proceedings of the 27th ACM SIGKDD Conference on Knowledge Discovery Data Mining.
    Al-Shayea, Q. K. (2011). Artificial neural networks in medical diagnosis. International Journal of Computer Science Issues, 8(2):150–154.
    Alavi, M. (2001). Review: Knowledge management and knowledge management systems: Conceptual foundations and research issues. MIS Q., 25:107–136.
    Aletras, N., Tsarapatsanis, D., Preoţiuc-Pietro, D., and Lampos, V. (2016). Predicting judicial decisions of the european court of human rights: A natural language processing perspective. PeerJ Computer Science, 2:e93.
    Arrieta, A. B., Díaz-Rodríguez, N., Del Ser, J., Bennetot, A., Tabik, S., Barbado, A., García, S., Gil-López, S., Molina, D., Benjamins, R., et al. (2020). Explainable artificial intelligence (xai): Concepts, taxonomies, opportunities and challenges toward responsible ai. Information fusion, 58:82–115.
    Asan, O. (2021). Research trends in artificial intelligence applications in human factors health care: Mapping review. JMIR Human Factors.
    Assefi, M., Liu, G., Wittie, M. P., and Izurieta, C. (2015). An experimental evaluation of apple siri and google speech recognition. Proccedings of the 2015 ISCA SEDE, 118.
    Atkinson, K. (2020). Explanation in ai and law: Past, present and future. Artif. Intell., 289:103387.
    Auernhammer, J. (2020a). Human-centered ai: The role of human-centered design research in the development of ai.
    Auernhammer, J. (2020b). Human-centered ai: The role of human-centered design research in the development of ai.
    Badahdah, A. M. and Alkhder, O. H. (2006). Helping a friend with aids: A test of weiner’s attributional theory in kuwait. Illness, Crisis & Loss, 14(1):43–54.
    Bahrini, A., Khamoshifar, M., Abbasimehr, H., Riggs, R. J., Esmaeili, M., Majdabad- kohne, R. M., and Pasehvar, M. (2023). Chatgpt: Applications, opportunities, and threats. In 2023 Systems and Information Engineering Design Symposium (SIEDS), pages 274–279. IEEE.
    Bansal, G., Nushi, B., Kamar, E., Weld, D. S., Lasecki, W. S., and Horvitz, E. (2019). Updates in human-ai teams: Understanding and addressing the performance/compatibility tradeoff. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 33, pages 2429–2437.
    Bartneck, C., Bleeker, T., Bun, J., Fens, P., and Riet, L. (2010). The influence of robot anthropomorphism on the feelings of embarrassment when interacting with robots. Pal- adyn, 1(2):109–115.
    Berg, A., O’Connor, M., and Cruz, M. T. (2021). Keyword transformer: A self-attention model for keyword spotting. arXiv preprint arXiv:2104.00769.
    Bergmann, K., Eyssel, F., and Kopp, S. (2012). A second chance to make a first impression? how appearance and nonverbal behavior affect perceived warmth and competence of virtual agents over time. In International conference on intelligent virtual agents, pages 126–138. Springer.
    Bigley, G. A. and Pearce, J. L. (1998). Straining for shared meaning in organization science: Problems of trust and distrust. Academy of management review, 23(3):405– 421.
    Blind, P. K. (2007). Building trust in government in the twenty-first century: Review of literature and emerging issues. In 7th global forum on reinventing government building trust in government, volume 2007, pages 26–29. UNDESA Vienna.
    Boden, M. (1980). Artificial intelligence and natural man. Synthese, 43(3).
    Bosman, K., Bosse, T., and Formolo, D. (2019). Virtual agents for professional social skills training: An overview of the state-of-the-art. In Intelligent Technologies for Interactive Entertainment: 10th EAI International Conference, INTETAIN 2018, Guimarães, Portugal, November 21-23, 2018, Proceedings 10, pages 75–84. Springer.
    Bromiley, P. and Cummings, L. L. (1989). Transactions costs in organizations with trust.
    Number 128. Strategic Management Research Center, University of Minnesota.
    Bruzzese, T., Gao, I., Dietz, G., Ding, C., and Romanos, A. (2020). Effect of confidence indicators on trust in ai-generated profiles. In Extended Abstracts of the 2020 CHI Conference on Human Factors in Computing Systems, pages 1–8.
    Buijsman, S. (2021). Why and how should we explain ai? In ECCAI Advanced Course on Artificial Intelligence, pages 196–215. Springer.
    Burke, R., Felfernig, A., and Göker, M. H. (2011). Recommender systems: An overview.
    Ai Magazine, 32(3):13–18.
    Cambria, E. and White, B. (2014). Jumping nlp curves: A review of natural language processing research. IEEE Computational intelligence magazine, 9(2):48–57.
    Camburu, O.-M., Rocktäschel, T., Lukasiewicz, T., and Blunsom, P. (2018). e-snli: Natural language inference with natural language explanations. Advances in Neural Information Processing Systems, 31.
    Cao, Y., Li, S., Liu, Y., Yan, Z., Dai, Y., Yu, P. S., and Sun, L. (2023). A comprehensive survey of ai-generated content (aigc): A history of generative ai from gan to chatgpt. arXiv preprint arXiv:2303.04226.
    Carmeli, A., Tishler, A., and Edmondson, A. C. (2012). Ceo relational leadership and strategic decision quality in top management teams: The role of team trust and learning from failure. Strategic Organization, 10(1):31–54.
    Chalkidis, I., Androutsopoulos, I., and Aletras, N. (2019). Neural legal judgment prediction in english. arXiv preprint arXiv:1906.02059.
    Chan, W. T. Y. and Leung, C. H. (2021). Mind the gap: Discrepancy between customer expectation and perception on commercial chatbots usage. Asian Journal of Empirical Research, 11(1):1–10.
    Chang, S.-L., Chen, L.-S., Chung, Y.-C., and Chen, S.-W. (2004). Automatic license plate recognition. IEEE transactions on intelligent transportation systems, 5(1):42–53.
    Chen, B. M., Stremitzer, A., and Tobia, K. (2021). Having your day in robot court. UCLA School of Law, Public Law Research Paper, (21-20).
    Chen, L. (2020). Artificial intelligence in education: A review. IEEE Access.
    Chen, Y.-J., Wu, C.-H., Chen, Y.-M., Li, H.-Y., and Chen, H.-K. (2017). Enhancement of fraud detection for narratives in annual reports. International Journal of Accounting Information Systems, 26:32–45.
    Chi, N. T. K. and Hoang Vu, N. (2023). Investigating the customer trust in artificial intelligence: The role of anthropomorphism, empathy response, and interaction. CAAI Transactions on Intelligence Technology, 8(1):260–273.
    Cho, J.-H., Chan, K., and Adali, S. (2015). A survey on trust modeling. ACM Computing Surveys (CSUR), 48(2):1–40.
    Choi, J. and Nazareth, D. L. (2014). Repairing trust in an e-commerce and security context: an agent-based modeling approach. Information Management & Computer Security, 22(5):490–512.
    Choi, S., Liu, S. Q., and Mattila, A. S. (2019). “how may i help you?” says a robot: Examining language styles in the service encounter. International Journal of Hospitality
    Management, 82:32–38.
    Choudhury, A. and Shamszare, H. (2023). Investigating the impact of user trust on the adoption and use of chatgpt: Survey analysis. Journal of Medical Internet Research, 25:e47184.
    Collins, A. and Michalski, R. (1989). The logic of plausible reasoning: A core theory.
    cognitive science, 13(1):1–49.
    Connell, L. and Keane, M. T. (2006). A model of plausibility. Cognitive Science, 30(1):95– 120.
    Crowell, C. R., Deska, J. C., Villano, M., Zenk, J., and Roddy Jr, J. T. (2019). Anthropomorphism of robots: study of appearance and agency. JMIR human factors, 6(2):e12629.
    Dale, R. (2019). Law and word order: Nlp in legal tech. Natural Language Engineering, 25(1):211–217.
    De Jong, B. A., Dirks, K. T., and Gillespie, N. (2016). Trust and team performance: A meta-analysis of main effects, moderators, and covariates. Journal of applied psychology, 101(8):1134.
    de Laat, P. B. (2021). Companies committed to responsible ai: From principles towards implementation and regulation? Philosophy & technology, 34:1135–1193.
    De Visser, E. J., Pak, R., and Shaw, T. H. (2018). From ‘automation’ to ‘autonomy’: the importance of trust repair in human–machine interaction. Ergonomics, 61(10):1409–1427.
    DeChurch, L. A. and Mesmer-Magnus, J. R. (2010). The cognitive underpinnings of effective teamwork: a meta-analysis. Journal of applied psychology, 95(1):32.
    Dixon, S. R. and Wickens, C. D. (2006). Automation reliability in unmanned aerial vehicle control: A reliance-compliance model of automation dependence in high workload. Human factors, 48(3):474–486.
    Dubber, M. D., Pasquale, F., and Das, S. (2020). The Oxford handbook of ethics of AI. Oxford Handbooks.
    Duffy, B. R. (2003). Anthropomorphism and the social robot. Robotics and Autonomous Systems, 42(3):177–190. Socially Interactive Robots.
    Dunn, P. (2000). The importance of consistency in establishing cognitive-based trust: a laboratory experiment. Teaching Business Ethics, 4:285–306.
    Dzindolet, M. T., Peterson, S. A., Pomranky, R. A., Pierce, L. G., and Beck, H. P. (2003). The role of trust in automation reliance. International journal of human-computer stud- ies, 58(6):697–718.
    Elworthy, D. (2000). Question answering using a large nlp system. In TREC.
    Epley, N., Waytz, A., Akalis, S., and Cacioppo, J. T. (2008). When we need a human: Motivational determinants of anthropomorphism. Social cognition, 26(2):143–155.
    Epley, N., Waytz, A., and Cacioppo, J. T. (2007). On seeing human: a three-factor theory of anthropomorphism. Psychological review, 114(4):864.
    Erdem, F., Ozen, J., and Atsan, N. (2003). The relationship between trust and team performance. Work study, 52(7):337–340.
    Esterwood, C. and Robert, L. P. (2022). A literature review of trust repair in hri. In 2022 31st IEEE International Conference on Robot and Human Interactive Communication (RO-MAN), pages 1641–1646. IEEE.
    Ezer, N., Bruni, S., Cai, Y., Hepenstal, S. J., Miller, C. A., and Schmorrow, D. D. (2019). Trust engineering for human-ai teams. In Proceedings of the human factors and ergonomics society annual meeting, volume 63, pages 322–326. SAGE Publications Sage CA: Los Angeles, CA.
    Falcone, R. and Castelfranchi, C. (2004). Trust dynamics: How trust is influenced by direct experiences and by trust itself. In Proceedings of the Third International Joint Conference on Autonomous Agents and Multiagent Systems, 2004. AAMAS 2004., pages 740–747. IEEE.
    Feder, A., Keith, K. A., Manzoor, E., Pryzant, R., Sridhar, D., Wood-Doughty, Z., Eisen- stein, J., Grimmer, J., Reichart, R., Roberts, M. E., et al. (2021). Causal inference in natural language processing: Estimation, prediction, interpretation and beyond. arXiv preprint arXiv:2109.00725.
    Ferràs-Hernández, X. (2018). The future of management in a world of electronic brains.
    Journal of Management Inquiry, 27(2):260–263.
    Flowers, J. C. (2019). Strong and weak ai: Deweyan considerations. In AAAI Spring Symposium: Towards Conscious AI Systems, volume 22877.
    Fulmer, C. A. and Gelfand, M. J. (2013). How do i trust thee? dynamic trust patterns and their individual and social contextual determinants. Models for intercultural collaboration and negotiation, pages 97–131.
    Galitsky, B. (2013). Machine learning of syntactic parse trees for search and classification of text. Engineering Applications of Artificial Intelligence, 26(3):1072–1091.
    Gaur, Y., Lasecki, W. S., Metze, F., and Bigham, J. P. (2016). The effects of automatic speech recognition quality on human transcription latency. In Proceedings of the 13th International Web for All Conference, pages 1–8.
    Gefen, D., Benbasat, I., and Pavlou, P. (2008). A research agenda for trust in online environments. Journal of Management Information Systems, 24(4):275–286.
    Glikson, E. and Woolley, A. W. (2020). Human trust in artificial intelligence: Review of empirical research. Academy of Management Annals, 14(2):627–660.
    Golbin, I., Rao, A. S., Hadjarian, A., and Krittman, D. (2020). Responsible ai: a primer for the legal community. In 2020 IEEE International Conference on Big Data (Big Data), pages 2121–2126. IEEE.
    Gottman, J., Gottman, J., and McNulty, M. A. (2017). The role of trust and commitment in love relationships. In Foundations for Couples’ Therapy, pages 438–452. Routledge.
    Goudey, A. and Bonnin, G. (2016). Must smart objects look human? study of the impact of anthropomorphism on the acceptance of companion robots. Recherche et Applications en Marketing (English Edition), 31(2):2–20.
    Green, S., Hurst, L., Nangle, B., Cunningham, P., Somers, F., and Evans, R. (1997). Software agents: A review. Department of Computer Science, Trinity College Dublin, Tech. Rep. TCS-CS-1997-06.
    Grosz, B. J. (1996). Collaborative systems (aaai-94 presidential address). AI magazine, 17(2):67–67.
    Gunning, D. and Aha, D. (2019). Darpa’s explainable artificial intelligence (xai) program. AI Magazine, 40(2):44–58.
    Gunning, D., Vorm, E., Wang, Y., and Turek, M. (2021). Darpa’s explainable ai (xai) program: A retrospective. Authorea Preprints.
    Guzman, A. (2020). Ontological boundaries between humans and computers and the implications for human-machine communication. Human-Machine Communication.
    Hagendorff, T. (2020). The ethics of ai ethics: An evaluation of guidelines. Minds and Machines, 30(1):99–120.
    Hakanen, M. and Soudunsaari, A. (2012). Building trust in high-performing teams. Technology Innovation Management Review, 2(6).
    Haring, K. S., Mosley, A., Pruznick, S., Fleming, J., Satterfield, K., Visser, E. J. d., Tos- sell, C. C., and Funke, G. (2019). Robot authority in human-machine teams: effects of human-like appearance on compliance. In International Conference on Human-Computer Interaction, pages 63–78. Springer.
    He, J., Baxter, S. L., Xu, J., Xu, J., Zhou, X., and Zhang, K. (2019). The practical implementation of artificial intelligence technologies in medicine. Nature medicine, 25(1):30–36.
    Hebesberger, D., Koertner, T., Gisinger, C., Pripfl, J., and Dondrup, C. (2016). Lessons learned from the deployment of a long-term autonomous robot as companion in physical therapy for older adults with dementia a mixed methods study. In 2016 11th ACM/IEEE International Conference on Human-Robot Interaction (HRI), pages 27–34. IEEE.
    Heider, F. (1958). The psychology of interpersonal relations. The Psychology of Inter-personal Relations.
    Heider, F. and Simmel, M. (1944). An experimental study of apparent behavior. The American journal of psychology, 57(2):243–259.
    Hepenstal, S. (2020). Explainable artificial intelligence: What do you need to know? pages 266–275.
    Hewitt, C. (1977). Viewing control structures as patterns of passing messages. Artificial intelligence, 8(3):323–364.
    Ho, T.-H. and Weigelt, K. (2005). Trust building among strangers. Management Science, 51(4):519–530.
    Hoeller, F., Schulz, D., Moors, M., and Schneider, F. E. (2007). Accompanying persons with a mobile robot using motion prediction and probabilistic roadmaps. In 2007 IEEE/RSJ International Conference on Intelligent Robots and Systems, pages 1260– 1265. IEEE.
    Hoff, K. A. and Bashir, M. (2015). Trust in automation: Integrating empirical evidence on factors that influence trust. Human factors, 57(3):407–434.
    Hoffmann-Riem, W. (2022). Legal technology/computational law: Preconditions, opportunities and risks. Journal of Cross-disciplinary Research in Computational Law, 1(1).
    Hohenstein, J. and Jung, M. (2020). Ai as a moral crumple zone: The effects of ai-mediated communication on attribution and trust. Computers in Human Behavior, 106:106190.
    Hupcey, J. E. and Miller, J. (2006). Community dwelling adults’perception of interpersonal trust vs. trust in health care providers. Journal of clinical nursing, 15(9):1132– 1139.
    Ingle, R. R., Fujii, Y., Deselaers, T., Baccash, J., and Popat, A. C. (2019). A scalable handwritten text recognition system. In 2019 International Conference on Document Analysis and Recognition (ICDAR), pages 17–24. IEEE.
    Ingle, S. and Phute, M. (2016). Tesla autopilot: semi autonomous driving, an uptick for future autonomy. International Research Journal of Engineering and Technology, 3(9):369–372.
    Jones, E. E. and Davis, K. E. (1965). From acts to dispositions: The attribution process in social psychology. Advances in experimental social psychology, 2:219–266.
    Jones, G. R. and George, J. M. (1998). The experience and evolution of trust: Implications for cooperation and teamwork. Academy of management review, 23(3):531–546.
    Juvina, I., Larue, O., and Hough, A. (2018). Modeling valuation and core affect in a cognitive architecture: The impact of valence and arousal on memory and decision-making. Cognitive Systems Research, 48:4–24.
    Kamar, E., Hacker, S., and Horvitz, E. (2012). Combining human and machine intelligence in large-scale crowdsourcing. In AAMAS, volume 12, pages 467–474.
    Kazim, E. and Koshiyama, A. S. (2021). A high-level overview of ai ethics. Patterns, 2(9).
    Keding, C. (2021). Managerial overreliance on ai-augmented decision-making processes: How the use of ai-based advisory systems shapes choice behavior in rd investment decisions. Technological Forecasting and Social Change, 171:120970.
    Kelley, H. H. (1967). Attribution theory in social psychology. In Nebraska symposium on motivation. University of Nebraska Press.
    Kenny, E. M. and Keane, M. T. (2019). Twin-systems to explain artificial neural networks using case-based reasoning: Comparative tests of feature-weighting methods in ann-cbr twins for xai. In Twenty-Eighth International Joint Conferences on Artifical Intelligence (IJCAI), Macao, 10-16 August 2019, pages 2708–2715.
    Kenny, E. M. and Keane, M. T. (2021). On generating plausible counterfactual and semi-factual explanations for deep learning. AAAI-21, pages 11575–11585.
    Kim, J., Merrill Jr, K., and Collins, C. (2021). Ai as a friend or assistant: The mediating role of perceived usefulness in social ai vs. functional ai. Telematics and Informatics, 64:101694.
    Kim, T. and Song, H. (2021). How should intelligent agents apologize to restore trust? interaction effects between anthropomorphism and apology attribution on trust repair. Telematics and Informatics, 61:101595.
    Knijnenburg, B. P. and Willemsen, M. C. (2016). Inferring capabilities of intelligent agents
    from their external traits. ACM Transactions on Interactive Intelligent Systems (TiiS), 6(4):1–25.
    Langer, M., Oster, D., Speith, T., Hermanns, H., Kästner, L., Schmidt, E., Sesing, A., and Baum, K. (2021). What do we want from explainable artificial intelligence (xai)?– a stakeholder perspective on xai and a conceptual model guiding interdisciplinary xai research. Artificial Intelligence, 296:103473.
    Lasecki, W., Miller, C., Sadilek, A., Abumoussa, A., Borrello, D., Kushalnagar, R., and Bigham, J. (2012). Real-time captioning by groups of non-experts. In Proceedings of the 25th annual ACM symposium on User interface software and technology, pages 23–34.
    Lawrence, V., Murray, J., Banerjee, S., Turner, S., Sangha, K., Byng, R., Bhugra, D., Huxley, P., Tylee, A., and Macdonald, A. (2006). Concepts and causation of depression: A cross-cultural study of the beliefs of older adults. The Gerontologist, 46(1):23–32.
    Leite, I., Pereira, A., Mascarenhas, S., Martinho, C., Prada, R., and Paiva, A. (2013). The influence of empathy in human–robot relations. International journal of human-computer studies, 71(3):250–260.
    Lenard, P. T. (2005). The decline of trust, the decline of democracy? Critical Review of International Social and Political Philosophy, 8(3):363–378.
    Lewicki, R. J. and Wiethoff, C. (2000). Trust, trust development, and trust repair. The handbook of conflict resolution: Theory and practice, 1(1):86–107.
    Li, Q. and Zhang, Q. (2021). Court opinion generation from case fact description with legal basis. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 35, pages 14840–14848.
    Li, Y., Jiang, Y., Tian, D., Hu, L., Lu, H., and Yuan, Z. (2019). Ai-enabled emotion communication. IEEE Network, 33(6):15–21.
    Liao, W., Oh, Y. J., Feng, B., and Zhang, J. (2023). Understanding the influence discrepancy between human and artificial agent in advice interactions: The role of stereotypical perception of agency. Communication Research, page 00936502221138427.
    Lin, B. (2022). Knowledge management system with nlp-assisted annotations: A brief survey and outlook. arXiv preprint arXiv:2206.07304.
    Liu, H., Lai, V., and Tan, C. (2021). Understanding the effect of out-of-distribution examples and interactive explanations on human-ai decision making. Proceedings of the ACM on Human-Computer Interaction, 5(CSCW2):1–45.
    Liu, Q., Zhang, N., Yang, W., Wang, S., Cui, Z., Chen, X., and Chen, L. (2017). A review of image recognition with deep convolutional neural network. In Intelligent Computing Theories and Application: 13th International Conference, ICIC 2017, Liverpool, UK, August 7-10, 2017, Proceedings, Part I 13, pages 69–80. Springer.
    Liu, Y., Han, T., Ma, S., Zhang, J., Yang, Y., Tian, J., He, H., Li, A., He, M., Liu, Z., et al. (2023). Summary of chatgpt/gpt-4 research and perspective towards the future of large language models. arXiv preprint arXiv:2304.01852.
    Luck, M. and Aylett, R. (2000). Applying artificial intelligence to virtual reality: Intelligent virtual environments. Applied artificial intelligence, 14(1):3–32.
    Luengo-Oroz, M. (2019). Solidarity should be a core ethical principle of ai. Nature Machine Intelligence.
    Lv, L., Huang, M., and Huang, R. (2023). Anthropomorphize service robots: The role of human nature traits. The Service Industries Journal, 43(3-4):213–237.
    Maddux, W. W. and Yuki, M. (2006). The “ripple effect”: Cultural differences in perceptions of the consequences of events. Personality and social psychology bulletin,
    32(5):669–683.
    Madhavan, P. and Wiegmann, D. A. (2007). Similarities and differences between human–human and human–automation trust: an integrative review. Theoretical Issues in Ergonomics Science, 8(4):277–301.
    Mahon, B. Z. and Caramazza, A. (2009). Concepts and categories: a cognitive neuropsychological perspective. Annual review of psychology, 60:27–51.
    Manwatkar, P. M. and Yadav, S. H. (2015). Text recognition from images. In 2015 International Conference on Innovations in Information, Embedded and Communication Systems (ICIIECS), pages 1–6. IEEE.
    Maree, C., Modal, J. E., and Omlin, C. W. (2020). Towards responsible ai for financial transactions. In 2020 IEEE symposium series on computational intelligence (SSCI), pages 16–21. IEEE.
    Mashaabi, M., Alotaibi, A., Qudaih, H., Alnashwan, R., and Al-Khalifa, H. (2022). Natural language processing in customer service: A systematic review. arXiv preprint arXiv:2212.09523.
    Matarneh, R., Maksymova, S., Lyashenko, V., and Belova, N. (2017). Speech recognition systems: A comparative review.
    Mayer, R. C., Davis, J. H., and Schoorman, F. D. (1995). An integrative model of organizational trust. Academy of management review, 20(3):709–734.
    McCarthy, J. (1988). Mathematical logic in artificial intelligence. Daedalus, pages 297– 311.
    McKnight, D. H. and Chervany, N. L. (2001). What trust means in e-commerce customer relationships: An interdisciplinary conceptual typology. International journal of electronic commerce, 6(2):35–59.
    McKnight, D. H. and Chervany, N. L. (2006). Reflections on an initial trust-building model. Handbook of trust research, 29.
    Mcleod, S. (2012). Attribution theory - situational vs dispositional | simply psychology. https://www.simplypsychology.org/attribution-theory.html. (Accessed on 04/05/2022).
    McNeese, N. J., Schelble, B. G., Canonico, L. B., and Demir, M. (2021). Who/what is my teammate? team composition considerations in human–ai teaming. IEEE Transactions on Human-Machine Systems, 51(4):288–299.
    Merenda, M., Porcaro, C., and Iero, D. (2020). Edge machine learning for ai-enabled iot devices: A review. Sensors, 20(9):2533.
    Mhlanga, D. (2023). Open ai in education, the responsible and ethical use of chatgpt towards lifelong learning. Education, the Responsible and Ethical Use of ChatGPT Towards Lifelong Learning (February 11, 2023).
    Miller, T. (2019). Explanation in artificial intelligence: Insights from the social sciences.
    Artificial intelligence, 267:1–38.
    Mimoun, M. S. B., Poncin, I., and Garnier, M. (2012). Case study—embodied virtual agents: An analysis on reasons for failure. Journal of Retailing and Consumer services, 19(6):605–612.
    Minsky, M. (1988). Society of mind. Simon and Schuster.
    Moon, M. J. (2003). Can it help government to restore public trust? declining public trust and potential prospects of it in the public sector. In 36th Annual Hawaii International Conference on System Sciences, 2003. Proceedings of the, pages 8–pp. IEEE.
    Morewedge, C. K. (2009). Negativity bias in attribution of external agency. Journal of Experimental Psychology: General, 138(4):535.
    Morgan, D. and Zeffane, R. (2003). Employee involvement, organizational change and trust in management. International journal of human resource management, 14(1):55– 75.
    Murdoch, W. J., Liu, P. J., and Yu, B. (2018). Beyond word importance: Contextual decomposition to extract interactions from lstms. arXiv preprint arXiv:1801.05453.
    Murphy, J., Gretzel, U., and Pesonen, J. (2019). Marketing robot services in hospitality and tourism: the role of anthropomorphism. Journal of Travel & Tourism Marketing, 36(7):784–795.
    Myers, C. D. and Tingley, D. (2016). The influence of emotion on trust. Political Analysis, 24:492 – 500.
    Nallapati, R. and Manning, C. D. (2008). Legal docket classification: Where machine learning stumbles. In Proceedings of the 2008 Conference on Empirical Methods in Natural Language Processing, pages 438–446.
    Natarajan, M. and Gombolay, M. (2020). Effects of anthropomorphism and accountability on trust in human robot interaction. In Proceedings of the 2020 ACM/IEEE International Conference on Human-Robot Interaction, pages 33–42.
    Naumann, L. P., Vazire, S., Rentfrow, P. J., and Gosling, S. D. (2009). Personality
    judgments based on physical appearance. Personality and social psychology bulletin, 35(12):1661–1671.
    Niu, D., Terken, J., and Eggen, B. (2018). Anthropomorphizing information to enhance trust in autonomous vehicles. Human Factors and Ergonomics in Manufacturing & Service Industries, 28(6):352–359.
    Noguti, M. Y., Vellasques, E., and Oliveira, L. S. (2020). Legal document classification: An application to law area prediction of petitions to public prosecution service. In 2020 International Joint Conference on Neural Networks (IJCNN), pages 1–8. IEEE.
    Norman, D. A. (1988). The psychology of everyday things. Basic books.
    Obremski, D., Lugrin, J.-L., Schaper, P., and Lugrin, B. (2021). Non-native speaker perception of intelligent virtual agents in two languages: the impact of amount and type of grammatical mistakes. Journal on Multimodal User Interfaces, 15:229–238.
    O’Brien, T. C. and Tyler, T. R. (2019). Rebuilding trust between police & communities through procedural justice & reconciliation. Behavioral science & policy, 5(1):35–50.
    Omrani, N., Rivieccio, G., Fiore, U., Schiavone, F., and Agreda, S. G. (2022). To trust or not to trust? an assessment of trust in ai-based systems: Concerns, ethics and contexts. Technological Forecasting and Social Change, 181:121763.
    Paiva, A., Leite, I., Boukricha, H., and Wachsmuth, I. (2017). Empathy in virtual agents and robots: A survey. ACM Transactions on Interactive Intelligent Systems (TiiS), 7(3):1–40.
    Pan, Y. (2016). Heading toward artificial intelligence 2.0. Engineering, 2(4):409–413.
    Parasuraman, R. and Manzey, D. H. (2010). Complacency and bias in human use of automation: An attentional integration. Human factors, 52(3):381–410.
    Parker, L. E. (2000). Current state of the art in distributed autonomous mobile robotics. Distributed autonomous robotic systems 4, pages 3–12.
    Pavlopoulos, J., Malakasiotis, P., and Androutsopoulos, I. (2017). Deeper attention to abusive user content moderation. In Proceedings of the 2017 conference on empirical methods in natural language processing, pages 1125–1135.
    Petrellis, N. (2017). A smart phone image processing application for plant disease diagnosis. In 2017 6th international conference on modern circuits and systems technologies (MOCAST), pages 1–4. IEEE.
    Phillips, E., Zhao, X., Ullman, D., and Malle, B. F. (2018). What is human-like?: Decomposing robots’ human-like appearance using the anthropomorphic robot (abot) database. In 2018 13th ACM/IEEE International Conference on Human-Robot Interaction (HRI), pages 105–113. IEEE.
    Piccoli, G. and Ives, B. (2003). Trust and the unintended effects of behavior control in virtual teams. MIS quarterly, pages 365–395.
    Plant, A. (2005). Pedagogical agents as social models for engineering: The influence of agent appearance on female choice. Artificial intelligence in education: Supporting learning through intelligent and socially informed technology, 125:65.
    Preece, A., Harborne, D., Braines, D., Tomsett, R., and Chakraborty, S. (2018). Stakeholders in explainable ai. arXiv preprint arXiv:1810.00184.
    Press, G. (2017). Top 10 hot artificial intelligence (ai) technologies. Forbes, viewed, 23.
    Qian, C., Mathur, N., Zakaria, N. H., Arora, R., Gupta, V., and Ali, M. (2022). Understanding public opinions on social media for financial sentiment analysis using ai-based techniques. Information Processing & Management, 59(6):103098.
    Rawat, A., Ghildiyal, S., Dixit, A. K., Memoria, M., Kumar, R., and Kumar, S. (2022). Approaches towards ai-based recommender system. In 2022 International Conference on Machine Learning, Big Data, Cloud and Parallel Computing (COM-IT-CON), volume 1, pages 191–196. IEEE.
    Raximov, N., Primqulov, O., and Daminova, B. (2021). Basic concepts and stages of research development on artificial intelligence. In 2021 International Conference on Information Science and Communications Technologies (ICISCT), pages 1–4.
    Reder, L. M. (1982). Plausibility judgments versus fact retrieval: Alternative strategies for sentence verification. Psychological Review, 89(3):250.
    Reder, L. M. and Ross, B. H. (1983). Integrated knowledge in different tasks: The role of retrieval strategy on fan effects. Journal of Experimental Psychology: Learning, Memory, and Cognition, 9(1):55.
    Reder, L. M., Wible, C., and Martin, J. (1986). Differential memory changes with age: Exact retrieval versus plausible inference. Journal of Experimental Psychology: Learning, Memory, and Cognition, 12(1):72.
    Robbins, B. G. (2016). What is trust? a multidisciplinary review, critique, and synthesis. Sociology compass, 10(10):972–986.
    Rutjes, H., Willemsen, M., and IJsselsteijn, W. (2019). Considerations on explainable ai and users’ mental models. In CHI 2019 Workshop: Where is the Human? Bridging the Gap Between AI and HCI. Association for Computing Machinery, Inc.
    Sallam, M. (2023). Chatgpt utility in healthcare education, research, and practice: systematic review on the promising perspectives and valid concerns. In Healthcare, volume 11, page 887. MDPI.
    Sanders, T. (2019). The relationship between trust and use choice in human-robot interaction. Human Factors: The Journal of Human Factors and Ergonomics Society, 61:614 – 626.
    Schaefer, K. E. (2016). Measuring trust in human robot interactions: Development of the “trust perception scale-hri”. In Robust Intelligence and Trust in Autonomous Systems, pages 191–218. Springer.
    Schelble, B. G., Lopez, J., Textor, C., Zhang, R., Mcneese, N. J., Pak, R., and Freeman, G. (2022). Towards ethical ai: Empirically investigating dimensions of ai ethics, trust repair, and performance in human-ai teaming. Human factors, page 187208221116952.
    Schindler, P. L. and Thomas, C. C. (1993). The structure of interpersonal trust in the workplace. Psychological Reports, 73(2):563–573.
    Schmid, H. (1994). Part-of-speech tagging with neural networks. arXiv preprint cmp-lg/9410018.
    Schoorman, F. D., Mayer, R. C., and Davis, J. H. (2007). An integrative model of organizational trust: Past, present, and future.
    Schwaninger, I., Fitzpatrick, G., and Weiss, A. (2019). Exploring trust in human-agent collaboration. In Proceedings of 17th European Conference on Computer-Supported Cooperative Work. European Society for Socially Embedded Technologies (EUSSET).
    Searle, J. R. (1980). Minds, brains, and programs. Behavioral and brain sciences, 3(3):417–424.
    Searle, J. R. (1990). The mystery of consciousness. New York Review of Books.
    Searle, R. H., Nienaber, A.-M. I., and Sitkin, S. B. (2018). The Routledge companion to trust. Routledge.
    Shi, S., Tse, R., Luo, W., D’Addona, S., and Pau, G. (2022). Machine learning-driven credit risk: a systemic review. Neural Computing and Applications, 34(17):14327–14339.
    Shiban, Y., Schelhorn, I., Jobst, V., Hörnlein, A., Puppe, F., Pauli, P., and Mühlberger, A. (2015). The appearance effect: Influences of virtual agent features on performance and motivation. Computers in Human Behavior, 49:5–11.
    Shin, D. (2021). The effects of explainability and causability on perception, trust, and acceptance: Implications for explainable ai. Int. J. Hum. Comput. Stud., 146:102551.
    Shuang, X. (2021). An air-water integrated auxiliary navigation system for complex waters. Journal of Physics: Conference Series.
    Shukla, P. (2022). Conceptualizing the use of artificial intelligence in customer relationship management and quality of services. Advances in Marketing, Customer Relationship Management, and E-Services.
    Simons, T. L. and Peterson, R. S. (2000). Task conflict and relationship conflict in top management teams: the pivotal role of intragroup trust. Journal of applied psychology, 85(1):102.
    Sindermann, C., Sha, P., Zhou, M., Wernicke, J., Schmitt, H. S., Li, M., Sariyska, R., Stavrou, M., Becker, B., and Montag, C. (2021). Assessing the attitude towards artificial intelligence: Introduction of a short measure in german, chinese, and english language. KI-Künstliche intelligenz, 35:109–118.
    Singh, B. (2020). Analysis of autopilot system, integrated with modelling and comparison of different controllers with the system. Journal of Discrete Mathematical Sciences and Cryptography.
    Singh, D., Psychoula, I., Kropf, J., Hanke, S., and Holzinger, A. (2018). Users’ perceptions and attitudes towards smart home technologies. In Smart Homes and Health Telematics, Designing a Better Future: Urban Assisted Living: 16th International Conference, ICOST 2018, Singapore, Singapore, July 10-12, 2018, Proceedings 16, pages 203–214. Springer.
    Singh, G. and Goel, A. K. (2020). Face detection and recognition system using digital image processing. In 2020 2nd International Conference on Innovative Mechanisms for Industry Applications (ICIMIA), pages 348–352. IEEE.
    Slovic, P. (1993). Perceived risk, trust, and democracy. Risk analysis, 13(6):675–682.
    Smeaton, A. F. (1999). Using nlp or nlp resources for information retrieval tasks. In Natural language information retrieval, pages 99–111. Springer.
    Sourdin, T. (2018). Judge v robot?: Artificial intelligence and judicial decision-making. The University of New South Wales Law Journal, 41(4):1114–1133.
    Spitzberg, B. H. and Manusov, V. (2021). Attribution theory: finding good cause in the search for theory. In Engaging Theories in Interpersonal Communication, pages 39–51. Routledge.
    Sternberg, R. J. (2000). Handbook of intelligence. Cambridge University Press.
    Straßmann, C., von der Pütten, A. R., Yaghoubzadeh, R., Kaminski, R., and Krämer, N. (2016). The effect of an intelligent virtual agent’s nonverbal behavior with regard to dominance and cooperativity. In Intelligent Virtual Agents: 16th International Conference, IVA 2016, Los Angeles, CA, USA, September 20–23, 2016, Proceedings 16, pages 15–28. Springer.
    Stuurman, K. and Lachaud, E. (2022). Regulating ai. a label to complete the proposed act on artificial intelligence. Computer Law & Security Review, 44:105657.
    Sullivan, Y., de Bourmont, M., and Dunaway, M. (2022). Appraisals of harms and injustice trigger an eerie feeling that decreases trust in artificial intelligence systems. Annals of Operations Research, 308:525–548.
    Sun, S., Luo, C., and Chen, J. (2017). A review of natural language processing techniques for opinion mining systems. Information fusion, 36:10–25.
    Susskind, R. (2008). The end of lawyers. Oxford: Oxford University Press.
    Tadavarthi, Y., Vey, B., Krupinski, E., Prater, A., Gichoya, J., Safdar, N., and Trivedi, H. (2020). The state of radiology ai: considerations for purchase decisions and current market offerings. Radiology: Artificial Intelligence, 2(6):e200004.
    Tas, O. and Kiyani, F. (2007). A survey automatic text summarization. PressAcademia Procedia, 5(1):205–213.
    Teng, X. (2019). Discussion about artificial intelligence’s advantages and disadvantages compete with natural intelligence. In Journal of Physics: Conference Series, volume 1187, page 032083. IOP Publishing.
    Thomas, R. and Skinner, L. (2010). Total trust and trust asymmetry: Does trust need to be equally distributed in interfirm relationships? Journal of Relationship Marketing, 9(1):43–53.
    Throne, R. (2022). Adverse trends in data ethics: The ai bill of rights and human subjects protections. Available at SSRN.
    Travaini, G. V., Pacchioni, F., Bellumore, S., Bosia, M., and De Micco, F. (2022). Machine learning and criminal justice: a systematic review of advanced methodology for recidivism risk prediction. International journal of environmental research and public health, 19(17):10594.
    Trocin, C., Mikalef, P., Papamitsiou, Z., and Conboy, K. (2021). Responsible ai for digital health: a synthesis and a research agenda. Information Systems Frontiers, pages 1–19.
    Van de Walle, S., Van Roosbroek, S., and Bouckaert, G. (2008). Trust in the public sector: Is there any evidence for a long-term decline? International Review of Administrative Sciences, 74(1):47–64.
    Vodrahalli, K. (2021). Do humans trust advice more if it comes from ai?: An analysis of human-ai interactions. Proceedings of the 2022 AAAI/ACM Conference on AI, Ethics, and Society.
    Waltersmann, L., Kiemel, S., Stuhlsatz, J., Sauer, A., and Miehe, R. (2021). Artificial intelligence applications for increasing resource efficiency in manufacturing companies—a comprehensive review. Sustainability, 13(12):6689.
    Wang, D., Khosla, A., Gargeya, R., Irshad, H., and Beck, A. H. (2016). Deep learning for identifying metastatic breast cancer. arXiv preprint arXiv:1606.05718.
    Wang, M., Zhang, Q., and Zhou, K. Z. (2020). The origins of trust asymmetry in international relationships: An institutional view. Journal of International Marketing, 28(2):81–101.
    Wang, N. (2020). “black box justice”: Robot judges and ai-based judgment processes in china’s court system. In 2020 IEEE International Symposium on Technology and Society (ISTAS), pages 58–65. IEEE.
    Wang, P. (2008). What do you mean by “ai”? In AGI, volume 171, pages 362–373.
    Wang, W. Y., Mayfield, E., Naidu, S., and Dittmar, J. (2012). Historical analysis of legal opinions with a sparse mixed-effects latent variable model. In Proceedings of the 50th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 740–749.
    Wang, X. (2021). Are explanations helpful? a comparative study of the effects of explanations in ai-assisted decision-making. 26th International Conference on Intelligent User Interfaces.
    Wang, X., Liu, C., and Qi, Y. (2021). Research on new media content production based on artificial intelligence technology. In Journal of Physics: Conference Series, volume 1757, page 012062. IOP Publishing.
    Warikoo, N., Mayer, T., Atzil-Slonim, D., Eliassaf, A., Haimovitz, S., and Gurevych, I. (2022). Nlp meets psychotherapy: Using predicted client emotions and self-reported client emotions to measure emotional coherence. arXiv preprint arXiv:2211.12512.
    Waytz, A., Heafner, J., and Epley, N. (2014). The mind in the machine: Anthropomorphism increases trust in an autonomous vehicle. Journal of Experimental Social Psychology, 52:113–117.
    Waytz, A., Morewedge, C. K., Epley, N., Monteleone, G., Gao, J.-H., and Cacioppo, J. T. (2010). Making sense by making sentient: effectance motivation increases anthropomorphism. Journal of personality and social psychology, 99(3):410.
    Weiner, B. (1972). Attribution theory, achievement motivation, and the educational process. Review of educational research, 42(2):203–215.
    Weiner, B. (2012). An attributional theory of motivation and emotion. Springer Science & Business Media.
    Wen, H. (2023). Alert of the second decision-maker: An introduction to human-ai conflict. arXiv preprint arXiv:2305.16477.
    West, D. M. (2018). The future of work: Robots, AI, and automation. Brookings Institution Press.
    Willegen, I. v., Rothkrantz, L., and Wiggers, P. (2009). Lexical affinity measure between words. In International Conference on Text, Speech and Dialogue, pages 234–241. Springer.
    Wu, Y., Schuster, M., Chen, Z., Le, Q. V., Norouzi, M., Macherey, W., Krikun, M., Cao, Y., Gao, Q., Macherey, K., et al. (2016). Google’s neural machine translation system: Bridging the gap between human and machine translation. arXiv preprint arXiv:1609.08144.
    Xie, Y. and Peng, S. (2009). How to repair customer trust after negative publicity: The roles of competence, integrity, benevolence, and forgiveness. Psychology & Marketing, 26(7):572–589.
    Xue, N. (2003). Chinese word segmentation as character tagging. In International Journal of Computational Linguistics & Chinese Language Processing, Volume 8, Number 1, February 2003: Special Issue on Word Formation and Chinese Language Processing, pages 29–48.
    Yang, Y., Liu, Y., Lv, X., Ai, J., and Li, Y. (2022). Anthropomorphism and customers’ willingness to use artificial intelligence service agents. Journal of Hospitality Marketing & Management, 31(1):1–23.
    Yu, K., Berkovsky, S., Taib, R., Conway, D., Zhou, J., and Chen, F. (2017). User trust dynamics: An investigation driven by differences in system performance. In Proceedings of the 22nd international conference on intelligent user interfaces, pages 307–317.
    Zahedi, F. M. and Song, J. (2008). Dynamics of trust revision: Using health infomediaries. Journal of Management Information Systems, 24(4):225–248.
    Zerilli, J., Bhatt, U., and Weller, A. (2022). How transparency modulates trust in artificial intelligence. Patterns.
    Zhang, A. and Yang, Q. (2022). To be human-like or machine-like? an empirical research on user trust in ai applications in service industry. In 2022 8th International Conference on Automation, Robotics and Applications (ICARA), pages 9–15. IEEE.
    Zhang, R. (2021). “An ideal human”. Proceedings of the ACM on Human-Computer Interaction.
    Zhang, Z. (2018). Insert beyond the traffic sign recognition: constructing an auto-pilot map for autonomous vehicles. Proceedings of the 26th ACM SIGSPATIAL International Conference on Advances in Geographic Information Systems.
    Zhang, Z., Trinh, H., Chen, Q., and Bickmore, T. (2015). Adapting a geriatrics health counseling virtual agent for the chinese culture. In International Conference on Intelligent Virtual Agents, pages 275–278. Springer.
    Zhao, L. (2022). Artificial intelligence and law: Emerging divergent national regulatory approaches in a changing landscape of fast-evolving ai technologies.
    Zhao, W., Chellappa, R., Phillips, P. J., and Rosenfeld, A. (2003). Face recognition: A literature survey. ACM computing surveys (CSUR), 35(4):399–458.
    Zhou, J. and Chen, F. (2019). Towards trustworthy human-ai teaming under uncertainty. In IJCAI 2019 workshop on explainable AI (XAI).
    Zhu, D. H. and Chang, Y. P. (2020). Robot with humanoid hands cooks food better? effect of robotic chef anthropomorphism on food quality prediction. International Journal of Contemporary Hospitality Management.
    Złotowski, J., Sumioka, H., Nishio, S., Glas, D. F., Bartneck, C., and Ishiguro, H. (2016). Appearance of a robot affects the impact of its behaviour on perceived trustworthiness and empathy. Paladyn, Journal of Behavioral Robotics, 7(1).
    張永健, 何漢葳, and 李宗憲 (2017). 或重於泰山,或輕於鴻毛—地方法院車禍致死案件撫慰金之實證研究 [Weightier than Mount Tai or lighter than a feather: An empirical study of consolation payments in district court fatal car accident cases]. 政大法學評論 (Chengchi Law Review), (149):139–219.
    黃詩淳, 邵軒磊, 康心宥, and 郭恩佳 (2020). 初探車禍判決中法院認定之過失比例之因素 [A preliminary study of the factors behind courts' determination of negligence ratios in car accident judgments]. 月旦法學雜誌 (Taiwan Law Review), (305):206–221.
    Description: Master's thesis
    National Chengchi University
    Master's Program in Digital Content
    109462011
    Source URI: http://thesis.lib.nccu.edu.tw/record/#G0109462011
    Data Type: thesis
    Appears in Collections: [Master's Program in Digital Content] Theses

