    Please use this identifier to cite or link to this item: https://nccur.lib.nccu.edu.tw/handle/140.119/157209


    Title: 虛擬代理人對話中的即饋行為對人信任之影響
    Backchannel and Trust in Conversations between Human and Virtual Agent
    Authors: 陳昱安 (Chen, Yu-An)
    Contributors: 陳宜秀 (Chen, Yi-Hsiu); 廖峻鋒 (Liao, Chun-Feng); 陳昱安 (Chen, Yu-An)
    Keywords: 即饋 (backchannel); 信任 (trust); 代理人 (agent); 人機互動 (human-computer interaction); 擬人化 (anthropomorphism)
    Date: 2025
    Issue Date: 2025-06-02 14:37:22 (UTC+8)
    Abstract: As technology advances and human needs change, the interfaces through which we interact with computers and machines are no longer limited to flat control panels or display screens. Agents are expected to become a widespread form of interface, offering us more diverse functions and interaction modes for completing tasks together. In Human-Agent Interaction (HAI), whether agents can exhibit human-like communicative behavior has long been an actively explored research direction, and establishing an appropriate relationship of trust is likewise one of the key factors in human-agent interaction and cooperation. In communication research, a backchannel refers to the verbal (e.g., "uh-huh", "oh") and non-verbal (e.g., nodding, smiling) responses that participants produce during a conversation. In human-human communication, the positive signals provided by backchannels help both parties maintain the turn-taking signal of who should speak next and confirm that what has been said is received, acknowledged, and can be continued.
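    As an illustration only, the sketch below shows one simple way such a backchannel mechanism could be realised in an agent: a pause-based trigger that emits a short verbal cue when the user has stopped speaking. The cue list, timing threshold, and function names are hypothetical assumptions for this sketch and are not the implementation used in this thesis.

    ```python
    # Minimal, hypothetical sketch of a pause-based backchannel trigger.
    # The cue list and pause threshold below are illustrative assumptions.
    import random
    import time

    BACKCHANNEL_CUES = ["uh-huh", "mm-hm", "oh"]  # verbal backchannel cues
    PAUSE_THRESHOLD = 0.7  # seconds of user silence before the agent reacts (assumed)

    def maybe_backchannel(last_user_speech_end: float, now: float,
                          already_responded: bool):
        """Return a backchannel cue if the user has paused long enough and the
        agent has not yet acknowledged the current utterance; otherwise None."""
        if already_responded:
            return None
        if now - last_user_speech_end >= PAUSE_THRESHOLD:
            return random.choice(BACKCHANNEL_CUES)
        return None

    # Example: the user stopped speaking 0.9 s ago and no acknowledgement was sent.
    cue = maybe_backchannel(time.time() - 0.9, time.time(), already_responded=False)
    print(cue)  # e.g. "mm-hm"; the agent would speak this cue and/or nod
    ```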

    Given that people often draw on their past experience of interacting with other people when they begin interacting with an agent, we expect that agents in human-agent interaction can likewise display appropriate backchannel mechanisms to enhance the mutuality of communication and allow people to establish an appropriate level of trust. This study focuses on the role of backchannel mechanisms in human-agent interaction and asks whether backchannels also influence the process of interaction between humans and agents. To test this question, the study adopted a two-factor between-subjects design with the presence of backchannel behavior (with backchannels / without backchannels) and the degree of anthropomorphism of the agent's appearance and voice (humanoid / robot-like) as independent variables. During the experiment, participants worked with the agent through a computer to complete a "you describe, I guess" task based on tangram figures. After the experiment, participants completed an AI semantic-differential trust scale covering the two dimensions of cognitive trust and affective trust, together with a task-experience questionnaire, to examine whether backchannel behavior and degree of anthropomorphism affect people's trust evaluations of the agent.
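    To make the design concrete, the following is a minimal sketch of how trust ratings from the 2 (backchannel: with / without) x 2 (appearance: humanoid / robot-like) between-subjects design could be analysed with a two-way ANOVA. The data frame is hypothetical and uses a standard Python statistics stack; it does not reproduce the thesis's actual scale items, sample, or scores.

    ```python
    # Hypothetical two-way between-subjects ANOVA on cognitive-trust ratings.
    import pandas as pd
    import statsmodels.api as sm
    from statsmodels.formula.api import ols

    df = pd.DataFrame({
        "backchannel":     ["with", "with", "without", "without"] * 6,
        "appearance":      ["humanoid", "robot"] * 12,
        "cognitive_trust": [5.1, 4.8, 4.6, 4.4, 5.3, 4.7, 4.5, 4.2] * 3,  # made-up ratings
    })

    # Main effects of backchannel and appearance plus their interaction,
    # using Type II sums of squares.
    model = ols("cognitive_trust ~ C(backchannel) * C(appearance)", data=df).fit()
    print(sm.stats.anova_lm(model, typ=2))
    ```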

    The results show that the backchannel and anthropomorphism manipulations did not directly produce the expected significant differences. Backchannel behavior can effectively improve conversational efficiency, but its effect on trust appears to be beneficial only when it fits the user's mental model and interaction expectations. At the level of subjective experience, backchannels fostered a sense of being understood and of shared awareness, which in turn increased participants' sense of engagement in the interaction. On the other hand, participants placed more affective trust in the highly anthropomorphic (humanoid) agent, but this was accompanied by higher expectations and stricter scrutiny; when the agent's performance fell short of users' expectations, trust dropped noticeably. In summary, trust in interactions with virtual agents does not form along a single dimension: by evaluating it at both the cognitive and the affective level, we found that the agent's behavior and appearance, as well as users' subjective experience, give trust a multidimensional character.
    Description: 碩士 (Master's thesis)
    國立政治大學 (National Chengchi University)
    數位內容碩士學位學程 (Master's Program in Digital Content)
    110462006
    Source URI: http://thesis.lib.nccu.edu.tw/record/#G0110462006
    Data Type: thesis
    Appears in Collections: [數位內容碩士學位學程] 學位論文 (Master's Program in Digital Content: Theses)

    Files in This Item: 200601.pdf (6045 KB, Adobe PDF)

