National Chengchi University Institutional Repository (NCCUR): Item 140.119/153167
    Please use this permanent URL to cite or link to this item: https://nccur.lib.nccu.edu.tw/handle/140.119/153167


    Title: Exploring Financial Accessibility with Mixed Reality: An empirical validation of Real-time Sign Language Interpretation System (混合實境於金融無障礙的探索性研究: 以實時手語翻譯系統為例)
    Author: Chang, Yu-Wen (張郁雯)
    Contributors: Chien, Shih-Yi (簡士鎰); Chang, Yu-Wen (張郁雯)
    Keywords: Mixed Reality (MR); SignBank system; financial accessibility; bidirectional communication; technology acceptance
    Date: 2024
    Upload time: 2024-09-04 14:07:10 (UTC+8)
    Abstract:
    Addressing social inequalities and improving the well-being of vulnerable groups have become core priorities in global development efforts, as highlighted by the United Nations' Sustainable Development Goals (SDGs). Given the significant population of hearing-impaired individuals in Taiwan, this research addresses the communication challenges faced by bank staff and hearing-impaired customers in financial services by introducing the innovative SignBank system. Traditional face-to-face transactions in banks often result in communication difficulties because bank employees typically lack sign language skills. To address this issue, the SignBank system combines sign language and speech recognition technologies, leveraging mixed reality (MR) to enhance communication efficiency. The system focuses on four common financial services: "transfer," "withdrawal," "deposit," and "account opening," translating sign language into subtitles for bank employees and converting spoken language into text for hearing-impaired customers. This study employs a qualitative research method, divided into two stages. The first stage involves an online evaluation where participants watch videos demonstrating the SignBank system and complete pre- and post-test questionnaires to assess the system's feasibility and user willingness. The second stage invites hearing-impaired individuals to wear the Microsoft HoloLens 2 MR glasses for interactive experiments, followed by semi-structured interviews to explore user experiences and suggestions. The results indicate that younger participants have a higher acceptance of new technology, adapt quickly, and value its communication convenience and efficiency, showing strong willingness to use it. They also propose modifications and feature expansions for the system. 
In contrast, older participants exhibit higher resistance, mainly due to a preference for traditional communication methods, lack of experience, and physiological factors such as presbyopia and limited literacy skills. Overall, differences in experience and knowledge (e.g., IT knowledge) between younger and older users lead to varying degrees of technology acceptance. Younger users focus more on the system's functional aspects, such as interactive modes, while older users pay more attention to interface design and components. This research confirms the feasibility of the SignBank system in financial services and provides specific improvement suggestions, such as optimizing subtitle design, adding auxiliary functions, and creating a lightweight design. Future research can expand the sample size, cover more use scenarios, and even apply the system to other fields to further validate its practicality and effectiveness.
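The bidirectional flow described above (sign-language recognition captioned for the teller, speech recognition captioned for the hearing-impaired customer, scoped to four services) can be sketched as follows. This is a minimal illustrative sketch only: every name, and the fixed service vocabulary used as the recognizer's output space, is an assumption for illustration, not the thesis's actual implementation.

```python
from dataclasses import dataclass, field

# Hypothetical sketch of a SignBank-style bidirectional subtitle flow.
# The recognizers themselves are out of scope; we model only the routing
# of their text output to the two parties' caption displays.

# The four services the abstract says the system supports (assumed to be
# the recognizer's closed output vocabulary in this sketch).
SERVICES = {"transfer", "withdrawal", "deposit", "account opening"}

@dataclass
class SubtitleChannel:
    """Collects caption lines shown to one party (teller or customer)."""
    viewer: str
    lines: list = field(default_factory=list)

    def show(self, text: str) -> None:
        self.lines.append(text)

def route_sign(gloss: str, to_teller: SubtitleChannel) -> bool:
    """Sign-recognition output -> caption for the bank teller.

    Returns False for glosses outside the supported services, so the
    UI could prompt the customer to rephrase."""
    if gloss not in SERVICES:
        return False
    to_teller.show(f"[customer signs] {gloss}")
    return True

def route_speech(utterance: str, to_customer: SubtitleChannel) -> None:
    """Speech-recognition output -> caption on the customer's MR display."""
    to_customer.show(f"[teller says] {utterance}")
```

For example, a recognized "withdrawal" sign would be captioned for the teller, while the teller's spoken reply would be captioned on the customer's headset; the closed vocabulary is one plausible way to keep recognition tractable for a first-stage prototype.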
    Description: Master's thesis
    National Chengchi University
    Department of Management Information Systems
    112356005
    Source: http://thesis.lib.nccu.edu.tw/record/#G0112356005
    Type: thesis
    Appears in Collections: [Department of Management Information Systems] Theses

    Files in This Item:

    600501.pdf (7985 KB, Adobe PDF)


    All items in NCCUR are protected by copyright.



    Copyright Announcement
    1. The digital content of this website is part of the National Chengchi University Institutional Repository. It provides free access to academic research and public education for non-commercial use. Please use it in a proper and reasonable manner and respect the rights of copyright owners. For commercial use, please obtain authorization from the copyright owner in advance.

    2. NCCU Institutional Repository is made to protect the interests of copyright owners. If you believe that any material on the website infringes copyright, please contact our staff (nccur@nccu.edu.tw). We will remove the work from the repository and investigate your claim.