National Chengchi University Institutional Repository (NCCUR): Item 140.119/153150
    Please use this permanent URL to cite or link to this item: https://nccur.lib.nccu.edu.tw/handle/140.119/153150


    Title: Enhancing Explainability in Convolutional Neural Network-Based Fall Prevention System for Patient Safety: A SHAP-Based Approach
    Author: Wu, Yi-Wen (吳伊雯)
    Contributors: Chang, Hsin-Lu (張欣綠); Wu, Yi-Wen (吳伊雯)
    Keywords: AI-based Fall Detection
    Explainable AI (XAI)
    SHapley Additive exPlanations (SHAP)
    User Trust
    User Behavior
    False Alarms
    Explainability
    Human-Machine Interaction
    Date: 2024
    Upload time: 2024-09-04 14:03:43 (UTC+8)
    Abstract: This research investigates how error signals influence user trust and behavior in AI-based fall prevention systems, focusing on emotional and behavioral responses to inaccuracies such as false alarms and false negatives (missed falls). These errors can decrease trust and hinder technology adoption. We examine the role of Explainable AI (XAI), specifically SHapley Additive exPlanations (SHAP), in improving user trust and system comprehension. Our hypothesis is that clear, comprehensive explanations of system outputs via XAI significantly enhance user attitudes toward the technology.
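
    To make the SHAP approach mentioned above concrete, the sketch below applies shap.GradientExplainer to a small Keras CNN classifier and plots per-pixel attributions. The architecture, the 64x64 grayscale input shape, and the random data are illustrative assumptions for demonstration only, not the thesis's actual model or dataset.

```python
# Illustrative sketch only: explaining a CNN classifier's predictions with SHAP.
# The model, input shape, and random data are assumptions standing in for a real
# fall-detection model and its sensor/camera inputs.
import numpy as np
import shap
from tensorflow import keras

# Hypothetical binary classifier: class 0 = "no fall", class 1 = "fall".
model = keras.Sequential([
    keras.layers.Input(shape=(64, 64, 1)),
    keras.layers.Conv2D(16, 3, activation="relu"),
    keras.layers.MaxPooling2D(),
    keras.layers.Flatten(),
    keras.layers.Dense(2, activation="softmax"),
])

background = np.random.rand(50, 64, 64, 1).astype("float32")  # reference samples
to_explain = np.random.rand(3, 64, 64, 1).astype("float32")   # cases to explain

# GradientExplainer estimates SHAP values from the model's gradients, attributing
# to each pixel how much it pushed the prediction toward "fall" or "no fall".
explainer = shap.GradientExplainer(model, background)
shap_values = explainer.shap_values(to_explain)

# Overlay the attributions on the input frames.
shap.image_plot(shap_values, to_explain)
```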

    Our findings reveal complex dynamics between error rates and their effects on user trust and satisfaction. Higher error rates drastically reduce trust, especially when compounded by the use of XAI. However, the direct impact of error rates on trust is minimal; their indirect effects, operating through improved system performance and perceived reliability, significantly enhance trust. Both the Explanation Satisfaction and Comprehensiveness of Explanation models demonstrate full mediation, indicating that the model's impact on trust is transmitted primarily through enhanced explanations.
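
    The full-mediation result reported above can be illustrated with a standard regression-based mediation check. The sketch below runs the three usual regressions on simulated data; the variables X (explanation condition), M (explanation satisfaction), and Y (trust) and all coefficients are assumptions, not the study's measures or results.

```python
# Minimal sketch of a regression-based mediation check on simulated data.
# X, M, Y and all coefficients are illustrative assumptions, not study results.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(42)
n = 300
X = rng.normal(size=n)                        # e.g., explanation quality condition
M = 0.6 * X + rng.normal(size=n)              # proposed mediator, e.g., satisfaction
Y = 0.5 * M + 0.05 * X + rng.normal(size=n)   # outcome, e.g., trust

df = pd.DataFrame({"X": X, "M": M, "Y": Y})

total  = smf.ols("Y ~ X", data=df).fit()      # path c : total effect of X on Y
a_path = smf.ols("M ~ X", data=df).fit()      # path a : X -> M
b_path = smf.ols("Y ~ X + M", data=df).fit()  # paths b and c' : M -> Y given X

indirect = a_path.params["X"] * b_path.params["M"]
print(f"total effect  c  = {total.params['X']:.3f}")
print(f"direct effect c' = {b_path.params['X']:.3f} (p = {b_path.pvalues['X']:.3f})")
print(f"indirect a*b     = {indirect:.3f}")
# Full mediation is indicated when c' is small and non-significant while the
# indirect effect a*b remains substantial (in practice usually confirmed with a
# bootstrapped confidence interval).
```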

    The study underscores the importance of effective error management and clear explanations in fostering trust, particularly in critical applications such as fall prevention. Our research contributes insights into increasing transparency and explainability in AI systems, offering developers valuable guidance for improving user receptiveness to and trust in AI technologies.
    Description: Master's
    National Chengchi University
    Department of Management Information Systems
    111356016
    Source: http://thesis.lib.nccu.edu.tw/record/#G0111356016
    Data type: thesis
    Appears in Collections: [Department of Management Information Systems] Theses

    Files in This Item:

    File        Size     Format     Views
    601601.pdf  1482 KB  Adobe PDF  0


    All items in NCCUR are protected by the original copyright.



    Copyright Announcement
    1. The digital content of this website is part of the National Chengchi University Institutional Repository, provided free of charge for academic research and public education on a non-commercial basis. Please use the content in a proper and reasonable manner and respect the rights of copyright owners. For commercial use, please obtain authorization from the copyright owner in advance.

    2. Every effort has been made in creating this website to avoid infringing the rights of copyright owners. If you believe that any material on this website infringes copyright, please notify our staff (nccur@nccu.edu.tw). We will remove the work from the repository and investigate your claim.