References

Abbate, S., Avvenuti, M., Bonatesta, F., Cola, G., Corsini, P., & Vecchio, A. (2012). A smartphone-based fall detection system. Pervasive and Mobile Computing, 8(6), 883-899.
Adadi, A., & Berrada, M. (2018). Peeking inside the black-box: A survey on explainable artificial intelligence (XAI). IEEE Access, 6, 52138-52160.
Adams, B. D., Bruyn, L. E., Houde, S., Angelopoulos, P., Iwasa-Madge, K., & McCann, C. (2003). Trust in automated systems. Ministry of National Defence.
Anjomshoae, S., Najjar, A., Calvaresi, D., & Främling, K. (2019). Explainable agents and robots: Results from a systematic literature review. Paper presented at the 18th International Conference on Autonomous Agents and Multiagent Systems (AAMAS 2019), Montreal, Canada, May 13-17, 2019.
Arrieta, A. B., Díaz-Rodríguez, N., Del Ser, J., Bennetot, A., Tabik, S., Barbado, A., . . . Benjamins, R. (2020). Explainable artificial intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI. Information Fusion, 58, 82-115.
Belle, V., & Papantonis, I. (2021). Principles and practice of explainable machine learning. Frontiers in Big Data, 4, 688969.
Breznitz, S. (2013). Cry wolf: The psychology of false alarms. Psychology Press.
Bright, T. J., Wong, A., Dhurjati, R., Bristow, E., Bastian, L., Coeytaux, R. R., . . . Musty, M. D. (2012). Effect of clinical decision-support systems: A systematic review. Annals of Internal Medicine, 157(1), 29-43.
Cahour, B., & Forzy, J.-F. (2009). Does projection into use improve trust and exploration? An example with a cruise control system. Safety Science, 47(9), 1260-1270.
Casalicchio, G., Molnar, C., & Bischl, B. (2019). Visualizing the feature importance for black box models. Paper presented at Machine Learning and Knowledge Discovery in Databases: European Conference, ECML PKDD 2018, Dublin, Ireland, September 10-14, 2018, Proceedings, Part I.
De Miguel, K., Brunete, A., Hernando, M., & Gambao, E. (2017). Home camera-based fall detection system for the elderly. Sensors, 17(12), 2864.
Dignum, V. (2019). Responsible artificial intelligence: How to develop and use AI in a responsible way. Springer.
Doshi-Velez, F., & Kim, B. (2017). Towards a rigorous science of interpretable machine learning. arXiv preprint arXiv:1702.08608.
Edgcomb, A., & Vahid, F. (2012). Privacy perception and fall detection accuracy for in-home video assistive monitoring with privacy enhancements. ACM SIGHIT Record, 2(2), 6-15.
García, E., Villar, M., Fáñez, M., Villar, J. R., de la Cal, E., & Cho, S.-B. (2022). Towards effective detection of elderly falls with CNN-LSTM neural networks. Neurocomputing, 500, 231-240.
Gilpin, L. H., Bau, D., Yuan, B. Z., Bajwa, A., Specter, M., & Kagal, L. (2018). Explaining explanations: An overview of interpretability of machine learning. Paper presented at the 2018 IEEE 5th International Conference on Data Science and Advanced Analytics (DSAA).
Girshick, R., Donahue, J., Darrell, T., & Malik, J. (2014). Rich feature hierarchies for accurate object detection and semantic segmentation. Paper presented at the IEEE Conference on Computer Vision and Pattern Recognition.
Guidotti, R., Monreale, A., Ruggieri, S., Turini, F., Giannotti, F., & Pedreschi, D. (2018). A survey of methods for explaining black box models. ACM Computing Surveys, 51(5), 1-42.
Gunning, D. (2017). Explainable artificial intelligence (XAI). Defense Advanced Research Projects Agency (DARPA).
Hayes, A. F. (2017). Introduction to mediation, moderation, and conditional process analysis: A regression-based approach. Guilford Publications.
Herm, L.-V. (2023). Impact of explainable AI on cognitive load: Insights from an empirical study. arXiv preprint arXiv:2304.08861.
Ho, C.-Y., Lai, Y.-C., Chen, I.-W., Wang, F.-Y., & Tai, W.-H. (2012). Statistical analysis of false positives and false negatives from real traffic with intrusion detection/prevention systems. IEEE Communications Magazine, 50(3), 146-154.
Hoffman, R. R., Mueller, S. T., Klein, G., & Litman, J. (2023). Measures for explainable AI: Explanation goodness, user satisfaction, mental models, curiosity, trust, and human-AI performance. Frontiers in Computer Science, 5, 1096257.
Igual, R., Medrano, C., & Plaza, I. (2013). Challenges, issues and trends in fall detection systems. BioMedical Engineering OnLine, 12(1), 66.
Ikeda, T., Cooray, U., Hariyama, M., Aida, J., Kondo, K., Murakami, M., & Osaka, K. (2022). An interpretable machine learning approach to predict fall risk among community-dwelling older adults: A three-year longitudinal study. Journal of General Internal Medicine, 37(11), 2727-2735.
Karran, A. J., Demazure, T., Hudon, A., Senecal, S., & Léger, P.-M. (2022). Designing for confidence: The impact of visualizing artificial intelligence decisions. Frontiers in Neuroscience, 16, 883385.
Kawamoto, K., Houlihan, C. A., Balas, E. A., & Lobach, D. F. (2005). Improving clinical practice using clinical decision support systems: A systematic review of trials to identify features critical to success. BMJ, 330(7494), 765.
Kim, J.-K., Bae, M.-N., Lee, K., Kim, J.-C., & Hong, S. G. (2022). Explainable artificial intelligence and wearable sensor-based gait analysis to identify patients with osteopenia and sarcopenia in daily life. Biosensors, 12(3), 167.
Kim, J.-K., Oh, D.-S., Lee, K., & Hong, S. G. (2022). Fall detection based on interpretation of important features with wrist-wearable sensors. Paper presented at the 28th Annual International Conference on Mobile Computing and Networking.
Laato, S., Tiainen, M., Najmul Islam, A., & Mäntymäki, M. (2022). How to explain AI systems to end users: A systematic literature review and research agenda. Internet Research, 32(7), 1-31.
Lalmas, M., O'Brien, H., & Yom-Tov, E. (2022). Measuring user engagement. Springer Nature.
Linardatos, P., Papastefanopoulos, V., & Kotsiantis, S. (2020). Explainable AI: A review of machine learning interpretability methods. Entropy, 23(1), 18.
Liu, Y., Liu, Z., Luo, X., & Zhao, H. (2022). Diagnosis of Parkinson's disease based on SHAP value feature selection. Biocybernetics and Biomedical Engineering, 42(3), 856-869.
Lundberg, S. M., & Lee, S.-I. (2017). A unified approach to interpreting model predictions. Advances in Neural Information Processing Systems, 30.
Madsen, M., & Gregor, S. (2000). Measuring human-computer trust. Paper presented at the 11th Australasian Conference on Information Systems.
Mankodiya, H., Jadav, D., Gupta, R., Tanwar, S., Alharbi, A., Tolba, A., . . . Raboaca, M. S. (2022). XAI-Fall: Explainable AI for fall detection on wearable devices using sequence models and XAI techniques. Mathematics, 10(12), 1990.
Marcílio, W. E., & Eler, D. M. (2020). From explanations to feature selection: Assessing SHAP values as feature selection mechanism. Paper presented at the 2020 33rd SIBGRAPI Conference on Graphics, Patterns and Images (SIBGRAPI).
Mastorakis, G., & Makris, D. (2014). Fall detection system using Kinect's infrared sensor. Journal of Real-Time Image Processing, 9, 635-646.
Ngo, T., Kunkel, J., & Ziegler, J. (2020). Exploring mental models for transparent and controllable recommender systems: A qualitative study. Paper presented at the 28th ACM Conference on User Modeling, Adaptation and Personalization.
Noury, N., Fleury, A., Rumeau, P., Bourke, A. K., Laighin, G., Rialle, V., & Lundy, J.-E. (2007). Fall detection - principles and methods. Paper presented at the 2007 29th Annual International Conference of the IEEE Engineering in Medicine and Biology Society.
Ribeiro, M. T., Singh, S., & Guestrin, C. (2016). "Why should I trust you?": Explaining the predictions of any classifier. Paper presented at the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining.
Ripberger, J. T., Silva, C. L., Jenkins-Smith, H. C., Carlson, D. E., James, M., & Herron, K. G. (2015). False alarms and missed events: The impact and origins of perceived inaccuracy in tornado warning systems. Risk Analysis, 35(1), 44-56.
Schroff, F., Kalenichenko, D., & Philbin, J. (2015). FaceNet: A unified embedding for face recognition and clustering. Paper presented at the IEEE Conference on Computer Vision and Pattern Recognition.
Sourour, M., Adel, B., & Tarek, A. (2009). Environmental awareness intrusion detection and prevention system toward reducing false positives and false negatives. Paper presented at the 2009 IEEE Symposium on Computational Intelligence in Cyber Security.
Tang, Y. T., & Romero-Ortuno, R. (2022). Using explainable AI (XAI) for the prediction of falls in the older population. Algorithms, 15(10), 353.
Thapa, R., Garikipati, A., Shokouhi, S., Hurtado, M., Barnes, G., Hoffman, J., . . . Das, R. (2022). Predicting falls in long-term care facilities: Machine learning study. JMIR Aging, 5(2), e35373.
van der Waa, J., Nieuwburg, E., Cremers, A., & Neerincx, M. (2021). Evaluating XAI: A comparison of rule-based and example-based explanations. Artificial Intelligence, 291, 103404.
Van Lent, M., Fisher, W., & Mancuso, M. (2004). An explainable artificial intelligence system for small-unit tactical behavior. Paper presented at the National Conference on Artificial Intelligence.
Yagoda, R. E., & Gillan, D. J. (2012). You want me to trust a ROBOT? The development of a human-robot interaction trust scale. International Journal of Social Robotics, 4, 235-248.
Zhang, C., Tian, Y., & Capezuti, E. (2012). Privacy preserving automatic fall detection for elderly using RGBD cameras. Paper presented at Computers Helping People with Special Needs: 13th International Conference, ICCHP 2012, Linz, Austria, July 11-13, 2012, Proceedings, Part I.
Zou, L., Xia, L., Ding, Z., Song, J., Liu, W., & Yin, D. (2019). Reinforcement learning to optimize long-term user engagement in recommender systems. Paper presented at the 25th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining.