政大機構典藏 — National Chengchi University Institutional Repository (NCCUR): Item 140.119/152417


Please use this persistent URL to cite or link to this item: https://nccur.lib.nccu.edu.tw/handle/140.119/152417


Title: 不同透明度下的人工智慧與群體電商推薦對比研究
AI versus Crowd-Based E-Commerce Recommendations Under Different Levels of Transparency
Author: Sung, Yin-Hsuan (宋吟軒)
Contributors: Chien, Shih-Yi (簡士鎰); Yu, Fang (郁方)
Keywords: Explainable AI (XAI); Transparency; Heuristics
Date: 2024
Uploaded: 2024-08-05 12:08:09 (UTC+8)
Abstract: This study investigates the influence of heuristic mechanisms and transparency on the acceptance of product recommendations in e-commerce contexts. AI-based recommendations use the SASRec model and incorporate Kernel SHAP as an XAI technique to explain the model's predictions, stimulating machine heuristics. In contrast, crowd-based recommendations trigger bandwagon heuristics by presenting the recommended products' sales ranks and star ratings. The Situation Awareness-Based Agent Transparency (SAT) model is used to create three levels of model transparency. We conducted three rounds of pilot tests to validate the effectiveness of our interface designs in representing the different recommendations. The user study results (N=45) indicate that participants favored crowd-based recommendations, particularly under the SAT-1 and SAT-1+2 conditions. Although no significant differences were observed across the three transparency levels, some trends emerged in the mean values. Specifically, AI-based recommendations showed the highest mean acceptance under SAT-1+2+3, while crowd-based recommendations were most accepted under SAT-1+2. These findings suggest that although consumers may favor crowd-based recommendations under moderate transparency, the limitation information accompanying crowd-based recommendations prompts more systematic processing under higher transparency levels. Our results offer practical insights for e-commerce platforms: to address the user cold-start problem, platforms can prioritize crowd-based recommendations for new users, leveraging sales ranks and star ratings to stimulate consumers' bandwagon heuristics. As more consumer data become available, platforms can transition to AI-based recommendations to tackle the item cold-start problem, providing explanations of the reasoning behind the recommendations together with limitation information that highlights the system's capabilities, thereby enhancing consumer acceptance.
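Kernel SHAP, as used in the abstract to explain the recommender's predictions, is an approximation scheme for Shapley values: each feature's contribution to the gap between the model's output and a baseline output. A minimal, self-contained sketch of the exact Shapley computation that Kernel SHAP approximates is shown below; the linear scoring function and its weights are hypothetical stand-ins for the trained SASRec scorer, not the thesis's actual model.

```python
from itertools import combinations
from math import factorial

def exact_shapley(f, x, baseline):
    """Exact Shapley values for f at x, imputing 'absent' features from baseline.

    Brute-force over all feature subsets; feasible only for small n, which is
    why Kernel SHAP approximates this via weighted sampling of subsets.
    """
    n = len(x)
    phi = [0.0] * n
    for i in range(n):
        others = [j for j in range(n) if j != i]
        for k in range(n):
            for S in combinations(others, k):
                # Shapley kernel weight |S|! (n - |S| - 1)! / n!
                weight = factorial(k) * factorial(n - k - 1) / factorial(n)
                with_i = [x[j] if (j in S or j == i) else baseline[j] for j in range(n)]
                without_i = [x[j] if j in S else baseline[j] for j in range(n)]
                phi[i] += weight * (f(with_i) - f(without_i))
    return phi

# Toy "recommendation score": a linear model standing in for the trained
# recommender (hypothetical weights, purely for illustration).
weights = [0.5, -0.2, 0.8]
score = lambda v: sum(w * vi for w, vi in zip(weights, v))

x = [1.0, 2.0, 3.0]          # the instance being explained
baseline = [0.0, 0.0, 0.0]   # reference input ("feature absent")
phi = exact_shapley(score, x, baseline)

# Additivity property: contributions sum to f(x) - f(baseline).
assert abs(sum(phi) - (score(x) - score(baseline))) < 1e-9
```

For a linear model the Shapley value of feature i reduces to `w[i] * (x[i] - baseline[i])`, which makes the brute-force result easy to check by hand; for a non-linear model such as SASRec, the same subset enumeration applies but only the sampled approximation is tractable.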
References: Bach, S., Binder, A., Montavon, G., Klauschen, F., Müller, K.-R., & Samek, W. (2015). On pixel-wise explanations for non-linear classifier decisions by layer-wise relevance propagation. PloS One, 10(7), e0130140.
    Baier, D., & Stüber, E. (2010). Acceptance of recommendations to buy in online retailing. Journal of Retailing and Consumer Services, 17(3), 173–180.
Banas, J. A., Palomares, N. A., Richards, A. S., Keating, D. M., Joyce, N., & Rains, S. A. (2022). When machine and bandwagon heuristics compete: Understanding users' response to conflicting AI and crowdsourced fact-checking. Human Communication Research, 48(3), 430–461.
Benbasat, I., & Wang, W. (2005). Trust in and adoption of online recommendation agents. Journal of the Association for Information Systems, 6(3), 4.
    Chen, J. Y., Procci, K., Boyce, M., Wright, J., Garcia, A., & Barnes, M. (2014). Situation awareness-based agent transparency. US Army Research Laboratory, 1–29.
    Cheng, H.-T., Koc, L., Harmsen, J., Shaked, T., Chandra, T., Aradhye, H., Anderson, G., Corrado, G., Chai, W., Ispir, M., et al. (2016). Wide & deep learning for recommender systems. Proceedings of the 1st workshop on deep learning for recommender systems, 7–10.
Chien, S.-Y., Yang, C.-J., & Yu, F. (2022). XFlag: Explainable fake news detection model on social media. International Journal of Human–Computer Interaction, 38(18-20), 1808–1827.
    Cialdini, R. B. (2009). Influence: Science and practice (Vol. 4). Pearson education Boston.
    Cramer, H., Evers, V., Ramlal, S., Van Someren, M., Rutledge, L., Stash, N., Aroyo, L., & Wielinga, B. (2008). The effects of transparency on trust in and acceptance of a content-based art recommender. User Modeling and User-adapted interaction, 18, 455–496.
    Dietvorst, B. J., Simmons, J. P., & Massey, C. (2015). Algorithm aversion: People erroneously avoid algorithms after seeing them err. Journal of experimental psychology: General, 144(1), 114.
Gunning, D., & Aha, D. (2019). DARPA's Explainable Artificial Intelligence (XAI) program. AI Magazine, 40(2), 44–58.
    Hidasi, B., Karatzoglou, A., Baltrunas, L., & Tikk, D. (2015). Session-based recommendations with recurrent neural networks. arXiv preprint arXiv:1511.06939.
    Howard, J. (2019). Cognitive errors and diagnostic mistakes (Vol. 10). Springer.
    Hussien, F. T. A., Rahma, A. M. S., & Wahab, H. B. A. (2021). Recommendation systems for e-commerce systems an overview. Journal of Physics: Conference Series, 1897(1), 012024.
    Järvelin, K., & Kekäläinen, J. (2002). Cumulated gain-based evaluation of ir techniques. ACM Transactions on Information Systems (TOIS), 20(4), 422–446.
    Jussupow, E., Benbasat, I., & Heinzl, A. (2020). Why are we averse towards algorithms? a comprehensive literature review on algorithm aversion.
Kahneman, D., & Frederick, S. (2002). Representativeness revisited: Attribute substitution in intuitive judgment. In Heuristics and biases: The psychology of intuitive judgment (pp. 49–81).
    Kang, W.-C., & McAuley, J. (2018). Self-attentive sequential recommendation. 2018 IEEE international conference on data mining (ICDM), 197–206.
Knijnenburg, B. P., Willemsen, M. C., Gantner, Z., Soncu, H., & Newell, C. (2012). Explaining the user experience of recommender systems. User Modeling and User-Adapted Interaction, 22, 441–504.
    Koren, Y., Bell, R., & Volinsky, C. (2009). Matrix factorization techniques for recommender systems. Computer, 42(8), 30–37.
    Leibenstein, H. (1950). Bandwagon, snob, and veblen effects in the theory of consumers’ demand. The quarterly journal of economics, 64(2), 183–207.
    Liao, Q. V., Gruen, D., & Miller, S. (2020). Questioning the ai: Informing design practices for explainable ai user experiences. Proceedings of the 2020 CHI conference on human factors in computing systems, 1–15.
    Lundberg, S. M., & Lee, S.-I. (2017). A unified approach to interpreting model predictions. Advances in neural information processing systems, 30.
    Mahmud, H., Islam, A. N., Ahmed, S. I., & Smolander, K. (2022). What influences algorithmic decision-making? a systematic literature review on algorithm aversion. Technological Forecasting and Social Change, 175, 121390.
    Maqbool, M. H., Farooq, U., Mosharrof, A., Siddique, A., & Foroosh, H. (2023). Mobilerec: A large scale dataset for mobile apps recommendation. Proceedings of the 46th international ACM SIGIR conference on research and development in information retrieval, 3007–3016.
    Miller, G. A. (1956). The magical number seven, plus or minus two: Some limits on our capacity for processing information. Psychological review, 63(2), 81.
Molnar, C. (2020). Interpretable machine learning. Lulu.com.
    Ni, J., Li, J., & McAuley, J. (2019). Justifying recommendations using distantly-labeled reviews and fine-grained aspects. Proceedings of the 2019 conference on empirical methods in natural language processing and the 9th international joint conference on natural language processing (EMNLP-IJCNLP), 188–197.
Ochmann, J., Michels, L., Zilker, S., Tiefenbeck, V., & Laumer, S. (2020). The influence of algorithm aversion and anthropomorphic agent design on the acceptance of AI-based job recommendations. ICIS.
Ribeiro, M. T., Singh, S., & Guestrin, C. (2016). "Why should I trust you?": Explaining the predictions of any classifier. Proceedings of the 22nd ACM SIGKDD international conference on knowledge discovery and data mining, 1135–1144.
    Stowers, K., Kasdaglis, N., Rupp, M. A., Newton, O. B., Chen, J. Y., & Barnes, M. J. (2020). The impact of agent transparency on human performance. IEEE Transactions on Human-Machine Systems, 50(3), 245–253.
Sundar, S. S. (2008). The MAIN model: A heuristic approach to understanding technology effects on credibility. MacArthur Foundation Digital Media and Learning Initiative, Cambridge, MA.
    Sundar, S. S., & Kim, J. (2019). Machine heuristic: When we trust computers more than humans with our personal information. Proceedings of the 2019 CHI Conference on human factors in computing systems, 1–9.
    Sundar, S. S., Oeldorf-Hirsch, A., & Xu, Q. (2008). The bandwagon effect of collaborative filtering technology. In Chi’08 extended abstracts on human factors in computing systems (pp. 3453–3458).
    Tang, J., & Wang, K. (2018). Personalized top-n sequential recommendation via convolutional sequence embedding. Proceedings of the eleventh ACM international conference on web search and data mining, 565–573.
    Van der Heijden, H., Verhagen, T., & Creemers, M. (2003). Understanding online purchase intentions: Contributions from technology and trust perspectives. European journal of information systems, 12(1), 41–48.
    Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A. N., Kaiser, Ł., & Polosukhin, I. (2017). Attention is all you need. Advances in neural information processing systems, 30.
Wang, W., & Benbasat, I. (2016). Empirical assessment of alternative designs for enhancing different types of trusting beliefs in online recommendation agents. Journal of Management Information Systems, 33(3), 744–775.
Wang, Y.-F., Chen, Y.-C., & Chien, S.-Y. (2023). Citizens' intention to follow recommendations from a government-supported AI-enabled system. Public Policy and Administration, 09520767231176126.
    Wanner, J., Herm, L.-V., Heinrich, K., & Janiesch, C. (2022). The effect of transparency and trust on intelligent system acceptance: Evidence from a user-based study. Electronic Markets, 32(4), 2079–2102.
    Xue, H.-J., Dai, X., Zhang, J., Huang, S., & Chen, J. (2017). Deep matrix factorization models for recommender systems. IJCAI, 17, 3203–3209.
    Yeomans, M., Shah, A., Mullainathan, S., & Kleinberg, J. (2019). Making sense of recommendations. Journal of Behavioral Decision Making, 32(4), 403–414.
    Zhang, Y., Liao, Q. V., & Bellamy, R. K. (2020). Effect of confidence and explanation on accuracy and trust calibration in ai-assisted decision making. Proceedings of the 2020 conference on fairness, accountability, and transparency, 295–305.
    Zhao, X., Strasser, A., Cappella, J. N., Lerman, C., & Fishbein, M. (2011). A measure of perceived argument strength: Reliability and validity. Communication methods and measures, 5(1), 48–75.
    Zhou, T. (2012). Examining location-based services usage from the perspectives of unified theory of acceptance and use of technology and privacy risk. Journal of Electronic Commerce Research, 13(2), 135.
    Zuckerman, A., & Chaiken, S. (1998). A heuristic-systematic processing analysis of the effectiveness of product warning labels. Psychology & Marketing, 15(7), 621–642.
Description: Master's thesis
National Chengchi University
Department of Management Information Systems
111356050
Source: http://thesis.lib.nccu.edu.tw/record/#G0111356050
Data type: thesis
Appears in collections: [Department of Management Information Systems] Theses

Files in this item:

File: 605001.pdf (2,745 KB, Adobe PDF)


All items in NCCUR are protected by copyright, with all rights reserved.



Copyright Announcement
1. The digital content of this website is part of the National Chengchi University Institutional Repository. It provides free access for academic research and public education on a non-commercial basis. Please use the content in a proper and reasonable manner and respect the rights of copyright owners. For commercial use, please obtain authorization from the copyright owner in advance.
2. This website has been produced with every effort to avoid infringing the rights of copyright owners. If you believe that any material on the website nevertheless infringes copyright, please contact the site maintainers (nccur@nccu.edu.tw); the work will be removed from the repository and your claim investigated.
DSpace Software Copyright © 2002–2004 MIT & Hewlett-Packard / Enhanced by NTU Library IR team.