政大機構典藏-National Chengchi University Institutional Repository(NCCUR):Item 140.119/152412
    Please use this identifier to cite or link to this item: https://nccur.lib.nccu.edu.tw/handle/140.119/152412


    Title: 應用設計思考來改善強化學習作業服務
    Apply the design thinking concept to improving the RLOps service
    Authors: 陳元熙
    CHEN, YUAN-HSI
    Contributors: 蔡瑞煌
    Tsaih, Rua-Huan
    陳元熙
    CHEN, YUAN-HSI
    Keywords: 設計思考
    強化學習
    強化學習作業服務
    Design thinking
    Reinforcement learning
    RLOps service
    Date: 2024
    Issue Date: 2024-08-05 12:07:10 (UTC+8)
    Abstract: 此研究以強化學習作業服務 (RLOps service) 為基礎,欲利用設計思考的方式提升此服務,減緩強化學習陡峭的學習曲線,降低強化學習進入障礙,並增進開發上的實驗效率與簡化流程。而透過 RLOps service所提供的部署和管理方式,將可再進一步協助使用者分析與版本控制所訓練出的代理人策略。
    並提出了定位於金融投資領域的強化學習作業服務,InvestPRL 服務,來邀請受測者進行實驗,以將其使用情形作為考量,來探討此研究的主要目的。即為設計思考所帶給強化學習作業服務在採用度上的改進以增進強化學習的運用潛力,與瞭解未來RLOps service 在提供服務予使用者時需注意的議題。最後,透過此實驗的結果了解到,將設計思考應用在RLOps service 當中時,將可提升其服務的採用度,且特別在於其中的易用性與適配度的部分最為顯著。
    Building on the Reinforcement Learning Operations service (RLOps service), this study employs design thinking to ease the steep learning curve of reinforcement learning, lower its entry barriers, enhance experimental efficiency, and simplify the development process. Moreover, the deployment and management capabilities provided by the RLOps service further assist users in analyzing and version-controlling trained agent policies.
    For the experiment, the study introduces the InvestPRL service, an RLOps service positioned in the financial investment domain, and invites participants to interact with it. By examining their usage, the study pursues its primary objectives: understanding how design thinking improves the adoption of RLOps services and thereby the application potential of reinforcement learning, and identifying the issues future RLOps services should attend to when serving users.
    The experimental results demonstrate that applying design thinking to the RLOps service increases its adoption, particularly improving ease of use and compatibility.
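    The RLOps capabilities the abstract describes — training an agent and then analyzing and version-controlling the resulting policy — can be illustrated with a self-contained toy sketch. This is not the thesis's InvestPRL implementation; every name in it (`train_policy`, `version_tag`, the one-dimensional corridor environment) is hypothetical and chosen only to make the idea concrete:

    ```python
    # Illustrative sketch, NOT the InvestPRL service: a tiny tabular
    # Q-learning agent whose trained policy gets a deterministic version
    # tag, echoing the RLOps idea of version-controlling trained policies.
    import hashlib
    import json
    import random

    def train_policy(episodes=500, seed=0):
        """Q-learning on a 1-D corridor: states 0..4, reward on reaching state 4."""
        random.seed(seed)
        n_states, actions = 5, (1, -1)  # move right (+1) or left (-1)
        q = {(s, a): 0.0 for s in range(n_states) for a in actions}
        for _ in range(episodes):
            s = 0
            while s != n_states - 1:
                # epsilon-greedy action selection (epsilon = 0.1)
                if random.random() < 0.1:
                    a = random.choice(actions)
                else:
                    a = max(actions, key=lambda b: q[(s, b)])
                s2 = min(max(s + a, 0), n_states - 1)
                r = 1.0 if s2 == n_states - 1 else 0.0
                # Q-learning update: lr = 0.5, discount = 0.9
                q[(s, a)] += 0.5 * (r + 0.9 * max(q[(s2, b)] for b in actions) - q[(s, a)])
                s = s2
        return q

    def version_tag(q):
        """Deterministic version id for a trained policy, as a model registry might store."""
        blob = json.dumps(sorted((f"{s},{a}", round(v, 6)) for (s, a), v in q.items()))
        return hashlib.sha256(blob.encode()).hexdigest()[:12]

    q = train_policy()
    greedy = {s: max((1, -1), key=lambda b: q[(s, b)]) for s in range(4)}
    print("greedy policy:", greedy)  # moves right (+1) at every non-terminal state
    print("policy version:", version_tag(q))
    ```

    Because the tag is a hash of the learned parameters, retraining with the same seed reproduces the same version id, while any change to the training run yields a new one — a minimal stand-in for the policy tracking an RLOps service would provide.
    
    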
    Reference: Achiam, J. (2018). Spinning up in deep reinforcement learning.
    Awad, A. L., Elkaffas, S. M., & Fakhr, M. W. (2023). Stock Market Prediction Using Deep Reinforcement Learning. Applied System Innovation, 6(6), 106.
    Brockman, G., Cheung, V., Pettersson, L., Schneider, J., Schulman, J., Tang, J., & Zaremba, W. (2016). OpenAI Gym. arXiv:1606.01540. Retrieved June 01, 2016
    Brown, T. (2009). Change by Design: How Design Thinking Transforms Organizations and Inspires Innovation. HarperCollins.
    Chen, X., Yao, L., McAuley, J., Zhou, G., & Wang, X. (2021). A Survey of Deep Reinforcement Learning in Recommender Systems: A Systematic Review and Future Directions. arXiv:2109.03540. Retrieved September 01, 2021
    Davis, F. D. (1989). Perceived usefulness, perceived ease of use, and user acceptance of information technology. MIS quarterly, 319-340.
    DeepLearning.AI. (2021). Introduction to Machine Learning in Production.
    Dewi, R. N. P. N., Suzianti, A., & Puspasari, M. A. A. (2022). Design of Driver Monitoring System for Logistics Truck with Design Thinking Approach. Proceedings of the 4th Asia Pacific Conference on Research in Industrial and Systems Engineering, Depok, Indonesia.
    Fujimoto, S., van Hoof, H., & Meger, D. (2018). Addressing Function Approximation Error in Actor-Critic Methods. arXiv:1802.09477. Retrieved February 01, 2018
    Google Cloud Architecture Center. (2020). MLOps: Continuous delivery and automation pipelines in machine learning.
    Hasselt, H. (2010). Double Q-learning. Advances in Neural Information Processing Systems, 23.
    Hasso Plattner Institute of Design at Stanford University. (2010). An Introduction to Design Thinking Process Guide.
    Irpan, A. (2018). Deep Reinforcement Learning Doesn't Work Yet.
    Jensen, M. B., Lozano, F., & Steinert, M. (2016). The Origins of Design Thinking and the Relevance in Software Innovations. Product-Focused Software Process Improvement, Cham.
    Kreuzberger, D., Kühl, N., & Hirschl, S. (2022). Machine Learning Operations (MLOps): Overview, Definition, and Architecture. arXiv:2205.02302. Retrieved May 01, 2022
    Li, P., Thomas, J., Wang, X., Khalil, A., Ahmad, A., Inacio, R., Kapoor, S., Parekh, A., Doufexi, A., Shojaeifard, A., & Piechocki, R. (2021). RLOps: Development Life-cycle of Reinforcement Learning Aided Open RAN. arXiv:2111.06978. Retrieved November 01, 2021
    Li, Z., Liu, X.-Y., Zheng, J., Wang, Z., Walid, A., & Guo, J. (2021). FinRL-Podracer: High Performance and Scalable Deep Reinforcement Learning for Quantitative Finance. arXiv:2111.05188. Retrieved November 01, 2021
    Li, Z., Peng, X. B., Abbeel, P., Levine, S., Berseth, G., & Sreenath, K. (2024). Reinforcement Learning for Versatile, Dynamic, and Robust Bipedal Locomotion Control. arXiv:2401.16889. Retrieved January 01, 2024
    Liu, X.-Y., Yang, H., Chen, Q., Zhang, R., Yang, L., Xiao, B., & Wang, C. D. (2020). FinRL: A Deep Reinforcement Learning Library for Automated Stock Trading in Quantitative Finance. arXiv:2011.09607. Retrieved November 01, 2020
    Liu, X.-Y., Yang, H., Gao, J., & Wang, C. D. (2021). FinRL: Deep Reinforcement Learning Framework to Automate Trading in Quantitative Finance. arXiv:2111.09395. Retrieved November 01, 2021
    Masias, R. M. S. G., & Intal, G. L. D. (2023). Design of a Productivity Monitoring System for an Asset Maintenance Group Using Design Thinking Methodology. Proceedings of the 2023 5th International Conference on Management Science and Industrial Engineering, Chiang Mai, Thailand.
    Mnih, V., Kavukcuoglu, K., Silver, D., Rusu, A. A., Veness, J., Bellemare, M. G., Graves, A., Riedmiller, M., Fidjeland, A. K., Ostrovski, G., Petersen, S., Beattie, C., Sadik, A., Antonoglou, I., King, H., Kumaran, D., Wierstra, D., Legg, S., & Hassabis, D. (2015). Human-level control through deep reinforcement learning. Nature, 518(7540), 529-533. https://doi.org/10.1038/nature14236
    Myrbakken, H., & Colomo-Palacios, R. (2017). DevSecOps: A Multivocal Literature Review. Software Process Improvement and Capability Determination, Cham.
    Paulus, R., Xiong, C., & Socher, R. (2017). A Deep Reinforced Model for Abstractive Summarization. arXiv:1705.04304. Retrieved May 01, 2017
    Rogers, E. M. (2003). Diffusion of Innovations, 5th Edition. Free Press.
    Rogers, E. M., Singhal, A., & Quinlan, M. M. (2014). Diffusion of innovations. In An integrated approach to communication theory and research (pp. 432-448). Routledge.
    Rowe, P. G. (1991). Design thinking. MIT press.
    Samuylova, E. (2020). Machine Learning in Production: Why You Should Care About Data and Concept Drift.
    Sarkar, S. (2023). Quantitative Trading using Deep Q Learning. arXiv:2304.06037. Retrieved April 01, 2023
    Simon, H. A. (1996). The sciences of the artificial. MIT press.
    Stickdorn, M., & Schneider, J. (2012). This is service design thinking: Basics, tools, cases. John Wiley & Sons.
    Sutton, R. S., & Barto, A. G. (1998). Reinforcement Learning: An Introduction. MIT Press.
    Zhang, J., & Lei, Y. (2022). Deep Reinforcement Learning for Stock Prediction. Scientific Programming, 2022, 5812546. https://doi.org/10.1155/2022/5812546
    Zheng, G., Zhang, F., Zheng, Z., Xiang, Y., Yuan, N., Xie, X., & Li, Z. (2018). DRN: A Deep Reinforcement Learning Framework for News Recommendation. https://doi.org/10.1145/3178876.3185994
    Description: Master's thesis
    National Chengchi University
    Department of Management Information Systems
    111356025
    Source URI: http://thesis.lib.nccu.edu.tw/record/#G0111356025
    Data Type: thesis
    Appears in Collections:[Department of MIS] Theses

    Files in This Item:

    File: 602501.pdf (84742 Kb, Adobe PDF)


    All items in 政大典藏 are protected by copyright, with all rights reserved.


