    Please use this identifier to cite or link to this item: https://nccur.lib.nccu.edu.tw/handle/140.119/138942


    Title: 應用深度雙Q網路於股票自動交易系統
    Double Deep Q-Network in Automated Stock Trading
    Authors: 黃冠棋
    Huang, Kuan-Chi
    Contributors: 蔡炎龍
    黃冠棋
    Huang, Kuan-Chi
Keywords: Deep reinforcement learning
Neural network
Q-learning
Double Deep Q-Network (DDQN)
Stock trading
    Date: 2021
    Issue Date: 2022-02-10 13:07:06 (UTC+8)
Abstract: In this thesis, we combine deep neural networks with reinforcement learning to train an automated stock trading system. We build a convolutional network and a fully-connected network to predict the Q-values of the available actions, and we use the DDQN algorithm to compute the TD targets and update these action values. Each day the system takes the stock information from the previous 10 days as its state, predicts the trend of the stock, and acts to maximize profit.
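For concreteness, the following sketch shows one way such a Q-network could be set up: a state built from a 10-day window of daily features is mapped to one Q-value per trading action. The feature set, the three actions (buy / hold / sell), and the layer sizes are illustrative assumptions, not details taken from the thesis.

```python
# Minimal sketch of a fully-connected Q-network over a 10-day state window.
# The features, actions, and layer sizes below are assumptions for illustration.
import torch
import torch.nn as nn

WINDOW = 10      # days of history per state (as described in the abstract)
N_FEATURES = 5   # e.g. open/high/low/close/volume -- an assumption
N_ACTIONS = 3    # buy / hold / sell -- an assumption

class QNetwork(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Flatten(),                         # (batch, 10, 5) -> (batch, 50)
            nn.Linear(WINDOW * N_FEATURES, 64),
            nn.ReLU(),
            nn.Linear(64, N_ACTIONS),             # one Q-value per action
        )

    def forward(self, state):
        return self.net(state)

# Greedy action for a single (random, placeholder) 10-day state.
state = torch.randn(1, WINDOW, N_FEATURES)
q_values = QNetwork()(state)      # shape (1, 3)
action = q_values.argmax(dim=1)   # index of the highest-valued action
```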

DDQN is a deep reinforcement learning model that improves on DQN: by introducing a target network and adjusting the loss function, it avoids DQN's overestimation problem and achieves better performance. In our experiments we obtain good results, showing that DDQN is effective for automated trading systems.
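To make the DQN/DDQN distinction concrete: DQN forms its target as y = r + γ·max_a Q_target(s', a), so the target network both selects and evaluates the next action, whereas DDQN uses y = r + γ·Q_target(s', argmax_a Q_online(s', a)), decoupling action selection from evaluation. The sketch below illustrates this difference under the same assumed 10-day, 5-feature, 3-action setup as above; the discount factor and network sizes are assumptions, not the thesis' settings.

```python
# Sketch of the DQN vs. DDQN update targets with a stand-in Q-network.
# All hyperparameters here are illustrative assumptions.
import torch
import torch.nn as nn

GAMMA = 0.99   # discount factor -- an assumption

def make_q_net():
    # Same shape as the sketch above: 10 days x 5 features -> 3 actions.
    return nn.Sequential(nn.Flatten(), nn.Linear(10 * 5, 64), nn.ReLU(), nn.Linear(64, 3))

online_net, target_net = make_q_net(), make_q_net()
target_net.load_state_dict(online_net.state_dict())   # target net is a periodic copy

def dqn_target(reward, next_state, done):
    # Plain DQN: the target network both selects and evaluates the next action,
    # which is what causes the overestimation of Q-values.
    with torch.no_grad():
        next_q = target_net(next_state).max(dim=1).values
    return reward + GAMMA * (1.0 - done) * next_q

def ddqn_target(reward, next_state, done):
    # DDQN: the online network selects the action, the target network evaluates it,
    # reducing the overestimation bias.
    with torch.no_grad():
        best_action = online_net(next_state).argmax(dim=1, keepdim=True)
        next_q = target_net(next_state).gather(1, best_action).squeeze(1)
    return reward + GAMMA * (1.0 - done) * next_q

# Example with dummy tensors; the loss would compare Q(s, a) against ddqn_target(...).
reward, next_state, done = torch.tensor([0.1]), torch.randn(1, 10, 5), torch.tensor([0.0])
print(dqn_target(reward, next_state, done), ddqn_target(reward, next_state, done))
```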
Description: Master's thesis
國立政治大學 (National Chengchi University)
應用數學系 (Department of Applied Mathematics)
107751007
    Source URI: http://thesis.lib.nccu.edu.tw/record/#G0107751007
    Data Type: thesis
    DOI: 10.6814/NCCU202200014
Appears in Collections: [應用數學系 Department of Applied Mathematics] Theses
