Please use this identifier to cite or link to this item:
https://nccur.lib.nccu.edu.tw/handle/140.119/124137
Title: | The Construction of TX-TW Pair Trading Strategies based on Deep Reinforcement Learning |
Authors: | Hsu, Yen-Ning |
Contributors: | Chiang, Mi-Hsiu; Hsu, Yen-Ning |
Keywords: | Pairs trading; Reinforcement learning; Neural network; Taiwan Stock Index Futures; MSCI Taiwan Index Futures |
Date: | 2019 |
Issue Date: | 2019-07-01 10:46:58 (UTC+8) |
Abstract: | In this thesis, we construct pair trading strategies using three deep reinforcement learning (DRL) algorithms: Deep Q-Network (DQN), Double DQN, and Dueling DQN. We adopt this approach because the reward mechanism of DRL corresponds naturally to the construction of trading strategies, and because pair trading can effectively reduce market risk. We back-test the strategies on Taiwan Stock Index Futures (TX) and MSCI Taiwan Index Futures (TW) data from 2006/01/01 to 2018/11/16, and design a random strategy and a fixed strategy as benchmarks. The empirical results show that all three DRL strategies outperform the benchmark strategies, and that DQN performs better overall than Double DQN and Dueling DQN. A closer look reveals, however, that each of the three algorithms performs best during some back-testing periods, indicating that the three models learn different rules whose applicability varies across periods. |
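The abstract contrasts DQN, Double DQN, and Dueling DQN. The core algorithmic differences can be sketched as follows; this is a minimal illustration under stated assumptions, not the thesis's implementation, and the three-action spread-trading action space and all function names here are hypothetical.

```python
import numpy as np

def dqn_target(reward, q_next_target, gamma=0.99, done=False):
    """DQN target: bootstrap on the target network's own greedy (max) Q-value."""
    if done:
        return reward
    return reward + gamma * np.max(q_next_target)

def double_dqn_target(reward, q_next_online, q_next_target, gamma=0.99, done=False):
    """Double DQN target: the online network selects the action, the target
    network evaluates it, which reduces Q-value overestimation bias."""
    if done:
        return reward
    best_action = int(np.argmax(q_next_online))
    return reward + gamma * q_next_target[best_action]

def dueling_q(state_value, advantages):
    """Dueling aggregation: Q(s, a) = V(s) + (A(s, a) - mean_a A(s, a))."""
    advantages = np.asarray(advantages, dtype=float)
    return state_value + (advantages - advantages.mean())

# Hypothetical 3-action spread-trading state: long spread / short spread / flat.
q_online = np.array([1.0, 2.0, 0.5])   # online network's Q-values for the next state
q_target = np.array([0.8, 1.5, 3.0])   # target network's Q-values for the next state

print(dqn_target(0.1, q_target))                   # 0.1 + 0.99 * 3.0 = 3.07
print(double_dqn_target(0.1, q_online, q_target))  # 0.1 + 0.99 * 1.5 = 1.585
print(dueling_q(0.5, [1.0, 2.0, 0.0]))             # V plus mean-centered advantages
```

Note how Double DQN bootstraps on a smaller value here: the target network's largest Q-value (3.0) is ignored because the online network prefers a different action, which is precisely the decoupling that tempers overestimation.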
Reference: |
[1] Bellman, R. E. (1957). Dynamic Programming. Princeton University Press, Princeton, NJ. Republished 2003.
[2] Do, B. & Faff, R. (2012). Are Pairs Trading Profits Robust to Trading Costs? The Journal of Financial Research, 35(2), 261-287.
[3] Huang, C. Y. (2018). Financial Trading as a Game: A Deep Reinforcement Learning Approach. arXiv preprint arXiv:1807.02787.
[4] Gatev, E., Goetzmann, W. N., & Rouwenhorst, K. G. (2006). Pairs Trading: Performance of a Relative-Value Arbitrage Rule. The Review of Financial Studies, 19(3), 797-827.
[5] Gold, C. (2003). FX Trading via Recurrent Reinforcement Learning. Proceedings of the IEEE International Conference on Computational Intelligence for Financial Engineering, 363-370.
[6] van Hasselt, H., Guez, A., & Silver, D. (2015). Deep Reinforcement Learning with Double Q-learning. arXiv preprint arXiv:1509.06461.
[7] Lee, J. W., Park, J., O, J., Lee, J., & Hong, E. (2007). A Multiagent Approach to Q-Learning for Daily Stock Trading. IEEE Transactions on Systems, Man, and Cybernetics - Part A: Systems and Humans, 37(6), 864-877.
[8] Kearns, M. & Nevmyvaka, Y. (2013). Machine Learning for Market Microstructure and High Frequency Trading. In: Easley, D., López de Prado, M., & O'Hara, M. (Eds.), High-Frequency Trading: New Realities for Traders, Markets and Regulators, 91-124.
[9] Moody, J. & Saffell, M. (2001). Learning to Trade via Direct Reinforcement. IEEE Transactions on Neural Networks, 12(4), 875-889.
[10] Moody, J., Wu, L., Liao, Y., & Saffell, M. (1998). Performance Functions and Reinforcement Learning for Trading Systems and Portfolios. Journal of Forecasting, 17(5-6), 441-470.
[11] Moody, J. & Wu, L. (1997). Optimization of Trading Systems and Portfolios. In Y. Abu-Mostafa, A. N. Refenes & A. S. Weigend (Eds.), Decision Technologies for Financial Engineering, World Scientific, London, 23-35.
[12] O, J., Lee, J., Lee, J. W., & Zhang, B.-T. (2006). Adaptive Stock Trading with Dynamic Asset Allocation Using Reinforcement Learning. Information Sciences, 176(15), 2121-2147.
[13] Sutton, R. S. & Barto, A. G. (1998). Reinforcement Learning: An Introduction. MIT Press.
[14] Ioffe, S. & Szegedy, C. (2015). Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift. arXiv preprint arXiv:1502.03167.
[15] Vidyamurthy, G. (2004). Pairs Trading: Quantitative Methods and Analysis (Vol. 217). John Wiley & Sons.
[16] Mnih, V., Kavukcuoglu, K., Silver, D., Graves, A., Antonoglou, I., Wierstra, D., & Riedmiller, M. (2013). Playing Atari with Deep Reinforcement Learning. arXiv preprint arXiv:1312.5602.
[17] Mnih, V., Kavukcuoglu, K., Silver, D., Rusu, A. A., Veness, J., Bellemare, M. G., Graves, A., Riedmiller, M., Fidjeland, A. K., Ostrovski, G., et al. (2015). Human-level Control through Deep Reinforcement Learning. Nature, 518(7540), 529-533.
[18] Nevmyvaka, Y., Feng, Y., & Kearns, M. (2006). Reinforcement Learning for Optimized Trade Execution. In Proceedings of the 23rd International Conference on Machine Learning, 673-680.
[19] Wang, Z., Schaul, T., Hessel, M., van Hasselt, H., Lanctot, M., & de Freitas, N. (2015). Dueling Network Architectures for Deep Reinforcement Learning. arXiv preprint arXiv:1511.06581. |
Description: | Master's thesis, National Chengchi University, Department of Money and Banking, 106352004 |
Source URI: | http://thesis.lib.nccu.edu.tw/record/#G0106352004 |
Data Type: | thesis |
DOI: | 10.6814/NCCU201900104 |
Appears in Collections: | [Department of Money and Banking] Theses
Files in This Item: | 200401.pdf (2840 KB, Adobe PDF)
All items in 政大典藏 are protected by copyright, with all rights reserved.