References:

[1] Adya, M., and Collopy, F. (1998). "How effective are neural networks at forecasting and prediction? A review and evaluation". Journal of Forecasting, 17, 481–495.
[2] Bao, W., Yue, J., and Rao, Y. (2017). "A deep learning framework for financial time series using stacked autoencoders and long-short term memory". PLOS ONE, 12(7).
[3] Bengio, Y., Simard, P., and Frasconi, P. (1994). "Learning long-term dependencies with gradient descent is difficult". IEEE Transactions on Neural Networks, 5(2), 157–166.
[4] Caire, P., Hatabian, G., and Muller, C. (1992). "Progress in forecasting by neural networks". Neural Networks, 2, 540–545.
[5] Connor, J., Martin, R., and Atlas, L. (1994). "Recurrent neural networks and robust time series prediction". IEEE Transactions on Neural Networks, 5(2), 240–254.
[6] Funahashi, K.-I. (1989). "On the approximate realization of continuous mappings by neural networks". Neural Networks, 2, 183–192.
[7] Cybenko, G. (1989). "Approximation by superpositions of a sigmoidal function". Mathematics of Control, Signals, and Systems, 2, 303–314.
[8] Enders, W. (2014). Applied Econometric Time Series, 4th Edition. New York: Wiley.
[9] Granger, C. W. J., and Andersen, A. P. (1978). An Introduction to Bilinear Time Series Models. Göttingen: Vandenhoeck & Ruprecht.
[10] Graves, A. (2012). Supervised Sequence Labelling with Recurrent Neural Networks. Berlin: Springer.
[11] Hornik, K. (1991). "Approximation capabilities of multilayer feedforward networks". Neural Networks, 4(2), 251–257.
[12] Hornik, K. (1993). "Some new results on neural network approximation". Neural Networks, 6(8), 1069–1072.
[13] Hornik, K., Stinchcombe, M., and White, H. (1989). "Multilayer feedforward networks are universal approximators". Neural Networks, 2(5), 359–366.
[14] Kim, T. Y., Oh, K. J., Kim, C., and Do, J. D. (2004). "Artificial neural networks for non-stationary time series". Neurocomputing, 61(1–4), 439–447.
[15] Kingma, D. P., and Ba, J. L. (2015). "Adam: A method for stochastic optimization". ICLR, 1–15.
[16] Kuan, C. (2006). Artificial Neural Networks. IEAS Working Paper 06-A010, Institute of Economics, Academia Sinica, Taipei, Taiwan.
[17] Lipton, Z. C., Berkowitz, J., and Elkan, C. (2015). "A critical review of recurrent neural networks for sequence learning". arXiv:1506.00019 [cs.LG].
[18] Refenes, A. N., Azema-Barac, M., Chen, L., and Karoussos, S. A. (1993). "Currency exchange rate prediction and neural network design strategies". Neural Computing & Applications, 1(1), 46–58.
[19] Tong, H., and Lim, K. S. (1980). "Threshold autoregression, limit cycles and cyclical data". Journal of the Royal Statistical Society, Series B, 42(3), 245–292.
[20] Vincent, P., Larochelle, H., Lajoie, I., Bengio, Y., and Manzagol, P.-A. (2010). "Stacked denoising autoencoders: Learning useful representations in a deep network with a local denoising criterion". Journal of Machine Learning Research, 11, 3371–3408.
[21] Weigend, A. S., Huberman, B. A., and Rumelhart, D. E. (1992). "Predicting sunspots and exchange rates with connectionist networks". In: M. Casdagli and S. Eubank (Eds.), Nonlinear Modelling and Forecasting, SFI Studies in the Sciences of Complexity, Proc. Vol. XII. Redwood City: Addison-Wesley, pp. 395–432.
[22] Zhang, G., Patuwo, E. B., and Hu, M. Y. (1998). "Forecasting with artificial neural networks: The state of the art". International Journal of Forecasting, 14, 35–62.