Please use this identifier to cite or link to this item: https://nccur.lib.nccu.edu.tw/handle/140.119/83304
Title: | 類神經網路與混沌現象 The Neural Network and Chaos |
Authors: | 吳慧娟 Wu, Hui-Chuan |
Contributors: | 蔡瑞煌 Tsaih, Ray; 吳慧娟 Wu, Hui-Chuan |
Keywords: | 神經網路系統; 混沌; 有限性; 確定性; 非週期性; 對初始條件的敏感依賴; Neural Network systems; Chaos; Boundedness; Determinism; Aperiodicity; Sensitive dependence on initial conditions |
Date: | 2000 |
Issue Date: | 2016-03-31 15:43:24 (UTC+8) |
Abstract: | This study designs experiments to test whether a neural network system that has learned chaotic data is itself a chaotic system. The test checks whether the trained system exhibits the four defining characteristics of chaotic data: boundedness, determinism, aperiodicity, and sensitive dependence on initial conditions. The trained networks are then used to predict the chaotic model they learned, in order to compare short-term and long-term predictive performance when the trained network behaves as a chaotic system with its performance when it does not.
We also prove theoretically that a neural network system trained on chaotic data can never reconstruct the chaotic model it has learned. Nevertheless, the trained network can sometimes mimic a chaotic system. Using such a network to predict data with chaotic behaviour therefore amounts to using one chaotic system to predict another, and sensitive dependence on initial conditions suggests that this should produce large errors. The experiments show, however, that whether or not the trained network is a chaotic system has no significant effect on its predictive performance.
We hope this thesis contributes to the use of neural network systems for prediction in financial markets and other domains that exhibit chaotic phenomena. |
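As an illustration of the sensitivity test described in the abstract, the following sketch (not taken from the thesis; all function names and parameter values are illustrative assumptions) uses the logistic map as a stand-in for the learned chaotic model. It shows how two orbits started from nearly identical initial conditions separate, and gives a crude estimate of the largest Lyapunov exponent, whose positive value is the usual signature of sensitive dependence on initial conditions.

```python
# Minimal sketch (not from the thesis): the "sensitive dependence on initial
# conditions" check, using the logistic map x_{t+1} = r * x_t * (1 - x_t)
# as a stand-in for the learned chaotic model.
import math

def logistic_map(x, r=4.0):
    """One step of the logistic map, a standard example of a chaotic system."""
    return r * x * (1.0 - x)

def trajectory(x0, steps, step_fn=logistic_map):
    """Iterate the map from x0 for `steps` steps and return the whole orbit."""
    orbit = [x0]
    for _ in range(steps):
        orbit.append(step_fn(orbit[-1]))
    return orbit

def divergence(x0, perturbation=1e-8, steps=50):
    """Distance between two orbits started a tiny perturbation apart.

    Rapid, roughly exponential growth of this distance is the hallmark of
    sensitive dependence on initial conditions.
    """
    a = trajectory(x0, steps)
    b = trajectory(x0 + perturbation, steps)
    return [abs(u - v) for u, v in zip(a, b)]

def lyapunov_estimate(x0, steps=10_000, r=4.0):
    """Crude estimate of the largest Lyapunov exponent of the logistic map:
    the average of log |f'(x_t)| along the orbit, where f'(x) = r * (1 - 2x)."""
    x, total = x0, 0.0
    for _ in range(steps):
        total += math.log(abs(r * (1.0 - 2.0 * x)))
        x = logistic_map(x, r)
    return total / steps

if __name__ == "__main__":
    d = divergence(0.2)
    print("separation after 10 steps:", d[10])
    print("separation after 50 steps:", d[50])   # saturates at the attractor size, order 1
    print("largest Lyapunov exponent ~", lyapunov_estimate(0.2))  # ~ ln 2 for r = 4
```

The same divergence check can, in principle, be run with the iterated output of a trained network in place of logistic_map, which is the kind of comparison the abstract describes.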
Description: | Master's thesis, National Chengchi University, Department of Management Information Systems, 87356001 |
Source URI: | http://thesis.lib.nccu.edu.tw/record/#A2002002104 |
Data Type: | thesis |
Appears in Collections: | [Department of Management Information Systems] Theses