政大機構典藏-National Chengchi University Institutional Repository(NCCUR):Item 140.119/119155
Please use this permanent URL to cite or link to this item: https://nccur.lib.nccu.edu.tw/handle/140.119/119155


Title: 優化資料清理與機器學習的機制
The refined mechanism for data cleaning and machine learning
Authors: Yu, Ai-Chueh (余艾玨)
Contributors: Tsaih, Rua-Huan (蔡瑞煌)
Yu, Ai-Chueh (余艾玨)
Keywords: Artificial neural networks
Regularization
Single-hidden layer feed-forward neural networks
Resistant learning with envelope module
Date: 2018
Upload time: 2018-08-02 16:15:32 (UTC+8)
Abstract: In recent years, artificial intelligence (AI) has played an important role in machine learning applications, and artificial neural networks (ANN) serve as one of the most useful methods for big data analytics compared with statistical approaches. To cope with time series data that may exhibit concept drift and outliers, Wu (2017) derived a mechanism for effective data cleaning and machine learning, and the experimental results showed that the proposed mechanism is promising for both tasks. Wu (2017) implemented the resistant learning with envelope module (RLEM) via adaptive single-hidden layer feed-forward neural networks (SLFN). This research refines that mechanism in two ways: it adds a regularization term to the RLEM loss function to prevent overfitting, and it revises the RLEM, implemented with an updated version of TensorFlow, to improve the accuracy of the predicted carry trade returns.
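The first refinement described in the abstract, adding an L2 (weight-decay) penalty to the loss function of a single-hidden-layer network, can be illustrated with a minimal sketch. This is not the thesis's actual TensorFlow implementation; it is a plain-Python illustration under assumed details (tanh hidden units, mean squared error, and hypothetical names such as `slfn_forward` and `regularized_loss`).

```python
import math

def slfn_forward(x, W, b, beta, b_out):
    """Forward pass of a single-hidden-layer feed-forward network (SLFN):
    a tanh hidden layer followed by a linear output unit."""
    hidden = [math.tanh(sum(w_ij * x_j for w_ij, x_j in zip(w_i, x)) + b_i)
              for w_i, b_i in zip(W, b)]
    return sum(beta_i * h_i for beta_i, h_i in zip(beta, hidden)) + b_out

def regularized_loss(data, W, b, beta, b_out, lam):
    """Mean squared error plus an L2 penalty on all weights.
    The penalty term lam * sum(w^2) discourages large weights,
    which is one standard way to reduce overfitting."""
    mse = sum((slfn_forward(x, W, b, beta, b_out) - y) ** 2
              for x, y in data) / len(data)
    l2 = sum(w ** 2 for row in W for w in row) + sum(v ** 2 for v in beta)
    return mse + lam * l2

# With lam = 0 the loss reduces to plain MSE; lam > 0 adds the penalty.
W = [[0.5, -0.2], [0.1, 0.3]]   # hidden-layer weights (2 hidden units, 2 inputs)
b = [0.0, 0.1]                  # hidden-layer biases
beta = [1.0, -1.0]              # output-layer weights
data = [([1.0, 2.0], 0.5), ([0.0, -1.0], -0.2)]
plain = regularized_loss(data, W, b, beta, 0.0, lam=0.0)
penalized = regularized_loss(data, W, b, beta, 0.0, lam=0.01)
```

The same idea carries over to a TensorFlow implementation by adding the weight-norm term to the training objective before taking gradients.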
References: 1. Android Authority (2018), "Artificial intelligence vs machine learning: what's the difference?", available at https://www.androidauthority.com/artificial-intelligence-vs-machine-learning-832331/ (accessed 5 March 2018)
2. J. Cao, Y. Pang, X. Li, J. Liang (2018), "Randomly translational activation inspired by the input distributions of ReLU," Neurocomputing (275), pp. 859-868
3. D. A. Clevert, T. Unterthiner, S. Hochreiter (2016), "Fast and Accurate Deep Network Learning by Exponential Linear Units (ELUs)," published as a conference paper at ICLR
4. Educational Research Techniques (2016), "Black Box Method-Artificial Neural Networks", available at https://educationalresearchtechniques.com/2016/07/06/black-box-method-artificial-neural-networks/ (accessed 5 March 2018)
5. Enhance Data Science (2017), "Machine Learning Explained: Regularization", available at http://enhancedatascience.com/2017/07/04/machine-learning-explained-regularization/ (accessed 5 March 2018)
6. I. Goodfellow, Y. Bengio, A. Courville (2016), "Deep Learning," The MIT Press
7. S. Y. Huang, J. W. Lin, and R. H. Tsaih (2016), "Outlier Detection in the Concept Drifting Environment," in Proceedings of the International Joint Conference on Neural Networks (IJCNN), pp. 31-37
8. S. Y. Huang, F. Yu, R. H. Tsaih, and Y. Huang (2014), "Resistant learning on the envelope bulk for identifying anomalous patterns," in Proceedings of the 2014 International Joint Conference on Neural Networks (IJCNN), pp. 3303-3310
9. Investopedia, "Carry Trade", available at https://www.investopedia.com/terms/c/carry-trade.asp-0 (accessed 20 March 2018)
10. Ò. Jordà and A. M. Taylor (2012), "The carry trade and fundamentals: Nothing to fear but FEER itself," Journal of International Economics, vol. 88, pp. 74-90
11. F. F. Li, J. Johnson, S. Yeung (2017), "Convolutional Neural Networks for Visual Recognition," Stanford University School of Engineering, available at http://cs231n.stanford.edu/ (accessed 5 March 2018)
12. J. D. Olden, M. K. Joy, R. G. Death (2004), "An accurate comparison of methods for quantifying variable importance in artificial neural networks using simulated data," Ecological Modelling (178:3), pp. 389-397
13. Quora (2013), "Differences between L1 and L2 as Loss Function and Regularization", available at http://www.chioka.in/differences-between-l1-and-l2-as-loss-function-and-regularization/ (accessed 5 March 2018)
14. S. Ruder (2016), "An overview of gradient descent optimization algorithms", available at http://ruder.io/optimizing-gradient-descent/index.html#adam (accessed 5 March 2018)
15. N. Srivastava, G. Hinton, A. Krizhevsky, I. Sutskever, R. Salakhutdinov (2014), "Dropout: A Simple Way to Prevent Neural Networks from Overfitting," Journal of Machine Learning Research (15), pp. 1929-1958
16. The Theory of Everything (2017), "Understanding Activation Functions in Neural Networks", available at https://medium.com/the-theory-of-everything/understanding-activation-functions-in-neural-networks-9491262884e0 (accessed 5 March 2018)
17. Towards Data Science (2017), "Types of Optimization Algorithms used in Neural Networks and Ways to Optimize Gradient Descent", available at https://towardsdatascience.com/types-of-optimization-algorithms-used-in-neural-networks-and-ways-to-optimize-gradient-95ae5d39529f (accessed 5 March 2018)
18. R. H. Tsaih, T. C. Cheng (2009), "A resistant learning procedure for coping with outliers," Annals of Mathematics and Artificial Intelligence (57:2), pp. 161-180
19. J. V. Tu (1996), "Advantages and disadvantages of using artificial neural networks versus logistic regression for predicting medical outcomes," Journal of Clinical Epidemiology (49:11), pp. 1225-1231
20. F. Y. Tzeng, K. L. Ma (2005), "Opening the Black Box — Data Driven Visualization of Neural Networks," IEEE Visualization
21. L. Wan, M. Zeiler, S. Zhang, Y. L. Cun, R. Fergus (2013), "Regularization of Neural Networks using DropConnect," Proceedings of the 30th International Conference on Machine Learning, PMLR (28:3), pp. 1058-1066
22. J. Wright, A. Y. Yang, A. Ganesh, S. S. Sastry, Y. Ma (2009), "Robust face recognition via sparse representation," IEEE Transactions on Pattern Analysis and Machine Intelligence (31:1), pp. 210-227
23. J. Wu (2017), "Application of Machine Learning to Predicting the Returns of Carry Trade," Unpublished Master Thesis, National Chengchi University, Taipei
24. S. N. Zeng, J. P. Gou, L. M. Deng (2017), "An antinoise sparse representation method for robust face recognition via joint l1 and l2 regularization," Expert Systems with Applications (82), pp. 1-9
Description: Master's thesis
National Chengchi University
Department of Management Information Systems
105356017
Source: http://thesis.lib.nccu.edu.tw/record/#G0105356017
Data type: thesis
DOI: 10.6814/THE.NCCU.MIS.011.2018.A05
Appears in Collections: [Department of Management Information Systems] Theses

Files in this item:
601701.pdf (2094 KB, Adobe PDF)


All items in NCCUR are protected by copyright, with all rights reserved.



Copyright Announcement
1. The digital content of this website is part of the National Chengchi University Institutional Repository. It provides free access for academic research and public education on a non-commercial basis. Please use the content in a proper and reasonable manner and respect the rights of copyright owners. For commercial use, please obtain authorization from the copyright owner in advance.
2. The NCCU Institutional Repository endeavors to protect the interests of copyright owners. If you believe that any material on this website infringes copyright, please contact the site staff (nccur@nccu.edu.tw); the work will be removed from the repository and your claim investigated.