National Chengchi University Institutional Repository (NCCUR): Item 140.119/122257
    Please use this identifier to cite or link to this item: https://nccur.lib.nccu.edu.tw/handle/140.119/122257


    Title: Bipartite Majority Learning with Tensors
    Authors: Lee, Chia-Lun (李佳倫)
    Contributors: Yu, Fang (郁方); Lee, Chia-Lun (李佳倫)
    Keywords: Bipartite majority learning
    Resistant learning
    Malware classification
    Date: 2019
    Issue Date: 2019-02-12 15:41:32 (UTC+8)
    Abstract: A great deal of attention has been given to machine learning owing to the remarkable achievements of AlphaGo and AI robots. Since then, machine learning techniques have been widely used in computer vision, information retrieval, and speech recognition. However, datasets inevitably contain statistical outliers or mislabeled instances, and these anomalies can interfere with the effectiveness of learning. In a dynamic environment where the majority pattern changes, it becomes even harder to distinguish anomalies from majorities. This work addresses the research issue of resistant learning on categorical data. Specifically, we propose an efficient bipartite majority learning algorithm for data classification with tensors. We adopt a resistant learning approach so that anomalies do not significantly affect model training, and then iteratively conduct bipartite classification on the majority data. The learning system is implemented with the TensorFlow API and uses GPUs to speed up the training process.
    Our experimental results on malware classification show that the proposed bipartite majority learning algorithm significantly reduces training time while maintaining competitive classification accuracy compared to previous resistant learning algorithms.
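    The sketch below illustrates the general resistant-learning pattern summarized in the abstract: fit a model, rank per-sample losses, keep the low-residual majority, and refit on it so that outliers and mislabeled samples cannot dominate training. It is a minimal, hypothetical illustration, not the thesis's algorithm: the single-hidden-layer architecture, the 90% keep-ratio, the number of refit rounds, and the names build_model and resistant_bipartite_fit are all assumptions made for this example.

```python
# Minimal sketch of iterative majority refitting with TensorFlow/Keras.
# All hyperparameters and function names here are illustrative assumptions.
import numpy as np
import tensorflow as tf

def build_model(n_features):
    # Small single-hidden-layer binary classifier (assumed architecture).
    model = tf.keras.Sequential([
        tf.keras.Input(shape=(n_features,)),
        tf.keras.layers.Dense(32, activation="tanh"),
        tf.keras.layers.Dense(1, activation="sigmoid"),
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy")
    return model

def resistant_bipartite_fit(X, y, keep_ratio=0.9, rounds=5):
    """Iteratively refit on the majority of samples with the smallest losses."""
    majority_idx = np.arange(len(X))
    model = build_model(X.shape[1])
    loss_fn = tf.keras.losses.BinaryCrossentropy(reduction="none")
    for _ in range(rounds):
        model.fit(X[majority_idx], y[majority_idx], epochs=20, verbose=0)
        # Per-sample loss over the full data set; anomalies tend to rank highest.
        per_sample = loss_fn(y, model.predict(X, verbose=0)).numpy()
        n_keep = int(keep_ratio * len(X))
        majority_idx = np.argsort(per_sample)[:n_keep]
    return model, majority_idx

# Toy usage on synthetic two-class data with a few injected label errors.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 8)).astype("float32")
y = (X[:, 0] > 0).astype("float32").reshape(-1, 1)
y[:10] = 1 - y[:10]  # mislabeled points standing in for anomalies
model, kept = resistant_bipartite_fit(X, y)
```

    The design choice worth noting is that the loss is evaluated over the entire data set each round, while the refit uses only the retained majority indices, so a sample excluded in one round can re-enter the majority later if the updated model fits it well.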
    Description: Master's thesis
    National Chengchi University
    Department of Management Information Systems
    104356041
    Source URI: http://thesis.lib.nccu.edu.tw/record/#G0104356041
    Data Type: thesis
    DOI: 10.6814/THE.NCCU.MIS.001.2019.A05
    Appears in Collections: [Department of MIS] Theses

    Files in This Item:

    File: 604101.pdf, Size: 2328 KB, Format: Adobe PDF


    All items in 政大典藏 are protected by copyright, with all rights reserved.


