政大機構典藏-National Chengchi University Institutional Repository(NCCUR):Item 140.119/124747


Please use this persistent URL to cite or link to this item: https://nccur.lib.nccu.edu.tw/handle/140.119/124747


Title: 神經網路與機器學習方法建構P2P信用評分模型: 以Lending Club為例
    Neural Network and Machine Learning to Construct P2P Lending Credit Score Model: A Case of Lending Club
Author: 楊立楷
    Yang, Li-Kai
Contributors: 林士貴
    蔡瑞煌

Keywords: P2P借貸
    信用評分
    機器學習
    類神經網路
    特徵工程
    Lending Club
    P2P lending
    Credit score
    Machine learning
    Neural network
    Feature engineering
    Lending Club
Date: 2019
Date Uploaded: 2019-08-07 16:14:20 (UTC+8)
Abstract: 本研究以不同之神經網路與機器學習方法,包含羅吉斯回歸、支援向量機、決策樹、隨機森林、XGBoost、LightGBM與類神經網路七種,分別建構P2P信用評分模型,並經由交叉驗證方式找尋最佳超參數組合,再計算測試表現,綜合比較得到最適合於實務上應用之信用評分模型。
    本研究使用之資料集來自Lending Club網站公開之P2P貸款資料,首先使用特徵工程的概念進行資料清理,而後為了找尋顯著影響違約的因子,使用XGBoost方法預訓練一次所有資料得到特徵重要度,將重要度最高的數個特徵篩選出,以供模型做為違約因子使用。
    經由比較後,最終得知GBDT方法,包括XGBoost與LightGBM,最適合用於建構P2P評分模型,表現超越神經網路與傳統上一般使用之羅吉斯回歸,而其中以XGBoost表現最為優異。
This study applies seven machine learning and neural network methods (logistic regression, support vector machine, decision tree, random forest, XGBoost, LightGBM, and an artificial neural network) to construct credit-score models for P2P loans. For each method, the best set of hyperparameters is found by cross-validation; training time and test performance are then computed, and a comprehensive comparison identifies the credit-score model best suited to practical application.
The data set is the public P2P loan data from the Lending Club website. The data are first cleaned using feature-engineering concepts. Then, to find significant default factors, XGBoost is pre-trained on all the data to obtain feature importances, and the most important features are selected as the default factors used in modelling.
The comparison shows that the GBDT methods, XGBoost and LightGBM, are the most appropriate for constructing a P2P credit-score model: they outperform the neural network and logistic regression, the method traditionally used in credit scoring, and among all methods XGBoost performs best.
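The two-step workflow described in the abstract (pre-train a gradient-boosted tree model on all data to rank feature importance, then tune hyperparameters by cross-validation on the selected features) can be sketched as below. This is an illustrative reconstruction, not the thesis's actual code: scikit-learn's `GradientBoostingClassifier` and a synthetic data set stand in for XGBoost and the Lending Club loan table, and all names, feature counts, and parameter grids here are assumptions.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import GridSearchCV

# Toy stand-in for the Lending Club loan table:
# X = borrower features, y = default flag (1 = default).
X, y = make_classification(
    n_samples=500, n_features=20, n_informative=5, random_state=0
)

# Step 1: pre-train a GBDT on all data to rank feature importance,
# then keep the most important features as candidate default factors.
pre = GradientBoostingClassifier(random_state=0).fit(X, y)
top = np.argsort(pre.feature_importances_)[::-1][:8]  # 8 is illustrative

# Step 2: cross-validated hyperparameter search on the reduced feature set.
grid = GridSearchCV(
    GradientBoostingClassifier(random_state=0),
    {"n_estimators": [50, 100], "max_depth": [2, 3]},
    cv=3,
    scoring="roc_auc",
)
grid.fit(X[:, top], y)
print(grid.best_params_, round(grid.best_score_, 3))
```

In the thesis the same pattern would be repeated per method (SVM, random forest, neural network, and so on), with each model's best cross-validated configuration compared on a held-out test set.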
References: Chinese references
1. 中央銀行 (2018), "P2P Lending Developments in Major Countries and Lessons Learned." Reference material for the press conference following the Central Bank board meeting, September 27, 2018.
2. 朱君亞 (2018), "An Early-Warning Model for Financial Stress Events: A Comparison of Neural Networks, Support Vector Machines, and Logistic Regression." Master's thesis, Department of Money and Banking, National Chengchi University.
3. 陳勃文 (2018), "Machine Learning Applications in P2P Lending Credit Risk Models: A Case of Lending Club." Master's thesis, Department of Money and Banking, National Chengchi University.
English references
    1. Aldrich, J. H., & Nelson, F. D. (1984), Quantitative Applications in the Social Sciences: Linear Probability, Logit, and Probit Models. Thousand Oaks, CA: SAGE Publications.
    2. Alexander, V. E. & Clifford, C. C. (1996), Categorical Variables in Developmental Research: Methods of Analysis. Elsevier.
    3. Arya, S., Eckel C. & Wichman C. (2013), “Anatomy of the Credit Score.” Journal of Economic Behavior & Organization, Vol. 95, 175-185.
    4. Baesens, B., Gestel, T. V., Stepanova M. & Poel, D. V. D. (2004), “Neural Network Survival Analysis for Personal Loan Data.” Journal of the Operational Research Society, Vol. 56, 1089-1098.
    5. Bishop, C. M. (2006), Pattern Recognition and Machine Learning. Springer.
    6. Bolton, C. (2009), Logistic Regression and its Application in Credit Scoring. University of Pretoria.
    7. Breiman, L. (1996), “Bagging Predictors.” Machine Learning, Vol. 24, No. 2, 123-140.
    8. Breiman, L. (2001). “Random Forests.” Machine Learning, Vol. 45, No. 1, 5-32.
    9. Breiman, L., Friedman, J., Stone, C. J. & Olshen, R. A. (1984), Classification and Regression Trees., Taylor & Francis.
    10. Brown, M., Grundy, M., Lin, D., Cristianini, N., Sugnet, C., Furey, T., Ares, M. & Haussler, D. (1999), “Knowledge-Base Analysis of Microarray Gene Expression Data Using Support Vector Machines.” Technical Report, University of California in Santa Cruz.
11. Chen, T. Q. & Guestrin, C. (2016), “XGBoost: A Scalable Tree Boosting System.” KDD '16 Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 785-794.
    12. Crouhy, M., Galai D. & Mark R. (2014), The Essentials of Risk Management 2nd Edition. McGraw-Hill.
    13. Cybenko, G. (1989), “Approximation by Superpositions of a Sigmoidal Function Mathematics of Control.” Signals and Systems, Vol. 2, No. 4, 303-314.
    14. Dietterich, T. G. (2000), “An Experimental Comparison of Three Methods for Constructing Ensembles of Decision Trees: Bagging, Boosting, and Randomization.” Machine Learning, Vol. 40, No. 2, 139-157.
    15. Duchi, J., Hazan, E. & Singer, Y. (2011), “Adaptive Subgradient Methods for Online Learning and Stochastic Optimization.” Journal of Machine Learning Research, Vol. 12, 2121–2159.
    16. Elrahman, S. M. A & Abraham, A. (2013), “A Review of Class Imbalance Problem.” Journal of Network and Innovative Computing, Vol. 1, 332-340.
    17. Everett, C. R. (2015). “Group Membership, Relationship Banking and Loan Default Risk: the Case of Online Social Lending.” Banking and Finance Review, Vol.7, No.2, 15-54.
    18. Fletcher, R. (1981), “A Nonlinear Programming Problem in Statistics.” SIAM Journal on Scientific and Statistical Computing, Vol. 2, No. 3, 257-267.
    19. Friedman, J. H. (2001), “Greedy Function Approximation: A Gradient Boosting Machine.” The Annals of Statistics, Vol. 29, No. 5, 1189-1232.
    20. Genuer, R., Poggi, J. M. & Tuleau-Malot, C. (2010), “Variable selection Using Random Forests.” Pattern Recognition Letters, Vol. 31, No. 14, 2225-2236
    21. Glorot, X. & Bengio, Y. (2010), “Understanding the Difficulty of Training Deep Feedforward Neural Networks.” Journal of Machine Learning Research, Vol. 9, 249-256
    22. Guyon, I. & ElNoeeff, A. (2003), “An Introduction to Variable and Feature Selection.” The Journal of Machine Learning Research, Vol. 3, 1157-1182.
    23. He, K. M., Zhang, X. Y., Ren, S. Q. & Sun, J. (2015), “Delving Deep into Rectifiers: Surpassing Human-Level Performance on ImageNet Classification.” Arxiv.
    24. Ho, T. K. (1995). “Random Decision Forest.” Proceeding of the 3rd International Conference on Document Analysis and Recognition, 278-282.
    25. Ho, T. K. (1998). "The Random Subspace Method for Constructing Decision Forests." IEEE Trans. on Pattern Analysis and Machine Intelligence, Vol. 20, No. 8, 832-844.
    26. Hochreiter, S., Bengio, Y., Frasconi, P., Schmidhuber, J. (2001), Gradient Flow in Recurrent Nets: the Difficulty of Learning Long-Term Dependencies.
27. Hsu, C. W., Chang, C. C. & Lin, C. J. (2003), “A Practical Guide to Support Vector Classification.” Technical Report, Department of Computer Science and Information Engineering, National Taiwan University, 1-12.
28. Ioffe, S. & Szegedy, C. (2015), “Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift.” Arxiv.
    29. Iyer, R., Khwaja, A. I., Luttmer, E. F., & Shue, K. (2009), “Screening in New Credit Markets: Can Individual Lenders Infer Borrower Creditworthiness in Peer-to-Peer Lending?” AFA 2011 Denver Meetings Paper.
    30. Kang, H. (2013), “The Prevention and Handling of the Missing Data.” Korean Journal of Anesthesiology, Vol. 64, No. 5, 402-406.
    31. Ke, G. L., Meng, Q., Finley, T., Wang, T. F., Chen, W., Ma, W. D., Ye, Q. W. & Liu, T. Y. (2017), “LightGBM: A highly Efficient Gradient Boosting Decision Tree.” Neural Information Processing Systems, 3149-3157.
    32. Keogh, E. & Mueen, A. (2017), “Curse of Dimensionality.” Encyclopedia of Machine Learning and Data Mining, Springer, Boston, MA.
    33. Kingma, D. P. & Ba, J. L. (2015), “Adam: a Method for Stochastic Optimization.” International Conference on Learning Representations, 1–13.
    34. Krizhevsky, A., Sutskever, I., & Hinton, G. E. (2012), “Imagenet Classification with Deep Convolutional Neural Networks.” Advances in Neural Information Processing Systems, 1097-1105.
    35. Lantz, B. (2013), Machine Learning with R. Packt Publishing Limited.
    36. Lin, H. T & Lin, C. J. (2003), “A Study on Sigmoid Kernels for SVM and the Training of Non-PSD Kernels by SMO-type Methods.” Technical Report, Department of Computer Science & Information Engineering, National Taiwan University.
    37. Lu, L., Shin, Y. J., Su, Y. H. & Karniadakis, G. E. (2019), “Dying ReLU and Initialization: Theory and Numerical Examples.” Arxiv.
    38. Maas, A. L., Hannun, A. Y. & Ng, A. Y. (2013), “Rectifier Nonlinearities Improve Neural Network Acoustic Models.” ICML Workshop on Deep Learning for Audio, Speech, and Language Processing.
    39. Madasamy, K. & Ramaswami, M. (2017), “Data Imbalance and Classifiers: Impact and Solutions from a Big Data Perspective.” International Journal of Computational Intelligence Research, Vol. 13, No. 9, 2267-2281.
    40. McCulloch, W. S. & Pitts, W. (1943), “A Logical Calculus of the Ideas Immanent in Nervous Activity.” The Bulletin of Mathematical Biophysics, Vol. 5, No. 4, 115-133.
    41. Mester, L. J. (1997), “What’s the Point of Credit Scoring?” Business Review, No. 3, 3-16.
    42. Mijwel, M. M. (2018), “Artificial Neural Networks Advantages and Disadvantages.”
    43. Mills, K. G. & McCarthy, B. (2016), “The State of Small Business Lending: Innovation and Technology and the Implications for Regulation.” HBS Working Paper No. 17-042.
    44. Milne, A. & Parboteeah, P. (2016) “The Business Models and Economics of Peer-to-Peer Lending.” ECRI Research Report, No. 17.
45. Mountcastle, V. B. (1957), “Modality and Topographic Properties of Single Neurons of Cat's Somatic Sensory Cortex.” Journal of Neurophysiology, Vol. 20, 408-434.
46. Ng, A. Y. (2004), “Feature Selection, L1 vs. L2 Regularization, and Rotational Invariance.” ICML '04 Proceedings of the Twenty-First International Conference on Machine Learning, 78-85.
    47. Ohlson, J. A. (1980), “Financial Ratios and the Probabilistic Prediction of Bankruptcy.” Journal of Accounting Research, Vol. 18, No. 1, 109-131.
    48. Patro, S. G. K. & Sahu, K. K. (2015), “Normalization: A Preprocessing Stage.” ArXiv.
    49. Pontil, M. & Verri, A. (1998), “Support Vector Machines for 3D Object Recognition.” IEEE Transaction On PAMI, Vol. 20, 637-646.
    50. Qian, N. (1999), “On the Momentum Term in Gradient Descent Learning Algorithms.” Neural Networks : The Official Journal of the International Neural Network Society, Vol. 12, No.1, 145–151
    51. Quinlan, J. R. (1987), “Simplifying Decision Trees.” International Journal of Man-Machine Studies, Vol. 27, No. 3, 221-234.
    52. Quinlan, J. R. (1993), C4.5: Programs for Machine Learning. Morgan Kaufmann Publishers Inc. San Francisco, CA, USA.
    53. Raina, R., Madhavan, A. & Ng, A. Y. (2009), "Large-Scale Deep Unsupervised Learning Using Graphics Processors.” Proceedings of the 26th International Conference on Machine Learning.
    54. Rajan, U., Seru, A. & Vig, V. (2015), “The Failure of Models that Predict Failure: Distance, Incentives, and Defaults.” Journal of Financial Economics, Vol. 115, No. 2, 237-260.
    55. Rosenblatt, F. (1958), “The Perceptron: A Probabilistic Model for Information Storage and Organization in the Brain.” Psychological Review, Vol. 65, No. 6, 386-408.
    56. Ruder, S. (2017), “An Overview of Gradient Descent Optimization Algorithms.” Arxiv.
    57. Rumelhart, D. E., Hinton, G. E. & Williams, R. J. (1986), “Learning Representations by Back-Propagating Errors.” Nature, Vol. 323, 533-536.
    58. Samitsu, A. (2017), “The Structure of P2P Lending and Legal Arrangements: Focusing on P2P Lending Regulation in the UK.” IMES Discussion Paper Series, No. 17-J-3.
    59. Serrano-Cinca, C., Gutierrez-Nieto, B., & López-Palacios, L. (2015), “Determinants of Default in P2P Lending.” PloS One, Vol. 10, No. 10, e0139427.
    60. Shannon, C. (1948), “A Mathematical Theory of Communication.” The Bell System Technical Journal, Vol. 27, No. 3, 379-423.
    61. Shelke, M. S., Deshmukh, P. R. & Shandilya, V. K. (2017), “A Review on Imbalanced Data Handling using Undersampling and Oversampling Technique.” International Journal of Recent Trends in Engineering and Research.
    62. Singh, S. & Gupta, P. (2014), “Comparative Study Id3, Cart and C4.5 Decision Tree Algorithm: A Survey.” International Journal of Advanced Information Science and Technology (IJAIST), Vol.3, No.7.
    63. Srivastava, N., Hinton, G., Krizhevsky, A., Sutskever, I. & Salakhutdinov, R. (2014), “Dropout: A Simple Way to Prevent Neural Networks from Overfitting.” Journal of Machine Learning Research, Vol. 15, 1929-1958.
    64. Thomas, L. C. (2000), “A Survey of Credit and Behavioural Scoring: Forecasting Financial Risk of Lending to Consumers.” International Journal of Forecasting, Vol. 16, No. 2, 149-172.
65. Tieleman, T. & Hinton, G. (2012), “Lecture 6.5 - RMSProp, COURSERA: Neural Networks for Machine Learning.” Technical report.
    66. Wang, Z., Cui, P., Li, F. T., Chang, E. & Yang, S. Q. (2014), “A Data-Driven Study of Image Feature Extraction and Fusion.” Information Sciences, Vol. 281, 536-558.
Description: Master's thesis
National Chengchi University
Department of Money and Banking
1063520292
Source: http://thesis.lib.nccu.edu.tw/record/#G1063520292
Data Type: thesis
DOI: 10.6814/NCCU201900106
Appears in Collections: [Department of Money and Banking] Theses

Files in This Item:

029201.pdf (2,304 KB, Adobe PDF)

