National Chengchi University Institutional Repository (NCCUR): Item 140.119/124722
    Please use this permanent URL to cite or link to this item: https://nccur.lib.nccu.edu.tw/handle/140.119/124722


    Title: 集成式學習應用於電子零組件供應鏈需求預測
    Ensemble learning for demand forecasting in electronics supply chains
    Author: 郭泰竹
    Kuo, Tai-Chu
    Contributors: 莊皓鈞
    周彥君

    Chuang, Hao-Chun
    Chou, Yen-Chun

    郭泰竹
    Kuo, Tai-Chu
    Keywords: 電子供應鏈
    需求預測
    集成式學習
    Electronics supply chains
    Demand forecasting
    Ensemble learning
    Date: 2019
    Date uploaded: 2019-08-07 16:09:08 (UTC+8)
    Abstract: With the rapid growth of the semiconductor and finished-electronics industries, the supply of electronic components has become a critical link in this industrial chain. The distributor plays a particularly important role in the supply chain: it must promptly relay downstream demand to upstream vendors to keep supply and demand aligned, while also controlling inventory levels to reduce holding costs. According to the literature, semiconductor distributors face three difficulties: long demand lead times, high demand volatility, and short component life cycles. The research question of this study is: how can lead-time demand be forecast when demand is unstable and data are limited? We aim to use machine learning techniques to forecast demand over future lead times, helping the distributor maintain sound inventory levels and improve operating profit.
    This study collaborates with a representative component distributor in Asia. Despite its large revenue, the distributor faces considerable inventory costs: both over-ordering and under-ordering incur holding and shortage costs. Using roughly three years of the distributor's data, combined with downstream manufacturers' matrices of expected future demand quantities, we construct forecasting scenarios for the distributor's component data. The objective is to forecast more accurately, over the future lead time, downstream manufacturers' demand for the distributor's components.
    To achieve this goal, the study uses machine learning algorithms within an ensemble framework. We first apply temporal aggregation to the unstable demand data, then train multiple demand-forecasting models on cross-product training samples using different algorithms (including XGBoost, LightGBM, and Random Forest) with many randomly sampled hyperparameter combinations. The models are then combined with ensemble methods, namely simple averaging and hill-climbing combined with forward selection, to reduce the bias of any single model's predictions. In tests on a smaller sample with complete data and a larger sample with many missing values, we find that the proposed data construction and ensemble forecasting significantly reduce demand forecast errors relative to every single model and improve on the distributor's existing forecasting method for 70%-80% of the components, enabling better inventory control.
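    A minimal sketch may help clarify the temporal aggregation step. The example below is an illustrative assumption (the bucket size, variable names, and pandas-based implementation are not taken from the thesis): it sums consecutive periods of a volatile demand series into lower-frequency observations, reducing noise before model training.

    import numpy as np
    import pandas as pd

    def aggregate_demand(demand: pd.Series, bucket: int = 4) -> pd.Series:
        # Sum every `bucket` consecutive periods into one lower-frequency
        # observation; non-overlapping aggregation smooths erratic demand.
        groups = np.arange(len(demand)) // bucket
        return demand.groupby(groups).sum()

    # Example: 12 weekly observations aggregated into 3 four-week buckets.
    weekly = pd.Series([0, 5, 0, 9, 2, 0, 0, 7, 4, 0, 1, 0])
    print(aggregate_demand(weekly, bucket=4).tolist())  # [14, 9, 5]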
    As the semiconductor and electronics industries grow, the supply of components has become an important part of this industrial chain. In particular, the distributor is a key player in the supply chain because it is responsible for holding inventories to meet demand from downstream plants while sourcing from upstream vendors with long production lead times. According to the related literature, distributors face problems including long lead times, high demand volatility, and short component life cycles. This study addresses the following question: how to predict demand over a long lead time with limited data and unstable demand series?
    This study collaborates with a representative electronics distributor in Asia. Despite its large revenue, the distributor operates under high inventory costs arising from excess holdings or shortages. We first collect three years of data on downstream demand and rolling forecasts. After data engineering, we build forecasting scenarios from these data sets. The objective is to improve the accuracy of demand forecasting so that the distributor can better prepare for volatile demand from downstream production plants.
    To achieve this goal, this study applies machine learning and ensemble learning techniques. First, we use temporal aggregation to smooth the demand series and reduce noise. We then construct cross-item training samples for different algorithms. After training models with random hyperparameter search, we use model averaging and hill-climbing methods to ensemble the models' predictions. Tests on a relatively small sample with complete data and a larger sample with incomplete data show that the proposed approaches reduce prediction errors compared with single models. Our method also outperforms the distributor's internal forecasting method for 70%-80% of components, so the company can apply it for better inventory control.
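    To make the ensemble step concrete, the following is a minimal sketch of hill-climbing ensemble selection with forward selection, in the spirit of Caruana et al. (2004) cited below; the model names, validation data, and RMSE criterion are illustrative assumptions rather than the thesis's exact implementation. Starting from an empty ensemble, it repeatedly adds (with replacement) the base model whose inclusion most reduces validation error of the averaged prediction, and stops when no addition helps.

    import numpy as np

    def hill_climb_ensemble(preds: dict, y_val: np.ndarray, max_iters: int = 50):
        # Greedy forward selection over base models' validation predictions.
        def rmse(p):
            return float(np.sqrt(np.mean((p - y_val) ** 2)))

        selected = []                                  # model names, repeats allowed
        running_sum = np.zeros_like(y_val, dtype=float)
        best_err = np.inf

        for _ in range(max_iters):
            best_name, best_cand_err = None, best_err
            for name, p in preds.items():
                err = rmse((running_sum + p) / (len(selected) + 1))
                if err < best_cand_err:
                    best_name, best_cand_err = name, err
            if best_name is None:                      # no model improves the ensemble
                break
            selected.append(best_name)
            running_sum += preds[best_name]
            best_err = best_cand_err

        weights = {n: selected.count(n) / len(selected) for n in set(selected)}
        return weights, best_err

    # Example with three hypothetical base learners on a tiny validation set.
    y = np.array([10.0, 4.0, 7.0, 12.0])
    preds = {
        "xgboost": np.array([11.0, 3.0, 8.0, 11.0]),
        "lightgbm": np.array([9.0, 5.0, 6.0, 13.0]),
        "random_forest": np.array([10.5, 4.5, 7.5, 12.5]),
    }
    weights, err = hill_climb_ensemble(preds, y)
    print(weights, round(err, 3))

    Under this sketch, simple model averaging corresponds to the special case in which every base model is selected exactly once and given equal weight.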
    References: Breiman, L. (1996). Stacked regressions. Machine Learning, 24(1), pp. 49-64.
    Breiman, L. (2001). Random forests. Machine Learning, 45(1), pp. 5-32.
    Carbonneau, R., Laframboise, K., & Vahidov, R. (2008). Application of machine learning techniques for supply chain demand forecasting. European Journal of Operational Research, 184(3), pp. 1140-1154.
    Chen, T., & Guestrin, C. (2016, August). XGBoost: A scalable tree boosting system. In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (pp. 785-794). ACM.
    Freund, Y., & Schapire, R. E. (1996, July). Experiments with a new boosting algorithm. In ICML (Vol. 96, pp. 148-156).
    Friedman, J. H. (2001). Greedy function approximation: A gradient boosting machine. Annals of Statistics, pp. 1189-1232.
    Graczyk, M., Lasota, T., Trawiński, B., & Trawiński, K. (2010, March). Comparison of bagging, boosting and stacking ensembles applied to real estate appraisal. In Asian conference on intelligent information and database systems (pp. 340-350). Springer, Berlin, Heidelberg.
    Guo, H., Sun, F., Cheng, J., Li, Y., & Xu, M. (2016). A novel margin-based measure for directed hill climbing ensemble pruning. Mathematical Problems in Engineering, 2016.
    Hill, T., O'Connor, M., & Remus, W. (1996). Neural network models for time series forecasts. Management Science, 42(7), pp. 1082-1092.
    Hyndman, R. J., & Koehler, A. B. (2006). Another look at measures of forecast accuracy. International Journal of Forecasting, 22(4), pp. 679-688.
    Ke, G., Meng, Q., Finley, T., Wang, T., Chen, W., Ma, W., ... & Liu, T. Y. (2017). LightGBM: A highly efficient gradient boosting decision tree. In Advances in Neural Information Processing Systems (pp. 3146-3154).
    Meddahi, N., & Renault, E. (2004). Temporal aggregation of volatility models. Journal of Econometrics, 119(2), pp. 355-379.
    Mendes-Moreira, J., Soares, C., Jorge, A. M., & Sousa, J. F. D. (2012). Ensemble approaches for regression: A survey. ACM Computing Surveys, 45(1), 10.
    Montero-Manso, P., Athanasopoulos, G., Hyndman, R. J., & Talagala, T. S. (2018). FFORMA: Feature-based forecast model averaging. Monash Econometrics and Business Statistics Working Papers, 19(18), 2018-19.
    Natekin, A., & Knoll, A. (2013). Gradient boosting machines, a tutorial. Frontiers in Neurorobotics, 7, 21.
    Nikolopoulos, K., Syntetos, A. A., Boylan, J. E., Petropoulos, F., & Assimakopoulos, V. (2011). An aggregate–disaggregate intermittent demand approach (ADIDA) to forecasting: an empirical proposition and analysis. Journal of the Operational Research Society, 62(3), pp. 544-554.
    Opitz, D., & Maclin, R. (1999). Popular ensemble methods: An empirical study. Journal of Artificial Intelligence Research, 11, pp. 169-198.
    Regattieri, A., Gamberi, M., Gamberini, R., & Manzini, R. (2005). Managing lumpy demand for aircraft spare parts. Journal of Air Transport Management, 11(6), pp. 426-431.
    Ren, Y., Zhang, L., & Suganthan, P. N. (2016). Ensemble classification and regression: Recent developments, applications and future directions. IEEE Computational Intelligence Magazine, 11(1), pp. 41-53.
    Caruana, R., Niculescu-Mizil, A., Crew, G., & Ksikes, A. (2004, July). Ensemble selection from libraries of models. In Proceedings of the Twenty-First International Conference on Machine Learning (p. 18). ACM.
    Rostami‐Tabar, B., Babai, M. Z., Syntetos, A., & Ducq, Y. (2013). Demand forecasting by temporal aggregation. Naval Research Logistics (NRL), 60(6), pp. 479-498.
    Sill, J., Takács, G., Mackey, L., & Lin, D. (2009). Feature-weighted linear stacking. arXiv preprint arXiv:0911.0460.
    Sun, Q., & Pfahringer, B. (2011, December). Bagging ensemble selection. In Australasian Joint Conference on Artificial Intelligence (pp. 251-260). Springer, Berlin, Heidelberg.
    Syntetos, A. A., Babai, Z., Boylan, J. E., Kolassa, S., & Nikolopoulos, K. (2016). Supply chain forecasting: Theory, practice, their gap and the future. European Journal of Operational Research, 252(1), pp. 1-26.
    Syntetos, A. (2001). Forecasting of intermittent demand (Doctoral dissertation, Brunel University).
    Syntetos, A. A., & Boylan, J. E. (2005). The accuracy of intermittent demand estimates. International Journal of Forecasting, 21(2), pp. 303-314.
    Willemain, T. R., Smart, C. N., Shockor, J. H., & DeSautels, P. A. (1994). Forecasting intermittent demand in manufacturing: A comparative evaluation of Croston's method. International Journal of Forecasting, 10(4), pp. 529-538.
    Zhou, Z. H., & Tang, W. (2003, May). Selective ensemble of decision trees. In International Workshop on Rough Sets, Fuzzy Sets, Data Mining, and Granular-Soft Computing (pp. 476-483). Springer, Berlin, Heidelberg.
    Zhou, Z. H., Wu, J., & Tang, W. (2010). Corrigendum to “Ensembling neural networks: Many could be better than all”[Artificial Intelligence 137 (1–2)(2002) 239–263]. Artificial Intelligence, 174(18), 1570.
    Description: Master's thesis
    National Chengchi University
    Department of Management Information Systems
    107356002
    Source: http://thesis.lib.nccu.edu.tw/record/#G0107356002
    Data type: thesis
    DOI: 10.6814/NCCU201900450
    Appears in Collections: [Department of Management Information Systems] Theses

    Files in this item:
    600201.pdf (1179 KB, Adobe PDF)

