Please use this persistent URL to cite or link to this item: https://nccur.lib.nccu.edu.tw/handle/140.119/159043


Title: 結合資料切分與階段式分類之混合型支援向量機模型
A Hybrid Support Vector Machine Model Combining Data Partitioning and Staged Classification
Author: 劉松憲 (Liu, Sung-Hsien)
Contributors: 張志浩; 劉松憲 (Liu, Sung-Hsien)
Keywords: Support Vector Machine; Hybrid Model; Data Partitioning; Binary Classification
Date: 2025
Upload time: 2025-09-01 14:50:26 (UTC+8)
Abstract: In the field of machine learning, constructing linear or nonlinear classification structures for categorical data that ensure both model interpretability and predictive performance has long been a critical and challenging task. Although the traditional Support Vector Machine (SVM) demonstrates strong classification capability, its performance depends heavily on the boundary structure of the data and the choice of kernel function; when the data exhibit heterogeneous boundary characteristics, a single-kernel model may fall short in capturing such complexity. This study proposes a Hybrid Support Vector Machine (Hybrid SVM) model that integrates data partitioning with a staged classification strategy. First, a linear-kernel SVM with a fixed cost parameter C identifies the key samples located in the margin area, enabling sample filtering and data reduction. A nonlinear classifier is then constructed on those margin-area samples to enhance model performance and generalization ability. In addition, the study introduces a selection criterion that balances classification accuracy against the proportion of samples outside the margin to guide the choice of the cost parameter, so that the resulting linear boundary retains an appropriate number of margin-area samples. Cross-validation is also applied to tune the cost parameter, and its effectiveness is compared with that of the criterion-based method. To validate the effectiveness and adaptability of the proposed model, a series of simulation experiments covering diverse boundary structures and sample-size conditions evaluates the classification performance of the Hybrid SVM. The overall results indicate that the Hybrid SVM performs well in most scenarios, with particularly notable advantages on data featuring mixed boundary types, demonstrating that it is a flexible and effective classification method.
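The staged procedure summarized in the abstract can be illustrated with a short sketch. The code below is a hedged reading of that description, assuming scikit-learn's SVC; the helper names (fit_hybrid_svm, predict_hybrid, c_selection_score), the RBF kernel for the second stage, the default parameter values, and the margin rule |f(x)| < 1 are assumptions introduced here for illustration, not the thesis' exact specification.

# Minimal, illustrative sketch of the two-stage Hybrid SVM idea described in the abstract.
# Assumes scikit-learn's SVC; all names and thresholds below are assumptions, not the
# thesis' exact specification.

import numpy as np
from sklearn.svm import SVC


def fit_hybrid_svm(X, y, C_linear=1.0, rbf_params=None):
    """Stage 1: a linear SVM with a fixed cost C flags margin-area samples.
    Stage 2: a nonlinear (RBF) SVM is refit on those samples only."""
    rbf_params = rbf_params or {"C": 1.0, "gamma": "scale"}

    linear_svm = SVC(kernel="linear", C=C_linear).fit(X, y)

    # Points with |decision_function| < 1 sit inside the linear margin band.
    inside = np.abs(linear_svm.decision_function(X)) < 1.0

    # Guard: the second stage needs samples from both classes inside the margin.
    if inside.sum() > 1 and len(np.unique(y[inside])) == 2:
        rbf_svm = SVC(kernel="rbf", **rbf_params).fit(X[inside], y[inside])
    else:
        rbf_svm = None
    return linear_svm, rbf_svm


def predict_hybrid(linear_svm, rbf_svm, X):
    """Points outside the linear margin keep the linear decision; points inside
    the margin band are routed to the nonlinear classifier."""
    pred = linear_svm.predict(X)
    if rbf_svm is not None:
        inside = np.abs(linear_svm.decision_function(X)) < 1.0
        if inside.any():
            pred[inside] = rbf_svm.predict(X[inside])
    return pred


def c_selection_score(linear_svm, X, y, alpha=0.5):
    """Hypothetical trade-off score for choosing C: weight classification accuracy
    against the share of samples resolved outside the margin (not the thesis' formula)."""
    acc = (linear_svm.predict(X) == y).mean()
    outside_ratio = (np.abs(linear_svm.decision_function(X)) >= 1.0).mean()
    return alpha * acc + (1 - alpha) * outside_ratio

The abstract also mentions tuning the cost parameter by cross-validation as a baseline; in this sketch that would amount to evaluating c_selection_score, or plain cross-validated accuracy, over a grid of C values and keeping the best one.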
References: Arlot, S. and Celisse, A. (2010). A survey of cross-validation procedures for model selection. Statistics Surveys, 4:40–79.

    Bi, J. and Bennett, K. P. (2003). Dimensionality reduction via sparse support vector machines. Journal of Machine Learning Research, 3:1229–1243.

    Breiman, L. (2001). Random forests. Machine Learning, 45(1):5–32.

Breiman, L., Friedman, J. H., Olshen, R. A., and Stone, C. J. (1984). Classification and Regression Trees. Wadsworth International Group.

Cortes, C. and Vapnik, V. (1995). Support-vector networks. Machine Learning, 20(3):273–297.

    Cox, D. R. (1958). The regression analysis of binary sequences. Journal of the Royal Statistical Society: Series B (Methodological), 20(2):215–232.

Friedman, J. H. (2001). Greedy function approximation: A gradient boosting machine. Annals of Statistics, 29(5):1189–1232.

Gönen, M. and Alpaydın, E. (2011). Multiple kernel learning algorithms. Journal of Machine Learning Research, 12:2211–2268.

Lanckriet, G. R., Cristianini, N., Bartlett, P., Ghaoui, L. E., and Jordan, M. I. (2004). Learning the kernel matrix with semidefinite programming. Journal of Machine Learning Research, 5:27–72.

Schölkopf, B., Burges, C. J. C., and Vapnik, V. N. (1996). Incorporating invariances in support vector learning machines. In International Conference on Artificial Neural Networks, pages 47–52.

    Wang, W., Arora, R., Livescu, K., and Bilmes, J. A. (2014). On deep multi-view representation learning. In Proceedings of the AAAI Conference on Artificial Intelligence, pages 2160–2166.
Description: Master's thesis
National Chengchi University
Department of Statistics
112354028
Source: http://thesis.lib.nccu.edu.tw/record/#G0112354028
Type: thesis
Appears in Collections: [Department of Statistics] Theses

Files in this item:

File: 402801.pdf (4516 KB, Adobe PDF), 0 views


All items in NCCU Institutional Repository are protected by copyright.


