    Please use this identifier to cite or link to this item: https://nccur.lib.nccu.edu.tw/handle/140.119/134202


    Title: 基於深度學習框架之電器火災電線金相識別與應用
    Metallographic Analysis of Electric Wires in Fire Accidents Using Deep Learning Approaches
    Authors: 彭建凱
    Peng, Chien-Kai
    Contributors: 廖文宏
    Liao, Wen-Hung
    彭建凱
    Peng, Chien-Kai
    Keywords: Deep learning
    Image classification
    Transfer learning
    Data augmentation
    Model interpretability
    Metallographic analysis
    Date: 2021
    Issue Date: 2021-03-02 14:56:59 (UTC+8)
    Abstract: This thesis investigates how deep learning methods can classify the macroscopic and microscopic melting marks of electric wires recovered from fire scenes when the dataset is highly imbalanced and only a limited amount of data is available. The Grad-CAM method is used to analyze the features learned by the deep learning models.
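    The record itself contains no code; as a rough illustration of the Grad-CAM technique it cites, the following is a minimal sketch in PyTorch. The `TinyCNN` model is a hypothetical stand-in for the thesis's actual networks. Following Selvaraju et al., the last convolutional feature map is weighted by the spatial average of the class-score gradients, summed over channels, and passed through a ReLU to obtain a class-localization heat map.

    ```python
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class TinyCNN(nn.Module):
        """Hypothetical stand-in for the thesis's classifier."""
        def __init__(self, n_classes=4):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
                nn.Conv2d(8, 16, 3, padding=1), nn.ReLU(),
            )
            self.pool = nn.AdaptiveAvgPool2d(1)   # global average pooling
            self.fc = nn.Linear(16, n_classes)

        def forward(self, x):
            self.fmap = self.features(x)          # keep last conv feature map
            self.fmap.retain_grad()               # Grad-CAM needs its gradient
            return self.fc(self.pool(self.fmap).flatten(1))

    def grad_cam(model, x, class_idx):
        """Return an [H, W] heat map for one image and one class."""
        model.zero_grad()
        score = model(x)[0, class_idx]
        score.backward()
        # Weight each channel by its average gradient, sum, then ReLU.
        weights = model.fmap.grad.mean(dim=(2, 3), keepdim=True)
        cam = F.relu((weights * model.fmap).sum(dim=1))[0]
        return cam / (cam.max() + 1e-8)           # normalize to [0, 1]

    x = torch.rand(1, 3, 32, 32)                  # dummy image, not a real melt mark
    cam = grad_cam(TinyCNN(), x, class_idx=0)
    ```

    The resulting map can be upsampled and overlaid on the input image to show which regions of a melting mark drove the prediction, which is the interpretability check the abstract describes.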
    This thesis applies the concept of transfer learning to train the models and balances the class distribution of the dataset through data augmentation, so as to improve melting-mark recognition. After data augmentation, data cleaning, model optimization, and parameter fine-tuning, the best experimental results in terms of F1-score are 89.22% for macroscopic electricity marks, 80.85% for macroscopic heat-melting marks, 79.46% for microscopic electricity marks, and 81.90% for microscopic heat-melting marks. An application prototype has also been built to assist on-site identification and further data collection, in the hope that these results can improve practical efficiency and inform related policy making.
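    The thesis balances its dataset with data augmentation; as one standard alternative for the same imbalance problem, the sketch below (assuming PyTorch, with synthetic tensors standing in for the melting-mark images) oversamples the minority class with a `WeightedRandomSampler`, so each mini-batch is drawn roughly class-balanced.

    ```python
    import torch
    from torch.utils.data import DataLoader, TensorDataset, WeightedRandomSampler

    # Synthetic stand-in for an imbalanced melting-mark set:
    # 90 heat-melting marks (class 0) vs. 10 electricity marks (class 1).
    images = torch.rand(100, 3, 32, 32)
    labels = torch.cat([torch.zeros(90, dtype=torch.long),
                        torch.ones(10, dtype=torch.long)])

    # Weight each sample by the inverse frequency of its class, so the
    # sampler draws minority-class images about as often as majority ones.
    class_counts = torch.bincount(labels).float()   # tensor([90., 10.])
    sample_weights = (1.0 / class_counts)[labels]   # one weight per sample

    sampler = WeightedRandomSampler(sample_weights,
                                    num_samples=len(labels),
                                    replacement=True)
    loader = DataLoader(TensorDataset(images, labels),
                        batch_size=20, sampler=sampler)
    ```

    Sampling with replacement repeats minority images rather than creating new ones, which is why pairing it with augmentation (flips, rotations, color jitter) is common: each repeated draw then shows the model a slightly different image.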
    In addition to laying a foundation for further optimization of the melting-mark identification method, this thesis also aims to improve the efficiency of fire-investigation workflows.
    Reference: [1] National Fire Agency, Ministry of the Interior, R.O.C. (Taiwan). Fire statistics. https://www.nfa.gov.tw/cht/index.php?code=list&ids=220
    [2] National Fire Agency, Ministry of the Interior, R.O.C. (Taiwan). Amendments to the "Standard Operating Procedures for Fire Investigation", "Regulations for Preparing Fire Cause Investigation Reports", and "Implementation Regulations for Graded Management of Fire Cause Investigation Reports". https://www.nfa.gov.tw/cht/index.php?code=list&flag=detail&ids=23&article_id=343
    [3] Johnson, J. M., & Khoshgoftaar, T. M. (2019). Survey on deep learning with class imbalance. Journal of Big Data, 6(1), 27.
    [4] Chawla, N. V., Bowyer, K. W., Hall, L. O., & Kegelmeyer, W. P. (2002). SMOTE: Synthetic minority over-sampling technique. Journal of Artificial Intelligence Research, 16, 321-357.
    [5] Elkan, C. (2001, August). The foundations of cost-sensitive learning. In International joint conference on artificial intelligence (Vol. 17, No. 1, pp. 973-978). Lawrence Erlbaum Associates Ltd.
    [6] Tan, C., Sun, F., Kong, T., Zhang, W., Yang, C., & Liu, C. (2018, October). A survey on deep transfer learning. In International conference on artificial neural networks (pp. 270-279). Springer, Cham.
    [7] Goodman, B., & Flaxman, S. (2017). European Union regulations on algorithmic decision-making and a “right to explanation”. AI Magazine, 38(3), 50-57.
    [8] Zhou, B., Khosla, A., Lapedriza, A., Oliva, A., & Torralba, A. (2016). Learning deep features for discriminative localization. In Proceedings of the IEEE conference on computer vision and pattern recognition (pp. 2921-2929).
    [9] Lin, M., Chen, Q., & Yan, S. (2013). Network in network. arXiv preprint arXiv:1312.4400.
    [10] Selvaraju, R. R., Cogswell, M., Das, A., Vedantam, R., Parikh, D., & Batra, D. (2017). Grad-CAM: Visual explanations from deep networks via gradient-based localization. In Proceedings of the IEEE international conference on computer vision (pp. 618-626).
    [11] Springenberg, J. T., Dosovitskiy, A., Brox, T., & Riedmiller, M. (2014). Striving for simplicity: The all convolutional net. arXiv preprint arXiv:1412.6806.
    [12] Chattopadhay, A., Sarkar, A., Howlader, P., & Balasubramanian, V. N. (2018, March). Grad-CAM++: Generalized gradient-based visual explanations for deep convolutional networks. In 2018 IEEE Winter Conference on Applications of Computer Vision (WACV) (pp. 839-847). IEEE.
    [13] ImageNet. http://www.image-net.org/
    [14] COCO dataset. https://cocodataset.org/
    [15] Open Images dataset. https://storage.googleapis.com/openimages/web/index.html
    [16] Szegedy, C., Ioffe, S., Vanhoucke, V., & Alemi, A. (2017, February). Inception-v4, inception-resnet and the impact of residual connections on learning. In Proceedings of the AAAI Conference on Artificial Intelligence (Vol. 31, No. 1).
    Description: Master's thesis
    National Chengchi University
    In-service Master's Program, Department of Computer Science
    107971001
    Source URI: http://thesis.lib.nccu.edu.tw/record/#G0107971001
    Data Type: thesis
    DOI: 10.6814/NCCU202100228
    Appears in Collections: [In-service Master's Program, Department of Computer Science] Theses

    Files in This Item:

    100101.pdf (8197 KB, Adobe PDF)


    All items in the NCCU Institutional Repository (政大典藏) are protected by copyright, with all rights reserved.



    Copyright Announcement
    1. The digital content of this website is part of the National Chengchi University Institutional Repository, provided free of charge for academic research, public education, and other non-commercial uses. Please use the content in a proper and reasonable manner and respect the rights of copyright owners. For commercial use, please obtain authorization from the copyright owner in advance.

    2. This website has been built with every effort to avoid infringing the rights of copyright owners. If you believe that any material on the website nevertheless infringes copyright, please notify the site maintainers (nccur@nccu.edu.tw); the work will be removed from the repository immediately while the claim is investigated.
    DSpace Software Copyright © 2002-2004 MIT & Hewlett-Packard. Enhanced by the NTU Library IR team.