National Chengchi University Institutional Repository (NCCUR): Item 140.119/134087
Please use this permanent URL to cite or link to this item: https://nccur.lib.nccu.edu.tw/handle/140.119/134087


Title: 應用機器學習技術於中文裁判書之要旨抽取
Application of Machine Learning for Extraction of Gist of Chinese Judgments
Author: 鄭禔雍
Zheng, Ti-Yong
Contributors: 劉昭麟
Liu, Chao-Lin
鄭禔雍
Zheng, Ti-Yong
Keywords: Gist extraction from court judgments
Legal technology applications
Deep learning
Machine learning
Automatic text summarization
Date: 2021
Uploaded: 2021-03-02 14:32:46 (UTC+8)
Abstract: In legal terminology, a judicial decision (裁判) is governed by Article 220 of the Code of Criminal Procedure: "A decision shall be made by a ruling, except where this Code requires that it be made by a judgment." Decisions are thus divided by form into two types, rulings (裁定) and judgments (判決), and the written rulings and written judgments that record these decisions are collectively called judgment documents (裁判書).
Judgment documents contain the courts' opinions on specific legal issues and are important reference material for legal professionals and parties to litigation. However, they also contain a great deal of information that does not carry over to other types of cases, so reading them closely often takes considerable time. A reader who could instead consult a "gist of the judgment" (裁判要旨) prepared by a professional legal worker could quickly grasp a judgment document's summary and key points. Yet because preparing such gists also demands substantial time and effort, most judgment documents still lack a manually prepared gist.
The gists prepared by the courts are mostly excerpts of sentences from the original judgment text, so extractive automatic text summarization with machine learning should be applicable. If this technique can assist legal professionals in preparing gists, it should further improve the efficiency of gist production.
This study treats extractive automatic text summarization as a binary classification task and builds classification models with deep neural networks. We ran experiments with different context lengths, different embedding models, different added features, and different deep neural network architectures, and found that models using BiLSTM and BiGRU as the deep neural network structure performed best. Finally, a bagging-based voting mechanism further improved classification performance.
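To make the classification setup concrete, the following is a minimal sketch of a per-sentence BiLSTM binary classifier. It is an illustration only: the use of PyTorch and every name, dimension, and hyperparameter below are assumptions, not the thesis's actual implementation.

```python
# Minimal sketch of a BiLSTM gist/non-gist sentence classifier.
# All names, dimensions, and hyperparameters are illustrative assumptions.
import torch
import torch.nn as nn

class BiLSTMGistClassifier(nn.Module):
    def __init__(self, vocab_size, embed_dim=128, hidden_dim=256):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim, padding_idx=0)
        # A bidirectional LSTM reads each candidate sentence both ways.
        self.bilstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True,
                              bidirectional=True)
        # Binary output: gist (1) vs. non-gist (0).
        self.classifier = nn.Linear(2 * hidden_dim, 1)

    def forward(self, token_ids):
        # token_ids: (batch, seq_len) integer-encoded sentence tokens
        embedded = self.embed(token_ids)
        _, (h_n, _) = self.bilstm(embedded)
        # Concatenate the final forward and backward hidden states.
        sentence_repr = torch.cat([h_n[0], h_n[1]], dim=-1)
        return self.classifier(sentence_repr).squeeze(-1)  # logits

# Training would use a binary cross-entropy loss; pos_weight is one
# common way to counter the gist/non-gist class imbalance (ratio assumed).
model = BiLSTMGistClassifier(vocab_size=30000)
loss_fn = nn.BCEWithLogitsLoss(pos_weight=torch.tensor(10.0))
```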
Because gist sentences in judgment documents are far outnumbered by non-gist sentences, the class distribution is highly imbalanced. Even under these conditions, the proposed model reaches an F1 score of 0.547 on the District Court dataset, 0.492 on the High Court dataset, and 0.576 on the Supreme Court dataset, confirming that the classification model has indeed learned to extract the gist of a judgment document.
Court judgments are important reference material for legal practitioners and persons involved in litigation, because they contain the courts' opinions on specific legal issues.
However, judgments also contain a great deal of information that does not apply to other types of cases, so reading and understanding them takes a lot of time. If we could read a well-made gist of a judgment produced by professional legal practitioners, we could quickly grasp the judgment's summary and key points.
Since it takes a lot of time and effort to extract the gist of a judgment, most court judgments currently have no gist.
Most gists produced by the courts are excerpts of paragraphs from the original judgment, so we should be able to imitate this practice and use machine learning techniques to train an extractive automatic text summarization model. A system that assists legal practitioners in excerpting the gist of a judgment should further improve the efficiency of gist production.
In this research, extractive automatic text summarization is treated as a binary classification task, and we use deep neural networks to build classification models. We experiment with different context lengths, different embedding models, and different deep neural network architectures, and find that models built with BiLSTM and BiGRU perform best.
Finally, we use the ensemble learning method "bagging" to further improve classification performance.
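As one possible reading of the bagging step (the exact voting scheme is not spelled out here), the sketch below takes a majority vote over several independently trained classifiers; the bootstrap sampling, the model interface, and the 0.5 thresholds are all assumptions.

```python
# Sketch of bagging by majority vote; all details here are assumptions.
import numpy as np
import torch

def bagging_predict(models, token_ids, threshold=0.5):
    """Each model is assumed to have been trained on a bootstrap resample
    of the training set; a sentence is labeled 'gist' if most models agree."""
    with torch.no_grad():
        votes = np.stack([
            (torch.sigmoid(m(token_ids)).numpy() >= threshold)
            for m in models
        ])
    return votes.mean(axis=0) >= 0.5  # majority vote per sentence
```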
The F1 score is 0.547 on the District Court judgment data set.
The F1 score is 0.492 on the High Court judgment data set.
The F1 score is 0.576 on the Supreme Court judgment data set.
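For context, F1 is the harmonic mean of precision and recall, which makes it more informative than raw accuracy when gist sentences are rare. A small sketch with made-up counts:

```python
# F1 is the harmonic mean of precision and recall; counts are made up.
def f1_score(tp, fp, fn):
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

# Example: of 100 true gist sentences, the model finds 60 and also
# raises 40 false alarms -> precision 0.6, recall 0.6, F1 0.6.
print(f1_score(tp=60, fp=40, fn=40))
```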
Description: Master's thesis
National Chengchi University
Department of Computer Science
107753037
Source: http://thesis.lib.nccu.edu.tw/record/#G0107753037
Type: thesis
    DOI: 10.6814/NCCU202100220