National Chengchi University Institutional Repository (NCCUR): Item 140.119/152639
    Please use this permanent URL to cite or link to this item: https://nccur.lib.nccu.edu.tw/handle/140.119/152639


    Title: 以人工智慧分析法律裁決之研究:以域名爭議為例
    Applied AI Analysis on Law Case Prediction: Using Domain Name Dispute as Use Case
    Author: Liou, Chia-Hui (劉珈卉)
    Advisors: Sung, Huang-Chih (宋皇志); Chiang, Kuo-Huie (姜國輝)
    Keywords: Domain Name Dispute (域名爭議); Cybersquatting (域名搶註); AI (人工智慧); Law Case Prediction (法律判決預測); SLM (小語言模型); BERT
    Date: 2024
    Uploaded: 2024-08-05 13:03:36 (UTC+8)
    Abstract:
    In today's information society, the rapid growth of the internet has brought a steady rise in online fraud, posing significant challenges to the bodies that administer domain name disputes. This study applies artificial intelligence methods to predict the outcomes of domain name dispute cases, with the goal of developing an assistive tool that improves the use of judicial resources. The research addresses two aspects: methodology and management.
    Methodologically, the study examines how small language models can be applied to a specialized legal domain, how overfitting can be avoided when parameters and samples are scarce, and how BERT models can be optimized. Freezing selected BERT layers and adding recurrent layers such as LSTM or GRU, or Transformer attention layers, improves model performance, with the recurrent variants performing best. Statistical techniques such as bootstrapping, early stopping, and learning-rate scheduling further improve generalization, providing a reliable basis for predicting case outcomes.
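As a rough illustration of the layer-freezing approach described in the abstract, the sketch below freezes the embedding and lower encoder layers of a small stand-in Transformer encoder (a placeholder for BERT; all sizes and names here are hypothetical, not the thesis's actual configuration) and trains a GRU head on top. With an actual Hugging Face `BertModel`, one would freeze `model.embeddings` and the first few entries of `model.encoder.layer` in the same way.

```python
import torch
import torch.nn as nn

class FrozenEncoderClassifier(nn.Module):
    """Sketch: freeze the lower encoder layers, train only a GRU head on top.

    The encoder is a small stand-in for BERT; with transformers' BertModel,
    freeze model.embeddings and model.encoder.layer[:n_frozen] analogously.
    """

    def __init__(self, vocab_size=1000, d_model=64, n_layers=4,
                 n_frozen=2, n_classes=2):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)
        block = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(block, num_layers=n_layers)
        # Freeze embeddings and the first n_frozen encoder layers so that
        # only the upper layers and the new head receive gradient updates.
        for p in self.embed.parameters():
            p.requires_grad = False
        for layer in self.encoder.layers[:n_frozen]:
            for p in layer.parameters():
                p.requires_grad = False
        self.gru = nn.GRU(d_model, d_model, batch_first=True)
        self.head = nn.Linear(d_model, n_classes)

    def forward(self, ids):
        h = self.encoder(self.embed(ids))  # contextual token vectors
        _, last = self.gru(h)              # GRU summarizes the sequence
        return self.head(last[-1])         # logits over outcome classes
```

Only parameters with `requires_grad=True` would be passed to the optimizer, so the frozen lower layers keep their pretrained weights while the small GRU head adapts to the dispute-ruling task.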
    On the management side, the study profiles the likely users of domain-dispute AI in the early and mature market stages, examines how technology, market dynamics, and ethics shape AI application strategies, states the researcher's position on AI development for domain name disputes, and suggests new research directions.
    The study combines AI methods with a business management perspective, exploring their application potential in domain name disputes and providing a working model implementation and a commercial strategy analysis. Given the limits of the sample and of research resources, future work should explore more diverse labeling methods and active learning. As technology and the social environment evolve, continued attention to AI for domain name disputes remains an important research direction. In sum, the study makes notable progress in technique and method and lays a foundation for extending domain-dispute AI to other professional fields.
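The bootstrapping and early-stopping strategies that the abstract credits with improving generalization can be sketched in a few lines; the names below (`bootstrap_sample`, `EarlyStopper`) are illustrative, not taken from the thesis.

```python
import random

def bootstrap_sample(data, rng):
    """Resample the training set with replacement (bootstrap):
    each resample has the same size as the original data."""
    return [rng.choice(data) for _ in data]

class EarlyStopper:
    """Stop training once validation loss has failed to improve
    for `patience` consecutive epochs."""

    def __init__(self, patience=3):
        self.patience = patience
        self.best = float("inf")
        self.bad_epochs = 0

    def step(self, val_loss):
        if val_loss < self.best:
            self.best = val_loss
            self.bad_epochs = 0
        else:
            self.bad_epochs += 1
        return self.bad_epochs >= self.patience  # True -> stop training
```

Training on many bootstrap resamples and averaging the resulting models' predictions reduces variance on a small case corpus, while the early-stopping check halts each run before it overfits the training folds.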
    References: Armour, J., & Sako, M. (2020). AI-enabled business models in legal services: From traditional law firms to next-generation law companies? Journal of Professions and Organization, 7(1), 27-46.
    Balaprakash, P., Gramacy, R. B., & Wild, S. M. (2013, September). Active-learning-based surrogate models for empirical performance tuning. In 2013 IEEE International Conference on Cluster Computing (CLUSTER) (pp. 1-8). IEEE.
    Brown, T., Mann, B., Ryder, N., Subbiah, M., Kaplan, J. D., Dhariwal, P., ... & Amodei, D. (2020). Language models are few-shot learners. Advances in neural information processing systems, 33, 1877-1901.
    Cohn, D. A., Ghahramani, Z., & Jordan, M. I. (1996). Active learning with statistical models. Journal of artificial intelligence research, 4, 129-145.
    Davis, A. E. (2020). The future of law firms (and lawyers) in the age of artificial intelligence. Revista Direito GV, 16, e1945.
    Efron, B. (1992). Bootstrap methods: another look at the jackknife. In Breakthroughs in statistics: Methodology and distribution (pp. 569-593). New York, NY: Springer New York.
    Davison, A. C., & Hinkley, D. V. (1997). Bootstrap methods and their application (No. 1). Cambridge University Press.
    Greff, K., Srivastava, R. K., Koutník, J., Steunebrink, B. R., & Schmidhuber, J. (2016). LSTM: A search space odyssey. IEEE Transactions on Neural Networks and Learning Systems, 28(10), 2222-2232.
    Grießhaber, D., Maucher, J., & Vu, N. T. (2020). Fine-tuning BERT for low-resource natural language understanding via active learning. arXiv preprint arXiv:2012.02462.
    Hesterberg, T. (2011). Bootstrap. Wiley Interdisciplinary Reviews: Computational Statistics, 3(6), 497-526.
    Horowitz, J. L. (2001). The bootstrap. In Handbook of econometrics (Vol. 5, pp. 3159-3228). Elsevier.
    Kaplan, J., McCandlish, S., Henighan, T., Brown, T. B., Chess, B., Child, R., ... & Amodei, D. (2020). Scaling laws for neural language models. arXiv preprint arXiv:2001.08361.
    Vihikan, W. O., Mistica, M., Levy, I., Christie, A., & Baldwin, T. (2021, November). Automatic resolution of domain name disputes. In Proceedings of the Natural Legal Language Processing Workshop 2021 (pp. 228-238).
    Han, K., Xiao, A., Wu, E., Guo, J., Xu, C., & Wang, Y. (2021). Transformer in transformer. Advances in neural information processing systems, 34, 15908-15919.
    Howard, J., & Ruder, S. (2018). Universal language model fine-tuning for text classification. arXiv preprint arXiv:1801.06146.
    Devlin, J., Chang, M.-W., Lee, K., & Toutanova, K. (2019, June). BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of NAACL-HLT 2019 (pp. 4171-4186).
    Kossen, J., Farquhar, S., Gal, Y., & Rainforth, T. (2022). Active surrogate estimators: An active learning approach to label-efficient model evaluation. Advances in Neural Information Processing Systems, 35, 24557-24570.
    Oskamp, A., & Lauritsen, M. (2002). AI in law practice? So far, not much. Artificial Intelligence and Law, 10(4), 227-236.
    Schick, T., & Schütze, H. (2020). It's not just size that matters: Small language models are also few-shot learners. arXiv preprint arXiv:2009.07118.
    Semmler, S., & Rose, Z. (2017). Artificial Intelligence: Application today and implications tomorrow. Duke L. & Tech. Rev., 16, 85.
    Sherstinsky, A. (2020). Fundamentals of recurrent neural network (RNN) and long short-term memory (LSTM) network. Physica D: Nonlinear Phenomena, 404, 132306.
    Sil, R., Roy, A., Bhushan, B., & Mazumdar, A. K. (2019, October). Artificial intelligence and machine learning based legal application: the state-of-the-art and future research trends. In 2019 International Conference on Computing, Communication, and Intelligent Systems (ICCCIS) (pp. 57-62). IEEE.
    Simshaw, D. (2018). Ethical issues in robo-lawyering: The need for guidance on developing and using artificial intelligence in the practice of law. Hastings LJ, 70, 173.
    Sun, C., Qiu, X., Xu, Y., & Huang, X. (2019). How to fine-tune bert for text classification?. In Chinese Computational Linguistics: 18th China National Conference, CCL 2019, Kunming, China, October 18–20, 2019, Proceedings 18 (pp. 194-206). Springer International Publishing.
    Surden, H. (2019). Artificial intelligence and law: An overview. Georgia State University Law Review, 35, 19-22.
    Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A. N., ... & Polosukhin, I. (2017). Attention is all you need. Advances in neural information processing systems, 30.
    Westermann, H., Savelka, J., & Benyekhlef, K. (2021). Paragraph similarity scoring and fine-tuned BERT for legal information retrieval and entailment. In New Frontiers in Artificial Intelligence: JSAI-isAI 2020 Workshops, JURISIN, LENLS 2020 Workshops, Virtual Event, November 15–17, 2020, Revised Selected Papers 12 (pp. 269-285). Springer International Publishing.
    Description: Master's thesis
    National Chengchi University
    Graduate Institute of Technology Management and Intellectual Property
    108364124
    Source: http://thesis.lib.nccu.edu.tw/record/#G0108364124
    Type: thesis
    Appears in Collections: [Graduate Institute of Technology Management and Intellectual Property] Theses

    Files in this item:

    File         Description   Size    Format     Views
    412401.pdf                 1828Kb  Adobe PDF  0      View/Open


    All items in NCCUR are protected by the original copyright.



    Copyright Announcement
    1. The digital content of this website is part of the National Chengchi University Institutional Repository. It provides free access to academic research and public education for non-commercial use. Please use the content in a proper and reasonable manner and respect the rights of copyright owners. For commercial use, please obtain authorization from the copyright owner in advance.

    2. This repository has made every effort to avoid infringing the rights of copyright owners. If you believe that any material on the website infringes copyright, please contact our staff (nccur@nccu.edu.tw); the work will be removed from the repository while the claim is investigated.