National Chengchi University Institutional Repository (NCCUR): Item 140.119/150170
    Please use this identifier to cite or link to this item: https://nccur.lib.nccu.edu.tw/handle/140.119/150170


    Title: 基於深度和梯度角變化的交通違規偵測與車速預測
    Traffic Violation Detection based on Depth and Gradient Angle Change and Speed Prediction
    Authors: 劉宸羽
    Liu, Chen-Yu
    Contributors: 彭彥璁
    Peng, Yan-Tsung
    劉宸羽
    Liu, Chen-Yu
    Keywords: 交通違規檢測 (Traffic violation detection)
    智能交通系統 (Intelligent transportation system)
    車輛行為分析 (Vehicle behavior analysis)
    Vehicle action analysis
    Traffic violation detection system
    Date: 2024
    Issue Date: 2024-03-01 13:42:07 (UTC+8)
    Abstract: In recent years, the number of vehicle violations reported by the public has surged, and the lack of a system that can automatically determine whether a vehicle in a reported video has committed a violation places a heavy workload on the police. To address this problem in technology-assisted enforcement, we design a judgment system based on object detection and depth variation, aided by a prior object-tracking algorithm, and we build our own traffic violation dataset. The system targets two types of violations: (1) driving straight through a red light and (2) turning left or right at a red light. It consists of two main parts: Violation Target Tracking (VTT) and Target Action Analysis (TAA). In the VTT stage, we detect traffic lights and vehicle license plates and obtain their trajectories and depths; we then model changes in vehicle depth and azimuth angle to determine whether a violation has occurred. Experimental results show that, averaged over all violation cases, our system achieves 76% true accuracy and 81% conditional accuracy. In addition, we apply a lane-detection model and mathematical analysis to estimate the speed of the reporting dashcam vehicle, enabling detection of whether it exceeds the road speed limit.
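
    The abstract describes a rule-based judgment over per-frame depth and azimuth estimates for a tracked vehicle. The sketch below is only a minimal illustration of that idea under stated assumptions, not the thesis implementation: the data schema, function names, and thresholds (FrameObservation, classify_red_light_violation, depth_drop_m, turn_angle_deg) are hypothetical, and it presumes an upstream detector, tracker, and monocular depth estimator have already produced the per-frame measurements.

from dataclasses import dataclass
from typing import List

@dataclass
class FrameObservation:
    """Per-frame measurements for one tracked vehicle (hypothetical schema)."""
    light_is_red: bool    # state of the relevant traffic light in this frame
    depth_m: float        # estimated camera-to-vehicle distance, in meters
    azimuth_deg: float    # horizontal angle of the vehicle relative to the camera axis

def classify_red_light_violation(
    track: List[FrameObservation],
    depth_drop_m: float = 5.0,     # assumed threshold: how far the car advances on red
    turn_angle_deg: float = 20.0,  # assumed threshold: azimuth change that counts as a turn
) -> str:
    """Return 'straight', 'turn', or 'none' for one vehicle track.

    Intuition from the abstract: a vehicle that keeps approaching or crossing while
    the light is red shows a large depth decrease ('straight'); a vehicle that turns
    on red additionally shows a large change in azimuth angle ('turn').
    """
    red_frames = [obs for obs in track if obs.light_is_red]
    if len(red_frames) < 2:
        return "none"
    depth_change = red_frames[0].depth_m - red_frames[-1].depth_m
    angle_change = abs(red_frames[-1].azimuth_deg - red_frames[0].azimuth_deg)
    if depth_change < depth_drop_m:
        return "none"  # the vehicle did not meaningfully advance while the light was red
    return "turn" if angle_change >= turn_angle_deg else "straight"

def estimate_speed_kph(distance_m: float, elapsed_s: float) -> float:
    """Toy speed estimate: distance covered (e.g., inferred from lane-marking
    geometry in the dashcam view) divided by elapsed time, converted to km/h."""
    return (distance_m / elapsed_s) * 3.6

if __name__ == "__main__":
    # Fabricated track: on red, the vehicle closes from 30 m to 8 m and swings 35 degrees.
    track = [
        FrameObservation(True, 30.0, 0.0),
        FrameObservation(True, 20.0, 10.0),
        FrameObservation(True, 8.0, 35.0),
    ]
    print(classify_red_light_violation(track))   # -> "turn"
    print(round(estimate_speed_kph(22.0, 2.0)))  # 22 m in 2.0 s -> ~40 km/h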
    Description: Master's thesis
    National Chengchi University
    Department of Computer Science
    110753129
    Source URI: http://thesis.lib.nccu.edu.tw/record/#G0110753129
    Data Type: thesis
    Appears in Collections: [Department of Computer Science] Theses

    Files in This Item:

    File: 312901.pdf (9177 KB, Adobe PDF)


    All items in NCCUR (政大典藏) are protected by copyright, with all rights reserved.



    Copyright Announcement
    1. The digital content of this website is part of the National Chengchi University Institutional Repository. It is provided free of charge for non-commercial, public-benefit purposes such as academic research and public education. Please use it in a proper and reasonable manner and respect the rights of copyright owners. For commercial use, please obtain authorization from the copyright owner in advance.

    2. Every effort has been made to avoid infringing the rights of copyright owners in building this website. If you believe that any digital content on this website infringes copyright, please notify the site maintainers (nccur@nccu.edu.tw); the work will be removed from the repository promptly and your claim investigated.