    Please use this identifier to cite or link to this item: https://nccur.lib.nccu.edu.tw/handle/140.119/147096


Title: Restoration Mechanism for Nighttime Composite Weather Images Using Daytime Synthetic Data (基於日間合成資料之夜晚複合天候影像還原機制)
Authors: Chang, Yu-Sheng (張佑笙)
Contributors: Liao, Wen-Hung (廖文宏); Chang, Yu-Sheng (張佑笙)
Keywords: Nighttime image restoration; Composite weather degraded images; Multi-task image restoration; Deep learning
    Date: 2023
    Issue Date: 2023-09-01 15:39:42 (UTC+8)
Abstract: In recent years, substantial progress has been made in deep learning for image restoration, with noteworthy contributions to various applications. However, in self-driving systems, images captured under adverse weather conditions can considerably reduce the accuracy of object detection algorithms, impairing the system's ability to assess dangerous driving events. Beyond individual weather conditions, real-world scenes are also degraded by compound weather. Using a single model to restore images affected by multiple weather conditions has therefore emerged as an important topic in image restoration.

This thesis uses physics-based models to synthesize images covering diverse weather conditions, including rain streaks, fog, raindrops, and their combinations; a total of seven types of composite weather images serve as restoration targets. To strengthen nighttime restoration as well, a generative adversarial network (GAN) is employed to generate clear nighttime images. In terms of network architecture, a task-adaptive mechanism and a brightness enhancement module are integrated, and the generated multi-weather images are used as training data, yielding a single restoration model capable of handling composite weather in both daytime and nighttime scenarios.
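The seven composite targets correspond to the non-empty combinations of the three degradation types (2^3 - 1 = 7). As a rough illustration only, here is a minimal Python sketch of how such combinations can be enumerated and how fog can be composited under the standard atmospheric scattering model used in physics-based haze synthesis such as [18]; the constant depth map and the commented-out helper names are hypothetical placeholders, not the thesis's actual pipeline.

```python
import itertools
import numpy as np

def add_fog(img, depth, beta=1.5, airlight=0.8):
    # Standard atmospheric scattering model (cf. [18]):
    #   I(x) = J(x) * t(x) + A * (1 - t(x)),  with t(x) = exp(-beta * d(x))
    # img: clean image in [0, 1], shape (H, W, 3); depth: per-pixel depth (H, W).
    t = np.exp(-beta * depth)[..., None]   # transmission map, broadcast over channels
    return img * t + airlight * (1.0 - t)

# Hypothetical compositors for the other two degradations; real ones would
# render rain streaks [15, 16] and adherent raindrops [13] onto the image.
# COMPOSITORS = {"rain_streak": add_rain_streaks, "raindrop": add_raindrops,
#                "fog": lambda im: add_fog(im, depth=np.full(im.shape[:2], 0.5))}

# The seven composite weather types: every non-empty subset of the three
# degradation types.
degradations = ["rain_streak", "fog", "raindrop"]
combos = [c for r in range(1, len(degradations) + 1)
          for c in itertools.combinations(degradations, r)]
assert len(combos) == 7
```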

To further verify the quality of nighttime restoration, this thesis synthesizes nighttime composite weather images and computes image quality metrics on the restored results before and after applying the brightness enhancement module, providing an objective analysis. In addition, the YOLOv7 object detection model is used to detect common road objects, confirming that restoring degraded images with the proposed composite weather model effectively improves object detection accuracy.
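The abstract does not name the specific quality indicators; PSNR and SSIM [27] are the standard full-reference choices for this kind of before/after comparison. A minimal sketch with scikit-image, assuming paired uint8 RGB arrays of the same shape, might look like:

```python
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def quality_report(clean, restored):
    # Full-reference metrics between the ground-truth clean image and the
    # restored output; higher is better for both. channel_axis=-1 treats the
    # last dimension as color channels (scikit-image >= 0.19).
    return {
        "PSNR": peak_signal_noise_ratio(clean, restored),
        "SSIM": structural_similarity(clean, restored, channel_axis=-1),
    }
```

Comparing this report with and without the brightness enhancement module would reproduce the before/after analysis described above; the detection-side validation instead runs YOLOv7 [24] on degraded versus restored images and compares detection accuracy.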
    Reference: [1] 維基百科:類神經網路https://zh-yue.wikipedia.org/wiki/File:ArtificialNeuronModel_english.png
[2] Goodfellow, I., Bengio, Y., & Courville, A. (2016). Deep learning. MIT Press.
    [3] Goodfellow, I., Pouget-Abadie, J., Mirza, M., Xu, B., Warde-Farley, D., Ozair, S., ... & Bengio, Y. (2020). Generative adversarial networks. Communications of the ACM, 63(11), 139-144.
    [4] Radford, A., Metz, L., & Chintala, S. (2015). Unsupervised representation learning with deep convolutional generative adversarial networks. arXiv preprint arXiv:1511.06434.
[5] Karras, T., Laine, S., & Aila, T. (2019). A style-based generator architecture for generative adversarial networks. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (pp. 4401-4410).
[6] Zhu, J. Y., Park, T., Isola, P., & Efros, A. A. (2017). Unpaired image-to-image translation using cycle-consistent adversarial networks. In Proceedings of the IEEE International Conference on Computer Vision (pp. 2223-2232).
[7] Hu, J., Shen, L., & Sun, G. (2018). Squeeze-and-excitation networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (pp. 7132-7141).
[8] Qin, X., Wang, Z., Bai, Y., Xie, X., & Jia, H. (2020, April). FFA-Net: Feature fusion attention network for single image dehazing. In Proceedings of the AAAI Conference on Artificial Intelligence (Vol. 34, No. 07, pp. 11908-11915).
[9] 鄭可昕. 基於深度學習框架之夜晚霧霾圖像模擬與復原方法評估 [Simulation of nighttime haze images and evaluation of restoration methods based on deep learning frameworks]. Master's thesis, Department of Computer Science, National Chengchi University, Taipei, 2021.
    [10] Park, Y., Jeon, M., Lee, J., & Kang, M. (2022). MCW-Net: Single image deraining with multi-level connections and wide regional non-local blocks. Signal Processing: Image Communication, 105, 116701.
    [11] Wang, L. W., Liu, Z. S., Siu, W. C., & Lun, D. P. (2020). Lightening network for low-light image enhancement. IEEE Transactions on Image Processing, 29, 7984-7996.
[12] Cordts, M., Omran, M., Ramos, S., Rehfeld, T., Enzweiler, M., Benenson, R., ... & Schiele, B. (2016). The Cityscapes dataset for semantic urban scene understanding. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (pp. 3213-3223).
    [13] Porav, H., Musat, V. N., Bruls, T., & Newman, P. (2020). Rainy screens: Collecting rainy datasets, indoors. arXiv preprint arXiv:2003.04742.
[14] Wang, T. C., Liu, M. Y., Zhu, J. Y., Tao, A., Kautz, J., & Catanzaro, B. (2018). High-resolution image synthesis and semantic manipulation with conditional GANs. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (pp. 8798-8807).
    [15] Halder, S. S., Lalonde, J. F., & Charette, R. D. (2019). Physics-based rendering for improving robustness to rain. In Proceedings of the IEEE/CVF International Conference on Computer Vision (pp. 10203-10212).
    [16] Garg, K., & Nayar, S. K. (2006). Photorealistic rendering of rain streaks. ACM Transactions on Graphics (TOG), 25(3), 996-1002.
    [17] De Charette, R., Tamburo, R., Barnum, P. C., Rowe, A., Kanade, T., & Narasimhan, S. G. (2012, April). Fast reactive control for illumination through rain and snow. In 2012 IEEE International Conference on Computational Photography (ICCP) (pp. 1-10). IEEE.
    [18] Sakaridis, C., Dai, D., & Van Gool, L. (2018). Semantic foggy scene understanding with synthetic data. International Journal of Computer Vision, 126, 973-992.
    [19] Kahraman, S., & De Charette, R. (2017). Influence of fog on computer vision algorithms (Doctoral dissertation, Inria Paris).
    [20] Sakaridis, C., Dai, D., & Van Gool, L. (2020). Map-guided curriculum domain adaptation and uncertainty-aware evaluation for semantic nighttime image segmentation. IEEE Transactions on Pattern Analysis and Machine Intelligence, 44(6), 3139-3153.
    [21] Mușat, V., Fursa, I., Newman, P., Cuzzolin, F., & Bradley, A. (2021). Multi-weather city: Adverse weather stacking for autonomous driving. In Proceedings of the IEEE/CVF International Conference on Computer Vision (pp. 2906-2915).
    [22] Zhou, J., Leong, C., Lin, M., Liao, W., & Li, C. (2022). Task adaptive network for image restoration with combined degradation factors. In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision (pp. 1-8).
[23] Chen, D., He, M., Fan, Q., Liao, J., Zhang, L., Hou, D., ... & Hua, G. (2019, January). Gated context aggregation network for image dehazing and deraining. In 2019 IEEE Winter Conference on Applications of Computer Vision (WACV) (pp. 1375-1383). IEEE.
    [24] Wang, C. Y., Bochkovskiy, A., & Liao, H. Y. M. (2022). YOLOv7: Trainable bag-of-freebies sets new state-of-the-art for real-time object detectors. arXiv preprint arXiv:2207.02696.
    [25] Baskar, H., Chakravarthy, A. S., Garg, P., Goel, D., Raj, A. S., Kumar, K., ... & Rout, B. K. (2022). Nighttime Dehaze-Enhancement. arXiv preprint arXiv:2210.09962.
    [26] Lv, F., Li, Y., & Lu, F. (2021). Attention guided low-light image enhancement with a large scale low-light simulation dataset. International Journal of Computer Vision, 129(7), 2175-2193.
[27] Wang, Z., Bovik, A. C., Sheikh, H. R., & Simoncelli, E. P. (2004). Image quality assessment: From error visibility to structural similarity. IEEE Transactions on Image Processing, 13(4), 600-612.
[28] Cityscapes labels: https://github.com/TillBeemelmanns/cityscapes-to-coco-conversion
[29] Yang, W., Tan, R. T., Feng, J., Liu, J., Guo, Z., & Yan, S. (2017). Deep joint rain detection and removal from a single image. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (pp. 1357-1366).
[30] Qian, R., Tan, R. T., Yang, W., Su, J., & Liu, J. (2018). Attentive generative adversarial network for raindrop removal from a single image. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (pp. 2482-2491).
[31] Zhang, J., Cao, Y., Zha, Z. J., & Tao, D. (2020, October). Nighttime dehazing with a synthetic benchmark. In Proceedings of the 28th ACM International Conference on Multimedia (pp. 2355-2363).
    [32] Lim, W. T., Ang, K., & Loh, Y. P. (2022, December). Deep Enhancement-Object Features Fusion for Low-Light Object Detection. In Proceedings of the 4th ACM International Conference on Multimedia in Asia (pp. 1-6).
[33] Howard, A., Sandler, M., Chu, G., Chen, L. C., Chen, B., Tan, M., ... & Adam, H. (2019). Searching for MobileNetV3. In Proceedings of the IEEE/CVF International Conference on Computer Vision (pp. 1314-1324).
[34] Valanarasu, J. M. J., Yasarla, R., & Patel, V. M. (2022). TransWeather: Transformer-based restoration of images degraded by adverse weather conditions. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (pp. 2353-2363).
    [35] Carion, N., Massa, F., Synnaeve, G., Usunier, N., Kirillov, A., & Zagoruyko, S. (2020). End-to-end object detection with transformers. In Computer Vision–ECCV 2020: 16th European Conference, Glasgow, UK, August 23–28, 2020, Proceedings, Part I 16 (pp. 213-229). Springer International Publishing.
    [36] Koblik, K. (2021). Simulation of rain on a windshield: Creating a real-time effect using GPGPU computing.
Description: Master's thesis
National Chengchi University
In-service Master's Program in Computer Science (資訊科學系碩士在職專班)
110971001
    Source URI: http://thesis.lib.nccu.edu.tw/record/#G0110971001
    Data Type: thesis
Appears in Collections: [In-service Master's Program in Computer Science] Theses

    Files in This Item:

File          Size      Format
100101.pdf    5282 KB   Adobe PDF


    All items in 政大典藏 are protected by copyright, with all rights reserved.



Copyright Announcement
1. The digital content of this website is part of the National Chengchi University Institutional Repository. It provides free access for academic research and public education on a non-commercial basis. Please use it in a proper and reasonable manner and respect the rights of copyright owners. For commercial use, please obtain authorization from the copyright owner in advance.
2. NCCU Institutional Repository is maintained to protect the interests of copyright owners. If you believe that any material on this website infringes copyright, please contact our staff (nccur@nccu.edu.tw). We will remove the work from the repository and investigate your claim.