Please use this identifier to cite or link to this item:
https://nccur.lib.nccu.edu.tw/handle/140.119/142641
Title: | Underwater Image Restoration using Histogram-based Deep Networks (基於深度直方圖網路之水下影像還原模型)
Authors: | Chen, Yen-Rong (陳彥蓉)
Contributors: | Peng, Yan-Tsung (彭彥璁); Chen, Yen-Rong (陳彥蓉)
Keywords: | Image processing; Image restoration; Histogram; Deep learning
Date: | 2022 |
Issue Date: | 2022-12-02 15:20:32 (UTC+8) |
Abstract: | The underwater environment is complex and visibility is low: photographs of underwater objects or creatures often appear foggy or suffer from water-color distortion, making the underwater scene hard to see. Because light is absorbed, scattered, and attenuated as it propagates through water, underwater images exhibit severe color casts, blur, and low contrast. We therefore propose an underwater image restoration model based on a deep histogram network, which learns the histogram distributions of good-quality underwater images and predicts the desired histogram for an input image, enhancing contrast and correcting color casts. We further combine it with a local patch optimization model to improve visual quality, and the proposed network structure is designed for fast execution. Experimental results demonstrate that the proposed method effectively restores underwater images and performs favorably against state-of-the-art approaches for underwater image restoration and enhancement.
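As a rough illustration of the operation the abstract describes, namely remapping an image so each color channel follows a desired histogram, the following is a minimal NumPy sketch of classical per-channel histogram specification. It is not the author's deep histogram network: the function names, the uniform target histogram, and the toy image are assumptions made purely for illustration; in the thesis, the target histogram would instead be predicted by the learned model.

```python
import numpy as np

def match_histogram(channel: np.ndarray, target_hist: np.ndarray) -> np.ndarray:
    """Remap one 8-bit channel so its histogram approximates target_hist.

    channel:     HxW uint8 array (one color channel of the underwater image).
    target_hist: length-256 array of desired bin counts or probabilities.
    """
    # CDF of the input channel.
    src_hist = np.bincount(channel.ravel(), minlength=256).astype(np.float64)
    src_cdf = np.cumsum(src_hist) / src_hist.sum()

    # CDF of the desired histogram (in the thesis, predicted by the network).
    tgt = target_hist.astype(np.float64)
    tgt_cdf = np.cumsum(tgt) / tgt.sum()

    # For each input gray level, pick the target level with the closest CDF value.
    lut = np.searchsorted(tgt_cdf, src_cdf).clip(0, 255).astype(np.uint8)
    return lut[channel]

def specify_histograms(image: np.ndarray, target_hists: np.ndarray) -> np.ndarray:
    """Apply per-channel histogram specification to an HxWx3 uint8 image."""
    out = np.empty_like(image)
    for c in range(3):
        out[..., c] = match_histogram(image[..., c], target_hists[c])
    return out

if __name__ == "__main__":
    # Toy example: push a low-contrast, underwater-like image toward flat
    # per-channel histograms, which boosts contrast and reduces the color cast.
    rng = np.random.default_rng(0)
    underwater = rng.integers(60, 120, size=(256, 256, 3), dtype=np.uint8)
    uniform = np.ones((3, 256))          # stand-in for a learned target histogram
    restored = specify_histograms(underwater, uniform)
    print(restored.min(), restored.max())  # spans (nearly) the full 0..255 range
```

With a flat target, the mapping stretches the compressed underwater intensity range across nearly the full 0 to 255 scale, which is the contrast-enhancement and color-cast-correction effect the abstract refers to; a learned per-channel target histogram would simply replace the uniform one.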
Description: | Master's thesis, Department of Computer Science, National Chengchi University; 109753204
Source URI: | http://thesis.lib.nccu.edu.tw/record/#G0109753204 |
Data Type: | thesis |
DOI: | 10.6814/NCCU202201675 |
Appears in Collections: | [Department of Computer Science] Theses
Files in This Item:
File: 320401.pdf (25,693 KB, Adobe PDF)