政大機構典藏 - National Chengchi University Institutional Repository (NCCUR): Item 140.119/153379


    Please use this persistent URL to cite or link to this item: https://nccur.lib.nccu.edu.tw/handle/140.119/153379


    Title: HVDualformer: Histogram-Vision Dual Transformer for White Balance Correction
    Author: Chen, Guan-Rong (陳冠融)
    Contributors: Peng, Yan-Tsung (彭彥璁)
    Chen, Guan-Rong (陳冠融)
    Keywords: Color Constancy
    White Balance
    Transformer
    Date: 2024
    Upload time: 2024-09-04 14:59:56 (UTC+8)
    Abstract:
Shooting photos under different color temperatures can introduce color casts, making the rendered colors differ from what the human eye normally perceives. Removing such color temperature shifts to achieve white balance is challenging: it requires accounting for color tone variations from different light sources and pinpointing a single reference point from which to remove the cast.
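For intuition about that "single reference point," consider the classical gray-world baseline: assume the scene's average color is neutral, treat the per-channel means as the estimated cast, and divide the cast out with a diagonal (von Kries-style) correction. The NumPy sketch below is a minimal illustration of this baseline only, not the method proposed in this thesis:

import numpy as np

def gray_world_white_balance(img: np.ndarray) -> np.ndarray:
    """Minimal gray-world white balance (illustrative baseline only).

    img: float32 RGB image in [0, 1], shape (H, W, 3).
    """
    # The per-channel means act as the single reference point: under the
    # gray-world assumption they should be equal in a cast-free image.
    illuminant = img.reshape(-1, 3).mean(axis=0)
    # Diagonal (von Kries-style) gains that pull the channel means back to gray.
    gains = illuminant.mean() / np.clip(illuminant, 1e-6, None)
    return np.clip(img * gains, 0.0, 1.0)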
The emergence of deep neural networks has significantly advanced white balance methods, from estimating the scene illumination color to recovering a color-consistent image directly from the color-shifted input. To better extract color distributions and scene information from the input image for white balance, we propose HVDualformer, a histogram-vision dual transformer architecture that rectifies color temperature features from image color histograms and correlates them with image features. The proposed HVDualformer handles both single-illuminant and multi-illuminant scenes. Extensive experiments on public benchmark datasets show that the proposed model performs favorably against state-of-the-art methods.
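The sketch below is a hypothetical PyTorch rendering of the dual-branch idea described above: histogram-bin tokens are refined by self-attention, and image patch tokens query them through cross-attention so that global color-distribution features can rectify local scene features. All module names, dimensions, and the token layout are assumptions for illustration, not the actual HVDualformer implementation.

import torch
import torch.nn as nn

class HistogramVisionBlock(nn.Module):
    """Hypothetical dual-branch block: image tokens attend to histogram tokens."""

    def __init__(self, dim: int = 64, heads: int = 4):
        super().__init__()
        self.hist_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.cross_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.ffn = nn.Sequential(
            nn.Linear(dim, 4 * dim), nn.GELU(), nn.Linear(4 * dim, dim)
        )
        self.norm_h = nn.LayerNorm(dim)
        self.norm_q = nn.LayerNorm(dim)
        self.norm_f = nn.LayerNorm(dim)

    def forward(self, img_tokens, hist_tokens):
        # Refine color-distribution features with self-attention over histogram bins.
        h = self.norm_h(hist_tokens)
        hist_tokens = hist_tokens + self.hist_attn(h, h, h)[0]
        # Image tokens query the histogram branch (cross-attention), correlating
        # global color statistics with local scene features.
        q = self.norm_q(img_tokens)
        img_tokens = img_tokens + self.cross_attn(q, hist_tokens, hist_tokens)[0]
        # Standard transformer feed-forward refinement of the image branch.
        img_tokens = img_tokens + self.ffn(self.norm_f(img_tokens))
        return img_tokens, hist_tokens

# Toy usage: 196 image patch tokens and 64 histogram-bin tokens, embedding dim 64.
img_tokens = torch.randn(1, 196, 64)
hist_tokens = torch.randn(1, 64, 64)
out_img, out_hist = HistogramVisionBlock()(img_tokens, hist_tokens)

In a full model, such blocks would presumably be stacked, with the rectified image tokens decoded back into a white-balanced sRGB image.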
    Description: Master's thesis
    National Chengchi University
    Department of Computer Science
    111753139
    Source: http://thesis.lib.nccu.edu.tw/record/#G0111753139
    Type: thesis
    Appears in Collections: [Department of Computer Science] Theses

    Files in This Item:

    File          Description    Size        Format       Views
    313901.pdf                   20118 KB    Adobe PDF    0


    All items in NCCUR are protected by copyright, with all rights reserved.



    著作權政策宣告 Copyright Announcement

    1. The digital content of this website is part of the National Chengchi University Institutional Repository. It is provided free of charge for non-commercial uses such as academic research and public education. Please use the content in a proper and reasonable manner and respect the rights of copyright owners. For commercial use, please obtain authorization from the copyright owner in advance.

    2. This website has been built with care to avoid infringing the rights of copyright owners. If you believe that any material on the website nevertheless infringes copyright, please notify our staff (nccur@nccu.edu.tw); the work will be removed from the repository immediately and your claim investigated.