CUI-Net: a correcting uneven illumination net for low-light image enhancement
Main Authors:
Format: Online Article Text
Language: English
Published: Nature Publishing Group UK, 2023
Subjects:
Online Access:
https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10412593/
https://www.ncbi.nlm.nih.gov/pubmed/37558723
http://dx.doi.org/10.1038/s41598-023-39524-5
Summary: Uneven lighting conditions often occur during real-life photography; for example, images taken at night may have both low-light dark areas and high-light overexposed areas. Traditional algorithms for enhancing low-light areas also increase the brightness of overexposed areas, affecting the overall visual effect of the image. Therefore, it is important to achieve differentiated enhancement of low-light and high-light areas. In this paper, we propose a network called correcting uneven illumination network (CUI-Net), combining a sparse attention Transformer with a convolutional neural network (CNN), to better extract low-light features by constraining high-light features. Specifically, CUI-Net consists of two main modules: a low-light enhancement module and an auxiliary module. The enhancement module is a hybrid network that combines the advantages of CNN and Transformer networks, which can alleviate uneven lighting problems and better enhance local details. The auxiliary module is used to converge the outputs of multiple enhancement modules during the training phase, so that only one enhancement module is needed during the testing phase, speeding up inference. Furthermore, zero-shot learning is used in this paper to adapt to complex uneven lighting environments without requiring paired or unpaired training data. Finally, to validate the effectiveness of the algorithm, we tested it on multiple datasets of different types; the algorithm showed stable performance, demonstrating its robustness. Additionally, by applying this algorithm to practical visual tasks such as object detection, face detection, and semantic segmentation, and comparing it with other state-of-the-art low-light image enhancement algorithms, we have demonstrated its practicality and advantages.
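The hybrid design described above, pairing a sparse attention branch (global context, with only the strongest attention links kept) with a convolutional branch (local detail), can be illustrated with a toy sketch. This is not the paper's implementation; the top-k sparsification, the 1-D "convolution" over patch tokens, and the additive fusion are all simplifying assumptions made purely for illustration.

```python
import numpy as np

def sparse_attention(x, k=2):
    """Toy self-attention keeping only the top-k scores per query
    (a stand-in for the paper's sparse attention; details assumed)."""
    scores = x @ x.T / np.sqrt(x.shape[1])
    # Mask everything except the k largest scores in each row.
    drop_idx = np.argsort(scores, axis=1)[:, :-k]
    masked = scores.copy()
    np.put_along_axis(masked, drop_idx, -np.inf, axis=1)
    # Softmax over the surviving scores.
    w = np.exp(masked - masked.max(axis=1, keepdims=True))
    w /= w.sum(axis=1, keepdims=True)
    return w @ x

def local_conv(x, kernel):
    """1-D 'same' convolution along the token axis (CNN branch stand-in)."""
    pad = len(kernel) // 2
    padded = np.pad(x, ((pad, pad), (0, 0)), mode="edge")
    out = np.zeros_like(x)
    for i in range(x.shape[0]):
        out[i] = sum(kernel[j] * padded[i + j] for j in range(len(kernel)))
    return out

def hybrid_block(x, k=2):
    """Hybrid block: global sparse attention plus local convolution,
    fused by simple addition (fusion scheme is an assumption)."""
    return sparse_attention(x, k=k) + local_conv(x, np.array([0.25, 0.5, 0.25]))

tokens = np.random.default_rng(0).random((6, 4))  # 6 patch tokens, dim 4
out = hybrid_block(tokens)
print(out.shape)  # prints (6, 4)
```

The sparsification step is the key idea: by zeroing out weak attention links, each low-light token attends only to its most relevant peers, which loosely mirrors how the paper constrains high-light features when extracting low-light ones.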