Pre-trained low-light image enhancement transformer

Jingyao Zhang, Shijie Hao, Yuan Rao

Research output: Contribution to journal › Article › peer-review

2 Scopus citations

Abstract

Low-light image enhancement is a longstanding challenge in low-level vision, as images captured under low-light conditions often suffer from severe flaws in visual quality. Recent deep neural network methods have made impressive progress in this area. In contrast to mainstream convolutional neural network (CNN)-based methods, this work proposes an effective solution inspired by the transformer, which has shown strong performance across a wide range of tasks. The solution is centred on two key components: an image synthesis pipeline and a powerful transformer-based pre-trained model, the low-light image enhancement transformer (LIET). The image synthesis pipeline comprises illumination simulation and realistic noise simulation, enabling the generation of more life-like low-light images to overcome data scarcity. LIET combines streamlined CNN-based encoder-decoders with a transformer body, efficiently extracting global and local contextual features at a relatively low computational cost. Extensive experiments show that this approach is highly competitive with current state-of-the-art methods. The code has been released and is available at LIET.
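The sketch below is a minimal, hypothetical PyTorch illustration of the two components the abstract describes: a low-light synthesis step (illumination darkening plus additive noise) and a LIET-style model that places a transformer body between lightweight CNN encoder and decoder stages. The module names, channel sizes, gamma value, and noise model are illustrative assumptions, not the authors' released implementation.

```python
# Illustrative sketch only; hyperparameters and layer choices are assumptions.
import torch
import torch.nn as nn


def synthesize_low_light(img, gamma=2.5, noise_std=0.02):
    """Darken a normal-light image and add noise (crude stand-in for the
    illumination and noise simulation described in the abstract)."""
    dark = img.clamp(0, 1) ** gamma            # illumination simulation
    noise = torch.randn_like(dark) * noise_std  # simplistic noise model
    return (dark + noise).clamp(0, 1)


class LIETSketch(nn.Module):
    """CNN encoder -> transformer body -> CNN decoder (hypothetical layout)."""

    def __init__(self, channels=64, num_layers=4, num_heads=4):
        super().__init__()
        # Streamlined CNN encoder: local features plus one downsampling step
        self.encoder = nn.Sequential(
            nn.Conv2d(3, channels, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, stride=2, padding=1), nn.ReLU(inplace=True),
        )
        # Transformer body: global context over flattened spatial tokens
        layer = nn.TransformerEncoderLayer(
            d_model=channels, nhead=num_heads,
            dim_feedforward=4 * channels, batch_first=True,
        )
        self.body = nn.TransformerEncoder(layer, num_layers=num_layers)
        # Streamlined CNN decoder: upsample back to image resolution
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(channels, channels, 4, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(channels, 3, 3, padding=1),
        )

    def forward(self, x):
        feat = self.encoder(x)                    # (B, C, H/2, W/2)
        b, c, h, w = feat.shape
        tokens = feat.flatten(2).transpose(1, 2)  # (B, H*W/4, C)
        tokens = self.body(tokens)                # global self-attention
        feat = tokens.transpose(1, 2).reshape(b, c, h, w)
        return self.decoder(feat).clamp(0, 1)     # enhanced image


if __name__ == "__main__":
    clean = torch.rand(1, 3, 128, 128)            # stand-in for a clean image
    low = synthesize_low_light(clean)             # synthetic low-light input
    enhanced = LIETSketch()(low)
    print(enhanced.shape)                         # torch.Size([1, 3, 128, 128])
```

Running the tokens through a transformer encoder between the CNN stages mirrors the abstract's design rationale: the convolutions capture local detail cheaply, while attention over the downsampled feature map supplies global context at modest computational cost.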

Original language: English
Pages (from-to): 1967-1984
Number of pages: 18
Journal: IET Image Processing
Volume: 18
Issue number: 8
DOIs
State: Published - 19 Jun 2024
Externally published: Yes

Keywords

  • image enhancement
  • image processing

ASJC Scopus subject areas

  • Software
  • Signal Processing
  • Computer Vision and Pattern Recognition
  • Electrical and Electronic Engineering
