Light invariant video imaging for improved performance of convolution neural networks

Amir Kolaman, Dan Malowany, Rami R. Hagege, Hugo Guterman

Research output: Contribution to journal › Article › peer-review

1 Scopus citation


Light conditions affect the performance of computer vision algorithms by creating spatial changes in color and intensity across a scene. Convolutional neural networks (CNNs) use the color components of the input image and, as a result, are sensitive to ambient light conditions. This work analyzes the influence of ambient light conditions on CNN classifiers. We suggest a method for boosting the performance of CNN-based object detection and classification algorithms by using light invariant video imaging (LIVI). LIVI neutralizes the influence of ambient light conditions and renders the perceived object's appearance independent of the light conditions. Training sets consist mainly, if not only, of objects in natural light conditions. As such, using LIVI boosts CNN performance by matching object appearance to that expected by the CNN model, which was created according to the training set. We further investigate the use of LIVI as a general self-supervised learning framework for CNNs. Faster region-based CNN (Faster R-CNN) was used as a case study to validate the influence of light conditions on CNN performance and to show how performance can be improved by using LIVI as an input or as a feedback mechanism in a self-supervised framework. We show that LIVI enables reduced CNN size, enhanced performance, and improved training.
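The abstract does not spell out the LIVI transform itself. As a rough illustration of the underlying idea — making per-pixel appearance independent of illumination intensity before it reaches the classifier — here is a minimal chromaticity-normalization sketch in Python with NumPy. This is a hypothetical stand-in for a light-invariant preprocessing step, not the paper's LIVI algorithm:

```python
import numpy as np

def chromaticity_normalize(image, eps=1e-6):
    """Map an RGB image to per-pixel chromaticity (channels sum to ~1),
    cancelling the overall illumination intensity at each pixel.
    Hypothetical illustration only; not the paper's LIVI algorithm."""
    image = image.astype(np.float64)
    total = image.sum(axis=-1, keepdims=True) + eps  # per-pixel intensity
    return image / total

# A uniformly brighter version of the same scene maps to (nearly)
# the same invariant representation.
rng = np.random.default_rng(0)
scene = rng.uniform(0.1, 0.9, size=(4, 4, 3))
brighter = scene * 1.8  # simulate stronger ambient light

inv_a = chromaticity_normalize(scene)
inv_b = chromaticity_normalize(brighter)
print(np.allclose(inv_a, inv_b, atol=1e-6))  # prints True
```

In a pipeline like the one the abstract describes, such a transform would sit between the camera and the CNN, so that objects seen under unusual lighting are presented to the network in a form closer to the (natural-light) training distribution.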

Original language: English
Article number: 8382234
Pages (from-to): 1584-1594
Number of pages: 11
Journal: IEEE Transactions on Circuits and Systems for Video Technology
Issue number: 6
State: Published - 1 Jun 2019


  • Computational cameras
  • Convolutional neural networks
  • Light invariant imaging

