Many studies use color space models (CSMs) for face detection in images. Most researchers select a CSM a priori and then use it for color segmentation of the face by constructing a color distribution model (CDM); there has been little work on identifying the best CSM overall. We develop a procedure that adaptively switches the CSM throughout the processing of a video, and we show that it works in environments where the face moves among multiple light sources with differing types of illumination. A test of the procedure using five 2D color space models (RG, rg, HS, YQ, and CbCr) found that switching between color spaces improved tracking performance. In addition, we propose a new performance measure for evaluating color-tracking algorithms that captures both the accuracy and the robustness of the tracking window. The methodology can be used to find the optimal CSM-CDM combination in adaptive color-tracking systems.
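As an illustrative sketch (not taken from the paper), the following shows how an RGB pixel maps into two of the 2D color spaces named above: normalized rg chromaticity and the CbCr chrominance plane (using standard ITU-R BT.601 constants). A CDM for skin color is typically built over such 2D coordinates; the function names here are hypothetical.

```python
def rgb_to_rg(r, g, b):
    """Normalized rg chromaticity: divides out intensity, keeping chromaticity."""
    s = r + g + b
    if s == 0:
        return 0.0, 0.0  # black pixel: chromaticity undefined, return origin
    return r / s, g / s

def rgb_to_cbcr(r, g, b):
    """CbCr chrominance from BT.601 YCbCr (inputs assumed in 0..255)."""
    cb = 128 - 0.168736 * r - 0.331264 * g + 0.5 * b
    cr = 128 + 0.5 * r - 0.418688 * g - 0.081312 * b
    return cb, cr

# Example: a light skin-tone pixel projected into each 2D space
r, g, b = 224, 172, 145
rg = rgb_to_rg(r, g, b)      # intensity-invariant chromaticity pair
cbcr = rgb_to_cbcr(r, g, b)  # chrominance pair used for CbCr segmentation
```

Because rg discards overall intensity while CbCr separates chrominance from luma differently, the two spaces cluster skin pixels differently under changing illumination, which is why switching between such spaces can help.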