A novel "contour person" (CP) model of the human body is proposed that has the expressive power of a detailed 3D model and the computational benefits of a simple 2D part-based model. The CP model is learned from a 3D model of the human body that captures natural shape and pose variations; the projected contours of this model, along with their segmentation into parts, form the training set. The CP model factors deformations of the body into three components: shape variation, viewpoint change, and pose variation. The CP model can be "dressed" with a low-dimensional clothing model, referred to as the "dressed contour person" (DCP) model. Clothing is represented as a deformation from the underlying CP representation. This deformation is learned from training examples using principal component analysis, producing so-called eigen-clothing. The eigen-clothing coefficients can be used to recognize different categories of clothing on dressed people. The parameters of the estimated 2D body can be used to discriminatively predict 3D body shape using a learned mapping. This prediction framework can estimate the 3D shape of a person from a cluttered video sequence or from several snapshots taken with a digital camera or a cell phone.
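The eigen-clothing idea can be illustrated with a minimal sketch: treat each clothed/unclothed contour pair as a per-point deformation vector, and apply PCA (via SVD) to the stack of deformations. All function and variable names here are hypothetical, and the sketch assumes contours are fixed-length 2D point sequences in point-to-point correspondence; the actual model in the source may differ in representation and alignment details.

```python
import numpy as np

def learn_eigen_clothing(body_contours, clothed_contours, n_components=5):
    """Hypothetical sketch: PCA over per-point clothing deformations.

    Each contour is an (n_points, 2) array; body/clothed pairs are
    assumed to be in point-to-point correspondence.
    """
    # Deformation = offset of the clothed contour from the body contour.
    D = np.stack([(c - b).ravel()
                  for b, c in zip(body_contours, clothed_contours)])
    mean = D.mean(axis=0)
    # SVD of the centered deformations yields the eigen-clothing basis.
    _, sv, Vt = np.linalg.svd(D - mean, full_matrices=False)
    return mean, Vt[:n_components], sv[:n_components]

def clothing_coefficients(body, clothed, mean, basis):
    """Project a new deformation onto the eigen-clothing basis; the
    coefficients could then feed a clothing-category classifier."""
    d = (clothed - body).ravel() - mean
    return basis @ d
```

The coefficients returned by `clothing_coefficients` are the low-dimensional clothing description the abstract refers to; a classifier over them would distinguish clothing categories.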
IPC: G06K 9/00 A I
State: Published - 15 Dec 2011