TY - GEN
T1 - GIF: Generative Interpretable Faces
T2 - 8th International Conference on 3D Vision, 3DV 2020
AU - Ghosh, Partha
AU - Gupta, Pravir Singh
AU - Uziel, Roy
AU - Ranjan, Anurag
AU - Black, Michael J.
AU - Bolkart, Timo
N1 - Publisher Copyright:
© 2020 IEEE.
PY - 2020/11/1
Y1 - 2020/11/1
N2 - Photo-realistic visualization and animation of expressive human faces have been a long-standing challenge. 3D face modeling methods provide parametric control but generate unrealistic images; generative 2D models like GANs (Generative Adversarial Networks), on the other hand, output photo-realistic face images but lack explicit control. Recent methods gain partial control, either by attempting to disentangle different factors in an unsupervised manner, or by adding control post hoc to a pre-trained model. Unconditional GANs, however, may entangle factors that are hard to undo later. We condition our generative model on pre-defined control parameters to encourage disentanglement in the generation process. Specifically, we condition StyleGAN2 on FLAME, a generative 3D face model. While conditioning on FLAME parameters yields unsatisfactory results, we find that conditioning on rendered FLAME geometry and photometric details works well. This gives us a generative 2D face model named GIF (Generative Interpretable Faces) that offers FLAME's parametric control. Here, interpretable refers to the semantic meaning of different parameters. Given FLAME parameters for shape, pose, and expression, parameters for appearance and lighting, and an additional style vector, GIF outputs photo-realistic face images. We perform an AMT-based perceptual study to quantitatively and qualitatively evaluate how well GIF follows its conditioning. The code, data, and trained model are publicly available for research purposes at http://gif.is.tue.mpg.de.
AB - Photo-realistic visualization and animation of expressive human faces have been a long-standing challenge. 3D face modeling methods provide parametric control but generate unrealistic images; generative 2D models like GANs (Generative Adversarial Networks), on the other hand, output photo-realistic face images but lack explicit control. Recent methods gain partial control, either by attempting to disentangle different factors in an unsupervised manner, or by adding control post hoc to a pre-trained model. Unconditional GANs, however, may entangle factors that are hard to undo later. We condition our generative model on pre-defined control parameters to encourage disentanglement in the generation process. Specifically, we condition StyleGAN2 on FLAME, a generative 3D face model. While conditioning on FLAME parameters yields unsatisfactory results, we find that conditioning on rendered FLAME geometry and photometric details works well. This gives us a generative 2D face model named GIF (Generative Interpretable Faces) that offers FLAME's parametric control. Here, interpretable refers to the semantic meaning of different parameters. Given FLAME parameters for shape, pose, and expression, parameters for appearance and lighting, and an additional style vector, GIF outputs photo-realistic face images. We perform an AMT-based perceptual study to quantitatively and qualitatively evaluate how well GIF follows its conditioning. The code, data, and trained model are publicly available for research purposes at http://gif.is.tue.mpg.de.
KW - Conditional GANs
KW - Disentanglement
KW - Face Animation
KW - GANs
KW - Generative models
KW - Photorealistic Image Generation
UR - http://www.scopus.com/inward/record.url?scp=85101432363&partnerID=8YFLogxK
U2 - 10.1109/3DV50981.2020.00097
DO - 10.1109/3DV50981.2020.00097
M3 - Conference contribution
AN - SCOPUS:85101432363
T3 - Proceedings - 2020 International Conference on 3D Vision, 3DV 2020
SP - 868
EP - 878
BT - Proceedings - 2020 International Conference on 3D Vision, 3DV 2020
PB - Institute of Electrical and Electronics Engineers
Y2 - 25 November 2020 through 28 November 2020
ER -