Emotion Analysis and Transfer for Facial Expressions and Body Movements


Emotion is a core element of performance. In computer animation, body motion and facial expression are the two most popular mediums through which a character expresses emotion. However, there has been limited research on how to effectively synthesize these two types of character movement with intuitive control over different levels of emotion strength, which is difficult to model effectively. In this project, we explore a common model [Chan et al. CAVW2019] for representing emotion in both body motion synthesis [Ho et al. D2AT2017] and facial expression synthesis [Stef et al. SKIMA2018]. Unlike previous work, which encodes emotions as discrete motion style descriptors, we propose a continuous control indicator called "emotion strength"; by adjusting it, our data-driven approach synthesizes motions with fine control over emotion. Rather than interpolating motion features to synthesize new motion as in existing work, our method explicitly learns a model mapping low-level motion features to emotion strength. Since this model is learned in the training stage, the computation required to synthesize motion at run-time is very low. We further demonstrate the generality of the proposed framework by editing 2D face images using relative emotion strength. As a result, our method can be applied to interactive applications such as computer games, image editing tools and virtual reality, as well as offline applications such as animation and movie production.
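To make the idea of a learned feature-to-strength mapping concrete, the following is a minimal, hypothetical sketch (not the project's actual model): a linear regression from made-up low-level motion features to a continuous emotion-strength label, fitted offline so that run-time evaluation is a single cheap matrix product.

```python
import numpy as np
from numpy.linalg import lstsq

# Hypothetical illustration only: features, labels and dimensions are made up.
rng = np.random.default_rng(0)
n_clips, n_features = 200, 16
X = rng.normal(size=(n_clips, n_features))   # per-clip low-level motion features
w_true = rng.normal(size=n_features)
strength = X @ w_true + 0.01 * rng.normal(size=n_clips)  # annotated emotion strengths

# Training stage: fit the mapping once, offline (least squares with a bias term).
w, *_ = lstsq(np.c_[X, np.ones(n_clips)], strength, rcond=None)

# Run-time: synthesizing at a requested strength only needs cheap evaluations.
def predict_strength(features):
    return features @ w[:-1] + w[-1]

print(predict_strength(X[0]))
```

This mirrors the cost profile described above: all heavy computation happens during training, so interactive applications only pay for a dot product per query.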

More recently, we proposed a new data-driven framework for emotion transfer in 3D hand [Chan et al. CGVC2020, Irimia et al. MIG2019] and full-body motion [Chan and Ho Computers2021]. Specifically, we formulate the motion synthesis task as an image-to-image translation problem: by representing a motion sequence as an image, our framework transfers emotion using StarGAN.
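The motion-as-image idea can be sketched as follows. This is a hypothetical layout (the papers above define the actual representation): rows index joints, columns index frames, and the three channels hold the (x, y, z) coordinates, normalised to 8-bit range so a standard image-to-image network can consume it.

```python
import numpy as np

# Hypothetical sketch: pack a skeletal motion clip into an image-like array.
n_frames, n_joints = 64, 21                     # e.g. a 21-joint hand model
motion = np.random.rand(n_frames, n_joints, 3)  # made-up joint positions

# Normalise each coordinate channel to [0, 255] as an 8-bit "motion image".
lo = motion.min(axis=(0, 1), keepdims=True)
hi = motion.max(axis=(0, 1), keepdims=True)
image = ((motion - lo) / (hi - lo) * 255).astype(np.uint8)
image = image.transpose(1, 0, 2)                # (joints, frames, channels)

# After translation by the network, invert the mapping to recover positions.
recovered = image.transpose(1, 0, 2) / 255.0 * (hi - lo) + lo
print(image.shape)  # (21, 64, 3)
```

Once motion lives in this image form, an off-the-shelf image translation network such as StarGAN can be trained to map between emotion classes, and the output image is decoded back to joint trajectories.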


  1. Jacky C. P. Chan and Edmond S. L. Ho, "Emotion Transfer for 3D Hand and Full Body Motion using StarGAN", Computers, 2021.
  2. Jacky C. P. Chan, Ana-Sabina Irimia and Edmond S. L. Ho, "Emotion Transfer for 3D Hand Motion using StarGAN", The 38th Computer Graphics & Visual Computing Conference (CGVC 2020), Sept 2020.
  3. Jacky C. P. Chan, Hubert P. H. Shum, He Wang, Li Yi, Wei Wei and Edmond S. L. Ho, "A Generic Framework for Editing and Synthesizing Multimodal Data with Relative Emotion Strength", Computer Animation and Virtual Worlds, vol. 30(6), pp. e1871, November/December 2019.
  4. Ana-Sabina Irimia, Jacky C. P. Chan, Kamlesh Mistry, Wei Wei and Edmond S. L. Ho, "Emotion Transfer for Hand Animation", ACM SIGGRAPH Conference on Motion, Interaction and Games (MIG 2019), Article No. 41, Oct 2019.
  5. Andreea Stef, Kaveen Perera, Hubert P. H. Shum and Edmond S. L. Ho, "Synthesizing Expressive Facial and Speech Animation by Text-to-IPA Translation with Emotion Control", Proceedings of the 12th International Conference on Software, Knowledge, Information Management and Applications (SKIMA 2018), Dec 2018.
  6. Edmond S. L. Ho, Hubert P. H. Shum, He Wang and Li Yi, "Synthesizing Motion with Relative Emotion Strength", Proceedings of the 2017 ACM SIGGRAPH Asia Workshop on Data-Driven Animation Techniques (D2AT), Nov 2017.

The Team

Dr. Jacky C. P. Chan

Lecturer, Hong Kong Baptist University

Dr. Edmond S. L. Ho

Senior Lecturer, Northumbria University

Ana-Sabina Irimia

Alumnus (BSc (Hons) Comp. Sci.), Northumbria University