Facial Image Analysis and Editing


Introduction

In [Kar et al. IEEE TIFS2021], we proposed a novel method, called Local Modified Zernike Moment per unit Mass (LMZMPM), for face recognition, which is invariant to illumination, scaling, noise, in-plane rotation, and translation, and inherits the orthogonality and other intrinsic properties of the Zernike Moments (ZMs). The proposed LMZMPM is computed for each pixel over a 3 × 3 neighborhood, and the complex tuple containing both the phase and magnitude coefficients of the LMZMPM is taken as the extracted feature. Because it retains both the phase and the magnitude components of the complex feature, it carries more information about the image and thus preserves both the edge and structural information. We also propose a hybrid similarity measure, combining the Jaccard similarity with the L1 distance, which is applied to the extracted feature set for classification. The feasibility of the proposed LMZMPM technique under varying illumination has been evaluated on the CMU-PIE and the Extended Yale B databases, with average Rank-1 Recognition (R1R) accuracies of 99.8% and 98.66%, respectively. To assess the reliability of the method under variations in noise, rotation, scaling, and translation, we evaluate it on the AR database and obtain an average R1R higher than that of recent state-of-the-art methods. The proposed method also achieves a very high recognition rate on Heterogeneous Face Recognition, with 100% on CUFS and 98.80% on CASIA-HFB.
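To illustrate the idea of the hybrid similarity measure, the sketch below combines a generalised (real-valued) Jaccard similarity with an L1 distance mapped to a similarity score. The weighting scheme, the normalisation, and the function name `hybrid_similarity` are illustrative assumptions; the exact formulation in the paper may differ.

```python
import numpy as np

def hybrid_similarity(f1, f2, alpha=0.5):
    """Hypothetical hybrid similarity combining a Jaccard-style term
    with an L1 (city-block) term; `alpha` balances the two."""
    f1 = np.abs(np.asarray(f1, dtype=float))
    f2 = np.abs(np.asarray(f2, dtype=float))
    # Generalised (Ruzicka) Jaccard similarity for real-valued features
    jaccard = np.minimum(f1, f2).sum() / np.maximum(f1, f2).sum()
    # L1 distance mapped to a similarity in (0, 1]
    l1_sim = 1.0 / (1.0 + np.abs(f1 - f2).sum())
    return alpha * jaccard + (1.0 - alpha) * l1_sim
```

In a classification setting, a probe feature vector would be compared against each gallery identity with this score and assigned to the identity with the highest similarity.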

Facial feature analysis can also be used to assist plastic surgeons in assessing patients objectively [Ho et al. 2018], as well as to predict perceived attractiveness [Wei et al. 2021]. In [Organisciak et al. ICPR2020], we further propose an end-to-end holistic approach to effectively transfer makeup styles between two low-resolution images. The idea is built upon a novel weighted multi-scale spatial attention module, which identifies salient pixel regions of low-resolution images at multiple scales, and uses channel attention to determine the most effective attention map. This design provides two benefits: first, low-resolution images are blurry to different extents, so a multi-scale architecture can select the most effective convolution kernel size for spatial attention; second, makeup is applied at both a macro level (foundation, fake tan) and a micro level (eyeliner, lipstick), so different scales excel at extracting different makeup features. We develop an Augmented CycleGAN network that embeds our attention modules at selected layers to transfer makeup most effectively. Our system is tested on the FBD dataset, which consists of many low-resolution facial images, and it outperforms state-of-the-art methods, particularly in transferring makeup for blurry and partially occluded images.
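The weighted multi-scale idea can be sketched as follows: compute a spatial saliency map at several kernel sizes, then blend the maps with softmax weights before re-weighting the features. This is a minimal NumPy illustration only; the published module uses learned convolutions and channel attention inside a CycleGAN, and the gating scores here (mean activation per map) are a stand-in assumption for the learned weights.

```python
import numpy as np

def box_filter(img, k):
    """Mean filter with an odd kernel size k, using edge padding."""
    p = k // 2
    padded = np.pad(img, p, mode="edge")
    out = np.zeros_like(img, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = padded[i:i + k, j:j + k].mean()
    return out

def multi_scale_spatial_attention(feat, kernel_sizes=(3, 5, 7)):
    """Toy weighted multi-scale spatial attention: one saliency map per
    scale, softmax-weighted into a single map that gates the features."""
    # One sigmoid-activated saliency map per kernel size
    maps = [1.0 / (1.0 + np.exp(-box_filter(feat, k))) for k in kernel_sizes]
    # Stand-in gating scores: mean activation of each map, softmaxed
    scores = np.array([m.mean() for m in maps])
    weights = np.exp(scores) / np.exp(scores).sum()
    attn = sum(w * m for w, m in zip(weights, maps))
    return feat * attn  # features re-weighted by the combined attention map
```

The softmax gating is what lets the module favour the kernel size best matched to the blur level of a given input, rather than committing to a single fixed scale.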

Readers are also referred to a closely related project on emotion analysis and transfer for facial expressions and body movements.

Publications

  1. Arindam Kar, Pinaki Prasad Guha Neogi, Arghya Chakraborty, Debotosh Bhattacharjee, Edmond S. L. Ho and Hubert P. H. Shum, "LMZMPM: Local Modified Zernike Moment Per-unit Mass for Robust Heterogeneous Face Recognition", IEEE Transactions on Information Forensics and Security, vol. 16(1), pp. 495-509, Dec 2021.
  2. Wei Wei, Edmond S. L. Ho, Kevin D. McCay, Robertas Damaševičius, Rytis Maskeliūnas and Anna Esposito, "Assessing Facial Symmetry and Attractiveness using Augmented Reality", Pattern Analysis and Applications, accepted, Mar 2021.
  3. Daniel Organisciak, Edmond S. L. Ho and Hubert P. H. Shum, "Makeup Style Transfer on Low-quality Images with Weighted Multi-scale Attention", Proceedings of the 2020 International Conference on Pattern Recognition (ICPR2020), accepted, Jan 2021.
  4. Edmond S. L. Ho, Kevin David McCay, Hubert P. H. Shum, Longzhi Yang, David Sainsbury and Peter Hodgkinson, "Patient Assessment Assistant Using Augmented Reality", Proceedings of the 2018 UK-China Newton Fund Researcher Links Workshop on Health and Well-being Through VR and AR, June 2018.

The Team

Daniel Organisciak

PhD student, Northumbria University
daniel.organisciak@northumbria.ac.uk

Dr. Edmond S. L. Ho

Senior Lecturer, Northumbria University
e.ho@northumbria.ac.uk

Prof. Debotosh Bhattacharjee

Professor, Jadavpur University
debotoshb@hotmail.com

Dr. Hubert P. H. Shum

Associate Professor, Durham University
hubert.shum@durham.ac.uk

David Sainsbury

Consultant Cleft and Plastic Surgeon, The Newcastle Upon Tyne Hospitals NHS Foundation Trust


Kevin McCay

PhD student, Northumbria University
kevin.d.mccay@northumbria.ac.uk