Over the past two decades, automatic understanding of human behavior has attracted much interest, with applications in many fields such as computing, security, psychology, and medicine. Initially, there was an urgent need for user-friendly interaction between next-generation computer systems and human operators. The essential question is therefore how to design human-centered interfaces that can better capture and understand users' cognitive and emotional states.
Among all forms of personal information, facial expression is the most natural and cogent way for humans to give emphasis, show comprehension or disagreement, convey emotions, and interact with other people and the environment [1-3]. These facts highlight the importance of automatically analyzing human facial behavior, specifically action unit (AU) detection in the Facial Action Coding System (FACS) and the recognition of discrete emotion categories from facial imagery. A great deal of research has been dedicated to these areas over the past two decades [1-3]. Automatic analysis of facial expressions plays a crucial role in human-computer interaction.
Some difficult issues in facial expression recognition and analysis are listed as follows:
Traditionally, facial expressions are classified into six basic emotion prototypes, which cannot satisfy real-world emotion analysis and remain far from practical application [1-2]. More complex and precise representations are needed to detect them.
Changes in facial appearance, such as makeup and facial hair, are common in real-world face recordings and strongly affect facial expression analysis in most existing systems.
High resolution and high frame rate are necessary for recording at the level required for real-time facial analysis. It is reported that micro-expressions on the human face can last as little as 0.04 s, which requires capture at a frame rate of at least 50–60 frames/s [1].
Current publications on facial analysis are difficult to compare fairly: some neglect to specify their training and testing protocols, and cross-database evaluations are rarely reported. Surveys are needed to clarify the development of this field and to identify its goals, challenges, and targets.
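The frame-rate requirement above follows from simple arithmetic: a capture device must record at least a couple of frames during the shortest micro-expression to have any chance of detecting it. The sketch below (a minimal illustration; the specific candidate frame rates are assumptions, not values from the text) makes that calculation explicit.

```python
# Illustrative check: how many whole frames does a camera record during
# a single micro-expression? The 0.04 s duration is the figure cited in
# the text; the candidate frame rates below are illustrative choices.

MICRO_EXPRESSION_DURATION_S = 0.04  # shortest micro-expression duration (s)

def frames_captured(fps: float, duration_s: float = MICRO_EXPRESSION_DURATION_S) -> int:
    """Number of whole frames recorded during an event of the given duration."""
    return int(fps * duration_s)

for fps in (25, 30, 50, 60, 100):
    print(f"{fps:3d} fps -> {frames_captured(fps)} frame(s) per micro-expression")
```

At a standard 25–30 frames/s, the shortest micro-expression is covered by only one frame, which is indistinguishable from noise; 50–60 frames/s yields at least two frames, the minimum needed to observe a temporal change.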
My PhD aims to carry out research and improve performance in static and dynamic 3D facial expression recognition, which is still in its infancy. Some basic targets and objectives are listed as follows.
Beyond the techniques themselves, these improvements will be applied to different real-world scenarios, such as social relation traits, large-scale facial attribute detection, and even neuroaesthetics in fashion [4-6]. My PhD research is under way and will be adjusted toward applications involving personal digital data. Stay hungry, stay foolish.
The author is supported by the Horizon Centre for Doctoral Training at the University of Nottingham (RCUK Grant No. EP/L015463/1) and by Shenzhen University (China).