Workshop in conjunction with ACCV 2012 on
Face analysis: The intersection of computer vision and human perception
The analysis of faces is a very active research area in both the computer vision and the human perception communities. Although the two communities have traditionally worked separately, bringing them together will not only provide fresh research impetus but also allow otherwise intractable problems to be solved. The array of potential applications and research topics is impressive. The goal of this workshop is to examine existing work that straddles the border between these communities, and to map out future steps for integrative research.
Throughout development, we humans receive extensive training and experience in the processing of faces, resulting in a remarkable degree of expertise. This expertise encompasses the processing of face identity, age, gender, and non-verbal communication signals such as facial expressions. Tapping into existing knowledge from human face perception research can therefore guide the design of computer vision systems for facial analysis and synthesis, yielding more realistic behavioural facial models and better performance. For example, employing knowledge about the perceptual importance of different information in the moving face — such as the Facial Action Coding System (FACS) or empirically derived descriptive models — has helped improve the automatic recognition and measurement of facial expressions and prosody.
While perception research has gathered tremendous data on the behavioural consequences of face information, it has often had difficulty producing accurate models of how that information is processed. Computer vision systems can therefore not only provide useful tools for research into the human perception of faces, but can also serve as models of perceptual processes. Machine learning techniques can be applied to facial images and video to automatically extract descriptions of expressions and their timing, leading to better modelling of expression dynamics and, for example, the possibility of recognising emotion. Likewise, machine learning techniques can be used to simulate human perceptual learning. Another example is the use of facial image synthesis to generate stimuli for perceptual experiments, which enables more subtle manipulation of facial appearance and dynamics than would be possible with natural video capture — a good example is the highly successful application of the face morphable model in perceptual and neuroscientific research.