2007 IEEE International Conference on Image Processing - San Antonio, Texas, U.S.A. - September 16-19, 2007

Technical Program

Paper Detail

Paper: TP-P3.2
Session: Image and Video Segmentation IV
Time: Tuesday, September 18, 14:30 - 17:10
Presentation: Poster
Title: ESTIMATION AND ANALYSIS OF FACIAL ANIMATION PARAMETER PATTERNS
Authors: Ferda Ofli, Koç University; Engin Erzin, Koç University; Yucel Yemez, Koç University; A. Murat Tekalp, Koç University
Abstract: We propose a framework for estimation and analysis of temporal facial expression patterns of a speaker. The proposed system aims to learn personalized elementary dynamic facial expression patterns for a particular speaker. We use head-and-shoulder stereo video sequences to track lip, eye, eyebrow, and eyelid motion of a speaker in 3D. MPEG-4 Facial Definition Parameters (FDPs) are used as the feature set, and temporal facial expression patterns are represented by MPEG-4 Facial Animation Parameters (FAPs). We perform Hidden Markov Model (HMM) based unsupervised temporal segmentation of upper and lower facial expression features separately to determine recurrent elementary facial expression patterns for a particular speaker. These facial expression patterns, coded as FAP sequences and not necessarily tied to prespecified emotions, can be used for personalized emotion estimation and expression synthesis for a speaker. Experimental results are presented.
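As a rough illustration of the kind of HMM-based unsupervised temporal segmentation described in the abstract, the sketch below fits a Gaussian HMM to a frame-level FAP feature matrix and reads recurrent elementary patterns off the Viterbi state sequence. The hmmlearn library, the number of states, the feature dimensions, and the synthetic data are illustrative assumptions, not the authors' implementation or parameter choices.

# Minimal sketch (assumptions only): unsupervised temporal segmentation of a
# facial-feature sequence with a Gaussian HMM, using the hmmlearn library.
import numpy as np
from hmmlearn import hmm

def segment_fap_sequence(fap_features, n_patterns=8, random_state=0):
    """Fit a Gaussian HMM to a (n_frames, n_fap_dims) feature sequence and
    return per-frame Viterbi state labels plus contiguous segments."""
    model = hmm.GaussianHMM(n_components=n_patterns,
                            covariance_type="diag",
                            n_iter=100,
                            random_state=random_state)
    model.fit(fap_features)               # unsupervised EM training
    states = model.predict(fap_features)  # Viterbi decoding

    # Collapse frame-level labels into (start, end, pattern_id) segments;
    # each segment is one instance of a recurrent elementary pattern.
    segments, start = [], 0
    for t in range(1, len(states) + 1):
        if t == len(states) or states[t] != states[start]:
            segments.append((start, t, int(states[start])))
            start = t
    return states, segments

if __name__ == "__main__":
    # Synthetic stand-in for lower-face FAP trajectories (e.g., lip features).
    rng = np.random.default_rng(0)
    fake_faps = np.vstack([rng.normal(loc=m, scale=0.3, size=(50, 6))
                           for m in (0.0, 2.0, -1.5, 2.0)])
    labels, segs = segment_fap_sequence(fake_faps, n_patterns=4)
    print(segs)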


