4EyesFace -- Real-Time Face Detection, Tracking and Alignment
Changbo Hu, Rogerio Feris, Matthew Turk
The goal of 4EyesFace is to find a face in video, track its pose, and align the face to a set of models.
We use an AdaBoost face detector to detect the face and provide the initial face position for tracking. See the face detector video demo.
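An AdaBoost detector combines many simple weak classifiers into one strong classifier by a weighted vote. The sketch below shows only that decision rule, with hypothetical decision stumps and weights standing in for the trained Haar-feature classifiers a real detector would use:

```python
# Minimal sketch of an AdaBoost strong classifier's decision rule.
# The stumps and weights here are hypothetical stand-ins, not a
# trained face detector.

def weak_stump(threshold, polarity):
    """A decision stump on a single scalar feature value."""
    def h(x):
        return 1 if polarity * x < polarity * threshold else 0
    return h

def strong_classify(feature_values, weak_classifiers, alphas):
    """AdaBoost decision: the weighted vote of the weak classifiers
    must reach half of the total weight."""
    score = sum(a * h(x)
                for a, h, x in zip(alphas, weak_classifiers, feature_values))
    return score >= 0.5 * sum(alphas)

# Three hypothetical stumps with hypothetical weights:
hs = [weak_stump(0.4, 1), weak_stump(0.6, -1), weak_stump(0.5, 1)]
alphas = [0.8, 0.5, 0.3]
print(strong_classify([0.3, 0.7, 0.2], hs, alphas))  # prints True
```

In a cascade-style detector this test is applied to every candidate image window, so most non-face windows are rejected cheaply.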
Then a facial-feature-based tracker estimates the face pose. This pose tracker is based on Kentaro Toyama's work. See the video demo. (PoseTracker)
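The core of feature-based pose estimation can be illustrated, very roughly, by recovering the best-fit head rotation from tracked feature points. This generic Kabsch-alignment sketch is an illustration only, not Toyama's actual tracker:

```python
import numpy as np

def best_rotation(model_pts, observed_pts):
    """Least-squares rotation R with R @ model_point ~= observed_point
    (Kabsch algorithm): SVD of the point covariance, with a sign fix
    to rule out reflections."""
    a = model_pts - model_pts.mean(axis=0)
    b = observed_pts - observed_pts.mean(axis=0)
    h = a.T @ b                      # 3x3 covariance of paired points
    u, _, vt = np.linalg.svd(h)
    d = np.sign(np.linalg.det(vt.T @ u.T))
    return vt.T @ np.diag([1.0, 1.0, d]) @ u.T

# Usage: recover a known 30-degree yaw from noiseless feature points
# (hypothetical 3-D facial feature coordinates).
theta = np.deg2rad(30)
true_r = np.array([[np.cos(theta), -np.sin(theta), 0],
                   [np.sin(theta),  np.cos(theta), 0],
                   [0, 0, 1]])
model = np.array([[0.0, 1.0, 0.2], [1.0, 0.0, -0.1],
                  [-1.0, 0.5, 0.0], [0.3, -0.8, 0.4]])
observed = model @ true_r.T
est = best_rotation(model, observed)
print(np.allclose(est, true_r, atol=1e-6))  # prints True
```

With noisy or partially occluded feature tracks, the same least-squares fit gives the pose estimate that best explains the observed points.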
For different poses we use three view-based models. To align faces to these models, we proposed a method called Active Wavelet Networks (AWN). AWN is an improvement on the Active Appearance Model (AAM) of Tim Cootes. The AAM algorithm has proved to be a successful method for face alignment and synthesis: by elegantly combining shape and texture models, it allows fast and robust deformable image matching. However, the method is sensitive to partial occlusions and illumination changes. In such cases, the PCA-based texture model spreads the reconstruction error globally over the image. The following images show this case.
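The global spreading of PCA reconstruction error under occlusion can be demonstrated with a small synthetic example (random low-dimensional "texture" data, not the actual AAM texture model):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "texture" training set: 100 samples of a 20-dim signal
# lying in a 3-dimensional subspace (a stand-in for face textures).
basis = rng.normal(size=(3, 20))
train = rng.normal(size=(100, 3)) @ basis
mean = train.mean(axis=0)

# PCA basis from the training data (top 3 principal components).
_, _, vt = np.linalg.svd(train - mean, full_matrices=False)
components = vt[:3]

def pca_reconstruct(x):
    """Project onto the PCA subspace and reconstruct."""
    coeffs = (x - mean) @ components.T
    return mean + coeffs @ components

# A clean test sample and an "occluded" copy (dims 0..4 corrupted).
x = rng.normal(size=3) @ basis
occluded = x.copy()
occluded[:5] += 5.0

err = np.abs(pca_reconstruct(occluded) - occluded)
# The corruption was confined to dims 0..4, yet the reconstruction
# error leaks into the untouched dims 5..19 as well, because each
# global PCA component touches every dimension.
print(err.round(2))
```

This is the failure mode the images illustrate: a local occlusion corrupts the globally supported PCA coefficients, so the residual appears everywhere, not just at the occluded pixels.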
AWN replaces the AAM texture model with a wavelet network representation. A face represented by a wavelet network can be reconstructed at varying accuracy by using different numbers of wavelets.
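This accuracy-versus-complexity trade-off can be sketched in one dimension: fitting a signal with a growing number of wavelets drives the residual down. This is a toy 1-D stand-in with an assumed Gabor-style wavelet; the actual method uses 2-D wavelets over the face image:

```python
import numpy as np

rng = np.random.default_rng(1)
t = np.linspace(0, 1, 200)
# A stand-in 1-D "texture" signal: a windowed oscillation.
signal = np.sin(6 * np.pi * t) * np.exp(-4 * (t - 0.5) ** 2)

def gabor(t, center, scale):
    """Odd Gabor-style wavelet: sine carrier under a Gaussian envelope."""
    u = (t - center) / scale
    return np.exp(-0.5 * u ** 2) * np.sin(2 * np.pi * u)

def residual_with_n_wavelets(n):
    """Least-squares fit of the signal with n wavelets on a fixed grid
    of centers; returns the residual norm."""
    centers = np.linspace(0.1, 0.9, n)
    A = np.stack([gabor(t, c, 0.08) for c in centers], axis=1)
    weights, *_ = np.linalg.lstsq(A, signal, rcond=None)
    return np.linalg.norm(signal - A @ weights)

errors = [residual_with_n_wavelets(n) for n in (2, 8, 32)]
print(errors)  # the residual shrinks as more wavelets are used
```

Because each wavelet has compact spatial support, dropping or down-weighting the wavelets under an occluded region leaves the rest of the representation intact, which is the key difference from the global PCA texture model.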
The view-based shape models used in our experiments are shown below.
Since we use spatially localized wavelets to model texture, our method is more robust to partial occlusions and some illumination changes. See more performance graphs and figures in the publications.
· Changbo Hu, Rogerio Feris and Matthew Turk, "Real-time View-based Face Alignment using Active Wavelet Networks," in ICCV 2003 Workshop on Analysis and Modeling of Faces and Gestures. (PDF)