%0 Journal Article
%J IEEE Transactions on Pattern Analysis and Machine Intelligence
%D 2012
%T A Blur-robust Descriptor with Applications to Face Recognition
%A Gopalan, R.
%A Taheri, S.
%A Turaga, P.
%A Chellappa, Rama
%K Blur
%K convolution
%K Face
%K face recognition
%K Grassmann manifold
%K Kernel
%K Manifolds
%K NOISE
%K PROBES
%X Understanding the effect of blur is an important problem in unconstrained visual analysis. We address this problem in the context of image-based recognition by fusing image-formation models and differential geometric tools. First, we discuss the space spanned by blurred versions of an image and then, under certain assumptions, provide a differential geometric analysis of that space. More specifically, we create a subspace resulting from convolution of an image with a complete set of orthonormal basis functions of a pre-specified maximum size (which can represent an arbitrary blur kernel within that size), and show that the corresponding subspaces created from a clean image and its blurred versions are equal under the ideal case of zero noise and some assumptions on the properties of blur kernels. We then study the practical utility of this subspace representation for direct recognition of blurred faces by viewing the subspaces as points on the Grassmann manifold, and present methods to perform recognition when the blur is both homogeneous and spatially varying. We empirically analyze the effect of noise, as well as the presence of other facial variations between the gallery and probe images, and provide comparisons with existing approaches on standard datasets.
%B IEEE Transactions on Pattern Analysis and Machine Intelligence
%V PP
%P 1 - 1
%8 2012/01/10/
%@ 0162-8828
%G eng
%N 99
%R 10.1109/TPAMI.2012.15
%0 Journal Article
%J Computer Vision and Image Understanding
%D 2012
%T Class consistent k-means: Application to face and action recognition
%A Zhuolin Jiang
%A Zhe Lin
%A Davis, Larry S.
%K Action recognition
%K Class consistent k-means
%K Discriminative tree classifier
%K face recognition
%K Supervised clustering
%X A class-consistent k-means clustering algorithm (CCKM) and its hierarchical extension (Hierarchical CCKM) are presented for generating discriminative visual words for recognition problems. In addition to using the labels of training data themselves, we associate a class label with each cluster center to enforce discriminability in the resulting visual words. Our algorithms encourage data points from the same class to be assigned to the same visual word, and those from different classes to be assigned to different visual words. More specifically, we introduce a class consistency term in the clustering process which penalizes assignment of data points from different classes to the same cluster. The optimization process is efficient and bounded by the complexity of k-means clustering. A very efficient and discriminative tree classifier can be learned for various recognition tasks via the Hierarchical CCKM. The effectiveness of the proposed algorithms is validated on two public face datasets and four benchmark action datasets.
%B Computer Vision and Image Understanding
%V 116
%P 730 - 741
%8 2012/06//
%@ 1077-3142
%G eng
%U http://www.sciencedirect.com/science/article/pii/S1077314212000367
%N 6
%R 10.1016/j.cviu.2012.02.004
%0 Journal Article
%J IEEE Transactions on Information Forensics and Security
%D 2012
%T Dictionary-based Face Recognition Under Variable Lighting and Pose
%A Patel, Vishal M.
%A Wu, T.
%A Biswas, S.
%A Phillips, P.
%A Chellappa, Rama
%K Biometrics
%K dictionary learning
%K face recognition
%K illumination variation
%K outlier rejection
%X We present a face recognition algorithm based on simultaneous sparse approximations under varying illumination and pose. For each class, a dictionary that minimizes the representation error under a sparseness constraint is learned from the given training examples. A novel test image is projected onto the span of the atoms in each learned dictionary. The resulting residual vectors are then used for classification. To handle variations in lighting conditions and pose, an image relighting technique based on pose-robust albedo estimation is used to generate multiple frontal images of the same person under variable lighting. As a result, the proposed algorithm can recognize human faces with high accuracy even when only a single image, or very few images, per person is provided for training. The effectiveness of the proposed method is demonstrated using publicly available databases, where it is shown to be efficient and to perform significantly better than many competitive face recognition algorithms.
%B IEEE Transactions on Information Forensics and Security
%V PP
%P 1 - 1
%8 2012/02/27/
%@ 1556-6013
%G eng
%N 99
%R 10.1109/TIFS.2012.2189205
%0 Conference Paper
%B 2011 18th IEEE International Conference on Image Processing (ICIP)
%D 2011
%T Face tracking in low resolution videos under illumination variations
%A Zou, W.W.W.
%A Chellappa, Rama
%A Yuen, P.C.
%K Adaptation models
%K Computational modeling
%K Face
%K face recognition
%K face tracking
%K GLF-based tracker
%K gradient methods
%K gradient-logarithmic field feature
%K illumination variations
%K lighting
%K low resolution videos
%K low-resolution
%K particle filter
%K particle filter framework
%K particle filtering (numerical methods)
%K Robustness
%K tracking
%K video signal processing
%K Videos
%K Visual face tracking
%X In practical face tracking applications, the face region is often small and affected by illumination variations. We address this problem by using a new feature, namely the Gradient-Logarithmic Field (GLF) feature, within the particle filter framework. The GLF feature is robust under illumination variations, and the GLF-based tracker does not assume any model for the face being tracked and is effective on low-resolution video. Experimental results show that the proposed GLF-based tracker works well under significant illumination changes and outperforms some of the state-of-the-art algorithms.
%B 2011 18th IEEE International Conference on Image Processing (ICIP)
%I IEEE
%P 781 - 784
%8 2011/09/11/14
%@ 978-1-4577-1304-0
%G eng
%R 10.1109/ICIP.2011.6116672
%0 Conference Paper
%B 2011 18th IEEE International Conference on Image Processing (ICIP)
%D 2011
%T Illumination robust dictionary-based face recognition
%A Patel, Vishal M.
%A Tao Wu
%A Biswas, S.
%A Phillips, P.J.
%A Chellappa, Rama
%K albedo
%K approximation theory
%K classification
%K competitive face recognition algorithms
%K Databases
%K Dictionaries
%K Face
%K face recognition
%K face recognition method
%K filtering theory
%K human face recognition
%K illumination robust dictionary-based face recognition
%K illumination variation
%K image representation
%K learned dictionary
%K learning (artificial intelligence)
%K lighting
%K lighting conditions
%K multiple images
%K nonstationary stochastic filter
%K publicly available databases
%K relighting
%K relighting approach
%K representation error
%K residual vectors
%K Robustness
%K simultaneous sparse approximations
%K simultaneous sparse signal representation
%K sparseness constraint
%K Training
%K varying illumination
%K vectors
%X In this paper, we present a face recognition method based on simultaneous sparse approximations under varying illumination. Our method consists of two main stages. In the first stage, a dictionary that minimizes the representation error under a sparseness constraint is learned for each face class from the given training examples. In the second stage, a test image is projected onto the span of the atoms in each learned dictionary. The resulting residual vectors are then used for classification. Furthermore, to handle changes in lighting conditions, we use a relighting approach based on a non-stationary stochastic filter to generate multiple images of the same person under different lighting. As a result, our algorithm can recognize human faces with good accuracy even when only a single image, or very few images, are provided for training. The effectiveness of the proposed method is demonstrated on publicly available databases, where it is shown to be efficient and to perform significantly better than many competitive face recognition algorithms.
%B 2011 18th IEEE International Conference on Image Processing (ICIP)
%I IEEE
%P 777 - 780
%8 2011/09/11/14
%@ 978-1-4577-1304-0
%G eng
%R 10.1109/ICIP.2011.6116670
%0 Conference Paper
%B Computer Vision and Pattern Recognition (CVPR), 2011 IEEE Conference on
%D 2011
%T Learning a discriminative dictionary for sparse coding via label consistent K-SVD
%A Zhuolin Jiang
%A Zhe Lin
%A Davis, Larry S.
%K classification error
%K Dictionaries
%K dictionary learning process
%K discriminative sparse code error
%K face recognition
%K image classification
%K Image coding
%K K-SVD
%K label consistent
%K learning (artificial intelligence)
%K object category recognition
%K Object recognition
%K optimal linear classifier
%K reconstruction error
%K singular value decomposition
%K Training data
%X A label consistent K-SVD (LC-KSVD) algorithm to learn a discriminative dictionary for sparse coding is presented. In addition to using class labels of training data, we also associate label information with each dictionary item (columns of the dictionary matrix) to enforce discriminability in sparse codes during the dictionary learning process. More specifically, we introduce a new label consistent constraint called `discriminative sparse-code error' and combine it with the reconstruction error and the classification error to form a unified objective function. The optimal solution is efficiently obtained using the K-SVD algorithm. Our algorithm learns a single over-complete dictionary and an optimal linear classifier jointly. It yields dictionaries so that feature points with the same class labels have similar sparse codes. Experimental results demonstrate that our algorithm outperforms many recently proposed sparse coding techniques for face and object category recognition under the same learning conditions.
%B Computer Vision and Pattern Recognition (CVPR), 2011 IEEE Conference on
%P 1697 - 1704
%8 2011/06//
%G eng
%R 10.1109/CVPR.2011.5995354
%0 Journal Article
%J IEEE Transactions on Pattern Analysis and Machine Intelligence
%D 2011
%T Statistical Computations on Grassmann and Stiefel Manifolds for Image and Video-Based Recognition
%A Turaga, P.
%A Veeraraghavan, A.
%A Srivastava, A.
%A Chellappa, Rama
%K activity based video clustering
%K activity recognition
%K computational geometry
%K Computational modeling
%K Data models
%K face recognition
%K feature representation
%K finite dimensional linear subspaces
%K geometric properties
%K Geometry
%K Grassmann Manifolds
%K Grassmann
%K HUMANS
%K Image and video models
%K image recognition
%K linear dynamic models
%K linear subspace structure
%K Manifolds
%K maximum likelihood classification
%K maximum likelihood estimation
%K Object recognition
%K Riemannian geometry
%K Riemannian metrics
%K SHAPE
%K statistical computations
%K statistical models
%K Stiefel
%K Stiefel Manifolds
%K unsupervised clustering
%K video based face recognition
%K video based recognition
%K video signal processing
%X In this paper, we examine image and video-based recognition applications where the underlying models have a special structure: the linear subspace structure. We discuss how commonly used parametric models for videos and image sets can be described using the unified framework of Grassmann and Stiefel manifolds. We first show that the parameters of linear dynamic models are finite-dimensional linear subspaces of appropriate dimensions. Unordered image sets, as samples from a finite-dimensional linear subspace, naturally fall under this framework. We show that an inference over subspaces can be naturally cast as an inference problem on the Grassmann manifold. To perform recognition using subspace-based models, we need tools from the Riemannian geometry of the Grassmann manifold. This involves a study of the geometric properties of the space, appropriate definitions of Riemannian metrics, and definition of geodesics. Further, we derive statistical models of inter- and intraclass variations that respect the geometry of the space. We apply techniques such as intrinsic and extrinsic statistics to enable maximum-likelihood classification. We also provide algorithms for unsupervised clustering derived from the geometry of the manifold. Finally, we demonstrate the improved performance of these methods in a wide variety of vision applications such as activity recognition, video-based face recognition, object recognition from image sets, and activity-based video clustering.
%B IEEE Transactions on Pattern Analysis and Machine Intelligence
%V 33
%P 2273 - 2286
%8 2011/11//
%@ 0162-8828
%G eng
%N 11
%R 10.1109/TPAMI.2011.52
%0 Conference Paper
%B 2011 International Joint Conference on Biometrics (IJCB)
%D 2011
%T Synthesis-based recognition of low resolution faces
%A Shekhar, S.
%A Patel, Vishal M.
%A Chellappa, Rama
%K Dictionaries
%K Face
%K face images
%K face recognition
%K face recognition literature
%K face recognition systems
%K illumination variations
%K image resolution
%K low resolution faces
%K Organizations
%K PROBES
%K support vector machines
%K synthesis based recognition
%X Recognition of low resolution face images is a challenging problem in many practical face recognition systems. Methods have been proposed in the face recognition literature for the problem when the probe is of low resolution and a high resolution gallery is available for recognition. These methods modify the probe image such that the resultant image provides better discrimination. We formulate the problem differently by leveraging the information available in the high resolution gallery image and propose a generative approach for classifying the probe image. An important feature of our algorithm is that it can handle resolution changes along with illumination variations. The effectiveness of the proposed method is demonstrated using standard datasets and a challenging outdoor face dataset. It is shown that our method is efficient and can perform significantly better than many competitive low resolution face recognition algorithms.
%B 2011 International Joint Conference on Biometrics (IJCB)
%I IEEE
%P 1 - 6
%8 2011/10/11/13
%@ 978-1-4577-1358-3
%G eng
%R 10.1109/IJCB.2011.6117545
%0 Conference Paper
%B 2011 IEEE International Conference on Automatic Face & Gesture Recognition and Workshops (FG 2011)
%D 2011
%T Towards view-invariant expression analysis using analytic shape manifolds
%A Taheri, S.
%A Turaga, P.
%A Chellappa, Rama
%K Databases
%K Deformable models
%K Face
%K face recognition
%K facial expression analysis
%K Geometry
%K Gold
%K Human-computer interaction
%K Manifolds
%K projective transformation
%K Riemannian interpretation
%K SHAPE
%K view invariant expression analysis
%X Facial expression analysis is one of the important components for effective human-computer interaction. However, to develop robust and generalizable models for expression analysis, one needs to break the dependence of the models on the choice of the coordinate frame of the camera; i.e., expression models should generalize across facial poses. To perform this systematically, one needs to understand the space of observed images subject to projective transformations. However, since the projective shape-space is cumbersome to work with, we address this problem by deriving models for expressions on the affine shape-space as an approximation to the projective shape-space, using a Riemannian interpretation of the deformations that facial expressions cause on different parts of the face. We use landmark configurations to represent facial deformations and exploit the fact that the affine shape-space can be studied using the Grassmann manifold. This representation enables us to perform various expression analysis and recognition algorithms without the need for normalization as a preprocessing step. We extend some of the available approaches for expression analysis to the Grassmann manifold and experimentally show promising results, paving the way for a more general theory of view-invariant expression analysis.
%B 2011 IEEE International Conference on Automatic Face & Gesture Recognition and Workshops (FG 2011)
%I IEEE
%P 306 - 313
%8 2011/03/21/25
%@ 978-1-4244-9140-7
%G eng
%R 10.1109/FG.2011.5771415
%0 Journal Article
%J Computer Vision and Image Understanding
%D 2010
%T Comparing and combining lighting insensitive approaches for face recognition
%A Gopalan, Raghuraman
%A Jacobs, David W.
%K Classifier comparison and combination
%K face recognition
%K Gradient direction
%K lighting
%X Face recognition under changing lighting conditions is a challenging problem in computer vision. In this paper, we analyze the relative strengths of different lighting insensitive representations, and propose efficient classifier combination schemes that result in better recognition rates. We consider two experimental settings, wherein we study the performance of different algorithms with (and without) prior information on the different illumination conditions present in the scene. In both settings, we focus on the problem of having just one exemplar per person in the gallery. Based on these observations, we design algorithms for integrating the individual classifiers to capture the significant aspects of each representation. We then illustrate the performance improvement obtained through our classifier combination algorithms on the illumination subset of the PIE dataset, and on the extended Yale-B dataset. Throughout, we consider galleries with both homogeneous and heterogeneous lighting conditions.
%B Computer Vision and Image Understanding
%V 114
%P 135 - 145
%8 2010/01//
%@ 1077-3142
%G eng
%U http://www.sciencedirect.com/science/article/pii/S1077314209001210
%N 1
%R 10.1016/j.cviu.2009.07.005
%0 Conference Paper
%B Control Automation Robotics Vision (ICARCV), 2010 11th International Conference on
%D 2010
%T Sparse representations and Random Projections for robust and cancelable biometrics
%A Patel, Vishal M.
%A Chellappa, Rama
%A Tistarelli, M.
%K Biometric identification
%K Cancelable Biometrics
%K Compressed sensing
%K face data
%K face recognition
%K iris data
%K iris recognition
%K personal biometric data
%K Random Projections
%K robust biometrics
%K sparse representations
%X In recent years, the theories of Sparse Representation (SR) and Compressed Sensing (CS) have emerged as powerful tools for efficiently processing data in non-traditional ways. An area of promise for these theories is biometric identification. In this paper, we review the role of sparse representation and CS for efficient biometric identification. Algorithms to perform identification from face and iris data are reviewed. By applying Random Projections it is possible to purposively hide the biometric data within a template. This procedure can be effectively employed for securing and protecting personal biometric data against theft. Some of the most compelling challenges and issues that confront research in biometrics using sparse representations and CS are also addressed.
%B Control Automation Robotics Vision (ICARCV), 2010 11th International Conference on
%P 1 - 6
%8 2010/12//
%G eng
%R 10.1109/ICARCV.2010.5707955
%0 Journal Article
%J Journal of Visual Languages & Computing
%D 2009
%T Computational methods for modeling facial aging: A survey
%A Ramanathan, Narayanan
%A Chellappa, Rama
%A Biswas, Soma
%K age estimation
%K Age progression
%K Craniofacial growth
%K face recognition
%X Facial aging, a new dimension that has recently been added to the problem of face recognition, poses interesting theoretical and practical challenges to the research community. The problem, which originally generated interest in the psychophysics and human perception community, has recently found enhanced interest in the computer vision community. How do humans perceive age? What constitutes an age-invariant signature that can be derived from faces? How compactly can the facial growth event be described? How does facial aging impact recognition performance? In this paper, we give a thorough analysis of the problem of facial aging and provide a complete account of the many interesting studies that have been performed on this topic in different fields. We offer a comparative analysis of various approaches that have been proposed for problems such as age estimation, appearance prediction, and face verification, and offer insights into future research on this topic.
%B Journal of Visual Languages & Computing
%V 20
%P 131 - 144
%8 2009/06//
%@ 1045-926X
%G eng
%U http://www.sciencedirect.com/science/article/pii/S1045926X09000032
%N 3
%R 10.1016/j.jvlc.2009.01.011
%0 Conference Paper
%B 2006 International Conference on Parallel Processing Workshops, 2006. ICPP 2006 Workshops
%D 2006
%T Model-based OpenMP implementation of a 3D facial pose tracking system
%A Saha, S.
%A Chung-Ching Shen
%A Chia-Jui Hsu
%A Aggarwal, G.
%A Veeraraghavan, A.
%A Sussman, Alan
%A Bhattacharyya, Shuvra S.
%K 3D facial pose tracking system
%K application modeling
%K application program interfaces
%K application scheduling
%K coarse-grain dataflow graphs
%K Concurrent computing
%K data flow graphs
%K Educational institutions
%K face recognition
%K IMAGE PROCESSING
%K image processing applications
%K Inference algorithms
%K Message passing
%K OpenMP platform
%K parallel implementation
%K PARALLEL PROCESSING
%K parallel programming
%K Particle tracking
%K Processor scheduling
%K SHAPE
%K shared memory systems
%K shared-memory systems
%K Solid modeling
%K tracking
%X Most image processing applications are characterized by computation-intensive operations and high memory and performance requirements. Parallelized implementation on shared-memory systems offers an attractive solution for this class of applications. However, we cannot thoroughly exploit the advantages of such architectures without proper modeling and analysis of the application. In this paper, we describe our implementation of a 3D facial pose tracking system using the OpenMP platform. Our implementation is based on a design methodology that uses coarse-grain dataflow graphs to model and schedule the application. We present our modeling approach, details of the implementation that we derived based on this modeling approach, and associated performance results. The parallelized implementation achieves significant speedup, and meets or exceeds the target frame rate under various configurations.
%B 2006 International Conference on Parallel Processing Workshops, 2006. ICPP 2006 Workshops
%I IEEE
%P 8 pp.
%8 2006///
%@ 0-7695-2637-3
%G eng
%R 10.1109/ICPPW.2006.55
%0 Conference Paper
%B Computer Vision, 2005. ICCV 2005. Tenth IEEE International Conference on
%D 2005
%T Detection, analysis and matching of hair
%A Yacoob, Yaser
%A Davis, Larry S.
%K automatic hair detection
%K eigenface-based recognition
%K eigenface-hair based identification
%K Eigenvalues and eigenfunctions
%K face image indexing
%K face recognition
%K Feature extraction
%K hair analysis
%K hair appearance
%K hair attributes
%K hair matching
%K Image matching
%K image representation
%K multidimensional representation
%K person recognition
%X We develop computational models for measuring hair appearance for comparing different people. The models and methods developed have applications to person recognition and face image indexing. An automatic hair detection algorithm is described and results reported. A multidimensional representation of hair appearance is presented and computational algorithms are described. Results on a dataset of 524 subjects are reported. Identification of people using hair attributes is compared to eigenface-based recognition along with a joint, eigenface-hair based identification.
%B Computer Vision, 2005. ICCV 2005. Tenth IEEE International Conference on
%V 1
%P 741 - 748 Vol. 1
%8 2005/10//
%G eng
%R 10.1109/ICCV.2005.75
%0 Journal Article
%J ACM Comput. Surv.
%D 2003
%T Face recognition: A literature survey
%A Zhao, W.
%A Chellappa, Rama
%A Phillips, P.J.
%A Rosenfeld, A.
%K face recognition
%K person identification
%X As one of the most successful applications of image analysis and understanding, face recognition has recently received significant attention, especially during the past several years. At least two reasons account for this trend: the first is the wide range of commercial and law enforcement applications, and the second is the availability of feasible technologies after 30 years of research. Even though current machine recognition systems have reached a certain level of maturity, their success is limited by the conditions imposed by many real applications. For example, recognition of face images acquired in an outdoor environment with changes in illumination and/or pose remains a largely unsolved problem. In other words, current systems are still far away from the capability of the human perception system. This paper provides an up-to-date critical survey of still- and video-based face recognition research. There are two underlying motivations for us to write this survey paper: the first is to provide an up-to-date review of the existing literature, and the second is to offer some insights into the studies of machine recognition of faces. To provide a comprehensive survey, we not only categorize existing recognition techniques but also present detailed descriptions of representative methods within each category. In addition, relevant topics such as psychophysical studies, system evaluation, and issues of illumination and pose variation are covered.
%B ACM Comput. Surv.
%V 35
%P 399 - 458
%8 2003/12//
%@ 0360-0300
%G eng
%U http://doi.acm.org/10.1145/954339.954342
%N 4
%R 10.1145/954339.954342
%0 Journal Article
%J Computer Vision and Image Understanding
%D 2003
%T Probabilistic recognition of human faces from video
%A Zhou, Shaohua
%A Krueger, Volker
%A Chellappa, Rama
%K Exemplar-based learning
%K face recognition
%K sequential importance sampling
%K Still-to-video
%K Time series state space model
%K Video-to-video
%X Recognition of human faces using a gallery of still or video images and a probe set of videos is systematically investigated using a probabilistic framework. In still-to-video recognition, where the gallery consists of still images, a time series state space model is proposed to fuse temporal information in a probe video, which simultaneously characterizes the kinematics and identity using a motion vector and an identity variable, respectively. The joint posterior distribution of the motion vector and the identity variable is estimated at each time instant and then propagated to the next time instant. Marginalization over the motion vector yields a robust estimate of the posterior distribution of the identity variable. A computationally efficient sequential importance sampling (SIS) algorithm is developed to estimate the posterior distribution. Empirical results demonstrate that, due to the propagation of the identity variable over time, a degeneracy in posterior probability of the identity variable is achieved to give improved recognition. The gallery is generalized to videos in order to realize video-to-video recognition. An exemplar-based learning strategy is adopted to automatically select video representatives from the gallery, serving as mixture centers in an updated likelihood measure. The SIS algorithm is applied to approximate the posterior distribution of the motion vector, the identity variable, and the exemplar index, whose marginal distribution of the identity variable produces the recognition result. The model formulation is very general and it allows a variety of image representations and transformations. Experimental results using images/videos collected at UMD, NIST/USF, and CMU with pose/illumination variations illustrate the effectiveness of this approach for both still-to-video and video-to-video scenarios with appropriate model choices.
%B Computer Vision and Image Understanding
%V 91
%P 214 - 245
%8 2003/07//
%@ 1077-3142
%G eng
%U http://www.sciencedirect.com/science/article/pii/S1077314203000808
%N 1–2
%R 10.1016/S1077-3142(03)00080-8
%0 Conference Paper
%B CHI '02 extended abstracts on Human factors in computing systems
%D 2002
%T Interacting with identification technology: can it make us more secure?
%A Scholtz, Jean
%A Johnson, Jeff
%A Shneiderman, Ben
%A Hope-Tindall, Peter
%A Gosling, Marcus
%A Phillips, Jonathon
%A Wexelblat, Alan
%K Biometrics
%K civil liberties
%K face recognition
%K national id card
%K privacy
%K Security
%B CHI '02 extended abstracts on Human factors in computing systems
%S CHI EA '02
%I ACM
%C New York, NY, USA
%P 564 - 565
%8 2002///
%@ 1-58113-454-1
%G eng
%U http://doi.acm.org/10.1145/506443.506484
%R 10.1145/506443.506484
%0 Conference Paper
%B Fifth IEEE International Conference on Automatic Face and Gesture Recognition, 2002. Proceedings
%D 2002
%T Smiling faces are better for face recognition
%A Yacoob, Yaser
%A Davis, Larry S.
%K between-class scatter matrices
%K Databases
%K discrimination power measure
%K dynamic scenarios
%K expressive faces
%K face recognition
%K facial expressions
%K performance
%K performance differences
%K smiling faces
%K software performance evaluation
%K Training
%K visual databases
%K within-class scatter matrices
%X This paper investigates face recognition during facial expressions. While face expressions have been treated as an adverse factor in standard face recognition approaches, our research suggests that, if a system has a choice in the selection of faces to use in training and recognition, its best performance would be obtained on faces displaying expressions. Naturally, smiling faces are the most prevalent (among expressive faces) for both training and recognition in dynamic scenarios. We employ a measure of discrimination power that is computed from between-class and within-class scatter matrices. Two databases are used to show the performance differences on different sets of faces.
%B Fifth IEEE International Conference on Automatic Face and Gesture Recognition, 2002. Proceedings
%I IEEE
%P 52 - 57
%8 2002/05//
%@ 0-7695-1602-5
%G eng
%R 10.1109/AFGR.2002.1004132
%0 Conference Paper
%B Automatic Face and Gesture Recognition, 1996., Proceedings of the Second International Conference on
%D 1996
%T Computing 3-D head orientation from a monocular image sequence
%A Horprasert, T.
%A Yacoob, Yaser
%A Davis, Larry S.
%K 3D head orientation computation
%K anthropometric statistics
%K camera plane
%K coarse structure
%K eye
%K eye boundary
%K eye corners
%K face features
%K face recognition
%K Feature extraction
%K head pitch
%K head roll
%K head yaw
%K Image sequences
%K image-based parameterized tracking
%K monocular image sequence
%K projective cross-ratio invariance
%K sub-pixel parameterized shape estimation
%K tracking
%X An approach for estimating 3D head orientation in a monocular image sequence is proposed. The approach employs recently developed image-based parameterized tracking of the face and face features to locate the area in which a sub-pixel parameterized shape estimation of the eye's boundary is performed. This involves tracking five points (four at the eye corners and the fifth at the tip of the nose). We describe an approach that relies on the coarse structure of the face to compute orientation relative to the camera plane. Our approach employs projective invariance of the cross-ratios of the eye corners and anthropometric statistics to estimate the head yaw, roll, and pitch. Analytical and experimental results are reported.
%B Automatic Face and Gesture Recognition, 1996., Proceedings of the Second International Conference on
%P 242 - 247
%8 1996/10//
%G eng
%R 10.1109/AFGR.1996.557271
%0 Journal Article
%J IEEE Transactions on Pattern Analysis and Machine Intelligence
%D 1996
%T Recognizing human facial expressions from long image sequences using optical flow
%A Yacoob, Yaser
%A Davis, Larry S.
%K Computer vision
%K Eyebrows
%K face recognition
%K facial dynamics
%K Facial features
%K human facial expression recognition
%K HUMANS
%K Image motion analysis
%K image recognition
%K image representation
%K Image sequences
%K Motion analysis
%K Motion estimation
%K Optical computing
%K optical flow
%K symbolic representation
%K tracking
%X An approach to the analysis and representation of facial dynamics for recognition of facial expressions from image sequences is presented. The algorithms utilize optical flow computation to identify the direction of rigid and nonrigid motions that are caused by human facial expressions. A mid-level symbolic representation motivated by psychological considerations is developed. Recognition of six facial expressions, as well as eye blinking, is demonstrated on a large set of image sequences.
%B IEEE Transactions on Pattern Analysis and Machine Intelligence
%V 18
%P 636 - 642
%8 1996/06//
%@ 0162-8828
%G eng
%N 6
%R 10.1109/34.506414