%0 Conference Paper
%B 2011 IEEE International Conference on Automatic Face & Gesture Recognition and Workshops (FG 2011)
%D 2011
%T Recent advances in age and height estimation from still images and video
%A Chellappa, Rama
%A Turaga, P.
%K age estimation
%K biometrics (access control)
%K Calibration
%K Estimation
%K Geometry
%K height estimation
%K HUMANS
%K image fusion
%K image-formation model fusion
%K Legged locomotion
%K multiview-geometry
%K Robustness
%K SHAPE
%K shape-space geometry
%K soft-biometrics
%K statistical analysis
%K statistical methods
%K video signal processing
%X Soft biometrics such as gender, age, and race have been found to be useful characterizations that enable fast pre-filtering and organization of data for biometric applications. In this paper, we focus on two useful soft biometrics: age and height. We discuss their utility and the factors involved in their estimation from images and videos. In this context, we highlight the role that geometric constraints such as multiview geometry and shape-space geometry play. We then present methods based on these geometric constraints for age and height estimation. These methods provide a principled means of fusing image-formation models, multiview geometric constraints, and robust statistical methods for inference.
%B 2011 IEEE International Conference on Automatic Face & Gesture Recognition and Workshops (FG 2011)
%I IEEE
%P 91 - 96
%8 2011/03/21/25
%@ 978-1-4244-9140-7
%G eng
%R 10.1109/FG.2011.5771367

%0 Journal Article
%J IEEE Computer Graphics and Applications
%D 2009
%T Integrating Statistics and Visualization for Exploratory Power: From Long-Term Case Studies to Design Guidelines
%A Perer, A.
%A Shneiderman, Ben
%K case studies
%K Control systems
%K Data analysis
%K data mining
%K data visualisation
%K Data visualization
%K data-mining
%K design guidelines
%K Employment
%K exploration
%K Filters
%K Guidelines
%K Information Visualization
%K insights
%K laboratory-based controlled experiments
%K Performance analysis
%K social network analysis
%K Social network services
%K social networking (online)
%K social networks
%K SocialAction
%K statistical analysis
%K Statistics
%K visual analytics
%K visual-analytics systems
%K Visualization
%X Evaluating visual-analytics systems is challenging because laboratory-based controlled experiments might not effectively represent analytical tasks. One such system, SocialAction, integrates statistics and visualization in an interactive exploratory tool for social network analysis. This article describes results from long-term case studies with domain experts and extends established design goals for information visualization.
%B IEEE Computer Graphics and Applications
%V 29
%P 39 - 51
%8 2009/06//May
%@ 0272-1716
%G eng
%N 3
%R 10.1109/MCG.2009.44

%0 Conference Paper
%B Software Maintenance, 2007. ICSM 2007. IEEE International Conference on
%D 2007
%T Fault Detection Probability Analysis for Coverage-Based Test Suite Reduction
%A McMaster, S.
%A Memon, Atif M.
%K coverage-based test suite reduction
%K fault detection probability analysis
%K Fault diagnosis
%K force coverage-based reduction
%K percentage fault detection reduction
%K percentage size reduction
%K program testing
%K software reliability
%K statistical analysis
%X Test suite reduction seeks to reduce the number of test cases in a test suite while retaining a high percentage of the original suite's fault detection effectiveness. Most approaches to this problem are based on eliminating test cases that are redundant relative to some coverage criterion. The effectiveness of applying various coverage criteria in test suite reduction is traditionally assessed by empirical comparison of two metrics derived from the full and reduced test suites and information about a set of known faults: (1) percentage size reduction and (2) percentage fault detection reduction, neither of which quantitatively takes test coverage data into account. Consequently, no existing measure expresses the likelihood of various coverage criteria to force coverage-based reduction to retain test cases that expose specific faults. In this paper, we develop and empirically evaluate, using a number of different coverage criteria, a new metric based on the "average expected probability of finding a fault" in a reduced test suite. Our results indicate that the average probability of detecting each fault shows promise for identifying coverage criteria that work well for test suite reduction.
%B Software Maintenance, 2007. ICSM 2007. IEEE International Conference on
%P 335 - 344
%8 2007/10//
%G eng
%R 10.1109/ICSM.2007.4362646

%0 Journal Article
%J IEEE Transactions on Visualization and Computer Graphics
%D 2006
%T Balancing Systematic and Flexible Exploration of Social Networks
%A Perer, A.
%A Shneiderman, Ben
%K Aggregates
%K algorithms
%K attribute ranking
%K Cluster Analysis
%K Computer Graphics
%K Computer simulation
%K Coordinate measuring machines
%K coordinated views
%K Data analysis
%K data visualisation
%K Data visualization
%K exploratory data analysis
%K Filters
%K Gain measurement
%K graph theory
%K Graphical user interfaces
%K Information Storage and Retrieval
%K interactive graph visualization
%K matrix algebra
%K matrix overview
%K Models, Biological
%K Navigation
%K network visualization
%K Pattern analysis
%K Population Dynamics
%K Social Behavior
%K social network analysis
%K Social network services
%K social networks
%K social sciences computing
%K Social Support
%K SocialAction
%K software
%K statistical analysis
%K statistical methods
%K User-Computer Interface
%X Social network analysis (SNA) has emerged as a powerful method for understanding the importance of relationships in networks. However, interactive exploration of networks is currently challenging because (1) it is difficult to find patterns and comprehend the structure of networks with many nodes and links, and (2) current systems are often a medley of statistical methods and overwhelming visual output, which leaves many analysts uncertain about how to explore in an orderly manner. This results in exploration that is largely opportunistic. Our contributions are techniques to help structural analysts understand social networks more effectively. We present SocialAction, a system that uses attribute ranking and coordinated views to help users systematically examine numerous SNA measures. Users can (1) flexibly iterate through visualizations of measures to gain an overview, filter nodes, and find outliers, (2) aggregate networks using link structure, find cohesive subgroups, and focus on communities of interest, and (3) untangle networks by viewing different link types separately, or find patterns across different link types using a matrix overview. For each operation, a stable node layout is maintained in the network visualization so users can make comparisons. SocialAction offers analysts a strategy beyond opportunism, as it provides systematic, yet flexible, techniques for exploring social networks.
%B IEEE Transactions on Visualization and Computer Graphics
%V 12
%P 693 - 700
%8 2006/10//Sept
%@ 1077-2626
%G eng
%N 5
%R 10.1109/TVCG.2006.122

%0 Conference Paper
%D 2006
%T A Statistical Analysis of Attack Data to Separate Attacks
%A Cukier, Michel
%A Berthier, R.
%A Panjwani, S.
%A Tan, S.
%K attack data statistical analysis
%K attack separation
%K computer crime
%K Data analysis
%K data mining
%K ICMP scans
%K K-Means algorithm
%K pattern clustering
%K port scans
%K statistical analysis
%K vulnerability scans
%X This paper analyzes malicious activity collected from a test-bed, consisting of two target computers dedicated solely to the purpose of being attacked, over a 109-day period. We separated port scans, ICMP scans, and vulnerability scans from the malicious activity. In the remaining attack data, over 78% (i.e., 3,677 attacks) targeted port 445, which was then statistically analyzed. The goal was to find the characteristics that most efficiently separate the attacks. First, we separated the attacks by analyzing their messages. Then we separated the attacks by clustering characteristics using the K-Means algorithm. The comparison between the analysis of the messages and the outcome of the K-Means algorithm showed that (1) the means of the distributions of packets, bytes, and message lengths over time are poor characteristics for separating attacks, and (2) the number of bytes and the means of the distributions of bytes and message lengths as a function of the number of packets are the best characteristics for separating attacks.
%P 383 - 392
%8 2006/06//
%G eng
%R 10.1109/DSN.2006.9

%0 Journal Article
%J Image Processing, IEEE Transactions on
%D 2004
%T An information theoretic criterion for evaluating the quality of 3-D reconstructions from video
%A Roy-Chowdhury, A.K.
%A Chellappa, Rama
%K 3D reconstruction
%K algorithms
%K Artificial intelligence
%K Computer Graphics
%K Image Enhancement
%K Image Interpretation, Computer-Assisted
%K Image reconstruction
%K Image sequences
%K Imaging, Three-Dimensional
%K Information Storage and Retrieval
%K Information Theory
%K information theoretic criterion
%K Movement
%K Mutual information
%K NOISE
%K noise distribution
%K optical flow equations
%K Pattern Recognition, Automated
%K Reproducibility of Results
%K second order moments
%K Sensitivity and Specificity
%K Signal Processing, Computer-Assisted
%K Software Validation
%K statistical analysis
%K Subtraction Technique
%K Video Recording
%K Video sequences
%K video signal processing
%X Even though numerous algorithms exist for estimating the three-dimensional (3-D) structure of a scene from its video, the solutions obtained are often of unacceptable quality. To overcome some of the deficiencies, many application systems rely on processing more data than necessary, thus raising the question: how is the accuracy of the solution related to the amount of data processed by the algorithm? Can we automatically recognize situations where the quality of the data is so bad that even a large number of additional observations will not yield the desired solution? Previous efforts to answer this question have used statistical measures like second order moments. They are useful if the estimate of the structure is unbiased and the higher order statistical effects are negligible, which is often not the case. This paper introduces an alternative information-theoretic criterion for evaluating the quality of a 3-D reconstruction. The accuracy of the reconstruction is judged by considering the change in mutual information (MI) (termed the incremental MI) between a scene and its reconstructions. An example of 3-D reconstruction from a video sequence using optical flow equations and a known noise distribution is considered, and it is shown how the MI can be computed from first principles. We present simulations on both synthetic and real data to demonstrate the effectiveness of the proposed criterion.
%B Image Processing, IEEE Transactions on
%V 13
%P 960 - 973
%8 2004/07//
%@ 1057-7149
%G eng
%N 7
%R 10.1109/TIP.2004.827240

%0 Conference Paper
%B IEEE Symposium on Information Visualization, 2004. INFOVIS 2004
%D 2004
%T A Rank-by-Feature Framework for Unsupervised Multidimensional Data Exploration Using Low Dimensional Projections
%A Seo, J.
%A Shneiderman, Ben
%K axis-parallel projections
%K boxplot
%K color-coded lower-triangular matrix
%K computational complexity
%K computational geometry
%K Computer displays
%K Computer science
%K Computer vision
%K Data analysis
%K data mining
%K data visualisation
%K Data visualization
%K Displays
%K dynamic query
%K Educational institutions
%K exploratory data analysis
%K feature detection
%K feature detection/selection
%K Feature extraction
%K feature selection
%K graph theory
%K graphical displays
%K histogram
%K Information Visualization
%K interactive systems
%K Laboratories
%K Multidimensional systems
%K Principal component analysis
%K rank-by-feature prism
%K scatterplot
%K statistical analysis
%K statistical graphics
%K statistical graphs
%K unsupervised multidimensional data exploration
%K very large databases
%X Exploratory analysis of multidimensional data sets is challenging because of the difficulty in comprehending more than three dimensions. Two fundamental statistical principles for exploratory analysis are (1) to examine each dimension first and then find relationships among dimensions, and (2) to try graphical displays first and then find numerical summaries (D.S. Moore, 1999). We implement these principles in a novel conceptual framework called the rank-by-feature framework. In the framework, users can choose a ranking criterion interesting to them and sort 1D or 2D axis-parallel projections according to the criterion. We introduce the rank-by-feature prism, a color-coded lower-triangular matrix that guides users to desired features. Statistical graphs (histogram, boxplot, and scatterplot) and information visualization techniques (overview, coordination, and dynamic query) are combined to help users effectively traverse 1D and 2D axis-parallel projections, and finally to help them interactively find interesting features.
%B IEEE Symposium on Information Visualization, 2004. INFOVIS 2004
%I IEEE
%P 65 - 72
%8 2004///
%@ 0-7803-8779-3
%G eng
%R 10.1109/INFVIS.2004.3

%0 Journal Article
%J IEEE Transactions on Image Processing
%D 2004
%T Visual tracking and recognition using appearance-adaptive models in particle filters
%A Zhou, Shaohua Kevin
%A Chellappa, Rama
%A Moghaddam, B.
%K adaptive filters
%K adaptive noise variance
%K algorithms
%K appearance-adaptive model
%K Artificial intelligence
%K Cluster Analysis
%K Computer Graphics
%K Computer simulation
%K Feedback
%K Filtering
%K first-order linear predictor
%K hidden feature removal
%K HUMANS
%K Image Enhancement
%K Image Interpretation, Computer-Assisted
%K image recognition
%K Information Storage and Retrieval
%K Kinematics
%K Laboratories
%K Male
%K Models, Biological
%K Models, Statistical
%K MOTION
%K Movement
%K Noise robustness
%K Numerical Analysis, Computer-Assisted
%K occlusion analysis
%K Particle filters
%K Particle tracking
%K Pattern Recognition, Automated
%K Predictive models
%K Reproducibility of results
%K robust statistics
%K Sensitivity and Specificity
%K Signal Processing, Computer-Assisted
%K State estimation
%K statistical analysis
%K Subtraction Technique
%K tracking
%K Training data
%K visual recognition
%K visual tracking
%X We present an approach that incorporates appearance-adaptive models in a particle filter to realize robust visual tracking and recognition algorithms. Tracking requires modeling interframe motion and appearance changes, whereas recognition requires modeling appearance changes between frames and gallery images. In conventional tracking algorithms, the appearance model is either fixed or rapidly changing, and the motion model is simply a random walk with fixed noise variance. Also, the number of particles is typically fixed. All these factors make the visual tracker unstable. To stabilize the tracker, we propose the following modifications: an observation model arising from an adaptive appearance model, an adaptive-velocity motion model with adaptive noise variance, and an adaptive number of particles. The adaptive-velocity model is derived using a first-order linear predictor based on the appearance difference between the incoming observation and the previous particle configuration. Occlusion analysis is implemented using robust statistics. Experimental results on tracking visual objects in long outdoor and indoor video sequences demonstrate the effectiveness and robustness of our tracking algorithm. We then perform simultaneous tracking and recognition by embedding them in a particle filter. For recognition purposes, we model the appearance changes between frames and gallery images by constructing the intra- and extrapersonal spaces. Accurate recognition is achieved even when confronted with pose and view variations.
%B IEEE Transactions on Image Processing
%V 13
%P 1491 - 1506
%8 2004/11//
%@ 1057-7149
%G eng
%N 11
%R 10.1109/TIP.2004.836152

%0 Conference Paper
%B 16th International Conference on Pattern Recognition, 2002. Proceedings
%D 2002
%T Mixture models for dynamic statistical pressure snakes
%A Abd-Almageed, Wael
%A Smith, C.E.
%K active contour models
%K Active contours
%K Artificial intelligence
%K Bayes methods
%K Bayesian methods
%K Bayesian theory
%K complex colored object
%K Computer vision
%K decision making
%K decision making mechanism
%K dynamic statistical pressure snakes
%K Equations
%K expectation maximization algorithm
%K Gaussian distribution
%K image colour analysis
%K Image edge detection
%K Image segmentation
%K Intelligent robots
%K mixture models
%K mixture of Gaussians
%K mixture pressure model
%K Robot vision systems
%K robust pressure model
%K Robustness
%K segmentation results
%K statistical analysis
%K statistical modeling
%X This paper introduces a new approach to statistical pressure snakes. It uses statistical modeling of both the object and the background to obtain a more robust pressure model. The Expectation Maximization (EM) algorithm is used to model the data as a Mixture of Gaussians (MoG). Bayesian theory is then employed as a decision-making mechanism. Experimental results using the traditional pressure model and the new mixture pressure model demonstrate the effectiveness of the new models.
%B 16th International Conference on Pattern Recognition, 2002. Proceedings
%I IEEE
%V 2
%P 721 - 724 vol.2
%8 2002///
%@ 0-7695-1695-X
%G eng
%R 10.1109/ICPR.2002.1048404

%0 Conference Paper
%B Computer Vision and Pattern Recognition, 1999. IEEE Computer Society Conference on.
%D 1999
%T Statistical biases in optic flow
%A Fermüller, Cornelia
%A Pless, R.
%A Aloimonos, J.
%K Distributed computing
%K Frequency domain analysis
%K HUMANS
%K image derivatives
%K Image motion analysis
%K Image sequences
%K Least squares methods
%K Motion estimation
%K Optical computing
%K Optical distortion
%K optical flow
%K Optical noise
%K Ouchi illusion
%K perception of motion
%K Psychology
%K Spatiotemporal phenomena
%K statistical analysis
%K systematic bias
%K total least squares
%X The computation of optical flow from image derivatives is biased in regions of non-uniform gradient distributions. A least-squares or total least-squares approach to computing optic flow from image derivatives, even in regions of consistent flow, can lead to a systematic bias that depends upon the direction of the optic flow, the distribution of the gradient directions, and the distribution of the image noise. The bias is a consistent underestimation of length and a directional error. Similar results hold for various methods of computing optical flow in the spatiotemporal frequency domain. The predicted bias in the optical flow is consistent with psychophysical evidence of human judgment of the velocity of moving plaids, and provides an explanation of the Ouchi illusion. Correction of the bias requires accurate estimates of the noise distribution; the failure of the human visual system to make these corrections illustrates both the difficulty of the task and the feasibility of using this distorted optic flow, or undistorted normal flow, in tasks requiring higher level processing.
%B Computer Vision and Pattern Recognition, 1999. IEEE Computer Society Conference on.
%I IEEE
%V 1
%P 566 Vol. 1
%8 1999///
%@ 0-7695-0149-4
%G eng
%R 10.1109/CVPR.1999.786994