TY - CONF T1 - Recent advances in age and height estimation from still images and video T2 - 2011 IEEE International Conference on Automatic Face & Gesture Recognition and Workshops (FG 2011) Y1 - 2011 A1 - Chellappa, Rama A1 - Turaga, P. KW - age estimation KW - biometrics (access control) KW - Calibration KW - Estimation KW - Geometry KW - height estimation KW - HUMANS KW - image fusion KW - image-formation model fusion KW - Legged locomotion KW - multiview-geometry KW - Robustness KW - SHAPE KW - shape-space geometry KW - soft-biometrics KW - statistical analysis KW - statistical methods KW - video signal processing AB - Soft-biometrics such as gender, age, and race have been found to be useful characterizations that enable fast pre-filtering and organization of data for biometric applications. In this paper, we focus on two useful soft-biometrics: age and height. We discuss their utility and the factors involved in their estimation from images and videos. In this context, we highlight the role that geometric constraints such as multiview geometry and shape-space geometry play. Then, we present methods based on these geometric constraints for age and height estimation. These methods provide a principled means of fusing image-formation models, multi-view geometric constraints, and robust statistical methods for inference. JA - 2011 IEEE International Conference on Automatic Face & Gesture Recognition and Workshops (FG 2011) PB - IEEE SN - 978-1-4244-9140-7 M3 - 10.1109/FG.2011.5771367 ER - TY - JOUR T1 - Integrating Statistics and Visualization for Exploratory Power: From Long-Term Case Studies to Design Guidelines JF - IEEE Computer Graphics and Applications Y1 - 2009 A1 - Perer, A.
A1 - Shneiderman, Ben KW - case studies KW - Control systems KW - Data analysis KW - data mining KW - data visualisation KW - Data visualization KW - data-mining KW - design guidelines KW - Employment KW - exploration KW - Filters KW - Guidelines KW - Information Visualization KW - insights KW - laboratory-based controlled experiments KW - Performance analysis KW - social network analysis KW - Social network services KW - social networking (online) KW - social networks KW - SocialAction KW - statistical analysis KW - Statistics KW - visual analytics KW - visual-analytics systems KW - Visualization AB - Evaluating visual-analytics systems is challenging because laboratory-based controlled experiments might not effectively represent analytical tasks. One such system, SocialAction, integrates statistics and visualization in an interactive exploratory tool for social network analysis. This article describes results from long-term case studies with domain experts and extends established design goals for information visualization. VL - 29 SN - 0272-1716 CP - 3 M3 - 10.1109/MCG.2009.44 ER - TY - CONF T1 - Fault Detection Probability Analysis for Coverage-Based Test Suite Reduction T2 - Software Maintenance, 2007. ICSM 2007. IEEE International Conference on Y1 - 2007 A1 - McMaster, S. A1 - Memon, Atif M. KW - coverage-based test suite reduction KW - fault detection probability analysis KW - Fault diagnosis KW - force coverage-based reduction KW - percentage fault detection reduction KW - percentage size reduction KW - program testing KW - software reliability KW - statistical analysis AB - Test suite reduction seeks to reduce the number of test cases in a test suite while retaining a high percentage of the original suite's fault detection effectiveness. Most approaches to this problem are based on eliminating test cases that are redundant relative to some coverage criterion.
The effectiveness of applying various coverage criteria in test suite reduction is traditionally based on empirical comparison of two metrics derived from the full and reduced test suites and information about a set of known faults: (1) percentage size reduction and (2) percentage fault detection reduction, neither of which quantitatively takes test coverage data into account. Consequently, no existing measure expresses the likelihood of various coverage criteria to force coverage-based reduction to retain test cases that expose specific faults. In this paper, we develop and empirically evaluate, using a number of different coverage criteria, a new metric based on the "average expected probability of finding a fault" in a reduced test suite. Our results indicate that the average probability of detecting each fault shows promise for identifying coverage criteria that work well for test suite reduction. JA - Software Maintenance, 2007. ICSM 2007. IEEE International Conference on M3 - 10.1109/ICSM.2007.4362646 ER - TY - JOUR T1 - Balancing Systematic and Flexible Exploration of Social Networks JF - IEEE Transactions on Visualization and Computer Graphics Y1 - 2006 A1 - Perer,A. 
A1 - Shneiderman, Ben KW - Aggregates KW - algorithms KW - attribute ranking KW - Cluster Analysis KW - Computer Graphics KW - Computer simulation KW - Coordinate measuring machines KW - coordinated views KW - Data analysis KW - data visualisation KW - Data visualization KW - exploratory data analysis KW - Filters KW - Gain measurement KW - graph theory KW - Graphical user interfaces KW - Information Storage and Retrieval KW - interactive graph visualization KW - matrix algebra KW - matrix overview KW - Models, Biological KW - Navigation KW - network visualization KW - Pattern analysis KW - Population Dynamics KW - Social Behavior KW - social network analysis KW - Social network services KW - social networks KW - social sciences computing KW - Social Support KW - SocialAction KW - software KW - statistical analysis KW - statistical methods KW - User-Computer Interface AB - Social network analysis (SNA) has emerged as a powerful method for understanding the importance of relationships in networks. However, interactive exploration of networks is currently challenging because: (1) it is difficult to find patterns and comprehend the structure of networks with many nodes and links, and (2) current systems are often a medley of statistical methods and overwhelming visual output which leaves many analysts uncertain about how to explore in an orderly manner. This results in exploration that is largely opportunistic. Our contributions are techniques to help structural analysts understand social networks more effectively. We present SocialAction, a system that uses attribute ranking and coordinated views to help users systematically examine numerous SNA measures. 
Users can (1) flexibly iterate through visualizations of measures to gain an overview, filter nodes, and find outliers, (2) aggregate networks using link structure, find cohesive subgroups, and focus on communities of interest, and (3) untangle networks by viewing different link types separately, or find patterns across different link types using a matrix overview. For each operation, a stable node layout is maintained in the network visualization so users can make comparisons. SocialAction offers analysts a strategy beyond opportunism, as it provides systematic, yet flexible, techniques for exploring social networks. VL - 12 SN - 1077-2626 CP - 5 M3 - 10.1109/TVCG.2006.122 ER - TY - CONF T1 - A Statistical Analysis of Attack Data to Separate Attacks Y1 - 2006 A1 - Michel Cukier A1 - Berthier, R. A1 - Panjwani, S. A1 - Tan, S. KW - attack data statistical analysis KW - attack separation KW - computer crime KW - Data analysis KW - data mining KW - ICMP scans KW - K-Means algorithm KW - pattern clustering KW - port scans KW - statistical analysis KW - vulnerability scans AB - This paper analyzes malicious activity collected from a test-bed, consisting of two target computers dedicated solely to the purpose of being attacked, over a 109-day time period. We separated port scans, ICMP scans, and vulnerability scans from the malicious activity. In the remaining attack data, over 78% (i.e., 3,677 attacks) targeted port 445, which was then statistically analyzed. The goal was to find the characteristics that most efficiently separate the attacks. First, we separated the attacks by analyzing their messages. Then we separated the attacks by clustering characteristics using the K-Means algorithm.
The comparison between the analysis of the messages and the outcome of the K-Means algorithm showed that (1) the means of the distributions of packets, bytes, and message lengths over time are poor characteristics for separating attacks, and (2) the number of bytes, and the means of the distributions of bytes and message lengths as a function of the number of packets, are the best characteristics for separating attacks. M3 - 10.1109/DSN.2006.9 ER - TY - JOUR T1 - An information theoretic criterion for evaluating the quality of 3-D reconstructions from video JF - Image Processing, IEEE Transactions on Y1 - 2004 A1 - Roy-Chowdhury, A.K. A1 - Chellappa, Rama KW - 3D reconstruction KW - algorithms KW - Artificial intelligence KW - Computer Graphics KW - Image Enhancement KW - Image Interpretation, Computer-Assisted KW - Image reconstruction KW - Image sequences KW - Imaging, Three-Dimensional KW - Information Storage and Retrieval KW - Information Theory KW - information theoretic criterion KW - Movement KW - Mutual information KW - NOISE KW - noise distribution KW - optical flow equations KW - Pattern Recognition, Automated KW - Reproducibility of Results KW - second order moments KW - Sensitivity and Specificity KW - Signal Processing, Computer-Assisted KW - Software Validation KW - statistical analysis KW - Subtraction Technique KW - Video Recording KW - Video sequences KW - video signal processing AB - Even though numerous algorithms exist for estimating the three-dimensional (3-D) structure of a scene from its video, the solutions obtained are often of unacceptable quality. To overcome some of the deficiencies, many application systems rely on processing more data than necessary, thus raising the question: how is the accuracy of the solution related to the amount of data processed by the algorithm? Can we automatically recognize situations where the quality of the data is so bad that even a large number of additional observations will not yield the desired solution?
Previous efforts to answer this question have used statistical measures like second-order moments. These are useful if the estimate of the structure is unbiased and the higher-order statistical effects are negligible, which is often not the case. This paper introduces an alternative information-theoretic criterion for evaluating the quality of a 3-D reconstruction. The accuracy of the reconstruction is judged by considering the change in mutual information (MI) (termed the incremental MI) between a scene and its reconstructions. An example of 3-D reconstruction from a video sequence using optical flow equations and a known noise distribution is considered, and it is shown how the MI can be computed from first principles. We present simulations on both synthetic and real data to demonstrate the effectiveness of the proposed criterion. VL - 13 SN - 1057-7149 CP - 7 M3 - 10.1109/TIP.2004.827240 ER - TY - CONF T1 - A Rank-by-Feature Framework for Unsupervised Multidimensional Data Exploration Using Low Dimensional Projections T2 - IEEE Symposium on Information Visualization, 2004. INFOVIS 2004 Y1 - 2004 A1 - Seo, J.
A1 - Shneiderman, Ben KW - axis-parallel projections KW - boxplot KW - color-coded lower-triangular matrix KW - computational complexity KW - computational geometry KW - Computer displays KW - Computer science KW - Computer vision KW - Data analysis KW - data mining KW - data visualisation KW - Data visualization KW - Displays KW - dynamic query KW - Educational institutions KW - exploratory data analysis KW - feature detection KW - feature detection/selection KW - Feature extraction KW - feature selection KW - graph theory KW - graphical displays KW - histogram KW - Information Visualization KW - interactive systems KW - Laboratories KW - Multidimensional systems KW - Principal component analysis KW - rank-by-feature prism KW - scatterplot KW - statistical analysis KW - statistical graphics KW - statistical graphs KW - unsupervised multidimensional data exploration KW - very large databases AB - Exploratory analysis of multidimensional data sets is challenging because of the difficulty in comprehending more than three dimensions. Two fundamental statistical principles for exploratory analysis are (1) to examine each dimension first and then find relationships among dimensions, and (2) to try graphical displays first and then find numerical summaries (D.S. Moore, 1999). We implement these principles in a novel conceptual framework called the rank-by-feature framework. In the framework, users can choose a ranking criterion interesting to them and sort 1D or 2D axis-parallel projections according to the criterion. We introduce the rank-by-feature prism, a color-coded lower-triangular matrix that guides users to desired features.
Statistical graphs (histogram, boxplot, and scatterplot) and information visualization techniques (overview, coordination, and dynamic query) are combined to help users effectively traverse 1D and 2D axis-parallel projections, and finally to help them interactively find interesting features. JA - IEEE Symposium on Information Visualization, 2004. INFOVIS 2004 PB - IEEE SN - 0-7803-8779-3 M3 - 10.1109/INFVIS.2004.3 ER - TY - JOUR T1 - Visual tracking and recognition using appearance-adaptive models in particle filters JF - IEEE Transactions on Image Processing Y1 - 2004 A1 - Zhou, Shaohua Kevin A1 - Chellappa, Rama A1 - Moghaddam, B. KW - adaptive filters KW - adaptive noise variance KW - algorithms KW - appearance-adaptive model KW - Artificial intelligence KW - Cluster Analysis KW - Computer Graphics KW - Computer simulation KW - Feedback KW - Filtering KW - first-order linear predictor KW - hidden feature removal KW - HUMANS KW - Image Enhancement KW - Image Interpretation, Computer-Assisted KW - image recognition KW - Information Storage and Retrieval KW - Kinematics KW - Laboratories KW - Male KW - Models, Biological KW - Models, Statistical KW - MOTION KW - Movement KW - Noise robustness KW - Numerical Analysis, Computer-Assisted KW - occlusion analysis KW - Particle filters KW - Particle tracking KW - Pattern Recognition, Automated KW - Predictive models KW - Reproducibility of results KW - robust statistics KW - Sensitivity and Specificity KW - Signal Processing, Computer-Assisted KW - State estimation KW - statistical analysis KW - Subtraction Technique KW - tracking KW - Training data KW - visual recognition KW - visual tracking AB - We present an approach that incorporates appearance-adaptive models in a particle filter to realize robust visual tracking and recognition algorithms. Tracking requires modeling interframe motion and appearance changes, whereas recognition requires modeling appearance changes between frames and gallery images.
In conventional tracking algorithms, the appearance model is either fixed or rapidly changing, and the motion model is simply a random walk with fixed noise variance. Also, the number of particles is typically fixed. All these factors make the visual tracker unstable. To stabilize the tracker, we propose the following modifications: an observation model arising from an adaptive appearance model, an adaptive velocity motion model with adaptive noise variance, and an adaptive number of particles. The adaptive-velocity model is derived using a first-order linear predictor based on the appearance difference between the incoming observation and the previous particle configuration. Occlusion analysis is implemented using robust statistics. Experimental results on tracking visual objects in long outdoor and indoor video sequences demonstrate the effectiveness and robustness of our tracking algorithm. We then perform simultaneous tracking and recognition by embedding them in a particle filter. For recognition purposes, we model the appearance changes between frames and gallery images by constructing the intra- and extrapersonal spaces. Accurate recognition is achieved even when confronted with pose and view variations. VL - 13 SN - 1057-7149 CP - 11 M3 - 10.1109/TIP.2004.836152 ER - TY - CONF T1 - Mixture models for dynamic statistical pressure snakes T2 - 16th International Conference on Pattern Recognition, 2002. Proceedings Y1 - 2002 A1 - Abd-Almageed, Wael A1 - Smith, C.E.
KW - active contour models KW - Active contours KW - Artificial intelligence KW - Bayes methods KW - Bayesian methods KW - Bayesian theory KW - complex colored object KW - Computer vision KW - decision making KW - decision making mechanism KW - dynamic statistical pressure snakes KW - Equations KW - expectation maximization algorithm KW - Gaussian distribution KW - image colour analysis KW - Image edge detection KW - Image segmentation KW - Intelligent robots KW - mixture models KW - mixture of Gaussians KW - mixture pressure model KW - Robot vision systems KW - robust pressure model KW - Robustness KW - segmentation results KW - statistical analysis KW - statistical modeling AB - This paper introduces a new approach to statistical pressure snakes. It uses statistical modeling for both object and background to obtain a more robust pressure model. The Expectation Maximization (EM) algorithm is used to model the data into a Mixture of Gaussians (MoG). Bayesian theory is then employed as a decision making mechanism. Experimental results using the traditional pressure model and the new mixture pressure model demonstrate the effectiveness of the new models. JA - 16th International Conference on Pattern Recognition, 2002. Proceedings PB - IEEE VL - 2 SN - 0-7695-1695-X M3 - 10.1109/ICPR.2002.1048404 ER - TY - CONF T1 - Statistical biases in optic flow T2 - Computer Vision and Pattern Recognition, 1999. IEEE Computer Society Conference on. Y1 - 1999 A1 - Fermüller, Cornelia A1 - Pless, R. A1 - Aloimonos, J. 
KW - Distributed computing KW - Frequency domain analysis KW - HUMANS KW - image derivatives KW - Image motion analysis KW - Image sequences KW - Least squares methods KW - Motion estimation KW - Optical computing KW - Optical distortion KW - optical flow KW - Optical noise KW - Ouchi illusion KW - perception of motion KW - Psychology KW - Spatiotemporal phenomena KW - statistical analysis KW - systematic bias KW - total least squares AB - The computation of optical flow from image derivatives is biased in regions of non-uniform gradient distributions. A least-squares or total-least-squares approach to computing optic flow from image derivatives, even in regions of consistent flow, can lead to a systematic bias dependent upon the direction of the optic flow, the distribution of the gradient directions, and the distribution of the image noise. The bias is a consistent underestimation of length and a directional error. Similar results hold for various methods of computing optical flow in the spatiotemporal frequency domain. The predicted bias in the optical flow is consistent with psychophysical evidence of human judgment of the velocity of moving plaids, and provides an explanation of the Ouchi illusion. Correction of the bias requires accurate estimates of the noise distribution; the failure of the human visual system to make these corrections illustrates both the difficulty of the task and the feasibility of using this distorted optic flow or undistorted normal flow in tasks requiring higher-level processing. JA - Computer Vision and Pattern Recognition, 1999. IEEE Computer Society Conference on. PB - IEEE VL - 1 SN - 0-7695-0149-4 M3 - 10.1109/CVPR.1999.786994 ER -