TY - CONF
T1 - Visual Analysis of Temporal Trends in Social Networks Using Edge Color Coding and Metric Timelines
T2 - Privacy, Security, Risk and Trust (PASSAT), 2011 IEEE Third International Conference on and 2011 IEEE Third International Conference on Social Computing (SocialCom)
Y1 - 2011
A1 - Khurana, U.
A1 - Nguyen, Viet-An
A1 - Cheng, Hsueh-Chien
A1 - Ahn, Jae-wook
A1 - Chen, Xi
A1 - Shneiderman, Ben
KW - Color
KW - computer scientists
KW - data analysts
KW - data visualisation
KW - Data visualization
KW - dynamic social network
KW - dynamic timeslider
KW - edge color coding
KW - excel sheet
KW - Image coding
KW - image colour analysis
KW - Layout
KW - measurement
KW - metric timelines
KW - Microsoft excel
KW - multiple graph metrics
KW - Net EvViz
KW - network components
KW - network layout
KW - network visualization tool
KW - NodeXL template
KW - social networking (online)
KW - temporal trends
KW - Twitter
KW - Visualization
AB - We present Net EvViz, a visualization tool for analysis and exploration of a dynamic social network. There are plenty of visual social network analysis tools, but few provide features for visualizing dynamically changing networks in which nodes or edges are added or deleted. Our tool extends the code base of the NodeXL template for Microsoft Excel, a popular network visualization tool. The key features of this work are (1) the ability of the user to specify and edit temporal annotations to the network components in an Excel sheet, (2) a view of the network dynamics with multiple graph metrics plotted over the time span of the graph, called the Timeline, and (3) temporal exploration of the network layout using an edge coloring scheme and a dynamic Timeslider. The objective of the new features presented in this paper is to let data analysts, computer scientists, and others observe the dynamics or evolution of a network interactively. We presented Net EvViz to five users of NodeXL and received positive responses.
JA - Privacy, Security, Risk and Trust (PASSAT), 2011 IEEE Third International Conference on and 2011 IEEE Third International Conference on Social Computing (SocialCom)
PB - IEEE
SN - 978-1-4577-1931-8
M3 - 10.1109/PASSAT/SocialCom.2011.212
ER -
TY - CONF
T1 - Semi non-intrusive training for cell-phone camera model linkage
T2 - Information Forensics and Security (WIFS), 2010 IEEE International Workshop on
Y1 - 2010
A1 - Chuang, Wei-Hong
A1 - Wu, M.
KW - accuracy;training
KW - analysis;cameras;cellular
KW - analysis;image
KW - camera
KW - Color
KW - colour
KW - complexity;training
KW - content
KW - dependency;variance
KW - feature;cell
KW - forensics;digital
KW - forensics;image
KW - image
KW - Interpolation
KW - linkage;component
KW - matching;interpolation;
KW - matching;semi
KW - model
KW - nonintrusive
KW - phone
KW - radio;computer
KW - training;testing
AB - This paper presents a study of cell-phone camera model linkage that matches digital images against potential makes/models of cell-phone camera sources using camera color interpolation features. The matching performance is examined and the dependency on the content of the training image collection is evaluated via variance analysis. Training content dependency can be dealt with under the framework of component forensics, where cell-phone camera model linkage is seen as a combination of semi non-intrusive training and completely non-intrusive testing. Such a viewpoint explicitly suggests the goodness criterion of testing accuracy for training data selection. It also motivates other possible training procedures based on different criteria, such as training complexity, for which preliminary but promising experimental designs and results have been obtained.
JA - Information Forensics and Security (WIFS), 2010 IEEE International Workshop on
M3 - 10.1109/WIFS.2010.5711468
ER -
TY - CONF
T1 - Concurrent transition and shot detection in football videos using Fuzzy Logic
T2 - Image Processing (ICIP), 2009 16th IEEE International Conference on
Y1 - 2009
A1 - Refaey, M. A.
A1 - Elsayed, K. M.
A1 - Hanafy, S. M.
A1 - Davis, Larry S.
KW - analysis;inference
KW - boundary;shot
KW - Color
KW - colour
KW - detection;sports
KW - functions;shot
KW - histogram;concurrent
KW - logic;image
KW - logic;inference
KW - mechanism;intensity
KW - mechanisms;sport;video
KW - processing;
KW - processing;video analysis;fuzzy
KW - signal
KW - transition;edgeness;football
KW - variance;membership
KW - video;video
KW - videos;fuzzy
AB - Shot detection is a fundamental step in video processing and analysis that should be achieved with a high degree of accuracy. In this paper, we introduce a unified algorithm for shot detection in sports video using fuzzy logic as a powerful inference mechanism. Fuzzy logic overcomes the problems of hard cut thresholds and the need for large training data used in previous work. The proposed algorithm integrates many features like color histogram, edgeness, intensity variance, etc. Membership functions to represent different features and transitions between shots have been developed to detect different shot boundary and transition types. We address the detection of cut, fade, dissolve, and wipe shot transitions. The results show that our algorithm achieves a high degree of accuracy.
JA - Image Processing (ICIP), 2009 16th IEEE International Conference on
M3 - 10.1109/ICIP.2009.5413648
ER -
TY - CONF
T1 - Component Forensics of Digital Cameras: A Non-Intrusive Approach
T2 - Information Sciences and Systems, 2006 40th Annual Conference on
Y1 - 2006
A1 - Swaminathan, A.
A1 - Wu, M.
A1 - Liu, K. J. R.
KW - analysis;image
KW - approach;processing
KW - array
KW - camera;image
KW - Color
KW - colour
KW - detection;optical
KW - filter
KW - filters;
KW - forensics;detecting
KW - identification;intellectual
KW - infringement;digital
KW - interpolation;component
KW - intrusive
KW - module;cameras;image
KW - pattern;color
KW - property
KW - protection;non
KW - right
KW - sensors;interpolation;object
KW - tampering
KW - technology
AB - This paper considers the problem of component forensics and proposes a methodology to identify the algorithms and parameters employed by various processing modules inside a digital camera. The proposed analysis techniques are non-intrusive, using only sample output images collected from the camera to find the color filter array pattern and the algorithm and parameters of color interpolation employed in cameras. As demonstrated by various case studies in the paper, the features obtained from component forensic analysis provide useful evidence for such applications as detecting technology infringement, protecting intellectual property rights, determining camera source, and identifying image tampering.
JA - Information Sciences and Systems, 2006 40th Annual Conference on
M3 - 10.1109/CISS.2006.286646
ER -
TY - CONF
T1 - Non-Intrusive Forensic Analysis of Visual Sensors Using Output Images
T2 - Acoustics, Speech and Signal Processing, 2006. ICASSP 2006 Proceedings. 2006 IEEE International Conference on
Y1 - 2006
A1 - Swaminathan, A.
A1 - Wu, M.
A1 - Liu, K. J. R.
KW - algorithms;interpolation
KW - analysis;image
KW - analysis;output
KW - array
KW - cameras;forensic
KW - Color
KW - colour
KW - engineering;forensic
KW - forensic
KW - images;visual
KW - methods;nonintrusive
KW - PROCESSING
KW - sensor;digital
KW - sensors;cameras;image
KW - sensors;interpolation;
KW - signal
AB - This paper considers the problem of non-intrusive forensic analysis of the individual components in visual sensors and its implementation. As a new addition to the emerging area of forensic engineering, we present a framework for analyzing technologies employed inside digital cameras based on output images, and develop a set of forensic signal processing algorithms for visual sensors based on color array sensor and interpolation methods. We show through simulations that the proposed method is robust against compression and noise, and can help identify various processing components inside the camera. Such a non-intrusive forensic framework would provide useful evidence for analyzing technology infringement and evolution for visual sensors.
JA - Acoustics, Speech and Signal Processing, 2006. ICASSP 2006 Proceedings. 2006 IEEE International Conference on
VL - 5
M3 - 10.1109/ICASSP.2006.1661297
ER -
TY - CONF
T1 - Efficient mean-shift tracking via a new similarity measure
T2 - Computer Vision and Pattern Recognition, 2005. CVPR 2005. IEEE Computer Society Conference on
Y1 - 2005
A1 - Yang, Changjiang
A1 - Duraiswami, Ramani
A1 - Davis, Larry S.
KW - algorithm;
KW - analysis;
KW - Bhattacharyya
KW - coefficient;
KW - Color
KW - colour
KW - density
KW - divergence;
KW - estimates;
KW - extraction;
KW - fast
KW - feature
KW - frame-rate
KW - Gauss
KW - Gaussian
KW - histograms;
KW - image
KW - Kernel
KW - Kullback-Leibler
KW - matching;
KW - Mean-shift
KW - measures;
KW - nonparametric
KW - processes;
KW - sample-based
KW - sequences;
KW - similarity
KW - spaces;
KW - spatial-feature
KW - tracking
KW - tracking;
KW - transform;
AB - The mean shift algorithm has achieved considerable success in object tracking due to its simplicity and robustness. It finds local minima of a similarity measure between the color histograms or kernel density estimates of the model and target image. The most typically used similarity measures are the Bhattacharyya coefficient or the Kullback-Leibler divergence. In practice, these approaches face three difficulties. First, the spatial information of the target is lost when the color histogram is employed, which precludes the application of more elaborate motion models. Second, the classical similarity measures are not very discriminative. Third, the sample-based classical similarity measures require a calculation that is quadratic in the number of samples, making real-time performance difficult. To deal with these difficulties we propose a new, simple-to-compute and more discriminative similarity measure in spatial-feature spaces. The new similarity measure allows the mean shift algorithm to track more general motion models in an integrated way. To reduce the complexity of the computation to linear order we employ the recently proposed improved fast Gauss transform. This leads to a very efficient and robust nonparametric spatial-feature tracking algorithm. The algorithm is tested on several image sequences and shown to achieve robust and reliable frame-rate tracking.
JA - Computer Vision and Pattern Recognition, 2005. CVPR 2005. IEEE Computer Society Conference on
VL - 1
M3 - 10.1109/CVPR.2005.139
ER -
TY - CONF
T1 - Fast multiple object tracking via a hierarchical particle filter
T2 - Computer Vision, 2005. ICCV 2005. Tenth IEEE International Conference on
Y1 - 2005
A1 - Yang, Changjiang
A1 - Duraiswami, Ramani
A1 - Davis, Larry S.
KW - (numerical
KW - algorithm;
KW - analysis;
KW - Color
KW - colour
KW - Computer
KW - Convergence
KW - detection;
KW - edge
KW - fast
KW - filter;
KW - Filtering
KW - hierarchical
KW - histogram;
KW - image
KW - images;
KW - integral
KW - likelihood;
KW - methods);
KW - methods;
KW - multiple
KW - numerical
KW - object
KW - observation
KW - of
KW - orientation
KW - particle
KW - processes;
KW - quasirandom
KW - random
KW - sampling;
KW - tracking
KW - tracking;
KW - vision;
KW - visual
AB - A very efficient and robust visual object tracking algorithm based on the particle filter is presented. The method characterizes the tracked objects using color and edge orientation histogram features. While the use of more features and samples can improve the robustness, the computational load required by the particle filter increases. To accelerate the algorithm while retaining robustness we adopt several enhancements in the algorithm. The first is the use of integral images for efficiently computing the color features and edge orientation histograms, which allows a large number of particles and a better description of the targets. Next, the observation likelihood based on multiple features is computed in a coarse-to-fine manner, which allows the computation to quickly focus on the more promising regions. Quasi-random sampling of the particles allows the filter to achieve a higher convergence rate. The resulting tracking algorithm maintains multiple hypotheses and offers robustness against clutter or short period occlusions. Experimental results demonstrate the efficiency and effectiveness of the algorithm for single and multiple object tracking.
JA - Computer Vision, 2005. ICCV 2005. Tenth IEEE International Conference on
VL - 1
M3 - 10.1109/ICCV.2005.95
ER -
TY - CONF
T1 - Iterative figure-ground discrimination
T2 - Pattern Recognition, 2004. ICPR 2004. Proceedings of the 17th International Conference on
Y1 - 2004
A1 - Zhao, L.
A1 - Davis, Larry S.
KW - algorithm;
KW - analysis;
KW - Bandwidth
KW - calculation;
KW - Color
KW - colour
KW - Computer
KW - density
KW - dimensional
KW - discrimination;
KW - distribution;
KW - distributions;
KW - Estimation
KW - estimation;
KW - expectation
KW - figure
KW - Gaussian
KW - ground
KW - image
KW - initialization;
KW - iterative
KW - Kernel
KW - low
KW - methods;
KW - mixture;
KW - model
KW - model;
KW - nonparametric
KW - parameter
KW - parametric
KW - processes;
KW - sampling
KW - sampling;
KW - segmentation
KW - segmentation;
KW - statistics;
KW - theory;
KW - vision;
AB - Figure-ground discrimination is an important problem in computer vision. Previous work usually assumes that the color distribution of the figure can be described by a low dimensional parametric model such as a mixture of Gaussians. However, such an approach has difficulty selecting the number of mixture components and is sensitive to the initialization of the model parameters. In this paper, we employ non-parametric kernel estimation for color distributions of both the figure and background. We derive an iterative sampling-expectation (SE) algorithm for estimating the color distribution and segmentation. There are several advantages of kernel-density estimation. First, it enables automatic selection of weights of different cues based on the bandwidth calculation from the image itself. Second, it does not require model parameter initialization and estimation. The experimental results on images of cluttered scenes demonstrate the effectiveness of the proposed algorithm.
JA - Pattern Recognition, 2004. ICPR 2004. Proceedings of the 17th International Conference on
VL - 1
M3 - 10.1109/ICPR.2004.1334006
ER -
TY - CONF
T1 - An appearance based approach for human and object tracking
T2 - Image Processing, 2003. ICIP 2003. Proceedings. 2003 International Conference on
Y1 - 2003
A1 - Capellades, M. B.
A1 - Doermann, David
A1 - DeMenthon, D.
A1 - Chellappa, Rama
KW - algorithm;
KW - analysis;
KW - background
KW - basis;
KW - by
KW - Color
KW - colour
KW - correlogram
KW - detection;
KW - distributions;
KW - frame
KW - histogram
KW - human
KW - image
KW - information;
KW - object
KW - processing;
KW - segmentation;
KW - sequences;
KW - signal
KW - subtraction
KW - tracking;
KW - video
AB - A system for tracking humans and detecting human-object interactions in indoor environments is described. A combination of correlogram and histogram information is used to model object and human color distributions. Humans and objects are detected using a background subtraction algorithm. The models are built on the fly and used to track them on a frame-by-frame basis. The system is able to detect when people merge into groups and segment them during occlusion. Identities are preserved during the sequence, even if a person enters and leaves the scene. The system is also able to detect when a person deposits or removes an object from the scene. In the first case the models are used to track the object retroactively in time. In the second case the objects are tracked for the rest of the sequence. Experimental results using indoor video sequences are presented.
JA - Image Processing, 2003. ICIP 2003. Proceedings. 2003 International Conference on
VL - 2
M3 - 10.1109/ICIP.2003.1246622
ER -
TY - JOUR
T1 - Efficient kernel density estimation using the fast Gauss transform with applications to color modeling and tracking
JF - Pattern Analysis and Machine Intelligence, IEEE Transactions on
Y1 - 2003
A1 - Elgammal, A.
A1 - Duraiswami, Ramani
A1 - Davis, Larry S.
KW - algorithms;
KW - Color
KW - Computer
KW - density
KW - estimation;
KW - fast
KW - function;
KW - Gauss
KW - image
KW - Kernel
KW - modeling;
KW - segmentation;
KW - tracking;
KW - transform;
KW - transforms;
KW - VISION
KW - vision;
AB - Many vision algorithms depend on the estimation of a probability density function from observations. Kernel density estimation techniques are quite general and powerful methods for this problem, but have a significant disadvantage in that they are computationally intensive. In this paper, we explore the use of kernel density estimation with the fast Gauss transform (FGT) for problems in vision. The FGT allows the summation of a mixture of M Gaussians at N evaluation points in O(M+N) time, as opposed to O(MN) time for a naive evaluation, and can be used to considerably speed up kernel density estimation. We present applications of the technique to problems from image segmentation and tracking and show that the algorithm allows application of advanced statistical techniques to solve practical vision problems in real-time with today's computers.
VL - 25
SN - 0162-8828
CP - 11
M3 - 10.1109/TPAMI.2003.1240123
ER -
TY - JOUR
T1 - Visual and textual consistency checking tools for graphical user interfaces
JF - IEEE Transactions on Software Engineering
Y1 - 1997
A1 - Mahajan, R.
A1 - Shneiderman, Ben
KW - button analysis tools
KW - Color
KW - dialog box summary table
KW - experiment
KW - graphical analysis tools
KW - Graphical user interfaces
KW - human factors
KW - inconsistent interface terminology
KW - interface color
KW - interface spell checker
KW - Output feedback
KW - Prototypes
KW - SHERLOCK
KW - Software architecture
KW - Software design
KW - software metrics
KW - software prototyping
KW - software tools
KW - Terminology
KW - terminology analysis tools
KW - Testing
KW - textual consistency checking tools
KW - user interface management systems
KW - User interfaces
KW - user performance
KW - visual consistency checking tools
AB - Designing user interfaces with consistent visual and textual properties is difficult. To demonstrate the harmful effects of inconsistency, we conducted an experiment with 60 subjects. Inconsistent interface terminology slowed user performance by 10 to 25 percent. Unfortunately, contemporary software tools provide only modest support for consistency control. Therefore, we developed SHERLOCK, a family of consistency analysis tools, which evaluates visual and textual properties of user interfaces. It provides graphical analysis tools such as a dialog box summary table that presents a compact overview of visual properties of all dialog boxes. SHERLOCK provides terminology analysis tools including an interface concordance, an interface spellchecker, and terminology baskets to check for inconsistent use of familiar groups of terms. Button analysis tools include a button concordance and a button layout table to detect variant capitalization, distinct typefaces, distinct colors, variant button sizes, and inconsistent button placements. We describe the design, software architecture, and the use of SHERLOCK. We tested SHERLOCK with four commercial prototypes. The outputs, analysis, and feedback from designers of the applications are presented
VL - 23
SN - 0098-5589
CP - 11
M3 - 10.1109/32.637386
ER -