TY - CONF
T1 - Computing the Tree of Life: Leveraging the Power of Desktop and Service Grids
T2 - Parallel and Distributed Processing Workshops and PhD Forum (IPDPSW), 2011 IEEE International Symposium on
Y1 - 2011
A1 - Bazinet, A.L.
A1 - Cummings, Michael P.
KW - BOINC software
KW - computational load
KW - computational power
KW - data handling
KW - data sets
KW - evolutionary computation
KW - evolutionary history
KW - evolutionary model
KW - GARLI jobs
KW - genetic sequence data
KW - grid computing
KW - grid computing system
KW - heterogeneous grid resource
KW - HPC resource
KW - information services
KW - Internet
KW - lattice project
KW - learning (artificial intelligence)
KW - machine learning
KW - maximum likelihood estimation
KW - maximum likelihood method
KW - molecular evolutionary systematics
KW - phylogenetic analysis
KW - portals
KW - science portal
KW - service grids
KW - service model
KW - Tree of Life
KW - trees (mathematics)
KW - user interfaces
KW - Web interface
AB - The trend in life sciences research, particularly in molecular evolutionary systematics, is toward larger data sets and ever more detailed evolutionary models, which can generate substantial computational loads. Over the past several years we have developed a grid computing system aimed at providing researchers the computational power needed to complete such analyses in a timely manner. Our grid system, known as The Lattice Project, was the first to combine two models of grid computing - the service model, which mainly federates large institutional HPC resources, and the desktop model, which harnesses the power of PCs volunteered by the general public. Recently we have developed a "science portal" style web interface that makes it easier than ever for phylogenetic analyses to be completed using GARLI, a popular program that uses a maximum likelihood method to infer the evolutionary history of organisms on the basis of genetic sequence data.
This paper describes our approach to scheduling thousands of GARLI jobs with diverse requirements to heterogeneous grid resources, which include volunteer computers running BOINC software. A key component of this system provides a priori GARLI runtime estimates using machine learning with random forests.
JA - Parallel and Distributed Processing Workshops and PhD Forum (IPDPSW), 2011 IEEE International Symposium on
M3 - 10.1109/IPDPS.2011.344
ER -

TY - CONF
T1 - Automatic target recognition based on simultaneous sparse representation
T2 - Image Processing (ICIP), 2010 17th IEEE International Conference on
Y1 - 2010
A1 - Patel, Vishal M.
A1 - Nasrabadi, N.M.
A1 - Chellappa, Rama
KW - automatic target recognition
KW - class supervised simultaneous orthogonal matching pursuit
KW - Comanche forward-looking infrared data set
KW - confusion matrix
KW - dictionary learning algorithm
KW - feature extraction
KW - image classification
KW - iterative methods
KW - learning (artificial intelligence)
KW - matching pursuit based similarity measure
KW - military systems
KW - military target
KW - object recognition
KW - simultaneous sparse signal representation
KW - sparse representation
KW - target tracking
AB - In this paper, an automatic target recognition algorithm is presented based on a framework for learning dictionaries for simultaneous sparse signal representation and feature extraction. The dictionary learning algorithm is based on class supervised simultaneous orthogonal matching pursuit, while a matching pursuit-based similarity measure is used for classification. We show how the proposed framework can be helpful for efficient utilization of data, with the possibility of developing real-time, robust target classification.
We verify the efficacy of the proposed algorithm using confusion matrices on the well-known Comanche forward-looking infrared data set consisting of ten different military targets at different orientations.
JA - Image Processing (ICIP), 2010 17th IEEE International Conference on
M3 - 10.1109/ICIP.2010.5652306
ER -

TY - CONF
T1 - Combining multiple kernels for efficient image classification
T2 - Applications of Computer Vision (WACV), 2009 Workshop on
Y1 - 2009
A1 - Siddiquie, B.
A1 - Vitaladevuni, S.N.
A1 - Davis, Larry S.
KW - AdaBoost
KW - base kernels
KW - composite decision function
KW - discriminative kernel
KW - image classification
KW - kernel similarity
KW - learning (artificial intelligence)
KW - multiple feature channels
KW - multiple kernel learning
KW - support vector machines
AB - We investigate the problem of combining multiple feature channels for the purpose of efficient image classification. Discriminative kernel-based methods, such as SVMs, have been shown to be quite effective for image classification. To use these methods with several feature channels, one needs to combine base kernels computed from them. Multiple kernel learning is an effective method for combining the base kernels. However, the cost of computing the kernel similarities of a test image with each of the support vectors for all feature channels is extremely high. We propose an alternate method, where training data instances are selected, using AdaBoost, for each of the base kernels. A composite decision function, which can be evaluated by computing kernel similarities with respect to only these chosen instances, is learnt. This method significantly reduces the number of kernel computations required during testing. Experimental results on the benchmark UCI datasets, as well as on a challenging painting dataset, are included to demonstrate the effectiveness of our method.
JA - Applications of Computer Vision (WACV), 2009 Workshop on
M3 - 10.1109/WACV.2009.5403040
ER -

TY - CONF
T1 - Learning Discriminative Appearance-Based Models Using Partial Least Squares
T2 - Computer Graphics and Image Processing (SIBGRAPI), 2009 XXII Brazilian Symposium on
Y1 - 2009
A1 - Schwartz, W.R.
A1 - Davis, Larry S.
KW - appearance based discriminative models
KW - feature descriptors
KW - image colour analysis
KW - learning (artificial intelligence)
KW - learning techniques
KW - least squares approximations
KW - machine learning
KW - object recognition
KW - partial least squares
KW - person recognition
AB - Appearance information is essential for applications such as tracking and people recognition. One of the main problems of using appearance-based discriminative models is the ambiguities among classes when the number of persons being considered increases. To reduce the amount of ambiguity, we propose the use of a rich set of feature descriptors based on color, textures and edges. Another issue regarding appearance modeling is the limited number of training samples available for each appearance. The discriminative models are created using a powerful statistical tool called partial least squares (PLS), responsible for weighting the features according to their discriminative power for each different appearance. The experimental results, based on appearance-based person recognition, demonstrate that the use of an enriched feature set analyzed by PLS reduces the ambiguity among different appearances and provides higher recognition rates when compared to other machine learning techniques.
JA - Computer Graphics and Image Processing (SIBGRAPI), 2009 XXII Brazilian Symposium on
M3 - 10.1109/SIBGRAPI.2009.42
ER -

TY - CONF
T1 - Understanding videos, constructing plots: learning a visually grounded storyline model from annotated videos
T2 - Computer Vision and Pattern Recognition, 2009. CVPR 2009.
IEEE Conference on
Y1 - 2009
A1 - Gupta, A.
A1 - Srinivasan, P.
A1 - Shi, Jianbo
A1 - Davis, Larry S.
KW - AND-OR graph
KW - encoding
KW - graph theory
KW - human action recognition
KW - human activity analysis
KW - image representation
KW - integer programming
KW - integer programming framework
KW - learning (artificial intelligence)
KW - plots construction
KW - semantic meaning
KW - spatio-temporal constraint
KW - spatiotemporal phenomena
KW - video annotation
KW - video coding
KW - video understanding
KW - visually grounded storyline model
AB - Analyzing videos of human activities involves not only recognizing actions (typically based on their appearances), but also determining the story/plot of the video. The storyline of a video describes causal relationships between actions. Beyond recognition of individual actions, discovering causal relationships helps to better understand the semantic meaning of the activities. We present an approach to learn a visually grounded storyline model of videos directly from weakly labeled data. The storyline model is represented as an AND-OR graph, a structure that can compactly encode storyline variation across videos. The edges in the AND-OR graph correspond to causal relationships which are represented in terms of spatio-temporal constraints. We formulate an Integer Programming framework for action recognition and storyline extraction using the storyline model and visual groundings learned from training data.
JA - Computer Vision and Pattern Recognition, 2009. CVPR 2009. IEEE Conference on
M3 - 10.1109/CVPR.2009.5206492
ER -

TY - CONF
T1 - Augmenting spatio-textual search with an infectious disease ontology
T2 - Data Engineering Workshop, 2008. ICDEW 2008. IEEE 24th International Conference on
Y1 - 2008
A1 - Lieberman, M.D.
A1 - Sankaranarayanan, J.
A1 - Samet, Hanan
A1 - Sperling, J.
KW - classification
KW - diseases
KW - epidemics
KW - indexing
KW - infectious disease
KW - map interface
KW - medical information systems
KW - newspaper article
KW - ontologies (artificial intelligence)
KW - ontology
KW - search engines
KW - spatio-textual search
KW - STEWARD system
AB - A system is described that automatically categorizes and classifies infectious disease incidence reports by type and geographic location, to aid analysis by domain experts. It identifies references to infectious diseases by using a disease ontology. The system leverages the textual and spatial search capabilities of the STEWARD system to enable queries such as reports on "influenza" near "Hong Kong", possibly within a particular time period. Documents from the U.S. National Library of Medicine (http://www.pubmed.gov) and the World Health Organization (http://www.who.int) are tagged so that spatial relationships to specific disease occurrences can be presented graphically via a map interface. In addition, newspaper articles can be tagged and indexed to bolster the surveillance of ongoing epidemics. Examining past epidemics using this system may lead to improved understanding of the cause and spread of infectious diseases.
JA - Data Engineering Workshop, 2008. ICDEW 2008. IEEE 24th International Conference on
M3 - 10.1109/ICDEW.2008.4498330
ER -

TY - JOUR
T1 - CONVEX: Similarity-Based Algorithms for Forecasting Group Behavior
JF - Intelligent Systems, IEEE
Y1 - 2008
A1 - Martinez, V.
A1 - Simari, G.I.
A1 - Sliva, A.
A1 - Subrahmanian, V.S.
KW - action vector
KW - behavioural sciences computing
KW - context vector
KW - CONVEXk-NN algorithm
KW - CONVEXMerge algorithm
KW - group behavior forecasting
KW - high-dimensional metric space
KW - ontologies (artificial intelligence)
KW - ontology
KW - similarity-based algorithm
AB - A proposed framework for predicting a group's behavior associates two vectors with that group.
The context vector tracks aspects of the environment in which the group functions; the action vector tracks the group's previous actions. Given a set of past behaviors consisting of a pair of these vectors and given a query context vector, the goal is to predict the associated action vector. To achieve this goal, two families of algorithms employ vector similarity. CONVEXk-NN algorithms use k-nearest neighbors in high-dimensional metric spaces; CONVEXMerge algorithms look at linear combinations of distances of the query vector from context vectors. Compared to past prediction algorithms, these algorithms are extremely fast. Moreover, experiments on real-world data sets show that the algorithms are highly accurate, predicting actions with well over 95-percent accuracy.
VL - 23
SN - 1541-1672
CP - 4
M3 - 10.1109/MIS.2008.62
ER -

TY - CONF
T1 - Learning action dictionaries from video
T2 - Image Processing, 2008. ICIP 2008. 15th IEEE International Conference on
Y1 - 2008
A1 - Turaga, P.
A1 - Chellappa, Rama
KW - automated surveillance systems
KW - computer vision
KW - image segmentation
KW - independent action-phrases
KW - learning (artificial intelligence)
KW - learning action dictionaries
KW - spatial transforms
KW - video segment decomposition
KW - video sequence
KW - video surveillance
AB - Summarizing the contents of a video containing human activities is an important problem in computer vision and has important applications in automated surveillance systems. Summarizing a video requires one to identify and learn a 'vocabulary' of action-phrases corresponding to specific events and actions occurring in the video. We propose a generative model for dynamic scenes containing human activities as a composition of independent action-phrases - each of which is derived from an underlying vocabulary. Given a long video sequence, we propose a completely unsupervised approach to learn the vocabulary.
Once the vocabulary is learnt, a video segment can be decomposed into a collection of phrases for summarization. We then describe methods to learn the correlations between activities and sequentiality of events. We also propose a novel method for building invariances to spatial transforms in the summarization scheme.
JA - Image Processing, 2008. ICIP 2008. 15th IEEE International Conference on
M3 - 10.1109/ICIP.2008.4712102
ER -

TY - CONF
T1 - Human Appearance Change Detection
T2 - Image Analysis and Processing, 2007. ICIAP 2007. 14th International Conference on
Y1 - 2007
A1 - Ghanem, N.M.
A1 - Davis, Larry S.
KW - boosting technique
KW - codeword frequency difference map
KW - histogram intersection map
KW - human appearance change detection
KW - image classification
KW - image recognition
KW - image sequences
KW - learning (artificial intelligence)
KW - left package detection
KW - machine learning approach
KW - occupancy difference map
KW - pattern recognition
KW - support vector machine classifier
KW - support vector machines
KW - vector quantisation
KW - vector quantization
KW - video sequence
KW - video sequences
KW - video surveillance
AB - We present a machine learning approach to detect changes in human appearance between instances of the same person that may be taken with different cameras, but over short periods of time. For each video sequence of the person, we approximately align each frame in the sequence and then generate a set of features that captures the differences between the two sequences. The features are the occupancy difference map, the codeword frequency difference map (based on a vector quantization of the set of colors and frequencies) at each aligned pixel, and the histogram intersection map. A boosting technique is then applied to learn the most discriminative set of features, and these features are then used to train a support vector machine classifier to recognize significant appearance changes.
We apply our approach to the problem of left package detection. We train the classifiers on a laboratory database of videos in which people are seen with and without common articles that people carry - backpacks and suitcases. We test the approach on some real airport video sequences. Moving to the real-world videos requires addressing additional problems, including the view selection problem and the frame selection problem.
JA - Image Analysis and Processing, 2007. ICIAP 2007. 14th International Conference on
M3 - 10.1109/ICIAP.2007.4362833
ER -

TY - CONF
T1 - Learning Higher-order Transition Models in Medium-scale Camera Networks
T2 - Computer Vision, 2007. ICCV 2007. IEEE 11th International Conference on
Y1 - 2007
A1 - Farrell, R.
A1 - Doermann, David
A1 - Davis, Larry S.
KW - Bayes methods
KW - Bayesian framework
KW - data association problem
KW - data fusion
KW - higher-order statistics
KW - higher-order transition model
KW - incremental approach
KW - iterative methods
KW - learning (artificial intelligence)
KW - medium-scale camera network
KW - most likely partition
KW - multicamera tracking
KW - object movement
KW - object tracking
KW - optical tracking
KW - probabilistic graphical model
KW - probability
KW - video cameras
KW - video surveillance
AB - We present a Bayesian framework for learning higher-order transition models in video surveillance networks. Such higher-order models describe object movement between cameras in the network and have a greater predictive power for multi-camera tracking than camera adjacency alone. These models also provide inherent resilience to camera failure, filling in gaps left by single or even multiple non-adjacent camera failures. Our approach to estimating higher-order transition models relies on the accurate assignment of camera observations to the underlying trajectories of objects moving through the network.
We address this data association problem by gathering the observations and evaluating alternative partitions of the observation set into individual object trajectories. Searching the complete partition space is intractable, so an incremental approach is taken, iteratively adding observations and pruning unlikely partitions. Partition likelihood is determined by the evaluation of a probabilistic graphical model. When the algorithm has considered all observations, the most likely (MAP) partition is taken as the true object trajectories. From these recovered trajectories, the higher-order statistics we seek can be derived and employed for tracking. The partitioning algorithm we present is parallel in nature and can be readily extended to distributed computation in medium-scale smart camera networks.
JA - Computer Vision, 2007. ICCV 2007. IEEE 11th International Conference on
M3 - 10.1109/ICCV.2007.4409203
ER -

TY - CONF
T1 - Handwriting matching and its application to handwriting synthesis
T2 - Document Analysis and Recognition, 2005. Proceedings. Eighth International Conference on
Y1 - 2005
A1 - Zheng, Yefeng
A1 - Doermann, David
KW - deformation learning
KW - handwriting recognition
KW - handwriting synthesis
KW - image sampling
KW - learning (artificial intelligence)
KW - point matching
KW - shape deformation
AB - Since it is extremely expensive to collect a large volume of handwriting samples, synthesized data are often used to enlarge the training set. We argue that, in order to generate good handwriting samples, a synthesis algorithm should learn the shape deformation characteristics of handwriting from real samples. In this paper, we present a point matching algorithm to learn the deformation, and apply it to handwriting synthesis. Preliminary experiments show the advantages of our approach.
JA - Document Analysis and Recognition, 2005. Proceedings.
Eighth International Conference on
M3 - 10.1109/ICDAR.2005.122
ER -

TY - CONF
T1 - Modelling pedestrian shapes for outlier detection: a neural net based approach
T2 - Intelligent Vehicles Symposium, 2003. Proceedings. IEEE
Y1 - 2003
A1 - Nanda, H.
A1 - Benabdelkader, C.
A1 - Davis, Larry S.
KW - complex shapes
KW - computer vision
KW - custom design
KW - learning (artificial intelligence)
KW - learning method
KW - neural nets
KW - object recognition
KW - outlier detection
KW - pedestrian shape modelling
KW - recognition rate
KW - traffic engineering computing
KW - two layer neural net
AB - In this paper we present an example-based approach to learn a given class of complex shapes, and recognize instances of that shape with outliers. The system consists of a two-layer custom-designed neural network. We apply this approach to the recognition of pedestrians carrying objects from a single camera. The system is able to capture and model an ample range of pedestrian shapes at varying poses and camera orientations, and achieves a 90% correct recognition rate.
JA - Intelligent Vehicles Symposium, 2003. Proceedings. IEEE
M3 - 10.1109/IVS.2003.1212949
ER -