%0 Conference Paper %B 2011 IEEE International Conference on Computer Vision (ICCV) %D 2011 %T Blurring-invariant Riemannian metrics for comparing signals and images %A Zhengwu Zhang %A Klassen, E. %A Srivastava, A. %A Turaga,P. %A Chellapa, Rama %K blurring-invariant Riemannian metrics %K Estimation %K Fourier transforms %K Gaussian blur function %K Gaussian processes %K image representation %K log-Fourier representation %K measurement %K Orbits %K Polynomials %K signal representation %K Space vehicles %K vectors %X We propose a novel Riemannian framework for comparing signals and images in a manner that is invariant to their levels of blur. This framework uses a log-Fourier representation of signals/images in which the set of all possible Gaussian blurs of a signal, i.e. its orbits under semigroup action of Gaussian blur functions, is a straight line. Using a set of Riemannian metrics under which the group actions are by isometries, the orbits are compared via distances between orbits. We demonstrate this framework using a number of experimental results involving 1D signals and 2D images. %B 2011 IEEE International Conference on Computer Vision (ICCV) %I IEEE %P 1770 - 1775 %8 2011/11/06/13 %@ 978-1-4577-1101-5 %G eng %R 10.1109/ICCV.2011.6126442 %0 Conference Paper %B 2011 IEEE International Conference on Computer Vision (ICCV) %D 2011 %T Domain adaptation for object recognition: An unsupervised approach %A Gopalan,R. %A Ruonan Li %A Chellapa, Rama %K Data models %K data representations %K discriminative classifier %K Feature extraction %K Grassmann manifold %K image sampling %K incremental learning %K labeled source domain %K Manifolds %K measurement %K object category %K Object recognition %K Principal component analysis %K sampling points %K semisupervised adaptation %K target domain %K underlying domain shift %K unsupervised approach %K unsupervised domain adaptation %K Unsupervised learning %K vectors %X Adapting the classifier trained on a source domain to recognize instances from a new target domain is an important problem that is receiving recent attention. In this paper, we present one of the first studies on unsupervised domain adaptation in the context of object recognition, where we have labeled data only from the source domain (and therefore do not have correspondences between object categories across domains). Motivated by incremental learning, we create intermediate representations of data between the two domains by viewing the generative subspaces (of same dimension) created from these domains as points on the Grassmann manifold, and sampling points along the geodesic between them to obtain subspaces that provide a meaningful description of the underlying domain shift. We then obtain the projections of labeled source domain data onto these subspaces, from which a discriminative classifier is learnt to classify projected data from the target domain. We discuss extensions of our approach for semi-supervised adaptation, and for cases with multiple source and target domains, and report competitive results on standard datasets. %B 2011 IEEE International Conference on Computer Vision (ICCV) %I IEEE %P 999 - 1006 %8 2011/11/06/13 %@ 978-1-4577-1101-5 %G eng %R 10.1109/ICCV.2011.6126344 %0 Conference Paper %B 2011 IEEE Conference on Computer Vision and Pattern Recognition (CVPR) %D 2011 %T Entropy rate superpixel segmentation %A Ming-Yu Liu %A Tuzel, O. %A Ramalingam, S. 
%A Chellappa, Rama %K balancing function %K Berkeley segmentation benchmark %K Complexity theory %K Entropy %K entropy rate %K graph construction %K graph theory %K graph topology %K greedy algorithm %K Greedy algorithms %K homogeneous clusters %K Image edge detection %K Image segmentation %K matrix algebra %K matroid constraint %K measurement %K pattern clustering %K Random variables %K standard evaluation metrics %K superpixel segmentation %K vector spaces %X We propose a new objective function for superpixel segmentation. This objective function consists of two components: entropy rate of a random walk on a graph and a balancing term. The entropy rate favors formation of compact and homogeneous clusters, while the balancing function encourages clusters with similar sizes. We present a novel graph construction for images and show that this construction induces a matroid - a combinatorial structure that generalizes the concept of linear independence in vector spaces. The segmentation is then given by the graph topology that maximizes the objective function under the matroid constraint. By exploiting submodular and monotonic properties of the objective function, we develop an efficient greedy algorithm. Furthermore, we prove an approximation bound of ½ for the optimality of the solution. Extensive experiments on the Berkeley segmentation benchmark show that the proposed algorithm outperforms the state of the art in all the standard evaluation metrics. %B 2011 IEEE Conference on Computer Vision and Pattern Recognition (CVPR) %I IEEE %P 2097 - 2104 %8 2011/06/20/25 %@ 978-1-4577-0394-2 %G eng %R 10.1109/CVPR.2011.5995323 %0 Conference Paper %B 2011 44th Hawaii International Conference on System Sciences (HICSS) %D 2011 %T EventGraphs: Charting Collections of Conference Connections %A Hansen,D. %A Smith,M. A %A Shneiderman, Ben %K conference connections %K Conferences %K Data visualization %K EventGraphs %K hashtag %K measurement %K Media %K message identification %K multimedia computing %K NodeXL %K Real time systems %K social media network diagrams %K social networking (online) %K Twitter %X EventGraphs are social media network diagrams of conversations related to events, such as conferences. Many conferences now communicate a common "hashtag" or keyword to identify messages related to the event. EventGraphs help make sense of the collections of connections that form when people follow, reply or mention one another and a keyword. This paper defines EventGraphs, characterizes different types, and shows how the social media network analysis add-in NodeXL supports their creation and analysis. The structural patterns to look for in EventGraphs are highlighted and design ideas for their improvement are discussed. %B 2011 44th Hawaii International Conference on System Sciences (HICSS) %I IEEE %P 1 - 10 %8 2011/01/04/7 %@ 978-1-4244-9618-1 %G eng %R 10.1109/HICSS.2011.196 %0 Conference Paper %B Privacy, Security, Risk and Trust (PASSAT), 2011 IEEE Third International Conference on and 2011 IEEE Third International Conference on Social Computing (SocialCom) %D 2011 %T NetVisia: Heat Map & Matrix Visualization of Dynamic Social Network Statistics & Content %A Gove,R. %A Gramsky,N. %A Kirby,R. %A Sefer,E. %A Sopan,A. %A Dunne,C. %A Shneiderman, Ben %A Taieb-Maimon,M.
%K business intelligence concept %K business intelligence entity %K competitive intelligence %K data visualisation %K dynamic networks %K dynamic social network %K heat map %K Heating %K Image color analysis %K Information Visualization %K Layout %K matrix visualization %K measurement %K NetVisia system %K network evolution %K network visualization %K node-link diagrams %K outlier node %K social network content %K Social network services %K social network statistics %K social networking (online) %K social networks %K static network visualization %K time period %K topological similarity %K Training %K usability %K user evaluation %K User interfaces %X Visualizations of static networks in the form of node-link diagrams have evolved rapidly, though researchers are still grappling with how best to show evolution of nodes over time in these diagrams. This paper introduces NetVisia, a social network visualization system designed to support users in exploring temporal evolution in networks by using heat maps to display node attribute changes over time. NetVisia's novel contributions to network visualizations are to (1) cluster nodes in the heat map by similar metric values instead of by topological similarity, and (2) align nodes in the heat map by events. We compare NetVisia to existing systems and describe a formative user evaluation of a NetVisia prototype with four participants that emphasized the need for tool tips and coordinated views. Despite the presence of some usability issues, in 30-40 minutes the user evaluation participants discovered new insights about the data set which had not been discovered using other systems. We discuss implemented improvements to NetVisia, and analyze a co-occurrence network of 228 business intelligence concepts and entities. This analysis confirms the utility of a clustered heat map to discover outlier nodes and time periods. %B Privacy, Security, Risk and Trust (PASSAT), 2011 IEEE Third International Conference on and 2011 IEEE Third International Confernece on Social Computing (SocialCom) %I IEEE %P 19 - 26 %8 2011/10/09/11 %@ 978-1-4577-1931-8 %G eng %R 10.1109/PASSAT/SocialCom.2011.216 %0 Conference Paper %B Privacy, Security, Risk and Trust (PASSAT), 2011 IEEE Third International Conference on and 2011 IEEE Third International Confernece on Social Computing (SocialCom) %D 2011 %T Visual Analysis of Temporal Trends in Social Networks Using Edge Color Coding and Metric Timelines %A Khurana,U. %A Nguyen,Viet-An %A Hsueh-Chien Cheng %A Ahn,Jae-wook %A Chen,Xi %A Shneiderman, Ben %K Color %K computer scientists %K data analysts %K data visualisation %K Data visualization %K dynamic social network %K dynamic timeslider %K edge color coding %K excel sheet %K Image coding %K image colour analysis %K Layout %K measurement %K metric timelines %K Microsoft excel %K multiple graph metrics %K Net EvViz %K network components %K network layout %K network visualization tool %K NodeXL template %K social networking (online) %K temporal trends %K Twitter %K Visualization %X We present Net EvViz, a visualization tool for analysis and exploration of a dynamic social network. There are plenty of visual social network analysis tools but few provide features for visualization of dynamically changing networks featuring the addition or deletion of nodes or edges. Our tool extends the code base of the Node XL template for Microsoft Excel, a popular network visualization tool. 
The key features of this work are (1) the ability of the user to specify and edit temporal annotations to the network components in an Excel sheet, (2) a display of the dynamics of the network, called the Timeline, with multiple graph metrics plotted over the time span of the graph, and (3) temporal exploration of the network layout using an edge coloring scheme and a dynamic time slider. The objective of the new features presented in this paper is to let data analysts, computer scientists, and others observe the dynamics and evolution of a network interactively. We presented Net EvViz to five users of NodeXL and received positive responses. %B Privacy, Security, Risk and Trust (PASSAT), 2011 IEEE Third International Conference on and 2011 IEEE Third International Conference on Social Computing (SocialCom) %I IEEE %P 549 - 554 %8 2011/10/09/11 %@ 978-1-4577-1931-8 %G eng %R 10.1109/PASSAT/SocialCom.2011.212 %0 Report %D 2010 %T Compressive Video Acquisition, Fusion and Processing %A Baraniuk,Richard G. %A Chellappa, Rama %A Wakin,Michael %K *DATA FUSION %K *DETECTORS %K *SIGNAL PROCESSING %K *VIDEO SIGNALS %K ACQUISITION %K COLLECTION %K COMMUNICATION AND RADIO SYSTEMS %K COMPRESSIVE PROPERTIES %K COMPRESSIVE SAMPLING %K compressive sensing %K COMPRESSIVE VIDEO %K decision making %K DEPLOYMENT %K DETECTION %K DYNAMICS %K IMAGE PROCESSING %K Linear systems %K Linearity %K MANIFOLDS(ENGINES) %K measurement %K MISCELLANEOUS DETECTION AND DETECTORS %K MODELS %K sampling %K THEORY %K TRAJECTORIES %X Modern developments in sensor technology, signal processing, and wireless communications have enabled the conception and deployment of large-scale networked sensing systems spanning numerous collection platforms and varied modalities. These systems have the potential to make intelligent decisions by integrating information from massive amounts of sensor data. Before such benefits can be achieved, significant advances must be made in methods for communicating, fusing, and processing this ever-growing volume of diverse data. In this one-year research project, we aimed to expose the fundamental issues and pave the way for further careful study of compressive approaches to video acquisition, fusion, and processing. In doing so, we developed a theoretical definition of video temporal bandwidth and applied the theory to compressive sampling and reconstruction. We created a new framework for compressive video sensing based on linear dynamical systems, lowering the compressive measurement rate. Finally, we applied our own joint manifold model to a variety of relevant image processing problems, demonstrating the model's effectiveness and ability to overcome noise and occlusion obstacles. We also showed how joint manifold models can discover an object's trajectory, an important step towards video fusion. %8 2010/12/14/ %G eng %U http://stinet.dtic.mil/oai/oai?&verb=getRecord&metadataPrefix=html&identifier=ADA533703 %0 Conference Paper %B Image Processing (ICIP), 2009 16th IEEE International Conference on %D 2009 %T Enhancing sparsity using gradients for compressive sensing %A Patel, Vishal M. %A Easley,G. R %A Chellappa, Rama %A Healy,D.
M %K compressive sensing %K enhancing sparsity %K Fourier analysis %K Fourier domain %K gradient domain %K gradient methods %K image reconstruction %K image representation %K measurement %K partial measurement samples %K robust generalized Poisson solver %K sampling methods %K sampling scenarios %K sparse representation %X In this paper, we propose a reconstruction method that recovers images assumed to have a sparse representation in a gradient domain by using partial measurement samples that are collected in the Fourier domain. A key improvement of this technique is that it makes use of a robust generalized Poisson solver that greatly aids in achieving a significantly improved performance over similar proposed methods. Experiments provided also demonstrate that this new technique is more flexible to work with either random or restricted sampling scenarios better than its competitors. %B Image Processing (ICIP), 2009 16th IEEE International Conference on %P 3033 - 3036 %8 2009/11// %G eng %R 10.1109/ICIP.2009.5414411 %0 Report %D 2009 %T The Use of Empirical Studies in the Development of High End Computing Applications %A Basili, Victor R. %A Zelkowitz, Marvin V %K *COMPUTER PROGRAMMING %K *EMPIRICAL STUDIES %K *HPC(HIGH PERFORMANCE COMPUTING) %K *METHODOLOGY %K *PARALLEL PROCESSING %K *PARALLEL PROGRAMMING %K *PRODUCTIVITY %K *SOFTWARE ENGINEERING %K *SOFTWARE METRICS %K ADMINISTRATION AND MANAGEMENT %K APMS(AUTOMATED PERFORMANCE MEASUREMENT SYSTEM) %K COMPUTER PROGRAMMING AND SOFTWARE %K COMPUTER SYSTEMS MANAGEMENT AND STANDARDS %K data acquisition %K efficiency %K ENVIRONMENTS %K HIGH END COMPUTING %K HPCBUGBASE %K HUMAN FACTORS ENGINEERING & MAN MACHINE SYSTEM %K learning %K measurement %K MPI(MESSAGE PASSING INTERFACE) %K PARALLEL ORIENTATION %K PE62303E %K PROGRAMMERS %K SE(SOFTWARE ENGINEERING) %K SUPERVISORS %K TEST AND EVALUATION %K TEST FACILITIES, EQUIPMENT AND METHODS %K TIME %K tools %K United States %K WUAFRLT810HECA %X This report provides a description of the research and development activities towards learning much about the development and measurement of productivity in high performance computing environments. Many objectives were accomplished including the development of a methodology for measuring productivity in the parallel programming domain. This methodology was tested over 25 times at 8 universities across the United States and can be used to aid other researchers studying similar environments. The productivity measurement methodology incorporates both development time and performance into a single productivity number. An Experiment Manager tool for collecting data on the development of parallel programs, as well as a suite of tools to aid in the capture and analysis of such data was also developed. Lastly, several large scale development environments were studied in order to better understand the environment used to build large parallel programming applications. That work also included several surveys and interviews with many professional programmers in these environments. %I University of Maryland, College Park %8 2009/12// %G eng %U http://stinet.dtic.mil/oai/oai?&verb=getRecord&metadataPrefix=html&identifier=ADA511351 %0 Conference Paper %B Proceedings of the first workshop on Online social networks %D 2008 %T Growth of the flickr social network %A Mislove,Alan %A Koppula,Hema Swetha %A Gummadi,Krishna P.
%A Druschel,Peter %A Bhattacharjee, Bobby %K growth %K measurement %K social networks %X Online social networking sites like MySpace, Orkut, and Flickr are among the most popular sites on the Web and continue to experience dramatic growth in their user population. The popularity of these sites offers a unique opportunity to study the dynamics of social networks at scale. Having a proper understanding of how online social networks grow can provide insights into the network structure, allow predictions of future growth, and enable simulation of systems on networks of arbitrary size. However, to date, most empirical studies have focused on static network snapshots rather than growth dynamics. In this paper, we collect and examine detailed growth data from the Flickr online social network, focusing on the ways in which new links are formed. Our study makes two contributions. First, we collect detailed data covering three months of growth, encompassing 950,143 new users and over 9.7 million new links, and we make this data available to the research community. Second, we use a first-principles approach to investigate the link formation process. In short, we find that links tend to be created by users who already have many links, that users tend to respond to incoming links by creating links back to the source, and that users link to other users who are already close in the network. %B Proceedings of the first workshop on Online social networks %S WOSN '08 %I ACM %C New York, NY, USA %P 25 - 30 %8 2008/// %@ 978-1-60558-182-8 %G eng %U http://doi.acm.org/10.1145/1397735.1397742 %R 10.1145/1397735.1397742 %0 Report %D 2008 %T Measures and Risk Indicators for Early Insight into Software Safety. Development of Fault-Tolerant Systems %A Basili, Victor R. %A Marotta,Frank %A Dangle,Kathleen %A Esker,Linda %A Rus,Ioana %K *SOFTWARE ENGINEERING %K *SYSTEM SAFETY %K COMPUTER PROGRAMMING AND SOFTWARE %K fault tolerant computing %K INDICATORS %K measurement %K REPRINTS %K risk %K SAFETY ENGINEERING %X Software contributes an ever-increasing level of functionality and control in today's systems. This increased use of software can dramatically increase the complexity and time needed to evaluate the safety of a system. Although the actual system safety cannot be verified during its development, measures can reveal early insights into potential safety problems and risks. An approach for developing early software safety measures is presented in this article. The approach and the example software measures presented are based on experience working with the safety engineering group on a large Department of Defense program. %I ABERDEEN TEST CENTER MD %8 2008/10// %G eng %U http://stinet.dtic.mil/oai/oai?&verb=getRecord&metadataPrefix=html&identifier=ADA487120 %0 Conference Paper %B Proceedings of the 7th ACM SIGCOMM conference on Internet measurement %D 2007 %T Measurement and analysis of online social networks %A Mislove,Alan %A Marcon,Massimiliano %A Gummadi,Krishna P. %A Druschel,Peter %A Bhattacharjee, Bobby %K analysis %K measurement %K social networks %X Online social networking sites like Orkut, YouTube, and Flickr are among the most popular sites on the Internet. Users of these sites form a social network, which provides a powerful means of sharing, organizing, and finding content and contacts. The popularity of these sites provides an opportunity to study the characteristics of online social network graphs at large scale. 
Understanding these graphs is important, both to improve current systems and to design new applications of online social networks. This paper presents a large-scale measurement study and analysis of the structure of multiple online social networks. We examine data gathered from four popular online social networks: Flickr, YouTube, LiveJournal, and Orkut. We crawled the publicly accessible user links on each site, obtaining a large portion of each social network's graph. Our data set contains over 11.3 million users and 328 million links. We believe that this is the first study to examine multiple online social networks at scale. Our results confirm the power-law, small-world, and scale-free properties of online social networks. We observe that the indegree of user nodes tends to match the outdegree; that the networks contain a densely connected core of high-degree nodes; and that this core links small groups of strongly clustered, low-degree nodes at the fringes of the network. Finally, we discuss the implications of these structural properties for the design of social network based systems. %B Proceedings of the 7th ACM SIGCOMM conference on Internet measurement %S IMC '07 %I ACM %C New York, NY, USA %P 29 - 42 %8 2007/// %@ 978-1-59593-908-1 %G eng %U http://doi.acm.org/10.1145/1298306.1298311 %R 10.1145/1298306.1298311 %0 Conference Paper %B Proceedings of the 3rd International Workshop on Software Engineering for High Performance Computing Applications %D 2007 %T Performance Measurement of Novice HPC Programmers Code %A Alameh, Rola %A Zazworka, Nico %A Hollingsworth, Jeffrey K %K measurement %K performance %K performance measures %K product metrics %K program analysis %X Performance is one of the key factors of improving productivity in High Performance Computing (HPC). In this paper we discuss current studies in the field of performance measurement of codes captured in classroom experiments for the High Productivity Computing Project (HPCS). We give two examples of measurements introducing two new hypotheses: spending more effort doesn't always result in improvement of performance for novices; the use of higher level MPI functions promises better performance for novices. We also present a tool - the Automated Performance Measurement System (APMS). APMS helps to partially automate the measurement of the performance of a set of parallel programs with several inputs. The design and implementation of the tool is flexible enough to allow other researchers to conduct similar studies. %B Proceedings of the 3rd International Workshop on Software Engineering for High Performance Computing Applications %S SE-HPC '07 %I IEEE Computer Society %P 3– - 3– %8 2007/// %@ 0-7695-2969-0 %G eng %U http://dx.doi.org/10.1109/SE-HPC.2007.4 %R http://dx.doi.org/10.1109/SE-HPC.2007.4 %0 Conference Paper %B Proceedings of the 10th European software engineering conference held jointly with 13th ACM SIGSOFT international symposium on Foundations of software engineering %D 2005 %T Combining self-reported and automatic data to improve programming effort measurement %A Hochstein, Lorin %A Basili, Victor R. %A Zelkowitz, Marvin V %A Hollingsworth, Jeffrey K %A Carver, Jeff %K effort %K experimentation %K human factors %K manual approaches %K measurement %K process metrics %K verification %X Measuring effort accurately and consistently across subjects in a programming experiment can be a surprisingly difficult task. 
In particular, measures based on self-reported data may differ significantly from measures based on data which is recorded automatically from a subject's computing environment. Since self-reports can be unreliable, and not all activities can be captured automatically, a complete measure of programming effort should incorporate both classes of data. In this paper, we show how self-reported and automatic effort can be combined to perform validation and to measure total programming effort. %B Proceedings of the 10th European software engineering conference held jointly with 13th ACM SIGSOFT international symposium on Foundations of software engineering %S ESEC/FSE-13 %I ACM %C Lisbon, Portugal %P 356 - 365 %8 2005/// %@ 1-59593-014-0 %G eng %R 10.1145/1081706.1081762 %0 Conference Paper %B Proceedings of the 2005 workshops on Genetic and evolutionary computation %D 2005 %T Measurements for understanding the behavior of the genetic algorithm in dynamic environments: a case study using the Shaky Ladder Hyperplane-Defined Functions %A Rand, William %A Riolo,Rick %K dynamic environments %K Genetic algorithms %K hyperplane-defined functions %K measurement %X We describe a set of measures to examine the behavior of the Genetic Algorithm (GA) in dynamic environments. We describe how to use both average and best measures to look at performance, satisficability, robustness, and diversity. We use these measures to examine GA behavior with a recently devised dynamic test suite, the Shaky Ladder Hyperplane-Defined Functions (sl-hdf's). This test suite can generate random problems with similar levels of difficulty and provides a platform allowing systematic controlled observations of the GA in dynamic environments. We examine the results of these measures in two different versions of the sl-hdf's, one static and one regularly-changing. We provide explanations for the observations in these two different environments, and give suggestions as to future work. %B Proceedings of the 2005 workshops on Genetic and evolutionary computation %S GECCO '05 %I ACM %C New York, NY, USA %P 32 - 38 %8 2005/// %G eng %U http://doi.acm.org/10.1145/1102256.1102263 %R 10.1145/1102256.1102263 %0 Journal Article %J SIGSOFT Softw. Eng. Notes %D 2005 %T Recovering system specific rules from software repositories %A Williams, Chadd C %A Hollingsworth, Jeffrey K %K data warehouse and repository %K debugging aids %K design %K experimentation %K measurement %K performance %X One of the most successful applications of static analysis based bug finding tools is to search the source code for violations of system-specific rules. These rules may describe how functions interact in the code, how data is to be validated or how an API is to be used. To apply these tools, the developer must encode a rule that must be followed in the source code. The difficulty is that many of these system-specific rules are undocumented and "grow" over time as the source code changes. Most research in this area relies on expert programmers to document these little-known rules. In this paper we discuss a method to automatically recover a subset of these rules, function usage patterns, by mining the software repository. We present a preliminary study that applies our work to a large open source software project. %B SIGSOFT Softw. Eng. 
Notes %V 30 %P 1 - 5 %8 2005/05// %@ 0163-5948 %G eng %N 4 %R 10.1145/1082983.1083144 %0 Conference Paper %B Proceedings of the 2005 ACM/IEEE conference on Supercomputing %D 2005 %T Using Dynamic Tracing Sampling to Measure Long Running Programs %A Odom, Jeffrey %A Hollingsworth, Jeffrey K %A DeRose,Luiz %A Ekanadham, Kattamuri %A Sbaraglia, Simone %K data communications %K design %K experimentation %K measurement %K performance %K tracing %X Detailed cache simulation can be useful to both system developers and application writers to understand an application's performance. However, measuring long running programs can be extremely slow. In this paper we present a technique to use dynamic sampling of trace snippets throughout an application's execution. We demonstrate that our approach improves accuracy compared to sampling a few timesteps at the beginning of execution by judiciously choosing the frequency, as well as the points in the control flow, at which samples are collected. Our approach is validated using the SIGMA tracing and simulation framework for the IBM Power family of processors. %B Proceedings of the 2005 ACM/IEEE conference on Supercomputing %S SC '05 %I IEEE Computer Society %P 59– %8 2005/// %@ 1-59593-061-2 %G eng %U http://dx.doi.org/10.1109/SC.2005.77 %R http://dx.doi.org/10.1109/SC.2005.77 %0 Journal Article %J Image Processing, IEEE Transactions on %D 2002 %T A generic approach to simultaneous tracking and verification in video %A Li,Baoxin %A Chellappa, Rama %K estimated posterior density %K face recognition %K facial feature tracking %K feature extraction %K generic approach %K human face tracking %K hypothesis testing %K image sequence stabilization %K image sequences %K measurement %K measurement vector %K object configuration %K performance evaluation %K posterior density estimation %K probability %K probability density propagation %K road vehicles %K sequential Monte Carlo methods %K state space %K synthetic data %K temporal correspondence problem %K vehicle tracking and verification %K video signal processing %K visual tracking %X A generic approach to simultaneous tracking and verification in video data is presented. The approach is based on posterior density estimation using sequential Monte Carlo methods. Visual tracking, which is in essence a temporal correspondence problem, is solved through probability density propagation, with the density being defined over a proper state space characterizing the object configuration. Verification is realized through hypothesis testing using the estimated posterior density. In its most basic form, verification can be performed as follows. Given a measurement vector Z and two hypotheses H1 and H0, we first estimate posterior probabilities P(H0|Z) and P(H1|Z), and then choose the one with the larger posterior probability as the true hypothesis. Several applications of the approach are illustrated by experiments devised to evaluate its performance. The idea is first tested on synthetic data, and then experiments with real video sequences are presented, illustrating vehicle tracking and verification, human (face) tracking and verification, facial feature tracking, and image sequence stabilization.
%B Image Processing, IEEE Transactions on %V 11 %P 530 - 544 %8 2002/05// %@ 1057-7149 %G eng %N 5 %R 10.1109/TIP.2002.1006400 %0 Conference Paper %B Proceedings of the 2002 ACM SIGPLAN-SIGSOFT workshop on Program analysis for software tools and engineering %D 2002 %T Recompilation for debugging support in a JIT-compiler %A Tikir, Mustafa M. %A Hollingsworth, Jeffrey K %A Lueh,Guei-Yuan %K algorithms %K debug information %K debugging aids %K dynamic recompilation %K field access watch %K java %K java virtual machine debugger interface %K just-in-time compilation %K measurement %K performance %X A static Java compiler converts Java source code into a verifiably secure and compact architecture-neutral intermediate format, called Java byte codes. The Java byte codes can be either interpreted by a Java Virtual Machine or translated into native code by Java Just-In-Time compilers. Static Java compilers embed debug information in the Java class files to be used by the source level debuggers. However, the debug information is generated for architecture independent byte codes and most of the debug information is valid only when the byte codes are interpreted. Translating byte codes into native instructions puts a limitation on the amount of usable debug information that can be used by source level debuggers. In this paper, we present a new technique to generate valid debug information when Just-In-Time compilers are used. Our approach is based on the dynamic recompilation of Java methods by a fast code generator and lazily generates debug information when it is required. We also present three implementations for field watch support in the Java Virtual Machine Debugger Interface to investigate the runtime overhead and code size growth by our approach. %B Proceedings of the 2002 ACM SIGPLAN-SIGSOFT workshop on Program analysis for software tools and engineering %S PASTE '02 %I ACM %C Charleston, South Carolina, USA %P 10 - 17 %8 2002/// %@ 1-58113-479-7 %G eng %R 10.1145/586094.586100 %0 Conference Paper %B Global Telecommunications Conference, 2002. GLOBECOM '02. IEEE %D 2002 %T Scalable peer finding on the Internet %A Banerjee,S. %A Kommareddy,C. %A Bhattacharjee, Bobby %K application peers %K application-layer %K Beaconing %K control overhead reduction %K Internet %K Internet-like topologies %K measurement %K measurement services %K network topology %K peer finding %K peer-finding scheme %K protocols %K scalable solution %K simulations %K Tiers %X We consider the problem of finding nearby application peers over the Internet. We define a new peer-finding scheme (called Tiers) that scales to large application peer groups. Tiers creates a hierarchy of the peers, which allows an efficient and scalable solution to this problem. The scheme can be implemented entirely in the application-layer and does not require the deployment of either additional measurement services or well-known reference landmarks in the network. We present a detailed evaluation of Tiers and compare it to one previously proposed scheme called Beaconing. Through analysis and detailed simulations on 10,000 node Internet-like topologies we show that Tiers achieves comparable or better performance with a significant reduction in control overheads for groups of size 32 or more. %B Global Telecommunications Conference, 2002. GLOBECOM '02. IEEE %V 3 %P 2205 - 2209 vol.3 %8 2002/11// %G eng %R 10.1109/GLOCOM.2002.1189023
Ninth International Conference on %D 2001 %T Finding close friends on the Internet %A Kommareddy,C. %A Shankar,N. %A Bhattacharjee, Bobby %K application-layer %K Beaconing %K beacons %K close friends %K distance measurement points %K Expanding Ring searches %K Internet %K Internet-like topologies %K IP %K measurement %K nearby application-peers %K network topology %K peer-location service %K transport protocols %K Triangulation %K unicast-only solutions %K wide-area testbed %X We consider the problem of finding nearby application-peers (close friends) over the Internet. We focus on unicast-only solutions and introduce a new scheme - Beaconing - for finding peers that are near. Our scheme uses distance measurement points (called beacons) and can be implemented entirely in the application-layer without investing in large infrastructure changes. We present an extensive evaluation of Beaconing and compare it to existing schemes including Expanding Ring searches and Triangulation. Our experiments show that 3-8 beacons are sufficient to provide efficient peer-location service on 10,000 node Internet-like topologies. Further, our results are 2-5 times more accurate than existing techniques. We also present results from an implementation of Beaconing over a non-trivial wide-area testbed. In our experiments, Beaconing is able to efficiently (< 3 KBytes and < 50 packets on average), quickly (< 1 second on average), and accurately (< 20 ms error on average) find nearby peers on the Internet. %B Network Protocols, 2001. Ninth International Conference on %P 301 - 309 %8 2001/11// %G eng %R 10.1109/ICNP.2001.992910 %0 Conference Paper %B Geoscience and Remote Sensing Symposium, 2000. Proceedings. IGARSS 2000. IEEE 2000 International %D 2000 %T Web based progressive transmission for browsing remotely sensed imagery %A Mareboyana,M. %A Srivastava,S. %A JaJa, Joseph F. %K geophysical measurement technique %K geophysical signal processing %K image representation %K land surface %K large image %K measurement %K model-based VQ %K progressive refinement %K region of interest %K remote sensing %K scalar quantization %K terrain mapping %K user specified regions of interest %K vector quantisation %K vector quantization %K wavelet decomposition %K wavelet transforms %K Web based progressive transmission %X This paper describes an image representation technique that entails progressive refinement of user specified regions of interest (ROI) of large images. Progressive refinement to original quality can be accomplished in theory. However, due to the heavy burden on storage resources for the authors' applications, they restrict the refinement to about 25% of the original data resolution. A wavelet decomposition of the data combined with scalar and vector quantization (VQ) of the high frequency components and JPEG/DCT compression of low frequency component is used as the representation framework. Their software will reconstruct the region selected by the user from its wavelet decomposition such that it fills up the preview window with the appropriate subimages at the desired resolution, including full resolution stored for preview. Further refinement from the first preview can be obtained progressively by transmitting high frequency coefficients from low resolution to high resolution which are compressed by a variant of vector quantization called model-based VQ (MVQ).
The user has the option to progressively build up the ROIs until the full stored resolution is reached, or to terminate the transmission at any time during the progressive refinement. %B Geoscience and Remote Sensing Symposium, 2000. Proceedings. IGARSS 2000. IEEE 2000 International %V 2 %P 591 - 593 vol.2 %8 2000/// %G eng %R 10.1109/IGARSS.2000.861640 %0 Conference Paper %B Geoscience and Remote Sensing Symposium, 1999. IGARSS '99 Proceedings. IEEE 1999 International %D 1999 %T A hierarchical data archiving and processing system to generate custom tailored products from AVHRR data %A Kalluri, SNV %A Zhang,Z. %A JaJa, Joseph F. %A Bader, D.A. %A Song,H. %A El Saleous,N. %A Vermote,E. %A Townshend,J.R.G. %K AVHRR %K custom tailored product %K data archiving %K data processing %K geophysical signal processing %K GIS %K hierarchical data archiving %K image processing %K indexing scheme %K infrared image %K land surface %K measurement %K multispectral remote sensing %K optical measurement technique %K PACS %K remote sensing %K terrain mapping %X A novel indexing scheme is described to catalogue satellite data on a pixel basis. The objective of this research is to develop an efficient methodology to archive, retrieve and process satellite data, so that data products can be generated to meet the specific needs of individual scientists. When requesting data, users can specify the spatial and temporal resolution, geographic projection, choice of atmospheric correction, and the data selection methodology. The data processing is done in two stages. Satellite data is calibrated and navigated, and quality flags are appended in the initial processing. This processed data is then indexed and stored. Secondary processing such as atmospheric correction and projection is done after a user requests the data to create custom made products. Dividing the processing into two stages saves time, since the basic processing tasks such as navigation and calibration, which are common to all requests, are not repeated when different users request satellite data. The indexing scheme described can be extended to allow fusion of data sets from different sensors. %B Geoscience and Remote Sensing Symposium, 1999. IGARSS '99 Proceedings. IEEE 1999 International %V 5 %P 2374 - 2376 vol.5 %8 1999/// %G eng %R 10.1109/IGARSS.1999.771514 %0 Conference Paper %B 18th International Conference on Distributed Computing Systems, 1998. Proceedings %D 1998 %T LBF: a performance metric for program reorganization %A Eom, H. %A Hollingsworth, Jeffrey K %K case study %K Computational modeling %K computer network %K Computer science %K Debugging %K distributed processing %K distributed program %K Educational institutions %K Integrated circuit testing %K LBF metric %K load balancing factor %K Load management %K measurement %K NIST %K parallel program %K parallel programming %K performance metric %K program reorganization %K program tuning %K Programming profession %K resource allocation %K software metrics %K software performance evaluation %K US Department of Energy %X We introduce a new performance metric, called Load Balancing Factor (LBF), to assist programmers with evaluating different tuning alternatives. The LBF metric differs from traditional performance metrics since it is intended to measure the performance implications of a specific tuning alternative rather than quantifying where time is spent in the current version of the program.
A second unique aspect of the metric is that it provides guidance about moving work within a distributed or parallel program rather than reducing it. A variation of the LBF metric can also be used to predict the performance impact of changing the underlying network. The LBF metric can be computed incrementally and online during the execution of the program to be tuned. We also present a case study that shows that our metric can predict the actual performance gains accurately for a test suite of six programs. %B 18th International Conference on Distributed Computing Systems, 1998. Proceedings %I IEEE %P 222 - 229 %8 1998/05/26/29 %@ 0-8186-8292-2 %G eng %R 10.1109/ICDCS.1998.679505 %0 Conference Paper %B The 19th IEEE Real-Time Systems Symposium, 1998. Proceedings %D 1998 %T Performance measurement using low perturbation and high precision hardware assists %A Mink, A. %A Salamon, W. %A Hollingsworth, Jeffrey K %A Arunachalam, R. %K Clocks %K Computerized monitoring %K Counting circuits %K Debugging %K Hardware %K hardware performance monitor %K high precision hardware assists %K low perturbation %K measurement %K MPI message passing library %K MultiKron hardware performance monitor %K MultiKron PCI %K NIST %K online performance monitoring tools %K Paradyn parallel performance measurement tools %K PCI bus slot %K performance bug %K performance evaluation %K performance measurement %K program debugging %K program testing %K real-time systems %K Runtime %K Timing %X We present the design and implementation of MultiKron PCI, a hardware performance monitor that can be plugged into any computer with a free PCI bus slot. The monitor provides a series of high-resolution timers, and the ability to monitor the utilization of the PCI bus. We also demonstrate how the monitor can be integrated with online performance monitoring tools such as the Paradyn parallel performance measurement tools to improve the overhead of key timer operations by a factor of 25. In addition, we present a series of case studies using the MultiKron hardware performance monitor to measure and tune high-performance parallel computing applications. By using the monitor, we were able to find and correct a performance bug in a popular implementation of the MPI message passing library that caused some communication primitives to run at one half their potential speed. %B The 19th IEEE Real-Time Systems Symposium, 1998. Proceedings %I IEEE %P 379 - 388 %8 1998/12/02/4 %@ 0-8186-9212-X %G eng %R 10.1109/REAL.1998.739771 %0 Journal Article %J Software Engineering, IEEE Transactions on %D 1997 %T Comments on "Towards a framework for software measurement validation" %A Morasca,S. %A Briand,L.C. %A Basili, Victor R. %A Weyuker,E.J. %A Zelkowitz, Marvin V %A Kitchenham,B. %A Lawrence Pfleeger,S. %A Fenton,N. %K measurement %K program testing %K program verification %K software measurement validation %K software metrics %X A view of software measurement that disagrees with the model presented by Kitchenham, Pfleeger, and Fenton (1995) is given. Whereas Kitchenham et al. argue that properties used to define measures should not constrain the scale type of measures, the authors contend that that is an inappropriate restriction. In addition, a misinterpretation of Weyuker's (1988) properties is noted.
%B Software Engineering, IEEE Transactions on %V 23 %P 187 - 189 %8 1997/03// %@ 0098-5589 %G eng %N 3 %R 10.1109/32.585506 %0 Journal Article %J Information and Software Technology %D 1997 %T Experimental validation in software engineering %A Zelkowitz, Marvin V %A Wallace,Dolores %K data collection %K Evaluation %K experimentation %K measurement %X Although experimentation is an accepted approach toward scientific validation in most scientific disciplines, it only recently has gained acceptance within the software development community. In this paper we discuss a 12-model classification scheme for performing experimentation within the software development domain. We evaluate over 600 published papers in the computer science literature and over one hundred papers from other scientific disciplines in order to determine: (1) how well the computer science community is succeeding at validating its theories, and (2) how computer science compares to other scientific disciplines. %B Information and Software Technology %V 39 %P 735 - 743 %8 1997/// %@ 0950-5849 %G eng %U http://www.sciencedirect.com/science/article/pii/S0950584997000256 %N 11 %R 10.1016/S0950-5849(97)00025-6 %0 Conference Paper %B , 1997 International Conference on Parallel Architectures and Compilation Techniques., 1997. Proceedings %D 1997 %T MDL: a language and compiler for dynamic program instrumentation %A Hollingsworth, Jeffrey K %A Niam, O. %A Miller, B. P %A Zhichen Xu %A Goncalves,M. J.R %A Ling Zheng %K Alpha architecture %K application program %K application program interfaces %K Application software %K compiler generators %K Computer science %K dynamic code generation %K Dynamic compiler %K dynamic program instrumentation %K Educational institutions %K files %K instrumentation code %K Instruments %K MDL %K measurement %K message channels %K Message passing %K Metric Description Language %K modules %K nodes %K Operating systems %K optimising compilers %K PA-RISC %K Paradyn Parallel Performance Tools %K Parallel architectures %K parallel programming %K performance data %K platform independent descriptions %K Power 2 architecture %K Power generation %K procedures %K program debugging %K Program processors %K running programs %K Runtime %K software metrics %K SPARC %K Specification languages %K x86 architecture %X We use a form of dynamic code generation, called dynamic instrumentation, to collect data about the execution of an application program. Dynamic instrumentation allows us to instrument running programs to collect performance and other types of information. The instrumentation code is generated incrementally and can be inserted and removed at any time. Our instrumentation currently runs on the SPARC, PA-RISC, Power 2, Alpha, and x86 architectures. Specification of what data to collect are written in a specialized language called the Metric Description Language, that is part of the Paradyn Parallel Performance Tools. This language allows platform independent descriptions of how to collect performance data. It also provides a concise way to specify, how to constrain performance data to particular resources such as modules, procedures, nodes, files, or message channels (or combinations of these resources). We also describe the details of how we weave instrumentation into a running program %B , 1997 International Conference on Parallel Architectures and Compilation Techniques., 1997. 
Proceedings %I IEEE %P 201 - 212 %8 1997/11/10/14 %@ 0-8186-8090-3 %G eng %R 10.1109/PACT.1997.644016 %0 Journal Article %J Computer %D 1995 %T The Paradyn parallel performance measurement tool %A Miller, B. P %A Callaghan, M. D %A Cargille, J. M %A Hollingsworth, Jeffrey K %A Irvin, R. B %A Karavanic, K. L %A Kunchithapadam, K. %A Newhall, T. %K Aerodynamics %K Automatic control %K automatic instrumentation control %K Debugging %K dynamic instrumentation %K flexible performance information %K high level languages %K insertion %K Instruments %K large-scale parallel program %K Large-scale systems %K measurement %K Paradyn parallel performance measurement tool %K Parallel machines %K parallel programming %K Performance Consultant %K Programming profession %K scalability %K software performance evaluation %K software tools %X Paradyn is a tool for measuring the performance of large-scale parallel programs. Our goal in designing a new performance tool was to provide detailed, flexible performance information without incurring the space (and time) overhead typically associated with trace-based tools. Paradyn achieves this goal by dynamically instrumenting the application and automatically controlling this instrumentation in search of performance problems. Dynamic instrumentation lets us defer insertion until the moment it is needed (and remove it when it is no longer needed); Paradyn's Performance Consultant decides when and where to insert instrumentation %B Computer %V 28 %P 37 - 46 %8 1995/11// %@ 0018-9162 %G eng %N 11 %R 10.1109/2.471178 %0 Report %D 1994 %T Software Process Improvement in the NASA Software Engineering Laboratory. %A McGarry,Frank %A Pajerski,Rose %A Page,Gerald %A Waligora,Sharon %A Basili, Victor R. %K *AWARDS %K *SOFTWARE ENGINEERING %K *SYSTEMS ANALYSIS %K *WORK MEASUREMENT %K ADMINISTRATION AND MANAGEMENT %K COMPUTER PROGRAMMING AND SOFTWARE %K COMPUTER PROGRAMS %K COMPUTERS %K data acquisition %K ENVIRONMENTS %K EXPERIMENTAL DATA %K GROUND SUPPORT. %K measurement %K Organizations %K PE63756E %K Production %K SOFTWARE PROCESS IMPROVEMENT %K SPI(SOFTWARE PROCESS IMPROVEMENT) %K SPN-19950120014 %X The Software Engineering Laboratory (SEL) was established in 1976 for the purpose of studying and measuring software processes with the intent of identifying improvements that could be applied to the production of ground support software within the Flight Dynamics Division (FDD) at the National Aeronautics and Space Administration (NASA)/Goddard Space Flight Center (GSFC). The SEL has three member organizations: NASA/GSFC, the University of Maryland, and Computer Sciences Corporation (CSC). The concept of process improvement within the SEL focuses on the continual understanding of both process and product as well as goal-driven experimentation and analysis of process change within a production environment. %I CARNEGIE-MELLON UNIV PITTSBURGH PA SOFTWARE ENGINEERING INSTITUTE %8 1994/12// %G eng %U http://stinet.dtic.mil/oai/oai?&verb=getRecord&metadataPrefix=html&identifier=ADA289912 %0 Journal Article %J IEEE Transactions on Parallel and Distributed Systems %D 1990 %T IPS-2: the second generation of a parallel program measurement system %A Miller, B. P %A Clark, M. %A Hollingsworth, Jeffrey K %A Kierstead, S. %A Lim,S. -S %A Torzewski, T. 
%K 4.3BSD UNIX systems %K automatic guidance techniques %K Automatic testing %K Charlotte distributed operating system %K CPA %K DECstation %K design concepts %K distributed programs %K graphical user interface %K Graphical user interfaces %K Instruments %K interactive program analysis %K IPS-2 %K measurement %K message systems %K network operating systems %K Operating systems %K parallel program measurement system %K parallel programming %K parallel programs %K Performance analysis %K performance analysis techniques %K performance evaluation %K performance measurement system %K Power system modeling %K program bottlenecks %K program diagnostics %K Programming profession %K semantics %K Sequent Symmetry multiprocessor %K shared-memory systems %K software tools %K Springs %K Sun %K Sun 4 %K Unix %K VAX %X IPS, a performance measurement system for parallel and distributed programs, is currently running on its second implementation. IPS's model of parallel programs uses knowledge about the semantics of a program's structure to provide two important features. First, IPS provides a large amount of performance data about the execution of a parallel program, and this information is organized so that access to it is easy and intuitive. Secondly, IPS provides performance analysis techniques that help to guide the programmer automatically to the location of program bottlenecks. The first implementation of IPS was a testbed for the basic design concepts, providing experience with a hierarchical program and measurement model, interactive program analysis, and automatic guidance techniques. It was built on the Charlotte distributed operating system. The second implementation, IPS-2, extends the basic system with new instrumentation techniques, an interactive and graphical user interface, and new automatic guidance analysis techniques. This implementation runs on 4.3BSD UNIX systems, on the VAX, DECstation, Sun 4, and Sequent Symmetry multiprocessor %B IEEE Transactions on Parallel and Distributed Systems %V 1 %P 206 - 217 %8 1990/04// %@ 1045-9219 %G eng %N 2 %R 10.1109/71.80132 %0 Journal Article %J Computer Languages %D 1988 %T Program complexity using Hierarchical Abstract Computers %A Bail,William G %A Zelkowitz, Marvin V %K CASE tools %K complexity %K ENVIRONMENTS %K measurement %K Prime programs %X A model of program complexity is introduced which combines structural control flow measures with data flow measures. This complexity measure is based upon the prime program decomposition of a program written for a Hierarchical Abstract Computer. It is shown that this measure is consistent with the ideas of information hiding and data abstraction. Because this measure is sensitive to the linear form of a program, it can be used to measure different concrete representations of the same algorithm, as in a structured and an unstructured version of the same program. Application of the measure as a model of system complexity is given for “upstream” processes (e.g. specification and design phases) where there is no source program to measure by other techniques. %B Computer Languages %V 13 %P 109 - 123 %8 1988/// %@ 0096-0551 %G eng %U http://www.sciencedirect.com/science/article/pii/0096055188900197 %N 3–4 %R 10.1016/0096-0551(88)90019-7 %0 Report %D 1985 %T Research in Programming Languages and Software Engineering. %A Gannon,John %A Basili, Victor R. 
%A Zelkowitz, Marvin V %A Yeh,Raymond %K *BEARINGS %K *COMPUTER PROGRAMS %K *ESTIMATES %K *GUIDANCE %K *KALMAN FILTERING %K *LINEAR SYSTEMS %K *STOCHASTIC PROCESSES %K COMPUTER PROGRAMMING AND SOFTWARE %K GAIN %K identification %K measurement %K programming languages %K STATISTICS AND PROBABILITY %K SYSTEMS ENGINEERING. %K TARGET DIRECTION, RANGE AND POSITION FINDING %X During the past year three research papers were written and two published conference presentations were given. Titles of the published research articles are: A Stochastic Analysis of a Modified Gain Extended Kalman Filter with Applications to Estimation with Bearings only Measurements; The Modified Gain Extended Kalman Kilter and Parameter Identification in Linear Systems and Maximum Information Guidance for Homing Missiles. %I Department of Computer Science, University of Maryland, College Park %8 1985/12/24/ %G eng %U http://stinet.dtic.mil/oai/oai?&verb=getRecord&metadataPrefix=html&identifier=ADA186269