%0 Journal Article %J Journal of Microbiological Methods %D 2017 %T Application of a paper based device containing a new culture medium to detect Vibrio cholerae in water samples collected in Haiti %A Briquaire, Romain %A Rita R Colwell %A Boncy, Jacques %A Rossignol, Emmanuel %A Dardy, Aline %A Pandini, Isabelle %A Villeval, François %A Machuron, Jean-Louis %A Huq, Anwar %A Rashed, Shah %A Vandevelde, Thierry %A Rozand, Christine %B Journal of Microbiological Methods %V 133 %P 23 - 31 %8 Jan-02-2017 %G eng %U https://www.sciencedirect.com/science/article/pii/S0167701216303578?via%3Dihub %! Journal of Microbiological Methods %R 10.1016/j.mimet.2016.12.014 %0 Journal Article %J The American Journal of Tropical Medicine and Hygiene %D 2017 %T Assessment of Risk of Cholera in Haiti following Hurricane Matthew %A Huq, Anwar %A Anwar, Rifat %A Rita R Colwell %A McDonald, Michael D. %A Khan, Rakib %A Jutla, Antarpreet %A Akanda, Shafqat %X Damage to the inferior and fragile water and sanitation infrastructure of Haiti after Hurricane Matthew has created an urgent public health emergency in terms of the likelihood of cholera occurring in the human population. Using satellite-derived data on precipitation, gridded air temperature, and hurricane path, together with information on water and sanitation (WASH) infrastructure, we tracked changing environmental conditions conducive for growth of pathogenic vibrios. Based on these data, we predicted and validated the likelihood of cholera cases occurring after the hurricane. The risk of cholera in the southwestern part of Haiti has remained relatively high from November 2016 to the present. Findings of this study provide a contemporary process for monitoring ground conditions that can guide public health intervention to control cholera in the human population by providing access to vaccines and safe WASH facilities.
Assuming current social and behavioral patterns remain constant, it is recommended that WASH infrastructure be improved and considered a priority, especially before the 2017 rainy season. %B The American Journal of Tropical Medicine and Hygiene %V 97 %P 896 - 903 %8 Jul-09-2017 %G eng %U http://www.ajtmh.org/content/journals/10.4269/ajtmh.17-0048 %N 3 %R 10.4269/ajtmh.17-0048 %0 Journal Article %J Letters in Applied Microbiology %D 2016 %T Are natural reservoirs important for cholera surveillance? The case of an outbreak in a Brazilian estuary %A Martinelli Filho, J.E. %A Lopes, R.M. %A Rivera, I.N.G. %A Rita R Colwell %X Paranaguá Bay is one of the largest estuarine systems on the Southern Brazilian coast. The only recorded cholera outbreak in this region since the early 20th century occurred in 1999 and resulted in 467 cases and at least three reported deaths in a population of approx. 150 000 people. This short communication reports historical, unpublished data related to that outbreak. Water, zooplankton and bivalve samples were collected and evaluated using direct fluorescence assay to determine whether Vibrio cholerae serogroups O1 and O139 were present in the estuarine system at that time. Most of the water (83%) and zooplankton samples (75%) were positive for V. cholerae O1, while V. cholerae O139 was not detected. Shellfish (Mytella sp.) were also positive for V. cholerae O1. These results indicate that the estuary, including biological vectors such as copepods and bivalves, comprises an important reservoir of V. cholerae O1 and a probable waterborne pathway for the disease, in addition to contamination with untreated sewage. %B Letters in Applied Microbiology %P 183 - 188 %8 Jan-09-2016 %G eng %U https://onlinelibrary.wiley.com/doi/10.1111/lam.12614 %N 3 %!
Lett Appl Microbiol %R 10.1111/lam.12614 %0 Book Section %B Advances in Cryptology - ASIACRYPT 2013 %D 2013 %T Adaptive and Concurrent Secure Computation from New Adaptive, Non-malleable Commitments %A Dana Dachman-Soled %A Malkin, Tal %A Raykova, Mariana %A Venkitasubramaniam, Muthuramakrishnan %E Sako, Kazue %E Sarkar, Palash %K Algorithm Analysis and Problem Complexity %K Applications of Mathematics %K Data Encryption %K Discrete Mathematics in Computer Science %K Management of Computing and Information Systems %K Systems and Data Security %X We present a unified approach for obtaining general secure computation that achieves adaptive-Universally Composable (UC)-security. Using our approach we essentially obtain all previous results on adaptive concurrent secure computation, both in relaxed models (e.g., quasi-polynomial time simulation) and in trusted setup models (e.g., the CRS model, the imperfect CRS model). This provides conceptual simplicity and insight into what is required for adaptive and concurrent security, as well as yielding improvements to set-up assumptions and/or computational assumptions in known models. Additionally, we provide the first constructions of concurrent secure computation protocols that are adaptively secure in the timing model, and the non-uniform simulation model. As a corollary we also obtain the first adaptively secure multiparty computation protocol in the plain model that is secure under bounded-concurrency. Conceptually, our approach can be viewed as an adaptive analogue to the recent work of Lin, Pass and Venkitasubramaniam [STOC ‘09], who considered only non-adaptive adversaries. Their main insight was that the non-malleability requirement could be decoupled from the simulation requirement to achieve UC-security. A main conceptual contribution of this work is, quite surprisingly, that this is still the case even when considering adaptive security.
A key element in our construction is a commitment scheme that satisfies a strong definition of non-malleability. Our new primitive of concurrent equivocal non-malleable commitments, intuitively, guarantees that even when a man-in-the-middle adversary observes concurrent equivocal commitments and decommitments, the binding property of the commitments continues to hold for commitments made by the adversary. This definition is stronger than previous ones, and may be of independent interest. Previous constructions satisfying our definition have been given in setup models, but either require the existence of stronger encryption schemes such as CCA-secure encryption or require independent “trapdoors” provided by the setup for every pair of parties to ensure non-malleability. A main technical contribution of this work is to provide a construction that eliminates these requirements and requires only a single trapdoor. %B Advances in Cryptology - ASIACRYPT 2013 %S Lecture Notes in Computer Science %I Springer Berlin Heidelberg %P 316 - 336 %8 2013/01/01/ %@ 978-3-642-42032-0, 978-3-642-42033-7 %G eng %U http://link.springer.com/chapter/10.1007/978-3-642-42033-7_17 %0 Conference Paper %B CHI 2013 To Appear %D 2013 %T Age-Related Differences in Performance with Touchscreens Compared to Traditional Mouse Input %A Findlater,L. %A Jon Froehlich %A Fattal, K. %A Wobbrock,J.O. %A Dastyar, T. %X Despite the apparent popularity of touchscreens among older adults, little is known about psychomotor performance with these devices. We compare performance between older adults and younger adults in four desktop and touchscreen tasks: pointing, dragging, crossing and steering. On the touchscreen, we also examine pinch zoom. Our results show that while older adults were significantly slower than younger adults in general, the touchscreen reduced the performance gap relative to the desktop and mouse.
Indeed, the touchscreen resulted in a significant movement time reduction of 35% over the mouse for older adults, compared to only 16% for younger adults. Error rates also decreased. %B CHI 2013 To Appear %8 2013 %G eng %0 Journal Article %J Human Language Technologies: The 2013 Annual Conference of the North American Chapter of the Association for Computational Linguistics %D 2013 %T Argviz: Interactive Visualization of Topic Dynamics in Multi-party Conversations %A Nguyen, V A %A Hu, Y %A Jordan Boyd-Graber %X We introduce an efficient, interactive framework—Argviz—for experts to analyze the dynamic topical structure of multi-party conversations. Users inject their needs, expertise, and insights into models via iterative topic refinement. The refined topics feed into a ... %B Human Language Technologies: The 2013 Annual Conference of the North American Chapter of the Association for Computational Linguistics %8 2013/00/01 %G eng %U https://www.aclweb.org/anthology-new/N/N13/N13-3.pdf#page=42 %0 Journal Article %J The ISME Journal %D 2012 %T An additional step in the transmission of Yersinia pestis? %A Easterday, W Ryan %A Kausrud, Kyrre L %A Star, Bastiaan %A Heier, Lise %A Haley, Bradd J %A Ageyev, Vladimir %A Rita R Colwell %A Stenseth, Nils Chr %X Plague, caused by the bacterium Yersinia pestis, is a mammalian vector-borne disease, transmitted by fleas that serve as the vector between rodent hosts. For many pathogens, including Y. pestis, there are strong evolutionary pressures that lead to a reduction in ‘useless genes’, with only those retained that reflect function in the specific environment inhabited by the pathogen. Genetic traits critical for survival and transmission between two environments, the rodent and the flea, are conserved in epizootic/epidemic plague strains.
However, there are genes that remain conserved for which no function in the flea–rodent cycle has yet been observed, indicating an additional environment may exist in the transmission cycle of plague. Here, we present evidence for highly conserved genes that suggests a role in the persistence of Y. pestis after death of its host. Furthermore, maintenance of these genes points to Y. pestis traversing a post-mortem path between, and possibly within, epizootic periods, offering insight into mechanisms that may allow Y. pestis an alternative route of transmission in the natural environment. %B The ISME Journal %P 231 - 236 %8 Jan-02-2012 %G eng %U http://www.nature.com/articles/ismej2011105 %N 2 %! ISME J %R 10.1038/ismej.2011.105 %0 Book %D 2012 %T Advances in Computers %A Memon, Atif M. %K Computers / Programming / General %K Computers / Software Development & Engineering / General %K Mathematics / General %X Since its first volume in 1960, Advances in Computers has presented detailed coverage of innovations in computer hardware, software, theory, design, and applications. It has also provided contributors with a medium in which they can explore their subjects in greater depth and breadth than journal articles usually allow. As a result, many articles have become standard references that continue to be of significant, lasting value in this rapidly expanding field. In-depth surveys and tutorials on new computer technology; well-known authors and researchers in the field; extensive bibliographies with most chapters; many of the volumes are devoted to single themes or subfields of computer science. %I Academic Press %8 2012/05/01/ %@ 9780123964786 %G eng %0 Journal Article %J BMC bioinformatics %D 2012 %T AGORA: Assembly Guided by Optical Restriction Alignment %A Lin, H.C. %A Goldstein, S. %A Mendelowitz, L. %A Zhou, S. %A Wetzel, J. %A Schwartz, D.C.
%A Pop, Mihai %X Genome assembly is difficult due to repeated sequences within the genome, which create ambiguities and cause the final assembly to be broken up into many separate sequences (contigs). Long range linking information, such as mate-pairs or mapping data, is necessary to help assembly software resolve repeats, thereby leading to a more complete reconstruction of genomes. Prior work has used optical maps for validating assemblies and scaffolding contigs, after an initial assembly has been produced. However, optical maps have not previously been used within the genome assembly process. Here, we use optical map information within the popular de Bruijn graph assembly paradigm to eliminate paths in the de Bruijn graph which are not consistent with the optical map and help determine the correct reconstruction of the genome. We developed a new algorithm called AGORA: Assembly Guided by Optical Restriction Alignment. AGORA is the first algorithm to use optical map information directly within the de Bruijn graph framework to help produce an accurate assembly of a genome that is consistent with the optical map information provided. Our simulations on bacterial genomes show that AGORA is effective at producing assemblies closely matching the reference sequences. Additionally, we show that noise in the optical map can have a strong impact on the final assembly quality for some complex genomes, and we also measure how various characteristics of the starting de Bruijn graph may impact the quality of the final assembly. Lastly, we show that a proper choice of restriction enzyme for the optical map may substantially improve the quality of the final assembly. Our work shows that optical maps can be used effectively to assemble genomes within the de Bruijn graph assembly framework.
Our experiments also provide insights into the characteristics of the mapping data that most affect the performance of our algorithm, indicating the potential benefit of more accurate optical mapping technologies, such as nano-coding. %B BMC bioinformatics %V 13 %8 2012 %G eng %N 1 %0 Journal Article %J J Bacteriol %D 2012 %T Archaeosortases and exosortases are widely distributed systems linking membrane transit with posttranslational modification. %A Haft, Daniel H %A Payne, Samuel H %A Jeremy D Selengut %K Amino Acid Sequence %K Aminoacyltransferases %K Archaeal Proteins %K Bacterial Proteins %K Cell Membrane %K Cysteine Endopeptidases %K Gene Expression Regulation, Archaeal %K Gene Expression Regulation, Bacterial %K Gene Expression Regulation, Enzymologic %K Molecular Sequence Data %K Protein Processing, Post-Translational %X
Multiple new prokaryotic C-terminal protein-sorting signals were found that reprise the tripartite architecture shared by LPXTG and PEP-CTERM: motif, TM helix, basic cluster. Defining hidden Markov models were constructed for all. PGF-CTERM occurs in 29 archaeal species, some of which have more than 50 proteins that share the domain. PGF-CTERM proteins include the major cell surface protein in Halobacterium, a glycoprotein with a partially characterized diphytanylglyceryl phosphate linkage near its C terminus. Comparative genomics identifies a distant exosortase homolog, designated archaeosortase A (ArtA), as the likely protein-processing enzyme for PGF-CTERM. Proteomics suggests that the PGF-CTERM region is removed. Additional systems include VPXXXP-CTERM/archaeosortase B in two of the same archaea and PEF-CTERM/archaeosortase C in four others. Bacterial exosortases often fall into subfamilies that partner with very different cohorts of extracellular polymeric substance biosynthesis proteins; several species have multiple systems. Variant systems include the VPDSG-CTERM/exosortase C system unique to certain members of the phylum Verrucomicrobia, VPLPA-CTERM/exosortase D in several alpha- and deltaproteobacterial species, and a dedicated (single-target) VPEID-CTERM/exosortase E system in alphaproteobacteria. Exosortase-related families XrtF in the class Flavobacteria and XrtG in Gram-positive bacteria mark distinctive conserved gene neighborhoods. A picture emerges of an ancient and now well-differentiated superfamily of deeply membrane-embedded protein-processing enzymes. Their target proteins are destined to transit cellular membranes during their biosynthesis, during which most undergo additional posttranslational modifications such as glycosylation.
%B J Bacteriol %V 194 %P 36-48 %8 2012 Jan %G eng %N 1 %R 10.1128/JB.06026-11 %0 Conference Paper %B LEET'12 Proceedings of the 5th USENIX conference on Large-Scale Exploits and Emergent Threats %D 2012 %T Ask WINE: Are We Safer Today? Evaluating Operating System Security Through Big Data Analysis %A Tudor Dumitras %A Efstathopoulos, Petros %X The Internet can be a dangerous place: 800,000 new malware variants are detected each day, and this number is growing at an exponential rate--driven by the quest for economic gains. However, over the past ten years operating-system vendors have introduced a number of security technologies that aim to make exploits harder and to reduce the attack surface of the platform. Faced with these two conflicting trends, it is difficult for end-users to determine what techniques make them safer from Internet attacks. In this position paper, we argue that to answer this question conclusively we must analyze field data collected on real hosts that are targeted by attacks--e.g., the approximately 50 million records of anti-virus telemetry available through Symantec's WINE platform. Such studies can characterize the factors that drive the production of malware, can help us understand the impact of security technologies in the real world and can suggest new security metrics, derived from field observations rather than small lab experiments, indicating how susceptible to attacks a computing platform may be. %B LEET'12 Proceedings of the 5th USENIX conference on Large-Scale Exploits and Emergent Threats %S LEET'12 %I USENIX Association %P 11 - 11 %8 2012/04/24/ %G eng %U http://dl.acm.org/citation.cfm?id=2228340.2228356 %0 Journal Article %J Autonomous Robots %D 2012 %T Automated synthesis of action selection policies for unmanned vehicles operating in adverse environments %A Svec,Petr %A Gupta, Satyandra K. 
%K Computer science %X We address the problem of automated action selection policy synthesis for unmanned vehicles operating in adverse environments. We introduce a new evolutionary computation-based approach in which an initial version of the policy is automatically generated and then gradually refined by detecting and fixing its shortcomings. The synthesis technique consists of the automated extraction of the vehicle’s exception states and Genetic Programming (GP) for automated composition and optimization of corrective sequences of commands in the form of macro-actions to be applied locally. The focus is specifically on automated synthesis of a policy for an Unmanned Surface Vehicle (USV) to efficiently block the advancement of an intruder boat toward a valuable target. This task requires the USV to utilize reactive planning complemented by short-term forward planning to generate specific maneuvers for blocking. The intruder is human-competitive and exhibits a deceptive behavior so that the USV cannot exploit regularity in its attacking behavior. We compared the performance of a hand-coded blocking policy to the performance of a policy that was automatically synthesized. Our results show that the performance of the automatically generated policy exceeds the performance of the hand-coded policy and thus demonstrates the feasibility of the proposed approach. %B Autonomous Robots %V 32 %P 149 - 164 %8 2012/// %@ 0929-5593 %G eng %U http://www.springerlink.com/content/664k84x080868141/abstract/ %N 2 %R 10.1007/s10514-011-9268-6 %0 Journal Article %J IEEE Transactions on Information Forensics & Security %D 2012 %T Automatic Authentication of Banknotes %A Roy,Ankush %A Halder,Biswajit %A Garain,Utpal %A David Doermann %X In this paper, we address the problem of the automatic authentication of paper currency. Indian banknotes are used to show how a system can be developed for discriminating counterfeit notes from genuine notes.
Image processing and pattern recognition techniques are designed to carefully analyze embedded security features. Experiments conducted on forensic samples show that a high-precision, low-cost machine can be developed to address this problem. The analysis of current security features’ ability to protect against counterfeiting also suggests topics that should be considered in designing future currency notes. %B IEEE Transactions on Information Forensics & Security %8 2012/// %G eng %0 Conference Paper %B Technical Communications of the 27th International Conference on Logic Programming (ICLP'11) %D 2011 %T Abduction in Annotated Probabilistic Temporal Logic %A Molinaro,Cristian %A Sliva,Amy %A V.S. Subrahmanian %X Annotated Probabilistic Temporal (APT) logic programs are a form of logic programs that allow users to state (or systems to automatically learn) rules of the form ``formula G becomes true K time units after formula F became true with L to U% probability.'' In this paper, we develop a theory of abduction for APT logic programs. Specifically, given an APT logic program Pi, a set of formulas H that can be ``added'' to Pi, and a goal G, is there a subset S of H such that Pi \cup S is consistent and entails the goal G? We study the complexity of the Basic APT Abduction Problem (BAAP). We then leverage a geometric characterization of BAAP to suggest a set of pruning strategies when solving BAAP and use these intuitions to develop a sound and complete algorithm. %B Technical Communications of the 27th International Conference on Logic Programming (ICLP'11) %S Leibniz International Proceedings in Informatics (LIPIcs) %I Schloss Dagstuhl--Leibniz-Zentrum fuer Informatik %V 11 %P 240 - 250 %8 2011/// %@ 978-3-939897-31-6 %G eng %R http://dx.doi.org/10.4230/LIPIcs.ICLP.2011.240 %0 Journal Article %J ACM Transactions on Accessible Computing (TACCESS) %D 2011 %T Ability-based design: Concept, principles and examples %A Wobbrock,J.O. %A Kane,S.K.
%A Gajos,K.Z. %A Harada,S. %A Jon Froehlich %B ACM Transactions on Accessible Computing (TACCESS) %V 3 %8 2011 %G eng %N 3 %0 Journal Article %J arXiv:1105.1743 [cs] %D 2011 %T Abstracting Abstract Machines: A Systematic Approach to Higher-Order Program Analysis %A David Van Horn %A Might, Matthew %K Computer Science - Programming Languages %K F.3.2 %K F.4.1 %X Predictive models are fundamental to engineering reliable software systems. However, designing conservative, computable approximations for the behavior of programs (static analyses) remains a difficult and error-prone process for modern high-level programming languages. What analysis designers need is a principled method for navigating the gap between semantics and analytic models: analysis designers need a method that tames the interaction of complex languages features such as higher-order functions, recursion, exceptions, continuations, objects and dynamic allocation. We contribute a systematic approach to program analysis that yields novel and transparently sound static analyses. Our approach relies on existing derivational techniques to transform high-level language semantics into low-level deterministic state-transition systems (with potentially infinite state spaces). We then perform a series of simple machine refactorings to obtain a sound, computable approximation, which takes the form of a non-deterministic state-transition systems with finite state spaces. The approach scales up uniformly to enable program analysis of realistic language features, including higher-order functions, tail calls, conditionals, side effects, exceptions, first-class continuations, and even garbage collection. 
%B arXiv:1105.1743 [cs] %8 2011/05/09/ %G eng %U http://arxiv.org/abs/1105.1743 %0 Journal Article %J BMC Evolutionary Biology %D 2011 %T Accelerated evolution of 3'avian FOXE1 genes, and thyroid and feather specific expression of chicken FoxE1 %A Yaklichkin,Sergey Yu %A Darnell,Diana K %A Pier,Maricela V %A Antin,Parker B %A Hannenhalli, Sridhar %X The forkhead transcription factor gene E1 (FOXE1) plays an important role in regulation of thyroid development, palate formation and hair morphogenesis in mammals. However, avian FOXE1 genes have not been characterized and as such, codon evolution of FOXE1 orthologs in a broader evolutionary context of mammals and birds is not known. %B BMC Evolutionary Biology %V 11 %P 302 - 302 %8 2011/10/15/ %@ 1471-2148 %G eng %U http://www.biomedcentral.com/1471-2148/11/302 %N 1 %R 10.1186/1471-2148-11-302 %0 Journal Article %J BMC Genomics %D 2011 %T Accurate and fast estimation of taxonomic profiles from metagenomic shotgun sequences %A Liu,Bo %A Gibbons,Theodore %A Ghodsi,Mohammad %A Treangen,Todd %A Pop, Mihai %X A major goal of metagenomics is to characterize the microbial composition of an environment. The most popular approach relies on 16S rRNA sequencing, however this approach can generate biased estimates due to differences in the copy number of the gene between even closely related organisms, and due to PCR artifacts. The taxonomic composition can also be determined from metagenomic shotgun sequencing data by matching individual reads against a database of reference sequences. One major limitation of prior computational methods used for this purpose is the use of a universal classification threshold for all genes at all taxonomic levels. 
%B BMC Genomics %V 12 %P S4 - S4 %8 2011/07/27/ %@ 1471-2164 %G eng %U http://www.biomedcentral.com/1471-2164/12/S2/S4 %N Suppl 2 %R 10.1186/1471-2164-12-S2-S4 %0 Journal Article %J Genome Biology %D 2011 %T Accurate proteome-wide protein quantification from high-resolution 15N mass spectra %A Zia Khan %A Amini, Sasan %A Bloom, Joshua S. %A Ruse, Cristian %A Caudy, Amy A. %A Kruglyak, Leonid %A Singh, Mona %A Perlman, David H. %A Tavazoie, Saeed %X In quantitative mass spectrometry-based proteomics, the metabolic incorporation of a single source of 15N-labeled nitrogen has many advantages over using stable isotope-labeled amino acids. However, the lack of a robust computational framework for analyzing the resulting spectra has impeded wide use of this approach. We have addressed this challenge by introducing a new computational methodology for analyzing 15N spectra in which quantification is integrated with identification. Application of this method to an Escherichia coli growth transition reveals significant improvement in quantification accuracy over previous methods. PMID: 22182234 %B Genome Biology %V 12 %8 2011/12/19/ %@ 1465-6906 %G eng %U http://genomebiology.com/2011/12/12/R122/abstract %N 12 %0 Journal Article %J SIAM Journal on Computing %D 2011 %T On Achieving the "Best of Both Worlds" in Secure Multiparty Computation %A Ishai,Y. %A Katz, Jonathan %A Kushilevitz,E. %A Lindell,Y. %A Petrank,E. %X Two settings are traditionally considered for secure multiparty computation, depending on whether or not a majority of the parties are assumed to be honest. Existing protocols that assume an honest majority provide “full security” (and, in particular, guarantee output delivery and fairness) when this assumption holds, but are completely insecure if this assumption is violated. On the other hand, known protocols tolerating an arbitrary number of corruptions do not guarantee fairness or output delivery even if only a single party is dishonest.
It is natural to wonder whether it is possible to achieve the “best of both worlds”: namely, a single protocol that simultaneously achieves the best possible security in both the above settings. Here, we rule out this possibility (at least for general functionalities) and show some positive results regarding what can be achieved. %B SIAM Journal on Computing %V 40 %P 122 - 122 %8 2011/// %G eng %N 1 %0 Conference Paper %B Image Processing (ICIP), 2011 18th IEEE International Conference on %D 2011 %T Action recognition using Partial Least Squares and Support Vector Machines %A Ramadan,S. %A Davis, Larry S. %K action recognition %K feature extraction %K image recognition %K least squares approximations %K regression analysis %K support vector machines %K partial least squares %K multiclass SVM %K multiple PLS regressors %K spatiotemporal properties %K very high dimensional feature vectors %K INRIA IXMAS dataset %X We introduce an action recognition approach based on Partial Least Squares (PLS) and Support Vector Machines (SVM). We extract very high dimensional feature vectors representing spatio-temporal properties of actions and use multiple PLS regressors to find relevant features that distinguish amongst action classes. Finally, we use a multi-class SVM to learn and classify those relevant features. We applied our approach to INRIA's IXMAS dataset. Experimental results show that our method is superior to other methods applied to the IXMAS dataset. %B Image Processing (ICIP), 2011 18th IEEE International Conference on %P 533 - 536 %8 2011/09// %G eng %R 10.1109/ICIP.2011.6116399 %0 Conference Paper %B Person-Oriented Vision (POV), 2011 IEEE Workshop on %D 2011 %T Active inference for retrieval in camera networks %A Daozheng Chen %A Bilgic,Mustafa %A Getoor, Lise %A Jacobs, David W.
%A Mihalkova,Lilyana %A Tom Yeh %X We address the problem of searching camera network videos to retrieve frames containing specified individuals. We show the benefit of utilizing a learned probabilistic model that captures dependencies among the cameras. In addition, we develop an active inference framework that can request human input at inference time, directing human attention to the portions of the videos whose correct annotation would provide the biggest performance improvements. Our primary contribution is to show that by mapping video frames in a camera network onto a graphical model, we can apply collective classification and active inference algorithms to significantly increase the performance of the retrieval system, while minimizing the number of human annotations required. %B Person-Oriented Vision (POV), 2011 IEEE Workshop on %I IEEE %P 13 - 20 %8 2011/01// %@ 978-1-61284-036-9 %G eng %U http://ieeexplore.ieee.org/xpls/abs_all.jsp?arnumber=5712363&tag=1 %R 10.1109/POV.2011.5712363 %0 Conference Paper %B Proceedings of the 2011 annual conference extended abstracts on Human factors in computing systems %D 2011 %T Active progress bars: facilitating the switch to temporary activities %A Hurter,Christophe %A Girouard,Audrey %A Riche,Nathalie %A Plaisant, Catherine %K Frustration %K participatory design %K progress bars %K task switching %X In this paper, we seek to find a better way of effective task management when a progress bar interrupts the user's primary activity. We propose to augment progress bars with user controlled functionalities facilitating the switch to temporary activities. We detail a taxonomy of waiting period contexts and possible temporary tasks, then report on 5 participatory design sessions, and a follow-up survey of 96 respondents. Finally we describe an early prototype of active progress bars, and report on initial use. %B Proceedings of the 2011 annual conference extended abstracts on Human factors in computing systems %S CHI EA '11 %I ACM %C New York, NY, USA %P 1963 - 1968 %8 2011/// %@ 978-1-4503-0268-5 %G eng %U http://doi.acm.org/10.1145/1979742.1979883 %R 10.1145/1979742.1979883 %0 Conference Paper %B 2011 IEEE International Conference on Computer Vision (ICCV) %D 2011 %T Active scene recognition with vision and language %A Yu,Xiaodong %A Fermüller, Cornelia %A Ching Lik Teo %A Yezhou Yang %A Aloimonos, J.
%K accuracy %K active scene recognition %K classification performance %K Cognition %K Computer vision %K Detectors %K Equations %K high level knowledge utilization %K HUMANS %K image classification %K inference mechanisms %K object detectors %K Object recognition %K reasoning module %K sensing process %K sensory module %K support vector machines %K Training %X This paper presents a novel approach to utilizing high level knowledge for the problem of scene recognition in an active vision framework, which we call active scene recognition. In traditional approaches, high level knowledge is used in the post-processing to combine the outputs of the object detectors to achieve better classification performance. In contrast, the proposed approach employs high level knowledge actively by implementing an interaction between a reasoning module and a sensory module (Figure 1). Following this paradigm, we implemented an active scene recognizer and evaluated it with a dataset of 20 scenes and 100+ objects. We also extended it to the analysis of dynamic scenes for activity recognition with attributes. Experiments demonstrate the effectiveness of the active paradigm in introducing attention and additional constraints into the sensing process. %B 2011 IEEE International Conference on Computer Vision (ICCV) %I IEEE %P 810 - 817 %8 2011/11/06/13 %@ 978-1-4577-1101-5 %G eng %R 10.1109/ICCV.2011.6126320 %0 Conference Paper %B Twenty-Second International Joint Conference on Artificial Intelligence %D 2011 %T Active Surveying: A Probabilistic Approach for Identifying Key Opinion Leaders %A Sharara,H. %A Getoor, Lise %A Norton,M. %X Opinion leaders play an important role in influencing people’s beliefs, actions and behaviors. Although a number of methods have been proposed for identifying influentials using secondary sources of information, the use of primary sources, such as surveys, is still favored in many domains.
In this work we present a new surveying method which combines secondary data with partial knowledge from primary sources to guide the information gathering process. We apply our proposed active surveying method to the problem of identifying key opinion leaders in the medical field, and show how we are able to accurately identify the opinion leaders while minimizing the amount of primary data required, which results in significant cost reduction in data acquisition without sacrificing its integrity. %B Twenty-Second International Joint Conference on Artificial Intelligence %8 2011/// %G eng %0 Conference Paper %B Proceedings of the 20th international conference companion on World wide web %D 2011 %T Adapting a map query interface for a gesturing touch screen interface %A Samet, Hanan %A Teitler,Benjamin E. %A Adelfio,Marco D. %A Lieberman,Michael D. %K map query interface %K newsstand %K touch screen gesturing interface %X NewsStand is an example application of a general framework that we are developing to enable searching for information using a map query interface, where the information results from monitoring the output of over 8,000 RSS news sources and is available for retrieval within minutes of publication. The user interface of NewsStand was recently adapted so that NewsStand can execute on mobile and tablet devices with a gesturing touch screen interface such as the iPhone, iPod Touch, and iPad. This action led to a discovery of some shortcomings of current mapping APIs as well as devising some interesting new widgets. These issues are discussed, and the realization can be seen by a demo at http://newsstand.umiacs.umd.edu on any of the above Apple devices as well as other devices that support gestures such as an Android phone.
%B Proceedings of the 20th international conference companion on World wide web %S WWW '11 %I ACM %C New York, NY, USA %P 257 - 260 %8 2011/// %@ 978-1-4503-0637-9 %G eng %U http://doi.acm.org/10.1145/1963192.1963303 %R 10.1145/1963192.1963303 %0 Journal Article %J Proceedings of the 30th Annual ACM Symposium on Principles of Distributed Computing %D 2011 %T Adaptively secure broadcast, revisited %A Garay,J. A %A Katz, Jonathan %A Kumaresan,R. %A Zhou,H.S. %X We consider the classical problem of synchronous broadcast with dishonest majority, when a public-key infrastructure and digital signatures are available. In a surprising result, Hirt and Zikas (Eurocrypt 2010) recently observed that all existing protocols for this task are insecure against an adaptive adversary who can choose which parties to corrupt as the protocol progresses. Moreover, they prove an impossibility result for adaptively secure broadcast in their setting. We argue that the communication model adopted by Hirt and Zikas is unrealistically pessimistic. We revisit the problem of adaptively secure broadcast in a more natural synchronous model (with rushing), and show that broadcast is possible in this setting for an arbitrary number of corruptions. Our positive result holds under a strong, simulation-based definition in the universal-composability framework. We also study the impact of adaptive attacks on protocols for secure multi-party computation where broadcast is used as a sub-routine. %B Proceedings of the 30th Annual ACM Symposium on Principles of Distributed Computing %8 2011/// %G eng %0 Journal Article %J Advances in Imaging and Electron Physics %D 2011 %T Advances in Video-Based Biometrics %A Chellapa, Rama %A Turaga,P. %B Advances in Imaging and Electron Physics %V 83 %P 183 - 203 %8 2011/// %G eng %0 Journal Article %J International Journal of Research in Marketing %D 2011 %T Agent-based modeling in marketing: Guidelines for rigor %A Rand, William %A Rust,Roland T.
%X Agent-based modeling can illuminate how complex marketing phenomena emerge from simple decision rules. Marketing phenomena that are too complex for conventional analytical or empirical approaches can often be modeled using this approach. Agent-based modeling investigates aggregate phenomena by simulating the behavior of individual “agents,” such as consumers or organizations. Some useful examples of agent-based modeling have been published in marketing journals, but widespread acceptance of the agent-based modeling method and publication of this method in the highest-level marketing journals have been slowed by the lack of widely accepted standards of how to do agent-based modeling rigorously. We address this need by proposing guidelines for rigorous agent-based modeling. We demonstrate these guidelines, and the value of agent-based modeling for marketing research, through the use of an example. We use an agent-based modeling approach to replicate the Bass model of the diffusion of innovations, illustrating the use of the proposed guidelines to ensure the rigor of the analysis. We also show how extensions of the Bass model that would be difficult to carry out using traditional marketing research techniques are possible to implement using a rigorous agent-based approach. %B International Journal of Research in Marketing %V 28 %P 181 - 193 %8 2011/09// %@ 0167-8116 %G eng %U http://www.sciencedirect.com/science/article/pii/S0167811611000504 %N 3 %R 10.1016/j.ijresmar.2011.04.002 %0 Journal Article %J ACM Trans. Embed. Comput. Syst. %D 2011 %T Analysis of SystemC actor networks for efficient synthesis %A Falk, Joachim %A Zebelein, Christian %A Keinert, Joachim %A Haubelt, Christian %A Teich, Juergen %A Bhattacharyya, Shuvra S. %K actor-oriented design %K clustering %K Dataflow analysis %K scheduling %X Applications in the signal processing domain are often modeled by dataflow graphs. 
Due to heterogeneous complexity requirements, these graphs contain both dynamic and static dataflow actors. In previous work, we presented a generalized clustering approach for these heterogeneous dataflow graphs in the presence of unbounded buffers. This clustering approach allows the application of static scheduling methodologies for static parts of an application during embedded software generation for multiprocessor systems. It systematically exploits the predictability and efficiency of the static dataflow model to obtain latency and throughput improvements. In this article, we present a generalization of this clustering technique to dataflow graphs with bounded buffers, therefore enabling synthesis for embedded systems without dynamic memory allocation. Furthermore, a case study is given to demonstrate the performance benefits of the approach. %B ACM Trans. Embed. Comput. Syst. %V 10 %P 18:1 - 18:34 %8 2011 %@ 1539-9087 %G eng %U http://doi.acm.org/10.1145/1880050.1880054 %N 2 %0 Journal Article %J Transportation Research Board 90th Annual Meeting Compendium of Papers %D 2011 %T Analyzing Incident Management Event Sequences with Interactive Visualization %A Guerra Gómez,J. %A Wongsuphasawat,K. %A Wang,T. D %A Pack,M. %A Plaisant, Catherine %X While traditional safety and incident analysis has mostly focused on incident attributes data, such as the location and time of the incident, there are other aspects in incident response that are temporal in nature and are more difficult to analyze. We describe a visual analytics tool for temporal data exploration, called LifeFlow, used for the analysis of incident response data. LifeFlow provides user-controlled overviews of event sequences (e.g., notification, arrival, clearance, etc.). It allows analysts to interactively explore temporal patterns, find anomalies in sequences and compare management practices. This type of analysis can potentially lead to process improvements and save human lives.
We used NCHRP traffic incident data with more than 200,000 incidents reported by 8 different agencies over a period of about 28 months. Our experience suggests that even non-expert analysts can spot many anomalies in the data using the LifeFlow overviews, and are able to rapidly ask many questions and find differences between agencies. %B Transportation Research Board 90th Annual Meeting Compendium of Papers %8 2011/// %G eng %0 Journal Article %J ACM Trans. Comput. Logic %D 2011 %T Annotated probabilistic temporal logic %A Shakarian,Paulo %A Parker,Austin %A Simari,Gerardo %A V.S. Subrahmanian %K frequency functions %K imprecise probabilities %K Probabilistic and temporal reasoning %K threads %X The semantics of most logics of time and probability is given via a probability distribution over threads, where a thread is a structure specifying what will be true at different points in time (in the future). When assessing the probabilities of statements such as “Event a will occur within 5 units of time of event b,” there are many different semantics possible, even when assessing the truth of this statement within a single thread. We introduce the syntax of annotated probabilistic temporal (APT) logic programs and axiomatically introduce the key notion of a frequency function (for the first time) to capture different types of intrathread reasoning, and then provide a semantics for intrathread and interthread reasoning in APT logic programs parameterized by such frequency functions. We develop a comprehensive set of complexity results for consistency checking and entailment in APT logic programs, together with sound and complete algorithms to check consistency and entailment. The basic algorithms use linear programming, but we then show how to substantially and correctly reduce the sizes of these linear programs to yield better computational properties. We describe a real world application we are developing using APT logic programs. %B ACM Trans. Comput.
Logic %V 12 %P 14:1–14:44 %8 2011/01// %@ 1529-3785 %G eng %U http://doi.acm.org/10.1145/1877714.1877720 %N 2 %R 10.1145/1877714.1877720 %0 Conference Paper %B 2011 AAAI Fall Symposium Series %D 2011 %T Ant Colony Optimization in a Changing Environment %A Seymour,John Jefferson %A Tuzo,Joseph %A desJardins, Marie %X Ant colony optimization (ACO) algorithms are computational problem-solving methods that are inspired by the complex behaviors of ant colonies; specifically, the ways in which ants interact with each other and their environment to optimize the overall performance of the ant colony. Our eventual goal is to develop and experiment with ACO methods that can more effectively adapt to dynamically changing environments and problems. We describe biological ant systems and the dynamics of their environments and behaviors. We then introduce a family of dynamic ACO algorithms that can handle dynamic modifications of their inputs. We report empirical results, showing that dynamic ACO algorithms can effectively adapt to time-varying environments. %B 2011 AAAI Fall Symposium Series %8 2011/03/11/ %G eng %U http://www.aaai.org/ocs/index.php/FSS/FSS11/paper/viewPaper/4223 %0 Conference Paper %B 2011 22nd IEEE International Symposium on Rapid System Prototyping (RSP) %D 2011 %T Applying graphics processor acceleration in a software defined radio prototyping environment %A Plishker,W. %A Zaki, G.F. %A Bhattacharyya, Shuvra S. %A Clancy, C. %A Kuykendall, J. %K Acceleration %K coprocessors %K dataflow foundation %K GNU radio %K Graphics processing unit %K graphics processor acceleration %K Kernel %K Libraries %K multicore platforms %K Multicore processing %K PARALLEL PROCESSING %K Pipelines %K Protocols %K software defined radio prototyping environment %K software radio %K stand-alone GPU accelerated library %X With higher bandwidth requirements and more complex protocols, software defined radio (SDR) has ever growing computational demands.
SDR applications have different levels of parallelism that can be exploited on multicore platforms, but design and programming difficulties have inhibited the adoption of specialized multicore platforms like graphics processors (GPUs). In this work we propose a new design flow that augments a popular existing SDR development environment (GNU Radio), with a dataflow foundation and a stand-alone GPU accelerated library. The approach gives an SDR developer the ability to prototype a GPU accelerated application and explore its design space fast and effectively. We demonstrate this design flow on a standard SDR benchmark and show that deciding how to utilize a GPU can be non-trivial for even relatively simple applications. %B 2011 22nd IEEE International Symposium on Rapid System Prototyping (RSP) %P 67 - 73 %8 2011 %G eng %0 Journal Article %J Proceedings of 43rd Annual ACM Symposium on Theory of Computing %D 2011 %T Approximate polytope membership queries %A Arya,S. %A da Fonseca,G. D %A Mount, Dave %X We consider an approximate version of a fundamental geometric search problem, polytope membership queries. Given a convex polytope P in R^d, presented as the intersection of halfspaces, the objective is to preprocess P so that, given a query point q, it is possible to determine efficiently whether q lies inside P subject to an error bound ε. Previous solutions to this problem were based on straightforward applications of classic polytope approximation techniques by Dudley (1974) and Bentley et al. (1982). The former yields minimum storage, and the latter yields constant query time. A space-time tradeoff can be obtained by interpolating between the two. We present the first significant improvements to this tradeoff. For example, using the same storage as Dudley, we reduce the query time from O(1/ε^((d−1)/2)) to O(1/ε^((d−1)/4)). Our approach is based on a very simple algorithm. Both lower bounds and upper bounds on the performance of the algorithm are presented.
To establish the relevance of our results, we introduce a reduction from approximate nearest neighbor searching to approximate polytope membership queries. We show that our tradeoff provides significant improvements to the best known space-time tradeoffs for approximate nearest neighbor searching. Furthermore, this is achieved with constructions that are much simpler than existing methods. %B Proceedings of 43rd Annual ACM Symposium on Theory of Computing %P 579 - 586 %8 2011/// %G eng %0 Conference Paper %B 2011 Proceedings IEEE INFOCOM %D 2011 %T Approximation algorithms for throughput maximization in wireless networks with delay constraints %A Guanhong Pei %A Anil Kumar,V. S %A Parthasarathy,S. %A Srinivasan, Aravind %K Approximation algorithms %K Approximation methods %K approximation theory %K Delay %K delay constraints %K delays %K general interference model %K Interference %K multihop wireless networks %K optimisation %K Optimized production technology %K radio networks %K radiofrequency interference %K target delay bound %K Throughput %K throughput maximization %K Wireless networks %X We study the problem of throughput maximization in multi-hop wireless networks with end-to-end delay constraints for each session. This problem has received much attention starting with the work of Grossglauser and Tse (2002), and it has been shown that there is a significant tradeoff between the end-to-end delays and the total achievable rate. We develop algorithms to compute such tradeoffs with provable performance guarantees for arbitrary instances, with general interference models.
Given a target delay-bound Δ(c) for each session c, our algorithm gives a stable flow vector with a total throughput within a factor of O(log Δm / log log Δm) of the maximum, so that the per-session (end-to-end) delay is O((log Δm / log log Δm)^2 · Δ(c)), where Δm = max_c Δ(c); note that these bounds depend only on the delays, and not on the network size, and this is the first such result, to our knowledge. %B 2011 Proceedings IEEE INFOCOM %I IEEE %P 1116 - 1124 %8 2011/04/10/15 %@ 978-1-4244-9919-9 %G eng %R 10.1109/INFCOM.2011.5934887 %0 Journal Article %J Epidemiological and Molecular Aspects on Cholera %D 2011 %T Aquatic Realm and Cholera %A Huq,A. %A Grim,C. J. %A Rita R Colwell %X Cholera is an ancient disease that can be severe and life threatening. It occurs predominantly in areas of the world where populations lack safe drinking water. Epidemics of cholera are linked with malnutrition, poor sanitation, and conditions resulting from natural disasters such as severe flooding. According to a report published by WHO in 2000 [1], cholera remains a major public health problem and is becoming increasingly important since the number of countries in which cholera is endemic continues to increase. Unfortunately, outbreaks of the disease continue into the twenty-first century with ominous portent in the wake of global climate change [1]. Yet cholera is a preventable disease if people have access to safe drinking water and are properly educated how to protect themselves from the risk of infection with vibrios. Cholera also is an easily treatable disease. Oral rehydration therapy, a solution containing glucose and appropriate salts, has proven to be effective for treatment of most cholera victims [2]. Nevertheless, each year, tens of thousands of people are victims of the disease, bringing this “curse of humankind” to modern civilization.
Present understanding of cholera is based on studies conducted over the past three decades and significant new information has been gained concerning environmental factors associated with this disease, especially how to detect the bacterium and where it lives in the natural environment, outside the human gut, and what triggers the annual outbreaks that occur with remarkable regularity. Environmental research on Vibrio cholerae and cholera has provided insights for prediction and prevention of the disease it causes, while the race for effective vaccines against cholera continues. %B Epidemiological and Molecular Aspects on Cholera %P 311 - 339 %8 2011/// %G eng %R 10.1007/978-1-60327-265-0_18 %0 Journal Article %J SIGCOMM Comput. Commun. Rev. %D 2011 %T Architecting for innovation %A Koponen,Teemu %A Shenker,Scott %A Balakrishnan,Hari %A Feamster, Nick %A Ganichev,Igor %A Ghodsi,Ali %A Godfrey,P. Brighten %A McKeown,Nick %A Parulkar,Guru %A Raghavan,Barath %A Rexford,Jennifer %A Arianfar,Somaya %A Kuptsov,Dmitriy %K diversity %K Evolution %K innovation %K internet architecture %X We argue that the biggest problem with the current Internet architecture is not a particular functional deficiency, but its inability to accommodate innovation. To address this problem we propose a minimal architectural "framework" in which comprehensive architectures can reside. The proposed Framework for Internet Innovation (FII) --- which is derived from the simple observation that network interfaces should be extensible and abstract --- allows for a diversity of architectures to coexist, communicate, and evolve. We demonstrate FII's ability to accommodate diversity and evolution with a detailed examination of how information flows through the architecture and with a skeleton implementation of the relevant interfaces. %B SIGCOMM Comput. Commun. Rev. 
%V 41 %P 24 - 36 %8 2011/// %@ 0146-4833 %G eng %U http://doi.acm.org/10.1145/2002250.2002256 %N 3 %R 10.1145/2002250.2002256 %0 Journal Article %J Journal of the ACM - Association for Computing Machinery %D 2011 %T New Constructive Aspects of the Lovász Local Lemma %A Haeupler,B. %A Saha,B. %A Srinivasan, Aravind %B Journal of the ACM - Association for Computing Machinery %V 58 %8 2011/// %G eng %N 6 %0 Journal Article %J BMC Bioinformatics %D 2011 %T Assessing the benefits of using mate-pairs to resolve repeats in de novo short-read prokaryotic assemblies %A Wetzel,Joshua %A Kingsford, Carl %A Pop, Mihai %X Next-generation sequencing technologies allow genomes to be sequenced more quickly and less expensively than ever before. However, as sequencing technology has improved, the difficulty of de novo genome assembly has increased, due in large part to the shorter reads generated by the new technologies. The use of mated sequences (referred to as mate-pairs) is a standard means of disambiguating assemblies to obtain a more complete picture of the genome without resorting to manual finishing. Here, we examine the effectiveness of mate-pair information in resolving repeated sequences in the DNA (a paramount issue to overcome). While it has been empirically accepted that mate-pairs improve assemblies, and a variety of assemblers use mate-pairs in the context of repeat resolution, the effectiveness of mate-pairs in this context has not been systematically evaluated in previous literature. %B BMC Bioinformatics %V 12 %P 95 - 95 %8 2011/04/13/ %@ 1471-2105 %G eng %U http://www.biomedcentral.com/1471-2105/12/95 %N 1 %R 10.1186/1471-2105-12-95 %0 Journal Article %J Journal of Systems and Software %D 2011 %T An assessment of systems and software engineering scholars and institutions (2003–2007 and 2004–2008) %A Wong,W. Eric %A Tse,T.H. %A Glass,Robert L. %A Basili, Victor R. %A Chen,T.Y.
%K Research publications %K Systems and software engineering %K Top institutions %K Top scholars %X An ongoing, annual survey of publications in systems and software engineering identifies the top 15 scholars and institutions in the field over a 5-year period. Each ranking is based on the weighted scores of the number of papers published in TSE, TOSEM, JSS, SPE, EMSE, IST, and Software of the corresponding period. This report summarizes the results for 2003–2007 and 2004–2008. The top-ranked institution is Korea Advanced Institute of Science and Technology, Korea for 2003–2007, and Simula Research Laboratory, Norway for 2004–2008, while Magne Jørgensen is the top-ranked scholar for both periods. %B Journal of Systems and Software %V 84 %P 162 - 168 %8 2011/01// %@ 0164-1212 %G eng %U http://www.sciencedirect.com/science/article/pii/S0164121210002682 %N 1 %R 10.1016/j.jss.2010.09.036 %0 Conference Paper %B Proceedings of the 24th annual ACM symposium on User interface software and technology %D 2011 %T Associating the visual representation of user interfaces with their internal structures and metadata %A Chang,Tsung-Hsiang %A Tom Yeh %A Miller,Rob %K accessibility api %K Graphical user interfaces %K Pixel %K text detection %K text segmentation %X Pixel-based methods are emerging as a new and promising way to develop new interaction techniques on top of existing user interfaces. However, in order to maintain platform independence, other available low-level information about GUI widgets, such as accessibility metadata, was neglected intentionally. In this paper, we present a hybrid framework, PAX, which associates the visual representation of user interfaces (i.e. the pixels) and their internal hierarchical metadata (i.e. the content, role, and value). We identify challenges to building such a framework. We also develop and evaluate two new algorithms for detecting text at arbitrary places on the screen, and for segmenting a text image into individual word blobs. 
Finally, we validate our framework in implementations of three applications. We enhance an existing pixel-based system, Sikuli Script, and preserve the readability of its script code at the same time. Further, we create two novel applications, Screen Search and Screen Copy, to demonstrate how PAX can be applied to development of desktop-level interactive systems. %B Proceedings of the 24th annual ACM symposium on User interface software and technology %S UIST '11 %I ACM %C New York, NY, USA %P 245 - 256 %8 2011/// %@ 978-1-4503-0716-1 %G eng %U http://doi.acm.org/10.1145/2047196.2047228 %R 10.1145/2047196.2047228 %0 Book Section %B New Horizons in Evolutionary Robotics %D 2011 %T Automated Planning Logic Synthesis for Autonomous Unmanned Vehicles in Competitive Environments with Deceptive Adversaries %A Svec,Petr %A Gupta, Satyandra K. %E Doncieux,Stéphane %E Bredèche,Nicolas %E Mouret,Jean-Baptiste %K engineering %X We developed a new approach for automated synthesis of a planning logic for autonomous unmanned vehicles. This new approach can be viewed as an automated iterative process during which an initial version of a logic is synthesized and then gradually improved by detecting and fixing its shortcomings. This is achieved by combining data mining for extraction of vehicle’s states of failure and Genetic Programming (GP) technique for synthesis of corresponding navigation code. We verified the feasibility of the approach using unmanned surface vehicles (USVs) simulation. Our focus was specifically on the generation of a planning logic used for blocking the advancement of an intruder boat towards a valuable target. Developing autonomy logic for this behavior is challenging as the intruder’s attacking logic is human-competitive with deceptive behavior so the USV is required to learn specific maneuvers for specific situations to do successful blocking. 
We compared the performance of the generated blocking logic to the performance of logic that was manually implemented. Our results show that the new approach was able to synthesize a blocking logic with performance closely approaching the performance of the logic coded by hand. %B New Horizons in Evolutionary Robotics %S Studies in Computational Intelligence %I Springer Berlin / Heidelberg %V 341 %P 171 - 193 %8 2011/// %@ 978-3-642-18271-6 %G eng %U http://www.springerlink.com/content/f454477212518671/abstract/ %0 Conference Paper %B Advanced Video and Signal-Based Surveillance (AVSS), 2011 8th IEEE International Conference on %D 2011 %T AVSS 2011 demo session: A large-scale benchmark dataset for event recognition in surveillance video %A Oh,Sangmin %A Hoogs,Anthony %A Perera,Amitha %A Cuntoor,Naresh %A Chen,Chia-Chih %A Lee,Jong Taek %A Mukherjee,Saurajit %A Aggarwal, JK %A Lee,Hyungtae %A Davis, Larry S. %A Swears,Eran %A Wang,Xiaoyang %A Ji,Qiang %A Reddy,Kishore %A Shah,Mubarak %A Vondrick,Carl %A Pirsiavash,Hamed %A Ramanan,Deva %A Yuen,Jenny %A Torralba,Antonio %A Song,Bi %A Fong,Anesco %A Roy-Chowdhury,Amit %A Desai,Mita %X We introduce to the surveillance community the VIRAT Video Dataset[1], which is a new large-scale surveillance video dataset designed to assess the performance of event recognition algorithms in realistic scenes1. %B Advanced Video and Signal-Based Surveillance (AVSS), 2011 8th IEEE International Conference on %P 527 - 528 %8 2011/09/30/2 %G eng %R 10.1109/AVSS.2011.6027400 %0 Journal Article %J arXiv:1007.4446 [cs] %D 2010 %T Abstracting Abstract Machines %A David Van Horn %A Might, Matthew %K Computer Science - Programming Languages %K F.3.2 %K F.4.1 %X We describe a derivational approach to abstract interpretation that yields novel and transparently sound static analyses when applied to well-established abstract machines. 
To demonstrate the technique and support our claim, we transform the CEK machine of Felleisen and Friedman, a lazy variant of Krivine's machine, and the stack-inspecting CM machine of Clements and Felleisen into abstract interpretations of themselves. The resulting analyses bound temporal ordering of program events; predict return-flow and stack-inspection behavior; and approximate the flow and evaluation of by-need parameters. For all of these machines, we find that a series of well-known concrete machine refactorings, plus a technique we call store-allocated continuations, leads to machines that abstract into static analyses simply by bounding their stores. We demonstrate that the technique scales up uniformly to allow static analysis of realistic language features, including tail calls, conditionals, side effects, exceptions, first-class continuations, and even garbage collection. %B arXiv:1007.4446 [cs] %8 2010/07/26/ %G eng %U http://arxiv.org/abs/1007.4446 %0 Journal Article %J ACM Trans. Algorithms %D 2010 %T Achieving anonymity via clustering %A Aggarwal,Gagan %A Panigrahy,Rina %A Feder,Tomás %A Thomas,Dilys %A Kenthapadi,Krishnaram %A Khuller, Samir %A Zhu,An %K anonymity %K Approximation algorithms %K clustering %K privacy %X Publishing data for analysis from a table containing personal records, while maintaining individual privacy, is a problem of increasing importance today. The traditional approach of deidentifying records is to remove identifying fields such as social security number, name, etc. However, recent research has shown that a large fraction of the U.S. population can be identified using nonkey attributes (called quasi-identifiers) such as date of birth, gender, and zip code. The k-anonymity model protects privacy via requiring that nonkey attributes that leak information are suppressed or generalized so that, for every record in the modified table, there are at least k−1 other records having exactly the same values for quasi-identifiers. 
We propose a new method for anonymizing data records, where quasi-identifiers of data records are first clustered and then cluster centers are published. To ensure privacy of the data records, we impose the constraint that each cluster must contain no fewer than a prespecified number of data records. This technique is more general since we have a much larger choice for cluster centers than k-anonymity. In many cases, it lets us release a lot more information without compromising privacy. We also provide constant factor approximation algorithms to come up with such a clustering. This is the first set of algorithms for the anonymization problem where the performance is independent of the anonymity parameter k. We further observe that a few outlier points can significantly increase the cost of anonymization. Hence, we extend our algorithms to allow an ε fraction of points to remain unclustered, that is, deleted from the anonymized publication. Thus, by not releasing a small fraction of the database records, we can ensure that the data published for analysis has less distortion and hence is more useful. Our approximation algorithms for new clustering objectives are of independent interest and could be applicable in other clustering scenarios as well. %B ACM Trans. Algorithms %V 6 %P 49:1–49:19 %8 2010/07// %@ 1549-6325 %G eng %U http://doi.acm.org/10.1145/1798596.1798602 %N 3 %R 10.1145/1798596.1798602 %0 Journal Article %J AAAI’10 %D 2010 %T Active inference for collective classification %A Bilgic,M. %A Getoor, Lise %X Labeling nodes in a network is an important problem that has seen a growing interest. A number of methods that exploit both local and relational information have been developed for this task. Acquiring the labels for a few nodes at inference time can greatly improve the accuracy; however, the question of figuring out which node labels to acquire is challenging. Previous approaches have been based on simple structural properties.
Here, we present a novel technique, which we refer to as reflect and correct, that can learn and predict when the underlying classification system is likely to make mistakes and suggests acquisitions to correct those mistakes. %B AAAI’10 %8 2010/// %G eng %0 Journal Article %J ICML’10 %D 2010 %T Active learning for networked data %A Bilgic,M. %A Mihalkova,L. %A Getoor, Lise %X We introduce a novel active learning algorithm for classification of network data. In this setting, training instances are connected by a set of links to form a network, the labels of linked nodes are correlated, and the goal is to exploit these dependencies and accurately label the nodes. This problem arises in many domains, including social and biological network analysis and document classification, and there has been much recent interest in methods that collectively classify the nodes in the network. While in many cases labeled examples are expensive, often network information is available. We show how an active learning algorithm can take advantage of network structure. Our algorithm effectively exploits the links between instances and the interaction between the local and collective aspects of a classifier to improve the accuracy of learning from fewer labeled examples. We experiment with two real-world benchmark collective classification domains, and show that we are able to achieve extremely accurate results even when only a small fraction of the data is labeled. %B ICML’10 %8 2010/// %G eng %0 Journal Article %J Technical Reports of the Computer Science Department %D 2010 %T Adapting Scrum to Managing a Research Group %A Hicks, Michael W. %A Foster, Jeffrey S. %X Score is an adaptation of the Scrum agile software development methodology to the task of managing Ph.D. students in an academic research group. This paper describes Score, conceived in October 2006, and our experience using it.
We have found that Score enables us---faculty and students---to be more efficient and thereby more productive, and enhances the cohesion of our research group. %B Technical Reports of the Computer Science Department %8 2010/09/18/ %G eng %0 Journal Article %J Computational Optimization and Applications %D 2010 %T Adaptive Constraint Reduction for Convex Quadratic Programming %A Jung,Jin Hyuk %A O'Leary, Dianne P. %A Tits,Andre' L. %X We propose an adaptive, constraint-reduced, primal-dual interior-point algorithm for convex quadratic programming with many more inequality constraints than variables. We reduce the computational effort by assembling, instead of the exact normal-equation matrix, an approximate matrix from a well chosen index set which includes indices of constraints that seem to be most critical. Starting with a large portion of the constraints, our proposed scheme excludes more unnecessary constraints at later iterations. We provide proofs for the global convergence and the quadratic local convergence rate of an affine-scaling variant. Numerical experiments on random problems, on a data-fitting problem, and on a problem in array pattern synthesis show the effectiveness of the constraint reduction in decreasing the time per iteration without significantly affecting the number of iterations. We note that a similar constraint-reduction approach can be applied to algorithms of Mehrotra’s predictor-corrector type, although no convergence theory is supplied. %B Computational Optimization and Applications %8 2010/03// %G eng %R 10.1007/s10589-010-9324-8 %0 Journal Article %J Signal Processing, IEEE Transactions on %D 2010 %T Adaptive Threshold Estimation via Extreme Value Theory %A Broadwater, J.B.
%A Chellapa, Rama %K adaptive threshold estimation %K extreme value theory %K generalized Pareto distribution %K Kolmogorov-Smirnov statistical test %K Pareto distribution %K signal detection %K signal estimation %X Determining a detection threshold to automatically maintain a low false alarm rate is a challenging problem. In a number of different applications, the underlying parametric assumptions of most automatic target detection algorithms are invalid. Therefore, thresholds derived using these incorrect distribution assumptions do not produce desirable results when applied to real sensor data. Monte Carlo methods for threshold determination work well but tend to perform poorly when targets are present. In order to mitigate these effects, we propose an algorithm using extreme value theory through the use of the generalized Pareto distribution (GPD) and a Kolmogorov-Smirnov statistical test. Unlike previous work based on GPD estimates, this algorithm incorporates a way to adaptively maintain low false alarm rates in the presence of targets. Both synthetic and real-world detection results demonstrate the usefulness of this algorithm. %B Signal Processing, IEEE Transactions on %V 58 %P 490 - 500 %8 2010/02// %@ 1053-587X %G eng %N 2 %R 10.1109/TSP.2009.2031285 %0 Journal Article %J Augmented vision & reality %D 2010 %T Advanced tracking systems: computational approaches to be introduced to new series %A HAMMOUD,R. %A Porikli, F. %A Davis, Larry S. %X Modern visual tracking systems implement a computational process that is often divided into several modules such as localization, tracking, recognition, behavior analysis and classification of events. This book will focus on recent advances in computational approaches for detection and tracking of human body, road boundaries and lane markers as well as on recognition of human activities, drowsiness and distraction state. This book is composed of seven distinct parts.
Part I covers people localization algorithms in video sequences. Part II describes successful approaches for tracking people and body parts. Part III focuses on tracking of pedestrians and vehicles in outdoor images. Part IV describes recent methods to track lane markers and road boundaries. Part V reviews methods to track head, hand and facial features. The last two parts cover the topics of automatic recognition and classification of activity, gesture, behavior, drowsiness and visual distraction state of humans. %B Augmented vision & reality %V 1 %8 2010/// %G eng %0 Book Section %B Advances in Computers %D 2010 %T Advances in Automated Model-Based System Testing of Software Applications with a GUI Front-End %A Memon, Atif M. %A Nguyen,Bao N. %E Zelkowitz, Marvin V %X Despite the ubiquity of software applications that employ a graphical-user interface (GUI) front-end, functional system testing of these applications has remained, until recently, an understudied research area. During “GUI testing,” test cases, modeled as sequences of user input events, are created and executed on the software by exercising the GUI's widgets. Because each possible sequence of user events may potentially be a test case and today's GUIs offer enormous flexibility to end-users, in principle, GUI testing requires a prohibitively large number of test cases. Any practical test-case generation technique must sample the vast GUI input space. Existing techniques are largely manual, and hence extremely resource intensive. Several new automated model-based techniques have been developed in the past decade. All these techniques develop, either manually or automatically, a model of the GUI and employ it to generate test cases. This chapter presents the first detailed taxonomy of these techniques. A small GUI application is used as a running example to demonstrate each technique and illustrate its relative strengths and weaknesses.
%B Advances in Computers %I Elsevier %V 80 %P 121 - 162 %8 2010/// %@ 0065-2458 %G eng %U http://www.sciencedirect.com/science/article/pii/S0065245810800038 %0 Book Section %B Advances in Computers %D 2010 %T Advances in Video-Based Human Activity Analysis: Challenges and Approaches %A Turaga,Pavan %A Chellapa, Rama %A Veeraraghavan,Ashok %E Zelkowitz, Marvin V %X Videos play an ever-increasing role in our everyday lives with applications ranging from news, entertainment, scientific research, security, and surveillance. Coupled with the fact that cameras and storage media are becoming less expensive, it has resulted in people producing more video content than ever before. Analysis of human activities in video is important for several important applications. Interpretation and identification of human activities requires approaches that address the following questions: (a) what are the appropriate atomic primitives for human activities, (b) how to combine primitives to produce complex activities, (c) what are the required invariances for inference algorithms, and (d) how to build computational models for each of these. In this chapter, we provide a broad overview and discussion of these issues. We shall review state-of-the-art computer vision algorithms that address these issues and then provide a unified perspective from which specific algorithms can be derived. We will then present supporting experimental results.
%B Advances in Computers %I Elsevier %V 80 %P 237 - 290 %8 2010/// %@ 0065-2458 %G eng %U http://www.sciencedirect.com/science/article/pii/S0065245810800075 %0 Journal Article %J AI Magazine %D 2010 %T AI Theory and Practice: A Discussion on Hard Challenges and Opportunities Ahead %A Horvitz,Eric %A Getoor, Lise %A Guestrin,Carlos %A Hendler,James %A Konstan,Joseph %A Subramanian,Devika %A Wellman,Michael %A Kautz,Henry %B AI Magazine %V 31 %P 103 - 114 %8 2010/07/28/ %@ 0738-4602 %G eng %U http://www.aaai.org/ojs/index.php/aimagazine/article/viewArticle/2293 %N 3 %R 10.1609/aimag.v31i3.2293 %0 Book Section %B Computer Vision – ECCV 2010 %D 2010 %T Aligning Spatio-Temporal Signals on a Special Manifold %A Ruonan Li %A Chellapa, Rama %E Daniilidis,Kostas %E Maragos,Petros %E Paragios,Nikos %X We investigate the spatio-temporal alignment of videos or features/signals extracted from them. Specifically, we formally define an alignment manifold and formulate the alignment problem as an optimization procedure on this non-linear space by exploiting its intrinsic geometry. We focus our attention on semantically meaningful videos or signals, e.g., those describing or capturing human motion or activities, and propose a new formalism for temporal alignment accounting for execution rate variations among realizations of the same video event. By construction, we address this static and deterministic alignment task in a dynamic and stochastic manner: we regard the search for optimal alignment parameters as a recursive state estimation problem for a particular dynamic system evolving on the alignment manifold. Consequently, a Sequential Importance Sampling iteration on the alignment manifold is designed for effective and efficient alignment. We demonstrate the performance on several types of input data that arise in vision problems.
%B Computer Vision – ECCV 2010 %S Lecture Notes in Computer Science %I Springer Berlin / Heidelberg %V 6315 %P 547 - 560 %8 2010/// %@ 978-3-642-15554-3 %G eng %U http://dx.doi.org/10.1007/978-3-642-15555-0_40 %0 Journal Article %J BMC Bioinformatics %D 2010 %T Alignment and clustering of phylogenetic markers - implications for microbial diversity studies %A White,James R %A Navlakha,Saket %A Nagarajan,Niranjan %A Ghodsi,Mohammad-Reza %A Kingsford, Carl %A Pop, Mihai %X Molecular studies of microbial diversity have provided many insights into the bacterial communities inhabiting the human body and the environment. A common first step in such studies is a survey of conserved marker genes (primarily 16S rRNA) to characterize the taxonomic composition and diversity of these communities. To date, however, there exists significant variability in analysis methods employed in these studies. %B BMC Bioinformatics %V 11 %P 152 - 152 %8 2010/03/24/ %@ 1471-2105 %G eng %U http://www.biomedcentral.com/1471-2105/11/152 %N 1 %R 10.1186/1471-2105-11-152 %0 Book %D 2010 %T Analyzing Social Media Networks with NodeXL %A Hansen,Derek %A Shneiderman, Ben %A Smith,Marc A.
%K Business & Economics / Marketing / Research %K Computers / Computer Science %K Computers / Data Processing %K Computers / Database Management / Data Mining %K Computers / General %K Computers / Information Theory %K Computers / Interactive & Multimedia %K Computers / Social Aspects / General %K Computers / Social Aspects / Human-Computer Interaction %K Computers / User Interfaces %K data mining %K Data mining - Computer programs %K Data mining/ Computer programs %K Information Visualization %K Information visualization - Computer programs %K Information visualization/ Computer programs %K NodeXL %K online social networks %K Social Science / General %K Social Science / Reference %K Social Science / Sociology / General %K Technology & Engineering / General %K Technology & Engineering / Social Aspects %K Webometrics %K Webometrics - Computer programs %K Webometrics/ Computer programs %X "Analyzing Social Media Networks with NodeXL provides a much needed resource for the social media research community, as it describes network theory, provides compelling examples using data sources like Twitter and Flickr, and highlights how to use a free sophisticated tool for analysis. This is the perfect book for anyone trying to analyze the behavior of online social networks and beyond." ---Adam Perer, Research Scientist, IBM Research "This book provides a basic introduction to social network analysis, followed by practical instruction and examples on gathering data from online sources, importing into Excel, and then analyzing the data through Excel. 
The book will be important for promoting research in the area for those in information science, sociology, cultural studies, virtual community, and e-commerce."---Caroline Haythornthwaite, PhD, Professor, Graduate School of Library and Information Science, University of Illinois at Urbana-Champaign Businesses, entrepreneurs, individuals, and government agencies alike are looking to social network analysis (SNA) tools for insight into trends, connections, and fluctuations in social media. Microsoft's NodeXL is a free, open-source SNA plug-in for use with Excel. It provides instant graphical representation of relationships of complex networked data. But it goes further than other SNA tools - NodeXL was developed by a multidisciplinary team of experts that bring together information studies, computer science, sociology, human-computer interaction, and over 20 years of visual analytic theory and information visualization into a simple tool anyone can use. This makes NodeXL of interest not only to end-users but also to researchers and students studying visual and network analytics and their application in the real world. %I Morgan Kaufmann %8 2010/08/27/ %@ 9780123822291 %G eng %0 Book %D 2010 %T Analyzing Social Media Networks with NodeXL: Insights from a Connected World %A Hansen,Derek %A Shneiderman, Ben %A Smith,Marc A. %K Computers / Computer Science %K Computers / General %K Computers / Social Aspects / General %K Computers / Social Aspects / Human-Computer Interaction %K Computers / User Interfaces %X Businesses, entrepreneurs, individuals, and government agencies alike are looking to social network analysis (SNA) tools for insight into trends, connections, and fluctuations in social media. Microsoft’s NodeXL is a free, open-source SNA plug-in for use with Excel. It provides instant graphical representation of relationships of complex networked data. 
But it goes further than other SNA tools -- NodeXL was developed by a multidisciplinary team of experts that brings together information studies, computer science, sociology, human-computer interaction, and over 20 years of visual analytic theory and information visualization into a simple tool anyone can use. This makes NodeXL of interest not only to end-users but also to researchers and students studying visual and network analytics and their application in the real world. In Analyzing Social Media Networks with NodeXL, members of the NodeXL development team team up to provide readers with a thorough and practical guide for using the tool while also explaining the development behind each feature. Blending the theoretical with the practical, this book applies specific SNA instructions directly to NodeXL, but the theory behind the implementation can be applied to any SNA. To learn more about Analyzing Social Media Networks and NodeXL, visit the companion site at www.mkp.com/nodexl. *Walks you through NodeXL, while explaining the theory and development behind each step, providing takeaways that can apply to any SNA *Demonstrates how visual analytics research can be applied to SNA tools for the mass market *Includes case studies from researchers who use NodeXL on popular networks like email, Facebook, Twitter, and wikis %I Morgan Kaufmann %P 302 %8 2010 %@ 9780123822307 %G eng %0 Conference Paper %B Proceedings of the 1st ACM International Health Informatics Symposium %D 2010 %T An animated multivariate visualization for physiological and clinical data in the ICU %A Ordóñez,Patricia %A desJardins, Marie %A Lombardi,Michael %A Lehmann,Christoph U.
%A Fackler,Jim %K computational physiology %K Information Visualization %K multivariate %K time series %X Current visualizations of electronic medical data in the Intensive Care Unit (ICU) consist of stacked univariate plots of variables over time and a tabular display of the current numeric values for the corresponding variables and occasionally an alarm limit. The value of information is dependent upon knowledge of historic values to determine a change in state. With the ability to acquire more historic information, providers need more sophisticated visualization tools to assist them in analyzing the data in a multivariate fashion over time. We present a multivariate time series visualization that is interactive and animated, and has proven to be as effective as current methods in the ICU for predicting an episode of acute hypotension in terms of accuracy, confidence, and efficiency with only 30-60 minutes of training. %B Proceedings of the 1st ACM International Health Informatics Symposium %S IHI '10 %I ACM %C New York, NY, USA %P 771 - 779 %8 2010/// %@ 978-1-4503-0030-8 %G eng %U http://doi.acm.org/10.1145/1882992.1883109 %R 10.1145/1882992.1883109 %0 Journal Article %J ACM Trans. Comput. Logic %D 2010 %T Annotated RDF %A Udrea,Octavian %A Recupero,Diego Reforgiato %A V.S. Subrahmanian %K annotated RDF %K Query processing %K view maintenance %X Real-world use of RDF requires the ability to transparently represent and explain metadata associated with RDF triples. For example, when RDF triples are extracted automatically by information extraction programs, there is a need to represent where the triples came from, what their temporal validity is, and how certain we are that the triple is correct. Today, there is no theoretically clean and practically scalable mechanism that spans these different needs - reification is the only solution proposed to date, and its implementations have been ugly.
In this paper, we present Annotated RDF (or aRDF for short) in which RDF triples are annotated by members of a partially ordered set (with bottom element) that can be selected in any way desired by the user. We present a formal declarative semantics (model theory) for annotated RDF and develop algorithms to check consistency of aRDF theories and to answer queries to aRDF theories. We show that annotated RDF supports users who need to think about the uncertainty, temporal aspects, and provenance of the RDF triples in an RDF database. We develop a prototype aRDF implementation and show that our algorithms work efficiently even on real world data sets containing over 10 million triples. %B ACM Trans. Comput. Logic %V 11 %P 10:1–10:41 %8 2010/01// %@ 1529-3785 %G eng %U http://doi.acm.org/10.1145/1656242.1656245 %N 2 %R 10.1145/1656242.1656245 %0 Journal Article %J Systems, Man, and Cybernetics, Part B: Cybernetics, IEEE Transactions on %D 2010 %T Applications of a Simple Characterization of Human Gait in Surveillance %A Yang Ran %A Qinfen Zheng %A Chellapa, Rama %A Strat, T.M. %K algorithms %K Artificial intelligence %K Biometry %K double helical signatures %K Gait %K gait analysis %K human gait %K human motion kinematics %K HUMANS %K Image Enhancement %K Image Interpretation, Computer-Assisted %K Image motion analysis %K iterative local curve embedding algorithm %K Object detection %K Pattern Recognition, Automated %K Photography %K Reproducibility of Results %K Sensitivity and Specificity %K simple spatiotemporal characterization %K video sequence %K Video Recording %K Video surveillance %X Applications of a simple spatiotemporal characterization of human gait in the surveillance domain are presented. The approach is based on decomposing a video sequence into x-t slices, which generate periodic patterns referred to as double helical signatures (DHSs).
The features of DHS are given as follows: 1) they naturally encode the appearance and kinematics of human motion and reveal geometric symmetries, and 2) they are effective and efficient for recovering gait parameters and detecting simple events. We present an iterative local curve embedding algorithm to extract the DHS from video sequences. Two applications are then considered. First, the DHS is used for simultaneous segmentation and labeling of body parts in cluttered scenes. Experimental results showed that the algorithm is robust to size, viewing angles, camera motion, and severe occlusion. Then, the DHS is used to classify load-carrying conditions. By examining various symmetries in DHS, activities such as carrying, holding, and walking with objects that are attached to legs are detected. Our approach possesses several advantages: a compact representation that can be computed in real time is used; furthermore, it does not depend on silhouettes or landmark tracking, which are sensitive to errors in the background subtraction stage. %B Systems, Man, and Cybernetics, Part B: Cybernetics, IEEE Transactions on %V 40 %P 1009 - 1020 %8 2010/08// %@ 1083-4419 %G eng %N 4 %R 10.1109/TSMCB.2010.2044173 %0 Journal Article %J Journal of Graph Algorithms and Applications %D 2010 %T Applications of Parameterized st-Orientations %A Charalampos Papamanthou %A Tollis, Ioannis G. %B Journal of Graph Algorithms and Applications %V 14 %P 337 - 365 %8 2010/// %@ 1526-1719 %G eng %U http://emis.um.ac.ir/journals/JGAA/getPaper-96.html?id=212 %N 2 %0 Journal Article %J Computational Geometry %D 2010 %T Approximate range searching: The absolute model %A da Fonseca,Guilherme D. %A Mount, Dave %K Absolute model %K approximation %K Halfbox quadtree %K Idempotence %K Range searching %X Range searching is a well known problem in the area of geometric data structures. We consider this problem in the context of approximation, where an approximation parameter ε > 0 is provided.
Most prior work on this problem has focused on the case of relative errors, where each range shape R is bounded, and points within distance ε·diam(R) of the range's boundary may or may not be included. We consider a different approximation model, called the absolute model, in which points within distance ε of the range's boundary may or may not be included, regardless of the diameter of the range. We consider range spaces consisting of halfspaces, Euclidean balls, simplices, axis-aligned rectangles, and general convex bodies. We consider a variety of problem formulations, including range searching under general commutative semigroups, idempotent semigroups, groups, and range emptiness. We show how idempotence can be used to improve not only approximate, but also exact halfspace range searching. Our data structures are much simpler than both their exact and relative model counterparts, and so are amenable to efficient implementation. %B Computational Geometry %V 43 %P 434 - 444 %8 2010/05// %@ 0925-7721 %G eng %U http://www.sciencedirect.com/science/article/pii/S0925772109000844 %N 4 %R 10.1016/j.comgeo.2008.09.009 %0 Journal Article %J Computational Geometry %D 2010 %T Approximation algorithm for the kinetic robust K-center problem %A Friedler,Sorelle A. %A Mount, Dave %K Approximation algorithms %K clustering %K kinetic data structures %K robust statistics %X Two complications frequently arise in real-world applications: motion and the contamination of data by outliers. We consider a fundamental clustering problem, the k-center problem, within the context of these two issues. We are given a finite point set S of size n and an integer k. In the standard k-center problem, the objective is to compute a set of k center points to minimize the maximum distance from any point of S to its closest center, or equivalently, the smallest radius such that S can be covered by k disks of this radius.
In the discrete k-center problem the disk centers are drawn from the points of S, and in the absolute k-center problem the disk centers are unrestricted. We generalize this problem in two ways. First, we assume that points are in continuous motion, and the objective is to maintain a solution over time. Second, we assume that a robustness parameter 0 < t ⩽ 1 is given, and the objective is to compute the smallest radius such that there exist k disks of this radius that cover at least ⌈tn⌉ points of S. We present a kinetic data structure (in the KDS framework) that maintains a (3 + ε)-approximation for the robust discrete k-center problem and a (4 + ε)-approximation for the robust absolute k-center problem, both under the assumption that k is a constant. We also improve on a previous 8-approximation for the non-robust discrete kinetic k-center problem, for arbitrary k, and show that our data structure achieves a (4 + ε)-approximation. All these results hold in any metric space of constant doubling dimension, which includes Euclidean space of constant dimension. %B Computational Geometry %V 43 %P 572 - 586 %8 2010/08// %@ 0925-7721 %G eng %U http://www.sciencedirect.com/science/article/pii/S0925772110000027 %N 6–7 %R 10.1016/j.comgeo.2010.01.001 %0 Journal Article %J SIAM Journal on Computing %D 2010 %T Approximation Algorithms for Non-Uniform Buy-at-Bulk Network Design %A Chekuri,C. %A Hajiaghayi, Mohammad T. %A Kortsarz,G. %A Salavatipour,M. R %B SIAM Journal on Computing %V 39 %P 1772 - 1798 %8 2010/// %G eng %N 5 %0 Conference Paper %B Proceedings of the 2010 ACM-IEEE International Symposium on Empirical Software Engineering and Measurement %D 2010 %T Are developers complying with the process: an XP study %A Zazworka, Nico %A Stapel,Kai %A Knauss,Eric %A Shull, Forrest %A Basili, Victor R.
%A Schneider,Kurt %K process conformance %K process improvement %K XP programming %X Adapting new software processes and practices in organizational and academic environments requires training the developers and validating the applicability of the newly introduced activities. Investigating process conformance during training and understanding if programmers are able and willing to follow the specific steps are crucial to evaluating whether the process improves various software product quality factors. In this paper we present a process model independent approach to detect process non-conformance. Our approach is based on non-intrusively collected data captured by a version control system and provides the project manager with timely updates. Further, we provide evidence of the applicability of our approach by investigating process conformance in a five day training class on eXtreme Programming (XP) practices at the Leibniz Universität Hannover. Our results show that the approach enabled researchers to formulate minimally intrusive methods to check for conformance and that violations could be detected for the majority of the investigated XP practices. %B Proceedings of the 2010 ACM-IEEE International Symposium on Empirical Software Engineering and Measurement %S ESEM '10 %I ACM %C New York, NY, USA %P 14:1–14:10 %8 2010/// %@ 978-1-4503-0039-1 %G eng %U http://doi.acm.org/10.1145/1852786.1852805 %R 10.1145/1852786.1852805 %0 Book Section %B Computer Vision – ECCV 2010 %D 2010 %T Articulation-Invariant Representation of Non-planar Shapes %A Gopalan,Raghuraman %A Turaga,Pavan %A Chellapa, Rama %E Daniilidis,Kostas %E Maragos,Petros %E Paragios,Nikos %X Given a set of points corresponding to a 2D projection of a non-planar shape, we would like to obtain a representation invariant to articulations (under no self-occlusions).
It is a challenging problem since we need to account for the changes in 2D shape due to 3D articulations, viewpoint variations, as well as the varying effects of the imaging process on different regions of the shape due to its non-planarity. By modeling an articulating shape as a combination of approximate convex parts connected by non-convex junctions, we propose to preserve distances between a pair of points by (i) estimating the parts of the shape through approximate convex decomposition, by introducing a robust measure of convexity, and (ii) performing part-wise affine normalization by assuming a weak perspective camera model, and then relating the points using the inner distance, which is insensitive to planar articulations. We demonstrate the effectiveness of our representation on a dataset with non-planar articulations, and on standard shape retrieval datasets like MPEG-7. %B Computer Vision – ECCV 2010 %S Lecture Notes in Computer Science %I Springer Berlin / Heidelberg %V 6313 %P 286 - 299 %8 2010/// %@ 978-3-642-15557-4 %G eng %U http://dx.doi.org/10.1007/978-3-642-15558-1_21 %0 Journal Article %J BMC Bioinformatics %D 2010 %T Assembly complexity of prokaryotic genomes using short reads %A Kingsford, Carl %A Schatz,Michael C %A Pop, Mihai %X De Bruijn graphs are a theoretical framework underlying several modern genome assembly programs, especially those that deal with very short reads. We describe an application of de Bruijn graphs to analyze the global repeat structure of prokaryotic genomes. %B BMC Bioinformatics %V 11 %P 21 - 21 %8 2010/01/12/ %@ 1471-2105 %G eng %U http://www.biomedcentral.com/1471-2105/11/21 %N 1 %R 10.1186/1471-2105-11-21 %0 Conference Paper %B Proceedings of the 19th ACM international conference on Information and knowledge management %D 2010 %T Assessor error in stratified evaluation %A Webber,W. %A Oard, Douglas %A Scholer,F. %A Hedin,B.
%B Proceedings of the 19th ACM international conference on Information and knowledge management %P 539 - 548 %8 2010/// %G eng %0 Journal Article %J The Journal of the Acoustical Society of America %D 2010 %T Audio visual scene analysis using spherical arrays and cameras. %A O'donovan,Adam %A Duraiswami, Ramani %A Zotkin,Dmitry N %A Gumerov, Nail A. %X While audition and vision are used together by living beings to make sense of the world, the observation of the world using machines in applications such as surveillance and robotics has proceeded largely separately. We describe the use of spherical microphone arrays as “audio cameras” and a spherical array of video cameras as tools to perform multi‐modal scene analysis that attempts to answer questions such as “Who?,” “What?,” “Where?,” and “Why?.” Signal processing algorithms to identify the number of people and their identities and to isolate and dereverberate their speech using multi‐modal processing will be described. The use of graphics processor based signal processing allows for real‐time implementation of these algorithms. [Work supported by ONR.] %B The Journal of the Acoustical Society of America %V 127 %P 1979 - 1979 %8 2010/// %G eng %U http://link.aip.org/link/?JAS/127/1979/3 %N 3 %R 10.1121/1.3385079 %0 Journal Article %J Stabilization, Safety, and Security of Distributed Systems %D 2010 %T Authenticated broadcast with a partially compromised public-key infrastructure %A Gordon,S. %A Katz, Jonathan %A Kumaresan,R. %A Yerukhimovich,A. %X Given a public-key infrastructure (PKI) and digital signatures, it is possible to construct broadcast protocols tolerating any number of corrupted parties. Almost all existing protocols, however, do not distinguish between corrupted parties (who do not follow the protocol), and honest parties whose secret (signing) keys have been compromised (but who continue to behave honestly).
We explore conditions under which it is possible to construct broadcast protocols that still provide the usual guarantees (i.e., validity/agreement) to the latter. Consider a network of n parties, where an adversary has compromised the secret keys of up to t_c honest parties and, in addition, fully controls the behavior of up to t_a other parties. We show that for any fixed t_c > 0, and any fixed t_a, there exists an efficient protocol for broadcast if and only if 2t_a + min(t_a, t_c) < n. (When t_c = 0, standard results imply feasibility.) We also show that if t_c, t_a are not fixed, but are only guaranteed to satisfy the bound above, then broadcast is impossible to achieve except for a few specific values of n; for these “exceptional” values of n, we demonstrate a broadcast protocol. Taken together, our results give a complete characterization of this problem. %B Stabilization, Safety, and Security of Distributed Systems %P 144 - 158 %8 2010/// %G eng %R 10.1007/978-3-642-16023-3_14 %0 Conference Paper %D 2010 %T Authentication in the clouds: a framework and its application to mobile users %A Chow, Richard %A Jakobsson, Markus %A Masuoka, Ryusuke %A Molina,Jesus %A Niu, Yuan %A Elaine Shi %A Song,Zhexuan %K Authentication %K Cloud computing %K Mobile computing %X Cloud computing is a natural fit for mobile security. Typical handsets have input constraints and practical computational and power limitations, which must be respected by mobile security technologies in order to be effective. We describe how cloud computing can address these issues. Our approach is based on a flexible framework for supporting authentication decisions we call TrustCube (to manage the authentication infrastructure) and on a behavioral authentication approach referred to as implicit authentication (to translate user behavior into authentication scores).
The combination results in a new authentication paradigm for users of mobile technologies, one where an appropriate balance between usability and trust can be managed through flexible policies and dynamic tuning. %S CCSW '10 %I ACM %P 1 - 6 %8 2010 %@ 978-1-4503-0089-6 %G eng %U http://doi.acm.org/10.1145/1866835.1866837 %0 Conference Paper %B 2010 Conference on Design and Architectures for Signal and Image Processing (DASIP) %D 2010 %T Automated generation of an efficient MPEG-4 Reconfigurable Video Coding decoder implementation %A Gu, Ruirui %A Piat, J. %A Raulet, M. %A Janneck, J.W. %A Bhattacharyya, Shuvra S. %K automated generation %K automatic design flow %K CAL language %K CAL networks %K CAL-to-C translation %K CAL2C translation %K coarse-grain dataflow representations %K Computational modeling %K data flow computing %K dataflow information %K Dataflow programming %K decoding %K Digital signal processing %K Libraries %K MPEG-4 reconfigurable video coding decoder implementation %K parallel languages %K SDF detection %K synchronous dataflow detection %K TDP %K TDP-based static scheduling %K The Dataflow interchange format Package %K Transform coding %K user-friendly design %K video coding %K video processing systems %K XML %K XML format %X This paper proposes an automatic design flow from user-friendly design to efficient implementation of video processing systems. This design flow starts with the use of coarse-grain dataflow representations based on the CAL language, which is a complete language for dataflow programming of embedded systems. Our approach integrates previously developed techniques for detecting synchronous dataflow (SDF) regions within larger CAL networks, and exploiting the static structure of such regions using analysis tools in The Dataflow interchange format Package (TDP). 
Using a new XML format that we have developed to exchange dataflow information between different dataflow tools, we explore systematic implementation of signal processing systems using CAL, SDF-like region detection, TDP-based static scheduling, and CAL-to-C (CAL2C) translation. Our approach, which is a novel integration of three complementary dataflow tools - the CAL parser, TDP, and CAL2C - is demonstrated on an MPEG Reconfigurable Video Coding (RVC) decoder. %B 2010 Conference on Design and Architectures for Signal and Image Processing (DASIP) %P 265 - 272 %8 2010 %G eng %0 Conference Paper %B Acoustics Speech and Signal Processing (ICASSP), 2010 IEEE International Conference on %D 2010 %T Automatic matched filter recovery via the audio camera %A O'Donovan,A.E. %A Duraiswami, Ramani %A Zotkin,Dmitry N %K acoustic radiators %K acoustic sensor %K acoustic signal processing %K acoustic sources %K array signal processing %K audio camera %K automatic matched filter recovery %K beamforming %K cameras %K geometric constraints %K matched filters %K microphone arrays %K real-time audio images %K room impulse response %K source/receiver positions %K transient response %X The sound reaching the acoustic sensor in a realistic environment contains not only the part arriving directly from the sound source but also a number of environmental reflections. The effect of those on the sound is equivalent to a convolution with the room impulse response and can be undone via deconvolution - a technique known as matched filter processing. However, the filter is usually pre-computed in advance using known room geometry and source/receiver positions, and any deviations from those cause the performance to degrade significantly.
In this work, an algorithm is proposed to compute the matched filter automatically using an audio camera - a microphone array based system that provides real-time audio images (essentially plots of steered response power in various directions) of the environment. Acoustic sources, as well as their significant reflections, are revealed as peaks in the audio image. The reflections are associated with sound source(s) using an acoustic similarity metric, and an approximate matched filter is computed to align the reflections in time with the direct arrival. Preliminary experimental evaluation of the method is performed. It is shown that in the case of two sources the reflections are identified correctly, the time delays recovered agree well with those computed from geometric constraints, and that the output SNR improves when the reflections are added coherently to the signal obtained by beamforming directly at the source. %B Acoustics Speech and Signal Processing (ICASSP), 2010 IEEE International Conference on %P 2826 - 2829 %8 2010/03// %G eng %R 10.1109/ICASSP.2010.5496187 %0 Conference Paper %B Image Processing (ICIP), 2010 17th IEEE International Conference on %D 2010 %T Automatic target recognition based on simultaneous sparse representation %A Patel, Vishal M. %A Nasrabadi,N.M.
%A Chellapa, Rama %K automatic target recognition %K class supervised simultaneous orthogonal matching pursuit %K Comanche forward-looking infrared data set %K confusion matrix %K dictionary learning algorithm %K feature extraction %K image classification %K iterative methods %K learning (artificial intelligence) %K matching pursuit-based similarity measure %K military systems %K military target %K object recognition %K simultaneous sparse representation %K sparse signal representation %K target tracking %X In this paper, an automatic target recognition algorithm is presented based on a framework for learning dictionaries for simultaneous sparse signal representation and feature extraction. The dictionary learning algorithm is based on class supervised simultaneous orthogonal matching pursuit while a matching pursuit-based similarity measure is used for classification. We show how the proposed framework can be helpful for efficient utilization of data, with the possibility of developing real-time, robust target classification. We verify the efficacy of the proposed algorithm using confusion matrices on the well known Comanche forward-looking infrared data set consisting of ten different military targets at different orientations. %B Image Processing (ICIP), 2010 17th IEEE International Conference on %P 1377 - 1380 %8 2010/09// %G eng %R 10.1109/ICIP.2010.5652306 %0 Report %D 2010 %T Autonomous Traffic Engineering using Self-Configuring Link Weights %A Sundaresan,S. %A Lumezanu,C. %A Feamster, Nick %A François,P. %X Network operators use traffic engineering to control the flow of traffic across their networks. Existing TE methods establish static topologies offline, either by setting link weights or by configuring paths a priori. These methods require manual configuration and may not be robust in the face of failures.
Some methods also require knowledge about traffic demands and may not be able to handle traffic fluctuations. Even when changes in demand are expected, operators must manually tune network configurations to prepare for them. Because adjusting configurations is difficult to get right, we start from an extreme design point, asking instead whether it is possible to perform traffic engineering online without having to perform any a priori configuration. Our traffic engineering technique, SculpTE, adapts to changing traffic demands by automatically configuring link weights in a stable manner. SculpTE balances load across the network by continually adjusting link weights to expose lightly-loaded paths. We evaluate SculpTE using a simple analytical model and simulations on realistic ISP network topologies. Our results show that SculpTE achieves excellent load balancing, responsiveness, and stability compared to state-of-the-art TE schemes, without requiring network operators to perform any offline configuration. %I School of Computer Science, Georgia Tech %V GT-CS-10-16 %8 2010/// %G eng %U http://hdl.handle.net/1853/35011 %0 Journal Article %J SIGCOMM Comput. Commun. Rev. %D 2010 %T Autonomous traffic engineering with self-configuring topologies %A Sundaresan,Srikanth %A Lumezanu,Cristian %A Feamster, Nick %A Francois,Pierre %K multi-path routing %K online %K sculpte %K self-configuring %K traffic engineering %X Network operators use traffic engineering (TE) to control the flow of traffic across their networks. Existing TE methods require manual configuration of link weights or tunnels, which is difficult to get right, or prior knowledge of traffic demands and hence may not be robust to link failures or traffic fluctuations. We present a self-configuring TE scheme, SculpTE, which automatically adapts the network-layer topology to changing traffic demands. SculpTE is responsive, stable, and achieves excellent load balancing. %B SIGCOMM Comput. Commun. Rev. 
%V 41 %8 2010/08// %@ 0146-4833 %G eng %U http://dl.acm.org/citation.cfm?id=2043164.1851239 %N 4 %0 Conference Paper %B Motion and Video Computing, 2009. WMVC '09. Workshop on %D 2009 %T Action recognition based on human movement characteristics %A Dondera,R. %A David Doermann %A Davis, Larry S. %K action recognition %K ballistic movements %K computational cost %K computer vision %K human movement characteristics %K motion descriptor %K motion vector data %K pattern recognition %K probability density function %K robustness %K shape information %K short correlated linear movements %K stability %K visual databases %X We present a motion descriptor for human action recognition where appearance and shape information are unreliable. Unlike other motion-based approaches, we leverage image characteristics specific to human movement to achieve better robustness and lower computational cost. Drawing on recent work on motion recognition with ballistic dynamics, an action is modeled as a series of short correlated linear movements and represented with a probability density function over motion vector data. We are targeting common human actions composed of ballistic movements, and our descriptor can handle both short actions (e.g. reaching with the hand) and long actions with events at relatively stable time offsets (e.g. walking). The proposed descriptor is used for both classification and detection of action instances, in a nearest-neighbor framework. We evaluate the descriptor on the KTH action database and obtain a recognition rate of 90% in a relevant test setting, comparable to the state-of-the-art approaches that use other cues in addition to motion. We also acquired a database of actions with slight occlusion and a human actor manipulating objects of various shapes and appearances. This database makes the use of appearance and shape information problematic, but we obtain a recognition rate of 95%.
Our work demonstrates that human movement has distinctive patterns, and that these patterns can be used effectively for action recognition. %B Motion and Video Computing, 2009. WMVC '09. Workshop on %P 1 - 8 %8 2009/12// %G eng %R 10.1109/WMVC.2009.5399233 %0 Conference Paper %B IEEE/RSJ International Conference on Intelligent Robots and Systems, 2009. IROS 2009 %D 2009 %T Active segmentation for robotics %A Mishra,A. %A Aloimonos, J. %A Fermüller, Cornelia %K binocular cues %K image colour analysis %K Image segmentation %K Image texture %K Intelligent robots %K Layout %K Machine vision %K monocular cues %K Navigation %K optical flow %K Orbital robotics %K Robot sensing systems %K robot vision %K Robot vision systems %K robotics active segmentation %K Robotics and automation %K semantic robot %K Simultaneous localization and mapping %K stereo disparity %K stereo image processing %X The semantic robots of the immediate future are robots that will be able to find and recognize objects in any environment. They need the capability of segmenting objects in their visual field. In this paper, we propose a novel approach to segmentation based on the operation of fixation by an active observer. Our approach is different from current approaches: while existing works attempt to segment the whole scene at once into many areas, we segment only one image region, specifically the one containing the fixation point. Furthermore, our solution integrates monocular cues (color, texture) with binocular cues (stereo disparities and optical flow). Experiments with real imagery collected by our active robot and from the known databases demonstrate the promise of the approach. %B IEEE/RSJ International Conference on Intelligent Robots and Systems, 2009. 
IROS 2009 %I IEEE %P 3133 - 3139 %8 2009/10/10/15 %@ 978-1-4244-3803-7 %G eng %R 10.1109/IROS.2009.5354325 %0 Journal Article %J Handbook of Remote Biometrics %D 2009 %T Advanced Technologies for Touchless Fingerprint Recognition %A Tistarelli,M. %A Li, S.Z. %A Chellapa, Rama %B Handbook of Remote Biometrics %P 83 - 109 %8 2009/// %G eng %0 Book Section %B Computer Performance Issues %D 2009 %T Advances in Web Testing %A Eaton,Cyntrica %A Memon, Atif M. %E Zelkowitz, Marvin V %X Demand for high‐quality Web applications continues to escalate as reliance on Web‐based software increases and Web systems become increasingly complex. Given the importance of quality and its impact on the user experience, a significant research effort has been invested in developing tools and methodologies that facilitate effective quality assurance for Web applications. Testing, in particular, provides a critical inroad toward meeting the quality demand by enabling developers to discover failures and anomalies in applications before they are released. In this survey, we discuss advances in Web testing and begin by exploring the peculiarities of Web applications that make evaluating their correctness a challenge and the direct translation of conventional software engineering principles impractical in some cases. We then provide an overview of research contributions in three critical aspects of Web testing: deriving adequate Web application models, defining appropriate Web testing strategies, and conducting Web portability analysis. In short, models are used to capture Web application components, their attributes, and interconnections; testing strategies use the models to generate test cases; and portability analysis enables Web developers to ensure that their applications remain correct as they are launched in highly diverse configurations.
%B Computer Performance Issues %I Elsevier %V 75 %P 281 - 306 %8 2009/// %@ 0065-2458 %G eng %U http://www.sciencedirect.com/science/article/pii/S006524580800805X %0 Journal Article %J Information Visualization %D 2009 %T Advancing User-Centered Evaluation of Visual Analytic Environments Through Contests %A Costello,Loura %A Grinstein,Georges %A Plaisant, Catherine %A Scholtz,Jean %K metrics %K synthetic data %K user-centered evaluation %K visual analytics %X In this paper, the authors describe the Visual Analytics Science and Technology (VAST) Symposium contests run in 2006 and 2007 and the VAST 2008 and 2009 challenges. These contests were designed to provide researchers with a better understanding of the tasks and data that face potential end users. Access to these end users is limited because of time constraints and the classified nature of the tasks and data. In that respect, the contests serve as an intermediary, with the metrics and feedback serving as measures of utility to the end users. The authors summarize the lessons learned and the future directions for VAST Challenges. %B Information Visualization %V 8 %P 230 - 238 %8 2009/09/21/ %@ 1473-8716, 1473-8724 %G eng %U http://ivi.sagepub.com/content/8/3/230 %N 3 %R 10.1057/ivs.2009.16 %0 Journal Article %J Journal of Visual Languages and Computing %D 2009 %T Age progression in human faces: A survey %A Ramanathan,N. %A Chellapa, Rama %A Biswas,S. %X Facial aging, a new dimension that has recently been added to the problem of face recognition, poses interesting theoretical and practical challenges to the research community. The problem, which originally generated interest in the psychophysics and human perception community, has recently found enhanced interest in the computer vision community. How do humans perceive age? What constitutes an age-invariant signature that can be derived from faces?
How compactly can the facial growth event be described? How does facial aging impact recognition performance? In this paper, we give a thorough analysis of the problem of facial aging and further provide a complete account of the many interesting studies that have been performed on this topic from different fields. We offer a comparative analysis of various approaches that have been proposed for problems such as age estimation, appearance prediction, face verification, etc., and offer insights into future research on this topic. %B Journal of Visual Languages and Computing %V 15 %P 3349 - 3361 %8 2009/// %G eng %0 Conference Paper %B Data Engineering, 2009. ICDE '09. IEEE 25th International Conference on %D 2009 %T Aggregate Query Answering under Uncertain Schema Mappings %A Gal,A. %A Martinez,M. V %A Simari,G. I %A V.S. Subrahmanian %K aggregate query answering %K by-table semantics %K by-tuple semantics %K computational complexity %K data integration %K distributed databases %K distribution semantics %K expected value semantics %K multiple databases %K polynomial time algorithm %K probabilistic database %K probabilistic schema mapping %K query processing %K range semantics %K statistical databases %K uncertain schema mapping %X Recent interest in managing uncertainty in data integration has led to the introduction of probabilistic schema mappings and the use of probabilistic methods to answer queries across multiple databases using two semantics: by-table and by-tuple. In this paper, we develop three possible semantics for aggregate queries: the range, distribution, and expected value semantics, and show that these three semantics combine with the by-table and by-tuple semantics in six ways. We present algorithms to process COUNT, AVG, SUM, MIN, and MAX queries under all six semantics and develop results on the complexity of processing such queries under all six semantics.
We show that computing COUNT is in PTIME for all six semantics and computing SUM is in PTIME for all but the by-tuple/distribution semantics. Finally, we show that AVG, MIN, and MAX are PTIME computable for all by-table semantics and for the by-tuple/range semantics. We developed a prototype implementation and experimented with both real-world traces and simulated data. We show that, as expected, naive processing of aggregates does not scale beyond small databases with a small number of mappings. The results also show that the polynomial time algorithms are scalable up to several million tuples as well as with a large number of mappings. %B Data Engineering, 2009. ICDE '09. IEEE 25th International Conference on %P 940 - 951 %8 2009/04/29/2 %G eng %R 10.1109/ICDE.2009.55 %0 Conference Paper %B Computer Design, 2009. ICCD 2009. IEEE International Conference on %D 2009 %T Algorithmic approach to designing an easy-to-program system: Can it lead to a HW-enhanced programmer's workflow add-on? %A Vishkin, Uzi %K concurrency theory %K easy-to-program system %K explicit multi-threading %K fine-grained threads %K many-core systems %K parallel algorithms %K parallel random-access-machine/model %K parallel sorting algorithms %K PRAM abstraction %K PRAM-On-Chip %X Our earlier parallel algorithmics work on the parallel random-access-machine/model (PRAM) computation model led us to a PRAM-On-Chip vision: a comprehensive many-core system that can look to the programmer like the abstract PRAM model. We introduced the eXplicit MultiThreaded (XMT) design and prototyped it in hardware and software. XMT comprises a programmer's workflow that advances from work-depth, a standard PRAM theory abstraction, to an XMT program, and, if desired, to its performance tuning. XMT provides strong performance for programs developed this way due to its hardware support of very fine-grained threads and the overhead of handling them.
XMT has also shown unique promise when it comes to ease-of-programming, the biggest problem that has limited the impact of all parallel systems to date. For example, teachability of XMT programming has been demonstrated at various levels from rising 6th graders to graduate students, and students in a freshman class were able to program 3 parallel sorting algorithms. The main purpose of the current paper is to stimulate discussion on the following somewhat open-ended question. Now that we made significant progress on a system devoted to supporting PRAM-like programming, is it possible to incorporate our hardware support as an add-on into other current and future many-core systems? The paper considers a concrete proposal for doing that: recasting our work as a hardware-enhanced programmer's workflow “module” that can then be essentially imported into the other systems. %B Computer Design, 2009. ICCD 2009. IEEE International Conference on %P 60 - 63 %8 2009/10// %G eng %R 10.1109/ICCD.2009.5413174 %0 Journal Article %J ACM Trans. Algorithms %D 2009 %T Algorithms for distributional and adversarial pipelined filter ordering problems %A Condon,Anne %A Deshpande, Amol %A Hellerstein,Lisa %A Wu,Ning %K flow algorithms %K Pipelined filter ordering %K Query optimization %K selection ordering %X Pipelined filter ordering is a central problem in database query optimization. The problem is to determine the optimal order in which to apply a given set of commutative filters (predicates) to a set of elements (the tuples of a relation), so as to find, as efficiently as possible, the tuples that satisfy all of the filters. Optimization of pipelined filter ordering has recently received renewed attention in the context of environments such as the Web, continuous high-speed data streams, and sensor networks.
Pipelined filter ordering problems are also studied in areas such as fault detection and machine learning under names such as learning with attribute costs, minimum-sum set cover, and satisficing search. We present algorithms for two natural extensions of the classical pipelined filter ordering problem: (1) a distributional-type problem where the filters run in parallel and the goal is to maximize throughput, and (2) an adversarial-type problem where the goal is to minimize the expected value of multiplicative regret. We present two related algorithms for solving (1), both running in time O(n²), which improve on the O(n³ log n) algorithm of Kodialam. We use techniques from our algorithms for (1) to obtain an algorithm for (2). %B ACM Trans. Algorithms %V 5 %P 24:1 - 24:34 %8 2009/03// %@ 1549-6325 %G eng %U http://doi.acm.org/10.1145/1497290.1497300 %N 2 %R 10.1145/1497290.1497300 %0 Journal Article %J Journal of Computing and Information Science in Engineering %D 2009 %T Algorithms for extraction of nanowire lengths and positions from optical section microscopy image sequence %A Peng,T. %A Balijepalli,A. %A Gupta,S.K. %A LeBrun,T. W. %B Journal of Computing and Information Science in Engineering %V 9 %P 041007 - 041007 %8 2009/// %G eng %0 Journal Article %J Robotics and Computer-Integrated Manufacturing %D 2009 %T Algorithms for generating multi-stage molding plans for articulated assemblies %A Priyadarshi,Alok K. %A Gupta, Satyandra K. %K In-mold assembly %K mold design %K Process planning %X Multi-stage molding is capable of producing better-quality articulated products at a lower cost. During the multi-stage molding process, assembly operations are performed along with the molding operations. Hence, it gives rise to a new type of planning problem. It is difficult to perform the planning manually because it involves evaluating a large number of combinations and solving complex geometric reasoning problems.
This paper investigates the problem of generating multi-stage molding plans for articulated assemblies. We present a planning framework that allows us to utilize constraints from experimentally proven molding plans. As a part of the planning problem, we determine the molding sequence and intermediate assembly configurations. We present algorithms for all the steps in the planning problem and characterize their computational complexities. Finally, we illustrate our approach with representative examples. %B Robotics and Computer-Integrated Manufacturing %V 25 %P 91 - 106 %8 2009/02// %@ 0736-5845 %G eng %U http://www.sciencedirect.com/science/article/pii/S0736584507001044 %N 1 %R 10.1016/j.rcim.2007.10.002 %0 Conference Paper %B INFOCOM 2009, IEEE %D 2009 %T All Bits Are Not Equal - A Study of IEEE 802.11 Communication Bit Errors %A Han,Bo %A Ji,Lusheng %A Lee,Seungjoon %A Bhattacharjee, Bobby %A Miller,R.R. %K bit error patterns %K bit error statistics %K channel coding %K forward error correction %K frame combining %K IEEE 802.11 %K network coding %K subframe error patterns %K transmission errors %K wireless LAN %X In IEEE 802.11 Wireless LAN (WLAN) systems, techniques such as acknowledgement, retransmission, and transmission rate adaptation, are frame-level mechanisms designed for combating transmission errors. Recently sub-frame level mechanisms such as frame combining have been proposed by the research community. In this paper, we present results obtained from our bit error study for identifying sub-frame error patterns because we believe that identifiable bit error patterns can potentially introduce new opportunities in channel coding, network coding, forward error correction (FEC), and frame combining mechanisms.
We have constructed a number of IEEE 802.11 wireless LAN testbeds and conducted extensive experiments to study the characteristics of bit errors and their location distribution. Conventional wisdom dictates that bit error probability is the result of channel conditions and ought to follow the corresponding distribution. However, our measurement results identify three repeatable bit error patterns that are not induced by channel conditions. We have verified that such error patterns are present in WLAN transmissions in different physical environments and across different wireless LAN hardware platforms. We also discuss our current hypotheses for the reasons behind these bit error probability patterns and how identifying these patterns may help improve WLAN transmission robustness. %B INFOCOM 2009, IEEE %P 1602 - 1610 %8 2009/04// %G eng %R 10.1109/INFCOM.2009.5062078 %0 Journal Article %J Environmental Microbiology Reports %D 2009 %T Analysis of clonally related environmental Vibrio cholerae O1 El Tor isolated before 1992 from Varanasi, India reveals origin of SXT‐ICEs belonging to O139 and O1 serogroups %A Mohapatra,Saswat S. %A Mantri,Chinmay K. %A Mohapatra,Harapriya %A Rita R Colwell %A Singh,Durg V. %X In this study, we report the presence of SXT in environmental Vibrio cholerae O1 El Tor strains isolated before 1992 from Varanasi, India. All isolates, except one, were resistant to Tm, and/or Sul, Sm, Fr, Na and Am. None contained plasmids. PCR and DNA sequencing revealed the presence of SXT containing dfrA1 and/or sulII, strAB in six isolates and dfr18, sulII and strAB in five isolates. Three clinical V. cholerae O1 isolated during 1992 contained the antibiotic resistance gene cassette aadA1 in the class 1 integron. Conjugation experiments, followed by PCR analysis of transconjugants, provided evidence of the transferable nature of SXT and associated antibiotic resistance genes, and its integration into the prfC site.
Results of phylogenetic analysis of the intSXT gene of clonally similar V. cholerae showed a clear difference between dfr18+ and dfrA1+ V. cholerae O1 isolates. This is the first report of the occurrence of SXT harbouring sulII, strAB, dfr18 and/or dfrA1 genes in environmental V. cholerae O1 isolated prior to 1992 from Varanasi, India, and suggests emergence of SXT+ antibiotic-resistant V. cholerae O139 and O1 from an environmental V. cholerae progenitor by acquisition of SXT and antibiotic-resistant gene clusters. %B Environmental Microbiology Reports %V 2 %P 50 - 57 %8 2009/07/21/ %@ 1758-2229 %G eng %U http://onlinelibrary.wiley.com/doi/10.1111/j.1758-2229.2009.00051.x/abstract?userIsAuthenticated=false&deniedAccessCustomisedMessage= %N 1 %R 10.1111/j.1758-2229.2009.00051.x %0 Journal Article %J Magnetics, IEEE Transactions on %D 2009 %T Analytical Description of Quasi-Random Magnetization Relaxation to Equilibrium %A Serpico,C. %A d'Aquino,M. %A Bertotti,G. %A Mayergoyz, Issak D %K averaging technique %K damping %K Hamiltonian perturbed dynamical system %K Landau-Lifshitz dynamics %K magnetic particles %K magnetisation %K probabilistic approach %K probability %K quasi-random magnetization relaxation %K separatrix crossing %K uniformly magnetized particle %X The Landau-Lifshitz (LL) dynamics of a uniformly magnetized particle is considered. The LL equation is written in the form of a Hamiltonian perturbed dynamical system. By using a suitable averaging technique, the equation for the slow dynamics of the energy is derived. The averaging technique breaks up in the case of separatrix crossing. It is shown that, in the limit of small damping, the separatrix crossing can be described by using a probabilistic approach.
%B Magnetics, IEEE Transactions on %V 45 %P 5224 - 5227 %8 2009/11// %@ 0018-9464 %G eng %N 11 %R 10.1109/TMAG.2009.2031067 %0 Conference Paper %B Proceedings of the fourth international conference on Communities and technologies %D 2009 %T Analyzing (social media) networks with NodeXL %A Smith,Marc A. %A Shneiderman, Ben %A Milic-Frayling,Natasa %A Mendes Rodrigues,Eduarda %A Barash,Vladimir %A Dunne,Cody %A Capone,Tony %A Perer,Adam %A Gleave,Eric %K excel %K network analysis %K social media %K social network %K spreadsheet %K Visualization %X We present NodeXL, an extendible toolkit for network overview, discovery and exploration implemented as an add-in to the Microsoft Excel 2007 spreadsheet software. We demonstrate NodeXL data analysis and visualization features with a social media data sample drawn from an enterprise intranet social network. A sequence of NodeXL operations from data import to computation of network statistics and refinement of network visualization through sorting, filtering, and clustering functions is described. These operations reveal sociologically relevant differences in the patterns of interconnection among employee participants in the social media space. The tool and method can be broadly applied. %B Proceedings of the fourth international conference on Communities and technologies %S C&T '09 %I ACM %C New York, NY, USA %P 255 - 264 %8 2009/// %@ 978-1-60558-713-4 %G eng %U http://doi.acm.org/10.1145/1556460.1556497 %R 10.1145/1556460.1556497 %0 Conference Paper %D 2009 %T Analyzing the process of installing rogue software %A Berthier,R. %A Arjona,J. %A Michel Cukier %K Linux %K Linux target computers %K malicious actions %K rogue software installation %K security of data %X This practical experience report presents the results of an experiment aimed at understanding the sequence of malicious actions following a remote compromise. 
The type of rogue software installed during attacks was used to classify and understand sequences of malicious actions. For this experiment, we used four Linux target computers running SSH with simple passwords. During the eight-month data collection period, we recorded a total of 1,171 attack sessions. In these sessions, attackers typed a total of 20,335 commands that we categorized into 24 specific actions. These actions were analyzed based on the type of rogue software installed by attackers. %P 560 - 565 %8 2009/07/29/2 %G eng %R 10.1109/DSN.2009.5270293 %0 Journal Article %J Image Processing, IEEE Transactions on %D 2009 %T Appearance Modeling Using a Geometric Transform %A Li,Jian %A Zhou,S. K %A Chellapa, Rama %K geometric transform %K appearance modeling %K closed contour %K feature curve %K fingerprinting %K image intensity function %K region indicator function %K shape matching %K trace transform %K Radon transform %K image warping %K level set method %K pedestrian identity recognition %K human body part segmentation %K image synthesis %K image resolution %K image segmentation %K Radon transforms %K computational geometry %X A general transform, called the geometric transform (GeT), that models the appearance inside a closed contour is proposed. The proposed GeT is a functional of an image intensity function and a region indicator function derived from a closed contour. It can be designed to combine the shape and appearance information at different resolutions and to generate models invariant to deformation, articulation, or occlusion. By choosing appropriate functionals and region indicator functions, the GeT unifies the Radon transform, the trace transform, and a class of image warpings. By varying the region indicator and the types of features used for appearance modeling, five novel types of GeTs are introduced and applied to fingerprinting the appearance inside a contour.
These include GeTs based on a level set, on shape matching, and on feature curves, as well as a GeT invariant to occlusion and a multiresolution GeT (MRGeT). Applications of GeT to pedestrian identity recognition, human body part segmentation, and image synthesis are illustrated. The proposed approach produces promising results when applied to fingerprinting the appearance of a human and body parts despite the presence of nonrigid deformations and articulated motion. %B Image Processing, IEEE Transactions on %V 18 %P 889 - 902 %8 2009/04// %@ 1057-7149 %G eng %N 4 %R 10.1109/TIP.2008.2011381 %0 Conference Paper %B Proceedings of the 23rd international conference on Supercomputing %D 2009 %T Approximate kernel matrix computation on GPUs for large scale learning applications %A Hussein,Mohamed E %A Abd-Almageed, Wael %K affinity propagation %K algorithms %K arrays %K gpu %K kernel methods %K parallel programming %K performance %K space filling curves %K sparse matrices %X Kernel-based learning methods require quadratic space and time complexities to compute the kernel matrix. These complexities limit the applicability of kernel methods to large scale problems with millions of data points. In this paper, we introduce a novel representation of kernel matrices on Graphics Processing Units (GPU). The novel representation exploits the sparseness of the kernel matrix to address the space complexity problem. It also respects the guidelines for memory access on GPUs, which are critical for good performance, to address the time complexity problem. Our representation utilizes the locality preserving properties of space filling curves to obtain a band approximation of the kernel matrix. To prove the validity of the representation, we use Affinity Propagation, an unsupervised clustering algorithm, as an example of kernel methods. Experimental results show a 40x speedup of AP using our representation without degradation in clustering performance.
%B Proceedings of the 23rd international conference on Supercomputing %S ICS '09 %I ACM %C Yorktown Heights, NY, USA %P 511 - 512 %8 2009/// %@ 978-1-60558-498-0 %G eng %R 10.1145/1542275.1542355 %0 Conference Paper %B IEEE 25th International Conference on Data Engineering, 2009. ICDE '09 %D 2009 %T ApproxRank: Estimating Rank for a Subgraph %A Yao Wu %A Raschid, Louiqa %K Application software %K ApproxRank %K Computer applications %K Crawlers %K customized semantic query answering %K Data engineering %K Educational institutions %K Explosions %K focused crawlers %K global Web graph %K graph theory %K IdealRank algorithm %K Internet %K localized search engines %K PageRank-style %K personalized search %K Query processing %K Runtime %K Search engines %K Stochastic processes %K Web pages %X Customized semantic query answering, personalized search, focused crawlers and localized search engines frequently focus on ranking the pages contained within a subgraph of the global Web graph. The challenge for these applications is to compute PageRank-style scores efficiently on the subgraph, i.e., the ranking must reflect the global link structure of the Web graph but it must do so without paying the high overhead associated with a global computation. We propose a framework of an exact solution and an approximate solution for computing ranking on a subgraph. The IdealRank algorithm is an exact solution with the assumption that the scores of external pages are known. We prove that the IdealRank scores for pages in the subgraph converge. Since the PageRank-style scores of external pages may not typically be available, we propose the ApproxRank algorithm to estimate scores for the subgraph. Both IdealRank and ApproxRank represent the set of external pages with an external node L1 and extend the subgraph with links to L1. They also modify the PageRank-style transition matrix with respect to L1. 
We analyze the L1 distance between IdealRank scores and ApproxRank scores of the subgraph and show that it is within a constant factor of the L1 distance of the external pages (e.g., the true PageRank scores and uniform scores assumed by ApproxRank). We compare ApproxRank and a stochastic complementation approach (SC), a current best solution for this problem, on different types of subgraphs. ApproxRank has similar or superior performance to SC and typically improves on the runtime performance of SC by an order of magnitude or better. We demonstrate that ApproxRank provides a good approximation to PageRank for a variety of subgraphs. %B IEEE 25th International Conference on Data Engineering, 2009. ICDE '09 %I IEEE %P 54 - 65 %8 2009/04/29/March %@ 978-1-4244-3422-0 %G eng %R 10.1109/ICDE.2009.108 %0 Conference Paper %B Proceedings of the ACL-IJCNLP 2009 Conference Short Papers %D 2009 %T Arabic Cross-Document Coreference Resolution %A Sayeed,A. %A Elsayed,T. %A Garera,N. %A Alexander,D. %A Xu,T. %A Oard, Douglas %A Yarowsky,D. %A Piatko,C. %B Proceedings of the ACL-IJCNLP 2009 Conference Short Papers %P 357 - 360 %8 2009/// %G eng %0 Journal Article %J International Journal of Embedded Systems %D 2009 %T An architectural level design methodology for smart camera applications %A Saha,S. %A Kianzad,V. %A Schlessman,J. %A Aggarwal,G. %A Bhattacharyya, Shuvra S. %A Chellapa, Rama %A Wolf,W. %X Today's embedded computing applications are characterised by increased functionality, and hence increased design complexity and processing requirements. The resulting design spaces are vast and designers are typically able to evaluate only small subsets of solutions due to a lack of efficient design tools.
In this paper, we propose an architectural level design methodology that provides a means for comprehensive design space exploration for smart camera applications, enables designers to select higher-quality solutions, and provides substantial savings on the overall cost of the system. We present efficient, accurate and intuitive models for performance estimation and validate them with experiments. %B International Journal of Embedded Systems %V 4 %P 83 - 97 %8 2009/// %G eng %N 1 %0 Journal Article %J Nucleic Acids Research %D 2009 %T ARDB-Antibiotic Resistance Genes Database %A Liu,B. %A Pop, Mihai %X The treatment of infections is increasingly compromised by the ability of bacteria to develop resistance to antibiotics through mutations or through the acquisition of resistance genes. Antibiotic resistance genes also have the potential to be used for bio-terror purposes through genetically modified organisms. In order to facilitate the identification and characterization of these genes, we have created a manually curated database—the Antibiotic Resistance Genes Database (ARDB)—unifying most of the publicly available information on antibiotic resistance. Each gene and resistance type is annotated with rich information, including resistance profile, mechanism of action, ontology, COG and CDD annotations, as well as external links to sequence and protein databases. Our database also supports sequence similarity searches and implements an initial version of a tool for characterizing common mutations that confer antibiotic resistance. The information we provide can be used as a compendium of antibiotic resistance factors as well as to identify the resistance genes of newly sequenced genes, genomes, or metagenomes. Currently, ARDB contains resistance information for 13 293 genes, 377 types, 257 antibiotics, 632 genomes, 933 species and 124 genera. ARDB is available at http://ardb.cbcb.umd.edu/.
%B Nucleic Acids Research %V 37 %P D443 - D447 %8 2009/01/01/ %@ 0305-1048, 1362-4962 %G eng %U http://nar.oxfordjournals.org/content/37/suppl_1/D443.short %N Database %R 10.1093/nar/gkn656 %0 Book Section %B Springer Handbook of Automation %D 2009 %T Artificial Intelligence and Automation %A Nau, Dana S. %E Nof,Shimon Y. %K engineering %X Artificial intelligence (AI) focuses on getting machines to do things that we would call intelligent behavior. Intelligence – whether artificial or otherwise – does not have a precise definition, but there are many activities and behaviors that are considered intelligent when exhibited by humans and animals. Examples include seeing, learning, using tools, understanding human speech, reasoning, making good guesses, playing games, and formulating plans and objectives. AI focuses on how to get machines or computers to perform these same kinds of activities, though not necessarily in the same way that humans or animals might do them. %B Springer Handbook of Automation %I Springer Berlin Heidelberg %P 249 - 268 %8 2009/// %@ 978-3-540-78831-7 %G eng %U http://www.springerlink.com/content/q7051r251p38p51h/abstract/ %0 Journal Article %J Journal of Systems and Software %D 2009 %T An assessment of systems and software engineering scholars and institutions (2002–2006) %A Wong,W. Eric %A Tse,T.H. %A Glass,Robert L. %A Basili, Victor R. %A Chen,T.Y. %K Research publications %K Systems and software engineering %K Top institutions %K Top scholars %X This paper summarizes a survey of publications in the field of systems and software engineering from 2002 to 2006. The survey is an ongoing, annual event that identifies the top 15 scholars and institutions over a 5-year period. The rankings are calculated based on the number of papers published in TSE, TOSEM, JSS, SPE, EMSE, IST, and Software.
The top-ranked institution is Korea Advanced Institute of Science and Technology, Korea, and the top-ranked scholar is Magne Jørgensen of Simula Research Laboratory, Norway. %B Journal of Systems and Software %V 82 %P 1370 - 1373 %8 2009/08// %@ 0164-1212 %G eng %U http://www.sciencedirect.com/science/article/pii/S0164121209001265 %N 8 %R 10.1016/j.jss.2009.06.018 %0 Conference Paper %B Robotics and Automation, 2009. ICRA '09. IEEE International Conference on %D 2009 %T Assigning cameras to subjects in video surveillance systems %A El-Alfy,H. %A Jacobs, David W. %A Davis, Larry S. %K camera assignment %K multiple agent tracking %K bipartite graph %K maximum matching %K augmenting path %K minimum cost matching algorithm %K computation time reduction %K obstacle detection %K target tracking %K video cameras %K video segment %K video surveillance %K video surveillance system %K graph theory %K image matching %X We consider the problem of tracking multiple agents moving amongst obstacles, using multiple cameras. Given an environment with obstacles, and many people moving through it, we construct a separate narrow field of view video for as many people as possible, by stitching together video segments from multiple cameras over time. We employ a novel approach to assign cameras to people as a function of time, with camera switches when needed. The problem is modeled as a bipartite graph and the solution corresponds to a maximum matching. As people move, the solution is efficiently updated by computing an augmenting path rather than by solving for a new matching. This reduces computation time by an order of magnitude. In addition, solving for the shortest augmenting path minimizes the number of camera switches at each update. When not all people can be covered by the available cameras, we cluster as many people as possible into small groups, then assign cameras to groups using a minimum cost matching algorithm.
We test our method using numerous runs from different simulators. %B Robotics and Automation, 2009. ICRA '09. IEEE International Conference on %P 837 - 843 %8 2009/05// %G eng %R 10.1109/ROBOT.2009.5152753 %0 Conference Paper %B Proceedings of the 16th ACM conference on Computer and communications security %D 2009 %T Attacking cryptographic schemes based on "perturbation polynomials" %A Albrecht,Martin %A Gentry,Craig %A Halevi,Shai %A Katz, Jonathan %K pairwise key establishment %K random perturbation polynomial %K sensor network security %X We show attacks on several cryptographic schemes that have recently been proposed for achieving various security goals in sensor networks. Roughly speaking, these schemes all use "perturbation polynomials" to add "noise" to polynomial-based systems that offer information-theoretic security, in an attempt to increase the resilience threshold while maintaining efficiency. We show that the heuristic security arguments given for these modified schemes do not hold, and that they can be completely broken once we allow even a slight extension of the parameters beyond those achieved by the underlying information-theoretic schemes. Our attacks apply to the key predistribution scheme of Zhang et al. (MobiHoc 2007), the access-control schemes of Subramanian et al. (PerCom 2007), and the authentication schemes of Zhang et al. (INFOCOM 2008). Our results cast doubt on the viability of using "perturbation polynomials" for designing secure cryptographic schemes. %B Proceedings of the 16th ACM conference on Computer and communications security %S CCS '09 %I ACM %C New York, NY, USA %P 1 - 10 %8 2009/// %@ 978-1-60558-894-0 %G eng %U http://doi.acm.org/10.1145/1653662.1653664 %R 10.1145/1653662.1653664 %0 Journal Article %J Proceedings of the American Society for Information Science and Technology %D 2009 %T Automatic classification of human values: Applying computational thinking to information ethics %A Fleischmann,K.R.
%A Oard, Douglas %A Cheng,A.S. %A Wang,P. %A Ishita,E. %B Proceedings of the American Society for Information Science and Technology %V 46 %P 1 - 4 %8 2009/// %G eng %N 1 %0 Journal Article %J Nature Structural & Molecular Biology %D 2009 %T Avid interactions underlie the Lys63-linked polyubiquitin binding specificities observed for UBA domains %A Sims,Joshua J. %A Haririnia,Aydin %A Dickinson,Bryan C. %A Fushman, David %A Cohen,Robert E. %K apoptosis %K basic cellular processes %K Biochemistry %K biophysics %K cell biology %K cell cycle %K cell surface proteins %K cell-cell interactions %K checkpoints %K chromatin %K chromatin remodeling %K chromatin structure %K content %K DNA recombination %K DNA repair %K DNA replication %K Gene expression %K Genetics %K intracellular signaling %K journal %K macromolecules %K mechanism %K membrane processes %K molecular %K molecular basis of disease %K molecular biology %K molecular interactions %K multi-component complexes %K nature publishing group %K nature structural molecular biology %K nucleic acids %K protein degradation %K protein folding %K protein processing %K Proteins %K regulation of transcription %K regulation of translation %K RNA %K RNA processing %K RNAi %K signal transduction %K single molecule studies %K structure and function of proteins %K transcription %K translation %X Ubiquitin (denoted Ub) receptor proteins as a group must contain a diverse set of binding specificities to distinguish the many forms of polyubiquitin (polyUb) signals. Previous studies suggested that the large class of ubiquitin-associated (UBA) domains contains members with intrinsic specificity for Lys63-linked polyUb or Lys48-linked polyUb, thus explaining how UBA-containing proteins can mediate diverse signaling events. 
Here we show that previously observed Lys63-polyUb selectivity in UBA domains is the result of an artifact in which the dimeric fusion partner, glutathione S-transferase (GST), positions two UBAs for higher affinity, avid interactions with Lys63-polyUb, but not with Lys48-polyUb. Freed from GST, these UBAs are either nonselective or prefer Lys48-polyUb. Accordingly, NMR experiments reveal no Lys63-polyUb–specific binding epitopes for these UBAs. We reexamine previous conclusions based on GST-UBAs and present an alternative model for how UBAs achieve a diverse range of linkage specificities. %B Nature Structural & Molecular Biology %V 16 %P 883 - 889 %8 2009/// %@ 1545-9993 %G eng %U http://www.nature.com/nsmb/journal/v16/n8/abs/nsmb.1637.html %N 8 %R 10.1038/nsmb.1637 %0 Conference Paper %B The Tenth International ACM SIGACCESS Conference on Computers and Accessibility %D 2008 %T A Camera Phone Based Currency Reader for the Visually Impaired %A Liu,Xu %A David Doermann %X In this paper we present a camera phone-based currency reader for the visually impaired to denominate the face values of U.S. paper dollars. Currently, bills can only be identified visually, and this situation will continue for the foreseeable future. Our solution harvests the imaging and computational power of camera phones to read bills. Since it is impractical for the visually impaired to capture a high-quality image, our currency reader performs real-time processing on each captured frame as the camera approaches the bill. We develop efficient background subtraction and perspective correction algorithms and train our currency reader using the efficient AdaBoost framework. Our currency reader processes 10 frames/second and achieves a false positive rate of 10^-4. Major smart phone platforms, including Symbian and Windows Mobile, are supported.
%B The Tenth International ACM SIGACCESS Conference on Computers and Accessibility %P 305 - 306 %8 2008/10// %G eng %0 Conference Paper %B ACM International Conference on Multimedia %D 2008 %T A Camera-based Mobile Data Channel: Capacity and Analysis %A Liu,Xu %A David Doermann %A Li,H. %X In this paper we propose a novel application, color Video Code (V-Code), and analyze its data transmission capacity through camera-based mobile data channels. Users can use the camera on a mobile device (PDA or camera phone) as a passive and pervasive data channel to download data encoded as a sequence of color visual patterns. The color V-Code is animated on a display, acquired by the camera, and decoded by pre-embedded software on the mobile device. An interesting question is what data transmission capacity this channel can achieve, both theoretically and in practice. To answer this question, we build a camera channel model to measure color degradation using information theory and show that the capacity of the camera channel can be improved with optimized color selection through color calibration. After initialization, color models are learned automatically as downloading proceeds. We address the problem of precise registration and implement a fast perspective correction method to run the decoder in real time on a resource-constrained device. With the optimized color set and an efficient implementation, we achieve a transmission bit rate of 15.4kbps on a common iMate Jamin phone (200MHz CPU). This speed is faster than the average GPRS bit rate (12kbps).
%B ACM International Conference on Multimedia %P 359 - 368 %8 2008/// %G eng %0 Journal Article %J Journal on Computing and Cultural Heritage %D 2008 %T Access to recorded interviews %A Jong,Franciska De %A Oard, Douglas %A Heeren,Willemijn %A Ordelman,Roeland %B Journal on Computing and Cultural Heritage %V 1 %P 1 - 27 %8 2008/06/01/ %@ 15564673 %G eng %U http://dl.acm.org/citation.cfm?id=1367083 %R 10.1145/1367080.1367083 %0 Journal Article %J SIGCOMM Comput. Commun. Rev. %D 2008 %T Accountable internet protocol (AIP) %A Andersen,David G. %A Balakrishnan,Hari %A Feamster, Nick %A Koponen,Teemu %A Moon,Daekyeong %A Shenker,Scott %K accountability %K address %K internet architecture %K scalability %K Security %X This paper presents AIP (Accountable Internet Protocol), a network architecture that provides accountability as a first-order property. AIP uses a hierarchy of self-certifying addresses, in which each component is derived from the public key of the corresponding entity. We discuss how AIP enables simple solutions to source spoofing, denial-of-service, route hijacking, and route forgery. We also discuss how AIP's design meets the challenges of scaling, key management, and traffic engineering. %B SIGCOMM Comput. Commun. Rev. %V 38 %P 339 - 350 %8 2008/08// %@ 0146-4833 %G eng %U http://doi.acm.org/10.1145/1402946.1402997 %N 4 %R 10.1145/1402946.1402997 %0 Journal Article %J Proc. of the 6th International Conference on Language Resources and Evaluation (LREC’08) %D 2008 %T The ACL Anthology Reference Corpus: a reference dataset for bibliographic research in computational linguistics %A Bird,S. %A Dale,R. %A Dorr, Bonnie J %A Gibson,B. %A Joseph,M.T. %A Kan,M.Y. %A Lee,D. %A Powley,B. %A Radev,D.R. %A Tan,Y.F.
%X The ACL Anthology is a digital archive of conference and journal papers in natural language processing and computational linguistics. Its primary purpose is to serve as a reference repository of research results, but we believe that it can also be an object of study and a platform for research in its own right. We describe an enriched and standardized reference corpus derived from the ACL Anthology that can be used for research in scholarly document processing. This corpus, which we call the ACL Anthology Reference Corpus (ACL ARC), brings together the recent activities of a number of research groups around the world. Our goal is to make the corpus widely available, and to encourage other researchers to use it as a standard testbed for experiments in both bibliographic and bibliometric research. %B Proc. of the 6th International Conference on Language Resources and Evaluation (LREC’08) %P 1755 - 1759 %8 2008/// %G eng %0 Conference Paper %B Computer Vision and Pattern Recognition, 2008. CVPR 2008. IEEE Conference on %D 2008 %T Action recognition using ballistic dynamics %A Vitaladevuni,S.N. %A Kellokumpu,V. %A Davis, Larry S. %K Bayesian framework %K action recognition %K ballistic dynamics %K gesture recognition task %K human movement planning %K interactive movements %K motion analysis %K image recognition %K image segmentation %K video signal processing %K Motion History Image feature %K person-centric morphological labels %K psycho-kinesiological studies %X We present a Bayesian framework for action recognition through ballistic dynamics. Psycho-kinesiological studies indicate that ballistic movements form the natural units for human movement planning. The framework leads to an efficient and robust algorithm for temporally segmenting videos into atomic movements. Individual movements are annotated with person-centric morphological labels called ballistic verbs.
This is tested on a dataset of interactive movements, achieving high recognition rates. The approach is also applied to a gesture recognition task, improving a previously reported recognition rate from 84% to 92%. Consideration of ballistic dynamics enhances the performance of the popular Motion History Image feature. We also illustrate the approach's general utility on real-world videos. Experiments indicate that the method is robust to view, style and appearance variations. %B Computer Vision and Pattern Recognition, 2008. CVPR 2008. IEEE Conference on %P 1 - 8 %8 2008/06// %G eng %R 10.1109/CVPR.2008.4587806 %0 Journal Article %J Artificial Intelligence %D 2008 %T Active logic semantics for a single agent in a static world %A Anderson,Michael L. %A Gomaa,Walid %A Grant,John %A Perlis, Don %K Active logic %K Autonomous agents %K Brittleness %K Logic %K Nonmonotonic logic %K Paraconsistent logic %K semantics %K Soundness %K TIME %X For some time we have been developing, and have had significant practical success with, a time-sensitive, contradiction-tolerant logical reasoning engine called the active logic machine (ALMA). The current paper details a semantics for a general version of the underlying logical formalism, active logic. Central to active logic are special rules controlling the inheritance of beliefs in general (and of beliefs about the current time in particular), very tight controls on what can be derived from direct contradictions (P&¬P), and mechanisms allowing an agent to represent and reason about its own beliefs and past reasoning. Furthermore, inspired by the notion that until an agent notices that a set of beliefs is contradictory, that set seems consistent (and the agent therefore reasons with it as if it were consistent), we introduce an "apperception function" that represents an agent's limited awareness of its own beliefs, and serves to modify inconsistent belief sets so as to yield consistent sets.
Using these ideas, we introduce a new definition of logical consequence in the context of active logic, as well as a new definition of soundness such that, when reasoning with consistent premises, all classically sound rules remain sound in our new sense. However, not everything that is classically sound remains sound in our sense, for by classical definitions, all rules with contradictory premises are vacuously sound, whereas in active logic not everything follows from a contradiction. %B Artificial Intelligence %V 172 %P 1045 - 1063 %8 2008/05// %@ 0004-3702 %G eng %U http://www.sciencedirect.com/science/article/pii/S0004370207001993 %N 8-9 %R 10.1016/j.artint.2007.11.005 %0 Journal Article %J Image Processing, IEEE Transactions on %D 2008 %T Activity Modeling Using Event Probability Sequences %A Cuntoor, N.P. %A Yegnanarayana,B. %A Chellapa, Rama %K Models, Biological %K Models, Computer-Assisted %K Models, Statistical %K Motor Activity %K Movement %K Pattern Recognition, Automated %K Reproducibility of Results %K Sensitivity and Specificity %K Video Recording %K Carnegie Mellon University %K Credo Intelligence, Inc. %K Motion Capture dataset %K Transportation Security Administration %K University of Central Florida %K airport tarmac surveillance dataset %K anomaly detection %K event probability sequence %K event representation %K hidden Markov model %K human action dataset %K human activity recognition %X Changes in motion properties of trajectories provide useful cues for modeling and recognizing human activities. We associate an event with significant changes that are localized in time and space, and represent activities as a sequence of such events. The localized nature of events allows for detection of subtle changes or anomalies in activities. In this paper, we present a probabilistic approach for representing events using the hidden Markov model (HMM) framework. Using trained HMMs for activities, an event probability sequence is computed for every motion trajectory in the training set.
It reflects the probability of an event occurring at every time instant. Though the parameters of the trained HMMs depend on viewing direction, the event probability sequences are robust to changes in viewing direction. We describe sufficient conditions for the existence of view invariance. The usefulness of the proposed event representation is illustrated using activity recognition and anomaly detection. Experiments using the indoor University of Central Florida human action dataset, the Carnegie Mellon University Credo Intelligence, Inc., Motion Capture dataset, and the outdoor Transportation Security Administration airport tarmac surveillance dataset show encouraging results. %B Image Processing, IEEE Transactions on %V 17 %P 594 - 607 %8 2008/04// %@ 1057-7149 %G eng %N 4 %R 10.1109/TIP.2008.916991 %0 Journal Article %J J. Image Video Process. %D 2008 %T Activity representation using 3D shape models %A Abdelkader,Mohamed F. %A Roy-Chowdhury,Amit K. %A Chellapa, Rama %A Akdemir,Umut %X We present a method for characterizing human activities using 3D deformable shape models. The motion trajectories of points extracted from objects involved in the activity are used to build models for each activity, and these models are used for classification and detection of unusual activities. The deformable models are learnt using the factorization theorem for nonrigid 3D models. We present a theory for characterizing the degree of deformation in the 3D models from a sequence of tracked observations. This degree, termed as deformation index (DI), is used as an input to the 3D model estimation process. We study the special case of ground plane activities in detail because of its importance in video surveillance applications. We present results of our activity modeling approach using videos of both high-resolution single individual activities and ground plane surveillance activities. %B J. Image Video Process. 
%V 2008 %P 5:1 - 5:16 %8 2008/01// %@ 1687-5176 %G eng %U http://dx.doi.org/10.1155/2008/347050 %R 10.1155/2008/347050 %0 Conference Paper %B Proceedings of the twenty-sixth annual SIGCHI conference on Human factors in computing systems %D 2008 %T Activity sensing in the wild: a field trial of ubifit garden %A Consolvo,S. %A McDonald,D.W. %A Toscos,T. %A Chen,M.Y. %A Jon Froehlich %A Harrison,B. %A Klasnja,P. %A LaMarca,A. %A LeGrand,L. %A Libby,R. %A others %B Proceedings of the twenty-sixth annual SIGCHI conference on Human factors in computing systems %P 1797 - 1806 %8 2008/// %G eng %0 Journal Article %J Electronic Transactions on Numerical Analysis %D 2008 %T Adaptive Constraint Reduction for Training Support Vector Machines %A Jung,Jin Hyuk %A O'Leary, Dianne P. %A Tits,Andre' L. %B Electronic Transactions on Numerical Analysis %V 31 %P 156 - 177 %8 2008/// %G eng %U http://etna.mcs.kent.edu/vol.31.2008/pp156-177.dir/pp156-177.pdf %0 Journal Article %J Physical Review A %D 2008 %T The Adiabatic Theorem in the Presence of Noise %A O'Hara,Michael J. %A O'Leary, Dianne P. %B Physical Review A %V 77 %P 042319, 20 pages %8 2008/// %G eng %U http://link.aps.org/abstract/PRA/v77/e042319 %R 10.1103/PhysRevA.77.042319 %0 Conference Paper %B In G. Visaggio (Ed.), 12th International Conference on Evaluation and Assessment in Software Engineering. BCS eWIC %D 2008 %T Adopting Curvilinear Component Analysis to Improve Software Cost Estimation Accuracy. Model, Application Strategy, and an Experimental Verification %A Sarcia,Salvatore A. %A Cantone,Giovanni %A Basili, Victor R. %X Cost estimation is a critical issue for software organizations. Good estimates can help us make more informed decisions (controlling and planning software risks), if they are reliable (correct) and valid (stable).
In this study, we apply a variable reduction technique (based on auto-associative feed-forward neural networks, called Curvilinear component analysis) to log-linear regression functions calibrated with ordinary least squares. Based on a COCOMO 81 data set, we show that Curvilinear component analysis can improve the estimation model accuracy by turning the initial input variables into an equivalent and more compact representation. We show that the models obtained by applying Curvilinear component analysis are more parsimonious, correct, and reliable. %B In G. Visaggio (Ed.), 12th International Conference on Evaluation and Assessment in Software Engineering. BCS eWIC %8 2008/// %G eng %0 Journal Article %J Lecture Notes in Computer Science %D 2008 %T Advances in multilingual and multimodal information retrieval %A Peters,C. %A Jijkoun,V. %A Mandl,T. %A Müller,H. %A Oard, Douglas %A Peñas,A. %A Santos,D. %B Lecture Notes in Computer Science %V 5152 %8 2008/// %G eng %0 Journal Article %J Proceedings of the American Society for Information Science and Technology %D 2008 %T Advancing social science research by applying computational linguistics %A Cheng,A.S. %A Fleischmann,K.R. %A Wang,P. %A Oard, Douglas %B Proceedings of the American Society for Information Science and Technology %V 45 %P 1 - 12 %8 2008/// %G eng %N 1 %0 Journal Article %J Journal of Molecular Biology %D 2008 %T Affinity Makes the Difference: Nonselective Interaction of the UBA Domain of Ubiquilin-1 with Monomeric Ubiquitin and Polyubiquitin Chains %A Zhang,Daoning %A Raasi,Shahri %A Fushman, David %K polyubiquitin %K protein–protein interaction %K UBA domain %K ubiquilin %K ubiquitin %X Ubiquilin/PLIC proteins belong to the family of UBL–UBA proteins implicated in the regulation of the ubiquitin-dependent proteasomal degradation of cellular proteins. A human presenilin-interacting protein, ubiquilin-1, has been suggested as a potential therapeutic target for treating Huntington's disease.
Ubiquilin's interactions with mono- and polyubiquitins are mediated by its UBA domain, which is one of the tightest ubiquitin binders among known ubiquitin-binding domains. Here we report the three-dimensional structure of the UBA domain of ubiquilin-1 (UQ1-UBA) free in solution and in complex with ubiquitin. UQ1-UBA forms a compact three-helix bundle structurally similar to other known UBAs, and binds to the hydrophobic patch on ubiquitin with a Kd of 20 μM. To gain structural insights into UQ1-UBA's interactions with polyubiquitin chains, we have mapped the binding interface between UQ1-UBA and Lys48- and Lys63-linked di-ubiquitins and characterized the strength of UQ1-UBA binding to these chains. Our NMR data show that UQ1-UBA interacts with the individual ubiquitin units in both chains in a mode similar to its interaction with mono-ubiquitin, although with an improved binding affinity for the chains. Our results indicate that, in contrast to UBA2 of hHR23A that has strong binding preference for Lys48-linked chains, UQ1-UBA shows little or no binding selectivity toward a particular chain linkage or between the two ubiquitin moieties in the same chain. The structural data obtained in this study provide insights into the possible structural reasons for the diversity of polyubiquitin chain recognition by UBA domains. %B Journal of Molecular Biology %V 377 %P 162 - 180 %8 2008/03/14/ %@ 0022-2836 %G eng %U http://www.sciencedirect.com/science/article/pii/S0022283607016452 %N 1 %R 10.1016/j.jmb.2007.12.029 %0 Journal Article %J Topics in Cryptology–CT-RSA 2008 %D 2008 %T Aggregate message authentication codes %A Katz, Jonathan %A Lindell,A. 
%X We propose and investigate the notion of aggregate message authentication codes (MACs) which have the property that multiple MAC tags, computed by (possibly) different senders on multiple (possibly different) messages, can be aggregated into a shorter tag that can still be verified by a recipient who shares a distinct key with each sender. We suggest aggregate MACs as an appropriate tool for authenticated communication in mobile ad-hoc networks or other settings where resource-constrained devices share distinct keys with a single entity (such as a base station), and communication is an expensive resource. %B Topics in Cryptology–CT-RSA 2008 %P 155 - 169 %8 2008/// %G eng %R 10.1007/978-3-540-79263-5_10 %0 Conference Paper %B Proceedings of the 23rd national conference on Artificial intelligence %D 2008 %T An AGM-based belief revision mechanism for probabilistic spatio-temporal logics %A Parker,A. %A Infantes,G. %A V.S. Subrahmanian %A Grant,J. %B Proceedings of the 23rd national conference on Artificial intelligence %P 511 - 516 %8 2008/// %G eng %0 Journal Article %J ACM Trans. Math. Softw. %D 2008 %T Algorithm 879: EIGENTEST—a test matrix generator for large-scale eigenproblems %A Lee,Che-Rung %A Stewart, G.W. %K Eigensystem %K test matrix generation %X Eigentest is a package that produces real test matrices with known eigensystems. A test matrix, called an eigenmat, is generated in a factored form, in which the user can specify the eigenvalues and has some control over the condition of the eigenvalues and eigenvectors. An eigenmat A of order n requires only O(n) storage for its representation. Auxiliary programs permit the computation of (A − sI)b, (A − sI)Tb, (A − sI)−1 b, and (A − sI)−T b in O(n) operations. A special routine computes specified eigenvectors of an eigenmat and the condition of its eigenvalue. Thus eigenmats are suitable for testing algorithms based on Krylov sequences, as well as others based on matrix-vector products. 
This article introduces the eigenmat and describes implementations in Fortran 77, Fortran 95, C, and Matlab. %B ACM Trans. Math. Softw. %V 35 %P 7:1–7:11 %8 2008/07// %@ 0098-3500 %G eng %U http://doi.acm.org/10.1145/1377603.1377610 %N 1 %R 10.1145/1377603.1377610 %0 Journal Article %J Image Processing, IEEE Transactions on %D 2008 %T Algorithmic and Architectural Optimizations for Computationally Efficient Particle Filtering %A Sankaranarayanan,A. C %A Srivastava, A. %A Chellappa, Rama %K Models, Statistical %K Pattern Recognition, Automated %K Reproducibility of Results %K Sensitivity and Specificity %K Signal Processing, Computer-Assisted %K Video Recording %K convex programming %K independent Metropolis Hastings sampler %K non-Gaussian noise process %K nonlinear dynamical system filtering %K particle filtering algorithm %K pipelined architectural optimization %K video sequences %K visual tracking %K image sequences %X In this paper, we analyze the computational challenges in implementing particle filtering, especially for video sequences. Particle filtering is a technique used for filtering nonlinear dynamical systems driven by non-Gaussian noise processes. It has found widespread applications in detection, navigation, and tracking problems. Although, in general, particle filtering methods yield improved results, it is difficult to achieve real time performance. In this paper, we analyze the computational drawbacks of traditional particle filtering algorithms, and present a method for implementing the particle filter using the Independent Metropolis Hastings sampler, which is highly amenable to pipelined implementations and parallelization. We analyze the implementations of the proposed algorithm, and, in particular, concentrate on implementations that have minimum processing times. It is shown that the design parameters for the fastest implementation can be chosen by solving a set of convex programs.
The proposed computational methodology was verified using a cluster of PCs for the application of visual tracking. We demonstrate a linear speedup of the algorithm using the methodology proposed in the paper. %B Image Processing, IEEE Transactions on %V 17 %P 737 - 748 %8 2008/05// %@ 1057-7149 %G eng %N 5 %R 10.1109/TIP.2008.920760 %0 Journal Article %J Theoretical Computer Science %D 2008 %T Algorithms for computing a parameterized st-orientation %A Charalampos Papamanthou %A Tollis, Ioannis G. %K Graph algorithms %K Longest path %K Planar graphs %K st-numberings %X st-orientations (st-numberings) or bipolar orientations of undirected graphs are central to many graph algorithms and applications. Several algorithms have been proposed in the past to compute an st-orientation of a biconnected graph. In this paper, we present new algorithms that compute such orientations with certain (parameterized) characteristics in the final st-oriented graph, such as the length of the longest path. This work has many applications, including Graph Drawing and Network Routing, where the length of the longest path is vital in deciding certain features of the final solution. This work applies to other difficult problems as well, such as graph coloring and, of course, longest path. We present extended theoretical and experimental results which show that our technique is efficient and performs well in practice. %B Theoretical Computer Science %V 408 %P 224 - 240 %8 2008/11/28/ %@ 0304-3975 %G eng %U http://www.sciencedirect.com/science/article/pii/S0304397508005653 %N 2–3 %! Theoretical Computer Science %0 Journal Article %J Journal of Computing and Information Science in Engineering %D 2008 %T Algorithms for Generating Adaptive Projection Patterns for 3D Shape Measurement %A Peng,T. %A Gupta,S.K.
%B Journal of Computing and Information Science in Engineering %V 8 %P 031009 %8 2008/// %G eng %0 Book Section %B Algorithmic Aspects of Wireless Sensor Networks %D 2008 %T Algorithms for Location Estimation Based on RSSI Sampling %A Charalampos Papamanthou %A Preparata, Franco P. %A Tamassia, Roberto %E Fekete, Sándor P. %K Algorithm Analysis and Problem Complexity %K Computer Communication Networks %K Data structures %K Discrete Mathematics in Computer Science %K Information Systems and Communication Service %X In this paper, we re-examine the RSSI measurement model for location estimation and provide the first detailed formulation of the probability distribution of the position of a sensor node. We also show how to use this probabilistic model to efficiently compute a good estimation of the position of the sensor node by sampling multiple readings from the beacons (where we do not merely use the mean of the samples) and then minimizing a function with an acceptable computational effort. The results of the simulation of our method in TOSSIM indicate that the location of the sensor node can be computed in a small amount of time and that the quality of the solution is competitive with previous approaches. %B Algorithmic Aspects of Wireless Sensor Networks %S Lecture Notes in Computer Science %I Springer Berlin Heidelberg %P 72 - 86 %8 2008/01/01/ %@ 978-3-540-92861-4, 978-3-540-92862-1 %G eng %U http://link.springer.com/chapter/10.1007/978-3-540-92862-1_7 %0 Conference Paper %B Proceedings of the twenty-sixth annual SIGCHI conference on Human factors in computing systems %D 2008 %T Aligning temporal data by sentinel events: discovering patterns in electronic health records %A Wang,Taowei David %A Plaisant, Catherine %A Quinn,Alexander J.
%A Stanchak,Roman %A Murphy,Shawn %A Shneiderman, Ben %K electronic health record %K Evaluation %K Information Visualization %K search %K temporal data %K Uncertainty %X Electronic Health Records (EHRs) and other temporal databases contain hidden patterns that reveal important cause-and-effect phenomena. Finding these patterns is a challenge when using traditional query languages and tabular displays. We present an interactive visual tool that complements query formulation by providing operations to align, rank and filter the results, and to visualize estimates of the intervals of validity of the data. Display of patient histories aligned on sentinel events (such as a first heart attack) enables users to spot precursor, co-occurring, and aftereffect events. A controlled study demonstrates the benefits of providing alignment (with a 61% speed improvement for complex tasks). A qualitative study and interviews with medical professionals demonstrate that the interface can be learned quickly and seems to address their needs. %B Proceedings of the twenty-sixth annual SIGCHI conference on Human factors in computing systems %S CHI '08 %I ACM %C New York, NY, USA %P 457 - 466 %8 2008/// %@ 978-1-60558-011-1 %G eng %U http://doi.acm.org/10.1145/1357054.1357129 %R 10.1145/1357054.1357129 %0 Conference Paper %B Practice and Research Techniques, 2008. TAIC PART '08. Testing: Academic Industrial Conference %D 2008 %T Alternating GUI Test Generation and Execution %A Xun Yuan %A Memon, Atif M. %K ALT %K complex failure-causing interactions %K event sequences %K graphical user interface %K Graphical user interfaces %K GUI test generation %K Testing %X Users of today's software perform tasks by interacting with a graphical user interface (GUI) front-end via sequences of input events. Due to the flexibility offered by most GUIs, the number of event sequences grows exponentially with length.
One ubiquitous challenge of GUI testing is to selectively generate those sequences that lead to potentially problematic states. This paper presents ALT, a new technique that generates GUI test cases in batches, by leveraging GUI run-time information from a previously run batch to obtain the next batch. Each successive batch consists of "longer" test cases that expand the state space to be explored, yet prune the "unimportant" states. The "alternating" nature of ALT allows it to enhance the next batch by leveraging certain relationships between GUI events (e.g., one enables the other, one alters the other's execution) that are revealed only at run-time and non-trivial to infer statically. An empirical study on four fielded GUI-based applications demonstrates that ALT is successful at identifying complex failure-causing interactions between GUI events. %B Practice and Research Techniques, 2008. TAIC PART '08. Testing: Academic Industrial Conference %P 23 - 32 %8 2008/// %G eng %R 10.1109/TAIC-PART.2008.10 %0 Conference Paper %D 2008 %T Analysis of Computer Security Incident Data Using Time Series Models %A Condon,E. %A He,A. %A Michel Cukier %K Computer networks %K computer security incident data %K NETWORK SECURITY %K resource allocation %K security of data %K telecommunication security %K time series %K time series model %X Organizations face increasing challenges in addressing and preventing computer and network security incidents. There are financial consequences from security incidents. These include lost time and resources used during recovery, possible theft of personal and/or proprietary information, and reputational damage that may negatively impact stock prices or reduce consumer confidence in a company. Being able to understand and predict trends in computer and network security incidents can aid an organization with resource allocation for prevention of such incidents, as well as evaluation of mitigation strategies. 
We look at using time series models with a large set of security incident data. We examine appropriateness of the data for modeling and consider needed transformations. Parameter search and model selection criteria are discussed. Then, forecasts from time series models are compared to forecasts from Non-Homogeneous Poisson Process (NHPP) software reliability growth (SRG) models. %P 77 - 86 %8 2008/11// %G eng %R 10.1109/ISSRE.2008.39 %0 Conference Paper %B Proceedings of the ACM SIGCOMM 2008 conference on Data communication %D 2008 %T Answering what-if deployment and configuration questions with wise %A Tariq,Mukarram %A Zeitoun,Amgad %A Valancius,Vytautas %A Feamster, Nick %A Ammar,Mostafa %K content distribution networks %K performance modeling %K what-if scenario evaluation %X Designers of content distribution networks often need to determine how changes to infrastructure deployment and configuration affect service response times when they deploy a new data center, change ISP peering, or change the mapping of clients to servers. Today, the designers use coarse, back-of-the-envelope calculations, or costly field deployments; they need better ways to evaluate the effects of such hypothetical "what-if" questions before the actual deployments. This paper presents What-If Scenario Evaluator (WISE), a tool that predicts the effects of possible configuration and deployment changes in content distribution networks. 
WISE makes three contributions: (1) an algorithm that uses traces from existing deployments to learn causality among factors that affect service response-time distributions; (2) an algorithm that uses the learned causal structure to estimate a dataset that is representative of the hypothetical scenario that a designer may wish to evaluate, and uses these datasets to predict future response-time distributions; (3) a scenario specification language that allows a network designer to easily express hypothetical deployment scenarios without being cognizant of the dependencies between variables that affect service response times. Our evaluation, both in a controlled setting and in a real-world field deployment at a large, global CDN, shows that WISE can quickly and accurately predict service response-time distributions for many practical What-If scenarios. %B Proceedings of the ACM SIGCOMM 2008 conference on Data communication %S SIGCOMM '08 %I ACM %C New York, NY, USA %P 99 - 110 %8 2008/// %@ 978-1-60558-175-0 %G eng %U http://doi.acm.org/10.1145/1402958.1402971 %R 10.1145/1402958.1402971 %0 Journal Article %J NSF Symposium on Semantic Knowledge Discovery, Organization and Use %D 2008 %T Applying automatically generated semantic knowledge: A case study in machine translation %A Madnani,N. %A Resnik, Philip %A Dorr, Bonnie J %A Schwartz,R. %X In this paper, we discuss how we apply automatically generated semantic knowledge to benefit statistical machine translation (SMT). Currently, almost all statistical machine translation systems rely heavily on memorizing translations of phrases. Some systems attempt to go further and generalize these learned phrase translations into templates using empirically derived information about word alignments and a small amount of syntactic information, if any. There are several issues in an SMT pipeline that could be addressed by the application of semantic knowledge, if such knowledge were easily available.
One such issue, an important one, is that of reference sparsity. The fundamental problem that translation systems have to face is that there is no such thing as the correct translation for any sentence. In fact, any given source sentence can often be translated into the target language in many valid ways. Since there can be many "correct answers," almost all models employed by SMT systems require, in addition to a large bitext, a held-out development set comprised of multiple high-quality, human-authored reference translations in the target language in order to tune their parameters relative to a translation quality metric. There are several reasons that this requirement is not an easy one to satisfy. First, with a few exceptions (notably NIST's annual MT evaluations), most new MT research data sets are provided with only a single reference translation. Second, obtaining multiple reference translations in rapid development, low-density source language scenarios (e.g. (Oard, 2003)) is likely to be severely limited (or made entirely impractical) by limitations of time, cost, and ready availability of qualified translators. %B NSF Symposium on Semantic Knowledge Discovery, Organization and Use %8 2008/// %G eng %0 Conference Paper %B Computer Vision and Pattern Recognition, 2008. CVPR 2008. IEEE Conference on %D 2008 %T Approximate earth mover's distance in linear time %A Shirdhonkar,S. %A Jacobs, David W. %K approximation algorithm %K computational complexity %K earth mover's distance %K histograms %K Holder continuity constraint %K image matching %K Kantorovich-Rubinstein transshipment problem %K linear time %K normal Euclidean distance %K wavelet transforms %K weighted wavelet transform %X The earth mover's distance (EMD) is an important perceptually meaningful metric for comparing histograms, but it suffers from high (O(N^3 log N)) computational complexity.
We present a novel linear time algorithm for approximating the EMD for low dimensional histograms using the sum of absolute values of the weighted wavelet coefficients of the difference histogram. EMD computation is a special case of the Kantorovich-Rubinstein transshipment problem, and we exploit the Holder continuity constraint in its dual form to convert it into a simple optimization problem with an explicit solution in the wavelet domain. We prove that the resulting wavelet EMD metric is equivalent to EMD, i.e. the ratio of the two is bounded. We also provide estimates for the bounds. The weighted wavelet transform can be computed in time linear in the number of histogram bins, while the comparison is about as fast as for normal Euclidean distance or the chi-square statistic. We experimentally show that wavelet EMD is a good approximation to EMD, has similar performance, but requires much less computation. %B Computer Vision and Pattern Recognition, 2008. CVPR 2008. IEEE Conference on %P 1 - 8 %8 2008/06// %G eng %R 10.1109/CVPR.2008.4587662 %0 Conference Paper %B IEEE INFOCOM 2008. The 27th Conference on Computer Communications %D 2008 %T Approximation Algorithms for Computing Capacity of Wireless Networks with SINR Constraints %A Chafekar,D. %A Kumar,V. S.A %A Marathe,M. V %A Parthasarathy,S.
%A Srinivasan, Aravind %K Algorithm design and analysis %K approximation algorithm %K Approximation algorithms %K approximation theory %K Computer networks %K Computer science %K graph theory %K graph-based model %K Interference constraints %K polynomial time algorithm %K Propagation losses %K Protocols %K radio networks %K radiofrequency interference %K signal to interference plus noise ratio %K Signal to noise ratio %K Throughput %K wireless interference %K wireless network %K Wireless networks %X A fundamental problem in wireless networks is to estimate its throughput capacity: given a set of wireless nodes, and a set of connections, what is the maximum rate at which data can be sent on these connections? Most of the research in this direction has focused on either random distributions of points, or has assumed simple graph-based models for wireless interference. In this paper, we study the capacity estimation problem using the more general Signal to Interference Plus Noise Ratio (SINR) model for interference, on arbitrary wireless networks. The problem becomes much harder in this setting, because of the non-locality of the SINR model. Recent work by Moscibroda et al. (2006) has shown that the throughput in this model can differ from graph based models significantly. We develop polynomial time algorithms to provably approximate the total throughput in this setting. %B IEEE INFOCOM 2008. The 27th Conference on Computer Communications %I IEEE %P 1166 - 1174 %8 2008/04/13/18 %@ 978-1-4244-2025-4 %G eng %R 10.1109/INFOCOM.2008.172 %0 Book Section %B Computational Linguistics and Intelligent Text Processing, Lecture Notes in Computer Science Volume 4919 %D 2008 %T Arabic/English Multi-document Summarization with CLASSY: The Past and the Future %A Schlesinger,Judith D. %A O'Leary, Dianne P. %A Conroy,John M.
%B Computational Linguistics and Intelligent Text Processing, Lecture Notes in Computer Science Volume 4919 %I Springer Berlin %P 568 - 581 %8 2008/// %G eng %U http://dx.doi.org/10.1007/978-3-540-78135-6_49 %0 Report %D 2008 %T Archiving Temporal Web Information: Organization of Web Contents for Fast Access and Compact Storage %A Song,Sangchul %A JaJa, Joseph F. %K Technical Report %X We address the problem of archiving dynamic web contents over significant time spans. Current schemes crawl the web contents at regular time intervals and archive the contents after each crawl regardless of whether or not the contents have changed between consecutive crawls. Our goal is to store newly crawled web contents only when they are different from the previous crawl, while ensuring accurate and quick retrieval of archived contents based on arbitrary temporal queries over the archived time period. In this paper, we develop a scheme that stores unique temporal web contents in containers following the widely used ARC/WARC format, and that provides quick access to the archived contents for arbitrary temporal queries. A novel component of our scheme is the use of a new indexing structure based on the concept of persistent or multi-version data structures. Our scheme can be shown to be asymptotically optimal both in storage utilization and insert/retrieval time. We illustrate the performance of our method on two very different data sets from the Stanford WebBase project, the first reflecting very dynamic web contents and the second relatively static web contents. The experimental results clearly illustrate the substantial storage savings achieved by eliminating duplicate contents detected between consecutive crawls, as well as the speed at which our method can find the archived contents specified through arbitrary temporal queries.
%I Institute for Advanced Computer Studies, Univ of Maryland, College Park %V UMIACS-TR-2008-08 %8 2008/04/07/ %G eng %U http://drum.lib.umd.edu/handle/1903/7569 %0 Journal Article %J Proceedings of the Eighth Conference of the Association for Machine Translation in the Americas, October %D 2008 %T Are multiple reference translations necessary? investigating the value of paraphrased reference translations in parameter optimization %A Madnani,N. %A Resnik, Philip %A Dorr, Bonnie J %A Schwartz,R. %X Most state-of-the-art statistical machine translation systems use log-linear models, which are defined in terms of hypothesis features and weights for those features. It is standard to tune the feature weights in order to maximize a translation quality metric, using held-out test sentences and their corresponding reference translations. However, obtaining reference translations is expensive. In our earlier work (Madnani et al., 2007), we introduced a new full-sentence paraphrase technique, based on English-to-English decoding with an MT system, and demonstrated that the resulting paraphrases can be used to cut the number of human reference translations needed in half. In this paper, we take the idea a step further, asking how far it is possible to get with just a single good reference translation for each item in the development set. Our analysis suggests that it is necessary to invest in four or more human translations in order to significantly improve on a single translation augmented by monolingual paraphrases. %B Proceedings of the Eighth Conference of the Association for Machine Translation in the Americas, October %8 2008/// %G eng %0 Conference Paper %B Proceedings of the 45th annual Design Automation Conference %D 2008 %T An area-efficient high-throughput hybrid interconnection network for single-chip parallel processing %A Balkan,Aydin O.
%A Gang Qu %A Vishkin, Uzi %K hybrid networks %K mesh-of-trees %K on-chip networks %X Single-chip parallel processing requires high bandwidth between processors and on-chip memory modules. A recently proposed Mesh-of-Trees (MoT) network provides high throughput and low latency at relatively high area cost. In this paper, we introduce a hybrid MoT-BF network that combines the MoT network with the area-efficient butterfly network. We prove that the hybrid network reduces the MoT network's area cost. Cycle-accurate simulation and post-layout results all show that significant area reduction can be achieved with negligible performance degradation, when operating at the same clock rate. %B Proceedings of the 45th annual Design Automation Conference %S DAC '08 %I ACM %C New York, NY, USA %P 435 - 440 %8 2008/// %@ 978-1-60558-115-6 %G eng %U http://doi.acm.org/10.1145/1391469.1391583 %R 10.1145/1391469.1391583 %0 Journal Article %J Computer %D 2008 %T The ASC-Alliance Projects: A Case Study of Large-Scale Parallel Scientific Code Development %A Hochstein, L. %A Basili, Victor R. %K ASC-Alliance projects %K computational science %K computational scientists %K large-scale code development %K large-scale parallel machines %K parallel systems %K scientific software development %K software engineering %X Computational scientists face many challenges when developing software that runs on large-scale parallel machines. However, software-engineering researchers haven't studied their software development processes in much detail. To better understand the nature of software development in this context, the authors examined five large-scale computational science software projects operated at the five ASC-Alliance centers. %B Computer %V 41 %P 50 - 58 %8 2008/03// %@ 0018-9162 %G eng %N 3 %R 10.1109/MC.2008.101 %0 Journal Article %J Journal of Systems and Software %D 2008 %T An assessment of systems and software engineering scholars and institutions (2001–2005) %A Wong,W.
Eric %A Tse,T.H. %A Glass,Robert L. %A Basili, Victor R. %A Chen,T.Y. %K Research publications %K Systems and software engineering %K Top institutions %K Top scholars %X This paper presents the findings of a five-year study of the top scholars and institutions in the systems and software engineering field, as measured by the quantity of papers published in the journals of the field in 2001–2005. The top scholar is Magne Jørgensen of Simula Research Laboratory, Norway, and the top institution is Korea Advanced Institute of Science and Technology, Korea. This paper is part of an ongoing study, conducted annually, that identifies the top 15 scholars and institutions in the most recent five-year period. %B Journal of Systems and Software %V 81 %P 1059 - 1062 %8 2008/06// %@ 0164-1212 %G eng %U http://www.sciencedirect.com/science/article/pii/S0164121207002300 %N 6 %R 10.1016/j.jss.2007.09.018 %0 Journal Article %J Lecture Notes in Computer Science %D 2008 %T Athos: Efficient Authentication of Outsourced File Systems: Information Security %A Triandopoulos, Nikolaos %A Goodrich, Michael T. %A Charalampos Papamanthou %A Tamassia, Roberto %B Lecture Notes in Computer Science %P 80 - 96 %8 2008/// %@ 0302-9743 %G eng %0 Patent %D 2008 %T Audio Camera Using Microphone Arrays for Real Time Capture of Audio Images ... %A Duraiswami, Ramani %A O'Donovan,Adam %A Neumann, Jan %A Gumerov, Nail A. %X Spherical microphone arrays provide an ability to compute the acoustical intensity corresponding to different spatial directions in a given frame of audio data. These intensities may be exhibited as an image and these images are generated at a high frame rate to achieve a video image if the data capture and intensity computations can be performed sufficiently quickly, thereby creating a frame-rate audio camera. A description is provided herein regarding how such a camera is built and the processing done sufficiently quickly using graphics processors.
The joint processing of captured frame-rate audio and video images enables applications such as visual identification of noise sources, and beamforming and noise suppression in video conferencing, by accounting for the spatial differences in the location of the audio and the video cameras. Based on the recognition that the spherical array can be viewed as a central projection camera, such joint analysis can be performed. %V 12/127,451 %8 2008/05// %G eng %U http://www.google.com/patents?id=gWKzAAAAEBAJ %0 Conference Paper %B Data Engineering Workshop, 2008. ICDEW 2008. IEEE 24th International Conference on %D 2008 %T Augmenting spatio-textual search with an infectious disease ontology %A Lieberman,M.D. %A Sankaranarayanan,J. %A Samet, Hanan %A Sperling,J. %K classification %K diseases %K epidemics %K indexing %K infectious disease %K information systems %K map interface %K medical ontology %K newspaper article %K ontologies (artificial intelligence) %K search engines %K spatio-textual search %K STEWARD system %X A system is described that automatically categorizes and classifies infectious disease incidence reports by type and geographic location, to aid analysis by domain experts. It identifies references to infectious diseases by using a disease ontology. The system leverages the textual and spatial search capabilities of the STEWARD system to enable queries such as reports on "influenza" near "Hong Kong", possibly within a particular time period. Documents from the U.S. National Library of Medicine (http://www.pubmed.gov) and the World Health Organization (http://www.who.int) are tagged so that spatial relationships to specific disease occurrences can be presented graphically via a map interface. In addition, newspaper articles can be tagged and indexed to bolster the surveillance of ongoing epidemics. Examining past epidemics using this system may lead to improved understanding of the cause and spread of infectious diseases.
%B Data Engineering Workshop, 2008. ICDEW 2008. IEEE 24th International Conference on %P 266 - 269 %8 2008/04// %G eng %R 10.1109/ICDEW.2008.4498330 %0 Conference Paper %B CCS '08 Proceedings of the 15th ACM Conference on Computer and Communications Security %D 2008 %T Authenticated Hash Tables %A Charalampos Papamanthou %A Tamassia, Roberto %A Triandopoulos, Nikos %K Authentication %K hash tables %K RSA accumulator %K verification %X Hash tables are fundamental data structures that optimally answer membership queries. Suppose a client stores n elements in a hash table that is outsourced at a remote server so that the client can save space or achieve load balancing. Authenticating the hash table functionality, i.e., verifying the correctness of queries answered by the server and ensuring the integrity of the stored data, is crucial because the server, lying outside the administrative control of the client, can be malicious. We design efficient and secure protocols for optimally authenticating membership queries on hash tables: for any fixed constants 0 < ε < 1 and κ > 1/ε, the server can provide a proof of integrity of the answer to a (non-)membership query in constant time, requiring O(n^ε / log^(κε-1) n) time to treat updates, yet keeping the communication and verification costs constant. This is the first construction for authenticating a hash table with constant query cost and sublinear update cost. Our solution employs the RSA accumulator in a nested way over the stored data, strictly improving upon previous accumulator-based solutions. Our construction applies to two concrete data authentication models and lends itself to a scheme that achieves different trade-offs, namely constant update time and O(n^ε / log^(κε) n) query time for fixed ε > 0 and κ > 0. An experimental evaluation of our solution shows very good scalability.
%B CCS '08 Proceedings of the 15th ACM Conference on Computer and Communications Security %S CCS '08 %I ACM %P 437 - 448 %8 2008/// %@ 978-1-59593-810-7 %G eng %U http://doi.acm.org/10.1145/1455770.1455826 %0 Conference Paper %B Proceedings of the first workshop on Online social networks %D 2008 %T Authenticated out-of-band communication over social links %A Ramachandran,Anirudh V. %A Feamster, Nick %K Authentication %K Security %K social networks %X Many existing host-based applications rely on their own authentication mechanisms and peer discovery services. Although social networking sites already provide mechanisms for users both to discover other users (e.g., by logging on to the social network Web site) and to communicate securely with each other (e.g., using instant messages within the social networking site), today's applications have no way to exploit the relationships and trust that are inherent in these networks. This paper proposes Authenticatr, a framework that allows applications to use the authentication and peer discovery mechanisms inherent in social networking sites to bootstrap their own authenticated communication channels. We describe motivating applications, detail the interface that Authenticatr exposes to applications, and discuss practical considerations and security threats. %B Proceedings of the first workshop on Online social networks %S WOSN '08 %I ACM %C New York, NY, USA %P 61 - 66 %8 2008/// %@ 978-1-60558-182-8 %G eng %U http://doi.acm.org/10.1145/1397735.1397749 %R 10.1145/1397735.1397749 %0 Journal Article %J Robotics and Autonomous Systems %D 2008 %T Automated design of distributed control rules for the self-assembly of prespecified artificial structures %A Grushin,Alexander %A Reggia, James A. 
%K Collective problem solving %K Coordination %K Self-assembly %K Stigmergy %K Swarm intelligence %X The self-assembly problem involves the design of agent-level control rules that will cause the agents to form some desired, target structure, subject to environmental constraints. This paper describes a fully automated rule generation procedure that allows structures to successfully self-assemble in a simulated environment with constrained, continuous motion. This environment implicitly imposes ordering constraints on the self-assembly process, where certain parts of the target structure must be assembled before others, and where it may be necessary to assemble (and subsequently disassemble) temporary structures such as staircases. A provably correct methodology is presented for computing a partial order on the self-assembly process, and for generating rules that enforce this order at runtime. The assembly and disassembly of structures is achieved by generating another set of rules, which are inspired by construction behavior among certain species of social insects. Computational experiments verify the effectiveness of the approach on a diverse set of target structures. %B Robotics and Autonomous Systems %V 56 %P 334 - 359 %8 2008/04/30/ %@ 0921-8890 %G eng %U http://www.sciencedirect.com/science/article/pii/S092188900700111X %N 4 %R 10.1016/j.robot.2007.08.006 %0 Journal Article %J Advances in Neural Information Processing Systems %D 2008 %T Automatic online tuning for fast Gaussian summation %A Morariu,V. %A Srinivasan,B.V. %A Raykar,V.C. %A Duraiswami, Ramani %A Davis, Larry S. %X Many machine learning algorithms require the summation of Gaussian kernel functions, an expensive operation if implemented straightforwardly. Several methods have been proposed to reduce the computational complexity of evaluating such sums, including tree and analysis based methods.
These achieve varying speedups depending on the bandwidth, dimension, and prescribed error, making the choice between methods difficult for machine learning tasks. We provide an algorithm that combines tree methods with the Improved Fast Gauss Transform (IFGT). As originally proposed the IFGT suffers from two problems: (1) the Taylor series expansion does not perform well for very low bandwidths, and (2) parameter selection is not trivial and can drastically affect performance and ease of use. We address the first problem by employing a tree data structure, resulting in four evaluation methods whose performance varies based on the distribution of sources and targets and input parameters such as desired accuracy and bandwidth. To solve the second problem, we present an online tuning approach that results in a black box method that automatically chooses the evaluation method and its parameters to yield the best performance for the input data, desired accuracy, and bandwidth. In addition, the new IFGT parameter selection approach allows for tighter error bounds. Our approach chooses the fastest method at negligible additional cost, and has superior performance in comparisons with previous approaches. %B Advances in Neural Information Processing Systems %8 2008/// %G eng %0 Journal Article %J ACM Transactions on Software Engineering and Methodology (TOSEM) %D 2008 %T Automatically repairing event sequence-based GUI test suites for regression testing %A Memon, Atif M. %K Graphical user interfaces %K regression testing %K repairing test cases %K test case management %K test maintenance %X Although graphical user interfaces (GUIs) constitute a large part of the software being developed today and are typically created using rapid prototyping, there are no effective regression testing techniques for GUIs. The needs of GUI regression testing differ from those of traditional software.
When the structure of a GUI is modified, test cases from the original GUI's suite are either reusable or unusable on the modified GUI. Because GUI test case generation is expensive, our goal is to make the unusable test cases usable, thereby helping to retain the suite's event coverage. The idea of reusing these unusable (obsolete) test cases has not been explored before. This article shows that a large number of test cases become unusable for GUIs. It presents a new GUI regression testing technique that first automatically determines the usable and unusable test cases from a test suite after a GUI modification, then determines the unusable test cases that can be repaired so that they can execute on the modified GUI, and finally uses repairing transformations to repair the test cases. This regression testing technique along with four repairing transformations has been implemented. An empirical study for four open-source applications demonstrates that (1) this approach is effective in that many of the test cases can be repaired, and is practical in terms of its time performance, (2) certain types of test cases are more prone to becoming unusable, and (3) certain types of “dominator” events, when modified, make a large number of test cases unusable. %B ACM Transactions on Software Engineering and Methodology (TOSEM) %V 18 %P 4:1–4:36 %8 2008/11// %@ 1049-331X %G eng %U http://doi.acm.org/10.1145/1416563.1416564 %N 2 %R 10.1145/1416563.1416564 %0 Journal Article %J Intelligent Systems, IEEE %D 2008 %T AVA: Adjective-Verb-Adverb Combinations for Sentiment Analysis %A V.S. Subrahmanian %A Reforgiato,Diego %X Most research on determining the strength of subjective expressions in a sentence or document uses single, specific parts of speech such as adjectives, adverbs, or verbs. To date, almost no research covers the development of a single comprehensive framework in which we can analyze sentiment that takes all three into account.
The authors propose the AVA (adjective verb adverb) framework for identifying opinions on any given topic. In AVA, a user can select any topic t of interest and any document d. AVA will return a score indicating the extent to which d expresses topic t. The score is expressed on a −1 (maximally negative) to +1 (maximally positive) scale. %B Intelligent Systems, IEEE %V 23 %P 43 - 50 %8 2008/08// %@ 1541-1672 %G eng %N 4 %R 10.1109/MIS.2008.57 %0 Conference Paper %B Proceedings of the 3rd USENIX workshop on Steps to reducing unwanted traffic on the internet %D 2007 %T Accountability as a service %A Bender,Adam %A Spring, Neil %A Levin,Dave %A Bhattacharjee, Bobby %X We propose that accountability be a first-class network service, independent of addressing and routing. We design a scheme for allowing accountability services, rather than connectivity-providing ISPs, to vouch for traffic, allowing victims to report abuse, filter abusive traffic, and isolate malicious senders. We discuss how accountability services may evolve, how they may facilitate new applications, and the implications of shifting the burden of network policing to a dedicated service. %B Proceedings of the 3rd USENIX workshop on Steps to reducing unwanted traffic on the internet %S SRUTI'07 %I USENIX Association %C Berkeley, CA, USA %P 5:1–5:6 %8 2007/// %G eng %U http://dl.acm.org/citation.cfm?id=1361436.1361441 %0 Report %D 2007 %T ACE: A Novel Software Platform to Ensure the Integrity of Long Term Archives %A Song,Sangchul %A JaJa, Joseph F. %K Technical Report %X We develop a new methodology to address the integrity of long term archives using rigorous cryptographic techniques. A prototype system called ACE (Auditing Control Environment) was designed and developed based on this methodology. ACE creates a small-size integrity token for each digital object and some cryptographic summary information based on all the objects handled within a dynamic time period.
ACE continuously audits the contents of the various objects according to the policy set by the archive, and provides mechanisms for an independent third-party auditor to certify the integrity of any object. In fact, our approach will allow an independent auditor to verify the integrity of every version of an archived digital object as well as link the current version to the original form of the object when it was ingested into the archive. We show that ACE is very cost effective and scalable while making no assumptions about the archive architecture. We include in this paper some preliminary results on the validation and performance of ACE on a large image collection. %I Institute for Advanced Computer Studies, Univ of Maryland, College Park %V UMIACS-TR-2007-07 %8 2007/01/31/ %G eng %U http://drum.lib.umd.edu/handle/1903/4047 %0 Conference Paper %B Proceedings of the thirty-ninth annual ACM symposium on Theory of computing %D 2007 %T On achieving the "best of both worlds" in secure multiparty computation %A Katz, Jonathan %K COMPUTATION %K secure %X Two settings are typically considered for secure multiparty computation, depending on whether or not a majority of the parties are assumed to be honest. Protocols designed under this assumption provide "full security" (and, in particular, guarantee output delivery and fairness) when this assumption is correct; however, if half or more of the parties are dishonest then security is completely compromised. On the other hand, protocols tolerating arbitrarily many faults do not provide fairness or guaranteed output delivery even if only a single party is dishonest. It is natural to wonder whether it is possible to achieve the "best of both worlds": namely, a single protocol that simultaneously achieves the best possible security in both the above settings. Ishai et al. (Crypto 2006) recently addressed this question, and ruled out constant-round protocols of this type.
As our main result, we completely settle the question by ruling out protocols using any (expected) polynomial number of rounds. Given this stark negative result, we then ask what can be achieved if we are willing to assume simultaneous message transmission (or, equivalently, a non-rushing adversary). In this setting, we show that impossibility still holds for logarithmic-round protocols. We also show, for any polynomial p, a protocol (whose round complexity depends on p) that can be simulated to within closeness O(1/p). %B Proceedings of the thirty-ninth annual ACM symposium on Theory of computing %S STOC '07 %I ACM %C New York, NY, USA %P 11 - 20 %8 2007/// %@ 978-1-59593-631-8 %G eng %U http://doi.acm.org/10.1145/1250790.1250793 %R 10.1145/1250790.1250793 %0 Journal Article %J Parallel Computing %D 2007 %T Active semantic caching to optimize multidimensional data analysis in parallel and distributed environments %A Andrade,Henrique %A Kurc,Tahsin %A Sussman, Alan %A Saltz,Joel %K Active semantic caching %K Parallel databases %K Query optimization %K Scientific data analysis %X In this paper, we present a multi-query optimization framework based on the concept of active semantic caching. The framework permits the identification and transparent reuse of data and computation in the presence of multiple queries (or query batches) that specify user-defined operators and aggregations originating from scientific data-analysis applications. We show how query scheduling techniques, coupled with intelligent cache replacement policies, can further improve the performance of query processing by leveraging the active semantic caching operators. We also propose a methodology for functionally decomposing complex queries in terms of primitives so that multiple reuse sites are exposed to the query optimizer, to increase the amount of reuse.
The optimization framework and the database system implemented with it are designed to be efficient irrespective of the underlying parallel and/or distributed machine configuration. We present experimental results highlighting the performance improvements obtained by our methods using real scientific data-analysis applications on multiple parallel and distributed processing configurations (e.g., single symmetric multiprocessor (SMP) machine, cluster of SMP nodes, and a Grid computing configuration). %B Parallel Computing %V 33 %P 497 - 520 %8 2007/08// %@ 0167-8191 %G eng %U http://www.sciencedirect.com/science/article/pii/S0167819107000506 %N 7-8 %R 10.1016/j.parco.2007.03.001 %0 Journal Article %J Signal Processing Letters, IEEE %D 2007 %T Adaptive Detection for Group-Based Multimedia Fingerprinting %A He,Shan %A M. Wu %K adaptive detection %K collusion pattern %K copy protection %K detection statistics %K group-based fingerprinting %K grouping strategy %K multimedia fingerprinting %K nongrouped orthogonal fingerprinting %K security of data %X Grouping strategy has been proposed recently to leverage prior knowledge of the collusion pattern to improve collusion resistance in multimedia fingerprinting. However, the improvement is not consistent: existing group-based fingerprinting schemes perform worse than non-grouped ones when the grouping does not match the true collusion pattern well. In this letter, we propose a new adaptive detection method, where the threshold for the group detection can be adjusted automatically according to the detection statistics that reflect the underlying collusion pattern. Experimental results show that the proposed adaptive detection outperforms nonadaptive detection and provides consistent performance improvement over non-grouped orthogonal fingerprinting schemes under various collusion scenarios.
%B Signal Processing Letters, IEEE %V 14 %P 964 - 967 %8 2007/12// %@ 1070-9908 %G eng %N 12 %R 10.1109/LSP.2007.907992 %0 Conference Paper %B Wavelet Analysis and Pattern Recognition, 2007. ICWAPR '07. International Conference on %D 2007 %T An adaptive mean shift tracking method using multiscale images %A Zhuolin Jiang %A Li,Shao-Fa %A Gao,Dong-Fa %K adaptive mean shift tracking method %K bandwidth matrix %K Gaussian kernel %K Gaussian processes %K Image sequences %K log-likelihood function %K matrix algebra %K maximum likelihood estimation %K multiscale image %K Object detection %K object tracking %X An adaptive mean shift tracking method for object tracking using multiscale images is presented in this paper. A bandwidth matrix and a Gaussian kernel are used to extend the definition of the target model. The method can exactly estimate the position of the tracked object using multiscale images from a Gaussian pyramid. The tracking method determines the parameters of kernel bandwidth by maximizing the lower bound of a log-likelihood function, which is derived from a kernel density estimate with the bandwidth matrix and the modified weight function. The experimental results show that it converges in an average of 2.55 iterations per frame. %B Wavelet Analysis and Pattern Recognition, 2007. ICWAPR '07. International Conference on %V 3 %P 1060 - 1066 %8 2007/11// %G eng %R 10.1109/ICWAPR.2007.4421589 %0 Journal Article %J Foundations and Trends in Databases %D 2007 %T Adaptive query processing %A Deshpande, Amol %A Ives,Zachary %A Vijayshankar Raman %X As the data management field has diversified to consider settings in which queries are increasingly complex, statistics are less available, or data is stored remotely, there has been an acknowledgment that the traditional optimize-then-execute paradigm is insufficient.
This has led to a plethora of new techniques, generally placed under the common banner of adaptive query processing, that focus on using runtime feedback to modify query processing in a way that provides better response time or more efficient CPU utilization. In this survey paper, we identify many of the common issues, themes, and approaches that pervade this work, and the settings in which each piece of work is most appropriate. Our goal with this paper is to be a "value-add" over the existing papers on the material, providing not only a brief overview of each technique, but also a basic framework for understanding the field of adaptive query processing in general. We focus primarily on intra-query adaptivity of long-running, but not full-fledged streaming, queries. We conclude with a discussion of open research problems that are of high importance. %B Foundations and Trends in Databases %V 1 %P 1 - 140 %8 2007/01// %@ 1931-7883 %G eng %U http://dx.doi.org/10.1561/1900000001 %N 1 %R 10.1561/1900000001 %0 Conference Paper %B Proceedings of the 33rd international conference on Very large data bases %D 2007 %T Adaptive query processing: why, how, when, what next? %A Deshpande, Amol %A Ives,Zachary %A Vijayshankar Raman %X Adaptive query processing has been the subject of a great deal of recent work, particularly in emerging data management environments such as data integration and data streams. We provide an overview of the work in this area, identifying its common themes, laying out the space of query plans, and discussing open research problems. We discuss why adaptive query processing is needed, how it is being implemented, where it is most appropriately used, and finally, what next, i.e., open research problems.
%B Proceedings of the 33rd international conference on Very large data bases %S VLDB '07 %I VLDB Endowment %P 1426 - 1427 %8 2007/// %@ 978-1-59593-649-3 %G eng %U http://dl.acm.org/citation.cfm?id=1325851.1326033 %0 Conference Paper %B Computer Vision, 2007. ICCV 2007. IEEE 11th International Conference on %D 2007 %T Adaptive Vocabulary Forests for Dynamic Indexing and Category Learning %A Tom Yeh %A Lee,John %A Darrell,Trevor %X Histogram pyramid representations computed from a vocabulary tree of visual words have proven valuable for a range of image indexing and recognition tasks; however, they have only used a single, fixed partition of feature space. We present a new efficient algorithm to incrementally compute set-of-trees (forest) vocabulary representations, and show that they improve recognition and indexing performance in methods which use histogram pyramids. Our algorithm incrementally adapts a vocabulary forest with an inverted file system at the leaf nodes and automatically keeps existing histogram pyramid database entries up-to-date in a forward file system. It is possible not only to apply vocabulary tree indexing algorithms directly, but also to compute pyramid match kernel values efficiently. On dynamic recognition tasks where categories or objects under consideration may change over time, we show that adaptive vocabularies offer significant performance advantages in comparison to a single, fixed vocabulary. %B Computer Vision, 2007. ICCV 2007. IEEE 11th International Conference on %I IEEE %P 1 - 8 %8 2007/10/14/21 %@ 978-1-4244-1630-1 %G eng %U http://ieeexplore.ieee.org/xpls/abs_all.jsp?arnumber=4409053 %R 10.1109/ICCV.2007.4409053 %0 Book Section %B Scalable Uncertainty Management %D 2007 %T Aggregates in Generalized Temporally Indeterminate Databases %A Udrea,Octavian %A Majkić,Zoran %A Subrahmanian,V. %E Prade,Henri %E Subrahmanian,V. %K Computer Science %X Dyreson and Snodgrass as well as Dekhtyar et al.
have provided a probabilistic model (as well as compelling example applications) for why there may be temporal indeterminacy in databases. In this paper, we first propose a formal model for aggregate computation in such databases when there is uncertainty not just in the temporal attribute, but also in the ordinary (non-temporal) attributes. We identify two types of aggregates: event-correlated aggregates and non-event-correlated aggregates, and provide efficient algorithms for both of them. We prove that our algorithms are correct, and we present experimental results showing that the algorithms work well in practice. %B Scalable Uncertainty Management %S Lecture Notes in Computer Science %I Springer Berlin / Heidelberg %V 4772 %P 171 - 186 %8 2007/// %@ 978-3-540-75407-7 %G eng %U http://dx.doi.org/10.1007/978-3-540-75410-7_13 %0 Book Section %B Agile software development quality assurance %D 2007 %T Agile Quality Assurance Techniques for GUI-Based Applications %A Memon, Atif M. %A Xie,Q. %B Agile software development quality assurance %I Idea Group Inc (IGI) %P 114 %8 2007/// %@ 1599042169, 9781599042169 %G eng %0 Journal Article %J Journal of Computing and Information Science in Engineering %D 2007 %T Algorithms for on-line monitoring of micro spheres in an optical tweezers-based assembly cell %A Peng,T. %A Balijepalli,A. %A Gupta,S.K. %A LeBrun,T. %X Optical tweezers have emerged as a powerful tool for micro- and nanomanipulation. Using optical tweezers to perform automated assembly requires on-line monitoring of components in the assembly workspace. This paper presents algorithms for estimating positions and orientations of microscale and nanoscale components in the 3-Dimensional assembly workspace. Algorithms presented in this paper use images obtained by optical section microscopy.
The images are first segmented to locate areas of interest and then image gradient information from the areas of interest is used to generate probable locations and orientations of components in the XY-plane. Finally, signature curves are computed and utilized to obtain component locations and orientations in 3-D space. We have tested these algorithms with silica micro-spheres as well as metallic nanowires. We believe that the algorithms described in this paper will provide the foundation for realizing automated assembly operations in optical tweezers-based assembly cells. %B Journal of Computing and Information Science in Engineering %V 7 %P 330 %8 2007/// %G eng %0 Journal Article %J Protein Engineering, Design and Selection %D 2007 %T Amino acid quantitative structure property relationship database: a web-based platform for quantitative investigations of amino acids %A Lu,Yi %A Bulka,Blazej %A desJardins, Marie %A Freeland,Stephen J %K Amino acids %K database %K QSPR %K XML %X Here, we present the AA-QSPR Db (Amino Acid Quantitative Structure Property Relationship Database): a novel, freely available web-resource of data pertaining to amino acids, both engineered and naturally occurring. In addition to presenting fundamental molecular descriptors of size, charge and hydrophobicity, it also includes online visualization tools for users to perform instant, interactive analyses of amino acid sub-sets in which they are interested. The database has been designed with extensible markup language technology to provide a flexible structure, suitable for future development. In addition to providing easy access for queries by external computers, it also offers a user-friendly web-based interface that facilitates human interactions (submission, storage and retrieval of amino acid data) and an associated e-forum that encourages users to question and discuss current and future database contents.
%B Protein Engineering, Design and Selection %V 20 %P 347 - 351 %8 2007/07/01/ %@ 1741-0126, 1741-0134 %G eng %U http://peds.oxfordjournals.org/content/20/7/347 %N 7 %R 10.1093/protein/gzm027 %0 Conference Paper %B Image Processing, 2007. ICIP 2007. IEEE International Conference on %D 2007 %T Analysis of Nonlinear Collusion Attacks on Fingerprinting Systems for Compressed Multimedia %A Varna,A.L. %A He,Shan %A Swaminathan,A. %A M. Wu %K collusion attacks %K data compression %K fingerprinting systems %K image coding %K multimedia coding %K security of data %K watermarking %X In this paper, we analyze the effect of various collusion attacks on fingerprinting systems for compressed multimedia. We evaluate the effectiveness of the collusion attacks in terms of the probability of detection and accuracy in estimating the host signal. Our analysis shows that applying averaging collusion on copies of moderately compressed content gives a highly accurate estimation of the host, and can effectively remove the embedded fingerprints. Averaging is thus the best choice for an attacker as the probability of detection and the distortion introduced are the lowest. %B Image Processing, 2007. ICIP 2007. IEEE International Conference on %V 2 %P II-133 - II-136 %8 2007/10/16/19 %G eng %R 10.1109/ICIP.2007.4379110 %0 Report %D 2007 %T Analysis of the Residual Arnoldi Method %A Lee,Che-Rung %A Stewart, G.W. %K Technical Report %X The Arnoldi method generates a nested sequence of orthonormal bases $U_{1}, U_{2}, \ldots$ by orthonormalizing $Au_{k}$ against $U_{k}$. Frequently these bases contain increasingly accurate approximations of eigenpairs from the periphery of the spectrum of $A$. However, the convergence of these approximations stagnates if $u_{k}$ is contaminated by error.
It has been observed that if one chooses a Rayleigh--Ritz approximation $(\mu_{k}, z_{k})$ to a chosen target eigenpair $(\lambda, x)$ and orthonormalizes the residual $Az_{k} - \mu_{k} z_{k}$, the approximations to $x$ (but not the other eigenvectors) continue to converge, even when the residual is contaminated by error. The same is true of the shift-invert variant of Arnoldi's method. In this paper we give a mathematical analysis of these new methods. %I Institute for Advanced Computer Studies, Univ of Maryland, College Park %V UMIACS-TR-2007-45 %8 2007/10/15/ %G eng %U http://drum.lib.umd.edu/handle/1903/7428 %0 Journal Article %J Conference on Programming Language Design and Implementation: Proceedings of the 2007 workshop on Programming languages and analysis for security %D 2007 %T Analyzing information flow %A Hicks, Michael W. %B Conference on Programming Language Design and Implementation: Proceedings of the 2007 workshop on Programming languages and analysis for security %8 2007/// %G eng %0 Journal Article %J Computational Linguistics %D 2007 %T Answering Clinical Questions with Knowledge-Based and Statistical Techniques %A Demner-Fushman,Dina %A Jimmy Lin %X The combination of recent developments in question-answering research and the availability of unparalleled resources developed specifically for automatic semantic processing of text in the medical domain provides a unique opportunity to explore complex question answering in the domain of clinical medicine. This article presents a system designed to satisfy the information needs of physicians practicing evidence-based medicine. We have developed a series of knowledge extractors, which employ a combination of knowledge-based and statistical techniques, for automatically identifying clinically relevant aspects of MEDLINE abstracts.
These extracted elements serve as the input to an algorithm that scores the relevance of citations with respect to structured representations of information needs, in accordance with the principles of evidence-based medicine. Starting with an initial list of citations retrieved by PubMed, our system can bring relevant abstracts into higher ranking positions, and from these abstracts generate responses that directly answer physicians' questions. We describe three separate evaluations: one focused on the accuracy of the knowledge extractors, one conceptualized as a document reranking task, and finally, an evaluation of answers by two physicians. Experiments on a collection of real-world clinical questions show that our approach significantly outperforms the already competitive PubMed baseline. %B Computational Linguistics %V 33 %P 63 - 103 %8 2007/// %@ 0891-2017 %G eng %U http://dx.doi.org/10.1162/coli.2007.33.1.63 %N 1 %R 10.1162/coli.2007.33.1.63 %0 Journal Article %J Pattern Analysis and Machine Intelligence, IEEE Transactions on %D 2007 %T Appearance Characterization of Linear Lambertian Objects, Generalized Photometric Stereo, and Illumination-Invariant Face Recognition %A Zhou,S. K %A Aggarwal,G. %A Chellapa, Rama %A Jacobs, David W. %K albedo field %K appearance characterization %K generalized photometric stereo algorithms %K illumination-invariant face recognition %K linear Lambertian objects %K observation matrix factorization %K face recognition %K matrix decomposition %K Algorithms %K Artificial Intelligence %K Pattern Recognition, Automated %K Photogrammetry %K Reproducibility of Results %K Sensitivity and Specificity %K Information Storage and Retrieval %K Lighting %K Linear Models %X Traditional photometric stereo algorithms employ a Lambertian reflectance model with a varying albedo field and involve the appearance of only one object.
In this paper, we generalize photometric stereo algorithms to handle all appearances of all objects in a class, in particular the human face class, by making use of the linear Lambertian property. A linear Lambertian object is one which is linearly spanned by a set of basis objects and has a Lambertian surface. The linear property leads to a rank constraint and, consequently, a factorization of an observation matrix that consists of exemplar images of different objects (e.g., faces of different subjects) under different, unknown illuminations. Integrability and symmetry constraints are used to fully recover the subspace bases using a novel linearized algorithm that takes the varying albedo field into account. The effectiveness of the linear Lambertian property is further investigated by using it for the problem of illumination-invariant face recognition using just one image. Attached shadows are incorporated in the model by a careful treatment of the inherent nonlinearity in Lambert's law. This enables us to extend our algorithm to perform face recognition in the presence of multiple illumination sources. Experimental results using standard data sets are presented. %B Pattern Analysis and Machine Intelligence, IEEE Transactions on %V 29 %P 230 - 245 %8 2007/02// %@ 0162-8828 %G eng %N 2 %R 10.1109/TPAMI.2007.25 %0 Journal Article %J Technical Reports of the Computer Science Department %D 2007 %T Appendix to CMod: Modular Information Hiding and Type-Safe Linking for C %A Srivastava,S. %A Hicks, Michael W. %A Foster, Jeffrey S. %X This brief note is an appendix to the paper "CMod: Modular Information Hiding and Type-Safe Linking for C." It consists of the proof of soundness for the formal language presented in that paper. %B Technical Reports of the Computer Science Department %8 2007/06/30/ %G eng %0 Conference Paper %B Third Language and Technology Conference %D 2007 %T Application of MCL in a dialog agent %A Josyula,D. P %A Fults,S. %A Anderson,M. L.
%A Wilson,S. %A Perlis, Don %B Third Language and Technology Conference %8 2007/// %G eng %0 Book %D 2007 %T Applied cryptography and network security: 5th international conference, ACNS 2007, Zhuhai, China, June 5-8, 2007: proceedings %A Katz, Jonathan %A Yung,M. %I Springer-Verlag New York Inc %V 4521 %8 2007/// %G eng %0 Conference Paper %D 2007 %T Applying Software Reliability Models on Security Incidents %A Condon,E. %A Michel Cukier %A He,Tao %K computer security incidents %K consumer confidence %K data theft %K network security incidents %K nonhomogeneous Poisson process %K reliability growth process %K reputational damage %K security of data %K software reliability %K stock prices %X Computer and network security incidents have increasing financial consequences as demand for network accessibility and connectivity to resources continues to rise. These security incidents can lead to direct financial losses through data theft of personal and/or proprietary information as well as reputational damage which may negatively impact stock prices or consumer confidence in a company. This paper examines a large set of security incident data using tools from the software reliability community. We look at applying Non-Homogeneous Poisson Process (NHPP) models as a method for describing the reliability growth process. We examine the full set of incidents as well as subsets of the data based on incident types. We look at using the Laplace test to guide selection of the appropriate models. Then, based on the trend results, we apply various NHPP models (i.e., Goel-Okumoto, S-Shaped, Duane, and K-Stage Curve) to illustrate the relevance of using these models to fit the incident data and to predict future incidents.
%P 159 - 168 %8 2007/11// %G eng %R 10.1109/ISSRE.2007.29 %0 Conference Paper %B Proceedings of the eighteenth annual ACM-SIAM symposium on Discrete algorithms %D 2007 %T Approximation algorithms for stochastic and risk-averse optimization %A Srinivasan, Aravind %X We present improved approximation algorithms in stochastic optimization. We prove that the multi-stage stochastic versions of covering integer programs (such as set cover and vertex cover) admit essentially the same approximation algorithms as their standard (non-stochastic) counterparts; this improves upon work of Swamy & Shmoys that shows an approximability which depends multiplicatively on the number of stages. We also present approximation algorithms for facility location and some of its variants in the 2-stage recourse model, improving on previous approximation guarantees. %B Proceedings of the eighteenth annual ACM-SIAM symposium on Discrete algorithms %S SODA '07 %I Society for Industrial and Applied Mathematics %C Philadelphia, PA, USA %P 1305 - 1313 %8 2007/// %@ 978-0-898716-24-5 %G eng %U http://dl.acm.org/citation.cfm?id=1283383.1283523 %0 Journal Article %J Journal of Systems and Software %D 2007 %T Archetypal behavior in computer security %A Rosenfeld,Shalom N. %A Rus,Ioana %A Michel Cukier %K Attacks and countermeasures %K Escalation %K Limits to Growth %K Organization computer security %K System archetypes %K System dynamics %K System dynamics modeling and simulation %X The purpose of this study is to understand observed behavior and to diagnose and find solutions to issues encountered in organizational computer security using a systemic approach, namely system archetypes. In this paper we show the feasibility of archetypes application and the benefits of simulation. We developed a model and simulation of some aspects of security based on system dynamics principles. 
The system dynamics simulation model can be used in support of decision-making, training, and teaching regarding the mitigation of computer security risks. In this paper, we combine two archetypes and show the computer security relevance of such combinations. Presented are instances of the archetypes “Escalation”, in which an organization must continuously increase its efforts to counter additional attacker effort; and “Limits to Growth”, in which the gains from an organization’s security efforts plateau or decline due to its limited capacity for security-related tasks. We describe a scenario where these archetypes (individually and combined) can help in diagnosis and understanding, and present simulation of “what-if” scenarios suggesting how an organization might remedy these problems and maximize its gains from security efforts. %B Journal of Systems and Software %V 80 %P 1594 - 1606 %8 2007/10// %@ 0164-1212 %G eng %U http://www.sciencedirect.com/science/article/pii/S0164121207000155 %N 10 %R 10.1016/j.jss.2007.01.046 %0 Journal Article %J AIAA Paper 2007 %D 2007 %T Assessment of WENO methods with shock-confining filtering for LES of compressible turbulence %A Grube,N. E %A Taylor,E. M %A Martin, M.P %B AIAA Paper 2007 %V 4198 %8 2007/// %G eng %0 Journal Article %J Applied and Environmental Microbiology %D 2007 %T Association of Vibrio Cholerae O1 El Tor and O139 Bengal with the Copepods Acartia Tonsa and Eurytemora Affinis %A Rawlings,Tonya K. %A Ruiz,Gregory M. %A Rita R Colwell %X The association of Vibrio cholerae with zooplankton has been suggested as an important factor in transmission of human epidemic cholera, and the ability to colonize zooplankton surfaces may play a role in the temporal variation and predominance of the two different serogroups (V. cholerae O1 El Tor and O139) in the aquatic environment. To date, interactions between specific serogroups and species of plankton remain poorly understood.
Laboratory microcosm experiments were carried out to compare quantitatively the colonization of two copepod species, Acartia tonsa and Eurytemora affinis, by each of the epidemic serogroups. V. cholerae O1 consistently achieved higher abundances than V. cholerae O139 in colonizing adults of each copepod species as well as the multiple life stages of E. affinis. This difference in colonization may be significant in the general predominance of V. cholerae O1 in cholera epidemics in rural Bangladesh where water supplies are taken directly from the environment. %B Applied and Environmental Microbiology %V 73 %P 7926 - 7933 %8 2007/12/15/ %@ 0099-2240, 1098-5336 %G eng %U http://aem.asm.org/content/73/24/7926 %N 24 %R 10.1128/AEM.01238-07 %0 Conference Paper %B Proceedings of the 14th ACM conference on Computer and communications security %D 2007 %T Automated detection of persistent kernel control-flow attacks %A Petroni,Jr.,Nick L. %A Hicks, Michael W. %K CFI %K integrity %K Kernel %K rootkit %K virtualization %X This paper presents a new approach to dynamically monitoring operating system kernel integrity, based on a property called state-based control-flow integrity (SBCFI). Violations of SBCFI signal a persistent, unexpected modification of the kernel's control-flow graph. We performed a thorough analysis of 25 Linux rootkits and found that 24 (96%) employ persistent control-flow modifications; an informal study of Windows rootkits yielded similar results. We have implemented SBCFI enforcement as part of the Xen and VMware virtual machine monitors. Our implementation detected all the control-flow modifying rootkits we could install, while imposing unnoticeable overhead for both a typical web server workload and CPU-intensive workloads when operating at 10 second intervals.
%B Proceedings of the 14th ACM conference on Computer and communications security %S CCS '07 %I ACM %C New York, NY, USA %P 103 - 115 %8 2007/// %@ 978-1-59593-703-2 %G eng %U http://doi.acm.org/10.1145/1315245.1315260 %R 10.1145/1315245.1315260 %0 Conference Paper %B Proceedings of the twenty-second IEEE/ACM international conference on Automated software engineering %D 2007 %T Automated GUI testing guided by usage profiles %A Brooks,Penelope A. %A Memon, Atif M. %K event-driven software %K GUI testing %K usage profiles %X Most software developed in recent years has a graphical user interface (GUI). The only way for the end-user to interact with the software application is through the GUI. Hence, acceptance and system testing of the software requires GUI testing. This paper presents a new technique for testing of GUI applications. Information on the actual usage of the application, in the form of "usage profiles," is used to ensure that a new version of the application will function correctly. Usage profiles, sequences of events that end-users execute on a GUI, are used to develop a probabilistic usage model of the application. An algorithm uses the model to generate test cases that represent events the user is most likely to execute. Reverse engineering methods are used to extract the underlying structure of the application. An empirical study on four open source GUI applications reveals that test suites generated from the probabilistic model are 0.2-22% of the size of test suites produced directly from usage profiles.
Furthermore, the test suites generated from the model detect more faults per test case than those detected directly from the usage profiles, and detect faults not detected by the original profiles. %B Proceedings of the twenty-second IEEE/ACM international conference on Automated software engineering %S ASE '07 %I ACM %C New York, NY, USA %P 333 - 342 %8 2007/// %@ 978-1-59593-882-4 %G eng %U http://doi.acm.org/10.1145/1321631.1321681 %R 10.1145/1321631.1321681 %0 Conference Paper %B The 9th International Conference on Document Analysis and Recognition (ICDAR 2007) %D 2007 %T Automatic Document Logo Detection %A Zhu,Guangyu %A David Doermann %X Automatic logo detection and recognition continues to be of great interest to the document retrieval community as it enables effective identification of the source of a document. In this paper, we propose a new approach to logo detection and extraction in document images that robustly classifies and precisely localizes logos using a boosting strategy across multiple image scales. At a coarse scale, a trained Fisher classifier performs initial classification using features from document context and connected components. Each logo candidate region is further classified at successively finer scales by a cascade of simple classifiers, which allows false alarms to be discarded and the detected region to be refined. Our approach is segmentation free and layout independent. We define a meaningful evaluation metric to measure the quality of logo detection using labeled groundtruth. We demonstrate the effectiveness of our approach using a large collection of real-world documents.
%B The 9th International Conference on Document Analysis and Recognition (ICDAR 2007) %C Curitiba, Brazil %P 864 - 868 %8 2007/// %G eng %0 Conference Paper %B Proceedings of the 2007 ACM/IEEE conference on Supercomputing %D 2007 %T Automatic software interference detection in parallel applications %A Tabatabaee, Vahid %A Hollingsworth, Jeffrey K %X We present an automated software interference detection methodology for Single Program, Multiple Data (SPMD) parallel applications. Interference comes from the system and unexpected processes. If not detected and corrected, such interference may result in performance degradation. Our goal is to provide a reliable metric for software interference that can be used in soft-failure protection and recovery systems. A unique feature of our algorithm is that we measure the relative timing of application events (i.e. time between MPI calls) rather than system level events such as CPU utilization. This approach lets our system automatically accommodate natural variations in an application's utilization of resources. We use performance irregularities and degradation as signs of software interference. However, instead of relying on temporal changes in performance, our system detects spatial performance degradation across multiple processors. We also include a case study that demonstrates our technique's effectiveness, resilience and robustness. %B Proceedings of the 2007 ACM/IEEE conference on Supercomputing %S SC '07 %I ACM %C Reno, Nevada %P 14:1 - 14:12 %8 2007/// %@ 978-1-59593-764-3 %G eng %R 10.1145/1362622.1362642 %0 Journal Article %J IEEE internet computing %D 2007 %T Autonomic computing %A Brabec,F. %A Samet, Hanan %B IEEE internet computing %V 11 %P 52 - 59 %8 2007/// %G eng %N 1 %0 Conference Paper %D 2006 %T Accident or intention: That is the question (in the noisy iterated prisoner’s dilemma) %A Au,T. C %A Nau, Dana S.
%X This paper focuses on the Noisy Iterated Prisoner’s Dilemma, a version of the Iterated Prisoner’s Dilemma (IPD) in which there is a nonzero probability that a “cooperate” action will accidentally be changed into a “defect” action and vice versa. Tit-For-Tat and other strategies that do quite well in the ordinary (non-noisy) IPD can do quite badly in the Noisy IPD. This paper presents a technique called symbolic noise detection, for detecting whether anomalies in a player’s behavior are deliberate or accidental. The key idea is to construct a model of the other agent’s behavior, and watch for any deviation from this model. If the other agent’s next action is inconsistent with this model, the inconsistency can be due either to noise or to a genuine change in their behavior; and we can often distinguish between the two cases by waiting to see whether this inconsistency persists in the next few moves. We entered several different versions of our strategy in the 2005 Iterated Prisoner’s Dilemma competition, in Category 2 (noisy environments). Out of the 165 contestants in this category, our programs consistently ranked among the top ten. The best of our programs ranked third, and it was beaten only by two “master-slave strategy” programs that each had a large number of “slave” programs feeding points to them. %P 561 - 568 %8 2006/// %G eng %U http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.120.7640&rep=rep1&type=pdf %0 Conference Paper %B Proceedings of the twenty-fifth ACM SIGMOD-SIGACT-SIGART symposium on Principles of database systems %D 2006 %T Achieving anonymity via clustering %A Aggarwal,Gagan %A Feder,Tomás %A Kenthapadi,Krishnaram %A Khuller, Samir %A Panigrahy,Rina %A Thomas,Dilys %A Zhu,An %K anonymity %K Approximation algorithms %K clustering %K privacy %X Publishing data for analysis from a table containing personal records, while maintaining individual privacy, is a problem of increasing importance today.
The traditional approach of de-identifying records is to remove identifying fields such as social security number, name etc. However, recent research has shown that a large fraction of the US population can be identified using non-key attributes (called quasi-identifiers) such as date of birth, gender, and zip code [15]. Sweeney [16] proposed the k-anonymity model for privacy where non-key attributes that leak information are suppressed or generalized so that, for every record in the modified table, there are at least k−1 other records having exactly the same values for quasi-identifiers. We propose a new method for anonymizing data records, where quasi-identifiers of data records are first clustered and then cluster centers are published. To ensure privacy of the data records, we impose the constraint that each cluster must contain no fewer than a pre-specified number of data records. This technique is more general since we have a much larger choice for cluster centers than k-Anonymity. In many cases, it lets us release a lot more information without compromising privacy. We also provide constant-factor approximation algorithms to come up with such a clustering. This is the first set of algorithms for the anonymization problem where the performance is independent of the anonymity parameter k. We further observe that a few outlier points can significantly increase the cost of anonymization. Hence, we extend our algorithms to allow an ε fraction of points to remain unclustered, i.e., deleted from the anonymized publication. Thus, by not releasing a small fraction of the database records, we can ensure that the data published for analysis has less distortion and hence is more useful. Our approximation algorithms for new clustering objectives are of independent interest and could be applicable in other clustering scenarios as well. 
%B Proceedings of the twenty-fifth ACM SIGMOD-SIGACT-SIGART symposium on Principles of database systems %S PODS '06 %I ACM %C New York, NY, USA %P 153 - 162 %8 2006/// %@ 1-59593-318-2 %G eng %U http://doi.acm.org/10.1145/1142351.1142374 %R 10.1145/1142351.1142374 %0 Book Section %B Articulated Motion and Deformable Objects %D 2006 %T Acquisition of Articulated Human Body Models Using Multiple Cameras %A Sundaresan,Aravind %A Chellappa, Rama %E Perales,Francisco %E Fisher,Robert %X Motion capture is an important application in different areas such as biomechanics, computer animation, and human-computer interaction. Current motion capture methods typically use human body models in order to guide pose estimation and tracking. We model the human body as a set of tapered super-quadrics connected in an articulated structure and propose an algorithm to automatically estimate the parameters of the model using video sequences obtained from multiple calibrated cameras. Our method is based on the fact that the human body is constructed of several articulated chains that can be visualised as essentially 1-D segments embedded in 3-D space and connected at specific joint locations. The proposed method first computes a voxel representation from the images and maps the voxels to a high dimensional space in order to extract the 1-D structure. A bottom-up approach is then suggested in order to build a parametric (spline-based) representation of a general articulated body in the high dimensional space followed by a top-down probabilistic approach that registers the segments to the known human body model. We then present an algorithm to estimate the parameters of our model using the segmented and registered voxels.
%B Articulated Motion and Deformable Objects %S Lecture Notes in Computer Science %I Springer Berlin / Heidelberg %V 4069 %P 78 - 89 %8 2006/// %@ 978-3-540-36031-5 %G eng %U http://dx.doi.org/10.1007/11789239_9 %0 Journal Article %J On the Move to Meaningful Internet Systems 2006: CoopIS, DOA, GADA, and ODBASE %D 2006 %T An adaptive probabilistic replication method for unstructured p2p networks %A Tsoumakos,D. %A Roussopoulos, Nick %B On the Move to Meaningful Internet Systems 2006: CoopIS, DOA, GADA, and ODBASE %P 480 - 497 %8 2006/// %G eng %0 Journal Article %J ACM Transactions on Database Systems (TODS) %D 2006 %T Adaptive pull-based policies for wide area data delivery %A Bright,L. %A Gal,A. %A Raschid, Louiqa %B ACM Transactions on Database Systems (TODS) %V 31 %P 631 - 671 %8 2006/// %G eng %N 2 %0 Conference Paper %B Acoustics, Speech and Signal Processing, 2006. ICASSP 2006 Proceedings. 2006 IEEE International Conference on %D 2006 %T An Adaptive Threshold Method for Hyperspectral Target Detection %A Broadwater, J. %A Chellappa, Rama %K adaptive threshold method;background statistics;hyperspectral target detection;inverse blind importance sampling;geophysical signal processing;importance sampling;object detection %X In this paper, we present a new approach to automatically determine a detector threshold. This research problem is especially important in hyperspectral target detection as targets are typically very similar to the background. While a number of methods exist to determine the threshold, these methods require either large amounts of data or make simplifying assumptions about the background distribution. We use a method called inverse blind importance sampling which requires few samples and makes no a priori assumptions about the background statistics.
Results show the promise of this algorithm to determine thresholds for fixed false alarm densities in hyperspectral detectors. %B Acoustics, Speech and Signal Processing, 2006. ICASSP 2006 Proceedings. 2006 IEEE International Conference on %V 5 %P V - V %8 2006/05// %G eng %R 10.1109/ICASSP.2006.1661497 %0 Conference Paper %B Proceedings of the 11th Conference on European Chapter of the Association for Computational Linguistics %D 2006 %T Adaptive Transformation-based Learning for Improving Dictionary Tagging %A Karagol-Ayan,Burcu %A Doermann, David %A Weinberg, Amy %X We present an adaptive technique that enables users to produce a high quality dictionary parsed into its lexicographic components (headwords, parts of speech, translations, etc.) using an extremely small amount of user provided training data. We use transformation-based learning (TBL) as a postprocessor at two points in the system to improve performance. The results show that the tagging accuracy is increased from 83% and 91% to 93% and 94% for individual words or tokens, and from 64% and 83% to 90% and 93% for contiguous phrases such as definitions or examples of usage. %B Proceedings of the 11th Conference on European Chapter of the Association for Computational Linguistics %C Trento, Italy %P 257 - 264 %8 2006/04// %G eng %0 Journal Article %J Proceedings of the Third International WordNet Conference %D 2006 %T Adding dense, weighted connections to WordNet %A Jordan Boyd-Graber %A Fellbaum,C. %A Osherson,D. %A Schapire,R. %X WORDNET, a ubiquitous tool for natural language processing, suffers from sparsity of connections between its component concepts (synsets). Through the use of human annotators, a subset of the connections between 1000 hand-chosen synsets was assigned a value of “evocation” representing how much the first concept brings to mind the second.
These data, along with existing similarity measures, constitute the basis of a method for predicting evocation between previously unrated pairs. %B Proceedings of the Third International WordNet Conference %P 29 - 36 %8 2006/// %G eng %0 Book %D 2006 %T Advances in Computers: Computational Biology and Bioinformatics %A Zelkowitz, Marvin V %A Tseng,Chau-Wen %K Bioinformatics %K Computational Biology %K Computers / Bioinformatics %K Computers / Computer Science %K Computers / Data Processing %K Computers / Information Theory %K Computers / Interactive & Multimedia %K Computers / Programming / General %K Computers / Reference %K Computers / Software Development & Engineering / General %K Mathematics / Applied %K Science / Life Sciences / Biology %X The field of bioinformatics and computational biology arose due to the need to apply techniques from computer science, statistics, informatics, and applied mathematics to solve biological problems. Scientists have been trying to study biology at a molecular level using techniques derived from biochemistry, biophysics, and genetics. Progress has greatly accelerated with the discovery of fast and inexpensive automated DNA sequencing techniques. As the genomes of more and more organisms are sequenced and assembled, scientists are discovering many useful facts by tracing the evolution of organisms by measuring changes in their DNA, rather than through physical characteristics alone. This has led to rapid growth in the related fields of phylogenetics, the study of evolutionary relatedness among various groups of organisms, and comparative genomics, the study of the correspondence between genes and other genomic features in different organisms. Comparing the genomes of organisms has allowed researchers to better understand the features and functions of DNA in individual organisms, as well as provide insights into how organisms evolve over time. 
The first four chapters of this book focus on algorithms for comparing the genomes of different organisms. Possible concrete applications include identifying the basis for genetic diseases and tracking the development and spread of different forms of Avian flu. As researchers begin to better understand the function of DNA, attention has begun shifting towards the actual proteins produced by DNA. The final two chapters explore proteomic techniques for analyzing proteins directly to identify their presence and understand their physical structure. Written by active PhD researchers in computational biology and bioinformatics. %I Academic Press %8 2006/11/22/ %@ 9780120121687 %G eng %0 Journal Article %J Journal of Algorithms %D 2006 %T Algorithms for non-uniform size data placement on parallel disks %A Kashyap,Srinivas %A Khuller, Samir %X We study an optimization problem that arises in the context of data placement in a multimedia storage system. We are given a collection of M multimedia objects (data items) that need to be assigned to a storage system consisting of N disks d_1, d_2, …, d_N. We are also given sets U_1, U_2, …, U_M such that U_i is the set of clients seeking the ith data item. Data item i has size s_i. Each disk d_j is characterized by two parameters, namely, its storage capacity C_j which indicates the maximum total size of data items that may be assigned to it, and a load capacity L_j which indicates the maximum number of clients that it can serve. The goal is to find a placement of data items to disks and an assignment of clients to disks so as to maximize the total number of clients served, subject to the capacity constraints of the storage system. We study this data placement problem for homogeneous storage systems where all the disks are identical. We assume that all disks have a storage capacity of k and a load capacity of L. Previous work on this problem has assumed that all data items have unit size, in other words s_i = 1 for all i.
Even for this case, the problem is NP-hard. For the case where s_i ∈ {1, …, Δ} for some constant Δ, we develop a polynomial time approximation scheme (PTAS). This result is obtained by developing two algorithms, one that works for constant k and one that works for arbitrary k. The algorithm for arbitrary k guarantees a solution in which at least a ((k − Δ)/(k + Δ))(1 − 1/(1 + k/(2Δ))²)-fraction of all clients is assigned to a disk (under certain assumptions). In addition we develop an algorithm for which we can prove tight bounds when s_i ∈ {1, 2}. In fact, we can show that a (1 − 1/(1 + ⌊k/2⌋)²)-fraction of all clients can be assigned (under certain natural assumptions), regardless of the input distribution. %B Journal of Algorithms %V 60 %P 144 - 167 %8 2006/08// %@ 0196-6774 %G eng %U http://www.sciencedirect.com/science/article/pii/S0196677404001063 %N 2 %R 10.1016/j.jalgor.2004.06.007 %0 Conference Paper %D 2006 %T Algorithms for on-line monitoring of components in an optical tweezers-based assembly cell %A Peng,T. %A Balijepalli,A. %A Gupta,S.K. %A LeBrun,T. W. %X Optical tweezers have emerged as a powerful tool for micro- and nanomanipulation. Using optical tweezers to perform automated assembly requires on-line monitoring of components in the assembly workspace. This paper presents algorithms for estimating positions and orientations of microscale and nanoscale components in the 3-Dimensional assembly workspace. Algorithms presented in this paper use images obtained by optical section microscopy. The images are first segmented to locate areas of interest and then image gradient information from the areas of interest is used to generate probable locations and orientations of components in the XY-plane. Finally, signature curves are computed and utilized to obtain component locations and orientations in 3-D space. We have tested these algorithms with silica micro-spheres as well as metallic nanowires.
We believe that the algorithms described in this paper will provide the foundation for realizing automated assembly operations in optical tweezers-based assembly cells. %P 1 - 13 %8 2006/// %G eng %U http://www.glue.umd.edu/~skgupta/Publication/CIE06_Peng1.pdf %0 Journal Article %J Journal of Research of the National Institute of Standards and Technology %D 2006 %T Algorithms for Structured Total Least Squares Problems with Applications to Blind Image Deblurring %A Kalsi,Anoop %A O'Leary, Dianne P. %X Mastronardi, Lemmerling, and Van Huffel presented an algorithm for solving a total least squares problem when the matrix and its perturbations are Toeplitz. A Toeplitz matrix is a special kind of matrix with small displacement rank. Here we generalize the fast algorithm to any matrix with small displacement rank. In particular, we show how to efficiently construct the generators whenever M has small displacement rank and show that in many important cases the Cholesky factorization of the matrix MᵀM can also be determined fast. We further extend this problem to Tikhonov regularization of ill-posed problems and illustrate the use of the algorithm on an image deblurring problem.
%B Journal of Research of the National Institute of Standards and Technology %V 111 %P 113 - 119 %8 2006/// %G eng %U http://nvl.nist.gov/pub/nistpubs/jres/111/2/V111.N02.A06.pdf %N 2 %0 Conference Paper %B Proceedings of the 2006 SIGCOMM workshop on Challenged networks %D 2006 %T Alternative custodians for congestion control in delay tolerant networks %A Seligman,Matthew %A Fall,Kevin %A Mundur, Padma %K delay tolerant network %K Routing %B Proceedings of the 2006 SIGCOMM workshop on Challenged networks %S CHANTS '06 %I ACM %C New York, NY, USA %P 229 - 236 %8 2006/// %@ 1-59593-572-X %G eng %U http://doi.acm.org/10.1145/1162654.1162660 %R 10.1145/1162654.1162660 %0 Journal Article %J SIAM Journal on Scientific Computing %D 2006 %T Analysis and comparison of geometric and algebraic multigrid for convection-diffusion equations %A Wu, C. T %A Elman, Howard %B SIAM Journal on Scientific Computing %V 28 %P 2208 - 2228 %8 2006/// %G eng %N 6 %0 Journal Article %J Bulletin of the American Physical Society %D 2006 %T Analysis of Shock Motion in a Compression Ramp Configuration using DNS data %A Wu,M. %A Martin, M.P %B Bulletin of the American Physical Society %8 2006/// %G eng %0 Journal Article %J IACR ePrint report %D 2006 %T Analyzing the HB and HB+ protocols in the “large error” case %A Katz, Jonathan %A Smith,A. %X HB and HB+ are two shared-key, unidirectional authentication protocols whose extremely low computational cost makes them potentially well-suited for severely resource-constrained devices. Security of these protocols is based on the conjectured hardness of learning parity with noise; that is, learning a secret s given “noisy” dot products of s that are incorrect with probability ε. Although the problem of learning parity with noise is meaningful for any constant ε < 1/2, existing proofs of security for HB and HB+ only imply security when ε < 1/4. In this note, we show how to extend these proofs to the case of arbitrary ε < 1/2.
%B IACR ePrint report %V 326 %P 2006 - 2006 %8 2006/// %G eng %0 Conference Paper %B 10th International Workshop on Frontiers in Handwriting Recognition %D 2006 %T A New Algorithm for Detecting Text Line in Handwritten Documents %A Li,Yi %A Yefeng Zheng %A David Doermann %A Jaeger,Stefan %X Curvilinear text line detection and segmentation in handwritten documents is a significant challenge for handwriting recognition. Given no prior knowledge of script, we model text line detection as an image segmentation problem by enhancing text line structure using a Gaussian window, and adopting the level set method to evolve text line boundaries. Experiments show that the proposed method achieves high accuracy for detecting text lines in both handwritten and machine printed documents with many scripts. %B 10th International Workshop on Frontiers in Handwriting Recognition %C La Baule, France %P 35 - 40 %8 2006/// %G eng %0 Journal Article %J The Semantic Web: Research and Applications %D 2006 %T Annotated RDF %A Udrea,O. %A Recupero,D. %A V.S. Subrahmanian %X There are numerous extensions of RDF that support temporal reasoning, reasoning about pedigree, reasoning about uncertainty, and so on. In this paper, we present Annotated RDF (or aRDF for short) in which RDF triples are annotated by members of a partially ordered set (with bottom element) that can be selected in any way desired by the user. We present a formal declarative semantics (model theory) for annotated RDF and develop algorithms to check consistency of aRDF theories and to answer queries to aRDF theories. We show that annotated RDF captures versions of all the forms of reasoning mentioned above within a single unified framework. We develop a prototype aRDF implementation and show that our algorithms work very fast indeed – in fact, in just a matter of seconds for theories with over 100,000 nodes.
%B The Semantic Web: Research and Applications %V 4011 %P 487 - 501 %8 2006/// %G eng %R 10.1007/11762256_36 %0 Conference Paper %B Proceedings of the Workshop on Frontiers in Linguistically Annotated Corpora 2006 %D 2006 %T Annotation compatibility working group report %A Meyers,A. %A Fang,A. C. %A Ferro,L. %A Kübler,S. %A Jia-Lin,T. %A Palmer,M. %A Poesio,M. %A Dolbey,A. %A Schuler,K. K. %A Loper,E. %A Zinsmeister,H. %A Penn,G. %A Xue,N. %A Hinrichs,E. %A Wiebe,J. %A Pustejovsky,J. %A Farwell,D. %A Hajicova,E. %A Dorr, Bonnie J %A Hovy,E. %A Onyshkevych,B. A. %A Levin,L. %X This report explores the question of compatibility between annotation projects including translating annotation formalisms to each other or to common forms. Compatibility issues are crucial for systems that use the results of multiple annotation projects. We hope that this report will begin a concerted effort in the field to track the compatibility of annotation schemes for part of speech tagging, time annotation, treebanking, role labeling and other phenomena. %B Proceedings of the Workshop on Frontiers in Linguistically Annotated Corpora 2006 %S LAC '06 %I Association for Computational Linguistics %C Stroudsburg, PA, USA %P 38 - 53 %8 2006/// %@ 1-932432-78-7 %G eng %U http://dl.acm.org/citation.cfm?id=1641991.1641997 %0 Report %D 2006 %T Anonymous multi-attribute encryption with range query and conditional decryption %A Bethencourt, J. %A Chan, H. %A Perrig, A. %A Elaine Shi %A Song,D. %X We introduce the concept of Anonymous Multi-Attribute Encryption with Range Query and Conditional Decryption (AMERQCD). In AMERQCD, a plaintext is encrypted under a point in multi-dimensional space. To a computationally bounded adversary, the ciphertext hides both the plaintext and the point under which it is encrypted.
In a range query, a master key owner releases the decryption key for an arbitrary hyper-rectangle in space, thus allowing decryption of ciphertexts previously encrypted under any point within the hyper-rectangle. However, a computationally bounded adversary cannot learn any information on ciphertexts outside the range covered by the decryption key (except the fact that they do not lie within this range). We give an efficient construction based on the Decision Bilinear Diffie-Hellman (D-BDH) and Decision Linear (D-Linear) assumptions. %I Carnegie Mellon University %8 2006 %@ CMU-CS-06-135 %G eng %U http://reports-archive.adm.cs.cmu.edu/anon/anon/home/ftp/2006/CMU-CS-06-135.pdf %R Technical Report %0 Journal Article %J Journal of Visual Communication and Image Representation %D 2006 %T Appearance-based person recognition using color/path-length profile %A Yoon,K. %A Harwood,D. %A Davis, Larry S. %B Journal of Visual Communication and Image Representation %V 17 %P 605 - 622 %8 2006/// %G eng %N 3 %0 Book Section %B Graph Drawing %D 2006 %T Applications of Parameterized st-Orientations in Graph Drawing Algorithms %A Charalampos Papamanthou %A Tollis, Ioannis G. %E Healy, Patrick %E Nikolov, Nikola S. %K Algorithm Analysis and Problem Complexity %K Computer Graphics %K Data structures %K Discrete Mathematics in Computer Science %X Many graph drawing algorithms use st-numberings (st-orientations or bipolar orientations) as a first step. An st-numbering of a biconnected undirected graph defines a directed graph with no cycles, one single source s and one single sink t. As there exist exponentially many st-numberings that correspond to a certain undirected graph G, using different st-numberings in various graph drawing algorithms can result in aesthetically different drawings with different area bounds.
In this paper, we present results concerning new algorithms for parameterized st-orientations, their impact on graph drawing algorithms, and especially on visibility representations. %B Graph Drawing %S Lecture Notes in Computer Science %I Springer Berlin Heidelberg %P 355 - 367 %8 2006/01/01/ %@ 978-3-540-31425-7, 978-3-540-31667-1 %G eng %U http://link.springer.com/chapter/10.1007/11618058_32 %0 Conference Paper %B Proceedings of the 2006 workshop on Programming languages and analysis for security %D 2006 %T Applying flow-sensitive CQUAL to verify MINIX authorization check placement %A Fraser,Timothy %A Petroni,Jr. %A Arbaugh, William A. %K access controls %K cqual %K minix %X We present the first use of flow-sensitive CQUAL to verify the placement of operating system authorization checks. Our analysis of MINIX 3 system servers and discovery of a non-exploitable Time-Of-Check/Time-Of-Use bug demonstrate the effectiveness of flow sensitive CQUAL and its advantage over earlier flow-insensitive versions. We also identify and suggest alternatives to current CQUAL usability features that encourage analysts to make omissions that cause the otherwise sound tool to produce false-negative results. %B Proceedings of the 2006 workshop on Programming languages and analysis for security %S PLAS '06 %I ACM %C Ottawa, Ontario, Canada %P 3 - 6 %8 2006/// %@ 1-59593-374-3 %G eng %R 10.1145/1134744.1134747 %0 Conference Paper %B Proceedings of the 22nd International Conference on Data Engineering, 2006. ICDE '06 %D 2006 %T Approximate Data Collection in Sensor Networks using Probabilistic Models %A Chu,D. %A Deshpande, Amol %A Hellerstein,J. M %A Wei Hong %K Batteries %K Biological system modeling %K Biosensors %K data mining %K Databases %K Energy consumption %K Intelligent networks %K Intelligent sensors %K Robustness %K Wireless sensor networks %X Wireless sensor networks are proving to be useful in a variety of settings.
A core challenge in these networks is to minimize energy consumption. Prior database research has proposed to achieve this by pushing data-reducing operators like aggregation and selection down into the network. This approach has proven unpopular with early adopters of sensor network technology, who typically want to extract complete "dumps" of the sensor readings, i.e., to run "SELECT *" queries. Unfortunately, because these queries do no data reduction, they consume significant energy in current sensornet query processors. In this paper we attack the "SELECT *" problem for sensor networks. We propose a robust approximate technique called Ken that uses replicated dynamic probabilistic models to minimize communication from sensor nodes to the network’s PC base station. In addition to data collection, we show that Ken is well suited to anomaly- and event-detection applications. A key challenge in this work is to intelligently exploit spatial correlations across sensor nodes without imposing undue sensor-to-sensor communication burdens to maintain the models. Using traces from two real-world sensor network deployments, we demonstrate that relatively simple models can provide significant communication (and hence energy) savings without undue sacrifice in result quality or frequency. Choosing optimally among even our simple models is NP-hard, but our experiments show that a greedy heuristic performs nearly as well as an exhaustive algorithm. %B Proceedings of the 22nd International Conference on Data Engineering, 2006. ICDE '06 %I IEEE %P 48 - 48 %8 2006/04/03/07 %@ 0-7695-2570-9 %G eng %R 10.1109/ICDE.2006.21 %0 Journal Article %J Discrete Event Dynamic Systems %D 2006 %T Approximating the minimal sensor selection for supervisory control %A Rohloff,K. R %A Khuller, Samir %A Kortsarz,G. %X This paper discusses the problem of selecting a set of sensors of minimum cost that can be used for the synthesis of a supervisory controller.
It is shown how this sensor selection problem is related to a type of directed graph st-cut problem that has not been previously discussed in the literature. Approximation algorithms to solve the sensor selection problem can be used to solve the graph cutting problem and vice-versa. Polynomial time algorithms to find good approximate solutions to either problem most likely do not exist (under certain complexity assumptions), but a time efficient approximation algorithm is shown that solves a special case of these problems. It is also shown how to convert the sensor selection problem into an integer programming problem. %B Discrete Event Dynamic Systems %V 16 %P 143 - 170 %8 2006/// %G eng %N 1 %R 10.1007/s10626-006-6187-3 %0 Journal Article %J Networks %D 2006 %T Approximation algorithms for channel allocation problems in broadcast networks %A Gandhi,Rajiv %A Khuller, Samir %A Srinivasan, Aravind %A Wang,Nan %K channel allocation problem %K distributed approximation algorithm %K edge partitioning problem %X We study two packing problems that arise in the area of dissemination-based information systems; a second theme is the study of distributed approximation algorithms. The problems considered have the property that the space occupied by a collection of objects together could be significantly less than the sum of the sizes of the individual objects. In the Channel Allocation Problem, there are requests that are subsets of topics. There are a fixed number of channels that can carry an arbitrary number of topics. All the topics of each request must be broadcast on some channel. The load on any channel is the number of topics that are broadcast on that channel; the objective is to minimize the maximum load on any channel. We present approximation algorithms for this problem, and also show that the problem is MAX-SNP hard. The second problem is the Edge Partitioning Problem addressed by Goldschmidt, Hochbaum, Levin, and Olinick (Networks, 41:13–23, 2003). 
Each channel here can deliver topics for at most k requests, and we aim to minimize the total load on all channels. We present an O(n^(1/3))-approximation algorithm, and also show that the algorithm can be made fully distributed with the same approximation guarantee; we also generalize the (nondistributed) Edge Partitioning Problem of graphs to the case of hypergraphs. © 2006 Wiley Periodicals, Inc. NETWORKS, Vol. 47(4), 225–236 2006 %B Networks %V 47 %P 225 - 236 %8 2006/03/09/ %@ 1097-0037 %G eng %U http://onlinelibrary.wiley.com/doi/10.1002/net.20111/abstract %N 4 %R 10.1002/net.20111 %0 Conference Paper %B American Medical Informatics Association 2006 Annual Symposium Proceedings Biomedical and Health Informatics: From Foundations to Applications to Policy, Washington, DC, DW Bates, JH Holmes, and G. Kuperman, editors %D 2006 %T Archimedes, an archive of medical images %A Tahmoush,D. %A Samet, Hanan %X We present a medical image and medical record database for the storage, research, transmission, and evaluation of medical images. Medical images from any source that supports the DICOM standard can be stored and accessed, as well as associated analysis and annotations. Retrieval is based on patient info, date, doctor’s annotations, features in the images, or a spatial combination. This database supports the secure transmission of sensitive data for tele-medicine and follows all HIPAA regulations. %B American Medical Informatics Association 2006 Annual Symposium Proceedings Biomedical and Health Informatics: From Foundations to Applications to Policy, Washington, DC, DW Bates, JH Holmes, and G. Kuperman, editors %8 2006/// %G eng %0 Journal Article %J Software: Practice and Experience %D 2006 %T An architecture for adaptive intrusion-tolerant applications %A Pal,Partha %A Rubel,Paul %A Atighetchi,Michael %A Webber,Franklin %A Sanders,William H.
%A Seri,Mouna %A Ramasamy,HariGovind %A Lyons,James %A Courtney,Tod %A Agbaria,Adnan %A Michel Cukier %A Gossett,Jeanna %A Keidar,Idit %K adaptive defense %K adaptive middleware %K Byzantine fault tolerance %K intrusion tolerance %K redundancy %K survivability architecture %X Applications that are part of a mission-critical information system need to maintain a usable level of key services through ongoing cyber-attacks. In addition to the well-publicized denial of service (DoS) attacks, these networked and distributed applications are increasingly threatened by sophisticated attacks that attempt to corrupt system components and violate service integrity. While various approaches have been explored to deal with DoS attacks, corruption-inducing attacks remain largely unaddressed. We have developed a collection of mechanisms based on redundancy, Byzantine fault tolerance, and adaptive middleware that help distributed, object-based applications tolerate corruption-inducing attacks. In this paper, we present the ITUA architecture, which integrates these mechanisms in a framework for auto-adaptive intrusion-tolerant systems, and we describe our experience in using the technology to defend a critical application that is part of a larger avionics system as an example. We also motivate the adaptive responses that are key to intrusion tolerance, and explain the use of the ITUA architecture to support them in an architectural framework. Copyright © 2006 John Wiley & Sons, Ltd. %B Software: Practice and Experience %V 36 %P 1331 - 1354 %8 2006/// %@ 1097-024X %G eng %U http://onlinelibrary.wiley.com/doi/10.1002/spe.747/abstract %N 11-12 %R 10.1002/spe.747 %0 Conference Paper %B Proceedings of the 15th conference on USENIX Security Symposium %D 2006 %T An architecture for specification-based detection of semantic integrity violations in kernel dynamic data %A Petroni Jr,N. L %A Fraser,T. %A Walters,A. A %A Arbaugh, William A. 
%B Proceedings of the 15th conference on USENIX Security Symposium %P 20 - 20 %8 2006/// %G eng %0 Conference Paper %B International Conference on Document Recognition and Retrieval XIII %D 2006 %T A Robust Stamp Detection Framework on Degraded Documents %A Zhu,Guangyu %A Jaeger,Stefan %A David Doermann %X In this paper, we present a novel stamp detection framework based on parameter estimation of connected edge features. Using robust basic shape detectors, the approach is effective for stamps with analytically shaped contours. For elliptic/circular stamps, it efficiently makes use of the orientation information from pairs of edge points to determine its center position and area without computing all five parameters of the ellipse. When linking connected edges, we introduce an effective template-based junction removal technique to specifically address the problem that stamps often spatially overlap with background contents. These give our approach a significant advantage in terms of computation complexity over traditional Hough transform methods. Experimental results on real degraded documents demonstrated the robustness of this retrieval approach on a large document database, which consists of both printed text and handwritten notes. This enables us to reliably retrieve documents from a specific source by detecting its official stamps with analytically-shaped contours when only limited samples are available. %B International Conference on Document Recognition and Retrieval XIII %I San Jose, CA %P 1 - 9 %8 2006/// %G eng %0 Conference Paper %D 2006 %T Assessing the Attack Threat due to IRC Channels %A Meyer,R.
%A Michel Cukier %K attack threat assessment %K channel activity %K client-server systems %K computer crime %K intrusion-resilient channels %K IRC channels %K IRC protocols %K Protocols %K telecommunication channels %K telecommunication security %X This practical experience report presents the results of an investigation into the threat of attacks associated with the chat medium IRC. A combination of simulated users (i.e., bots), some configured with scripts that simulated conversations, and regular users were used. The average number of attacks per day that a user on IRC can expect was determined, along with the effects of channel activity, apparent gender based on the user's name, and network type on the number of attacks. The social structure of IRC channels and the types of users that use it were analyzed. The results indicate that attacks through IRC channels come from human users selecting targets rather than automated scripts targeting every user in a channel. %P 467 - 472 %8 2006/06// %G eng %R 10.1109/DSN.2006.12 %0 Journal Article %J ECOOP 2002—Object-Oriented Programming %D 2006 %T Atomic instructions in Java %A Hovemeyer,D. %A Pugh, William %A Spacco,J. %B ECOOP 2002—Object-Oriented Programming %P 5 - 16 %8 2006/// %G eng %0 Conference Paper %B Computer Vision and Pattern Recognition Workshop, 2006. CVPRW '06. Conference on %D 2006 %T Attribute Grammar-Based Event Recognition and Anomaly Detection %A Joo,Seong-Wook %A Chellapa, Rama %X We propose to use attribute grammars for recognizing normal events and detecting abnormal events in a video. Attribute grammars can describe constraints on features (attributes) in addition to the syntactic structure of the input. Events are recognized using an extension of the Earley parser that handles attributes and concurrent event threads. Abnormal events are detected when the input does not follow the syntax of the grammar or the attributes do not satisfy the constraints in the attribute grammar to some degree.
We demonstrate the effectiveness of our method for the task of recognizing normal events and detecting anomalies in a parking lot. %B Computer Vision and Pattern Recognition Workshop, 2006. CVPRW '06. Conference on %P 107 - 107 %8 2006/06// %G eng %R 10.1109/CVPRW.2006.32 %0 Journal Article %J Artificial Intelligence in Medicine %D 2006 %T Automatic identification of confusable drug names %A Kondrak,Grzegorz %A Dorr, Bonnie J %K Drug names %K Evaluation methodology %K Lexical similarity %K Medical errors %X Objective: Many hundreds of drugs have names that either look or sound so much alike that doctors, nurses and pharmacists can get them confused, dispensing the wrong one in errors that can injure or even kill patients. Methods and material: We propose to address the problem through the application of two new methods—one based on orthographic similarity (“look-alike”), and the other based on phonetic similarity (“sound-alike”). In order to compare the effectiveness of the new methods for identifying confusable drug names with other known similarity measures, we developed a novel evaluation methodology. Results: We show that the new orthographic measure (BI-SIM) outperforms other commonly used measures of similarity on a set containing both look-alike and sound-alike pairs, and that a new feature-based phonetic approach (ALINE) outperforms orthographic approaches on a test set containing solely sound-alike pairs. However, an approach that combines several different measures achieves the best results on two test sets. Conclusion: Our system is currently used as the basis of a system developed for the U.S. Food and Drug Administration for detection of confusable drug names. %B Artificial Intelligence in Medicine %V 36 %P 29 - 42 %8 2006/01// %@ 0933-3657 %G eng %U http://www.sciencedirect.com/science/article/pii/S0933365705001004 %N 1 %R 10.1016/j.artmed.2005.07.005 %0 Report %D 2006 %T Automatic web services composition using Shop2 %A Hendler,J. %A Nau, Dana S.
%A Parsia,B. %A Sirin,E. %A Wu,D. %X Semantic markup of Web services will enable the automation of various kinds of tasks, including discovery, composition, and execution of Web services. We describe how an AI planning system (SHOP2) can be used with DAML-S Web service descriptions to automatically compose Web services. %I Department of Computer Science, University of Maryland, College Park %8 2006/// %G eng %U http://www.stormingmedia.us/96/9697/A969744.html %0 Journal Article %J Algorithms and Computation %D 2005 %T The ABCs of AVDs: Geometric Retrieval Made Simple %A Mount, Dave %X One of the best known structures in computational geometry is the Voronoi diagram. Given a set of n points in space, called sites, this is a subdivision that partitions space into n convex cells according to which site is closest. The Voronoi diagram and its dual, the Delaunay triangulation, are among the most fundamental proximity structures known, and have numerous uses in science. For example, applying point location to the subdivision defined by a Voronoi diagram answers nearest neighbor queries. Although they are popular in low dimensional spaces, there are various reasons why Voronoi diagrams are rarely used in higher dimensions. First, the combinatorial complexity of the diagram grows rapidly, as fast as Ω(n^(d/2)) in d-dimensional space. Second, the diagram does not admit a straightforward search structure in higher dimensions. %B Algorithms and Computation %P 819 - 826 %8 2005/// %G eng %R 10.1007/978-3-540-30551-4_2 %0 Book Section %B Discovery Science %D 2005 %T Active Constrained Clustering by Examining Spectral Eigenvectors %A Xu,Qianjun %A desJardins, Marie %A Wagstaff,Kiri %E Hoffmann,Achim %E Motoda,Hiroshi %E Scheffer,Tobias %X This work focuses on the active selection of pairwise constraints for spectral clustering. We develop and analyze a technique for Active Constrained Clustering by Examining Spectral eigenvectorS (ACCESS) derived from a similarity matrix.
The ACCESS method uses an analysis based on the theoretical properties of spectral decomposition to identify data items that are likely to be located on the boundaries of clusters, and for which providing constraints can resolve ambiguity in the cluster descriptions. Empirical results on three synthetic and five real data sets show that ACCESS significantly outperforms constrained spectral clustering using randomly selected constraints. %B Discovery Science %S Lecture Notes in Computer Science %I Springer Berlin / Heidelberg %V 3735 %P 294 - 307 %8 2005/// %@ 978-3-540-29230-2 %G eng %U http://dx.doi.org/10.1007/11563983_25 %0 Conference Paper %B 8th Int. Conf. on Document Analysis and Recognition (ICDAR'05) %D 2005 %T Adaptive OCR with Limited User Feedback %A Ma,Huanfeng %A David Doermann %X A methodology is proposed for processing noisy printed documents with limited user feedback. Without the support of ground truth, a specific collection of scanned documents can be processed to extract character templates. The adaptiveness of this approach lies in that the extracted templates are used to train an OCR classifier quickly and with limited user feedback. Experimental results show that this approach is extremely useful for the processing of noisy documents with many touching characters. %B 8th Int. Conf. on Document Analysis and Recognition (ICDAR'05) %P 814 - 818 %8 2005/08// %G eng %0 Journal Article %J Theory of Cryptography %D 2005 %T Adaptively-secure, non-interactive public-key encryption %A Canetti,R. %A Halevi,S. %A Katz, Jonathan %X Adaptively-secure encryption schemes ensure secrecy even in the presence of an adversary who can corrupt parties in an adaptive manner based on public keys, ciphertexts, and secret data of already-corrupted parties.
Ideally, an adaptively-secure encryption scheme should, like standard public-key encryption, allow arbitrarily-many parties to use a single encryption key to securely encrypt arbitrarily-many messages to a given receiver who maintains only a single short decryption key. However, it is known that these requirements are impossible to achieve: no non-interactive encryption scheme that supports encryption of an unbounded number of messages and uses a single, unchanging decryption key can be adaptively secure. Impossibility holds even if secure data erasure is possible. We show that this limitation can be overcome by updating the decryption key over time and making some mild assumptions about the frequency of communication between parties. Using this approach, we construct adaptively-secure, completely non-interactive encryption schemes supporting secure encryption of arbitrarily-many messages from arbitrarily-many senders. Our schemes additionally provide forward security and security against chosen-ciphertext attacks. %B Theory of Cryptography %P 150 - 168 %8 2005/// %G eng %R 10.1007/978-3-540-30576-7_9 %0 Conference Paper %B Proceedings of the fourth international joint conference on Autonomous agents and multiagent systems %D 2005 %T Agent-organized networks for dynamic team formation %A Gaston,Matthew E. %A desJardins, Marie %K multi-agent systems %K organizational learning %K team formation %X Many multi-agent systems consist of a complex network of autonomous yet interdependent agents. Examples of such networked multi-agent systems include supply chains and sensor networks. In these systems, agents have a select set of other agents with whom they interact based on environmental knowledge, cognitive capabilities, resource limitations, and communications constraints. Previous findings have demonstrated that the structure of the artificial social network governing the agent interactions is strongly correlated with organizational performance.
As multi-agent systems are typically embedded in dynamic environments, we wish to develop distributed, on-line network adaptation mechanisms for discovering effective network structures. Therefore, within the context of dynamic team formation, we propose several strategies for agent-organized networks (AONs) and evaluate their effectiveness for increasing organizational performance. %B Proceedings of the fourth international joint conference on Autonomous agents and multiagent systems %S AAMAS '05 %I ACM %C New York, NY, USA %P 230 - 237 %8 2005/// %@ 1-59593-093-0 %G eng %U http://doi.acm.org/10.1145/1082473.1082508 %R 10.1145/1082473.1082508 %0 Conference Paper %B Proceedings of the 20th national conference on Artificial intelligence - Volume 1 %D 2005 %T Agent-organized networks for multi-agent production and exchange %A Gaston,Matthew E. %A desJardins, Marie %X As multi-agent systems grow in size and complexity, social networks that govern the interactions among the agents will directly impact system behavior at the individual and collective levels. Examples of such large-scale, networked multi-agent systems include peer-to-peer networks, distributed information retrieval, and agent-based supply chains. One way of dealing with the uncertain and dynamic nature of such environments is to endow agents with the ability to modify the agent social network by autonomously adapting their local connectivity structure. In this paper, we present a framework for agent-organized networks (AONs) in the context of multi-agent production and exchange, and experimentally evaluate the feasibility and efficiency of specific AON strategies. We find that decentralized network adaptation can significantly improve organizational performance. Additionally, we analyze several properties of the resulting network structures and consider their relationship to the observed increase in organizational performance. 
%B Proceedings of the 20th national conference on Artificial intelligence - Volume 1 %S AAAI'05 %I AAAI Press %P 77 - 82 %8 2005/// %@ 1-57735-236-x %G eng %U http://dl.acm.org/citation.cfm?id=1619332.1619347 %0 Journal Article %J J. ACM %D 2005 %T Aggregate operators in probabilistic databases %A Ross,Robert %A V.S. Subrahmanian %A Grant,John %K Aggregates %K probabilistic relational databases %X Though extensions to the relational data model have been proposed in order to handle probabilistic information, there has been very little work to date on handling aggregate operators in such databases. In this article, we present a very general notion of an aggregate operator and show how classical aggregation operators (such as COUNT, SUM, etc.) as well as statistical operators (such as percentiles, variance, etc.) are special cases of this general definition. We devise a formal linear programming based semantics for computing aggregates over probabilistic DBMSs, develop algorithms that satisfy this semantics, analyze their complexity, and introduce several families of approximation algorithms that run in polynomial time. We implemented all of these algorithms and tested them on a large set of data to help determine when each one is preferable. %B J. ACM %V 52 %P 54 - 101 %8 2005/01// %@ 0004-5411 %G eng %U http://doi.acm.org/10.1145/1044731.1044734 %N 1 %R 10.1145/1044731.1044734 %0 Conference Paper %B Computer Vision, 2005. ICCV 2005. Tenth IEEE International Conference on %D 2005 %T An algebraic approach to surface reconstruction from gradient fields %A Agrawal,A. %A Chellapa, Rama %A Raskar, R. 
%K algebra %K algebraic approach %K computer vision %K discrete domain %K gradient field %K graph theory %K integrability %K linear system %K local error confinement %K numerical methods %K photometric stereo %K shape from shading %K surface reconstruction %X Several important problems in computer vision such as shape from shading (SFS) and photometric stereo (PS) require reconstructing a surface from an estimated gradient field, which is usually non-integrable, i.e., has non-zero curl. We propose a purely algebraic approach to enforce integrability in the discrete domain. We first show that enforcing integrability can be formulated as solving a single linear system Ax = b over the image. In general, this system is under-determined. We show conditions under which the system can be solved and a method to get to those conditions based on graph theory. The proposed approach is non-iterative, has the important property of local error confinement and can be applied to several problems. Results on SFS and PS demonstrate the applicability of our method. %B Computer Vision, 2005. ICCV 2005. Tenth IEEE International Conference on %V 1 %P 174 - 181 Vol. 1 %8 2005/10// %G eng %R 10.1109/ICCV.2005.31 %0 Journal Article %J Electronic Notes in Theoretical Computer Science %D 2005 %T An Algebraic Theory Of Boundary Crossing Transitions %A Ray,Arnab %A Cleaveland, Rance %A Skou,Arne %K Compositional Semantics %K Formal Methods %K Process algebra %K Statecharts %X This paper gives a process-algebraic semantics for the hierarchical state machine (HSM) fragment of Statecharts, in which state transitions are permitted to cross state boundaries. Although frowned upon by researchers as promoting unstructured modeling, such transitions are used extensively in practice to model parameterized start states and conditional exit states.
The purpose of this work is to develop a compositional semantics for HSMs that may be fit together with compositional semantic accounts for Statecharts without boundary-crossing transitions in order to arrive at a compositional theory for virtually the whole Statecharts language. Our technical development consists of a process algebra for HSMs that is equipped with an operational semantics, an argument that bisimulation is a congruence for the algebra, a syntax-directed translation procedure for HSMs into the process algebra, and an equational axiomatization of the algebra. %B Electronic Notes in Theoretical Computer Science %V 115 %P 69 - 88 %8 2005/01/18/ %@ 1571-0661 %G eng %U http://www.sciencedirect.com/science/article/pii/S1571066104053186 %R 10.1016/j.entcs.2004.09.029 %0 Journal Article %J ACM Transactions on Mathematical Software-TOMS %D 2005 %T Algorithm 844: Computing sparse reduced-rank approximations to sparse matrices %A Berry,M. W %A Pulatova,S. A %A Stewart, G.W. %B ACM Transactions on Mathematical Software-TOMS %V 31 %P 252 - 269 %8 2005/// %G eng %N 2 %0 Conference Paper %B Computer Design: VLSI in Computers and Processors, 2005. ICCD 2005. Proceedings. 2005 IEEE International Conference on %D 2005 %T Algorithmic and architectural design methodology for particle filters in hardware %A Sankaranarayanan,A. C %A Chellapa, Rama %A Srivastava, A. %K algorithmic design methodology %K architectural design %K bearing-only tracking problem %K computational complexity %K digital filters %K filtering methods (numerical methods) %K non-Gaussian nonlinear state evolution %K parallel architectures %K particle filtering %K pipeline processing %K pipelined VLSI implementation %K VLSI %X In this paper, we present algorithmic and architectural methodology for building particle filters in hardware.
Particle filtering is a new paradigm for filtering in the presence of non-Gaussian, nonlinear state evolution and observation models. This technique has found widespread application in tracking, navigation, and detection problems, especially in a sensing environment. So far, most particle filtering implementations have not been practical for real-time problems due to the excessive computational complexity involved. In this paper, we re-derive the particle filtering theory to make it more amenable to simplified VLSI implementations. Furthermore, we present and analyze a pipelined architectural methodology for designing these computational blocks. Finally, we present an application using the bearing-only tracking problem and evaluate the proposed architecture and algorithmic methodology. %B Computer Design: VLSI in Computers and Processors, 2005. ICCD 2005. Proceedings. 2005 IEEE International Conference on %P 275 - 280 %8 2005/10// %G eng %R 10.1109/ICCD.2005.20 %0 Conference Paper %B Proceedings of the 2005 ACM SIGMETRICS international conference on Measurement and modeling of computer systems %D 2005 %T Algorithmic aspects of capacity in wireless networks %A Kumar,V. S. Anil %A Marathe,Madhav V. %A Parthasarathy,Srinivasan %A Srinivasan, Aravind %K capacity modeling %K end-to-end scheduling %K Linear programming %K Wireless networks %X This paper considers two inter-related questions: (i) Given a wireless ad-hoc network and a collection of source-destination pairs {(si,ti)}, what is the maximum throughput capacity of the network, i.e., the rate at which data from the sources to their corresponding destinations can be transferred in the network? (ii) Can network protocols be designed that jointly route the packets and schedule transmissions at rates close to the maximum throughput capacity? Much of the earlier work focused on random instances and proved analytical lower and upper bounds on the maximum throughput capacity. Here, in contrast, we consider arbitrary wireless networks.
Further, we study the algorithmic aspects of the above questions: the goal is to design provably good algorithms for arbitrary instances. We develop analytical performance evaluation models and distributed algorithms for routing and scheduling which incorporate fairness, energy, and dilation (path-length) requirements and provide a unified framework for utilizing the network close to its maximum throughput capacity. Motivated by certain popular wireless protocols used in practice, we also explore "shortest-path-like" path selection strategies which maximize the network throughput. The theoretical results naturally suggest an interesting class of congestion-aware link metrics which can be directly plugged into several existing routing protocols such as AODV, DSR, etc. We complement the theoretical analysis with extensive simulations. The results indicate that routes obtained using our congestion-aware link metrics consistently yield higher throughput than hop-count-based shortest-path metrics. %B Proceedings of the 2005 ACM SIGMETRICS international conference on Measurement and modeling of computer systems %S SIGMETRICS '05 %I ACM %C New York, NY, USA %P 133 - 144 %8 2005/// %@ 1-59593-022-1 %G eng %U http://doi.acm.org/10.1145/1064212.1064228 %R 10.1145/1064212.1064228 %0 Conference Paper %B Foundations of Computer Science, 2005. FOCS 2005. 46th Annual IEEE Symposium on %D 2005 %T Algorithmic graph minor theory: Decomposition, approximation, and coloring %A Demaine,E. D %A Hajiaghayi, Mohammad T. %A Kawarabayashi,K.
%K algorithmic graph minor theory %K approximation algorithm %K combinatorial polylogarithmic approximation %K computational complexity %K constant-factor approximation %K graph algorithm %K graph coloring %K graph colouring %K half-integral multicommodity flow %K largest grid minor %K maximization problem %K minimization problem %K polynomial-time algorithm %K subexponential fixed-parameter algorithm %K topological graph theory %K treewidth %X At the core of the seminal graph minor theory of Robertson and Seymour is a powerful structural theorem capturing the structure of graphs excluding a fixed minor. This result is used throughout graph theory and graph algorithms, but is existential. We develop a polynomial-time algorithm using topological graph theory to decompose a graph into the structure guaranteed by the theorem: a clique-sum of pieces almost-embeddable into bounded-genus surfaces. This result has many applications. In particular, we show applications to developing many approximation algorithms, including a 2-approximation to graph coloring, constant-factor approximations to treewidth and the largest grid minor, combinatorial polylogarithmic approximation to half-integral multicommodity flow, subexponential fixed-parameter algorithms, and PTASs for many minimization and maximization problems, on graphs excluding a fixed minor. %B Foundations of Computer Science, 2005. FOCS 2005. 46th Annual IEEE Symposium on %P 637 - 646 %8 2005/// %G eng %R 10.1109/SFCS.2005.14 %0 Journal Article %J Computer-Aided Design and Applications %D 2005 %T Algorithms for constructing 3-D point clouds using multiple digital fringe projection patterns %A Peng,T. %A Gupta,S.K. %A Lau,K. %X This paper describes algorithms for generating 3-D point clouds from a set of digital images obtained from projecting phase-shifted sinusoidal fringe patterns onto an object.
In this paper, a mathematical model is introduced for describing the geometric relationship between the fringe patterns being projected, the image captured, and the shape of the object being measured. This model allows considerable flexibility in the spatial configuration of the shape measurement system. The algorithms for point cloud construction described in this paper present an improvement over the existing algorithms in terms of accuracy, ease of system calibration, and sensitivity to parameter errors. These algorithms have been incorporated in a shape measurement system and shown to have very good performance. %B Computer-Aided Design and Applications %V 2 %P 737 - 746 %8 2005/// %G eng %U http://cadanda.homestead.com/V2No6_04.pdf %0 Conference Paper %B Proceedings of the conference on Human Language Technology and Empirical Methods in Natural Language Processing %D 2005 %T Alignment link projection using transformation-based learning %A Ayan,Necip Fazil %A Dorr, Bonnie J %A Monz,Christof %X We present a new word-alignment approach that learns errors made by existing word alignment systems and corrects them. By adapting transformation-based learning to the problem of word alignment, we project new alignment links from already existing links, using features such as POS tags. We show that our alignment link projection approach yields a significantly lower alignment error rate than that of the best-performing alignment system (22.6% relative reduction on English-Spanish data and 23.2% relative reduction on English-Chinese data).
%B Proceedings of the conference on Human Language Technology and Empirical Methods in Natural Language Processing %S HLT '05 %I Association for Computational Linguistics %C Stroudsburg, PA, USA %P 185 - 192 %8 2005/// %G eng %U http://dx.doi.org/10.3115/1220575.1220599 %R 10.3115/1220575.1220599 %0 Conference Paper %B Proceedings of the second international workshop on Software engineering for high performance computing system applications %D 2005 %T And away we go: understanding the complexity of launching complex HPC applications %A Yoon,Il-Chul %A Sussman, Alan %A Porter, Adam %K development %K high-performance %K Productivity %K software %X Although not well-studied, launching HPC applications is extremely complex. To better understand the launching process, we conducted a simple case study. Based in part on this study and an examination of existing toolkits, we have begun to develop a prototype environment to support HPC application launching. %B Proceedings of the second international workshop on Software engineering for high performance computing system applications %S SE-HPCS '05 %I ACM %C New York, NY, USA %P 45 - 49 %8 2005/// %@ 1-59593-117-1 %G eng %U http://doi.acm.org/10.1145/1145319.1145333 %R 10.1145/1145319.1145333 %0 Journal Article %J Journal of Vision %D 2005 %T On the Anisotropy in the Perception of Stereoscopic Slant %A Fermüller, Cornelia %A Hui Ji %X Many visual processes computationally amount to estimation problems. It has been shown that noise in the image data causes consistent errors in the estimation, that is, statistical bias [1]. Here we analyze the effects of bias on 3D shape estimation, and we found that it predicts the perceived underestimation of slant for many settings found in experiments. In particular, we concentrate on the problem of shape estimation from stereo using orientation disparity. We found that bias predicts the anisotropy in the perception of stereoscopic slant, an effect that has not been explained before.
It has been found that a surface slanted about the horizontal axis is estimated much more easily and more accurately than a surface slanted about the vertical axis [2,3]. In both cases there is an underestimation of slant, but it is much larger for slant about the vertical. Cagnello and Rogers [2] argued that this effect is due to orientation disparity, which, when the texture on the plane is made up of mostly horizontal and vertical lines, is smaller for surfaces slanting about the vertical. However, as shown in [3], the effect also exists, even though in weaker form, when the texture is made up of lines oriented at 45 degrees. For such a configuration the orientation disparity in the two configurations is about the same. Thus orientation disparity by itself cannot be the cause. But errors in the estimated position and orientation of edgels cause bias, which predicts all the above findings and other parametric studies that we performed. [1]. C. Fermuller, H. Malm. Uncertainty in visual processes predicts geometrical optical illusions, Vision Research, 44, 727-749, 2004. [2]. R. Cagnello, B.J. Rogers. Anisotropies in the perception of stereoscopic surfaces: the role of orientation disparity. Vision Research, 33(16): 2189-2201, 1993. [3]. G.J. Mitchison, S.P. McKee. Mechanisms underlying the anisotropy of stereoscopic tilt perception. Vision Research, 30:1781-1791, 1990. %B Journal of Vision %V 5 %P 516 - 516 %8 2005/09/23/ %@ 1534-7362 %G eng %U http://www.journalofvision.org/content/5/8/516 %N 8 %R 10.1167/5.8.516 %0 Journal Article %J Image Processing, IEEE Transactions on %D 2005 %T Anti-collusion forensics of multimedia fingerprinting using orthogonal modulation %A Wang,Z.J. %A M. Wu %A Zhao,H.V. %A Trappe,W. %A Liu,K.
J.R %K Automated;Product Labeling;Signal Processing %K Computer-Assisted; %K Computer-Assisted;Multimedia;Patents as Topic;Pattern Recognition %K Gaussian distribution;anticollusion forensic;colluder identification;collusion resistance;digital fingerprinting;false probability;likelihood-based approach;multimedia fingerprinting;orthogonal modulation;spread spectrum embedding;Gaussian distribution;fi %X Digital fingerprinting is a method for protecting digital data in which fingerprints that are embedded in multimedia are capable of identifying unauthorized use of digital content. A powerful attack that can be employed to reduce this tracing capability is collusion, where several users combine their copies of the same content to attenuate/remove the original fingerprints. In this paper, we study the collusion resistance of a fingerprinting system employing Gaussian distributed fingerprints and orthogonal modulation. We introduce the maximum detector and the thresholding detector for colluder identification. We then analyze the collusion resistance of a system to the averaging collusion attack for the performance criteria represented by the probability of a false negative and the probability of a false positive. Lower and upper bounds for the maximum number of colluders Kmax are derived. We then show that the detectors are robust to different collusion attacks. We further study different sets of performance criteria, and our results indicate that attacks based on a few dozen independent copies can confound such a fingerprinting system. We also propose a likelihood-based approach to estimate the number of colluders. Finally, we demonstrate the performance for detecting colluders through experiments using real images. 
%B Image Processing, IEEE Transactions on %V 14 %P 804 - 821 %8 2005/06// %@ 1057-7149 %G eng %N 6 %R 10.1109/TIP.2005.847284 %0 Journal Article %J IEEE Transactions on Pattern Analysis and Machine Intelligence %D 2005 %T A Parallel-Line Detection Algorithm Based on HMM Decoding %A Yefeng Zheng %A Huiping Li %A David Doermann %X The detection of groups of parallel lines is important in applications such as form processing and text (handwriting) extraction from rule-lined paper. These tasks can be very challenging in degraded documents where the lines are severely broken. In this paper, we propose a novel model-based method which incorporates high-level context to detect these lines. After preprocessing (such as skew correction and text filtering), we use trained Hidden Markov Models (HMMs) to locate the optimal positions of all lines simultaneously on the horizontal or vertical projection profiles, based on Viterbi decoding. The algorithm is trainable, so it can be easily adapted to different application scenarios. The experiments conducted on known form processing and rule line detection show our method is robust and achieves better results than other widely used line detection methods. %B IEEE Transactions on Pattern Analysis and Machine Intelligence %V 27 %P 777 - 792 %8 2005/05// %G eng %N 5 %0 Conference Paper %B Computer Vision, 2005. ICCV 2005. Tenth IEEE International Conference on %D 2005 %T Appearance modeling under geometric context %A Li,Jian %A Zhou,S. K %A Chellapa, Rama %K based %K curve %K geometry;image %K GeT;convex %K GeT;fingerprinting;geometric %K hull;feature %K information;trace %K modeling;contour-driven %K Radon %K recognition; %K recognition;shape %K representation;object %K synthesis;image %K transform;appearance %K transform;image %K transform;Radon %K transforms;computational %K warping;multi-resolution %X We propose a unified framework based on a general definition of geometric transform (GeT) for modeling appearance.
GeT represents the appearance by applying designed functionals over certain geometric sets. We show that image warping, the Radon transform, the trace transform, etc. are special cases of our definition. Moreover, three different types of GeTs are designed to handle deformation, articulation, and occlusion, and are applied to fingerprinting the appearance inside a contour. They include the contour-driven GeT, the feature-curve-based GeT, and a GeT that selects functionals to model the appearance inside the convex hull of the contour. A multi-resolution representation that combines both shape and appearance information is also proposed. We apply our approach to image synthesis and object recognition. The proposed approach produces promising results when applied to fingerprinting the appearance of human and body parts despite the challenges due to articulated motion and deformations. %B Computer Vision, 2005. ICCV 2005. Tenth IEEE International Conference on %V 2 %P 1252 - 1259 Vol. 2 %8 2005/10// %G eng %R 10.1109/ICCV.2005.38 %0 Journal Article %J Intelligent Systems, IEEE %D 2005 %T Applications of SHOP and SHOP2 %A Nau, Dana S. %A Au,T.-C. %A Ilghami,O. %A Kuter,U. %A Wu,D. %A Yaman,F. %A Munoz-Avila,H. %A Murdock,J. W. %K automated planning %K hierarchical task network planning %K ordered task decomposition %K planning (artificial intelligence) %K problem solving %K search-control strategy %K simple hierarchical ordered planner %K trees (mathematics) %K uncertainty handling %X We designed the simple hierarchical ordered planner (SHOP) and its successor, SHOP2, with two goals in mind: to investigate research issues in automated planning and to provide some simple, practical planning tools. SHOP and SHOP2 are based on a planning formalism called hierarchical task network planning.
SHOP and SHOP2 use a search-control strategy called ordered task decomposition, which breaks tasks into subtasks and generates the plan's actions in the same order that the plan executor executes them. So, throughout the planning process, the planner can tell what the state of the world is at each step of the plan. %B Intelligent Systems, IEEE %V 20 %P 34 - 41 %8 2005/04//march %@ 1541-1672 %G eng %N 2 %R 10.1109/MIS.2005.20 %0 Conference Paper %B Acoustics, Speech, and Signal Processing, 2005. Proceedings. (ICASSP '05). IEEE International Conference on %D 2005 %T Approximate expressions for the mean and the covariance of the maximum likelihood estimator for acoustic source localization %A Raykar,V.C. %A Duraiswami, Ramani %K (mathematics); %K acoustic %K approximate %K approximation %K array %K array; %K covariance %K estimation; %K expansion; %K expressions; %K function; %K likelihood %K localization; %K matrices; %K matrix; %K maximum %K mean %K microphone %K objective %K processing; %K series %K signal %K source %K Taylor %K theory; %K vector; %K vectors; %X Acoustic source localization using multiple microphones can be formulated as a maximum likelihood estimation problem. The estimator is implicitly defined as the minimum of a certain objective function. As a result, we cannot get explicit expressions for the mean and the covariance of the estimator. We derive approximate expressions for the mean vector and covariance matrix of the estimator using Taylor's series expansion of the implicitly defined estimator. The validity of our expressions is verified by Monte-Carlo simulations. We also study the performance of the estimator for different microphone array configurations. %B Acoustics, Speech, and Signal Processing, 2005. Proceedings. (ICASSP '05). IEEE International Conference on %V 3 %P iii/73 - iii/76 Vol.
3 %8 2005/03// %G eng %R 10.1109/ICASSP.2005.1415649 %0 Conference Paper %B Multi-Agent Security and Survivability, 2005 IEEE 2nd Symposium on %D 2005 %T Approximation results for probabilistic survivability %A Zhang,Y. %A Manister,E. %A Kraus,S. %A V.S. Subrahmanian %K application; %K computing; %K failure; %K fault %K heuristic %K heuristics %K industrial %K model; %K multi-agent %K multiagent %K node %K probabilistic %K probability; %K programming; %K survivability; %K system; %K systems; %K testing; %K tolerant %X As multiagent systems (MASs) are increasingly used in industrial applications, the need to make them more robust and resilient against disruption increases dramatically. The author has developed a probabilistic model (assuming complete ignorance of dependencies between node failures) of survivability based on deploying each agent in a MAS on one or more nodes. Finding a deployment that maximizes survivability is highly intractable for two reasons: firstly, computing the survivability of any deployment is intractable, and secondly, going through an exponential number of deployments to find the best one adds another layer of intractability. In this paper, we study what happens when node failures are independent. We show that computing survivability in this environment is still intractable. We propose various heuristics to compute the survivability of a given deployment. We have implemented and tested all these heuristics. We report on the advantages and disadvantages of different heuristics in different environmental settings. 
%B Multi-Agent Security and Survivability, 2005 IEEE 2nd Symposium on %P 1 - 10 %8 2005/08// %G eng %R 10.1109/MASSUR.2005.1507042 %0 Book Section %B Architecting Dependable Systems III %D 2005 %T Architecting and Implementing Versatile Dependability %A Tudor Dumitras %A Srivastava, Deepti %A Narasimhan, Priya %E Lemos, Rogério de %E Gacek, Cristina %E Romanovsky, Alexander %K Operating systems %K software engineering %X Distributed applications must often consider and select the appropriate trade-offs among three important aspects – fault-tolerance, performance and resources. We introduce a novel concept, called versatile dependability, that provides a framework for analyzing and reasoning about these trade-offs in dependable software architectures. We present the architecture of a middleware framework that implements versatile dependability by providing the appropriate "knobs" to tune and re-calibrate the trade-offs. Our framework can adjust the properties and the behavior of the system at development-time, at deployment-time, and throughout the application's life-cycle. This renders the versatile dependability approach useful both to applications that require static fault-tolerance configurations supporting the loss/addition of resources and changing workloads, and to applications that evolve in terms of their dependability requirements. Through two specific examples, one on adapting the replication style at runtime and the other on tuning the system scalability under given constraints, we demonstrate concretely how versatile dependability can provide an extended coverage of the design space of dependable distributed systems.
%B Architecting Dependable Systems III %S Lecture Notes in Computer Science %I Springer Berlin Heidelberg %P 212 - 231 %8 2005/01/01/ %@ 978-3-540-28968-5, 978-3-540-31648-0 %G eng %U http://link.springer.com/chapter/10.1007/11556169_10 %0 Conference Paper %B Proceedings of the 3rd IEEE/ACM/IFIP international conference on Hardware/software codesign and system synthesis %D 2005 %T An architectural level design methodology for embedded face detection %A Kianzad,V. %A Saha,S. %A Schlessman,J. %A Aggarwal,G. %A Bhattacharyya, Shuvra S. %A Wolf,W. %A Chellapa, Rama %K design space exploration %K Face detection %K platforms %K reconfigurable %K system-level models %X Face detection and recognition research has attracted great attention in recent years. Automatic face detection has great potential in a large array of application areas, including banking and security system access control, video surveillance, and multimedia information retrieval. In this paper, we discuss an architectural level design methodology for implementation of an embedded face detection system on a reconfigurable system on chip. We present models for performance estimation and validate these models with experimental values obtained from implementing our system on an FPGA platform. This modeling approach is shown to be efficient, accurate, and intuitive for designers to work with. Using this approach, we present several design options that trade-off various architectural features. %B Proceedings of the 3rd IEEE/ACM/IFIP international conference on Hardware/software codesign and system synthesis %S CODES+ISSS '05 %I ACM %C New York, NY, USA %P 136 - 141 %8 2005/// %@ 1-59593-161-9 %G eng %U http://doi.acm.org/10.1145/1084834.1084872 %R 10.1145/1084834.1084872 %0 Conference Paper %B Mobile Computing Systems and Applications, 2006. WMCSA'06. Proceedings. 7th IEEE Workshop on %D 2005 %T Are GSM phones THE solution for localization? %A Varshavsky,A. %A Chen,M.Y. %A de Lara,E. 
%A Jon Froehlich %A Haehnel,D. %A Hightower,J. %A LaMarca,A. %A Potter,F. %A Sohn,T. %A Tang,K. %A others %B Mobile Computing Systems and Applications, 2006. WMCSA'06. Proceedings. 7th IEEE Workshop on %P 34 - 42 %8 2005/// %G eng %0 Journal Article %J AIAA paper %D 2005 %T Assessment of STBLI DNS data and comparison against experiments %A Wu,M. %A Taylor,E. M %A Martin, M.P %B AIAA paper %8 2005/// %G eng %0 Journal Article %J Physical Review LettersPhys. Rev. Lett. %D 2005 %T Asymptotically Optimal Quantum Circuits for d-Level Systems %A Bullock,Stephen S. %A O’Leary,Dianne P. %A Brennen,Gavin K. %X Scalability of a quantum computation requires that the information be processed on multiple subsystems. However, it is unclear how the complexity of a quantum algorithm, quantified by the number of entangling gates, depends on the subsystem size. We examine the quantum circuit complexity for exactly universal computation on many d-level systems (qudits). Both a lower bound and a constructive upper bound on the number of two-qudit gates result, proving a sharp asymptotic of Θ(d2n) gates. This closes the complexity question for all d-level systems (d finite). The optimal asymptotic applies to systems with locality constraints, e.g., nearest neighbor interactions. %B Physical Review LettersPhys. Rev. Lett. %V 94 %P 230502 - 230502 %8 2005/06/14/ %G eng %U http://link.aps.org/doi/10.1103/PhysRevLett.94.230502 %N 23 %R 10.1103/PhysRevLett.94.230502 %0 Journal Article %J Institute for Systems Research Technical Reports %D 2005 %T An Augmented Visual Query Mechanism for Finding Patterns in Time Series Data (2002) %A Keogh,Eamonn %A Hochheiser,Harry %A Shneiderman, Ben %K Technical Report %X Relatively few query tools exist for data exploration and pattern identification in time series data sets. In previous work we introduced Timeboxes. Timeboxes are rectangular, direct-manipulation queries for studying time-series datasets. 
We demonstrated how Timeboxes can be used to support interactive exploration via dynamic queries, along with overviews of query results and drag-and-drop support for query-by-example. In this paper, we extend our work by introducing Variable Time Timeboxes (VTT). VTTs are a natural generalization of Timeboxes, which permit the specification of queries that allow a degree of uncertainty in the time axis. We carefully motivate the need for these more expressive queries, and demonstrate the utility of our approach on several data sets. %B Institute for Systems Research Technical Reports %8 2005/// %G eng %U http://drum.lib.umd.edu/handle/1903/6495 %0 Conference Paper %D 2005 %T Automated checking for Windows host vulnerabilities %A Tamizi,M. %A Weinstein,M. %A Michel Cukier %K application vulnerabilities %K computing system security %K Ferret-Windows software tool %K host vulnerabilities %K network vulnerabilities %K open-source software %K operating systems (computers) %K plug-in module %K program diagnostics %K security of data %K software reliability %K software tools %K system attacks %K Windows host vulnerability checking %K Windows platforms %X Evaluation of computing system security requires knowledge of the vulnerabilities present in the system and of potential attacks against the system. Vulnerabilities can be classified based on their location as application vulnerabilities, network vulnerabilities, or host vulnerabilities. This paper describes Ferret-Windows, a new software tool for checking host vulnerabilities on the Windows platforms. This tool helps system administrators by quickly finding vulnerabilities that are present on a host. It is designed and implemented in a modular way: a plug-in module is used for each vulnerability checked, and each possible output format is specified by a plug-in module. Moreover, several vulnerability fixing plug-in modules exist to help users remove specific vulnerabilities. 
As a result, Ferret-Windows is extensible, and can easily be kept up-to-date through the addition of checks for new vulnerabilities as they are identified. Finally, Ferret-Windows is freely available as open-source software. %P 10 pp. -148 %8 2005/11// %G eng %R 10.1109/ISSRE.2005.11 %0 Journal Article %J IEEE Transactions on Software Engineering %D 2005 %T Automatic Mining of Source Code Repositories to Improve Bug Finding Techniques %A Williams, Chadd C %A Hollingsworth, Jeffrey K %K configuration control %K debugging aids %K testing tools %K version control %X We describe a method to use the source code change history of a software project to drive and help to refine the search for bugs. Based on the data retrieved from the source code repository, we implement a static source code checker that searches for a commonly fixed bug and uses information automatically mined from the source code repository to refine its results. By applying our tool, we have identified a total of 178 warnings that are likely bugs in the Apache Web server source code and a total of 546 warnings that are likely bugs in Wine, an open-source implementation of the Windows API. We show that our technique is more effective than the same static analysis that does not use historical data from the source code repository. %B IEEE Transactions on Software Engineering %V 31 %P 466 - 480 %8 2005/// %@ 0098-5589 %G eng %N 6 %R http://doi.ieeecomputersociety.org/10.1109/TSE.2005.63 %0 Journal Article %J Journal of Software Maintenance and Evolution: Research and Practice %D 2005 %T Automating regression testing for evolving GUI software %A Memon, Atif M.
%A Nagarajan,Adithya %A Xie,Qing %K daily/nightly builds %K event-flow graphs %K Graphical user interfaces %K GUI regression testing %K GUI testing %K smoke testing %K Software quality %X With the widespread deployment of broadband connections worldwide, software development and maintenance are increasingly being performed by multiple engineers, often working around-the-clock to maximize code churn rates. To ensure rapid quality assurance of such software, techniques such as ‘nightly/daily building and smoke testing’ have become widespread since they often reveal bugs early in the software development process. During these builds, a development version of the software is checked out from the source code repository tree, compiled, linked, and (re)tested with the goal of (re)validating its basic functionality. Although successful for conventional software, smoke tests are difficult to develop and automatically re-run for software that has a graphical user interface (GUI). In this paper, we describe a framework called DART (Daily Automated Regression Tester) that addresses the needs of frequent and automated re-testing of GUI software. The key to our success is automation: DART automates everything from structural GUI analysis, smoke-test-case generation, test-oracle creation, to code instrumentation, test execution, coverage evaluation, regeneration of test cases, and their re-execution. Together with the operating system's task scheduler, DART can execute frequently with little input from the developer/tester to re-test the GUI software. We provide results of experiments showing the time taken and memory required for GUI analysis, test case and test oracle generation, and test execution. We empirically compare the relative costs of employing different levels of detail in the GUI test oracle. We also show the events and statements covered by the smoke test cases. Copyright © 2005 John Wiley & Sons, Ltd. 
%B Journal of Software Maintenance and Evolution: Research and Practice %V 17 %P 27 - 64 %8 2005/// %@ 1532-0618 %G eng %U http://onlinelibrary.wiley.com/doi/10.1002/smr.305/abstract %N 1 %R 10.1002/smr.305 %0 Conference Paper %B Geoscience and Remote Sensing Symposium, 2005. IGARSS '05. Proceedings. 2005 IEEE International %D 2005 %T Average relative radiance transform for subpixel detection %A Broadwater, J. %A Meth, R. %A Chellapa, Rama %B Geoscience and Remote Sensing Symposium, 2005. IGARSS '05. Proceedings. 2005 IEEE International %V 5 %P 3565 - 3568 %8 2005/07// %G eng %R 10.1109/IGARSS.2005.1526617 %0 Conference Paper %B W3C Workshop on Semantic Web for Life Sciences %D 2004 %T Abstracting workflows: unifying bioinformatics task conceptualization and specification through Semantic Web services %A Hashmi,N. %A Lee,S. %A Cummings, Michael P. %B W3C Workshop on Semantic Web for Life Sciences %C Cambridge, Massachusetts USA %8 2004/// %G eng %0 Journal Article %J IEEE Transactions on Speech and Audio Processing %D 2004 %T Accelerated speech source localization via a hierarchical search of steered response power %A Zotkin,Dmitry N %A Duraiswami, Ramani %K accelerated speech source localization %K Acceleration %K array signal processing %K conferencing system %K Delay %K delay-and-sum beamforming %K direction-of-arrival estimation %K Frequency %K hierarchical search algorithm %K Inverse problems %K multimedia applications %K Multimedia communication %K multiple speech sound source %K Position measurement %K Robustness %K search problems %K Sensor arrays %K Signal processing algorithms %K speech %K speech enhancement %K Speech processing %K steered response power %K steered response power phase-transform weighted source localization algorithm %K transducer arrays %K User interfaces %X Accurate and fast localization of multiple speech sound sources is a problem that is of significant interest in applications such as conferencing systems. 
Recently, approaches that are based on search for local peaks of the steered response power are becoming popular, despite their known computational expense. Based on the observation that the wavelengths of the sound from a speech source are comparable to the dimensions of the space being searched and that the source is broadband, we have developed an efficient search algorithm. Significant speedups are achieved by using coarse-to-fine strategies in both space and frequency. We present applications of the search algorithm to speed up simple delay-and-sum beamforming and steered response power phase-transform weighted (SRP-PHAT) source localization algorithms. A systematic series of comparisons with previous algorithms is made, showing that the technique is much faster while remaining robust and accurate. The performance of the algorithm can be further improved by using constraints from computer vision. %B IEEE Transactions on Speech and Audio Processing %V 12 %P 499 - 508 %8 2004/09// %@ 1063-6676 %G eng %N 5 %R 10.1109/TSA.2004.832990 %0 Conference Paper %B Vehicular Technology Conference, 2004. VTC2004-Fall. 2004 IEEE 60th %D 2004 %T Achieving packet-level quality of service through scheduling in multirate WLANs %A Yuan Yuan %A Daqing Gu %A Arbaugh, William A. %A Jinyun Zhang %K Analytical models %K channel conditions %K channel errors %K channel temporal fair share %K compensation %K Computer science %K Delay %K error-prone flow compensation %K IEEE 802.11a/b/g physical layer %K multirate wireless fair scheduling %K multirate WLAN %K packet radio networks %K packet-level quality of service %K Physical layer %K Processor scheduling %K QoS %K quality of service %K radio access networks %K Scheduling algorithm %K Throughput %K throughput fairness %K USA Councils %K Wireless LAN %K wireless packet scheduling %K WMFS %X Wireless packet scheduling has been a popular paradigm to achieve packet-level QoS in terms of fairness and throughput in the presence of channel errors.
However, the current design does not anticipate the multi-rate capability offered by the IEEE 802.11a/b/g physical layer, thus suffering significant performance degradation in 802.11 WLANs. In this paper, we propose multirate wireless fair scheduling (WMFS). In WMFS, each flow is granted a temporal fair share of the channel, in contrast to the throughput fair share adopted by existing algorithms. Therefore, each flow receives services in proportion to its perceived transmission rate, and high-rate flows are able to opportunistically exploit their good channel conditions and receive more services. WMFS also renovates the compensation model to allow error-prone flows to catch up, thus ensuring fairness for all flows over error-prone channels. We demonstrate the effectiveness of WMFS through both simulations and analysis. In particular, WMFS achieves a system throughput of 159% of that of state-of-the-art scheduling algorithms in simulated scenarios. %B Vehicular Technology Conference, 2004. VTC2004-Fall. 2004 IEEE 60th %I IEEE %V 4 %P 2730 - 2734 Vol. 4 %8 2004/09/26/29 %@ 0-7803-8521-7 %G eng %R 10.1109/VETECF.2004.1400554 %0 Journal Article %J Journal of industrial microbiology & biotechnology %D 2004 %T Acinetobacter lipases: molecular biology, biochemical properties and biotechnological potential %A Snellman,E. A. %A Rita R Colwell %X Lipases (EC 3.1.1.3) have received increased attention recently, evidenced by the increasing amount of information about lipases in the current literature. The renewed interest in this enzyme class is due primarily to investigations of their role in pathogenesis and their increasing use in biotechnological applications [38]. Also, many microbial lipases are available as commercial products, the majority of which are used in detergents, cosmetic production, food flavoring, and organic synthesis.
Lipases are valued biocatalysts because they act under mild conditions, are highly stable in organic solvents, show broad substrate specificity, and usually show high regio- and/or stereo-selectivity in catalysis. A number of lipolytic strains of Acinetobacter have been isolated from a variety of sources and their lipases possess many biochemical properties similar to those that have been developed for biotechnological applications. This review discusses the biology of lipase expression in Acinetobacter, with emphasis on those aspects relevant to potential biotechnology applications. %B Journal of industrial microbiology & biotechnology %V 31 %P 391 - 400 %8 2004/// %G eng %N 9 %R 10.1007/s10295-004-0167-0 %0 Journal Article %J International Journal of Image and Graphics %D 2004 %T Active deformable models using density estimation %A Abd-Almageed, Wael %A Smith,C.E. %B International Journal of Image and Graphics %V 4 %P 343 - 362 %8 2004/// %G eng %N 3 %0 Conference Paper %B Proceedings of the Workshop Empirically Successful First-Order Reasoning, International Joint Conference on Automated Reasoning %D 2004 %T Active logic for more effective human-computer interaction and other commonsense applications %A Anderson,M. L %A Josyula,D. %A Perlis, Don %A Purang,K. %B Proceedings of the Workshop Empirically Successful First-Order Reasoning, International Joint Conference on Automated Reasoning %8 2004/// %G eng %0 Journal Article %J Proceedings of the AAAI 2004 Fall Symposium on Artificial Multi-agent Learning %D 2004 %T Adapting network structure for efficient team formation %A Gaston,M. %A Simmons,J. 
%A desJardins, Marie %B Proceedings of the AAAI 2004 Fall Symposium on Artificial Multi-agent Learning %8 2004/// %G eng %0 Journal Article %J ACM Transactions on Asian Language Information Processing %D 2004 %T Adaptive Hindi OCR Using Generalized Hausdorff Image Comparison %A Ma,Huanfeng %A David Doermann %B ACM Transactions on Asian Language Information Processing %V 26 %P 198 - 213 %8 2004/// %G eng %N 2 %0 Conference Paper %B Proceedings of the 5th ACM Conference on Electronic Commerce %D 2004 %T Adaptive limited-supply online auctions %A Hajiaghayi, Mohammad T. %A Kleinberg,R. %A Parkes,D. C %B Proceedings of the 5th ACM Conference on Electronic Commerce %P 71 - 80 %8 2004/// %G eng %0 Conference Paper %B Distributed Computing Systems, 2004. Proceedings. 24th International Conference on %D 2004 %T Adaptive replication in peer-to-peer systems %A Gopalakrishnan,V. %A Silaghi,B. %A Bhattacharjee, Bobby %A Keleher,P. %K adaptive %K allocation; %K data %K databases; %K decentralized %K delivery %K distributed %K LAR %K low-latency %K peer-to-peer %K processing; %K protocol; %K replicated %K replication %K resource %K strategies; %K structured %K system-neutral %K system; %K systems; %X Peer-to-peer systems can be used to form a low-latency decentralized data delivery system. Structured peer-to-peer systems provide both low latency and excellent load balance with uniform query and data distributions. Under the more common skewed access distributions, however, individual nodes are easily overloaded, resulting in poor global performance and lost messages. This paper describes a lightweight, adaptive, and system-neutral replication protocol, called LAR, that maintains low access latencies and good load balance even under highly skewed demand. We apply LAR to Chord and show that it has lower overhead and better performance than existing replication strategies. %B Distributed Computing Systems, 2004. Proceedings.
24th International Conference on %P 360 - 369 %8 2004/// %G eng %R 10.1109/ICDCS.2004.1281601 %0 Book Section %B Model reduction methods for vector autoregressive processes %D 2004 %T Adaptively-Secure, Non-Interactive Public-Key Encryption %A Canetti,R. %A Halevi,S. %A Katz, Jonathan %B Model reduction methods for vector autoregressive processes %P 150 - 150 %8 2004/// %G eng %0 Journal Article %J Trends in Parasitology %D 2004 %T Advances in schistosome genomics %A El‐Sayed, Najib M. %A Bartholomeu,Daniella %A Ivens,Alasdair %A Johnston,David A. %A LoVerde,Philip T. %X In Spring 2004, the first draft of the 270 Mb genome of Schistosoma mansoni will be released. This sequence is based on the assembly and annotation of a >7.5-fold coverage, shotgun sequencing project. The key stages involved in the international collaborative efforts that have led to the generation of these sequencing data for the parasite S. mansoni are discussed here. %B Trends in Parasitology %V 20 %P 154 - 157 %8 2004/04/01/ %@ 1471-4922 %G eng %U http://www.sciencedirect.com/science/article/pii/S1471492204000480 %N 4 %R 10.1016/j.pt.2004.02.002 %0 Journal Article %J Environmental Modelling & Software %D 2004 %T Agent-based and analytical modeling to evaluate the effectiveness of greenbelts %A Brown,Daniel G. %A Page,Scott E. %A Riolo,Rick %A Rand, William %K Agent-based modeling %K Land-use change %K Landscape ecology %K Urban sprawl %X We present several models of residential development at the rural–urban fringe to evaluate the effectiveness of a greenbelt located beside a developed area, for delaying development outside the greenbelt. First, we develop a mathematical model, under two assumptions about the distributions of service centers, that represents the trade-off between greenbelt placement and width, their effects on the rate of development beyond the greenbelt, and how these interact with spatial patterns of aesthetic quality and the locations of services.
Next, we present three agent-based models (ABMs) that include agents with the potential for heterogeneous preferences and a landscape with the potential for heterogeneous attributes. Results from experiments run with a one-dimensional ABM agree with the starkest of the results from the mathematical model, strengthening the support for both models. Further, we present two different two-dimensional ABMs and conduct a series of experiments to supplement our mathematical analysis. These include examining the effects of heterogeneous agent preferences, multiple landscape patterns, incomplete or imperfect information available to agents, and a positive aesthetic quality impact of the greenbelt on neighboring locations. These results suggest how width and location of the greenbelt could help determine the effectiveness of greenbelts for slowing sprawl, but that these relationships are sensitive to the patterns of landscape aesthetic quality and assumptions about service center locations. %B Environmental Modelling & Software %V 19 %P 1097 - 1109 %8 2004/12// %@ 1364-8152 %G eng %U http://www.sciencedirect.com/science/article/pii/S1364815203002664 %N 12 %R 10.1016/j.envsoft.2003.11.012 %0 Journal Article %J Multimedia Tools and Applications %D 2004 %T An algebra for powerpoint sources %A Fayzullin,M. %A V.S. Subrahmanian %X There are now millions of PowerPoint documents available within corporate intranets and/or over the Internet. In this paper, we develop a formal model of PowerPoint databases. We propose a relational style algebra called pptA (PowerPoint Algebra) to query PowerPoint databases. The algebra contains some new operators (such as the APPLY operator that changes properties of objects, slides and presentations) as well as interesting twists on relational operators (e.g. join and cartesian product allow different entities being joined together to share attributes whose values may be merged). We prove a set of equivalence results within this algebra. 
We have implemented a version of pptA—the paper provides a cost model and experimental results on the conditions under which these equivalences are useful. %B Multimedia Tools and Applications %V 24 %P 273 - 301 %8 2004/// %G eng %N 3 %R 10.1023/B:MTAP.0000039422.87260.52 %0 Conference Paper %D 2004 %T Algorithmic Foundations for Consistency-Checking of Interaction-States of Mechatronic Systems %A Xu,C. %A Gupta,S.K. %X In order to reduce product development time, we need software tools that can perform automated validation of the proposed design concepts. These tools will ensure that only valid design concepts are transferred to the detailed design stage for further development. This paper provides a step towards the automated validation of proposed design concepts. We define the problem of consistency-checking of interaction-states as a key step in the design concept validation. We present a polynomial time algorithm for solving the interaction consistency-checking problem. We also present an algorithm for analyzing inconsistent interaction-states and identifying the inconsistent interactions. We believe that the framework described in this paper will provide the underlying foundations for constructing the next generation software tools for conceptual design of complex mechatronic systems. %C Salt Lake City, Utah, USA %8 2004/// %G eng %U ftp://ftp.eng.umd.edu/:/home/glue/s/k/skgupta/pub/Publication/DETC04_Xu.pdf %0 Journal Article %J Computer Systems: Architectures, Modeling, and Simulation %D 2004 %T Analysis of dataflow programs with interval-limited data-rates %A Teich,J. %A Bhattacharyya, Shuvra S. %B Computer Systems: Architectures, Modeling, and Simulation %P 507 - 518 %8 2004/// %G eng %0 Book Section %B Theory and Applications of Recent Robust Methods %D 2004 %T Analyzing the number of samples required for an approximate Monte-Carlo LMS line estimator %A Mount, Dave %A Netanyahu,N. S %A Zuck,E.
%B Theory and Applications of Recent Robust Methods %I Birkhäuser %P 207 - 219 %8 2004/// %@ 3764370602, 9783764370602 %G eng %0 Journal Article %J Proc. of 6th Asian Conference on Computer Vision (ACCV) %D 2004 %T Appearance tracking using adaptive models in a particle filter %A Zhou, S. %A Chellapa, Rama %A Moghaddam, B. %X The particle filter is a popular tool for visual tracking. Usually, the appearance model is either fixed or rapidly changing and the motion model is simply a random walk with fixed noise variance. Also, the number of particles used is typically fixed. All these factors make the visual tracker unstable. To stabilize the tracker, we propose the following measures: an observation model with adaptive noise variance, and an adaptive number of particles. The adaptive velocity is computed via a first-order linear predictor using the previous particle configuration. Tracking under occlusion is accomplished using robust statistics. Experimental results on tracking visual objects in long video sequences such as vehicles, tanks, and human faces demonstrate the effectiveness and robustness of our algorithm. %B Proc. of 6th Asian Conference on Computer Vision (ACCV) %8 2004/// %G eng %0 Journal Article %J Pattern Analysis and Applications %D 2004 %T An Appearance-based Approach for Consistent Labeling of Humans and Objects in Video %A Balcells-Capellades,M. %A DeMenthon,D. %A David Doermann %X We present an approach for consistently labeling people and for detecting human-object interactions using mono-camera surveillance video. The approach is based on a robust appearance based correlogram model which is combined with histogram information to model color distributions of people and objects in the scene. The models are dynamically built from non-stationary objects which are the outputs of background subtraction, and are used to identify objects on a frame-by-frame basis.
We are able to detect when people merge into groups and to segment them even during partial occlusion. We can also detect when a person deposits or removes an object. The models persist when a person or object leaves the scene and are used to identify them when they reappear. Experiments show that the models are able to accommodate perspective foreshortening that occurs with overhead camera angles, as well as partial occlusion. The results show that this is an effective approach able to provide important information to algorithms performing higher-level analysis such as activity recognition, where human-object interactions play an important role. %B Pattern Analysis and Applications %@ 1433-7541 %8 2004/11// %G eng %0 Conference Paper %B Acoustics, Speech, and Signal Processing, 2004. Proceedings. (ICASSP '04). IEEE International Conference on %D 2004 %T Appearance-based tracking and recognition using the 3D trilinear tensor %A Jie Shao %A Zhou,S. K %A Chellapa, Rama %K 3D %K adaptive %K affine-transformation %K airborne %K algorithm; %K appearance %K appearance-based %K based %K estimation; %K geometrical %K image %K mathematical %K novel %K object %K operator; %K operators; %K perspective %K prediction; %K processing; %K recognition; %K representation; %K signal %K structure %K synthesis; %K template %K tensor %K tensor; %K tensors; %K tracking; %K transformation; %K trilinear %K updating; %K video %K video-based %K video; %K view %X The paper presents an appearance-based adaptive algorithm for simultaneous tracking and recognition by generalizing the transformation model to 3D perspective transformation. A trilinear tensor operator is used to represent the 3D geometrical structure. The tensor is estimated by predicting the corresponding points using the existing affine-transformation based algorithm. The estimated tensor is used to synthesize novel views to update the appearance templates. Some experimental results using airborne video are presented.
%B Acoustics, Speech, and Signal Processing, 2004. Proceedings. (ICASSP '04). IEEE International Conference on %V 3 %P iii - 613-16 vol.3 %8 2004/05// %G eng %R 10.1109/ICASSP.2004.1326619 %0 Journal Article %J IEEE Software %D 2004 %T Applying Model-based Distributed Continuous Quality Assurance Processes to Enhance Persistent Software Attributes %A Krishna,A. S %A Yilmaz,C. %A Memon, Atif M. %A Porter, Adam %A Schmidt,D. C %A Gokhale,A. %A Natarajan,B. %X Time and resource constraints often force developers of highly configurable quality of service (QoS)-intensive software systems to guarantee their system’s persistent software attributes (PSAs) (e.g., functional correctness, portability, efficiency, and QoS) on very few platform configurations and to extrapolate from these configurations to the entire configuration space, which allows many sources of degradation to escape detection until systems are fielded. This article illustrates how distributed continuous quality assurance (DCQA) processes help improve the assessment of these PSAs across large QoS-intensive software system configuration spaces. We also illustrate how model-based DCQA processes enable developers to run formally-designed screening experiments that isolate the most significant configuration options, such as different workload parameters, operating system, compiler flags, feature sets, and/or run-time optimization controls. Our empirical results show that DCQA processes can be used to monitor, safeguard, and enforce PSAs at an acceptable level of cost and effort. %B IEEE Software %V 21 %P 32 - 40 %8 2004/// %G eng %N 6 %0 Report %D 2004 %T Applying permutation tests to tree-based statistical models: extending the R package rpart %A Cummings, Michael P. %A Myers,D. S %A Mangelson,M %X Tree-based statistical models are useful for evaluating relationships between predictor and response variables and for generating predictions when the response is unknown.
However, current methods of constructing tree-based models do not provide a probabilistic assessment of the models produced. Here we describe our work to use permutation tests to quantitatively estimate the probability of tree-based statistical models. We have extended the rpart (recursive partitioning) package of the R system for statistical data analysis. This extension, rpart.permutation, executes the permutations in parallel, using MPI (Message Passing Interface) to greatly decrease the time necessary to complete the analysis. %I University of Maryland Institute for Advanced Computer Studies %V 2004-24 %8 2004/// %G eng %0 Journal Article %J Journal of Algorithms %D 2004 %T Approximation algorithms for partial covering problems %A Gandhi,Rajiv %A Khuller, Samir %A Srinivasan, Aravind %K Approximation algorithms %K Partial covering %K Primal-dual methods %K Randomized rounding %K Set cover %K Vertex cover %X We study a generalization of covering problems called partial covering. Here we wish to cover only a desired number of elements, rather than covering all elements as in standard covering problems. For example, in k-partial set cover, we wish to choose a minimum number of sets to cover at least k elements. For k-partial set cover, if each element occurs in at most f sets, then we derive a primal-dual f-approximation algorithm (thus implying a 2-approximation for k-partial vertex cover) in polynomial time. Without making any assumption about the number of sets an element is in, for instances where each set has cardinality at most three, we obtain an approximation of 4/3. We also present better-than-2-approximation algorithms for k-partial vertex cover on bounded degree graphs, and for vertex cover on expanders of bounded average degree. We obtain a polynomial-time approximation scheme for k-partial vertex cover on planar graphs, and for covering k points in Rd by disks. 
%B Journal of Algorithms %V 53 %P 55 - 84 %8 2004/10// %@ 0196-6774 %G eng %U http://www.sciencedirect.com/science/article/pii/S0196677404000689 %N 1 %R 10.1016/j.jalgor.2004.04.002 %0 Conference Paper %B Circuits and Systems, 2004. ISCAS '04. Proceedings of the 2004 International Symposium on %D 2004 %T Arbitrate-and-move primitives for high throughput on-chip interconnection networks %A Balkan,A.O. %A Gang Qu %A Vishkin, Uzi %K 8 %K arbiter %K arbitrate-and-move %K architecture; %K asynchronous %K balanced %K binary %K circuit %K circuit; %K circuits; %K consumption; %K data %K explicit %K interconnection %K interconnections; %K leaf %K mesh-of-trees %K multi-threading; %K Multithreading %K n-leaf %K network; %K pipeline %K pipelined %K power %K primitive %K processing; %K reduced %K simulation; %K structures; %K synchronous %K synchrony %K system-on-chip; %K tree %K tree; %X An n-leaf pipelined balanced binary tree is used for arbitration of order and movement of data from n input ports to one output port. A novel arbitrate-and-move primitive circuit for every node of the tree, which is based on a concept of reduced synchrony that benefits from attractive features of both asynchronous and synchronous designs, is presented. The design objective of the pipelined binary tree is to provide a key building block in a high-throughput mesh-of-trees interconnection network for Explicit Multi Threading (XMT) architecture, a recently introduced parallel computation framework. The proposed reduced synchrony circuit was compared with asynchronous and synchronous designs of arbitrate-and-move primitives. Simulations with 0.18 μm technology show that compared to an asynchronous design, the proposed reduced synchrony implementation achieves a higher throughput, up to 2 Giga-Requests per second on an 8-leaf binary tree. Our circuit also consumes less power than the synchronous design, and requires less silicon area than both the synchronous and asynchronous designs.
%B Circuits and Systems, 2004. ISCAS '04. Proceedings of the 2004 International Symposium on %V 2 %P II - 441-4 Vol.2 %8 2004/05// %G eng %R 10.1109/ISCAS.2004.1329303 %0 Journal Article %J IEEE Robotics & Automation Magazine %D 2004 %T The Argus eye: a new imaging system designed to facilitate robotic tasks of motion %A Baker, P. %A Ogale, A. S %A Fermüller, Cornelia %K Argus eye %K Calibration %K CAMERAS %K computational geometry %K Design automation %K Eyes %K image formation %K imaging system %K Information geometry %K Layout %K Motion estimation %K multiple stereo configurations %K panoramic robots %K robot vision %K Robot vision systems %K robotic motion tasks %K Robotics and automation %K SHAPE %K shape model estimation %K system calibration %X This article describes an imaging system that has been designed to facilitate robotic tasks of motion. The system consists of a number of cameras in a network, arranged so that they sample different parts of the visual sphere. This geometric configuration has provable advantages compared to small field of view cameras for the estimation of the system's own motion and, consequently, the estimation of shape models from the individual cameras. The reason is that inherent ambiguities of confusion between translation and rotation disappear. Pairs of cameras may also be arranged in multiple stereo configurations, which provide additional advantages for segmentation. Algorithms for the calibration of the system and the three-dimensional (3-D) motion estimation are provided. %B IEEE Robotics & Automation Magazine %V 11 %P 31 - 38 %8 2004/12// %@ 1070-9932 %G eng %N 4 %R 10.1109/MRA.2004.1371606 %0 Journal Article %J IEEE Robotics and Automation Magazine %D 2004 %T The Argus eye, a new tool for robotics %A Baker, P. %A Ogale, A. S %A Fermüller, Cornelia %A Aloimonos, J. %B IEEE Robotics and Automation Magazine %V 11 %P 31 - 38 %8 2004/// %G eng %N 4 %0 Conference Paper %B Software Engineering Conference, 2004.
Proceedings. 2004 Australian %D 2004 %T ASPIRE: automated systematic protocol implementation robustness evaluation %A Vasan,Arunchandar %A Memon, Atif M. %K algorithm %K ASPIRE %K automated systematic protocol %K automated testing %K fault tolerant computing %K faulty PDU %K formal specification %K HTTP %K implementation robustness evaluation %K Internet %K network protocol %K protocol data unit %K protocol specification %K robustness testing %K SMTP protocol %K stateful protocols %K stateless protocols %K Transport protocols %X Network protocol implementations are susceptible to problems caused by their lack of ability to handle invalid inputs. We present ASPIRE: automated systematic protocol implementation robustness evaluation, an automated approach to proactively test protocol implementations by observing their responses to faulty protocol data units (PDUs) or messages. In contrast to existing approaches, we sample the faulty PDU space in a systematic manner, thus allowing us to evaluate protocol implementations in the face of a wider variety of faulty PDUs. We use a pruning strategy to reduce the size of the faulty PDU set from exponential to polynomial in the number of fields of a PDU. We have implemented the ASPIRE algorithms and evaluated them on implementations of HTTP (Apache, Google Web Server (GWS), and Microsoft IIS) and SMTP (Sendmail and Microsoft Exchange) protocols. Our results show that Apache, GWS, and IIS, although implementing the same protocol specification, behave differently on faulty HTTP PDUs; Sendmail and Exchange differ in handling our faulty SMTP PDUs. %B Software Engineering Conference, 2004. Proceedings. 2004 Australian %P 241 - 250 %8 2004/// %G eng %R 10.1109/ASWEC.2004.1290477 %0 Journal Article %J Physics of Fluids %D 2004 %T Assessment of inflow boundary conditions for compressible turbulent boundary layers %A Xu,S.
%A Martin, M.P %B Physics of Fluids %V 16 %P 2623 - 2623 %8 2004/// %G eng %0 Conference Paper %B 13th IEEE International Symposium on High performance Distributed Computing, 2004. Proceedings %D 2004 %T Automated cluster-based Web service performance tuning %A Chung,I. -H %A Hollingsworth, Jeffrey K %K Active Harmony system %K automated performance tuning %K business %K cluster-based Web service system %K Clustering algorithms %K Computer science %K Educational institutions %K electronic commerce %K Internet %K Middleware %K performance evaluation %K scalability %K Throughput %K Transaction databases %K Web server %K Web services %K workstation clusters %X Active Harmony provides a way to automate performance tuning. We apply the Active Harmony system to improve the performance of a cluster-based web service system. The performance improvement cannot easily be achieved by tuning individual components for such a system. The experimental results show that there is no single configuration for the system that performs well for all kinds of workloads. By tuning the parameters, Active Harmony helps the system adapt to different workloads and improve performance by up to 16%. For scalability, we demonstrate how to reduce the time required to tune a large system with many tunable parameters. Finally, an algorithm is proposed to automatically adjust the structure of cluster-based web systems, and the system throughput is improved by up to 70% using this technique. %B 13th IEEE International Symposium on High performance Distributed Computing, 2004. Proceedings %I IEEE %P 36 - 44 %8 2004/06/04/6 %@ 0-7695-2175-4 %G eng %R 10.1109/HPDC.2004.1323484 %0 Book Section %B Artificial Intelligence Methods in Software Testing %D 2004 %T Automated GUI regression testing using AI planning %A Memon, Atif M.
%B Artificial Intelligence Methods in Software Testing %I World Scientific %P 51 - 99 %8 2004/// %@ 9812388540, 9789812388544 %G eng %0 Book %D 2004 %T Automated Planning: Theory and Practice %A Ghallab,Malik %A Nau, Dana S. %A Traverso,Paolo %K Business & Economics / Management %K Computers / Intelligence (AI) & Semantics %K Production planning %K Production planning/ Data processing %K Technology & Engineering / Robotics %X Automated planning technology now plays a significant role in a variety of demanding applications, ranging from controlling space vehicles and robots to playing the game of bridge. These real-world applications create new opportunities for synergy between theory and practice: observing what works well in practice leads to better theories of planning, and better theories lead to better performance of practical applications. Automated Planning mirrors this dialogue by offering a comprehensive, up-to-date resource on both the theory and practice of automated planning. The book goes well beyond classical planning, to include temporal planning, resource scheduling, planning under uncertainty, and modern techniques for plan generation, such as task decomposition, propositional satisfiability, constraint satisfaction, and model checking. The authors combine over 30 years experience in planning research and development to offer an invaluable text to researchers, professionals, and graduate students. *Comprehensively explains paradigms for automated planning. *Provides a thorough understanding of theory and planning practice, and how they relate to each other. *Presents case studies of applications in space, robotics, CAD/CAM, process control, emergency operations, and games. *Provides a thorough understanding of AI planning theory and practice, and how they relate to each other.
*Covers all the contemporary topics of planning, as well as important practical applications of planning, such as model checking and game playing. *Presents case studies and applications in planning engineering, space, robotics, CAD/CAM, process control, emergency operations, and games.*Provides lecture notes, examples of programming assignments, pointers to downloadable planning systems and related information online. %I Elsevier %8 2004/05/03/ %@ 9781558608566 %G eng %0 Conference Paper %B Acoustics, Speech, and Signal Processing, 2004. Proceedings. (ICASSP '04). IEEE International Conference on %D 2004 %T Automatic position calibration of multiple microphones %A Raykar,V.C. %A Duraiswami, Ramani %K approximations; %K array %K audio %K AUTOMATIC %K calibration; %K closed %K covariance; %K dimensional %K estimation; %K form %K function %K implicit %K least %K likelihood %K loudspeakers; %K maximum %K microphone %K microphones; %K minimisation; %K minimization; %K multiple %K nonlinear %K position %K positions; %K problem; %K processing; %K signal %K solution; %K squares %K theorem; %K three %X We describe a method to determine automatically the relative three dimensional positions of multiple microphones using at least five loudspeakers in unknown positions. The only assumption we make is that there is a microphone which is very close to a loudspeaker. In our experimental setup, we attach one microphone to each loudspeaker. We derive the maximum likelihood estimator and the solution turns out to be a non-linear least squares problem. A closed form solution which can be used as the initial guess for the minimization routine is derived. We also derive an approximate expression for the covariance of the estimator using the implicit function theorem. Using this, we analyze the performance of the estimator with respect to the positions of the loudspeakers. The algorithm is validated using both Monte-Carlo simulations and a real-time experimental setup. 
%B Acoustics, Speech, and Signal Processing, 2004. Proceedings. (ICASSP '04). IEEE International Conference on %V 4 %P iv-69 - iv-72 vol.4 %8 2004/05// %G eng %R 10.1109/ICASSP.2004.1326765 %0 Journal Article %J IEEE Transactions on Speech and Audio Processing, Special Issue on Spontaneous Speech Processing %D 2004 %T Automatic recognition of spontaneous speech for access to multilingual oral history archives %A Byrne,W. %A David Doermann %A Franz,M. %A Gustman,S. %A Hajic,J. %A Oard, Douglas %A Picheny,M. %A Psutka,J. %A Ramabhadran,B. %X The MALACH project has the goal of developing the technologies needed to facilitate access to large collections of spontaneous speech. Its aim is to dramatically improve the state of the art in key Automatic Speech Recognition (ASR) and Natural Language Processing (NLP) technologies for use in large-scale retrieval systems. The project leverages a unique collection of oral history interviews with survivors of the Holocaust that has been assembled and extensively annotated by the Survivors of the Shoah Visual History Foundation. This paper describes the collection, 116,000 hours of interviews in 32 languages, and the way in which system requirements have been discerned through user studies. It discusses ASR methods for very difficult speech (heavily accented, emotional, and elderly spontaneous speech), including transcription to create training data and methods for language modeling and speaker adaptation. Results are presented for English and Czech. NLP results are presented for named entity tagging, topic segmentation, and supervised topic classification, and the architecture of an integrated search system that uses these results is described.
%B IEEE Transactions on Speech and Audio Processing, Special Issue on Spontaneous Speech Processing %V 12 %P 420 - 435 %8 2004/07// %G eng %N 4 %0 Conference Paper %B 2004 IEEE International Conference on Communications %D 2004 %T Automatic video summarization for wireless and mobile environments %A Yong Rao %A Mundur, Padma %A Yesha,Y. %K automatic video summarization %K batch processing %K batch processing (computers) %K Clustering algorithms %K Clustering methods %K clustering scheme %K Computer science %K Delaunay diagram %K graph theory %K Gunshot detection systems %K Image sequences %K mesh generation %K Mobile computing %K mobile radio %K multidimensional point data cluster %K Multidimensional systems %K Multimedia communication %K video clip %K video frame content %K Video sequences %K video signal processing %K wireless mobile environment %X In this paper, we propose a novel video summarization technique that automatically generates high quality video summaries suitable for wireless and mobile environments. The significant contribution of this paper lies in the proposed clustering scheme. We use Delaunay diagrams to cluster multidimensional point data corresponding to the frame contents of the video. In contrast to the existing clustering techniques used for summarization, our clustering algorithm is fully automatic and well suited for batch processing. We illustrate the quality of our clustering and summarization scheme in an experiment using several video clips.
%B 2004 IEEE International Conference on Communications %I IEEE %V 3 %P 1532 - 1536 Vol.3 %8 2004/06/20/24 %@ 0-7803-8533-0 %G eng %R 10.1109/ICC.2004.1312767 %0 Conference Paper %B Proceedings of the 2003 annual national conference on Digital government research %D 2003 %T Accessing diverse geo-referenced data sources with the SAND spatial DBMS %A Sankaranarayanan,Jagan %A Tanin,Egemen %A Samet, Hanan %A Brabec,Frantisek %X The Internet has become the most frequently accessed medium for obtaining various types of data. In particular, government agencies, academic institutions, and private enterprises have published gigabytes of geo-referenced data on the Web. However, to obtain geo-referenced data from the Web successfully, systems must be designed to be capable of understanding the data sets published in different data formats. Also, even if the data sets are available in a simple known format, they often have poorly defined structures. With these issues in mind, we have developed an Internet-enabled data collection and conversion utility that interfaces with our prototype spatial database system, SAND. Using this utility, data can be retrieved from many different sources on the Web and converted into a format understandable by the SAND spatial database management system. Our collection and conversion utility is able to import the most popular data formats; namely, ESRI Shapefiles, Microsoft Excel files, HTML files, and GML files. Data in unstructured formats are verified for correct selection of the data types and handling of missing tuples before the insertion operation into the database. Moreover, our utility makes it possible to download any nonspatial data set and combine it internally with a relevant spatial data set. These features are accessible through a spreadsheet-like interface for online editing and structuring of data.
%B Proceedings of the 2003 annual national conference on Digital government research %S dg.o '03 %I Digital Government Society of North America %P 1 - 4 %8 2003/// %G eng %U http://dl.acm.org/citation.cfm?id=1123196.1123206 %0 Journal Article %J University of Maryland Technical Report, HCIL-2003 %D 2003 %T Accuracy, Target Reentry and Fitts’ Law Performance of Preschool Children Using Mice %A Hourcade,J. P %A Bederson, Benjamin B. %A Druin, Allison %A Guimbretiere,F. %B University of Maryland Technical Report, HCIL-2003 %V 16 %8 2003/// %G eng %0 Journal Article %J Image Processing, IEEE Transactions on %D 2003 %T Accurate dense optical flow estimation using adaptive structure tensors and a parametric model %A Liu,Haiying %A Chellapa, Rama %A Rosenfeld, A. %K accuracy %K adaptive structure tensors %K coherent motion region %K confidence measure %K dense optical flow estimation %K eigenvalues and eigenfunctions %K generalized eigenvalue problem %K ground truth %K motion estimation %K parameter estimation %K parametric model %K synthetic and real image sequences %K three-dimensional structure tensor %X An accurate optical flow estimation algorithm is proposed in this paper. By combining the three-dimensional (3D) structure tensor with a parametric flow model, the optical flow estimation problem is converted to a generalized eigenvalue problem. The optical flow can be accurately estimated from the generalized eigenvectors. The confidence measure derived from the generalized eigenvalues is used to adaptively adjust the coherent motion region to further improve the accuracy. Experiments using both synthetic sequences with ground truth and real sequences illustrate our method. Comparisons with classical and recently published methods are also given to demonstrate the accuracy of our algorithm.
%B Image Processing, IEEE Transactions on %V 12 %P 1170 - 1180 %8 2003/10// %@ 1057-7149 %G eng %N 10 %R 10.1109/TIP.2003.815296 %0 Journal Article %J The Journal of the Acoustical Society of America %D 2003 %T Acoustical scattering from N spheres using a multilevel fast multipole method %A Gumerov, Nail A. %A Duraiswami, Ramani %X We develop an efficient computational method for wave scattering from a large number of spherical objects that are characterized by their radii, locations, and complex acoustical impedances of the surfaces. The direct T‐matrix method for solution of the problem that was developed and tested in [Gumerov and Duraiswami, J. Acoust. Soc. Am. 112, 2688–2701 (2002)] is inefficient for computations involving a large number of scatterers. Here, we implement and test a multilevel fast multipole method for speeding up the solution and achieving better memory complexity. The method is based on hierarchical space subdivision with oct‐trees using optimal space‐partitioning, on a theory for fast translation and re‐expansion of multipole solutions of the Helmholtz equation, and employs an iterative technique for the solution of large dense systems of linear equations with reflection‐based iterations. For N scatterers the method provides O(N […] In this paper, we provide a survey of the current state of the art in automated manufacturability analysis. We present the historical context in which this area has emerged and outline characteristics to compare and classify various systems. We describe the two dominant approaches to automated manufacturability analysis and overview representative systems based on their application domain. We describe support tools that enhance the effectiveness of manufacturability analysis systems. Finally, we attempt to expose some of the existing research challenges and future directions.
%I Institute for Systems Research, University of Maryland, College Park %V ISR; TR 1995-14 %8 1995/// %G eng %U http://drum.lib.umd.edu//handle/1903/5609 %0 Thesis %D 1995 %T Automated Manufacturability Analysis of Machined Parts %A Gupta, Satyandra K. %K Automation %K computer aided manufacturing %K computer integrated manufacturing %K Feature extraction %K flexible manufacturing %K manufacturability %K Manufacturing %K Solid modeling %X Because of pressing demands to reduce lead time and product cost, increasing research attention is being given to integration of engineering design and manufacturing. In this thesis, a systematic approach has been developed for computer-aided manufacturability analysis of machined parts. This approach can be used during design stages to improve the product quality from the manufacturing point of view.
Evaluating the manufacturability of a proposed design involves determining whether or not it is manufacturable with a given set of manufacturing operations - and if so, then finding the associated manufacturing efficiency. In this research, the design is represented as a solid model. The tolerance and surface finish information is represented as attributes of various faces of the solid model. Machining features are used to model the available machining operations. Since there can be several different ways to manufacture a proposed design, this requires considering alternative ways to manufacture it, in order to determine which one best meets the design and manufacturing objectives.
The approach developed in this thesis is based on the systematic exploration of various machining plans. The first step is to identify all machining features which can potentially be used to machine the given design. Using these features, different machining plans are generated. Each time a new plan is generated, it is examined to determine whether it can produce the desired design tolerances. If a plan is found to be capable of meeting the tolerance specifications, then its rating is computed. If no machining plan can be found that is capable of producing the design, then the design cannot be machined using the given set of machining operations; otherwise, the manufacturability rating of the design is computed. Since various alternative ways of machining the part are considered in this approach, the conclusions about the manufacturability are more realistic compared to the approach where just one alternative is considered.
It is anticipated that this research will help in speeding up the evaluation of new product designs in order to decide how or whether to manufacture them. Such a capability will be useful in responding quickly to changing demands and opportunities in the marketplace. %I UNIVERSITY OF MARYLAND, COLLEGE PARK %8 1995/// %G eng %U http://drum.lib.umd.edu//handle/1903/5693 %0 Journal Article %J First Workshop on Simulation and Interaction in Virtual Environments %D 1995 %T Automatic generation of multiresolution for polygonal models %A Varshney, Amitabh %A Agarwal,P. %A Brooks,F. %A Wright,W. %A Weber,H. %B First Workshop on Simulation and Interaction in Virtual Environments %8 1995/// %G eng %0 Journal Article %J AI Magazine %D 1994 %T AAAI 1994 Spring Symposium Series Reports %A Woods,William %A Uckun,Sendar %A Kohane,Isaac %A Bates,Joseph %A Hulthage,Ingemar %A Gasser,Les %A Hanks,Steve %A Gini,Maria %A Ram,Ashwin %A desJardins, Marie %A Johnson,Peter %A Etzioni,Oren %A Coombs,David %A Whitehead,Steven %B AI Magazine %V 15 %P 22 - 22 %8 1994/09/15/ %@ 0738-4602 %G eng %U http://www.aaai.org/ojs/index.php/aimagazine/article/viewArticle/1101 %N 3 %R 10.1609/aimag.v15i3.1101 %0 Journal Article %J Journal of Artificial Neural Networks %D 1994 %T Adaptation of noncompetitive and competitive neural networks to focal lesions %A Weinrich,M. %A Sutton,G. G %A Reggia, James A. %A D'Autrechy,C. L %B Journal of Artificial Neural Networks %V 1 %P 51 - 60 %8 1994/// %G eng %0 Journal Article %J SIGSOFT Softw. Eng. Notes %D 1994 %T Algebra and models (and reality) %A Zelkowitz, Marvin V %B SIGSOFT Softw. Eng. 
Notes %V 19 %P 79 - 81 %8 1994/10// %@ 0163-5948 %G eng %U http://doi.acm.org/10.1145/190679.190688 %N 4 %R 10.1145/190679.190688 %0 Conference Paper %B Proceedings of the SIGCHI conference on Human factors in computing systems: celebrating interdependence %D 1994 %T The alphaslider: a compact and rapid selector %A Ahlberg,Christopher %A Shneiderman, Ben %K Alphaslider %K dynamic queries %K menus %K selection technology %K widget %B Proceedings of the SIGCHI conference on Human factors in computing systems: celebrating interdependence %S CHI '94 %I ACM %C New York, NY, USA %P 365 - 371 %8 1994/// %@ 0-89791-650-6 %G eng %U http://doi.acm.org/10.1145/191666.191790 %R 10.1145/191666.191790 %0 Report %D 1994 %T An Application of Distributed Solid Modeling: Feature Recognition %A Regli,W. C. %A Gupta, Satyandra K. %A Nau, Dana S. %K Distributed computing %K feature recognition %K feature- based modeling %K multiprocessor solid modeling %K Systems Integration Methodology %X The availability of low-cost computational power is a driving force behind the growing sophistication of CAD software. Tools designed to reduce time-consuming build-test-redesign iterations are essential for increasing engineering quality and productivity. However, automation of the design process poses many difficult computational problems. As more downstream engineering activities are being considered during the design phase, guaranteeing reasonable response times within design systems becomes problematic. Design is an interactive process and speed is a critical factor in systems that enable designers to explore and experiment with alternative ideas during the design phase. Achieving interactivity requires an increasingly sophisticated allocation of computational resources in order to perform realistic design analyses and generate feedback in real time.
This paper presents our initial efforts to develop techniques to apply distributed algorithms to the problem of recognizing machining features from solid models. Existing work on recognition of features has focused exclusively on serial computer architectures. Our objective is to show that distributed algorithms can be employed on realistic parts with large numbers of features and many geometric and topological entities to obtain significant improvements in computation time using existing hardware and software tools. Migrating solid modeling applications toward a distributed computing framework enables interconnection of many of the autonomous and geographically diverse software tools used in the modern manufacturing enterprise.
This has been implemented on a network of SUN workstations using the ACIS solid modeler and the NIH C++ class library; inter-processor communication is handled with TCP/IP-based network communication tools.
%I Institute for Systems Research, University of Maryland, College Park
%V ISR; TR 1994-82
%8 1994///
%G eng
%U http://drum.lib.umd.edu//handle/1903/5552
%0 Journal Article
%J Journal of Heredity
%D 1994
%T An Atlas of Drosophila Genes: Sequences and Molecular Features
%A Maroni,G.
%A Mount, Stephen M.
%A Cavener,DR
%A Cavener,BA
%A Sharp,PM
%A Lloyd,AT
%A Schaeffer,SW
%B Journal of Heredity
%V 85
%P 495 - 496
%8 1994///
%G eng
%N 6
%0 Conference Paper
%B , 1994 IEEE International Conference on Systems, Man, and Cybernetics, 1994. 'Humans, Information and Technology'
%D 1994
%T On automatic filtering of multilingual texts
%A Oard, Douglas
%A DeClaris,N.
%A Dorr, Bonnie J
%A Faloutsos,C.
%K automated performance evaluation
%K Business communication
%K Computer science
%K Discussion forums
%K Displays
%K Educational institutions
%K Floods
%K Government
%K Information filtering
%K Information filters
%K Information retrieval
%K multilingual information retrieval
%K multilingual text filtering
%K natural languages
%K performance evaluation
%X An emerging requirement to sift through the increasing flood of text information has led to the rapid development of information filtering technology in the past five years. This study introduces novel approaches for filtering texts regardless of their source language. We begin with a brief description of related developments in text filtering and multilingual information retrieval. We then present three alternative approaches to selecting texts from a multilingual information stream which represent a logical evolution from existing techniques in related disciplines. Finally, a practical automated performance evaluation technique is proposed.
%B , 1994 IEEE International Conference on Systems, Man, and Cybernetics, 1994. 'Humans, Information and Technology'
%I IEEE
%V 2
%P 1645 - 1650 vol.2
%8 1994/10/02/5
%@ 0-7803-2129-4
%G eng
%R 10.1109/ICSMC.1994.400083
%0 Journal Article
%J ACM SIGCHI Bulletin
%D 1994
%T AVI '94: An International Workshop
%A Shneiderman, Ben
%A Badre,AI
%A Santos,Paulo
%X The fresh Mediterranean breezes of innovation enlivened the workshop conversations at Advanced Visual Interfaces '94 (Bari, Italy June 1--4, 1994). More than 100 participants met on the flower-gardened grounds of the Grande Albergo di Villa Romanazzi Carducci, whose modern hotel plus conference center complemented the beautifully restored old villa and the marble-lined swimming pool. Being together with active professionals from Europe and America for 4 days gave us an in-depth chance to learn about each other's professional and personal outlooks. The ancient Roman architecture and the traditional Italian surroundings reminded us that the impact of our work could also last for two thousand years and shape many people's daily lives.
%B ACM SIGCHI Bulletin
%V 26
%P 54 - 55
%8 1994/10//
%@ 0736-6906
%G eng
%U http://doi.acm.org/10.1145/191642.1047934
%N 4
%R 10.1145/191642.1047934
%0 Book
%D 1993
%T Active perception.
%A Aloimonos, J.
%I Lawrence Erlbaum Associates, Inc
%8 1993///
%G eng
%0 Book
%D 1993
%T Active Perception, Vol. I of Advances in Computer Vision series
%A Aloimonos, J.
%I Lawrence Erlbaum Associates
%8 1993///
%G eng
%0 Journal Article
%J Active Perception
%D 1993
%T Active vision revisited
%A Aloimonos, J.
%B Active Perception
%8 1993///
%G eng
%0 Conference Paper
%B Acoustics, Speech, and Signal Processing, 1993. ICASSP-93., 1993 IEEE International Conference on
%D 1993
%T An adaptive ESPRIT based on URV decomposition
%A Liu,K. J.R
%A O'Leary, Dianne P.
%A Stewart, G.W.
%A Wu,Y.-J.J.
%K adaptive filters
%K array of sensors
%K array signal processing
%K computational complexity
%K computational savings
%K ESPRIT algorithm
%K parameter estimation
%K performance
%K rank-revealing URV decomposition
%K real-time systems
%K time-varying systems
%K URV-based ESPRIT
%X ESPRIT is an algorithm for determining the fixed directions of arrival of a set of narrowband signals at an array of sensors. Its computational burden makes it unsuitable for real-time processing of signals with time-varying directions of arrival. The authors develop a new implementation of ESPRIT that has potential for real-time processing. It is based on a rank-revealing URV decomposition, rather than the eigendecomposition or singular value decomposition (SVD) used in previous ESPRIT algorithms. Its performance is demonstrated on simulated data representing both constant and time-varying signals. It is shown that the URV-based ESPRIT algorithm is effective for estimating time-varying directions-of-arrival at considerable computational savings over the SVD-based algorithm.
%B Acoustics, Speech, and Signal Processing, 1993. ICASSP-93., 1993 IEEE International Conference on
%V 4
%P 37 - 40 vol.4
%8 1993/04//
%G eng
%R 10.1109/ICASSP.1993.319588
%0 Journal Article
%J IEEE Transactions on Knowledge and Data Engineering
%D 1993
%T ADMS: a testbed for incremental access methods
%A Roussopoulos, Nick
%A Economou,N.
%A Stamenas,A.
%K Add-drop multiplexers
%K ADMS
%K advanced database management system
%K client-server architecture
%K commercial database management systems
%K Computational modeling
%K Database systems
%K distributed databases
%K heterogeneous DBMS
%K incremental access methods
%K incremental gateway
%K Information retrieval
%K interoperability
%K join index
%K large databases
%K Navigation
%K network operating systems
%K Object oriented databases
%K Object oriented modeling
%K Query processing
%K System testing
%K very large databases
%K view index
%K Workstations
%X ADMS is an advanced database management system developed to experiment with incremental access methods for large and distributed databases. It has been developed over the past eight years at the University of Maryland. The paper provides an overview of ADMS, and describes its capabilities and the performance attained by its incremental access methods. This paper also describes an enhanced client-server architecture that allows an incremental gateway access to multiple heterogeneous commercial database management systems.
%B IEEE Transactions on Knowledge and Data Engineering
%V 5
%P 762 - 774
%8 1993/10//
%@ 1041-4347
%G eng
%N 5
%R 10.1109/69.243508
%0 Journal Article
%J Synthesis of Parallel Algorithms
%D 1993
%T Advanced parallel prefix-sums, list ranking and connectivity
%A Vishkin, Uzi
%B Synthesis of Parallel Algorithms
%P 215 - 257
%8 1993///
%G eng
%0 Conference Paper
%B Data Compression Conference, 1993. DCC '93.
%D 1993
%T Algorithms for fast vector quantization
%A Arya,S.
%A Mount, Dave
%K directed graph
%K fast algorithms
%K neighborhood graph
%K performance
%K running time
%K search problems
%K tree search
%K vector quantisation
%K vector quantization
%X This paper shows that if one is willing to relax the requirement of finding the true nearest neighbor, it is possible to achieve significant improvements in running time and at only a very small loss in the performance of the vector quantizer. The authors present three algorithms for nearest neighbor searching: standard and priority