%0 Journal Article %J Journal of Microbiological Methods %D 2017 %T Application of a paper based device containing a new culture medium to detect Vibrio cholerae in water samples collected in Haiti %A Briquaire, Romain %A Rita R Colwell %A Boncy, Jacques %A Rossignol, Emmanuel %A Dardy, Aline %A Pandini, Isabelle %A Villeval, François %A Machuron, Jean-Louis %A Huq, Anwar %A Rashed, Shah %A Vandevelde, Thierry %A Rozand, Christine %B Journal of Microbiological Methods %V 133 %P 23 - 31 %8 Jan-02-2017 %G eng %U https://www.sciencedirect.com/science/article/pii/S0167701216303578?via%3Dihub %! Journal of Microbiological Methods %R 10.1016/j.mimet.2016.12.014 %0 Journal Article %J The American Journal of Tropical Medicine and Hygiene %D 2017 %T Assessment of Risk of Cholera in Haiti following Hurricane Matthew %A Huq, Anwar %A Anwar, Rifat %A Rita R Colwell %A McDonald, Michael D. %A Khan, Rakib %A Jutla, Antarpreet %A Akanda, Shafqat %X Damage to the inferior and fragile water and sanitation infrastructure of Haiti after Hurricane Matthew has created an urgent public health emergency in terms of the likelihood of cholera occurring in the human population. Using satellite-derived data on precipitation, gridded air temperature, and hurricane path and with information on water and sanitation (WASH) infrastructure, we tracked changing environmental conditions conducive for growth of pathogenic vibrios. Based on these data, we predicted and validated the likelihood of cholera cases occurring after the hurricane. The risk of cholera in the southwestern part of Haiti has remained relatively high from November 2016 to the present. Findings of this study provide a contemporary process for monitoring ground conditions that can guide public health interventions to control cholera in the human population by providing access to vaccines and safe WASH facilities. Assuming current social and behavioral patterns remain constant, it is recommended that WASH infrastructure be improved and considered a priority, especially before the 2017 rainy season. %B The American Journal of Tropical Medicine and Hygiene %V 97 %P 896 - 903 %8 Jul-09-2017 %G eng %U http://www.ajtmh.org/content/journals/10.4269/ajtmh.17-0048 %N 3 %R 10.4269/ajtmh.17-0048 %0 Journal Article %J Letters in Applied Microbiology %D 2016 %T Are natural reservoirs important for cholera surveillance? The case of an outbreak in a Brazilian estuary %A Martinelli Filho, J.E. %A Lopes, R.M. %A Rivera, I.N.G. %A Rita R Colwell %X Paranaguá Bay is one of the largest estuarine systems on the Southern Brazilian coast. The only recorded cholera outbreak in this region since the early 20th century occurred in 1999 and resulted in 467 cases and at least three reported deaths in a population of approx. 150 000 people. This short communication reports historical, unpublished data related to that outbreak. Water, zooplankton and bivalve samples were collected and evaluated using direct fluorescence assay to determine whether Vibrio cholerae serogroups O1 and O139 were present in the estuarine system at that time. Most of the water (83%) and zooplankton samples (75%) were positive for V. cholerae O1, while V. cholerae O139 was not detected. Shellfish (Mytella sp.) were also positive for V. cholerae O1. These results indicate that the estuary, including biological vectors such as copepods and bivalves, comprises an important reservoir of V. cholerae O1 and a probable waterborne pathway for the disease, in addition to contamination with untreated sewage.
%B Letters in Applied Microbiology %P 183 - 188 %8 Jan-09-2016 %G eng %U https://onlinelibrary.wiley.com/doi/10.1111/lam.12614 %N 3 %! Lett Appl Microbiol %R 10.1111/lam.12614 %0 Book Section %B Advances in Cryptology - ASIACRYPT 2013 %D 2013 %T Adaptive and Concurrent Secure Computation from New Adaptive, Non-malleable Commitments %A Dana Dachman-Soled %A Malkin, Tal %A Raykova, Mariana %A Venkitasubramaniam, Muthuramakrishnan %E Sako, Kazue %E Sarkar, Palash %K Algorithm Analysis and Problem Complexity %K Applications of Mathematics %K Data Encryption %K Discrete Mathematics in Computer Science %K Management of Computing and Information Systems %K Systems and Data Security %X We present a unified approach for obtaining general secure computation that achieves adaptive-Universally Composable (UC)-security. Using our approach we essentially obtain all previous results on adaptive concurrent secure computation, both in relaxed models (e.g., quasi-polynomial time simulation), as well as trusted setup models (e.g., the CRS model, the imperfect CRS model). This provides conceptual simplicity and insight into what is required for adaptive and concurrent security, as well as yielding improvements to set-up assumptions and/or computational assumptions in known models. Additionally, we provide the first constructions of concurrent secure computation protocols that are adaptively secure in the timing model, and the non-uniform simulation model. As a corollary we also obtain the first adaptively secure multiparty computation protocol in the plain model that is secure under bounded-concurrency. Conceptually, our approach can be viewed as an adaptive analogue to the recent work of Lin, Pass and Venkitasubramaniam [STOC ‘09], who considered only non-adaptive adversaries. Their main insight was that the non-malleability requirement could be decoupled from the simulation requirement to achieve UC-security. A main conceptual contribution of this work is, quite surprisingly, that it is still the case even when considering adaptive security. A key element in our construction is a commitment scheme that satisfies a strong definition of non-malleability. Our new primitive of concurrent equivocal non-malleable commitments, intuitively, guarantees that even when a man-in-the-middle adversary observes concurrent equivocal commitments and decommitments, the binding property of the commitments continues to hold for commitments made by the adversary. This definition is stronger than previous ones, and may be of independent interest. Previous constructions that satisfy our definition have been constructed in setup models, but either require existence of stronger encryption schemes such as CCA-secure encryption or require independent “trapdoors” provided by the setup for every pair of parties to ensure non-malleability. A main technical contribution of this work is to provide a construction that eliminates these requirements and requires only a single trapdoor. %B Advances in Cryptology - ASIACRYPT 2013 %S Lecture Notes in Computer Science %I Springer Berlin Heidelberg %P 316 - 336 %8 2013/01/01/ %@ 978-3-642-42032-0, 978-3-642-42033-7 %G eng %U http://link.springer.com/chapter/10.1007/978-3-642-42033-7_17 %0 Conference Paper %B CHI 2013 To Appear %D 2013 %T Age-Related Differences in Performance with Touchscreens Compared to Traditional Mouse Input %A Findlater,L. %A Jon Froehlich %A Fattal, K. %A Wobbrock,J.O. %A Dastyar, T. 
%X Despite the apparent popularity of touch screens for older adults, little is known about psychomotor performance with these devices. We compare performance between older adults and younger adults in four desktop and touchscreen tasks: pointing, dragging, crossing and steering. On the touchscreen, we also examine pinch zoom. Our results show that while older adults were significantly slower than younger adults in general, the touchscreen reduced the performance gap relative to the desktop and mouse. Indeed, the touchscreen resulted in a significant movement time reduction of 35% over the mouse for older adults, compared to only 16% for younger adults. Error rates also decreased. %B CHI 2013 To Appear %8 2013 %G eng %0 Journal Article %J Human Language Technologies: The 2013 Annual Conference of the North American Chapter of the Association for Computational Linguistics %D 2013 %T Argviz: Interactive Visualization of Topic Dynamics in Multi-party Conversations %A Nguyen, V A %A Hu, Y %A Jordan Boyd-Graber %X We introduce an efficient, interactive framework—Argviz—for experts to analyze the dynamic topical structure of multi-party conversations. Users inject their needs, expertise, and insights into models via iterative topic refinement. The refined topics feed into a ... %B Human Language Technologies: The 2013 Annual Conference of the North American Chapter of the Association for Computational Linguistics %8 2013/00/01 %G eng %U https://www.aclweb.org/anthology-new/N/N13/N13-3.pdf#page=42 %0 Journal Article %J The ISME Journal %D 2012 %T An additional step in the transmission of Yersinia pestis? %A Easterday, W Ryan %A Kausrud, Kyrre L %A Star, Bastiaan %A Heier, Lise %A Haley, Bradd J %A Ageyev, Vladimir %A Rita R Colwell %A Stenseth, Nils Chr %X Plague, caused by the bacterium Yersinia pestis, is a mammalian vector-borne disease, transmitted by fleas that serve as the vector between rodent hosts. For many pathogens, including Y. pestis, there are strong evolutionary pressures that lead to a reduction in ‘useless genes’, with only those retained that reflect function in the specific environment inhabited by the pathogen. Genetic traits critical for survival and transmission between two environments, the rodent and the flea, are conserved in epizootic/epidemic plague strains. However, there are genes that remain conserved for which no function in the flea–rodent cycle has yet been observed, indicating an additional environment may exist in the transmission cycle of plague. Here, we present evidence for highly conserved genes that suggests a role in the persistence of Y. pestis after death of its host. Furthermore, maintenance of these genes points to Y. pestis traversing a post-mortem path between, and possibly within, epizootic periods and offers insight into mechanisms that may allow Y. pestis an alternative route of transmission in the natural environment. %B The ISME Journal %P 231 - 236 %8 Jan-02-2012 %G eng %U http://www.nature.com/articles/ismej2011105 %N 2 %! ISME J %R 10.1038/ismej.2011.105 %0 Book %D 2012 %T Advances in Computers %A Memon, Atif M. %K Computers / Programming / General %K Computers / Software Development & Engineering / General %K Mathematics / General %X Since its first volume in 1960, Advances in Computers has presented detailed coverage of innovations in computer hardware, software, theory, design, and applications.
It has also provided contributors with a medium in which they can explore their subjects in greater depth and breadth than journal articles usually allow. As a result, many articles have become standard references that continue to be of significant, lasting value in this rapidly expanding field. In-depth surveys and tutorials on new computer technology; well-known authors and researchers in the field; extensive bibliographies with most chapters; many of the volumes are devoted to single themes or subfields of computer science %I Academic Press %8 2012/05/01/ %@ 9780123964786 %G eng %0 Journal Article %J BMC bioinformatics %D 2012 %T AGORA: Assembly Guided by Optical Restriction Alignment %A Lin, H.C. %A Goldstein, S. %A Mendelowitz, L. %A Zhou, S. %A Wetzel, J. %A Schwartz, D.C. %A Pop, Mihai %X Genome assembly is difficult due to repeated sequences within the genome, which create ambiguities and cause the final assembly to be broken up into many separate sequences (contigs). Long range linking information, such as mate-pairs or mapping data, is necessary to help assembly software resolve repeats, thereby leading to a more complete reconstruction of genomes. Prior work has used optical maps for validating assemblies and scaffolding contigs, after an initial assembly has been produced. However, optical maps have not previously been used within the genome assembly process. Here, we use optical map information within the popular de Bruijn graph assembly paradigm to eliminate paths in the de Bruijn graph which are not consistent with the optical map and help determine the correct reconstruction of the genome. We developed a new algorithm called AGORA: Assembly Guided by Optical Restriction Alignment. AGORA is the first algorithm to use optical map information directly within the de Bruijn graph framework to help produce an accurate assembly of a genome that is consistent with the optical map information provided. Our simulations on bacterial genomes show that AGORA is effective at producing assemblies closely matching the reference sequences. Additionally, we show that noise in the optical map can have a strong impact on the final assembly quality for some complex genomes, and we also measure how various characteristics of the starting de Bruijn graph may impact the quality of the final assembly. Lastly, we show that a proper choice of restriction enzyme for the optical map may substantially improve the quality of the final assembly. Our work shows that optical maps can be used effectively to assemble genomes within the de Bruijn graph assembly framework. Our experiments also provide insights into the characteristics of the mapping data that most affect the performance of our algorithm, indicating the potential benefit of more accurate optical mapping technologies, such as nano-coding. %B BMC bioinformatics %V 13 %8 2012 %G eng %N 1 %0 Journal Article %J J Bacteriol %D 2012 %T Archaeosortases and exosortases are widely distributed systems linking membrane transit with posttranslational modification. %A Haft, Daniel H %A Payne, Samuel H %A Jeremy D Selengut %K Amino Acid Sequence %K Aminoacyltransferases %K Archaeal Proteins %K Bacterial Proteins %K Cell Membrane %K Cysteine Endopeptidases %K Gene Expression Regulation, Archaeal %K Gene Expression Regulation, Bacterial %K Gene Expression Regulation, Enzymologic %K Molecular Sequence Data %K Protein Processing, Post-Translational %X
Multiple new prokaryotic C-terminal protein-sorting signals were found that reprise the tripartite architecture shared by LPXTG and PEP-CTERM: motif, TM helix, basic cluster. Defining hidden Markov models were constructed for all. PGF-CTERM occurs in 29 archaeal species, some of which have more than 50 proteins that share the domain. PGF-CTERM proteins include the major cell surface protein in Halobacterium, a glycoprotein with a partially characterized diphytanylglyceryl phosphate linkage near its C terminus. Comparative genomics identifies a distant exosortase homolog, designated archaeosortase A (ArtA), as the likely protein-processing enzyme for PGF-CTERM. Proteomics suggests that the PGF-CTERM region is removed. Additional systems include VPXXXP-CTERM/archaeosortase B in two of the same archaea and PEF-CTERM/archaeosortase C in four others. Bacterial exosortases often fall into subfamilies that partner with very different cohorts of extracellular polymeric substance biosynthesis proteins; several species have multiple systems. Variant systems include the VPDSG-CTERM/exosortase C system unique to certain members of the phylum Verrucomicrobia, VPLPA-CTERM/exosortase D in several alpha- and deltaproteobacterial species, and a dedicated (single-target) VPEID-CTERM/exosortase E system in alphaproteobacteria. Exosortase-related families XrtF in the class Flavobacteria and XrtG in Gram-positive bacteria mark distinctive conserved gene neighborhoods. A picture emerges of an ancient and now well-differentiated superfamily of deeply membrane-embedded protein-processing enzymes. Their target proteins are destined to transit cellular membranes during their biosynthesis, during which most undergo additional posttranslational modifications such as glycosylation.
%B J Bacteriol %V 194 %P 36-48 %8 2012 Jan %G eng %N 1 %R 10.1128/JB.06026-11 %0 Conference Paper %B LEET'12 Proceedings of the 5th USENIX conference on Large-Scale Exploits and Emergent Threats %D 2012 %T Ask WINE: Are We Safer Today? Evaluating Operating System Security Through Big Data Analysis %A Tudor Dumitras %A Efstathopoulos, Petros %X The Internet can be a dangerous place: 800,000 new malware variants are detected each day, and this number is growing at an exponential rate--driven by the quest for economic gains. However, over the past ten years operating-system vendors have introduced a number of security technologies that aim to make exploits harder and to reduce the attack surface of the platform. Faced with these two conflicting trends, it is difficult for end-users to determine what techniques make them safer from Internet attacks. In this position paper, we argue that to answer this question conclusively we must analyze field data collected on real hosts that are targeted by attacks--e.g., the approximately 50 million records of anti-virus telemetry available through Symantec's WINE platform. Such studies can characterize the factors that drive the production of malware, can help us understand the impact of security technologies in the real world and can suggest new security metrics, derived from field observations rather than small lab experiments, indicating how susceptible to attacks a computing platform may be. %B LEET'12 Proceedings of the 5th USENIX conference on Large-Scale Exploits and Emergent Threats %S LEET'12 %I USENIX Association %P 11 - 11 %8 2012/04/24/ %G eng %U http://dl.acm.org/citation.cfm?id=2228340.2228356 %0 Journal Article %J Autonomous Robots %D 2012 %T Automated synthesis of action selection policies for unmanned vehicles operating in adverse environments %A Svec,Petr %A Gupta, Satyandra K. %K Computer science %X We address the problem of automated action selection policy synthesis for unmanned vehicles operating in adverse environments. We introduce a new evolutionary computation-based approach using which an initial version of the policy is automatically generated and then gradually refined by detecting and fixing its shortcomings. The synthesis technique consists of the automated extraction of the vehicle’s exception states and Genetic Programming (GP) for automated composition and optimization of corrective sequences of commands in the form of macro-actions to be applied locally. The focus is specifically on automated synthesis of a policy for Unmanned Surface Vehicle (USV) to efficiently block the advancement of an intruder boat toward a valuable target. This task requires the USV to utilize reactive planning complemented by short-term forward planning to generate specific maneuvers for blocking. The intruder is human-competitive and exhibits a deceptive behavior so that the USV cannot exploit regularity in its attacking behavior. We compared the performance of a hand-coded blocking policy to the performance of a policy that was automatically synthesized. Our results show that the performance of the automatically generated policy exceeds the performance of the hand-coded policy and thus demonstrates the feasibility of the proposed approach. 
%B Autonomous Robots %V 32 %P 149 - 164 %8 2012/// %@ 0929-5593 %G eng %U http://www.springerlink.com/content/664k84x080868141/abstract/ %N 2 %R 10.1007/s10514-011-9268-6 %0 Journal Article %J IEEE Transactions on Information Forensics & Security %D 2012 %T Automatic Authentication of Banknotes %A Roy,Ankush %A Halder,Biswajit %A Garain,Utpal %A David Doermann %X In this paper, we address the problem of the automatic authentication of paper currency. Indian banknotes are used to show how a system can be developed for discriminating counterfeit notes from genuine notes. Image processing and pattern recognition techniques are designed to carefully analyze embedded security features. Experiments conducted on forensic samples show that a high precision low cost machine can be developed to address this problem. The analysis of current security features’ ability to protect against counterfeiting also suggests topics that should be considered in designing future currency notes. %B IEEE Transactions on Information Forensics & Security %8 2012/// %G eng %0 Conference Paper %B Technical Communications of the 27th International Conference on Logic Programming (ICLP'11) %D 2011 %T Abduction in Annotated Probabilistic Temporal Logic %A Molinaro,Cristian %A Sliva,Amy %A V.S. Subrahmanian %X Annotated Probabilistic Temporal (APT) logic programs are a form of logic programs that allow users to state (or systems to automatically learn) rules of the form “formula G becomes true K time units after formula F became true with L to U% probability.” In this paper, we develop a theory of abduction for APT logic programs. Specifically, given an APT logic program Pi, a set of formulas H that can be “added” to Pi, and a goal G, is there a subset S of H such that Pi ∪ S is consistent and entails the goal G? In this paper, we study the complexity of the Basic APT Abduction Problem (BAAP). We then leverage a geometric characterization of BAAP to suggest a set of pruning strategies when solving BAAP and use these intuitions to develop a sound and complete algorithm. %B Technical Communications of the 27th International Conference on Logic Programming (ICLP'11) %S Leibniz International Proceedings in Informatics (LIPIcs) %I Schloss Dagstuhl--Leibniz-Zentrum fuer Informatik %V 11 %P 240 - 250 %8 2011/// %@ 978-3-939897-31-6 %G eng %R http://dx.doi.org/10.4230/LIPIcs.ICLP.2011.240 %0 Journal Article %J ACM Transactions on Accessible Computing (TACCESS) %D 2011 %T Ability-based design: Concept, principles and examples %A Wobbrock,J.O. %A Kane,S.K. %A Gajos,K.Z. %A Harada,S. %A Jon Froehlich %B ACM Transactions on Accessible Computing (TACCESS) %V 3 %8 2011 %G eng %N 3 %0 Journal Article %J arXiv:1105.1743 [cs] %D 2011 %T Abstracting Abstract Machines: A Systematic Approach to Higher-Order Program Analysis %A David Van Horn %A Might, Matthew %K Computer Science - Programming Languages %K F.3.2 %K F.4.1 %X Predictive models are fundamental to engineering reliable software systems. However, designing conservative, computable approximations for the behavior of programs (static analyses) remains a difficult and error-prone process for modern high-level programming languages. What analysis designers need is a principled method for navigating the gap between semantics and analytic models: analysis designers need a method that tames the interaction of complex language features such as higher-order functions, recursion, exceptions, continuations, objects and dynamic allocation.
We contribute a systematic approach to program analysis that yields novel and transparently sound static analyses. Our approach relies on existing derivational techniques to transform high-level language semantics into low-level deterministic state-transition systems (with potentially infinite state spaces). We then perform a series of simple machine refactorings to obtain a sound, computable approximation, which takes the form of a non-deterministic state-transition system with a finite state space. The approach scales up uniformly to enable program analysis of realistic language features, including higher-order functions, tail calls, conditionals, side effects, exceptions, first-class continuations, and even garbage collection. %B arXiv:1105.1743 [cs] %8 2011/05/09/ %G eng %U http://arxiv.org/abs/1105.1743 %0 Journal Article %J BMC Evolutionary Biology %D 2011 %T Accelerated evolution of 3'avian FOXE1 genes, and thyroid and feather specific expression of chicken FoxE1 %A Yaklichkin,Sergey Yu %A Darnell,Diana K %A Pier,Maricela V %A Antin,Parker B %A Hannenhalli, Sridhar %X The forkhead transcription factor gene E1 (FOXE1) plays an important role in regulation of thyroid development, palate formation and hair morphogenesis in mammals. However, avian FOXE1 genes have not been characterized and as such, codon evolution of FOXE1 orthologs in a broader evolutionary context of mammals and birds is not known. %B BMC Evolutionary Biology %V 11 %P 302 - 302 %8 2011/10/15/ %@ 1471-2148 %G eng %U http://www.biomedcentral.com/1471-2148/11/302 %N 1 %R 10.1186/1471-2148-11-302 %0 Journal Article %J BMC Genomics %D 2011 %T Accurate and fast estimation of taxonomic profiles from metagenomic shotgun sequences %A Liu,Bo %A Gibbons,Theodore %A Ghodsi,Mohammad %A Treangen,Todd %A Pop, Mihai %X A major goal of metagenomics is to characterize the microbial composition of an environment. The most popular approach relies on 16S rRNA sequencing; however, this approach can generate biased estimates due to differences in the copy number of the gene between even closely related organisms, and due to PCR artifacts. The taxonomic composition can also be determined from metagenomic shotgun sequencing data by matching individual reads against a database of reference sequences. One major limitation of prior computational methods used for this purpose is the use of a universal classification threshold for all genes at all taxonomic levels. %B BMC Genomics %V 12 %P S4 - S4 %8 2011/07/27/ %@ 1471-2164 %G eng %U http://www.biomedcentral.com/1471-2164/12/S2/S4 %N Suppl 2 %R 10.1186/1471-2164-12-S2-S4 %0 Journal Article %J Genome Biology %D 2011 %T Accurate proteome-wide protein quantification from high-resolution 15N mass spectra %A Zia Khan %A Amini, Sasan %A Bloom, Joshua S. %A Ruse, Cristian %A Caudy, Amy A. %A Kruglyak, Leonid %A Singh, Mona %A Perlman, David H. %A Tavazoie, Saeed %X In quantitative mass spectrometry-based proteomics, the metabolic incorporation of a single source of 15N-labeled nitrogen has many advantages over using stable isotope-labeled amino acids. However, the lack of a robust computational framework for analyzing the resulting spectra has impeded wide use of this approach. We have addressed this challenge by introducing a new computational methodology for analyzing 15N spectra in which quantification is integrated with identification.
Application of this method to an Escherichia coli growth transition reveals significant improvement in quantification accuracy over previous methods. %B Genome Biology %V 12 %8 2011/12/19/ %@ 1465-6906 %G eng %U http://genomebiology.com/2011/12/12/R122/abstract %N 12 %0 Journal Article %J SIAM Journal on Computing %D 2011 %T On Achieving the "Best of Both Worlds" in Secure Multiparty Computation %A Ishai,Y. %A Katz, Jonathan %A Kushilevitz,E. %A Lindell,Y. %A Petrank,E. %X Two settings are traditionally considered for secure multiparty computation, depending on whether or not a majority of the parties are assumed to be honest. Existing protocols that assume an honest majority provide “full security” (and, in particular, guarantee output delivery and fairness) when this assumption holds, but are completely insecure if this assumption is violated. On the other hand, known protocols tolerating an arbitrary number of corruptions do not guarantee fairness or output delivery even if only a single party is dishonest. It is natural to wonder whether it is possible to achieve the “best of both worlds”: namely, a single protocol that simultaneously achieves the best possible security in both the above settings. Here, we rule out this possibility (at least for general functionalities) and show some positive results regarding what can be achieved. %B SIAM Journal on Computing %V 40 %P 122 - 122 %8 2011/// %G eng %N 1 %0 Conference Paper %B Image Processing (ICIP), 2011 18th IEEE International Conference on %D 2011 %T Action recognition using Partial Least Squares and Support Vector Machines %A Ramadan,S. %A Davis, Larry S. %K action recognition %K feature extraction %K image recognition %K INRIA IXMAS dataset %K least squares approximations %K multiclass SVM %K multiple PLS regressors %K partial least squares %K regression analysis %K spatiotemporal properties %K support vector machines %K very high dimensional feature vectors %X We introduce an action recognition approach based on Partial Least Squares (PLS) and Support Vector Machines (SVM). We extract very high dimensional feature vectors representing spatio-temporal properties of actions and use multiple PLS regressors to find relevant features that distinguish amongst action classes. Finally, we use a multi-class SVM to learn and classify those relevant features. We applied our approach to INRIA's IXMAS dataset. Experimental results show that our method is superior to other methods applied to the IXMAS dataset. %B Image Processing (ICIP), 2011 18th IEEE International Conference on %P 533 - 536 %8 2011/09// %G eng %R 10.1109/ICIP.2011.6116399 %0 Conference Paper %B Person-Oriented Vision (POV), 2011 IEEE Workshop on %D 2011 %T Active inference for retrieval in camera networks %A Daozheng Chen %A Bilgic,Mustafa %A Getoor, Lise %A Jacobs, David W. %A Mihalkova,Lilyana %A Tom Yeh %X We address the problem of searching camera network videos to retrieve frames containing specified individuals. We show the benefit of utilizing a learned probabilistic model that captures dependencies among the cameras. In addition, we develop an active inference framework that can request human input at inference time, directing human attention to the portions of the videos whose correct annotation would provide the biggest performance improvements.
Our primary contribution is to show that by mapping video frames in a camera network onto a graphical model, we can apply collective classification and active inference algorithms to significantly increase the performance of the retrieval system, while minimizing the number of human annotations required. %B Person-Oriented Vision (POV), 2011 IEEE Workshop on %I IEEE %P 13 - 20 %8 2011/01// %@ 978-1-61284-036-9 %G eng %U http://ieeexplore.ieee.org/xpls/abs_all.jsp?arnumber=5712363&tag=1 %R 10.1109/POV.2011.5712363 %0 Conference Paper %B Proceedings of the 2011 annual conference extended abstracts on Human factors in computing systems %D 2011 %T Active progress bars: facilitating the switch to temporary activities %A Hurter,Christophe %A Girouard,Audrey %A Riche,Nathalie %A Plaisant, Catherine %K Frustration %K participatory design %K progress bars %K task switching %X In this paper, we seek to find a better way of effective task management when a progress bar interrupts the user's primary activity. We propose to augment progress bars with user controlled functionalities facilitating the switch to temporary activities. We detail a taxonomy of waiting period contexts and possible temporary tasks, then report on 5 participatory design sessions and a follow-up survey of 96 respondents. Finally, we describe an early prototype of active progress bars, and report on initial use. %B Proceedings of the 2011 annual conference extended abstracts on Human factors in computing systems %S CHI EA '11 %I ACM %C New York, NY, USA %P 1963 - 1968 %8 2011/// %@ 978-1-4503-0268-5 %G eng %U http://doi.acm.org/10.1145/1979742.1979883 %R 10.1145/1979742.1979883 %0 Conference Paper %B 2011 IEEE International Conference on Computer Vision (ICCV) %D 2011 %T Active scene recognition with vision and language %A Yu,Xiaodong %A Fermüller, Cornelia %A Ching Lik Teo %A Yezhou Yang %A Aloimonos, J.
%K accuracy %K active scene recognition %K classification performance %K Cognition %K Computer vision %K Detectors %K Equations %K high level knowledge utilization %K HUMANS %K image classification %K inference mechanisms %K object detectors %K Object recognition %K reasoning module %K sensing process %K sensory module %K support vector machines %K Training %X This paper presents a novel approach to utilizing high level knowledge for the problem of scene recognition in an active vision framework, which we call active scene recognition. In traditional approaches, high level knowledge is used in the post-processing to combine the outputs of the object detectors to achieve better classification performance. In contrast, the proposed approach employs high level knowledge actively by implementing an interaction between a reasoning module and a sensory module (Figure 1). Following this paradigm, we implemented an active scene recognizer and evaluated it with a dataset of 20 scenes and 100+ objects. We also extended it to the analysis of dynamic scenes for activity recognition with attributes. Experiments demonstrate the effectiveness of the active paradigm in introducing attention and additional constraints into the sensing process. %B 2011 IEEE International Conference on Computer Vision (ICCV) %I IEEE %P 810 - 817 %8 2011/11/06/13 %@ 978-1-4577-1101-5 %G eng %R 10.1109/ICCV.2011.6126320 %0 Conference Paper %B Twenty-Second International Joint Conference on Artificial Intelligence %D 2011 %T Active Surveying: A Probabilistic Approach for Identifying Key Opinion Leaders %A Sharara,H. %A Getoor, Lise %A Norton,M. %X Opinion leaders play an important role in influencing people’s beliefs, actions and behaviors. Although a number of methods have been proposed for identifying influentials using secondary sources of information, the use of primary sources, such as surveys, is still favored in many domains. In this work we present a new surveying method which combines secondary data with partial knowledge from primary sources to guide the information gathering process. We apply our proposed active surveying method to the problem of identifying key opinion leaders in the medical field, and show how we are able to accurately identify the opinion leaders while minimizing the amount of primary data required, which results in significant cost reduction in data acquisition without sacrificing its integrity. %B Twenty-Second International Joint Conference on Artificial Intelligence %8 2011/// %G eng %0 Conference Paper %B Proceedings of the 20th international conference companion on World wide web %D 2011 %T Adapting a map query interface for a gesturing touch screen interface %A Samet, Hanan %A Teitler,Benjamin E. %A Adelfio,Marco D. %A Lieberman,Michael D. %K map query interface %K newsstand %K touch screen gesturing interface %X NewsStand is an example application of a general framework that we are developing to enable searching for information using a map query interface, where the information results from monitoring the output of over 8,000 RSS news sources and is available for retrieval within minutes of publication. The user interface of NewsStand was recently adapted so that NewsStand can execute on mobile and tablet devices with a gesturing touch screen interface such as the iPhone, iPod Touch, and iPad. This action led to a discovery of some shortcomings of current mapping APIs as well as devising some interesting new widgets.
These issues are discussed, and the realization can be seen by a demo at http://newsstand.umiacs.umd.edu on any of the above Apple devices as well as other devices that support gestures such as an Android phone. %B Proceedings of the 20th international conference companion on World wide web %S WWW '11 %I ACM %C New York, NY, USA %P 257 - 260 %8 2011/// %@ 978-1-4503-0637-9 %G eng %U http://doi.acm.org/10.1145/1963192.1963303 %R 10.1145/1963192.1963303 %0 Journal Article %J Proceedings of the 30th Annual ACM Symposium on Principles of Distributed Computing %D 2011 %T Adaptively secure broadcast, revisited %A Garay,J. A %A Katz, Jonathan %A Kumaresan,R. %A Zhou,H.S. %X We consider the classical problem of synchronous broadcast with dishonest majority, when a public-key infrastructure and digital signatures are available. In a surprising result, Hirt and Zikas (Eurocrypt 2010) recently observed that all existing protocols for this task are insecure against an adaptive adversary who can choose which parties to corrupt as the protocol progresses. Moreover, they prove an impossibility result for adaptively secure broadcast in their setting. We argue that the communication model adopted by Hirt and Zikas is unrealistically pessimistic. We revisit the problem of adaptively secure broadcast in a more natural synchronous model (with rushing), and show that broadcast is possible in this setting for an arbitrary number of corruptions. Our positive result holds under a strong, simulation-based definition in the universal-composability framework. We also study the impact of adaptive attacks on protocols for secure multi-party computation where broadcast is used as a sub-routine. %B Proceedings of the 30th Annual ACM Symposium on Principles of Distributed Computing %8 2011/// %G eng %0 Journal Article %J Advances in Imaging and Electron Physics %D 2011 %T Advances in Video-Based Biometrics %A Chellappa, Rama %A Turaga,P. %B Advances in Imaging and Electron Physics %V 83 %P 183 - 203 %8 2011/// %G eng %0 Journal Article %J International Journal of Research in Marketing %D 2011 %T Agent-based modeling in marketing: Guidelines for rigor %A Rand, William %A Rust,Roland T. %X Agent-based modeling can illuminate how complex marketing phenomena emerge from simple decision rules. Marketing phenomena that are too complex for conventional analytical or empirical approaches can often be modeled using this approach. Agent-based modeling investigates aggregate phenomena by simulating the behavior of individual “agents,” such as consumers or organizations. Some useful examples of agent-based modeling have been published in marketing journals, but widespread acceptance of the agent-based modeling method and publication of this method in the highest-level marketing journals have been slowed by the lack of widely accepted standards of how to do agent-based modeling rigorously. We address this need by proposing guidelines for rigorous agent-based modeling. We demonstrate these guidelines, and the value of agent-based modeling for marketing research, through the use of an example. We use an agent-based modeling approach to replicate the Bass model of the diffusion of innovations, illustrating the use of the proposed guidelines to ensure the rigor of the analysis. We also show how extensions of the Bass model that would be difficult to carry out using traditional marketing research techniques are possible to implement using a rigorous agent-based approach.
%B International Journal of Research in Marketing %V 28 %P 181 - 193 %8 2011/09// %@ 0167-8116 %G eng %U http://www.sciencedirect.com/science/article/pii/S0167811611000504 %N 3 %R 10.1016/j.ijresmar.2011.04.002 %0 Journal Article %J ACM Trans. Embed. Comput. Syst. %D 2011 %T Analysis of SystemC actor networks for efficient synthesis %A Falk, Joachim %A Zebelein, Christian %A Keinert, Joachim %A Haubelt, Christian %A Teich, Juergen %A Bhattacharyya, Shuvra S. %K actor-oriented design %K clustering %K Dataflow analysis %K scheduling %X Applications in the signal processing domain are often modeled by dataflow graphs. Due to heterogeneous complexity requirements, these graphs contain both dynamic and static dataflow actors. In previous work, we presented a generalized clustering approach for these heterogeneous dataflow graphs in the presence of unbounded buffers. This clustering approach allows the application of static scheduling methodologies for static parts of an application during embedded software generation for multiprocessor systems. It systematically exploits the predictability and efficiency of the static dataflow model to obtain latency and throughput improvements. In this article, we present a generalization of this clustering technique to dataflow graphs with bounded buffers, therefore enabling synthesis for embedded systems without dynamic memory allocation. Furthermore, a case study is given to demonstrate the performance benefits of the approach. %B ACM Trans. Embed. Comput. Syst. %V 10 %P 18:1 - 18:34 %8 2011 %@ 1539-9087 %G eng %U http://doi.acm.org/10.1145/1880050.1880054 %N 2 %0 Journal Article %J Transportation Research Board 90th Annual Meeting Compendium of Papers %D 2011 %T Analyzing Incident Management Event Sequences with Interactive Visualization %A Guerra Gómez,J. %A Wongsuphasawat,K. %A Wang,T. D %A Pack,M. %A Plaisant, Catherine %X While traditional safety and incident analysis has mostly focused on incident attribute data, such as the location and time of the incident, there are other aspects in incident response that are temporal in nature and are more difficult to analyze. We describe a visual analytics tool for temporal data exploration, called LifeFlow, used for the analysis of incident response data. LifeFlow provides user-controlled overviews of event sequences (e.g., notification, arrival, clearance, etc.). It allows analysts to interactively explore temporal patterns, find anomalies in sequences and compare management practices. This type of analysis can potentially lead to process improvements and save human lives. We used NCHRP traffic incident data with more than 200,000 incidents reported by 8 different agencies over a period of about 28 months. Our experience suggests that even non-expert analysts can spot many anomalies in the data using the LifeFlow overviews, and are able to rapidly ask many questions and find differences between agencies. %B Transportation Research Board 90th Annual Meeting Compendium of Papers %8 2011/// %G eng %0 Journal Article %J ACM Trans. Comput. Logic %D 2011 %T Annotated probabilistic temporal logic %A Shakarian,Paulo %A Parker,Austin %A Simari,Gerardo %A V.S. Subrahmanian %K frequency functions %K imprecise probabilities %K Probabilistic and temporal reasoning %K threads %X The semantics of most logics of time and probability is given via a probability distribution over threads, where a thread is a structure specifying what will be true at different points in time (in the future).
When assessing the probabilities of statements such as “Event a will occur within 5 units of time of event b,” there are many different semantics possible, even when assessing the truth of this statement within a single thread. We introduce the syntax of annotated probabilistic temporal (APT) logic programs and axiomatically introduce the key notion of a frequency function (for the first time) to capture different types of intrathread reasoning, and then provide a semantics for intrathread and interthread reasoning in APT logic programs parameterized by such frequency functions. We develop a comprehensive set of complexity results for consistency checking and entailment in APT logic programs, together with sound and complete algorithms to check consistency and entailment. The basic algorithms use linear programming, but we then show how to substantially and correctly reduce the sizes of these linear programs to yield better computational properties. We describe a real world application we are developing using APT logic programs. %B ACM Trans. Comput. Logic %V 12 %P 14:1–14:44 - 14:1–14:44 %8 2011/01// %@ 1529-3785 %G eng %U http://doi.acm.org/10.1145/1877714.1877720 %N 2 %R 10.1145/1877714.1877720 %0 Conference Paper %B 2011 AAAI Fall Symposium Series %D 2011 %T Ant Colony Optimization in a Changing Environment %A Seymour,John Jefferson %A Tuzo,Joseph %A desJardins, Marie %X Ant colony optimization (ACO) algorithms are computational problem-solving methods that are inspired by the complex behaviors of ant colonies; specifically, the ways in which ants interact with each other and their environment to optimize the overall performance of the ant colony. Our eventual goal is to develop and experiment with ACO methods that can more effectively adapt to dynamically changing environments and problems. We describe biological ant systems and the dynamics of their environments and behaviors. We then introduce a family of dynamic ACO algorithms that can handle dynamic modifications of their inputs. We report empirical results, showing that dynamic ACO algorithms can effectively adapt to time-varying environments. %B 2011 AAAI Fall Symposium Series %8 2011/03/11/ %G eng %U http://www.aaai.org/ocs/index.php/FSS/FSS11/paper/viewPaper/4223 %0 Conference Paper %B 2011 22nd IEEE International Symposium on Rapid System Prototyping (RSP) %D 2011 %T Applying graphics processor acceleration in a software defined radio prototyping environment %A Plishker,W. %A Zaki, G.F. %A Bhattacharyya, Shuvra S. %A Clancy, C. %A Kuykendall, J. %K Acceleration %K coprocessors %K dataflow foundation %K GNU radio %K Graphics processing unit %K graphics processor acceleration %K Kernel %K Libraries %K multicore platforms %K Multicore processing %K PARALLEL PROCESSING %K Pipelines %K Protocols %K software defined radio prototyping environment %K software radio %K stand-alone GPU accelerated library %X With higher bandwidth requirements and more complex protocols, software defined radio (SDR) has ever growing computational demands. SDR applications have different levels of parallelism that can be exploited on multicore platforms, but design and programming difficulties have inhibited the adoption of specialized multicore platforms like graphics processors (GPUs). In this work we propose a new design flow that augments a popular existing SDR development environment (GNU Radio), with a dataflow foundation and a stand-alone GPU accelerated library. 
The approach gives an SDR developer the ability to prototype a GPU accelerated application and explore its design space quickly and effectively. We demonstrate this design flow on a standard SDR benchmark and show that deciding how to utilize a GPU can be non-trivial for even relatively simple applications. %B 2011 22nd IEEE International Symposium on Rapid System Prototyping (RSP) %P 67 - 73 %8 2011 %G eng %0 Journal Article %J Proceedings of 43rd Annual ACM Symposium on Theory of Computing %D 2011 %T Approximate polytope membership queries %A Arya,S. %A da Fonseca,G. D %A Mount, Dave %X We consider an approximate version of a fundamental geometric search problem, polytope membership queries. Given a convex polytope P in R^d, presented as the intersection of halfspaces, the objective is to preprocess P so that, given a query point q, it is possible to determine efficiently whether q lies inside P subject to an error bound ε. Previous solutions to this problem were based on straightforward applications of classic polytope approximation techniques by Dudley (1974) and Bentley et al. (1982). The former yields minimum storage, and the latter yields constant query time. A space-time tradeoff can be obtained by interpolating between the two. We present the first significant improvements to this tradeoff. For example, using the same storage as Dudley, we reduce the query time from O(1/ε^((d−1)/2)) to O(1/ε^((d−1)/4)). Our approach is based on a very simple algorithm. Both lower bounds and upper bounds on the performance of the algorithm are presented. To establish the relevance of our results, we introduce a reduction from approximate nearest neighbor searching to approximate polytope membership queries. We show that our tradeoff provides significant improvements to the best known space-time tradeoffs for approximate nearest neighbor searching. Furthermore, this is achieved with constructions that are much simpler than existing methods. %B Proceedings of 43rd Annual ACM Symposium on Theory of Computing %P 579 - 586 %8 2011/// %G eng %0 Conference Paper %B 2011 Proceedings IEEE INFOCOM %D 2011 %T Approximation algorithms for throughput maximization in wireless networks with delay constraints %A Guanhong Pei %A Anil Kumar,V. S %A Parthasarathy,S. %A Srinivasan, Aravind %K Approximation algorithms %K Approximation methods %K approximation theory %K Delay %K delay constraints %K delays %K general interference model %K Interference %K multihop wireless networks %K optimisation %K Optimized production technology %K radio networks %K radiofrequency interference %K target delay bound %K Throughput %K throughput maximization %K Wireless networks %X We study the problem of throughput maximization in multi-hop wireless networks with end-to-end delay constraints for each session. This problem has received much attention starting with the work of Grossglauser and Tse (2002), and it has been shown that there is a significant tradeoff between the end-to-end delays and the total achievable rate. We develop algorithms to compute such tradeoffs with provable performance guarantees for arbitrary instances, with general interference models.
Given a target delay-bound Δ(c) for each session c, our algorithm gives a stable flow vector with a total throughput within a factor of O(log Δ_m / log log Δ_m) of the maximum, so that the per-session (end-to-end) delay is O((log Δ_m / log log Δ_m · Δ(c))^2), where Δ_m = max_c {Δ(c)}; note that these bounds depend only on the delays, and not on the network size, and this is the first such result, to our knowledge. %B 2011 Proceedings IEEE INFOCOM %I IEEE %P 1116 - 1124 %8 2011/04/10/15 %@ 978-1-4244-9919-9 %G eng %R 10.1109/INFCOM.2011.5934887 %0 Journal Article %J Epidemiological and Molecular Aspects on Cholera %D 2011 %T Aquatic Realm and Cholera %A Huq,A. %A Grim,C. J. %A Rita R Colwell %X Cholera is an ancient disease that can be severe and life threatening. It occurs predominantly in areas of the world where populations lack safe drinking water. Epidemics of cholera are linked with malnutrition, poor sanitation, and conditions resulting from natural disasters such as severe flooding. According to a report published by WHO in 2000 [1], cholera remains a major public health problem and is becoming increasingly important since the number of countries in which cholera is endemic continues to increase. Unfortunately, outbreaks of the disease continue into the twenty-first century with ominous portent in the wake of global climate change [1]. Yet cholera is a preventable disease if people have access to safe drinking water and are properly educated how to protect themselves from the risk of infection with vibrios. Cholera also is an easily treatable disease. Oral rehydration therapy, a solution containing glucose and appropriate salts, has proven to be effective for treatment of most cholera victims [2]. Nevertheless, each year, tens of thousands of people are victims of the disease, bringing this “curse of humankind” to modern civilization. Present understanding of cholera is based on studies conducted over the past three decades and significant new information has been gained concerning environmental factors associated with this disease, especially how to detect the bacterium and where it lives in the natural environment, outside the human gut, and what triggers the annual outbreaks that occur with remarkable regularity. Environmental research on Vibrio cholerae and cholera has provided insights for prediction and prevention of the disease it causes, while the race for effective vaccines against cholera continues. %B Epidemiological and Molecular Aspects on Cholera %P 311 - 339 %8 2011/// %G eng %R 10.1007/978-1-60327-265-0_18 %0 Journal Article %J SIGCOMM Comput. Commun. Rev. %D 2011 %T Architecting for innovation %A Koponen,Teemu %A Shenker,Scott %A Balakrishnan,Hari %A Feamster, Nick %A Ganichev,Igor %A Ghodsi,Ali %A Godfrey,P. Brighten %A McKeown,Nick %A Parulkar,Guru %A Raghavan,Barath %A Rexford,Jennifer %A Arianfar,Somaya %A Kuptsov,Dmitriy %K diversity %K Evolution %K innovation %K internet architecture %X We argue that the biggest problem with the current Internet architecture is not a particular functional deficiency, but its inability to accommodate innovation. To address this problem we propose a minimal architectural "framework" in which comprehensive architectures can reside. The proposed Framework for Internet Innovation (FII) --- which is derived from the simple observation that network interfaces should be extensible and abstract --- allows for a diversity of architectures to coexist, communicate, and evolve.
We demonstrate FII's ability to accommodate diversity and evolution with a detailed examination of how information flows through the architecture and with a skeleton implementation of the relevant interfaces. %B SIGCOMM Comput. Commun. Rev. %V 41 %P 24 - 36 %8 2011/// %@ 0146-4833 %G eng %U http://doi.acm.org/10.1145/2002250.2002256 %N 3 %R 10.1145/2002250.2002256 %0 Journal Article %J Journal of the ACM - Association for Computing Machinery %D 2011 %T New Constructive Aspects of the Lovász Local Lemma %A Haeupler,B. %A Saha,B. %A Srinivasan, Aravind %B Journal of the ACM - Association for Computing Machinery %V 58 %8 2011/// %G eng %N 6 %0 Journal Article %J BMC Bioinformatics %D 2011 %T Assessing the benefits of using mate-pairs to resolve repeats in de novo short-read prokaryotic assemblies %A Wetzel,Joshua %A Kingsford, Carl %A Pop, Mihai %X Next-generation sequencing technologies allow genomes to be sequenced more quickly and less expensively than ever before. However, as sequencing technology has improved, the difficulty of de novo genome assembly has increased, due in large part to the shorter reads generated by the new technologies. The use of mated sequences (referred to as mate-pairs) is a standard means of disambiguating assemblies to obtain a more complete picture of the genome without resorting to manual finishing. Here, we examine the effectiveness of mate-pair information in resolving repeated sequences in the DNA (a paramount issue to overcome). While it has been empirically accepted that mate-pairs improve assemblies, and a variety of assemblers use mate-pairs in the context of repeat resolution, the effectiveness of mate-pairs in this context has not been systematically evaluated in previous literature. %B BMC Bioinformatics %V 12 %P 95 - 95 %8 2011/04/13/ %@ 1471-2105 %G eng %U http://www.biomedcentral.com/1471-2105/12/95 %N 1 %R 10.1186/1471-2105-12-95 %0 Journal Article %J Journal of Systems and Software %D 2011 %T An assessment of systems and software engineering scholars and institutions (2003–2007 and 2004–2008) %A Wong,W. Eric %A Tse,T.H. %A Glass,Robert L. %A Basili, Victor R. %A Chen,T.Y. %K Research publications %K Systems and software engineering %K Top institutions %K Top scholars %X An ongoing, annual survey of publications in systems and software engineering identifies the top 15 scholars and institutions in the field over a 5-year period. Each ranking is based on the weighted scores of the number of papers published in TSE, TOSEM, JSS, SPE, EMSE, IST, and Software of the corresponding period. This report summarizes the results for 2003–2007 and 2004–2008. The top-ranked institution is Korea Advanced Institute of Science and Technology, Korea for 2003–2007, and Simula Research Laboratory, Norway for 2004–2008, while Magne Jørgensen is the top-ranked scholar for both periods.
%B Journal of Systems and Software %V 84 %P 162 - 168 %8 2011/01// %@ 0164-1212 %G eng %U http://www.sciencedirect.com/science/article/pii/S0164121210002682 %N 1 %R 10.1016/j.jss.2010.09.036 %0 Conference Paper %B Proceedings of the 24th annual ACM symposium on User interface software and technology %D 2011 %T Associating the visual representation of user interfaces with their internal structures and metadata %A Chang,Tsung-Hsiang %A Tom Yeh %A Miller,Rob %K accessibility api %K Graphical user interfaces %K Pixel %K text detection %K text segmentation %X Pixel-based methods are emerging as a new and promising way to develop new interaction techniques on top of existing user interfaces. However, in order to maintain platform independence, other available low-level information about GUI widgets, such as accessibility metadata, was neglected intentionally. In this paper, we present a hybrid framework, PAX, which associates the visual representation of user interfaces (i.e. the pixels) and their internal hierarchical metadata (i.e. the content, role, and value). We identify challenges to building such a framework. We also develop and evaluate two new algorithms for detecting text at arbitrary places on the screen, and for segmenting a text image into individual word blobs. Finally, we validate our framework in implementations of three applications. We enhance an existing pixel-based system, Sikuli Script, and preserve the readability of its script code at the same time. Further, we create two novel applications, Screen Search and Screen Copy, to demonstrate how PAX can be applied to development of desktop-level interactive systems. %B Proceedings of the 24th annual ACM symposium on User interface software and technology %S UIST '11 %I ACM %C New York, NY, USA %P 245 - 256 %8 2011/// %@ 978-1-4503-0716-1 %G eng %U http://doi.acm.org/10.1145/2047196.2047228 %R 10.1145/2047196.2047228 %0 Book Section %B New Horizons in Evolutionary Robotics %D 2011 %T Automated Planning Logic Synthesis for Autonomous Unmanned Vehicles in Competitive Environments with Deceptive Adversaries %A Svec,Petr %A Gupta, Satyandra K. %E Doncieux,Stéphane %E Bredèche,Nicolas %E Mouret,Jean-Baptiste %K engineering %X We developed a new approach for automated synthesis of a planning logic for autonomous unmanned vehicles. This new approach can be viewed as an automated iterative process during which an initial version of a logic is synthesized and then gradually improved by detecting and fixing its shortcomings. This is achieved by combining data mining for extraction of the vehicle’s states of failure with a Genetic Programming (GP) technique for synthesis of corresponding navigation code. We verified the feasibility of the approach using an unmanned surface vehicle (USV) simulation. Our focus was specifically on the generation of a planning logic used for blocking the advancement of an intruder boat towards a valuable target. Developing autonomy logic for this behavior is challenging, as the intruder’s attacking logic is human-competitive and deceptive, so the USV is required to learn specific maneuvers for specific situations to block successfully. We compared the performance of the generated blocking logic to the performance of logic that was manually implemented. Our results show that the new approach was able to synthesize a blocking logic with performance closely approaching the performance of the logic coded by hand.
%B New Horizons in Evolutionary Robotics %S Studies in Computational Intelligence %I Springer Berlin / Heidelberg %V 341 %P 171 - 193 %8 2011/// %@ 978-3-642-18271-6 %G eng %U http://www.springerlink.com/content/f454477212518671/abstract/ %0 Conference Paper %B Advanced Video and Signal-Based Surveillance (AVSS), 2011 8th IEEE International Conference on %D 2011 %T AVSS 2011 demo session: A large-scale benchmark dataset for event recognition in surveillance video %A Oh,Sangmin %A Hoogs,Anthony %A Perera,Amitha %A Cuntoor,Naresh %A Chen,Chia-Chih %A Lee,Jong Taek %A Mukherjee,Saurajit %A Aggarwal, JK %A Lee,Hyungtae %A Davis, Larry S. %A Swears,Eran %A Wang,Xiaoyang %A Ji,Qiang %A Reddy,Kishore %A Shah,Mubarak %A Vondrick,Carl %A Pirsiavash,Hamed %A Ramanan,Deva %A Yuen,Jenny %A Torralba,Antonio %A Song,Bi %A Fong,Anesco %A Roy-Chowdhury,Amit %A Desai,Mita %X We introduce to the surveillance community the VIRAT Video Dataset[1], which is a new large-scale surveillance video dataset designed to assess the performance of event recognition algorithms in realistic scenes1. %B Advanced Video and Signal-Based Surveillance (AVSS), 2011 8th IEEE International Conference on %P 527 - 528 %8 2011/09/30/2 %G eng %R 10.1109/AVSS.2011.6027400 %0 Journal Article %J arXiv:1007.4446 [cs] %D 2010 %T Abstracting Abstract Machines %A David Van Horn %A Might, Matthew %K Computer Science - Programming Languages %K F.3.2 %K F.4.1 %X We describe a derivational approach to abstract interpretation that yields novel and transparently sound static analyses when applied to well-established abstract machines. To demonstrate the technique and support our claim, we transform the CEK machine of Felleisen and Friedman, a lazy variant of Krivine's machine, and the stack-inspecting CM machine of Clements and Felleisen into abstract interpretations of themselves. The resulting analyses bound temporal ordering of program events; predict return-flow and stack-inspection behavior; and approximate the flow and evaluation of by-need parameters. For all of these machines, we find that a series of well-known concrete machine refactorings, plus a technique we call store-allocated continuations, leads to machines that abstract into static analyses simply by bounding their stores. We demonstrate that the technique scales up uniformly to allow static analysis of realistic language features, including tail calls, conditionals, side effects, exceptions, first-class continuations, and even garbage collection. %B arXiv:1007.4446 [cs] %8 2010/07/26/ %G eng %U http://arxiv.org/abs/1007.4446 %0 Journal Article %J ACM Trans. Algorithms %D 2010 %T Achieving anonymity via clustering %A Aggarwal,Gagan %A Panigrahy,Rina %A Feder,Tomás %A Thomas,Dilys %A Kenthapadi,Krishnaram %A Khuller, Samir %A Zhu,An %K anonymity %K Approximation algorithms %K clustering %K privacy %X Publishing data for analysis from a table containing personal records, while maintaining individual privacy, is a problem of increasing importance today. The traditional approach of deidentifying records is to remove identifying fields such as social security number, name, etc. However, recent research has shown that a large fraction of the U.S. population can be identified using nonkey attributes (called quasi-identifiers) such as date of birth, gender, and zip code. 
The k-anonymity model protects privacy via requiring that nonkey attributes that leak information are suppressed or generalized so that, for every record in the modified table, there are at least k−1 other records having exactly the same values for quasi-identifiers. We propose a new method for anonymizing data records, where quasi-identifiers of data records are first clustered and then cluster centers are published. To ensure privacy of the data records, we impose the constraint that each cluster must contain no fewer than a prespecified number of data records. This technique is more general since we have a much larger choice for cluster centers than k-anonymity. In many cases, it lets us release a lot more information without compromising privacy. We also provide constant factor approximation algorithms to come up with such a clustering. This is the first set of algorithms for the anonymization problem where the performance is independent of the anonymity parameter k. We further observe that a few outlier points can significantly increase the cost of anonymization. Hence, we extend our algorithms to allow an ε fraction of points to remain unclustered, that is, deleted from the anonymized publication. Thus, by not releasing a small fraction of the database records, we can ensure that the data published for analysis has less distortion and hence is more useful. Our approximation algorithms for new clustering objectives are of independent interest and could be applicable in other clustering scenarios as well. %B ACM Trans. Algorithms %V 6 %P 49:1 - 49:19 %8 2010/07// %@ 1549-6325 %G eng %U http://doi.acm.org/10.1145/1798596.1798602 %N 3 %R 10.1145/1798596.1798602 %0 Journal Article %J AAAI’10 %D 2010 %T Active inference for collective classification %A Bilgic,M. %A Getoor, Lise %X Labeling nodes in a network is an important problem that has seen a growing interest. A number of methods that exploit both local and relational information have been developed for this task. Acquiring the labels for a few nodes at inference time can greatly improve the accuracy, however the question of figuring out which node labels to acquire is challenging. Previous approaches have been based on simple structural properties. Here, we present a novel technique, which we refer to as reflect and correct, that can learn and predict when the underlying classification system is likely to make mistakes and it suggests acquisitions to correct those mistakes. %B AAAI’10 %8 2010/// %G eng %0 Journal Article %J ICML’10 %D 2010 %T Active learning for networked data %A Bilgic,M. %A Mihalkova,L. %A Getoor, Lise %X We introduce a novel active learning algorithm for classification of network data. In this setting, training instances are connected by a set of links to form a network, the labels of linked nodes are correlated, and the goal is to exploit these dependencies and accurately label the nodes. This problem arises in many domains, including social and biological network analysis and document classification, and there has been much recent interest in methods that collectively classify the nodes in the network. While in many cases labeled examples are expensive, often network information is available. We show how an active learning algorithm can take advantage of network structure. Our algorithm effectively exploits the links between instances and the interaction between the local and collective aspects of a classifier to improve the accuracy of learning from fewer labeled examples.
We experiment with two real-world benchmark collective classification domains, and show that we are able to achieve extremely accurate results even when only a small fraction of the data is labeled. %B ICML’10 %8 2010/// %G eng %0 Journal Article %J Technical Reports of the Computer Science Department %D 2010 %T Adapting Scrum to Managing a Research Group %A Hicks, Michael W. %A Foster, Jeffrey S. %X Score is an adaptation of the Scrum agile software development methodology to the task of managing Ph.D. students in an academic research group. This paper describes Score, conceived in October 2006, and our experience using it. We have found that Score enables us---faculty and students---to be more efficient and thereby more productive, and enhances the cohesion of our research group. %B Technical Reports of the Computer Science Department %8 2010/09/18/undef %G eng %0 Journal Article %J Computational Optimization and Applications %D 2010 %T Adaptive Constraint Reduction for Convex Quadratic Programming %A Jung,Jin Hyuk %A O'Leary, Dianne P. %A Tits,Andre' L. %X We propose an adaptive, constraint-reduced, primal-dual interior-point algorithm for convex quadratic programming with many more inequality constraints than variables. We reduce the computational effort by assembling, instead of the exact normal-equation matrix, an approximate matrix from a well chosen index set which includes indices of constraints that seem to be most critical. Starting with a large portion of the constraints, our proposed scheme excludes more unnecessary constraints at later iterations. We provide proofs for the global convergence and the quadratic local convergence rate of an affine-scaling variant. Numerical experiments on random problems, on a data-fitting problem, and on a problem in array pattern synthesis show the effectiveness of the constraint reduction in decreasing the time per iteration without significantly affecting the number of iterations. We note that a similar constraint-reduction approach can be applied to algorithms of Mehrotra’s predictor-corrector type, although no convergence theory is supplied. %B Computational Optimization and Applications %8 2010/03// %G eng %R 10.1007/s10589-010-9324-8 %0 Journal Article %J Signal Processing, IEEE Transactions on %D 2010 %T Adaptive Threshold Estimation via Extreme Value Theory %A Broadwater, J.B. %A Chellapa, Rama %K detection; %K detection;Pareto %K distribution;adaptive %K distribution;signal %K estimation;extreme %K estimation;signal %K Kolmogorov-Smirnov %K Pareto %K statistical %K test;adaptive %K theory;generalized %K threshold %K value %X Determining a detection threshold to automatically maintain a low false alarm rate is a challenging problem. In a number of different applications, the underlying parametric assumptions of most automatic target detection algorithms are invalid. Therefore, thresholds derived using these incorrect distribution assumptions do not produce desirable results when applied to real sensor data. Monte Carlo methods for threshold determination work well but tend to perform poorly when targets are present. In order to mitigate these effects, we propose an algorithm using extreme value theory through the use of the generalized Pareto distribution (GPD) and a Kolmogorov-Smirnov statistical test. Unlike previous work based on GPD estimates, this algorithm incorporates a way to adaptively maintain low false alarm rates in the presence of targets.
Both synthetic and real-world detection results demonstrate the usefulness of this algorithm. %B Signal Processing, IEEE Transactions on %V 58 %P 490 - 500 %8 2010/02// %@ 1053-587X %G eng %N 2 %R 10.1109/TSP.2009.2031285 %0 Journal Article %J Augmented vision & reality %D 2010 %T Advanced tracking systems: computational approaches to be introduced to new series %A HAMMOUD,R. %A Porikli, F. %A Davis, Larry S. %X Modern visual tracking systems implement a computational process that is often divided into several modules such as localization, tracking, recognition, behavior analysis and classification of events. This book will focus on recent advances in computational approaches for detection and tracking of human body, road boundaries and lane markers as well as on recognition of human activities, drowsiness and distraction state. This book is composed of seven distinct parts. Part I covers people localization algorithms in video sequences. Part II describes successful approaches for tracking people and bodyparts. The third part focuses on tracking of pedestrian and vehicles in outdoor images. Part IV describes recent methods to track lane markers and road boundaries. In part V, methods to track head, hand and facial features are reviewed. The last two parts cover the topics of automatic recognition and classification of activity, gesture, behavior, drowsiness and visual distraction state of humans. %B Augmented vision & reality %V 1 %8 2010/// %G eng %0 Book Section %B Advances in ComputersAdvances in Computers %D 2010 %T Advances in Automated Model-Based System Testing of Software Applications with a GUI Front-End %A Memon, Atif M. %A Nguyen,Bao N. %E Zelkowitz, Marvin V %X Despite the ubiquity of software applications that employ a graphical-user interface (GUI) front-end, functional system testing of these applications has remained, until recently, an understudied research area. During “GUI testing,” test cases, modeled as sequences of user input events, are created and executed on the software by exercising the GUI's widgets. Because each possible sequence of user events may potentially be a test case and today's GUIs offer enormous flexibility to end-users, in principle, GUI testing requires a prohibitively large number of test cases. Any practical test-case generation technique must sample the vast GUI input space. Existing techniques are largely manual, and hence extremely resource intensive. Several new automated model-based techniques have been developed in the past decade. All these techniques develop, either manually or automatically, a model of the GUI and employ it to generate test cases. This chapter presents the first detailed taxonomy of these techniques. A small GUI application is used as a running example to demonstrate each technique and illustrate its relative strengths and weaknesses. %B Advances in ComputersAdvances in Computers %I Elsevier %V Volume 80 %P 121 - 162 %8 2010/// %@ 0065-2458 %G eng %U http://www.sciencedirect.com/science/article/pii/S0065245810800038 %0 Book Section %B Advances in ComputersAdvances in Computers %D 2010 %T Advances in Video-Based Human Activity Analysis: Challenges and Approaches %A Turaga,Pavan %A Chellapa, Rama %A Veeraraghavan,Ashok %E Zelkowitz, Marvin V %X Videos play an ever increasing role in our everyday lives with applications ranging from news, entertainment, scientific research, security, and surveillance. 
Coupled with the fact that cameras and storage media are becoming less expensive, it has resulted in people producing more video content than ever before. Analysis of human activities in video is important for several important applications. Interpretation and identification of human activities requires approaches that address the following questions (a) what are the appropriate atomic primitives for human activities, (b) how to combine primitives to produce complex activities, (c) what are the required invariances for inference algorithms, and (d) how to build computational models for each of these. In this chapter, we provide a broad overview and discussion of these issues. We shall review state-of-the-art computer vision algorithms that address these issues and then provide a unified perspective from which specific algorithms can be derived. We will then present supporting experimental results. %B Advances in ComputersAdvances in Computers %I Elsevier %V Volume 80 %P 237 - 290 %8 2010/// %@ 0065-2458 %G eng %U http://www.sciencedirect.com/science/article/pii/S0065245810800075 %0 Journal Article %J AI Magazine %D 2010 %T AI Theory and Practice: A Discussion on Hard Challenges and Opportunities Ahead %A Horvitz,Eric %A Getoor, Lise %A Guestrin,Carlos %A Hendler,James %A Konstan,Joseph %A Subramanian,Devika %A Wellman,Michael %A Kautz,Henry %X AI Theory and Practice: A Discussion on Hard Challenges and Opportunities Ahead %B AI Magazine %V 31 %P 103 - 114 %8 2010/07/28/ %@ 0738-4602 %G eng %U http://www.aaai.org/ojs/index.php/aimagazine/article/viewArticle/2293 %N 3 %R 10.1609/aimag.v31i3.2293 %0 Book Section %B Computer Vision – ECCV 2010Computer Vision – ECCV 2010 %D 2010 %T Aligning Spatio-Temporal Signals on a Special Manifold %A Ruonan Li %A Chellapa, Rama %E Daniilidis,Kostas %E Maragos,Petros %E Paragios,Nikos %X We investigate the spatio-temporal alignment of videos or features/signals extracted from them. Specifically, we formally define an alignment manifold and formulate the alignment problem as an optimization procedure on this non-linear space by exploiting its intrinsic geometry. We focus our attention on semantically meaningful videos or signals, e.g., those describing or capturing human motion or activities, and propose a new formalism for temporal alignment accounting for executing rate variations among realizations of the same video event. By construction, we address this static and deterministic alignment task in a dynamic and stochastic manner: we regard the search for optimal alignment parameters as a recursive state estimation problem for a particular dynamic system evolving on the alignment manifold. Consequently, a Sequential Importance Sampling iteration on the alignment manifold is designed for effective and efficient alignment. We demonstrate the performance on several types of input data that arise in vision problems. %B Computer Vision – ECCV 2010Computer Vision – ECCV 2010 %S Lecture Notes in Computer Science %I Springer Berlin / Heidelberg %V 6315 %P 547 - 560 %8 2010/// %@ 978-3-642-15554-3 %G eng %U http://dx.doi.org/10.1007/978-3-642-15555-0_40 %0 Journal Article %J BMC Bioinformatics %D 2010 %T Alignment and clustering of phylogenetic markers - implications for microbial diversity studies %A White,James R %A Navlakha,Saket %A Nagarajan,Niranjan %A Ghodsi,Mohammad-Reza %A Kingsford, Carl %A Pop, Mihai %X Molecular studies of microbial diversity have provided many insights into the bacterial communities inhabiting the human body and the environment. 
A common first step in such studies is a survey of conserved marker genes (primarily 16S rRNA) to characterize the taxonomic composition and diversity of these communities. To date, however, there exists significant variability in analysis methods employed in these studies. %B BMC Bioinformatics %V 11 %P 152 - 152 %8 2010/03/24/ %@ 1471-2105 %G eng %U http://www.biomedcentral.com/1471-2105/11/152 %N 1 %R 10.1186/1471-2105-11-152 %0 Book %D 2010 %T Analyzing Social Media Networks with NodeXL %A Hansen,Derek %A Shneiderman, Ben %A Smith,Marc A. %K Business & Economics / Marketing / Research %K Computers / Computer Science %K Computers / Data Processing %K Computers / Database Management / Data Mining %K Computers / General %K Computers / Information Theory %K Computers / Interactive & Multimedia %K Computers / Social Aspects / General %K Computers / Social Aspects / Human-Computer Interaction %K Computers / User Interfaces %K data mining %K Data mining - Computer programs %K Data mining/ Computer programs %K Information Visualization %K Information visualization - Computer programs %K Information visualization/ Computer programs %K NodeXL %K online social networks %K Social Science / General %K Social Science / Reference %K Social Science / Sociology / General %K Technology & Engineering / General %K Technology & Engineering / Social Aspects %K Webometrics %K Webometrics - Computer programs %K Webometrics/ Computer programs %X "Analyzing Social Media Networks with NodeXL provides a much needed resource for the social media research community, as it describes network theory, provides compelling examples using data sources like Twitter and Flickr, and highlights how to use a free sophisticated tool for analysis. This is the perfect book for anyone trying to analyze the behavior of online social networks and beyond." ---Adam Perer, Research Scientist, IBM Research "This book provides a basic introduction to social network analysis, followed by practical instruction and examples on gathering data from online sources, importing into Excel, and then analyzing the data through Excel. The book will be important for promoting research in the area for those in information science, sociology, cultural studies, virtual community, and e-commerce."---Caroline Haythornthwaite, PhD, Professor, Graduate School of Library and Information Science, University of Illinois at Urbana-Champaign Businesses, entrepreneurs, individuals, and government agencies alike are looking to social network analysis (SNA) tools for insight into trends, connections, and fluctuations in social media. Microsoft's NodeXL is a free, open-source SNA plug-in for use with Excel. It provides instant graphical representation of relationships of complex networked data. But it goes further than other SNA tools - NodeXL was developed by a multidisciplinary team of experts that bring together information studies, computer science, sociology, human-computer interaction, and over 20 years of visual analytic theory and information visualization into a simple tool anyone can use. This makes NodeXL of interest not only to end-users but also to researchers and students studying visual and network analytics and their application in the real world. %I Morgan Kaufmann %8 2010/08/27/ %@ 9780123822291 %G eng %0 Book %D 2010 %T Analyzing Social Media Networks with NodeXL: Insights from a Connected World %A Hansen,Derek %A Shneiderman, Ben %A Smith,Marc A. 
%K Computers / Computer Science %K Computers / General %K Computers / Social Aspects / General %K Computers / Social Aspects / Human-Computer Interaction %K Computers / User Interfaces %X Businesses, entrepreneurs, individuals, and government agencies alike are looking to social network analysis (SNA) tools for insight into trends, connections, and fluctuations in social media. Microsoft’s NodeXL is a free, open-source SNA plug-in for use with Excel. It provides instant graphical representation of relationships of complex networked data. But it goes further than other SNA tools -- NodeXL was developed by a multidisciplinary team of experts that bring together information studies, computer science, sociology, human-computer interaction, and over 20 years of visual analytic theory and information visualization into a simple tool anyone can use. This makes NodeXL of interest not only to end-users but also to researchers and students studying visual and network analytics and their application in the real world. In Analyzing Social Media Networks with NodeXL, members of the NodeXL development team up to provide readers with a thorough and practical guide for using the tool while also explaining the development behind each feature. Blending the theoretical with the practical, this book applies specific SNA instructions directly to NodeXL, but the theory behind the implementation can be applied to any SNA. To learn more about Analyzing Social Media Networks and NodeXL, visit the companion site at www.mkp.com/nodexl*Walks you through NodeXL, while explaining the theory and development behind each step, providing takeaways that can apply to any SNA *Demonstrates how visual analytics research can be applied to SNA tools for the mass market *Includes case studies from researchers who use NodeXL on popular networks like email, Facebook, Twitter, and wikis %I Morgan Kaufmann %P 302 %8 2010 %@ 9780123822307 %G eng %0 Conference Paper %B Proceedings of the 1st ACM International Health Informatics Symposium %D 2010 %T An animated multivariate visualization for physiological and clinical data in the ICU %A Ordóñez,Patricia %A desJardins, Marie %A Lombardi,Michael %A Lehmann,Christoph U. %A Fackler,Jim %K computational physiology %K Information Visualization %K multivariate %K time series %X Current visualizations of electronic medical data in the Intensive Care Unit (ICU) consist of stacked univariate plots of variables over time and a tabular display of the current numeric values for the corresponding variables and occasionally an alarm limit. The value of information is dependent upon knowledge of historic values to determine a change in state. With the ability to acquire more historic information, providers need more sophisticated visualization tools to assist them in analyzing the data in a multivariate fashion over time. We present a multivariate time series visualization that is interactive and animated, and has proven to be as effective as current methods in the ICU for predicting an episode of acute hypotension in terms of accuracy, confidence, and efficiency with only 30-60 minutes of training. %B Proceedings of the 1st ACM International Health Informatics Symposium %S IHI '10 %I ACM %C New York, NY, USA %P 771 - 779 %8 2010/// %@ 978-1-4503-0030-8 %G eng %U http://doi.acm.org/10.1145/1882992.1883109 %R 10.1145/1882992.1883109 %0 Journal Article %J ACM Trans. Comput. Logic %D 2010 %T Annotated RDF %A Udrea,Octavian %A Recupero,Diego Reforgiato %A V.S. 
Subrahmanian %K annotated RDF %K Query processing %K view maintenance %X Real-world use of RDF requires the ability to transparently represent and explain metadata associated with RDF triples. For example, when RDF triples are extracted automatically by information extraction programs, there is a need to represent where the triples came from, what their temporal validity is, and how certain we are that the triple is correct. Today, there is no theoretically clean and practically scalable mechanism that spans these different needs - reification is the only solution propose to date, and its implementations have been ugly. In this paper, we present Annotated RDF (or aRDF for short) in which RDF triples are annotated by members of a partially ordered set (with bottom element) that can be selected in any way desired by the user. We present a formal declarative semantics (model theory) for annotated RDF and develop algorithms to check consistency of aRDF theories and to answer queries to aRDF theories. We show that annotated RDF supports users who need to think about the uncertainty, temporal aspects, and provenance of the RDF triples in an RDF database. We develop a prototype aRDF implementation and show that our algorithms work efficiently even on real world data sets containing over 10 million triples. %B ACM Trans. Comput. Logic %V 11 %P 10:1–10:41 - 10:1–10:41 %8 2010/01// %@ 1529-3785 %G eng %U http://doi.acm.org/10.1145/1656242.1656245 %N 2 %R 10.1145/1656242.1656245 %0 Journal Article %J Systems, Man, and Cybernetics, Part B: Cybernetics, IEEE Transactions on %D 2010 %T Applications of a Simple Characterization of Human Gait in Surveillance %A Yang Ran %A Qinfen Zheng %A Chellapa, Rama %A Strat, T.M. %K algorithms %K Artificial intelligence %K Automated;Photography;Reproducibility of Results;Sensitivity and Specificity;Video Recording; %K Biometry %K Computer-Assisted;Pattern Recognition %K double helical signatures %K Gait %K gait analysis %K human gait %K human motion kinematics %K HUMANS %K Image Enhancement %K Image Interpretation %K Image motion analysis %K iterative local curve embedding algorithm %K Object detection %K simple spatiotemporal characterization %K video sequence %K Video surveillance %X Applications of a simple spatiotemporal characterization of human gait in the surveillance domain are presented. The approach is based on decomposing a video sequence into x-t slices, which generate periodic patterns referred to as double helical signatures (DHSs). The features of DHS are given as follows: 1) they naturally encode the appearance and kinematics of human motion and reveal geometric symmetries and 2) they are effective and efficient for recovering gait parameters and detecting simple events. We present an iterative local curve embedding algorithm to extract the DHS from video sequences. Two applications are then considered. First, the DHS is used for simultaneous segmentation and labeling of body parts in cluttered scenes. Experimental results showed that the algorithm is robust to size, viewing angles, camera motion, and severe occlusion. Then, the DHS is used to classify load-carrying conditions. By examining various symmetries in DHS, activities such as carrying, holding, and walking with objects that are attached to legs are detected. 
Our approach possesses several advantages: a compact representation that can be computed in real time is used; furthermore, it does not depend on silhouettes or landmark tracking, which are sensitive to errors in background subtraction stage. %B Systems, Man, and Cybernetics, Part B: Cybernetics, IEEE Transactions on %V 40 %P 1009 - 1020 %8 2010/08// %@ 1083-4419 %G eng %N 4 %R 10.1109/TSMCB.2010.2044173 %0 Journal Article %J Journal of Graph Algorithms and Applications %D 2010 %T Applications of Parameterized st-Orientations %A Charalampos Papamanthou %A Tollis, Ioannis G. %B Journal of Graph Algorithms and Applications %V 14 %P 337 - 365 %8 2010/// %@ 1526-1719 %G eng %U http://emis.um.ac.ir/journals/JGAA/getPaper-96.html?id=212 %N 2 %0 Journal Article %J Computational Geometry %D 2010 %T Approximate range searching: The absolute model %A da Fonseca,Guilherme D. %A Mount, Dave %K Absolute model %K approximation %K Halfbox quadtree %K Idempotence %K Range searching %X Range searching is a well known problem in the area of geometric data structures. We consider this problem in the context of approximation, where an approximation parameter ε > 0 is provided. Most prior work on this problem has focused on the case of relative errors, where each range shape R is bounded, and points within distance ε ⋅ diam ( R ) of the range's boundary may or may not be included. We consider a different approximation model, called the absolute model, in which points within distance ε of the range's boundary may or may not be included, regardless of the diameter of the range. We consider range spaces consisting of halfspaces, Euclidean balls, simplices, axis-aligned rectangles, and general convex bodies. We consider a variety of problem formulations, including range searching under general commutative semigroups, idempotent semigroups, groups, and range emptiness. We show how idempotence can be used to improve not only approximate, but also exact halfspace range searching. Our data structures are much simpler than both their exact and relative model counterparts, and so are amenable to efficient implementation. %B Computational Geometry %V 43 %P 434 - 444 %8 2010/05// %@ 0925-7721 %G eng %U http://www.sciencedirect.com/science/article/pii/S0925772109000844 %N 4 %R 10.1016/j.comgeo.2008.09.009 %0 Journal Article %J Computational Geometry %D 2010 %T Approximation algorithm for the kinetic robust K-center problem %A Friedler,Sorelle A. %A Mount, Dave %K Approximation algorithms %K clustering %K kinetic data structures %K robust statistics %X Two complications frequently arise in real-world applications, motion and the contamination of data by outliers. We consider a fundamental clustering problem, the k-center problem, within the context of these two issues. We are given a finite point set S of size n and an integer k. In the standard k-center problem, the objective is to compute a set of k center points to minimize the maximum distance from any point of S to its closest center, or equivalently, the smallest radius such that S can be covered by k disks of this radius. In the discrete k-center problem the disk centers are drawn from the points of S, and in the absolute k-center problem the disk centers are unrestricted.We generalize this problem in two ways. First, we assume that points are in continuous motion, and the objective is to maintain a solution over time. 
Second, we assume that some given robustness parameter 0 < t ⩽ 1 is given, and the objective is to compute the smallest radius such that there exist k disks of this radius that cover at least ⌈ t n ⌉ points of S. We present a kinetic data structure (in the KDS framework) that maintains a ( 3 + ε ) -approximation for the robust discrete k-center problem and a ( 4 + ε ) -approximation for the robust absolute k-center problem, both under the assumption that k is a constant. We also improve on a previous 8-approximation for the non-robust discrete kinetic k-center problem, for arbitrary k, and show that our data structure achieves a ( 4 + ε ) -approximation. All these results hold in any metric space of constant doubling dimension, which includes Euclidean space of constant dimension. %B Computational Geometry %V 43 %P 572 - 586 %8 2010/08// %@ 0925-7721 %G eng %U http://www.sciencedirect.com/science/article/pii/S0925772110000027 %N 6–7 %R 10.1016/j.comgeo.2010.01.001 %0 Journal Article %J SIAM Journal on Computing %D 2010 %T Approximation Algorithms for Non-Uniform Buy-at-Bulk Network Design %A Chekuri,C. %A Hajiaghayi, Mohammad T. %A Kortsarz,G. %A Salavatipour,M. R %B SIAM Journal on Computing %V 39 %P 1772 - 1798 %8 2010/// %G eng %N 5 %0 Conference Paper %B Proceedings of the 2010 ACM-IEEE International Symposium on Empirical Software Engineering and Measurement %D 2010 %T Are developers complying with the process: an XP study %A Zazworka, Nico %A Stapel,Kai %A Knauss,Eric %A Shull, Forrest %A Basili, Victor R. %A Schneider,Kurt %K process conformance %K process improvement %K XP programming %X Adapting new software processes and practices in organizational and academic environments requires training the developers and validating the applicability of the newly introduced activities. Investigating process conformance during training and understanding if programmers are able and willing to follow the specific steps are crucial to evaluating whether the process improves various software product quality factors. In this paper we present a process model independent approach to detect process non-conformance. Our approach is based on non-intrusively collected data captured by a version control system and provides the project manager with timely updates. Further, we provide evidence of the applicability of our approach by investigating process conformance in a five day training class on eXtreme Programming (XP) practices at the Leibniz Universität Hannover. Our results show that the approach enabled researchers to formulate minimal intrusive methods to check for conformance and that for the majority of the investigated XP practices violations could be detected. %B Proceedings of the 2010 ACM-IEEE International Symposium on Empirical Software Engineering and Measurement %S ESEM '10 %I ACM %C New York, NY, USA %P 14:1–14:10 - 14:1–14:10 %8 2010/// %@ 978-1-4503-0039-1 %G eng %U http://doi.acm.org/10.1145/1852786.1852805 %R 10.1145/1852786.1852805 %0 Book Section %B Computer Vision – ECCV 2010Computer Vision – ECCV 2010 %D 2010 %T Articulation-Invariant Representation of Non-planar Shapes %A Gopalan,Raghuraman %A Turaga,Pavan %A Chellapa, Rama %E Daniilidis,Kostas %E Maragos,Petros %E Paragios,Nikos %X Given a set of points corresponding to a 2D projection of a non-planar shape, we would like to obtain a representation invariant to articulations (under no self-occlusions). 
It is a challenging problem since we need to account for the changes in 2D shape due to 3D articulations, viewpoint variations, as well as the varying effects of imaging process on different regions of the shape due to its non-planarity. By modeling an articulating shape as a combination of approximate convex parts connected by non-convex junctions, we propose to preserve distances between a pair of points by (i) estimating the parts of the shape through approximate convex decomposition, by introducing a robust measure of convexity and (ii) performing part-wise affine normalization by assuming a weak perspective camera model, and then relating the points using the inner distance which is insensitive to planar articulations. We demonstrate the effectiveness of our representation on a dataset with non-planar articulations, and on standard shape retrieval datasets like MPEG-7. %B Computer Vision – ECCV 2010Computer Vision – ECCV 2010 %S Lecture Notes in Computer Science %I Springer Berlin / Heidelberg %V 6313 %P 286 - 299 %8 2010/// %@ 978-3-642-15557-4 %G eng %U http://dx.doi.org/10.1007/978-3-642-15558-1_21 %0 Journal Article %J BMC Bioinformatics %D 2010 %T Assembly complexity of prokaryotic genomes using short reads %A Kingsford, Carl %A Schatz,Michael C %A Pop, Mihai %X De Bruijn graphs are a theoretical framework underlying several modern genome assembly programs, especially those that deal with very short reads. We describe an application of de Bruijn graphs to analyze the global repeat structure of prokaryotic genomes. %B BMC Bioinformatics %V 11 %P 21 - 21 %8 2010/01/12/ %@ 1471-2105 %G eng %U http://www.biomedcentral.com/1471-2105/11/21 %N 1 %R 10.1186/1471-2105-11-21 %0 Conference Paper %B Proceedings of the 19th ACM international conference on Information and knowledge management %D 2010 %T Assessor error in stratified evaluation %A Webber,W. %A Oard, Douglas %A Scholer,F. %A Hedin,B. %B Proceedings of the 19th ACM international conference on Information and knowledge management %P 539 - 548 %8 2010/// %G eng %0 Journal Article %J The Journal of the Acoustical Society of America %D 2010 %T Audio visual scene analysis using spherical arrays and cameras. %A O'donovan,Adam %A Duraiswami, Ramani %A Zotkin,Dmitry N %A Gumerov, Nail A. %X While audition and vision are used together by living beings to make sense of the world, the observation of the world using machines in applications such as surveillance and robotics has proceeded largely separately. We describe the use of spherical microphone arrays as “audio cameras” and spherical array of video cameras as a tool to perform multi‐modal scene analysis that attempts to answer questions such as “Who?,”, “What?,” “Where?,” and “Why?.” Signal processing algorithms to identify the number of people and their identities and to isolate and dereverberate their speech using multi‐modal processing will be described. The use of graphics processor based signal processing allows for real‐time implementation of these algorithms. [Work supported by ONR.] %B The Journal of the Acoustical Society of America %V 127 %P 1979 - 1979 %8 2010/// %G eng %U http://link.aip.org/link/?JAS/127/1979/3 %N 3 %R 10.1121/1.3385079 %0 Journal Article %J Stabilization, Safety, and Security of Distributed Systems %D 2010 %T Authenticated broadcast with a partially compromised public-key infrastructure %A Gordon,S. %A Katz, Jonathan %A Kumaresan,R. %A Yerukhimovich,A. 
%X Given a public-key infrastructure (PKI) and digital signatures, it is possible to construct broadcast protocols tolerating any number of corrupted parties. Almost all existing protocols, however, do not distinguish between corrupted parties (who do not follow the protocol), and honest parties whose secret (signing) keys have been compromised (but who continue to behave honestly). We explore conditions under which it is possible to construct broadcast protocols that still provide the usual guarantees (i.e., validity/agreement) to the latter. Consider a network of n parties, where an adversary has compromised the secret keys of up to t_c honest parties and, in addition, fully controls the behavior of up to t_a other parties. We show that for any fixed t_c > 0, and any fixed t_a, there exists an efficient protocol for broadcast if and only if 2t_a + min(t_a, t_c) < n. (When t_c = 0, standard results imply feasibility.) We also show that if t_c, t_a are not fixed, but are only guaranteed to satisfy the bound above, then broadcast is impossible to achieve except for a few specific values of n; for these “exceptional” values of n, we demonstrate a broadcast protocol. Taken together, our results give a complete characterization of this problem. %B Stabilization, Safety, and Security of Distributed Systems %P 144 - 158 %8 2010/// %G eng %R 10.1007/978-3-642-16023-3_14 %0 Conference Paper %D 2010 %T Authentication in the clouds: a framework and its application to mobile users %A Chow, Richard %A Jakobsson, Markus %A Masuoka, Ryusuke %A Molina,Jesus %A Niu, Yuan %A Elaine Shi %A Song,Zhexuan %K Authentication %K Cloud computing %K Mobile computing %X Cloud computing is a natural fit for mobile security. Typical handsets have input constraints and practical computational and power limitations, which must be respected by mobile security technologies in order to be effective. We describe how cloud computing can address these issues. Our approach is based on a flexible framework for supporting authentication decisions we call TrustCube (to manage the authentication infrastructure) and on a behavioral authentication approach referred to as implicit authentication (to translate user behavior into authentication scores). The combination results in a new authentication paradigm for users of mobile technologies, one where an appropriate balance between usability and trust can be managed through flexible policies and dynamic tuning. %S CCSW '10 %I ACM %P 1 - 6 %8 2010 %@ 978-1-4503-0089-6 %G eng %U http://doi.acm.org/10.1145/1866835.1866837 %0 Conference Paper %B 2010 Conference on Design and Architectures for Signal and Image Processing (DASIP) %D 2010 %T Automated generation of an efficient MPEG-4 Reconfigurable Video Coding decoder implementation %A Gu, Ruirui %A Piat, J. %A Raulet, M. %A Janneck, J.W. %A Bhattacharyya, Shuvra S.
%K automated generation %K automatic design flow %K CAL language %K CAL networks %K CAL-to-C translation %K CAL2C translation %K coarse-grain dataflow representations %K Computational modeling %K data flow computing %K dataflow information %K Dataflow programming %K decoding %K Digital signal processing %K Libraries %K MPEG-4 reconfigurable video coding decoder implementation %K parallel languages %K SDF detection %K synchronous dataflow detection %K TDP %K TDP-based static scheduling %K The Dataflow interchange format Package %K Transform coding %K user-friendly design %K video coding %K video processing systems %K XML %K XML format %X This paper proposes an automatic design flow from user-friendly design to efficient implementation of video processing systems. This design flow starts with the use of coarse-grain dataflow representations based on the CAL language, which is a complete language for dataflow programming of embedded systems. Our approach integrates previously developed techniques for detecting synchronous dataflow (SDF) regions within larger CAL networks, and exploiting the static structure of such regions using analysis tools in The Dataflow interchange format Package (TDP). Using a new XML format that we have developed to exchange dataflow information between different dataflow tools, we explore systematic implementation of signal processing systems using CAL, SDF-like region detection, TDP-based static scheduling, and CAL-to-C (CAL2C) translation. Our approach, which is a novel integration of three complementary dataflow tools - the CAL parser, TDP, and CAL2C - is demonstrated on an MPEG Reconfigurable Video Coding (RVC) decoder. %B 2010 Conference on Design and Architectures for Signal and Image Processing (DASIP) %P 265 - 272 %8 2010 %G eng %0 Conference Paper %B Acoustics Speech and Signal Processing (ICASSP), 2010 IEEE International Conference on %D 2010 %T Automatic matched filter recovery via the audio camera %A O'Donovan,A.E. %A Duraiswami, Ramani %A Zotkin,Dmitry N %K acoustic %K array;real-time %K arrays;transient %K audio %K camera;automatic %K constraints;microphone %K filter %K filters;microphone %K images;room %K impulse %K Matched %K positions;acoustic %K processing;cameras;matched %K radiators;array %K recovery;beamforming;geometric %K response; %K response;sound %K sensor;acoustic %K signal %K source;source/receiver %K sources;audio %X The sound reaching the acoustic sensor in a realistic environment contains not only the part arriving directly from the sound source but also a number of environmental reflections. The effect of those on the sound is equivalent to a convolution with the room impulse response and can be undone via deconvolution - a technique known as matched filter processing. However, the filter is usually pre-computed in advance using known room geometry and source/receiver positions, and any deviations from those cause the performance to degrade significantly. In this work, an algorithm is proposed to compute the matched filter automatically using an audio camera - a microphone array based system that provides real-time audio images (essentially plots of steered response power in various directions) of environment. Acoustic sources, as well as their significant reflections, are revealed as peaks in the audio image. The reflections are associated with sound source(s) using an acoustic similarity metric, and an approximate matched filter is computed to align the reflections in time with the direct arrival. 
Preliminary experimental evaluation of the method is performed. It is shown that in case of two sources the reflections are identified correctly, the time delays recovered agree well with those computed from geometric constraints, and that the output SNR improves when the reflections are added coherently to the signal obtained by beamforming directly at the source. %B Acoustics Speech and Signal Processing (ICASSP), 2010 IEEE International Conference on %P 2826 - 2829 %8 2010/03// %G eng %R 10.1109/ICASSP.2010.5496187 %0 Conference Paper %B Image Processing (ICIP), 2010 17th IEEE International Conference on %D 2010 %T Automatic target recognition based on simultaneous sparse representation %A Patel, Vishal M. %A Nasrabadi,N.M. %A Chellapa, Rama %K (artificial %K algorithm;feature %K based %K classification;iterative %K classification;learning %K Comanche %K data %K dictionary;matching %K extraction;image %K forward-looking %K infrared %K intelligence);military %K learning %K MATCHING %K matrix;dictionary %K measure;military %K methods;learning %K orthogonal %K pursuit %K pursuit;confusion %K recognition;class %K recognition;target %K representation;feature %K representation;sparse %K set;automatic %K signal %K similarity %K simultaneous %K sparse %K supervised %K systems;object %K target %K target;simultaneous %K tracking; %X In this paper, an automatic target recognition algorithm is presented based on a framework for learning dictionaries for simultaneous sparse signal representation and feature extraction. The dictionary learning algorithm is based on class supervised simultaneous orthogonal matching pursuit while a matching pursuit-based similarity measure is used for classification. We show how the proposed framework can be helpful for efficient utilization of data, with the possibility of developing real-time, robust target classification. We verify the efficacy of the proposed algorithm using confusion matrices on the well known Comanche forward-looking infrared data set consisting of ten different military targets at different orientations. %B Image Processing (ICIP), 2010 17th IEEE International Conference on %P 1377 - 1380 %8 2010/09// %G eng %R 10.1109/ICIP.2010.5652306 %0 Report %D 2010 %T Autonomous Traffic Engineering using Self-Configuring Link Weights %A Sundaresan,S. %A Lumezanu,C. %A Feamster, Nick %A François,P. %X Network operators use traffic engineering to control the flow of traffic across their networks. Existing TE methods establish static topologies offline, either by setting link weights or by configuring paths a priori. These methods require manual configuration and may not be robust in the face of failures. Some methods also require knowledge about traffic demands and may not be able to handle traffic fluctuations. Even when changes in demand are expected, operators must manually tune network configurations to prepare for them. Because adjusting configurations is difficult to get right, we start from an extreme design point, asking instead whether it is possible to perform traffic engineering online without having to perform any a priori configuration. Our traffic engineering technique, SculpTE, adapts to changing traffic demands by automatically configuring link weights in a stable manner. SculpTE balances load across the network by continually adjusting link weights to expose lightly-loaded paths. We evaluate SculpTE using a simple analytical model and simulations on realistic ISP network topologies. 
Our results show that SculpTE achieves excellent load balancing, responsiveness, and stability compared to state-of-the-art TE schemes, without requiring network operators to perform any offline configuration. %I School of Computer Science, Georgia Tech %V GT-CS-10-16 %8 2010/// %G eng %U http://hdl.handle.net/1853/35011 %0 Journal Article %J SIGCOMM Comput. Commun. Rev. %D 2010 %T Autonomous traffic engineering with self-configuring topologies %A Sundaresan,Srikanth %A Lumezanu,Cristian %A Feamster, Nick %A Francois,Pierre %K multi-path routing %K online %K sculpte %K self-configuring %K traffic engineering %X Network operators use traffic engineering (TE) to control the flow of traffic across their networks. Existing TE methods require manual configuration of link weights or tunnels, which is difficult to get right, or prior knowledge of traffic demands and hence may not be robust to link failures or traffic fluctuations. We present a self-configuring TE scheme, SculpTE, which automatically adapts the network-layer topology to changing traffic demands. SculpTE is responsive, stable, and achieves excellent load balancing. %B SIGCOMM Comput. Commun. Rev. %V 41 %P – - – %8 2010/08// %@ 0146-4833 %G eng %U http://dl.acm.org/citation.cfm?id=2043164.1851239 %N 4 %0 Conference Paper %B Motion and Video Computing, 2009. WMVC '09. Workshop on %D 2009 %T Action recognition based on human movement characteristics %A Dondera,R. %A David Doermann %A Davis, Larry S. %K action %K ballistic %K characteristics;motion %K correlated %K cost;human %K data;probability %K databases; %K density %K descriptor;motion %K dynamics;computational %K function;robustness;shape %K information;short %K linear %K Movement %K movements;computer %K recognition;human %K recognition;stability;visual %K vector %K vision;pattern %X We present a motion descriptor for human action recognition where appearance and shape information are unreliable. Unlike other motion-based approaches, we leverage image characteristics specific to human movement to achieve better robustness and lower computational cost. Drawing on recent work on motion recognition with ballistic dynamics, an action is modeled as a series of short correlated linear movements and represented with a probability density function over motion vector data. We are targeting common human actions composed of ballistic movements, and our descriptor can handle both short actions (e.g. reaching with the hand) and long actions with events at relatively stable time offsets (e.g. walking). The proposed descriptor is used for both classification and detection of action instances, in a nearest-neighbor framework. We evaluate the descriptor on the KTH action database and obtain a recognition rate of 90% in a relevant test setting, comparable to the state-of-the-art approaches that use other cues in addition to motion. We also acquired a database of actions with slight occlusion and a human actor manipulating objects of various shapes and appearances. This database makes the use of appearance and shape information problematic, but we obtain a recognition rate of 95%. Our work demonstrates that human movement has distinctive patterns, and that these patterns can be used effectively for action recognition. %B Motion and Video Computing, 2009. WMVC '09. Workshop on %P 1 - 8 %8 2009/12// %G eng %R 10.1109/WMVC.2009.5399233 %0 Conference Paper %B IEEE/RSJ International Conference on Intelligent Robots and Systems, 2009. IROS 2009 %D 2009 %T Active segmentation for robotics %A Mishra,A. 
%A Aloimonos, J. %A Fermüller, Cornelia %K binocular cues %K image colour analysis %K Image segmentation %K Image texture %K Intelligent robots %K Layout %K Machine vision %K monocular cues %K Navigation %K optical flow %K Orbital robotics %K Robot sensing systems %K robot vision %K Robot vision systems %K robotics active segmentation %K Robotics and automation %K semantic robot %K Simultaneous localization and mapping %K stereo disparity %K stereo image processing %X The semantic robots of the immediate future are robots that will be able to find and recognize objects in any environment. They need the capability of segmenting objects in their visual field. In this paper, we propose a novel approach to segmentation based on the operation of fixation by an active observer. Our approach is different from current approaches: while existing works attempt to segment the whole scene at once into many areas, we segment only one image region, specifically the one containing the fixation point. Furthermore, our solution integrates monocular cues (color, texture) with binocular cues (stereo disparities and optical flow). Experiments with real imagery collected by our active robot and from the known databases demonstrate the promise of the approach. %B IEEE/RSJ International Conference on Intelligent Robots and Systems, 2009. IROS 2009 %I IEEE %P 3133 - 3139 %8 2009/10/10/15 %@ 978-1-4244-3803-7 %G eng %R 10.1109/IROS.2009.5354325 %0 Journal Article %J Handbook of Remote Biometrics %D 2009 %T Advanced Technologies for Touchless Fingerprint Recognition %A Tistarelli,M. %A Li, S.Z. %A Chellapa, Rama %B Handbook of Remote Biometrics %P 83 - 109 %8 2009/// %G eng %0 Book Section %B Computer Performance IssuesComputer Performance Issues %D 2009 %T Advances in Web Testing %A Eaton,Cyntrica %A Memon, Atif M. %E Zelkowitz, Marvin V %X AbstractDemand for high‐quality Web applications continues to escalate as reliance on Web‐based software increases and Web systems become increasingly complex. Given the importance of quality and its impact on the user experience, a significant research effort has been invested in developing tools and methodologies that facilitate effective quality assurance for Web applications. Testing, in particular, provides a critical inroad toward meeting the quality demand by enabling developers to discover failures and anomalies in applications before they are released. In this survey, we discuss advances in Web testing and begin by exploring the peculiarities of Web applications that makes evaluating their correctness a challenge and the direct translation of conventional software engineering principles impractical in some cases. We then provide an overview of research contributions in three critical aspects of Web testing: deriving adequate Web application models, defining appropriate Web testing strategies, and conducting Web portability analysis. In short, models are used to capture Web application components, their attributes, and interconnections; testing strategies use the models to generate test cases; and portability analysis enables Web developers to ensure that their applications remain correct as they are launched in highly diverse configurations. 
%B Computer Performance Issues %I Elsevier %V Volume 75 %P 281 - 306 %8 2009/// %@ 0065-2458 %G eng %U http://www.sciencedirect.com/science/article/pii/S006524580800805X %0 Journal Article %J Information Visualization %D 2009 %T Advancing User-Centered Evaluation of Visual Analytic Environments Through Contests %A Costello,Loura %A Grinstein,Georges %A Plaisant, Catherine %A Scholtz,Jean %K metrics %K synthetic data %K user-centered evaluation %K visual analytics %X In this paper, the authors describe the Visual Analytics Science and Technology (VAST) Symposium contests run in 2006 and 2007 and the VAST 2008 and 2009 challenges. These contests were designed to provide researchers with a better understanding of the tasks and data that face potential end users. Access to these end users is limited because of time constraints and the classified nature of the tasks and data. In that respect, the contests serve as an intermediary, with the metrics and feedback serving as measures of utility to the end users. The authors summarize the lessons learned and the future directions for VAST Challenges. %B Information Visualization %V 8 %P 230 - 238 %8 2009/09/21/ %@ 1473-8716, 1473-8724 %G eng %U http://ivi.sagepub.com/content/8/3/230 %N 3 %R 10.1057/ivs.2009.16 %0 Journal Article %J Journal of Visual Languages and Computing %D 2009 %T Age progression in human faces: A survey %A Ramanathan,N. %A Chellapa, Rama %A Biswas,S. %X Facial aging, a new dimension that has recently been added to the problem of face recognition, poses interesting theoretical and practical challenges to the research community. The problem, which originally generated interest in the psychophysics and human perception community, has recently found enhanced interest in the computer vision community. How do humans perceive age? What constitutes an age-invariant signature that can be derived from faces? How compactly can the facial growth event be described? How does facial aging impact recognition performance? In this paper, we give a thorough analysis on the problem of facial aging and further provide a complete account of the many interesting studies that have been performed on this topic from different fields. We offer a comparative analysis of various approaches that have been proposed for problems such as age estimation, appearance prediction, face verification etc. and offer insights into future research on this topic. %B Journal of Visual Languages and Computing %V 15 %P 3349 - 3361 %8 2009/// %G eng %0 Conference Paper %B Data Engineering, 2009. ICDE '09. IEEE 25th International Conference on %D 2009 %T Aggregate Query Answering under Uncertain Schema Mappings %A Gal,A. %A Martinez,M. V %A Simari,G. I %A V.S. Subrahmanian %K aggregate %K algorithm;probabilistic %K answering;by-table %K complexity;data %K complexity;distributed %K database;polynomial %K databases; %K databases;probability;query %K integration;distribution %K mapping;computational %K mapping;query %K processing;range %K processing;statistical %K query %K schema %K semantics;by-tuple %K semantics;computational %K semantics;expected %K semantics;multiple %K semantics;uncertain %K TIME %K value %X Recent interest in managing uncertainty in data integration has led to the introduction of probabilistic schema mappings and the use of probabilistic methods to answer queries across multiple databases using two semantics: by-table and by-tuple.
In this paper, we develop three possible semantics for aggregate queries: the range, distribution, and expected value semantics, and show that these three semantics combine with the by-table and by-tuple semantics in six ways. We present algorithms to process COUNT, AVG, SUM, MIN, and MAX queries under all six semantics and develop results on the complexity of processing such queries under all six semantics. We show that computing COUNT is in PTIME for all six semantics and computing SUM is in PTIME for all but the by-tuple/distribution semantics. Finally, we show that AVG, MIN, and MAX are PTIME computable for all by-table semantics and for the by-tuple/range semantics. We developed a prototype implementation and experimented with both real-world traces and simulated data. We show that, as expected, naive processing of aggregates does not scale beyond small databases with a small number of mappings. The results also show that the polynomial time algorithms are scalable up to several million tuples as well as with a large number of mappings. %B Data Engineering, 2009. ICDE '09. IEEE 25th International Conference on %P 940 - 951 %8 2009/04/29/2 %G eng %R 10.1109/ICDE.2009.55 %0 Conference Paper %B Computer Design, 2009. ICCD 2009. IEEE International Conference on %D 2009 %T Algorithmic approach to designing an easy-to-program system: Can it lead to a HW-enhanced programmer's workflow add-on? %A Vishkin, Uzi %K abstraction;PRAM-On-Chip;easy-to-program %K algorithms;parallel %K algorithms;sorting; %K designing;explicit %K multi-threading;fine-grained %K PRAM %K random-access-machine/model;parallel %K sorting %K system %K system;parallel %K systems;concurrency %K THEORY %K theory;multi-threading;parallel %K threads;many-core %X Our earlier parallel algorithmics work on the parallel random-access-machine/model (PRAM) computation model led us to a PRAM-On-Chip vision: a comprehensive many-core system that can look to the programmer like the abstract PRAM model. We introduced the eXplicit MultiThreaded (XMT) design and prototyped it in hardware and software. XMT comprises a programmer's workflow that advances from work-depth, a standard PRAM theory abstraction, to an XMT program, and, if desired, to its performance tuning. XMT provides strong performance for programs developed this way due to its hardware support of very fine-grained threads and the overhead of handling them. XMT has also shown unique promise when it comes to ease-of-programming, the biggest problem that has limited the impact of all parallel systems to date. For example, teachability of XMT programming has been demonstrated at various levels from rising 6th graders to graduate students, and students in a freshman class were able to program 3 parallel sorting algorithms. The main purpose of the current paper is to stimulate discussion on the following somewhat open-ended question. Now that we made significant progress on a system devoted to supporting PRAM-like programming, is it possible to incorporate our hardware support as an add-on into other current and future many-core systems? The paper considers a concrete proposal for doing that: recasting our work as a hardware-enhanced programmer's workflow "module" that can then be essentially imported into the other systems. %B Computer Design, 2009. ICCD 2009. IEEE International Conference on %P 60 - 63 %8 2009/10// %G eng %R 10.1109/ICCD.2009.5413174 %0 Journal Article %J ACM Trans. 
Algorithms %D 2009 %T Algorithms for distributional and adversarial pipelined filter ordering problems %A Condon,Anne %A Deshpande, Amol %A Hellerstein,Lisa %A Wu,Ning %K flow algorithms %K Pipelined filter ordering %K Query optimization %K selection ordering %X Pipelined filter ordering is a central problem in database query optimization. The problem is to determine the optimal order in which to apply a given set of commutative filters (predicates) to a set of elements (the tuples of a relation), so as to find, as efficiently as possible, the tuples that satisfy all of the filters. Optimization of pipelined filter ordering has recently received renewed attention in the context of environments such as the Web, continuous high-speed data streams, and sensor networks. Pipelined filter ordering problems are also studied in areas such as fault detection and machine learning under names such as learning with attribute costs, minimum-sum set cover, and satisficing search. We present algorithms for two natural extensions of the classical pipelined filter ordering problem: (1) a distributional-type problem where the filters run in parallel and the goal is to maximize throughput, and (2) an adversarial-type problem where the goal is to minimize the expected value of multiplicative regret. We present two related algorithms for solving (1), both running in time O(n2), which improve on the O(n3 log n) algorithm of Kodialam. We use techniques from our algorithms for (1) to obtain an algorithm for (2). %B ACM Trans. Algorithms %V 5 %P 24:1–24:34 - 24:1–24:34 %8 2009/03// %@ 1549-6325 %G eng %U http://doi.acm.org/10.1145/1497290.1497300 %N 2 %R 10.1145/1497290.1497300 %0 Journal Article %J Journal of Computing and Information Science in Engineering %D 2009 %T Algorithms for extraction of nanowire lengths and positions from optical section microscopy image sequence %A Peng,T. %A Balijepalli,A. %A Gupta,S.K. %A LeBrun,T. W. %B Journal of Computing and Information Science in Engineering %V 9 %P 041007 - 041007 %8 2009/// %G eng %0 Journal Article %J Robotics and Computer-Integrated Manufacturing %D 2009 %T Algorithms for generating multi-stage molding plans for articulated assemblies %A Priyadarshi,Alok K. %A Gupta, Satyandra K. %K In-mold assembly %K mold design %K Process planning %X Multi-stage molding is capable of producing better-quality articulated products at a lower cost. During the multi-stage molding process, assembly operations are performed along with the molding operations. Hence, it gives rise to a new type of planning problem. It is difficult to perform the planning manually because it involves evaluating large number of combinations and solving complex geometric reasoning problems. This paper investigates the problem of generating multi-stage molding plans for articulated assemblies. We present a planning framework that allows us to utilize constraints from experimentally proven molding plans. As a part of the planning problem, we determine the molding sequence and intermediate assembly configurations. We present algorithms for all the steps in the planning problem and characterize their computational complexities. Finally, we illustrate our approach with representative examples. 
%B Robotics and Computer-Integrated Manufacturing %V 25 %P 91 - 106 %8 2009/02// %@ 0736-5845 %G eng %U http://www.sciencedirect.com/science/article/pii/S0736584507001044 %N 1 %R 10.1016/j.rcim.2007.10.002 %0 Conference Paper %B INFOCOM 2009, IEEE %D 2009 %T All Bits Are Not Equal - A Study of IEEE 802.11 Communication Bit Errors %A Han,Bo %A Ji,Lusheng %A Lee,Seungjoon %A Bhattacharjee, Bobby %A Miller,R.R. %K 802.11 %K bit %K coding;error %K coding;forward %K coding;subframe %K combining %K Communication %K correction;frame %K correction;wireless %K error %K errors;channel %K errors;wireless %K IEEE %K LAN; %K LAN;channel %K mechanisms;network %K patterns;transmission %K statistics;forward %X In IEEE 802.11 Wireless LAN (WLAN) systems, techniques such as acknowledgement, retransmission, and transmission rate adaptation, are frame-level mechanisms designed for combating transmission errors. Recently sub-frame level mechanisms such as frame combining have been proposed by the research community. In this paper, we present results obtained from our bit error study for identifying sub-frame error patterns because we believe that identifiable bit error patterns can potentially introduce new opportunities in channel coding, network coding, forward error correction (FEC), and frame combining mechanisms. We have constructed a number of IEEE 802.11 wireless LAN testbeds and conducted extensive experiments to study the characteristics of bit errors and their location distribution. Conventional wisdom dictates that bit error probability is the result of channel condition and ought to follow corresponding distribution. However our measurement results identify three repeatable bit error patterns that are not induced by channel conditions. We have verified that such error patterns are present in WLAN transmissions in different physical environments and across different wireless LAN hardware platforms. We also discuss our current hypotheses for the reasons behind these bit error probability patterns and how identifying these patterns may help improving WLAN transmission robustness. %B INFOCOM 2009, IEEE %P 1602 - 1610 %8 2009/04// %G eng %R 10.1109/INFCOM.2009.5062078 %0 Journal Article %J Environmental Microbiology Reports %D 2009 %T Analysis of clonally related environmental Vibrio cholerae O1 El Tor isolated before 1992 from Varanasi, India reveals origin of SXT‐ICEs belonging to O139 and O1 serogroups %A Mohapatra,Saswat S. %A Mantri,Chinmay K. %A Mohapatra,Harapriya %A Rita R Colwell %A Singh,Durg V. %X In this study, we report the presence of SXT in environmental Vibrio cholerae O1 El Tor strains isolated before 1992 from Varanasi, India. All isolates, except one, were resistant to Tm, and/or Sul, Sm, Fr, Na and Am. None contained plasmids. PCR and DNA sequencing revealed the presence of SXT containing dfrA1 and/or sulII, strAB in six isolates and dfr18, sulII and strAB in five isolates. Three clinical V. cholerae O1 isolated during 1992 contained the antibiotic resistance gene cassette aadA1 in the class 1 integron. Conjugation experiments, followed by PCR analysis of transconjugants, provided evidence of the transferable nature of SXT and associated antibiotic resistance genes, and its integration into the prfC site. Results of phylogenetic analysis of the intSXT gene of clonally similar V. cholerae showed a clear difference between dfr18+ and dfrA1+V. cholerae O1 isolates. This is the first report of the occurrence of SXT harbouring sulII, strAB, dfr18 and/or dfrA1 genes in environmental V. 
cholerae O1 isolated prior to 1992 from Varanasi, India, and suggests emergence of SXT+ antibiotic-resistant V. cholerae O139 and O1 from an environmental V. cholerae progenitor by acquisition of SXT and antibiotic-resistant gene clusters. %B Environmental Microbiology Reports %V 2 %P 50 - 57 %8 2009/07/21/ %@ 1758-2229 %G eng %U http://onlinelibrary.wiley.com/doi/10.1111/j.1758-2229.2009.00051.x/abstract?userIsAuthenticated=false&deniedAccessCustomisedMessage= %N 1 %R 10.1111/j.1758-2229.2009.00051.x %0 Journal Article %J Magnetics, IEEE Transactions on %D 2009 %T Analytical Description of Quasi-Random Magnetization Relaxation to Equilibrium %A Serpico,C. %A d'Aquino,M. %A Bertotti,G. %A Mayergoyz, Issak D %K approach;quasirandom %K crossing;uniformly %K dynamical %K dynamics;averaging %K Hamiltonian %K magnetization %K magnetized %K particle;magnetic %K particles;magnetic %K perturbed %K relaxation;magnetisation;probability; %K relaxation;separatrix %K system;Landau-Lifshitz %K technique;damping;probabilistic %X The Landau-Lifshitz (LL) dynamics of a uniformly magnetized particle is considered. The LL equation is written in the form of a Hamiltonian perturbed dynamical system. By using suitable averaging technique, the equation for the slow dynamics of the energy is derived. The averaging technique breaks up in the case of separatrix crossing. It is shown that, in the limit of small damping, the separatrix crossing can be described by using a probabilistic approach. %B Magnetics, IEEE Transactions on %V 45 %P 5224 - 5227 %8 2009/11// %@ 0018-9464 %G eng %N 11 %R 10.1109/TMAG.2009.2031067 %0 Conference Paper %B Proceedings of the fourth international conference on Communities and technologies %D 2009 %T Analyzing (social media) networks with NodeXL %A Smith,Marc A. %A Shneiderman, Ben %A Milic-Frayling,Natasa %A Mendes Rodrigues,Eduarda %A Barash,Vladimir %A Dunne,Cody %A Capone,Tony %A Perer,Adam %A Gleave,Eric %K excel %K network analysis %K social media %K social network %K spreadsheet %K Visualization %X We present NodeXL, an extendible toolkit for network overview, discovery and exploration implemented as an add-in to the Microsoft Excel 2007 spreadsheet software. We demonstrate NodeXL data analysis and visualization features with a social media data sample drawn from an enterprise intranet social network. A sequence of NodeXL operations from data import to computation of network statistics and refinement of network visualization through sorting, filtering, and clustering functions is described. These operations reveal sociologically relevant differences in the patterns of interconnection among employee participants in the social media space. The tool and method can be broadly applied. %B Proceedings of the fourth international conference on Communities and technologies %S C&T '09 %I ACM %C New York, NY, USA %P 255 - 264 %8 2009/// %@ 978-1-60558-713-4 %G eng %U http://doi.acm.org/10.1145/1556460.1556497 %R 10.1145/1556460.1556497 %0 Conference Paper %D 2009 %T Analyzing the process of installing rogue software %A Berthier,R. %A Arjona,J. %A Michel Cukier %K Linux %K Linux target computers %K malicious actions %K rogue software installation %K security of data %X This practical experience report presents the results of an experiment aimed at understanding the sequence of malicious actions following a remote compromise. The type of rogue software installed during attacks was used to classify and understand sequences of malicious actions. 
For this experiment, we used four Linux target computers running SSH with simple passwords. During the eight-month data collection period, we recorded a total of 1,171 attack sessions. In these sessions, attackers typed a total of 20,335 commands that we categorized into 24 specific actions. These actions were analyzed based on the type of rogue software installed by attackers. %P 560 - 565 %8 2009/07/29/2 %G eng %R 10.1109/DSN.2009.5270293 %0 Journal Article %J Image Processing, IEEE Transactions on %D 2009 %T Appearance Modeling Using a Geometric Transform %A Li,Jian %A Zhou,S. K %A Chellapa, Rama %K body %K contour;feature %K curve;fingerprinting;geometric %K function;image %K function;shape %K geometry;image %K identity %K indicator %K INTENSITY %K matching;trace %K method;pedestrian %K modeling;closed %K part %K Radon %K recognition;region %K resolution;image %K segmentation; %K segmentation;image %K set %K synthesis;image %K transform;appearance %K transform;human %K transform;Radon %K transforms;computational %K warping;level %X A general transform, called the geometric transform (GeT), that models the appearance inside a closed contour is proposed. The proposed GeT is a functional of an image intensity function and a region indicator function derived from a closed contour. It can be designed to combine the shape and appearance information at different resolutions and to generate models invariant to deformation, articulation, or occlusion. By choosing appropriate functionals and region indicator functions, the GeT unifies Radon transform, trace transform, and a class of image warpings. By varying the region indicator and the types of features used for appearance modeling, five novel types of GeTs are introduced and applied to fingerprinting the appearance inside a contour. They include the GeTs based on a level set, shape matching, feature curves, and the GeT invariant to occlusion, and a multiresolution GeT (MRGeT). Applications of GeT to pedestrian identity recognition, human body part segmentation, and image synthesis are illustrated. The proposed approach produces promising results when applied to fingerprinting the appearance of a human and body parts despite the presence of nonrigid deformations and articulated motion. %B Image Processing, IEEE Transactions on %V 18 %P 889 - 902 %8 2009/04// %@ 1057-7149 %G eng %N 4 %R 10.1109/TIP.2008.2011381 %0 Conference Paper %B Proceedings of the 23rd international conference on Supercomputing %D 2009 %T Approximate kernel matrix computation on GPUs for large scale learning applications %A Hussein,Mohamed E %A Abd-Almageed, Wael %K affinity propagation %K algorithms %K arrays %K gpu %K kernel methods %K parallel programming %K performance %K space filling curves %K sparse matrices %X Kernel-based learning methods require quadratic space and time complexities to compute the kernel matrix. These complexities limit the applicability of kernel methods to large scale problems with millions of data points. In this paper, we introduce a novel representation of kernel matrices on Graphics Processing Units (GPU). The novel representation exploits the sparseness of the kernel matrix to address the space complexity problem. It also respects the guidelines for memory access on GPUs, which are critical for good performance, to address the time complexity problem. Our representation utilizes the locality preserving properties of space filling curves to obtain a band approximation of the kernel matrix. 
To prove the validity of the representation, we use Affinity Propagation, an unsupervised clustering algorithm, as an example of kernel methods. Experimental results show a 40x speedup of AP using our representation without degradation in clustering performance. %B Proceedings of the 23rd international conference on Supercomputing %S ICS '09 %I ACM %C Yorktown Heights, NY, USA %P 511 - 512 %8 2009/// %@ 978-1-60558-498-0 %G eng %R 10.1145/1542275.1542355 %0 Conference Paper %B IEEE 25th International Conference on Data Engineering, 2009. ICDE '09 %D 2009 %T ApproxRank: Estimating Rank for a Subgraph %A Yao Wu %A Raschid, Louiqa %K Application software %K ApproxRank %K Computer applications %K Crawlers %K customized semantic query answering %K Data engineering %K Educational institutions %K Explosions %K focused crawlers %K global Web graph %K graph theory %K IdealRank algorithm %K Internet %K localized search engines %K PageRank-style %K personalized search %K Query processing %K Runtime %K Search engines %K Stochastic processes %K Web pages %X Customized semantic query answering, personalized search, focused crawlers and localized search engines frequently focus on ranking the pages contained within a subgraph of the global Web graph. The challenge for these applications is to compute PageRank-style scores efficiently on the subgraph, i.e., the ranking must reflect the global link structure of the Web graph but it must do so without paying the high overhead associated with a global computation. We propose a framework of an exact solution and an approximate solution for computing ranking on a subgraph. The IdealRank algorithm is an exact solution with the assumption that the scores of external pages are known. We prove that the IdealRank scores for pages in the subgraph converge. Since the PageRank-style scores of external pages may not typically be available, we propose the ApproxRank algorithm to estimate scores for the subgraph. Both IdealRank and ApproxRank represent the set of external pages with an external node L1 and extend the subgraph with links to L1. They also modify the PageRank-style transition matrix with respect to L1. We analyze the L1 distance between IdealRank scores and ApproxRank scores of the subgraph and show that it is within a constant factor of the L1 distance of the external pages (e.g., the true PageRank scores and uniform scores assumed by ApproxRank). We compare ApproxRank and a stochastic complementation approach (SC), a current best solution for this problem, on different types of subgraphs. ApproxRank has similar or superior performance to SC and typically improves on the runtime performance of SC by an order of magnitude or better. We demonstrate that ApproxRank provides a good approximation to PageRank for a variety of subgraphs. %B IEEE 25th International Conference on Data Engineering, 2009. ICDE '09 %I IEEE %P 54 - 65 %8 2009/04/29/March %@ 978-1-4244-3422-0 %G eng %R 10.1109/ICDE.2009.108 %0 Conference Paper %B Proceedings of the ACL-IJCNLP 2009 Conference Short Papers$}$ %D 2009 %T Arabic Cross-Document Coreference Resolution %A Sayeed,A. %A Elsayed,T. %A Garera,N. %A Alexander,D. %A Xu,T. %A Oard, Douglas %A Yarowsky,D. %A Piatko,C. %B Proceedings of the ACL-IJCNLP 2009 Conference Short Papers$}$ %P 357 - 360 %8 2009/// %G eng %0 Journal Article %J International Journal of Embedded Systems %D 2009 %T An architectural level design methodology for smart camera applications %A Saha,S. %A Kianzad,V. %A Schlessman,J. %A Aggarwal,G. 
%A Bhattacharyya, Shuvra S. %A Chellapa, Rama %A Wolf,W. %X Today's embedded computing applications are characterised by increased functionality, and hence increased design complexity and processing requirements. The resulting design spaces are vast and designers are typically able to evaluate only small subsets of solutions due to lack of efficient design tools. In this paper, we propose an architectural level design methodology that provides a means for a comprehensive design space exploration for smart camera applications and enable designers to select higher quality solutions and provides substantial savings on the overall cost of the system. We present efficient, accurate and intuitive models for performance estimation and validate them with experiments. %B International Journal of Embedded Systems %V 4 %P 83 - 97 %8 2009/// %G eng %N 1 %0 Journal Article %J Nucleic Acids Research %D 2009 %T ARDB-Antibiotic Resistance Genes Database %A Liu,B. %A Pop, Mihai %X The treatment of infections is increasingly compromised by the ability of bacteria to develop resistance to antibiotics through mutations or through the acquisition of resistance genes. Antibiotic resistance genes also have the potential to be used for bio-terror purposes through genetically modified organisms. In order to facilitate the identification and characterization of these genes, we have created a manually curated database—the Antibiotic Resistance Genes Database (ARDB)—unifying most of the publicly available information on antibiotic resistance. Each gene and resistance type is annotated with rich information, including resistance profile, mechanism of action, ontology, COG and CDD annotations, as well as external links to sequence and protein databases. Our database also supports sequence similarity searches and implements an initial version of a tool for characterizing common mutations that confer antibiotic resistance. The information we provide can be used as compendium of antibiotic resistance factors as well as to identify the resistance genes of newly sequenced genes, genomes, or metagenomes. Currently, ARDB contains resistance information for 13 293 genes, 377 types, 257 antibiotics, 632 genomes, 933 species and 124 genera. ARDB is available at http://ardb.cbcb.umd.edu/. %B Nucleic Acids Research %V 37 %P D443-D447 - D443-D447 %8 2009/01/01/ %@ 0305-1048, 1362-4962 %G eng %U http://nar.oxfordjournals.org/content/37/suppl_1/D443.short %N Database %R 10.1093/nar/gkn656 %0 Book Section %B Springer Handbook of Automation %D 2009 %T Artificial Intelligence and Automation %A Nau, Dana S. %E Nof,Shimon Y. %K engineering %X Artificial intelligence (AI) focuses on getting machines to do things that we would call intelligent behavior. Intelligence – whether artificial or otherwise – does not have a precise definition, but there are many activities and behaviors that are considered intelligent when exhibited by humans and animals. Examples include seeing, learning, using tools, understanding human speech, reasoning, making good guesses, playing games, and formulating plans and objectives. AI focuses on how to get machines or computers to perform these same kinds of activities, though not necessarily in the same way that humans or animals might do them. 
%B Springer Handbook of Automation %I Springer Berlin Heidelberg %P 249 - 268 %8 2009/// %@ 978-3-540-78831-7 %G eng %U http://www.springerlink.com/content/q7051r251p38p51h/abstract/ %0 Journal Article %J Journal of Systems and Software %D 2009 %T An assessment of systems and software engineering scholars and institutions (2002–2006) %A Wong,W. Eric %A Tse,T.H. %A Glass,Robert L. %A Basili, Victor R. %A Chen,T.Y. %K Research publications %K Systems and software engineering %K Top institutions %K Top scholars %X This paper summarizes a survey of publications in the field of systems and software engineering from 2002 to 2006. The survey is an ongoing, annual event that identifies the top 15 scholars and institutions over a 5-year period. The rankings are calculated based on the number of papers published in TSE, TOSEM, JSS, SPE, EMSE, IST, and Software. The top-ranked institution is Korea Advanced Institute of Science and Technology, Korea, and the top-ranked scholar is Magne Jørgensen of Simula Research Laboratory, Norway. %B Journal of Systems and Software %V 82 %P 1370 - 1373 %8 2009/08// %@ 0164-1212 %G eng %U http://www.sciencedirect.com/science/article/pii/S0164121209001265 %N 8 %R 10.1016/j.jss.2009.06.018 %0 Conference Paper %B Robotics and Automation, 2009. ICRA '09. IEEE International Conference on %D 2009 %T Assigning cameras to subjects in video surveillance systems %A El-Alfy,H. %A Jacobs, David W. %A Davis, Larry S. %K agent %K algorithm;multiple %K assignment;computation %K augmenting %K cameras;video %K cost %K detection;video %K graph;camera %K MATCHING %K matching;minimum %K matching;target %K path;bipartite %K reduction;maximum %K segment;video %K Surveillance %K surveillance; %K system;graph %K theory;image %K TIME %K tracking;obstacle %K tracking;video %K video %X We consider the problem of tracking multiple agents moving amongst obstacles, using multiple cameras. Given an environment with obstacles, and many people moving through it, we construct a separate narrow field of view video for as many people as possible, by stitching together video segments from multiple cameras over time. We employ a novel approach to assign cameras to people as a function of time, with camera switches when needed. The problem is modeled as a bipartite graph and the solution corresponds to a maximum matching. As people move, the solution is efficiently updated by computing an augmenting path rather than by solving for a new matching. This reduces computation time by an order of magnitude. In addition, solving for the shortest augmenting path minimizes the number of camera switches at each update. When not all people can be covered by the available cameras, we cluster as many people as possible into small groups, then assign cameras to groups using a minimum cost matching algorithm. We test our method using numerous runs from different simulators. %B Robotics and Automation, 2009. ICRA '09. IEEE International Conference on %P 837 - 843 %8 2009/05// %G eng %R 10.1109/ROBOT.2009.5152753 %0 Conference Paper %B Proceedings of the 16th ACM conference on Computer and communications security %D 2009 %T Attacking cryptographic schemes based on "perturbation polynomials" %A Albrecht,Martin %A Gentry,Craig %A Halevi,Shai %A Katz, Jonathan %K pairwise key establishment %K random perturbation polynomial %K sensor network security %X We show attacks on several cryptographic schemes that have recently been proposed for achieving various security goals in sensor networks. 
Roughly speaking, these schemes all use "perturbation polynomials" to add "noise" to polynomialbased systems that offer information-theoretic security, in an attempt to increase the resilience threshold while maintaining efficiency. We show that the heuristic security arguments given for these modified schemes do not hold, and that they can be completely broken once we allow even a slight extension of the parameters beyond those achieved by the underlying information-theoretic schemes. Our attacks apply to the key predistribution scheme of Zhang et al. (MobiHoc 2007), the access-control schemes of Subramanian et al. (PerCom 2007), and the authentication schemes of Zhang et al. (INFOCOM 2008). Our results cast doubt on the viability of using "perturbation polynomials" for designing secure cryptographic schemes. %B Proceedings of the 16th ACM conference on Computer and communications security %S CCS '09 %I ACM %C New York, NY, USA %P 1 - 10 %8 2009/// %@ 978-1-60558-894-0 %G eng %U http://doi.acm.org/10.1145/1653662.1653664 %R 10.1145/1653662.1653664 %0 Journal Article %J Proceedings of the American Society for Information Science and Technology %D 2009 %T Automatic classification of human values: Applying computational thinking to information ethics %A Fleischmann,K.R. %A Oard, Douglas %A Cheng,A.S. %A Wang,P. %A Ishita,E. %B Proceedings of the American Society for Information Science and Technology %V 46 %P 1 - 4 %8 2009/// %G eng %N 1 %0 Journal Article %J Nature Structural & Molecular Biology %D 2009 %T Avid interactions underlie the Lys63-linked polyubiquitin binding specificities observed for UBA domains %A Sims,Joshua J. %A Haririnia,Aydin %A Dickinson,Bryan C. %A Fushman, David %A Cohen,Robert E. %K apoptosis %K basic cellular processes %K Biochemistry %K biophysics %K cell biology %K cell cycle %K cell surface proteins %K cell-cell interactions %K checkpoints %K chromatin %K chromatin remodeling %K chromatin structure %K content %K DNA recombination %K DNA repair %K DNA replication %K Gene expression %K Genetics %K intracellular signaling %K journal %K macromolecules %K mechanism %K membrane processes %K molecular %K molecular basis of disease %K molecular biology %K molecular interactions %K multi-component complexes %K nature publishing group %K nature structural molecular biology %K nucleic acids %K protein degradation %K protein folding %K protein processing %K Proteins %K regulation of transcription %K regulation of translation %K RNA %K RNA processing %K RNAi %K signal transduction %K single molecule studies %K structure and function of proteins %K transcription %K translation %X Ubiquitin (denoted Ub) receptor proteins as a group must contain a diverse set of binding specificities to distinguish the many forms of polyubiquitin (polyUb) signals. Previous studies suggested that the large class of ubiquitin-associated (UBA) domains contains members with intrinsic specificity for Lys63-linked polyUb or Lys48-linked polyUb, thus explaining how UBA-containing proteins can mediate diverse signaling events. Here we show that previously observed Lys63-polyUb selectivity in UBA domains is the result of an artifact in which the dimeric fusion partner, glutathione S-transferase (GST), positions two UBAs for higher affinity, avid interactions with Lys63-polyUb, but not with Lys48-polyUb. Freed from GST, these UBAs are either nonselective or prefer Lys48-polyUb. Accordingly, NMR experiments reveal no Lys63-polyUb–specific binding epitopes for these UBAs. 
We reexamine previous conclusions based on GST-UBAs and present an alternative model for how UBAs achieve a diverse range of linkage specificities. %B Nature Structural & Molecular Biology %V 16 %P 883 - 889 %8 2009/// %@ 1545-9993 %G eng %U http://www.nature.com/nsmb/journal/v16/n8/abs/nsmb.1637.html %N 8 %R 10.1038/nsmb.1637 %0 Conference Paper %B The Tenth International ACM SIGACCESS Conference on Computers and Accessibility %D 2008 %T A Camera Phone Based Currency Reader for the Visually Impaired %A Liu,Xu %A David Doermann %X In this paper we present a camera phone-based currency reader for the visually impaired to denominate face values of the U.S. paper dollars. Currently, they can only be identified visually and this situation will continue in the foreseeable future. Our solution harvests the imaging and computational power on camera phones to read bills. Considering it is impractical for the visually impaired to capture high quality image, our currency reader performs real time processing for each captured frame as the camera approaches the bill. We develop efficient background subtraction and perspective correction algorithms and train our currency reader using the Ada-boost framework, which is efficient. Our currency reader processes 10 frames/second and achieves a false positive rate of 10^-4. Major smart phone platforms, including Symbian and Windows Mobile, are supported. %B The Tenth International ACM SIGACCESS Conference on Computers and Accessibility %P 305 - 306 %8 2008/10// %G eng %0 Conference Paper %B ACM International Conference on Multimedia %D 2008 %T A Camera-based Mobile Data Channel: Capacity and Analysis %A Liu,Xu %A David Doermann %A Li,H. %X In this paper we propose a novel application, color Video Code (V-Code) and analyze its data transmission capacity through camera-based mobile data channels. Users can use the camera on a mobile device (PDA or camera phone) as a passive and pervasive data channel to download data encoded as a sequence of color visual patterns. The color V-Code is animated on a display, acquired by the camera and decoded by the pre-embedded software in the mobile device. One interesting question is what is the data transmission capacity it can achieve, theoretically and practically. To answer this question we build a camera channel model to measure color degradation using information theory and show that the capacity of the camera channel can be improved with the optimized color selection through color calibration. After initialization color models are learned automatically as downloading proceeds. We address the problem of precise registration, and implemented a fast perspective correction method to accelerate the decoder in real-time on a resource constrained device. With the optimized color set and efficient implementation we achieve a transmission bit rate of 15.4kbps on a common iMate Jamin phone (200MHz CPU). This speed is faster than the average GPRS bit rate (12kbps). %B ACM International Conference on Multimedia %P 359 - 368 %8 2008/// %G eng %0 Journal Article %J Journal on Computing and Cultural Heritage %D 2008 %T Access to recorded interviews %A Jong,Franciska De %A Oard, Douglas %A Heeren,Willemijn %A Ordelman,Roeland %B Journal on Computing and Cultural Heritage %V 1 %P 1 - 27 %8 2008/06/01/ %@ 15564673 %G eng %U http://dl.acm.org/citation.cfm?id=1367083 %R 10.1145/1367080.1367083 %0 Journal Article %J SIGCOMM Comput. Commun. Rev. %D 2008 %T Accountable internet protocol (AIP) %A Andersen,David G. 
%A Balakrishnan,Hari %A Feamster, Nick %A Koponen,Teemu %A Moon,Daekyeong %A Shenker,Scott %K accountability %K address %K internet architecture %K scalability %K Security %X This paper presents AIP (Accountable Internet Protocol), a network architecture that provides accountability as a first-order property. AIP uses a hierarchy of self-certifying addresses, in which each component is derived from the public key of the corresponding entity. We discuss how AIP enables simple solutions to source spoofing, denial-of-service, route hijacking, and route forgery. We also discuss how AIP's design meets the challenges of scaling, key management, and traffic engineering. %B SIGCOMM Comput. Commun. Rev. %V 38 %P 339 - 350 %8 2008/08// %@ 0146-4833 %G eng %U http://doi.acm.org/10.1145/1402946.1402997 %N 4 %R 10.1145/1402946.1402997 %0 Journal Article %J Proc. of the 6th International Conference on Language Resources and Evaluation (LREC’08) %D 2008 %T The ACL anthology reference corpus: A reference dataset for bibliographic research in computational linguistics %A Bird,S. %A Dale,R. %A Dorr, Bonnie J %A Gibson,B. %A Joseph,M.T. %A Kan,M.Y. %A Lee,D. %A Powley,B. %A Radev,D.R. %A Tan,Y.F. %X The ACL Anthology is a digital archive of conference and journal papers in natural language processing and computational linguistics. Its primary purpose is to serve as a reference repository of research results, but we believe that it can also be an object of study and a platform for research in its own right. We describe an enriched and standardized reference corpus derived from the ACL Anthology that can be used for research in scholarly document processing. This corpus, which we call the ACL Anthology Reference Corpus (ACL ARC), brings together the recent activities of a number of research groups around the world. Our goal is to make the corpus widely available, and to encourage other researchers to use it as a standard testbed for experiments in both bibliographic and bibliometric research. %B Proc. of the 6th International Conference on Language Resources and Evaluation (LREC’08) %P 1755 - 1759 %8 2008/// %G eng %0 Conference Paper %B Computer Vision and Pattern Recognition, 2008. CVPR 2008. IEEE Conference on %D 2008 %T Action recognition using ballistic dynamics %A Vitaladevuni,S.N. %A Kellokumpu,V. %A Davis, Larry S. %K analysis;image %K Bayesian %K dynamics;gesture %K feature;person-centric %K framework;action %K History %K image %K labels;psycho-kinesiological %K morphological %K MOTION %K Movement %K movements;motion %K planning;interactive %K processing; %K recognition %K recognition;ballistic %K recognition;image %K segmentation;video %K signal %K studies;image %K task;human %X We present a Bayesian framework for action recognition through ballistic dynamics. Psycho-kinesiological studies indicate that ballistic movements form the natural units for human movement planning. The framework leads to an efficient and robust algorithm for temporally segmenting videos into atomic movements. Individual movements are annotated with person-centric morphological labels called ballistic verbs. This is tested on a dataset of interactive movements, achieving high recognition rates. The approach is also applied on a gesture recognition task, improving a previously reported recognition rate from 84% to 92%. Consideration of ballistic dynamics enhances the performance of the popular Motion History Image feature. We also illustrate the approach's general utility on real-world videos. 
Experiments indicate that the method is robust to view, style and appearance variations. %B Computer Vision and Pattern Recognition, 2008. CVPR 2008. IEEE Conference on %P 1 - 8 %8 2008/06// %G eng %R 10.1109/CVPR.2008.4587806 %0 Journal Article %J Artificial Intelligence %D 2008 %T Active logic semantics for a single agent in a static world %A Anderson,Michael L. %A Gomaa,Walid %A Grant,John %A Perlis, Don %K Active logic %K Autonomous agents %K Brittleness %K Logic %K Nonmonotonic logic %K Paraconsistent logic %K semantics %K Soundness %K TIME %X For some time we have been developing, and have had significant practical success with, a time-sensitive, contradiction-tolerant logical reasoning engine called the active logic machine (ALMA). The current paper details a semantics for a general version of the underlying logical formalism, active logic. Central to active logic are special rules controlling the inheritance of beliefs in general (and of beliefs about the current time in particular), very tight controls on what can be derived from direct contradictions (P&¬P), and mechanisms allowing an agent to represent and reason about its own beliefs and past reasoning. Furthermore, inspired by the notion that until an agent notices that a set of beliefs is contradictory, that set seems consistent (and the agent therefore reasons with it as if it were consistent), we introduce an "apperception function" that represents an agent's limited awareness of its own beliefs, and serves to modify inconsistent belief sets so as to yield consistent sets. Using these ideas, we introduce a new definition of logical consequence in the context of active logic, as well as a new definition of soundness such that, when reasoning with consistent premises, all classically sound rules remain sound in our new sense. However, not everything that is classically sound remains sound in our sense, for by classical definitions, all rules with contradictory premises are vacuously sound, whereas in active logic not everything follows from a contradiction. %B Artificial Intelligence %V 172 %P 1045 - 1063 %8 2008/05// %@ 0004-3702 %G eng %U http://www.sciencedirect.com/science/article/pii/S0004370207001993 %N 8-9 %R 16/j.artint.2007.11.005 %0 Journal Article %J Image Processing, IEEE Transactions on %D 2008 %T Activity Modeling Using Event Probability Sequences %A Cuntoor, N.P. %A Yegnanarayana,B. %A Chellapa, Rama %K Automated;Reproducibility of Results;Sensitivity and Specificity;Video Recording; %K Biological;Models %K Carnegie Mellon University;Credo Intelligence %K Computer-Assisted;Models %K Inc.;Motion Capture dataset;Transportation Security Administration;University Central Florida;airport tarmac surveillance dataset;anomaly detection;event probability sequence;event representation;hidden Markov model;human action dataset;human activity rec %K Statistical;Motor Activity;Movement;Pattern Recognition %X Changes in motion properties of trajectories provide useful cues for modeling and recognizing human activities. We associate an event with significant changes that are localized in time and space, and represent activities as a sequence of such events. The localized nature of events allows for detection of subtle changes or anomalies in activities. In this paper, we present a probabilistic approach for representing events using the hidden Markov model (HMM) framework. Using trained HMMs for activities, an event probability sequence is computed for every motion trajectory in the training set. 
It reflects the probability of an event occurring at every time instant. Though the parameters of the trained HMMs depend on viewing direction, the event probability sequences are robust to changes in viewing direction. We describe sufficient conditions for the existence of view invariance. The usefulness of the proposed event representation is illustrated using activity recognition and anomaly detection. Experiments using the indoor University of Central Florida human action dataset, the Carnegie Mellon University Credo Intelligence, Inc., Motion Capture dataset, and the outdoor Transportation Security Administration airport tarmac surveillance dataset show encouraging results. %B Image Processing, IEEE Transactions on %V 17 %P 594 - 607 %8 2008/04// %@ 1057-7149 %G eng %N 4 %R 10.1109/TIP.2008.916991 %0 Journal Article %J J. Image Video Process. %D 2008 %T Activity representation using 3D shape models %A Abdelkader,Mohamed F. %A Roy-Chowdhury,Amit K. %A Chellapa, Rama %A Akdemir,Umut %X We present a method for characterizing human activities using 3D deformable shape models. The motion trajectories of points extracted from objects involved in the activity are used to build models for each activity, and these models are used for classification and detection of unusual activities. The deformable models are learnt using the factorization theorem for nonrigid 3D models. We present a theory for characterizing the degree of deformation in the 3D models from a sequence of tracked observations. This degree, termed as deformation index (DI), is used as an input to the 3D model estimation process. We study the special case of ground plane activities in detail because of its importance in video surveillance applications. We present results of our activity modeling approach using videos of both high-resolution single individual activities and ground plane surveillance activities. %B J. Image Video Process. %V 2008 %P 5:1–5:16 %8 2008/01// %@ 1687-5176 %G eng %U http://dx.doi.org/10.1155/2008/347050 %R 10.1155/2008/347050 %0 Conference Paper %B Proceeding of the twenty-sixth annual SIGCHI conference on Human factors in computing systems %D 2008 %T Activity sensing in the wild: a field trial of ubifit garden %A Consolvo,S. %A McDonald,D.W. %A Toscos,T. %A Chen,M.Y. %A Jon Froehlich %A Harrison,B. %A Klasnja,P. %A LaMarca,A. %A LeGrand,L. %A Libby,R. %A others %B Proceeding of the twenty-sixth annual SIGCHI conference on Human factors in computing systems %P 1797 - 1806 %8 2008/// %G eng %0 Journal Article %J Electronic Transactions on Numerical Analysis %D 2008 %T Adaptive Constraint Reduction for Training Support Vector Machines %A Jung,Jin Hyuk %A O'Leary, Dianne P. %A Tits,Andre' L. %B Electronic Transactions on Numerical Analysis %V 31 %P 156 - 177 %8 2008/// %G eng %U http://etna.mcs.kent.edu/vol.31.2008/pp156-177.dir/pp156-177.pdf %0 Journal Article %J Physical Review A %D 2008 %T The Adiabatic Theorem in the Presence of Noise %A O'Hara,Michael J. %A O'Leary, Dianne P. %B Physical Review A %V 77 %P 042319, 20 pages %8 2008/// %G eng %U http://link.aps.org/abstract/PRA/v77/e042319 %R 10.1103/PhysRevA.77.042319 %0 Conference Paper %B In G. Visaggio (Ed.), 12th International Conference on Evaluation and Assessment in Software Engineering. BCS eWIC %D 2008 %T Adopting Curvilinear Component Analysis to Improve Software Cost Estimation Accuracy. 
Model, Application Strategy, and an Experimental Verification %A Sarcia,Salvatore A. %A Cantone,Giovanni %A Basili, Victor R. %X Cost estimation is a critical issue for software organizations. Good estimates can help us make more informed decisions (controlling and planning software risks), if they are reliable (correct) and valid (stable). In this study, we apply a variable reduction technique (based on auto-associative feed--forward neural networks – called Curvilinear component analysis) to log-linear regression functions calibrated with ordinary least squares. Based on a COCOMO 81 data set, we show that Curvilinear component analysis can improve the estimation model accuracy by turning the initial input variables into an equivalent and more compact representation. We show that, the models obtained by applying Curvilinear component analysis are more parsimonious, correct, and reliable. %B In G. Visaggio (Ed.), 12 th International Conference on Evaluation and Assessment in Software Engineering. BCS eWIC %8 2008/// %G eng %0 Journal Article %J Lecture Notes in Computer Science %D 2008 %T Advances in multilingual and multimodal information retrieval %A Peters,C. %A Jijkoun,V. %A Mandl,T. %A Müller,H. %A Oard, Douglas %A Peñas,A. %A Santos,D. %B Lecture Notes in Computer Science %V 5152 %8 2008/// %G eng %0 Journal Article %J Proceedings of the American Society for Information Science and Technology %D 2008 %T Advancing social science research by applying computational linguistics %A Cheng,A.S. %A Fleischmann,K.R. %A Wang,P. %A Oard, Douglas %B Proceedings of the American Society for Information Science and Technology %V 45 %P 1 - 12 %8 2008/// %G eng %N 1 %0 Journal Article %J Journal of Molecular Biology %D 2008 %T Affinity Makes the Difference: Nonselective Interaction of the UBA Domain of Ubiquilin-1 with Monomeric Ubiquitin and Polyubiquitin Chains %A Zhang,Daoning %A Raasi,Shahri %A Fushman, David %K polyubiquitin %K protein–protein interaction %K UBA domain %K ubiquilin %K ubiquitin %X Ubiquilin/PLIC proteins belong to the family of UBL–UBA proteins implicated in the regulation of the ubiquitin-dependent proteasomal degradation of cellular proteins. A human presenilin-interacting protein, ubiquilin-1, has been suggested as potential therapeutic target for treating Huntington's disease. Ubiquilin's interactions with mono- and polyubiquitins are mediated by its UBA domain, which is one of the tightest ubiquitin binders among known ubiquitin-binding domains. Here we report the three-dimensional structure of the UBA domain of ubiquilin-1 (UQ1-UBA) free in solution and in complex with ubiquitin. UQ1-UBA forms a compact three-helix bundle structurally similar to other known UBAs, and binds to the hydrophobic patch on ubiquitin with a Kd of 20 μM. To gain structural insights into UQ1-UBA's interactions with polyubiquitin chains, we have mapped the binding interface between UQ1-UBA and Lys48- and Lys63-linked di-ubiquitins and characterized the strength of UQ1-UBA binding to these chains. Our NMR data show that UQ1-UBA interacts with the individual ubiquitin units in both chains in a mode similar to its interaction with mono-ubiquitin, although with an improved binding affinity for the chains. Our results indicate that, in contrast to UBA2 of hHR23A that has strong binding preference for Lys48-linked chains, UQ1-UBA shows little or no binding selectivity toward a particular chain linkage or between the two ubiquitin moieties in the same chain. 
The structural data obtained in this study provide insights into the possible structural reasons for the diversity of polyubiquitin chain recognition by UBA domains. %B Journal of Molecular Biology %V 377 %P 162 - 180 %8 2008/03/14/ %@ 0022-2836 %G eng %U http://www.sciencedirect.com/science/article/pii/S0022283607016452 %N 1 %R 10.1016/j.jmb.2007.12.029 %0 Journal Article %J Topics in Cryptology–CT-RSA 2008 %D 2008 %T Aggregate message authentication codes %A Katz, Jonathan %A Lindell,A. %X We propose and investigate the notion of aggregate message authentication codes (MACs) which have the property that multiple MAC tags, computed by (possibly) different senders on multiple (possibly different) messages, can be aggregated into a shorter tag that can still be verified by a recipient who shares a distinct key with each sender. We suggest aggregate MACs as an appropriate tool for authenticated communication in mobile ad-hoc networks or other settings where resource-constrained devices share distinct keys with a single entity (such as a base station), and communication is an expensive resource. %B Topics in Cryptology–CT-RSA 2008 %P 155 - 169 %8 2008/// %G eng %R 10.1007/978-3-540-79263-5_10 %0 Conference Paper %B Proceedings of the 23rd national conference on Artificial intelligence %D 2008 %T An AGM-based belief revision mechanism for probabilistic spatio-temporal logics %A Parker,A. %A Infantes,G. %A V.S. Subrahmanian %A Grant,J. %B Proceedings of the 23rd national conference on Artificial intelligence %P 511 - 516 %8 2008/// %G eng %0 Journal Article %J ACM Trans. Math. Softw. %D 2008 %T Algorithm 879: EIGENTEST—a test matrix generator for large-scale eigenproblems %A Lee,Che-Rung %A Stewart, G.W. %K Eigensystem %K test matrix generation %X Eigentest is a package that produces real test matrices with known eigensystems. A test matrix, called an eigenmat, is generated in a factored form, in which the user can specify the eigenvalues and has some control over the condition of the eigenvalues and eigenvectors. An eigenmat A of order n requires only O(n) storage for its representation. Auxiliary programs permit the computation of (A − sI)b, (A − sI)^T b, (A − sI)^-1 b, and (A − sI)^-T b in O(n) operations. A special routine computes specified eigenvectors of an eigenmat and the condition of its eigenvalue. Thus eigenmats are suitable for testing algorithms based on Krylov sequences, as well as others based on matrix-vector products. This article introduces the eigenmat and describes implementations in Fortran 77, Fortran 95, C, and Matlab. %B ACM Trans. Math. Softw. %V 35 %P 7:1–7:11 %8 2008/07// %@ 0098-3500 %G eng %U http://doi.acm.org/10.1145/1377603.1377610 %N 1 %R 10.1145/1377603.1377610 %0 Journal Article %J Image Processing, IEEE Transactions on %D 2008 %T Algorithmic and Architectural Optimizations for Computationally Efficient Particle Filtering %A Sankaranarayanan,A. C %A Srivastava, A. %A Chellapa, Rama %K Automated;Reproducibility of Results;Sensitivity and Specificity;Signal Processing %K Computer-Assisted;Models %K Computer-Assisted;Video Recording; %K convex program;independent Metropolis Hastings sampler;nonGaussian noise process;nonlinear dynamical system filtering;particle filtering algorithm;pipelined architectural optimization;video sequences;visual tracking;convex programming;image sequences;opti %K Statistical;Pattern Recognition %X In this paper, we analyze the computational challenges in implementing particle filtering, especially to video sequences. 
Particle filtering is a technique used for filtering nonlinear dynamical systems driven by non-Gaussian noise processes. It has found widespread applications in detection, navigation, and tracking problems. Although, in general, particle filtering methods yield improved results, it is difficult to achieve real time performance. In this paper, we analyze the computational drawbacks of traditional particle filtering algorithms, and present a method for implementing the particle filter using the Independent Metropolis Hastings sampler, that is highly amenable to pipelined implementations and parallelization. We analyze the implementations of the proposed algorithm, and, in particular, concentrate on implementations that have minimum processing times. It is shown that the design parameters for the fastest implementation can be chosen by solving a set of convex programs. The proposed computational methodology was verified using a cluster of PCs for the application of visual tracking. We demonstrate a linear speedup of the algorithm using the methodology proposed in the paper. %B Image Processing, IEEE Transactions on %V 17 %P 737 - 748 %8 2008/05// %@ 1057-7149 %G eng %N 5 %R 10.1109/TIP.2008.920760 %0 Journal Article %J Theoretical Computer Science %D 2008 %T Algorithms for computing a parameterized st-orientation %A Charalampos Papamanthou %A Tollis, Ioannis G. %K Graph algorithms %K Longest path %K Planar graphs %K s t -numberings %X s t -orientations ( s t -numberings) or bipolar orientations of undirected graphs are central to many graph algorithms and applications. Several algorithms have been proposed in the past to compute an s t -orientation of a biconnected graph. In this paper, we present new algorithms that compute such orientations with certain (parameterized) characteristics in the final s t -oriented graph, such as the length of the longest path. This work has many applications, including Graph Drawing and Network Routing, where the length of the longest path is vital in deciding certain features of the final solution. This work applies to other difficult problems as well, such as graph coloring and of course longest path. We present extended theoretical and experimental results which show that our technique is efficient and performs well in practice. %B Theoretical Computer Science %V 408 %P 224 - 240 %8 2008/11/28/ %@ 0304-3975 %G eng %U http://www.sciencedirect.com/science/article/pii/S0304397508005653 %N 2–3 %! Theoretical Computer Science %0 Journal Article %J Journal of Computing and Information Science in Engineering %D 2008 %T Algorithms for Generating Adaptive Projection Patterns for 3D Shape Measurement %A Peng,T. %A Gupta,S.K. %B Journal of Computing and Information Science in Engineering %V 8 %P 031009 - 031009 %8 2008/// %G eng %0 Book Section %B Algorithmic Aspects of Wireless Sensor Networks %D 2008 %T Algorithms for Location Estimation Based on RSSI Sampling %A Charalampos Papamanthou %A Preparata, Franco P. %A Tamassia, Roberto %E Fekete, Sándor P. %K Algorithm Analysis and Problem Complexity %K Computer Communication Networks %K Data structures %K Discrete Mathematics in Computer Science %K Information Systems and Communication Service %X In this paper, we re-examine the RSSI measurement model for location estimation and provide the first detailed formulation of the probability distribution of the position of a sensor node. 
We also show how to use this probabilistic model to efficiently compute a good estimation of the position of the sensor node by sampling multiple readings from the beacons (where we do not merely use the mean of the samples) and then minimizing a function with an acceptable computational effort. The results of the simulation of our method in TOSSIM indicate that the location of the sensor node can be computed in a small amount of time and that the quality of the solution is competitive with previous approaches. %B Algorithmic Aspects of Wireless Sensor Networks %S Lecture Notes in Computer Science %I Springer Berlin Heidelberg %P 72 - 86 %8 2008/01/01/ %@ 978-3-540-92861-4, 978-3-540-92862-1 %G eng %U http://link.springer.com/chapter/10.1007/978-3-540-92862-1_7 %0 Conference Paper %B Proceedings of the twenty-sixth annual SIGCHI conference on Human factors in computing systems %D 2008 %T Aligning temporal data by sentinel events: discovering patterns in electronic health records %A Wang,Taowei David %A Plaisant, Catherine %A Quinn,Alexander J. %A Stanchak,Roman %A Murphy,Shawn %A Shneiderman, Ben %K electronic health record %K Evaluation %K Information Visualization %K search %K temporal data %K Uncertainty %X Electronic Health Records (EHRs) and other temporal databases contain hidden patterns that reveal important cause-and-effect phenomena. Finding these patterns is a challenge when using traditional query languages and tabular displays. We present an interactive visual tool that complements query formulation by providing operations to align, rank and filter the results, and to visualize estimates of the intervals of validity of the data. Display of patient histories aligned on sentinel events (such as a first heart attack) enables users to spot precursor, co-occurring, and aftereffect events. A controlled study demonstrates the benefits of providing alignment (with a 61% speed improvement for complex tasks). A qualitative study and interviews with medical professionals demonstrates that the interface can be learned quickly and seems to address their needs. %B Proceedings of the twenty-sixth annual SIGCHI conference on Human factors in computing systems %S CHI '08 %I ACM %C New York, NY, USA %P 457 - 466 %8 2008/// %@ 978-1-60558-011-1 %G eng %U http://doi.acm.org/10.1145/1357054.1357129 %R 10.1145/1357054.1357129 %0 Conference Paper %B Practice and Research Techniques, 2008. TAIC PART '08. Testing: Academic Industrial Conference %D 2008 %T Alternating GUI Test Generation and Execution %A Xun Yuan %A Memon, Atif M. %K ALT %K complex failure-causing interactions %K event sequences %K graphical user interface %K Graphical user interfaces %K GUI test generation %K Testing %X Users of today's software perform tasks by interacting with a graphical user interface (GUI) front-end via sequences of input events. Due to the flexibility offered by most GUIs, the number of event sequences grows exponentially with length. One ubiquitous challenge of GUI testing is to selectively generate those sequences that lead to potentially problematic states. This paper presents ALT, a new technique that generates GUI test cases in batches, by leveraging GUI run-time information from a previously run batch to obtain the next batch. Each successive batch consists of "longer" test cases that expand the state space to be explored, yet prune the "unimportant" states. 
The "alternating" nature of ALT allows it to enhance the next batch by leveraging certain relationships between GUI events (e.g., one enables the other, one alters the other's execution) that are revealed only at run-time and non-trivial to infer statically. An empirical study on four fielded GUI-based applications demonstrates that ALT is successful at identifying complex failure-causing interactions between GUI events. %B Practice and Research Techniques, 2008. TAIC PART '08. Testing: Academic Industrial Conference %P 23 - 32 %8 2008/// %G eng %R 10.1109/TAIC-PART.2008.10 %0 Conference Paper %D 2008 %T Analysis of Computer Security Incident Data Using Time Series Models %A Condon,E. %A He,A. %A Michel Cukier %K Computer networks %K computer security incident data %K NETWORK SECURITY %K resource allocation %K security of data %K telecommunication security %K time series %K time series model %X Organizations face increasing challenges in addressing and preventing computer and network security incidents. There are financial consequences from security incidents. These include lost time and resources used during recovery, possible theft of personal and/or proprietary information, and reputational damage that may negatively impact stock prices or reduce consumer confidence in a company. Being able to understand and predict trends in computer and network security incidents can aid an organization with resource allocation for prevention of such incidents, as well as evaluation of mitigation strategies. We look at using time series models with a large set of security incident data. We examine appropriateness of the data for modeling and consider needed transformations. Parameter search and model selection criteria are discussed. Then, forecasts from time series models are compared to forecasts from Non-Homogeneous Poisson Process (NHPP) software reliability growth (SRG) models. %P 77 - 86 %8 2008/11// %G eng %R 10.1109/ISSRE.2008.39 %0 Conference Paper %B Proceedings of the ACM SIGCOMM 2008 conference on Data communication %D 2008 %T Answering what-if deployment and configuration questions with wise %A Tariq,Mukarram %A Zeitoun,Amgad %A Valancius,Vytautas %A Feamster, Nick %A Ammar,Mostafa %K content distribution networks %K performance modeling %K what-if scenario evaluation %X Designers of content distribution networks often need to determine how changes to infrastructure deployment and configuration affect service response times when they deploy a new data center, change ISP peering, or change the mapping of clients to servers. Today, the designers use coarse, back-of-the-envelope calculations, or costly field deployments; they need better ways to evaluate the effects of such hypothetical "what-if" questions before the actual deployments. This paper presents What-If Scenario Evaluator (WISE), a tool that predicts the effects of possible configuration and deployment changes in content distribution networks. 
WISE makes three contributions: (1) an algorithm that uses traces from existing deployments to learn causality among factors that affect service response-time distributions; (2) an algorithm that uses the learned causal structure to estimate a dataset that is representative of the hypothetical scenario that a designer may wish to evaluate, and uses these datasets to predict future response-time distributions; (3) a scenario specification language that allows a network designer to easily express hypothetical deployment scenarios without being cognizant of the dependencies between variables that affect service response times. Our evaluation, both in a controlled setting and in a real-world field deployment at a large, global CDN, shows that WISE can quickly and accurately predict service response-time distributions for many practical What-If scenarios. %B Proceedings of the ACM SIGCOMM 2008 conference on Data communication %S SIGCOMM '08 %I ACM %C New York, NY, USA %P 99 - 110 %8 2008/// %@ 978-1-60558-175-0 %G eng %U http://doi.acm.org/10.1145/1402958.1402971 %R 10.1145/1402958.1402971 %0 Journal Article %J NSF Symposium on Semantic Knowledge Discovery, Organization and Use %D 2008 %T Applying automatically generated semantic knowledge: A case study in machine translation %A Madnani,N. %A Resnik, Philip %A Dorr, Bonnie J %A Schwartz,R. %X In this paper, we discuss how we apply automatically generated semantic knowledge to benefit statisticalmachine translation (SMT). Currently, almost all statistical machine translation systems rely heavily on memorizing translations of phrases. Some systems attempt to go further and generalize these learned phrase translations into templates using empirically derived information about word alignments and a small amount of syntactic information, if at all. There are several issues in a SMT pipeline that could be addressed by the application of semantic knowledge, if such knowledge were easily available. One such issue, an important one, is that of reference sparsity. The fundamental problem that translation systems have to face is that there is no such thing as the correct translation for any sentence. In fact, any given source sentence can often be translated into the target language in many valid ways. Since there can be many “correct answers,” almost all models employed by SMT systems require, in addition to a large bitext, a held-out development set comprised of multiple high-quality, human-authored reference translations in the target language in order to tune their parameters relative to a translation quality metric.1 There are several reasons that this requirement is not an easy one to satisfy. First, with a few exceptions—notably NIST’s annual MT evaluations—most new MT research data sets are provided with only a single reference translation. Second, obtaining multiple reference translations in rapid development, low-density source language scenarios (e.g. (Oard, 2003)) is likely to be severely limited (or made entirely impractical) by limitations of time, cost, and ready availability of qualified translators. %B NSF Symposium on Semantic Knowledge Discovery, Organization and Use %8 2008/// %G eng %0 Conference Paper %B Computer Vision and Pattern Recognition, 2008. CVPR 2008. IEEE Conference on %D 2008 %T Approximate earth mover's distance in linear time %A Shirdhonkar,S. %A Jacobs, David W. 
%K algorithm;normal %K complexity;earth %K complexity;image %K constraint;Kantorovich-Rubinstein %K continuity %K distance;histograms;linear %K distance;weighted %K Euclidean %K Holder %K matching;wavelet %K movers %K problem;computational %K TIME %K transform;computational %K transforms; %K transshipment %K wavelet %X The earth moverpsilas distance (EMD) is an important perceptually meaningful metric for comparing histograms, but it suffers from high (O(N3 logN)) computational complexity. We present a novel linear time algorithm for approximating the EMD for low dimensional histograms using the sum of absolute values of the weighted wavelet coefficients of the difference histogram. EMD computation is a special case of the Kantorovich-Rubinstein transshipment problem, and we exploit the Holder continuity constraint in its dual form to convert it into a simple optimization problem with an explicit solution in the wavelet domain. We prove that the resulting wavelet EMD metric is equivalent to EMD, i.e. the ratio of the two is bounded. We also provide estimates for the bounds. The weighted wavelet transform can be computed in time linear in the number of histogram bins, while the comparison is about as fast as for normal Euclidean distance or chi2 statistic. We experimentally show that wavelet EMD is a good approximation to EMD, has similar performance, but requires much less computation. %B Computer Vision and Pattern Recognition, 2008. CVPR 2008. IEEE Conference on %P 1 - 8 %8 2008/06// %G eng %R 10.1109/CVPR.2008.4587662 %0 Conference Paper %B IEEE INFOCOM 2008. The 27th Conference on Computer Communications %D 2008 %T Approximation Algorithms for Computing Capacity of Wireless Networks with SINR Constraints %A Chafekar,D. %A Kumart,V. S.A %A Marathe,M. V %A Parthasarathy,S. %A Srinivasan, Aravind %K Algorithm design and analysis %K approximation algorithm %K Approximation algorithms %K approximation theory %K Computer networks %K Computer science %K graph theory %K graph-based model %K Interference constraints %K polynomial time algorithm %K Propagation losses %K Protocols %K radio networks %K radiofrequency interference %K signal to interference plus noise ratio %K Signal to noise ratio %K Throughput %K wireless interference %K wireless network %K Wireless networks %X A fundamental problem in wireless networks is to estimate its throughput capacity - given a set of wireless nodes, and a set of connections, what is the maximum rate at which data can be sent on these connections. Most of the research in this direction has focused on either random distributions of points, or has assumed simple graph-based models for wireless interference. In this paper, we study capacity estimation problem using the more general Signal to Interference Plus Noise Ratio (SINR) model for interference, on arbitrary wireless networks. The problem becomes much harder in this setting, because of the non-locality of the SINR model. Recent work by Moscibroda et al. (2006) has shown that the throughput in this model can differ from graph based models significantly. We develop polynomial time algorithms to provably approximate the total throughput in this setting. %B IEEE INFOCOM 2008. 
The 27th Conference on Computer Communications %I IEEE %P 1166 - 1174 %8 2008/04/13/18 %@ 978-1-4244-2025-4 %G eng %R 10.1109/INFOCOM.2008.172 %0 Book Section %B Computational Linguistics and Intelligent Text Processing, Lecture Notes in Computer Science Volume 4919 %D 2008 %T Arabic/English Multi-document Summarization with CLASSY: The Past and the Future %A Schlesinger,Judith D. %A O'Leary, Dianne P. %A Conroy,John M. %B Computational Linguistics and Intelligent Text Processing, Lecture Notes in Computer Science Volume 4919 %I Springer Berlin %P 568 - 581 %8 2008/// %G eng %U http://dx.doi.org/10.1007/978-3-540-78135-6_49 %0 Report %D 2008 %T Archiving Temporal Web Information: Organization of Web Contents for Fast Access and Compact Storage %A Song,Sangchul %A JaJa, Joseph F. %K Technical Report %X We address the problem of archiving dynamic web contents over significant time spans. Current schemes crawl the web contents at regular time intervals and archive the contents after each crawl regardless of whether or not the contents have changed between consecutive crawls. Our goal is to store newly crawled web contents only when they are different than the previous crawl, while ensuring accurate and quick retrieval of archived contents based on arbitrary temporal queries over the archived time period. In this paper, we develop a scheme that stores unique temporal web contents in containers following the widely used ARC/WARC format, and that provides quick access to the archived contents for arbitrary temporal queries. A novel component of our scheme is the use of a new indexing structure based on the concept of persistent or multi-version data structures. Our scheme can be shown to be asymptotically optimal both in storage utilization and insert/retrieval time. We illustrate the performance of our method on two very different data sets from the Stanford WebBase project, the first reflecting very dynamic web contents and the second relatively static web contents. The experimental results clearly illustrate the substantial storage savings achieved by eliminating duplicate contents detected between consecutive crawls, as well as the speed at which our method can find the archived contents specified through arbitrary temporal queries. %I Institute for Advanced Computer Studies, Univ of Maryland, College Park %V UMIACS-TR-2008-08 %8 2008/04/07/ %G eng %U http://drum.lib.umd.edu/handle/1903/7569 %0 Journal Article %J Proceedings of the Eighth Conference of the Association for Machine Translation in the Americas, October %D 2008 %T Are multiple reference translations necessary? investigating the value of paraphrased reference translations in parameter optimization %A Madnani,N. %A Resnik, Philip %A Dorr, Bonnie J %A Schwartz,R. %X Most state-of-the-art statistical machine translation systems use log-linear models, which are defined in terms of hypothesis features and weights for those features. It is standard to tune the feature weights in order to maximize a translation quality metric, using held-out test sentences and their corresponding reference translations. However, obtaining reference translations is expensive.
In our earlier work (Madnani et al., 2007), we introduced a new full-sentence paraphrase technique, based on English-to-English decoding with an MT system, and demonstrated that the resulting paraphrases can be used to cut the number of human reference translations needed in half. In this paper, we take the idea a step further, asking how far it is possible to get with just a single good reference translation for each item in the development set. Our analysis suggests that it is necessary to invest in four or more hu- man translations in order to significantly im- prove on a single translation augmented by monolingual paraphrases. %B Proceedings of the Eighth Conference of the Association for Machine Translation in the Americas, October %8 2008/// %G eng %0 Conference Paper %B Proceedings of the 45th annual Design Automation Conference %D 2008 %T An area-efficient high-throughput hybrid interconnection network for single-chip parallel processing %A Balkan,Aydin O. %A Gang Qu %A Vishkin, Uzi %K hybrid networks %K mesh-of-trees %K on-chip networks %X Single-chip parallel processing requires high bandwidth between processors and on-chip memory modules. A recently proposed Mesh-of-Trees (MoT) network provides high throughput and low latency at relatively high area cost. In this paper, we introduce a hybrid MoT-BF network that combines MoT network with the area efficient butterfly network. We prove that the hybrid network reduces MoT network's area cost. Cycle-accurate simulation and post-layout results all show that significant area reduction can be achieved with negligible performance degradation, when operating at same clock rate. %B Proceedings of the 45th annual Design Automation Conference %S DAC '08 %I ACM %C New York, NY, USA %P 435 - 440 %8 2008/// %@ 978-1-60558-115-6 %G eng %U http://doi.acm.org/10.1145/1391469.1391583 %R 10.1145/1391469.1391583 %0 Journal Article %J Computer %D 2008 %T The ASC-Alliance Projects: A Case Study of Large-Scale Parallel Scientific Code Development %A Hochstein, L. %A Basili, Victor R. %K ASC-Alliance %K code %K development;large-scale %K development;parallel %K engineering; %K machines;software %K Parallel %K projects;computational %K scale %K Science %K scientific %K scientists;large %K software %K systems;parallel %X Computational scientists face many challenges when developing software that runs on large-scale parallel machines. However, software-engineering researchers haven't studied their software development processes in much detail. To better understand the nature of software development in this context, the authors examined five large-scale computational science software projects operated at the five ASC-Alliance centers. %B Computer %V 41 %P 50 - 58 %8 2008/03// %@ 0018-9162 %G eng %N 3 %R 10.1109/MC.2008.101 %0 Journal Article %J Journal of Systems and Software %D 2008 %T An assessment of systems and software engineering scholars and institutions (2001–2005) %A Wong,W. Eric %A Tse,T.H. %A Glass,Robert L. %A Basili, Victor R. %A Chen,T.Y. %K Research publications %K Systems and software engineering %K Top institutions %K Top scholars %X This paper presents the findings of a five-year study of the top scholars and institutions in the systems and software engineering field, as measured by the quantity of papers published in the journals of the field in 2001–2005. 
The top scholar is Magne Jørgensen of Simula Research Laboratory, Norway, and the top institution is Korea Advanced Institute of Science and Technology, Korea.This paper is part of an ongoing study, conducted annually, that identifies the top 15 scholars and institutions in the most recent five-year period. %B Journal of Systems and Software %V 81 %P 1059 - 1062 %8 2008/06// %@ 0164-1212 %G eng %U http://www.sciencedirect.com/science/article/pii/S0164121207002300 %N 6 %R 10.1016/j.jss.2007.09.018 %0 Journal Article %J Lecture Notes in Computer Science %D 2008 %T Athos: Efficient Authentication of Outsourced File Systems: Information Security %A Triandopoulos, Nikolaos %A Goodrich, Michael T. %A Charalampos Papamanthou %A Tamassia, Roberto %B Lecture Notes in Computer Science %P 80 - 96 %8 2008/// %@ 03029743 %G eng %0 Patent %D 2008 %T Audio Camera Using Microphone Arrays for Real Time Capture of Audio Images ... %A Duraiswami, Ramani %A O'donovan,Adam %A Neumann, Jan %A Gumerov, Nail A. %X Spherical microphone arrays provide an ability to compute the acoustical intensity corresponding to different spatial directions in a given frame of audio data. These intensities may be exhibited as an image and these images are generated at a high frame rate to achieve a video image if the data capture and intensity computations can be performed sufficiently quickly, thereby creating a frame-rate audio camera. A description is provided herein regarding how such a camera is built and the processing done sufficiently quickly using graphics processors. The joint processing of and captured frame-rate audio and video images enables applications such as visual identification of noise sources, beamforming and noise-suppression in video conferenceing and others, by accounting for the spatial differences in the location of the audio and the video cameras. Based on the recognition that the spherical array can be viewed as a central projection camera, such joint analysis can be performed. %V 12/127,451 %8 2008/05// %G eng %U http://www.google.com/patents?id=gWKzAAAAEBAJ %0 Conference Paper %B Data Engineering Workshop, 2008. ICDEW 2008. IEEE 24th International Conference on %D 2008 %T Augmenting spatio-textual search with an infectious disease ontology %A Lieberman,M.D. %A Sankaranarayanan,J. %A Samet, Hanan %A Sperling,J. %K (artificial %K article;ontology;spatio-textual %K disease;map %K engines; %K information %K intelligence);search %K interface;newspaper %K search;classification;diseases;indexing;medical %K STEWARD %K system;classification;epidemics;indexing;infectious %K systems;ontologies %X A system is described that automatically categorizes and classifies infectious disease incidence reports by type and geographic location, to aid analysis by domain experts. It identifies references to infectious diseases by using a disease ontology. The system leverages the textual and spatial search capabilities of the STEWARD system to enable queries such as reports on "influenza" near "Hong Kong", possibly within a particular time period. Documents from the U.S. National Library of Medicine (http://www.pubmed.gov) and the World Health Organization (http://www.who.int) are tagged so that spatial relationships to specific disease occurrences can be presented graphically via a map interface. In addition, newspaper articles can be tagged and indexed to bolster the surveillance of ongoing epidemics. 
Examining past epidemics using this system may lead to improved understanding of the cause and spread of infectious diseases. %B Data Engineering Workshop, 2008. ICDEW 2008. IEEE 24th International Conference on %P 266 - 269 %8 2008/04// %G eng %R 10.1109/ICDEW.2008.4498330 %0 Conference Paper %B CCS '08 Proceedings of the 15th ACM Conference on Computer and Communications Security %D 2008 %T Authenticated Hash Tables %A Charalampos Papamanthou %A Tamassia, Roberto %A Triandopoulos, Nikos %K Authentication %K hash tables %K rsa accumulator %K verification %X Hash tables are fundamental data structures that optimally answer membership queries. Suppose a client stores n elements in a hash table that is outsourced at a remote server so that the client can save space or achieve load balancing. Authenticating the hash table functionality, i.e., verifying the correctness of queries answered by the server and ensuring the integrity of the stored data, is crucial because the server, lying outside the administrative control of the client, can be malicious. We design efficient and secure protocols for optimally authenticating membership queries on hash tables: for any fixed constants 0 < ε < 1 and κ > 1/ε, the server can provide a proof of integrity of the answer to a (non-)membership query in constant time, requiring O(n^ε / log^(κε−1) n) time to treat updates, yet keeping the communication and verification costs constant. This is the first construction for authenticating a hash table with constant query cost and sublinear update cost. Our solution employs the RSA accumulator in a nested way over the stored data, strictly improving upon previous accumulator-based solutions. Our construction applies to two concrete data authentication models and lends itself to a scheme that achieves different trade-offs, namely, constant update time and O(n^ε / log^(κε) n) query time for fixed ε > 0 and κ > 0. An experimental evaluation of our solution shows very good scalability. %B CCS '08 Proceedings of the 15th ACM Conference on Computer and Communications Security %S CCS '08 %I ACM %P 437 - 448 %8 2008/// %@ 978-1-59593-810-7 %G eng %U http://doi.acm.org/10.1145/1455770.1455826 %0 Conference Paper %B Proceedings of the first workshop on Online social networks %D 2008 %T Authenticated out-of-band communication over social links %A Ramachandran,Anirudh V. %A Feamster, Nick %K Authentication %K Security %K social networks %X Many existing host-based applications rely on their own authentication mechanisms and peer discovery services. Although social networking sites already provide mechanisms for users both to discover other users (e.g., by logging on to the social network Web site) and to communicate securely with each other (e.g., using instant messages within the social networking site), today's applications have no way to exploit the relationships and trust that are inherent in these networks. This paper proposes Authenticatr, a framework that allows applications to use the authentication and peer discovery mechanisms inherent in social networking sites to bootstrap their own authenticated communication channels. We describe motivating applications, detail the interface that Authenticatr exposes to applications, and discuss practical considerations and security threats.
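As a rough illustration of the bootstrapping idea described in the Authenticatr abstract above, the Python sketch below shows an application verifying a peer's message using key material obtained through a social-network directory. The SocialDirectory class, its methods, and the HMAC-based handshake are invented stand-ins for illustration only, not the interface or mechanism proposed in the paper.

# Hypothetical sketch: bootstrapping an authenticated channel from a social link.
# The social network is modeled as a directory that hands out key material
# exchanged out-of-band over an existing social connection. Names and
# interfaces are illustrative only, not the Authenticatr API.
import hashlib
import hmac
import os

class SocialDirectory:
    """Toy stand-in for a social-network peer-discovery service."""
    def __init__(self):
        self._keys = {}          # social identity -> shared secret

    def register(self, identity: str) -> bytes:
        secret = os.urandom(32)  # delivered to friends over the social channel
        self._keys[identity] = secret
        return secret

    def lookup(self, identity: str) -> bytes:
        return self._keys[identity]

def sign(secret: bytes, message: bytes) -> bytes:
    return hmac.new(secret, message, hashlib.sha256).digest()

def verify(secret: bytes, message: bytes, tag: bytes) -> bool:
    return hmac.compare_digest(sign(secret, message), tag)

# Usage: Alice's application authenticates a message from Bob using the key
# material discovered through their social link.
directory = SocialDirectory()
bob_secret = directory.register("bob")
message = b"hello from bob's app"
tag = sign(bob_secret, message)                       # Bob's side
assert verify(directory.lookup("bob"), message, tag)  # Alice's side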
%B Proceedings of the first workshop on Online social networks %S WOSN '08 %I ACM %C New York, NY, USA %P 61 - 66 %8 2008/// %@ 978-1-60558-182-8 %G eng %U http://doi.acm.org/10.1145/1397735.1397749 %R 10.1145/1397735.1397749 %0 Journal Article %J Robotics and Autonomous Systems %D 2008 %T Automated design of distributed control rules for the self-assembly of prespecified artificial structures %A Grushin,Alexander %A Reggia, James A. %K Collective problem solving %K Coordination %K Self-assembly %K Stigmergy %K Swarm intelligence %X The self-assembly problem involves the design of agent-level control rules that will cause the agents to form some desired, target structure, subject to environmental constraints. This paper describes a fully automated rule generation procedure that allows structures to successfully self-assemble in a simulated environment with constrained, continuous motion. This environment implicitly imposes ordering constraints on the self-assembly process, where certain parts of the target structure must be assembled before others, and where it may be necessary to assemble (and subsequently disassemble) temporary structures such as staircases. A provably correct methodology is presented for computing a partial order on the self-assembly process, and for generating rules that enforce this order at runtime. The assembly and disassembly of structures is achieved by generating another set of rules, which are inspired by construction behavior among certain species of social insects. Computational experiments verify the effectiveness of the approach on a diverse set of target structures. %B Robotics and Autonomous Systems %V 56 %P 334 - 359 %8 2008/04/30/ %@ 0921-8890 %G eng %U http://www.sciencedirect.com/science/article/pii/S092188900700111X %N 4 %R 10.1016/j.robot.2007.08.006 %0 Journal Article %J Advances in Neural Information Processing Systems %D 2008 %T Automatic online tuning for fast Gaussian summation %A Morariu,V. %A Srinivasan,B.V. %A Raykar,V.C. %A Duraiswami, Ramani %A Davis, Larry S. %X Many machine learning algorithms require the summation of Gaussian kernel functions, an expensive operation if implemented straightforwardly. Several methods have been proposed to reduce the computational complexity of evaluating such sums, including tree and analysis based methods. These achieve varying speedups depending on the bandwidth, dimension, and prescribed error, making the choice between methods difficult for machine learning tasks. We provide an algorithm that combines tree methods with the Improved Fast Gauss Transform (IFGT). As originally proposed the IFGT suffers from two problems: (1) the Taylor series expansion does not perform well for very low bandwidths, and (2) parameter selection is not trivial and can drastically affect performance and ease of use. We address the first problem by employing a tree data structure, resulting in four evaluation methods whose performance varies based on the distribution of sources and targets and input parameters such as desired accuracy and bandwidth. To solve the second problem, we present an online tuning approach that results in a black box method that automatically chooses the evaluation method and its parameters to yield the best performance for the input data, desired accuracy, and bandwidth. In addition, the new IFGT parameter selection approach allows for tighter error bounds. Our approach chooses the fastest method at negligible additional cost, and has superior performance in comparisons with previous approaches.
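For concreteness, the following minimal NumPy sketch shows the direct O(NM) Gaussian kernel summation that the tree-based and IFGT evaluators discussed above are designed to approximate; it is only the baseline computation, not the paper's tuned algorithm, and the bandwidth convention exp(-||y - x||^2 / h^2) is an assumption.

# Minimal sketch of direct Gaussian kernel summation (the O(N*M) baseline that
# IFGT/tree-based methods approximate). Not the paper's auto-tuned algorithm.
import numpy as np

def gauss_sum_direct(sources, targets, weights, bandwidth):
    """G(y_j) = sum_i w_i * exp(-||y_j - x_i||^2 / h^2) for every target y_j."""
    # Pairwise squared distances between targets (M, d) and sources (N, d).
    d2 = ((targets[:, None, :] - sources[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-d2 / bandwidth**2) @ weights

rng = np.random.default_rng(0)
X = rng.random((1000, 3))      # sources
Y = rng.random((200, 3))       # targets
w = rng.random(1000)           # source weights
print(gauss_sum_direct(X, Y, w, bandwidth=0.5).shape)  # (200,)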
%B Advances in Neural Information Processing Systems %8 2008/// %G eng %0 Journal Article %J ACM Transactions on Software Engineering and Methodology (TOSEM) %D 2008 %T Automatically repairing event sequence-based GUI test suites for regression testing %A Memon, Atif M. %K Graphical user interfaces %K regression testing %K repairing test cases %K test case management %K test maintenance %X Although graphical user interfaces (GUIs) constitute a large part of the software being developed today and are typically created using rapid prototyping, there are no effective regression testing techniques for GUIs. The needs of GUI regression testing differ from those of traditional software. When the structure of a GUI is modified, test cases from the original GUI's suite are either reusable or unusable on the modified GUI. Because GUI test case generation is expensive, our goal is to make the unusable test cases usable, thereby helping to retain the suite's event coverage. The idea of reusing these unusable (obsolete) test cases has not been explored before. This article shows that a large number of test cases become unusable for GUIs. It presents a new GUI regression testing technique that first automatically determines the usable and unusable test cases from a test suite after a GUI modification, then determines the unusable test cases that can be repaired so that they can execute on the modified GUI, and finally uses repairing transformations to repair the test cases. This regression testing technique along with four repairing transformations has been implemented. An empirical study for four open-source applications demonstrates that (1) this approach is effective in that many of the test cases can be repaired, and is practical in terms of its time performance, (2) certain types of test cases are more prone to becoming unusable, and (3) certain types of “dominator” events, when modified, make a large number of test cases unusable. %B ACM Transactions on Software Engineering and Methodology (TOSEM) %V 18 %P 4:1–4:36 - 4:1–4:36 %8 2008/11// %@ 1049-331X %G eng %U http://doi.acm.org/10.1145/1416563.1416564 %N 2 %R 10.1145/1416563.1416564 %0 Journal Article %J Intelligent Systems, IEEE %D 2008 %T AVA: Adjective-Verb-Adverb Combinations for Sentiment Analysis %A V.S. Subrahmanian %A Reforgiato,Diego %X Most research on determining the strength of subjective expressions in a sentence or document uses single, specific parts of speech such as adjectives, adverbs, or verbs. To date, almost no research covers the development of a single comprehensive framework in which we can analyze sentiment that takes all three into account. The authors propose the AVA (adjective verb adverb) framework for identifying opinions on any given topic. In AVA, a user can select any topic t of interest and any document d. AVA will return a score that d expresses topic t. The score is expressed on a #x02013;1 (maximally negative) to +1 (maximally positive) scale. %B Intelligent Systems, IEEE %V 23 %P 43 - 50 %8 2008/08//july %@ 1541-1672 %G eng %N 4 %R 10.1109/MIS.2008.57 %0 Conference Paper %B Proceedings of the 3rd USENIX workshop on Steps to reducing unwanted traffic on the internet %D 2007 %T Accountability as a service %A Bender,Adam %A Spring, Neil %A Levin,Dave %A Bhattacharjee, Bobby %X We propose that accountability be a first-class network service, independent of addressing and routing. 
We design a scheme for allowing accountability services, rather than connectivity-providing ISPs, to vouch for traffic, allowing victims to report abuse, filter abusive traffic, and isolate malicious senders. We discuss how accountability services may evolve, how they may facilitate new applications, and the implications of shifting the burden of network policing to a dedicated service. %B Proceedings of the 3rd USENIX workshop on Steps to reducing unwanted traffic on the internet %S SRUTI'07 %I USENIX Association %C Berkeley, CA, USA %P 5:1–5:6 - 5:1–5:6 %8 2007/// %G eng %U http://dl.acm.org/citation.cfm?id=1361436.1361441 %0 Report %D 2007 %T ACE: A Novel Software Platform to Ensure the Integrity of Long Term Archives %A Song,Sangchul %A JaJa, Joseph F. %K Technical Report %X We develop a new methodology to address the integrity of long term archives using rigorous cryptographic techniques. A prototype system called ACE (Auditing Control Environment) was designed and developed based on this methodology. ACE creates a small-size integrity token for each digital object and some cryptographic summary information based on all the objects handled within a dynamic time period. ACE continuously audits the contents of the various objects according to the policy set by the archive, and provides mechanisms for an independent third-party auditor to certify the integrity of any object. In fact, our approach will allow an independent auditor to verify the integrity of every version of an archived digital object as well as link the current version to the original form of the object when it was ingested into the archive. We show that ACE is very cost effective and scalable while making no assumptions about the archive architecture. We include in this paper some preliminary results on the validation and performance of ACE on a large image collection. %I Institute for Advanced Computer Studies, Univ of Maryland, College Park %V UMIACS-TR-2007-07 %8 2007/01/31/ %G eng %U http://drum.lib.umd.edu/handle/1903/4047 %0 Conference Paper %B Proceedings of the thirty-ninth annual ACM symposium on Theory of computing %D 2007 %T On achieving the "best of both worlds" in secure multiparty computation %A Katz, Jonathan %K secure computation %X Two settings are typically considered for secure multiparty computation, depending on whether or not a majority of the parties are assumed to be honest. Protocols designed under this assumption provide "full security" (and, in particular, guarantee output delivery and fairness) when this assumption is correct; however, if half or more of the parties are dishonest then security is completely compromised. On the other hand, protocols tolerating arbitrarily-many faults do not provide fairness or guaranteed output delivery even if only a single party is dishonest. It is natural to wonder whether it is possible to achieve the "best of both worlds": namely, a single protocol that simultaneously achieves the best possible security in both the above settings. Ishai et al. (Crypto 2006) recently addressed this question, and ruled out constant-round protocols of this type. As our main result, we completely settle the question by ruling out protocols using any (expected) polynomial number of rounds. Given this stark negative result, we then ask what can be achieved if we are willing to assume simultaneous message transmission (or, equivalently, a non-rushing adversary). In this setting, we show that impossibility still holds for logarithmic-round protocols. We also show, for any polynomial p, a protocol (whose round complexity depends on p) that can be simulated to within closeness O(1/p). %B Proceedings of the thirty-ninth annual ACM symposium on Theory of computing %S STOC '07 %I ACM %C New York, NY, USA %P 11 - 20 %8 2007/// %@ 978-1-59593-631-8 %G eng %U http://doi.acm.org/10.1145/1250790.1250793 %R 10.1145/1250790.1250793 %0 Journal Article %J Parallel Computing %D 2007 %T Active semantic caching to optimize multidimensional data analysis in parallel and distributed environments %A Andrade,Henrique %A Kurc,Tahsin %A Sussman, Alan %A Saltz,Joel %K Active semantic caching %K Parallel databases %K Query optimization %K Scientific data analysis %X In this paper, we present a multi-query optimization framework based on the concept of active semantic caching. The framework permits the identification and transparent reuse of data and computation in the presence of multiple queries (or query batches) that specify user-defined operators and aggregations originating from scientific data-analysis applications. We show how query scheduling techniques, coupled with intelligent cache replacement policies, can further improve the performance of query processing by leveraging the active semantic caching operators. We also propose a methodology for functionally decomposing complex queries in terms of primitives so that multiple reuse sites are exposed to the query optimizer, to increase the amount of reuse. The optimization framework and the database system implemented with it are designed to be efficient irrespective of the underlying parallel and/or distributed machine configuration. We present experimental results highlighting the performance improvements obtained by our methods using real scientific data-analysis applications on multiple parallel and distributed processing configurations (e.g., single symmetric multiprocessor (SMP) machine, cluster of SMP nodes, and a Grid computing configuration). %B Parallel Computing %V 33 %P 497 - 520 %8 2007/08// %@ 0167-8191 %G eng %U http://www.sciencedirect.com/science/article/pii/S0167819107000506 %N 7-8 %R 10.1016/j.parco.2007.03.001 %0 Journal Article %J Signal Processing Letters, IEEE %D 2007 %T Adaptive Detection for Group-Based Multimedia Fingerprinting %A He,Shan %A M. Wu %K adaptive detection %K collusion pattern %K copy protection %K detection statistics %K group detection %K group-based fingerprinting %K grouping strategy %K multimedia computing %K multimedia fingerprinting %K nongrouped orthogonal fingerprinting %K security of multimedia data %X Grouping strategy has been proposed recently to leverage the prior knowledge on collusion pattern to improve collusion resistance in multimedia fingerprinting. However, the improvement is not consistent: the existing group-based fingerprinting schemes perform worse than the non-grouped ones when the grouping does not match the true collusion pattern well. In this letter, we propose a new adaptive detection method, where the threshold for the group detection can be adjusted automatically according to the detection statistics that reflect the underlying collusion pattern. Experimental results show that the proposed adaptive detection outperforms nonadaptive detection and provides consistent performance improvement over non-grouped orthogonal fingerprinting schemes under various collusion scenarios.
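The following NumPy sketch gives a rough feel for correlation-based collusion detection with a data-driven threshold, in the spirit of the adaptive detector described above. The single-group setting, the correlation statistic, and the mean-plus-two-standard-deviations threshold rule are simplifying assumptions, not the letter's exact scheme.

# Rough sketch of correlation-based collusion detection with a data-adaptive
# threshold. The threshold rule (mean + 2*std of the per-user statistics) is a
# simplifying assumption, not the letter's exact decision rule.
import numpy as np

def detect_colluders(fingerprints, extracted, num_std=2.0):
    """fingerprints: (num_users, n) unit-norm rows; extracted: length-n signal."""
    stats = fingerprints @ extracted                  # correlation statistic per user
    threshold = stats.mean() + num_std * stats.std()  # data-driven threshold
    return np.nonzero(stats > threshold)[0], stats

rng = np.random.default_rng(1)
num_users, n = 50, 10_000
W = rng.standard_normal((num_users, n))
W /= np.linalg.norm(W, axis=1, keepdims=True)         # nearly orthogonal fingerprints
colluders = [3, 17, 42]
forged = W[colluders].mean(axis=0) + 0.02 * rng.standard_normal(n)  # averaging collusion
accused, stats = detect_colluders(W, forged)
print("accused users:", accused)                       # expected to include 3, 17, 42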
%B Signal Processing Letters, IEEE %V 14 %P 964 - 967 %8 2007/12// %@ 1070-9908 %G eng %N 12 %R 10.1109/LSP.2007.907992 %0 Conference Paper %B Wavelet Analysis and Pattern Recognition, 2007. ICWAPR '07. International Conference on %D 2007 %T An adaptive mean shift tracking method using multiscale images %A Zhuolin Jiang %A Li,Shao-Fa %A Gao,Dong-Fa %K adaptive mean shift tracking method %K bandwidth matrix %K Gaussian kernel %K Gaussian processes %K Image sequences %K log-likelihood function %K matrix algebra %K maximum likelihood estimation %K multiscale image %K Object detection %K object tracking %X An adaptive mean shift tracking method for object tracking using multiscale images is presented in this paper. A bandwidth matrix and a Gaussian kernel are used to extend the definition of target model. The method can exactly estimate the position of the tracked object using multiscale images from Gaussian pyramid. The tracking method determines the parameters of kernel bandwidth by maximizing the lower bound of a log-likelihood function, which is derived from a kernel density estimate with the bandwidth matrix and the modified weight function. The experimental results show that it can averagely converge in 2.55 iterations per frame. %B Wavelet Analysis and Pattern Recognition, 2007. ICWAPR '07. International Conference on %V 3 %P 1060 - 1066 %8 2007/11// %G eng %R 10.1109/ICWAPR.2007.4421589 %0 Journal Article %J Foundations and Trends in Databases %D 2007 %T Adaptive query processing %A Deshpande, Amol %A Ives,Zachary %A Vijayshankar Raman %X As the data management field has diversified to consider settings in which queries are increasingly complex, statistics are less available, or data is stored remotely, there has been an acknowledgment that the traditional optimize-then-execute paradigm is insufficient. This has led to a plethora of new techniques, generally placed under the common banner of adaptive query processing, that focus on using runtime feed-back to modify query processing in a way that provides better response time or more efficient CPU utilization.In this survey paper, we identify many of the common issues, themes, and approaches that pervade this work, and the settings in which each piece of work is most appropriate. Our goal with this paper is to be a "value-add" over the existing papers on the material, providing not only a brief overview of each technique, but also a basic framework for understanding the field of adaptive query processing in general. We focus primarily on intra-query adaptivity of long-running, but not full-fledged streaming, queries. We conclude with a discussion of open research problems that are of high importance. %B Foundations and Trends in Databases %V 1 %P 1 - 140 %8 2007/01// %@ 1931-7883 %G eng %U http://dx.doi.org/10.1561/1900000001 %N 1 %R 10.1561/1900000001 %0 Conference Paper %B Proceedings of the 33rd international conference on Very large data bases %D 2007 %T Adaptive query processing: why, how, when, what next? %A Deshpande, Amol %A Ives,Zachary %A Vijayshankar Raman %X Adaptive query processing has been the subject of a great deal of recent work, particularly in emerging data management environments such as data integration and data streams. We provide an overview of the work in this area, identifying its common themes, laying out the space of query plans, and discussing open research problems. 
We discuss why adaptive query processing is needed, how it is being implemented, where it is most appropriately used, and finally, what next, i.e., open research problems. %B Proceedings of the 33rd international conference on Very large data bases %S VLDB '07 %I VLDB Endowment %P 1426 - 1427 %8 2007/// %@ 978-1-59593-649-3 %G eng %U http://dl.acm.org/citation.cfm?id=1325851.1326033 %0 Conference Paper %B Computer Vision, 2007. ICCV 2007. IEEE 11th International Conference on %D 2007 %T Adaptive Vocabulary Forests for Dynamic Indexing and Category Learning %A Tom Yeh %A Lee,John %A Darrell,Trevor %X Histogram pyramid representations computed from a vocabulary tree of visual words have proven valuable for a range of image indexing and recognition tasks; however, they have only used a single, fixed partition of feature space. We present a new efficient algorithm to incrementally compute set-of-trees (forest) vocabulary representations, and show that they improve recognition and indexing performance in methods which use histogram pyramids. Our algorithm incrementally adapts a vocabulary forest with an Inverted file system at the leaf nodes and automatically keeps existing histogram pyramid database entries up-to-date in a forward filesystem. It is possible not only to apply vocabulary tree indexing algorithms directly, but also to compute pyramid match kernel values efficiently. On dynamic recognition tasks where categories or objects under consideration may change over time, we show that adaptive vocabularies offer significant performance advantages in comparison to a single, fixed vocabulary. %B Computer Vision, 2007. ICCV 2007. IEEE 11th International Conference on %I IEEE %P 1 - 8 %8 2007/10/14/21 %@ 978-1-4244-1630-1 %G eng %U http://ieeexplore.ieee.org/xpls/abs_all.jsp?arnumber=4409053 %R 10.1109/ICCV.2007.4409053 %0 Book Section %B Scalable Uncertainty Management %D 2007 %T Aggregates in Generalized Temporally Indeterminate Databases %A Udrea,Octavian %A Majkić,Zoran %A Subrahmanian,V. %E Prade,Henri %E Subrahmanian,V. %K Computer Science %X Dyreson and Snodgrass as well as Dekhtyar et al. have provided a probabilistic model (as well as compelling example applications) for why there may be temporal indeterminacy in databases. In this paper, we first propose a formal model for aggregate computation in such databases when there is uncertainty not just in the temporal attribute, but also in the ordinary (non-temporal) attributes. We identify two types of aggregates: event correlated aggregates, and non event correlated aggregations, and provide efficient algorithms for both of them. We prove that our algorithms are correct, and we present experimental results showing that the algorithms work well in practice. %B Scalable Uncertainty Management %S Lecture Notes in Computer Science %I Springer Berlin / Heidelberg %V 4772 %P 171 - 186 %8 2007/// %@ 978-3-540-75407-7 %G eng %U http://dx.doi.org/10.1007/978-3-540-75410-7_13 %0 Book Section %B Agile software development quality assurance %D 2007 %T Agile Quality Assurance Techniques for GUI-Based Applications %A Memon, Atif M. %A Xie,Q.
%B Agile software development quality assurance %I Idea Group Inc (IGI) %P 114 - 114 %8 2007/// %@ 1599042169, 9781599042169 %G eng %0 Journal Article %J Journal of Computing and Information Science in Engineering %D 2007 %T Algorithms for on-line monitoring of micro spheres in an optical tweezers-based assembly cell %A Peng,T. %A Balijepalli,A. %A Gupta,S.K. %A LeBrun,T. %X Optical tweezers have emerged as a powerful tool for micro- and nanomanipulation. Using optical tweezers to perform automated assembly requires on-line monitoring of components in the assembly workspace. This paper presents algorithms for estimating positions and orientations of microscale and nanoscale components in the 3-Dimensional assembly workspace. Algorithms presented in this paper use images obtained by optical section microscopy. The images are first segmented to locate areas of interest and then image gradient information from the areas of interest is used to generate probable locations and orientations of components in the XY-plane. Finally, signature curves are computed and utilized to obtain component locations and orientations in 3-D space. We have tested these algorithms with silica micro-spheres as well as metallic nanowires. We believe that the algorithms described in this paper will provide the foundation for realizing automated assembly operations in optical tweezers-based assembly cells. %B Journal of Computing and Information Science in Engineering %V 7 %P 330 - 330 %8 2007/// %G eng %0 Journal Article %J Protein Engineering, Design and Selection %D 2007 %T Amino acid quantitative structure property relationship database: a web-based platform for quantitative investigations of amino acids %A Lu,Yi %A Bulka,Blazej %A desJardins, Marie %A Freeland,Stephen J %K Amino acids %K database %K QSPR %K XML %X Here, we present the AA-QSPR Db (Amino Acid Quantitative Structure Property Relationship Database): a novel, freely available web-resource of data pertaining to amino acids, both engineered and naturally occurring. In addition to presenting fundamental molecular descriptors of size, charge and hydrophobicity, it also includes online visualization tools for users to perform instant, interactive analyses of amino acid sub-sets in which they are interested. The database has been designed with extensible markup language technology to provide a flexible structure, suitable for future development. In addition to providing easy access for queries by external computers, it also offers a user-friendly web-based interface that facilitates human interactions (submission, storage and retrieval of amino acid data) and an associated e-forum that encourages users to question and discuss current and future database contents. %B Protein Engineering, Design and Selection %V 20 %P 347 - 351 %8 2007/07/01/ %@ 1741-0126, 1741-0134 %G eng %U http://peds.oxfordjournals.org/content/20/7/347 %N 7 %R 10.1093/protein/gzm027 %0 Conference Paper %B Image Processing, 2007. ICIP 2007. IEEE International Conference on %D 2007 %T Analysis of Nonlinear Collusion Attacks on Fingerprinting Systems for Compressed Multimedia %A Varna,A.L. %A He,Shan %A Swaminathan,A. %A M.
Wu %K collusion attacks %K compressed multimedia data %K data compression %K fingerprinting systems %K image coding %K multimedia coding %K nonlinear collusion %K security of data %K watermarking %X In this paper, we analyze the effect of various collusion attacks on fingerprinting systems for compressed multimedia. We evaluate the effectiveness of the collusion attacks in terms of the probability of detection and accuracy in estimating the host signal. Our analysis shows that applying averaging collusion on copies of moderately compressed content gives a highly accurate estimation of the host, and can effectively remove the embedded fingerprints. Averaging is thus the best choice for an attacker as the probability of detection and the distortion introduced are the lowest. %B Image Processing, 2007. ICIP 2007. IEEE International Conference on %V 2 %P II-133 - II-136 %8 2007/10/16/19 %G eng %R 10.1109/ICIP.2007.4379110 %0 Report %D 2007 %T Analysis of the Residual Arnoldi Method %A Lee,Che-Rung %A Stewart, G.W. %K Technical Report %X The Arnoldi method generates a nested sequence of orthonormal bases $U_{1}, U_{2}, \ldots$ by orthonormalizing $Au_{k}$ against $U_{k}$. Frequently these bases contain increasingly accurate approximations of eigenpairs from the periphery of the spectrum of $A$. However, the convergence of these approximations stagnates if $u_{k}$ is contaminated by error. It has been observed that if one chooses a Rayleigh--Ritz approximation $(\mu_{k}, z_{k})$ to a chosen target eigenpair $(\lambda, x)$ and orthonormalizes the residual $Az_{k} - \mu_{k} z_{k}$, the approximations to $x$ (but not the other eigenvectors) continue to converge, even when the residual is contaminated by error. The same is true of the shift-invert variant of Arnoldi's method. In this paper we give a mathematical analysis of these new methods. %I Institute for Advanced Computer Studies, Univ of Maryland, College Park %V UMIACS-TR-2007-45 %8 2007/10/15/ %G eng %U http://drum.lib.umd.edu/handle/1903/7428 %0 Journal Article %J Conference on Programming Language Design and Implementation: Proceedings of the 2007 workshop on Programming languages and analysis for security %D 2007 %T Analyzing information flow %A Hicks, Michael W. %B Conference on Programming Language Design and Implementation: Proceedings of the 2007 workshop on Programming languages and analysis for security %8 2007/// %G eng %0 Journal Article %J Computational Linguistics %D 2007 %T Answering Clinical Questions with Knowledge-Based and Statistical Techniques %A Demner-Fushman,Dina %A Jimmy Lin %X The combination of recent developments in question-answering research and the availability of unparalleled resources developed specifically for automatic semantic processing of text in the medical domain provides a unique opportunity to explore complex question answering in the domain of clinical medicine. This article presents a system designed to satisfy the information needs of physicians practicing evidence-based medicine. We have developed a series of knowledge extractors, which employ a combination of knowledge-based and statistical techniques, for automatically identifying clinically relevant aspects of MEDLINE abstracts. These extracted elements serve as the input to an algorithm that scores the relevance of citations with respect to structured representations of information needs, in accordance with the principles of evidence-based medicine.
Starting with an initial list of citations retrieved by PubMed, our system can bring relevant abstracts into higher ranking positions, and from these abstracts generate responses that directly answer physicians' questions. We describe three separate evaluations: one focused on the accuracy of the knowledge extractors, one conceptualized as a document reranking task, and finally, an evaluation of answers by two physicians. Experiments on a collection of real-world clinical questions show that our approach significantly outperforms the already competitive PubMed baseline. %B Computational Linguistics %V 33 %P 63 - 103 %8 2007/// %@ 0891-2017 %G eng %U http://dx.doi.org/10.1162/coli.2007.33.1.63 %N 1 %R i: 10.1162/coli.2007.33.1.63 %0 Journal Article %J Pattern Analysis and Machine Intelligence, IEEE Transactions on %D 2007 %T Appearance Characterization of Linear Lambertian Objects, Generalized Photometric Stereo, and Illumination-Invariant Face Recognition %A Zhou,S. K %A Aggarwal,G. %A Chellapa, Rama %A Jacobs, David W. %K albedo field;appearance characterization;generalized photometric stereo algorithms;illumination-invariant face recognition;linear Lambertian objects;observation matrix factorization;face recognition;matrix decomposition;Algorithms;Artificial Intelligence; %K Automated;Photogrammetry;Reproducibility of Results;Sensitivity and Specificity; %K Computer-Assisted;Information Storage and Retrieval;Lighting;Linear Models;Pattern Recognition %X Traditional photometric stereo algorithms employ a Lambertian reflectance model with a varying albedo field and involve the appearance of only one object. In this paper, we generalize photometric stereo algorithms to handle all appearances of all objects in a class, in particular the human face class, by making use of the linear Lambertian property. A linear Lambertian object is one which is linearly spanned by a set of basis objects and has a Lambertian surface. The linear property leads to a rank constraint and, consequently, a factorization of an observation matrix that consists of exemplar images of different objects (e.g., faces of different subjects) under different, unknown illuminations. Integrability and symmetry constraints are used to fully recover the subspace bases using a novel linearized algorithm that takes the varying albedo field into account. The effectiveness of the linear Lambertian property is further investigated by using it for the problem of illumination-invariant face recognition using just one image. Attached shadows are incorporated in the model by a careful treatment of the inherent nonlinearity in Lambert's law. This enables us to extend our algorithm to perform face recognition in the presence of multiple illumination sources. Experimental results using standard data sets are presented %B Pattern Analysis and Machine Intelligence, IEEE Transactions on %V 29 %P 230 - 245 %8 2007/02// %@ 0162-8828 %G eng %N 2 %R 10.1109/TPAMI.2007.25 %0 Journal Article %J Technical Reports of the Computer Science Department %D 2007 %T Appendix to CMod: Modular Information Hiding and Type-Safe Linking for C %A Srivastava,S. %A Hicks, Michael W. %A Foster, Jeffrey S. %X This brief note is an appendix to the paper "CMod: Modular Information Hiding and Type-Safe Linking for C." It consists of the proof of soundness for the formal language presented in that paper. 
%B Technical Reports of the Computer Science Department %8 2007/06/30/undef %G eng %0 Conference Paper %B Third Language and Technology Conference %D 2007 %T Application of MCL in a dialog agent %A Josyula,D. P %A Fults,S. %A Anderson,M. L %A Wilson,S. %A Perlis, Don %B Third Language and Technology Conference %8 2007/// %G eng %0 Book %D 2007 %T Applied cryptography and network security: 5th international conference, ACNS 2007, Zhuhai, China, June 5-8, 2007: proceedings %A Katz, Jonathan %A Yung,M. %I Springer-Verlag New York Inc %V 4521 %8 2007/// %G eng %0 Conference Paper %D 2007 %T Applying Software Reliability Models on Security Incidents %A Condon,E. %A Michel Cukier %A He,Tao %K computer security incidents %K consumer confidence %K data theft %K network security incidents %K nonhomogenous Poisson process %K reliability growth process %K reputational damage %K security of data %K software reliability %K stock prices %X Computer and network security incidents have increasing financial consequences as demand for network accessibility and connectivity to resources continues to rise. These security incidents can lead to direct financial losses either through data theft of personal and/or proprietary information as well as a reputational damage which may negatively impact stock prices or consumer confidence in a company. This paper examines a large set of security incident data using tools from the software reliability community. We look at applying Non-Homogenous Poisson Process (NHPP) models as a method for describing the reliability growth process. We examine the full set of incidents as well as subsets of the data based on incident types. We look at using the Laplace test to guide selection of the appropriate models. Then, based on the trend results, we apply various NHPP models (i.e., Goel-Okumutu, S-Shaped, Duane, and K-Stage Curve) to illustrate the relevance of using these models to fit the incident data and to predict future incidents. %P 159 - 168 %8 2007/11// %G eng %R 10.1109/ISSRE.2007.29 %0 Conference Paper %B Proceedings of the eighteenth annual ACM-SIAM symposium on Discrete algorithms %D 2007 %T Approximation algorithms for stochastic and risk-averse optimization %A Srinivasan, Aravind %X We present improved approximation algorithms in stochastic optimization. We prove that the multi-stage stochastic versions of covering integer programs (such as set cover and vertex cover) admit essentially the same approximation algorithms as their standard (non-stochastic) counterparts; this improves upon work of Swamy & Shmoys that shows an approximability which depends multiplicatively on the number of stages. We also present approximation algorithms for facility location and some of its variants in the 2-stage recourse model, improving on previous approximation guarantees. %B Proceedings of the eighteenth annual ACM-SIAM symposium on Discrete algorithms %S SODA '07 %I Society for Industrial and Applied Mathematics %C Philadelphia, PA, USA %P 1305 - 1313 %8 2007/// %@ 978-0-898716-24-5 %G eng %U http://dl.acm.org/citation.cfm?id=1283383.1283523 %0 Journal Article %J Journal of Systems and Software %D 2007 %T Archetypal behavior in computer security %A Rosenfeld,Shalom N. 
%A Rus,Ioana %A Michel Cukier %K Attacks and countermeasures %K Escalation %K Limits to Growth %K Organization computer security %K System archetypes %K System dynamics %K System dynamics modeling and simulation %X The purpose of this study is to understand observed behavior and to diagnose and find solutions to issues encountered in organizational computer security using a systemic approach, namely system archetypes. In this paper we show the feasibility of archetypes application and the benefits of simulation. We developed a model and simulation of some aspects of security based on system dynamics principles. The system dynamics simulation model can be used in support of decision-making, training, and teaching regarding the mitigation of computer security risks. In this paper, we combine two archetypes and show the computer security relevance of such combinations. Presented are instances of the archetypes “Escalation”, in which an organization must continuously increase its efforts to counter additional attacker effort; and “Limits to Growth”, in which the gains from an organization’s security efforts plateau or decline due to its limited capacity for security-related tasks. We describe a scenario where these archetypes (individually and combined) can help in diagnosis and understanding, and present simulation of “what-if” scenarios suggesting how an organization might remedy these problems and maximize its gains from security efforts. %B Journal of Systems and Software %V 80 %P 1594 - 1606 %8 2007/10// %@ 0164-1212 %G eng %U http://www.sciencedirect.com/science/article/pii/S0164121207000155 %N 10 %R 10.1016/j.jss.2007.01.046 %0 Journal Article %J AIAA Paper 2007 %D 2007 %T Assessment of WENO methods with shock-confining filtering for LES of compressible turbulence %A Grube,N. E %A Taylor,E. M %A Martin, M.P %B AIAA Paper 2007 %V 4198 %8 2007/// %G eng %0 Journal Article %J Applied and Environmental Microbiology %D 2007 %T Association of Vibrio Cholerae O1 El Tor and O139 Bengal with the Copepods Acartia Tonsa and Eurytemora Affinis %A Rawlings,Tonya K. %A Ruiz,Gregory M. %A Rita R Colwell %X The association of Vibrio cholerae with zooplankton has been suggested as an important factor in transmission of human epidemic cholera, and the ability to colonize zooplankton surfaces may play a role in the temporal variation and predominance of the two different serogroups (V. cholerae O1 El Tor and O139) in the aquatic environment. To date, interactions between specific serogroups and species of plankton remain poorly understood. Laboratory microcosm experiments were carried out to compare quantitatively the colonization of two copepod species, Acartia tonsa and Eurytemora affinis, by each of the epidemic serogroups. V. cholerae O1 consistently achieved higher abundances than V. cholerae O139 in colonizing adults of each copepod species as well as the multiple life stages of E. affinis. This difference in colonization may be significant in the general predominance of V. cholerae O1 in cholera epidemics in rural Bangladesh where water supplies are taken directly from the environment. %B Applied and Environmental Microbiology
%V 73 %P 7926 - 7933 %8 2007/12/15/ %@ 0099-2240, 1098-5336 %G eng %U http://aem.asm.org/content/73/24/7926 %N 24 %R 10.1128/AEM.01238-07 %0 Conference Paper %B Proceedings of the 14th ACM conference on Computer and communications security %D 2007 %T Automated detection of persistent kernel control-flow attacks %A Petroni,Jr.,Nick L. %A Hicks, Michael W. %K CFI %K integrity %K Kernel %K rootkit %K virtualization %X This paper presents a new approach to dynamically monitoring operating system kernel integrity, based on a property called state-based control-flow integrity (SBCFI). Violations of SBCFI signal a persistent, unexpected modification of the kernel's control-flow graph. We performed a thorough analysis of 25 Linux rootkits and found that 24 (96%) employ persistent control-flow modifications; an informal study of Windows rootkits yielded similar results. We have implemented SBCFI enforcement as part of the Xen and VMware virtual machine monitors. Our implementation detected all the control-flow modifying rootkits we could install, while imposing unnoticeable overhead for both a typical web server workload and CPU-intensive workloads when operating at 10 second intervals. %B Proceedings of the 14th ACM conference on Computer and communications security %S CCS '07 %I ACM %C New York, NY, USA %P 103 - 115 %8 2007/// %@ 978-1-59593-703-2 %G eng %U http://doi.acm.org/10.1145/1315245.1315260 %R 10.1145/1315245.1315260 %0 Conference Paper %B Proceedings of the twenty-second IEEE/ACM international conference on Automated software engineering %D 2007 %T Automated gui testing guided by usage profiles %A Brooks,Penelope A. %A Memon, Atif M. %K event-driven software %K GUI testing %K usage profiles %X Most software developed in recent years has a graphical user interface (GUI). The only way for the end-user to interact with the software application is through the GUI. Hence, acceptance and system testing of the software requires GUI testing. This paper presents a new technique for testing of GUI applications. Information on the actual usage of the application, in the form of "usage profiles," is used to ensure that a new version of the application will function correctly. Usage profiles, sequences of events that end-users execute on a GUI, are used to develop a probabilistic usage model of the application. An algorithm uses the model to generate test cases that represent events the user is most likely to execute. Reverse engineering methods are used to extract the underlying structure of the application. An empirical study on four open source GUI applications reveals that test suites generated from the probabilistic model are 0.2-22% of the size of test suites produced directly from usage profiles.
Furthermore, the test suites generated from the model detect more faults per test case than those detected directly from the usage profiles, and detect faults not detected by the original profiles. %B Proceedings of the twenty-second IEEE/ACM international conference on Automated software engineering %S ASE '07 %I ACM %C New York, NY, USA %P 333 - 342 %8 2007/// %@ 978-1-59593-882-4 %G eng %U http://doi.acm.org/10.1145/1321631.1321681 %R 10.1145/1321631.1321681 %0 Conference Paper %B The 9th International Conference on Document Analysis and Recognition (ICDAR 2007) %D 2007 %T Automatic Document Logo Detection %A Zhu,Guangyu %A David Doermann %X Automatic logo detection and recognition continues to be of great interest to the document retrieval community as it enables effective identification of the source of a document. In this paper, we propose a new approach to logo detection and extraction in document images that robustly classifies and precisely localizes logos using a boosting strategy across multiple image scales. At a coarse scale, a trained Fisher classifier performs initial classification using features from document context and connected components. Each logo candidate region is further classified at successively finer scales by a cascade of simple classifiers, which allows false alarms to be discarded and the detected region to be refined. Our approach is segmentation free and layout independent. We define a meaningful evaluation metric to measure the quality of logo detection using labeled groundtruth. We demonstrate the effectiveness of our approach using a large collection of real-world documents. %B The 9th International Conference on Document Analysis and Recognition (ICDAR 2007) %C Curitiba, Brazil %P 864 - 868 %8 2007/// %G eng %0 Conference Paper %B Proceedings of the 2007 ACM/IEEE conference on Supercomputing %D 2007 %T Automatic software interference detection in parallel applications %A Tabatabaee, Vahid %A Hollingsworth, Jeffrey K %X We present an automated software interference detection methodology for Single Program, Multiple Data (SPMD) parallel applications. Interference comes from the system and unexpected processes. If not detected and corrected such interference may result in performance degradation. Our goal is to provide a reliable metric for software interference that can be used in soft-failure protection and recovery systems. A unique feature of our algorithm is that we measure the relative timing of application events (i.e. time between MPI calls) rather than system level events such as CPU utilization. This approach lets our system automatically accommodate natural variations in an application's utilization of resources. We use performance irregularities and degradation as signs of software interference. However, instead of relying on temporal changes in performance, our system detects spatial performance degradation across multiple processors. We also include a case study that demonstrates our technique's effectiveness, resilience and robustness. %B Proceedings of the 2007 ACM/IEEE conference on Supercomputing %S SC '07 %I ACM %C Reno, Nevada %P 14:1–14:12 - 14:1–14:12 %8 2007/// %@ 978-1-59593-764-3 %G eng %R 10.1145/1362622.1362642 %0 Journal Article %J IEEE internet computing %D 2007 %T Autonomic computing %A Brabec,F. %A Samet, Hanan %B IEEE internet computing %V 11 %P 52 - 59 %8 2007/// %G eng %N 1 %0 Conference Paper %D 2006 %T Accident or intention: That is the question (in the noisy iterated prisoner’s dilemma) %A Au,T. C %A Nau, Dana S.
%X This paper focuses on the Noisy Iterated Prisoner’s Dilemma, a version of the Iterated Prisoner’s Dilemma (IPD) in which there is a nonzero probability that a “cooperate” action will accidentally be changed into a “defect” action and vice versa. Tit-For-Tat and other strategies that do quite well in the ordinary (non-noisy) IPD can do quite badly in the Noisy IPD. This paper presents a technique called symbolic noise detection, for detecting whether anomalies in player’s behavior are deliberate or accidental. The key idea is to construct a model of the other agent’s behavior, and watch for any deviation from this model. If the other agent’s next action is inconsistent with this model, the inconsistency can be due either to noise or to a genuine change in their behavior; and we can often distinguish between the two cases by waiting to see whether this inconsistency persists in the next few moves. We entered several different versions of our strategy in the 2005 Iterated Prisoner’s Dilemma competition, in Category 2 (noisy environments). Out of the 165 contestants in this category, our programs consistently ranked among the top ten. The best of our programs ranked third, and it was beaten only by two “master-slave strategy” programs that each had a large number of “slave” programs feeding points to them. %P 561 - 568 %8 2006/// %G eng %U http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.120.7640&rep=rep1&type=pdf %0 Conference Paper %B Proceedings of the twenty-fifth ACM SIGMOD-SIGACT-SIGART symposium on Principles of database systems %D 2006 %T Achieving anonymity via clustering %A Aggarwal,Gagan %A Feder,Tomás %A Kenthapadi,Krishnaram %A Khuller, Samir %A Panigrahy,Rina %A Thomas,Dilys %A Zhu,An %K anonymity %K Approximation algorithms %K clustering %K privacy %X Publishing data for analysis from a table containing personal records, while maintaining individual privacy, is a problem of increasing importance today. The traditional approach of de-identifying records is to remove identifying fields such as social security number, name etc. However, recent research has shown that a large fraction of the US population can be identified using non-key attributes (called quasi-identifiers) such as date of birth, gender, and zip code [15]. Sweeney [16] proposed the k-anonymity model for privacy where non-key attributes that leak information are suppressed or generalized so that, for every record in the modified table, there are at least k−1 other records having exactly the same values for quasi-identifiers. We propose a new method for anonymizing data records, where quasi-identifiers of data records are first clustered and then cluster centers are published. To ensure privacy of the data records, we impose the constraint that each cluster must contain no fewer than a pre-specified number of data records. This technique is more general since we have a much larger choice for cluster centers than k-Anonymity. In many cases, it lets us release a lot more information without compromising privacy. We also provide constant-factor approximation algorithms to come up with such a clustering. This is the first set of algorithms for the anonymization problem where the performance is independent of the anonymity parameter k. We further observe that a few outlier points can significantly increase the cost of anonymization. Hence, we extend our algorithms to allow an ε fraction of points to remain unclustered, i.e., deleted from the anonymized publication.
Thus, by not releasing a small fraction of the database records, we can ensure that the data published for analysis has less distortion and hence is more useful. Our approximation algorithms for new clustering objectives are of independent interest and could be applicable in other clustering scenarios as well. %B Proceedings of the twenty-fifth ACM SIGMOD-SIGACT-SIGART symposium on Principles of database systems %S PODS '06 %I ACM %C New York, NY, USA %P 153 - 162 %8 2006/// %@ 1-59593-318-2 %G eng %U http://doi.acm.org/10.1145/1142351.1142374 %R 10.1145/1142351.1142374 %0 Book Section %B Articulated Motion and Deformable Objects %D 2006 %T Acquisition of Articulated Human Body Models Using Multiple Cameras %A Sundaresan,Aravind %A Chellapa, Rama %E Perales,Francisco %E Fisher,Robert %X Motion capture is an important application in different areas such as biomechanics, computer animation, and human-computer interaction. Current motion capture methods typically use human body models in order to guide pose estimation and tracking. We model the human body as a set of tapered super-quadrics connected in an articulated structure and propose an algorithm to automatically estimate the parameters of the model using video sequences obtained from multiple calibrated cameras. Our method is based on the fact that the human body is constructed of several articulated chains that can be visualised as essentially 1-D segments embedded in 3-D space and connected at specific joint locations. The proposed method first computes a voxel representation from the images and maps the voxels to a high dimensional space in order to extract the 1-D structure. A bottom-up approach is then suggested in order to build a parametric (spline-based) representation of a general articulated body in the high dimensional space followed by a top-down probabilistic approach that registers the segments to the known human body model. We then present an algorithm to estimate the parameters of our model using the segmented and registered voxels. %B Articulated Motion and Deformable Objects %S Lecture Notes in Computer Science %I Springer Berlin / Heidelberg %V 4069 %P 78 - 89 %8 2006/// %@ 978-3-540-36031-5 %G eng %U http://dx.doi.org/10.1007/11789239_9 %0 Journal Article %J On the Move to Meaningful Internet Systems 2006: CoopIS, DOA, GADA, and ODBASE %D 2006 %T An adaptive probabilistic replication method for unstructured p2p networks %A Tsoumakos,D. %A Roussopoulos, Nick %B On the Move to Meaningful Internet Systems 2006: CoopIS, DOA, GADA, and ODBASE %P 480 - 497 %8 2006/// %G eng %0 Journal Article %J ACM Transactions on Database Systems (TODS) %D 2006 %T Adaptive pull-based policies for wide area data delivery %A Bright,L. %A Gal,A. %A Raschid, Louiqa %B ACM Transactions on Database Systems (TODS) %V 31 %P 631 - 671 %8 2006/// %G eng %N 2 %0 Conference Paper %B Acoustics, Speech and Signal Processing, 2006. ICASSP 2006 Proceedings. 2006 IEEE International Conference on %D 2006 %T An Adaptive Threshold Method for Hyperspectral Target Detection %A Broadwater, J. %A Chellapa, Rama %K adaptive threshold method %K background statistics %K hyperspectral target detection %K inverse blind importance sampling %K geophysical signal processing %K importance sampling %K object detection %X In this paper, we present a new approach to automatically determine a detector threshold.
This research problem is especially important in hyperspectral target detection as targets are typically very similar to the background. While a number of methods exist to determine the threshold, these methods require either large amounts of data or make simplifying assumptions about the background distribution. We use a method called inverse blind importance sampling which requires few samples and makes no a-priori assumptions about the background statistics. Results show the promise of this algorithm to determine thresholds for fixed false alarm densities in hyperspectral detectors. %B Acoustics, Speech and Signal Processing, 2006. ICASSP 2006 Proceedings. 2006 IEEE International Conference on %V 5 %P V - V %8 2006/05// %G eng %R 10.1109/ICASSP.2006.1661497 %0 Conference Paper %B Proceedings of the 11th Conference on European Chapter of the Association for Computational Linguistics %D 2006 %T Adaptive Transformation-based Learning for Improving Dictionary Tagging %A Karagol-Ayan,Burcu %A Doermann, David %A Weinberg, Amy %X We present an adaptive technique that enables users to produce a high quality dictionary parsed into its lexicographic components (headwords, parts of speech, translations, etc.) using an extremely small amount of user provided training data. We use transformation-based learning (TBL) as a postprocessor at two points in the system to improve performance. The results show that the tagging accuracy is increased from 83% and 91% to 93% and 94% for individual words or tokens, and from 64% and 83% to 90% and 93% for contiguous phrases such as definitions or examples of usage. %B Proceedings of the 11th Conference on European Chapter of the Association for Computational Linguistics %C Trento, Italy %P 257 - 264 %8 2006/04// %G eng %0 Journal Article %J Proceedings of the Third International WordNet Conference %D 2006 %T Adding dense, weighted connections to wordnet %A Jordan Boyd-Graber %A Fellbaum,C. %A Osherson,D. %A Schapire,R. %X WORDNET, a ubiquitous tool for natural language processing, suffers from sparsity of connections between its component concepts (synsets). Through the use of human annotators, a subset of the connections between 1000 hand-chosen synsets was assigned a value of “evocation” representing how much the first concept brings to mind the second. These data, along with existing similarity measures, constitute the basis of a method for predicting evocation between previously unrated pairs. %B Proceedings of the Third International WordNet Conference %P 29 - 36 %8 2006/// %G eng %0 Book %D 2006 %T Advances in Computers: Computational Biology and Bioinformatics %A Zelkowitz, Marvin V %A Tseng,Chau-Wen %K Bioinformatics %K Computational Biology %K Computers / Bioinformatics %K Computers / Computer Science %K Computers / Data Processing %K Computers / Information Theory %K Computers / Interactive & Multimedia %K Computers / Programming / General %K Computers / Reference %K Computers / Software Development & Engineering / General %K Mathematics / Applied %K Science / Life Sciences / Biology %X The field of bioinformatics and computational biology arose due to the need to apply techniques from computer science, statistics, informatics, and applied mathematics to solve biological problems. Scientists have been trying to study biology at a molecular level using techniques derived from biochemistry, biophysics, and genetics. Progress has greatly accelerated with the discovery of fast and inexpensive automated DNA sequencing techniques.
As the genomes of more and more organisms are sequenced and assembled, scientists are discovering many useful facts by tracing the evolution of organisms by measuring changes in their DNA, rather than through physical characteristics alone. This has led to rapid growth in the related fields of phylogenetics, the study of evolutionary relatedness among various groups of organisms, and comparative genomics, the study of the correspondence between genes and other genomic features in different organisms. Comparing the genomes of organisms has allowed researchers to better understand the features and functions of DNA in individual organisms, as well as provide insights into how organisms evolve over time. The first four chapters of this book focus on algorithms for comparing the genomes of different organisms. Possible concrete applications include identifying the basis for genetic diseases and tracking the development and spread of different forms of Avian flu. As researchers begin to better understand the function of DNA, attention has begun shifting towards the actual proteins produced by DNA. The final two chapters explore proteomic techniques for analyzing proteins directly to identify their presence and understand their physical structure. Written by active PhD researchers in computational biology and bioinformatics. %I Academic Press %8 2006/11/22/ %@ 9780120121687 %G eng %0 Journal Article %J Journal of Algorithms %D 2006 %T Algorithms for non-uniform size data placement on parallel disks %A Kashyap,Srinivas %A Khuller, Samir %X We study an optimization problem that arises in the context of data placement in a multimedia storage system. We are given a collection of M multimedia objects (data items) that need to be assigned to a storage system consisting of N disks d_1, d_2, …, d_N. We are also given sets U_1, U_2, …, U_M such that U_i is the set of clients seeking the ith data item. Data item i has size s_i. Each disk d_j is characterized by two parameters, namely, its storage capacity C_j which indicates the maximum total size of data items that may be assigned to it, and a load capacity L_j which indicates the maximum number of clients that it can serve. The goal is to find a placement of data items to disks and an assignment of clients to disks so as to maximize the total number of clients served, subject to the capacity constraints of the storage system. We study this data placement problem for homogeneous storage systems where all the disks are identical. We assume that all disks have a storage capacity of k and a load capacity of L. Previous work on this problem has assumed that all data items have unit size, in other words s_i = 1 for all i. Even for this case, the problem is NP-hard. For the case where s_i ∈ {1, …, Δ} for some constant Δ, we develop a polynomial time approximation scheme (PTAS). This result is obtained by developing two algorithms, one that works for constant k and one that works for arbitrary k. The algorithm for arbitrary k guarantees a solution where at least a ((k − Δ)/(k + Δ))(1 − 1/(1 + k/(2Δ))^2)-fraction of all clients are assigned to a disk (under certain assumptions). In addition we develop an algorithm for which we can prove tight bounds when s_i ∈ {1, 2}. In fact, we can show that a (1 − 1/(1 + ⌊k/2⌋)^2)-fraction of all clients can be assigned (under certain natural assumptions), regardless of the input distribution.
%B Journal of Algorithms %V 60 %P 144 - 167 %8 2006/08// %@ 0196-6774 %G eng %U http://www.sciencedirect.com/science/article/pii/S0196677404001063 %N 2 %R 10.1016/j.jalgor.2004.06.007 %0 Conference Paper %D 2006 %T Algorithms for on-line monitoring of components in an optical tweezers-based assembly cell %A Peng,T. %A Balijepalli,A. %A Gupta,S.K. %A LeBrun,T. W. %X Optical tweezers have emerged as a powerful tool for micro- and nanomanipulation. Using optical tweezers to perform automated assembly requires on-line monitoring of components in the assembly workspace. This paper presents algorithms for estimating positions and orientations of microscale and nanoscale components in the 3-Dimensional assembly workspace. Algorithms presented in this paper use images obtained by optical section microscopy. The images are first segmented to locate areas of interest and then image gradient information from the areas of interest is used to generate probable locations and orientations of components in the XY-plane. Finally, signature curves are computed and utilized to obtain component locations and orientations in 3-D space. We have tested these algorithms with silica micro-spheres as well as metallic nanowires. We believe that the algorithms described in this paper will provide the foundation for realizing automated assembly operations in optical tweezers-based assembly cells. %P 1 - 13 %8 2006/// %G eng %U http://www.glue.umd.edu/~skgupta/Publication/CIE06_Peng1.pdf %0 Journal Article %J Journal of Research of the National Institute of Standards and Technology %D 2006 %T Algorithms for Structured Total Least Squares Problems with Applications to Blind Image Deblurring %A Kalsi,Anoop %A O'Leary, Dianne P. %X Mastronardi, Lemmerling, and Van Huffel presented an algorithm for solving a total least squares problem when the matrix and its perturbations are Toeplitz. A Toeplitz matrix is a special kind of matrix with small displacement rank. Here we generalize the fast algorithm to any matrix with small displacement rank. In particular, we show how to efficiently construct the generators whenever M has small displacement rank and show that in many important cases the Cholesky factorization of the matrix M^T M can also be determined fast. We further extend this problem to Tikhonov regularization of ill-posed problems and illustrate the use of the algorithm on an image deblurring problem. %B Journal of Research of the National Institute of Standards and Technology %V 111 %P 113 - 119 %8 2006/// %G eng %U http://nvl.nist.gov/pub/nistpubs/jres/111/2/V111.N02.A06.pdf %N 2 %0 Conference Paper %B Proceedings of the 2006 SIGCOMM workshop on Challenged networks %D 2006 %T Alternative custodians for congestion control in delay tolerant networks %A Seligman,Matthew %A Fall,Kevin %A Mundur, Padma %K delay tolerant network %K Routing %B Proceedings of the 2006 SIGCOMM workshop on Challenged networks %S CHANTS '06 %I ACM %C New York, NY, USA %P 229 - 236 %8 2006/// %@ 1-59593-572-X %G eng %U http://doi.acm.org/10.1145/1162654.1162660 %R 10.1145/1162654.1162660 %0 Journal Article %J ANALYSIS %D 2006 %T Analysis and comparison of geometric and algebraic multigrid for convection-diffusion equations %A Wu, C. T %A Elman, Howard %B ANALYSIS %V 28 %P 2208 - 2228 %8 2006/// %G eng %N 6 %0 Journal Article %J Bulletin of the American Physical Society %D 2006 %T Analysis of Shock Motion in a Compression Ramp Configuration using DNS data %A Wu,M.
%A Martin, M.P %B Bulletin of the American Physical Society %8 2006/// %G eng %0 Journal Article %J IACR ePrint report %D 2006 %T Analyzing the HB and HB+ protocols in the “large error” case %A Katz, Jonathan %A Smith,A. %X HB and HB+ are two shared-key, unidirectional authentication protocols whose extremely low computational cost makes them potentially well-suited for severely resource-constrained devices. Security of these protocols is based on the conjectured hardness of learning parity with noise; that is, learning a secret s given “noisy” dot products of s that are incorrect with probability ε. Although the problem of learning parity with noise is meaningful for any constant ε < 1/2, existing proofs of security for HB and HB+ only imply security when ε < 1/4. In this note, we show how to extend these proofs to the case of arbitrary ε < 1/2. %B IACR ePrint report %V 326 %P 2006 - 2006 %8 2006/// %G eng %0 Conference Paper %B 10th International Workshop on Frontiers in Handwriting Recognition %D 2006 %T A New Algorithm for Detecting Text Line in Handwritten Documents %A Li,Yi %A Yefeng Zheng %A David Doermann %A Jaeger,Stefan %X Curvilinear text line detection and segmentation in handwritten documents is a significant challenge for handwriting recognition. Given no prior knowledge of script, we model text line detection as an image segmentation problem by enhancing text line structure using a Gaussian window, and adopting the level set method to evolve text line boundaries. Experiments show that the proposed method achieves high accuracy for detecting text lines in both handwritten and machine printed documents with many scripts. %B 10th International Workshop on Frontiers in Handwriting Recognition %C La Baule, France %P 35 - 40 %8 2006/// %G eng %0 Journal Article %J The Semantic Web: Research and Applications %D 2006 %T Annotated rdf %A Udrea,O. %A Recupero,D. %A V.S. Subrahmanian %X There are numerous extensions of RDF that support temporal reasoning, reasoning about pedigree, reasoning about uncertainty, and so on. In this paper, we present Annotated RDF (or aRDF for short) in which RDF triples are annotated by members of a partially ordered set (with bottom element) that can be selected in any way desired by the user. We present a formal declarative semantics (model theory) for annotated RDF and develop algorithms to check consistency of aRDF theories and to answer queries to aRDF theories. We show that annotated RDF captures versions of all the forms of reasoning mentioned above within a single unified framework. We develop a prototype aRDF implementation and show that our algorithms work very fast indeed – in fact, in just a matter of seconds for theories with over 100,000 nodes. %B The Semantic Web: Research and Applications %V 4011 %P 487 - 501 %8 2006/// %G eng %R 10.1007/11762256_36 %0 Conference Paper %B Proceedings of the Workshop on Frontiers in Linguistically Annotated Corpora 2006 %D 2006 %T Annotation compatibility working group report %A Meyers,A. %A Fang,A. C. %A Ferro,L. %A Kübler,S. %A Jia-Lin,T. %A Palmer,M. %A Poesio,M. %A Dolbey,A. %A Schuler,K. K. %A Loper,E. %A Zinsmeister,H. %A Penn,G. %A Xue,N. %A Hinrichs,E. %A Wiebe,J. %A Pustejovsky,J. %A Farwell,D. %A Hajicova,E. %A Dorr, Bonnie J %A Hovy,E. %A Onyshkevych,B. A. %A Levin,L. %X This report explores the question of compatibility between annotation projects including translating annotation formalisms to each other or to common forms.
Compatibility issues are crucial for systems that use the results of multiple annotation projects. We hope that this report will begin a concerted effort in the field to track the compatibility of annotation schemes for part of speech tagging, time annotation, treebanking, role labeling and other phenomena. %B Proceedings of the Workshop on Frontiers in Linguistically Annotated Corpora 2006 %S LAC '06 %I Association for Computational Linguistics %C Stroudsburg, PA, USA %P 38 - 53 %8 2006/// %@ 1-932432-78-7 %G eng %U http://dl.acm.org/citation.cfm?id=1641991.1641997 %0 Report %D 2006 %T Anonymous multi-attribute encryption with range query and conditional decryption %A Bethencourt, J. %A Chan, H. %A Perrig, A. %A Elaine Shi %A Song,D. %X We introduce the concept of Anonymous Multi-Attribute Encryption with Range Query and Conditional Decryption (AMERQCD). In AMERQCD, a plaintext is encrypted under a point in multidimensional space. To a computationally bounded adversary, the ciphertext hides both the plaintext and the point under which it is encrypted. In a range query, a master key owner releases the decryption key for an arbitrary hyper-rectangle in space, thus allowing decryption of ciphertexts previously encrypted under any point within the hyper-rectangle. However, a computationally bounded adversary cannot learn any information on ciphertexts outside the range covered by the decryption key (except the fact that they do not lie within this range). We give an efficient construction based on the Decision Bilinear Diffie-Hellman (D-BDH) and Decision Linear (D-Linear) assumption. %I Carnegie Mellon University %8 2006 %@ CMU-CS-06-135 %G eng %U http://reports-archive.adm.cs.cmu.edu/anon/anon/home/ftp/2006/CMU-CS-06-135.pdf %R Technical Report %0 Journal Article %J Journal of Visual Communication and Image Representation %D 2006 %T Appearance-based person recognition using color/path-length profile %A Yoon,K. %A Harwood,D. %A Davis, Larry S. %B Journal of Visual Communication and Image Representation %V 17 %P 605 - 622 %8 2006/// %G eng %N 3 %0 Book Section %B Graph Drawing %D 2006 %T Applications of Parameterized st-Orientations in Graph Drawing Algorithms %A Charalampos Papamanthou %A Tollis, Ioannis G. %E Healy, Patrick %E Nikolov, Nikola S. %K Algorithm Analysis and Problem Complexity %K Computer Graphics %K Data structures %K Discrete Mathematics in Computer Science %X Many graph drawing algorithms use st-numberings (st-orientations or bipolar orientations) as a first step. An st-numbering of a biconnected undirected graph defines a directed graph with no cycles, one single source s and one single sink t. As there exist exponentially many st-numberings that correspond to a certain undirected graph G, using different st-numberings in various graph drawing algorithms can result in aesthetically different drawings with different area bounds. In this paper, we present results concerning new algorithms for parameterized st-orientations, their impact on graph drawing algorithms and especially in visibility representations. %B Graph Drawing %S Lecture Notes in Computer Science %I Springer Berlin Heidelberg %P 355 - 367 %8 2006/01/01/ %@ 978-3-540-31425-7, 978-3-540-31667-1 %G eng %U http://link.springer.com/chapter/10.1007/11618058_32 %0 Conference Paper %B Proceedings of the 2006 workshop on Programming languages and analysis for security %D 2006 %T Applying flow-sensitive CQUAL to verify MINIX authorization check placement %A Fraser,Timothy %A Petroni,Jr. %A Arbaugh, William A.
%K access controls %K cqual %K minix %X We present the first use of flow-sensitive CQUAL to verify the placement of operating system authorization checks. Our analysis of MINIX 3 system servers and discovery of a non-exploitable Time-Of-Check/Time-Of-Use bug demonstrate the effectiveness of flow sensitive CQUAL and its advantage over earlier flow-insensitive versions. We also identify and suggest alternatives to current CQUAL usability features that encourage analysts to make omissions that cause the otherwise sound tool to produce false-negative results. %B Proceedings of the 2006 workshop on Programming languages and analysis for security %S PLAS '06 %I ACM %C Ottawa, Ontario, Canada %P 3 - 6 %8 2006/// %@ 1-59593-374-3 %G eng %R 10.1145/1134744.1134747 %0 Conference Paper %B Proceedings of the 22nd International Conference on Data Engineering, 2006. ICDE '06 %D 2006 %T Approximate Data Collection in Sensor Networks using Probabilistic Models %A Chu,D. %A Deshpande, Amol %A Hellerstein,J. M %A Wei Hong %K Batteries %K Biological system modeling %K Biosensors %K data mining %K Databases %K Energy consumption %K Intelligent networks %K Intelligent sensors %K Robustness %K Wireless sensor networks %X Wireless sensor networks are proving to be useful in a variety of settings. A core challenge in these networks is to minimize energy consumption. Prior database research has proposed to achieve this by pushing data-reducing operators like aggregation and selection down into the network. This approach has proven unpopular with early adopters of sensor network technology, who typically want to extract complete "dumps" of the sensor readings, i.e., to run "SELECT *" queries. Unfortunately, because these queries do no data reduction, they consume significant energy in current sensornet query processors. In this paper we attack the "SELECT *" problem for sensor networks. We propose a robust approximate technique called Ken that uses replicated dynamic probabilistic models to minimize communication from sensor nodes to the network’s PC base station. In addition to data collection, we show that Ken is well suited to anomaly- and event-detection applications. A key challenge in this work is to intelligently exploit spatial correlations across sensor nodes without imposing undue sensor-to-sensor communication burdens to maintain the models. Using traces from two real-world sensor network deployments, we demonstrate that relatively simple models can provide significant communication (and hence energy) savings without undue sacrifice in result quality or frequency. Choosing optimally among even our simple models is NP-hard, but our experiments show that a greedy heuristic performs nearly as well as an exhaustive algorithm. %B Proceedings of the 22nd International Conference on Data Engineering, 2006. ICDE '06 %I IEEE %P 48 - 48 %8 2006/04/03/07 %@ 0-7695-2570-9 %G eng %R 10.1109/ICDE.2006.21 %0 Journal Article %J Discrete Event Dynamic Systems %D 2006 %T Approximating the minimal sensor selection for supervisory control %A Rohloff,K. R %A Khuller, Samir %A Kortsarz,G. %X This paper discusses the problem of selecting a set of sensors of minimum cost that can be used for the synthesis of a supervisory controller. It is shown how this sensor selection problem is related to a type of directed graph st-cut problem that has not been previously discussed in the literature. Approximation algorithms to solve the sensor selection problem can be used to solve the graph cutting problem and vice-versa.
Polynomial time algorithms to find good approximate solutions to either problem most likely do not exist (under certain complexity assumptions), but a time efficient approximation algorithm is shown that solves a special case of these problems. It is also shown how to convert the sensor selection problem into an integer programming problem. %B Discrete Event Dynamic Systems %V 16 %P 143 - 170 %8 2006/// %G eng %N 1 %R 10.1007/s10626-006-6187-3 %0 Journal Article %J Networks %D 2006 %T Approximation algorithms for channel allocation problems in broadcast networks %A Gandhi,Rajiv %A Khuller, Samir %A Srinivasan, Aravind %A Wang,Nan %K channel allocation problem %K distributed approximation algorithm %K edge partitioning problem %X We study two packing problems that arise in the area of dissemination-based information systems; a second theme is the study of distributed approximation algorithms. The problems considered have the property that the space occupied by a collection of objects together could be significantly less than the sum of the sizes of the individual objects. In the Channel Allocation Problem, there are requests that are subsets of topics. There are a fixed number of channels that can carry an arbitrary number of topics. All the topics of each request must be broadcast on some channel. The load on any channel is the number of topics that are broadcast on that channel; the objective is to minimize the maximum load on any channel. We present approximation algorithms for this problem, and also show that the problem is MAX-SNP hard. The second problem is the Edge Partitioning Problem addressed by Goldschmidt, Hochbaum, Levin, and Olinick (Networks, 41:13–23, 2003). Each channel here can deliver topics for at most k requests, and we aim to minimize the total load on all channels. We present an O(n^(1/3))–approximation algorithm, and also show that the algorithm can be made fully distributed with the same approximation guarantee; we also generalize the (nondistributed) Edge Partitioning Problem of graphs to the case of hypergraphs. © 2006 Wiley Periodicals, Inc. NETWORKS, Vol. 47(4), 225–236 2006 %B Networks %V 47 %P 225 - 236 %8 2006/03/09/ %@ 1097-0037 %G eng %U http://onlinelibrary.wiley.com/doi/10.1002/net.20111/abstract %N 4 %R 10.1002/net.20111 %0 Conference Paper %B American Medical Informatics Association 2006 Annual Symposium Proceedings Biomedical and Health Informatics: From Foundations to Applications to Policy, Washington, DC, DW Bates, JH Holmes, and G. Kuperman, editors %D 2006 %T Archimedes, an archive of medical images %A Tahmoush,D. %A Samet, Hanan %X We present a medical image and medical record database for the storage, research, transmission, and evaluation of medical images. Medical images from any source that supports the DICOM standard can be stored and accessed, as well as associated analysis and annotations. Retrieval is based on patient info, date, doctor’s annotations, features in the images, or a spatial combination. This database supports the secure transmission of sensitive data for tele-medicine and follows all HIPAA regulations. %B American Medical Informatics Association 2006 Annual Symposium Proceedings Biomedical and Health Informatics: From Foundations to Applications to Policy, Washington, DC, DW Bates, JH Holmes, and G.
Kuperman, editors %8 2006/// %G eng %0 Journal Article %J Software: Practice and Experience %D 2006 %T An architecture for adaptive intrusion-tolerant applications %A Pal,Partha %A Rubel,Paul %A Atighetchi,Michael %A Webber,Franklin %A Sanders,William H. %A Seri,Mouna %A Ramasamy,HariGovind %A Lyons,James %A Courtney,Tod %A Agbaria,Adnan %A Michel Cukier %A Gossett,Jeanna %A Keidar,Idit %K adaptive defense %K adaptive middleware %K Byzantine fault tolerance %K intrusion tolerance %K redundancy %K survivability architecture %X Applications that are part of a mission-critical information system need to maintain a usable level of key services through ongoing cyber-attacks. In addition to the well-publicized denial of service (DoS) attacks, these networked and distributed applications are increasingly threatened by sophisticated attacks that attempt to corrupt system components and violate service integrity. While various approaches have been explored to deal with DoS attacks, corruption-inducing attacks remain largely unaddressed. We have developed a collection of mechanisms based on redundancy, Byzantine fault tolerance, and adaptive middleware that help distributed, object-based applications tolerate corruption-inducing attacks. In this paper, we present the ITUA architecture, which integrates these mechanisms in a framework for auto-adaptive intrusion-tolerant systems, and we describe our experience in using the technology to defend a critical application that is part of a larger avionics system as an example. We also motivate the adaptive responses that are key to intrusion tolerance, and explain the use of the ITUA architecture to support them in an architectural framework. Copyright © 2006 John Wiley & Sons, Ltd. %B Software: Practice and Experience %V 36 %P 1331 - 1354 %8 2006/// %@ 1097-024X %G eng %U http://onlinelibrary.wiley.com/doi/10.1002/spe.747/abstract %N 11-12 %R 10.1002/spe.747 %0 Conference Paper %B Proceedings of the 15th conference on USENIX Security Symposium %D 2006 %T An architecture for specification-based detection of semantic integrity violations in kernel dynamic data %A Petroni Jr,N. L %A Fraser,T. %A Walters,A. A %A Arbaugh, William A. %B Proceedings of the 15th conference on USENIX Security Symposium %P 20 - 20 %8 2006/// %G eng %0 Conference Paper %B International Conference on Document Recognition and Retrieval XIII %D 2006 %T A Robust Stamp Detection Framework on Degraded Documents %A Zhu,Guangyu %A Jaeger,Stefan %A David Doermann %X In this paper, we present a novel stamp detection framework based on parameter estimation of connected edge features. Using robust basic shape detectors, the approach is effective for stamps with analytically shaped contours. For elliptic/circular stamps, it efficiently makes use of the orientation information from pairs of edge points to determine its center position and area without computing all the five parameters of the ellipse. When linking connected edges, we introduce an effective template-based junction removal technique to specifically address the problem that stamps often spatially overlap with background contents. These give our approach a significant advantage in terms of computation complexity over traditional Hough transform methods. Experimental results on real degraded documents demonstrated the robustness of this retrieval approach on a large document database, which consists of both printed text and handwritten notes.
This enables us to reliably retrieve documents from a specific source by detecting its official stamps with analytically shaped contours when only limited samples are available. %B International Conference on Document Recognition and Retrieval XIII %I San Jose, CA %P 1 - 9 %8 2006/// %G eng %0 Conference Paper %D 2006 %T Assessing the Attack Threat due to IRC Channels %A Meyer,R. %A Michel Cukier %K attack threat assessment %K channel activity %K client-server systems %K computer crime %K intrusion-resilient channels %K IRC channels %K IRC protocols %K Protocols %K telecommunication channels %K telecommunication security %X This practical experience report presents the results of an investigation into the threat of attacks associated with the chat medium IRC. A combination of simulated users (i.e., bots), some configured with scripts that simulated conversations, and regular users was used. The average number of attacks per day a user on IRC can expect was determined, along with the effects of channel activity, gender based on the name, and network type on the number of attacks. The social structure of IRC channels and the types of users that use them were analyzed. The results indicate that attacks through IRC channels come from human users selecting targets rather than automated scripts targeting every user in a channel. %P 467 - 472 %8 2006/06// %G eng %R 10.1109/DSN.2006.12 %0 Journal Article %J ECOOP 2002—Object-Oriented Programming %D 2006 %T Atomic instructions in java %A Hovemeyer,D. %A Pugh, William %A Spacco,J. %B ECOOP 2002—Object-Oriented Programming %P 5 - 16 %8 2006/// %G eng %0 Conference Paper %B Computer Vision and Pattern Recognition Workshop, 2006. CVPRW '06. Conference on %D 2006 %T Attribute Grammar-Based Event Recognition and Anomaly Detection %A Joo,Seong-Wook %A Chellapa, Rama %X We propose to use attribute grammars for recognizing normal events and detecting abnormal events in a video. Attribute grammars can describe constraints on features (attributes) in addition to the syntactic structure of the input. Events are recognized using an extension of the Earley parser that handles attributes and concurrent event threads. Abnormal events are detected when the input does not follow syntax of the grammar or the attributes do not satisfy the constraints in the attribute grammar to some degree. We demonstrate the effectiveness of our method for the task of recognizing normal events and detecting anomalies in a parking lot. %B Computer Vision and Pattern Recognition Workshop, 2006. CVPRW '06. Conference on %P 107 - 107 %8 2006/06// %G eng %R 10.1109/CVPRW.2006.32 %0 Journal Article %J Artificial Intelligence in Medicine %D 2006 %T Automatic identification of confusable drug names %A Kondrak,Grzegorz %A Dorr, Bonnie J %K Drug names %K Evaluation methodology %K Lexical similarity %K Medical errors %X Objective: Many hundreds of drugs have names that either look or sound so much alike that doctors, nurses and pharmacists can get them confused, dispensing the wrong one in errors that can injure or even kill patients. Methods and material: We propose to address the problem through the application of two new methods—one based on orthographic similarity (“look-alike”), and the other based on phonetic similarity (“sound-alike”). In order to compare the effectiveness of the new methods for identifying confusable drug names with other known similarity measures, we developed a novel evaluation methodology.
Results: We show that the new orthographic measure (BI-SIM) outperforms other commonly used measures of similarity on a set containing both look-alike and sound-alike pairs, and that a new feature-based phonetic approach (ALINE) outperforms orthographic approaches on a test set containing solely sound-alike pairs. However, an approach that combines several different measures achieves the best results on two test sets. Conclusion: Our system is currently used as the basis of a system developed for the U.S. Food and Drug Administration for detection of confusable drug names. %B Artificial Intelligence in Medicine %V 36 %P 29 - 42 %8 2006/01// %@ 0933-3657 %G eng %U http://www.sciencedirect.com/science/article/pii/S0933365705001004 %N 1 %R 10.1016/j.artmed.2005.07.005 %0 Report %D 2006 %T Automatic web services composition using Shop2 %A Hendler,J. %A Nau, Dana S. %A Parsia,B. %A Sirin,E. %A Wu,D. %X Semantic markup of Web services will enable the automation of various kinds of tasks, including discovery, composition, and execution of Web services. We describe how an AI planning system (SHOP2) can be used with DAML-S Web service descriptions to automatically compose Web services. %I Department of Computer Science, University of Maryland, College Park %8 2006/// %G eng %U http://www.stormingmedia.us/96/9697/A969744.html %0 Journal Article %J Algorithms and Computation %D 2005 %T The ABCs of AVDs: Geometric Retrieval Made Simple %A Mount, Dave %X One of the best known structures in computational geometry is the Voronoi diagram. Given a set of n points in space, called sites, this is a subdivision that partitions space into n convex cells according to which site is closest. The Voronoi diagram and its dual, the Delaunay triangulation, are among the most fundamental proximity structures known, and have numerous uses in science. For example, applying point location to the subdivision defined by a Voronoi diagram answers nearest neighbor queries. Although they are popular in low dimensional spaces, there are various reasons why Voronoi diagrams are rarely used in higher dimensions. First, the combinatorial complexity of the diagram grows rapidly, as fast as Ω(n^(d/2)) in d-dimensional space. Second, the diagram does not admit a straightforward search structure in higher dimensions. %B Algorithms and Computation %P 819 - 826 %8 2005/// %G eng %R 10.1007/978-3-540-30551-4_2 %0 Book Section %B Discovery Science %D 2005 %T Active Constrained Clustering by Examining Spectral Eigenvectors %A Xu,Qianjun %A desJardins, Marie %A Wagstaff,Kiri %E Hoffmann,Achim %E Motoda,Hiroshi %E Scheffer,Tobias %X This work focuses on the active selection of pairwise constraints for spectral clustering. We develop and analyze a technique for Active Constrained Clustering by Examining Spectral eigenvectorS (ACCESS) derived from a similarity matrix. The ACCESS method uses an analysis based on the theoretical properties of spectral decomposition to identify data items that are likely to be located on the boundaries of clusters, and for which providing constraints can resolve ambiguity in the cluster descriptions. Empirical results on three synthetic and five real data sets show that ACCESS significantly outperforms constrained spectral clustering using randomly selected constraints. %B Discovery Science %S Lecture Notes in Computer Science %I Springer Berlin / Heidelberg %V 3735 %P 294 - 307 %8 2005/// %@ 978-3-540-29230-2 %G eng %U http://dx.doi.org/10.1007/11563983_25 %0 Conference Paper %B 8th Int.
Conf. on Document Analysis and Recognition (ICDAR'05) %D 2005 %T Adaptive OCR with Limited User Feedback %A Ma,Huanfeng %A David Doermann %X A methodology is proposed for processing noisy printed documents with limited user feedback. Without the support of ground truth, a specific collection of scanned documents can be processed to extract character templates. The adaptiveness of this approach lies in the fact that the extracted templates are used to train an OCR classifier quickly and with limited user feedback. Experimental results show that this approach is extremely useful for the processing of noisy documents with many touching characters. %B 8th Int. Conf. on Document Analysis and Recognition (ICDAR'05) %P 814 - 818 %8 2005/08// %G eng %0 Journal Article %J Theory of Cryptography %D 2005 %T Adaptively-secure, non-interactive public-key encryption %A Canetti,R. %A Halevi,S. %A Katz, Jonathan %X Adaptively-secure encryption schemes ensure secrecy even in the presence of an adversary who can corrupt parties in an adaptive manner based on public keys, ciphertexts, and secret data of already-corrupted parties. Ideally, an adaptively-secure encryption scheme should, like standard public-key encryption, allow arbitrarily-many parties to use a single encryption key to securely encrypt arbitrarily-many messages to a given receiver who maintains only a single short decryption key. However, it is known that these requirements are impossible to achieve: no non-interactive encryption scheme that supports encryption of an unbounded number of messages and uses a single, unchanging decryption key can be adaptively secure. Impossibility holds even if secure data erasure is possible. We show that this limitation can be overcome by updating the decryption key over time and making some mild assumptions about the frequency of communication between parties. Using this approach, we construct adaptively-secure, completely non-interactive encryption schemes supporting secure encryption of arbitrarily-many messages from arbitrarily-many senders. Our schemes additionally provide forward security and security against chosen-ciphertext attacks. %B Theory of Cryptography %P 150 - 168 %8 2005/// %G eng %R 10.1007/978-3-540-30576-7_9 %0 Conference Paper %B Proceedings of the fourth international joint conference on Autonomous agents and multiagent systems %D 2005 %T Agent-organized networks for dynamic team formation %A Gaston,Matthew E. %A desJardins, Marie %K multi-agent systems %K organizational learning %K team formation %X Many multi-agent systems consist of a complex network of autonomous yet interdependent agents. Examples of such networked multi-agent systems include supply chains and sensor networks. In these systems, agents have a select set of other agents with whom they interact based on environmental knowledge, cognitive capabilities, resource limitations, and communications constraints. Previous findings have demonstrated that the structure of the artificial social network governing the agent interactions is strongly correlated with organizational performance. As multi-agent systems are typically embedded in dynamic environments, we wish to develop distributed, on-line network adaptation mechanisms for discovering effective network structures. Therefore, within the context of dynamic team formation, we propose several strategies for agent-organized networks (AONs) and evaluate their effectiveness for increasing organizational performance.
%B Proceedings of the fourth international joint conference on Autonomous agents and multiagent systems %S AAMAS '05 %I ACM %C New York, NY, USA %P 230 - 237 %8 2005/// %@ 1-59593-093-0 %G eng %U http://doi.acm.org/10.1145/1082473.1082508 %R 10.1145/1082473.1082508 %0 Conference Paper %B Proceedings of the 20th national conference on Artificial intelligence - Volume 1 %D 2005 %T Agent-organized networks for multi-agent production and exchange %A Gaston,Matthew E. %A desJardins, Marie %X As multi-agent systems grow in size and complexity, social networks that govern the interactions among the agents will directly impact system behavior at the individual and collective levels. Examples of such large-scale, networked multi-agent systems include peer-to-peer networks, distributed information retrieval, and agent-based supply chains. One way of dealing with the uncertain and dynamic nature of such environments is to endow agents with the ability to modify the agent social network by autonomously adapting their local connectivity structure. In this paper, we present a framework for agent-organized networks (AONs) in the context of multi-agent production and exchange, and experimentally evaluate the feasibility and efficiency of specific AON strategies. We find that decentralized network adaptation can significantly improve organizational performance. Additionally, we analyze several properties of the resulting network structures and consider their relationship to the observed increase in organizational performance. %B Proceedings of the 20th national conference on Artificial intelligence - Volume 1 %S AAAI'05 %I AAAI Press %P 77 - 82 %8 2005/// %@ 1-57735-236-x %G eng %U http://dl.acm.org/citation.cfm?id=1619332.1619347 %0 Journal Article %J J. ACM %D 2005 %T Aggregate operators in probabilistic databases %A Ross,Robert %A V.S. Subrahmanian %A Grant,John %K Aggregates %K probabilistic relational databases %X Though extensions to the relational data model have been proposed in order to handle probabilistic information, there has been very little work to date on handling aggregate operators in such databases. In this article, we present a very general notion of an aggregate operator and show how classical aggregation operators (such as COUNT, SUM, etc.) as well as statistical operators (such as percentiles, variance, etc.) are special cases of this general definition. We devise a formal linear programming based semantics for computing aggregates over probabilistic DBMSs, develop algorithms that satisfy this semantics, analyze their complexity, and introduce several families of approximation algorithms that run in polynomial time. We implemented all of these algorithms and tested them on a large set of data to help determine when each one is preferable. %B J. ACM %V 52 %P 54 - 101 %8 2005/01// %@ 0004-5411 %G eng %U http://doi.acm.org/10.1145/1044731.1044734 %N 1 %R 10.1145/1044731.1044734 %0 Conference Paper %B Computer Vision, 2005. ICCV 2005. Tenth IEEE International Conference on %D 2005 %T An algebraic approach to surface reconstruction from gradient fields %A Agrawal,A. %A Chellapa, Rama %A Raskar, R. 
%K algebraic approach %K computer vision %K discrete domain %K gradient field %K graph theory %K image reconstruction %K integrability %K linear system %K local error confinement %K photometric stereo %K shape from shading %K surface reconstruction %X Several important problems in computer vision such as shape from shading (SFS) and photometric stereo (PS) require reconstructing a surface from an estimated gradient field, which is usually non-integrable, i.e., has non-zero curl. We propose a purely algebraic approach to enforce integrability in discrete domain. We first show that enforcing integrability can be formulated as solving a single linear system Ax = b over the image. In general, this system is under-determined. We show conditions under which the system can be solved and a method to get to those conditions based on graph theory. The proposed approach is non-iterative, has the important property of local error confinement and can be applied to several problems. Results on SFS and PS demonstrate the applicability of our method. %B Computer Vision, 2005. ICCV 2005. Tenth IEEE International Conference on %V 1 %P 174 - 181 Vol. 1 %8 2005/10// %G eng %R 10.1109/ICCV.2005.31 %0 Journal Article %J Electronic Notes in Theoretical Computer Science %D 2005 %T An Algebraic Theory Of Boundary Crossing Transitions %A Ray,Arnab %A Cleaveland, Rance %A Skou,Arne %K Compositional Semantics %K Formal Methods %K Process algebra %K Statecharts %X This paper gives a process-algebraic semantics for the hierarchical state machine (HSM) fragment of Statecharts, in which state transitions are permitted to cross state boundaries. Although frowned upon by researchers as promoting unstructured modeling, such transitions are used extensively in practice to model parameterized start states and conditional exit states. The purpose of this work is to develop a compositional semantics for HSMs that may be fit together with compositional semantic accounts for Statecharts without boundary-crossing transitions in order to arrive at a compositional theory for virtually the whole Statecharts language. Our technical development consists of a process algebra for HSMs that is equipped with an operational semantics, an argument that bisimulation is a congruence for the algebra, a syntax-directed translation procedure for HSMs into the process algebra, and an equational axiomatization of the algebra. %B Electronic Notes in Theoretical Computer Science %V 115 %P 69 - 88 %8 2005/01/18/ %@ 1571-0661 %G eng %U http://www.sciencedirect.com/science/article/pii/S1571066104053186 %R 10.1016/j.entcs.2004.09.029 %0 Journal Article %J ACM Transactions on Mathematical Software-TOMS %D 2005 %T Algorithm 844: Computing sparse reduced-rank approximations to sparse matrices %A Berry,M. W %A Pulatova,S. A %A Stewart, G.W. %B ACM Transactions on Mathematical Software-TOMS %V 31 %P 252 - 269 %8 2005/// %G eng %N 2 %0 Conference Paper %B Computer Design: VLSI in Computers and Processors, 2005. ICCD 2005. Proceedings. 2005 IEEE International Conference on %D 2005 %T Algorithmic and architectural design methodology for particle filters in hardware %A Sankaranarayanan,A. C %A Chellapa, Rama %A Srivastava, A.
%K algorithmic design methodology %K architectural design methodology %K bearing only tracking problem %K computational complexity %K digital filters %K nonGaussian nonlinear state evolution %K parallel architectures %K particle filtering (numerical methods) %K pipeline processing %K pipelined VLSI implementation %K VLSI %X In this paper, we present algorithmic and architectural methodology for building particle filters in hardware. Particle filtering is a new paradigm for filtering in presence of nonGaussian nonlinear state evolution and observation models. This technique has found wide-spread application in tracking, navigation, detection problems especially in a sensing environment. So far most particle filtering implementations are not lucrative for real time problems due to excessive computational complexity involved. In this paper, we re-derive the particle filtering theory to make it more amenable to simplified VLSI implementations. Furthermore, we present and analyze pipelined architectural methodology for designing these computational blocks. Finally, we present an application using the bearing only tracking problem and evaluate the proposed architecture and algorithmic methodology. %B Computer Design: VLSI in Computers and Processors, 2005. ICCD 2005. Proceedings. 2005 IEEE International Conference on %P 275 - 280 %8 2005/10// %G eng %R 10.1109/ICCD.2005.20 %0 Conference Paper %B Proceedings of the 2005 ACM SIGMETRICS international conference on Measurement and modeling of computer systems %D 2005 %T Algorithmic aspects of capacity in wireless networks %A Kumar,V. S. Anil %A Marathe,Madhav V. %A Parthasarathy,Srinivasan %A Srinivasan, Aravind %K capacity modeling %K end-to-end scheduling %K Linear programming %K Wireless networks %X This paper considers two inter-related questions: (i) Given a wireless ad-hoc network and a collection of source-destination pairs {(si,ti)}, what is the maximum throughput capacity of the network, i.e. the rate at which data from the sources to their corresponding destinations can be transferred in the network? (ii) Can network protocols be designed that jointly route the packets and schedule transmissions at rates close to the maximum throughput capacity? Much of the earlier work focused on random instances and proved analytical lower and upper bounds on the maximum throughput capacity. Here, in contrast, we consider arbitrary wireless networks. Further, we study the algorithmic aspects of the above questions: the goal is to design provably good algorithms for arbitrary instances. We develop analytical performance evaluation models and distributed algorithms for routing and scheduling which incorporate fairness, energy and dilation (path-length) requirements and provide a unified framework for utilizing the network close to its maximum throughput capacity. Motivated by certain popular wireless protocols used in practice, we also explore "shortest-path like" path selection strategies which maximize the network throughput. The theoretical results naturally suggest an interesting class of congestion aware link metrics which can be directly plugged into several existing routing protocols such as AODV, DSR, etc. We complement the theoretical analysis with extensive simulations. The results indicate that routes obtained using our congestion aware link metrics consistently yield higher throughput than hop-count based shortest path metrics.
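The capacity paper above suggests congestion-aware link metrics that can be plugged into existing routing protocols. The sketch below is an illustrative stand-in (the weight function and toy topology are assumptions, not the metric derived in the paper) showing how such a metric steers shortest-path route selection away from loaded links:

```python
import heapq

def congestion_aware_weight(load, capacity, alpha=1.0):
    """Illustrative link metric: grows as the link approaches saturation."""
    utilization = min(load / capacity, 0.999)
    return 1.0 + alpha * utilization / (1.0 - utilization)

def shortest_path(graph, src, dst):
    """Plain Dijkstra over per-link weights; graph[u] = {v: weight}."""
    dist, prev, heap = {src: 0.0}, {}, [(0.0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if u == dst:
            break
        if d > dist.get(u, float("inf")):
            continue
        for v, w in graph[u].items():
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v], prev[v] = nd, u
                heapq.heappush(heap, (nd, v))
    path = [dst]
    while path[-1] != src:
        path.append(prev[path[-1]])
    return list(reversed(path))

# toy 4-node topology: (load, capacity) per directed link
links = {("a", "b"): (8, 10), ("b", "d"): (1, 10),
         ("a", "c"): (2, 10), ("c", "d"): (2, 10)}
graph = {}
for (u, v), (load, cap) in links.items():
    graph.setdefault(u, {})[v] = congestion_aware_weight(load, cap)
graph.setdefault("d", {})
print(shortest_path(graph, "a", "d"))  # picks the lightly loaded a-c-d route
```

With a plain hop-count metric both routes would tie; weighting by utilization is what makes the lightly loaded path win, which is the behavior the abstract reports.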
%B Proceedings of the 2005 ACM SIGMETRICS international conference on Measurement and modeling of computer systems %S SIGMETRICS '05 %I ACM %C New York, NY, USA %P 133 - 144 %8 2005/// %@ 1-59593-022-1 %G eng %U http://doi.acm.org/10.1145/1064212.1064228 %R 10.1145/1064212.1064228 %0 Conference Paper %B Foundations of Computer Science, 2005. FOCS 2005. 46th Annual IEEE Symposium on %D 2005 %T Algorithmic graph minor theory: Decomposition, approximation, and coloring %A Demaine,E. D %A Hajiaghayi, Mohammad T. %A Kawarabayashi,K. %K algorithmic graph minor theory %K approximation algorithm %K combinatorial polylogarithmic approximation %K computational complexity %K constant-factor approximation %K graph algorithm %K graph coloring %K graph colouring %K half-integral multicommodity flow %K largest grid minor %K maximization problem %K minimization problem %K polynomial-time algorithm %K subexponential fixed-parameter algorithm %K topological graph theory %K treewidth %X At the core of the seminal graph minor theory of Robertson and Seymour is a powerful structural theorem capturing the structure of graphs excluding a fixed minor. This result is used throughout graph theory and graph algorithms, but is existential. We develop a polynomial-time algorithm using topological graph theory to decompose a graph into the structure guaranteed by the theorem: a clique-sum of pieces almost-embeddable into bounded-genus surfaces. This result has many applications. In particular we show applications to developing many approximation algorithms, including a 2-approximation to graph coloring, constant-factor approximations to treewidth and the largest grid minor, combinatorial polylogarithmic approximation to half-integral multicommodity flow, subexponential fixed-parameter algorithms, and PTASs for many minimization and maximization problems, on graphs excluding a fixed minor. %B Foundations of Computer Science, 2005. FOCS 2005. 46th Annual IEEE Symposium on %P 637 - 646 %8 2005/// %G eng %R 10.1109/SFCS.2005.14 %0 Journal Article %J Computer-Aided Design and Applications %D 2005 %T Algorithms for constructing 3-D point clouds using multiple digital fringe projection patterns %A Peng,T. %A Gupta,S.K. %A Lau,K. %X This paper describes algorithms for generating 3-D point clouds from a set of digital imagesobtained from projecting phase-shifted sinusoidal fringe patterns onto object. In this paper, a mathematical model is introduced for describing the geometric relationship between the fringe patterns being projected, the image captured and the shape of the object being measured. This model allows considerable flexibility in the spatial configuration of shape measurement system. The algorithms for point cloud construction described in this paper present an improvement over the existing algorithms in terms of accuracy, ease of system calibration, and sensitivity to parameter errors. These algorithms have been incorporated in a shape measurement system and shown to have a very good performance. %B Computer-Aided Design and Applications %V 2 %P 737 - 746 %8 2005/// %G eng %U http://cadanda.homestead.com/V2No6_04.pdf %0 Conference Paper %B Proceedings of the conference on Human Language Technology and Empirical Methods in Natural Language Processing %D 2005 %T Alignment link projection using transformation-based learning %A Ayan,Necip Fazil %A Dorr, Bonnie J %A Monz,Christof %X We present a new word-alignment approach that learns errors made by existing word alignment systems and corrects them. 
By adapting transformation-based learning to the problem of word alignment, we project new alignment links from already existing links, using features such as POS tags. We show that our alignment link projection approach yields a significantly lower alignment error rate than that of the best performing alignment system (22.6% relative reduction on English-Spanish data and 23.2% relative reduction on English-Chinese data). %B Proceedings of the conference on Human Language Technology and Empirical Methods in Natural Language Processing %S HLT '05 %I Association for Computational Linguistics %C Stroudsburg, PA, USA %P 185 - 192 %8 2005/// %G eng %U http://dx.doi.org/10.3115/1220575.1220599 %R 10.3115/1220575.1220599 %0 Conference Paper %B Proceedings of the second international workshop on Software engineering for high performance computing system applications %D 2005 %T And away we go: understanding the complexity of launching complex HPC applications %A Yoon,Il-Chul %A Sussman, Alan %A Porter, Adam %K development %K high-performance %K Productivity %K software %X Although not well-studied, launching HPC applications is extremely complex. To better understand the launching process, we conducted a simple case study. Based in part on this study and an examination of existing toolkits, we have begun to develop a prototype environment to support HPC application launching. %B Proceedings of the second international workshop on Software engineering for high performance computing system applications %S SE-HPCS '05 %I ACM %C New York, NY, USA %P 45 - 49 %8 2005/// %@ 1-59593-117-1 %G eng %U http://doi.acm.org/10.1145/1145319.1145333 %R 10.1145/1145319.1145333 %0 Journal Article %J Journal of VisionJ Vis %D 2005 %T On the Anisotropy in the Perception of Stereoscopic Slant %A Fermüller, Cornelia %A Hui Ji %X Many visual processes computationally amount to estimation problems. It has been shown that noise in the image data causes consistent errors in the estimation, that is statistical bias [1]. Here we analyze the effects of bias on 3D shape estimation, and we found that it predicts the perceived underestimation of slant for many settings found in experiments.In particular, we concentrate on the problem of shape estimation from stereo using orientation disparity. We found that bias predicts the anisotropy in the perception of stereoscopic slant, an effect that has not been explained before. It has been found that a surface slanted about the horizontal axis is estimated much easier and more accurately than a surface slanted about the vertical axis [2,3]. In both cases there is an underestimation of slant, but it is much larger for slant about the vertical. Cagnello and Rogers [2] argued that this effect is due to orientation disparity, which when the texture on the plane is made up of mostly horizontal and vertical lines, is smaller for surfaces slanting about the vertical. However as shown in [3], the effect also exists, even though in weaker form, when the texture is made up of lines oriented at 45 degrees. For such a configuration the orientation disparity in the two configurations is about the same. Thus orientation disparity by itself cannot be the cause. But errors in the estimated position and orientation of edgels cause bias, which predicts all the above findings and other parametric studies that we performed. [1]. C. Fermuller, H. Malm. Uncertainty in visual processes predicts geometrical optical illusions, Vision Research, 44, 727-749, 2004. [2]. R. Cagnello, B.J.Rogers. 
Anisotropies in the perception of stereoscopic surfaces: the role of orientation disparity. Vision Research, 33(16): 2189-2201, 1993. [3]. G.J. Mitchison. S.P. McKee. Mechanisms underlying the anisotropy of stereoscopic tilt perception. Vision Research, 30:1781-1791, 1990. %B Journal of Vision %V 5 %P 516 - 516 %8 2005/09/23/ %@ 1534-7362 %G eng %U http://www.journalofvision.org/content/5/8/516 %N 8 %R 10.1167/5.8.516 %0 Journal Article %J Image Processing, IEEE Transactions on %D 2005 %T Anti-collusion forensics of multimedia fingerprinting using orthogonal modulation %A Wang,Z.J. %A M. Wu %A Zhao,H.V. %A Trappe,W. %A Liu,K. J.R %K Automated;Product Labeling;Signal Processing %K Computer-Assisted; %K Computer-Assisted;Multimedia;Patents as Topic;Pattern Recognition %K Gaussian distribution;anticollusion forensic;colluder identification;collusion resistance;digital fingerprinting;false probability;likelihood-based approach;multimedia fingerprinting;orthogonal modulation;spread spectrum embedding;Gaussian distribution;fi %X Digital fingerprinting is a method for protecting digital data in which fingerprints that are embedded in multimedia are capable of identifying unauthorized use of digital content. A powerful attack that can be employed to reduce this tracing capability is collusion, where several users combine their copies of the same content to attenuate/remove the original fingerprints. In this paper, we study the collusion resistance of a fingerprinting system employing Gaussian distributed fingerprints and orthogonal modulation. We introduce the maximum detector and the thresholding detector for colluder identification. We then analyze the collusion resistance of a system to the averaging collusion attack for the performance criteria represented by the probability of a false negative and the probability of a false positive. Lower and upper bounds for the maximum number of colluders Kmax are derived. We then show that the detectors are robust to different collusion attacks. We further study different sets of performance criteria, and our results indicate that attacks based on a few dozen independent copies can confound such a fingerprinting system. We also propose a likelihood-based approach to estimate the number of colluders. Finally, we demonstrate the performance for detecting colluders through experiments using real images. %B Image Processing, IEEE Transactions on %V 14 %P 804 - 821 %8 2005/06// %@ 1057-7149 %G eng %N 6 %R 10.1109/TIP.2005.847284 %0 Journal Article %J IEEE Transactions on Pattern Analysis and Machine Intelligence %D 2005 %T A Parallel-Line Detection Algorithm Based on HMM Decoding %A Yefeng Zheng %A Huiping Li %A David Doermann %X The detection of groups of parallel lines is important in applications such as form processing and text (handwriting) extraction from rule lined paper. These tasks can be very challenging in degraded documents where the lines are severely broken. In this paper we propose a novel model-based method which incorporates high level context to detect these lines. After preprocessing (such as skew correction and text filtering), we use trained Hidden Markov Models (HMM) to locate the optimal positions of all lines simultaneously on the horizontal or vertical projection profiles, based on the Viterbi decoding. The algorithm is trainable so it can be easily adapted to different application scenarios.
The experiments conducted on known form processing and rule line detection show our method is robust, and achieves better results than other widely used line detection methods. %B IEEE Transactions on Pattern Analysis and Machine Intelligence %V 27 %P 777 - 792 %8 2005/05// %G eng %N 5 %0 Conference Paper %B Computer Vision, 2005. ICCV 2005. Tenth IEEE International Conference on %D 2005 %T Appearance modeling under geometric context %A Li,Jian %A Zhou,S. K %A Chellapa, Rama %K appearance modeling %K computational geometry %K contour-driven GeT %K convex hull %K feature curve based GeT %K fingerprinting %K geometric transform %K image recognition %K image synthesis %K image warping %K multi-resolution representation %K object recognition %K Radon transform %K shape information %K trace transform %X We propose a unified framework based on a general definition of geometric transform (GeT) for modeling appearance. GeT represents the appearance by applying designed functionals over certain geometric sets. We show that image warping, Radon transform, trace transform, etc. are special cases of our definition. Moreover, three different types of GeTs are designed to handle deformation, articulation and occlusion and applied to fingerprinting the appearance inside a contour. They include the contour-driven GeT, the feature curve based GeT and selecting functionals to model the appearance inside the convex hull of the contour. A multi-resolution representation that combines both shape and appearance information is also proposed. We apply our approach to image synthesis and object recognition. The proposed approach produces promising results when applied to fingerprinting the appearance of human and body parts despite the challenges due to articulated motion and deformations. %B Computer Vision, 2005. ICCV 2005. Tenth IEEE International Conference on %V 2 %P 1252 - 1259 Vol. 2 %8 2005/10// %G eng %R 10.1109/ICCV.2005.38 %0 Journal Article %J Intelligent Systems, IEEE %D 2005 %T Applications of SHOP and SHOP2 %A Nau, Dana S. %A Au,T.-C. %A Ilghami,O. %A Kuter,U. %A Wu,D. %A Yaman,F. %A Munoz-Avila,H. %A Murdock,J. W. %K automated planning %K hierarchical task network planning %K ordered task decomposition %K planning (artificial intelligence) %K problem solving %K search-control strategy %K simple hierarchical ordered planner %K trees (mathematics) %K uncertainty handling %X We design the simple hierarchical ordered planner (SHOP) and its successor, SHOP2, with two goals in mind: to investigate research issues in automated planning and to provide some simple, practical planning tools. SHOP and SHOP2 are based on a planning formalism called hierarchical task network planning. SHOP and SHOP2 use a search-control strategy called ordered task decomposition, which breaks tasks into subtasks and generates the plan's actions in the same order that the plan executor executes them. So, throughout the planning process, the planner can tell what the state of the world is at each step of the plan. %B Intelligent Systems, IEEE %V 20 %P 34 - 41 %8 2005/04//march %@ 1541-1672 %G eng %N 2 %R 10.1109/MIS.2005.20 %0 Conference Paper %B Acoustics, Speech, and Signal Processing, 2005. Proceedings. (ICASSP '05). IEEE International Conference on %D 2005 %T Approximate expressions for the mean and the covariance of the maximum likelihood estimator for acoustic source localization %A Raykar,V.C.
%A Duraiswami, Ramani %K acoustic source localization %K approximate expressions %K approximation theory %K array signal processing %K covariance matrices %K maximum likelihood estimation %K mean vector %K microphone array %K objective function %K Taylor series expansion %X Acoustic source localization using multiple microphones can be formulated as a maximum likelihood estimation problem. The estimator is implicitly defined as the minimum of a certain objective function. As a result, we cannot get explicit expressions for the mean and the covariance of the estimator. We derive approximate expressions for the mean vector and covariance matrix of the estimator using Taylor's series expansion of the implicitly defined estimator. The validity of our expressions is verified by Monte-Carlo simulations. We also study the performance of the estimator for different microphone array configurations. %B Acoustics, Speech, and Signal Processing, 2005. Proceedings. (ICASSP '05). IEEE International Conference on %V 3 %P iii/73 - iii/76 Vol. 3 %8 2005/03// %G eng %R 10.1109/ICASSP.2005.1415649 %0 Conference Paper %B Multi-Agent Security and Survivability, 2005 IEEE 2nd Symposium on %D 2005 %T Approximation results for probabilistic survivability %A Zhang,Y. %A Manister,E. %A Kraus,S. %A V.S. Subrahmanian %K fault tolerant computing %K heuristic programming %K heuristics %K industrial applications %K multi-agent systems %K node failure %K probabilistic model %K probability %K survivability %K testing %X As multiagent systems (MASs) are increasingly used in industrial applications, the need to make them more robust and resilient against disruption increases dramatically. The author has developed a probabilistic model (assuming complete ignorance of dependencies between node failures) of survivability based on deploying each agent in a MAS on one or more nodes. Finding a deployment that maximizes survivability is highly intractable for two reasons: firstly, computing the survivability of any deployment is intractable, and secondly, going through an exponential number of deployments to find the best one adds another layer of intractability. In this paper, we study what happens when node failures are independent. We show that computing survivability in this environment is still intractable. We propose various heuristics to compute the survivability of a given deployment. We have implemented and tested all these heuristics. We report on the advantages and disadvantages of different heuristics in different environmental settings. %B Multi-Agent Security and Survivability, 2005 IEEE 2nd Symposium on %P 1 - 10 %8 2005/08// %G eng %R 10.1109/MASSUR.2005.1507042 %0 Book Section %B Architecting Dependable Systems III %D 2005 %T Architecting and Implementing Versatile Dependability %A Tudor Dumitras %A Srivastava, Deepti %A Narasimhan, Priya %E Lemos, Rogério de %E Gacek, Cristina %E Romanovsky, Alexander %K Operating systems %K software engineering %X Distributed applications must often consider and select the appropriate trade-offs among three important aspects – fault-tolerance, performance and resources. We introduce a novel concept, called versatile dependability, that provides a framework for analyzing and reasoning about these trade-offs in dependable software architectures.
We present the architecture of a middleware framework that implements versatile dependability by providing the appropriate ”knobs” to tune and re-calibrate the trade-offs. Our framework can adjust the properties and the behavior of the system at development-time, at deployment-time, and throughout the application’s life-cycle. This renders the versatile dependability approach useful both to applications that require static fault-tolerance configurations supporting the loss/addition of resources and changing workloads, as well as to applications that evolve in terms of their dependability requirements. Through a couple of specific examples, one on adapting the replication style at runtime and the other on tuning the system scalability under given constraints, we demonstrate concretely how versatile dependability can provide an extended coverage of the design space of dependable distributed systems. %B Architecting Dependable Systems III %S Lecture Notes in Computer Science %I Springer Berlin Heidelberg %P 212 - 231 %8 2005/01/01/ %@ 978-3-540-28968-5, 978-3-540-31648-0 %G eng %U http://link.springer.com/chapter/10.1007/11556169_10 %0 Conference Paper %B Proceedings of the 3rd IEEE/ACM/IFIP international conference on Hardware/software codesign and system synthesis %D 2005 %T An architectural level design methodology for embedded face detection %A Kianzad,V. %A Saha,S. %A Schlessman,J. %A Aggarwal,G. %A Bhattacharyya, Shuvra S. %A Wolf,W. %A Chellapa, Rama %K design space exploration %K Face detection %K platforms %K reconfigurable %K system-level models %X Face detection and recognition research has attracted great attention in recent years. Automatic face detection has great potential in a large array of application areas, including banking and security system access control, video surveillance, and multimedia information retrieval. In this paper, we discuss an architectural level design methodology for implementation of an embedded face detection system on a reconfigurable system on chip. We present models for performance estimation and validate these models with experimental values obtained from implementing our system on an FPGA platform. This modeling approach is shown to be efficient, accurate, and intuitive for designers to work with. Using this approach, we present several design options that trade-off various architectural features. %B Proceedings of the 3rd IEEE/ACM/IFIP international conference on Hardware/software codesign and system synthesis %S CODES+ISSS '05 %I ACM %C New York, NY, USA %P 136 - 141 %8 2005/// %@ 1-59593-161-9 %G eng %U http://doi.acm.org/10.1145/1084834.1084872 %R 10.1145/1084834.1084872 %0 Conference Paper %B Mobile Computing Systems and Applications, 2006. WMCSA'06. Proceedings. 7th IEEE Workshop on %D 2005 %T Are GSM phones THE solution for localization? %A Varshavsky,A. %A Chen,M.Y. %A de Lara,E. %A Jon Froehlich %A Haehnel,D. %A Hightower,J. %A LaMarca,A. %A Potter,F. %A Sohn,T. %A Tang,K. %A others %B Mobile Computing Systems and Applications, 2006. WMCSA'06. Proceedings. 7th IEEE Workshop on %P 34 - 42 %8 2005/// %G eng %0 Journal Article %J AIAA paper %D 2005 %T Assessment of STBLI DNS data and comparison against experiments %A Wu,M. %A Taylor,E. M %A Martin, M.P %B AIAA paper %8 2005/// %G eng %0 Journal Article %J Physical Review LettersPhys. Rev. Lett. %D 2005 %T Asymptotically Optimal Quantum Circuits for d-Level Systems %A Bullock,Stephen S. %A O’Leary,Dianne P. %A Brennen,Gavin K. 
%X Scalability of a quantum computation requires that the information be processed on multiple subsystems. However, it is unclear how the complexity of a quantum algorithm, quantified by the number of entangling gates, depends on the subsystem size. We examine the quantum circuit complexity for exactly universal computation on many d-level systems (qudits). Both a lower bound and a constructive upper bound on the number of two-qudit gates result, proving a sharp asymptotic of Θ(d2n) gates. This closes the complexity question for all d-level systems (d finite). The optimal asymptotic applies to systems with locality constraints, e.g., nearest neighbor interactions. %B Physical Review LettersPhys. Rev. Lett. %V 94 %P 230502 - 230502 %8 2005/06/14/ %G eng %U http://link.aps.org/doi/10.1103/PhysRevLett.94.230502 %N 23 %R 10.1103/PhysRevLett.94.230502 %0 Journal Article %J Institute for Systems Research Technical Reports %D 2005 %T An Augmented Visual Query Mechanism for Finding Patterns in Time Series Data (2002) %A Keogh,Eamonn %A Hochheiser,Harry %A Shneiderman, Ben %K Technical Report %X Relatively few query tools exist for data exploration and pattern identification in time series data sets. In previous work we introduced Timeboxes. Timeboxes are rectangular, direct-manipulation queries for studying time-series datasets. We demonstrated how Timeboxes can be used to support interactive exploration via dynamic queries, along with overviews of query results and drag-and-drop support for query-by-example. In this paper, we extend our work by introducing Variable Time Timeboxes (VTT). VTTs are a natural generalization of Timeboxes, which permit the specification of queries that allow a degree of uncertainty in the time axis. We carefully motivate the need for these more expressive queries, and demonstrate the utility of our approach on several data sets. %B Institute for Systems Research Technical Reports %8 2005/// %G eng %U http://drum.lib.umd.edu/handle/1903/6495 %0 Conference Paper %D 2005 %T Automated checking for Windows host vulnerabilities %A Tamizi,M. %A Weinstein,M. %A Michel Cukier %K application vulnerabilities %K computing system security %K Ferret-Windows software tool %K host vulnerabilities %K network vulnerabilities %K open-source software %K operating systems (computers) %K plug-in module %K program diagnostics %K security of data %K software reliability %K software tools %K system attacks %K Windows host vulnerability checking %K Windows platforms %X Evaluation of computing system security requires knowledge of the vulnerabilities present in the system and of potential attacks against the system. Vulnerabilities can be classified based on their location as application vulnerabilities, network vulnerabilities, or host vulnerabilities. This paper describes Ferret-Windows, a new software tool for checking host vulnerabilities on the Windows platforms. This tool helps system administrators by quickly finding vulnerabilities that are present on a host. It is designed and implemented in a modular way: a plug-in module is used for each vulnerability checked, and each possible output format is specified by a plug-in module. Moreover, several vulnerability fixing plug-in modules exist to help users remove specific vulnerabilities. As a result, Ferret-Windows is extensible, and can easily be kept up-to-date through the addition of checks for new vulnerabilities as they are identified. Finally, Ferret-Windows is a freely available open-source software %P 10 pp. -148 - 10 pp. 
-148 %8 2005/11// %G eng %R 10.1109/ISSRE.2005.11 %0 Journal Article %J IEEE Transactions on Software Engineering %D 2005 %T Automatic Mining of Source Code Repositories to Improve Bug Finding Techniques %A Williams, Chadd C %A Hollingsworth, Jeffrey K %K configuration control %K debugging aids. %K index terms- testing tools %K version control %X We describe a method to use the source code change history of a software project to drive and help to refine the search for bugs. Based on the data retrieved from the source code repository, we implement a static source code checker that searches for a commonly fixed bug and uses information automatically mined from the source code repository to refine its results. By applying our tool, we have identified a total of 178 warnings that are likely bugs in the Apache Web server source code and a total of 546 warnings that are likely bugs in Wine, an open-source implementation of the Windows API. We show that our technique is more effective than the same static analysis that does not use historical data from the source code repository. %B IEEE Transactions on Software Engineering %V 31 %P 466 - 480 %8 2005/// %@ 0098-5589 %G eng %N 6 %R http://doi.ieeecomputersociety.org/10.1109/TSE.2005.63 %0 Journal Article %J Journal of Software Maintenance and Evolution: Research and Practice %D 2005 %T Automating regression testing for evolving GUI software %A Memon, Atif M. %A Nagarajan,Adithya %A Xie,Qing %K daily/nightly builds %K event-flow graphs %K Graphical user interfaces %K GUI regression testing %K GUI testing %K smoke testing %K Software quality %X With the widespread deployment of broadband connections worldwide, software development and maintenance are increasingly being performed by multiple engineers, often working around-the-clock to maximize code churn rates. To ensure rapid quality assurance of such software, techniques such as ‘nightly/daily building and smoke testing’ have become widespread since they often reveal bugs early in the software development process. During these builds, a development version of the software is checked out from the source code repository tree, compiled, linked, and (re)tested with the goal of (re)validating its basic functionality. Although successful for conventional software, smoke tests are difficult to develop and automatically re-run for software that has a graphical user interface (GUI). In this paper, we describe a framework called DART (Daily Automated Regression Tester) that addresses the needs of frequent and automated re-testing of GUI software. The key to our success is automation: DART automates everything from structural GUI analysis, smoke-test-case generation, test-oracle creation, to code instrumentation, test execution, coverage evaluation, regeneration of test cases, and their re-execution. Together with the operating system's task scheduler, DART can execute frequently with little input from the developer/tester to re-test the GUI software. We provide results of experiments showing the time taken and memory required for GUI analysis, test case and test oracle generation, and test execution. We empirically compare the relative costs of employing different levels of detail in the GUI test oracle. We also show the events and statements covered by the smoke test cases. Copyright © 2005 John Wiley & Sons, Ltd. 
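The DART entry above describes generating GUI smoke test cases from an event-flow graph. A minimal sketch of that idea, assuming a toy event-flow graph and hypothetical event names (this is not DART's actual generator), is:

```python
def smoke_sequences(event_flow, start_events, max_len=2):
    """Enumerate all event sequences up to max_len that respect the
    event-flow graph (an edge u -> v means event v can follow event u).
    Illustrates the smoke-test-case idea only, not DART itself."""
    sequences = [[e] for e in start_events]
    frontier = list(sequences)
    for _ in range(max_len - 1):
        extended = []
        for seq in frontier:
            for follower in event_flow.get(seq[-1], []):
                extended.append(seq + [follower])
        sequences.extend(extended)
        frontier = extended
    return sequences

# toy GUI: File-menu events (hypothetical names)
event_flow = {"open_file_menu": ["click_open", "click_exit"],
              "click_open": ["type_filename"],
              "type_filename": ["press_ok", "press_cancel"]}
for seq in smoke_sequences(event_flow, ["open_file_menu"], max_len=3):
    print(" -> ".join(seq))
```

Each printed sequence corresponds to one short smoke test case; an oracle and replay harness, as the abstract describes, would then execute and check them.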
%B Journal of Software Maintenance and Evolution: Research and Practice %V 17 %P 27 - 64 %8 2005/// %@ 1532-0618 %G eng %U http://onlinelibrary.wiley.com/doi/10.1002/smr.305/abstract %N 1 %R 10.1002/smr.305 %0 Conference Paper %B Geoscience and Remote Sensing Symposium, 2005. IGARSS '05. Proceedings. 2005 IEEE International %D 2005 %T Average relative radiance transform for subpixel detection %A Broadwater, J. %A Meth, R. %A Chellapa, Rama %B Geoscience and Remote Sensing Symposium, 2005. IGARSS '05. Proceedings. 2005 IEEE International %V 5 %P 3565 - 3568 %8 2005/07// %G eng %R 10.1109/IGARSS.2005.1526617 %0 Conference Paper %B W3C Workshop on Semantic Web for Life Sciences %D 2004 %T Abstracting workflows: unifying bioinformatics task conceptualization and specification through Semantic Web services %A Hashmi,N. %A Lee,S. %A Cummings, Michael P. %B W3C Workshop on Semantic Web for Life Sciences %C Cambridge, Massachusetts USA %8 2004/// %G eng %0 Journal Article %J IEEE Transactions on Speech and Audio Processing %D 2004 %T Accelerated speech source localization via a hierarchical search of steered response power %A Zotkin,Dmitry N %A Duraiswami, Ramani %K accelerated speech source localization %K Acceleration %K array signal processing %K conferencing system %K Delay %K delay-and-sum beamforming %K direction-of-arrival estimation %K Frequency %K hierarchical search algorithm %K Inverse problems %K multimedia applications %K Multimedia communication %K multiple speech sound source %K Position measurement %K Robustness %K search problems %K Sensor arrays %K Signal processing algorithms %K speech %K speech enhancement %K Speech processing %K steered response power %K steered response power phase-transform weighted source localization algorithm %K transducer arrays %K User interfaces %X Accurate and fast localization of multiple speech sound sources is a problem that is of significant interest in applications such as conferencing systems. Recently, approaches that are based on search for local peaks of the steered response power are becoming popular, despite their known computational expense. Based on the observation that the wavelengths of the sound from a speech source are comparable to the dimensions of the space being searched and that the source is broadband, we have developed an efficient search algorithm. Significant speedups are achieved by using coarse-to-fine strategies in both space and frequency. We present applications of the search algorithm to speed up simple delay-and-sum beamforming and steered response power phase-transform weighted (SRP-PHAT) source localization algorithms. A systematic series of comparisons with previous algorithms are made that show that the technique is much faster, robust, and accurate. The performance of the algorithm can be further improved by using constraints from computer vision. %B IEEE Transactions on Speech and Audio Processing %V 12 %P 499 - 508 %8 2004/09// %@ 1063-6676 %G eng %N 5 %R 10.1109/TSA.2004.832990 %0 Conference Paper %B Vehicular Technology Conference, 2004. VTC2004-Fall. 2004 IEEE 60th %D 2004 %T Achieving packet-level quality of service through scheduling in multirate WLANs %A Yuan Yuan %A Daqing Gu %A Arbaugh, William A. 
%A Jinyun Zhang %K Analytical models %K channel conditions %K channel errors %K channel temporal fair share %K compensation %K Computer science %K Delay %K error-prone flow compensation %K IEEE 802.11a/b/g physical layer %K multirate wireless fair scheduling %K multirate WLAN %K packet radio networks %K packet-level quality of service %K Physical layer %K Processor scheduling %K QoS %K quality of service %K radio access networks %K Scheduling algorithm %K Throughput %K throughput fairness %K USA Councils %K Wireless LAN %K wireless packet scheduling %K WMFS %X Wireless packet scheduling has been a popular paradigm to achieve packet-level QoS in terms of fairness and throughput in the presence of channel errors. However, the current design does not anticipate the multi-rate capability offered by the IEEE 802.11a/b/g physical layer, thus suffering significant performance degradation in 802.11 WLANs. In this paper, we propose multirate wireless fair scheduling (WMFS). In MWFS, each flow is granted a temporal fair share of the channel, in contrast to the throughput fair share adopted by existing algorithms. Therefore, each flow receives services in proportion to its perceived transmission rate, and high-rate flows are able to opportunistically exploit their good channel conditions and receive more services. MWFS also renovates the compensation model in order to allow for error-prone flows to catch up, thus ensuring fairness for all flows over error-prone channels. We demonstrate the effectiveness of MWFS through both simulations and analysis. Especially, WMFS achieves system throughput 159% of state-of-the-art scheduling algorithms in simulated scenarios. %B Vehicular Technology Conference, 2004. VTC2004-Fall. 2004 IEEE 60th %I IEEE %V 4 %P 2730- 2734 Vol. 4 - 2730- 2734 Vol. 4 %8 2004/09/26/29 %@ 0-7803-8521-7 %G eng %R 10.1109/VETECF.2004.1400554 %0 Journal Article %J Journal of industrial microbiology & biotechnology %D 2004 %T Acinetobacter lipases: molecular biology, biochemical properties and biotechnological potential %A Snellman,E. A. %A Rita R Colwell %X Lipases (EC 3.1.1.3) have received increased attention recently, evidenced by the increasing amount of information about lipases in the current literature. The renewed interest in this enzyme class is due primarily to investigations of their role in pathogenesis and their increasing use in biotechnological applications [38]. Also, many microbial lipases are available as commercial products, the majority of which are used in detergents, cosmetic production, food flavoring, and organic synthesis. Lipases are valued biocatalysts because they act under mild conditions, are highly stable in organic solvents, show broad substrate specificity, and usually show high regio- and/or stereo-selectivity in catalysis. A number of lipolytic strains of Acinetobacter have been isolated from a variety of sources and their lipases possess many biochemical properties similar to those that have been developed for biotechnological applications. This review discusses the biology of lipase expression in Acinetobacter, with emphasis on those aspects relevant to potential biotechnology applications. %B Journal of industrial microbiology & biotechnology %V 31 %P 391 - 400 %8 2004/// %G eng %N 9 %R 10.1007/s10295-004-0167-0 %0 Journal Article %J International Journal of Image and Graphics %D 2004 %T Active deformable models using density estimation %A Abd-Almageed, Wael %A Smith,C.E. 
%B International Journal of Image and Graphics %V 4 %P 343 - 362 %8 2004/// %G eng %N 3 %0 Conference Paper %B Proceedings of the Workshop Empirically Successful First-Order Reasoning, International Joint Conference on Automated Reasoning %D 2004 %T Active logic for more effective human-computer interaction and other commonsense applications %A Anderson,M. L %A Josyula,D. %A Perlis, Don %A Purang,K. %B Proceedings of the Workshop Empirically Successful First-Order Reasoning, International Joint Conference on Automated Reasoning %8 2004/// %G eng %0 Journal Article %J Proceedings of the AAAI 2004 Fall Symposium on Artificial Multi-agent Learning %D 2004 %T Adapting network structure for efficient team formation %A Gaston,M. %A Simmons,J. %A desJardins, Marie %B Proceedings of the AAAI 2004 Fall Symposium on Artificial Multi-agent Learning %8 2004/// %G eng %0 Journal Article %J ACM Transactions on Asian Language Information Processing %D 2004 %T Adaptive Hindi OCR Using Generalized Hausdorff Image Comparison %A Ma,Huanfeng %A David Doermann %B ACM Transactions on Asian Language Information Processing %V 26 %P 198 - 213 %8 2004/// %G eng %N 2 %0 Conference Paper %B Proceedings of the 5th ACM Conference on Electronic Commerce %D 2004 %T Adaptive limited-supply online auctions %A Hajiaghayi, Mohammad T. %A Kleinberg,R. %A Parkes,D. C %B Proceedings of the 5th ACM Conference on Electronic Commerce %P 71 - 80 %8 2004/// %G eng %0 Conference Paper %B Distributed Computing Systems, 2004. Proceedings. 24th International Conference on %D 2004 %T Adaptive replication in peer-to-peer systems %A Gopalakrishnan,V. %A Silaghi,B. %A Bhattacharjee, Bobby %A Keleher,P. %K adaptive replication %K data delivery %K distributed databases %K distributed processing %K LAR protocol %K low-latency decentralized data delivery %K peer-to-peer systems %K replicated databases %K replication strategies %K resource allocation %K structured peer-to-peer systems %K system-neutral replication protocol %X Peer-to-peer systems can be used to form a low-latency decentralized data delivery system. Structured peer-to-peer systems provide both low latency and excellent load balance with uniform query and data distributions. Under the more common skewed access distributions, however, individual nodes are easily overloaded, resulting in poor global performance and lost messages. This paper describes a lightweight, adaptive, and system-neutral replication protocol, called LAR, that maintains low access latencies and good load balance even under highly skewed demand. We apply LAR to Chord and show that it has lower overhead and better performance than existing replication strategies. %B Distributed Computing Systems, 2004. Proceedings. 24th International Conference on %P 360 - 369 %8 2004/// %G eng %R 10.1109/ICDCS.2004.1281601 %0 Book Section %B Model reduction methods for vector autoregressive processes %D 2004 %T Adaptively-Secure, Non-Interactive Public-Key Encryption %A Canetti,R. %A Halevi,S. %A Katz, Jonathan %B Model reduction methods for vector autoregressive processes %P 150 - 150 %8 2004/// %G eng %0 Journal Article %J Trends in Parasitology %D 2004 %T Advances in schistosome genomics %A El‐Sayed, Najib M. %A Bartholomeu,Daniella %A Ivens,Alasdair %A Johnston,David A. %A LoVerde,Philip T. %X In Spring 2004, the first draft of the 270 Mb genome of Schistosoma mansoni will be released. This sequence is based on the assembly and annotation of a >7.5-fold coverage, shotgun sequencing project.
The key stages involved in the international collaborative efforts that have led to the generation of these sequencing data for the parasite S. mansoni are discussed here. %B Trends in Parasitology %V 20 %P 154 - 157 %8 2004/04/01/ %@ 1471-4922 %G eng %U http://www.sciencedirect.com/science/article/pii/S1471492204000480 %N 4 %R 16/j.pt.2004.02.002 %0 Journal Article %J Environmental Modelling & Software %D 2004 %T Agent-based and analytical modeling to evaluate the effectiveness of greenbelts %A Brown,Daniel G. %A Page,Scott E. %A Riolo,Rick %A Rand, William %K Agent-based modeling %K Land-use change %K Landscape ecology %K Urban sprawl %X We present several models of residential development at the rural–urban fringe to evaluate the effectiveness of a greenbelt located beside a developed area, for delaying development outside the greenbelt. First, we develop a mathematical model, under two assumptions about the distributions of service centers, that represents the trade-off between greenbelt placement and width, their effects on the rate of development beyond the greenbelt, and how these interact with spatial patterns of aesthetic quality and the locations of services. Next, we present three agent-based models (ABMs) that include agents with the potential for heterogeneous preferences and a landscape with the potential for heterogeneous attributes. Results from experiments run with a one-dimensional ABM agree with the starkest of the results from the mathematical model, strengthening the support for both models. Further, we present two different two-dimensional ABMs and conduct a series of experiments to supplement our mathematical analysis. These include examining the effects of heterogeneous agent preferences, multiple landscape patterns, incomplete or imperfect information available to agents, and a positive aesthetic quality impact of the greenbelt on neighboring locations. These results suggest how width and location of the greenbelt could help determine the effectiveness of greenbelts for slowing sprawl, but that these relationships are sensitive to the patterns of landscape aesthetic quality and assumptions about service center locations. %B Environmental Modelling & Software %V 19 %P 1097 - 1109 %8 2004/12// %@ 1364-8152 %G eng %U http://www.sciencedirect.com/science/article/pii/S1364815203002664 %N 12 %R 10.1016/j.envsoft.2003.11.012 %0 Journal Article %J Multimedia Tools and Applications %D 2004 %T An algebra for powerpoint sources %A Fayzullin,M. %A V.S. Subrahmanian %X There are now millions of PowerPoint documents available within corporate intranets and/or over the Internet. In this paper, we develop a formal model of PowerPoint databases. We propose a relational style algebra called pptA (PowerPoint Algebra) to query PowerPoint databases. The algebra contains some new operators (such as the APPLY operator that changes properties of objects, slides and presentations) as well as interesting twists on relational operators (e.g. join and cartesian product allow different entities being joined together to share attributes whose values may be merged). We prove a set of equivalence results within this algebra. We have implemented a version of pptA—the paper provides a cost model and experimental results on the conditions under which these equivalences are useful. 
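The pptA entry above describes a relational-style algebra whose APPLY operator changes properties of objects, slides, and presentations. A toy analogue of that operator, assuming a simple dictionary representation of a presentation (this is an illustrative sketch, not the paper's formal pptA semantics), is:

```python
def apply_op(presentation, condition, updates):
    """Toy analogue of an APPLY-style operator: for every slide object
    satisfying `condition`, overwrite the given properties in place."""
    for slide in presentation["slides"]:
        for obj in slide["objects"]:
            if condition(obj):
                obj.update(updates)
    return presentation

deck = {"slides": [
    {"objects": [{"kind": "title", "font": "Arial", "size": 32},
                 {"kind": "body",  "font": "Arial", "size": 20}]},
    {"objects": [{"kind": "title", "font": "Times", "size": 30}]},
]}
# e.g. restyle every title object across the whole deck
apply_op(deck, lambda o: o["kind"] == "title", {"font": "Calibri", "size": 36})
print(deck["slides"][1]["objects"][0])
```

A selection condition plus a property update is the essence of the operator described in the abstract; the paper's equivalence results concern when such operations can be reordered or combined cheaply.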
%B Multimedia Tools and Applications %V 24 %P 273 - 301 %8 2004/// %G eng %N 3 %R 10.1023/B:MTAP.0000039422.87260.52 %0 Conference Paper %D 2004 %T Algorithmic Foundations for Consistency-Checking of Interaction-States of Mechatronic Systems %A Xu,C. %A Gupta,S.K. %X In order to reduce product development time, we need software tools that can perform automated validation of the proposed design concepts. These tools will ensure that only valid design concepts are transferred to the detailed design stage for further development. This paper provides a step towards the automated validation of proposed design concepts. We define the problem of consistency-checking of interaction-states as a key step in the design concept validation. We present a polynomial time algorithm for solving the interaction consistency-checking problem. We also present an algorithm for analyzing inconsistent interaction-states and identifying the inconsistent interactions. We believe that the framework described in this paper will provide the underlying foundations for constructing the next generation software tools for conceptual design of complex mechatronic systems. %C Salt Lake City, Utah, USA %8 2004/// %G eng %U ftp://ftp.eng.umd.edu/:/home/glue/s/k/skgupta/pub/Publication/DETC04_Xu.pdf %0 Journal Article %J Computer Systems: Architectures, Modeling, and Simulation %D 2004 %T Analysis of dataflow programs with interval-limited data-rates %A Teich,J. %A Bhattacharyya, Shuvra S. %B Computer Systems: Architectures, Modeling, and Simulation %P 507 - 518 %8 2004/// %G eng %0 Book Section %B Theory and Applications of Recent Robust Methods %D 2004 %T Analyzing the number of samples required for an approximate Monte-Carlo LMS line estimator %A Mount, Dave %A Netanyahu,N. S %A Zuck,E. %B Theory and Applications of Recent Robust Methods %I Birkhäuser %P 207 - 219 %8 2004/// %@ 3764370602, 9783764370602 %G eng %0 Journal Article %J Proc. of 6th Asian Conference on Computer Vision (ACCV) %D 2004 %T Appearance tracking using adaptive models in a particle filter %A Zhou, S. %A Chellapa, Rama %A Moghaddam, B. %X The particle filter is a popular tool for visual tracking. Usually, the appearance model is either fixed or rapidly changing and the motion model is simply a random walk with fixed noise variance. Also, the number of particles used is typically fixed. All these factors make the visual tracker unstable. To stabilize the tracker, we propose the following measures: an observation model arising from an adaptive noise variance, and adaptive number of particles. The adaptive velocity is computed via a first-order linear predictor using the previous particle configuration. Tracking under occlusion is accomplished using robust statistics. Experimental results on tracking visual objects in long video sequences such as vehicles, tank, and human faces demonstrate the effectiveness and robustness of our algorithm. %B Proc. of 6th Asian Conference on Computer Vision (ACCV) %8 2004/// %G eng %0 Journal Article %J Pattern Analysis and Applications %D 2004 %T An Appearance-based Approach for Consistent Labeling of Humans and Objects in Video %A Balcells-Capellades,M. %A DeMenthon,D. %A David Doermann %X We present an approach for consistently labeling people and for detecting human-object interactions using mono-camera surveillance video.
The approach is based on a robust appearance based correlogram model which is combined with histogram information to model color distributions of people and objects in the scene. The models are dynamically built from non-stationary objects which are the outputs of background subtraction, and are used to identify objects on a frame-by-frame basis. We are able to detect when people merge into groups and to segment them even during partial occlusion. We can also detect when a person deposits or removes an object. The models persist when a person or object leaves the scene and are used to identify them when they reappear. Experiments show that the models are able to accommodate perspective foreshortening that occurs with overhead camera angles, as well as partial occlusion. The results show that this is an effective approach able to provide important information to algorithms performing higher-level analysis such as activity recognition, where human-object interactions play an important role. %B Pattern Analysis and Applications %P 1433 - 7541 %8 2004/11// %G eng %0 Conference Paper %B Acoustics, Speech, and Signal Processing, 2004. Proceedings. (ICASSP '04). IEEE International Conference on %D 2004 %T Appearance-based tracking and recognition using the 3D trilinear tensor %A Jie Shao %A Zhou,S. K %A Chellapa, Rama %K 3D perspective transformation %K adaptive algorithm %K affine-transformation based algorithm %K airborne video %K appearance-based tracking %K geometrical structure representation %K mathematical operators %K novel view synthesis %K object recognition %K prediction %K template updating %K tensor estimation %K tensors %K trilinear tensor operator %K video signal processing %K video-based recognition %X The paper presents an appearance-based adaptive algorithm for simultaneous tracking and recognition by generalizing the transformation model to 3D perspective transformation. A trilinear tensor operator is used to represent the 3D geometrical structure. The tensor is estimated by predicting the corresponding points using the existing affine-transformation based algorithm. The estimated tensor is used to synthesize novel views to update the appearance templates. Some experimental results using airborne video are presented. %B Acoustics, Speech, and Signal Processing, 2004. Proceedings. (ICASSP '04). IEEE International Conference on %V 3 %P iii - 613-16 vol.3 %8 2004/05// %G eng %R 10.1109/ICASSP.2004.1326619 %0 Journal Article %J IEEE Software %D 2004 %T Applying Model-based Distributed Continuous Quality Assurance Processes to Enhance Persistent Software Attributes %A Krishna,A. S %A Yilmaz,C. %A Memon, Atif M. %A Porter, Adam %A Schmidt,D. C %A Gokhale,A. %A Natarajan,B. %X Time and resource constraints often force developers of highly configurable quality of service (QoS)-intensive software systems to guarantee their system's persistent software attributes (PSAs) (e.g., functional correctness, portability, efficiency, and QoS) on very few platform configurations and to extrapolate from these configurations to the entire configuration space, which allows many sources of degradation to escape detection until systems are fielded. This article illustrates how distributed continuous quality assurance (DCQA) processes help improve the assessment of these PSAs across large QoS-intensive software system configuration spaces.
We also illustrate how model-based DCQA processes enable developers to run formally-designed screening experiments that isolate the most significant configuration options, such as different workload parameters, operating system, compiler flags, feature sets, and/or run-time optimization controls. Our empirical results show that DCQA processes can be used to monitor, safeguard, and enforce PSAs at an acceptable level of cost and effort. %B IEEE Software %V 21 %P 32 - 40 %8 2004/// %G eng %N 6 %0 Report %D 2004 %T Applying permutation tests to tree-based statistical models: extending the R package rpart %A Cummings, Michael P. %A Myers,D. S %A Mangelson,M %X Tree-based statistical models are useful for evaluating relationships between predictor and response variables and for generating predictions when the response is unknown. However, current methods of constructing tree-based models do not provide a probabilistic assessment of the models produced. Here we describe our work to use permutation tests to quantitatively estimate the probability of tree-based statistical models. We have extended the rpart (recursive partitioning) package of the R system for statistical data analysis. This extension, rpart.permutation, executes the permutations in parallel, using MPI (Message Passing Interface) to greatly decrease the time necessary to complete the analysis. %I University of Maryland Institute for Advanced Computer Studies %V 2004-24 %8 2004/// %G eng %0 Journal Article %J Journal of Algorithms %D 2004 %T Approximation algorithms for partial covering problems %A Gandhi,Rajiv %A Khuller, Samir %A Srinivasan, Aravind %K Approximation algorithms %K Partial covering %K Primal-dual methods %K Randomized rounding %K Set cover %K Vertex cover %X We study a generalization of covering problems called partial covering. Here we wish to cover only a desired number of elements, rather than covering all elements as in standard covering problems. For example, in k-partial set cover, we wish to choose a minimum number of sets to cover at least k elements. For k-partial set cover, if each element occurs in at most f sets, then we derive a primal-dual f-approximation algorithm (thus implying a 2-approximation for k-partial vertex cover) in polynomial time. Without making any assumption about the number of sets an element is in, for instances where each set has cardinality at most three, we obtain an approximation of 4/3. We also present better-than-2-approximation algorithms for k-partial vertex cover on bounded degree graphs, and for vertex cover on expanders of bounded average degree. We obtain a polynomial-time approximation scheme for k-partial vertex cover on planar graphs, and for covering k points in R^d by disks. %B Journal of Algorithms %V 53 %P 55 - 84 %8 2004/10// %@ 0196-6774 %G eng %U http://www.sciencedirect.com/science/article/pii/S0196677404000689 %N 1 %R 10.1016/j.jalgor.2004.04.002 %0 Conference Paper %B Circuits and Systems, 2004. ISCAS '04. Proceedings of the 2004 International Symposium on %D 2004 %T Arbitrate-and-move primitives for high throughput on-chip interconnection networks %A Balkan,A.O.
%A Gang Qu %A Vishkin, Uzi %K 8 %K arbiter %K arbitrate-and-move %K architecture; %K asynchronous %K balanced %K binary %K circuit %K circuit; %K circuits; %K consumption; %K data %K explicit %K interconnection %K interconnections; %K leaf %K mesh-of-trees %K multi-threading; %K Multithreading %K n-leaf %K network; %K pipeline %K pipelined %K power %K primitive %K processing; %K reduced %K simulation; %K structures; %K synchronous %K synchrony %K system-on-chip; %K tree %K tree; %X An n-leaf pipelined balanced binary tree is used for arbitration of order and movement of data from n input ports to one output port. A novel arbitrate-and-move primitive circuit for every node of the tree, which is based on a concept of reduced synchrony that benefits from attractive features of both asynchronous and synchronous designs, is presented. The design objective of the pipelined binary tree is to provide a key building block in a high-throughput mesh-of-trees interconnection network for the Explicit Multi-Threading (XMT) architecture, a recently introduced parallel computation framework. The proposed reduced synchrony circuit was compared with asynchronous and synchronous designs of arbitrate-and-move primitives. Simulations with 0.18 μm technology show that, compared to an asynchronous design, the proposed reduced synchrony implementation achieves a higher throughput, up to 2 Giga-Requests per second on an 8-leaf binary tree. Our circuit also consumes less power than the synchronous design, and requires less silicon area than both the synchronous and asynchronous designs. %B Circuits and Systems, 2004. ISCAS '04. Proceedings of the 2004 International Symposium on %V 2 %P II - 441-4 Vol.2 %8 2004/05// %G eng %R 10.1109/ISCAS.2004.1329303 %0 Journal Article %J IEEE Robotics & Automation Magazine %D 2004 %T The Argus eye: a new imaging system designed to facilitate robotic tasks of motion %A Baker, P. %A Ogale, A. S %A Fermüller, Cornelia %K Argus eye %K Calibration %K CAMERAS %K computational geometry %K Design automation %K Eyes %K image formation %K imaging system %K Information geometry %K Layout %K Motion estimation %K multiple stereo configurations %K panoramic robots %K robot vision %K Robot vision systems %K robotic motion tasks %K Robotics and automation %K SHAPE %K shape model estimation %K system calibration %X This article describes an imaging system that has been designed to facilitate robotic tasks of motion. The system consists of a number of cameras in a network, arranged so that they sample different parts of the visual sphere. This geometric configuration has provable advantages compared to small field of view cameras for the estimation of the system's own motion and, consequently, the estimation of shape models from the individual cameras. The reason is that inherent ambiguities of confusion between translation and rotation disappear. Pairs of cameras may also be arranged in multiple stereo configurations, which provide additional advantages for segmentation. Algorithms for the calibration of the system and the three-dimensional (3-D) motion estimation are provided. %B IEEE Robotics & Automation Magazine %V 11 %P 31 - 38 %8 2004/12// %@ 1070-9932 %G eng %N 4 %R 10.1109/MRA.2004.1371606 %0 Journal Article %J IEEE Robotics and Automation Magazine %D 2004 %T The Argus eye, a new tool for robotics %A Baker, P. %A Ogale, A. S %A Fermüller, Cornelia %A Aloimonos, J.
%B IEEE Robotics and Automation Magazine %V 11 %P 31 - 38 %8 2004/// %G eng %N 4 %0 Conference Paper %B Software Engineering Conference, 2004. Proceedings. 2004 Australian %D 2004 %T ASPIRE: automated systematic protocol implementation robustness evaluation %A Vasan,Arunchandar %A Memon, Atif M. %K algorithm %K ASPIRE %K automated systematic protocol %K automated testing %K fault tolerant computing %K faulty PDU %K formal specification %K HTTP %K implementation robustness evaluation %K Internet %K network protocol %K protocol data unit %K protocol specification %K robustness testing %K SMTP protocol %K stateful protocols %K stateless protocols %K Transport protocols %X Network protocol implementations are susceptible to problems caused by their lack of ability to handle invalid inputs. We present ASPIRE: automated systematic protocol implementation robustness evaluation, an automated approach to pro-actively test protocol implementations by observing their responses to faulty protocol data units (PDUs) or messages. In contrast to existing approaches, we sample the faulty PDU space in a systematic manner, thus allowing us to evaluate protocol implementations in the face of a wider variety of faulty PDUs. We use a pruning strategy to reduce, from exponential, the size of the faulty PDU set to polynomial in the number of fields of a PDU. We have implemented the ASPIRE algorithms and evaluated them on implementations of HTTP (Apache, Google Web Server (GWS), and Microsoft IIS) and SMTP (Sendmail and Microsoft Exchange) protocols. Our results show that Apache, GWS, and IIS, although implementing the same protocol specification, behave differently on faulty HTTP PDUs; Sendmail and exchange are different in handling our faulty SMTP PDUs. %B Software Engineering Conference, 2004. Proceedings. 2004 Australian %P 241 - 250 %8 2004/// %G eng %R 10.1109/ASWEC.2004.1290477 %0 Journal Article %J Physics of Fluids %D 2004 %T Assessment of inflow boundary conditions for compressible turbulent boundary layers %A Xu,S. %A Martin, M.P %B Physics of Fluids %V 16 %P 2623 - 2623 %8 2004/// %G eng %0 Conference Paper %B 13th IEEE International Symposium on High performance Distributed Computing, 2004. Proceedings %D 2004 %T Automated cluster-based Web service performance tuning %A Chung,I. -H %A Hollingsworth, Jeffrey K %K Active Harmony system %K automated performance tuning %K business %K cluster-based Web service system %K Clustering algorithms %K Computer science %K Educational institutions %K electronic commerce %K Internet %K Middleware %K performance evaluation %K scalability %K Throughput %K Transaction databases %K Web server %K Web services %K workstation clusters %X Active harmony provides a way to automate performance tuning. We apply the Active Harmony system to improve the performance of a cluster-based web service system. The performance improvement cannot easily be achieved by tuning individual components for such a system. The experimental results show that there is no single configuration for the system that performs well for all kinds of workloads. By tuning the parameters, Active Harmony helps the system adapt to different workloads and improve the performance up to 16%. For scalability, we demonstrate how to reduce the time when tuning a large system with many tunable parameters. Finally an algorithm is proposed to automatically adjust the structure of cluster-based web systems, and the system throughput is improved up to 70% using this technique. 
%B 13th IEEE International Symposium on High performance Distributed Computing, 2004. Proceedings %I IEEE %P 36 - 44 %8 2004/06/04/6 %@ 0-7695-2175-4 %G eng %R 10.1109/HPDC.2004.1323484 %0 Book Section %B Artificial Intelligence Methods in Software Testing %D 2004 %T Automated GUI regression testing using AI planning %A Memon, Atif M. %B Artificial Intelligence Methods in Software Testing %I World Scientific %P 51 - 99 %8 2004/// %@ 9812388540, 9789812388544 %G eng %0 Book %D 2004 %T Automated Planning: Theory and Practice %A Ghallab,Malik %A Nau, Dana S. %A Traverso,Paolo %K Business & Economics / Management %K Computers / Intelligence (AI) & Semantics %K Production planning %K Production planning/ Data processing %K Technology & Engineering / Robotics %X Automated planning technology now plays a significant role in a variety of demanding applications, ranging from controlling space vehicles and robots to playing the game of bridge. These real-world applications create new opportunities for synergy between theory and practice: observing what works well in practice leads to better theories of planning, and better theories lead to better performance of practical applications. Automated Planning mirrors this dialogue by offering a comprehensive, up-to-date resource on both the theory and practice of automated planning. The book goes well beyond classical planning, to include temporal planning, resource scheduling, planning under uncertainty, and modern techniques for plan generation, such as task decomposition, propositional satisfiability, constraint satisfaction, and model checking. The authors combine over 30 years' experience in planning research and development to offer an invaluable text to researchers, professionals, and graduate students. *Comprehensively explains paradigms for automated planning. *Provides a thorough understanding of theory and planning practice, and how they relate to each other. *Presents case studies of applications in space, robotics, CAD/CAM, process control, emergency operations, and games. *Provides a thorough understanding of AI planning theory and practice, and how they relate to each other. *Covers all the contemporary topics of planning, as well as important practical applications of planning, such as model checking and game playing. *Presents case studies and applications in planning engineering, space, robotics, CAD/CAM, process control, emergency operations, and games. *Provides lecture notes, examples of programming assignments, pointers to downloadable planning systems and related information online. %I Elsevier %8 2004/05/03/ %@ 9781558608566 %G eng %0 Conference Paper %B Acoustics, Speech, and Signal Processing, 2004. Proceedings. (ICASSP '04). IEEE International Conference on %D 2004 %T Automatic position calibration of multiple microphones %A Raykar,V.C. %A Duraiswami, Ramani %K approximations; %K array %K audio %K AUTOMATIC %K calibration; %K closed %K covariance; %K dimensional %K estimation; %K form %K function %K implicit %K least %K likelihood %K loudspeakers; %K maximum %K microphone %K microphones; %K minimisation; %K minimization; %K multiple %K nonlinear %K position %K positions; %K problem; %K processing; %K signal %K solution; %K squares %K theorem; %K three %X We describe a method to automatically determine the relative three-dimensional positions of multiple microphones using at least five loudspeakers in unknown positions.
The only assumption we make is that there is a microphone which is very close to a loudspeaker. In our experimental setup, we attach one microphone to each loudspeaker. We derive the maximum likelihood estimator and the solution turns out to be a non-linear least squares problem. A closed form solution which can be used as the initial guess for the minimization routine is derived. We also derive an approximate expression for the covariance of the estimator using the implicit function theorem. Using this, we analyze the performance of the estimator with respect to the positions of the loudspeakers. The algorithm is validated using both Monte-Carlo simulations and a real-time experimental setup. %B Acoustics, Speech, and Signal Processing, 2004. Proceedings. (ICASSP '04). IEEE International Conference on %V 4 %P iv-69 - iv-72 vol.4 %8 2004/05// %G eng %R 10.1109/ICASSP.2004.1326765 %0 Journal Article %J IEEE Transactions on Speech and Audio Processing, Special Issue on Spontaneous Speech Processing %D 2004 %T Automatic recognition of spontaneous speech for access to multilingual oral history archives %A Byrne,W. %A David Doermann %A Franz,M. %A Gustman,S. %A Hajic,J. %A Oard, Douglas %A Picheny,M. %A Psutka,J. %A Ramabhadran,B. %X The MALACH project has the goal of developing the technologies needed to facilitate access to large collections of spontaneous speech. Its aim is to dramatically improve the state of the art in key Automatic Speech Recognition (ASR) and Natural Language Processing (NLP) technologies for use in large-scale retrieval systems. The project leverages a unique collection of oral history interviews with survivors of the Holocaust that has been assembled and extensively annotated by the Survivors of the Shoah Visual History Foundation. This paper describes the collection, 116,000 hours of interviews in 32 languages, and the way in which system requirements have been discerned through user studies. It discusses ASR methods for very difficult speech (heavily accented, emotional, and elderly spontaneous speech), including transcription to create training data and methods for language modeling and speaker adaptation. Results are presented for English and Czech. NLP results are presented for named entity tagging, topic segmentation, and supervised topic classification, and the architecture of an integrated search system that uses these results is described. %B IEEE Transactions on Speech and Audio Processing, Special Issue on Spontaneous Speech Processing %V 12 %P 420 - 435 %8 2004/07// %G eng %N 4 %0 Conference Paper %B 2004 IEEE International Conference on Communications %D 2004 %T Automatic video summarization for wireless and mobile environments %A Yong Rao %A Mundur, Padma %A Yesha,Y. %K automatic video summarization %K batch processing %K batch processing (computers) %K Clustering algorithms %K Clustering methods %K clustering scheme %K Computer science %K Delaunay diagram %K graph theory %K Gunshot detection systems %K Image sequences %K mesh generation %K Mobile computing %K mobile radio %K multidimensional point data cluster %K Multidimensional systems %K Multimedia communication %K video clip %K video frame content %K Video sequences %K video signal processing %K wireless mobile environment %X In this paper, we propose a novel video summarization technique with which we can automatically generate high-quality video summaries suitable for wireless and mobile environments.
The significant contribution of this paper lies in the proposed clustering scheme. We use Delaunay diagrams to cluster multidimensional point data corresponding to the frame contents of the video. In contrast to the existing clustering techniques used for summarization, our clustering algorithm is fully automatic and well suited for batch processing. We illustrate the quality of our clustering and summarization scheme in an experiment using several video clips. %B 2004 IEEE International Conference on Communications %I IEEE %V 3 %P 1532 - 1536 Vol.3 %8 2004/06/20/24 %@ 0-7803-8533-0 %G eng %R 10.1109/ICC.2004.1312767 %0 Conference Paper %B Proceedings of the 2003 annual national conference on Digital government research %D 2003 %T Accessing diverse geo-referenced data sources with the SAND spatial DBMS %A Sankaranarayanan,Jagan %A Tanin,Egemen %A Samet, Hanan %A Brabec,Frantisek %X The Internet has become the most frequently accessed medium for obtaining various types of data. In particular, government agencies, academic institutions, and private enterprises have published gigabytes of geo-referenced data on the Web. However, to obtain geo-referenced data from the Web successfully, systems must be designed to be capable of understanding the data sets published in different data formats. Also, even if the data sets are available in a simple known format, they often have poorly defined structures. With these issues in mind, we have developed an Internet-enabled data collection and conversion utility that interfaces with our prototype spatial database system, SAND. Using this utility, data can be retrieved from many different sources on the Web and converted into a format understandable by the SAND spatial database management system. Our collection and conversion utility is able to import the most popular data formats; namely, ESRI Shapefiles, Microsoft Excel files, HTML files, and GML files. Data in unstructured formats are verified for correct selection of the data types and handling of missing tuples before the insertion operation into the database. Moreover, our utility makes it possible to download any nonspatial data set and combine it internally with a relevant spatial data set. These features are accessible through a spreadsheet-like interface for online editing and structuring of data. %B Proceedings of the 2003 annual national conference on Digital government research %S dg.o '03 %I Digital Government Society of North America %P 1 - 4 %8 2003/// %G eng %U http://dl.acm.org/citation.cfm?id=1123196.1123206 %0 Journal Article %J University of Maryland Technical Report, HCIL-2003 %D 2003 %T Accuracy, Target Reentry and Fitts' Law Performance of Preschool Children Using Mice %A Hourcade,J. P %A Bederson, Benjamin B. %A Druin, Allison %A Guimbretiere,F. %B University of Maryland Technical Report, HCIL-2003 %V 16 %8 2003/// %G eng %0 Journal Article %J Image Processing, IEEE Transactions on %D 2003 %T Accurate dense optical flow estimation using adaptive structure tensors and a parametric model %A Liu,Haiying %A Chellapa, Rama %A Rosenfeld, A.
%K 3D %K accuracy; %K adaptive %K and %K coherent %K confidence %K dense %K eigenfunctions; %K eigenvalue %K eigenvalues %K eigenvectors; %K estimation; %K flow %K generalized %K ground %K image %K measure; %K model; %K MOTION %K optical %K parameter %K parametric %K problem; %K real %K region; %K sequences; %K structure %K synthetic %K tensor; %K tensors; %K three-dimensional %K truth; %X An accurate optical flow estimation algorithm is proposed in this paper. By combining the three-dimensional (3D) structure tensor with a parametric flow model, the optical flow estimation problem is converted to a generalized eigenvalue problem. The optical flow can be accurately estimated from the generalized eigenvectors. The confidence measure derived from the generalized eigenvalues is used to adaptively adjust the coherent motion region to further improve the accuracy. Experiments using both synthetic sequences with ground truth and real sequences illustrate our method. Comparisons with classical and recently published methods are also given to demonstrate the accuracy of our algorithm. %B Image Processing, IEEE Transactions on %V 12 %P 1170 - 1180 %8 2003/10// %@ 1057-7149 %G eng %N 10 %R 10.1109/TIP.2003.815296 %0 Journal Article %J The Journal of the Acoustical Society of America %D 2003 %T Acoustical scattering from N spheres using a multilevel fast multipole method %A Gumerov, Nail A. %A Duraiswami, Ramani %X We develop an efficient computational method for wave scattering from a large number of spherical objects that are characterized by their radii, locations, and complex acoustical impedances of the surfaces. The direct T‐matrix method for solution of the problem that was developed and tested in [Gumerov and Duraiswami, J. Acoust. Soc. Am. 112, 2688–2701 (2002)] is inefficient for computations involving a large number of scatterers. Here, we implement and test a multilevel fast multipole method for speeding up the solution and achieving better memory complexity. The method is based on hierarchical space subdivision with oct‐trees using optimal space‐partitioning, on a theory for fast translation and re‐expansion of multipole solutions of the Helmholtz equation, and employs an iterative technique for the solution of large dense systems of linear equations with reflection‐based iterations. For N scatterers the method provides O(N log N) asymptotic complexity as opposed to the O(N^3) complexity of the direct T‐matrix approach. The results of computations were tested against solutions obtained by other methods, such as the boundary element method and the direct T‐matrix method tested in our early study, and show the computational efficiency and accuracy of the solution technique. [Work supported by NSF Awards 0086075 and 0219681 is gratefully acknowledged.] %B The Journal of the Acoustical Society of America %V 113 %P 2334 - 2334 %8 2003/// %G eng %U http://link.aip.org/link/?JAS/113/2334/2 %N 4 %0 Journal Article %J Proceedings of the 9th MT Summit %D 2003 %T Acquisition of bilingual MT lexicons from OCRed dictionaries %A Karagol-Ayan,B. %A David Doermann %A Dorr, Bonnie J %X This paper describes an approach to analyzing the lexical structure of OCRed bilingual dictionaries to construct resources suited for machine translation of low-density languages, where online resources are limited. A rule-based, an HMM-based, and a post-processed HMM-based method are used for rapid construction of MT lexicons based on systematic structural clues provided in the original dictionary.
We evaluate the effectiveness of our techniques, concluding that: (1) the rule-based method performs better with dictionaries where the font is not an important distinguishing feature for determining information types; (2) the post-processed stochastic method improves the results of the stochastic method for phrasal entries; and (3) our resulting bilingual lexicons are comprehensive enough to provide the basis for reasonable translation results when compared to human translations. %B Proceedings of the 9th MT Summit %P 208 - 215 %8 2003/// %G eng %0 Conference Paper %B Proceedings of the Ninth Machine Translation Summit %D 2003 %T Acquisition of Bilingual MT Lexicons from OCRed Dictionaries %A Karagol-Ayan,B. %A David Doermann %A Dorr, Bonnie J %B Proceedings of the Ninth Machine Translation Summit %C New Orleans, LA %P 208 - 215 %8 2003/// %G eng %0 Journal Article %J Behavior Representation in Modeling and Simulation %D 2003 %T Activity modeling and recognition using shape theory %A Chellapa, Rama %A Vaswani, N. %A Chowdhury, A.K.R. %X Understanding activities arising out of the interactions of a configuration of moving objects is an important problem in video understanding, with applications in surveillance and monitoring, animation, medicine, etc. In this paper, we introduce a novel method for activity modeling based on the observation that every activity has an associated structure characterized by a non-rigid shape and a dynamic model that characterizes the variations in the structure as the activity unfolds. We propose two mathematical models to characterize the non-rigid shape and its dynamics. In our first approach, we propose to model an activity by the polygonal shape formed by joining the locations of these point masses at any time, and its deformation over time. This uses the statistical shape theory of Kendall. The second approach models the trajectories of each separate class of moving objects in 3D shape space, and thus can identify different kinds of activities. It is based on the factorization theorem for matrices, which has been used before in computer vision for structure estimation. Deviations from the learned normal shape for each activity are used to identify abnormal ones. We demonstrate the applicability of our algorithms using real-life video sequences in an airport surveillance environment. We are able to identify the major activities that take place in that setting and detect abnormal ones. %B Behavior Representation in Modeling and Simulation %8 2003/// %G eng %0 Conference Paper %B Computer Vision and Pattern Recognition, 2003. Proceedings. 2003 IEEE Computer Society Conference on %D 2003 %T Activity recognition using the dynamics of the configuration of interacting objects %A Vaswani, N. %A RoyChowdhury, A.
%A Chellapa, Rama %K 2D %K abnormal %K abnormality %K abnormality; %K acoustic %K activity %K analysis; %K change; %K Computer %K configuration %K configuration; %K data; %K DETECTION %K detection; %K distribution; %K drastic %K dynamics; %K event; %K filter; %K hand-picked %K image %K infrared %K interacting %K learning; %K location %K low %K mean %K model; %K monitoring; %K MOTION %K moving %K noise; %K noisy %K object %K object; %K observation %K observation; %K particle %K pattern %K plane; %K point %K polygonal %K probability %K probability; %K problem; %K processing; %K radar %K recognition; %K resolution %K sensor; %K sensors; %K sequence; %K SHAPE %K shape; %K signal %K slow %K statistic; %K strategy; %K Surveillance %K surveillance; %K target %K test %K tracking; %K video %K video; %K visible %K vision; %X Monitoring activities using video data is an important surveillance problem. A special scenario is to learn the pattern of normal activities and detect abnormal events from a very low resolution video where the moving objects are small enough to be modeled as point objects in a 2D plane. Instead of tracking each point separately, we propose to model an activity by the polygonal 'shape' of the configuration of these point masses at any time t, and its deformation over time. We learn the mean shape and the dynamics of the shape change using hand-picked location data (no observation noise) and define an abnormality detection statistic for the simple case of a test sequence with negligible observation noise. For the more practical case where observation (point locations) noise is large and cannot be ignored, we use a particle filter to estimate the probability distribution of the shape given the noisy observations up to the current time. Abnormality detection in this case is formulated as a change detection problem. We propose a detection strategy that can detect both 'drastic' and 'slow' abnormalities. Our framework can be directly applied for object location data obtained using any type of sensors - visible, radar, infrared or acoustic. %B Computer Vision and Pattern Recognition, 2003. Proceedings. 2003 IEEE Computer Society Conference on %V 2 %P II - 633-40 vol.2 %8 2003/06// %G eng %R 10.1109/CVPR.2003.1211526 %0 Report %D 2003 %T Adaptive Hindi OCR Using Generalized Hausdorff Image Comparison %A Ma,Huanfeng %A David Doermann %X In this paper, we present an adaptive Hindi OCR using generalized Hausdorff image comparison implemented as part of a rapidly retargetable language tool effort. The system includes: script identification, character segmentation, training sample creation and character recognition. The OCR design (completed in one month) was applied to a complete Hindi-English bilingual dictionary (with 1083 pages) and a collection of ideal images extracted from Hindi documents in PDF format. Experimental results show the recognition accuracy can reach 88% for noisy images and 95% for ideal images, both at the character level. The presented method can also be extended to design OCR systems for different scripts. %I University of Maryland, College Park %V LAMP-TR-105,CFAR-TR-987,CS-TR-4519,UMIACS-TR-2003-87 %8 2003/08// %G eng %0 Conference Paper %B Peer-to-Peer Computing, IEEE International Conference on %D 2003 %T Adaptive Probabilistic Search for Peer-to-Peer Networks %A Tsoumakos,Dimitrios %A Roussopoulos, Nick %X Peer-to-Peer networks are gaining increasing attention from both the scientific and the large Internet user community.
Popular applications utilizing this new technology offer many attractive features to a growing number of users. At the heart of such networks lies the search algorithm. Proposed methods either depend on the network-disastrous flooding and its variations or utilize various indices too expensive to maintain. In this paper, we describe an adaptive, bandwidth-efficient algorithm for search in unstructured Peer-to-Peer networks, the Adaptive Probabilistic Search method (APS). Our scheme utilizes feedback from previous searches to probabilistically guide future ones. It performs efficient object discovery while inducing zero overhead over dynamic network operations. Extensive simulation results show that APS achieves high success rates, increased number of discovered objects, very low bandwidth consumption and adaptation to changing topologies. %B Peer-to-Peer Computing, IEEE International Conference on %I IEEE Computer Society %C Los Alamitos, CA, USA %P 102 - 102 %8 2003/// %@ 0-7695-2023-5 %G eng %R http://doi.ieeecomputersociety.org/10.1109/PTP.2003.1231509 %0 Journal Article %J Parallel and Distributed Systems, IEEE Transactions on %D 2003 %T An adaptive quality of service aware middleware for replicated services %A Krishnamurthy,Sudha %A Sanders,W. H. %A Michel Cukier %K distributed resource sharing %K Middleware %K online performance monitoring %K probabilistic modeling %K QoS specification %K quality of service %K replica consistency %K replicated services %K resource allocation %K time-sensitive client applications %K timeliness constraints %X A dependable middleware should be able to adaptively share the distributed resources it manages in order to meet diverse application requirements, even when the quality of service (QoS) is degraded due to uncertain variations in load and unanticipated failures. We have addressed this issue in the context of a dependable middleware that adaptively manages replicated servers to deliver a timely and consistent response to time-sensitive client applications. These applications have specific temporal and consistency requirements, and can tolerate a certain degree of relaxed consistency in exchange for better response time. We propose a flexible QoS model that allows clients to specify their timeliness and consistency constraints. We also propose an adaptive framework that dynamically selects replicas to service a client's request based on the prediction made by probabilistic models. These models use the feedback from online performance monitoring of the replicas to provide probabilistic guarantees for meeting a client's QoS specification. The experimental results we have obtained demonstrate the role of feedback and the efficacy of simple analytical models for adaptively sharing the available replicas among the users under different workload scenarios. %B Parallel and Distributed Systems, IEEE Transactions on %V 14 %P 1112 - 1125 %8 2003/11// %@ 1045-9219 %G eng %N 11 %R 10.1109/TPDS.2003.1247672 %0 Conference Paper %B Multimedia and Expo, 2003. ICME '03. Proceedings. 2003 International Conference on %D 2003 %T Adaptive visual tracking and recognition using particle filters %A Zhou,Shaohua %A Chellapa, Rama %A Moghaddam, B. 
%K adaptive %K adaptive-velocity %K appearance %K extra-personal %K Filtering %K filters; %K image %K intra-personal %K model; %K MOTION %K particle %K processing; %K recognition; %K sequence; %K sequences; %K series %K signal %K spaces; %K theory; %K TIME %K tracking; %K video %K visual %X This paper presents an improved method for simultaneous tracking and recognition of human faces from video, where a time series model is used to resolve the uncertainties in tracking and recognition. The improvements mainly arise from three aspects: (i) modeling the inter-frame appearance changes within the video sequence using an adaptive appearance model and an adaptive-velocity motion model; (ii) modeling the appearance changes between the video frames and gallery images by constructing intra- and extra-personal spaces; and (iii) utilization of the fact that the gallery images are in frontal views. By embedding them in a particle filter, we are able to achieve a stabilized tracker and an accurate recognizer when confronted by pose and illumination variations. %B Multimedia and Expo, 2003. ICME '03. Proceedings. 2003 International Conference on %V 2 %P II - 349-52 vol.2 %8 2003/07// %G eng %R 10.1109/ICME.2003.1221625 %0 Book Section %B Advances in Computers %D 2003 %T Advances in GUI Testing %A Memon, Atif M. %X Graphical user interfaces are by far the most popular means used to interact with software today. Unfortunately, the state-of-the-practice in GUI testing has not kept pace with the rapidly evolving GUI technology. In practice, GUI testing is largely manual, often resulting in inadequate testing. There have been several research efforts to improve GUI testing. This chapter presents some of the recent advances in GUI testing and provides guidelines on how to combine them. %B Advances in Computers %I Elsevier %V 58 %P 149 - 201 %8 2003/// %@ 0065-2458 %G eng %U http://www.sciencedirect.com/science/article/pii/S0065245803580044 %0 Journal Article %J Advances in Computers %D 2003 %T Advances in GUI Testing-10 Summary %A Memon, Atif M. %B Advances in Computers %V 58 %P 195 - 196 %8 2003/// %G eng %0 Journal Article %J Advances in Computers %D 2003 %T Advances in GUI Testing-2 GUI Testing Steps %A Memon, Atif M. %B Advances in Computers %V 58 %P 155 - 156 %8 2003/// %G eng %0 Journal Article %J Advances in Computers %D 2003 %T Advances in GUI Testing-3 Record/Playback Tools %A Memon, Atif M. %B Advances in Computers %V 58 %P 157 - 158 %8 2003/// %G eng %0 Journal Article %J Advances in Computers %D 2003 %T Advances in GUI Testing-4 Challenges %A Memon, Atif M. %B Advances in Computers %V 58 %P 159 - 161 %8 2003/// %G eng %0 Journal Article %J Advances in Computers %D 2003 %T Advances in GUI Testing-5 State Machine Models %A Memon, Atif M. %B Advances in Computers %V 58 %P 162 - 167 %8 2003/// %G eng %0 Journal Article %J Advances in Computers %D 2003 %T Advances in GUI Testing-6 Behavioral Models %A Memon, Atif M. %B Advances in Computers %V 58 %P 168 - 169 %8 2003/// %G eng %0 Journal Article %J Advances in Computers %D 2003 %T Advances in GUI Testing-7 Statistical Methods %A Memon, Atif M. %B Advances in Computers %V 58 %P 170 - 171 %8 2003/// %G eng %0 Journal Article %J Advances in Computers %D 2003 %T Advances in GUI Testing-8 Hierarchical Techniques %A Memon, Atif M.
%B Advances in Computers %V 58 %P 172 - 193 %8 2003/// %G eng %0 Journal Article %J Advances in Computers %D 2003 %T Advances in GUI Testing-9 Discussion %A Memon, Atif M. %B Advances in Computers %V 58 %P 194 - 194 %8 2003/// %G eng %0 Book Section %B Video Registration %D 2003 %T Airborne Video Registration for Activity Monitoring %A Shekhar,Chandra %A Chellapa, Rama %E Shah,Mubarak %E Kumar,Rakesh %X The analysis of ground activities involving vehicles and humans is of interest in airborne surveillance applications. The high spatiotemporal resolution of a video camera makes it the sensor of choice for such applications. In the past, video sensors were not preferred, precisely because of the high volume of data generated, which had to be processed manually for the most part. Further, the moving platform gives rise to numerous false alarms. %B Video Registration %S The Kluwer International Series in Video Computing %I Springer US %V 5 %P 130 - 143 %8 2003/// %@ 978-1-4615-0459-7 %G eng %U http://dx.doi.org/10.1007/978-1-4615-0459-7_6 %0 Journal Article %J Journal of Computing and Information Science in Engineering %D 2003 %T Algorithms for computing global accessibility cones %A Dhaliwal,S. %A Gupta,S.K. %A Huang,J. %A Priyadarshini,A. %B Journal of Computing and Information Science in Engineering %V 3 %P 200 - 200 %8 2003/// %G eng %0 Journal Article %J Computer-Aided Design %D 2003 %T Algorithms for selecting cutters in multi-part milling problems %A Yao,Zhiyang %A Gupta, Satyandra K. %A Nau, Dana S. %K 2.5-D Milling %K cutter selection %K Geometric Algorithms %X This paper describes geometric algorithms for automatically selecting an optimal sequence of cutters for machining a set of 2.5-D parts. In milling operations, cutter size affects the machining time significantly. Meanwhile, if the batch size is small, it is also important to shorten the time spent on loading tools into the tool magazine and establishing z-length compensation values. Therefore, in small-batch manufacturing, if we can select a set of milling tools that will produce good machining time on more than one type of part, then several unnecessary machine-tool reconfiguration operations can be eliminated. In selecting milling cutters we consider both the tool loading time and the machining time and generate solutions that allow us to minimize the total machining time. In this paper we first present algorithms for finding the area that can be cut by a given cutter. Then we describe a graph search formulation for the tool selection problem. Finally, the optimal sequence of cutters is selected by using Dijkstra's shortest path planning algorithm. %B Computer-Aided Design %V 35 %P 825 - 839 %8 2003/08// %@ 0010-4485 %G eng %U http://www.sciencedirect.com/science/article/pii/S0010448502001100 %N 9 %R 10.1016/S0010-4485(02)00110-0 %0 Conference Paper %B ICDAR %D 2003 %T A Model-based Line Detection Algorithm in Documents %A Zheng,Y. %A Li,H. %A David Doermann %B ICDAR %P 44 - 48 %8 2003/// %G eng %0 Journal Article %J The Journal of the Acoustical Society of America %D 2003 %T Analysis of particular phononic structures using single integral equations %A Seydou,Fadoulourahmane %A Duraiswami, Ramani %A Gumerov, Nail A. %X A fast method for determining phononic (and photonic) bandgaps in composite materials is developed.
It is known that in the propagation of waves in a 3D medium containing N scatterers arranged periodically, there exist refractive indices for which such structures have bandgaps, i.e., frequencies for which no waves can propagate inside. Our task is to find the frequencies that generate these prohibited waves. This requires the solution of an eigenvalue problem for the Helmholtz operator. To solve this problem we choose an alternate route which uses boundary integral equations. We derive a single integral equation on each of the interfaces between the outer region and the scatterers, considering a general transmission boundary condition, by using a hybrid method using layer potentials and Green’s formula. This approach reduces the number of unknowns considerably in comparison to other methods, but requires the treatment of large dense matrices and many matrix vector multiplications. To remedy this, we use the Fast Multipole Method. For solving the eigenvalue problem we discuss two methods: the Newton method and a method based on the Cauchy formula. Details of the numerical implementation, and results will be presented for different cases: sound hard, sound soft, impedance and transmission boundary conditions. [Work partially supported by NSF Award 0219681 is gratefully acknowledged.] %B The Journal of the Acoustical Society of America %V 113 %P 2284 - 2284 %8 2003/// %G eng %U http://link.aip.org/link/?JAS/113/2284/1 %N 4 %0 Journal Article %J SIAM Journal on Numerical Analysis %D 2003 %T An Analysis of Smoothing Effects of Upwinding Strategies for the Convection-Diffusion Equation %A Elman, Howard %A Alison Ramage %X Using a technique for constructing analytic expressions for discrete solutions to the convection-diffusion equation, we examine and characterize the effects of upwinding strategies on solution quality. In particular, for grid-aligned flow and discretization based on bilinear finite elements with streamline upwinding, we show precisely how the amount of upwinding included in the discrete operator affects solution oscillations and accuracy when different types of boundary layers are present. This analysis provides a basis for choosing a streamline upwinding parameter which also gives accurate solutions for problems with non-grid-aligned and variable speed flows. In addition, we show that the same analytic techniques provide insight into other discretizations, such as a finite difference method that incorporates streamline diffusion and the isotropic artificial diffusion method. %B SIAM Journal on Numerical Analysis %V 40 %P 254 - 281 %8 2003/01/01/ %@ 0036-1429 %G eng %U http://www.jstor.org/stable/4100953 %N 1 %0 Journal Article %J ACM Computer Communications Review %D 2003 %T An Analysis of the Layer 2 Handoff costs in Wireless Local Area Networks %A Mishra,A. %A Shin,M. H. %A Arbaugh, William A. %B ACM Computer Communications Review %V 33 %8 2003/// %G eng %N 2 %0 Journal Article %J Sea Technology %D 2003 %T ANNUAL REVIEW & FORECAST REPORTS-THE OCEANS: TO PROTECT AND TO PLOW %A Rita R Colwell %B Sea Technology %V 44 %P 33 - 34 %8 2003/// %G eng %N 1 %0 Journal Article %J Signal Processing, IEEE Transactions on %D 2003 %T Anti-collusion fingerprinting for multimedia %A Trappe,W. %A M. Wu %A Wang,Z.J. %A Liu,K. 
J.R %K (mathematics); %K additive %K algorithm; %K and %K anti-collusion %K attack; %K averaging %K binary %K code %K codes; %K codevectors; %K coding; %K colluders %K collusion; %K combinatorial %K communication; %K compression; %K correlation; %K cost-effective %K data %K data; %K design %K DETECTION %K detection; %K digital %K embedding; %K fingerprinting; %K Gaussian %K identification; %K image %K images; %K keying; %K logical %K mathematics; %K Modulation %K modulation; %K multimedia %K multimedia; %K of %K on-off %K operation; %K orthogonal %K processes; %K real %K redistribution; %K Security %K signal %K signals; %K theory; %K tree-structured %K TREES %K watermarking; %X Digital fingerprinting is a technique for identifying users who use multimedia content for unintended purposes, such as redistribution. These fingerprints are typically embedded into the content using watermarking techniques that are designed to be robust to a variety of attacks. A cost-effective attack against such digital fingerprints is collusion, where several differently marked copies of the same content are combined to disrupt the underlying fingerprints. We investigate the problem of designing fingerprints that can withstand collusion and allow for the identification of colluders. We begin by introducing the collusion problem for additive embedding. We then study the effect that averaging collusion has on orthogonal modulation. We introduce a tree-structured detection algorithm for identifying the fingerprints associated with K colluders that requires O(K log(n/K)) correlations for a group of n users. We next develop a fingerprinting scheme based on code modulation that does not require as many basis signals as orthogonal modulation. We propose a new class of codes, called anti-collusion codes (ACCs), which have the property that the composition of any subset of K or fewer codevectors is unique. Using this property, we can therefore identify groups of K or fewer colluders. We present a construction of binary-valued ACC under the logical AND operation that uses the theory of combinatorial designs and is suitable for both the on-off keying and antipodal form of binary code modulation. In order to accommodate n users, our code construction requires only O(√n) orthogonal signals for a given number of colluders. We introduce three different detection strategies that can be used with our ACC for identifying a suspect set of colluders. We demonstrate the performance of our ACC for fingerprinting multimedia and identifying colluders through experiments using Gaussian signals and real images. %B Signal Processing, IEEE Transactions on %V 51 %P 1069 - 1087 %8 2003/04// %@ 1053-587X %G eng %N 4 %R 10.1109/TSP.2003.809378 %0 Conference Paper %B Multimedia and Expo, 2003. ICME '03. Proceedings. 2003 International Conference on %D 2003 %T Anti-collusion of group-oriented fingerprinting %A Wang,Z.J. %A M. Wu %A Trappe,W. %A Liu,K. J.R %K anticollusion %K attacks; %K collusion %K communication; %K data; %K digital %K distributed %K distribution; %K fingerprinting; %K fingerprints; %K Gaussian %K group-oriented %K Internet; %K method; %K modulation; %K multimedia %K orthogonal %K security; %K Telecommunication %K watermarking; %X Digital fingerprinting of multimedia data involves embedding information in the content, and offers protection to the digital rights of the content by allowing illegitimate usage of the content to be identified by authorized parties.
One potential threat to fingerprints is collusion, whereby a group of adversaries combine their individual copies in an attempt to remove the underlying fingerprints. Former studies indicate that collusion attacks based on a few dozen independent copies can confound a fingerprinting system that employs orthogonal modulation. However, since an adversary is more likely to collude with some users than other users, we propose a group-based fingerprinting scheme where users likely to collude with each other are assigned correlated fingerprints. We evaluate the performance of our group-based fingerprints by studying the collusion resistance of a fingerprinting system employing Gaussian distributed fingerprints. We compare the results to those of fingerprinting systems employing orthogonal modulation. %B Multimedia and Expo, 2003. ICME '03. Proceedings. 2003 International Conference on %V 2 %P II - 217-20 vol.2 %8 2003/07// %G eng %R 10.1109/ICME.2003.1221592 %0 Report %D 2003 %T A Parallel Line Detection Algorithm Based on HMM Decoding %A Yefeng Zheng %A Huiping Li %A David Doermann %X The detection of groups of parallel lines is important in applications such as form processing and text (handwriting) extraction in rule-lined paper. These tasks can be very challenging in degraded documents where the lines are severely broken. In this paper we propose a novel model-based method which incorporates high level context to detect these lines. After preprocessing and skew correction, we use trained Hidden Markov Models (HMM) to locate the optimal positions of all lines simultaneously, based on the Viterbi decoding. The algorithm is trainable; therefore, it can easily be adapted to different application scenarios. The experiments conducted on known form processing and rule line detection show our method is robust, and achieved better results than other widely used line detection methods, such as the Hough transform, projection or vectorization-based methods. %I University of Maryland, College Park %V LAMP-TR-109,CAR-TR-994,CS-TR-4545,UMIACS-TR-2003-1113 %8 2003/12// %G eng %0 Conference Paper %B Image Processing, 2003. ICIP 2003. Proceedings. 2003 International Conference on %D 2003 %T An appearance based approach for human and object tracking %A Capellades,M. B %A David Doermann %A DeMenthon,D. %A Chellapa, Rama %K algorithm; %K analysis; %K background %K basis; %K by %K Color %K colour %K correlogram %K detection; %K distributions; %K frame %K histogram %K human %K image %K information; %K object %K processing; %K segmentation; %K sequences; %K signal %K subtraction %K tracking; %K video %X A system for tracking humans and detecting human-object interactions in indoor environments is described. A combination of correlogram and histogram information is used to model object and human color distributions. Humans and objects are detected using a background subtraction algorithm. The models are built on the fly and used to track them on a frame-by-frame basis. The system is able to detect when people merge into groups and segment them during occlusion. Identities are preserved during the sequence, even if a person enters and leaves the scene. The system is also able to detect when a person deposits or removes an object from the scene. In the first case the models are used to track the object retroactively in time. In the second case the objects are tracked for the rest of the sequence. Experimental results using indoor video sequences are presented. %B Image Processing, 2003. ICIP 2003.
Proceedings. 2003 International Conference on %V 2 %P II - 85-8 vol.3 %8 2003/09// %G eng %R 10.1109/ICIP.2003.1246622 %0 Journal Article %J Journal of Computer and System Sciences %D 2003 %T On the approximability of clique and related maximization problems %A Srinivasan, Aravind %K Approximation algorithms %K Clique %K Inapproximability %K Independent set %K Packing integer programs %K random sampling %X We consider approximations of the form n^{1−o(1)} for the Maximum Clique problem, where n is the number of vertices in the input graph and where the “o(1)” term goes to zero as n increases. We show that sufficiently strong negative results for such problems, which we call strong inapproximability results, have interesting consequences for exact computation. A simple sampling method underlies most of our results. %B Journal of Computer and System Sciences %V 67 %P 633 - 651 %8 2003/11// %@ 0022-0000 %G eng %U http://www.sciencedirect.com/science/article/pii/S0022000003001107 %N 3 %R 10.1016/S0022-0000(03)00110-7 %0 Journal Article %J SIAM Journal on Computing %D 2003 %T Approximating the domatic number %A Feige,U. %A Halldórsson,M.M. %A Kortsarz,G. %A Srinivasan, Aravind %X A set of vertices in a graph is a dominating set if every vertex outside the set has a neighbor in the set. The domatic number problem is that of partitioning the vertices of a graph into the maximum number of disjoint dominating sets. Let n denote the number of vertices, δ the minimum degree, and ∆ the maximum degree. We show that every graph has a domatic partition with (1 − o(1))(δ + 1)/ ln n dominating sets and, moreover, that such a domatic partition can be found in polynomial time. This implies a (1 + o(1)) ln n-approximation algorithm for domatic number, since the domatic number is always at most δ+1. We also show this to be essentially best possible. Namely, extending the approximation hardness of set cover by combining multiprover protocols with zero-knowledge techniques, we show that for every ϵ > 0, a (1 − ϵ) ln n-approximation implies that NP ⊆ DTIME(n^{O(log log n)}). This makes domatic number the first natural maximization problem (known to the authors) that is provably approximable to within polylogarithmic factors but no better. We also show that every graph has a domatic partition with (1 − o(1))(δ + 1)/ ln ∆ dominating sets, where the “o(1)” term goes to zero as ∆ increases. This can be turned into an efficient algorithm that produces a domatic partition of Ω(δ/ ln ∆) sets. %B SIAM Journal on Computing %V 32 %P 172 - 195 %8 2003/// %G eng %N 1 %0 Book Section %B Approximation, Randomization, and Combinatorial Optimization: Algorithms and Techniques %D 2003 %T Approximation Algorithms for Channel Allocation Problems in Broadcast Networks %A Gandhi,Rajiv %A Khuller, Samir %A Srinivasan, Aravind %A Wang,Nan %E Arora,Sanjeev %E Jansen,Klaus %E Rolim,José %E Sahai,Amit %X We study two packing problems that arise in the area of dissemination-based information systems; a second theme is the study of distributed approximation algorithms. The problems considered have the property that the space occupied by a collection of objects together could be significantly less than the sum of the sizes of the individual objects. In the Channel Allocation Problem, there are users who request subsets of items. There are a fixed number of channels that can carry an arbitrary amount of information.
Each user must get all of the requested items from one channel, i.e., all the data items of each request must be broadcast on some channel. The load on any channel is the number of items that are broadcast on that channel; the objective is to minimize the maximum load on any channel. We present approximation algorithms for this problem and also show that the problem is MAX-SNP hard. The second problem is the Edge Partitioning Problem addressed by Goldschmidt, Hochbaum, Levin, and Olinick ( Networks, 41:13-23, 2003 ). Each channel here can deliver information to at most k users, and we aim to minimize the total load on all channels. We present an O ( n⅓ )–approximation algorithm and also show that the algorithm can be made fully distributed with the same approximation guarantee; we also generalize to the case of hypergraphs. %B Approximation, Randomization, and Combinatorial Optimization: Algorithms and Techniques %S Lecture Notes in Computer Science %I Springer Berlin / Heidelberg %V 2764 %P 821 - 826 %8 2003/// %@ 978-3-540-40770-6 %G eng %U http://dx.doi.org/10.1007/978-3-540-45198-3_5 %0 Journal Article %J Computers, IEEE Transactions on %D 2003 %T AQuA: an adaptive architecture that provides dependable distributed objects %A Ren,Yansong %A Bakken,D. E. %A Courtney,T. %A Michel Cukier %A Karr,D. A. %A Rubel,P. %A Sabnis,C. %A Sanders,W. H. %A Schantz,R.E. %A Seri,M. %K active replication pass-first scheme %K adaptive architecture %K adaptive fault tolerance %K AQuA %K CORBA %K data consistency %K data integrity %K dependable distributed objects %K distributed object management %K performance measurements %K quality of service %K replicated dependability manager %K replication schemes %K software fault tolerance %K system resources %X Building dependable distributed systems from commercial off-the-shelf components is of growing practical importance. For both cost and production reasons, there is interest in approaches and architectures that facilitate building such systems. The AQuA architecture is one such approach; its goal is to provide adaptive fault tolerance to CORBA applications by replicating objects. The AQuA architecture allows application programmers to request desired levels of dependability during applications' runtimes. It provides fault tolerance mechanisms to ensure that a CORBA client can always obtain reliable services, even if the CORBA server object that provides the desired services suffers from crash failures and value faults. AQuA includes a replicated dependability manager that provides dependability management by configuring the system in response to applications' requests and changes in system resources due to faults. It uses Maestro/Ensemble to provide group communication services. It contains a gateway to intercept standard CORBA IIOP messages to allow any standard CORBA application to use AQuA. It provides different types of replication schemes to forward messages reliably to the remote replicated objects. All of the replication schemes ensure strong, data consistency among replicas. This paper describes the AQuA architecture and presents, in detail, the active replication pass-first scheme. In addition, the interface to the dependability manager and the design of the dependability manager replication are also described. Finally, we describe performance measurements that were conducted for the active replication pass-first scheme, and we present results from our study of fault detection, recovery, and blocking times. 
%B Computers, IEEE Transactions on %V 52 %P 31 - 50 %8 2003/01// %@ 0018-9340 %G eng %N 1 %R 10.1109/TC.2003.1159752 %0 Conference Paper %B Software Engineering, International Conference on %D 2003 %T Architectural Interaction Diagrams: AIDs for System Modeling %A Ray,Arnab %A Cleaveland, Rance %X This paper develops a modeling paradigm called Architectural Interaction Diagrams, or AIDs, for the high-level design of systems containing concurrent, interacting components. The novelty of AIDs is that they introduce interaction mechanisms, or buses, as first-class entities into the modeling vocabulary. Users then have the capability, in their modeling, of using buses whose behavior captures interaction at a higher level of abstraction than that afforded by modeling notations such as Message Sequence Charts or process algebra, which typically provide only one fixed interaction mechanism. This paper defines AIDs formally by giving them an operational semantics that describes how buses combine subsystem transitions into system-level transitions. This semantics enables AIDs to be simulated; to incorporate subsystems given in different modeling notations into a single system model; and to use testing, debugging and model checking early in the system design cycle in order to catch design errors before they are implemented. %B Software Engineering, International Conference on %I IEEE Computer Society %C Los Alamitos, CA, USA %P 396 - 396 %8 2003/// %G eng %R http://doi.ieeecomputersociety.org/10.1109/ICSE.2003.1201218 %0 Journal Article %J IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS SPEECH AND SIGNAL PROCESSING %D 2003 %T AUDIO-P2.1: PITCH AND TIMBRE MANIPULATIONS USING CORTICAL REPRESENTATION OF SOUND %A Zotkin,Dmitry N %A Shamma,S.A. %A Ru,P. %A Duraiswami, Ramani %A Davis, Larry S. %B IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS SPEECH AND SIGNAL PROCESSING %V 5 %8 2003/// %G eng %0 Journal Article %J Proc. International Conference on Auditory Display (ICAD’03) %D 2003 %T Augmented intelligibility in simultaneous multi-talker environments %A Mesgarani,N. %A Shamma,S. %A Grant,K.W. %A Duraiswami, Ramani %X Speech intelligibility can be severely compromised in environments where there are several competing speakers. It may be possible, however, to improve speech intelligibility in such environments by manipulating certain acoustic parameters known to facilitate the segregation of competing signals. The first step in designing such a feature-rich audio display is to understand the significant elements of human auditory perception that affect information transmission capacity. We review a series of experiments to examine the impact of different audio-display design parameters on overall intelligibility of simultaneous speech messages and show how using these features may improve intelligibility. %B Proc. International Conference on Auditory Display (ICAD’03) %P 71 - 74 %8 2003/// %G eng %0 Journal Article %J Lingvisticae Investigationes %D 2003 %T Automatic classification of multi-word expressions in print dictionaries %A Geyken,Alexander %A Jordan Boyd-Graber %X This work demonstrates the assignment of multi-word expressions in print dictionaries to POS classes with minimal linguistic resources. In this application, 32,000 entries from the Wörterbuch der deutschen Idiomatik (H. Schemann 1993) were classified using an inductive description of POS sequences in conjunction with a Brill Tagger trained on manually tagged idiomatic entries.
This process assigned categories to 86% of entries with 88% accuracy. This classification supplies a meaningful preprocessing step for further applications: the resulting POS-sequences for all idiomatic entries might be used for the automatic recognition of multi-word lexemes in unrestricted text. %B Lingvisticae Investigationes %V 26 %P 187 - 202 %8 2003/01/01/ %@ 03784169 %G eng %U http://umd.library.ingentaconnect.com/content/jbp/li/2003/00000026/00000002/art00002 %N 2 %R 10.1075/li.26.2.03gey %0 Conference Paper %B Proceedings of the 16th annual ACM symposium on User interface software and technology %D 2003 %T Automatic thumbnail cropping and its effectiveness %A Suh,Bongwon %A Ling,Haibin %A Bederson, Benjamin B. %A Jacobs, David W. %K Face detection %K image cropping %K saliency map %K thumbnail %K usability study %K visual search %K zoomable user interfaces %X Thumbnail images provide users of image retrieval and browsing systems with a method for quickly scanning large numbers of images. Recognizing the objects in an image is important in many retrieval tasks, but thumbnails generated by shrinking the original image often render objects illegible. We study the ability of computer vision systems to detect key components of images so that automated cropping, prior to shrinking, can render objects more recognizable. We evaluate automatic cropping techniques 1) based on a general method that detects salient portions of images, and 2) based on automatic face detection. Our user study shows that these methods result in small thumbnails that are substantially more recognizable and easier to find in the context of visual search. %B Proceedings of the 16th annual ACM symposium on User interface software and technology %S UIST '03 %I ACM %C New York, NY, USA %P 95 - 104 %8 2003/// %@ 1-58113-636-6 %G eng %U http://doi.acm.org/10.1145/964696.964707 %R 10.1145/964696.964707 %0 Book Section %B The Semantic Web - ISWC 2003 %D 2003 %T Automating DAML-S Web Services Composition Using SHOP2 %A Wu,Dan %A Parsia,Bijan %A Sirin,Evren %A Hendler,James %A Nau, Dana S. %E Fensel,Dieter %E Sycara,Katia %E Mylopoulos,John %K Computer science %X The DAML-S Process Model is designed to support the application of AI planning techniques to the automated composition of Web services. SHOP2 is an Hierarchical Task Network (HTN) planner well-suited for working with the Process Model. We have proven the correspondence between the semantics of SHOP2 and the situation calculus semantics of the Process Model. We have also implemented a system which soundly and completely plans over sets of DAML-S descriptions using a SHOP2 planner, and then executes the resulting plans over the Web. We discuss the challenges and difficulties of using SHOP2 in the information-rich and human-oriented context of Web services.
%B The Semantic Web - ISWC 2003 %S Lecture Notes in Computer Science %I Springer Berlin / Heidelberg %V 2870 %P 195 - 210 %8 2003/// %@ 978-3-540-20362-9 %G eng %U http://www.springerlink.com/content/rm5ejwlmbw0mdv97/abstract/ %0 Journal Article %J AI Magazine %D 2002 %T AAAI 2002 Workshops %A Blake,Brian %A Haigh,Karen %A Hexmoor,Henry %A Falcone,Rino %A Soh,Leen-Kiat %A Baral,Chitta %A McIlraith,Sheila %A Gmytrasiewicz,Piotr %A Parsons,Simon %A Malaka,Rainer %A Krueger,Antonio %A Bouquet,Paolo %A Smart,Bill %A Kurumantani,Koichi %A Pease,Adam %A Brenner,Michael %A desJardins, Marie %A Junker,Ulrich %A Delgrande,Jim %A Doyle,Jon %A Rossi,Francesca %A Schaub,Torsten %A Gomes,Carla %A Walsh,Toby %A Guo,Haipeng %A Horvitz,Eric J %A Ide,Nancy %A Welty,Chris %A Anger,Frank D %A Guegen,Hans W %A Ligozat,Gerald %B AI Magazine %V 23 %P 113 - 113 %8 2002/12/15/ %@ 0738-4602 %G eng %U http://www.aaai.org/ojs/index.php/aimagazine/article/viewArticle/1678 %N 4 %R 10.1609/aimag.v23i4.1678 %0 Journal Article %J Communications of the ACM %D 2002 %T ACM's computing professionals face new challenges %A Shneiderman, Ben %X Technologists can make a difference on so many critical fronts. %B Communications of the ACM %V 45 %P 31 - 34 %8 2002/02// %@ 0001-0782 %G eng %U http://doi.acm.org/10.1145/503124.503142 %N 2 %R 10.1145/503124.503142 %0 Conference Paper %B Supercomputing, ACM/IEEE 2002 Conference %D 2002 %T Active Harmony: Towards Automated Performance Tuning %A Tapus, C. %A I-Hsin Chung %A Hollingsworth, Jeffrey K %K Application software %K Automatic control %K Computational modeling %K Computer science %K Computerized monitoring %K Control systems %K grid computing %K Runtime library %K software libraries %K Specification languages %X In this paper, we present the Active Harmony automated runtime tuning system. We describe the interface used by programs to make applications tunable. We present the Library Specification Layer which helps program library developers expose multiple variations of the same API using different algorithms. The Library Specification Language helps to select the most appropriate program library to tune the overall performance. We also present the optimization algorithm used to adjust parameters in the application and the libraries. Finally, we present results that show how the system is able to tune several real applications. The automated tuning system is able to tune the application parameters to within a few percent of the best value after evaluating only 11 out of over 1,700 possible configurations. %B Supercomputing, ACM/IEEE 2002 Conference %I IEEE %P 44 - 44 %8 2002/11// %@ 0-7695-1524-X %G eng %R 10.1109/SC.2002.10062 %0 Conference Paper %B Proceedings of the 2002 ACM/IEEE conference on Supercomputing %D 2002 %T Active Proxy-G: Optimizing the query execution process in the Grid %A Andrade,H. %A Kurc, T. %A Sussman, Alan %A Saltz, J. %B Proceedings of the 2002 ACM/IEEE conference on Supercomputing %P 1 - 15 %8 2002/// %G eng %0 Journal Article %J Technical Reports from UMIACS, UMIACS-TR-2001-45 %D 2002 %T AD (Attacker Defender) Game %A Kochut,Andrzej %A Agrawala, Ashok K. %A Larsen,Ronald L %A A. Udaya Shankar %K Technical Report %X Information Dynamics is a framework for agent-based systems that gives a central position to the role of information, time, and the value of information. We illustrate system design in the Information Dynamics Framework by developing an intelligence game called AD involving attackers, defenders and targets operating in some space of locations.
The goal of the attackers is to destroy all targets. Target destruction takes place when the number of attackers in the target's neighborhood exceeds the number of defenders in this neighborhood by a value WINNING_DIFFERENCE. The goal of defenders is to prevent attackers from achieving their goal. (Also UMIACS-TR-2001-45) %B Technical Reports from UMIACS, UMIACS-TR-2001-45 %8 2002/01/31/ %G eng %U http://drum.lib.umd.edu/handle/1903/1142 %0 Conference Paper %D 2002 %T An adaptive framework for tunable consistency and timeliness using replication %A Krishnamurthy, S. %A Sanders,W. H. %A Michel Cukier %K adaptive framework %K concurrent multiple client service %K consistency requirements %K dependability requirements %K distributed object management %K flexible QoS model %K lazy update propagation %K performance history %K probabilistic approach %K Protocols %K quality of service %K replication %K temporal requirements %K timely consistent response %K tunable consistency %K tunable timeliness %X One well-known challenge in using replication to service multiple clients concurrently is that of delivering a timely and consistent response to the clients. In this paper, we address this problem in the context of client applications that have specific temporal and consistency requirements. These applications can tolerate a certain degree of relaxed consistency, in exchange for better response time. We propose a flexible QoS model that allows these clients to specify their temporal and consistency constraints. In order to select replicas to serve these clients, we need to control the inconsistency of the replicas, so that we have a large enough pool of replicas with the appropriate state to meet a client's timeliness, consistency, and dependability requirements. We describe an adaptive framework that uses lazy update propagation to control the replica inconsistency and employs a probabilistic approach to select replicas dynamically to service a client, based on its QoS specification. The probabilistic approach predicts the ability of a replica to meet a client's QoS specification by using the performance history collected by monitoring the replicas at runtime. We conclude with experimental results based on our implementation. %P 17 - 26 %8 2002/// %G eng %R 10.1109/DSN.2002.1028882 %0 Journal Article %J Advances in Multimedia Information Processing—PCM 2002 %D 2002 %T Adaptive Multimedia System Architecture for Improving QoS in Wireless Networks %A Mahajan,A. %A Mundur, Padma %A Joshi,A. %B Advances in Multimedia Information Processing—PCM 2002 %P 37 - 47 %8 2002/// %G eng %0 Report %D 2002 %T Addendum to "A Krylov--Schur Algorithm for Large Eigenproblems" %A Stewart, G.W. %K Technical Report %X In this addendum to an earlier paper by the author, it is shown how to compute a Krylov decomposition corresponding to an arbitrary Rayleigh-Quotient. This decomposition can be used to restart an Arnoldi process, with a selection of the Ritz vectors corresponding to the Rayleigh quotient. (Also UMIACS-TR-2001-90) %I Institute for Advanced Computer Studies, Univ of Maryland, College Park %V UMIACS-TR-2001-90 %8 2002/01/31/ %G eng %U http://drum.lib.umd.edu/handle/1903/1171 %0 Conference Paper %B Proceedings of the first international joint conference on Autonomous agents and multiagent systems: part 2 %D 2002 %T Agents dealing with time and uncertainty %A Dix,Jürgen %A Kraus,Sarit %A V.S.
Subrahmanian %K formalisms and logics %K logic programming %K probabilistic/uncertain reasoning %K temporal reasoning %K theories of agency %X Situated agents in the real world need to handle the fact that events occur frequently, as well as the fact that the agent typically has uncertain knowledge about what is true in the world. The ability to reason about both time and uncertainty is therefore very important. In this paper, we develop a formal theory of agents that can reason about both time and uncertainty. The theory extends the notion of agents described in [10, 21] and proposes the notion of temporal probabilistic (or TP) agents. A formal semantics for TP-agents is proposed - this semantics is described via structures called feasible TP-status interpretations (FTPSI's). TP-agents continuously evaluate changes (in the state of the environment they are situated in) and compute appropriate FTPSI's. For a class of TP-agents called positive TP-agents, we develop a provably sound and complete procedure to compute FTPSI's. %B Proceedings of the first international joint conference on Autonomous agents and multiagent systems: part 2 %S AAMAS '02 %I ACM %C New York, NY, USA %P 912 - 919 %8 2002/// %@ 1-58113-480-0 %G eng %U http://doi.acm.org/10.1145/544862.544953 %R 10.1145/544862.544953 %0 Book %D 2002 %T Algorithm Engineering and Experiments: 4th International Workshop, ALENEX 2002, San Francisco, CA, USA, January 4-5, 2002: Revised Papers %A Mount, Dave %A Stein,Clifford %K Architecture / General %K Artificial intelligence %K Computer algorithms %K Computer algorithms/ Congresses %K COMPUTERS %K Computers / Computer Graphics %K Computers / Computer Science %K Computers / Data Modeling & Design %K Computers / Data Processing %K Computers / General %K Computers / Programming / Algorithms %K Electronic books %K Mathematics / Discrete Mathematics %K Medical / General %X This book constitutes the thoroughly refereed post-proceedings of the 4th International Workshop on Algorithm Engineering and Experiments, ALENEX 2002, held in San Francisco, CA, USA in January 2002. The 15 revised full papers presented were carefully reviewed and selected from 34 submissions. Among the topics addressed are heuristics for algorithms, combinatorial optimization, searching, graph computation, network optimization, scheduling, computational geometry, sorting, and clustering algorithms. %I Springer %8 2002/09/17/ %@ 9783540439776 %G eng %0 Journal Article %J ACM Comput. Surv. %D 2002 %T Algorithmic issues in modeling motion %A Agarwal,Pankaj K. %A Guibas,Leonidas J. %A Edelsbrunner,Herbert %A Erickson,Jeff %A Isard,Michael %A Har-Peled,Sariel %A Hershberger,John %A Jensen,Christian %A Kavraki,Lydia %A Koehl,Patrice %A Lin,Ming %A Manocha,Dinesh %A Metaxas,Dimitris %A Mirtich,Brian %A Mount, Dave %A Muthukrishnan,S. %A Pai,Dinesh %A Sacks,Elisha %A Snoeyink,Jack %A Suri,Subhash %A Wolefson,Ouri %K computational geometry %K Computer vision %K mobile networks %K modeling %K molecular biology %K motion modeling %K physical simulation %K robotics %K spatio-temporal databases %X This article is a survey of research areas in which motion plays a pivotal role. The aim of the article is to review current approaches to modeling motion together with related data structures and algorithms, and to summarize the challenges that lie ahead in producing a more unified theory of motion representation that would be useful across several disciplines. %B ACM Comput. Surv.
%V 34 %P 550 - 572 %8 2002/12// %@ 0360-0300 %G eng %U http://doi.acm.org/10.1145/592642.592647 %N 4 %R 10.1145/592642.592647 %0 Conference Paper %B Proceedings of the Third International Conference on Management Information Systems, Greece %D 2002 %T ALLI: An Information Integration System Based on Active Logic Framework %A Barfourosh,A. A. %A Nezhad,H. R.M %A Anderson,M. %A Perlis, Don %B Proceedings of the Third International Conference on Management Information Systems, Greece %P 339 - 348 %8 2002/// %G eng %0 Journal Article %J Multimedia Tools and Applications %D 2002 %T The Analysis and Performance of Multi-Rate Service in Distributed Video-on-Demand Systems %A Mundur, Padma %A Simon,R. %A Sood,A. %B Multimedia Tools and Applications %V 17 %P 51 - 75 %8 2002/// %G eng %N 1 %0 Journal Article %J METHODS IN MOLECULAR BIOLOGY %D 2002 %T Analysis of 16S-23S rRNA intergenic spacer of Vibrio cholerae and Vibrio mimicus for detection of these species %A Chun,J. %A Rivera,I. N. G. %A Rita R Colwell %B METHODS IN MOLECULAR BIOLOGY %V 179 %P 171 - 178 %8 2002/// %G eng %R 10.1385/1-59259-238-4:171 %0 Book Section %B Data Structures, Near Neighbor Searches, and Methodology %D 2002 %T Analysis of approximate nearest neighbor searching with clustered point sets %A Maneewongvatana,S. %A Mount, Dave %B Data Structures, Near Neighbor Searches, and Methodology %I American Mathematical Society %V 59 %P 105 - 123 %8 2002/// %@ 0821828924, 9780821828922 %G eng %0 Journal Article %J Molecular and Biochemical Parasitology %D 2002 %T Analysis of stage-specific gene expression in the bloodstream and the procyclic form of Trypanosoma brucei using a genomic DNA-microarray %A Diehl,Susanne %A Diehl,Frank %A El-Sayed, Najib M. %A Clayton,Christine %A Hoheisel,Jörg D. %K Expression %K Gene %K Microarray %K Regulation %K Trypanosoma brucei %X A microarray comprising 21,024 different PCR products spotted on glass slides was constructed for gene expression studies on Trypanosoma brucei. The arrayed fragments were generated from a T. brucei shotgun clone library, which had been prepared from randomly sheared and size-fractionated genomic DNA. For the identification of stage-specific gene activity, total RNA from in vitro cultures of the human, long slender form and the insect, procyclic form of the parasite was labelled and hybridised to the microarray. Approximately 75% of the genomic fragments produced a signal and about 2% exhibited significant differences between the transcript levels in the bloodstream and procyclic forms. A few results were confirmed by Northern blot analysis or reverse-transcription and PCR. Three hundred differentially regulated clones have been selected for sequencing. So far, of 33 clones that showed about 2-fold or more over-expression in bloodstream forms, 15 contained sequences similar to those of VSG expression sites and at least six others appeared non-protein-coding. Of 29 procyclic-specific clones, at least eight appeared not to be protein-coding. A surprisingly large proportion of known regulated genes was already identified in this small sample, and some new ones were found, illustrating the utility of genomic arrays.
%B Molecular and Biochemical Parasitology %V 123 %P 115 - 123 %8 2002/08/28/ %@ 0166-6851 %G eng %U http://www.sciencedirect.com/science/article/pii/S016668510200138X %N 2 %R 16/S0166-6851(02)00138-X %0 Journal Article %J Technical Reports from UMIACS, UMIACS-TR-2002-60 %D 2002 %T Analysis of the NICE Application Layer Multicast Protocol %A Banerjee,Suman %A Bhattacharjee, Bobby %K Technical Report %X Application layer multicast protocols organize a set of hosts intoan overlay tree for data delivery. Each host on the overlay peers with a subset of other hosts. Since application layer multicast relies only on an underlying unicast architecture, multiple copies of the same packet can be carried by a single physical link or node on the overlay. The stress at a link or node is defined as the number of identical copies of a packet carried by that link or node. Stretch is another important metric in application layer multicast, which measures the relative increase in delay incurred by the overlay path between pairs of members with respect to the direct unicast path. In this paper we study the NICE application layer multicast protocol to quantify and study the tradeoff between these two important metrics --- stress and stretch in scalably building application layer multicast paths. Also UMIACS-TR-2002-60 %B Technical Reports from UMIACS, UMIACS-TR-2002-60 %8 2002/08/01/ %G eng %U http://drum.lib.umd.edu/handle/1903/1212 %0 Conference Paper %B Proceedings of the 2nd workshop on NLP and XML - Volume 17 %D 2002 %T Annotating the semantic web using natural language %A Katz,Boris %A Jimmy Lin %X Because the ultimate purpose of the Semantic Web is to help users better locate, organize, and process content, we believe that it should be grounded in the information access method humans are most comfortable with---natural language. However, the Resource Description Framework (RDF), the foundation of the Semantic Web, was designed to be easily processed by computers, not humans. To render RDF more friendly to humans, we propose to augment it with natural language annotations, or metadata written in everyday language. We argue that natural language annotations, parsed into computer-readable representations, are not only intuitive and effective, but can also accelerate the pace with which the Semantic Web is being adopted. We believe that our technology can facilitate a happy marriage between natural language technology and the Semantic Web vision. %B Proceedings of the 2nd workshop on NLP and XML - Volume 17 %S NLPXML '02 %I Association for Computational Linguistics %C Stroudsburg, PA, USA %P 1 - 8 %8 2002/// %G eng %U http://dx.doi.org/10.3115/1118808.1118816 %R 10.3115/1118808.1118816 %0 Conference Paper %B Image Processing. 2002. Proceedings. 2002 International Conference on %D 2002 %T Anti-collusion codes: multi-user and multimedia perspectives %A Trappe,W. %A M. Wu %A Liu,K. J.R %K and %K anti-collusion %K authentication; %K binary %K code %K codes; %K coding; %K combinatorial %K computing; %K content; %K data %K designs; %K digital %K embedding; %K encapsulation; %K fingerprinting; %K image %K images; %K logical %K mathematics; %K message %K Modulation %K modulation; %K multimedia %K operation; %K performance; %K watermarking; %X Digital fingerprinting is an effective method to identify users who might try to redistribute multimedia content, such as images and video. 
These fingerprints are typically embedded into the content using watermarking techniques that are designed to be robust to a variety of attacks. A cheap and effective attack against such digital fingerprints is collusion, where several differently marked copies of the same content are averaged or combined to disrupt the underlying fingerprint. We present a construction of collusion-resistant fingerprints based upon anti-collusion codes (ACC) and binary code modulation. ACC have the property that the composition of any subset of K or fewer codevectors is unique. Using this property, we build fingerprints that allow for the identification of groups of K or less colluders. We present a construction of binary-valued ACC under the logical AND operation using the theory of combinatorial designs. Our code construction requires only O(√n) orthogonal signals to accommodate n users. We demonstrate the performance of our ACC for fingerprinting multimedia by identifying colluders through experiments using real images. %B Image Processing. 2002. Proceedings. 2002 International Conference on %V 2 %P II-149 - II-152 vol.2 %8 2002/// %G eng %R 10.1109/ICIP.2002.1039909 %0 Conference Paper %B Proceedings of the 2002 annual national conference on Digital government research %D 2002 %T APPOINT: an Approach for Peer-to-Peer Offloading the INTernet %A Tanin,Egemen %A Samet, Hanan %X The Internet has recently become the medium of interaction with large volumes of online data. Enterprises in the public and private sectors made their data archives available over the Internet. Working with such large volumes of online data is a challenging task. APPOINT (an Approach for Peer-to-Peer Offloading the INTernet), a centralized peer-to-peer approach that helps users of the Internet transfer large volumes of online data efficiently, is introduced with this paper. In APPOINT, active clients of a client-server architecture act on the server's behalf and communicate with each other to decrease network latency, improve service bandwidth, and resolve server congestions. In essence, a server is enabled to appoint alternatives when needed and create a scalable collaborative virtual mirror from the active clients. Multiple parameters such as locality and availability to decide on how to best forward a download request are considered. The problems that can arise in symmetric upload requests are also addressed. APPOINT is an ongoing project and it is being built as an add-on to existing client-server systems. A library of functions, with a simple application programming interface (API) that can be used within an existing client-server system to improve the service, is currently being developed. %B Proceedings of the 2002 annual national conference on Digital government research %S dg.o '02 %I Digital Government Society of North America %P 1 - 7 %8 2002/// %G eng %U http://dl.acm.org/citation.cfm?id=1123098.1123102 %0 Journal Article %J The Journal of the Acoustical Society of America %D 2002 %T Approximating the head-related transfer function using simple geometric models of the head and torso %A Algazi,V. R %A Duda,R. O %A Duraiswami, Ramani %A Gumerov, Nail A. %A Tang,Z. %B The Journal of the Acoustical Society of America %V 112 %P 2053 - 2053 %8 2002/// %G eng %0 Journal Article %J Random Structures & Algorithms %D 2002 %T Approximation algorithms for the covering Steiner problem %A Konjevod,Goran %A Ravi,R.
%A Srinivasan, Aravind %X The covering Steiner problem is a generalization of both the k-MST and the group Steiner problems: given an edge-weighted graph, with subsets of vertices called the groups, and a nonnegative integer value (called the requirement) for each group, the problem is to find a minimum-weight tree spanning at least the required number of vertices of every group. When all requirements are equal to 1, this becomes the group Steiner problem, while if there is only one group which contains all vertices of the graph the problem reduces to k-MST with k equal to the requirement of this unique group. We discuss two different (but equivalent) linear relaxations of the problem for the case when the given graph is a tree and construct polylogarithmic approximation algorithms based on randomized LP rounding of these relaxations. By using a probabilistic approximation of general metrics by tree metrics due to Bartal, our algorithms also solve the covering Steiner problem on general graphs with a further polylogarithmic worsening in the approximation ratio. © 2002 Wiley Periodicals, Inc. Random Struct. Alg., 20:465–483, 2002 %B Random Structures & Algorithms %V 20 %P 465 - 482 %8 2002/05/01/ %@ 1098-2418 %G eng %U http://onlinelibrary.wiley.com/doi/10.1002/rsa.10038/abstract %N 3 %R 10.1002/rsa.10038 %0 Journal Article %J Peer-to-Peer Systems %D 2002 %T Are virtualized overlay networks too much of a good thing? %A Keleher,P. %A Bhattacharjee, Bobby %A Silaghi,B. %X The majority of recent high-profile work in peer-to-peer networks has approached the problem of location by abstracting over object lookup services. Namespace virtualization in the overlay layer provides load balance and provable bounds on latency at low costs. We contend that namespace virtualization comes at a significant cost for applications that naturally describe their data sets in a hierarchical manner. Opportunities for enhancing browsing, prefetching and efficient attribute-based searches are lost. A hierarchy exposes relationships between items near to each other in the topology; virtualization of the namespace discards this information even if present at client, higher-level protocols. We advocate encoding application hierarchies directly into the structure of the overlay network, and revisit this argument through a newly proposed distributed directory service. %B Peer-to-Peer Systems %P 225 - 231 %8 2002/// %G eng %0 Conference Paper %D 2002 %T Argument structure saturation in a constraint based model %A Drury,J. %A Resnik, Philip %A Weinberg, Amy %A Gennari,S. %A Aoshima,S. %8 2002/// %G eng %U http://cognet.mit.edu/library/conferences/paper?paper_id=55048 %0 Book Section %B Flexible Query Answering Systems %D 2002 %T An Augmented Visual Query Mechanism for Finding Patterns in Time Series Data %A Keogh,Eamonn %A Hochheiser,Harry %A Shneiderman, Ben %E Carbonell,Jaime %E Siekmann,Jörg %E Andreasen,Troels %E Christiansen,Henning %E Motro,Amihai %E Legind Larsen,Henrik %X Relatively few query tools exist for data exploration and pattern identification in time series data sets. In previous work we introduced Timeboxes. Timeboxes are rectangular, direct-manipulation queries for studying time-series datasets.
We demonstrated how Timeboxes can be used to support interactive exploration via dynamic queries, along with overviews of query results and drag-and-drop support for query-by-example. In this paper, we extend our work by introducing Variable Time Timeboxes (VTT). VTTs are a natural generalization of Timeboxes, which permit the specification of queries that allow a degree of uncertainty in the time axis. We carefully motivate the need for these more expressive queries, and demonstrate the utility of our approach on several data sets. %B Flexible Query Answering Systems %S Lecture Notes in Computer Science %I Springer Berlin / Heidelberg %V 2522 %P 240 - 250 %8 2002/// %@ 978-3-540-00074-7 %G eng %U http://dx.doi.org/10.1007/3-540-36109-X_19 %0 Journal Article %J Journal of Mechanical Design %D 2002 %T Automated design of multi-stage molds for manufacturing multi-material objects %A Kumar,M. %A Gupta,S.K. %B Journal of Mechanical Design %V 124 %P 399 - 399 %8 2002/// %G eng %0 Journal Article %J Proceedings of the ACL-2002 Workshop on Text Summarization %D 2002 %T Automatic headline generation for newspaper stories %A Zajic, David %A Dorr, Bonnie J %A Schwartz,R. %X In this paper we propose a novel application of Hidden Markov Models to automatic generation of informative headlines for English texts. We propose four decoding parameters to make the headlines appear more like Headlinese, the language of informative newspaper headlines. We also allow for morphological variation in words between headline and story English. Informal and formal evaluations indicate that our approach produces informative headlines, mimicking a Headlinese style generated by humans. %B Proceedings of the ACL-2002 Workshop on Text Summarization %8 2002/// %G eng %0 Conference Paper %B Proceedings of the 5th International Conference on Text, Speech and Dialogue %D 2002 %T Automatic transcription of Czech language oral history in the MALACH project: resources and initial experiments %A Oard, Douglas %A Demner-Fushman,D. %A Hajic,J. %A Ramabhadran,B. %A Gustman,S. %A Byrne,WJ %A Soergel,D. %A Dorr, Bonnie J %A Resnik, Philip %A Picheny,M. %B Proceedings of the 5th International Conference on Text, Speech and Dialogue %P 57 - 64 %8 2002/// %G eng %0 Journal Article %J Journal of Parallel and Distributed Computing %D 2001 %T Achieving Efficiency and Accuracy in Simulation for I/O-Intensive Applications %A Eom, Hyeonsang %A Hollingsworth, Jeffrey K %K simulation; I/O-intensive applications; parallel or distributed applications; performance prediction; discrete-event simulation %X This paper presents a family of simulators for data-intensive applications, and a methodology to select the most efficient simulator based on a user-supplied requirement for accuracy. The methodology consists of a series of tests that select an appropriate simulation based on the attributes of the application. In addition, each simulator provides two estimates of application execution time: one for the minimum expected time and the other for the maximum. We present the results of applying the strategy to existing applications and show that we can accurately simulate applications tens to hundreds of times faster than application execution time.
%B Journal of Parallel and Distributed Computing %V 61 %P 1732 - 1750 %8 2001/12// %@ 0743-7315 %G eng %U http://www.sciencedirect.com/science/article/pii/S0743731501917691 %N 12 %R 06/jpdc.2001.1769 %0 Conference Paper %B 2001 IEEE International Conference on Acoustics, Speech, and Signal Processing, 2001. Proceedings. (ICASSP '01) %D 2001 %T Active speech source localization by a dual coarse-to-fine search %A Duraiswami, Ramani %A Zotkin,Dmitry N %A Davis, Larry S. %K active speech source localization %K array signal processing %K Computer interfaces %K Delay effects %K delay estimation %K delay-based estimators %K dual coarse-to-fine search %K Frequency %K Inverse problems %K Laboratories %K multiple speech sound sources %K Position measurement %K real-time implementation %K Sensor arrays %K Signal processing algorithms %K space %K speech %K Speech processing %K Teleconferencing %K videoconferencing systems %X Accurate and fast localization of multiple speech sound sources is a significant problem in videoconferencing systems. Based on the observation that the wavelengths of the sound from a speech source are comparable to the dimensions of the space being searched, and that the source is broadband, we develop an efficient search strategy that finds the source(s) in a given space. The search is made efficient by using coarse-to-fine strategies in both space and frequency. The algorithm is shown to be robust compared to typical delay-based estimators and fast enough for real-time implementation. Its performance can be further improved by using constraints from computer vision %B 2001 IEEE International Conference on Acoustics, Speech, and Signal Processing, 2001. Proceedings. (ICASSP '01) %I IEEE %V 5 %P 3309-3312 vol.5 - 3309-3312 vol.5 %8 2001/// %@ 0-7803-7041-4 %G eng %R 10.1109/ICASSP.2001.940366 %0 Journal Article %J Parallel and Distributed Systems, IEEE Transactions on %D 2001 %T An adaptive algorithm for tolerating value faults and crash failures %A Ren,Yansong %A Michel Cukier %A Sanders,W. H. %K active replication communication %K adaptive algorithm %K adaptive fault tolerance %K adaptive majority voting algorithm %K AQuA architecture %K client-server systems %K CORBA %K crash failures %K data consistency %K data integrity %K Dependability %K distributed object management %K fault tolerant computing %K objects replication %K value faults %X The AQuA architecture provides adaptive fault tolerance to CORBA applications by replicating objects and providing a high-level method that an application can use to specify its desired level of dependability. This paper presents the algorithms that AQUA uses, when an application's dependability requirements can change at runtime, to tolerate both value faults in applications and crash failures simultaneously. In particular, we provide an active replication communication scheme that maintains data consistency among replicas, detects crash failures, collates the messages generated by replicated objects, and delivers the result of each vote. We also present an adaptive majority voting algorithm that enables the correct ongoing vote while both the number of replicas and the majority size dynamically change. Together, these two algorithms form the basis of the mechanism for tolerating and recovering from value faults and crash failures in AQuA %B Parallel and Distributed Systems, IEEE Transactions on %V 12 %P 173 - 192 %8 2001/02// %@ 1045-9219 %G eng %N 2 %R 10.1109/71.910872 %0 Conference Paper %B Circuits and Systems, 2001. ISCAS 2001. 
The 2001 IEEE International Symposium on %D 2001 %T Adaptive negative cycle detection in dynamic graphs %A Chandrachoodan,N. %A Bhattacharyya, Shuvra S. %A Liu,K. J.R %B Circuits and Systems, 2001. ISCAS 2001. The 2001 IEEE International Symposium on %V 5 %P 163 - 166 %8 2001/// %G eng %0 Report %D 2001 %T Adjusting the Rayleigh Quotient in Semiorthogonal Lanczos Methods %A Stewart, G.W. %K Technical Report %X In a semiorthogonal Lanczos algorithm, the orthogonality of the Lanczos vectors is allowed to deteriorate to roughly the square root of the rounding unit, after which the current vectors are reorthogonalized. A theorem of Simon (1984) shows that the Rayleigh quotient---i.e., the tridiagonal matrix produced by the Lanczos recursion---contains fully accurate approximations to the Ritz values in spite of the lack of orthogonality. Unfortunately, the same lack of orthogonality can cause the Ritz vectors to fail to converge. It also makes the classical estimate for the residual norm misleadingly small. In this note we show how to adjust the Rayleigh quotient to overcome this problem. (Cross-referenced as UMIACS-TR-2001-31) %I Institute for Advanced Computer Studies, Univ of Maryland, College Park %V UMIACS-TR-2001-31 %8 2001/05/10/ %G eng %U http://drum.lib.umd.edu/handle/1903/1132 %0 Conference Paper %B Proceedings of the twelfth annual ACM-SIAM symposium on Discrete algorithms %D 2001 %T Algorithms for facility location problems with outliers %A Charikar,Moses %A Khuller, Samir %A Mount, Dave %A Narasimhan,Giri %X Facility location problems are traditionally investigated with the assumption that all the clients are to be provided service. A significant shortcoming of this formulation is that a few very distant clients, called outliers, can exert a disproportionately strong influence over the final solution. In this paper we explore a generalization of various facility location problems (K-center, K-median, uncapacitated facility location, etc.) to the case when only a specified fraction of the customers are to be served. What makes the problems harder is that we have to also select the subset that should get service. We provide generalizations of various approximation algorithms to deal with this added constraint. %B Proceedings of the twelfth annual ACM-SIAM symposium on Discrete algorithms %S SODA '01 %I Society for Industrial and Applied Mathematics %C Philadelphia, PA, USA %P 642 - 651 %8 2001/// %@ 0-89871-490-7 %G eng %U http://dl.acm.org/citation.cfm?id=365411.365555 %0 Patent %D 2001 %T Analysis and prediction of protein functional sub-types from protein sequence alignments %A Hannenhalli, Sridhar %A Russell,Robert B. %E SMITHKLINE BEECHAM CORP (US) SMITHKLINE BEECHAM PLC (GB) %X Here we present a method for analysis and prediction of functional sub-types from multiple protein sequence alignments. Given an alignment and set of proteins grouped into sub-types according to some definition of function, such as enzymatic specificity, the method identifies positions that are indicative of functional differences by subtraction of sequence profiles, and analysis of positional entropy in the alignment. %V EP20000203764 %8 2001/05/02/ %G eng %U http://www.freepatentsonline.com/EP1096411.html %N EP1096411 %0 Journal Article %J Nucleic acids research %D 2001 %T Analysis of a donor gene region for a variant surface glycoprotein and its expression site in African trypanosomes %A LaCount,D. J %A El-Sayed, Najib M. %A Kaul,S. %A Wanless,D. %A Turner,C. M.R %A Donelson,J.
E %B Nucleic acids research %V 29 %P 2012 - 2012 %8 2001/// %G eng %N 10 %0 Journal Article %J Algorithms and Data Structures %D 2001 %T The analysis of a probabilistic approach to nearest neighbor searching %A Maneewongvatana,S. %A Mount, Dave %X Given a set S of n data points in some metric space. Given a query point q in this space, a nearest neighbor query asks for the nearest point of S to q. Throughout we will assume that the space is real d-dimensional space Rd, and the metric is Euclidean distance. The goal is to preprocess S into a data structure so that such queries can be answered efficiently. Nearest neighbor searching has applications in many areas, including data mining [7], pattern classification [5], data compression [10]. %B Algorithms and Data Structures %P 276 - 286 %8 2001/// %G eng %R 10.1007/3-540-44634-6_26 %0 Conference Paper %B Acoustics, Speech, and Signal Processing, 2001. Proceedings. (ICASSP '01). 2001 IEEE International Conference on %D 2001 %T Analysis of attacks on SDMI audio watermarks %A M. Wu %A Craver,S. %A Felten,E.W. %A Liu,B. %K attacks;watermark %K audio %K coding;copy %K digital %K encapsulation; %K Initiative;attack %K Music %K protection;data %K SDMI %K setup;robustness;security;successful %K watermarks;Secure %K weaknesses;audio %X This paper explains and analyzes the successful attacks submitted by the authors on four audio watermark proposals during a 3-week SDMI public challenge. Our analysis points out some weaknesses in the watermark techniques currently under SDMI consideration and suggests directions for further improvement. The paper also discusses the framework and strategies for analyzing the robustness and security of watermarking systems as well as the difficulty, uniqueness, and unrealistic expectations of the attack setup %B Acoustics, Speech, and Signal Processing, 2001. Proceedings. (ICASSP '01). 2001 IEEE International Conference on %V 3 %P 1369 -1372 vol.3 - 1369 -1372 vol.3 %8 2001/// %G eng %R 10.1109/ICASSP.2001.941183 %0 Journal Article %J Proceedings of the IFIP TC5/WG5 %D 2001 %T Animated heads: From 3d motion fields to action descriptions %A Neumann, J. %A Fermüller, Cornelia %A Aloimonos, J. %B Proceedings of the IFIP TC5/WG5 %V 10 %P 1 - 11 %8 2001/// %G eng %0 Journal Article %J Image Processing, IEEE Transactions on %D 2001 %T Approximating large convolutions in digital images %A Mount, Dave %A Kanungo,T. %A Netanyahu,N. S %A Piatko,C. %A Silverman,R. %A Wu,A. Y %K 2D %K algorithm;binary %K approximation;mathematical %K convolution %K convolution;binary %K convolutions;Bresenham's %K convolutions;convex %K convolutions;geometric %K images;discrete %K kernel;convex %K kernel;digital %K line-drawing %K morphology; %K morphology;approximation %K object;image %K polygonal %K processing;large %K processing;mathematical %K theory;convolution;image %K two-dimensional %X Computing discrete two-dimensional (2-D) convolutions is an important problem in image processing. In mathematical morphology, an important variant is that of computing binary convolutions, where the kernel of the convolution is a 0-1 valued function. This operation can be quite costly, especially when large kernels are involved. We present an algorithm for computing convolutions of this form, where the kernel of the binary convolution is derived from a convex polygon. 
Because the kernel is a geometric object, we allow the algorithm some flexibility in how it elects to digitize the convex kernel at each placement, as long as the digitization satisfies certain reasonable requirements. We say that such a convolution is valid. Given this flexibility we show that it is possible to compute binary convolutions more efficiently than would normally be possible for large kernels. Our main result is an algorithm which, given an m × n image and a k-sided convex polygonal kernel K, computes a valid convolution in O(kmn) time. Unlike standard algorithms for computing correlations and convolutions, the running time is independent of the area or perimeter of K, and our techniques do not rely on computing fast Fourier transforms. Our algorithm is based on a novel use of Bresenham's (1965) line-drawing algorithm and prefix-sums to update the convolution incrementally as the kernel is moved from one position to another across the image. %B Image Processing, IEEE Transactions on %V 10 %P 1826 - 1835 %8 2001/12// %@ 1057-7149 %G eng %N 12 %R 10.1109/83.974567 %0 Journal Article %J Automata, Languages and Programming %D 2001 %T Approximation algorithms for partial covering problems %A Gandhi,R. %A Khuller, Samir %A Srinivasan, Aravind %X We study the generalization of covering problems to partial covering. Here we wish to cover only a desired number of elements, rather than covering all elements as in standard covering problems. For example, in k-set cover, we wish to choose a minimum number of sets to cover at least k elements. For k-set cover, if each element occurs in at most f sets, then we derive a primal-dual f-approximation algorithm (thus implying a 2-approximation for k-vertex cover) in polynomial time. In addition to its simplicity, this algorithm has the advantage of being parallelizable. For instances where each set has cardinality at most three, we obtain an approximation of 4/3. We also present better-than-2-approximation algorithms for k-vertex cover on bounded degree graphs, and for vertex cover on expanders of bounded average degree. We obtain a polynomial-time approximation scheme for k-vertex cover on planar graphs, and for covering points in R^d by disks. %B Automata, Languages and Programming %P 225 - 236 %8 2001/// %G eng %R 10.1007/3-540-48224-5_19 %0 Conference Paper %B 39th AIAA Aerospace Sciences Meeting and Exhibit %D 2001 %T Assessment of the k-epsilon turbulence model for compressible flows using direct simulation data %A Sinha,K. %A Candler,G. V %A Martin, M.P %X We evaluate the k-epsilon turbulence model using direct numerical simulation (DNS) data of a Mach 4 boundary layer. We find that the low Reynolds number damping functions for the Reynolds stress must be corrected by the density ratio to match the DNS data. We present the budget of the k equation and assess the modeling of the various source terms. The models for all the source terms, except for the production and dilatational dissipation terms, are found to be adequate. Finally, we present the solenoidal dissipation rate equation and compute its budget using the boundary layer data. We compare this equation with the dissipation rate equation in an incompressible flow to show the equivalence between the two equations. This is the basis for modeling the solenoidal dissipation equation. However, an additional term in the equation due to variation of fluid viscosity needs to be modeled.
%B 39th AIAA Aerospace Sciences Meeting and Exhibit %C Reno, NV %8 2001/// %G eng %0 Conference Paper %B International Conference on Multimedia and Expo %D 2001 %T Attentive toys %A Haritaoglu,I. %A Cozzi,A. %A Koons,D. %A Flickner,M. %A Zotkin,Dmitry N %A Yacoob,Yaser %B International Conference on Multimedia and Expo %V 22 %P 25 - 25 %8 2001/// %G eng %0 Journal Article %J Robotics and Computer-Integrated Manufacturing %D 2001 %T Automated design of sheet metal punches for bending multiple parts in a single setup %A Alva,Ujval %A Gupta, Satyandra K. %K Process planning %K Punch design %K Sheet metal bending %X Sheet metal bending is a process in which bends are formed using a combination of a punch and a die. A very large number of mechanical products such as furniture panels, shelves, cabinets, housing for electro-mechanical devices, etc. are created by using the sheet metal bending process. Bending tools need to satisfy the following two criteria: (1) tools should be able to withstand bending forces, and (2) tool shapes should be such that there is no tool-part interference. In this paper, we describe a methodology for automatically designing shapes of bending punches for bending multiple parts in a single setup. We create parametric geometric models of punches. These parametric models describe the family of possible punch shapes. Using the part geometry and parametric punch shape models, we automatically generated constraints on tool parameters that eliminate the possibility of part-tool interference. We use mixed-integer techniques to identify parameters of punch shapes that result in the maximum punch strength. Finally, we perform strength analysis of the designed punch shape using finite element analysis methods, to verify that the designed punch shape is capable of withstanding the bending forces. %B Robotics and Computer-Integrated Manufacturing %V 17 %P 33 - 47 %8 2001/02// %@ 0736-5845 %G eng %U http://www.sciencedirect.com/science/article/pii/S0736584500000351 %N 1–2 %R 10.1016/S0736-5845(00)00035-1 %0 Conference Paper %B Automated Software Engineering, International Conference on %D 2001 %T Automated Validation of Software Models %A Sims,Steve %A Cleaveland, Rance %A Butts,Ken %A Ranville,Scott %X This paper describes the application of an automated verification tool to a software model developed at Ford. Ford already has in place an advanced model-based software development framework that employs the Matlab, Simulink, and Stateflow modeling tools. During this project we applied the invariant checker Salsa to a Simulink/Stateflow model of automotive software to check for nondeterminism, missing cases, dead code, and redundant code. During the analysis, a number of anomalies were detected that had not been found during manual review. We argue that the detection and correction of these problems demonstrates a cost-effective application of formal verification that elevates our level of confidence in the model.
%B Automated Software Engineering, International Conference on %I IEEE Computer Society %C Los Alamitos, CA, USA %P 91 - 91 %8 2001/// %@ 0-7695-1426-X %G eng %R http://doi.ieeecomputersociety.org/10.1109/ASE.2001.989794 %0 Conference Paper %B AGENTS '01 Proceedings of the Fifth International Conference on Autonomous Agents %D 2001 %T Automatically Tracking and Analyzing the Behavior of Live Insect Colonies %A Balch, Tucker %A Zia Khan %A Veloso, Manuela %X We introduce the study of live social insect colonies as a relevant and exciting domain for the development and application of multi-agent systems modeling tools. Social insects provide a rich source of traceable social behavior for testing multi-agent tracking, prediction and modeling algorithms. An additional benefit of this research is the potential for contributions to experimental biology --- the principled techniques developed for analyzing artificial multi-agent systems can be applied to advance the state of knowledge of insect behavior. We contribute a novel machine vision system that addresses the challenge of tracking hundreds of small animals simultaneously. Fast color-based tracking is combined with movement-based tracking to locate ants in a real-time video stream. We also introduce new methods for analyzing the spatial activity of ant colonies. The system was validated in experiments with laboratory colonies of Camponotus festinatus and several example analyses of the colonies' spatial behavior are provided. %B AGENTS '01 Proceedings of the Fifth International Conference on Autonomous Agents %S AGENTS '01 %I ACM %P 521 - 528 %8 2001/// %@ 1-58113-326-X %G eng %U http://doi.acm.org/10.1145/375735.376434 %0 Book Section %B Active Networks %D 2000 %T Active Networking Means Evolution (or Enhanced Extensibility Required) %A Hicks, Michael W. %A Nettles,Scott %E Yasuda,Hiroshi %X A primary goal of active networking is to increase the pace of network evolution. Evolution is typically achieved via extensibility; that is, typical active network implementations provide an interface to extend network nodes with dynamically loaded code. Most implementations employ plug-in extensibility, a technique for loading code characterized by a concrete, pre-defined abstraction of future change. While flexible and convenient, we argue that plug-in extensibility alone is not sufficient for true network evolution. Instead, we propose dynamic software updating, a technique that reduces the a priori assumptions of plug-in extensibility, improving flexibility and eliminating the need to pre-plan extensions. However, this additional flexibility creates issues involving validity and security. We discuss these issues, and describe the state-of-the-art in systems that support dynamic software updating, thus framing the problem for researchers developing next-generation active networks. %B Active Networks %S Lecture Notes in Computer Science %I Springer Berlin / Heidelberg %V 1942 %P 16 - 32 %8 2000/// %@ 978-3-540-41179-6 %G eng %U http://dx.doi.org/10.1007/3-540-40057-5_2 %0 Journal Article %J The Journal of the Acoustical Society of America %D 2000 %T Active source location and beamforming %A Duraiswami, Ramani %A Zotkin,Dmitry N %A Borovikov,Eugene A. %A Davis, Larry S.
%B The Journal of the Acoustical Society of America %V 107 %P 2790 - 2790 %8 2000/// %@ 00014966 %G eng %U http://asadl.org/jasa/resource/1/jasman/v107/i5/p2790_s4 %R 10.1121/1.428978 %0 Journal Article %J IEEE Data Engineering Bulletin %D 2000 %T Adaptive query processing: Technology in evolution %A Hellerstein,J. M %A Franklin,M.J. %A Chandrasekaran,S. %A Deshpande, Amol %A Hildrum,K. %A Madden,S. %A Raman,V. %A Shah,M. A %X As query engines are scaled and federated, they must cope with highly unpredictable and changeable environments. In the Telegraph project, we are attempting to architect and implement a continuously adaptive query engine suitable for global-area systems, massive parallelism, and sensor networks. To set the stage for our research, we present a survey of prior work on adaptive query processing, focusing on three characterizations of adaptivity: the frequency of adaptivity, the effects of adaptivity, and the extent of adaptivity. Given this survey, we sketch directions for research in the Telegraph project. %B IEEE Data Engineering Bulletin %V 23 %P 7 - 18 %8 2000/// %G eng %N 2 %0 Journal Article %J Numerical Algorithms %D 2000 %T Adaptive Use of Iterative Methods in Predictor-Corrector Interior Point Methods for Linear Programming %A Wang,Weichung %A O'Leary, Dianne P. %X In this work we devise efficient algorithms for finding the search directions for interior point methods applied to linear programming problems. There are two innovations. The first is the use of updating of preconditioners computed for previous barrier parameters. The second is an adaptive automated procedure for determining whether to use a direct or iterative solver, whether to reinitialize or update the preconditioner, and how many updates to apply. These decisions are based on predictions of the cost of using the different solvers to determine the next search direction, given costs in determining earlier directions. We summarize earlier results using a modified version of the OB1-R code of Lustig, Marsten, and Shanno, and we present results from a predictor–corrector code PCx modified to use adaptive iteration. If a direct method is appropriate for the problem, then our procedure chooses it, but when an iterative procedure is helpful, substantial gains in efficiency can be obtained. %B Numerical Algorithms %V 25 %P 387 - 406 %8 2000/// %G eng %0 Journal Article %J International Journal for Parasitology %D 2000 %T The African trypanosome genome %A El-Sayed, Najib M. %A Hegde,Priti %A Quackenbush,John %A Melville,Sara E. %A Donelson,John E. %X The haploid nuclear genome of the African trypanosome, Trypanosoma brucei, is about 35 Mb and varies in size among different trypanosome isolates by as much as 25%. The nuclear DNA of this diploid organism is distributed among three size classes of chromosomes: the megabase chromosomes of which there are at least 11 pairs ranging from 1 Mb to more than 6 Mb (numbered I-XI from smallest to largest); several intermediate chromosomes of 200-900 kb and uncertain ploidy; and about 100 linear minichromosomes of 50-150 kb. Size differences of as much as four-fold can occur, both between the two homologues of a megabase chromosome pair in a specific trypanosome isolate and among chromosome pairs in different isolates. The genomic DNA sequences determined to date indicated that about 50% of the genome is coding sequence.
The chromosomal telomeres possess TTAGGG repeats and many, if not all, of the telomeres of the megabase and intermediate chromosomes are linked to expression sites for genes encoding variant surface glycoproteins (VSGs). The minichromosomes serve as repositories for VSG genes since some but not all of their telomeres are linked to unexpressed VSG genes. A gene discovery program, based on sequencing the ends of cloned genomic DNA fragments, has generated more than 20 Mb of discontinuous single-pass genomic sequence data during the past year, and the complete sequences of chromosomes I and II (about 1 Mb each) in T. brucei GUTat 10.1 are currently being determined. It is anticipated that the entire genomic sequence of this organism will be known in a few years. Analysis of a test microarray of 400 cDNAs and small random genomic DNA fragments probed with RNAs from two developmental stages of T. brucei demonstrates that the microarray technology can be used to identify batteries of genes differentially expressed during the various life cycle stages of this parasite. %B International Journal for Parasitology %V 30 %P 329 - 345 %8 2000/04// %@ 0020-7519 %G eng %U http://www.sciencedirect.com/science/article/pii/S0020751900000151 %N 4 %R 10.1016/S0020-7519(00)00015-1 %0 Journal Article %J Technical Reports (CIS) %D 2000 %T Agents in Network Management %A Ertugay,Osman %A Hicks, Michael W. %A Smith,Jonathan %A Kornblum,Jessica %X The ubiquity and complexity of modern networks require automated management and control. With increases in scale, automated solutions based on simple data access models such as SNMP will give way to more distributed and algorithmic techniques. This article outlines present and near-term solutions based on the ideas of active networks and mobile agents, which permit sophisticated programmable control and management of ultra large scale networks. %B Technical Reports (CIS) %8 2000/02/26/ %G eng %U http://repository.upenn.edu/cis_reports/138 %0 Journal Article %J ACM Trans. Math. Softw. %D 2000 %T Algorithm 805: computation and uses of the semidiscrete matrix decomposition %A Kolda,Tamara G. %A O'Leary, Dianne P. %K compression %K latent semantic indexing %K Matrix decomposition %K semidiscrete decomposition %K singular value decomposition %X We present algorithms for computing a semidiscrete approximation to a matrix in a weighted norm, with the Frobenius norm as a special case. The approximation is formed as a weighted sum of outer products of vectors whose elements are ±1 or 0, so the storage required by the approximation is quite small. We also present a related algorithm for approximation of a tensor. Applications of the algorithms are presented to data compression, filtering, and information retrieval; software is provided in C and in Matlab. %B ACM Trans. Math. Softw. %V 26 %P 415 - 435 %8 2000/09// %@ 0098-3500 %G eng %U http://doi.acm.org/10.1145/358407.358424 %N 3 %R 10.1145/358407.358424 %0 Journal Article %J Journal of Molecular Biology %D 2000 %T Analysis and prediction of functional sub-types from protein sequence alignments %A Hannenhalli, Sridhar %A Russell,Robert B. %K prediction %K protein function %K protein structure %K sequence alignment %X The increasing number and diversity of protein sequence families requires new methods to define and predict details regarding function. Here, we present a method for analysis and prediction of functional sub-types from multiple protein sequence alignments.
Given an alignment and set of proteins grouped into sub-types according to some definition of function, such as enzymatic specificity, the method identifies positions that are indicative of functional differences by comparison of sub-type specific sequence profiles, and analysis of positional entropy in the alignment. Alignment positions with significantly high positional relative entropy correlate with those known to be involved in defining sub-types for nucleotidyl cyclases, protein kinases, lactate/malate dehydrogenases and trypsin-like serine proteases. We highlight new positions for these proteins that suggest additional experiments to elucidate the basis of specificity. The method is also able to predict sub-type for unclassified sequences. We assess several variations on a prediction method, and compare them to simple sequence comparisons. For assessment, we remove close homologues to the sequence for which a prediction is to be made (by a sequence identity above a threshold). This simulates situations where a protein is known to belong to a protein family, but is not a close relative of another protein of known sub-type. Considering the four families above, and a sequence identity threshold of 30 %, our best method gives an accuracy of 96 % compared to 80 % obtained for sequence similarity and 74 % for BLAST. We describe the derivation of a set of sub-type groupings derived from an automated parsing of alignments from PFAM and the SWISSPROT database, and use this to perform a large-scale assessment. The best method gives an average accuracy of 94 % compared to 68 % for sequence similarity and 79 % for BLAST. We discuss implications for experimental design, genome annotation and the prediction of protein function and protein intra-residue distances. %B Journal of Molecular Biology %V 303 %P 61 - 76 %8 2000/10/13/ %@ 0022-2836 %G eng %U http://www.sciencedirect.com/science/article/pii/S0022283600940361 %N 1 %R 10.1006/jmbi.2000.4036 %0 Conference Paper %B Proceedings of the sixteenth annual symposium on Computational geometry %D 2000 %T The analysis of a simple k-means clustering algorithm %A Kanungo,Tapas %A Mount, Dave %A Netanyahu,Nathan S. %A Piatko,Christine %A Silverman,Ruth %A Wu,Angela Y. %B Proceedings of the sixteenth annual symposium on Computational geometry %S SCG '00 %I ACM %C New York, NY, USA %P 100 - 109 %8 2000/// %@ 1-58113-224-7 %G eng %U http://doi.acm.org/10.1145/336154.336189 %R 10.1145/336154.336189 %0 Book Section %B Algebraic Frames for the Perception-Action Cycle %D 2000 %T Analyzing Action Representations %A Aloimonos, J. %A Fermüller, Cornelia %E Sommer,Gerald %E Zeevi,Yehoshua %X We argue that actions represent the basic seed of intelligence underlying perception of the environment, and the representations encoding actions should be the starting point upon which further studies of cognition are built. In this paper we make a first effort in characterizing these action representations. In particular, from the study of simple actions related to 3D rigid motion interpretation, we deduce a number of principles for the possible computations responsible for the interpretation of space-time geometry. Using these principles, we then discuss possible avenues on how to proceed in analyzing the representations of more complex human actions. 
%B Algebraic Frames for the Perception-Action Cycle %S Lecture Notes in Computer Science %I Springer Berlin / Heidelberg %V 1888 %P 1 - 21 %8 2000/// %@ 978-3-540-41013-3 %G eng %U http://dx.doi.org/10.1007/10722492_1 %0 Journal Article %J International Journal of High Performance Computing Applications %D 2000 %T An API for Runtime Code Patching %A Buck, Bryan %A Hollingsworth, Jeffrey K %X The authors present a postcompiler program manipulation tool called Dyninst, which provides a C++ class library for program instrumentation. Using this library, it is possible to instrument and modify application programs during execution. A unique feature of this library is that it permits machine-independent binary instrumentation programs to be written. The authors describe the interface that a tool sees when using this library. They also discuss three simple tools built using this interface: a utility to count the number of times a function is called, a program to capture the output of an already running program to a file, and an implementation of conditional breakpoints. For the conditional breakpoint example, the authors show that by using their interface compared with gdb, they are able to execute a program with conditional breakpoints up to 900 times faster. %B International Journal of High Performance Computing Applications %V 14 %P 317 - 329 %8 2000/// %G eng %U http://hpc.sagepub.com/content/14/4/317.abstract %N 4 %R 10.1177/109434200001400404 %0 Journal Article %J Computational Geometry %D 2000 %T Approximate range searching %A Arya,Sunil %A Mount, Dave %K Approximation algorithms %K Box-decomposition trees %K Partition trees %K Range searching %X The range searching problem is a fundamental problem in computational geometry, with numerous important applications. Most research has focused on solving this problem exactly, but lower bounds show that if linear space is assumed, the problem cannot be solved in polylogarithmic time, except for the case of orthogonal ranges. In this paper we show that if one is willing to allow approximate ranges, then it is possible to do much better. In particular, given a bounded range Q of diameter w and ε>0, an approximate range query treats the range as a fuzzy object, meaning that points lying within distance εw of the boundary of Q either may or may not be counted. We show that in any fixed dimension d, a set of n points in R^d can be preprocessed in O(n log n) time and O(n) space, such that approximate queries can be answered in O(log n + (1/ε)^d) time. The only assumption we make about ranges is that the intersection of a range and a d-dimensional cube can be answered in constant time (depending on dimension). For convex ranges, we tighten this to O(log n + (1/ε)^{d-1}) time. We also present a lower bound for approximate range searching based on partition trees of Ω(log n + (1/ε)^{d-1}), which implies optimality for convex ranges (assuming fixed dimensions). Finally, we give empirical evidence showing that allowing small relative errors can significantly improve query execution times. 
%B Computational Geometry %V 17 %P 135 - 152 %8 2000/12// %@ 0925-7721 %G eng %U http://www.sciencedirect.com/science/article/pii/S0925772100000225 %N 3–4 %R 10.1016/S0925-7721(00)00022-5 %0 Journal Article %J Information Processing Letters %D 2000 %T Approximating low-congestion routing and column-restricted packing problems %A Baveja,Alok %A Srinivasan, Aravind %K algorithms %K Approximation algorithms %K integer programming %K Packing %K Routing %X We contribute to a body of research asserting that the fractional and integral optima of column-sparse integer programs are “nearby”. This yields improved approximation algorithms for some generalizations of the knapsack problem, with applications to low-congestion routing in networks, file replication in distributed databases, and other packing problems. %B Information Processing Letters %V 74 %P 19 - 25 %8 2000/04// %@ 0020-0190 %G eng %U http://www.sciencedirect.com/science/article/pii/S0020019000000338 %N 1–2 %R 10.1016/S0020-0190(00)00033-8 %0 Journal Article %J Mathematics of Operations Research %D 2000 %T Approximation Algorithms for Disjoint Paths and Related Routing and Packing Problems %A Baveja,Alok %A Srinivasan, Aravind %K Approximation algorithms %K Disjoint paths %K integer programming %K Linear programming %K multicommodity flow %K Packing %K randomized algorithms %K rounding %K Routing %K unsplittable flow %X Given a network and a set of connection requests on it, we consider the maximum edge-disjoint paths and related generalizations and routing problems that arise in assigning paths for these requests. We present improved approximation algorithms and/or integrality gaps for all problems considered; the central theme of this work is the underlying multicommodity flow relaxation. Applications of these techniques to approximating families of packing integer programs are also presented. %B Mathematics of Operations Research %V 25 %P 255 - 280 %8 2000/05/01/ %@ 0364-765X, 1526-5471 %G eng %U http://mor.journal.informs.org/content/25/2/255 %N 2 %R 10.1287/moor.25.2.255.12228 %0 Journal Article %J Algorithmica %D 2000 %T Approximation algorithms with bounded performance guarantees for the clustered traveling salesman problem %A Guttmann-Beck,N. %A Hassin,R. %A Khuller, Samir %A Raghavachari,B. %X Let G=(V,E) be a complete undirected graph with vertex set V, edge set E, and edge weights l(e) satisfying triangle inequality. The vertex set V is partitioned into clusters V_1, ..., V_k. The clustered traveling salesman problem is to compute a shortest Hamiltonian cycle (tour) that visits all the vertices, and in which the vertices of each cluster are visited consecutively. Since this problem is a generalization of the traveling salesman problem, it is NP-hard. In this paper we consider several variants of this basic problem and provide polynomial time approximation algorithms for them. %B Algorithmica %V 28 %P 422 - 437 %8 2000/// %G eng %N 4 %R 10.1007/s004530010045 %0 Report %D 2000 %T An Arnoldi--Schur Algorithm for Large Eigenproblems %A Stewart, G.W. %K Technical Report %X Sorensen's iteratively restarted Arnoldi algorithm is one of the most successful and flexible methods for finding a few eigenpairs of a large matrix. However, the need to preserve structure of the Arnoldi decomposition, on which the algorithm is based, restricts the range of transformations that can be performed on it. 
In consequence, it is difficult to deflate converged Ritz vectors from the decomposition. Moreover, the potential forward instability of the implicit QR algorithm can cause unwanted Ritz vectors to persist in the computation. In this paper we introduce a generalized Arnoldi decomposition that solves both problems in a natural and efficient manner. (Also cross-referenced as UMIACS-TR-2000-21) %I Institute for Advanced Computer Studies, Univ of Maryland, College Park %V UMIACS-TR-2000-21 %8 2000/04/25/ %G eng %U http://drum.lib.umd.edu/handle/1903/1066 %0 Conference Paper %B 38th AIAA Aerospace Sciences Meeting and Exhibit %D 2000 %T Assessment of Temperature Fluctuation Models for RANS Simulations of Hypersonic Reacting Flows %A Sinha,K. %A Martin, M.P %A Candler,G. V %X Turbulence-chemistry interactions play an important role in various fluid flow problems. In a hypersonic boundary layer, these interactions can alter the surface heating rate of the vehicle by modifying the temperature profile across the layer. The effect of these interactions is accounted for in a Reynolds-averaged Navier-Stokes (RANS) simulation using a probability density function (PDF) of the temperature distribution. In this paper, two presumed PDF models are considered, namely, the Martin and Candler model and the Gaffney et al. model, and their accuracy is assessed by comparing to direct numerical simulation (DNS) of homogeneous isotropic turbulence. The Martin and Candler model reproduces the turbulence-chemistry interactions in the flow correctly whereas the Gaffney model overpredicts the temperature fluctuations. %B 38th AIAA Aerospace Sciences Meeting and Exhibit %C Reno, NV %8 2000/// %G eng %0 Conference Paper %B Tools with Artificial Intelligence, 2000. ICTAI 2000. Proceedings. 12th IEEE International Conference on %D 2000 %T An assumptive logic programming methodology for parsing %A Voll,K. %A Tom Yeh %A Dahl,V. %X We show how several novel tools in logic programming for AI (namely, continuation based linear and timeless assumptions, and datalog grammars) can assist us in producing terse treatments of difficult language processing phenomena. As a proof of concept, we present a concise parser for datalog grammars (logic grammars where strings are represented with numbered word boundaries rather than as lists of words), that uses assumptions and a combination of left-corner parsing and charting. We then study two test cases of this parser's application: complete constituent coordination, and error diagnosis and correction. %B Tools with Artificial Intelligence, 2000. ICTAI 2000. Proceedings. 12th IEEE International Conference on %I IEEE Comput. Soc %P 11 - 18 %8 2000/// %@ 0-7695-0909-6 %G eng %U http://ieeexplore.ieee.org/xpls/abs_all.jsp?arnumber=889840 %R 10.1109/TAI.2000.889840 %0 Conference Paper %B Cluster Computing %D 2000 %T Attacking the bottlenecks of backfilling schedulers %A Zotkin,Dmitry N %A Keleher,P. J %A Perkovic,D. %B Cluster Computing %8 2000/// %G eng %0 Conference Paper %B 2000 IEEE International Conference on Systems, Man, and Cybernetics %D 2000 %T An audio-video front-end for multimedia applications %A Zotkin,Dmitry N %A Duraiswami, Ramani %A Davis, Larry S. %A Haritaoglu,I. 
%K Acoustic noise %K acoustical source location %K Application software %K audio cues %K audio-video front-end %K CAMERAS %K Computer vision %K Microphones %K multimedia applications %K multimedia systems %K multimodal sensor fusion system %K multimodal user interfaces %K Position measurement %K REAL TIME %K Real time systems %K real-time systems %K sensor fusion %K sound %K Speech recognition %K User interfaces %K video cameras %K video gaming %K video-based person tracking %K Videoconference %K videoconferencing %K Virtual reality %K visual cues %K Working environment noise %X Applications such as video gaming, virtual reality, multimodal user interfaces and videoconferencing, require systems that can locate and track persons in a room through a combination of visual and audio cues, enhance the sound that they produce, and perform identification. We describe the development of a particular multimodal sensor fusion system that is portable, runs in real time and achieves these objectives. The system employs novel algorithms for acoustical source location, video-based person tracking and overall system control, which are also described %B 2000 IEEE International Conference on Systems, Man, and Cybernetics %I IEEE %V 2 %P 786-791 vol.2 %8 2000/// %@ 0-7803-6583-6 %G eng %R 10.1109/ICSMC.2000.885945 %0 Conference Paper %D 2000 %T Automated design of sheet metal bending tools %A Alva,U. %A Gupta,S.K. %A Gupta,R. K. %X Sheet metal bending is a process in which bends are formed using a combination of a punch and a die. Bending tools need to satisfy the following two criteria: (1) tools should be able to withstand bending forces, and (2) tool shapes should be such that there is no tool-part interference. In this paper, we describe a systematic methodology for automatically synthesizing a punch shape that can be used to produce more than one type of part. We create a parametric geometric model of possible punches. This parametric model describes a family of possible punch shapes. Using the geometric models of various given part types and a parametric punch model, we automatically generate constraints on punch parameters that eliminate the possibility of part-tool interference. We use mixed integer programming techniques to solve these constraints and identify parameters of a punch shape that can work for the given set of parts. Finally, we perform strength analysis of the designed punch to verify that the designed punch is capable of withstanding the bending forces. %8 2000/// %G eng %U http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.34.5031&rep=rep1&type=pdf %0 Journal Article %J SIGSOFT Softw. Eng. Notes %D 2000 %T Automated test oracles for GUIs %A Memon, Atif M. %A Pollack,Martha E. %A Soffa,Mary Lou %K automated oracles %K GUI test oracles %K GUI testing %X Graphical User Interfaces (GUIs) are critical components of today's software. Because GUIs have different characteristics than traditional software, conventional testing techniques do not apply to GUI software. In previous work, we presented an approach to generate GUI test cases, which take the form of sequences of actions. In this paper we develop a test oracle technique to determine if a GUI behaves as expected for a given test case. Our oracle uses a formal model of a GUI, expressed as sets of objects, object properties, and actions. Given the formal model and a test case, our oracle automatically derives the expected state for every action in the test case. 
We represent the actual state of an executing GUI in terms of objects and their properties derived from the GUI's execution. Using the actual state acquired from an execution monitor, our oracle automatically compares the expected and actual states after each action to verify the correctness of the GUI for the test case. We implemented the oracle as a component in our GUI testing system, called Planning Assisted Tester for grapHical user interface Systems (PATHS), which is based on AI planning. We experimentally evaluated the practicality and effectiveness of our oracle technique and report on the results of experiments to test and verify the behavior of our version of the Microsoft WordPad's GUI. %B SIGSOFT Softw. Eng. Notes %V 25 %P 30 - 39 %8 2000/11// %@ 0163-5948 %G eng %U http://doi.acm.org/10.1145/357474.355050 %N 6 %R 10.1145/357474.355050 %0 Journal Article %J Advances in Database Technology—EDBT 2000 %D 2000 %T Automatic deployment of application-specific metadata and code in MOCHA %A Rodriguez-Martinez,M. %A Roussopoulos, Nick %B Advances in Database Technology—EDBT 2000 %P 69 - 85 %8 2000/// %G eng %0 Journal Article %J IEEE Transactions on Image Processing - Special Issue on Image and Video Processing for Digital Libraries %D 2000 %T Automatic Text Detection and Tracking in Digital Video %A Li,H. %A David Doermann %A Kia,O. %X Text that appears in a scene or is graphically added to video can provide an important supplemental source of index information as well as clues for decoding the video's structure and for classification. In this work, we present algorithms for detecting and tracking text in digital video. Our system implements a scale-space feature extractor that feeds an artificial neural processor to detect text blocks. Our text tracking scheme consists of two modules: a sum of squared difference (SSD) based module to find the initial position and a contour-based module to refine the position. Experiments conducted with a variety of video sources show that our scheme can detect and track text robustly. %B IEEE Transactions on Image Processing - Special Issue on Image and Video Processing for Digital Libraries %V 9 %P 147 - 156 %8 2000/01// %G eng %N 1 %0 Conference Paper %B ICPR %D 2000 %T A Video Text Detection System Based on Automated Training %A Li,H. %A David Doermann %B ICPR %V 2 %P 223 - 226 %8 2000/// %G eng %0 Journal Article %J Breadth and depth of semantic lexicons %D 1999 %T ACQUISITION OF SEMANTIC LEXICONS %A Dorr, Bonnie J %A Jones,D. %X This paper addresses the problem of large-scale acquisition of computational-semantic lexicons from machine-readable resources. We describe semantic filters designed to reduce the number of incorrect assignments (i.e., improve precision) made by a purely syntactic technique. We demonstrate that it is possible to use these filters to build broad-coverage lexicons with minimal effort, at a depth of knowledge that lies at the syntax-semantics interface. We report on our results of disambiguating the verbs in the semantic filters by adding WordNet sense annotations. We then show the results of our classification on unknown words and we evaluate these results. %B Breadth and depth of semantic lexicons %V 10 %P 79 - 79 %8 1999/// %G eng %0 Journal Article %J ACM SIGMOD Record %D 1999 %T The active MultiSync controller of the cubetree storage organization %A Roussopoulos, Nick %A Kotidis,Y. %A Sismanis,Y. 
%B ACM SIGMOD Record %V 28 %P 582 - 583 %8 1999/// %G eng %N 2 %0 Book Section %B Wiley Encyclopedia of Electrical and Electronics Engineering %D 1999 %T Active Perception %A Fermüller, Cornelia %A Aloimonos, J. %B Wiley Encyclopedia of Electrical and Electronics Engineering %I John Wiley & Sons, Inc. %8 1999/12/27/ %@ 9780471346081 %G eng %U http://onlinelibrary.wiley.com/doi/10.1002/047134608X.W5515/abstract %0 Conference Paper %B Simulation of Semiconductor Processes and Devices, 1999. SISPAD '99. 1999 International Conference on %D 1999 %T Advances in spherical harmonic device modeling: calibration and nanoscale electron dynamics %A Lin,Chung-Kai %A Goldsman,N. %A Mayergoyz, Issak D %A Aronowitz,S. %A Belova,N. %K Boltzmann transport equation %K spherical harmonic model %K calibration %K nanoscale electron dynamics %K semiconductor device models %K device simulation %K surface scattering %K substrate current %K I-V characteristics %K SHBTE %X Improvements in the Spherical Harmonic (SH) method for solving Boltzmann Transport Equation (BTE) are presented. The simulation results provide the same physical detail as analytical band Monte Carlo (MC) calculations, and are obtained approximately a thousand times faster. A new physical model for surface scattering has also been developed. As a result, the SHBTE model achieves calibration for a complete process of I-V characteristics and substrate current consistently for the first time %B Simulation of Semiconductor Processes and Devices, 1999. SISPAD '99. 1999 International Conference on %P 167 - 170 %8 1999/// %G eng %R 10.1109/SISPAD.1999.799287 %0 Journal Article %J Computer Networks: The International Journal of Computer and Telecommunications Networking %D 1999 %T Agents vs. direct manipulation: what's best to disentangle the Web? %A Sánchez,J. Alfredo %A Etzioni,Oren %A Kautz,Henry %A Lieberman,Henry %A Shneiderman, Ben %B Computer Networks: The International Journal of Computer and Telecommunications Networking %V 31 %P 1765 - 1767 %8 1999/05// %@ 1389-1286 %G eng %U http://dl.acm.org/citation.cfm?id=313009.313142 %N 11-16 %0 Conference Paper %B Proceedings of the eighth international conference on World Wide Web %D 1999 %T Agents vs. direct manipulation: what's best to disentangle the Web? %A Sánchez,J.A. %A Etzioni,O. %A Kautz,H. %A Lieberman,H. %A Shneiderman, Ben %B Proceedings of the eighth international conference on World Wide Web %8 1999/// %G eng %0 Journal Article %J Readings in information visualization %D 1999 %T Alternate geometry %A Card,S.K. %A Mackinlay,J.D. %A Shneiderman, Ben %B Readings in information visualization %P 381 - 381 %8 1999/// %G eng %0 Report %D 1999 %T An analysis of the Rayleigh-Ritz method for approximating eigenspaces %A Jia,Zhongxiao %A Stewart, G.W. %X This paper concerns the Rayleigh–Ritz method for computing an approximation to an eigenspace X of a general matrix A from a subspace W that contains an approximation to X. The method produces a pair (N, X̃) that purports to approximate a pair (L, X), where X is a basis for X and AX = XL. 
In this paper we consider the convergence of (N, X̃) as the sine ε of the angle between X and W approaches zero. It is shown that under a natural hypothesis — called the uniform separation condition — the Ritz pairs (N, X̃) converge to the eigenpair (L, X). When one is concerned with eigenvalues and eigenvectors, one can compute certain refined Ritz vectors whose convergence is guaranteed, even when the uniform separation condition is not satisfied. An attractive feature of the analysis is that it does not assume that A has distinct eigenvalues or is diagonalizable. %I Math. Comp %8 1999/// %G eng %0 Conference Paper %B Proceedings of the seventh ACM international conference on Multimedia (Part 2) %D 1999 %T Application-layer broker for scalable Internet services with resource reservation %A Bai,Ping %A Prabhakaran,B. %A Srinivasan, Aravind %B Proceedings of the seventh ACM international conference on Multimedia (Part 2) %S MULTIMEDIA '99 %I ACM %C New York, NY, USA %P 103 - 106 %8 1999/// %@ 1-58113-239-5 %G eng %U http://doi.acm.org/10.1145/319878.319906 %R 10.1145/319878.319906 %0 Journal Article %J Readings in information visualization %D 1999 %T Applications and implications %A Card,S.K. %A Mackinlay,J.D. %A Shneiderman, Ben %B Readings in information visualization %P 625 - 635 %8 1999/// %G eng %0 Journal Article %J Lectures on Approximation and Randomized Algorithms (M. Karonski and HJ Promel, editors), Series in Advanced Topics in Mathematics, Polish Scientific Publishers PWN %D 1999 %T Approximation algorithms via randomized rounding: a survey %A Srinivasan, Aravind %X Approximation algorithms provide a natural way to approach computationally hard problems. There are currently many known paradigms in this area, including greedy algorithms, primal-dual methods, methods based on mathematical programming (linear and semidefinite programming in particular), local improvement, and “low distortion” embeddings of general metric spaces into special families of metric spaces. Randomization is a useful ingredient in many of these approaches, and particularly so in the form of randomized rounding of a suitable relaxation of a given problem. We survey this technique here, with a focus on correlation inequalities and their applications. %B Lectures on Approximation and Randomized Algorithms (M. Karonski and HJ Promel, editors), Series in Advanced Topics in Mathematics, Polish Scientific Publishers PWN %P 9 - 71 %8 1999/// %G eng %0 Thesis %D 1999 %T An architecture and implementation for a cooperative database system %A Godfrey,P. %A Minker, Jack %I University of Maryland at College Park %8 1999/// %G eng %0 Conference Paper %B ICDAR %D 1999 %T A Robust Method for Unknown Forms Analysis %A Xingyuan,L. %A David Doermann %A Oh,W. %A Gao,W. %B ICDAR %P 531 - 534 %8 1999/// %G eng %0 Conference Paper %B Signals, Systems, and Computers, 1999. Conference Record of the Thirty-Third Asilomar Conference on %D 1999 %T Attacks on digital watermarks %A M. Wu %A Liu,Bede %K digital watermarks %K watermark attacks %K copy control %K ownership protection %K image authentication %K video authentication %K authentication system %K fragile watermarking %K robust watermark removal %K watermark forging %K copyright %K iterative block dropping %K block size %K concealment based attacks %K double watermarking %K countermeasure %K message embedding %K image coding %K video coding %X Two methods for attacking watermarks in images and videos have been proposed. 
The first attack is to remove robust watermarks by iterative block dropping and concealment. Our study suggests that the block size chosen by a robust watermark embedding system should be large enough to circumvent concealment-based attacks. The second attacking method is to forge valid watermarks for authentication systems based on fragile-watermarking. Double watermarking is then proposed as one possible countermeasure. %B Signals, Systems, and Computers, 1999. Conference Record of the Thirty-Third Asilomar Conference on %V 2 %P 1508-1512 vol.2 %8 1999/10// %G eng %R 10.1109/ACSSC.1999.832001 %0 Conference Paper %D 1999 %T Automated punch shape synthesis for sheet metal bending operations %A Alva,U. %A Gupta,S.K. %8 1999/// %G eng %0 Journal Article %J Journal of the WAS, Special Issue on Communications, 85 (2) %D 1998 %T Access Policies for Distributed Video-on-Demand Systems %A Simon,R. %A Mundur, Padma %A Sood,A. %B Journal of the WAS, Special Issue on Communications, 85 (2) %8 1998/// %G eng %0 Journal Article %J Linear Algebra and its Applications %D 1998 %T On the adjugate matrix %A Stewart, G.W. %X The adjugate A^A of a matrix A is the transpose of the matrix of the co-factors of the elements of A. The computation of the adjugate from its definition involves the computation of n^2 determinants of order (n−1)—a prohibitively expensive O(n^4) process. On the other hand, the computation from the formula A^A = det(A)A^{-1} breaks down when A is singular and is potentially unstable when A is ill-conditioned with respect to inversion. In this paper we first show that the adjugate can be perfectly conditioned, even when A is ill-conditioned. We then show that if due care is taken the adjugate can be accurately computed from the inverse, even when the latter has been inaccurately computed. In Appendix A we give a formal derivation of an observation of Wilkinson on the accuracy of computed inverses. %B Linear Algebra and its Applications %V 283 %P 151 - 164 %8 1998/11/01/ %@ 0024-3795 %G eng %U http://www.sciencedirect.com/science/article/pii/S0024379598100988 %N 1–3 %R 10.1016/S0024-3795(98)10098-8 %0 Conference Paper %B Image Processing, 1998. ICIP 98. Proceedings. 1998 International Conference on %D 1998 %T An algorithm for wipe detection %A M. Wu %A Wolf,W. %A Liu,B. %K wipe detection %K shot transitions %K video content analysis %K video signal processing %K image sequences %K statistical information %K structural information %K object motion %K camera motion %K compressed domain %K DC sequence %K MPEG stream %K TV programs %X The detection of transitions between shots in video programs is an important first step in analyzing video content. The wipe is a frequently used transitional form between shots. Wipe detection is more involved than the detection of abrupt and other gradual transitions because a wipe may take various patterns and because of the difficulty in discriminating a wipe from object and camera motion. In this paper, we propose an algorithm for detecting wipes using both structural and statistical information. The algorithm can effectively detect most wipes used in current TV programs. It uses the DC sequence which can be easily extracted from the MPEG stream without full decompression %B Image Processing, 1998. ICIP 98. Proceedings. 
1998 International Conference on %V 1 %P 893-897 vol.1 %8 1998/10// %G eng %R 10.1109/ICIP.1998.723664 %0 Conference Paper %B ACM Sigmod Record %D 1998 %T An alternative storage organization for ROLAP aggregate views based on cubetrees %A Kotidis,Y. %A Roussopoulos, Nick %B ACM Sigmod Record %V 27 %P 249 - 258 %8 1998/// %G eng %0 Journal Article %J International Journal of Computer Vision %D 1998 %T Ambiguity in Structure from Motion: Sphere versus Plane %A Fermüller, Cornelia %A Aloimonos, J. %X If 3D rigid motion can be correctly estimated from image sequences, the structure of the scene can be correctly derived using the equations for image formation. However, an error in the estimation of 3D motion will result in the computation of a distorted version of the scene structure. Of computational interest are these regions in space where the distortions are such that the depths become negative, because in order for the scene to be visible it has to lie in front of the image, and thus the corresponding depth estimates have to be positive. The stability analysis for the structure from motion problem presented in this paper investigates the optimal relationship between the errors in the estimated translational and rotational parameters of a rigid motion that results in the estimation of a minimum number of negative depth values. The input used is the value of the flow along some direction, which is more general than optic flow or correspondence. For a planar retina it is shown that the optimal configuration is achieved when the projections of the translational and rotational errors on the image plane are perpendicular. Furthermore, the projection of the actual and the estimated translation lie on a line through the center. For a spherical retina, given a rotational error, the optimal translation is the correct one; given a translational error, the optimal rotational error depends both in direction and value on the actual and estimated translation as well as the scene in view. The proofs, besides illuminating the confounding of translation and rotation in structure from motion, have an important application to ecological optics. The same analysis provides a computational explanation of why it is easier to estimate self-motion in the case of a spherical retina and why shape can be estimated easily in the case of a planar retina, thus suggesting that nature's design of compound eyes (or panoramic vision) for flying systems and camera-type eyes for primates (and other systems that perform manipulation) is optimal. %B International Journal of Computer Vision %V 28 %P 137 - 154 %8 1998/// %@ 0920-5691 %G eng %U http://dx.doi.org/10.1023/A:1008063000586 %N 2 %0 Report %D 1998 %T ANN programming manual %A Mount, Dave %I Department of Computer Science, University of Maryland, College Park %8 1998/// %G eng %0 Conference Paper %B Proceedings of the 12th international conference on Supercomputing %D 1998 %T Application level scheduling of gene sequence comparison on metacomputers %A Spring, Neil %A Wolski,R. %B Proceedings of the 12th international conference on Supercomputing %P 141 - 148 %8 1998/// %G eng %0 Journal Article %J PROCEEDINGS-SPIE THE INTERNATIONAL SOCIETY FOR OPTICAL ENGINEERING %D 1998 %T Approximating large convolutions in digital images %A Kanungo,T. %A Mount, Dave %A Netanyahu,N. S %A Piatko,C. %A Silverman,R. %A Wu,A. Y %X Computing discrete two-dimensional convolutions is an important problem in image processing. 
In mathematical morphology, an important variant is that of computing binary convolutions, where the kernel of the convolution is a 0-1 valued function. This operation can be quite costly, especially when large kernels are involved. In this paper, we present an algorithm for computing convolutions of this form, where the kernel of the binary convolution is derived from a convex polygon. Because the kernel is a geometric object, we allow the algorithm some flexibility in how it elects to digitize the convex kernel at each placement, as long as the digitization satisfies certain reasonable requirements. We say that such a convolution is valid. Given this flexibility we show that it is possible to compute binary convolutions more efficiently than would normally be possible for large kernels. Our main result is an algorithm, which given an m × n image and a k-sided convex polygonal kernel, computes a valid convolution in O(kmn) time. Unlike standard algorithms for computing correlations and convolutions, the running time is independent of the area or perimeter of K, and our techniques do not rely on computing fast Fourier transforms. Our algorithm is based on a novel use of Bresenham's line-drawing algorithm and prefix-sums to update the convolution efficiently as the kernel is moved from one position to another across the image. %B PROCEEDINGS-SPIE THE INTERNATIONAL SOCIETY FOR OPTICAL ENGINEERING %P 216 - 227 %8 1998/// %G eng %0 Journal Article %J Algorithmica %D 1998 %T Approximation algorithms for connected dominating sets %A Guha,S. %A Khuller, Samir %X The dominating set problem in graphs asks for a minimum size subset of vertices with the following property: each vertex is required to be either in the dominating set, or adjacent to some vertex in the dominating set. We focus on the related question of finding a connected dominating set of minimum size, where the graph induced by vertices in the dominating set is required to be connected as well. This problem arises in network testing, as well as in wireless communication. Two polynomial time algorithms that achieve approximation factors of 2H(Δ)+2 and H(Δ)+2 are presented, where Δ is the maximum degree and H is the harmonic function. This question also arises in relation to the traveling tourist problem, where one is looking for the shortest tour such that each vertex is either visited or has at least one of its neighbors visited. We also consider a generalization of the problem to the weighted case, and give an algorithm with an approximation factor of (c_n + 1) ln n, where c_n ln k is the approximation factor for the node-weighted Steiner tree problem (currently c_n = 1.6103). We also consider the more general problem of finding a connected dominating set of a specified subset of vertices and provide a polynomial time algorithm with a (c+1)H(Δ) + c − 1 approximation factor, where c is the Steiner approximation ratio for graphs (currently c = 1.644). 
%B Algorithmica %V 20 %P 374 - 387 %8 1998/// %G eng %N 4 %R 10.1007/PL00009201 %0 Conference Paper %B Proceedings of the fourteenth annual symposium on Computational geometry %D 1998 %T Approximation algorithms for multiple-tool milling %A Arya,Sunil %A Cheng,Siu-Wing %A Mount, Dave %B Proceedings of the fourteenth annual symposium on Computational geometry %S SCG '98 %I ACM %C New York, NY, USA %P 297 - 306 %8 1998/// %@ 0-89791-973-4 %G eng %U http://doi.acm.org/10.1145/276884.276918 %R 10.1145/276884.276918 %0 Conference Paper %B Seventeenth IEEE Symposium on Reliable Distributed Systems %D 1998 %T AQuA: an adaptive architecture that provides dependable distributed objects %A Cukier, Michel %A Ren,J. %A Sabnis,C. %A Henke,D. %A Pistole,J. %A Sanders,W. H. %A Bakken,D. E. %A Berman,M.E. %A Karr,D. A. %A Schantz,R.E. %K adaptive architecture %K AQuA %K availability requests %K client-server systems %K commercial off-the-shelf hardware %K CORBA %K dependability manager %K dependability requirements %K dependable distributed objects %K distributed object management %K Ensemble protocol stack %K Fault tolerance %K group communication services %K middleware software %K Object Request Brokers %K process-level communication %K proteus %K Quality Objects %K replication %K software fault tolerance %K Software quality %X Dependable distributed systems are difficult to build. This is particularly true if they have dependability requirements that change during the execution of an application, and are built with commercial off-the-shelf hardware. In that case, fault tolerance must be achieved using middleware software, and mechanisms must be provided to communicate the dependability requirements of a distributed application to the system and to adapt the system's configuration to try to achieve the desired dependability. The AQuA architecture allows distributed applications to request a desired level of availability using the Quality Objects (QuO) framework and includes a dependability manager that attempts to meet requested availability levels by configuring the system in response to outside requests and changes in system resources due to faults. The AQuA architecture uses the QuO runtime to process and invoke availability requests, the Proteus dependability manager to configure the system in response to faults and availability requests, and the Ensemble protocol stack to provide group communication services. Furthermore, a CORBA interface is provided to application objects using the AQuA gateway. The gateway provides a mechanism to translate between process-level communication, as supported by Ensemble, and IIOP messages, understood by Object Request Brokers. Both active and passive replication are supported, and the replication type to use is chosen based on the performance and dependability requirements of particular distributed applications %B Seventeenth IEEE Symposium on Reliable Distributed Systems %P 245 - 253 %8 1998/10// %G eng %R 10.1109/RELDIS.1998.740506 %0 Report %D 1998 %T On the Area of Overlap of Translated Polygons %A Mount, Dave %A Silverman,Ruth %A Wu,Angela Y. %K Technical Report %X (Also cross-referenced as CAR-TR-699) Given two simple polygons P and Q in the plane and a translation vector t ∈ R^2, the area-of-overlap function of P and Q is the function Ar(t) = Area(P ∩ (t + Q)), where t + Q denotes Q translated by t. This function has a number of applications in areas such as motion planning and object recognition. 
We present a number of mathematical results regarding this function. We also provide efficient algorithms for computing a representation of this function, and for tracing contour curves of constant area of overlap. %I Department of Computer Science, University of Maryland, College Park %V CS-TR-3201 %8 1998/10/15/ %G eng %U http://drum.lib.umd.edu//handle/1903/403 %0 Journal Article %J Institute for Systems Research Technical Reports %D 1998 %T Assessing users' subjective satisfaction with the Information System for Youth Services (ISYS) %A Slaughter,Laura %A Norman,Kent L %A Shneiderman, Ben %K Technical Report %X In this investigation, the Questionnaire for User Interaction Satisfaction (QUIS 5.5), a tool for assessing users' subjective satisfaction with specific aspects of the human/computer interface, was used to assess the strengths and weaknesses of the Information System for Youth Services (ISYS). ISYS is used by over 600 employees of the Maryland State Department of Juvenile Services (DJS) as a tracking device for juvenile offenders. Ratings and comments were collected from 254 DJS employees who use ISYS. The overall mean rating across all questions was 5.1 on a one to nine scale. The ten highest and lowest rated questions were identified. The QUIS allowed us to isolate subgroups which were compared with mean ratings from four measures of specific interface factors. The comments obtained from users provided suggestions, complaints and endorsements of the system. Also cross-referenced as CAR-TR-768 %B Institute for Systems Research Technical Reports %8 1998/10/15/ %G eng %U http://drum.lib.umd.edu/handle/1903/431 %0 Conference Paper %B Proceedings of the Symposium on Network and Distributed Systems Security (NDSS’98) %D 1998 %T Automated recovery in a secure bootstrap process %A Arbaugh, William A. %A Keromytis,A. D %A Farber,D. J %A Smith,J. M %B Proceedings of the Symposium on Network and Distributed Systems Security (NDSS’98) %P 155 - 167 %8 1998/// %G eng %0 Conference Paper %B Proceedings of International Conference on Pattern Recognition %D 1998 %T Automatic Identification of Text In Digital Video Key Frames %A Li,H. %A David Doermann %X Scene and graphic text can provide important supplemental index information in video sequences. In this paper we address the problem of automatically identifying text regions in digital video key frames. The text contained in video frames is typically very noisy because it is aliased and/or digitized at a much lower resolution than typical document images, making identification, extraction and recognition difficult. The proposed method is based on the use of a hybrid wavelet/neural network segmenter on a series of overlapping small windows to classify regions which contain text. To detect text over a wide range of font sizes, the method is applied to a pyramid of images and the regions identified at each level are integrated. %B Proceedings of International Conference on Pattern Recognition %P 129 - 132 %8 1998/// %G eng %0 Report %D 1998 %T Automatic Text Detection and Tracking in Digital Video %A Li,H. %A David Doermann %A Kia,O. %X Text which either appears in a scene or is graphically added to video can provide an important supplemental source of index information as well as clues for decoding the video's structure and for classification. In this paper we present algorithms for detecting and tracking text components that appear within digital video frames. 
Our system implements a scale-space feature extractor that feeds an artificial neural processor to extract textual regions and track their movement over time. The extracted regions can then be used as input to an appropriate Optical Character Recognition system which produces indexable keywords. %I University of Maryland, College Park %V LAMP-TR-028,CFAR-TR-900,CS-TR-3962 %8 1998/12// %G eng %0 Conference Paper %B Proceedings of IEEE 1998 Workshop on Multimedia Signal Processing %D 1998 %T Automatic Text Tracking In Digital Videos %A Li,H. %A David Doermann %X We address the problem of automatically tracking moving text in digital videos. Our scheme consists of two separate processes: monitoring which detects the new text line entering a frame, and tracking which uses prediction techniques to reconcile the text from frame to frame. Temporal consistency allows one to monitor periodically and reduce the computation complexity. The tracking process uses a rapid prediction/search scheme to update the position of the text blocks between frames. We provide details of the implementation and results for text moving in the scene and text which moves as a result of camera motion. %B Proceedings of IEEE 1998 Workshop on Multimedia Signal Processing %P 21 - 26 %8 1998/// %G eng %0 Journal Article %J SIAM Review %D 1997 %T Accuracy and Stability of Numerical Algorithms %A Higham,N. J %A Stewart, G.W. %B SIAM Review %V 39 %P 164 - 165 %8 1997/// %G eng %N 1 %0 Conference Paper %B Fifth RIAO Conference on Computer Assisted Information Searching on the Internet, June %D 1997 %T Adaptive filtering of multilingual document streams %A Oard, Douglas %B Fifth RIAO Conference on Computer Assisted Information Searching on the Internet, June %8 1997/// %G eng %0 Journal Article %J Visualization and Computer Graphics, IEEE Transactions on %D 1997 %T Adaptive real-time level-of-detail based rendering for polygonal models %A Xia,J. C %A El-Sana,J. %A Varshney, Amitabh %K adaptive real-time level-of-detail rendering %K polygonal models %K view-dependent simplifications %K viewing direction %K visibility %K lighting %K image-space %K object-space %K frame-to-frame coherence %K multiresolution hierarchies %K triangulated models %K selective refinement %K triangle hierarchies %K displayed triangles %K run time %K scalar attributes %K vector attributes %K scientific visualization %K computational geometry %K data visualisation %K real-time systems %K rendering (computer graphics) %X We present an algorithm for performing adaptive real-time level-of-detail-based rendering for triangulated polygonal models. The simplifications are dependent on viewing direction, lighting, and visibility and are performed by taking advantage of image-space, object-space, and frame-to-frame coherences. In contrast to the traditional approaches of precomputing a fixed number of level-of-detail representations for a given object, our approach involves statically generating a continuous level-of-detail representation for the object. This representation is then used at run time to guide the selection of appropriate triangles for display. The list of displayed triangles is updated incrementally from one frame to the next. Our approach is more effective than the current level-of-detail-based rendering approaches for most scientific visualization applications, where there are a limited number of highly complex objects that stay relatively close to the viewer. 
Our approach is applicable for scalar (such as distance from the viewer) as well as vector (such as normal direction) attributes %B Visualization and Computer Graphics, IEEE Transactions on %V 3 %P 171 - 183 %8 1997/06//apr %@ 1077-2626 %G eng %N 2 %R 10.1109/2945.597799 %0 Conference Paper %B ICDAR %D 1997 %T A Distributed Management System for Testing Document Image Database Analysis Algorithms %A David Doermann %A Sauvola,J. %A Haapakoski,S. %A Kauniskangas,H. %A Seppanen,T. %A Pietikainen,M. %B ICDAR %P 989 - 995 %8 1997/// %G eng %0 Journal Article %J Journal of Biological Chemistry %D 1997 %T African trypanosomes have differentially expressed genes encoding homologues of the Leishmania GP63 surface protease %A El‐Sayed, Najib M. %A Donelson,J. E %B Journal of Biological Chemistry %V 272 %P 26742 - 26742 %8 1997/// %G eng %N 42 %0 Journal Article %J CONCUR'97: Concurrency Theory %D 1997 %T An algebraic theory of multiple clocks %A Cleaveland, Rance %A Lüttgen,G. %A Mendler,M. %B CONCUR'97: Concurrency Theory %P 166 - 180 %8 1997/// %G eng %0 Journal Article %J ACM Trans. Math. Softw. %D 1997 %T Algorithm 776: SRRIT: a Fortran subroutine to calculate the dominant invariant subspace of a nonsymmetric matrix %A Bai,Z. %A Stewart, G.W. %K invariant subspace %K nonsymmetric eigenvalue problem %K projection method %X SRRIT is a Fortran program to calculate an approximate orthonormal basis for a dominant invariant subspace of a real matrix A by the method of simultaneous iteration. Specifically, given an integer m, SRRIT computes a matrix Q with m orthonormal columns and real quasi-triangular matrix T of order m such that the equation AQ = QT is satisfied up to a tolerance specified by the user. The eigenvalues of T are approximations to the m eigenvalues of largest absolute magnitude of A and the columns of Q span the invariant subspace corresponding to those eigenvalues. SRRIT references A only through a user-provided subroutine to form the product AQ; hence it is suitable for large sparse problems. %B ACM Trans. Math. Softw. %V 23 %P 494 - 513 %8 1997/12// %@ 0098-3500 %G eng %U http://doi.acm.org/10.1145/279232.279234 %N 4 %R 10.1145/279232.279234 %0 Journal Article %J CGC 2nd Annual Fall Workshop on Computational Geometry %D 1997 %T ANN: A library for approximate nearest neighbor searching %A Mount, Dave %A Arya,S. %B CGC 2nd Annual Fall Workshop on Computational Geometry %8 1997/// %G eng %0 Conference Paper %B Software Engineering, International Conference on %D 1997 %T Anywhere, Anytime Code Inspections: Using the Web to Remove Inspection Bottlenecks in Large-Scale Software Development %A Perpich,J. M. %A Perry,D. E. %A Porter, Adam %A Votta,L. G. %A Wade,M. W. %K asynchronous; natural occurring inspection experiment; automated support for inspections %K code inspections: web-based %K meetingless %X The dissemination of critical information and the synchronization of coordinated activities are critical problems in geographically separated, large-scale, software development. While these problems are not insurmountable, their solutions have varying trade-offs in terms of time, cost and effectiveness. Our previous studies have shown that the inspection interval is typically lengthened because of schedule conflicts among inspectors which delay the (usually) required inspection collection meeting. We present and justify a solution using an intranet web that is both timely in its dissemination of information and effective in its coordination of distributed inspectors. 
First, exploiting a naturally occurring experiment (reported here), we conclude that the asynchronous collection of inspection results is at least as effective as the synchronous collection of those results. Second, exploiting the information dissemination qualities and the on-demand nature of information retrieval of the web, and the platform independence of browsers, we built an inexpensive tool that integrates seamlessly into the current development process. By seamless we mean an identical paper flow that results in an almost identical inspection process. The acceptance of the inspection tool has been excellent. The cost savings just from the reduction in paper work and the time savings from the reduction in distribution interval of the inspection package (sometimes involving international mailings) have been substantial. These savings together with the seamless integration into the existing environment are the major factors for this acceptance. From our viewpoint as experimentalists, the acceptance came too readily. Therefore we lost our opportunity to explore this tool using a series of controlled experiments to isolate the underlying factors of its effectiveness. Nevertheless, by using historical data we can show that the new process is less expensive in terms of cost and at least as effective in terms of quality (defect detection effectiveness). %B Software Engineering, International Conference on %I IEEE Computer Society %C Los Alamitos, CA, USA %P 14 - 14 %8 1997/// %G eng %R http://doi.ieeecomputersociety.org/10.1109/ICSE.1997.610188 %0 Journal Article %J Design Automation for Embedded Systems %D 1997 %T APGAN and RPMC: Complementary heuristics for translating DSP block diagrams into efficient software implementations %A Bhattacharyya, Shuvra S. %A Murthy,P. K %A Lee,E. A %B Design Automation for Embedded Systems %V 2 %P 33 - 60 %8 1997/// %G eng %N 1 %0 Conference Paper %B Proceedings of the twenty-ninth annual ACM symposium on Theory of computing %D 1997 %T Approximating hyper-rectangles: learning and pseudo-random sets %A Auer,Peter %A Long,Philip M. %A Srinivasan, Aravind %K approximations of distributions %K derandomization %K explicit constructions %K machine learning %K multiple-instance learning %K PAC learning %K pseudorandomness %K Ramsey graphs %K random graphs %K rectangles %K sample complexity %B Proceedings of the twenty-ninth annual ACM symposium on Theory of computing %S STOC '97 %I ACM %C New York, NY, USA %P 314 - 323 %8 1997/// %@ 0-89791-888-6 %G eng %U http://doi.acm.org/10.1145/258533.258611 %R 10.1145/258533.258611 %0 Report %D 1997 %T Aspectual Modifications to a LCS Database for NLP Applications %A Dorr, Bonnie J %A Olsen,M.B. %X Verbal and compositional lexical aspect provide the underlying temporal structure of events. Knowledge of lexical aspect, e.g., (a)telicity, is therefore required for interpreting event sequences in discourse (Dowty, 1986; Moens and Steedman, 1988; Passoneau, 1988), interfacing to temporal databases (Androutsopoulos, 1996), processing temporal modifiers (Antonisse, 1994), describing allowable alternations and their semantic effects (Resnik, 1996; Tenny, 1994), and selecting tense and lexical items for natural language generation ((Dorr and Olsen, 1996; Klavans and Chodorow, 1992), cf. (Slobin and Bocaz, 1988)). We show that it is possible to represent lexical aspect—both verbal and compositional—on a large scale, using Lexical Conceptual Structure (LCS) representations of verbs in the classes cataloged by Levin (1993). 
We show how proper consideration of these universal pieces of verb meaning may be used to refine lexical representations and derive a range of meanings from combinations of LCS representations. A single algorithm may therefore be used to determine lexical aspect classes and features at both verbal and sentence levels. Finally, we illustrate how knowledge of lexical aspect facilitates the interpretation of events in NLP applications. %I Institute for Advanced Computer Studies, Univ of Maryland, College Park %8 1997/// %G eng %0 Journal Article %J IEEE Transactions on Software Engineering %D 1997 %T Assessing software review meetings: results of a comparative analysis of two experimental studies %A Porter, Adam %A Johnson,P. M %K Aggregates %K Collaborative work %K Computer Society %K Costs %K Helium %K Inspection %K inspection meeting %K Job shop scheduling %K Programming %K reconciliation %K Software development management %K Software quality %K software quality assurance %K software review meeting assessment %K Software reviews %X Software review is a fundamental tool for software quality assurance. Nevertheless, there are significant controversies as to the most efficient and effective review method. One of the most important questions currently being debated is the utility of meetings. Although almost all industrial review methods are centered around the inspection meeting, recent findings call their value into question. In prior research the authors separately and independently conducted controlled experimental studies to explore this issue. The paper presents new research to understand the broader implications of these two studies. To do this, they designed and carried out a process of “reconciliation” in which they established a common framework for the comparison of the two experimental studies, reanalyzed the experimental data with respect to this common framework, and compared the results. Through this process they found many striking similarities between the results of the two studies, strengthening their individual conclusions. It also revealed interesting differences between the two experiments, suggesting important avenues for future research %B IEEE Transactions on Software Engineering %V 23 %P 129 - 145 %8 1997/03// %@ 0098-5589 %G eng %N 3 %R 10.1109/32.585501 %0 Journal Article %J Research in Engineering Design %D 1997 %T Automated manufacturability analysis: A survey %A Gupta, Satyandra K. %A Regli,William C. %A Das,Diganta %A Nau, Dana S. %K engineering %X In the market-place of the 21st century, there is no place for traditional ‘over-the-wall’ communications between design and manufacturing. In order to ‘design it right the very first time’, designers must ensure that their products are both functional and easy to manufacture. Software tools have had some successes in reducing the barriers between design and manufacturing. Manufacturability analysis systems are emerging as one such tool — enabling identification of potential manufacturing problems during the design phase and providing suggestions to designers on how to eliminate them. %B Research in Engineering Design %V 9 %P 168 - 190 %8 1997/// %@ 0934-9839 %G eng %U http://www.springerlink.com/content/p8p5516251023777/abstract/ %N 3 %R 10.1007/BF01596601 %0 Journal Article %J Evolutionary Computation, IEEE Transactions on %D 1997 %T Automatic discovery of self-replicating structures in cellular automata %A Lohn,J. D %A Reggia, James A. 
%B Evolutionary Computation, IEEE Transactions on %V 1 %P 165 - 178 %8 1997/// %G eng %N 3 %0 Journal Article %J Discrete & Computational Geometry %D 1996 %T Accounting for boundary effects in nearest-neighbor searching %A Arya,S. %A Mount, Dave %A Narayan,O. %X Given n data points in d-dimensional space, nearest-neighbor searching involves determining the nearest of these data points to a given query point. Most average-case analyses of nearest-neighbor searching algorithms are made under the simplifying assumption that d is fixed and that n is so large relative to d that boundary effects can be ignored. This means that for any query point the statistical distribution of the data points surrounding it is independent of the location of the query point. However, in many applications of nearest-neighbor searching (such as data compression by vector quantization) this assumption is not met, since the number of data points n grows roughly as 2^d. Largely for this reason, the actual performances of many nearest-neighbor algorithms tend to be much better than their theoretical analyses would suggest. We present evidence of why this is the case. We provide an accurate analysis of the number of cells visited in nearest-neighbor searching by the bucketing and k-d tree algorithms. We assume m^d points uniformly distributed in dimension d, where m is a fixed integer ≥2. Further, we assume that distances are measured in the L∞ metric. Our analysis is tight in the limit as d approaches infinity. Empirical evidence is presented showing that the analysis applies even in low dimensions. %B Discrete & Computational Geometry %V 16 %P 155 - 176 %8 1996/// %G eng %N 2 %R 10.1007/BF02716805 %0 Conference Paper %B Euro-Par'96 Parallel Processing %D 1996 %T An adaptive cost system for parallel program instrumentation %A Hollingsworth, Jeffrey K %A Miller, B. %B Euro-Par'96 Parallel Processing %P 88 - 97 %8 1996/// %G eng %0 Journal Article %J ACM Comput. Surv. %D 1996 %T Advanced graphic user interfaces: elastic and tightly coupled windows %A Shneiderman, Ben %B ACM Comput. Surv. %V 28 %8 1996/12// %@ 0360-0300 %G eng %U http://doi.acm.org/10.1145/242224.242409 %N 4es %R 10.1145/242224.242409 %0 Conference Paper %B Eleventh Annual IEEE Symposium on Logic in Computer Science, 1996. LICS '96. Proceedings %D 1996 %T An algebraic theory of process efficiency %A Natarajan,V. %A Cleaveland, Rance %K algebraic characterization %K algebraic specification %K algebraic theory %K bisimulation-based efficiency preorders %K Calculus %K calculus of communicating systems %K Carbon capture and storage %K Computer science %K concurrent systems %K formal logic %K formal specification %K full abstractness result %K internal activity %K optimality %K process efficiency %K reasoning %K semantic preorders %K System testing %K testing-based semantic theory %X This paper presents a testing-based semantic theory for reasoning about the efficiency of concurrent systems as measured in terms of the amount of their internal activity. The semantic preorders are given an algebraic characterization, and their optimality is established by means of a full abstractness result. They are also shown to subsume existing bisimulation-based efficiency preorders. An example is provided to illustrate the utility of this approach. %B Eleventh Annual IEEE Symposium on Logic in Computer Science, 1996. LICS '96.
Proceedings %I IEEE %P 63 - 72 %8 1996/07/27/30 %@ 0-8186-7463-6 %G eng %R 10.1109/LICS.1996.561304 %0 Report %D 1996 %T Algorithm-Independent Stability Analysis of Structure from Motion. %A Fermüller, Cornelia %X The stability analysis for the structure from motion problem presented in this paper investigates the optimal relationship between the errors in the estimated translational and rotational parameters of a rigid motion that results in the estimation of a minimum number of negative depth values. No particular estimators are used and no specific assumptions about the scene are made. The input used is the value of the flow along some direction, which is more general than optic flow or correspondence. For a planar retina it is shown that the optimal configuration is achieved when the projections of the translational and rotational errors on the image plane are perpendicular. For a spherical retina, given a rotational error, the optimal translation is the correct one, while given a translational error the optimal rotational error is normal to the translational one at an equal distance from the real and estimated translations. The proofs, besides illuminating the confounding of translation and rotation in structure from motion, have an important application to ecological optics. The same analysis provides a computational explanation of why it is much easier to estimate self-motion in the case of a spherical retina and why it is much easier to estimate shape in the case of a planar retina, thus suggesting that nature's design of compound eyes (or panoramic vision) for flying systems and camera-type eyes for primates (and other systems that perform manipulation) is optimal. %I Computer Vision Lab, University of Maryland College Park %8 1996/// %G eng %0 Journal Article %J Neural computation %D 1996 %T Alignment of coexisting cortical maps in a motor control model %A Chen,Y. %A Reggia, James A. %B Neural computation %V 8 %P 731 - 755 %8 1996/// %G eng %N 4 %0 Conference Paper %B In Workshop on Materialized Views, in cooperation with ACM SIGMOD %D 1996 %T Answering queries using OQL view expressions %A Florescu,D. %A Raschid, Louiqa %A Valduriez,P. %B In Workshop on Materialized Views, in cooperation with ACM SIGMOD %8 1996/// %G eng %0 Journal Article %J Machine Vision and Applications %D 1996 %T Applying algebraic and differential invariants for logo recognition %A David Doermann %A Rivlin,E. %A Weiss, I. %X The problem of logo recognition is of great interest in the document domain, especially for document databases. By recognizing the logo we obtain semantic information about the document which may be useful in deciding whether or not to analyze the textual components. Given a logo block candidate from a document image and alogo database, we would like to determine whether the region corresponds to a logo in the database. Similarly, if we are given a logo block candidate and adocumentdatabase, we wish to determine whether there are any documents in the database of similar origin. Both problems require indexing into a possibly large model space.In this contribution, we present a novel application of algebraic and differential invariants to the problem of logo recognition. By using invariants we have shape descriptors for matching that are unique and independent of the point of view. The algebraic invariants handle cases in which the whole shape of the logo is given and it is easy to describe. 
The differential invariants cover complex arbitrary logo shape and handle situations in which only part of the logo is recovered. We outline a hierarchical approach to logo recognition and define methods for page segmentation, feature extraction, and indexing. We demonstrate our approach and present results on a database of approximately 100 logos. %B Machine Vision and Applications %V 9 %P 73 - 86 %8 1996/// %G eng %N 2 %0 Conference Paper %B SPIE Conference on Multimedia Storage and Archiving Systems %D 1996 %T Archiving, indexing, and retrieval of video in compressed domain %A Kobla,V. %A David Doermann %A Lin,K-I. %B SPIE Conference on Multimedia Storage and Archiving Systems %V 2916 %P 78 - 89 %8 1996/// %G eng %0 Journal Article %J Science %D 1996 %T AT-AC Introns—An ATtACk on Dogma %A Mount, Stephen M. %X Messenger RNAs are initially made as long precursor molecules from which final, translatable mRNAs are assembled. A specialized organelle, the spliceosome, contains small RNAs with pairs of dinucleotides that can pick out complementary pairs of nucleotides at the ends of the introns that are to be spliced together. It has been assumed that these pairs were always GU-AG (or GT-AG in the DNA), but in findings described in this week's issue of Science (p. 1716) and the current issue of Cell (vol. 84, p. 801) a minor class of intron is shown to be flanked by AU-AC (or AT-AC in the DNA). In his Perspective, Mount explains how these maverick introns were discovered and why their existence is so surprising. %B Science %V 271 %P 1690 - 1692 %8 1996/03/22/ %@ 0036-8075, 1095-9203 %G eng %U http://www.sciencemag.org/content/271/5256/1690 %N 5256 %R 10.1126/science.271.5256.1690 %0 Journal Article %J Journal of Applied Nonclassical Logics %D 1996 %T Automated inference in active logics %A Miller,M. %A Perlis, Don %B Journal of Applied Nonclassical Logics %V 6 %P 9 - 28 %8 1996/// %G eng %0 Journal Article %J SIGSOFT Softw. Eng. Notes %D 1995 %T Algebra and models (and reality) %A Zelkowitz, Marvin V %B SIGSOFT Softw. Eng. Notes %V 20 %P 55 - 57 %8 1995/04// %@ 0163-5948 %G eng %U http://doi.acm.org/10.1145/224155.224159 %N 2 %R 10.1145/224155.224159 %0 Journal Article %J SIAM Journal on Matrix Analysis and Applications %D 1995 %T Algebraic Analysis of the Hierarchical Basis Preconditioner %A Elman, Howard %A Zhang, Xuejun %B SIAM Journal on Matrix Analysis and Applications %V 16 %P 192 - 192 %8 1995/// %@ 08954798 %G eng %U http://epubs.siam.org/sima/resource/1/sjmael/v16/i1/p192_s1 %N 1 %R 10.1137/S0895479892236933 %0 Journal Article %J Discrete applied mathematics %D 1995 %T Almost fully-parallel parentheses matching %A Berkman,O.
%A Vishkin, Uzi %B Discrete applied mathematics %V 57 %P 11 - 28 %8 1995/// %G eng %N 1 %0 Conference Paper %B Proceedings of the 1st conference on Designing interactive systems: processes, practices, methods, & techniques %D 1995 %T An applied ethnographic method for redesigning user interfaces %A Rose,Anne %A Shneiderman, Ben %A Plaisant, Catherine %B Proceedings of the 1st conference on Designing interactive systems: processes, practices, methods, & techniques %S DIS '95 %I ACM %C New York, NY, USA %P 115 - 122 %8 1995/// %@ 0-89791-673-5 %G eng %U http://doi.acm.org/10.1145/225434.225447 %R 10.1145/225434.225447 %0 Conference Paper %B Proceedings of the eleventh annual symposium on Computational geometry %D 1995 %T Approximate range searching %A Arya,Sunil %A Mount, Dave %B Proceedings of the eleventh annual symposium on Computational geometry %S SCG '95 %I ACM %C New York, NY, USA %P 172 - 181 %8 1995/// %@ 0-89791-724-3 %G eng %U http://doi.acm.org/10.1145/220279.220298 %R 10.1145/220279.220298 %0 Journal Article %J SIAM Journal on Computing (SICOMP) %D 1995 %T Approximating the minimum equivalent digraph %A Khuller, Samir %A Raghavachari,B. %A Young,N. %B SIAM Journal on Computing (SICOMP) %V 24 %P 859 - 872 %8 1995/// %G eng %N 4 %0 Book %D 1995 %T Artificial Intelligence: Theory and Practice %A Dean, Thomas L. %A ALLEN, JAMES AUTOR %A Aloimonos, J. %K Computers / Expert Systems %K Computers / Intelligence (AI) & Semantics %X This book provides a detailed understanding of the broad issues in artificial intelligence and a useful survey of current AI technology. The author delivers broad coverage of innovative representational techniques, including neural networks, image processing, and probabilistic reasoning, alongside the traditional methods of symbolic reasoning. AI algorithms are described in detailed prose in the text and fully implemented in LISP at the ends of chapters. A stand-alone LISP chapter makes an excellent reference and refresher. Each chapter includes a detailed description of an AI application. %I Benjamin/Cummings Pub. %P 604 %8 1995 %@ 9780805325478 %G eng %0 Report %D 1995 %T Automated Manufacturability Analysis: A Survey %A Das,Diganta %A Gupta, Satyandra K. %A Regli,W. C. %A Nau, Dana S. %K manufacturability %K Manufacturing %K Solid modeling %K Systems Integration Methodology %X In the marketplace of the 21st century, there is no place for traditional ``over-the-wall'' communications between design and manufacturing. In order to ``design it right the very first time,'' designers must ensure that their products are both functional and easy to manufacture. Software tools have had some successes in reducing the barriers between design and manufacturing. Manufacturability analysis systems are emerging as one such tool---enabling identification of potential manufacturing problems during the design phase and providing suggestions to designers on how to eliminate them.

In this paper, we provide a survey of the current state of the art in automated manufacturability analysis. We present the historical context in which this area has emerged and outline characteristics to compare and classify various systems. We describe the two dominant approaches to automated manufacturability analysis and overview representative systems based on their application domain. We describe support tools that enhance the effectiveness of manufacturability analysis systems. Finally, we attempt to expose some of the existing research challenges and future directions.

%I Institute for Systems Research, University of Maryland, College Park %V ISR; TR 1995-14 %8 1995/// %G eng %U http://drum.lib.umd.edu//handle/1903/5609 %0 Thesis %D 1995 %T Automated Manufacturability Analysis of Machined Parts %A Gupta, Satyandra K. %K Automation %K computer aided manufacturing %K computer integrated manufacturing %K Feature extraction %K flexible manufacturing %K manufacturability %K Manufacturing %K Solid modeling %X Because of pressing demands to reduce lead time and product cost, increasing research attention is being given to integration of engineering design and manufacturing. In this thesis, a systematic approach has been developed for computer-aided manufacturability analysis of machined parts. This approach can be used during design stages to improve the product quality from the manufacturing point of view.

Evaluating the manufacturability of a proposed design involves determining whether or not it is manufacturable with a given set of manufacturing operations - and if so, then finding the associated manufacturing efficiency. In this research, the design is represented as a solid model. The tolerance and surface finish information is represented as attributes of various faces of the solid model. Machining features are used to model the available machining operations. Since there can be several different ways to manufacture a proposed design, this requires considering alternative ways to manufacture it, in order to determine which one best meets the design and manufacturing objectives.

The approach developed in this thesis is based on the systematic exploration of various machining plans. The first step is to identify all machining features which can potentially be used to machine the given design. Using these features, different machining plans are generated. Each time a new plan is generated, it is examined to find whether it can produce the desired design tolerances. If a plan is found to be capable of meeting the tolerance specifications, then its rating is computed. If no machining plan can be found that is capable of producing the design, then the design cannot be machined using the given set of machining operations; otherwise, the manufacturability rating of the design is computed. Since various alternative ways of machining the part are considered in this approach, the conclusions about the manufacturability are more realistic compared to the approach where just one alternative is considered.

It is anticipated that this research will help in speeding up the evaluation of new product designs in order to decide how or whether to manufacture them. Such a capability will be useful in responding quickly to changing demands and opportunities in the marketplace. %I UNIVERSITY OF MARYLAND, COLLEGE PARK %8 1995/// %G eng %U http://drum.lib.umd.edu//handle/1903/5693 %0 Journal Article %J First Workshop on Simulation and Interaction in Virtual Environments %D 1995 %T Automatic generation of multiresolution for polygonal models %A Varshney, Amitabh %A Agarwal,P. %A Brooks,F. %A Wright,W. %A Weber,H. %B First Workshop on Simulation and Interaction in Virtual Environments %8 1995/// %G eng %0 Journal Article %J AI Magazine %D 1994 %T AAAI 1994 Spring Symposium Series Reports %A Woods,William %A Uckun,Sendar %A Kohane,Isaac %A Bates,Joseph %A Hulthage,Ingemar %A Gasser,Les %A Hanks,Steve %A Gini,Maria %A Ram,Ashwin %A desJardins, Marie %A Johnson,Peter %A Etzioni,Oren %A Coombs,David %A Whitehead,Steven %B AI Magazine %V 15 %P 22 - 22 %8 1994/09/15/ %@ 0738-4602 %G eng %U http://www.aaai.org/ojs/index.php/aimagazine/article/viewArticle/1101 %N 3 %R 10.1609/aimag.v15i3.1101 %0 Journal Article %J Journal of Artificial Neural Networks %D 1994 %T Adaptation of noncompetitive and competitive neural networks to focal lesions %A Weinrich,M. %A Sutton,G. G %A Reggia, James A. %A D'Autrechy,C. L %B Journal of Artificial Neural Networks %V 1 %P 51 - 60 %8 1994/// %G eng %0 Journal Article %J SIGSOFT Softw. Eng. Notes %D 1994 %T Algebra and models (and reality) %A Zelkowitz, Marvin V %B SIGSOFT Softw. Eng. Notes %V 19 %P 79 - 81 %8 1994/10// %@ 0163-5948 %G eng %U http://doi.acm.org/10.1145/190679.190688 %N 4 %R 10.1145/190679.190688 %0 Conference Paper %B Proceedings of the SIGCHI conference on Human factors in computing systems: celebrating interdependence %D 1994 %T The alphaslider: a compact and rapid selector %A Ahlberg,Christopher %A Shneiderman, Ben %K Alphaslider %K dynamic queries %K menus %K selection technology %K widget %B Proceedings of the SIGCHI conference on Human factors in computing systems: celebrating interdependence %S CHI '94 %I ACM %C New York, NY, USA %P 365 - 371 %8 1994/// %@ 0-89791-650-6 %G eng %U http://doi.acm.org/10.1145/191666.191790 %R 10.1145/191666.191790 %0 Report %D 1994 %T An Application of Distributed Solid Modeling: Feature Recognition %A Regli,W. C. %A Gupta, Satyandra K. %A Nau, Dana S. %K Distributed computing %K feature recognition %K feature- based modeling %K multiprocessor solid modeling %K Systems Integration Methodology %X The availability of low-cost computational power is a driving force behind the growing sophistication of CAD software. Tools designed to reduce time-consuming build-test-redesign iterations are essential for increasing engineering quality and productivity. However, automation of the design process poses many difficult computational problems. As more downstream engineering activities are being considered during the design phase, guaranteeing reasonable response times within design systems becomes problematic. Design is an interactive process and speed is a critical factor in systems that enable designers to explore and experiment with alternative ideas during the design phase. Achieving interactivity requires an increasingly sophisticated allocation of computational resources in order to perform realistic design analyses and generate feedback in real time.

This paper presents our initial efforts to develop techniques to apply distributed algorithms to the problem of recognizing machining features from solid models. Existing work on recognition of features has focused exclusively on serial computer architectures. Our objective is to show that distributed algorithms can be employed on realistic parts with large numbers of features and many geometric and topological entities to obtain significant improvements in computation time using existing hardware and software tools. Migrating solid modeling applications toward a distributed computing framework enables interconnection of many of the autonomous and geographically diverse software tools used in the modern manufacturing enterprise.

This has been implemented on a network of SUN workstations using the ACIS solid modeler and the NIH C++ class library; inter-processor communication is handled with TCP/IP- based network communication tools. %I Institute for Systems Research, University of Maryland, College Park %V ISR; TR 1994-82 %8 1994/// %G eng %U http://drum.lib.umd.edu//handle/1903/5552 %0 Journal Article %J Journal of Heredity %D 1994 %T An Atlas of Drosophila Genes: Sequences and Molecular Features %A Maroni,G. %A Mount, Stephen M. %A Cavener,DR %A Cavener,BA %A Sharp,PM %A Lloyd,AT %A Schaeffer,SW %B Journal of Heredity %V 85 %P 495 - 496 %8 1994/// %G eng %N 6 %0 Conference Paper %B , 1994 IEEE International Conference on Systems, Man, and Cybernetics, 1994. 'Humans, Information and Technology' %D 1994 %T On automatic filtering of multilingual texts %A Oard, Douglas %A DeClaris,N. %A Dorr, Bonnie J %A Faloutsos,C. %K automated performance evaluation %K Business communication %K Computer science %K Discussion forums %K Displays %K Educational institutions %K Floods %K Government %K Information filtering %K Information filters %K Information retrieval %K multilingual information retrieval %K multilingual text filtering %K natural languages %K performance evaluation %X An emerging requirement to sift through the increasing flood of text information has led to the rapid development of information filtering technology in the past five years. This study introduces novel approaches for filtering texts regardless of their source language. We begin with a brief description of related developments in text filtering and multilingual information retrieval. We then present three alternative approaches to selecting texts from a multilingual information stream which represent a logical evolution from existing techniques in related disciplines. Finally, a practical automated performance evaluation technique is proposed %B , 1994 IEEE International Conference on Systems, Man, and Cybernetics, 1994. 'Humans, Information and Technology' %I IEEE %V 2 %P 1645-1650 vol.2 - 1645-1650 vol.2 %8 1994/10/02/5 %@ 0-7803-2129-4 %G eng %R 10.1109/ICSMC.1994.400083 %0 Journal Article %J ACM SIGCHI Bulletin %D 1994 %T AVI '94: An International Workshop %A Shneiderman, Ben %A Badre,AI %A Santos,Paulo %X The fresh Mediterranean breezes of innovation enlivened the workshop conversations at Advanced Visual Interfaces '94 (Bari, Italy June 1--4, 1994). More than 100 participants met on the flower-gardened grounds of the Grande Albergo di Villa Romanazzi Carducci, whose modern hotel plus conference center complemented the beautifully restored old villa and the marble-lined swimming pool. Being together with active professionals from Europe and America for 4 days gave us an in-depth chance to learn about each other's professional and personal outlooks. The ancient Roman architecture and the traditional Italian surroundings reminded us that the impact of our work could also last for two thousand years and shape many people's daily lives. %B ACM SIGCHI Bulletin %V 26 %P 54 - 55 %8 1994/10// %@ 0736-6906 %G eng %U http://doi.acm.org/10.1145/191642.1047934 %N 4 %R 10.1145/191642.1047934 %0 Book %D 1993 %T Active perception. %A Aloimonos, J. %I Lawrence Erlbaum Associates, Inc %8 1993/// %G eng %0 Book %D 1993 %T Active Perception, Vol. I of Advances in Computer Vision series %A Aloimonos, J. %I Lawrence Erlbaum Associates %8 1993/// %G eng %0 Journal Article %J Active Perception %D 1993 %T Active vision revisited %A Aloimonos, J. 
%B Active Perception %8 1993/// %G eng %0 Conference Paper %B Acoustics, Speech, and Signal Processing, 1993. ICASSP-93., 1993 IEEE International Conference on %D 1993 %T An adaptive ESPRIT based on URV decomposition %A Liu,K. J.R %A O'Leary, Dianne P. %A Stewart, G.W. %A Wu,Y.-J.J. %K algorithm;array %K complexity;parameter %K decomposition;real-time %K ESPRIT %K estimation;real-time %K filters;array %K of %K processing;adaptive %K processing;computational %K savings;performance;rank-revealing %K sensors;computational %K signal %K systems; %K systems;time-varying %K URV %K URV-based %X ESPRIT is an algorithm for determining the fixed directions of arrival of a set of narrowband signals at an array of sensors. Its computational burden makes it unsuitable for real-time processing of signals with time-varying directions of arrival. The authors develop a new implementation of ESPRIT that has potential for real-time processing. It is based on a rank-revealing URV decomposition, rather than the eigendecomposition or singular value decomposition (SVD) used in previous ESPRIT algorithms. Its performance is demonstrated on simulated data representing both constant and time-varying signals. It is shown that the URV-based ESPRIT algorithm is effective for estimating time-varying directions-of-arrival at considerable computational savings over the SVD-based algorithm. %B Acoustics, Speech, and Signal Processing, 1993. ICASSP-93., 1993 IEEE International Conference on %V 4 %P 37 - 40 vol.4 %8 1993/04// %G eng %R 10.1109/ICASSP.1993.319588 %0 Journal Article %J IEEE Transactions on Knowledge and Data Engineering %D 1993 %T ADMS: a testbed for incremental access methods %A Roussopoulos, Nick %A Economou,N. %A Stamenas,A. %K Add-drop multiplexers %K ADMS %K advanced database management system %K client-server architecture %K commercial database management systems %K Computational modeling %K Database systems %K distributed databases %K heterogeneous DBMS %K incremental access methods %K incremental gateway %K Information retrieval %K interoperability %K join index %K large databases %K Navigation %K network operating systems %K Object oriented databases %K Object oriented modeling %K Query processing %K System testing %K very large databases %K view index %K Workstations %X ADMS is an advanced database management system developed to experiment with incremental access methods for large and distributed databases. It has been developed over the past eight years at the University of Maryland. The paper provides an overview of ADMS, and describes its capabilities and the performance attained by its incremental access methods. This paper also describes an enhanced client-server architecture that allows an incremental gateway access to multiple heterogeneous commercial database management systems. %B IEEE Transactions on Knowledge and Data Engineering %V 5 %P 762 - 774 %8 1993/10// %@ 1041-4347 %G eng %N 5 %R 10.1109/69.243508 %0 Journal Article %J Synthesis of Parallel Algorithms %D 1993 %T Advanced parallel prefix-sums, list ranking and connectivity %A Vishkin, Uzi %B Synthesis of Parallel Algorithms %P 215 - 257 %8 1993/// %G eng %0 Conference Paper %B Data Compression Conference, 1993. DCC '93. %D 1993 %T Algorithms for fast vector quantization %A Arya,S.
%A Mount, Dave %K algorithm; %K algorithms; %K directed %K fast %K graph %K graph; %K graphs; %K Neighborhood %K performance; %K problems; %K quantisation; %K quantization; %K running %K search %K time; %K tree %K vector %X This paper shows that if one is willing to relax the requirement of finding the true nearest neighbor, it is possible to achieve significant improvements in running time and at only a very small loss in the performance of the vector quantizer. The authors present three algorithms for nearest neighbor searching: standard and priority k -d tree search algorithms and a neighborhood graph search algorithm in which a directed graph is constructed for the point set and edges join neighboring points %B Data Compression Conference, 1993. DCC '93. %P 381 - 390 %8 1993/// %G eng %R 10.1109/DCC.1993.253111 %0 Journal Article %J Institute for Systems Research Technical Reports %D 1993 %T Alpha Slider: Searching Textual Lists with Sliders %A Osada,M. %A Liao,Holmes %A Shneiderman, Ben %K direct manipulation %K Human-computer interaction %K keyboard data entry %K sliders %K Systems Integration %K touchscreen %X AlphaSlider is a query interface that uses a direct manipulation slider to select words, phrases, or names from an existing list. This paper introduces a prototype of AlphaSlider, describes the design issues, reports on an experimental evaluation, and offers directions for further research. The experiment tested 24 subjects selecting items from lists of 40, 80, 160, and 320 entries. Mean selection times only doubled with the 8-fold increase in list length. Users quickly accommodated to this selection method. %B Institute for Systems Research Technical Reports %8 1993/// %G eng %U http://drum.lib.umd.edu/handle/1903/5397 %0 Journal Article %J Ninth Symposium on Human Interface %D 1993 %T Alphaslider: Development and Evaluation of Text Retrieval Methods Using Sliders %A Osada,M. %A Liao,H. %A Shneiderman, Ben %X AlphaSlider is a query interface that uses a direct manipulation slider to select words, phrases, or names from an existing list. This paper introduces a prototype of AlphaSlider, describes the design issues, reports on an experimental evaluation, and offers directions for further research. The experiment tested 24 subjects selecting items from lists of 40, 80, 160, and 320 entries. Mean selection times only doubled with the 8-fold increase in list length. Users quickly accommodated to this selection method. %B Ninth Symposium on Human Interface %P 91 - 94 %8 1993/// %G eng %0 Conference Paper %B Functional Programming, Concurrency, Simulation and Automated Reasoning %D 1993 %T Analyzing concurrent systems using the Concurrency Workbench %A Cleaveland, Rance %B Functional Programming, Concurrency, Simulation and Automated Reasoning %P 129 - 144 %8 1993/// %G eng %0 Conference Paper %B Scalable Parallel Libraries Conference, 1993., Proceedings of the %D 1993 %T Applying the CHAOS/PARTI library to irregular problems in computational chemistry and computational aerodynamics %A Das,R. %A Hwang,Y. S %A Uysal, M. %A Saltz, J. 
%A Sussman, Alan %B Scalable Parallel Libraries Conference, 1993., Proceedings of the %P 45 - 56 %8 1993/// %G eng %0 Conference Paper %B Proceedings of the fourth annual ACM-SIAM Symposium on Discrete algorithms %D 1993 %T Approximate nearest neighbor queries in fixed dimensions %A Arya,Sunil %A Mount, Dave %B Proceedings of the fourth annual ACM-SIAM Symposium on Discrete algorithms %S SODA '93 %I Society for Industrial and Applied Mathematics %C Philadelphia, PA, USA %P 271 - 280 %8 1993/// %@ 0-89871-313-7 %G eng %U http://dl.acm.org/citation.cfm?id=313559.313768 %0 Journal Article %J Proceedings of 4th Annual ACM-SIAM Symposium on Discrete Algorithms (SODA’93) %D 1993 %T Approximate nearest neighbor searching %A Arya,S. %A Mount, Dave %B Proceedings of 4th Annual ACM-SIAM Symposium on Discrete Algorithms (SODA’93) %P 271 - 280 %8 1993/// %G eng %0 Conference Paper %B Parallel Processing Symposium, 1993., Proceedings of Seventh International %D 1993 %T Approximate parallel prefix computation and its applications %A Goodrich,M. T %A Matias,Y. %A Vishkin, Uzi %B Parallel Processing Symposium, 1993., Proceedings of Seventh International %P 318 - 325 %8 1993/// %G eng %0 Journal Article %J Journal of Algorithms %D 1993 %T Approximation Algorithms for Graph Augmentation %A Khuller, Samir %A Thurimella,R. %X The problem of increasing both edge and vertex connectivity of a graph at an optimal cost is studied. Since the general problem is NP-hard, we focus on efficient approximation schemes that come within a constant factor from the optimal. Previous algorithms either do not take edge costs into consideration, or they run slower than our algorithm. Our algorithm takes as input an undirected graph G0 = (V, E0) on n vertices, that is not necessarily connected, and a set Feasible of m weighted edges on V, and outputs a subset Aug of edges which when added to G0 make it two-connected. The weight of Aug, when G0 is initially connected, is no more than twice the weight of the least weight subset of edges of Feasible that increases the connectivity to two. The running time of our algorithm is O(m + n log n). As a consequence of our results, we can find an approximation to the least-weight two-connected spanning subgraph of a two-connected weighted graph. %B Journal of Algorithms %V 14 %P 214 - 225 %8 1993/03// %@ 0196-6774 %G eng %U http://www.sciencedirect.com/science/article/pii/S0196677483710102 %N 2 %R 10.1006/jagm.1993.1010 %0 Journal Article %J SPIE Vision Geometry II %D 1993 %T On the area of overlap of translated polygons %A Mount, Dave %A Silverman,R. %A Wu,A. %X Given two simple polygons P and Q in the plane and a translation vector t ∈ R^2, the area-of-overlap function of P and Q is the function Ar(t) = Area(P ∩ (t + Q)), where t + Q denotes Q translated by t. This function has a number of applications in areas such as motion planning and object recognition. We present a number of mathematical results regarding this function. We also provide efficient algorithms for computing a representation of this function, and for tracing contour curves of constant area-of-overlap. %B SPIE Vision Geometry II %V 2060 %P 254 - 264 %8 1993/// %G eng %0 Conference Paper %B Computer Vision—ECCV'92 %D 1992 %T Active egomotion estimation: a qualitative approach %A Aloimonos, J. %A Duric, Z.
%B Computer Vision—ECCV'92 %P 497 - 510 %8 1992/// %G eng %0 Conference Paper %B Proceedings of the SIGCHI conference on Human factors in computing systems %D 1992 %T Anthropomorphism: from Eliza to Terminator 2 %A Don,Abbe %A Brennan,Susan %A Laurel,Brenda %A Shneiderman, Ben %B Proceedings of the SIGCHI conference on Human factors in computing systems %S CHI '92 %I ACM %C New York, NY, USA %P 67 - 70 %8 1992/// %@ 0-89791-513-5 %G eng %U http://doi.acm.org/10.1145/142750.142760 %R 10.1145/142750.142760 %0 Journal Article %J Journal of Systems and Software %D 1992 %T An application of decision theory for the evaluation of software prototypes %A Cárdenas,S. R %A Tian,J. %A Zelkowitz, Marvin V %B Journal of Systems and Software %V 19 %P 27 - 39 %8 1992/// %G eng %N 1 %0 Conference Paper %B , First International Workshop on Interoperability in Multidatabase Systems, 1991. IMS '91. Proceedings %D 1991 %T An algebra and calculus for relational multidatabase systems %A Grant,J. %A Litwin,W. %A Roussopoulos, Nick %A Sellis,T. %K Algebra %K autonomous databases %K Calculus %K Computer networks %K Computer science %K Data models %K Data structures %K Database systems %K database theory %K distributed databases %K Military computing %K multidatabase manipulation language %K multidatabase system %K multirelational algebra %K query languages %K relational algebra %K Relational databases %K Spatial databases %K theoretical foundation %X With the existence of many autonomous databases widely accessible through computer networks, users will require the capability to jointly manipulate data in different databases. A multidatabase system provides such a capability through a multidatabase manipulation language. The authors propose a theoretical foundation for such languages by presenting a multirelational algebra and calculus based on the relational algebra and calculus. The proposal is illustrated by various queries on an example multidatabase %B , First International Workshop on Interoperability in Multidatabase Systems, 1991. IMS '91. Proceedings %I IEEE %P 118 - 124 %8 1991/04// %@ 0-8186-2205-9 %G eng %R 10.1109/IMS.1991.153694 %0 Journal Article %J Information and Computation %D 1991 %T Approximate parallel scheduling. II. Applications to logarithmic-time optimal parallel graph algorithms %A Cole,R. %A Vishkin, Uzi %B Information and Computation %V 92 %P 1 - 47 %8 1991/// %G eng %N 1 %0 Book %D 1990 %T Abductive Inference Models for Diagnostic Problem-Solving %A Yun,P. %A Reggia, James A. %I Springer-Verlag, New York %8 1990/// %G eng %0 Journal Article %J Hypertext: concepts, systems and applications: proceedings of the First European Conference on Hypertext, INRIA, France, November 1990 %D 1990 %T The advantages of hypertext for large information spaces; where are the big systems? %A Shneiderman, Ben %A Deter,R. %B Hypertext: concepts, systems and applications: proceedings of the First European Conference on Hypertext, INRIA, France, November 1990 %P 343 - 343 %8 1990/// %G eng %N 5 %0 Conference Paper %B Workshop on Computer-Aided Verification %D 1990 %T On automatically distinguishing inequivalent processes %A Cleaveland, Rance %B Workshop on Computer-Aided Verification %8 1990/// %G eng %0 Journal Article %J SIAM Journal on Scientific and Statistical ComputingSIAM J. Sci. and Stat. Comput. %D 1989 %T Approximate Schur Complement Preconditioners on Serial and Parallel Computers %A Elman, Howard %B SIAM Journal on Scientific and Statistical ComputingSIAM J. Sci. and Stat. Comput. 
%V 10 %P 581 - 581 %8 1989/// %@ 01965204 %G eng %U http://link.aip.org/link/SJOCE3/v10/i3/p581/s1&Agg=doi %N 3 %R 10.1137/0910037 %0 Conference Paper %B Proceedings of the 4th International Symposium on Methodologies for Intelligent Systems %D 1989 %T Assessing others' knowledge and ignorance %A Kraus,S. %A Perlis, Don %B Proceedings of the 4th International Symposium on Methodologies for Intelligent Systems %8 1989/// %G eng %0 Journal Article %J Electronic Publishing %D 1989 %T Automatically transforming regularly structured linear documents into hypertext. %A Furuta,R. %A Plaisant, Catherine %A Shneiderman, Ben %X Fully automatic conversion of a paper-based document into hypertext can be achieved in many cases if the original document is naturally partitioned into a collection of small-sized pieces that are unambiguously and consistently structured. We describe the methodology that we have used successfully to design and implement several straightforward conversions from the original document’s machine-readable markup. %B Electronic Publishing %V 2 %P 211 - 229 %8 1989/// %G eng %N 4 %0 Journal Article %J Algorithmica %D 1988 %T The accelerated centroid decomposition technique for optimal parallel tree evaluation in logarithmic time %A Cole,R. %A Vishkin, Uzi %B Algorithmica %V 3 %P 329 - 346 %8 1988/// %G eng %N 1 %0 Journal Article %J International Journal of Computer Vision %D 1988 %T Active vision %A Aloimonos, J. %A Weiss, I. %A Bandyopadhyay, A. %B International Journal of Computer Vision %V 1 %P 333 - 356 %8 1988/// %G eng %N 4 %0 Report %D 1988 %T The American Voice and Robotics "Guardian" System: A Case Study in User Interface Usability Evaluation %A Wallace,D.F. %A Norman,K. L %A Plaisant, Catherine %X American Voice and Robotics (AVR), in conjunction with the Maryland Industrial Partnerships (MIPS) program, contracted the Human-Computer Interaction Laboratory (HCIL) to evaluate the user interface of AVR's "Guardian" home automation system. Among their goals for the system were: Ease of use and learning of the system (intuitiveness), Aesthetic appeal, Unintimidating, Impressive, High Functionality, Fast and accurate input of user action, Clear feedback, Forgiving to errors, and Fun to use. Researchers found that users were initially impressed with the system, but that they were not fully satisfied with it. Specific initial recommendations, theoretical and methodological concerns, procedures, results, and direction of future research will be discussed. %I University of Maryland %8 1988/// %G eng %0 Journal Article %J International journal of man-machine studies %D 1988 %T Analysis of competition-based spreading activation in connectionist models %A Wang,P. Y %A Seidman,S. B %A Reggia, James A. %B International journal of man-machine studies %V 28 %P 77 - 97 %8 1988/// %G eng %N 1 %0 Journal Article %J SIAM Journal on Computing %D 1988 %T Approximate parallel scheduling. Part I: The basic technique with applications to optimal parallel list ranking in logarithmic time %A Cole,R. %A Vishkin, Uzi %B SIAM Journal on Computing %V 17 %P 128 - 128 %8 1988/// %G eng %0 Journal Article %J MD computing: computers in medical practice %D 1988 %T Artificial neural systems in medical science and practice. %A Reggia, James A. %B MD computing: computers in medical practice %V 5 %P 4 - 4 %8 1988/// %G eng %N 3 %0 Journal Article %J Artificial Intelligence %D 1988 %T Autocircumscription %A Perlis, Don %X Reasoning can be used to select among various possible interpretations of events.
But how are these possibilities determined? We isolate two key technical features of circumscription (consistency and minimization), and use the first as the basis for a reformulation of the circumscription principle in a way related to possibility, self-knowledge, and negative introspection. The second (minimization) then can be separately expressed on its own. Conceptual clarity and a kind of validity are results of this separation, as well as a computational means to determine (sometimes) when a wff is not among a reasoner's conclusions. %B Artificial Intelligence %V 36 %P 223 - 236 %8 1988/09// %@ 0004-3702 %G eng %U http://www.sciencedirect.com/science/article/pii/0004370288900033 %N 2 %R 16/0004-3702(88)90003-3 %0 Report %D 1987 %T ALEX-an Alexical Programming Language %A Kozen,D. %A Teitelbaum,T. %A Chen,W. Z %A Field,J. H %A Pugh, William %A Vander Zanden,B. T %X ALEX is an experimental language for high-level parallel programming. It is a testbed for exploring various non-traditional ways of expressing algorithmic ideas, making extensive use of high-resolution color graphics. The language itself is not a programming language in the traditional sense, since there is no lexical syntax. This paper discusses the basic design of the ALEX user interface. %I Cornell University %V TR87-835 %8 1987/// %G eng %U http://hdl.handle.net/1813/6675 %0 Conference Paper %B Proceedings of the 15th annual conference on Computer Science %D 1987 %T Algorithms for covering and packing and applications to CAD/CAM (abstract only): preliminary results %A Mount, Dave %A Silverman,Ruth %X Computational geometry deals with the computational complexity of geometric problems within the framework of the analysis of algorithms. Numerous applications have been found to computer graphics, computer-aided design, pattern recognition and robotics.Among the geometric problems are bin packing problems. C.A. Roger “Packing and Covering”, Cambridge University Press (1964) studied these problems. Many of these are hard problems, and very few have been solved. Some packing problems are the restriction of a very interesting problem in CAM (computer-aided manufacture). In the manufacture of clothing, certain parts, such as sleeves, are laid out as repeated polygonal copies and combinations on a bolt of cloth. This pattern layout is traditionally (and currently) done by hand. It's a candidate for automation. We propose some special cases of this problem whose solution can be extended to practical use. We are working on several geometric questions involving packing translations of objects. Classical results in mathematics demonstrate equivalence between certain packing problems and finding optimal enclosed figures. J. O'Rourke et al [“An optimal algorithm for finding minimal enclosing triangles”, J. of Algorithms 7, 258-264 (1986)], as well as V. Klee and M. Laskowski [“Finding the smallest triangles containing a given convex polygon”, J. of Algorithms 6, 359-375 (1985)] have produced algorithms for finding a smallest area triangle containing (or enclosing) a given polygon. Smallest area convex polygons of a certain type, containing a given set, can be viewed as larger pieces of fabric containing pattern to be cut out. We have an optimal algorithm for finding largest enclosed polygons for a given set. We also have preliminary results for smallest enclosing polygons. These results have theoretical interest, and have applications to robotics and collision avoidance, as well as to manufacture. 
%B Proceedings of the 15th annual conference on Computer Science %S CSC '87 %I ACM %C New York, NY, USA %P 439– - 439– %8 1987/// %@ 0-89791-218-7 %G eng %U http://doi.acm.org/10.1145/322917.323100 %R 10.1145/322917.323100 %0 Journal Article %J ACM SIGMOD Record %D 1987 %T Analysis of object oriented spatial access methods %A Faloutsos,C. %A Sellis,T. %A Roussopoulos, Nick %B ACM SIGMOD Record %V 16 %P 426 - 439 %8 1987/// %G eng %N 3 %0 Report %D 1986 %T The accelerated centroid decomposition technique for optimal parallel tree evaluation in logarithmic time, Ultracomputer Note-108 %A Cole,R. %A Vishkin, Uzi %I TR-242, Department of Computer Science, Courant Institute NYU %8 1986/// %G eng %0 Journal Article %J COMP. SYST. SCI. ENG. %D 1986 %T Algorithm for concurrency control in replicated databases. %A Son,S. H %A Agrawala, Ashok K. %X A replicated database system is a distributed database system in which some data objects are stored redundantly at multiple sites to improve the reliability of the system. Without proper control mechanisms, the consistency of a replicated database system could be violated. A new scheme to increase the reliability as well as the degree of concurrency is described. It allows transactions to operate on a data object so long as one or more token copies are available. The scheme also exploits the fact that, for recovery reasons, there are two values for one data object. It is proved that transactions executed according to this scheme preserve consistency. %B COMP. SYST. SCI. ENG. %V 1 %P 75 - 81 %8 1986/// %G eng %N 2 %0 Journal Article %J SIGSOFT Softw. Eng. Notes %D 1986 %T Applying direct manipulation concepts: direct manipulation dik operating system (DMDOS) %A Iseki,Osamu %A Shneiderman, Ben %X Software engineers are often called upon to design user interfaces, but strategies and guidelines are only beginning to emerge. Shneiderman (1983) introduced the term "Direct Manipulation" to describe user interfaces which have:1) continuous representation of the objects of interest.2) physical actions (movement and selection by mouse, joystick, touch screen, etc.) or labeled button presses instead of complex Syntax.3) rapid, incremental, reversible operations whose impact on the object of interest is immediately visible.4) layered or spiral approach to learning that permits usage with minimal knowledge.The concepts of direct manipulation has been applied in some distinctive systems such as XEROX STAR and APPLE Macintosh, and many application software products such as spread sheets, word processors, drawing tools, desk-top managers, etc.However, the basic software of personal computers, the operating system, is still often based on command language concepts. This paper describes DMDOS (Direct Manipulation Disk Operating System), that we designed by applying the concepts of direct manipulation. to MS-DOS on the IBM PC. %B SIGSOFT Softw. Eng. Notes %V 11 %P 22 - 26 %8 1986/04// %@ 0163-5948 %G eng %U http://doi.acm.org/10.1145/382248.382815 %N 2 %R 10.1145/382248.382815 %0 Conference Paper %B 27th Annual Symposium on Foundations of Computer Science %D 1986 %T Approximate and exact parallel scheduling with applications to list, tree and graph problems %A Cole,R. %A Vishkin, Uzi %B 27th Annual Symposium on Foundations of Computer Science %P 478 - 491 %8 1986/// %G eng %0 Conference Paper %B Proceedings Symposium on Foundations of Computer Science %D 1986 %T Approximate scheduling, exact scheduling, and applications to parallel algorithms %A Cole,R. 
%A Vishkin, Uzi %B Proceedings Symposium on Foundations of Computer Science %P 478 - 491 %8 1986/// %G eng %0 Journal Article %J Linear Algebra and its Applications %D 1986 %T Assignment and scheduling in parallel matrix factorization %A O'Leary, Dianne P. %A Stewart, G.W. %X We consider the problem of factoring a dense n×n matrix on a network consisting of P MIMD processors, with no shared memory, when the network is smaller than the number of elements in the matrix (P < n^2). The specific example analyzed is a computational network that arises in computing the LU, QR, or Cholesky factorizations. We prove that if the nodes of the network are evenly distributed among processors and if computations are scheduled by a round-robin or a least-recently-executed scheduling algorithm, then optimal order of speedup is achieved. However, such speedup is not necessarily achieved for other scheduling algorithms or if the computation for the nodes is inappropriately split across processors, and we give examples of these phenomena. Lower bounds on execution time for the algorithm are established for two important node-assignment strategies. %B Linear Algebra and its Applications %V 77 %P 275 - 299 %8 1986/05// %@ 0024-3795 %G eng %U http://www.sciencedirect.com/science/article/pii/0024379586901722 %R 10.1016/0024-3795(86)90172-2 %0 Conference Paper %B Proceedings of the Expert Systems in Government Symposium %D 1985 %T Abductive inference %A Reggia, James A. %B Proceedings of the Expert Systems in Government Symposium %P 484 - 489 %8 1985/// %G eng %0 Journal Article %J IEEE Transactions on Biomedical Engineering %D 1985 %T Answer justification in abductive expert systems for diagnostic problem solving %A Reggia, James A. %A Perricone,B. %A Nau, Dana S. %A Peng,Y. %B IEEE Transactions on Biomedical Engineering %8 1985/// %G eng %0 Journal Article %J Biomedical Engineering, IEEE Transactions on %D 1985 %T Answer Justification in Diagnostic Expert Systems-Part II: Supporting Plausible Justifications %A Reggia, James A. %A Perricone,B. T. %A Nau, Dana S. %A Peng,Y. %B Biomedical Engineering, IEEE Transactions on %P 268 - 272 %8 1985/// %G eng %N 4 %0 Conference Paper %B Workshop on Nonmonotonic Reasoning %D 1984 %T An abductive non-monotonic logic %A Reggia, James A. %A Nau, Dana S. %B Workshop on Nonmonotonic Reasoning %8 1984/// %G eng %0 Conference Paper %B 7th International Conference on Automated Deduction, Napa, California, USA, May 14-16, 1984: proceedings %D 1984 %T Applications of Protected Circumscription %A Minker,J. %A Perlis, Don %B 7th International Conference on Automated Deduction, Napa, California, USA, May 14-16, 1984: proceedings %V 170 %P 414 - 414 %8 1984/// %G eng %0 Book %D 1983 %T Achievements and Prospects with Regard to Energy Recycling in Textile Finishing Shops %A Pop, Mihai %A Romas, D. %A Romania %I UN %8 1983 %G eng %0 Journal Article %J IEEE Transactions on Systems, Man, and Cybernetics %D 1983 %T Analysis of Relaxation Processes: the Two Node, Two Label Case %A O'Leary, Dianne P. %A Peleg,Shmuel %B IEEE Transactions on Systems, Man, and Cybernetics %V SMC-13 %P 618 - 623 %8 1983/// %G eng %0 Journal Article %J ACM Trans. Database Syst. %D 1982 %T An architecture for automatic relational database system conversion %A Shneiderman, Ben %A Thomas,Glenn %K automatic conversion %K Database systems %K relational model %K transformations %X Changes in requirements for database systems necessitate schema restructuring, database translation, and application or query program conversion.
An alternative to the lengthy manual revision process is proposed by offering a set of 15 transformations keyed to the relational model of data and the relational algebra. Motivations, examples, and detailed descriptions are provided. %B ACM Trans. Database Syst. %V 7 %P 235 - 257 %8 1982/06// %@ 0362-5915 %G eng %U http://doi.acm.org/10.1145/319702.319724 %N 2 %R 10.1145/319702.319724 %0 Conference Paper %B Proceedings of the June 7-10, 1982, national computer conference %D 1982 %T Automatic database system conversion: schema revision, data translation, and source-to-source program transformation %A Shneiderman, Ben %A Thomas,Glenn %X Changing data requirements present database administrators with a difficult problem: the revision of the schema, the translation of the stored database, and the conversion of the numerous application programs. This paper describes an automatic database system conversion facility which provides one approach for coping with this problem. The Pure Definition Language and the Pure Manipulation Language have been designed to facilitate the conversions specified in the Pure Transformation Language. Two conversions and their effect on retrievals are demonstrated. %B Proceedings of the June 7-10, 1982, national computer conference %S AFIPS '82 %I ACM %C New York, NY, USA %P 579 - 587 %8 1982/// %@ 0-88283-035-X %G eng %U http://doi.acm.org/10.1145/1500774.1500849 %R 10.1145/1500774.1500849 %0 Journal Article %J ACM Transactions on Mathematical Software %D 1981 %T Algorithm 572: Solution of the Helmholtz Equation for the Dirichlet Problem on General Bounded Three-Dimensional Regions [D3] %A O'Leary, Dianne P. %A Widlund,Olof %B ACM Transactions on Mathematical Software %V 7 %P 239 - 246 %8 1981/06// %@ 0098-3500 %G eng %U http://doi.acm.org/10.1145/355945.355954 %N 2 %R 10.1145/355945.355954 %0 Journal Article %J Advances in computer programming management %D 1980 %T Advances in software engineering resource estimation %A Zelkowitz, Marvin V %B Advances in computer programming management %V 1 %P 206 - 225 %8 1980/// %G eng %0 Report %D 1980 %T An Algorithm for Mutual Exclusion in Computer Networks. %A Agrawala, Ashok K. %A Ricart, G. %X An algorithm is proposed which creates mutual exclusion in a computer network whose nodes can communicate only by messages and do not share memory. The algorithm sends only 2*(N-1) Messages, where N is the number of nodes in the network, per critical section invocation. This number of messages is a minimum if parallel, distributed, symmetric control is used; hence, the algorithm is optimal in this minimal under some general assumptions. Like Lamport's 'bakery algorithm,' unbounded sequence numbers are used to provide first-come first-served priority into the critical section. It is shown that the number can be contained in a fixed amount of memory by storing it as the residue of a modulus. The number of messages required to implement the exclusion can be reduced by using sequential node-by-node processing, by using broadcast message techniques or by sending information through timing channels. The readers and writers problem is solved by a simple modification of the algorithm. The modifications necessary to make the algorithm robust are described. %I Department of Computer Science, University of Maryland, College Park %8 1980/// %G eng %0 Journal Article %J Information Systems %D 1980 %T Approximate pattern matching in a pattern database system %A Davis, Larry S. 
%A Roussopoulos, Nick %X This work is concerned with the organization of a large database of binary pictures (normalized for size, rotation and position) and with the efficient "inexact match" of an input pattern against the whole database. Current techniques in pattern analysis use matching algorithms without any regard to the global organization of the storage representations of the models they are to match. Consequently, the algorithms are only practical for small databases. This paper discusses the design of a pattern database system and the economy that it provides for the matching problem. The database organization is based on the quad-tree representation of binary patterns. Given an arbitrary decomposition D (or partition) of a pattern P and an arbitrary function f on the pattern, we repeatedly apply f on D(P), D(D(P)), ... to obtain higher and higher levels of abstraction f(D(P)), f(D(D(P))), ... of the pattern. The computed values obtained after the jth application of f are used to label the ith level of the pyramid. The specific representation used in this paper is called the sum-quad-tree, in which each level of the tree stores the sum of the labels of its sons. The lowest level of the sum-quad-tree corresponds to the individual pixels and is the nth level (i.e. node m at level n implies v(m) = 0 or v(m) = 1). Nodes at the jth level of the sum-quad-tree correspond to sums of 2^(n-j) × 2^(n-j) picture points, so that the 0th level contains the number of 1's in the pattern. The pyramid representation is used as a hierarchical (n-level) indexing scheme of the database. The advantage of this organization is that the matching algorithms presented reject most of the patterns in the database by utilizing the relatively small in size index tables and thus avoid the overhead of unnecessary CPU time and operation between main memory and secondary storage. %B Information Systems %V 5 %P 107 - 119 %8 1980/// %@ 0306-4379 %G eng %U http://www.sciencedirect.com/science/article/pii/0306437980900022 %N 2 %R 10.1016/0306-4379(80)90002-2 %0 Journal Article %J Nature %D 1980 %T Are snRNPs involved in splicing? %A Lerner,Michael R. %A Boyle,John A. %A Mount, Stephen M. %A Wolin,Sandra L. %A Steitz,Joan A. %X Discrete, stable small RNA molecules are found in the nuclei of cells from a wide variety of eukaryotic organisms. Many of these small nuclear RNA (snRNA) species, which range in size from about 90 to 220 nucleotides, have been well-characterised biochemically, and some sequenced. However, their function has remained obscure. The most abundant snRNA species exist as a closely related set of RNA–protein complexes called small nuclear ribonucleoproteins (snRNPs). snRNPs are the antigens recognised by antibodies from some patients with lupus erythematosus (LE), an autoimmune rheumatic disease. Anti-RNP antibodies from lupus sera selectively precipitate snRNP species containing U1a and U1b RNAs from mouse Ehrlich ascites cell nuclei, whereas anti-Sm antibodies bind these snRNPs and four others containing U2, U4, U5 and U6 RNAs. Both antibody systems precipitate the same seven prominent nuclear proteins (molecular weight 12,000–32,000). All molecules of the snRNAs U1, U2, U4, U5 and U6 appear to exist in the form of antigenic snRNPs. The particles sediment at about 10S and each probably contains a single snRNA molecule. Indirect immunofluorescence studies using anti-RNP and anti-Sm sera confirm the nuclear (but non-nucleolar) location of the antigenic snRNPs.
Here we present several lines of evidence that suggest a direct involvement of snRNPs in the splicing of hnRNA. Most intriguing is the observation that the nucleotide sequence at the 5′ end of U1 RNA exhibits extensive complementarity to those across splice junctions in hnRNA molecules. %B Nature %V 283 %P 220 - 224 %8 1980/01/10/ %G eng %U http://www.nature.com/nature/journal/v283/n5743/abs/283220a0.html %N 5743 %R 10.1038/283220a0 %0 Journal Article %J IEEE Computer Software and Applications Conference %D 1980 %T Automatic database system conversion: A transformation language approach to sub-schema implementation %A Thomas,G. %A Shneiderman, Ben %B IEEE Computer Software and Applications Conference %P 80 - 88 %8 1980/// %G eng %0 Conference Paper %B Proceedings of the 3rd international conference on Software engineering %D 1978 %T Analyzing medium-scale software development %A Basili, Victor R. %A Zelkowitz, Marvin V %X The collection and analysis of data from programming projects is necessary for the appropriate evaluation of software engineering methodologies. Towards this end, the Software Engineering Laboratory was organized between the University of Maryland and NASA Goddard Space Flight Center. This paper describes the structure of the Laboratory and provides some data on project evaluation from some of the early projects that have been monitored. The analysis relates to resource forecasting using a model of the project life cycle based upon the Rayleigh equation and to error rates applying ideas developed by Belady and Lehman. %B Proceedings of the 3rd international conference on Software engineering %S ICSE '78 %I IEEE Press %C Piscataway, NJ, USA %P 116 - 123 %8 1978/// %G eng %U http://dl.acm.org/citation.cfm?id=800099.803200 %0 Journal Article %J The Computer Journal %D 1976 %T An application of compiler simulation at the source language level %A Perlis, Don %B The Computer Journal %V 19 %P 90 - 90 %8 1976/// %G eng %N 1 %0 Conference Paper %B Proceedings of the 2nd international conference on Software engineering %D 1976 %T Automatic program analysis and evaluation %A Zelkowitz, Marvin V %K ERRORS %K Evaluation %K Program complexity %K Program measurement %K Static and dynamic analysis %X There is currently considerable interest in the computing community in the evaluation of computer programming. However, in order to objectively evaluate such concepts, it is necessary to undertake a thorough evaluation of the programming process itself. Most previous studies of this type have analyzed, by hand usually, a few instances of programs. This has led to some general conjectures; however, the amount of information that must be processed precludes any large scale analysis. In order to avoid this problem, an automatic data collection facility has been implemented as part of a PL/1 compiler at the University of Maryland. This system automatically collects information on each program that has been compiled - at almost no additional cost to the user of the compiler. This paper will describe the system and will evaluate some of the characteristics of some of the 25,000 programs that have been run since July, 1975. %B Proceedings of the 2nd international conference on Software engineering %S ICSE '76 %I IEEE Computer Society Press %C Los Alamitos, CA, USA %P 158 - 163 %8 1976/// %G eng %U http://dl.acm.org/citation.cfm?id=800253.807668