%0 Generic %D 2018 %T High-Throughput DNA Sequencing to Profile Microbial Water Quality of Potable Reuse %A Menu B. Leddy %A Megan H. Plumlee %A Rose S. Kantor %A Kara L. Nelson %A Scott E. Miller %A Lauren C. Kennedy %A Blake W. Stamps %A John R. Spear %A Nur A. Hasan %A Rita R Colwell %G eng %U https://www.wateronline.com/doc/high-throughput-dna-sequencing-to-profile-microbial-water-quality-of-potable-reuse-0001 %0 Journal Article %J Advances in Water Resources %D 2017 %T Hydroclimatic sustainability assessment of changing climate on cholera in the Ganges-Brahmaputra basin %A Nasr-Azadani, Fariborz %A Khan, Rakibul %A Rahimikollu, Javad %A Unnikrishnan, Avinash %A Akanda, Ali %A Alam, Munirul %A Huq, Anwar %A Jutla, Antarpreet %A Rita R Colwell %X The association of cholera and climate has been extensively documented. However, determining the effects of changing climate on the occurrence of disease remains a challenge. Bimodal peaks of cholera in the Bengal Delta are hypothesized to be linked to asymmetric flow of the Ganges and Brahmaputra rivers. Spring cholera is related to intrusion of bacteria-laden coastal seawater during low flow seasons, while autumn cholera results from cross-contamination of water resources when high flows in the rivers cause massive inundation. Coarse resolution of General Circulation Model (GCM) output (usually at 100–300 km) cannot be used to evaluate variability at the local scale (10–20 km), hence the goal of this study was to develop a framework that could be used to understand impacts of climate change on occurrence of cholera. Instead of a traditional approach of downscaling precipitation, streamflow of the two rivers was directly linked to GCM outputs, achieving reasonable accuracy (R^2 = 0.89 for the Ganges and R^2 = 0.91 for the Brahmaputra) using machine learning algorithms (Support Vector Regression-Particle Swarm Optimization). Copula methods were used to determine probabilistic risks of cholera under several discharge conditions. Key results, using model outputs from ECHAM5, GFDL, and HadCM3 for A1B and A2 scenarios, suggest that the combined low flow of the two rivers may increase in the future, with high flows increasing for the first half of this century, decreasing thereafter. Spring and autumn cholera, assuming societal conditions remain constant (e.g., at the current rate), may decrease. However, significant shifts were noted in the magnitude of river discharge, suggesting that cholera dynamics of the delta may well demonstrate an uncertain, yet predictable pattern of occurrence over the next century. %B Advances in Water Resources %V 108 %P 332 - 344 %8 2017/10// %G eng %U https://linkinghub.elsevier.com/retrieve/pii/S030917081630728X %! Advances in Water Resources %R 10.1016/j.advwatres.2016.11.018 %0 Journal Article %J mBio %D 2015 %T Hybrid Vibrio cholerae El Tor Lacking SXT Identified as the Cause of a Cholera Outbreak in the Philippines %A Klinzing, David C. %A Choi, Seon Young %A Hasan, Nur A. %A Matias, Ronald R. %A Tayag, Enrique %A Geronimo, Josefina %A Skowronski, Evan %A Rashed, Shah M. %A Kawashima, Kent %A Rosenzweig, C. Nicole %A Gibbons, Henry S. %A Torres, Brian C. %A Liles, Veni %A Alfon, Alicia C. %A Juan, Maria Luisa %A Natividad, Filipinas F. %A Cebula, Thomas A. %A Rita R Colwell %X Cholera continues to be a global threat, with high rates of morbidity and mortality. In 2011, a cholera outbreak occurred in Palawan, Philippines, affecting more than 500 people, and 20 individuals died.
Vibrio cholerae O1 was confirmed as the etiological agent. Source attribution is critical in cholera outbreaks for proper management of the disease, as well as to control spread. In this study, three V. cholerae O1 isolates from a Philippines cholera outbreak were sequenced and their genomes analyzed to determine phylogenetic relatedness to V. cholerae O1 isolates from recent outbreaks of cholera elsewhere. The Philippines V. cholerae O1 isolates were determined to be V. cholerae O1 hybrid El Tor belonging to the seventh-pandemic clade. They clustered tightly, forming a monophyletic clade closely related to V. cholerae O1 hybrid El Tor from Asia and Africa. The isolates possess a unique multilocus variable-number tandem repeat analysis (MLVA) genotype (12-7-9-18-25 and 12-7-10-14-21) and lack SXT. In addition, they possess a novel 15-kb genomic island (GI-119) containing a predicted type I restriction-modification system. The CTXΦ-RS1 array of the Philippines isolates was similar to that of V. cholerae O1 MG116926, a hybrid El Tor strain isolated in Bangladesh in 1991. Overall, the data indicate that the Philippines V. cholerae O1 isolates are unique, differing from recent V. cholerae O1 isolates from Asia, Africa, and Haiti. Furthermore, the results of this study support the hypothesis that the Philippines isolates of V. cholerae O1 are indigenous and exist locally in the aquatic ecosystem of the Philippines. %B mBio %8 2015/05// %G eng %U http://mbio.asm.org/lookup/doi/10.1128/mBio.00047-15 %N 2 %! mBio %R 10.1128/mBio.00047-15 %0 Report %D 2012 %T A Hybrid System for Error Detection in Electronic Dictionaries %A Zajic, David %A David Doermann %A Bloodgood,Michael %A Rodrigues,Paul %A Ye,Peng %A Zotkina,Elena %X A progress report on CASL’s research on error detection in electronic dictionaries, including a hybrid system, application and evaluation on a second dictionary and a graphical user interface. %B Technical Reports of the Center for the Advanced Study of Language %0 Journal Article %J Briefings in Bioinformatics %D 2011 %T Hawkeye and AMOS: Visualizing and Assessing the Quality of Genome Assemblies %A Schatz,Michael C %A Phillippy,Adam M %A Sommer,Daniel D %A Delcher,Arthur L. %A Puiu,Daniela %A Narzisi,Giuseppe %A Salzberg,Steven L. %A Pop, Mihai %K assembly forensics %K DNA Sequencing %K genome assembly %K visual analytics %X Since its launch in 2004, the open-source AMOS project has released several innovative DNA sequence analysis applications including: Hawkeye, a visual analytics tool for inspecting the structure of genome assemblies; the Assembly Forensics and FRCurve pipelines for systematically evaluating the quality of a genome assembly; and AMOScmp, the first comparative genome assembler. These applications have been used to assemble and analyze dozens of genomes ranging in complexity from simple microbial species through mammalian genomes. Recent efforts have been focused on enhancing support for new data characteristics brought on by second- and now third-generation sequencing. This review describes the major components of AMOS in light of these challenges, with an emphasis on methods for assessing assembly quality and the visual analytics capabilities of Hawkeye. These interactive graphical aspects are essential for navigating and understanding the complexities of a genome assembly, from the overall genome structure down to individual bases. Hawkeye and AMOS are available open source at http://amos.sourceforge.net.
%B Briefings in Bioinformatics %8 2011/12/23/ %@ 1467-5463, 1477-4054 %G eng %U http://bib.oxfordjournals.org/content/early/2011/12/23/bib.bbr074 %R 10.1093/bib/bbr074 %0 Conference Paper %B PART 2 - Proceedings of the 2011 annual conference extended abstracts on Human factors in computing systems %D 2011 %T HCI for peace: from idealism to concrete steps %A Hourcade,Juan Pablo %A Bullock-Rest,Natasha E. %A Friedman,Batya %A Nelson,Mark %A Shneiderman, Ben %A Zaphiris,Panayiotis %K cyprus %K peace %K persuasive technology %K post-conflict reconciliation %K social media %K value sensitive design %K war %X This panel will contribute diverse perspectives on the use of computer technology to promote peace and prevent armed conflict. These perspectives include: the use of social media to promote democracy and citizen participation, the role of computers in helping people communicate across division lines in zones of conflict, how persuasive technology can promote peace, and how interaction design can play a role in post-conflict reconciliation. %B PART 2 - Proceedings of the 2011 annual conference extended abstracts on Human factors in computing systems %S CHI EA '11 %I ACM %C New York, NY, USA %P 613 - 616 %8 2011/// %G eng %U http://doi.acm.org/10.1145/1979482.1979493 %R 10.1145/1979482.1979493 %0 Conference Paper %B ACM SIGCOMM Workshop on Home Networks '11 %D 2011 %T Helping Users Shop for ISPs with Internet Nutrition Labels %A Sundaresan, Srikanth %A Feamster, Nick %A Teixeira, Renata %A Tang, Anthony %A Edwards, W. Keith %A Grinter, Rebecca E. %A Marshini Chetty %A de Donato, Walter %K access networks %K benchmarking %K bismark %K broadband networks %K gateway measurements %X When purchasing home broadband access from Internet service providers (ISPs), users must decide which service plans are most appropriate for their needs. Today, ISPs advertise their available service plans using only generic upload and download speeds. Unfortunately, these metrics do not always accurately reflect the varying performance that home users will experience for a wide range of applications. In this paper, we propose that each ISP service plan carry a "nutrition label" that conveys more comprehensive information about network metrics along many dimensions, including various aspects of throughput, latency, loss rate, and jitter. We first justify why these metrics should form the basis of a network nutrition label. Then, we demonstrate that current plans that are superficially similar with respect to advertised download rates may have different performance according to the label metrics. We close with a discussion of the challenges involved in presenting a nutrition label to users in a way that is both accurate and easy to understand. %B ACM SIGCOMM Workshop on Home Networks '11 %S HomeNets '11 %I ACM %P 13 - 18 %8 2011/// %@ 978-1-4503-0798-7 %G eng %U http://doi.acm.org/10.1145/2018567.2018571 %0 Book Section %B Transactions on High-Performance Embedded Architectures and Compilers IV %D 2011 %T Heterogeneous Design in Functional DIF %A Plishker,William %A Sane, Nimish %A Kiemb, Mary %A Bhattacharyya, Shuvra S.
%E Stenström, Per %K Arithmetic and Logic Structures %K Computer Communication Networks %K Dataflow %K heterogeneous %K Input/Output and Data Communications %K Logic Design %K Processor Architectures %K Programming Languages, Compilers, Interpreters %K Signal processing %X Dataflow formalisms have provided designers of digital signal processing (DSP) systems with analysis and optimizations for many years. As system complexity increases, designers are relying on more types of dataflow models to describe applications while retaining these implementation benefits. The semantic range of DSP-oriented dataflow models has expanded to cover heterogeneous models and dynamic applications, but efficient design, simulation, and scheduling of such applications have not. To facilitate implementing heterogeneous applications, we utilize a new dataflow model of computation and show how actors designed in other dataflow models are directly supported by this framework, allowing system designers to immediately compose and simulate actors from different models. Using examples, we show how this approach can be applied to quickly describe and functionally simulate a heterogeneous dataflow-based application such that a designer may analyze and tune trade-offs among different models and schedules for simulation time, memory consumption, and schedule size. %B Transactions on High-Performance Embedded Architectures and Compilers IV %S Lecture Notes in Computer Science %I Springer Berlin Heidelberg %P 391 - 408 %8 2011 %@ 978-3-642-24567-1, 978-3-642-24568-8 %G eng %U http://link.springer.com/chapter/10.1007/978-3-642-24568-8_20 %0 Journal Article %J Technical Reports from UMIACS %D 2011 %T A Hierarchical Algorithm for Fast Debye Summation with Applications to Small Angle Scattering %A Gumerov, Nail A. %A Berlin,Konstantin %A Fushman, David %A Duraiswami, Ramani %K Technical Report %X Debye summation, which involves the summation of sinc functions of distances between all pairs of atoms in three dimensional space, arises in computations performed in crystallography, small/wide angle X-ray scattering (SAXS/WAXS) and small angle neutron scattering (SANS). Direct evaluation of Debye summation has quadratic complexity, which results in a computational bottleneck when determining crystal properties, or running structure refinement protocols that involve SAXS or SANS, even for moderately sized molecules. We present a fast approximation algorithm that efficiently computes the summation to any prescribed accuracy epsilon in linear time. The algorithm is similar to the fast multipole method (FMM), and is based on a hierarchical spatial decomposition of the molecule coupled with local harmonic expansions and translation of these expansions. An even more efficient implementation is possible when the scattering profile is all that is required, as in small angle scattering reconstruction (SAS) of macromolecules. We examine the relationship of the proposed algorithm to existing approximate methods for profile computations, and provide a detailed description of the algorithm, including error bounds and algorithms for stable computation of the translation operators. Our theoretical and computational results show orders of magnitude improvement in computational complexity over existing methods, while maintaining prescribed accuracy.
%B Technical Reports from UMIACS %8 2011/09/01/ %G eng %U http://drum.lib.umd.edu/handle/1903/11857 %0 Journal Article %J arXiv:1103.1362 [cs] %D 2011 %T Higher-Order Symbolic Execution via Contracts %A Tobin-Hochstadt, Sam %A David Van Horn %K Computer Science - Programming Languages %X We present a new approach to automated reasoning about higher-order programs by extending symbolic execution to use behavioral contracts as symbolic values, enabling symbolic approximation of higher-order behavior. Our approach is based on the idea of an abstract reduction semantics that gives an operational semantics to programs with both concrete and symbolic components. Symbolic components are approximated by their contract and our semantics gives an operational interpretation of contracts-as-values. The result is an executable semantics that soundly predicts program behavior, including contract failures, for all possible instantiations of symbolic components. We show that our approach scales to an expressive language of contracts including arbitrary programs embedded as predicates, dependent function contracts, and recursive contracts. Supporting this feature-rich language of specifications leads to powerful symbolic reasoning using existing program assertions. We then apply our approach to produce a verifier for contract correctness of components, including a sound and computable approximation to our semantics that facilitates fully automated contract verification. Our implementation is capable of verifying contracts expressed in existing programs, and of justifying valuable contract-elimination optimizations. %B arXiv:1103.1362 [cs] %8 2011/03/07/ %G eng %U http://arxiv.org/abs/1103.1362 %0 Journal Article %J Logic programming, knowledge representation, and nonmonotonic reasoning %D 2011 %T Homage to Michael Gelfond on His 65th Birthday %A Minker, Jack %X Michael Gelfond is one of the world's leading scientists in the field of logic programming and nonmonotonic reasoning. This essay covers several aspects of Michael’s personal life, starting from his birth in the USSR, through his experiences in the USSR up to the time he emigrated to the United States (U.S.). This is followed by his first experiences in the U.S.: how he became involved in logic programming and nonmonotonic reasoning and some of his major scientific achievements. Michael is a warm, generous person, and I discuss his impact on some colleagues and students. In the concluding section, I observe that, having started his career with impediments in the FSU, he overcame them to become one of the top computer scientists in logic programming and nonmonotonic reasoning. %B Logic programming, knowledge representation, and nonmonotonic reasoning %P 1 - 11 %8 2011/// %G eng %R 10.1007/978-3-642-20832-4_1 %0 Journal Article %J SIGCOMM-Computer Communication Review %D 2011 %T How Many Tiers? Pricing in the Internet Transit Market %A Valancius,V. %A Lumezanu,C. %A Feamster, Nick %A Johari,R. %A Vazirani,V. V %X ISPs are increasingly selling “tiered” contracts, which offer Internet connectivity to wholesale customers in bundles, at rates based on the cost of the links that the traffic in the bundle is traversing. Although providers have already begun to implement and deploy tiered pricing contracts, little is known about how to structure them. Although contracts that sell connectivity on finer granularities improve market efficiency, they are also more costly for ISPs to implement and more difficult for customers to understand.
Our goal is to analyze whether current tiered pricing practices in the wholesale transit market yield optimal profits for ISPs and whether better bundling strategies might exist. In the process, we offer two contributions: (1) we develop a novel way of mapping traffic and topology data to a demand and cost model; and (2) we fit this model on three large real-world networks: a European transit ISP, a content distribution network, and an academic research network, and run counterfactuals to evaluate the effects of different bundling strategies. Our results show that the common ISP practice of structuring tiered contracts according to the cost of carrying the traffic flows (e.g., offering a discount for traffic that is local) can be suboptimal and that dividing contracts based on both traffic demand and the cost of carrying it into only three or four tiers yields near-optimal profit for the ISP. %B SIGCOMM-Computer Communication Review %V 41 %P 194 - 194 %8 2011/// %G eng %N 4 %0 Conference Paper %D 2011 %T How secure are networked office devices? %A Condon,E. %A Cummins,E. %A Afoulki,Z. %A Michel Cukier %K computer network security %K data privacy %K networked office device security %K privacy risk assessment %K Risk management %K security risk assessment %K STRIDE threat model %K university network %X Many office devices have a history of being networked (such as printers) and others without the same past are increasingly becoming networked (such as photocopiers). The modern networked versions of previously non-networked devices have much in common with traditional networked servers in terms of features and functions. While an organization may have policies and procedures for securing traditional network servers, securing networked office devices providing similar services can easily be overlooked. In this paper we present an evaluation of privacy and security risks found when examining over 1,800 networked office devices connected to a large university network. We use the STRIDE threat model to categorize threats and vulnerabilities and then we group the devices according to assessed risk from the perspective of the university. We found that while steps had been taken to secure some devices, many were using default or unsecured configurations. %P 465 - 472 %8 2011/06// %G eng %R 10.1109/DSN.2011.5958259 %0 Book %D 2010 %T Handbook of Signal Processing Systems %A Bhattacharyya, Shuvra S. %A Deprettere, Ed F. %K Computers / Information Theory %K Technology & Engineering / Electrical %K Technology & Engineering / Signals & Signal Processing %X The Handbook is organized in four parts. The first part motivates representative applications that drive and apply state-of-the-art methods for design and implementation of signal processing systems; the second part discusses architectures for implementing these applications; the third part focuses on compilers and simulation tools; and the fourth part describes models of computation and their associated design tools and methodologies.
%I Springer %P 1099 %8 2010 %@ 9781441963451 %G eng %0 Conference Paper %B Proceedings of the 9th IAPR International Workshop on Document Analysis Systems %D 2010 %T Handwritten Arabic text line segmentation using affinity propagation %A Kumar,Jayant %A Abd-Almageed, Wael %A Kang,Le %A David Doermann %K affinity propagation %K arabic %K arabic documents %K breadth-first search %K clustering %K dijkstra's shortest path algorithm %K handwritten documents %K line detection %K text line segmentation %X In this paper, we present a novel graph-based method for extracting handwritten text lines in monochromatic Arabic document images. Our approach consists of two steps: coarse text line estimation using primary components, which define the line, and assignment of diacritic components, which are more difficult to associate with a given line. We first estimate local orientation at each primary component to build a sparse similarity graph. We then use a shortest path algorithm to compute similarities between non-neighboring components. From this graph, we obtain coarse text lines using two estimates obtained from Affinity propagation and Breadth-first search. In the second step, we assign secondary components to each text line. The proposed method is very fast and robust to non-uniform skew and character size variations, normally present in handwritten text lines. We evaluate our method using a pixel-matching criterion, and report 96% accuracy on a dataset of 125 Arabic document images. We also present a proximity analysis on datasets generated by artificially decreasing the spacings between text lines to demonstrate the robustness of our approach. %B Proceedings of the 9th IAPR International Workshop on Document Analysis Systems %S DAS '10 %I ACM %C New York, NY, USA %P 135 - 142 %8 2010/// %@ 978-1-60558-773-8 %G eng %U http://doi.acm.org/10.1145/1815330.1815348 %R 10.1145/1815330.1815348 %0 Journal Article %J NIPS 2010 Workshop on Networks Across Disciplines: Theory and Applications, Whistler BC, Canada %D 2010 %T Higher-order graphical models for classification in social and affiliation networks %A Zheleva,E. %A Getoor, Lise %A Sarawagi,S. %X In this work we explore the application of higher-order Markov Random Fields (MRF) to classification in social and affiliation networks. We consider both friendship links and group membership for inferring hidden attributes in a collective inference framework. We explore different ways of using the social groups as either node features or to construct the graphical model structure. The bottleneck in applying higher-order MRFs to a domain with many overlapping large cliques is the complexity of inference which is exponential in the size of the largest clique. To circumvent the slow inference problem, we borrow recent advancements in the computer vision community to achieve fast approximate inference results. We provide preliminary results using a dataset from Facebook which suggest that our higher-order MRF models are capturing the structural dependencies in the networks and they yield higher accuracy than linear classifiers.
%B NIPS 2010 Workshop on Networks Across Disciplines: Theory and Applications, Whistler BC, Canada %8 2010/// %G eng %0 Conference Paper %B Proceedings of the 2010 Conference on Empirical Methods in Natural Language Processing %D 2010 %T Holistic sentiment analysis across languages: multilingual supervised latent Dirichlet allocation %A Jordan Boyd-Graber %A Resnik, Philip %X In this paper, we develop multilingual supervised latent Dirichlet allocation (MlSLDA), a probabilistic generative model that allows insights gleaned from one language's data to inform how the model captures properties of other languages. MlSLDA accomplishes this by jointly modeling two aspects of text: how multilingual concepts are clustered into thematically coherent topics and how topics associated with text connect to an observed regression variable (such as ratings on a sentiment scale). Concepts are represented in a general hierarchical framework that is flexible enough to express semantic ontologies, dictionaries, clustering constraints, and, as a special, degenerate case, conventional topic models. Both the topics and the regression are discovered via posterior inference from corpora. We show MlSLDA can build topics that are consistent across languages, discover sensible bilingual lexical correspondences, and leverage multilingual corpora to better predict sentiment. %B Proceedings of the 2010 Conference on Empirical Methods in Natural Language Processing %S EMNLP '10 %I Association for Computational Linguistics %C Stroudsburg, PA, USA %P 45 - 55 %8 2010/// %G eng %U http://dl.acm.org/citation.cfm?id=1870658.1870663 %0 Journal Article %J Developmental Cell %D 2010 %T Hopx and Hdac2 Interact to Modulate Gata4 Acetylation and Embryonic Cardiac Myocyte Proliferation %A Trivedi,Chinmay M. %A Zhu,Wenting %A Wang,Qiaohong %A Jia,Cheng %A Kee,Hae Jin %A Li,Li %A Hannenhalli, Sridhar %A Epstein,Jonathan A. %X Regulation of chromatin structure via histone modification has recently received intense attention. Here, we demonstrate that the chromatin-modifying enzyme histone deacetylase 2 (Hdac2) functions with a small homeodomain factor, Hopx, to mediate deacetylation of Gata4, which is expressed by cardiac progenitor cells and plays critical roles in the regulation of cardiogenesis. In the absence of Hopx and Hdac2 in mouse embryos, Gata4 hyperacetylation is associated with a marked increase in cardiac myocyte proliferation, upregulation of Gata4 target genes, and perinatal lethality. Hdac2 physically interacts with Gata4, and this interaction is stabilized by Hopx. The ability of Gata4 to transactivate cell cycle genes is impaired by Hopx/Hdac2-mediated deacetylation, and this effect is abrogated by loss of Hdac2-Gata4 interaction. These results suggest that Gata4 is a nonhistone target of Hdac2-mediated deacetylation and that Hdac2, Hopx, and Gata4 coordinately regulate cardiac myocyte proliferation during embryonic development. %B Developmental Cell %V 19 %P 450 - 459 %8 2010/09/14/ %@ 1534-5807 %G eng %U http://www.sciencedirect.com/science/article/pii/S1534580710003874 %N 3 %R 10.1016/j.devcel.2010.08.012 %0 Journal Article %J SIAM review %D 2009 %T Hat guessing games %A Butler,S. %A Hajiaghayi, Mohammad T. %A Kleinberg,R. D %A Leighton,T. %B SIAM review %V 51 %P 399 - 413 %8 2009/// %G eng %N 2 %0 Journal Article %J The fourth paradigm: data-intensive scientific discovery %D 2009 %T HEALTH AND WELLBEING %A Gillam,M. %A Feied,C. %A Moody,E. %A Shneiderman, Ben %A Smith,M. %A Dickason,J.
%B The fourth paradigm: data-intensive scientific discovery %P 57 - 57 %8 2009/// %G eng %0 Journal Article %J Image Processing, IEEE Transactions on %D 2009 %T High-Fidelity Data Embedding for Image Annotation %A He,Shan %A Kirovski,D. %A M. Wu %K Algorithms %K Computer Graphics %K Documentation %K Image Enhancement %K Image Interpretation, Computer-Assisted %K Information Storage and Retrieval %K Pattern Recognition, Automated %K Product Labeling %K Reproducibility of Results %K Sensitivity and Specificity %K Signal Processing, Computer-Assisted %K JPEG compression %K JPEG cropping %K arbitrary imagery %K data hiding %K high-fidelity data embedding %K image annotation %K image watermarking %K medical images %K photographic images %K data compression %K image coding %K watermarking %X High fidelity is a demanding requirement for data hiding, especially for images with artistic or medical value. This correspondence proposes a high-fidelity image watermarking method for annotation with robustness to moderate distortion. To achieve the high fidelity of the embedded image, we introduce a visual perception model that aims at quantifying the local tolerance to noise for arbitrary imagery. Based on this model, we embed two kinds of watermarks: a pilot watermark that indicates the existence of the watermark and an information watermark that conveys a payload of several dozen bits. The objective is to embed 32 bits of metadata into a single image in such a way that it is robust to JPEG compression and cropping. We demonstrate the effectiveness of the visual model and the application of the proposed annotation technology using a database of challenging photographic and medical images that contain a large amount of smooth regions. %B Image Processing, IEEE Transactions on %V 18 %P 429 - 435 %8 2009/02// %@ 1057-7149 %G eng %N 2 %R 10.1109/TIP.2008.2008733 %0 Book %D 2009 %T HotSWUp '09: Proceedings of the 2nd International Workshop on Hot Topics in Software Upgrades %I ACM %C New York, NY, USA %8 2009/// %@ 978-1-60558-723-3 %G eng %0 Conference Paper %B Proceedings of the 8th International Conference on Interaction Design and Children %D 2009 %T How children search the internet with keyword interfaces %A Druin, Allison %A Foss,E. %A Hatley,L. %A Golub,E. %A Guha,M.L. %A Fails,J. %A Hutchinson,H. %B Proceedings of the 8th International Conference on Interaction Design and Children %P 89 - 96 %8 2009/// %G eng %0 Conference Paper %B Image Processing (ICIP), 2009 16th IEEE International Conference on %D 2009 %T How would you look as you age? %A Ramanathan,N. %A Chellapa, Rama %K age-separated face image database %K face recognition %K face verification %K facial appearances %K facial growth model %K facial shape %K facial texture %K texture transformation models %X Facial appearances change with increase in age. While generic growth patterns that are characteristic of different age groups can be identified, facial growth is also observed to be influenced by individual-specific attributes such as one's gender, ethnicity, life-style, etc. In this paper, we propose a facial growth model that comprises transformation models for facial shape and texture. We collected empirical data pertaining to facial growth from a database of age-separated face images of adults and used the same in developing the aforementioned transformation models. The proposed model finds applications in predicting one's appearance across ages and in performing face verification across ages.
%B Image Processing (ICIP), 2009 16th IEEE International Conference on %P 53 - 56 %8 2009/11// %G eng %R 10.1109/ICIP.2009.5413998 %0 Conference Paper %B Computer Vision, 2009 IEEE 12th International Conference on %D 2009 %T Human detection using partial least squares analysis %A Schwartz,William Robson %A Kembhavi,Aniruddha %A Harwood,David %A Davis, Larry S. %X Significant research has been devoted to detecting people in images and videos. In this paper we describe a human detection method that augments widely used edge-based features with texture and color information, providing us with a much richer descriptor set. This augmentation results in an extremely high-dimensional feature space (more than 170,000 dimensions). In such high-dimensional spaces, classical machine learning algorithms such as SVMs are nearly intractable with respect to training. Furthermore, the number of training samples is much smaller than the dimensionality of the feature space, by at least an order of magnitude. Finally, the extraction of features from a densely sampled grid structure leads to a high degree of multicollinearity. To circumvent these data characteristics, we employ Partial Least Squares (PLS) analysis, an efficient dimensionality reduction technique, one which preserves significant discriminative information, to project the data onto a much lower dimensional subspace (20 dimensions, reduced from the original 170,000). Our human detection system, employing PLS analysis over the enriched descriptor set, is shown to outperform state-of-the-art techniques on three varied datasets including the popular INRIA pedestrian dataset, the low-resolution gray-scale DaimlerChrysler pedestrian dataset, and the ETHZ pedestrian dataset consisting of full-length videos of crowded scenes. %B Computer Vision, 2009 IEEE 12th International Conference on %P 24 - 31 %8 2009/09// %G eng %R 10.1109/ICCV.2009.5459205 %0 Conference Paper %B Proceedings of the 11th international conference on Ubiquitous computing %D 2009 %T HydroSense: infrastructure-mediated single-point sensing of whole-home water activity %A Jon Froehlich %A Larson,E. %A Campbell,T. %A Haggerty,C. %A Fogarty,J. %A Patel,S.N. %B Proceedings of the 11th international conference on Ubiquitous computing %8 2009/09// %G eng %0 Report %D 2008 %T H(div) preconditioning for a mixed finite element formulation of the stochastic diffusion problem %A Elman, Howard %A Furnival, D. G %A Powell, C. E %I Citeseer %8 2008/// %G eng %0 Journal Article %J Journal of Cryptology %D 2008 %T Handling expected polynomial-time strategies in simulation-based security proofs %A Katz, Jonathan %A Lindell,Y. %X The standard class of adversaries considered in cryptography is that of strict polynomial-time probabilistic machines. However, expected polynomial-time machines are often also considered. For example, there are many zero-knowledge protocols for which the only known simulation techniques run in expected (and not strict) polynomial time. In addition, it has been shown that expected polynomial-time simulation is essential for achieving constant-round black-box zero-knowledge protocols. This reliance on expected polynomial-time simulation introduces a number of conceptual and technical difficulties. In this paper, we develop techniques for dealing with expected polynomial-time adversaries in simulation-based security proofs.
%B Journal of Cryptology %V 21 %P 303 - 349 %8 2008/// %G eng %N 3 %R 10.1007/s00145-007-9004-8 %0 Book %D 2008 %T Handwritten Document Image Processing %A Yefeng Zheng %A David Doermann %A Huiping Li %I VDM Verlag Dr. Müller %8 2008/// %G eng %0 Journal Article %J IEEE Transactions on Image Processing %D 2008 %T Hardware and Software Systems for Image and Video Processing - Algorithmic and Architectural Optimizations for Computationally Efficient Particle Filtering %A Sankaranarayanan,A. C %A Srivastava, A. %A Chellapa, Rama %B IEEE Transactions on Image Processing %V 17 %P 737 - 748 %8 2008/// %G eng %N 5 %0 Journal Article %J Journal of Parallel and Distributed Computing %D 2008 %T Hardware monitors for dynamic page migration %A Tikir, Mustafa M. %A Hollingsworth, Jeffrey K %K Address translation counters %K cc-NUMA systems %K Dynamic page migration %K Full system simulation %K Hardware performance monitors %K High performance computing %K Multiprocessor systems %K OpenMP applications %K Runtime optimization %X In this paper, we first introduce a profile-driven online page migration scheme and investigate its impact on the performance of multithreaded applications. We use centralized lightweight, inexpensive plug-in hardware monitors to profile the memory access behavior of an application, and then migrate pages to memory local to the most frequently accessing processor. We also investigate the use of several other potential sources of data gathered from hardware monitors and compare their effectiveness to using data from centralized hardware monitors. In particular, we investigate the effectiveness of using cache miss profiles, Translation Lookaside Buffer (TLB) miss profiles and the content of the on-chip TLBs using the valid bit information. Moreover, we also introduce a modest hardware feature, called Address Translation Counters (ATC), and compare its effectiveness with other sources of hardware profiles. Using the Dyninst runtime instrumentation combined with hardware monitors, we were able to add page migration capabilities to a Sun Fire 6800 server without having to modify the operating system kernel, or to re-compile application programs. Our dynamic page migration scheme reduced the total number of non-local memory accesses of applications by up to 90% and improved the execution times up to 16%. We also conducted a simulation based study and demonstrated that cache miss profiles gathered from on-chip CPU monitors, which are typically available in current microprocessors, can be effectively used to guide dynamic page migrations in applications. %B Journal of Parallel and Distributed Computing %V 68 %P 1186 - 1200 %8 2008/09// %@ 0743-7315 %G eng %U http://www.sciencedirect.com/science/article/pii/S0743731508001020 %N 9 %R 10.1016/j.jpdc.2008.05.006 %0 Conference Paper %B SIGCHI EA '08 %D 2008 %T HCI for community and international development %A Thomas, John %A Dearden, Andy %A Dray, Susan %A Light, Ann %A Best, Michael %A Arkin, Nuray %A Maunder, Andrew %A Kam, Mathew %A Marshini Chetty %A Sambasivan, Nithya %A Buckhalter, Celeste %A Krishnan, Gaurishankar %K community design %K ict4d %K information and communication technology %K international development %K participatory design %K ucd4id %K User centered design %X This workshop explores the challenges in applying, extending and inventing appropriate methods and contributions of Human Computer Interaction (HCI) to international economic and community development.
We address interaction design for parts of the world that are often marginalized by the Global North as well as people in the Global North who are themselves similarly marginalized by poverty or other barriers. We hope to extend the boundaries of the field of Human Computer Interaction by spurring a discussion on how existing methods and practices can be adapted and modified, and how new practices can be developed, to deal with the unique challenges posed by these contexts. %B SIGCHI EA '08 %S CHI EA '08 %I ACM %P 3909 - 3912 %8 2008/// %@ 978-1-60558-012-8 %G eng %U http://doi.acm.org/10.1145/1358628.1358954 %0 Journal Article %J Geospatial Services and Applications for the Internet %D 2008 %T Hierarchical infrastructure for internet mapping services %A Brabec,F. %A Samet, Hanan %B Geospatial Services and Applications for the Internet %P 1 - 30 %8 2008/// %G eng %R 10.1007/978-0-387-74674-6_1 %0 Conference Paper %B Int. Conf. on Computer Graphics Theory and Applications (GRAPP) %D 2008 %T A hierarchical spatial index for triangulated surfaces %A De Floriani, Leila %A Facinoli,M. %A Magillo,P. %A Dimitri,D. %X We present the PM2-Triangle quadtree (PM2T-quadtree), a new hierarchical spatial index for triangle meshes, which has been designed for performing spatial queries on triangle-based terrain models. The PM2T-quadtree is based on a recursive space decomposition into square blocks. Here, we propose a highly compact data structure encoding a PM2T-quadtree, which decouples the spatial indexing structure from the combinatorial description of the mesh. We compare the PM2T-quadtree against other spatial indexes by considering the structure of the underlying domain subdivision, the storage costs of their data structures and the performance in geometric queries. %B Int. Conf. on Computer Graphics Theory and Applications (GRAPP) %P 86 - 91 %8 2008/// %G eng %0 Report %D 2008 %T High Performance Computing Algorithms for Land Cover Dynamics Using Remote Sensing Data %A Satya Kalluri %A Bader,David A. %A John Townshend %A JaJa, Joseph F. %A Zengyan Zhang %A Fallah-adl,Hassan %X Global and regional land cover studies require the ability to apply complex models on selected subsets of large amounts of multi-sensor and multi-temporal data sets that have been derived from raw instrument measurements using widely accepted pre-processing algorithms. The computational and storage requirements of most such studies far exceed what is possible on a single workstation environment. We have been pursuing a new approach that couples scalable and open distributed heterogeneous hardware with the development of high performance software for processing, indexing, and organizing remotely sensed data. Hierarchical data management tools are used to ingest raw data, create metadata, and organize the archived data so as to automatically achieve computational load balancing among the available nodes and minimize I/O overheads. We illustrate our approach with four specific examples. The first is the development of the first fast operational scheme for the atmospheric correction of Landsat TM scenes, while the second example focuses on image segmentation using a novel hierarchical connected components algorithm. Retrieval of global BRDF (Bidirectional Reflectance Distribution Function) in the red and near infrared wavelengths using four years (1983 to 1986) of Pathfinder AVHRR Land (PAL) data set is the focus of our third example.
The fourth example is the development of a hierarchical data organization scheme that allows on-demand processing and retrieval of regional and global AVHRR data sets. Our results show that substantial improvements in computational times can be achieved by using high performance computing technology. %I CiteSeerX %8 2008/// %G eng %U http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.58.4213 %0 Conference Paper %B Similarity Search and Applications, 2008. SISAP 2008. First International Workshop on %D 2008 %T High-Dimensional Similarity Retrieval Using Dimensional Choice %A Tahmoush,D. %A Samet, Hanan %K database management systems %K dimension reduction %K distance function %K high-dimensional similarity retrieval %K query processing %K sequential search %K similarity search %X There are several pieces of information that can be utilized in order to improve the efficiency of similarity searches on high-dimensional data. The most commonly used information is the distribution of the data itself but the use of dimensional choice based on the information in the query as well as the parameters of the distribution can provide an effective improvement in the query processing speed and storage. The use of this method can produce dimension reduction by as much as a factor of n, the number of data points in the database, over sequential search. We demonstrate that the curse of dimensionality is not based on the dimension of the data itself, but primarily upon the effective dimension of the distance function. We also introduce a new distance function that utilizes fewer dimensions of the higher dimensional space to produce a maximal lower bound distance in order to approximate the full distance function. This work has demonstrated significant dimension reduction, up to 70% reduction with an improvement in accuracy or over 99% with only a 6% loss in accuracy on a prostate cancer data set. %B Similarity Search and Applications, 2008. SISAP 2008. First International Workshop on %P 35 - 42 %8 2008/04// %G eng %R 10.1109/SISAP.2008.20 %0 Report %D 2008 %T Hosting virtual networks on commodity hardware %A Bhatia,S. %A Motiwala,M. %A Muhlbauer,W. %A Valancius,V. %A Bavier,A. %A Feamster, Nick %A Peterson,L. %A Rexford,J. %X This paper describes Trellis, a software platform for hosting multiple virtual networks on shared commodity hardware. Trellis allows each virtual network to define its own topology, control protocols, and forwarding tables, which lowers the barrier for deploying custom services on an isolated, reconfigurable, and programmable network, while amortizing costs by sharing the physical infrastructure. Trellis synthesizes two container-based virtualization technologies, VServer and NetNS, as well as a new tunneling mechanism, EGRE, into a coherent platform that enables high-speed virtual networks. We describe the design and implementation of Trellis, including kernel-level performance optimizations, and evaluate its supported packet-forwarding rates against other virtualization technologies. We are in the process of upgrading the VINI facility to use Trellis. We also plan to release Trellis as part of MyVINI, a standalone software distribution that allows researchers and application developers to deploy their own virtual network hosting platforms.
%I Georgia Institute of Technology %V GT-CS-07-10 %8 2008/// %G eng %0 Book %D 2008 %T HotSWUp '08: Proceedings of the 1st International Workshop on Hot Topics in Software Upgrades %I ACM %C New York, NY, USA %8 2008/// %@ 978-1-60558-304-4 %G eng %0 Report %D 2008 %T How Do Users Find Things with PubMed? Towards Automatic Utility Evaluation with User Simulations %A Jimmy Lin %A Smucker,Mark D. %K *BIOMEDICAL INFORMATION SYSTEMS %K *BROWSING %K *INFORMATION RETRIEVAL %K *PUBMED %K *SIMILARITY %K *SIMULATION %K *USER NEEDS %K DOCUMENTS %K FIND-SIMILAR %K HUMAN FACTORS ENGINEERING & MAN MACHINE SYSTEM %K INFORMATION SCIENCE %K MAN COMPUTER INTERFACE %K QUALITY %K queries %K searching %K TEST AND EVALUATION %K UTILITY %X In the context of document retrieval in the biomedical domain, this paper explores the complex relationship between the quality of initial query results and the overall utility of an interactive system. We demonstrate that a content-similarity browsing tool can compensate for poor retrieval results, and that the relationship between retrieval performance and overall utility is non-linear. Arguments are advanced with user simulations, which characterize the relevance of documents that a user might encounter with different browsing strategies. With broader implications to IR, this work provides a case study of how user simulations can be exploited as a formative tool for automatic utility evaluation. Simulation-based studies provide researchers with an additional evaluation tool to complement interactive and Cranfield-style experiments. %I Human-Computer Interaction Lab, University of Maryland, College Park %8 2008/02// %G eng %U http://stinet.dtic.mil/oai/oai?&verb=getRecord&metadataPrefix=html&identifier=ADA478703 %0 Report %D 2008 %T How friendship links and group memberships affect the privacy of individuals in social networks %A Zheleva,Elena %A Getoor, Lise %K Technical Report %X In order to address privacy concerns, many social media websites allow users to hide their personal profiles from the public. In this work, we show how an adversary can exploit a social network with a mixture of public and private user profiles to predict the private attributes of users. We map this problem to a relational classification problem and we propose a simple yet powerful model that uses group features and group memberships of users to perform multi-value classification. We compare its efficacy against several other classification approaches. Our results show that even in the case when there is an option for making profile attributes private, if links and group affiliations are known, users' privacy in social networks may be compromised. On a dataset from a well-known social-media website, we could easily recover the sensitive attributes for half of the private-profile users with a high accuracy when as much as half of the profiles are private. To the best of our knowledge, this is the first work that uses link-based and group-based classification to study privacy implications in social networks. We conclude with a discussion of our findings and the broader applicability of our proposed model. %I Institute for Advanced Computer Studies, Univ of Maryland, College Park %V UMIACS-TR-2008-16 %8 2008/07/01/ %G eng %U http://drum.lib.umd.edu/handle/1903/8691 %0 Journal Article %J Fast Software Encryption %D 2008 %T How to encrypt with a malicious random number generator %A Kamara,S.
%A Katz, Jonathan %X Chosen-plaintext attacks on private-key encryption schemes are currently modeled by giving an adversary access to an oracle that encrypts a given message m using random coins that are generated uniformly at random and independently of anything else. This leaves open the possibility of attacks in case the random coins are poorly generated (e.g., using a faulty random number generator), or are under partial adversarial control (e.g., when encryption is done by lightweight devices that may be captured and tampered with). We introduce new notions of security modeling such attacks, propose two concrete schemes meeting our definitions, and show generic transformations for achieving security in this context. %B Fast Software Encryption %P 303 - 315 %8 2008/// %G eng %R 10.1007/978-3-540-71039-4_19 %0 Journal Article %J Crossroads %D 2008 %T How to succeed in graduate school: a guide for students and advisors %A desJardins, Marie %B Crossroads %V 14 %P 5 - 9 %8 2008/06// %@ 1528-4972 %G eng %U http://doi.acm.org/10.1145/1375972.1375975 %N 4 %R 10.1145/1375972.1375975 %0 Conference Paper %B Robotics and Automation, 2008. ICRA 2008. IEEE International Conference on %D 2008 %T Human detection using iterative feature selection and logistic principal component analysis %A Abd-Almageed, Wael %A Davis, Larry S. %K belongness probability %K edge detection %K edge map filtering %K feature extraction %K feature selection %K filtering theory %K human detection %K iterative methods %K logistic PCA %K logistic principal component analysis %K MAP %K object detection %K principal component analysis %K probability %X We present a fast feature selection algorithm suitable for object detection applications where the image being tested must be scanned repeatedly to detect the object of interest at different locations and scales. The algorithm iteratively estimates the belongness probability of image pixels to the foreground of the image. To prove the validity of the algorithm, we apply it to a human detection problem. The edge map is filtered using a feature selection algorithm. The filtered edge map is then projected onto an eigen space of human shapes to determine if the image contains a human. Since the edge maps are binary in nature, Logistic Principal Component Analysis is used to obtain the eigen human shape space. Experimental results illustrate the accuracy of the human detector. %B Robotics and Automation, 2008. ICRA 2008. IEEE International Conference on %P 1691 - 1697 %8 2008/05// %G eng %R 10.1109/ROBOT.2008.4543444 %0 Journal Article %J Genome Biology %D 2007 %T Hawkeye: an interactive visual analytics tool for genome assemblies %A Schatz,Michael C %A Phillippy,Adam M %A Shneiderman, Ben %A Salzberg,Steven L. %X Genome sequencing remains an inexact science, and genome sequences can contain significant errors if they are not carefully examined. Hawkeye is our new visual analytics tool for genome assemblies, designed to aid in identifying and correcting assembly errors. Users can analyze all levels of an assembly along with summary statistics and assembly metrics, and are guided by a ranking component towards likely mis-assemblies. Hawkeye is freely available and released as part of the open source AMOS project http://amos.sourceforge.net/hawkeye.
%B Genome Biology %V 8 %P R34 - R34 %8 2007/03/09/ %@ 1465-6906 %G eng %U http://genomebiology.com/2007/8/3/R34 %N 3 %R 10.1186/gb-2007-8-3-r34 %0 Conference Paper %B SIGCHI EA '07 %D 2007 %T HCI4D: HCI challenges in the Global South %A Marshini Chetty %A Grinter, Rebecca E. %K global south %K hci %K methodologies %X While researchers have designed user-centered systems for the Global South, fewer have discussed the unique challenges facing HCI projects. In this paper, we describe methodological and practical challenges for HCI research and practice in the Global South. %B SIGCHI EA '07 %S CHI EA '07 %I ACM %P 2327 - 2332 %8 2007/// %@ 978-1-59593-642-4 %G eng %U http://doi.acm.org/10.1145/1240866.1241002 %0 Journal Article %J AI Magazine %D 2007 %T Heuristic Search and Information Visualization Methods for School Redistricting %A desJardins, Marie %A Bulka,Blazej %A Carr,Ryan %A Jordan,Eric %A Rheingans,Penny %X We describe an application of AI search and information visualization techniques to the problem of school redistricting, in which students are assigned to home schools within a county or school district. This is a multicriteria optimization problem in which competing objectives, such as school capacity, busing costs, and socioeconomic distribution, must be considered. Because of the complexity of the decision-making problem, tools are needed to help end users generate, evaluate, and compare alternative school assignment plans. A key goal of our research is to aid users in finding multiple qualitatively different redistricting plans that represent different trade-offs in the decision space. We present heuristic search methods that can be used to find a set of qualitatively different plans, and give empirical results of these search methods on population data from the school district of Howard County, Maryland. We show the resulting plans using novel visualization methods that we have developed for summarizing and comparing alternative plans. %B AI Magazine %V 28 %P 59 - 59 %8 2007/09/15/ %@ 0738-4602 %G eng %U https://www.aaai.org/ojs/index.php/aimagazine/article/viewArticle/2055 %N 3 %R 10.1609/aimag.v28i3.2055 %0 Conference Paper %B Computer Vision, 2007. ICCV 2007. IEEE 11th International Conference on %D 2007 %T Hierarchical Part-Template Matching for Human Detection and Segmentation %A Zhe Lin %A Davis, Larry S. %A David Doermann %A DeMenthon,D. %K background subtraction %K Bayes methods %K Bayesian MAP framework %K fine occlusion analysis %K global likelihood re-evaluation %K global shape template-based detectors %K hierarchical part-template matching %K human detection %K human segmentation %K image matching %K image segmentation %K image sequences %K local part-based detectors %K partial occlusions %K shape articulations %K video sequences %X Local part-based human detectors are capable of handling partial occlusions efficiently and modeling shape articulations flexibly, while global shape template-based human detectors are capable of detecting and segmenting human shapes simultaneously. We describe a Bayesian approach to human detection and segmentation combining local part-based and global template-based schemes. The approach relies on the key ideas of matching a part-template tree to images hierarchically to generate a reliable set of detection hypotheses and optimizing it under a Bayesian MAP framework through global likelihood re-evaluation and fine occlusion analysis.
In addition to detection, our approach is able to obtain human shapes and poses simultaneously. We applied the approach to human detection and segmentation in crowded scenes with and without background subtraction. Experimental results show that our approach achieves good performance on images and video sequences with severe occlusion. %B Computer Vision, 2007. ICCV 2007. IEEE 11th International Conference on %P 1 - 8 %8 2007/10// %G eng %R 10.1109/ICCV.2007.4408975 %0 Journal Article %J 19th International Congress on Acoustics %D 2007 %T High frequency acoustic simulations via FMM accelerated BEM %A Gumerov, Nail A. %A Duraiswami, Ramani %X High frequency simulations of acoustics are among the most expensive problems to simulate. In practice 6 to 10 points per wavelength are required. Since the wavenumber k is inversely proportional to wavelength, if a numerical method is a surface based method (such as the BEM), then problem size scales as O(k^2 D^2), where D is the size of the domain. Dense matrices appear and typical iterative solution can be achieved for O(k^4 D^4) memory and O(k^4 D^4) per iteration cost. We employ an algorithm based on the fast multipole method (FMM) using coaxial and diagonal translation operators based on the frequency of simulation to reduce the memory requirements and per iteration cost to O(k^2 D^2 log kD) for larger kD (kD ~ 250). The number of iterations needed depends upon the condition of the matrix, and the preconditioning chosen. Preconditioned Krylov methods such as the flexible generalized minimum residual method (FGMRES), with preconditioning based upon a lower accuracy FMM, usually ensure a convergent iteration. Example calculations are presented. %B 19th International Congress on Acoustics %V 159 %8 2007/// %G eng %0 Journal Article %J BMC Bioinformatics %D 2007 %T High-throughput sequence alignment using Graphics Processing Units %A Schatz,Michael C %A Trapnell,Cole %A Delcher,Arthur L. %A Varshney, Amitabh %B BMC Bioinformatics %V 8 %P 474 - 474 %8 2007/// %@ 1471-2105 %G eng %U http://www.biomedcentral.com/1471-2105/8/474 %N 1 %R 10.1186/1471-2105-8-474 %0 Journal Article %J BMC Public Health %D 2007 %T HIV risk behaviors among female IDUs in developing and transitional countries %A Charles,C. %A Des Jarlais Don,P. T %A Gerry,S. %B BMC Public Health %V 7 %8 2007/// %G eng %0 Conference Paper %B Proc. 6th ACM Workshop on Hot Topics in Networks (Hotnets-VI) %D 2007 %T Holding the Internet Accountable %A Andersen,David %A Balakrishnan,Hari %A Feamster, Nick %A Koponen,Teemu %A Moon,Daekyong %A Shenker,Scott %X Today’s IP network layer provides little to no protection against misconfiguration or malice. Despite some progress in improving the robustness and security of the IP layer, misconfigurations and attacks still occur frequently. We show how a network layer that provides accountability, i.e., the ability to associate each action with the responsible entity, provides a firm foundation for defenses against misconfiguration and malice. We present the design of a network layer that incorporates accountability called AIP (Accountable Internet Protocol) and show how its features—notably, its use of self-certifying addresses—can improve both source accountability (the ability to trace actions to a particular end host and stop that host from misbehaving) and control-plane accountability (the ability to pinpoint and prevent attacks on routing). %B Proc.
6th ACM Workshop on Hot Topics in Networks (Hotnets-VI) %8 2007/11/01/ %G eng %U http://repository.cmu.edu/compsci/66 %0 Conference Paper %B Proceedings of the Workshop on Metareasoning in Agent-Based Systems %D 2007 %T Toward Domain-Neutral Human-Level Metacognition %A Anderson,M. L %A Schmill,M. %A Oates,T. %A Perlis, Don %A Josyula,D. P %A Wright,D. %A Fults,S. %B Proceedings of the Workshop on Metareasoning in Agent-Based Systems %8 2007/// %G eng %0 Book Section %B UbiComp 2007: Ubiquitous Computing %D 2007 %T How Smart Homes Learn: The Evolution of the Networked Home and Household %A Marshini Chetty %A Sung, Ja-Young %A Grinter, Rebecca E. %E Krumm, John %E Abowd, Gregory D. %E Seneviratne, Aruna %E Strang, Thomas %K Computer Communication Networks %K computers and society %K home networking %K Information Systems Applications (incl.Internet) %K infrastructure %K smart home %K software engineering %K Systems and Data Security %K User Interfaces and Human Computer Interaction %X Despite a growing desire to create smart homes, we know little about how networked technologies interact with a house’s infrastructure. In this paper, we begin to close this gap by presenting findings from a study that examined the relationship between home networking and the house itself—and the work that results for householders as a consequence of this interaction. We discuss four themes that emerged: an ambiguity in understanding the virtual boundaries created by wireless networks, the home network control paradox, a new home network access paradox, and the relationship between increased responsibilities and the possibilities of wireless networking. %B UbiComp 2007: Ubiquitous Computing %S Lecture Notes in Computer Science %I Springer Berlin Heidelberg %P 127 - 144 %8 2007/01/01/ %@ 978-3-540-74852-6, 978-3-540-74853-3 %G eng %U http://link.springer.com/chapter/10.1007/978-3-540-74853-3_8 %0 Journal Article %J SIGCOMM Comput. Commun. Rev. %D 2007 %T How to lease the internet in your spare time %A Feamster, Nick %A Gao,Lixin %A Rexford,Jennifer %X Today's Internet Service Providers (ISPs) serve two roles: managing their network infrastructure and providing (arguably limited) services to end users. We argue that coupling these roles impedes the deployment of new protocols and architectures, and that the future Internet should support two separate entities: infrastructure providers (who manage the physical infrastructure) and service providers (who deploy network protocols and offer end-to-end services). We present a high-level design for Cabo, an architecture that enables this separation; we also describe challenges associated with realizing this architecture. %B SIGCOMM Comput. Commun. Rev. %V 37 %P 61 - 64 %8 2007/01// %@ 0146-4833 %G eng %U http://doi.acm.org/10.1145/1198255.1198265 %N 1 %R 10.1145/1198255.1198265 %0 Journal Article %J IEEE/RSJ IROS Workshop: From sensors to human spatial concepts (FS2HSC) %D 2007 %T Human activity understanding using visibility context %A Morariu,V.I. %A Prasad,V. S.N %A Davis, Larry S. %X Visibility in architectural layouts affects human navigation, so a suitable representation of visibility context is useful in understanding human activity.
Motivated by studies of spatial behavior, we use a set of features from visibility analysis to represent spatial context in the interpretation of human activity. An agent’s goal, belief about the world, trajectory and visible layout are considered to be random variables that evolve with time during the agent’s movement, and are modeled in a Bayesian framework. We design a search-based task in a sprite-world, and compare the results of our framework to those of human subject experiments. Our findings confirm that knowledge of spatial layout improves human interpretations of the trajectories (implying that visibility context is useful in this task). Since our framework demonstrates performance close to that of human subjects with knowledge of spatial layout, our findings confirm that our model makes adequate use of visibility context. In addition, the representation we use for visibility context allows our model to generalize well when presented with new scenes. %B IEEE/RSJ IROS Workshop: From sensors to human spatial concepts (FS2HSC) %8 2007/// %G eng %0 Conference Paper %B Image Analysis and Processing, 2007. ICIAP 2007. 14th International Conference on %D 2007 %T Human Appearance Change Detection %A Ghanem,N.M. %A Davis, Larry S. %K appearance change detection %K boosting %K codeword frequency difference map %K histogram intersection map %K left package detection %K machine learning %K occupancy difference map %K support vector machine classifier %K vector quantization %K video surveillance %X We present a machine learning approach to detect changes in human appearance between instances of the same person that may be taken with different cameras, but over short periods of time. For each video sequence of the person, we approximately align each frame in the sequence and then generate a set of features that captures the differences between the two sequences. The features are the occupancy difference map, the codeword frequency difference map (based on a vector quantization of the set of colors and frequencies) at each aligned pixel and the histogram intersection map. A boosting technique is then applied to learn the most discriminative set of features, and these features are then used to train a support vector machine classifier to recognize significant appearance changes. We apply our approach to the problem of left package detection. We train the classifiers on a laboratory database of videos in which people are seen with and without common articles that people carry - backpacks and suitcases. We test the approach on some real airport video sequences. Moving to the real world videos requires addressing additional problems, including the view selection problem and the frame selection problem. %B Image Analysis and Processing, 2007. ICIAP 2007. 14th International Conference on %P 536 - 541 %8 2007/09// %G eng %R 10.1109/ICIAP.2007.4362833 %0 Journal Article %J Machine Vision and Applications %D 2007 %T Human appearance modeling for matching across video sequences %A Yu,Y. %A Harwood,D. %A Yoon,K. %A Davis, Larry S. %X We present an appearance model for establishing correspondence between tracks of people which may be taken at different places, at different times or across different cameras.
The appearance model is constructed by kernel density estimation. To incorporate structural information and to achieve invariance to motion and pose, besides color features, an additional feature of path-length is used. To achieve illumination invariance, two types of illumination insensitive color features are discussed: brightness color feature and RGB rank feature. The similarity between a test image and an appearance model is measured by the information gain or Kullback–Leibler distance. To thoroughly represent the information contained in a video sequence with as little data as possible, a key frame selection and matching scheme is proposed. Experimental results demonstrate the important role of the path-length feature in the appearance model and the effectiveness of the proposed appearance model and matching method. %B Machine Vision and Applications %V 18 %P 139 - 149 %8 2007/// %G eng %N 3 %0 Conference Paper %B Computer Vision and Pattern Recognition, 2007. CVPR '07. IEEE Conference on %D 2007 %T Human Identification using Gait and Face %A Chellapa, Rama %A Roy-Chowdhury, A.K. %A Kale, A. %K face recognition %K gait recognition %K human identification %K NIST database %K planar approximation %K probabilistic fusion %K video signal processing %K view-invariant gait recognition %K visual-hull approach %X In general the visual-hull approach for performing integrated face and gait recognition requires at least two cameras. In this paper we present experimental results for fusion of face and gait for the single camera case. We considered the NIST database which contains outdoor face and gait data for 30 subjects. In the NIST database, subjects walk along an inverted Sigma pattern. In (A. Kale, et al., 2003), we presented a view-invariant gait recognition algorithm for the single camera case along with some experimental evaluations. In this chapter we present the results of our view-invariant gait recognition algorithm in (A. Kale, et al., 2003) on the NIST database. The algorithm is based on the planar approximation of the person which is valid when the person walks far away from the camera. In (S. Zhou et al., 2003), an algorithm for probabilistic recognition of human faces from video was proposed and the results were demonstrated on the NIST database. Details of these methods can be found in the respective papers. We give an outline of the fusion strategy here. %B Computer Vision and Pattern Recognition, 2007. CVPR '07. IEEE Conference on %P 1 - 2 %8 2007/06// %G eng %R 10.1109/CVPR.2007.383523 %0 Journal Article %J IEEE Intelligent Systems %D 2007 %T Human Responsibility for Autonomous Agents %A Shneiderman, Ben %K Automatic control %K Autonomous agents %K autonomous systems %K Bandwidth %K Computer bugs %K Computer errors %K Control systems %K data privacy %K Human-computer interaction %K HUMANS %K Robots %K Safety %K Software design %X Automated or autonomous systems can sometimes fail harmlessly, but they can also destroy data, compromise privacy, and consume resources, such as bandwidth or server capacity. What's more troubling is that automated systems embedded in vital systems can cause financial losses, destruction of property, and loss of life. Controlling these dangers will increase trust while enabling broader use of these systems with higher degrees of safety.
Obvious threats stem from design errors and software bugs, but we can't overlook mistaken assumptions by designers, unanticipated actions by humans, and interference from other computerized systems. This article is part of a special issue on Interacting with Autonomy. %B IEEE Intelligent Systems %V 22 %P 60 - 61 %8 2007/04//March %@ 1541-1672 %G eng %N 2 %R 10.1109/MIS.2007.32 %0 Book Section %B Human-Computer Interaction – INTERACT 2007 %D 2007 %T Human Values for Shaping the Made World %A Shneiderman, Ben %E Baranauskas,Cécilia %E Palanque,Philippe %E Abascal,Julio %E Barbosa,Simone %X Interface design principles have been effective in shaping new desktop applications, web-based resources, and mobile devices. Usability and sociability promote successful online communities and social network services. The contributions of human-computer interaction researchers have been effective in raising the quality of design of many products and services. As our influence grows, we can play an even more profound role in guaranteeing that enduring human values are embedded in the next generation of technology. This talk identifies which goals are realistic, such as universality, responsibility, trust, empathy, and privacy, and how we might ensure that they become part of future services and systems. %B Human-Computer Interaction – INTERACT 2007 %S Lecture Notes in Computer Science %I Springer Berlin / Heidelberg %V 4662 %P 1 - 1 %8 2007/// %@ 978-3-540-74794-9 %G eng %U http://dx.doi.org/10.1007/978-3-540-74796-3_1 %0 Journal Article %J Pattern Analysis and Machine Intelligence, IEEE Transactions on %D 2007 %T Hybrid Detectors for Subpixel Targets %A Broadwater, J. %A Chellapa, Rama %K ACE subpixel algorithm %K AMSD subpixel algorithm %K hybrid detectors %K hyperspectral imagery analysis %K object detection %K spectral analysis %K statistical analysis %K subpixel target detection %K subspace detection %K target tracking %K Algorithms %K Artificial Intelligence %K Image Enhancement %K Image Interpretation, Computer-Assisted %K Models, Statistical %K Numerical Analysis, Computer-Assisted %K Pattern Recognition, Automated %K Reproducibility of Results %K Sensitivity and Specificity %K Signal Processing, Computer-Assisted %X Subpixel detection is a challenging problem in hyperspectral imagery analysis. Since the target size is smaller than the size of a pixel, detection algorithms must rely solely on spectral information. A number of different algorithms have been developed over the years to accomplish this task, but most detectors have taken either a purely statistical or a physics-based approach to the problem. We present two new hybrid detectors that take advantage of these approaches by modeling the background using both physics and statistics. Results demonstrate improved performance over the well-known AMSD and ACE subpixel algorithms in experiments that include multiple targets, images, and area types - especially when dealing with weak targets in complex backgrounds.
%B Pattern Analysis and Machine Intelligence, IEEE Transactions on %V 29 %P 1891 - 1903 %8 2007/11// %@ 0162-8828 %G eng %N 11 %R 10.1109/TPAMI.2007.1104 %0 Journal Article %J Proceedings of SPIE %D 2007 %T Hyperstereo algorithms for the perception of terrain drop-offs %A Mohananchettiar,Arunkumar %A Cevher,Volkan %A CuQlock-Knopp,Grayson %A Chellapa, Rama %A Merritt,John %X The timely detection of terrain drop-offs is critical for safe and efficient off-road mobility, whether with human drivers or with terrain navigation systems that use autonomous machine-vision. In this paper, we propose a joint tracking and detection machine-vision approach for accurate and efficient terrain drop-off detection and localization. We formulate the problem using a hyperstereo camera system and build an elevation map using the range map obtained from a stereo algorithm. A terrain drop-off is then detected with the use of optimal drop-off detection filters applied to the range map. For more robust results, a method based on multi-frame fusion of terrain drop-off evidence is proposed. Also presented is a fast, direct method that does not employ stereo disparity mapping. We compared our algorithm's detection of terrain drop-offs with time-code data from human observers viewing the same video clips in stereoscopic 3D. The algorithm detected terrain drop-offs an average of 9 seconds sooner, or 12m farther, than the human observers. This suggests that passive image-based hyperstereo machine-vision may be useful as an early warning system for off-road mobility. %B Proceedings of SPIE %V 6557 %P 65570L-1 - 65570L-10 %8 2007/04/27/ %@ 0277786X %G eng %U http://spiedigitallibrary.org/proceedings/resource/2/psisdg/6557/1/65570L_1?isAuthorized=no %N 1 %R doi:10.1117/12.721066 %0 Conference Paper %B Acoustics, Speech and Signal Processing, 2006. ICASSP 2006 Proceedings. 2006 IEEE International Conference on %D 2006 %T Headphone-Based Reproduction of 3D Auditory Scenes Captured by Spherical/Hemispherical Microphone Arrays %A Zhiyun Li %A Duraiswami, Ramani %K 3D auditory scenes %K array signal processing %K audio signal processing %K head related transfer function %K headphone-based reproduction %K spatial filters %K spherical harmonics %K spherical/hemispherical microphone arrays %X We propose a method to reproduce 3D auditory scenes captured by spherical microphone arrays over headphones. This algorithm employs expansions of the captured sound and the head related transfer function over the sphere and uses the orthonormality of the spherical harmonics. Using a spherical microphone array, we first record the 3D auditory scene, then the recordings are spatially filtered and reproduced through headphones in the orthogonal beam-space of the head related transfer functions (HRTFs). We use the KEMAR HRTF measurements to verify our algorithm. %B Acoustics, Speech and Signal Processing, 2006. ICASSP 2006 Proceedings. 2006 IEEE International Conference on %V 5 %P V - V %8 2006/05// %G eng %R 10.1109/ICASSP.2006.1661281 %0 Journal Article %J The Journal of the Acoustical Society of America %D 2006 %T Head-related transfer functions via the fast multipole accelerated boundary element method %A Gumerov, Nail A. %A Duraiswami, Ramani %A Zotkin,Dmitry N %X Computation of head-related transfer functions (HRTF) via geometrical meshes and numerical methods has been suggested by a number of authors.
An issue facing this approach is the large computational time needed for high frequencies, where the discretization must include hundreds of thousands of elements. Conventional computational methods are unable to achieve such computations without excessive time or memory. We use a newly developed fast multipole accelerated boundary element method (FMM/BEM) that scales linearly both in time and memory. The method is applied to the mesh of the widely used KEMAR manikin and its HRTF computed up to 20 kHz. The results are compared with available experimental measurements. %B The Journal of the Acoustical Society of America %V 120 %P 3342 - 3343 %8 2006/// %G eng %U http://link.aip.org/link/?JAS/120/3342/6 %N 5 %0 Conference Paper %B Proceedings of the 18th conference on Innovative applications of artificial intelligence - Volume 2 %D 2006 %T Heuristic search and information visualization methods for school redistricting %A desJardins, Marie %A Bulka,Blazej %A Carr,Ryan %A Hunt,Andrew %A Rathod,Priyang %A Rheingans,Penny %X We describe an application of AI search and information visualization techniques to the problem of school redistricting, in which students are assigned to home schools within a county or school district. This is a multicriteria optimization problem in which competing objectives must be considered, such as school capacity, busing costs, and socioeconomic distribution. Because of the complexity of the decision-making problem, tools are needed to help end users generate, evaluate, and compare alternative school assignment plans. A key goal of our research is to aid users in finding multiple qualitatively different redistricting plans that represent different tradeoffs in the decision space. We present heuristic search methods that can be used to find a set of qualitatively different plans, and give empirical results of these search methods on population data from the school district of Howard County, Maryland. We show the resulting plans using novel visualization methods that we have developed for summarizing and comparing alternative plans. %B Proceedings of the 18th conference on Innovative applications of artificial intelligence - Volume 2 %S IAAI'06 %I AAAI Press %P 1774 - 1781 %8 2006/// %@ 978-1-57735-281-5 %G eng %U http://dl.acm.org/citation.cfm?id=1597122.1597136 %0 Journal Article %J IEEE Multimedia %D 2006 %T Hierarchical Layouts for Photo Libraries %A Kustanowitz,J. %A Shneiderman, Ben %K annotated digital photo collection %K auto-layout technique %K bi-level hierarchies %K Computer science %K data visualisation %K digital libraries %K document image processing %K Information Visualization %K interactive algorithms %K interactive displays %K Libraries %K Lifting equipment %K Organization Charts %K photo collections %K photo layouts %K photo library %K Photography %K quantum content %K Silver %K Springs %K User interfaces %K Web pages %X We use an annotated digital photo collection to demonstrate a two-level auto-layout technique consisting of a central primary region with secondary regions surrounding it. Because the object sizes within regions can only be changed in discrete units, we refer to them as quantum content. Our real-time algorithms enable a compelling interactive display as users resize the canvas, or move and resize the primary region. %B IEEE Multimedia %V 13 %P 62 - 72 %8 2006/12//Oct %@ 1070-986X %G eng %N 4 %R 10.1109/MMUL.2006.83 %0 Journal Article %J Computer %D 2006 %T High-confidence medical device software and systems %A Lee,I. %A Pappas,G.
J %A Cleaveland, Rance %A Hatcliff,J. %A Krogh,B. H %A Lee,P. %A Rubin,H. %A Sha,L. %K Aging %K biomedical equipment %K Clinical software engineering %K Costs %K FDA device %K Food and Drug Administration device %K health and safety %K health care %K health information system %K health safety %K healthcare delivery %K Healthcare technology %K Information systems %K medical computing %K medical device manufacturing %K medical device software development %K medical device systems development %K medical information systems %K Medical services %K Medical software %K Medical tests %K networked medical devices %K Production systems %K Software design %K Software safety %K Software systems %K Software testing %K US healthcare quality %X Given the shortage of caregivers and the increase in an aging US population, the future of US healthcare quality does not look promising and definitely is unlikely to be cheaper. Advances in health information systems and healthcare technology offer a tremendous opportunity for improving the quality of care while reducing costs. The development and production of medical device software and systems is a crucial issue, both for the US economy and for ensuring safe advances in healthcare delivery. As devices become increasingly smaller in physical terms but larger in software terms, the design, testing, and eventual Food and Drug Administration (FDA) device approval is becoming much more expensive for medical device manufacturers both in terms of time and cost. Furthermore, the number of devices that have recently been recalled due to software and hardware problems is increasing at an alarming rate. As medical devices are becoming increasingly networked, ensuring even the same level of health safety seems a challenge. %B Computer %V 39 %P 33 - 38 %8 2006/04// %@ 0018-9162 %G eng %N 4 %R 10.1109/MC.2006.127 %0 Thesis %D 2006 %T A holistic approach to structure from motion %A Hui Ji %X This dissertation investigates the general structure from motion problem. That is, how to compute in an unconstrained environment 3D scene structure, camera motion and moving objects from video sequences. We present a framework which uses concatenated feed-back loops to overcome the main difficulty in the structure from motion problem: the chicken-and-egg dilemma between scene segmentation and structure recovery. The idea is that we compute structure and motion in stages by gradually computing 3D scene information of increasing complexity and using processes which operate on increasingly large spatial image areas. Within this framework, we developed three modules. First, we introduce a new constraint for the estimation of shape using image features from multiple views. We analyze this constraint and show that noise leads to unavoidable mis-estimation of the shape, which also predicts the erroneous shape perception in humans. This insight provides a clear argument for the need for feed-back loops. Second, a novel constraint on shape is developed which allows us to connect multiple frames in the estimation of camera motion by matching only small image patches. Third, we present a texture descriptor for matching areas of extended sizes. The advantage of this texture descriptor, which is based on fractal geometry, lies in its invariance to any smooth mapping (Bi-Lipschitz transform) including changes of viewpoint, illumination and surface distortion. Finally, we apply our framework to the problem of super-resolution imaging.
We use the 3D motion estimation together with a novel wavelet-based reconstruction scheme to reconstruct a high-resolution image from a sequence of low-resolution images. %I University of Maryland at College Park %C College Park, MD, USA %8 2006/// %G eng %0 Journal Article %J Proceedings of the IEEE %D 2006 %T How Multirobot Systems Research will Accelerate our Understanding of Social Animal Behavior %A Balch, T. %A Dellaert, F. %A Feldman, A. %A Guillory, A. %A Isbell, C.L. %A Zia Khan %A Pratt, S.C. %A Stein, A.N. %A Wilde, H. %K Acceleration %K Animal behavior %K ant movement tracking %K Artificial intelligence %K biology computing %K Computer vision %K control engineering computing %K Insects %K Intelligent robots %K Labeling %K monkey movement tracking %K multi-robot systems %K multirobot systems %K robotics algorithms %K Robotics and automation %K social animal behavior %K social animals %K social insect behavior %K Speech recognition %K tracking %X Our understanding of social insect behavior has significantly influenced artificial intelligence (AI) and multirobot systems' research (e.g., ant algorithms and swarm robotics). In this work, however, we focus on the opposite question: "How can multirobot systems research contribute to the understanding of social animal behavior?" As we show, we are able to contribute at several levels. First, using algorithms that originated in the robotics community, we can track animals under observation to provide essential quantitative data for animal behavior research. Second, by developing and applying algorithms originating in speech recognition and computer vision, we can automatically label the behavior of animals under observation. In some cases the automatic labeling is more accurate and consistent than manual behavior identification. Our ultimate goal, however, is to automatically create, from observation, executable models of behavior. An executable model is a control program for an agent that can run in simulation (or on a robot). The representation for these executable models is drawn from research in multirobot systems programming. In this paper we present the algorithms we have developed for tracking, recognizing, and learning models of social animal behavior, details of their implementation, and quantitative experimental results using them to study social insects %B Proceedings of the IEEE %V 94 %P 1445 - 1463 %8 2006/07// %@ 0018-9219 %G eng %N 7 %0 Journal Article %J Semantic Multimedia %D 2006 %T Human activity language: Grounding concepts with a linguistic framework %A Guerra-Filho,G. %A Aloimonos, J. %B Semantic Multimedia %P 86 - 100 %8 2006/// %G eng %0 Book %D 2006 %T Human identification based on gait %A Nixon,Mark S. %A Tan,Tieniu %A Chellapa, Rama %K Biometric identification %K Computers / Computer Graphics %K Computers / Computer Science %K Computers / Computer Vision & Pattern Recognition %K Computers / General %K Computers / Information Technology %K Computers / Programming / Software Development %K Computers / Software Development & Engineering / Systems Analysis & Design %K Gait in humans %K Medical / Physiology %K Social Science / Anthropology / Physical %X Biometrics now affect many people's lives, and is the focus of much academic research and commercial development. Gait is one of the most recent biometrics, with its own unique advantages. Gait recognizes people by the way they walk and run, analyzes movement, which in turn implies analyzing sequences of images. 
This professional book introduces developments from the laboratories of very distinguished researchers within this relatively new area of biometrics and clearly establishes human gait as a biometric. Human Identification Based on Gait provides a snapshot of all the biometric work in human identification by gait (all major centers for research are indicated in this book). To complete the picture, studies are included from medicine, psychology and other areas wherein we find not only justification for the use of gait as a biometric, but also pointers to techniques and to analysis. Human Identification Based on Gait is designed for a professional audience, composed of researchers and practitioners in industry. This book is also suitable as a secondary text for graduate-level students in computer science. %I Springer Science & Business Media %8 2006/// %@ 9780387244242 %G eng %0 Conference Paper %B Shape Modeling and Applications, 2005 International Conference %D 2005 %T The half-edge tree: a compact data structure for level-of-detail tetrahedral meshes %A Danovaro,E. %A De Floriani, Leila %A Magillo,P. %A Puppo,E. %A Sobrero,D. %A Sokolovsky,N. %K compact data structures %K computational geometry %K edge collapse %K half-edge tree %K iterative methods %K level-of-detail model %K mesh encoding %K tetrahedral meshes %X We propose a new data structure for the compact encoding of a level-of-detail (LOD) model of a three-dimensional scalar field based on unstructured tetrahedral meshes. Such a data structure, called a half-edge tree (HET), is built through the iterative application of a half-edge collapse, i.e. by contracting an edge to one of its endpoints. We also show that selective refined meshes extracted from an HET contain on average about 34% and up to 75% less tetrahedra than those extracted from an LOD model built through a general edge collapse. %B Shape Modeling and Applications, 2005 International Conference %P 332 - 337 %8 2005/06// %G eng %R 10.1109/SMI.2005.47 %0 Journal Article %J Proceedings of SPIE %D 2005 %T Handling uneven embedding capacity in binary images: a revisit %A M. Wu %A Fridrich,Jessica %A Goljan,Miroslav %A Gou,Hongmei %X Hiding data in binary images can facilitate the authentication and annotation of important document images in digital domain. A representative approach is to first identify pixels whose binary color can be flipped without introducing noticeable artifacts, and then embed one bit in each non-overlapping block by adjusting the flippable pixel values to obtain the desired block parity. The distribution of these flippable pixels is highly uneven across the image, which is handled by random shuffling in the literature. In this paper, we revisit the problem of data embedding for binary images and investigate the incorporation of a most recent steganography framework known as the wet paper coding to improve the embedding capacity. The wet paper codes naturally handle the uneven embedding capacity through randomized projections. In contrast to the previous approach, where only a small portion of the flippable pixels are actually utilized in the embedding, the wet paper codes allow for a high utilization of pixels that have high flippability score for embedding, thus giving a significantly improved embedding capacity than the previous approach. The performance of the proposed technique is demonstrated on several representative images.
We also analyze the perceptual impact and capacity-robustness relation of the new approach. %B Proceedings of SPIE %V 5681 %P 194 - 205 %8 2005/03/21/ %@ 0277786X %G eng %U http://spiedigitallibrary.org/proceedings/resource/2/psisdg/5681/1/194_1?isAuthorized=no %N 1 %R doi:10.1117/12.587379 %0 Conference Paper %B Document Analysis and Recognition, 2005. Proceedings. Eighth International Conference on %D 2005 %T Handwriting matching and its application to handwriting synthesis %A Yefeng Zheng %A David Doermann %K handwriting recognition %K handwriting synthesis %K image matching %K image sampling %K learning (artificial intelligence) %K point matching %K shape deformation %X Since it is extremely expensive to collect a large volume of handwriting samples, synthesized data are often used to enlarge the training set. We argue that, in order to generate good handwriting samples, a synthesis algorithm should learn the shape deformation characteristics of handwriting from real samples. In this paper, we present a point matching algorithm to learn the deformation, and apply it to handwriting synthesis. Preliminary experiments show the advantages of our approach. %B Document Analysis and Recognition, 2005. Proceedings. Eighth International Conference on %P 861 - 865 Vol. 2 %8 2005/09/01/aug %G eng %R 10.1109/ICDAR.2005.122 %0 Report %D 2005 %T Headline Generation for Written and Broadcast News %A Zajic, David %A Dorr, Bonnie J %A Schwartz,Richard %K *INFORMATION RETRIEVAL %K *RADIO BROADCASTING %K AUTOMATIC %K AUTOMATIC SUMMARIZATION %K HEDGE TRIMMER %K INFORMATION SCIENCE %K RADIO COMMUNICATIONS %K STATISTICAL PROCESSES %K TEST AND EVALUATION %K WORDS(LANGUAGE) %X This technical report is an overview of work done on Headline Generation for written and broadcast news. The report covers HMM Hedge, a statistical approach based on the noisy channel model, Hedge Trimmer, a parse-and-trim approach using linguistically motivated trimming rules, and Topiary, a combination of Trimmer and Unsupervised Topic Discovery. Automatic evaluation of summaries using ROUGE and BLEU is described and used to evaluate the Headline Generation systems. %I Institute for Advanced Computer Studies, Univ of Maryland, College Park %8 2005/03// %G eng %U http://stinet.dtic.mil/oai/oai?&verb=getRecord&metadataPrefix=html&identifier=ADA454198 %0 Journal Article %J Institute for Systems Research Technical Reports %D 2005 %T Help! I'm Lost: User Frustration in Web Navigation (2003) %A Lazar,Jonathan %A Bessiere,Katie %A Ceaparu,Irina %A Robinson,John %A Shneiderman, Ben %K Technical Report %X Computers can be valuable tools, and networked resources via the Internet can be beneficial to many different populations and communities. Unfortunately, when people are unable to reach their task goals due to frustrating experiences, this can hinder the effectiveness of technology. This research summary provides information about the user frustration research that has been performed at the University of Maryland and Towson University. Causes of user frustration are discussed in this research summary, along with the surprising finding that nearly one-third to one-half of the time spent in front of the computer is wasted due to frustrating experiences.
Furthermore, when interfaces are planned to be deceptive and confusing, this can lead to increased frustration. Implications for designers and users are discussed. %B Institute for Systems Research Technical Reports %8 2005/// %G eng %U http://drum.lib.umd.edu/handle/1903/6508 %0 Conference Paper %D 2005 %T A hierarchical task-network planner based on symbolic model checking %A Kuter,U. %A Nau, Dana S. %A Pistore,M. %A Traverso,P. %X Although several approaches have been developed for planning in nondeterministic domains, solving large planning problems is still quite difficult. In this work, we present a novel algorithm, called YoYo, for planning in nondeterministic domains under the assumption of full observability. This algorithm enables us to combine the power of search-control strategies as in Planning with Hierarchical Task Networks (HTNs) with techniques from Planning via Symbolic Model-Checking (SMC). Our experimental evaluation confirms the potentialities of our approach, demonstrating that it combines the advantages of these paradigms. %P 300 - 309 %8 2005/// %G eng %U https://www.aaai.org/Papers/ICAPS/2005/ICAPS05-031.pdf %0 Conference Paper %B Proceedings of the conference on Human Language Technology and Empirical Methods in Natural Language Processing %D 2005 %T The Hiero machine translation system: Extensions, evaluation, and analysis %A Chiang,D. %A Lopez,A. %A Madnani,N. %A Monz,C. %A Resnik, Philip %A Subotin,M. %B Proceedings of the conference on Human Language Technology and Empirical Methods in Natural Language Processing %P 779 - 786 %8 2005/// %G eng %0 Journal Article %J Proceedings 119th convention of AES %D 2005 %T High Order Spatial Audio Capture and Binaural Head-Tracked Playback over Headphones with HRTF Cues %A Duraiswami, Ramani %A Zotkin,Dmitry N %A Li,Z. %A Grassi,E. %A Gumerov, Nail A. %A Davis, Larry S. %X A theory and a system for capturing an audio scene and then rendering it remotely are developed and presented. The sound capture is performed with a spherical microphone array. The sound field at the location of the array is deduced from the captured sound and is represented using either spherical wave-functions or plane-wave expansions. The sound field representation is then transmitted to a remote location for immediate rendering or stored for later use. The sound renderer, coupled with the head tracker, reconstructs the acoustic field using individualized head-related transfer functions to preserve the perceptual spatial structure of the audio scene. Rigorous error bounds and Nyquist-like sampling criterion for the representation of the sound field are presented and verified. %B Proceedings 119th convention of AES %8 2005/// %G eng %0 Conference Paper %B Audio Engineering Society Convention 119 %D 2005 %T High Order Spatial Audio Capture and Its Binaural Head-Tracked Playback Over Headphones with HRTF Cues %A Davis, Larry S.
%A Duraiswami, Ramani %A Grassi,E. %A Gumerov, Nail A. %A Li,Z. %A Zotkin,Dmitry N %X A theory and a system for capturing an audio scene and then rendering it remotely are developed and presented. The sound capture is performed with a spherical microphone array. The sound field at the location, and in a region of space in the neighborhood, of the array is deduced from the captured sound and represented using either spherical wave-functions or plane-wave expansions. The representation is then transmitted to a remote location for immediate rendering or stored for later reproduction. The sound renderer, coupled with the head tracker, reconstructs the acoustic field using individualized head-related transfer functions to preserve the perceptual spatial structure of the audio scene. Rigorous error bounds and a Nyquist-like sampling criterion for the representation of the sound field are presented and verified. %B Audio Engineering Society Convention 119 %8 2005/// %G eng %U http://www.aes.org/e-lib/browse.cfm?elib=13369 %0 Conference Paper %B Parallel and Distributed Processing Symposium, 2005. Proceedings. 19th IEEE International %D 2005 %T High Performance Communication between Parallel Programs %A Jae-Yong Lee %A Sussman, Alan %K Adaptive arrays %K Analytical models %K Chaotic communication %K Computational modeling %K Computer science %K Data analysis %K data distribution %K Educational institutions %K high performance communication %K image data analysis %K image resolution %K inter-program communication patterns %K InterComm %K Libraries %K Message passing %K parallel languages %K parallel libraries %K parallel programming %K parallel programs %K performance evaluation %K Wind %X We present algorithms for high performance communication between message-passing parallel programs, and evaluate the algorithms as implemented in InterComm. InterComm is a framework to couple parallel programs in the presence of complex data distributions within a coupled application. Multiple parallel libraries and languages may be used in the different programs of a single coupled application. The ability to couple such programs is required in many emerging application areas, such as complex simulations that model physical phenomena at multiple scales and resolutions, and image data analysis applications. We describe the new algorithms we have developed for computing inter-program communication patterns. We present experimental results showing the performance of various algorithmic tradeoffs, and also compare performance against an earlier system. %B Parallel and Distributed Processing Symposium, 2005. Proceedings. 19th IEEE International %I IEEE %P 177b %8 2005/04// %@ 0-7695-2312-9 %G eng %R 10.1109/IPDPS.2005.243 %0 Journal Article %J Journal of Computational Biology %D 2005 %T HOPE: A Homotopy Optimization Method for Protein Structure Prediction %A Dunlavy,Daniel M. %A O'Leary, Dianne P. %A Klimov,Dmitri %A Thirumalai,D. %X We use a homotopy optimization method, HOPE, to minimize the potential energy associated with a protein model. The method uses the minimum energy conformation of one protein as a template to predict the lowest energy structure of a query sequence. This objective is achieved by following a path of conformations determined by a homotopy between the potential energy functions for the two proteins. Ensembles of solutions are produced by perturbing conformations along the path, increasing the likelihood of predicting correct structures.
Successful results are presented for pairs of homologous proteins, where HOPE is compared to a variant of Newton's method and to simulated annealing. %B Journal of Computational Biology %V 12 %P 1275 - 1288 %8 2005/12// %@ 1066-5277, 1557-8666 %G eng %U http://www.liebertonline.com/doi/abs/10.1089/cmb.2005.12.1275 %N 10 %R 10.1089/cmb.2005.12.1275 %0 Journal Article %J Proceedings of HCII 2005 %D 2005 %T How do I find blue books about dogs? The errors and frustrations of young digital library users %A Hutchinson,H. %A Druin, Allison %A Bederson, Benjamin B. %A Reuter,K. %A Rose,A. %A Weeks,A. C %B Proceedings of HCII 2005 %8 2005/// %G eng %0 Journal Article %J Computer Vision and Image Understanding %D 2005 %T Human action-recognition using mutual invariants %A Parameswaran,Vasu %A Chellapa, Rama %K Human action-recognition %K Model based invariants %K Mutual invariants %K View invariance %X Static and temporally varying 3D invariants are proposed for capturing the spatio-temporal dynamics of a general human action to enable its representation in a compact, view-invariant manner. Two variants of the representation are presented and studied: (1) a restricted-3D version, whose theory and implementation are simple and efficient but which can be applied only to a restricted class of human action, and (2) a full-3D version, whose theory and implementation are more complex but which can be applied to any general human action. A detailed analysis of the two representations is presented. We show why a straightforward implementation of the key ideas does not work well in the general case, and present strategies designed to overcome inherent weaknesses in the approach. What results is an approach for human action modeling and recognition that is not only invariant to viewpoint, but is also robust enough to handle different people, different speeds of action (and hence, frame rate) and minor variabilities in a given action, while encoding sufficient distinction among actions. Results on 2D projections of human motion capture and on manually segmented real image sequences demonstrate the effectiveness of the approach. %B Computer Vision and Image Understanding %V 98 %P 294 - 324 %8 2005/05// %@ 1077-3142 %G eng %U http://www.sciencedirect.com/science/article/pii/S107731420400147X %N 2 %R 10.1016/j.cviu.2004.09.002 %0 Conference Paper %B Proceedings of the 2005 national conference on Digital government research %D 2005 %T Human-computer interaction themes in digital government: web site comprehension and statistics visualization %A Shneiderman, Ben %X Digital government applications often involve web sites to provide information for citizens and visitors from essential services such as passport application or motor vehicle registration to discretionary, but highly popular applications such as recreation and parks information. Another aspect of government web sites is the delivery of statistical reports with summary tables, aggregated data resources, and extensive raw data files. This review focuses on human-computer interaction themes to improve designs of web sites and statistics visualization, specifically as they relate to digital government sites. It also addresses research methods that are appropriate for digital government interfaces.
%B Proceedings of the 2005 national conference on Digital government research %S dg.o '05 %I Digital Government Society of North America %P 7 - 8 %8 2005/// %G eng %U http://dl.acm.org/citation.cfm?id=1065226.1065230 %0 Thesis %D 2004 %T Handling estimation errors in database query processing %A Deshpande, Amol %X A query optimizer is among the core pieces of a modern database management system, responsible for choosing a query execution plan for a user-provided declarative query. Query optimizers typically employ a cost estimation procedure to compare costs of different execution plans. Errors in this estimation process are quite common and arise due to reasons such as incomplete and insufficient statistical information about the data, and highly variable runtime environments that can affect the plan costs in unpredictable manners. Two approaches have been previously proposed for handling such estimation errors: (1) building sophisticated synopsis techniques that succinctly summarize the data in the database and thus provide more statistical information to the query optimizer, and (2) aggressive reoptimization schemes that attempt to change the execution plans chosen to execute queries, on-the-fly. In the first part of this dissertation, we focus on building and using sophisticated synopsis techniques in the context of a traditional query optimizer. We propose a class of synopsis techniques called DEPENDENCY-BASED Histograms that use statistical interaction models to exploit the correlations in the data, and to estimate selectivities efficiently. We also develop an efficient algorithm to search through the class of statistical models that we employ. Using sophisticated synopsis techniques such as these in the context of a traditional query optimizer poses interesting computational challenges; a naive approach to doing this could make the query optimization process so expensive as to be ineffective. This naturally leads to an “estimation planning” problem that asks for the best strategy to compute all the estimates required by an optimizer using the synopses at its disposal. We analyze this problem, its solution space, and propose algorithms to efficiently find good estimation plans. There are many scenarios where sophisticated synopsis techniques may not be applicable; examples include wide area and web based data sources, data streams and complex data domains. In the second part of this dissertation, we explore a highly-adaptive query processing technique called eddies that treats query processing as routing of tuples through operators, and adapts to changing data and runtime characteristics by continuously changing the order in which tuples are routed. We analyze the eddies architecture and identify a fundamental flaw in the basic design of the architecture: the burden of history in routing. (Abstract shortened by UMI.) %I University of California at Berkeley %C Berkeley, CA, USA %8 2004/// %G eng %0 Report %D 2004 %T Headline evaluation experiment results %A Zajic, David %A Dorr, Bonnie J %A Schwartz,R. %A President,S. %X This technical report describes an experiment intended to show that different summarization techniques have an effect on human performance of an extrinsic task. The task is document selection in the context of information retrieval. We conclude that the task was too difficult and ambiguous for subjects to perform at the level required in order to make statistically significant claims about the relationship between summarization techniques and human performance.
An alternate interpretation of the experimental results is described and plans for future experiments are discussed. %I University of Maryland, College Park, MD. UMIACS-TR-2004-18 %8 2004/// %G eng %0 Journal Article %J Advances in Database Technology-EDBT 2004 %D 2004 %T Hierarchical in-network data aggregation with quality guarantees %A Deligiannakis,A. %A Kotidis,Y. %A Roussopoulos, Nick %B Advances in Database Technology-EDBT 2004 %P 577 - 578 %8 2004/// %G eng %0 Conference Paper %B Parallel and Distributed Processing Symposium, 2004. Proceedings. 18th International %D 2004 %T Hierarchical routing with soft-state replicas in TerraDir %A Silaghi,B. %A Gopalakrishnan,Vijay %A Bhattacharjee, Bobby %A Keleher,P. %K adaptive replication protocol %K demand distribution %K hierarchical namespaces %K latency guarantees %K load balancing %K peer-to-peer systems %K replica consistency %K resource allocation %K soft-state replicas %K TerraDir %K topological constraints %X Summary form only given. Recent work on peer-to-peer systems has demonstrated the ability to deliver low latencies and good load balance when demand for data is relatively uniform. We describe an adaptive replication protocol that delivers low latencies and good load balance even when demand is heavily skewed. The protocol can withstand arbitrary and instantaneous changes in demand distribution. Our approach also addresses classical concerns related to topological constraints of asymmetrical namespaces, such as hierarchical bottlenecks in the context of hierarchical namespaces. The protocol replicates routing state in an ad-hoc manner based on profiled information, is lightweight, scalable, and requires no replica consistency guarantees. %B Parallel and Distributed Processing Symposium, 2004. Proceedings. 18th International %P 48 - 48 %8 2004/04// %G eng %R 10.1109/IPDPS.2004.1302967 %0 Journal Article %J Genome Research %D 2004 %T Hierarchical Scaffolding With Bambus %A Pop, Mihai %A Kosack,Daniel S. %A Salzberg,Steven L. %X The output of a genome assembler generally comprises a collection of contiguous DNA sequences (contigs) whose relative placement along the genome is not defined. A procedure called scaffolding is commonly used to order and orient these contigs using paired read information. This ordering of contigs is an essential step when finishing and analyzing the data from a whole-genome shotgun project. Most recent assemblers include a scaffolding module; however, users have little control over the scaffolding algorithm or the information produced. We thus developed a general-purpose scaffolder, called Bambus, which affords users significant flexibility in controlling the scaffolding parameters. Bambus was used recently to scaffold the low-coverage draft dog genome data. Most significantly, Bambus enables the use of linking data other than that inferred from mate-pair information. For example, the sequence of a completed genome can be used to guide the scaffolding of a related organism. We present several applications of Bambus: support for finishing, comparative genomics, analysis of the haplotype structure of genomes, and scaffolding of a mammalian genome at low coverage. Bambus is available as an open-source package from our Web site.
%B Genome Research %V 14 %P 149 - 159 %8 2004/01/01/ %G eng %U http://genome.cshlp.org/content/14/1/149.abstract %N 1 %R 10.1101/gr.1536204 %0 Journal Article %J Computer Vision and Image Understanding %D 2004 %T A hierarchy of cameras for 3D photography %A Neumann, Jan %A Fermüller, Cornelia %A Aloimonos, J. %K Camera design %K Multi-view geometry %K Polydioptric cameras %K Spatio-temporal image analysis %K structure from motion %X The view-independent visualization of 3D scenes is most often based on rendering accurate 3D models or utilizes image-based rendering techniques. To compute the 3D structure of a scene from a moving vision sensor or to use image-based rendering approaches, we need to be able to estimate the motion of the sensor from the recorded image information with high accuracy, a problem that has been well-studied. In this work, we investigate the relationship between camera design and our ability to perform accurate 3D photography, by examining the influence of camera design on the estimation of the motion and structure of a scene from video data. By relating the differential structure of the time varying plenoptic function to different known and new camera designs, we can establish a hierarchy of cameras based upon the stability and complexity of the computations necessary to estimate structure and motion. At the low end of this hierarchy is the standard planar pinhole camera for which the structure from motion problem is non-linear and ill-posed. At the high end is a camera, which we call the full field of view polydioptric camera, for which the motion estimation problem can be solved independently of the depth of the scene which leads to fast and robust algorithms for 3D Photography. In between are multiple view cameras with a large field of view which we have built, as well as omni-directional sensors. %B Computer Vision and Image Understanding %V 96 %P 274 - 293 %8 2004/12// %@ 1077-3142 %G eng %U http://www.sciencedirect.com/science/article/pii/S1077314204000505 %N 3 %R 10.1016/j.cviu.2004.03.013 %0 Conference Paper %B 13th International Conference on Computer Communications and Networks, 2004. ICCCN 2004. Proceedings %D 2004 %T High-performance MAC for high-capacity wireless LANs %A Yuan Yuan %A Daqing Gu %A Arbaugh, William A. %A Jinyun Zhang %K 35 Mbit/s %K access protocols %K Aggregates %K Bandwidth %K batch transmission %K Computer science %K Educational institutions %K high-capacity wireless LAN %K high-performance MAC %K Laboratories %K Local area networks %K Media Access Protocol %K opportunistic selection %K Physical layer %K probability %K Throughput %K Wireless LAN %X The next-generation wireless technologies, e.g., 802.11n and 802.15.3a, offer a physical-layer speed at least an order of magnitude higher than the current standards. However, direct application of current MACs leads to high protocol overhead and significant throughput degradation. In this paper, we propose ADCA, a high-performance MAC that works with a high-capacity physical layer. ADCA exploits two ideas of adaptive batch transmission and opportunistic selection of high-rate hosts to simultaneously reduce the overhead and improve the aggregate throughput. It opportunistically favors high-rate hosts by providing higher access probability and more access time, while ensuring each low-rate host a certain minimum amount of channel access time. Simulations show that the ADCA design increases the throughput by 112% and reduces the average delay by 55% compared with the legacy DCF.
It delivers more than 100 Mbps MAC-layer throughput as compared with 35 Mbps offered by the legacy MAC %B 13th International Conference on Computer Communications and Networks, 2004. ICCCN 2004. Proceedings %I IEEE %P 167 - 172 %8 2004/10/11/13 %@ 0-7803-8814-3 %G eng %R 10.1109/ICCCN.2004.1401615 %0 Journal Article %J Web Semantics: Science, Services and Agents on the World Wide Web %D 2004 %T HTN planning for Web Service composition using SHOP2 %A Sirin,Evren %A Parsia,Bijan %A Wu,Dan %A Hendler,James %A Nau, Dana S. %K HTN planning %K OWL-S %K SHOP2 %K Web Service composition %K Web services %X Automated composition of Web Services can be achieved by using AI planning techniques. Hierarchical Task Network (HTN) planning is especially well-suited for this task. In this paper, we describe how HTN planning system SHOP2 can be used with OWL-S Web Service descriptions. We provide a sound and complete algorithm to translate OWL-S service descriptions to a SHOP2 domain. We prove the correctness of the algorithm by showing the correspondence to the situation calculus semantics of OWL-S. We implemented a system that plans over sets of OWL-S descriptions using SHOP2 and then executes the resulting plans over the Web. The system is also capable of executing information-providing Web Services during the planning process. We discuss the challenges and difficulties of using planning in the information-rich and human-oriented context of Web Services. %B Web Semantics: Science, Services and Agents on the World Wide Web %V 1 %P 377 - 396 %8 2004/10// %@ 1570-8268 %G eng %U http://www.sciencedirect.com/science/article/pii/S1570826804000113 %N 4 %R 10.1016/j.websem.2004.06.005 %0 Conference Paper %B Proceedings of the 2nd international conference on Mobile systems, applications, and services %D 2004 %T Human needs and mobile technologies: small, fast, and fun %A Shneiderman, Ben %X The central thesis of "Leonardo's Laptop" (MIT Press, 2002) is that designers who are sensitive to human needs are more likely to make the breakthroughs that yield new technologies successes. Therefore, a theory of mobile devices would focus on compact devices that support human relationships, provide salient information, and enable creative expression. The foundations are not only the megahertz of connectivity, but also the usability and universality of interfaces. Demonstrations include digital photo applications, personal info, healthcare, and e-commerce. %B Proceedings of the 2nd international conference on Mobile systems, applications, and services %S MobiSys '04 %I ACM %C New York, NY, USA %P 1 - 1 %8 2004/// %@ 1-58113-793-1 %G eng %U http://doi.acm.org/10.1145/990064.990065 %R 10.1145/990064.990065 %0 Conference Paper %B Geoscience and Remote Sensing Symposium, 2004. IGARSS '04. Proceedings. 2004 IEEE International %D 2004 %T A hybrid algorithm for subpixel detection in hyperspectral imagery %A Broadwater, J. %A Meth, R. 
%0 Conference Paper %B Proceedings of the 2nd international conference on Mobile systems, applications, and services %D 2004 %T Human needs and mobile technologies: small, fast, and fun %A Shneiderman, Ben %X The central thesis of "Leonardo's Laptop" (MIT Press, 2002) is that designers who are sensitive to human needs are more likely to make the breakthroughs that yield new technology successes. Therefore, a theory of mobile devices would focus on compact devices that support human relationships, provide salient information, and enable creative expression. The foundations are not only the megahertz of connectivity, but also the usability and universality of interfaces. Demonstrations include digital photo applications, personal info, healthcare, and e-commerce. %B Proceedings of the 2nd international conference on Mobile systems, applications, and services %S MobiSys '04 %I ACM %C New York, NY, USA %P 1 - 1 %8 2004/// %@ 1-58113-793-1 %G eng %U http://doi.acm.org/10.1145/990064.990065 %R 10.1145/990064.990065 %0 Conference Paper %B Geoscience and Remote Sensing Symposium, 2004. IGARSS '04. Proceedings. 2004 IEEE International %D 2004 %T A hybrid algorithm for subpixel detection in hyperspectral imagery %A Broadwater, J. %A Meth, R. %A Chellappa, Rama %K subpixel detection %K hyperspectral imagery %K hybrid algorithm %K Adaptive Matched Subspace Detector (AMSD) %K Fully Constrained Least Squares (FCLS) algorithm %K spectral unmixing %K spectral signatures %K reflectance spectra %K emittance spectra %K structured backgrounds %K generalized likelihood ratio tests %K maximum likelihood estimation %K abundance estimation %K false alarm rate %K least squares approximations %K geophysical signal processing %X Numerous subpixel detection algorithms utilizing structured backgrounds have been developed over the past few years. These range from detection schemes based on spectral unmixing to generalized likelihood ratio tests. Spectral unmixing algorithms such as the Fully Constrained Least Squares (FCLS) algorithm have the advantage of physically modeling the interactions of spectral signatures based on reflectance/emittance spectroscopy. Generalized likelihood ratio tests like the Adaptive Matched Subspace Detector (AMSD) have the advantage of identifying targets that are statistically different from the background. Therefore, a hybrid detector based on both AMSD and FCLS was developed to take advantage of each detector's strengths. Results demonstrate that the hybrid detector achieved the lowest false alarm rates while also producing meaningful abundance estimates. %B Geoscience and Remote Sensing Symposium, 2004. IGARSS '04. Proceedings. 2004 IEEE International %V 3 %P 1601 - 1604 vol.3 %8 2004/09// %G eng %R 10.1109/IGARSS.2004.1370633
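The FCLS algorithm named in the preceding abstract solves a least squares problem under the physical constraints that abundances be nonnegative and sum to one. The sketch below illustrates that constrained formulation with a generic SLSQP solver and synthetic endmember data; it is not the dedicated FCLS algorithm of the paper, only the same optimization problem.

```python
# Fully constrained least squares unmixing, stated as a generic
# constrained optimization: min ||E a - x||^2 s.t. a >= 0, sum(a) = 1.
# Endmember matrix E and pixel x are synthetic placeholders.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
E = rng.random((50, 3))                         # 50 bands, 3 endmember spectra
a_true = np.array([0.6, 0.3, 0.1])
x = E @ a_true + 0.01 * rng.standard_normal(50)  # observed mixed pixel

res = minimize(
    lambda a: np.sum((E @ a - x) ** 2),          # residual energy
    x0=np.full(3, 1 / 3),
    bounds=[(0, None)] * 3,                      # nonnegativity constraint
    constraints=[{"type": "eq", "fun": lambda a: a.sum() - 1}],  # sum-to-one
)
print(res.x.round(3))                            # abundance estimates near a_true
```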
%0 Journal Article %J The craft of information visualization: readings and reflections %D 2003 %T HCIL Technical Report Listing (1993-2002) %A Ceaparu,I. %A Druin, Allison %A Guimbretiere,F. %A Manipulation,D. %A Shneiderman, Ben %A Westerlund,B. %A Keogh,E. %A Hochheiser,H. %A Shneiderman,B. %A Shneiderman,B. %A others %B The craft of information visualization: readings and reflections %V 27 %P 393 - 393 %8 2003/// %G eng %N 29 %0 Conference Paper %B Proceedings of the HLT-NAACL 03 on Text summarization workshop - Volume 5 %D 2003 %T Hedge Trimmer: a parse-and-trim approach to headline generation %A Dorr, Bonnie J %A Zajic, David %A Schwartz,Richard %X This paper presents Hedge Trimmer, a HEaDline GEneration system that creates a headline for a newspaper story using linguistically-motivated heuristics to guide the choice of a potential headline. We present feasibility tests used to establish the validity of an approach that constructs a headline by selecting words in order from a story. In addition, we describe experimental results that demonstrate the effectiveness of our linguistically-motivated approach over an HMM-based model, using both human evaluation and automatic metrics for comparing the two approaches. %B Proceedings of the HLT-NAACL 03 on Text summarization workshop - Volume 5 %S HLT-NAACL-DUC '03 %I Association for Computational Linguistics %C Stroudsburg, PA, USA %P 1 - 8 %8 2003/// %G eng %U http://dx.doi.org/10.3115/1119467.1119468 %R 10.3115/1119467.1119468 %0 Journal Article %J IT & Society %D 2003 %T Help! I’m lost: User frustration in web navigation %A Lazar,J. %A Bessiere,K. %A Ceaparu,I. %A Robinson,J. %A Shneiderman, Ben %X Computers can be valuable tools, and networked resources via the Internet can be beneficial to many different populations and communities. Unfortunately, when people are unable to reach their task goals due to frustrating experiences, this can hinder the effectiveness of technology. This research summary provides information about the user frustration research that has been performed at the University of Maryland and Towson University. Causes of user frustration are discussed in this research summary, along with the surprising finding that nearly one-third to one-half of the time spent in front of the computer is wasted due to frustrating experiences. Furthermore, when interfaces are designed to be deceptive and confusing, this can lead to increased frustration. Implications for designers and users are discussed. %B IT & Society %V 1 %P 18 - 26 %8 2003/// %G eng %N 3 %0 Conference Paper %B Proceedings of the 2003 annual national conference on Digital government research %D 2003 %T Helping users get started with visual interfaces: multi-layered interfaces, integrated initial guidance and video demonstrations %A Kang,Hyunmo %A Plaisant, Catherine %A Shneiderman, Ben %X We are investigating new ways to help users learn to use public access interactive tools, in particular for the visual exploration of government statistics. Our work led to a series of interfaces using multi-layered design, a new help method called Integrated Initial Guidance, and video demonstrations. Multi-layer designs structure an interface so that a simpler interface is available for users to get started and more complex features are accessed as users move through the more advanced layers. Integrated Initial Guidance provides help within the working interface, right at the start of the application. Using the metaphor of "sticky notes" overlaid on top of the functional interface, it locates the main widgets, demonstrates their manipulation, and explains the resulting actions using preset activations of the interface. %B Proceedings of the 2003 annual national conference on Digital government research %S dg.o '03 %I Digital Government Society of North America %P 1 - 1 %8 2003/// %G eng %U http://dl.acm.org/citation.cfm?id=1123196.1123257 %0 Conference Paper %B Image Processing, 2003. ICIP 2003. Proceedings. 2003 International Conference on %D 2003 %T A hidden Markov model based framework for recognition of humans from gait sequences %A Sundaresan,Aravind %A RoyChowdhury,Amit %A Chellappa, Rama %K gait analysis %K hidden Markov models %K human recognition %K image sequences %K binarized background-subtracted image %K feature vector %K distance metrics %K discrete postures %X In this paper we propose a generic framework based on hidden Markov models (HMMs) for recognition of individuals from their gait. The HMM framework is suitable because the gait of an individual can be visualized as his adopting postures from a set, in a sequence which has an underlying structured probabilistic nature. The postures that the individual adopts can be regarded as the states of the HMM and are typical to that individual and provide a means of discrimination. The framework assumes that, during gait, the individual transitions between N discrete postures or states but it is not dependent on the particular feature vector used to represent the gait information contained in the postures. The framework, thus, provides flexibility in the selection of the feature vector. The statistical nature of the HMM lends robustness to the model. In this paper we use the binarized background-subtracted image as the feature vector and use different distance metrics, such as those based on the L1 and L2 norms of the vector difference, and the normalized inner product of the vectors, to measure the similarity between feature vectors. The results we obtain are better than the baseline recognition rates reported before. %B Image Processing, 2003. ICIP 2003. Proceedings. 2003 International Conference on %V 2 %P II - 93-6 vol.3 %8 2003/09// %G eng %R 10.1109/ICIP.2003.1246624
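The gait abstract above compares feature vectors with three similarity measures: the L1 and L2 norms of the vector difference and the normalized inner product. A short sketch of those three computations, with random binary arrays standing in for real silhouette frames:

```python
# The three similarity measures named in the abstract, applied to
# placeholder binarized background-subtracted silhouette vectors.
import numpy as np

rng = np.random.default_rng(1)
f = rng.integers(0, 2, 4096).astype(float)   # flattened 64x64 silhouette
g = rng.integers(0, 2, 4096).astype(float)

l1 = np.abs(f - g).sum()                     # L1 norm of the difference
l2 = np.sqrt(((f - g) ** 2).sum())           # L2 norm of the difference
cos = f @ g / (np.linalg.norm(f) * np.linalg.norm(g))  # normalized inner product
print(l1, l2, cos)
```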
%0 Conference Paper %B Proceedings of the 6th ACM international workshop on Data warehousing and OLAP - DOLAP '03 %D 2003 %T Hierarchical dwarfs for the rollup cube %A Sismanis,Yannis %A Deligiannakis,Antonios %A Kotidis,Yannis %A Roussopoulos, Nick %B Proceedings of the 6th ACM international workshop on Data warehousing and OLAP - DOLAP '03 %C New Orleans, Louisiana, USA %P 17 - 17 %8 2003/// %G eng %U http://dl.acm.org/citation.cfm?id=956064 %R 10.1145/956060.956064 %0 Conference Paper %B Proceedings of the 17th annual international conference on Supercomputing %D 2003 %T A high performance multi-perspective vision studio %A Borovikov,Eugene %A Sussman, Alan %K database %K distributed system %K high-performance %K image processing %K multi-perspective %K vision %K volumetric reconstruction %X We describe a multi-perspective vision studio as a flexible high performance framework for solving complex image processing and machine vision problems on multi-view image sequences. The studio abstracts multi-view image data from image sequence acquisition facilities, stores and catalogs sequences in a high performance distributed database, allows customization of back-end processing services, and can serve custom client applications, thus helping make multi-view video sequence processing efficient and generic. To illustrate our approach, we describe two multi-perspective studio applications, and discuss performance and scalability results. %B Proceedings of the 17th annual international conference on Supercomputing %S ICS '03 %I ACM %C New York, NY, USA %P 348 - 357 %8 2003/// %@ 1-58113-733-8 %G eng %U http://doi.acm.org/10.1145/782814.782862 %R 10.1145/782814.782862 %0 Conference Paper %B Applications of Signal Processing to Audio and Acoustics, 2003 IEEE Workshop on. %D 2003 %T HRTF personalization using anthropometric measurements %A Zotkin,Dmitry N %A Hwang,J. %A Duraiswami, Ramani %A Davis, Larry S. %K head related transfer functions %K HRTF personalization %K anthropometric measurements %K individualized HRTF %K sound localization %K spatial audio %K virtual auditory scene %K head-and-torso model %K acoustic wave scattering %K audio signal processing %K subjective perception %K physiological parameters %X Individualized head related transfer functions (HRTFs) are needed for accurate rendering of spatial audio, which is important in many applications. Since these are relatively tedious to acquire, they may not be acceptable for some applications. A number of studies have sought to perform simple customization of the HRTF. We propose and test a strategy for HRTF personalization, based on matching certain anthropometric ear parameters with the HRTF database, and the incorporation of a low-frequency "head-and-torso" model. We present preliminary tests aimed at evaluation of this customization.
Results show that the approach improves both the accuracy of the localization and subjective perception of the virtual auditory scene. %B Applications of Signal Processing to Audio and Acoustics, 2003 IEEE Workshop on. %P 157 - 160 %8 2003/10// %G eng %R 10.1109/ASPAA.2003.1285855 %0 Conference Paper %B Proceedings. IEEE Conference on Advanced Video and Signal Based Surveillance, 2003. %D 2003 %T Human body pose estimation using silhouette shape analysis %A Mittal,A. %A Liang Zhao %A Davis, Larry S. %K human body pose estimation %K silhouette shape analysis %K multiple views %K likelihood function %K 3D structure %K image segmentation %K pixel classification %K feature extraction %K object detection %K parameter estimation %K surveillance %K background clutter %X We describe a system for human body pose estimation from multiple views that is fast and completely automatic. The algorithm works in the presence of multiple people by decoupling the problems of pose estimation of different people. The pose is estimated based on a likelihood function that integrates information from multiple views and thus obtains a globally optimal solution. Other characteristics that make our method more general than previous work include: (1) no manual initialization; (2) no specification of the dimensions of the 3D structure; (3) no reliance on some learned poses or patterns of activity; (4) insensitivity to edges and clutter in the background and within the foreground. The algorithm has applications in surveillance and promising results have been obtained. %B Proceedings. IEEE Conference on Advanced Video and Signal Based Surveillance, 2003. %P 263 - 270 %8 2003/07// %G eng %R 10.1109/AVSS.2003.1217930 %0 Patent %D 2003 %T Human visual model for data hiding %A M. Wu %A Yu,Hong Heather %E Matsushita Electric Industrial Co., Ltd. %X A method and apparatus for hiding identification data in visual media. When image or video data is received, frequency masking is performed to divide the image or video data into blocks of smooth regions and blocks of non-smooth regions and to obtain a preliminary just-noticeable-difference. Edge detection is performed to divide the non-smooth region of the image or video data into texture blocks and edge blocks. Then blocks of regions that are substantially proximate to blocks of smooth regions of the image or video data are determined. The image or video data is then adjusted by applying a different strength of watermark according to the type of each block. %V : 09/691,544 %8 2003/08/26/ %G eng %U http://www.google.com/patents?id=aNgOAAAAEBAJ %N 6611608 %0 Journal Article %J Machine Translation %D 2003 %T Hybrid Natural Language Generation from Lexical Conceptual Structures %A Habash,Nizar %A Dorr, Bonnie J %A Traum,David %X This paper describes Lexogen, a system for generating natural-language sentences from Lexical Conceptual Structure, an interlingual representation. The system has been developed as part of a Chinese–English Machine Translation (MT) system; however, it is designed to be used for many other MT language pairs and natural language applications.
The contributions of this work include: (1) development of a large-scale Hybrid Natural Language Generation system with language-independent components; (2) enhancements to an interlingual representation and associated algorithm for generation from ambiguous input; (3) development of an efficient reusable language-independent linearization module with a grammar description language that can be used with other systems; (4) improvements to an earlier algorithm for hierarchically mapping thematic roles to surface positions; and (5) development of a diagnostic tool for lexicon coverage and correctness and use of the tool for verification of English, Spanish, and Chinese lexicons. An evaluation of Chinese–English translation quality shows comparable performance with a commercial translation system. The generation system can also be extended to other languages and this is demonstrated and evaluated for Spanish. %B Machine Translation %V 18 %P 81 - 128 %8 2003/// %@ 0922-6567 %G eng %U http://dx.doi.org/10.1023/B:COAT.0000020960.27186.18 %N 2 %0 Conference Paper %B Proceedings 1st International Symposium on 3D Data Processing Visualization and Transmission %D 2002 %T Half-edge Multi-Tessellation: a compact representation for multiresolution tetrahedral meshes %A Danovaro,E. %A De Floriani, Leila %X This paper deals with the problem of analyzing and visualizing volume data sets of large size. To this aim, we define a three-dimensional multi-resolution model based on unstructured tetrahedral meshes, and built through a half-edge-collapse simplification strategy, that we call a Half-Edge Multi-Tessellation (MT). We propose a new compact data structure for a half-edge MT, and we analyze it with respect to both its space requirements and its efficiency in supporting Level-Of-Detail (LOD) queries based on selective refinement. %B Proceedings 1st International Symposium on 3D Data Processing Visualization and Transmission %P 494 - 499 %8 2002/// %G eng %0 Book Section %B Machine Translation: From Research to Real Users %D 2002 %T Handling Translation Divergences: Combining Statistical and Symbolic Techniques in Generation-Heavy Machine Translation %A Habash,Nizar %A Dorr, Bonnie J %E Richardson,Stephen %X This paper describes a novel approach to handling translation divergences in a Generation-Heavy Hybrid Machine Translation (GHMT) system. The translation divergence problem is usually reserved for Transfer and Interlingual MT because it requires a large combination of complex lexical and structural mappings. A major requirement of these approaches is the accessibility of large amounts of explicit symmetric knowledge for both source and target languages. This limitation renders Transfer and Interlingual approaches ineffective in the face of structurally-divergent language pairs with asymmetric resources. GHMT addresses the more common form of this problem, source-poor/target-rich, by fully exploiting symbolic and statistical target-language resources. This non-interlingual non-transfer approach is accomplished by using target-language lexical semantics, categorial variations and subcategorization frames to overgenerate multiple lexico-structural variations from a target-glossed syntactic dependency of the source-language sentence. The symbolic overgeneration, which accounts for different possible translation divergences, is constrained by a statistical target-language model.
%B Machine Translation: From Research to Real Users %S Lecture Notes in Computer Science %I Springer Berlin / Heidelberg %V 2499 %P 84 - 93 %8 2002/// %@ 978-3-540-44282-0 %G eng %U http://dx.doi.org/10.1007/3-540-45820-4_9 %0 Conference Paper %B Frontiers in Handwriting Recognition, 2002. Proceedings. Eighth International Workshop on %D 2002 %T Hidden loop recovery for handwriting recognition %A David Doermann %A Intrator,N. %A Rivin,E. %A Steinherz,T. %K handwriting recognition %K hidden loop recovery %K cursive strokes %K truncated ellipses %K form analysis %K mutual distance measurements %K symmetric shape %K contour detection %K edge detection %K character recognition %K word recognition %K image partitioning %X One significant challenge in the recognition of off-line handwriting is in the interpretation of loop structures. Although this information is readily available in online representation, close proximity of strokes often merges their centers, making them difficult to identify. In this paper a novel approach to the recovery of hidden loops in off-line scanned document images is presented. The proposed algorithm seeks blobs that resemble truncated ellipses. We use a sophisticated form analysis method based on mutual distance measurements between the two sides of a symmetric shape. The experimental results are compared with the ground truth of the online representations of each off-line word image. More than 86% of the meaningful loops are handled correctly. %B Frontiers in Handwriting Recognition, 2002. Proceedings. Eighth International Workshop on %P 375 - 380 %8 2002/// %G eng %R 10.1109/IWFHR.2002.1030939 %0 Conference Paper %B Automation Congress, 2002 Proceedings of the 5th Biannual World %D 2002 %T Hidden Markov models for silhouette classification %A Abd-Almageed, Wael %A Smith,C. %K Computer vision %K Feature extraction %K Fourier transforms %K hidden Markov models %K HMM %K image classification %K Neural networks %K object classification %K Object recognition %K parameter estimation %K pattern recognition %K Probability distribution %K Shape measurement %K silhouette classification %K Wavelet transforms %X In this paper, a new technique for object classification from silhouettes is presented. Hidden Markov models are used as a classification mechanism. Through a set of experiments, we show the validity of our approach and show its invariance under severe rotation conditions. Also, a comparison with other techniques that use hidden Markov models for object classification from silhouettes is presented. %B Automation Congress, 2002 Proceedings of the 5th Biannual World %I IEEE %V 13 %P 395 - 402 %8 2002/// %@ 1-889335-18-5 %G eng %R 10.1109/WAC.2002.1049575 %0 Conference Paper %B Motion and Video Computing, 2002. Proceedings. Workshop on %D 2002 %T A hierarchical approach for obtaining structure from two-frame optical flow %A Liu,Haiying %A Chellappa, Rama %A Rosenfeld, A.
%K structure-from-motion %K two-frame optical flow %K hierarchical iterative algorithm %K depth estimation %K inverse depth %K motion parameter estimation %K time aliasing %K error analysis %K feature extraction %K face recognition %K gesture recognition %K image sequences %K computer-rendered images %K real images %K nonlinear systems %K video signal processing %X A hierarchical iterative algorithm is proposed for extracting structure from two-frame optical flow. The algorithm exploits two facts: one is that in many applications, such as face and gesture recognition, the depth variation of the visible surface of an object in a scene is small compared to the distance between the optical center and the object; the other is that the time aliasing problem is alleviated at the coarse level for any two-frame optical flow estimate, so that the estimate tends to be more accurate. A hierarchical representation for the relationship between the optical flow, depth, and the motion parameters is derived, and the resulting non-linear system is iteratively solved through two linear subsystems. At the coarsest level, the surface of the object tends to be flat, so that the inverse depth tends to be a constant, which is used as the initial depth map. Inverse depth and motion parameters are estimated by the two linear subsystems at each level and the results are propagated to finer levels. Error analysis and experiments using both computer-rendered images and real images demonstrate the correctness and effectiveness of our algorithm. %B Motion and Video Computing, 2002. Proceedings. Workshop on %P 214 - 219 %8 2002/12// %G eng %R 10.1109/MOTION.2002.1182239 %0 Journal Article %J EURASIP J. Appl. Signal Process. %D 2002 %T High-level synthesis of DSP applications using adaptive negative cycle detection %A Chandrachoodan,Nitin %A Bhattacharyya, Shuvra S. %A Liu,K. J.R %K adaptive performance estimation %K dynamic graphs %K maximum cycle mean %K negative cycle detection %X The problem of detecting negative weight cycles in a graph is examined in the context of the dynamic graph structures that arise in the process of high level synthesis (HLS). The concept of adaptive negative cycle detection is introduced, in which a graph changes over time and negative cycle detection needs to be done periodically, but not necessarily after every individual change. We present an algorithm for this problem, based on a novel extension of the well-known Bellman-Ford algorithm that allows us to adapt existing cycle information to the modified graph, and show by experiments that our algorithm significantly outperforms previous incremental approaches for dynamic graphs. In terms of applications, the adaptive technique leads to a very fast implementation of Lawler's algorithm for the computation of the maximum cycle mean (MCM) of a graph, especially for a certain form of sparse graph. Such sparseness often occurs in practical circuits and systems, as demonstrated, for example, by the ISCAS 89/93 benchmarks. The application of the adaptive technique to design-space exploration (synthesis) is also demonstrated by developing automated search techniques for scheduling iterative data-flow graphs. %B EURASIP J. Appl. Signal Process. %V 2002 %P 893 - 907 %8 2002/01// %@ 1110-8657 %G eng %U http://dl.acm.org/citation.cfm?id=1283100.1283192 %N 1
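The record above extends Bellman-Ford relaxation, the classical static primitive for negative cycle detection. A compact sketch of that primitive follows; the adaptive bookkeeping that lets the paper reuse cycle information across graph modifications is not reproduced here.

```python
# Static Bellman-Ford negative-cycle test: relax all edges n-1 times
# from a virtual source, then check whether any edge can still relax.
def has_negative_cycle(n, edges):
    """n vertices 0..n-1, edges given as (u, v, w) triples."""
    dist = [0.0] * n                  # virtual source at distance 0 to all
    for _ in range(n - 1):            # n-1 rounds of relaxation
        changed = False
        for u, v, w in edges:
            if dist[u] + w < dist[v]:
                dist[v] = dist[u] + w
                changed = True
        if not changed:
            return False              # already stable: no negative cycle
    return any(dist[u] + w < dist[v] for u, v, w in edges)

print(has_negative_cycle(3, [(0, 1, 1), (1, 2, -2), (2, 0, -1)]))  # True
```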
%0 Conference Paper %B Proceedings of the 24th international conference on Software engineering - ICSE '02 %D 2002 %T A history-based test prioritization technique for regression testing in resource constrained environments %A Kim,Jung-Min %A Porter, Adam %B Proceedings of the 24th international conference on Software engineering - ICSE '02 %C Orlando, Florida %P 119 - 119 %8 2002/// %G eng %U http://dl.acm.org/citation.cfm?id=581357 %R 10.1145/581339.581357 %0 Conference Paper %B Proceedings of the Participatory Design Conference %D 2002 %T How young can our design partners be? %A Farber,A. %A Druin, Allison %A Chipman,G. %A Julian,D. %A Somashekhar,S. %B Proceedings of the Participatory Design Conference %P 272 - 277 %8 2002/// %G eng %0 Report %D 2002 %T HTN Planning in Answer Set Programming %A Dix,Jurgen %A Kuter,Ugur %A Nau, Dana S. %K Technical Report %X In this paper we introduce a formalism for solving Hierarchical Task Network (HTN) planning using Answer Set Programming (ASP). The ASP paradigm evolved out of the stable semantics for logic programs in recent years and is strongly related to nonmonotonic logics. We consider the formulation of HTN planning as described in the SHOP planning system and define a systematic translation method from SHOP's representation of the planning problems into logic programs with negation. We show that our translation is sound and complete: answer sets of the logic programs obtained by our translation correspond exactly to the solutions of the planning problems. Our approach does not rely on a particular system for computing answer sets. It can therefore serve as a means to evaluate ASP systems by using well-established benchmarks from the planning community. We tested our method on various such benchmarks and used smodels and DLV for computing answer sets. We compared our method to (1) similar approaches based on non-HTN planning and (2) SHOP, a dedicated planning system. We show that our approach outperforms non-HTN methods and that its performance is closer to that of SHOP when we are using ASP systems which allow for nonground programs. (Also UMIACS-TR-2002-18) %I Institute for Advanced Computer Studies, Univ of Maryland, College Park %V UMIACS-TR-2002-18 %8 2002/05/22/ %G eng %U http://drum.lib.umd.edu//handle/1903/1182 %0 Conference Paper %B In Proceedings, AAAI Fall Symposium on Uncertainty in Computation %D 2001 %T Handling uncertainty with active logic %A Bhatia,M. %A Chi,P. %A Chong,W. %A Josyula,D. P %A Okamoto,Y. %A Perlis, Don %A Purang,K. %B In Proceedings, AAAI Fall Symposium on Uncertainty in Computation %8 2001/// %G eng %0 Journal Article %J ASME 2001 Design Engineering Technical Conference and Computers and Information in Engineering Conference, Pittsburgh, PA %D 2001 %T Haptic and aural rendering of a virtual milling process %A Chang,C. F %A Varshney, Amitabh %A Ge,Q. J %B ASME 2001 Design Engineering Technical Conference and Computers and Information in Engineering Conference, Pittsburgh, PA %P 105 - 113 %8 2001/// %G eng %0 Conference Paper %B Mobile Data Management %D 2001 %T Hashing moving objects %A Song,Z. %A Roussopoulos, Nick %B Mobile Data Management %P 161 - 172 %8 2001/// %G eng %0 Journal Article %J Software Engineering, IEEE Transactions on %D 2001 %T Hierarchical GUI test case generation using automated planning %A Memon, Atif M. %A Pollack,M. E %A Soffa,M. L.
%K Artificial intelligence %K automated planning %K automatic test case generation %K Automatic testing %K correctness testing %K goal state %K Graphical user interfaces %K hierarchical GUI test case generation %K initial state %K Microsoft WordPad %K operators %K plan-generation system %K planning (artificial intelligence) %K Planning Assisted Tester for Graphical User Interface Systems %K program testing %K software %X The widespread use of GUIs for interacting with software is leading to the construction of more and more complex GUIs. With the growing complexity come challenges in testing the correctness of a GUI and its underlying software. We present a new technique to automatically generate test cases for GUIs that exploits planning, a well-developed and used technique in artificial intelligence. Given a set of operators, an initial state, and a goal state, a planner produces a sequence of the operators that will transform the initial state to the goal state. Our test case generation technique enables efficient application of planning by first creating a hierarchical model of a GUI based on its structure. The GUI model consists of hierarchical planning operators representing the possible events in the GUI. The test designer defines the preconditions and effects of the hierarchical operators, which are input into a plan-generation system. The test designer also creates scenarios that represent typical initial and goal states for a GUI user. The planner then generates plans representing sequences of GUI interactions that a user might employ to reach the goal state from the initial state. We implemented our test case generation system, called Planning Assisted Tester for Graphical User Interface Systems (PATHS), and experimentally evaluated its practicality and effectiveness. We describe a prototype implementation of PATHS and report on the results of controlled experiments to generate test cases for Microsoft's WordPad. %B Software Engineering, IEEE Transactions on %V 27 %P 144 - 155 %8 2001/02// %@ 0098-5589 %G eng %N 2 %R 10.1109/32.908959 %0 Book Section %B Hierarchical and Geometrical Methods in Scientific Visualization %D 2001 %T Hierarchical image-based and polygon-based rendering for large-scale visualizations %A Chang,C. F %A Varshney, Amitabh %A Ge,Q. J %B Hierarchical and Geometrical Methods in Scientific Visualization %I Springer %8 2001/// %@ 9783540433132 %G eng %0 Conference Paper %B Hardware/Software Codesign, 2001. CODES 2001. Proceedings of the Ninth International Symposium on %D 2001 %T Hybrid global/local search strategies for dynamic voltage scaling in embedded multiprocessors %A Bambha,N. K %A Bhattacharyya, Shuvra S. %A Teich,J. %A Zitzler,E. %B Hardware/Software Codesign, 2001. CODES 2001. Proceedings of the Ninth International Symposium on %P 243 - 248 %8 2001/// %G eng %0 Conference Paper %B Proceedings of the Eleventh SIAM Conference on Parallel Processing for Scientific Computing %D 2001 %T A hypergraph-based workload partitioning strategy for parallel data aggregation %A Chang,C. %A Kurc, T. %A Sussman, Alan %A Catalyurek,U. %A Saltz, J. %B Proceedings of the Eleventh SIAM Conference on Parallel Processing for Scientific Computing %8 2001/// %G eng %0 Conference Paper %B ICPR %D 2000 %T Hidden Markov Models for Images %A DeMenthon,D. %A Stuckelberg,M. Vuilleumier
%A David Doermann %X In this paper we investigate how speech recognition techniques can be extended to image processing. We describe a method for learning statistical models of images using a second order hidden Markov mesh model. First, an image can be segmented in a way that best matches its statistical model by an approach related to the dynamic programming used for segmenting Markov chains. Second, given an image segmentation, a statistical model (3D state transition matrix and observation distributions within states) can be estimated. These two steps are repeated until convergence to provide both a segmentation and a statistical model of the image. We also describe a semi-Markov modeling technique in which the distributions of widths and heights of segmented regions are modeled explicitly by gamma distributions in a way related to explicit duration modeling in HMMs. Finally, we propose a statistical distance measure between images based on the similarity of their statistical models for classification and retrieval tasks. %B ICPR %P 147 - 150 %8 2000/// %G eng %0 Journal Article %J International Journal of Remote Sensing %D 2000 %T High performance computing algorithms for land cover dynamics using remote sensing data %A Kalluri, SNV %A JaJa, Joseph F. %A Bader, D.A. %A Zhang,Z. %A Townshend,J.R.G. %A Fallah-Adl,H. %X Global and regional land cover studies need to apply complex models on selected subsets of large volumes of multi-sensor and multi-temporal data sets that have been derived from raw instrument measurements using widely accepted pre-processing algorithms. The computational and storage requirements of most of these studies far exceed what is possible on a single workstation environment. We have been pursuing a new approach that couples scalable and open distributed heterogeneous hardware with the development of high performance software for processing, indexing and organizing remotely sensed data. Hierarchical data management tools are used to ingest raw data, create metadata and organize the archived data so as to automatically achieve computational load balancing among the available nodes and minimize input/output overheads. We illustrate our approach with four specific examples. The first is the development of the first fast operational scheme for the atmospheric correction of Landsat Thematic Mapper scenes, while the second example focuses on image segmentation using a novel hierarchical connected components algorithm. Retrieval of the global Bidirectional Reflectance Distribution Function in the red and near-infrared wavelengths using four years (1983 to 1986) of Pathfinder Advanced Very High Resolution Radiometer (AVHRR) Land data is the focus of our third example. The fourth example is the development of a hierarchical data organization scheme that allows on-demand processing and retrieval of regional and global AVHRR data sets. Our results show that substantial reductions in computational times can be achieved by the high performance computing technology. %B International Journal of Remote Sensing %V 21 %P 1513 - 1536 %8 2000/// %@ 0143-1161 %G eng %U http://www.tandfonline.com/doi/abs/10.1080/014311600210290 %N 6-7 %R 10.1080/014311600210290 %0 Conference Paper %B Geoscience and Remote Sensing Symposium, 1999. IGARSS '99 Proceedings. IEEE 1999 International %D 1999 %T A hierarchical data archiving and processing system to generate custom tailored products from AVHRR data %A Kalluri, SNV %A Zhang,Z. %A JaJa, Joseph F. %A Bader, D.A. %A Song,H.
%A El Saleous,N. %A Vermote,E. %A Townshend,J.R.G. %K AVHRR %K GIS %K PACS %K custom tailored products %K data archiving %K hierarchical data processing system %K indexing scheme %K image processing %K infrared imagery %K optical imagery %K land surface %K multispectral remote sensing %K terrain mapping %K geophysical measurement techniques %K geophysical signal processing %X A novel indexing scheme is described to catalogue satellite data on a pixel basis. The objective of this research is to develop an efficient methodology to archive, retrieve and process satellite data, so that data products can be generated to meet the specific needs of individual scientists. When requesting data, users can specify the spatial and temporal resolution, geographic projection, choice of atmospheric correction, and the data selection methodology. The data processing is done in two stages. Satellite data is calibrated, navigated and quality flags are appended in the initial processing. This processed data is then indexed and stored. Secondary processing such as atmospheric correction and projection are done after a user requests the data to create custom made products. Dividing the processing into two stages saves time, since the basic processing tasks such as navigation and calibration, which are common to all requests, are not repeated when different users request satellite data. The indexing scheme described can be extended to allow fusion of data sets from different sensors. %B Geoscience and Remote Sensing Symposium, 1999. IGARSS '99 Proceedings. IEEE 1999 International %V 5 %P 2374 - 2376 vol.5 %8 1999/// %G eng %R 10.1109/IGARSS.1999.771514 %0 Journal Article %J ACM SIGCAS Computers and Society %D 1999 %T Human values and the future of technology: a declaration of responsibility %A Shneiderman, Ben %X We can make a difference in shaping the future by ensuring that computers "serve human needs (Mumford, 1934)." By making explicit the enduring values that we hold dear we can guide computer system designers and developers for the next decade, century, and thereafter. After setting our high-level goals we can pursue the components and seek the process for fulfilling them. High-level goals might include peace, excellent health care, adequate nutrition, accessible education, communication, freedom of expression, support for creative exploration, safety, and socially constructive entertainment. Computer technology can help attain these high-level goals if we clearly state measurable objectives, obtain participation of professionals, and design effective human-computer interfaces. Design considerations include adequate attention to individual differences among users, support of social and organizational structures, design for reliability and safety, provision of access by the elderly, handicapped, or illiterate, and appropriate user controlled adaptation. With suitable theories and empirical research we can achieve ease of learning, rapid performance, low error rates, and good retention over time, while preserving high subjective satisfaction. To raise the consciousness of designers and achieve these goals, we must generate an international debate, stimulate discussions within organizations, and interact with other intellectual communities.
This paper calls for a focus on the "you" and "I" in developing improved user interface (UI) research and systems, offers a Declaration of Responsibility, and proposes a Social Impact Statement for major computing projects. %B ACM SIGCAS Computers and Society %V 29 %P 5 - 9 %8 1999/09// %@ 0095-2737 %G eng %U http://doi.acm.org/10.1145/572183.572185 %N 3 %R 10.1145/572183.572185 %0 Journal Article %J IEEE Computer Graphics and Applications %D 1999 %T Human-centered computing, online communities, and virtual environments %A Brown,J. R %A van Dam,A. %A Earnshaw,R. %A Encarnacao,J. %A Guedj,R. %A Preece,J. %A Shneiderman, Ben %A Vince,J. %K Books %K Collaboration %K Collaborative work %K Conferences %K EC/NSF joint Advanced Research Workshop %K Human computer interaction %K human-centered computing %K Internet %K Laboratories %K Online communities %K Research initiatives %K User interfaces %K Virtual environment %K virtual environments %K Virtual reality %X This report summarizes results of the first EC/NSF joint Advanced Research Workshop, which identified key research challenges and opportunities in information technology. The group agreed that the first joint research workshop should concentrate on the themes of human-centered computing and VEs. Human-centered computing is perceived as an area of strategic importance because of the move towards greater decentralization and decomposition in the location and provision of computation. The area of VEs is one where increased collaboration should speed progress in solving some of the more intractable problems in building effective applications. %B IEEE Computer Graphics and Applications %V 19 %P 70 - 74 %8 1999/12//Nov %@ 0272-1716 %G eng %N 6 %R 10.1109/38.799742 %0 Journal Article %J Working Notes of the AAAI-98 Workshop on Case-Based Reasoning Integrations %D 1998 %T Hybrid planning: An approach to integrating generative and case-based planning %A desJardins, Marie %A Francis,A. %A Wolverton,M. %B Working Notes of the AAAI-98 Workshop on Case-Based Reasoning Integrations %8 1998/// %G eng %0 Journal Article %J VLSI Design %D 1998 %T Hydrodynamic Device Simulation with New State Variables Specially Chosen for a Block Gummel Iterative Approach %A Liang,W. %A Kerr,D. C. %A Goldsman,N. %A Mayergoyz, Issak D %X A new numerical formulation for solving the hydrodynamic model of semiconductor devices is presented. The method is based on using new variables to transform the conventional hydrodynamic equations into forms which facilitate numerical evaluation with a block Gummel approach. To demonstrate the new method, we apply it to model a 0.35 μm 2-D LDD MOSFET, where robust convergence properties are observed. %B VLSI Design %V 6 %P 191 - 195 %8 1998/// %G eng %N 1-4 %0 Journal Article %J SIAM Journal on Matrix Analysis and Applications %D 1998 %T On hyperbolic triangularization: Stability and pivoting %A Stewart,M. %A Stewart, G.W. %X This paper treats the problem of triangularizing a matrix by hyperbolic Householder transformations. The stability of this method, which finds application in block updating and fast algorithms for Toeplitz-like matrices, has been analyzed only in special cases. Here we give a general analysis which shows that two distinct implementations of the individual transformations are relationally stable. The analysis also shows that pivoting is required for the entire triangularization algorithm to be stable. %B SIAM Journal on Matrix Analysis and Applications %V 19 %P 847 - 860 %8 1998/// %G eng %N 4
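For readers unfamiliar with the transformations analyzed in the entry above, the standard textbook form of a hyperbolic Householder transformation can be stated as follows; this is background notation under an assumed signature matrix, not necessarily the exact variant whose two implementations the paper compares.

```latex
% Background: hyperbolic Householder transformation for a signature
% matrix \Phi = \mathrm{diag}(\pm 1); assumed standard form, not taken
% verbatim from the paper.
\[
  H \;=\; I \;-\; \frac{2\, v v^{T} \Phi}{v^{T} \Phi\, v},
  \qquad v^{T} \Phi\, v \neq 0 ,
\]
% H preserves the indefinite inner product induced by \Phi
% (using \Phi^{2} = I and writing s = v^{T}\Phi v):
\[
  H \,\Phi\, H^{T}
  \;=\; \Phi \;-\; \frac{2\, v v^{T}}{s} \;-\; \frac{2\, v v^{T}}{s}
        \;+\; \frac{4\, v\,(v^{T}\Phi v)\, v^{T}}{s^{2}}
  \;=\; \Phi .
\]
```

This preservation of an indefinite inner product, rather than the Euclidean one, is what distinguishes the hyperbolic case from an ordinary Householder reflection and is the source of the stability questions the paper studies.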
%0 Journal Article %J Journal of Computational Biology %D 1997 %T Hardness of flip-cut problems from optical mapping %A Dančík,V. %A Hannenhalli, Sridhar %A Muthukrishnan,S. %B Journal of Computational Biology %V 4 %P 119 - 125 %8 1997/// %G eng %N 2 %0 Conference Paper %B Compression and Complexity of Sequences 1997. Proceedings %D 1997 %T Hardness of flip-cut problems from optical mapping [DNA molecules application] %A Dančík,V. %A Hannenhalli, Sridhar %A Muthukrishnan,S. %K binary shift cut %K Biochemistry %K Bioinformatics %K biological techniques %K Biology %K biomedical optical imaging %K combinatorial mathematics %K combinatorial problem %K computational complexity %K computational problems %K DNA %K DNA molecule %K exclusive binary flip cut %K flip-cut problem hardness %K Genetic engineering %K Genomics %K MATHEMATICS %K molecular biology %K molecular biophysics %K molecule orientation %K multiple partial restriction maps %K NP-complete problem %K optical mapping %K polynomial time solutions %K Polynomials %K sequences %X Optical mapping is a new technology for constructing restriction maps. Associated computational problems include aligning multiple partial restriction maps into a single "consensus" restriction map, and determining the correct orientation of each molecule, which was formalized as the exclusive binary flip cut (EBFC) problem by Muthukrishnan and Parida (see Proc. of the First ACM Conference on Computational Molecular Biology (RECOMB), Santa Fe, p.209-19, 1997). Here, the authors prove that the EBFC problem and a number of its variants are NP-complete. They also identify another problem, formalized as the binary shift cut (BSC) problem, motivated by the fact that there might be missing fragments at the beginnings and/or the ends of the molecules, and prove it to be NP-complete. Therefore, these problems do not have efficient, that is, polynomial time, solutions unless P=NP. %B Compression and Complexity of Sequences 1997. Proceedings %I IEEE %P 275 - 284 %8 1997/06/11/13 %@ 0-8186-8132-2 %G eng %R 10.1109/SEQUEN.1997.666922 %0 Journal Article %J Journal of Logic and Computation %D 1997 %T How to (plan to) meet a deadline between now and then %A Nirkhe,M. %A Kraus,S. %A Miller,M. %A Perlis, Don %B Journal of Logic and Computation %V 7 %P 109 - 109 %8 1997/// %G eng %N 1 %0 Conference Paper %B Euro-Par'96 Parallel Processing %D 1996 %T A high performance image database system for remotely sensed imagery %A Shock,C. %A Chang,C. %A Davis, Larry S. %A Goward,S. %A Saltz, J. %A Sussman, Alan %B Euro-Par'96 Parallel Processing %P 109 - 122 %8 1996/// %G eng %0 Journal Article %J 16th AIAA International Communications Satellite Systems Conference %D 1996 %T Hybrid network management (communication systems) %A Baras,J. S %A Ball,M. %A Karne,R. K %A Kelley,S. %A Jang,K.D. %A Plaisant, Catherine %A Roussopoulos, Nick %A Stathatos,K. %A Vakhutinsky,A. %A Valluri,J. %X We describe our collaborative efforts towards the design and implementation of a next-generation integrated network management system for hybrid networks (INMS/HN). We describe the overall software architecture of the system at its current stage of development. This NMS is specifically designed to address issues relevant to complex heterogeneous networks consisting of seamlessly interoperable terrestrial and satellite networks. NMSs are a key element for interoperability in such networks.
We describe the integration of configuration management and performance management. The next step in this integration is fault management. In particular, we describe the object model, issues concerning the graphical user interface, browsing tools, performance data graphical widget displays, and management information database organization issues. %B 16th AIAA International Communications Satellite Systems Conference %8 1996/// %G eng %0 Journal Article %J International Amateur-Professional Photoelectric Photometry Communications %D 1995 %T How to succeed in graduate school: A guide for students and advisors %A desJardins, Marie %B International Amateur-Professional Photoelectric Photometry Communications %V 58 %P 15 - 15 %8 1995/// %G eng %0 Journal Article %J Crossroads %D 1995 %T How to succeed in graduate school: a guide for students and advisors: part II of II %A desJardins, Marie %B Crossroads %V 1 %P 1 - 6 %8 1995/02// %@ 1528-4972 %G eng %U http://doi.acm.org/10.1145/197892.197895 %N 3 %R 10.1145/197892.197895 %0 Journal Article %J Human factors in information systems: emerging theoretical bases %D 1995 %T Human value and the future of technology %A Shneiderman, Ben %B Human factors in information systems: emerging theoretical bases %P 355 - 355 %8 1995/// %G eng %0 Book %D 1995 %T Human-Computer Interaction Laboratory 1995 Video Reports %A Plaisant, Catherine %A Morrison,S. %A Skokowski,C. %A Reesch,J. %A Shneiderman, Ben %A Laboratory,University of Maryland at College Park. Human/Computer Interaction %A Channel,F. %I University of Maryland at College Park, Human/Computer Interaction Laboratory %8 1995/// %G eng %0 Conference Paper %B Document Analysis and Recognition, 1995., Proceedings of the Third International Conference on %D 1995 %T Hybrid thinning through reconstruction %A David Doermann %A Kia,O. %K hybrid thinning %K image reconstruction %K pixel-wise thinning algorithms %K ambiguous region detection %K junction points %K contour noise %K contextual information %K edge detection %K image segmentation %K local methods %K nonlocal methods %K automatic detection %X One difficulty with many pixel-wise thinning algorithms is that they produce unacceptable results at junction points, or in the presence of contour noise. The authors present a novel approach to detecting ambiguous regions in a thinned image. The method uses the reconstructability properties of appropriate thinning algorithms to reverse the thinning process and automatically detect those pixels which may have resulted from more than one stroke in the image. The ambiguous regions are then interpreted and reconstructed using domain specific or derived contextual information. The approach has the advantage of using local methods to rapidly identify strokes (or regions) which have been thinned correctly and allowing more detailed analysis based on non-local methods in the remaining regions. %B Document Analysis and Recognition, 1995., Proceedings of the Third International Conference on %V 2 %P 632 - 635 vol.2 %8 1995/08// %G eng %R 10.1109/ICDAR.1995.601975 %0 Conference Paper %B Pattern Recognition, 1994. Vol. 3-Conference C: Signal Processing, Proceedings of the 12th IAPR International Conference on %D 1994 %T High performance computing for land cover dynamics %A Parulekar,R. %A Davis, Larry S. %A Chellappa, Rama %A Saltz, J. %A Sussman, Alan %A Townshend,J. %B Pattern Recognition, 1994. Vol.
3-Conference C: Signal Processing, Proceedings of the 12th IAPR International Conference on %P 234 - 238 %8 1994/// %G eng %0 Journal Article %J Artificial Intelligence in Medicine %D 1994 %T High-specificity neurological localization using a connectionist model %A Tuhrim,S. %A Reggia, James A. %A Peng,Y. %B Artificial Intelligence in Medicine %V 6 %P 521 - 532 %8 1994/// %G eng %N 6 %0 Journal Article %J Image and vision computing %D 1994 %T How normal flow constrains relative depth for an active observer %A Huang, L. %A Aloimonos, J. %B Image and vision computing %V 12 %P 435 - 445 %8 1994/// %G eng %N 7 %0 Journal Article %J Crossroads %D 1994 %T How to succeed in graduate school: a guide for students and advisors: part I of II %A desJardins, Marie %X This paper attempts to raise some issues that are important for graduate students to be successful and to get as much out of the process as possible, and for advisors who wish to help their students be successful. The intent is not to provide prescriptive advice -- no formulas for finishing a thesis or twelve-step programs for becoming a better advisor are given -- but to raise awareness on both sides of the advisor-student relationship as to what the expectations are and should be for this relationship, what a graduate student should expect to accomplish, common problems, and where to go if the advisor is not forthcoming. %B Crossroads %V 1 %P 3 - 9 %8 1994/12// %@ 1528-4972 %G eng %U http://doi.acm.org/10.1145/197149.197154 %N 2 %R 10.1145/197149.197154 %0 Journal Article %J University Of Maryland %D 1994 %T Human Emotion Recognition From Motion Using A Radial Basis Function Network Architecture %A Rosenblum,M. %A Yacoob,Y. %A Davis, Larry S. %B University Of Maryland %8 1994/// %G eng %0 Conference Paper %B 11th IAPR International Conference on Pattern Recognition, 1992. Vol.III. Conference C: Image, Speech and Signal Analysis, Proceedings %D 1992 %T Hierarchical curve representation %A Fermüller, Cornelia %A Kropatsch,W. %K Automation %K continuous curves %K curvature %K data mining %K digital images %K Educational institutions %K Feature extraction %K hierarchical curve representation %K IMAGE PROCESSING %K image recognition %K image resolution %K Image segmentation %K multiresolution structure %K Object recognition %K planar curves %K pyramid %K Robustness %K Sampling methods %K Smoothing methods %X Presents a robust method for describing planar curves in multiple resolution using curvature information. The method is developed by taking into account the discrete nature of digital images as well as the discrete aspect of a multiresolution structure (pyramid). The authors deal with the robustness of the technique, which is due to the additional information that is extracted from observing the behavior of corners in the pyramid. Furthermore, the resulting algorithm is conceptually simple and easily parallelizable. They develop theoretical results, analyzing the curvature of continuous curves in scale-space, which show the behavior of curvature extrema under varying scale. These results are used to eliminate any ambiguities that might arise from sampling problems due to the discreteness of the representation. Finally, experimental results demonstrate the potential of the method. %B 11th IAPR International Conference on Pattern Recognition, 1992. Vol.III.
Conference C: Image, Speech and Signal Analysis, Proceedings %I IEEE %P 143 - 146 %8 1992/09/30/Aug-3 %@ 0-8186-2920-7 %G eng %R 10.1109/ICPR.1992.201947 %0 Journal Article %J International Journal of Man-Machine Studies %D 1991 %T High precision touchscreens: design strategies and comparisons with a mouse %A Sears,Andrew %A Shneiderman, Ben %X Three studies were conducted comparing speed of performance, error rates and user preference ratings for three selection devices. The devices tested were a touchscreen, a touchscreen with stabilization (stabilization software filters and smooths raw data from hardware), and a mouse. The task was the selection of rectangular targets 1, 4, 16 and 32 pixels per side (0.4 × 0.6, 1.7 × 2.2, 6.9 × 9.0, 13.8 × 17.9 mm respectively). Touchscreen users were able to point at single pixel targets, thereby countering widespread expectations of poor touchscreen resolution. The results show no difference in performance between the mouse and touchscreen for targets ranging from 32 to 4 pixels per side. In addition, stabilization significantly reduced the error rates for the touchscreen when selecting small targets. These results imply that touchscreens, when properly used, have attractive advantages in selecting targets as small as 4 pixels per side (approximately one-quarter of the size of a single character). A variant of Fitts' Law is proposed to predict touchscreen pointing times. Ideas for future research are also presented. %B International Journal of Man-Machine Studies %V 34 %P 593 - 613 %8 1991/04// %@ 0020-7373 %G eng %U http://www.sciencedirect.com/science/article/pii/0020737391900378 %N 4 %R 10.1016/0020-7373(91)90037-8
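The abstract above proposes a variant of Fitts' Law for touchscreen pointing; the variant itself is not given in the abstract. For context, the widely used Shannon formulation of Fitts' Law is shown below; the paper's touchscreen-specific coefficients and form are not reproduced here.

```latex
% Background: the standard Shannon formulation of Fitts' law; the
% paper's touchscreen variant is not stated in the abstract above.
\[
  MT \;=\; a \;+\; b \,\log_{2}\!\left(\frac{D}{W} + 1\right),
\]
% MT: predicted movement (pointing) time; D: distance to the target;
% W: target width; a, b: empirically fitted device constants.
```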
%0 Journal Article %J ACM SIGCHI Bulletin %D 1991 %T Human values and the future of technology: a declaration of responsibility %A Shneiderman, Ben %X "We must learn to balance the material wonders of technology with the spiritual demands of our human nature." --John Naisbitt (1982). We can make a difference in shaping the future by ensuring that computers "serve human needs (Mumford, 1934)." By making explicit the enduring values that we hold dear we can guide computer system designers and developers for the next decade, century, and thereafter. After setting our high-level goals we can pursue the components and seek the process for fulfilling them. High-level goals might include peace, excellent health care, adequate nutrition, accessible education, communication, freedom of expression, support for creative exploration, safety, and socially constructive entertainment. Computer technology can help attain these high-level goals if we clearly state measurable objectives, obtain participation of professionals, and design effective human-computer interfaces. Design considerations include adequate attention to individual differences among users, support of social and organizational structures, design for reliability and safety, provision of access by the elderly, handicapped, or illiterate, and appropriate user controlled adaptation. With suitable theories and empirical research we can achieve ease of learning, rapid performance, low error rates, and good retention over time, while preserving high subjective satisfaction. To raise the consciousness of designers and achieve these goals, we must generate an international debate, stimulate discussions within organizations, and interact with other intellectual communities. This paper calls for a focus on the "you" and "I" in developing improved user interface (UI) research and systems, offers a Declaration of Responsibility, and proposes a Social Impact Statement for major computing projects. %B ACM SIGCHI Bulletin %V 23 %P 11 - 16 %8 1991/01// %@ 0736-6906 %G eng %U http://doi.acm.org/10.1145/122672.122674 %N 1 %R 10.1145/122672.122674 %0 Conference Paper %B Proceedings of the conference on Computers and the quality of life %D 1990 %T Human values and the future of technology: a declaration of empowerment %A Shneiderman, Ben %B Proceedings of the conference on Computers and the quality of life %S CQL '90 %I ACM %C New York, NY, USA %P 1 - 6 %8 1990/// %@ 0-89791-403-1 %G eng %U http://doi.acm.org/10.1145/97344.97360 %R 10.1145/97344.97360 %0 Conference Paper %B Proceedings of the SIGCHI conference on Human factors in computing systems: Wings for the mind %D 1989 %T Human-computer interaction lab, University of Maryland %A Shneiderman, Ben %B Proceedings of the SIGCHI conference on Human factors in computing systems: Wings for the mind %S CHI '89 %I ACM %C New York, NY, USA %P 309 - 310 %8 1989/// %@ 0-89791-301-9 %G eng %U http://doi.acm.org/10.1145/67449.67509 %R 10.1145/67449.67509 %0 Journal Article %J CHI %D 1989 %T Human-Computer Interaction Laboratory, University of Maryland, Center for Automation Research %A Shneiderman, Ben %B CHI %V 89 %P 309 - 310 %8 1989/// %G eng %0 Conference Paper %B Proceedings of the second annual ACM conference on Hypertext %D 1989 %T Hypertext and software engineering %A Balzer,R. %A Begeman,M. %A Garg,P. K. %A Schwartz,M. %A Shneiderman, Ben %X The purpose of this panel is to bring together researchers in software engineering and hypertext and help identify the major issues in the application of hypertext technology and concepts to software engineering and vice versa. %B Proceedings of the second annual ACM conference on Hypertext %S HYPERTEXT '89 %I ACM %C New York, NY, USA %P 395 - 396 %8 1989/// %@ 0-89791-339-6 %G eng %U http://doi.acm.org/10.1145/74224.74259 %R 10.1145/74224.74259 %0 Journal Article %J Tech Report HCIL-89-06 %D 1989 %T Hypertext hands-on! %A Shneiderman, Ben %A Kearsley,G. %X This innovative book/software package provides the first hands-on nontechnical introduction to hypertext. Hypertext is a computer technology for manipulating information; in a grander sense, it is a new way of reading and writing. With the IBM-PC diskettes provided in this package, you will learn about hypertext by experiencing it. You will discover what it is like to read interactively, to find information according to your own needs and interests. Both the book and the software versions cover the basic concepts of hypertext, typical hypertext applications, and currently available authoring systems. They also raise important design and implementation issues. The book is self-contained and can be read from beginning to end without a computer. The software is also self-contained and, presenting hypertext in hypertext form, can be read in any order you choose. Since the two versions contain largely similar material, they provide an interesting basis for comparison between conventional text presentation and hypertext.
%0 Book %D 1988 %T Hypertext on hypertext: IBM PC & compatibles %E Shneiderman, Ben %I ACM %C New York, NY, USA %8 1988/// %@ 0-89791-280-2 %G eng
%0 Conference Paper %B Proceedings of the 10th Int'l Joint Conference on Artificial Intelligence %D 1987 %T How can a program mean %A Perlis, Don %B Proceedings of the 10th Int'l Joint Conference on Artificial Intelligence %P 163 - 166 %8 1987/// %G eng
%0 Journal Article %J ACM SIGCHI Bulletin %D 1986 %T Human-computer interaction research at the University of Maryland %A Shneiderman, Ben %X The Human-Computer Interaction Laboratory (HCIL) is a unit of the Center for Automation Research at the University of Maryland. HCIL is an interdisciplinary research group whose participants are faculty in the Departments of Computer Science and Psychology and the Colleges of Library and Information Services, Business, and Education. In addition, staff scientists, graduate students, and undergraduates contribute to this small but lively community that pursues empirical studies of people using computers. Our support comes from industrial research projects, government grants, the State of Maryland, and the University of Maryland. Projects often become interrelated in surprising ways, enabling individuals to cooperate constructively. Some of our efforts during the past year are described below. %B ACM SIGCHI Bulletin %V 17 %P 27 - 32 %8 1986/01// %@ 0736-6906 %G eng %U http://doi.acm.org/10.1145/15671.15673 %N 3 %R 10.1145/15671.15673
%0 Journal Article %J SIAM Journal on Scientific and Statistical Computing %D 1986 %T A Hybrid Chebyshev Krylov Subspace Algorithm for Solving Nonsymmetric Systems of Linear Equations %A Elman, Howard %A Saad, Youcef %A Saylor, Paul E. %B SIAM Journal on Scientific and Statistical Computing %V 7 %P 840 - 840 %8 1986/// %@ 0196-5204 %G eng %U http://link.aip.org/link/SJOCE3/v7/i3/p840/s1&Agg=doi %N 3 %R 10.1137/0907057
%0 Report %D 1985 %T A High-Level Interactive System for Designing VLSI Signal Processors %A Owens,R. M. %A JaJa, Joseph F. %K Technical Report %X This paper describes a high-level interactive system that can be used to generate VLSI designs for various operations in signal processing such as filtering, convolution and computing the discrete Fourier transform. The overall process is fully automated and requires that the user specify only a few parameters such as operation, precision, size and architecture type. The built-in architectures are new digit-on-line bit-serial architectures that are based on recently derived fast algorithms for the above operations. The basic elements are compact and have a very small gate delay. We feel that our system will offer a flexible and easy-to-use tool that can produce practical designs that are easy to test, efficient, and fast. %I Institute for Systems Research, University of Maryland, College Park %V ISR-TR-1985-7 %8 1985/// %G eng %U http://drum.lib.umd.edu/handle/1903/4383
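The workflow this report describes, where the user supplies only operation, precision, size, and architecture type and the system produces the design, is a parameter-driven generator interface. The sketch below illustrates that interface style only; the class, option names, and the stand-in "generator" are invented, and the report's actual system is not reproduced here.

```python
# Hypothetical parameter-driven interface in the style the abstract describes.
# All names, option sets, and the stand-in generator are invented.
from dataclasses import dataclass

@dataclass
class DesignSpec:
    operation: str     # e.g. "fir_filter", "convolution", "dft"
    precision: int     # word length in bits
    size: int          # filter taps or transform points
    architecture: str  # e.g. "bit_serial"

def generate(spec: DesignSpec) -> str:
    """Stand-in generator: validate the spec and name the module a real
    system would emit."""
    if spec.operation not in {"fir_filter", "convolution", "dft"}:
        raise ValueError(f"unsupported operation: {spec.operation}")
    return f"{spec.architecture}_{spec.operation}_{spec.size}x{spec.precision}b"

print(generate(DesignSpec("dft", precision=12, size=64, architecture="bit_serial")))
```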
%0 Journal Article %J Empirical foundations of information and software science %D 1985 %T Human factors issues of manuals, online help, and tutorials %A Shneiderman, Ben %B Empirical foundations of information and software science %V 1984 %P 107 - 107 %8 1985/// %G eng
%0 Journal Article %J Proceedings of the Annual Symposium on Computer Application in Medical Care %D 1984 %T Human Factors in Interactive Medical Systems %A Shneiderman, Ben %B Proceedings of the Annual Symposium on Computer Application in Medical Care %P 96 - 96 %8 1984/11/07/ %@ 0195-4210 %G eng
%0 Journal Article %J SIGCHI Bull. %D 1983 %T High-tech can stimulate creative action: the increased ease-of-use of computers supports individual competence and productive work %A Shneiderman, Ben %X Jokes about the complexity and error-prone use of computer systems reflect the experience of many people. Computer anxiety, terminal terror, and network neurosis are modern maladies - but help is on the way! %B SIGCHI Bull. %V 14 %P 6 - 7 %8 1983/04// %@ 0736-6906 %G eng %U http://doi.acm.org/10.1145/1044188.1044189 %N 4 %R 10.1145/1044188.1044189
%0 Journal Article %J Proceedings COMPCON %D 1983 %T Human engineering management plan for interactive systems %A Shneiderman, Ben %B Proceedings COMPCON %V 83 %8 1983/// %G eng
%0 Book Section %B Enduser Systems and Their Human Factors %D 1983 %T Human factors of interactive software %A Shneiderman, Ben %E Blaser,Albrecht %E Zoeppritz,Magdalena %X There is intense interest in human factors issues in interactive computer systems for life-critical applications, industrial/commercial uses, and personal computing in the office or home. Primary design goals include proper functionality, adequate reliability, suitable cost, and adherence to schedule. Measurable human factors issues include time to learn, speed of performance, rate of errors, subjective satisfaction, and retention over time. Specific human factors acceptance tests are described as a natural complement to hardware and software acceptance tests. Project management ideas, information resources, and potential research directions are presented. %B Enduser Systems and Their Human Factors %S Lecture Notes in Computer Science %I Springer Berlin / Heidelberg %V 150 %P 9 - 29 %8 1983/// %@ 978-3-540-12273-9 %G eng %U http://dx.doi.org/10.1007/3-540-12273-7_16
%0 Journal Article %J Human factors in software development %D 1981 %T Human Aspects of Computing Editor Control Flow and Data Structure Documentation: Two Experiments %A Shneiderman, Ben %B Human factors in software development %P 365 - 365 %8 1981/// %G eng
%0 Journal Article %J Proceedings-Compcon %D 1981 %T Human factors issues in designing interactive systems %A Shneiderman, Ben %B Proceedings-Compcon %V 25 %P 116 - 116 %8 1981/// %G eng
%0 Journal Article %J ACM SIGSOC Bulletin %D 1981 %T Human factors studies with system message styles (abstract only) %A Shneiderman, Ben %X Computer systems often contain messages which are imprecise ('SYNTAX ERROR'), hostile ('FATAL ERROR, RUN ABORTED'), cryptic ('IEH291H'), or obscure ('CTL DAMAGE, TRANS ERR'). Such messages may be acceptable to computer professionals who regularly use a specific system, but they lead to frustration for novices and for professionals who are using new features or facilities. We have conducted five studies using COBOL compiler syntax errors and text editor command errors to measure the impact of improving the wording of system messages. The results indicate that increased specificity, more positive tone, and greater clarity can improve correction rates and user satisfaction. An overview of the experimental results will be presented along with guidelines for writing system messages. %B ACM SIGSOC Bulletin %V 13 %P 138 - 138 %8 1981/05// %@ 0163-5794 %G eng %U http://doi.acm.org/10.1145/1015579.810979 %N 2-3 %R 10.1145/1015579.810979
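The finding reported in this abstract, that greater specificity, a more positive tone, and greater clarity improve correction rates, can be read as a rewriting rule for message text. The sketch below pairs the cryptic messages quoted in the abstract with invented improved wordings; these are illustrations of the three guidelines, not the wordings tested in the five studies.

```python
# Before/after system messages illustrating the abstract's guidelines:
# be specific, keep a constructive tone, be clear. The improved wordings
# are invented examples, not those from the experiments.
REWRITES = {
    "SYNTAX ERROR": "Unmatched quotation mark at line 12; add a closing quote.",
    "FATAL ERROR, RUN ABORTED": "The run stopped at step 3; check the input file name and retry.",
    "IEH291H": "Output device unavailable; choose another printer.",
}

def improve(message: str) -> str:
    """Return a more specific, constructive wording when one is known."""
    return REWRITES.get(message, message)

for old in ("SYNTAX ERROR", "IEH291H"):
    print(f"{old!r} -> {improve(old)!r}")
```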
%0 Journal Article %J Information & Management %D 1980 %T Hardware options, evaluation metrics, and a design sequence for interactive information systems %A Shneiderman, Ben %K casual users %K design sequence %K evaluation metrics %K terminal design %K human factors %K Interactive information systems %X Interactive information systems must satisfy a wide variety of users, serve a broad range of tasks, and be suited to diverse hardware environments. This paper concentrates on three aspects of interactive information systems design: hardware options, evaluation metrics, and a possible design sequence. Rigorous pilot studies are emphasized, and supporting experimental evidence is offered. %B Information & Management %V 3 %P 3 - 18 %8 1980/// %@ 0378-7206 %G eng %U http://www.sciencedirect.com/science/article/pii/0378720680900269 %N 1 %R 10.1016/0378-7206(80)90026-9
%0 Journal Article %J Proceedings-Compcon %D 1980 %T Human factors experiments for refining interactive system designs %A Shneiderman, Ben %B Proceedings-Compcon %V 21 %P 123 - 123 %8 1980/// %G eng
%0 Journal Article %J Software Configuration Management %D 1980 %T Human Factors of Software Design and Development %A Weiser,M. %A Shneiderman, Ben %B Software Configuration Management %P 67 - 67 %8 1980/// %G eng
%0 Journal Article %J Computer %D 1979 %T Human Factors Experiments in Designing Interactive Systems %A Shneiderman, Ben %K Application software %K Computer languages %K Design engineering %K Home computing %K human factors %K interactive systems %K Process design %K Testing %X Successful industrial design gracefully unites esthetics and function at minimum cost. However, designers face special problems when they apply their skills to interactive computer systems. %B Computer %V 12 %P 9 - 19 %8 1979/12// %@ 0018-9162 %G eng %N 12 %R 10.1109/MC.1979.1658571