%0 Journal Article %J IEEE Transactions on Visualization and Computer Graphics %D 2018 %T Deep-learning-assisted Volume Visualization %A Cheng, Hsueh-Chien %A Cardone, Antonio %A Jain, Somay %A Krokos, Eric %A Narayan, Kedar %A Subramaniam, Sriram %A Varshney, Amitabh %B IEEE Transactions on Visualization and Computer Graphics %P 1 - 1 %8 Jan-01-2018 %G eng %U http://ieeexplore.ieee.org/document/8265023/ %! IEEE Trans. Visual. Comput. Graphics %R 10.1109/TVCG.2018.2796085 %0 Journal Article %J International Journal of Environmental Research and Public Health %D 2018 %T A Metagenomic Approach to Evaluating Surface Water Quality in Haiti %A Roy, Monika %A Arnaud, Jean %A Jasmin, Paul %A Hamner, Steve %A Hasan, Nur %A Rita R Colwell %A Ford, Timothy %X The cholera epidemic that occurred in Haiti post-earthquake in 2010 has resulted in over 9000 deaths during the past eight years. Currently, morbidity and mortality rates for cholera have declined, but cholera cases still occur on a daily basis. One continuing issue is an inability to accurately predict and identify when cholera outbreaks might occur. To explore this surveillance gap, a metagenomic approach employing environmental samples was taken. In this study, surface water samples were collected at two time points from several sites near the original epicenter of the cholera outbreak in the Central Plateau of Haiti. These samples underwent whole genome sequencing and subsequent metagenomic analysis to characterize the microbial community of bacteria, fungi, protists, and viruses, and to identify antibiotic resistance and virulence associated genes. Replicates from sites were analyzed by principal components analysis, and distinct genomic profiles were obtained for each site. Cholera toxin converting phage was detected at one site, and Shiga toxin converting phages at several sites.
Members of the Acinetobacter family were frequently detected in samples, including members implicated in waterborne diseases. These results indicate a metagenomic approach to evaluating water samples can be useful for source tracking and the surveillance of pathogens such as Vibrio cholerae over time, as well as for monitoring virulence factors such as cholera toxin. %B International Journal of Environmental Research and Public Health %V 15 %P 2211 %8 Jan-10-2018 %G eng %U https://www.ncbi.nlm.nih.gov/pubmed/30309013 %N 10 %! IJERPH %0 Journal Article %J Eos %D 2018 %T Satellites and Cell Phones Form a Cholera Early-Warning System %A Akanda, Ali %A Aziz, Sonia %A Jutla, Antarpreet %A Huq, Anwar %A Alam, Munirul %A Ahsan, Gias %A Rita R Colwell %B Eos %V 99 %8 Mar-03-2020 %G eng %U https://eos.org/project-updates/satellites-and-cell-phones-form-a-cholera-early-warning-system %! Eos %R 10.1029/2018EO094839 %0 Journal Article %J The American Journal of Tropical Medicine and Hygiene %D 2017 %T Assessment of Risk of Cholera in Haiti following Hurricane Matthew %A Huq, Anwar %A Anwar, Rifat %A Rita R Colwell %A McDonald, Michael D. %A Khan, Rakib %A Jutla, Antarpreet %A Akanda, Shafqat %X Damage to the inferior and fragile water and sanitation infrastructure of Haiti after Hurricane Matthew has created an urgent public health emergency in terms of likelihood of cholera occurring in the human population. Using satellite-derived data on precipitation, gridded air temperature, and hurricane path, together with information on water and sanitation (WASH) infrastructure, we tracked changing environmental conditions conducive for growth of pathogenic vibrios. Based on these data, we predicted and validated the likelihood of cholera cases occurring after the hurricane. The risk of cholera in the southwestern part of Haiti has remained relatively high from November 2016 to the present.
Findings of this study provide a contemporary process for monitoring ground conditions that can guide public health intervention to control cholera in the human population by providing access to vaccines and safe WASH facilities. Assuming current social and behavioral patterns remain constant, it is recommended that WASH infrastructure be improved and considered a priority, especially before the 2017 rainy season. %B The American Journal of Tropical Medicine and Hygiene %V 97 %P 896 - 903 %8 Jul-09-2017 %G eng %U http://www.ajtmh.org/content/journals/10.4269/ajtmh.17-0048 %N 3 %R 10.4269/ajtmh.17-0048 %0 Journal Article %J ACS Biomaterials Science & Engineering %D 2017 %T A Bioinformatics 3D Cellular Morphotyping Strategy for Assessing Biomaterial Scaffold Niches %A Florczyk, Stephen J. %A Simon, Mylene %A Juba, Derek %A Pine, P. Scott %A Sarkar, Sumona %A Chen, Desu %A Baker, Paula J. %A Bodhak, Subhadip %A Cardone, Antonio %A Brady, Mary C. %A Bajcsy, Peter %A Simon, Carl G. %B ACS Biomaterials Science & Engineering %V 3 %P 2302 - 2313 %8 Sep-10-2017 %G eng %U http://pubs.acs.org/doi/10.1021/acsbiomaterials.7b00473 %N 10 %! ACS Biomater. Sci. Eng. %R 10.1021/acsbiomaterials.7b00473 %0 Journal Article %J Frontiers in Microbiology %D 2017 %T Characterization of Pathogenic Vibrio parahaemolyticus from the Chesapeake Bay, Maryland %A Chen, Arlene J. %A Hasan, Nur A. %A Haley, Bradd J. %A Taviani, Elisa %A Tarnowski, Mitch %A Brohawn, Kathy %A Johnson, Crystal N. %A Rita R Colwell %A Huq, Anwar %B Frontiers in Microbiology %8 Mar-12-2018 %G eng %U http://journal.frontiersin.org/article/10.3389/fmicb.2017.02460 %! Front. Microbiol. %R 10.3389/fmicb.2017.02460 %0 Journal Article %J mBio %D 2017 %T Comparative Genomics of Escherichia coli Isolated from Skin and Soft Tissue and Other Extraintestinal Infections %A Ranjan, Amit %A Shaik, Sabiha %A Nandanwar, Nishant %A Hussain, Arif %A Tiwari, Sumeet K.
%A Semmler, Torsten %A Jadhav, Savita %A Wieler, Lothar H. %A Alam, Munirul %A Rita R Colwell %A Ahmed, Niyaz %E Cossart, Pascale F. %X Escherichia coli, an intestinal Gram-negative bacterium, has been shown to be associated with a variety of diseases in addition to intestinal infections, such as urinary tract infections (UTIs), meningitis in neonates, septicemia, skin and soft tissue infections (SSTIs), and colisepticemia. Thus, for nonintestinal infections, it is categorized as extraintestinal pathogenic E. coli (ExPEC). It is also an opportunistic pathogen, causing cross infections, notably as an agent of zoonotic diseases. However, comparative genomic data providing functional and genetic coordinates for ExPEC strains associated with these different types of infections have not proven conclusive. In the study reported here, ExPEC E. coli isolated from SSTIs was characterized, including virulence and drug resistance profiles, and compared with isolates from patients suffering either pyelonephritis or septicemia. Results revealed that the majority of the isolates belonged to two pathogenic phylogroups, B2 and D. Approximately 67% of the isolates were multidrug resistant (MDR), with 85% producing extended-spectrum beta-lactamase (ESBL) and 6% producing metallo-beta-lactamase (MBL). The blaCTX-M-15 genotype was observed in at least 70% of the E. coli isolates in each category, conferring resistance to an extended range of beta-lactam antibiotics. Whole-genome sequencing and comparative genomics of the ExPEC isolates revealed that two of the four isolates from SSTIs, NA633 and NA643, belong to pandemic sequence type ST131, whereas functional characteristics of three of the ExPEC pathotypes revealed that they had equal capabilities to form biofilm and were resistant to human serum. Overall, the isolates from a variety of ExPEC infections demonstrated similar resistomes and virulomes and did not display any disease-specific functional or genetic coordinates. 
IMPORTANCE Infections caused by extraintestinal pathogenic E. coli (ExPEC) are of global concern as they result in significant costs to health care facilities management. The recent emergence of a multidrug-resistant pandemic clone, Escherichia coli ST131, is of primary concern as a global threat. In developing countries, such as India, skin and soft tissue infections (SSTIs) associated with E. coli are marginally addressed. In this study, we employed both genomic analysis and phenotypic assays to determine relationships, if any, among the ExPEC pathotypes. Similarity between antibiotic resistance and virulence profiles was observed, ST131 isolates from SSTIs were reported, and genomic similarities among strains isolated from different disease conditions were detected. This study provides functional molecular infection epidemiology insight into SSTI-associated E. coli compared with ExPEC pathotypes. %B mBio %8 Jun-09-2017 %G eng %U http://mbio.asm.org/lookup/doi/10.1128/mBio.01070-17 %N 4 %! mBio %R 10.1128/mBio.01070-17 %0 Journal Article %J Journal of Biomolecular Techniques : JBT %D 2017 %T Genomic Methods and Microbiological Technologies for Profiling Novel and Extreme Environments for the Extreme Microbiome Project (XMP) %A Tighe, Scott %A Afshinnekoo, Ebrahim %A Rock, Tara M. %A McGrath, Ken %A Alexander, Noah %A McIntyre, Alexa %A Ahsanuddin, Sofia %A Bezdan, Daniela %A Green, Stefan J. %A Joye, Samantha %A Stewart Johnson, Sarah %A Baldwin, Don A. %A Bivens, Nathan %A Ajami, Nadim %A Carmical, Joseph R. %A Herriott, Ian Charold %A Rita R Colwell %A Donia, Mohamed %A Foox, Jonathan %A Greenfield, Nick %A Hunter, Tim %A Hoffman, Jessica %A Hyman, Joshua %A Jorgensen, Ellen %A Krawczyk, Diana %A Lee, Jodie %A Levy, Shawn %A Garcia-Reyero, àlia %A Settles, Matthew %A Thomas, Kelley %A ómez, Felipe %A Schriml, Lynn %A Kyrpides, Nikos %A Zaikova, Elena %A Penterman, Jon %A Mason, Christopher E. 
%X The Extreme Microbiome Project (XMP) is a project launched by the Association of Biomolecular Resource Facilities Metagenomics Research Group (ABRF MGRG) that focuses on whole genome shotgun sequencing of extreme and unique environments using a wide variety of biomolecular techniques. The goals are multifaceted, including development and refinement of new techniques for the following: 1) the detection and characterization of novel microbes, 2) the evaluation of nucleic acid techniques for extremophilic samples, and 3) the identification and implementation of the appropriate bioinformatics pipelines. Here, we highlight the different ongoing projects that we have been working on, as well as details on the various methods we use to characterize the microbiome and metagenome of these complex samples. In particular, we present data on a novel multienzyme extraction protocol that we developed, called Polyzyme or MetaPolyZyme. Presently, the XMP is characterizing sample sites around the world with the intent of discovering new species, genes, and gene clusters. Once a project site is complete, the resulting data will be publicly available. Sites include Lake Hillier in Western Australia, the “Door to Hell” crater in Turkmenistan, deep ocean brine lakes of the Gulf of Mexico, deep ocean sediments from Greenland, permafrost tunnels in Alaska, ancient microbial biofilms from Antarctica, Blue Lagoon Iceland, Ethiopian toxic hot springs, and the acidic hypersaline ponds in Western Australia. %B Journal of Biomolecular Techniques : JBT %V 28 %P 31 - 39 %8 Jan-04-2017 %G eng %U https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5345951/ %N 1 %!
J Biomol Tech %R 10.7171/jbt.17-2801-004 %0 Journal Article %J Advances in Water Resources %D 2017 %T Hydroclimatic sustainability assessment of changing climate on cholera in the Ganges-Brahmaputra basin %A Nasr-Azadani, Fariborz %A Khan, Rakibul %A Rahimikollu, Javad %A Unnikrishnan, Avinash %A Akanda, Ali %A Alam, Munirul %A Huq, Anwar %A Jutla, Antarpreet %A Rita R Colwell %X The association of cholera and climate has been extensively documented. However, determining the effects of changing climate on the occurrence of disease remains a challenge. Bimodal peaks of cholera in the Bengal Delta are hypothesized to be linked to asymmetric flow of the Ganges and Brahmaputra rivers. Spring cholera is related to intrusion of bacteria-laden coastal seawater during low flow seasons, while autumn cholera results from cross-contamination of water resources when high flows in the rivers cause massive inundation. The coarse resolution of General Circulation Model (GCM) output (usually 100–300 km) cannot be used to evaluate variability at the local scale (10–20 km), hence the goal of this study was to develop a framework that could be used to understand impacts of climate change on occurrence of cholera. Instead of a traditional approach of downscaling precipitation, streamflow of the two rivers was directly linked to GCM outputs, achieving reasonable accuracy (R2 = 0.89 for the Ganges and R2 = 0.91 for the Brahmaputra) using machine learning algorithms (Support Vector Regression-Particle Swarm Optimization). Copula methods were used to determine probabilistic risks of cholera under several discharge conditions. Key results, using model outputs from ECHAM5, GFDL, and HadCM3 for the A1B and A2 scenarios, suggest that the combined low flow of the two rivers may increase in the future, with high flows increasing for the first half of this century and decreasing thereafter. Spring and autumn cholera, assuming societal conditions remain constant (e.g., at the current rate), may decrease.
However, significant shifts were noted in the magnitude of river discharge, suggesting that cholera dynamics of the delta may well demonstrate an uncertain, though predictable, pattern of occurrence over the next century. %B Advances in Water Resources %V 108 %P 332 - 344 %8 Jan-10-2017 %G eng %U https://linkinghub.elsevier.com/retrieve/pii/S030917081630728X %! Advances in Water Resources %R 10.1016/j.advwatres.2016.11.018 %0 Journal Article %J Water %D 2017 %T Membrane Bioreactor-Based Wastewater Treatment Plant in Saudi Arabia: Reduction of Viral Diversity, Load, and Infectious Capacity %A Jumat, Muhammad %A Hasan, Nur %A Subramanian, Poorani %A Heberling, Colin %A Rita R Colwell %A Hong, Pei-Ying %B Water %V 9 %P 534 %8 Jan-07-2017 %G eng %U http://www.mdpi.com/2073-4441/9/7/534 %N 7 %! Water %R 10.3390/w9070534 %0 Journal Article %J Scientific Reports %D 2017 %T The microbiomes of blowflies and houseflies as bacterial transmission reservoirs %A Junqueira, AC %A Ratan, Aakrosh %A Acerbi, Enzo %A Drautz-Moses, Daniela I. %A Premkrishnan, BNV %A Costea, PI %A Linz, Bodo %A Purbojati, Rikky W. %A Paulo, Daniel F. %A Gaultier, Nicolas E. %A Subramanian, Poorani %A Hasan, Nur A. %A Rita R Colwell %A Bork, Peer %A Azeredo-Espin, Ana Maria L. %A Bryant, Donald A. %A Schuster, Stephan C. %X Blowflies and houseflies are mechanical vectors inhabiting synanthropic environments around the world. They feed and breed in fecal and decaying organic matter, but the microbiome they harbour and transport is largely uncharacterized. We sampled 116 individual houseflies and blowflies from varying habitats on three continents and subjected them to high-coverage, whole-genome shotgun sequencing. This allowed for genomic and metagenomic analyses of the host-associated microbiome at the species level. Both fly host species segregate based on principal coordinate analysis of their microbial communities, but they also show an overlapping core microbiome.
Legs and wings displayed the largest microbial diversity and were shown to be an important route for microbial dispersion. The environmental sequencing approach presented here detected a stochastic distribution of human pathogens, such as Helicobacter pylori, thereby demonstrating the potential of flies as proxies for environmental and public health surveillance. %B Scientific Reports %8 Jan-12-2017 %G eng %U http://www.nature.com/articles/s41598-017-16353-x %N 1 %! Sci Rep %R 10.1038/s41598-017-16353-x %0 Journal Article %J Current Environmental Health Reports %D 2017 %T Natural Disasters and Cholera Outbreaks: Current Understanding and Future Outlook %A Jutla, Antarpreet %A Khan, Rakibul %A Rita R Colwell %X Purpose of Review Diarrheal diseases remain a serious global public health threat, especially for those populations lacking access to safe water and sanitation infrastructure. Although association of several diarrheal diseases, e.g., cholera, shigellosis, etc., with climatic processes has been documented, the global human population remains at heightened risk of outbreak of diseases after natural disasters, such as earthquakes, floods, or droughts. In this review, cholera was selected as a signature diarrheal disease and the role of natural disasters in triggering and transmitting cholera was analyzed. Recent Findings Key observations include identification of an inherent feedback loop that includes societal structure, prevailing climatic processes, and spatio-temporal seasonal variability of natural disasters. Data obtained from satellite-based remote sensing are concluded to have application, although limited, in predicting risks of cholera outbreaks. Summary We argue that with the advent of new high spectral and spatial resolution data, earth observation systems should be seamlessly integrated in a decision support mechanism to mobilize resources when a region suffers a natural disaster.
A framework is proposed that can be used to assess the impact of natural disasters with respect to outbreaks of cholera, providing assessment of short- and long-term influence of climatic processes on disease outbreaks. %B Current Environmental Health Reports %P 99 - 107 %8 Jan-03-2017 %G eng %U http://link.springer.com/10.1007/s40572-017-0132-5 %N 1 %! Curr Envir Health Rpt %R 10.1007/s40572-017-0132-5 %0 Conference Paper %B 2016 32nd Southern Biomedical Engineering Conference (SBEC) %D 2016 %T 3D Cellular Morphotyping of Scaffold Niches %A Florczyk, Stephen J %A Simon, Mylene %A Juba, Derek %A Pine, P Scott %A Sarkar, Sumona %A Chen, Desu %A Baker, Paula J %A Bodhak, Subhadip %A Cardone, Antonio %A Brady, Mary %A others %B 2016 32nd Southern Biomedical Engineering Conference (SBEC) %I IEEE %G eng %0 Journal Article %J BMC Microbiology %D 2016 %T Enrichment dynamics of Listeria monocytogenes and the associated microbiome from naturally contaminated ice cream linked to a listeriosis outbreak %A Ottesen, Andrea %A Ramachandran, Padmini %A Reed, Elizabeth %A White, James R. %A Hasan, Nur %A Subramanian, Poorani %A Ryan, Gina %A Jarvis, Karen %A Grim, Christopher %A Daquiqan, Ninalynn %A Hanes, Darcy %A Allard, Marc %A Rita R Colwell %A Brown, Eric %A Chen, Yi %B BMC Microbiology %8 Jan-12-2016 %G eng %U http://bmcmicrobiol.biomedcentral.com/articles/10.1186/s12866-016-0894-1 %! BMC Microbiol %R 10.1186/s12866-016-0894-1 %0 Journal Article %J mBio %D 2016 %T Phylogenetic Diversity of Vibrio cholerae Associated with Endemic Cholera in Mexico from 1991 to 2008 %A Choi, Seon Y %A Rashed, Shah M. %A Hasan, Nur A. %A Alam, Munirul %A Islam, Tarequl %A Sadique, Abdus %A Johura, Fatema-Tuz %A Eppinger, Mark %A Ravel, Jacques %A Huq, Anwar %A Cravioto, Alejandro %A Rita R Colwell %X An outbreak of cholera occurred in 1991 in Mexico, where it had not been reported for more than a century and is now endemic.
Vibrio cholerae O1 prototype El Tor and classical strains coexisted with altered El Tor strains (1991 to 1997). Nontoxigenic (CTX−) V. cholerae El Tor dominated toxigenic (CTX+) strains (2001 to 2003), but V. cholerae CTX+ variant El Tor was isolated during 2004 to 2008, outcompeting CTX− V. cholerae. Genomes of six Mexican V. cholerae O1 strains isolated during 1991 to 2008 were sequenced and compared with both contemporary and archived strains of V. cholerae. Three were CTX+ El Tor, two were CTX− El Tor, and the remaining strain was a CTX+ classical isolate. Whole-genome sequence analysis showed the six isolates belonged to five distinct phylogenetic clades. One CTX− isolate is ancestral to the 6th and 7th pandemic CTX+ V. cholerae isolates. The other CTX− isolate joined with CTX− non-O1/O139 isolates from Haiti and seroconverted O1 isolates from Brazil and Amazonia. One CTX+ isolate was phylogenetically placed with the sixth pandemic classical clade and the V. cholerae O395 classical reference strain. Two CTX+ El Tor isolates possessing intact Vibrio seventh pandemic island II (VSP-II) are related to hybrid El Tor isolates from Mozambique and Bangladesh. The third CTX+ El Tor isolate contained the West African-South American (WASA) recombination in VSP-II and showed relatedness to isolates from Peru and Brazil. Except for one isolate, which was resistant to streptomycin, all Mexican isolates lack SXT/R391 integrative conjugative elements (ICEs) and are sensitive to selected antibiotics. No isolates were related to contemporary isolates from Asia, Africa, or Haiti, indicating phylogenetic diversity. %B mBio %V 7 %8 Apr-05-2016 %G eng %U http://mbio.asm.org/lookup/doi/10.1128/mBio.02160-15 %N 2 %! mBio %R 10.1128/mBio.02160-15 %0 Journal Article %J PeerJ %D 2015 %T Concordance and discordance of sequence survey methods for molecular epidemiology %A Castro-Nallar, Eduardo %A Hasan, Nur A. %A Cebula, Thomas A. %A Rita R Colwell %A Robison, Richard A. %A Johnson, W.
Evan %A Crandall, Keith A. %X The post-genomic era is characterized by the direct acquisition and analysis of genomic data with many applications, including the enhancement of the understanding of microbial epidemiology and pathology. However, there are a number of molecular approaches to survey pathogen diversity, and the impact of these different approaches on parameter estimation and inference is not entirely clear. We sequenced whole genomes of bacterial pathogens, Burkholderia pseudomallei, Yersinia pestis, and Brucella spp. (60 new genomes), and combined them with 55 genomes from GenBank to address how different molecular survey approaches (whole genomes, SNPs, and MLST) impact downstream inferences on molecular evolutionary parameters, evolutionary relationships, and trait character associations. We selected isolates for sequencing to represent temporal, geographic origin, and host range variability. We found that substitution rate estimates vary widely among approaches, and that SNP and genomic datasets yielded different but strongly supported phylogenies. MLST yielded poorly supported phylogenies, especially in our low diversity dataset, i.e., Y. pestis. Trait associations showed that B. pseudomallei and Y. pestis phylogenies are significantly associated with geography, irrespective of the molecular survey approach used, while Brucella spp. phylogeny appears to be strongly associated with geography and host origin. We contrast inferences made among monomorphic (clonal) and non-monomorphic bacteria, and between intra- and inter-specific datasets. We also discuss our results in light of underlying assumptions of different approaches.
%B PeerJ %P e761 %8 Jan-01-2015 %G eng %U https://peerj.com/articles/761 %R 10.7717/peerj.761 %0 Journal Article %J Frontiers in Public Health %D 2015 %T Diagnostic Approach for Monitoring Hydroclimatic Conditions Related to Emergence of West Nile Virus in West Virginia %A Jutla, Antarpreet %A Huq, Anwar %A Rita R Colwell %X West Nile virus (WNV), a mosquito-borne and water-based disease, is increasingly a global threat to public health. Since its appearance in the northeastern United States in 1999, WNV has been reported in several states in the continental United States. The objective of this study is to highlight the role of hydroclimatic processes estimated through satellite sensors in capturing conditions for emergence of the vectors in historically disease-free regions. We tested the hypothesis that an increase in surface temperature, in combination with intensification of vegetation and enhanced precipitation, leads to conditions favorable for vector (mosquito) growth. Analysis of land surface temperature (LST) patterns shows that temperature values >16°C, with heavy precipitation, may lead to abundance of the mosquito population. This hypothesis was tested in West Virginia, where a sudden epidemic of WNV infection was reported in 2012. Our results emphasize the value of hydroclimatic processes estimated by satellite remote sensing, as well as continued environmental surveillance of mosquitoes, because when a vector-borne infection like WNV is discovered in contiguous regions, the risk of spread of WNV mosquitoes increases at points where appropriate hydroclimatic processes intersect with the vector niche. %B Frontiers in Public Health %8 Dec-02-2015 %G eng %U http://journal.frontiersin.org/Article/10.3389/fpubh.2015.00010/abstract %! Front.
Public Health %R 10.3389/fpubh.2015.00010 %0 Journal Article %J Climate Research %D 2015 %T Downscaling river discharge to assess the effects of climate change on cholera outbreaks in the Bengal Delta %A Nasr-Azadani, F %A Unnikrishnan, A %A Akanda, A %A Islam, S %A Alam, M %A Huq, A %A Jutla, A %A Rita R Colwell %X Endemic cholera in the Bengal Delta region of South Asia has been associated with asymmetric and episodic variability of river discharge. Spring cholera was found to be related to intrusion of bacteria-laden coastal seawater during low flow seasons. Autumn cholera was hypothesized to result from cross-contamination of water resources when high river discharge causes massive inland inundation. The effect of climate change on diarrheal diseases has not been explored, because of the difficulties in establishing linkages between coarse-resolution global climate model outputs and localized disease outbreaks. Since rivers act as corridors for transport of cholera bacteria, the first step is to understand the discharge variability that may occur with climate change and whether it is linked to cholera. Here, we present a framework for downscaling precipitation from global climate models for river discharge in the Ganges-Brahmaputra-Meghna basin. Using a data-mining method that includes particle swarm optimization-based support vector regression, precipitation was downscaled for a statistical multiple regressive model to estimate river discharge in the basin. Key results from an ensemble of HadCM3, GFDL, and ECHAM5 models indicated 8% and 7.5% increases in flows for the IPCC A1B and A2 scenarios, respectively. The majority of the changes are attributable to increases in flows from February through August for both scenarios, with little to no change in seasonality of high and low flows during the next century. The probability of spring and autumn cholera is likely to increase steadily in the endemic region of the Bengal Delta.
%B Climate Research %V 64 %P 257 - 274 %8 Jul-08-2017 %G eng %U http://www.int-res.com/abstracts/cr/v64/n3/p257-274/ %N 3 %! Clim. Res. %R 10.3354/cr01310 %0 Journal Article %J mBio %D 2015 %T Hybrid Vibrio cholerae El Tor Lacking SXT Identified as the Cause of a Cholera Outbreak in the Philippines %A Klinzing, David C. %A Choi, Seon Young %A Hasan, Nur A. %A Matias, Ronald R. %A Tayag, Enrique %A Geronimo, Josefina %A Skowronski, Evan %A Rashed, Shah M. %A Kawashima, Kent %A Rosenzweig, C. Nicole %A Gibbons, Henry S. %A Torres, Brian C. %A Liles, Veni %A Alfon, Alicia C. %A Juan, Maria Luisa %A Natividad, Filipinas F. %A Cebula, Thomas A. %A Rita R Colwell %X Cholera continues to be a global threat, with high rates of morbidity and mortality. In 2011, a cholera outbreak occurred in Palawan, Philippines, affecting more than 500 people, and 20 individuals died. Vibrio cholerae O1 was confirmed as the etiological agent. Source attribution is critical in cholera outbreaks for proper management of the disease, as well as to control spread. In this study, three V. cholerae O1 isolates from a Philippines cholera outbreak were sequenced and their genomes analyzed to determine phylogenetic relatedness to V. cholerae O1 isolates from recent outbreaks of cholera elsewhere. The Philippines V. cholerae O1 isolates were determined to be V. cholerae O1 hybrid El Tor belonging to the seventh-pandemic clade. They clustered tightly, forming a monophyletic clade closely related to V. cholerae O1 hybrid El Tor from Asia and Africa. The isolates possess a unique multilocus variable-number tandem repeat analysis (MLVA) genotype (12-7-9-18-25 and 12-7-10-14-21) and lack SXT. In addition, they possess a novel 15-kb genomic island (GI-119) containing a predicted type I restriction-modification system. The CTXΦ-RS1 array of the Philippines isolates was similar to that of V. cholerae O1 MG116926, a hybrid El Tor strain isolated in Bangladesh in 1991. 
Overall, the data indicate that the Philippines V. cholerae O1 isolates are unique, differing from recent V. cholerae O1 isolates from Asia, Africa, and Haiti. Furthermore, the results of this study support the hypothesis that the Philippines isolates of V. cholerae O1 are indigenous and exist locally in the aquatic ecosystem of the Philippines. %B mBio %8 Jan-05-2015 %G eng %U http://mbio.asm.org/lookup/doi/10.1128/mBio.00047-15 %N 2 %! mBio %R 10.1128/mBio.00047-15 %0 Journal Article %J Frontiers in Public Health %D 2015 %T Occurrence and Diversity of Clinically Important Vibrio Species in the Aquatic Environment of Georgia %A Kokashvili, Tamar %A Whitehouse, Chris A. %A Tskhvediani, Ana %A Grim, Christopher J. %A Elbakidze, Tinatin %A Mitaishvili, Nino %A Janelidze, Nino %A Jaiani, Ekaterine %A Haley, Bradd J. %A Lashkhi, Nino %A Huq, Anwar %A Rita R Colwell %A Tediashvili, Marina %B Frontiers in Public Health %8 10/2015 %G eng %U http://journal.frontiersin.org/Article/10.3389/fpubh.2015.00232/ %! Front. Public Health %R 10.3389/fpubh.2015.00232 %0 Journal Article %J The American Journal of Tropical Medicine and Hygiene %D 2015 %T Predictive Time Series Analysis Linking Bengal Cholera with Terrestrial Water Storage Measured from Gravity Recovery and Climate Experiment Sensors %A Rita R Colwell %A Unnikrishnan, Avinash %A Jutla, Antarpreet %A Huq, Anwar %A Akanda, Ali %X Outbreaks of diarrheal diseases, including cholera, are related to floods and droughts in regions where water and sanitation infrastructure are inadequate or insufficient. However, availability of data on water scarcity and abundance in transnational basins is a prerequisite for developing cholera forecasting systems. With more than a decade of terrestrial water storage (TWS) data from the Gravity Recovery and Climate Experiment, conditions favorable for predicting cholera occurrence may now be determined.
We explored lead–lag relationships between TWS in the Ganges–Brahmaputra–Meghna basin and endemic cholera in Bangladesh. Since bimodal seasonal peaks of cholera in Bangladesh occur during the spring and autumn seasons, two separate logistic models between TWS and disease time series (2002–2010) were developed. TWS representing water availability showed an asymmetrical, strong association with cholera prevalence in the spring (τ = −0.53; P < 0.001) and autumn (τ = 0.45; P < 0.001) up to 6 months in advance. A one-unit (centimeter of water) decrease in water availability in the basin increased odds of above-normal cholera by 24% (confidence interval [CI] = 20–31%; P < 0.05) in the spring, while an increase in regional water by 1 unit, through floods, increased odds of above-average cholera in the autumn by 29% (CI = 22–33%; P < 0.05). %B The American Journal of Tropical Medicine and Hygiene %V 93 %P 1179 - 1186 %8 Sep-12-2015 %G eng %U http://www.ajtmh.org/content/journals/10.4269/ajtmh.14-0648 %N 6 %R 10.4269/ajtmh.14-0648 %0 Journal Article %J Applied and Environmental Microbiology %D 2015 %T Rapid Proliferation of Vibrio parahaemolyticus, Vibrio vulnificus, and Vibrio cholerae during Freshwater Flash Floods in French Mediterranean Coastal Lagoons %A Esteves, Kevin %A Hervio-Heath, Dominique %A Mosser, Thomas %A Rodier, Claire %A Tournoud, Marie-George %A Jumas-Bilak, Estelle %A Rita R Colwell %A Monfort, Patrick %E Wommack, K. E. %X Vibrio parahaemolyticus, Vibrio vulnificus, and Vibrio cholerae of the non-O1/non-O139 serotype are present in coastal lagoons of southern France. In these Mediterranean regions, the rivers have long low-flow periods followed by short-duration or flash floods during and after heavy intense rainstorms, particularly at the end of the summer and in autumn. These floods bring large volumes of freshwater into the lagoons, reducing their salinity.
Water temperatures recorded during sampling (15 to 24°C) were favorable for the presence and multiplication of vibrios. In autumn 2011, before heavy rainfalls and flash floods, salinities ranged from 31.4 to 36.1‰ and concentrations of V. parahaemolyticus, V. vulnificus, and V. cholerae varied from 0 to 1.5 × 10³ most probable number (MPN)/liter, 0.7 to 2.1 × 10³ MPN/liter, and 0 to 93 MPN/liter, respectively. Following heavy rainstorms that generated severe flash flooding and heavy discharge of freshwater, salinity decreased, reaching 2.2 to 16.4‰ within 15 days, depending on the site, with a concomitant increase in Vibrio concentration to ca. 10⁴ MPN/liter. The highest concentrations were reached with salinities between 10 and 20‰ for V. parahaemolyticus, 10 and 15‰ for V. vulnificus, and 5 and 12‰ for V. cholerae. Thus, an abrupt decrease in salinity caused by heavy rainfall and major flooding favored growth of human-pathogenic Vibrio spp. and their proliferation in the Languedocian lagoons. Based on these results, it is recommended that temperature and salinity monitoring be done to predict the presence of these Vibrio spp. in shellfish-harvesting areas of the lagoons. %B Applied and Environmental Microbiology %P 7600 - 7609 %8 Jan-11-2015 %G eng %U http://aem.asm.org/lookup/doi/10.1128/AEM.01848-15 %N 21 %! Appl. Environ. Microbiol. %R 10.1128/AEM.01848-15 %0 Journal Article %J PLOS ONE %D 2015 %T Satellite Based Assessment of Hydroclimatic Conditions Related to Cholera in Zimbabwe %A Jutla, Antarpreet %A Aldaach, Haidar %A Billian, Hannah %A Akanda, Ali %A Huq, Anwar %A Rita R Colwell %E Schumann, Guy J-P. %X Introduction Cholera, an infectious diarrheal disease, has been shown to be associated with large scale hydroclimatic processes. The sudden and sporadic occurrence of epidemic cholera is linked with high mortality rates, in part, due to uncertainty in timing and location of outbreaks.
Improved understanding of the relationship between pathogenic abundance and climatic processes makes prediction of disease outbreaks an achievable goal. In this study, we show association of large scale hydroclimatic processes with the cholera epidemic in Zimbabwe reported to have begun in Chitungwiza, a city in Mashonaland East province, in August 2008. Principal Findings Climatic factors in the region were found to be associated with triggering the cholera outbreak and are shown to be related to anomalies of temperature and precipitation, validating the hypothesis that poor conditions of sanitation, coupled with elevated temperatures, and followed by heavy rainfall can initiate outbreaks of cholera. Spatial estimation by satellite of precipitation and global gridded air temperature captured sensitivities in hydroclimatic conditions that permitted identification of the location in the region where the disease outbreak began. Discussion Satellite derived hydroclimatic processes can be used to capture environmental conditions related to epidemic cholera, as occurred in Zimbabwe, thereby providing an early warning system. Since cholera cannot be eradicated because the causative agent, Vibrio cholerae, is autochthonous to the aquatic environment, prediction of conditions favorable for its growth and estimation of risks of triggering the disease in a given population can be used to alert responders, potentially decreasing infection and saving lives. %B PLOS ONE %P e0137828 %8 May-09-2017 %G eng %U https://dx.plos.org/10.1371/journal.pone.0137828 %N 9 %!
PLoS ONE %R 10.1371/journal.pone.0137828 %0 Journal Article %J BMC bioinformatics %D 2015 %T Survey statistics of automated segmentations applied to optical imaging of mammalian cells %A Bajcsy, Peter %A Cardone, Antonio %A Chalfoun, Joe %A Halter, Michael %A Juba, Derek %A Kociolek, Marcin %A Majurski, Michael %A Peskin, Adele %A Simon, Carl %A Simon, Mylene %A others %B BMC bioinformatics %V 16 %P 1 %G eng %0 Journal Article %J Science %D 2015 %T A unified initiative to harness Earth's microbiomes %A Alivisatos, A. P. %A Blaser, M. J. %A Brodie, E. L. %A Chun, M. %A Dangl, J. L. %A Donohue, T. J. %A Dorrestein, P. C. %A Gilbert, J. A. %A Green, J. L. %A Jansson, J. K. %A Knight, R. %A Maxon, M. E. %A McFall-Ngai, M. J. %A Miller, J. F. %A Pollard, K. S. %A Ruby, E. G. %A Taha, S. A. %A Rita R Colwell %B Science %P 507 - 508 %8 Jun-10-2017 %G eng %U http://www.sciencemag.org/cgi/doi/10.1126/science.aac8480 %N 6260 %! Science %R 10.1126/science.aac8480 %0 Journal Article %J The Lancet Global Health %D 2014 %T Global diarrhoea action plan needs integrated climate-based surveillance %A Akanda, Ali S %A Jutla, Antarpreet S %A Rita R Colwell %B The Lancet Global Health %V 2 %P e69 - e70 %8 Jan-02-2014 %G eng %U https://linkinghub.elsevier.com/retrieve/pii/S2214109X13701554 %N 2 %! The Lancet Global Health %R 10.1016/S2214-109X(13)70155-4 %0 Journal Article %J Frontiers in Microbiology %D 2014 %T Molecular diversity and predictability of Vibrio parahaemolyticus along the Georgian coastal zone of the Black Sea %A Haley, Bradd J. %A Kokashvili, Tamar %A Tskhvediani, Ana %A Janelidze, Nino %A Mitaishvili, Nino %A Grim, Christopher J. %A Constantin de Magny, Guillaume %A Chen, Arlene J. %A Taviani, Elisa %A Eliashvili, Tamar %A Tediashvili, Marina %A Whitehouse, Chris A.
%A Rita R Colwell %A Huq, Anwar %X Vibrio parahaemolyticus is a leading cause of seafood-related gastroenteritis and is also an autochthonous member of marine and estuarine environments worldwide. One-hundred seventy strains of V. parahaemolyticus were isolated from water and plankton samples collected along the Georgian coast of the Black Sea during 28 months of sample collection. All isolated strains were tested for presence of tlh, trh, and tdh. A subset of strains were serotyped and tested for additional factors and markers of pandemicity. Twenty-six serotypes, five of which are clinically relevant, were identified. Although all 170 isolates were negative for tdh, trh, and the Kanagawa Phenomenon, 7 possessed the GS-PCR sequence and 27 the 850 bp sequence of V. parahaemolyticus pandemic strains. The V. parahaemolyticus population in the Black Sea was estimated to be genomically heterogeneous by rep-PCR and the serodiversity observed did not correlate with rep-PCR genomic diversity. Statistical modeling was used to predict presence of V. parahaemolyticus as a function of water temperature, with strongest concordance observed for Green Cape site samples (Percent of total variance = 70, P < 0.001). Results demonstrate a diverse population of V. parahaemolyticus in the Black Sea, some of which carry pandemic markers, with increased water temperature correlated to an increase in abundance of V. parahaemolyticus. %B Frontiers in Microbiology %V 5 %8 Jan-01-2014 %G eng %U http://journal.frontiersin.org/article/10.3389/fmicb.2014.00045 %! Front. Microbiol. %R 10.3389/fmicb.2014.00045 %0 Journal Article %J mBio %D 2014 %T Phylodynamic Analysis of Clinical and Environmental Vibrio cholerae Isolates from Haiti Reveals Diversification Driven by Positive Selection %A Azarian, Taj %A Ali, Afsar %A Johnson, Judith A. %A Mohr, David %A Prosperi, Mattia %A Veras, Nazle M. %A Jubair, Mohammed %A Strickland, Samantha L. %A Rashid, Mohammad H. %A Alam, Meer T. 
%A Weppelmann, Thomas A. %A Katz, Lee S. %A Tarr, Cheryl L. %A Rita R Colwell %A Morris, J. Glenn %A Salemi, Marco %X Phylodynamic analysis of genome-wide single-nucleotide polymorphism (SNP) data is a powerful tool to investigate underlying evolutionary processes of bacterial epidemics. The method was applied to investigate a collection of 65 clinical and environmental isolates of Vibrio cholerae from Haiti collected between 2010 and 2012. Characterization of isolates recovered from environmental samples identified a total of four toxigenic V. cholerae O1 isolates, four non-O1/O139 isolates, and a novel nontoxigenic V. cholerae O1 isolate with the classical tcpA gene. Phylogenies of strains were inferred from genome-wide SNPs using coalescent-based demographic models within a Bayesian framework. A close phylogenetic relationship between clinical and environmental toxigenic V. cholerae O1 strains was observed. As cholera spread throughout Haiti between October 2010 and August 2012, the population size initially increased and then fluctuated over time. Selection analysis along internal branches of the phylogeny showed a steady accumulation of synonymous substitutions and a progressive increase of nonsynonymous substitutions over time, suggesting diversification likely was driven by positive selection. Short-term accumulation of nonsynonymous substitutions driven by selection may have significant implications for virulence, transmission dynamics, and even vaccine efficacy. %B mBio %8 Jul-12-2016 %G eng %U http://mbio.asm.org/lookup/doi/10.1128/mBio.01824-14 %N 6 %! mBio %R 10.1128/mBio.01824-14 %0 Book %D 2013 %T Current Topics in Microbiology and Immunology One Health: The Human-Animal-Environment Interfaces in Emerging Infectious Diseases The Human Environment Interface: Applying Ecosystem Concepts to Health %A Preston, Nicholas D. %A Daszak, Peter %A Rita R Colwell %E Mackenzie, John S. %E Jeggo, Martyn %E Daszak, Peter %E Richt, Juergen A. 
%I Springer Berlin Heidelberg %C Berlin, Heidelberg %V 365 %P 83 - 100 %@ 978-3-642-36888-2 %G eng %U http://link.springer.com/10.1007/978-3-642-36889-9 %R 10.1007/978-3-642-36889-9 %0 Journal Article %J Journal of Medical Microbiology %D 2013 %T Drug response and genetic properties of Vibrio cholerae associated with endemic cholera in north-eastern Thailand, 2003-2011 %A Chomvarin, C. %A Johura, F.-T. %A Mannan, S. B. %A Jumroenjit, W. %A Kanoktippornchai, B. %A Tangkanakul, W. %A Tantisuwichwong, N. %A Huttayananont, S. %A Watanabe, H. %A Hasan, N. A. %A Huq, A. %A Cravioto, A. %A Rita R Colwell %A Alam, M. %X Cholera, caused by Vibrio cholerae, results in significant morbidity and mortality worldwide, including Thailand. Representative V. cholerae strains associated with endemic cholera (n = 32), including strains (n = 3) from surface water sources, in Khon Kaen, Thailand (2003–2011), were subjected to microbiological, molecular and phylogenetic analyses. According to phenotypic and related genetic data, all tested V. cholerae strains belonged to serogroup O1, biotype El Tor (ET), Inaba (IN) or Ogawa (OG). All of the strains were sensitive to gentamicin and ciprofloxacin, while multidrug-resistant (MDR) strains showing resistance to erythromycin, tetracycline, trimethoprim/sulfamethoxazole and ampicillin were predominant in 2007. V. cholerae strains isolated before and after 2007 were non-MDR. All except six diarrhoeal strains possessed ctxA and ctxB genes and were toxigenic altered ET, confirmed by MAMA-PCR and DNA sequencing. Year-wise data revealed that V. cholerae INET strains isolated between 2003 and 2004, plus one strain isolated in 2007, lacked the RS1 sequence (rstC) and toxin-linked cryptic plasmid (TLC)-specific genetic marker, but possessed CTXCL prophage genes ctxB CL and rstR CL. A sharp genetic transition was noted, namely the majority of V. 
cholerae strains in 2007 and all in 2010 and 2011 were not repressor genotype rstR CL but instead were rstR ET, and all ctx + strains possessed RS1 and TLC-specific genetic markers. DNA sequencing data revealed that strains isolated since 2007 had a mutation in the tcpA gene at amino acid position 64 (N→S). Four clonal types, mostly of environmental origin, including subtypes, reflected genetic diversity, while distinct signatures were observed for clonally related, altered ET from Thailand, Vietnam and Bangladesh, confirmed by distinct subclustering patterns observed in the PFGE (NotI)-based dendrogram, suggesting that endemic cholera is caused by V. cholerae indigenous to Khon Kaen. %B Journal of Medical Microbiology %P 599 - 609 %8 Jan-04-2013 %G eng %U http://jmm.microbiologyresearch.org/content/journal/jmm/10.1099/jmm.0.053801-0 %! Journal of Medical Microbiology %R 10.1099/jmm.0.053801-0 %0 Journal Article %J The American Journal of Tropical Medicine and Hygiene %D 2013 %T Environmental Factors Influencing Epidemic Cholera %A Huq, Anwar %A Hasan, Nur %A Akanda, Ali %A Whitcombe, Elizabeth %A Rita R Colwell %A Haley, Bradd %A Alam, Munir %A Jutla, Antarpreet %A Sack, R. Bradley %X Cholera outbreak following the earthquake of 2010 in Haiti has reaffirmed that the disease is a major public health threat. Vibrio cholerae is autochthonous to aquatic environment, hence, it cannot be eradicated but hydroclimatology-based prediction and prevention is an achievable goal. Using data from the 1800s, we describe uniqueness in seasonality and mechanism of occurrence of cholera in the epidemic regions of Asia and Latin America. Epidemic regions are located near regional rivers and are characterized by sporadic outbreaks, which are likely to be initiated during episodes of prevailing warm air temperature with low river flows, creating favorable environmental conditions for growth of cholera bacteria. 
Heavy rainfall, through inundation or breakdown of sanitary infrastructure, accelerates interaction between contaminated water and human activities, resulting in an epidemic. This causal mechanism is markedly different from endemic cholera, where tidal intrusion of seawater carrying bacteria from estuary to inland regions results in outbreaks. %B The American Journal of Tropical Medicine and Hygiene %V 89 %P 597 - 607 %8 Apr-09-2013 %G eng %U http://www.ajtmh.org/content/journals/10.4269/ajtmh.12-0721 %N 3 %R 10.4269/ajtmh.12-0721 %0 Journal Article %J HCIC 2013 %D 2013 %T Exploring Early Solutions for Automatically Identifying Inaccessible Sidewalks in the Physical World Using Google Street View %A Hara, K %A Le, V %A Sun, J %A Jacobs, D %A Jon Froehlich %X Poorly maintained sidewalks, missing curb ramps, and other obstacles pose considerable accessibility challenges. Although pedestrian- and bicycle-oriented maps and associated routing algorithms continue to improve, there has been a lack of work focusing ... %B HCIC 2013 %8 2013/00/01 %G eng %U http://www.cs.umd.edu/~jonf/publications/Hara_ExploringEarlySolutionsForAutomaticallyIdentifyingInaccessibleSidewalksInThePhysicalWorldUsingGoogleStreetView_HCIC2013.pdf %0 Journal Article %J First AAAI Conference on Human Computation and Crowdsourcing %D 2013 %T An Initial Study of Automatic Curb Ramp Detection with Crowdsourced Verification Using Google Street View Images %A Hara, Kotaro %A Sun, Jin %A Chazan, Jonah %A Jacobs, David %A Jon Froehlich %X In our previous research, we examined whether minimally trained crowd workers could find, categorize, and assess sidewalk accessibility problems using Google Street View (GSV) images. This poster paper presents a first step towards combining automated methods (e.g., machine vision-based curb ramp detectors) in concert with human computation to improve the overall scalability of our approach.
%B First AAAI Conference on Human Computation and Crowdsourcing %8 2013/00/11 %G eng %U http://www.aaai.org/ocs/index.php/HCOMP/HCOMP13/paper/view/7507 %! First AAAI Conference on Human Computation and Crowdsourcing %0 Journal Article %J arXiv:1211.3722 [cs] %D 2013 %T Optimizing Abstract Abstract Machines %A Johnson, J. Ian %A Labich, Nicholas %A Might, Matthew %A David Van Horn %K Computer Science - Programming Languages %K F.3.2 %X The technique of abstracting abstract machines (AAM) provides a systematic approach for deriving computable approximations of evaluators that are easily proved sound. This article contributes a complementary step-by-step process for subsequently going from a naive analyzer derived under the AAM approach, to an efficient and correct implementation. The end result of the process is a two to three order-of-magnitude improvement over the systematically derived analyzer, making it competitive with hand-optimized implementations that compute fundamentally less precise results. %B arXiv:1211.3722 [cs] %8 2013/// %G eng %U http://arxiv.org/abs/1211.3722 %0 Journal Article %J Computational Science & Discovery %D 2013 %T Parallel geometric classification of stem cells by their three-dimensional morphology %A Juba,Derek %A Cardone, Antonio %A Ip, Cheuk Yiu %A Simon Jr, Carl G %A K Tison, Christopher %A Kumar, Girish %A Brady,Mary %A Varshney, Amitabh %B Computational Science & Discovery %V 6 %P 015007 %8 01/2013 %N 1 %! Comput. Sci. Disc. %R 10.1088/1749-4699/6/1/015007 %0 Journal Article %J Research in Microbiology %D 2013 %T Quantification of Vibrio parahaemolyticus, Vibrio vulnificus and Vibrio cholerae in French Mediterranean coastal lagoons %A Cantet, Franck %A Hervio-Heath, Dominique %A Caro, Audrey %A Le Mennec, Cécile %A Monteil, Caroline %A Quéméré, Catherine %A Jolivet-Gougeon, Anne %A Rita R Colwell %A Monfort, Patrick %X Vibrio parahaemolyticus, Vibrio vulnificus and Vibrio cholerae are human pathogens. 
Little is known about these Vibrio spp. in the coastal lagoons of France. The purpose of this study was to investigate their incidence in water, shellfish and sediment of three French Mediterranean coastal lagoons using the most probable number-polymerase chain reaction (MPN-PCR). In summer, the total number of V. parahaemolyticus in water, sediment, mussels and clams collected from the three lagoons varied from 1 to >1.1 × 10³ MPN/l, 0.09 to 1.1 × 10³ MPN/ml, 9 to 210 MPN/g and 1.5 to 2.1 MPN/g, respectively. In winter, all samples except mussels contained V. parahaemolyticus, but at very low concentrations. Pathogenic (tdh- or trh2-positive) V. parahaemolyticus were present in water, sediment and shellfish samples collected from these lagoons. The number of V. vulnificus in water, sediment and shellfish samples ranged from 1 to 1.1 × 10³ MPN/l, 0.07 to 110 MPN/ml and 0.04 to 15 MPN/g, respectively, during summer. V. vulnificus was not detected during winter. V. cholerae was rarely detected in water and sediment during summer. In summary, results of this study highlight the finding that the three human pathogenic Vibrio spp. are present in the lagoons and constitute a potential public health hazard. %B Research in Microbiology %V 164 %P 867 - 874 %8 Jan-10-2013 %G eng %U https://linkinghub.elsevier.com/retrieve/pii/S092325081300106X %N 8 %! Research in Microbiology %R 10.1016/j.resmic.2013.06.005 %0 Book Section %B Advances in Cryptology – EUROCRYPT 2013 %D 2013 %T Streaming Authenticated Data Structures %A Charalampos Papamanthou %A Shi, Elaine %A Tamassia, Roberto %A Yi, Ke %E Johansson, Thomas %E Nguyen, Phong Q.
%K Algorithm Analysis and Problem Complexity %K Data Encryption %K Discrete Mathematics in Computer Science %K Systems and Data Security %X We consider the problem of streaming verifiable computation, where both a verifier and a prover observe a stream of n elements x1, x2, …, xn and the verifier can later delegate some computation over the stream to the prover. The prover must return the output of the computation, along with a cryptographic proof to be used for verifying the correctness of the output. Due to the nature of the streaming setting, the verifier can only keep small local state (e.g., logarithmic) which must be updatable in a streaming manner and with no interaction with the prover. Such constraints make the problem particularly challenging and rule out applying existing verifiable computation schemes. We propose streaming authenticated data structures, a model that enables efficient verification of data structure queries on a stream. Compared to previous work, we achieve an exponential improvement in the prover’s running time: While previous solutions have linear prover complexity (in the size of the stream), even for queries executing in sublinear time (e.g., set membership), we propose a scheme with O(log M · log n) prover complexity, where n is the size of the stream and M is the size of the universe of elements. Our schemes support a series of expressive queries, such as (non-)membership, successor, range search and frequency queries, over an ordered universe and even in higher dimensions. The central idea of our construction is a new authentication tree, called generalized hash tree. We instantiate our generalized hash tree with a hash function based on lattice assumptions, showing that it enjoys suitable algebraic properties that traditional Merkle trees lack. We exploit such properties to achieve our results.
%B Advances in Cryptology – EUROCRYPT 2013 %S Lecture Notes in Computer Science %I Springer Berlin Heidelberg %P 353 - 370 %8 2013/01/01/ %@ 978-3-642-38347-2, 978-3-642-38348-9 %G eng %U http://link.springer.com/chapter/10.1007/978-3-642-38348-9_22 %0 Journal Article %J Remote Sensing Letters %D 2013 %T A water marker monitored by satellites to predict seasonal endemic cholera %A Jutla, Antarpreet %A Akanda, Ali Shafqat %A Huq, Anwar %A Faruque, Abu Syed Golam %A Rita R Colwell %A Islam, Shafiqul %X The ability to predict an occurrence of cholera, a water-related disease, offers a significant public health advantage. Satellite-based estimates of chlorophyll, a surrogate for plankton abundance, have been linked to cholera incidence. However, cholera bacteria can survive under a variety of coastal ecological conditions, thus constraining the predictive ability of chlorophyll, since it provides only an estimate of greenness of seawater. Here, a new remote-sensing-based index is proposed: Satellite Water Marker (SWM), which estimates the condition of coastal water, based on observed variability in the difference between blue (412 nm) and green (555 nm) wavelengths that can be related to seasonal cholera incidence. The index is bounded between physically separable wavelengths for relatively clear (blue) and turbid (green) water. Using SWM, prediction of cholera with reasonable accuracy, at least two months in advance, can potentially be achieved in the endemic coastal regions. %B Remote Sensing Letters %V 4 %P 822 - 831 %8 Jan-08-2013 %G eng %U http://www.tandfonline.com/doi/abs/10.1080/2150704X.2013.802097 %N 8 %!
Remote Sensing Letters %R 10.1080/2150704X.2013.802097 %0 Book Section %B Theory of Cryptography %D 2013 %T Why “Fiat-Shamir for Proofs” Lacks a Proof %A Bitansky, Nir %A Dana Dachman-Soled %A Garg, Sanjam %A Jain, Abhishek %A Kalai, Yael Tauman %A López-Alt, Adriana %A Wichs, Daniel %E Sahai, Amit %K Algorithm Analysis and Problem Complexity %K Computation by Abstract Devices %K Data Encryption %K Systems and Data Security %X The Fiat-Shamir heuristic [CRYPTO ’86] is used to convert any 3-message public-coin proof or argument system into a non-interactive argument, by hashing the prover’s first message to select the verifier’s challenge. It is known that this heuristic is sound when the hash function is modeled as a random oracle. On the other hand, the surprising result of Goldwasser and Kalai [FOCS ’03] shows that there exists a computationally sound argument on which the Fiat-Shamir heuristic is never sound, when instantiated with any actual efficient hash function. This leaves us with the following interesting possibility: perhaps we can securely instantiate the Fiat-Shamir heuristic for all 3-message public-coin statistically sound proofs, even if we must fail for some computationally sound arguments. Indeed, this has been conjectured to be the case by Barak, Lindell and Vadhan [FOCS ’03], but we do not have any provably secure instantiation under any “standard assumption”. In this work, we give a broad black-box separation result showing that the security of the Fiat-Shamir heuristic for statistically sound proofs cannot be proved under virtually any standard assumption via a black-box reduction. More precisely: –If we want to have a “universal” instantiation of the Fiat-Shamir heuristic that works for all 3-message public-coin proofs, then we cannot prove its security via a black-box reduction from any assumption that has the format of a “cryptographic game”.
–For many concrete proof systems, if we want to have a “specific” instantiation of the Fiat-Shamir heuristic for that proof system, then we cannot prove its security via a black-box reduction from any “falsifiable assumption” that has the format of a cryptographic game with an efficient challenger. %B Theory of Cryptography %S Lecture Notes in Computer Science %I Springer Berlin Heidelberg %P 182 - 201 %8 2013/01/01/ %@ 978-3-642-36593-5, 978-3-642-36594-2 %G eng %U http://link.springer.com/chapter/10.1007/978-3-642-36594-2_11 %0 Journal Article %J Computer Vision and Image Understanding %D 2012 %T Class consistent k-means: Application to face and action recognition %A Zhuolin Jiang %A Zhe Lin %A Davis, Larry S. %K Action recognition %K Class consistent k-means %K Discriminative tree classifier %K face recognition %K Supervised clustering %X A class-consistent k-means clustering algorithm (CCKM) and its hierarchical extension (Hierarchical CCKM) are presented for generating discriminative visual words for recognition problems. In addition to using the labels of training data themselves, we associate a class label with each cluster center to enforce discriminability in the resulting visual words. Our algorithms encourage data points from the same class to be assigned to the same visual word, and those from different classes to be assigned to different visual words. More specifically, we introduce a class consistency term in the clustering process which penalizes assignment of data points from different classes to the same cluster. The optimization process is efficient and bounded by the complexity of k-means clustering. A very efficient and discriminative tree classifier can be learned for various recognition tasks via the Hierarchical CCKM. The effectiveness of the proposed algorithms is validated on two public face datasets and four benchmark action datasets.
%B Computer Vision and Image Understanding %V 116 %P 730 - 741 %8 2012/06// %@ 1077-3142 %G eng %U http://www.sciencedirect.com/science/article/pii/S1077314212000367 %N 6 %R 10.1016/j.cviu.2012.02.004 %0 Report %D 2012 %T Constructing Inverted Files: To MapReduce or Not Revisited %A Wei, Zheng %A JaJa, Joseph F. %K Technical Report %X Current high-throughput algorithms for constructing inverted files all follow the MapReduce framework, which presents a high-level programming model that hides the complexities of parallel programming. In this paper, we take an alternative approach and develop a novel strategy that exploits the current and emerging architectures of multicore processors. Our algorithm is based on a high-throughput pipelined strategy that produces parallel parsed streams, which are immediately consumed at the same rate by parallel indexers. We have performed extensive tests of our algorithm on a cluster of 32 nodes, and were able to achieve a throughput close to the peak throughput of the I/O system: a throughput of 280 MB/s on a single node and a throughput that ranges between 5.15 GB/s (1 Gb/s Ethernet interconnect) and 6.12 GB/s (10 Gb/s InfiniBand interconnect) on a cluster with 32 nodes for processing the ClueWeb09 dataset. Such performance represents a substantial gain over the best known MapReduce algorithms even when comparing the single node performance of our algorithm to MapReduce algorithms running on large clusters. Our results shed light on the extent of the performance cost that may be incurred by using the simpler, higher-level MapReduce programming model for large scale applications.
%I Institute for Advanced Computer Studies, Univ of Maryland, College Park %V UMIACS-TR-2012-03 %8 2012/01/26/ %G eng %U http://drum.lib.umd.edu/handle/1903/12171 %0 Conference Paper %B Proceedings of the ACM 2012 conference on Computer Supported Cooperative Work %D 2012 %T Dynamic changes in motivation in collaborative citizen-science projects %A Rotman, Dana %A Preece, Jenny %A Hammock, Jen %A Procita, Kezee %A Hansen, Derek %A Parr, Cynthia %A Lewis, Darcy %A Jacobs, David W. %K citizen science %K Collaboration %K crowdsourcing %K ecology %K motivation %K scientists %K volunteers %X Online citizen science projects engage volunteers in collecting, analyzing, and curating scientific data. Existing projects have demonstrated the value of using volunteers to collect data, but few projects have reached the full collaborative potential of scientists and volunteers. Understanding the shared and unique motivations of these two groups can help designers establish the technical and social infrastructures needed to promote effective partnerships. We present findings from a study of the motivational factors affecting participation in ecological citizen science projects. We show that volunteers are motivated by a complex framework of factors that dynamically change throughout their cycle of work on scientific projects; this motivational framework is strongly affected by personal interests as well as external factors such as attribution and acknowledgment. Identifying the pivotal points of motivational shift and addressing them in the design of citizen-science systems will facilitate improved collaboration between scientists and volunteers.
%B Proceedings of the ACM 2012 conference on Computer Supported Cooperative Work %S CSCW '12 %I ACM %C New York, NY, USA %P 217 - 226 %8 2012/// %@ 978-1-4503-1086-4 %G eng %U http://doi.acm.org/10.1145/2145204.2145238 %R 10.1145/2145204.2145238 %0 Journal Article %J Applied and Environmental Microbiology %D 2012 %T Ecology of Vibrio parahaemolyticus and Vibrio vulnificus in the Coastal and Estuarine Waters of Louisiana, Maryland, Mississippi, and Washington (United States) %A Johnson, Crystal N. %A Bowers, John C. %A Griffitt, Kimberly J. %A Molina, Vanessa %A Clostio, Rachel W. %A Pei, Shaofeng %A Laws, Edward %A Paranjpye, Rohinee N. %A Strom, Mark S. %A Chen, Arlene %A Hasan, Nur A. %A Huq, Anwar %A Noriea, Nicholas F. %A Grimes, D. Jay %A Rita R Colwell %X Vibrio parahaemolyticus and Vibrio vulnificus, which are native to estuaries globally, are agents of seafood-borne or wound infections, both potentially fatal. Like all vibrios autochthonous to coastal regions, their abundance varies with changes in environmental parameters. Sea surface temperature (SST), sea surface height (SSH), and chlorophyll have been shown to be predictors of zooplankton and thus factors linked to vibrio populations. The contribution of salinity, conductivity, turbidity, and dissolved organic carbon to the incidence and distribution of Vibrio spp. has also been reported. Here, a multicoastal, 21-month study was conducted to determine relationships between environmental parameters and V. parahaemolyticus and V. vulnificus populations in water, oysters, and sediment in three coastal areas of the United States. Because ecologically unique sites were included in the study, it was possible to analyze individual parameters over wide ranges. Molecular methods were used to detect genes for thermolabile hemolysin (tlh), thermostable direct hemolysin (tdh), and tdh-related hemolysin (trh) as indicators of V. parahaemolyticus and the hemolysin gene vvhA for V. vulnificus. 
SST and suspended particulate matter were found to be strong predictors of total and potentially pathogenic V. parahaemolyticus and V. vulnificus. Other predictors included chlorophyll a, salinity, and dissolved organic carbon. For the ecologically unique sites included in the study, SST was confirmed as an effective predictor of annual variation in vibrio abundance, with other parameters explaining a portion of the variation not attributable to SST. %B Applied and Environmental Microbiology %P 7249 - 7257 %8 Mar-10-2013 %G eng %U http://aem.asm.org/lookup/doi/10.1128/AEM.01296-12 %N 20 %! Appl. Environ. Microbiol. %R 10.1128/AEM.01296-12 %0 Journal Article %J Journal of Medical Microbiology %D 2012 %T Genetic characteristics of drug-resistant Vibrio cholerae O1 causing endemic cholera in Dhaka, 2006-2011 %A Rashed, S. M. %A Mannan, S. B. %A Johura, F.-T. %A Islam, M. T. %A Sadique, A. %A Watanabe, H. %A Sack, R. B. %A Huq, A. %A Rita R Colwell %A Cravioto, A. %A Alam, M. %X Vibrio cholerae O1 biotype El Tor (ET), causing the seventh cholera pandemic, was recently replaced in Bangladesh by an altered ET possessing ctxB of the Classical (CL) biotype, which caused the first six cholera pandemics. In the present study, V. cholerae O1 strains associated with endemic cholera in Dhaka between 2006 and 2011 were analysed for major phenotypic and genetic characteristics. Of 54 representative V. cholerae isolates tested, all were phenotypically ET and showed uniform resistance to trimethoprim/sulfamethoxazole (SXT) and furazolidone (FR). Resistance to tetracycline (TE) and erythromycin (E) showed temporal fluctuation, varying from year to year, while all isolates were susceptible to gentamicin (CN) and ciprofloxacin (CIP). 
Year-wise data revealed erythromycin resistance to be 33.3 % in 2006 and 11 % in 2011, while tetracycline resistance accounted for 33, 78, 0, 100 and 27 % in 2006, 2007, 2008, 2009 and 2010, respectively; interestingly, all isolates tested were sensitive to TE in 2011, as observed in 2008. All V. cholerae isolates tested possessed genetic elements such as SXT, ctxAB, tcpA ET, rstR ET and rtxC; none had IntlI (Integron I). Double mismatch amplification mutation assay (DMAMA)-PCR followed by DNA sequencing and analysis of the ctxB gene revealed a point mutation at position 58 (C→A), which has resulted in an amino acid substitution from histidine (H) to asparagine (N) at position 20 (genotype 7) since 2008. Although the multi-resistant strains having tetracycline resistance showed minor genetic divergence, V. cholerae strains were clonal, as determined by a PFGE (NotI)-based dendrogram. This study shows 2008–2010 to be the time of transition from ctxB genotype 1 to genotype 7 in V. cholerae ET causing endemic cholera in Dhaka, Bangladesh. %B Journal of Medical Microbiology %P 1736 - 1745 %8 Jan-12-2012 %G eng %U http://jmm.microbiologyresearch.org/content/journal/jmm/10.1099/jmm.0.049635-0 %! Journal of Medical Microbiology %R 10.1099/jmm.0.049635-0 %0 Journal Article %J Nucleic Acids Res %D 2012 %T InterPro in 2011: new developments in the family and domain prediction database. 
%A Hunter, Sarah %A Jones, Philip %A Mitchell, Alex %A Apweiler, Rolf %A Attwood, Teresa K %A Bateman, Alex %A Bernard, Thomas %A Binns, David %A Bork, Peer %A Burge, Sarah %A de Castro, Edouard %A Coggill, Penny %A Corbett, Matthew %A Das, Ujjwal %A Daugherty, Louise %A Duquenne, Lauranne %A Finn, Robert D %A Fraser, Matthew %A Gough, Julian %A Haft, Daniel %A Hulo, Nicolas %A Kahn, Daniel %A Kelly, Elizabeth %A Letunic, Ivica %A Lonsdale, David %A Lopez, Rodrigo %A Madera, Martin %A Maslen, John %A McAnulla, Craig %A McDowall, Jennifer %A McMenamin, Conor %A Mi, Huaiyu %A Mutowo-Muellenet, Prudence %A Mulder, Nicola %A Natale, Darren %A Orengo, Christine %A Pesseat, Sebastien %A Punta, Marco %A Quinn, Antony F %A Rivoire, Catherine %A Sangrador-Vegas, Amaia %A Jeremy D Selengut %A Sigrist, Christian J A %A Scheremetjew, Maxim %A Tate, John %A Thimmajanarthanan, Manjulapramila %A Thomas, Paul D %A Wu, Cathy H %A Yeats, Corin %A Yong, Siew-Yit %K Databases, Protein %K Protein Structure, Tertiary %K Proteins %K Sequence Analysis, Protein %K software %K Terminology as Topic %K User-Computer Interface %X

InterPro (http://www.ebi.ac.uk/interpro/) is a database that integrates diverse information about protein families, domains and functional sites, and makes it freely available to the public via Web-based interfaces and services. Central to the database are diagnostic models, known as signatures, against which protein sequences can be searched to determine their potential function. InterPro has utility in the large-scale analysis of whole genomes and meta-genomes, as well as in characterizing individual protein sequences. Herein we give an overview of new developments in the database and its associated software since 2009, including updates to database content, curation processes and Web and programmatic interfaces.

%B Nucleic Acids Res %V 40 %P D306-12 %8 2012 Jan %G eng %N Database issue %R 10.1093/nar/gkr948 %0 Journal Article %J Journal of Clinical Microbiology %D 2012 %T Vibrio Cholerae Classical Biotype Strains Reveal Distinct Signatures in Mexico %A Alam,Munirul %A Islam,M. Tarequl %A Rashed,Shah Manzur %A Johura,Fatema-Tuz %A Bhuiyan,Nurul A. %A Delgado,Gabriela %A Morales,Rosario %A Mendez,Jose Luis %A Navarro,Armando %A Watanabe,Haruo %A Hasan,Nur-A. %A Rita R Colwell %A Cravioto,Alejandro %X Vibrio cholerae O1 Classical (CL) biotype caused the 5th and 6th, and probably the earlier cholera pandemics, before the El Tor (ET) biotype initiated the 7th pandemic in Asia in the 1970s by completely displacing the CL biotype. Although the CL biotype was thought to be extinct in Asia, and it had never been reported from Latin America, V. cholerae CL and ET biotypes, including hybrid ET, were found associated with endemic cholera in Mexico between 1991 and 1997. In this study, CL biotype strains isolated from endemic cholera in Mexico between 1983 and 1997 were characterized in terms of major phenotypic and genetic traits, and compared with CL biotype strains isolated in Bangladesh between 1962 and 1989. According to sero- and bio-typing data, all V. cholerae strains tested had the major phenotypic and genotypic characteristics specific for the CL biotype. Antibiograms revealed the majority of the Bangladeshi strains to be resistant to trimethoprim/sulfamethoxazole, furazolidone, ampicillin, and gentamycin, while the Mexican strains were sensitive to all of these drugs, as well as to ciprofloxacin, erythromycin, and tetracycline. Pulsed-field gel electrophoresis (PFGE) of NotI-digested genomic DNA revealed characteristic banding patterns for all the CL biotype strains, although the Mexican strains differed from the Bangladeshi strains by 1-2 DNA bands. 
The difference is subtle but consistent, as confirmed by the sub-clustering patterns in the PFGE-based dendrogram, and can serve as a regional signature, suggesting pre-1991 existence and evolution of the CL biotype strains in the Americas, independent of that in Asia. %B Journal of Clinical Microbiology %8 04/2012 %@ 0095-1137, 1098-660X %G eng %U http://jcm.asm.org/content/early/2012/04/12/JCM.00189-12 %R 10.1128/JCM.00189-12 %0 Conference Paper %B Document Analysis Systems %D 2012 %T Logo Retrieval in Document Images %A Jain,Rajiv %A David Doermann %X This paper presents a scalable algorithm for segmentation-free logo retrieval in document images. The contributions of this paper include the use of the SURF feature for logo retrieval, a novel indexing algorithm for efficient retrieval of SURF features and a method to filter results using the orientation of local features and geometric constraints. Results demonstrate that logo retrieval can be performed with high accuracy and efficiently scaled to large datasets. %B Document Analysis Systems %8 2012/// %G eng %0 Journal Article %J Parallel and Distributed Systems, IEEE Transactions on %D 2012 %T An Optimized High-Throughput Strategy for Constructing Inverted Files %A Wei,Z. %A JaJa, Joseph F. %X Current high-throughput algorithms for constructing inverted files all follow the MapReduce framework, which presents a high-level programming model that hides the complexities of parallel programming. In this paper, we take an alternative approach and develop a novel strategy that exploits the current and emerging architectures of multicore processors. Our algorithm is based on a high-throughput pipelined strategy that produces parallel parsed streams, which are immediately consumed at the same rate by parallel indexers. 
We have performed extensive tests of our algorithm on a cluster of 32 nodes, and were able to achieve a throughput close to the peak throughput of the I/O system: a throughput of 280 MB/s on a single node and a throughput that ranges between 5.15 GB/s (1 Gb/s Ethernet interconnect) and 6.12 GB/s (10 Gb/s InfiniBand interconnect) on a cluster with 32 nodes for processing the ClueWeb09 dataset. This performance represents a substantial gain over the best known MapReduce algorithms even when comparing the single-node performance of our algorithm to MapReduce algorithms running on large clusters. Our results shed light on the extent of the performance cost that may be incurred by using the simpler, higher-level MapReduce programming model for large-scale applications. %B Parallel and Distributed Systems, IEEE Transactions on %V PP %P 1 - 1 %8 2012/// %@ 1045-9219 %G eng %N 99 %R 10.1109/TPDS.2012.43 %0 Journal Article %J Pattern Analysis and Machine Intelligence, IEEE Transactions on %D 2012 %T Recognizing Human Actions by Learning and Matching Shape-Motion Prototype Trees %A Zhuolin Jiang %A Zhe Lin %A Davis, Larry S. 
%K action prototype %K actor location %K brute-force computation %K CMU action data set %K distance measures %K dynamic backgrounds %K dynamic prototype sequence matching %K flexible action matching %K frame-to-frame distances %K frame-to-prototype correspondence %K hierarchical k-means clustering %K human action recognition %K Image matching %K image recognition %K Image sequences %K joint probability model %K joint shape %K KTH action data set %K large gesture data set %K learning %K learning (artificial intelligence) %K look-up table indexing %K motion space %K moving cameras %K pattern clustering %K prototype-to-prototype distances %K shape-motion prototype-based approach %K table lookup %K training sequence %K UCF sports data set %K Video sequences %K video signal processing %K Weizmann action data set %X A shape-motion prototype-based approach is introduced for action recognition. The approach represents an action as a sequence of prototypes for efficient and flexible action matching in long video sequences. During training, an action prototype tree is learned in a joint shape and motion space via hierarchical K-means clustering and each training sequence is represented as a labeled prototype sequence; then a look-up table of prototype-to-prototype distances is generated. During testing, based on a joint probability model of the actor location and action prototype, the actor is tracked while a frame-to-prototype correspondence is established by maximizing the joint probability, which is efficiently performed by searching the learned prototype tree; then actions are recognized using dynamic prototype sequence matching. Distance measures used for sequence matching are rapidly obtained by look-up table indexing, which is an order of magnitude faster than brute-force computation of frame-to-frame distances. 
Our approach enables robust action matching in challenging situations (such as moving cameras, dynamic backgrounds) and allows automatic alignment of action sequences. Experimental results demonstrate that our approach achieves recognition rates of 92.86 percent on a large gesture data set (with dynamic backgrounds), 100 percent on the Weizmann action data set, 95.77 percent on the KTH action data set, 88 percent on the UCF sports data set, and 87.27 percent on the CMU action data set. %B Pattern Analysis and Machine Intelligence, IEEE Transactions on %V 34 %P 533 - 547 %8 2012/03// %@ 0162-8828 %G eng %N 3 %R 10.1109/TPAMI.2011.147 %0 Conference Paper %B IEEE conference on Computer Vision and Pattern Recognition %D 2012 %T Submodular Dictionary Learning for Sparse Coding %A Zhuolin Jiang %A Zhang, G. %A Davis, Larry S. %X A greedy-based approach to learn a compact and discriminative dictionary for sparse representation is presented. We propose an objective function consisting of two components: entropy rate of a random walk on a graph and a discriminative term. Dictionary learning is achieved by finding a graph topology which maximizes the objective function. By exploiting the monotonicity and submodularity properties of the objective function and the matroid constraint, we present a highly efficient greedy-based optimization algorithm. It is more than an order of magnitude faster than several recently proposed dictionary learning approaches. Moreover, the greedy algorithm gives a near-optimal solution with a (1/2)-approximation bound. Our approach yields dictionaries having the property that feature points from the same class have very similar sparse codes. Experimental results demonstrate that our approach outperforms several recently proposed dictionary learning techniques for face, action and object category recognition. 
%B IEEE conference on Computer Vision and Pattern Recognition %8 2012/// %G eng %0 Journal Article %J EcoHealth %D 2012 %T Temporal and Spatial Variability in the Distribution of Vibrio vulnificus in the Chesapeake Bay: A Hindcast Study %A Banakar,V. %A Constantin de Magny,G. %A Jacobs,J. %A Murtugudde,R. %A Huq,A. %A J. Wood,R. %A Rita R Colwell %X Vibrio vulnificus, an estuarine bacterium, is the causative agent of seafood-related gastroenteritis, primary septicemia, and wound infections worldwide. It occurs as part of the normal microflora of coastal marine environments and can be isolated from water, sediment, and oysters. Hindcast prediction was undertaken to determine spatial and temporal variability in the likelihood of occurrence of V. vulnificus in surface waters of the Chesapeake Bay. Hindcast predictions were achieved by forcing a multivariate habitat suitability model with simulated sea surface temperature and salinity in the Bay for the period between 1991 and 2005, and the potential hotspots of occurrence of V. vulnificus in the Chesapeake Bay were identified. The likelihood of occurrence of V. vulnificus during high and low rainfall years was analyzed. From results of the study, it is concluded that hindcast prediction yields an improved understanding of environmental conditions associated with occurrence of V. vulnificus in the Chesapeake Bay. %B EcoHealth %P 1 - 12 %8 2012/// %G eng %R 10.1007/s10393-011-0736-4 %0 Conference Paper %B Person-Oriented Vision (POV), 2011 IEEE Workshop on %D 2011 %T Active inference for retrieval in camera networks %A Daozheng Chen %A Bilgic,Mustafa %A Getoor, Lise %A Jacobs, David W. %A Mihalkova,Lilyana %A Tom Yeh %K active inference %K camera network %K graphical model %K human annotation %K probabilistic model %K retrieval system %K searching problem %K video frame %K cameras %K inference mechanisms %K probability %K search problems %K video retrieval %K video signal processing %X We address the problem of searching camera network videos to retrieve frames containing specified individuals. We show the benefit of utilizing a learned probabilistic model that captures dependencies among the cameras. In addition, we develop an active inference framework that can request human input at inference time, directing human attention to the portions of the videos whose correct annotation would provide the biggest performance improvements. 
Our primary contribution is to show that by mapping video frames in a camera network onto a graphical model, we can apply collective classification and active inference algorithms to significantly increase the performance of the retrieval system, while minimizing the number of human annotations required. %B Person-Oriented Vision (POV), 2011 IEEE Workshop on %I IEEE %P 13 - 20 %8 2011/01// %@ 978-1-61284-036-9 %G eng %U http://ieeexplore.ieee.org/xpls/abs_all.jsp?arnumber=5712363&tag=1 %R 10.1109/POV.2011.5712363 %0 Conference Paper %B Advanced Video and Signal-Based Surveillance (AVSS), 2011 8th IEEE International Conference on %D 2011 %T AVSS 2011 demo session: A large-scale benchmark dataset for event recognition in surveillance video %A Oh,Sangmin %A Hoogs,Anthony %A Perera,Amitha %A Cuntoor,Naresh %A Chen,Chia-Chih %A Lee,Jong Taek %A Mukherjee,Saurajit %A Aggarwal, JK %A Lee,Hyungtae %A Davis, Larry S. %A Swears,Eran %A Wang,Xiaoyang %A Ji,Qiang %A Reddy,Kishore %A Shah,Mubarak %A Vondrick,Carl %A Pirsiavash,Hamed %A Ramanan,Deva %A Yuen,Jenny %A Torralba,Antonio %A Song,Bi %A Fong,Anesco %A Roy-Chowdhury,Amit %A Desai,Mita %X We introduce to the surveillance community the VIRAT Video Dataset[1], which is a new large-scale surveillance video dataset designed to assess the performance of event recognition algorithms in realistic scenes. %B Advanced Video and Signal-Based Surveillance (AVSS), 2011 8th IEEE International Conference on %P 527 - 528 %8 2011/09/30/2 %G eng %R 10.1109/AVSS.2011.6027400 %0 Journal Article %J Proceedings of the National Academy of Sciences %D 2011 %T Bacillus Anthracis Comparative Genome Analysis in Support of the Amerithrax Investigation %A Rasko,David A %A Worsham,Patricia L %A Abshire,Terry G %A Stanley,Scott T %A Bannan,Jason D %A Wilson,Mark R %A Langham,Richard J %A Decker,R. Scott %A Jiang,Lingxia %A Read,Timothy D. %A Phillippy,Adam M %A Salzberg,Steven L. 
%A Pop, Mihai %A Van Ert,Matthew N %A Kenefic,Leo J %A Keim,Paul S %A Fraser-Liggett,Claire M %A Ravel,Jacques %X Before the anthrax letter attacks of 2001, the developing field of microbial forensics relied on microbial genotyping schemes based on a small portion of a genome sequence. Amerithrax, the investigation into the anthrax letter attacks, applied high-resolution whole-genome sequencing and comparative genomics to identify key genetic features of the letters’ Bacillus anthracis Ames strain. During systematic microbiological analysis of the spore material from the letters, we identified a number of morphological variants based on phenotypic characteristics and the ability to sporulate. The genomes of these morphological variants were sequenced and compared with that of the B. anthracis Ames ancestor, the progenitor of all B. anthracis Ames strains. Through comparative genomics, we identified four distinct loci with verifiable genetic mutations. Three of the four mutations could be directly linked to sporulation pathways in B. anthracis and more specifically to the regulation of the phosphorylation state of Spo0F, a key regulatory protein in the initiation of the sporulation cascade, thus linking phenotype to genotype. None of these variant genotypes were identified in single-colony environmental B. anthracis Ames isolates associated with the investigation. These genotypes were identified only in B. anthracis morphotypes isolated from the letters, indicating that the variants were not prevalent in the environment, not even the environments associated with the investigation. This study demonstrates the forensic value of systematic microbiological analysis combined with whole-genome sequencing and comparative genomics. 
%B Proceedings of the National Academy of Sciences %V 108 %P 5027 - 5032 %8 2011/03/22/ %@ 0027-8424, 1091-6490 %G eng %U http://www.pnas.org/content/108/12/5027 %N 12 %R 10.1073/pnas.1016657108 %0 Conference Paper %B Computer Vision and Pattern Recognition (CVPR), 2011 IEEE Conference on %D 2011 %T Bypassing synthesis: PLS for face recognition with pose, low-resolution and sketch %A Sharma,A. %A Jacobs, David W. %K bypassing synthesis %K face recognition %K feature extraction %K feature selection %K image resolution %K least squares approximations %K multimodal recognition %K partial least squares %K pixel intensities %K PLS %K pose estimation %K subspace comparison framework %X This paper presents a novel way to perform multi-modal face recognition. We use Partial Least Squares (PLS) to linearly map images in different modalities to a common linear subspace in which they are highly correlated. PLS has been previously used effectively for feature selection in face recognition. We show both theoretically and experimentally that PLS can be used effectively across modalities. We also formulate a generic intermediate subspace comparison framework for multi-modal recognition. Surprisingly, we achieve high performance using only pixel intensities as features. We experimentally demonstrate the highest published recognition rates on the pose variations in the PIE data set, and also show that PLS can be used to compare sketches to photos, and to compare images taken at different resolutions. %B Computer Vision and Pattern Recognition (CVPR), 2011 IEEE Conference on %P 593 - 600 %8 2011/06// %G eng %R 10.1109/CVPR.2011.5995350 %0 Book Section %B Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques %D 2011 %T A Canonical Form for Testing Boolean Function Properties %A Dana Dachman-Soled %A Servedio, Rocco A. %E Goldberg, Leslie Ann %E Jansen, Klaus %E Ravi, R. %E Rolim, José D. P. 
%K Algorithm Analysis and Problem Complexity %K Boolean functions %K Computation by Abstract Devices %K Computer Communication Networks %K Computer Graphics %K Data structures %K Discrete Mathematics in Computer Science %K property testing %X In a well-known result Goldreich and Trevisan (2003) showed that every testable graph property has a “canonical” tester in which a set of vertices is selected at random and the edges queried are the complete graph over the selected vertices. We define a similar-in-spirit canonical form for Boolean function testing algorithms, and show that under some mild conditions property testers for Boolean functions can be transformed into this canonical form. Our first main result shows, roughly speaking, that every “nice” family of Boolean functions that has low noise sensitivity and is testable by an “independent tester,” has a canonical testing algorithm. Our second main result is similar but holds instead for families of Boolean functions that are closed under ID-negative minors. Taken together, these two results cover almost all of the constant-query Boolean function testing algorithms that we know of in the literature, and show that all of these testing algorithms can be automatically converted into a canonical form. %B Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques %S Lecture Notes in Computer Science %I Springer Berlin Heidelberg %P 460 - 471 %8 2011/01/01/ %@ 978-3-642-22934-3, 978-3-642-22935-0 %G eng %U http://link.springer.com/chapter/10.1007/978-3-642-22935-0_39 %0 Report %D 2011 %T Constructing Inverted Files on a Cluster of Multicore Processors Near Peak I/O Throughput %A Wei, Zheng %A JaJa, Joseph F. %K Technical Report %X We develop a new strategy for processing a collection of documents on a cluster of multicore processors to build the inverted files at almost the peak I/O throughput of the underlying system. 
Our algorithm is based on a number of novel techniques including: (i) a high-throughput pipelined strategy that produces parallel parsed streams that are consumed at the same rate by parallel indexers; (ii) a hybrid trie and B-tree dictionary data structure that enables efficient parallel construction of the global dictionary; and (iii) a partitioning strategy of the work of the indexers using random sampling, which achieves extremely good load balancing with minimal communication overhead. We have performed extensive tests of our algorithm on a cluster of 32 nodes, each consisting of two Intel Xeon X5560 Quad-core processors, and were able to achieve a throughput close to the peak throughput of the I/O system. In particular, we achieve a throughput of 280 MB/s on a single node and a throughput of 6.12 GB/s on a cluster with 32 nodes for processing the ClueWeb09 dataset. Similar results were obtained for widely different datasets. The throughput of our algorithm is superior to the best known algorithms reported in the literature even when compared to those running on much larger clusters. %I Institute for Advanced Computer Studies, Univ of Maryland, College Park %V UMIACS-TR-2011-03 %8 2011/03/03/ %G eng %U http://drum.lib.umd.edu/handle/1903/11311 %0 Conference Paper %B Computer Vision and Pattern Recognition (CVPR), 2011 IEEE Conference on %D 2011 %T A deformation and lighting insensitive metric for face recognition based on dense correspondences %A Jorstad,A. %A Jacobs, David W. %A Trouve,A. %K Bayes methods %K deformation %K dense correspondences %K expression variation %K face recognition %K image comparison %K lighting insensitive metric %K lighting variation %K naïve Bayes classifier %K pose estimation %K pose variation %X Face recognition is a challenging problem, complicated by variations in pose, expression, lighting, and the passage of time. Significant work has been done to solve each of these problems separately. 
We consider the problems of lighting and expression variation together, proposing a method that accounts for both variabilities within a single model. We present a novel deformation and lighting insensitive metric to compare images, and we present a novel framework to optimize over this metric to calculate dense correspondences between images. Typical correspondence cost patterns are learned between face image pairs, and a Naïve Bayes classifier is applied to improve recognition accuracy. Very promising results are presented on the AR Face Database, and we note that our method can be extended to a broad set of applications. %B Computer Vision and Pattern Recognition (CVPR), 2011 IEEE Conference on %P 2353 - 2360 %8 2011/06// %G eng %R 10.1109/CVPR.2011.5995431 %0 Conference Paper %B CSCW '11 %D 2011 %T A Dive into Online Community Properties %A Wagstrom, Patrick %A Martino, Jacquelyn %A von Kaenel, Juerg %A Marshini Chetty %A Thomas, John %A Jones, Lauretta %K enterprise %K Online communities %K Taxonomy %K Visualization %X As digital communities grow in size, their feature sets also grow with them. Different users have different experiences with the same tools and communities. Enterprises and other organizations seeking to leverage these communities need a straightforward way to analyze and compare a variety of salient attributes of these communities. We describe a taxonomy and tool for crowd-sourcing user-based evaluations of enterprise-relevant attributes of digital communities and present the results of a small-scale study on its usefulness and stability across multiple raters. %B CSCW '11 %S CSCW '11 %I ACM %P 725 - 728 %8 2011/// %@ 978-1-4503-0556-3 %G eng %U http://doi.acm.org/10.1145/1958824.1958955 %0 Journal Article %J Pattern Analysis and Machine Intelligence, IEEE Transactions on %D 2011 %T Dynamic Processing Allocation in Video %A Daozheng Chen %A Bilgic,M. %A Getoor, Lise %A Jacobs, David W. 
%K background subtraction %K baseline algorithms %K computer graphics %K digital video %K face detection %K face recognition %K graphical model %K object detection %K resource allocation %K video analysis %K video signal processing %X Large stores of digital video pose severe computational challenges to existing video analysis algorithms. In applying these algorithms, users must often trade off processing speed for accuracy, as many sophisticated and effective algorithms require large computational resources that make it impractical to apply them throughout long videos. One can save considerable effort by applying these expensive algorithms sparingly, directing their application using the results of more limited processing. We show how to do this for retrospective video analysis by modeling a video using a chain graphical model and performing inference both to analyze the video and to direct processing. We apply our method to problems in background subtraction and face detection, and show in experiments that this leads to significant improvements over baseline algorithms. %B Pattern Analysis and Machine Intelligence, IEEE Transactions on %V 33 %P 2174 - 2187 %8 2011/11// %@ 0162-8828 %G eng %N 11 %R 10.1109/TPAMI.2011.55 %0 Journal Article %J Proceedings of the Genetic and Evolutionary Computation Conference (GECCO 2011) %D 2011 %T Estimating Functional Agent-Based Models: An Application to Bid Shading in Online Markets Format %A Guo,Wei %A Jank,Wolfgang %A Rand, William %K Agent-based modeling %K business %K Calibration %K Genetic algorithms %K internet auctions %K simulation %X Bid shading is a common strategy in online auctions to avoid the "winner’s curse". While almost all bidders shade their bids, at least to some degree, it is impossible to infer the degree and volume of shaded bids directly from observed bidding data. In fact, most bidding data only allows us to observe the resulting price process, i.e. 
whether prices increase fast (due to little shading) or whether they slow down (when all bidders shade their bids). In this work, we propose an agent-based model that simulates bidders with different bidding strategies and their interaction with one another. We calibrate that model (and hence estimate properties about the propensity and degree of shaded bids) by matching the emerging simulated price process with that of the observed auction data using genetic algorithms. From a statistical point of view, this is challenging because we match functional draws from simulated and real price processes. We propose several competing fitness functions and explore how the choice alters the resulting ABM calibration. We apply our model to the context of eBay auctions for digital cameras and show that a balanced fitness function yields the best results. %B Proceedings of the Genetic and Evolutionary Computation Conference (GECCO 2011) %8 2011/// %G eng %U http://papers.ssrn.com/sol3/papers.cfm?abstract_id=1846639 %0 Journal Article %J Journal of Signal Processing Systems %D 2011 %T Exploiting Statically Schedulable Regions in Dataflow Programs %A Gu, Ruirui %A Janneck, Jörn W. %A Raulet, Mickaël %A Bhattacharyya, Shuvra S. %K Cal %K Circuits and Systems %K Computer Imaging, Vision, Pattern Recognition and Graphics %K Dataflow %K DIF %K Electrical Engineering %K Image Processing and Computer Vision %K multicore processors %K pattern recognition %K Quasi-static scheduling %K Signal, Image and Speech Processing %X Dataflow descriptions have been used in a wide range of Digital Signal Processing (DSP) applications, such as multi-media processing, and wireless communications. Among various forms of dataflow modeling, Synchronous Dataflow (SDF) is geared towards static scheduling of computational modules, which improves system performance and predictability. However, many DSP applications do not fully conform to the restrictions of SDF modeling. 
More general dataflow models, such as CAL (Eker and Janneck 2003), have been developed to describe dynamically-structured DSP applications. Such generalized models can express dynamically changing functionality, but lose the powerful static scheduling capabilities provided by SDF. This paper focuses on the detection of SDF-like regions in dynamic dataflow descriptions—in particular, in the generalized specification framework of CAL. This is an important step for applying static scheduling techniques within a dynamic dataflow framework. Our techniques combine the advantages of different dataflow languages and tools, including CAL (Eker and Janneck 2003), DIF (Hsu et al. 2005) and CAL2C (Roquier et al. 2008). In addition to detecting SDF-like regions, we apply existing SDF scheduling techniques to exploit the static properties of these regions within enclosing dynamic dataflow models. Furthermore, we propose an optimized approach for mapping SDF-like regions onto parallel processing platforms such as multi-core processors. %B Journal of Signal Processing Systems %V 63 %P 129 - 142 %8 2011 %@ 1939-8018, 1939-8115 %G eng %U http://link.springer.com/article/10.1007/s11265-009-0445-1 %N 1 %! J Sign Process Syst %0 Book Section %B Handbook of Face Recognition %D 2011 %T Face Tracking and Recognition in Video %A Chellappa, Rama %A Du,Ming %A Turaga,Pavan %A Zhou,Shaohua Kevin %E Li,Stan Z. %E Jain,Anil K. %X In this chapter, we describe the utility of videos in enhancing performance of image-based recognition tasks. We discuss a joint tracking-recognition framework that allows for using the motion information in a video to better localize and identify the person in the video using still galleries. We discuss how to jointly capture facial appearance and dynamics to obtain a parametric representation for video-to-video recognition. We discuss recognition in multi-camera networks where the probe and gallery both consist of multi-camera videos. 
Concluding remarks and directions for future research are provided. %B Handbook of Face Recognition %I Springer London %P 323 - 351 %8 2011/// %@ 978-0-85729-932-1 %G eng %U http://dx.doi.org/10.1007/978-0-85729-932-1_13 %0 Conference Paper %B Parallel Distributed Processing Symposium (IPDPS), 2011 IEEE International %D 2011 %T A Fast Algorithm for Constructing Inverted Files on Heterogeneous Platforms %A Wei, Zheng %A JaJa, Joseph F. %K B-tree dictionary data structure %K central processing unit %K coprocessors %K CUDA indexer %K data structures %K graphics processing unit %K heterogeneous platform %K high-throughput pipelined strategy %K hybrid trie %K Intel Xeon X5560 Quad-core %K inverted files construction %K multicore CPU %K multiprocessing systems %K multithreaded GPU %K NVIDIA Tesla C1060 %X Given a collection of documents residing on a disk, we develop a new strategy for processing these documents and building the inverted files extremely fast. Our approach is tailored for a heterogeneous platform consisting of a multicore CPU and a highly multithreaded GPU. Our algorithm is based on a number of novel techniques including: (i) a high-throughput pipelined strategy that produces parallel parsed streams that are consumed at the same rate by parallel indexers, (ii) a hybrid trie and B-tree dictionary data structure in which the trie is represented by a table for fast look-up and each B-tree node contains string caches, (iii) allocation of parsed streams with frequent terms to CPU threads and the rest to GPU threads so as to match the throughput of parsed streams, and (iv) optimized CUDA indexer implementation that ensures coalesced memory accesses and effective use of shared memory.
We have performed extensive tests of our algorithm on a single node (two Intel Xeon X5560 Quad-core) with two NVIDIA Tesla C1060 GPUs attached to it, and were able to achieve a throughput of more than 262 MB/s on the ClueWeb09 dataset. Similar results were obtained for widely different datasets. The throughput of our algorithm is superior to the best known algorithms reported in the literature even when compared to those run on large clusters. %B Parallel Distributed Processing Symposium (IPDPS), 2011 IEEE International %P 1124 - 1134 %8 2011/05// %G eng %R 10.1109/IPDPS.2011.107 %0 Journal Article %J SIGCOMM-Computer Communication Review %D 2011 %T How Many Tiers? Pricing in the Internet Transit Market %A Valancius,V. %A Lumezanu,C. %A Feamster, Nick %A Johari,R. %A Vazirani,V. V %X ISPs are increasingly selling “tiered” contracts, which offer Internet connectivity to wholesale customers in bundles, at rates based on the cost of the links that the traffic in the bundle is traversing. Although providers have already begun to implement and deploy tiered pricing contracts, little is known about how to structure them. Although contracts that sell connectivity on finer granularities improve market efficiency, they are also more costly for ISPs to implement and more difficult for customers to understand. Our goal is to analyze whether current tiered pricing practices in the wholesale transit market yield optimal profits for ISPs and whether better bundling strategies might exist. In the process, we offer two contributions: (1) we develop a novel way of mapping traffic and topology data to a demand and cost model; and (2) we fit this model on three large real-world networks: a European transit ISP, a content distribution network, and an academic research network, and run counterfactuals to evaluate the effects of different bundling strategies.
Our results show that the common ISP practice of structuring tiered contracts according to the cost of carrying the traffic flows (e.g., offering a discount for traffic that is local) can be suboptimal and that dividing contracts based on both traffic demand and the cost of carrying it into only three or four tiers yields near-optimal profit for the ISP. %B SIGCOMM-Computer Communication Review %V 41 %P 194 - 194 %8 2011/// %G eng %N 4 %0 Journal Article %J Handbook of face recognition %D 2011 %T Illumination modeling for face recognition %A Basri,R. %A Jacobs, David W. %X In this chapter, we show that effective systems can account for the effects of lighting using fewer than 10 degrees of freedom. This can have considerable impact on the speed and accuracy of recognition systems. We will describe theoretical results that, with some simplifying assumptions, prove the validity of low-dimensional, linear approximations to the set of images produced by a face. %B Handbook of face recognition %P 169 - 195 %8 2011/// %G eng %R 10.1007/978-0-85729-932-1_7 %0 Journal Article %J Image Processing, IEEE Transactions on %D 2011 %T Illumination Recovery From Image With Cast Shadows Via Sparse Representation %A Mei,Xue %A Ling,Haibin %A Jacobs, David W. %K ℓ1-regularized least-squares formulation %K cast shadows %K compressive sensing %K data representation %K directional light sources %K illumination recovery %K image coding %K image compression %K image reconstruction %K Lambertian scene %K least squares approximations %K low-dimensional linear subspaces %K nonnegativity constraints %K sparse representation %X In this paper, we propose using sparse representation for recovering the illumination of a scene from a single image with cast shadows, given the geometry of the scene. The images with cast shadows can be quite complex and, therefore, cannot be well approximated by low-dimensional linear subspaces.
However, it can be shown that the set of images produced by a Lambertian scene with cast shadows can be efficiently represented by a sparse set of images generated by directional light sources. We first model an image with cast shadows composed of a diffusive part (without cast shadows) and a residual part that captures cast shadows. Then, we express the problem in an ℓ1-regularized least-squares formulation, with nonnegativity constraints (as light has to be non-negative at any point in space). This sparse representation enjoys an effective and fast solution thanks to recent advances in compressive sensing. In experiments on synthetic and real data, our approach performs favorably in comparison with several previously proposed methods. %B Image Processing, IEEE Transactions on %V 20 %P 2366 - 2377 %8 2011/08// %@ 1057-7149 %G eng %N 8 %R 10.1109/TIP.2011.2118222 %0 Book Section %B Information Security %D 2011 %T Implicit Authentication through Learning User Behavior %A Elaine Shi %A Niu, Yuan %A Jakobsson, Markus %A Chow, Richard %E Burmester, Mike %E Tsudik, Gene %E Magliveras, Spyros %E Ilic, Ivana %K Computer science %X Users are increasingly dependent on mobile devices. However, current authentication methods like password entry are significantly more frustrating and difficult to perform on these devices, leading users to create and reuse shorter passwords and PINs, or no authentication at all. We present implicit authentication - authenticating users based on behavior patterns. We describe our model for performing implicit authentication and assess our techniques using more than two weeks of collected data from over 50 subjects.
%B Information Security %S Lecture Notes in Computer Science %I Springer Berlin / Heidelberg %V 6531 %P 99 - 113 %8 2011 %@ 978-3-642-18177-1 %G eng %U http://www.springerlink.com/content/m57u551u3133475m/abstract/ %0 Journal Article %J Proceedings of the VLDB Endowment %D 2011 %T An incremental Hausdorff distance calculation algorithm %A Nutanong,Sarana %A Jacox,Edwin H. %A Samet, Hanan %X The Hausdorff distance is commonly used as a similarity measure between two point sets. Using this measure, a set X is considered similar to Y iff every point in X is close to at least one point in Y. Formally, the Hausdorff distance HausDist(X, Y) can be computed as the Max-Min distance from X to Y, i.e., find the maximum of the distance from an element in X to its nearest neighbor (NN) in Y. Although this is similar to the closest pair and farthest pair problems, computing the Hausdorff distance is a more challenging problem since its Max-Min nature involves both maximization and minimization rather than just one or the other. A traditional approach to computing HausDist(X, Y) performs a linear scan over X and utilizes an index to help compute the NN in Y for each x in X. We present a pair of basic solutions that avoid scanning X by applying the concept of aggregate NN search to searching for the element in X that yields the Hausdorff distance. In addition, we propose a novel method which incrementally explores the indexes of the two sets X and Y simultaneously. As an example application of our techniques, we use the Hausdorff distance as a measure of similarity between two trajectories (represented as point sets). We also use this example application to compare the performance of our proposed method with the traditional approach and the basic solutions. Experimental results show that our proposed method outperforms all competitors by one order of magnitude in terms of the tree traversal cost and total response time. 
%B Proceedings of the VLDB Endowment %V 4 %P 506 - 517 %8 2011/05// %@ 2150-8097 %G eng %U http://dl.acm.org/citation.cfm?id=2002974.2002978 %N 8 %0 Conference Paper %B Computer Vision and Pattern Recognition (CVPR), 2011 IEEE Conference on %D 2011 %T A large-scale benchmark dataset for event recognition in surveillance video %A Oh,Sangmin %A Hoogs, A. %A Perera,A. %A Cuntoor, N. %A Chen,Chia-Chih %A Lee,Jong Taek %A Mukherjee,S. %A Aggarwal, JK %A Lee,Hyungtae %A Davis, Larry S. %A Swears,E. %A Wang,Xioyang %A Ji,Qiang %A Reddy,K. %A Shah,M. %A Vondrick,C. %A Pirsiavash,H. %A Ramanan,D. %A Yuen,J. %A Torralba,A. %A Song,Bi %A Fong,A. %A Roy-Chowdhury, A. %A Desai,M. %K computer vision %K continuous visual event recognition %K CVER algorithm %K diverse outdoor scenes %K evaluation metrics %K image recognition %K large-scale video dataset %K moving object tracks %K surveillance video %K video databases %K visual recognition tasks %X We introduce a new large-scale video dataset designed to assess the performance of diverse visual event recognition algorithms with a focus on continuous visual event recognition (CVER) in outdoor areas with wide coverage. Previous datasets for action recognition are unrealistic for real-world surveillance because they consist of short clips showing one action by one individual [15, 8]. Datasets have been developed for movies [11] and sports [12], but these actions and scene conditions do not apply effectively to surveillance videos. Our dataset consists of many outdoor scenes with actions occurring naturally by non-actors in continuously captured videos of the real world. The dataset includes large numbers of instances for 23 event types distributed throughout 29 hours of video. This data is accompanied by detailed annotations which include both moving object tracks and event examples, which will provide a solid basis for large-scale evaluation.
Additionally, we propose different types of evaluation modes for visual recognition tasks and evaluation metrics along with our preliminary experimental results. We believe that this dataset will stimulate diverse aspects of computer vision research and help us to advance the CVER tasks in the years ahead. %B Computer Vision and Pattern Recognition (CVPR), 2011 IEEE Conference on %P 3153 - 3160 %8 2011/06// %G eng %R 10.1109/CVPR.2011.5995586 %0 Conference Paper %B Computer Vision and Pattern Recognition (CVPR), 2011 IEEE Conference on %D 2011 %T Learning a discriminative dictionary for sparse coding via label consistent K-SVD %A Zhuolin Jiang %A Zhe Lin %A Davis, Larry S. %K classification error %K Dictionaries %K dictionary learning process %K discriminative sparse code error %K face recognition %K image classification %K Image coding %K K-SVD %K label consistent %K learning (artificial intelligence) %K object category recognition %K Object recognition %K optimal linear classifier %K reconstruction error %K singular value decomposition %K Training data %X A label consistent K-SVD (LC-KSVD) algorithm to learn a discriminative dictionary for sparse coding is presented. In addition to using class labels of training data, we also associate label information with each dictionary item (columns of the dictionary matrix) to enforce discriminability in sparse codes during the dictionary learning process. More specifically, we introduce a new label consistent constraint called `discriminative sparse-code error' and combine it with the reconstruction error and the classification error to form a unified objective function. The optimal solution is efficiently obtained using the K-SVD algorithm. Our algorithm learns a single over-complete dictionary and an optimal linear classifier jointly. It yields dictionaries so that feature points with the same class labels have similar sparse codes. 
Experimental results demonstrate that our algorithm outperforms many recently proposed sparse coding techniques for face and object category recognition under the same learning conditions. %B Computer Vision and Pattern Recognition (CVPR), 2011 IEEE Conference on %P 1697 - 1704 %8 2011/06// %G eng %R 10.1109/CVPR.2011.5995354 %0 Journal Article %J Proceedings of the VLDB Endowment %D 2011 %T Lightweight Graphical Models for Selectivity Estimation Without Independence Assumptions %A Tzoumas,K. %A Deshpande, Amol %A Jensen,C. S %X As a result of decades of research and industrial development, modern query optimizers are complex software artifacts. However, the quality of the query plan chosen by an optimizer is largely determined by the quality of the underlying statistical summaries. Small selectivity estimation errors, propagated exponentially, can lead to severely sub-optimal plans. Modern optimizers typically maintain one-dimensional statistical summaries and make the attribute value independence and join uniformity assumptions for efficiently estimating selectivities. Therefore, selectivity estimation errors in today’s optimizers are frequently caused by missed correlations between attributes. We present a selectivity estimation approach that does not make the independence assumptions. By carefully using concepts from the field of graphical models, we are able to factor the joint probability distribution of all the attributes in the database into small, usually two-dimensional distributions. We describe several optimizations that can make selectivity estimation highly efficient, and we present a complete implementation inside PostgreSQL’s query optimizer. Experimental results indicate an order of magnitude better selectivity estimates, while keeping optimization time in the range of tens of milliseconds.
%B Proceedings of the VLDB Endowment %V 4 %8 2011/// %G eng %N 7 %0 Conference Paper %B Computer Vision and Pattern Recognition (CVPR), 2011 IEEE Conference on %D 2011 %T Localizing parts of faces using a consensus of exemplars %A Belhumeur,P. N. %A Jacobs, David W. %A Kriegman, D.J. %A Kumar,N. %K Bayes methods %K Bayesian objective function %K exemplar images %K expression %K face part localization %K face recognition %K human faces %K lighting %K occlusion %K pose %X We present a novel approach to localizing parts in images of human faces. The approach combines the output of local detectors with a non-parametric set of global models for the part locations based on over one thousand hand-labeled exemplar images. By assuming that the global models generate the part locations as hidden variables, we derive a Bayesian objective function. This function is optimized using a consensus of models for these hidden variables. The resulting localizer handles a much wider range of expression, pose, lighting and occlusion than prior ones. We show excellent performance on a new dataset gathered from the internet and show that our localizer achieves state-of-the-art performance on the less challenging BioID dataset. %B Computer Vision and Pattern Recognition (CVPR), 2011 IEEE Conference on %P 545 - 552 %8 2011/06// %G eng %R 10.1109/CVPR.2011.5995602 %0 Journal Article %J AMIA Annual Symposium Proceedings %D 2011 %T Medication Reconciliation: Work Domain Ontology, Prototype Development, and a Predictive Model %A Markowitz,Eliz %A Bernstam,Elmer V. %A Herskovic,Jorge %A Zhang,Jiajie %A Shneiderman, Ben %A Plaisant, Catherine %A Johnson,Todd R. %X Medication errors can result from administration inaccuracies at any point of care and are a major cause for concern. To develop a successful Medication Reconciliation (MR) tool, we believe it necessary to build a Work Domain Ontology (WDO) for the MR process.
A WDO defines the explicit, abstract, implementation-independent description of the task by separating the task from work context, application technology, and cognitive architecture. We developed a prototype based upon the WDO and designed to adhere to standard principles of interface design. The prototype was compared to Legacy Health System’s and Pre-Admission Medication List Builder MR tools via a Keystroke-Level Model analysis for three MR tasks. The analysis found the prototype requires the fewest mental operations, completes tasks in the fewest steps, and completes tasks in the least amount of time. Accordingly, we believe that developing an MR tool, based upon the WDO and user interface guidelines, improves user efficiency and reduces cognitive load. %B AMIA Annual Symposium Proceedings %V 2011 %P 878 - 887 %8 2011/// %@ 1942-597X %G eng %0 Journal Article %J Proceedings of the European Signal Processing Conference %D 2011 %T Model-based precision analysis and optimization for digital signal processors %A Kedilaya, Soujanya %A Plishker,William %A Purkovic, Aleksandar %A Johnson, Brian %A Bhattacharyya, Shuvra S. %X Embedded signal processing has witnessed explosive growth in recent years in both scientific and consumer applications, driving the need for complex, high-performance signal processing systems that are largely application driven. In order to efficiently implement these systems on programmable platforms such as digital signal processors (DSPs), it is important to analyze and optimize the application design from early stages of the design process. A key performance concern for designers is choosing the data format. In this work, we propose a systematic and efficient design flow involving model-based design to analyze application data sets and precision requirements. We demonstrate this design flow with an exploration study into the required precision for eigenvalue decomposition (EVD) using the Jacobi algorithm.
We demonstrate that with a high degree of structured analysis and automation, we are able to analyze the data set to derive an efficient data format, and optimize important parts of the algorithm with respect to precision. %B Proceedings of the European Signal Processing Conference %P 506 - 510 %8 2011 %G eng %0 Journal Article %J Arxiv preprint arXiv:1112.3740 %D 2011 %T Modeling Tiered Pricing in the Internet Transit Market %A Valancius,Vytautas %A Lumezanu,Cristian %A Feamster, Nick %A Johari,Ramesh %A Vazirani,Vijay V. %K Computer Science - Networking and Internet Architecture %X ISPs are increasingly selling "tiered" contracts, which offer Internet connectivity to wholesale customers in bundles, at rates based on the cost of the links that the traffic in the bundle is traversing. Although providers have already begun to implement and deploy tiered pricing contracts, little is known about how such pricing affects ISPs and their customers. While contracts that sell connectivity on finer granularities improve market efficiency, they are also more costly for ISPs to implement and more difficult for customers to understand. In this work we present two contributions: (1) we develop a novel way of mapping traffic and topology data to a demand and cost model; and (2) we fit this model on three large real-world networks: a European transit ISP, a content distribution network, and an academic research network, and run counterfactuals to evaluate the effects of different pricing strategies on both the ISP profit and the consumer surplus. We highlight three core findings. First, ISPs gain most of the profits with only three or four pricing tiers and likely have little incentive to increase granularity of pricing even further. Second, we show that consumer surplus follows closely, if not precisely, the increases in ISP profit with more pricing tiers.
Finally, we find that the common ISP practice of structuring tiered contracts according to the cost of carrying the traffic flows (e.g., offering a discount for traffic that is local) can be suboptimal and that dividing contracts based on both traffic demand and the cost of carrying it into only three or four tiers yields near-optimal profit for the ISP. %B Arxiv preprint arXiv:1112.3740 %8 2011/12/16/ %G eng %U http://arxiv.org/abs/1112.3740 %0 Conference Paper %B Proceedings of the 42nd ACM technical symposium on Computer science education %D 2011 %T NSF/IEEE-TCPP curriculum initiative on parallel and distributed computing: core topics for undergraduates %A Prasad,S. K. %A Chtchelkanova,A. %A Das,S. %A Dehne,F. %A Gouda,M. %A Gupta,A. %A JaJa, Joseph F. %A Kant,K. %A La Salle,A. %A LeBlanc,R. %A others %B Proceedings of the 42nd ACM technical symposium on Computer science education %P 617 - 618 %8 2011/// %G eng %0 Conference Paper %B Privacy, security, risk and trust (passat), 2011 ieee third international conference on and 2011 ieee third international conference on social computing (socialcom) %D 2011 %T Odd Leaf Out: Improving Visual Recognition with Games %A Hansen,D. L %A Jacobs, David W. %A Lewis,D. %A Biswas,A. %A Preece,J. %A Rotman,D. %A Stevens,E. %K biology computing %K botany %K computer games %K computer vision algorithm %K educational game %K human feedback %K image classification %K labeled image datasets %K leaf recognition %K misclassification errors %K Odd Leaf Out %K scientific tasks %K textual tags %K visual recognition %X A growing number of projects are solving complex computational and scientific tasks by soliciting human feedback through games. Many games with a purpose focus on generating textual tags for images.
In contrast, we introduce a new game, Odd Leaf Out, which provides players with an enjoyable and educational game that serves the purpose of identifying misclassification errors in a large database of labeled leaf images. The game uses a novel mechanism to solicit useful information from players' incorrect answers. A study of 165 players showed that game data can be used to identify mislabeled leaves much more quickly than would have been possible using a computer vision algorithm alone. Domain novices and experts were equally good at identifying mislabeled images, although domain experts enjoyed the game more. We discuss the successes and challenges of this new game, which can be applied to other domains with labeled image datasets. %B Privacy, security, risk and trust (passat), 2011 ieee third international conference on and 2011 ieee third international conference on social computing (socialcom) %P 87 - 94 %8 2011/10// %G eng %R 10.1109/PASSAT/SocialCom.2011.225 %0 Conference Paper %B International Conference on Document Analysis and Recognition %D 2011 %T Offline Writer Identification using K-Adjacent Segments %A Jain,Rajiv %A David Doermann %X This paper presents a method for performing offline writer identification by using K-adjacent segment (KAS) features in a bag-of-features framework to model a user’s handwriting. This approach achieves a top-1 recognition rate of 93% on the benchmark IAM English handwriting dataset, which outperforms current state-of-the-art features. Results further demonstrate that identification performance improves as the number of training samples increases, and additionally, that the performance of the KAS features extends to Arabic handwriting found in the MADCAT dataset. %B International Conference on Document Analysis and Recognition %P 769 - 773 %8 2011/// %G eng %0 Journal Article %J Journal of Signal Processing Systems %D 2011 %T Overview of the MPEG Reconfigurable Video Coding Framework %A Bhattacharyya, Shuvra S.
%A Eker, Johan %A Janneck, Jörn W. %A Lucarz, Christophe %A Mattavelli, Marco %A Raulet, Mickaël %K CAL actor language %K Circuits and Systems %K Code synthesis %K Computer Imaging, Vision, Pattern Recognition and Graphics %K Dataflow programming %K Electrical Engineering %K Image Processing and Computer Vision %K pattern recognition %K Reconfigurable Video Coding %K Signal, Image and Speech Processing %X Video coding technology in the last 20 years has evolved producing a variety of different and complex algorithms and coding standards. So far the specification of such standards, and of the algorithms that build them, has been done case by case providing monolithic textual and reference software specifications in different forms and programming languages. However, very little attention has been given to provide a specification formalism that explicitly presents common components between standards, and the incremental modifications of such monolithic standards. The MPEG Reconfigurable Video Coding (RVC) framework is a new ISO standard currently under its final stage of standardization, aiming at providing video codec specifications at the level of library components instead of monolithic algorithms. The new concept is to be able to specify a decoder of an existing standard or a completely new configuration that may better satisfy application-specific constraints by selecting standard components from a library of standard coding algorithms. The possibility of dynamic configuration and reconfiguration of codecs also requires new methodologies and new tools for describing the new bitstream syntaxes and the parsers of such new codecs. The RVC framework is based on the usage of a new actor/dataflow oriented language called CAL for the specification of the standard library and instantiation of the RVC decoder model. This language has been specifically designed for modeling complex signal processing systems.
CAL dataflow models expose the intrinsic concurrency of the algorithms by employing the notions of actor programming and dataflow. The paper gives an overview of the concepts and technologies building the standard RVC framework and the non-standard tools supporting the RVC model from the instantiation and simulation of the CAL model to software and/or hardware code synthesis. %B Journal of Signal Processing Systems %V 63 %P 251 - 263 %8 2011 %@ 1939-8018, 1939-8115 %G eng %U http://link.springer.com/article/10.1007/s11265-009-0399-3 %N 2 %! J Sign Process Syst %0 Conference Paper %B Computer Vision and Pattern Recognition (CVPR), 2011 IEEE Conference on %D 2011 %T Piecing together the segmentation jigsaw using context %A Chen,Xi %A Jain, A. %A Gupta,A. %A Davis, Larry S. %K approximation algorithms %K approximation theory %K contextual information %K cost function formulation %K greedy solution %K image recognition %K image segmentation %K jigsaw segmentation %K quadratic programming %X We present an approach to jointly solve the segmentation and recognition problem using a multiple segmentation framework. We formulate the problem as segment selection from a pool of segments, assigning each selected segment a class label. Previous multiple segmentation approaches used local appearance matching to select segments in a greedy manner. In contrast, our approach formulates a cost function based on contextual information in conjunction with appearance matching. This relaxed cost function formulation is minimized using an efficient quadratic programming solver and an approximate solution is obtained by discretizing the relaxed solution. Our approach improves labeling performance compared to other segmentation based recognition approaches.
%B Computer Vision and Pattern Recognition (CVPR), 2011 IEEE Conference on %P 2001 - 2008 %8 2011/06// %G eng %R 10.1109/CVPR.2011.5995367 %0 Conference Paper %D 2011 %T Real-Time Planning for Covering an Initially-Unknown Spatial Environment %A Shivashankar,V. %A Jain, R. %A Kuter,U. %A Nau, Dana S. %X We consider the problem of planning, on the fly, a path whereby a robotic vehicle will cover every point in an initially unknown spatial environment. We describe four strategies (Iterated WaveFront, Greedy-Scan, Delayed Greedy-Scan and Closest-First Scan) for generating cost-effective coverage plans in real time for unknown environments. We give theorems showing the correctness of our planning strategies. Our experiments demonstrate that some of these strategies work significantly better than others, and that the best ones work very well; e.g., in environments having an average of 64,000 locations for the robot to cover, the best strategy returned plans with less than 6% redundant coverage, and took only an average of 0.1 milliseconds per action.
%8 2011/// %G eng %U http://www.aaai.org/ocs/index.php/FLAIRS/FLAIRS11/paper/download/2566/2992 %0 Conference Paper %B 2011 IEEE International Conference on Computer Vision (ICCV) %D 2011 %T Sparse dictionary-based representation and recognition of action attributes %A Qiang Qiu %A Zhuolin Jiang %A Chellapa, Rama %K action attributes %K appearance information %K class distribution %K Dictionaries %K dictionary learning process %K Encoding %K Entropy %K Gaussian process model %K Gaussian processes %K Histograms %K HUMANS %K Image coding %K image representation %K information maximization %K learning (artificial intelligence) %K modeled action categories %K Mutual information %K Object recognition %K probabilistic logic %K sparse coding property %K sparse dictionary-based recognition %K sparse dictionary-based representation %K sparse feature space %K unmodeled action categories %X We present an approach for dictionary learning of action attributes via information maximization. We unify the class distribution and appearance information into an objective function for learning a sparse dictionary of action attributes. The objective function maximizes the mutual information between what has been learned and what remains to be learned in terms of appearance information and class distribution for each dictionary item. We propose a Gaussian Process (GP) model for sparse representation to optimize the dictionary objective function. The sparse coding property allows a kernel with a compact support in GP to realize a very efficient dictionary learning process. Hence we can describe an action video by a set of compact and discriminative action attributes. More importantly, we can recognize modeled action categories in a sparse feature space, which can be generalized to unseen and unmodeled action categories. Experimental results demonstrate the effectiveness of our approach in action recognition applications. 
%B 2011 IEEE International Conference on Computer Vision (ICCV) %I IEEE %P 707 - 714 %8 2011/11/06/13 %@ 978-1-4577-1101-5 %G eng %R 10.1109/ICCV.2011.6126307 %0 Journal Article %J The ISME Journal %D 2011 %T Temperature regulation of virulence factors in the pathogen Vibrio coralliilyticus %A Kimes,Nikole E. %A Grim,Christopher J. %A Johnson,Wesley R. %A Hasan,Nur A. %A Tall,Ben D. %A Kothary,Mahendra H. %A Kiss,Hajnalka %A Munk,A. Christine %A Tapia,Roxanne %A Green,Lance %A Detter,Chris %A Bruce,David C. %A Brettin,Thomas S. %A Rita R Colwell %A Morris,Pamela J. %K ecophysiology %K ecosystems %K environmental biotechnology %K geomicrobiology %K ISME J %K microbe interactions %K microbial communities %K microbial ecology %K microbial engineering %K microbial epidemiology %K microbial genomics %K microorganisms %X Sea surface temperatures (SST) are rising because of global climate change. As a result, pathogenic Vibrio species that infect humans and marine organisms during warmer summer months are of growing concern. Coral reefs, in particular, are already experiencing unprecedented degradation worldwide due in part to infectious disease outbreaks and bleaching episodes that are exacerbated by increasing SST. For example, Vibrio coralliilyticus, a globally distributed bacterium associated with multiple coral diseases, infects corals at temperatures above 27 °C. The mechanisms underlying this temperature-dependent pathogenicity, however, are unknown. In this study, we identify potential virulence mechanisms using whole genome sequencing of V. coralliilyticus ATCC (American Type Culture Collection) BAA-450. Furthermore, we demonstrate direct temperature regulation of numerous virulence factors using proteomic analysis and bioassays. 
Virulence factors involved in motility, host degradation, secretion, antimicrobial resistance and transcriptional regulation are upregulated at the higher virulent temperature of 27 °C, concurrent with phenotypic changes in motility, antibiotic resistance, hemolysis, cytotoxicity and bioluminescence. These results provide evidence that temperature regulates multiple virulence mechanisms in V. coralliilyticus, independent of abundance. The ecological and biological significance of this temperature-dependent virulence response is reinforced by climate change models that predict tropical SST to consistently exceed 27 °C during the spring, summer and fall seasons. We propose V. coralliilyticus as a model Gram-negative bacterium to study temperature-dependent pathogenicity in Vibrio-related diseases. %B The ISME Journal %V 6 %P 835 - 846 %8 2011/12/08/ %@ 1751-7362 %G eng %U http://www.nature.com/ismej/journal/v6/n4/full/ismej2011154a.html %N 4 %R 10.1038/ismej.2011.154 %0 Conference Paper %B Computer Vision Workshops (ICCV Workshops), 2011 IEEE International Conference on %D 2011 %T Trainable 3D recognition using stereo matching %A Castillo,C. D %A Jacobs, David W. %K 2D image %K 3D classification data set %K 3D object recognition %K CMU PIE dataset %K face recognition %K image classification %K image descriptor %K image matching %K occlusion %K pose estimation %K pose variation %K solid modelling %K stereo image processing %K stereo matching %K trainable 3D recognition %X Stereo matching has been used for face recognition in the presence of pose variation. In this approach, stereo matching is used to compare two 2-D images based on correspondences that reflect the effects of viewpoint variation and allow for occlusion. We show how to use stereo matching to derive image descriptors that can be used to train a classifier. This improves face recognition performance, producing the best published results on the CMU PIE dataset. 
We also demonstrate that classification based on stereo matching can be used for general object classification in the presence of pose variation. In preliminary experiments we show promising results on the 3D object class dataset, a standard, challenging 3D classification data set. %B Computer Vision Workshops (ICCV Workshops), 2011 IEEE International Conference on %P 625 - 631 %8 2011/// %G eng %R 10.1109/ICCVW.2011.6130301 %0 Journal Article %J Nucleic Acids Research %D 2011 %T Transcriptional Regulation Via TF-Modifying Enzymes: An Integrative Model-Based Analysis %A Everett,Logan J %A Jensen,Shane T %A Hannenhalli, Sridhar %X Transcription factor activity is largely regulated through post-translational modification. Here, we report the first integrative model of transcription that includes both interactions between transcription factors and promoters, and between transcription factors and modifying enzymes. Simulations indicate that our method is robust against noise. We validated our tool on a well-studied stress response network in yeast and on a STAT1-mediated regulatory network in human B cells. Our work represents a significant step toward a comprehensive model of gene transcription. %B Nucleic Acids Research %V 39 %P e78 %8 2011/07/01/ %@ 0305-1048, 1362-4962 %G eng %U http://nar.oxfordjournals.org/content/39/12/e78 %N 12 %R 10.1093/nar/gkr172 %0 Journal Article %J The American Journal of Tropical Medicine and Hygiene %D 2011 %T Warming Oceans, Phytoplankton, and River Discharge: Implications for Cholera Outbreaks %A Jutla,Antarpreet S. %A Akanda,Ali S. %A Griffiths,Jeffrey K. %A Rita R Colwell %A Islam,Shafiqul %X Phytoplankton abundance is inversely related to sea surface temperature (SST). However, a positive relationship is observed between SST and phytoplankton abundance in coastal waters of Bay of Bengal. 
This has led to an assertion that in a warming climate, rise in SST may increase phytoplankton blooms and, therefore, cholera outbreaks. Here, we explain why a positive SST-phytoplankton relationship exists in the Bay of Bengal and the implications of such a relationship on cholera dynamics. We found clear evidence of two independent physical drivers for phytoplankton abundance. The first one is the widely accepted phytoplankton blooming produced by the upwelling of cold, nutrient-rich deep ocean waters. The second, which explains the Bay of Bengal findings, is coastal phytoplankton blooming during high river discharges with terrestrial nutrients. Causal mechanisms should be understood when associating SST with phytoplankton and subsequent cholera outbreaks in regions where freshwater discharge is a predominant mechanism for phytoplankton production. %B The American Journal of Tropical Medicine and Hygiene %V 85 %P 303 - 308 %8 08/2011 %@ 0002-9637 %G eng %U http://www.ajtmh.org/content/85/2/303 %N 2 %R 10.4269/ajtmh.2011.11-0181 %0 Conference Paper %B Computer Vision and Pattern Recognition (CVPR), 2011 IEEE Conference on %D 2011 %T Wide-baseline stereo for face recognition with large pose variation %A Castillo,C. D %A Jacobs, David W. %K 2D image %K CMU PIE dataset %K dynamic programming %K face recognition %K frontal image %K image matching %K image processing %K near profile image %K pose estimation %K pose variation %K recognition performance %K stereo matching %K surface slant %K wide-baseline stereo %K window-based matching method %X 2-D face recognition in the presence of large pose variations presents a significant challenge. When comparing a frontal image of a face to a near profile image, one must cope with large occlusions, non-linear correspondences, and significant changes in appearance due to viewpoint. 
Stereo matching has been used to handle these problems, but performance of this approach degrades with large pose changes. We show that some of this difficulty is due to the effect that foreshortening of slanted surfaces has on window-based matching methods, which are needed to provide robustness to lighting change. We address this problem by designing a new, dynamic programming stereo algorithm that accounts for surface slant. We show that on the CMU PIE dataset this method results in significant improvements in recognition performance. %B Computer Vision and Pattern Recognition (CVPR), 2011 IEEE Conference on %P 537 - 544 %8 2011/06// %G eng %R 10.1109/CVPR.2011.5995559 %0 Journal Article %J Computational Optimization and Applications %D 2010 %T Adaptive Constraint Reduction for Convex Quadratic Programming %A Jung,Jin Hyuk %A O'Leary, Dianne P. %A Tits,Andre' L. %X We propose an adaptive, constraint-reduced, primal-dual interior-point algorithm for convex quadratic programming with many more inequality constraints than variables. We reduce the computational effort by assembling, instead of the exact normal-equation matrix, an approximate matrix from a well chosen index set which includes indices of constraints that seem to be most critical. Starting with a large portion of the constraints, our proposed scheme excludes more unnecessary constraints at later iterations. We provide proofs for the global convergence and the quadratic local convergence rate of an affine-scaling variant. Numerical experiments on random problems, on a data-fitting problem, and on a problem in array pattern synthesis show the effectiveness of the constraint reduction in decreasing the time per iteration without significantly affecting the number of iterations. We note that a similar constraint-reduction approach can be applied to algorithms of Mehrotra’s predictor-corrector type, although no convergence theory is supplied. 
%B Computational Optimization and Applications %8 2010/03// %G eng %R 10.1007/s10589-010-9324-8 %0 Conference Paper %D 2010 %T Authentication in the clouds: a framework and its application to mobile users %A Chow, Richard %A Jakobsson, Markus %A Masuoka, Ryusuke %A Molina,Jesus %A Niu, Yuan %A Elaine Shi %A Song,Zhexuan %K Authentication %K Cloud computing %K Mobile computing %X Cloud computing is a natural fit for mobile security. Typical handsets have input constraints and practical computational and power limitations, which must be respected by mobile security technologies in order to be effective. We describe how cloud computing can address these issues. Our approach is based on a flexible framework for supporting authentication decisions we call TrustCube (to manage the authentication infrastructure) and on a behavioral authentication approach referred to as implicit authentication (to translate user behavior into authentication scores). The combination results in a new authentication paradigm for users of mobile technologies, one where an appropriate balance between usability and trust can be managed through flexible policies and dynamic tuning. %S CCSW '10 %I ACM %P 1 - 6 %8 2010 %@ 978-1-4503-0089-6 %G eng %U http://doi.acm.org/10.1145/1866835.1866837 %0 Conference Paper %B 2010 Conference on Design and Architectures for Signal and Image Processing (DASIP) %D 2010 %T Automated generation of an efficient MPEG-4 Reconfigurable Video Coding decoder implementation %A Gu, Ruirui %A Piat, J. %A Raulet, M. %A Janneck, J.W. %A Bhattacharyya, Shuvra S. 
%K automated generation %K automatic design flow %K CAL language %K CAL networks %K CAL-to-C translation %K CAL2C translation %K coarse-grain dataflow representations %K Computational modeling %K data flow computing %K dataflow information %K Dataflow programming %K decoding %K Digital signal processing %K Libraries %K MPEG-4 reconfigurable video coding decoder implementation %K parallel languages %K SDF detection %K synchronous dataflow detection %K TDP %K TDP-based static scheduling %K The Dataflow interchange format Package %K Transform coding %K user-friendly design %K video coding %K video processing systems %K XML %K XML format %X This paper proposes an automatic design flow from user-friendly design to efficient implementation of video processing systems. This design flow starts with the use of coarse-grain dataflow representations based on the CAL language, which is a complete language for dataflow programming of embedded systems. Our approach integrates previously developed techniques for detecting synchronous dataflow (SDF) regions within larger CAL networks, and exploiting the static structure of such regions using analysis tools in The Dataflow interchange format Package (TDP). Using a new XML format that we have developed to exchange dataflow information between different dataflow tools, we explore systematic implementation of signal processing systems using CAL, SDF-like region detection, TDP-based static scheduling, and CAL-to-C (CAL2C) translation. Our approach, which is a novel integration of three complementary dataflow tools - the CAL parser, TDP, and CAL2C - is demonstrated on an MPEG Reconfigurable Video Coding (RVC) decoder. %B 2010 Conference on Design and Architectures for Signal and Image Processing (DASIP) %P 265 - 272 %8 2010 %G eng %0 Journal Article %J EURASIP Journal on Advances in Signal Processing %D 2010 %T Better Flow Estimation from Color Images %A Ji,H. 
%A Fermüller, Cornelia %B EURASIP Journal on Advances in Signal Processing %V 2007 %8 2010/// %G eng %N 23 %0 Journal Article %J BMC Microbiology %D 2010 %T Comparative genomic analysis reveals evidence of two novel Vibrio species closely related to V. cholerae %A Haley,Bradd %A Grim,Christopher %A Hasan,Nur %A Choi,Seon-Young %A Chun,Jongsik %A Brettin,Thomas %A Bruce,David %A Challacombe,Jean %A Detter,J. Chris %A Han,Cliff %A Rita R Colwell %X In recent years genome sequencing has been used to characterize new bacterial species, a method of analysis available as a result of improved methodology and reduced cost. Included in a constantly expanding list of Vibrio species are several that have been reclassified as novel members of the Vibrionaceae. The description of two putative new Vibrio species, Vibrio sp. RC341 and Vibrio sp. RC586 for which we propose the names V. metecus and V. parilis, respectively, previously characterized as non-toxigenic environmental variants of V. cholerae is presented in this study. Results: Based on results of whole-genome average nucleotide identity (ANI), average amino acid identity (AAI), rpoB similarity, MLSA, and phylogenetic analysis, the new species are concluded to be phylogenetically closely related to V. cholerae and V. mimicus. Vibrio sp. RC341 and Vibrio sp. RC586 demonstrate features characteristic of V. cholerae and V. mimicus, respectively, on differential and selective media, but their genomes show a 12 to 15% divergence (88 to 85% ANI and 92 to 91% AAI) compared to the sequences of V. cholerae and V. mimicus genomes (ANI <95% and AAI <96% indicative of separate species). Vibrio sp. RC341 and Vibrio sp. RC586 share 2104 ORFs (59%) and 2058 ORFs (56%) with the published core genome of V. cholerae and 2956 (82%) and 3048 ORFs (84%) with V. mimicus MB-451, respectively. The novel species share 2926 ORFs with each other (81% Vibrio sp. RC341 and 81% Vibrio sp. RC586). Virulence-associated factors and genomic islands of V. cholerae and V. 
mimicus, including VSP-I and II, were found in these environmental Vibrio spp. Conclusions: Results of this analysis demonstrate these two environmental vibrios, previously characterized as variant V. cholerae strains, are new species which have evolved from ancestral lineages of the V. cholerae and V. mimicus clade. The presence of conserved integration loci for genomic islands as well as evidence of horizontal gene transfer between these two new species, V. cholerae, and V. mimicus suggests genomic islands and virulence factors are transferred between these species. %B BMC Microbiology %V 10 %8 2010/// %G eng %0 Journal Article %J Computer Vision and Image Understanding %D 2010 %T Comparing and combining lighting insensitive approaches for face recognition %A Gopalan,Raghuraman %A Jacobs, David W. %K Classifier comparison and combination %K face recognition %K Gradient direction %K lighting %X Face recognition under changing lighting conditions is a challenging problem in computer vision. In this paper, we analyze the relative strengths of different lighting insensitive representations, and propose efficient classifier combination schemes that result in better recognition rates. We consider two experimental settings, wherein we study the performance of different algorithms with (and without) prior information on the different illumination conditions present in the scene. In both settings, we focus on the problem of having just one exemplar per person in the gallery. Based on these observations, we design algorithms for integrating the individual classifiers to capture the significant aspects of each representation. We then illustrate the performance improvement obtained through our classifier combination algorithms on the illumination subset of the PIE dataset, and on the extended Yale-B dataset. Throughout, we consider galleries with both homogeneous and heterogeneous lighting conditions. 
%B Computer Vision and Image Understanding %V 114 %P 135 - 145 %8 2010/01// %@ 1077-3142 %G eng %U http://www.sciencedirect.com/science/article/pii/S1077314209001210 %N 1 %R 10.1016/j.cviu.2009.07.005 %0 Journal Article %J BMC Evolutionary Biology %D 2010 %T Evolutionary dynamics of U12-type spliceosomal introns %A Lin,Chiao-Feng %A Mount, Stephen M. %A Jarmołowski,Artur %A Makałowski,Wojciech %X Many multicellular eukaryotes have two types of spliceosomes for the removal of introns from messenger RNA precursors. The major (U2) spliceosome processes the vast majority of introns, referred to as U2-type introns, while the minor (U12) spliceosome removes a small fraction (less than 0.5%) of introns, referred to as U12-type introns. U12-type introns have distinct sequence elements and usually occur together in genes with U2-type introns. A phylogenetic distribution of U12-type introns shows that the minor splicing pathway appeared very early in eukaryotic evolution and has been lost repeatedly. %B BMC Evolutionary Biology %V 10 %P 47 - 47 %8 2010/02/17/ %@ 1471-2148 %G eng %U http://www.biomedcentral.com/1471-2148/10/47/abstract %N 1 %R 10.1186/1471-2148-10-47 %0 Journal Article %J Information Forensics and Security, IEEE Transactions on %D 2010 %T Face Verification Across Age Progression Using Discriminative Methods %A Ling,Haibin %A Soatto,S. %A Ramanathan,N. %A Jacobs, David W. %K algorithms;support %K databases; %K dataset;age %K FGnet %K hair;gradient %K information;image %K information;recognition %K machine;face %K machines;visual %K methods;eyewear;face %K methods;support %K orientation %K orientation;gradient %K progression;commercial %K pyramid;hierarchical %K quality;magnitude %K recognition;gradient %K systems;discriminative %K vector %K verification;facial %X Face verification in the presence of age progression is an important problem that has not been widely addressed. 
In this paper, we study the problem by designing and evaluating discriminative approaches. These directly tackle verification tasks without explicit age modeling, which is a hard problem by itself. First, we find that the gradient orientation, after discarding magnitude information, provides a simple but effective representation for this problem. This representation is further improved when hierarchical information is used, which results in the use of the gradient orientation pyramid (GOP). When combined with a support vector machine, GOP demonstrates excellent performance in all our experiments, in comparison with seven different approaches including two commercial systems. Our experiments are conducted on the FGnet dataset and two large passport datasets, one of them being the largest ever reported for recognition tasks. Second, taking advantage of these datasets, we empirically study how age gaps and related issues (including image quality, spectacles, and facial hair) affect recognition algorithms. Surprisingly, we found that the added difficulty of verification produced by age gaps becomes saturated after the gap is larger than four years, for gaps of up to ten years. In addition, we find that image quality and eyewear present more of a challenge than facial hair. %B Information Forensics and Security, IEEE Transactions on %V 5 %P 82 - 91 %8 2010/03// %@ 1556-6013 %G eng %N 1 %R 10.1109/TIFS.2009.2038751 %0 Journal Article %J Developmental Cell %D 2010 %T Hopx and Hdac2 Interact to Modulate Gata4 Acetylation and Embryonic Cardiac Myocyte Proliferation %A Trivedi,Chinmay M. %A Zhu,Wenting %A Wang,Qiaohong %A Jia,Cheng %A Kee,Hae Jin %A Li,Li %A Hannenhalli, Sridhar %A Epstein,Jonathan A. %X Regulation of chromatin structure via histone modification has recently received intense attention. 
Here, we demonstrate that the chromatin-modifying enzyme histone deacetylase 2 (Hdac2) functions with a small homeodomain factor, Hopx, to mediate deacetylation of Gata4, which is expressed by cardiac progenitor cells and plays critical roles in the regulation of cardiogenesis. In the absence of Hopx and Hdac2 in mouse embryos, Gata4 hyperacetylation is associated with a marked increase in cardiac myocyte proliferation, upregulation of Gata4 target genes, and perinatal lethality. Hdac2 physically interacts with Gata4, and this interaction is stabilized by Hopx. The ability of Gata4 to transactivate cell cycle genes is impaired by Hopx/Hdac2-mediated deacetylation, and this effect is abrogated by loss of Hdac2-Gata4 interaction. These results suggest that Gata4 is a nonhistone target of Hdac2-mediated deacetylation and that Hdac2, Hopx, and Gata4 coordinately regulate cardiac myocyte proliferation during embryonic development. %B Developmental Cell %V 19 %P 450 - 459 %8 2010/09/14/ %@ 1534-5807 %G eng %U http://www.sciencedirect.com/science/article/pii/S1534580710003874 %N 3 %R 10.1016/j.devcel.2010.08.012 %0 Journal Article %J Applied and Environmental Microbiology %D 2010 %T Identification of Pathogenic Vibrio Species by Multilocus PCR-Electrospray Ionization Mass Spectrometry and Its Application to Aquatic Environments of the Former Soviet Republic of Georgia %A Whitehouse,Chris A. %A Baldwin,Carson %A Sampath,Rangarajan %A Blyn,Lawrence B. %A Melton,Rachael %A Li,Feng %A Hall,Thomas A. %A Harpin,Vanessa %A Matthews,Heather %A Tediashvili,Marina %A Jaiani,Ekaterina %A Kokashvili,Tamar %A Janelidze,Nino %A Grim,Christopher %A Rita R Colwell %A Huq,Anwar %X The Ibis T5000 is a novel diagnostic platform that couples PCR and mass spectrometry. 
In this study, we developed an assay that can identify all known pathogenic Vibrio species and field-tested it using natural water samples from both freshwater lakes and the Georgian coastal zone of the Black Sea. Of the 278 total water samples screened, 9 different Vibrio species were detected, 114 (41%) samples were positive for V. cholerae, and 5 (0.8%) samples were positive for the cholera toxin A gene (ctxA). All ctxA-positive samples were from two freshwater lakes, and no ctxA-positive samples from any of the Black Sea sites were detected. %B Applied and Environmental Microbiology %V 76 %P 1996 - 2001 %8 2010/03/15/ %@ 0099-2240, 1098-5336 %G eng %U http://aem.asm.org/content/76/6/1996 %N 6 %R 10.1128/AEM.01919-09 %0 Patent %D 2010 %T Identifying Modifiers in Web Queries Over Structured Data %A Paparizos,Stelios %A Joshi,Amrula Sadanand %A Getoor, Lise %A Ntoulas,Alexandros %E Microsoft Corporation %X Described is using modifiers in online search queries for queries that map to a database table. A modifier (e.g., an adjective or a preposition) specifies the intended meaning of a target, in which the target maps to a column in that table. The modifier thus corresponds to one or more functions that determine which rows of data in the column match the query, e.g., “cameras under $400” maps to a camera (or product) table, and “under” is the modifier that represents a function (less than) that is used to evaluate a “price” target/data column. Also described are different classes of modifiers, and generating the dictionaries for a domain (corresponding to a table) via query log mining. %V 12/473,286 %8 2010/12/02/ %G eng %U http://www.google.com/patents?id=gQTkAAAAEBAJ %0 Journal Article %J Vision research %D 2010 %T Illusory motion due to causal time filtering %A Fermüller, Cornelia %A Ji,H. %A Kitaoka,A. 
%B Vision research %V 50 %P 315 - 329 %8 2010/// %G eng %N 3 %0 Conference Paper %B 2010 IEEE Conference on Computer Vision and Pattern Recognition (CVPR) %D 2010 %T Learning shift-invariant sparse representation of actions %A Li,Yi %A Fermüller, Cornelia %A Aloimonos, J. %A Hui Ji %K action characterization %K Action recognition %K action retrieval %K action synthesis %K Character recognition %K data compression %K human motion capture %K HUMANS %K Image matching %K Image motion analysis %K image representation %K Image sequences %K Information retrieval %K joint movements %K large convex minimizations %K learning (artificial intelligence) %K learning shift-invariant sparse representation %K Matching pursuit algorithms %K minimisation %K Minimization methods %K MoCap data compression %K Motion analysis %K motion capture analysis %K motion disorder disease %K motion sequences %K orthogonal matching pursuit %K Parkinson diagnosis %K Parkinson's disease %K Pursuit algorithms %K shift-invariant basis functions %K short basis functions %K snippets %K sparse linear combination %K split Bregman algorithm %K time series %K time series data %K Unsupervised learning %K unsupervised learning algorithm %X A central problem in the analysis of motion capture (MoCap) data is how to decompose motion sequences into primitives. Ideally, a description in terms of primitives should facilitate the recognition, synthesis, and characterization of actions. We propose an unsupervised learning algorithm for automatically decomposing joint movements in human motion capture (MoCap) sequences into shift-invariant basis functions. Our formulation models the time series data of joint movements in actions as a sparse linear combination of short basis functions (snippets), which are executed (or “activated”) at different positions in time. 
Given a set of MoCap sequences of different actions, our algorithm finds the decomposition of MoCap sequences in terms of basis functions and their activations in time. Using the tools of L1 minimization, the procedure alternately solves two large convex minimizations: Given the basis functions, a variant of Orthogonal Matching Pursuit solves for the activations, and given the activations, the Split Bregman Algorithm solves for the basis functions. Experiments demonstrate the power of the decomposition in a number of applications, including action recognition, retrieval, MoCap data compression, and as a tool for classification in the diagnosis of Parkinson's disease (a motion disorder). %B 2010 IEEE Conference on Computer Vision and Pattern Recognition (CVPR) %I IEEE %P 2630 - 2637 %8 2010/06/13/18 %@ 978-1-4244-6984-0 %G eng %R 10.1109/CVPR.2010.5539977 %0 Journal Article %J Computer Vision–ECCV 2010 %D 2010 %T Learning what and how of contextual models for scene labeling %A Jain, A. %A Gupta,A. %A Davis, Larry S. %X We present a data-driven approach to predict the importance of edges and construct a Markov network for image analysis based on statistical models of global and local image features. We also address the coupled problem of predicting the feature weights associated with each edge of a Markov network for evaluation of context. Experimental results indicate that this scene-dependent structure construction model eliminates spurious edges and improves performance over fully-connected and neighborhood-connected Markov networks. 
%B Computer Vision–ECCV 2010 %P 199 - 212 %8 2010/// %G eng %0 Conference Paper %B Proceedings of the 7th USENIX conference on Networked systems design and implementation %D 2010 %T Maranello: practical partial packet recovery for 802.11 %A Han,Bo %A Schulman,Aaron %A Gringoli,Francesco %A Spring, Neil %A Bhattacharjee, Bobby %A Nava,Lorenzo %A Ji,Lusheng %A Lee,Seungjoon %A Miller,Robert %X Partial packet recovery protocols attempt to repair corrupted packets instead of retransmitting them in their entirety. Recent approaches have used physical layer confidence estimates or additional error detection codes embedded in each transmission to identify corrupt bits, or have applied forward error correction to repair without such explicit knowledge. In contrast to these approaches, our goal is a practical design that simultaneously: (a) requires no extra bits in correct packets, (b) reduces recovery latency, except in rare instances, (c) remains compatible with existing 802.11 devices by obeying timing and backoff standards, and (d) can be incrementally deployed on widely available access points and wireless cards. In this paper, we design, implement, and evaluate Maranello, a novel partial packet recovery mechanism for 802.11. In Maranello, the receiver computes checksums over blocks in corrupt packets and bundles these checksums into a negative acknowledgment sent when the sender expects to receive an acknowledgment. The sender then retransmits only those blocks for which the checksum is incorrect, and repeats this partial retransmission until it receives an acknowledgment. Successful transmissions are not burdened by additional bits, and the receiver need not infer which bits were corrupted. We implemented Maranello using OpenFWWF (open source firmware for Broadcom wireless cards) and deployed it in a small testbed. 
We compare Maranello to alternative recovery protocols using a trace-driven simulation and to 802.11 using a live implementation under various channel conditions. To our knowledge, Maranello is the first partial packet recovery design to be implemented in commonly available firmware. %B Proceedings of the 7th USENIX conference on Networked systems design and implementation %S NSDI'10 %I USENIX Association %C Berkeley, CA, USA %P 14 - 14 %8 2010/// %G eng %U http://dl.acm.org/citation.cfm?id=1855711.1855725 %0 Journal Article %J ACM Transactions on Applied Perception (TAP) %D 2010 %T Mesh saliency and human eye fixations %A Kim,Youngmin %A Varshney, Amitabh %A Jacobs, David W. %A Guimbretière,François %K eye-tracker %K mesh saliency %K Visual perception %X Mesh saliency has been proposed as a computational model of perceptual importance for meshes, and it has been used in graphics for abstraction, simplification, segmentation, illumination, rendering, and illustration. Even though this technique is inspired by models of low-level human vision, it has not yet been validated with respect to human performance. Here, we present a user study that compares the previous mesh saliency approaches with human eye movements. To quantify the correlation between mesh saliency and fixation locations for 3D rendered images, we introduce the normalized chance-adjusted saliency by improving the previous chance-adjusted saliency measure. Our results show that the current computational model of mesh saliency can model human eye movements significantly better than a purely random model or a curvature-based model. 
%B ACM Transactions on Applied Perception (TAP) %V 7 %P 12:1–12:13 %8 2010/02// %@ 1544-3558 %G eng %U http://doi.acm.org/10.1145/1670671.1670676 %N 2 %R 10.1145/1670671.1670676 %0 Conference Paper %B 2010 AAAI Fall Symposium Series %D 2010 %T The Metacognitive Loop: An Architecture for Building Robust Intelligent Systems %A Shahri,Hamid Haidarian %A Dinalankara,Wikum %A Fults,Scott %A Wilson,Shomir %A Perlis, Don %A Schmill,Matt %A Oates,Tim %A Josyula,Darsana %A Anderson,Michael %K commonsense %K ontologies %K robust intelligent systems %X The Metacognitive Loop: An Architecture for Building Robust Intelligent Systems %B 2010 AAAI Fall Symposium Series %8 2010/03/11/ %G eng %U http://www.aaai.org/ocs/index.php/FSS/FSS10/paper/view/2161 %0 Conference Paper %B Proceedings of the 2010 Roadmap for Digital Preservation Interoperability Framework Workshop %D 2010 %T Monitoring distributed collections using the Audit Control Environment (ACE) %A Smorul,Michael %A Song,Sangchul %A JaJa, Joseph F. %K digital preservation %X The Audit Control Environment (ACE) is a system which provides a scalable, auditable platform that actively monitors collections to ensure their integrity over the lifetime of an archive. It accomplishes this by using a small integrity token issued for each monitored item. This token is part of a larger externally auditable cryptographic system. We will describe how this system has been implemented for a set of applications designed to run in an archive or library environment. ACE has been used for almost two years by the Chronopolis Preservation Environment to monitor the integrity of collections replicated between the three independent archive partners. During this time, ACE has been expanded to better support the requirements of this distributed archive. We will describe how ACE has been used and expanded to support the Chronopolis preservation requirements. 
We conclude by discussing several future requirements for integrity monitoring that have been identified by users of ACE. These include securely monitoring remote data, monitoring offline data, and scaling monitoring activities in a way that does not impact the normal operational activity of an archive. %B Proceedings of the 2010 Roadmap for Digital Preservation Interoperability Framework Workshop %S US-DPIF '10 %I ACM %C New York, NY, USA %P 13:1–13:5 %8 2010/// %@ 978-1-4503-0109-1 %G eng %U http://doi.acm.org/10.1145/2039274.2039287 %R 10.1145/2039274.2039287 %0 Journal Article %J International Journal of Computer Vision %D 2010 %T Multi-Camera Tracking with Adaptive Resource Allocation %A Han,B. %A Joo, S.W. %A Davis, Larry S. %B International Journal of Computer Vision %P 1 - 14 %8 2010/// %G eng %0 Conference Paper %B Proceedings of the 19th ACM international conference on Information and knowledge management %D 2010 %T Multi-view clustering with constraint propagation for learning with an incomplete mapping between views %A Eaton,Eric %A desJardins, Marie %A Jacob,Sara %K constrained clustering %K multi-view learning %K semi-supervised learning %X Multi-view learning algorithms typically assume a complete bipartite mapping between the different views in order to exchange information during the learning process. However, many applications provide only a partial mapping between the views, creating a challenge for current methods. To address this problem, we propose a multi-view algorithm based on constrained clustering that can operate with an incomplete mapping. Given a set of pairwise constraints in each view, our approach propagates these constraints using a local similarity measure to those instances that can be mapped to the other views, allowing the propagated constraints to be transferred across views via the partial mapping. 
It uses co-EM to iteratively estimate the propagation within each view based on the current clustering model, transfer the constraints across views, and update the clustering model, thereby learning a unified model for all views. We show that this approach significantly improves clustering performance over several other methods for transferring constraints and allows multi-view clustering to be reliably applied when given a limited mapping between the views. %B Proceedings of the 19th ACM international conference on Information and knowledge management %S CIKM '10 %I ACM %C New York, NY, USA %P 389 - 398 %8 2010/// %@ 978-1-4503-0099-5 %G eng %U http://doi.acm.org/10.1145/1871437.1871489 %R 10.1145/1871437.1871489 %0 Journal Article %J OMICS: A Journal of Integrative Biology %D 2010 %T Occurrence of the Vibrio cholerae seventh pandemic VSP-I island and a new variant %A Grim,Christopher J. %A Choi,Jinna %A Jongsik Chun %A Jeon,Yoon-Seong %A Taviani,Elisa %A Hasan,Nur A. %A Haley,Bradd %A Huq,Anwar %A Rita R Colwell %B OMICS: A Journal of Integrative Biology %V 14 %P 1 - 7 %8 2010/02// %@ 1536-2310, 1557-8100 %G eng %U http://online.liebertpub.com/doi/abs/10.1089/omi.2009.0087 %N 1 %R 10.1089/omi.2009.0087 %0 Conference Paper %D 2010 %T One experience collecting sensitive mobile data %A Niu, Y. %A Elaine Shi %A Chow, R. %A Golle, P. %A Jakobsson, M. %X We report on our efforts to collect behavioral data based on activities recorded by phones. We recruited Android device owners and offered entry into a raffle for participants. Our application was distributed from the Android Market, and its placement there unexpectedly helped us find participants from casual users browsing for free applications. We collected data from 267 total participants who gave us varying amounts of data.
%8 2010 %G eng %U http://www2.parc.com/csl/members/eshi/docs/users.pdf %0 Book Section %B Pairing-Based Cryptography - Pairing 2010 %D 2010 %T Optimal Authenticated Data Structures with Multilinear Forms %A Charalampos Papamanthou %A Tamassia, Roberto %A Triandopoulos, Nikos %E Joye, Marc %E Miyaji, Atsuko %E Otsuka, Akira %K Algorithm Analysis and Problem Complexity %K authenticated dictionary %K Coding and Information Theory %K Computer Communication Networks %K Data Encryption %K Discrete Mathematics in Computer Science %K multilinear forms %K Systems and Data Security %X Cloud computing and cloud storage are becoming increasingly prevalent. In this paradigm, clients outsource their data and computations to third-party service providers. Data integrity in the cloud therefore becomes an important factor for the functionality of these web services. Authenticated data structures, implemented with various cryptographic primitives, have been widely studied as a means of providing efficient solutions to data integrity problems (e.g., Merkle trees). In this paper, we introduce a new authenticated dictionary data structure that employs multilinear forms, a cryptographic primitive proposed by Silverberg and Boneh in 2003 [10], the construction of which, however, remains an open problem to date. Our authenticated dictionary is optimal, that is, it does not add any extra asymptotic cost to the plain dictionary data structure, yielding proofs of constant size, i.e., asymptotically equal to the size of the answer, while maintaining other relevant complexities logarithmic. Instead, solutions based on cryptographic hashing (e.g., Merkle trees) require proofs of logarithmic size [40].
Because multilinear forms are not known to exist yet, our result can be viewed from a different angle: if one could prove that optimal authenticated dictionaries cannot exist in the computational model, irrespective of cryptographic primitives, then our solution would imply that cryptographically interesting multilinear form generators cannot exist as well (i.e., it can be viewed as a reduction). Thus, we provide an alternative avenue towards proving the nonexistence of multilinear form generators in the context of general lower bounds for authenticated data structures [40] and for memory checking [18], a model similar to the authenticated data structures model. %B Pairing-Based Cryptography - Pairing 2010 %S Lecture Notes in Computer Science %I Springer Berlin Heidelberg %P 246 - 264 %8 2010/01/01/ %@ 978-3-642-17454-4, 978-3-642-17455-1 %G eng %U http://link.springer.com/chapter/10.1007/978-3-642-17455-1_16 %0 Conference Paper %B Parallel Distributed Processing (IPDPS), 2010 IEEE International Symposium on %D 2010 %T Optimization of linked list prefix computations on multithreaded GPUs using CUDA %A Wei, Zheng %A JaJa, Joseph F. %K CUDA %K multithreaded GPUs %K linked list prefix computations %K prefix sums %K fine grain memory accesses %K extremely high bandwidth memory accesses %K data parallel computations %K randomization process %K optimization %K coprocessors %K multi-threading %K Cell processor %K MTA %K NVIDIA GeForce 200 series %K Tesla C1060 %X We present a number of optimization techniques to compute prefix sums on linked lists and implement them on multithreaded GPUs using CUDA. Prefix computations on linked structures involve in general highly irregular fine grain memory accesses that are typical of many computations on linked lists, trees, and graphs.
While current-generation GPUs provide substantial computational power and extremely high bandwidth memory accesses, they may appear at first to be primarily geared toward streamed, highly data parallel computations. In this paper, we introduce an optimized multithreaded GPU algorithm for prefix computations through a randomization process that reduces the problem to a large number of fine-grain computations. We map these fine-grain computations onto multithreaded GPUs in such a way that the processing cost per element is shown to be close to the best possible. Our experimental results show scalability for list sizes ranging from 1M nodes to 256M nodes, and significantly improve on the recently published parallel implementations of list ranking, including implementations on the Cell Processor, the MTA-8, and the NVIDIA GeForce 200 series. They also compare favorably to the performance of the best known CUDA algorithm for the scan operation on the Tesla C1060. %B Parallel Distributed Processing (IPDPS), 2010 IEEE International Symposium on %P 1 - 8 %8 2010/04// %G eng %R 10.1109/IPDPS.2010.5470455 %0 Journal Article %J Proc. VLDB Endow. %D 2010 %T Sharing-aware horizontal partitioning for exploiting correlations during query processing %A Tzoumas,Kostas %A Deshpande, Amol %A Jensen,Christian S. %X Optimization of join queries based on average selectivities is suboptimal in highly correlated databases. In such databases, relations are naturally divided into partitions, each partition having substantially different statistical characteristics. It is very compelling to discover such data partitions during query optimization and create multiple plans for a given query, one plan being optimal for a particular combination of data partitions. This scenario calls for the sharing of state among plans, so that common intermediate results are not recomputed. We study this problem in a setting with a routing-based query execution engine based on eddies [1].
Eddies naturally encapsulate horizontal partitioning and maximal state sharing across multiple plans. We define the notion of a conditional join plan, a novel representation of the search space that enables us to address the problem in a principled way. We present a low-overhead greedy algorithm that uses statistical summaries based on graphical models. Experimental results suggest an order of magnitude faster execution time over traditional optimization for high correlations, while maintaining the same performance for low correlations. %B Proc. VLDB Endow. %V 3 %P 542 - 553 %8 2010/09// %@ 2150-8097 %G eng %U http://dl.acm.org/citation.cfm?id=1920841.1920911 %N 1-2 %0 Journal Article %J IEEE Transactions on Pattern Analysis and Machine Intelligence %D 2010 %T Special Section on Shape Analysis and Its Applications in Image Understanding %A Srivastava, A. %A Damon,J.N. %A Dryden,I.L. %A Jermyn,I.H. %A Das,S. %A Vaswani, N. %A Huckemann,S. %A Hotz,T. %A Munk,A. %A Lin,Z. %A others %B IEEE Transactions on Pattern Analysis and Machine Intelligence %V 32 %@ 0162-8828 %8 2010/// %G eng %N 4 %0 Journal Article %J ECCV Workshops %D 2010 %T A tree-based approach to integrated action localization, recognition and segmentation %A Zhuolin Jiang %A Lin,Z. %A Davis, Larry S. %X A tree-based approach to integrated action segmentation, localization and recognition is proposed. An action is represented as a sequence of joint hog-flow descriptors extracted independently from each frame. During training, a set of action prototypes is first learned based on a k-means clustering, and then a binary tree model is constructed from the set of action prototypes based on hierarchical k-means clustering. Each tree node is characterized by a shape-motion descriptor and a rejection threshold, and an action segmentation mask is defined for leaf nodes (corresponding to a prototype).
During testing, an action is localized by mapping each test frame to a nearest neighbor prototype using a fast matching method to search the learned tree, followed by global filtering refinement. An action is recognized by maximizing the sum of the joint probabilities of the action category and action prototype over test frames. Our approach does not explicitly rely on human tracking and background subtraction, and enables action localization and recognition in realistic and challenging conditions (such as crowded backgrounds). Experimental results show that our approach can achieve recognition rates of 100% on the CMU action dataset and 100% on the Weizmann dataset. %B ECCV Workshops %8 2010/// %G eng %0 Conference Paper %B Proceedings of the 23rd annual ACM symposium on User interface software and technology %D 2010 %T VizWiz: nearly real-time answers to visual questions %A Bigham,Jeffrey P. %A Jayant,Chandrika %A Ji,Hanjie %A Little,Greg %A Miller,Andrew %A Miller,Robert C. %A Miller,Robin %A Tatarowicz,Aubrey %A White,Brandyn %A White,Samual %A Tom Yeh %K blind users %K non-visual interfaces %K real-time human computation %X The lack of access to visual information like text labels, icons, and colors can cause frustration and decrease independence for blind people. Current access technology uses automatic approaches to address some problems in this space, but the technology is error-prone, limited in scope, and quite expensive. In this paper, we introduce VizWiz, a talking application for mobile phones that offers a new alternative to answering visual questions in nearly real-time - asking multiple people on the web. To support answering questions quickly, we introduce a general approach for intelligently recruiting human workers in advance called quikTurkit so that workers are available when new questions arrive.
A field deployment with 11 blind participants illustrates that blind people can effectively use VizWiz to cheaply answer questions in their everyday lives, highlighting issues that automatic approaches will need to address to be useful. Finally, we illustrate the potential of using VizWiz as part of the participatory design of advanced tools by using it to build and evaluate VizWiz::LocateIt, an interactive mobile tool that helps blind people solve general visual search problems. %B Proceedings of the 23rd annual ACM symposium on User interface software and technology %S UIST '10 %I ACM %C New York, NY, USA %P 333 - 342 %8 2010/// %@ 978-1-4503-0271-5 %G eng %U http://doi.acm.org/10.1145/1866029.1866080 %R 10.1145/1866029.1866080 %0 Conference Paper %B Computer Vision and Pattern Recognition Workshops (CVPRW), 2010 IEEE Computer Society Conference on %D 2010 %T VizWiz::LocateIt - enabling blind people to locate objects in their environment %A Bigham,Jeffrey P. %A Jayant,Chandrika %A Miller,Andrew %A White,Brandyn %A Tom Yeh %X Blind people face a number of challenges when interacting with their environments because so much information is encoded visually. Text is pervasively used to label objects, colors carry special significance, and items can easily become lost in surroundings that cannot be quickly scanned. Many tools seek to help blind people solve these problems by enabling them to query for additional information, such as color or text shown on the object. In this paper we argue that many useful problems may be better solved by directly modeling them as search problems, and present a solution called VizWiz::LocateIt that directly supports this type of interaction. VizWiz::LocateIt enables blind people to take a picture and ask for assistance in finding a specific object.
The request is first forwarded to remote workers who outline the object, enabling efficient and accurate automatic computer vision to guide users interactively from their existing cellphones. A two-stage algorithm is presented that uses this information to guide users to the appropriate object interactively from their phone. %B Computer Vision and Pattern Recognition Workshops (CVPRW), 2010 IEEE Computer Society Conference on %I IEEE %P 65 - 72 %8 2010/06// %@ 978-1-4244-7029-7 %G eng %U http://ieeexplore.ieee.org/xpls/abs_all.jsp?arnumber=5543821 %R 10.1109/CVPRW.2010.5543821 %0 Conference Paper %B INFOCOM 2009, IEEE %D 2009 %T All Bits Are Not Equal - A Study of IEEE 802.11 Communication Bit Errors %A Han,Bo %A Ji,Lusheng %A Lee,Seungjoon %A Bhattacharjee, Bobby %A Miller,R.R. %K IEEE 802.11 %K wireless LAN %K communication bit errors %K bit error statistics %K subframe error patterns %K channel coding %K network coding %K forward error correction %K frame combining mechanisms %K transmission errors %X In IEEE 802.11 Wireless LAN (WLAN) systems, techniques such as acknowledgement, retransmission, and transmission rate adaptation are frame-level mechanisms designed for combating transmission errors. Recently sub-frame level mechanisms such as frame combining have been proposed by the research community. In this paper, we present results obtained from our bit error study for identifying sub-frame error patterns because we believe that identifiable bit error patterns can potentially introduce new opportunities in channel coding, network coding, forward error correction (FEC), and frame combining mechanisms. We have constructed a number of IEEE 802.11 wireless LAN testbeds and conducted extensive experiments to study the characteristics of bit errors and their location distribution.
Conventional wisdom dictates that bit error probability is the result of channel conditions and ought to follow a corresponding distribution. However, our measurement results identify three repeatable bit error patterns that are not induced by channel conditions. We have verified that such error patterns are present in WLAN transmissions in different physical environments and across different wireless LAN hardware platforms. We also discuss our current hypotheses for the reasons behind these bit error probability patterns and how identifying these patterns may help improve WLAN transmission robustness. %B INFOCOM 2009, IEEE %P 1602 - 1610 %8 2009/04// %G eng %R 10.1109/INFCOM.2009.5062078 %0 Conference Paper %B Robotics and Automation, 2009. ICRA '09. IEEE International Conference on %D 2009 %T Assigning cameras to subjects in video surveillance systems %A El-Alfy,H. %A Jacobs, David W. %A Davis, Larry S. %K camera assignment %K video surveillance system %K multiple cameras %K target tracking %K obstacle detection %K video segment %K bipartite graph %K maximum matching %K augmenting path %K minimum cost matching %K computation time reduction %K graph theory %K image matching %X We consider the problem of tracking multiple agents moving amongst obstacles, using multiple cameras. Given an environment with obstacles, and many people moving through it, we construct a separate narrow field of view video for as many people as possible, by stitching together video segments from multiple cameras over time. We employ a novel approach to assign cameras to people as a function of time, with camera switches when needed. The problem is modeled as a bipartite graph and the solution corresponds to a maximum matching. As people move, the solution is efficiently updated by computing an augmenting path rather than by solving for a new matching. This reduces computation time by an order of magnitude.
In addition, solving for the shortest augmenting path minimizes the number of camera switches at each update. When not all people can be covered by the available cameras, we cluster as many people as possible into small groups, then assign cameras to groups using a minimum cost matching algorithm. We test our method using numerous runs from different simulators. %B Robotics and Automation, 2009. ICRA '09. IEEE International Conference on %P 837 - 843 %8 2009/05// %G eng %R 10.1109/ROBOT.2009.5152753 %0 Journal Article %J Molecular Ecology Resources %D 2009 %T Biological agent detection technologies %A Jakupciak,John P. %A Rita R Colwell %K barcoding %K biological agent %K detection %K identification %K sequencing %X The challenge for first responders, physicians in the emergency room, public health personnel, as well as for food manufacturers, distributors and retailers is accurate and reliable identification of pathogenic agents and their corresponding diseases. This is the weakest point in biological agent detection capability today. There is intense research for new molecular detection technologies that could be used for very accurate detection of pathogens that would be a concern to first responders. These include the need for sensors for multiple applications as varied as understanding the ecology of pathogenic micro-organisms, forensics, environmental sampling for detect-to-treat applications, biological sensors for ‘detect to warn’ in infrastructure protection, responses to reports of ‘suspicious powders’, and customs and borders enforcement, to cite a few examples. The benefits of accurate detection include saving millions of dollars annually by reducing disruption of the workforce and the national economy and improving delivery of correct countermeasures to those who are most in need of the information to provide protective and/or response measures.
%B Molecular Ecology Resources %V 9 %P 51 - 57 %8 2009/04/21/ %@ 1755-0998 %G eng %U http://onlinelibrary.wiley.com/doi/10.1111/j.1755-0998.2009.02632.x/full %N s1 %R 10.1111/j.1755-0998.2009.02632.x %0 Journal Article %J J. Parallel Distrib. Comput. %D 2009 %T Call for Papers: Special Issue of the Journal of Parallel and Distributed Computing: Cloud Computing %A Chockler,Gregory %A Dekel,Eliezer %A JaJa, Joseph F. %A Jimmy Lin %B J. Parallel Distrib. Comput. %V 69 %P 813 %8 2009/09// %@ 0743-7315 %G eng %U http://dx.doi.org/10.1016/j.jpdc.2009.07.002 %N 9 %R 10.1016/j.jpdc.2009.07.002 %0 Conference Paper %B Sensor, Mesh and Ad Hoc Communications and Networks, 2009. SECON '09. 6th Annual IEEE Communications Society Conference on %D 2009 %T Channel Access Throttling for Improving WLAN QoS %A Han,Bo %A Ji,Lusheng %A Lee,Seungjoon %A Miller,R.R. %A Bhattacharjee, Bobby %K IEEE 802.11 %K wireless local area network %K quality of service %K enhanced distributed channel access %K channel access throttling %K channel access parameters %K channel access priority %K channel capacity %K traffic categories %K transmission frames %K multimedia traffic %K VoIP call capacity %K member stations %K open-source device driver %K telecommunication standards %X The de facto QoS channel access method for the IEEE 802.11 Wireless LANs is the Enhanced Distributed Channel Access (EDCA) mechanism, which differentiates transmission treatments for data frames belonging to different traffic categories with four different levels of channel access priority. In this paper, we propose extending EDCA with Channel Access Throttling (CAT) for more flexible and efficient QoS support.
By assigning different member stations different channel access parameters, CAT differentiates channel access priorities not between traffic categories but between member stations. Then, by dynamically changing the channel access parameters of each member station based on a pre-computed schedule, CAT gives EDCA WLANs the benefits of scheduled-access QoS. We also present evaluation results of CAT obtained from both simulations and experiments conducted using off-the-shelf WLAN hardware and an open-source device driver. Our results show that CAT can proportionally partition channel capacity, significantly improve performance of multimedia applications, effectively achieve performance protection for admitted flows, and increase per cell VoIP call capacity by up to 41%. %B Sensor, Mesh and Ad Hoc Communications and Networks, 2009. SECON '09. 6th Annual IEEE Communications Society Conference on %P 1 - 9 %8 2009/06// %G eng %R 10.1109/SAHCN.2009.5168915 %0 Conference Paper %B Communications, 2009. ICC '09. IEEE International Conference on %D 2009 %T Channel Access Throttling for Overlapping BSS Management %A Han,Bo %A Ji,Lusheng %A Lee,Seungjoon %A Miller,R.R. %A Bhattacharjee, Bobby %K channel access throttling %K overlapping BSS management %K wireless LAN %K radio resource management %K co-channel cells %K wireless medium access %K channel access parameters %K high-priority access %K proportional partitioning %K channel capacity %K channel utilization efficiency %K open-source device driver %X Multiple co-channel WLAN BSSes (i.e., WLAN cells) overlapping in coverage are generally considered undesirable because members of the OBSSes compete for channel access, which typically increases the contention level of wireless medium access and reduces overall system performance.
In this paper, we propose to use channel access throttling (CAT) for managing Wireless LAN radio resources for overlapping BSSes (OBSSes). CAT provides an access point (AP) of each BSS with a mechanism to control channel access parameters of its member stations on the fly. By coordinating the CAT operations of the OBSS APs, we can enable privileged channel access to an individual BSS at a particular time, for example, by assigning high priority access parameters to member stations associated with the BSS. By controlling how much each BSS may be given the privileged channel access, we can also achieve a proportional partitioning of channel capacity among OBSSes. We present evaluation results obtained from both simulations and experiments using a testbed built with commercial off-the-shelf (COTS) WLAN hardware and an open-source device driver. Our results show that with CAT, not only can we proportionally partition channel capacity among the OBSSes, but also improve channel utilization efficiency and increase overall capacity. %B Communications, 2009. ICC '09. IEEE International Conference on %P 1 - 6 %8 2009/06// %G eng %R 10.1109/ICC.2009.5198815 %0 Journal Article %J Jisuanji Gongcheng/ Computer Engineering %D 2009 %T Classic triangular mesh subdivision algorithm %A Zhuolin Jiang %A Li,S.F. %A Jia,X.P. %A Zhu,H.L. %X This paper gives a concise introduction to seven classic triangular mesh subdivision algorithms, classifying and comparing them according to their continuity, advantages, and application status. To improve the visualization of triangular mesh subdivision, interactive display control is implemented using MFC and OpenGL, with a state-machine model based on a functional class serving as the software operating pattern. On this basis, a prototype implementation of the Loop algorithm is presented, together with improvements that address issues uncovered during the prototype implementation.
%B Jisuanji Gongcheng/ Computer Engineering %V 35 %P 7 - 10 %8 2009/// %G eng %N 6 %0 Conference Paper %B IEEE Conference on Computer Vision and Pattern Recognition, 2009. CVPR 2009 %D 2009 %T Combining powerful local and global statistics for texture description %A Yong Xu %A Si-Bin Huang %A Hui Ji %A Fermüller, Cornelia %K Computer science %K discretized measurements %K fractal geometry %K Fractals %K geometric transformations %K global statistics %K Histograms %K illumination transformations %K image classification %K image resolution %K Image texture %K lighting %K local measurements SIFT features %K local statistics %K MATHEMATICS %K multifractal spectrum %K multiscale representation %K Power engineering and energy %K Power engineering computing %K Robustness %K Solids %K Statistics %K texture description %K UMD high-resolution dataset %K wavelet frame system %K Wavelet transforms %X A texture descriptor is proposed, which combines local highly discriminative features with the global statistics of fractal geometry to achieve high descriptive power, but also invariance to geometric and illumination transformations. As local measurements SIFT features are estimated densely at multiple window sizes and discretized. On each of the discretized measurements the fractal dimension is computed to obtain the so-called multifractal spectrum, which is invariant to geometric transformations and illumination changes. Finally to achieve robustness to scale changes, a multi-scale representation of the multifractal spectrum is developed using a framelet system, that is, a redundant tight wavelet frame system. Experiments on classification demonstrate that the descriptor outperforms existing methods on the UIUC as well as the UMD high-resolution dataset. %B IEEE Conference on Computer Vision and Pattern Recognition, 2009. 
CVPR 2009 %I IEEE %P 573 - 580 %8 2009/06/20/25 %@ 978-1-4244-3992-8 %G eng %R 10.1109/CVPR.2009.5206741 %0 Journal Article %J Proceedings of the National Academy of Sciences %D 2009 %T Comparative genomics reveals mechanism for short-term and long-term clonal transitions in pandemic Vibrio cholerae %A Chun,J. %A Grim,C. J. %A Hasan,N. A. %A Lee,J. H. %A Choi,S. Y. %A Haley,B. J. %A Taviani,E. %A Jeon,Y. S. %A Kim,D.W. %A Lee,J. H. %A Rita R Colwell %X Vibrio cholerae, the causative agent of cholera, is a bacterium autochthonous to the aquatic environment, and a serious public health threat. V. cholerae serogroup O1 is responsible for the previous two cholera pandemics, in which classical and El Tor biotypes were dominant in the sixth and the current seventh pandemics, respectively. Cholera researchers continually face newly emerging and reemerging pathogenic clones carrying diverse combinations of phenotypic and genotypic properties, which has significantly hampered control of the disease. To elucidate evolutionary mechanisms governing genetic diversity of pandemic V. cholerae, we compared the genome sequences of 23 V. cholerae strains isolated from a variety of sources over the past 98 years. The genome-based phylogeny revealed 12 distinct V. cholerae lineages, of which one comprises both O1 classical and El Tor biotypes. All seventh pandemic clones share nearly identical gene content. Using analogy to influenza virology, we define the transition from sixth to seventh pandemic strains as a “shift” between pathogenic clones belonging to the same O1 serogroup, but from significantly different phyletic lineages. In contrast, transition among clones during the present pandemic period is characterized as a “drift” between clones, differentiated mainly by varying composition of laterally transferred genomic islands, resulting in emergence of variants, exemplified by V. cholerae O139 and V. cholerae O1 El Tor hybrid clones.
Based on the comparative genomics, it is concluded that V. cholerae undergoes extensive genetic recombination via lateral gene transfer, and, therefore, genome assortment, not serogroup, should be used to define pathogenic V. cholerae clones. %B Proceedings of the National Academy of Sciences %V 106 %P 15442 - 15447 %8 2009/// %@ 0027-8424, 1091-6490 %G eng %U http://www.pnas.org/content/106/36/15442 %N 36 %R 10.1073/pnas.0907787106 %0 Conference Paper %D 2009 %T Controlling data in the cloud: outsourcing computation without outsourcing control %A Chow, Richard %A Golle, Philippe %A Jakobsson, Markus %A Elaine Shi %A Staddon, Jessica %A Masuoka, Ryusuke %A Molina,Jesus %K Cloud computing %K privacy %K Security %X Cloud computing is clearly one of today's most enticing technology areas due, at least in part, to its cost-efficiency and flexibility. However, despite the surge in activity and interest, there are significant, persistent concerns about cloud computing that are impeding momentum and will eventually compromise the vision of cloud computing as a new IT procurement model. In this paper, we characterize the problems and their impact on adoption. In addition, and equally importantly, we describe how the combination of existing research thrusts has the potential to alleviate many of the concerns impeding adoption. In particular, we argue that with continued research advances in trusted computing and computation-supporting encryption, life in the cloud can be advantageous from a business intelligence standpoint over the isolated alternative that is more common today. %S CCSW '09 %I ACM %P 85 - 90 %8 2009 %@ 978-1-60558-784-4 %G eng %U http://doi.acm.org/10.1145/1655008.1655020 %0 Conference Paper %B INFOCOM 2009, IEEE %D 2009 %T CPM: Adaptive Video-on-Demand with Cooperative Peer Assists and Multicast %A Gopalakrishnan,V. %A Bhattacharjee, Bobby %A Ramakrishnan,K.K. %A Jana,R. %A Srivastava,D.
%K CPM %K cooperative peer assists %K multicast %K peer-to-peer communication %K peer-to-peer computing %K video on demand %K synthetic parameters %X We present CPM, a unified approach that exploits server multicast, assisted by peer downloads, to provide efficient video-on-demand (VoD) in a service provider environment. We describe our architecture and show how CPM is designed to dynamically adapt to a wide range of situations including highly different peer-upload bandwidths, content popularity, user request arrival patterns, video library size, and subscriber population. We demonstrate the effectiveness of CPM using simulations (based on an actual implementation codebase) across the range of situations described above and show that CPM does significantly better than traditional unicast, different forms of multicast, as well as peer-to-peer schemes. Along with synthetic parameters, we augment our experiments using data from a deployed VoD service to evaluate the performance of CPM. %B INFOCOM 2009, IEEE %P 91 - 99 %8 2009/04// %G eng %R 10.1109/INFCOM.2009.5061910 %0 Conference Paper %B Proceedings of the 27th international conference extended abstracts on Human factors in computing systems %D 2009 %T Creativity challenges and opportunities in social computing %A Fischer,Gerhard %A Jennings,Pamela %A Maher,Mary Lou %A Resnick,Mitchel %A Shneiderman, Ben %K creativity %K social computing %X There is a convergence in recent theories of creativity that go beyond characteristics and cognitive processes of individuals to recognize the importance of the social construction of creativity. In parallel, there has been a rise in social computing supporting the collaborative construction of knowledge. The panel will discuss the challenges and opportunities from the confluence of these two developments by bringing together the contrasting and controversial perspectives of the individual panel members.
It will synthesize an analytic framework from different perspectives to understand these new developments, to promote rigorous research methods, and to identify the unique challenges in developing evaluation and assessment methods for creativity research. %B Proceedings of the 27th international conference extended abstracts on Human factors in computing systems %S CHI EA '09 %I ACM %C New York, NY, USA %P 3283 - 3286 %8 2009/// %@ 978-1-60558-247-4 %G eng %U http://doi.acm.org/10.1145/1520340.1520470 %R 10.1145/1520340.1520470 %0 Journal Article %J Environmental Microbiology Reports %D 2009 %T Detection of toxigenic Vibrio cholerae O1 in freshwater lakes of the former Soviet Republic of Georgia %A Grim,Christopher J. %A Jaiani,Ekaterina %A Whitehouse,Chris A. %A Janelidze,Nino %A Kokashvili,Tamuna %A Tediashvili,Marina %A Rita R Colwell %A Huq,Anwar %X Three freshwater lakes, Lisi Lake, Kumisi Lake and Tbilisi Sea, near Tbilisi, Georgia, were studied from January 2006 to December 2007 to determine the presence of Vibrio cholerae employing both the bacteriological culture method and direct detection methods, namely PCR and direct fluorescent antibody (DFA). For PCR, DNA extracted from water samples was tested for the presence of V. cholerae and genes coding for selected virulence factors. Vibrio cholerae non-O1/non-O139 was routinely isolated by culture from all three lakes, whereas V. cholerae O1 and O139 were not. Water samples collected during the summer months from Lisi Lake and Kumisi Lake were positive for both V. cholerae and V. cholerae ctxA, tcpA, zot, ompU and toxR by PCR. Water samples collected during the same period from both Lisi and Kumisi Lake were also positive for V. cholerae serogroup O1 by DFA. All of the samples were negative for V. cholerae serotype O139. The results of this study provide evidence for an environmental presence of toxigenic V.
cholerae O1, which may represent a potential source of illness as these lakes serve as recreational water in Tbilisi, Georgia. %B Environmental Microbiology Reports %V 2 %P 2 - 6 %8 2009/09/03/ %@ 1758-2229 %G eng %U http://onlinelibrary.wiley.com/doi/10.1111/j.1758-2229.2009.00073.x/abstract?userIsAuthenticated=false&deniedAccessCustomisedMessage= %N 1 %R 10.1111/j.1758-2229.2009.00073.x %0 Journal Article %J Passive and Active Network Measurement %D 2009 %T Dynamics of online scam hosting infrastructure %A Konte,M. %A Feamster, Nick %A Jung,J. %X This paper studies the dynamics of scam hosting infrastructure, with an emphasis on the role of fast-flux service networks. By monitoring changes in DNS records of over 350 distinct spam-advertised domains collected from URLs in 115,000 spam emails received at a large spam sinkhole, we measure the rates and locations of remapping DNS records, and the rates at which “fresh” IP addresses are used. We find that, unlike the short-lived nature of the scams themselves, the infrastructure that hosts these scams has relatively persistent features that may ultimately assist detection. %B Passive and Active Network Measurement %P 219 - 228 %8 2009/// %G eng %R 10.1007/978-3-642-00975-4_22 %0 Journal Article %J Jisuanji Gongcheng/ Computer Engineering %D 2009 %T Encryption Algorithm Based on Circle Property %A Ge,L.N. %A He,Z.H. %A Zhuolin Jiang %B Jisuanji Gongcheng/ Computer Engineering %V 35 %P 180 - 182 %8 2009/// %G eng %N 4 %0 Journal Article %J Jisuanji Gongcheng/ Computer Engineering %D 2009 %T Fast and Accurate Method of Uniform-colored Video Text Extraction %A Shen,R.D. %A Li,S.F. 
%A Zhuolin Jiang %X Most video text is edge-rich, uniform in color, and horizontally arranged. Candidate text areas in a video image are therefore located with a fast Deriche-edge-based algorithm, and accurate binary text images are extracted from these areas with a color-based algorithm. Experimental results show that the method is effective for extracting text from video frames with complex backgrounds, and offers higher processing speed and better extraction results than a method based on color alone. %B Jisuanji Gongcheng/ Computer Engineering %V 35 %8 2009/// %G eng %N 9 %0 Journal Article %J Nature %D 2009 %T Genome assortment, not serogroup, defines Vibrio cholerae pandemic strains %A Brettin,Thomas S[Los Alamos National Laboratory] %A Bruce,David C[Los Alamos National Laboratory] %A Challacombe,Jean F[Los Alamos National Laboratory] %A Detter,John C[Los Alamos National Laboratory] %A Han,Cliff S[Los Alamos National Laboratory] %A Munik,A. C[Los Alamos National Laboratory] %A Chertkov,Olga[Los Alamos National Laboratory] %A Meincke,Linda[Los Alamos National Laboratory] %A Saunders,Elizabeth[Los Alamos National Laboratory] %A Choi,Seon Y[SEOUL NATL UNIV] %A Haley,Bradd J[U MARYLAND] %A Taviani,Elisa[U MARYLAND] %A Jeon,Yoon-Seong[INTL VACCINE INST SEOUL] %A Kim,Dong Wook[INTL VACCINE INST SEOUL] %A Lee,Jae-Hak[SEOUL NATL UNIV] %A Walters,Ronald A[PNNL] %A Huq,Anwar[NATL INST CHOLERIC ENTERIC DIS] %A Rita R Colwell %K 59; CHOLERA; GENES; GENETICS; GENOTYPE; ISLANDS; ORIGIN; PHENOTYPE; PUBLIC HEALTH; RECOMBINATION; STRAINS; TOXINS %X Vibrio cholerae, the causative agent of cholera, is a bacterium autochthonous to the aquatic environment, and a serious public health threat. V. cholerae serogroup O1 is responsible for the previous two cholera pandemics, in which classical and El Tor biotypes were dominant in the 6th and the current 7th pandemics, respectively.
Cholera researchers continually face newly emerging and re-emerging pathogenic clones carrying combinations of new serogroups as well as of phenotypic and genotypic properties. These genotype and phenotype changes have hampered control of the disease. Here we compare the complete genome sequences of 23 strains of V. cholerae isolated from a variety of sources and geographical locations over the past 98 years in an effort to elucidate the evolutionary mechanisms governing genetic diversity and genesis of new pathogenic clones. The genome-based phylogeny revealed 12 distinct V. cholerae phyletic lineages, of which one, designated the V. cholerae core genome (CG), comprises both O1 classical and El Tor biotypes. All 7th pandemic clones share nearly identical gene content, i.e., the same genome backbone. The transition from 6th to 7th pandemic strains is defined here as a 'shift' between pathogenic clones belonging to the same O1 serogroup, but from significantly different phyletic lineages within the CG clade. In contrast, transition among clones during the present 7th pandemic period can be characterized as a 'drift' between clones, differentiated mainly by varying composition of laterally transferred genomic islands, resulting in emergence of variants, exemplified by V. cholerae serogroup O139 and V. cholerae O1 El Tor hybrid clones that produce cholera toxin of classical biotype. Based on the comprehensive comparative genomics presented in this study, it is concluded that V. cholerae undergoes extensive genetic recombination via lateral gene transfer, and, therefore, genome assortment, not serogroup, should be used to define pathogenic V. cholerae clones.
%B Nature %8 2009/// %G eng %U http://www.osti.gov/energycitations/servlets/purl/962365-icnke9/ %0 Journal Article %J Proceedings of DigCCurr2009 Digital Curation: Practice, Promise and Prospects %D 2009 %T An Implementation of the Audit Control Environment (ACE) to Support the Long Term Integrity of Digital Archives %A Smorul,M. %A Song,S. %A JaJa, Joseph F. %X In this paper, we describe the implementation of the Audit Control Environment (ACE) [1] system that provides a scalable, auditable platform for ensuring the integrity of digital archival holdings. The core of ACE is a small integrity token issued for each monitored item, which is part of a larger, externally auditable cryptographic system. Two components that describe this system, an Audit Manager and Integrity Management Service, have been developed and released. The Audit Manager component is designed to be installed locally at the archive, while the Integrity Management Service is a centralized, publicly available service. ACE allows for the monitoring of collections on a variety of disk and grid based storage systems. Each collection in ACE is subject to monitoring based on a customizable policy. The released ACE Version 1.0 has been tested extensively on a wide variety of collections in both centralized and distributed environments. %B Proceedings of DigCCurr2009 Digital Curation: Practice, Promise and Prospects %P 164 - 164 %8 2009/// %G eng %0 Conference Paper %D 2009 %T Implicit authentication for mobile devices %A Jakobsson, Markus %A Elaine Shi %A Golle, Philippe %A Chow, Richard %X We introduce the notion of implicit authentication - the ability to authenticate mobile users based on actions they would carry out anyway. We develop a model for how to perform implicit authentication, and describe experiments aimed at assessing the benefits of our techniques. Our preliminary findings suggest that this is a meaningful approach, whether used to increase usability or increase security.
%S HotSec'09 %I USENIX Association %P 9 - 9 %8 2009 %G eng %U http://dl.acm.org/citation.cfm?id=1855628.1855637 %0 Book Section %B Human-Computer Interaction %D 2009 %T Information Visualization %A Card,Stuart %E Sears,Andrew %E Jacko,Julie %B Human-Computer Interaction %I CRC Press %V 20093960 %P 181 - 215 %8 2009/03/02/ %@ 978-1-4200-8885-4, 978-1-4200-8886-1 %G eng %U http://www.crcnetbase.com/doi/abs/10.1201/9781420088861.ch10 %0 Conference Paper %B Proceedings of the 27th international conference extended abstracts on Human factors in computing systems %D 2009 %T Interacting with eHealth: towards grand challenges for HCI %A André,P. %A White,R. %A Tan,D. %A Berners-Lee,T. %A Consolvo,S. %A Jacobs,R. %A Kohane,I. %A Le Dantec,C.A. %A Mamykina,L. %A Marsden,G. %X While health records are increasingly stored electronically, we, as citizens, have little access to this data about ourselves. We are not used to thinking of these official records either as ours or as useful to us. We increasingly turn to the Web, however, to query any ache, pain or health goal we may have before consulting with health care professionals. Likewise, for proactive health care such as nutrition or fitness, or to find fellow-sufferers for post-diagnosis support, we turn to online resources. There is a potential disconnect between the points at which professional and lay eHealth data and resources intersect for preventative or proactive health care. Such gaps in information sharing may have direct impact on practices we decide to take up, the care we seek, or the support professionals offer. In this panel, we consider several places within proactive, preventative health care where HCI has a role in enhancing health knowledge discovery and health support interaction. Our goal is to demonstrate that now is the time for eHealth to come to the forefront of the HCI research agenda.
%B Proceedings of the 27th international conference extended abstracts on Human factors in computing systems %P 3309 - 3312 %8 2009/// %G eng %0 Journal Article %J Concurrency and Computation: Practice and Experience %D 2009 %T Interactive direct volume rendering on desktop multicore processors %A Wang,Qin %A JaJa, Joseph F. %K direct volume rendering %K multicore processors %K multithreaded algorithms %K Parallel algorithms %K volume visualization %X We present a new multithreaded implementation for the computationally demanding direct volume rendering (DVR) of volumetric data sets on desktop multicore processors using ray casting. The new implementation achieves interactive rendering of very large volumes, even on high resolution screens. Our implementation is based on a new algorithm that combines an object-order traversal of the volumetric data followed by a focused ray casting. Using a very compact data structure, our method starts with a quick association of data subcubes with fine-grain screen tiles appearing along the viewing direction in front-to-back order. The next stage uses very limited ray casting on the generated sets of subcubes while skipping empty or transparent space and applying early ray termination in an effective way. Our multithreaded implementation makes use of new dynamic techniques to ensure effective memory management and load balancing. Our software enables a user to interactively explore large data sets through DVR while arbitrarily specifying a 2D transfer function. We test our system on a wide variety of well-known volumetric data sets on a two-processor Clovertown platform, each consisting of a Quad-Core 1.86 GHz Intel Xeon Processor. Our experimental tests demonstrate DVR at interactive rates for the largest data sets that can fit in the main memory on our platform. These tests also indicate a high degree of scalability, excellent load balancing, and efficient memory management across the data sets used. 
Copyright © 2009 John Wiley & Sons, Ltd. %B Concurrency and Computation: Practice and Experience %V 21 %P 2199 - 2212 %8 2009/09/10/ %@ 1532-0634 %G eng %U http://onlinelibrary.wiley.com/doi/10.1002/cpe.1485/abstract?userIsAuthenticated=false&deniedAccessCustomisedMessage= %N 17 %R 10.1002/cpe.1485 %0 Conference Paper %B SIGCHI '09 %D 2009 %T It's Not Easy Being Green: Understanding Home Computer Power Management %A Marshini Chetty %A Brush, A.J. Bernheim %A Meyers, Brian R. %A Johns, Paul %K home computer use %K power management %K Sustainability %X Although domestic computer use is increasing, most efforts to reduce energy use through improved power management have focused on computers in the workplace. We studied 20 households to understand how people use power management strategies on their home computers. We saw that computers in the home, particularly desktop computers, are left on much more than they are actively used, suggesting opportunities for economic and energy savings. However, for most of our participants, the economic incentives were too minor to motivate them to turn off devices when not in use, especially given other frustrations such as long boot-up times. We suggest research directions for home computer power management that could help users be more green without having to dramatically change their home computing habits. %B SIGCHI '09 %S CHI '09 %I ACM %P 1033 - 1042 %8 2009/// %@ 978-1-60558-246-7 %G eng %U http://doi.acm.org/10.1145/1518701.1518860 %0 Journal Article %J Journal of education for library and information science %D 2009 %T The Maryland Modular Method: An Approach to Doctoral Education in Information Studies %A Druin, Allison %A Jaeger,P. T %A Golbeck,J. %A Fleischmann,K.R. %A Jimmy Lin %A Qu,Y. %A Wang,P. %A Xie,B.
%B Journal of education for library and information science %V 50 %P 293 - 301 %8 2009/// %G eng %N 4 %0 Journal Article %J Software Engineering, IEEE Transactions on %D 2009 %T Maturing Software Engineering Knowledge through Classifications: A Case Study on Unit Testing Techniques %A Vegas,S. %A Juristo,N. %A Basili, Victor R. %K software engineering %K software engineering knowledge %K classification %K unit testing techniques %K program testing %K software testing %K technique characteristic %K project characteristic %K matching %X Classification makes a significant contribution to advancing knowledge in both science and engineering. It is a way of investigating the relationships between the objects to be classified and identifies gaps in knowledge. Classification in engineering also has a practical application; it supports object selection. Classifications can help mature software engineering knowledge, as they constitute an organized structure of knowledge items. To date, there have been few attempts at classification in software engineering. In this research, we examine how useful classifications in software engineering are for advancing knowledge by trying to classify testing techniques. The paper presents a preliminary classification of a set of unit testing techniques. To obtain this classification, we enacted a generic process for developing useful software engineering classifications. The proposed classification has been proven useful for maturing knowledge about testing techniques, and therefore SE, as it helps to: 1) provide a systematic description of the techniques, 2) understand testing techniques by studying the relationships among techniques (measured in terms of differences and similarities), 3) identify potentially useful techniques that do not yet exist by analyzing gaps in the classification, and 4) support practitioners in testing technique selection by matching technique characteristics to project characteristics.
%B Software Engineering, IEEE Transactions on %V 35 %P 551 - 565 %8 2009/08// %@ 0098-5589 %G eng %N 4 %R 10.1109/TSE.2009.13 %0 Conference Paper %B Proceedings of the 11th International Conference on Human-Computer Interaction with Mobile Devices and Services %D 2009 %T Mobile Living Labs 09: Methods and Tools for Evaluation in the Wild %A ter Hofte,Henri %A Jensen,Kasper Løvborg %A Nurmi,Petteri %A Jon Froehlich %K field study %K in-situ evaluation %K living labs %K methods %K mobile %K tools %K user experience %X In a Mobile Living Lab, mobile devices are used to evaluate concepts and prototypes in real-life settings. In other words, the lab is brought to the people. This workshop provides a forum for researchers and practitioners to share experiences and issues with methods and tools for Mobile Living Labs. In particular, we seek to bring together people who have applied methods for Mobile Living Labs and people who build tools for those methods. The aim of the workshop is twofold: first, to provide an up-to-date overview of current methods and tools for conducting user studies in Mobile Living Labs, highlighting their individual strengths and weaknesses; second, to uncover challenges that are not adequately addressed by current methods and tools and to come up with ideas and requirements that could fill this gap, thus serving as beacons for further research and development in this area. %B Proceedings of the 11th International Conference on Human-Computer Interaction with Mobile Devices and Services %S MobileHCI '09 %I ACM %C New York, NY, USA %P 107:1 - 107:2 %8 2009/// %@ 978-1-60558-281-8 %G eng %U http://doi.acm.org/10.1145/1613858.1613981 %R 10.1145/1613858.1613981 %0 Conference Paper %B Computer Vision, 2009 IEEE 12th International Conference on %D 2009 %T Recognizing actions by shape-motion prototype trees %A Zhe Lin %A Zhuolin Jiang %A Davis, Larry S. %X A prototype-based approach is introduced for action recognition.
The approach represents an action as a sequence of prototypes for efficient and flexible action matching in long video sequences. During training, first, an action prototype tree is learned in a joint shape and motion space via hierarchical k-means clustering; then a lookup table of prototype-to-prototype distances is generated. During testing, based on a joint likelihood model of the actor location and action prototype, the actor is tracked while a frame-to-prototype correspondence is established by maximizing the joint likelihood, which is efficiently performed by searching the learned prototype tree; then actions are recognized using dynamic prototype sequence matching. Distance matrices used for sequence matching are rapidly obtained by look-up table indexing, which is an order of magnitude faster than brute-force computation of frame-to-frame distances. Our approach enables robust action matching in very challenging situations (such as moving cameras, dynamic backgrounds) and allows automatic alignment of action sequences. Experimental results demonstrate that our approach achieves recognition rates of 91.07% on a large gesture dataset (with dynamic backgrounds), 100% on the Weizmann action dataset and 95.77% on the KTH action dataset. 
%B Computer Vision, 2009 IEEE 12th International Conference on %P 444 - 451 %8 2009/10/29/2 %G eng %R 10.1109/ICCV.2009.5459184 %0 Journal Article %J IEEE Transactions on Pattern Analysis and Machine Intelligence %D 2009 %T Robust Wavelet-Based Super-Resolution Reconstruction: Theory and Algorithm %A Hui Ji %A Fermüller, Cornelia %K batch algorithm %K better-conditioned iterative back projection scheme %K Enhancement %K homography estimation %K image denoising %K image denoising scheme %K image frame alignment %K Image processing software %K Image reconstruction %K image resolution %K image sequence %K Image sequences %K iterative methods %K regularization criteria %K robust wavelet-based iterative super-resolution reconstruction %K surface normal vector %K video formation analysis %K video sequence %K video signal processing %K Wavelet transforms %X We present an analysis and algorithm for the problem of super-resolution imaging, that is the reconstruction of HR (high-resolution) images from a sequence of LR (low-resolution) images. Super-resolution reconstruction entails solutions to two problems. One is the alignment of image frames. The other is the reconstruction of a HR image from multiple aligned LR images. Both are important for the performance of super-resolution imaging. Image alignment is addressed with a new batch algorithm, which simultaneously estimates the homographies between multiple image frames by enforcing the surface normal vectors to be the same. This approach can handle longer video sequences quite well. Reconstruction is addressed with a wavelet-based iterative reconstruction algorithm with an efficient de-noising scheme. The technique is based on a new analysis of video formation. At a high level our method could be described as a better-conditioned iterative back projection scheme with an efficient regularization criteria in each iteration step. 
Experiments with both simulated and real data demonstrate that our approach has better performance than existing super-resolution methods. It can remove even large amounts of mixed noise without creating artifacts. %B IEEE Transactions on Pattern Analysis and Machine Intelligence %V 31 %P 649 - 660 %8 2009/04// %@ 0162-8828 %G eng %N 4 %R 10.1109/TPAMI.2008.103 %0 Conference Paper %B Proceedings of IS&T Archiving 2009 %D 2009 %T Search and Access Strategies for Web Archives %A Song,Sangchul %A JaJa, Joseph F. %B Proceedings of IS&T Archiving 2009 %8 2009/// %G eng %0 Book Section %B Semantic Mining Technologies for Multimedia Databases %D 2009 %T Shape Matching for Foliage Database Retrieval %A Ling,H. %A Jacobs, David W. %B Semantic Mining Technologies for Multimedia Databases %P 100 - 100 %8 2009/// %G eng %0 Journal Article %J IEEE Transactions on Pattern Analysis and Machine Intelligence %D 2009 %T Signature Detection and Matching for Document Image Retrieval %A Zhu,Guangyu %A Yefeng Zheng %A David Doermann %A Jaeger,Stefan %X As one of the most pervasive methods of individual identification and document authentication, signatures present convincing evidence and provide an important form of indexing for effective document image processing and retrieval in a broad range of applications. However, detection and segmentation of free-form objects such as signatures from cluttered background is currently an open document analysis problem. In this paper, we focus on two fundamental problems in signature-based document image retrieval. First, we propose a novel multi-scale approach to jointly detecting and segmenting signatures from document images.
Rather than focusing on local features that typically have large variations, our approach captures the structural saliency using a signature production model and computes the dynamic curvature of 2-D contour fragments over multiple scales. This detection framework is general and computationally tractable. Second, we treat the problem of signature retrieval in the unconstrained setting of translation, scale, and rotation invariant non-rigid shape matching. We propose two novel measures of shape dissimilarity based on anisotropic scaling and registration residual error, and present a supervised learning framework for combining complementary shape information from different dissimilarity metrics using LDA. We quantitatively study state-of-the-art shape representations, shape matching algorithms, measures of dissimilarity, and the use of multiple instances as query in document image retrieval. We further demonstrate our matching techniques in off-line signature verification. Extensive experiments using large real world collections of English and Arabic machine printed and handwritten documents demonstrate the excellent performance of our approaches. %B IEEE Transactions on Pattern Analysis and Machine Intelligence %V 31 %P 2015 - 2031 %8 2009/11// %G eng %N 11 %0 Conference Paper %B Computer Vision, 2009 IEEE 12th International Conference on %D 2009 %T Sparse representation of cast shadows via ℓ1-regularized least squares %A Mei,Xue %A Ling,Haibin %A Jacobs, David W. %K ℓ1-regularized least squares formulation %K Lambertian scene %K cast shadows %K compressive sensing %K sparse representation %K image representation %K approximations %K lighting %X Scenes with cast shadows can produce complex sets of images. These images cannot be well approximated by low-dimensional linear subspaces.
However, in this paper we show that the set of images produced by a Lambertian scene with cast shadows can be efficiently represented by a sparse set of images generated by directional light sources. We first model an image with cast shadows as composed of a diffusive part (without cast shadows) and a residual part that captures cast shadows. Then, we express the problem in an ℓ1-regularized least squares formulation, with nonnegativity constraints. This sparse representation enjoys an effective and fast solution, thanks to recent advances in compressive sensing. In experiments on both synthetic and real data, our approach performs favorably in comparison to several previously proposed methods. %B Computer Vision, 2009 IEEE 12th International Conference on %P 583 - 590 %8 2009/// %G eng %R 10.1109/ICCV.2009.5459185 %0 Journal Article %J Scientific Programming %D 2009 %T Streaming model based volume ray casting implementation for Cell Broadband Engine %A Kim,Jusub %A JaJa, Joseph F. %X Interactive high quality volume rendering is becoming increasingly more important as the amount of more complex volumetric data steadily grows. While a number of volumetric rendering techniques have been widely used, ray casting has been recognized as an effective approach for generating high quality visualization. However, for most users, the use of ray casting has been limited to datasets that are very small because of its high demands on computational power and memory bandwidth. The recent introduction of the Cell Broadband Engine (Cell B.E.) processor, which consists of 9 heterogeneous cores designed to handle extremely demanding computations with large streams of data, provides an opportunity to put ray casting into practical use. In this paper, we introduce an efficient parallel implementation of volume ray casting on the Cell B.E. The implementation is designed to take full advantage of the computational power and memory bandwidth of the Cell B.E.
using an intricate orchestration of the ray casting computation on the available heterogeneous resources. Specifically, we introduce streaming model based schemes and techniques to efficiently implement acceleration techniques for ray casting on Cell B.E. In addition to ensuring effective SIMD utilization, our method provides two key benefits: there is no cost for empty space skipping and there is no memory bottleneck on moving volumetric data for processing. Our experimental results show that we can interactively render practical datasets on a single Cell B.E. processor. %B Scientific Programming %V 17 %P 173 - 184 %8 2009/01/01/ %G eng %U http://dx.doi.org/10.3233/SPR-2009-0267 %N 1 %R 10.3233/SPR-2009-0267 %0 Journal Article %J International Journal on Digital Libraries %D 2009 %T Techniques to audit and certify the long-term integrity of digital archives %A Song,Sangchul %A JaJa, Joseph F. %K Computer science %X A fundamental requirement for a digital archive is to set up mechanisms that will ensure the authenticity of its holdings in the long term. In this article, we develop a new methodology to address the long-term integrity of digital archives using rigorous cryptographic techniques. Our approach involves the generation of a small-size integrity token for each object, some cryptographic summary information, and a framework that enables cost-effective regular and periodic auditing of the archive’s holdings depending on the policy set by the archive. Our scheme is very general, architecture and platform independent, and can detect with high probability any alteration to an object, including malicious alterations introduced by the archive or by an external intruder. The scheme can be shown to be mathematically correct as long as a small amount of cryptographic information, in the order of 100 KB/year, can be kept intact. 
Using this approach, a prototype system called ACE (Auditing Control Environment) has been built and tested in an operational large scale archiving environment. %B International Journal on Digital Libraries %V 10 %P 123 - 131 %8 2009/// %@ 1432-5012 %G eng %U http://www.springerlink.com/content/y52815g805h96334/abstract/ %N 2 %R 10.1007/s00799-009-0056-2 %0 Journal Article %J Appl Environ Microbiol %D 2009 %T Three genomes from the phylum Acidobacteria provide insight into the lifestyles of these microorganisms in soils. %A Ward, Naomi L %A Challacombe, Jean F %A Janssen, Peter H %A Henrissat, Bernard %A Coutinho, Pedro M %A Wu, Martin %A Xie, Gary %A Haft, Daniel H %A Sait, Michelle %A Badger, Jonathan %A Barabote, Ravi D %A Bradley, Brent %A Brettin, Thomas S %A Brinkac, Lauren M %A Bruce, David %A Creasy, Todd %A Daugherty, Sean C %A Davidsen, Tanja M %A DeBoy, Robert T %A Detter, J Chris %A Dodson, Robert J %A Durkin, A Scott %A Ganapathy, Anuradha %A Gwinn-Giglio, Michelle %A Han, Cliff S %A Khouri, Hoda %A Kiss, Hajnalka %A Kothari, Sagar P %A Madupu, Ramana %A Nelson, Karen E %A Nelson, William C %A Paulsen, Ian %A Penn, Kevin %A Ren, Qinghu %A Rosovitz, M J %A Jeremy D Selengut %A Shrivastava, Susmita %A Sullivan, Steven A %A Tapia, Roxanne %A Thompson, L Sue %A Watkins, Kisha L %A Yang, Qi %A Yu, Chunhui %A Zafar, Nikhat %A Zhou, Liwei %A Kuske, Cheryl R %K Anti-Bacterial Agents %K bacteria %K Biological Transport %K Carbohydrate Metabolism %K Cyanobacteria %K DNA, Bacterial %K Fungi %K Genome, Bacterial %K Macrolides %K Molecular Sequence Data %K Nitrogen %K Phylogeny %K Proteobacteria %K Sequence Analysis, DNA %K sequence homology %K Soil Microbiology %X

The complete genomes of three strains from the phylum Acidobacteria were compared. Phylogenetic analysis placed them as a unique phylum. They share genomic traits with members of the Proteobacteria, the Cyanobacteria, and the Fungi. The three strains appear to be versatile heterotrophs. Genomic and culture traits indicate the use of carbon sources that span simple sugars to more complex substrates such as hemicellulose, cellulose, and chitin. The genomes encode low-specificity major facilitator superfamily transporters and high-affinity ABC transporters for sugars, suggesting that they are best suited to low-nutrient conditions. They appear capable of nitrate and nitrite reduction but not N(2) fixation or denitrification. The genomes contained numerous genes that encode siderophore receptors, but no evidence of siderophore production was found, suggesting that they may obtain iron via interaction with other microorganisms. The presence of cellulose synthesis genes and a large class of novel high-molecular-weight excreted proteins suggests potential traits for desiccation resistance, biofilm formation, and/or contribution to soil structure. Polyketide synthase and macrolide glycosylation genes suggest the production of novel antimicrobial compounds. Genes that encode a variety of novel proteins were also identified. The abundance of acidobacteria in soils worldwide and the breadth of potential carbon use by the sequenced strains suggest significant and previously unrecognized contributions to the terrestrial carbon cycle. Combining our genomic evidence with available culture traits, we postulate that cells of these isolates are long-lived, divide slowly, exhibit slow metabolic rates under low-nutrient conditions, and are well equipped to tolerate fluctuations in soil hydration.

%B Appl Environ Microbiol %V 75 %P 2046-56 %8 2009 Apr %G eng %N 7 %R 10.1128/AEM.02294-08 %0 Conference Paper %B Indo-US Workshop on International Trends in Digital Preservation %D 2009 %T Tools and Services for Long-Term Preservation of Digital Archives %A JaJa, Joseph F. %A Smorul,M. %A Song,S. %X We have been working on a technology model to support the preservation and reliable access of long term digital archives. The model is built around a layered object architecture involving modular, extensible components that can gracefully adapt to the evolving technology, standards, and protocols. This has led to the development of methodologies, tools and services to handle a number of core requirements of long term digital archives. Specifically, we have built flexible tools for implementing general ingestion workflows, active monitoring and auditing of the archive’s collections to ensure their long-term availability and integrity, and storage organization and indexing to optimize access. These tools are platform and architecture independent, and have been tested using a wide variety of collections on heterogeneous computing platforms. In this paper, we will primarily focus on describing the underpinnings of our software called ACE (Auditing Control Environment), and report on its performance on a large scale distributed environment called Chronopolis. Built on top of rigorous cryptographic techniques, ACE provides a policy-driven, scalable environment to monitor and audit the archive’s contents in a cost effective way. In addition, we will briefly introduce some of our recent efforts to deal with storage organization and access of web archives. Long term preservation is a process that must begin before an object is ingested into the archive and must remain active throughout the lifetime of the archive. 
The ACE tool provides a very flexible environment to actively monitor and audit the contents of a digital archive throughout its lifetime, so as to ensure the availability and integrity of the archive’s holdings with extremely high probability. ACE is based on rigorous cryptographic techniques, and enables periodic auditing of the archive’s holdings at the granularity and frequency set by the manager of the archive. The scheme is cost effective and very general, does not depend on the archive’s architecture, and can detect any alterations, including alterations made by a malicious user. ACE can gracefully adapt to format migrations and changes to the archive’s policies. %B Indo-US Workshop on International Trends in Digital Preservation %8 2009/// %G eng %0 Journal Article %J BMC Evol Biol %D 2009 %T Toward reconstructing the evolution of advanced moths and butterflies (Lepidoptera: Ditrysia): an initial molecular study %A Regier,J. C %A Zwick,A. %A Cummings, Michael P. %A Kawahara,A. Y %A Cho,S. %A Weller,S. %A Roe,A. %A Baixeras,J. %A Brown,J. W %A Parr,C. %A Davis,DR %A Epstein,M %A Hallwachs,W %A Hausmann,A %A Janzen,DH %A Kitching,IJ %A Solis,MA %A Yen,S-H %A Bazinet,A. L %A Mitter,C %X BACKGROUND: In the mega-diverse insect order Lepidoptera (butterflies and moths; 165,000 described species), deeper relationships are little understood within the clade Ditrysia, to which 98% of the species belong. To begin addressing this problem, we tested the ability of five protein-coding nuclear genes (6.7 kb total), and character subsets therein, to resolve relationships among 123 species representing 27 (of 33) superfamilies and 55 (of 100) families of Ditrysia under maximum likelihood analysis. RESULTS: Our trees show broad concordance with previous morphological hypotheses of ditrysian phylogeny, although most relationships among superfamilies are weakly supported. 
There are also notable surprises, such as a consistently closer relationship of Pyraloidea than of butterflies to most Macrolepidoptera. Monophyly is significantly rejected by one or more character sets for the putative clades Macrolepidoptera as currently defined (P < 0.05) and Macrolepidoptera excluding Noctuoidea and Bombycoidea sensu lato (P < or = 0.005), and nearly so for the superfamily Drepanoidea as currently defined (P < 0.08). Superfamilies are typically recovered or nearly so, but usually without strong support. Relationships within superfamilies and families, however, are often robustly resolved. We provide some of the first strong molecular evidence on deeper splits within Pyraloidea, Tortricoidea, Geometroidea, Noctuoidea and others. Separate analyses of mostly synonymous versus non-synonymous character sets revealed notable differences (though not strong conflict), including a marked influence of compositional heterogeneity on apparent signal in the third codon position (nt3). As available model partitioning methods cannot correct for this variation, we assessed overall phylogeny resolution through separate examination of trees from each character set. Exploration of "tree space" with GARLI, using grid computing, showed that hundreds of searches are typically needed to find the best-feasible phylogeny estimate for these data. CONCLUSION: Our results (a) corroborate the broad outlines of the current working phylogenetic hypothesis for Ditrysia, (b) demonstrate that some prominent features of that hypothesis, including the position of the butterflies, need revision, and (c) resolve the majority of family and subfamily relationships within superfamilies as thus far sampled. Much further gene and taxon sampling will be needed, however, to strongly resolve individual deeper nodes. %B BMC Evol Biol %V 9 %P 280 - 280 %8 2009/// %G eng %R 10.1186/1471-2148-9-280 %0 Report %D 2009 %T Towards an Internet Connectivity Market %A Feamster, Nick %A Hassan,U. 
%A Sundaresan,S. %A Valancius,V. %A Johari,R. %A Vazirani,V. %X Today’s Internet achieves end-to-end connectivity through bilateral contracts between neighboring networks; unfortunately, this “one size fits all” connectivity results in less efficient paths, unsold capacity and unmet demand, and sometimes catastrophic market failures that result in global disconnectivity. This paper presents the design and evaluation of MINT, a Market for Internet Transit. MINT is a connectivity market and corresponding set of protocols that allows ISPs to offer path segments on an open market. Edge networks bid for end-to-end paths, and a mediator matches bids for paths to collections of path segments that form end-to-end paths. MINT can be deployed using protocols that are present in today’s routers, and it operates in parallel with the existing routing infrastructure and connectivity market. We present MINT’s market model and protocol design; evaluate how MINT improves efficiency, the utility of edge networks, and the profits of transit networks; and show how MINT can operate at Internet scale. %I Georgia Institute of Technology %V GT-CS-09-01 %8 2009/// %G eng %U http://hdl.handle.net/1853/30622 %0 Journal Article %J Computing in Science Engineering %D 2009 %T Using Graphics Processors for High-Performance Computation and Visualization of Plasma Turbulence %A Stantchev,G. %A Juba,D. %A Dorland,W. %A Varshney, Amitabh %K analysis;parallel %K computation;parallel %K computing;numerical %K direct %K engineering %K numerical %K PROCESSING %K processing;plasma %K processors;data %K simulation;graphics %K systems;nuclear %K turbulence %K turbulence; %K units;high-performance %K visualisation;multiprocessing %K visualization;single-program-multiple-data %X Direct numerical simulation (DNS) of turbulence is computationally intensive and typically relies on some form of parallel processing. 
The authors present techniques to map DNS computations to modern graphics processing units (GPUs), which are characterized by very high memory bandwidth and hundreds of SPMD (single-program-multiple-data) processors. %B Computing in Science Engineering %V 11 %P 52 - 59 %8 2009/04//march %@ 1521-9615 %G eng %N 2 %R 10.1109/MCSE.2009.42 %0 Journal Article %J Pattern Analysis and Machine Intelligence, IEEE Transactions on %D 2009 %T Using Stereo Matching with General Epipolar Geometry for 2D Face Recognition across Pose %A Castillo,C. D %A Jacobs, David W. %K 2D face recognition;CMU PIE data set;computer vision;continuous correspondences;general epipolar geometry;lighting variation;pose variation;stereo matching;computer vision;face recognition;geometry;image matching;pose estimation;stereo image processing;Al %K Automated; %K Computer-Assisted;Pattern Recognition %X Face recognition across pose is a problem of fundamental importance in computer vision. We propose to address this problem by using stereo matching to judge the similarity of two 2D images of faces seen from different poses. Stereo matching allows for arbitrary, physically valid, continuous correspondences. We show that the stereo matching cost provides a very robust measure of similarity of faces that is insensitive to pose variations. To enable this, we show that, for conditions common in face recognition, the epipolar geometry of face images can be computed using either four or three feature points. We also provide a straightforward adaptation of a stereo matching algorithm to compute the similarity between faces. The proposed approach has been tested on the CMU PIE data set and demonstrates superior performance compared to existing methods in the presence of pose variation. It also shows robustness to lighting variation. 
%B Pattern Analysis and Machine Intelligence, IEEE Transactions on %V 31 %P 2298 - 2304 %8 2009/12// %@ 0162-8828 %G eng %N 12 %R 10.1109/TPAMI.2009.123 %0 Journal Article %J International Journal of Computer Vision %D 2009 %T Viewpoint Invariant Texture Description Using Fractal Analysis %A Yong Xu %A Hui Ji %A Fermüller, Cornelia %X Image texture provides a rich visual description of the surfaces in the scene. Many texture signatures based on various statistical descriptions and various local measurements have been developed. Existing signatures, in general, are not invariant to 3D geometric transformations, which is a serious limitation for many applications. In this paper we introduce a new texture signature, called the multifractal spectrum (MFS). The MFS is invariant under the bi-Lipschitz map, which includes view-point changes and non-rigid deformations of the texture surface, as well as local affine illumination changes. It provides an efficient framework combining global spatial invariance and local robust measurements. Intuitively, the MFS could be viewed as a “better histogram” with greater robustness to various environmental changes and the advantage of capturing some geometrical distribution information encoded in the texture. Experiments demonstrate that the MFS codes the essential structure of textures with very low dimension, and thus represents a useful tool for texture classification. %B International Journal of Computer Vision %V 83 %P 85 - 100 %8 2009/// %@ 0920-5691 %G eng %U http://dx.doi.org/10.1007/s11263-009-0220-6 %N 1 %0 Conference Paper %B Computer Vision and Pattern Recognition, 2009. CVPR 2009. IEEE Conference on %D 2009 %T Visibility constraints on features of 3D objects %A Basri,R. %A Felzenszwalb,P. F %A Girshick,R. B %A Jacobs, David W. %A Klivans,C. 
J %K 3D %K algorithms;synthetic %K complexity;iterative %K constraints;computational %K data;synthetic %K dataset;NP-hard;image-based %K features;COIL %K framework;iterative %K images;three-dimensional %K methods;object %K object %K recognition; %K recognition;viewing %K sphere;visibility %X To recognize three-dimensional objects it is important to model how their appearances can change due to changes in viewpoint. A key aspect of this involves understanding which object features can be simultaneously visible under different viewpoints. We address this problem in an image-based framework, in which we use a limited number of images of an object taken from unknown viewpoints to determine which subsets of features might be simultaneously visible in other views. This leads to the problem of determining whether a set of images, each containing a set of features, is consistent with a single 3D object. We assume that each feature is visible from a disk of viewpoints on the viewing sphere. In this case we show the problem is NP-hard in general, but can be solved efficiently when all views come from a circle on the viewing sphere. We also give iterative algorithms that can handle noisy data and converge to locally optimal solutions in the general case. Our techniques can also be used to recover viewpoint information from the set of features that are visible in different images. We show that these algorithms perform well both on synthetic data and images from the COIL dataset. %B Computer Vision and Pattern Recognition, 2009. CVPR 2009. 
IEEE Conference on %P 1231 - 1238 %8 2009/06// %G eng %R 10.1109/CVPR.2009.5206726 %0 Journal Article %J Journal on Computing and Cultural Heritage %D 2008 %T Access to recorded interviews %A Jong,Franciska De %A Oard, Douglas %A Heeren,Willemijn %A Ordelman,Roeland %B Journal on Computing and Cultural Heritage %V 1 %P 1 - 27 %8 2008/06/01/ %@ 15564673 %G eng %U http://dl.acm.org/citation.cfm?id=1367083 %R 10.1145/1367080.1367083 %0 Journal Article %J Proc. of the 6th International Conference on Language Resources and Evaluation (LREC’08) %D 2008 %T The ACL Anthology Reference Corpus: A reference dataset for bibliographic research in computational linguistics %A Bird,S. %A Dale,R. %A Dorr, Bonnie J %A Gibson,B. %A Joseph,M.T. %A Kan,M.Y. %A Lee,D. %A Powley,B. %A Radev,D.R. %A Tan,Y.F. %X The ACL Anthology is a digital archive of conference and journal papers in natural language processing and computational linguistics. Its primary purpose is to serve as a reference repository of research results, but we believe that it can also be an object of study and a platform for research in its own right. We describe an enriched and standardized reference corpus derived from the ACL Anthology that can be used for research in scholarly document processing. This corpus, which we call the ACL Anthology Reference Corpus (ACL ARC), brings together the recent activities of a number of research groups around the world. Our goal is to make the corpus widely available, and to encourage other researchers to use it as a standard testbed for experiments in both bibliographic and bibliometric research. %B Proc. of the 6th International Conference on Language Resources and Evaluation (LREC’08) %P 1755 - 1759 %8 2008/// %G eng %0 Journal Article %J Electronic Transactions on Numerical Analysis %D 2008 %T Adaptive Constraint Reduction for Training Support Vector Machines %A Jung,Jin Hyuk %A O'Leary, Dianne P. %A Tits,Andre' L. 
%B Electronic Transactions on Numerical Analysis %V 31 %P 156 - 177 %8 2008/// %G eng %U http://etna.mcs.kent.edu/vol.31.2008/pp156-177.dir/pp156-177.pdf %0 Journal Article %J Lecture Notes in Computer Science %D 2008 %T Advances in multilingual and multimodal information retrieval %A Peters,C. %A Jijkoun,V. %A Mandl,T. %A Müller,H. %A Oard, Douglas %A Peñas,A. %A Santos,D. %B Lecture Notes in Computer Science %V 5152 %8 2008/// %G eng %0 Conference Paper %B Computer Vision and Pattern Recognition, 2008. CVPR 2008. IEEE Conference on %D 2008 %T Approximate earth mover's distance in linear time %A Shirdhonkar,S. %A Jacobs, David W. %K algorithm;normal %K complexity;earth %K complexity;image %K constraint;Kantorovich-Rubinstein %K continuity %K distance;histograms;linear %K distance;weighted %K Euclidean %K Holder %K matching;wavelet %K movers %K problem;computational %K TIME %K transform;computational %K transforms; %K transshipment %K wavelet %X The earth mover's distance (EMD) is an important perceptually meaningful metric for comparing histograms, but it suffers from high (O(N3 logN)) computational complexity. We present a novel linear time algorithm for approximating the EMD for low dimensional histograms using the sum of absolute values of the weighted wavelet coefficients of the difference histogram. EMD computation is a special case of the Kantorovich-Rubinstein transshipment problem, and we exploit the Holder continuity constraint in its dual form to convert it into a simple optimization problem with an explicit solution in the wavelet domain. We prove that the resulting wavelet EMD metric is equivalent to EMD, i.e. the ratio of the two is bounded. We also provide estimates for the bounds. The weighted wavelet transform can be computed in time linear in the number of histogram bins, while the comparison is about as fast as for normal Euclidean distance or chi2 statistic. 
We experimentally show that wavelet EMD is a good approximation to EMD, has similar performance, but requires much less computation. %B Computer Vision and Pattern Recognition, 2008. CVPR 2008. IEEE Conference on %P 1 - 8 %8 2008/06// %G eng %R 10.1109/CVPR.2008.4587662 %0 Report %D 2008 %T Archiving Temporal Web Information: Organization of Web Contents for Fast Access and Compact Storage %A Song,Sangchul %A JaJa, Joseph F. %K Technical Report %X We address the problem of archiving dynamic web contents over significant time spans. Current schemes crawl the web contents at regular time intervals and archive the contents after each crawl regardless of whether or not the contents have changed between consecutive crawls. Our goal is to store newly crawled web contents only when they are different from the previous crawl, while ensuring accurate and quick retrieval of archived contents based on arbitrary temporal queries over the archived time period. In this paper, we develop a scheme that stores unique temporal web contents in containers following the widely used ARC/WARC format, and that provides quick access to the archived contents for arbitrary temporal queries. A novel component of our scheme is the use of a new indexing structure based on the concept of persistent or multi-version data structures. Our scheme can be shown to be asymptotically optimal both in storage utilization and insert/retrieval time. We illustrate the performance of our method on two very different data sets from the Stanford WebBase project, the first reflecting very dynamic web contents and the second relatively static web contents. The experimental results clearly illustrate the substantial storage savings achieved by eliminating duplicate contents detected between consecutive crawls, as well as the speed at which our method can find the archived contents specified through arbitrary temporal queries. 
%I Institute for Advanced Computer Studies, Univ of Maryland, College Park %V UMIACS-TR-2008-08 %8 2008/04/07/ %G eng %U http://drum.lib.umd.edu/handle/1903/7569 %0 Book Section %B Approximation, Randomization and Combinatorial Optimization. Algorithms and Techniques %D 2008 %T Budgeted Allocations in the Full-Information Setting %A Srinivasan, Aravind %E Goel,Ashish %E Jansen,Klaus %E Rolim,José %E Rubinfeld,Ronitt %X We build on the work of Andelman & Mansour and Azar, Birnbaum, Karlin, Mathieu & Thach Nguyen to show that the full-information (i.e., offline) budgeted-allocation problem can be approximated to within 4/3: we conduct a rounding of the natural LP relaxation, for which our algorithm matches the known lower-bound on the integrality gap. %B Approximation, Randomization and Combinatorial Optimization. Algorithms and Techniques %S Lecture Notes in Computer Science %I Springer Berlin / Heidelberg %V 5171 %P 247 - 253 %8 2008/// %@ 978-3-540-85362-6 %G eng %U http://dx.doi.org/10.1007/978-3-540-85363-3_20 %0 Book Section %B Studies in Computational Intelligence: Machine Learning in Document Analysis and Recognition %D 2008 %T Combining Classifiers with Informational Confidence %A Jaeger,Stefan %A Ma,Huanfeng %A David Doermann %E Simone Marinai,Hiromichi Fujisawa %X We propose a new statistical method for learning normalized confidence values in multiple classifier systems. Our main idea is to adjust confidence values so that their nominal values equal the information actually conveyed. In order to do so, we assume that information depends on the actual performance of each confidence value on an evaluation set. As information measure, we use Shannon's well-known logarithmic notion of information. 
With the confidence values matching their informational content, the classifier combination scheme reduces to the simple sum-rule, theoretically justifying this elementary combination scheme. In experimental evaluations for script identification, and both handwritten and printed character recognition, we achieve a consistent improvement on the best single recognition rate. We cherish the hope that our information-theoretical framework helps fill the theoretical gap we still experience in classifier combination, and puts the excellent practical performance of multiple classifier systems on a more solid basis. %B Studies in Computational Intelligence: Machine Learning in Document Analysis and Recognition %I Springer %P 163 - 192 %8 2008/// %G eng %0 Journal Article %J Proceedings of the 5th International ISCRAM Conference %D 2008 %T Community response grid (CRG) for a university campus: Design requirements and implications %A Wu,P.F. %A Qu,Y. %A Preece,J. %A Fleischmann,K. %A Golbeck,J. %A Jaeger,P. %A Shneiderman, Ben %X This paper describes the initial stages of the participatory design of a community-oriented emergency response system for a university campus. After reviewing related work and the current University emergency response system, this paper describes our participatory design process, discusses initial findings from a design requirement survey and from our interactions with different stakeholders, and proposes a Web interface design for a community response grid system. The prototyping of the system demonstrates the possibility of fostering a social-network-based community participation in emergency response, and also identifies concerns raised by potential users and by the professional responder community. %B Proceedings of the 5th International ISCRAM Conference %P 34 - 43 %8 2008/// %G eng %0 Journal Article %J Journal of Bacteriology 
%D 2008 %T The Complete Genome Sequence of Thermococcus Onnurineus NA1 Reveals a Mixed Heterotrophic and Carboxydotrophic Metabolism %A Lee,Hyun Sook %A Kang,Sung Gyun %A Bae,Seung Seob %A Lim,Jae Kyu %A Cho,Yona %A Kim,Yun Jae %A Jeon,Jeong Ho %A Cha,Sun-Shin %A Kwon,Kae Kyoung %A Kim,Hyung-Tae %A Park,Cheol-Joo %A Lee,Hee-Wook %A Kim,Seung Il %A Jongsik Chun %A Rita R Colwell %A Kim,Sang-Jin %A Lee,Jung-Hyun %X Members of the genus Thermococcus, sulfur-reducing hyperthermophilic archaea, are ubiquitously present in various deep-sea hydrothermal vent systems and are considered to play a significant role in the microbial consortia. We present the complete genome sequence and feature analysis of Thermococcus onnurineus NA1 isolated from a deep-sea hydrothermal vent area, which reveal clues to its physiology. Based on results of genomic analysis, T. onnurineus NA1 possesses the metabolic pathways for organotrophic growth on peptides, amino acids, or sugars. More interesting was the discovery that the genome encoded unique proteins that are involved in carboxydotrophy to generate energy by oxidation of CO to CO2, thereby providing a mechanistic basis for growth with CO as a substrate. This lithotrophic feature in combination with carbon fixation via RuBisCO (ribulose 1,5-bisphosphate carboxylase/oxygenase) introduces a new strategy with a complementing energy supply for T. onnurineus NA1 potentially allowing it to cope with nutrient stress in the surroundings of hydrothermal vents, providing the first genomic evidence for carboxydotrophy in Thermococcus. %B Journal of Bacteriology %V 190 %P 7491 - 7499 %8 2008/11/15/ %@ 0021-9193, 1098-5530 %G eng %U http://jb.asm.org/content/190/22/7491 %N 22 %R 10.1128/JB.00746-08 %0 Journal Article %J Understanding Events %D 2008 %T Computational Vision Approaches for Event Modeling %A Chellappa, Rama %A Cuntoor, N.P. %A Joo, S.W. %A V.S. Subrahmanian %A Turaga,P. 
%X Event modeling systems provide a semantic interpretation of sequences of pixels that are captured by a video camera. The design of a practical system has to take into account the following three main factors: low-level preprocessing limitations, computational and storage complexity of the event model, and user interaction. The hidden Markov model (HMM) and its variants have been widely used to model both speech and video signals. Computational efficiency of the Baum-Welch and the Viterbi algorithms has been a leading reason for the popularity of the HMM. Since the objective is to detect events in video sequences that are meaningful to humans, one might want to provide space in the design loop for a user who can specify events of interest. This chapter explores this using semantic approaches that not only use features extracted from raw video streams but also incorporate metadata and ontologies of activities. It presents three approaches for applications such as event recognition: anomaly detection, temporal segmentation, and ontology evaluation. The three approaches discussed are statistical methods based on HMMs, formal grammars, and ontologies. The effectiveness of these approaches is illustrated using video sequences captured both indoors and outdoors: the indoor UCF human action dataset, the TSA airport tarmac surveillance dataset, and the bank monitoring dataset. %B Understanding Events %V 1 %P 473 - 522 %8 2008/// %G eng %N 9 %0 Journal Article %J Plasma Science, IEEE Transactions on %D 2008 %T Confluent Volumetric Visualization of Gyrokinetic Turbulence %A Stantchev,G. %A Juba,D. %A Dorland,W. 
%A Varshney, Amitabh %K flow;plasma %K geometry;plasma %K gyrokinetic %K simulation;nontrivial %K simulation;plasma %K turbulence; %K turbulence;nonlinear %K turbulence;volumetric %K visualisation;plasma %K visualization;flow %X Data from gyrokinetic turbulence codes are often difficult to visualize due to their high dimensionality, the nontrivial geometry of the underlying grids, and the vast range of spatial scales. We present an interactive visualization framework that attempts to address these issues. Images from a nonlinear gyrokinetic simulation are presented. %B Plasma Science, IEEE Transactions on %V 36 %P 1112 - 1113 %8 2008/08// %@ 0093-3813 %G eng %N 4 %R 10.1109/TPS.2008.924509 %0 Report %D 2008 %T CrossTalk: The Journal of Defense Software Engineering. Volume 21, Number 10, October 2008 %A Basili, Victor R. %A Dangle,K. %A Esker,L. %A Marotta,F. %A Rus,I. %A Brosgol,B. M %A Jamin,S. %A Arthur,J. D %A Ravichandar,R. %A Wisnosky,D. E %I DTIC Document %8 2008/// %G eng %0 Journal Article %J Jisuanji Gongcheng/ Computer Engineering %D 2008 %T Design and implementation of blackboard-based system for human detection %A Zhuolin Jiang %A Li,S.F. %A Gao,D.F. %B Jisuanji Gongcheng/ Computer Engineering %V 34 %8 2008/// %G eng %N 2 %0 Journal Article %J Proceedings of the American Society for Information Science and Technology %D 2008 %T Designing community-based emergency communication system: A preliminary study %A Fei Wu,P. %A Qu,Y. %A Fleischmann,K. %A Golbeck,J. %A Jaeger,P. %A Preece,J. %A Shneiderman, Ben %B Proceedings of the American Society for Information Science and Technology %V 45 %P 1 - 3 %8 2008/// %G eng %N 1 %0 Conference Paper %B 8th International Web Archiving Workshop, Aarhus, Denmark. %D 2008 %T Fast browsing of archived Web contents %A Song,S. %A JaJa, Joseph F. %X The web is becoming the preferred medium for communicating and storing information pertaining to almost any human activity. 
However it is an ephemeral medium whose contents are constantly changing, resulting in a permanent loss of part of our cultural and scientific heritage on a regular basis. Archiving important web contents is a very challenging technical problem due to its tremendous scale and complex structure, extremely dynamic nature, and its rich heterogeneous and deep contents. In this paper, we consider the problem of archiving a linked set of web objects into web containers in such a way as to minimize the number of containers accessed during a typical browsing session. We develop a method that makes use of the notion of PageRank and optimized graph partitioning to enable faster browsing of archived web contents. We include simulation results that illustrate the performance of our scheme and compare it to the common scheme currently used to organize web objects into web containers. %B 8th International Web Archiving Workshop, Aarhus, Denmark. %8 2008/// %G eng %0 Report %D 2008 %T Fast flux service networks: Dynamics and roles in hosting online scams %A Konte,M. %A Feamster, Nick %A Jung,J. %X This paper studies the dynamics of fast flux service networks and their role in online scam hosting infrastructures. By monitoring changes in DNS records of over 350 distinct fast flux domains collected from URLs in 115,000 spam emails at a large spam sinkhole, we measure the rate of change of DNS records, accumulation of new distinct IPs in the hosting infrastructure, and location of change both for individual domains and across 21 different scam campaigns. We find that fast flux networks redirect clients at much different rates—and at different locations in the DNS hierarchy—than conventional load-balanced Web sites. We also find that the IP addresses in the fast flux infrastructure itself change rapidly, and that this infrastructure is shared extensively across scam campaigns, and some of these IP addresses are also used to send spam. 
Finally, we compared IP addresses in fast-flux infrastructure and flux domains with various blacklists (i.e., SBL, XBL/PBL, and URIBL) and found that nearly one-third of scam sites were not listed in the URL blacklist at the time they were hosting scams. We also observed many hosting sites and nameservers that were listed in both the SBL and XBL both before and after we observed fast-flux activity; these observations lend insight into both the responsiveness of existing blacklists and the life cycles of fast-flux nodes. %I School of Computer Science, Georgia Tech %V GT-CS-08-07 %8 2008/// %G eng %U http://hdl.handle.net/1853/30451 %0 Report %D 2008 %T High Performance Computing Algorithms for Land Cover Dynamics Using Remote Sensing Data %A Satya Kalluri %A Bader,David A. %A John Townshend %A JaJa, Joseph F. %A Zengyan Zhang %A Fallah-adl,Hassan %X Global and regional land cover studies require the ability to apply complex models on selected subsets of large amounts of multi-sensor and multi-temporal data sets that have been derived from raw instrument measurements using widely accepted pre-processing algorithms. The computational and storage requirements of most such studies far exceed what is possible on a single workstation environment. We have been pursuing a new approach that couples scalable and open distributed heterogeneous hardware with the development of high performance software for processing, indexing, and organizing remotely sensed data. Hierarchical data management tools are used to ingest raw data, create metadata, and organize the archived data so as to automatically achieve computational load balancing among the available nodes and minimize I/O overheads. We illustrate our approach with four specific examples. The first is the development of the first fast operational scheme for the atmospheric correction of Landsat TM scenes, while the second example focuses on image segmentation using a novel hierarchical connected components algorithm. 
Retrieval of global BRDF (Bidirectional Reflectance Distribution Function) in the red and near infrared wavelengths using four years (1983 to 1986) of the Pathfinder AVHRR Land (PAL) data set is the focus of our third example. The fourth example is the development of a hierarchical data organization scheme that allows on-demand processing and retrieval of regional and global AVHRR data sets. Our results show that substantial improvements in computational times can be achieved by using high performance computing technology. %I CiteSeerX %8 2008/// %G eng %U http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.58.4213 %0 Journal Article %J Electronic Transactions on Numerical Analysis %D 2008 %T Implementing an Interior Point Method for Linear Programs on a CPU-GPU System %A Jung,Jin Hyuk %A O'Leary, Dianne P. %B Electronic Transactions on Numerical Analysis %V 28 %P 174 - 189 %8 2008/// %G eng %U http://etna.mcs.kent.edu/vol.28.2007-2008/pp174-189.dir/pp174-189.pdf %0 Book Section %B Information Systems Security %D 2008 %T Implicit Flows: Can’t Live with ‘Em, Can’t Live without ‘Em %A King,Dave %A Hicks,Boniface %A Hicks, Michael W. %A Jaeger,Trent %E Sekar,R. %E Pujari,Arun %X Verifying that programs trusted to enforce security actually do so is a practical concern for programmers and administrators. However, there is a disconnect between the kinds of tools that have been successfully applied to real software systems (such as taint mode in Perl and Ruby), and information-flow compilers that enforce a variant of the stronger security property of noninterference. Tools that have been successfully used to find security violations have focused on explicit flows of information, where high-security information is directly leaked to output. 
Analysis tools that enforce noninterference also prevent implicit flows of information, where high-security information can be inferred from a program’s flow of control. However, these tools have seen little use in practice, despite the stronger guarantees that they provide. To better understand why, this paper experimentally investigates the explicit and implicit flows identified by the standard algorithm for establishing noninterference. When applied to implementations of authentication and cryptographic functions, the standard algorithm discovers many real implicit flows of information, but also reports an extremely high number of false alarms, most of which are due to conservative handling of unchecked exceptions (e.g., null pointer exceptions). After a careful analysis of all sources of true and false alarms, due to both implicit and explicit flows, the paper concludes with some ideas to improve the false alarm rate, toward making stronger security analysis more practical. %B Information Systems Security %S Lecture Notes in Computer Science %I Springer Berlin / Heidelberg %V 5352 %P 56 - 70 %8 2008/// %@ 978-3-540-89861-0 %G eng %U http://dx.doi.org/10.1007/978-3-540-89862-7_4 %0 Conference Paper %B Machine Learning and Cybernetics, 2008 International Conference on %D 2008 %T An improved mean shift tracking method based on nonparametric clustering and adaptive bandwidth %A Zhuolin Jiang %A Li,Shao-Fa %A Jia,Xi-Ping %A Zhu,Hong-Li %K adaptive bandwidth %K appearance model %K bandwidth matrix %K Bhattacharyya coefficient %K color information %K color space partitioning %K image colour analysis %K iterative procedure %K kernel bandwidth parameter %K kernel density estimate %K log-likelihood function %K mean shift tracking method %K modified weight function %K nonparametric clustering %K Object detection %K object representation %K object tracking %K pattern clustering %K similarity measure %K spatial layout %K target candidate %K 
target model %K tracking %X An improved mean shift method for object tracking based on nonparametric clustering and adaptive bandwidth is presented in this paper. Based on partitioning the color space of a tracked object by using a modified nonparametric clustering, an appearance model of the tracked object is built. It captures both the color information and spatial layout of the tracked object. The similarity measure between the target model and the target candidate is derived from the Bhattacharyya coefficient. The kernel bandwidth parameters are automatically selected by maximizing the lower bound of a log-likelihood function, which is derived from a kernel density estimate using the bandwidth matrix and the modified weight function. The experimental results show that the method can converge in an average of 2.6 iterations per frame. %B Machine Learning and Cybernetics, 2008 International Conference on %V 5 %P 2779 - 2784 %8 2008/07// %G eng %R 10.1109/ICMLC.2008.4620880 %0 Journal Article %J Visualization and Computer Graphics, IEEE Transactions on %D 2008 %T Interactive High-Resolution Isosurface Ray Casting on Multicore Processors %A Wang,Qin %A JaJa, Joseph F. %K compact indexing structure %K interactive high-resolution isosurface ray casting %K interactive isosurface rendering %K memory management %K multicore processor %K multithreading %K object-order traversal %K static screen partitioning %K multiprocessing systems %K rendering (computer graphics) %K Image Processing, Computer-Assisted %K Imaging, Three-Dimensional %K Information Storage and Retrieval %K Reproducibility of Results %K Sensitivity and Specificity %K Signal Processing, Computer-Assisted %K User-Computer Interface %X We present a new method for the interactive rendering of isosurfaces using ray casting on multicore processors. 
This method consists of a combination of an object-order traversal that coarsely identifies possible candidate three-dimensional (3D) data blocks for each small set of contiguous pixels and an isosurface ray casting strategy tailored for the resulting limited-size lists of candidate 3D data blocks. Our implementation scheme results in a compact indexing structure and makes careful use of multithreading and memory management environments commonly present in multicore processors. Although static screen partitioning is widely used in the literature, our scheme starts with an image partitioning for the initial stage and then performs dynamic allocation of groups of ray casting tasks among the different threads to ensure almost equal loads among the different cores while maintaining spatial locality. We also pay particular attention to the overhead incurred by moving the data across the different levels of the memory hierarchy. We test our system on a two-processor Clovertown platform, each processor a Quad-Core 1.86-GHz Intel Xeon, and present detailed experimental results for a number of widely different benchmarks. We show that our system is efficient and scalable and achieves high cache performance and excellent load balancing, resulting in an overall performance that is superior to any of the previous algorithms. In fact, we achieve interactive isosurface rendering on a screen with 1024 × 1024 resolution for all the data sets tested up to the maximum size that can fit in the main memory of our platform. %B Visualization and Computer Graphics, IEEE Transactions on %V 14 %P 603 - 614 %8 2008/06//may %@ 1077-2626 %G eng %N 3 %R 10.1109/TVCG.2007.70630 %0 Report %D 2008 %T The Maryland Large-Scale Integrated Neurocognitive Architecture %A Reggia, James A. %A Tagamets,M. %A Contreras-Vidal,J. %A Jacobs, David W. %A Weems,S. %A Naqvi,W. %A Yang,C. 
%K *COMPUTATIONS %K *HYBRID SYSTEMS %K *NEURAL NETS %K *NEUROCOGNITIVE ARCHITECTURE %K ADAPTIVE SYSTEMS %K Artificial intelligence %K BRAIN %K Cognition %K COMPUTER PROGRAMMING %K COMPUTER PROGRAMMING AND SOFTWARE %K HYBRID AI %K Machine intelligence %K MECHANICAL ORGANS %K MODULAR CONSTRUCTION %K NERVOUS SYSTEM %K PE61101E %K PLASTIC PROPERTIES %K PROCESSING EQUIPMENT %K RECURRENT NEURAL NETWORK %X Recent progress in neural computation, high performance computing, neuroscience and cognitive science suggests that an effort to produce a general-purpose, adaptive machine intelligence is likely to yield a qualitatively more powerful system than those currently existing. Here we outline our progress in developing a framework for creating such a large-scale machine intelligence, or neurocognitive architecture that is based on the modularity, dynamics and plasticity of the human brain. We successfully implemented three intermediate-scale parts of such a system, and these are described. Based on this experience, we concluded that for the short term, optimal results would be obtained by using a hybrid design including neural, symbolic AI, and artificial life methods. We propose a three-tiered architecture that integrates these different methods, and describe a prototype mini-Roboscout that we implemented and evaluated based on this architecture. We also examined, via computational experiments, the effectiveness of genetic programming as a design tool for recurrent neural networks, and the speed-up obtained for adaptive neural networks when they are executed on a graphical processing unit. We conclude that the implementation of a large-scale neurocognitive architecture is feasible, and outline a roadmap for proceeding. %I University of Maryland College Park %8 2008/03// %G eng %U http://stinet.dtic.mil/oai/oai?&verb=getRecord&metadataPrefix=html&identifier=ADA481261 %0 Journal Article %J ACM Trans. Database Syst. %D 2008 %T Metric space similarity joins %A Jacox,Edwin H. 
%A Samet, Hanan %K distance-based indexing %K external memory algorithms %K nearest neighbor queries %K range queries %K Ranking %K Similarity join %X Similarity join algorithms find pairs of objects that lie within a certain distance ε of each other. Algorithms that are adapted from spatial join techniques are designed primarily for data in a vector space and often employ some form of a multidimensional index. For these algorithms, when the data lies in a metric space, the usual solution is to embed the data in vector space and then make use of a multidimensional index. Such an approach has a number of drawbacks when the data is high dimensional as we must eventually find the most discriminating dimensions, which is not trivial. In addition, although the maximum distance between objects increases with dimension, the ability to discriminate between objects in each dimension does not. These drawbacks are overcome via the introduction of a new method called Quickjoin that does not require a multidimensional index and instead adapts techniques used in distance-based indexing for use in a method that is conceptually similar to the Quicksort algorithm. A formal analysis is provided of the Quickjoin method. Experiments show that the Quickjoin method significantly outperforms two existing techniques. %B ACM Trans. Database Syst. %V 33 %P 7:1–7:38 %8 2008/06// %@ 0362-5915 %G eng %U http://doi.acm.org/10.1145/1366102.1366104 %N 2 %R 10.1145/1366102.1366104 %0 Conference Paper %B Proceedings of the 2008 ACM CoNEXT Conference %D 2008 %T MINT: a Market for INternet Transit %A Valancius,Vytautas %A Feamster, Nick %A Johari,Ramesh %A Vazirani,Vijay %X Today's Internet routing paths are inefficient with respect to both connectivity and the market for interconnection. The former manifests itself via needlessly long paths, de-peering, etc. The latter arises because of a primitive market structure that results in unfulfilled demand and unused capacity. 
Today's networks make pairwise, myopic interconnection decisions based on business considerations that may not mirror considerations of the edge networks (or end systems) that would benefit from the existence of a particular interconnection. These bilateral contracts are also complex and difficult to enforce. This paper proposes MINT, a market structure and routing protocol suite that facilitates the sale and purchase of end-to-end Internet paths. We present MINT's structure, explain how it improves connectivity and market efficiency, explore the types of connectivity that might be exchanged (vs. today's "best effort" connectivity), and argue that MINT's deployment is beneficial to both stub networks and transit providers. We discuss research challenges, including the design both of the protocol that maintains information about connectivity and of the market clearing algorithms. Our preliminary evaluation shows that such a market quickly reaches equilibrium and exhibits price stability. %B Proceedings of the 2008 ACM CoNEXT Conference %S CoNEXT '08 %I ACM %C New York, NY, USA %P 70:1–70:6 %8 2008/// %@ 978-1-60558-210-8 %G eng %U http://doi.acm.org/10.1145/1544012.1544082 %R 10.1145/1544012.1544082 %0 Journal Article %J IEEE Transactions on Software Engineering %D 2008 %T Modular Information Hiding and Type-Safe Linking for C %A Srivastava,S. %A Hicks, Michael W. %A Foster, Jeffrey S. %A Jenkins,P. %K C language %K CMOD %K Code design %K Coding Tools and Techniques %K compiler %K data encapsulation %K Information hiding %K modular information hiding %K modular reasoning %K modules %K object-oriented programming %K open source programs %K packages %K program compilers %K public domain software %K reliability %K software reusability %K type-safe linking %X This paper presents CMod, a novel tool that provides a sound module system for C. 
CMod works by enforcing four rules that are based on principles of modular reasoning and on current programming practice. CMod's rules flesh out the convention that .h header files are module interfaces and .c source files are module implementations. Although this convention is well-known, existing explanations of it are incomplete, omitting important subtleties needed for soundness. In contrast, we have proven formally that CMod's rules enforce both information hiding and type-safe linking. To use CMod, the programmer develops and builds their software as usual, redirecting the compiler and linker to CMod's wrappers. We evaluated CMod by applying it to 30 open source programs, totaling more than one million LoC. Violations to CMod's rules revealed more than a thousand information hiding errors, dozens of typing errors, and hundreds of cases that, although not currently bugs, make programming mistakes more likely as the code evolves. At the same time, programs generally adhere to the assumptions underlying CMod's rules, and so we could fix rule violations with a modest effort. We conclude that CMod can effectively support modular programming in C: it soundly enforces type-safe linking and information-hiding while being largely compatible with existing practice. 
%B IEEE Transactions on Software Engineering %V 34 %P 357 - 376 %8 2008/06//may %@ 0098-5589 %G eng %N 3 %R 10.1109/TSE.2008.25 %0 Conference Paper %B Computer Science and Software Engineering, 2008 International Conference on %D 2008 %T A New Approach of Dynamic Background Modeling for Surveillance Information %A Gao,Dongfa %A Zhuolin Jiang %A Ye,Ming %K approximate information extraction %K binary mask images %K disturbance filtering %K dynamic background modeling %K Feature extraction %K filtering theory %K Image reconstruction %K information frame reconstruction %K NOISE %K noise filtering %K orthogonal nonseparable wavelet transformation %K Surveillance %K surveillance information %K Wavelet transforms %X This paper presents a new approach to background modeling for surveillance information. The approach applies an orthogonal non-separable wavelet transformation to the information frames used for background modeling, extracts the approximate information to reconstruct the frames, filters out disturbance, shadow, and noise from the reconstructed frames, constructs a basic background using binary mask images, filters noise in the basic background using a multi-frame combination of non-uniform noise, and applies mutual information to detect changes between adjacent frames. If the background changes gradually, a time-weighted superposition of multiple background modeling images is applied to update the background. If the background undergoes a major or sudden change, the background is remodeled from that frame. 
Isosurface extraction and rendering is an important visualization technique that enables the visual exploration of such data sets using surfaces. However, the computational requirements of this approach are substantial, which in general prevents the interactive rendering of isosurfaces for large data sets. Therefore, parallel and distributed computing techniques offer a promising direction to deal with the corresponding computational challenges. In this chapter, we give a brief historical perspective of the isosurface visualization approach, and describe the basic sequential and parallel techniques used to extract and render isosurfaces with a particular focus on out-of-core techniques. For parallel algorithms, we assume a distributed memory model in which each processor has its own local disk, and processors communicate and exchange data through an interconnection network. We present a general framework for evaluating parallel isosurface extraction algorithms and describe the related best known parallel algorithms. We also describe the main parallel strategies used to handle isosurface rendering, pointing out the limitations of these strategies. %I Institute for Advanced Computer Studies, Univ of Maryland, College Park %8 2008/// %G eng %U http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.122.4472 %0 Journal Article %J Journal of Molecular Graphics and Modelling %D 2008 %T Parallel, stochastic measurement of molecular surface area %A Juba,Derek %A Varshney, Amitabh %K gpu %K Molecular surface %K Parallel %K Progressive %K Quasi-random %K Stochastic %X Biochemists often wish to compute surface areas of proteins. A variety of algorithms have been developed for this task, but they are designed for traditional single-processor architectures. 
The current trend in computer hardware is towards increasingly parallel architectures for which these algorithms are not well suited. We describe a parallel, stochastic algorithm for molecular surface area computation that maps well to the emerging multi-core architectures. Our algorithm is also progressive, providing a rough estimate of surface area immediately and refining this estimate as time goes on. Furthermore, the algorithm generates points on the molecular surface which can be used for point-based rendering. We demonstrate a GPU implementation of our algorithm and show that it compares favorably with several existing molecular surface computation programs, giving fast estimates of the molecular surface area with good accuracy. %B Journal of Molecular Graphics and Modelling %V 27 %P 82 - 87 %8 2008/08// %@ 1093-3263 %G eng %U http://www.sciencedirect.com/science/article/pii/S1093326308000387 %N 1 %R 10.1016/j.jmgm.2008.03.001 %0 Book Section %B Studies in Computational Intelligence: Machine Learning in Document Analysis and Recognition %D 2008 %T Review of Classifier Combination Methods %A Tulyakov,Sergey %A Jaeger,Stefan %A Govindaraju,Venu %A David Doermann %E Marinai,Simone %E Fujisawa,Hiromichi %X Classifier combination methods have proved to be an effective tool to increase the performance of pattern recognition applications. In this chapter we review and categorize major advancements in this field. Despite a significant number of publications describing successful classifier combination implementations, the theoretical basis is still missing and achieved improvements are inconsistent. 
By introducing different categories of classifier combinations in this review we attempt to put forward more specific directions for future theoretical research. We also introduce a retraining effect and effects of locality based training as important properties of classifier combinations. Such effects have significant influence on the performance of combinations, and their study is necessary for complete theoretical understanding of combination algorithms. %B Studies in Computational Intelligence: Machine Learning in Document Analysis and Recognition %I Springer %P 361 - 386 %8 2008/// %G eng %0 Conference Paper %B AAAI-08 Workshop on Metareasoning (Chicago, IL) %D 2008 %T The role of metacognition in robust AI systems %A Schmill,M. D %A Oates,T. %A Anderson,M. %A Fults,S. %A Josyula,D. %A Perlis, Don %A Wilson,S. %B AAAI-08 Workshop on Metareasoning (Chicago, IL) %8 2008/// %G eng %0 Journal Article %J IEEE Transactions on Pattern Analysis and Machine Intelligence %D 2008 %T Script-Independent Text Line Segmentation in Freestyle Handwritten Documents %A Yi,L. %A Zheng,Y. %A David Doermann %A Jaeger,S. %X Text line segmentation in freestyle handwritten documents remains an open document analysis problem. Curvilinear text lines and small gaps between neighboring text lines present a challenge to algorithms developed for machine printed or hand-printed documents. In this paper, we propose a novel approach based on density estimation and a state-of-the-art image segmentation technique, the level set method. From an input document image, we estimate a probability map, where each element represents the probability of the underlying pixel belonging to a text line. The level set method is then exploited to determine the boundary of neighboring text lines by evolving an initial estimate. 
Unlike connected component based methods ([1] and [2], for example), the proposed algorithm does not use any script-specific knowledge. Extensive quantitative experiments on freestyle handwritten documents with diverse scripts, such as Arabic, Chinese, Korean, and Hindi in the University of Maryland Multilingual database, demonstrate that our algorithm consistently outperforms previous methods [1]–[3]. Further experiments show the proposed algorithm is robust to scale change, rotation, and noise. %B IEEE Transactions on Pattern Analysis and Machine Intelligence %P 1313 - 1329 %8 2008/08// %G eng %0 Journal Article %J Computer Vision–ECCV 2008 %D 2008 %T Searching the world’s herbaria: A system for visual identification of plant species %A Belhumeur,P. %A Chen,D. %A Feiner,S. %A Jacobs, David W. %A Kress,W. %A Ling,H. %A Lopez,I. %A Ramamoorthi,R. %A Sheorey,S. %A White,S. %X We describe a working computer vision system that aids in the identification of plant species. A user photographs an isolated leaf on a blank background, and the system extracts the leaf shape and matches it to the shape of leaves of known species. In a few seconds, the system displays the top matching species, along with textual descriptions and additional images. This system is currently in use by botanists at the Smithsonian Institution National Museum of Natural History. The primary contributions of this paper are: a description of a working computer vision system and its user interface for an important new application area; the introduction of three new datasets containing thousands of single leaf images, each labeled by species and verified by botanists at the US National Herbarium; recognition results for two of the three leaf datasets; and descriptions throughout of practical lessons learned in constructing this system. 
%B Computer Vision–ECCV 2008 %P 116 - 129 %8 2008/// %G eng %R 10.1007/978-3-540-88693-8_9 %0 Journal Article %J AI Magazine %D 2008 %T A self-help guide for autonomous systems %A Anderson,M. L %A Fults,S. %A Josyula,D. P %A Oates,T. %A Perlis, Don %A Wilson,S. %A Wright,D. %B AI Magazine %V 29 %P 67 - 67 %8 2008/// %G eng %N 2 %0 Report %D 2008 %T Statistical Relational Learning as an Enabling Technology for Data Acquisition and Data Fusion in Heterogeneous Sensor Networks %A Jacobs, David W. %A Getoor, Lise %K *ALGORITHMS %K *CLASSIFICATION %K data acquisition %K DATA FUSION %K Detectors %K Feature extraction %K HMM(HIDDEN MARKOV MODELS) %K NETWORKS %K NUMERICAL MATHEMATICS %K PE611102 %K RANDOM FIELDS %K STATISTICS AND PROBABILITY %K TEST SETS %K VIDEO SIGNALS %X Our work has focused on developing new cost sensitive feature acquisition and classification algorithms, mapping these algorithms onto camera networks, and creating a test bed of video data and implemented vision algorithms that we can use to implement these. First, we will describe a new algorithm that we have developed for feature acquisition in Hidden Markov Models (HMMs). This is particularly useful for inference tasks involving video from a single camera, in which the relationship between frames of video can be modeled as a Markov chain. We describe this algorithm in the context of using background subtraction results to identify portions of video that contain a moving object. Next, we will describe new algorithms that apply to general graphical models. These can be tested using existing test sets that are drawn from a range of domains in addition to sensor networks. 
%I OFFICE OF RESEARCH ADMINISTRATION AND ADVANCEMENT, UNIVERSITY OF MARYLAND COLLEGE PARK %8 2008/06/29/ %G eng %U http://stinet.dtic.mil/oai/oai?&verb=getRecord&metadataPrefix=html&identifier=ADA500520 %0 Conference Paper %B Machine Learning and Cybernetics, 2008 International Conference on %D 2008 %T A topic-based Document Correlation Model %A Jia,Xi-Ping %A Peng,Hong %A Zheng,Qi-Lun %A Zhuolin Jiang %A Li,Zhao %K bipartite graph optimal matching %K data mining %K document correlation analysis %K document retrieval %K Gibbs sampling %K Information retrieval %K latent Dirichlet allocation model %K text analysis %K text mining %K topic-based document correlation model %X Document correlation analysis is now a focus of study in text mining. This paper proposed a Document Correlation Model to capture the correlation between documents from topic level. The model represents the document correlation as the Optimal Matching of a bipartite graph, of which each partition is a document, each node is a topic, and each edge is the similarity between two topics. The topics of each document are retrieved by the Latent Dirichlet Allocation model and Gibbs sampling. Experiments on correlated document search show that the Document Correlation Model outperforms the Vector Space Model on two aspects: 1) it has higher average retrieval precision; 2) it needs less space to store a documentpsilas information. %B Machine Learning and Cybernetics, 2008 International Conference on %V 5 %P 2487 - 2491 %8 2008/07// %G eng %R 10.1109/ICMLC.2008.4620826 %0 Conference Paper %B Applications of Computer Vision, 2008. WACV 2008. IEEE Workshop on %D 2008 %T Tracking Down Under: Following the Satin Bowerbird %A Kembhavi,A. %A Farrell,R. %A Luo,Yuancheng %A Jacobs, David W. %A Duraiswami, Ramani %A Davis, Larry S. 
%K animal behavior %K animal detection %K animal tracking %K automated video analysis tool %K behavioural sciences computing %K feature extraction %K feature selection %K Satin Bowerbird %K tracking %K video signal processing %K zoology %X Sociobiologists collect huge volumes of video to study animal behavior (our collaborators work with 30,000 hours of video). The scale of these datasets demands the development of automated video analysis tools. Detecting and tracking animals is a critical first step in this process. However, off-the-shelf methods prove incapable of handling videos characterized by poor quality, drastic illumination changes, non-stationary scenery and foreground objects that become motionless for long stretches of time. We improve on existing approaches by taking advantage of specific aspects of this problem: by using information from the entire video we are able to find animals that become motionless for long intervals of time; we make robust decisions based on regional features; for different parts of the image, we tailor the selection of model features, choosing the features most helpful in differentiating the target animal from the background in that part of the image. We evaluate our method, achieving almost 83% tracking accuracy on a more than 200,000 frame dataset of Satin Bowerbird courtship videos. %B Applications of Computer Vision, 2008. WACV 2008. IEEE Workshop on %P 1 - 7 %8 2008/01// %G eng %R 10.1109/WACV.2008.4544004 %0 Journal Article %J Computer Vision and Image Understanding %D 2008 %T Using specularities in comparing 3D models and 2D images %A Osadchy,Margarita %A Jacobs, David W. %A Ramamoorthi,Ravi %A Tucker,David %K illumination %K Object recognition %K Specularity %X We aim to create systems that identify and locate objects by comparing known, 3D shapes to intensity images that they have produced. 
To do this we focus on verification methods that determine whether a known model in a specific pose is consistent with an image. We build on prior work that has done this successfully for Lambertian objects, to handle a much broader class of shiny objects that produce specular highlights. Our core contribution is a novel method for determining whether a known 3D shape is consistent with the 2D shape of a possible highlight found in an image. We do this using only a qualitative description of highlight formation that is consistent with most models of specular reflection, so no specific knowledge of an object’s specular reflectance properties is needed. This allows us to treat non-Lambertian image effects as a positive source of information about object identity, rather than treating them as a potential source of noise. We then show how to integrate information about highlights into a system that also checks the consistency of Lambertian reflectance effects. Also, we show how to model Lambertian reflectance using a reference image, rather than albedos, which can be difficult to measure in shiny objects. We test each aspect of our approach using several different data sets. We demonstrate the potential value of our method of handling specular highlights by building a system that can locate shiny, transparent objects, such as glassware, on table tops. We demonstrate our hybrid methods on pottery, and our use of reference images with face recognition experiments. %B Computer Vision and Image Understanding %V 111 %P 275 - 294 %8 2008/09// %@ 1077-3142 %G eng %U http://www.sciencedirect.com/science/article/pii/S1077314207001713 %N 3 %R 10.1016/j.cviu.2007.12.004 %0 Journal Article %J Journal of Vision %D 2008 %T A View-Point Invariant Texture Descriptor %A Fermüller, Cornelia %A Yong Xu %A Hui Ji %X A new texture descriptor based on fractal geometry, called the multifractal spectrum (MFS), is introduced. 
The key quantity in the study of fractal geometry is the fractal dimension, which is a measure of how an object changes over scale. Consider the intensity of an image as a 3D surface and slice it at regular intervals along the height dimension. For each interval we obtain a point set, for which we compute the fractal dimension. The vector composed of the fractal dimensions of all point sets is called the MFS of intensity. Replacing the intensity with other quantities, such as the density function, or the output of various filters (e.g. Laplacian, Gradient filters), different MFS descriptors are obtained. The MFS is shown mathematically to be invariant under any smooth mapping (bi-Lipschitz maps), which includes view-point changes and non-rigid deformations of the surface as well as local affine illumination changes. Computational experiments on unstructured textures, such as landscapes and shelves in a supermarket, demonstrate the robustness of the MFS to environmental changes. On standard data sets the MFS performs comparably to the top texture descriptors in the task of classification. However, in contrast to other descriptors, it has extremely low dimension and can be computed very efficiently and robustly. Psychophysical experiments demonstrate that humans can differentiate black and white textures on the basis of the fractal dimension. %B Journal of Vision %V 8 %P 354 - 354 %8 2008/05/10/ %@ 1534-7362 %G eng %U http://www.journalofvision.org/content/8/6/354 %N 6 %R 10.1167/8.6.354 %0 Book Section %B Handbook of Data Visualization %D 2008 %T Visualizing Functional Data with an Application to eBay’s Online Auctions %A Chen,Chun-houh %A Härdle,Wolfgang %A Unwin,Antony %A Jank,Wolfgang %A Shmueli,Galit %A Plaisant, Catherine %A Shneiderman, Ben %X Technological advances in the measurement, collection, and storage of data have led to more and more complex data structures. 
Examples of such structures include measurements of the behavior of individuals over time, digitized two- or three-dimensional images of the brain, and recordings of three- or even four-dimensional movements of objects traveling through space and time. Such data, although recorded in a discrete fashion, are usually thought of as continuous objects that are represented by functional relationships. This gave rise to functional data analysis (FDA), which was made popular by the monographs of Ramsay and Silverman (1997, 2002), where the center of interest is a set of curves, shapes, objects, or, more generally, a set of functional observations, in contrast to classical statistics where interest centers on a set of data vectors. In that sense, functional data is not only different from the data structure studied in classical statistics, but it actually generalizes it. Many of these new data structures require new statistical methods to unveil the information that they carry. %B Handbook of Data Visualization %S Springer Handbooks of Computational Statistics %I Springer Berlin Heidelberg %P 873 - 898 %8 2008/// %@ 978-3-540-33037-0 %G eng %U http://dx.doi.org/10.1007/978-3-540-33037-0_34 %0 Report %D 2007 %T ACE: A Novel Software Platform to Ensure the Integrity of Long Term Archives %A Song,Sangchul %A JaJa, Joseph F. %K Technical Report %X We develop a new methodology to address the integrity of long term archives using rigorous cryptographic techniques. A prototype system called ACE (Auditing Control Environment) was designed and developed based on this methodology. ACE creates a small-size integrity token for each digital object and some cryptographic summary information based on all the objects handled within a dynamic time period. ACE continuously audits the contents of the various objects according to the policy set by the archive, and provides mechanisms for an independent third-party auditor to certify the integrity of any object.
In fact, our approach will allow an independent auditor to verify the integrity of every version of an archived digital object as well as link the current version to the original form of the object when it was ingested into the archive. We show that ACE is very cost effective and scalable while making no assumptions about the archive architecture. We include in this paper some preliminary results on the validation and performance of ACE on a large image collection. %I Institute for Advanced Computer Studies, University of Maryland, College Park %V UMIACS-TR-2007-07 %8 2007/01/31/ %G eng %U http://drum.lib.umd.edu/handle/1903/4047 %0 Conference Paper %B Wavelet Analysis and Pattern Recognition, 2007. ICWAPR '07. International Conference on %D 2007 %T An adaptive mean shift tracking method using multiscale images %A Zhuolin Jiang %A Li,Shao-Fa %A Gao,Dong-Fa %K adaptive mean shift tracking method %K bandwidth matrix %K Gaussian kernel %K Gaussian processes %K Image sequences %K log-likelihood function %K matrix algebra %K maximum likelihood estimation %K multiscale image %K Object detection %K object tracking %X An adaptive mean shift tracking method for object tracking using multiscale images is presented in this paper. A bandwidth matrix and a Gaussian kernel are used to extend the definition of the target model. The method can exactly estimate the position of the tracked object using multiscale images from a Gaussian pyramid. The tracking method determines the parameters of kernel bandwidth by maximizing the lower bound of a log-likelihood function, which is derived from a kernel density estimate with the bandwidth matrix and the modified weight function. The experimental results show that it converges in an average of 2.55 iterations per frame. %B Wavelet Analysis and Pattern Recognition, 2007. ICWAPR '07. 
International Conference on %V 3 %P 1060 - 1066 %8 2007/11// %G eng %R 10.1109/ICWAPR.2007.4421589 %0 Journal Article %J Pattern Analysis and Machine Intelligence, IEEE Transactions on %D 2007 %T Appearance Characterization of Linear Lambertian Objects, Generalized Photometric Stereo, and Illumination-Invariant Face Recognition %A Zhou,S. K %A Aggarwal,G. %A Chellapa, Rama %A Jacobs, David W. %K albedo field %K appearance characterization %K generalized photometric stereo algorithms %K illumination-invariant face recognition %K linear Lambertian objects %K observation matrix factorization %K face recognition %K matrix decomposition %K Algorithms %K Artificial Intelligence %K Automated %K Photogrammetry %K Reproducibility of Results %K Sensitivity and Specificity %K Computer-Assisted %K Information Storage and Retrieval %K Lighting %K Linear Models %K Pattern Recognition %X Traditional photometric stereo algorithms employ a Lambertian reflectance model with a varying albedo field and involve the appearance of only one object. In this paper, we generalize photometric stereo algorithms to handle all appearances of all objects in a class, in particular the human face class, by making use of the linear Lambertian property. A linear Lambertian object is one which is linearly spanned by a set of basis objects and has a Lambertian surface. The linear property leads to a rank constraint and, consequently, a factorization of an observation matrix that consists of exemplar images of different objects (e.g., faces of different subjects) under different, unknown illuminations. Integrability and symmetry constraints are used to fully recover the subspace bases using a novel linearized algorithm that takes the varying albedo field into account. The effectiveness of the linear Lambertian property is further investigated by using it for the problem of illumination-invariant face recognition using just one image. Attached shadows are incorporated in the model by a careful treatment of the inherent nonlinearity in Lambert's law. 
This enables us to extend our algorithm to perform face recognition in the presence of multiple illumination sources. Experimental results using standard data sets are presented %B Pattern Analysis and Machine Intelligence, IEEE Transactions on %V 29 %P 230 - 245 %8 2007/02// %@ 0162-8828 %G eng %N 2 %R 10.1109/TPAMI.2007.25 %0 Conference Paper %B Third Language and Technology Conference %D 2007 %T Application of MCL in a dialog agent %A Josyula,D. P %A Fults,S. %A Anderson,M. L %A Wilson,S. %A Perlis, Don %B Third Language and Technology Conference %8 2007/// %G eng %0 Journal Article %J EURASIP Journal on Advances in Signal Processing %D 2007 %T Better flow estimation from color images %A Hui Ji %A Fermüller, Cornelia %X One of the difficulties in estimating optical flow is bias. Correcting the bias using the classical techniques is very difficult. The reason is that knowledge of the error statistics is required, which usually cannot be obtained because of lack of data. In this paper, we present an approach which utilizes color information. Color images do not provide more geometric information than monochromatic images to the estimation of optic flow. They do, however, contain additional statistical information. By utilizing the technique of instrumental variables, bias from multiple noise sources can be robustly corrected without computing the parameters of the noise distribution. Experiments on synthesized and real data demonstrate the efficiency of the algorithm. %B EURASIP Journal on Advances in Signal Processing %V 2007 %P 133 - 133 %8 2007/01// %@ 1110-8657 %G eng %U http://dx.doi.org/10.1155/2007/53912 %N 1 %R 10.1155/2007/53912 %0 Journal Article %J IEEE Transactions on Mobile Computing %D 2007 %T Cell breathing in wireless LANs: Algorithms and evaluation %A Hajiaghayi, Mohammad T. %A Mirrokni,S. V. %A Saberi,A. %A Bahl,P. %A Jain,K. %A Qiu,L. 
%B IEEE Transactions on Mobile Computing %V 6 %P 164 - 178 %8 2007/// %G eng %N 2 %0 Journal Article %J Journal of Biological Chemistry %D 2007 %T Characterization of Ehp, a Secreted Complement Inhibitory Protein from Staphylococcus aureus %A Hammel,Michal %A Sfyroera,Georgia %A Pyrpassopoulos,Serapion %A Ricklin,Daniel %A Ramyar,Kasra X. %A Pop, Mihai %A Jin,Zhongmin %A Lambris,John D. %A Geisbrecht,Brian V. %X We report here the discovery and characterization of Ehp, a new secreted Staphylococcus aureus protein that potently inhibits the alternative complement activation pathway. Ehp was identified through a genomic scan as an uncharacterized secreted protein from S. aureus, and immunoblotting of conditioned S. aureus culture medium revealed that the Ehp protein was secreted at the highest levels during log-phase bacterial growth. The mature Ehp polypeptide is composed of 80 residues and is 44% identical to the complement inhibitory domain of S. aureus Efb (extracellular fibrinogen-binding protein). We observed preferential binding by Ehp to native and hydrolyzed C3 relative to fully active C3b and found that Ehp formed a subnanomolar affinity complex with these various forms of C3 by binding to its thioester-containing C3d domain. Site-directed mutagenesis demonstrated that Arg75 and Asn82 are important in forming the Ehp·C3d complex, but loss of these side chains did not completely disrupt Ehp/C3d binding. This suggested the presence of a second C3d-binding site in Ehp, which was mapped to the proximity of Ehp Asn63. Further molecular level details of the Ehp/C3d interaction were revealed by solving the 2.7-Å crystal structure of an Ehp·C3d complex in which the low affinity site had been mutationally inactivated. Ehp potently inhibited C3b deposition onto sensitized surfaces by the alternative complement activation pathway. This inhibition was directly related to Ehp/C3d binding and was more potent than that seen for Efb-C. 
An altered conformation in Ehp-bound C3 was detected by monoclonal antibody C3-9, which is specific for a neoantigen exposed in activated forms of C3. Our results suggest that increased inhibitory potency of Ehp relative to Efb-C is derived from the second C3-binding site in this new protein. %B Journal of Biological Chemistry %V 282 %P 30051 - 30061 %8 2007/10/12/ %G eng %U http://www.jbc.org/content/282/41/30051.abstract %N 41 %R 10.1074/jbc.M704247200 %0 Journal Article %J Telecommunications Policy %D 2007 %T Community response grids: E-government, social networks, and effective emergency management %A Jaeger,Paul T. %A Shneiderman, Ben %A Fleischmann,Kenneth R. %A Preece,Jennifer %A Qu,Yan %A Fei Wu,Philip %K Community response grid %K E-government %K Emergency response %K Mobile communications %K Public policy %K social networks %X This paper explores the concept of developing community response grids (CRGs) for community emergency response and the policy implications of such a system. CRGs make use of the Internet and mobile communication devices, allowing residents and responders to share information, communicate, and coordinate activities in response to a major disaster. This paper explores the viability of using mobile communication technologies and the Web, including e-government, to develop response systems that would aid communities before, during, and after a major disaster, providing channels for contacting residents and responders, uploading information, distributing information, coordinating the responses of social networks, and facilitating resident-to-resident assistance. 
Drawing upon research from computer science, information studies, public policy, emergency management, and several other disciplines, the paper elaborates on the concept of and need for CRGs, examines related current efforts that can inform the development of CRGs, discusses how research about community networks can be used to instill trust and social capital in CRGs, and examines the issues of public policy, telecommunications, and e-government related to such a system. %B Telecommunications Policy %V 31 %P 592 - 604 %8 2007/11// %@ 0308-5961 %G eng %U http://www.sciencedirect.com/science/article/pii/S0308596107000699 %N 10–11 %R 10.1016/j.telpol.2007.07.008 %0 Journal Article %J Proceedings of the 13th Americas Conference on Information Systems %D 2007 %T Community response grids for older adults: Motivations, usability, and sociability %A Wu,P.F. %A Preece,J. %A Shneiderman, Ben %A Jaeger,P. T %A Qu,Y. %X This paper discusses the motivation for a Community Response Grid (CRG) to help older adults improve their capability for coping with emergency situations. We define and discuss the concept of a CRG, briefly review the limits of current emergency response systems, and identify usability and sociability guidelines for CRGs for older adults based on existing research. The paper ends with a call to action and suggestions for future research directions. %B Proceedings of the 13th Americas Conference on Information Systems %8 2007/// %G eng %0 Journal Article %J Biosecurity and Bioterrorism: Biodefense Strategy, Practice, and Science %D 2007 %T Community Response Grids: Using Information Technology to Help Communities Respond to Bioterror Emergencies %A Jaeger,Paul T. %A Fleischmann,Kenneth R. %A Preece,Jennifer %A Shneiderman, Ben %A Fei Wu,Philip %A Qu,Yan %X Access to accurate and trusted information is vital in preparing for, responding to, and recovering from an emergency. 
To facilitate response in large-scale emergency situations, Community Response Grids (CRGs) integrate Internet and mobile technologies to enable residents to report information, professional emergency responders to disseminate instructions, and residents to assist one another. CRGs use technology to help residents and professional emergency responders to work together in community response to emergencies, including bioterrorism events. In a time of increased danger from bioterrorist threats, the application of advanced information and communication technologies to community response is vital in confronting such threats. This article describes CRGs, their underlying concepts, development efforts, their relevance to biosecurity and bioterrorism, and future research issues in the use of technology to facilitate community response. %B Biosecurity and Bioterrorism: Biodefense Strategy, Practice, and Science %V 5 %P 335 - 346 %8 2007/12// %@ 1538-7135, 1557-850X %G eng %U http://www.liebertonline.com/doi/abs/10.1089/bsp.2007.0034 %N 4 %R 10.1089/bsp.2007.0034 %0 Conference Paper %B Scientific and Statistical Database Management, 2007. SSBDM '07. 19th International Conference on %D 2007 %T Component-based Data Layout for Efficient Slicing of Very Large Multidimensional Volumetric Data %A Kim,Jusub %A JaJa, Joseph F. %K axis-aligned %K cache %K curves;very %K data %K data;data %K databases; %K handling;query %K large %K layout;data %K memory %K multidimensional %K processing;very %K queries;space-filling %K size;component-based %K slicing %K slicing;out-of-core %K volumetric %X In this paper, we introduce a new efficient data layout scheme to efficiently handle out-of-core axis-aligned slicing queries of very large multidimensional volumetric data. Slicing is a very useful dimension reduction tool that removes or reduces occlusion problems in visualizing 3D/4D volumetric data sets and that enables fast visual exploration of such data sets. 
We show that the data layouts based on typical space-filling curves are not optimal for the out-of-core slicing queries and present a novel component-based data layout scheme for a specialized problem domain, in which it is only required to provide fast slicing at every k-th value, for any k > 1. Our component-based data layout scheme provides much faster processing time for any axis-aligned slicing direction at every k-th value, k > 1, requiring less cache memory and no replication of data. In addition, the data layout can be generalized to any high dimension. %B Scientific and Statistical Database Management, 2007. SSBDM '07. 19th International Conference on %P 8 - 8 %8 2007/07// %G eng %R 10.1109/SSDBM.2007.7 %0 Conference Paper %B Proceedings of the 16th international conference on World Wide Web %D 2007 %T Defeating script injection attacks with browser-enforced embedded policies %A Jim,Trevor %A Swamy,Nikhil %A Hicks, Michael W. %K cross-site scripting %K script injection %K web application security %X Web sites that accept and display content such as wiki articles or comments typically filter the content to prevent injected script code from running in browsers that view the site. The diversity of browser rendering algorithms and the desire to allow rich content make filtering quite difficult, however, and attacks such as the Samy and Yamanner worms have exploited filtering weaknesses. This paper proposes a simple alternative mechanism for preventing script injection called Browser-Enforced Embedded Policies (BEEP). The idea is that a web site can embed a policy in its pages that specifies which scripts are allowed to run. The browser, which knows exactly when it will run a script, can enforce this policy perfectly. We have added BEEP support to several browsers, and built tools to simplify adding policies to web applications. 
We found that supporting BEEP in browsers requires only small and localized modifications, modifying web applications requires minimal effort, and enforcing policies is generally lightweight. %B Proceedings of the 16th international conference on World Wide Web %S WWW '07 %I ACM %C New York, NY, USA %P 601 - 610 %8 2007/// %@ 978-1-59593-654-7 %G eng %U http://doi.acm.org/10.1145/1242572.1242654 %R 10.1145/1242572.1242654 %0 Book Section %B Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques %D 2007 %T Distribution-Free Testing Lower Bounds for Basic Boolean Functions %A Dana Dachman-Soled %A Servedio, Rocco A. %E Charikar, Moses %E Jansen, Klaus %E Reingold, Omer %E Rolim, José D. P. %K Algorithm Analysis and Problem Complexity %K Discrete Mathematics in Computer Science %K Numeric Computing %X In the distribution-free property testing model, the distance between functions is measured with respect to an arbitrary and unknown probability distribution D over the input domain. We consider distribution-free testing of several basic Boolean function classes over {0,1}^n, namely monotone conjunctions, general conjunctions, decision lists, and linear threshold functions. We prove that for each of these function classes, Ω((n/log n)^{1/5}) oracle calls are required for any distribution-free testing algorithm. Since each of these function classes is known to be distribution-free properly learnable (and hence testable) using Θ(n) oracle calls, our lower bounds are within a polynomial factor of the best possible. %B Approximation, Randomization, and Combinatorial Optimization. 
Algorithms and Techniques %S Lecture Notes in Computer Science %I Springer Berlin Heidelberg %P 494 - 508 %8 2007/01/01/ %@ 978-3-540-74207-4, 978-3-540-74208-1 %G eng %U http://link.springer.com/chapter/10.1007/978-3-540-74208-1_36 %0 Journal Article %J Science %D 2007 %T Draft Genome of the Filarial Nematode Parasite Brugia Malayi %A Ghedin,Elodie %A Wang,Shiliang %A Spiro,David %A Caler,Elisabet %A Zhao,Qi %A Crabtree,Jonathan %A Allen,Jonathan E %A Delcher,Arthur L. %A Guiliano,David B %A Miranda-Saavedra,Diego %A Angiuoli,Samuel V %A Creasy,Todd %A Amedeo,Paolo %A Haas,Brian %A El‐Sayed, Najib M. %A Wortman,Jennifer R. %A Feldblyum,Tamara %A Tallon,Luke %A Schatz,Michael %A Shumway,Martin %A Koo,Hean %A Salzberg,Steven L. %A Schobel,Seth %A Pertea,Mihaela %A Pop, Mihai %A White,Owen %A Barton,Geoffrey J %A Carlow,Clotilde K. S %A Crawford,Michael J %A Daub,Jennifer %A Dimmic,Matthew W %A Estes,Chris F %A Foster,Jeremy M %A Ganatra,Mehul %A Gregory,William F %A Johnson,Nicholas M %A Jin,Jinming %A Komuniecki,Richard %A Korf,Ian %A Kumar,Sanjay %A Laney,Sandra %A Li,Ben-Wen %A Li,Wen %A Lindblom,Tim H %A Lustigman,Sara %A Ma,Dong %A Maina,Claude V %A Martin,David M. A %A McCarter,James P %A McReynolds,Larry %A Mitreva,Makedonka %A Nutman,Thomas B %A Parkinson,John %A Peregrín-Alvarez,José M %A Poole,Catherine %A Ren,Qinghu %A Saunders,Lori %A Sluder,Ann E %A Smith,Katherine %A Stanke,Mario %A Unnasch,Thomas R %A Ware,Jenna %A Wei,Aguan D %A Weil,Gary %A Williams,Deryck J %A Zhang,Yinhua %A Williams,Steven A %A Fraser-Liggett,Claire %A Slatko,Barton %A Blaxter,Mark L %A Scott,Alan L %X Parasitic nematodes that cause elephantiasis and river blindness threaten hundreds of millions of people in the developing world. We have sequenced the ∼90 megabase (Mb) genome of the human filarial parasite Brugia malayi and predict ∼11,500 protein coding genes in 71 Mb of robustly assembled sequence. 
Comparative analysis with the free-living, model nematode Caenorhabditis elegans revealed that, despite these genes having maintained little conservation of local synteny during ∼350 million years of evolution, they largely remain in linkage on chromosomal units. More than 100 conserved operons were identified. Analysis of the predicted proteome provides evidence for adaptations of B. malayi to niches in its human and vector hosts and insights into the molecular basis of a mutualistic relationship with its Wolbachia endosymbiont. These findings offer a foundation for rational drug design. %B Science %V 317 %P 1756 - 1760 %8 2007/09/21/ %@ 0036-8075, 1095-9203 %G eng %U http://www.sciencemag.org/content/317/5845/1756 %N 5845 %R 10.1126/science.1145406 %0 Journal Article %J Journal of Parallel and Distributed Computing %D 2007 %T An efficient and scalable parallel algorithm for out-of-core isosurface extraction and rendering %A Wang,Qin %A JaJa, Joseph F. %A Varshney, Amitabh %K Parallel isosurface extraction %K scientific visualization %X We consider the problem of isosurface extraction and rendering for large scale time-varying data. Such data sets have been appearing at an increasing rate especially from physics-based simulations, and can range in size from hundreds of gigabytes to tens of terabytes. Isosurface extraction and rendering is one of the most widely used visualization techniques to explore and analyze such data sets. A common strategy for isosurface extraction involves the determination of the so-called active cells followed by a triangulation of these cells based on linear interpolation, and ending with a rendering of the triangular mesh. We develop a new simple indexing scheme for out-of-core processing of large scale data sets, which enables the identification of the active cells extremely quickly, using more compact indexing structure and more effective bulk data movement than previous schemes. 
Moreover, our scheme leads to an efficient and scalable implementation on multiprocessor environments in which each processor has access to its own local disk. In particular, our parallel algorithm provably achieves load balancing across the processors independent of the isovalue, with almost no overhead in the total amount of work relative to the sequential algorithm. We conduct a large number of experimental tests on the University of Maryland Visualization Cluster using the Richtmyer–Meshkov instability data set, and obtain results that consistently validate the efficiency and the scalability of our algorithm. %B Journal of Parallel and Distributed Computing %V 67 %P 592 - 603 %8 2007/05// %@ 0743-7315 %G eng %U http://www.sciencedirect.com/science/article/pii/S0743731506002450 %N 5 %R 10.1016/j.jpdc.2006.12.007 %0 Journal Article %J Journal of Cryptology %D 2007 %T Efficient signature schemes with tight reductions to the Diffie-Hellman problems %A Goh,E. J %A Jarecki,S. %A Katz, Jonathan %A Wang,N. %X We propose and analyze two efficient signature schemes whose security is tightly related to the Diffie-Hellman problems in the random oracle model. The security of our first scheme relies on the hardness of the computational Diffie-Hellman problem; the security of our second scheme, which is more efficient than the first, is based on the hardness of the decisional Diffie-Hellman problem, a stronger assumption. Given the current state of the art, it is as difficult to solve the Diffie-Hellman problems as it is to solve the discrete logarithm problem in many groups of cryptographic interest. Thus, the signature schemes shown here can currently offer substantially better efficiency (for a given level of provable security) than existing schemes based on the discrete logarithm assumption. 
The techniques we introduce can also be applied in a wide variety of settings to yield more efficient cryptographic schemes (based on various number-theoretic assumptions) with tight security reductions. %B Journal of Cryptology %V 20 %P 493 - 514 %8 2007/// %G eng %N 4 %R 10.1007/s00145-007-0549-3 %0 Conference Paper %B Computer Vision and Pattern Recognition, 2007. CVPR '07. IEEE Conference on %D 2007 %T Efficiently Determining Silhouette Consistency %A Li Yi %A Jacobs, David W. %K miscalibrated camera %K scaled orthographic projection %K shape-from-silhouette problem %K silhouette consistency %K image reconstruction %X Volume intersection is a frequently used technique to solve the Shape-From-Silhouette problem, which constructs a 3D object estimate from a set of silhouettes taken with calibrated cameras. It is natural to develop an efficient algorithm to determine the consistency of a set of silhouettes before performing time-consuming reconstruction, so that inaccurate silhouettes can be omitted. In this paper we first present a fast algorithm to determine the consistency of three silhouettes from known (but arbitrary) viewing directions, assuming the projection is scaled orthographic. The time complexity of the algorithm is linear in the number of points of the silhouette boundaries. We further prove that a set of more than three convex silhouettes is consistent if and only if any three of them are consistent. Another possible application of our approach is to determine the miscalibrated cameras in a large camera system. A consistent subset of cameras can be determined on the fly and miscalibrated cameras can also be recalibrated at a coarse scale. Real and synthesized data are used to demonstrate our results. %B Computer Vision and Pattern Recognition, 2007. CVPR '07. 
IEEE Conference on %P 1 - 8 %8 2007/06// %G eng %R 10.1109/CVPR.2007.383161 %0 Journal Article %J Nature %D 2007 %T Evolution of genes and genomes on the Drosophila phylogeny %A Clark,Andrew G. %A Eisen,Michael B. %A Smith,Douglas R. %A Bergman,Casey M. %A Oliver,Brian %A Markow,Therese A. %A Kaufman,Thomas C. %A Kellis,Manolis %A Gelbart,William %A Iyer,Venky N. %A Pollard,Daniel A. %A Sackton,Timothy B. %A Larracuente,Amanda M. %A Singh,Nadia D. %A Abad,Jose P. %A Abt,Dawn N. %A Adryan,Boris %A Aguade,Montserrat %A Akashi,Hiroshi %A Anderson,Wyatt W. %A Aquadro,Charles F. %A Ardell,David H. %A Arguello,Roman %A Artieri,Carlo G. %A Barbash,Daniel A. %A Barker,Daniel %A Barsanti,Paolo %A Batterham,Phil %A Batzoglou,Serafim %A Begun,Dave %A Bhutkar,Arjun %A Blanco,Enrico %A Bosak,Stephanie A. %A Bradley,Robert K. %A Brand,Adrianne D. %A Brent,Michael R. %A Brooks,Angela N. %A Brown,Randall H. %A Butlin,Roger K. %A Caggese,Corrado %A Calvi,Brian R. %A Carvalho,A. Bernardo de %A Caspi,Anat %A Castrezana,Sergio %A Celniker,Susan E. %A Chang,Jean L. %A Chapple,Charles %A Chatterji,Sourav %A Chinwalla,Asif %A Civetta,Alberto %A Clifton,Sandra W. %A Comeron,Josep M. %A Costello,James C. %A Coyne,Jerry A. %A Daub,Jennifer %A David,Robert G. %A Delcher,Arthur L. %A Delehaunty,Kim %A Do,Chuong B. %A Ebling,Heather %A Edwards,Kevin %A Eickbush,Thomas %A Evans,Jay D. %A Filipski,Alan %A Findei|[szlig]|,Sven %A Freyhult,Eva %A Fulton,Lucinda %A Fulton,Robert %A Garcia,Ana C. L. %A Gardiner,Anastasia %A Garfield,David A. %A Garvin,Barry E. %A Gibson,Greg %A Gilbert,Don %A Gnerre,Sante %A Godfrey,Jennifer %A Good,Robert %A Gotea,Valer %A Gravely,Brenton %A Greenberg,Anthony J. %A Griffiths-Jones,Sam %A Gross,Samuel %A Guigo,Roderic %A Gustafson,Erik A. %A Haerty,Wilfried %A Hahn,Matthew W. %A Halligan,Daniel L. %A Halpern,Aaron L. %A Halter,Gillian M. %A Han,Mira V. %A Heger,Andreas %A Hillier,LaDeana %A Hinrichs,Angie S. %A Holmes,Ian %A Hoskins,Roger A. %A Hubisz,Melissa J. 
%A Hultmark,Dan %A Huntley,Melanie A. %A Jaffe,David B. %A Jagadeeshan,Santosh %A Jeck,William R. %A Johnson,Justin %A Jones,Corbin D. %A Jordan,William C. %A Karpen,Gary H. %A Kataoka,Eiko %A Keightley,Peter D. %A Kheradpour,Pouya %A Kirkness,Ewen F. %A Koerich,Leonardo B. %A Kristiansen,Karsten %A Kudrna,Dave %A Kulathinal,Rob J. %A Kumar,Sudhir %A Kwok,Roberta %A Lander,Eric %A Langley,Charles H. %A Lapoint,Richard %A Lazzaro,Brian P. %A Lee,So-Jeong %A Levesque,Lisa %A Li,Ruiqiang %A Lin,Chiao-Feng %A Lin,Michael F. %A Lindblad-Toh,Kerstin %A Llopart,Ana %A Long,Manyuan %A Low,Lloyd %A Lozovsky,Elena %A Lu,Jian %A Luo,Meizhong %A Machado,Carlos A. %A Makalowski,Wojciech %A Marzo,Mar %A Matsuda,Muneo %A Matzkin,Luciano %A McAllister,Bryant %A McBride,Carolyn S. %A McKernan,Brendan %A McKernan,Kevin %A Mendez-Lago,Maria %A Minx,Patrick %A Mollenhauer,Michael U. %A Montooth,Kristi %A Mount, Stephen M. %A Mu,Xu %A Myers,Eugene %A Negre,Barbara %A Newfeld,Stuart %A Nielsen,Rasmus %A Noor,Mohamed A. F. %A O'Grady,Patrick %A Pachter,Lior %A Papaceit,Montserrat %A Parisi,Matthew J. %A Parisi,Michael %A Parts,Leopold %A Pedersen,Jakob S. %A Pesole,Graziano %A Phillippy,Adam M %A Ponting,Chris P. %A Pop, Mihai %A Porcelli,Damiano %A Powell,Jeffrey R. %A Prohaska,Sonja %A Pruitt,Kim %A Puig,Marta %A Quesneville,Hadi %A Ram,Kristipati Ravi %A Rand,David %A Rasmussen,Matthew D. %A Reed,Laura K. %A Reenan,Robert %A Reily,Amy %A Remington,Karin A. %A Rieger,Tania T. %A Ritchie,Michael G. %A Robin,Charles %A Rogers,Yu-Hui %A Rohde,Claudia %A Rozas,Julio %A Rubenfield,Marc J. %A Ruiz,Alfredo %A Russo,Susan %A Salzberg,Steven L. %A Sanchez-Gracia,Alejandro %A Saranga,David J. %A Sato,Hajime %A Schaeffer,Stephen W. %A Schatz,Michael C %A Schlenke,Todd %A Schwartz,Russell %A Segarra,Carmen %A Singh,Rama S. %A Sirot,Laura %A Sirota,Marina %A Sisneros,Nicholas B. %A Smith,Chris D. %A Smith,Temple F. %A Spieth,John %A Stage,Deborah E. 
%A Stark,Alexander %A Stephan,Wolfgang %A Strausberg,Robert L. %A Strempel,Sebastian %A Sturgill,David %A Sutton,Granger %A Sutton,Granger G. %A Tao,Wei %A Teichmann,Sarah %A Tobari,Yoshiko N. %A Tomimura,Yoshihiko %A Tsolas,Jason M. %A Valente,Vera L. S. %A Venter,Eli %A Venter,J. Craig %A Vicario,Saverio %A Vieira,Filipe G. %A Vilella,Albert J. %A Villasante,Alfredo %A Walenz,Brian %A Wang,Jun %A Wasserman,Marvin %A Watts,Thomas %A Wilson,Derek %A Wilson,Richard K. %A Wing,Rod A. %A Wolfner,Mariana F. %A Wong,Alex %A Wong,Gane Ka-Shu %A Wu,Chung-I %A Wu,Gabriel %A Yamamoto,Daisuke %A Yang,Hsiao-Pei %A Yang,Shiaw-Pyng %A Yorke,James A. %A Yoshida,Kiyohito %A Zdobnov,Evgeny %A Zhang,Peili %A Zhang,Yu %A Zimin,Aleksey V. %A Baldwin,Jennifer %A Abdouelleil,Amr %A Abdulkadir,Jamal %A Abebe,Adal %A Abera,Brikti %A Abreu,Justin %A Acer,St Christophe %A Aftuck,Lynne %A Alexander,Allen %A An,Peter %A Anderson,Erica %A Anderson,Scott %A Arachi,Harindra %A Azer,Marc %A Bachantsang,Pasang %A Barry,Andrew %A Bayul,Tashi %A Berlin,Aaron %A Bessette,Daniel %A Bloom,Toby %A Blye,Jason %A Boguslavskiy,Leonid %A Bonnet,Claude %A Boukhgalter,Boris %A Bourzgui,Imane %A Brown,Adam %A Cahill,Patrick %A Channer,Sheridon %A Cheshatsang,Yama %A Chuda,Lisa %A Citroen,Mieke %A Collymore,Alville %A Cooke,Patrick %A Costello,Maura %A D'Aco,Katie %A Daza,Riza %A Haan,Georgius De %A DeGray,Stuart %A DeMaso,Christina %A Dhargay,Norbu %A Dooley,Kimberly %A Dooley,Erin %A Doricent,Missole %A Dorje,Passang %A Dorjee,Kunsang %A Dupes,Alan %A Elong,Richard %A Falk,Jill %A Farina,Abderrahim %A Faro,Susan %A Ferguson,Diallo %A Fisher,Sheila %A Foley,Chelsea D. %A Franke,Alicia %A Friedrich,Dennis %A Gadbois,Loryn %A Gearin,Gary %A Gearin,Christina R. %A Giannoukos,Georgia %A Goode,Tina %A Graham,Joseph %A Grandbois,Edward %A Grewal,Sharleen %A Gyaltsen,Kunsang %A Hafez,Nabil %A Hagos,Birhane %A Hall,Jennifer %A Henson,Charlotte %A Hollinger,Andrew %A Honan,Tracey %A Huard,Monika D. 
%A Hughes,Leanne %A Hurhula,Brian %A Husby,M Erii %A Kamat,Asha %A Kanga,Ben %A Kashin,Seva %A Khazanovich,Dmitry %A Kisner,Peter %A Lance,Krista %A Lara,Marcia %A Lee,William %A Lennon,Niall %A Letendre,Frances %A LeVine,Rosie %A Lipovsky,Alex %A Liu,Xiaohong %A Liu,Jinlei %A Liu,Shangtao %A Lokyitsang,Tashi %A Lokyitsang,Yeshi %A Lubonja,Rakela %A Lui,Annie %A MacDonald,Pen %A Magnisalis,Vasilia %A Maru,Kebede %A Matthews,Charles %A McCusker,William %A McDonough,Susan %A Mehta,Teena %A Meldrim,James %A Meneus,Louis %A Mihai,Oana %A Mihalev,Atanas %A Mihova,Tanya %A Mittelman,Rachel %A Mlenga,Valentine %A Montmayeur,Anna %A Mulrain,Leonidas %A Navidi,Adam %A Naylor,Jerome %A Negash,Tamrat %A Nguyen,Thu %A Nguyen,Nga %A Nicol,Robert %A Norbu,Choe %A Norbu,Nyima %A Novod,Nathaniel %A O'Neill,Barry %A Osman,Sahal %A Markiewicz,Eva %A Oyono,Otero L. %A Patti,Christopher %A Phunkhang,Pema %A Pierre,Fritz %A Priest,Margaret %A Raghuraman,Sujaa %A Rege,Filip %A Reyes,Rebecca %A Rise,Cecil %A Rogov,Peter %A Ross,Keenan %A Ryan,Elizabeth %A Settipalli,Sampath %A Shea,Terry %A Sherpa,Ngawang %A Shi,Lu %A Shih,Diana %A Sparrow,Todd %A Spaulding,Jessica %A Stalker,John %A Stange-Thomann,Nicole %A Stavropoulos,Sharon %A Stone,Catherine %A Strader,Christopher %A Tesfaye,Senait %A Thomson,Talene %A Thoulutsang,Yama %A Thoulutsang,Dawa %A Topham,Kerri %A Topping,Ira %A Tsamla,Tsamla %A Vassiliev,Helen %A Vo,Andy %A Wangchuk,Tsering %A Wangdi,Tsering %A Weiand,Michael %A Wilkinson,Jane %A Wilson,Adam %A Yadav,Shailendra %A Young,Geneva %A Yu,Qing %A Zembek,Lisa %A Zhong,Danni %A Zimmer,Andrew %A Zwirko,Zac %A Jaffe,David B. 
%A Alvarez,Pablo %A Brockman,Will %A Butler,Jonathan %A Chin,CheeWhye %A Gnerre,Sante %A Grabherr,Manfred %A Kleber,Michael %A Mauceli,Evan %A MacCallum,Iain %X Comparative analysis of multiple genomes in a phylogenetic framework dramatically improves the precision and sensitivity of evolutionary inference, producing more robust results than single-genome analyses can provide. The genomes of 12 Drosophila species, ten of which are presented here for the first time (sechellia, simulans, yakuba, erecta, ananassae, persimilis, willistoni, mojavensis, virilis and grimshawi), illustrate how rates and patterns of sequence divergence across taxa can illuminate evolutionary processes on a genomic scale. These genome sequences augment the formidable genetic tools that have made Drosophila melanogaster a pre-eminent model for animal genetics, and will further catalyse fundamental research on mechanisms of development, cell biology, genetics, disease, neurobiology, behaviour, physiology and evolution. Despite remarkable similarities among these Drosophila species, we identified many putatively non-neutral changes in protein-coding genes, non-coding RNA genes, and cis-regulatory regions. These may prove to underlie differences in the ecology and behaviour of these diverse species. %B Nature %V 450 %P 203 - 218 %8 2007/11/08/ %@ 0028-0836 %G eng %U http://www.nature.com/nature/journal/v450/n7167/full/nature06341.html %N 7167 %R 10.1038/nature06341 %0 Journal Article %J IEEE Transactions on Pattern Analysis and Machine Intelligence %D 2007 %T Appearance Characterization of Linear Lambertian Objects, Generalized Photometric Stereo, and Illumination-Invariant Face Recognition %A Zhou,S. K %A Aggarwal,G. %A Chellappa, Rama %A Jacobs, David W. 
%B IEEE Transactions on Pattern Analysis and Machine Intelligence %V 29 %P 230 - 245 %8 2007/// %G eng %N 2 %0 Journal Article %J Emerging Infectious Diseases %D 2007 %T Genome Analysis Linking Recent European and African Influenza (H5N1) Viruses %A Salzberg,Steven L. %A Kingsford, Carl %A Cattoli,Giovanni %A Spiro,David J. %A Janies,Daniel A. %A Aly,Mona Mehrez %A Brown,Ian H. %A Couacy-Hymann,Emmanuel %A De Mia,Gian Mario %A Dung,Do Huu %A Guercio,Annalisa %A Joannis,Tony %A Ali,Ali Safar Maken %A Osmani,Azizullah %A Padalino,Iolanda %A Saad,Magdi D. %A Savić,Vladimir %A Sengamalay,Naomi A. %A Yingst,Samuel %A Zaborsky,Jennifer %A Zorman-Rojs,Olga %A Ghedin,Elodie %A Capua,Ilaria %X Although linked, these viruses are distinct from earlier outbreak strains. To better understand the ecology and epidemiology of the highly pathogenic avian influenza virus in its transcontinental spread, we sequenced and analyzed the complete genomes of 36 recent influenza A (H5N1) viruses collected from birds in Europe, northern Africa, and southeastern Asia. These sequences, among the first complete genomes of influenza (H5N1) viruses outside Asia, clearly depict the lineages now infecting wild and domestic birds in Europe and Africa and show the relationships among these isolates and other strains affecting both birds and humans. The isolates fall into 3 distinct lineages, 1 of which contains all known non-Asian isolates. This new Euro-African lineage, which was the cause of several recent (2006) fatal human infections in Egypt and Iraq, has been introduced at least 3 times into the European-African region and has split into 3 distinct, independently evolving sublineages. One isolate provides evidence that 2 of these sublineages have recently reassorted. 
%B Emerging Infectious Diseases %V 13 %P 713 - 718 %8 2007/05// %@ 1080-6040 %G eng %N 5 %R 10.3201/eid1305.070013 %0 Journal Article %J AI Magazine %D 2007 %T Heuristic Search and Information Visualization Methods for School Redistricting %A desJardins, Marie %A Bulka,Blazej %A Carr,Ryan %A Jordan,Eric %A Rheingans,Penny %X We describe an application of AI search and information visualization techniques to the problem of school redistricting, in which students are assigned to home schools within a county or school district. This is a multicriteria optimization problem in which competing objectives, such as school capacity, busing costs, and socioeconomic distribution, must be considered. Because of the complexity of the decision-making problem, tools are needed to help end users generate, evaluate, and compare alternative school assignment plans. A key goal of our research is to aid users in finding multiple qualitatively different redistricting plans that represent different trade-offs in the decision space. We present heuristic search methods that can be used to find a set of qualitatively different plans, and give empirical results of these search methods on population data from the school district of Howard County, Maryland. We show the resulting plans using novel visualization methods that we have developed for summarizing and comparing alternative plans. %B AI Magazine %V 28 %P 59 - 59 %8 2007/09/15/ %@ 0738-4602 %G eng %U https://www.aaai.org/ojs/index.php/aimagazine/article/viewArticle/2055 %N 3 %R 10.1609/aimag.v28i3.2055 %0 Conference Paper %B Proceedings of the Workshop on Metareasoning in Agent-Based Systems %D 2007 %T Hood College, Master of Business Administration, 2005 Hood College, Master of Science (Computer Science), 2001 Hood College, Bachelor of Science (Computer Science), 1998 Frederick Community College, Associate in Arts (Business Administration), 1993 %A Anderson,M. L %A Schmill,M. %A Oates,T. %A Perlis, Don %A Josyula,D. 
%A Wright,D. %A Human,S. W.T.D.N %A Metacognition,L. %A Fults,S. %A Josyula,D. P %B Proceedings of the Workshop on Metareasoning in Agent-Based Systems %8 2007/// %G eng %0 Journal Article %J Journal of Vision %D 2007 %T Illusory Motion Due to Causal Time Filtering %A Fermüller, Cornelia %A Hui Ji %X Static patterns by Kitaoka (2006), the most well known of which is the “Rotating Snake”, elicit forceful illusory motion. The patterns are composed of repeating patches of asymmetric intensity profile, in most cases organized circularly. Motion perception depends on the size of the patches and is found to occur in the periphery for larger patches and closer to the center of the eye for small patches. We propose that the main cause of these illusions is erroneous estimation of image motion due to eye movements. The reason is that image motion is estimated from the spatial and temporal energy of the image signal with filters which are symmetric in space, but asymmetric (causal) in time. In other words, only the past, but not the future, is used to estimate the temporal energy. It is shown that such filters mis-estimate the motion of locally asymmetric intensity signals for a range of spatial frequencies. This mis-estimation predicts the perceived motion in the different patterns of Kitaoka as well as the peripheral drift illusion, and accounts for the effect at varying patch size. This study builds upon our prior work on the distortion of image features and movement (Fermüller and Malm 2004). Kitaoka (2006): http://www.ritsumei.ac.jp/~akitaoka/index-e.html. C. Fermüller and H. Malm (2004). “Uncertainty in visual processes predicts geometrical optical illusions”, Vision Research, 44, 727–749. 
%B Journal of Vision %V 7 %P 977 - 977 %8 2007/06/30/ %@ 1534-7362 %G eng %U http://www.journalofvision.org/content/7/9/977 %N 9 %R 10.1167/7.9.977 %0 Journal Article %J Networking, IEEE/ACM Transactions on %D 2007 %T Implications of Autonomy for the Expressiveness of Policy Routing %A Feamster, Nick %A Johari,R. %A Balakrishnan,H. %K autonomous systems %K global Internet connectivity %K interdomain routing system %K Internet %K next-hop rankings %K routing protocol %K routing protocols %K routing stability %K stable path assignment %X Thousands of competing autonomous systems must cooperate with each other to provide global Internet connectivity. Each autonomous system (AS) encodes various economic, business, and performance decisions in its routing policy. The current interdomain routing system enables each AS to express policy using rankings that determine how each router in the AS chooses among different routes to a destination, and filters that determine which routes are hidden from each neighboring AS. Because the Internet is composed of many independent, competing networks, the interdomain routing system should provide autonomy, allowing network operators to set their rankings independently, and to have no constraints on allowed filters. This paper studies routing protocol stability under these conditions. We first demonstrate that “next-hop rankings,” commonly used in practice, may not ensure routing stability. We then prove that, when providers can set rankings and filters autonomously, guaranteeing that the routing system will converge to a stable path assignment imposes strong restrictions on the rankings ASes are allowed to choose. We discuss the implications of these results for the future of interdomain routing. %B Networking, IEEE/ACM Transactions on %V 15 %P 1266 - 1279 %8 2007/12// %@ 1063-6692 %G eng %N 6 %R 10.1109/TNET.2007.896531 %0 Conference Paper %B Scientific and Statistical Database Management, 2007. SSBDM'07. 
19th International Conference on %D 2007 %T Information-Aware 2^n-Tree for Efficient Out-of-Core Indexing of Very Large Multidimensional Volumetric Data %A Kim,J. %A JaJa, Joseph F. %B Scientific and Statistical Database Management, 2007. SSBDM'07. 19th International Conference on %P 9 - 9 %8 2007/// %G eng %0 Report %D 2007 %T Modelling and rendering large volume data with Gaussian radial basis functions %A Juba,D. %A Varshney, Amitabh %X Implicit representations have the potential to represent large volumes succinctly. In this paper we present a multiresolution and progressive implicit representation of scalar volumetric data using anisotropic Gaussian radial basis functions (RBFs) defined over an octree. Our representation lends itself well to progressive level-of-detail representations. Our RBF encoding algorithm based on a Maximum Likelihood Estimation (MLE) calculation is non-iterative, scales in an O(n log n) manner, and operates in a memory-friendly manner on very large datasets by processing small blocks at a time. We also present a GPU-based ray-casting algorithm for direct rendering from implicit volumes. Our GPU-based implicit volume rendering algorithm is accelerated by early-ray termination and empty-space skipping for implicit volumes and can render volumes encoded with 16 million RBFs at 1 to 3 frames/second. The octree hierarchy enables the GPU-based ray-casting algorithm to efficiently traverse using location codes and is also suitable for view-dependent level-of-detail-based rendering. 
%I Institute for Advanced Computer Studies, University of Maryland, College Park %V UMIACS-TR-2007-22 %8 2007/// %G eng %0 Journal Article %J Image Processing, IEEE Transactions on %D 2007 %T A Multiple-Hypothesis Approach for Multiobject Visual Tracking %A Joo,Seong-Wook %A Chellappa, Rama %K Automated;Reproducibility of Results;Sensitivity and Specificity;Signal Processing %K bipartite graph edge covering problem;data association;multiobject visual tracking;multiple measurements;multiple-hypothesis approach;multiple-object tracking;object detection;graph theory;object detection;probability;target tracking;Algorithms;Artificial %K Computer-Assisted;Motion;Numerical Analysis %K Computer-Assisted;Pattern Recognition %K Computer-Assisted;Subtraction Technique; %X In multiple-object tracking applications, it is essential to address the problem of associating targets and observation data. For visual tracking of multiple targets which involves objects that split and merge, a target may be associated with multiple measurements and many targets may be associated with a single measurement. The space of such data association is exponential in the number of targets and exhaustive enumeration is impractical. We pose the association problem as a bipartite graph edge covering problem given the targets and the object detection information. We propose an efficient method of maintaining multiple association hypotheses with the highest probabilities over all possible histories of associations. Our approach handles objects entering and exiting the field of view, merging and splitting objects, as well as objects that are detected as fragmented parts. Experimental results are given for tracking multiple players in a soccer game and for tracking people with complex interaction in a surveillance setting. It is shown through quantitative evaluation that our method tracks through varying degrees of interactions among the targets with high success rate. 
%B Image Processing, IEEE Transactions on %V 16 %P 2849 - 2854 %8 2007/11// %@ 1057-7149 %G eng %N 11 %R 10.1109/TIP.2007.906254 %0 Book Section %B IEEE Conference on Computer Vision and Pattern Recognition (CVPR 2007) %D 2007 %T Multi-scale Structural Saliency for Signature Detection %A Zhu,Guangyu %A Yefeng Zheng %A David Doermann %A Jaeger,Stefan %X Detecting and segmenting free-form objects from cluttered backgrounds is a challenging problem in computer vision. Signature detection in document images is one classic example and as of yet no reasonable solutions have been presented. In this paper, we propose a novel multi-scale approach to jointly detecting and segmenting signatures from documents with diverse layouts and complex backgrounds. Rather than focusing on local features that typically have large variations, our approach aims to capture the structural saliency of a signature by searching over multiple scales. This detection framework is general and computationally tractable. We present a saliency measure based on a signature production model that effectively quantifies the dynamic curvature of 2-D contour fragments. Our evaluation using large real world collections of handwritten and machine printed documents demonstrates the effectiveness of this joint detection and segmentation approach. %B IEEE Conference on Computer Vision and Pattern Recognition (CVPR 2007) %I Minneapolis, MN %P 1 - 8 %8 2007/// %G eng %0 Conference Paper %B Proceedings of the 15th international conference on Multimedia %D 2007 %T Multi-scale video cropping %A El-Alfy,Hazem %A Jacobs, David W. %A Davis, Larry S. %K shortest path algorithm %K Surveillance %K video cropping %X We consider the problem of cropping surveillance videos. 
This process chooses a trajectory that a small sub-window can take through the video, selecting the most important parts of the video for display on a smaller monitor. We model the information content of the video simply, by whether the image changes at each pixel. Then we show that we can find the globally optimal trajectory for a cropping window by using a shortest path algorithm. In practice, we can speed up this process without affecting the results, by stitching together trajectories computed over short intervals. This also reduces system latency. We then show that we can use a second shortest path formulation to find good cuts from one trajectory to another, improving coverage of interesting events in the video. We describe additional techniques to improve the quality and efficiency of the algorithm, and show results on surveillance videos. %B Proceedings of the 15th international conference on Multimedia %S MULTIMEDIA '07 %I ACM %C New York, NY, USA %P 97 - 106 %8 2007/// %@ 978-1-59593-702-5 %G eng %U http://doi.acm.org/10.1145/1291233.1291255 %R 10.1145/1291233.1291255 %0 Conference Paper %B Proceedings of the 8th annual international conference on Digital government research %D 2007 %T New techniques for ensuring the long term integrity of digital archives %A Song,S. %A JaJa, Joseph F. %X A large portion of the government, business, cultural, and scientific digital data being created today needs to be archived and preserved for future use over periods ranging from a few years to decades and sometimes centuries. A fundamental requirement of a long term archive is to ensure the integrity of its holdings. In this paper, we develop a new methodology to address the integrity of long term archives using rigorous cryptographic techniques. Our approach involves the generation of a small-size integrity token for each digital object to be archived, and some cryptographic summary information based on all the objects handled within a dynamic time period. 
We present a framework that enables the continuous auditing of the holdings of the archive depending on the policy set by the archive. Moreover, an independent auditor will be able to verify the integrity of every version of an archived digital object as well as link the current version to the original form of the object when it was ingested into the archive. We built a prototype system that is completely independent of the archive’s underlying architecture, and tested it on large scale data. We include in this paper some preliminary results on the validation and performance of our prototype. %B Proceedings of the 8th annual international conference on Digital government research %P 57 - 65 %8 2007/// %G eng %0 Patent %D 2007 %T Object recognition using linear subspaces %A Jacobs, David W. %A Basri,Ronen The Weizmann Institute %E NEC Laboratories America, Inc. (4 Independence Way, Princeton, New Jersey 08540, US) %X Abstract of EP1204069: A method for choosing an image from a plurality of three-dimensional models which is most similar to an input image is provided. 
The method includes the steps of: (a) providing a database of the plurality of three-dimensional models; (b) providing an input image; (c) positioning each three-dimensional model relative to the input image; (d) for each three-dimensional model, determining a rendered image that is most similar to the input image by: (d)(i) computing a linear subspace that describes an approximation to the set of all possible rendered images that each three-dimensional model can produce under all possible lighting conditions where each point in the linear subspace represents a possible image; and one of (d)(ii) finding the point on the linear subspace that is closest to the input image or finding a rendered image in a subset of the linear subspace obtained by projecting the set of images that are generated by positive lights onto the linear subspace; (e) computing a measure of similarity between the input image and each rendered image; and (f) selecting the three-dimensional model corresponding to the rendered image whose measure of similarity is most similar to the input image. Step (d) is preferably repeated for each of a red, green, and blue color component for each three-dimensional model. The linear subspace is preferably either four-dimensional or nine-dimensional. %V EP20010117763 %8 2007/01/17/ %G eng %U http://www.freepatentsonline.com/EP1204069.html %N EP1204069 %0 Conference Paper %B Proceedings from the Workshop on Metareasoning in Agent Based Systems at the Sixth International Joint Conference on Autonomous Agents and Multiagent Systems %D 2007 %T Ontologies for reasoning about failures in AI systems %A Schmill,M. %A Josyula,D. %A Anderson,M. L %A Wilson,S. %A Oates,T. %A Perlis, Don %A Fults,S. 
%B Proceedings from the Workshop on Metareasoning in Agent Based Systems at the Sixth International Joint Conference on Autonomous Agents and Multiagent Systems %8 2007/// %G eng %0 Journal Article %J International Journal of Computer Vision %D 2007 %T Photometric stereo with general, unknown lighting %A Basri,R. %A Jacobs, David W. %A Kemelmacher,I. %X Work on photometric stereo has shown how to recover the shape and reflectance properties of an object using multiple images taken with a fixed viewpoint and variable lighting conditions. This work has primarily relied on known lighting conditions or the presence of a single point source of light in each image. In this paper we show how to perform photometric stereo assuming that all lights in a scene are distant from the object but otherwise unconstrained. Lighting in each image may be unknown and may include an arbitrary combination of diffuse, point and extended sources. Our work is based on recent results showing that for Lambertian objects, general lighting conditions can be represented using low order spherical harmonics. Using this representation we can recover shape by performing a simple optimization in a low-dimensional space. We also analyze the shape ambiguities that arise in such a representation. We demonstrate our method by reconstructing the shape of objects from images obtained under a variety of lightings. We further compare the reconstructed shapes against shapes obtained with a laser scanner. 
%B International Journal of Computer Vision %V 72 %P 239 - 257 %8 2007/// %G eng %N 3 %R 10.1007/s11263-006-8815-7 %0 Journal Article %J Bulletin of the American Physical Society %D 2007 %T Plasma Turbulence Simulation and Visualization on Graphics Processors: Efficient Parallel Computing on the Desktop %A Stantchev,George %A Juba,Derek %A Dorland,William %A Varshney, Amitabh %X Direct numerical simulation (DNS) of turbulence is computationally very intensive and typically relies on some form of parallel processing. Spectral kernels used for spatial discretization are a common computational bottleneck on distributed memory architectures. One way to increase the efficiency of DNS algorithms is to parallelize spectral kernels using tightly-coupled SPMD multiprocessor hardware architecture with minimal inter-processor communication latency. In this poster we present techniques to take advantage of the recent programmable interfaces for modern Graphics Processing Units (GPUs) to carefully map DNS computations to GPU architectures that are characterized by a very high memory bandwidth and hundreds of SPMD processors. We compare and contrast the performance of our parallel algorithm on a modern GPU versus a CPU implementation of several turbulence simulation codes. We also demonstrate a prototype of a scalable computational steering framework based on turbulence simulation and visualization coupling on the GPU. %B Bulletin of the American Physical Society %V 52 %N 11 %8 2007/11/12/ %G eng %U http://meetings.aps.org/Meeting/DPP07/Event/70114 %0 Conference Paper %B Computer Vision, 2007. ICCV 2007. IEEE 11th International Conference on %D 2007 %T Probabilistic Fusion Tracking Using Mixture Kernel-Based Bayesian Filtering %A Han,Bohyung %A Joo,Seong-Wook %A Davis, Larry S. 
%K (numerical %K adaptive %K arrangement %K Bayesian %K Filtering %K filtering;multiple %K filters;probabilistic %K fusion %K fusion;tracking; %K integration;mixture %K kernel-based %K methods);sensor %K methods;array %K particle %K processing;particle %K sensors;object %K signal %K system;blind %K techniques;visual %K tracking;Bayes %K tracking;particle %K tracking;sensor %X Even though sensor fusion techniques based on particle filters have been applied to object tracking, their implementations have been limited to combining measurements from multiple sensors by the simple product of individual likelihoods. Therefore, the number of observations is increased as many times as the number of sensors, and the combined observation may become unreliable through blind integration of sensor observations - especially if some sensors are too noisy and non-discriminative. We describe a methodology to model interactions between multiple sensors and to estimate the current state by using a mixture of Bayesian filters - one filter for each sensor, where each filter makes a different level of contribution to estimate the combined posterior in a reliable manner. In this framework, an adaptive particle arrangement system is constructed in which each particle is allocated to only one of the sensors for observation and a different number of samples is assigned to each sensor using prior distribution and partial observations. We apply this technique to visual tracking in logical and physical sensor fusion frameworks, and demonstrate its effectiveness through tracking results. %B Computer Vision, 2007. ICCV 2007. IEEE 11th International Conference on %P 1 - 8 %8 2007/10// %G eng %R 10.1109/ICCV.2007.4408938 %0 Book Section %B Empirical Software Engineering %D 2007 %T Protocols in the use of empirical software engineering artifacts %A Basili, Victor R. %A Zelkowitz, Marvin V %A Sjøberg,D. I.K %A Johnson,P. %A Cowling,A. 
J %X If empirical software engineering is to grow as a valid scientific endeavor, the ability to acquire, use, share, and compare data collected from a variety of sources must be encouraged. This is necessary to validate the formal models being developed within computer science. However, within the empirical software engineering community this has not been easily accomplished. This paper analyses experiences from a number of projects, and defines the issues, which include the following: (1) How should data, testbeds, and artifacts be shared? (2) What limits should be placed on who can use them and how? How does one limit potential misuse? (3) What is the appropriate way to credit the organization and individual that spent the effort collecting the data, developing the testbed, and building the artifact? (4) Once shared, who owns the evolved asset? As a solution to these issues, the paper proposes a framework for an empirical software engineering artifact agreement. Such an agreement is intended to address the needs for both creator and user of such artifacts and should foster a market in making available and using such artifacts. If this framework for sharing software engineering artifacts is commonly accepted, it should encourage artifact owners to make the artifacts accessible to others (gaining credit is more likely and misuse is less likely). It may be easier for other researchers to request artifacts since there will be a well-defined protocol for how to deal with relevant matters. 
%B Empirical Software Engineering %I Springer %V 12 %P 107 - 119 %8 2007/// %G eng %0 Journal Article %J ACM SIGIR Forum %D 2007 %T Searching spontaneous conversational speech %A Jong,Franciska De %A Oard, Douglas %A Ordelman,Roeland %A Raaijmakers,Stephan %B ACM SIGIR Forum %V 41 %P 104 - 104 %8 2007/12/01/ %@ 0163-5840 %G eng %U http://dl.acm.org/citation.cfm?id=1328982 %R 10.1145/1328964.1328982 %0 Journal Article %J Pattern Analysis and Machine Intelligence, IEEE Transactions on %D 2007 %T Shape Classification Using the Inner-Distance %A Ling,Haibin %A Jacobs, David W. %K Automated;Reproducibility of Results;Sensitivity and Specificity; %K Computer-Assisted;Imaging %K Euclidean distance;articulation invariant signatures;computer vision;inner distance;multidimensional scaling;part structure;shape classification;shape descriptors;shape silhouette;shortest path;computational geometry;computer vision;graph theory;image cla %K Three-Dimensional;Pattern Recognition %X Part structure and articulation are of fundamental importance in computer and human vision. We propose using the inner-distance to build shape descriptors that are robust to articulation and capture part structure. The inner-distance is defined as the length of the shortest path between landmark points within the shape silhouette. We show that it is articulation insensitive and more effective at capturing part structures than the Euclidean distance. This suggests that the inner-distance can be used as a replacement for the Euclidean distance to build more accurate descriptors for complex shapes, especially for those with articulated parts. In addition, texture information along the shortest path can be used to further improve shape classification. With this idea, we propose three approaches to using the inner-distance. The first method combines the inner-distance and multidimensional scaling (MDS) to build articulation invariant signatures for articulated shapes. 
The second method uses the inner-distance to build a new shape descriptor based on shape contexts. The third one extends the second one by considering the texture information along shortest paths. The proposed approaches have been tested on a variety of shape databases, including an articulated shape data set, MPEG7 CE-Shape-1, Kimia silhouettes, the ETH-80 data set, two leaf data sets, and a human motion silhouette data set. In all the experiments, our methods demonstrate effective performance compared with other algorithms. %B Pattern Analysis and Machine Intelligence, IEEE Transactions on %V 29 %P 286 - 299 %8 2007/02// %@ 0162-8828 %G eng %N 2 %R 10.1109/TPAMI.2007.41 %0 Journal Article %J Signal Processing Magazine, IEEE %D 2007 %T Signal Processing for Biometric Systems [DSP Forum] %A Jain, A.K. %A Chellappa, Rama %A Draper, S.C. %A Memon, N. %A Phillips,P.J. %A Vetro, A. %K (access %K biometric %K control);security;signal %K forum;signal %K magazine %K PROCESSING %K processing; %K security;biometric %K standardization;fusion %K systems %K technique;multibiometric %K technique;signal %K technology;biometrics %X This IEEE Signal Processing Magazine (SPM) forum discusses signal processing applications, technologies, requirements, and standardization of biometric systems. The forum members bring their expert insights into issues such as biometric security, privacy, and multibiometric and fusion techniques. The invited forum members are Prof. Anil K. Jain of Michigan State University, Prof. Rama Chellappa of the University of Maryland, Dr. Stark C. Draper of the University of Wisconsin in Madison, Prof. Nasir Memon of Polytechnic University, and Dr. P. Jonathon Phillips of the National Institute of Standards and Technology. The moderator of the forum is Dr. Anthony Vetro of Mitsubishi Electric Research Labs, and associate editor of SPM. 
%B Signal Processing Magazine, IEEE %V 24 %P 146 - 152 %8 2007/11// %@ 1053-5888 %G eng %N 6 %R 10.1109/MSP.2007.905886 %0 Conference Paper %B Information Visualization, 2007. IV '07. 11th International Conference %D 2007 %T Similarity-Based Forecasting with Simultaneous Previews: A River Plot Interface for Time Series Forecasting %A Buono,P. %A Plaisant, Catherine %A Simeone,A. %A Aris,A. %A Shneiderman, Ben %A Shmueli,G. %A Jank,W. %K data driven forecasting method %K data visualisation %K Data visualization %K Economic forecasting %K forecasting preview interface %K Graphical user interfaces %K historical time series dataset %K Laboratories %K new stock offerings %K partial time series %K pattern matching %K pattern matching search %K Predictive models %K river plot interface %K Rivers %K similarity-based forecasting %K Smoothing methods %K Technological innovation %K Testing %K time series %K time series forecasting %K Weather forecasting %X Time-series forecasting has a large number of applications. Users with a partial time series for auctions, new stock offerings, or industrial processes desire estimates of the future behavior. We present a data driven forecasting method and interface called similarity-based forecasting (SBF). A pattern matching search in an historical time series dataset produces a subset of curves similar to the partial time series. The forecast is displayed graphically as a river plot showing statistical information about the SBF subset. A forecasting preview interface allows users to interactively explore alternative pattern matching parameters and see multiple forecasts simultaneously. User testing with 8 users demonstrated advantages and led to improvements. %B Information Visualization, 2007. IV '07. 11th International Conference %I IEEE %P 191 - 196 %8 2007/07/04/6 %@ 0-7695-2900-3 %G eng %R 10.1109/IV.2007.101 %0 Journal Article %J ACM Trans. Database Syst. %D 2007 %T Spatial join techniques %A Jacox,Edwin H. 
%A Samet, Hanan %K external memory algorithms %K plane-sweep %K Spatial join %X A variety of techniques for performing a spatial join are reviewed. Instead of just summarizing the literature and presenting each technique in its entirety, distinct components of the different techniques are described and each is decomposed into an overall framework for performing a spatial join. A typical spatial join technique consists of the following components: partitioning the data, performing internal-memory spatial joins on subsets of the data, and checking if the full polygons intersect. Each technique is decomposed into these components and each component addressed in a separate section so as to compare and contrast similar aspects of each technique. The goal of this survey is to describe the algorithms within each component in detail, comparing and contrasting competing methods, thereby enabling further analysis and experimentation with each component and allowing the best algorithms for a particular situation to be built piecemeal, or, even better, enabling an optimizer to choose which algorithms to use. %B ACM Trans. Database Syst. %V 32 %8 2007/03// %@ 0362-5915 %G eng %U http://doi.acm.org/10.1145/1206049.1206056 %N 1 %R 10.1145/1206049.1206056 %0 Conference Paper %B Computer Vision, 2007. ICCV 2007. IEEE 11th International Conference on %D 2007 %T A Study of Face Recognition as People Age %A Ling,Haibin %A Soatto,S. %A Ramanathan,N. %A Jacobs, David W. %K Bayesian %K difference;passport %K machine;face %K machines; %K magnitude;gradient %K orientation %K photo %K pyramid;hierarchical %K recognition;face %K recognition;support %K representation;face %K task;support %K technique;discriminative %K techniques;intensity %K vector %K verification %K verification;gradient %X In this paper we study face recognition across ages within a real passport photo verification task. First, we propose using the gradient orientation pyramid for this task. 
Discarding the gradient magnitude and utilizing hierarchical techniques, we found that the new descriptor yields a robust and discriminative representation. With the proposed descriptor, we model face verification as a two-class problem and use a support vector machine as a classifier. The approach is applied to two passport data sets containing more than 1,800 image pairs from each person with large age differences. Although simple, our approach outperforms the previously tested Bayesian technique and other descriptors, including the intensity difference and gradient with magnitude. In addition, it works as well as two commercial systems. Second, for the first time, we empirically study how age differences affect recognition performance. Our experiments show that, although the aging process adds difficulty to the recognition task, it does not surpass illumination or expression as a confounding factor. %B Computer Vision, 2007. ICCV 2007. IEEE 11th International Conference on %P 1 - 8 %8 2007/10// %G eng %R 10.1109/ICCV.2007.4409069 %0 Journal Article %J Pattern Analysis and Machine Intelligence, IEEE Transactions on %D 2007 %T Surface Dependent Representations for Illumination Insensitive Image Comparison %A Osadchy,M. %A Jacobs, David W. %A Lindenbaum,M. %K Automated;Reproducibility of Results;Sensitivity and Specificity;Subtraction Technique; %K Computer-Assisted;Imaging %K Gabor jets;Gaussian random surface;filter whitening;illumination;image comparison;image matching;lighting condition;multiscale oriented filter;surface dependent representation;image matching;image representation;Algorithms;Artificial Intelligence;Image En %K Three-Dimensional;Information Storage and Retrieval;Lighting;Pattern Recognition %X We consider the problem of matching images to tell whether they come from the same scene viewed under different lighting conditions. We show that the surface characteristics determine the type of image comparison method that should be used. 
Previous work has shown the effectiveness of comparing the image gradient direction for surfaces with material properties that change rapidly in one direction. We show analytically that two other widely used methods, normalized correlation of small windows and comparison of multiscale oriented filters, essentially compute the same thing. Then, we show that for surfaces whose properties change more slowly, comparison of the output of whitening filters is most effective. This suggests that a combination of these strategies should be employed to compare general objects. We discuss indications that Gabor jets use such a mixed strategy effectively, and we propose a new mixed strategy. We validate our results on synthetic and real images. %B Pattern Analysis and Machine Intelligence, IEEE Transactions on %V 29 %P 98 - 111 %8 2007/01// %@ 0162-8828 %G eng %N 1 %R 10.1109/TPAMI.2007.250602 %0 Conference Paper %B AAAI Spring Symposium on Logical Formalizations of Commonsense Reasoning %D 2007 %T Toward domain-neutral human-level metacognition %A Anderson,M. L %A Schmill,M. %A Oates,T. %A Perlis, Don %A Josyula,D. %A Wright,D. %A Wilson,S. %B AAAI Spring Symposium on Logical Formalizations of Commonsense Reasoning %8 2007/// %G eng %0 Conference Paper %B Computer Vision and Pattern Recognition, 2007. CVPR '07. IEEE Conference on %D 2007 %T Using Stereo Matching for 2-D Face Recognition Across Pose %A Castillo,C. D %A Jacobs, David W. %K 2D %K estimation;stereo %K Face %K gallery %K image %K image;2D %K image;dynamic %K matching;dynamic %K matching;pose %K processing; %K programming;face %K programming;pose %K query %K recognition;2D %K recognition;image %K variation;stereo %X We propose using stereo matching for 2-D face recognition across pose. We match one 2-D query image to one 2-D gallery image without performing 3-D reconstruction. Then the cost of this matching is used to evaluate the similarity of the two images. We show that this cost is robust to pose variations. 
To illustrate this idea we built a face recognition system on top of a dynamic programming stereo matching algorithm. The method works well even when the epipolar lines we use do not exactly fit the viewpoints. We have tested our approach on the PIE dataset. In all the experiments, our method demonstrates effective performance compared with other algorithms. %B Computer Vision and Pattern Recognition, 2007. CVPR '07. IEEE Conference on %P 1 - 8 %8 2007/06// %G eng %R 10.1109/CVPR.2007.383111 %0 Report %D 2007 %T Web Archiving: Organizing Web Objects into Web Containers to Optimize Access %A Song,Sangchul %A JaJa, Joseph F. %K Technical Report %X The web is becoming the preferred medium for communicating and storing information pertaining to almost any human activity. However it is an ephemeral medium whose contents are constantly changing, resulting in a permanent loss of part of our cultural and scientific heritage on a regular basis. Archiving important web contents is a very challenging technical problem due to its tremendous scale and complex structure, extremely dynamic nature, and its rich heterogeneous and deep contents. In this paper, we consider the problem of archiving a linked set of web objects into web containers in such a way as to minimize the number of containers accessed during a typical browsing session. We develop a method that makes use of the notion of PageRank and optimized graph partitioning to enable faster browsing of archived web contents. We include simulation results that illustrate the performance of our scheme and compare it to the common scheme currently used to organize web objects into web containers. 
%I Institute for Advanced Computer Studies, Univ of Maryland, College Park %V UMIACS-TR-2007-42 %8 2007/10/09/ %G eng %U http://drum.lib.umd.edu/handle/1903/7426 %0 Journal Article %J IEEE Transactions on Pattern Analysis and Machine Intelligence %D 2006 %T A 3D shape constraint on video %A Hui Ji %A Fermüller, Cornelia %K 3D motion estimation %K algorithms %K Artificial intelligence %K CAMERAS %K decoupling translation from rotation %K Estimation error %K Fluid flow measurement %K Image Enhancement %K Image Interpretation, Computer-Assisted %K Image reconstruction %K Imaging, Three-Dimensional %K Information Storage and Retrieval %K integration of motion fields %K Layout %K minimisation %K Minimization methods %K Motion estimation %K multiple motion fields %K parameter estimation %K Pattern Recognition, Automated %K Photography %K practical constrained minimization %K SHAPE %K shape and rotation. %K shape vectors %K stability %K structure estimation %K surface normals %K Three-dimensional motion estimation %K video 3D shape constraint %K Video Recording %K video signal processing %X We propose to combine the information from multiple motion fields by enforcing a constraint on the surface normals (3D shape) of the scene in view. The fact that the shape vectors in the different views are related only by rotation can be formulated as a rank = 3 constraint. This constraint is implemented in an algorithm which solves 3D motion and structure estimation as a practical constrained minimization. Experiments demonstrate its usefulness as a tool in structure from motion providing very accurate estimates of 3D motion. 
%B IEEE Transactions on Pattern Analysis and Machine Intelligence %V 28 %P 1018 - 1023 %8 2006/06// %@ 0162-8828 %G eng %N 6 %R 10.1109/TPAMI.2006.109 %0 Conference Paper %B 10th International Workshop on Frontiers in Handwriting Recognition %D 2006 %T A New Algorithm for Detecting Text Line in Handwritten Documents %A Li,Yi %A Yefeng Zheng %A David Doermann %A Jaeger,Stefan %X Curvilinear text line detection and segmentation in handwritten documents is a significant challenge for handwriting recognition. Given no prior knowledge of script, we model text line detection as an image segmentation problem by enhancing text line structure using a Gaussian window, and adopting the level set method to evolve text line boundaries. Experiments show that the proposed method achieves high accuracy for detecting text lines in both handwritten and machine printed documents with many scripts. %B 10th International Workshop on Frontiers in Handwriting Recognition %C La Baule, France %P 35 - 40 %8 2006/// %G eng %0 Conference Paper %B Proceedings of the Workshop on Frontiers in Linguistically Annotated Corpora 2006 %D 2006 %T Annotation compatibility working group report %A Meyers,A. %A Fang,A. C. %A Ferro,L. %A Kübler,S. %A Jia-Lin,T. %A Palmer,M. %A Poesio,M. %A Dolbey,A. %A Schuler,K. K. %A Loper,E. %A Zinsmeister,H. %A Penn,G. %A Xue,N. %A Hinrichs,E. %A Wiebe,J. %A Pustejovsky,J. %A Farwell,D. %A Hajicova,E. %A Dorr, Bonnie J %A Hovy,E. %A Onyshkevych,B. A. %A Levin,L. %X This report explores the question of compatibility between annotation projects including translating annotation formalisms to each other or to common forms. Compatibility issues are crucial for systems that use the results of multiple annotation projects. We hope that this report will begin a concerted effort in the field to track the compatibility of annotation schemes for part of speech tagging, time annotation, treebanking, role labeling and other phenomena. 
%B Proceedings of the Workshop on Frontiers in Linguistically Annotated Corpora 2006 %S LAC '06 %I Association for Computational Linguistics %C Stroudsburg, PA, USA %P 38 - 53 %8 2006/// %@ 1-932432-78-7 %G eng %U http://dl.acm.org/citation.cfm?id=1641991.1641997 %0 Conference Paper %B International Conference on Document Recognition and Retrieval XIII %D 2006 %T A Robust Stamp Detection Framework on Degraded Documents %A Zhu,Guangyu %A Jaeger,Stefan %A David Doermann %X In this paper, we present a novel stamp detection framework based on parameter estimation of connected edge features. Using robust basic shape detectors, the approach is effective for stamps with analytically shaped contours. For elliptic/circular stamps, it efficiently makes use of the orientation information from pairs of edge points to determine its center position and area without computing all five parameters of the ellipse. When linking connected edges, we introduce an effective template-based junction removal technique to specifically address the problem that stamps often spatially overlap with background contents. These give our approach a significant advantage in terms of computation complexity over traditional Hough transform methods. Experimental results on real degraded documents demonstrated the robustness of this retrieval approach on a large document database, which consists of both printed text and handwritten notes. This enables us to reliably retrieve documents from a specific source by detecting its official stamps with analytically shaped contours when only limited samples are available. %B International Conference on Document Recognition and Retrieval XIII %I San Jose, CA %P 1 - 9 %8 2006/// %G eng %0 Conference Paper %B Computer Vision and Pattern Recognition Workshop, 2006. CVPRW '06. 
Conference on %D 2006 %T Attribute Grammar-Based Event Recognition and Anomaly Detection %A Joo,Seong-Wook %A Chellapa, Rama %X We propose to use attribute grammars for recognizing normal events and detecting abnormal events in a video. Attribute grammars can describe constraints on features (attributes) in addition to the syntactic structure of the input. Events are recognized using an extension of the Earley parser that handles attributes and concurrent event threads. Abnormal events are detected when the input does not follow the syntax of the grammar or the attributes do not satisfy the constraints in the attribute grammar to some degree. We demonstrate the effectiveness of our method for the task of recognizing normal events and detecting anomalies in a parking lot. %B Computer Vision and Pattern Recognition Workshop, 2006. CVPRW '06. Conference on %P 107 - 107 %8 2006/06// %G eng %R 10.1109/CVPRW.2006.32 %0 Patent %D 2006 %T Cast shadows and linear subspaces for object recognition %A Thornber,Karvel K %A Jacobs, David W. %E NEC Laboratories America, Inc. %X The present invention is a method of deriving a reflectance function that analytically approximates the light reflected from an object model in terms of the spherical harmonic components of light. The reflectance function depends upon the intensity of light incident at each point on the model, but excludes light originating from below a local horizon, therefore not contributing to the reflectance because of the cast shadows. This reflectance function is used in the process of machine vision, by allowing a machine to optimize the reflectance function and arrive at an optimal rendered image of the object model, relative to an input image. Therefore, the recognition of an image produced under variable lighting conditions is more robust. The reflectance function of the present invention also has applicability in other fields, such as computer graphics. 
%V 10/085,750 %8 2006/02/28/ %G eng %U http://www.google.com/patents?id=io54AAAAEBAJ %N 7006684 %0 Journal Article %J International Journal of Human-Computer Interaction %D 2006 %T Creativity Support Tools: Report From a U.S. National Science Foundation Sponsored Workshop %A Shneiderman, Ben %A Fischer,Gerhard %A Czerwinski,Mary %A Resnick,Mitch %A Myers,Brad %A Candy,Linda %A Edmonds,Ernest %A Eisenberg,Mike %A Giaccardi,Elisa %A Hewett,Tom %A Jennings,Pamela %A Kules,Bill %A Nakakoji,Kumiyo %A Nunamaker,Jay %A Pausch,Randy %A Selker,Ted %A Sylvan,Elisabeth %A Terry,Michael %X Creativity support tools is a research topic with high risk but potentially very high payoff. The goal is to develop improved software and user interfaces that empower users to be not only more productive but also more innovative. Potential users include software and other engineers, diverse scientists, product and graphic designers, architects, educators, students, and many others. Enhanced interfaces could enable more effective searching of intellectual resources, improved collaboration among teams, and more rapid discovery processes. These advanced interfaces should also provide potent support in hypothesis formation, speedier evaluation of alternatives, improved understanding through visualization, and better dissemination of results. For creative endeavors that require composition of novel artifacts (e.g., computer programs, scientific papers, engineering diagrams, symphonies, artwork), enhanced interfaces could facilitate exploration of alternatives, prevent unproductive choices, and enable easy backtracking. This U.S. National Science Foundation sponsored workshop brought together 25 research leaders and graduate students to share experiences, identify opportunities, and formulate research challenges. 
Two key outcomes emerged: (a) encouragement to evaluate creativity support tools through multidimensional in-depth longitudinal case studies and (b) formulation of 12 principles for design of creativity support tools. 
%B International Journal of Human-Computer Interaction %V 20 %P 61 - 77 %8 2006/// %@ 1044-7318 %G eng %U http://www.tandfonline.com/doi/abs/10.1207/s15327590ijhc2002_1 %N 2 %R 10.1207/s15327590ijhc2002_1 %0 Report %D 2006 %T Development of a Large-Scale Integrated Neurocognitive Architecture Part 1: Conceptual Framework %A Reggia, James A. %A Tagamets,Malle %A Contreras-Vidal,Jose %A Weems,Scott %A Jacobs, David W. %A Winder,Ransom %A Chabuk,Timur %K Technical Report %X The idea of creating a general purpose machine intelligence that captures many of the features of human cognition goes back at least to the earliest days of artificial intelligence and neural computation. In spite of more than a half-century of research on this issue, there is currently no existing approach to machine intelligence that comes close to providing a powerful, general-purpose human-level intelligence. However, substantial progress made during recent years in neural computation, high performance computing, neuroscience and cognitive science suggests that a renewed effort to produce a general purpose and adaptive machine intelligence is timely, likely to yield qualitatively more powerful approaches to machine intelligence than those currently existing, and certain to lead to substantial progress in cognitive science, AI and neural computation. In this report, we outline a conceptual framework for the long-term development of a large-scale machine intelligence that is based on the modular organization, dynamics and plasticity of the human brain. Some basic design principles are presented along with a review of some of the relevant existing knowledge about the neurobiological basis of cognition. Three intermediate-scale prototypes for parts of a larger system are successfully implemented, providing support for the effectiveness of several of the principles in our framework. 
We conclude that a human-competitive neuromorphic system for machine intelligence is a viable long-term goal, but that for the short term, substantial integration with more standard symbolic methods as well as substantial research will be needed to make this goal achievable. %I Institute for Advanced Computer Studies, Univ of Maryland, College Park %V UMIACS-TR-2006-33 %8 2006/06/15/ %G eng %U http://drum.lib.umd.edu//handle/1903/3665 %0 Report %D 2006 %T Development of a Large-Scale Integrated Neurocognitive Architecture - Part 2: Design and Architecture %A Reggia, James A. %A Tagamets,M. %A Contreras-Vidal,J. %A Jacobs, David W. %A Weems,S. %A Naqvi,W. %A Winder,R. %A Chabuk,T. %A Jung,J. %A Yang,C. %K Technical Report %X In Part 1 of this report, we outlined a framework for creating an intelligent agent based upon modeling the large-scale functionality of the human brain. Building on those results, we begin Part 2 by specifying the behavioral requirements of a large-scale neurocognitive architecture. The core of our long-term approach remains focused on creating a network of neuromorphic regions that provide the mechanisms needed to meet these requirements. However, for the short term of the next few years, it is likely that optimal results will be obtained by using a hybrid design that also includes symbolic methods from AI/cognitive science and control processes from the field of artificial life. We accordingly propose a three-tiered architecture that integrates these different methods, and describe an ongoing computational study of a prototype 'mini-Roboscout' based on this architecture. We also examine the implications of some non-standard computational methods for developing a neurocognitive agent. 
This examination included computational experiments assessing the effectiveness of genetic programming as a design tool for recurrent neural networks for sequence processing, and experiments measuring the speed-up obtained for adaptive neural networks when they are executed on a graphical processing unit (GPU) rather than a conventional CPU. We conclude that the implementation of a large-scale neurocognitive architecture is feasible, and outline a roadmap for achieving this goal. %I Institute for Advanced Computer Studies, Univ of Maryland, College Park %V UMIACS-TR-2006-43 %8 2006/10// %G eng %U http://drum.lib.umd.edu//handle/1903/3957 %0 Conference Paper %B International Conference on Document Recognition and Retrieval XIII %D 2006 %T DOCLIB: a Software Library for Document Processing %A Jaeger,Stefan %A Zhu,Guangyu %A David Doermann %A Chen,Kevin %A Sampat,Summit %X Most researchers would agree that research in the field of document processing can benefit tremendously from a common software library through which institutions are able to develop and share research-related software and applications across academic, business, and government domains. However, despite several attempts in the past, the research community still lacks a widely-accepted standard software library for document processing. This paper describes a new library called DOCLIB, which tries to overcome the drawbacks of earlier approaches. Many of DOCLIB’s features are unique either in themselves or in their combination with others, e.g. the factory concept for support of different image types, the juxtaposition of image data and metadata, or the add-on mechanism. We cherish the hope that DOCLIB serves the needs of researchers better than previous approaches and will readily be accepted by a larger group of scientists. 
%B International Conference on Document Recognition and Retrieval XIII %I San Jose, CA %P 1 - 9 %8 2006/// %G eng %0 Report %D 2006 %T Efficient Isosurface Extraction for Large Scale Time-Varying Data Using the Persistent Hyperoctree (PHOT) %A Shi,Qingmin %A JaJa, Joseph F. %K Technical Report %X We introduce the Persistent HyperOcTree (PHOT) to handle the 4D isocontouring problem for large scale time-varying data sets. This novel data structure is provably space efficient and optimal in retrieving active cells. More importantly, the set of active cells for any possible isovalue are already organized in a Compact Hyperoctree, which enables very efficient slicing of the isocontour along spatial and temporal dimensions. Experimental results based on the very large Richtmyer-Meshkov instability data set demonstrate the effectiveness of our approach. This technique can also be used for other isosurfacing schemes such as view-dependent isosurfacing and ray-tracing, which will benefit from the inherent hierarchical structure associated with the active cells. %I Institute for Advanced Computer Studies, Univ of Maryland, College Park %V UMIACS-TR-2006-01 %8 2006/01/13/T20:3 %G eng %U http://drum.lib.umd.edu/handle/1903/3035 %0 Journal Article %J Evolutionary Computation, IEEE Transactions on %D 2006 %T Evolutionary design of neural network architectures using a descriptive encoding language %A Jung,J. Y %A Reggia, James A. %B Evolutionary Computation, IEEE Transactions on %V 10 %P 676 - 688 %8 2006/// %G eng %N 6 %0 Journal Article %J CTWatch Quarterly %D 2006 %T Experiments to understand HPC time to development %A Hochstein, Lorin %A Nakamura,Taiga %A Basili, Victor R. 
%A Asgari, Sima %A Zelkowitz, Marvin V %A Hollingsworth, Jeffrey K %A Shull, Forrest %A Carver,Jeffrey %A Voelp,Martin %A Zazworka, Nico %A Johnson,Philip %K hackystat %K HPC %K publications-journals %X In order to understand how high performance computing (HPC) programs are developed, a series of experiments, using students in graduate level HPC classes, has been conducted at many universities in the US. In this paper we discuss the general process of conducting those experiments, give some of the early results of those experiments, and describe a web-based process we are developing that will allow us to run additional experiments at other universities and laboratories that will be easier to conduct and generate results that more accurately reflect the process of building HPC programs. %B CTWatch Quarterly %8 2006/// %G eng %U http://csdl.ics.hawaii.edu/techreports/06-08/06-08.pdf %0 Journal Article %J Decision Support Systems %D 2006 %T Exploring auction databases through interactive visualization %A Shmueli,Galit %A Jank,Wolfgang %A Aris,Aleks %A Plaisant, Catherine %A Shneiderman, Ben %K Auction dynamics %K Bid history %K Online auctions %K time series %K user interface %X We introduce AuctionExplorer, a suite of tools for exploring databases of online auctions. The suite combines tools for collecting, processing, and interactively exploring auction attributes (e.g., seller rating), and the bid history (price evolution represented as a time series). Part of AuctionExplorer's power comes from its coupling of the two information structures, thereby allowing exploration of relationships between them. Exploration can be directed by hypothesis testing or exploratory data analysis. We propose a process for visual data analysis and illustrate AuctionExplorer's operations with a dataset of eBay auctions. Insights may improve seller, bidder, auction house, and other vendors' understanding of the market, thereby assisting their decision making process. 
%B Decision Support Systems %V 42 %P 1521 - 1538 %8 2006/12// %@ 0167-9236 %G eng %U http://www.sciencedirect.com/science/article/pii/S0167923606000042 %N 3 %R 10.1016/j.dss.2006.01.001 %0 Journal Article %J Taxon %D 2006 %T First Steps Toward an Electronic Field Guide for Plants %A Agarwal,Gaurav %A Ling,Haibin %A Jacobs, David W. %A Shirdhonkar,Sameer %A Kress,W. John %A Russell,Rusty %A Belhumeur,Peter %A Dixit,An %A Feiner,Steve %A Mahajan,Dhruv %A Sunkavalli,Kalyan %A Ramamoorthi,Ravi %A White,Sean %X In this paper, we will describe our progress towards building a digital collection of the Smithsonian's type specimens, developing recognition algorithms that can match an image of a leaf to the species of plant from which it comes, and designing user interfaces for interacting with an electronic field guide. To start, we are developing a prototype electronic field guide for the flora of Plummers Island, a small, well-studied island in the Potomac River. This prototype system contains multiple images for each of about 130 species of plants on the island, and should soon grow to cover all 200+ species currently recorded (Shetler et al., 2005). Images of full specimens are available, as well as images of isolated leaves of each species. A zoomable user interface allows a user to browse these images, zooming in on ones of interest. Visual recognition algorithms assist a botanist in locating the specimens that are most relevant to identify the species of a plant. The system currently runs on a small hand-held computer. We will describe the components of this prototype, and also describe some of the future challenges we anticipate if we are to provide botanists in the field with all the resources that are now currently available in the world's museums and herbaria. 
Type Specimen Digital Collection The first challenge in producing our electronic field guide is to create a digital collection covering all of the Smithsonian's 85,000 vascular plant type specimens. For each type specimen, the database should eventually include systematically acquired high-resolution digital images of the specimen, textual descriptions, links to decision trees, images of live plants, and 3D models. Figure 1: On the left, our set-up at the Smithsonian for digitally photographing type specimens. On the... %B Taxon %V 55 %P 597 - 610 %8 2006/// %G eng %0 Journal Article %J ACM Transactions on Programming Languages and Systems (TOPLAS) %D 2006 %T Flow-insensitive type qualifiers %A Foster, Jeffrey S. %A Johnson,R. %A Kodumal,J. %A Aiken,A. %B ACM Transactions on Programming Languages and Systems (TOPLAS) %V 28 %P 1035 - 1087 %8 2006/// %G eng %N 6 %0 Thesis %D 2006 %T A holistic approach to structure from motion %A Hui Ji %X This dissertation investigates the general structure from motion problem. That is, how to compute in an unconstrained environment 3D scene structure, camera motion and moving objects from video sequences. We present a framework which uses concatenated feed-back loops to overcome the main difficulty in the structure from motion problem: the chicken-and-egg dilemma between scene segmentation and structure recovery. The idea is that we compute structure and motion in stages by gradually computing 3D scene information of increasing complexity and using processes which operate on increasingly large spatial image areas. Within this framework, we developed three modules. First, we introduce a new constraint for the estimation of shape using image features from multiple views. We analyze this constraint and show that noise leads to unavoidable mis-estimation of the shape, which also predicts the erroneous shape perception in human. This insight provides a clear argument for the need for feed-back loops. 
Second, a novel constraint on shape is developed which allows us to connect multiple frames in the estimation of camera motion by matching only small image patches. Third, we present a texture descriptor for matching areas of extended sizes. The advantage of this texture descriptor, which is based on fractal geometry, lies in its invariance to any smooth mapping (Bi-Lipschitz transform) including changes of viewpoint, illumination and surface distortion. Finally, we apply our framework to the problem of super-resolution imaging. We use the 3D motion estimation together with a novel wavelet-based reconstruction scheme to reconstruct a high-resolution image from a sequence of low-resolution images. %I University of Maryland at College Park %C College Park, MD, USA %8 2006/// %G eng %0 Conference Paper %B Acoustics, Speech and Signal Processing, 2006. ICASSP 2006 Proceedings. 2006 IEEE International Conference on %D 2006 %T Image Registration and Fusion Studies for the Integration of Multiple Remote Sensing Data %A Le Moigne,J. %A Cole-Rhodes,A. %A Eastman,R. %A Jain,P. %A Joshua,A. %A Memarsadeghi,N. %A Mount, Dave %A Netanyahu,N. %A Morisette,J. %A Uko-Ozoro,E. %K ALI %K EO-1 %K fusion studies %K geophysical signal processing %K Hyperion sensors %K image registration %K Internet %K multiple remote sensing data %K multiple source data %K Remote sensing %K Web-based image registration toolbox %X The future of remote sensing will see the development of spacecraft formations, and with this development will come a number of complex challenges such as maintaining precise relative position and specified attitudes. At the same time, there will be increasing needs to understand planetary system processes and build accurate prediction models. One essential technology to accomplish these goals is the integration of multiple source data. For this integration, image registration and fusion represent the first steps and need to be performed with very high accuracy. 
In this paper, we describe studies performed in both image registration and fusion, including a modular framework that was built to describe registration algorithms, a Web-based image registration toolbox, and the comparison of several image fusion techniques using data from the EO-1/ALI and Hyperion sensors. %B Acoustics, Speech and Signal Processing, 2006. ICASSP 2006 Proceedings. 2006 IEEE International Conference on %V 5 %P V - V %8 2006/05// %G eng %R 10.1109/ICASSP.2006.1661494 %0 Report %D 2006 %T Information-aware HyperOctree for effective isosurface rendering of large scale time-varying data %A Kim,J. %A JaJa, Joseph F. %X We develop a new indexing structure and a new out-of-core scheme to extract and render isosurfaces for large scale time-varying 3-D volume data. The new algorithm enables the fast visualization of arbitrary isosurfaces cut by a user-specified hyperplane along any of the four dimensions. Our data structure makes use of the entropy measure to establish the relative resolutions of the spatial and temporal dimensions rather than treating the temporal dimension just as any other dimension. The preprocessing is very efficient and the resulting indexing structure is very compact. We have tested our scheme on a 40GB subset of the Richtmyer-Meshkov instability data set and obtained very good performance for a wide range of isosurface extraction queries.
%I Institute for Advanced Computer Studies, University of Maryland, College Park %V UMIACS-TR-2006-00 %8 2006/// %G eng %0 Book Section %B Systems Biology and Regulatory Genomics %D 2006 %T An Interaction-Dependent Model for Transcription Factor Binding %A Wang,Li-San %A Jensen,Shane %A Hannenhalli, Sridhar %E Eskin,Eleazar %E Ideker,Trey %E Raphael,Ben %E Workman,Christopher %X Transcriptional regulation is accomplished by several transcription factor proteins that bind to specific DNA elements in the relative vicinity of the gene, and interact with each other and with the Polymerase enzyme. Thus the determination of transcription factor-DNA binding is an important step toward understanding transcriptional regulation. An effective way to experimentally determine the genomic regions bound by a transcription factor is by a ChIP-on-chip assay. Then, given the putative genomic regions, computational motif finding algorithms are applied to estimate the DNA binding motif or positional weight matrix for the TF. The a priori expectation is that the presence or absence of the estimated motif in a promoter should be a good indicator of the binding of the TF to that promoter. This association between the presence of the transcription factor motif and its binding is, however, weak in a majority of cases where whole genome ChIP experiments have been performed. One possible reason for this is that the DNA binding of a particular transcription factor depends not only on its own motif, but also on the synergistic or antagonistic action of neighboring motifs for other transcription factors. We believe that modeling this interaction-dependent binding with linear regression can better explain the observed binding data. We assess this hypothesis based on whole genome ChIP-on-chip data for yeast. The derived interactions are largely consistent with previous results that combine ChIP-on-chip data with expression data.
We additionally apply our method to determine interacting partners for CREB and validate our findings based on published experimental results. %B Systems Biology and Regulatory Genomics %S Lecture Notes in Computer Science %I Springer Berlin / Heidelberg %V 4023 %P 225 - 234 %8 2006/// %@ 978-3-540-48293-2 %G eng %U http://dx.doi.org/10.1007/978-3-540-48540-7_19 %0 Journal Article %J Visualization and Computer Graphics, IEEE Transactions on %D 2006 %T Isosurface Extraction and Spatial Filtering using Persistent Octree (POT) %A Shi,Q. %A JaJa, Joseph F. %K 4D isocontour slicing;Richtmyer-Meshkov instability dataset;branch-on-need octree;hybrid data structure;isosurface extraction;persistent octree;spatial filtering;data visualisation;database indexing;feature extraction;octrees;spatial data structures;Algor %K Computer-Assisted;Imaging %K Three-Dimensional;Information Storage and Retrieval;User-Computer Interface; %X We propose a novel persistent octree (POT) indexing structure for accelerating isosurface extraction and spatial filtering from volumetric data. This data structure efficiently handles a wide range of visualization problems such as the generation of view-dependent isosurfaces, ray tracing, and isocontour slicing for high dimensional data. POT can be viewed as a hybrid data structure between the interval tree and the branch-on-need octree (BONO) in the sense that it achieves the asymptotic bound of the interval tree for identifying the active cells corresponding to an isosurface and is more efficient than BONO for handling spatial queries. We encode a compact octree for each isovalue. Each such octree contains only the corresponding active cells, in such a way that the combined structure has linear space. The inherent hierarchical structure associated with the active cells enables very fast filtering of the active cells based on spatial constraints.
We demonstrate the effectiveness of our approach by performing view-dependent isosurfacing on a wide variety of volumetric data sets and 4D isocontour slicing on the time-varying Richtmyer-Meshkov instability dataset %B Visualization and Computer Graphics, IEEE Transactions on %V 12 %P 1283 - 1290 %8 2006/10//sept %@ 1077-2626 %G eng %N 5 %R 10.1109/TVCG.2006.157 %0 Conference Paper %B Proceedings of Convergence Convergence International Congress and Exposition on Transportation Electronics, Detroit, USA %D 2006 %T Model based verification and validation of distributed control architectures %A Ray,A. %A Cleaveland, Rance %A Jiang,S. %A Fuhrman,T. %B Proceedings of Convergence Convergence International Congress and Exposition on Transportation Electronics, Detroit, USA %8 2006/// %G eng %0 Journal Article %J Vision Research %D 2006 %T Noise causes slant underestimation in stereo and motion %A Hui Ji %A Fermüller, Cornelia %K Bias %K Partial bias correction %K shape estimation %K Shape from motion %K Stereo orientation disparity %X This paper discusses a problem, which is inherent in the estimation of 3D shape (surface normals) from multiple views. Noise in the image signal causes bias, which may result in substantial errors in the parameter estimation. The bias predicts the underestimation of slant found in psychophysical and computational experiments. Specifically, we analyze the estimation of 3D shape from motion and stereo using orientation disparity. For the case of stereo, we show that bias predicts the anisotropy in the perception of horizontal and vertical slant. For the case of 3D motion we demonstrate the bias by means of a new illusory display. Finally, we discuss statistically optimal strategies for the problem and suggest possible avenues for visual systems to deal with the bias. 
%B Vision Research %V 46 %P 3105 - 3120 %8 2006/10// %@ 0042-6989 %G eng %U http://www.sciencedirect.com/science/article/pii/S0042698906002124 %N 19 %R 10.1016/j.visres.2006.04.010 %0 Report %D 2006 %T A Novel Information-Aware Octree for the Visualization of Large Scale Time-Varying Data %A Kim,Jusub %A JaJa, Joseph F. %K Technical Report %X Large scale scientific simulations are increasingly generating very large data sets that present substantial challenges to current visualization systems. In this paper, we develop a new scalable and efficient scheme for the visual exploration of 4-D isosurfaces of time varying data by rendering the 3-D isosurfaces obtained through an arbitrary axis-parallel hyperplane cut. The new scheme is based on: (i) a new 4-D hierarchical indexing structure, called Information-Aware Octree; (ii) a controllable delayed fetching technique; and (iii) an optimized data layout. Together, these techniques enable efficient and scalable out-of-core visualization of large scale time varying data sets. We introduce an entropy-based dimension integration technique by which the relative resolutions of the spatial and temporal dimensions are established, and use this information to design a compact size 4-D hierarchical indexing structure. We also present scalable and efficient techniques for out-of-core rendering. Compared with previous algorithms for constructing 4-D isosurfaces, our scheme is substantially faster and requires much less memory. Compared to the Temporal Branch-On-Need octree (T-BON), which can only handle a subset of our queries, our indexing structure is an order of magnitude smaller and is at least as effective in dealing with the queries that the T-BON can handle. We have tested our scheme on two large time-varying data sets and obtained very good performance for a wide range of isosurface extraction queries using an order of magnitude smaller indexing structures than previous techniques.
In particular, we can generate isosurfaces at intermediate time steps very quickly. %I Institute for Advanced Computer Studies, University of Maryland, College Park %V UMIACS-TR-2006-03 %8 2006/04/20/T16:3 %G eng %U http://drum.lib.umd.edu/handle/1903/3335 %0 Report %D 2006 %T NSF Workshop Storage Resource Broker Data Grid Preservation Assessment %A JaJa, Joseph F. %I San Diego Supercomputer Center %V TR-2006.3 %8 2006/// %G eng %0 Conference Paper %B Proceedings of the seventeenth annual ACM-SIAM symposium on Discrete algorithms %D 2006 %T The prize-collecting generalized Steiner tree problem via a new approach of primal-dual schema %A Hajiaghayi, Mohammad T. %A Jain,K. %B Proceedings of the seventeenth annual ACM-SIAM symposium on Discrete algorithms %P 631 - 640 %8 2006/// %G eng %0 Conference Paper %B 2006 IEEE Computer Society Conference on Computer Vision and Pattern Recognition %D 2006 %T A Projective Invariant for Textures %A Yong Xu %A Hui Ji %A Fermüller, Cornelia %K Computer science %K Computer vision %K Educational institutions %K Fractals %K Geometry %K Image texture %K Level set %K lighting %K Robustness %K Surface texture %X Image texture analysis has received a lot of attention in the past years. Researchers have developed many texture signatures based on texture measurements, for the purpose of uniquely characterizing the texture. Existing texture signatures, in general, are not invariant to 3D transforms such as view-point changes and non-rigid deformations of the texture surface, which is a serious limitation for many applications. In this paper, we introduce a new texture signature, called the multifractal spectrum (MFS). It provides an efficient framework combining global spatial invariance and local robust measurements. The MFS is invariant under the bi-Lipschitz map, which includes view-point changes and non-rigid deformations of the texture surface, as well as local affine illumination changes.
Experiments demonstrate that the MFS captures the essential structure of textures with quite low dimension. %B 2006 IEEE Computer Society Conference on Computer Vision and Pattern Recognition %I IEEE %V 2 %P 1932 - 1939 %8 2006/// %@ 0-7695-2597-0 %G eng %R 10.1109/CVPR.2006.38 %0 Conference Paper %B Image Processing, 2006 IEEE International Conference on %D 2006 %T Recognition of Multi-Object Events Using Attribute Grammars %A Joo,Seong-Wook %A Chellapa, Rama %K attribute %K event %K grammar;multiobject %K grammars;image %K identification %K label;probabilistic %K parsing;attribute %K recognition;object %K recognition;probability; %K representation;object %X We present a method for representing and recognizing visual events using attribute grammars. In contrast to conventional grammars, attribute grammars are capable of describing features that are not easily represented by finite symbols. Our approach handles multiple concurrent events involving multiple entities by associating unique object identification labels with multiple event threads. Probabilistic parsing and probabilistic conditions on the attributes are used to achieve a robust recognition system. We demonstrate the effectiveness of our method for the task of recognizing vehicle casing in parking lots and events occurring in an airport tarmac %B Image Processing, 2006 IEEE International Conference on %P 2897 - 2900 %8 2006/10// %G eng %R 10.1109/ICIP.2006.313035 %0 Conference Paper %B Proceedings of the 2006 international conference on Digital government research %D 2006 %T Robust technologies for automated ingestion and long-term preservation of digital information %A JaJa, Joseph F. %K automated ingestion %K digital archiving %K digital preservation %K format registry %K management of preservation processes %X In this summary, we present an overview of our DIGARCH project and report on a number of significant advances achieved thus far. 
In particular, we highlight our contributions to the development of a novel architecture for the Global Digital Format Registry, the design of a highly reliable and scalable deep archive, and the development of the underpinnings of a policy-driven management of preservation processes. Challenges and future plans are briefly outlined. %B Proceedings of the 2006 international conference on Digital government research %S dg.o '06 %I Digital Government Society of North America %P 285 - 286 %8 2006/// %G eng %U http://dx.doi.org/10.1145/1146598.1146674 %R 10.1145/1146598.1146674 %0 Journal Article %J Science of Computer Programming %D 2006 %T Safe manual memory management in Cyclone %A Swamy,Nikhil %A Hicks, Michael W. %A Morrisett,Greg %A Grossman,Dan %A Jim,Trevor %K cyclone %K Memory management %K memory safety %K Reaps %K Reference counting %K regions %K unique pointers %X The goal of the Cyclone project is to investigate how to make a low-level C-like language safe. Our most difficult challenge has been providing programmers with control over memory management while retaining safety. This paper describes our experience trying to integrate and use effectively two previously-proposed, safe memory-management mechanisms: statically-scoped regions and tracked pointers. We found that these typing mechanisms can be combined to build alternative memory-management abstractions, such as reference counted objects and arenas with dynamic lifetimes, and thus provide a flexible basis. Our experience — porting C programs and device drivers, and building new applications for resource-constrained systems — confirms that experts can use these features to improve memory footprint and sometimes to improve throughput when used instead of, or in combination with, conservative garbage collection. 
%B Science of Computer Programming %V 62 %P 122 - 144 %8 2006/10/01/ %@ 0167-6423 %G eng %U http://www.sciencedirect.com/science/article/pii/S0167642306000785 %N 2 %R 10.1016/j.scico.2006.02.003 %0 Report %D 2006 %T Script-Independent Text Line Segmentation in Freestyle Handwritten Documents %A Li,Yi %A Yefeng Zheng %A David Doermann %A Jaeger,Stefan %X Text line segmentation in freestyle handwritten documents remains an open document analysis problem. Curvilinear text lines and small gaps between neighboring text lines present a challenge to algorithms developed for machine printed or hand-printed documents. In this paper, we propose a novel approach based on density estimation and a state-of-the-art image segmentation technique, the level set method. From an input document image, we estimate a probability map, where each element represents the probability that the underlying pixel belongs to a text line. The level set method is then exploited to determine the boundary of neighboring text lines by evolving an initial estimate. Unlike most connected component based methods [1, 2], the proposed algorithm does not use any script-specific knowledge. Extensive quantitative experiments on freestyle handwritten documents with diverse scripts, such as Arabic, Chinese, Korean, and Hindi, demonstrate that our algorithm consistently outperforms previous methods [3, 1, 2]. Further experiments show the proposed algorithm is robust to scale change, rotation, and noise. 
%I University of Maryland, College Park %V LAMP-TR-136, CS-TR-4836, UMIACS-TR-2006-51, CFAR-TR-1017 %8 2006/11// %G eng %0 Journal Article %J Interacting with Computers %D 2006 %T Severity and impact of computer user frustration: A comparison of student and workplace users %A Lazar,Jonathan %A Jones,Adam %A Hackley,Mary %A Shneiderman, Ben %K Computer anxiety %K Computer experience %K Helpdesk %K Training %K User frustration %K user interface design %X User frustration with information and computing technology is a pervasive and persistent problem. When computers crash, network congestion causes delays, and poor user interfaces trigger confusion, there are dramatic consequences for individuals, organizations, and society. These frustrations not only cause personal dissatisfaction and loss of self-efficacy, but may disrupt workplaces, slow learning, and reduce participation in local and national communities. Our exploratory study of 107 student computer users and 50 workplace computer users shows high levels of frustration and loss of 1/3–1/2 of time spent. This paper reports on the incident and individual factors that cause frustration, and how they raise frustration severity. It examines the impact of frustration on the daily interactions of the users. The time lost, the time to fix the problem, and the importance of the task strongly correlate with frustration levels for both student and workplace users. Differences between students and workplace users are discussed in the paper, as are implications for researchers. %B Interacting with Computers %V 18 %P 187 - 207 %8 2006/03// %@ 0953-5438 %G eng %U http://www.sciencedirect.com/science/article/pii/S0953543805000561 %N 2 %R 10.1016/j.intcom.2005.06.001 %0 Journal Article %J Proc. LREC %D 2006 %T SParseval: Evaluation metrics for parsing speech %A Roark,B. %A Harper,M. %A Charniak,E. %A Dorr, Bonnie J %A Johnson,M. %A Kahn,J. %A Liu,Y. %A Ostendorf,M. %A Hale,J. %A Krasnyanskaya,A.
%A others %X While both spoken and written language processing stand to benefit from parsing, the standard Parseval metrics (Black et al., 1991) and their canonical implementation (Sekine and Collins, 1997) are only useful for text. The Parseval metrics are undefined when the words input to the parser do not match the words in the gold standard parse tree exactly, and word errors are unavoidable with automatic speech recognition (ASR) systems. To fill this gap, we have developed a publicly available tool for scoring parses that implements a variety of metrics which can handle mismatches in words and segmentations, including: alignment-based bracket evaluation, alignment-based dependency evaluation, and a dependency evaluation that does not require alignment. We describe the different metrics, how to use the tool, and the outcome of an extensive set of experiments on the sensitivity of the metrics. %B Proc. LREC %8 2006/// %G eng %0 Journal Article %J Software: Practice and Experience %D 2006 %T Synthetic‐perturbation techniques for screening shared memory programs %A Snelick,Robert %A JaJa, Joseph F. %A Kacker,Raghu %A Lyon,Gordon %K Design of experiments %K parallel programs %K performance %K Shared memory programming model %K Synthetic perturbation %X The synthetic-perturbation screening (SPS) methodology is based on an empirical approach; SPS introduces artificial perturbations into the MIMD program and captures the effects of such perturbations by using the modern branch of statistics called design of experiments. SPS can provide the basis of a powerful tool for screening MIMD programs for performance bottlenecks. This technique is portable across machines and architectures, and scales extremely well on massively parallel processors. The purpose of this paper is to explain the general approach and to extend it to address specific features that are the main source of poor performance on the shared memory programming model.
These include performance degradation due to load imbalance and insufficient parallelism, and overhead introduced by synchronizations and by accessing shared data structures. We illustrate the practicality of SPS by demonstrating its use on two very different case studies: a large image understanding benchmark and a parallel quicksort. %B Software: Practice and Experience %V 24 %P 679 - 701 %8 2006/10/30/ %@ 1097-024X %G eng %U http://onlinelibrary.wiley.com/doi/10.1002/spe.4380240802/abstract %N 8 %R 10.1002/spe.4380240802 %0 Conference Paper %B Machine Learning and Cybernetics, 2006 International Conference on %D 2006 %T A Time Saving Method for Human Detection in Wide Angle Camera Images %A Zhuolin Jiang %A Li,Shao-Fa %A Gao,Dong-Fa %K CAMERAS %K Embedded system %K human detection %K image recognition %K Image segmentation %K indoor environment %K Motion estimation %K time-saving method %K wide angle camera images %X A time-saving method for human detection in an indoor environment in wide angle camera images is presented in this paper, without using time-consuming template operations such as edge detection and morphological operations. It excludes other objects in motion to detect humans including cases of the partial occlusion of humans, rotation of the head and variation of skin colors. This algorithm is applicable where the memory space is limited and the time requirements are strict, such as in an embedded system. 
The experimental results show that the algorithm took about 0.35 seconds to run after the scene study time. %B Machine Learning and Cybernetics, 2006 International Conference on %P 4029 - 4034 %8 2006/08// %G eng %R 10.1109/ICMLC.2006.258804 %0 Book Section %B Computer Vision – ECCV 2006 %D 2006 %T Wavelet-Based Super-Resolution Reconstruction: Theory and Algorithm %A Hui Ji %A Fermüller, Cornelia %E Leonardis,Aleš %E Bischof,Horst %E Pinz,Axel %X We present a theoretical analysis and a new algorithm for the problem of super-resolution imaging: the reconstruction of HR (high-resolution) images from a sequence of LR (low-resolution) images. Super-resolution imaging entails solutions to two problems. One is the alignment of image frames. The other is the reconstruction of an HR image from multiple aligned LR images. Our analysis of the latter problem reveals insights into the theoretical limits of super-resolution reconstruction. We find that at best we can reconstruct an HR image blurred by a specific low-pass filter. Based on the analysis we present a new wavelet-based iterative reconstruction algorithm which is very robust to noise. Furthermore, it has a computationally efficient built-in denoising scheme with a nearly optimal risk bound. Roughly speaking, our method could be described as a better-conditioned iterative back-projection scheme with a fast and optimal regularization criterion in each iteration step. Experiments with both simulated and real data demonstrate that our approach has significantly better performance than existing super-resolution methods. It has the ability to remove even large amounts of mixed noise without creating smoothing artifacts.
%B Computer Vision – ECCV 2006 %S Lecture Notes in Computer Science %I Springer Berlin / Heidelberg %V 3954 %P 295 - 307 %8 2006/// %@ 978-3-540-33838-3 %G eng %U http://dx.doi.org/10.1007/11744085_23 %0 Journal Article %J Behaviour & Information Technology %D 2006 %T Workplace user frustration with computers: an exploratory investigation of the causes and severity %A Lazar,Jonathan %A Jones,Adam %A Shneiderman, Ben %X When hard-to-use computers cause users to become frustrated, it can affect workplace productivity, user mood and interactions with other co-workers. Previous research has examined the frustration that students and their families face in using computers. To learn more about the causes and measure the severity of user frustration with computers in the workplace, we collected modified time diaries from 50 workplace users, who spent an average of 5.1 hours on the computer. In this exploratory research, users reported wasting, on average, 42–43% of their time on the computer due to frustrating experiences. The largest number of frustrating experiences occurred while using word processors, email and web browsers. The causes of the frustrating experiences, the time lost due to the frustrating experiences, and the effects of the frustrating experiences on the mood of the users are discussed in this paper. Implications for designers, managers, users, information technology staff and policymakers are discussed. %B Behaviour & Information Technology %V 25 %P 239 - 251 %8 2006/// %@ 0144-929X %G eng %U http://www.tandfonline.com/doi/abs/10.1080/01449290500196963 %N 3 %R 10.1080/01449290500196963 %0 Journal Article %J Journal of Vision %D 2005 %T On the Anisotropy in the Perception of Stereoscopic Slant %A Fermüller, Cornelia %A Hui Ji %X Many visual processes computationally amount to estimation problems. It has been shown that noise in the image data causes consistent errors in the estimation, that is, statistical bias [1]. Here we analyze the effects of bias on 3D shape estimation, and we found that it predicts the perceived underestimation of slant for many settings found in experiments. In particular, we concentrate on the problem of shape estimation from stereo using orientation disparity. We found that bias predicts the anisotropy in the perception of stereoscopic slant, an effect that has not been explained before. It has been found that a surface slanted about the horizontal axis is estimated much more easily and accurately than a surface slanted about the vertical axis [2,3]. In both cases there is an underestimation of slant, but it is much larger for slant about the vertical. Cagenello and Rogers [2] argued that this effect is due to orientation disparity, which, when the texture on the plane is made up of mostly horizontal and vertical lines, is smaller for surfaces slanting about the vertical.
However, as shown in [3], the effect also exists, though in a weaker form, when the texture is made up of lines oriented at 45 degrees. For such a configuration the orientation disparity in the two configurations is about the same. Thus orientation disparity by itself cannot be the cause. But errors in the estimated position and orientation of edgels cause bias, which predicts all the above findings and other parametric studies that we performed. [1]. C. Fermuller, H. Malm. Uncertainty in visual processes predicts geometrical optical illusions, Vision Research, 44, 727-749, 2004. [2]. R. Cagenello, B.J. Rogers. Anisotropies in the perception of stereoscopic surfaces: the role of orientation disparity. Vision Research, 33(16): 2189-2201, 1993. [3]. G.J. Mitchison, S.P. McKee. Mechanisms underlying the anisotropy of stereoscopic tilt perception. Vision Research, 30:1781-1791, 1990. %B Journal of Vision %V 5 %P 516 - 516 %8 2005/09/23/ %@ 1534-7362 %G eng %U http://www.journalofvision.org/content/5/8/516 %N 8 %R 10.1167/5.8.516 %0 Journal Article %J INTERACT'05: Communicating Naturally through Computers (Adjunct Proceedings) %D 2005 %T The Challenge of Personal Information Management %A Dix,A. %A Jones,W. %A Czerwinski,M. %A Teevan,J. %A Plaisant, Catherine %A Moran,TP %B INTERACT'05: Communicating Naturally through Computers (Adjunct Proceedings) %8 2005/// %G eng %0 Journal Article %J C/C++ Users Journal %D 2005 %T Cyclone: A type-safe dialect of C %A Grossman,D. %A Hicks, Michael W. %A Jim,T. %A Morrisett,G. %X If any bug has achieved celebrity status, it is the buffer overflow. It made front-page news as early as 1987, as the enabler of the Morris worm, the first worm to spread through the Internet. In recent years, attacks exploiting buffer overflows have become more frequent, and more virulent.
This year, for example, the Witty worm was released to the wild less than 48 hours after a buffer overflow vulnerability was publicly announced; in 45 minutes, it infected the entire world-wide population of 12,000 machines running the vulnerable programs. Notably, buffer overflows are a problem only for the C and C++ languages—Java and other “safe” languages have built-in protection against them. Moreover, buffer overflows appear in C programs written by expert programmers who are security conscious—programs such as OpenSSH, Kerberos, and the commercial intrusion detection programs that were the target of Witty. This is bad news for C. If security experts have trouble producing overflow-free C programs, then there is not much hope for ordinary C programmers. On the other hand, programming in Java is no panacea; for certain applications, C has no competition. From a programmer’s point of view, all the safe languages are about the same, while C is a very different beast. %B C/C++ Users Journal %V 23 %P 112 - 139 %8 2005/// %G eng %N 1 %0 Conference Paper %B Computer Vision, 2005. ICCV 2005. Tenth IEEE International Conference on %D 2005 %T Deformation invariant image matching %A Ling,Haibin %A Jacobs, David W. %K deformation %K deformations;nonaffine %K deformations;point %K descriptor;geodesic %K distances;geodesic %K geometry;differential %K geometry;image %K histogram;image %K image %K invariant %K local %K matching;computational %K matching;deformation %K matching;image %K morphing; %K sampling;geodesic-intensity %X We propose a novel framework to build descriptors of local intensity that are invariant to general deformations. In this framework, an image is embedded as a 2D surface in 3D space, with intensity weighted relative to distance in x-y. We show that as this weight increases, geodesic distances on the embedded surface are less affected by image deformations. In the limit, distances are deformation invariant.
We use geodesic sampling to get neighborhood samples for interest points, and then use a geodesic-intensity histogram (GIH) as a deformation invariant local descriptor. In addition to its invariance, the new descriptor automatically finds its support region. This means it can safely gather information from a large neighborhood to improve discriminability. Furthermore, we propose a matching method for this descriptor that is invariant to affine lighting changes. We have tested this new descriptor on interest point matching for two data sets, one with synthetic deformation and lighting change, and another with real non-affine deformations. Our method shows promising matching results compared to several other approaches. %B Computer Vision, 2005. ICCV 2005. Tenth IEEE International Conference on %V 2 %P 1466 - 1473 Vol. 2 %8 2005/10// %G eng %R 10.1109/ICCV.2005.67 %0 Conference Paper %B Symposium on Document Image Understanding Technology %D 2005 %T DOCLIB: A document processing research tool %A Chen,Kevin %A Jaeger,Stefan %A Zhu,Guangyu %A David Doermann %X Often, valuable document processing intellectual capital is lost due to staff transitions or project restructuring prior to technology transfer. Furthermore, hardware and software integrity, dependencies, and compatibility are critical components that often impede technology migration. While many open source tools attempt to mitigate these issues, they do not always address the specific design needs and tailored processes that Government organizations must adhere to. This paper addresses the need for a common document processing research vehicle through which institutions can develop and share research-related software and applications across academic, business, and Government domains. %B Symposium on Document Image Understanding Technology %P 159 - 163 %8 2005/// %G eng %0 Patent %D 2005 %T Dynamic network resource allocation using multimedia content features and ... %A M. Wu %A Joyce,Robert A.
%A Vetro,Anthony %A Wong,Hau-San %A Guan,Ling %A Kung,Sun-Yuan %E Mitsubishi Electric Research Labs, Inc. %X A method for dynamically allocating network resources while transferring multimedia at variable bit-rates in a network extracts first content features from the multimedia to determine renegotiation points and observation periods. Second content features and traffic features are extracted from the multimedia bit stream during the observation periods. The second content features and the traffic features are combined in a neural network to predict the network resources to be allocated at the renegotiation points. %V 09/795,952 %8 2005/09/20/ %G eng %U http://www.google.com/patents?id=dToWAAAAEBAJ %N 6947378 %0 Journal Article %J SIGCOMM Comput. Commun. Rev. %D 2005 %T An empirical study of "bogon" route advertisements %A Feamster, Nick %A Jung,Jaeyeon %A Balakrishnan,Hari %K anomalies %K BGP %K bogon prefixes %X An important factor in the robustness of the interdomain routing system is whether the routers in autonomous systems (ASes) filter routes for "bogon" address space---i.e., private address space and address space that has not been allocated by the Internet Assigned Numbers Authority (IANA). This paper presents an empirical study of bogon route announcements, as observed at eight vantage points on the Internet. On average, we observe several bogon routes leaked every few days; a small number of ASes also temporarily leak hundreds of bogon routes. About 40% of these bogon routes are not withdrawn for at least a day. We observed 110 different ASes originating routes for bogon prefixes and a few ASes that were responsible for advertising a disproportionate number of these routes. We also find that some ASes that do filter unallocated prefixes continue to filter them for as long as five months after they have been allocated, mistakenly filtering valid routes. 
Both of these types of delinquencies have serious implications: the failure to filter valid prefixes could make nefarious activities such as denial of service attacks difficult to trace; failure to update filters when new prefixes are allocated prevents legitimate routes from being globally visible. %B SIGCOMM Comput. Commun. Rev. %V 35 %P 63 - 70 %8 2005/01// %@ 0146-4833 %G eng %U http://doi.acm.org/10.1145/1052812.1052826 %N 1 %R 10.1145/1052812.1052826 %0 Conference Paper %B Computer Vision, 2005. ICCV 2005. Tenth IEEE International Conference on %D 2005 %T On the equivalence of common approaches to lighting insensitive recognition %A Osadchy,M. %A Jacobs, David W. %A Lindenbaum,M. %K 3D %K conditions;lighting %K cosine %K difference;image %K direction %K filters;gradient %K function;image %K insensitive %K intensity;lighting %K recognition;image %K recognition;lighting %K scenes;Gaussian %K segmentation; %K variation;monotonic %X Lighting variation is commonly handled by methods invariant to additive and multiplicative changes in image intensity. It has been demonstrated that comparing images using the direction of the gradient can produce broader insensitivity to changes in lighting conditions, even for 3D scenes. We analyze two common approaches to image comparison that are invariant, normalized correlation using small correlation windows, and comparison based on a large set of oriented difference of Gaussian filters. We show analytically that these methods calculate a monotonic (cosine) function of the gradient direction difference and hence are equivalent to the direction of gradient method. Our analysis is supported with experiments on both synthetic and real scenes. %B Computer Vision, 2005. ICCV 2005. Tenth IEEE International Conference on %V 2 %P 1721-1726 Vol.
2 %8 2005/10// %G eng %R 10.1109/ICCV.2005.179 %0 Conference Paper %D 2005 %T An experimental evaluation to determine if port scans are precursors to an attack %A Panjwani,S. %A Tan,S. %A Jarrin,K.M. %A Michel Cukier %K attack data collection %K computer crime %K filtered data groups %K ICMP scans %K IP address %K IP networks %K management traffic %K port scans %K telecommunication security %K Telecommunication traffic %K vulnerability scans %X This paper describes an experimental approach to determine the correlation between port scans and attacks. Discussions in the security community often state that port scans should be considered as precursors to an attack. However, very few studies have been conducted to quantify the validity of this hypothesis. In this paper, attack data were collected using a test-bed dedicated to monitoring attackers. The data collected consist of port scans, ICMP scans, vulnerability scans, successful attacks and management traffic. Two experiments were performed to validate the hypothesis of linking port scans and vulnerability scans to the number of packets observed per connection. Customized scripts were then developed to filter the collected data and group them on the basis of scans and attacks between a source and destination IP address pair. The correlation of the filtered data groups was assessed. The analyzed data consists of forty-eight days of data collection for two target computers on a heavily utilized subnet. %P 602 - 611 %8 2005/07/01/june %G eng %R 10.1109/DSN.2005.18 %0 Journal Article %J Science %D 2005 %T The Genome Sequence of Trypanosoma cruzi, Etiologic Agent of Chagas Disease %A El‐Sayed, Najib M. %A Myler,Peter J. %A Bartholomeu,Daniella C. %A Nilsson,Daniel %A Aggarwal,Gautam %A Tran,Anh-Nhi %A Ghedin,Elodie %A Worthey,Elizabeth A. %A Delcher,Arthur L. %A Blandin,Gaëlle %A Westenberger,Scott J. %A Caler,Elisabet %A Cerqueira,Gustavo C. 
%A Branche,Carole %A Haas,Brian %A Anupama,Atashi %A Arner,Erik %A Åslund,Lena %A Attipoe,Philip %A Bontempi,Esteban %A Bringaud,Frédéric %A Burton,Peter %A Cadag,Eithon %A Campbell,David A. %A Carrington,Mark %A Crabtree,Jonathan %A Darban,Hamid %A da Silveira,Jose Franco %A de Jong,Pieter %A Edwards,Kimberly %A Englund,Paul T. %A Fazelina,Gholam %A Feldblyum,Tamara %A Ferella,Marcela %A Frasch,Alberto Carlos %A Gull,Keith %A Horn,David %A Hou,Lihua %A Huang,Yiting %A Kindlund,Ellen %A Klingbeil,Michele %A Kluge,Sindy %A Koo,Hean %A Lacerda,Daniela %A Levin,Mariano J. %A Lorenzi,Hernan %A Louie,Tin %A Machado,Carlos Renato %A McCulloch,Richard %A McKenna,Alan %A Mizuno,Yumi %A Mottram,Jeremy C. %A Nelson,Siri %A Ochaya,Stephen %A Osoegawa,Kazutoyo %A Pai,Grace %A Parsons,Marilyn %A Pentony,Martin %A Pettersson,Ulf %A Pop, Mihai %A Ramirez,Jose Luis %A Rinta,Joel %A Robertson,Laura %A Salzberg,Steven L. %A Sanchez,Daniel O. %A Seyler,Amber %A Sharma,Reuben %A Shetty,Jyoti %A Simpson,Anjana J. %A Sisk,Ellen %A Tammi,Martti T. %A Tarleton,Rick %A Teixeira,Santuza %A Van Aken,Susan %A Vogt,Christy %A Ward,Pauline N. %A Wickstead,Bill %A Wortman,Jennifer %A White,Owen %A Fraser,Claire M. %A Stuart,Kenneth D. %A Andersson,Björn %X Whole-genome sequencing of the protozoan pathogen Trypanosoma cruzi revealed that the diploid genome contains a predicted 22,570 proteins encoded by genes, of which 12,570 represent allelic pairs. Over 50% of the genome consists of repeated sequences, such as retrotransposons and genes for large families of surface molecules, which include trans-sialidases, mucins, gp63s, and a large novel family (>1300 copies) of mucin-associated surface protein (MASP) genes. Analyses of the T. cruzi, T. brucei, and Leishmania major (Tritryp) genomes imply differences from other eukaryotes in DNA repair and initiation of replication and reflect their unusual mitochondrial DNA. 
Although the Tritryp lack several classes of signaling molecules, their kinomes contain a large and diverse set of protein kinases and phosphatases; their size and diversity imply previously unknown interactions and regulatory processes, which may be targets for intervention. %B Science %V 309 %P 409 - 415 %8 2005/07/15/ %@ 0036-8075, 1095-9203 %G eng %U http://www.sciencemag.org/content/309/5733/409.abstract %N 5733 %R 10.1126/science.1112631 %0 Conference Paper %B 8th Int. Conf. on Document Analysis and Recognition %D 2005 %T Identifying Script on Word-Level with Informational Confidence %A Jaeger,Stefan %A Ma,Huanfeng %A David Doermann %X In this paper, we present a multiple classifier system for script identification. Applying a Gabor filter analysis of textures on word-level, our system identifies Latin and non-Latin words in bilingual printed documents. The classifier system comprises four different architectures based on nearest neighbors, weighted Euclidean distances, Gaussian mixture models, and support vector machines. We report results for Arabic, Chinese, Hindi, and Korean script.
Moreover, we show that combining informational confidence values using the sum rule can consistently outperform the best single recognition rate. %B 8th Int. Conf. on Document Analysis and Recognition %P 416 - 420 %8 2005/08// %G eng %0 Conference Paper %B Proceedings of the 2005 conference on Applications, technologies, architectures, and protocols for computer communications %D 2005 %T Implications of autonomy for the expressiveness of policy routing %A Feamster, Nick %A Johari,Ramesh %A Balakrishnan,Hari %K autonomy %K BGP %K Internet %K policy %K protocol %K Routing %K Safety %K stability %X Thousands of competing autonomous systems must cooperate with each other to provide global Internet connectivity. Each autonomous system (AS) encodes various economic, business, and performance decisions in its routing policy. The current interdomain routing system enables each AS to express policy using rankings that determine how each router in the AS chooses among different routes to a destination, and filters that determine which routes are hidden from each neighboring AS. Because the Internet is composed of many independent, competing networks, the interdomain routing system should provide autonomy, allowing network operators to set their rankings independently, and to have no constraints on allowed filters. This paper studies routing protocol stability under these conditions. We first demonstrate that certain rankings that are commonly used in practice may not ensure routing stability. We then prove that, when providers can set rankings and filters autonomously, guaranteeing that the routing system will converge to a stable path assignment essentially requires ASes to rank routes based on AS-path lengths. We discuss the implications of these results for the future of interdomain routing.
%B Proceedings of the 2005 conference on Applications, technologies, architectures, and protocols for computer communications %S SIGCOMM '05 %I ACM %C New York, NY, USA %P 25 - 36 %8 2005/// %@ 1-59593-009-4 %G eng %U http://doi.acm.org/10.1145/1080091.1080096 %R 10.1145/1080091.1080096 %0 Conference Paper %B IEEE Computer Society Conference on Computer Vision and Pattern Recognition, 2005. CVPR 2005 %D 2005 %T Integration of motion fields through shape %A Ji,H. %A Fermüller, Cornelia %K 3D motion estimation %K Automation %K CAMERAS %K computational geometry %K Computer vision %K constrained minimization problem %K decoupling translation from rotation %K Educational institutions %K image colour analysis %K image gradients %K image resolution %K Image segmentation %K image sequence %K Image sequences %K integration of motion fields %K Layout %K minimisation %K Motion estimation %K motion field integration %K motion segmentation %K parameter estimation %K planar patches %K rank-3 constraint %K scene patches %K SHAPE %K shape and rotation %K shape estimation %K structure estimation %X Structure from motion from single flow fields has been studied intensively, but the integration of information from multiple flow fields has not received much attention. Here we address this problem by enforcing constraints on the shape (surface normals) of the scene in view, as opposed to constraints on the structure (depth). The advantage of integrating shape is two-fold. First, we do not need to estimate feature correspondences over multiple frames, but we only need to match patches. Second, the shape vectors in the different views are related only by rotation. This constraint on shape can be combined easily with motion estimation, thus formulating motion and structure estimation from multiple views as a practical constrained minimization problem using a rank-3 constraint. 
Based on this constraint, we develop a 3D motion technique, which locates, through color and motion segmentation, planar patches in the scene, matches patches over multiple frames, and estimates the motion between multiple frames and the shape of the selected scene patches using the image gradients. Experiments evaluate the accuracy of the 3D motion estimation and demonstrate the motion and shape estimation of the technique by super-resolving an image sequence. %B IEEE Computer Society Conference on Computer Vision and Pattern Recognition, 2005. CVPR 2005 %I IEEE %V 2 %P 663-669 vol. 2 %8 2005/06/20/25 %@ 0-7695-2372-2 %G eng %R 10.1109/CVPR.2005.190 %0 Patent %D 2005 %T Lambertian reflectance and linear subspaces %A Jacobs, David W. %A Basri,Ronen %E NEC Laboratories America, Inc. %X A method for choosing an image from a plurality of three-dimensional models which is most similar to an input image is provided. The method includes the steps of: (a) providing a database of the plurality of three-dimensional models; (b) providing an input image; (c) positioning each three-dimensional model relative to the input image; (d) for each three-dimensional model, determining a rendered image that is most similar to the input image by: (d)(i) computing a linear subspace that describes an approximation to the set of all possible rendered images that each three-dimensional model can produce under all possible lighting conditions where each point in the linear subspace represents a possible image; and one of (d)(ii) finding the point on the linear subspace that is closest to the input image or finding a rendered image in a subset of the linear subspace obtained by projecting the set of images that are generated by positive lights onto the linear subspace; (e) computing a...
%V 09/705,507 %8 2005/02/08/ %G eng %U http://www.google.com/patents?id=6hEWAAAAEBAJ %N 6853745 %0 Journal Article %J ACM transactions on graphics %D 2005 %T Mesh saliency %A Lee,Chang Ha %A Varshney, Amitabh %A Jacobs, David W. %K perception %K saliency %K simplification %K viewpoint selection %K visual attention %X Research over the last decade has built a solid mathematical foundation for representation and analysis of 3D meshes in graphics and geometric modeling. Much of this work, however, does not explicitly incorporate models of low-level human visual attention. In this paper we introduce the idea of mesh saliency as a measure of regional importance for graphics meshes. Our notion of saliency is inspired by low-level human visual system cues. We define mesh saliency in a scale-dependent manner using a center-surround operator on Gaussian-weighted mean curvatures. We observe that such a definition of mesh saliency is able to capture what most would classify as visually interesting regions on a mesh. The human-perception-inspired importance measure computed by our mesh saliency operator results in more visually pleasing results in processing and viewing of 3D meshes, compared to using a purely geometric measure of shape, such as curvature. We discuss how mesh saliency can be incorporated in graphics applications such as mesh simplification and viewpoint selection and present examples that show visually appealing results from using mesh saliency. %B ACM transactions on graphics %V 24 %P 659 - 666 %8 2005/07// %@ 0730-0301 %G eng %U http://doi.acm.org/10.1145/1073204.1073244 %N 3 %R 10.1145/1073204.1073244 %0 Conference Paper %B AAAI Spring Symposium on Metacognition in Computation %D 2005 %T Metacognition for dropping and reconsidering intentions %A Josyula,D. P %A Anderson,M. L %A Perlis, Don %B AAAI Spring Symposium on Metacognition in Computation %8 2005/// %G eng %0 Conference Paper %B Acoustics, Speech, and Signal Processing, 2005. Proceedings.
(ICASSP '05). IEEE International Conference on %D 2005 %T A method for converting a smiling face to a neutral face with applications to face recognition %A Ramachandran, M. %A Zhou,S. K %A Jhalani, D. %A Chellapa, Rama %K appearance-based %K Expression %K Face %K face; %K feature %K invariant %K motion; %K neutral %K nonrigid %K normalization; %K recognition; %K smiling %X The human face displays a variety of expressions, like smile, sorrow, surprise, etc. All these expressions constitute nonrigid motions of various features of the face. These expressions lead to a significant change in the appearance of a facial image which leads to a drop in the recognition accuracy of a face-recognition system trained with neutral faces. There are other factors like pose and illumination which also lead to performance drops. Researchers have proposed methods to tackle the effects of pose and illumination; however, there has been little work on how to tackle expressions. We attempt to address the issue of expression invariant face-recognition. We present preprocessing steps for converting a smiling face to a neutral face. We expect that this would in turn make the vector in the feature space closer to the correct vector in the gallery, in an appearance-based face recognition. This conjecture is supported by our recognition results which demonstrate that the accuracy goes up if we include the expression-normalization block. %B Acoustics, Speech, and Signal Processing, 2005. Proceedings. (ICASSP '05). IEEE International Conference on %V 2 %P ii/977-ii/980 Vol. 2 %8 2005/03// %G eng %R 10.1109/ICASSP.2005.1415570 %0 Conference Paper %B Mass Storage Systems and Technologies, 2005. Proceedings. 22nd IEEE / 13th NASA Goddard Conference on %D 2005 %T Mitigating risk of data loss in preservation environments %A Moore,R.W. %A JaJa, Joseph F. %A Chadduck,R.
%K archives; %K authentication; %K authenticity; %K computing; %K data %K databases; %K digital %K distributed %K environment; %K Grid %K integrity; %K management; %K message %K objects; %K persistent %K preservation %K record %K risk %K storage %X Preservation environments manage digital records for time periods that are much longer than that of a single vendor product. A primary requirement is the preservation of the authenticity and integrity of the digital records while simultaneously minimizing the cost of long-term storage, as the data is migrated onto successive generations of technology. The emergence of low-cost storage hardware has made it possible to implement innovative software systems that minimize risk of data loss and preserve authenticity and integrity. This paper describes software mechanisms in use in current persistent archives and presents an example based upon the NARA research prototype persistent archive. %B Mass Storage Systems and Technologies, 2005. Proceedings. 22nd IEEE / 13th NASA Goddard Conference on %P 39 - 48 %8 2005/04// %G eng %R 10.1109/MSST.2005.20 %0 Journal Article %J Theoretical Computer Science %D 2005 %T A new framework for addressing temporal range queries and some preliminary results %A Shi,Qingmin %A JaJa, Joseph F. %K algorithms %K Data structures %K Orthogonal range search %K temporal data %X Given a set of n objects, each characterized by d attributes specified at m fixed time instances, we are interested in the problem of designing space efficient indexing structures such that a class of temporal range search queries can be handled efficiently. When m = 1 , our problem reduces to the d-dimensional orthogonal search problem. We establish efficient data structures to handle several classes of the general problem. Our results include a linear size data structure that enables a query time of O ( log n log m + f ) for one-sided queries when d = 1 , where f is the number of objects satisfying the query. 
A similar result is shown for counting queries. We also show that the most general problem can be solved with a polylogarithmic query time using superlinear space data structures. %B Theoretical Computer Science %V 332 %P 109 - 121 %8 2005/02/28/ %@ 0304-3975 %G eng %U http://www.sciencedirect.com/science/article/pii/S0304397504007005 %N 1–3 %R 10.1016/j.tcs.2004.10.013 %0 Journal Article %J Proc. of IEEE International Conference on Computer Vision %D 2005 %T Non-negative lighting and specular object recognition %A Jacobs, David W. %A Shirdhonkar,S. %B Proc. of IEEE International Conference on Computer Vision %V 2 %8 2005/// %G eng %0 Conference Paper %B Computer Vision, 2005. ICCV 2005. Tenth IEEE International Conference on %D 2005 %T Non-negative lighting and specular object recognition %A Shirdhonkar,S. %A Jacobs, David W. %K and %K distribution %K eigenfunctions;image %K eigenvalue %K harmonic %K Lambertian %K lighting;nonnegative %K lighting;semidefinite %K matching;object %K object %K objects;Szego %K optimization;incident %K programming;specular %K recognition;optimisation; %K recognition;spherical %K representation;eigenvalues %K theorem;constrained %X Recognition of specular objects is particularly difficult because their appearance is much more sensitive to lighting changes than that of Lambertian objects. We consider an approach in which we use a 3D model to deduce the lighting that best matches the model to the image. In this case, an important constraint is that incident lighting should be non-negative everywhere. In this paper, we propose a new method to enforce this constraint and explore its usefulness in specular object recognition, using the spherical harmonic representation of lighting. The method follows from a novel extension of Szego's eigenvalue distribution theorem to spherical harmonics, and uses semidefinite programming to perform a constrained optimization. The new method is faster as well as more accurate than previous methods. 
Experiments on both synthetic and real data indicate that the constraint can improve recognition of specular objects by better separating the correct and incorrect models. %B Computer Vision, 2005. ICCV 2005. Tenth IEEE International Conference on %V 2 %P 1323-1330 Vol. 2 %8 2005/10// %G eng %R 10.1109/ICCV.2005.168 %0 Journal Article %J SIAM Journal on Computing %D 2005 %T Novel transformation techniques using q-heaps with applications to computational geometry %A Shi,Q. %A JaJa, Joseph F. %X Using the notions of Q-heaps and fusion trees developed by Fredman and Willard, we develop general transformation techniques to reduce a number of computational geometry problems to their special versions in partially ranked spaces. In particular, we develop a fast fractional cascading technique, which uses linear space and enables sublogarithmic iterative search on catalog trees in the case when the degree of each node is bounded by O(log^ϵ n), for some constant ϵ > 0, where n is the total size of all the lists stored in the tree. We apply the fast fractional cascading technique in combination with the other techniques to derive the first linear-space sublogarithmic time algorithms for the two fundamental geometric retrieval problems: orthogonal segment intersection and rectangular point enclosure. %B SIAM Journal on Computing %V 34 %P 1474 - 1492 %8 2005/// %G eng %N 6 %0 Journal Article %J Information Processing Letters %D 2005 %T Optimal and near-optimal algorithms for generalized intersection reporting on pointer machines %A Shi,Qingmin %A JaJa, Joseph F. %K algorithms %K computational geometry %K Generalized intersection %X We develop efficient algorithms for a number of generalized intersection reporting problems, including orthogonal and general segment intersection, 2D range searching, rectangular point enclosure, and rectangle intersection search.
Our results for orthogonal and general segment intersection, 3-sided 2D range searching, and rectangular point enclosure problems match the lower bounds for their corresponding standard versions under the pointer machine model. Our results for the remaining problems improve upon the best known previous algorithms. %B Information Processing Letters %V 95 %P 382 - 388 %8 2005/08/16/ %@ 0020-0190 %G eng %U http://www.sciencedirect.com/science/article/pii/S0020019005001183 %N 3 %R 10.1016/j.ipl.2005.04.008 %0 Journal Article %J Institute for Systems Research Technical Reports %D 2005 %T Representing Unevenly-Spaced Time Series Data for Visualization and Interactive Exploration (2005) %A Aris,Aleks %A Shneiderman, Ben %A Plaisant, Catherine %A Shmueli,Galit %A Jank,Wolfgang %K Technical Report %X Visualizing time series data is useful to support discovery of relations and patterns in financial, genomic, medical and other applications. In most time series, measurements are equally spaced over time. This paper discusses the challenges for unevenly-spaced time series data and presents four methods to represent them: sampled events, aggregated sampled events, event index and interleaved event index. We developed these methods while studying eBay auction data with TimeSearcher. We describe the advantages, disadvantages, choices for algorithms and parameters, and compare the different methods. Since each method has its advantages, this paper provides guidance for choosing the right combination of methods, algorithms, and parameters to solve a given problem for unevenly-spaced time series. Interaction issues such as screen resolution, response time for dynamic queries, and meaning of the visual display are governed by these decisions.
%B Institute for Systems Research Technical Reports %8 2005/// %G eng %U http://drum.lib.umd.edu/handle/1903/6537 %0 Book Section %B Human-Computer Interaction - INTERACT 2005 %D 2005 %T Representing Unevenly-Spaced Time Series Data for Visualization and Interactive Exploration %A Aris,Aleks %A Shneiderman, Ben %A Plaisant, Catherine %A Shmueli,Galit %A Jank,Wolfgang %E Costabile,Maria %E Paternò,Fabio %X Visualizing time series is useful to support discovery of relations and patterns in financial, genomic, medical and other applications. Often, measurements are equally spaced over time. We discuss the challenges of unevenly-spaced time series and present four representation methods: sampled events, aggregated sampled events, event index and interleaved event index. We developed these methods while studying eBay auction data with TimeSearcher. We describe the advantages, disadvantages, choices for algorithms and parameters, and compare the different methods for different tasks. Interaction issues such as screen resolution, response time for dynamic queries, and learnability are governed by these decisions. %B Human-Computer Interaction - INTERACT 2005 %S Lecture Notes in Computer Science %I Springer Berlin / Heidelberg %V 3585 %P 835 - 846 %8 2005/// %@ 978-3-540-28943-2 %G eng %U http://dx.doi.org/10.1007/11555261_66 %0 Journal Article %J Information Security Practice and Experience %D 2005 %T Robust routing in malicious environment for ad hoc networks %A Yu,Z. %A Seng,C. Y %A Jiang,T. %A Wu,X. %A Arbaugh, William A. %B Information Security Practice and Experience %P 36 - 47 %8 2005/// %G eng %0 Book Section %B Approximation, Randomization and Combinatorial Optimization. Algorithms and Techniques %D 2005 %T Scheduling on Unrelated Machines Under Tree-Like Precedence Constraints %A Kumar, V.
%A Marathe,Madhav %A Parthasarathy,Srinivasan %A Srinivasan, Aravind %E Chekuri,Chandra %E Jansen,Klaus %E Rolim,José %E Trevisan,Luca %X We present polylogarithmic approximations for the R | prec | C_max and R | prec | ∑_j w_j C_j problems, when the precedence constraints are “treelike” – i.e., when the undirected graph underlying the precedences is a forest. We also obtain improved bounds for the weighted completion time and flow time for the case of chains with restricted assignment – this generalizes the job shop problem to these objective functions. We use the same lower bound of “congestion+dilation”, as in other job shop scheduling approaches. The first step in our algorithm for the R | prec | C_max problem with treelike precedences involves using the algorithm of Lenstra, Shmoys and Tardos to obtain a processor assignment with the congestion + dilation value within a constant factor of the optimal. We then show how to generalize the random delays technique of Leighton, Maggs and Rao to the case of trees. For the weighted completion time, we show a certain type of reduction to the makespan problem, which dovetails well with the lower bound we employ for the makespan problem. For the special case of chains, we show a dependent rounding technique which leads to improved bounds on the weighted completion time and new bicriteria bounds for the flow time. %B Approximation, Randomization and Combinatorial Optimization.
Algorithms and Techniques %S Lecture Notes in Computer Science %I Springer Berlin / Heidelberg %V 3624 %P 609 - 609 %8 2005/// %@ 978-3-540-28239-6 %G eng %U http://dx.doi.org/10.1007/11538462_13 %0 Report %D 2005 %T Severity and Impact of Computer User Frustration: A Comparison of Student and Workplace Users (2002) %A Lazar,Jonathan %A Jones,Adam %A Hackley,Mary %A Shneiderman, Ben %K Technical Report %X User frustration with information and computing technology is a pervasive and persistent problem. When computers crash, network congestion causes delays, and poor user interfaces trigger confusion, there are dramatic consequences for individuals, organizations, and society. These frustrations not only cause personal dissatisfaction and loss of self-efficacy, but may disrupt workplaces, slow learning, and reduce participation in local and national communities. Our study of 107 student computer users and 50 workplace computer users shows high levels of frustration and loss of 1/3 to 1/2 of time spent. This paper reports on the incident-specific and user-specific causes of frustration and how they raise frustration severity. It examines the frustration impacts on the daily interactions of the users. The time lost, time to fix the problem, and importance of the task strongly correlate with frustration levels for both student and workplace users. Differences between students and workplace users are discussed in the paper. %B Institute for Systems Research Technical Reports %8 2005/// %G eng %U http://drum.lib.umd.edu/handle/1903/6496 %0 Journal Article %J Algorithms and Computation %D 2005 %T Space-efficient and fast algorithms for multidimensional dominance reporting and counting %A JaJa, Joseph F. %A Mortensen,C. %A Shi,Q. %X We present linear-space sub-logarithmic algorithms for handling the 3-dimensional dominance reporting and the 2-dimensional dominance counting problems. Under the RAM model as described in [M. L. Fredman and D. E. Willard.
“Surpassing the information theoretic bound with fusion trees”, Journal of Computer and System Sciences, 47:424–436, 1993], our algorithms achieve O(log n/log log n + f) query time for the 3-dimensional dominance reporting problem, where f is the output size, and O(log n/log log n) query time for the 2-dimensional dominance counting problem. We extend these results to any constant dimension d ≥ 3, achieving O(n(log n/log log n)^(d–3)) space and O((log n/log log n)^(d–2) + f) query time for the reporting case and O(n(log n/log log n)^(d–2)) space and O((log n/log log n)^(d–1)) query time for the counting case. %B Algorithms and Computation %P 1755 - 1756 %8 2005/// %G eng %R 10.1007/978-3-540-30551-4_49 %0 Report %D 2005 %T Stable Policy Routing with Provider Independence %A Feamster, Nick %A Johari,Ramesh %A Balakrishnan,Hari %X Thousands of competing autonomous systems (ASes) must cooperate with each other to provide global Internet connectivity. These ASes encode various economic, business, and performance decisions in their routing policies. The current interdomain routing system enables ASes to express policy using rankings that determine how each router in an AS orders the different routes to a destination, and filters that determine which routes are hidden from each neighboring AS. Since the Internet is composed of many independent, competing networks, the interdomain routing system should allow providers to set their rankings independently, and to have no constraints on allowed filters. This paper studies routing protocol stability under these constraints. We first demonstrate that certain rankings that are commonly used in practice may not ensure routing stability. We then prove that, with ranking independence and unrestricted filtering, guaranteeing that the routing system will converge to a stable path assignment essentially requires ASes to rank routes based on AS-path lengths. Finally, we discuss the implications of these results for the future of interdomain routing.
%I Massachusetts Institute of Technology Computer Science and Artificial Intelligence Laboratory %V MIT-CSAIL-TR-2005-009 %8 2005/02/08/ %G eng %U http://dspace.mit.edu/handle/1721.1/30522 %0 Book Section %B Coordination Models and Languages %D 2005 %T Tagged Sets: A Secure and Transparent Coordination Medium %A Oriol,Manuel %A Hicks, Michael W. %E Jacquet,Jean-Marie %E Picco,Gian %X A simple and effective way of coordinating distributed, mobile, and parallel applications is to use a virtual shared memory (VSM), such as a Linda tuple-space. In this paper, we propose a new kind of VSM, called a tagged set. Each element in the VSM is a value with an associated tag, and values are read or removed from the VSM by matching the tag. Tagged sets exhibit three properties useful for VSMs: (1) Ease of use. A tagged value naturally corresponds to the notion that data has certain attributes, expressed by the tag, which can be used for later retrieval. (2) Flexibility. Tags are implemented as propositional logic formulae, and selection as logical implication, so the resulting system is quite powerful. Tagged sets naturally support a variety of applications, such as shared data repositories (e.g., for media or e-mail), message passing, and publish/subscribe algorithms; they are powerful enough to encode existing VSMs, such as Linda spaces. (3) Security. Our notion of tags naturally corresponds to keys, or capabilities: a user may not select data in the set unless she presents a legal key or keys. Normal tags correspond to symmetric keys, and we introduce asymmetric tags that correspond to public and private key pairs. Treating tags as keys permits users to easily specify protection criteria for data at a fine granularity. This paper motivates our approach, sketches its basic theory, and places it in the context of other data management strategies.
%B Coordination Models and Languages %S Lecture Notes in Computer Science %I Springer Berlin / Heidelberg %V 3454 %P 193 - 205 %8 2005/// %@ 978-3-540-25630-4 %G eng %U http://dx.doi.org/10.1007/11417019_17 %0 Journal Article %J Algorithms and Computation %D 2005 %T Techniques for indexing and querying temporal observations for a collection of objects %A Shi,Q. %A JaJa, Joseph F. %X We consider the problem of dynamically indexing temporal observations about a collection of objects, each observation consisting of a key identifying the object, a list of attribute values and a timestamp indicating the time at which these values were recorded. We make no assumptions about the rates at which these observations are collected, nor do we assume that the various objects have about the same number of observations. We develop indexing structures that are almost linear in the total number of observations available at any given time instance, and that support dynamic additions of new observations in polylogarithmic time. Moreover, these structures allow the quick handling of queries to identify objects whose attribute values fall within a certain range at every time instance of a specified time interval. Provably good bounds are established. %B Algorithms and Computation %P 822 - 834 %8 2005/// %G eng %R 10.1007/978-3-540-30551-4_70 %0 Patent %D 2005 %T Torrance-Sparrow off-specular reflection and linear subspaces for object recognition %A Thornber,Karvel K %A Jacobs, David W. %E NEC Laboratories America, Inc. %X The Torrance-Sparrow model of off-specular reflection is recast in a significantly simpler and more transparent form in order to render a spherical-harmonic decomposition more feasible.
By assuming that a physical surface consists of small, reflecting facets whose surface normals satisfy a normal distribution, the model captures the off-specular enhancement of the reflected intensity distribution often observed at large angles of incidence and reflection, features beyond the reach of the phenomenological broadening models usually employed. In passing we remove a physical inconsistency in the original treatment, restoring reciprocity and correcting the dependence of reflectance on angle near grazing incidence. It is noted that the results predicted by the model are relatively insensitive to values of its one parameter, the width of the distribution of surface normals. %V 10/230,888 %8 2005/05/31/ %G eng %U http://www.google.com/patents?id=LI4VAAAAEBAJ %N 6900805 %0 Conference Paper %B IEEE Symposium on Information Visualization, 2005. INFOVIS 2005 %D 2005 %T Turning information visualization innovations into commercial products: lessons to guide the next success %A Shneiderman, Ben %A Rao,R. %A Andrews,K. %A Ahlberg,C. %A Brodbeck,D. %A Jewitt,T. %A Mackinlay,J. %K Books %K commercial development %K commercial product %K Computer interfaces %K Computer science %K data visualisation %K Data visualization %K Educational institutions %K exploratory data analysis %K information visualization innovation %K information visualization tool %K innovation management %K Laboratories %K Management training %K new technology emergence %K Technological innovation %K technology transfer %K Turning %K User interfaces %X As information visualization matures as an academic research field, commercial spinoffs are proliferating, but success stories are harder to find. This is the normal process of emergence for new technologies, but the panel organizers believe that there are certain strategies that facilitate success. To teach these lessons, we have invited several key figures who are seeking to commercialize information visualization tools. 
The panelists make short presentations, engage in a moderated discussion, and respond to audience questions. %B IEEE Symposium on Information Visualization, 2005. INFOVIS 2005 %I IEEE %P 241 - 244 %8 2005/10/23/25 %@ 0-7803-9464-X %G eng %R 10.1109/INFVIS.2005.1532153 %0 Journal Article %J Institute for Systems Research Technical Reports %D 2005 %T User Frustration with Technology in the Workplace (2004) %A Lazar,Jonathan %A Jones,Adam %A Bessiere,Katie %A Ceaparu,Irina %A Shneiderman, Ben %K Technical Report %X When hard-to-use computers cause users to become frustrated, it can affect workplace productivity, user mood, and interactions with other co-workers. Previous research has examined the frustration that graduate students and their families face in using computers. To learn more about the causes and effects of user frustration with computers in the workplace, we collected modified time diaries from 50 workplace users, who spent an average of 5.1 hours on the computer. In this experiment, users reported wasting, on average, 42-43% of their time on the computer due to frustrating experiences. The causes of the frustrating experiences, the time lost due to the frustrating experiences, and the effects of the frustrating experiences on the mood of the users are discussed in this paper. Implications for designers, managers, users, information technology staff, and policymakers are discussed. %B Institute for Systems Research Technical Reports %8 2005/// %G eng %U http://drum.lib.umd.edu/handle/1903/6515 %0 Conference Paper %B Computer Vision and Pattern Recognition, 2005. CVPR 2005. IEEE Computer Society Conference on %D 2005 %T Using the inner-distance for classification of articulated shapes %A Ling,H. %A Jacobs, David W.
%K articulated shapes %K CE-Shape-1 database %K classification %K dynamic programming %K human motion silhouette dataset %K image databases %K inner-distance %K Kimia silhouettes %K landmark points %K MPEG7 %K shape descriptor %K shape matching %K Swedish leaf database %X We propose using the inner-distance between landmark points to build shape descriptors. The inner-distance is defined as the length of the shortest path between landmark points within the shape silhouette. We show that the inner-distance is articulation insensitive and more effective at capturing complex shapes with part structures than Euclidean distance. To demonstrate this idea, it is used to build a new shape descriptor based on shape contexts. After that, we design a dynamic programming based method for shape matching and comparison. We have tested our approach on a variety of shape databases including an articulated shape dataset, MPEG7 CE-Shape-1, Kimia silhouettes, a Swedish leaf database and a human motion silhouette dataset. In all the experiments, our method demonstrates effective performance compared with other algorithms. %B Computer Vision and Pattern Recognition, 2005. CVPR 2005. IEEE Computer Society Conference on %V 2 %P 719 - 726 vol. 2 %8 2005/06// %G eng %R 10.1109/CVPR.2005.362 %0 Conference Paper %B Proceedings of the Workshop Empirically Successful First-Order Reasoning, International Joint Conference on Automated Reasoning %D 2004 %T Active logic for more effective human-computer interaction and other commonsense applications %A Anderson,M. L %A Josyula,D. %A Perlis, Don %A Purang,K. %B Proceedings of the Workshop Empirically Successful First-Order Reasoning, International Joint Conference on Automated Reasoning %8 2004/// %G eng %0 Journal Article %J Trends in Parasitology %D 2004 %T Advances in schistosome genomics %A El‐Sayed, Najib M. %A Bartholomeu,Daniella %A Ivens,Alasdair %A Johnston,David A. %A LoVerde,Philip T.
%X In Spring 2004, the first draft of the 270 Mb genome of Schistosoma mansoni will be released. This sequence is based on the assembly and annotation of a >7.5-fold-coverage shotgun sequencing project. The key stages involved in the international collaborative efforts that have led to the generation of these sequencing data for the parasite S. mansoni are discussed here. %B Trends in Parasitology %V 20 %P 154 - 157 %8 2004/04/01/ %@ 1471-4922 %G eng %U http://www.sciencedirect.com/science/article/pii/S1471492204000480 %N 4 %R 10.1016/j.pt.2004.02.002 %0 Book Section %B Computer Vision - ECCV 2004 %D 2004 %T Bias in Shape Estimation %A Hui Ji %A Fermüller, Cornelia %E Pajdla,Tomáš %E Matas,Jirí %X This paper analyses the uncertainty in the estimation of shape from motion and stereo. It is shown that there are computational limitations of a statistical nature that previously have not been recognized. Because there is noise in all the input parameters, we cannot avoid bias. The analysis rests on a new constraint which relates image lines and rotation to shape. Because the human visual system has to cope with bias as well, it makes errors. This explains the underestimation of slant found in computational and psychophysical experiments, and demonstrated here for an illusory display. We discuss properties of the best known estimators with regard to the problem, as well as possible avenues for visual systems to deal with the bias. %B Computer Vision - ECCV 2004 %S Lecture Notes in Computer Science %I Springer Berlin / Heidelberg %V 3023 %P 405 - 416 %8 2004/// %@ 978-3-540-21982-8 %G eng %U http://dx.doi.org/10.1007/978-3-540-24672-5_32 %0 Journal Article %J Computer Vision-ECCV 2004 %D 2004 %T Characterization of human faces under illumination variations using rank, integrability, and symmetry constraints %A Zhou, S. %A Chellapa, Rama %A Jacobs, David W.
%X Photometric stereo algorithms use a Lambertian reflectance model with a varying albedo field and involve the appearances of only one object. This paper extends photometric stereo algorithms to handle all the appearances of all the objects in a class, in particular the class of human faces. Similarity among all facial appearances motivates a rank constraint on the albedos and surface normals in the class. This leads to a factorization of an observation matrix that consists of exemplar images of different objects under different illuminations, which is beyond what can be analyzed using bilinear analysis. Bilinear analysis requires exemplar images of different objects under the same illuminations. To fully recover the class-specific albedos and surface normals, integrability and face symmetry constraints are employed. The proposed linear algorithm takes into account the effects of the varying albedo field by approximating the integrability terms using only the surface normals. As an application, face recognition under illumination variation is presented. The rank constraint enables an algorithm to separate the illumination source from the observed appearance and keep the illuminant-invariant information that is appropriate for recognition. Good recognition results have been obtained using the PIE dataset. %B Computer Vision-ECCV 2004 %P 588 - 601 %8 2004/// %G eng %R 10.1007/978-3-540-24670-1_45 %0 Conference Paper %B Genetic and Evolutionary Computation–GECCO 2004 %D 2004 %T A descriptive encoding language for evolving modular neural networks %A Jung,J. Y %A Reggia, James A. %B Genetic and Evolutionary Computation–GECCO 2004 %P 519 - 530 %8 2004/// %G eng %0 Conference Paper %B PROCEEDINGS OF THE NATIONAL CONFERENCE ON ARTIFICIAL INTELLIGENCE %D 2004 %T Domain-independent reason-enhanced controller for task-oriented systems-DIRECTOR %A Josyula,D. P %A Anderson,M. L
%A Perlis, Don %B PROCEEDINGS OF THE NATIONAL CONFERENCE ON ARTIFICIAL INTELLIGENCE %P 1014 - 1015 %8 2004/// %G eng %0 Report %D 2004 %T Efficient Serial and Parallel Algorithms for Querying Large Scale Multidimensional Time Series Data %A JaJa, Joseph F. %A Kim,J. %A Wang,Q. %X We consider the problem of querying large scale multidimensional time series data to discover events of interest, test and validate hypotheses, or to associate temporal patterns with specific events. Multidimensional time series data is growing at an extremely fast rate due to a number of trends including a recent strong interest in collecting and analyzing time series of business, scientific, demographic, and simulation data. The ability to explore such collections interactively, even at a coarse level, will be critical to the process of extracting some of the information and knowledge embedded in such collections. We develop indexing techniques and search algorithms to efficiently handle temporal range value querying of multidimensional time series data. Our indexing uses linear space data structures that enable the handling of queries very efficiently, invoking in the worst case a logarithmic number of queries to single time steps. We also show that our algorithm is ideally suited for parallel implementation on clusters of processors achieving a linear speedup in the number of available processors. A particularly simple data structure with provably good bounds is also presented for the case when the number of multidimensional objects is relatively small. These techniques improve significantly over previous techniques for either the serial or the parallel case, and are evaluated by extensive experimental results that confirm their superior performance.
In particular, we achieve query times in the order of hundreds of milliseconds on a (relatively outdated) cluster of 16 processors for 140GB of data consisting of 160,000 distinct time series of 16-dimensional points, each time series being of length 10,000. %I Institute for Advanced Computer Studies, Univ of Maryland, College Park %V UMIACS-TR-2004-50 %8 2004/// %G eng %0 Conference Paper %B Proceedings of the 4th international symposium on Memory management %D 2004 %T Experience with safe manual memory-management in cyclone %A Hicks, Michael W. %A Morrisett,Greg %A Grossman,Dan %A Jim,Trevor %K cyclone %K Memory management %K memory safety %K regions %K unique pointers %X The goal of the Cyclone project is to investigate type safety for low-level languages such as C. Our most difficult challenge has been providing programmers control over memory management while retaining type safety. This paper reports on our experience trying to integrate and effectively use two previously proposed, type-safe memory management mechanisms: statically-scoped regions and unique pointers. We found that these typing mechanisms can be combined to build alternative memory-management abstractions, such as reference counted objects and arenas with dynamic lifetimes, and thus provide a flexible basis. Our experience---porting C programs and building new applications for resource-constrained systems---confirms that experts can use these features to improve memory footprint and sometimes to improve throughput when used instead of, or in combination with, conservative garbage collection. %B Proceedings of the 4th international symposium on Memory management %S ISMM '04 %I ACM %C New York, NY, USA %P 73 - 84 %8 2004/// %@ 1-58113-945-4 %G eng %U http://doi.acm.org/10.1145/1029873.1029883 %R 10.1145/1029873.1029883 %0 Journal Article %J International Journal of Computer Vision %D 2004 %T Generalized photometric stereo and its application to face recognition %A Zhou, S.
%A Chellapa, Rama %A Jacobs, David W. %X Most photometric stereo algorithms employ a Lambertian reflectance model with a varying albedo field and involve the appearances of only one object. The recovered albedos and surface normals are object-specific and appearances not belonging to the object cannot be easily handled. We generalize photometric stereo algorithms to handle all appearances of all objects in a class, in particular the human face class, by assuming that albedos and surface normals of all objects in the class are rank-constrained, i.e. lie in a subspace. Rank constraints lead us to a factorization of an observation matrix that consists of exemplar images of different objects under different illuminations. To fully recover the subspace bases or class-specific albedos and surface normals, we employ integrability and face symmetry constraints and propose a linearized algorithm. This algorithm takes into account the effects of the varying albedo field by approximating the integrability terms using only the surface normals. We then apply our generalized photometric stereo algorithm for recognizing faces under illumination variations. As far as recognition is concerned, we can utilize a bootstrap set which is just a collection of 2D image observations to avoid an explicit requirement that 3D information be available. We obtain good recognition results using the PIE database. %B International Journal of Computer Vision %8 2004/// %G eng %0 Journal Article %J Parallel and Distributed Systems, IEEE Transactions on %D 2004 %T A global-state-triggered fault injector for distributed system evaluation %A Chandra,Ramesh %A Lefever,R.M. %A Joshi,K.R. %A Michel Cukier %A Sanders,W. H.
%K distributed processing %K distributed system evaluation %K fault tolerant computing %K global-state-based fault injection mechanism %K Loki %K offline clock synchronization %K performance evaluation %K post-runtime analysis %K Synchronisation %K system recovery %K user-specified performance %X Validation of the dependability of distributed systems via fault injection is gaining importance because distributed systems are being increasingly used in environments with high dependability requirements. The fact that distributed systems can fail in subtle ways that depend on the state of multiple parts of the system suggests that a global-state-based fault injection mechanism should be used to validate them. However, global-state-based fault injection is challenging since it is very difficult in practice to maintain the global state of a distributed system at runtime with minimal intrusion into the system execution. We present Loki, a global-state-based fault injector, which has been designed with the goals of low intrusion, high precision, and high flexibility. Loki achieves these goals by utilizing the ideas of partial view of global state, optimistic synchronization, and offline analysis. In Loki, faults are injected based on a partial view of the global state of the system, and a post-runtime analysis is performed to place events and injections into a single global timeline and to discard experiments with incorrect fault injections. Finally, the experiments with correct fault injections are used to estimate user-specified performance and dependability measures. A flexible measure language has been designed that facilitates the specification of a wide range of measures.
%B Parallel and Distributed Systems, IEEE Transactions on %V 15 %P 593 - 605 %8 2004/07// %@ 1045-9219 %G eng %N 7 %R 10.1109/TPDS.2004.14 %0 Journal Article %J Lecture Notes in Computer Science %D 2004 %T Illumination, Reflectance, and Reflection-Characterization of Human Faces under Illumination Variations Using Rank, Integrability, and Symmetry Constraints %A Zhou,S. K %A Chellapa, Rama %A Jacobs, David W. %B Lecture Notes in Computer Science %V 3021 %P 588 - 601 %8 2004/// %G eng %0 Report %D 2004 %T Managing the 802.11 energy/performance tradeoff with machine learning %A Monteleoni,C. %A Balakrishnan,H. %A Feamster, Nick %A Jaakkola,T. %X This paper addresses the problem of managing the tradeoff between energy consumption and performance in wireless devices implementing the IEEE 802.11 standard. To save energy, the 802.11 specification proposes a power-saving mode (PSM), where a device can sleep to save energy, periodically waking up to receive packets from a neighbor (e.g., an access point) that may have buffered packets for the sleeping device. Previous work has shown that a fixed polling time for waking up degrades the performance of Web transfers, because network activity is bursty and time-varying. We apply a new online machine learning algorithm to this problem and show, using ns simulation and trace analysis, that it is able to adapt well to network activity. The learning process makes no assumptions about the underlying network activity being stationary or even Markov. Our learning power-saving algorithm, LPSM, guides the learning using a "loss function" that combines the increased latency from potentially sleeping too long and the wasted use of energy in waking up too soon. In our ns simulations, LPSM saved 7%-20% more energy than 802.11 in power-saving mode, with an associated increase in average latency by a factor of 1.02, and not more than 1.2. LPSM is straightforward to implement within the 802.11 PSM framework.
%I Massachusetts Institute of Technology Computer Science and Artificial Intelligence Laboratory %V MIT-CSAIL-TR-2004-068 %8 2004/// %G eng %U http://hdl.handle.net/1721.1/30499 %0 Journal Article %J International Journal of High Performance Computing Applications %D 2004 %T Measuring HPC productivity %A Faulk,S. %A Gustafson,J. %A Johnson,P. %A Porter, Adam %A Tichy,W. %A Votta,L. %B International Journal of High Performance Computing Applications %V 18 %P 459 - 473 %8 2004/// %G eng %N 4 %0 Journal Article %J Environmental and Ecological Statistics %D 2004 %T Multiscale advanced raster map analysis system: definition, design and development %A Patil,G. P. %A Balbus,J. %A Biging,G. %A JaJa, Joseph F. %A Myers,W. L. %A Taillie,C. %X This paper brings together a multidisciplinary initiative to develop advanced statistical and computational techniques for analyzing, assessing, and extracting information from raster maps. This information will provide a rigorous foundation to address a wide range of applications including disease mapping, emerging infectious diseases, landscape ecological assessment, land cover trends and change detection, watershed assessment, and map accuracy assessment. It will develop an advanced map analysis system that integrates these techniques with an advanced visualization toolbox, and use the system to conduct large case studies using rich sets of raster data, primarily from remotely sensed imagery. As a result, it will be possible to study and evaluate raster maps of societal, ecological, and environmental variables to facilitate quantitative characterization and comparative analysis of geospatial trends, patterns, and phenomena. In addition to environmental and ecological studies, these techniques and tools can be used for policy decisions at national, state, and local levels, crisis management, and protection of infrastructure. Geospatial data form the foundation of an information-based society. 
Remote sensing has been a vastly under-utilized resource involving a multi-million dollar investment at the national levels. Even when utilized, the credibility has been at stake, largely because of a lack of tools that can assess, visualize, and communicate accuracy and reliability in a timely manner and at desired confidence levels. Consider an imminent 21st century scenario: What message does a multi-categorical map have about the large landscape it represents? And at what scale, and at what level of detail? Does the spatial pattern of the map reveal any societal, ecological, environmental condition of the landscape? And therefore can it be an indicator of change? How do you automate the assessment of the spatial structure and behavior of change to discover critical areas, hot spots, and their corridors? Is the map accurate? How accurate is it? How do you assess the accuracy of the map? How do we evaluate a temporal change map for change detection? What are the implications of the kind and amount of change and accuracy on what matters, whether climate change, carbon emission, water resources, urban sprawl, biodiversity, indicator species, human health, or early warning? And with what confidence? The proposed research initiative is expected to find answers to these questions and a few more that involve multi-categorical raster maps based on remote sensing and other geospatial data. It includes the development of techniques for map modeling and analysis using Markov Random Fields, geospatial statistics, accuracy assessment and change detection, upper echelons of surfaces, advanced computational techniques for geospatial data mining, and advanced visualization techniques. %B Environmental and Ecological Statistics %V 11 %P 113 - 138 %8 2004/// %G eng %N 2 %R 10.1023/B:EEST.0000027205.77490.8c %0 Journal Article %J Applied Cryptography and Network Security %D 2004 %T One-round protocols for two-party authenticated key exchange %A Jeong,I. R %A Katz, Jonathan %A Lee,D. H
%B Applied Cryptography and Network Security %P 220 - 232 %8 2004/// %G eng %0 Report %D 2004 %T PAWN: Producer-Archive Workflow Network in support of digital preservation %A Smorul,M. %A JaJa, Joseph F. %A Wang, Y. %A McCall,F. %X We describe the design and the implementation of the PAWN (Producer–Archive Workflow Network) environment to enable secure and distributed ingestion of digital objects into a persistent archive. PAWN was developed to capture the core elements required for long term preservation of digital objects as identified by previous research in the digital library and archiving communities. In fact, PAWN can be viewed as an implementation of the Ingest Process as defined by the Open Archival Information System (OAIS) Reference Model, and is currently being used to ingest significant collections into a pilot persistent archive developed through a collaboration between the San Diego Supercomputer Center, the University of Maryland, and the National Archives and Records Administration. We make use of METS (Metadata Encoding and Transmission Standards) to encapsulate content, structural, descriptive, and preservation metadata. The basic software components are based on open standards and web technologies, and hence are platform independent. %I Institute for Advanced Computer Studies, Univ of Maryland, College Park %V UMIACS-TR-2004 %P 2006 - 2006 %8 2004/// %G eng %0 Conference Paper %B Parallel Architectures, Algorithms and Networks, 2004. Proceedings. 7th International Symposium on %D 2004 %T Strategies for exploring large scale data %A JaJa, Joseph F.
%K asymptotic bounds %K business data %K data mining %K database indexing %K demographic data %K knowledge discovery %K large scale databases %K linear space data structures %K multidimensional objects %K optimal algorithms %K parallel processing %K pattern association %K query processing %K scientific data %K search structures %K serial algorithms %K simulation data %K temporal range value querying %K time series %K very large databases %X We consider the problem of querying large scale multidimensional time series data to discover events of interest, test and validate hypotheses, or to associate temporal patterns with specific events. This type of data currently dominates most other types of available data, and will very likely become even more prevalent in the future given the current trends in collecting time series of business, scientific, demographic, and simulation data. The ability to explore such collections interactively, even at a coarse level, will be critical in discovering the information and knowledge embedded in such collections. We develop indexing techniques and search algorithms to efficiently handle temporal range value querying of multidimensional time series data. Our indexing uses linear space data structures that enable the handling of queries in I/O time that is essentially the same as that of handling a single time slice, assuming the availability of a logarithmic number of processors as a function of the temporal window. A data structure with provably almost optimal asymptotic bounds is also presented for the case when the number of multidimensional objects is relatively small. These techniques improve significantly over standard techniques for either serial or parallel processing, and are evaluated by extensive experimental results that confirm their superior performance. %B Parallel Architectures, Algorithms and Networks, 2004. Proceedings.
7th International Symposium on %P 2 - 2 %8 2004/05// %G eng %R 10.1109/ISPAN.2004.1300447 %0 Journal Article %J Security Privacy, IEEE %D 2004 %T Susceptibility matrix: a new aid to software auditing %A Jiwnani,K. %A Zelkowitz, Marvin V %K program testing %K security of data %K software auditing %K susceptibility matrix %K taxonomy-based approach %K vulnerabilities %X Testing for security is lengthy, complex, and costly, so focusing test efforts in areas that have the greatest number of security vulnerabilities is essential. This article describes a taxonomy-based approach that gives an insight into the distribution of vulnerabilities in a system. %B Security Privacy, IEEE %V 2 %P 16 - 21 %8 2004/04//mar %@ 1540-7993 %G eng %N 2 %R 10.1109/MSECP.2004.1281240 %0 Conference Paper %B Proceedings of SSDBM %D 2004 %T Temporal range exploration of large scale multidimensional time series data %A JaJa, Joseph F. %A Kim,J. %A Wang,Q. %X We consider the problem of querying large scale multidimensional time series data to discover events of interest, test and validate hypotheses, or to associate temporal patterns with specific events. This type of data currently dominates most other types of available data, and will very likely become even more prevalent in the future given the current trends in collecting time series of business, scientific, demographic, and simulation data. The ability to explore such collections interactively, even at a coarse level, will be critical in discovering the information and knowledge embedded in such collections. We develop indexing techniques and search algorithms to efficiently handle temporal range value querying of multidimensional time series data. Our indexing uses linear space data structures that enable the handling of queries in I/O time that is essentially the same as that of handling a single time slice, assuming the availability of a logarithmic number of processors as a function of the size of the temporal window.
A data structure with provably good bounds is also presented for the case when the number of multidimensional objects is relatively small. These techniques improve significantly over standard techniques for either serial or parallel processing, and are evaluated by extensive experimental results that confirm their superior performance. %B Proceedings of SSDBM %P 95 - 106 %8 2004/// %G eng %0 Book Section %B Trust Management %D 2004 %T Using Trust in Recommender Systems: An Experimental Analysis %A Massa,Paolo %A Bhattacharjee, Bobby %E Jensen,Christian %E Poslad,Stefan %E Dimitrakos,Theo %K Computer Science %X Recommender systems (RSs) have been used for suggesting items (movies, books, songs, etc.) that users might like. RSs compute a similarity between users and use it as a weight for the users’ ratings. However, they have many weaknesses, such as sparseness, cold start, and vulnerability to attacks. We assert that these weaknesses can be alleviated using a trust-aware system that takes into account the “web of trust” provided by every user. Specifically, we analyze data from the popular Internet web site epinions.com. The dataset consists of 49290 users who expressed reviews (with ratings) on items and explicitly specified their web of trust, i.e., users whose reviews they have consistently found to be valuable. We show that any two users usually have few items rated in common. For this reason, the classic RS technique is often ineffective and is not able to compute a user similarity weight for many of the users. Instead, by exploiting the web of trust, it is possible to propagate trust and infer an additional weight for other users. We show how this quantity can be computed for a larger number of users.
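The trust-propagation step is only outlined in the abstract above; as a minimal sketch, a breadth-first walk of the web of trust with linear distance decay captures the idea. The decay rule, the `max_depth` horizon, and all names here are illustrative assumptions, not Massa and Bhattacharjee's actual scheme.

```python
from collections import deque

def propagate_trust(web_of_trust, source, max_depth=3):
    """Breadth-first trust propagation: a user directly trusted by
    `source` gets weight 1.0, and weight decays linearly with hop
    distance, reaching 0 beyond `max_depth` hops.

    web_of_trust: dict user -> set of users they explicitly trust.
    Returns dict user -> inferred trust weight in (0, 1].
    """
    weights = {}
    frontier = deque([(source, 0)])
    seen = {source}
    while frontier:
        user, depth = frontier.popleft()
        for trusted in web_of_trust.get(user, ()):
            if trusted not in seen and depth + 1 <= max_depth:
                seen.add(trusted)
                weights[trusted] = (max_depth - depth) / max_depth
                frontier.append((trusted, depth + 1))
    return weights

wot = {"alice": {"bob"}, "bob": {"carol"}, "carol": {"dave"}}
print(propagate_trust(wot, "alice"))
```

The inferred weight can then stand in for the user-similarity weight wherever the classic collaborative-filtering formula has no co-rated items to work with.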
%B Trust Management %S Lecture Notes in Computer Science %I Springer Berlin / Heidelberg %V 2995 %P 221 - 235 %8 2004/// %@ 978-3-540-21312-3 %G eng %U http://dx.doi.org/10.1007/978-3-540-24747-0_17 %0 Journal Article %J Applied and Environmental Microbiology %D 2004 %T Viable but Nonculturable Vibrio Cholerae O1 in the Aquatic Environment of Argentina %A Binsztein,Norma %A Costagliola,Marcela C. %A Pichel,Mariana %A Jurquiza,Verónica %A Ramírez,Fernando C. %A Akselman,Rut %A Vacchino,Marta %A Huq,Anwarul %A Rita R Colwell %X In Argentina, as in other countries of Latin America, cholera has occurred in an epidemic pattern. Vibrio cholerae O1 is native to the aquatic environment, and it occurs in both culturable and viable but nonculturable (VNC) forms, the latter during interepidemic periods. This is the first report of the presence of VNC V. cholerae O1 in the estuarine and marine waters of the Río de la Plata and the Argentine shelf of the Atlantic Ocean, respectively. Employing immunofluorescence and PCR methods, we were able to detect reservoirs of V. cholerae O1 carrying the virulence-associated genes ctxA and tcpA. The VNC forms of V. cholerae O1 were identified in samples of water, phytoplankton, and zooplankton; the latter organisms were mainly the copepods Acartia tonsa, Diaptomus sp., Paracalanus crassirostris, and Paracalanus parvus. We found that under favorable conditions, the VNC form of V. cholerae can revert to the pathogenic, transmissible state. We concluded that V. cholerae O1 is a resident of Argentinean waters, as has been shown to be the case in other geographic regions of the world. %B Applied and Environmental Microbiology
%V 70 %P 7481 - 7486 %8 2004/12/01/ %@ 0099-2240, 1098-5336 %G eng %U http://aem.asm.org/content/70/12/7481 %N 12 %R 10.1128/AEM.70.12.7481-7486.2004 %0 Journal Article %J Computer Vision-ECCV 2004 %D 2004 %T Whitening for photometric comparison of smooth surfaces under varying illumination %A Osadchy,M. %A Lindenbaum,M. %A Jacobs, David W. %X We consider the problem of image comparison in order to match smooth surfaces under varying illumination. In a smooth surface nearby surface normals are highly correlated. We model such surfaces as Gaussian processes and derive the resulting statistical characterization of the corresponding images. Supported by this model, we treat the difference between two images, associated with the same surface and different lighting, as colored Gaussian noise, and use the whitening tool from signal detection theory to construct a measure of difference between such images. This also improves comparisons by accentuating the differences between images of different surfaces. At the same time, we prove that no linear filter, including ours, can produce lighting insensitive image comparisons. While our Gaussian assumption is a simplification, the resulting measure functions well for both synthetic and real smooth objects. Thus we improve upon methods for matching images of smooth objects, while providing insight into the performance of such methods. Much prior work has focused on image comparison methods appropriate for highly curved surfaces. We combine our method with one of these, and demonstrate high performance on rough and smooth objects. 
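The whitening measure in the Osadchy, Lindenbaum, and Jacobs abstract can be sketched concretely: treat the difference of two images of the same surface as colored Gaussian noise with covariance Σ, multiply by Σ^(-1/2) to whiten it, and take the Euclidean norm of the result. In the sketch below the covariance is passed in as a parameter; the paper derives it from a Gaussian-process model of smooth surfaces, which is not reproduced here.

```python
import numpy as np

def whitened_distance(img1, img2, cov):
    """Compare two (flattened) images of a putative smooth surface.

    The difference image is modeled as colored Gaussian noise with
    covariance `cov`; multiplying by cov^(-1/2) whitens it, so the
    Euclidean norm of the whitened difference is the comparison score.
    """
    diff = (img1 - img2).ravel()
    # Symmetric inverse square root via eigendecomposition (cov must
    # be symmetric positive definite).
    vals, vecs = np.linalg.eigh(cov)
    inv_sqrt = vecs @ np.diag(1.0 / np.sqrt(vals)) @ vecs.T
    return float(np.linalg.norm(inv_sqrt @ diff))
```

With `cov` equal to the identity this reduces to the plain Euclidean distance; a non-trivial covariance suppresses the smooth, correlated components of the difference and accentuates the rest.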
%B Computer Vision-ECCV 2004 %P 217 - 228 %8 2004/// %G eng %R 10.1007/978-3-540-24673-2_18 %0 Book Section %B Approximation, Randomization, and Combinatorial Optimization: Algorithms and Techniques %D 2003 %T Approximation Algorithms for Channel Allocation Problems in Broadcast Networks %A Gandhi,Rajiv %A Khuller, Samir %A Srinivasan, Aravind %A Wang,Nan %E Arora,Sanjeev %E Jansen,Klaus %E Rolim,José %E Sahai,Amit %X We study two packing problems that arise in the area of dissemination-based information systems; a second theme is the study of distributed approximation algorithms. The problems considered have the property that the space occupied by a collection of objects together could be significantly less than the sum of the sizes of the individual objects. In the Channel Allocation Problem, there are users who request subsets of items. There are a fixed number of channels that can carry an arbitrary amount of information. Each user must get all of the requested items from one channel, i.e., all the data items of each request must be broadcast on some channel. The load on any channel is the number of items that are broadcast on that channel; the objective is to minimize the maximum load on any channel. We present approximation algorithms for this problem and also show that the problem is MAX-SNP hard. The second problem is the Edge Partitioning Problem addressed by Goldschmidt, Hochbaum, Levin, and Olinick (Networks, 41:13-23, 2003). Each channel here can deliver information to at most k users, and we aim to minimize the total load on all channels. We present an O(n^{1/3})-approximation algorithm and also show that the algorithm can be made fully distributed with the same approximation guarantee; we also generalize to the case of hypergraphs.
%B Approximation, Randomization, and Combinatorial Optimization: Algorithms and Techniques %S Lecture Notes in Computer Science %I Springer Berlin / Heidelberg %V 2764 %P 821 - 826 %8 2003/// %@ 978-3-540-40770-6 %G eng %U http://dx.doi.org/10.1007/978-3-540-45198-3_5 %0 Conference Paper %B Proceedings of the 16th annual ACM symposium on User interface software and technology %D 2003 %T Automatic thumbnail cropping and its effectiveness %A Suh,Bongwon %A Ling,Haibin %A Bederson, Benjamin B. %A Jacobs, David W. %K Face detection %K image cropping %K saliency map %K thumbnail %K usability study %K visual search %K zoomable user interfaces %X Thumbnail images provide users of image retrieval and browsing systems with a method for quickly scanning large numbers of images. Recognizing the objects in an image is important in many retrieval tasks, but thumbnails generated by shrinking the original image often render objects illegible. We study the ability of computer vision systems to detect key components of images so that automated cropping, prior to shrinking, can render objects more recognizable. We evaluate automatic cropping techniques 1) based on a general method that detects salient portions of images, and 2) based on automatic face detection. Our user study shows that these methods result in small thumbnails that are substantially more recognizable and easier to find in the context of visual search.
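The saliency-based cropping described in the Suh et al. abstract can be illustrated with a simplified bounding-box version: threshold the saliency map and crop to the box around the surviving pixels before shrinking. The thresholding rule, padding, and all names below are illustrative assumptions; the paper's own method grows the crop rectangle greedily rather than taking a single bounding box.

```python
import numpy as np

def crop_to_salient(image, saliency, threshold=0.6, pad=4):
    """Crop `image` to the bounding box of its salient region before
    shrinking to a thumbnail.

    saliency: 2-D map, same height/width as `image`. Pixels whose
    saliency exceeds `threshold` times the map's maximum define the
    box; `pad` pixels of context are kept around it.
    """
    if saliency.max() <= 0:       # nothing salient -- keep whole image
        return image
    ys, xs = np.nonzero(saliency >= threshold * saliency.max())
    y0 = max(ys.min() - pad, 0)
    y1 = min(ys.max() + pad + 1, image.shape[0])
    x0 = max(xs.min() - pad, 0)
    x1 = min(xs.max() + pad + 1, image.shape[1])
    return image[y0:y1, x0:x1]
```

Shrinking the cropped region instead of the full frame is what keeps the key object legible at thumbnail size.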
%B Proceedings of the 16th annual ACM symposium on User interface software and technology %S UIST '03 %I ACM %C New York, NY, USA %P 95 - 104 %8 2003/// %@ 1-58113-636-6 %G eng %U http://doi.acm.org/10.1145/964696.964707 %R 10.1145/964696.964707 %0 Journal Article %J Journal of the Optical Society of America-A-Optics Image Science and Vision %D 2003 %T BAYESIAN AND STATISTICAL APPROACHES TO VISION-Natural Image Statistics and Perceptual Inference-What makes viewpoint-invariant properties perceptually salient? %A Jacobs, David W. %B Journal of the Optical Society of America-A-Optics Image Science and Vision %V 20 %P 1304 - 1320 %8 2003/// %G eng %N 7 %0 Report %D 2003 %T Fast Algorithms for 3-D Dominance Reporting and Counting %A Shi,Qingmin %A JaJa, Joseph F. %K Technical Report %X We present in this paper fast algorithms for the 3-D dominance reporting and counting problems, and generalize the results to the d-dimensional case. Our 3-D dominance reporting algorithm achieves $O(\log n/\log\log n +f)$ query time using $O(n\log^{\epsilon}n)$ space, where $f$ is the number of points satisfying the query and $\epsilon>0$ is an arbitrary small constant. For the 3-D dominance counting problem (which is equivalent to the 3-D range counting problem), our algorithm runs in $O((\log n/\log\log n)^2)$ time using $O(n\log^{1+\epsilon}n/\log\log n)$ space. (UMIACS-TR-2003-06) %I Institute for Advanced Computer Studies, Univ of Maryland, College Park %V UMIACS-TR-2003-06 %8 2003/02/05/ %G eng %U http://drum.lib.umd.edu/handle/1903/1253 %0 Journal Article %J Algorithms and Data Structures %D 2003 %T Fast algorithms for a class of temporal range queries %A Shi,Q. %A JaJa, Joseph F.
%X Given a set of n objects, each characterized by d attributes specified at m fixed time instances, we are interested in the problem of designing efficient indexing structures such that the following type of queries can be handled efficiently: given d value ranges and a time interval, report or count all the objects whose attributes fall within the corresponding d value ranges at each time instance lying in the specified time interval. We establish efficient data structures to handle several classes of the general problem. Our results include a linear size data structure that enables a query time of O(log n log m + f) for one-sided queries when d=1, where f is the output size. We also show that the most general problem can be solved with polylogarithmic query time using nonlinear space data structures. %B Algorithms and Data Structures %P 91 - 102 %8 2003/// %G eng %R 10.1007/978-3-540-45078-8_9 %0 Report %D 2003 %T Fast Fractional Cascading and Its Applications %A Shi,Qingmin %A JaJa, Joseph F. %K Technical Report %X Using the notions of Q-heaps and fusion trees developed by Fredman and Willard, we develop a faster version of the fractional cascading technique while maintaining the linear space structure. The new version enables sublogarithmic iterative search in the case when we have a search tree and the degree of each node is bounded by $O(\log^{\epsilon}n)$, for some constant $\epsilon >0$, where $n$ is the total size of all the lists stored in the tree. The fast fractional cascading technique is used in combination with other techniques to derive sublogarithmic time algorithms for the geometric retrieval problems: orthogonal segment intersection and rectangular point enclosure. The new algorithms use $O(n)$ space and achieve a query time of $O(\log n/\log\log n + f)$, where $f$ is the number of objects satisfying the query. All our algorithms assume the version of the RAM model used by Fredman and Willard.
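Fast fractional cascading builds on the classic fractional cascading technique, which the abstract takes as given: search for the same key in many sorted lists with one binary search at the top, then constant-time pointer-following below. A plain (non-Q-heap) sketch for a path of lists is below; all names are illustrative, and the scan for the next native element is the simplification that real implementations bound with extra next-native pointers.

```python
import bisect

def build_cascade(lists):
    """Fractional cascading over sorted lists L_0..L_{k-1} on a path.

    M[i] merges L_i with every other element of M[i+1]. Each entry of
    M[i] is (value, is_native, down), where `down` is the index of the
    first entry of M[i+1] whose value is >= this entry's value.
    """
    k = len(lists)
    M = [None] * k
    M[k - 1] = [(v, True, None) for v in lists[k - 1]]
    for i in range(k - 2, -1, -1):
        nxt_vals = [e[0] for e in M[i + 1]]
        entries = [(v, True) for v in lists[i]]
        entries += [(nxt_vals[j], False) for j in range(0, len(nxt_vals), 2)]
        entries.sort(key=lambda e: e[0])
        M[i] = [(v, nat, bisect.bisect_left(nxt_vals, v)) for v, nat in entries]
    return M

def successors(M, q):
    """Smallest element >= q in each original list (None if absent):
    one binary search in M[0], then pointer-following for the rest."""
    out = []
    pos = bisect.bisect_left([e[0] for e in M[0]], q)
    for i in range(len(M)):
        p = pos  # skip copied entries to reach the next native one
        while p < len(M[i]) and not M[i][p][1]:
            p += 1
        out.append(M[i][p][0] if p < len(M[i]) else None)
        if i + 1 < len(M):
            pos = M[i][pos][2] if pos < len(M[i]) else len(M[i + 1])
            while pos > 0 and M[i + 1][pos - 1][0] >= q:
                pos -= 1  # sampling guarantees this backs up O(1) steps
    return out

lists = [[1, 4, 9], [2, 3, 7, 11], [5, 6, 8]]
M = build_cascade(lists)
print(successors(M, 5))  # -> [9, 7, 5]
```

Because every other element of `M[i+1]` appears in `M[i]`, the down pointer lands within a couple of positions of the true successor, which is what replaces the per-list binary search.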
(UMIACS-TR-2003-71) %I Institute for Advanced Computer Studies, Univ of Maryland, College Park %V UMIACS-TR-2003-71 %8 2003/08/01/ %G eng %U http://drum.lib.umd.edu/handle/1903/1296 %0 Journal Article %J Nature %D 2003 %T The genome sequence of Bacillus anthracis Ames and comparison to closely related bacteria %A Read,Timothy D. %A Peterson,Scott N. %A Tourasse,Nicolas %A Baillie,Les W. %A Paulsen,Ian T. %A Nelson,Karen E. %A Tettelin,Hervé %A Fouts,Derrick E. %A Eisen,Jonathan A. %A Gill,Steven R. %A Holtzapple,Erik K. %A Økstad,Ole Andreas %A Helgason,Erlendur %A Rilstone,Jennifer %A Wu,Martin %A Kolonay,James F. %A Beanan,Maureen J. %A Dodson,Robert J. %A Brinkac,Lauren M. %A Gwinn,Michelle %A DeBoy,Robert T. %A Madpu,Ramana %A Daugherty,Sean C. %A Durkin,A. Scott %A Haft,Daniel H. %A Nelson,William C. %A Peterson,Jeremy D. %A Pop, Mihai %A Khouri,Hoda M. %A Radune,Diana %A Benton,Jonathan L. %A Mahamoud,Yasmin %A Jiang,Lingxia %A Hance,Ioana R. %A Weidman,Janice F. %A Berry,Kristi J. %A Plaut,Roger D. %A Wolf,Alex M. %A Watkins,Kisha L. %A Nierman,William C. %A Hazen,Alyson %A Cline,Robin %A Redmond,Caroline %A Thwaite,Joanne E. %A White,Owen %A Salzberg,Steven L. %A Thomason,Brendan %A Friedlander,Arthur M. %A Koehler,Theresa M. %A Hanna,Philip C. %A Kolstø,Anne-Brit %A Fraser,Claire M. %X Bacillus anthracis is an endospore-forming bacterium that causes inhalational anthrax (ref. 1). Key virulence genes are found on plasmids (extra-chromosomal, circular, double-stranded DNA molecules) pXO1 (ref. 2) and pXO2 (ref. 3). To identify additional genes that might contribute to virulence, we analysed the complete sequence of the chromosome of B. anthracis Ames (about 5.23 megabases). We found several chromosomally encoded proteins that may contribute to pathogenicity, including haemolysins, phospholipases and iron acquisition functions, and identified numerous surface proteins that might be important targets for vaccines and drugs.
Almost all these putative chromosomal virulence and surface proteins have homologues in Bacillus cereus, highlighting the similarity of B. anthracis to near-neighbours that are not associated with anthrax (ref. 4). By performing a comparative genome hybridization of 19 B. cereus and Bacillus thuringiensis strains against a B. anthracis DNA microarray, we confirmed the general similarity of chromosomal genes among this group of close relatives. However, we found that the gene sequences of pXO1 and pXO2 were more variable between strains, suggesting plasmid mobility in the group. The complete sequence of B. anthracis is a step towards a better understanding of anthrax pathogenesis. %B Nature %V 423 %P 81 - 86 %8 2003/05/01/ %@ 0028-0836 %G eng %U http://www.nature.com/nature/journal/v423/n6935/full/nature01586.html %N 6935 %R 10.1038/nature01586 %0 Book %D 2003 %T Identifying Relevant Information for Testing Technique Selection: An Instantiated Characterization Schema %A Vegas,Sira %A Juristo,Natalia %A Basili, Victor R. %K Business & Economics / Information Management %K Computer software %K Computer software - Testing %K Computer software/ Testing %K Computers / Information Technology %K Computers / Internet / Application Development %K Computers / Programming / General %K Computers / Programming Languages / General %K Computers / Software Development & Engineering / General %K Computers / Software Development & Engineering / Quality Assurance & Testing %K Technology & Engineering / Materials Science %X The importance of properly selecting testing techniques is widely accepted in the software engineering community today. However, there are chiefly two reasons why the selections now made by software developers are difficult to evaluate as correct. First, there are several techniques with which the average developer is unfamiliar, often leaving testers with limited knowledge of all the techniques currently available.
Second, the available information regarding the different testing techniques is primarily procedural (focused on how to use the technique) rather than pragmatic (focused on the effect and appropriateness of using the technique). The problem addressed in this book is aimed at improving software testing technique selection. Identifying Relevant Information for Testing Technique Selection: An Instantiated Characterization Schema will train its readers to use the conceptual tool presented here in various ways. Developers will improve their testing technique selection process by systematically and objectively selecting the testing techniques for a software project. Developers will also build a repository containing their own experience with the application of various software testing techniques. Researchers will focus their research on the relevant aspects of a testing technique when creating it, and when comparing different techniques. Identifying Relevant Information for Testing Technique Selection: An Instantiated Characterization Schema is designed to meet the needs of a professional audience in software engineering. This book is also suitable for graduate-level students in computer science and engineering. %I Springer %8 2003/04/01/ %@ 9781402074356 %G eng %0 Journal Article %J Nucleic Acids Research %D 2003 %T Improving the Arabidopsis genome annotation using maximal transcript alignment assemblies %A Haas,Brian J. %A Delcher,Arthur L. %A Mount, Stephen M. %A Wortman,Jennifer R. %A Smith,Roger K., Jr %A Hannick,Linda I. %A Maiti,Rama %A Ronning,Catherine M. %A Rusch,Douglas B %A Town,Christopher D. %A Salzberg,Steven L. %A White,Owen %X The spliced alignment of expressed sequence data to genomic sequence has proven a key tool in the comprehensive annotation of genes in eukaryotic genomes.
A novel algorithm was developed to assemble clusters of overlapping transcript alignments (ESTs and full-length cDNAs) into maximal alignment assemblies, thereby comprehensively incorporating all available transcript data and capturing subtle splicing variations. Complete and partial gene structures identified by this method were used to improve The Institute for Genomic Research Arabidopsis genome annotation (TIGR release v.4.0). The alignment assemblies permitted the automated modeling of several novel genes and >1000 alternative splicing variations as well as updates (including UTR annotations) to nearly half of the ∼27 000 annotated protein coding genes. The algorithm of the Program to Assemble Spliced Alignments (PASA) tool is described, as well as the results of automated updates to Arabidopsis gene annotations. %B Nucleic Acids Research %V 31 %P 5654 - 5666 %8 2003/10/01/ %@ 0305-1048, 1362-4962 %G eng %U http://nar.oxfordjournals.org/content/31/19/5654 %N 19 %R 10.1093/nar/gkg770 %0 Journal Article %J ACM Trans. Database Syst. %D 2003 %T Iterative spatial join %A Jacox,Edwin H. %A Samet, Hanan %K external memory algorithms %K plane-sweep %K Spatial databases %K Spatial join %X The key issue in performing spatial joins is finding the pairs of intersecting rectangles. For unindexed data sets, this is usually resolved by partitioning the data and then performing a plane sweep on the individual partitions. The resulting join can be viewed as a two-step process where the partition corresponds to a hash-based join while the plane-sweep corresponds to a sort-merge join. In this article, we look at extending the idea of the sort-merge join for one-dimensional data to multiple dimensions and introduce the Iterative Spatial Join. As with the sort-merge join, the Iterative Spatial Join is best suited to cases where the data is already sorted.
However, as we show in the experiments, the Iterative Spatial Join performs well when internal memory is limited, compared to the partitioning methods. This suggests that the Iterative Spatial Join would be useful for very large data sets or in situations where internal memory is a shared resource and is therefore limited, such as with today's database engines which share internal memory amongst several queries. Furthermore, the performance of the Iterative Spatial Join is predictable and has no parameters which need to be tuned, unlike other algorithms. The Iterative Spatial Join is based on a plane sweep algorithm, which requires the entire data set to fit in internal memory. When internal memory overflows, the Iterative Spatial Join simply makes additional passes on the data, thereby exhibiting only a gradual performance degradation. To demonstrate the use and efficacy of the Iterative Spatial Join, we first examine and analyze current approaches to performing spatial joins, and then give a detailed analysis of the Iterative Spatial Join as well as present the results of extensive testing of the algorithm, including a comparison with partitioning-based spatial join methods. These tests show that the Iterative Spatial Join overcomes the performance limitations of the other algorithms for data sets of all sizes as well as differing amounts of internal memory. %B ACM Trans. Database Syst. %V 28 %P 230 - 256 %8 2003/09// %@ 0362-5915 %G eng %U http://doi.acm.org/10.1145/937598.937600 %N 3 %R 10.1145/937598.937600 %0 Journal Article %J Pattern Analysis and Machine Intelligence, IEEE Transactions on %D 2003 %T Lambertian reflectance and linear subspaces %A Basri,R. %A Jacobs, David W. 
%K Lambertian reflectance %K linear subspaces %K spherical harmonics %K convolution %K distant light sources %K surface normals %K image intensities %K nonnegative lighting %K convex optimization %K object recognition %X We prove that the set of all Lambertian reflectance functions (the mapping from surface normals to intensities) obtained with arbitrary distant light sources lies close to a 9D linear subspace. This implies that, in general, the set of images of a convex Lambertian object obtained under a wide variety of lighting conditions can be approximated accurately by a low-dimensional linear subspace, explaining prior empirical results. We also provide a simple analytic characterization of this linear space. We obtain these results by representing lighting using spherical harmonics and describing the effects of Lambertian materials as the analog of a convolution. These results allow us to construct algorithms for object recognition based on linear methods as well as algorithms that use convex optimization to enforce nonnegative lighting functions. We also show a simple way to enforce nonnegative lighting when the images of an object lie near a 4D linear space. We apply these algorithms to perform face recognition by finding the 3D model that best matches a 2D query image. %B Pattern Analysis and Machine Intelligence, IEEE Transactions on %V 25 %P 218 - 233 %8 2003/02// %@ 0162-8828 %G eng %N 2 %R 10.1109/TPAMI.2003.1177153 %0 Report %D 2003 %T An O(n)-Space O(log n/log log n + f)-Query Time Algorithm for 3-D Dominance Reporting %A Shi,Qingmin %A JaJa, Joseph F.
%K Technical Report %X We present a linear-space algorithm for handling the {\em three-dimensional dominance reporting problem}: given a set $S$ of $n$ three-dimensional points, design a data structure for $S$ so that the points in $S$ which dominate a given query point can be reported quickly. Under the variation of the RAM model introduced by Fredman and Willard~\cite{Fredman94}, our algorithm achieves $O(\log n/\log\log n+f)$ query time, where $f$ is the number of points reported. Extensions to higher dimensions are also reported. (UMIACS-TR-2003-77) %I Institute for Advanced Computer Studies, Univ of Maryland, College Park %V UMIACS-TR-2003-77 %8 2003/08/01/ %G eng %U http://drum.lib.umd.edu/handle/1903/1301 %0 Book Section %B Perceptual organization in vision: behavioral and neural perspectives %D 2003 %T Perceptual Completion and Memory %A Jacobs, David W. %B Perceptual organization in vision: behavioral and neural perspectives %P 403 - 403 %8 2003/// %G eng %0 Journal Article %J Applied and Environmental Microbiology %D 2003 %T Predictability of Vibrio Cholerae in Chesapeake Bay %A Louis,Valérie R. %A Russek-Cohen,Estelle %A Choopun,Nipa %A Rivera,Irma N. G. %A Gangle,Brian %A Jiang,Sunny C. %A Rubin,Andrea %A Patz,Jonathan A. %A Huq,Anwar %A Rita R Colwell %X Vibrio cholerae is autochthonous to natural waters and can pose a health risk when it is consumed via untreated water or contaminated shellfish. The correlation between the occurrence of V. cholerae in Chesapeake Bay and environmental factors was investigated over a 3-year period. Water and plankton samples were collected monthly from five shore sampling sites in northern Chesapeake Bay (January 1998 to February 2000) and from research cruise stations on a north-south transect (summers of 1999 and 2000).
Enrichment was used to detect culturable V. cholerae, and 21.1% (n = 427) of the samples were positive. As determined by serology tests, the isolates did not belong to serogroup O1 or O139 associated with cholera epidemics. A direct fluorescent-antibody assay was used to detect V. cholerae O1, and 23.8% (n = 412) of the samples were positive. V. cholerae was more frequently detected during the warmer months and in northern Chesapeake Bay, where the salinity is lower. Statistical models successfully predicted the presence of V. cholerae as a function of water temperature and salinity. Temperatures above 19°C and salinities between 2 and 14 ppt yielded at least a fourfold increase in the number of detectable V. cholerae. The results suggest that salinity variation in Chesapeake Bay or other parameters associated with Susquehanna River inflow contribute to the variability in the occurrence of V. cholerae and that salinity is a useful indicator. Under scenarios of global climate change, increased climate variability, accompanied by higher stream flow rates and warmer temperatures, could favor conditions that increase the occurrence of V. cholerae in Chesapeake Bay. %B Applied and Environmental Microbiology %V 69 %P 2773 - 2785 %8 2003/05/01/ %@ 0099-2240, 1098-5336 %G eng %U http://aem.asm.org/content/69/5/2773 %N 5 %R 10.1128/AEM.69.5.2773-2785.2003 %0 Journal Article %J Proc. 2nd IEEE International Computer Society %D 2003 %T A query language to support scientific discovery %A Eckman,B. %A Deutsch,K. %A Janer,M. %A Lacroix,Z. %A Raschid, Louiqa %B Proc. 2nd IEEE International Computer Society %8 2003/// %G eng %0 Report %D 2003 %T Recovery of a Digital Image Collection Through the SDSC/UMD/NARA Prototype Persistent Archive %A Smorul,Mike %A JaJa, Joseph F.
%A McCall,Fritz %A Brown,Susan Fitch %A Moore,Reagan %A Marciano,Richard %A Chen,Sheau-Yen %A Lopez,Rick %A Chadduck,Robert %K Technical Report %X The San Diego Supercomputer Center (SDSC), the University of Maryland, and the National Archives and Records Administration (NARA) are collaborating on building a pilot persistent archive using and extending data grid and digital library technologies. The current prototype consists of node servers at SDSC, University of Maryland, and NARA, connected through the Storage Request Broker (SRB) data grid middleware, and currently holds several terabytes of NARA-selected collections. In particular, a historically important image collection that was on the verge of becoming inaccessible was fully restored and ingested into our pilot system. In this report, we describe the methodology behind our approach to fully restore this image collection and the process used to ingest it into the prototype persistent archive. (UMIACS-TR-2003-105) %I Institute for Advanced Computer Studies, Univ of Maryland, College Park %V UMIACS-TR-2003-105 %8 2003/11/25/ %G eng %U http://drum.lib.umd.edu/handle/1903/1321 %0 Journal Article %J Technical Reports from UMIACS %D 2003 %T Safe and flexible memory management in Cyclone %A Hicks, Michael W. %A Morrisett,G. %A Grossman,D. %A Jim,T. %X Cyclone is a type-safe programming language intended for applications requiring control over memory management. Our previous work on Cyclone included support for stack allocation, lexical region allocation, and a garbage-collected heap. We achieved safety (i.e., prevented dangling pointers) through a region-based type-and-effects system. This paper describes some new memory-management mechanisms that we have integrated into Cyclone: dynamic regions, unique pointers, and reference-counted objects. Our experience shows that these new mechanisms are well suited for the timely recovery of objects in situations where it is awkward to use lexical regions.
Crucially, programmers can write reusable functions without unnecessarily restricting callers' choices among the variety of memory-management options. To achieve this goal, Cyclone employs a combination of polymorphism and scoped constructs that temporarily let us treat objects as if they were allocated in a lexical region. %B Technical Reports from UMIACS %8 2003/08/01/ %G eng %0 Journal Article %J ACM Trans. Graph. %D 2003 %T A search engine for 3D models %A Funkhouser,Thomas %A Min,Patrick %A Kazhdan,Michael %A Chen,Joyce %A Halderman,Alex %A Dobkin,David %A Jacobs, David W. %K Search engine %K shape matching %K shape representation %K shape retrieval %X As the number of 3D models available on the Web grows, there is an increasing need for a search engine to help people find them. Unfortunately, traditional text-based search techniques are not always effective for 3D data. In this article, we investigate new shape-based search methods. The key challenges are to develop query methods simple enough for novice users and matching algorithms robust enough to work for arbitrary polygonal models. We present a Web-based search engine system that supports queries based on 3D sketches, 2D sketches, 3D models, and/or text keywords. For the shape-based queries, we have developed a new matching algorithm that uses spherical harmonics to compute discriminating similarity measures without requiring repair of model degeneracies or alignment of orientations. It provides 46 to 245% better performance than related shape-matching methods during precision--recall experiments, and it is fast enough to return query results from a repository of 20,000 models in under a second. The net result is a growing interactive index of 3D models available on the Web (i.e., a Google for 3D models). %B ACM Trans. Graph.
%V 22 %P 83 - 105 %8 2003/01// %@ 0730-0301 %G eng %U http://doi.acm.org/10.1145/588272.588279 %N 1 %R 10.1145/588272.588279 %0 Report %D 2003 %T Space-Efficient and Fast Algorithms for Multidimensional Dominance Reporting and Range Counting %A Shi,Qingmin %A JaJa, Joseph F. %A Mortensen,Christian %K Technical Report %X We present linear-space sublogarithmic algorithms for handling the {\em three-dimensional dominance reporting problem} and the {\em two-dimensional range counting problem}. Under the RAM model as described in~[M.~L. Fredman and D.~E. Willard. ``Surpassing the information theoretic bound with fusion trees'', {\em Journal of Computer and System Sciences}, 47:424--436, 1993], our algorithms achieve $O(\log n/\log\log n+f)$ query time for 3-D dominance reporting, where $f$ is the number of points reported, and $O(\log n/\log\log n)$ query time for the 2-D range counting case. We extend these results to any constant dimension $d$, achieving $O(n(\log n/\log\log n)^{d-3})$-space and $O((\log n/\log\log n)^{d-2}+f)$-query time for the reporting case and $O(n(\log n/\log\log n)^{d-2})$-space and $O((\log n/\log\log n)^{d-1})$ query time for the counting case. (UMIACS-TR-2003-101) %I Institute for Advanced Computer Studies, Univ of Maryland, College Park %V UMIACS-TR-2003-101 %8 2003/11/25/ %G eng %U http://drum.lib.umd.edu/handle/1903/1318 %0 Book %D 2003 %T Special section on perceptual organization in computer vision %A Jacobs, David W. %A Lindenbaum,M. %A August,J. %A Zucker,SW %A Ben-Shahar,O. %A Tuytelaars,T. %A Turina,A. %A Van Gool,L. %A Mahamud,S. %I IEEE Computer Society %8 2003/// %G eng %0 Conference Paper %B INTERNATIONAL JOINT CONFERENCE ON ARTIFICIAL INTELLIGENCE %D 2003 %T Towards domain-independent, task-oriented, conversational adequacy %A Josyula,D. P %A Anderson,M.
L %A Perlis, Don %B INTERNATIONAL JOINT CONFERENCE ON ARTIFICIAL INTELLIGENCE %V 18 %P 1637 - 1638 %8 2003/// %G eng %0 Conference Paper %B ICCV Workshop on Statistical and Computational Theories of Vision %D 2003 %T Uncertainty in 3D shape estimation %A Ji,H. %A Fermüller, Cornelia %B ICCV Workshop on Statistical and Computational Theories of Vision %8 2003/// %G eng %0 Conference Paper %B Global Telecommunications Conference, 2003. GLOBECOM '03. IEEE %D 2003 %T On the use of flow migration for handling short-term overloads %A Kuo,Kuo-Tung %A Phuvoravan,S. %A Bhattacharjee, Bobby %A Jun La,R. %A Shayman,M. %A Chang,Hyeong Soo %K computing; %K CONGESTION %K congestion; %K CONTROL %K control; %K dynamic %K end-to-end %K engineering %K fast-timescale %K flow %K Internet %K IP %K label %K long-term %K mapping; %K migration; %K MPLS %K multiprotocol %K network %K network; %K networks; %K of %K optimal %K overloads; %K protocol; %K QoS; %K QUALITY %K quality; %K routers; %K routing; %K service; %K set-up %K short-term %K software %K software; %K static %K switching; %K Telecommunication %K telephony; %K time; %K transient %K voice-over-IP; %X In this work, we investigate flow migration as a mechanism to sustain QoS to network users during short-term overloads in the context of an MPLS IP network. We experiment with three different control techniques: static long-term optimal mapping of flows to LSPs; on-line locally optimal mapping of flows to LSPs at flow set-up time; and dynamic flow migration in response to transient congestion. These techniques are applicable over different timescales, have different run-time overheads, and require different levels of monitoring and control software inside the network. We present results both from detailed simulations and a complete implementation using software IP routers.
We use voice-over-IP as our test application, and show that if end-to-end quality is to be maintained during short unpredictable bursts of high load, then a fast-timescale control such as migration is required. %B Global Telecommunications Conference, 2003. GLOBECOM '03. IEEE %V 6 %P 3108 - 3112 vol.6 %8 2003/12// %G eng %R 10.1109/GLOCOM.2003.1258807 %0 Conference Paper %B Computer Vision, 2003. Proceedings. Ninth IEEE International Conference on %D 2003 %T Using specularities for recognition %A Osadchy,M. %A Jacobs, David W. %A Ramamoorthi,R. %K 3D %K formation;object %K glass;computer %K image %K information;specular %K light %K measurement;reflection;stereo %K objects;specular %K objects;wine %K processing; %K property;pottery;recognition %K recognition;object %K recognition;position %K reflectance %K reflection;compact %K reflection;transparent %K shape;Lambertian %K source;highlight %K systems;shiny %K vision;lighting;object %X Recognition systems have generally treated specular highlights as noise. We show how to use these highlights as a positive source of information that improves recognition of shiny objects. This also enables us to recognize very challenging shiny transparent objects, such as wine glasses. Specifically, we show how to find highlights that are consistent with a hypothesized pose of an object of known 3D shape. We do this using only a qualitative description of highlight formation that is consistent with most models of specular reflection, so no specific knowledge of an object's reflectance properties is needed. We first present a method that finds highlights produced by a dominant compact light source, whose position is roughly known. We then show how to estimate the lighting automatically for objects whose reflection is part specular and part Lambertian. We demonstrate this method for two classes of objects.
First, we show that specular information alone can suffice to identify objects with no Lambertian reflectance, such as transparent wine glasses. Second, we use our complete system to recognize shiny objects, such as pottery. %B Computer Vision, 2003. Proceedings. Ninth IEEE International Conference on %P 1512 - 1519 vol.2 %8 2003/10// %G eng %R 10.1109/ICCV.2003.1238669 %0 Journal Article %J Journal of the Optical Society of America A %D 2003 %T What makes viewpoint-invariant properties perceptually salient? %A Jacobs, David W. %X It has been noted that many of the perceptually salient image properties identified by the Gestalt psychologists, such as collinearity, parallelism, and good continuation, are invariant to changes in viewpoint. However, I show that viewpoint invariance is not sufficient to distinguish these Gestalt properties; one can define an infinite number of viewpoint-invariant properties that are not perceptually salient. I then show that generally, the perceptually salient viewpoint-invariant properties are minimal, in the sense that they can be derived by using less image information than for nonsalient properties. This finding provides support for the hypothesis that the biological relevance of an image property is determined both by the extent to which it provides information about the world and by the ease with which this property can be computed. [An abbreviated version of this work, including technical details that are avoided in this paper, is contained in K. Boyer and S. Sarker, eds., Perceptual Organization for Artificial Vision Systems (Kluwer Academic, Dordrecht, The Netherlands, 2000), pp. 121–138.]
%B Journal of the Optical Society of America A %V 20 %P 1304 - 1320 %8 2003/// %G eng %N 7 %0 Journal Article %J AI Magazine %D 2002 %T AAAI 2002 Workshops %A Blake,Brian %A Haigh,Karen %A Hexmoor,Henry %A Falcone,Rino %A Soh,Leen-Kiat %A Baral,Chitta %A McIlraith,Sheila %A Gmytrasiewicz,Piotr %A Parsons,Simon %A Malaka,Rainer %A Krueger,Antonio %A Bouquet,Paolo %A Smart,Bill %A Kurumantani,Koichi %A Pease,Adam %A Brenner,Michael %A desJardins, Marie %A Junker,Ulrich %A Delgrande,Jim %A Doyle,Jon %A Rossi,Francesca %A Schaub,Torsten %A Gomes,Carla %A Walsh,Toby %A Guo,Haipeng %A Horvitz,Eric J %A Ide,Nancy %A Welty,Chris %A Anger,Frank D %A Guesgen,Hans W %A Ligozat,Gerald %B AI Magazine %V 23 %P 113 - 113 %8 2002/12/15/ %@ 0738-4602 %G eng %U http://www.aaai.org/ojs/index.php/aimagazine/article/viewArticle/1678 %N 4 %R 10.1609/aimag.v23i4.1678 %0 Journal Article %J Advances in Multimedia Information Processing—PCM 2002 %D 2002 %T Adaptive Multimedia System Architecture for Improving QoS in Wireless Networks %A Mahajan,A. %A Mundur, Padma %A Joshi,A. %B Advances in Multimedia Information Processing—PCM 2002 %P 37 - 47 %8 2002/// %G eng %0 Journal Article %J ACM Comput. Surv. %D 2002 %T Algorithmic issues in modeling motion %A Agarwal,Pankaj K. %A Guibas,Leonidas J. %A Edelsbrunner,Herbert %A Erickson,Jeff %A Isard,Michael %A Har-Peled,Sariel %A Hershberger,John %A Jensen,Christian %A Kavraki,Lydia %A Koehl,Patrice %A Lin,Ming %A Manocha,Dinesh %A Metaxas,Dimitris %A Mirtich,Brian %A Mount, Dave %A Muthukrishnan,S. %A Pai,Dinesh %A Sacks,Elisha %A Snoeyink,Jack %A Suri,Subhash %A Wolfson,Ouri %K computational geometry %K Computer vision %K mobile networks %K modeling %K molecular biology %K motion modeling %K physical simulation %K robotics %K spatio-temporal databases %X This article is a survey of research areas in which motion plays a pivotal role.
The aim of the article is to review current approaches to modeling motion together with related data structures and algorithms, and to summarize the challenges that lie ahead in producing a more unified theory of motion representation that would be useful across several disciplines. %B ACM Comput. Surv. %V 34 %P 550 - 572 %8 2002/12// %@ 0360-0300 %G eng %U http://doi.acm.org/10.1145/592642.592647 %N 4 %R 10.1145/592642.592647 %0 Book Section %B Chip Technology %D 2002 %T Combinatorial Algorithms for Design of DNA Arrays %A Hannenhalli, Sridhar %A Hubbell,Earl %A Lipshutz,Robert %A Pevzner,Pavel %E Hoheisel,Jörg %E Brazma,A. %E Büssow,K. %E Cantor,C. %E Christians,F. %E Chui,G. %E Diaz,R. %E Drmanac,R. %E Drmanac,S. %E Eickhoff,H. %E Fellenberg,K. %E Hannenhalli, Sridhar %E Hoheisel,J. %E Hou,A. %E Hubbell,E. %E Jin,H. %E Jin,P. %E Jurinke,C. %E Konthur,Z. %E Köster,H. %E Kwon,S. %E Lacy,S. %E Lehrach,H. %E Lipshutz,R. %E Little,D. %E Lueking,A. %E McGall,G. %E Moeur,B. %E Nordhoff,E. %E Nyarsik,L. %E Pevzner,P. %E Robinson,A. %E Sarkans,U. %E Shafto,J. %E Sohail,M. %E Southern,E. %E Swanson,D. %E Ukrainczyk,T. %E van den Boom,D. %E Vilo,J. %E Vingron,M. %E Walter,G. %E Xu,C. %X Optimal design of DNA arrays requires the development of algorithms with two-fold goals: reducing the effects caused by unintended illumination (border length minimization problem) and reducing the complexity of masks (mask decomposition problem). We describe algorithms that reduce the number of rectangles in mask decomposition by 20–30% as compared to a standard array design under the assumption that the arrangement of oligonucleotides on the array is fixed. This algorithm produces a provably optimal solution for all studied real instances of array design. We also address the difficult problem of finding an arrangement which minimizes the border length and come up with a new idea of threading that significantly reduces the border length as compared to standard designs.
%B Chip Technology %S Advances in Biochemical Engineering/Biotechnology %I Springer Berlin / Heidelberg %V 77 %P 1 - 19 %8 2002/// %@ 978-3-540-43215-9 %G eng %U http://dx.doi.org/10.1007/3-540-45713-5_1 %0 Journal Article %J Science %D 2002 %T Comparative Genome Sequencing for Discovery of Novel Polymorphisms in Bacillus Anthracis %A Read,Timothy D. %A Salzberg,Steven L. %A Pop, Mihai %A Shumway,Martin %A Umayam,Lowell %A Jiang,Lingxia %A Holtzapple,Erik %A Busch,Joseph D %A Smith,Kimothy L %A Schupp,James M %A Solomon,Daniel %A Keim,Paul %A Fraser,Claire M. %X Comparison of the whole-genome sequence of Bacillus anthracis isolated from a victim of a recent bioterrorist anthrax attack with a reference reveals 60 new markers that include single nucleotide polymorphisms (SNPs), inserted or deleted sequences, and tandem repeats. Genome comparison detected four high-quality SNPs between the two sequenced B. anthracis chromosomes and seven differences among different preparations of the reference genome. These markers have been tested on a collection of anthrax isolates and were found to divide these samples into distinct families. These results demonstrate that genome-based analysis of microbial pathogens will provide a powerful new tool for investigation of infectious disease outbreaks. %B Science %V 296 %P 2028 - 2033 %8 2002/06/14/ %@ 0036-8075, 1095-9203 %G eng %U http://www.sciencemag.org/content/296/5575/2028 %N 5575 %R 10.1126/science.1071837 %0 Journal Article %J USENIX Annual Technical Conference %D 2002 %T Cyclone: A safe dialect of C %A Jim,T. %A Morrisett,G. %A Grossman,D. %A Hicks, Michael W. %A Cheney,J. %A Wang, Y. %X Cyclone is a safe dialect of C. It has been designed from the ground up to prevent the buffer overflows, format string attacks, and memory management errors that are common in C programs, while retaining C's syntax and semantics.
This paper examines safety violations enabled by C's design, and shows how Cyclone avoids them, without giving up C's hallmark control over low-level details such as data representation and memory management. %B USENIX Annual Technical Conference %P 275 - 288 %8 2002/// %G eng %0 Conference Paper %B Scientific and Statistical Database Management, 2002. Proceedings. 14th International Conference on %D 2002 %T Efficient techniques for range search queries on earth science data %A Shi,Qingmin %A JaJa, Joseph F. %K based %K computing; %K content %K data %K data; %K databases; %K Earth %K factors; %K large %K mining %K mining; %K natural %K processing; %K queries; %K query %K range %K raster %K retrieval; %K scale %K Science %K sciences %K search %K spatial %K structures; %K tasks; %K temporal %K tree %K tree-of-regions; %K visual %X We consider the problem of organizing large scale earth science raster data to efficiently handle queries for identifying regions whose parameters fall within certain range values specified by the queries. This problem seems to be critical to enabling basic data mining tasks such as determining associations between physical phenomena and spatial factors, detecting changes and trends, and content based retrieval. We assume that the input is too large to fit in internal memory and hence focus on data structures and algorithms that minimize the I/O bounds. A new data structure, called a tree-of-regions (ToR), is introduced and involves a combination of an R-tree and efficient representation of regions. It is shown that such a data structure enables the handling of range queries in an optimal I/O time, under certain reasonable assumptions. We also show that updates to the ToR can be handled efficiently. Experimental results for a variety of multi-valued earth science data illustrate the fast execution times of a wide range of queries, as predicted by our theoretical analysis. %B Scientific and Statistical Database Management, 2002. Proceedings. 
14th International Conference on %P 142 - 151 %8 2002/// %G eng %R 10.1109/SSDM.2002.1029714 %0 Book Section %B Dependable Computing EDCC-4 %D 2002 %T Experimental Evaluation of the Unavailability Induced by a Group Membership Protocol %A Joshi,Kaustubh %A Michel Cukier %A Sanders,William %E Bondavalli,Andrea %E Thevenod-Fosse,Pascale %K Computer science %X Group communication is an important paradigm for building highly available distributed systems. However, group membership operations often require the system to block message traffic, causing system services to become unavailable. This makes it important to quantify the unavailability induced by membership operations. This paper experimentally evaluates the blocking behavior of the group membership protocol of the Ensemble group communication system using a novel global-state-based fault injection technique. In doing so, we demonstrate how a layered distributed protocol such as the Ensemble group membership protocol can be modeled in terms of a state machine abstraction, and show how the resulting global state space can be used to specify fault triggers and define important measures on the system. Using this approach, we evaluate the cost associated with important states of the protocol under varying workload and group size. We also evaluate the sensitivity of the protocol to the occurrence of a second correlated crash failure during its operation. %B Dependable Computing EDCC-4 %S Lecture Notes in Computer Science %I Springer Berlin / Heidelberg %V 2485 %P 644 - 648 %8 2002/// %@ 978-3-540-00012-9 %G eng %U http://www.springerlink.com/content/mnkd4upqr14w2r7e/abstract/ %0 Journal Article %J Nature %D 2002 %T Genome sequence and comparative analysis of the model rodent malaria parasite Plasmodium yoelii yoelii %A Carlton,Jane M. %A Angiuoli,Samuel V %A Suh,Bernard B. %A Kooij,Taco W. %A Pertea,Mihaela %A Silva,Joana C. %A Ermolaeva,Maria D. %A Allen,Jonathan E %A Jeremy D Selengut %A Koo,Hean L. 
%A Peterson,Jeremy D. %A Pop, Mihai %A Kosack,Daniel S. %A Shumway,Martin F. %A Bidwell,Shelby L. %A Shallom,Shamira J. %A Aken,Susan E. van %A Riedmuller,Steven B. %A Feldblyum,Tamara V. %A Cho,Jennifer K. %A Quackenbush,John %A Sedegah,Martha %A Shoaibi,Azadeh %A Cummings,Leda M. %A Florens,Laurence %A Yates,John R. %A Raine,J. Dale %A Sinden,Robert E. %A Harris,Michael A. %A Cunningham,Deirdre A. %A Preiser,Peter R. %A Bergman,Lawrence W. %A Vaidya,Akhil B. %A Lin,Leo H. van %A Janse,Chris J. %A Waters,Andrew P. %A Smith,Hamilton O. %A White,Owen R. %A Salzberg,Steven L. %A Venter,J. Craig %A Fraser,Claire M. %A Hoffman,Stephen L. %A Gardner,Malcolm J. %A Carucci,Daniel J. %X Species of malaria parasite that infect rodents have long been used as models for malaria disease research. Here we report the whole-genome shotgun sequence of one species, Plasmodium yoelii yoelii, and comparative studies with the genome of the human malaria parasite Plasmodium falciparum clone 3D7. A synteny map of 2,212 P. y. yoelii contiguous DNA sequences (contigs) aligned to 14 P. falciparum chromosomes reveals marked conservation of gene synteny within the body of each chromosome. Of about 5,300 P. falciparum genes, more than 3,300 P. y. yoelii orthologues of predominantly metabolic function were identified. Over 800 copies of a variant antigen gene located in subtelomeric regions were found. This is the first genome sequence of a model eukaryotic parasite, and it provides insight into the use of such systems in the modelling of Plasmodium biology and disease. %B Nature %V 419 %P 512 - 519 %8 2002/10/03/ %@ 0028-0836 %G eng %U http://www.nature.com/nature/journal/v419/n6906/full/nature01099.html %N 6906 %R 10.1038/nature01099 %0 Conference Paper %B CHI '02 extended abstracts on Human factors in computing systems %D 2002 %T Getting real about speech: overdue or overhyped? 
%A James,Frankie %A Lai,Jennifer %A Suhm,Bernhard %A Balentine,Bruce %A Makhoul,John %A Nass,Clifford %A Shneiderman, Ben %K cognitive load %K social responses %K speech interfaces %X Speech has recently made headway towards becoming a more mainstream interface modality. For example, there is an increasing number of call center applications, especially in the airline and banking industries. However, speech still has many properties that cause its use to be problematic, such as its inappropriateness in both very quiet and very noisy environments, and the tendency of speech to increase cognitive load. Concerns about such problems are valid; however, they do not explain why the use of speech is so controversial in the HCI community. This panel would like to address the issues underlying the controversy around speech, by discussing the current state of the art, the reasons it is so difficult to build a good speech interface, and how HCI research can contribute to the development of speech interfaces. %B CHI '02 extended abstracts on Human factors in computing systems %S CHI EA '02 %I ACM %C New York, NY, USA %P 708 - 709 %8 2002/// %@ 1-58113-454-1 %G eng %U http://doi.acm.org/10.1145/506443.506557 %R 10.1145/506443.506557 %0 Conference Paper %B Proceedings of the Participatory Design Conference %D 2002 %T How young can our design partners be %A Farber,A. %A Druin, Allison %A Chipman,G. %A Julian,D. %A Somashekhar,S. %B Proceedings of the Participatory Design Conference %P 272 - 277 %8 2002/// %G eng %0 Journal Article %J Information Security %D 2002 %T Implementation of chosen-ciphertext attacks against PGP and GnuPG %A Jallad,K. %A Katz, Jonathan %A Schneier,B. %X We recently noted [6] that PGP and other e-mail encryption protocols are, in theory, highly vulnerable to chosen-ciphertext attacks in which the recipient of the e-mail acts as an unwitting “decryption oracle”. We argued further that such attacks are quite feasible and therefore represent a serious concern. 
Here, we investigate these claims in more detail by attempting to implement the suggested attacks. On one hand, we are able to successfully implement the described attacks against PGP and GnuPG (two widely-used software packages) in a number of different settings. On the other hand, we show that the attacks largely fail when data is compressed before encryption. Interestingly, the attacks are unsuccessful for largely fortuitous reasons; resistance to these attacks does not seem due to any conscious effort made to prevent them. Based on our work, we discuss those instances in which chosen-ciphertext attacks do indeed represent an important threat and hence must be taken into account in order to maintain confidentiality. We also recommend changes in the OpenPGP standard [3] to reduce the effectiveness of our attacks in these settings. %B Information Security %P 90 - 101 %8 2002/// %G eng %R 10.1007/3-540-45811-5_7 %0 Book Section %B Approximation Algorithms for Combinatorial Optimization %D 2002 %T Improved Approximation Algorithms for the Partial Vertex Cover Problem %A Halperin,Eran %A Srinivasan, Aravind %E Jansen,Klaus %E Leonardi,Stefano %E Vazirani,Vijay %X The partial vertex cover problem is a generalization of the vertex cover problem: given an undirected graph G = (V, E) and an integer k, we wish to choose a minimum number of vertices such that at least k edges are covered. Just as for vertex cover, 2-approximation algorithms are known for this problem, and it is of interest to see if we can do better than this. The current-best approximation ratio for partial vertex cover, when parameterized by the maximum degree d of G, is (2 − Θ(1/d)). We improve on this by presenting a $\left(2 - \Theta\left(\tfrac{\ln\ln d}{\ln d}\right)\right)$-approximation algorithm for partial vertex cover using semidefinite programming, matching the current-best bound for vertex cover.
Our algorithm uses a new rounding technique, which involves a delicate probabilistic analysis. %B Approximation Algorithms for Combinatorial Optimization %S Lecture Notes in Computer Science %I Springer Berlin / Heidelberg %V 2462 %P 161 - 174 %8 2002/// %@ 978-3-540-44186-1 %G eng %U http://dx.doi.org/10.1007/3-540-45753-4_15 %0 Conference Paper %B CHI '02 extended abstracts on Human factors in computing systems %D 2002 %T Interacting with identification technology: can it make us more secure? %A Scholtz,Jean %A Johnson,Jeff %A Shneiderman, Ben %A Hope-Tindall,Peter %A Gosling,Marcus %A Phillips,Jonathon %A Wexelblat,Alan %K Biometrics %K civil liberties %K face recognition %K national id card %K privacy %K Security %B CHI '02 extended abstracts on Human factors in computing systems %S CHI EA '02 %I ACM %C New York, NY, USA %P 564 - 565 %8 2002/// %@ 1-58113-454-1 %G eng %U http://doi.acm.org/10.1145/506443.506484 %R 10.1145/506443.506484 %0 Conference Paper %B Software Maintenance, 2002. Proceedings. International Conference on %D 2002 %T Maintaining software with a security perspective %A Jiwnani,K. %A Zelkowitz, Marvin V %K (computers); %K budget %K classification %K classification; %K constraints; %K data; %K engineering; %K flaw %K maintenance; %K of %K operating %K program %K scheme; %K Security %K software %K stable %K system %K systems %K systems; %K testing; %K TIME %K vulnerabilities; %K vulnerability %X Testing for software security is a lengthy, complex and costly process. Currently, security testing is done using penetration analysis and formal verification of security kernels. These methods are not complete and are difficult to use. Hence it is essential to focus testing effort in areas that have a greater number of security vulnerabilities to develop secure software as well as meet budget and time constraints.
We propose a testing strategy based on a classification of vulnerabilities to develop secure and stable systems. This taxonomy will enable a system testing and maintenance group to understand the distribution of security vulnerabilities and prioritize their testing effort according to the impact the vulnerabilities have on the system. This is based on Landwehr's (1994) classification scheme for security flaws, and we evaluated it using a database of 1360 operating system vulnerabilities. This analysis indicates vulnerabilities tend to be focused in relatively few areas and associated with a small number of software engineering issues. %B Software Maintenance, 2002. Proceedings. International Conference on %P 194 - 203 %8 2002/// %G eng %R 10.1109/ICSM.2002.1167766 %0 Journal Article %J Natural Language Processing and Information Systems %D 2002 %T Omnibase: Uniform access to heterogeneous data for question answering %A Katz,B. %A Felshin,S. %A Yuret,D. %A Ibrahim,A. %A Jimmy Lin %A Marton,G. %A Jerome McFarland,A. %A Temelkuran,B. %X Although the World Wide Web contains a tremendous amount of information, the lack of uniform structure makes finding the right knowledge difficult. A solution is to turn the Web into a “virtual database” and to access it through natural language. We built Omnibase, a system that integrates heterogeneous data sources using an object-property-value model. With the help of Omnibase, our Start natural language system can now access numerous heterogeneous data sources on the Web in a uniform manner, and answers millions of user questions with high precision. %B Natural Language Processing and Information Systems %P 230 - 234 %8 2002/// %G eng %R 10.1007/3-540-36271-1_23 %0 Journal Article %J Proceedings of the 3rd International NASA Workshop on Planning and Scheduling for Space %D 2002 %T PASSAT: A user-centric planning framework %A Myers,K.L. %A Tyson,W.M. %A Wolverton,M.J. %A Jarvis,P.A. %A Lee,T.J.
%A desJardins, Marie %X We describe a plan-authoring system called PASSAT (Plan-Authoring System based on Sketches, Advice, and Templates) that combines interactive tools for constructing plans with a suite of automated and mixed-initiative capabilities designed to complement human planning skills. PASSAT is organized around a library of predefined templates that encode task networks describing standard operating procedures and previous cases. Users can select from these templates to apply during plan development, with the system providing various forms of automated assistance. A mixed-initiative plan sketch facility helps users refine outlines for plans to complete solutions, by detecting problems and proposing possible fixes. An advice capability enables user specification of high-level guidelines for plans that the system helps to enforce. Finally, PASSAT includes process facilitation mechanisms designed to help a user track and manage outstanding planning tasks and information requirements, as a means of improving the efficiency and effectiveness of the planning process. PASSAT is designed for applications for which a core of planning knowledge can be captured in predefined action models but where significant user control of the planning process is required. %B Proceedings of the 3rd International NASA Workshop on Planning and Scheduling for Space %8 2002/// %G eng %0 Book Section %B Computer Vision — ECCV 2002 %D 2002 %T Probabilistic Human Recognition from Video %A Zhou,Shaohua %A Chellappa, Rama %E Heyden,Anders %E Sparr,Gunnar %E Nielsen,Mads %E Johansen,Peter %X This paper presents a method for incorporating temporal information in a video sequence for the task of human recognition. A time series state space model, parameterized by a tracking state vector and a recognizing identity variable, is proposed to simultaneously characterize the kinematics and identity.
Two sequential importance sampling (SIS) methods, a brute-force version and an efficient version, are developed to provide numerical solutions to the model. The joint distribution of both state vector and identity variable is estimated at each time instant and then propagated to the next time instant. Marginalization over the state vector yields a robust estimate of the posterior distribution of the identity variable. Due to the propagation of identity and kinematics, a degeneracy in posterior probability of the identity variable is achieved to give improved recognition. This evolving behavior is characterized using changes in entropy. The effectiveness of this approach is illustrated using experimental results on low resolution face data and upper body data. %B Computer Vision — ECCV 2002 %S Lecture Notes in Computer Science %I Springer Berlin / Heidelberg %V 2352 %P 173 - 183 %8 2002/// %@ 978-3-540-43746-8 %G eng %U http://dx.doi.org/10.1007/3-540-47977-5_45 %0 Journal Article %J SIGPLAN Not. %D 2002 %T Region-based memory management in Cyclone %A Grossman,Dan %A Morrisett,Greg %A Jim,Trevor %A Hicks, Michael W. %A Wang,Yanling %A Cheney,James %X Cyclone is a type-safe programming language derived from C. The primary design goal of Cyclone is to let programmers control data representation and memory management without sacrificing type-safety. In this paper, we focus on the region-based memory management of Cyclone and its static typing discipline. The design incorporates several advancements, including support for region subtyping and a coherent integration with stack allocation and a garbage collector. To support separate compilation, Cyclone requires programmers to write some explicit region annotations, but a combination of default annotations, local type inference, and a novel treatment of region effects reduces this burden. As a result, we integrate C idioms in a region-based framework.
In our experience, porting legacy C to Cyclone has required altering about 8% of the code; of the changes, only 6% (of the 8%) were region annotations. %B SIGPLAN Not. %V 37 %P 282 - 293 %8 2002/05// %@ 0362-1340 %G eng %U http://doi.acm.org/10.1145/543552.512563 %N 5 %R 10.1145/543552.512563 %0 Journal Article %J International Symposium on Software Reliability Engineering %D 2002 %T Security Testing using a Susceptibility Matrix %A Jiwnani,K. %A Zelkowitz, Marvin V %X Software testing is a cost-effective method to detect faults in software. Similarly, security testing is intended to assess the trustworthiness of the security mechanisms and is often regarded as a special case of system testing. The emphasis of security testing is not to establish the functional correctness of the software but to establish some degree of confidence in the security mechanisms. It is the single most common technique for gaining assurance that a system operates within the constraints of a given set of policies and mechanisms. Presently, there is no systematic approach to security testing. Our goal has been to devise a classification scheme to increase testing effort in high-risk areas and help the software community to get feedback to improve continuously. %B International Symposium on Software Reliability Engineering %V 13 %8 2002/// %G eng %0 Conference Paper %B in: Workshop on Cognitive Agents, 25th German Conference on Artificial Intelligence %D 2002 %T Time-situated agency: Active logic and intention formation %A Anderson,M. L %A Josyula,D. P %A Okamoto,Y. A %A Perlis, Don %B in: Workshop on Cognitive Agents, 25th German Conference on Artificial Intelligence %8 2002/// %G eng %0 Conference Paper %B Proceedings of the Sixth Workshop on the Semantics and Pragmatics of Dialog %D 2002 %T The use-mention distinction and its importance to HCI %A Anderson,M. L %A Okamoto,Y. %A Josyula,D.
%A Perlis, Don %B Proceedings of the Sixth Workshop on the Semantics and Pragmatics of Dialog %P 21 - 28 %8 2002/// %G eng %0 Journal Article %J Computer Science Technical Reports %D 2001 %T Cyclone User's Manual, Version 0.1.3 %A Grossman,D. %A Morrisett,G. %A Jim,T. %A Hicks, Michael W. %A Wang, Y. %A Cheney,J. %X The current version of this manual should be available at http://www.cs.cornell.edu/projects/cyclone/ and http://www.research.att.com/projects/cyclone/. The version here describes Cyclone Version 0.1.3, although minor changes may have occurred before the release. %B Computer Science Technical Reports %8 2001/11/16/undef %G eng %0 Book Section %B Discovery Science %D 2001 %T Dynamic Aggregation to Support Pattern Discovery: A Case Study with Web Logs %A Tang,Lida %A Shneiderman, Ben %E Jantke,Klaus %E Shinohara,Ayumi %X Rapid growth of digital data collections is overwhelming the capabilities of humans to comprehend them without aid. The extraction of useful data from large raw data sets is something that humans do poorly. Aggregation is a technique that extracts important aspects from groups of data, thus reducing the amount that the user has to deal with at one time, thereby enabling them to discover patterns, outliers, gaps, and clusters. Previous mechanisms for interactive exploration with aggregated data were either too complex to use or too limited in scope. This paper proposes a new technique for dynamic aggregation that can combine with dynamic queries to support most of the tasks involved in data manipulation. %B Discovery Science %S Lecture Notes in Computer Science %I Springer Berlin / Heidelberg %V 2226 %P 464 - 469 %8 2001/// %@ 978-3-540-42956-2 %G eng %U http://dx.doi.org/10.1007/3-540-45650-3_42 %0 Journal Article %J Multimedia, IEEE Transactions on %D 2001 %T Dynamic resource allocation via video content and short-term traffic statistics %A M. Wu %A Joyce,R.A.
%A Wong,Hau-San %A Guan,Long %A Kung,Sun-Yuan %K allocation;link %K allocation;multimedia %K allocation;variable %K bit %K codes; %K content;bandwidth %K error;short %K Internet;dynamic %K patterns;variable %K prediction %K rate %K resource %K segments;short-term %K statistics;traffic %K systems;resource %K traffic %K utilization;mean-square %K video %K video;video %X The reliable and efficient transmission of high-quality variable bit rate (VBR) video through the Internet generally requires network resources be allocated in a dynamic fashion. This includes the determination of when to renegotiate for network resources, as well as how much to request at a given time. The accuracy of any resource request method depends critically on its prediction of future traffic patterns. Such a prediction can be performed using the content and traffic information of short video segments. This paper presents a systematic approach to select the best features for prediction, indicating that while content is important in predicting the bandwidth of a video bit stream, the use of both content and available short-term bandwidth statistics can yield significant improvements. A new framework for traffic prediction is proposed in this paper; experimental results show a smaller mean-square resource prediction error and higher overall link utilization. %B Multimedia, IEEE Transactions on %V 3 %P 186 - 199 %8 2001/06// %@ 1520-9210 %G eng %N 2 %R 10.1109/6046.923818 %0 Conference Paper %B Proc. UAI %D 2001 %T Efficient stepwise selection in decomposable models %A Deshpande, Amol %A Garofalakis,M. N %A Jordan,M. I %B Proc. UAI %P 128 - 135 %8 2001/// %G eng %0 Journal Article %J ACM Trans. Database Syst. %D 2001 %T Flexible support for multiple access control policies %A Jajodia,Sushil %A Samarati,Pierangela %A Sapino,Maria Luisa %A V.S.
Subrahmanian %K access control policy %K Authorization %K logic programming %X Although several access control policies can be devised for controlling access to information, all existing authorization models, and the corresponding enforcement mechanisms, are based on a specific policy (usually the closed policy). As a consequence, although different policy choices are possible in theory, in practice only a specific policy can actually be applied within a given system. In this paper, we present a unified framework that can enforce multiple access control policies within a single system. The framework is based on a language through which users can specify security policies to be enforced on specific accesses. The language allows the specification of both positive and negative authorizations and incorporates notions of authorization derivation, conflict resolution, and decision strategies. Different strategies may be applied to different users, groups, objects, or roles, based on the needs of the security policy. The overall result is a flexible and powerful, yet simple, framework that can easily capture many of the traditional access control policies as well as protection requirements that exist in real-world applications, but are seldom supported by existing systems. The major advantage of our approach is that it can be used to specify different access control policies that can all coexist in the same system and be enforced by the same security server. %B ACM Trans. Database Syst. %V 26 %P 214 - 260 %8 2001/06// %@ 0362-5915 %G eng %U http://doi.acm.org/10.1145/383891.383894 %N 2 %R 10.1145/383891.383894 %0 Journal Article %J NIPS %D 2001 %T Fragment completion in humans and machines %A Jacobs, David W. %A Rokers,B. %A Rudra,A. %A Liu,Z. %B NIPS %P 27 - 34 %8 2001/// %G eng %0 Conference Paper %B In Proceedings, AAAI Fall Symposium on Uncertainty in Computation %D 2001 %T Handling uncertainty with active logic %A Bhatia,M. %A Chi,P. %A Chong,W. %A Josyula,D. 
P %A Okamoto,Y. %A Perlis, Don %A Purang,K. %B In Proceedings, AAAI Fall Symposium on Uncertainty in Computation %8 2001/// %G eng %0 Conference Paper %B Thirteenth International Conference on Scientific and Statistical Database Management, 2001. SSDBM 2001. Proceedings %D 2001 %T Integrating distributed scientific data sources with MOCHA and XRoaster %A Rodriguez-Martinez,M. %A Roussopoulos, Nick %A McGann,J. M %A Kelley,S. %A Mokwa,J. %A White,B. %A Jala,J. %K client-server systems %K data sets %K data sites %K Databases %K Distributed computing %K distributed databases %K distributed scientific data source integration %K Educational institutions %K graphical tool %K hypermedia markup languages %K IP networks %K java %K Large-scale systems %K Maintenance engineering %K meta data %K metadata %K Middleware %K middleware system %K MOCHA %K Query processing %K remote sites %K scientific information systems %K user-defined types %K visual programming %K XML %K XML metadata elements %K XML-based framework %K XRoaster %X MOCHA is a novel middleware system for integrating distributed data sources that we have developed at the University of Maryland. MOCHA is based on the idea that the code that implements user-defined types and functions should be automatically deployed to remote sites by the middleware system itself. To this end, we have developed an XML-based framework to specify metadata about data sites, data sets, and user-defined types and functions. XRoaster is a graphical tool that we have developed to help the user create all the XML metadata elements to be used in MOCHA %B Thirteenth International Conference on Scientific and Statistical Database Management, 2001. SSDBM 2001. 
Proceedings %I IEEE %P 263 - 266 %8 2001/// %@ 0-7695-1218-6 %G eng %R 10.1109/SSDM.2001.938560 %0 Book Section %B Discovery Science %D 2001 %T Interactive Exploration of Time Series Data %A Hochheiser,Harry %A Shneiderman, Ben %E Jantke,Klaus %E Shinohara,Ayumi %X Widespread interest in discovering features and trends in time series has generated a need for tools that support interactive exploration. This paper introduces timeboxes: a powerful direct-manipulation metaphor for the specification of queries over time series datasets. Our TimeSearcher implementation of timeboxes supports interactive formulation and modification of queries, thus speeding the process of exploring time series data sets and guiding data mining. %B Discovery Science %S Lecture Notes in Computer Science %I Springer Berlin / Heidelberg %V 2226 %P 441 - 446 %8 2001/// %@ 978-3-540-42956-2 %G eng %U http://dx.doi.org/10.1007/3-540-45650-3_38 %0 Book Section %B Discovery Science %D 2001 %T Inventing Discovery Tools: Combining Information Visualization with Data Mining %A Shneiderman, Ben %E Jantke,Klaus %E Shinohara,Ayumi %X The growing use of information visualization tools and data mining algorithms stems from two separate lines of research. Information visualization researchers believe in the importance of giving users an overview and insight into the data distributions, while data mining researchers believe that statistical algorithms and machine learning can be relied on to find the interesting patterns. This paper discusses two issues that influence design of discovery tools: statistical algorithms vs. visual data presentation, and hypothesis testing vs. exploratory data analysis. I claim that a combined approach could lead to novel discovery tools that preserve user control, enable more effective exploration, and promote responsibility.
%B Discovery Science %S Lecture Notes in Computer Science %I Springer Berlin / Heidelberg %V 2226 %P 17 - 28 %8 2001/// %@ 978-3-540-42956-2 %G eng %U http://dx.doi.org/10.1007/3-540-45650-3_4 %0 Journal Article %J Visual Form 2001 %D 2001 %T Judging whether multiple silhouettes can come from the same object %A Jacobs, David W. %A Belhumeur,P. %A Jermyn,I. %X We consider the problem of recognizing an object from its silhouette. We focus on the case in which the camera translates, and rotates about a known axis parallel to the image, such as when a mobile robot explores an environment. In this case we present an algorithm for determining whether a new silhouette could come from the same object that produced two previously seen silhouettes. In a basic case, when cross-sections of each silhouette are single line segments, we can check for consistency between three silhouettes using linear programming. This provides the basis for methods that handle more complex cases. We show many experiments that demonstrate the performance of these methods when there is noise, some deviation from the assumptions of the algorithms, and partial occlusion. Previous work has addressed the problem of precisely reconstructing an object using many silhouettes taken under controlled conditions. Our work shows that recognition can be performed without complete reconstruction, so that a small number of images can be used, with viewpoints that are only partly constrained. %B Visual Form 2001 %P 532 - 541 %8 2001/// %G eng %R 10.1007/3-540-45129-3_49 %0 Conference Paper %B Computer Vision, 2001. ICCV 2001. Proceedings. Eighth IEEE International Conference on %D 2001 %T Lambertian reflectance and linear subspaces %A Basri,R. %A Jacobs, David W. 
%K functions;spherical %K harmonics;convex %K Lambertian %K lighting;object %K object;convex %K objects;convex %K optimization;isotropic %K programming;object %K recognition; %K recognition;reflectance %X We prove that the set of all reflectance functions (the mapping from surface normals to intensities) produced by Lambertian objects under distant, isotropic lighting lies close to a 9D linear subspace. This implies that the images of a convex Lambertian object obtained under a wide variety of lighting conditions can be approximated accurately with a low-dimensional linear subspace, explaining prior empirical results. We also provide a simple analytic characterization of this linear space. We obtain these results by representing lighting using spherical harmonics and describing the effects of Lambertian materials as the analog of a convolution. These results allow us to construct algorithms for object recognition based on linear methods as well as algorithms that use convex optimization to enforce non-negative lighting functions. %B Computer Vision, 2001. ICCV 2001. Proceedings. Eighth IEEE International Conference on %V 2 %P 383-390 vol.2 %8 2001/// %G eng %R 10.1109/ICCV.2001.937651 %0 Journal Article %J Computer Vision and Image Understanding %D 2001 %T Linear Fitting with Missing Data for Structure-from-Motion %A Jacobs, David W. %X Several vision problems can be reduced to the problem of fitting a linear surface of low dimension to data. These include determining affine structure from motion or from intensity images. These methods must deal with missing data; for example, in structure from motion, missing data will occur if some point features are not visible in the image throughout the motion sequence. Once data is missing, linear fitting becomes a nonlinear optimization problem. Techniques such as gradient descent require a good initial estimate of the solution to ensure convergence to the correct answer.
We propose a novel method for fitting a low rank matrix to a matrix with missing elements. This method produces a good starting point for descent-type algorithms and can produce an accurate solution without further refinement. We then focus on applying this method to the problem of structure-from-motion. We show that our method has desirable theoretical properties compared to previously proposed methods, because it can find solutions when there is less data present. We also show experimentally that our method provides good results compared to previously proposed methods. %B Computer Vision and Image Understanding %V 82 %P 57 - 81 %8 2001/04// %@ 1077-3142 %G eng %U http://www.sciencedirect.com/science/article/pii/S1077314201909063 %N 1 %R 10.1006/cviu.2001.0906 %0 Book Section %B From Fragments to Objects: Segmentation and Grouping in Vision %D 2001 %T Perceptual organization as generic object recognition %A Jacobs, David W. %E Thomas F. Shipley and Philip J. Kellman %X We approach some aspects of perceptual organization as the process of fitting generic models of objects to image data. A generic model of shape encodes prior knowledge of what shapes are likely to come from real objects. For such a model to be useful, it must also lead to efficient computations. We show that models of shape based on local properties of objects can be effectively used by simple, neurally plausible networks, and that they can still encode many perceptually important properties. We also discuss the relationship between perceptual salience and viewpoint invariance. Many gestalt properties are the subset of viewpoint invariant properties that can be encoded using the smallest possible sets of features, making them the ecologically valid properties that can also be used with computational efficiency.
These results suggest that implicit models of shape used in perceptual organization arise from a combination of ecological and computational constraints. Finally, we discuss experiments demonstrating the role of convexity in amodal completion. These experiments point out some of the limitations of simple local shape models, and indicate the potential role that the part structure of objects also plays in perceptual organization. %B From Fragments to Objects: Segmentation and Grouping in Vision %I North-Holland %V 130 %P 295 - 329 %8 2001/// %@ 0166-4115 %G eng %U http://www.sciencedirect.com/science/article/pii/S0166411501800303 %0 Journal Article %J Pattern Analysis and Machine Intelligence, IEEE Transactions on %D 2001 %T Projective alignment with regions %A Basri,R. %A Jacobs, David W. %K algebra;object %K alignment;projective %K approach;segmentation %K errors;image %K fixed %K objects;pose %K occlusion;planar %K patterns;partial %K points;flow %K recognition; %K recovery;projective %K segmentation;matrix %K transformations;region-based %X We have previously proposed (Basri and Jacobs, 1999, and Jacobs and Basri, 1999) an approach to recognition that uses regions to determine the pose of objects while allowing for partial occlusion of the regions. Regions introduce an attractive alternative to existing global and local approaches, since, unlike global features, they can handle occlusion and segmentation errors, and unlike local features they are not as sensitive to sensor errors, and they are easier to match. The region-based approach also uses image information directly, without the construction of intermediate representations, such as algebraic descriptions, which may be difficult to reliably compute. We further analyze properties of the method for planar objects undergoing projective transformations.
In particular, we prove that three visible regions are sufficient to determine the transformation uniquely and that for a large class of objects, two regions are insufficient for this purpose. However, we show that when several regions are available, the pose of the object can generally be recovered even when some or all regions are significantly occluded. Our analysis is based on investigating the flow patterns of points under projective transformations in the presence of fixed points. %B Pattern Analysis and Machine Intelligence, IEEE Transactions on %V 23 %P 519 - 527 %8 2001/05// %@ 0162-8828 %G eng %N 5 %R 10.1109/34.922709 %0 Conference Paper %B E-commerce Security and Privacy %D 2001 %T Provisional authorizations %A Jajodia,S. %A Kudo,M. %A V.S. Subrahmanian %X Past generations of access control systems, when faced with an access request, have issued a “yes” (resp. “no”) answer to the access request resulting in access being granted (resp. denied). In this chapter, we argue that for the world’s rapidly proliferating business to business (B2B) applications and auctions, “yes/no” responses are just not enough. We propose the notion of a “provisional authorization” which intuitively says “You may perform the desired access provided you cause condition C to be satisfied.” For instance, a user accessing an online brokerage may receive some information if he fills out his name/address, but not otherwise. While a variety of such provisional authorization mechanisms exist on the web, they are all hardcoded on an application by application basis. We show that given (almost) any logic L, we may define a provisional authorization specification language pASLL. pASLL is based on the declarative, polynomially evaluable authorization specification language ASL proposed by Jajodia et al [JSS97].
We define programs in pASLL, and specify how, given any access request, we must find a “weakest” precondition under which the access can be granted (in the worst case, if this weakest precondition is “false” this amounts to a denial). We develop a model theoretic semantics for pASLL and show how it can be applied to online sealed-bid auction servers and online contracting. %B E-commerce Security and Privacy %V 2 %P 133 - 159 %8 2001/// %G eng %R 10.1007/978-1-4615-1467-1_8 %0 Journal Article %J Perceptual Organization for Artificial Vision Systems %D 2000 %T Breakout Session Report: Principles and Methods %A Jacobs, David W. %A Malik,J. %A Nevatia, R. %X This report will present a summary of views presented during a discussion at the 1999 Workshop on Perceptual Organization in Computer Vision. Our goal is to present diverse views, informally expressed on principles and algorithms of perceptual organization. Naturally, such a discussion must be somewhat limited both by the time available and by the specific set of researchers who could be present. Still, we hope to describe some interesting ideas expressed and to note the number of areas of apparent consensus among a fairly broad group. In particular, we will describe views on the state of the art in perceptual grouping, and what seem to be key open questions and promising directions for addressing them. %B Perceptual Organization for Artificial Vision Systems %P 17 - 28 %8 2000/// %G eng %R 10.1007/978-1-4615-4413-5_2 %0 Journal Article %J IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI) %D 2000 %T Class representation and image retrieval with non-metric distances %A Jacobs, David W. %A Weinshall,D. %A Gdalyahu,Y. %X One of the key problems in appearance-based vision is understanding how to use a set of labeled images to classify new images.
Classification systems that can model human performance, or that use robust image matching methods, often make use of similarity judgments that are non-metric; but when the triangle inequality is not obeyed, most existing pattern recognition techniques are not applicable. We note that exemplar-based (or nearest-neighbor) methods can be applied naturally when using a wide class of non-metric similarity functions. The key issue, however, is to find methods for choosing good representatives of a class that accurately characterize it. We show that existing condensing techniques for finding class representatives are ill-suited to deal with non-metric dataspaces. We then focus on developing techniques for solving this problem, emphasizing two points: First, we show that the distance between two images is not a good measure of how well one image can represent another in non-metric spaces. Instead, we use the vector correlation between the distances from each image to other previously seen images. Second, we show that in non-metric spaces, boundary points are less significant for capturing the structure of a class than they are in Euclidean spaces. We suggest that atypical points may be more important in describing classes. We demonstrate the importance of these ideas to learning that generalizes from experience by improving performance using both synthetic and real images. %B IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI) %V 22 %P 583 - 600 %8 2000/// %G eng %N 6 %0 Journal Article %J Pattern Analysis and Machine Intelligence, IEEE Transactions on %D 2000 %T Classification with nonmetric distances: image retrieval and class representation %A Jacobs, David W. %A Weinshall,D. %A Gdalyahu,Y.
%K appearance-based %K by %K classification;image %K correlation;correlation %K dataspaces;nonmetric %K distances;nonmetric %K example; %K functions;nonmetric %K image %K inequality;vector %K judgments;robust %K MATCHING %K methods;image %K methods;nonmetric %K methods;triangle %K points;boundary %K points;class %K representation;exemplar-based %K representation;image %K retrieval;learning %K retrieval;nearest-neighbor %K similarity %K vision;atypical %X A key problem in appearance-based vision is understanding how to use a set of labeled images to classify new images. Systems that model human performance, or that use robust image matching methods, often use nonmetric similarity judgments; but when the triangle inequality is not obeyed, most pattern recognition techniques are not applicable. Exemplar-based (nearest-neighbor) methods can be applied to a wide class of nonmetric similarity functions. The key issue, however, is to find methods for choosing good representatives of a class that accurately characterize it. We show that existing condensing techniques are ill-suited to deal with nonmetric dataspaces. We develop techniques for solving this problem, emphasizing two points: First, we show that the distance between images is not a good measure of how well one image can represent another in nonmetric spaces. Instead, we use the vector correlation between the distances from each image to other previously seen images. Second, we show that in nonmetric spaces, boundary points are less significant for capturing the structure of a class than in Euclidean spaces. We suggest that atypical points may be more important in describing classes. We demonstrate the importance of these ideas to learning that generalizes from experience by improving performance. 
We also suggest ways of applying parametric techniques to supervised learning problems that involve specific nonmetric distance functions, showing how to generalize the idea of linear discriminant functions in a way that may be more useful in nonmetric spaces. %B Pattern Analysis and Machine Intelligence, IEEE Transactions on %V 22 %P 583 - 600 %8 2000/06// %@ 0162-8828 %G eng %N 6 %R 10.1109/34.862197 %0 Journal Article %J KLUWER INTERNATIONAL SERIES IN ENGINEERING AND COMPUTER SCIENCE %D 2000 %T Convexity in Perceptual Completion %A Liu,Z. %A Jacobs, David W. %A Basri,R. %B KLUWER INTERNATIONAL SERIES IN ENGINEERING AND COMPUTER SCIENCE %P 73 - 90 %8 2000/// %G eng %0 Journal Article %J International Journal of Computer Vision %D 2000 %T Design and use of linear models for image motion analysis %A Fleet,D. J %A Black,M. J %A Yacoob,Yaser %A Jepson,A. D %B International Journal of Computer Vision %V 36 %P 171 - 193 %8 2000/// %G eng %N 3 %0 Journal Article %J International Journal of Remote Sensing %D 2000 %T High performance computing algorithms for land cover dynamics using remote sensing data %A Kalluri, SNV %A JaJa, Joseph F. %A Bader, D.A. %A Zhang,Z. %A Townshend,J.R.G. %A Fallah-Adl,H. %X Global and regional land cover studies need to apply complex models on selected subsets of large volumes of multi-sensor and multi-temporal data sets that have been derived from raw instrument measurements using widely accepted pre-processing algorithms. The computational and storage requirements of most of these studies far exceed what is possible in a single workstation environment. We have been pursuing a new approach that couples scalable and open distributed heterogeneous hardware with the development of high performance software for processing, indexing and organizing remotely sensed data.
Hierarchical data management tools are used to ingest raw data, create metadata and organize the archived data so as to automatically achieve computational load balancing among the available nodes and minimize input/output overheads. We illustrate our approach with four specific examples. The first is the development of the first fast operational scheme for the atmospheric correction of Landsat Thematic Mapper scenes, while the second example focuses on image segmentation using a novel hierarchical connected components algorithm. Retrieval of the global Bidirectional Reflectance Distribution Function in the red and near-infrared wavelengths using four years (1983 to 1986) of Pathfinder Advanced Very High Resolution Radiometer (AVHRR) Land data is the focus of our third example. The fourth example is the development of a hierarchical data organization scheme that allows on-demand processing and retrieval of regional and global AVHRR data sets. Our results show that substantial reductions in computational times can be achieved by the high performance computing technology. %B International Journal of Remote Sensing %V 21 %P 1513 - 1536 %8 2000/// %@ 0143-1161 %G eng %U http://www.tandfonline.com/doi/abs/10.1080/014311600210290 %N 6-7 %R 10.1080/014311600210290 %0 Conference Paper %B Computer Vision and Pattern Recognition, 2000. Proceedings. IEEE Conference on %D 2000 %T In search of illumination invariants %A Chen,H. F %A Belhumeur,P. N. %A Jacobs, David W. %K comparison;image %K gradient;face %K invariants;image %K Lambertian %K recognition; %K recognition;illumination %K recognition;object %K reflectance;face %X We consider the problem of determining functions of an image of an object that are insensitive to illumination changes. We first show that for an object with Lambertian reflectance there are no discriminative functions that are invariant to illumination. 
This result leads us to adopt a probabilistic approach in which we analytically determine a probability distribution for the image gradient as a function of the surface's geometry and reflectance. Our distribution reveals that the direction of the image gradient is insensitive to changes in illumination direction. We verify this empirically by constructing a distribution for the image gradient from more than 20 million samples of gradients in a database of 1,280 images of 20 inanimate objects taken under varying lighting conditions. Using this distribution we develop an illumination insensitive measure of image comparison and test it on the problem of face recognition. %B Computer Vision and Pattern Recognition, 2000. Proceedings. IEEE Conference on %V 1 %P 254-261 vol.1 %8 2000/// %G eng %R 10.1109/CVPR.2000.855827 %0 Journal Article %J PE & RS- Photogrammetric Engineering and Remote Sensing %D 2000 %T Kronos: A software system for the processing and retrieval of large-scale AVHRR data sets %A Zhang,Z. %A JaJa, Joseph F. %A Bader, D.A. %A Kalluri, SNV %A Song,H. %A El Saleous,N. %A Vermote,E. %A Townshend,J.R.G. %X Raw remotely sensed satellite data have to be processed and mapped into a standard projection in order to produce a multi-temporal data set which can then be used for regional or global Earth science studies. However, traditional methods of processing remotely sensed satellite data have inherent limitations because they are based on a fixed processing chain. Different users may need the data in different forms with possibly different processing steps; hence, additional transformations may have to be applied to the processed data, resulting in potentially significant errors. In this paper, we describe a software system, Kronos, for the generation of custom-tailored products from the Advanced Very High Resolution Radiometer (AVHRR) sensor.
It allows the generation of a rich set of products that can be easily specified through a simple interface by scientists wishing to carry out Earth system modeling or analysis. Kronos is based on a flexible methodology and consists of four major components: ingest and preprocessing, indexing and storage, a search and processing engine, and a Java interface. After geo-location and calibration, every pixel is indexed and stored using a combination of data structures. Following the users' queries, data are selectively retrieved and secondary processing such as atmospheric correction, compositing, and projection are performed as specified. The processing is divided into two stages, the first of which involves the geo-location and calibration of the remotely sensed data and, hence, results in no loss of information. The second stage involves the retrieval of the appropriate data subsets and the application of the secondary processing specified by the user. This scheme allows the indexing and the storage of data from different sensors without any loss of information and, therefore, allows assimilation of data from multiple sensors. User specified processing can be applied later as needed. %B PE & RS- Photogrammetric Engineering and Remote Sensing %V 66 %P 1073 - 1082 %8 2000/// %G eng %N 9 %0 Conference Paper %B Proceedings of the 2000 ACM SIGMOD International Conference on Management of Data, Dallas, Texas %D 2000 %T MOCHA: a database middleware system featuring automatic deployment of application-specific functionality %A Rodrıguez-Martinez,M. %A Roussopoulos, Nick %A McGann,J. M %A Keyley,S. %A Katz,V. %A Song,Z. %A JaJa, Joseph F. %B Proceedings of the 2000 ACM SIGMOD International Conference on Management of Data, Dallas, Texas %8 2000/// %G eng %0 Journal Article %J Proc. IEEE Pacific RIM Conference on Multimedia, Sydney, Australia %D 2000 %T A Neural Network Approach for Predicting Network Resource Requirements in Video Transmission Systems %A Wong,H.S. %A Wu,M. 
%A Joyce,R.A. %A Guan,L. %A Kung, S.Y. %X Dynamic resource allocation is important for ensuring efficient network utilization in Internet-based multimedia content delivery systems. To allow accurate network traffic prediction in the case of video delivery, relevant information based on video contents and the short-term traffic pattern has to be taken into account, while the inclusion of non-relevant features will deteriorate the prediction performance due to the "curse of dimensionality" problem. In this work, we propose a neural network-based prediction system and specifically address the determination of relevant input features for the system. Experiments have shown that the current system is capable of identifying a highly relevant subset of features for traffic prediction given a large number of video content and short-term network traffic descriptors. %B Proc. IEEE Pacific RIM Conference on Multimedia, Sydney, Australia %8 2000/// %G eng %0 Journal Article %J Perceptual Organization for Artificial Vision Systems %D 2000 %T Perceptual Completion Behind Occluders: The Role of Convexity %A Liu,Z. %A Jacobs, David W. %A Basri,R. %X An important problem in perceptual organization is to determine whether two image regions belong to the same object. In this chapter, we consider the situation when two image regions potentially group into a single object behind a common occluder. We study the extent to which two image regions are grouped more strongly than two other image regions. Existing theories in both human and computer vision have mainly emphasized the role of good continuation. Namely, the shorter and smoother the completing contours are between two image regions, the stronger the grouping will be. In contrast, Jacobs [3] has proposed a theory that considers relative positions and orientations of two image regions.
For instance, the theory predicts that two image regions that can be linked by convex completing contours are grouped more strongly than those that can only be linked by concave completing contours, even though the completing contours are identical in shape. We present, in addition to our previous work (Liu, Jacobs, and Basri, 1999), human psychophysical evidence that concurs with the predictions of this theory and suggests an important role of convexity in perceptual grouping. %B Perceptual Organization for Artificial Vision Systems %P 73 - 90 %8 2000/// %G eng %R 10.1007/978-1-4615-4413-5_6 %0 Journal Article %J Computing in Science Engineering %D 2000 %T A perspective on Quicksort %A JaJa, Joseph F. %K algorithm; %K algorithms; %K analysis; %K complexity %K complexity; %K computational %K geometry; %K Parallel %K Quicksort %K sorting; %X This article introduces the basic Quicksort algorithm and gives a flavor of the richness of its complexity analysis. The author also provides a glimpse of some of its generalizations to parallel algorithms and computational geometry. %B Computing in Science Engineering %V 2 %P 43 - 49 %8 2000/02// %@ 1521-9615 %G eng %N 1 %R 10.1109/5992.814657 %0 Journal Article %J AIAA Paper No. 00-2311 %D 2000 %T Toward the large-eddy simulation over a hypersonic elliptical cross-section cone %A Martin, M.P %A Weirs,G. %A Candler,G. V %A Piomelli,U. %A Johnson,H. %A Nompelis,I. %B AIAA Paper No. 00-2311 %8 2000/// %G eng %0 Book Section %B Approximation Algorithms for Combinatorial Optimization %D 2000 %T Wavelength Rerouting in Optical Networks, or the Venetian Routing Problem %A Caprara,Alberto %A Italiano,Giuseppe %A Mohan,G. %A Panconesi,Alessandro %A Srinivasan, Aravind %E Jansen,Klaus %E Khuller, Samir %X Wavelength rerouting has been suggested as a viable and cost-effective method to improve the blocking performance of wavelength-routed Wavelength-Division Multiplexing (WDM) networks.
This method leads to the following combinatorial optimization problem, dubbed Venetian Routing. Given a directed multigraph G along with two vertices s and t and a collection of pairwise arc-disjoint paths, we wish to find an st-path which arc-intersects the smallest possible number of such paths. In this paper we prove the computational hardness of this problem even in various special cases, and present several approximation algorithms for its solution. In particular we show a non-trivial connection between Venetian Routing and Label Cover. %B Approximation Algorithms for Combinatorial Optimization %S Lecture Notes in Computer Science %I Springer Berlin / Heidelberg %V 1913 %P 71 - 84 %8 2000/// %@ 978-3-540-67996-7 %G eng %U http://dx.doi.org/10.1007/3-540-44436-X_9 %0 Conference Paper %B Geoscience and Remote Sensing Symposium, 2000. Proceedings. IGARSS 2000. IEEE 2000 International %D 2000 %T Web based progressive transmission for browsing remotely sensed imagery %A Mareboyana,M. %A Srivastava,S. %A JaJa, Joseph F. %K Web based progressive transmission %K browsing %K remote sensing %K image representation %K user specified regions of interest %K large images %K wavelet decomposition %K wavelet transforms %K vector quantisation %K scalar quantization %K model-based VQ %K progressive refinement %K geophysical signal processing %K geophysical measurement techniques %K terrain mapping %K land surface mapping %X This paper describes an image representation technique that entails progressive refinement of user specified regions of interest (ROI) of large images. Progressive refinement to original quality can be accomplished in theory. However, due to the heavy burden on storage resources for the authors' applications, they restrict the refinement to about 25% of the original data resolution.
A wavelet decomposition of the data combined with scalar and vector quantization (VQ) of the high frequency components and JPEG/DCT compression of the low frequency component is used as the representation framework. Their software will reconstruct the region selected by the user from its wavelet decomposition such that it fills up the preview window with the appropriate subimages at the desired resolution, up to the full resolution stored. Further refinement from the first preview can be obtained progressively by transmitting high frequency coefficients from low resolution to high resolution, which are compressed by a variant of vector quantization called model-based VQ (MVQ). The user has the option of progressively building up the ROIs to the full stored resolution, or of terminating the transmission at any time during the progressive refinement. %B Geoscience and Remote Sensing Symposium, 2000. Proceedings. IGARSS 2000. IEEE 2000 International %V 2 %P 591 - 593 vol.2 %8 2000/// %G eng %R 10.1109/IGARSS.2000.861640 %0 Journal Article %J Perceptual organization for artificial vision systems %D 2000 %T What makes viewpoint invariant properties perceptually salient?: A computational perspective %A Jacobs, David W. %X Many perceptually salient image properties identified by the Gestalt psychologists, such as collinearity, parallelism, and good continuation, are viewpoint invariant. That is, there exist scene structures that always produce images with these properties regardless of viewpoint, while other scene structures virtually never produce these properties. This has suggested that the perceptual salience of viewpoint invariants is due to the leverage they provide for inferring 3-D properties of objects and scenes. However, we show that viewpoint invariance is not sufficient to distinguish these Gestalt properties; one can define an infinite number of viewpoint invariant properties that are not perceptually salient.
We then show that generally, the perceptually salient viewpoint invariant properties are minimal, in the sense that they can be derived using less image information than non-salient properties. Computations that use minimal properties are more tractable than those requiring higher order properties. This provides support for the hypothesis that the biological relevance of an image property is determined both by the extent to which it provides information about the world and by the ease with which this property can be computed. %B Perceptual organization for artificial vision systems %P 121 - 138 %8 2000/// %G eng %R 10.1007/978-1-4615-4413-5_8 %0 Journal Article %J International Journal of Computer Vision %D 1999 %T 3-d to 2-d pose determination with regions %A Jacobs, David W. %A Basri,R. %X This paper presents a novel approach to parts-based object recognition in the presence of occlusion. We focus on the problem of determining the pose of a 3-D object from a single 2-D image when convex parts of the object have been matched to corresponding regions in the image. We consider three types of occlusions: self-occlusion, occlusions whose locus is identified in the image, and completely arbitrary occlusions. We show that in the first two cases this is a convex optimization problem, derive efficient algorithms, and characterize their performance. For the last case, we prove that the problem of finding valid poses is computationally hard, but provide an efficient, approximate algorithm. This work generalizes our previous work on region-based object recognition, which focused on the case of planar models. %B International Journal of Computer Vision %V 34 %P 123 - 145 %8 1999/// %G eng %N 2 %R 10.1023/A:1008135819955 %0 Journal Article %J Breadth and depth of semantic lexicons %D 1999 %T ACQUISITION OF SEMANTIC LEXICONS %A Dorr, Bonnie J %A Jones,D. 
%X This paper addresses the problem of large-scale acquisition of computational-semantic lexicons from machine-readable resources. We describe semantic filters designed to reduce the number of incorrect assignments (i.e., improve precision) made by a purely syntactic technique. We demonstrate that it is possible to use these filters to build broad-coverage lexicons with minimal effort, at a depth of knowledge that lies at the syntax-semantics interface. We report on our results of disambiguating the verbs in the semantic filters by adding WordNet sense annotations. We then show the results of our classification on unknown words and we evaluate these results. %B Breadth and depth of semantic lexicons %V 10 %P 79 - 79 %8 1999/// %G eng %0 Report %D 1999 %T An analysis of the Rayleigh-Ritz method for approximating eigenspaces %A Jia,Zhongxiao %A Stewart, G.W. %X This paper concerns the Rayleigh–Ritz method for computing an approximation to an eigenspace X of a general matrix A from a subspace W that contains an approximation to X. The method produces a pair (N, X̃) that purports to approximate a pair (L, X), where X is a basis for X and AX = XL. In this paper we consider the convergence of (N, X̃) as the sine ε of the angle between X and W approaches zero. It is shown that under a natural hypothesis — called the uniform separation condition — the Ritz pairs (N, X̃) converge to the eigenpair (L, X). When one is concerned with eigenvalues and eigenvectors, one can compute certain refined Ritz vectors whose convergence is guaranteed, even when the uniform separation condition is not satisfied. An attractive feature of the analysis is that it does not assume that A has distinct eigenvalues or is diagonalizable. %I Math. Comp %8 1999/// %G eng %0 Report %D 1999 %T Building mosaics from video using MPEG motion vectors %A Jones,R. %A DeMenthon,D. %A David Doermann %X In this paper we present a novel way of creating mosaics from an MPEG video sequence.
Two original aspects of our work are that (1) we explicitly compute camera motion between frames and (2) we deduce the camera motion directly from the motion vectors encoded in the MPEG video stream. This enables us to create mosaics more simply and quickly than with other methods. %I University of Maryland, College Park %V LAMP-TR-035,CAR-TR-918,CS-TR-4034 %8 1999/07// %G eng %0 Conference Paper %B ACM Conference on Multimedia %D 1999 %T Building Mosaics from Video Using MPEG Motion Vectors %A Jones,R. %A DeMenthon,D. %A Doermann, David %X In this paper we present a novel way of creating mosaics from an MPEG video sequence. Two original aspects of our work are that (1) we explicitly compute camera motion between frames and (2) we deduce the camera motion directly from the motion vectors encoded in the MPEG video stream. This enables us to create mosaics more simply and quickly than with other methods. %B ACM Conference on Multimedia %P 29 - 32 %8 1999/11// %G eng %0 Conference Paper %B Proceedings of the 1998 conference on Advances in neural information processing systems II %D 1999 %T Classification in non-metric spaces %A Weinshall,Daphna %A Jacobs, David W. %A Gdalyahu,Yoram %B Proceedings of the 1998 conference on Advances in neural information processing systems II %I MIT Press %C Cambridge, MA, USA %P 838 - 844 %8 1999/// %@ 0-262-11245-0 %G eng %U http://dl.acm.org/citation.cfm?id=340534.340816 %0 Journal Article %J Algorithm Engineering and Experimentation %D 1999 %T Designing practical efficient algorithms for symmetric multiprocessors %A Helman,D. %A JaJa, Joseph F. %X Symmetric multiprocessors (SMPs) dominate the high-end server market and are currently the primary candidate for constructing large scale multiprocessor systems. Yet, the design of efficient parallel algorithms for this platform currently poses several challenges. In this paper, we present a computational model for designing efficient algorithms for symmetric multiprocessors.
We then use this model to create efficient solutions to two widely different types of problems - linked list prefix computations and generalized sorting. Our novel algorithm for prefix computations builds upon the sparse ruling set approach of Reid-Miller and Blelloch. Besides being somewhat simpler and requiring nearly half the number of memory accesses, we can bound our complexity with high probability instead of merely on average. Our algorithm for generalized sorting is a modification of our algorithm for sorting by regular sampling on distributed memory architectures. The algorithm is a stable sort which appears to be asymptotically faster than any of the published algorithms for SMPs. Both of our algorithms were implemented in C using POSIX threads and run on four symmetric multiprocessors - the IBM SP-2 (High Node), the HP-Convex Exemplar (S-Class), the DEC AlphaServer, and the Silicon Graphics Power Challenge. We ran our code for each algorithm using a variety of benchmarks which we identified to examine the dependence of our algorithm on memory access patterns. In spite of the fact that the processors must compete for access to main memory, both algorithms still yielded scalable performance up to 16 processors, which was the largest platform available to us. For some problems, our prefix computation algorithm actually matched or exceeded the performance of the standard sequential solution using only a single thread. Similarly, our generalized sorting algorithm always beat the performance of sequential merge sort by at least an order of magnitude, even with a single thread. %B Algorithm Engineering and Experimentation %P 663 - 663 %8 1999/// %G eng %R 10.1007/3-540-48518-X_3 %0 Conference Paper %B Geoscience and Remote Sensing Symposium, 1999. IGARSS '99 Proceedings. IEEE 1999 International %D 1999 %T Developing the next generation of Earth science data systems: the Global Land Cover Facility %A Lindsay,F.E. %A Townshend,J.R.G. %A JaJa, Joseph F. 
%A Humphries,J. %A Plaisant, Catherine %A Shneiderman, Ben %K Computer architecture %K data archiving %K data distribution system %K Data systems %K Distributed computing %K Earth science data products %K Earth science data system %K ESIP %K geographic information system %K geographic information systems %K Geography %K geophysical measurement technique %K geophysical signal processing %K geophysical techniques %K Geoscience %K GIS %K GLCF %K Global Land Cover Facility %K High performance computing %K Indexing %K information service %K Information services %K Institute for Advanced Computer Studies %K land cover %K NASA %K next generation %K PACS %K Remote sensing %K terrain mapping %K UMIACS %K University of Maryland %K User interfaces %K web-based interface %X A recent initiative by NASA has resulted in the formation of a federation of Earth science data partners. These Earth Science Information Partners (ESIPs) have been tasked with creating novel Earth science data products and services as well as distributing new and existing data sets to the Earth science community and the general public. The University of Maryland established its ESIP activities with the creation of the Global Land Cover Facility (GLCF). This joint effort of the Institute for Advanced Computer Studies (UMIACS) and the Department of Geography has developed an operational data archiving and distribution system aimed at advancing current land cover research efforts. The success of the GLCF is tied closely to assessing user needs as well as to the timely delivery of data products to the research community. This paper discusses the development and implementation of a web-based interface that allows users to query the authors' data holdings and perform user requested processing tasks on demand. The GLCF takes advantage of a scalable, high performance computing architecture for the manipulation of very large remote sensing data sets and the rapid spatial indexing of multiple format data types.
The user interface has been developed with the cooperation of the Human-Computer Interaction Laboratory (HCIL) and demonstrates advances in spatial and temporal querying tools as well as the ability to overlay multiple raster and vector data sets. Their work provides one perspective concerning how critical earth science data may be handled in the near future by a coalition of distributed data centers. %B Geoscience and Remote Sensing Symposium, 1999. IGARSS '99 Proceedings. IEEE 1999 International %I IEEE %V 1 %P 616 - 618 vol.1 %8 1999/// %@ 0-7803-5207-6 %G eng %R 10.1109/IGARSS.1999.773583 %0 Journal Article %J International Journal on Digital Libraries %D 1999 %T The end of zero-hit queries: query previews for NASA’s Global Change Master Directory %A Greene,Stephan %A Tanin,Egemen %A Plaisant, Catherine %A Shneiderman, Ben %A Olsen,Lola %A Major,Gene %A Johns,Steve %X The Human-Computer Interaction Laboratory (HCIL) of the University of Maryland and NASA have collaborated over three years to refine and apply user interface research concepts developed at HCIL in order to improve the usability of NASA data services. The research focused on dynamic query user interfaces, visualization, and overview + preview designs. An operational prototype, using query previews, was implemented with NASA’s Global Change Master Directory (GCMD), a directory service for earth science datasets. Users can see the histogram of the data distribution over several attributes and choose among attribute values. A result bar shows the cardinality of the result set, thereby preventing users from submitting queries that would have zero hits. Our experience confirmed the importance of metadata accuracy and completeness. The query preview interfaces make visible the problems or gaps in the metadata that are undetectable with classic form fill-in interfaces.
This could be seen as a problem, but we think that it will have a long-term beneficial effect on the quality of the metadata as data providers will be compelled to produce more complete and accurate metadata. The adaptation of the research prototype to the NASA data required revised data structures and algorithms. %B International Journal on Digital Libraries %V 2 %P 79 - 90 %8 1999/// %@ 1432-5012 %G eng %U http://dx.doi.org/10.1007/s007990050039 %N 2 %0 Conference Paper %B Geoscience and Remote Sensing Symposium, 1999. IGARSS '99 Proceedings. IEEE 1999 International %D 1999 %T A hierarchical data archiving and processing system to generate custom tailored products from AVHRR data %A Kalluri, SNV %A Zhang,Z. %A JaJa, Joseph F. %A Bader, D.A. %A Song,H. %A El Saleous,N. %A Vermote,E. %A Townshend,J.R.G. %K AVHRR %K GIS %K PACS %K custom tailored products %K data archiving %K data processing %K hierarchical processing system %K indexing scheme %K infrared image %K multispectral image %K land surface %K geophysical measurement techniques %K geophysical signal processing %K optical remote sensing %K terrain mapping %K remote sensing %X A novel indexing scheme is described to catalogue satellite data on a pixel basis. The objective of this research is to develop an efficient methodology to archive, retrieve and process satellite data, so that data products can be generated to meet the specific needs of individual scientists. When requesting data, users can specify the spatial and temporal resolution, geographic projection, choice of atmospheric correction, and the data selection methodology. The data processing is done in two stages. Satellite data is calibrated, navigated and quality flags are appended in the initial processing. This processed data is then indexed and stored. Secondary processing such as atmospheric correction and projection are done after a user requests the data to create custom made products.
Dividing the processing into two stages saves time, since the basic processing tasks such as navigation and calibration, which are common to all requests, are not repeated when different users request satellite data. The indexing scheme described can be extended to allow fusion of data sets from different sensors. %B Geoscience and Remote Sensing Symposium, 1999. IGARSS '99 Proceedings. IEEE 1999 International %V 5 %P 2374 - 2376 vol.5 %8 1999/// %G eng %R 10.1109/IGARSS.1999.771514 %0 Patent %D 1999 %T Linear fitting with missing data: applications to structure-from-motion and to characterizing intensity images %A Jacobs, David W. %E NEC Research Institute, Inc. %X A method for generating a complete scene structure from a video sequence that provides incomplete data. The method has a first step of building a first matrix consisting of point locations from a motion sequence by acquiring a sequence of images of a fixed scene using a moving camera; identifying and tracking point features through the sequence; and using the coordinates of the features to build the first matrix with some missing elements where some features are not present in some images. In a second step an approximate solution is built by selecting triples of columns from the first matrix; forming their nullspaces into a second matrix; and taking the three smallest components of the second matrix. In a third step, an iterative algorithm is applied to the three smallest components to build a third matrix and to improve the estimate. Lastly, in a fourth step the third matrix is decomposed to determine the complete scene structure. Another aspect of the present invention are... %8 1999/12/28/ %G eng %U http://www.google.com/patents?id=7LkYAAAAEBAJ %N 6009437 %0 Book %D 1999 %T Perceptual Organization in Computer Vision %A Boyer,K.L. %A Sarkar,S. %A Feldman,J. %A Granlund,G. %A Horaud,R. %A Hutchinson,S. %A Jacobs, David W. %A Kak,A. %A Lowe,D. %A Malik,J.
%I Academic Press %8 1999/// %G eng %0 Conference Paper %B Parallel and Distributed Processing, 1999. 13th International and 10th Symposium on Parallel and Distributed Processing, 1999. 1999 IPPS/SPDP. Proceedings %D 1999 %T Prefix computations on symmetric multiprocessors %A Helman,D.R. %A JaJa, Joseph F. %K prefix computations %K symmetric multiprocessors %K sparse ruling set approach %K memory access patterns %K scalable performance %K high-end server market %K large scale multiprocessor systems %K DEC AlphaServer %K HP-Convex Exemplar %K IBM SP-2 %K Silicon Graphics Power Challenge %K POSIX threads %K Unix %K distributed algorithms %X We introduce a new optimal prefix computation algorithm on linked lists which builds upon the sparse ruling set approach of Reid-Miller and Blelloch. Besides being somewhat simpler and requiring nearly half the number of memory accesses, we can bound our complexity with high probability instead of merely on average. Moreover, whereas Reid-Miller and Blelloch (1996) targeted their algorithm for implementation on a vector multiprocessor architecture, we develop our algorithm for implementation on the symmetric multiprocessor architecture (SMP). These symmetric multiprocessors dominate the high-end server market and are currently the primary candidate for constructing large scale multiprocessor systems. Our prefix computation algorithm was implemented in C using POSIX threads and run on four symmetric multiprocessors - the IBM SP-2 (High Node), the HP-Convex Exemplar (S-Class), the DEC AlphaServer, and the Silicon Graphics Power Challenge. We ran our code using a variety of benchmarks which we identified to examine the dependence of our algorithm on memory access patterns. For some problems, our algorithm actually matched or exceeded the performance of the standard sequential solution using only a single thread.
Moreover, in spite of the fact that the processors must compete for access to main memory, our algorithm still achieved scalable performance with up to 16 processors, which was the largest platform available to us. %B Parallel and Distributed Processing, 1999. 13th International and 10th Symposium on Parallel and Distributed Processing, 1999. 1999 IPPS/SPDP. Proceedings %P 7 - 13 %8 1999/04// %G eng %R 10.1109/IPPS.1999.760427 %0 Journal Article %J Vision Research %D 1999 %T The role of convexity in perceptual completion: beyond good continuation %A Liu,Zili %A Jacobs, David W. %A Basri,Ronen %K Amodal completion %K Convexity %K Good continuation %K Grouping %K Stereoscopic depth %X Since the seminal work of the Gestalt psychologists, there has been great interest in understanding what factors determine the perceptual organization of images. While the Gestaltists demonstrated the significance of grouping cues such as similarity, proximity and good continuation, it has not been well understood whether their catalog of grouping cues is complete — in part due to the paucity of effective methodologies for examining the significance of various grouping cues. We describe a novel, objective method to study perceptual grouping of planar regions separated by an occluder. We demonstrate that the stronger the grouping between two such regions, the harder it will be to resolve their relative stereoscopic depth. We use this new method to call into question many existing theories of perceptual completion (Ullman, S. (1976). Biological Cybernetics, 25, 1–6; Shashua, A., & Ullman, S. (1988). 2nd International Conference on Computer Vision (pp. 321–327); Parent, P., & Zucker, S. (1989). IEEE Transactions on Pattern Analysis and Machine Intelligence, 11, 823–839; Kellman, P. J., & Shipley, T. F. (1991). Cognitive psychology, Liveright, New York; Heitger, R., & von der Heydt, R. (1993).
A computational model of neural contour processing, figure-ground segregation and illusory contours. In International Conference on Computer Vision (pp. 32–40); Mumford, D. (1994). Algebraic geometry and its applications, Springer, New York; Williams, L. R., & Jacobs, D. W. (1997). Neural Computation, 9, 837–858) that are based on Gestalt grouping cues by demonstrating that convexity plays a strong role in perceptual completion. In some cases convexity dominates the effects of the well known Gestalt cue of good continuation. While convexity has been known to play a role in figure/ground segmentation (Rubin, 1927; Kanizsa & Gerbino, 1976), this is the first demonstration of its importance in perceptual completion. %B Vision Research %V 39 %P 4244 - 4257 %8 1999/12// %@ 0042-6989 %G eng %U http://www.sciencedirect.com/science/article/pii/S0042698999001418 %N 25 %R 10.1016/S0042-6989(99)00141-8 %0 Journal Article %J Journal of Parallel and Distributed Computing %D 1999 %T Simple: A Methodology for Programming High Performance Algorithms on Clusters of Symmetric Multiprocessors (SMPs) %A Bader,David A. %A JaJa, Joseph F. %K cluster computing %K communication primitives %K experimental parallel algorithms %K message passing (MPI) %K Parallel algorithms %K parallel performance %K shared memory %K symmetric multiprocessors (SMP) %X We describe a methodology for developing high performance programs running on clusters of SMP nodes. The SMP cluster programming methodology is based on a small prototype kernel (Simple) of collective communication primitives that make efficient use of the hybrid shared and message-passing environment. We illustrate the power of our methodology by presenting experimental results for sorting integers, two-dimensional fast Fourier transforms (FFT), and constraint-satisfied searching. Our testbed is a cluster of DEC AlphaServer 2100 4/275 nodes interconnected by an ATM switch.
%B Journal of Parallel and Distributed Computing %V 58 %P 92 - 108 %8 1999/07// %@ 0743-7315 %G eng %U http://www.sciencedirect.com/science/article/pii/S0743731599915411 %N 1 %R 10.1006/jpdc.1999.1541 %0 Book Section %B Advances in Computers %D 1999 %T A Survey of Current Paradigms in Machine Translation %A Dorr, Bonnie J %A Jordan,Pamela W. %A Benoit,John W. %E Zelkowitz, Marvin V %X This paper is a survey of the current machine translation research in the US, Europe and Japan. A short history of machine translation is presented first, followed by an overview of the current research work. Representative examples of a wide range of different approaches adopted by machine translation researchers are presented. These are described in detail along with a discussion of the practicalities of scaling up these approaches for operational environments. In support of this discussion, issues in, and techniques for, evaluating machine translation systems are addressed. %B Advances in Computers %I Elsevier %V Volume 49 %P 1 - 68 %8 1999/// %@ 0065-2458 %G eng %U http://www.sciencedirect.com/science/article/pii/S006524580860282X %0 Journal Article %J Readings in information visualization: using vision to think %D 1999 %T TennisViewer: A Browser for Competition Trees %A Johnson,B. %A Shneiderman, Ben %A Baker,MJ %A Eick,SG %B Readings in information visualization: using vision to think %P 149 - 149 %8 1999/// %G eng %0 Conference Paper %B Proceedings of the ACM 1999 conference on Java Grande %D 1999 %T Transparent communication for distributed objects in Java %A Hicks, Michael W. %A Jagannathan,Suresh %A Kelsey,Richard %A Moore,Jonathan T. 
%A Ungureanu,Cristian %B Proceedings of the ACM 1999 conference on Java Grande %S JAVA '99 %I ACM %C New York, NY, USA %P 160 - 170 %8 1999/// %@ 1-58113-161-5 %G eng %U http://doi.acm.org/10.1145/304065.304119 %R 10.1145/304065.304119 %0 Report %D 1999 %T XMT-M: A Scalable Decentralized Processor %A Berkovich,Efraim %A Nuzman,Joseph %A Franklin,Manoj %A Jacob,Bruce %A Vishkin, Uzi %K Technical Report %X A defining challenge for research in computer science and engineering has been the ongoing quest for reducing the completion time of a single computation task. Even outside the parallel processing communities, there is little doubt that the key to further progress in this quest is to do parallel processing of some kind. A recently proposed parallel processing framework that spans the entire spectrum from (parallel) algorithms to architecture to implementation is the explicit multi-threading (XMT) framework. This framework provides: (i) simple and natural parallel algorithms for essentially every general-purpose application, including notoriously difficult irregular integer applications, and (ii) a multi-threaded programming model for these algorithms which allows an ``independence-of-order'' semantics: every thread can proceed at its own speed, independent of other concurrent threads. To the extent possible, the XMT framework uses established ideas in parallel processing. This paper presents XMT-M, a microarchitecture implementation of the XMT model that is possible with current technology. XMT-M offers an engineering design point that addresses four concerns: buildability, programmability, performance, and scalability. The XMT-M hardware is geared to execute multiple threads in parallel on a single chip: relying on very few new gadgets, it can execute parallel threads without busy-waits! Existing code can be run on XMT-M as a single thread without any modifications, thereby providing backward compatibility for commercial acceptance.
Simulation-based studies of XMT-M demonstrate considerable improvements in performance relative to the best serial processor even for small, and therefore practical, input sizes. (Also cross-referenced as UMIACS-TR-99-55) %I Institute for Advanced Computer Studies, Univ of Maryland, College Park %V UMIACS-TR-99-55 %8 1999/10/09/ %G eng %U http://drum.lib.umd.edu//handle/1903/1030 %0 Conference Paper %B Computer Vision and Pattern Recognition, 1998. Proceedings. 1998 IEEE Computer Society Conference on %D 1998 %T Clustering appearances of 3D objects %A Basri,R. %A Roth,D. %A Jacobs, David W. %K 3D objects %K unsupervised clustering %K image sequences %K local properties %K reliable clustering %K object recognition %X We introduce a method for unsupervised clustering of images of 3D objects. Our method examines the space of all images and partitions the images into sets that form smooth and parallel surfaces in this space. It further uses sequences of images to obtain more reliable clustering. Finally, since our method relies on a non-Euclidean similarity measure we introduce algebraic techniques for estimating local properties of these surfaces without first embedding the images in a Euclidean space. We demonstrate our method by applying it to a large database of images. %B Computer Vision and Pattern Recognition, 1998. Proceedings. 1998 IEEE Computer Society Conference on %P 414 - 420 %8 1998/06// %G eng %R 10.1109/CVPR.1998.698639 %0 Conference Paper %B Computer Vision and Pattern Recognition, 1998. Proceedings. 1998 IEEE Computer Society Conference on %D 1998 %T Comparing images under variable illumination %A Jacobs, David W. %A Belhumeur,P. N. %A Basri,R.
%K variable illumination %K Lambertian reflectance model %K point sources %K lighting conditions %K pose %K image matching %K object recognition %K visual databases %X We consider the problem of determining whether two images come from different objects or the same object in the same pose, but under different illumination conditions. We show that this problem cannot be solved using hard constraints: even using a Lambertian reflectance model, there is always an object and a pair of lighting conditions consistent with any two images. Nevertheless, we show that for point sources and objects with Lambertian reflectance, the ratio of two images from the same object is simpler than the ratio of images from different objects. We also show that the ratio of the two images provides two of the three distinct values in the Hessian matrix of the object's surface. Using these observations, we develop a simple measure for matching images under variable illumination, comparing its performance to other existing methods on a database of 450 images of 10 individuals. %B Computer Vision and Pattern Recognition, 1998. Proceedings. 1998 IEEE Computer Society Conference on %P 610 - 617 %8 1998/06// %G eng %R 10.1109/CVPR.1998.698668 %0 Conference Paper %B Computer Vision, 1998. Sixth International Conference on %D 1998 %T Condensing image databases when retrieval is based on non-metric distances %A Jacobs, David W. %A Weinshall,D. %A Gdalyahu,Y. %K appearance-based vision %K non-metric dataspaces %K image matching methods %K pattern recognition %K classification %K query processing %K visual databases %K image databases %X One of the key problems in appearance-based vision is understanding how to use a set of labeled images to classify new images.
Classification systems that can model human performance, or that use robust image matching methods, often make use of similarity judgments that are non-metric, but when the triangle inequality is not obeyed, most existing pattern recognition techniques are not applicable. We note that exemplar-based (or nearest-neighbor) methods can be applied naturally when using a wide class of non-metric similarity functions. The key issue, however, is to find methods for choosing good representatives of a class that accurately characterize it. We note that existing condensing techniques for finding class representatives are ill-suited to deal with non-metric dataspaces. We then focus on developing techniques for solving this problem, emphasizing two points: First, we show that the distance between two images is not a good measure of how well one image can represent another in non-metric spaces. Instead, we use the vector correlation between the distances from each image to other previously seen images. Second, we show that in non-metric spaces, boundary points are less significant for capturing the structure of a class than they are in Euclidean spaces. We suggest that atypical points may be more important in describing classes. We demonstrate the importance of these ideas to learning that generalizes from experience by improving performance using both synthetic and real images. %B Computer Vision, 1998. Sixth International Conference on %P 596 - 601 %8 1998/01// %G eng %R 10.1109/ICCV.1998.710778 %0 Journal Article %J Vision Research %D 1998 %T Determining the similarity of deformable shapes %A Basri,Ronen %A Costa,Luiz %A Geiger,Davi %A Jacobs, David W. %X Determining the similarity of two shapes is a significant task in both machine and human vision systems that must recognize or classify objects. 
The exact properties of human shape similarity judgements are not well understood yet, and this task is particularly difficult in domains where the shapes are not related by rigid transformations. In this paper we identify a number of possibly desirable properties of a shape similarity method, and determine the extent to which these properties can be captured by approaches that compare local properties of the contours of the shapes, through elastic matching. Special attention is devoted to objects that possess articulations, i.e. articulated parts. Elastic matching evaluates the similarity of two shapes as the sum of local deformations needed to change one shape into another. We show that similarities of part structure can be captured by such an approach, without the explicit computation of part structure. This may be of importance, since although parts appear to play a significant role in visual recognition, it is difficult to stably determine part structure. We also show novel results about how one can evaluate smooth and polyhedral shapes with the same method. Finally, we describe shape similarity effects that cannot be handled by current approaches. %B Vision Research %V 38 %P 2365 - 2385 %8 1998/08// %@ 0042-6989 %G eng %U http://www.sciencedirect.com/science/article/pii/S0042698998000431 %N 15–16 %R 10.1016/S0042-6989(98)00043-1 %0 Journal Article %J Pattern Recognition %D 1998 %T EFFICIENT DETERMINATION OF SHAPE FROM MULTIPLE IMAGES CONTAINING PARTIAL INFORMATION %A Basri,Ronen %A GROVE,ADAM J. %A Jacobs, David W. %K 2-D shape recovery from multiple images %K NP-complete %K Shape recovery with occlusion %X We consider the problem of reconstructing the shape of a 2-D object from multiple partial images related by scaled translations, in the presence of occlusion. Lindenbaum and Bruckstein have considered this problem in the specific case of a translating object seen by small sensors, for application to the understanding of insect vision. 
Their solution is limited by the fact that its run time is exponential in the number of images and sensors. We generalize the problem to allow for arbitrary types of occlusion of objects that translate and change scale. We show that this more general version of the problem can be solved in time that is polynomial in the number of sensors, but that even the original problem posed by Lindenbaum and Bruckstein is, in fact, NP-hard when the number of images is unbounded. Finally, we consider the case where the object is known to be convex. We show that Lindenbaum and Bruckstein’s version of the problem is then efficiently solvable even when many images are used, as is the general problem in certain more restricted cases. %B Pattern Recognition %V 31 %P 1691 - 1703 %8 1998/11// %@ 0031-3203 %G eng %U http://www.sciencedirect.com/science/article/pii/S0031320398000508 %N 11 %R 10.1016/S0031-3203(98)00050-8 %0 Journal Article %J Technical Reports of the Computer Science Department %D 1998 %T Interactive Smooth Zooming in a Starfield Information Visualization %A Jog,Ninad %A Shneiderman, Ben %K Technical Report %X This paper discusses the design and implementation of interactive smooth zooming of a starfield display. A starfield display is a two dimensional scatterplot of a multidimensional database where every item from the database is represented as a small colored glyph whose position is determined by its ranking along ordinal attributes of the items laid out on the axes. One way of navigating this visual information is by using a zooming tool to incrementally zoom in on the items by varying the attribute range on either axis independently - such zooming causes the glyphs to move continuously and to grow or shrink. To get a feeling of flying through the data, users should be able to track the motion of each glyph without getting distracted by flicker or large jumps - conditions that necessitate high display refresh rates and closely spaced glyphs on successive frames. 
Although the use of high-speed hardware can achieve the required visual effect for small databases, the twin software bottlenecks of rapidly accessing display items and constructing a new display image fundamentally retard the refresh rate. Our work explores several methods to overcome these bottlenecks, presents a taxonomy of various zooming methods and introduces a new widget, the zoom bar, that facilitates zooming. (Also cross-referenced as CAR-TR-714) (Also cross-referenced as ISR-TR-94-46) %B Technical Reports of the Computer Science Department %8 1998/10/15/ %G eng %U http://drum.lib.umd.edu/handle/1903/411 %0 Report %D 1998 %T Looking to Parallel Algorithms for ILP and Decentralization %A Berkovich,Efraim %A Jacob,Bruce %A Nuzman,Joseph %A Vishkin, Uzi %K Technical Report %X We introduce explicit multi-threading (XMT), a decentralized architecture that exploits fine-grained SPMD-style programming; a SPMD program can translate directly to MIPS assembly language using three additional instruction primitives. The motivation for XMT is: (i) to define an inherently decentralizable architecture, taking into account that the performance of future integrated circuits will be dominated by wire costs, (ii) to increase available instruction-level parallelism (ILP) by leveraging expertise in the world of parallel algorithms, and (iii) to reduce hardware complexity by alleviating the need to detect ILP at run-time: if parallel algorithms can give us an overabundance of work to do in the form of thread-level parallelism, one can extract instruction-level parallelism with greatly simplified dependence-checking. We show that implementations of such an architecture tend towards decentralization and that, when global communication is necessary, overall performance is relatively insensitive to large on-chip delays. 
We compare the performance of the design to more traditional parallel architectures and to a high-performance superscalar implementation, but the intent is merely to illustrate the performance behavior of the organization and to stimulate debate on the viability of introducing SPMD to the single-chip processor domain. At this stage, we cannot offer hard comparisons with well-researched models of execution. When programming for the SPMD model, the total number of operations that the processor has to perform is often slightly higher. To counter this, we have observed that the length of the critical path through the dynamic execution graph is smaller than in the serial domain, and the amount of ILP is correspondingly larger. Fine-grained SPMD programming connects with a broad knowledge base in parallel algorithms and scales down to provide good performance relative to high-performance superscalar designs even with small input sizes and small numbers of functional units. Keywords: Fine-grained SPMD, parallel algorithms, spawn-join, prefix-sum, instruction-level parallelism, decentralized architecture. (Also cross-referenced as UMIACS-TR-98-40) %I Department of Computer Science, University of Maryland, College Park %V CS-TR-3921 %8 1998/10/15/ %G eng %U http://drum.lib.umd.edu//handle/1903/496 %0 Journal Article %J Computational Science Engineering, IEEE %D 1998 %T Models and high-performance algorithms for global BRDF retrieval %A Zengyan Zhang %A Kalluri, SNV %A JaJa, Joseph F. %A Liang,Shunlin %A Townshend,J.R.G. %K algorithms; %K BRDF %K Earth %K geomorphology; %K geophysical %K global %K high-performance %K IBM %K information %K light %K machines; %K models; %K Parallel %K processing; %K reflectivity; %K retrieval %K retrieval; %K scattering; %K signal %K SP2; %K surface; %X The authors describe three models for retrieving information related to the scattering of light on the Earth's surface. 
Using these models, they've developed algorithms for the IBM SP2 that efficiently retrieve this information %B Computational Science Engineering, IEEE %V 5 %P 16 - 29 %8 1998/12//oct %@ 1070-9924 %G eng %N 4 %R 10.1109/99.735892 %0 Journal Article %J Journal of Experimental Algorithmics (JEA) %D 1998 %T A new deterministic parallel sorting algorithm with an experimental evaluation %A Helman,David R. %A JaJa, Joseph F. %A Bader,David A. %K generalized sorting %K integer sorting %K Parallel algorithms %K parallel performance %K sorting by regular sampling %X We introduce a new deterministic parallel sorting algorithm for distributed memory machines based on the regular sampling approach. The algorithm uses only two rounds of regular all-to-all personalized communication in a scheme that yields very good load balancing with virtually no overhead. Moreover, unlike previous variations, our algorithm efficiently handles the presence of duplicate values without the overhead of tagging each element with a unique identifier. This algorithm was implemented in SPLIT-C and run on a variety of platforms, including the Thinking Machines CM-5, the IBM SP-2-WN, and the Cray Research T3D. We ran our code using widely different benchmarks to examine the dependence of our algorithm on the input distribution. Our experimental results illustrate the efficiency and scalability of our algorithm across different platforms. In fact, the performance compares closely to that of our random sample sort algorithm, which seems to outperform all similar algorithms known to the authors on these platforms. Together, their performance is nearly invariant over the set of input distributions, unlike previous efficient algorithms. However, unlike our randomized sorting algorithm, the performance and memory requirements of our regular sorting algorithm can be deterministically guaranteed. 
%B Journal of Experimental Algorithmics (JEA) %V 3 %8 1998/09// %@ 1084-6654 %G eng %U http://doi.acm.org/10.1145/297096.297128 %R 10.1145/297096.297128 %0 Report %D 1998 %T An On-line Variable Length Binary Encoding %A Acharya,Tinku %A JaJa, Joseph F. %K Technical Report %X We present a methodology for an on-line variable-length binary encoding of a set of integers. The basic principle of this methodology is to maintain the prefix property amongst the codes assigned on-line to a set of integers growing dynamically. The prefix property enables unique decoding of a string of elements from this set. To show the utility of this on-line variable-length binary encoding, we apply this methodology to encode the LZW codes. Application of this encoding scheme significantly improves the compression achieved by the standard LZW scheme. This encoding can be applied in other compression schemes to encode the pointers using variable-length binary codes. (Also cross-referenced as UMIACS-TR-95-39) %I Institute for Advanced Computer Studies, Univ of Maryland, College Park %V UMIACS-TR-95-39 %8 1998/10/15/ %G eng %U http://drum.lib.umd.edu/handle/1903/714 %0 Report %D 1998 %T Parallel Algorithms for Image Histogramming and Connected Components with an Experimental Study %A Bader,David A. %A JaJa, Joseph F. %K Technical Report %X This paper presents efficient and portable implementations of two useful primitives in image processing algorithms, histogramming and connected components. Our general framework is a single-address space, distributed memory programming model. We use efficient techniques for distributing and coalescing data as well as efficient combinations of task and data parallelism. Our connected components algorithm uses a novel approach for parallel merging which performs drastically limited updating during iterative steps, and concludes with a total consistency update at the final step. The algorithms have been coded in Split-C and run on a variety of platforms. 
Our experimental results are consistent with the theoretical analysis and provide the best known execution times for these two primitives, even when compared with machine-specific implementations. More efficient implementations of Split-C will likely result in even faster execution times. (Also cross-referenced as UMIACS-TR-94-133.) %I Institute for Advanced Computer Studies, Univ of Maryland, College Park %V UMIACS-TR-94-133 %8 1998/10/15/ %G eng %U http://drum.lib.umd.edu/handle/1903/681 %0 Report %D 1998 %T A Parallel Sorting Algorithm With an Experimental Study %A Helman,David R. %A Bader,David A. %A JaJa, Joseph F. %K Technical Report %X Previous schemes for sorting on general-purpose parallel machines have had to choose between poor load balancing and irregular communication or multiple rounds of all-to-all personalized communication. In this paper, we introduce a novel variation on sample sort which uses only two rounds of regular all-to-all personalized communication in a scheme that yields very good load balancing with virtually no overhead. This algorithm was implemented in Split-C and run on a variety of platforms, including the Thinking Machines CM-5, the IBM SP-2, and the Cray Research T3D. We ran our code using widely different benchmarks to examine the dependence of our algorithm on the input distribution. Our experimental results are consistent with the theoretical analysis and illustrate the efficiency and scalability of our algorithm across different platforms. In fact, it seems to outperform all similar algorithms known to the authors on these platforms, and its performance is invariant over the set of input distributions unlike previous efficient algorithms. Our results also compare favorably with those reported for the simpler ranking problem posed by the NAS Integer Sorting (IS) Benchmark. 
(Also cross-referenced as UMIACS-TR-95-102) %I Institute for Advanced Computer Studies, Univ of Maryland, College Park %V UMIACS-TR-95-102 %8 1998/10/15/ %G eng %U http://drum.lib.umd.edu/handle/1903/768 %0 Journal Article %J Journal of Parallel and Distributed Computing %D 1998 %T A Randomized Parallel Sorting Algorithm with an Experimental Study %A Helman,David R. %A Bader,David A. %A JaJa, Joseph F. %K generalized sorting %K integer sorting, sample sort, parallel performance %K Parallel algorithms %X Previous schemes for sorting on general-purpose parallel machines have had to choose between poor load balancing and irregular communication or multiple rounds of all-to-all personalized communication. In this paper, we introduce a novel variation on sample sort which uses only two rounds of regular all-to-all personalized communication in a scheme that yields very good load balancing with virtually no overhead. Moreover, unlike previous variations, our algorithm efficiently handles the presence of duplicate values without the overhead of tagging each element with a unique identifier. This algorithm was implemented in Split-C and run on a variety of platforms, including the Thinking Machines CM-5, the IBM SP-2, and the Cray Research T3D. We ran our code using widely different benchmarks to examine the dependence of our algorithm on the input distribution. Our experimental results illustrate the efficiency and scalability of our algorithm across different platforms. In fact, it seems to outperform all similar algorithms known to the authors on these platforms, and its performance is invariant over the set of input distributions unlike previous efficient algorithms. Our results also compare favorably with those reported for the simpler ranking problem posed by the NAS Integer Sorting (IS) Benchmark. 
%B Journal of Parallel and Distributed Computing %V 52 %P 1 - 23 %8 1998/07/10/ %@ 0743-7315 %G eng %U http://www.sciencedirect.com/science/article/pii/S0743731598914629 %N 1 %R 10.1006/jpdc.1998.1462 %0 Report %D 1998 %T Scalable Data Parallel Algorithms for Texture Synthesis and Compression using Gibbs Random Fields %A Bader,David A. %A JaJa, Joseph F. %A Chellapa, Rama %K Technical Report %X This paper introduces scalable data parallel algorithms for image processing. Focusing on Gibbs and Markov Random Field model representation for textures, we present parallel algorithms for texture synthesis, compression, and maximum likelihood parameter estimation, currently implemented on Thinking Machines CM-2 and CM-5. Use of fine-grained, data parallel processing techniques yields real-time algorithms for texture synthesis and compression that are substantially faster than the previously known sequential implementations. Although current implementations are on Connection Machines, the methodology presented here enables machine independent scalable algorithms for a number of problems in image processing and analysis. (Also cross-referenced as UMIACS-TR-93-80.) %I Institute for Advanced Computer Studies, Univ of Maryland, College Park %V UMIACS-TR-93-80 %8 1998/10/15/ %G eng %U http://drum.lib.umd.edu/handle/1903/596 %0 Journal Article %J Journal of Systems and Software %D 1998 %T Specification-based Testing of Reactive Software: A Case Study in Technology Transfer %A Jategaonkar Jagadeesan,L. %A Porter, Adam %A Puchol,C. %A Ramming,J. C %A Votta,L. G. %B Journal of Systems and Software %V 40 %P 249 - 262 %8 1998/// %G eng %N 3 %0 Report %D 1998 %T A Survey of Current Paradigms in Machine Translation %A Dorr, Bonnie J %A Jordan,P.W. %A Benoit,J.W. %X This paper is a survey of the current machine translation research in the US, Europe, and Japan. A short history of machine translation is presented first, followed by an overview of the current research work. 
Representative examples of a wide range of different approaches adopted by machine translation researchers are presented. These are described in detail along with a discussion of the practicalities of scaling up these approaches for operational environments. In support of this discussion, issues in, and techniques for, evaluating machine translation systems are discussed. %I Institute for Advanced Computer Studies, Univ of Maryland, College Park %8 1998/// %G eng %0 Journal Article %J The ML Workshop, International Conference on Functional Programming (ICFP) %D 1998 %T The SwitchWare active network implementation %A Alexander,D. S %A Hicks, Michael W. %A Kakkar,P. %A Keromytis,A. D %A Shaw,M. %A Moore,J. T %A Gunter,C. A %A Jim,T. %A Nettles,S. M %A Smith,J. M %B The ML Workshop, International Conference on Functional Programming (ICFP) %8 1998/// %G eng %0 Journal Article %J International Journal of Computer Vision %D 1998 %T Uncertainty propagation in model-based recognition %A Alter,T. D. %A Jacobs, David W. %X Robust recognition systems require a careful understanding of the effects of error in sensed features. In model-based recognition, matches between model features and sensed image features typically are used to compute a model pose and then project the unmatched model features into the image. The error in the image features results in uncertainty in the projected model features. We first show how error propagates when poses are based on three pairs of 3D model and 2D image points. In particular, we show how to simply and efficiently compute the distributed region in the image where an unmatched model point might appear, for both Gaussian and bounded error in the detection of image points, and for both scaled-orthographic and perspective projection models. Next, we provide geometric and experimental analyses to indicate when this linear approximation will succeed and when it will fail. 
Then, based on the linear approximation, we show how we can utilize Linear Programming to compute bounded propagated error regions for any number of initial matches. Finally, we use these results to extend, from two-dimensional to three-dimensional objects, robust implementations of alignment, interpretation-tree search, and transformation clustering. %B International Journal of Computer Vision %V 27 %P 127 - 159 %8 1998/// %G eng %N 2 %R 10.1023/A:1007989016491 %0 Conference Paper %B Computer Vision and Pattern Recognition, 1997. Proceedings., 1997 IEEE Computer Society Conference on %D 1997 %T 3-D to 2-D recognition with regions %A Jacobs, David W. %A Basri,R. %K 2-D %K algorithms;object %K complexity;image %K hard;efficient %K image;3-D %K object;computationally %K poses;computational %K recognition; %K recognition;occlusion;occlusions;performance;regions;valid %K segmentation;object %X This paper presents a novel approach to parts-based object recognition in the presence of occlusion. We focus on the problem of determining the pose of a 3-D object from a single 2-D image when convex parts of the object have been matched to corresponding regions in the image. We consider three types of occlusions: self-occlusion, occlusions whose locus is identified in the image, and completely arbitrary occlusions. We derive efficient algorithms for the first two cases, and characterize their performance. For the last case, we prove that the problem of finding valid poses is computationally hard, but provide an efficient, approximate algorithm. This work generalizes our previous work on region-based object recognition, which focused on the case of planar models %B Computer Vision and Pattern Recognition, 1997. 
Proceedings., 1997 IEEE Computer Society Conference on %P 547 - 553 %8 1997/06// %G eng %R 10.1109/CVPR.1997.609379 %0 Journal Article %J IEEE Transactions on Software Engineering %D 1997 %T Assessing software review meetings: results of a comparative analysis of two experimental studies %A Porter, Adam %A Johnson,P. M %K Aggregates %K Collaborative work %K Computer Society %K Costs %K Helium %K Inspection %K inspection meeting %K Job shop scheduling %K Programming %K reconciliation %K Software development management %K Software quality %K software quality assurance %K software review meeting assessment %K Software reviews %X Software review is a fundamental tool for software quality assurance. Nevertheless, there are significant controversies as to the most efficient and effective review method. One of the most important questions currently being debated is the utility of meetings. Although almost all industrial review methods are centered around the inspection meeting, recent findings call their value into question. In prior research the authors separately and independently conducted controlled experimental studies to explore this issue. The paper presents new research to understand the broader implications of these two studies. To do this, they designed and carried out a process of “reconciliation” in which they established a common framework for the comparison of the two experimental studies, reanalyzed the experimental data with respect to this common framework, and compared the results. Through this process they found many striking similarities between the results of the two studies, strengthening their individual conclusions. 
It also revealed interesting differences between the two experiments, suggesting important avenues for future research %B IEEE Transactions on Software Engineering %V 23 %P 129 - 145 %8 1997/03// %@ 0098-5589 %G eng %N 3 %R 10.1109/32.585501 %0 Journal Article %J Computer Vision and Image Understanding %D 1997 %T Constancy and similarity %A Basri,R. %A Jacobs, David W. %B Computer Vision and Image Understanding %V 65 %P 447 - 449 %8 1997/// %G eng %N 3 %0 Journal Article %J The Journal of Supercomputing %D 1997 %T Fast algorithms for estimating aerosol optical depth and correcting thematic mapper imagery %A Fallah-Adl,H. %A JaJa, Joseph F. %A Liang, S. %X Remotely sensed images collected by satellites are usually contaminated by the effects of atmospheric particles through the absorption and scattering of radiation from the earth's surface. The objective of atmospheric correction is to retrieve the surface reflectance from remotely sensed imagery by removing the atmospheric effects, which is usually performed in two steps. First, the optical characteristics of the atmosphere are estimated and then the remotely sensed imagery is corrected by inversion procedures that derive the surface reflectance. In this paper we introduce an efficient algorithm to estimate the optical characteristics of the Thematic Mapper imagery and to remove the atmospheric effects from it. Our algorithm introduces a set of techniques to significantly improve the quality of the retrieved images. We pay particular attention to the computational efficiency of the algorithm, thereby allowing us to correct large TM images quickly. We also provide a parallel implementation of our algorithm and show its portability and scalability on three parallel machines. %B The Journal of Supercomputing %V 10 %P 315 - 329 %8 1997/// %G eng %N 4 %R 10.1007/BF00227861 %0 Conference Paper %B Computer Vision and Pattern Recognition, 1997. 
Proceedings., 1997 IEEE Computer Society Conference on %D 1997 %T Learning parameterized models of image motion %A Black,M. J %A Yacoob,Yaser %A Jepson,A. D %A Fleet,D. J %K image motion %K Image sequences %K learning %K learning (artificial intelligence) %K model-based recognition %K Motion estimation %K multi-resolution scheme %K non-rigid motion %K optical flow %K optical flow estimation %K parameterized models %K Principal component analysis %K training set %X A framework for learning parameterized models of optical flow from image sequences is presented. A class of motions is represented by a set of orthogonal basis flow fields that are computed from a training set using principal component analysis. Many complex image motions can be represented by a linear combination of a small number of these basis flows. The learned motion models may be used for optical flow estimation and for model-based recognition. For optical flow estimation we describe a robust, multi-resolution scheme for directly computing the parameters of the learned flow models from image derivatives. As examples we consider learning motion discontinuities, non-rigid motion of human mouths, and articulated human motion %B Computer Vision and Pattern Recognition, 1997. Proceedings., 1997 IEEE Computer Society Conference on %P 561 - 567 %8 1997/06// %G eng %R 10.1109/CVPR.1997.609381 %0 Conference Paper %B Computer Vision and Pattern Recognition, 1997. Proceedings., 1997 IEEE Computer Society Conference on %D 1997 %T Linear fitting with missing data: applications to structure-from-motion and to characterizing intensity images %A Jacobs, David W. 
%K data;structure-from-affine-motion;structure-from-motion;vision %K estimation; %K fitting;linear %K images;iterative %K Lambertian %K matrix;intensity %K matrix;missing %K method;linear %K methods;motion %K problems;computer %K rank %K scene;data %K surface;low %K vision;iterative %X Several vision problems can be reduced to the problem of fitting a linear surface of low dimension to data, including the problems of structure-from-affine-motion, and of characterizing the intensity images of a Lambertian scene by constructing the intensity manifold. For these problems, one must deal with a data matrix with some missing elements. In structure-from-motion, missing elements will occur if some point features are not visible in some frames. To construct the intensity manifold, missing matrix elements will arise when the surface normals of some scene points do not face the light source in some images. We propose a novel method for fitting a low rank matrix to a matrix with missing elements. We show experimentally that our method produces good results in the presence of noise. These results can be either used directly, or can serve as an excellent starting point for an iterative method. %B Computer Vision and Pattern Recognition, 1997. Proceedings., 1997 IEEE Computer Society Conference on %P 206 - 212 %8 1997/06// %G eng %R 10.1109/CVPR.1997.609321 %0 Journal Article %J Neural Computation %D 1997 %T Local Parallel Computation of Stochastic Completion Fields %A Williams,Lance R. %A Jacobs, David W. %X We describe a local parallel method for computing the stochastic completion field introduced in the previous article (Williams and Jacobs, 1997). The stochastic completion field represents the likelihood that a completion joining two contour fragments passes through any given position and orientation in the image plane. 
It is based on the assumption that the prior probability distribution of completion shape can be modeled as a random walk in a lattice of discrete positions and orientations. The local parallel method can be interpreted as a stable finite difference scheme for solving the underlying Fokker-Planck equation identified by Mumford (1994). The resulting algorithm is significantly faster than the previously employed method, which relied on convolution with large-kernel filters computed by Monte Carlo simulation. The complexity of the new method is O(n³m), while that of the previous algorithm was O(n⁴m²) (for an n × n image with m discrete orientations). Perhaps most significant, the use of a local method allows us to model the probability distribution of completion shape using stochastic processes that are neither homogeneous nor isotropic. For example, it is possible to modulate particle decay rate by a directional function of local image brightnesses (i.e., anisotropic decay). The effect is that illusory contours can be made to respect the local image brightness structure. Finally, we note that the new method is more plausible as a neural model since (1) unlike the previous method, it can be computed in a sparse, locally connected network, and (2) the network dynamics are consistent with psychophysical measurements of the time course of illusory contour formation. %B Neural Computation %V 9 %P 859 - 881 %8 1997/// %@ 0899-7667 %G eng %U http://dx.doi.org/10.1162/neco.1997.9.4.859 %N 4 %R 10.1162/neco.1997.9.4.859 %0 Journal Article %J International Journal of Computer Vision %D 1997 %T Matching 3-D models to 2-D images %A Jacobs, David W. %X We consider the problem of analytically characterizing the set of all 2-D images that a group of 3-D features may produce, and demonstrate that this is a useful thing to do. 
Our results apply for simple point features and point features with associated orientation vectors when we model projection as a 3-D to 2-D affine transformation. We show how to represent the set of images that a group of 3-D points produces with two lines (1-D subspaces), one in each of two orthogonal, high-dimensional spaces, where a single image group corresponds to one point in each space. The images of groups of oriented point features can be represented by a 2-D hyperbolic surface in a single high-dimensional space. The problem of matching an image to models is essentially reduced to the problem of matching a point to simple geometric structures. Moreover, we show that these are the simplest and lowest dimensional representations possible for these cases. We demonstrate the value of this way of approaching matching by applying our results to a variety of vision problems. In particular, we use this result to build a space-efficient indexing system that performs 3-D to 2-D matching by table lookup. This system is analytically built and accessed, accounts for the effects of sensing error, and is tested on real images. We also derive new results concerning the existence of invariants and non-accidental properties in this domain. Finally, we show that oriented points present unexpected difficulties: indexing requires fundamentally more space with oriented than with simple points, we must use more images in a motion sequence to determine the affine structure of oriented points, and the linear combinations result does not hold for oriented points. %B International Journal of Computer Vision %V 21 %P 123 - 153 %8 1997/// %G eng %N 1 %R 10.1023/A:1007927623619 %0 Journal Article %J International Journal of Computer Vision %D 1997 %T Recognition using region correspondences %A Basri,R. %A Jacobs, David W. %X Recognition systems attempt to recover information about the identity of observed objects and their location in the environment. 
A fundamental problem in recognition is pose estimation. This is the problem of using a correspondence between some portions of an object model and some portions of an image to determine whether the image contains an instance of the object, and, in case it does, to determine the transformation that relates the model to the image. The current approaches to this problem are divided into methods that use “global” properties of the object (e.g., centroid and moments of inertia) and methods that use “local” properties of the object (e.g., corners and line segments). Global properties are sensitive to occlusion and, specifically, to self occlusion. Local properties are difficult to locate reliably, and their matching involves intensive computation. We present a novel method for recognition that uses region information. In our approach the model and the image are divided into regions. Given a match between subsets of regions (without any explicit correspondence between different pieces of the regions) the alignment transformation is computed. The method applies to planar objects under similarity, affine, and projective transformations and to projections of 3-D objects undergoing affine and projective transformations. The new approach combines many of the advantages of the previous two approaches, while avoiding some of their pitfalls. Like the global methods, our approach makes use of region information that reflects the true shape of the object. But like local methods, our approach can handle occlusion. %B International Journal of Computer Vision %V 25 %P 145 - 166 %8 1997/// %G eng %N 2 %R 10.1023/A:1007919917506 %0 Conference Paper %B Software Engineering, International Conference on %D 1997 %T Specification-based Testing of Reactive Software: Tools and Experiments %A Jagadeesan,Lalita Jategaonkar %A Porter, Adam %A Puchol,Carlos %A Ramming,J. Christopher %A Votta,Lawrence G.
%K empirical studies %K reactive systems %K specification-based testing %K temporal logic %X Testing commercial software is expensive and time consuming. Automated testing methods promise to save a great deal of time and money throughout the software industry. One approach that is well-suited for the reactive systems found in telephone switching systems is specification-based testing. We have built a set of tools to automatically test software applications for violations of safety properties expressed in temporal logic. Our testing system automatically constructs finite state machine oracles corresponding to safety properties, builds test harnesses, and integrates them with the application. The test harness then generates inputs automatically to test the application. We describe a study examining the feasibility of this approach for testing industrial applications. To conduct this study we formally modeled an Automatic Protection Switching system (APS), which is an application common to many telephony systems. We then asked a number of computer science graduate students to develop several versions of the APS and use our tools to test them. We found that the tools are very effective, save significant amounts of human effort (at the expense of machine resources), and are easy to use. We also discuss improvements that are needed before we can use the tools with professional developers building commercial products. %B Software Engineering, International Conference on %I IEEE Computer Society %C Los Alamitos, CA, USA %P 525 - 525 %8 1997/// %G eng %R http://doi.ieeecomputersociety.org/10.1109/ICSE.1997.610373 %0 Journal Article %J Neural Computation %D 1997 %T Stochastic completion fields: A neural model of illusory contour shape and salience %A Williams,L. R %A Jacobs, David W. %X Describes an algorithm- and representation-level theory of illusory contour shape and salience.
Unlike previous theories, the model is derived from a single assumption: that the prior probability distribution of boundary completion shape can be modeled by a random walk. The model does not use numerical relaxation or other explicit minimization, but instead relies on the fact that the probability that a particle following a random walk will pass through a given position and orientation on a path joining 2 boundary fragments can be computed directly as the product of 2 vector-field convolutions. It is shown that for the random walk defined, the maximum likelihood paths are curves of least energy, that is, on average, random walks follow paths commonly assumed to model the shape of illusory contours. A computer model is demonstrated on numerous illusory contour stimuli from the literature. (PsycINFO Database Record (c) 2012 APA, all rights reserved) %B Neural Computation %V 9 %P 837 - 858 %8 1997/// %G eng %N 4 %0 Conference Paper %B Proceedings of Land Satellite Information in the Next Decade, II: Sources and Applications. Bethesda (MD): American Society of Photogrammetry and Remote Sensing %D 1997 %T The vegetation canopy lidar mission %A Dubayah, R. %A Blair,J. B. %A Bufton,J. L. %A Clark,D. B. %A JaJa, Joseph F. %A Knox,R. %A Luthcke,S. B. %A Prince,S. %A Weishampel,J. %X The Vegetation Canopy Lidar (VCL) is the first selected mission of NASA’s new Earth System Science Pathfinder program. The principal goal of VCL is the characterization of the three-dimensional structure of the earth; in particular, canopy vertical and horizontal structure and land surface topography. Its primary science objectives are: landcover characterization for terrestrial ecosystem modeling, monitoring and prediction; landcover characterization for climate modeling and prediction; and, production of a global reference data set of topographic spot heights and transects.
VCL will provide unique data sets for understanding important environmental issues including climatic change and variability, biotic erosion and sustainable land use, and will dramatically improve our estimation of global biomass and carbon stocks, fractional forest cover, forest extent and condition. It will also provide canopy data critical for biodiversity, natural hazard, and climate studies. Scheduled for launch in early 2000, VCL is an active lidar remote sensing system consisting of a five-beam instrument with 25 m contiguous along track resolution. The five beams are in a circular configuration 8 km across and each beam traces a separate ground track spaced 2 km apart, eventually producing 2 km coverage between 65° N and S. VCL’s core measurement objectives are: (1) canopy top heights; (2) vertical distribution of intercepted surfaces (e.g. leaves and branches); and, (3) ground surface topographic elevations. These measurements are used to derive a variety of science data products including canopy heights, canopy vertical distribution, and ground elevations gridded monthly at 1° resolution and every 6 months at 2 km resolution, as well as a 2 km fractional forest cover product. %B Proceedings of Land Satellite Information in the Next Decade, II: Sources and Applications. Bethesda (MD): American Society of Photogrammetry and Remote Sensing %P 100 - 112 %8 1997/// %G eng %0 Journal Article %J Parallel and Distributed Systems, IEEE Transactions on %D 1996 %T The block distributed memory model %A JaJa, Joseph F.
%A Ryu,Kwan Woo %K block distributed memory model %K communication bandwidth %K communication complexity %K computational complexity %K cost measure %K data rearrangement problems %K distributed memory machines %K distributed memory systems %K fast Fourier transforms %K FFT %K interconnection topology %K interprocessor communication %K load balancing %K matrix multiplication %K memory latency %K parallel algorithms %K performance evaluation %K remote memory accesses %K resource allocation %K single address space %K sorting %K spatial locality %X We introduce a computation model for developing and analyzing parallel algorithms on distributed memory machines. The model allows the design of algorithms using a single address space and does not assume any particular interconnection topology. We capture performance by incorporating a cost measure for interprocessor communication induced by remote memory accesses. The cost measure includes parameters reflecting memory latency, communication bandwidth, and spatial locality. Our model allows the initial placement of the input data and pipelined prefetching. We use our model to develop parallel algorithms for various data rearrangement problems, load balancing, sorting, FFT, and matrix multiplication. We show that most of these algorithms achieve optimal or near optimal communication complexity while simultaneously guaranteeing an optimal speed-up in computational complexity.
Ongoing experimental work in testing and evaluating these algorithms has thus far shown very promising results. %B Parallel and Distributed Systems, IEEE Transactions on %V 7 %P 830 - 840 %8 1996/08// %@ 1045-9219 %G eng %N 8 %R 10.1109/71.532114 %0 Conference Paper %B Proceedings of the Workshop on Set Constraints, held in Conjunction with CP’96, Boston, Massachusetts %D 1996 %T CLP (SC): Implementation and efficiency considerations %A Jeffrey,S.F. %B Proceedings of the Workshop on Set Constraints, held in Conjunction with CP’96, Boston, Massachusetts %8 1996/// %G eng %0 Conference Paper %B Parallel Processing, 1996. Proceedings of the 1996 ICPP Workshop on Challenges for %D 1996 %T On combining technology and theory in search of a parallel computation model %A JaJa, Joseph F. %K general purpose parallel machines %K high performance %K MIMD platforms %K parallel algorithms %K parallel architectures %K parallel computation model %K parallel computing %K parallel machines %K portability %K Symmetric Multiprocessors %X A fundamental problem in parallel computing is to design high-level, architecture independent, algorithms that execute efficiently on general purpose parallel machines. The aim is to be able to achieve portability and high performance simultaneously. A key to accomplishing this is the existence of a computation model that can bridge the gap between the high level programming models and the underlying hardware models. There are currently two factors that make this fundamental problem more tractable. The first is the emergence of a dominant parallel architecture consisting of a number of powerful microprocessors interconnected by either a proprietary interconnect, or a standard off-the-shelf interconnect (such as an ATM switch). The second factor is the emergence of standards, such as the message passing standard MPI, for which efficient implementations are either available or about to appear on most machines.
Our recent work has exploited these two developments by developing a methodology based on (1) a simple computation model for the current MIMD platforms that incorporates communication cost into the complexity of the algorithms, and (2) a SPMD programming model that makes effective use of communication primitives. We describe our approach for validating the computation model based on extensive experimentation and the development of benchmarks, and discuss its extension to the emerging clusters of Symmetric Multiprocessors (SMPs) architecture. %B Parallel Processing, 1996. Proceedings of the 1996 ICPP Workshop on Challenges for %P 115 - 123 %8 1996/08// %G eng %R 10.1109/ICPPW.1996.538597 %0 Journal Article %J Neural Processing Letters %D 1996 %T Effects of varying parameters on properties of self-organizing feature maps %A Cho,S. %A Jang,M. %A Reggia, James A. %B Neural Processing Letters %V 4 %P 53 - 59 %8 1996/// %G eng %N 1 %0 Conference Paper %B Parallel Processing, 1996., Proceedings of the 1996 International Conference on %D 1996 %T Efficient algorithms for estimating atmospheric parameters for surface reflectance retrieval %A Fallah-Adl,H. %A JaJa, Joseph F. %A Liang,Shunlin %K atmospheric correction %K atmospheric effects %K atmospheric parameters %K image reconstruction %K photoreflectance %K portability %K remote sensing %K remotely sensed imagery %K scalability %K surface reflectance %K surface reflectance retrieval %X The objective of atmospheric correction is to retrieve the surface reflectance from remotely sensed imagery by removing the atmospheric effects. We introduce an efficient algorithm to estimate the optical characteristics of the TM imagery and to remove the atmospheric effects from it. Our algorithm introduces a set of techniques to significantly improve the quality of the retrieved images. We pay particular attention to the computational efficiency of the algorithm, thereby allowing us to correct large TM images quite fast.
We also provide a parallel implementation of our algorithm and show its portability and scalability on several parallel machines. %B Parallel Processing, 1996., Proceedings of the 1996 International Conference on %V 2 %P 132 - 141 %8 1996/08// %G eng %R 10.1109/ICPP.1996.537392 %0 Journal Article %J Computational Science Engineering, IEEE %D 1996 %T Fast algorithms for removing atmospheric effects from satellite images %A Fallah-Adl,H. %A JaJa, Joseph F. %A Liang, S. %A Townshend,J. %A Kaufman,Y.J. %K atmospheric correction %K atmospheric effects %K atmospheric particles %K atmospheric radiation %K computational efficiency %K geophysics computing %K image enhancement %K inversion procedures %K parallel algorithms %K parallel implementation %K reflected radiation %K reflectivity %K remote sensing %K remotely sensed images %K satellite based remote sensing %K satellite images %K solar photons %K solar radiation %K surface reflectance %X The varied features of the earth's surface each reflect sunlight and other wavelengths of solar radiation in a highly specific way. This principle provides the foundation for the science of satellite based remote sensing. A vexing problem confronting remote sensing researchers, however, is that the reflected radiation observed from remote locations is significantly contaminated by atmospheric particles. These aerosols and molecules scatter and absorb the solar photons reflected by the surface in such a way that only part of the surface radiation can be detected by a sensor. The article discusses the removal of atmospheric effects due to scattering and absorption, i.e., atmospheric correction. Atmospheric correction algorithms basically consist of two major steps. First, the optical characteristics of the atmosphere are estimated.
Various quantities related to the atmospheric correction can then be computed by radiative transfer algorithms, given the atmospheric optical properties. Second, the remotely sensed imagery is corrected by inversion procedures that derive the surface reflectance. We focus on the second step, describing our work on improving the computational efficiency of the existing atmospheric correction algorithms. We discuss a known atmospheric correction algorithm and then introduce a substantially more efficient version which we have devised. We have also developed a parallel implementation of our algorithm %B Computational Science Engineering, IEEE %V 3 %P 66 - 77 %8 1996///summer %@ 1070-9924 %G eng %N 2 %R 10.1109/99.503316 %0 Journal Article %J 16th AIAA International Communications Satellite Systems Conference %D 1996 %T Hybrid network management (communication systems) %A Baras,J. S %A Ball,M. %A Karne,R. K %A Kelley,S. %A Jang,K.D. %A Plaisant, Catherine %A Roussopoulos, Nick %A Stathatos,K. %A Vakhutinsky,A. %A Valluri,J. %X We describe our collaborative efforts towards the design and implementation of a next-generation integrated network management system for hybrid networks (INMS/HN). We describe the overall software architecture of the system at its current stage of development. This NMS is specifically designed to address issues relevant to complex heterogeneous networks consisting of seamlessly interoperable terrestrial and satellite networks. NMSs are a key element for interoperability in such networks. We describe the integration of configuration management and performance management. The next step in this integration is fault management. In particular, we describe the object model, issues concerning the graphical user interface, browsing tools, performance data graphical widget displays, and management information database organization issues. 
%B 16th AIAA International Communications Satellite Systems Conference %8 1996/// %G eng %0 Journal Article %J AIP Conference Proceedings %D 1996 %T Integrated network management of hybrid networks %A Baras,John S %A Ball,Mike %A Karne,Ramesh K %A Kelley,Steve %A Jang,Kap D %A Plaisant, Catherine %A Roussopoulos, Nick %A Stathatos,Kostas %A Vakhutinsky,Andrew %A Jaibharat,Valluri %A Whitefield,David %X We describe our collaborative efforts towards the design and implementation of a next generation integrated network management system for hybrid networks (INMS/HN). We describe the overall software architecture of the system at its current stage of development. This network management system is specifically designed to address issues relevant for complex heterogeneous networks consisting of seamlessly interoperable terrestrial and satellite networks. Network management systems are a key element for interoperability in such networks. We describe the integration of configuration management and performance management. The next step in this integration is fault management. In particular we describe the object model, issues of the Graphical User Interface (GUI), browsing tools and performance data graphical widget displays, management information database (MIB) organization issues. Several components of the system are being commercialized by Hughes Network Systems. © 1996 American Institute of Physics. %B AIP Conference Proceedings %V 361 %P 345 - 350 %8 1996/03/01/ %@ 0094243X %G eng %U http://proceedings.aip.org/resource/2/apcpcs/361/1/345_1?isAuthorized=no %N 1 %R doi:10.1063/1.50028 %0 Journal Article %J Applied Computational Geometry Towards Geometric Engineering %D 1996 %T Matching convex polygons and polyhedra, allowing for occlusion %A Basri,R. %A Jacobs, David W. %X We review our recent results on visual object recognition and reconstruction allowing for occlusion. 
Our scheme uses matches between convex parts of objects in the model and image to determine structure and pose, without relying on specific correspondences between local or global geometric features of the objects. We provide results determining the minimal number of regions required to uniquely determine the pose under a variety of situations, and also showing that, depending on the situation, the problem of determining pose may be a convex optimization problem that is efficiently solved, or it may be a non-convex optimization problem which has no known, efficient solution. We also relate the problem of determining pose using region matching to the problem of finding the transformation that places one polygon inside another and the problem of finding a line that intersects each of a set of 3-D volumes. %B Applied Computational Geometry Towards Geometric Engineering %P 133 - 147 %8 1996/// %G eng %R 10.1007/BFb0014491 %0 Journal Article %J Information Sciences %D 1996 %T An on-line variable-length binary encoding of text %A Acharya,Tinku %A JaJa, Joseph F. %X We present a methodology for on-line variable-length binary encoding of a dynamically growing set of integers. Our encoding maintains the prefix property that enables unique decoding of a string of integers from the set. In order to develop the formalism of this on-line binary encoding, we define a unique binary tree data structure called the “phase in binary tree.” To show the utility of this on-line variable-length binary encoding, we apply this methodology to encode the pointers generated by the LZW algorithm. The experimental results obtained illustrate the superior performance of our algorithm compared to the most widely used algorithms. This on-line variable-length binary encoding can be applied in other dictionary-based text compression schemes as well to effectively encode the output pointers to enhance the compression ratio. 
%B Information Sciences %V 94 %P 1 - 22 %8 1996/10// %@ 0020-0255 %G eng %U http://www.sciencedirect.com/science/article/pii/0020025596000898 %N 1–4 %R 10.1016/0020-0255(96)00089-8 %0 Journal Article %J PPL-Parallel Processing Letters %D 1996 %T An optimal randomized parallel algorithm for the single function coarsest partition problem %A JaJa, Joseph F. %A Ryu,K. W. %X We describe a randomized parallel algorithm to solve the single function coarsest partition problem. The algorithm runs in O(log n) time using O(n) operations with high probability on the Priority CRCW PRAM. The previous best known algorithms run in O(log^2 n) time using O(n log^2 n) operations on the CREW PRAM and O(log n) time using O(n log log n) operations on the Arbitrary CRCW PRAM. The technique presented can be used to generate the Euler tour of a rooted tree optimally from the parent representation. %B PPL-Parallel Processing Letters %V 6 %P 187 - 194 %8 1996/// %G eng %N 2 %R 10.1142/S0129626496000182 %0 Journal Article %J The Journal of Supercomputing %D 1996 %T Parallel algorithms for image enhancement and segmentation by region growing, with an experimental study %A Bader, D.A. %A JaJa, Joseph F. %A Harwood,D. %A Davis, Larry S. %X This paper presents efficient and portable implementations of a powerful image enhancement process, the Symmetric Neighborhood Filter (SNF), and an image segmentation technique that makes use of the SNF and a variant of the conventional connected components algorithm which we call delta-Connected Components. We use efficient techniques for distributing and coalescing data as well as efficient combinations of task and data parallelism. The image segmentation algorithm makes use of an efficient connected components algorithm based on a novel approach for parallel merging.
The algorithms have been coded in Split-C and run on a variety of platforms, including the Thinking Machines CM-5, IBM SP-1 and SP-2, Cray Research T3D, Meiko Scientific CS-2, Intel Paragon, and workstation clusters. Our experimental results are consistent with the theoretical analysis (and provide the best known execution times for segmentation, even when compared with machine-specific implementations). Our test data include difficult images from the Landsat Thematic Mapper (TM) satellite data. %B The Journal of Supercomputing %V 10 %P 141 - 168 %8 1996/// %G eng %N 2 %R 10.1007/BF00130707 %0 Conference Paper %B Proceedings of the eighth annual ACM symposium on Parallel algorithms and architectures %D 1996 %T Parallel algorithms for personalized communication and sorting with an experimental study (extended abstract) %A Helman,David R. %A Bader,David A. %A JaJa, Joseph F. %B Proceedings of the eighth annual ACM symposium on Parallel algorithms and architectures %S SPAA '96 %I ACM %C New York, NY, USA %P 211 - 222 %8 1996/// %@ 0-89791-809-6 %G eng %U http://doi.acm.org/10.1145/237502.237558 %R 10.1145/237502.237558 %0 Conference Paper %B Parallel Processing Symposium, 1996., Proceedings of IPPS '96, The 10th International %D 1996 %T Practical parallel algorithms for dynamic data redistribution, median finding, and selection %A Bader, D.A. %A JaJa, Joseph F. %K communication primitives %K Cray Research T3D %K distributed memory programming model %K distributed memory systems %K dynamic data redistribution %K IBM SP-1 %K Intel Paragon %K load balancing %K median finding %K Meiko Scientific CS-2 %K parallel algorithms %K performance evaluation %K resource allocation %K scalability %K SPLIT-C %K Thinking Machines CM-5 %K workstation clusters %X A common statistical problem is that of finding the median element in a set of data.
This paper presents a fast and portable parallel algorithm for finding the median given a set of elements distributed across a parallel machine. In fact, our algorithm solves the general selection problem that requires the determination of the element of rank i, for an arbitrarily given integer i. Practical algorithms needed by our selection algorithm for the dynamic redistribution of data are also discussed. Our general framework is a distributed memory programming model enhanced by a set of communication primitives. We use efficient techniques for distributing, coalescing, and load balancing data as well as efficient combinations of task and data parallelism. The algorithms have been coded in SPLIT-C and run on a variety of platforms, including the Thinking Machines CM-5, IBM SP-1 and SP-2, Cray Research T3D, Meiko Scientific CS-2, Intel Paragon, and workstation clusters. Our experimental results illustrate the scalability and efficiency of our algorithms across different platforms and improve upon all the related experimental results known to the authors %B Parallel Processing Symposium, 1996., Proceedings of IPPS '96, The 10th International %P 292 - 301 %8 1996/04// %G eng %R 10.1109/IPPS.1996.508072 %0 Journal Article %J Journal of Experimental Algorithmics (JEA) %D 1996 %T Practical parallel algorithms for personalized communication and integer sorting %A Bader,David A. %A Helman,David R. %A JaJa, Joseph F. %X A fundamental challenge for parallel computing is to obtain high-level, architecture independent, algorithms which efficiently execute on general-purpose parallel machines. With the emergence of message passing standards such as MPI, it has become easier to design efficient and portable parallel algorithms by making use of these communication primitives.
While existing primitives allow an assortment of collective communication routines, they do not handle an important communication event when most or all processors have non-uniformly sized personalized messages to exchange with each other. We focus in this paper on the h-relation personalized communication whose efficient implementation will allow high performance implementations of a large class of algorithms. While most previous h-relation algorithms use randomization, this paper presents a new deterministic approach for h-relation personalized communication with asymptotically optimal complexity for h > p^2. As an application, we present an efficient algorithm for stable integer sorting. The algorithms presented in this paper have been coded in Split-C and run on a variety of platforms, including the Thinking Machines CM-5, IBM SP-1 and SP-2, Cray Research T3D, Meiko Scientific CS-2, and the Intel Paragon. Our experimental results are consistent with the theoretical analysis and illustrate the scalability and efficiency of our algorithms across different platforms. In fact, they seem to outperform all similar algorithms known to the authors on these platforms. %B Journal of Experimental Algorithmics (JEA) %V 1 %8 1996/01// %@ 1084-6654 %G eng %U http://doi.acm.org/10.1145/235141.235148 %R 10.1145/235141.235148 %0 Journal Article %J Pattern Analysis and Machine Intelligence, IEEE Transactions on %D 1996 %T Robust and efficient detection of salient convex groups %A Jacobs, David W. %K computational complexity %K computer vision %K contours %K edge detection %K feature extraction %K image recognition %K line segment detection %K object recognition %K perceptual organisation %K proximity %K salient convex groups %X This paper describes an algorithm that robustly locates salient convex collections of line segments in an image.
The algorithm is guaranteed to find all convex sets of line segments in which the length of the gaps between segments is smaller than some fixed proportion of the total length of the lines. This enables the algorithm to find convex groups whose contours are partially occluded or missing due to noise. We give an expected case analysis of the algorithm performance. This demonstrates that salient convexity is unlikely to occur at random, and hence is a strong clue that grouped line segments reflect underlying structure in the scene. We also show that our algorithm run time is O(n^2 log(n) + nm), when we wish to find the m most salient groups in an image with n line segments. We support this analysis with experiments on real data, and demonstrate the grouping system as part of a complete recognition system %B Pattern Analysis and Machine Intelligence, IEEE Transactions on %V 18 %P 23 - 37 %8 1996/01// %@ 0162-8828 %G eng %N 1 %R 10.1109/34.476008 %0 Conference Paper %B Proceedings of the 16th conference on Computational linguistics - Volume 1 %D 1996 %T Role of word sense disambiguation in lexical acquisition: predicting semantics from syntactic cues %A Dorr, Bonnie J %A Jones,Doug %X This paper addresses the issue of word-sense ambiguity in extraction from machine-readable resources for the construction of large-scale knowledge sources. We describe two experiments: one which ignored word-sense distinctions, resulting in 6.3% accuracy for semantic classification of verbs based on (Levin, 1993); and one which exploited word-sense distinctions, resulting in 97.9% accuracy. These experiments were dual purpose: (1) to validate the central thesis of the work of (Levin, 1993), i.e., that verb semantics and syntactic behavior are predictably related; (2) to demonstrate that a 15-fold improvement can be achieved in deriving semantic information from syntactic cues if we first divide the syntactic cues into distinct groupings that correlate with different word senses.
Finally, we show that we can provide effective acquisition techniques for novel word senses using a combination of online sources. %B Proceedings of the 16th conference on Computational linguistics - Volume 1 %S COLING '96 %I Association for Computational Linguistics %C Stroudsburg, PA, USA %P 322 - 327 %8 1996/// %G eng %U http://dx.doi.org/10.3115/992628.992685 %R 10.3115/992628.992685 %0 Journal Article %J Theoretical Computer Science %D 1996 %T Sorting strings and constructing digital search trees in parallel %A JaJa, Joseph F. %A Ryu,Kwan Woo %A Vishkin, Uzi %X We describe two simple optimal-work parallel algorithms for sorting a list L = (X1, X2, …, Xm) of m strings over an arbitrary alphabet Σ, where ∑i=1..m |Xi| = n and two elements of Σ can be compared in unit time using a single processor. The first algorithm is a deterministic algorithm that runs in O(log^2 m / log log m) time and the second is a randomized algorithm that runs in O(log m) time. Both algorithms use O(m log m + n) operations. Compared to the best-known parallel algorithms for sorting strings, our algorithms offer the following improvements. 1. The total number of operations used by our algorithms is optimal while all previous parallel algorithms use a nonoptimal number of operations. 2. We make no assumption about the alphabet while the previous algorithms assume that the alphabet is restricted to {1, 2, …, n^o(1)}. 3. The computation model assumed by our algorithms is the Common CRCW PRAM unlike the known algorithms that assume the Arbitrary CRCW PRAM. 4. Our algorithms use O(m log m + n) space, while the previous parallel algorithms use O(n^(1+ε)) space, where ε is a positive constant. We also present optimal-work parallel algorithms to construct a digital search tree for a given set of strings and to search for a string in a sorted list of strings.
We use our parallel sorting algorithms to solve the problem of determining a minimal starting point of a circular string with respect to lexicographic ordering. Our solution improves upon the previous best-known result to solve this problem. %B Theoretical Computer Science %V 154 %P 225 - 245 %8 1996/02/05/ %@ 0304-3975 %G eng %U http://www.sciencedirect.com/science/article/pii/0304397594002630 %N 2 %R 10.1016/0304-3975(94)00263-0 %0 Journal Article %J Pattern Analysis and Machine Intelligence, IEEE Transactions on %D 1996 %T The space requirements of indexing under perspective projections %A Jacobs, David W. %K 2D images %K 3D model points %K computational complexity %K feature extraction %K feature matching %K geometric hashing %K image processing %K indexing process %K invariants %K object recognition %K perspective projections %K space complexity %K stereo image matching %K table lookup %X Object recognition systems can be made more efficient through the use of table lookup to match features. The cost of this indexing process depends on the space required to represent groups of model features in such a lookup table. We determine the space required to perform indexing of arbitrary sets of 3D model points for lookup from a single 2D image formed under perspective projection. We show that in this case, one must use a 3D surface to represent model groups, and we provide an analytic description of such a surface. This is in contrast to the cases of scaled-orthographic or affine projection, in which only a 2D surface is required to represent a group of model features.
This demonstrates a fundamental way in which the recognition of objects under perspective projection is more complex than is recognition under other projection models %B Pattern Analysis and Machine Intelligence, IEEE Transactions on %V 18 %P 330 - 333 %8 1996/03// %@ 0162-8828 %G eng %N 3 %R 10.1109/34.485561 %0 Conference Paper %B Pattern Recognition, 1996., Proceedings of the 13th International Conference on %D 1996 %T Space/time trade-offs for associative memory %A Grove, A. J. %A Jacobs, David W. %K access time %K associative memory %K associative recall %K content-addressable storage %K membership query %K memory space %K neural nets %K pattern matching %K set theory %K storage scheme %X In any storage scheme, there is some trade-off between the space used (size of memory) and access time. However, the nature of this trade-off seems to depend on more than just what is being stored; it also depends on the types of queries we consider. We justify this claim by considering a particular memory model and contrast recognition (membership queries) with associative recall. We show that the latter task can require exponentially larger memories even when identical information is stored %B Pattern Recognition, 1996., Proceedings of the 13th International Conference on %V 4 %P 296-302 vol.4 %8 1996/08// %G eng %R 10.1109/ICPR.1996.547434 %0 Conference Paper %B In Proceedings of the Workshop on Predicative Forms in Natural Language and Lexical Knowledge Bases %D 1996 %T Use of Syntactic and Semantic Filters for Lexical Acquisition: Using WordNet to Increase Precision %A Dorr, Bonnie J %A Jones,Doug %X This paper describes an approach to automatic extraction of verb meanings from machine-readable resources for the construction of large-scale knowledge sources. We describe semantic filters designed to reduce the number of incorrect assignments made by a purely syntactic technique.
We report on our results of disambiguating the verbs in the semantic filters by adding WordNet sense annotations. %B In Proceedings of the Workshop on Predicative Forms in Natural Language and Lexical Knowledge Bases %P 81 - 88 %8 1996/// %G eng %0 Conference Paper %B Proceedings of the seventh ACM conference on Hypertext %D 1996 %T Visual metaphor and the problem of complexity in the design of Web sites: techniques for generating, recognizing and visualizing structure %A Joyce,Michael %A Kolker,Robert %A Moulthrop,Stuart %A Shneiderman, Ben %A Unsworth,John Merritt %B Proceedings of the seventh ACM conference on Hypertext %S HYPERTEXT '96 %I ACM %C New York, NY, USA %P 257– %8 1996/// %@ 0-89791-778-2 %G eng %U http://doi.acm.org/10.1145/234828.234854 %R 10.1145/234828.234854 %0 Conference Paper %B Supercomputing, 1995. Proceedings of the IEEE/ACM SC95 Conference %D 1995 %T Efficient Algorithms for Atmospheric Correction of Remotely Sensed Data %A Fallah-Adl,H. %A JaJa, Joseph F. %A Liang,Shunlin %A Kaufman,Y.J. %A Townshend,J. %K atmospheric correction %K AVHRR %K high performance computing %K parallel processing %K remote sensing %K scalable I/O %K TM %X Remotely sensed imagery has been used for developing and validating various studies regarding land cover dynamics. However, the large amounts of imagery collected by the satellites are largely contaminated by the effects of atmospheric particles. The objective of atmospheric correction is to retrieve the surface reflectance from remotely sensed imagery by removing the atmospheric effects. We introduce a number of computational techniques that lead to a substantial speedup of an atmospheric correction algorithm based on using look-up tables.
Excluding I/O time, the previously known implementation processes one pixel at a time and requires about 2.63 seconds per pixel on a SPARC-10 machine, while our implementation is based on processing the whole image and takes about 4-20 microseconds per pixel on the same machine. We also develop a parallel version of our algorithm that is scalable in terms of both computation and I/O. Experimental results obtained show that a Thematic Mapper (TM) image (36 MB per band, 5 bands need to be corrected) can be handled in less than 4.3 minutes on a 32-node CM-5 machine, including I/O time. %B Supercomputing, 1995. Proceedings of the IEEE/ACM SC95 Conference %P 12 - 12 %8 1995/// %G eng %R 10.1109/SUPERC.1995.242453 %0 Journal Article %J Pattern Analysis and Machine Intelligence, IEEE Transactions on %D 1995 %T Efficient image processing algorithms on the scan line array processor %A Helman,D. %A JaJa, Joseph F. %K block DCT %K block DFT %K convex hulls %K convolution %K expanding %K histogram computation %K image processing %K intermediate level algorithms %K labelling %K low level algorithms %K median filtering %K optimal algorithms %K parallel processing %K rotation %K scaling %K scan line array processor %K shrinking %K SIMD machine %K template matching %K translation %X Develops efficient algorithms for low and intermediate level image processing on the scan line array processor, a SIMD machine consisting of a linear array of cells that processes images in a scan line fashion. For low level processing, the authors present algorithms for block DFT, block DCT, convolution, template matching, shrinking, and expanding which run in real-time. By real-time, the authors mean that, if the required processing is based on neighborhoods of size m × m, then the output lines are generated at a rate of O(m) operations per line and a latency of O(m) scan lines, which is the best that can be achieved on this model.
The authors also develop an algorithm for median filtering which runs in almost real-time at a cost of O(m log m) time per scan line and a latency of [m/2] scan lines. For intermediate level processing, the authors present optimal algorithms for translation, histogram computation, scaling, and rotation. The authors also develop efficient algorithms for labelling the connected components and determining the convex hulls of multiple figures which run in O(n log n) and O(n log² n) time, respectively. The latter algorithms are significantly simpler and easier to implement than those already reported in the literature for linear arrays %B Pattern Analysis and Machine Intelligence, IEEE Transactions on %V 17 %P 47 - 56 %8 1995/01// %@ 0162-8828 %G eng %N 1 %R 10.1109/34.368153 %0 Report %D 1995 %T Enhancing LZW Coding Using a Variable-Length Binary Encoding %A Acharya,Tinku %A JaJa, Joseph F. %K algorithms %K data compression %K Systems Integration Methodology %X We present here a methodology to enhance LZW coding for text compression using a variable-length binary encoding scheme. The basic principle of this encoding is based on allocating a set of prefix codes to a set of integers growing dynamically. The prefix property enables unique decoding of a string of elements from this set. We present experimental results to show the effectiveness of this variable-length binary encoding scheme.
%I Institute for Systems Research, University of Maryland, College Park %V ISR-TR-1995-70 %8 1995/// %G eng %U http://drum.lib.umd.edu/handle/1903/5654 %0 Journal Article %J Institute for Systems Research Technical Reports %D 1995 %T Evaluating Spatial and Textual Style of Displays %A Shneiderman, Ben %A Chimera,Richard %A Jog,Ninog %A Stimart,Ren %A White,David %K human-computer interaction, Systems Integration Methodology %X The next generation of Graphic User Interfaces (GUIs) will offer rapid access to perceptually-rich, information abundant, and cognitively consistent interfaces. These new GUIs will be subjected to usability tests and expert reviews, plus new analysis methods and novel metrics to help guide designers. We have developed and tested first generation concordance tools to help developers to review terminology, capitalization, and abbreviation. We have also developed a dialog box summary table to help developers spot patterns and identify possible inconsistencies in layout, color fonts, font size, font style, and ordering of widgets. In this study we also explored the use of metrics such as widget counts, balance, alignment, density, and aspect ratios to provide further clues about where redesigns might be appropriate. Preliminary experience with several commercial projects is encouraging. %B Institute for Systems Research Technical Reports %8 1995/// %G eng %U http://drum.lib.umd.edu/handle/1903/5639 %0 Conference Paper %B Geoscience and Remote Sensing Symposium, 1995. IGARSS '95. 'Quantitative Remote Sensing for Science and Applications', International %D 1995 %T Land cover dynamics investigation using parallel computers %A Liang, S. %A Davis, Larry S. %A Townshend,J. %A Chellapa, Rama %A DeFries, R. %A Dubayah, R. %A Goward,S. %A JaJa, Joseph F. %A Krishnamachar, S. %A Roussopoulos, Nick %A Saltz, J. %A Samet, Hanan %A Shock,T. %A Srinivasan, M. 
%K geographic information system; geophysical measurement technique; geophysics computing; image classification; image map database; image segmentation; land cover dynamics; land surface; mixture modeling; object oriented programming; optical imaging; para %K GIS; IR imaging; Markovian random fields; atmospheric correction %X A comprehensive and highly interdisciplinary research program is being carried out to investigate global land cover dynamics in heterogeneous parallel computing environments. Problems addressed include atmospheric correction, mixture modeling, image classification by Markovian random fields and by segmentation, global image/map databases, object oriented parallel programming, and parallel I/O. During the initial two years of the project, significant progress has been made in all of these areas %B Geoscience and Remote Sensing Symposium, 1995. IGARSS '95. 'Quantitative Remote Sensing for Science and Applications', International %V 1 %P 332-334 vol.1 %8 1995//10/14 %G eng %R 10.1109/IGARSS.1995.520273 %0 Conference Paper %B PROCEEDINGS OF THE INTERNATIONAL CONFERENCE ON PARALLEL PROCESSING %D 1995 %T An Optimal Ear Decomposition Algorithm with Applications on Fixed-Size Linear Arrays %A Huang,Y. M. %A JaJa, Joseph F. %B PROCEEDINGS OF THE INTERNATIONAL CONFERENCE ON PARALLEL PROCESSING %8 1995/// %G eng %0 Journal Article %J ACM SIGPLAN Notices %D 1995 %T Parallel algorithms for image histogramming and connected components with an experimental study (extended abstract) %A Bader,David A. %A JaJa, Joseph F. %K connected components %K histogramming %K IMAGE PROCESSING %K image understanding %K Parallel algorithms %K scalable parallel processing %X This paper presents efficient and portable implementations of two useful primitives in image processing algorithms, histogramming and connected components. Our general framework is a single-address space, distributed memory programming model.
We use efficient techniques for distributing and coalescing data as well as efficient combinations of task and data parallelism. Our connected components algorithm uses a novel approach for parallel merging which performs drastically limited updating during iterative steps, and concludes with a total consistency update at the final step. The algorithms have been coded in Split-C and run on a variety of platforms. Our experimental results are consistent with the theoretical analysis and provide the best known execution times for these two primitives, even when compared with machine-specific implementations. More efficient implementations of Split-C will likely result in even faster execution times. %B ACM SIGPLAN Notices %V 30 %P 123 - 133 %8 1995/08// %@ 0362-1340 %G eng %U http://doi.acm.org/10.1145/209937.209950 %N 8 %R 10.1145/209937.209950 %0 Journal Article %J Image Processing, IEEE Transactions on %D 1995 %T Scalable data parallel algorithms for texture synthesis using Gibbs random fields %A Bader, D.A. %A JaJa, Joseph F. %A Chellapa, Rama %K Gibbs random fields %K Markov random fields %K texture synthesis %K image compression %K maximum likelihood parameter estimation %K parallel algorithms %K scalable data parallel algorithms %K machine-independent algorithms %K fine-grained parallel processing %K real-time processing %K Thinking Machines CM-2 %K Thinking Machines CM-5 %X This article introduces scalable data parallel algorithms for image processing. Focusing on Gibbs and Markov random field model representation for textures, we present parallel algorithms for texture synthesis, compression, and maximum likelihood parameter estimation, currently implemented on Thinking Machines CM-2 and CM-5.
The use of fine-grained, data parallel processing techniques yields real-time algorithms for texture synthesis and compression that are substantially faster than the previously known sequential implementations. Although current implementations are on Connection Machines, the methodology presented enables machine-independent scalable algorithms for a number of problems in image processing and analysis %B Image Processing, IEEE Transactions on %V 4 %P 1456 - 1460 %8 1995/10// %@ 1057-7149 %G eng %N 10 %R 10.1109/83.465111 %0 Journal Article %J Proc. IFIP 2.6 Visual Databases Systems %D 1995 %T Starfield information visualization with interactive smooth zooming %A Jog,N. %A Shneiderman, Ben %X This paper discusses the design and implementation of interactive smooth zooming of a starfield display (which is a visualization of a multi-attribute database) and introduces the zoom bar, a new widget for zooming and panning. Whereas traditional zoom techniques are based on zooming towards or away from a focal point, this paper introduces a novel approach based on zooming towards or away from a fixed line. Starfield displays plot items from a database as small selectable glyphs using two of the ordinal attributes of the data as the variables along the display axes. One way of filtering this visual information is by changing the range of displayed values on either of the display axes. If this is done incrementally and smoothly, the starfield display appears to zoom in and out, and users can track the motion of the glyphs without getting disoriented by sudden, large changes in context. %B Proc.
IFIP 2.6 Visual Databases Systems %P 3 - 14 %8 1995/// %G eng %0 Book Section %B Visual database systems, III %D 1995 %T Starfield visualization with interactive smooth zooming %A Jog,N. %A Shneiderman, Ben %X This paper discusses the design and implementation of interactive smooth zooming of a starfield display (which is a visualization of a multi-attribute database) and introduces the zoom bar, a new widget for zooming and panning. Whereas traditional zoom techniques are based on zooming towards or away from a focal point, this paper introduces a novel approach based on zooming towards or away from a fixed line. Starfield displays plot items from a database as small selectable glyphs using two of the ordinal attributes of the data as the variables along the display axes. One way of filtering this visual information is by changing the range of displayed values on either of the display axes. If this is done incrementally and smoothly, the starfield display appears to zoom in and out, and users can track the motion of the glyphs without getting disoriented by sudden, large changes in context. %B Visual database systems, III %V 3 %P 1 - 1 %8 1995/// %G eng %0 Conference Paper %B Computer Vision, 1995. Proceedings., Fifth International Conference on %D 1995 %T Stochastic completion fields: a neural model of illusory contour shape and salience %A Williams,L. R %A Jacobs, David W.
%K boundary completion %K illusory contours %K contour shape %K stochastic completion fields %K random walk %K probability distribution %K maximum likelihood paths %K curves of least energy %K vector-field convolutions %K edge detection %K computational geometry %K image recognition %K neural nets %K mammalian visual cortex %K image plane %K visual stimuli %X We describe an algorithm and representation level theory of illusory contour shape and salience. Unlike previous theories, our model is derived from a single assumption: namely, that the prior probability distribution of boundary completion shape can be modeled by a random walk in a lattice whose points are positions and orientations in the image plane (i.e. the space which one can reasonably assume is represented by neurons of the mammalian visual cortex). Our model does not employ numerical relaxation or other explicit minimization, but instead relies on the fact that the probability that a particle following a random walk will pass through a given position and orientation on a path joining two boundary fragments can be computed directly as the product of two vector-field convolutions. We show that for the random walk we define, the maximum likelihood paths are curves of least energy, that is, on average, random walks follow paths commonly assumed to model the shape of illusory contours. A computer model is demonstrated on numerous illusory contour stimuli from the literature %B Computer Vision, 1995. Proceedings., Fifth International Conference on %P 408 - 415 %8 1995/06// %G eng %R 10.1109/ICCV.1995.466910 %0 Journal Article %J Information processing letters %D 1995 %T Using synthetic perturbations and statistical screening to assay shared-memory programs %A Snelick,R. %A JaJa, Joseph F. %A Kacker,R. %A Lyon,G.
%B Information processing letters %V 54 %P 147 - 153 %8 1995/// %G eng %N 3 %0 Journal Article %J AI Magazine %D 1994 %T AAAI 1994 Spring Symposium Series Reports %A Woods,William %A Uckun,Serdar %A Kohane,Isaac %A Bates,Joseph %A Hulthage,Ingemar %A Gasser,Les %A Hanks,Steve %A Gini,Maria %A Ram,Ashwin %A desJardins, Marie %A Johnson,Peter %A Etzioni,Oren %A Coombs,David %A Whitehead,Steven %B AI Magazine %V 15 %P 22 - 22 %8 1994/09/15/ %@ 0738-4602 %G eng %U http://www.aaai.org/ojs/index.php/aimagazine/article/viewArticle/1101 %N 3 %R 10.1609/aimag.v15i3.1101 %0 Conference Paper %B Parallel Processing Symposium, 1994. Proceedings., Eighth International %D 1994 %T The block distributed memory model for shared memory multiprocessors %A JaJa, Joseph F. %A Ryu,Kwan Woo %K block distributed memory model %K shared memory multiprocessors %K parallel algorithms %K communication complexity %K computational complexity %K memory latency %K communication bandwidth %K spatial locality %K single address space %K remote memory accesses %K interprocessor communication %K data prefetching %K pipelined prefetching %K load balancing %K data rearrangement problems %K sorting %K fast Fourier transforms %K matrix multiplication %K performance evaluation %K resource allocation %K optimal speedup %X Introduces a computation model for developing and analyzing parallel algorithms on distributed memory machines. The model allows the design of algorithms using a single address space and does not assume any particular interconnection topology. We capture performance by incorporating a cost measure for interprocessor communication induced by remote memory accesses.
The cost measure includes parameters reflecting memory latency, communication bandwidth, and spatial locality. Our model allows the initial placement of the input data and pipelined prefetching. We use our model to develop parallel algorithms for various data rearrangement problems, load balancing, sorting, FFT, and matrix multiplication. We show that most of these algorithms achieve optimal or near optimal communication complexity while simultaneously guaranteeing an optimal speed-up in computational complexity %B Parallel Processing Symposium, 1994. Proceedings., Eighth International %P 752 - 756 %8 1994/04// %G eng %R 10.1109/IPPS.1994.288220 %0 Conference Paper %B Proceedings of the workshop on Advanced visual interfaces %D 1994 %T Data structures for dynamic queries: an analytical and experimental evaluation %A Jain,Vinit %A Shneiderman, Ben %X Dynamic Queries is a querying technique for doing range search on multi-key data sets. It is a direct manipulation mechanism where the query is formulated using graphical widgets and the results are displayed graphically preferably within 100 milliseconds. This paper evaluates four data structures, the multilist, the grid file, the k-d tree and the quad tree, used to organize data in high speed storage for dynamic queries. The effect of factors like size, distribution and dimensionality of data on the storage overhead and the speed of search is explored. Analytical models for estimating the storage and the search overheads are presented, and verified to be correct by empirical data. Results indicate that multilists are suitable for small (few thousand points) data sets irrespective of the data distribution. For large data sets the grid files are excellent for uniformly distributed data, and trees are good for skewed data distributions. There was no significant difference in performance between the tree structures.
%B Proceedings of the workshop on Advanced visual interfaces %S AVI '94 %I ACM %C New York, NY, USA %P 1 - 11 %8 1994/// %@ 0-89791-733-2 %G eng %U http://doi.acm.org/10.1145/192309.192313 %R 10.1145/192309.192313 %0 Conference Paper %B Conference companion on Human factors in computing systems %D 1994 %T Dynamaps: dynamic queries on a health statistics atlas %A Plaisant, Catherine %A Jain,Vinit %B Conference companion on Human factors in computing systems %S CHI '94 %I ACM %C New York, NY, USA %P 439 - 440 %8 1994/// %@ 0-89791-651-4 %G eng %U http://doi.acm.org/10.1145/259963.260438 %R 10.1145/259963.260438 %0 Journal Article %J Theoretical computer science %D 1994 %T An efficient parallel algorithm for the single function coarsest partition problem %A JaJa, Joseph F. %A Ryu,K. W. %X We describe an efficient parallel algorithm to solve the single function coarsest partition problem. The algorithm runs in O(log n) time using O(n log log n) operations on the arbitrary CRCW PRAM. The previous best-known algorithms run in O(log² n) time using O(n log² n) operations on the CREW PRAM, and O(log n) time using O(n log n) operations on the arbitrary CRCW PRAM. Our solution is based on efficient algorithms for solving several subproblems that are of independent interest. In particular, we present efficient parallel algorithms to find a minimal starting point of a circular string with respect to lexicographic ordering and to sort lexicographically a list of strings of different lengths %B Theoretical computer science %V 129 %P 293 - 307 %8 1994/// %G eng %N 2 %0 Journal Article %J Proceedings: ARPA Image Understanding Workshop, Monterey, California %D 1994 %T Error propagation in 3D-from-2D recognition: Scaled-orthographic and perspective projections %A Alter,T. D. %A Jacobs, David W. %X Robust recognition systems require a careful understanding of the effects of error in sensed features.
Error in these image features results in uncertainty in the possible image location of each additional model feature. We present an accurate, analytic approximation for this uncertainty when model poses are based on matching three image and model points. This result applies to objects that are fully three-dimensional, where past results considered only two-dimensional objects. Further, we introduce a linear programming algorithm to compute this uncertainty when poses are based on any number of initial matches. %B Proceedings: ARPA Image Understanding Workshop, Monterey, California %8 1994/// %G eng %0 Conference Paper %B Computer Vision and Pattern Recognition, 1994. Proceedings CVPR '94., 1994 IEEE Computer Society Conference on %D 1994 %T Error propagation in full 3D-from-2D object recognition %A Alter,T. D. %A Jacobs, David W. %K 3D-from-2D object recognition %K error propagation %K feature extraction %K image features %K image sequences %K initial matches %K linear programming %K robust recognition systems %K uncertainty handling %X Robust recognition systems require a careful understanding of the effects of error in sensed features. Error in these image features results in uncertainty in the possible image location of each additional model feature. We present an accurate, analytic approximation for this uncertainty when model poses are based on matching three image and model points. This result applies to objects that are fully three-dimensional, where past results considered only two-dimensional objects. Further, we introduce a linear programming algorithm to compute this uncertainty when poses are based on any number of initial matches %B Computer Vision and Pattern Recognition, 1994. Proceedings CVPR '94., 1994 IEEE Computer Society Conference on %P 892 - 898 %8 1994/06// %G eng %R 10.1109/CVPR.1994.323920 %0 Conference Paper %B Pattern Recognition, 1994. Vol.
1 - Conference A: Computer Vision Image Processing., Proceedings of the 12th IAPR International Conference on %D 1994 %T Finding structurally consistent motion correspondences %A Jacobs, David W. %A Chennubhotla,C. %K 3D structure %K consistent motion correspondences %K independent motions %K linear programming %K motion estimation %K motion segmentation %K occlusion boundaries %K specularities %K tracked image features %K scene structure %X Much work on deriving scene structure and motion from features assumes as input a set of tracked image features that share a common 3D motion. Producing this input requires segmenting independent motions, and detecting image features that do not correspond to 3D features, originating instead, for example, in occlusion boundaries or specularities. We derive a linear program that tells when a set of tracked points might have come from 3D points that share a single motion, assuming affine motion and bounded error. We can also use linear programming to place conservative bounds on the structure of the scene that corresponds to tracked points. We implement and test this algorithm on real images %B Pattern Recognition, 1994. Vol. 1 - Conference A: Computer Vision Image Processing., Proceedings of the 12th IAPR International Conference on %V 1 %P 650-653 vol.1 %8 1994/10// %G eng %R 10.1109/ICPR.1994.576388 %0 Journal Article %J Applications of Invariance in Computer Vision %D 1994 %T Generalizing invariants for 3-D to 2-D matching %A Jacobs, David W. %X Invariant representations of images have proven useful in performing a variety of vision tasks. However, there are no general invariant functions when one considers a single 2-D image of a 3-D scene. One possible response to the lack of true invariants is to attempt to generalize the notion of an invariant by finding the most economical characterization possible of the set of all 2-D images that a group of 3-D features may produce.
A true invariant exists when, for each model, we can represent all its images at a single point in some representational space. When this is not possible, it is still very useful to find the simplest and lowest-dimensional representation of each model's images. We show how this can be done for a variety of different types of model features, and types of projection models. Of particular interest, we show how to represent the set of images that a group of 3-D points produces by two lines (1-D subspaces), one in each of two orthogonal, high-dimensional spaces, where a single image group corresponds to one point in each space. We demonstrate the value of our results by applying them to a variety of vision problems. In particular, we describe a space-efficient indexing system that performs 3-D to 2-D matching by table lookup. We also show how to find a least squares solution to the structure-from-motion problem using point features that have associated orientations, such as corners. We show how to determine when a restricted class of models may give rise to an invariant function, and construct a model-based invariant for pairs of planar algebraic curves that are not coplanar. %B Applications of Invariance in Computer Vision %P 415 - 434 %8 1994/// %G eng %R 10.1007/3-540-58240-1_22 %0 Conference Paper %B Motion of Non-Rigid and Articulated Objects, 1994., Proceedings of the 1994 IEEE Workshop on %D 1994 %T Segmenting independently moving, noisy points %A Jacobs, David W. %A Chennubhotla,C. %K 3D motion %K common motion %K consistent motion %K image segmentation %K independently moving points %K linear programming %K motion estimation %K noisy point features %K real image sequences %K video sequences %X There has been much work on using point features tracked through a video sequence to determine structure and motion. In many situations, to use this work, we must first isolate subsets of points that share a common motion.
This is hard because we must distinguish between independent motions and apparent deviations from a single motion due to noise. We propose several methods of searching for point-sets with consistent 3D motions. We analyze the potential sensitivity of each method for detecting independent motions, and experiment with each method on a real image sequence %B Motion of Non-Rigid and Articulated Objects, 1994., Proceedings of the 1994 IEEE Workshop on %P 96 - 103 %8 1994/11// %G eng %R 10.1109/MNRAO.1994.346249 %0 Journal Article %J Journal of Parallel and Distributed Computing %D 1994 %T Special issue on data parallel algorithms and programming %A JaJa, Joseph F. %A Wang,Pearl Y. %B Journal of Parallel and Distributed Computing %V 21 %P 1 - 3 %8 1994/04// %@ 0743-7315 %G eng %U http://dx.doi.org/10.1006/jpdc.1994.1037 %N 1 %R 10.1006/jpdc.1994.1037 %0 Journal Article %J International Journal of Computer Vision %D 1994 %T A study of affine matching with bounded sensor error %A Grimson,W. E.L %A Huttenlocher,D. P %A Jacobs, David W. %X Affine transformations of the plane have been used in a number of model-based recognition systems. Because the underlying mathematics are based on exact data, in practice various heuristics are used to adapt the methods to real data where there is positional uncertainty. This paper provides a precise analysis of affine point matching under uncertainty. We obtain an expression for the range of affine-invariant values that are consistent with a given set of four points, where each image point lies in an ε-disc of uncertainty. This range is shown to depend on the actual x-y positions of the data points. In other words, given uncertainty in the data there are no representations that are invariant with respect to the Cartesian coordinate system of the data. This is problematic for methods, such as geometric hashing, that are based on affine-invariant representations.
We also analyze the effect that uncertainty has on the probability that recognition methods using affine transformations will find false positive matches. We find that there is a significant probability of false positives with even moderate levels of sensor error, suggesting the importance of good verification techniques and good grouping techniques. %B International Journal of Computer Vision %V 13 %P 7 - 32 %8 1994/// %G eng %N 1 %R 10.1007/BF01420793 %0 Journal Article %J SIAM Journal on Computing %D 1994 %T Top-Bottom Routing around a Rectangle is as Easy as Computing Prefix Minima %A Berkman,Omer %A JaJa, Joseph F. %A Krishnamurthy,Sridhar %A Thurimella,Ramakrishna %A Vishkin, Uzi %K Parallel algorithms %K pram algorithms %K prefix minima %K VLSI routing %X A new parallel algorithm for the prefix minima problem is presented for inputs drawn from the range of integers $[1..s]$. For an input of size $n$, it runs in $O(\log \log \log s)$ time and $O(n)$ work (which is optimal). A faster algorithm is presented for the special case $s = n$; it runs in $O(\log^* n)$ time with optimal work. Both algorithms are for the Priority concurrent-read concurrent-write parallel random access machine (CRCW PRAM). A possibly surprising outcome of this work is that, whenever the range of the input is restricted, the prefix minima problem can be solved significantly faster than the $\Omega(\log \log n)$ time lower bound in a decision model of parallel computation, as described by Valiant [SIAM J. Comput., 4 (1975), pp. 348–355]. The top-bottom routing problem, which is an important subproblem of routing wires around a rectangle in two layers, is also considered. It is established that, for parallel (and hence for serial) computation, the problem of top-bottom routing is no harder than the prefix minima problem with $s = n$, thus giving an $O(\log^* n)$ time optimal parallel algorithm for the top-bottom routing problem.
This is one of the first nontrivial problems to be given an optimal parallel algorithm that runs in sublogarithmic time. %B SIAM Journal on Computing %V 23 %P 449 - 465 %8 1994/// %G eng %U http://link.aip.org/link/?SMJ/23/449/1 %N 3 %R 10.1137/S0097539791218275 %0 Report %D 1994 %T Uncertainty Propagation in Model-Based Recognition. %A Jacobs, David W. %A Alter,T. D. %K *IMAGE PROCESSING %K *PATTERN RECOGNITION %K algorithms %K APPROXIMATION(MATHEMATICS) %K CYBERNETICS %K ERROR CORRECTION CODES %K image registration %K Linear programming %K MATCHING %K MATHEMATICAL MODELS %K PIXELS %K PROJECTIVE TECHNIQUES. %K regions %K THREE DIMENSIONAL %K TWO DIMENSIONAL %K Uncertainty %X Building robust recognition systems requires a careful understanding of the effects of error in sensed features. Error in these image features results in a region of uncertainty in the possible image location of each additional model feature. We present an accurate, analytic approximation for this uncertainty region when model poses are based on matching three image and model points, for both Gaussian and bounded error in the detection of image points, and for both scaled-orthographic and perspective projection models. This result applies to objects that are fully three-dimensional, where past results considered only two-dimensional objects. Further, we introduce a linear programming algorithm to compute the uncertainty region when poses are based on any number of initial matches. Finally, we use these results to extend, from two-dimensional to three-dimensional objects, robust implementations of alignment, interpretation-tree search, and transformation clustering. 
%I MASSACHUSETTS INSTITUTE OF TECHNOLOGY ARTIFICIAL INTELLIGENCE LAB %8 1994/12// %G eng %U http://stinet.dtic.mil/oai/oai?&verb=getRecord&metadataPrefix=html&identifier=ADA295642 %0 Journal Article %J Institute for Systems Research Technical Reports %D 1994 %T User Controlled Smooth Zooming for Information Visualization %A Jog,Ninad %A Shneiderman, Ben %K animation %K dynamic queries %K starfield display %K Systems Integration %K visualization zooming %K zoom bar %X This paper discusses the design and implementation of user controlled smooth zooming of a starfield display. A starfield display is a two dimensional graphical visualization of a multidimensional database where every item from the database is represented as a small colored rectangle whose position is determined by its ranking along ordinal attributes of the items laid out on the axes. One way of navigating this visual information is by using a zooming tool to incrementally zoom in on the items by varying the attribute range on either axis independently - such zooming causes the rectangles to move continuously and to grow or shrink. To get a feeling of flying through the data, users should be able to track the motion of each rectangle without getting distracted by flicker or large jumps - conditions that necessitate high display refresh rates and closely spaced rectangles on successive frames. Although the use of high-speed hardware can achieve the required visual effect for small databases, the twin software bottlenecks of rapidly accessing display items and constructing a new display image fundamentally retard the refresh rate. Our work explores several methods to overcome these bottlenecks, presents a taxonomy of various zooming methods and introduces a new widget, the zoom bar, that facilitates zooming.

%B Institute for Systems Research Technical Reports %8 1994/// %G eng %U http://drum.lib.umd.edu/handle/1903/5520 %0 Conference Paper %B Computer Vision and Pattern Recognition, 1993. Proceedings CVPR '93., 1993 IEEE Computer Society Conference on %D 1993 %T 2D images of 3-D oriented points %A Jacobs, David W. %K 2D images %K 3-D oriented points %K database indexing %K image processing %K linear transformation %K model recovery %K nonrigid structure-from-motion %K structure-from-motion derivation %X A number of vision problems have been shown to become simpler when one models projection from 3-D to 2-D as a nonrigid linear transformation. These results have been largely restricted to models and scenes that consist only of 3-D points. It is shown that, with this projection model, several vision tasks become fundamentally more complex in the somewhat more complicated domain of oriented points. More space is required for indexing models in a database, more images are required to derive structure from motion, and new views of an object cannot be synthesized linearly from old views %B Computer Vision and Pattern Recognition, 1993. Proceedings CVPR '93., 1993 IEEE Computer Society Conference on %P 226 - 232 %8 1993/06// %G eng %R 10.1109/CVPR.1993.340985 %0 Journal Article %J Machine Translation %D 1993 %T A first-pass approach for evaluating machine translation systems %A Jordan,P.W. %A Dorr, Bonnie J %A Benoit,J.W. %X This paper describes a short-term survey and evaluation project that covered a large number of machine translation products and research. We discuss our evaluation approach and address certain issues and implications relevant to our findings. We represented a variety of potential users of MT systems and were faced with the task of identifying which systems would best help them solve their translation problems. 
%B Machine Translation %V 8 %P 49 - 58 %8 1993/// %G eng %N 1 %0 Journal Article %J Parallel and Distributed Systems, IEEE Transactions on %D 1993 %T Optimal algorithms on the pipelined hypercube and related networks %A JaJa, Joseph F. %A Ryu,K. W. %K combinatorial problems %K cube-connected-cycles %K line packing %K monotone polygon %K parallel algorithms %K pipelined hypercube %K shuffle-exchange %K combinatorial mathematics %K computational geometry %K pipeline processing %X Parallel algorithms for several important combinatorial problems such as the all nearest smaller values problem, triangulating a monotone polygon, and line packing are presented. These algorithms achieve linear speedups on the pipelined hypercube, and provably optimal speedups on the shuffle-exchange and the cube-connected-cycles for any number p of processors satisfying 1 ≤ p ≤ n/((log^3 n)(log log n)^2), where n is the input size. The lower bound results are established under no restriction on how the input is mapped into the local memories of the different processors %B Parallel and Distributed Systems, IEEE Transactions on %V 4 %P 582 - 591 %8 1993/05// %@ 1045-9219 %G eng %N 5 %R 10.1109/71.224210 %0 Report %D 1993 %T Recognizing 3-D objects using 2-D images %A Jacobs, David W. %X To visually recognize objects, we adopt the strategy of forming groups of image features with a bottom-up process, and then using these groups to index into a data base to find all of the matching groups of model features. This approach reduces the computation needed for recognition, since we only consider groups of model features that can account for these relatively large chunks of the image. To perform indexing, we represent a group of 3-D model features in terms of the 2-D images it can produce. 
Specifically, we show that the simplest and most space-efficient way of doing this for models consisting of general groups of 3-D point features is to represent the set of images each model group produces with two lines (1D subspaces), one in each of two orthogonal, high-dimensional spaces. These spaces represent all possible image groups so that a single image group corresponds to one point in each space. We determine the effects of bounded sensing error on a set of image points, so that we may build a robust and efficient indexing system. We also present an optimal indexing method for more complicated features, and we present bounds on the space required for indexing in a variety of situations. We use the representations of a model's images that we develop to analyze other approaches to matching. We show that there are no invariants of general 3-D models, and demonstrate limitations in the use of non-accidental properties, and in other approaches to reconstructing a 3-D scene from a single 2-D image. Grouping, Non-accidental properties, Indexing, Invariants, Recognition, Sensing error. %I MASSACHUSETTS INSTITUTE OF TECHNOLOGY ARTIFICIAL INTELLIGENCE LAB %8 1993/// %G eng %0 Journal Article %J The Journal of VLSI Signal Processing %D 1993 %T Systolic architectures for finite-state vector quantization %A Kolagotla,R. K. %A Yu,S. S. %A JaJa, Joseph F. %X We present a new systolic architecture for implementing Finite State Vector Quantization in real-time for both speech and image data. This architecture is modular and has a very simple control flow. Only one processor is needed for speech compression. A linear array of processors is used for image compression; the number of processors needed is independent of the size of the image. We also present a simple architecture for converting line-scanned image data into the format required by this systolic architecture. Image data is processed at a rate of 1 pixel per clock cycle. 
An implementation at 31.5 MHz can quantize 1024×1024 pixel images at 30 frames/sec in real-time. We describe a VLSI implementation of these processors. %B The Journal of VLSI Signal Processing %V 5 %P 249 - 259 %8 1993/// %G eng %N 2 %R 10.1007/BF01581299 %0 Journal Article %J Sparks of innovation in human-computer interaction %D 1993 %T Treemaps %A Johnson,B. %A Shneiderman, Ben %B Sparks of innovation in human-computer interaction %V 284 %P 309 - 309 %8 1993/// %G eng %0 Journal Article %J The VLDB Journal %D 1993 %T Using differential techniques to efficiently support transaction time %A Jensen,Christian S. %A Mark,Leo %A Roussopoulos, Nick %A Sellis,Timos %B The VLDB Journal %V 2 %P 75 - 111 %8 1993/01// %@ 1066-8888 %G eng %U http://dl.acm.org/citation.cfm?id=615162 %N 1 %R 10.1007/BF01231799 %0 Conference Paper %B Parallel Processing, 1993. ICPP 1993. International Conference on %D 1993 %T Using Synthetic-Perturbation Techniques for Tuning Shared Memory Programs %A Snelick,Robert %A JaJa, Joseph F. %A Kacker,Raghu %A Lyon,Gordon %X The Synthetic-Perturbation Tuning (SPT) methodology is based on an empirical approach that introduces artificial delays into the MIMD program and captures the effects of such delays by using the modern branch of statistics called design of experiments. %B Parallel Processing, 1993. ICPP 1993. International Conference on %V 2 %P 2 - 10 %8 1993/08// %G eng %R 10.1109/ICPP.1993.184 %0 Journal Article %J Signal Processing, IEEE Transactions on %D 1993 %T VLSI implementation of a tree searched vector quantizer %A Kolagotla,R. K. %A Yu,S.-S. %A JaJa, Joseph F. %K data compression %K digital signal processing chips %K image coding %K quantisation %K tree searched vector quantizer %K trees (mathematics) %K VLSI %K VLSI design %K VLSI implementation %K 2 micron %K 20 MHz %X The VLSI design and implementation of a tree-searched vector quantizer is presented. 
The number of processors needed is equal to the depth of the tree. All processors are identical, and data flow between processors is regular. No global control signals are needed. The processors have been fabricated using a 2 μm N-well process on a 7.9 × 9.2 mm die. Each processor chip contains 25000 transistors and has 84 pins. The processors have been thoroughly tested at a clock frequency of 20 MHz %B Signal Processing, IEEE Transactions on %V 41 %P 901 - 905 %8 1993/02// %@ 1053-587X %G eng %N 2 %R 10.1109/78.193225 %0 Journal Article %J Information Processing Letters %D 1992 %T On the difficulty of Manhattan channel routing %A Greenberg,Ronald %A JaJa, Joseph F. %A Krishnamurthy,Sridhar %K combinatorial problems %K computational complexity %K lower bounds %K VLSI channel routing %X We show that channel routing in the Manhattan model remains difficult even when all nets are single-sided. Given a set of n single-sided nets, we consider the problem of determining the minimum number of tracks required to obtain a dogleg-free routing. In addition to showing that the decision version of the problem is NP-complete, we show that there are problems requiring at least d + Ω(n) tracks, where d is the density. This existential lower bound does not follow from any of the known lower bounds in the literature. %B Information Processing Letters %V 44 %P 281 - 284 %8 1992/12/21/ %@ 0020-0190 %G eng %U http://www.sciencedirect.com/science/article/pii/002001909290214G %N 5 %R 10.1016/0020-0190(92)90214-G %0 Book %D 1992 %T An introduction to parallel algorithms %A JaJa, Joseph F. %I Addison Wesley Longman Publishing Co., Inc. %C Redwood City, CA, USA %8 1992/// %@ 0-201-54856-9 %G eng %0 Book %D 1992 %T Introduction to Parallel Computing %A JaJa, Joseph F. %I Addison-Wesley Publishing Co %8 1992/// %G eng %0 Journal Article %J Journal of Parallel and Distributed Computing %D 1992 %T Load balancing and routing on the hypercube and related networks %A JaJa, Joseph F. 
%A Ryu,Kwan Woo %X Several results related to the load balancing problem on the hypercube, the shuffle-exchange, the cube-connected cycles, and the butterfly are shown. Implications of these results for routing algorithms are also discussed. Our results include the following: • Efficient load balancing algorithms are found for the hypercube, the shuffle-exchange, the cube-connected cycles, and the butterfly. • Load balancing is shown to require more time on a p-processor shuffle-exchange, cube-connected cycle or butterfly than on a p-processor weak hypercube. • Routing n packets on a p-processor hypercube can be done optimally whenever n = p^(1+1/k), for any fixed k > 0. %B Journal of Parallel and Distributed Computing %V 14 %P 431 - 435 %8 1992/04// %@ 0743-7315 %G eng %U http://www.sciencedirect.com/science/article/pii/074373159290081W %N 4 %R 10.1016/0743-7315(92)90081-W %0 Conference Paper %B Computer Vision and Pattern Recognition, 1992. Proceedings CVPR '92., 1992 IEEE Computer Society Conference on %D 1992 %T Space efficient 3-D model indexing %A Jacobs, David W. %K 3-D model indexing %K 3-D point features %K sensing error %K table lookup %K image processing %X It is shown that the set of 2-D images produced by a group of 3-D point features of a rigid model can be optimally represented with two lines in two high-dimensional spaces. This result is used to match images and model groups by table lookup. The table is efficiently built and accessed through analytic methods that account for the effect of sensing error. In real images, it reduces the set of potential matches by a factor of several thousand. This representation of a model's images is used to analyze two other approaches to recognition. It is determined when invariants exist in several domains, and it is shown that there is an infinite set of qualitatively similar nonaccidental properties %B Computer Vision and Pattern Recognition, 1992. 
Proceedings CVPR '92., 1992 IEEE Computer Society Conference on %P 439 - 444 %8 1992/06// %G eng %R 10.1109/CVPR.1992.223153 %0 Journal Article %J Computer Vision—ECCV'92 %D 1992 %T A study of affine matching with bounded sensor error %A Grimson,W. %A Huttenlocher,D. %A Jacobs, David W. %X Affine transformations of the plane have been used by model-based recognition systems to approximate the effects of perspective projection. Because the underlying mathematics are based on exact data, in practice various heuristics are used to adapt the methods to real data where there is positional uncertainty. This paper provides a precise analysis of affine point matching under uncertainty. We obtain an expression for the range of affine-invariant values consistent with a given set of four points, where each data point lies in an ε-disc. This range is shown to depend on the actual x-y-positions of the data points. Thus given uncertainty in the data, the representation is no longer invariant with respect to the Cartesian coordinate system. This is problematic for methods, such as geometric hashing, that depend on the invariant properties of the representation. We also analyze the effect that uncertainty has on the probability that recognition methods using affine transformations will find false positive matches. We find that such methods will produce false positives with even moderate levels of sensor error. %B Computer Vision—ECCV'92 %P 291 - 306 %8 1992/// %G eng %R 10.1007/3-540-55426-2_34 %0 Report %D 1992 %T VLSI Architectures and Implementation of Predictive Tree-Searched Vector Quantizers for Real-Time Video Compression %A Yu,S.-S. %A Kolagotla,Ravi K. %A JaJa, Joseph F. 
%K data compression %K IMAGE PROCESSING %K Signal processing %K Speech processing %K Systems Integration %K systolic architecture %K Vector quantization %K VLSI architectures %X We describe a pipelined systolic architecture for implementing predictive Tree-Searched Vector Quantization (PTSVQ) for real-time image and speech coding applications. This architecture uses identical processors for both the encoding and decoding processes. The overall design is regular and the control is simple. Input data is processed at a rate of 1 pixel per clock cycle, which allows real-time processing of images at video rates. We implemented these processors using 1.2 μm CMOS technology. SPICE simulations indicate correct operation at 40 MHz. Prototype versions of these chips fabricated using 2 μm CMOS technology work at 20 MHz. %I Institute for Systems Research, University of Maryland, College Park %V ISR-TR-1992-48 %8 1992/// %G eng %U http://drum.lib.umd.edu/handle/1903/5230 %0 Report %D 1992 %T VLSI Implementation of Real-Time Parallel DCT/DST Lattice Structures for Video %A Chiu,Ching-Te %A Kolagotla,Ravi K. %A Liu,K. J. Ray %A JaJa, Joseph F. %K Parallel architectures %K Signal processing %K Systems Integration %K VLSI architectures %X The alternate use [1] of the discrete cosine transform (DCT) and the discrete sine transform (DST) can achieve a higher data compression rate and less block effect in image processing. A parallel lattice structure that can dually generate the 1-D DCT and DST is proposed. We also develop a fully-pipelined 2-D DCT lattice architecture that consists of two 1-D DCT/DST arrays without transposition. Both architectures are ideally suited for VLSI implementation because they are modular, regular, and have only local interconnections. The VLSI implementation of the lattice module using the distributed arithmetic approach is described. This realization of the lattice module using 2 μm CMOS technology can achieve an 80 Mb/s data rate. 
%I Institute for Systems Research, University of Maryland, College Park %V ISR-TR-1992-34 %8 1992/// %G eng %U http://drum.lib.umd.edu/handle/1903/5216 %0 Journal Article %J Software Engineering Education %D 1991 %T Computer based systems engineering workshop %A Lavi,J. %A Agrawala, Ashok K. %A Buhr,R. %A Jackson,K. %A Jackson,M. %A Lang,B. %X Modern computer based systems are complex multi-systems consisting of many connected individual subsystems; each one of them is typically also a multicomputer system. The subsystems in a multi-system can be either geographically distributed or locally connected systems. Typical examples of computer based systems are medical systems, process control systems, communications systems, weapon systems and large information systems.The development of these complex systems requires the establishment of a new engineering discipline in its own right, Computer Based Systems Engineering — CBSE. The definition of the discipline, its current and future practice and the ways to establish and promote it were discussed in an international IEEE workshop held in Neve-Ilan, Israel in May 1990. The major conclusion of the workshop was that CBSE should be established as a new field in its own right. To achieve this goal, the workshop participants recommended that the IEEE Computer Society shall set up a task force for the promotion of the field, the establishment of CBSE Institutes and the development of the educational framework of CBSE. The paper describes the major findings of the workshop that led to these conclusions and recommendations. %B Software Engineering Education %P 149 - 163 %8 1991/// %G eng %0 Journal Article %J IEEE Transactions on Knowledge and Data Engineering %D 1991 %T Incremental implementation model for relational databases with transaction time %A Jensen,C. S %A Mark,L. 
%A Roussopoulos, Nick %K Computational modeling %K Computer science %K Data models %K data retrieval %K Database languages %K Database systems %K database theory %K decremental computations %K deferred update %K Degradation %K differential computation %K historical data %K History %K incremental computations %K Information retrieval %K partitioned storage models %K queries %K relational data model %K Relational databases %K stored views %K Transaction databases %K transaction time %K view materialization %X An implementation model for the standard relational data model extended with transaction time is presented. The implementation model integrates techniques of view materialization, differential computation, and deferred update into a coherent whole. It is capable of storing any view (reflecting past or present states) and subsequently using stored views as outsets for incremental and decremental computations of requested views, making it more flexible than previously proposed partitioned storage models. The working and the expressiveness of the model are demonstrated by sample queries that show how historical data are retrieved %B IEEE Transactions on Knowledge and Data Engineering %V 3 %P 461 - 473 %8 1991/12// %@ 1041-4347 %G eng %N 4 %R 10.1109/69.109107 %0 Conference Paper %B Computer Vision and Pattern Recognition, 1991. Proceedings CVPR '91., IEEE Computer Society Conference on %D 1991 %T Model group indexing for recognition %A Clemens,D. T %A Jacobs, David W. 
%K 2-D sheet %K 2G-4 dimensional index space %K computer vision %K computerised pattern recognition %K data structures %K image-model match search %K indexing system %K model group indexing %K pattern recognition %K pointers %K table lookup %X It is shown that an index space can be a powerful tool for reducing the image-model match search by a factor of k^(G-3), but only when accompanied by some mechanism, such as grouping, that prevents the system from having to consider all matches between image groups of size G and model groups of size G. It is also shown that if image groups are to index a single point at recognition time, then the index space must contain pointers to each model group over a 2-D sheet, and should therefore be (2G-4)-dimensional. A simple indexing system has been implemented to demonstrate these concepts, and a series of experiments have been conducted to investigate the tradeoffs between space and time. They indicate that the speedups are achievable, but require a large amount of space %B Computer Vision and Pattern Recognition, 1991. Proceedings CVPR '91., IEEE Computer Society Conference on %P 4 - 9 %8 1991/06// %G eng %R 10.1109/CVPR.1991.139652 %0 Journal Article %J Algorithmica %D 1991 %T Optimal algorithms for adjacent side routing %A Wu,S.A. %A JaJa, Joseph F. %X We consider the switchbox routing problem of two-terminal nets in the case when all the k nets lie on two adjacent sides of the rectangle. Our routing model is the standard two-layer model. We develop an optimal algorithm that routes all the nets whenever a routing exists. The routing obtained uses the fewest possible number of vias. A more general version of this problem (adjacent staircase) is also optimally solved. %B Algorithmica %V 6 %P 565 - 578 %8 1991/// %G eng %N 1 %R 10.1007/BF01759060 %0 Conference Paper %B Computer Vision and Pattern Recognition, 1991. 
Proceedings CVPR '91., IEEE Computer Society Conference on %D 1991 %T Optimal matching of planar models in 3D scenes %A Jacobs, David W. %K 3D scenes %K bounded sensing error %K close approximation %K flat object %K image %K maximum error %K model features %K optimal matching %K planar models %K point features %K computerised pattern recognition %K computerised picture processing %X The problem of matching a model consisting of the point features of a flat object to point features found in an image that contains the object in an arbitrary three-dimensional pose is addressed. Once three points are matched, it is possible to determine the pose of the object. Assuming bounded sensing error, the author presents a solution to the problem of determining the range of possible locations in the image at which any additional model points may appear. This solution leads to an algorithm for determining the largest possible matching between image and model features that includes this initial hypothesis. The author implements a close approximation to this algorithm, which is O(nmε^6), where n is the number of image points, m is the number of model points, and ε is the maximum sensing error. This algorithm is compared to existing methods, and it is shown that it produces more accurate results %B Computer Vision and Pattern Recognition, 1991. Proceedings CVPR '91., IEEE Computer Society Conference on %P 269 - 274 %8 1991/06// %G eng %R 10.1109/CVPR.1991.139700 %0 Journal Article %J SIAM Journal on Computing %D 1991 %T Parallel Algorithms for Channel Routing in the Knock-Knee Model %A JaJa, Joseph F. %A Chang,Shing-Chong %K channel routing %K Layout %K left-edge algorithm %K line packing %K Parallel algorithms %K VLSI design %X The channel routing problem of a set of two-terminal nets in the knock-knee model is considered. 
A new approach to route all the nets within $d$ tracks, where $d$ is the density, such that the corresponding layout can be realized with three layers is developed. The routing and the layer assignment algorithms run in $O(\log n)$ time with $n / \log n$ processors on the CREW PRAM model under the reasonable assumption that all terminals lie in the range $[1,N]$, where $N = O(n)$. %B SIAM Journal on Computing %V 20 %P 228 - 245 %8 1991/// %G eng %U http://link.aip.org/link/?SMJ/20/228/1 %N 2 %R 10.1137/0220014 %0 Journal Article %J Integration, the VLSI Journal %D 1991 %T Parallel algorithms for VLSI routing %A JaJa, Joseph F. %K channel routing %K detailed routing %K global routing %K Parallel algorithms %K river routing %K VLSI routing %X With the increase in the design complexity of VLSI systems, there is an ever increasing need for efficient design automation tools. Parallel processing could open up the way for substantially faster and cost-effective VLSI design tools. In this paper, we review some of the basic parallel algorithms that have been recently developed to handle problems arising in VLSI routing. We also include some results that have not appeared in the literature before. These results indicate that existing parallel algorithmic techniques can efficiently handle many VLSI routing problems. Our emphasis will be on outlining some of the basic parallel strategies with appropriate pointers to the literature for more details. %B Integration, the VLSI Journal %V 12 %P 305 - 320 %8 1991/12// %@ 0167-9260 %G eng %U http://www.sciencedirect.com/science/article/pii/016792609190027I %N 3 %R 10.1016/0167-9260(91)90027-I %0 Journal Article %J Pattern Analysis and Machine Intelligence, IEEE Transactions on %D 1991 %T Space and time bounds on indexing 3D models from 2D images %A Clemens,D. T %A Jacobs, David W. 
%K 2D images %K 3D models %K model features %K model-based visual recognition systems %K model indexing %K feature extraction %K grouping operation %K pattern recognition %K space bounds %K time bounds %K computerised pattern recognition %K computerised picture processing %X Model-based visual recognition systems often match groups of image features to groups of model features to form initial hypotheses, which are then verified. In order to accelerate recognition considerably, the model groups can be arranged in an index space (hashed) offline such that feasible matches are found by indexing into this space. For the case of 2D images and 3D models consisting of point features, bounds on the space required for indexing and on the speedup that such indexing can achieve are demonstrated. It is proved that, even in the absence of image error, each model must be represented by a 2D surface in the index space. This places an unexpected lower bound on the space required to implement indexing and proves that no quantity is invariant for all projections of a model into the image. Theoretical bounds on the speedup achieved by indexing in the presence of image error are also determined, and an implementation of indexing for measuring this speedup empirically is presented. It is found that indexing can produce only a minimal speedup on its own. However, when accompanied by a grouping operation, indexing can provide significant speedups that grow exponentially with the number of features in the groups %B Pattern Analysis and Machine Intelligence, IEEE Transactions on %V 13 %P 1007 - 1017 %8 1991/10// %@ 0162-8828 %G eng %N 10 %R 10.1109/34.99235 %0 Conference Paper %B IEEE Conference on Visualization, 1991. Visualization '91, Proceedings %D 1991 %T Tree-maps: a space-filling approach to the visualization of hierarchical information structures %A Johnson,B. 
%A Shneiderman, Ben %K Computer displays %K Computer Graphics %K Computer science %K Data analysis %K display space %K Educational institutions %K Feedback %K hierarchical information structures %K HUMANS %K Laboratories %K Libraries %K Marine vehicles %K rectangular region %K semantic information %K space-filling approach %K tree-map visualization technique %K trees (mathematics) %K Two dimensional displays %K Visualization %X A method for visualizing hierarchically structured information is described. The tree-map visualization technique makes 100% use of the available display space, mapping the full hierarchy onto a rectangular region in a space-filling manner. This efficient use of space allows very large hierarchies to be displayed in their entirety and facilitates the presentation of semantic information. Tree-maps can depict both the structure and content of the hierarchy. However, the approach is best suited to hierarchies in which the content of the leaf nodes and the structure of the hierarchy are of primary importance, and the content information associated with internal nodes is largely derived from their children %B IEEE Conference on Visualization, 1991. Visualization '91, Proceedings %I IEEE %P 284 - 291 %8 1991/10/22/25 %@ 0-8186-2245-8 %G eng %R 10.1109/VISUAL.1991.175815 %0 Journal Article %J Computers, IEEE Transactions on %D 1991 %T VLSI architectures for multidimensional transforms %A Chakrabarti,C. %A JaJa, Joseph F. %K computational complexity %K Computer architecture %K fixed-precision digital arithmetic %K multidimensional linear separable transforms %K VLSI architectures %X The authors propose a family of VLSI architectures with area-time tradeoffs for computing (N × N × ... × N) d-dimensional linear separable transforms. 
For fixed-precision arithmetic with b bits, the architectures have an area A = O(N^(d+2a)) and computation time T = O(dN^(d/2-a)b), and achieve the AT^2 bound of AT^2 = O(n^2 b^2) for constant d, where n = N^d and 0 < a ≤ d/2 %B Computers, IEEE Transactions on %V 40 %P 1053 - 1057 %8 1991/09// %@ 0018-9340 %G eng %N 9 %R 10.1109/12.83648 %0 Journal Article %J Parallel architectures and algorithms for image understanding %D 1991 %T VLSI architectures for template matching and block matching %A Chakrabarti,C. %A JaJa, Joseph F. %B Parallel architectures and algorithms for image understanding %P 3 - 27 %8 1991/// %G eng %0 Book %D 1990 %T The 3rd Symposium on the Frontiers of Massively Parallel Computation: Proceedings of the Third Symposium %A JaJa, Joseph F. %I IEEE Computer Society Press %8 1990/// %G eng %0 Journal Article %J Parallel and Distributed Systems, IEEE Transactions on %D 1990 %T Efficient algorithms for list ranking and for solving graph problems on the hypercube %A Ryu,K. W. %A JaJa, Joseph F. %K basic graph problems %K biconnected components %K computational complexity %K ear decomposition %K graph algorithms %K graph theory %K hypercube algorithm %K linear speedup %K list ranking %K load balancing %K one-port communication %K parallel algorithms %K sorting %K st-numbering %K tree expression evaluation %X A hypercube algorithm to solve the list ranking problem is presented. Let n be the length of the list, and let p be the number of processors of the hypercube. The algorithm described runs in time O(n/p) when n = Ω(p^(1+ε)) for any constant ε > 0, and in time O(n log n/p + log^3 p) otherwise. This clearly attains a linear speedup when n = Ω(p^(1+ε)). Efficient balancing and routing schemes had to be used to achieve the linear speedup. 
The authors use these techniques to obtain efficient hypercube algorithms for many basic graph problems such as tree expression evaluation, connected and biconnected components, ear decomposition, and st-numbering. These problems are also addressed in the restricted model of one-port communication. %B Parallel and Distributed Systems, IEEE Transactions on %V 1 %P 83 - 90 %8 1990/01// %@ 1045-9219 %G eng %N 1 %R 10.1109/71.80127 %0 Conference Paper %B Computer Design: VLSI in Computers and Processors, 1990. ICCD '90. Proceedings., 1990 IEEE International Conference on %D 1990 %T An efficient parallel algorithm for channel routing %A Krishnamurthy, S. %A JaJa, Joseph F. %K 21000;channel %K access %K algorithm;sequential %K algorithms; %K algorithms;shared %K balance %K CAD;computational %K complexity;parallel %K Layout %K machine;standard %K memory %K model;circuit %K nets;parallel %K Parallel %K PRAM;Sequent %K random %K routing;multiprocessor %K system;multiterminal %K two-layer %X The channel routing of a set of multiterminal nets in the standard two-layer model is considered. The sequential algorithms based on the greedy strategy do not seem to be easily parallelizable. An efficient parallel algorithm for routing channels with cyclic constraints is proposed. The algorithm runs in time O(n^2/p + log^2 p), with p processors, on a shared-memory parallel random access machine (PRAM) model, where 1 ≤ p ≤ n^2 and n is the size of the input. An efficient adaptation of this algorithm on a Sequent Balance 21000 multiprocessor system is reported. %B Computer Design: VLSI in Computers and Processors, 1990. ICCD '90. Proceedings., 1990 IEEE International Conference on %P 400 - 403 %8 1990/09// %G eng %R 10.1109/ICCD.1990.130261 %0 Journal Article %J Electronic Publishing %D 1990 %T Examining usability for a training-oriented hypertext: Can hyper-activity be good? %A Jones,T.
%A Shneiderman, Ben %X We describe the design and evaluation of a hypertext-based tutorial for hypertext authors. This 85-article tutorial represents an innovative application of hypertext to procedural learning. The work has been influenced by Carroll’s minimalist model, and by the syntactic/semantic model of user behavior. The usability study involved eight subjects who studied the Hyperties Author Tutorial (HAT) for approximately one hour and then performed a set of authoring tasks in an average of 21 minutes. All users successfully completed the tasks. As a result of the study, we provide a characterization of appropriate uses of hypertext for training, and describe the meaning of a hyper-active environment. %B Electronic Publishing %V 3 %P 207 - 225 %8 1990/// %G eng %N 4 %0 Conference Paper %B Proceedings of 1990 International Conference on Parallel Processing %D 1990 %T Load balancing on the hypercube and related networks %A JaJa, Joseph F. %A Ryu,K. W. %B Proceedings of 1990 International Conference on Parallel Processing %V 1 %P 203 - 210 %8 1990/// %G eng %0 Journal Article %J Computers, IEEE Transactions on %D 1990 %T Systolic architectures for the computation of the discrete Hartley and the discrete cosine transforms based on prime factor decomposition %A Chakrabarti,C. %A JaJa, Joseph F. %K architectures; %K architectures;two-dimensional %K arithmetic;discrete %K arrays;fast %K binary %K cosine %K decomposition;systolic %K design;prime %K factor %K Fourier %K Hartley;discrete %K systolic %K transforms;hardware %K transforms;parallel %X Two-dimensional systolic array implementations for computing the discrete Hartley transform (DHT) and the discrete cosine transform (DCT) when the transform size N is decomposable into mutually prime factors are proposed. The existing two-dimensional formulations for DHT and DCT are modified, and the corresponding algorithms are mapped into two-dimensional systolic arrays.
The resulting architecture is fully pipelined with no control units. The hardware design is based on bit-serial, left-to-right, MSB (most significant bit) to LSB (least significant bit) binary arithmetic. %B Computers, IEEE Transactions on %V 39 %P 1359 - 1368 %8 1990/11// %@ 0018-9340 %G eng %N 11 %R 10.1109/12.61045 %0 Book %D 1989 %T Efficient Techniques for Routing and for Solving Graph Problems on the Hypercube %A JaJa, Joseph F. %A Ryu,K. W. %I University of Maryland %8 1989/// %G eng %0 Journal Article %J AI Memos (1959 - 2004) %D 1989 %T Grouping for recognition %A Jacobs, David W. %X This paper presents a new method of grouping edges in order to recognize objects. This grouping method succeeds on images of both two- and three-dimensional objects. We order groups of edges based on the likelihood that a single object produced them, so that the recognition system can consider first the collections of edges most likely to lead to the correct recognition of objects. The grouping module estimates this likelihood using the distance that separates edges and their relative orientation. This ordering greatly reduces the amount of computation required to locate objects and improves the system's robustness to error. %B AI Memos (1959 - 2004) %8 1989/// %G eng %U http://hdl.handle.net/1721.1/6525 %0 Conference Paper %B Proceedings of the 1989 International Conference on Parallel Processing %D 1989 %T List ranking on the Hypercube %A Ryu,K. W. %A JaJa, Joseph F. %B Proceedings of the 1989 International Conference on Parallel Processing %V 3 %P 20 - 23 %8 1989/// %G eng %0 Journal Article %J Computers, IEEE Transactions on %D 1989 %T A new approach to realizing partially symmetric functions %A JaJa, Joseph F. %A Wu,S.-M.
%K class %K complexity;logic %K cover;symmetric %K covers;switching %K design;switching %K functions;Boolean %K functions;complexity;partially %K functions;computational %K functions;sum-of-product %K of %K symmetric %K theory; %K theory;symmetric %X Consideration is given to the class of partially symmetric functions, and a method for realizing them is outlined. Each such function can be expressed as a sum of totally symmetric functions such that a circuit can be designed with its complexity dependent on the size of such a symmetric cover. The authors compare the sizes of symmetric and sum-of-product covers and show that the symmetric cover will be substantially smaller for this class of functions. %B Computers, IEEE Transactions on %V 38 %P 896 - 898 %8 1989/06// %@ 0018-9340 %G eng %N 6 %R 10.1109/12.24302 %0 Report %D 1989 %T Optimal parallel algorithms for one-layer routing %A Chang,S. C. %A JaJa, Joseph F. %A Ryu,K. W. %I Institute for Advanced Computer Studies, University of Maryland, College Park %8 1989/// %G eng %0 Journal Article %J Computer-Aided Design of Integrated Circuits and Systems, IEEE Transactions on %D 1989 %T On routing two-terminal nets in the presence of obstacles %A JaJa, Joseph F. %A Wu,S.A. %K CAD; %K criteria;routability;routing %K edge-disjoint %K finding %K Layout %K length %K model;number %K model;total %K nets;standard %K of %K paths;knock-knee %K two-layer %K two-terminal %K vias;obstacles;optimization %K wires;circuit %X Consideration is given to the problem of routing k two-terminal nets in the presence of obstacles in two models: the standard two-layer model and the knock-knee model. Determining routability is known to be NP-complete for arbitrary k. The authors introduce a technique that reduces the general problem to finding edge-disjoint paths in a graph whose size depends only on the size of the obstacles.
Two optimization criteria are considered: the total length of the wires and the number of vias used. %B Computer-Aided Design of Integrated Circuits and Systems, IEEE Transactions on %V 8 %P 563 - 570 %8 1989/05// %@ 0278-0070 %G eng %N 5 %R 10.1109/43.24884 %0 Conference Paper %B VLSI Algorithms and Architectures %D 1988 %T Input sensitive VLSI layouts for graphs of arbitrary degree %A Sherlekar,D. %A JaJa, Joseph F. %X A general method to find area-efficient VLSI layouts of graphs of arbitrary degree is presented. For graphs of maximum degree Δ, the layouts obtained are smaller by a factor of Δ^2 than those obtained using existing methods. Optimal planar layouts and near-optimal nonplanar layouts are also derived for planar graphs of arbitrary degree and gauge. The results span the spectrum between outerplanar graphs (gauge 1) and arbitrary planar graphs (gauge O(n)). Optimality is established by developing families of planar graphs of varying gauge and degree, and proving lower bounds on their layout area. These techniques can be combined to exhibit a trade-off between area and the number of contact cuts. The resulting scheme is sensitive to all three parameters that affect the area: the maximum degree, the gauge, and the number of contact cuts. %B VLSI Algorithms and Architectures %P 268 - 277 %8 1988/// %G eng %R 10.1007/BFb0040394 %0 Report %D 1988 %T Optimal Architectures for Multidimensional Transforms %A Chakrabarti,Chaitali %A JaJa, Joseph F. %K Technical Report %X Multidimensional transforms have widespread applications in computer vision, pattern analysis and image processing. The only existing optimal architecture for computing the multidimensional DFT on data of size n = N^d requires very large rotator units of area O(n^2) and pipeline-time O(log n). In this paper we propose a family of optimal architectures with area-time trade-offs for computing multidimensional transforms.
The large rotator unit is replaced by a combination of a small rotator unit, a transpose unit and a block rotator unit. The combination has an area of O(N^(d+2a)) and a pipeline time of O(N^(d/2-a)log n), for 0 < a < d/2. We apply this scheme to design optimal architectures for the two-dimensional DFT, DHT and DCT. The computation is made efficient by mapping each of the one-dimensional transforms involved into two dimensions. %I Institute for Systems Research, University of Maryland, College Park %V ISR-TR-1988-39 %8 1988/// %G eng %U http://drum.lib.umd.edu/handle/1903/4770 %0 Report %D 1988 %T Optimal Systolic Designs for the Computation of the Discrete Hartley and the Discrete Cosine Transforms %A Chakrabarti,Chaitali %A JaJa, Joseph F. %K *ALGORITHMS %K *BINARY ARITHMETIC %K Computer architecture %K DCT(DISCRETE COSINE TRANSFORM) %K DISCRETE FOURIER TRANSFORMS %K ITERATIONS %K NUMERICAL MATHEMATICS %K TWO DIMENSIONAL %X In this paper, we propose new algorithms for computing the Discrete Hartley and the Discrete Cosine Transforms. The algorithms are based on iterative applications of the modified small-n DFT algorithms. The one-dimensional transforms are mapped into two dimensions first and then implemented on two-dimensional systolic arrays. Pipelined bit-serial architectures operating on left-to-right, LSB-to-MSB binary arithmetic form the basis of the hardware design. Different hardware schemes for implementing these transforms are studied. We show that our schemes achieve a substantial speed-up over existing schemes. %I Institute for Systems Research, University of Maryland, College Park %8 1988/// %G eng %U http://stinet.dtic.mil/oai/oai?&verb=getRecord&metadataPrefix=html&identifier=ADA452389 %0 Journal Article %J Circuits and Systems, IEEE Transactions on %D 1988 %T Parallel algorithms for planar graph isomorphism and related problems %A JaJa, Joseph F. %A Kosaraju,S.R.
%K 2D %K algorithms; %K algorithms;parallel %K array;computational %K array;CREW-PRAM %K coarsest %K complexity;graph %K components;two-dimensional %K COMPUTATION %K graph %K graph;planar %K isomorphism;single-function %K model;mesh %K models;planar %K partitioning %K problem;triconnected %K processor %K theory;parallel %X Parallel algorithms for planar graph isomorphism and several related problems are presented. Two models of parallel computation are considered: the CREW-PRAM model and the two-dimensional array of processors. The results include O(√n)-time mesh algorithms for finding a good separating cycle and the triconnected components of a planar graph, and for solving the single-function coarsest partitioning problem. %B Circuits and Systems, IEEE Transactions on %V 35 %P 304 - 311 %8 1988/03// %@ 0098-4094 %G eng %N 3 %R 10.1109/31.1743 %0 Report %D 1988 %T Parallel Algorithms for Wiring Module Pins to Frame Pads %A Chang,Shing-Chong %A JaJa, Joseph F. %K *ALGORITHMS %K *PARALLEL PROCESSING %K COMPUTER PROGRAMMING AND SOFTWARE %K efficiency %K INPUT %K LAYERS %K length %K MEMORY DEVICES %K MODELS %K MODULAR CONSTRUCTION %K NUMERICAL MATHEMATICS %K PARALLEL ORIENTATION %K PINS %K WIRE %X We present fast and efficient parallel algorithms for several problems related to wiring a set of pins on a module to a set of pads lying on the boundary of a chip. The one-layer model is used to perform the wiring. Our basic model of parallel processing is the CREW-PRAM model, which is characterized by the presence of an unlimited number of processors sharing the main memory. Concurrent reads are allowed while concurrent writes are not. All our algorithms use O(n) processors, where n is the input length. Our algorithms have fast implementations on other parallel models such as the mesh or the hypercube.
%I Institute for Systems Research, University of Maryland, College Park %8 1988/// %G eng %U http://stinet.dtic.mil/oai/oai?&verb=getRecord&metadataPrefix=html&identifier=ADA452383 %0 Journal Article %J SIAM Journal on Computing (SICOMP) %D 1987 %T The Discrete Geodesic Problem %A Mitchell,Joseph S. B. %A Mount, Dave %A Papadimitriou,C. H %X We present an algorithm for determining the shortest path between a source and a destination on an arbitrary (possibly nonconvex) polyhedral surface. The path is constrained to lie on the surface, and distances are measured according to the Euclidean metric. Our algorithm runs in time O(n^2 log n) and requires O(n^2) space, where n is the number of edges of the surface. After we run our algorithm, the distance from the source to any other destination may be determined using standard techniques in time O(log n) by locating the destination in the subdivision created by the algorithm. The actual shortest path from the source to a destination can be reported in time O(k + log n), where k is the number of faces crossed by the path. The algorithm generalizes to the case of multiple source points to build the Voronoi diagram on the surface, where n is now the maximum of the number of vertices and the number of sources. %B SIAM Journal on Computing (SICOMP) %V 16 %8 1987/// %G eng %N 4 %0 Journal Article %J IEEE Workshop on Computer Vision %D 1987 %T GROPER: A Grouping Based Object Recognition System for Two-Dimensional Objects %A Jacobs, David W. %B IEEE Workshop on Computer Vision %P 164 - 169 %8 1987/// %G eng %0 Report %D 1985 %T A High-Level Interactive System for Designing VLSI Signal Processors %A Owens,R. M. %A JaJa, Joseph F. %K Technical Report %X This paper describes a high-level interactive system that can be used to generate VLSI designs for various operations in signal processing such as filtering, convolution and computing the discrete Fourier transform.
The overall process is fully automated and requires that the user specify only a few parameters such as operation, precision, size and architecture type. The built-in architectures are new digit-on-line bit-serial architectures that are based on recently derived fast algorithms for the above operations. The basic elements are compact and have a very small gate delay. We feel that our system will offer a flexible and easy-to-use tool that can produce practical designs which are easy to test, efficient and fast. %I Institute for Systems Research, University of Maryland, College Park %V ISR-TR-1985-7 %8 1985/// %G eng %U http://drum.lib.umd.edu/handle/1903/4383 %0 Journal Article %J Computers & Mathematics with Applications %D 1985 %T Lower bounds on monotone arithmetic circuits with restricted depths %A JaJa, Joseph F. %X We consider monotone arithmetic circuits with restricted depths to compute monotone multivariate polynomials such as the elementary symmetric functions, convolution of several vectors and raising a matrix to a power. We develop general lower- and upper-bound techniques that seem to generate almost-matching bounds for all the functions considered. These results imply exponential lower bounds for circuits of bounded depths which compute any of these functions. We also obtain several examples for which negation can reduce the size exponentially. %B Computers & Mathematics with Applications %V 11 %P 1155 - 1164 %8 1985/12// %@ 0898-1221 %G eng %U http://www.sciencedirect.com/science/article/pii/0898122185901038 %N 12 %R 10.1016/0898-1221(85)90103-8 %0 Report %D 1985 %T A New Approach for Compiling Boolean Functions %A JaJa, Joseph F. %A Wu,S. M. %K Technical Report %X We propose a new approach for laying out Boolean functions which is based on extracting the symmetries of a given set of functions and applying optimization procedures especially tailored to exploit these symmetries.
This paper establishes a rigorous foundation for this approach and shows that it will outperform existing methods for many classes of functions. The different components of a newly developed system, SYMBL, will be briefly described. %I Institute for Systems Research, University of Maryland, College Park %V ISR-TR-1985-39 %8 1985/// %G eng %U http://drum.lib.umd.edu/handle/1903/4412 %0 Report %D 1985 %T VLSI Architectures Based on the Small N Algorithms %A JaJa, Joseph F. %A Owens,R. M. %K Technical Report %X Digital convolution and the discrete Fourier transform are basic operations whose computational requirements are of great importance in many applications. In this paper, we propose new types of VLSI architectures which are shown to be quite suitable for handling these operations. These architectures result in fully pipelined bit-serial arrays which require no control units. Some preliminary implementations indicate a substantial speed-up over other existing designs. %I Institute for Systems Research, University of Maryland, College Park %V ISR-TR-1985-8 %8 1985/// %G eng %U http://drum.lib.umd.edu/handle/1903/4384 %0 Journal Article %J Journal of the ACM (JACM) %D 1984 %T Information Transfer in Distributed Computing with Applications to VLSI %A JaJa, Joseph F. %A Prasanna Kumar,V. K. %B Journal of the ACM (JACM) %V 31 %P 150 - 162 %8 1984/01// %@ 0004-5411 %G eng %U http://doi.acm.org/10.1145/2422.322421 %N 1 %R 10.1145/2422.322421 %0 Journal Article %J Journal of Computer-Based Instruction %D 1983 %T An empirical comparison of two PLATO text editors. %A Shneiderman, Ben %A Hill,R. %A Jacob,R. %A Mah,W. %X Two PLATO system text editors were evaluated with 14 nonprogrammers and 14 programmers who were either university staff or college students.
Half of each group learned the "plain" editor, which had 8 commands and 3 screens of HELP material, while the other half learned the "fancy" editor, which had 15 commands and 1 screen of HELP material. The fancy editor is a subset of the widely used TUTOR editor, but nonprogrammers preferred the plain editor. Faster learning, faster performance, fewer invocations of HELP material, and fewer requests for human assistance were characteristic of the plain editor. %B Journal of Computer-Based Instruction %V 10 %P 43 - 50 %8 1983/// %G eng %0 Journal Article %J Theoretical Computer Science %D 1982 %T On the relationship between the biconnectivity augmentation and traveling salesman problem %A Frederickson,G. N. %A JaJa, Joseph F. %B Theoretical Computer Science %V 19 %P 189 - 201 %8 1982/// %G eng %N 2