%0 Journal Article %J Research in Microbiology %D 2018 %T Occurrence of Vibrio cholerae in water reservoirs of Burkina Faso %A Kaboré, Saidou %A Cecchi, Philippe %A Mosser, Thomas %A Toubiana, Mylène %A Traoré, Oumar %A Ouattara, Aboubakar S. %A Traoré, Alfred S. %A Barro, Nicolas %A Rita R Colwell %A Monfort, Patrick %X Africa is currently an important region in which cholera epidemics occur. Little is known about the presence of Vibrio cholerae in freshwater bodies in Africa. There are ca. 1700 lakes and reservoirs in Burkina Faso, most of which have been built within recent decades to secure water resources. The purpose of this study was to investigate the presence of V. cholerae in the water of reservoirs, using the most-probable-number polymerase chain reaction. Results showed that V. cholerae could be detected in water samples collected from 14 of 39 sampled reservoirs. The concentrations varied from 0 MPN/l to more than 1100 MPN/l. Fifty strains of V. cholerae isolated on CHROMagar™ vibrio were identified as V. cholerae non-O1/non-O139, none of which carried the ctxA gene. A significant positive correlation was found between the presence of V. cholerae in the reservoirs and both alkaline pH and phytoplankton biomass. V. cholerae was present in significantly higher numbers in reservoirs of urban areas than in rural areas. Since V. cholerae non-O1/non-O139 has been shown to be a causative agent of endemic diarrheal outbreaks, their presence in Burkina Faso reservoirs suggests they may play a role in gastroenteritis in that country. %B Research in Microbiology %V 169 %P 1 - 10 %8 Jan-01-2018 %G eng %U https://www.sciencedirect.com/science/article/pii/S092325081730147X?via%3Dihub %N 1 %! Research in Microbiology %R 10.1016/j.resmic.2017.08.004 %0 Journal Article %J Frontiers in Public Health %D 2015 %T Occurrence and Diversity of Clinically Important Vibrio Species in the Aquatic Environment of Georgia %A Kokashvili, Tamar %A Whitehouse, Chris A. 
%A Tskhvediani, Ana %A Grim, Christopher J. %A Elbakidze, Tinatin %A Mitaishvili, Nino %A Janelidze, Nino %A Jaiani, Ekaterine %A Haley, Bradd J. %A Lashkhi, Nino %A Huq, Anwar %A Rita R Colwell %A Tediashvili, Marina %B Frontiers in Public Health %8 10/2015 %G eng %U http://journal.frontiersin.org/Article/10.3389/fpubh.2015.00232/ %! Front. Public Health %R 10.3389/fpubh.2015.00232 %0 Journal Article %J Proceedings of the National Academy of Sciences %D 2014 %T Occurrence in Mexico, 1998–2008, of Vibrio cholerae CTX + El Tor carrying an additional truncated CTX prophage %A Alam, Munirul %A Rashed, Shah M %A Mannan, Shahnewaj Bin %A Islam, Tarequl %A Lizarraga-Partida, Marcial L. %A Delgado, Gabriela %A Morales-Espinosa, Rosario %A Mendez, Jose Luis %A Navarro, Armando %A Watanabe, Haruo %A Ohnishi, Makoto %A Hasan, Nur A. %A Huq, Anwar %A Sack, R. Bradley %A Rita R Colwell %A Cravioto, Alejandro %B Proceedings of the National Academy of Sciences %V 111 %P 9917 - 9922 %8 Aug-07-2014 %G eng %U http://www.pnas.org/lookup/doi/10.1073/pnas.1323408111 %N 27 %! Proc Natl Acad Sci USA %R 10.1073/pnas.1323408111 %0 Journal Article %J Microbial Ecology %D 2013 %T Ocean Warming and Spread of Pathogenic Vibrios in the Aquatic Environment %A Vezzulli, Luigi %A Rita R Colwell %A Pruzzo, Carla %X Vibrios are among the most common bacteria that inhabit surface waters throughout the world and are responsible for a number of severe infections both in humans and animals. Several reports recently showed that human Vibrio illnesses are increasing worldwide including fatal acute diarrheal diseases, such as cholera, gastroenteritis, wound infections, and septicemia. Many scientists believe this increase may be associated with global warming and rise in sea surface temperature (SST), although not enough evidence is available to support a causal link between emergence of Vibrio infections and climate warming. 
The effect of increased SST in promoting spread of vibrios in coastal and brackish waters is considered a causal factor explaining this trend. Field and laboratory studies carried out over the past 40 years supported this hypothesis, clearly showing temperature promotes Vibrio growth and persistence in the aquatic environment. Most recently, a long-term retrospective microbiological study carried out in the coastal waters of the southern North Sea provided the first experimental evidence for a positive and significant relationship between SST and Vibrio occurrence over a multidecadal time scale. As a future challenge, macroecological studies of the effects of ocean warming on Vibrio persistence and spread in the aquatic environment over large spatial and temporal scales would conclusively support evidence acquired to date combined with studies of the impact of global warming on epidemiologically relevant variables, such as host susceptibility and exposure. Assessing a causal link between ongoing climate change and enhanced growth and spread of vibrios and related illness is expected to improve forecasting and mitigation of future outbreaks associated with these pathogens. %B Microbial Ecology %P 817 - 825 %8 Jan-05-2013 %G eng %U http://link.springer.com/10.1007/s00248-012-0163-2 %! Microb Ecol %R 10.1007/s00248-012-0163-2 %0 Conference Paper %B International Conference on Machine Learning %D 2013 %T Online Latent Dirichlet Allocation with Infinite Vocabulary %A Zhai,Ke %A Jordan Boyd-Graber %X Topic models based on latent Dirichlet allocation (LDA) assume a predefined vocabulary. This is reasonable in batch settings but not for streaming and online settings. To address this lacuna, we extend LDA by drawing topics from a Dirichlet process whose base distribution is a distribution over all strings rather than from a finite Dirichlet. 
We develop inference using online variational inference and—to only consider a finite number of words for each topic—propose heuristics to dynamically order, expand, and contract the set of words we consider in our vocabulary. We show our model can successfully incorporate new words and that it performs better than topic models with finite vocabularies in evaluations of topic quality and classification performance. %B International Conference on Machine Learning %9 Conference Paper %0 Journal Article %J arXiv:1211.3722 [cs] %D 2013 %T Optimizing Abstract Abstract Machines %A Johnson, J. Ian %A Labich, Nicholas %A Might, Matthew %A David Van Horn %K Computer Science - Programming Languages %K F.3.2 %X The technique of abstracting abstract machines (AAM) provides a systematic approach for deriving computable approximations of evaluators that are easily proved sound. This article contributes a complementary step-by-step process for subsequently going from a naive analyzer derived under the AAM approach, to an efficient and correct implementation. The end result of the process is a two to three order-of-magnitude improvement over the systematically derived analyzer, making it competitive with hand-optimized implementations that compute fundamentally less precise results. %B arXiv:1211.3722 [cs] %8 2013/// %G eng %U http://arxiv.org/abs/1211.3722 %0 Journal Article %J University Journal of Zoology, Rajshahi University %D 2012 %T Occurrence of protozoans & their limnological relationships in some ponds of Mathbaria, Bangladesh %A Mozumder,P. K. %A Banu,M. A. %A Naser,M. N. %A Ali,M. S. %A Alam,M. %A Sack,R. B. %A Rita R Colwell %A Huq,A. %B University Journal of Zoology, Rajshahi University %V 29 %P 69 - 71 %8 2012/// %@ 1023-6104 %G eng %U http://journals.sfu.ca/bd/index.php/UJZRU %N 1 %0 Journal Article %J Parallel and Distributed Systems, IEEE Transactions on %D 2012 %T An Optimized High-Throughput Strategy for Constructing Inverted Files %A Wei,Z. %A JaJa, Joseph F. 
%X Current high-throughput algorithms for constructing inverted files all follow the MapReduce framework, which presents a high-level programming model that hides the complexities of parallel programming. In this paper, we take an alternative approach and develop a novel strategy that exploits the current and emerging architectures of multicore processors. Our algorithm is based on a high-throughput pipelined strategy that produces parallel parsed streams, which are immediately consumed at the same rate by parallel indexers. We have performed extensive tests of our algorithm on a cluster of 32 nodes, and were able to achieve a throughput close to the peak throughput of the I/O system: a throughput of 280 MB/s on a single node and a throughput that ranges between 5.15 GB/s (1 Gb/s Ethernet interconnect) and 6.12 GB/s (10 Gb/s InfiniBand interconnect) on a cluster with 32 nodes for processing the ClueWeb09 dataset. Such performance represents a substantial gain over the best known MapReduce algorithms even when comparing the single node performance of our algorithm to MapReduce algorithms running on large clusters. Our results shed light on the extent of the performance cost that may be incurred by using the simpler, higher-level MapReduce programming model for large scale applications. %B Parallel and Distributed Systems, IEEE Transactions on %V PP %P 1 - 1 %8 2012/// %@ 1045-9219 %G eng %N 99 %R 10.1109/TPDS.2012.43 %0 Conference Paper %B Proceedings of the 2nd ACM SIGHIT International Health Informatics Symposium %D 2012 %T Optimizing epidemic protection for socially essential workers %A Barrett,Chris %A Beckman,Richard %A Bisset,Keith %A Chen,Jiangzhuo %A DuBois,Thomas %A Eubank,Stephen %A Kumar,V. S. Anil %A Lewis,Bryan %A Marathe,Madhav V. %A Srinivasan, Aravind %A Stretz,Paula E. %K epidemiology %K OPTIMIZATION %K public health informatics %X Public-health policy makers have many tools to mitigate an epidemic's effects. 
Most related research focuses on the direct effects on those infected (in terms of health, life, or productivity). Interventions including treatment, prophylaxis, quarantine, and social distancing are well studied in this context. These interventions do not address indirect effects due to the loss of critical services and infrastructures when too many of those responsible for their day-to-day operations fall ill. We examine, both analytically and through simulation, the protection of such essential subpopulations by sequestering them, effectively isolating them into groups during an epidemic. We develop a framework for studying the benefits of sequestering and heuristics for when to sequester. We also prove a key property of sequestering placement which helps partition the subpopulations optimally. Thus we provide a first step toward determining how to allocate resources between the direct protection of a population, and protection of those responsible for critical services. %B Proceedings of the 2nd ACM SIGHIT International Health Informatics Symposium %S IHI '12 %I ACM %C New York, NY, USA %P 31 - 40 %8 2012/// %@ 978-1-4503-0781-9 %G eng %U http://doi.acm.org/10.1145/2110363.2110371 %R 10.1145/2110363.2110371 %0 Journal Article %J Knowledge and Data Engineering, IEEE Transactions on %D 2012 %T Organizing User Search Histories %A Hwang,Heasoo %A Lauw,H. W %A Getoor, Lise %A Ntoulas,A. %K alteration;query %K analysis;user %K engines;search %K engines;text %K goal;textual %K group %K historical %K History %K identification;query %K information %K interfaces; %K log;sessionization;task-oriented %K organization;query %K processing;search %K query %K query;user %K quest;query %K ranking;search %K search %K search;long-term %K similarity;time %K suggestion;result %K threshold;user %K Web;collaborative %X Users are increasingly pursuing complex task-oriented goals on the web, such as making travel arrangements, managing finances, or planning purchases. 
To this end, they usually break down the tasks into a few codependent steps and issue multiple queries around these steps repeatedly over long periods of time. To better support users in their long-term information quests on the web, search engines keep track of their queries and clicks while searching online. In this paper, we study the problem of organizing a user's historical queries into groups in a dynamic and automated fashion. Automatically identifying query groups is helpful for a number of different search engine components and applications, such as query suggestions, result ranking, query alterations, sessionization, and collaborative search. In our approach, we go beyond approaches that rely on textual similarity or time thresholds, and we propose a more robust approach that leverages search query logs. We experimentally study the performance of different techniques, and showcase their potential, especially when combined together. %B Knowledge and Data Engineering, IEEE Transactions on %V 24 %P 912 - 925 %8 2012/05// %@ 1041-4347 %G eng %N 5 %R 10.1109/TKDE.2010.251 %0 Journal Article %J Multimedia Analysis, Processing and Communications %D 2011 %T Object Detection and Tracking for Intelligent Video Surveillance %A Kim,K. %A Davis, Larry S. %X Appearing as an important task in computer vision, pedestrian detection has been widely investigated in the recent years. To design a robust detector, we propose a feature descriptor called Local Response Context (LRC). This descriptor captures discriminative information regarding the surrounding of the person’s location by sampling the response map obtained by a generic sliding window detector. A partial least squares regression model using LRC descriptors is learned and employed as a second classification stage (after the execution of the generic detector to obtain the response map). 
Experiments based on the ETHZ pedestrian dataset show that the proposed approach significantly improves the results achieved by the generic detector alone and is comparable to the state-of-the-art methods. %B Multimedia Analysis, Processing and Communications %P 265 - 288 %8 2011/// %G eng %0 Book Section %B Advances in Cryptology – ASIACRYPT 2011 %D 2011 %T Oblivious RAM with O((logN)^3) Worst-Case Cost %A Elaine Shi %A Chan, T. %A Stefanov, Emil %A Li, Mingfei %E Lee, Dong %E Wang, Xiaoyun %K Computer science %X Oblivious RAM is a useful primitive that allows a client to hide its data access patterns from an untrusted server in storage outsourcing applications. Until recently, most prior works on Oblivious RAM aimed to optimize its amortized cost, while suffering from linear or even higher worst-case cost. Such poor worst-case behavior renders these schemes impractical in realistic settings, since a data access request can occasionally be blocked waiting for an unreasonably large number of operations to complete. This paper proposes novel Oblivious RAM constructions that achieve poly-logarithmic worst-case cost, while consuming constant client-side storage. To achieve the desired worst-case asymptotic performance, we propose a novel technique in which we organize the O-RAM storage into a binary tree over data buckets, while moving data blocks obliviously along tree edges. %B Advances in Cryptology – ASIACRYPT 2011 %S Lecture Notes in Computer Science %I Springer Berlin / Heidelberg %V 7073 %P 197 - 214 %8 2011 %@ 978-3-642-25384-3 %G eng %U http://www.springerlink.com/content/2214q8013376rj37/abstract/ %0 Journal Article %J EcoHealth %D 2011 %T Occurrence of Vibrio cholerae in Municipal and Natural Waters and Incidence of Cholera in Azerbaijan %A Gurbanov, Shair %A Akhmadov, Rashid %A Shamkhalova, Gulnara %A Akhmadova, Sevinj %A Haley, Bradd J.
%A Rita R Colwell %A Huq, Anwar %X Vibrio cholerae, the causative agent of the waterborne disease cholera, is an autochthonous member of the aquatic environment, and cholera is predominantly reported from developing countries. Technical reports and proceedings were reviewed to determine the relationship between occurrence of V. cholerae in natural waters, including sources of municipal water, and cases of cholera in Azerbaijan. Water samples collected from different environmental sources from 1970 to 1998 were tested for V. cholerae and 0.73% (864/117,893) were positive. The results showed that in April of each year, when the air temperature rose by approximately 5°C, V. cholerae could be isolated. An impact on cholera case numbers was recorded 6–8 weeks after each such increase in air temperature. The incidence of cholera peaked when the air temperature reached >25°C during the month of September. It is concluded that a distinct seasonality in cholera incidence exists in Azerbaijan, with increased occurrence during warmer months. %B EcoHealth %P 468 - 477 %8 Jan-12-2011 %G eng %U http://link.springer.com/10.1007/s10393-012-0756-8 %N 4 %! EcoHealth %R 10.1007/s10393-012-0756-8 %0 Conference Paper %B Privacy, security, risk and trust (passat), 2011 ieee third international conference on and 2011 ieee third international conference on social computing (socialcom) %D 2011 %T Odd Leaf Out: Improving Visual Recognition with Games %A Hansen,D. L %A Jacobs, David W. %A Lewis,D. %A Biswas,A. %A Preece,J. %A Rotman,D. %A Stevens,E. %K algorithm;educational %K classification;object %K computational %K computing;botany;computer %K datasets;misclassification %K errors;scientific %K feedback;labeled %K game;human %K games;computer %K games;image %K image %K leaf %K Odd %K Out;complex %K recognition; %K recognition;biology %K tags;visual %K tasks;computer %K tasks;textual %K VISION %X A growing number of projects are solving complex computational and scientific tasks by soliciting human feedback through games. 
Many games with a purpose focus on generating textual tags for images. In contrast, we introduce a new game, Odd Leaf Out, which provides players with an enjoyable and educational game that serves the purpose of identifying misclassification errors in a large database of labeled leaf images. The game uses a novel mechanism to solicit useful information from players' incorrect answers. A study of 165 players showed that game data can be used to identify mislabeled leaves much more quickly than would have been possible using a computer vision algorithm alone. Domain novices and experts were equally good at identifying mislabeled images, although domain experts enjoyed the game more. We discuss the successes and challenges of this new game, which can be applied to other domains with labeled image datasets. %B Privacy, security, risk and trust (passat), 2011 ieee third international conference on and 2011 ieee third international conference on social computing (socialcom) %P 87 - 94 %8 2011/10// %G eng %R 10.1109/PASSAT/SocialCom.2011.225 %0 Conference Paper %B International Conference on Document Analysis and Recognition %D 2011 %T Offline Writer Identification using K-Adjacent Segments %A Jain,Rajiv %A David Doermann %X This paper presents a method for performing offline writer identification by using K-adjacent segment (KAS) features in a bag-of-features framework to model a user’s handwriting. This approach achieves a top 1 recognition rate of 93% on the benchmark IAMEnglish handwriting dataset, which outperforms current state of the art features. Results further demonstrate that identification performance improves as the number of training samples increases, and additionally, that the performance of the KAS features extends to Arabic handwriting found in the MADCAT dataset. 
%B International Conference on Document Analysis and Recognition %P 769 - 773 %8 2011/// %G eng %0 Book Section %B Advances in Cryptology – CRYPTO 2011 %D 2011 %T Optimal Verification of Operations on Dynamic Sets %A Charalampos Papamanthou %A Tamassia, Roberto %A Triandopoulos, Nikos %E Rogaway, Phillip %K Computer Communication Networks %K computers and society %K Data Encryption %K Discrete Mathematics in Computer Science %K Management of Computing and Information Systems %K Systems and Data Security %X We study the design of protocols for set-operation verification, namely the problem of cryptographically checking the correctness of outsourced set operations performed by an untrusted server over a dynamic collection of sets that are owned (and updated) by a trusted source. We present new authenticated data structures that allow any entity to publicly verify a proof attesting the correctness of primitive set operations such as intersection, union, subset and set difference. Based on a novel extension of the security properties of bilinear-map accumulators as well as on a primitive called accumulation tree, our protocols achieve optimal verification and proof complexity (i.e., only proportional to the size of the query parameters and the answer), as well as optimal update complexity (i.e., constant), while incurring no extra asymptotic space overhead. The proof construction is also efficient, adding a logarithmic overhead to the computation of the answer of a set-operation query. In contrast, existing schemes entail high communication and verification costs or high storage costs. Applications of interest include efficient verification of keyword search and database queries. The security of our protocols is based on the bilinear q-strong Diffie-Hellman assumption. 
%B Advances in Cryptology – CRYPTO 2011 %S Lecture Notes in Computer Science %I Springer Berlin Heidelberg %P 91 - 110 %8 2011/01/01/ %@ 978-3-642-22791-2, 978-3-642-22792-9 %G eng %U http://link.springer.com/chapter/10.1007/978-3-642-22792-9_6 %0 Conference Paper %B FIRE %D 2011 %T Overview of the FIRE 2011 RISOTTask %A Garain,Utpal %A Paik,Jiaul %A Pal,Tamaltaru %A Majumder,Prasenjit %A David Doermann %A Oard, Douglas %X RISOT was a pilot task in FIRE 2011 which focused on the retrieval of automatically recognized text from machine printed sources. The collection used for search was a subset of the FIRE 2008 and 2010 Bengali test collections that contained 92 topics and 62,825 documents. Two teams participated, submitting a total of 11 monolingual runs. %B FIRE %8 2011/// %G eng %0 Journal Article %J Journal of Signal Processing Systems %D 2011 %T Overview of the MPEG Reconfigurable Video Coding Framework %A Bhattacharyya, Shuvra S. %A Eker, Johan %A Janneck, Jörn W. %A Lucarz, Christophe %A Mattavelli, Marco %A Raulet, Mickaël %K CAL actor language %K Circuits and Systems %K Code synthesis %K Computer Imaging, Vision, Pattern Recognition and Graphics %K Dataflow programming %K Electrical Engineering %K Image Processing and Computer Vision %K pattern recognition %K Reconfigurable Video Coding %K Signal, Image and Speech Processing %X Video coding technology in the last 20 years has evolved producing a variety of different and complex algorithms and coding standards. So far the specification of such standards, and of the algorithms that build them, has been done case by case providing monolithic textual and reference software specifications in different forms and programming languages. However, very little attention has been given to provide a specification formalism that explicitly presents common components between standards, and the incremental modifications of such monolithic standards. 
The MPEG Reconfigurable Video Coding (RVC) framework is a new ISO standard currently under its final stage of standardization, aiming at providing video codec specifications at the level of library components instead of monolithic algorithms. The new concept is to be able to specify a decoder of an existing standard or a completely new configuration that may better satisfy application-specific constraints by selecting standard components from a library of standard coding algorithms. The possibility of dynamic configuration and reconfiguration of codecs also requires new methodologies and new tools for describing the new bitstream syntaxes and the parsers of such new codecs. The RVC framework is based on the usage of a new actor/dataflow oriented language called CAL for the specification of the standard library and instantiation of the RVC decoder model. This language has been specifically designed for modeling complex signal processing systems. CAL dataflow models expose the intrinsic concurrency of the algorithms by employing the notions of actor programming and dataflow. The paper gives an overview of the concepts and technologies building the standard RVC framework and the non-standard tools supporting the RVC model from the instantiation and simulation of the CAL model to software and/or hardware code synthesis. %B Journal of Signal Processing Systems %V 63 %P 251 - 263 %8 2011 %@ 1939-8018, 1939-8115 %G eng %U http://link.springer.com/article/10.1007/s11265-009-0399-3 %N 2 %! J Sign Process Syst %0 Patent %D 2010 %T Object Classification Using Taxonomies %A Tsaparas,Panayiotis %A Papadimitriou,Panagiotis %A Fuxman,Ariel D. %A Getoor, Lise %A Agrawal,Rakesh %E Microsoft Corporation %X As provided herein, objects from a source catalog, such as a provider's catalog, can be added to a target catalog, such as an enterprise master catalog, in a scalable manner utilizing catalog taxonomies. 
A baseline classifier determines probabilities for source objects to target catalog classes. Source objects can be assigned to those classes with probabilities that meet a desired threshold and meet a desired rate. A classification cost for target classes can be determined for respective unassigned source objects, which can comprise determining an assignment cost and separation cost for the source objects for respective desired target classes. The separation and assignment costs can be combined to determine the classification cost, and the unassigned source objects can be assigned to those classes having a desired classification cost. %V 12/414,065 %8 2010/07/22/ %G eng %U http://www.google.com/patents?id=oXDSAAAAEBAJ %0 Conference Paper %B Digital Image Processing and Analysis %D 2010 %T Object Dependent Manifold Priors for Image Deconvolution %A Ni,Jie %A Turaga,Pavan %A M.Patel,Vishal %A Chellapa, Rama %K Deconvolution %K IMAGE PROCESSING %K Inverse problems %X In this paper we propose a manifold based deconvolution algorithm by estimating a manifold from a set of natural images that exploits the availability of the sample class data for regularizing the deblurring problem. %B Digital Image Processing and Analysis %S OSA Technical Digest (CD) %I Optical Society of America %P DMC2 - DMC2 %8 2010/06/07/ %G eng %U http://www.opticsinfobase.org/abstract.cfm?URI=DIPA-2010-DMC2 %0 Conference Paper %B Proceedings of the 2010 ACM-IEEE International Symposium on Empirical Software Engineering and Measurement %D 2010 %T Obtaining valid safety data for software safety measurement and process improvement %A Basili, Victor R. %A Zelkowitz, Marvin V %A Layman,Lucas %A Dangle,Kathleen %A Diep,Madeline %K case study %K NASA %K risk analysis %K safety metrics %X We report on a preliminary case study to examine software safety risk in the early design phase of the NASA Constellation spaceflight program. 
Our goal is to provide NASA quality assurance managers with information regarding the ongoing state of software safety across the program. We examined 154 hazard reports created during the preliminary design phase of three major flight hardware systems within the Constellation program. Our purpose was two-fold: 1) to quantify the relative importance of software with respect to system safety; and 2) to identify potential risks due to incorrect application of the safety process, deficiencies in the safety process, or the lack of a defined process. One early outcome of this work was to show that there are structural deficiencies in collecting valid safety data that make software safety different from hardware safety. In our conclusions we present some of these deficiencies. %B Proceedings of the 2010 ACM-IEEE International Symposium on Empirical Software Engineering and Measurement %S ESEM '10 %I ACM %C New York, NY, USA %P 46:1 - 46:4 %8 2010/// %@ 978-1-4503-0039-1 %G eng %U http://doi.acm.org/10.1145/1852786.1852846 %R 10.1145/1852786.1852846 %0 Journal Article %J OMICS: A Journal of Integrative Biology %D 2010 %T Occurrence of the Vibrio cholerae seventh pandemic VSP-I island and a new variant %A Grim,Christopher J. %A Choi,Jinna %A Jongsik Chun %A Jeon,Yoon-Seong %A Taviani,Elisa %A Hasan,Nur A. %A Haley,Bradd %A Huq,Anwar %A Rita R Colwell %B OMICS: A Journal of Integrative Biology %V 14 %P 1 - 7 %8 2010/02// %@ 1536-2310, 1557-8100 %G eng %U http://online.liebertpub.com/doi/abs/10.1089/omi.2009.0087 %N 1 %R 10.1089/omi.2009.0087 %0 Conference Paper %D 2010 %T One experience collecting sensitive mobile data %A Niu, Y. %A Elaine Shi %A Chow, R. %A Golle, P. %A Jakobsson, M. %X We report on our efforts to collect behavioral data based on activities recorded by phones. We recruited Android device owners and offered entry into a raffle for participants. 
Our application was distributed from the Android Market, and its placement there unexpectedly helped us find participants from casual users browsing for free applications. We collected data from 267 total participants who gave us varying amounts of data. %8 2010 %G eng %U http://www2.parc.com/csl/members/eshi/docs/users.pdf %0 Journal Article %J IEEE Transactions on Pattern Analysis and Machine Intelligence %D 2010 %T Online Empirical Evaluation of Tracking Algorithms %A Wu,Hao %A Sankaranarayanan,A. C %A Chellapa, Rama %K Back %K Biomedical imaging %K Computer vision %K Filtering %K formal model validation techniques %K formal verification %K ground truth %K Kanade Lucas Tomasi feature tracker %K Karhunen-Loeve transforms %K lighting %K Markov processes %K mean shift tracker %K model validation. %K online empirical evaluation %K particle filtering (numerical methods) %K Particle filters %K Particle tracking %K performance evaluation %K receiver operating characteristic curves %K Robustness %K SNR %K Statistics %K Surveillance %K time reversed Markov chain %K tracking %K tracking algorithms %K visual tracking %X Evaluation of tracking algorithms in the absence of ground truth is a challenging problem. There exist a variety of approaches for this problem, ranging from formal model validation techniques to heuristics that look for mismatches between track properties and the observed data. However, few of these methods scale up to the task of visual tracking, where the models are usually nonlinear and complex and typically lie in a high-dimensional space. Further, scenarios that cause track failures and/or poor tracking performance are also quite diverse for the visual tracking problem. In this paper, we propose an online performance evaluation strategy for tracking systems based on particle filters using a time-reversed Markov chain. 
The key intuition of our proposed methodology relies on the time-reversible nature of physical motion exhibited by most objects, which in turn should be possessed by a good tracker. In the presence of tracking failures due to occlusion, low SNR, or modeling errors, this reversible nature of the tracker is violated. We use this property for detection of track failures. To evaluate the performance of the tracker at time instant t, we use the posterior of the tracking algorithm to initialize a time-reversed Markov chain. We compute the posterior density of track parameters at the starting time t = 0 by filtering back in time to the initial time instant. The distance between the posterior density of the time-reversed chain (at t = 0) and the prior density used to initialize the tracking algorithm forms the decision statistic for evaluation. It is observed that when the data are generated by the underlying models, the decision statistic takes a low value. We provide a thorough experimental analysis of the evaluation methodology. Specifically, we demonstrate the effectiveness of our approach for tackling common challenges such as occlusion, pose, and illumination changes and provide the Receiver Operating Characteristic (ROC) curves. Finally, we also show the applicability of the core ideas of the paper to other tracking algorithms such as the Kanade-Lucas-Tomasi (KLT) feature tracker and the mean-shift tracker. %B IEEE Transactions on Pattern Analysis and Machine Intelligence %V 32 %P 1443 - 1458 %8 2010/08// %@ 0162-8828 %G eng %N 8 %R 10.1109/TPAMI.2009.135 %0 Conference Paper %B Proceedings of the 18th SIGSPATIAL International Conference on Advances in Geographic Information Systems %D 2010 %T Ontuition: intuitive data exploration via ontology navigation %A Adelfio,Marco D. %A Lieberman,Michael D. %A Samet, Hanan %A Firozvi,Kashif A. %K MAPPING %K ontology %K ontuition %K spatio-textual %X Ontuition, a system for mapping ontologies, is presented. 
Transforming data to a usable format for Ontuition involves recognizing and resolving data values corresponding to concepts in multiple ontological domains. In particular, for datasets with a geographic component an attempt is made to identify and extract enough spatio-textual data that specific lat/long values can be assigned to dataset entries. Next, a gazetteer is used to transform the textually-specified locations into lat/long values that can be displayed on a map. Non-spatial ontological concepts are also discovered. This methodology is applied to the National Library of Medicine's very popular clinical trials website (http://clinicaltrials.gov/) whose users are generally interested in locating trials near where they live. The trials are specified using XML files. The location data is extracted and coupled with a disease ontology to enable general queries on the data with the result being of use to a very large group of people. The goal is to do this automatically for such ontology datasets with a locational component. %B Proceedings of the 18th SIGSPATIAL International Conference on Advances in Geographic Information Systems %S GIS '10 %I ACM %C New York, NY, USA %P 540 - 541 %8 2010/// %@ 978-1-4503-0428-3 %G eng %U http://doi.acm.org/10.1145/1869790.1869887 %R 10.1145/1869790.1869887 %0 Book Section %B Pairing-Based Cryptography - Pairing 2010 %D 2010 %T Optimal Authenticated Data Structures with Multilinear Forms %A Charalampos Papamanthou %A Tamassia, Roberto %A Triandopoulos, Nikos %E Joye, Marc %E Miyaji, Atsuko %E Otsuka, Akira %K Algorithm Analysis and Problem Complexity %K authenticated dictionary %K Coding and Information Theory %K Computer Communication Networks %K Data Encryption %K Discrete Mathematics in Computer Science %K multilinear forms %K Systems and Data Security %X Cloud computing and cloud storage are becoming increasingly prevalent. In this paradigm, clients outsource their data and computations to third-party service providers. 
Data integrity in the cloud therefore becomes an important factor for the functionality of these web services. Authenticated data structures, implemented with various cryptographic primitives, have been widely studied as a means of providing efficient solutions to data integrity problems (e.g., Merkle trees). In this paper, we introduce a new authenticated dictionary data structure that employs multilinear forms, a cryptographic primitive proposed by Silverberg and Boneh in 2003 [10], the construction of which, however, remains an open problem to date. Our authenticated dictionary is optimal, that is, it does not add any extra asymptotic cost to the plain dictionary data structure, yielding proofs of constant size, i.e., asymptotically equal to the size of the answer, while maintaining other relevant complexities logarithmic. Instead, solutions based on cryptographic hashing (e.g., Merkle trees) require proofs of logarithmic size [40]. Because multilinear forms are not known to exist yet, our result can be viewed from a different angle: if one could prove that optimal authenticated dictionaries cannot exist in the computational model, irrespective of cryptographic primitives, then our solution would imply that cryptographically interesting multilinear form generators cannot exist as well (i.e., it can be viewed as a reduction). Thus, we provide an alternative avenue towards proving the nonexistence of multilinear form generators in the context of general lower bounds for authenticated data structures [40] and for memory checking [18], a model similar to the authenticated data structures model. 
%B Pairing-Based Cryptography - Pairing 2010 %S Lecture Notes in Computer Science %I Springer Berlin Heidelberg %P 246 - 264 %8 2010/01/01/ %@ 978-3-642-17454-4, 978-3-642-17455-1 %G eng %U http://link.springer.com/chapter/10.1007/978-3-642-17455-1_16 %0 Conference Paper %B Parallel Distributed Processing (IPDPS), 2010 IEEE International Symposium on %D 2010 %T Optimization of linked list prefix computations on multithreaded GPUs using CUDA %A Wei, Zheng %A JaJa, Joseph F. %K 200 %K accesses;fine %K accesses;linked %K Bandwidth %K C1060;cell %K computations;extremely %K computations;multithreaded %K CUDA;MTA;NVIDIA %K GeForce %K GPUs;optimization;prefix %K grain %K high %K list %K memory %K Parallel %K prefix %K process;coprocessors;multi-threading; %K processor;data %K series;Tesla %K sums;randomization %X We present a number of optimization techniques to compute prefix sums on linked lists and implement them on multithreaded GPUs using CUDA. Prefix computations on linked structures involve in general highly irregular fine grain memory accesses that are typical of many computations on linked lists, trees, and graphs. While the current generation of GPUs provides substantial computational power and extremely high bandwidth memory accesses, they may appear at first to be primarily geared toward streamed, highly data parallel computations. In this paper, we introduce an optimized multithreaded GPU algorithm for prefix computations through a randomization process that reduces the problem to a large number of fine-grain computations. We map these fine-grain computations onto multithreaded GPUs in such a way that the processing cost per element is shown to be close to the best possible. Our experimental results show scalability for list sizes ranging from 1M nodes to 256M nodes, and significantly improve on the recently published parallel implementations of list ranking, including implementations on the Cell Processor, the MTA-8, and the NVIDIA GeForce 200 series. 
They also compare favorably to the performance of the best known CUDA algorithm for the scan operation on the Tesla C1060. %B Parallel Distributed Processing (IPDPS), 2010 IEEE International Symposium on %P 1 - 8 %8 2010/04// %G eng %R 10.1109/IPDPS.2010.5470455 %0 Conference Paper %B Proceedings of the 2010 ACM SIGCOMM workshop on Home networks %D 2010 %T Outsourcing home network security %A Feamster, Nick %K home networking %K NETWORK SECURITY %K programmable networking %X The growth of home and small enterprise networks brings with it a large number of devices and networks that are either managed poorly or not at all. Hosts on these networks may become compromised and become sources of spam, denial-of-service traffic, or the site of a scam or phishing attack site. Although a typical user now knows how to apply software updates and run anti-virus software, these techniques still require user vigilance, and they offer no recourse when a machine ultimately becomes compromised. Rather than having individual networks managed independently, we propose to outsource the management and operation of these networks to a third party that has both operations expertise and a broader view of network activity. Our approach harnesses two trends: (1) the advent of programmable network switches, which offer flexibility and the possibility for remote management; and (2) the increasing application of distributed network monitoring and inference algorithms to network security problems (an appealing technique because of its ability to reveal coordinated behavior that may represent an attack). 
%B Proceedings of the 2010 ACM SIGCOMM workshop on Home networks %S HomeNets '10 %I ACM %C New York, NY, USA %P 37 - 42 %8 2010/// %@ 978-1-4503-0198-5 %G eng %U http://doi.acm.org/10.1145/1851307.1851317 %R 10.1145/1851307.1851317 %0 Conference Paper %B Foundations of Computer Science (FOCS), 2010 51st Annual IEEE Symposium on %D 2010 %T Overcoming the Hole in the Bucket: Public-Key Cryptography Resilient to Continual Memory Leakage %A Brakerski,Z. %A Kalai,Y.T. %A Katz, Jonathan %A Vaikuntanathan,V. %K continual %K cryptography; %K cryptography;public-key %K encryption %K key %K key;digital %K leakage;cryptographic %K memory %K schemes;digital %K schemes;public-key %K schemes;secret %K signatures;identity-based %K signatures;public %X In recent years, there has been a major effort to design cryptographic schemes that remain secure even when arbitrary information about the secret key is leaked (e.g., via side-channel attacks). We explore the possibility of achieving security under continual leakage from the entire secret key by designing schemes in which the secret key is updated over time. In this model, we construct public-key encryption schemes, digital signatures, and identity-based encryption schemes that remain secure even if an attacker can leak a constant fraction of the secret memory (including the secret key) in each time period between key updates. We also consider attackers who may probe the secret memory during the updates themselves. We stress that we allow unrestricted leakage, without the assumption that “only computation leaks information”. Prior to this work, constructions of public-key encryption schemes secure under continual leakage were not known even under this assumption. 
%B Foundations of Computer Science (FOCS), 2010 51st Annual IEEE Symposium on %P 501 - 510 %8 2010/10// %G eng %R 10.1109/FOCS.2010.55 %0 Journal Article %J The Journal of Research of the National Institute of Standards and Technology %D 2010 %T Overlap-based cell tracker %A Chalfoun, J %A Cardone, Antonio %A Dima, A.A. %A Allen, D.P. %A Halter, M.W. %X order to facilitate the extraction of quantitative data from live cell image sets, automated image analysis methods are needed. This paper presents an introduction to the general principle of an overlap cell tracking software developed by the National Institute of Standards and Technology (NIST). This cell tracker has the ability to track cells across a set of time lapse images acquired at high rates based on the amount of overlap between cellular regions in consecutive frames. It is designed to be highly flexible, requires little user parameterization, and has a fast execution time. %B The Journal of Research of the National Institute of Standards and Technology %V 115 %8 11/2010 %N 6 %& 477 %0 Journal Article %J Guide to Reliable Internet Services and Applications %D 2010 %T Overlay Networking and Resiliency %A Bhattacharjee, Bobby %A Rabinovich,M. %B Guide to Reliable Internet Services and Applications %P 221 - 251 %8 2010/// %G eng %0 Conference Paper %B Image Processing (ICIP), 2009 16th IEEE International Conference on %D 2009 %T Object detection via boosted deformable features %A Hussein,M. %A Porikli, F. %A Davis, Larry S. %K boosted %K detection;object %K detection;statistics; %K detection;visual %K ensembles;deformable %K evidence;feature %K extraction;object %K features;human %X It is a common practice to model an object for detection tasks as a boosted ensemble of many models built on features of the object. In this context, features are defined as subregions with fixed relative locations and extents with respect to the object's image window. We introduce using deformable features with boosted ensembles. 
A deformable feature adapts its location depending on the visual evidence in order to match the corresponding physical feature. Therefore, deformable features can better handle deformable objects. We empirically show that boosted ensembles of deformable features perform significantly better than boosted ensembles of fixed features for human detection. %B Image Processing (ICIP), 2009 16th IEEE International Conference on %P 1445 - 1448 %8 2009/11// %G eng %R 10.1109/ICIP.2009.5414561 %0 Journal Article %J Pattern Analysis and Machine Intelligence, IEEE Transactions on %D 2009 %T Observing Human-Object Interactions: Using Spatial and Functional Compatibility for Recognition %A Gupta,A. %A Kembhavi,A. %A Davis, Larry S. %K Automated;Recognition (Psychology);Video Recording; %K Bayesian approach;functional compatibility;human perception;human-object interactions;objects recognition;psychological studies;spatial compatibility;Bayes methods;behavioural sciences;human factors;image recognition;motion estimation;object recognition;A %K Biological;Movement;Pattern Recognition %K Computer-Assisted;Models %X Interpretation of images and videos containing humans interacting with different objects is a daunting task. It involves understanding scene or event, analyzing human movements, recognizing manipulable objects, and observing the effect of the human movement on those objects. While each of these perceptual tasks can be conducted independently, recognition rate improves when interactions between them are considered. Motivated by psychological studies of human perception, we present a Bayesian approach which integrates various perceptual tasks involved in understanding human-object interactions. Previous approaches to object and action recognition rely on static shape or appearance feature matching and motion analysis, respectively. 
Our approach goes beyond these traditional approaches and applies spatial and functional constraints on each of the perceptual elements for coherent semantic interpretation. Such constraints allow us to recognize objects and actions when the appearances are not discriminative enough. We also demonstrate the use of such constraints in recognition of actions from static images without using any motion information. %B Pattern Analysis and Machine Intelligence, IEEE Transactions on %V 31 %P 1775 - 1789 %8 2009/10// %@ 0162-8828 %G eng %N 10 %R 10.1109/TPAMI.2009.83 %0 Journal Article %J IEEETransactions on Pattern Analysis and Machine Intelligence %D 2009 %T Off-Line Loop Investigation for Handwriting Analysis %A Steinherz,T. %A David Doermann %A Rivlin,E. %A Intrator,N. %B IEEETransactions on Pattern Analysis and Machine Intelligence %V 31 %P 193 - 209 %8 2009/02// %G eng %N 2 %0 Conference Paper %B Proceedings of the Third International Workshop on Data Mining and Audience Intelligence for Advertising %D 2009 %T Online allocation of display advertisements subject to advanced sales contracts %A Alaei,Saeed %A Arcaute,Esteban %A Khuller, Samir %A Ma,Wenjing %A Malekian,Azarakhsh %A Tomlin,John %K display advertising %K modeling %K online algorithms %K OPTIMIZATION %K simulation %X In this paper we propose a utility model that accounts for both sales and branding advertisers. We first study the computational complexity of optimization problems related to both online and offline allocation of display advertisements. Next, we focus on a particular instance of the online allocation problem, and design a simple online algorithm with provable approximation guarantees. Our algorithm is near optimal as is shown by a matching lower bound. Finally, we report on experiments to establish actual case behavior on some real datasets, with encouraging results. 
%B Proceedings of the Third International Workshop on Data Mining and Audience Intelligence for Advertising %S ADKDD '09 %I ACM %C New York, NY, USA %P 69 - 77 %8 2009/// %@ 978-1-60558-671-7 %G eng %U http://doi.acm.org/10.1145/1592748.1592758 %R 10.1145/1592748.1592758 %0 Journal Article %J Symposium on Click Modular Router %D 2009 %T An OpenFlow switch element for Click %A Mundada,Y. %A Sherwood,R. %A Feamster, Nick %B Symposium on Click Modular Router %8 2009/// %G eng %0 Conference Paper %B Proceedings of the 2009 Workshop on Graph-based Methods for Natural Language Processing %D 2009 %T Opinion graphs for polarity and discourse classification %A Somasundaran,Swapna %A Namata,Galileo %A Getoor, Lise %A Wiebe,Janyce %X This work shows how to construct discourse-level opinion graphs to perform a joint interpretation of opinions and discourse relations. Specifically, our opinion graphs enable us to factor in discourse information for polarity classification, and polarity information for discourse-link classification. This inter-dependent framework can be used to augment and improve the performance of local polarity and discourse-link classifiers. 
%B Proceedings of the 2009 Workshop on Graph-based Methods for Natural Language Processing %S TextGraphs-4 %I Association for Computational Linguistics %C Stroudsburg, PA, USA %P 66 - 74 %8 2009/// %@ 978-1-932432-54-1 %G eng %U http://dl.acm.org/citation.cfm?id=1708124.1708138 %0 Patent %D 2009 %T Optical interconnect structure in a computer system and method of transporting data between processing elements and memory through the optical interconnect structure %A Vishkin, Uzi %E University of Maryland %X A multi-chip processor/memory arrangement replacing a large computer chip, includes a number of modules each including processing elements, registers, and/or memories interconnected by an optical interconnection fabric providing an all-to-all interconnection between the chips, so that the memory cells on each chip represent a portion of shared memory. The optical interconnect fabric is responsible for transporting data between the chips while processing elements on each chip dominate processing. Each chip is manufactured in mass production so that the entire processor/memory arrangement is fabricated in an inexpensive and simplified technology process. The optical communication fabric is based on waveguide technology and includes a number of waveguides, the layout of which follows certain constraints. The waveguides can intersect each other in the single plane, or alternatively, a double layer of waveguide structures and bent over approach may be used. Specific layout patterns of... %V 10/529,310 %8 2009/03/17/ %G eng %U http://www.google.com/patents?id=wgS6AAAAEBAJ %N 7505822 %0 Journal Article %J Neural computation %D 2009 %T An oscillatory hebbian network model of short-term memory %A Winder,R. K %A Reggia, James A. %A Weems,S. A %A Bunting,M. F %B Neural computation %V 21 %P 741 - 761 %8 2009/// %G eng %N 3 %0 Journal Article %J Proceedings of the IEEE %D 2008 %T Object Detection, Tracking and Recognition for Multiple Smart Cameras %A Sankaranarayanan,A. 
C %A Veeraraghavan,A. %A Chellapa, Rama %K algorithm;geometric %K analysis;image %K association;distributed %K camera;visual %K cameras; %K cameras;object %K colour %K constraints;imaging %K data %K detection;object %K detection;sensor %K detection;three-dimensional %K fusion;target %K model;video %K network;distributed %K recognition;object %K scene %K sensor %K sensor;multiple %K sensors;geometry;image %K sensors;object %K smart %K texture;intelligent %K tracking;target %K tracking;video %X Video cameras are among the most commonly used sensors in a large number of applications, ranging from surveillance to smart rooms for videoconferencing. There is a need to develop algorithms for tasks such as detection, tracking, and recognition of objects, specifically using distributed networks of cameras. The projective nature of imaging sensors provides ample challenges for data association across cameras. We first discuss the nature of these challenges in the context of visual sensor networks. Then, we show how real-world constraints can be favorably exploited in order to tackle these challenges. Examples of real-world constraints are (a) the presence of a world plane, (b) the presence of a three-dimensional scene model, (c) consistency of motion across cameras, and (d) color and texture properties. In this regard, the main focus of this paper is towards highlighting the efficient use of the geometric constraints induced by the imaging devices to derive distributed algorithms for target detection, tracking, and recognition. Our discussions are supported by several examples drawn from real applications. Lastly, we also describe several potential research problems that remain to be addressed. %B Proceedings of the IEEE %V 96 %P 1606 - 1624 %8 2008/10// %@ 0018-9219 %G eng %N 10 %R 10.1109/JPROC.2008.928758 %0 Journal Article %J Applied and Environmental Microbiology 
%D 2008 %T Occurrence and Expression of Luminescence in Vibrio cholerae %A Grim,Christopher J. %A Taviani,Elisa %A Alam,Munirul %A Huq,Anwar %A Sack,R. Bradley %A Rita R Colwell %X Several species of the genus Vibrio, including Vibrio cholerae, are bioluminescent or contain bioluminescent strains. Previous studies have reported that only 10% of V. cholerae strains are luminescent. Analysis of 224 isolates of non-O1/non-O139 V. cholerae collected from Chesapeake Bay, MD, revealed that 52% (116/224) were luminescent when an improved assay method was employed and 58% (130/224) of isolates harbored the luxA gene. In contrast, 334 non-O1/non-O139 V. cholerae strains isolated from two rural provinces in Bangladesh yielded only 21 (6.3%) luminescent and 35 (10.5%) luxA+ isolates. An additional 270 clinical and environmental isolates of V. cholerae serogroups O1 and O139 were tested, and none were luminescent or harbored luxA. These results indicate that bioluminescence may be a trait specific for non-O1/non-O139 V. cholerae strains that frequently occur in certain environments. Luminescence expression patterns of V. cholerae were also investigated, and isolates could be grouped based on expression level. Several strains with defective expression of the lux operon, including natural K variants, were identified. %B Applied and Environmental Microbiology %V 74 %P 708 - 715 %8 2008/02/01/ %@ 0099-2240, 1098-5336 %G eng %U http://aem.asm.org/content/74/3/708 %N 3 %R 10.1128/AEM.01537-07 %0 Conference Paper %B Proceeding of the twenty-sixth annual SIGCHI conference on Human factors in computing systems %D 2008 %T One-handed touchscreen input for legacy applications %A Karlson,A.K. %A Bederson, Benjamin B. %B Proceeding of the twenty-sixth annual SIGCHI conference on Human factors in computing systems %P 1399 - 1408 %8 2008/// %G eng %0 Conference Paper %B IEEE 24th International Conference on Data Engineering, 2008. 
ICDE 2008 %D 2008 %T Online Filtering, Smoothing and Probabilistic Modeling of Streaming data %A Kanagal,B. %A Deshpande, Amol %K Data analysis %K data streaming %K declarative query %K dynamic probabilistic model %K Filtering %K Global Positioning System %K hidden Markov models %K Monitoring %K Monte Carlo methods %K Noise generators %K Noise measurement %K online filtering %K particle filter %K particle filtering (numerical methods) %K probabilistic database view %K probability %K Real time systems %K real-time application %K relational database system %K Relational databases %K sequential Monte Carlo algorithm %K Smoothing methods %K SQL %X In this paper, we address the problem of extending a relational database system to facilitate efficient real-time application of dynamic probabilistic models to streaming data. We use the recently proposed abstraction of model-based views for this purpose, by allowing users to declaratively specify the model to be applied, and by presenting the output of the models to the user as a probabilistic database view. We support declarative querying over such views using an extended version of SQL that allows for querying probabilistic data. Underneath we use particle filters, a class of sequential Monte Carlo algorithms, to represent the present and historical states of the model as sets of weighted samples (particles) that are kept up-to-date as new data arrives. We develop novel techniques to convert the queries on the model-based view directly into queries over particle tables, enabling highly efficient query processing. Finally, we present experimental evaluation of our prototype implementation over several synthetic and real datasets, that demonstrates the feasibility of online modeling of streaming data using our system and establishes the advantages of tight integration between dynamic probabilistic models and databases. %B IEEE 24th International Conference on Data Engineering, 2008. 
ICDE 2008 %I IEEE %P 1160 - 1169 %8 2008/04/07/12 %@ 978-1-4244-1836-7 %G eng %R 10.1109/ICDE.2008.4497525 %0 Conference Paper %B Proceedings of the 16th ACM international conference on Multimedia %D 2008 %T An ontology based approach for activity recognition from video %A Akdemir,Umut %A Turaga,Pavan %A Chellapa, Rama %K activity ontologies %K visual surveillance %X Representation and recognition of human activities is an important problem for video surveillance and security applications. Considering the wide variety of settings in which surveillance systems are being deployed, it is necessary to create a common knowledge-base or ontology of human activities. Most current attempts at ontology design in computer vision for human activities have been empirical in nature. In this paper, we present a more systematic approach to address the problem of designing ontologies for visual activity recognition. We draw on general ontology design principles and adapt them to the specific domain of human activity ontologies. Then, we discuss qualitative evaluation principles and provide several examples from existing ontologies and how they can be improved upon. Finally, we demonstrate quantitatively in terms of recognition performance, the efficacy and validity of our approach for bank and airport tarmac surveillance domains. %B Proceedings of the 16th ACM international conference on Multimedia %S MM '08 %I ACM %C New York, NY, USA %P 709 - 712 %8 2008/// %@ 978-1-60558-303-7 %G eng %U http://doi.acm.org/10.1145/1459359.1459466 %R 10.1145/1459359.1459466 %0 Book Section %B Automata, Languages and Programming %D 2008 %T Optimal Cryptographic Hardness of Learning Monotone Functions %A Dana Dachman-Soled %A Lee, Homin K. %A Malkin, Tal %A Servedio, Rocco A. %A Wan, Andrew %A Wee, Hoeteck %E Aceto, Luca %E Damgård, Ivan %E Goldberg, Leslie Ann %E Halldórsson, Magnús M. 
%E Ingólfsdóttir, Anna %E Walukiewicz, Igor %K Data structures %K Data Structures, Cryptology and Information Theory %K Discrete Mathematics in Computer Science %K Numeric Computing %K Software Engineering/Programming and Operating Systems %K Theory of computation %X A wide range of positive and negative results have been established for learning different classes of Boolean functions from uniformly distributed random examples. However, polynomial-time algorithms have thus far been obtained almost exclusively for various classes of monotone functions, while the computational hardness results obtained to date have all been for various classes of general (nonmonotone) functions. Motivated by this disparity between known positive results (for monotone functions) and negative results (for nonmonotone functions), we establish strong computational limitations on the efficient learnability of various classes of monotone functions. We give several such hardness results which are provably almost optimal since they nearly match known positive results. Some of our results show cryptographic hardness of learning polynomial-size monotone circuits to accuracy only slightly greater than 1/2 + 1/√n; this accuracy bound is close to optimal by known positive results (Blum et al., FOCS ’98). Other results show that under a plausible cryptographic hardness assumption, a class of constant-depth, sub-polynomial-size circuits computing monotone functions is hard to learn; this result is close to optimal in terms of the circuit size parameter by known positive results as well (Servedio, Information and Computation ’04). Our main tool is a complexity-theoretic approach to hardness amplification via noise sensitivity of monotone functions that was pioneered by O’Donnell (JCSS ’04). 
%B Automata, Languages and Programming %S Lecture Notes in Computer Science %I Springer Berlin Heidelberg %P 36 - 47 %8 2008/01/01/ %@ 978-3-540-70574-1, 978-3-540-70575-8 %G eng %U http://link.springer.com/chapter/10.1007/978-3-540-70575-8_4 %0 Journal Article %J Algorithms-ESA 2008 %D 2008 %T An Optimal Incremental Algorithm for Minimizing Lateness with Rejection %A Khuller, Samir %A Mestre,J. %X This paper re-examines the classical problem of minimizing maximum lateness which is defined as follows: given a collection of n jobs with processing times and due dates, in what order should they be processed on a single machine to minimize maximum lateness? The lateness of a job is defined as its completion time minus its due date. This problem can be solved easily by ordering the jobs in non-decreasing due date order. We now consider the following question: which subset of k jobs should we reject to reduce the maximum lateness by the largest amount? While this problem can be solved optimally in polynomial time, we show the following surprising result: there is a fixed permutation of the jobs, such that for all k, if we reject the first k jobs from this permutation, we derive an optimal solution for the problem in which we are allowed to reject k jobs. This allows for an incremental solution in which we can keep incrementally rejecting jobs if we need a solution with lower maximum lateness value. Moreover, we also develop an optimal O(n logn) time algorithm to find this permutation. %B Algorithms-ESA 2008 %P 601 - 610 %8 2008/// %G eng %R 10.1007/978-3-540-87744-8_50 %0 Conference Paper %B Motion and video Computing, 2008. WMVC 2008. IEEE Workshop on %D 2008 %T Optimal Multi-View Fusion of Object Locations %A Sankaranarayanan,A. 
C %A Chellapa, Rama %K control;surveillance;tracking; %K detection;position %K estimates;optimal %K estimation;location %K estimation;multicamera %K estimation;object %K estimator;motion %K fusion;surveillance;cameras;estimation %K location %K multiview %K theory;motion %K tracking;minimum %K variance %X In surveillance applications, it is common to have multiple cameras observing targets exhibiting motion on a ground plane. Tracking and estimation of the location of a target on the plane becomes an important inference problem. In this paper, we study the problem of combining estimates of location obtained from multiple cameras. We model the relation of the uncertainty in the location estimation to the position and location of the camera with respect to the plane (which is encoded by a 2D projective transformation). This is addressed by a theoretical study of the properties of a random variable under a projective transformation and analysis of the geometric setting when the moments of the transformed random variable exist. In this context, we prove that ground plane tracking near the horizon line is often inaccurate. Using suitable approximations to compute the moments, a minimum variance estimator is designed to fuse the multi-camera location estimates. Finally, we present experimental results that illustrate the importance of such modeling in location estimation and tracking. %B Motion and video Computing, 2008. WMVC 2008. IEEE Workshop on %P 1 - 8 %8 2008/01// %G eng %R 10.1109/WMVC.2008.4544048 %0 Journal Article %J British Machine Vision Conference %D 2007 %T Object detection using a shape codebook %A Yu,X. %A Yi,L. %A Fermüller, Cornelia %A David Doermann %X This paper presents a method for detecting categories of objects in real-world images. Given training images of an object category, our goal is to recognize and localize instances of those objects in a candidate image. 
The main contribution of this work is a novel structure of the shape codebook for object detection. A shape codebook entry consists of two components: a shape codeword and a group of associated vectors that specify the object centroids. Like their counterpart in language, the shape codewords are simple and generic such that they can be easily extracted from most object categories. The associated vectors store the geometrical relationships between the shape codewords, which specify the characteristics of a particular object category. Thus they can be considered as the “grammar” of the shape codebook. In this paper, we use Triple-Adjacent-Segments (TAS) extracted from image edges as the shape codewords. Object detection is performed in a probabilistic voting framework. Experimental results on public datasets show performance similar to the state-of-the-art, yet our method has significantly lower complexity and requires considerably less supervision in the training (We only need bounding boxes for a few training samples, do not need figure/ground segmentation and do not need a validation dataset). %B British Machine Vision Conference %V 4 %8 2007/// %G eng %0 Conference Paper %B British Machine Vision Conference (BMVC'07) %D 2007 %T Object Detection Using Shape Codebook %A Yu,Xiaodong %A Li,Yi %A Fermüller, Cornelia %A David Doermann %X This paper presents a method for detecting categories of objects in real-world images. Given training images of an object category, our goal is to recognize and localize instances of those objects in a candidate image. The main contribution of this work is a novel structure of the shape codebook for object detection. A shape codebook entry consists of two components: a shape codeword and a group of associated vectors that specify the object centroids. Like their counterpart in language, the shape codewords are simple and generic such that they can be easily extracted from most object categories. 
The associated vectors store the geometrical relationships between the shape codewords, which specify the characteristics of a particular object category. Thus they can be considered as the “grammar” of the shape codebook. In this paper, we use Triple-Adjacent-Segments (TAS) extracted from image edges as the shape codewords. Object detection is performed in a probabilistic voting framework. Experimental results on public datasets show performance similar to the state-of-the-art, yet our method has significantly lower complexity and requires considerably less supervision in training (we only need bounding boxes for a few training samples, and need neither figure/ground segmentation nor a validation dataset). %B British Machine Vision Conference (BMVC'07) %I BMVC07 %8 2007/12// %G eng %0 Patent %D 2007 %T Object recognition using linear subspaces %A Jacobs, David W. %A Basri,Ronen The Weizmann Institute %E NEC Laboratories America, Inc. (4 Independence Way, Princeton, New Jersey 08540, US) %X Abstract of EP1204069: A method for choosing an image from a plurality of three-dimensional models which is most similar to an input image is provided. 
The method includes the steps of: (a) providing a database of the plurality of three-dimensional models; (b) providing an input image; (c) positioning each three-dimensional model relative to the input image; (d) for each three-dimensional model, determining a rendered image that is most similar to the input image by: (d)(i) computing a linear subspace that describes an approximation to the set of all possible rendered images that each three-dimensional model can produce under all possible lighting conditions where each point in the linear subspace represents a possible image; and one of (d)(ii) finding the point on the linear subspace that is closest to the input image or finding a rendered image in a subset of the linear subspace obtained by projecting the set of images that are generated by positive lights onto the linear subspace; (e) computing a measure of similarity between the input image and each rendered image; and (f) selecting the three-dimensional model corresponding to the rendered image whose measure of similarity is most similar to the input image. Step (d) is preferably repeated for each of a red, green, and blue color component for each three-dimensional model. The linear subspace is preferably either four-dimensional or nine-dimensional. %V EP20010117763 %8 2007/01/17/ %G eng %U http://www.freepatentsonline.com/EP1204069.html %N EP1204069 %0 Conference Paper %B Image Analysis and Processing, 2007. ICIAP 2007. 14th International Conference on %D 2007 %T Object Tracking at Multiple Levels of Spatial Resolutions %A Tran,S.D. %A Davis, Larry S. %K coarse-to-fine %K detection;tracking; %K filtering;spatial %K framework;object %K levels;tracking %K multiple %K resolution;multiresolution %K resolutions %K searching;data %K space;computer %K state %K tracking %K tracking;sequential %K vision;object %X Tracking is usually performed at a single level of data resolution. 
This paper describes a multi-resolution tracking framework developed with efficiency and robustness in mind. Efficiency is achieved by processing low resolution data whenever possible. Robustness results from multiple level coarse-to-fine searching in the tracking state space. We combine sequential filtering both in time and resolution levels into a probabilistic framework. A color blob tracker is implemented and the tracking results are evaluated in a number of experiments. %B Image Analysis and Processing, 2007. ICIAP 2007. 14th International Conference on %P 149 - 154 %8 2007/09// %G eng %R 10.1109/ICIAP.2007.4362772 %0 Conference Paper %B Computer Vision and Pattern Recognition, 2007. CVPR '07. IEEE Conference on %D 2007 %T Objects in Action: An Approach for Combining Action Understanding and Object Perception %A Gupta,A. %A Davis, Larry S. %K analysis;image %K approach;action %K Bayesian %K classification;image %K classification;object %K framework;Bayes %K interactions;inference %K interpretation %K localization;object %K methods;gesture %K MOTION %K movements;human %K perception; %K perception;human-object %K perception;object %K process;object %K processing;visual %K recognition;image %K recognition;video %K segmentation;action %K segmentation;object %K signal %K understanding;human %X Analysis of videos of human-object interactions involves understanding human movements, locating and recognizing objects and observing the effects of human movements on those objects. While each of these can be conducted independently, recognition improves when interactions between these elements are considered. Motivated by psychological studies of human perception, we present a Bayesian approach which unifies the inference processes involved in object classification and localization, action understanding and perception of object reaction. 
Traditional approaches for object classification and action understanding have relied on shape features and movement analysis respectively. By placing object classification and localization in a video interpretation framework, we can localize and classify objects which are either hard to localize due to clutter or hard to recognize due to lack of discriminative features. Similarly, by applying context on human movements from the objects on which these movements impinge and the effects of these movements, we can segment and recognize actions which are either too subtle to perceive or too hard to recognize using motion features alone. %B Computer Vision and Pattern Recognition, 2007. CVPR '07. IEEE Conference on %P 1 - 8 %8 2007/06// %G eng %R 10.1109/CVPR.2007.383331 %0 Journal Article %J ACM Transactions on Algorithms (TALG) %D 2007 %T Oblivious routing on node-capacitated and directed graphs %A Hajiaghayi, Mohammad T. %A Kleinberg,R. D %A Räcke,H. %A Leighton,T. %B ACM Transactions on Algorithms (TALG) %V 3 %P 51–es - 51–es %8 2007/// %G eng %N 4 %0 Journal Article %J PROCEEDINGS OF THE NATIONAL CONFERENCE ON ARTIFICIAL INTELLIGENCE %D 2007 %T Online collective entity resolution %A Bhattacharya,I. %A Getoor, Lise %X Entity resolution is a critical component of data integration where the goal is to reconcile database references corresponding to the same real-world entities. Given the abundance of publicly available databases that have unresolved entities, we motivate the problem of quick and accurate resolution for answering queries over such ‘unclean’ databases. Since collective entity resolution approaches — where related references are resolved jointly — have been shown to be more accurate than independent attribute-based resolution, we focus on adapting collective resolution for answering queries. We propose a two-stage collective resolution strategy for processing queries. 
We then show how it can be performed on-the-fly by adaptively extracting and resolving those database references that are the most helpful for resolving the query. We validate our approach on two large real-world publication databases where we show the usefulness of collective resolution and at the same time demonstrate the need for adaptive strategies for query processing. We then show how the same queries can be answered in real time using our adaptive approach while preserving the gains of collective resolution. This work extends work presented in (Bhattacharya, Licamele, & Getoor 2006). %B PROCEEDINGS OF THE NATIONAL CONFERENCE ON ARTIFICIAL INTELLIGENCE %V 22 %P 1606 - 1606 %8 2007/// %G eng %N 2 %0 Conference Paper %B Proceedings from the Workshop on Metareasoning in Agent Based Systems at the Sixth International Joint Conference on Autonomous Agents and Multiagent Systems %D 2007 %T Ontologies for reasoning about failures in AI systems %A Schmill,M. %A Josyula,D. %A Anderson,M. L %A Wilson,S. %A Oates,T. %A Perlis, Don %A Fults,S. %B Proceedings from the Workshop on Metareasoning in Agent Based Systems at the Sixth International Joint Conference on Autonomous Agents and Multiagent Systems %8 2007/// %G eng %0 Journal Article %J SIAM Journal on Computing %D 2007 %T Optimal expected-case planar point location %A Arya,S. %A Malamatos,T. %A Mount, Dave %A Wong,K.C. %X Point location is the problem of preprocessing a planar polygonal subdivision S of size n into a data structure in order to determine efficiently the cell of the subdivision that contains a given query point. We consider this problem from the perspective of expected query time. We are given the probabilities p_z that the query point lies within each cell z ∈ S. The entropy H of the resulting discrete probability distribution is the dominant term in the lower bound on the expected-case query time. 
We show that it is possible to achieve query time H + O(√H + 1) with space O(n), which is optimal up to lower order terms in the query time. We extend this result to subdivisions with convex cells, assuming a uniform query distribution within each cell. In order to achieve space efficiency, we introduce the concept of entropy-preserving cuttings. %B SIAM Journal on Computing %V 37 %P 584 - 584 %8 2007/// %G eng %N 2 %0 Journal Article %J Journal of Computational Physics %D 2007 %T Optimization of nonlinear error for weighted essentially non-oscillatory methods in direct numerical simulations of compressible turbulence %A Taylor,Ellen M. %A Wu,Minwei %A Martin, M.P %K Compressible turbulence %K direct numerical simulation %K Limiters %K Linear dissipation %K Non-linear dissipation %K Numerical dissipation %K Shock capturing %X Weighted essentially non-oscillatory (WENO) methods have been developed to simultaneously provide robust shock-capturing in compressible fluid flow and avoid excessive damping of fine-scale flow features such as turbulence. Under certain conditions in compressible turbulence, however, numerical dissipation remains unacceptably high even after optimization of the linear component that dominates in smooth regions. We therefore construct and evaluate WENO schemes that also reduce dissipation due to one source of nonlinear error: the smoothness measurement that governs the application of stencil adaptation away from the linear optimal stencil. Direct numerical simulations (DNS) include a one-dimensional Euler solution and three-dimensional compressible isotropic turbulence. We find that the smoothness measurement modifications that we call the “relative smoothness limiter” and the “relative total variation limiter” each significantly enhance the grid-convergence properties of WENO schemes while generating, respectively, small and moderate additional computational expense. 
Moreover, we observe these techniques to be broadly effective regardless of flow configuration. %B Journal of Computational Physics %V 223 %P 384 - 397 %8 2007/// %@ 0021-9991 %G eng %U http://www.sciencedirect.com/science/article/pii/S0021999106004426 %N 1 %R 10.1016/j.jcp.2006.09.010 %0 Conference Paper %B Proceedings of the 2007 ACM SIGMOD international conference on Management of data - SIGMOD '07 %D 2007 %T Optimizing MPF queries %A Corrada Bravo, Hector %A Ramakrishnan,Raghu %B Proceedings of the 2007 ACM SIGMOD international conference on Management of data - SIGMOD '07 %C Beijing, China %P 701 - 701 %8 2007/// %G eng %U http://dl.acm.org/citation.cfm?id=1247558 %R 10.1145/1247480.1247558 %0 Journal Article %J Spatial data on the Web: modeling and management %D 2007 %T Out-of-core Multi-resolution Terrain Modeling %A Danovaro,E. %A De Floriani, Leila %A Puppo,E. %A Samet, Hanan %B Spatial data on the Web: modeling and management %P 43 - 43 %8 2007/// %G eng %0 Conference Paper %B Proceedings of the seventeenth annual ACM-SIAM symposium on Discrete algorithm %D 2006 %T Oblivious network design %A Gupta,A. %A Hajiaghayi, Mohammad T. %A Räcke,H. %B Proceedings of the seventeenth annual ACM-SIAM symposium on Discrete algorithm %P 970 - 979 %8 2006/// %G eng %0 Journal Article %J CTWatch Quarterly %D 2006 %T Observations about software development for high end computing %A Carver, J. C %A Hochstein, L. M %A Kendall,R. P %A Nakamura, T. %A Zelkowitz, Marvin V %A Basili, Victor R. %A Post,D. E %X In this paper, we have summarized our findings from a series of case studies conducted with ten ASC and MP Codes as a series of observations. Due to different environments in which each code is developed, some of the observations are consistent across code teams while others vary across code teams. Overall, we found high consistency among the ASC Codes and the MP Codes. 
Due to the different environments and foci of these projects, this result is both surprising and positive. In addition, despite the fact that a large majority of the developers on these teams have little or no formal training in software engineering, they have been able to make use of some basic software engineering principles. Further education and motivation could increase the use of these principles and further increase the quality of scientific and engineering software that has already demonstrated its value. Based on the positive results thus far, we have plans to conduct additional case studies to gather more data in support of or in contradiction to the observations presented in this paper. In future case studies, we will strive to investigate codes from additional domains, thereby allowing broader, more inclusive conclusions to be drawn. %B CTWatch Quarterly %V 2 %P 33 - 38 %8 2006/// %G eng %N 4A %0 Book %D 2006 %T Oceans And Health: Pathogens In The Marine Environment %A Belkin,Shimshon %A Rita R Colwell %K Electronic books %K Marine microbiology %K Medical / Epidemiology %K Medical / Microbiology %K Nature / Animals / Marine Life %K Pathogenic microorganisms %K Science / Environmental Science %K Science / Life Sciences / Biology %K Science / Life Sciences / Marine Biology %K Science / Life Sciences / Microbiology %K Seawater/ microbiology %X The release of non-disinfected wastewaters into the marine environment is a common worldwide practice, in under-developed as well as in highly developed countries. Consequently, the seas are constantly infused with wastewater bacteria, among them highly pathogenic ones. In view of the public health significance of this phenomenon, it is surprising how little is actually known concerning the fate of such bacteria once they enter the sea. 
While numerous studies have addressed the effects of various environmental parameters on colony formation, many of them actually ignore the fact that bacteria can retain viability and infectivity while losing colony-forming ability. Only in recent years have efforts also been directed at unraveling the mechanisms determining bacterial sensitivity or survival under these conditions. This, therefore, is one subject of Oceans and Health: Pathogens in the Marine Environment: the survival, infectivity, pathogenicity and viability of enteric bacteria in the sea. Chapters also detail the public health aspects of wastewater release, civil engineering and economic considerations, other sources of pathogens, and much more. %I Springer %8 2006/// %@ 9780387237084 %G eng %0 Journal Article %J Computer Networks %D 2006 %T OMNI: An efficient overlay multicast infrastructure for real-time applications %A Banerjee,Suman %A Kommareddy,Christopher %A Kar,Koushik %A Bhattacharjee, Bobby %A Khuller, Samir %K Application-layer multicast %K Minimum latency problem %K overlay multicast %X We consider an overlay architecture (called OMNI) where service providers deploy a set of service nodes (called MSNs) in the network to efficiently implement media-streaming applications. These MSNs are organized into an overlay and act as application-layer multicast forwarding entities for a set of clients.We present a decentralized scheme that organizes the MSNs into an appropriate overlay structure that is particularly beneficial for real-time applications. We formulate our optimization criterion as a “degree-constrained minimum average-latency problem” which is known to be NP-Hard. A key feature of this formulation is that it gives a dynamic priority to different MSNs based on the size of its service set. Our proposed approach iteratively modifies the overlay tree using localized transformations to adapt with changing distribution of MSNs and clients, as well as network conditions. 
We show that a centralized greedy approach to this problem does not perform quite as well, while our distributed iterative scheme efficiently converges to near-optimal solutions. %B Computer Networks %V 50 %P 826 - 841 %8 2006/04/13/ %@ 1389-1286 %G eng %U http://www.sciencedirect.com/science/article/pii/S1389128605002434 %N 6 %R 10.1016/j.comnet.2005.07.023 %0 Journal Article %J Technical Reports from UMIACS %D 2006 %T A One-Threshold Algorithm for Detecting Abandoned Packages Under Severe Occlusions Using a Single Camera %A Lim,Ser-Nam %A Davis, Larry S. %K Technical Report %X We describe a single-camera system capable of detecting abandoned packages under severe occlusions, which leads to complications on several levels. The first arises when frames containing only background pixels are unavailable for initializing the background model - a problem for which we apply a novel discriminative measure. The proposed measure is essentially the probability of observing a particular pixel value, conditioned on the probability that no motion is detected, with the pdf on which the latter is based being estimated as a zero-mean and unimodal Gaussian distribution from observing the difference values between successive frames. We will show that such a measure is a powerful discriminant even under severe occlusions, and can deal robustly with the foreground aperture effect - a problem inherently caused by differencing successive frames. The detection of abandoned packages then follows at both the pixel and region level. At the pixel-level, an ``abandoned pixel'' is detected as a foreground pixel, at which no motion is observed. At the region-level, abandoned pixels are ascertained in a Markov Random Field (MRF), after which they are clustered. These clusters are only finally classified as abandoned packages, if they display temporal persistency in their size, shape, position and color properties, which is determined using conditional probabilities of these attributes. 
The algorithm is also carefully designed to avoid any thresholding, which is the pitfall of many vision systems, and which significantly improves the robustness of our system. Experimental results from real-life train station sequences demonstrate the robustness and applicability of our algorithm. %B Technical Reports from UMIACS %8 2006/02/13/T21:4 %G eng %U http://drum.lib.umd.edu/handle/1903/3329 %0 Journal Article %J Proceedings of the ICML Workshop on Open Problems in Statistical Relational Learning %D 2006 %T Open problems in relational data clustering %A Anthony,A. %A desJardins, Marie %X Data clustering is the task of detecting patterns in a set of data. Most algorithms take non-relational data as input and are sometimes unable to find significant patterns. Many data sets can include relational information, as well as independent object attributes. We believe that clustering with relational data will help find significant patterns where non-relational algorithms fail. This paper discusses two open problems in relational data clustering: clustering heterogeneous data, and relation selection or extraction. Potential methods for addressing the problems are presented. %B Proceedings of the ICML Workshop on Open Problems in Statistical Relational Learning %8 2006/// %G eng %0 Conference Paper %B Proc. AAAI Spring Symposium on Computational Approaches to Analyzing Weblogs, Stanford, CA %D 2006 %T Opinion Analysis in Document Databases %A Cesarano,C. %A Dorr, Bonnie J %A Picariello, A. %A Reforgiato,D. %A Sagoff,A. %A V.S. Subrahmanian %X There are numerous applications in which we would like to assess what opinions are being expressed in text documents. For example, Martha Stewart’s company may have wished to assess the degree of harshness of news articles about her in the recent past. Likewise, a World Bank official may wish to assess the degree of criticism of a proposed dam in Bangladesh. 
The ability to gauge opinion on a given topic is therefore of critical interest. In this paper, we develop a suite of algorithms which take as input a set D of documents as well as a topic t, and gauge the degree of opinion expressed about topic t in the set D of documents. Our algorithms can return both a number (the larger the number, the more positive the opinion) as well as a qualitative opinion (e.g. harsh, complimentary). We assess the accuracy of these algorithms via human experiments and show that the best of these algorithms can accurately reflect human opinions. We have also conducted performance experiments showing that our algorithms are computationally fast. %B Proc. AAAI Spring Symposium on Computational Approaches to Analyzing Weblogs, Stanford, CA %8 2006/// %G eng %0 Conference Paper %B Computer Aided Control System Design, 2006 IEEE International Conference on Control Applications, 2006 IEEE International Symposium on Intelligent Control, 2006 IEEE %D 2006 %T Optimization of signal processing software for control system implementation %A Bhattacharyya, Shuvra S. %A Levine,William S. %X Signal processing plays a fundamental role in the design of control systems: the portion of a digitally-implemented control system between the sensor outputs and the actuator inputs is precisely a digital signal processor (DSP). Consequently, effective techniques for design and optimization of signal processing software are important in achieving efficient controller implementations. Motivated by these relationships, this paper reviews techniques for modeling signal processing functionality in a manner that exposes aspects of application structure that are useful for mapping the functionality into efficient implementations. The paper then introduces some representative techniques that operate on such models to systematically derive optimized implementations from them. 
%B Computer Aided Control System Design, 2006 IEEE International Conference on Control Applications, 2006 IEEE International Symposium on Intelligent Control, 2006 IEEE %P 1562 - 1567 %8 2006/10// %G eng %R 10.1109/CACSD-CCA-ISIC.2006.4776874 %0 Journal Article %J Software: Practice and Experience %D 2006 %T Optimization of structured programs %A Zelkowitz, Marvin V %A Bail,William G %K compiler %K Nested control structures %K Optimizer %K structured programming %X The class of programs which do not contain goto statements has a structure which lends itself to optimization by an optimizer that is fast, efficient and relatively easy to program. The design of such an optimizer is described, along with some of the results obtained using this optimizer—one such result being that very little code optimization is achieved. The conjecture is made that this is true because gotoless programming languages lend themselves to more compact and concise object code at the source language level. %B Software: Practice and Experience %V 4 %P 51 - 57 %8 2006/10/27/ %@ 1097-024X %G eng %U http://onlinelibrary.wiley.com/doi/10.1002/spe.4380040106/abstract %N 1 %R 10.1002/spe.4380040106 %0 Journal Article %J Protein Expression and Purification %D 2006 %T An optimized system for expression and purification of secreted bacterial proteins %A Geisbrecht,Brian V. %A Bouyain,Samuel %A Pop, Mihai %K Pathogens %K Secreted proteins %K Toxins %K Virulence factors %X In this report, we describe an optimized system for the efficient overexpression, purification, and refolding of secreted bacterial proteins. Candidate secreted proteins were produced recombinantly in Escherichia coli as Tobacco Etch Virus protease-cleavable hexahistidine-c-myc epitope fusion proteins. 
Without regard to their initial solubility, recombinant fusion proteins were extracted from whole cells with guanidium chloride, purified under denaturing conditions by immobilized metal affinity chromatography, and refolded by rapid dilution into a solution containing only Tris buffer and sodium chloride. Following concentration on the same resin under native conditions, each protein was eluted for further purification and/or characterization. Preliminary studies on a test set of 12 secreted proteins ranging in size from 13 to 130 kDa yielded between 10 and 50 mg of fusion protein per liter of induced culture at greater than 90% purity, as judged by Coomassie-stained SDS–PAGE. Of the nine proteins further purified, analytical gel filtration chromatography indicated that each was a monomer in solution and circular dichroism spectroscopy revealed that each had adopted a well-defined secondary structure. While there are many potential applications for this system, the results presented here suggest that it will be particularly useful for investigators employing structural approaches to understand protein function, as attested to by the crystal structures of three proteins purified using this methodology (B.V. Geisbrecht, B.Y. Hamaoka, B. Perman, A. Zemla, D.J. Leahy, J. Biol. Chem. 280 (2005) 17243–17250). %B Protein Expression and Purification %V 46 %P 23 - 32 %8 2006/03// %@ 1046-5928 %G eng %U http://www.sciencedirect.com/science/article/pii/S1046592805003128 %N 1 %R 10.1016/j.pep.2005.09.003 %0 Journal Article %J Modeling and Management of Geographical Data over Distributed Architectures. Springer–Verlag %D 2006 %T Out–of–Core Multiresolution Terrain Modeling %A Danovaro,E. %A De Floriani, Leila %A Puppo,E. %A Samet, Hanan %B Modeling and Management of Geographical Data over Distributed Architectures. Springer–Verlag %8 2006/// %G eng %0 Conference Paper %B PROCEEDINGS OF THE NATIONAL CONFERENCE ON ARTIFICIAL INTELLIGENCE %D 2006 %T Overconfidence or paranoia? 
Search in imperfect-information games %A Parker,A. %A Nau, Dana S. %A V.S. Subrahmanian %X We derive a recursive formula for expected utility values in imperfect-information game trees, and an imperfect-information game tree search algorithm based on it. The formula and algorithm are general enough to incorporate a wide variety of opponent models. We analyze two opponent models. The “paranoid” model is an information-set analog of the minimax rule used in perfect-information games. The “overconfident” model assumes the opponent moves randomly. Our experimental tests in the game of kriegspiel chess (an imperfect-information variant of chess) produced surprising results: (1) against each other, and against one of the kriegspiel algorithms presented at IJCAI-05, the overconfident model usually outperformed the paranoid model; (2) the performance of both models depended greatly on how well the model corresponded to the opponent’s behavior. These results suggest that the usual assumption of perfect-information game tree search—that the opponent will choose the best possible move—isn’t as useful in imperfect-information games. %B PROCEEDINGS OF THE NATIONAL CONFERENCE ON ARTIFICIAL INTELLIGENCE %V 21 %P 1045 - 1045 %8 2006/// %G eng %U http://www.aaai.org/Papers/AAAI/2006/AAAI06-164.pdf
OverDoSe can be deployed by a single ISP who wishes to offer DDoS protection as a value-adding service to its customers. %I School of Computer Science, Carnegie Mellon University %8 2006 %@ CMU-CS-06-114 %G eng %U http://repository.cmu.edu/cgi/viewcontent.cgi?article=1073&context=compsci %R Technical Report %0 Conference Paper %B Proceedings of the conference on Human Language Technology and Empirical Methods in Natural Language Processing %D 2005 %T OCR post-processing for low density languages %A Kolak,O. %A Resnik, Philip %B Proceedings of the conference on Human Language Technology and Empirical Methods in Natural Language Processing %P 867 - 874 %8 2005/// %G eng %0 Conference Paper %B Proceedings of the 6th ACM conference on Electronic commerce %D 2005 %T Online auctions with re-usable goods %A Hajiaghayi, Mohammad T. %B Proceedings of the 6th ACM conference on Electronic commerce %P 165 - 174 %8 2005/// %G eng %0 Conference Paper %B Computer Vision, 2005. ICCV 2005. Tenth IEEE International Conference on %D 2005 %T On-line density-based appearance modeling for object tracking %A Han,B. %A Davis, Larry S. %K appearance %K approximation;Gaussian %K Computer %K density %K density-based %K detection;target %K Gaussian %K mixtures;object %K modeling %K modeling;real-time %K processes;computer %K sequences;object %K technique;online %K tracking; %K tracking;online %K vision;image %K vision;sequential %X Object tracking is a challenging problem in real-time computer vision due to variations of lighting condition, pose, scale, and view-point over time. However, it is exceptionally difficult to model appearance with respect to all of those variations in advance; instead, on-line update algorithms are employed to adapt to these changes. We present a new on-line appearance modeling technique which is based on sequential density approximation. 
This technique provides accurate and compact representations using Gaussian mixtures, in which the number of Gaussians is automatically determined. This procedure is performed in linear time at each time step, which we prove by amortized analysis. Features for each pixel and rectangular region are modeled together by the proposed sequential density approximation algorithm, and the target model is updated in scale robustly. We show the performance of our method by simulations and tracking in natural videos %B Computer Vision, 2005. ICCV 2005. Tenth IEEE International Conference on %V 2 %P 1492 -1499 Vol. 2 - 1492 -1499 Vol. 2 %8 2005/10// %G eng %R 10.1109/ICCV.2005.181 %0 Journal Article %J Information Processing Letters %D 2005 %T Optimal and near-optimal algorithms for generalized intersection reporting on pointer machines %A Shi,Qingmin %A JaJa, Joseph F. %K algorithms %K computational geometry %K Generalized intersection %X We develop efficient algorithms for a number of generalized intersection reporting problems, including orthogonal and general segment intersection, 2D range searching, rectangular point enclosure, and rectangle intersection search. Our results for orthogonal and general segment intersection, 3-sided 2D range searching, and rectangular pointer enclosure problems match the lower bounds for their corresponding standard versions under the pointer machine model. Our results for the remaining problems improve upon the best known previous algorithms. %B Information Processing Letters %V 95 %P 382 - 388 %8 2005/08/16/ %@ 0020-0190 %G eng %U http://www.sciencedirect.com/science/article/pii/S0020019005001183 %N 3 %R 10.1016/j.ipl.2005.04.008 %0 Conference Paper %B Proceedings of the fourth international joint conference on Autonomous agents and multiagent systems %D 2005 %T Optimal status sets of heterogeneous agent programs %A Stroe,Bogdan %A V.S. 
Subrahmanian %A Dasgupta,Sudeshna %K heterogeneous agents %K heuristic search %K non-ground representations %K optimal status sets %X There are many situations where an agent can perform one of several sets of actions in response to changes in its environment, and the agent chooses to perform the set of actions that optimizes some objective function. In past work, Eiter et al. have proposed a rule-based framework for programming agents on top of heterogeneous data sources, but they provide no solutions to the above problem. In this paper, we propose a semantics called optimal feasible status set semantics for agents which allows the agent to associate an objective function with feasible status sets and act according to the feasible status set that optimizes this objective function. We provide both an algorithm to compute exact optimal feasible status sets as well as the TierOpt and FastOpt algorithms to find (suboptimal) feasible status sets much faster. We report on experiments on a suite of real agent applications showing that the heuristic algorithms work well in practice. %B Proceedings of the fourth international joint conference on Autonomous agents and multiagent systems %S AAMAS '05 %I ACM %C New York, NY, USA %P 709 - 715 %8 2005/// %@ 1-59593-093-0 %G eng %U http://doi.acm.org/10.1145/1082473.1082581 %R 10.1145/1082473.1082581 %0 Report %D 2005 %T Ordered and Quantum Treemaps: Making Effective Use of 2D Space to Display Hierarchies (2001) %A Bederson, Benjamin B. %A Shneiderman, Ben %A Wattenberg,Martin %K Technical Report %X Treemaps, a space-filling method of visualizing large hierarchical data sets, are receiving increasing attention. Several algorithms have been proposed to create more useful displays by controlling the aspect ratios of the rectangles that make up a treemap. 
While these algorithms do improve visibility of small items in a single layout, they introduce instability over time in the display of dynamically changing data, fail to preserve order of the underlying data, and create layouts that are difficult to visually search. In addition, continuous treemap algorithms are not suitable for displaying quantum-sized objects within them, such as images. This paper introduces several new treemap algorithms, which address these shortcomings. In addition, we show a new application of these treemaps, using them to present groups of images. The ordered treemap algorithms ensure that items near each other in the given order will be near each other in the treemap layout. Using experimental evidence from Monte Carlo trials, we show that compared to other layout algorithms ordered treemaps are more stable while maintaining relatively favorable aspect ratios of the constituent rectangles. A second test set uses stock market data. The quantum treemap algorithms modify the layout of the continuous treemap algorithms to generate rectangles that are integral multiples of an input object size. The quantum treemap algorithm has been applied to PhotoMesa, an application that supports browsing of large numbers of images. %B Institute for Systems Research Technical Reports %8 2005/// %G eng %U http://drum.lib.umd.edu/handle/1903/6486 %0 Journal Article %J Institute for Systems Research Technical Reports %D 2005 %T Ordered Treemap Layouts (2001) %A Shneiderman, Ben %A Wattenberg,Martin %K Technical Report %X Treemaps, a space-filling method of visualizing large hierarchical data sets, are receiving increasing attention. Several algorithms have been proposed to create more useful displays by controlling the aspect ratios of the rectangles that make up a treemap. 
While these algorithms do improve visibility of small items in a single layout, they introduce instability over time in the display of dynamically changing data, and fail to preserve an ordering of the underlying data. This paper introduces the ordered treemap, which addresses these two shortcomings. The ordered treemap algorithm ensures that items near each other in the given order will be near each other in the treemap layout. Using experimental evidence from Monte Carlo trials, we show that compared to other layout algorithms ordered treemaps are more stable while maintaining relatively low aspect ratios of the constituent rectangles. A second test set uses stock market data. %B Institute for Systems Research Technical Reports %8 2005/// %G eng %U http://drum.lib.umd.edu/handle/1903/6480 %0 Journal Article %J Behavior Research Methods %D 2005 %T An outdoor 3-D visual tracking system for the study of spatial navigation and memory in rhesus monkeys %A Zia Khan %A Herman, Rebecca A. %A Wallen, Kim %A Balch, Tucker %K cognitive psychology %X Previous studies of the navigational abilities of nonhuman primates have largely been limited to what could be described by a human observer with a pen and paper. Consequently, we have developed a system that uses a pair of cameras to automatically obtain the three-dimensional trajectory of rhesus monkeys performing an outdoor spatial navigation and memory task. The system provides trajectories, path length, speed, and other variables that would be impossible for an unaided observer to note. From trajectory data, we computed and validated a path-length measurement. We use this measurement to compare the navigation abilities of several animals. In addition, we provide quantitative data on the accuracy of a method for automatic behavior detection. Currently, the system is being used to examine the sex differences in spatial navigation of rhesus monkeys. 
We expect that measures derived from the trajectory data will reveal strategies used by animals to solve spatial problems. %B Behavior Research Methods %V 37 %P 453 - 463 %8 2005/08/01/ %@ 1554-351X, 1554-3528 %G eng %U http://link.springer.com/article/10.3758/BF03192714 %N 3 %! Behavior Research Methods %0 Conference Paper %B AAAI Spring symposium on Computational Approaches to Analyzing Weblogs %D 2004 %T Oasys: An opinion analysis system %A Cesarano,C. %A Dorr, Bonnie J %A Picariello, A. %A Reforgiato,D. %A Sagoff,A. %A V.S. Subrahmanian %X There are numerous applications in which we would like to assess what opinions are being expressed in text documents. For example, Martha Stewart’s company may have wished to assess the degree of harshness of news articles about her in the recent past. Likewise, a World Bank official may wish to assess the degree of criticism of a proposed dam in Bangladesh. The ability to gauge opinion on a given topic is therefore of critical interest. In this paper, we develop a suite of algorithms which take as input a set D of documents as well as a topic t, and gauge the degree of opinion expressed about topic t in the set D of documents. Our algorithms can return both a number (the larger the number, the more positive the opinion) as well as a qualitative opinion (e.g. harsh, complimentary). We assess the accuracy of these algorithms via human experiments and show that the best of these algorithms can accurately reflect human opinions. We have also conducted performance experiments showing that our algorithms are computationally fast. %B AAAI Spring symposium on Computational Approaches to Analyzing Weblogs %8 2004/// %G eng %0 Conference Paper %B Image Processing, 2004. ICIP '04. 2004 International Conference on %D 2004 %T Object tracking by adaptive feature extraction %A Han,Bohyung %A Davis, Larry S.
%K adaptive %K algorithm; %K analysis; %K colour %K component %K extraction; %K feature %K feature; %K heterogeneous %K image %K image; %K likelihood %K Mean-shift %K object %K online %K principal %K tracking %K tracking; %X Tracking objects in the high-dimensional feature space is not only computationally expensive but also functionally inefficient. Selecting a low-dimensional discriminative feature set is a critical step to improve tracker performance. A good feature set for tracking can differ from frame to frame due to changes in the background against the tracked object, so an on-line algorithm that adaptively determines an advantageous, distinctive feature set is required. In this paper, multiple heterogeneous features are assembled, and likelihood images are constructed for various subspaces of the combined feature space. Then, the most discriminative feature is extracted by principal component analysis (PCA) based on those likelihood images. This idea is applied to the mean-shift tracking algorithm [D. Comaniciu et al., June 2000], and we demonstrate its effectiveness through various experiments. %B Image Processing, 2004. ICIP '04. 2004 International Conference on %V 3 %P 1501 - 1504 Vol. 3 %8 2004/10// %G eng %R 10.1109/ICIP.2004.1421349 %0 Journal Article %J ACM Comput. Surv. %D 2004 %T Object-based and image-based object representations %A Samet, Hanan %K Access methods %K feature query %K geographic information systems (GIS) %K image space %K location query %K object space %K octrees %K Pyramids %K quadtrees %K R-trees %K space-filling curves %K Spatial databases %X An overview is presented of object-based and image-based representations of objects by their interiors. The representations are distinguished by the manner in which they can be used to answer two fundamental queries in database applications: (1) Feature query: given an object, determine its constituent cells (i.e., their locations in space).
(2) Location query: given a cell (i.e., a location in space), determine the identity of the object (or objects) of which it is a member as well as the remaining constituent cells of the object (or objects). Regardless of the representation that is used, the generation of responses to the feature and location queries is facilitated by building an index (i.e., the result of a sort) either on the objects or on their locations in space, and implementing it using an access structure that correlates the objects with the locations. Assuming the presence of an access structure, implicit (i.e., image-based) representations are described that are good for finding the objects associated with a particular location or cell (i.e., the location query), while requiring that all cells be examined when determining the locations associated with a particular object (i.e., the feature query). In contrast, explicit (i.e., object-based) representations are good for the feature query, while requiring that all objects be examined when trying to respond to the location query. The goal is to be able to answer both types of queries with one representation and without possibly having to examine every cell. Representations are presented that achieve this goal by imposing containment hierarchies on either space (i.e., the cells in the space in which the objects are found), or objects. In the former case, space is aggregated into successively larger-sized chunks (i.e., blocks), while in the latter, objects are aggregated into successively larger groups (in terms of the number of objects that they contain). The former is applicable to image-based interior-based representations of which the space pyramid is an example. The latter is applicable to object-based interior-based representations of which the R-tree is an example. The actual mechanics of many of these representations are demonstrated in the VASCO JAVA applets found at http://www.cs.umd.edu/˜hjs/quadtree/index.html. %B ACM Comput. Surv. 
%V 36 %P 159 - 217 %8 2004/06// %@ 0360-0300 %G eng %U http://doi.acm.org/10.1145/1031120.1031123 %N 2 %R 10.1145/1031120.1031123 %0 Journal Article %J Environmental Microbiology %D 2004 %T Occurrence and distribution of Vibrio cholerae in the coastal environment of Peru %A Gil,Ana I. %A Louis,Valérie R. %A Rivera,Irma N. G. %A Lipp,Erin %A Huq,Anwar %A Lanata,Claudio F. %A Taylor,David N. %A Russek‐Cohen,Estelle %A Choopun,Nipa %A Sack,R. Bradley %A Rita R Colwell %X The occurrence and distribution of Vibrio cholerae in sea water and plankton along the coast of Peru were studied from October 1997 to June 2000, and included the 1997–98 El Niño event. Samples were collected at four sites in coastal waters off Peru at monthly intervals. Of 178 samples collected and tested, V. cholerae O1 was cultured from 10 (5.6%) samples, and V. cholerae O1 was detected by direct fluorescent antibody assay in 26 out of 159 samples tested (16.4%). Based on the number of cholera cases reported in Peru from 1997 to 2000, a significant correlation was observed between cholera incidence and elevated sea surface temperature (SST) along the coast of Peru (P < 0.001). From the results of this study, coastal sea water and zooplankton are concluded to be a reservoir for V. cholerae in Peru. The climate–cholera relationship observed for the 1997–98 El Niño year suggests that an early warning system for cholera risk can be established for Peru and neighbouring Latin American countries. %B Environmental Microbiology %V 6 %P 699 - 706 %8 2004/07/01/ %@ 1462-2920 %G eng %U http://onlinelibrary.wiley.com/doi/10.1111/j.1462-2920.2004.00601.x/abstract?userIsAuthenticated=false&deniedAccessCustomisedMessage= %N 7 %R 10.1111/j.1462-2920.2004.00601.x %0 Journal Article %J Applied Cryptography and Network Security %D 2004 %T One-round protocols for two-party authenticated key exchange %A Jeong,I. R %A Katz, Jonathan %A Lee,D. 
H %B Applied Cryptography and Network Security %P 220 - 232 %8 2004/// %G eng %0 Journal Article %J IEEE, 0-7695-2158-2164 %D 2004 %T On-Line Kernel-Based Tracking in Joint Feature-Spatial Spaces %A Yang,C. %A Duraiswami, Ramani %A Elgammal,A. %A Davis, Larry S. %X We will demonstrate an object tracking algorithm that uses a novel simple symmetric similarity function between spatially-smoothed kernel-density estimates of the model and target distributions. The similarity measure is based on the expectation of the density estimates over the model or target images. The density is estimated using radial-basis kernel functions which measure the affinity between points and provide a better outlier rejection property. The mean-shift algorithm is used to track objects by iteratively maximizing this similarity function. To alleviate the quadratic complexity of the density estimation, we employ Gaussian kernels and the fast Gauss transform to reduce the computations to linear order. This leads to a very efficient and robust nonparametric tracking algorithm. More details can be found in [2]. The system processes an online video stream on a P4 1.4GHz and achieves 30 frames per second using an ordinary webcam. %B IEEE, 0-7695-2158-2164 %8 2004/// %G eng %0 Journal Article %J Typescript. Princeton University %D 2004 %T Only in Europe?: The Economic and Military Foundations of European World Empires %A Jordan Boyd-Graber %B Typescript. Princeton University %8 2004/// %G eng %0 Patent %D 2004 %T OPTICAL INTERCONNECT STRUCTURE IN A COMPUTER SYSTEM %A Vishkin, Uzi %X A multi-chip processor/memory arrangement (20) is shown which includes a plurality of modules (22), also referred to herein as chips. The modules (22) are interconnected therebetween by an optical interconnect structure (24) also referred to herein as optical interconnect fabric.
The basic concept underlying the structure of the arrangement (20) is to position the processing elements and memory cells on the small chips (22) which are fabricated in mass production based on inexpensive technology, for example, 0.25 micron technology and interconnected with the optical interconnect fabric (24). Packaged with the optical interconnect structure (24), a plurality of inexpensive chips (22) provides sufficient performance at a small fraction of the cost of the processor/memory arrangement implemented on a single large computer chip (0.065 micron chip). %8 2004/09/30/ %G eng %N WO/2004/083904 %0 Journal Article %J Knowledge and Data Engineering, IEEE Transactions on %D 2004 %T Optimal models of disjunctive logic programs: semantics, complexity, and computation %A Leone,N. %A Scarcello,F. %A V.S. Subrahmanian %K complexity; %K computational %K disjunctive %K function; %K knowledge %K LANGUAGE %K Logic %K minimal %K model %K nonmonotonic %K objective %K optimisation; %K OPTIMIZATION %K problems; %K program %K Programming %K programming; %K reasoning; %K representation; %K semantics; %K stable %K user-specified %X Almost all semantics for logic programs with negation identify a set, SEM(P), of models of program P, as the intended semantics of P, and any model M in this class is considered a possible meaning of P with regard to the semantics the user has in mind. Thus, for example, in the case of stable models [M. Gelfond et al., (1988)], choice models [D. Sacca et al., (1990)], answer sets [M. Gelfond et al., (1991)], etc., different possible models correspond to different ways of "completing" the incomplete information in the logic program. However, different end-users may have different ideas on which of these different models in SEM(P) is a reasonable one from their point of view. For instance, given SEM(P), user U1 may prefer model M1 ∈ SEM(P) to model M2 ∈ SEM(P) based on some evaluation criterion that she has.
We develop a logic program semantics based on optimal models. This semantics does not add yet another semantics to the logic programming arena - it takes as input an existing semantics SEM(P) and a user-specified objective function Obj, and yields a new semantics Opt(P) ⊆ SEM(P) that realizes the objective function within the framework of preferred models identified already by SEM(P). Thus, the user who may or may not know anything about logic programming has considerable flexibility in making the system reflect her own objectives by building "on top" of existing semantics known to the system. In addition to the declarative semantics, we provide a complete complexity analysis and algorithms to compute optimal models under varied conditions when SEM(P) is the stable model semantics, the minimal models semantics, and the all-models semantics. %B Knowledge and Data Engineering, IEEE Transactions on %V 16 %P 487 - 503 %8 2004/04// %@ 1041-4347 %G eng %N 4 %R 10.1109/TKDE.2004.1269672 %0 Conference Paper %B Design, Automation and Test in Europe Conference and Exhibition, 2003 %D 2003 %T On-chip stochastic communication [SoC applications] %A Tudor Dumitras %A Marculescu, R.
%K buffer overflow %K CMOS technology %K CMOS technology scaling %K Costs %K data upsets %K Design automation %K Digital audio players %K Fault tolerance %K Fault tolerant systems %K generic tile-based architecture %K integrated circuit design %K integrated circuit interconnections %K interconnect correctness requirements %K Logic Design %K logic simulation %K MP3 encoder %K Multiprocessor interconnection networks %K on-chip fault-tolerance %K on-chip stochastic communication %K packet drops %K Pervasive computing %K Robustness %K SoC design %K SoC verification %K Stochastic processes %K Synchronisation %K synchronization failures %K system latency %K system-level fault-tolerance %K system-on-chip %K systems-on-chip %K video coding %X As CMOS technology scales down into the deep-submicron (DSM) domain, the costs of design and verification for systems-on-chip (SoCs) are rapidly increasing due to the inefficiency of traditional CAD tools. Relaxing the requirement of 100% correctness for devices and interconnects drastically reduces the costs of design but, at the same time, requires that SoCs be designed with some system-level fault-tolerance. In this paper, we introduce a new communication paradigm for SoCs, namely stochastic communication. The newly proposed scheme not only separates communication from computation, but also provides the required built-in fault-tolerance to DSM failures, and is scalable and cheap to implement. For a generic tile-based architecture, we show how a ubiquitous multimedia application (an MP3 encoder) can be implemented using stochastic communication in an efficient and robust manner. More precisely, up to 70% data upsets, 80% packet drops because of buffer overflow, and severe levels of synchronization failures can be tolerated while maintaining a low latency.
%B Design, Automation and Test in Europe Conference and Exhibition, 2003 %P 790 - 795 %8 2003/// %G eng %0 Conference Paper %D 2003 %T On-line computation of two types of structural relations in Japanese %A Aoshima,S. %A Phillips,C. %A Weinberg, Amy %8 2003/// %G eng %0 Report %D 2003 %T An O(n)-Space O(log n/log log n + f)-Query Time Algorithm for 3-D Dominance Reporting %A Shi,Qingmin %A JaJa, Joseph F. %K Technical Report %X We present a linear-space algorithm for handling the {\em three-dimensional dominance reporting problem}: given a set $S$ of $n$ three-dimensional points, design a data structure for $S$ so that the points in $S$ which dominate a given query point can be reported quickly. Under the variation of the RAM model introduced by Fredman and Willard~\cite{Fredman94}, our algorithm achieves $O(\log n/\log\log n+f)$ query time, where $f$ is the number of points reported. Extensions to higher dimensions are also reported. (UMIACS-TR-2003-77) %I Institute for Advanced Computer Studies, Univ of Maryland, College Park %V UMIACS-TR-2003-77 %8 2003/08/01/ %G eng %U http://drum.lib.umd.edu/handle/1903/1301 %0 Journal Article %J Advances in Informatics %D 2003 %T The Opsis project: materialized views for data warehouses and the web %A Roussopoulos, Nick %A Kotidis,Y. %A Labrinidis,A. %A Sismanis,Y. %B Advances in Informatics %P 64 - 81 %8 2003/// %G eng %0 Report %D 2003 %T Optimizing Heavily Loaded Agents %A V.S. Subrahmanian %X We develop algorithms to help scale software agents built on top of heterogeneous, legacy codebases. The algorithms apply to large data sets, to large volumes of workloads on agents, as well as to computationally intensive functions. %I Department of Computer Science, University of Maryland, College Park %8 2003/06// %G eng %0 Journal Article %J IEEE Symposium on Information Visualization Conference Compendium (demonstration) %D 2003 %T Overlaying graph links on treemaps %A Fekete,J. D %A Wang,D. %A Dang,N. %A Aris,A.
%A Plaisant, Catherine %X Every graph can be decomposed into a tree structure plus a set of remaining edges. We describe a visualization technique that displays the tree structure as a Treemap and the remaining edges as curved links overlaid on the Treemap. Link curves are designed to show where the link starts and where it ends without requiring an explicit arrow that would clutter the already dense visualization. This technique is effective for visualizing structures where the underlying tree has some meaning, such as Web sites or XML documents with cross-references. Graphic attributes of the links – such as color or thickness – can be used to represent attributes of the edges. Users can choose to see all links at once or only the links to and from the node or branch under the cursor. %B IEEE Symposium on Information Visualization Conference Compendium (demonstration) %8 2003/// %G eng %0 Conference Paper %B Proceedings of the second international conference on Human Language Technology Research %D 2002 %T OCR error correction using a noisy channel model %A Kolak,O. %A Resnik, Philip %B Proceedings of the second international conference on Human Language Technology Research %P 257 - 262 %8 2002/// %G eng %0 Journal Article %J Proc. of the 1st Intl. Symp. on 3D Data Processing, Visualization, and Transmission %D 2002 %T Octree approximation and compression methods %A Samet, Hanan %A Kochut, A. %X Techniques are presented to progressively approximate and compress in a lossless manner two-colored (i.e. binary) 3D objects (as well as objects of arbitrary dimensionality). The objects are represented by a region octree implemented using a pointerless representation based on locational codes. Approximation is achieved through the use of a forest. This method labels the internal nodes of the octree as GB or GW, depending on the number of children being of type GB or GW. In addition, all BLACK nodes are labeled GB, while all WHITE nodes are labeled GW.
A number of different image approximation methods are discussed that make use of a forest. The advantage of these methods is that they are progressive which means that as more of the object is transmitted, the better the approximation. This makes these methods attractive for use on the worldwide web. Progressive transmission has the drawback that there is an overhead in requiring extra storage. A progressive forest-based approximation and transmission method is presented where the total amount of data that is transmitted is not larger than MIN(B,W), where B and W are the number of BLACK and WHITE blocks, respectively, in the region octree of the set of objects. %B Proc. of the 1st Intl. Symp. on 3D Data Processing, Visualization, and Transmission %8 2002/// %G eng %0 Journal Article %J Natural Language Processing and Information Systems %D 2002 %T Omnibase: Uniform access to heterogeneous data for question answering %A Katz,B. %A Felshin,S. %A Yuret,D. %A Ibrahim,A. %A Jimmy Lin %A Marton,G. %A Jerome McFarland,A. %A Temelkuran,B. %X Although the World Wide Web contains a tremendous amount of information, the lack of uniform structure makes finding the right knowledge difficult. A solution is to turn the Web into a “virtual database” and to access it through natural language. We built Omnibase, a system that integrates heterogeneous data sources using an object-property-value model. With the help of Omnibase, our Start natural language system can now access numerous heterogeneous data sources on the Web in a uniform manner, and answers millions of user questions with high precision. %B Natural Language Processing and Information Systems %P 230 - 234 %8 2002/// %G eng %R 10.1007/3-540-36271-1_23 %0 Journal Article %J Software Engineering, IEEE Transactions on %D 2002 %T An operational process for goal-driven definition of measures %A Briand,L.C. %A Morasca,S. %A Basili, Victor R.
%K attributes; %K definition; %K development %K empirical %K engineering; %K expected %K experimental %K goal-driven %K GQM %K GQM/MEDEA; %K hypotheses; %K management; %K mathematical %K measure %K measurement; %K metrics; %K operational %K paradigm; %K process; %K Product %K properties; %K quality; %K software %K verification; %X We propose an approach (GQM/MEDEA) for defining measures of product attributes in software engineering. The approach is driven by the experimental goals of measurement, expressed via the GQM paradigm, and a set of empirical hypotheses. To make the empirical hypotheses quantitatively verifiable, GQM/MEDEA supports the definition of theoretically valid measures for the attributes of interest based on their expected mathematical properties. The empirical hypotheses are subject to experimental verification. This approach integrates several research contributions from the literature into a consistent, practical, and rigorous approach. %B Software Engineering, IEEE Transactions on %V 28 %P 1106 - 1125 %8 2002/12// %@ 0098-5589 %G eng %N 12 %R 10.1109/TSE.2002.1158285 %0 Journal Article %J Image Processing, IEEE Transactions on %D 2002 %T Optimal edge-based shape detection %A Moon, H. %A Chellappa, Rama %A Rosenfeld, A. %K 1D %K 2D %K aerial %K analysis; %K boundary %K conditions; %K contour %K cross %K detection; %K DODE %K double %K edge %K edge-based %K error %K error; %K exponential %K extraction; %K facial %K feature %K filter %K filter; %K Filtering %K function; %K geometry; %K global %K human %K images; %K imaging %K localization %K mean %K methods; %K NOISE %K operator; %K optimal %K optimisation; %K output; %K performance; %K pixel; %K power; %K propagation; %K properties; %K section; %K SHAPE %K square %K squared %K statistical %K step %K theory; %K tracking; %K two-dimensional %K vehicle %K video; %X We propose an approach to accurately detecting two-dimensional (2-D) shapes.
The cross section of the shape boundary is modeled as a step function. We first derive a one-dimensional (1-D) optimal step edge operator, which minimizes both the noise power and the mean squared error between the input and the filter output. This operator is found to be the derivative of the double exponential (DODE) function, originally derived by Ben-Arie and Rao (1994). We define an operator for shape detection by extending the DODE filter along the shape's boundary contour. The responses are accumulated at the centroid of the operator to estimate the likelihood of the presence of the given shape. This method of detecting a shape is in fact a natural extension of the task of edge detection at the pixel level to the problem of global contour detection. This simple filtering scheme also provides a tool for a systematic analysis of edge-based shape detection. We investigate how the error is propagated by the shape geometry. We have found that, under general assumptions, the operator is locally linear at the peak of the response. We compute the expected shape of the response and derive some of its statistical properties. This enables us to predict both its localization and detection performance and adjust its parameters according to imaging conditions and given performance specifications. Applications to the problem of vehicle detection in aerial images, human facial feature detection, and contour tracking in video are presented. %B Image Processing, IEEE Transactions on %V 11 %P 1209 - 1227 %8 2002/11// %@ 1057-7149 %G eng %N 11 %R 10.1109/TIP.2002.800896 %0 Journal Article %J Future Generation Computer Systems %D 2002 %T Optimizing execution of component-based applications using group instances %A Beynon,Michael D. 
%A Kurc,Tahsin %A Sussman, Alan %A Saltz,Joel %K DataCutter %K Grid %K Wide-area network %X Recent research on programming models for developing applications on the Grid has proposed component-based models as a viable approach, in which an application is composed of multiple interacting computational objects. We have been developing a framework, called filter-stream programming, for building data-intensive applications that query, analyze and manipulate very large datasets in a distributed environment. In this model, the processing structure of an application is represented as a set of processing units, referred to as filters. In this paper, we address the problem of scheduling instances of a filter group. A filter group is a set of filters collectively performing a computation for an application. In particular, we seek the answer to the following question: should a new instance be created, or an existing one reused? We experimentally investigate the effects on performance of instantiating multiple filter groups under varying application characteristics. %B Future Generation Computer Systems %V 18 %P 435 - 448 %8 2002/03// %@ 0167-739X %G eng %U http://www.sciencedirect.com/science/article/pii/S0167739X0100070X %N 4 %R 10.1016/S0167-739X(01)00070-X %0 Journal Article %J ACM Transactions on Graphics (TOG) %D 2002 %T Ordered and quantum treemaps: Making effective use of 2D space to display hierarchies %A Bederson, Benjamin B. %A Shneiderman, Ben %A Wattenberg,Martin %K hierarchies %K Human-computer interaction %K image browsers %K Information Visualization %K jazz %K ordered treemaps %K treemaps %K TREES %K zoomable user interfaces (ZUIs). %X Treemaps, a space-filling method for visualizing large hierarchical data sets, are receiving increasing attention. Several algorithms have been previously proposed to create more useful displays by controlling the aspect ratios of the rectangles that make up a treemap.
While these algorithms do improve visibility of small items in a single layout, they introduce instability over time in the display of dynamically changing data, fail to preserve order of the underlying data, and create layouts that are difficult to visually search. In addition, continuous treemap algorithms are not suitable for displaying fixed-sized objects within them, such as images. This paper introduces a new "strip" treemap algorithm which addresses these shortcomings, and analyzes other "pivot" algorithms we recently developed, showing the trade-offs between them. These ordered treemap algorithms ensure that items near each other in the given order will be near each other in the treemap layout. Using experimental evidence from Monte Carlo trials and from actual stock market data, we show that, compared to other layout algorithms, ordered treemaps are more stable, while maintaining relatively favorable aspect ratios of the constituent rectangles. A user study with 20 participants clarifies the human performance benefits of the new algorithms. Finally, we present quantum treemap algorithms, which modify the layout of the continuous treemap algorithms to generate rectangles that are integral multiples of an input object size. The quantum treemap algorithm has been applied to PhotoMesa, an application that supports browsing of large numbers of images. %B ACM Transactions on Graphics (TOG) %V 21 %P 833 - 854 %8 2002/10// %@ 0730-0301 %G eng %U http://doi.acm.org/10.1145/571647.571649 %N 4 %R 10.1145/571647.571649 %0 Conference Paper %B Proceedings of the Working Conference on Advanced Visual Interfaces %D 2002 %T OZONE: A zoomable interface for navigating ontology information %A Suh,B. %A Bederson, Benjamin B.
%B Proceedings of the Working Conference on Advanced Visual Interfaces %P 139 - 143 %8 2002/// %G eng %0 Journal Article %J IEEE Transactions on Information Theory %D 2001 %T The one-inclusion graph algorithm is near-optimal for the prediction model of learning %A Li,Yi %A Long,P. M %A Srinivasan, Aravind %K Computer science %K concept class %K general-purpose algorithm %K graph theory %K learning prediction model %K learning systems %K near-optimal algorithm %K Neural networks %K one-inclusion graph algorithm %K optimisation %K Prediction algorithms %K prediction theory %K Predictive models %K probability %K Probability distribution %K Upper bound %K Vapnik-Chervonenkis dimension %K Virtual colonoscopy %X Haussler, Littlestone and Warmuth (1994) described a general-purpose algorithm for learning according to the prediction model, and proved an upper bound on the probability that their algorithm makes a mistake in terms of the number of examples seen and the Vapnik-Chervonenkis (VC) dimension of the concept class being learned. We show that their bound is within a factor of 1+o(1) of the best possible such bound for any algorithm. %B IEEE Transactions on Information Theory %V 47 %P 1257 - 1261 %8 2001/03// %@ 0018-9448 %G eng %N 3 %R 10.1109/18.915700 %0 Journal Article %J Real-Time Systems %D 2001 %T Operating Systems %A Agrawala, Ashok K. %B Real-Time Systems %8 2001/// %G eng %0 Conference Paper %B KI 2001: advances in artificial intelligence: Joint German/Austrian Conference on AI, Vienna, Austria, September 19-21, 2001: proceedings %D 2001 %T Optimal Agent Selection %A Ozcan,F. %A V.S. Subrahmanian %A Golubchik,L. %X The Internet contains a vast array of sources that provide identical or similar services. When an agent needs to solve a problem, it may split the problem into “subproblems” and find an agent to solve each of the subproblems. Later, it may combine the results of these subproblems to solve the original problem.
In this case, the agent is faced with the task of determining to which agents to assign the subproblems. We call this the agent selection problem (ASP for short). Solving ASP is complex because it must take into account several different parameters. For instance, different agents might take different amounts of time to process a request. Different agents might provide varying "qualities" of answers. Network latencies associated with different agents might vary. In this paper, we first formalize the agent selection problem and show that it is NP-hard. We then propose a generic cost function that is general enough to take into account the costs of (i) network and server loads, (ii) source computations, and (iii) internal mediator costs. We then develop exact and heuristic-based algorithms to solve the agent selection problem. %B KI 2001: advances in artificial intelligence: Joint German/Austrian Conference on AI, Vienna, Austria, September 19-21, 2001: proceedings %P 2 - 2 %8 2001/// %G eng %0 Journal Article %J Mathematical Social Sciences %D 2001 %T Optimal collective dichotomous choice under partial order constraints %A Ben-Yashar,Ruth %A Khuller, Samir %A Kraus,Sarit %K constraints %K Information filtering %K Optimal collective dichotomous choice %X This paper generalizes optimal collective dichotomous choices by including constraints which limit combinations of acceptance and rejection decisions for m projects under consideration. Two types of constraints are examined. The first type occurs when acceptance of some projects requires acceptance of others. This type reduces the choice problem to the tractable (solvable in polynomial time) problem of finding a maximum weight closed subset of a directed acyclic graph. The second type occurs when some projects must be accepted when certain others are rejected. We show that this type renders the choice problem NP-complete by reduction from the problem of Vertex Cover.
Applicability of the generalization to information filtering is discussed. %B Mathematical Social Sciences %V 41 %P 349 - 364 %8 2001/05// %@ 0165-4896 %G eng %U http://www.sciencedirect.com/science/article/pii/S016548960000069X %N 3 %R 10.1016/S0165-4896(00)00069-X %0 Conference Paper %B Proceedings of the IEEE 2nd International Symposium on Bioinformatics and Bioengineering Conference, 2001 %D 2001 %T Optimized seamless integration of biomolecular data %A Eckman,B. A %A Lacroix,Z. %A Raschid, Louiqa %K analysis %K Bioinformatics %K biology computing %K cost based knowledge %K Costs %K Data analysis %K data mining %K data visualisation %K Data visualization %K Data warehouses %K decision support %K digital library %K Educational institutions %K information resources %K Internet %K low cost query evaluation plans %K Mediation %K meta data %K metadata %K molecular biophysics %K multiple local heterogeneous data sources %K multiple remote heterogeneous data sources %K optimized seamless biomolecular data integration %K scientific discovery %K scientific information systems %K semantic knowledge %K software libraries %K visual databases %K Visualization %X Today, scientific data is inevitably digitized, stored in a variety of heterogeneous formats, and is accessible over the Internet. Scientists need to access an integrated view of multiple remote or local heterogeneous data sources. They then integrate the results of complex queries and apply further analysis and visualization to support the task of scientific discovery. Building a digital library for scientific discovery requires accessing and manipulating data extracted from flat files or databases, documents retrieved from the Web, as well as data that is locally materialized in warehouses or is generated by software. We consider several tasks to provide optimized and seamless integration of biomolecular data. 
Challenges to be addressed include capturing and representing source capabilities; developing a methodology to acquire and represent metadata about source contents and access costs; and decision support to select sources and capabilities using cost based and semantic knowledge, and generating low cost query evaluation plans %B Proceedings of the IEEE 2nd International Symposium on Bioinformatics and Bioengineering Conference, 2001 %I IEEE %P 23 - 32 %8 2001/11/04/6 %@ 0-7695-1423-5 %G eng %R 10.1109/BIBE.2001.974408 %0 Conference Paper %B IEEE Symposium on Information Visualization, 2001. INFOVIS 2001 %D 2001 %T Ordered treemap layouts %A Shneiderman, Ben %A Wattenberg,M. %K Clustering algorithms %K Computer science %K Data visualization %K Displays %K Electronic switching systems %K Filling %K Monte Carlo methods %K Read only memory %K Testing %B IEEE Symposium on Information Visualization, 2001. INFOVIS 2001 %I IEEE %P 73 - 78 %8 2001/// %@ 0-7695-7342-5 %G eng %R 10.1109/INFVIS.2001.963283 %0 Journal Article %J International Journal of Computer Vision %D 2000 %T Observability of 3D Motion %A Fermüller, Cornelia %A Aloimonos, J. %X This paper examines the inherent difficulties in observing 3D rigid motion from image sequences. It does so without considering a particular estimator. Instead, it presents a statistical analysis of all the possible computational models which can be used for estimating 3D motion from an image sequence. These computational models are classified according to the mathematical constraints that they employ and the characteristics of the imaging sensor (restricted field of view and full field of view). Regarding the mathematical constraints, there exist two principles relating a sequence of images taken by a moving camera. One is the “epipolar constraint,” applied to motion fields, and the other the “positive depth” constraint, applied to normal flow fields. 3D motion estimation amounts to optimizing these constraints over the image. 
A statistical modeling of these constraints leads to functions which are studied with regard to their topographic structure, specifically as regards the errors in the 3D motion parameters at the places representing the minima of the functions. For conventional video cameras possessing a restricted field of view, the analysis shows that for algorithms in both classes which estimate all motion parameters simultaneously, the obtained solution has an error such that the projections of the translational and rotational errors on the image plane are perpendicular to each other. Furthermore, the estimated projection of the translation on the image lies on a line through the origin and the projection of the real translation. The situation is different for a camera with a full (360 degree) field of view (achieved by a panoramic sensor or by a system of conventional cameras). In this case, at the locations of the minima of the above two functions, either the translational or the rotational error becomes zero, while in the case of a restricted field of view both errors are non-zero. Although some ambiguities still remain in the full field of view case, the implication is that visual navigation tasks, such as visual servoing, involving 3D motion estimation are easier to solve by employing panoramic vision. Also, the analysis makes it possible to compare properties of algorithms that first estimate the translation and on the basis of the translational result estimate the rotation, algorithms that do the opposite, and algorithms that estimate all motion parameters simultaneously, thus providing a sound framework for the observability of 3D motion. Finally, the introduced framework points to new avenues for studying the stability of image-based servoing schemes. 
%B International Journal of Computer Vision %V 37 %P 43 - 63 %8 2000/// %@ 0920-5691 %G eng %U http://dx.doi.org/10.1023/A:1008177429387 %N 1 %0 Conference Paper %B ICPR %D 2000 %T Off-line Skilled Forgery Detection using Stroke and Substroke Features %A Guo,K. %A David Doermann %A Rosenfeld, A. %B ICPR %V 2 %P 355 - 359 %8 2000/// %G eng %0 Conference Paper %B IEEE INFOCOM 2000. Nineteenth Annual Joint Conference of the IEEE Computer and Communications Societies. Proceedings %D 2000 %T Optimal design of signaling networks for Internet telephony %A Srinivasan, Aravind %A Ramakrishnan,K. G %A Kumaran,K. %A Aravamudan,M. %A Naqvi,S. %K bandwidth allocation %K Computational efficiency %K Computer networks %K Cost function %K Demand forecasting %K demand forecasts %K graph theory %K graphical design tool %K Internet telephony %K Linear programming %K Load forecasting %K Load management %K Network topology %K optimal design %K optimal load balancing %K optimisation %K performance %K quadratic assignment problem %K random graphs %K randomised algorithms %K randomized heuristics %K Signal design %K signaling networks %K Switches %K telecommunication signalling %K topology design %X We present an approach for efficient design of a signaling network for a network of software switches supporting Internet telephony. While one may take an integer programming approach to solve this problem, it quickly becomes intractable even for modest-sized networks. Instead, our topology design uses random graphs that we show to be nearly optimal in cost, highly connected, and computationally efficient even for large networks. We then formulate a quadratic assignment problem (QAP) to map the abstract topology into the physical network to achieve optimal load balancing for given demand forecasts, which we solve using randomized heuristics. Numerical results on several example networks illustrate the performance and computational efficiency of our method. 
A graphical design tool has been developed based on our algorithms %B IEEE INFOCOM 2000. Nineteenth Annual Joint Conference of the IEEE Computer and Communications Societies. Proceedings %I IEEE %V 2 %P 707-716 vol.2 %8 2000/// %@ 0-7803-5880-5 %G eng %R 10.1109/INFCOM.2000.832245 %0 Conference Paper %B Parallel and Distributed Processing Symposium, 2000. IPDPS 2000. Proceedings. 14th International %D 2000 %T Optimizing retrieval and processing of multi-dimensional scientific datasets %A Chang,Chialin %A Kurc, T. %A Sussman, Alan %A Saltz, J. %K active data repository %K Area measurement %K Computer science %K Data analysis %K distributed memory parallel machines %K Educational institutions %K Information retrieval %K Information Storage and Retrieval %K infrastructure %K Microscopy %K Microwave integrated circuits %K multi-dimensional scientific datasets retrieval %K PARALLEL PROCESSING %K Pathology %K range queries %K regular d-dimensional array %K Satellites %K Tomography %X We have developed the Active Data Repository (ADR), an infrastructure that integrates storage, retrieval, and processing of large multi-dimensional scientific datasets on distributed memory parallel machines with multiple disks attached to each node. In earlier work, we proposed three strategies for processing range queries within the ADR framework. Our experimental results show that the relative performance of the strategies changes under varying application characteristics and machine configurations. In this work we investigate approaches to guide and automate the selection of the best strategy for a given application and machine configuration. We describe analytical models to predict the relative performance of the strategies where input data elements are uniformly distributed in the attribute space of the output dataset, restricting the output dataset to be a regular d-dimensional array %B Parallel and Distributed Processing Symposium, 2000. IPDPS 2000. Proceedings.
14th International %I IEEE %P 405 - 410 %8 2000/// %@ 0-7695-0574-0 %G eng %R 10.1109/IPDPS.2000.846013 %0 Journal Article %J Vision Research %D 2000 %T The Ouchi illusion as an artifact of biased flow estimation %A Fermüller, Cornelia %A Pless,Robert %A Aloimonos, J. %K Bias %K MOTION %K optical flow %K Plaid %K Statistics %X A pattern by Ouchi has the surprising property that small motions can cause illusory relative motion between the inset and background regions. The effect can be attained with small retinal motions or a slight jiggling of the paper and is robust over large changes in the patterns, frequencies and boundary shapes. In this paper, we explain that the cause of the illusion lies in the statistical difficulty of integrating local one-dimensional motion signals into two-dimensional image velocity measurements. The estimation of image velocity generally is biased, and for the particular spatial gradient distributions of the Ouchi pattern the bias is highly pronounced, giving rise to a large difference in the velocity estimates in the two regions. The computational model introduced to describe the statistical estimation of image velocity also accounts for the findings of psychophysical studies with variations of the Ouchi pattern and for various findings on the perception of moving plaids. The insight gained from this computational study challenges the current models used to explain biological vision systems and to construct robotic vision systems. Considering the statistical difficulties in image velocity estimation in conjunction with the problem of discontinuity detection in motion fields suggests that theoretically the process of optical flow computations should not be carried out in isolation but in conjunction with the higher level processes of 3D motion estimation, segmentation and shape computation. 
%B Vision Research %V 40 %P 77 - 95 %8 2000/01// %@ 0042-6989 %G eng %U http://www.sciencedirect.com/science/article/pii/S0042698999001625 %N 1 %R 10.1016/S0042-6989(99)00162-5 %0 Journal Article %J Parallel Processing Letters %D 1999 %T Object-relational queries into multidimensional databases with the active data repository %A Ferreira,R. %A Kurc, T. %A Beynon, M. %A Chang,C. %A Sussman, Alan %A Saltz,J. H %B Parallel Processing Letters %V 9 %P 173 - 195 %8 1999/// %G eng %N 2 %0 Report %D 1999 %T On Orthogonalization in the Inverse Power Method %A Stewart, G.W. %K Technical Report %X When the inverse power method is used to compute eigenvectors of a symmetric matrix corresponding to close eigenvalues, the computed eigenvectors may not be orthogonal. The cure for the problem is to orthogonalize the vectors using the Gram--Schmidt algorithm. In this note it is shown that the orthogonalization process does not cause the quality of the eigenvectors to deteriorate. Also cross-referenced as UMIACS-TR-99-64 %I Institute for Advanced Computer Studies, Univ of Maryland, College Park %V UMIACS-TR-99-64 %8 1999/10/13/ %G eng %U http://drum.lib.umd.edu/handle/1903/1038 %0 Journal Article %J Technical Reports from UMIACS, UMIACS-TR-99-16 %D 1999 %T OSMA Software Program: Domain Analysis Guidebook %A Basili, Victor R. %A Seaman,Carolyn %A Tesoriero,Roseanne %A Zelkowitz, Marvin V %K Technical Report %X Domain analysis is the process of identifying and organizing knowledge about a class of problems. This guidebook presents a method of performing experience domain analysis in software development organizations. The purpose of the guidebook is to facilitate the reader in characterizing two given development environments, applying domain analysis to model each, and then applying an evaluation process, based upon the Goal/Question/Metric paradigm, to transfer a given development technology from one of the environments to the other.
This guidebook describes this process and gives an example of its use within NASA. Also cross-referenced as UMIACS-TR-99-16 %B Technical Reports from UMIACS, UMIACS-TR-99-16 %8 1999/03/31/ %G eng %U http://drum.lib.umd.edu/handle/1903/1000 %0 Book Section %B Readings in information visualization %D 1999 %T Overview+ detail %A Card,S.K. %A Mackinlay,J.D. %A Shneiderman, Ben %B Readings in information visualization %P 285 - 286 %8 1999/// %G eng %0 Report %D 1998 %T An On-line Variable Length Binary Encoding %A Acharya,Tinku %A JaJa, Joseph F. %K Technical Report %X We present a methodology of an on-line variable-length binary encoding of a set of integers. The basic principle of this methodology is to maintain the prefix property amongst the codes assigned on-line to a set of integers growing dynamically. The prefix property enables unique decoding of a string of elements from this set. To show the utility of this on-line variable length binary encoding, we apply this methodology to encode the LZW codes. Application of this encoding scheme significantly improves the compression achieved by the standard LZW scheme. This encoding can be applied in other compression schemes to encode the pointers using variable-length binary codes. (Also cross-referenced as UMIACS-TR-95-39) %I Institute for Advanced Computer Studies, Univ of Maryland, College Park %V UMIACS-TR-95-39 %8 1998/10/15/ %G eng %U http://drum.lib.umd.edu/handle/1903/714 %0 Journal Article %J Journal of the ACM (JACM) %D 1998 %T An optimal algorithm for approximate nearest neighbor searching in fixed dimensions %A Arya,Sunil %A Mount, Dave %A Netanyahu,Nathan S. %A Silverman,Ruth %A Wu,Angela Y.
%K Approximation algorithms %K Box-decomposition trees %K closest-point queries %K nearest neighbor searching %K post-office problem %K priority search %X Consider a set S of n data points in real d-dimensional space, R^d, where distances are measured using any Minkowski metric. In nearest neighbor searching, we preprocess S into a data structure, so that given any query point q ∈ R^d, the closest point of S to q can be reported quickly. Given any positive real ε, data point p is a (1 + ε)-approximate nearest neighbor of q if its distance from q is within a factor of (1 + ε) of the distance to the true nearest neighbor. We show that it is possible to preprocess a set of n points in R^d in O(dn log n) time and O(dn) space, so that given a query point q ∈ R^d, and ε > 0, a (1 + ε)-approximate nearest neighbor of q can be computed in O(c_{d,ε} log n) time, where c_{d,ε} ≤ d⌈1 + 6d/ε⌉^d is a factor depending only on dimension and ε. In general, we show that given an integer k ≥ 1, (1 + ε)-approximations to the k nearest neighbors of q can be computed in additional O(kd log n) time. %B Journal of the ACM (JACM) %V 45 %P 891 - 923 %8 1998/11// %@ 0004-5411 %G eng %U http://doi.acm.org/10.1145/293347.293348 %N 6 %R 10.1145/293347.293348 %0 Conference Paper %B Workshop Web Inf. Data Management (WIDM), Washington DC %D 1998 %T Optimization of wrappers and mediators for web accessible data sources (websources) %A Bright,L. %A Raschid, Louiqa %A Vidal,M. E %B Workshop Web Inf. Data Management (WIDM), Washington DC %8 1998/// %G eng %0 Journal Article %J SIAM Journal on Matrix Analysis and Applications %D 1998 %T Overcoming Instability In Computing The Fundamental Matrix For A Markov Chain %A Heyman,Daniel P. %A O'Leary, Dianne P. %K decision process %K fundamental matrix %K Markov chains %X We present an algorithm for solving linear systems involving the probability or rate matrix for a Markov chain.
It is based on a UL factorization but works only with a submatrix of the factor U. We demonstrate its utility on Erlang-B models as well as more complicated models of a telephone multiplexing system. %B SIAM Journal on Matrix Analysis and Applications %V 19 %P 534 - 540 %8 1998/// %G eng %U http://link.aip.org/link/?SML/19/534/1 %N 2 %R 10.1137/S0895479896301753 %0 Journal Article %J Flexible Query Answering Systems %D 1998 %T An overview of cooperative answering in databases %A Minker, Jack %X The field of cooperative answering goes back to work started by Joshi and Webber [12] in natural language processing in the early 1980s at the University of Pennsylvania. The work was applied to databases and information systems at the University of Pennsylvania by Kaplan [14, 15] and Mays [17]. Other early work at the University of Pennsylvania and at other universities is discussed in [25, 13, 26, 16, 18, 24]. Databases and knowledge base systems are often difficult to use because they do not attempt to cooperate with their users. A database or a knowledge base query system provides literal answers to queries posed to them. Such answers to queries may not always be the best answers. Instead, an answer with extra or alternative information may be more useful and less misleading to a user. This lecture surveys foundational work that has been done toward developing database and knowledge base systems with the ability to exhibit cooperative behavior. In the 1970s, Grice [11] proposed maxims of cooperative conversation. These maxims provide the starting point for the field of cooperative answering. %B Flexible Query Answering Systems %P 282 - 285 %8 1998/// %G eng %R DOI: 10.1007/BFb0056009 %0 Conference Paper %B Image Processing, 1997. Proceedings., International Conference on %D 1997 %T OCR-based rate-distortion analysis of residual coding %A Kia,O.
E %A David Doermann %K analysis;redundancy;representative %K character %K coding;distortion %K coding;image %K coding;lossy %K coding;row-order %K coding;symbolic %K compression;data %K compression;document %K compression;lossy %K database %K distortion %K Evaluation %K image %K images;document %K images;experiments;ground %K measure;document %K OCR %K of %K performance;University %K processing;distance-order %K processing;image %K prototypes;residual %K recognition;rate %K representation;optical %K representation;progressive %K software;OCR %K system %K theory; %K transmission;rate-distortion %K truth;image %K Washington;compressed-domain %X Symbolic compression of document images provides access to symbols found in document images and exploits the redundancy found within them. Document images are highly structured and contain large numbers of repetitive symbols. We have shown that while symbolically compressing a document image we are able to perform compressed-domain processing. Symbolic compression forms representative prototypes for symbols and encodes the image by the location of these prototypes and a residual (the difference between symbol and prototype). We analyze the rate-distortion tradeoff by varying the amount of residual used in compression for both distance- and row-order coding. A measure of distortion is based on the performance of an OCR system on the resulting image. The University of Washington document database images, ground truth, and OCR evaluation software are used for experiments %B Image Processing, 1997. Proceedings., International Conference on %V 3 %P 690 - 693 vol.3 %8 1997/10// %G eng %R 10.1109/ICIP.1997.632215 %0 Journal Article %J Proceedings of the ASAP97 %D 1997 %T Optimized software synthesis for synchronous dataflow %A Bhattacharyya, Shuvra S. %A America,H.
%B Proceedings of the ASAP97 %8 1997/// %G eng %0 Journal Article %J Signal Processing, IEEE Transactions on %D 1997 %T Optimizing synchronization in multiprocessor DSP systems %A Bhattacharyya, Shuvra S. %A Sriram,S. %A Lee,E. A %B Signal Processing, IEEE Transactions on %V 45 %P 1605 - 1618 %8 1997/// %G eng %N 6 %0 Conference Paper %B Proceedings of the SIGMETRICS symposium on Parallel and distributed tools %D 1996 %T An online computation of critical path profiling %A Hollingsworth, Jeffrey K %B Proceedings of the SIGMETRICS symposium on Parallel and distributed tools %P 11 - 20 %8 1996/// %G eng %0 Journal Article %J Information Sciences %D 1996 %T An on-line variable-length binary encoding of text %A Acharya,Tinku %A JaJa, Joseph F. %X We present a methodology for on-line variable-length binary encoding of a dynamically growing set of integers. Our encoding maintains the prefix property that enables unique decoding of a string of integers from the set. In order to develop the formalism of this on-line binary encoding, we define a unique binary tree data structure called the “phase in binary tree.” To show the utility of this on-line variable-length binary encoding, we apply this methodology to encode the pointers generated by the LZW algorithm. The experimental results obtained illustrate the superior performance of our algorithm compared to the most widely used algorithms. This on-line variable-length binary encoding can be applied in other dictionary-based text compression schemes as well to effectively encode the output pointers to enhance the compression ratio. %B Information Sciences %V 94 %P 1 - 22 %8 1996/10// %@ 0020-0255 %G eng %U http://www.sciencedirect.com/science/article/pii/0020025596000898 %N 1–4 %R 10.1016/0020-0255(96)00089-8 %0 Journal Article %J PPL-Parallel Processing Letters %D 1996 %T An optimal randomized parallel algorithm for the single function coarsest partition problem %A JaJa, Joseph F. %A Ryu,K. W. 
%X We describe a randomized parallel algorithm to solve the single function coarsest partition problem. The algorithm runs in O(log n) time using O(n) operations with high probability on the Priority CRCW PRAM. The previous best known algorithms run in O(log^2 n) time using O(n log^2 n) operations on the CREW PRAM and O(log n) time using O(n log log n) operations on the Arbitrary CRCW PRAM. The technique presented can be used to generate the Euler tour of a rooted tree optimally from the parent representation. %B PPL-Parallel Processing Letters %V 6 %P 187 - 194 %8 1996/// %G eng %N 2 %R 10.1142/S0129626496000182 %0 Conference Paper %B Visualization '96. Proceedings. %D 1996 %T Optimizing triangle strips for fast rendering %A Evans,F. %A Skiena,S. %A Varshney, Amitabh %K buffer %K data;triangulated %K disciplines;rendering %K model %K models;polygonal %K optimisation;triangulated %K partitioning;queuing %K rendering;graphics %K sizes;fast %K strip %K subsystem;interactive %K surfaces;data %K times;scientific %K triangulated %K visualisation; %K visualization;partially %K visualization;triangle %X Almost all scientific visualization involving surfaces is currently done via triangles. The speed at which such triangulated surfaces can be displayed is crucial to interactive visualization and is bounded by the rate at which triangulated data can be sent to the graphics subsystem for rendering. Partitioning polygonal models into triangle strips can significantly reduce rendering times over transmitting each triangle individually. We present new and efficient algorithms for constructing triangle strips from partially triangulated models, and experimental results showing these strips are on average 15% better than those from previous codes. Further, we study the impact of larger buffer sizes and various queuing disciplines on the effectiveness of triangle strips. %B Visualization '96. Proceedings.
%P 319 - 326 %8 1996/11/27/1 %G eng %R 10.1109/VISUAL.1996.568125 %0 Journal Article %J Proc. Image Understanding Workshop %D 1996 %T Ordinal representations of visual space %A Fermüller, Cornelia %A Aloimonos, J. %B Proc. Image Understanding Workshop %P 897 - 904 %8 1996/// %G eng %0 Conference Paper %B PROCEEDINGS OF THE INTERNATIONAL CONFERENCE ON PARALLEL PROCESSING %D 1995 %T An Optimal Ear Decomposition Algorithm with Applications on Fixed-Size Linear Arrays %A Huang,Y. M. %A JaJa, Joseph F. %B PROCEEDINGS OF THE INTERNATIONAL CONFERENCE ON PARALLEL PROCESSING %8 1995/// %G eng %0 Journal Article %J Static Analysis %D 1995 %T Optimality in abstractions of model checking %A Cleaveland, Rance %A Iyer,P. %A Yankelevich,D. %B Static Analysis %P 51 - 63 %8 1995/// %G eng %0 Conference Paper %B Proceedings of the Fourth Workshop on Enabling Technologies: Infrastructure for Collaborative Enterprises, 1995 %D 1995 %T Organization overviews and role management: inspiration for future desktop environments %A Plaisant, Catherine %A Shneiderman, Ben %K Asia %K bank data processing %K Databases %K Environmental economics %K Environmental management %K future desktop environments %K Graphical user interfaces %K human resource management %K management information systems %K Management training %K multiple personal roles %K office automation %K organization overviews %K personnel %K Project management %K Prototypes %K role management %K role-centered approach %K scheduling %K semi-automated searches %K User interfaces %K Utility programs %K World Bank %X In our exploration of future work environments for the World Bank we proposed two concepts. First, organization overviews provide a consistent support to present the results of a variety of manual or semi-automated searches. Second this view can be adapted or expanded for each class of users to finally map the multiple personal roles an individual has in an organization. 
After command line interfaces, graphical user interfaces, and the current "docu-centric" designs, a natural direction is towards a role-centered approach where we believe the emphasis is on the management of those multiple roles. Large visual overviews of the organization can be rapidly manipulated and zoomed in on to reveal the multiple roles each individual plays. Each role involves coordination with groups of people and accomplishment of tasks within a schedule %B Proceedings of the Fourth Workshop on Enabling Technologies: Infrastructure for Collaborative Enterprises, 1995 %I IEEE %P 14 - 22 %8 1995/04/20/22 %@ 0-8186-7019-3 %G eng %R 10.1109/ENABL.1995.484544 %0 Conference Paper %B Proceedings of the 21st ACM SIGPLAN-SIGACT symposium on Principles of programming languages %D 1994 %T An operational framework for value-passing processes %A Cleaveland, Rance %A Yankelevich,Daniel %X This paper develops a semantic framework for concurrent languages with value passing. An operation analogous to substitution in the λ-calculus is given, and a semantics is given for a value-passing version of Milner's Calculus of Communicating Systems (CCS). An operational equivalence is then defined and shown to coincide with Milner's (early) bisimulation equivalence. We also show how semantics may be given for languages with asynchronous communication primitives. In contrast with existing approaches to value passing, this semantics does not reduce data exchange to pure synchronization over (potentially infinite) families of ports indexed by data, and it avoids variable renamings that are not local to processes engaged in communication.
%B Proceedings of the 21st ACM SIGPLAN-SIGACT symposium on Principles of programming languages %S POPL '94 %I ACM %C New York, NY, USA %P 326 - 338 %8 1994/// %@ 0-89791-636-0 %G eng %U http://doi.acm.org/10.1145/174675.177941 %R 10.1145/174675.177941 %0 Conference Paper %B Proceedings of the fifth annual ACM-SIAM symposium on Discrete algorithms %D 1994 %T Optimal parallel approximation for prefix sums and integer sorting %A Goodrich,M. T %A Matias,Y. %A Vishkin, Uzi %B Proceedings of the fifth annual ACM-SIAM symposium on Discrete algorithms %P 241 - 250 %8 1994/// %G eng %0 Conference Paper %B Proceedings of the fifth annual ACM-SIAM symposium on Discrete algorithms %D 1994 %T Optimal randomized parallel algorithms for computing the row maxima of a totally monotone matrix %A Raman,R. %A Vishkin, Uzi %B Proceedings of the fifth annual ACM-SIAM symposium on Discrete algorithms %P 613 - 621 %8 1994/// %G eng %0 Conference Paper %B Proceedings 2nd North American Process Algebra Workshop, Ithaca, New York %D 1993 %T An operational semantics of value passing %A Cleaveland, Rance %B Proceedings 2nd North American Process Algebra Workshop, Ithaca, New York %8 1993/// %G eng %0 Journal Article %J Parallel and Distributed Systems, IEEE Transactions on %D 1993 %T Optimal algorithms on the pipelined hypercube and related networks %A JaJa, Joseph F. %A Ryu,K. W. %K algorithms;pipeline %K algorithms;pipelined %K combinatorial %K geometry;parallel %K hypercube;shuffle-exchange;combinatorial %K mathematics;computational %K packing;monotone %K polygon;parallel %K problems;cube-connected-cycles;line %K processing; %X Parallel algorithms for several important combinatorial problems such as the all nearest smaller values problem, triangulating a monotone polygon, and line packing are presented. 
These algorithms achieve linear speedups on the pipelined hypercube, and provably optimal speedups on the shuffle-exchange and the cube-connected-cycles for any number p of processors satisfying 1 ≤ p ≤ n/((log³n)(log log n)²), where n is the input size. The lower bound results are established under no restriction on how the input is mapped into the local memories of the different processors. %B Parallel and Distributed Systems, IEEE Transactions on %V 4 %P 582 - 591 %8 1993/05// %@ 1045-9219 %G eng %N 5 %R 10.1109/71.224210 %0 Journal Article %J Journal of Algorithms %D 1993 %T Optimal doubly logarithmic parallel algorithms based on finding all nearest smaller values %A Berkman,O. %A Schieber,B. %A Vishkin, Uzi %B Journal of Algorithms %V 14 %P 344 - 370 %8 1993/// %G eng %N 3 %0 Journal Article %J SIAM Journal on Numerical Analysis %D 1993 %T Ordering Effects on Relaxation Methods Applied to the Discrete One-Dimensional Convection-Diffusion Equation %A Elman, Howard %A Chernesky, Michael P. %X The authors present an analysis of relaxation methods for the one-dimensional discrete convection-diffusion equation based on norms of the iteration matrices. In contrast to standard analytic techniques that use spectral radii, these results show how the performance of iterative solvers is affected by directions of flow associated with the underlying operator, and by orderings of the discrete grid points. In particular, for problems of size n, relaxation against the flow incurs a latency of approximately n steps in which convergence is slow, and red-black relaxation incurs a latency of approximately n/2 steps. There is no latency associated with relaxation that follows the flow. These results are largely independent of the choice of discretization. 
%B SIAM Journal on Numerical Analysis %V 30 %P 1268 - 1290 %8 1993/10/01/ %@ 0036-1429 %G eng %U http://www.jstor.org/stable/2158237 %N 5 %0 Journal Article %J Communications of the ACM %D 1992 %T The Omega test: a fast and practical integer programming algorithm for dependence analysis %A Pugh, William %B Communications of the ACM %V 35 %P 102 - 114 %8 1992/// %G eng %N 8 %0 Journal Article %J Algorithmica %D 1991 %T Optimal algorithms for adjacent side routing %A Wu,S.A. %A JaJa, Joseph F. %X We consider the switchbox routing problem of two-terminal nets in the case when all the k nets lie on two adjacent sides of the rectangle. Our routing model is the standard two-layer model. We develop an optimal algorithm that routes all the nets whenever a routing exists. The routing obtained uses the fewest possible number of vias. A more general version of this problem (adjacent staircase) is also optimally solved. %B Algorithmica %V 6 %P 565 - 578 %8 1991/// %G eng %N 1 %R 10.1007/BF01759060 %0 Conference Paper %B Computer Vision and Pattern Recognition, 1991. Proceedings CVPR '91., IEEE Computer Society Conference on %D 1991 %T Optimal matching of planar models in 3D scenes %A Jacobs, David W. %K 3D scenes %K bounded sensing error %K close approximation %K flat object %K image %K maximum error %K model features %K optimal matching %K planar models %K point features %K computerised pattern recognition %K computerised picture processing %X The problem of matching a model consisting of the point features of a flat object to point features found in an image that contains the object in an arbitrary three-dimensional pose is addressed. Once three points are matched, it is possible to determine the pose of the object. Assuming bounded sensing error, the author presents a solution to the problem of determining the range of possible locations in the image at which any additional model points may appear. 
This solution leads to an algorithm for determining the largest possible matching between image and model features that includes this initial hypothesis. The author implements a close approximation to this algorithm, which is O(nmε⁶), where n is the number of image points, m is the number of model points, and ε is the maximum sensing error. This algorithm is compared to existing methods, and it is shown that it produces more accurate results. %B Computer Vision and Pattern Recognition, 1991. Proceedings CVPR '91., IEEE Computer Society Conference on %P 269 - 274 %8 1991/06// %G eng %R 10.1109/CVPR.1991.139700 %0 Journal Article %J Hypermedia/Hypertext And Object-Oriented Databases. Chapman & Hall %D 1991 %T An overview of Hyperties, its user interface and data model %A Plaisant, Catherine %B Hypermedia/Hypertext And Object-Oriented Databases. Chapman & Hall %P 17 - 31 %8 1991/// %G eng %0 Report %D 1990 %T On-line algorithms for weighted matching and stable marriages %A Khuller, Samir %A Mitchell,S. G %A Vazirani,V. V %X We give an on-line deterministic algorithm for the bipartite weighted matching problem that achieves a competitive ratio of $O(n)$. In fact, this algorithm is almost optimal - the lower bound on the performance ratio of any deterministic on-line algorithm is $\Omega(n/\sqrt{\log n})$. We also study the stable marriage problem, where we are interested in the number of unstable pairs produced. We show that the simple "first come, first served" deterministic algorithm yields on the average $O(n \log n)$ unstable pairs, but in the worst case no deterministic or randomized on-line algorithm can do better than $\Omega(n^{2})$ unstable pairs. %I Cornell University %V TR90-1143 %8 1990/// %G eng %0 Journal Article %J Information Systems %D 1990 %T Optimal view caching %A Amir,Amihood %A Roussopoulos, Nick %X A view in a database is a subset of records selected from several files to satisfy some condition, e.g. 
the join of two relations. It has been shown that it is more efficient to store pointers to the satisfying records of the base relations rather than the complete records of the view. We are interested in efficient caching of such pointers (in effect constructing the view) when the database lies in secondary storage and only a limited number of buffers exists in memory. A view caching is optimal if it is done with the minimum number of reads from secondary storage. We prove that optimal view caching is NP-complete. %B Information Systems %V 15 %P 169 - 171 %8 1990/// %@ 0306-4379 %G eng %U http://www.sciencedirect.com/science/article/pii/030643799090032K %N 2 %R 10.1016/0306-4379(90)90032-K %0 Conference Paper %B Workshop on Visual Motion, 1989., Proceedings %D 1989 %T Optimal motion estimation %A Spetsakis, M. E %A Aloimonos, J. %K 3D motion interpretation %K Automation %K Computer vision %K computerised pattern recognition %K computerised picture processing %K constraint minimization %K dynamic imagery %K Educational institutions %K feature correspondences %K Gaussian noise %K Image motion analysis %K Laboratories %K maximum-likelihood-principle %K Minimization methods %K Motion analysis %K Motion estimation %K motion parameters %K moving object %K multiple frames %K nonlinear constraint %K Optical computing %K optimal motion estimation %K parameter estimation %K quadratic minimization %K quadratic programming %K redundancy %K rigidity assumption %K robotics applications %K structure-from-motion %K successive images %K two-frame %X The problem of using feature correspondences to recover the structure and 3D motion of a moving object from its successive images is analyzed. The authors formulate the problem as a quadratic minimization problem with a nonlinear constraint. Then they derive the condition for the solution to be optimal under the assumption of Gaussian noise in the input, in the maximum-likelihood-principle sense. 
The authors present two efficient ways to approximate it and discuss some inherent limitations of the structure-from-motion problem when two frames are used that should be taken into account in robotics applications that involve dynamic imagery. Finally, it is shown that some of the difficulties inherent in the two-frame approach disappear when redundancy in the data is introduced. This is concluded from experiments using a structure-from-motion algorithm that is based on multiple frames and uses only the rigidity assumption. %B Workshop on Visual Motion, 1989., Proceedings %I IEEE %P 229 - 237 %8 1989/03/20/22 %@ 0-8186-1903-1 %G eng %R 10.1109/WVM.1989.47114 %0 Report %D 1989 %T Optimal parallel algorithms for one-layer routing %A Chang,S. C. %A JaJa, Joseph F. %A Ryu,K. W. %I Institute for Advanced Computer Studies, Univ of Maryland, College Park %8 1989/// %G eng %0 Journal Article %J Computer Physics Communications %D 1989 %T Ordering techniques for the preconditioned conjugate gradient method on parallel computers %A Elman, Howard %A Agrón, Elvira %X We consider the parallel implementation of the preconditioned conjugate gradient method using multicolor incomplete factorization as preconditioners. We discuss numerical experiments on sample problems arising from elliptic partial differential equations, together with an analytic study of the effects of communication and arithmetic costs on loosely coupled architectures. Our main conclusion is that multicolor orderings result in slower convergence of the preconditioned conjugate gradient method than natural orderings, but that the lower parallel costs of the multicolor techniques typically make their overall performance better. 
%B Computer Physics Communications %V 53 %P 253 - 269 %8 1989/05// %@ 0010-4655 %G eng %U http://www.sciencedirect.com/science/article/pii/0010465589901641 %N 1-3 %R 10.1016/0010-4655(89)90164-1 %0 Report %D 1988 %T Optimal Architectures for Multidimensional Transforms %A Chakrabarti,Chaitali %A JaJa, Joseph F. %K Technical Report %X Multidimensional transforms have widespread applications in computer vision, pattern analysis and image processing. The only existing optimal architecture for computing the multidimensional DFT on data of size n = N^d requires very large rotator units of area O(n^2) and pipeline-time O(log n). In this paper we propose a family of optimal architectures with area-time trade-offs for computing multidimensional transforms. The large rotator unit is replaced by a combination of a small rotator unit, a transpose unit and a block rotator unit. The combination has an area of O(N^(d+2a)) and a pipeline time of O(N^(d/2-a) log n), for 0 < a < d/2. We apply this scheme to design optimal architectures for the two-dimensional DFT, DHT and DCT. The computation is made efficient by mapping each of the one-dimensional transforms involved into two dimensions. %I Institute for Systems Research, University of Maryland, College Park %V ISR-TR-1988-39 %8 1988/// %G eng %U http://drum.lib.umd.edu/handle/1903/4770 %0 Conference Paper %B Proceedings of Second International Conference on Computer Vision %D 1988 %T Optimal Computing Of Structure From Motion Using Point Correspondences In Two Frames %A Spetsakis, M. E %A Aloimonos, J. 
%K Automation %K Computer vision %K Educational institutions %K Gaussian noise %K Image motion analysis %K Laboratories %K Least squares approximation %K Least squares methods %K Motion estimation %K Optical computing %B Proceedings of Second International Conference on Computer Vision %I IEEE %P 449 - 453 %8 1988/12/05/8 %@ 0-8186-0883-8 %G eng %R 10.1109/CCV.1988.590022 %0 Conference Paper %B VLSI Algorithms and Architectures %D 1988 %T Optimal parallel algorithms for expression tree evaluation and list ranking %A Cole,R. %A Vishkin, Uzi %B VLSI Algorithms and Architectures %P 91 - 100 %8 1988/// %G eng %0 Report %D 1988 %T Optimal Systolic Designs for the Computation of the Discrete Hartley and the Discrete Cosine Transforms %A Chakrabarti,Chaitali %A JaJa, Joseph F. %K *ALGORITHMS %K *BINARY ARITHMETIC %K Computer architecture %K DCT(DISCRETE COSINE TRANSFORM) %K DISCRETE FOURIER TRANSFORMS %K ITERATIONS %K NUMERICAL MATHEMATICS %K TWO DIMENSIONAL %X In this paper, we propose new algorithms for computing the Discrete Hartley and the Discrete Cosine Transform. The algorithms are based on iterative applications of the modified small-n algorithms for the DFT. The one-dimensional transforms are mapped into two dimensions first and then implemented on two-dimensional systolic arrays. Pipelined bit-serial architectures operating on left-to-right, LSB-to-MSB binary arithmetic are the basis of the hardware design. Different hardware schemes for implementing these transforms are studied. We show that our schemes achieve a substantial speed-up over existing schemes. 
%I Institute for Systems Research, University of Maryland, College Park %8 1988/// %G eng %U http://stinet.dtic.mil/oai/oai?&verb=getRecord&metadataPrefix=html&identifier=ADA452389 %0 Journal Article %J Parallel and Distributed Computing %D 1987 %T An optimal parallel algorithm for selection %A Vishkin, Uzi %B Parallel and Distributed Computing %V 4 %P 79 - 86 %8 1987/// %G eng %0 Conference Paper %B Foundations of Computer Science, 1987., 28th Annual Symposium on %D 1987 %T An output sensitive algorithm for computing visibility graphs %A Ghosh,Subir Kumar %A Mount, Dave %X The visibility graph of a set of nonintersecting polygonal obstacles in the plane is an undirected graph whose vertices are the vertices of the obstacles and whose edges are pairs of vertices (u, v) such that the open line segment between u and v does not intersect any of the obstacles. The visibility graph is an important combinatorial structure in computational geometry and is used in applications such as solving visibility problems and computing shortest paths. An algorithm is presented that computes the visibility graph of a set of obstacles in time O(E + n log n), where E is the number of edges in the visibility graph and n is the total number of vertices in all the obstacles. %B Foundations of Computer Science, 1987., 28th Annual Symposium on %P 11 - 19 %8 1987/10// %G eng %R 10.1109/SFCS.1987.6 %0 Journal Article %J ACM Transactions on Programming Languages and Systems (TOPLAS) %D 1985 %T Optimal parallel generation of a computation tree form %A Bar-On,I. 
%A Vishkin, Uzi %B ACM Transactions on Programming Languages and Systems (TOPLAS) %V 7 %P 348 - 357 %8 1985/// %G eng %N 2 %0 Journal Article %J Information and Control %D 1985 %T Optimal parallel pattern matching in strings %A Vishkin, Uzi %B Information and Control %V 67 %P 91 - 113 %8 1985/// %G eng %N 1-3 %0 Journal Article %J Discrete applied mathematics %D 1984 %T An optimal parallel connectivity algorithm %A Vishkin, Uzi %B Discrete applied mathematics %V 9 %P 197 - 207 %8 1984/// %G eng %N 2 %0 Journal Article %J SIAM Journal on Scientific and Statistical Computing %D 1984 %T Ordering Schemes for Parallel Processing of Certain Mesh Problems %A O'Leary, Dianne P. %B SIAM Journal on Scientific and Statistical Computing %V 5 %P 620 - 632 %8 1984/// %G eng %0 Journal Article %J Journal of Algorithms %D 1982 %T An O(log n) parallel connectivity algorithm %A Shiloach,Y. %A Vishkin, Uzi %B Journal of Algorithms %V 3 %P 57 - 67 %8 1982/// %G eng %N 1 %0 Journal Article %J Journal of Algorithms %D 1982 %T An O(n² log n) parallel max-flow algorithm %A Shiloach,Y. %A Vishkin, Uzi %B Journal of Algorithms %V 3 %P 128 - 146 %8 1982/// %G eng %N 2 %0 Journal Article %J IEEE Transactions on Systems, Man, and Cybernetics %D 1982 %T An Optimization Approach to Edge Reinforcement %A Narayanan,K. A. %A O'Leary, Dianne P. %A Rosenfeld,Azriel %X The present investigation is concerned with an optimization approach to edge reinforcement which does not involve probabilistic concepts. A cost function is defined, based on the local pattern of edge responses, which is low for sharp, strong edges, and a steepest descent method is used to adjust the edge magnitudes so as to reduce the value of this cost function. The edge reinforcement results obtained in this way appear to be better than those obtained by relaxation methods, whether based on conditional probability or on optimization. 
The results of an application of the considered procedure to a portion of a Landsat scene are shown, taking into account the input image, the results of 10 iterations of the steepest descent procedure, results produced by means of the relaxation process reported by Schachter et al. (1977), and results obtained with the aid of the optimization relaxation process considered by Faugeras and Berthod (1979). %B IEEE Transactions on Systems, Man, and Cybernetics %V SMC-12 %P 551 - 553 %8 1982/// %G eng %0 Conference Paper %B Proceedings of the Second US Army/IEEE Software Life Cycle Management Workshop. New York: Computer Societies Press %D 1978 %T Operation of the Software Engineering Laboratory %A Basili, Victor R. %A Zelkowitz, Marvin V %B Proceedings of the Second US Army/IEEE Software Life Cycle Management Workshop. New York: Computer Societies Press %8 1978/// %G eng %0 Journal Article %J ACM SIGMOD Record %D 1974 %T Opportunities for data base reorganization %A Shneiderman, Ben %B ACM SIGMOD Record %V 6 %P 1 - 8 %8 1974/12// %@ 0163-5808 %G eng %U http://doi.acm.org/10.1145/983082.983083 %N 4 %R 10.1145/983082.983083 %0 Journal Article %J Communications of the ACM %D 1973 %T Optimum data base reorganization points %A Shneiderman, Ben %K data base %K files %K Information retrieval %K reorganization %X In certain data base organization schemes the cost per access may increase due to structural inefficiencies caused by update. By reorganizing the data base the cost per access may be reduced. However, the high cost of a reorganization prohibits frequent reorganizations. This paper examines strategies for selecting the optimum reorganization points. %B Communications of the ACM %V 16 %P 362 - 365 %8 1973/06// %@ 0001-0782 %G eng %U http://doi.acm.org/10.1145/362248.362267 %N 6 %R 10.1145/362248.362267