%0 Journal Article %J Nature %D 2009 %T Microbial oceanography in a sea of opportunity %A Bowler,Chris %A Karl,David M. %A Colwell,Rita R. %X Plankton use solar energy to drive the nutrient cycles that make the planet habitable for larger organisms. We can now explore the diversity and functions of plankton using genomics, revealing the gene repertoires associated with survival in the oceans. Such studies will help us to appreciate the sensitivity of ocean systems and the ocean's response to climate change, improving the predictive power of climate models. %B Nature %V 459 %P 180 - 184 %8 2009/05/13/ %@ 0028-0836 %G eng %U http://www.nature.com/nature/journal/v459/n7244/abs/nature08056.html %N 7244 %R 10.1038/nature08056 %0 Book Section %B Passive and Active Network Measurement %D 2009 %T Triangle Inequality and Routing Policy Violations in the Internet %A Lumezanu,Cristian %A Baden,Randy %A Spring, Neil %A Bhattacharjee, Bobby %E Moon,Sue %E Teixeira,Renata %E Uhlig,Steve %K Computer Science %X Triangle inequality violations (TIVs) occur when packets between two nodes are routed on the longer direct path between them even though a shorter detour path through an intermediary is available. TIVs are a natural, widespread and persistent consequence of Internet routing policies. By exposing opportunities to improve the delay between two nodes, TIVs can help myriad applications that seek to minimize end-to-end latency. However, sending traffic along the detour paths revealed by TIVs may influence Internet routing negatively. In this paper we study the interaction between triangle inequality violations and policy routing in the Internet. We use measured and predicted AS paths between Internet nodes to show that 25% of the detour paths exposed by TIVs are in fact available to BGP but are simply deemed “less efficient”. We also compare the AS paths of detours and direct paths and find that detours use AS edges that are rarely followed by default Internet paths, while avoiding others that BGP seems to prefer. Our study is important both for understanding the various interactions that occur at the routing layer and for understanding their effects on applications that seek to use TIVs to minimize latency. %B Passive and Active Network Measurement %S Lecture Notes in Computer Science %I Springer Berlin / Heidelberg %V 5448 %P 45 - 54 %8 2009/// %@ 978-3-642-00974-7 %G eng %U http://dx.doi.org/10.1007/978-3-642-00975-4_5 %0 Journal Article %J Computer %D 2008 %T The ASC-Alliance Projects: A Case Study of Large-Scale Parallel Scientific Code Development %A Hochstein, L. %A Basili, Victor R. 
%K ASC-Alliance projects %K computational science %K computational scientists %K large scale parallel machines %K large-scale parallel code development %K parallel systems %K software development %K software engineering %X Computational scientists face many challenges when developing software that runs on large-scale parallel machines. However, software-engineering researchers haven't studied their software development processes in much detail. To better understand the nature of software development in this context, the authors examined five large-scale computational science software projects operated at the five ASC-Alliance centers. %B Computer %V 41 %P 50 - 58 %8 2008/03// %@ 0018-9162 %G eng %N 3 %R 10.1109/MC.2008.101 %0 Journal Article %J Computer Graphics and Applications, IEEE %D 2008 %T Evaluating Visual Analytics at the 2007 VAST Symposium Contest %A Plaisant, Catherine %A Grinstein,G. %A Scholtz,J. %A Whiting,M. %A O'Connell,T. %A Laskowski,S. %A Chien,L. %A Tat,A. %A Wright,W. %A Gorg,C. %A Zhicheng Liu %A Parekh,N. %A Singhal,K. %A Stasko,J. %K 2007 VAST Symposium Contest %K Visual Analytics Science and Technology %K data visualization %K visual analytics %K data visualisation %X In this article, we report on the contest's data set and tasks, the judging criteria, the winning tools, and the overall lessons learned in the competition. We believe that by organizing these contests, we're creating useful resources for researchers and are beginning to understand how to better evaluate VA tools. Competitions encourage the community to work on difficult problems, improve their tools, and develop baselines for others to build or improve upon. We continue to evolve a collection of data sets, scenarios, and evaluation methodologies that reflect the richness of the many VA tasks and applications. %B Computer Graphics and Applications, IEEE %V 28 %P 12 - 21 %8 2008/04//march %@ 0272-1716 %G eng %N 2 %R 10.1109/MCG.2008.27 %0 Journal Article %J Software, IEEE %D 2008 %T Understanding the High-Performance-Computing Community: A Software Engineer's Perspective %A Basili, Victor R. %A Carver, J. C %A Cruzes, D. %A Hochstein, L. M %A Hollingsworth, Jeffrey K %A Shull, F. %A Zelkowitz, Marvin V %K computational science software %K high-performance-computing community %K software engineering %X Computational scientists developing software for HPC systems face unique software engineering issues. Attempts to transfer SE technologies to this domain must take these issues into account. %B Software, IEEE %V 25 %P 29 - 36 %8 2008/08//july %@ 0740-7459 %G eng %N 4 %R 10.1109/MS.2008.103 %0 Book Section %B Scalable Uncertainty Management %D 2007 %T Aggregates in Generalized Temporally Indeterminate Databases %A Udrea,Octavian %A Majkić,Zoran %A Subrahmanian,V. %E Prade,Henri %E Subrahmanian,V. %K Computer Science %X Dyreson and Snodgrass as well as Dekhtyar et al. have provided a probabilistic model (as well as compelling example applications) for why there may be temporal indeterminacy in databases. In this paper, we first propose a formal model for aggregate computation in such databases when there is uncertainty not just in the temporal attribute, but also in the ordinary (non-temporal) attributes. We identify two types of aggregates: event-correlated aggregates and non-event-correlated aggregates, and provide efficient algorithms for both of them. 
We prove that our algorithms are correct, and we present experimental results showing that the algorithms work well in practice. %B Scalable Uncertainty Management %S Lecture Notes in Computer Science %I Springer Berlin / Heidelberg %V 4772 %P 171 - 186 %8 2007/// %@ 978-3-540-75407-7 %G eng %U http://dx.doi.org/10.1007/978-3-540-75410-7_13 %0 Journal Article %J Annals of the History of Computing, IEEE %D 2007 %T Developing a Computer Science Department at the University of Maryland %A Minker, Jack %K Maryland University %K computer science department %K educational administrative data processing %X This article describes the first six years of the Computer Science Department, established in 1973 at the University of Maryland. The department evolved out of the Computer Science Center, which had been instituted in February 1962. In 1980, the National Academy of Sciences judged the department as being among the leading computer science departments in the US. %B Annals of the History of Computing, IEEE %V 29 %P 64 - 75 %8 2007/12//oct %@ 1058-6180 %G eng %N 4 %R 10.1109/MAHC.2007.4407446 %0 Journal Article %J Nature %D 2007 %T Structural Biology: Analysis of 'downhill' protein folding; Analysis of protein-folding cooperativity (Reply) %A Sadqi,Mourad %A Fushman, David %A Muñoz,Victor %X Ferguson et al. and Zhou and Bai criticize the quality of our nuclear magnetic resonance (NMR) data and atom-by-atom analysis of global 'downhill' folding, also claiming that the data are compatible with two-state folding. %B Nature %V 445 %P E17 - E18 %8 2007/02/15/ %@ 0028-0836 %G eng %U http://www.nature.com/nature/journal/v445/n7129/full/nature05645.html?lang=en %N 7129 %R 10.1038/nature05645 %0 Conference Paper %B Visual Analytics Science And Technology, 2006 IEEE Symposium On %D 2006 %T VAST 2006 Contest - A Tale of Alderwood %A Grinstein,G. %A O'Connell,T. %A Laskowski,S. %A Plaisant, Catherine %A Scholtz,J. %A Whiting,M. %K Alderwood %K human information interaction %K sense making %K visual analytics science and technology contest %K data analysis %K data visualisation %X Visual analytics experts realize that one effective way to push the field forward and to develop metrics for measuring the performance of various visual analytics components is to hold an annual competition. The first visual analytics science and technology (VAST) contest was held in conjunction with the 2006 IEEE VAST Symposium. The competition entailed the identification of possible political shenanigans in the fictitious town of Alderwood. A synthetic data set was made available, along with a set of tasks. 
We summarize how we prepared and advertised the contest, developed some initial metrics for evaluation, and selected the winners. The winners were invited to participate in an additional live competition at the symposium to provide them with feedback from senior analysts. %B Visual Analytics Science And Technology, 2006 IEEE Symposium On %P 215 - 216 %8 2006/10/31/ %G eng %R 10.1109/VAST.2006.261420 %0 Book Section %B Computational Logic in Multi-Agent Systems %D 2005 %T Distributed Algorithms for Dynamic Survivability of Multiagent Systems %A Subrahmanian,V. %A Kraus,Sarit %A Zhang,Yingqian %E Dix,Jürgen %E Leite,João %K Computer Science %X Though multiagent systems (MASs) are being increasingly used, few methods exist to ensure survivability of MASs. All existing methods suffer from two flaws. First, a centralized survivability algorithm (CSA) ensures survivability of the MAS – unfortunately, if the node on which the CSA exists goes down, the survivability of the MAS is questionable. Second, no mechanism exists to change how the MAS is deployed when external factors trigger a re-evaluation of the survivability of the MAS. In this paper, we present three algorithms to address these two important problems. Our algorithms can be built on top of any CSA. Our algorithms are completely distributed and can handle external triggers to compute a new deployment. We report on experiments assessing the efficiency of these algorithms. %B Computational Logic in Multi-Agent Systems %S Lecture Notes in Computer Science %I Springer Berlin / Heidelberg %V 3259 %P 139 - 144 %8 2005/// %@ 978-3-540-24010-5 %G eng %U http://dx.doi.org/10.1007/978-3-540-30200-1_1 %0 Book Section %B On the Move to Meaningful Internet Systems 2005: CoopIS, DOA, and ODBASE %D 2005 %T Probabilistic Ontologies and Relational Databases %A Udrea,Octavian %A Yu,Deng %A Hung,Edward %A Subrahmanian,V. %E Meersman,Robert %E Tari,Zahir %K Computer Science %X The relational algebra and calculus do not take the semantics of terms into account when answering queries. As a consequence, not all tuples that should be returned in response to a query are always returned, leading to low recall. In this paper, we propose the novel notion of a constrained probabilistic ontology (CPO). We developed the concept of a CPO-enhanced relation in which each attribute of a relation has an associated CPO. These CPOs describe relationships between terms occurring in the domain of that attribute. We show that the relational algebra can be extended to handle CPO-enhanced relations. This allows queries to yield sets of tuples, each of which has a probability of being correct. %B On the Move to Meaningful Internet Systems 2005: CoopIS, DOA, and ODBASE %S Lecture Notes in Computer Science %I Springer Berlin / Heidelberg %V 3760 %P 1 - 17 %8 2005/// %@ 978-3-540-29736-9 %G eng %U http://dx.doi.org/10.1007/11575771_1 %0 Book Section %B Mathematical Foundations of Computer Science 2004 %D 2004 %T PRAM-On-Chip: A Quest for Not-So-Obvious Non-obviousness %A Vishkin, Uzi %E Fiala,Jirí %E Koubek,Václav %E Kratochvíl,Jan %K Computer Science %X Consider situations where, once you were told about a new technical idea, you reacted by saying: “but this is so obvious, I wonder how I missed it”. 
I found out recently that the US patent law has a nice formal way of characterizing such a situation. The US patent law protects inventions that meet three requirements: utility, novelty and non-obviousness. Non-obviousness is considered the most challenging of the three to establish. The talk will try to argue that a possible virtue for a technical contribution is when, in retrospect, its non-obviousness is not too obvious; and since hindsight is always 20/20, one may often need to resort to various types of circumstantial evidence in order to establish non-obviousness. There are two reasons for bringing this issue up in my talk: (i) seeking such a virtue has been an objective of my work over the years, and (ii) issues of taste in research are more legitimate for invited talks; there might be merit in reminding younger researchers that not every “result” is necessarily also a “contribution”; perhaps the criterion of not-so-obvious non-obviousness could be helpful in some cases to help recognize a contribution. The second focal point of my talk, the PRAM-On-Chip approach, meets at least one of the standard legal ways to support non-obviousness: “Expressions of disbelief by experts constitute strong evidence of non-obviousness”. It is well documented that the whole PRAM algorithmic theory was considered “unrealistic” by numerous experts in the field, prior to the PRAM-On-Chip project. In fact, I recently needed to use this documentation in a reply to the U.S. patent office. An introduction of the PRAM-On-Chip approach follows. Many parallel computer systems architectures have been proposed and built over the last several decades. The outreach of the few that survived has been severely limited due to their programmability problems. The question of how to think algorithmically in parallel has been the fundamental problem for which these architectures did not have an adequate answer. A computational model, the Parallel Random Access Model (PRAM), was developed by numerous (theoretical computer science) algorithm researchers to address this question during the 1980s and 1990s and is considered by many as the easiest known approach to parallel programming. Despite the broad interest the PRAM generated, it had not been possible to build parallel machines that adequately support it using multi-chip multiprocessors, the only multiprocessors that were buildable in the 1990s, since low-overhead coordination was not possible. Our main insight is that this is becoming possible with the increasing amounts of hardware that can be placed on a single chip. With the PRAM as a starting point, a highly parallel explicit multi-threaded (XMT) on-chip processor architecture that relies on new low-overhead coordination mechanisms and whose performance objective is reducing single-task completion time has been conceived and developed. Simulated program executions have shown dramatic performance gains over conventional processor architectures. Namely, in addition to the unique parallel programmability features, which set XMT apart from any other current approach, XMT also provides very competitive performance. If XMT meets expectations, its introduction would greatly enhance the normal rate of improvement of conventional processor architectures, leading to new applications. 
%B Mathematical Foundations of Computer Science 2004 %S Lecture Notes in Computer Science %I Springer Berlin / Heidelberg %V 3153 %P 104 - 105 %8 2004/// %@ 978-3-540-22823-3 %G eng %U http://dx.doi.org/10.1007/978-3-540-28629-5_5 %0 Book Section %B Trust Management %D 2004 %T Using Trust in Recommender Systems: An Experimental Analysis %A Massa,Paolo %A Bhattacharjee, Bobby %E Jensen,Christian %E Poslad,Stefan %E Dimitrakos,Theo %K Computer Science %X Recommender systems (RS) have been used for suggesting items (movies, books, songs, etc.) that users might like. RSs compute a similarity between users and use it as a weight for the users’ ratings. However they have many weaknesses, such as sparseness, cold start and vulnerability to attacks. We assert that these weaknesses can be alleviated using a Trust-aware system that takes into account the “web of trust” provided by every user. Specifically, we analyze data from the popular Internet web site epinions.com. The dataset consists of 49290 users who expressed reviews (with rating) on items and explicitly specified their web of trust, i.e. users whose reviews they have consistently found to be valuable. We show that any two users usually have few items rated in common. For this reason, the classic RS technique is often ineffective and is not able to compute a user similarity weight for many of the users. By exploiting the webs of trust instead, it is possible to propagate trust and infer an additional weight for other users. We show how this quantity can be computed for a larger number of users. %B Trust Management %S Lecture Notes in Computer Science %I Springer Berlin / Heidelberg %V 2995 %P 221 - 235 %8 2004/// %@ 978-3-540-21312-3 %G eng %U http://dx.doi.org/10.1007/978-3-540-24747-0_17 %0 Journal Article %J Theory of Computing Systems %D 2003 %T Deterministic Resource Discovery in Distributed Networks %A Kutten,Shay %A Peleg,David %A Vishkin, Uzi %K Computer Science %X The resource discovery problem was introduced by Harchol-Balter, Leighton, and Lewin. They developed a number of algorithms for the problem in the weakly connected directed graph model. This model is a directed logical graph that represents the vertices’ knowledge about the topology of the underlying communication network. The current paper proposes a deterministic algorithm for the problem in the same model, with improved time, message, and communication complexities. Each previous algorithm had a complexity that was higher in at least one of the measures. Specifically, previous deterministic solutions required either time linear in the diameter of the initial network, or communication complexity $O(n^3)$ (with message complexity $O(n^2)$), or message complexity $O(|E_0| \log n)$ (where $E_0$ is the arc set of the initial graph $G_0$). Compared with the main randomized algorithm of Harchol-Balter, Leighton, and Lewin, the time complexity is reduced from $O(\log^2 n)$ to $O(\log n)$, the message complexity from $O(n \log^2 n)$ to $O(n \log n)$, and the communication complexity from $O(n^2 \log^3 n)$ to $O(|E_0| \log^2 n)$. Our work significantly extends the connectivity algorithm of Shiloach and Vishkin, which was originally given for a parallel model of computation. Our result also confirms a conjecture of Harchol-Balter, Leighton, and Lewin, and addresses an open question due to Lipton. 
%B Theory of Computing Systems %V 36 %P 479 - 495 %8 2003/// %@ 1432-4350 %G eng %U http://dx.doi.org/10.1007/s00224-003-1084-8 %N 5 %0 Conference Paper %B Scientific and Statistical Database Management, 2002. Proceedings. 14th International Conference on %D 2002 %T Efficient techniques for range search queries on earth science data %A Shi,Qingmin %A JaJa, Joseph F. %K content based retrieval %K data mining tasks %K Earth Science raster data %K large scale databases %K natural sciences computing %K query processing %K range search queries %K spatial factors %K temporal data %K tree structures %K tree-of-regions %K visual data mining %X We consider the problem of organizing large scale earth science raster data to efficiently handle queries for identifying regions whose parameters fall within certain range values specified by the queries. This problem seems to be critical to enabling basic data mining tasks such as determining associations between physical phenomena and spatial factors, detecting changes and trends, and content based retrieval. We assume that the input is too large to fit in internal memory and hence focus on data structures and algorithms that minimize the I/O bounds. A new data structure, called a tree-of-regions (ToR), is introduced and involves a combination of an R-tree and efficient representation of regions. It is shown that such a data structure enables the handling of range queries in an optimal I/O time, under certain reasonable assumptions. We also show that updates to the ToR can be handled efficiently. Experimental results for a variety of multi-valued earth science data illustrate the fast execution times of a wide range of queries, as predicted by our theoretical analysis. %B Scientific and Statistical Database Management, 2002. Proceedings. 14th International Conference on %P 142 - 151 %8 2002/// %G eng %R 10.1109/SSDM.2002.1029714 %0 Book Section %B Computational Logic: Logic Programming and Beyond %D 2002 %T Error-Tolerant Agents %A Eiter,Thomas %A Mascardi,Viviana %A Subrahmanian,V. %E Kakas,Antonis %E Sadri,Fariba %K Computer Science %X The use of agents in today’s Internet world is expanding rapidly. Yet, agent developers proceed largely under the optimistic assumption that agents will be error-free. Errors may arise in agents for numerous reasons — agents may share a workspace with other agents or humans and updates made by these other entities may cause an agent to face a situation that it was not explicitly programmed to deal with. Likewise, errors in coding agents may lead to inconsistent situations where it is unclear how the agent should act. In this paper, we define an agent execution model that allows agents to continue acting “reasonably” even when some errors of the above types occur. More importantly, in our framework, agents take “repair” actions automatically when confronted with such situations, but while taking such repair actions, they can often continue to engage in work and/or interactions with other agents that are unaffected by repairs. %B Computational Logic: Logic Programming and Beyond %S Lecture Notes in Computer Science %I Springer Berlin / Heidelberg %V 2407 %P 83 - 104 %8 2002/// %@ 978-3-540-43959-2 %G eng %U http://dx.doi.org/10.1007/3-540-45628-7_22 %0 Conference Paper %B Software Metrics, 2002. Proceedings. Eighth IEEE Symposium on 
%D 2002 %T What we have learned about fighting defects %A Shull, F. %A Basili, Victor R. %A Boehm,B. %A Brown,A. W %A Costa,P. %A Lindvall,M. %A Port,D. %A Rus,I. %A Tesoriero,R. %A Zelkowitz, Marvin V %K Center for Empirically Based Software Engineering %K Computer Science education %K defect reduction %K electronic workshops %K heuristics %K software development %K software engineering %X The Center for Empirically Based Software Engineering helps improve software development by providing guidelines for selecting development techniques, recommending areas for further research, and supporting software engineering education. A central activity toward achieving this goal has been the running of "e-Workshops" that capture expert knowledge with a minimum of overhead effort to formulate heuristics on a particular topic. The resulting heuristics are a useful summary of the current state of knowledge in an area based on expert opinion. This paper discusses the results to date of a series of e-Workshops on software defect reduction. The original discussion items are presented along with an encapsulated summary of the expert discussion. The reformulated heuristics can be useful both to researchers (for pointing out gaps in the current state of the knowledge requiring further investigation) and to practitioners (for benchmarking or setting expectations about development practices). %B Software Metrics, 2002. Proceedings. Eighth IEEE Symposium on %P 249 - 258 %8 2002/// %G eng %R 10.1109/METRIC.2002.1011343 %0 Conference Paper %B Enabling Technologies: Infrastructure for Collaborative Enterprises, 2000. (WET ICE 2000). Proceedings. IEEE 9th International Workshops on %D 2000 %T Evaluation challenges for a Federation of heterogeneous information providers: the case of NASA's Earth Science Information Partnerships %A Plaisant, Catherine %A Komlodi,A. %A Lindsay,F. %K NASA's Earth Science Information Partnership Federation %K data collection %K heterogeneous data %K online browsing tools %K geographic information systems %K geophysics computing %K groupware %X NASA's Earth Science Information Partnership Federation is an experiment funded to assess the ability of a group of widely heterogeneous earth science data or service providers to self-organize and provide improved and affordable access to an expanding earth science user community. As it is self-organizing, the Federation is mandated to set in place an evaluation methodology and collect metrics reflecting the outcomes and benefits of the Federation. This paper describes the challenges of organizing such a federated partnership self-evaluation and discusses the issues encountered during the metrics definition phase of the early data collection. Our experience indicates that a large number of metrics will be needed to fully represent the activities and strengths of all partners, but because of the heterogeneity of the ESIPs, the qualitative data (comments accompanying the metric data and success stories) become the most useful information. Other lessons learned included the absolute need for online browsing tools to accompany data collection tools. 
Finally, our experience confirms the effect of evaluation as an agent of change, the best example being the high level of collaboration among the ESIPs, which can in part be attributed to the initial identification of collaboration as one of the important evaluation factors of the Federation. %B Enabling Technologies: Infrastructure for Collaborative Enterprises, 2000. (WET ICE 2000). Proceedings. IEEE 9th International Workshops on %P 130 - 135 %8 2000/// %G eng %R 10.1109/ENABL.2000.883717 %0 Conference Paper %B Research and Technology Advances in Digital Libraries, 1999. ADL '99. Proceedings. IEEE Forum on %D 1999 %T Refining query previews techniques for data with multivalued attributes: the case of NASA EOSDIS %A Plaisant, Catherine %A Venkatraman,M. %A Ngamkajorwiwat,K. %A Barth,R. %A Harberts,B. %A Feng,Wenlan %K NASA EOSDIS %K NASA Earth Science data %K multivalued attributes %K processing time %K query previews techniques %K undesired data %K abstracted metadata %K digital data collection %K memory requirements %K multi-valued attribute data %K digital libraries %K geophysics computing %K metadata processing %X Query Previews allow users to rapidly gain an understanding of the content and scope of a digital data collection. These previews present overviews of abstracted metadata, enabling users to rapidly and dynamically avoid undesired data. We present our recent work on developing query previews for a variety of NASA EOSDIS situations. We focus on approaches that successfully address the challenge of multi-valued attribute data. Memory requirements and processing time associated with running these new solutions remain independent of the number of records in the dataset. We describe two techniques and their respective prototypes used to preview NASA Earth science data. %B Research and Technology Advances in Digital Libraries, 1999. ADL '99. Proceedings. IEEE Forum on %P 50 - 59 %8 1999/// %G eng %R 10.1109/ADL.1999.777690 %0 Journal Article %J Empirical Software Engineering %D 1996 %T The empirical investigation of Perspective-Based Reading %A Basili, Victor R. %A Green,Scott %A Laitenberger,Oliver %A Lanubile,Filippo %A Shull, Forrest %A Sørumgård,Sivert %A Zelkowitz, Marvin V %K Computer Science %X We consider reading techniques a fundamental means of achieving high quality software. Due to the lack of research in this area, we are experimenting with the application and comparison of various reading techniques. This paper deals with our experiences with a family of reading techniques known as Perspective-Based Reading (PBR), and its application to requirements documents. The goal of PBR is to provide operational scenarios where members of a review team read a document from a particular perspective, e.g., tester, developer, user. Our assumption is that the combination of different perspectives provides better coverage of the document, i.e., uncovers a wider range of defects, than the same number of readers using their usual technique. %B Empirical Software Engineering %V 1 %P 133 - 164 %8 1996/// %@ 1382-3256 %G eng %U http://dx.doi.org/10.1007/BF00368702 %N 2 %0 Book Section %B Algorithms and Complexity %D 1994 %T On a parallel-algorithms method for string matching problems (overview) %A Sahinalp,Suleyman %A Vishkin, Uzi %E Bonuccelli,M. %E Crescenzi,P. %E Petreschi,R. 
%K Computer Science %B Algorithms and Complexity %S Lecture Notes in Computer Science %I Springer Berlin / Heidelberg %V 778 %P 22 - 32 %8 1994/// %@ 978-3-540-57811-6 %G eng %U http://dx.doi.org/10.1007/3-540-57811-0_3