%0 Journal Article %J mBio %D 2016 %T Phylogenetic Diversity of Vibrio cholerae Associated with Endemic Cholera in Mexico from 1991 to 2008 %A Choi, Seon Y %A Rashed, Shah M. %A Hasan, Nur A. %A Alam, Munirul %A Islam, Tarequl %A Sadique, Abdus %A Johura, Fatema-Tuz %A Eppinger, Mark %A Ravel, Jacques %A Huq, Anwar %A Cravioto, Alejandro %A Rita R Colwell %X An outbreak of cholera occurred in 1991 in Mexico, where the disease had not been reported for more than a century; cholera is now endemic there. Vibrio cholerae O1 prototype El Tor and classical strains coexisted with altered El Tor strains (1991 to 1997). Nontoxigenic (CTX−) V. cholerae El Tor dominated toxigenic (CTX+) strains (2001 to 2003), but V. cholerae CTX+ variant El Tor was isolated during 2004 to 2008, outcompeting CTX− V. cholerae. Genomes of six Mexican V. cholerae O1 strains isolated during 1991 to 2008 were sequenced and compared with both contemporary and archived strains of V. cholerae. Three were CTX+ El Tor, two were CTX− El Tor, and the remaining strain was a CTX+ classical isolate. Whole-genome sequence analysis showed that the six isolates belonged to five distinct phylogenetic clades. One CTX− isolate is ancestral to the sixth and seventh pandemic CTX+ V. cholerae isolates. The other CTX− isolate clustered with CTX− non-O1/O139 isolates from Haiti and seroconverted O1 isolates from Brazil and Amazonia. One CTX+ isolate was phylogenetically placed with the sixth pandemic classical clade and the V. cholerae O395 classical reference strain. Two CTX+ El Tor isolates possessing intact Vibrio seventh pandemic island II (VSP-II) are related to hybrid El Tor isolates from Mozambique and Bangladesh. The third CTX+ El Tor isolate contained the West African-South American (WASA) recombination in VSP-II and showed relatedness to isolates from Peru and Brazil.
Except for one isolate, all Mexican isolates lack SXT/R391 integrative conjugative elements (ICEs) and are sensitive to selected antibiotics; the single exception is resistant to streptomycin. No isolates were related to contemporary isolates from Asia, Africa, or Haiti, indicating phylogenetic diversity. %B mBio %V 7 %8 Apr-05-2016 %G eng %U http://mbio.asm.org/lookup/doi/10.1128/mBio.02160-15 %N 2 %! mBio %R 10.1128/mBio.02160-15 %0 Journal Article %J The American Journal of Tropical Medicine and Hygiene %D 2015 %T Predictive Time Series Analysis Linking Bengal Cholera with Terrestrial Water Storage Measured from Gravity Recovery and Climate Experiment Sensors %A Rita R Colwell %A Unnikrishnan, Avinash %A Jutla, Antarpreet %A Huq, Anwar %A Akanda, Ali %X Outbreaks of diarrheal diseases, including cholera, are related to floods and droughts in regions where water and sanitation infrastructure are inadequate or insufficient. However, availability of data on water scarcity and abundance in transnational basins is a prerequisite for developing cholera forecasting systems. With more than a decade of terrestrial water storage (TWS) data from the Gravity Recovery and Climate Experiment, conditions favorable for predicting cholera occurrence may now be determined. We explored lead–lag relationships between TWS in the Ganges–Brahmaputra–Meghna basin and endemic cholera in Bangladesh. Since cholera in Bangladesh shows bimodal seasonal peaks, in spring and autumn, two separate logistic models between TWS and the disease time series (2002–2010) were developed. TWS representing water availability showed an asymmetrical, strong association with cholera prevalence in the spring (τ = −0.53; P < 0.001) and autumn (τ = 0.45; P < 0.001) up to 6 months in advance.
A one-unit (centimeter of water) decrease in water availability in the basin increased the odds of above-normal cholera in the spring by 24% (confidence interval [CI] = 20–31%; P < 0.05), while a one-unit increase in regional water, through floods, increased the odds of above-average cholera in the autumn by 29% (CI = 22–33%; P < 0.05). %B The American Journal of Tropical Medicine and Hygiene %V 93 %P 1179 - 1186 %8 Sep-12-2015 %G eng %U http://www.ajtmh.org/content/journals/10.4269/ajtmh.14-0648 %N 6 %R 10.4269/ajtmh.14-0648 %0 Journal Article %J mBio %D 2014 %T Phylodynamic Analysis of Clinical and Environmental Vibrio cholerae Isolates from Haiti Reveals Diversification Driven by Positive Selection %A Azarian, Taj %A Ali, Afsar %A Johnson, Judith A. %A Mohr, David %A Prosperi, Mattia %A Veras, Nazle M. %A Jubair, Mohammed %A Strickland, Samantha L. %A Rashid, Mohammad H. %A Alam, Meer T. %A Weppelmann, Thomas A. %A Katz, Lee S. %A Tarr, Cheryl L. %A Rita R Colwell %A Morris, J. Glenn %A Salemi, Marco %X Phylodynamic analysis of genome-wide single-nucleotide polymorphism (SNP) data is a powerful tool to investigate underlying evolutionary processes of bacterial epidemics. The method was applied to investigate a collection of 65 clinical and environmental isolates of Vibrio cholerae from Haiti collected between 2010 and 2012. Characterization of isolates recovered from environmental samples identified a total of four toxigenic V. cholerae O1 isolates, four non-O1/O139 isolates, and a novel nontoxigenic V. cholerae O1 isolate with the classical tcpA gene. Phylogenies of strains were inferred from genome-wide SNPs using coalescent-based demographic models within a Bayesian framework. A close phylogenetic relationship between clinical and environmental toxigenic V. cholerae O1 strains was observed. As cholera spread throughout Haiti between October 2010 and August 2012, the population size initially increased and then fluctuated over time.
Selection analysis along internal branches of the phylogeny showed a steady accumulation of synonymous substitutions and a progressive increase of nonsynonymous substitutions over time, suggesting that diversification was likely driven by positive selection. Short-term accumulation of nonsynonymous substitutions driven by selection may have significant implications for virulence, transmission dynamics, and even vaccine efficacy. %B mBio %8 Jul-12-2016 %G eng %U http://mbio.asm.org/lookup/doi/10.1128/mBio.01824-14 %N 6 %! mBio %R 10.1128/mBio.01824-14 %0 Book Section %B Financial Cryptography and Data Security %D 2013 %T Parallel and Dynamic Searchable Symmetric Encryption %A Kamara, Seny %A Charalampos Papamanthou %E Sadeghi, Ahmad-Reza %K cloud storage %K Computer Appl. in Administrative Data Processing %K Data Encryption %K e-Commerce/e-business %K parallel search %K Searchable encryption %K Systems and Data Security %X Searchable symmetric encryption (SSE) enables a client to outsource a collection of encrypted documents in the cloud and retain the ability to perform keyword searches without revealing information about the contents of the documents and queries. Although efficient SSE constructions are known, previous solutions are highly sequential. This is mainly due to the fact that, currently, the only method for achieving sub-linear time search is the inverted index approach (Curtmola, Garay, Kamara and Ostrovsky, CCS ’06), which requires the search algorithm to access a sequence of memory locations, each of which is unpredictable and stored at the previous location in the sequence. Motivated by advances in multi-core architectures, we present a new method for constructing sub-linear SSE schemes. Our approach is highly parallelizable and dynamic.
With roughly a logarithmic number of cores in place, searches for a keyword w in our scheme execute in o(r) parallel time, where r is the number of documents containing keyword w (with more cores, this bound can go down to O(log n), i.e., independent of the result size r). Such time complexity outperforms the optimal Θ(r) sequential search time; a similar bound holds for the updates. Our scheme also achieves the following important properties: (a) it enjoys a strong notion of security, namely security against adaptive chosen-keyword attacks; (b) compared to existing sub-linear dynamic SSE schemes (e.g., Kamara, Papamanthou, Roeder, CCS ’12), updates in our scheme do not leak any information, apart from information that can be inferred from previous search tokens; (c) it can be implemented efficiently in external memory (with logarithmic I/O overhead). Our technique is simple and uses a red-black tree data structure; its security is proven in the random oracle model. %B Financial Cryptography and Data Security %S Lecture Notes in Computer Science %I Springer Berlin Heidelberg %P 258 - 274 %8 2013/01/01/ %@ 978-3-642-39883-4, 978-3-642-39884-1 %G eng %U http://link.springer.com/chapter/10.1007/978-3-642-39884-1_22 %0 Journal Article %J Computational Science & Discovery %D 2013 %T Parallel geometric classification of stem cells by their three-dimensional morphology %A Juba, Derek %A Cardone, Antonio %A Ip, Cheuk Yiu %A Simon Jr, Carl G %A K Tison, Christopher %A Kumar, Girish %A Brady, Mary %A Varshney, Amitabh %B Computational Science & Discovery %V 6 %P 015007 %8 01/2013 %N 1 %! Comput. Sci. Disc.
%R 10.1088/1749-4699/6/1/015007 %0 Journal Article %J CHI '13 Extended Abstracts on Human Factors in Computing Systems %D 2013 %T Personal informatics in the wild: hacking habits for health & happiness %A Li, Ian %A Jon Froehlich %A Larsen, Jakob E %A Grevet, Catherine %A Ramirez, Ernesto %X Personal informatics is a class of systems that help people collect personal information to improve self-knowledge. Improving self-knowledge can foster self-insight and promote positive behaviors, such as healthy living and energy conservation. The ... %B CHI '13 Extended Abstracts on Human Factors in Computing Systems %I SIGCHI, ACM Special Interest Group on Computer-Human Interaction; ACM %C New York, New York, USA %P 3179 - 3182 %8 2013/00/27 %@ 9781450319522 %G eng %U http://dl.acm.org/citation.cfm?doid=2468356.2479641 %! CHI EA '13 %R 10.1145/2468356.2479641 %0 Conference Paper %B CCS '13 Proceedings of the 2013 ACM SIGSAC Conference on Computer & Communications Security %D 2013 %T Practical Dynamic Proofs of Retrievability %A Shi, Elaine %A Stefanov, Emil %A Charalampos Papamanthou %K dynamic proofs of retrievability %K erasure code %K por %X Proofs of Retrievability (PoR), proposed by Juels and Kaliski in 2007, enable a client to store n file blocks with a cloud server so that later the server can prove possession of all the data in a very efficient manner (i.e., with constant computation and bandwidth). Although many efficient PoR schemes for static data have been constructed, only two dynamic PoR schemes exist. The scheme by Stefanov et al. (ACSAC 2012) uses a large amount of client storage and has a large audit cost. The scheme by Cash et al. (EUROCRYPT 2013) is mostly of theoretical interest, as it employs Oblivious RAM (ORAM) as a black box, leading to increased practical overhead (e.g., it requires about 300 times more bandwidth than our construction).
We propose a dynamic PoR scheme with constant client storage whose bandwidth cost is comparable to a Merkle hash tree, thus being very practical. Our construction outperforms the constructions of Stefanov et al. and Cash et al., both in theory and in practice. Specifically, for n outsourced blocks of beta bits each, writing a block requires beta + O(lambda log n) bandwidth and O(beta log n) server computation (lambda is the security parameter). Audits are also very efficient, requiring beta + O(lambda^2 log n) bandwidth. We also show how to make our scheme publicly verifiable, providing the first dynamic PoR scheme with such a property. We finally provide a very efficient implementation of our scheme. %B CCS '13 Proceedings of the 2013 ACM SIGSAC Conference on Computer & Communications Security %S CCS '13 %I ACM %P 325 - 336 %8 2013/// %@ 978-1-4503-2477-9 %G eng %U http://doi.acm.org/10.1145/2508859.2516669 %0 Journal Article %J Science %D 2013 %T Primate Transcript and Protein Expression Levels Evolve Under Compensatory Selection Pressures %A Zia Khan %A Ford, Michael J. %A Cusanovich, Darren A. %A Mitrano, Amy %A Pritchard, Jonathan K. %A Gilad, Yoav %X Changes in gene regulation have likely played an important role in the evolution of primates. Differences in messenger RNA (mRNA) expression levels across primates have often been documented; however, it is not yet known to what extent measurements of divergence in mRNA levels reflect divergence in protein expression levels, which are probably more important in determining phenotypic differences. We used high-resolution, quantitative mass spectrometry to collect protein expression measurements from human, chimpanzee, and rhesus macaque lymphoblastoid cell lines and compared them to transcript expression data from the same samples. We found dozens of genes with significant expression differences between species at the mRNA level yet little or no difference in protein expression.
Overall, our data suggest that protein expression levels evolve under stronger evolutionary constraint than mRNA levels. Don't Ape Protein Variation: Changes in DNA and messenger RNA (mRNA) expression levels have been used to estimate evolutionary changes between species. However, protein expression levels may better reflect selection on divergent and constrained phenotypes. Khan et al. (p. 1100, published online 17 October; see the Perspective by Vogel) measured the differences among and within species between mRNA expression and protein levels in humans, chimpanzees, and rhesus macaques, identifying protein transcripts that seem to be under lineage-specific constraint between humans and chimpanzees. %B Science %V 342 %P 1100 - 1104 %8 2013/11/29/ %@ 0036-8075, 1095-9203 %G eng %U http://www.sciencemag.org/content/342/6162/1100 %N 6162 %! Science %0 Journal Article %J arXiv preprint arXiv:1202.5150 %D 2012 %T Path O-RAM: An Extremely Simple Oblivious RAM Protocol %A Stefanov, Emil %A Elaine Shi %K Computer Science - Cryptography and Security %X We present Path O-RAM, an extremely simple Oblivious RAM protocol. %B arXiv preprint arXiv:1202.5150 %8 2012 %G eng %U http://arxiv.org/abs/1202.5150 %0 Conference Paper %D 2012 %T Personal informatics in practice: improving quality of life through data %A Li, I. %A Medynskiy, Y. %A Jon Froehlich %A Larsen, J. %I ACM %P 2799 - 2802 %8 2012 %@ 1450310168 %G eng %0 Conference Paper %B Proceedings of the Twenty-Third Annual ACM-SIAM Symposium on Discrete Algorithms %D 2012 %T Polytope approximation and the Mahler volume %A Arya, Sunil %A da Fonseca, Guilherme D. %A Mount, Dave %X The problem of approximating convex bodies by polytopes is an important and well-studied problem. Given a convex body K in R^d, the objective is to minimize the number of vertices (alternatively, the number of facets) of an approximating polytope for a given Hausdorff error ε. Results to date have been of two types.
The first type assumes that K is smooth, and bounds hold in the limit as ε tends to zero. The second type requires no such assumptions. The latter type includes the well-known results of Dudley (1974) and Bronshteyn and Ivanov (1976), which show that in spaces of fixed dimension, O((diam(K)/ε)^((d−1)/2)) vertices (alt., facets) suffice. Our results are of this latter type. In our first result, under the assumption that the width of the body in any direction is at least ε, we strengthen the above bound to [EQUATION]. This is never worse than the previous bound (by more than logarithmic factors) and may be significantly better for skinny bodies. Our analysis exploits an interesting analogy with a classical concept from the theory of convexity, called the Mahler volume. This is a dimensionless quantity that involves the product of the volumes of a convex body and its polar dual. In our second result, we apply the same machinery to improve upon the best known bounds for answering ε-approximate polytope membership queries. Given a convex polytope P defined as the intersection of halfspaces, such a query determines whether a query point q lies inside or outside P, but may return either answer if q's distance from P's boundary is at most ε. We show that, without increasing storage, it is possible to reduce the best known search times for ε-approximate polytope membership significantly. This further implies improvements to the best known search times for approximate nearest neighbor searching in spaces of fixed dimension. %B Proceedings of the Twenty-Third Annual ACM-SIAM Symposium on Discrete Algorithms %S SODA '12 %I SIAM %P 29 - 42 %8 2012/// %G eng %U http://dl.acm.org/citation.cfm?id=2095116.2095119 %0 Journal Article %J Physical Review A %D 2012 %T Preparation and probing of coherent vibrational wave packets in the ground electronic state of HD+ %A Bhattacharya, Rangana %A Chatterjee, Souvik %A Bhattacharyya, Shuvra S.
%X We have investigated the formation of coherent vibrational wave packets in the ground electronic state of HD+ on exposure to intense ultrashort laser pulses of wavelength 1060 nm. The effects of the duration and field strength of the pulse on the final composition of the residual bound nuclear wave packet generated by such impulsive excitations have been studied. The resulting wave packet is allowed to evolve freely on the potential surface for some time, after which a weak pulse of sufficiently large duration is used for probing its composition. This pulse can cause only single-photon dissociation. The simulations have been performed with different probe wavelengths for accessing information about different portions of the wave packet in the vibrational quantum number space. Our aim was to investigate the extent to which information obtained from the kinetic-energy spectra of the photofragments induced by the probe pulse can be correlated to the structure of the wave packet. Simple time-dependent perturbation calculations have also been performed for obtaining the relative strengths of photofragment signals arising from different vibrational levels due to wave-packet dissociation. A comparison with our numerical results indicates that though the general features of the photofragment kinetic-energy spectra from a wave packet are consistent with the perturbation theory results in the intensity regime studied, dynamical evolution during a long pulse can modify the relative heights of the kinetic-energy peaks through nonperturbative interactions in some cases. %B Physical Review A %V 85 %8 2012 %G eng %U http://link.aps.org/doi/10.1103/PhysRevA.85.033424 %N 3 %! Phys. Rev. 
A %0 Journal Article %J arXiv:1208.6189 [cs] %D 2012 %T Preserving Link Privacy in Social Network Based Systems %A Mittal, Prateek %A Charalampos Papamanthou %A Song, Dawn %K Computer Science - Cryptography and Security %K Computer Science - Social and Information Networks %X A growing body of research leverages social network based trust relationships to improve the functionality of these systems. However, such systems expose users' trust relationships, which is considered sensitive information in today's society, to an adversary. In this work, we make the following contributions. First, we propose an algorithm that perturbs the structure of a social graph in order to provide link privacy, at the cost of a slight reduction in the utility of the social graph. Second, we define general metrics for characterizing the utility and privacy of perturbed graphs. Third, we evaluate the utility and privacy of our proposed algorithm using real world social graphs. Finally, we demonstrate the applicability of our perturbation algorithm on a broad range of secure systems, including Sybil defenses and secure routing. %B arXiv:1208.6189 [cs] %8 2012/08/30/ %G eng %U http://arxiv.org/abs/1208.6189 %0 Journal Article %J Synthesis Lectures on Data Mining and Knowledge Discovery %D 2012 %T Privacy in Social Networks %A Zheleva, E. %A Terzi, E. %A Getoor, Lise %B Synthesis Lectures on Data Mining and Knowledge Discovery %V 3 %P 1 - 85 %8 2012/// %G eng %N 1 %0 Journal Article %J Radio Science %D 2012 %T Prototyping scalable digital signal processing systems for radio astronomy using dataflow models %A Sane, N. %A Ford, J. %A Harris, A. I. %A Bhattacharyya, Shuvra S. %K dataflow models %K digital downconverter %K Digital signal processing %K model-based design %K radio astronomy %K rapid prototyping %X There is a growing trend toward using high-level tools for design and implementation of radio astronomy digital signal processing (DSP) systems.
Such tools, for example, those from the Collaboration for Astronomy Signal Processing and Electronics Research (CASPER), are usually platform-specific and lack high-level, platform-independent, portable, scalable application specifications. This limits the designer's ability to experiment with designs at a high level of abstraction and early in the development cycle. We address some of these issues using a model-based design approach employing dataflow models. We demonstrate this approach by applying it to the design of a tunable digital downconverter (TDD) used for narrow-bandwidth spectroscopy. Our design is targeted toward an FPGA platform called the Interconnect Break-out Board (IBOB), which is available from CASPER. We use the term TDD to refer to a digital downconverter for which the decimation factor and center frequency can be reconfigured without the need for regenerating the hardware code. Such a design is currently not available in the CASPER DSP library. The work presented in this paper focuses on two aspects. First, we introduce and demonstrate a dataflow-based design approach using the dataflow interchange format (DIF) tool for high-level application specification, and we integrate this approach with the CASPER tool flow. Second, we explore the trade-off between the flexibility of TDD designs and the low hardware cost of fixed-configuration digital downconverter (FDD) designs that use the available CASPER DSP library. We further explore this trade-off in the context of a two-stage downconversion scheme employing a combination of TDD or FDD designs. %B Radio Science %V 47 %P n/a - n/a %8 2012 %@ 1944-799X %G eng %U http://onlinelibrary.wiley.com/doi/10.1029/2011RS004924/abstract %N 3 %0 Conference Paper %B Dependable Computing Conference (EDCC), 2012 Ninth European %D 2012 %T The Provenance of WINE %A Tudor Dumitras %A Efstathopoulos, P.
%K Benchmark testing %K CYBER SECURITY %K cyber security experiments %K data attacks %K data collection %K dependability benchmarking %K distributed databases %K distributed sensors %K experimental research %K field data %K information quality %K MALWARE %K Pipelines %K provenance %K provenance information %K raw data sharing %K research groups %K security of data %K self-documenting experimental process %K sensor fusion %K software %K variable standards %K WINE %K WINE benchmark %X The results of cyber security experiments are often impossible to reproduce, owing to the lack of adequate descriptions of the data collection and experimental processes. Such provenance information is difficult to record consistently when collecting data from distributed sensors and when sharing raw data among research groups with variable standards for documenting the steps that produce the final experimental result. In the WINE benchmark, which provides field data for cyber security experiments, we aim to make the experimental process self-documenting. The data collected includes provenance information – such as when, where and how an attack was first observed or detected – and allows researchers to gauge information quality. Experiments are conducted on a common test bed, which provides tools for recording each procedural step. The ability to understand the provenance of research results enables rigorous cyber security experiments, conducted at scale. %B Dependable Computing Conference (EDCC), 2012 Ninth European %P 126 - 131 %8 2012/// %G eng %0 Conference Paper %B 2011 IEEE Conference on Computer Vision and Pattern Recognition (CVPR) %D 2011 %T P2C2: Programmable pixel compressive camera for high speed imaging %A Reddy, D. %A Veeraraghavan,A. 
%A Chellappa, Rama %K Brightness %K brightness constancy constraint %K camera sensor %K CAMERAS %K compressive video sensing %K high speed imaging %K high-speed video frames %K Image sequences %K imaging %K imaging architecture %K independent shutter %K Liquid crystal on silicon %K low speed coded video %K Modulation %K optical flow %K P2C2 %K programmable pixel compressive camera %K reconstruction algorithm %K sparse representation %K Spatial resolution %K spatio-temporal redundancies %K temporal redundancy %K temporal super-resolution %K video coding %X We describe an imaging architecture for compressive video sensing termed programmable pixel compressive camera (P2C2). P2C2 allows us to capture fast phenomena at frame rates higher than that of the camera sensor. In P2C2, each pixel has an independent shutter that is modulated at a rate higher than the camera frame-rate. The observed intensity at a pixel is an integration of the incoming light modulated by its specific shutter. We propose a reconstruction algorithm that uses the data from P2C2 along with additional priors about videos to perform temporal super-resolution. We model the spatial redundancy of videos using sparse representations and the temporal redundancy using brightness constancy constraints inferred via optical flow. We show that by modeling such spatio-temporal redundancies in a video volume, one can faithfully recover the underlying high-speed video frames from the observed low speed coded video. The imaging architecture and the reconstruction algorithm allow us to achieve temporal super-resolution without loss in spatial resolution. We implement a prototype of P2C2 using an LCOS modulator and recover several videos at 200 fps using a 25 fps camera.
%B 2011 IEEE Conference on Computer Vision and Pattern Recognition (CVPR) %I IEEE %P 329 - 336 %8 2011/06/20/25 %@ 978-1-4577-0394-2 %G eng %R 10.1109/CVPR.2011.5995542 %0 Journal Article %J Snowbird Learning Workshop %D 2011 %T Partial least squares based speaker recognition system %A Srinivasan, B.V. %A Zotkin, Dmitry N %A Duraiswami, Ramani %B Snowbird Learning Workshop %8 2011/// %G eng %0 Conference Paper %B Acoustics, Speech and Signal Processing (ICASSP), 2011 IEEE International Conference on %D 2011 %T A partial least squares framework for speaker recognition %A Srinivasan, B.V. %A Zotkin, Dmitry N %A Duraiswami, Ramani %K Gaussian mixture model (GMM) %K Gaussian processes %K least squares approximations %K NIST SRE %K interclass separability %K latent variable modeling %K multiple utterances %K nuisance attribute variability %K partial least squares %K speaker recognition %K speaker verification %K universal background model %X Modern approaches to speaker recognition (verification) operate in a space of "supervectors" created via concatenation of the mean vectors of a Gaussian mixture model (GMM) adapted from a universal background model (UBM). In this space, a number of approaches to model inter-class separability and nuisance attribute variability have been proposed. We develop a method for modeling the variability associated with each class (speaker) by using partial-least-squares - a latent variable modeling technique, which isolates the most informative subspace for each speaker. The method is tested on NIST SRE 2008 data and provides promising results. The method is shown to be noise-robust and to be able to efficiently learn the subspace corresponding to a speaker on training data consisting of multiple utterances.
%B Acoustics, Speech and Signal Processing (ICASSP), 2011 IEEE International Conference on %P 5276 - 5279 %8 2011/05// %G eng %R 10.1109/ICASSP.2011.5947548 %0 Patent %D 2011 %T Photo-based mobile deixis system and related techniques %A Darrell, Trevor J %A Tom Yeh %A Tollmar, Konrad %E Massachusetts Institute of Technology %X A mobile deixis device includes a camera to capture an image and a wireless handheld device, coupled to the camera and to a wireless network, to communicate the image with existing databases to find similar images. The mobile deixis device further includes a processor, coupled to the device, to process found database records related to similar images and a display to view found database records that include web pages including images. With such an arrangement, users can specify a location of interest by simply pointing a camera-equipped cellular phone at the object of interest and, by searching an image database or relevant web resources, users can quickly identify good matches from several close ones to find an object of interest. %V 10/762,941 %8 2011/01/18/ %G eng %U http://www.google.com/patents?id=jeXwAAAAEBAJ %N 7872669 %0 Conference Paper %B Computer Vision and Pattern Recognition (CVPR), 2011 IEEE Conference on %D 2011 %T Piecing together the segmentation jigsaw using context %A Chen, Xi %A Jain, A. %A Gupta, A. %A Davis, Larry S. %K approximation algorithms %K approximation theory %K contextual information %K cost function %K greedy algorithms %K image recognition %K image segmentation %K quadratic programming %X We present an approach to jointly solve the segmentation and recognition problem using a multiple segmentation framework. We formulate the problem as segment selection from a pool of segments, assigning each selected segment a class label. Previous multiple segmentation approaches used local appearance matching to select segments in a greedy manner.
In contrast, our approach formulates a cost function based on contextual information in conjunction with appearance matching. This relaxed cost function formulation is minimized using an efficient quadratic programming solver and an approximate solution is obtained by discretizing the relaxed solution. Our approach improves labeling performance compared to other segmentation-based recognition approaches. %B Computer Vision and Pattern Recognition (CVPR), 2011 IEEE Conference on %P 2001 - 2008 %8 2011/06// %G eng %R 10.1109/CVPR.2011.5995367 %0 Conference Paper %B Second AAAI Symposium on Educational Advances in Artificial Intelligence %D 2011 %T Playing to Program: Towards an Intelligent Programming Tutor for RUR-PLE %A desJardins, Marie %A Ciavolino, Amy %A Deloatch, Robert %A Feasley, Eliana %X Intelligent tutoring systems (ITSs) provide students with a one-on-one tutor, allowing them to work at their own pace, and helping them to focus on their weaker areas. The RUR–Python Learning Environment (RUR-PLE), a game-like virtual environment to help students learn to program, provides an interface for students to write their own Python code and visualize the code execution (Roberge 2005). RUR-PLE provides a fixed sequence of learning lessons for students to explore. We are extending RUR-PLE to develop the Playing to Program (PtP) ITS, which consists of three components: (1) a Bayesian student model that tracks student competence, (2) a diagnosis module that provides tailored feedback to students, and (3) a problem selection module that guides the student’s learning process. In this paper, we summarize RUR-PLE and the PtP design, and describe an ongoing user study to evaluate the predictive accuracy of our student modeling approach.
%B Second AAAI Symposium on Educational Advances in Artificial Intelligence %8 2011/04/08/ %G eng %U http://www.aaai.org/ocs/index.php/EAAI/EAAI11/paper/viewPaper/3497 %0 Report %D 2011 %T Practical Data-Leak Prevention for Legacy Applications in Enterprise Networks %A Mundada, Y. %A Ramachandran, A. %A Tariq, M.B. %A Feamster, Nick %X Organizations must control where private information spreads; this problem is referred to in the industry as data leak prevention (DLP). Commercial solutions for DLP are based on scanning content; these impose high overhead and are easily evaded. Research solutions for this problem, information flow control, require rewriting applications or running a custom operating system, which makes these approaches difficult to deploy. They also typically enforce information flow control on a single host, not across a network, making it difficult to implement an information flow control policy for a network of machines. This paper presents Pedigree, which enforces information flow control across a network for legacy applications. Pedigree allows enterprise administrators and users to associate a label with each file and process; a small, trusted module on the host uses these labels to determine whether two processes on the same host can communicate. When a process attempts to communicate across the network, Pedigree tracks these information flows and enforces information flow control either at end-hosts or at a network switch. Pedigree allows users and operators to specify network-wide information flow policies rather than having to specify and implement policies for each host. Enforcing information flow policies in the network allows Pedigree to operate in networks with heterogeneous devices and operating systems. We present the design and implementation of Pedigree, show that it can prevent data leaks, and investigate its feasibility and usability in common environments.
%I Georgia Institute of Technology %V GT-CS-11-01 %8 2011/// %G eng %U http://hdl.handle.net/1853/36612 %0 Journal Article %J Proceedings of the Twenty-Fifth Conference on Artificial Intelligence (AAAI) %D 2011 %T Predicting author blog channels with high value future posts for monitoring %A Wu,S. %A Elsayed,T. %A Rand, William %A Raschid, Louiqa %X The phenomenal growth of social media, both in scale and importance, has created a unique opportunity to track information diffusion and the spread of influence, but can also make efficient tracking difficult. Given data streams representing blog posts on multiple blog channels and a focal query post on some topic of interest, our objective is to predict which of those channels are most likely to contain a future post that is relevant, or similar, to the focal query post. We denote this task as the future author prediction problem (FAPP). This problem has applications in information diffusion for brand monitoring and blog channel personalization and recommendation. We develop prediction methods inspired by (naïve) information retrieval approaches that use historical posts in the blog channel for prediction. We also train a ranking support vector machine (SVM) to solve the problem. We evaluate our methods on an extensive social media dataset; despite the difficulty of the task, all methods perform reasonably well. Results show that ranking SVM prediction can exploit blog channel and diffusion characteristics to improve prediction accuracy. Moreover, it is surprisingly good for prediction in emerging topics and identifying inconsistent authors. %B Proceedings of the Twenty-Fifth Conference on Artificial Intelligence (AAAI) %8 2011/// %G eng %0 Conference Paper %B Privacy, Security, Risk and Trust (PASSAT), 2011 IEEE Third International Conference on and 2011 IEEE Third International Conference on Social Computing (SocialCom) %D 2011 %T Predicting Trust and Distrust in Social Networks %A DuBois,T. 
%A Golbeck,J. %A Srinivasan, Aravind %K distrust prediction %K Electronic publishing %K Encyclopedias %K graph theory %K inference algorithm %K Inference algorithms %K inference mechanisms %K Internet %K negative trust %K online social networks %K positive trust %K Prediction algorithms %K probability %K random graphs %K security of data %K social media %K social networking (online) %K spring-embedding algorithm %K Training %K trust inference %K trust probabilistic interpretation %K user behavior %K user satisfaction %K user-generated content %K user-generated interactions %X As user-generated content and interactions have overtaken the web as the default mode of use, questions of whom and what to trust have become increasingly important. Fortunately, online social networks and social media have made it easy for users to indicate whom they trust and whom they do not. However, this does not solve the problem: since each user is likely to know only a tiny fraction of other users, we must have methods for inferring trust - and distrust - between users who do not know one another. In this paper, we present a new method for computing both trust and distrust (i.e., positive and negative trust). We do this by combining an inference algorithm that relies on a probabilistic interpretation of trust based on random graphs with a modified spring-embedding algorithm. Our algorithm correctly classifies hidden trust edges as positive or negative with high accuracy. These results are useful in a wide range of social web applications where trust is important to user behavior and satisfaction. %B Privacy, Security, Risk and Trust (PASSAT), 2011 IEEE Third International Conference on and 2011 IEEE Third International Conference on Social Computing (SocialCom) %I IEEE %P 418 - 424 %8 2011/10/09/11 %@ 978-1-4577-1931-8 %G eng %R 10.1109/PASSAT/SocialCom.2011.56 %0 Journal Article %J Social Network Data Analytics %D 2011 %T Privacy in social networks: A survey %A Zheleva,E. 
%A Getoor, Lise %X In this chapter, we survey the literature on privacy in social networks. We focus both on online social networks and online affiliation networks. We formally define the possible privacy breaches and describe the privacy attacks that have been studied. We present definitions of privacy in the context of anonymization together with existing anonymization techniques. %B Social Network Data Analytics %P 277 - 306 %8 2011/// %G eng %R 10.1007/978-1-4419-8462-3_10 %0 Conference Paper %D 2011 %T Privacy settings from contextual attributes: A case study using Google Buzz %A Mashima, D. %A Elaine Shi %A Chow, R. %A Sarkar, P. %A Li,C. %A Song,D. %K contextual attributes %K data privacy %K Google Buzz %K privacy settings %K social networking (online) %K social networks %K Statistics %X Social networks provide users with privacy settings to control what information is shared with connections and other users. In this paper, we analyze factors influencing changes in privacy-related settings in the Google Buzz social network. Specifically, we show statistics on contextual data related to privacy settings that are derived from crawled datasets and analyze the characteristics of users who changed their privacy settings. We also investigate potential neighboring effects among such users. %P 257 - 262 %8 2011 %G eng %0 Conference Paper %D 2011 %T Privacy-preserving aggregation of time-series data %A Elaine Shi %A Chan, T. %A Rieffel, E. %A Chow, R. %A Song,D. %X We consider how an untrusted data aggregator can learn desired statistics over multiple participants’ data, without compromising each individual’s privacy. We propose a construction that allows a group of participants to periodically upload encrypted values to a data aggregator, such that the aggregator is able to compute the sum of all participants’ values in every time period, but is unable to learn anything else. We achieve strong privacy guarantees using two main techniques. 
First, we show how to utilize applied cryptographic techniques to allow the aggregator to decrypt the sum from multiple ciphertexts encrypted under different user keys. Second, we describe a distributed data randomization procedure that guarantees the differential privacy of the outcome statistic, even when a subset of participants might be compromised. %V 17 %8 2011 %G eng %U http://www.eecs.berkeley.edu/~elaines/docs/ndss2011.pdf %0 Journal Article %J ACM Transactions on Information and System Security (TISSEC) %D 2011 %T Private and Continual Release of Statistics %A Chan, T.-H. Hubert %A Elaine Shi %A Song, Dawn %K continual mechanism %K Differential privacy %K streaming algorithm %X We ask the question: how can Web sites and data aggregators continually release updated statistics, and meanwhile preserve each individual user’s privacy? Suppose we are given a stream of 0’s and 1’s. We propose a differentially private continual counter that outputs at every time step the approximate number of 1’s seen thus far. Our counter construction has error that is only poly-log in the number of time steps. We can extend the basic counter construction to allow Web sites to continually give top-k and hot items suggestions while preserving users’ privacy. %B ACM Transactions on Information and System Security (TISSEC) %V 14 %P 26:1 - 26:24 %8 2011 %@ 1094-9224 %G eng %U http://doi.acm.org/10.1145/2043621.2043626 %N 3 %0 Journal Article %J 19th Network and Distributed Security Symposium %D 2011 %T Private Set Intersection: Are Garbled Circuits Better than Custom Protocols? %A Huang,Y. %A Evans,D. %A Katz, Jonathan %X Cryptographic protocols for Private Set Intersection (PSI) are the basis for many important privacy-preserving applications. 
Over the past few years, intensive research has been devoted to designing custom protocols for PSI based on homomorphic encryption and other public-key techniques, apparently due to the belief that solutions using generic approaches would be impractical. This paper explores the validity of that belief. We develop three classes of protocols targeted to different set sizes and domains, all based on Yao’s generic garbled-circuit method. We then compare the performance of our protocols to the fastest custom PSI protocols in the literature. Our results show that a careful application of garbled circuits leads to solutions that can run on million-element sets on typical desktops, and that can be competitive with the fastest custom protocols. Moreover, generic protocols like ours can be used directly for performing more complex secure computations, something we demonstrate by adding a simple information-auditing mechanism to our PSI protocols. %B 19th Network and Distributed Security Symposium %8 2011/// %G eng %0 Conference Paper %B Proceedings of the fourth ACM international conference on Web search and data mining %D 2011 %T A probabilistic approach for learning folksonomies from structured data %A Plangprasopchok,Anon %A Lerman,Kristina %A Getoor, Lise %K collective knowledge %K data mining %K folksonomies %K social information processing %K social metadata %K taxonomies %X Learning structured representations has emerged as an important problem in many domains, including document and Web data mining, bioinformatics, and image analysis. One approach to learning complex structures is to integrate many smaller, incomplete and noisy structure fragments. In this work, we present an unsupervised probabilistic approach that extends affinity propagation [7] to combine the small ontological fragments into a collection of integrated, consistent, and larger folksonomies. 
This is a challenging task because the method must aggregate similar structures while avoiding structural inconsistencies and handling noise. We validate the approach on a real-world social media dataset, comprised of shallow personal hierarchies specified by many individual users, collected from the photosharing website Flickr. Our empirical results show that our proposed approach is able to construct deeper and denser structures, compared to an approach using only the standard affinity propagation algorithm. Additionally, the approach yields better overall integration quality than a state-of-the-art approach based on incremental relational clustering. %B Proceedings of the fourth ACM international conference on Web search and data mining %S WSDM '11 %I ACM %C New York, NY, USA %P 555 - 564 %8 2011/// %@ 978-1-4503-0493-1 %G eng %U http://doi.acm.org/10.1145/1935826.1935905 %R 10.1145/1935826.1935905 %0 Journal Article %J BMC Bioinformatics %D 2011 %T ProPhylo: partial phylogenetic profiling to guide protein family construction and assignment of biological process. %A Basu, Malay K %A Jeremy D Selengut %A Haft, Daniel H %K algorithms %K Archaea %K Archaeal Proteins %K DNA %K Methane %K Phylogeny %K software %X

BACKGROUND: Phylogenetic profiling is a technique of scoring co-occurrence between a protein family and some other trait, usually another protein family, across a set of taxonomic groups. In spite of several refinements in recent years, the technique still invites significant improvement. To be most effective, a phylogenetic profiling algorithm must be able to examine co-occurrences among protein families whose boundaries are uncertain within large homologous protein superfamilies.

RESULTS: Partial Phylogenetic Profiling (PPP) is an iterative algorithm that scores a given taxonomic profile against the taxonomic distribution of families for all proteins in a genome. The method works by optimizing the boundary of each protein family, rather than by relying on prebuilt protein families or fixed sequence similarity thresholds. Double Partial Phylogenetic Profiling (DPPP) is a related procedure that begins with a single sequence and searches for optimal granularities for its surrounding protein family in order to generate the best query profiles for PPP. We present ProPhylo, a high-performance software package for phylogenetic profiling studies that creates individually optimized protein family boundaries. ProPhylo provides precomputed databases for immediate use and tools for manipulating the taxonomic profiles used as queries.

CONCLUSION: ProPhylo results show universal markers of methanogenesis, a new DNA phosphorothioation-dependent restriction enzyme, and efficacy in guiding protein family construction. The software and the associated databases are freely available under the open source Perl Artistic License from ftp://ftp.jcvi.org/pub/data/ppp/.

%B BMC Bioinformatics %V 12 %P 434 %8 2011 %G eng %R 10.1186/1471-2105-12-434 %0 Conference Paper %B Proceedings of the 19th ACM SIGSPATIAL International Conference on Advances in Geographic Information Systems %D 2011 %T The PR-star octree: a spatio-topological data structure for tetrahedral meshes %A Weiss,Kenneth %A De Floriani, Leila %A Fellegara,Riccardo %A Velloso,Marcelo %X We propose the PR-star octree as a combined spatial data structure for performing efficient topological queries on tetrahedral meshes. The PR-star octree augments the Point Region octree (PR Octree) with a list of tetrahedra incident to its indexed vertices, i.e. those in the star of its vertices. Thus, each leaf node encodes the minimal amount of information necessary to locally reconstruct the topological connectivity of its indexed elements. This provides the flexibility to efficiently construct the optimal data structure to solve the task at hand using a fraction of the memory required for a corresponding data structure on the global tetrahedral mesh. Due to the spatial locality of successive queries in typical GIS applications, the construction costs of these runtime data structures are amortized over multiple accesses while processing each node. We demonstrate the advantages of the PR-star octree representation in several typical GIS applications, including detection of the domain boundaries, computation of local curvature estimates and mesh simplification. %B Proceedings of the 19th ACM SIGSPATIAL International Conference on Advances in Geographic Information Systems %S GIS '11 %I ACM %C New York, NY, USA %P 92 - 101 %8 2011/// %@ 978-1-4503-1031-4 %G eng %U http://doi.acm.org/10.1145/2093973.2093987 %R 10.1145/2093973.2093987 %0 Journal Article %J Pattern Analysis and Machine Intelligence, IEEE Transactions on %D 2010 %T PADS: A Probabilistic Activity Detection Framework for Video Data %A Albanese, M. %A Chellapa, Rama %A Cuntoor, N. %A Moscato, V. %A Picariello, A. %A V.S. 
Subrahmanian %A Udrea,O. %K Algorithms %K Image Processing, Computer-Assisted %K Models, Statistical %K Movement %K Pattern Recognition, Automated %K Programming Languages %K Video Recording %K PADL %K PADS %K image processing algorithms %K offPad algorithm %K onPad algorithm %K probabilistic activity description language %K probabilistic activity detection framework %K video data %K video sequence %K image sequences %K object detection %K probability %K video surveillance %X There is now a growing need to identify various kinds of activities that occur in videos. In this paper, we first present a logical language called Probabilistic Activity Description Language (PADL) in which users can specify activities of interest. We then develop a probabilistic framework which assigns to any subvideo of a given video sequence a probability that the subvideo contains the given activity, and we finally develop two fast algorithms to detect activities within this framework. OffPad finds all minimal segments of a video that contain a given activity with a probability exceeding a given threshold. In contrast, the OnPad algorithm examines a video during playout (rather than afterwards as OffPad does) and computes the probability that a given activity is occurring (even if the activity is only partially complete). Our prototype Probabilistic Activity Detection System (PADS) implements the framework and the two algorithms, building on top of existing image processing algorithms. We have conducted detailed experiments and compared our approach to four different approaches presented in the literature. We show that, for complex activity definitions, our approach outperforms all the other approaches. %B Pattern Analysis and Machine Intelligence, IEEE Transactions on %V 32 %P 2246 - 2261 %8 2010/12// %@ 0162-8828 %G eng %N 12 %R 10.1109/TPAMI.2010.33 %0 Journal Article %J Journal of cryptology %D 2010 %T Parallel and concurrent security of the HB and HB+ protocols %A Katz, Jonathan %A Shin,J. S %A Smith,A. 
%X Hopper and Blum (Asiacrypt 2001) and Juels and Weis (Crypto 2005) recently proposed two shared-key authentication protocols—HB and HB+, respectively—whose extremely low computational cost makes them attractive for low-cost devices such as radio-frequency identification (RFID) tags. The security of these protocols is based on the conjectured hardness of the “learning parity with noise” (LPN) problem, which is equivalent to the problem of decoding random binary linear codes. The HB protocol is proven secure against a passive (eavesdropping) adversary, while the HB+ protocol is proven secure against active attacks. %B Journal of cryptology %V 23 %P 402 - 421 %8 2010/// %G eng %N 3 %R 10.1007/s00145-010-9061-2 %0 Journal Article %J ACM Transactions on Autonomous and Adaptive Systems (TAAS) %D 2010 %T Parsimonious rule generation for a nature-inspired approach to self-assembly %A Grushin,A. %A Reggia, James A. %B ACM Transactions on Autonomous and Adaptive Systems (TAAS) %V 5 %P 1 - 24 %8 2010/// %G eng %N 3 %0 Journal Article %J Advances in Cryptology–EUROCRYPT 2010 %D 2010 %T Partial fairness in secure two-party computation %A Gordon,S. %A Katz, Jonathan %X A seminal result of Cleve (STOC ’86) is that complete fairness is impossible to achieve in two-party computation. In light of this, various techniques for obtaining partial fairness have been suggested in the literature. We propose a definition of partial fairness within the standard real-/ideal-world paradigm that addresses deficiencies of prior definitions. We also show broad feasibility results with respect to our definition: partial fairness is possible for any (randomized) functionality f:X ×Y →Z 1 ×Z 2 at least one of whose domains or ranges is polynomial in size. Our protocols are always private, and when one of the domains has polynomial size our protocols also simultaneously achieve the usual notion of security with abort. 
In contrast to some prior work, we rely on standard assumptions only. We also show that, as far as general feasibility is concerned, our results are optimal (with respect to our definition). %B Advances in Cryptology–EUROCRYPT 2010 %P 157 - 176 %8 2010/// %G eng %R 10.1007/978-3-642-13190-5_8 %0 Journal Article %J Technical Reports of the Computer Science Department %D 2010 %T Partial least squares on graphical processor for efficient pattern recognition %A Srinivasan,Balaji Vasan %A Schwartz,William Robson %A Duraiswami, Ramani %A Davis, Larry S. %K Technical Report %X Partial least squares (PLS) methods have recently been used for many pattern recognition problems in computer vision. Here, PLS is primarily used as a supervised dimensionality reduction tool to obtain effective feature combinations for better learning. However, application of PLS to large datasets is hindered by its higher computational cost. We propose an approach to accelerate the classical PLS algorithm on graphical processors to obtain the same performance at a reduced cost. Although PLS modeling is practically an offline training process, accelerating it helps large-scale modeling. The proposed acceleration is shown to perform well, yielding up to ~30X speedup. It is applied to standard datasets in human detection and face recognition. %B Technical Reports of the Computer Science Department %8 2010/10/18/ %G eng %U http://drum.lib.umd.edu/handle/1903/10975 %0 Conference Paper %B International Conference on Pattern Recognition %D 2010 %T Performance Evaluation Tools for Zone Segmentation and Classification (PETS) %A Seo,W. %A Agrawal,Mudit %A David Doermann %X This paper overviews a set of Performance Evaluation ToolS (PETS) for zone segmentation and classification. The tools allow researchers and developers to evaluate, optimize and compare their algorithms by providing a variety of quantitative performance metrics. 
The evaluation of segmentation quality is based on the pixel-based overlaps between two sets of regions proposed by Randriamasy and Vincent. PETS extends the approach by providing a set of metrics for overlap analysis, RLE and polygonal representation of regions and introduces type-matching to evaluate zone classification. The software is available for research use. %B International Conference on Pattern Recognition %P 503 - 506 %8 2010/// %G eng %0 Journal Article %J IEEE Int. Conf. on Image Processing %D 2010 %T Performance impact of ordinal ranking on content fingerprinting %A Chuang,W.H. %A Varna,A.L. %A Wu,M. %X Content fingerprinting provides a compact representation of multimedia objects for copy identification. This paper analyzes the impact of the ordinal-ranking based feature encoding on the performance of content fingerprinting. Expressions are derived for the identification performance of a fingerprinting system with and without ordinal ranking. The analysis indicates that when the number of features is moderately large, ordinal ranking can improve the robustness of the fingerprinting system to large distortions of the features and significantly increase the probability of detection. These results enhance understanding of ordinal ranking and provide design guidelines for choosing different system parameters to achieve a desired identification accuracy. %B IEEE Int. Conf. 
on Image Processing %8 2010/// %G eng %0 Journal Article %J PLoS ONE %D 2010 %T Perturbing the Ubiquitin Pathway Reveals How Mitosis Is Hijacked to Denucleate and Regulate Cell Proliferation and Differentiation In Vivo %A Caceres,Andrea %A Shang,Fu %A Wawrousek,Eric %A Liu,Qing %A Avidan,Orna %A Cvekl,Ales %A Yang,Ying %A Haririnia,Aydin %A Storaska,Andrew %A Fushman, David %A Kuszak,Jer %A Dudek,Edward %A Smith,Donald %A Taylor,Allen %X Background: The eye lens presents a unique opportunity to explore roles for specific molecules in cell proliferation, differentiation and development because cells remain in place throughout life and, like red blood cells and keratinocytes, they go through the most extreme differentiation, including removal of nuclei and cessation of protein synthesis. Ubiquitination controls many critical cellular processes, most of which require specific lysines on ubiquitin (Ub). Of the 7 lysines (K), the least is known about the effects of modification of K6. Methodology and Principal Findings: We replaced K6 with tryptophan (W) because K6 is the most readily modified K and W is the most structurally similar residue to biotin. The backbone of K6W-Ub is indistinguishable from that of Wt-Ub. K6W-Ub is effectively conjugated and deconjugated, but the conjugates are not degraded via the ubiquitin proteasome pathways (UPP). Expression of K6W-ubiquitin in the lens and lens cells results in accumulation of intracellular aggregates and also slows cell proliferation and the differentiation program, including expression of lens specific proteins, differentiation of epithelial cells into fibers, achieving proper fiber cell morphology, and removal of nuclei. The latter is critical for transparency, but the mechanism by which cell nuclei are removed has remained an age-old enigma. This was also solved by expressing K6W-Ub. p27kip, a UPP substrate, accumulates in lenses which express K6W-Ub. 
This precludes phosphorylation of nuclear lamin by the mitotic kinase, a prerequisite for disassembly of the nuclear membrane. Thus the nucleus remains intact and DNAseIIβ neither gains entry to the nucleus nor degrades the DNA. These results could not be obtained using chemical proteasome inhibitors that cannot be directed to specific tissues. Conclusions and Significance: K6W-Ub provides a novel, genetic means to study functions of the UPP because it can be targeted to specific cells and tissues. A fully functional UPP is required to execute most stages of lens differentiation, specifically removal of cell nuclei. In the absence of a functional UPP, small aggregate prone, cataractous lenses are formed. %B PLoS ONE %V 5 %P e13331 - e13331 %8 2010/10/20/ %G eng %U http://dx.doi.org/10.1371/journal.pone.0013331 %N 10 %R 10.1371/journal.pone.0013331 %0 Journal Article %J Signal Processing Magazine, IEEE %D 2010 %T Picturing signal processing [From the Editor] %A Wu,M. %B Signal Processing Magazine, IEEE %V 27 %P 2 - 3 %8 2010/01// %@ 1053-5888 %G eng %N 1 %R 10.1109/MSP.2009.934727 %0 Journal Article %J IEEE Transactions on Audio, Speech, and Language Processing %D 2010 %T Plane-Wave Decomposition of Acoustical Scenes Via Spherical and Cylindrical Microphone Arrays %A Zotkin,Dmitry N %A Duraiswami, Ramani %A Gumerov, Nail A. 
%K Acoustic fields %K acoustic position measurement %K acoustic signal processing %K acoustic waves %K acoustical scene analysis %K array signal processing %K circular arrays %K cylindrical microphone arrays %K direction-independent acoustic behavior %K microphone arrays %K orthogonal basis functions %K plane-wave decomposition %K Position measurement %K signal reconstruction %K sound field reconstruction %K sound field representation %K source localization %K spatial audio playback %K spherical harmonics based beamforming algorithm %K spherical microphone arrays %X Spherical and cylindrical microphone arrays offer a number of attractive properties such as direction-independent acoustic behavior and the ability to reconstruct the sound field in the vicinity of the array. Beamforming and scene analysis for such arrays is typically done using sound field representation in terms of orthogonal basis functions (spherical/cylindrical harmonics). In this paper, an alternative sound field representation in terms of plane waves is described, and a method for estimating it directly from measurements at microphones is proposed. It is shown that representing a field as a collection of plane waves arriving from various directions simplifies source localization, beamforming, and spatial audio playback. A comparison of the new method with the well-known spherical harmonics based beamforming algorithm is done, and it is shown that both algorithms can be expressed in the same framework but with weights computed differently. It is also shown that the proposed method can be extended to cylindrical arrays. A number of features important for the design and operation of spherical microphone arrays in real applications are revealed. Results indicate that it is possible to reconstruct the sound scene up to order p with p² microphones in a spherical array. 
%B IEEE Transactions on Audio, Speech, and Language Processing %V 18 %P 2 - 16 %8 2010/01// %@ 1558-7916 %G eng %N 1 %R 10.1109/TASL.2009.2022000 %0 Patent %D 2010 %T Plasmonic Systems and Devices Utilizing Surface Plasmon Polariton %A Smolyaninov,Igor I. %A Vishkin, Uzi %A Davis,Christopher C. %X Plasmonic systems and devices that utilize surface plasmon polaritons (or “plasmons”) for inter-chip and/or intra-chip communications are provided. A plasmonic system includes a microchip that has an integrated circuit module and a plasmonic device configured to interface with the integrated circuit module. The plasmonic device includes a first electrode, a second electrode positioned at a non-contact distance from the first electrode, and a tunneling-junction configured to create a plasmon when a potential difference is created between the first electrode and the second electrode. %V 12/697,595 %8 2010/05/27/ %G eng %U http://www.google.com/patents?id=2VnRAAAAEBAJ %0 Journal Article %J SIAM Journal on Financial Mathematics %D 2010 %T Portfolio Selection Using Tikhonov Filtering to Estimate the Covariance Matrix %A Park,Sungwoo %A O'Leary, Dianne P. %K covariance matrix estimate %K Markowitz portfolio selection %K ridge regression %K Tikhonov regularization %X Markowitz's portfolio selection problem chooses weights for stocks in a portfolio based on an estimated covariance matrix of stock returns. Our study proposes reducing noise in the estimation by using a Tikhonov filter function. In addition, we prevent rank deficiency of the estimated covariance matrix and propose a method for effectively choosing the Tikhonov parameter, which determines the filtering intensity. We put previous estimators into a common framework and compare their filtering functions for eigenvalues of the correlation matrix. We demonstrate the effectiveness of our estimator using stock return data from 1958 through 2007. 
%B SIAM Journal on Financial Mathematics %V 1 %P 932 - 961 %8 2010/// %G eng %U http://link.aip.org/link/?SJF/1/932/1 %R 10.1137/090749372 %0 Conference Paper %B Robotics and Automation (ICRA), 2010 IEEE International Conference on %D 2010 %T Pose estimation in heavy clutter using a multi-flash camera %A Ming-Yu Liu %A Tuzel, O. %A Veeraraghavan,A. %A Chellapa, Rama %A Agrawal,A. %A Okuda, H. %K 3D %K algorithm;object %K based %K camera;multiview %K depth %K detection;object %K detection;pose %K distance %K edge %K edges;cameras;image %K edges;integral %K estimation;binary %K estimation;multiflash %K estimation;robot %K function;depth %K images;location %K localization;pose %K maps %K matching;cost %K matching;image %K pose-refinement %K texture;object %K transforms;angular %K vision;texture %K vision;transforms; %X We propose a novel solution to object detection, localization and pose estimation with applications in robot vision. The proposed method is especially applicable when the objects of interest may not be richly textured and are immersed in heavy clutter. We show that a multi-flash camera (MFC) provides accurate separation of depth edges and texture edges in such scenes. Then, we reformulate the problem, as one of finding matches between the depth edges obtained in one or more MFC images to the rendered depth edges that are computed offline using 3D CAD model of the objects. In order to facilitate accurate matching of these binary depth edge maps, we introduce a novel cost function that respects both the position and the local orientation of each edge pixel. This cost function is significantly superior to traditional Chamfer cost and leads to accurate matching even in heavily cluttered scenes where traditional methods are unreliable. We present a sub-linear time algorithm to compute the cost function using techniques from 3D distance transforms and integral images. 
Finally, we also propose a multi-view based pose-refinement algorithm to improve the estimated pose. We implemented the algorithm on an industrial robot arm and obtained location and angular estimation accuracy of the order of 1 mm and 2° respectively for a variety of parts with minimal texture. %B Robotics and Automation (ICRA), 2010 IEEE International Conference on %P 2028 - 2035 %8 2010/05// %G eng %R 10.1109/ROBOT.2010.5509897 %0 Conference Paper %B Computer Vision and Pattern Recognition (CVPR), 2010 IEEE Conference on %D 2010 %T Pose-robust albedo estimation from a single image %A Biswas,S. %A Chellapa, Rama %K 3D %K albedo %K estimation; %K estimation;shape %K Face %K filtering;face %K image;single %K image;stochastic %K information;pose-robust %K matching;pose %K nonfrontal %K pose;class-specific %K recognition;filtering %K recovery;single %K statistics;computer %K theory;pose %K vision;illumination-insensitive %X We present a stochastic filtering approach to perform albedo estimation from a single non-frontal face image. Albedo estimation has far-reaching applications in various computer vision tasks like illumination-insensitive matching, shape recovery, etc. We extend a previously proposed formulation that assumes the face is in a known pose, and present an algorithm that can perform albedo estimation from a single image even when pose information is inaccurate. 3D pose of the input face image is obtained as a byproduct of the algorithm. The proposed approach utilizes class-specific statistics of faces to iteratively improve albedo and pose estimates. Illustrations and experimental results are provided to show the effectiveness of the approach. We highlight the usefulness of the method for the task of matching faces across variations in pose and illumination. The facial pose estimates obtained are also compared against ground truth. 
%B Computer Vision and Pattern Recognition (CVPR), 2010 IEEE Conference on %P 2683 - 2690 %8 2010/06// %G eng %R 10.1109/CVPR.2010.5539987 %0 Journal Article %J Bioinformatics %D 2010 %T The power of protein interaction networks for associating genes with diseases %A Navlakha,S. %A Kingsford, Carl %B Bioinformatics %V 26 %P 1057 - 1057 %8 2010/// %G eng %N 8 %0 Journal Article %J Environmental Microbiology Reports %D 2010 %T The pre‐seventh pandemic Vibrio cholerae BX 330286 El Tor genome: evidence for the environment as a genome reservoir %A Haley,Bradd J. %A Grim,Christopher J. %A Hasan,Nur A. %A Taviani,Elisa %A Jongsik Chun %A Brettin,Thomas S. %A Bruce,David C. %A Challacombe,Jean F. %A Detter,J. Chris %A Han,Cliff S. %A Huq,Anwar %A Nair,G. Balakrish %A Rita R Colwell %X Vibrio cholerae O1 El Tor BX 330286 was isolated from a water sample in Australia in 1986, 9 years after an indigenous outbreak of cholera occurred in that region. This environmental strain encodes virulence factors highly similar to those of clinical strains, suggesting an ability to cause disease in humans. We demonstrate its high similarity in gene content and genome-wide nucleotide sequence to clinical V. cholerae strains, notably to pre-seventh pandemic O1 El Tor strains isolated in 1910 (V. cholerae NCTC 8457) and 1937 (V. cholerae MAK 757), as well as seventh pandemic strains isolated after 1960 globally. Here we demonstrate that this strain represents a transitory clone with shared characteristics between pre-seventh and seventh pandemic strains of V. cholerae. Interestingly, this strain was isolated 25 years after the beginning of the seventh pandemic, suggesting the environment as a genome reservoir in areas where cholera does not occur in sporadic, endemic or epidemic form. 
%B Environmental Microbiology Reports %V 2 %P 208 - 216 %8 2010/02/01/ %@ 1758-2229 %G eng %U http://onlinelibrary.wiley.com/doi/10.1111/j.1758-2229.2010.00141.x/abstract?userIsAuthenticated=false&deniedAccessCustomisedMessage= %N 1 %R 10.1111/j.1758-2229.2010.00141.x %0 Journal Article %J Proceedings of the Conference on Uncertainty in Artificial Intelligence %D 2010 %T Probabilistic similarity logic %A Bröcheler,M. %A Mihalkova,L. %A Getoor, Lise %X Many machine learning applications require the ability to learn from and reason about noisy multi-relational data. To address this, several effective representations have been developed that provide both a language for expressing the structural regularities of a domain, and principled support for probabilistic inference. In addition to these two aspects, however, many applications also involve a third aspect: the need to reason about similarities, which has not been directly supported in existing frameworks. This paper introduces probabilistic similarity logic (PSL), a general-purpose framework for joint reasoning about similarity in relational domains that incorporates probabilistic reasoning about similarities and relational structure in a principled way. PSL can integrate any existing domain-specific similarity measures and also supports reasoning about similarities between sets of entities. We provide efficient inference and learning techniques for PSL and demonstrate its effectiveness both in common relational tasks and in settings that require reasoning about similarity. %B Proceedings of the Conference on Uncertainty in Artificial Intelligence %8 2010/// %G eng %0 Journal Article %J Environment and Planning B: Planning and Design %D 2010 %T The problem with zoning: nonlinear effects of interactions between location preferences and externalities on land use and utility %A Zellner,M.L. %A Riolo,R.L. %A Rand, William %A Brown,D.G. %A Page,S.E. %A Fernandez,L.E. 
%B Environment and Planning B: Planning and Design %V 37 %P 408 - 428 %8 2010/// %G eng %N 3 %0 Journal Article %J Handbook of Information and Communication Security %D 2010 %T Public-Key Cryptography %A Katz, Jonathan %X Public-key cryptography ensures both secrecy and authenticity of communication using public-key encryption schemes and digital signatures, respectively. Following a brief introduction to the public-key setting (and a comparison with the classical symmetric-key setting), we present rigorous definitions of security for public-key encryption and digital signature schemes, introduce some number-theoretic primitives used in their construction, and describe various practical instantiations. %B Handbook of Information and Communication Security %P 21 - 34 %8 2010/// %G eng %R 10.1007/978-3-642-04117-4_2 %0 Journal Article %J arXiv:1007.4268 [cs] %D 2010 %T Pushdown Control-Flow Analysis of Higher-Order Programs %A Earl, Christopher %A Might, Matthew %A David Van Horn %K Computer Science - Programming Languages %K F.3.2 %K F.4.1 %X Context-free approaches to static analysis gain precision over classical approaches by perfectly matching returns to call sites---a property that eliminates spurious interprocedural paths. Vardoulakis and Shivers's recent formulation of CFA2 showed that it is possible (if expensive) to apply context-free methods to higher-order languages and gain the same boost in precision achieved over first-order programs. To this young body of work on context-free analysis of higher-order programs, we contribute a pushdown control-flow analysis framework, which we derive as an abstract interpretation of a CESK machine with an unbounded stack. One instantiation of this framework marks the first polyvariant pushdown analysis of higher-order programs; another marks the first polynomial-time analysis. 
In the end, we arrive at a framework for control-flow analysis that can efficiently compute pushdown generalizations of classical control-flow analyses. %B arXiv:1007.4268 [cs] %8 2010/07/24/ %G eng %U http://arxiv.org/abs/1007.4268 %0 Conference Paper %B Human Language Technologies: The 2010 Annual Conference of the North American Chapter of the Association for Computational Linguistics %D 2010 %T Putting the user in the loop: interactive Maximal Marginal Relevance for query-focused summarization %A Jimmy Lin %A Madnani,Nitin %A Dorr, Bonnie J %X This work represents an initial attempt to move beyond "single-shot" summarization to interactive summarization. We present an extension to the classic Maximal Marginal Relevance (MMR) algorithm that places a user "in the loop" to assist in candidate selection. Experiments in the complex interactive Question Answering (ciQA) task at TREC 2007 show that interactively-constructed responses are significantly higher in quality than automatically-generated ones. This novel algorithm provides a starting point for future work on interactive summarization. %B Human Language Technologies: The 2010 Annual Conference of the North American Chapter of the Association for Computational Linguistics %S HLT '10 %I Association for Computational Linguistics %C Stroudsburg, PA, USA %P 305 - 308 %8 2010/// %@ 1-932432-65-5 %G eng %U http://dl.acm.org/citation.cfm?id=1857999.1858040 %0 Journal Article %J The VLDB Journal %D 2009 %T PrDB: managing and exploiting rich correlations in probabilistic databases %A Sen,P. %A Deshpande, Amol %A Getoor, Lise %X Due to numerous applications producing noisy data, e.g., sensor data, experimental data, data from uncurated sources, information extraction, etc., there has been a surge of interest in the development of probabilistic databases. 
Most probabilistic database models proposed to date, however, fail to meet the challenges of real-world applications on two counts: (1) they often restrict the kinds of uncertainty that the user can represent; and (2) the query processing algorithms often cannot scale up to the needs of the application. In this work, we define a probabilistic database model, PrDB, that uses graphical models, a state-of-the-art probabilistic modeling technique developed within the statistics and machine learning community, to model uncertain data. We show how this results in a rich, complex yet compact probabilistic database model, which can capture the commonly occurring uncertainty models (tuple uncertainty, attribute uncertainty), more complex models (correlated tuples and attributes) and allows compact representation (shared and schema-level correlations). In addition, we show how query evaluation in PrDB translates into inference in an appropriately augmented graphical model. This allows us to easily use any of a myriad of exact and approximate inference algorithms developed within the graphical modeling community. While probabilistic inference provides a generic approach to solving queries, we show how the use of shared correlations, together with a novel inference algorithm that we developed based on bisimulation, can speed query processing significantly. We present a comprehensive experimental evaluation of the proposed techniques and show that even with a few shared correlations, significant speedups are possible. %B The VLDB Journal %V 18 %P 1065 - 1090 %8 2009/// %G eng %N 5 %R 10.1007/s00778-009-0153-2 %0 Conference Paper %B Intl. Conf. 
on Document Analysis and Recognition (ICDAR 09) %D 2009 %T Page Rule-Line Removal using Linear Subspaces in Monochromatic Handwritten Arabic Documents %A Abd-Almageed, Wael %A Kumar,Jayant %A David Doermann %X In this paper we present a novel method for removing page rule lines in monochromatic handwritten Arabic documents using subspace methods with minimal effect on the quality of the foreground text. We use moment and histogram properties to extract features that represent the characteristics of the underlying rule lines. A linear subspace is incrementally built to obtain a line model that can be used to identify rule line pixels. We also introduce a novel scheme for evaluating noise removal algorithms in general and we use it to assess the quality of our rule line removal algorithm. Experimental results presented on a data set of 50 Arabic documents, handwritten by different writers, demonstrate the effectiveness of the proposed method. %B Intl. Conf. on Document Analysis and Recognition (ICDAR 09) %P 768 - 772 %8 2009/// %G eng %0 Journal Article %J Relation %D 2009 %T Pairwise Document Similarity in Large Collections with MapReduce %A Elsayed,T. %A Jimmy Lin %A Oard, Douglas %B Relation %V 10 %P 8372 - 8372 %8 2009/// %G eng %N 1.91 %0 Journal Article %J Journal of Computational Biology %D 2009 %T Parametric Complexity of Sequence Assembly: Theory and Applications to Next Generation Sequencing %A Nagarajan,Niranjan %A Pop, Mihai %X In recent years, a flurry of new DNA sequencing technologies have altered the landscape of genomics, providing a vast amount of sequence information at a fraction of the costs that were previously feasible. The task of assembling these sequences into a genome has, however, still remained an algorithmic challenge that is in practice answered by heuristic solutions. 
In order to design better assembly algorithms and exploit the characteristics of sequence data from new technologies, we need an improved understanding of the parametric complexity of the assembly problem. In this article, we provide a first theoretical study in this direction, exploring the connections between repeat complexity, read lengths, overlap lengths and coverage in determining the “hard” instances of the assembly problem. Our work suggests at least two ways in which existing assemblers can be extended in a rigorous fashion, in addition to delineating directions for future theoretical investigations. %B Journal of Computational Biology %V 16 %P 897 - 908 %8 2009/07// %@ 1066-5277, 1557-8666 %G eng %U http://www.liebertonline.com/doi/abs/10.1089/cmb.2009.0005 %N 7 %R 10.1089/cmb.2009.0005 %0 Journal Article %J Proceedings of the International Conference on Agents and Artificial Intelligence (ICAART-09) %D 2009 %T Participatory simulation as a tool for agent-based simulation %A Berland,M. %A Rand, William %X Participatory simulation, as described by Wilensky & Stroup (1999c), is a form of agent-based simulation in which multiple humans control or design individual agents in the simulation. For instance, in a participatory simulation of an ecosystem, fifty participants might each control the intake and output of one agent, such that the food web emerges from the interactions of the human-controlled agents. We argue that participatory simulation has been under-utilized outside of strictly educational contexts, and that it provides myriad benefits to designers of traditional agent-based simulations. These benefits include increased robustness of the model, increased comprehensibility of the findings, and simpler design of individual agent behaviors. To make this argument, we look to recent research such as that from crowdsourcing (von Ahn, 2005) and the reinforcement learning of autonomous agent behavior (Abbeel, 2008). 
%B Proceedings of the International Conference on Agents and Artificial Intelligence (ICAART-09) %8 2009/// %G eng %0 Journal Article %J SIGCOMM Comput. Commun. Rev. %D 2009 %T Passive aggressive measurement with MGRP %A Papageorge,Pavlos %A McCann,Justin %A Hicks, Michael W. %K active %K available bandwidth %K kernel module %K passive %K piggybacking %K probing %K streaming %K transport protocol %X We present the Measurement Manager Protocol (MGRP), an in-kernel service that schedules and transmits probes on behalf of active measurement tools. Unlike prior measurement services, MGRP transparently piggybacks application packets inside the often significant amounts of empty padding contained in typical probes. Using MGRP thus combines the modularity, flexibility, and accuracy of standalone active measurement tools with the lower overhead of passive measurement techniques. Microbenchmark experiments show that the resulting bandwidth savings makes it possible to measure the network accurately, but faster and more aggressively than without piggybacking, and with few ill effects to piggybacked application or competing traffic. When using MGRP to schedule measurements on behalf of MediaNet, an overlay service that adaptively schedules media streams, we show MediaNet can achieve significantly higher streaming rates under the same network conditions. %B SIGCOMM Comput. Commun. Rev. %V 39 %P 279 - 290 %8 2009/08// %@ 0146-4833 %G eng %U http://doi.acm.org/10.1145/1594977.1592601 %N 4 %R 10.1145/1594977.1592601 %0 Journal Article %J Proc. VLDB Endow. %D 2009 %T Path oracles for spatial networks %A Sankaranarayanan,Jagan %A Samet, Hanan %A Alborzi,Houman %X The advent of location-based services has led to an increased demand for performing operations on spatial networks in real time. The challenge lies in being able to cast operations on spatial networks in terms of relational operators so that they can be performed in the context of a database. 
A linear-sized construct termed a path oracle is introduced that compactly encodes the n^2 shortest paths between every pair of vertices in a spatial network having n vertices, thereby reducing each of the paths to a single tuple in a relational database and enabling finding shortest paths by repeated application of a single SQL SELECT operator. The construction of the path oracle is based on the observed coherence between the spatial positions of both source and destination vertices and the shortest paths between them, which facilitates the aggregation of source and destination vertices into groups that share common vertices or edges on the shortest paths between them. With the aid of the Well-Separated Pair (WSP) technique, which has been applied to spatial networks using the network distance measure, a path oracle is proposed that takes O(s^d n) space, where s is empirically estimated to be around 12 for road networks, but that can retrieve an intermediate link in a shortest path in O(log n) time using a B-tree. An additional construct termed the path-distance oracle of size O(n · max(s^d, 1/ε^d)) (empirically n · max(12^2, 2.5/ε^2)) is proposed that can retrieve an intermediate vertex as well as an ε-approximation of the network distances in O(log n) time using a B-tree. Experimental results indicate that the proposed oracles are linear in n, which means that they are scalable and can enable complicated query processing scenarios on massive spatial network datasets. %B Proc. VLDB Endow. %V 2 %P 1210 - 1221 %8 2009/08// %@ 2150-8097 %G eng %U http://dl.acm.org/citation.cfm?id=1687627.1687763 %N 1 %0 Journal Article %J IEEE Transactions on intelligent transportation systems %D 2009 %T PERCEPTION AND NAVIGATION FOR AUTONOMOUS VEHICLES %A Hussein,M. %A Porikli, F. %A Davis, Larry S. %X We introduce a framework for evaluating human detectors that considers the practical application of a detector on a full image using multisize sliding-window scanning. 
We produce detection error tradeoff (DET) curves relating the miss detection rate and the false-alarm rate computed by deploying the detector on cropped windows and whole images, using, in the latter, either image resize or feature resize. Plots for cascade classifiers are generated based on confidence scores instead of on variation of the number of layers. To assess a method's overall performance on a given test, we use the average log miss rate (ALMR) as an aggregate performance score. To analyze the significance of the obtained results, we conduct 10-fold cross-validation experiments. We applied our evaluation framework to two state-of-the-art cascade-based detectors on the standard INRIA Person dataset and a local dataset of near-infrared images. We used our evaluation framework to study the differences between the two detectors on the two datasets with different evaluation methods. Our results show the utility of our framework. They also suggest that the descriptors used to represent features and the training window size are more important in predicting the detection performance than the nature of the imaging process, and that the choice between resizing images or features can have serious consequences. %B IEEE Transactions on intelligent transportation systems %V 10 %P 417 - 427 %8 2009/// %G eng %N 3 %0 Conference Paper %B Proceedings of the ACM SIGCOMM 2009 conference on Data communication %D 2009 %T Persona: an online social network with user-defined privacy %A Baden,Randy %A Bender,Adam %A Spring, Neil %A Bhattacharjee, Bobby %A Starin,Daniel %K ABE %K Facebook %K OSN %K persona %K privacy %K social networks %X Online social networks (OSNs) are immensely popular, with some claiming over 200 million users. Users share private content, such as personal information or photographs, using OSN applications. Users must trust the OSN service to protect personal information even as the OSN provider benefits from examining and sharing that information. 
We present Persona, an OSN where users dictate who may access their information. Persona hides user data with attribute-based encryption (ABE), allowing users to apply fine-grained policies over who may view their data. Persona provides an effective means of creating applications in which users, not the OSN, define policy over access to private data. We demonstrate new cryptographic mechanisms that enhance the general applicability of ABE. We show how Persona provides the functionality of existing online social networks with additional privacy benefits. We describe an implementation of Persona that replicates Facebook applications and show that Persona provides acceptable performance when browsing privacy-enhanced web pages, even on mobile devices. %B Proceedings of the ACM SIGCOMM 2009 conference on Data communication %S SIGCOMM '09 %I ACM %C New York, NY, USA %P 135 - 146 %8 2009/// %@ 978-1-60558-594-9 %G eng %U http://doi.acm.org/10.1145/1592568.1592585 %R 10.1145/1592568.1592585 %0 Journal Article %J Molecular biology and evolution %D 2009 %T A phylogenetic mixture model for the evolution of gene expression %A Eng,K. H %A Corrada Bravo, Hector %A Keleş,S. %B Molecular biology and evolution %V 26 %P 2363 - 2363 %8 2009/// %G eng %N 10 %0 Conference Paper %B Acoustics, Speech and Signal Processing, 2009. ICASSP 2009. IEEE International Conference on %D 2009 %T Plane-wave decomposition of a sound scene using a cylindrical microphone array %A Zotkin,Dmitry N %A Duraiswami, Ramani %K array;plane %K arrays; %K baffle;cylindrical %K beamforming;cylindrical %K decomposition;sound-hard %K localization;array %K microphone %K plane-wave %K processing;microphone %K scene %K signal %K spherical;source %K waves;sound %X The analysis for microphone arrays formed by mounting microphones on a sound-hard spherical or cylindrical baffle is typically performed using a decomposition of the sound field in terms of orthogonal basis functions. An alternative representation in terms of plane waves and a method for obtaining the coefficients of such a representation directly from measurements was proposed recently for the case of a spherical array. It was shown that representing the field as a collection of plane waves arriving from various directions simplifies both source localization and beamforming. In this paper, these results are extended to the case of the cylindrical array. 
Similarly to the spherical array case, localization and beamforming based on plane-wave decomposition perform as well as the traditional orthogonal function based methods while being numerically more stable. Both simulated and experimental results are presented. %B Acoustics, Speech and Signal Processing, 2009. ICASSP 2009. IEEE International Conference on %P 85 - 88 %8 2009/04// %G eng %R 10.1109/ICASSP.2009.4959526 %0 Book Section %B Programming Multi-Agent Systems %D 2009 %T Planning for Interactions among Autonomous Agents %A Au,Tsz-Chiu %A Kuter,Ugur %A Nau, Dana S. %E Hindriks,Koen %E Pokahr,Alexander %E Sardina,Sebastian %K Computer science %X AI planning research has traditionally focused on offline pl- anning for static single-agent environments. In environments where an agent needs to plan its interactions with other autonomous agents, planning is much more complicated, because the actions of the other agents can induce a combinatorial explosion in the number of contingencies that the planner will need to consider. This paper discusses several ways to alleviate the combinatorial explosion, and illustrates their use in several different kinds of multi-agent planning domains. %B Programming Multi-Agent Systems %S Lecture Notes in Computer Science %I Springer Berlin / Heidelberg %V 5442 %P 1 - 23 %8 2009/// %@ 978-3-642-03277-6 %G eng %U http://www.springerlink.com/content/j258015ux2p38383/abstract/ %0 Journal Article %J SIAM Journal on Optimization %D 2009 %T A Polynomial-Time Interior-Point Method for Conic Optimization, with Inexact Barrier Evaluations %A Schurr,Simon P. %A O'Leary, Dianne P. %A Tits,Andre' L. %B SIAM Journal on Optimization %V 20 %P 548 - 571 %8 2009/// %G eng %R DOI: 10.1137/080722825 %0 Journal Article %J Bioinformatics %D 2009 %T A practical algorithm for finding maximal exact matches in large sequence datasets using sparse suffix arrays %A Zia Khan %A Bloom, Joshua S. 
%A Kruglyak, Leonid %A Singh, Mona %X Motivation: High-throughput sequencing technologies place ever-increasing demands on existing algorithms for sequence analysis. Algorithms for computing maximal exact matches (MEMs) between sequences appear in two contexts where high-throughput sequencing will vastly increase the volume of sequence data: (i) seeding alignments of high-throughput reads for genome assembly and (ii) designating anchor points for genome–genome comparisons. Results: We introduce a new algorithm for finding MEMs. The algorithm leverages a sparse suffix array (SA), a text index that stores every K-th position of the text. In contrast to a full text index that stores every position of the text, a sparse SA occupies much less memory. Even though we use a sparse index, the output of our algorithm is the same as a full text index algorithm as long as the space between the indexed suffixes is not greater than a minimum length of a MEM. By relying on partial matches and additional text scanning between indexed positions, the algorithm trades memory for extra computation. The reduced memory usage makes it possible to determine MEMs between significantly longer sequences. Availability: Source code for the algorithm is available under a BSD open source license at http://compbio.cs.princeton.edu/mems. The implementation can serve as a drop-in replacement for the MEMs algorithm in MUMmer 3. Contact: zkhan@cs.princeton.edu; mona@cs.princeton.edu Supplementary information: Supplementary data are available at Bioinformatics online. %B Bioinformatics %V 25 %P 1609 - 1616 %8 2009/07/01/ %@ 1367-4803, 1460-2059 %G eng %U http://bioinformatics.oxfordjournals.org/content/25/13/1609 %N 13 %! 
Bioinformatics %0 Book Section %B Scalable Uncertainty Management %D 2009 %T PrDB: Managing Large-Scale Correlated Probabilistic Databases (Abstract) %A Deshpande, Amol %E Godo,Lluís %E Pugliese,Andrea %X Increasing numbers of real-world application domains are generating data that is inherently noisy, incomplete, and probabilistic in nature. Statistical inference and probabilistic modeling often introduce another layer of uncertainty on top of that. Examples of such data include measurement data collected by sensor networks, observation data in the context of social networks, scientific and biomedical data, and data collected by various online cyber-sources. Over the last few years, numerous approaches have been proposed, and several systems built, to integrate uncertainty into databases. However, these approaches typically make simplistic and restrictive assumptions concerning the types of uncertainties that can be represented. Most importantly, they often make highly restrictive independence assumptions, and cannot easily model rich correlations among the tuples or attribute values. Furthermore, they typically lack support for specifying uncertainties at different levels of abstractions, needed to handle large-scale uncertain datasets. %B Scalable Uncertainty Management %S Lecture Notes in Computer Science %I Springer Berlin / Heidelberg %V 5785 %P 1 - 1 %8 2009/// %@ 978-3-642-04387-1 %G eng %U http://dx.doi.org/10.1007/978-3-642-04388-8_1 %0 Book Section %B Theory of Cryptography %D 2009 %T Predicate Privacy in Encryption Systems %A Shen, Emily %A Elaine Shi %A Waters,Brent %E Reingold, Omer %K Computer science %X Predicate encryption is a new encryption paradigm which gives a master secret key owner fine-grained control over access to encrypted data. 
The master secret key owner can generate secret key tokens corresponding to predicates. An encryption of data x can be evaluated using a secret token corresponding to a predicate f; the user learns whether the data satisfies the predicate, i.e., whether f(x) = 1. Prior work on public-key predicate encryption has focused on the notion of data or plaintext privacy, the property that ciphertexts reveal no information about the encrypted data to an attacker other than what is inherently revealed by the tokens the attacker possesses. In this paper, we consider a new notion called predicate privacy, the property that tokens reveal no information about the encoded query predicate. Predicate privacy is inherently impossible to achieve in the public-key setting and has therefore received little attention in prior work. In this work, we consider predicate encryption in the symmetric-key setting and present a symmetric-key predicate encryption scheme which supports inner product queries. We prove that our scheme achieves both plaintext privacy and predicate privacy. %B Theory of Cryptography %S Lecture Notes in Computer Science %I Springer Berlin / Heidelberg %V 5444 %P 457 - 473 %8 2009 %@ 978-3-642-00456-8 %G eng %U http://www.springerlink.com/content/1717x5445k4718rp/abstract/ %0 Conference Paper %B 2009 AAAI Fall Symposium Series %D 2009 %T Predicting and Controlling System-Level Parameters of Multi-Agent Systems %A Miner,Don %A desJardins, Marie %K Complex Adaptive Systems %K System-level behavior %X Boid flocking is a system in which several individual agents follow three simple rules to generate swarm-level flocking behavior. To control this system, the user must adjust the agent program parameters, which indirectly modifies the flocking behavior. This is unintuitive because the properties of the flocking behavior are non-explicit in the agent program. 
In this paper, we discuss a domain-independent approach for detecting and controlling two emergent properties of boids: density and a qualitative threshold effect of swarming vs. flocking. Also, we discuss the possibility of applying this approach to detecting and controlling traffic jams in traffic simulations. %B 2009 AAAI Fall Symposium Series %8 2009/10/30/ %G eng %U http://www.aaai.org/ocs/index.php/FSS/FSS09/paper/viewPaper/909 %0 Journal Article %J EcoHealth %D 2009 %T Predicting the distribution of Vibrio spp. in the Chesapeake Bay: a Vibrio cholerae case study %A Constantin de Magny,G. %A Long,W. %A Brown,C. W. %A Hood,R. R. %A Huq,A. %A Murtugudde,R. %A Rita R Colwell %X Vibrio cholerae, the causative agent of cholera, is a naturally occurring inhabitant of the Chesapeake Bay and serves as a predictor for other clinically important vibrios, including Vibrio parahaemolyticus and Vibrio vulnificus. A system was constructed to predict the likelihood of the presence of V. cholerae in surface waters of the Chesapeake Bay, with the goal to provide forecasts of the occurrence of this and related pathogenic Vibrio spp. Prediction was achieved by driving an available multivariate empirical habitat model estimating the probability of V. cholerae within a range of temperatures and salinities in the Bay, with hydrodynamically generated predictions of ambient temperature and salinity. The experimental predictions provided both an improved understanding of the in situ variability of V. cholerae, including identification of potential hotspots of occurrence, and usefulness as an early warning system. With further development of the system, prediction of the probability of the occurrence of related pathogenic vibrios in the Chesapeake Bay, notably V. parahaemolyticus and V. vulnificus, will be possible, as well as its transport to any geographical location where sufficient relevant data are available. 
%B EcoHealth %V 6 %P 378 - 389 %8 2009/// %G eng %N 3 %R 10.1007/s10393-009-0273-6 %0 Conference Paper %B Software Maintenance, 2009. ICSM 2009. IEEE International Conference on %D 2009 %T Prioritizing component compatibility tests via user preferences %A Yoon,Il-Chul %A Sussman, Alan %A Memon, Atif M. %A Porter, Adam %K compatibility testing prioritization %K component configurations %K computer clusters %K Middleware %K Middleware systems %K object-oriented programming %K program testing %K software engineering %K Software systems %K third-party components %K user preferences %X Many software systems rely on third-party components during their build process. Because the components are constantly evolving, quality assurance demands that developers perform compatibility testing to ensure that their software systems build correctly over all deployable combinations of component versions, also called configurations. However, large software systems can have many configurations, and compatibility testing is often time and resource constrained. We present a prioritization mechanism that enhances compatibility testing by examining the “most important” configurations first, while distributing the work over a cluster of computers. We evaluate our new approach on two large scientific middleware systems and examine tradeoffs between the new prioritization approach and a previously developed lowest-cost-configuration-first approach. %B Software Maintenance, 2009. ICSM 2009. IEEE International Conference on %P 29 - 38 %8 2009/09// %G eng %R 10.1109/ICSM.2009.5306357 %0 Journal Article %J Security Privacy, IEEE %D 2009 %T Prioritizing Vulnerability Remediation by Determining Attacker-Targeted Vulnerabilities %A Michel Cukier %A Panjwani,S.
%K attacker-targeted vulnerabilities %K intrusion detection %K malicious connections %K security of data %K vulnerability remediation %K Windows service pack %X This article attempts to empirically analyze which vulnerabilities attackers tend to target in order to prioritize vulnerability remediation. This analysis focuses on the link between malicious connections and vulnerabilities, where each connection is considered malicious. Attacks requiring multiple connections are counted as multiple attacks. As the number of connections increases, so does the cost of recovering from the intrusion. The authors deployed four honey pots for four months, each running a different Windows service pack with its associated set of vulnerabilities. They then performed three empirical analyses to determine the relationship between the number of malicious connections and the total number of vulnerabilities, the number of malicious connections and the number of the vulnerabilities for different services, and the number of known successful attacks and the number of vulnerabilities for different services. %B Security Privacy, IEEE %V 7 %P 42 - 48 %8 2009/02//jan %@ 1540-7993 %G eng %N 1 %R 10.1109/MSP.2009.13 %0 Journal Article %J Computer Vision and Image Understanding %D 2009 %T Probabilistic fusion-based parameter estimation for visual tracking %A Han,Bohyung %A Davis, Larry S. %K Component-based tracking %K Density-based fusion %K Mean-shift %K visual tracking %X In object tracking, visual features may not be discriminative enough to estimate high dimensional motion parameters accurately, and complex motion estimation is computationally expensive due to a large search space. 
To tackle these problems, a reasonable strategy is to track small components within the target independently in lower dimensional motion parameter spaces (e.g., translation only) and then estimate the overall high dimensional motion (e.g., translation, scale and rotation) by statistically integrating the individual tracking results. Although tracking each component in a lower dimensional space is more reliable and faster, it is not trivial to combine the local motion information and estimate global parameters in a robust way because the individual component motions are frequently inconsistent. We propose a robust fusion algorithm to estimate the complex motion parameters using variable-bandwidth mean-shift. By employing correlation-based uncertainty modeling and fusion of individual components, the motion parameter that is robust to outliers can be detected with the variable-bandwidth density-based fusion (VBDF) algorithm. In addition, we describe a method to update the target appearance model for each component adaptively based on the component motion consistency. We present various tracking results and compare the performance of our algorithm with others using real video sequences. %B Computer Vision and Image Understanding %V 113 %P 435 - 445 %8 2009/04// %@ 1077-3142 %G eng %U http://www.sciencedirect.com/science/article/pii/S1077314208001872 %N 4 %R 10.1016/j.cviu.2008.11.003 %0 Journal Article %J IEEE transactions on pattern analysis and machine intelligence %D 2009 %T Probabilistic graphical models %A Gupta,A. %A Kembhavi,A. %A Davis, Larry S. %B IEEE transactions on pattern analysis and machine intelligence %V 31 %P 1775 - 1789 %8 2009/// %G eng %N 10 %0 Book %D 2009 %T Proceedings of the 1st ACM SIGKDD Workshop on Knowledge Discovery from Uncertain Data %E Pei,Jian %E Getoor, Lise %E de Keijzer,Ander %X The importance of uncertain data is growing quickly in many essential applications such as environmental surveillance, mobile object tracking and data integration.
Recently, storing, collecting, processing, and analyzing uncertain data has attracted increasing attention from both academia and industry. Analyzing and mining uncertain data needs collaboration and joint effort from multiple research communities including reasoning under uncertainty, uncertain databases and mining uncertain data. For example, statistics and probabilistic reasoning can provide support with models for representing uncertainty. The uncertain database community can provide methods for storing and managing uncertain data, while research in mining uncertain data can provide data analysis tasks and methods. It is important to build connections among those communities to tackle the overall problem of analyzing and mining uncertain data. There are many common challenges among the communities. One is to understand the different modeling assumptions made, and how they impact the methods, both in terms of accuracy and efficiency. Different researchers hold different assumptions and this is one of the major obstacles in the research of mining uncertain data. Another is the scalability of proposed management and analysis methods. Finally, to make analysis and mining useful and practical, we need real data sets for testing. Unfortunately, uncertain data sets are often hard to get. The goal of the First ACM SIGKDD Workshop on Knowledge Discovery from Uncertain Data (U'09) is to discuss in depth the challenges, opportunities and techniques on the topic of analyzing and mining uncertain data. The theme of this workshop is to make connections among the research areas of uncertain databases, probabilistic reasoning, and data mining, as well as to build bridges among the aspects of models, data, applications, novel mining tasks and effective solutions. By making connections among different communities, we aim at understanding each other in terms of scientific foundation as well as commonality and differences in research methodology. 
The workshop program is very stimulating and exciting. We are pleased to feature two invited talks by pioneers in mining uncertain data. Christopher Jermaine will give an invited talk titled "Managing and Mining Uncertain Data: What Might We Do Better?" Matthias Renz will address the topic "Querying and Mining Uncertain Data: Methods, Applications, and Challenges". Moreover, 8 accepted papers in 4 full presentations and 4 concise presentations will cover a bunch of interesting topics and on-going research projects about uncertain data mining. %I ACM %C New York, NY, USA %8 2009/// %@ 978-1-60558-675-5 %G eng %0 Journal Article %J ACM SIGPLAN Notices %D 2009 %T Profile-guided static typing for dynamic scripting languages %A Furr,M. %A An,J.D. %A Foster, Jeffrey S. %B ACM SIGPLAN Notices %V 44 %P 283 - 300 %8 2009/// %G eng %N 10 %0 Journal Article %J Proc. HCIC Workshop %D 2009 %T Promoting energy efficient behaviors in the home through feedback: The role of human-computer interaction %A Jon Froehlich %B Proc. HCIC Workshop %V 9 %8 2009 %G eng %0 Journal Article %J Proc. HCIC Workshop %D 2009 %T Promoting energy efficient behaviors in the home through feedback: The role of human-computer interaction %A Jon Froehlich %B Proc. HCIC Workshop %V 9 %8 2009/// %G eng %0 Conference Paper %B Proceedings of the seventh ACM conference on Creativity and cognition %D 2009 %T Promoting social creativity: a component of a national initiative for social participation %A Shneiderman, Ben %A Churchill,Elizabeth %A Fischer,Gerhard %A Goldberg,Ken %K research agenda %K social creativity %K social participation %X This panel will discuss group processes that promote social creativity in science, engineering, arts, and humanities. We will offer positive and negative examples of social creativity projects, while suggesting research directions for dramatically increased social participation. 
The goal is to develop strategies that would expand resources and opportunities for research and education in social creativity. This requires our community to develop a unified position, then reach out to national science funding agencies, while building the case for the importance of this topic beyond our own community. How can social creativity, collaborative discovery, distributed innovation, and collective intelligence be framed as an international priority to cope with the problems of the 21st century and how can we identify a clear set of research challenges? The theme of technology-mediated social participation is outlined in the white paper for a National Initiative for Social Participation (http://iparticipate.wikispaces.com). The white paper suggests that successful research challenges should have three key elements: (1) compelling national need (healthcare, national security, community safety, education, innovation, cultural heritage, energy sustainability, environmental protection, etc.), (2) scientific foundation based on established theories and well-defined research questions (privacy, reciprocity, trust, motivation, recognition, etc.), and (3) computer science research challenges (security, privacy protection, scalability, visualization, end-user development, distributed data handling for massive user-generated content, network analysis of community evolution, cross network comparison, etc.). We seek recommendations for ways to increase the resources and attention for this field. 
We hope to inspire: universities to change course content, add courses, and offer new degree programs; industry to help researchers on social creativity; government to support these ideas and try them out in government applications; and scientists and artists to open themselves to more social/collaborative approaches. %B Proceedings of the seventh ACM conference on Creativity and cognition %S C&C '09 %I ACM %C New York, NY, USA %P 7 - 8 %8 2009/// %@ 978-1-60558-865-0 %G eng %U http://doi.acm.org/10.1145/1640233.1640237 %R 10.1145/1640233.1640237 %0 Journal Article %J Advances in Cryptology–ASIACRYPT 2009 %D 2009 %T Proofs of storage from homomorphic identification protocols %A Ateniese,G. %A Kamara,S. %A Katz, Jonathan %X Proofs of storage (PoS) are interactive protocols allowing a client to verify that a server faithfully stores a file. Previous work has shown that proofs of storage can be constructed from any homomorphic linear authenticator (HLA). The latter, roughly speaking, are signature/message authentication schemes where ‘tags’ on multiple messages can be homomorphically combined to yield a ‘tag’ on any linear combination of these messages. We provide a framework for building public-key HLAs from any identification protocol satisfying certain homomorphic properties. We then show how to turn any public-key HLA into a publicly-verifiable PoS with communication complexity independent of the file length and supporting an unbounded number of verifications. We illustrate the use of our transformations by applying them to a variant of an identification protocol by Shoup, thus obtaining the first unbounded-use PoS based on factoring (in the random oracle model). %B Advances in Cryptology–ASIACRYPT 2009 %P 319 - 333 %8 2009/// %G eng %R 10.1007/978-3-642-10366-7_19 %0 Journal Article %J Proceedings of the National Academy of Sciences %D 2009 %T Protein quantification across hundreds of experimental conditions %A Zia Khan %A Bloom, Joshua S. %A Garcia, Benjamin A.
%A Singh, Mona %A Kruglyak, Leonid %K kd-tree %K orthogonal range query %K quantitative proteomics %K space partitioning data structures %K tandem mass spectrometry %X Quantitative studies of protein abundance rarely span more than a small number of experimental conditions and replicates. In contrast, quantitative studies of transcript abundance often span hundreds of experimental conditions and replicates. This situation exists, in part, because extracting quantitative data from large proteomics datasets is significantly more difficult than reading quantitative data from a gene expression microarray. To address this problem, we introduce two algorithmic advances in the processing of quantitative proteomics data. First, we use space-partitioning data structures to handle the large size of these datasets. Second, we introduce techniques that combine graph-theoretic algorithms with space-partitioning data structures to collect relative protein abundance data across hundreds of experimental conditions and replicates. We validate these algorithmic techniques by analyzing several datasets and computing both internal and external measures of quantification accuracy. We demonstrate the scalability of these techniques by applying them to a large dataset that comprises a total of 472 experimental conditions and replicates. %B Proceedings of the National Academy of Sciences %V 106 %P 15544 - 15548 %8 2009/09/15/ %@ 0027-8424, 1091-6490 %G eng %U http://www.pnas.org/content/106/37/15544 %N 37 %! PNAS %0 Journal Article %J Nucleic Acids Research %D 2009 %T PTM-Switchboard--a database of posttranslational modifications of transcription factors, the mediating enzymes and target genes %A Everett,L. %A Vo,A.
%A Hannenhalli, Sridhar %B Nucleic Acids Research %V 37 %P D66 - D71 %8 2009/01// %@ 0305-1048 %G eng %U http://nar.oxfordjournals.org/content/37/suppl_1/D66.short %N Database %R 10.1093/nar/gkn731 %0 Report %D 2009 %T Pushing Enterprise Security Down the Network Stack %A Clark,R. %A Feamster, Nick %A Nayak,A. %A Reimers,A. %X Network security is typically reactive: Networks provide connectivity and subsequently alter this connectivity according to various security policies, as implemented in middleboxes, or at higher layers. This approach gives rise to complicated interactions between protocols and systems that can cause incorrect behavior and slow response to attacks. In this paper, we propose a proactive approach to securing networks, whereby security-related actions (e.g., dropping or redirecting traffic) are embedded into the network fabric itself, leaving only a fixed set of actions to higher layers. We explore this approach in the context of network access control. Our design uses programmable switches to manipulate traffic at lower layers; these switches interact with policy and monitoring at higher layers. We apply our approach to Georgia Tech’s network access control system, show how the new design can both overcome the current shortcomings and provide new security functions, describe our proposed deployment, and discuss open research questions. %I Georgia Institute of Technology %V GT-CS-09-03 %8 2009/// %G eng %U http://hdl.handle.net/1853/30782 %0 Report %D 2008 %T Packets with provenance %A Ramachandran,A. %A Bhandankar,K. %A Tariq,M.B. %A Feamster, Nick %X Traffic classification and distinction allow network operators to provision resources, enforce trust, control unwanted traffic, and trace unwanted traffic back to its source.
Today’s classification mechanisms rely primarily on IP addresses and port numbers; unfortunately, these fields are often too coarse and ephemeral, and moreover, they do not reflect traffic’s provenance, associated trust, or relationship to other processes or hosts. This paper presents the design, analysis, user-space implementation, and evaluation of Pedigree, which consists of two components: a trusted tagger that resides on hosts and tags packets with information about their provenance (i.e., identity and history of potential input from hosts and resources for the process that generated them), and an arbiter, which decides what to do with the traffic that carries certain tags. Pedigree allows operators to write traffic classification policies with expressive semantics that reflect properties of the actual process that generated the traffic. Beyond offering new function and flexibility in traffic classification, Pedigree represents a new and interesting point in the design space between filtering and capabilities, and it allows network operators to leverage host-based trust models to decide treatment of network traffic. %I Georgia Institute of Technology %V GT-CS-08-02 %8 2008/// %G eng %U http://hdl.handle.net/1853/25467 %0 Report %D 2008 %T Parallel Algorithms for Volumetric Surface Construction %A JaJa, Joseph F. %A Shi,Qingmin %A Varshney, Amitabh %X Large scale scientific data sets are appearing at an increasing rate whose sizes can range from hundreds of gigabytes to tens of terabytes. Isosurface extraction and rendering is an important visualization technique that enables the visual exploration of such data sets using surfaces. However the computational requirements of this approach are substantial which in general prevent the interactive rendering of isosurfaces for large data sets. Therefore, parallel and distributed computing techniques offer a promising direction to deal with the corresponding computational challenges. 
In this chapter, we give a brief historical perspective of the isosurface visualization approach, and describe the basic sequential and parallel techniques used to extract and render isosurfaces with a particular focus on out-of-core techniques. For parallel algorithms, we assume a distributed memory model in which each processor has its own local disk, and processors communicate and exchange data through an interconnection network. We present a general framework for evaluating parallel isosurface extraction algorithms and describe the related best known parallel algorithms. We also describe the main parallel strategies used to handle isosurface rendering, pointing out the limitations of these strategies. %I Institute for Advanced Computer Studies, Univ of Maryland, College Park %8 2008/// %G eng %U http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.122.4472 %0 Journal Article %J Journal of Molecular Graphics and Modelling %D 2008 %T Parallel, stochastic measurement of molecular surface area %A Juba,Derek %A Varshney, Amitabh %K gpu %K Molecular surface %K Parallel %K Progressive %K Quasi-random %K Stochastic %X Biochemists often wish to compute surface areas of proteins. A variety of algorithms have been developed for this task, but they are designed for traditional single-processor architectures. The current trend in computer hardware is towards increasingly parallel architectures for which these algorithms are not well suited. We describe a parallel, stochastic algorithm for molecular surface area computation that maps well to the emerging multi-core architectures. Our algorithm is also progressive, providing a rough estimate of surface area immediately and refining this estimate as time goes on. Furthermore, the algorithm generates points on the molecular surface which can be used for point-based rendering.
We demonstrate a GPU implementation of our algorithm and show that it compares favorably with several existing molecular surface computation programs, giving fast estimates of the molecular surface area with good accuracy. %B Journal of Molecular Graphics and Modelling %V 27 %P 82 - 87 %8 2008/08// %@ 1093-3263 %G eng %U http://www.sciencedirect.com/science/article/pii/S1093326308000387 %N 1 %R 10.1016/j.jmgm.2008.03.001 %0 Conference Paper %B Proceedings of the 8th ACM SIGPLAN-SIGSOFT workshop on Program analysis for software tools and engineering %D 2008 %T Path projection for user-centered static analysis tools %A Khoo,Yit Phang %A Foster, Jeffrey S. %A Hicks, Michael W. %A Sazawal,Vibha %X The research and industrial communities have made great strides in developing sophisticated defect detection tools based on static analysis. To date most of the work in this area has focused on developing novel static analysis algorithms, but has neglected study of other aspects of static analysis tools, particularly user interfaces. In this work, we present a novel user interface toolkit called Path Projection that helps users visualize, navigate, and understand program paths, a common component of many tools' error reports. We performed a controlled user study to measure the benefit of Path Projection in triaging error reports from Locksmith, a data race detection tool for C. We found that Path Projection improved participants' time to complete this task without affecting accuracy, while participants felt Path Projection was useful and strongly preferred it to a more standard viewer. %B Proceedings of the 8th ACM SIGPLAN-SIGSOFT workshop on Program analysis for software tools and engineering %S PASTE '08 %I ACM %C New York, NY, USA %P 57 - 63 %8 2008/// %@ 978-1-60558-382-2 %G eng %U http://doi.acm.org/10.1145/1512475.1512488 %R 10.1145/1512475.1512488 %0 Journal Article %J SIGCOMM Comput. Commun. Rev. 
%D 2008 %T Path splicing %A Motiwala,Murtaza %A Elmore,Megan %A Feamster, Nick %A Vempala,Santosh %K multi-path routing %K path diversity %K path splicing %X We present path splicing, a new routing primitive that allows network paths to be constructed by combining multiple routing trees ("slices") to each destination over a single network topology. Path splicing allows traffic to switch trees at any hop en route to the destination. End systems can change the path on which traffic is forwarded by changing a small number of additional bits in the packet header. We evaluate path splicing for intradomain routing using slices generated from perturbed link weights and find that splicing achieves reliability that approaches the best possible using a small number of slices, for only a small increase in latency and no adverse effects on traffic in the network. In the case of interdomain routing, where splicing derives multiple trees from edges in alternate backup routes, path splicing achieves near-optimal reliability and can provide significant benefits even when only a fraction of ASes deploy it. We also describe several other applications of path splicing, as well as various possible deployment paths. %B SIGCOMM Comput. Commun. Rev. %V 38 %P 27 - 38 %8 2008/08// %@ 0146-4833 %G eng %U http://doi.acm.org/10.1145/1402946.1402963 %N 4 %R 10.1145/1402946.1402963 %0 Conference Paper %B Acoustics, Speech and Signal Processing, 2008. ICASSP 2008. IEEE International Conference on %D 2008 %T A pattern classification framework for theoretical analysis of component forensics %A Swaminathan,A. %A M. Wu %A Liu,K. 
J.R %K component forensics %K forensic analysis %K information processing %K nonintrusive forensics %K parameter estimation %K pattern classification %X Component forensics is an emerging methodology for forensic analysis that aims at estimating the algorithms and parameters in each component of a digital device. This paper proposes a theoretical foundation to examine the performance limits of component forensics. Using ideas from pattern classification theory, we define formal notions of identifiability of components in the information processing chain. We show that the parameters of certain device components can be accurately identified only in controlled settings through semi non-intrusive forensics, while the parameters of some others can be computed directly from the available sample data via complete non-intrusive analysis. We then extend the proposed theoretical framework to quantify and improve the accuracies and confidence in component parameter identification for several forensic applications. %B Acoustics, Speech and Signal Processing, 2008. ICASSP 2008. IEEE International Conference on %P 1665 - 1668 %8 2008/04/31/4 %G eng %R 10.1109/ICASSP.2008.4517947 %0 Journal Article %J Visualization and Computer Graphics, IEEE Transactions on %D 2008 %T Persuading Visual Attention through Geometry %A Kim,Youngmin %A Varshney, Amitabh %K art %K Attention %K Awareness %K Computer Graphics %K Cues %K geometry modification %K Humans %K mesh saliency %K Photic Stimulation %K User-Computer Interface %K visual attention %K visual perception %K visual persuasion %X Artists, illustrators, photographers, and cinematographers have long used the principles of contrast and composition to guide visual attention. In this paper, we introduce geometry modification as a tool to persuasively direct visual attention.
We build upon recent advances in mesh saliency to develop techniques to alter geometry to elicit greater visual attention. Eye-tracking-based user studies show that our approach successfully guides user attention in a statistically significant manner. Our approach operates directly on geometry and, therefore, produces view-independent results that can be used with existing view-dependent techniques of visual persuasion. %B Visualization and Computer Graphics, IEEE Transactions on %V 14 %P 772 - 782 %8 2008/08//july %@ 1077-2626 %G eng %N 4 %R 10.1109/TVCG.2007.70624 %0 Conference Paper %B Proceedings of the first workshop on Online social networks %D 2008 %T Photo-based authentication using social networks %A Yardi,Sarita %A Feamster, Nick %A Bruckman,Amy %K social networks %K trust %X We present Lineup, a system that uses the social network graph in Facebook and auxiliary information (e.g., "tagged" user photos) to build a photo-based Web site authentication framework. Lineup's underlying mechanism leverages the concept of CAPTCHAs, programs that are designed to distinguish bots from human users. Lineup extends this functionality to help a Web site ascertain a user's identity or membership in a certain group (e.g., an interest group, invitees to a certain event) in order to infer some level of trust. Lineup works by presenting a user with photographs and asking the user to identify subjects in the photo whom a user with the appropriate identity or group membership should know. We present the design and implementation for Lineup, describe a preliminary prototype implementation, and discuss Lineup's security properties, including possible guarantees and threats. 
%B Proceedings of the first workshop on Online social networks %S WOSN '08 %I ACM %C New York, NY, USA %P 55 - 60 %8 2008/// %@ 978-1-60558-182-8 %G eng %U http://doi.acm.org/10.1145/1397735.1397748 %R 10.1145/1397735.1397748 %0 Conference Paper %B Proceedings of the 16th ACM international conference on Multimedia %D 2008 %T Photo-based question answering %A Tom Yeh %A Lee,John J %A Darrell,Trevor %K Computer vision %K Information retrieval %K Question answering %X Photo-based question answering is a useful way of finding information about physical objects. Current question answering (QA) systems are text-based and can be difficult to use when a question involves an object with distinct visual features. A photo-based QA system allows direct use of a photo to refer to the object. We develop a three-layer system architecture for photo-based QA that brings together recent technical achievements in question answering and image matching. The first, template-based QA layer matches a query photo to online images and extracts structured data from multimedia databases to answer questions about the photo. To simplify image matching, it exploits the question text to filter images based on categories and keywords. The second, information retrieval QA layer searches an internal repository of resolved photo-based questions to retrieve relevant answers. The third, human-computation QA layer leverages community experts to handle the most difficult cases. A series of experiments performed on a pilot dataset of 30,000 images of books, movie DVD covers, grocery items, and landmarks demonstrate the technical feasibility of this architecture. We present three prototypes to show how photo-based QA can be built into an online album, a text-based QA, and a mobile application. 
%B Proceedings of the 16th ACM international conference on Multimedia %S MM '08 %I ACM %C New York, NY, USA %P 389 - 398 %8 2008/// %@ 978-1-60558-303-7 %G eng %U http://doi.acm.org/10.1145/1459359.1459412 %R 10.1145/1459359.1459412 %0 Journal Article %J Journal of Systems and Software %D 2008 %T A pilot study to compare programming effort for two parallel programming models %A Hochstein, Lorin %A Basili, Victor R. %A Vishkin, Uzi %A Gilbert,John %K effort %K empirical study %K Message-passing %K MPI %K parallel programming %K PRAM %K XMT %X Context: Writing software for the current generation of parallel systems requires significant programmer effort, and the community is seeking alternatives that reduce effort while still achieving good performance. Objective: Measure the effect of parallel programming models (message-passing vs. PRAM-like) on programmer effort. Design, setting, and subjects: One group of subjects implemented sparse-matrix dense-vector multiplication using message-passing (MPI), and a second group solved the same problem using a PRAM-like model (XMTC). The subjects were students in two graduate-level classes: one class was taught MPI and the other was taught XMTC. Main outcome measures: Development time, program correctness. Results: Mean XMTC development time was 4.8 h less than mean MPI development time (95% confidence interval, 2.0–7.7), a 46% reduction. XMTC programs were more likely to be correct, but the difference in correctness rates was not statistically significant (p = .16). Conclusions: XMTC solutions for this particular problem required less effort than MPI equivalents, but further studies are necessary which examine different types of problems and different levels of programmer experience.
%B Journal of Systems and Software %V 81 %P 1920 - 1930 %8 2008/11// %@ 0164-1212 %G eng %U http://www.sciencedirect.com/science/article/pii/S0164121208000125 %N 11 %R 10.1016/j.jss.2007.12.798 %0 Journal Article %J Computational Statistics & Data Analysis %D 2008 %T Pooled ANOVA %A Last,Michael %A Luta,Gheorghe %A Orso,Alex %A Porter, Adam %A Young,Stan %X We introduce Pooled ANOVA, a greedy algorithm to sequentially select the rare important factors from a large set of factors. Problems such as computer simulations and software performance tuning involve a large number of factors, few of which have an important effect on the outcome or performance measure. We pool multiple factors together, and test the pool for significance. If the pool has a significant effect we retain the factors for deconfounding. If not, we either declare that none of the factors are important, or retain them for follow-up decoding, depending on our assumptions and stage of testing. The sparser important factors are, the bigger the savings. Pooled ANOVA requires fewer assumptions than other, similar methods (e.g. sequential bifurcation), such as not requiring all important effects to have the same sign. We demonstrate savings of 25%-35% when compared to a conventional ANOVA, and also the ability to work in a setting where Sequential Bifurcation fails. %B Computational Statistics & Data Analysis %V 52 %P 5215 - 5228 %8 2008/08/15/ %@ 0167-9473 %G eng %U http://www.sciencedirect.com/science/article/pii/S0167947308002168 %N 12 %R 10.1016/j.csda.2008.04.024 %0 Journal Article %J Advances in Biometrics %D 2008 %T Pose and Illumination Issues in Face- and Gait-Based Identification %A Chellappa, Rama %A Aggarwal,G. %X Although significant work has been done in the field of face- and gait-based recognition, the performance of the state-of-the-art recognition algorithms is not good enough to be effective in operational systems.
Most algorithms do reasonably well for controlled images but are susceptible to changes in illumination conditions and pose. This has shifted the focus of research to more challenging tasks of obtaining better performance for uncontrolled realistic scenarios. In this chapter, we discuss several recent advances made to achieve this goal. %B Advances in Biometrics %P 307 - 322 %8 2008/// %G eng %0 Journal Article %J Computer Vision–ECCV 2008 %D 2008 %T A pose-invariant descriptor for human detection and segmentation %A Lin,Z. %A Davis, Larry S. %X We present a learning-based, sliding window-style approach for the problem of detecting humans in still images. Instead of traditional concatenation-style image location-based feature encoding, a global descriptor more invariant to pose variation is introduced. Specifically, we propose a principled approach to learning and classifying human/non-human image patterns by simultaneously segmenting human shapes and poses, and extracting articulation-insensitive features. The shapes and poses are segmented by an efficient, probabilistic hierarchical part-template matching algorithm, and the features are collected in the context of poses by tracing around the estimated shape boundaries. Histograms of oriented gradients are used as a source of low-level features from which our pose-invariant descriptors are computed, and kernel SVMs are adopted as the test classifiers. We evaluate our detection and segmentation approach on two public pedestrian datasets. %B Computer Vision–ECCV 2008 %P 423 - 436 %8 2008/// %G eng %0 Journal Article %J Relation %D 2008 %T Position Paper: Improving Browsing Environment Compliance Evaluations for Websites %A Eaton,C. %A Memon, Atif M. 
%X Though it would be ideal for web pages to render and function consistently across heterogeneous browsing environments, the browser, browser version, and operating system used to navigate and interact with web content is known to have a significant impact on the subsequent level of user accessibility. While research endeavors directed toward improving web accessibility have generally focused on addressing usability issues for individuals with physical limitations, providing accessible information and services for the entire web population also encompasses addressing the limitations of devices and platforms used to deploy web pages. We propose that more research be invested in the latter issue to facilitate the development of effective tools for detecting browsing environment influenced usability issues before inaccessible pages are released in the field. %B Relation %V 10 %P 6113 - 6113 %8 2008/// %G eng %N 1.14 %0 Journal Article %J SIGCOMM Comput. Commun. Rev. %D 2008 %T Practical issues with using network tomography for fault diagnosis %A Huang,Yiyi %A Feamster, Nick %A Teixeira,Renata %K Environment %K Fault detection %K network tomography %X This paper investigates the practical issues in applying network tomography to monitor failures. We outline an approach for selecting paths to monitor, detecting and confirming the existence of a failure, correlating multiple independent observations into a single failure event, and applying existing binary networking tomography algorithms to identify failures. We evaluate the ability of network tomography algorithms to correctly detect and identify failures in a controlled environment on the VINI testbed. %B SIGCOMM Comput. Commun. Rev.
%V 38 %P 53 - 58 %8 2008/09// %@ 0146-4833 %G eng %U http://doi.acm.org/10.1145/1452335.1452343 %N 5 %R 10.1145/1452335.1452343 %0 Conference Paper %B Proceedings of the theory and applications of cryptographic techniques 27th annual international conference on Advances in cryptology %D 2008 %T Predicate encryption supporting disjunctions, polynomial equations, and inner products %A Katz, Jonathan %A Sahai,Amit %A Waters,Brent %X Predicate encryption is a new paradigm generalizing, among other things, identity-based encryption. In a predicate encryption scheme, secret keys correspond to predicates and ciphertexts are associated with attributes; the secret key SK_f corresponding to a predicate f can be used to decrypt a ciphertext associated with attribute I if and only if f(I) = 1. Constructions of such schemes are currently known for relatively few classes of predicates. We construct such a scheme for predicates corresponding to the evaluation of inner products over Z_N (for some large integer N). This, in turn, enables constructions in which predicates correspond to the evaluation of disjunctions, polynomials, CNF/DNF formulae, or threshold predicates (among others). Besides serving as a significant step forward in the theory of predicate encryption, our results lead to a number of applications that are interesting in their own right.
%B Proceedings of the theory and applications of cryptographic techniques 27th annual international conference on Advances in cryptology %S EUROCRYPT'08 %I Springer-Verlag %C Berlin, Heidelberg %P 146 - 162 %8 2008/// %@ 3-540-78966-9, 978-3-540-78966-6 %G eng %U http://dl.acm.org/citation.cfm?id=1788414.1788423 %0 Book Section %B Wireless Sensor Networks %D 2008 %T Predictive Modeling-Based Data Collection in Wireless Sensor Networks %A Wang,Lidan %A Deshpande, Amol %E Verdone,Roberto %X We address the problem of designing practical, energy-efficient protocols for data collection in wireless sensor networks using predictive modeling. Prior work has suggested several approaches to capture and exploit the rich spatio-temporal correlations prevalent in WSNs during data collection. Although shown to be effective in reducing the data collection cost, those approaches use simplistic correlation models and, further, ignore many idiosyncrasies of WSNs, in particular the broadcast nature of communication. Our proposed approach is based on approximating the joint probability distribution over the sensors using undirected graphical models, ideally suited to exploit both the spatial correlations and the broadcast nature of communication. We present algorithms for optimally using such a model for data collection under different communication models, and for identifying an appropriate model to use for a given sensor network. Experiments over synthetic and real-world datasets show that our approach significantly reduces the data collection cost.
%B Wireless Sensor Networks %S Lecture Notes in Computer Science %I Springer Berlin / Heidelberg %V 4913 %P 34 - 51 %8 2008/// %@ 978-3-540-77689-5 %G eng %U http://dx.doi.org/10.1007/978-3-540-77690-1_3 %0 Conference Paper %B Proceedings of the 1st ACM SIGKDD international conference on Privacy, security, and trust in KDD %D 2008 %T Preserving the privacy of sensitive relationships in graph data %A Zheleva,Elena %A Getoor, Lise %K anonymization %K graph data %K identification %K link mining %K noisy-or %K privacy %K social network analysis %X In this paper, we focus on the problem of preserving the privacy of sensitive relationships in graph data. We refer to the problem of inferring sensitive relationships from anonymized graph data as link re-identification. We propose five different privacy preservation strategies, which vary in terms of the amount of data removed (and hence their utility) and the amount of privacy preserved. We assume the adversary has an accurate predictive model for links, and we show experimentally the success of different link re-identification strategies under varying structural characteristics of the data. %B Proceedings of the 1st ACM SIGKDD international conference on Privacy, security, and trust in KDD %S PinKDD'07 %I Springer-Verlag %C Berlin, Heidelberg %P 153 - 171 %8 2008/// %@ 3-540-78477-2, 978-3-540-78477-7 %G eng %U http://dl.acm.org/citation.cfm?id=1793474.1793485 %0 Journal Article %J Visualization and Computer Graphics, IEEE Transactions on %D 2008 %T Promoting Insight-Based Evaluation of Visualizations: From Contest to Benchmark Repository %A Plaisant, Catherine %A Fekete,J.-D. %A Grinstein,G.
%K InfoVis contest; benchmark repository; information visualization; data visualisation %K Algorithms; Benchmarking; Computer Graphics; Databases, Factual; Evaluation Studies as Topic; Image Interpretation, Computer-Assisted; Software; Software Validation; User-Computer Interface %X Information visualization (InfoVis) is now an accepted and growing field, but questions remain about the best uses for and the maturity of novel visualizations. Usability studies and controlled experiments are helpful, but generalization is difficult. We believe that the systematic development of benchmarks will facilitate the comparison of techniques and help identify their strengths under different conditions. We were involved in the organization and management of three InfoVis contests for the 2003, 2004, and 2005 IEEE InfoVis Symposia, which requested teams to report on insights gained while exploring data. We give a summary of the state of the art of evaluation in InfoVis, describe the three contests, summarize their results, discuss outcomes and lessons learned, and conjecture the future of visualization contests. All materials produced by the contests are archived in the InfoVis benchmark repository. %B Visualization and Computer Graphics, IEEE Transactions on %V 14 %P 120 - 134 %8 2008/02//jan %@ 1077-2626 %G eng %N 1 %R 10.1109/TVCG.2007.70412 %0 Journal Article %J IEEE Transactions on Signal Processing %D 2007 %T Parameterized Looped Schedules for Compact Representation of Execution Sequences in DSP Hardware and Software Implementation %A Ming-Yung Ko %A Zissulescu,C. %A Puthenpurayil,S. %A Bhattacharyya, Shuvra S. %A Kienhuis,B. %A Deprettere,E. F.
%K Application software %K array signal processing %K code compression methodology %K compact representation %K Compaction %K data compression %K Design automation %K Digital signal processing %K digital signal processing chips %K DSP %K DSP hardware %K embedded systems %K Encoding %K Field programmable gate arrays %K field-programmable gate arrays (FPGAs) %K FPGA %K Hardware %K hierarchical runlength encoding %K high-level synthesis %K Kahn process %K loop-based code compaction %K looping construct %K parameterized loop schedules %K program compilers %K reconfigurable design %K runlength codes %K scheduling %K Signal generators %K Signal processing %K Signal synthesis %K software engineering %K software implementation %K static dataflow models %K Very large scale integration %K VLSI %X In this paper, we present a technique for compact representation of execution sequences in terms of efficient looping constructs. Here, by a looping construct, we mean a compact way of specifying a finite repetition of a set of execution primitives. Such compaction, which can be viewed as a form of hierarchical run-length encoding (RLE), has application in many very large scale integration (VLSI) signal processing contexts, including efficient control generation for Kahn processes on field-programmable gate arrays (FPGAs), and software synthesis for static dataflow models of computation. In this paper, we significantly generalize previous models for loop-based code compaction of digital signal processing (DSP) programs to yield a configurable code compression methodology that exhibits a broad range of achievable tradeoffs.
Specifically, we formally develop and apply to DSP hardware and software synthesis a parameterizable loop scheduling approach with compact format, dynamic reconfigurability, and low-overhead decompression. %B IEEE Transactions on Signal Processing %V 55 %P 3126 - 3138 %8 2007/06// %@ 1053-587X %G eng %N 6 %R 10.1109/TSP.2007.893964 %0 Book Section %B Graph Drawing %D 2007 %T Parameterized st-Orientations of Graphs: Algorithms and Experiments %A Charalampos Papamanthou %A Tollis, Ioannis G. %E Kaufmann, Michael %E Wagner, Dorothea %K Algorithm Analysis and Problem Complexity %K Computer Graphics %K Data structures %K Discrete Mathematics in Computer Science %X st-orientations (st-numberings) or bipolar orientations of undirected graphs are central to many graph algorithms and applications. Several algorithms have been proposed in the past to compute an st-orientation of a biconnected graph. However, as indicated in [1], the computation of more than one st-orientation is very important for many applications in multiple research areas, such as that of Graph Drawing. In this paper we show how to compute such orientations with certain (parameterized) characteristics in the final st-oriented graph, such as the length of the longest path. Apart from Graph Drawing, this work applies in other areas such as Network Routing and in tackling difficult problems such as Graph Coloring and Longest Path. We present primary approaches to the problem of computing longest path parameterized st-orientations of graphs, an analytical presentation (together with proof of correctness) of a new O(m log^5 n) (O(m log n) for planar graphs) time algorithm that computes such orientations (and which was used in [1]) and extensive computational results that reveal the robustness of the algorithm.
%B Graph Drawing %S Lecture Notes in Computer Science %I Springer Berlin Heidelberg %P 220 - 233 %8 2007/01/01/ %@ 978-3-540-70903-9, 978-3-540-70904-6 %G eng %U http://link.springer.com/chapter/10.1007/978-3-540-70904-6_22 %0 Journal Article %J Computers and Software Engineering %D 2007 %T Password Changes: Empirical Results %A Michel Cukier %A Sharma,A. %X This paper focuses on a detailed analysis of one aspect of password evolution based on empirical data: password changes. Passwords can be divided into weak and strong based on how easy it is to crack them. We present a model of password changes and analyze passwords collected during 21 months from a large network of an average of 770 users. The results include tracking the percentage of users with weak passwords over time and the percentage of users changing between weak and strong passwords. Based on the data analysis, the distribution of users switching between weak and strong passwords was characterized and two parameters of the model were estimated. %B Computers and Software Engineering %8 2007/// %G eng %U http://isastorganization.org/ISAST_CS_1_2007.pdf#page=10 %0 Journal Article %J ACM SIGCOMM HotNets VI %D 2007 %T Path splicing: Reliable connectivity with rapid recovery %A Motiwala,M. %A Feamster, Nick %A Vempala,S. %X We present path splicing, a primitive that constructs network paths from multiple independent routing processes that run over a single network topology. The routing processes compute distinct routing trees using randomly perturbed link weights. A few additional bits in packet headers give end systems access to a large number of paths. By changing these bits, nodes can redirect traffic without detailed knowledge of network paths. Assembling paths by “splicing” segments can yield up to an exponential improvement in path diversity for only a linear increase in storage and message complexity.
We present randomized approaches for slice construction and failure recovery that achieve near-optimal performance and are extremely simple to configure. Our evaluation of path splicing on realistic ISP topologies demonstrates a dramatic increase in reliability that approaches the best possible using only a small number of slices and for only a small increase in latency. %B ACM SIGCOMM HotNets VI %8 2007/// %G eng %0 Conference Paper %B Proc. ACM SIGCOMM Hot-Nets %D 2007 %T Path splicing with network slicing %A Feamster, Nick %A Motiwala,M. %A Vempala,S. %B Proc. ACM SIGCOMM Hot-Nets %8 2007/// %G eng %0 Journal Article %J International Journal of Computer Vision %D 2007 %T Pedestrian detection via periodic motion analysis %A Ran, Y. %A Weiss, I. %A Zheng,Q. %A Davis, Larry S. %X We describe algorithms for detecting pedestrians in videos acquired by infrared (and color) sensors. Two approaches are proposed based on gait. The first employs computationally efficient periodicity measurements. Unlike other methods, it estimates a periodic motion frequency using two cascading hypothesis testing steps to filter out non-cyclic pixels so that it works well for both radial and lateral walking directions. The extraction of the period is efficient and robust with respect to sensor noise and cluttered background. In order to integrate shape and motion, we convert the cyclic pattern into a binary sequence by Maximal Principal Gait Angle (MPGA) fitting in the second method. It does not require alignment and continuously estimates the period using a Phase-locked Loop. Both methods are evaluated by experimental results that measure performance as a function of size, movement direction, frame rate and sequence length. %B International Journal of Computer Vision %V 71 %P 143 - 160 %8 2007/// %G eng %N 2 %0 Conference Paper %B Proc. Workshop on Hot Topics in Networks (HotNets) %D 2007 %T PeerWise discovery and negotiation of faster paths %A Lumezanu,C. %A Levin,D.
%A Spring, Neil %B Proc. Workshop on Hot Topics in Networks (HotNets) %8 2007/// %G eng %0 Conference Paper %B Proceedings of the 3rd International Workshop on Software Engineering for High Performance Computing Applications %D 2007 %T Performance Measurement of Novice HPC Programmers' Code %A Alameh, Rola %A Zazworka, Nico %A Hollingsworth, Jeffrey K %K measurement %K performance %K performance measures %K product metrics %K program analysis %X Performance is one of the key factors of improving productivity in High Performance Computing (HPC). In this paper we discuss current studies in the field of performance measurement of codes captured in classroom experiments for the High Productivity Computing Project (HPCS). We give two examples of measurements introducing two new hypotheses: spending more effort doesn't always result in improvement of performance for novices; the use of higher level MPI functions promises better performance for novices. We also present a tool - the Automated Performance Measurement System (APMS). APMS helps to partially automate the measurement of the performance of a set of parallel programs with several inputs. The design and implementation of the tool is flexible enough to allow other researchers to conduct similar studies. %B Proceedings of the 3rd International Workshop on Software Engineering for High Performance Computing Applications %S SE-HPC '07 %I IEEE Computer Society %P 3– - 3– %8 2007/// %@ 0-7695-2969-0 %G eng %U http://dx.doi.org/10.1109/SE-HPC.2007.4 %R 10.1109/SE-HPC.2007.4
This work has primarily relied on known lighting conditions or the presence of a single point source of light in each image. In this paper we show how to perform photometric stereo assuming that all lights in a scene are distant from the object but otherwise unconstrained. Lighting in each image may be unknown and may include an arbitrary combination of diffuse, point and extended sources. Our work is based on recent results showing that for Lambertian objects, general lighting conditions can be represented using low order spherical harmonics. Using this representation we can recover shape by performing a simple optimization in a low-dimensional space. We also analyze the shape ambiguities that arise in such a representation. We demonstrate our method by reconstructing the shape of objects from images obtained under a variety of lightings. We further compare the reconstructed shapes against shapes obtained with a laser scanner. %B International Journal of Computer Vision %V 72 %P 239 - 257 %8 2007/// %G eng %N 3 %R 10.1007/s11263-006-8815-7 %0 Journal Article %J Bulletin of the American Physical Society %D 2007 %T Plasma Turbulence Simulation and Visualization on Graphics Processors: Efficient Parallel Computing on the Desktop %A Stantchev,George %A Juba,Derek %A Dorland,William %A Varshney, Amitabh %X Direct numerical simulation (DNS) of turbulence is computationally very intensive and typically relies on some form of parallel processing. Spectral kernels used for spatial discretization are a common computational bottleneck on distributed memory architectures. One way to increase the efficiency of DNS algorithms is to parallelize spectral kernels using tightly-coupled SPMD multiprocessor hardware architecture with minimal inter-processor communication latency.
In this poster we present techniques to take advantage of the recent programmable interfaces for modern Graphics Processing Units (GPUs) to carefully map DNS computations to GPU architectures that are characterized by a very high memory bandwidth and hundreds of SPMD processors. We compare and contrast the performance of our parallel algorithm on a modern GPU versus a CPU implementation of several turbulence simulation codes. We also demonstrate a prototype of a scalable computational steering framework based on turbulence simulation and visualization coupling on the GPU. %B Bulletin of the American Physical Society %V 52 %N 11 %8 2007/11/12/ %G eng %U http://meetings.aps.org/Meeting/DPP07/Event/70114 %0 Conference Paper %B Society of Photo-Optical Instrumentation Engineers (SPIE) Conference Series %D 2007 %T Plasmonics and the parallel programming problem %A Vishkin, Uzi %A Smolyaninov,I. %A Davis,C. %X While many parallel computers have been built, it has generally been too difficult to program them. Now, all computers are effectively becoming parallel machines. Biannual doubling in the number of cores on a single chip, or faster, over the coming decade is planned by most computer vendors. Thus, the parallel programming problem is becoming more critical. The only known solution to the parallel programming problem in the theory of computer science is through a parallel algorithmic theory called PRAM. Unfortunately, some of the PRAM theory assumptions regarding the bandwidth between processors and memories did not properly reflect a parallel computer that could be built in previous decades. Reaching memories, or other processors in a multi-processor organization, required off-chip connections through pins on the boundary of each electric chip. Using the number of transistors that is becoming available on chip, on-chip architectures that adequately support the PRAM are becoming possible.
However, the bandwidth of off-chip connections remains insufficient and the latency remains too high. This creates a bottleneck at the boundary of the chip for a PRAM-On-Chip architecture. This also prevents scalability to larger “supercomputing” organizations spanning across many processing chips that can handle massive amounts of data. Instead of connections through pins and wires, power-efficient CMOS-compatible on-chip conversion to plasmonic nanowaveguides is introduced for improved latency and bandwidth. Proper incorporation of our ideas offers exciting avenues to resolving the parallel programming problem, and an alternative way for building faster, more usable and much more compact supercomputers. %B Society of Photo-Optical Instrumentation Engineers (SPIE) Conference Series %V 6477 %P 19 - 19 %8 2007/// %G eng %0 Journal Article %J International Journal of Computational Geometry and Applications %D 2007 %T Pointerless implementation of hierarchical simplicial meshes and efficient neighbor finding in arbitrary dimensions %A Atalay,F. B %A Mount, Dave %A Mitchell,J. %X We describe a pointerless representation of hierarchical regular simplicial meshes, based on a bisection approach proposed by Maubach. We introduce a new labeling scheme, called an LPT code, which uniquely encodes the geometry of each simplex of the hierarchy, and we present rules to compute the neighbors of a given simplex efficiently through the use of these codes. In addition, we show how to traverse the associated tree and how to answer point location and interpolation queries. Our system works in arbitrary dimensions.
%B International Journal of Computational Geometry and Applications %V 17 %P 595 - 631 %8 2007/// %G eng %N 6 %0 Conference Paper %D 2007 %T Portcullis: protecting connection setup from denial-of-capability attacks %A Parno, Bryan %A Wendlandt, Dan %A Elaine Shi %A Perrig, Adrian %A Maggs, Bruce %A Hu, Yih-Chun %K network capability %K per-computation fairness %X Systems using capabilities to provide preferential service to selected flows have been proposed as a defense against large-scale network denial-of-service attacks. While these systems offer strong protection for established network flows, the Denial-of-Capability (DoC) attack, which prevents new capability-setup packets from reaching the destination, limits the value of these systems. Portcullis mitigates DoC attacks by allocating scarce link bandwidth for connection establishment packets based on per-computation fairness. We prove that a legitimate sender can establish a capability with high probability regardless of an attacker's resources or strategy and that no system can improve on our guarantee. We simulate full and partial deployments of Portcullis on an Internet-scale topology to confirm our theoretical results and demonstrate the substantial benefits of using per-computation fairness. %S SIGCOMM '07 %I ACM %P 289 - 300 %8 2007 %@ 978-1-59593-713-1 %G eng %U http://doi.acm.org/10.1145/1282380.1282413 %0 Journal Article %J EURASIP Journal on Advances in Signal Processing %D 2007 %T Pose-Encoded Spherical Harmonics for Face Recognition and Synthesis Using a Single Image %A Yue,Zhanfeng %A Zhao,Wenyi %A Chellappa, Rama %X Face recognition under varying pose is a challenging problem, especially when illumination variations are also present. In this paper, we propose to address one of the most challenging scenarios in face recognition.
That is, to identify a subject from a test image that is acquired under a different pose and illumination condition from only one training sample (also known as a gallery image) of this subject in the database. For example, the test image could be semifrontal and illuminated by multiple lighting sources while the corresponding training image is frontal under a single lighting source. Under the assumption of Lambertian reflectance, the spherical harmonics representation has proved to be effective in modeling illumination variations for a fixed pose. In this paper, we extend the spherical harmonics representation to encode pose information. More specifically, we utilize the fact that 2D harmonic basis images at different poses are related by closed-form linear transformations, and give a more convenient transformation matrix to be directly used for basis images. An immediate application is that we can easily synthesize a different view of a subject under arbitrary lighting conditions by changing the coefficients of the spherical harmonics representation. A more important result is an efficient face recognition method, based on the orthonormality of the linear transformations, for solving the above-mentioned challenging scenario. Thus, we directly project a nonfrontal view test image onto the space of frontal view harmonic basis images. The impact of some empirical factors due to the projection is embedded in a sparse warping matrix; for most cases, we show that the recognition performance does not deteriorate after warping the test image to the frontal view. Very good recognition results are obtained using this method for both synthetic and challenging real images.
%B EURASIP Journal on Advances in Signal Processing %V 2008 %P 748483 - 748483 %8 2007/09/25/ %@ 1687-6180 %G eng %U http://asp.eurasipjournals.com/content/2008/1/748483 %N 1 %R 10.1155/2008/748483 %0 Journal Article %J Nucleic Acids Research %D 2007 %T Position and distance specificity are important determinants of cis-regulatory motifs in addition to evolutionary conservation %A Vardhanabhuti,Saran %A Wang,Junwen %A Hannenhalli, Sridhar %X Computational discovery of cis-regulatory elements remains challenging. To cope with the high false positives, evolutionary conservation is routinely used. However, conservation is only one of the attributes of cis-regulatory elements and is neither necessary nor sufficient. Here, we assess two additional attributes—positional and inter-motif distance specificity—that are critical for interactions between transcription factors. We first show that for a greater than expected fraction of known motifs, the genes that contain the motifs in their promoters in a position-specific or distance-specific manner are related, both in function and/or in expression pattern. We then use the position and distance specificity to discover novel motifs. Our work highlights the importance of distance and position specificity, in addition to the evolutionary conservation, in discovering cis-regulatory motifs. %B Nucleic Acids Research %V 35 %P 3203 - 3213 %8 2007/05/01/ %G eng %U http://nar.oxfordjournals.org/content/35/10/3203.abstract %N 10 %R 10.1093/nar/gkm201 %0 Journal Article %J Lecture Notes in Computer Science %D 2007 %T Simultaneous Appearance Modeling and Segmentation for Matching People Under Occlusion %A Lin,Z. %A Davis, Larry S. %A David Doermann %A DeMenthon,D.
%B Lecture Notes in Computer Science %V 4844 %P 404 - 413 %8 2007/// %G eng %0 Journal Article %J IEEE/ACM Transactions on Networking (TON) %D 2007 %T Power optimization in fault-tolerant topology control algorithms for wireless multi-hop networks %A Hajiaghayi, Mohammad T. %A Immorlica,N. %A Mirrokni,V. S %B IEEE/ACM Transactions on Networking (TON) %V 15 %P 1345 - 1358 %8 2007/// %G eng %N 6 %0 Journal Article %J Computational Statistics & Data Analysis %D 2007 %T A practical approximation algorithm for the LMS line estimator %A Mount, Dave %A Netanyahu,Nathan S. %A Romanik,Kathleen %A Silverman,Ruth %A Wu,Angela Y. %K Approximation algorithms %K least median-of-squares regression %K line arrangements %K line fitting %K randomized algorithms %K robust estimation %X The problem of fitting a straight line to a finite collection of points in the plane is an important problem in statistical estimation. Robust estimators are widely used because of their lack of sensitivity to outlying data points. The least median-of-squares (LMS) regression line estimator is among the best known robust estimators. Given a set of n points in the plane, it is defined to be the line that minimizes the median squared residual or, more generally, the line that minimizes the residual of any given quantile q, where 0 < q ⩽ 1. This problem is equivalent to finding the strip defined by two parallel lines of minimum vertical separation that encloses at least half of the points. The best known exact algorithm for this problem runs in O(n^2) time. We consider two types of approximations, a residual approximation, which approximates the vertical height of the strip to within a given error bound ε_r ⩾ 0, and a quantile approximation, which approximates the fraction of points that lie within the strip to within a given error bound ε_q ⩾ 0. We present two randomized approximation algorithms for the LMS line estimator.
The first is a conceptually simple quantile approximation algorithm, which given fixed q and εq > 0 runs in O(n log n) time. The second is a practical algorithm, which can solve both types of approximation problems or be used as an exact algorithm. We prove that when used as a quantile approximation, this algorithm's expected running time is O(n log² n). We present empirical evidence that the latter algorithm is quite efficient for a wide variety of input distributions, even when used as an exact algorithm. %B Computational Statistics & Data Analysis %V 51 %P 2461 - 2486 %8 2007/02/01/ %@ 0167-9473 %G eng %U http://www.sciencedirect.com/science/article/pii/S0167947306002921 %N 5 %R 10.1016/j.csda.2006.08.033 %0 Conference Paper %B Proceedings of the nineteenth annual ACM symposium on Parallel algorithms and architectures %D 2007 %T PRAM-on-chip: first commitment to silicon %A Wen,Xingzhi %A Vishkin, Uzi %K ease-of-programming %K explicit multi-threading %K on-chip parallel processor %K Parallel algorithms %K PRAM %K XMT %B Proceedings of the nineteenth annual ACM symposium on Parallel algorithms and architectures %S SPAA '07 %I ACM %C New York, NY, USA %P 301 - 302 %8 2007/// %@ 978-1-59593-667-7 %G eng %U http://doi.acm.org/10.1145/1248377.1248427 %R 10.1145/1248377.1248427 %0 Report %D 2007 %T Predicting Protein-Protein Interactions Using Relational Features %A Licamele,Louis %A Getoor, Lise %K Technical Report %X Proteins play a fundamental role in every process within the cell. Understanding how proteins interact, and the functional units they are part of, is important to furthering our knowledge of the entire biological process. There has been a growing amount of work, both experimental and computational, on determining the protein-protein interaction network. Recently researchers have had success looking at this as a relational learning problem.
In this work, we further this investigation, proposing several novel relational features for predicting protein-protein interaction. These features can be used in any classifier. Our approach allows large and complex networks to be analyzed and is an alternative to using more expensive relational methods. We show that we are able to get an accuracy of 81.7% when predicting new links from noisy high throughput data. %I Department of Computer Science, University of Maryland, College Park %V CS-TR-4909 %8 2007/01/07/ %G eng %U http://drum.lib.umd.edu/handle/1903/7555 %0 Journal Article %J Journal of Parallel and Distributed Computing %D 2007 %T Principles for designing data-/compute-intensive distributed applications and middleware systems for heterogeneous environments %A Kim,Jik-Soo %A Andrade,Henrique %A Sussman, Alan %K Computational science applications %K Data-/compute-intensive applications %K Heterogeneous environments %K Middleware systems %X The nature of distributed systems is constantly and steadily changing as the hardware and software landscape evolves. Porting applications and adapting existing middleware systems to ever changing computational platforms has become increasingly complex and expensive. Therefore, the design of applications, as well as the design of next generation middleware systems, must follow a set of guiding principles in order to insure long-term “survivability” without costly re-engineering. From our practical experience, the key determinants to success in this endeavor are adherence to the following principles: (1) Design for change; (2) Provide for storage subsystem I/O coordination; (3) Employ workload partitioning and load balancing techniques; (4) Employ caching; (5) Schedule the workload; and (6) Understand the workload. 
In order to support these principles, we have collected extensive experimental results comparing three middleware systems targeted at data- and compute-intensive applications implemented by our research group during the course of the last decade, on a single data- and compute-intensive application. The main contribution of this work is the analysis of a level playing field, where we discuss and quantify how adherence to these guiding principles impacts overall system throughput and response time. %B Journal of Parallel and Distributed Computing %V 67 %P 755 - 771 %8 2007/07// %@ 0743-7315 %G eng %U http://www.sciencedirect.com/science/article/pii/S0743731507000603 %N 7 %R 10.1016/j.jpdc.2007.04.006 %0 Journal Article %J Information and Computation %D 2007 %T Priority and abstraction in process algebra %A Cleaveland, Rance %A Lüttgen,Gerald %A Natarajan,V. %K Axiomatization %K Bisimulation %K Full abstraction %K Observation congruence %K Priority %K Process algebra %X More than 15 years ago, Cleaveland and Hennessy proposed an extension of the process algebra CCS in which some actions may take priority over others. The theory was equipped with a behavioral congruence based on strong bisimulation. This article gives a full account of the challenges in, and the solutions employed for, defining a semantic theory of observation congruence for this process algebra. A full-abstraction result is presented whose proof relies on a novel approach based on successive approximations for identifying the largest congruence contained in an intuitive but naïve equivalence. Prioritized observation congruence is also characterized equationally for the class of finite processes, while its utility for system verification is demonstrated by an illustrative example.
%B Information and Computation %V 205 %P 1426 - 1458 %8 2007/09// %@ 0890-5401 %G eng %U http://www.sciencedirect.com/science/article/pii/S0890540107000624 %N 9 %R 10.1016/j.ic.2007.05.001 %0 Journal Article %J ACM Transactions on Embedded Computing Systems (TECS) %D 2007 %T Probabilistic design of multimedia embedded systems %A Hua,S. %A Qu,G. %A Bhattacharyya, Shuvra S. %B ACM Transactions on Embedded Computing Systems (TECS) %V 6 %P 15–es - 15–es %8 2007/// %G eng %N 3 %0 Journal Article %J Dynamical Vision %D 2007 %T A probabilistic framework for correspondence and egomotion %A Domke, J. %A Aloimonos, J. %B Dynamical Vision %P 232 - 242 %8 2007/// %G eng %0 Conference Paper %B Computer Vision, 2007. ICCV 2007. IEEE 11th International Conference on %D 2007 %T Probabilistic Fusion Tracking Using Mixture Kernel-Based Bayesian Filtering %A Han,Bohyung %A Joo,Seong-Wook %A Davis, Larry S. %K adaptive particle arrangement system %K array signal processing %K Bayes methods %K blind integration %K mixture kernel-based Bayesian filtering %K multiple sensors %K object tracking %K particle filtering (numerical methods) %K particle filters %K probabilistic fusion tracking %K sensor fusion %K visual tracking %X Even though sensor fusion techniques based on particle filters have been applied to object tracking, their implementations have been limited to combining measurements from multiple sensors by the simple product of individual likelihoods. Therefore, the number of observations is increased as many times as the number of sensors, and the combined observation may become unreliable through blind integration of sensor observations - especially if some sensors are too noisy and non-discriminative.
We describe a methodology to model interactions between multiple sensors and to estimate the current state by using a mixture of Bayesian filters - one filter for each sensor, where each filter makes a different level of contribution to estimate the combined posterior in a reliable manner. In this framework, an adaptive particle arrangement system is constructed in which each particle is allocated to only one of the sensors for observation and a different number of samples is assigned to each sensor using prior distribution and partial observations. We apply this technique to visual tracking in logical and physical sensor fusion frameworks, and demonstrate its effectiveness through tracking results. %B Computer Vision, 2007. ICCV 2007. IEEE 11th International Conference on %P 1 - 8 %8 2007/10// %G eng %R 10.1109/ICCV.2007.4408938 %0 Conference Paper %D 2007 %T Probabilistic go theories %A Parker,A. %A Yaman,F. %A Nau, Dana S. %A V.S. Subrahmanian %X There are numerous cases where we need to reason about vehicles whose intentions and itineraries are not known in advance to us. For example, Coast Guard agents tracking boats don’t always know where they are headed. Likewise, in drug enforcement applications, it is not always clear where drug-carrying airplanes (which do often show up on radar) are headed, and how legitimate planes with an approved flight manifest can avoid them. Likewise, traffic planners may want to understand how many vehicles will be on a given road at a given time. Past work on reasoning about vehicles (such as the “logic of motion” by Yaman et al. [Yaman et al., 2004]) only deals with vehicles whose plans are known in advance and don’t capture such situations. In this paper, we develop a formal probabilistic extension of their work and show that it captures both vehicles whose itineraries are known, and those whose itineraries are not known.
We show how to correctly answer certain queries against a set of statements about such vehicles. A prototype implementation shows our system to work efficiently in practice. %8 2007/// %G eng %U http://www.aaai.org/Papers/IJCAI/2007/IJCAI07-079.pdf %0 Conference Paper %B Proceedings of the 33rd international conference on Very large data bases %D 2007 %T Probabilistic graphical models and their role in databases %A Deshpande, Amol %A Sarawagi,Sunita %X Probabilistic graphical models provide a framework for compact representation and efficient reasoning about the joint probability distribution of several interdependent variables. This is a classical topic with roots in statistical physics. In recent years, spurred by several applications in unstructured data integration, sensor networks, image processing, bio-informatics, and code design, the topic has received renewed interest in the machine learning, data mining, and database communities. Techniques from graphical models have also been applied to many topics directly of interest to the database community including information extraction, sensor data analysis, imprecise data representation and querying, selectivity estimation for query optimization, and data privacy. As database research continues to expand beyond the confines of traditional enterprise domains, we expect both the need and applicability of probabilistic graphical models to increase dramatically over the next few years. With this tutorial, we are aiming to provide a foundational overview of probabilistic graphical models to the database community, accompanied by a brief overview of some of the recent research literature on the role of graphical models in databases. 
%B Proceedings of the 33rd international conference on Very large data bases %S VLDB '07 %I VLDB Endowment %P 1435 - 1436 %8 2007/// %@ 978-1-59593-649-3 %G eng %U http://dl.acm.org/citation.cfm?id=1325851.1326038 %0 Journal Article %J ACM Transactions on Computational Logic (TOCL) %D 2007 %T Probabilistic interval XML %A Hung,Edward %A Getoor, Lise %A V.S. Subrahmanian %K Semistructured Databases %K XML %X Interest in XML databases has been expanding rapidly over the last few years. In this paper, we study the problem of incorporating probabilistic information into XML databases. We propose the Probabilistic Interval XML (PIXML for short) data model in this paper. Using this data model, users can express probabilistic information within XML markups. In addition, we provide two alternative formal model-theoretic semantics for PIXML data. The first semantics is a “global” semantics which is relatively intuitive, but is not directly amenable to computation. The second semantics is a “local” semantics which supports efficient computation. We prove several correspondence results between the two semantics. To our knowledge, this is the first formal model theoretic semantics for probabilistic interval XML. We then provide an operational semantics that may be used to compute answers to queries and that is correct for a large class of probabilistic instances. %B ACM Transactions on Computational Logic (TOCL) %V 8 %8 2007/08// %@ 1529-3785 %G eng %U http://doi.acm.org/10.1145/1276920.1276926 %N 4 %R 10.1145/1276920.1276926 %0 Journal Article %J PHOTOGRAMMETRIE FERNERKUNDUNG GEOINFORMATION %D 2007 %T A probabilistic notion of camera geometry: Calibrated vs. uncalibrated %A Domke, J. %A Aloimonos, J. %B PHOTOGRAMMETRIE FERNERKUNDUNG GEOINFORMATION %V 2007 %P 25 - 25 %8 2007/// %G eng %N 1 %0 Book Section %B Introduction to Statistical Relational Learning %D 2007 %T Probabilistic Relational Models %A Getoor, Lise %A Friedman,N.
%A Koller,D. %A Pfeffer,A. %A Taskar,B. %B Introduction to Statistical Relational Learning %P 129 - 129 %8 2007/// %G eng %0 Patent %D 2007 %T Probabilistic wavelet synopses for multiple measures %A Deligiannakis,Antonios %A Garofalakis,Minos N. %A Roussopoulos, Nick %X A technique for building probabilistic wavelet synopses for multi-measure data sets is provided. In the presence of multiple measures, it is demonstrated that the problem of exact probabilistic coefficient thresholding becomes significantly more complex. An algorithmic formulation for probabilistic multi-measure wavelet thresholding based on the idea of partial-order dynamic programming (PODP) is provided. A fast, greedy approximation algorithm for probabilistic multi-measure thresholding based on the idea of marginal error gains is provided. An empirical study with both synthetic and real-life data sets validated the approach, demonstrating that the algorithms outperform naive approaches based on optimizing individual measures independently and the greedy thresholding scheme provides near-optimal and, at the same time, fast and scalable solutions to the probabilistic wavelet synopsis construction problem. %V 11/225,539 %8 2007/03/15/ %G eng %U http://www.google.com/patents?id=IHWbAAAAEBAJ %0 Conference Paper %D 2007 %T Profiling Attacker Behavior Following SSH Compromises %A Ramsbrock,D. %A Berthier,R. %A Michel Cukier %K Linux %K Linux honeypot computers %K profiling attacker behavior %K remote compromise %K rogue code %K security of data %K SSH compromises %K system configuration %X This practical experience report presents the results of an experiment aimed at building a profile of attacker behavior following a remote compromise. For this experiment, we utilized four Linux honeypot computers running SSH with easily guessable passwords.
During the course of our research, we also determined the most commonly attempted usernames and passwords, the average number of attempted logins per day, and the ratio of failed to successful attempts. To build a profile of attacker behavior, we looked for specific actions taken by the attacker and the order in which they occurred. These actions were: checking the configuration, changing the password, downloading a file, installing/running rogue code, and changing the system configuration. %P 119 - 124 %8 2007/06// %G eng %R 10.1109/DSN.2007.76 %0 Book Section %B Empirical Software Engineering %D 2007 %T Protocols in the use of empirical software engineering artifacts %A Basili, Victor R. %A Zelkowitz, Marvin V %A Sjøberg,D. I.K %A Johnson,P. %A Cowling,A. J %X If empirical software engineering is to grow as a valid scientific endeavor, the ability to acquire, use, share, and compare data collected from a variety of sources must be encouraged. This is necessary to validate the formal models being developed within computer science. However, within the empirical software engineering community this has not been easily accomplished. This paper analyses experiences from a number of projects, and defines the issues, which include the following: (1) How should data, testbeds, and artifacts be shared? (2) What limits should be placed on who can use them and how? How does one limit potential misuse? (3) What is the appropriate way to credit the organization and individual that spent the effort collecting the data, developing the testbed, and building the artifact? (4) Once shared, who owns the evolved asset? As a solution to these issues, the paper proposes a framework for an empirical software engineering artifact agreement. Such an agreement is intended to address the needs for both creator and user of such artifacts and should foster a market in making available and using such artifacts.
If this framework for sharing software engineering artifacts is commonly accepted, it should encourage artifact owners to make the artifacts accessible to others (gaining credit is more likely and misuse is less likely). It may be easier for other researchers to request artifacts since there will be a well-defined protocol for how to deal with relevant matters. %B Empirical Software Engineering %I Springer %V 12 %P 107 - 119 %8 2007/// %G eng %0 Journal Article %J Proc. DialM-POMC Workshop on Foundations of Mobile Computing %D 2007 %T Provable algorithms for joint optimization of transport, routing and MAC layers in wireless ad hoc networks %A Kumar,V. S.A %A Marathe,M. V %A Parthasarathy,S. %A Srinivasan, Aravind %B Proc. DialM-POMC Workshop on Foundations of Mobile Computing %8 2007/// %G eng %0 Conference Paper %B Proceedings of the 4th International Workshop on Semantic Evaluations %D 2007 %T PUTOP: turning predominant senses into a topic model for word sense disambiguation %A Jordan Boyd-Graber %A Blei,David %X We extend McCarthy et al.'s predominant sense method to create an unsupervised method of word sense disambiguation that uses automatically derived topics using Latent Dirichlet allocation. Using topic-specific synset similarity measures, we create predictions for each word in each document using only word frequency information. It is hoped that this procedure can improve upon the method for larger numbers of topics by providing more relevant training corpora for the individual topics. This method is evaluated on SemEval-2007 Task 1 and Task 17.
%B Proceedings of the 4th International Workshop on Semantic Evaluations %S SemEval '07 %I Association for Computational Linguistics %C Stroudsburg, PA, USA %P 277 - 281 %8 2007/// %G eng %U http://dl.acm.org/citation.cfm?id=1621474.1621534 %0 Journal Article %J Journal of Computational Physics %D 2006 %T A parallel implicit method for the direct numerical simulation of wall-bounded compressible turbulence %A Martin, M.P %A Candler,Graham V. %K direct numerical simulation %K Implicit time integration %K Wall-bounded turbulent flow %X A new second-order accurate implicit temporal numerical scheme for the direct numerical simulation of turbulent flows is presented. The formulation of the implicit method and the corresponding tunable parameters are introduced. The numerical simulation results are compared with the results given by explicit Runge–Kutta schemes, theoretical results, and published experimental and numerical data. An assessment of the accuracy and performance of the method to simulate turbulent flows is made for temporally decaying isotropic turbulence and subsonic and supersonic turbulent boundary layers. Whereas no significant advantage over typical explicit time integration methods is found for the incompressible flows, it is shown that the implicit scheme yields significant reduction in computer cost while assuring time-accurate solutions for compressible turbulence simulations. %B Journal of Computational Physics %V 215 %P 153 - 171 %8 2006/// %@ 0021-9991 %G eng %U http://www.sciencedirect.com/science/article/pii/S0021999105004833 %N 1 %R 10.1016/j.jcp.2005.10.017 %0 Journal Article %J Proceedings of the Fifth International Conference on Language Resources and Evaluation (LREC2006). Genoa, Italy %D 2006 %T Parallel syntactic annotation of multiple languages %A Rambow,O. %A Dorr, Bonnie J %A Farwell,D. %A Green,R. %A Habash,N. %A Helmreich,S. %A Hovy,E. %A Levin,L. %A Miller,K.J. %A Mitamura,T.
%A others %X This paper describes an effort to investigate the incrementally deepening development of an interlingua notation, validated by human annotation of texts in English plus six languages. We begin with deep syntactic annotation, and in this paper present a series of annotation manuals for six different languages at the deep-syntactic level of representation. Many syntactic differences between languages are removed in the proposed syntactic annotation, making them useful resources for multilingual NLP projects with semantic components. %B Proceedings of the Fifth International Conference on Language Resources and Evaluation (LREC2006). Genoa, Italy %8 2006/// %G eng %0 Journal Article %J Physical Review A %D 2006 %T Parallelism for quantum computation with qudits %A O’Leary,Dianne P. %A Brennen,Gavin K. %A Bullock,Stephen S. %X Robust quantum computation with d-level quantum systems (qudits) poses two requirements: fast, parallel quantum gates and high-fidelity two-qudit gates. We first describe how to implement parallel single-qudit operations. It is by now well known that any single-qudit unitary can be decomposed into a sequence of Givens rotations on two-dimensional subspaces of the qudit state space. Using a coupling graph to represent physically allowed couplings between pairs of qudit states, we then show that the logical depth (time) of the parallel gate sequence is equal to the height of an associated tree. The implementation of a given unitary can then optimize the tradeoff between gate time and resources used. These ideas are illustrated for qudits encoded in the ground hyperfine states of the alkali-metal atoms 87Rb and 133Cs. Second, we provide a protocol for implementing parallelized nonlocal two-qudit gates using the assistance of entangled qubit pairs. Using known protocols for qubit entanglement purification, this offers the possibility of high-fidelity two-qudit gates.
%B Physical Review A %V 74 %P 032334 - 032334 %8 2006/// %G eng %U http://link.aps.org/doi/10.1103/PhysRevA.74.032334 %N 3 %R 10.1103/PhysRevA.74.032334 %0 Conference Paper %B Computer Vision Systems, 2006 ICVS '06. IEEE International Conference on %D 2006 %T Parametric Hand Tracking for Recognition of Virtual Drawings %A Sepehri,A. %A Yacoob,Yaser %A Davis, Larry S. %X A hand tracking system for recognition of virtual spatial drawings is presented. Using a stereo camera, the 3D position of the hand in space is estimated. Then, by tracking the central region of the hand in 3D and estimating a virtual plane in space, the intended drawing of the user is recognized. Experimental results demonstrate the accuracy and effectiveness of this technique. The system can be used to communicate drawings and alphabets to a computer where a classifier can transform the drawn alphabets into interpretable characters. %B Computer Vision Systems, 2006 ICVS '06. IEEE International Conference on %P 6 - 6 %8 2006/01// %G eng %R 10.1109/ICVS.2006.50 %0 Conference Paper %B Proceedings of the joint international conference on Measurement and modeling of computer systems %D 2006 %T Partially overlapped channels not considered harmful %A Mishra,A. %A Shrivastava,V. %A Banerjee,S. %A Arbaugh, William A. %B Proceedings of the joint international conference on Measurement and modeling of computer systems %P 63 - 74 %8 2006/// %G eng %0 Conference Paper %B Proceedings of the SIGCHI conference on Human Factors in computing systems %D 2006 %T Participatory design with proxies: developing a desktop-PDA system to support people with aphasia %A Jordan Boyd-Graber %A Nikolova,Sonya S. %A Moffatt,Karyn A. %A Kin,Kenrick C. %A Lee,Joshua Y. %A Mackey,Lester W. %A Tremaine,Marilyn M. %A Klawe,Maria M.
%K aphasia %K assistive technology %K multi-modal interfaces %K participatory design %X In this paper, we describe the design and preliminary evaluation of a hybrid desktop-handheld system developed to support individuals with aphasia, a disorder which impairs the ability to speak, read, write, or understand language. The system allows its users to develop speech communication through images and sound on a desktop computer and download this speech to a mobile device that can then support communication outside the home. Using a desktop computer for input addresses some of this population's difficulties interacting with handheld devices, while the mobile device addresses stigma and portability issues. A modified participatory design approach was used in which proxies, that is, speech-language pathologists who work with aphasic individuals, assumed the role normally filled by users. This was done because of the difficulties in communicating with the target population and the high variability in aphasic disorders. In addition, the paper presents a case study of the proxy-use participatory design process that illustrates how different interview techniques resulted in different user feedback. %B Proceedings of the SIGCHI conference on Human Factors in computing systems %S CHI '06 %I ACM %C New York, NY, USA %P 151 - 160 %8 2006/// %@ 1-59593-372-7 %G eng %U http://doi.acm.org/10.1145/1124772.1124797 %R 10.1145/1124772.1124797 %0 Conference Paper %B Proceedings of the fifth international joint conference on Autonomous agents and multiagent systems %D 2006 %T Participatory, embodied, multi-agent simulation %A Blikstein,Paulo %A Rand, William %A Wilensky,Uri %K embedded agents %K multi-agent simulation %K participatory simulation %K ROBOTICS %K Sensors %X We will demonstrate the integration of a software-based multi-agent modeling platform with a participatory simulation environment and real-time control over a physical agent (robot). 
Both real and virtual participants will be able to act as agents in a simulation that will control a physical agent. The backbone of this demonstration is a widely used, freely available, mature modeling platform known as NetLogo. %B Proceedings of the fifth international joint conference on Autonomous agents and multiagent systems %S AAMAS '06 %I ACM %C New York, NY, USA %P 1457 - 1458 %8 2006/// %@ 1-59593-303-4 %G eng %U http://doi.acm.org/10.1145/1160633.1160913 %R 10.1145/1160633.1160913 %0 Journal Article %J Journal of the Brazilian Computer Society %D 2006 %T The past, present, and future of experimental software engineering %A Basili, Victor R. %X This paper gives a 40 year overview of the evolution of experimental software engineering, from the past to the future, from a personal perspective. My hypothesis is that my work followed the evolution of the field. I use my own experiences and thoughts as a barometer of how the field has changed and present some opinions about where we need to go. %B Journal of the Brazilian Computer Society %V 12 %P 7 - 12 %8 2006/12// %@ 0104-6500 %G eng %U http://www.scielo.br/scielo.php?pid=S0104-65002006000400002&script=sci_arttext %N 3 %R 10.1007/BF03194492 %0 Journal Article %J Genome Biology %D 2006 %T Patterns of sequence conservation in presynaptic neural genes %A Hadley,Dexter %A Murphy,Tara %A Valladares,Otto %A Hannenhalli, Sridhar %A Ungar,Lyle %A Kim,Junhyong %A Bućan,Maja %X The neuronal synapse is a fundamental functional unit in the central nervous system of animals. Because synaptic function is evolutionarily conserved, we reasoned that functional sequences of genes and related genomic elements known to play important roles in neurotransmitter release would also be conserved. 
%B Genome Biology %V 7 %P R105 - R105 %8 2006/11/10/ %@ 1465-6906 %G eng %U http://genomebiology.com/2006/7/11/R105 %N 11 %R 10.1186/gb-2006-7-11-r105 %0 Conference Paper %B Proceedings of the 21st International Conference on Computational Linguistics and the 44th annual meeting of the Association for Computational Linguistics %D 2006 %T PCFGs with syntactic and prosodic indicators of speech repairs %A Hale,John %A Shafran,Izhak %A Yung,Lisa %A Dorr, Bonnie J %A Harper,Mary %A Krasnyanskaya,Anna %A Lease,Matthew %A Liu,Yang %A Roark,Brian %A Snover,Matthew %A Stewart,Robin %X A grammatical method of combining two kinds of speech repair cues is presented. One cue, prosodic disjuncture, is detected by a decision tree-based ensemble classifier that uses acoustic cues to identify where normal prosody seems to be interrupted (Lickley, 1996). The other cue, syntactic parallelism, codifies the expectation that repairs continue a syntactic category that was left unfinished in the reparandum (Levelt, 1983). The two cues are combined in a Treebank PCFG whose states are split using a few simple tree transformations. Parsing performance on the Switchboard and Fisher corpora suggests that these two cues help to locate speech repairs in a synergistic way. %B Proceedings of the 21st International Conference on Computational Linguistics and the 44th annual meeting of the Association for Computational Linguistics %S ACL-44 %I Association for Computational Linguistics %C Stroudsburg, PA, USA %P 161 - 168 %8 2006/// %G eng %U http://dx.doi.org/10.3115/1220175.1220196 %R 10.3115/1220175.1220196 %0 Journal Article %J EURASIP Journal on Applied Signal Processing %D 2006 %T PDR: A Performance Evaluation Method for Foreground-Background Segmentation Algorithms %A Kim,K. %A Chalidabhongse,T.H. %A Harwood,D. %A Davis, Larry S. 
%X We introduce a performance evaluation methodology called Perturbation Detection Rate (PDR) analysis for measuring performance of foreground-background segmentation. It has some advantages over the commonly used Receiver Operating Characteristic (ROC) analysis. Specifically, it does not require foreground targets or knowledge of foreground distributions. It measures the sensitivity of a background subtraction algorithm in detecting possible low contrast targets against the background as a function of contrast, also depending on how well the model captures mixed (moving) background events. We compare four background subtraction algorithms using the methodology. The experimental results show how PDR is used to measure performance with respect to detection sensitivity in interesting low contrast regions. %B EURASIP Journal on Applied Signal Processing %8 2006/// %G eng %0 Report %D 2006 %T Physics-Based Detectors Applied to Long-Wave Infrared Hyperspectral Data %A Broadwater,Joshua %A Chellapa, Rama %K *HYPERSPECTRAL IMAGERY %K algorithms %K AUTOMATIC %K DETECTION %K FAR INFRARED RADIATION %K LOW STRENGTH %K Matched filters %K MILITARY OPERATIONS %K NIGHT %K OPTICS %K SIGNATURES %K SOILS %K SPECTROMETERS %K TARGET DETECTION %X Long-wave infrared (LWIR) hyperspectral image (HSI) data presents an interesting challenge for automatic target detection algorithms. LWIR HSI data is useful for both day and night operations, but weak signatures like disturbed soil can be problematic for standard matched-filter techniques. In this paper, we augment the standard matched-filter techniques with physics-based information particular to HSI data. Our results show that these physics-based detectors provide improved detection performance with quick processing times.
%I CENTER FOR AUTOMATION RESEARCH, UNIVERSITY OF MARYLAND COLLEGE PARK %8 2006/11// %G eng %U http://stinet.dtic.mil/oai/oai?&verb=getRecord&metadataPrefix=html&identifier=ADA481339 %0 Journal Article %J Software: Practice and Experience %D 2006 %T PIT: A macro‐implemented implementation language %A Zelkowitz, Marvin V %K compilers %K Implementation languages %K Macros %X An implementation technique called PIT, for pseudo instructional technique, is described which utilizes the macro capabilities of most macro assemblers. A low level machine architecture is described via a set of macros that include some ‘high level’ features. Since the macros manipulate computer words, and refer to actual registers, their implementation in a system is relatively efficient, but since they do not reflect any one particular hardware design, they can be implemented by almost any macro assembler. Tests are built into the macros so that a PIT program will run without change on any machine that has defined these macros. This technique should provide an alternative to using higher level languages as implementation languages if the object code produced by those compilers is deemed too slow (or too large) for the application that is being programmed. %B Software: Practice and Experience %V 2 %P 337 - 346 %8 2006/10/27/ %@ 1097-024X %G eng %U http://onlinelibrary.wiley.com/doi/10.1002/spe.4380020405/abstract %N 4 %R 10.1002/spe.4380020405 %0 Conference Paper %B Proceedings of the 3rd conference on USENIX Workshop on Real, Large Distributed Systems - Volume 3 %D 2006 %T A platform for unobtrusive measurements on PlanetLab %A Sherwood,Rob %A Spring, Neil %X TCP Sidecar is a network measurement platform for injecting probes transparently into externally generated TCP streams. By coupling measurement probes with nonmeasurement traffic, we are able to obtain measurements behind NATs and firewalls without alerting intrusion detection systems.
In this paper, we discuss Sidecar's design and our deployment experience on PlanetLab. We present preliminary results from Sidecar-based tools for RTT estimation ("sideping") and receiver-side bottleneck location ("artrat"). %B Proceedings of the 3rd conference on USENIX Workshop on Real, Large Distributed Systems - Volume 3 %S WORLDS'06 %I USENIX Association %C Berkeley, CA, USA %P 2 - 2 %8 2006/// %G eng %U http://dl.acm.org/citation.cfm?id=1254840.1254842 %0 Journal Article %J ITTC-FY2006-TR-45030-01, Information and Telecommunication Center, The University of Kansas %D 2006 %T Postmodern internetwork architecture %A Bhattacharjee, Bobby %A Calvert,K. %A Griffioen,J. %A Spring, Neil %A Sterbenz,J. P.G %X Network-layer innovation has proven surprisingly difficult, in part because internetworking protocols ignore competing economic interests and because a few protocols dominate, enabling layer violations that entrench technologies. Many shortcomings of today’s internetwork layer result from its inflexibility with respect to the policies of the stakeholders: users and service providers. The consequences of these failings are well-known: various hacks, layering violations, and overloadings are introduced to enforce policies and attempt to get the upper hand in various “tussles”. The result is a network that is increasingly brittle, hostile to innovation, vulnerable to attack, and insensitive to concerns about accountability and privacy. Our project aims to design, implement, and evaluate through daily use a minimalist internetwork layer and auxiliary functionality that anticipates tussles and allows them to be played out in policy space, as opposed to in the packet-forwarding path. We call our approach postmodern internetwork architecture, because it is a reaction against many established network layer design concepts.
The overall goal of the project is to make a larger portion of the network design space accessible without sacrificing the economy of scale offered by the unified Internet. We will use the postmodern architecture to explore basic architectural questions. These include: • What mechanisms should be supported by the network such that any foreseeable policy requirement can be explicitly addressed? • To what extent can routing and forwarding be isolated from each other while maintaining an efficient and usable network? • What forms of identity should be visible within the network, and what forms of accountability do different identities enable? • What mechanisms are needed to enable efficient access to cross-layer information and mechanisms such that lower layers can express their characteristics and upper layers can exert control downward? We plan to build and evaluate a complete end-to-end networking layer to help us understand feasible solutions to these questions. The Internet has fulfilled the potential of a complete generation of networking research by producing a global platform for innovation, commerce, and democracy. Unfortunately, the Internet also amply demonstrates the complexity and architectural ugliness that ensue when competing interests vie for benefits beyond those envisioned in the original design. This project is about redesigning the waist of the architectural hourglass to foster innovation, enhance security and accountability, and accommodate competing interests. %B ITTC-FY2006-TR-45030-01, Information and Telecommunication Center, The University of Kansas %8 2006/02// %G eng %0 Journal Article %J SIGPLAN Not. %D 2006 %T Practical dynamic software updating for C %A Neamtiu,Iulian %A Hicks, Michael W.
%A Stoyle,Gareth %A Oriol,Manuel %K dynamic software updating %K function indirection %K loop extraction %K type wrapping %X Software updates typically require stopping and restarting an application, but many systems cannot afford to halt service, or would prefer not to. Dynamic software updating (DSU) addresses this difficulty by permitting programs to be updated while they run. DSU is appealing compared to other approaches for on-line upgrades because it is quite general and requires no redundant hardware. The challenge is in making DSU practical: it should be flexible, and yet safe, efficient, and easy to use. In this paper, we present Ginseng, a DSU implementation for C that aims to meet this challenge. We compile programs specially so that they can be dynamically patched, and generate most of a dynamic patch automatically. Ginseng performs a series of analyses that when combined with some simple runtime support ensure that an update will not violate type-safety while guaranteeing that data is kept up-to-date. We have used Ginseng to construct and dynamically apply patches to three substantial open-source server programs---Very Secure FTP daemon, OpenSSH sshd daemon, and GNU Zebra. In total, we dynamically patched each program with three years' worth of releases. Though the programs changed substantially, the majority of updates were easy to generate. Performance experiments show that all patches could be applied in less than 5 ms, and that the overhead on application throughput due to updating support ranged from 0 to at most 32%. %B SIGPLAN Not. %V 41 %P 72 - 83 %8 2006/06// %@ 0362-1340 %G eng %U http://doi.acm.org/10.1145/1133255.1133991 %N 6 %R 10.1145/1133255.1133991 %0 Journal Article %J Image Processing, IEEE Transactions on %D 2006 %T Principal components null space analysis for image and video classification %A Vaswani, N.
%A Chellapa, Rama %K approximate null space;classification error probability;face recognition;image classification;object recognition;principal components null space analysis;subspace linear discriminant analysis;video classification;image classification;principal component a %K Automated;Principal Component Analysis;Signal Processing %K Computer-Assisted;Information Storage and Retrieval;Models %K Computer-Assisted;Video Recording; %K Statistical;Pattern Recognition %X We present a new classification algorithm, principal components null space analysis (PCNSA), which is designed for classification problems like object recognition where different classes have unequal and nonwhite noise covariance matrices. PCNSA first obtains a principal components subspace (PCA space) for the entire data. In this PCA space, it finds for each class "i", an Mi-dimensional subspace along which the class' intraclass variance is the smallest. We call this subspace an approximate null space (ANS) since the lowest variance is usually "much smaller" than the highest. A query is classified into class "i" if its distance from the class' mean in the class' ANS is a minimum. We derive upper bounds on classification error probability of PCNSA and use these expressions to compare classification performance of PCNSA with that of subspace linear discriminant analysis (SLDA). We propose a practical modification of PCNSA called progressive-PCNSA that also detects "new" (untrained classes). Finally, we provide an experimental comparison of PCNSA and progressive PCNSA with SLDA and PCA and also with other classification algorithms-linear SVMs, kernel PCA, kernel discriminant analysis, and kernel SLDA, for object recognition and face recognition under large pose/expression variation. We also show applications of PCNSA to two classification problems in video-an action retrieval problem and abnormal activity detection.
%B Image Processing, IEEE Transactions on %V 15 %P 1816 - 1830 %8 2006/07// %@ 1057-7149 %G eng %N 7 %R 10.1109/TIP.2006.873449 %0 Journal Article %J Information Systems %D 2006 %T The priority curve algorithm for video summarization %A Albanese, M. %A Fayzullin,M. %A Picariello, A. %A V.S. Subrahmanian %K Content based retrieval %K Video databases %K Video summarization %X In this paper, we introduce the concept of a priority curve associated with a video. We then provide an algorithm that can use the priority curve to create a summary (of a desired length) of any video. The summary thus created exhibits nice continuity properties and also avoids repetition. We have implemented the priority curve algorithm (PriCA) and compared it with other summarization algorithms in the literature with respect to both performance and the output quality. The quality of summaries was evaluated by a group of 200 students in Naples, Italy, who watched soccer videos. We show that PriCA is faster than existing algorithms and also produces better quality summaries. We also briefly describe a soccer video summarization system we have built using the PriCA architecture and various (classical) image processing algorithms. %B Information Systems %V 31 %P 679 - 695 %8 2006/11// %@ 0306-4379 %G eng %U http://www.sciencedirect.com/science/article/pii/S0306437905001250 %N 7 %R 10.1016/j.is.2005.12.003 %0 Conference Paper %B Proceedings of the seventeenth annual ACM-SIAM symposium on Discrete algorithm %D 2006 %T The prize-collecting generalized Steiner tree problem via a new approach of primal-dual schema %A Hajiaghayi, Mohammad T. %A Jain,K. %B Proceedings of the seventeenth annual ACM-SIAM symposium on Discrete algorithm %P 631 - 640 %8 2006/// %G eng %0 Journal Article %J Machine Learning %D 2006 %T PRL: A probabilistic relational language %A Getoor, Lise %A Grant,J. %X In this paper, we describe the syntax and semantics for a probabilistic relational language (PRL).
PRL is a recasting of recent work in Probabilistic Relational Models (PRMs) into a logic programming framework. We show how to represent varying degrees of complexity in the semantics including attribute uncertainty, structural uncertainty and identity uncertainty. Our approach is similar in spirit to the work in Bayesian Logic Programs (BLPs), and Logical Bayesian Networks (LBNs). However, surprisingly, there are still some important differences in the resulting formalism; for example, we introduce a general notion of aggregates based on the PRM approaches. One of our contributions is that we show how to support richer forms of structural uncertainty in a probabilistic logical language than have been previously described. Our goal in this work is to present a unifying framework that supports all of the types of relational uncertainty yet is based on logic programming formalisms. We also believe that it facilitates understanding the relationship between the frame-based approaches and alternate logic programming approaches, and allows greater transfer of ideas between them. %B Machine Learning %V 62 %P 7 - 31 %8 2006/// %G eng %N 1 %R 10.1007/s10994-006-5831-3 %0 Journal Article %J SIAM Journal on Computing %D 2006 %T A probabilistic analysis of trie-based sorting of large collections of line segments in spatial databases %A Lindenbaum,M. %A Samet, Hanan %A Hjaltason,G. R %B SIAM Journal on Computing %V 35 %P 22 - 58 %8 2006/// %G eng %N 1 %0 Journal Article %J CONCUR 2006–Concurrency Theory %D 2006 %T Probabilistic I/O automata: Theories of two equivalences %A Stark,E. %A Cleaveland, Rance %A Smolka,S. %B CONCUR 2006–Concurrency Theory %P 343 - 357 %8 2006/// %G eng %0 Journal Article %J Dagstuhl Seminar Proceedings %D 2006 %T Probabilistic, Logical and Relational Learning-Towards a Synthesis %A De Raedt,L. %A Dietterich,T. %A Getoor, Lise %A Muggleton,S. H %A Lloyd,J. W %A Sears,T. D %A Milch,B. %A Marthi,B. %A Russell,S. %A Sontag,D. 
%B Dagstuhl Seminar Proceedings %V 5051 %8 2006/// %G eng %0 Conference Paper %B Proceedings of the Third International Symposium on 3D Data Processing, Visualization, and Transmission (3DPVT'06) %D 2006 %T A probabilistic notion of correspondence and the epipolar constraint %A Domke, J. %A Aloimonos, J. %B Proceedings of the Third International Symposium on 3D Data Processing, Visualization, and Transmission (3DPVT'06) %P 41 - 48 %8 2006/// %G eng %0 Journal Article %J Information Systems %D 2006 %T Processing approximate aggregate queries in wireless sensor networks %A Deligiannakis,Antonios %A Kotidis,Yannis %A Roussopoulos, Nick %K Aggregate queries %K approximation %K sensor networks %X In-network data aggregation has been recently proposed as an effective means to reduce the number of messages exchanged in wireless sensor networks. Nodes of the network form an aggregation tree, in which parent nodes aggregate the values received from their children and propagate the result to their own parents. However, this schema provides little flexibility for the end-user to control the operation of the nodes in a data sensitive manner. For large sensor networks with severe energy constraints, the reduction (in the number of messages exchanged) obtained through the aggregation tree might not be sufficient. In this paper, we present new algorithms for obtaining approximate aggregate statistics from large sensor networks. The user specifies the maximum error that he is willing to tolerate and, in turn, our algorithms program the nodes in a way that seeks to minimize the number of messages exchanged in the network, while always guaranteeing that the produced estimate lies within the specified error from the exact answer. A key ingredient to our framework is the notion of the residual mode of operation that is used to eliminate messages from sibling nodes when their cumulative change to the computed aggregate is small. 
We introduce two new algorithms, based on potential gains, which adaptively redistribute the error thresholds to those nodes that benefit the most and try to minimize the total number of transmitted messages in the network. Our techniques significantly reduce the number of messages, often by a factor of 10 for a modest 2% relative error bound, and consistently outperform previous techniques for computing approximate aggregates, which we have adapted for sensor networks. %B Information Systems %V 31 %P 770 - 792 %8 2006/12// %@ 0306-4379 %G eng %U http://www.sciencedirect.com/science/article/pii/S0306437905000177 %N 8 %R 10.1016/j.is.2005.02.001 %0 Journal Article %J Technical Reports from UMIACS, UMIACS-TR-2005-45 %D 2006 %T Programmer's Manual for XMTC Language, XMTC Compiler and XMT Simulator %A Balkan,Aydin O. %A Vishkin, Uzi %K Technical Report %X Explicit Multi-Threading (XMT) is a computing framework developed at the University of Maryland as part of a PRAM-on-chip vision (http://www.umiacs.umd.edu/users/vishkin/XMT). Much in the same way that performance programming of standard computers relies on C language, XMT performance programming is done using an extension of C called XMTC. This manual presents the second generation of the XMTC programming paradigm. It is intended to be used by an application programmer, who is new to XMTC. In the first part of this technical report (UMIACS-TR 2005-45 Part 1 of 2), we define and describe key concepts, list the limitations and restrictions, and give examples. The second part (UMIACS-TR 2005-45 Part 2 of 2) is a brief tutorial, and it demonstrates the basic programming concepts of XMTC language with examples and exercises.
%B Technical Reports from UMIACS, UMIACS-TR-2005-45 %8 2006/06// %G eng %U http://drum.lib.umd.edu/handle/1903/3673 %0 Conference Paper %B 2006 IEEE Computer Society Conference on Computer Vision and Pattern Recognition %D 2006 %T A Projective Invariant for Textures %A Yong Xu %A Hui Ji %A Fermüller, Cornelia %K Computer science %K Computer vision %K Educational institutions %K Fractals %K Geometry %K Image texture %K Level set %K lighting %K Robustness %K Surface texture %X Image texture analysis has received a lot of attention in the past years. Researchers have developed many texture signatures based on texture measurements, for the purpose of uniquely characterizing the texture. Existing texture signatures, in general, are not invariant to 3D transforms such as view-point changes and non-rigid deformations of the texture surface, which is a serious limitation for many applications. In this paper, we introduce a new texture signature, called the multifractal spectrum (MFS). It provides an efficient framework combining global spatial invariance and local robust measurements. The MFS is invariant under the bi-Lipschitz map, which includes view-point changes and non-rigid deformations of the texture surface, as well as local affine illumination changes. Experiments demonstrate that the MFS captures the essential structure of textures with quite low dimension. %B 2006 IEEE Computer Society Conference on Computer Vision and Pattern Recognition %I IEEE %V 2 %P 1932 - 1939 %8 2006/// %@ 0-7695-2597-0 %G eng %R 10.1109/CVPR.2006.38 %0 Journal Article %J Lecture notes in computer science %D 2006 %T ProMo-A Scalable and Efficient Framework for Online Data Delivery %A Roitman,H. %A Gal,A. %A Raschid, Louiqa %B Lecture notes in computer science %V 4032 %P 359 - 359 %8 2006/// %G eng %0 Journal Article %J Journal of Parallel and Distributed Computing %D 2006 %T Provable algorithms for parallel generalized sweep scheduling %A Anil Kumar,V. S %A Marathe,M. 
V %A Parthasarathy,S. %A Srinivasan, Aravind %A Zust,S. %B Journal of Parallel and Distributed Computing %V 66 %P 807 - 821 %8 2006/// %G eng %N 6 %0 Journal Article %J Computational Geometry %D 2006 %T Proximity problems on line segments spanned by points %A Daescu,Ovidiu %A Luo,Jun %A Mount, Dave %K Closest %K Farthest %K k closest lines %K Minimum (maximum) area triangle %K Proximity problem %X Finding the closest or farthest line segment (line) from a point are fundamental proximity problems. Given a set S of n points in the plane and another point q, we present optimal O ( n log n ) time, O ( n ) space algorithms for finding the closest and farthest line segments (lines) from q among those spanned by the points in S. We further show how to apply our techniques to find the minimum (maximum) area triangle with a vertex at q and the other two vertices in S ∖ { q } in optimal O ( n log n ) time and O ( n ) space. Finally, we give an O ( n log n ) time, O ( n ) space algorithm to find the kth closest line from q and show how to find the k closest lines from q in O ( n log n + k ) time and O ( n + k ) space. %B Computational Geometry %V 33 %P 115 - 129 %8 2006/02// %@ 0925-7721 %G eng %U http://www.sciencedirect.com/science/article/pii/S0925772105000805 %N 3 %R 10.1016/j.comgeo.2005.08.007 %0 Journal Article %J ACM Trans. Inf. Syst. Secur. %D 2005 %T A pairwise key predistribution scheme for wireless sensor networks %A Du,Wenliang %A Deng,Jing %A Han,Yunghsiang S. %A Varshney,Pramod K. %A Katz, Jonathan %A Khalili,Aram %K key predistribution %K Security %K Wireless sensor networks %X To achieve security in wireless sensor networks, it is important to be able to encrypt and authenticate messages sent between sensor nodes. Before doing so, keys for performing encryption and authentication must be agreed upon by the communicating parties. Due to resource constraints, however, achieving key agreement in wireless sensor networks is nontrivial. 
Many key agreement schemes used in general networks, such as Diffie-Hellman and other public-key based schemes, are not suitable for wireless sensor networks due to the limited computational abilities of the sensor nodes. Predistribution of secret keys for all pairs of nodes is not viable due to the large amount of memory this requires when the network size is large. In this paper, we provide a framework in which to study the security of key predistribution schemes, propose a new key predistribution scheme which substantially improves the resilience of the network compared to previous schemes, and give an in-depth analysis of our scheme in terms of network resilience and associated overhead. Our scheme exhibits a nice threshold property: when the number of compromised nodes is less than the threshold, the probability that communications between any additional nodes are compromised is close to zero. This desirable property lowers the initial payoff of smaller-scale network breaches to an adversary, and makes it necessary for the adversary to attack a large fraction of the network before it can achieve any significant gain. %B ACM Trans. Inf. Syst. Secur. %V 8 %P 228 - 258 %8 2005/05// %@ 1094-9224 %G eng %U http://doi.acm.org/10.1145/1065545.1065548 %N 2 %R 10.1145/1065545.1065548 %0 Conference Paper %B Proceedings of the 2005 ACM/IEEE conference on Supercomputing %D 2005 %T Parallel Parameter Tuning for Applications with Performance Variability %A Tabatabaee, Vahid %A Tiwari, Ananta %A Hollingsworth, Jeffrey K %K algorithms %K compilers %K design %X In this paper, we present parallel on-line optimization algorithms for parameter tuning of parallel programs. We employ direct search algorithms that update parameters based on real-time performance measurements. We discuss the impact of performance variability on the accuracy and efficiency of the optimization algorithms and propose modified versions of the direct search algorithms to cope with it.
The modified version uses multiple samples instead of a single sample to estimate the performance more accurately. We present preliminary results showing that the performance variability of applications on clusters is heavy-tailed. Finally, we study and demonstrate the performance of the proposed algorithms for a real scientific application. %B Proceedings of the 2005 ACM/IEEE conference on Supercomputing %S SC '05 %I IEEE Computer Society %P 57– - 57– %8 2005/// %@ 1-59593-061-2 %G eng %U http://dx.doi.org/10.1109/SC.2005.52 %R http://dx.doi.org/10.1109/SC.2005.52 %0 Journal Article %J Yugoslav Journal of Operations Research %D 2005 %T A parametric visualization software for the assignment problem %A Charalampos Papamanthou %A Paparrizos, Konstantinos %A Samaras, Nikolaos %B Yugoslav Journal of Operations Research %V 15 %P 147 - 158 %8 2005/// %@ 0354-0243 %G eng %U http://www.doiserbia.nb.rs/Article.aspx?id=0354-02430501147P&AspxAutoDetectCookieSupport=1 %N 1 %0 Journal Article %J International Journal of Geographical Information Science %D 2005 %T Path dependence and the validation of agent-based spatial models of land use %A Corresponding,D.G.B. %A Page,S %A Riolo,R %A Zellner,M %A Rand, William %B International Journal of Geographical Information Science %V 19 %P 153 - 174 %8 2005/// %G eng %N 2 %0 Journal Article %J International Journal of Geographical Information Science %D 2005 %T Path dependence and the validation of agent-based spatial models of land use %A Brown,Daniel G. %A Page,Scott %A Riolo,Rick %A Zellner,Moira %A Rand, William %X In this paper, we identify two distinct notions of accuracy of land-use models and highlight a tension between them. A model can have predictive accuracy: its predicted land-use pattern can be highly correlated with the actual land-use pattern. A model can also have process accuracy: the process by which locations or land-use patterns are determined can be consistent with real world processes.
To balance these two potentially conflicting motivations, we introduce the concept of the invariant region, i.e., the area where land-use type is almost certain, and thus path independent; and the variant region, i.e., the area where land use depends on a particular series of events, and is thus path dependent. We demonstrate our methods using an agent-based land-use model and using multi-temporal land-use data collected for Washtenaw County, Michigan, USA. The results indicate that, using the methods we describe, researchers can improve their ability to communicate how well their model performs, the situations or instances in which it does not perform well, and the cases in which it is relatively unlikely to predict well because of either path dependence or stochastic uncertainty. %B International Journal of Geographical Information Science %V 19 %P 153 - 174 %8 2005/// %@ 1365-8816 %G eng %U http://www.tandfonline.com/doi/abs/10.1080/13658810410001713399 %N 2 %R 10.1080/13658810410001713399 %0 Journal Article %J Oceans and health: pathogens in the marine environment %D 2005 %T Pathogenic Vibrio species in the marine and estuarine environment %A Pruzzo,C. %A Huq,A. %A Rita R Colwell %A Donelli,G. %X The genus Vibrio includes more than 30 species, at least 12 of which are pathogenic to humans and/or have been associated with foodborne diseases (Chakraborty et al., 1997). Among these species, Vibrio cholerae, serogroups O1 and O139, are the most important, since they are associated with epidemic and pandemic diarrhea outbreaks in many parts of the world (Centers for Disease Control and Prevention, 1995; Kaper et al., 1995). However, other species of vibrios capable of causing diarrheal disease in humans have received greater attention in the last decade.
These include Vibrio parahaemolyticus, a leading cause of foodborne disease outbreaks in Japan and Korea (Lee et al., 2001), Vibrio vulnificus, Vibrio alginolyticus, Vibrio damsela, Vibrio fluvialis, Vibrio furnissii, Vibrio hollisae, Vibrio metschnikovii, and Vibrio mimicus (Altekruse et al., 2000; Høi et al., 1997). In the USA, Vibrio species have been estimated to be the cause of about 8000 illnesses annually (Mead et al., 1999). %B Oceans and health: pathogens in the marine environment %P 217 - 252 %8 2005/// %G eng %R 10.1007/0-387-23709-7_9 %0 Book Section %B Pattern Recognition and Machine Intelligence %D 2005 %T Pattern Recognition in Video %A Chellapa, Rama %A Veeraraghavan,Ashok %A Aggarwal,Gaurav %E Pal,Sankar %E Bandyopadhyay,Sanghamitra %E Biswas,Sambhunath %X Images constitute data that live in a very high dimensional space, typically of the order of hundred thousand dimensions. Drawing inferences from correlated data of such high dimensions often becomes intractable. Therefore traditionally several of these problems like face recognition, object recognition, scene understanding etc. have been approached using techniques in pattern recognition. Such methods in conjunction with methods for dimensionality reduction have been highly popular and successful in tackling several image processing tasks. Of late, the advent of cheap, high quality video cameras has generated new interest in extending still image-based recognition methodologies to video sequences. The added temporal dimension in these videos makes problems like face and gait-based human recognition, event detection, activity recognition addressable. Our research has focussed on solving several of these problems through a pattern recognition approach.
Of course, in video streams patterns refer to both patterns in the spatial structure of image intensities around interest points and temporal patterns that arise either due to camera motion or object motion. In this paper, we discuss the applications of pattern recognition in video to problems like face and gait-based human recognition, behavior classification, activity recognition and activity based person identification. %B Pattern Recognition and Machine Intelligence %S Lecture Notes in Computer Science %I Springer Berlin / Heidelberg %V 3776 %P 11 - 20 %8 2005/// %@ 978-3-540-30506-4 %G eng %U http://dx.doi.org/10.1007/11590316_2 %0 Conference Paper %B Image Processing, 2005. ICIP 2005. IEEE International Conference on %D 2005 %T Pedestrian classification from moving platforms using cyclic motion pattern %A Yang Ran %A Qinfen Zheng %A Weiss, I. %A Davis, Larry S. %A Abd-Almageed, Wael %A Liang Zhao %K analysis; %K angle; %K body %K classification; %K compact %K cyclic %K DETECTION %K detection; %K digital %K Feedback %K Gait %K human %K image %K information; %K locked %K loop %K loop; %K loops; %K module; %K MOTION %K object %K oscillations; %K pattern; %K pedestrian %K phase %K Pixel %K principle %K representation; %K sequence; %K sequences; %K SHAPE %K system; %X This paper describes an efficient pedestrian detection system for videos acquired from moving platforms. Given a detected and tracked object as a sequence of images within a bounding box, we describe the periodic signature of its motion pattern using a twin-pendulum model. Then a principle gait angle is extracted in every frame providing gait phase information. By estimating the periodicity from the phase data using a digital phase locked loop (dPLL), we quantify the cyclic pattern of the object, which helps us to continuously classify it as a pedestrian.
Past approaches have used shape detectors applied to a single image or classifiers based on human body pixel oscillations, but ours is the first to integrate a global cyclic motion model and periodicity analysis. Novel contributions of this paper include: i) development of a compact shape representation of cyclic motion as a signature for a pedestrian, ii) estimation of gait period via a feedback loop module, and iii) implementation of a fast online pedestrian classification system which operates on videos acquired from moving platforms. %B Image Processing, 2005. ICIP 2005. IEEE International Conference on %V 2 %P II - 854-7 - II - 854-7 %8 2005/09// %G eng %R 10.1109/ICIP.2005.1530190 %0 Conference Paper %B IEEE International Conference on Image Processing, 2005. ICIP 2005 %D 2005 %T Pedestrian classification from moving platforms using cyclic motion pattern %A Yang Ran %A Qinfen Zheng %A Weiss, I. %A Davis, Larry S. %A Abd-Almageed, Wael %A Liang Zhao %K compact shape representation %K cyclic motion pattern %K data mining %K Detectors %K digital phase locked loop %K digital phase locked loops %K feedback loop module %K gait analysis %K gait phase information %K human body pixel oscillations %K HUMANS %K image classification %K Image motion analysis %K image representation %K image sequence %K Image sequences %K Motion detection %K Object detection %K pedestrian classification %K pedestrian detection system %K Phase estimation %K Phase locked loops %K principle gait angle %K SHAPE %K tracking %K Videos %X This paper describes an efficient pedestrian detection system for videos acquired from moving platforms. Given a detected and tracked object as a sequence of images within a bounding box, we describe the periodic signature of its motion pattern using a twin-pendulum model. Then a principle gait angle is extracted in every frame providing gait phase information. 
By estimating the periodicity from the phase data using a digital phase locked loop (dPLL), we quantify the cyclic pattern of the object, which helps us to continuously classify it as a pedestrian. Past approaches have used shape detectors applied to a single image or classifiers based on human body pixel oscillations, but ours is the first to integrate a global cyclic motion model and periodicity analysis. Novel contributions of this paper include: i) development of a compact shape representation of cyclic motion as a signature for a pedestrian, ii) estimation of gait period via a feedback loop module, and iii) implementation of a fast online pedestrian classification system which operates on videos acquired from moving platforms. %B IEEE International Conference on Image Processing, 2005. ICIP 2005 %I IEEE %V 2 %P II- 854-7 - II- 854-7 %8 2005/09// %@ 0-7803-9134-9 %G eng %R 10.1109/ICIP.2005.1530190 %0 Journal Article %J Digital Watermarking %D 2005 %T Performance study on multimedia fingerprinting employing traceability codes %A He,S. %A Wu,M. %X Digital fingerprinting is a tool to protect multimedia content from illegal redistribution by uniquely marking copies of the content distributed to each user. Collusion attack is a powerful attack whereby several differently-fingerprinted copies of the same content are combined together to attenuate or even remove the fingerprint. Coded fingerprinting is one major category of fingerprinting techniques against collusion. Many fingerprinting codes are proposed with tracing capability and collusion resistance, such as Traceability (TA) codes and Identifiable Parent Property (IPP) codes. Most of these works treat the important embedding issue in terms of a set of simplified and abstract assumptions, and they do not examine the end-to-end performance of the coded multimedia fingerprinting. 
In this paper we jointly consider the coding and embedding issues and examine the collusion resistance of coded fingerprinting systems with various code parameters. Our results show that TA codes generally offer better collusion resistance than IPP codes, and a TA code with a larger alphabet size and a longer code length is preferred. %B Digital Watermarking %P 84 - 96 %8 2005/// %G eng %R 10.1007/11551492_7 %0 Journal Article %J Institute for Systems Research Technical Reports %D 2005 %T A Photo History of SIGCHI: Evolution of Design from Personal to Public (2002) %A Shneiderman, Ben %K Technical Report %X For 20 years I have been photographing personalities and events in the emerging discipline of human-computer interaction. Until now, only a few of these photos were published in newsletters or were shown to visitors who sought them out. Now this photo history is going from a personal record to a public archive. This archive should be interesting for professional members of this community who want to reminisce, as well as for historians and journalists who want to understand what happened. Students and Web surfers may also want to look at the people who created better interfaces and more satisfying user experiences. %B Institute for Systems Research Technical Reports %8 2005/// %G eng %U http://drum.lib.umd.edu/handle/1903/6529 %0 Conference Paper %B CHI '05 extended abstracts on Human factors in computing systems %D 2005 %T A picture is worth a thousand keywords: image-based object search on a mobile platform %A Tom Yeh %A Grauman,Kristen %A Tollmar,Konrad %A Darrell,Trevor %K content-based image retrieval %K interactive segmentation %K mobile interface %K Object recognition %X Finding information based on an object's visual appearance is useful when specific keywords for the object are not known. We have developed a mobile image-based search system that takes images of objects as queries and finds relevant web pages by matching them to similar images on the web.
Image-based search works well when matching full scenes, such as images of buildings or landmarks, and for matching objects when the boundary of the object in the image is available. We demonstrate the effectiveness of a simple interactive paradigm for obtaining a segmented object boundary, and show how a shape-based image matching algorithm can use the object outline to find similar images on the web. %B CHI '05 extended abstracts on Human factors in computing systems %S CHI EA '05 %I ACM %C New York, NY, USA %P 2025 - 2028 %8 2005/// %@ 1-59593-002-7 %G eng %U http://doi.acm.org/10.1145/1056808.1057083 %R 10.1145/1056808.1057083 %0 Journal Article %J ACM SIGOPS Operating Systems Review %D 2005 %T Pioneer: verifying code integrity and enforcing untampered code execution on legacy systems %A Seshadri, Arvind %A Luk, Mark %A Elaine Shi %A Perrig, Adrian %A van Doorn, Leendert %A Khosla, Pradeep %K dynamic root of trust %K rootkit detection %K self-check-summing code %K software-based code attestation %K verifiable code execution %X We propose a primitive, called Pioneer, as a first step towards verifiable code execution on untrusted legacy hosts. Pioneer does not require any hardware support such as secure co-processors or CPU-architecture extensions. We implement Pioneer on an Intel Pentium IV Xeon processor. Pioneer can be used as a basic building block to build security systems. We demonstrate this by building a kernel rootkit detector. %B ACM SIGOPS Operating Systems Review %V 39 %P 1 - 16 %8 2005 %@ 0163-5980 %G eng %U http://doi.acm.org/10.1145/1095809.1095812 %N 5 %0 Conference Paper %B IEEE Workshop on Applications of Signal Processing to Audio and Acoustics, 2005 %D 2005 %T Plane-wave decomposition analysis for spherical microphone arrays %A Duraiswami, Ramani %A Zhiyun Li %A Zotkin,Dmitry N %A Grassi,E. %A Gumerov, Nail A. 
%K Acoustic propagation %K Acoustic scattering %K acoustic signal processing %K acoustic waves %K array signal processing %K band-limit criteria %K beamforming %K Educational institutions %K Fourier transforms %K Frequency %K Laboratories %K microphone arrays %K Nails %K Nyquist criterion %K Nyquist-like criterion %K Partial differential equations %K plane-wave decomposition analysis %K sound field analysis %K spherical microphone arrays %K spherical wave-functions %X Spherical microphone arrays have attracted attention for analyzing the sound field in a region and beamforming. The analysis of the recorded sound has been performed in terms of spherical wave-functions, and recently the use of plane-wave expansions has been suggested. We show that the plane-wave basis is intimately related to the spherical wave-functions. Reproduction in terms of both representations satisfies certain band-limit criteria. We provide an error bound that shows that to reproduce the spatial characteristics of a sound of a certain frequency we need to be able to accurately represent sounds of up to a particular order, which establishes a Nyquist-like criterion. The order of the sound field in turn is related to the number of microphones in the array necessary to achieve accurate quadrature on the sphere. These results are illustrated with simulations. %B IEEE Workshop on Applications of Signal Processing to Audio and Acoustics, 2005 %I IEEE %P 150 - 153 %8 2005/10// %@ 0-7803-9154-3 %G eng %R 10.1109/ASPAA.2005.1540191 %0 Conference Paper %B Rapid System Prototyping, 2005. (RSP 2005). The 16th IEEE International Workshop on %D 2005 %T Porting DSP applications across design tools using the dataflow interchange format %A Hsu,C.-J. %A Bhattacharyya, Shuvra S. 
%K actor interchange format %K coarse-grain dataflow graph %K data flow analysis %K data flow graphs %K dataflow interchange format %K dataflow model %K dataflow semantic %K DSP application %K DSP design tool %K DSP library component %K electronic data interchange %K formal specification %K rapid prototyping tool %K Signal processing %K software prototyping %K Specification languages %K vendor-independent language %X Modeling DSP applications through coarse-grain dataflow graphs is popular in the DSP design community, and a growing set of rapid prototyping tools support such dataflow semantics. Since different tools may be suitable for different phases or generations of a design, it is often desirable to migrate a dataflow-based application model from one prototyping tool to another. Two critical problems in transferring dataflow-based designs across different prototyping tools are the lack of a vendor-independent language for DSP-oriented dataflow graphs, and the lack of an efficient porting methodology. In our previous work, the dataflow interchange format (DIF) (C. Hsu et al., 2004) has been developed as a standard language to specify mixed-grain dataflow models for DSP systems. This paper presents the augmentation of the DIF infrastructure with a systematic porting approach that integrates DIF tightly with the specific exporting and importing mechanisms that interface DIF to specific DSP design tools. In conjunction with this porting mechanism, this paper also introduces a novel language, called the actor interchange format (AIF), for transferring relevant information pertaining to DSP library components across different tools. Through a case study of a synthetic aperture radar application, we demonstrate the high degree of automation offered by our DIF-based porting approach. %B Rapid System Prototyping, 2005. (RSP 2005). 
The 16th IEEE International Workshop on %P 40 - 46 %8 2005/06// %G eng %R 10.1109/RSP.2005.39 %0 Book Section %B Analysis and Modelling of Faces and Gestures %D 2005 %T Pose-Encoded Spherical Harmonics for Robust Face Recognition Using a Single Image %A Yue,Zhanfeng %A Zhao,Wenyi %A Chellapa, Rama %E Zhao,Wenyi %E Gong,Shaogang %E Tang,Xiaoou %X Face recognition under varying pose is a challenging problem, especially when illumination variations are also present. Under the Lambertian model, spherical harmonics representation has proved to be effective in modelling illumination variations for a given pose. In this paper, we extend the spherical harmonics representation to encode pose information. More specifically, we show that 2D harmonic basis images at different poses are related by closed-form linear combinations. This enables an analytic method for generating new basis images at a different pose which are typically required to handle illumination variations at that particular pose. Furthermore, the orthonormality of the linear combinations is utilized to propose an efficient method for robust face recognition where only one set of front-view basis images per subject is stored. In the method, we directly project a rotated testing image onto the space of front-view basis images after establishing the image correspondence. Very good recognition results have been demonstrated using this method. %B Analysis and Modelling of Faces and Gestures %S Lecture Notes in Computer Science %I Springer Berlin / Heidelberg %V 3723 %P 229 - 243 %8 2005/// %@ 978-3-540-29229-6 %G eng %U http://dx.doi.org/10.1007/11564386_18 %0 Journal Article %J IEEE Intl. Conf. on Acoustics, Speech, and Signal Processing (ICASSP) %D 2005 %T Pose-normalized view synthesis from silhouettes %A Yue, Z. %A Chellapa, Rama %X In this paper, we introduce an active view synthesis approach from silhouettes.
With the virtual camera moving on a properly selected circular trajectory around an object of interest, we obtain a collection of virtual views of the object, which is equivalent to the case that the object is on a rotating turntable and captured by a static camera whose optical axis is parallel to the turntable. We show how to derive the virtual camera’s extrinsic parameters at each position on the trajectory. Using the turning function distance as the silhouette similarity measurement, this approach can be used to generate the desired pose-normalized images for recognition applications. %B IEEE Intl. Conf. on Acoustics, Speech, and Signal Processing (ICASSP) %8 2005/// %G eng %0 Journal Article %J Journal of Scientific Computing %D 2005 %T Preconditioning strategies for models of incompressible flow %A Elman, Howard %B Journal of Scientific Computing %V 25 %P 347 - 366 %8 2005/// %G eng %N 1 %0 Conference Paper %B 43rd AIAA Aerospace Sciences Meeting and Exhibit %D 2005 %T Preliminary Study of the SGS Time Scales for Compressible Boundary Layers using DNS Data %A Martin, M.P %X We use direct numerical simulation data of turbulent boundary layers and a priori testing to estimate the time scales that are associated with the subgrid-scale terms in the conservative form of the governing equations. The exact and modeled subgrid-scale terms are gathered from the filtered direct numerical simulation fields for several time steps. The model coefficients are evaluated using the Lagrangian averaging technique. We analyze the time scales directly by performing temporal autocorrelations along fluid particle paths. To assess the subgrid-scale models, we compare the time scales and the quantities given by the exact and modeled representations.
It is found that in general, mixed models give a larger integral time scale than that of the exact terms and that the discrepancy is not related to the prescribed memory length associated with the evaluation of the model coefficients. %B 43rd AIAA Aerospace Sciences Meeting and Exhibit %C Reno, NV %8 2005/// %G eng %0 Conference Paper %B 35th AIAA Fluid Dynamics Conference %D 2005 %T Preliminary Study of the Turbulence Structure in Supersonic Boundary Layers using DNS Data %A Taylor,E. M %A Martin, M.P %A Smits,A. J. %X Direct numerical simulation data are used to visualize coherent structures in turbulent boundary layers at Mach numbers from 0.3 to 7. Different criteria to identify the three-dimensional turbulence structure are selected. We find that using the discriminant of the velocity gradient tensor, the swirling strength and the λ2 criteria give nearly identical results, with λ2 identifying more structures very close to the wall. %B 35th AIAA Fluid Dynamics Conference %C Toronto, Canada %8 2005/// %G eng %0 Book Section %B On the Move to Meaningful Internet Systems 2005: CoopIS, DOA, and ODBASE %D 2005 %T Probabilistic Ontologies and Relational Databases %A Udrea,Octavian %A Yu,Deng %A Hung,Edward %A Subrahmanian,V. %E Meersman,Robert %E Tari,Zahir %K Computer Science %X The relational algebra and calculus do not take the semantics of terms into account when answering queries. As a consequence, not all tuples that should be returned in response to a query are always returned, leading to low recall. In this paper, we propose the novel notion of a constrained probabilistic ontology (CPO). We developed the concept of a CPO-enhanced relation in which each attribute of a relation has an associated CPO. These CPOs describe relationships between terms occurring in the domain of that attribute. We show that the relational algebra can be extended to handle CPO-enhanced relations.
This allows queries to yield sets of tuples, each of which has a probability of being correct. %B On the Move to Meaningful Internet Systems 2005: CoopIS, DOA, and ODBASE %S Lecture Notes in Computer Science %I Springer Berlin / Heidelberg %V 3760 %P 1 - 17 %8 2005/// %@ 978-3-540-29736-9 %G eng %U http://dx.doi.org/10.1007/11575771_1 %0 Journal Article %J Theoretical Computer Science %D 2005 %T Probabilistic temporal logics via the modal mu-calculus %A Cleaveland, Rance %A Iyer,S. Purushothaman %A Narasimha,Murali %K Model-checking %K Probabilistic bisimulation %K Probabilistic temporal logic %K Probabilistic transition systems %K reactive systems %X This paper presents a mu-calculus-based modal logic for describing properties of reactive probabilistic labeled transition systems (RPLTSs) and develops a model-checking algorithm for determining whether or not states in finite-state RPLTSs satisfy formulas in the logic. The logic is based on the distinction between (probabilistic) “systems” and (nonprobabilistic) “observations”: using the modal mu-calculus, one may specify sets of observations, and the semantics of our logic then enable statements to be made about the measures of such sets at various system states. The logic may be used to encode a variety of probabilistic modal and temporal logics; in addition, the model-checking problem for it may be reduced to the calculation of solutions to systems of non-linear equations. Finally, the logic induces an equivalence on RPLTSs that coincides with accepted notions of probabilistic bisimulation in the literature.
%B Theoretical Computer Science %V 342 %P 316 - 350 %8 2005/09/07/ %@ 0304-3975 %G eng %U http://www.sciencedirect.com/science/article/pii/S0304397505001829 %N 2-3 %R 10.1016/j.tcs.2005.03.048 %0 Conference Paper %B Proceedings of the 2005 conference on Genetic and evolutionary computation %D 2005 %T The problem with a self-adaptative mutation rate in some environments: a case study using the shaky ladder hyperplane-defined functions %A Rand, William %A Riolo,Rick %K dynamic environments %K Genetic algorithms %K hyperplane-defined functions %K self-adaptation %X Dynamic environments have periods of quiescence and periods of change. In periods of quiescence a Genetic Algorithm (GA) should (optimally) exploit good individuals while in periods of change the GA should (optimally) explore new solutions. Self-adaptation is a mechanism which allows individuals in the GA to choose their own mutation rate, and thus allows the GA to control when it explores new solutions or exploits old ones. We examine the use of this mechanism on a recently devised dynamic test suite, the Shaky Ladder Hyperplane-Defined Functions (sl-hdf's). This test suite can generate random problems with similar levels of difficulty and provides a platform allowing systematic controlled observations of the GA in dynamic environments. We show that in a variety of circumstances self-adaptation fails to allow the GA to perform better on this test suite than fixed mutation, even when the environment is static. We also show that mutation is beneficial throughout the run of a GA, and that seeding a population with known good genetic material is not always beneficial to the results. We provide explanations for these observations, with particular emphasis on comparing our results to other results [2] which have shown the GA to work in static environments. We conclude by giving suggestions as to how to change the simple GA to solve these problems. 
%B Proceedings of the 2005 conference on Genetic and evolutionary computation %S GECCO '05 %I ACM %C New York, NY, USA %P 1493 - 1500 %8 2005/// %@ 1-59593-010-8 %G eng %U http://doi.acm.org/10.1145/1068009.1068245 %R 10.1145/1068009.1068245 %0 Book Section %B Testing Commercial-Off-The-Shelf Components And Systems %D 2005 %T A process and role-based taxonomy of techniques to make testable COTS components %A Memon, Atif M. %B Testing Commercial-Off-The-Shelf Components And Systems %I Springer %P 109 - 140 %8 2005/// %@ 3540218718, 9783540218715 %G eng %0 Journal Article %J IEEE Transactions on Speech and Audio Processing %D 2005 %T Processing of reverberant speech for time-delay estimation %A Yegnanarayana,B. %A Prasanna,S. R.M %A Duraiswami, Ramani %A Zotkin,Dmitry N %K Acoustic noise %K acoustic signal processing %K array signal processing %K data mining %K Degradation %K delay estimation %K Feature extraction %K Hilbert envelope %K localization algorithm %K microphone arrays %K microphone location %K Microphones %K Phase estimation %K reverberation %K short-time spectral information %K Signal processing %K source features %K source information excitation %K speech enhancement %K Speech processing %K speech production mechanism %K speech signal %K time-delay %K time-delay estimation %X In this paper, we present a method of extracting the time-delay between speech signals collected at two microphone locations. Time-delay estimation from microphone outputs is the first step for many sound localization algorithms, and also for enhancement of speech. For time-delay estimation, speech signals are normally processed using short-time spectral information (either magnitude or phase or both). The spectral features are affected by degradations in speech caused by noise and reverberation.
Features corresponding to the excitation source of the speech production mechanism are robust to such degradations. We show that these source features can be extracted reliably from the speech signal. The time-delay estimate can be obtained using the features extracted even from short segments (50-100 ms) of speech from a pair of microphones. The proposed method for time-delay estimation is found to perform better than the generalized cross-correlation (GCC) approach. A method for enhancement of speech is also proposed using the knowledge of the time-delay and the information of the excitation source. %B IEEE Transactions on Speech and Audio Processing %V 13 %P 1110 - 1118 %8 2005/11// %@ 1063-6676 %G eng %N 6 %R 10.1109/TSA.2005.853005 %0 Journal Article %J Molecular Microbiology %D 2005 %T Promoter architecture and response to a positive regulator of archaeal transcription %A Ouhammouch,Mohamed %A Langham,Geoffrey E %A Hausner,Winfried %A Simpson,Anjana J. %A El‐Sayed, Najib M. %A Geiduschek,E. Peter %X The archaeal transcription apparatus is chimeric: its core components (RNA polymerase and basal factors) closely resemble those of eukaryotic RNA polymerase II, but the putative archaeal transcriptional regulators are overwhelmingly of bacterial type. Particular interest attaches to how these bacterial-type effectors, especially activators, regulate a eukaryote-like transcription system. The hyperthermophilic archaeon Methanocaldococcus jannaschii encodes a potent transcriptional activator, Ptr2, related to the Lrp/AsnC family of bacterial regulators. Ptr2 activates rubredoxin 2 (rb2) transcription through a bipartite upstream activating site (UAS), and conveys its stimulatory effects on its cognate transcription machinery through direct recruitment of the TATA binding protein (TBP). 
A functional dissection of the highly constrained architecture of the rb2 promoter shows that a ‘one-site’ minimal UAS suffices for activation by Ptr2, and specifies the required placement of this site. The presence of such a simplified UAS upstream of the natural rubrerythrin (rbr) promoter also suffices for positive regulation by Ptr2 in vitro, and TBP recruitment remains the primary means of transcriptional activation at this promoter. %B Molecular Microbiology %V 56 %P 625 - 637 %8 2005/05/01/ %@ 1365-2958 %G eng %U http://onlinelibrary.wiley.com/doi/10.1111/j.1365-2958.2005.04563.x/abstract %N 3 %R 10.1111/j.1365-2958.2005.04563.x %0 Report %D 2005 %T Promoting Universal Usability with Multi-Layer Interface Design (2003) %A Shneiderman, Ben %K Technical Report %X Increased interest in universal usability is causing some researchers to study advanced strategies for satisfying first-time as well as intermittent and expert users. This paper promotes the idea of multi-layer interface designs that enable first-time and novice users to begin with a limited set of features at layer 1. They can remain at layer 1, then move up to higher layers when needed or when they have time to learn further features. The arguments for and against multi-layer interfaces are presented with two example systems: a word processor with 8 layers and an interactive map with 3 layers. New research methods and directions are proposed. %B Institute for Systems Research Technical Reports %8 2005/// %G eng %U http://drum.lib.umd.edu/handle/1903/6510 %0 Thesis %D 2005 %T PROTOTYPING THE SIMULATION OF A GATE LEVEL LOGIC APPLICATION PROGRAM INTERFACE (API) ON AN EXPLICIT-MULTI-THREADED (XMT) COMPUTER %A Gu,Pei %K Computer Science (0984) %X Explicit-multi-threading (XMT) is a parallel programming approach for exploiting on-chip parallelism. Its fine-grained SPMD programming model is suitable for many computing intensive applications. 
In this paper, we present a parallel gate level logic simulation algorithm and study its implementation on an XMT processor. The test results show that hundreds-fold speedup can be achieved. %8 2005/05/31/ %G eng %U http://drum.lib.umd.edu/handle/1903/2626 %0 Journal Article %J Canadian Journal of Microbiology %D 2004 %T Pandemic strains of O3:K6 Vibrio parahaemolyticus in the aquatic environment of Bangladesh %A Islam,M. S. %A Tasmin,Rizwana %A Khan,Sirajul Islam %A Bakht,Habibul B. M. %A Mahmood,Zahid Hayat %A Rahman,M. Ziaur %A Bhuiyan,Nurul Amin %A Nishibuchi,Mitsuaki %A Nair,G. Balakrish %A Sack,R. Bradley %A Huq,Anwar %A Rita R Colwell %A Sack,David A. %X A total of 1500 environmental strains of Vibrio parahaemolyticus, isolated from the aquatic environment of Bangladesh, were screened for the presence of a major V. parahaemolyticus virulence factor, the thermostable direct haemolysin (tdh) gene, by the colony blot hybridization method using a digoxigenin-labeled tdh gene probe. Of 1500 strains, 5 carried the tdh sequence, which was further confirmed by PCR using primers specific for the tdh gene. Examination by PCR confirmed that the 5 strains were V. parahaemolyticus and lacked the thermostable direct haemolysin-related haemolysin (trh) gene, the alternative major virulence gene known to be absent in pandemic strains. All 5 strains gave positive Kanagawa phenomenon reaction with characteristic beta-haemolysis on Wagatsuma agar medium. Southern blot analysis of the HindIII-digested chromosomal DNA demonstrated, in all 5 strains, the presence of 2 tdh genes common to strains positive for Kanagawa phenomenon. However, the 5 strains were found to belong to 3 different serotypes (O3:K29, O4:K37, and O3:K6). The 2 with pandemic serotype O3:K6 gave positive results in group-specific PCR and ORF8 PCR assays, characteristics unique to the pandemic clone.
Clonal variations among the 5 isolates were analyzed by comparing RAPD and ribotyping patterns. Results showed different patterns for the 3 serotypes, but the pattern was identical among the O3:K6 strains. This is the first report on the isolation of pandemic O3:K6 strains of V. parahaemolyticus from the aquatic environment of Bangladesh. %B Canadian Journal of Microbiology %V 50 %P 827 - 834 %8 2004/10// %G eng %U http://umd.library.ingentaconnect.com/content/nrc/cjm/2004/00000050/00000010/art00007 %N 10 %0 Report %D 2004 %T PAWN: Producer-Archive Workflow Network in support of digital preservation %A Smorul,M. %A JaJa, Joseph F. %A Wang, Y. %A McCall,F. %X We describe the design and the implementation of the PAWN (Producer – Archive Workflow Network) environment to enable secure and distributed ingestion of digital objects into a persistent archive. PAWN was developed to capture the core elements required for long term preservation of digital objects as identified by previous research in the digital library and archiving communities. In fact, PAWN can be viewed as an implementation of the Ingest Process as defined by the Open Archival Information System (OAIS) Reference Model, and is currently being used to ingest significant collections into a pilot persistent archive developed through a collaboration between the San Diego Supercomputer Center, the University of Maryland, and the National Archives and Records Administration. We make use of METS (Metadata Encoding and Transmission Standards) to encapsulate content, structural, descriptive, and preservation metadata. The basic software components are based on open standards and web technologies, and hence are platform independent. %I Institute for Advanced Computer Studies, Univ of Maryland, College Park %V UMIACS-TR-2004 %P 2006 - 2006 %8 2004/// %G eng %0 Journal Article %J Proceedings of the 38th CISS %D 2004 %T Performance study of ECC-based collusion-resistant multimedia fingerprinting %A He,S. %A Wu,M.
%X Digital fingerprinting is a tool to protect multimedia content from illegal redistribution by uniquely marking copies of the content distributed to each user. Fingerprinting based on error correction coding (ECC) handles the important issue of how to embed the fingerprint into host data in an abstract way known as the marking assumptions, which often do not fully account for multimedia specific issues. In this paper, we examine the performance of ECC based fingerprinting by considering both coding and embedding issues. We provide a performance comparison of the ECC-based scheme and a major alternative, orthogonal fingerprinting. As averaging is a feasible and cost-effective collusion attack against multimedia fingerprints yet is generally not considered in the ECC-based system, we also investigate the resistance against averaging collusion and identify avenues for improving collusion resistance. %B Proceedings of the 38th CISS %P 827 - 832 %8 2004/// %G eng %0 Conference Paper %B Proceedings of EMNLP %D 2004 %T A phrase-based hmm approach to document/abstract alignment %A Daumé, Hal %A Marcu,D. %B Proceedings of EMNLP %P 119 - 126 %8 2004/// %G eng %0 Book Section %B Foundations of Information and Knowledge Systems %D 2004 %T Plan Databases: Model and Algebra %A Yaman,Fusun %A Adali,Sibel %A Nau, Dana S. %A Sapino,Maria %A Subrahmanian,V. %E Seipel,Dietmar %E Turull-Torres,José %K Computer science %X Despite the fact that thousands of applications manipulate plans, there has been no work to date on managing large databases of plans. In this paper, we first propose a formal model of plan databases. We describe important notions of consistency and coherence for such databases. We then propose a set of operators similar to the relational algebra to query such databases of plans.
%B Foundations of Information and Knowledge Systems %S Lecture Notes in Computer Science %I Springer Berlin / Heidelberg %V 2942 %P 302 - 319 %8 2004/// %@ 978-3-540-20965-2 %G eng %U http://www.springerlink.com/content/yqhlqtu4te18q2e1/abstract/ %0 Conference Paper %B Proceedings of the 3rd international conference on Mobile and ubiquitous multimedia %D 2004 %T Pocket PhotoMesa: a Zoomable image browser for PDAs %A Khella,A. %A Bederson, Benjamin B. %B Proceedings of the 3rd international conference on Mobile and ubiquitous multimedia %P 19 - 24 %8 2004/// %G eng %0 Journal Article %J Environmental Microbiology %D 2004 %T Polylysogeny and prophage induction by secondary infection in Vibrio cholerae %A Espeland,Eric M. %A Lipp,Erin K. %A Huq,Anwar %A Rita R Colwell %X Strains of Vibrio cholerae O1, biotypes El Tor and classical, were infected with a known temperate phage (ΦP15) and monitored over a 15-day period for prophage induction. Over the course of the experiment two morphologically and three genomically distinct virus-like particles were observed from the phage-infected El Tor strain by transmission electron microscopy and field inversion gel electrophoresis, respectively, whereas only one phage, ΦP15, was observed from the infected classical strain. In the uninfected El Tor culture one prophage was spontaneously induced after 6 days. No induction in either strain was observed after treatment with mitomycin C. Data indicate that El Tor biotypes of V. cholerae may be polylysogenic and that secondary infection can promote multiple prophage induction. These traits may be important in the transfer of genetic material among V. cholerae by providing an environmentally relevant route for multiple prophage propagation and transmission. 
%B Environmental Microbiology %V 6 %P 760 - 763 %8 2004/07/01/ %@ 1462-2920 %G eng %U http://onlinelibrary.wiley.com/doi/10.1111/j.1462-2920.2004.00603.x/abstract %N 7 %R 10.1111/j.1462-2920.2004.00603.x %0 Journal Article %J Current Opinion in Chemical Biology %D 2004 %T Polyubiquitin chains: polymeric protein signals %A Pickart,Cecile M. %A Fushman, David %X The 76-residue protein ubiquitin exists within eukaryotic cells both as a monomer and in the form of isopeptide-linked polymers called polyubiquitin chains. In two well-described cases, structurally distinct polyubiquitin chains represent functionally distinct intracellular signals. Recently, additional polymeric structures have been detected in vivo and in vitro, and several large families of proteins with polyubiquitin chain-binding activity have been discovered. Although the molecular mechanisms governing specificity in chain synthesis and recognition are still incompletely understood, the scope of signaling by polyubiquitin chains is likely to be broader than originally envisioned. %B Current Opinion in Chemical Biology %V 8 %P 610 - 616 %8 2004/12// %@ 1367-5931 %G eng %U http://www.sciencedirect.com/science/article/pii/S1367593104001413 %N 6 %R 10.1016/j.cbpa.2004.09.009 %0 Journal Article %J Proceedings of the 6th Asian Conference on Computer Vision (ACCV'04) %D 2004 %T Pose-normalized view synthesis of a symmetric object using a single image %A Yue, Z. %A Chellapa, Rama %X Object recognition under varying pose is a challenging problem, especially when illumination variations are also present. In this paper, we propose a pose-normalized view synthesis method under varying illuminations for symmetric objects.
For a given non-frontal view of a symmetric object under non-frontal illumination, the mirror image of the original view is equivalent to the view when the object rotates around the Y-axis by the same angle as the original view but in the opposite direction, and under opposite illumination condition in the X direction. Exploiting the bilateral symmetry of the object, we generate the mirror view of the object under the same illumination condition as the original view on a pixel-by-pixel basis. The frontal view under the same illumination is then easily obtained using view morphing techniques. %B Proceedings of the 6th Asian Conference on Computer Vision (ACCV'04) %P 915 - 920 %8 2004/// %G eng %0 Journal Article %J SIGCOMM Comput. Commun. Rev. %D 2004 %T Practical verification techniques for wide-area routing %A Feamster, Nick %X Protocol and system designers use verification techniques to analyze a system's correctness properties. Network operators need verification techniques to ensure the "correct" operation of BGP. BGP's distributed dependencies cause small configuration mistakes or oversights to spur complex errors, which sometimes have devastating effects on global connectivity. These errors are often difficult to debug because they are sometimes only exposed by a specific message arrival pattern or failure scenario. This paper presents an approach to BGP verification that is primarily based on static analysis of router configuration.
We argue that: (1) because BGP's configuration affects its fundamental behavior, verification is a program analysis problem, (2) BGP's complex, dynamic interactions are difficult to abstract and impossible to enumerate, which precludes existing verification techniques, (3) because of BGP's flexible, policy-based configuration, some aspects of BGP configuration must be checked against a higher-level specification of intended policy, and (4) although static analysis can catch many configuration errors, simulation and emulation are also necessary to determine the precise scenarios that could expose errors at runtime. Based on these observations, we propose the design of a BGP verification tool, discuss how it could be applied in practice, and describe future research challenges. %B SIGCOMM Comput. Commun. Rev. %V 34 %P 87 - 92 %8 2004/01// %@ 0146-4833 %G eng %U http://doi.acm.org/10.1145/972374.972390 %N 1 %R 10.1145/972374.972390 %0 Book Section %B Mathematical Foundations of Computer Science 2004 %D 2004 %T PRAM-On-Chip: A Quest for Not-So-Obvious Non-obviousness %A Vishkin, Uzi %E Fiala,Jirí %E Koubek,Václav %E Kratochvíl,Jan %K Computer %K Science %X Consider situations where once you were told about a new technical idea you reacted by saying: “but this is so obvious, I wonder how I missed it”. I found out recently that the US patent law has a nice formal way of characterizing such a situation. The US patent law protects inventions that meet three requirements: utility, novelty and non-obviousness. Non-obviousness is considered the most challenging of the three to establish. The talk will try to argue that a possible virtue for a technical contribution is when, in retrospect, its non-obviousness is not too obvious; and since hindsight is always 20/20, one may often need to resort to various types of circumstantial evidence in order to establish non-obviousness.
There are two reasons for bringing this issue up in my talk: (i) seeking such a virtue has been an objective of my work over the years, and (ii) issues of taste in research are more legitimate for invited talks; there might be merit in reminding younger researchers that not every “result” is necessarily also a “contribution”; perhaps the criterion of not-so-obvious non-obviousness could be helpful in some cases to help recognize a contribution. The focus of the second focal point for my talk, the PRAM-On-Chip approach, meets at least one of the standard legal ways to support non-obviousness: “Expressions of disbelief by experts constitute strong evidence of non-obviousness”. It is well documented that the whole PRAM algorithmic theory was considered “unrealistic” by numerous experts in the field, prior to the PRAM-On-Chip project. In fact, I needed recently to use this documentation in a reply to the U.S. patent office. An introduction of the PRAM-On-Chip approach follows. Many parallel computer systems architectures have been proposed and built over the last several decades. The outreach of the few that survived has been severely limited due to their programmability problems. The question of how to think algorithmically in parallel has been the fundamental problem for which these architectures did not have an adequate answer. A computational model, the Parallel Random Access Model (PRAM), has been developed by numerous (theoretical computer science) algorithm researchers to address this question during the 1980s and 1990s and is considered by many as the easiest known approach to parallel programming. Despite the broad interest the PRAM generated, it had not been possible to build parallel machines that adequately support it using multi-chip multiprocessors, the only multiprocessors that were buildable in the 1990s since low-overhead coordination was not possible. 
Our main insight is that this is becoming possible with the increasing amounts of hardware that can be placed on a single chip. From the PRAM, as a starting point, a highly parallel explicit multi-threaded (XMT) on-chip processor architecture that relies on new low-overhead coordination mechanisms and whose performance objective is reducing single task completion time has been conceived and developed. Simulated program executions have shown dramatic performance gains over conventional processor architectures. Namely, in addition to the unique parallel programmability features, which set XMT apart from any other current approach, XMT also provides very competitive performance. If XMT meets expectations, its introduction would greatly enhance the normal rate of improvement of conventional processor architectures, leading to new applications. %B Mathematical Foundations of Computer Science 2004 %S Lecture Notes in Computer Science %I Springer Berlin / Heidelberg %V 3153 %P 104 - 105 %8 2004/// %@ 978-3-540-22823-3 %G eng %U http://dx.doi.org/10.1007/978-3-540-28629-5_5 %0 Conference Paper %B CHI'04 extended abstracts on Human factors in computing systems %D 2004 %T Preschool children's use of mouse buttons %A Hourcade,J. P %A Bederson, Benjamin B. %A Druin, Allison %B CHI'04 extended abstracts on Human factors in computing systems %P 1411 - 1412 %8 2004/// %G eng %0 Journal Article %J Software, IEEE %D 2004 %T Preserving distributed systems critical properties: a model-driven approach %A Yilmaz,C. %A Memon, Atif M. %A Porter, Adam %A Krishna,A. S %A Schmidt,D. C %A Gokhale,A. %A Natarajan,B.
%K configuration management %K formal verification %K Middleware %K middleware suite %K model-driven approach %K persistent software attributes %K QoS requirements %K Quality assurance %K quality of service %K quality-of-service %K Skoll distributed computing resources %K software configuration %K Software maintenance %K Software quality %K software quality assurance process %K system dependability %X The need for creating predictability in distributed systems is most often specified in terms of quality-of-service (QoS) requirements, which help define the acceptable levels of dependability with which capabilities such as processing capacity, data throughput, or service availability reach users. For longer-term properties such as scalability, maintainability, adaptability, and system security, we can similarly use persistent software attributes (PSAs) to specify how and to what degree such properties must remain intact as a network expands and evolves over time. The Skoll distributed continuous software quality assurance process helps to identify viable system and software configurations for meeting stringent QOS and PSA requirements by coordinating the use of distributed computing resources. The authors tested their process using the large, rapidly evolving ACE+TAO middleware suite. %B Software, IEEE %V 21 %P 32 - 40 %8 2004/// %@ 0740-7459 %G eng %N 6 %R 10.1109/MS.2004.50 %0 Conference Paper %B Proceedings of the 2nd ACM international workshop on Multimedia databases %D 2004 %T The priority curve algorithm for video summarization %A Fayzullin,M. %A V.S. Subrahmanian %A Albanese, M. %A Picariello, A. %K probabilistic %K Summarization %K system %K video %X In this paper, we introduce the concept of a priority curve associated with a video. We then provide an algorithm that can use the priority curve to create a summary (of a desired length) of any video. The summary thus created exhibits nice continuity properties and also avoids repetition. 
We have implemented the priority curve algorithm (PCA) and compared it with other summarization algorithms in the literature. We show that PCA is faster than existing algorithms and also produces better quality summaries. The quality of summaries was evaluated by a group of 200 students in Naples, Italy, who watched soccer videos. We also briefly describe a soccer video summarization system we have built using the PCA architecture and various (classical) image processing algorithms. %B Proceedings of the 2nd ACM international workshop on Multimedia databases %S MMDB '04 %I ACM %C New York, NY, USA %P 28 - 35 %8 2004/// %@ 1-58113-975-6 %G eng %U http://doi.acm.org/10.1145/1032604.1032611 %R 10.1145/1032604.1032611 %0 Journal Article %J IEEE Wireless Communications %D 2004 %T Proactive key distribution using neighbor graphs %A Mishra,A. %A Min Ho Shin %A Petroni,N. L. %A Clancy,T. C %A Arbaugh, William A. %K access points %K Authentication %K authentication time %K Base stations %K Communication system security %K Delay %K graph theory %K GSM %K IEEE 802.11 handoff %K Land mobile radio cellular systems %K Message authentication %K mobile radio %K Multiaccess communication %K neighbor graph %K Network topology %K Roaming %K telecommunication security %K Telephone sets %K user mobility %K Wi-Fi networks %K wireless data networks %K Wireless LAN %K Wireless networks %X User mobility in wireless data networks is increasing because of technological advances, and the desire for voice and multimedia applications. These applications, however, require that handoffs between base stations (or access points) be fast to maintain the quality of the connections. In this article we introduce a novel data structure, the neighbor graph, that dynamically captures the mobility topology of a wireless network.
We show how neighbor graphs can be utilized to obtain a 99 percent reduction in the authentication time of an IEEE 802.11 handoff (full EAP-TLS) by proactively distributing necessary key material one hop ahead of the mobile user. We also present a reactive method for fast authentication that requires only firmware changes to access points and hence can easily be deployed on existing wireless networks. %B IEEE Wireless Communications %V 11 %P 26 - 36 %8 2004/02// %@ 1536-1284 %G eng %N 1 %R 10.1109/MWC.2004.1269714 %0 Journal Article %J NIPS %D 2004 %T Probabilistic analysis of kernel principal components %A Zhou, S. %A Chellappa, Rama %A Moghaddam, B. %X This paper presents a probabilistic analysis of kernel principal components by unifying the theory of probabilistic principal component analysis and kernel principal component analysis. It is shown that, while the kernel component enhances the nonlinear modeling power, the probabilistic structure offers (i) a mixture model for nonlinear data structure containing nonlinear sub-structures, and (ii) an effective classification scheme. It turns out that the original loading matrix is replaced by a newly defined empirical loading matrix. The expectation/maximization algorithm for learning parameters of interest is also presented. %B NIPS %8 2004/// %G eng %0 Conference Paper %B Computer Vision and Pattern Recognition, 2004. CVPR 2004. Proceedings of the 2004 IEEE Computer Society Conference on %D 2004 %T Probabilistic identity characterization for face recognition %A Zhou,S.K.
%A Chellappa, Rama %K characterization; %K database %K database; %K encoding; %K Face %K identity %K identity; %K image %K localization %K management %K object %K PIE %K probabilistic %K problem; %K recognition; %K sequence; %K sequences; %K subspace %K systems; %K video %X We present a general framework for characterizing the object identity in a single image or a group of images with each image containing a transformed version of the object, with applications to face recognition. In terms of the transformation, the group is made of either many still images or frames of a video sequence. The object identity is either discrete- or continuous-valued. This probabilistic framework integrates all the evidence of the set and handles the localization problem, illumination and pose variations through subspace identity encoding. Issues and challenges arising in this framework are addressed and efficient computational schemes are presented. Good face recognition results using the PIE database are reported. %B Computer Vision and Pattern Recognition, 2004. CVPR 2004. Proceedings of the 2004 IEEE Computer Society Conference on %V 2 %P II-805 - II-812 Vol.2 %8 2004/07/02/june %G eng %R 10.1109/CVPR.2004.1315247 %0 Conference Paper %B Pattern Recognition, 2004. ICPR 2004.
Proceedings of the 17th International Conference on %D 2004 %T Product approximation by minimizing the upper bound of Bayes error rate for Bayesian combination of classifiers %A Kang,Hee-Joong %A David Doermann %K approximation; %K Bayes %K Bayesian %K bound %K character %K classification; %K classifiers; %K conditional %K distribution; %K entropy; %K error %K formalism; %K handwritten %K methods; %K minimisation; %K multiple %K numerals; %K pattern %K probability %K probability; %K Product %K rate; %K recognition; %K statistics; %K unconstrained %K upper %X In combining multiple classifiers using a Bayesian formalism, a high dimensional probability distribution is composed of a class and decisions of classifiers. In order to do product approximation of the probability distribution, the upper bound of Bayes error rate, bounded by the conditional entropy of a class and decisions, should be minimized. A second-order dependency-based product approximation is proposed in this paper by considering the second-order dependency between the class and decisions. The proposed method is evaluated by combining the classifiers recognizing unconstrained handwritten numerals. %B Pattern Recognition, 2004. ICPR 2004. Proceedings of the 17th International Conference on %V 1 %P 252 - 255 Vol.1 %8 2004/08// %G eng %R 10.1109/ICPR.2004.1334071 %0 Conference Paper %B Proceedings of the 2004 annual national conference on Digital government research %D 2004 %T Project highlight: toward the statistical knowledge network %A Marchionini,Gary %A Haas,Stephanie %A Shneiderman, Ben %A Plaisant, Catherine %A Hert,Carol A. %X This project aims to help people find and understand government statistical information. To achieve this goal, we envision a statistical knowledge network that brings stakeholders from government at all levels together with citizens who provide or seek statistical information.
The linchpin of this network is a series of human-computer interfaces that facilitate information seeking, understanding, and use. In turn, these interfaces depend on high-quality metadata and intra-agency cooperation. In this briefing, we summarize our accomplishments in the second year of the project. %B Proceedings of the 2004 annual national conference on Digital government research %S dg.o '04 %I Digital Government Society of North America %P 125:1–125:2 %8 2004/// %G eng %U http://dl.acm.org/citation.cfm?id=1124191.1124316 %0 Journal Article %J Proceedings of the Project Management Institute Research Conference %D 2004 %T Project portfolio earned value management using treemaps %A Cable,J. H %A Ordonez,J. F %A Chintalapani,G. %A Plaisant, Catherine %X Project portfolio management deals with organizing and managing a set of projects in an organization. Each organization has its own way of managing the portfolio that meets its business goals. One of the main challenges is to track project performance across the entire portfolio in a timely and effective manner. It allows managers to diagnose performance trends and identify projects in need of attention, giving them the opportunity to take management action in a timely fashion. In this paper, we investigate the use of Earned Value Management (EVM) for tracking project performance across the portfolio, and explore the benefits of an interactive visualization technique called Treemaps to display project performance metrics for the entire portfolio on a single screen.
%B Proceedings of the Project Management Institute Research Conference %8 2004/// %G eng %0 Journal Article %J TAL Traitement Automatique des Langues %D 2003 %T Parsing and Tagging of Bilingual Dictionaries %A Ma,Huanfeng %A Karagol-Ayan,Burcu %A David Doermann %A Oard, Douglas %A Wang,Jianqiang %X Bilingual dictionaries hold great potential as a source of lexical resources for training and testing automated systems for optical character recognition, machine translation, and cross-language information retrieval. In this paper, we describe a system for extracting term lexicons from printed bilingual dictionaries. Our work was divided into three phases - dictionary segmentation, entry tagging, and generation. In segmentation, pages are divided into logical entries based on structural features learned from selected examples. The extracted entries are associated with functional labels and passed to a tagging module which associates linguistic labels with each word or phrase in the entry. The output of the system is a structure that represents the entries from the dictionary. We have used this approach to parse a variety of dictionaries with both Latin and non-Latin alphabets, and demonstrate the results of term lexicon generation for retrieval from a collection of French news stories using English queries. %B TAL Traitement Automatique des Langues %V 44 %P 125 - 150 %8 2003/// %G eng %N 2 %0 Journal Article %J 3rd Int’l Workshop on Statistical and Computational Theories of Vision, Nice, France %D 2003 %T A particle filtering approach to abnormality detection in nonlinear systems and its application to abnormal activity detection %A Vaswani, N. %A Chellappa, Rama %X We study abnormality detection in partially observed nonlinear dynamic systems tracked using particle filters. An ‘abnormality’ is defined as a change in the system model, which could be drastic or gradual, with the parameters of the changed system unknown.
If the change is drastic the particle filter will lose track rapidly and the increase in tracking error can be used to detect the change. In this paper we propose a new statistic for detecting ‘slow’ changes or abnormalities which do not cause the particle filter to lose track for a long time. In a previous work, we have proposed a partially observed nonlinear dynamical system for modeling the configuration dynamics of a group of interacting point objects and formulated abnormal activity detection as a change detection problem. We show here results for abnormal activity detection comparing our proposed change detection strategy with others used in the literature. %B 3rd Int’l Workshop on Statistical and Computational Theories of Vision, Nice, France %8 2003/// %G eng %0 Journal Article %J Infection and Immunity %D 2003 %T Pathogenic Potential of Environmental Vibrio Cholerae Strains Carrying Genetic Variants of the Toxin-Coregulated Pilus Pathogenicity Island %A Faruque,Shah M. %A Kamruzzaman,M. %A Meraj,Ismail M. %A Chowdhury,Nityananda %A Nair,G. Balakrish %A Sack,R. Bradley %A Rita R Colwell %A Sack,David A. %X The major virulence factors of toxigenic Vibrio cholerae are cholera toxin (CT), which is encoded by a lysogenic bacteriophage (CTXΦ), and toxin-coregulated pilus (TCP), an essential colonization factor which is also the receptor for CTXΦ. The genes for the biosynthesis of TCP are part of a larger genetic element known as the TCP pathogenicity island. To assess their pathogenic potential, we analyzed environmental strains of V. cholerae carrying genetic variants of the TCP pathogenicity island for colonization of infant mice, susceptibility to CTXΦ, and diarrheagenicity in adult rabbits.
Analysis of 14 environmental strains, including 3 strains carrying a new allele of the tcpA gene, 9 strains carrying a new allele of the toxT gene, and 2 strains carrying conventional tcpA and toxT genes, showed that all strains colonized infant mice with various efficiencies in competition with a control El Tor biotype strain of V. cholerae O1. Five of the 14 strains were susceptible to CTXΦ, and these transductants produced CT and caused diarrhea in adult rabbits. These results suggested that the new alleles of the tcpA and toxT genes found in environmental strains of V. cholerae encode biologically active gene products. Detection of functional homologs of the TCP island genes in environmental strains may have implications for understanding the origin and evolution of virulence genes of V. cholerae. %B Infection and Immunity %V 71 %P 1020 - 1025 %8 2003/02/01/ %@ 0019-9567, 1098-5522 %G eng %U http://iai.asm.org/content/71/2/1020 %N 2 %R 10.1128/IAI.71.2.1020-1025.2003 %0 Book Section %B Digital library use: Social practice in design and evaluation %D 2003 %T The people in digital libraries: Multifaceted approaches to assessing needs and impact %A Marchionini,G. %A Plaisant, Catherine %A Komlodi,A. %B Digital library use: Social practice in design and evaluation %I MIT Press %P 119 - 160 %8 2003/// %@ 9780262025447 %G eng %0 Book Section %B Perceptual organization in vision: behavioral and neural perspectives %D 2003 %T Perceptual Completion and Memory %A Jacobs, David W.
%B Perceptual organization in vision: behavioral and neural perspectives %P 403 - 403 %8 2003/// %G eng %0 Book Section %B DUC 03 Conference Proceedings %D 2003 %T Performance of a Three-Stage System for Multi-Document Summarization %A Dunlavy,Daniel M. %A Conroy,John M. %A Schlesinger,Judith D. %A Goodman,Sarah A. %A Okurowski,Mary Ellen %A O'Leary, Dianne P. %A Halteren,Hans van %B DUC 03 Conference Proceedings %I U.S. National Inst. of Standards and Technology %8 2003/// %G eng %U http://duc.nist.gov/ %0 Conference Paper %B Multimedia and Expo, 2003. ICME '03. Proceedings. 2003 International Conference on %D 2003 %T Performance of detection statistics under collusion attacks on independent multimedia fingerprints %A Zhao,Hong %A M. Wu %A Wang,Z.J. %A Liu,K. J.R %K analysis; %K attacks; %K based %K collusion %K content; %K DETECTION %K digital %K fingerprint %K fingerprinting; %K fingerprints; %K Gaussian %K identification; %K independent %K multimedia %K performance; %K preprocessing %K processes; %K redistribution; %K security; %K statistical %K statistics; %K systems; %K techniques; %K Telecommunication %K unauthorized %X Digital fingerprinting is a technology for tracing the distribution of multimedia content and protecting them from unauthorized redistribution. Collusion attack is a cost effective attack against digital fingerprinting where several copies with the same content but different fingerprints are combined to remove the original fingerprints. In this paper, we consider average attack and several nonlinear collusion attacks on independent Gaussian based fingerprints, and study the detection performance of several commonly used detection statistics in the literature under collusion attacks.
Observing that these detection statistics are not specifically designed for collusion scenarios and do not take into account the characteristics of the newly generated fingerprints under collusion attacks, we propose pre-processing techniques to improve the detection performance of the detection statistics under collusion attacks. %B Multimedia and Expo, 2003. ICME '03. Proceedings. 2003 International Conference on %V 1 %P I - 205-8 vol.1 %8 2003/07// %G eng %R 10.1109/ICME.2003.1220890 %0 Journal Article %J Environmental Microbiology %D 2003 %T Persistence of adhesive properties in Vibrio cholerae after long‐term exposure to sea water %A Pruzzo,Carla %A Tarsi,Renato %A Del Mar Lleò,Maria %A Signoretto,Caterina %A Zampini,Massimiliano %A Pane,Luigi %A Rita R Colwell %A Canepari,Pietro %X The effect of exposure to artificial sea water (ASW) on the ability of classical Vibrio cholerae O1 cells to interact with chitin-containing substrates and human intestinal cells was studied. Incubation of vibrios in ASW at 5°C and 18°C resulted in two kinds of cell responses: the viable but non-culturable (VBNC) state (i.e. <0.1 colony forming unit ml−1) at 5°C, and starvation (i.e. maintenance of culturability of the population) at 18°C. The latter remained rod shaped and, after 40 days’ incubation, presented a 47–58% reduction in the number of cells attached to chitin, a 48–53% reduction in the number of bacteria adhering to copepods, and a 48–54% reduction in the number of bacteria adhering to human cultured intestinal cells, compared to control cells not suspended in ASW. Bacteria suspended in ASW at 5°C became coccoid and, after 40 days, showed 34–42% fewer cells attached to chitin, 52–55% fewer adhering to copepods, and 45–48% fewer cells adhering to intestinal cell monolayers, compared to controls. Sarkosyl-insoluble membrane proteins that bind chitin particles were isolated and analysed by SDS-PAGE.
After 40 days’ incubation in ASW at both 5°C and 18°C vibrios expressed chitin-binding ligands similar to bacteria harvested in the stationary growth phase. It is concluded that as vibrios do not lose adhesive properties after long-term exposure to ASW, it is important to include methods for VBNC bacteria when testing environmental and clinical samples for purposes of public health safety. %B Environmental Microbiology %V 5 %P 850 - 858 %8 2003/10/01/ %@ 1462-2920 %G eng %U http://onlinelibrary.wiley.com/doi/10.1046/j.1462-2920.2003.00498.x/abstract?userIsAuthenticated=false&deniedAccessCustomisedMessage= %N 10 %R 10.1046/j.1462-2920.2003.00498.x %0 Journal Article %J Joint IEEE International Workshop on Visual Surveillance and Performance Evaluation of Tracking and Surveillance (VS-PETS) %D 2003 %T A perturbation method for evaluating background subtraction algorithms %A Chalidabhongse,T.H. %A Kim,K. %A Harwood,D. %A Davis, Larry S. %X We introduce a performance evaluation methodology called Perturbation Detection Rate (PDR) analysis, for measuring performance of background subtraction (BGS) algorithms. It has some advantages over the commonly used Receiver Operation Characteristics (ROC) analysis. Specifically, it does not require foreground targets or knowledge of foreground distributions. It measures the sensitivity of a BGS algorithm in detecting low contrast targets against background as a function of contrast, also depending on how well the model captures mixed (moving) background events. We compare four algorithms having similarities and differences. Three are in [2, 3, 5] while the fourth is recently developed, called Codebook BGS. The latter algorithm quantizes sample background values at each pixel into codebooks which represent a compressed form of background model for a long image sequence.
%B Joint IEEE International Workshop on Visual Surveillance and Performance Evaluation of Tracking and Surveillance (VS-PETS) %8 2003/// %G eng %0 Journal Article %J Curr Genet %D 2003 %T Phylogenetic analysis reveals five independent transfers of the chloroplast gene rbcL to the mitochondrial genome in angiosperms %A Cummings, Michael P. %A Nugent,J. M %A Olmstead,R. G %A Palmer,J. D %X We used the chloroplast gene rbcL as a model to study the frequency and relative timing of transfer of chloroplast sequences to the mitochondrial genome. Southern blot survey of 20 mitochondrial DNAs confirmed three previously reported groups of plants containing rbcL in their mitochondrion, while PCR studies identified a new mitochondrial rbcL. Published and newly determined mitochondrial and chloroplast rbcL sequences were used to reconstruct rbcL phylogeny. The results imply five or six separate interorganellar transfers of rbcL among the angiosperms examined, and hundreds of successful transfers across all flowering plants. By taxonomic criteria, the crucifer transfer is the most ancient, two separate transfers within the grass family are of intermediate ancestry, and the morning-glory transfer is most recent. All five mitochondrial copies of rbcL examined exhibit insertion and/or deletion events that disrupt the reading frame (three are grossly truncated); and all are elevated in the proportion of nonsynonymous substitutions, providing clear evidence that these sequences are pseudogenes. %B Curr Genet %V 43 %P 131 - 138 %8 2003/05// %G eng %N 2 %R 10.1007/s00294-003-0378-3 %0 Conference Paper %B Multimedia and Expo, IEEE International Conference on %D 2003 %T Pitch and timbre manipulations using cortical representation of sound %A Zotkin,Dmitry N %A Shamma,S.A. %A Ru,P. %A Duraiswami, Ramani %A Davis, Larry S. %X The sound received at the ears is processed by humans using signal processing that separates the signal along intensity, pitch and timbre dimensions.
Conventional Fourier-based signal processing, while endowed with fast algorithms, is unable to easily represent signal along these attributes. In this paper we use a cortical representation to represent and manipulate sound. We briefly overview algorithms for obtaining, manipulating and inverting cortical representation of sound and describe algorithms for manipulating signal pitch and timbre separately. The algorithms are first used to create the sound of an instrument between a guitar and a trumpet. Applications to creating maximally separable sounds in auditory user interfaces are discussed. %B Multimedia and Expo, IEEE International Conference on %I IEEE Computer Society %C Los Alamitos, CA, USA %V 3 %P 381 - 384 %8 2003/// %@ 0-7803-7965-9 %G eng %R http://doi.ieeecomputersociety.org/10.1109/ICME.2003.1221328 %0 Journal Article %J International Conference on Automated Planning and Scheduling (ICAPS) 2003 Workshop on planning for web services %D 2003 %T A planner for composing services described in DAML-S %A Sheshagiri,M. %A desJardins, Marie %A Finin,T. %B International Conference on Automated Planning and Scheduling (ICAPS) 2003 Workshop on planning for web services %8 2003/// %G eng %0 Book Section %B KI 2003: Advances in Artificial Intelligence %D 2003 %T Planning in Answer Set Programming Using Ordered Task Decomposition %A Dix,Jürgen %A Kuter,Ugur %A Nau, Dana S. %E Günter,Andreas %E Kruse,Rudolf %E Neumann,Bernd %K Computer science %X In this paper we introduce a formalism for solving Hierarchical Task Network (HTN) Planning using Answer Set Programming (ASP). We consider the formulation of HTN planning as described in the SHOP planning system and define a systematic translation method from SHOP's representation of the planning problem into logic programs with negation. We show that our translation is sound and complete: answer sets of the logic program obtained by our translation correspond exactly to the solutions of the planning problem.
We compare our method to (1) similar approaches based on non-HTN planning and (2) SHOP, a dedicated planning system. We show that our approach outperforms non-HTN methods and that its performance is better with ASP systems that allow for nonground programs than with ASP systems that require ground programs. %B KI 2003: Advances in Artificial Intelligence %S Lecture Notes in Computer Science %I Springer Berlin / Heidelberg %V 2821 %P 490 - 504 %8 2003/// %@ 978-3-540-20059-8 %G eng %U http://www.springerlink.com/content/ekt1kye92lh12lpj/abstract/ %0 Journal Article %J The Visual Computer %D 2003 %T Plenoptic video geometry %A Neumann, Jan %A Fermüller, Cornelia %X More and more processing of visual information is nowadays done by computers, but the images captured by conventional cameras are still based on the pinhole principle inspired by our own eyes. This principle though is not necessarily the optimal image-formation principle for automated processing of visual information. Each camera samples the space of light rays according to some pattern. If we understand the structure of the space formed by the light rays passing through a volume of space, we can determine the camera, or in other words the sampling pattern of light rays, that is optimal with regard to a given task. In this work we analyze the differential structure of the space of time-varying light rays described by the plenoptic function and use this analysis to relate the rigid motion of an imaging device to the derivatives of the plenoptic function. The results can be used to define a hierarchy of camera models with respect to the structure from motion problem and formulate a linear, scene-independent estimation problem for the rigid motion of the sensor purely in terms of the captured images.
%B The Visual Computer %V 19 %P 395 - 404 %8 2003/// %@ 0178-2789 %G eng %U http://dx.doi.org/10.1007/s00371-003-0203-5 %N 6 %0 Conference Paper %B 2003 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, 2003. Proceedings %D 2003 %T Polydioptric camera design and 3D motion estimation %A Neumann, J. %A Fermüller, Cornelia %A Aloimonos, J. %K 3D motion estimation %K Algorithm design and analysis %K Application software %K CAMERAS %K Computer vision %K Eyes %K field-of-view camera %K Image motion analysis %K image sampling %K image sensor %K Image sensors %K Layout %K light ray %K Motion estimation %K multiperspective camera %K optimal camera %K optimal image formation %K optimal sampling pattern %K pinhole principle %K polydioptric camera design %K ray space %K scene independent estimation %K space structure analysis %K stereo image processing %K visual information processing %X Most cameras used in computer vision applications are still based on the pinhole principle inspired by our own eyes. It has been found though that this is not necessarily the optimal image formation principle for processing visual information using a machine. We describe how to find the optimal camera for 3D motion estimation by analyzing the structure of the space formed by the light rays passing through a volume of space. Every camera corresponds to a sampling pattern in light ray space, thus the question of camera design can be rephrased as finding the optimal sampling pattern with regard to a given task. This framework suggests that large field-of-view multi-perspective (polydioptric) cameras are the optimal image sensors for 3D motion estimation. We conclude by proposing design principles for polydioptric cameras and describe an algorithm for such a camera that estimates its 3D motion in a scene independent and robust manner. %B 2003 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, 2003. 
Proceedings %I IEEE %V 2 %P II-294 - 301 vol.2 %8 2003/06/18/20 %@ 0-7695-1900-8 %G eng %R 10.1109/CVPR.2003.1211483 %0 Journal Article %J Numerische Mathematik %D 2003 %T On the powers of a matrix with perturbations %A Stewart, G.W. %X Let A be a matrix of order n. The properties of the powers A^k of A have been extensively studied in the literature. This paper concerns the perturbed powers Pk = (A + Ek)(A + Ek−1) ⋯ (A + E1), where the Ek are perturbation matrices. We will treat three problems concerning the asymptotic behavior of the perturbed powers. First, determine conditions under which Pk → 0. Second, determine the limiting structure of Pk. Third, investigate the convergence of the power method with error: that is, given u1, determine the behavior of uk = νk Pk u1, where νk is a suitable scaling factor. %B Numerische Mathematik %V 96 %P 363 - 376 %8 2003/// %G eng %N 2 %R 10.1007/s00211-003-0470-0 %0 Journal Article %J Applied and Environmental Microbiology %D 2003 %T Predictability of Vibrio Cholerae in Chesapeake Bay %A Louis,Valérie R. %A Russek-Cohen,Estelle %A Choopun,Nipa %A Rivera,Irma N. G. %A Gangle,Brian %A Jiang,Sunny C. %A Rubin,Andrea %A Patz,Jonathan A. %A Huq,Anwar %A Rita R Colwell %X Vibrio cholerae is autochthonous to natural waters and can pose a health risk when it is consumed via untreated water or contaminated shellfish. The correlation between the occurrence of V. cholerae in Chesapeake Bay and environmental factors was investigated over a 3-year period. Water and plankton samples were collected monthly from five shore sampling sites in northern Chesapeake Bay (January 1998 to February 2000) and from research cruise stations on a north-south transect (summers of 1999 and 2000). Enrichment was used to detect culturable V. cholerae, and 21.1% (n = 427) of the samples were positive.
As determined by serology tests, the isolates did not belong to serogroup O1 or O139 associated with cholera epidemics. A direct fluorescent-antibody assay was used to detect V. cholerae O1, and 23.8% (n = 412) of the samples were positive. V. cholerae was more frequently detected during the warmer months and in northern Chesapeake Bay, where the salinity is lower. Statistical models successfully predicted the presence of V. cholerae as a function of water temperature and salinity. Temperatures above 19°C and salinities between 2 and 14 ppt yielded at least a fourfold increase in the number of detectable V. cholerae. The results suggest that salinity variation in Chesapeake Bay or other parameters associated with Susquehanna River inflow contribute to the variability in the occurrence of V. cholerae and that salinity is a useful indicator. Under scenarios of global climate change, increased climate variability, accompanied by higher stream flow rates and warmer temperatures, could favor conditions that increase the occurrence of V. cholerae in Chesapeake Bay. %B Applied and Environmental Microbiology %V 69 %P 2773 - 2785 %8 2003/05/01/ %@ 0099-2240, 1098-5336 %G eng %U http://aem.asm.org/content/69/5/2773 %N 5 %R 10.1128/AEM.69.5.2773-2785.2003 %0 Patent %D 2003 %T Prefix sums and an application thereof %A Vishkin, Uzi %E Ramot at Tel Aviv University Ltd. %X A method for performing prefix sums, by including a prefix sum instruction in the instruction set of a microprocessor. Both general prefix summation, base-zero prefix summation and base-zero suffix summation are included in the scope of the present invention. The prefix sum instruction may be implemented in software, using the instructions of existing instruction sets, or may be implemented in dedicated hardware, for example, as a functional unit of a microprocessor.
The hardware implementation is suitable for application to the allocation of computational resources among concurrent tasks. The scope of the present invention includes one such application: guaranteeing conflict-free access to multiple single-ported register files. %V 09/224,104 %8 2003/04/01/ %G eng %U http://www.google.com/patents?id=qCAPAAAAEBAJ %N 6542918 %0 Book Section %B The craft of information visualization: readings and reflections %D 2003 %T Preserving Context with Zoomable User Interfaces %A Hornbæk,K. %A Bederson, Benjamin B. %A Plaisant, Catherine %B The craft of information visualization: readings and reflections %I Morgan Kaufmann Publishers Inc. %P 83 - 83 %8 2003/// %@ 978-1-55860-915-0 %G eng %0 Journal Article %J P2P Journal %D 2003 %T Probabilistic knowledge discovery and management for P2P networks %A Tsoumakos,D. %A Roussopoulos, Nick %B P2P Journal %P 15 - 20 %8 2003/// %G eng %0 Journal Article %J Computer Vision and Image Understanding %D 2003 %T Probabilistic recognition of human faces from video %A Zhou,Shaohua %A Krueger,Volker %A Chellapa, Rama %K Exemplar-based learning %K face recognition %K sequential importance sampling %K Still-to-video %K Time series state space model %K Video-to-video %X Recognition of human faces using a gallery of still or video images and a probe set of videos is systematically investigated using a probabilistic framework. In still-to-video recognition, where the gallery consists of still images, a time series state space model is proposed to fuse temporal information in a probe video, which simultaneously characterizes the kinematics and identity using a motion vector and an identity variable, respectively. The joint posterior distribution of the motion vector and the identity variable is estimated at each time instant and then propagated to the next time instant.
Marginalization over the motion vector yields a robust estimate of the posterior distribution of the identity variable. A computationally efficient sequential importance sampling (SIS) algorithm is developed to estimate the posterior distribution. Empirical results demonstrate that, due to the propagation of the identity variable over time, a degeneracy in posterior probability of the identity variable is achieved to give improved recognition. The gallery is generalized to videos in order to realize video-to-video recognition. An exemplar-based learning strategy is adopted to automatically select video representatives from the gallery, serving as mixture centers in an updated likelihood measure. The SIS algorithm is applied to approximate the posterior distribution of the motion vector, the identity variable, and the exemplar index, whose marginal distribution of the identity variable produces the recognition result. The model formulation is very general and it allows a variety of image representations and transformations. Experimental results using images/videos collected at UMD, NIST/USF, and CMU with pose/illumination variations illustrate the effectiveness of this approach for both still-to-video and video-to-video scenarios with appropriate model choices. %B Computer Vision and Image Understanding %V 91 %P 214 - 245 %8 2003/07// %@ 1077-3142 %G eng %U http://www.sciencedirect.com/science/article/pii/S1077314203000808 %N 1–2 %R 10.1016/S1077-3142(03)00080-8 %0 Conference Paper %B Proceedings of the 26th annual international ACM SIGIR conference on Research and development in information retrieval %D 2003 %T Probabilistic structured query methods %A Darwish,Kareem %A Oard, Douglas %K arabic %K CLIR %K OCR %K structured queries %K term replacement %X Structured methods for query term replacement rely on separate estimates of term frequency and document frequency to compute a weight for each query term.
This paper reviews prior work on structured query techniques and introduces three new variants that leverage estimates of replacement probabilities. Statistically significant improvements in retrieval effectiveness are demonstrated for cross-language retrieval and for retrieval based on optical character recognition when replacement probabilities are used to estimate both term frequency and document frequency. %B Proceedings of the 26th annual international ACM SIGIR conference on Research and development in information retrieval %S SIGIR '03 %I ACM %C New York, NY, USA %P 338 - 344 %8 2003/// %@ 1-58113-646-3 %G eng %U http://doi.acm.org/10.1145/860435.860497 %R 10.1145/860435.860497 %0 Conference Paper %B Computer Vision and Pattern Recognition, 2003. Proceedings. 2003 IEEE Computer Society Conference on %D 2003 %T Probabilistic tracking in joint feature-spatial spaces %A Elgammal,A. %A Duraiswami, Ramani %A Davis, Larry S. %K analysis; %K appearance %K appearance; %K color; %K colour %K Computer %K constraint; %K deformation; %K detection; %K distribution; %K edge %K estimation; %K extraction; %K feature %K feature-spatial %K feature; %K function %K gradient; %K image %K intensity; %K joint %K likelihood %K local %K maximization; %K maximum %K nonparametric %K object %K objective %K occlusion; %K optical %K partial %K probabilistic %K probability; %K region %K representation; %K row %K similarity-based %K small %K space; %K structure; %K target %K tracker; %K tracking; %K transformation %K vision; %X In this paper, we present a probabilistic framework for tracking regions based on their appearance. We exploit the feature-spatial distribution of a region representing an object as a probabilistic constraint to track that region over time. The tracking is achieved by maximizing a similarity-based objective function over transformation space given a nonparametric representation of the joint feature-spatial distribution.
Such a representation imposes a probabilistic constraint on the region feature distribution coupled with the region structure, which yields an appearance tracker that is robust to small local deformations and partial occlusion. We present the approach for the general form of joint feature-spatial distributions and apply it to tracking with different types of image features including raw intensity, color and image gradient. %B Computer Vision and Pattern Recognition, 2003. Proceedings. 2003 IEEE Computer Society Conference on %V 1 %P I-781 - I-788 vol.1 %8 2003/06// %G eng %R 10.1109/CVPR.2003.1211432 %0 Conference Paper %B INTERNATIONAL JOINT CONFERENCE ON ARTIFICIAL INTELLIGENCE %D 2003 %T Probabilistically survivable MASs %A Kraus,S. %A V.S. Subrahmanian %A Tas,N. C %X Multiagent systems (MAS) can "go down" for a large number of reasons, ranging from system malfunctions and power failures to malicious attacks. The placement of agents on nodes is called a deployment of the MAS. We develop a probabilistic model of survivability of a deployed MAS and provide two algorithms to compute the probability of survival of a deployed MAS. Our probabilistic model does not make independence assumptions, though such assumptions can be added if so desired. An optimal deployment of a MAS is one that maximizes its survival probability. We provide a mathematical answer to this question, an algorithm that computes an exact solution to this problem, as well as several algorithms that quickly compute approximate solutions to the problem. We have implemented our algorithms; our implementation demonstrates that computing deployments can be done scalably. %B INTERNATIONAL JOINT CONFERENCE ON ARTIFICIAL INTELLIGENCE %V 18 %P 789 - 795 %8 2003/// %G eng %0 Journal Article %J CONCUR 2003-Concurrency Theory %D 2003 %T A process-algebraic language for probabilistic I/O automata %A Stark,E. W %A Cleaveland, Rance %A Smolka,S.
A %B CONCUR 2003-Concurrency Theory %P 193 - 207 %8 2003/// %G eng %0 Journal Article %J Pattern Analysis and Machine Intelligence, IEEE Transactions on %D 2003 %T Properties of embedding methods for similarity searching in metric spaces %A Hjaltason,G. R %A Samet, Hanan %K complex %K contractiveness; %K data %K databases; %K decomposition; %K dimension %K distance %K distortion; %K DNA %K documents; %K EMBEDDING %K embeddings; %K Euclidean %K evaluations; %K FastMap; %K images; %K Lipschitz %K methods; %K metric %K MetricMap; %K multimedia %K processing; %K query %K reduction %K search; %K searching; %K sequences; %K similarity %K singular %K spaces; %K SparseMap; %K types; %K value %X Complex data types (such as images, documents, and DNA sequences) are becoming increasingly important in modern database applications. A typical query in many of these applications seeks to find objects that are similar to some target object, where (dis)similarity is defined by some distance function. Often, the cost of evaluating the distance between two objects is very high. Thus, the number of distance evaluations should be kept at a minimum, while (ideally) maintaining the quality of the result. One way to approach this goal is to embed the data objects in a vector space so that the distances of the embedded objects approximate the actual distances. Thus, queries can be performed (for the most part) on the embedded objects. We are especially interested in examining the issue of whether or not the embedding methods will ensure that no relevant objects are left out. Particular attention is paid to the SparseMap, FastMap, and MetricMap embedding methods. SparseMap is a variant of Lipschitz embeddings, while FastMap and MetricMap are inspired by dimension reduction methods for Euclidean spaces.
We show that, in general, none of these embedding methods guarantee that queries on the embedded objects have no false dismissals, while also demonstrating the limited cases in which the guarantee does hold. Moreover, we describe a variant of SparseMap that allows queries with no false dismissals. In addition, we show that with FastMap and MetricMap, the distances of the embedded objects can be much greater than the actual distances. This makes it impossible (or at least impractical) to modify FastMap and MetricMap to guarantee no false dismissals. %B Pattern Analysis and Machine Intelligence, IEEE Transactions on %V 25 %P 530 - 549 %8 2003/05// %@ 0162-8828 %G eng %N 5 %R 10.1109/TPAMI.2003.1195989 %0 Conference Paper %B Data Engineering, 2003. Proceedings. 19th International Conference on %D 2003 %T PXML: a probabilistic semistructured data model and algebra %A Hung,E. %A Getoor, Lise %A V.S. Subrahmanian %K algebra; %K data %K databases; %K instances; %K model; %K models; %K probabilistic %K processing; %K PXML; %K query %K relational %K semistructured %K structures; %K tree %K XML; %X Despite the recent proliferation of work on semistructured data models, there has been little work to date on supporting uncertainty in these models. We propose a model for probabilistic semistructured data (PSD). The advantage of our approach is that it supports a flexible representation that allows the specification of a wide class of distributions over semistructured instances. We provide two semantics for the model and show that the semantics are probabilistically coherent. Next, we develop an extension of the relational algebra to handle probabilistic semistructured data and describe efficient algorithms for answering queries that use this algebra. Finally, we present experimental results showing the efficiency of our algorithms. %B Data Engineering, 2003. Proceedings. 
19th International Conference on %P 467 - 478 %8 2003/03// %G eng %R 10.1109/ICDE.2003.1260814 %0 Conference Paper %B 2002 IEEE Symposium on Security and Privacy, 2002. Proceedings %D 2002 %T P5: a protocol for scalable anonymous communication %A Sherwood,R. %A Bhattacharjee, Bobby %A Srinivasan, Aravind %K Broadcasting %K communication efficiency %K Computer science %K cryptography %K data privacy %K Educational institutions %K Internet %K large anonymous groups %K P5 protocol %K packet-level simulations %K Particle measurements %K Peer to peer computing %K peer-to-peer personal privacy protocol %K privacy %K Protocols %K receiver anonymity %K scalable anonymous communication %K security of data %K sender anonymity %K sender-receiver anonymity %K Size measurement %K telecommunication security %X We present a protocol for anonymous communication over the Internet. Our protocol, called P5 (peer-to-peer personal privacy protocol), provides sender-, receiver-, and sender-receiver anonymity. P5 is designed to be implemented over current Internet protocols, and does not require any special infrastructure support. A novel feature of P5 is that it allows individual participants to trade off degree of anonymity for communication efficiency, and hence can be used to scalably implement large anonymous groups. We present a description of P5, an analysis of its anonymity and communication efficiency, and evaluate its performance using detailed packet-level simulations. %B 2002 IEEE Symposium on Security and Privacy, 2002. Proceedings %I IEEE %P 58 - 70 %8 2002/// %@ 0-7695-1543-6 %G eng %R 10.1109/SECPRI.2002.1004362 %0 Journal Article %J 12th International Packet Video Workshop %D 2002 %T Packet loss recovery for streaming video %A Feamster, Nick %A Balakrishnan,H.
%X While there is an increasing demand for streaming video applications on the Internet, various network characteristics make the deployment of these applications more challenging than traditional TCP-based applications like email and the Web. Packet loss can be detrimental to compressed video with interdependent frames because errors potentially propagate across many frames. While latency requirements do not permit retransmission of all lost data, we leverage the characteristics of MPEG-4 to selectively retransmit only the most important data in the bitstream. When latency constraints do not permit retransmission, we propose a mechanism for recovering this data using postprocessing techniques at the receiver. We quantify the effects of packet loss on the quality of MPEG-4 video, develop an analytical model to explain these effects, present a system to adaptively deliver MPEG-4 video in the face of packet loss and variable Internet conditions, and evaluate the effectiveness of the system under various network conditions. %B 12th International Packet Video Workshop %8 2002/// %G eng %0 Conference Paper %B Pattern Recognition, 2002. Proceedings. 16th International Conference on %D 2002 %T Page classification through logical labelling %A Liang,Jian %A David Doermann %A Ma,M. %A Guo,J. K %K article %K attributed %K base; %K character %K classification; %K constraints; %K document %K document; %K experimental %K global %K graph %K graph; %K hierarchical %K image %K images; %K labelling; %K logical %K model %K noise; %K OCR; %K optical %K page %K pages; %K processing; %K recognition; %K relational %K results; %K technical %K theory; %K title %K unknown %X We propose an integrated approach to page classification and logical labelling. Layout is represented by a fully connected attributed relational graph that is matched to the graph of an unknown document, achieving classification and labelling simultaneously.
By incorporating global constraints in an integrated fashion, ambiguity at the zone level can be reduced, providing robustness to noise and variation. Models are automatically trained from sample documents. Experimental results show promise for the classification and labelling of technical article title pages, and support the idea of a hierarchical model base. %B Pattern Recognition, 2002. Proceedings. 16th International Conference on %V 3 %P 477 - 480 vol.3 %8 2002/// %G eng %R 10.1109/ICPR.2002.1047980 %0 Report %D 2002 %T A Parallel Block Multi-level Preconditioner for the 3D Incompressible Navier--Stokes Equations %A Elman, Howard %A Howle, V. E %A Shadid,John %A Tuminaro,Ray %K Technical Report %X The development of robust and efficient algorithms for both steady-state simulations and fully-implicit time integration of the Navier--Stokes equations is an active research topic. To be effective, the linear subproblems generated by these methods require solution techniques that exhibit robust and rapid convergence. In particular, they should be insensitive to parameters in the problem such as mesh size, time step, and Reynolds number. In this context, we explore a parallel preconditioner based on a block factorization of the coefficient matrix generated in an Oseen nonlinear iteration for the primitive variable formulation of the system. The key to this preconditioner is the approximation of a certain Schur complement operator by a technique first proposed by Kay, Loghin, and Wathen [25] and Silvester, Elman, Kay, and Wathen [45]. The resulting operator entails subsidiary computations (solutions of pressure Poisson and convection--diffusion subproblems) that are similar to those required for decoupled solution methods; however, in this case these solutions are applied as preconditioners to the coupled Oseen system.
One important aspect of this approach is that the convection--diffusion and Poisson subproblems are significantly easier to solve than the entire coupled system, and a solver can be built using tools developed for the subproblems. In this paper, we apply smoothed aggregation algebraic multigrid to both subproblems. Previous work has focused on demonstrating the optimality of these preconditioners with respect to mesh size on serial, two-dimensional, steady-state computations employing geometric multigrid methods; we focus on extending these methods to large-scale, parallel, three-dimensional, transient and steady-state simulations employing algebraic multigrid (AMG) methods. Our results display nearly optimal convergence rates for steady-state solutions as well as for transient solutions over a wide range of CFL numbers on the two-dimensional and three-dimensional lid-driven cavity problem. %I Institute for Advanced Computer Studies, Univ of Maryland, College Park %V UMIACS-TR-2002-95 %8 2002/10/25/ %G eng %U http://drum.lib.umd.edu//handle/1903/1239 %0 Journal Article %J 4th International Workshop on Algorithm Engineering and Experiments %D 2002 %T Partitioning planar graphs with costs and weights %A Mount, Dave %A Stein,C. %X A graph separator is a set of vertices or edges whose removal divides an input graph into components of bounded size. This paper describes new algorithms for computing separators in planar graphs as well as techniques that can be used to speed up their implementation and improve the partition quality. In particular, we consider planar graphs with costs and weights on the vertices, where weights are used to estimate the sizes of the components and costs are used to estimate the size of the separator. We show that one can find a small separator that divides the graph into components of bounded size. We describe implementations of the partitioning algorithms and discuss results of our experiments.
%B 4th International Workshop on Algorithm Engineering and Experiments %V 2409 %P 98 - 110 %8 2002/// %G eng %0 Journal Article %J Proceedings of the 3rd International NASA Workshop on Planning and Scheduling for Space %D 2002 %T PASSAT: A user-centric planning framework %A Myers,K.L. %A Tyson,W.M. %A Wolverton,M.J. %A Jarvis,P.A. %A Lee,T.J. %A desJardins, Marie %X We describe a plan-authoring system called PASSAT (Plan-Authoring System based on Sketches, Advice, and Templates) that combines interactive tools for constructing plans with a suite of automated and mixed-initiative capabilities designed to complement human planning skills. PASSAT is organized around a library of predefined templates that encode task networks describing standard operating procedures and previous cases. Users can select from these templates to apply during plan development, with the system providing various forms of automated assistance. A mixed-initiative plan sketch facility helps users refine outlines for plans to complete solutions, by detecting problems and proposing possible fixes. An advice capability enables user specification of high-level guidelines for plans that the system helps to enforce. Finally, PASSAT includes process facilitation mechanisms designed to help a user track and manage outstanding planning tasks and information requirements, as a means of improving the efficiency and effectiveness of the planning process. PASSAT is designed for applications for which a core of planning knowledge can be captured in predefined action models but where significant user control of the planning process is required. %B Proceedings of the 3rd International NASA Workshop on Planning and Scheduling for Space %8 2002/// %G eng %0 Conference Paper %D 2002 %T Passive replication schemes in AQuA %A Ren,Yansong %A Rubel,P. %A Seri,M. %A Michel Cukier %A Sanders,W. H. %A Courtney,T. 
%K AQuA architecture %K distributed object management %K Fault tolerance %K group members %K large-scale distributed object-oriented systems %K management structure %K multidimensional quality of service %K passive replication scheme %K performance measurements %K reusable technologies %K scalability %K software fault tolerance %K software performance evaluation %K software reusability %K software solutions %X Building large-scale distributed object-oriented systems that provide multidimensional quality of service (QoS) in terms of fault tolerance, scalability, and performance is challenging. In order to meet this challenge, we need an architecture that can ensure that applications' requirements can be met while providing reusable technologies and software solutions. This paper describes techniques, based on the AQuA architecture, that enhance the applications' dependability and scalability by introducing two types of group members and a novel passive replication scheme. In addition, we describe how to make the management structure itself dependable by using the passive replication scheme. Finally, we provide performance measurements for the passive replication scheme. %P 125 - 130 %8 2002/12// %G eng %R 10.1109/PRDC.2002.1185628 %0 Journal Article %J Image and Vision Computing %D 2002 %T Performance analysis of a simple vehicle detection algorithm %A Moon, H. %A Chellapa, Rama %A Rosenfeld, A. %K Aerial image %K Bootstrap %K empirical evaluation %K Performance analysis %K Vehicle detection %X We have performed an end-to-end analysis of a simple model-based vehicle detection algorithm for aerial parking lot images. We constructed a vehicle detection operator by combining four elongated edge operators designed to collect edge responses from the sides of a vehicle. We derived the detection and localization performance of this algorithm, and verified them by experiments. 
Performance degradation due to different camera angles and illuminations was also examined using simulated images. Another important aspect of performance characterization — whether and how much prior information about the scene improves performance — was also investigated. As a statistical diagnostic tool for the detection performance, a computational approach employing bootstrap was used. %B Image and Vision Computing %V 20 %P 1 - 13 %8 2002/01/01/ %@ 0262-8856 %G eng %U http://www.sciencedirect.com/science/article/pii/S0262885601000592 %N 1 %R 10.1016/S0262-8856(01)00059-2 %0 Journal Article %J Numerische Mathematik %D 2002 %T Performance and analysis of saddle point preconditioners for the discrete steady-state Navier-Stokes equations %A Elman, Howard %A Silvester, D. J %A Wathen, A. J %B Numerische Mathematik %V 90 %P 665 - 688 %8 2002/// %G eng %N 4 %0 Conference Paper %D 2002 %T Performance evaluation of a probabilistic replica selection algorithm %A Krishnamurthy, S. %A Sanders,W. H. %A Michel Cukier %K client-server systems %K Dependability %K distributed object management %K dynamic selection algorithm %K Middleware %K probabilistic model %K probabilistic model-based replica selection algorithm %K probability %K quality of service %K real-time systems %K replica failures %K round-robin selection scheme %K static scheme %K time-sensitive distributed applications %K timeliness %K timing failures %K transient overload %X When executing time-sensitive distributed applications, a middleware that provides dependability and timeliness is faced with the important problem of preventing timing failures both under normal conditions and when the quality of service is degraded due to replica failures and transient overload on the server. 
To address this problem, we have designed a probabilistic model-based replica selection algorithm that allows a middleware to choose a set of replicas to service a client based on their ability to meet a client's timeliness requirements. This selection is done based on the prediction made by a probabilistic model that uses the performance history of replicas as inputs. In this paper, we describe the experiments we have conducted to evaluate the ability of this dynamic selection algorithm to meet a client's timing requirements, and compare it with that of a static and round-robin selection scheme under different scenarios %P 119 - 127 %8 2002/// %G eng %R 10.1109/WORDS.2002.1000044 %0 Conference Paper %D 2002 %T Performance evaluation of a QoS-aware framework for providing tunable consistency and timeliness %A Krishnamurthy, S. %A Sanders,W. H. %A Michel Cukier %K client-server systems %K Computer networks %K CORBA-based middleware %K distributed applications %K distributed object management %K Network servers %K QoS %K quality of service %K replica consistency %K replicated services %K server replicas %K timeliness %X Strong replica consistency models ensure that the data delivered by a replica always includes the latest updates, although this may result in poor response times. On the other hand, weak replica consistency models provide quicker access to information, but do not usually provide guarantees about the degree of staleness in the data they deliver. In order to support emerging distributed applications that are characterized by high concurrency demands, an increasing shift towards dynamic content, and timely delivery, we need quality of service models that allow us to explore the intermediate space between these two extreme approaches to replica consistency. 
Further, for better support of time-sensitive applications that can tolerate relaxed consistency in exchange for better responsiveness, we need to understand how the desired level of consistency affects the timeliness of a response. The QoS model we have developed to realize these objectives considers both timeliness and consistency, and treats consistency along two dimensions: order and staleness. We evaluate experimentally the framework we have developed to study the timeliness/consistency tradeoffs for replicated services and present experimental results that compare these tradeoffs in the context of sequential and FIFO ordering. %P 214 - 223 %8 2002/// %G eng %R 10.1109/IWQoS.2002.1006589 %0 Conference Paper %B Pattern Recognition, 2002. Proceedings. 16th International Conference on %D 2002 %T Performance evaluation of object detection algorithms %A Mariano,V.Y. %A Min,Junghye %A Park,Jin-Hyeong %A Kasturi,R. %A Mihalcik,D. %A Huiping Li %A David Doermann %A Drayer,T. %K algorithms; %K common %K data %K DETECTION %K detection; %K Evaluation %K evaluation; %K image %K object %K performance %K recognition; %K resource %K set; %K system; %K text-detection %K video %X The continuous development of object detection algorithms is ushering in the need for evaluation tools to quantify algorithm performance. In this paper a set of seven metrics are proposed for quantifying different aspects of a detection algorithm's performance. The strengths and weaknesses of these metrics are described. They are implemented in the Video Performance Evaluation Resource (ViPER) system and will be used to evaluate algorithms for detecting text, faces, moving people and vehicles. Results for running two previous text-detection algorithms on a common data set are presented. %B Pattern Recognition, 2002. Proceedings. 
16th International Conference on %V 3 %P 965 - 969 vol.3 %8 2002/// %G eng %R 10.1109/ICPR.2002.1048198 %0 Conference Paper %B Proceedings of the 6th Workshop on Languages, Compilers, and Run-Time Systems for Scalable Computers %D 2002 %T Persistent caching in a multiple query optimization framework %A Andrade,H. %A Kurc, T. %A Catalyurek,U. %A Sussman, Alan %A Saltz, J. %X This paper focuses on persistent caching in multi-client environments, which aims to improve the performance of a data server by caching on disk query results that can be expensive to produce. We present and evaluate extensions to an existing multi-query optimization framework, called MQO, to incorporate persistent caching capabilities. %B Proceedings of the 6th Workshop on Languages, Compilers, and Run-Time Systems for Scalable Computers %8 2002/// %G eng %0 Journal Article %J Magnetics, IEEE Transactions on %D 2002 %T Perturbation technique for LLG dynamics in uniformly magnetized bodies subject to RF fields %A Bertotti,G. %A Mayergoyz, Issak D %A Serpico,C. %K analytical %K anisotropy %K anisotropy; %K applied %K bodies; %K circularly %K component; %K constant-in-time %K constant; %K damping %K demagnetisation; %K demagnetizing %K differential %K dynamics; %K effective %K elliptically %K equation; %K equations; %K exactly %K factors; %K field; %K film; %K films; %K Frequency %K gyromagnetic %K harmonic; %K higher %K Landau-Lifshitz-Gilbert %K large %K linear %K magnetic %K magnetisation; %K magnetization %K magnetization; %K magnetized %K modes; %K MOTION %K order %K partial %K particle; %K particles; %K perturbation %K polarized %K radio %K ratio; %K RF %K saturation %K solution; %K solvable %K system; %K technique; %K techniques; %K thin %K time-harmonic %K uniaxial %K uniformly %X The problem of magnetization dynamics of a uniformly magnetized uniaxial particle or film, under elliptically polarized applied field, is considered.
In the special case of circularly polarized applied field and particles (films) with a symmetry axis, pure time-harmonic magnetization modes exist that can be computed analytically. Deviations from these highly symmetric conditions are treated as perturbations of the symmetric case. The perturbation technique leads to an exactly solvable system of linear differential equations for the perturbations, which enables one to compute higher-order magnetization harmonics. The analytical solutions are obtained and then compared with numerical results. %B Magnetics, IEEE Transactions on %V 38 %P 2403 - 2405 %8 2002/09// %@ 0018-9464 %G eng %N 5 %R 10.1109/TMAG.2002.803596 %0 Journal Article %J interactions %D 2002 %T A photo history of SIGCHI: evolution of design from personal to public %A Shneiderman, Ben %A Kang,Hyunmo %A Kules,Bill %A Plaisant, Catherine %A Rose,Anne %A Rucheir,Richesh %X For 20 years I have been photographing personalities and events in the emerging discipline of human-computer interaction. Until now, only a few of these photos were published in newsletters or were shown to visitors who sought them out. Now this photo history is going from a personal record to a public archive. This archive should be interesting for professional members of this community who want to reminisce, as well as for historians and journalists who want to understand what happened. Students and Web surfers may also want to look at the people who created better interfaces and more satisfying user experiences. %B interactions %V 9 %P 17 - 23 %8 2002/05// %@ 1072-5520 %G eng %U http://doi.acm.org/10.1145/506671.506682 %N 3 %R 10.1145/506671.506682 %0 Journal Article %J Mol Phylogenet Evol %D 2002 %T Phylogenetic analysis based on 18S ribosomal RNA gene sequences supports the existence of class Polyacanthocephala (Acanthocephala) %A García-Varela,M %A Cummings, Michael P. %A Pérez-Ponce de León,G. %A Gardner,S. L %A Laclette,J.
P %X Members of phylum Acanthocephala are parasites of vertebrates and arthropods and are distributed worldwide. The phylum has traditionally been divided into three classes, Archiacanthocephala, Palaeacanthocephala, and Eoacanthocephala; a fourth class, Polyacanthocephala, has been recently proposed. However, erection of this new class, based on morphological characters, has been controversial. We sequenced the near complete 18S rRNA gene of Polyacanthorhynchus caballeroi (Polyacanthocephala) and Rhadinorhynchus sp. (Palaeacanthocephala); these sequences were aligned with another 21 sequences of acanthocephalans representing the three widely recognized classes of the phylum and with 16 sequences from outgroup taxa. Phylogenetic relationships inferred by maximum-likelihood and maximum-parsimony analyses showed Archiacanthocephala as the most basal group within the phylum, whereas classes Polyacanthocephala + Eoacanthocephala formed a monophyletic clade, with Palaeacanthocephala as its sister group. These results are consistent with the view of Polyacanthocephala representing an independent class within Acanthocephala. %B Mol Phylogenet Evol %V 23 %P 288 - 292 %8 2002/05// %G eng %N 2 %R 10.1016/S1055-7903(02)00020-9 %0 Conference Paper %B Proceedings: SIGCHI %D 2002 %T Physical Programming: Designing Tools for Children to Create Physical Interactive %A Montemayor,J. %A Druin, Allison %A Farber,A. %A Simms,S. %A Churaman,W. %B Proceedings: SIGCHI %8 2002/// %G eng %0 Conference Paper %D 2002 %T Planning in a multi-agent environment: theory and practice %A Dix,Jürgen %A Muñoz-Avila,Héctor %A Nau, Dana S. %A Zhang,Lingling %K agent architectures %K agent selection and planning %K formalisms and logics %X We give the theoretical foundations and empirical evaluation of a planning agent, shop, performing HTN planning in a multi-agent environment. 
shop is based on A-SHOP, an agentized version of the original SHOP HTN planning algorithm, and is integrated in the IMPACT multi-agent environment. We ran several experiments involving accessing various distributed, heterogeneous information sources, based on simplified versions of noncombatant evacuation operations (NEOs). As a result, we noticed that in such realistic settings the time spent on communication (including network time) is orders of magnitude higher than that spent on the actual inference process. This has important consequences for optimizations of such planners. Our main results are: (1) using NEOs as new, more realistic benchmarks for planners acting in an agent environment, and (2) a memoization mechanism implemented on top of shop, which improves the overall performance in a significant way. %S AAMAS '02 %I ACM %C New York, NY, USA %P 944 - 945 %8 2002/// %@ 1-58113-480-0 %G eng %U http://doi.acm.org/10.1145/544862.544960 %R 10.1145/544862.544960 %0 Journal Article %J Pattern Recognition %D 2002 %T Polydioptric cameras: New eyes for structure from motion %A Neumann, J. %A Fermüller, Cornelia %A Aloimonos, J. %B Pattern Recognition %P 618 - 625 %8 2002/// %G eng %0 Book Section %B Pattern Recognition %D 2002 %T Polydioptric Cameras: New Eyes for Structure from Motion %A Neumann, Jan %A Fermüller, Cornelia %A Aloimonos, J. %E Van Gool,Luc %X We examine the influence of camera design on the estimation of the motion and structure of a scene from video data. Every camera captures a subset of the light rays passing through some volume in space. By relating the differential structure of the time varying space of light rays to different known and new camera designs, we can establish a hierarchy of cameras. This hierarchy is based upon the stability and complexity of the computations necessary to estimate structure and motion.
At the low end of this hierarchy is the standard planar pinhole camera for which the structure from motion problem is non-linear and ill-posed. At the high end is a camera, which we call the full field of view polydioptric camera, for which the problem is linear and stable. We develop design suggestions for the polydioptric camera, and based upon this new design we propose a linear algorithm for structure-from-motion estimation, which combines differential motion estimation with differential stereo. %B Pattern Recognition %S Lecture Notes in Computer Science %I Springer Berlin / Heidelberg %V 2449 %P 618 - 625 %8 2002/// %@ 978-3-540-44209-7 %G eng %U http://dx.doi.org/10.1007/3-540-45783-6_74 %0 Journal Article %J Artificial life %D 2002 %T Predicting nearest agent distances in artificial worlds %A Schulz,R. %A Reggia, James A. %B Artificial life %V 8 %P 247 - 264 %8 2002/// %G eng %N 3 %0 Journal Article %J Nucleic Acids Research %D 2002 %T Predicting Transcription Factor Synergism %A Hannenhalli, Sridhar %A Levy,Samuel %X Transcriptional regulation is mediated by a battery of transcription factor (TF) proteins that form complexes involving protein–protein and protein–DNA interactions. Individual TFs bind to their cognate cis‐elements or transcription factor‐binding sites (TFBS). TFBS are organized on the DNA proximal to the gene in groups confined to a few hundred base pair regions. These groups are referred to as modules. Various modules work together to provide the combinatorial regulation of gene transcription in response to various developmental and environmental conditions. The sets of modules constitute a promoter model. Determining the TFs that preferentially work in concert as part of a module is an essential component of understanding transcriptional regulation. The TFs that act synergistically in such a fashion are likely to have their cis‐elements co‐localized on the genome at specific distances apart.
We exploit this notion to predict TF pairs that are likely to be part of a transcriptional module on the human genome sequence. The computational method is validated statistically, using known interacting pairs extracted from the literature. There are 251 TFBS pairs up to 50 bp apart and 70 TFBS pairs up to 200 bp apart that score higher than any of the known synergistic pairs. Further investigation of 50 pairs randomly selected from each of these two sets using PubMed queries provided additional supporting evidence from the existing biological literature suggesting TF synergism for these novel pairs. %B Nucleic Acids Research %V 30 %P 4278 - 4284 %8 2002/10/01/ %@ 0305-1048, 1362-4962 %G eng %U http://nar.oxfordjournals.org/content/30/19/4278 %N 19 %R 10.1093/nar/gkf535 %0 Journal Article %J Knowledge and Data Engineering, IEEE Transactions on %D 2002 %T Presentation planning for distributed VoD systems %A Hwang,Eenjun %A Prabhakaran,B. %A V.S. Subrahmanian %K Computer %K computing; %K databases; %K demand; %K distributed %K local %K multimedia %K network; %K on %K optimal %K plan; %K plans; %K presentation %K presentation; %K server; %K servers; %K video %K video-on-demand; %K VoD; %X A distributed video-on-demand (VoD) system is one where a collection of video data is located at dispersed sites across a computer network. In a single site environment, a local video server retrieves video data from its local storage device. However, in distributed VoD systems, when a customer requests a movie from the local server, the server may need to interact with other servers located across the network. In this paper, we present different types of presentation plans that a local server can construct in order to satisfy a customer request. Informally speaking, a presentation plan is a temporally synchronized sequence of steps that the local server must perform in order to present the requested movie to the customer.
This involves obtaining commitments from other video servers, obtaining commitments from the network service provider, as well as making commitments of local resources, while keeping within the limitations of available bandwidth, available buffer, and customer data consumption rates. Furthermore, in order to evaluate the quality of a presentation plan, we introduce two measures of optimality for presentation plans: minimizing wait time for a customer and minimizing access bandwidth which, informally speaking, specifies how much network/disk bandwidth is used. We develop algorithms to compute three different optimal presentation plans that work at a block level, or at a segment level, or with a hybrid mix of the two, and compare their performance through simulation experiments. We have also mathematically proven effects of increased buffer or bandwidth and data replications for presentation plans which had previously been verified experimentally in the literature. %B Knowledge and Data Engineering, IEEE Transactions on %V 14 %P 1059 - 1077 %8 2002/10//sep %@ 1041-4347 %G eng %N 5 %R 10.1109/TKDE.2002.1033774 %0 Journal Article %J Technical Reports from UMIACS, UMIACS-TR-2002-30 %D 2002 %T A Probabilistic Clustering-Based Indoor Location Determination System %A Youssef,Moustafa A %A Agrawala, Ashok K. %A A. Udaya Shankar %A Noh,Sam H %K Technical Report %X We present an indoor location determination system based on signalstrength probability distributions for tackling the noisy wireless channel and clustering to reduce computation requirements. We provide two implementation techniques, namely, Joint Clustering and Incremental Triangulation and describe their tradeoffs in terms of location determination accuracy and computation requirement. Both techniques have been incorporated in two implemented context-aware systems: User Positioning System and the Rover System, both running on Compaq iPAQ Pocket PC's with Familiar distribution of Linux for PDA's. 
The results obtained show that both techniques give the user location with over 90% accuracy to within 7 feet with very low computation requirements, hence enabling a set of context-aware applications. (Also cross-referenced as UMIACS-TR-2002-30) %B Technical Reports from UMIACS, UMIACS-TR-2002-30 %8 2002/04/04/ %G eng %U http://drum.lib.umd.edu/handle/1903/1192 %0 Book Section %B Computer Vision — ECCV 2002 %D 2002 %T Probabilistic Human Recognition from Video %A Zhou,Shaohua %A Chellapa, Rama %E Heyden,Anders %E Sparr,Gunnar %E Nielsen,Mads %E Johansen,Peter %X This paper presents a method for incorporating temporal information in a video sequence for the task of human recognition. A time series state space model, parameterized by a tracking state vector and a recognizing identity variable, is proposed to simultaneously characterize the kinematics and identity. Two sequential importance sampling (SIS) methods, a brute-force version and an efficient version, are developed to provide numerical solutions to the model. The joint distribution of both state vector and identity variable is estimated at each time instant and then propagated to the next time instant. Marginalization over the state vector yields a robust estimate of the posterior distribution of the identity variable. Due to the propagation of identity and kinematics, a degeneracy in posterior probability of the identity variable is achieved to give improved recognition. This evolving behavior is characterized using changes in entropy. The effectiveness of this approach is illustrated using experimental results on low resolution face data and upper body data. %B Computer Vision — ECCV 2002 %S Lecture Notes in Computer Science %I Springer Berlin / Heidelberg %V 2352 %P 173 - 183 %8 2002/// %@ 978-3-540-43746-8 %G eng %U http://dx.doi.org/10.1007/3-540-47977-5_45 %0 Conference Paper %B Image Processing. 2002. Proceedings.
2002 International Conference on %D 2002 %T Probabilistic recognition of human faces from video %A Chellapa, Rama %A Kruger, V. %A Zhou,Shaohua %K Bayes %K Bayesian %K CMU; %K distribution; %K Face %K faces; %K gallery; %K handling; %K human %K image %K images; %K importance %K likelihood; %K methods; %K NIST/USF; %K observation %K posterior %K probabilistic %K probability; %K processing; %K propagation; %K recognition; %K sampling; %K sequential %K signal %K still %K Still-to-video %K Uncertainty %K video %K Video-to-video %X Most present face recognition approaches recognize faces based on still images. We present a novel approach to recognize faces in video. In that scenario, the face gallery may consist of still images or may be derived from videos. For evidence integration we use classical Bayesian propagation over time and compute the posterior distribution using sequential importance sampling. The probabilistic approach allows us to handle uncertainties in a systematic manner. Experimental results using videos collected by NIST/USF and CMU illustrate the effectiveness of this approach in both still-to-video and video-to-video scenarios with appropriate model choices. %B Image Processing. 2002. Proceedings. 2002 International Conference on %V 1 %P I-41 - I-44 vol.1 %8 2002/// %G eng %R 10.1109/ICIP.2002.1037954 %0 Conference Paper %D 2002 %T Probabilistic validation of intrusion tolerance %A Sanders,W. H. %A Michel Cukier %A Webber,F. %A Pal,P. %A Watro,R.
%P 78 - 79 %8 2002/// %G eng %U https://www.perform.csl.illinois.edu/Papers/USAN_papers/02SAN02.pdf %0 Journal Article %J Parallel Computing %D 2002 %T Processing large-scale multi-dimensional data in parallel and distributed environments %A Beynon,Michael %A Chang,Chialin %A Catalyurek,Umit %A Kurc,Tahsin %A Sussman, Alan %A Andrade,Henrique %A Ferreira,Renato %A Saltz,Joel %K Data-intensive applications %K Distributed computing %K Multi-dimensional datasets %K PARALLEL PROCESSING %K Runtime systems %X Analysis of data is an important step in understanding and solving a scientific problem. Analysis involves extracting the data of interest from all the available raw data in a dataset and processing it into a data product. However, in many areas of science and engineering, a scientist's ability to analyze information is increasingly becoming hindered by dataset sizes. The vast amount of data in scientific datasets makes it a difficult task to efficiently access the data of interest, and manage potentially heterogeneous system resources to process the data. Subsetting and aggregation are common operations executed in a wide range of data-intensive applications. We argue that common runtime and programming support can be developed for applications that query and manipulate large datasets. This paper presents a compendium of frameworks and methods we have developed to support efficient execution of subsetting and aggregation operations in applications that query and manipulate large, multi-dimensional datasets in parallel and distributed computing environments. 
%B Parallel Computing %V 28 %P 827 - 859 %8 2002/05// %@ 0167-8191 %G eng %U http://www.sciencedirect.com/science/article/pii/S0167819102000972 %N 5 %R 10.1016/S0167-8191(02)00097-2 %0 Journal Article %J ACM SIGCAPH Computers and the Physically Handicapped %D 2002 %T Promoting universal usability with multi-layer interface design %A Shneiderman, Ben %K first-time user %K Graphical user interfaces %K multi-layer interface %K novice user %K online help %K universal usability %X Increased interest in universal usability is causing some researchers to study advanced strategies for satisfying first-time as well as intermittent and expert users. This paper promotes the idea of multi-layer interface designs that enable first-time and novice users to begin with a limited set of features at layer 1. They can remain at layer 1, then move up to higher layers when needed or when they have time to learn further features. The arguments for and against multi-layer interfaces are presented with two example systems: a word processor with 8 layers and an interactive map with 3 layers. New research methods and directions are proposed. %B ACM SIGCAPH Computers and the Physically Handicapped %P 1 - 8 %8 2002/06// %@ 0163-5727 %G eng %U http://doi.acm.org/10.1145/960201.957206 %N 73-74 %R 10.1145/960201.957206 %0 Journal Article %J European Journal of Biochemistry %D 2002 %T Purification and properties of the extracellular lipase, LipA, of Acinetobacter sp. RAG‐1 %A Snellman,Erick A. %A Sullivan,Elise R. %A Rita R Colwell %K Acinetobacter sp. RAG‐1 %K LipA %K lipase %K protein purification %K zymogram %X An extracellular lipase, LipA, extracted from Acinetobacter sp. RAG-1 grown on hexadecane was purified and properties of the enzyme investigated. The enzyme is released into the growth medium during the transition to stationary phase. The lipase was harvested from cells grown to stationary phase, and purified with 22% yield and > 10-fold purification. 
The protein demonstrates little affinity for anion exchange resins, with contaminating proteins removed by passing crude supernatants over a Mono Q column. The lipase was bound to a butyl Sepharose column and eluted in a Triton X-100 gradient. The molecular mass (33 kDa) was determined employing SDS/PAGE. LipA was found to be stable at pH 5.8–9.0, with optimal activity at 9.0. The lipase remained active at temperatures up to 70 °C, with maximal activity observed at 55 °C. LipA is active against a wide range of fatty acid esters of p-nitrophenyl, but preferentially attacks medium length acyl chains (C6, C8). The enzyme demonstrates hydrolytic activity in emulsions of both medium and long chain triglycerides, as demonstrated by zymogram analysis. RAG-1 lipase is stabilized by Ca2+, with no loss in activity observed in preparations containing the cation, compared to a 70% loss over 30 h without Ca2+. The lipase is strongly inhibited by EDTA, Hg2+, and Cu2+, but shows no loss in activity after incubation with other metals or inhibitors examined in this study. The protein retains more than 75% of its initial activity after exposure to organic solvents, but is rapidly deactivated by pyridine. RAG-1 lipase offers potential for use as a biocatalyst. %B European Journal of Biochemistry %V 269 %P 5771 - 5779 %8 2002/12/01/ %@ 1432-1033 %G eng %U http://onlinelibrary.wiley.com/doi/10.1046/j.1432-1033.2002.03235.x/full %N 23 %R 10.1046/j.1432-1033.2002.03235.x %0 Journal Article %J Signal Processing, IEEE Transactions on %D 2001 %T Parameterized dataflow modeling for DSP systems %A Bhattacharya,B. %A Bhattacharyya, Shuvra S. %B Signal Processing, IEEE Transactions on %V 49 %P 2408 - 2421 %8 2001/// %G eng %N 10 %0 Conference Paper %B INTERNATIONAL JOINT CONFERENCE ON ARTIFICIAL INTELLIGENCE %D 2001 %T Partitioning activities for agents %A Ozcan,F. %A V.S. 
Subrahmanian %X There are now numerous agent applications that track interests of thousands of users in situations where changes occur continuously. [Shim et al., 1994] suggested that such agents can be made efficient by merging commonalities in their activities. However, past algorithms cannot merge more than 10 or 20 concurrent activities. We develop techniques so that a large number of concurrent activities (typically over 1000) can be partitioned into components (groups of activities) of small size (e.g. 10 to 50) so that each component's activities can be merged using previously developed algorithms (e.g. [Shim et al., 1994]). We first formalize the problem and show that finding optimal partitions is NP-hard. We then develop three algorithms - Greedy, A*-based and BAB (branch and bound). A*-based and BAB are both guaranteed to compute optimal solutions. Greedy on the other hand uses heuristics and typically finds suboptimal solutions. We implemented all three algorithms. We experimentally show that the greedy algorithm finds partitions whose costs are at most 14% worse than those found by A*-based and BAB; however, Greedy is able to handle over a thousand concurrent requests very fast while the other two methods are much slower and able to handle only 10-20 requests. Hence, Greedy appears to be the best. %B INTERNATIONAL JOINT CONFERENCE ON ARTIFICIAL INTELLIGENCE %V 17 %P 1218 - 1228 %8 2001/// %G eng %0 Book Section %B From Fragments to Objects Segmentation and Grouping in Vision %D 2001 %T Perceptual organization as generic object recognition %A Jacobs, David W. %E Thomas F. Shipley and Philip J. Kellman %X We approach some aspects of perceptual organization as the process of fitting generic models of objects to image data. A generic model of shape encodes prior knowledge of what shapes are likely to come from real objects.
For such a model to be useful, it must also lead to efficient computations. We show that models of shape based on local properties of objects can be effectively used by simple, neurally plausible networks, and that they can still encode many perceptually important properties. We also discuss the relationship between perceptual salience and viewpoint invariance. Many gestalt properties are the subset of viewpoint invariant properties that can be encoded using the smallest possible sets of features, making them the ecologically valid properties that can also be used with computational efficiency. These results suggest that implicit models of shape used in perceptual organization arise from a combination of ecological and computational constraints. Finally, we discuss experiments demonstrating the role of convexity in amodal completion. These experiments point out some of the limitations of simple local shape models, and indicate the potential role that the part structure of objects also plays in perceptual organization. %B From Fragments to Objects Segmentation and Grouping in Vision %I North-Holland %V 130 %P 295 - 329 %8 2001/// %@ 0166-4115 %G eng %U http://www.sciencedirect.com/science/article/pii/S0166411501800303 %0 Conference Paper %B Active Middleware Services, 2001. Third Annual International Workshop on %D 2001 %T Performance optimization for data intensive grid applications %A Beynon,M. D %A Sussman, Alan %A Catalyurek,U. %A Kurc, T. %A Saltz, J. %B Active Middleware Services, 2001. Third Annual International Workshop on %P 97 - 105 %8 2001/// %G eng %0 Conference Paper %B Information Security and Privacy %D 2001 %T Personal secure booting %A Itoi,N. %A Arbaugh, William A. %A Pollack,S. %A Reeves,D.
%B Information Security and Privacy %P 130 - 144 %8 2001/// %G eng %0 Journal Article %J International Journal of Computational Geometry & Applications %D 2001 %T A point-placement strategy for conforming Delaunay tetrahedralization %A Mount, Dave %A Gable,C. W %X A strategy is presented to find a set of points that yields a Conforming Delaunay tetrahedralization of a three-dimensional Piecewise-Linear complex (PLC). This algorithm is novel because it imposes no angle restrictions on the input PLC. In the process, an algorithm is described that computes a planar conforming Delaunay triangulation of a Planar Straight-Line Graph (PSLG) such that each triangle has a bounded circumradius, which may be of independent interest. %B International Journal of Computational Geometry & Applications %V 11 %P 669 - 682 %8 2001/// %G eng %N 6 %R 10.1142/S0218195901000699 %0 Conference Paper %B IEEE INFOCOM 2001. Twentieth Annual Joint Conference of the IEEE Computer and Communications Societies. Proceedings %D 2001 %T Practical programmable packets %A Moore,J. T %A Hicks, Michael W. %A Nettles,S. %K active packet language %K Application software %K compiler %K Contracts %K Data security %K efficiency %K Explosives %K INFORMATION SCIENCE %K Internet %K IP %K IP networks %K low-level packet language %K packet switching %K performance %K PLAN %K practical programmable packets %K program compilers %K Protection %K resource control %K Resource management %K safe and nimble active packets %K Safety %K Security %K SNAP %K software IP router %K Software performance %K telecommunication security %K Transport protocols %X We present SNAP (safe and nimble active packets), a new scheme for programmable (or active) packets centered around a new low-level packet language. Unlike previous active packet approaches, SNAP is practical: namely, adding significant flexibility over IP without compromising safety and security or efficiency.
In this paper we show how to compile from the well-known active packet language PLAN to SNAP, showing that SNAP retains PLAN's flexibility; give proof sketches of its novel approach to resource control; and present experimental data showing SNAP attains performance very close to that of a software IP router. %B IEEE INFOCOM 2001. Twentieth Annual Joint Conference of the IEEE Computer and Communications Societies. Proceedings %I IEEE %V 1 %P 41-50 vol.1 %8 2001/// %@ 0-7803-7016-3 %G eng %R 10.1109/INFCOM.2001.916685 %0 Report %D 2001 %T Preconditioners for Saddle Point Problems Arising in Computational Fluid Dynamics %A Elman, Howard %K Technical Report %X Discretization and linearization of the incompressible Navier-Stokes equations leads to linear algebraic systems in which the coefficient matrix has the form of a saddle point problem \begin{pmatrix} F & B^T \\ B & 0 \end{pmatrix} \begin{pmatrix} u \\ p \end{pmatrix} = \begin{pmatrix} f \\ g \end{pmatrix} (1). In this paper, we describe the development of efficient and general iterative solution algorithms for this class of problems. We review the case where (1) arises from the steady-state Stokes equations and show that solution methods such as the Uzawa algorithm lead naturally to a focus on the Schur complement operator BF^{-1}B^T together with efficient strategies of applying the action of F^{-1} to a vector. We then discuss the advantages of explicitly working with the coupled form of the block system (1). Using this point of view, we describe some new algorithms derived by developing efficient methods for the Schur complement systems arising from the Navier-Stokes equations, and we demonstrate their effectiveness for solving both steady-state and evolutionary problems.
(Also referenced as UMIACS-TR-2001-88) %I Institute for Advanced Computer Studies, Univ of Maryland, College Park %V UMIACS-TR-2001-88 %8 2001/12/03/ %G eng %U http://drum.lib.umd.edu//handle/1903/1170 %0 Journal Article %J Journal of Communications and Networks %D 2001 %T The price of safety in an active network %A Alexander,D. S %A Menage,P. B %A Keromytis,A. D %A Arbaugh, William A. %A Anagnostakis,K. G %A Smith,J. M %B Journal of Communications and Networks %V 3 %P 4 - 18 %8 2001/// %G eng %N 1 %0 Conference Paper %B In Proc. ICML %D 2001 %T Probabilistic models of relational structure %A Getoor, Lise %A Friedman,N. %A Taskar,B. %X Most real-world data is stored in relational form. In contrast, most statistical learning methods work with “flat” data representations, forcing us to convert our data into a form that loses much of the relational structure. The recently introduced framework of probabilistic relational models (PRMs) allows us to represent probabilistic models over multiple entities that utilize the relations between them. In this paper, we propose the use of probabilistic models not only for the attributes in a relational model, but for the relational structure itself. We propose two mechanisms for modeling structural uncertainty: reference uncertainty and existence uncertainty. We describe the appropriate conditions for using each model and present learning algorithms for each. We present experimental results showing that the learned models can be used to predict relational structure and, moreover, the observed relational structure can be used to provide better predictions for the attributes in the model. %B In Proc. ICML %8 2001/// %G eng %0 Journal Article %J IJCAI workshop on text learning: beyond supervision %D 2001 %T Probabilistic models of text and link structure for hypertext classification %A Getoor, Lise %A Segal,E. %A Taskar,B. %A Koller,D. %X Most text classification methods treat each document as an independent instance.
However, in many text domains, documents are linked and the topics of linked documents are correlated. For example, web pages of related topics are often connected by hyperlinks and scientific papers from related fields are commonly linked by citations. We propose a unified probabilistic model for both the textual content and the link structure of a document collection. Our model is based on the recently introduced framework of Probabilistic Relational Models (PRMs), which allows us to capture correlations between linked documents. We show how to learn these models from data and use them efficiently for classification. Since exact methods for classification in these large models are intractable, we utilize belief propagation, an approximate inference algorithm. Belief propagation automatically induces a very natural behavior, where our knowledge about one document helps us classify related ones, which in turn help us classify others. We present preliminary empirical results on a dataset of university web pages. %B IJCAI workshop on text learning: beyond supervision %P 24 - 29 %8 2001/// %G eng %0 Journal Article %J ACM Trans. Database Syst. %D 2001 %T Probabilistic object bases %A Eiter,Thomas %A Lu,James J. %A Lukasiewicz,Thomas %A V.S. Subrahmanian %K consistency %K object-oriented database %K probabilistic object algebra %K probabilistic object base %K probability %K query language %K Query optimization %X Although there are many applications where an object-oriented data model is a good way of representing and querying data, current object database systems are unable to handle objects whose attributes are uncertain. In this article, we extend previous work by Kornatzky and Shimony to develop an algebra to handle object bases with uncertainty. We propose concepts of consistency for such object bases, together with an NP-completeness result, and classes of probabilistic object bases for which consistency is polynomially checkable. 
In addition, as certain operations involve conjunctions and disjunctions of events, and as the probability of conjunctive and disjunctive events depends both on the probabilities of the primitive events involved as well as on what is known (if anything) about the relationship between the events, we show how all our algebraic operations may be performed under arbitrary probabilistic conjunction and disjunction strategies. We also develop a host of equivalence results in our algebra, which may be used as rewrite rules for query optimization. Last but not least, we have developed a prototype probabilistic object base server on top of ObjectStore. We describe experiments to assess the efficiency of different possible rewrite rules. %B ACM Trans. Database Syst. %V 26 %P 264 - 312 %8 2001/09// %@ 0362-5915 %G eng %U http://doi.acm.org/10.1145/502030.502031 %N 3 %R 10.1145/502030.502031 %0 Journal Article %J ACM Transactions on Database Systems %D 2001 %T Probabilistic temporal databases %A Dekhtyar,A. %A Ross,R. %A V.S. Subrahmanian %B ACM Transactions on Database Systems %V 26 %P 41 - 95 %8 2001/// %G eng %N 1 %0 Journal Article %J ACM Trans. Database Syst. %D 2001 %T Probabilistic temporal databases, I: algebra %A Dekhtyar,Alex %A Ross,Robert %A V.S. Subrahmanian %X Dyreson and Snodgrass have drawn attention to the fact that, in many temporal database applications, there is often uncertainty about the start time of events, the end time of events, and the duration of events. When the granularity of time is small (e.g., milliseconds), a statement such as “Packet p was shipped sometime during the first 5 days of January, 1998” leads to a massive amount of uncertainty (5×24×60×60×1000 possibilities). As noted in Zaniolo et al. [1997], past attempts to deal with uncertainty in databases have been restricted to relatively small amounts of uncertainty in attributes. 
Dyreson and Snodgrass have taken an important first step towards solving this problem. In this article, we first introduce the syntax of Temporal-Probabilistic (TP) relations and then show how they can be converted to an explicit, significantly more space-consuming form, called Annotated Relations. We then present a theoretical annotated temporal algebra (TATA). Being explicit, TATA is convenient for specifying how the algebraic operations should behave, but is impractical to use because annotated relations are overwhelmingly large. Next, we present a temporal probabilistic algebra (TPA). We show that our definition of the TP-algebra provides a correct implementation of TATA despite the fact that it operates on implicit, succinct TP-relations instead of overwhelmingly large annotated relations. Finally, we report on timings for an implementation of the TP-Algebra built on top of ODBC. %B ACM Trans. Database Syst. %V 26 %P 41 - 95 %8 2001/03// %@ 0362-5915 %G eng %U http://doi.acm.org/10.1145/383734.383736 %N 1 %R 10.1145/383734.383736 %0 Journal Article %J Technical Reports from UMIACS, UMIACS-TR-2001-79 %D 2001 %T Probabilistic Temporal Databases, II: Calculus and Query Processing %A Dekhtyar,A. %A Ozcan,F. %A Ross,R. %A V.S. Subrahmanian %X There is a vast class of applications in which we know that a certain event occurred, but do not know exactly when it occurred. However, as studied by Dyreson and Snodgrass [1998], there are many natural scenarios where probability distributions exist and quantify this uncertainty. Dekhtyar et al. extended Dyreson and Snodgrass's work and defined an extension of the relational algebra to handle such data. The first contribution of this paper is a declarative temporal probabilistic (TP for short) calculus which we show is equivalent in expressive power to the temporal probabilistic algebra of Dekhtyar et al. Our second major contribution is a set of equivalence and containment results for the TP-algebra. 
Our third contribution is the development of cost models that may be used to estimate the cost of TP-algebra operations. Our fourth contribution is an experimental evaluation of the accuracy of our cost models and the use of the equivalence results as rewrite rules for optimizing queries by using an implementation of TP-databases on top of ODBC. %B Technical Reports from UMIACS, UMIACS-TR-2001-79 %8 2001/// %G eng %0 Journal Article %J Beijing: Electronic Industry Press %D 2001 %T Programming languages: designing and implementation %A Zelkowitz, Marvin V %B Beijing: Electronic Industry Press %V 6 %P 46 - 65 %8 2001/// %G eng %0 Journal Article %J Pattern Analysis and Machine Intelligence, IEEE Transactions on %D 2001 %T Projective alignment with regions %A Basri,R. %A Jacobs, David W. %K algebra;object %K alignment;projective %K approach;segmentation %K errors;image %K fixed %K objects;pose %K occlusion;planar %K patterns;partial %K points;flow %K recognition; %K recovery;projective %K segmentation;matrix %K transformations;region-based %X We have previously proposed (Basri and Jacobs, 1999, and Jacobs and Basri, 1999) an approach to recognition that uses regions to determine the pose of objects while allowing for partial occlusion of the regions. Regions introduce an attractive alternative to existing global and local approaches, since, unlike global features, they can handle occlusion and segmentation errors, and unlike local features they are not as sensitive to sensor errors, and they are easier to match. The region-based approach also uses image information directly, without the construction of intermediate representations, such as algebraic descriptions, which may be difficult to reliably compute. We further analyze properties of the method for planar objects undergoing projective transformations. 
In particular, we prove that three visible regions are sufficient to determine the transformation uniquely and that for a large class of objects, two regions are insufficient for this purpose. However, we show that when several regions are available, the pose of the object can generally be recovered even when some or all regions are significantly occluded. Our analysis is based on investigating the flow patterns of points under projective transformations in the presence of fixed points. %B Pattern Analysis and Machine Intelligence, IEEE Transactions on %V 23 %P 519 - 527 %8 2001/05// %@ 0162-8828 %G eng %N 5 %R 10.1109/34.922709 %0 Journal Article %J Bioinformatics %D 2001 %T Promoter prediction in the human genome %A Hannenhalli, Sridhar %A Levy,S. %X Computational prediction of eukaryotic polII promoters has been one of the most elusive problems despite considerable effort devoted to the study. Researchers have looked for various types of signals around the transcriptional start site (TSS), viz. oligo-nucleotide statistics, potential binding sites for core factors, clusters of binding sites, proximity to CpG islands, etc. The proximity of CpG islands to gene starts is now a well-established fact, although until recently, it was based on very little genomic data. In this work we explore the possibility of enhancing the promoter prediction accuracy by combining CpG island information with a few other, biologically motivated, seemingly independent signals that cover most of the known knowledge. We benchmarked the method on much larger genomic datasets than previous studies. We were able to improve slightly upon current prediction accuracy. Furthermore, we observe that CpG islands are the most dominant signals and the other signals do not improve the prediction. 
This suggests that computational prediction of promoters for genes with no associated CpG island (typically genes with tissue-specific expression) may not even be possible by looking only at the immediate neighborhood of the TSS. We suggest some biological experiments and studies to better understand the biology of transcription. %B Bioinformatics %V 17 %P S90-S96 %8 2001/06/01/ %@ 1367-4803, 1460-2059 %G eng %U http://bioinformatics.oxfordjournals.org/content/17/suppl_1/S90.short %N Suppl 1 %R 10.1093/bioinformatics/17.suppl_1.S90 %0 Conference Paper %B Thirteenth International Conference on Software Engineering and Knowledge Engineering %D 2001 %T A prototype experience management system for a software consulting organization %A Mendonça Neto,M. G. %A Seaman,C. %A Basili, Victor R. %A Kim,Y. M %X The Experience Management System (EMS) is aimed at supporting the capture and reuse of software-related experience, based on the Experience Factory concept. It is being developed for use in a multinational software engineering consultancy, Q-Labs. Currently, a prototype EMS exists and has been evaluated. This paper focuses on the EMS architecture, underlying data model, implementation, and user interface. %B Thirteenth International Conference on Software Engineering and Knowledge Engineering %P 29 - 36 %8 2001/// %G eng %0 Conference Paper %B E-commerce Security and Privacy %D 2001 %T Provisional authorizations %A Jajodia,S. %A Kudo,M. %A V.S. Subrahmanian %X Past generations of access control systems, when faced with an access request, have issued a “yes” (resp. “no”) answer to the access request resulting in access being granted (resp. denied). In this chapter, we argue that for the world's rapidly proliferating business to business (B2B) applications and auctions, “yes/no” responses are just not enough. 
We propose the notion of a “provisional authorization” which intuitively says “You may perform the desired access provided you cause condition C to be satisfied.” For instance, a user accessing an online brokerage may receive some information if he fills out his name/address, but not otherwise. While a variety of such provisional authorization mechanisms exist on the web, they are all hardcoded on an application-by-application basis. We show that given (almost) any logic L, we may define a provisional authorization specification language pASLL. pASLL is based on the declarative, polynomially evaluable authorization specification language ASL proposed by Jajodia et al. [JSS97]. We define programs in pASLL, and specify how given any access request, we must find a “weakest” precondition under which the access can be granted (in the worst case, if this weakest precondition is “false” this amounts to a denial). We develop a model theoretic semantics for pASLL and show how it can be applied to online sealed-bid auction servers and online contracting. %B E-commerce Security and Privacy %V 2 %P 133 - 159 %8 2001/// %G eng %R 10.1007/978-1-4615-1467-1_8 %0 Journal Article %J Perceptual Organization for Artificial Vision Systems %D 2000 %T Perceptual Completion Behind Occluders: The Role of Convexity %A Liu,Z. %A Jacobs, David W. %A Basri,R. %X An important problem in perceptual organization is to determine whether two image regions belong to the same object. In this chapter, we consider the situation when two image regions potentially group into a single object behind a common occluder. We study the extent to which two image regions are grouped more strongly than two other image regions. Existing theories in both human and computer vision have mainly emphasized the role of good continuation. Namely, the shorter and smoother the completing contours are between two image regions, the stronger the grouping will be. 
In contrast, Jacobs [3] has proposed a theory that considers relative positions and orientations of two image regions. For instance, the theory predicts that two image regions that can be linked by convex completing contours are grouped more strongly than those that can only be linked by concave completing contours, even though the completing contours are identical in shape. We present, in addition to our previous work (Liu, Jacobs, and Basri, 1999), human psychophysical evidence that concurs with the predictions of this theory and suggests an important role of convexity in perceptual grouping. %B Perceptual Organization for Artificial Vision Systems %P 73 - 90 %8 2000/// %G eng %R 10.1007/978-1-4615-4413-5_6 %0 Journal Article %J International Journal of Human-Computer Interaction %D 2000 %T Performance Benefits of Simultaneous Over Sequential Menus as Task Complexity Increases %A Hochheiser,Harry %A Shneiderman, Ben %X To date, experimental comparisons of menu layouts have concentrated on variants of hierarchical structures of sequentially presented menus. Simultaneous menus-layouts that present multiple active menus on a screen at the same time-are an alternative arrangement that may be useful in many Web design situations. This article describes an experiment involving a between-subject comparison of simultaneous menus and their traditional sequential counterparts. A total of 20 experienced Web users used either simultaneous or sequential menus in a standard Web browser to answer questions based on U.S. Census data. Our results suggest that appropriate use of simultaneous menus can lead to improved task performance speeds without harming subjective satisfaction measures. For novice users performing simple tasks, the simplicity of sequential menus appears to be helpful, but experienced users performing complex tasks may benefit from simultaneous menus. 
Design improvements can amplify the benefits of simultaneous menu layouts. %B International Journal of Human-Computer Interaction %V 12 %P 173 - 192 %8 2000/// %@ 1044-7318 %G eng %U http://www.tandfonline.com/doi/abs/10.1207/S15327590IJHC1202_2 %N 2 %R 10.1207/S15327590IJHC1202_2 %0 Journal Article %J Computing in Science Engineering %D 2000 %T A perspective on Quicksort %A JaJa, Joseph F. %K algorithm; %K algorithms; %K analysis; %K complexity %K complexity; %K computational %K geometry; %K Parallel %K Quicksort %K sorting; %X This article introduces the basic Quicksort algorithm and gives a flavor of the richness of its complexity analysis. 
The author also provides a glimpse of some of its generalizations to parallel algorithms and computational geometry %B Computing in Science Engineering %V 2 %P 43 - 49 %8 2000/02//jan %@ 1521-9615 %G eng %N 1 %R 10.1109/5992.814657 %0 Journal Article %J J Mol Evol %D 2000 %T Phylogenetic relationships of Acanthocephala based on analysis of 18S ribosomal RNA gene sequences %A García-Varela,M %A Pérez-Ponce de León,G. %A de la Torre,P %A Cummings, Michael P. %A Sarma,SS %A Laclette,J. P %X Acanthocephala (thorny-headed worms) is a phylum of endoparasites of vertebrates and arthropods, included among the most phylogenetically basal tripoblastic pseudocoelomates. The phylum is divided into three classes: Archiacanthocephala, Palaeacanthocephala, and Eoacanthocephala. These classes are distinguished by morphological characters such as location of lacunar canals, persistence of ligament sacs in females, number and type of cement glands in males, number and size of proboscis hooks, host taxonomy, and ecology. To understand better the phylogenetic relationships within Acanthocephala, and between Acanthocephala and Rotifera, we sequenced the nearly complete 18S rRNA genes of nine species from the three classes of Acanthocephala and four species of Rotifera from the classes Bdelloidea and Monogononta. Phylogenetic relationships were inferred by maximum-likelihood analyses of these new sequences and others previously determined. The analyses showed that Acanthocephala is the sister group to a clade including Eoacanthocephala and Palaeacanthocephala. Archiacanthocephala exhibited a slower rate of evolution at the nucleotide level, as evidenced by shorter branch lengths for the group. We found statistically significant support for the monophyly of Rotifera, represented in our analysis by species from the clade Eurotatoria, which includes the classes Bdelloidea and Monogononta. Eurotatoria also appears as the sister group to Acanthocephala. 
%B J Mol Evol %V 50 %P 532 - 540 %8 2000/06// %G eng %N 6 %0 Journal Article %J Mycol Res %D 2000 %T Phylogenetic relationships of Phytophthora species based on ribosomal ITS I DNA sequence analysis with emphasis on Waterhouse groups V and VI %A Förster,H %A Cummings, Michael P. %A Coffey,M. D %X Phylogenetic relationships among Phytophthora species were investigated by sequence analysis of the internal transcribed spacer region I of the ribosomal DNA repeat unit. The extensive collection of isolates included taxa from all six morphological groups recognized by Waterhouse (1963) including molecular groups previously identified using isozymes and mtDNA restriction fragment length polymorphisms. Similar to previous studies, the inferred relationships indicated that molecular groups of P. cryptogea/drechsleri-like and P. megasperma-like taxa are polyphyletic. Morphological groups V and VI, which are differentiated by the presence of amphigynous or paragynous antheridia, are not monophyletic: species of the two groups are interspersed in the tree. Species with papillate and semi-papillate sporangia (groups I-IV) clustered together and this cluster was distinct from those of species with non-papillate sporangia. There was no congruence between the mode of antheridial attachment, sporangial caducity, or homo- or heterothallic habit and the molecular grouping of the species. Our study provides evidence that the antheridial position together with homo- or heterothallic habit does not reflect phylogenetic relationships within Phytophthora. Consequently, confirming studies done previously (Cooke & Duncan 1997), this study provides evidence that the morphological characters used in Phytophthora taxonomy are of limited value for deducing phylogenetic relationships, because they exhibit convergent evolution. 
%B Mycol Res %V 104 %P 1055 - 1061 %8 2000/09// %G eng %0 Journal Article %J Proceedings of The 13th International Software/Internet Quality Week %D 2000 %T A planning-based approach to GUI testing %A Memon, Atif M. %A Pollack,M. E %A Soffa,M. L %B Proceedings of The 13th International Software/Internet Quality Week %8 2000/// %G eng %0 Conference Paper %B Proceedings of the eleventh annual ACM-SIAM symposium on Discrete algorithms %D 2000 %T A point-placement strategy for conforming Delaunay tetrahedralization %A Murphy,Michael %A Mount, Dave %A Gable,Carl W. %B Proceedings of the eleventh annual ACM-SIAM symposium on Discrete algorithms %S SODA '00 %I Society for Industrial and Applied Mathematics %C Philadelphia, PA, USA %P 67 - 74 %8 2000/// %@ 0-89871-453-2 %G eng %U http://dl.acm.org/citation.cfm?id=338219.338236 %0 Conference Paper %B Proceedings of the 7th International Static Analysis Symposium, Lecture Notes in Computer Science. Springer Verlag %D 2000 %T Polymorphic versus monomorphic points-to analysis %A Foster, Jeffrey S. %A Fahndrich,M. %A Aiken,A. %B Proceedings of the 7th International Static Analysis Symposium, Lecture Notes in Computer Science. Springer Verlag %8 2000/// %G eng %0 Conference Paper %B String Processing and Information Retrieval, 2000. SPIRE 2000. Proceedings. Seventh International Symposium on %D 2000 %T PRAM-On-Chip Vision %A Vishkin, Uzi %B String Processing and Information Retrieval, 2000. SPIRE 2000. Proceedings. Seventh International Symposium on %P 260 - 260 %8 2000/// %G eng %0 Journal Article %J The Journal of Cell Biology %D 2000 %T Pre-Messenger RNA Processing Factors in the Drosophila Genome %A Mount, Stephen M. %A Salz,Helen K. %X In eukaryotes, messenger RNAs are generated by a process that includes coordinated splicing and 3′ end formation. 
Factors essential for the splicing of mRNA precursors (pre-mRNA) in eukaryotes have been identified primarily through the study of nuclear extracts derived from mammalian cells and Saccharomyces cerevisiae genetics. Here, we identify homologues of most known pre-mRNA processing factors in the recently completed sequence of the Drosophila genome. The set of proteins required for RNA processing shows remarkably little variation among eukaryotic species, and individual proteins are highly conserved. In general, proteins involved in the mechanics of RNA processing are even more conserved than proteins involved in the interpretation of RNA processing signals. The genome does not appear to contain a gene for the U11 RNA, or for a protein unique to the U11 snRNP, which raises the possibility that the U12-dependent spliceosome functions without U11 in Drosophila. %B The Journal of Cell Biology %V 150 %P F37-F44 %8 2000/07/24/ %@ 0021-9525, 1540-8140 %G eng %U http://jcb.rupress.org/content/150/2/F37 %N 2 %R 10.1083/jcb.150.2.F37 %0 Journal Article %J Journal of the American Society for Information Science %D 2000 %T Previews and overviews in digital libraries: Designing surrogates to support visual information seeking %A Greene,Stephan %A Marchionini,Gary %A Plaisant, Catherine %A Shneiderman, Ben %X To aid designers of digital library interfaces, we present a framework for the design of information representations in terms of previews and overviews. Previews and overviews are graphic or textual representations of information abstracted from primary information objects. Previews act as surrogates for one or a few objects and overviews represent collections of objects. A design framework is elaborated in terms of the following three dimensions: (1) what information objects are available to users, (2) how information objects are related and displayed, and (3) how users can manipulate information objects. 
When utilized properly, previews and overviews allow users to rapidly discriminate objects of interest from those not of interest, and to more fully understand the scope and nature of digital libraries. This article presents a definition of previews and overviews in context, provides design guidelines, and describes four example applications. %B Journal of the American Society for Information Science %V 51 %P 380 - 393 %8 2000/01/01/ %@ 1097-4571 %G eng %U http://onlinelibrary.wiley.com/doi/10.1002/(SICI)1097-4571(2000)51:4%3C380::AID-ASI7%3E3.0.CO;2-5/abstract;jsessionid=E15C609DE95671E0E91A862B8AFD1CC6.d03t01 %N 4 %R 10.1002/(SICI)1097-4571(2000)51:4<380::AID-ASI7>3.0.CO;2-5 %0 Conference Paper %B Automatic Face and Gesture Recognition, 2000. Proceedings. Fourth IEEE International Conference on %D 2000 %T A probabilistic framework for rigid and non-rigid appearance based tracking and recognition %A De la Torre,F. %A Yacoob,Yaser %A Davis, Larry S. %B Automatic Face and Gesture Recognition, 2000. Proceedings. Fourth IEEE International Conference on %P 491 - 498 %8 2000/// %G eng %0 Journal Article %J Journal of Intelligent Information Systems %D 2000 %T Producing Interoperable Queries for Relational and Object-Oriented Databases %A Chang,Y. H %A Raschid, Louiqa %B Journal of Intelligent Information Systems %V 14 %P 51 - 75 %8 2000/// %G eng %N 1 %0 Journal Article %J ACM SIGCOMM Computer Communication Review %D 2000 %T A protocol-independent technique for eliminating redundant network traffic %A Spring, Neil %A Wetherall,D. %B ACM SIGCOMM Computer Communication Review %V 30 %P 87 - 95 %8 2000/// %G eng %N 4 %0 Report %D 1999 %T Page Segmentation and Zone Classification: The State of the Art %A Okun,O. %A David Doermann %A Pietikainen,M. 
%X Page segmentation and zone classification are key areas of research in document image processing, because they occupy an intermediate position between document preprocessing and higher-level document understanding such as logical page analysis and OCR. Such analysis of the page relies heavily on an appropriate document model and results in a representation of the physical structure of the document. The purpose of this review is to analyze progress made in page segmentation and zone classification and suggest what needs to be done to advance the field. %I University of Maryland, College Park %V LAMP-TR-036,CAR-TR-927,CS-TR-4079 %8 1999/11// %G eng %0 Journal Article %J Computers in biology and medicine %D 1999 %T Pathogenic mechanisms in ischemic damage: a computational study %A Ruppin,E. %A Ofer,E. %A Reggia, James A. %A Revett,K. %B Computers in biology and medicine %V 29 %P 39 - 59 %8 1999/// %G eng %N 1 %0 Journal Article %J Progress in brain research %D 1999 %T Penumbral tissue damage following acute stroke: a computational investigation %A Ruppin,E. %A Revett,K. %A Ofer,E. %A Goodall,S. %A Reggia, James A. %B Progress in brain research %V 121 %P 243 - 260 %8 1999/// %G eng %0 Book %D 1999 %T Perceptual Organization in Computer Vision %A Boyer,K.L. %A Sarkar,S. %A Feldman,J. %A Granlund,G. %A Horaud,R. %A Hutchinson,S. %A Jacobs, David W. %A Kak,A. %A Lowe,D. %A Malik,J. %I Academic Press %8 1999/// %G eng %0 Conference Paper %B Proceedings of the 13th international conference on Supercomputing %D 1999 %T Performance impact of proxies in data intensive client-server applications %A Beynon,M. D %A Sussman, Alan %A Saltz, J. %B Proceedings of the 13th international conference on Supercomputing %P 383 - 390 %8 1999/// %G eng %0 Journal Article %J Empirical Software Engineering %D 1999 %T Perspective-based Usability Inspection: An Empirical Validation of Efficacy %A Zhang,Zhijun %A Basili, Victor R. 
%A Shneiderman, Ben %X Inspection is a fundamental means of achieving software usability. Past research showed that the current usability inspection techniques were rather ineffective. We developed perspective-based usability inspection, which divides the large variety of usability issues along different perspectives and focuses each inspection session on one perspective. We conducted a controlled experiment to study its effectiveness, using a post-test only control group experimental design, with 24 professionals as subjects. The control group used heuristic evaluation, which is the most popular technique for usability inspection. The experimental design and the results are presented, which show that inspectors applying perspective-based inspection not only found more usability problems related to their assigned perspectives, but also found more overall problems. Perspective-based inspection was shown to be more effective for the aggregated results of multiple inspectors, finding about 30% more usability problems for 3 inspectors. A management implication of this study is that assigning inspectors more specific responsibilities leads to higher performance. Internal and external threats to validity are discussed to help better interpret the results and to guide future empirical studies. %B Empirical Software Engineering %V 4 %P 43 - 69 %8 1999/// %@ 1382-3256 %G eng %U http://dx.doi.org/10.1023/A:1009803214692 %N 1 %0 Conference Paper %B Readings in information visualization %D 1999 %T Physical data %A Card,S.K. %A Mackinlay,J.D. %A Shneiderman, Ben %B Readings in information visualization %I Morgan Kaufmann Publishers Inc. %C San Francisco %P 37 - 38 %8 1999/// %@ 1-55860-533-9 %G eng %0 Journal Article %J SIAM Journal on Scientific Computing %D 1999 %T Pivoted Cauchy-Like Preconditioners for Regularized Solution of Ill-Posed Problems %A Kilmer,Misha E. %A O'Leary, Dianne P. 
%B SIAM Journal on Scientific Computing %V 21 %P 88 - 110 %8 1999/// %G eng %U http://epubs.siam.org/sam-bin/dbq/article/30897 %0 Journal Article %J Technical Reports from UMIACS %D 1999 %T Pixel Data Access for End-User Programming and Graphical Macros %A Potter,Richard %A Shneiderman, Ben %K Technical Report %X Pixel Data Access is an interprocess communication technique that enables users of graphical user interfaces to automate certain tasks. By accessing the contents of the display buffer, users can search for pixel representations of interface elements, and then initiate actions such as mouse clicks and keyboard entries. While this technique has limitations, it offers users of current systems some unusually powerful features that are especially appealing in the area of end-user programming. Also cross-referenced as UMIACS-TR-99-27 %B Technical Reports from UMIACS %8 1999/05/25/ %G eng %U http://drum.lib.umd.edu/handle/1903/1009 %0 Journal Article %J ACM SIGPLAN NOTICES %D 1999 %T PLAN: A programming language for active networks %A Hicks, Michael W. %A Kakkar,P. %A Moore,J. T %A Gunter,C. A %A Nettles,S. %X PLAN (Programming Language for Active Networks) is a new language for programs that are carried in the packets of a programmable network. PLAN programs replace the packet headers (which can be viewed as `dumb' programs) used in current networks. As a header replacement, PLAN programs must be lightweight and of limited functionality. These limitations are mitigated by allowing PLAN code to call service routines written in other, more powerful languages. These service routines may also be loaded into the routers dynamically. This two-level architecture, in which PLAN serves as a scripting or `glue' language for more general services, is the primary contribution of the paper. PLAN is a strict functional language providing a limited set of primitives and datatypes. 
PLAN defines primitives for remotely executing PLAN programs on other nodes, and these primitives are used to provide basic data transport in the network. Because remote execution makes debugging difficult, PLAN provides strong static guarantees to the programmer, such as type safety. A more novel property aimed at protecting network availability is a guarantee that PLAN programs use a bounded amount of space and time on active routers and bandwidth in the network. %B ACM SIGPLAN NOTICES %V 34 %P 86 - 93 %8 1999/// %G eng %0 Journal Article %J University of Pennsylvania %D 1999 %T PLAN Service Programmer's Guide for PLAN version 3.2 %A Hicks, Michael W. %B University of Pennsylvania %8 1999/08/12/ %G eng %0 Conference Paper %B IEEE INFOCOM '99. Eighteenth Annual Joint Conference of the IEEE Computer and Communications Societies. Proceedings %D 1999 %T PLANet: an active internetwork %A Hicks, Michael W. %A Moore,J. T %A Alexander,D. S %A Gunter,C. A %A Nettles,S. M %K 100 Mbit/s %K 300 MHz %K 48 Mbit/s %K active internetwork %K active network architecture %K active network implementation %K byte-code-interpreted applications %K Computer architecture %K Computer languages %K Computer networks %K congested conditions %K dynamic programming %K dynamic router extensions %K Ethernet %K Ethernet networks %K INFORMATION SCIENCE %K Internet %K Internet-like services %K internetworking %K IP %K IP networks %K link layers %K Linux user-space applications %K Local area networks %K ML dialect %K Network performance %K networking operations %K OCaml %K Packet Language for Active Networks %K packet programs %K packet switching %K Pentium-II %K performance %K performance evaluation %K PLAN %K PLANet %K Planets %K programmability features %K programming languages %K router functionality %K special purpose programming language %K Switches %K telecommunication network routing %K Transport protocols %K Web and internet services %X We present PLANet: an active network architecture and 
implementation. In addition to a standard suite of Internet-like services, PLANet has two key programmability features: (1) all packets contain programs; and (2) router functionality may be extended dynamically. Packet programs are written in our special purpose programming language PLAN, the Packet Language for Active Networks, while dynamic router extensions are written in OCaml, a dialect of ML. Currently, PLANet routers run as byte-code-interpreted Linux user-space applications, and support Ethernet and IP as link layers. PLANet achieves respectable performance on standard networking operations: on 300 MHz Pentium-II's attached to 100 Mbps Ethernet, PLANet can route 48 Mbps and switch over 5000 packets per second. We demonstrate the utility of PLANet's activeness by showing experimentally how it can nontrivially improve application and aggregate network performance in congested conditions. %B IEEE INFOCOM '99. Eighteenth Annual Joint Conference of the IEEE Computer and Communications Societies. Proceedings %I IEEE %V 3 %P 1124-1133 vol.3 %8 1999/03/21/25 %@ 0-7803-5417-6 %G eng %R 10.1109/INFCOM.1999.751668 %0 Journal Article %J Annals of Software Engineering %D 1999 %T A practical approach to implementing real-time semantics %A Bhat,G. %A Cleaveland, Rance %A Lüttgen,G. %B Annals of Software Engineering %V 7 %P 127 - 155 %8 1999/// %G eng %N 1 %0 Conference Paper %B Proceedings of the IJCAI-99 Workshop on Practical Reasoning and Rationality %D 1999 %T Practical reasoning and plan execution with active logic %A Purang,K. %A Purushothaman,D. %A Traum,D. %A Andersen,C. 
%A Perlis, Don %B Proceedings of the IJCAI-99 Workshop on Practical Reasoning and Rationality %P 30 - 38 %8 1999/// %G eng %0 Journal Article %J SIAM Journal on Scientific Computing %D 1999 %T Preconditioning for the steady-state Navier-Stokes equations with low viscosity %A Elman, Howard %B SIAM Journal on Scientific Computing %V 20 %P 1299 - 1316 %8 1999/// %G eng %N 4 %0 Conference Paper %B The Eighth International Symposium on High Performance Distributed Computing, 1999. Proceedings %D 1999 %T Predicting the CPU availability of time-shared Unix systems on the computational grid %A Wolski,R. %A Spring, Neil %A Hayes,J. %K accuracy %K Application software %K Autocorrelation %K Availability %K Central Processing Unit %K computational grid %K correlation methods %K CPU availability prediction %K CPU resources predictability %K CPU sensor %K Dynamic scheduling %K grid computing %K Load forecasting %K long-range autocorrelation dependence %K medium-term forecasts %K network operating systems %K Network Weather Service %K NWS %K performance evaluation %K self-similarity degree %K short-term forecasts %K successive CPU measurements %K Time measurement %K Time sharing computer systems %K time-shared Unix systems %K time-sharing systems %K Unix %K Unix load average %K vmstat utility %K Weather forecasting %X Focuses on the problem of making short- and medium-term forecasts of CPU availability on time-shared Unix systems. We evaluate the accuracy with which availability can be measured using the Unix load average, the Unix utility “vmstat” and the Network Weather Service (NWS) CPU sensor that uses both. We also examine the autocorrelation between successive CPU measurements to determine their degree of self-similarity. 
While our observations show a long-range autocorrelation dependence, we demonstrate how this dependence manifests itself in the short- and medium-term predictability of the CPU resources in our study. %B The Eighth International Symposium on High Performance Distributed Computing, 1999. Proceedings %I IEEE %P 105 - 112 %8 1999/// %@ 0-7803-5681-0 %G eng %R 10.1109/HPDC.1999.805288 %0 Conference Paper %B Parallel and Distributed Processing, 1999. 13th International and 10th Symposium on Parallel and Distributed Processing, 1999. 1999 IPPS/SPDP. Proceedings %D 1999 %T Prefix computations on symmetric multiprocessors %A Helman,D.R. %A JaJa, Joseph F. %K prefix computations %K symmetric multiprocessors %K symmetric multiprocessor architecture %K sparse ruling set approach %K POSIX threads %K Unix %K IBM SP-2 %K HP-Convex Exemplar %K DEC AlphaServer %K Silicon Graphics Power Challenge %K high-end server market %K large scale multiprocessor systems %K memory access patterns %K scalable performance %K distributed algorithms %X We introduce a new optimal prefix computation algorithm on linked lists which builds upon the sparse ruling set approach of Reid-Miller and Blelloch. Besides being somewhat simpler and requiring nearly half the number of memory accesses, we can bound our complexity with high probability instead of merely on average. Moreover, whereas Reid-Miller and Blelloch (1996) targeted their algorithm for implementation on a vector multiprocessor architecture, we develop our algorithm for implementation on the symmetric multiprocessor architecture (SMP). These symmetric multiprocessors dominate the high-end server market and are currently the primary candidate for constructing large scale multiprocessor systems. Our prefix computation algorithm was implemented in C using POSIX threads and run on four symmetric multiprocessors: the IBM SP-2 (High Node), the HP-Convex Exemplar (S-Class), the DEC AlphaServer, and the Silicon Graphics Power Challenge. 
We ran our code using a variety of benchmarks which we identified to examine the dependence of our algorithm on memory access patterns. For some problems, our algorithm actually matched or exceeded the performance of the standard sequential solution using only a single thread. Moreover, in spite of the fact that the processors must compete for access to main memory, our algorithm still achieved scalable performance with up to 16 processors, which was the largest platform available to us. %B Parallel and Distributed Processing, 1999. 13th International and 10th Symposium on Parallel and Distributed Processing, 1999. 1999 IPPS/SPDP. Proceedings %P 7 - 13 %8 1999/04// %G eng %R 10.1109/IPPS.1999.760427 %0 Book %D 1999 %T A Priori Test of SGS Models in Compressible Turbulence %A Martin, M.P %A Piomelli,U. %A Candler,G. V %A Center,Army High Performance Computing Research %A Minnesota,University of %I Army High Performance Computing Research Center %8 1999/// %G eng %0 Conference Paper %D 1999 %T Proteus: a flexible infrastructure to implement adaptive fault tolerance in AQuA %A Sabnis,C. %A Michel Cukier %A Ren,J. %A Rubel,P. %A Sanders,W. H. %A Bakken,D. E. %A Karr,D. %K adaptive fault tolerance %K AQuA %K commercial off-the-shelf components %K CORBA applications %K cost %K dependable distributed systems %K distributed object management %K object replication %K proteus %K reconfigurable architectures %K Runtime %K Software architecture %K software fault tolerance %X Building dependable distributed systems from commercial off-the-shelf components is of growing practical importance. For both cost and production reasons, there is interest in approaches and architectures that facilitate building such systems. 
The AQuA architecture is one such approach; its goal is to provide adaptive fault tolerance to CORBA applications by replicating objects, providing a high-level method for applications to specify their desired dependability, and providing a dependability manager that attempts to reconfigure a system at runtime so that dependability requests are satisfied. This paper describes how dependability is provided in AQuA. In particular, it describes Proteus, the part of AQuA that dynamically manages replicated distributed objects to make them dependable. Given a dependability request, Proteus chooses a fault tolerance approach and reconfigures the system to try to meet the request. The infrastructure of Proteus is described in this paper, along with its use in implementing active replication and a simple dependability policy. %P 149 - 168 %8 1999/11// %G eng %R 10.1109/DCFTS.1999.814294 %0 Report %D 1998 %T Parallel Algorithms for Image Histogramming and Connected Components with an Experimental Study %A Bader,David A. %A JaJa, Joseph F. %K Technical Report %X This paper presents efficient and portable implementations of two useful primitives in image processing algorithms, histogramming and connected components. Our general framework is a single-address space, distributed memory programming model. We use efficient techniques for distributing and coalescing data as well as efficient combinations of task and data parallelism. Our connected components algorithm uses a novel approach for parallel merging which performs drastically limited updating during iterative steps, and concludes with a total consistency update at the final step. The algorithms have been coded in Split-C and run on a variety of platforms. Our experimental results are consistent with the theoretical analysis and provide the best known execution times for these two primitives, even when compared with machine specific implementations. 
More efficient implementations of Split-C will likely result in even faster execution times. (Also cross-referenced as UMIACS-TR-94-133.) %I Institute for Advanced Computer Studies, Univ of Maryland, College Park %V UMIACS-TR-94-133 %8 1998/10/15/ %G eng %U http://drum.lib.umd.edu/handle/1903/681 %0 Report %D 1998 %T A Parallel Sorting Algorithm With an Experimental Study %A Helman,David R. %A Bader,David A. %A JaJa, Joseph F. %K Technical Report %X Previous schemes for sorting on general-purpose parallel machines have had to choose between poor load balancing and irregular communication or multiple rounds of all-to-all personalized communication. In this paper, we introduce a novel variation on sample sort which uses only two rounds of regular all-to-all personalized communication in a scheme that yields very good load balancing with virtually no overhead. This algorithm was implemented in Split-C and run on a variety of platforms, including the Thinking Machines CM-5, the IBM SP-2, and the Cray Research T3D. We ran our code using widely different benchmarks to examine the dependence of our algorithm on the input distribution. Our experimental results are consistent with the theoretical analysis and illustrate the efficiency and scalability of our algorithm across different platforms. In fact, it seems to outperform all similar algorithms known to the authors on these platforms, and its performance is invariant over the set of input distributions unlike previous efficient algorithms. Our results also compare favorably with those reported for the simpler ranking problem posed by the NAS Integer Sorting (IS) Benchmark. 
(Also cross-referenced as UMIACS-TR-95-102) %I Institute for Advanced Computer Studies, Univ of Maryland, College Park %V UMIACS-TR-95-102 %8 1998/10/15/ %G eng %U http://drum.lib.umd.edu/handle/1903/768 %0 Journal Article %J Machine Translation and the Information Soup %D 1998 %T Parallel strands: A preliminary investigation into mining the web for bilingual text %A Resnik, Philip %B Machine Translation and the Information Soup %P 72 - 82 %8 1998/// %G eng %0 Conference Paper %B Computer Vision, 1998. Sixth International Conference on %D 1998 %T Parameterized modeling and recognition of activities %A Yacoob,Yaser %A Black,M. J %K activities recognition %K admissible transformations %K articulated object motion %K deformable object motion %K exemplar activities %K image motion parameters %K Image sequences %K Motion estimation %K parameterized modeling %K recognition %K spatio-temporal variants %X A framework for modeling and recognition of temporal activities is proposed. The modeling of sets of exemplar activities is achieved by parameterizing their representation in the form of principal components. Recognition of spatio-temporal variants of modeled activities is achieved by parameterizing the search in the space of admissible transformations that the activities can undergo. Experiments on recognition of articulated and deformable object motion from image motion parameters are presented. %B Computer Vision, 1998. Sixth International Conference on %P 120 - 127 %8 1998/01// %G eng %R 10.1109/ICCV.1998.710709 %0 Conference Paper %B ACM SIGPLAN Conference on Programming Language Design and Implementation (PLDI) %D 1998 %T Partial online cycle elimination in inclusion constraint graphs %A Aiken,A. %A Fähndrich,M. %A Foster, Jeffrey S. %A Su,Z. %B ACM SIGPLAN Conference on Programming Language Design and Implementation (PLDI) %8 1998/// %G eng %0 Conference Paper %B The 19th IEEE Real-Time Systems Symposium, 1998. 
Proceedings %D 1998 %T Performance measurement using low perturbation and high precision hardware assists %A Mink, A. %A Salamon, W. %A Hollingsworth, Jeffrey K %A Arunachalam, R. %K Clocks %K Computerized monitoring %K Counting circuits %K Debugging %K Hardware %K hardware performance monitor %K high precision hardware assists %K low perturbation %K measurement %K MPI message passing library %K MultiKron hardware performance monitor %K MultiKron PCI %K NIST %K online performance monitoring tools %K Paradyn parallel performance measurement tools %K PCI bus slot %K performance bug %K performance evaluation %K performance measurement %K program debugging %K program testing %K real-time systems %K Runtime %K Timing %X We present the design and implementation of MultiKron PCI, a hardware performance monitor that can be plugged into any computer with a free PCI bus slot. The monitor provides a series of high-resolution timers, and the ability to monitor the utilization of the PCI bus. We also demonstrate how the monitor can be integrated with online performance monitoring tools such as the Paradyn parallel performance measurement tools to improve the overhead of key timer operations by a factor of 25. In addition, we present a series of case studies using the MultiKron hardware performance monitor to measure and tune high-performance parallel computing applications. By using the monitor, we were able to find and correct a performance bug in a popular implementation of the MPI message passing library that caused some communication primitives to run at one half their potential speed. %B The 19th IEEE Real-Time Systems Symposium, 1998. Proceedings %I IEEE %P 379 - 388 %8 1998/12/02/4 %@ 0-8186-9212-X %G eng %R 10.1109/REAL.1998.739771 %0 Report %D 1998 %T On the Perturbation of LU, Cholesky, and QR Factorizations %A Stewart, G.W. 
%K Technical Report %X To appear in SIMAX. In this paper error bounds are derived for a first order expansion of the LU factorization of a perturbation of the identity. The results are applied to obtain perturbation expansions of the LU, Cholesky, and QR factorizations. (Also cross-referenced as UMIACS-TR-92-24) %I Institute for Advanced Computer Studies, Univ of Maryland, College Park %V UMIACS-TR-92-24 %8 1998/10/15/ %G eng %U http://drum.lib.umd.edu/handle/1903/565 %0 Report %D 1998 %T Perturbation Theory for the Singular Value Decomposition %A Stewart, G.W. %K Technical Report %X The singular value decomposition has a number of applications in digital signal processing. However, the decomposition must be computed from a matrix consisting of both signal and noise. It is therefore important to be able to assess the effects of the noise on the singular values and singular vectors, a problem in classical perturbation theory. In this paper we survey the perturbation theory of the singular value decomposition. (Also cross-referenced as UMIACS-TR-90-124) Appeared in SVD and Signal Processing, II, R. J. Vacarro ed., Elsevier, Amsterdam, 1991. %I Institute for Advanced Computer Studies, Univ of Maryland, College Park %V UMIACS-TR-90-124 %8 1998/10/15/ %G eng %U http://drum.lib.umd.edu/handle/1903/552 %0 Journal Article %J Mol Phylogenet Evol %D 1998 %T Phylogenetic relationships of platyhelminthes based on 18S ribosomal gene sequences %A Campos,A. %A Cummings, Michael P. %A Reyes,J. L %A Laclette,J. P %X Nucleotide sequences of 18S ribosomal RNA from 71 species of Platyhelminthes, the flatworms, were analyzed using maximum likelihood, and the resulting phylogenetic trees were compared with previous phylogenetic hypotheses. 
Analyses including 15 outgroup species belonging to eight other phyla show that Platyhelminthes are monophyletic with the exception of a sequence putatively from Acoela sp. Lecithoepitheliata, Polycladida, Tricladida, Trematoda (Aspidobothrii + Digenea), Monogenea, and Cestoda (Gyrocotylidea + Amphilinidea + Eucestoda) are monophyletic groups. Catenulids form the sister group to the rest of platyhelminths, whereas a complex clade formed by Acoela, Tricladida, "Dalyellioida", and perhaps "Typhloplanoida" is sister to Neodermata. "Typhloplanoida" does not appear to be monophyletic; Fecampiida does not appear to belong within "Dalyellioida," nor Kalyptorhynchia within "Typhloplanoida." Trematoda is the sister group to the rest of Neodermata, and Monogenea is sister group to Cestoda. Within Trematoda, Aspidobothrii is the sister group of Digenea and Heronimidae is the most basal family in Digenea. Our trees support the hypothesis that parasitism evolved at least twice in Platyhelminthes, once in the ancestor to Neodermata and again in the ancestor of Fecampiida, independently to the ancestor of putatively parasitic "Dalyellioida." %B Mol Phylogenet Evol %V 10 %P 1 - 10 %8 1998/08// %G eng %N 1 %R 10.1006/mpev.1997.0483 %0 Conference Paper %B Fourteenth International Conference on Pattern Recognition, 1998. Proceedings %D 1998 %T Pictorial query trees for query specification in image databases %A Soffer,A. %A Samet, Hanan %A Zotkin,Dmitry N %K Automation %K complex queries %K Computer science %K content-based retrieval %K Database systems %K database-image objects %K Educational institutions %K Electrical capacitance tomography %K grammars %K Image databases %K Image matching %K parsing %K pictorial query trees %K Postal services %K query specification %K query-image objects %K spatial constraints %K syntax %K visual databases %X A technique that enables specifying complex queries in image databases using pictorial query trees is presented. 
The leaves of a pictorial query tree correspond to individual pictorial queries that specify which objects should appear in the target images as well as how many occurrences of each object are required. In addition, the minimum required certainty of matching between query-image objects and database-image objects, as well as spatial constraints that specify bounds on the distance between objects and the relative direction between them are also specified. Internal nodes in the query tree represent logical operations (AND, OR, XOR) and their negations on the set of pictorial queries (or subtrees) represented by its children. The syntax of query trees is described. Algorithms for processing individual pictorial queries and for parsing and computing the overall result of a pictorial query tree are outlined. %B Fourteenth International Conference on Pattern Recognition, 1998. Proceedings %I IEEE %V 1 %P 919-921 vol.1 %8 1998/08// %@ 0-8186-8512-3 %G eng %R 10.1109/ICPR.1998.711383 %0 Journal Article %J Plant Syst Evol %D 1998 %T Pigment composition of putatively achlorophyllous angiosperms %A Cummings, Michael P. %A Welschmeyer,N. A %K Angiospermae %K Lennoaceae %K Monotropaceae %K Orchidaceae %K Orobanchaceae %K chlorophyll %K carotenoid %K pigment %K high-performance liquid chromatography %X Chlorophyll and carotenoid pigment composition was determined for ten species of putatively achlorophyllous angiosperms using high-performance liquid chromatography. Four families were represented: Lennoaceae (Pholisma arenarium); Monotropaceae (Allotropa virgata, Monotropa uniflora, Pterospora andromedea, Sarcodes sanguinea); Orobanchaceae (Epifagus virginiana, Orobanche cooperi, O. uniflora); Orchidaceae (Cephalanthera austinae, Corallorhiza maculata). Chlorophyll a was detected in all taxa, but chlorophyll b was only detected in Corallorhiza maculata. 
The relative amount of chlorophyll and chlorophyll-related pigments in these plants is greatly reduced compared to fully autotrophic angiosperms. %B Plant Syst Evol %V 210 %P 105 - 111 %8 1998/// %G eng %N 1-2 %0 Journal Article %J SIGPLAN Not. %D 1998 %T PLAN: a packet language for active networks %A Hicks, Michael W. %A Kakkar,Pankaj %A Moore,Jonathan T. %A Gunter,Carl A. %A Nettles,Scott %X PLAN (Packet Language for Active Networks) is a new language for programs that form the packets of a programmable network. These programs replace the packet headers (which can be viewed as very rudimentary programs) used in current networks. As such, PLAN programs are lightweight and of restricted functionality. These limitations are mitigated by allowing PLAN code to call node-resident service routines written in other, more powerful languages. This two-level architecture, in which PLAN serves as a scripting or 'glue' language for more general services, is the primary contribution of this paper. We have successfully applied the PLAN programming environment to implement an IP-free internetwork. PLAN is based on the simply typed lambda calculus and provides a restricted set of primitives and datatypes. PLAN defines a special construct called a chunk used to describe the remote execution of PLAN programs on other nodes. Primitive operations on chunks are used to provide basic data transport in the network and to support layering of protocols. Remote execution can make debugging difficult, so PLAN provides strong static guarantees to the programmer, such as type safety. A more novel property aimed at protecting network availability is a guarantee that PLAN programs use a bounded amount of network resources. %B SIGPLAN Not. %V 34 %P 86 - 93 %8 1998/09// %@ 0362-1340 %G eng %U http://doi.acm.org/10.1145/291251.289431 %N 1 %R 10.1145/291251.289431 %0 Report %D 1998 %T PLAN Security System %A Hicks, Michael W. 
%X Active Networks offer the ability to program the network on a per-router, per-user, or even per-packet basis. Unfortunately, this added programmability compromises the security of the system by allowing a wider range of potential attacks. Any feasible Active Network architecture therefore requires strong security guarantees. Of course, we should like these guarantees to come at the lowest possible price to the flexibility, performance, and usability of the system. The PLAN system is a distributed programming framework we have used to build an Active Network, PLANet [4]. In the PLAN system, code implementing distributed programs is broken into two parts: the PLAN level and the Service Level. All programs in the PLAN level reside in the messages, or packets, that are sent between the nodes of the system. These programs are written in the Programming Language for Active Networks [6] (or simply, PLAN). PLAN programs serve to "glue" together Service level programs; PLAN may be thought of as a network scripting language. In contrast, Service level programs (or simply, services) reside at each node and are invoked by executing PLAN programs. Services are written in general-purpose languages (in particular, the language that the PLAN interpreter is written in) and may be dynamically loaded. The current Internet (IP and its supporting protocols) allows any user with a network connection to have some basic services. In addition to basic packet delivery provided by IP, basic information services like DNS, finger, and whois, and protocols like HTTP, FTP, TCP, SMTP, and so forth are provided. Similarly, a goal of PLANet is to allow any user of the network to have access to basic services; these services should naturally include some "activeness." This goal implies that some functionality, like packet delivery in the current Internet, should not require authentication; in PLANet, we allow "pure" PLAN programs to run unauthenticated. 
A PLAN program is considered "pure" if it only makes calls to services considered safe; for example, determining the name of the current host is a safe operation, while updating the host’s router table is not. Successfully calling unsafe services would require proper authorization. This security policy is stated more formally in the following subsection. %B Technical Reports (CIS) %8 1998/07/14/ %G eng %U http://repository.upenn.edu/cis_reports/108 %0 Journal Article %J University of Pennsylvania %D 1998 %T PLAN Service Programmer's Guide %A Hicks, Michael W. %B University of Pennsylvania %8 1998/// %G eng %0 Journal Article %J University of Pennsylvania (February 27, 1998) %D 1998 %T The PLAN system for building Active Networks %A Hicks, Michael W. %A Kakkar,P. %A Moore,J. T %A Gunter,C. A %A Alexander,D. S %A Nettles,S. %B University of Pennsylvania (February 27, 1998) %8 1998/// %G eng %0 Conference Paper %B High-Performance Distributed Computing, International Symposium on %D 1998 %T Prediction and Adaptation in Active Harmony %A Hollingsworth, Jeffrey K %A Keleher, Peter J. %K adaptation %K resource management %K scheduling %X We describe the design and early functionality of the Active Harmony global resource management system. Harmony is an infrastructure designed to efficiently execute parallel applications in large-scale, dynamic environments. Harmony differs from other projects with similar goals in that the system automatically adapts ongoing computations to changing conditions through online reconfiguration. This reconfiguration can consist of system-directed migration of work at several different levels, or automatic application adaptation through the use of tuning options exported by Harmony-aware applications. We describe early experience with work migration at the level of procedures, processes and lightweight threads. 
%B High-Performance Distributed Computing, International Symposium on %I IEEE Computer Society %C Los Alamitos, CA, USA %P 180 %8 1998/// %G eng %R 10.1109/HPDC.1998.709971 %0 Journal Article %J CONCUR'98 Concurrency Theory %D 1998 %T Probabilistic resource failure in real-time process algebra %A Philippou,A. %A Cleaveland, Rance %A Lee,I. %A Smolka,S. %A Sokolsky,O. %B CONCUR'98 Concurrency Theory %P 465 - 472 %8 1998/// %G eng %0 Journal Article %J Physica D: Nonlinear Phenomena %D 1998 %T Problem solving during artificial selection of self-replicating loops %A Chou,H. H. %A Reggia, James A. %B Physica D: Nonlinear Phenomena %V 115 %P 293 - 312 %8 1998/// %G eng %N 3-4 %0 Report %D 1998 %T Project for Developing Computer Science Agenda(s) for High-Performance Computing: An Organizer's Summary %A Vishkin, Uzi %K Technical Report %X Designing a coherent agenda for the implementation of the High Performance Computing (HPC) program is a nontrivial technical challenge. Many computer science and engineering researchers in the area of HPC, who are affiliated with U.S. institutions, have been invited to contribute their agendas. We have made a considerable effort to give many in that research community the opportunity to write a position paper. This explains why we view the project as placing a mirror in front of the community, and hope that the mirror indeed reflects many of the opinions on the topic. The current paper is an organizer's summary and represents his reading of the position papers. This summary is his sole responsibility. It is respectfully submitted to the NSF. 
(Also cross-referenced as UMIACS-TR-94-129) %I Institute for Advanced Computer Studies, Univ of Maryland, College Park %V UMIACS-TR-94-129 %8 1998/10/15/ %G eng %U http://drum.lib.umd.edu/handle/1903/677 %0 Journal Article %J Technical Reports of the Computer Science Department %D 1998 %T Providing Advisory Notices for UNIX Command Users: Design, Implementation, and Empirical Evaluations %A Kuah,Boon-Teck %A Shneiderman, Ben %K Technical Report %X UNIX Notices (UN) was developed to study the problems in providing advice to users of complex systems. The issues studied were: what, when, and how to present the advice. The first experiment with 24 subjects examined how different presentation styles affect the effectiveness of UN's advice. The three presentation styles studied were: notice appears in separate window; notice appears only on request; notice appears in user's window immediately. The results showed that the third style was significantly more effective than the first style. Furthermore, the results indicated that the most effective presentation method is also the most disruptive. The second experiment with 29 subjects studied how delay in the advice feedback affects the performance of UN. The treatments were: immediate feedback, feedback at end of session, and no feedback. Over a period of 6 weeks, the commands entered by the subjects were logged and studied. The results showed that immediate feedback caused subjects to repeat significantly fewer inefficient command sequences. However, immediate feedback and feedback at end of session may have given subjects a negative feeling towards UNIX. (Also cross-referenced as CAR-TR-651) %B Technical Reports of the Computer Science Department %8 1998/10/15/ %G eng %U http://drum.lib.umd.edu/handle/1903/387 %0 Journal Article %J International Journal of Computational Geometry and Applications %D 1997 %T Parallelizing an algorithm for visibility on polyhedral terrain %A Teng,Y. A %A Mount, Dave %A Puppo,E. 
%A Davis, Larry S. %X The best known output-sensitive sequential algorithm for computing the viewshed on a polyhedral terrain from a given viewpoint was proposed by Katz, Overmars, and Sharir [10], and achieves time complexity O((k + nα(n)) log n) where n and k are the input and output sizes respectively, and α(n) is the inverse Ackermann function. In this paper, we present a parallel algorithm that is based on the work mentioned above, and achieves O(log² n) time complexity, with work complexity O((k + nα(n)) log n) in a CREW PRAM model. This improves on previous parallel complexity while maintaining work efficiency with respect to the best sequential complexity known. %B International Journal of Computational Geometry and Applications %V 7 %P 75 - 84 %8 1997/// %G eng %N 1/2 %0 Conference Paper %B Proceedings of the ACL SIGLEX workshop on tagging text with lexical semantics: Why, what, and how %D 1997 %T A perspective on word sense disambiguation methods and their evaluation %A Resnik, Philip %A Yarowsky,D. %B Proceedings of the ACL SIGLEX workshop on tagging text with lexical semantics: Why, what, and how %V 86 %8 1997/// %G eng %0 Journal Article %J SIAM Journal on Matrix Analysis and Applications %D 1997 %T Perturbation analysis for the QR decomposition %A Chang,X. C. %A Paige,C. C. %A Stewart, G.W. %B SIAM Journal on Matrix Analysis and Applications %V 18 %P 775 - 791 %8 1997/// %G eng %0 Journal Article %J SIAM Journal on Matrix Analysis and Applications %D 1997 %T Perturbation of eigenvalues of preconditioned Navier-Stokes operators %A Elman, Howard %B SIAM Journal on Matrix Analysis and Applications %V 18 %P 733 - 751 %8 1997/// %G eng %N 3 %0 Journal Article %J IMA Journal of Numerical Analysis %D 1997 %T On the Perturbation of LU and Cholesky Factors %A Stewart, G.W. %X In a recent paper, Chang and Paige have shown that the usual perturbation bounds for Cholesky factors can systematically overestimate the errors. 
In this note we sharpen their results and extend them to the factors of the LU decomposition. The results are based on a new formula for the first-order terms of the error in the factors. %B IMA Journal of Numerical Analysis %V 17 %P 1 - 6 %8 1997/01/01/ %@ 0272-4979, 1464-3642 %G eng %U http://imajna.oxfordjournals.org/content/17/1/1 %N 1 %R 10.1093/imanum/17.1.1 %0 Conference Paper %B Proceedings of the eighth annual ACM-SIAM symposium on Discrete algorithms %D 1997 %T A practical approximation algorithm for the LMS line estimator %A Mount, Dave %A Netanyahu,Nathan S. %A Romanik,Kathleen %A Silverman,Ruth %A Wu,Angela Y. %K Approximation algorithms %K least median-of-squares regression %K line arrangements %K line fitting %K randomized algorithms %K robust estimation %B Proceedings of the eighth annual ACM-SIAM symposium on Discrete algorithms %S SODA '97 %I Society for Industrial and Applied Mathematics %C Philadelphia, PA, USA %P 473 - 482 %8 1997/// %@ 0-89871-390-0 %G eng %U http://dl.acm.org/citation.cfm?id=314161.314349 %0 Conference Paper %D 1997 %T Probabilistic verification of a synchronous round-based consensus protocol %A Duggal,H.S. %A Michel Cukier %A Sanders,W. H. %K business-critical applications %K consensus protocol correctness %K crash failures %K formal verification %K network environment %K probabilistic verification %K probabilities %K probability %K program verification %K proper behavior %K protocol behavior %K Protocols %K realistic environment %K reliable distributed systems %K safety-critical applications %K simple consensus protocol %K software reliability %K stochastic assumptions %K synchronous round based consensus protocol %K synchronous round based consensus protocols %K traditional proof techniques %X Consensus protocols are used in a variety of reliable distributed systems, including both safety-critical and business-critical applications. 
The correctness of a consensus protocol is usually shown by making assumptions about the environment in which it executes and then proving properties about the protocol. But proofs about a protocol's behavior are only as good as the assumptions that were made to obtain them, and violation of these assumptions can lead to unpredicted and serious consequences. We present a new approach for the probabilistic verification of synchronous round-based consensus protocols. In doing so, we make stochastic assumptions about the environment in which a protocol operates, and derive probabilities of proper and improper behavior. We thus can account for the violation of assumptions made in traditional proof techniques. To obtain the desired probabilities, the approach enumerates possible states that can be reached during an execution of the protocol, and computes the probability of achieving the desired properties for a given fault and network environment. We illustrate the use of this approach via the evaluation of a simple consensus protocol operating under a realistic environment which includes performance, omission, and crash failures. %P 165 - 174 %8 1997/10// %G eng %R 10.1109/RELDIS.1997.632812 %0 Journal Article %J The Journal of Supercomputing %D 1996 %T Parallel algorithms for image enhancement and segmentation by region growing, with an experimental study %A Bader, D.A. %A JaJa, Joseph F. %A Harwood, D. %A Davis, Larry S. %X This paper presents efficient and portable implementations of a powerful image enhancement process, the Symmetric Neighborhood Filter (SNF), and an image segmentation technique that makes use of the SNF and a variant of the conventional connected components algorithm which we call delta-Connected Components. We use efficient techniques for distributing and coalescing data as well as efficient combinations of task and data parallelism.
The image segmentation algorithm makes use of an efficient connected components algorithm based on a novel approach for parallel merging. The algorithms have been coded in Split-C and run on a variety of platforms, including the Thinking Machines CM-5, IBM SP-1 and SP-2, Cray Research T3D, Meiko Scientific CS-2, Intel Paragon, and workstation clusters. Our experimental results are consistent with the theoretical analysis (and provide the best known execution times for segmentation, even when compared with machine-specific implementations). Our test data include difficult images from the Landsat Thematic Mapper (TM) satellite data. %B The Journal of Supercomputing %V 10 %P 141 - 168 %8 1996/// %G eng %N 2 %R 10.1007/BF00130707 %0 Conference Paper %B Proceedings of the eighth annual ACM symposium on Parallel algorithms and architectures %D 1996 %T Parallel algorithms for personalized communication and sorting with an experimental study (extended abstract) %A Helman,David R. %A Bader,David A. %A JaJa, Joseph F. %B Proceedings of the eighth annual ACM symposium on Parallel algorithms and architectures %S SPAA '96 %I ACM %C New York, NY, USA %P 211 - 222 %8 1996/// %@ 0-89791-809-6 %G eng %U http://doi.acm.org/10.1145/237502.237558 %R 10.1145/237502.237558 %0 Journal Article %J Schizophrenia bulletin %D 1996 %T Pathogenesis of schizophrenic delusions and hallucinations: a neural model %A Ruppin,E. %A Reggia, James A. %A Horn,D. %B Schizophrenia bulletin %V 22 %P 105 - 105 %8 1996/// %G eng %N 1 %0 Journal Article %J Discrete Applied Mathematics %D 1996 %T Polynomial-time algorithm for computing translocation distance between genomes %A Hannenhalli, Sridhar %X With the advent of large-scale DNA physical mapping and sequencing, studies of genome rearrangements are becoming increasingly important in evolutionary molecular biology. 
From a computational perspective, the study of evolution based on rearrangements leads to a rearrangement distance problem, i.e., computing the minimum number of rearrangement events required to transform one genome into another. Different types of rearrangement events give rise to a spectrum of interesting combinatorial problems. The complexity of most of these problems is unknown. Multichromosomal genomes frequently evolve by a rearrangement event called translocation which exchanges genetic material between different chromosomes. In this paper we study the translocation distance problem, modeling the evolution of genomes evolving by translocations. The translocation distance problem was recently studied for the first time by Kececioglu and Ravi, who gave a 2-approximation algorithm for computing translocation distance. In this paper we prove a duality theorem leading to a polynomial time algorithm for computing translocation distance for the case when the orientations of the genes are known. This leads to an algorithm generating a most parsimonious (shortest) scenario, transforming one genome into another by translocations. %B Discrete Applied Mathematics %V 71 %P 137 - 151 %8 1996/12/05/ %@ 0166-218X %G eng %U http://www.sciencedirect.com/science/article/pii/S0166218X96000613 %N 1–3 %R 10.1016/S0166-218X(96)00061-3 %0 Journal Article %J Computer applications in the biosciences : CABIOS %D 1996 %T Positional sequencing by hybridization %A Hannenhalli, Sridhar %A Feldman,William %A Lewis,Herbert F. %A Skiena,Steven S. %A Pevzner,Pavel A. %X Sequencing by hybridization (SBH) is a promising alternative to the classical DNA sequencing approaches. However, the resolving power of SBH is rather low: with 64kb sequencing chips, unknown DNA fragments only as long as 200 bp can be reconstructed in a single SBH experiment. 
To improve the resolving power of SBH, positional SBH (PSBH) has recently been suggested; this allows (with additional experimental work) approximate positions of every l-tuple in a target DNA fragment to be measured. We study the positional Eulerian path problem motivated by PSBH. The input to the positional Eulerian path problem is an Eulerian graph G(V, E) in which every edge has an associated range of integers, and the problem is to find an Eulerian path e_1, …, e_|E| in G such that the range of e_i contains i. We show that the positional Eulerian path problem is NP-complete even when the maximum out-degree (in-degree) of any vertex in the graph is 2. On a positive note, we present polynomial algorithms to solve a special case of PSBH (bounded PSBH), where the range of the allowed positions for any edge is bounded by a constant (it corresponds to accurate experimental measurements of positions in PSBH). Moreover, if the positions of every l-tuple in an unknown DNA fragment of length n are measured with O(log n) error, then our algorithm runs in polynomial time. We also present an estimate of the resolving power of PSBH for a more realistic case when positions are measured with Θ(n) error. %B Computer applications in the biosciences : CABIOS %V 12 %P 19 - 24 %8 1996/02/01/ %G eng %U http://bioinformatics.oxfordjournals.org/content/12/1/19.abstract %N 1 %R 10.1093/bioinformatics/12.1.19 %0 Conference Paper %B Parallel Processing Symposium, 1996., Proceedings of IPPS '96, The 10th International %D 1996 %T Practical parallel algorithms for dynamic data redistribution, median finding, and selection %A Bader, D.A. %A JaJa, Joseph F.
%K dynamic data redistribution %K load balancing %K median finding %K parallel algorithms %K parallel systems %K performance evaluation %K resource allocation %K scalability %K statistical problem %K workstation clusters %K distributed memory programming model %K communication primitives %K SPLIT-C %K Thinking Machines CM-5 %K IBM SP-1 %K Intel Paragon %K Meiko Scientific CS-2 %K Cray Research T3D %X A common statistical problem is that of finding the median element in a set of data. This paper presents a fast and portable parallel algorithm for finding the median given a set of elements distributed across a parallel machine. In fact, our algorithm solves the general selection problem that requires the determination of the element of rank i, for an arbitrarily given integer i. Practical algorithms needed by our selection algorithm for the dynamic redistribution of data are also discussed. Our general framework is a distributed memory programming model enhanced by a set of communication primitives. We use efficient techniques for distributing, coalescing, and load balancing data as well as efficient combinations of task and data parallelism. The algorithms have been coded in SPLIT-C and run on a variety of platforms, including the Thinking Machines CM-5, IBM SP-1 and SP-2, Cray Research T3D, Meiko Scientific CS-2, Intel Paragon, and workstation clusters. Our experimental results illustrate the scalability and efficiency of our algorithms across different platforms and improve upon all the related experimental results known to the authors. %B Parallel Processing Symposium, 1996., Proceedings of IPPS '96, The 10th International %P 292 - 301 %8 1996/04// %G eng %R 10.1109/IPPS.1996.508072 %0 Journal Article %J Journal of Experimental Algorithmics (JEA) %D 1996 %T Practical parallel algorithms for personalized communication and integer sorting %A Bader, David A. %A Helman, David R. %A JaJa, Joseph F.
%X A fundamental challenge for parallel computing is to obtain high-level, architecture independent, algorithms which efficiently execute on general-purpose parallel machines. With the emergence of message passing standards such as MPI, it has become easier to design efficient and portable parallel algorithms by making use of these communication primitives. While existing primitives allow an assortment of collective communication routines, they do not handle an important communication event when most or all processors have non-uniformly sized personalized messages to exchange with each other. We focus in this paper on the h-relation personalized communication whose efficient implementation will allow high performance implementations of a large class of algorithms. While most previous h-relation algorithms use randomization, this paper presents a new deterministic approach for h-relation personalized communication with asymptotically optimal complexity for h > p^2. As an application, we present an efficient algorithm for stable integer sorting. The algorithms presented in this paper have been coded in Split-C and run on a variety of platforms, including the Thinking Machines CM-5, IBM SP-1 and SP-2, Cray Research T3D, Meiko Scientific CS-2, and the Intel Paragon. Our experimental results are consistent with the theoretical analysis and illustrate the scalability and efficiency of our algorithms across different platforms. In fact, they seem to outperform all similar algorithms known to the authors on these platforms. %B Journal of Experimental Algorithmics (JEA) %V 1 %8 1996/01// %@ 1084-6654 %G eng %U http://doi.acm.org/10.1145/235141.235148 %R 10.1145/235141.235148 %0 Conference Paper %B Real-Time Systems Symposium, IEEE International %D 1996 %T Predictability of real-time systems: a process-algebraic approach %A Natarajan, V.
%A Cleaveland, Rance %K process algebra %K real-time systems predictability %K testing-based semantic preorder %K process description language %K TPL %K timing behavior %K variability %K activity-completion times %K semantic preorder %K must-preorder %K optimality %X This paper presents a testing-based semantic preorder that relates real-time systems given in the process description language TPL on the basis of the predictability of their timing behavior. This predictability is measured in terms of the amount of variability present in processes' "activity-completion times". The semantic preorder is shown to coincide with an already existing, well-investigated implementation relation for TPL, the must-preorder. The optimality of our relation is also established by means of a full abstraction result. An example is provided to illustrate the utility of this work. %B Real-Time Systems Symposium, IEEE International %I IEEE Computer Society %C Los Alamitos, CA, USA %P 82 - 82 %8 1996/// %G eng %R http://doi.ieeecomputersociety.org/10.1109/REAL.1996.563703 %0 Journal Article %J Tools and Algorithms for the Construction and Analysis of Systems %D 1996 %T Priorities for modeling and verifying distributed systems %A Cleaveland, Rance %A Lüttgen, G. %A Natarajan, V. %A Sims, S. %B Tools and Algorithms for the Construction and Analysis of Systems %P 278 - 297 %8 1996/// %G eng %0 Journal Article %J CONCUR'96: Concurrency Theory %D 1996 %T A process algebra with distributed priorities %A Cleaveland, Rance %A Lüttgen, G. %A Natarajan, V. %B CONCUR'96: Concurrency Theory %P 34 - 49 %8 1996/// %G eng %0 Journal Article %J Fundamenta Informaticae %D 1995 %T Papers on Context: Theory and Practice %A Perlis, Don %B Fundamenta Informaticae %V 23 %P 145 - 148 %8 1995/// %G eng %N 2 %0 Journal Article %J Computer %D 1995 %T The Paradyn parallel performance measurement tool %A Miller, B. P %A Callaghan, M. D %A Cargille, J. M %A Hollingsworth, Jeffrey K %A Irvin, R. B %A Karavanic, K.
L %A Kunchithapadam, K. %A Newhall, T. %K Aerodynamics %K Automatic control %K automatic instrumentation control %K Debugging %K dynamic instrumentation %K flexible performance information %K high level languages %K insertion %K Instruments %K large-scale parallel program %K Large-scale systems %K measurement %K Paradyn parallel performance measurement tool %K Parallel machines %K parallel programming %K Performance Consultant %K Programming profession %K scalability %K software performance evaluation %K software tools %X Paradyn is a tool for measuring the performance of large-scale parallel programs. Our goal in designing a new performance tool was to provide detailed, flexible performance information without incurring the space (and time) overhead typically associated with trace-based tools. Paradyn achieves this goal by dynamically instrumenting the application and automatically controlling this instrumentation in search of performance problems. Dynamic instrumentation lets us defer insertion until the moment it is needed (and remove it when it is no longer needed); Paradyn's Performance Consultant decides when and where to insert instrumentation %B Computer %V 28 %P 37 - 46 %8 1995/11// %@ 0018-9162 %G eng %N 11 %R 10.1109/2.471178 %0 Conference Paper %B Parallel Processing Symposium, 1995. Proceedings., 9th International %D 1995 %T Parallel algorithms for database operations and a database operation for parallel algorithms %A Raman,R. %A Vishkin, Uzi %B Parallel Processing Symposium, 1995. Proceedings., 9th International %P 173 - 179 %8 1995/// %G eng %0 Journal Article %J ACM SIGPLAN Notices %D 1995 %T Parallel algorithms for image histogramming and connected components with an experimental study (extended abstract) %A Bader,David A. %A JaJa, Joseph F. 
%K connected components %K histogramming %K IMAGE PROCESSING %K image understanding %K Parallel algorithms %K scalable parallel processing %X This paper presents efficient and portable implementations of two useful primitives in image processing algorithms, histogramming and connected components. Our general framework is a single-address space, distributed memory programming model. We use efficient techniques for distributing and coalescing data as well as efficient combinations of task and data parallelism. Our connected components algorithm uses a novel approach for parallel merging which performs drastically limited updating during iterative steps, and concludes with a total consistency update at the final step. The algorithms have been coded in Split-C and run on a variety of platforms. Our experimental results are consistent with the theoretical analysis and provide the best known execution times for these two primitives, even when compared with machine-specific implementations. More efficient implementations of Split-C will likely result in even faster execution times. %B ACM SIGPLAN Notices %V 30 %P 123 - 133 %8 1995/08// %@ 0362-1340 %G eng %U http://doi.acm.org/10.1145/209937.209950 %N 8 %R 10.1145/209937.209950 %0 Journal Article %J IEEE Transactions on ComputersIEEE Trans. Comput. %D 1995 %T Parametric dispatching of hard real-time tasks %A Gerber,R. %A Pugh, William %A Saksena,M. %B IEEE Transactions on ComputersIEEE Trans. Comput. %V 44 %P 471 - 479 %8 1995/03// %@ 00189340 %G eng %U http://dl.acm.org/citation.cfm?id=626999 %N 3 %R 10.1109/12.372041 %0 Journal Article %J International Journal of Computer Vision %D 1995 %T Passive navigation as a pattern recognition problem %A Fermüller, Cornelia %X The most basic visual capabilities found in living organisms are based on motion. 
Machine vision, of course, does not have to copy animal vision, but the existence of reliably functioning vision modules in nature gives us some reason to believe that it is possible for an artificial system to work in the same or a similar way. In this article it is argued that many navigational capabilities can be formulated as pattern recognition problems. An appropriate retinotopic representation of the image would make it possible to extract the information necessary to solve motion-related tasks through the recognition of a set of locations on the retina. This argument is illustrated by introducing a representation of image motion by which an observer's egomotion could be derived from information globally encoded in the image-motion field. In the past, the problem of determining a system's own motion from dynamic imagery has been considered as one of the classical visual reconstruction problems, wherein local constraints have been employed to compute from exact 2-D image measurements (correspondence, optical flow) the relative 3-D motion and structure of the scene in view. The approach introduced here is based on new global constraints defined on local normal-flow measurements—the spatio-temporal derivatives of the image-intensity function. Classifications are based on orientations of normal-flow vectors, which allows selection of vectors that form global patterns in the image plane. The position of these patterns is related to the 3-D motion of the observer, and their localization provides the axis of rotation and the direction of translation. The constraints introduced are utilized in algorithmic procedures formulated as search techniques. These procedures are very stable, since they are not affected by small perturbations in the image measurements. As a matter of fact, the solution to the two directions of translation and rotation is not affected, as long as the measurement of the sign of the normal flow is correct. 
%B International Journal of Computer Vision %V 14 %P 147 - 158 %8 1995/// %@ 0920-5691 %G eng %U http://dx.doi.org/10.1007/BF01418980 %N 2 %0 Journal Article %J Neural computation %D 1995 %T Patterns of functional damage in neural network models of associative memory %A Ruppin, E. %A Reggia, James A. %B Neural computation %V 7 %P 1105 - 1127 %8 1995/// %G eng %N 5 %0 Report %D 1995 %T Perception of 3D Motion through Patterns of Visual Motion %A Fermüller, Cornelia %X Geometric considerations suggest that the problem of estimating a system's three-dimensional (3D) motion from a sequence of images, which has puzzled researchers in the fields of Computational Vision and Robotics as well as the Biological Sciences, can be addressed as a pattern recognition problem. Information for constructing the relevant patterns is found in spatial arrangements or gratings, that is, aggregations of orientations along which retinal motion information is estimated. The exact form of the gratings is defined by the shape of the retina or imaging surface; for a planar retina they are radial lines, concentric circles, as well as elliptic and hyperbolic curves, while for a spherical retina they become longitudinal and latitudinal circles for various axes. Considering retinal motion information computed normal to these gratings, patterns are found that have encoded in their shape and location on the retina subsets of the 3D motion parameters. The importance of these patterns is first that they depend only on the 3D motion and not on the scene in view, and second that they utilize only the sign of image motion along a set of directions defined by the gratings. %I Center for Automation Research, University of Maryland College Park %8 1995/05// %G eng %0 Report %D 1995 %T On the perturbation of Schur complement in positive semidefinite matrix %A Stewart, G.W. %X This note gives perturbation bounds for the Schur complement of a positive definite matrix in a positive semidefinite matrix.
%I Institute for Advanced Computer Studies, Univ of Maryland, College Park %V TR-95-38 %8 1995/// %G eng %0 Book Section %B Combinatorial Pattern Matching %D 1995 %T Polynomial-time algorithm for computing translocation distance between genomes %A Hannenhalli, Sridhar %E Galil, Zvi %E Ukkonen, Esko %X With the advent of large-scale DNA physical mapping and sequencing, studies of genome rearrangements are becoming increasingly important in evolutionary molecular biology. From a computational perspective, the study of evolution based on rearrangements leads to a rearrangement distance problem, i.e., computing the minimum number of rearrangement events required to transform one genome into another. Different types of rearrangement events give rise to a spectrum of interesting combinatorial problems. The complexity of most of these problems is unknown. Multichromosomal genomes frequently evolve by a rearrangement event called translocation which exchanges genetic material between different chromosomes. In this paper we study the translocation distance problem, modeling the evolution of genomes evolving by translocations. The translocation distance problem was recently studied for the first time by Kececioglu and Ravi, who gave a 2-approximation algorithm for computing translocation distance. In this paper we prove a duality theorem leading to a polynomial algorithm for computing translocation distance for the case when the orientations of the genes are known. This leads to an algorithm generating a most parsimonious (shortest) scenario, transforming one genome into another by translocations.
%B Combinatorial Pattern Matching %S Lecture Notes in Computer Science %I Springer Berlin / Heidelberg %V 937 %P 162 - 176 %8 1995/// %@ 978-3-540-60044-2 %G eng %U http://dx.doi.org/10.1007/3-540-60044-2_41 %0 Journal Article %J Genetics %D 1994 %T P element-mediated in vivo deletion analysis of white-apricot: deletions between direct repeats are strongly favored. %A Kurkulos, M. %A Weinberg, J. M. %A Roy, D. %A Mount, Stephen M. %X We have isolated and characterized deletions arising within a P transposon, P[hswa], in the presence of P transposase. P[hswa] carries white-apricot (wa) sequences, including a complete copia element, under the control of an hsp70 promoter, and resembles the original wa allele in eye color phenotype. In the presence of P transposase, P[hswa] shows a high overall rate (approximately 3%) of germline mutations that result in increased eye pigmentation. Of 234 derivatives of P[hswa] with greatly increased eye pigmentation, at least 205 carried deletions within copia. Of these, 201 were precise deletions between the directly repeated 276-nucleotide copia long terminal repeats (LTRs), and four were unique deletions. High rates of transposase-induced precise deletion were observed within another P transposon carrying unrelated 599-nucleotide repeats (yeast 2μ FLP recombinase target sites) separated by 5.7 kb. Our observation that P element-mediated deletion formation occurs preferentially between direct repeats suggests general methods for controlling deletion formation. %B Genetics %V 136 %P 1001 - 1011 %8 1994/03/01/ %@ 0016-6731, 1943-2631 %G eng %U http://www.genetics.org/content/136/3/1001 %N 3 %0 Conference Paper %B ICPR %D 1994 %T Page Segmentation Using Decision Integration and Wavelet Packet Basis %A Etemad, K.
%A David Doermann %A Chellappa, Rama %B ICPR %P 345 - 349 %8 1994/// %G eng %0 Journal Article %J Proceedings of the International Conference on New Methods in Language Processing, Manchester, UK %D 1994 %T A paradigm for non-head-driven parsing: Parameterized message-passing %A Dorr, Bonnie J %A Lin, D. %A Lee, J. %A Suh, S. %B Proceedings of the International Conference on New Methods in Language Processing, Manchester, UK %8 1994/// %G eng %0 Journal Article %J Information Processing Letters %D 1994 %T On the parallel complexity of digraph reachability %A Khuller, Samir %A Vishkin, Uzi %K combinatorial problems %K Parallel algorithms %X We formally show that the directed graph reachability problem can be reduced to several problems using a linear number of processors; hence an efficient parallel algorithm to solve any of these problems would imply an efficient parallel algorithm for the directed graph reachability problem. This formally establishes that all these problems are at least as hard as the s−t reachability problem. %B Information Processing Letters %V 52 %P 239 - 241 %8 1994/12/09/ %@ 0020-0190 %G eng %U http://www.sciencedirect.com/science/article/pii/0020019094001537 %N 5 %R 10.1016/0020-0190(94)00153-7 %0 Journal Article %J IEEE Computational Science & Engineering %D 1994 %T Parallel Computing: Emerging from a Time Warp %A O'Leary, Dianne P. %B IEEE Computational Science & Engineering %V 1 %P 1,15 - 1,15 %8 1994/// %G eng %N 4 %0 Journal Article %J Lecture Notes in Computer Science %D 1994 %T On a parallel-algorithms method for string matching problems %A Sahinalp, S. C %A Vishkin, Uzi %B Lecture Notes in Computer Science %V 778 %P 22 - 32 %8 1994/// %G eng %0 Book Section %B Algorithms and Complexity %D 1994 %T On a parallel-algorithms method for string matching problems (overview) %A Sahinalp, Suleyman %A Vishkin, Uzi %E Bonuccelli, M. %E Crescenzi, P. %E Petreschi, R.
%K Computer Science %B Algorithms and Complexity %S Lecture Notes in Computer Science %I Springer Berlin / Heidelberg %V 778 %P 22 - 32 %8 1994/// %@ 978-3-540-57811-6 %G eng %U http://dx.doi.org/10.1007/3-540-57811-0_3 %0 Journal Article %J Computer-Aided Design of Integrated Circuits and Systems, IEEE Transactions on %D 1994 %T A parallel-in-time method for the transient simulation of SOI devices with drain current overshoots %A Tai, G.-C. %A Korman, C.E. %A Mayergoyz, Issak D %K SOI devices %K drain current overshoots %K transient simulation %K two-dimensional simulation %K finite-difference methods %K fixed-point iteration technique %K Gummel iteration %K parallel-in-time algorithms %K time-domain analysis %K transient response %K SIMD architecture %K massively parallel computing %K Connection Machine %K CM Fortran %K CPU time %K semiconductor device models %K semiconductor-insulator boundaries %K silicon %K insulated gate field effect transistors %K metal-insulator-semiconductor devices %K iterative methods %K digital simulation %K electronic engineering computing %X This paper presents a new parallel-in-time algorithm for the two-dimensional transient simulation of SOI devices. With this approach, simulation in both space and time domains is performed in parallel. As a result, the CPU time is reduced significantly from the conventional serial-in-time method. This new approach fully exploits the inherent parallelism of the finite difference formulation of the basic semiconductor device equations and the massively parallel architecture of SIMD computers. The space domain computations are inherently parallel due to the nature of our technique of solving the finite-difference equations.
Time domain parallelism is achieved by shifting the potentials from previous time points to subsequent points one-step forward along the time axis with each Gummel iteration. This algorithm employs a fixed-point iteration technique, therefore a direct solution of matrix equations is avoided. The algorithm is especially suitable for the transient simulation of SOI devices that exhibit transient drain current overshoot. Numerical experiments show that the new parallel-in-time method is up to eight times faster than the conventional serial-in-time method in SOI transient simulations. The program is coded in CM Fortran for the Connection Machine %B Computer-Aided Design of Integrated Circuits and Systems, IEEE Transactions on %V 13 %P 1035 - 1044 %8 1994/08// %@ 0278-0070 %G eng %N 8 %R 10.1109/43.298039 %0 Journal Article %J Algorithmica %D 1994 %T Pattern matching in a digitized image %A Landau,G. M %A Vishkin, Uzi %B Algorithmica %V 12 %P 375 - 408 %8 1994/// %G eng %N 4 %0 Journal Article %J Artificial Intelligence Planning Systems: Proceedings of the Second International Conference (AIPS-94) %D 1994 %T Planning-based integrated decision support systems %A Bienkowski,M.A. %A desJardins, Marie %B Artificial Intelligence Planning Systems: Proceedings of the Second International Conference (AIPS-94) %P 196 - 201 %8 1994/// %G eng %0 Journal Article %J Journal Algorithms %D 1994 %T A primal-dual parallel approximation technique applied to weighted set and vertex covers %A Khuller, Samir %A Vishkin, Uzi %A Young,N. %B Journal Algorithms %V 17 %P 280 - 289 %8 1994/// %G eng %N 2 %0 Conference Paper %B Handbook of pattern recognition and image processing (vol. 2) %D 1994 %T Principles of computer vision %A Aloimonos, J. %A Rosenfeld, A. %B Handbook of pattern recognition and image processing (vol. 2) %P 1 - 15 %8 1994/// %G eng %0 Journal Article %J AI Magazine %D 1993 %T Pagoda: A Model for Autonomous Learning in Probabilistic Domains %A desJardins, Marie %X My Ph.D. 
dissertation describes PAGODA (probabilistic autonomous goal-directed agent), a model for an intelligent agent that learns autonomously in domains containing uncertainty. The ultimate goal of this line of research is to develop intelligent problem-solving and planning systems that operate in complex domains, largely function autonomously, use whatever knowledge is available to them, and learn from their experience. PAGODA was motivated by two specific requirements: The agent should be capable of operating with minimal intervention from humans, and it should be able to cope with uncertainty (which can be the result of inaccurate sensors, a nondeterministic environment, complexity, or sensory limitations). I argue that the principles of probability theory and decision theory can be used to build rational agents that satisfy these requirements. %B AI Magazine %V 14 %P 75 - 75 %8 1993/03/15/ %@ 0738-4602 %G eng %U https://www.aaai.org/ojs/index.php/aimagazine/article/viewArticle/1036 %N 1 %R 10.1609/aimag.v14i1.1036 %0 Journal Article %J Information and Computation %D 1993 %T On parallel integer merging %A Berkman,O. %A Vishkin, Uzi %B Information and Computation %V 106 %P 266 - 285 %8 1993/// %G eng %N 2 %0 Journal Article %J Real-Time Systems %D 1993 %T A partial evaluator for the Maruti hard real-time system %A Nirkhe,V. %A Pugh, William %B Real-Time Systems %V 5 %P 13 - 30 %8 1993/// %G eng %N 1 %0 Journal Article %J IEEE Transactions on Software Engineering %D 1993 %T Performance comparison of three modern DBMS architectures %A Delis,A. 
%A Roussopoulos, Nick %K client-server %K Computational modeling %K Computer architecture %K database management systems %K DBMS architectures %K design rationales %K functional components %K Indexes %K Local area networks %K Military computing %K Packaging %K Performance analysis %K performance evaluation %K RAD-UNIFY type %K simulation models %K simulation results %K Software architecture %K software architecture configurations %K software engineering %K Throughput %K Workstations %X The introduction of powerful workstations connected through local area networks (LANs) inspired new database management system (DBMS) architectures that offer high performance characteristics. The authors examine three such software architecture configurations: client-server (CS), the RAD-UNIFY type of DBMS (RU), and enhanced client-server (ECS). Their specific functional components and design rationales are discussed. Three simulation models are used to provide a performance comparison under different job workloads. Simulation results show that the RU almost always performs slightly better than the CS, especially under light workloads, and that ECS offers significant performance improvement over both CS and RU. Under reasonable update rates, the ECS over CS (or RU) performance ratio is almost proportional to the number of participating clients (for less than 32 clients). The authors also examine the impact of certain key parameters on the performance of the three architectures and show that ECS is more scalable than the other two. %B IEEE Transactions on Software Engineering %V 19 %P 120 - 138 %8 1993/02// %@ 0098-5589 %G eng %N 2 %R 10.1109/32.214830 %0 Journal Article %J SIAM Journal on Scientific Computing %D 1993 %T Performance Enhancements and Parallel Algorithms for Two Multilevel Preconditioners %A Elman, Howard %A Guo, Xian-Zhong %B SIAM Journal on Scientific Computing
%V 14 %P 890 - 890 %8 1993/// %@ 1064-8275 %G eng %U http://link.aip.org/link/SJOCE3/v14/i4/p890/s1&Agg=doi %N 4 %R 10.1137/0914055 %0 Journal Article %J Algorithms and Data Structures %D 1993 %T Point probe decision trees for geometric concept classes %A Arkin,E. %A Goodrich,M. %A Mitchell,J. %A Mount, Dave %A Piatko,C. %A Skiena,S. %X A fundamental problem in model-based computer vision is that of identifying to which of a given set of concept classes of geometric models an observed model belongs. Considering a "probe" to be an oracle that tells whether or not the observed model is present at a given point in an image, we study the problem of computing efficient strategies ("decision trees") for probing an image, with the goal of minimizing the number of probes necessary (in the worst case) to determine in which class the observed model belongs. We prove a hardness result and give strategies that obtain decision trees whose height is within a log factor of optimal. These results grew out of discussions that began in a series of workshops on Geometric Probing in Computer Vision, sponsored by the Center for Night Vision and Electro-Optics, Fort Belvoir, Virginia, and monitored by the U.S. Army Research Office. The views, opinions, and/or findings contained in this report are those of the authors and should not be construed as an official Department of the Army position, policy, or decision, unless so designated by other documentation. %B Algorithms and Data Structures %P 95 - 106 %8 1993/// %G eng %R 10.1007/3-540-57155-8_239 %0 Conference Paper %B Proceedings of the 1993 ACM conference on Computer science %D 1993 %T Potentials and limitations of pen-based computers %A Citrin,Wayne %A Halbert,Dan %A Hewitt,Carl %A Meyrowitz,Norm %A Shneiderman, Ben %X There are four possible genres of input devices that can be attached to personal workstations: keyboard, mouse, pen, and voice.
For investigating potentials and limitations of pen-based computers, we propose to compare those four categories as different types of man-machine communication channel. Even though arrow keys allow a limited scope of 2D capability in keyboard usage, the primary use of a keyboard is typing. Typing generates a linear sequence of discrete characters. The maximum speed of typing is roughly 10 characters per second, hence the bandwidth of 100 (10×10) bps. A mouse is used primarily for pointing (menu item/object selection) and then for dragging (moving and re-sizing objects). Mouse pointing generates discrete information about a point location on a 2D (planar) plane with the maximum speed of 2 clicks per second. Each pointing may generate 20 bits of information with the bandwidth of 40 (2×20) bps. Mouse dragging generates a continuous geometric pattern in 2D. Assuming the maximum rate of 40 selections per second in the eight (3 bits) possible directions of dragging, the bandwidth of mouse dragging peaks at 120 (40×3) bps. The usage of a pen on the planar flat surface of an LCD unit can be divided into scribing and tapping. By scribing we mean the generation of continuous pen strokes forming a character, a gesture, or a picture. Scribing includes drawing and gesturing. Tapping corresponds to a mouse click. Tapping may be considered a special kind of gesture, as in the PenPoint operating system. The bandwidth of scribing can be calculated in the same manner as for mouse dragging. With the maximum rate of 100 selections of direction per second for a pen, scribing may produce strokes with a speed of 300 (100×3) bps. The bandwidth of pen tapping is almost the same as that of mouse clicking, except that selection of a point is easier with a pen (3 taps per second) than with a mouse (2 clicks per second). Talking through a microphone generates a linear sequence of continuous speech with a high degree of redundancy.
Using the CELP speech compression algorithm, the bandwidth of normal speech can be reduced to 4800 bps. By vocal signaling we mean the generation of a sequence of discrete messages, each of which consists of different pitches and loudness levels. The maximum rate of signaling could be 5 messages per second with 10 differentiable pitches and 10 levels of loudness, producing the bandwidth of 35 (5×7) bps. Note that mouse dragging, pen scribing, and voice talking each produce continuous data objects. Only after quantization by sampling can the data objects be represented by a discrete data structure. By precision we mean the degree of ease in duplicating identical information using the same input technology. The keyboard is a high-precision device because there is no difficulty in generating the same character over and over again. Drawing with a mouse is more difficult than drawing with a pen, because a pen is easier to control than a mouse. Precision of voice is low because it is difficult to duplicate a sound of the same pitch and the same volume. By latency we mean the set-up time necessary to start generating a stream of information. The latency of using a keyboard and mouse is larger than the latency of using pen and voice. By translation we mean the process of converting the information generated by the input device into a sequence of discrete symbols, i.e., a transduction of a continuous data type to a discrete data type. Translation for a keyboard is not necessary. Translation of a mouse click on a menu item requires a finite table look-up, which is a rather simple operation. Translation of pen scribing and mouse dragging involves a handwriting recognition algorithm, which is still a difficult problem at the present time. Voice recognition is a very difficult problem.
With the assumption that real-time translation is feasible for handwriting recognition and speech recognition, the efficiency of an input device for text entry can be measured by how many characters can be entered in a second (cps). A simulated keyboard on a CRT is used for entering text with a mouse. Longhand writing on a pen computer is used for a pen. When personal workstations become down-sized, the physical dimension of an input/output device becomes a dominant factor for the mobility of workstations. Keyboard and mouse are portable but intrusive. A wireless pen is mobile and less intrusive. Voice can be ubiquitous but intrusive. One conclusion we can draw from the above analysis is that the pen is mightier than the mouse. A pen can replace a mouse any time, any place. However, keyboard, pen, and voice have different strong points and weak points. They complement each other. Therefore, we predict that future workstations will carry a multi-modal user interface with any combination of keyboard, pen, and voice. %B Proceedings of the 1993 ACM conference on Computer science %S CSC '93 %I ACM %C New York, NY, USA %P 536 - 539 %8 1993/// %@ 0-89791-558-5 %G eng %U http://doi.acm.org/10.1145/170791.171171 %R 10.1145/170791.171171 %0 Journal Article %J Sparks of innovation in human-computer interaction %D 1993 %T Preface to Sparks of Innovation in Human-Computer Interaction %A Shneiderman, Ben %B Sparks of innovation in human-computer interaction %8 1993/// %G eng %0 Conference Paper %B Proceedings of the 15th Annual Conference of the Cognitive Science Society, Boulder, Colorado %D 1993 %T Presentations and this and that: logic in action %A Miller,M. %A Perlis, Don %B Proceedings of the 15th Annual Conference of the Cognitive Science Society, Boulder, Colorado %V 251 %8 1993/// %G eng %0 Journal Article %J Systems, Man and Cybernetics, IEEE Transactions on %D 1993 %T Probabilistic analysis of some navigation strategies in a dynamic environment %A Sharma, R.
%A Mount, Dave %A Aloimonos, J. %B Systems, Man and Cybernetics, IEEE Transactions on %V 23 %P 1465 - 1474 %8 1993/// %G eng %N 5 %0 Conference Paper %B Document Analysis and Recognition, 1993., Proceedings of the Second International Conference on %D 1993 %T The processing of form documents %A David Doermann %A Rosenfeld, A. %K automatic feature extraction %K business documents %K form documents %K form handling %K generic modeling %K known forms %K model generation %K non-form markings %K optimal feature set %K specialized detectors %K stroke reconstruction %K stroke width properties %X An overview of an approach to the generic modeling and processing of known forms is presented. The system provides a methodology by which models are generated from regions in the document based on their usage. Automatic extraction of an optimal set of features to be used for registration is proposed, and it is shown how specialized detectors can be designed for each feature based on their position, orientation and width properties. Registration of the form with the model is accomplished using probing to establish correspondence. Form components which are corrupted by markings are detected and isolated, the intersections are interpreted and the properties of the non-form markings are used to reconstruct the strokes through the intersections.
The feasibility of these ideas is demonstrated through an implementation of key components of the system. %B Document Analysis and Recognition, 1993., Proceedings of the Second International Conference on %P 497 - 501 %8 1993/10// %G eng %R 10.1109/ICDAR.1993.395687 %0 Journal Article %J Sparks of innovation in human-computer interaction %D 1993 %T Protecting rights in user interface designs %A Shneiderman, Ben %B Sparks of innovation in human-computer interaction %P 351 - 351 %8 1993/// %G eng %0 Journal Article %J International Journal of Computational Geometry and Applications %D 1992 %T A parallel algorithm for enclosed and enclosing triangles %A Chandran,S. %A Mount, Dave %X We consider the problems of computing the largest area triangle enclosed within a given n-sided convex polygon and the smallest area triangle which encloses a given convex polygon. We show that these problems are closely related by presenting a single sequential linear time algorithm which essentially solves both problems simultaneously. We also present a cost-optimal parallel algorithm that solves both of these problems in O(log log n) time using n/log log n processors on a CRCW PRAM. In order to achieve these bounds we develop new techniques for the design of parallel algorithms for computational problems involving the rotating calipers method. %B International Journal of Computational Geometry and Applications %V 2 %P 191 - 214 %8 1992/// %G eng %N 2 %R 10.1142/S0218195992000123 %0 Journal Article %J Journal of Algorithms %D 1992 %T A parallel blocking flow algorithm for acyclic networks %A Vishkin, Uzi %B Journal of Algorithms %V 13 %P 489 - 501 %8 1992/// %G eng %N 3 %0 Journal Article %J Algorithmica %D 1992 %T Parallel computational geometry of rectangles %A Chandran,S. %A Kim,S. K %A Mount, Dave %X Rectangles in a plane provide a very useful abstraction for a number of problems in diverse fields.
In this paper we consider the problem of computing geometric properties of a set of rectangles in the plane. We give parallel algorithms for a number of problems using n processors, where n is the number of upright rectangles. Specifically, we present algorithms for computing the area, perimeter, eccentricity, and moment of inertia of the region covered by the rectangles in O(log n) time. We also present algorithms for computing the maximum clique and connected components of the rectangles in O(log n) time. Finally, we give algorithms for finding the entire contour of the rectangles and the medial axis representation of a given n × n binary image in O(n) time. Our results are faster than previous results and optimal (to within a constant factor). %B Algorithmica %V 7 %P 25 - 49 %8 1992/// %G eng %N 1 %R 10.1007/BF01758750 %0 Journal Article %J J. Parallel Distrib. Comput. %D 1992 %T A parallel pipelined strategy for evaluating linear recursive predicates in a multiprocessor environment. %A Raschid, Louiqa %A Su,S. Y.W %B J. Parallel Distrib. Comput. %V 14 %P 146 - 162 %8 1992/// %G eng %N 2 %0 Conference Paper %B Proceedings of the 1992 ACM/IEEE conference on Supercomputing %D 1992 %T Parallel program performance metrics: a comparison and validation %A Hollingsworth, Jeffrey K %A Miller, B. P %B Proceedings of the 1992 ACM/IEEE conference on Supercomputing %P 4 - 13 %8 1992/// %G eng %0 Journal Article %J Parallel Computing %D 1992 %T Parallel sparse Cholesky factorization on a shared memory multiprocessor %A Zhang, G. %A Elman, Howard %K linear algebra %K Parallel algorithms %K shared memory multiprocessor %K sparse Cholesky factorization %X Parallel implementations of Cholesky factorization for sparse symmetric positive definite matrices are considered on a shared memory multiprocessor computer. Two column-oriented schemes, known as the column-Cholesky algorithm and the fan-in algorithm, along with enhancements of each, are implemented and discussed.
High parallel efficiency of the column-Cholesky algorithm and its enhancement is demonstrated for test problems. A detailed investigation of the performance of the fan-in algorithm and its enhancement, the compute-ahead fan-in algorithm, is made to study the effects of overhead associated with the fan-in based schemes. %B Parallel Computing %V 18 %P 1009 - 1022 %8 1992/09// %@ 0167-8191 %G eng %U http://www.sciencedirect.com/science/article/pii/016781919290014X %N 9 %R 10.1016/0167-8191(92)90014-X %0 Conference Paper %B Proceedings of the 14th conference on Computational linguistics - Volume 2 %D 1992 %T Parameterization of the interlingua in machine translation %A Dorr, Bonnie J %X The task of designing an interlingual machine translation system is difficult, first because the designer must have a knowledge of the principles underlying crosslinguistic distinctions for the languages under consideration, and second because the designer must then be able to incorporate this knowledge effectively into the system. This paper provides a catalog of several types of distinctions among Spanish, English, and German, and describes a parametric approach that characterizes these distinctions, both at the syntactic level and at the lexical-semantic level. The approach described here is implemented in a system called UNITRAN, a machine translation system that translates English, Spanish, and German bidirectionally.
%B Proceedings of the 14th conference on Computational linguistics - Volume 2 %S COLING '92 %I Association for Computational Linguistics %C Stroudsburg, PA, USA %P 624 - 630 %8 1992/// %G eng %U http://dx.doi.org/10.3115/992133.992167 %R 10.3115/992133.992167 %0 Conference Paper %B Proceedings of the 30th annual meeting on Association for Computational Linguistics %D 1992 %T A parameterized approach to integrating aspect with lexical-semantics for machine translation %A Dorr, Bonnie J %X This paper discusses how a two-level knowledge representation model for machine translation integrates aspectual information with lexical-semantic information by means of parameterization. The integration of aspect with lexical-semantics is especially critical in machine translation because of the lexical selection and aspectual realization processes that operate during the production of the target-language sentence: there are often a large number of lexical and aspectual possibilities to choose from in the production of a sentence from a lexical semantic representation. Aspectual information from the source-language sentence constrains the choice of target-language terms. In turn, the target-language terms limit the possibilities for generation of aspect. Thus, there is a two-way communication channel between the two processes. This paper will show that the selection/realization processes may be parameterized so that they operate uniformly across more than one language and it will describe how the parameter-based approach is currently being used as the basis for extraction of aspectual information from corpora. 
%B Proceedings of the 30th annual meeting on Association for Computational Linguistics %S ACL '92 %I Association for Computational Linguistics %C Stroudsburg, PA, USA %P 257 - 264 %8 1992/// %G eng %U http://dx.doi.org/10.3115/981967.982000 %R 10.3115/981967.982000 %0 Journal Article %J Computing Systems in Engineering %D 1992 %T PARTI primitives for unstructured and block structured problems %A Sussman, Alan %A Saltz, J. %A Das,R. %A Gupta,S. %A Mavriplis,D. %A Ponnusamy,R. %A Crowley,K. %X This paper describes a set of primitives (PARTI) developed to efficiently execute unstructured and block structured problems on distributed memory parallel machines. We present experimental data from a three-dimensional unstructured Euler solver run on the Intel Touchstone Delta to demonstrate the usefulness of our methods. %B Computing Systems in Engineering %V 3 %P 73 - 86 %8 1992/// %@ 0956-0521 %G eng %U http://www.sciencedirect.com/science/article/pii/0956052192900962 %N 1-4 %R 10.1016/0956-0521(92)90096-2 %0 Conference Paper %B Proceedings of the 19th ACM SIGPLAN-SIGACT symposium on Principles of programming languages %D 1992 %T Partial evaluation of high-level imperative programming languages with applications in hard real-time systems %A Nirkhe,V. %A Pugh, William %B Proceedings of the 19th ACM SIGPLAN-SIGACT symposium on Principles of programming languages %P 269 - 280 %8 1992/// %G eng %0 Conference Paper %B , 11th IAPR International Conference on Pattern Recognition, 1992. Vol.I. Conference A: Computer Vision and Applications, Proceedings %D 1992 %T Perceptual computational advantages of tracking %A Fermüller, Cornelia %A Aloimonos, J. 
%K active vision %K Automation %K Employment %K fixation %K image intensity function %K Image motion analysis %K IMAGE PROCESSING %K Motion estimation %K Nonlinear optics %K Optical computing %K Optical sensors %K parameter estimation %K pattern recognition %K perceptual computational advantages %K spatiotemporal derivatives %K Spatiotemporal phenomena %K tracking %K unrestricted motion %K visual flow measurements %X The paradigm of active vision advocates studying visual problems in the form of modules that are directly related to a visual task for observers that are active. It is argued that in many cases when an object is moving in an unrestricted manner (translation and rotation) in the 3D world only the motion's translational components are of interest. For a monocular observer, using only the normal flow-the spatiotemporal derivatives of the image intensity function-the authors solve the problem of computing the direction of translation. Their strategy uses fixation and tracking. Fixation simplifies much of the computation by placing the object at the center of the visual field, and the main advantage of tracking is the accumulation of information over time. The authors show how tracking is accomplished using normal flow measurements and use it for two different tasks in the solution process. First, it serves as a tool to compensate for the lack of existence of an optical flow field and thus to estimate the translation parallel to the image plane; and second, it gathers information about the motion component perpendicular to the image plane %B , 11th IAPR International Conference on Pattern Recognition, 1992. Vol.I. 
Conference A: Computer Vision and Applications, Proceedings %I IEEE %P 599 - 602 %8 1992/09/30/Aug-3 %@ 0-8186-2910-X %G eng %R 10.1109/ICPR.1992.201633 %0 Book Section %B First IEEE Conference on Control Applications %D 1992 %T On the Precise Loop Transfer Recovery and Transmission Zeroes %A Monahemi,M. %A Barlow,J. %A O'Leary, Dianne P. %B First IEEE Conference on Control Applications %C Dayton, Ohio %8 1992/09// %G eng %0 Conference Paper %B Proceedings of the 6th international conference on Supercomputing %D 1992 %T Preconditioning parallel multisplittings for solving linear systems of equations %A Huang,Chiou-Ming %A O'Leary, Dianne P. %X We consider the practical implementation of Krylov subspace methods (conjugate gradients, GMRES, etc.) for parallel computers in the case where the preconditioning matrix is a multisplitting. The algorithm can be efficiently implemented by dividing the work into tasks that generate search directions and a single task that minimizes over the resulting subspace. Each task is assigned to a subset of processors. It is not necessary for the minimization task to send information to the direction generating tasks, and this leads to high utilization with a minimum of synchronization. We study the convergence properties of various forms of the algorithm.
%B Proceedings of the 6th international conference on Supercomputing %S ICS '92 %I ACM %C New York, NY, USA %P 478 - 484 %8 1992/// %@ 0-89791-485-6 %G eng %U http://doi.acm.org/10.1145/143369.143454 %R 10.1145/143369.143454 %0 Conference Paper %B Proceedings of the 14th conference on Computational linguistics-Volume 2 %D 1992 %T Probabilistic tree-adjoining grammar as a framework for statistical natural language processing %A Resnik, Philip %B Proceedings of the 14th conference on Computational linguistics-Volume 2 %P 418 - 424 %8 1992/// %G eng %0 Journal Article %J SIAM Journal on Computing %D 1992 %T Processor efficient parallel algorithms for the two disjoint paths problem and for finding a Kuratowski homeomorph %A Khuller, Samir %A Mitchell,S. G %A Vazirani,V. V %B SIAM Journal on Computing %V 21 %P 486 - 486 %8 1992/// %G eng %0 Journal Article %J SIAM Journal on Computing %D 1991 %T Parallel Algorithms for Channel Routing in the Knock-Knee Model %A JaJa, Joseph F. %A Chang,Shing-Chong %K channel routing %K Layout %K left-edge algorithm %K line packing %K Parallel algorithms %K VLSI design %X The channel routing problem of a set of two-terminal nets in the knock-knee model is considered. A new approach to route all the nets within $d$ tracks, where $d$ is the density, such that the corresponding layout can be realized with three layers is developed. The routing and the layer assignment algorithms run in $O(\log n)$ time with $n / \log n$ processors on the CREW PRAM model under the reasonable assumption that all terminals lie in the range $[1,N]$, where $N = O(n)$. %B SIAM Journal on Computing %V 20 %P 228 - 245 %8 1991/// %G eng %U http://link.aip.org/link/?SMJ/20/228/1 %N 2 %R 10.1137/0220014 %0 Journal Article %J Integration, the VLSI Journal %D 1991 %T Parallel algorithms for VLSI routing %A JaJa, Joseph F. 
%K channel routing %K detailed routing %K global routing %K Parallel algorithms %K river routing %K VLSI routing %X With the increase in the design complexity of VLSI systems, there is an ever increasing need for efficient design automation tools. Parallel processing could open up the way for substantially faster and cost-effective VLSI design tools. In this paper, we review some of the basic parallel algorithms that have been recently developed to handle problems arising in VLSI routing. We also include some results that have not appeared in the literature before. These results indicate that existing parallel algorithmic techniques can efficiently handle many VLSI routing problems. Our emphasis will be on outlining some of the basic parallel strategies with appropriate pointers to the literature for more details. %B Integration, the VLSI Journal %V 12 %P 305 - 320 %8 1991/12// %@ 0167-9260 %G eng %U http://www.sciencedirect.com/science/article/pii/016792609190027I %N 3 %R 10.1016/0167-9260(91)90027-I %0 Journal Article %J Journal of Algorithms %D 1991 %T On parallel hashing and integer sorting %A Matias,Y. %A Vishkin, Uzi %B Journal of Algorithms %V 12 %P 573 - 606 %8 1991/// %G eng %N 4 %0 Report %D 1991 %T Parallel radiosity techniques for mesh-connected SIMD computers %A Varshney, Amitabh %X This thesis investigates parallel radiosity techniques for highly-parallel, mesh-connected SIMD computers. The approaches studied differ along two orthogonal dimensions: the method of sampling (by ray-casting or by environment-projection) and the method of mapping objects to processors (by object-space-based methods or by a balanced-load method). The environment-projection approach has been observed to perform better than the ray-casting approaches. For the dataset studied, the balanced-load method appears promising.
Spatially subdividing the dataset without taking the potential light interactions into account has been observed to violate the locality property of radiosity. This suggests that object-space-based methods for radiosity must take visibility into account during subdivision to achieve any speedups based on exploiting the locality property of radiosity. This thesis also investigates the reuse patterns of form-factors in perfectly diffuse environments during radiosity iterations. Results indicate that reuse is sparse even when significant convergence is achieved. %I DTIC Document %C NORTH CAROLINA UNIV AT CHAPEL HILL DEPT OF COMPUTER SCIENCE %8 1991/// %G eng %0 Conference Paper %B Proceedings of the First International Conference on Parallel and Distributed Information Systems, 1991 %D 1991 %T Parallel transitive closure computations using topological sort %A Hua,K. A %A Hannenhalli, Sridhar %K Computer science %K Concurrent computing %K data partitioning %K Database systems %K database theory %K deductive databases %K File systems %K horizontal partitioning %K joins %K local data fragments %K message passing multiprocessor system %K Multiprocessing systems %K Parallel algorithms %K PARALLEL PROCESSING %K parallel programming %K parallel transitive closure %K processing nodes %K relation tuples %K Relational databases %K sorting %K topological sort %X Deals with parallel transitive closure computations. The sort-based approach introduced sorts the tuples of the relation into topological order, and the sorted relation is then horizontally partitioned and distributed across several processing nodes of a message passing multiprocessor system. This data partitioning strategy allows the transitive closures of the local data fragments to be computed in parallel with no interprocessor communication. The construction of the final result then requires only a small number of joins. Extensive analytical results are included in the paper as well.
They show that the proposed technique leads to a speedup that is essentially linear in the number of processors. Its performance is significantly better than that of the recently published hashless parallel algorithm. %B Proceedings of the First International Conference on Parallel and Distributed Information Systems, 1991 %I IEEE %P 122 - 129 %8 1991/12/04/6 %@ 0-8186-2295-4 %G eng %R 10.1109/PDIS.1991.183079 %0 Journal Article %J IEEE Transactions on Knowledge and Data Engineering %D 1991 %T A pipeline N-way join algorithm based on the 2-way semijoin program %A Roussopoulos, Nick %A Kang,H. %K 2-way semijoin program %K backward size reduction %K Bandwidth %K Computer networks %K Costs %K Data communication %K data transmission %K Database systems %K database theory %K Delay %K distributed databases %K distributed query %K forward size reduction %K intermediate results %K Local area networks %K network %K Parallel algorithms %K pipeline N-way join algorithm %K pipeline processing %K Pipelines %K programming theory %K Query processing %K Relational databases %K relational operator %K SITES %K Workstations %X The semijoin has been used as an effective operator in reducing data transmission and processing over a network that allows forward size reduction of relations and intermediate results generated during the processing of a distributed query. The authors propose a relational operator, the two-way semijoin, which enhances the semijoin with backward size reduction capability for more cost-effective query processing. A pipeline N-way join algorithm for joining the reduced relations residing on N sites is introduced. The main advantage of this algorithm is that it eliminates the need for transferring and storing intermediate results among the sites.
A set of experiments showing that the proposed algorithm outperforms all known conventional join algorithms that generate intermediate results is included. %B IEEE Transactions on Knowledge and Data Engineering %V 3 %P 486 - 495 %8 1991/12// %@ 1041-4347 %G eng %N 4 %R 10.1109/69.109109 %0 Journal Article %J Theoretical Computer Science %D 1991 %T Planar graph coloring is not self-reducible, assuming P ≠ NP %A Khuller, Samir %A Vazirani,Vijay V. %X We show that obtaining the lexicographically first four coloring of a planar graph is NP-hard. This shows that planar graph four-coloring is not self-reducible, assuming P ≠ NP. One consequence of our result is that the schema of Jerrum et al. (1986) cannot be used for approximately counting the number of four colorings of a planar graph. These results extend to planar graph k-coloring, for k ⩾ 4. %B Theoretical Computer Science %V 88 %P 183 - 189 %8 1991/09/30/ %@ 0304-3975 %G eng %U http://www.sciencedirect.com/science/article/pii/030439759190081C %N 1 %R 10.1016/0304-3975(91)90081-C %0 Journal Article %J Proceedings of the National Academy of Sciences %D 1991 %T Polyadenylylation in copia requires unusually distant upstream sequences %A Kurkulos,M. %A Weinberg,J. M. %A Pepling,M. E. %A Mount, Stephen M. %X Retroviruses and related genetic elements generate terminally redundant RNA products by differential polyadenylylation within a long terminal repeat. Expression of the white-apricot (wa) allele of Drosophila melanogaster, which carries an insertion of the 5.1-kilobase retrovirus-like transposable element copia in a small intron, is influenced by signals within copia. By using this indicator, we have isolated a 518-base-pair deletion, 312 base pairs upstream of the copia polyadenylylation site, that is phenotypically like much larger deletions and eliminates RNA species polyadenylylated in copia.
This requirement of distant upstream sequences for copia polyadenylylation has implications for the expression of many genetic elements bearing long terminal repeats. %B Proceedings of the National Academy of Sciences %V 88 %P 3038 - 3042 %8 1991/04/15/ %@ 0027-8424, 1091-6490 %G eng %U http://www.pnas.org/content/88/8/3038 %N 8 %0 Conference Proceedings %B Proceedings of the Eighth International Workshop on Machine Learning %D 1991 %T Probabilistic evaluation of bias for learning systems %A desJardins, Marie %B Proceedings of the Eighth International Workshop on Machine Learning %P 495 - 499 %8 1991/// %G eng %0 Journal Article %J Noûs %D 1991 %T Putting One's Foot in One's Head–Part I: Why %A Perlis, Don %B Noûs %P 435 - 455 %8 1991/// %G eng %0 Journal Article %J CVGIP: Graphical Models and Image Processing %D 1991 %T Pyramid computation of neighbor distance statistics in dot patterns %A Banerjee,Saibal %A Mount, Dave %A Rosenfeld,Azriel %X This paper describes an algorithm for computing statistics of Voronoi neighbor distances in a dot pattern, using a cellular pyramid computer, in a logarithmic number of computational steps. Given a set of dots in a square region of the digital plane, the algorithm determines with high probability the Voronoi neighbors of the dots in the interior of the region and then computes statistics of the neighbor distances. An algorithm of this type may account for the ability of humans to perceive at a glance whether the dots in a pattern are randomly or regularly spaced, i.e., their neighbor distances have high or low variance.
%B CVGIP: Graphical Models and Image Processing %V 53 %P 373 - 381 %8 1991/07// %@ 1049-9652 %G eng %U http://www.sciencedirect.com/science/article/pii/104996529190040Q %N 4 %R 10.1016/1049-9652(91)90040-Q %0 Journal Article %J Journal of Algorithms %D 1990 %T Packing and covering the plane with translates of a convex polygon %A Mount, Dave %A Silverman,Ruth %X A covering of the Euclidean plane by a polygon P is a system of translated copies of P whose union is the plane, and a packing of P in the plane is a system of translated copies of P whose interiors are disjoint. A lattice covering is a covering in which the translates are defined by the points of a lattice, and a lattice packing is defined similarly. We show that, given a convex polygon P with n vertices, the densest lattice packing of P in the plane can be found in O(n) time. We also show that the sparsest lattice covering of the plane by a centrally symmetric convex polygon can be solved in O(n) time. Our approach utilizes results from classical geometry that reduce these packing and covering problems to the problems of finding certain extremal enclosed figures within the polygon. %B Journal of Algorithms %V 11 %P 564 - 580 %8 1990/12// %@ 0196-6774 %G eng %U http://www.sciencedirect.com/science/article/pii/019667749090010C %N 4 %R 10.1016/0196-6774(90)90010-C %0 Journal Article %J Journal of Parallel and Distributed Computing %D 1990 %T Parallel algorithm for the solution of nonlinear Poisson equation of semiconductor device theory and its implementation on the MPP %A Darling,J. P. %A Mayergoyz, Issak D %X The solution of the nonlinear Poisson equation of semiconductor device theory is important for the design of submicron devices used in VLSI circuits. The conventional technique applied for the solution of this equation is based on the application of Newton's method to simultaneous discretized equations.
This technique requires large computational and storage capacities to handle the fine meshes associated with the modeling of submicron semiconductor devices. A new algorithm for the numerical solution of this equation has recently been developed. One advantage that this algorithm has over conventional techniques is that it is inherently parallel, and is thus well suited to implementation on a computer with large parallelism. This paper describes the initial implementation and testing of this algorithm on NASA's Massively Parallel Processor (MPP). %B Journal of Parallel and Distributed Computing %V 8 %P 161 - 168 %8 1990/02// %@ 0743-7315 %G eng %U http://www.sciencedirect.com/science/article/pii/074373159090090C %N 2 %R 10.1016/0743-7315(90)90090-C %0 Journal Article %J Parallel Computing %D 1990 %T Parallel QR factorization by Householder and modified Gram-Schmidt algorithms %A O'Leary, Dianne P. %A Whitman,Peter %K Gram-Schmidt algorithm %K Householder algorithm %K Message passing systems %K QR factorization %X In this paper, the parallel implementation of two algorithms for forming a QR factorization of a matrix is studied. We propose parallel algorithms for the modified Gram-Schmidt and the Householder algorithms on message passing systems in which the matrix is distributed by blocks or rows. The models that predict performance of the algorithms are validated by experimental results on several parallel machines. %B Parallel Computing %V 16 %P 99 - 112 %8 1990/11// %@ 0167-8191 %G eng %U http://www.sciencedirect.com/science/article/pii/0167819190901634 %N 1 %R 10.1016/0167-8191(90)90163-4 %0 Book Section %B Diagnostic Monitoring of Skill and Knowledge Acquisition %D 1990 %T Parsimonious covering theory in cognitive diagnosis and adaptive instruction %A Reggia, James A. %A D'Autrechy,C.
L %B Diagnostic Monitoring of Skill and Knowledge Acquisition %I Psychology Press %P 510 %8 1990/// %@ 089859992X, 9780898599923 %G eng %0 Journal Article %J Image and Vision Computing %D 1990 %T Perspective approximations %A Aloimonos, J. %K orthoperspective %K paraperspective %K perspective approximations %X In recent years, researchers in computer vision working on problems such as object recognition, shape reconstruction, shape from texture, shape from contour, pose estimation, etc., have employed in their analyses approximations of the perspective projection as the image formation process. Depending on the task, these approximations often yield very good results and present the advantage of simplicity. Indeed when one relates lengths, angles or areas on the image with the respective units in the 3D world assuming perspective projection, the resulting expressions are very complex, and consequently they complicate the recovery process. However, if we assume that the image is formed with a projection which is a good approximation of the perspective, then the recovery process becomes easier. Two such approximations are described, the paraperspective and the orthoperspective, and it is shown that for some tasks the error introduced by the use of such an approximation is negligible. Applications of these projections to the problems of shape from texture, shape from contour, and object recognition related problems (such as determining the view vector and pose estimation) are also described. %B Image and Vision Computing %V 8 %P 179 - 192 %8 1990/08// %@ 0262-8856 %G eng %U http://www.sciencedirect.com/science/article/pii/026288569090064C %N 3 %R 10.1016/0262-8856(90)90064-C %0 Journal Article %J Neural Computation %D 1990 %T Phase transitions in connectionist models having rapidly varying connection strengths %A Reggia, James A. %A Edwards,M.
%B Neural Computation %V 2 %P 523 - 535 %8 1990/// %G eng %N 4 %0 Journal Article %J CONCUR'90 Theories of Concurrency: Unification and Extension %D 1990 %T A preorder for partial process specifications %A Cleaveland, Rance %A Steffen,B. %B CONCUR'90 Theories of Concurrency: Unification and Extension %P 141 - 151 %8 1990/// %G eng %0 Journal Article %J Information and Computation %D 1990 %T Priorities in process algebras %A Cleaveland, Rance %A Hennessy,Matthew %X An operational semantics for an algebraic theory of concurrency that incorporates a notion of priority into the definition of the execution of actions is developed. An equivalence based on strong observational equivalence is defined and shown to be a congruence, and a complete axiomatization is given for finite terms. Several examples highlight the novelty and usefulness of our approach. %B Information and Computation %V 87 %P 58 - 77 %@ 0890-5401 %G eng %U http://www.sciencedirect.com/science/article/pii/089054019090059Q %N 1-2 %R 10.1016/0890-5401(90)90059-Q %0 Journal Article %J Advances in Computing and Information—ICCI'90 %D 1990 %T Probabilistic analysis of set operations with constant-time set equality test %A Pugh, William %B Advances in Computing and Information—ICCI'90 %P 62 - 71 %8 1990/// %G eng %0 Conference Paper %B Proceedings of 10th International Conference on Pattern Recognition, 1990 %D 1990 %T Purposive and qualitative active vision %A Aloimonos, J.
%K active vision %K Automation %K brain models %K complex visual tasks %K Computer vision %K environmental knowledge %K highly sophisticated navigational tasks %K HUMANS %K Image reconstruction %K intentions %K Kinetic theory %K Laboratories %K Medusa %K Motion analysis %K Navigation %K planning %K planning (artificial intelligence) %K purposive-qualitative vision %K recovery problem %K Robust stability %K Robustness %K SHAPE %K stability %X The traditional view of the problem of computer vision as a recovery problem is questioned, and the paradigm of purposive-qualitative vision is offered as an alternative. This paradigm considers vision as a general recognition problem (recognition of objects, patterns or situations). To demonstrate the usefulness of the framework, the design of the Medusa of CVL is described. It is noted that this machine can perform complex visual tasks without reconstructing the world. If it is provided with intentions, knowledge of the environment, and planning capabilities, it can perform highly sophisticated navigational tasks. It is explained why the traditional structure from motion problem cannot be solved in some cases and why there is reason to be pessimistic about the optimal performance of a structure from motion module. New directions for future research on this problem in the recovery paradigm, e.g., research on stability or robustness, are suggested. %B Proceedings of 10th International Conference on Pattern Recognition, 1990 %I IEEE %V i %P 346-360 vol.1 %8 1990/06/16/21 %@ 0-8186-2062-5 %G eng %R 10.1109/ICPR.1990.118128 %0 Journal Article %J Telematics and Informatics %D 1989 %T Parallel plan execution with self-processing networks %A D'Autrechy,C. L %A Reggia, James A. %B Telematics and Informatics %V 6 %P 145 - 157 %8 1989/// %G eng %N 3-4 %0 Journal Article %J Artificial Intelligence in Medicine %D 1989 %T Parsimonious covering as a method for natural language interfaces to expert systems %A Dasigi,V.
R %A Reggia, James A. %B Artificial Intelligence in Medicine %V 1 %P 49 - 60 %8 1989/// %G eng %N 1 %0 Journal Article %J Circuits and Systems, IEEE Transactions on %D 1988 %T Parallel algorithms for planar graph isomorphism and related problems %A JaJa, Joseph F. %A Kosaraju,S.R. %K computational complexity %K graph theory %K parallel algorithms %K planar graph isomorphism %K single-function coarsest partitioning problem %K triconnected components %K two-dimensional processor array %K CREW-PRAM model %K mesh computation models %X Parallel algorithms for planar graph isomorphism and several related problems are presented. Two models of parallel computation are considered: the CREW-PRAM model and the two-dimensional array of processors. The results include O(√n)-time mesh algorithms for finding a good separating cycle and the triconnected components of a planar graph, and for solving the single-function coarsest partitioning problem. %B Circuits and Systems, IEEE Transactions on %V 35 %P 304 - 311 %8 1988/03// %@ 0098-4094 %G eng %N 3 %R 10.1109/31.1743 %0 Report %D 1988 %T Parallel Algorithms for Wiring Module Pins to Frame Pads %A Chang,Shing-Chong %A JaJa, Joseph F. %K *ALGORITHMS %K *PARALLEL PROCESSING %K COMPUTER PROGRAMMING AND SOFTWARE %K efficiency %K INPUT %K LAYERS %K length %K MEMORY DEVICES %K MODELS %K MODULAR CONSTRUCTION %K NUMERICAL MATHEMATICS %K PARALLEL ORIENTATION %K PINS %K WIRE %X We present fast and efficient parallel algorithms for several problems related to wiring a set of pins on a module to a set of pads lying on the boundary of a chip. The one-layer model is used to perform the wiring. Our basic model of parallel processing is the CREW-PRAM model which is characterized by the presence of an unlimited number of processors sharing the main memory. Concurrent reads are allowed while concurrent writes are not.
All our algorithms use O(n) processors, where n is the input length. Our algorithms have fast implementations on other parallel models such as the mesh or the hypercube. %I Institute for Systems Research, University of Maryland, College Park %8 1988/// %G eng %U http://stinet.dtic.mil/oai/oai?&verb=getRecord&metadataPrefix=html&identifier=ADA452383 %0 Journal Article %J Algorithmica %D 1988 %T Parallel construction of a suffix tree with applications %A Apostolico,A. %A Iliopoulos,C. %A Landau,G. M %A Schieber,B. %A Vishkin, Uzi %B Algorithmica %V 3 %P 347 - 365 %8 1988/// %G eng %N 1 %0 Conference Paper %B Artificial Intelligence Applications, 1988, Proceedings of the Fourth Conference on %D 1988 %T Parallel set covering algorithms %A Sekar,S. %A Reggia, James A. %K Butterfly parallel processor system %K irredundancy %K Parallel algorithms %K parsimonious set covering theory %K set covering %K set theory %X The authors develop some parallel algorithms for set covering. A brief introduction is given into the parsimonious set covering theory, and algorithms using one type of parsimony called irredundancy are developed. They also discuss several machine-independent parallel constructs that are used to express the parallel algorithms. The algorithms were tested on the Butterfly parallel processor system. The authors present some of the tests conducted and their analyses. Finally, the merits and limitations of the algorithms that were identified during the tests are presented. %B Artificial Intelligence Applications, 1988, Proceedings of the Fourth Conference on %P 274 - 279 %8 1988/03// %G eng %R 10.1109/CAIA.1988.196115 %0 Journal Article %J Genetics %D 1988 %T Partial Revertants of the Transposable Element-Associated Suppressible Allele White-Apricot in Drosophila Melanogaster: Structures and Responsiveness to Genetic Modifiers %A Mount, Stephen M. %A Green,M. M. %A Rubin,G. M.
%X The eye color phenotype of white-apricot (w(a)), a mutant allele of the white locus caused by the insertion of the transposable element copia into a small intron, is suppressed by the extragenic suppressor suppressor-of-white-apricot (su(w(a))) and enhanced by the extragenic enhancers suppressor-of-forked (su(f)) and Enhancer-of-white-apricot (E(w(a))). Derivatives of w(a) have been analyzed molecularly and genetically in order to correlate the structure of these derivatives with their response to modifiers. Derivatives in which the copia element is replaced precisely by a solo long terminal repeat (sLTR) were generated in vitro and returned to the germline by P-element mediated transformation; flies carrying this allele within a P transposon show a nearly wild-type phenotype and no response to either su(f) or su(w(a)). In addition, eleven partial phenotypic revertants of w(a) were analyzed. Of these, one appears to be a duplication of a large region which includes w(a), three are new alleles of su(w(a)), two are sLTR derivatives whose properties confirm results obtained using transformation, and five are secondary insertions into the copia element within w(a). One of these, w(aR84h), differs from w(a) by the insertion of the most 3' 83 nucleotides of the I factor. The five insertion derivatives show a variety of phenotypes and modes of interaction with su(f) and su(w(a)). The eye pigmentation of w(aR84h) is affected by su(f) and E(w(a)), but not su(w(a)). These results demonstrate that copia (as opposed to the interruption of white sequences) is essential for the w(a) phenotype and its response to genetic modifiers, and that there are multiple mechanisms for the alteration of the w(a) phenotype by modifiers.
%B Genetics %V 118 %P 221 - 234 %8 1988/02// %@ 0016-6731 %G eng %U http://www.ncbi.nlm.nih.gov/pmc/articles/PMC1203276/ %N 2 %0 Conference Paper %B collection of position papers, IBM-NSF Workshop on "Opportunities and Constraints of Parallel Computing", IBM Almaden %D 1988 %T PRAM algorithms: teach and preach %A Vishkin, Uzi %B collection of position papers, IBM-NSF Workshop on "Opportunities and Constraints of Parallel Computing", IBM Almaden %8 1988/// %G eng %0 Journal Article %J Computer Languages %D 1988 %T Program complexity using Hierarchical Abstract Computers %A Bail,William G %A Zelkowitz, Marvin V %K CASE tools %K complexity %K ENVIRONMENTS %K measurement %K Prime programs %X A model of program complexity is introduced which combines structural control flow measures with data flow measures. This complexity measure is based upon the prime program decomposition of a program written for a Hierarchical Abstract Computer. It is shown that this measure is consistent with the ideas of information hiding and data abstraction. Because this measure is sensitive to the linear form of a program, it can be used to measure different concrete representations of the same algorithm, as in a structured and an unstructured version of the same program. Application of the measure as a model of system complexity is given for “upstream” processes (e.g. specification and design phases) where there is no source program to measure by other techniques. %B Computer Languages %V 13 %P 109 - 123 %8 1988/// %@ 0096-0551 %G eng %U http://www.sciencedirect.com/science/article/pii/0096055188900197 %N 3–4 %R 10.1016/0096-0551(88)90019-7 %0 Journal Article %J Automata, Languages and Programming %D 1987 %T Parallel construction of a suffix tree %A Landau,G. %A Schieber,B.
%A Vishkin, Uzi %B Automata, Languages and Programming %P 314 - 325 %8 1987/// %G eng %0 Journal Article %J Parallel Computing %D 1987 %T Parallel implementation of the block conjugate gradient algorithm %A O'Leary, Dianne P. %K Conjugate gradient algorithm %K message passing architectures %K parallel implementation %X The conjugate gradient algorithm is well-suited for vector computation but, because of its many synchronization points and relatively short message packets, is more difficult to implement for parallel computation. In this work we introduce a parallel implementation of the block conjugate gradient algorithm. In this algorithm, we carry a block of vectors along at each iteration, reducing the number of iterations and increasing the length of each message. On machines with relatively costly message passing, this algorithm is a significant improvement over the standard conjugate gradient algorithm. %B Parallel Computing %V 5 %P 127 - 139 %8 1987/07// %@ 0167-8191 %G eng %U http://www.sciencedirect.com/science/article/pii/0167819187900135 %N 1–2 %R 10.1016/0167-8191(87)90013-5 %0 Report %D 1987 %T Principle-Based Parsing for Machine Translation %A Dorr, Bonnie J %K *MACHINE TRANSLATION %K *PARSERS %K *SYNTAX %K CONTROL %K COROUTINE DESIGN %K CYBERNETICS %K grammars %K HUMANS %K LANGUAGE %K linguistics %K MODULAR CONSTRUCTION %K natural language %K PRINCIPLES BASED PARSERS %K PROCESSING %K STRATEGY %K SUBROUTINES %X Many syntactic parsing strategies for machine translation systems are based entirely on context-free grammars. These parsers require an overwhelming number of rules; thus, translation systems using rule-based parsers either have limited linguistic coverage, or they have poor performance due to formidable grammar size. This report shows how a principle-based parser with a co-routine design improves parsing for translation.
The parser consists of a skeletal structure-building mechanism that operates in conjunction with a linguistically based constraint module, passing control back and forth until a set of underspecified skeletal phrase-structures is converted into a fully instantiated parse tree. The modularity of the parsing design accommodates linguistic generalization, reduces the grammar size, allows extension to other languages, and is compatible with studies of human language processing. Keywords: Natural language processing, Interlingual translation, Parsing, Subroutines, Principles vs. Rules, Co-routine design, Linguistic constraints. %I MASSACHUSETTS INST OF TECH CAMBRIDGE ARTIFICIAL INTELLIGENCE LAB %8 1987/12// %G eng %U http://stinet.dtic.mil/oai/oai?&verb=getRecord&metadataPrefix=html&identifier=ADA199183 %0 Journal Article %J IEEE Transactions on Systems, Man and Cybernetics %D 1987 %T A probabilistic causal model for diagnostic problem solving (parts 1 and 2) %A Peng,Y. %A Reggia, James A. %B IEEE Transactions on Systems, Man and Cybernetics %P 146 - 162 %8 1987/// %G eng %0 Conference Paper %B Proceedings of the First International Conference on Neural Networks %D 1987 %T Properties of a competition-based activation mechanism in neuromimetic network models %A Reggia, James A. %B Proceedings of the First International Conference on Neural Networks %8 1987/// %G eng %0 Journal Article %J Journal of Automated Reasoning %D 1987 %T Proving self-utterances %A Miller,M. %A Perlis, Don %B Journal of Automated Reasoning %V 3 %P 329 - 338 %8 1987/// %G eng %N 3 %0 Journal Article %J Theoretical Computer Science %D 1986 %T Parallel ear decomposition search (EDS) and st-numbering in graphs %A Maon,Y. %A Schieber,B. 
%A Vishkin, Uzi %B Theoretical Computer Science %V 47 %P 277 - 298 %8 1986/// %G eng %0 Conference Paper %B Proceedings of the 12th International Conference on Very Large Data Bases %D 1986 %T A parallel processing strategy for evaluating recursive queries %A Raschid, Louiqa %A Su,S. Y.W %B Proceedings of the 12th International Conference on Very Large Data Bases %P 412 - 419 %8 1986/// %G eng %0 Conference Paper %B AAAI %D 1986 %T A Parallel Self-Modifying Default Reasoning System %A Minker, Jack %A Perlis, Don %A Subramanian,K. %B AAAI %P 923 - 927 %8 1986/// %G eng %0 Journal Article %J Recent Developments in the Theory and Applications of Fuzzy Sets. Proceedings of NAFIPS %D 1986 %T The parsimonious covering model for inexact abductive reasoning in diagnostic systems %A Ahuja,S. B %A Reggia, James A. %B Recent Developments in the Theory and Applications of Fuzzy Sets. Proceedings of NAFIPS %P 86 - 1986 %8 1986/// %G eng %0 Journal Article %J Numerical Analysis %D 1986 %T Polynomial iteration for nonsymmetric indefinite linear systems %A Elman, Howard %A Streit, R. %B Numerical Analysis %P 103 - 117 %8 1986/// %G eng %0 Conference Paper %B IEEE Real-Time Systems Symposium, New Orleans, Louisiana %D 1986 %T Practicality of non-interfering checkpoints in distributed database systems %A Son,S. H %A Agrawala, Ashok K. %B IEEE Real-Time Systems Symposium, New Orleans, Louisiana %P 234 - 241 %8 1986/// %G eng %0 Journal Article %J SIAM Journal on Numerical Analysis %D 1986 %T Preconditioning by Fast Direct Methods for Nonself-Adjoint Nonseparable Elliptic Equations %A Elman, Howard %A Schultz, Martin H. %X We consider the use of fast direct methods as preconditioners for iterative methods for computing the numerical solution of nonself-adjoint elliptic boundary value problems. We derive bounds on convergence rates that are independent of discretization mesh size. 
For two-dimensional problems on rectangular domains, discretized on an n × n grid, these bounds lead to asymptotic operation counts of O(n² log n log ε⁻¹) to achieve relative error ε and O(n²(log n)²) to reach truncation error. %B SIAM Journal on Numerical Analysis %V 23 %P 44 - 57 %8 1986/02/01/ %@ 0036-1429 %G eng %U http://www.jstor.org/stable/2157450 %N 1 %0 Conference Paper %B Proceedings of the ACM SIGART international symposium on Methodologies for intelligent systems %D 1986 %T A preliminary excursion into step-logics %A Drapkin,J. %A Perlis, Don %B Proceedings of the ACM SIGART international symposium on Methodologies for intelligent systems %C Knoxville, Tennessee, United States %P 262 - 269 %8 1986/// %G eng %U http://dl.acm.org/citation.cfm?id=12837 %R 10.1145/12808.12837 %0 Journal Article %J Computer %D 1986 %T Principles and techniques in the design of ADMS+ (advanced data-base management system) %A Roussopoulos, Nick %A Kang,H. %B Computer %V 19 %P 19 - 23 %8 1986/// %G eng %0 Journal Article %J Tech Report HCIL-85-03 %D 1985 %T Performance on content-free menus as a function of study method %A Schwartz,J.P. %A Norman,K. L %A Shneiderman, Ben %B Tech Report HCIL-85-03 %8 1985/// %G eng %0 Journal Article %J Conference on Software Maintenance, 1985, Sheraton Inn Washington-Northwest, November 11-13, 1985 %D 1985 %T The Psychology of Program Documentation %A Brooks,R. %A Sheppard,S. %A Shneiderman, Ben %B Conference on Software Maintenance, 1985, Sheraton Inn Washington-Northwest, November 11-13, 1985 %P 191 %8 1985/// %G eng %0 Journal Article %J Theoretical Computer Science %D 1984 %T A parallel-design distributed-implementation (PDDI) general-purpose computer %A Vishkin, Uzi %B Theoretical Computer Science %V 32 %P 157 - 172 %8 1984/// %G eng %N 1-2 %0 Conference Paper %B Proceedings of the First International Conference on Data Engineering %D 1984 %T A Programming Environment Framework Based on Reusability %A Yeh,Raymond T.
%A Mittermeir,Roland %A Roussopoulos, Nick %A Reed,Joylyn %B Proceedings of the First International Conference on Data Engineering %I IEEE Computer Society %C Washington, DC, USA %P 277 - 280 %8 1984/// %@ 0-8186-0533-2 %G eng %U http://dl.acm.org/citation.cfm?id=645470.655219 %0 Book %D 1984 %T Programming languages: design and implementation %A Pratt,T. W %A Zelkowitz, Marvin V %I Prentice-Hall %8 1984/// %G eng %0 Conference Paper %B Proc. Workshop on Non-Monotonic Reasoning %D 1984 %T Protected circumscription %A Minker, Jack %A Perlis, Don %B Proc. Workshop on Non-Monotonic Reasoning %P 337 - 343 %8 1984/// %G eng %0 Journal Article %J RAIRO Informatique théorique %D 1983 %T Parallel computation on 2-3-trees %A Paul,W. %A Vishkin, Uzi %A Wagener,H. %B RAIRO Informatique théorique %V 17 %P 397 - 404 %8 1983/// %G eng %N 4 %0 Journal Article %J Automata, Languages and Programming %D 1983 %T Parallel dictionaries on 2–3 trees %A Paul,W. %A Vishkin, Uzi %A Wagener,H. %B Automata, Languages and Programming %P 597 - 609 %8 1983/// %G eng %0 Journal Article %J Communications of the ACM %D 1983 %T Program indentation and comprehensibility %A Miara,Richard J. %A Musselman,Joyce A. %A Navarro,Juan A. %A Shneiderman, Ben %K indentation %K program format %K program readability %B Communications of the ACM %V 26 %P 861 - 867 %8 1983/11// %@ 0001-0782 %G eng %U http://doi.acm.org/10.1145/182.358437 %N 11 %R 10.1145/182.358437 %0 Journal Article %J Cell %D 1983 %T Pseudogenes for human small nuclear RNA U3 appear to arise by integration of self-primed reverse transcripts of the RNA into new chromosomal sites %A Bernstein,L B %A Mount, Stephen M.
%A Weiner,A M %K Animals %K Base Sequence %K DNA %K genes %K HUMANS %K Nucleic Acid Conformation %K Rats %K Recombination, Genetic %K Repetitive Sequences, Nucleic Acid %K RNA %K RNA, Small Nuclear %K RNA-Directed DNA Polymerase %K Templates, Genetic %K Transcription, Genetic %X We find that both human and rat U3 snRNA can function as self-priming templates for AMV reverse transcriptase in vitro. The 74 base cDNA is primed by the 3' end of intact U3 snRNA, and spans the characteristically truncated 69 or 70 base U3 sequence found in four different human U3 pseudogenes. The ability of human and rat U3 snRNA to self-prime is consistent with a U3 secondary structure model derived by a comparison between rat U3 snRNA and the homologous D2 snRNA from Dictyostelium discoideum. We propose that U3 pseudogenes are generated in vivo by integration of a self-primed cDNA copy of U3 snRNA at new chromosomal sites. We also consider the possibility that the same cDNA mediates gene conversion at the 5' end of bona fide U3 genes where, over the entire region spanned by the U3 cDNA, the two rat U3 sequence variants U3A and U3B are identical. %B Cell %V 32 %P 461 - 472 %8 1983/02// %@ 0092-8674 %G eng %U http://www.ncbi.nlm.nih.gov/pubmed/6186397 %N 2 %0 Journal Article %J Journal of Algorithms %D 1982 %T An O(n² log n) parallel MAX-FLOW algorithm %A Shiloach,Y. %A Vishkin, Uzi %B Journal of Algorithms %V 3 %P 128 - 146 %8 1982/// %G eng %0 Report %D 1981 %T Preconditioned Conjugate-Gradient Methods for Nonsymmetric Systems of Linear Equations. %A Elman, Howard %I DTIC Document %8 1981/// %G eng %0 Conference Paper %B Proceedings of the eighteenth annual computer personnel research conference %D 1981 %T Putting the human factor into systems development %A Shneiderman, Ben %X As the community of computer users expands beyond experienced professionals to encompass novice users with little technical training, human factors considerations must play a larger role.
“Computer shock” and “terminal terror” cannot be cured; they must be prevented by more careful human engineering during the system design phase. This paper offers four approaches to including human factors considerations during system design. These approaches focus on increasing user involvement and emphasize extensive pilot testing. Human factors cannot be added as refinements to a completed design; they must be a central concern during the initial requirements analysis and through every design stage. %B Proceedings of the eighteenth annual computer personnel research conference %S SIGCPR '81 %I ACM %C New York, NY, USA %P 1 - 13 %8 1981/// %@ 0-89791-044-3 %G eng %U http://doi.acm.org/10.1145/800051.801845 %R 10.1145/800051.801845 %0 Conference Paper %B Proceedings of the sixth international conference on Very Large Data Bases - Volume 6 %D 1980 %T Path expressions for complex queries and automatic database program conversion %A Shneiderman, Ben %A Thomas,Glenn %K data definition language %K data manipulation language %K Database conversion %K Database systems %K path expressions %K program conversion %K query languages %K transformation language %X Our efforts to develop an automatic database system conversion facility yielded a powerful, yet simple query language which was designed for ease of conversion. The path expression of this query language is a convenient and appealing notation for describing complex traversals with multiple boolean qualifications. This paper describes the path expression, shows how automatic conversions can be done, introduces the boolean functions as part of the basic path expression, offers four extensions (path macros, implied path, path replacement, and path optimization), and discusses some implementation issues.
%B Proceedings of the sixth international conference on Very Large Data Bases - Volume 6 %S VLDB '80 %I VLDB Endowment %P 33 - 44 %8 1980/// %G eng %U http://dl.acm.org/citation.cfm?id=1286887.1286891 %0 Book %D 1979 %T Principles of software engineering and design %A Zelkowitz, Marvin V %A Shaw,A. C %A Gannon,J. D %I Prentice Hall Professional Technical Reference %8 1979/// %@ 013710202X %G eng %0 Conference Paper %B Proceedings of the 1978 annual conference - Volume 2 %D 1978 %T Personality and programming: Time-sharing vs. batch preference %A Lee,Jeanne M. %A Shneiderman, Ben %K Assertive/passive %K batch processing %K Locus of control %K Personality %K Programming %K Psychology %K Time sharing %X Only within the past ten years has some attention been given to psychological concerns of human-machine interface. A review of the literature in this area reveals that personality has received the least attention, but interest is growing. If critical personality factors can be isolated and associated with particular programming tasks, such information could be a useful tool for education as well as management. The hypothesis of this exploratory study was that two personality dimensions, assertiveness and locus of control, influence a programmer's choice of batch or interactive processing for program development. Locus of control relates to the perception an individual has of his/her influence over events. Assertiveness allows an individual expression in a manner that fully communicates personal desires without infringing upon the rights of others. These two dimensions and the programmer's preference for batch or interactive mode were studied through a questionnaire survey of experienced programmers. %B Proceedings of the 1978 annual conference - Volume 2 %S ACM '78 %I ACM %C New York, NY, USA %P 561 - 569 %8 1978/// %@ 0-89791-000-1 %G eng %U http://doi.acm.org/10.1145/800178.810092 %R 10.1145/800178.810092 %0 Journal Article %J ACM Comput. Surv. 
%D 1978 %T Perspectives in Software Engineering %A Zelkowitz, Marvin V %B ACM Comput. Surv. %V 10 %P 197 - 216 %8 1978/06// %@ 0360-0300 %G eng %U http://doi.acm.org/10.1145/356725.356731 %N 2 %R 10.1145/356725.356731 %0 Journal Article %J SIGSOFT Softw. Eng. Notes %D 1978 %T Productivity measurement on software engineering projects %A Zelkowitz, Marvin V %X The milestone is often used as a measure of project progress on large scale software developments. In this report, a quantitative measure of the milestone is developed and shown to be consistent with existing estimating techniques. %B SIGSOFT Softw. Eng. Notes %V 3 %P 30 - 31 %8 1978/10// %@ 0163-5948 %G eng %U http://doi.acm.org/10.1145/1010741.1010748 %N 4 %R 10.1145/1010741.1010748 %0 Book %D 1976 %T PL/I Programming with PLUM %A Zelkowitz, Marvin V %I Paladin House, Publishers %8 1976/// %G eng %0 Book %D 1973 %T Pattern Recognition %A Kanal,L. N %A Agrawala, Ashok K. %I Computer Science Center, University of Maryland %8 1973/// %G eng