%0 Conference Paper %B 2011 IEEE International Conference on Computer Vision (ICCV) %D 2011 %T Active scene recognition with vision and language %A Yu,Xiaodong %A Fermüller, Cornelia %A Ching Lik Teo %A Yezhou Yang %A Aloimonos, J. %K accuracy %K active scene recognition %K classification performance %K Cognition %K Computer vision %K Detectors %K Equations %K high level knowledge utilization %K HUMANS %K image classification %K inference mechanisms %K object detectors %K Object recognition %K reasoning module %K sensing process %K sensory module %K support vector machines %K Training %X This paper presents a novel approach to utilizing high level knowledge for the problem of scene recognition in an active vision framework, which we call active scene recognition. In traditional approaches, high level knowledge is used in the post-processing to combine the outputs of the object detectors to achieve better classification performance. In contrast, the proposed approach employs high level knowledge actively by implementing an interaction between a reasoning module and a sensory module (Figure 1). Following this paradigm, we implemented an active scene recognizer and evaluated it with a dataset of 20 scenes and 100+ objects. We also extended it to the analysis of dynamic scenes for activity recognition with attributes. Experiments demonstrate the effectiveness of the active paradigm in introducing attention and additional constraints into the sensing process. %B 2011 IEEE International Conference on Computer Vision (ICCV) %I IEEE %P 810 - 817 %8 2011/11/06/13 %@ 978-1-4577-1101-5 %G eng %R 10.1109/ICCV.2011.6126320 %0 Conference Paper %B 2011 Proceedings IEEE INFOCOM %D 2011 %T Decentralized, accurate, and low-cost network bandwidth prediction %A Sukhyun Song %A Keleher,P. %A Bhattacharjee, Bobby %A Sussman, Alan %K accuracy %K approximate tree metric space %K Bandwidth %K bandwidth allocation %K bandwidth measurement %K decentralized low cost system %K distributed tree %K end-to-end prediction %K Extraterrestrial measurements %K Internet %K low-cost network bandwidth prediction %K Measurement uncertainty %K pairwise bandwidth %K Peer to peer computing %K Prediction algorithms %K trees (mathematics) %X The distributed nature of modern computing makes end-to-end prediction of network bandwidth increasingly important. Our work is inspired by prior work that treats the Internet and bandwidth as an approximate tree metric space. This paper presents a decentralized, accurate, and low cost system that predicts pairwise bandwidth between hosts. We describe an algorithm to construct a distributed tree that embeds bandwidth measurements. The correctness of the algorithm is provable when driven by precise measurements. We then describe three novel heuristics that achieve high accuracy for predicting bandwidth even with imprecise input data. Simulation experiments with a real-world dataset confirm that our approach shows high accuracy with low cost. %B 2011 Proceedings IEEE INFOCOM %I IEEE %P 6 - 10 %8 2011/04/10/15 %@ 978-1-4244-9919-9 %G eng %R 10.1109/INFCOM.2011.5935251 %0 Conference Paper %B Proceedings of the 5th International Conference on Predictor Models in Software Engineering %D 2009 %T Using uncertainty as a model selection and comparison criterion %A Sarcia',Salvatore Alessandro %A Basili, Victor R. 
%A Cantone,Giovanni %K accuracy %K Bayesian prediction intervals %K Calibration %K cost estimation %K cost model %K model evaluation %K model selection %K prediction interval %K Uncertainty %X Over the last 25+ years, software estimation research has been searching for the best model for estimating variables of interest (e.g., cost, defects, and fault proneness). This research effort has not led to a common agreement. One problem is that they have been using accuracy as the basis for selection and comparison. But accuracy is not invariant; it depends on the test sample, the error measure, and the chosen error statistics (e.g., MMRE, PRED, Mean and Standard Deviation of error samples). Ideally, we would like an invariant criterion. In this paper, we show that uncertainty can be used as an invariant criterion to figure out which estimation model should be preferred over others. The majority of this work is empirically based, applying Bayesian prediction intervals to some COCOMO model variations with respect to a publicly available cost estimation data set coming from the PROMISE repository. %B Proceedings of the 5th International Conference on Predictor Models in Software Engineering %S PROMISE '09 %I ACM %C New York, NY, USA %P 18:1 - 18:9 %8 2009/// %@ 978-1-60558-634-2 %G eng %U http://doi.acm.org/10.1145/1540438.1540464 %R 10.1145/1540438.1540464 %0 Report %D 2003 %T Domain Tuning of Bilingual Lexicons for MT %A Ayan,Necip F %A Dorr, Bonnie J %A Kolak,Okan %K *DICTIONARIES %K *FOREIGN LANGUAGES %K accuracy %K BILINGUAL LEXICONS %K DOCUMENTS %K linguistics %K Vocabulary %X Our overall objective is to translate a domain-specific document in a foreign language (in this case, Chinese) to English. Using automatically induced domain-specific, comparable documents and language-independent clustering, we apply domain-tuning techniques to a bilingual lexicon for downstream translation of the input document to English. We will describe our domain-tuning technique and demonstrate its effectiveness by comparing our results to a manually constructed domain-specific vocabulary. Our coverage/accuracy experiments indicate that domain-tuned lexicons achieve 88% precision and 66% recall. We also ran a Bleu experiment to compare our domain-tuned version to its un-tuned counterpart in an IBM-style MT system. Our domain-tuned lexicons brought about an improvement in the Bleu scores: 9.4% higher than a system trained on a uniformly-weighted dictionary and 275% higher than a system trained on no dictionary at all. %I Institute for Advanced Computer Studies, University of Maryland, College Park %8 2003/02// %G eng %U http://stinet.dtic.mil/oai/oai?&verb=getRecord&metadataPrefix=html&identifier=ADA455197 %0 Conference Paper %B 2002 IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP) %D 2002 %T Creation of virtual auditory spaces %A Zotkin,Dmitry N %A Duraiswami, Ramani %A Davis, Larry S. %K accuracy %K Estimation %K Finite impulse response filter %K Heating %K Program processors %K Rendering (computer graphics) %K System-on-a-chip %X High-quality virtual audio scene rendering is a must for emerging virtual/augmented reality applications and for perceptual user interfaces. We describe algorithms for creation of virtual auditory spaces using measured and non-individualized HRTFs and head tracking. Details of algorithms for HRTF interpolation, room impulse response creation, and audio scene presentation are presented.
Tests show that individuals externalize well and find our interface natural. The system runs in real time with a latency of less than 30 ms on an office PC without specialized DSP. %B 2002 IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP) %I IEEE %V 2 %8 2002/05// %@ 0-7803-7402-9 %G eng %R 10.1109/ICASSP.2002.5745052 %0 Conference Paper %B The Eighth International Symposium on High Performance Distributed Computing, 1999. Proceedings %D 1999 %T Predicting the CPU availability of time-shared Unix systems on the computational grid %A Wolski,R. %A Spring, Neil %A Hayes,J. %K accuracy %K Application software %K Autocorrelation %K Availability %K Central Processing Unit %K computational grid %K correlation methods %K CPU availability prediction %K CPU resources predictability %K CPU sensor %K Dynamic scheduling %K grid computing %K Load forecasting %K long-range autocorrelation dependence %K medium-term forecasts %K network operating systems %K Network Weather Service %K NWS %K performance evaluation %K self-similarity degree %K short-term forecasts %K successive CPU measurements %K Time measurement %K Time sharing computer systems %K time-shared Unix systems %K time-sharing systems %K Unix %K Unix load average %K vmstat utility %K Weather forecasting %X Focuses on the problem of making short- and medium-term forecasts of CPU availability on time-shared Unix systems. We evaluate the accuracy with which availability can be measured using the Unix load average, the Unix utility “vmstat”, and the Network Weather Service (NWS) CPU sensor that uses both. We also examine the autocorrelation between successive CPU measurements to determine their degree of self-similarity. While our observations show a long-range autocorrelation dependence, we demonstrate how this dependence manifests itself in the short- and medium-term predictability of the CPU resources in our study. %B The Eighth International Symposium on High Performance Distributed Computing, 1999. Proceedings %I IEEE %P 105 - 112 %8 1999/// %@ 0-7803-5681-0 %G eng %R 10.1109/HPDC.1999.805288 %0 Report %D 1998 %T The Bible, Truth, and Multilingual OCR Evaluation %A Kanungo,Tapas %A Resnik, Philip %K *BIBLE %K *CORPUS %K *DATASETS %K *GROUNDTRUTH %K *OPTICAL CHARACTER RECOGNITION %K *TEST SETS %K *TRANSLATIONS %K accuracy %K algorithms %K CYBERNETICS %K DOCUMENT IMAGES %K DOCUMENTS %K IMAGES %K INFORMATION SCIENCE %K LANGUAGE %K linguistics %K MULTILINGUAL OCR(OPTICAL CHARACTER RECOGNITION) %K SYMPOSIA %K TEST AND EVALUATION %X Multilingual OCR has emerged as an important information technology, thanks to the increasing need for cross-language information access. While many research groups and companies have developed OCR algorithms for various languages, it is difficult to compare the performance of these OCR algorithms across languages. This difficulty arises because most evaluation methodologies rely on the use of a document image dataset in each of the languages, and it is difficult to find document datasets in different languages that are similar in content and layout. In this paper we propose to use the Bible as a dataset for comparing OCR accuracy across languages. Besides being available in a wide range of languages, Bible translations are closely parallel in content, carefully translated, surprisingly relevant with respect to modern-day language, and quite inexpensive. A project at the University of Maryland is currently implementing this idea.
We have created a scanned image dataset with groundtruth from an Arabic Bible. We have also used image degradation models to create synthetically degraded images of a French Bible. We hope to generate similar Bible datasets for other languages, and we are exploring alternative corpora such as the Koran and the Bhagavad Gita that have similar properties. Quantitative OCR evaluation based on the Arabic Bible dataset is currently in progress. %I University of Maryland, College Park %8 1998/12// %G eng %U http://stinet.dtic.mil/oai/oai?&verb=getRecord&metadataPrefix=html&identifier=ADA458666