%0 Conference Paper %B Acoustics Speech and Signal Processing (ICASSP), 2010 IEEE International Conference on %D 2010 %T Automatic matched filter recovery via the audio camera %A O'Donovan,A.E. %A Duraiswami, Ramani %A Zotkin,Dmitry N %K acoustic radiators %K acoustic sensor %K acoustic sources %K array signal processing %K audio camera %K automatic matched filter recovery %K beamforming %K cameras %K geometric constraints %K matched filters %K microphone arrays %K real-time audio images %K room impulse response %K sound source %K source/receiver positions %K transient response %X The sound reaching the acoustic sensor in a realistic environment contains not only the part arriving directly from the sound source but also a number of environmental reflections. The effect of those on the sound is equivalent to a convolution with the room impulse response and can be undone via deconvolution - a technique known as matched filter processing. However, the filter is usually pre-computed in advance using known room geometry and source/receiver positions, and any deviations from those cause the performance to degrade significantly. In this work, an algorithm is proposed to compute the matched filter automatically using an audio camera - a microphone-array-based system that provides real-time audio images (essentially plots of steered response power in various directions) of the environment. Acoustic sources, as well as their significant reflections, are revealed as peaks in the audio image. The reflections are associated with the sound source(s) using an acoustic similarity metric, and an approximate matched filter is computed to align the reflections in time with the direct arrival. Preliminary experimental evaluation of the method is performed. It is shown that in the case of two sources the reflections are identified correctly, that the time delays recovered agree well with those computed from geometric constraints, and that the output SNR improves when the reflections are added coherently to the signal obtained by beamforming directly at the source. %B Acoustics Speech and Signal Processing (ICASSP), 2010 IEEE International Conference on %P 2826 - 2829 %8 2010/03// %G eng %R 10.1109/ICASSP.2010.5496187 %0 Conference Paper %B Computer Vision and Pattern Recognition Workshops, 2008. CVPRW '08. IEEE Computer Society Conference on %D 2008 %T Canny edge detection on NVIDIA CUDA %A Luo,Yuancheng %A Duraiswami, Ramani %K Canny edge detector %K computer vision %K computer vision algorithms %K connected-component analysis stage %K edge detection %K edge feature extraction %K feature detection %K filtering %K graphical application layers %K multistep detector %K non-edge filter responses %K nonmaxima suppression %K NVIDIA CUDA %K smoothing %K smoothing methods %X The Canny edge detector is a very popular and effective edge feature detector that is used as a pre-processing step in many computer vision algorithms. It is a multi-step detector which performs smoothing and filtering, non-maxima suppression, followed by a connected-component analysis stage to detect “true” edges while suppressing “false” non-edge filter responses. While there have been previous (partial) implementations of the Canny and other edge detectors on GPUs, they have been focused on old-style GPGPU computing with programming through graphical application layers. Using the more programmer-friendly CUDA framework, we are able to implement the entire Canny algorithm.
Details are presented along with a comparison with CPU implementations. We also integrate our detector into MATLAB, a popular interactive simulation package often used by researchers. The source code will be made available as open source. %B Computer Vision and Pattern Recognition Workshops, 2008. CVPRW '08. IEEE Computer Society Conference on %P 1 - 8 %8 2008/06// %G eng %R 10.1109/CVPRW.2008.4563088 %0 Conference Paper %B IEEE International Conference on Acoustics, Speech and Signal Processing, 2008. ICASSP 2008 %D 2008 %T Measuring 1st order stretch with a single filter %A Bitsakos,K. %A Domke, J. %A Fermüller, Cornelia %A Aloimonos, J. %K Cepstral analysis %K Educational institutions %K filter %K filtering theory %K Fourier transforms %K Frequency domain analysis %K Frequency estimation %K Gabor filters %K Image analysis %K IMAGE PROCESSING %K linear stretch measurement %K local signal transformation measurement %K Nonlinear filters %K Phase estimation %K Signal analysis %K Speech processing %X We analytically develop a filter that is able to measure the linear stretch of the transformation around a point, and present results of applying it to real signals. We show that this method is a real-time alternative solution for measuring local signal transformations. Experimentally, this method can accurately measure stretch; however, it is sensitive to shift. %B IEEE International Conference on Acoustics, Speech and Signal Processing, 2008. ICASSP 2008 %I IEEE %P 909 - 912 %8 2008/04/31/March %@ 978-1-4244-1483-3 %G eng %R 10.1109/ICASSP.2008.4517758 %0 Journal Article %J Information Forensics and Security, IEEE Transactions on %D 2007 %T Nonintrusive component forensics of visual sensors using output images %A Swaminathan,A. %A M. Wu %A Liu,K. J.R %K authenticating acquisition sources %K color filter array %K color interpolation modules %K content manipulations %K image acquisition %K image sensors %K industrial property %K intellectual property rights protection %K nonintrusive component forensics %K patent infringements %K visual sensors %X Rapid technology development and the widespread use of visual sensors have led to a number of new problems related to protecting intellectual property rights, handling patent infringements, authenticating acquisition sources, and identifying content manipulations. This paper introduces nonintrusive component forensics as a new methodology for the forensic analysis of visual sensing information, aiming to identify the algorithms and parameters employed inside various processing modules of a digital device by only using the device output data without breaking the device apart. We propose techniques to estimate the algorithms and parameters employed by important camera components, such as the color filter array and color interpolation modules. The estimated interpolation coefficients provide useful features to construct an efficient camera identifier to determine the brand and model from which an image was captured.
The results obtained from such component analysis are also useful for examining the similarities between the technologies employed by different camera models, to identify potential infringement/licensing and to facilitate studies on technology evolution. %B Information Forensics and Security, IEEE Transactions on %V 2 %P 91 - 106 %8 2007/03// %@ 1556-6013 %G eng %N 1 %R 10.1109/TIFS.2006.890307 %0 Conference Paper %B Information Sciences and Systems, 2006 40th Annual Conference on %D 2006 %T Component Forensics of Digital Cameras: A Non-Intrusive Approach %A Swaminathan,A. %A Wu,M. %A Liu,K. J.R %K cameras %K color filter array pattern %K color interpolation %K component forensics %K detecting technology infringement %K digital camera %K image colour analysis %K image sensors %K image tampering identification %K intellectual property right protection %K interpolation %K non-intrusive approach %K object detection %K optical filters %K processing module %X This paper considers the problem of component forensics and proposes a methodology to identify the algorithms and parameters employed by various processing modules inside a digital camera. The proposed analysis techniques are non-intrusive, using only sample output images collected from the camera to find the color filter array pattern and the algorithm and parameters of the color interpolation employed in cameras. As demonstrated by various case studies in the paper, the features obtained from component forensic analysis provide useful evidence for such applications as detecting technology infringement, protecting intellectual property rights, determining camera source, and identifying image tampering. %B Information Sciences and Systems, 2006 40th Annual Conference on %P 1194 - 1199 %8 2006/03// %G eng %R 10.1109/CISS.2006.286646 %0 Conference Paper %B Image Processing, 2005. ICIP 2005. IEEE International Conference on %D 2005 %T Robust observations for object tracking %A Han,Bohyung %A Davis, Larry S. %K adaptive observation enhancement %K image analysis %K likelihood images %K object tracking %K particle filter framework %K particle filtering (numerical methods) %K PCA %K principal component analysis %X It is a difficult task to find an observation model that will perform well for long-term visual tracking. In this paper, we propose an adaptive observation enhancement technique based on likelihood images, which are derived from multiple visual features. The most discriminative likelihood image is extracted by principal component analysis (PCA) and incrementally updated frame by frame to reduce temporal tracking error. In the particle filter framework, the feasibility of each sample is computed using this most discriminative likelihood image before the observation process. An integral image is employed for efficient computation of the feasibility of each sample. We illustrate how our enhancement technique contributes to more robust observations through demonstrations. %B Image Processing, 2005. ICIP 2005. IEEE International Conference on %V 2 %P II-442 - II-445 %8 2005/09// %G eng %R 10.1109/ICIP.2005.1530087 %0 Journal Article %J Image Processing, IEEE Transactions on %D 2002 %T Optimal edge-based shape detection %A Moon, H. %A Chellappa, Rama %A Rosenfeld, A.
%K 1D optimal step edge operator %K 2D shape detection %K aerial images %K boundary contour %K contour tracking in video %K cross section %K DODE filter %K double exponential function %K edge detection %K edge-based shape detection %K error propagation %K facial feature extraction %K filter output %K filtering theory %K global contour detection %K human facial feature detection %K imaging conditions %K localization performance %K mean square error methods %K mean squared error %K noise power %K optimisation %K pixel %K shape geometry %K statistical analysis %K statistical properties %K step function %K two-dimensional shape detection %K vehicle detection %X We propose an approach to accurately detecting two-dimensional (2-D) shapes. The cross section of the shape boundary is modeled as a step function. We first derive a one-dimensional (1-D) optimal step edge operator, which minimizes both the noise power and the mean squared error between the input and the filter output. This operator is found to be the derivative of the double exponential (DODE) function, originally derived by Ben-Arie and Rao (1994). We define an operator for shape detection by extending the DODE filter along the shape's boundary contour. The responses are accumulated at the centroid of the operator to estimate the likelihood of the presence of the given shape. This method of detecting a shape is in fact a natural extension of the task of edge detection at the pixel level to the problem of global contour detection. This simple filtering scheme also provides a tool for a systematic analysis of edge-based shape detection. We investigate how the error is propagated by the shape geometry. We have found that, under general assumptions, the operator is locally linear at the peak of the response. We compute the expected shape of the response and derive some of its statistical properties. This enables us to predict both its localization and detection performance and adjust its parameters according to imaging conditions and given performance specifications. Applications to the problem of vehicle detection in aerial images, human facial feature detection, and contour tracking in video are presented. %B Image Processing, IEEE Transactions on %V 11 %P 1209 - 1227 %8 2002/11// %@ 1057-7149 %G eng %N 11 %R 10.1109/TIP.2002.800896 %0 Conference Paper %B Information Visualization, 2002. INFOVIS 2002. IEEE Symposium on %D 2002 %T SpaceTree: supporting exploration in large node link tree, design evolution and empirical evaluation %A Plaisant, Catherine %A Grosjean,J. %A Bederson, Benjamin B. %K controlled experiment %K design evolution %K dynamic rescaling %K filter functions %K graphical user interfaces %K integrated search %K large node link tree %K node link diagrams %K novel tree browser %K optimized camera movement %K preview icons %K SpaceTree %K tree data structures %K tree exploration %K tree topology %K tree visualisation %K visualization %X We present a novel tree browser that builds on the conventional node link tree diagrams. It adds dynamic rescaling of branches of the tree to best fit the available screen space, optimized camera movement, and the use of preview icons summarizing the topology of the branches that cannot be expanded. In addition, it includes integrated search and filter functions. This paper reflects on the evolution of the design and highlights the principles that emerged from it. A controlled experiment showed benefits for navigation to previously visited nodes and estimation of overall tree topology. %B Information Visualization, 2002. INFOVIS 2002.
IEEE Symposium on %P 57 - 64 %8 2002/// %G eng %R 10.1109/INFVIS.2002.1173148