TY - CONF
T1 - Automatic matched filter recovery via the audio camera
T2 - Acoustics Speech and Signal Processing (ICASSP), 2010 IEEE International Conference on
Y1 - 2010
A1 - O'Donovan, A. E.
A1 - Duraiswami, Ramani
A1 - Zotkin, Dmitry N.
KW - acoustic
KW - array;real-time
KW - arrays;transient
KW - audio
KW - camera;automatic
KW - constraints;microphone
KW - filter
KW - filters;microphone
KW - images;room
KW - impulse
KW - Matched
KW - positions;acoustic
KW - processing;cameras;matched
KW - radiators;array
KW - recovery;beamforming;geometric
KW - response;
KW - response;sound
KW - sensor;acoustic
KW - signal
KW - source;source/receiver
KW - sources;audio
AB - The sound reaching the acoustic sensor in a realistic environment contains not only the part arriving directly from the sound source but also a number of environmental reflections. The effect of these reflections on the sound is equivalent to a convolution with the room impulse response and can be undone via deconvolution, a technique known as matched filter processing. However, the filter is usually pre-computed using known room geometry and source/receiver positions, and any deviation from those causes the performance to degrade significantly. In this work, an algorithm is proposed to compute the matched filter automatically using an audio camera, a microphone-array-based system that provides real-time audio images (essentially plots of steered response power in various directions) of the environment. Acoustic sources, as well as their significant reflections, are revealed as peaks in the audio image. The reflections are associated with the sound source(s) using an acoustic similarity metric, and an approximate matched filter is computed to align the reflections in time with the direct arrival. A preliminary experimental evaluation of the method is performed. It is shown that, in the case of two sources, the reflections are identified correctly, the recovered time delays agree well with those computed from geometric constraints, and the output SNR improves when the reflections are added coherently to the signal obtained by beamforming directly at the source.
JA - Acoustics Speech and Signal Processing (ICASSP), 2010 IEEE International Conference on
M3 - 10.1109/ICASSP.2010.5496187
ER -

TY - CONF
T1 - Canny edge detection on NVIDIA CUDA
T2 - Computer Vision and Pattern Recognition Workshops, 2008. CVPRW '08. IEEE Computer Society Conference on
Y1 - 2008
A1 - Luo, Yuancheng
A1 - Duraiswami, Ramani
KW - algorithms;connected-component
KW - analysis
KW - application
KW - Canny
KW - CUDA;computer
KW - detection;feature
KW - detection;NVIDIA
KW - detector;filtering;graphical
KW - detector;non
KW - edge
KW - extraction;smoothing
KW - feature
KW - filter
KW - layers;multistep
KW - methods;
KW - responses;nonmaxima
KW - stage;edge
KW - suppression;smoothing;computer
KW - VISION
KW - vision;edge
AB - The Canny edge detector is a very popular and effective edge feature detector that is used as a pre-processing step in many computer vision algorithms. It is a multi-step detector which performs smoothing and filtering, then non-maxima suppression, followed by a connected-component analysis stage to detect "true" edges while suppressing "false" non-edge filter responses. While there have been previous (partial) implementations of the Canny and other edge detectors on GPUs, they focused on old-style GPGPU computing, with programming through graphical application layers. Using the more programmer-friendly CUDA framework, we are able to implement the entire Canny algorithm. Details are presented along with a comparison with CPU implementations. We also integrate our detector into MATLAB, a popular interactive simulation package often used by researchers. The source code will be made available as open source.
JA - Computer Vision and Pattern Recognition Workshops, 2008. CVPRW '08. IEEE Computer Society Conference on
M3 - 10.1109/CVPRW.2008.4563088
ER -
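Note: the Canny record above lists the detector's steps (smoothing and filtering, non-maxima suppression, connected-component analysis). As a rough illustration of the non-maxima suppression step only, here is a minimal NumPy sketch; it is a CPU reference under simple assumptions (finite-difference gradients, a 4-way angle quantization), not the paper's CUDA implementation.

# Minimal sketch of Canny-style non-maxima suppression (CPU reference only;
# not the CUDA code described in the record above).
import numpy as np

def nonmax_suppress(img):
    img = img.astype(float)
    gy, gx = np.gradient(img)                    # finite-difference gradients
    mag = np.hypot(gx, gy)                       # gradient magnitude
    ang = np.rad2deg(np.arctan2(gy, gx)) % 180   # orientation in [0, 180)
    out = np.zeros_like(mag)
    H, W = mag.shape
    for i in range(1, H - 1):
        for j in range(1, W - 1):
            a = ang[i, j]
            if a < 22.5 or a >= 157.5:           # gradient ~horizontal
                n1, n2 = mag[i, j - 1], mag[i, j + 1]
            elif a < 67.5:                       # gradient ~45 deg (image coords)
                n1, n2 = mag[i + 1, j + 1], mag[i - 1, j - 1]
            elif a < 112.5:                      # gradient ~vertical
                n1, n2 = mag[i - 1, j], mag[i + 1, j]
            else:                                # gradient ~135 deg
                n1, n2 = mag[i + 1, j - 1], mag[i - 1, j + 1]
            if mag[i, j] >= n1 and mag[i, j] >= n2:
                out[i, j] = mag[i, j]            # keep local maxima along gradient
    return out

A GPU version would map the per-pixel comparison to one thread per pixel; the sketch only shows the per-pixel logic.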
TY - CONF
T1 - Measuring 1st order stretch with a single filter
T2 - IEEE International Conference on Acoustics, Speech and Signal Processing, 2008. ICASSP 2008
Y1 - 2008
A1 - Bitsakos, K.
A1 - Domke, J.
A1 - Fermüller, Cornelia
A1 - Aloimonos, J.
KW - Cepstral analysis
KW - Educational institutions
KW - filter
KW - filtering theory
KW - Fourier transforms
KW - Frequency domain analysis
KW - Frequency estimation
KW - Gabor filters
KW - Image analysis
KW - IMAGE PROCESSING
KW - linear stretch measurement
KW - local signal transformation measurement
KW - Nonlinear filters
KW - Phase estimation
KW - Signal analysis
KW - Speech processing
AB - We analytically develop a filter that is able to measure the linear stretch of the transformation around a point, and we present results of applying it to real signals. We show that this method is a real-time alternative for measuring local signal transformations. Experimentally, the method can accurately measure stretch; however, it is sensitive to shift.
JA - IEEE International Conference on Acoustics, Speech and Signal Processing, 2008. ICASSP 2008
PB - IEEE
SN - 978-1-4244-1483-3
M3 - 10.1109/ICASSP.2008.4517758
ER -

TY - JOUR
T1 - Nonintrusive component forensics of visual sensors using output images
JF - Information Forensics and Security, IEEE Transactions on
Y1 - 2007
A1 - Swaminathan, A.
A1 - Wu, M.
A1 - Liu, K. J. R.
KW - ACQUISITION
KW - array;color
KW - authenticating
KW - component
KW - filter
KW - forensics;patent
KW - infringements;visual
KW - Interpolation
KW - manipulations;intellectual
KW - modules;content
KW - property
KW - property;
KW - protection;nonintrusive
KW - rights
KW - sensors;image
KW - sensors;industrial
KW - sources;color
AB - Rapid technology development and the widespread use of visual sensors have led to a number of new problems related to protecting intellectual property rights, handling patent infringements, authenticating acquisition sources, and identifying content manipulations. This paper introduces nonintrusive component forensics as a new methodology for the forensic analysis of visual sensing information, aiming to identify the algorithms and parameters employed inside the various processing modules of a digital device using only the device output data, without breaking the device apart. We propose techniques to estimate the algorithms and parameters employed by important camera components, such as the color filter array and color interpolation modules. The estimated interpolation coefficients provide useful features for constructing an efficient camera identifier that determines the brand and model from which an image was captured. The results obtained from such component analysis are also useful for examining the similarities between the technologies employed by different camera models, both to identify potential infringement/licensing issues and to facilitate studies of technology evolution.
VL - 2
SN - 1556-6013
CP - 1
M3 - 10.1109/TIFS.2006.890307
ER -
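Note: the two component-forensics records in this list revolve around estimating color-interpolation coefficients from output images and using them as a camera signature. The sketch below is a toy least-squares fit under assumed conditions (a known green sampling mask and a linear 4-neighbor interpolation model); it is illustrative only and is not the estimator proposed in either paper.

# Toy sketch of interpolation-coefficient estimation for camera identification.
# Assumptions: the Bayer green mask is known and interpolation is linear in the
# 4-neighborhood; both are illustrative, not taken from the papers.
import numpy as np

def estimate_green_weights(img_green, bayer_green_mask):
    """img_green: HxW green channel of a camera output image.
    bayer_green_mask: HxW boolean, True where green was physically sampled."""
    rows, targets = [], []
    H, W = img_green.shape
    for i in range(1, H - 1):
        for j in range(1, W - 1):
            if bayer_green_mask[i, j]:
                continue                          # model interpolated pixels only
            rows.append([img_green[i - 1, j], img_green[i + 1, j],
                         img_green[i, j - 1], img_green[i, j + 1]])
            targets.append(img_green[i, j])       # value the camera interpolated
    A = np.asarray(rows, dtype=float)
    b = np.asarray(targets, dtype=float)
    w, *_ = np.linalg.lstsq(A, b, rcond=None)     # least-squares weights
    return w                                      # feature vector for brand/model ID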
TY - CONF
T1 - Component Forensics of Digital Cameras: A Non-Intrusive Approach
T2 - Information Sciences and Systems, 2006 40th Annual Conference on
Y1 - 2006
A1 - Swaminathan, A.
A1 - Wu, M.
A1 - Liu, K. J. R.
KW - analysis;image
KW - approach;processing
KW - array
KW - camera;image
KW - Color
KW - colour
KW - detection;optical
KW - filter
KW - filters;
KW - forensics;detecting
KW - identification;intellectual
KW - infringement;digital
KW - interpolation;component
KW - intrusive
KW - module;cameras;image
KW - pattern;color
KW - property
KW - protection;non
KW - right
KW - sensors;interpolation;object
KW - tampering
KW - technology
AB - This paper considers the problem of component forensics and proposes a methodology to identify the algorithms and parameters employed by the various processing modules inside a digital camera. The proposed analysis techniques are non-intrusive, using only sample output images collected from the camera to find the color filter array pattern and the algorithm and parameters of the color interpolation employed in the camera. As demonstrated by various case studies in the paper, the features obtained from component forensic analysis provide useful evidence for applications such as detecting technology infringement, protecting intellectual property rights, determining the camera source, and identifying image tampering.
JA - Information Sciences and Systems, 2006 40th Annual Conference on
M3 - 10.1109/CISS.2006.286646
ER -

TY - CONF
T1 - Robust observations for object tracking
T2 - Image Processing, 2005. ICIP 2005. IEEE International Conference on
Y1 - 2005
A1 - Han, Bohyung
A1 - Davis, Larry S.
KW - (numerical
KW - adaptive
KW - analysis;
KW - component
KW - enhancement;
KW - filter
KW - Filtering
KW - framework;
KW - image
KW - images;
KW - likelihood
KW - methods);
KW - object
KW - observation
KW - particle
KW - PCA;
KW - principal
KW - tracking;
AB - Finding an observation model that performs well for long-term visual tracking is a difficult task. In this paper, we propose an adaptive observation enhancement technique based on likelihood images, which are derived from multiple visual features. The most discriminative likelihood image is extracted by principal component analysis (PCA) and is incrementally updated frame by frame to reduce temporal tracking error. In the particle filter framework, the feasibility of each sample is computed using this most discriminative likelihood image before the observation process. An integral image is employed for efficient computation of the feasibility of each sample. We illustrate through demonstrations how our enhancement technique contributes to more robust observations.
JA - Image Processing, 2005. ICIP 2005. IEEE International Conference on
VL - 2
M3 - 10.1109/ICIP.2005.1530087
ER -
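Note: the abstract above mentions using an integral image to compute each particle's feasibility efficiently. The following NumPy sketch shows that generic trick (a summed-area table plus constant-time box sums); the particle box parameterization and the random likelihood image are placeholders, not the paper's setup.

# Sketch of the integral-image (summed-area table) trick for scoring
# rectangular particle regions against a likelihood image in O(1) each.
import numpy as np

def integral_image(likelihood):
    return likelihood.cumsum(axis=0).cumsum(axis=1)

def box_score(ii, y0, x0, y1, x1):
    """Sum of likelihood over rows y0..y1-1 and cols x0..x1-1 (exclusive end)."""
    total = ii[y1 - 1, x1 - 1]
    if y0 > 0:
        total -= ii[y0 - 1, x1 - 1]
    if x0 > 0:
        total -= ii[y1 - 1, x0 - 1]
    if y0 > 0 and x0 > 0:
        total += ii[y0 - 1, x0 - 1]
    return total

# Example: score hypothetical particles (cy, cx, h, w) against a likelihood image L.
L = np.random.rand(120, 160)
ii = integral_image(L)
particles = [(60, 80, 30, 20), (40, 50, 30, 20)]
scores = [box_score(ii, cy - h // 2, cx - w // 2, cy + h // 2, cx + w // 2)
          for cy, cx, h, w in particles]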
TY - JOUR
T1 - Optimal edge-based shape detection
JF - Image Processing, IEEE Transactions on
Y1 - 2002
A1 - Moon, H.
A1 - Chellappa, Rama
A1 - Rosenfeld, A.
KW - 1D
KW - 2D
KW - aerial
KW - analysis;
KW - boundary
KW - conditions;
KW - contour
KW - cross
KW - detection;
KW - DODE
KW - double
KW - edge
KW - edge-based
KW - error
KW - error;
KW - exponential
KW - extraction;
KW - facial
KW - feature
KW - filter
KW - filter;
KW - Filtering
KW - function;
KW - geometry;
KW - global
KW - human
KW - images;
KW - imaging
KW - localization
KW - mean
KW - methods;
KW - NOISE
KW - operator;
KW - optimal
KW - optimisation;
KW - output;
KW - performance;
KW - pixel;
KW - power;
KW - propagation;
KW - properties;
KW - section;
KW - SHAPE
KW - square
KW - squared
KW - statistical
KW - step
KW - theory;
KW - tracking;
KW - two-dimensional
KW - vehicle
KW - video;
AB - We propose an approach to accurately detecting two-dimensional (2-D) shapes. The cross section of the shape boundary is modeled as a step function. We first derive a one-dimensional (1-D) optimal step edge operator, which minimizes both the noise power and the mean squared error between the input and the filter output. This operator is found to be the derivative of the double exponential (DODE) function, originally derived by Ben-Arie and Rao (1994). We define an operator for shape detection by extending the DODE filter along the shape's boundary contour. The responses are accumulated at the centroid of the operator to estimate the likelihood of the presence of the given shape. This method of detecting a shape is in fact a natural extension of the task of edge detection at the pixel level to the problem of global contour detection. This simple filtering scheme also provides a tool for a systematic analysis of edge-based shape detection. We investigate how the error is propagated by the shape geometry. We have found that, under general assumptions, the operator is locally linear at the peak of the response. We compute the expected shape of the response and derive some of its statistical properties. This enables us to predict both its localization and detection performance and to adjust its parameters according to imaging conditions and given performance specifications. Applications to the problem of vehicle detection in aerial images, human facial feature detection, and contour tracking in video are presented.
VL - 11
SN - 1057-7149
CP - 11
M3 - 10.1109/TIP.2002.800896
ER -
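Note: the abstract above identifies the optimal 1-D step edge operator as the derivative of the double exponential (DODE). Below is a minimal NumPy sketch of that 1-D kernel, here taken as the derivative of exp(-alpha*|x|), applied to a noisy step; the choice of alpha, the sampling grid, and the test signal are illustrative assumptions, and the 2-D extension along a shape contour is not shown.

# Sketch of a sampled DODE kernel (derivative of the double exponential)
# and its response to a noisy 1-D step edge.
import numpy as np

def dode_kernel(alpha=0.5, half_width=20):
    x = np.arange(-half_width, half_width + 1, dtype=float)
    k = -alpha * np.sign(x) * np.exp(-alpha * np.abs(x))  # d/dx exp(-alpha*|x|)
    return k / np.sum(np.abs(k))                          # scale for convenience

# The response magnitude should peak near the step location (index 100 here).
rng = np.random.default_rng(0)
signal = np.concatenate([np.zeros(100), np.ones(100)]) + 0.05 * rng.standard_normal(200)
response = np.convolve(signal, dode_kernel(), mode="same")
edge_location = int(np.argmax(np.abs(response)))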
TY - CONF
T1 - SpaceTree: supporting exploration in large node link tree, design evolution and empirical evaluation
T2 - Information Visualization, 2002. INFOVIS 2002. IEEE Symposium on
Y1 - 2002
A1 - Plaisant, Catherine
A1 - Grosjean, J.
A1 - Bederson, Benjamin B.
KW - browser;
KW - camera
KW - data
KW - design
KW - diagrams;
KW - dynamic
KW - evolution;
KW - experiment;
KW - exploration;
KW - filter
KW - functions;
KW - graphical
KW - icons;
KW - integrated
KW - interfaces;
KW - large
KW - link
KW - movement;
KW - node
KW - novel
KW - optimized
KW - rescaling;
KW - search;
KW - SpaceTree;
KW - structures;
KW - topology;
KW - tree
KW - user
KW - visualisation;
KW - visualization;
AB - We present a novel tree browser that builds on conventional node-link tree diagrams. It adds dynamic rescaling of branches of the tree to best fit the available screen space, optimized camera movement, and the use of preview icons summarizing the topology of the branches that cannot be expanded. In addition, it includes integrated search and filter functions. This paper reflects on the evolution of the design and highlights the principles that emerged from it. A controlled experiment showed benefits for navigation back to previously visited nodes and for estimation of overall tree topology.
JA - Information Visualization, 2002. INFOVIS 2002. IEEE Symposium on
M3 - 10.1109/INFVIS.2002.1173148
ER -