%0 Conference Paper %B 2011 18th IEEE International Conference on Image Processing (ICIP) %D 2011 %T Variable remapping of images from very different sources %A Wei Zhang %A Yanlin Guo %A Meth, R. %A Sokoloff, H. %A Pope, A. %A Strat, T. %A Chellappa, Rama %K automatic object identification %K Buildings %K CAMERAS %K Conferences %K constrained motion estimation %K coordinates system %K Estimation %K G-RANSAC framework %K image context enlargement %K Image Enhancement %K image registration %K image sequence registration %K Image sequences %K Motion estimation %K Robustness %K temporal integration %K variable image remapping %X We present a system that registers image sequences acquired by very different sources, so that multiple views can be transformed into the same coordinate system. This enables automatic object identification and confirmation across views and platforms. The capability of the system comes from three ingredients: 1) image context enlargement through temporal integration; 2) robust motion estimation using the G-RANSAC framework with a relaxed correspondence criterion; and 3) constrained motion estimation within the G-RANSAC framework. The proposed system has worked successfully on thousands of frames from multiple collections with significant variations in scale and resolution. %B 2011 18th IEEE International Conference on Image Processing (ICIP) %I IEEE %P 1501 - 1504 %8 2011/09/11/14 %@ 978-1-4577-1304-0 %G eng %R 10.1109/ICIP.2011.6115729 %0 Journal Article %J IEEE Signal Processing Magazine %D 2010 %T Utilizing Hierarchical Multiprocessing for Medical Image Registration %A Plishker,W. %A Dandekar,O. %A Bhattacharyya, Shuvra S. %A Shekhar,R. 
%K Acceleration %K application parallelism %K Biomedical imaging %K domain-specific taxonomy %K GPU acceleration %K gradient descent approach %K Graphics processing unit %K hierarchical multiprocessing %K image registration %K Magnetic resonance imaging %K Medical diagnostic imaging %K medical image processing %K medical image registration %K multicore platform set %K Multicore processing %K PARALLEL PROCESSING %K parallel programming %K Robustness %K Signal processing algorithms %K Ultrasonic imaging %X This work discusses an approach to utilizing hierarchical multiprocessing in the context of medical image registration. By first organizing application parallelism into a domain-specific taxonomy, an algorithm is structured to target a set of multicore platforms. The approach is demonstrated on a cluster of graphics processing units (GPUs), where achieving fast execution times requires the use of two parallel programming environments. There is negligible loss in accuracy for rigid registration when employing GPU acceleration, but GPU acceleration does adversely affect our nonrigid registration implementation due to our use of a gradient descent approach. %B IEEE Signal Processing Magazine %V 27 %P 61 - 68 %8 2010 %@ 1053-5888 %G eng %N 2 %0 Conference Paper %B Proceedings of the Tenth International Workshop on Multimedia Data Mining %D 2010 %T Web-scale computer vision using MapReduce for multimedia data mining %A White,Brandyn %A Tom Yeh %A Jimmy Lin %A Davis, Larry S. %K background subtraction %K bag-of-features %K Cloud computing %K clustering %K Computer vision %K image registration %K MapReduce %X This work explores computer vision applications of the MapReduce framework that are relevant to the data mining community. An overview of MapReduce and common design patterns is provided for those with limited MapReduce background. 
We discuss both the high-level theory and the low-level implementation for several computer vision algorithms: classifier training, sliding windows, clustering, bag-of-features, background subtraction, and image registration. Experimental results for the k-means clustering and single-Gaussian background subtraction algorithms are obtained on a 410-node Hadoop cluster. %B Proceedings of the Tenth International Workshop on Multimedia Data Mining %S MDMKDD '10 %I ACM %C New York, NY, USA %P 9:1 - 9:10 %8 2010/// %@ 978-1-4503-0220-3 %G eng %U http://doi.acm.org/10.1145/1814245.1814254 %R 10.1145/1814245.1814254 %0 Journal Article %J Image Processing, IEEE Transactions on %D 2009 %T Multicamera Tracking of Articulated Human Motion Using Shape and Motion Cues %A Sundaresan, A. %A Chellappa, Rama %K 2D shape cues %K 3D shape cues %K algorithms %K Models, Anatomic %K articulated human motion %K automatic algorithm %K Models, Biological %K Movement %K Posture %K Skeleton %K Video Recording %K Image Processing, Computer-Assisted %K Eigenvalues and eigenfunctions %K human pose estimation %K HUMANS %K Image motion analysis %K IMAGE PROCESSING %K image registration %K Image segmentation %K Image sequences %K kinematic singularity %K Laplacian eigenmaps %K multicamera tracking algorithm %K pixel displacement %K pose estimation %K single-frame registration technique %K temporal registration method %K tracking %X We present a completely automatic algorithm for initializing and tracking the articulated motion of humans using image sequences obtained from multiple cameras. A detailed articulated human body model composed of sixteen rigid segments that allows both translation and rotation at joints is used. Voxel data of the subject obtained from the images is segmented into the different articulated chains using Laplacian eigenmaps. The segmented chains are registered in a subset of the frames using a single-frame registration technique and subsequently used to initialize the pose in the sequence. 
A temporal registration method is proposed to identify the partially segmented or unregistered articulated chains in the remaining frames in the sequence. The proposed tracker uses motion cues such as pixel displacement as well as 2-D and 3-D shape cues such as silhouettes, motion residue, and skeleton curves. The tracking algorithm consists of a predictor that uses motion cues and a corrector that uses shape cues. The use of complementary cues in the tracking alleviates the twin problems of drift and convergence to local minima. The use of multiple cameras also allows us to deal with the problems due to self-occlusion and kinematic singularity. We present tracking results on sequences with different kinds of motion to illustrate the effectiveness of our approach. The pose of the subject is correctly tracked for the duration of the sequence as can be verified by inspection. %B Image Processing, IEEE Transactions on %V 18 %P 2114 - 2126 %8 2009/09// %@ 1057-7149 %G eng %N 9 %R 10.1109/TIP.2009.2022290 %0 Conference Paper %B IEEE International Conference on Acoustics, Speech and Signal Processing, 2008. ICASSP 2008 %D 2008 %T Imaging concert hall acoustics using visual and audio cameras %A O'Donovan,A. 
%A Duraiswami, Ramani %A Zotkin,Dmitry N %K Acoustic imaging %K acoustic intensity images %K acoustic measurement %K Acoustic measurements %K Acoustic scattering %K acoustic signal processing %K acoustical camera %K acoustical scene analysis %K acquired audio registration %K audio cameras %K audio signal processing %K CAMERAS %K central projection %K Computer vision %K Educational institutions %K HUMANS %K image registration %K Image segmentation %K imaging concert hall acoustics %K Layout %K microphone arrays %K panoramic mosaiced visual image %K Raman scattering %K reverberation %K room acoustics %K spherical microphone array beamformer %K spherical microphone arrays %K video image registration %K visual cameras %X Using a real-time audio camera, which steers a spherical microphone array beamformer in all directions and forms central-projection acoustic intensity images from its output, we present a technique to measure the acoustics of rooms and halls. A panoramic mosaiced visual image of the space is also created. Since both the visual and the audio camera images are central projections, registration of the acquired audio and video images can be performed using standard computer vision techniques. We describe the technique and apply it to examine the relation between acoustical features and architectural details of the Dekelbaum concert hall at the Clarice Smith Performing Arts Center in College Park, MD. %B IEEE International Conference on Acoustics, Speech and Signal Processing, 2008. ICASSP 2008 %I IEEE %P 5284 - 5287 %8 2008/// %@ 978-1-4244-1483-3 %G eng %R 10.1109/ICASSP.2008.4518852 %0 Journal Article %J Pattern Analysis and Machine Intelligence, IEEE Transactions on %D 2008 %T Model Driven Segmentation of Articulating Humans in Laplacian Eigenspace %A Sundaresan, A. 
%A Chellappa, Rama %K 3D voxel data segmentation %K algorithms %K Models, Anatomic %K Pattern Recognition, Automated %K Artificial intelligence %K Reproducibility of Results %K Sensitivity and Specificity %K Whole Body Imaging %K Computer simulation %K Image Interpretation, Computer-Assisted %K curve fitting %K Eigenvalues and eigenfunctions %K graph theory %K human articulation %K human body graphical model %K human motion analysis %K HUMANS %K Image Enhancement %K Image motion analysis %K image registration %K Image segmentation %K Laplace transforms %K Laplacian eigenspace transform %K model driven segmentation %K probability %K representative graph %K skeleton registration %K Smoothing methods %K solid modelling %K spline fitting error %K splines (mathematics) %K Imaging, Three-Dimensional %K Joints %K top-down probabilistic approach %K voxel neighborhood graph %X We propose a general approach using Laplacian Eigenmaps and a graphical model of the human body to segment 3D voxel data of humans into different articulated chains. In the bottom-up stage, the voxels are transformed into a high-dimensional (6D or less) Laplacian Eigenspace (LE) of the voxel neighborhood graph. We show that LE is effective at mapping voxels on long articulated chains to nodes on smooth 1D curves that can be easily discriminated, and prove these properties using representative graphs. We fit 1D splines to voxels belonging to different articulated chains such as the limbs, head and trunk, and determine the boundary between splines using the spline fitting error. A top-down probabilistic approach is then used to register the segmented chains, utilizing their mutual connectivity and individual properties. Our approach enables us to deal with complex poses such as those where the limbs form loops. We use the segmentation results to automatically estimate the human body models. 
While we use human subjects in our experiments, the method is fairly general and can be applied to voxel-based segmentation of any articulated object composed of long chains. We present results on real and synthetic data that illustrate the usefulness of this approach. %B Pattern Analysis and Machine Intelligence, IEEE Transactions on %V 30 %P 1771 - 1785 %8 2008/10// %@ 0162-8828 %G eng %N 10 %R 10.1109/TPAMI.2007.70823 %0 Conference Paper %B Acoustics, Speech and Signal Processing, 2006. ICASSP 2006 Proceedings. 2006 IEEE International Conference on %D 2006 %T Image Registration and Fusion Studies for the Integration of Multiple Remote Sensing Data %A Le Moigne,J. %A Cole-Rhodes,A. %A Eastman,R. %A Jain,P. %A Joshua,A. %A Memarsadeghi,N. %A Mount, Dave %A Netanyahu,N. %A Morisette,J. %A Uko-Ozoro,E. %K ALI %K EO-1 %K fusion studies %K geophysical signal processing %K Hyperion sensors %K image registration %K Internet %K multiple remote sensing data %K multiple source data %K Remote sensing %K Web-based image registration toolbox %X The future of remote sensing will see the development of spacecraft formations, and with this development will come a number of complex challenges such as maintaining precise relative position and specified attitudes. At the same time, there will be increasing needs to understand planetary system processes and build accurate prediction models. One essential technology to accomplish these goals is the integration of multiple source data. For this integration, image registration and fusion represent the first steps and need to be performed with very high accuracy. In this paper, we describe studies performed in both image registration and fusion, including a modular framework that was built to describe registration algorithms, a Web-based image registration toolbox, and the comparison of several image fusion techniques using data from the EO-1/ALI and Hyperion sensors %B Acoustics, Speech and Signal Processing, 2006. 
ICASSP 2006 Proceedings. 2006 IEEE International Conference on %V 5 %P V - V %8 2006/05// %G eng %R 10.1109/ICASSP.2006.1661494 %0 Conference Paper %B Geoscience and Remote Sensing Symposium, 2001. IGARSS '01. IEEE 2001 International %D 2001 %T Robust matching of wavelet features for sub-pixel registration of Landsat data %A Le Moigne,J. %A Netanyahu,N. S %A Masek,J. G %A Mount, Dave %A Goward, S.N. %K Feature extraction %K geo-registration %K geophysical measurement technique %K geophysical signal processing %K geophysical techniques %K Hausdorff distance metric %K image registration %K infrared %K IR %K land surface %K Landsat %K Landsat-5 %K Landsat-7 %K multispectral remote sensing %K robust feature matching %K robust matching %K sub pixel registration %K subpixel accuracy %K terrain mapping %K visible %K wavelet feature %K wavelet method %K Wavelet transforms %X For many Earth and space science applications, automatic geo-registration at sub-pixel accuracy has become a necessity. In this work, we are focusing on building an operational system, which will provide a sub-pixel accuracy registration of Landsat-5 and Landsat-7 data. The input to our registration method consists of scenes that have been geometrically and radiometrically corrected. Such preprocessed scenes are then geo-registered relative to a database of Landsat chips. The method assumes a transformation composed of a rotation and a translation, and utilizes rotation- and translation-invariant wavelets to extract image features that are matched using statistically robust feature matching and a partial Hausdorff distance metric. The registration process is described and results on four Landsat input scenes of the Washington, D.C., area are presented. %B Geoscience and Remote Sensing Symposium, 2001. IGARSS '01. IEEE 2001 International %V 2 %P 706 - 708 vol.2 %8 2001/// %G eng %R 10.1109/IGARSS.2001.976609 %0 Conference Paper %B Geoscience and Remote Sensing Symposium, 2000. 
Proceedings. IGARSS 2000. IEEE 2000 International %D 2000 %T Geo-registration of Landsat data by robust matching of wavelet features %A Le Moigne,J. %A Netanyahu,N. S %A Masek,J. G %A Mount, Dave %A Goward,S. %A Honzak,M. %K atmospheric techniques %K automated mass processing/analysis system %K chip-window pair %K cloud shadows %K Clouds %K Feature extraction %K feature matching %K geo-registration %K geometrically corrected scene %K geophysical signal processing %K Image matching %K image registration %K landmark chips %K Landsat chips %K Landsat data %K Landsat-5 data %K Landsat-7 data %K overcomplete wavelet representation %K pre-processed scenes %K radiometrically corrected scene %K REALM %K Remote sensing %K robust matching %K robust wavelet feature matching %K scenes %K statistically robust techniques %K sub-pixel accuracy registration %K wavelet features %K Wavelet transforms %K window %X The goal of our project is to build an operational system, which will provide a sub-pixel accuracy registration of Landsat-5 and Landsat-7 data. Integrated within an automated mass processing/analysis system for Landsat data (REALM), the input to our registration method consists of scenes that have been geometrically and radiometrically corrected, as well as pre-processed for the detection of clouds and cloud shadows. Such pre-processed scenes are then geo-registered relative to a database of Landsat chips. This paper describes our registration process, including the use of a database of landmark chips, a feature extraction performed by an overcomplete wavelet representation, and feature matching using statistically robust techniques. Knowing the approximate longitudes and latitudes of the four corners of the scene, a subset of chips that represent landmarks included in the scene is extracted from the database. 
For each of these selected landmark chips, a corresponding window is extracted from the scene, and each chip-window pair is registered using our robust wavelet feature matching. First results and future directions are presented in the paper. %B Geoscience and Remote Sensing Symposium, 2000. Proceedings. IGARSS 2000. IEEE 2000 International %V 4 %P 1610 - 1612 vol.4 %8 2000/// %G eng %R 10.1109/IGARSS.2000.857287 %0 Report %D 1994 %T Uncertainty Propagation in Model-Based Recognition. %A Jacobs, David W. %A Alter,T. D. %K *IMAGE PROCESSING %K *PATTERN RECOGNITION %K algorithms %K APPROXIMATION(MATHEMATICS) %K CYBERNETICS %K ERROR CORRECTION CODES %K image registration %K Linear programming %K MATCHING %K MATHEMATICAL MODELS %K PIXELS %K PROJECTIVE TECHNIQUES. %K regions %K THREE DIMENSIONAL %K TWO DIMENSIONAL %K Uncertainty %X Building robust recognition systems requires a careful understanding of the effects of error in sensed features. Error in these image features results in a region of uncertainty in the possible image location of each additional model feature. We present an accurate, analytic approximation for this uncertainty region when model poses are based on matching three image and model points, for both Gaussian and bounded error in the detection of image points, and for both scaled-orthographic and perspective projection models. This result applies to objects that are fully three-dimensional, where past results considered only two-dimensional objects. Further, we introduce a linear programming algorithm to compute the uncertainty region when poses are based on any number of initial matches. Finally, we use these results to extend, from two-dimensional to three-dimensional objects, robust implementations of alignment, interpretation-tree search, and transformation clustering. 
%I MASSACHUSETTS INSTITUTE OF TECHNOLOGY ARTIFICIAL INTELLIGENCE LAB %8 1994/12// %G eng %U http://stinet.dtic.mil/oai/oai?&verb=getRecord&metadataPrefix=html&identifier=ADA295642