TY - CONF
T1 - Semi non-intrusive training for cell-phone camera model linkage
T2 - Information Forensics and Security (WIFS), 2010 IEEE International Workshop on
Y1 - 2010
A1 - Chuang, Wei-Hong
A1 - M. Wu
KW - cameras
KW - cellular radio
KW - computer forensics
KW - image colour analysis
KW - image matching
KW - interpolation
KW - cell-phone camera model linkage
KW - camera color interpolation feature
KW - component forensics
KW - semi nonintrusive training
KW - testing accuracy
KW - training content dependency
KW - training complexity
KW - variance analysis
KW - digital image matching
AB - This paper presents a study of cell-phone camera model linkage that matches digital images against potential makes/models of cell-phone camera sources using camera color interpolation features. The matching performance is examined, and the dependency on the content of the training image collection is evaluated via variance analysis. Training content dependency can be dealt with under the framework of component forensics, where cell-phone camera model linkage is seen as a combination of semi non-intrusive training and completely non-intrusive testing. Such a viewpoint explicitly suggests testing accuracy as the goodness criterion for training data selection. It also motivates alternative training procedures based on other criteria, such as training complexity, for which preliminary but promising experimental designs and results have been obtained.
JA - Information Forensics and Security (WIFS), 2010 IEEE International Workshop on
M3 - 10.1109/WIFS.2010.5711468
ER -

TY - CONF
T1 - Concurrent transition and shot detection in football videos using Fuzzy Logic
T2 - Image Processing (ICIP), 2009 16th IEEE International Conference on
Y1 - 2009
A1 - Refaey, M. A.
A1 - Elsayed, K. M.
A1 - Hanafy, S. M.
A1 - Davis, Larry S.
KW - fuzzy logic
KW - inference mechanisms
KW - image colour analysis
KW - video signal processing
KW - video analysis
KW - sport
KW - sports video
KW - shot detection
KW - shot boundary
KW - concurrent transition detection
KW - football videos
KW - color histogram
KW - edgeness
KW - intensity variance
KW - membership functions
AB - Shot detection is a fundamental step in video processing and analysis that should be achieved with a high degree of accuracy. In this paper, we introduce a unified algorithm for shot detection in sports video using fuzzy logic as a powerful inference mechanism. Fuzzy logic overcomes the problems of hard cut thresholds and the need for large training data used in previous work. The proposed algorithm integrates several features, such as color histogram, edgeness, and intensity variance. Membership functions to represent different features and transitions between shots have been developed to detect different shot boundary and transition types. We address the detection of cut, fade, dissolve, and wipe shot transitions. The results show that our algorithm achieves a high degree of accuracy.
JA - Image Processing (ICIP), 2009 16th IEEE International Conference on
M3 - 10.1109/ICIP.2009.5413648
ER -

TY - CONF
T1 - Learning Discriminative Appearance-Based Models Using Partial Least Squares
T2 - Computer Graphics and Image Processing (SIBGRAPI), 2009 XXII Brazilian Symposium on
Y1 - 2009
A1 - Schwartz, W. R.
A1 - Davis, Larry S.
KW - learning (artificial intelligence)
KW - least squares approximations
KW - partial least squares
KW - object recognition
KW - person recognition
KW - image colour analysis
KW - feature descriptors
KW - discriminative appearance based models
KW - machine learning techniques
AB - Appearance information is essential for applications such as tracking and people recognition. One of the main problems of using appearance-based discriminative models is the ambiguity among classes when the number of persons being considered increases. To reduce the amount of ambiguity, we propose the use of a rich set of feature descriptors based on color, textures and edges. Another issue regarding appearance modeling is the limited number of training samples available for each appearance. The discriminative models are created using a powerful statistical tool called partial least squares (PLS), responsible for weighting the features according to their discriminative power for each different appearance. The experimental results, based on appearance-based person recognition, demonstrate that the use of an enriched feature set analyzed by PLS reduces the ambiguity among different appearances and provides higher recognition rates when compared to other machine learning techniques.
JA - Computer Graphics and Image Processing (SIBGRAPI), 2009 XXII Brazilian Symposium on
M3 - 10.1109/SIBGRAPI.2009.42
ER -

TY - CONF
T1 - Image acquisition forensics: Forensic analysis to identify imaging source
T2 - Acoustics, Speech and Signal Processing, 2008. ICASSP 2008. IEEE International Conference on
Y1 - 2008
A1 - McKay, C.
A1 - Swaminathan, A.
A1 - Gou, Hongmei
A1 - M. Wu
KW - cameras
KW - cell phone cameras
KW - scanners
KW - computer graphics
KW - digital images
KW - image acquisition forensics
KW - forensic analysis
KW - image colour analysis
KW - interpolation
KW - statistical analysis
KW - color interpolation coefficients
KW - noise statistics
KW - imaging source identification
KW - image editing software
KW - signal processing
KW - data analysis
AB - With the widespread availability of digital images and easy-to-use image editing software, the origin and integrity of digital images have become a serious concern. This paper introduces the problem of image acquisition forensics and proposes a fusion of a set of signal processing features to identify the source of digital images. Our results show that the devices' color interpolation coefficients and noise statistics can jointly serve as good forensic features to help accurately trace the origin of the input image to its production process and to differentiate between images produced by cameras, cell phone cameras, scanners, and computer graphics. Further, the proposed features can also be extended to determining the brand and model of the device. Thus, the techniques introduced in this work provide a unified framework for image acquisition forensics.
JA - Acoustics, Speech and Signal Processing, 2008. ICASSP 2008. IEEE International Conference on
M3 - 10.1109/ICASSP.2008.4517945
ER -

TY - JOUR
T1 - Object Detection, Tracking and Recognition for Multiple Smart Cameras
JF - Proceedings of the IEEE
Y1 - 2008
A1 - Sankaranarayanan, A. C.
A1 - Veeraraghavan, A.
A1 - Chellappa, Rama
KW - smart cameras
KW - multiple smart cameras
KW - intelligent sensors
KW - image sensors
KW - imaging sensors
KW - image colour analysis
KW - image texture
KW - geometry
KW - geometric constraints
KW - distributed algorithm
KW - distributed camera network
KW - visual sensor network
KW - data association
KW - sensor fusion
KW - object detection
KW - object recognition
KW - object tracking
KW - target detection
KW - target tracking
KW - three-dimensional scene model
KW - video sensor
AB - Video cameras are among the most commonly used sensors in a large number of applications, ranging from surveillance to smart rooms for videoconferencing. There is a need to develop algorithms for tasks such as detection, tracking, and recognition of objects, specifically using distributed networks of cameras. The projective nature of imaging sensors provides ample challenges for data association across cameras. We first discuss the nature of these challenges in the context of visual sensor networks. Then, we show how real-world constraints can be favorably exploited in order to tackle these challenges. Examples of real-world constraints are (a) the presence of a world plane, (b) the presence of a three-dimensional scene model, (c) consistency of motion across cameras, and (d) color and texture properties. In this regard, the main focus of this paper is on highlighting the efficient use of the geometric constraints induced by the imaging devices to derive distributed algorithms for target detection, tracking, and recognition. Our discussions are supported by several examples drawn from real applications. Lastly, we also describe several potential research problems that remain to be addressed.
VL - 96
SN - 0018-9219
CP - 10
M3 - 10.1109/JPROC.2008.928758
ER -

TY - CONF
T1 - Joint Acoustic-Video Fingerprinting of Vehicles, Part II
T2 - Acoustics, Speech and Signal Processing, 2007. ICASSP 2007. IEEE International Conference on
Y1 - 2007
A1 - Cevher, V.
A1 - Guo, F.
A1 - Sankaranarayanan, A. C.
A1 - Chellappa, Rama
KW - acoustic signal processing
KW - video signal processing
KW - acoustic metrology
KW - metrology
KW - Bayes methods
KW - Bayesian framework
KW - Laplacian approximations
KW - density functions
KW - performance estimation
KW - computational efficiency
KW - joint acoustic-video fingerprinting
KW - acoustic-video fusion
KW - color invariants
KW - image colour analysis
KW - fingerprint identification
AB - In this second paper, we first show how to estimate the wheelbase length of a vehicle using line metrology in video. We then address the vehicle fingerprinting problem using vehicle silhouettes and color invariants. We combine the acoustic metrology and classification results discussed in Part I with the video results to improve estimation performance and robustness. The acoustic-video fusion is achieved in a Bayesian framework by assuming conditional independence of the observations of each modality. For the metrology density functions, Laplacian approximations are used for computational efficiency. Experimental results are given using field data.
JA - Acoustics, Speech and Signal Processing, 2007. ICASSP 2007. IEEE International Conference on
VL - 2
M3 - 10.1109/ICASSP.2007.366344
ER -

TY - CONF
T1 - Segmentation using Meta-texture Saliency
T2 - Computer Vision, 2007. ICCV 2007. IEEE 11th International Conference on
Y1 - 2007
A1 - Yacoob, Yaser
A1 - Davis, Larry S.
KW - image colour analysis
KW - image enhancement
KW - image segmentation
KW - image texture
KW - meta-texture image
KW - salient surface-roughness
KW - image patches
AB - We address segmentation of an image into patches that have an underlying salient surface-roughness. Three intrinsic images are derived: reflectance, shading and meta-texture images. A constructive approach is proposed for computing a meta-texture image by preserving, equalizing and enhancing the underlying surface-roughness across color, brightness and illumination variations. We evaluate the performance on sample images and illustrate quantitatively that different patches of the same material, in an image, are normalized in their statistics despite variations in color, brightness and illumination. Finally, segmentation by line-based boundary-detection is proposed and results are provided and compared to known algorithms.
JA - Computer Vision, 2007. ICCV 2007. IEEE 11th International Conference on
M3 - 10.1109/ICCV.2007.4408930
ER -

TY - CONF
T1 - Component Forensics of Digital Cameras: A Non-Intrusive Approach
T2 - Information Sciences and Systems, 2006 40th Annual Conference on
Y1 - 2006
A1 - Swaminathan, A.
A1 - Wu, M.
A1 - Liu, K. J. R.
KW - cameras
KW - digital camera
KW - image colour analysis
KW - image sensors
KW - interpolation
KW - object detection
KW - optical filters
KW - color filter array pattern
KW - color interpolation
KW - component forensics
KW - non-intrusive approach
KW - processing module
KW - detecting technology infringement
KW - intellectual property right protection
KW - image tampering identification
AB - This paper considers the problem of component forensics and proposes a methodology to identify the algorithms and parameters employed by various processing modules inside a digital camera. The proposed analysis techniques are non-intrusive, using only sample output images collected from the camera to find the color filter array pattern and the algorithm and parameters of color interpolation employed in cameras. As demonstrated by various case studies in the paper, the features obtained from component forensic analysis provide useful evidence for such applications as detecting technology infringement, protecting intellectual property rights, determining camera source, and identifying image tampering.
JA - Information Sciences and Systems, 2006 40th Annual Conference on
M3 - 10.1109/CISS.2006.286646
ER -

TY - CONF
T1 - Non-Intrusive Forensic Analysis of Visual Sensors Using Output Images
T2 - Acoustics, Speech and Signal Processing, 2006. ICASSP 2006 Proceedings. 2006 IEEE International Conference on
Y1 - 2006
A1 - Swaminathan, A.
A1 - M. Wu
A1 - Liu, K. J. R.
KW - visual sensors
KW - cameras
KW - digital cameras
KW - image sensors
KW - image colour analysis
KW - interpolation methods
KW - color array sensor
KW - forensic engineering
KW - forensic signal processing algorithms
KW - nonintrusive forensic analysis
KW - output images
AB - This paper considers the problem of non-intrusive forensic analysis of the individual components in visual sensors and its implementation. As a new addition to the emerging area of forensic engineering, we present a framework for analyzing technologies employed inside digital cameras based on output images, and develop a set of forensic signal processing algorithms for visual sensors based on color array sensor and interpolation methods. We show through simulations that the proposed method is robust against compression and noise, and can help identify various processing components inside the camera. Such a non-intrusive forensic framework would provide useful evidence for analyzing technology infringement and evolution for visual sensors.
JA - Acoustics, Speech and Signal Processing, 2006. ICASSP 2006 Proceedings. 2006 IEEE International Conference on
VL - 5
M3 - 10.1109/ICASSP.2006.1661297
ER -

TY - CONF
T1 - Efficient mean-shift tracking via a new similarity measure
T2 - Computer Vision and Pattern Recognition, 2005. CVPR 2005. IEEE Computer Society Conference on
Y1 - 2005
A1 - Yang, Changjiang
A1 - Duraiswami, Ramani
A1 - Davis, Larry S.
KW - Gaussian processes
KW - image colour analysis
KW - image matching
KW - image sequences
KW - feature extraction
KW - color histograms
KW - kernel density estimates
KW - Bhattacharyya coefficient
KW - Kullback-Leibler divergence
KW - mean-shift tracking algorithm
KW - nonparametric similarity measures
KW - sample-based similarity measures
KW - spatial-feature spaces
KW - fast Gauss transform
KW - frame-rate tracking
AB - The mean shift algorithm has achieved considerable success in object tracking due to its simplicity and robustness. It finds local maxima of a similarity measure between the color histograms or kernel density estimates of the model and target image. The most typically used similarity measures are the Bhattacharyya coefficient or the Kullback-Leibler divergence. In practice, these approaches face three difficulties. First, the spatial information of the target is lost when the color histogram is employed, which precludes the application of more elaborate motion models. Second, the classical similarity measures are not very discriminative. Third, the sample-based classical similarity measures require a calculation that is quadratic in the number of samples, making real-time performance difficult. To deal with these difficulties we propose a new, simple-to-compute and more discriminative similarity measure in spatial-feature spaces. The new similarity measure allows the mean shift algorithm to track more general motion models in an integrated way. To reduce the complexity of the computation to linear order, we employ the recently proposed improved fast Gauss transform. This leads to a very efficient and robust nonparametric spatial-feature tracking algorithm. The algorithm is tested on several image sequences and shown to achieve robust and reliable frame-rate tracking.
JA - Computer Vision and Pattern Recognition, 2005. CVPR 2005. IEEE Computer Society Conference on
VL - 1
M3 - 10.1109/CVPR.2005.139
ER -

TY - CONF
T1 - Fast multiple object tracking via a hierarchical particle filter
T2 - Computer Vision, 2005. ICCV 2005. Tenth IEEE International Conference on
Y1 - 2005
A1 - Yang, Changjiang
A1 - Duraiswami, Ramani
A1 - Davis, Larry S.
KW - computer vision
KW - visual object tracking
KW - multiple object tracking
KW - object detection
KW - hierarchical particle filter
KW - filtering methods
KW - image colour analysis
KW - edge detection
KW - edge orientation histogram
KW - color histogram
KW - integral images
KW - observation likelihood
KW - quasirandom sampling (numerical methods)
KW - random processes
KW - convergence of numerical methods
KW - fast algorithm
AB - A very efficient and robust visual object tracking algorithm based on the particle filter is presented. The method characterizes the tracked objects using color and edge orientation histogram features. While the use of more features and samples can improve the robustness, the computational load required by the particle filter increases. To accelerate the algorithm while retaining robustness we adopt several enhancements in the algorithm. The first is the use of integral images for efficiently computing the color features and edge orientation histograms, which allows a large number of particles and a better description of the targets. Next, the observation likelihood based on multiple features is computed in a coarse-to-fine manner, which allows the computation to quickly focus on the more promising regions. Quasi-random sampling of the particles allows the filter to achieve a higher convergence rate. The resulting tracking algorithm maintains multiple hypotheses and offers robustness against clutter or short-period occlusions. Experimental results demonstrate the efficiency and effectiveness of the algorithm for single and multiple object tracking.
JA - Computer Vision, 2005. ICCV 2005. Tenth IEEE International Conference on
VL - 1
M3 - 10.1109/ICCV.2005.95
ER -

TY - CONF
T1 - Background modeling and subtraction by codebook construction
T2 - Image Processing, 2004. ICIP '04. 2004 International Conference on
Y1 - 2004
A1 - Kim, Kyungnam
A1 - Chalidabhongse, T. H.
A1 - Harwood, D.
A1 - Davis, Larry S.
KW - background modeling
KW - background subtraction
KW - codebook construction
KW - multimode modeling technique
KW - image colour analysis
KW - image sequences
KW - image representation
KW - video sequence
KW - video coding
KW - data compression
KW - quantisation (signal)
KW - motion
AB - We present a new fast algorithm for background modeling and subtraction. Sample background values at each pixel are quantized into codebooks which represent a compressed form of the background model for a long image sequence. This allows us to capture structural background variation due to periodic-like motion over a long period of time under limited memory. Our method can handle scenes containing moving backgrounds or illumination variations (shadows and highlights), and it achieves robust detection for compressed videos. We compared our method with other multimode modeling techniques.
JA - Image Processing, 2004. ICIP '04. 2004 International Conference on
VL - 5
M3 - 10.1109/ICIP.2004.1421759
ER -

TY - CONF
T1 - Iterative figure-ground discrimination
T2 - Pattern Recognition, 2004. ICPR 2004. Proceedings of the 17th International Conference on
Y1 - 2004
A1 - Zhao, L.
A1 - Davis, Larry S.
KW - computer vision
KW - image colour analysis
KW - image segmentation
KW - figure-ground discrimination
KW - color distributions
KW - low dimensional parametric model
KW - Gaussian mixture
KW - Gaussian processes
KW - nonparametric kernel density estimation
KW - iterative sampling-expectation algorithm
KW - bandwidth calculation
KW - model parameter initialization
KW - estimation theory
KW - nonparametric statistics
KW - sampling methods
KW - iterative methods
AB - Figure-ground discrimination is an important problem in computer vision. Previous work usually assumes that the color distribution of the figure can be described by a low dimensional parametric model such as a mixture of Gaussians. However, such an approach has difficulty selecting the number of mixture components and is sensitive to the initialization of the model parameters. In this paper, we employ non-parametric kernel estimation for the color distributions of both the figure and background. We derive an iterative sampling-expectation (SE) algorithm for estimating the color distribution and segmentation. There are several advantages of kernel-density estimation. First, it enables automatic selection of the weights of different cues based on the bandwidth calculation from the image itself. Second, it does not require model parameter initialization and estimation. The experimental results on images of cluttered scenes demonstrate the effectiveness of the proposed algorithm.
JA - Pattern Recognition, 2004. ICPR 2004. Proceedings of the 17th International Conference on
VL - 1
M3 - 10.1109/ICPR.2004.1334006
ER -

TY - CONF
T1 - Object tracking by adaptive feature extraction
T2 - Image Processing, 2004. ICIP '04. 2004 International Conference on
Y1 - 2004
A1 - Han, Bohyung
A1 - Davis, Larry S.
KW - object tracking
KW - adaptive feature extraction
KW - feature extraction
KW - heterogeneous feature
KW - likelihood image
KW - principal component analysis
KW - mean-shift tracking algorithm
KW - online algorithm
KW - image colour analysis
AB - Tracking objects in the high-dimensional feature space is not only computationally expensive but also functionally inefficient. Selecting a low-dimensional discriminative feature set is a critical step to improve tracker performance. A good feature set for tracking can differ from frame to frame due to changes in the background against the tracked object, so an on-line algorithm is needed to adaptively determine an advantageous, distinctive feature set. In this paper, multiple heterogeneous features are assembled, and likelihood images are constructed for various subspaces of the combined feature space. Then, the most discriminative feature is extracted by principal component analysis (PCA) based on those likelihood images. This idea is applied to the mean-shift tracking algorithm [D. Comaniciu et al., June 2000], and we demonstrate its effectiveness through various experiments.
JA - Image Processing, 2004. ICIP '04. 2004 International Conference on
VL - 3
M3 - 10.1109/ICIP.2004.1421349
ER -

TY - CONF
T1 - An appearance based approach for human and object tracking
T2 - Image Processing, 2003. ICIP 2003. Proceedings. 2003 International Conference on
Y1 - 2003
A1 - Capellades, M. B.
A1 - David Doermann
A1 - DeMenthon, D.
A1 - Chellappa, Rama
KW - human tracking
KW - object tracking
KW - object detection
KW - background subtraction algorithm
KW - correlogram
KW - histogram information
KW - color distributions
KW - frame by frame basis
KW - image colour analysis
KW - image segmentation
KW - image sequences
KW - video signal processing
AB - A system for tracking humans and detecting human-object interactions in indoor environments is described. A combination of correlogram and histogram information is used to model object and human color distributions. Humans and objects are detected using a background subtraction algorithm. The models are built on the fly and used to track them on a frame-by-frame basis. The system is able to detect when people merge into groups and segment them during occlusion. Identities are preserved during the sequence, even if a person enters and leaves the scene. The system is also able to detect when a person deposits or removes an object from the scene. In the first case, the models are used to track the object retroactively in time. In the second case, the objects are tracked for the rest of the sequence. Experimental results using indoor video sequences are presented.
JA - Image Processing, 2003. ICIP 2003. Proceedings. 2003 International Conference on
VL - 2
M3 - 10.1109/ICIP.2003.1246622
ER -

TY - JOUR
T1 - Data hiding in image and video. I. Fundamental issues and solutions
JF - Image Processing, IEEE Transactions on
Y1 - 2003
A1 - M. Wu
A1 - Liu, Bede
KW - data hiding
KW - data encapsulation
KW - multilevel embedding
KW - extractable data
KW - noise condition
KW - modulation techniques
KW - multiplexing techniques
KW - nonstationary visual signals
KW - constant embedding rate
KW - variable embedding rate
KW - shuffling
KW - embedded control bits
KW - adaptive solution
KW - image colour analysis
KW - video signal processing
KW - reviews
KW - simulation
AB - We address a number of fundamental issues of data hiding in image and video and propose general solutions to them. We begin with a review of two major types of embedding, based on which we propose a new multilevel embedding framework to allow the amount of extractable data to be adaptive according to the actual noise condition. We then study the issues of hiding multiple bits through a comparison of various modulation and multiplexing techniques. Finally, the nonstationary nature of visual signals leads to a highly uneven distribution of embedding capacity and causes difficulty in data hiding. We propose an adaptive solution switching between using a constant embedding rate with shuffling and using a variable embedding rate with embedded control bits. We verify the effectiveness of our proposed solutions through analysis and simulation.
VL - 12
SN - 1057-7149
CP - 6
M3 - 10.1109/TIP.2003.810588
ER -

TY - CONF
T1 - Probabilistic tracking in joint feature-spatial spaces
T2 - Computer Vision and Pattern Recognition, 2003. Proceedings. 2003 IEEE Computer Society Conference on
Y1 - 2003
A1 - Elgammal, A.
A1 - Duraiswami, Ramani
A1 - Davis, Larry S.
KW - computer vision
KW - probabilistic tracking
KW - joint feature-spatial distribution
KW - nonparametric representation
KW - similarity-based objective function
KW - maximization over transformation space
KW - maximum likelihood estimation
KW - probabilistic constraint
KW - probability
KW - appearance tracker
KW - region appearance
KW - region structure
KW - small local deformation
KW - partial occlusion
KW - target tracking
KW - optical tracking
KW - object detection
KW - edge detection
KW - feature extraction
KW - image colour analysis
KW - raw intensity
KW - color
KW - image gradient
AB - In this paper, we present a probabilistic framework for tracking regions based on their appearance. We exploit the feature-spatial distribution of a region representing an object as a probabilistic constraint to track that region over time. The tracking is achieved by maximizing a similarity-based objective function over transformation space given a nonparametric representation of the joint feature-spatial distribution. Such a representation imposes a probabilistic constraint on the region feature distribution coupled with the region structure, which yields an appearance tracker that is robust to small local deformations and partial occlusion. We present the approach for the general form of joint feature-spatial distributions and apply it to tracking with different types of image features including raw intensity, color and image gradient.
JA - Computer Vision and Pattern Recognition, 2003. Proceedings. 2003 IEEE Computer Society Conference on
VL - 1
M3 - 10.1109/CVPR.2003.1211432
ER -

TY - CONF
T1 - Watermarking for image authentication
T2 - Image Processing, 1998. ICIP 98. Proceedings. 1998 International Conference on
Y1 - 1998
A1 - M. Wu
A1 - Liu, Bede
KW - watermarking
KW - data embedding method
KW - image authentication
KW - message authentication
KW - table lookup
KW - frequency domain
KW - frequency-domain analysis
KW - image colour analysis
KW - image coding
KW - image compression
KW - image storage
KW - digital camera
KW - marked image
KW - compressed image
KW - ownership protection
KW - alterations detection
KW - alterations localisation
KW - image tampering
KW - color
AB - A data embedding method is proposed for image authentication based on table look-up in the frequency domain. A visually meaningful watermark and a set of simple features are embedded invisibly in the marked image, which can be stored in compressed form. The scheme can detect and localize alterations of the original image, such as the tampering of images exported from a digital camera.
JA - Image Processing, 1998. ICIP 98. Proceedings. 1998 International Conference on
VL - 2
M3 - 10.1109/ICIP.1998.723413
ER -