Jocelyn Chanussot (M'04–SM'04–F'12) received the M.Sc. degree in electrical engineering from the Grenoble Institute of Technology (Grenoble INP), Grenoble, France, in 1995, and the Ph.D. degree from Savoie University, Annecy, France, in 1998. In 1999, he was with the Geography Imagery Perception Laboratory of the Delegation Generale de l'Armement (DGA - French National Defense Department). Since 1999, he has been with Grenoble INP, where he was an Assistant Professor from 1999 to 2005, an Associate Professor from 2005 to 2007, and is currently a Professor of signal and image processing. He conducts his research at the Grenoble Images Speech Signals and Automatics Laboratory (GIPSA-Lab). His research interests include image analysis, multicomponent image processing, nonlinear filtering, and data fusion in remote sensing. He has been a visiting scholar at Stanford University (USA), KTH (Sweden), and NUS (Singapore). Since 2013, he has been an Adjunct Professor at the University of Iceland. From 2015 to 2017, he was a Visiting Professor at the University of California, Los Angeles (UCLA). Dr. Chanussot is the founding President of the IEEE Geoscience and Remote Sensing French chapter (2007-2010), which received the 2010 IEEE GRS-S Chapter Excellence Award. He was the co-recipient of the NORSIG 2006 Best Student Paper Award, the IEEE GRSS 2011 and 2015 Symposium Best Paper Awards, the IEEE GRSS 2012 Transactions Prize Paper Award, and the IEEE GRSS 2013 Highest Impact Paper Award. He was a member of the IEEE Geoscience and Remote Sensing Society AdCom (2009-2010), in charge of membership development. He was the General Chair of the first IEEE GRSS Workshop on Hyperspectral Image and Signal Processing: Evolution in Remote Sensing (WHISPERS). He was the Chair (2009-2011) and Cochair (2005-2008) of the GRSS Data Fusion Technical Committee. He was a member of the Machine Learning for
In this paper, a new technique for image segmentation from a high-resolution fused multispectral image is presented. The proposed approach combines image fusion, feature extraction (such as shape and edge features), and classification of the fused multispectral image. Researchers have performed a large number of experiments on multi-focus image fusion techniques, which extract higher-quality, more consistent information and are finding their way into many image processing applications. A fusion method enhances image features and also widens the range of applications of these data. The panchromatic image has high spatial resolution while the multispectral image has high spectral resolution; merging these images yields a high-resolution multispectral image. By using both spectral and spatial information, features are extracted to enhance classification accuracy. Conventional methods preserve either the spatial or the spectral characteristics, whereas the proposed method preserves both simultaneously. Conventional methods include the high-pass filter, modified principal component analysis, and the à trous method. The proposed method uses multi-wavelet transforms through a pulse-coupled neural network. Performance is evaluated through classification of the fused multispectral image, and the impact of image fusion is investigated using selected features of the reference and fused images. An SVM classifier is used to build the classification experiments on the fused multispectral image. The final results of the proposed method are more efficient than those of the conventional methods.
component analysis (PCA) method and the Brovey method, the resultant output possesses high spatial resolution but suffers from spectral distortion. Besides, the combined algorithms of generalized IHS and the optimization methods give a trade-off between minimum spectral distortion and improvement of the spatial information. To overcome this limitation, some works adopt wavelet-based methods, in which a high-pass filter transfers the high-frequency components of the panchromatic image into the low-resolution MS image. This produces far less spectral distortion than the previous methods. Using multiresolution analysis techniques, a high-resolution multispectral image is obtained. In these methods, the panchromatic and multispectral images are decomposed into low- and high-frequency components. During the fusion process, the low-frequency components of the multispectral image are not changed. Discrete wavelet transform (DWT) [9, 10], curvelet transform, support value transform, and contourlet transform based methods lack directional selectivity and ignore geometric properties. Hence, these shift-variant transforms preserve spectral information well but may lack spatial information, resulting in blurring and artifacts. To overcome these drawbacks, the nonsubsampled contourlet transform (NSCT) [14, 15], which is shift-invariant, is used.
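The high-pass filter idea described above can be sketched as follows: extract the high-frequency detail of the PAN image and inject it into each band of the upsampled MS image. This is a minimal numpy illustration, not the cited works' exact algorithm; the box-blur low-pass, the `(bands, height, width)` layout, and the function names are our assumptions.

```python
import numpy as np

def box_blur(img, k=5):
    # Separable moving-average low-pass filter with edge padding.
    pad = k // 2
    padded = np.pad(img, pad, mode="edge")
    kernel = np.ones(k) / k
    tmp = np.apply_along_axis(lambda r: np.convolve(r, kernel, mode="valid"), 1, padded)
    return np.apply_along_axis(lambda c: np.convolve(c, kernel, mode="valid"), 0, tmp)

def hpf_fuse(ms_up, pan, k=5):
    # High-pass detail of PAN = PAN minus its low-pass version;
    # the detail is added to every (already upsampled) MS band.
    detail = pan - box_blur(pan, k)
    return np.stack([band + detail for band in ms_up], axis=0)
```

Because only the high frequencies of the PAN image are injected, the low-frequency (spectral) content of each MS band is left essentially unchanged, which is exactly the property the text attributes to these methods.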
Fusion of multi-sensor images has been a very active research topic during recent years. When considering remotely sensed images, an archetypal fusion task is pansharpening, i.e., fusing a high spatial resolution panchromatic (PAN) image and a low spatial resolution multispectral (MS) image. In recent years, hyperspectral (HS) imaging, acquiring the same scene in several hundreds of contiguous spectral bands, has opened a new range of relevant applications such as spectral unmixing and classification. To exploit the advantages offered by different sensors, how to fuse HS, MS, or PAN images has been explored widely [4–6]. Note that the fusion of MS and HS differs from pansharpening since both spatial and spectral information is contained in multi-band images. Therefore, many pansharpening methods, such as component substitution and relative spectral contribution, are inapplicable or inefficient for the HS/MS fusion problem. To overcome the ill-posedness of the fusion problem, Bayesian inference provides a convenient way to regularize the inverse problem by defining an appropriate prior distribution for the scene of interest. Following this strategy, various estimators have been implemented in the image domain [9–11] or in a transformed domain.
Nonlinear decomposition methods constitute an alternative to conventional approaches for addressing the data fusion problem. In this paper, we discuss the application of this methodology to a popular remote sensing task called pansharpening, which consists of fusing a low-resolution multispectral image and a high-resolution panchromatic image. We design a complete pansharpening scheme based on morphological half-gradient operators and demonstrate the algorithm through comparison with state-of-the-art approaches. Four data sets acquired by different satellites are employed for performance assessment, attesting to the effectiveness of the proposed method in producing top-class images with settings independent of the specific sensor.
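The morphological half-gradients mentioned above (external gradient: dilation minus image; internal gradient: image minus erosion) can be sketched as follows. This is a minimal numpy toy with a fixed 3×3 square structuring element; the paper's actual operators and structuring elements may differ.

```python
import numpy as np

def _shifts(img):
    # All nine 3x3-neighbourhood views of an edge-padded image.
    p = np.pad(img, 1, mode="edge")
    h, w = img.shape
    return [p[i:i + h, j:j + w] for i in range(3) for j in range(3)]

def dilate(img):
    # Grey-level dilation: pointwise maximum over the neighbourhood.
    return np.maximum.reduce(_shifts(img))

def erode(img):
    # Grey-level erosion: pointwise minimum over the neighbourhood.
    return np.minimum.reduce(_shifts(img))

def half_gradients(img):
    # External half-gradient (dilation - f) and internal half-gradient
    # (f - erosion); both respond to one side of an edge only.
    return dilate(img) - img, img - erode(img)
```

On a step edge, the external half-gradient fires on the dark side of the edge and the internal one on the bright side, which is what makes them useful for extracting directional spatial detail.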
Available Online at www.ijpret.com
approximation of the original image and, with the wavelet coefficients, it is possible to reconstruct the original image without any loss of information. The ARSIS concept implementation (from its French acronym Amélioration de la Résolution Spatiale par Injection de Structures) is a pan-sharpening method based on the assumption that the missing information of the low-resolution multispectral image can be provided by the high spatial frequencies of a higher-resolution panchromatic image. A new methodology obtains STRS based on Bayesian theory, which allows the uncertainties to be quantified. First, the multispectral reflectance spectra are imputed to the hyperspectral intervals based on the a priori covariance between spectral bands of similar signatures. The fusion problem is formulated as an inverse problem whose solution is the target image, assumed to live in a lower-dimensional subspace. A sparse regularization term is carefully designed, relying on a decomposition of the scene on a set of dictionaries. The dictionary atoms and the supports of the corresponding active coding coefficients are learned from the observed images. Then, conditionally on these dictionaries and supports, the fusion problem is solved via alternating optimization with respect to the target image (using the alternating direction method of multipliers) and the coding coefficients. Because the number of endmembers extracted from the MS image cannot exceed the number of bands in a least-squares-based spectral unmixing algorithm, large reconstruction errors will occur for the HSI, which degrades the fusion performance of the enhanced HSI. Therefore, a novel fusion framework is also proposed by dividing the whole image into several sub-images, based on which the performance of the proposed spectral-unmixing-based fusion algorithm can be further improved.
Some methods apply the intensity-hue-saturation (IHS) transformation or principal component analysis (PCA) transformation to MS images before pan-sharpening to address the high correlation among the spectral bands. Among these methods, one line of work selects the first principal component (PC1) to replace the MS images. However, this choice is not based on any statistics between the high-resolution PAN image and the low-resolution PC1 image. To overcome this problem, other work uses the cross-correlation coefficient (CC) as the criterion and selects the PC with the highest absolute CC with the PAN image to replace the MS images. However, none of these methods consider the PCs' high-frequency detail images; they only use the PCs' low-frequency approximation image in place of PAN's in the pan-sharpening process. Different surface features have different sensitivities in different parts of the spectrum; this is one of the major characteristics of MS images. Therefore, if the PCs' high-frequency detail images are not considered, the fused image will lose some detail information of the MS image. To overcome this problem, we use relative entropy as the criterion to reconstruct high-frequency detail images from the PCs and PAN. Experimental results show that our fused image obtains high spatial resolution and higher similarity to the reference true high-resolution MS image than other approaches.
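The CC-based PC-selection criterion described above can be sketched as follows (a simplified numpy version; the function names are illustrative, and the cited works' exact PCA variant may differ):

```python
import numpy as np

def pca_components(ms):
    # ms: (bands, h, w). Returns PC images ordered by decreasing variance.
    b, h, w = ms.shape
    X = ms.reshape(b, -1).astype(float)
    Xc = X - X.mean(axis=1, keepdims=True)
    cov = Xc @ Xc.T / Xc.shape[1]
    vals, vecs = np.linalg.eigh(cov)      # eigenvalues in ascending order
    order = np.argsort(vals)[::-1]        # reorder to descending variance
    return (vecs[:, order].T @ Xc).reshape(b, h, w)

def select_pc_by_cc(pcs, pan):
    # Select the PC with the highest absolute cross-correlation (CC)
    # with the PAN image as the substitution candidate.
    ccs = [abs(np.corrcoef(pc.ravel(), pan.ravel())[0, 1]) for pc in pcs]
    return int(np.argmax(ccs))
```

When the MS bands are strongly correlated with the PAN image, PC1 usually wins this criterion; the point of the criterion is to verify this statistically rather than assume it.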
In the proposed method, after preprocessing, the images are fused using DCHWT, in which the signal is decomposed by grouping the DCT coefficients in a way similar to that of DFT coefficients, except for the conjugate operation in placing the coefficients symmetrically (since the DCT is real). The performance of the fused image is then evaluated. After fusion, post-processing is performed in the spectral domain using the top-hat transform to improve the performance parameters of the images.
The proposed scheme involves two main procedures, as shown in Fig. 1: shadow detection and shadow removal. In the detection step, we present a soft shadow detection method using multilevel image thresholding and an image matting technique. The detection mainly contains three steps. Initially, the shadow image is roughly classified into shadow and non-shadow areas by a hard threshold value, which results in a binary map (hard map). The hard map's binary mask cannot provide precise edges between the two areas, due to the presence of penumbra. Therefore, the shadow areas are eroded and dilated by morphological operators, and the intermediate difference areas are filled with the original image. Then the image matting method is employed to calculate the shadow coefficient for each pixel based on the mask image: 0 is shadow, 1 is non-shadow, and values between 0 and 1 in the penumbra area indicate the shadow probability.
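The hard-map and penumbra-band idea above can be sketched as follows. This is a toy numpy version: the matting step is replaced by a fixed 0.5 placeholder for the unknown penumbra band, the 3×3 structuring element is our choice, and all names are illustrative.

```python
import numpy as np

def _views(mask):
    # All nine 3x3-neighbourhood views of an edge-padded boolean mask.
    p = np.pad(mask, 1, mode="edge")
    h, w = mask.shape
    return [p[i:i + h, j:j + w] for i in range(3) for j in range(3)]

def binary_erode(mask):
    return np.logical_and.reduce(_views(mask))

def binary_dilate(mask):
    return np.logical_or.reduce(_views(mask))

def shadow_trimap(img, t):
    # Hard map: pixels darker than t are shadow candidates. Erosion keeps
    # the definite shadow core; dilation adds a penumbra band around it.
    hard = img < t
    core = binary_erode(hard)
    halo = binary_dilate(hard)
    trimap = np.full(img.shape, 0.5)   # unknown (penumbra) band
    trimap[core] = 0.0                 # definite shadow
    trimap[~halo] = 1.0                # definite non-shadow
    return trimap
```

In the actual method, a matting algorithm would then refine the 0.5 band into continuous shadow probabilities between 0 and 1.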
Fusion of multisensor images has been explored during recent years and is still a very active research area. A popular fusion problem in remote sensing consists of merging a high spatial resolution panchromatic (PAN) image and a low spatial resolution multispectral (MS) image. Many solutions have been proposed in the literature to solve this problem, which is known as pansharpening. More recently, hyperspectral (HS) imaging, acquiring a scene in several hundreds of contiguous spectral bands, has opened a new range of relevant applications such as target detection and spectral unmixing. However, while HS sensors provide abundant spectral information, their spatial resolution is generally more limited. To obtain images with good spectral and spatial resolutions, the remote sensing community has been devoting increasing research efforts to the problem of fusing HS with MS or PAN images.
Image fusion is the process of merging two or more images obtained from the same sensor at different times or from two or more sensors at the same instant. The objective is to obtain more information from the fused image than from the individual images. In satellite images, the lower spatial resolution multispectral images are fused with higher spatial resolution panchromatic images. The fusion should result in the transfer of spectral and spatial information without introducing any artifacts. The goal is to combine the spectral and spatial resolutions of the multispectral and the panchromatic images respectively to obtain a high-resolution multispectral image. Most of the fusion techniques that have been proposed are based on the compromise between the desired spatial enhancement and the spectral consistency. This paper provides an overview of the techniques available in the literature for the fusion of multispectral and panchromatic images. The evaluation of the fusion technique employed is also an important step in the fusion process. Various quality metrics have been used in the literature to study, compare and assess the fusion technique employed. This paper provides a brief study on such quality metrics employed in the literature.
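As one concrete example of the spectral quality metrics referred to above, the Spectral Angle Mapper (SAM) measures the average angle between the reference and fused per-pixel spectra; smaller is better, and 0 means identical spectral directions. A minimal numpy sketch, assuming images stored as `(bands, height, width)`:

```python
import numpy as np

def sam(ref, fused, eps=1e-12):
    # Spectral Angle Mapper: mean angle (radians) between the spectral
    # vectors of corresponding pixels in the reference and fused images.
    r = ref.reshape(ref.shape[0], -1)
    f = fused.reshape(fused.shape[0], -1)
    num = (r * f).sum(axis=0)
    den = np.linalg.norm(r, axis=0) * np.linalg.norm(f, axis=0) + eps
    return float(np.mean(np.arccos(np.clip(num / den, -1.0, 1.0))))
```

Note that SAM is invariant to per-pixel scaling of the spectrum, so it isolates spectral distortion from brightness changes introduced by the spatial enhancement.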
Figure 8 displays the results of the different fusion methods for qualitative evaluation. Visual inspection provides a comprehensive comparison between the fused images. The PCA method has the largest color distortion when compared to the original MS image. All of these methods may improve the spatial and spectral resolutions of the images. The main difference between these methods is shown in Figure 9, depicting the zoomed-in images. Referring to Figure 9a, a striping effect appears in the row-column method, caused by the discontinuity introduced during the row-column process. Pyramid EMD overcomes this problem, as shown in Figure 9b. Figure 9f shows the fused image with an edge effect using the wavelet approach. Among these multi-scale fusion approaches, pyramid EMD yields promising results. The visual analysis shows that the spatial resolution of the proposed method is much higher than that of the others.
In recent decades, a variety of image-fusion techniques have been developed to fuse multispectral (MS) and panchromatic (PAN) images, which exhibit complementary spatial and spectral resolutions. Among the fusion methods, multi-resolution fusion techniques have been discussed most frequently in recent publications due to their advantages over other fusion techniques [4,5]. Therefore, this study focuses on a new model of multi-resolution fusion methods. The non-feedback retina-based fusion technique can better preserve the spectral and spatial information than the other conventional methods do. This technique first extracts spatial detail information from a high-resolution PAN image and then injects the spatial information into the MS bands. In this manner, the spectral distortion can be reduced. However, the spatial detail information extracted from a high-resolution PAN image is not equivalent to that existing in an original high-resolution MS band. This difference can also introduce spectral distortion into the fusion result, especially when
The method proposed by Otsu is a clustering-based technique built on image variance. It automatically performs histogram-shape-based image thresholding to reduce a grey-level image to a binary image. The algorithm assumes that the image to be thresholded contains two classes of pixels (e.g., foreground and background) and then calculates the optimum threshold separating those two classes so that their combined spread is minimal. It exhaustively searches for the threshold that minimizes the intra-class variance, defined as the weighted sum of the variances of the two classes.
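Otsu's exhaustive search can be written compactly using cumulative histogram moments, since minimizing the intra-class variance is equivalent to maximizing the between-class variance. A standard numpy sketch (not tied to any particular paper):

```python
import numpy as np

def otsu_threshold(img, bins=256):
    # Exhaustively evaluate every candidate threshold and return the one
    # maximizing between-class variance (= minimizing intra-class variance).
    hist, edges = np.histogram(img.ravel(), bins=bins)
    p = hist / hist.sum()
    centers = (edges[:-1] + edges[1:]) / 2
    w0 = np.cumsum(p)                 # probability of the "background" class
    w1 = 1.0 - w0                     # probability of the "foreground" class
    mu = np.cumsum(p * centers)       # cumulative first moment
    mu_t = mu[-1]                     # global mean
    valid = (w0 > 0) & (w1 > 0)
    between = np.zeros_like(w0)
    between[valid] = (mu_t * w0[valid] - mu[valid]) ** 2 / (w0[valid] * w1[valid])
    return centers[np.argmax(between)]
```

For a clearly bimodal histogram, the returned threshold falls between the two modes, separating the foreground and background classes.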
and, finally, computed the error rate, i.e., the fraction of erroneously classified pixels, with respect to a reference segmentation. Lacking ground-truth data, the reference was taken as the segmentation of the original uncompressed image, even though this introduces a small negative bias. Results are reported in Fig. 7, together with those obtained with the flat-reference scheme. With four-class segmentation, CBC provides excellent results with an accuracy over 96% at all rates, while the flat coder approaches this performance only at coding rates above 1 bit/sample. It must be underlined, however, that this result is strongly biased by the nature of our coding scheme, which is itself based on segmentation and thus allows for such good performance even when the image quality is poor. On the other hand, the presence of a segmentation map embedded in the data is actually a major strength of our approach, and it could even be used explicitly (unlike in this experiment) to improve accuracy. With 8- and 16-class segmentation, instead, compression and segmentation are clearly decoupled, but CBC keeps providing performance significantly better than flat coding, with an accuracy of about 96% and 93% for 8 and 16 classes, respectively, at the higher rates.
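The error-rate computation described above can be sketched as follows. Since segmentation labels are arbitrary, this toy version first aligns each predicted segment to its majority reference class before counting misclassified pixels; this majority-vote alignment is our assumption, not necessarily the paper's exact protocol.

```python
import numpy as np

def error_rate(seg, ref):
    # Map each predicted segment label to its majority reference class,
    # then return the fraction of erroneously classified pixels.
    errors = 0
    for lab in np.unique(seg):
        mask = seg == lab
        _, counts = np.unique(ref[mask], return_counts=True)
        errors += mask.sum() - counts.max()
    return errors / seg.size
```

For example, a segmentation that reproduces the reference regions under different label names scores an error rate of exactly zero.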
due to thin and thick hair, hairs with different colors, and hairs of similar color to the lesion. The interest of multispectral imaging in dermoscopy relies on the principle that light of the visible and infrared spectra penetrates the skin to different depths, which reveals different features of the skin lesion. We can make the most of the information provided by infrared light since, in addition to revealing the melanin present in the deeper layers of the skin, it also exhibits the hairs better than visible light does (see the first two columns in Figure 1). Therefore, we have based our hair detection method on the IR images. Assuming that in IR images hairs are darker than the surrounding zones, a morphological closing top-hat filtering is applied. Considering that hairs are thin linear structures, the structuring element is a straight line. Since the hair direction is not known, the top-hat filtering operation is performed four times using a straight line with four different orientations, and the maximum of the four top-hat responses is calculated. This is done for each component of the IR image and is expressed by
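The four-orientation top-hat step can be sketched as follows. This is a numpy-only toy: the 7-pixel line structuring element and the edge-padded border handling are our assumptions, since the paper does not fix them here. The closing top-hat (closing minus image) responds to dark thin structures, and taking the maximum over orientations catches hairs of any direction.

```python
import numpy as np

SE_LEN = 7  # line structuring element length (assumed)
LINE_OFFSETS = {
    0:   [(0, d) for d in range(-3, 4)],   # horizontal line
    45:  [(d, -d) for d in range(-3, 4)],  # one diagonal
    90:  [(d, 0) for d in range(-3, 4)],   # vertical line
    135: [(d, d) for d in range(-3, 4)],   # other diagonal
}

def _morph(img, offsets, op):
    # Grey-level dilation (op = np.maximum) or erosion (op = np.minimum)
    # along the given line offsets, with edge padding.
    p = np.pad(img, 3, mode="edge")
    h, w = img.shape
    views = [p[3 + dy:3 + dy + h, 3 + dx:3 + dx + w] for dy, dx in offsets]
    return op.reduce(views)

def black_tophat_max(img):
    # Closing top-hat (closing - image) with a line SE at four
    # orientations; the per-pixel maximum responds to dark hairs
    # regardless of their direction.
    responses = []
    for offs in LINE_OFFSETS.values():
        closed = _morph(_morph(img, offs, np.maximum), offs, np.minimum)
        responses.append(closed - img)
    return np.maximum.reduce(responses)
```

A thin dark line of any single orientation is always removed by the closing of at least one perpendicular structuring element, so the maximum response is high along hairs and zero in flat regions.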
Abstract— Image change detection is a process that analyzes images of the same scene taken at different times in order to identify changes that may have occurred between the considered acquisition dates. With the development of remote sensing technology, change detection in remote sensing images has become increasingly important. Change detection in synthetic aperture radar (SAR) images exhibits more difficulties than in optical images because SAR images suffer from speckle noise. We therefore propose an unsupervised, distribution-free change detection approach for SAR images based on an image fusion strategy and a novel fuzzy clustering algorithm. The image fusion technique is introduced to generate a difference image by using complementary information from a mean-ratio image and a log-ratio image. In order to restrain the background information and enhance the information of changed regions in the fused difference image, wavelet fusion rules based on an average operator and minimum local-area energy are chosen to fuse the wavelet coefficients for the low-frequency and high-frequency bands, respectively. A reformulated fuzzy local-information C-means clustering algorithm is proposed for classifying changed and unchanged regions in the fused difference image. It incorporates spatial-context information in a novel fuzzy way for the purpose of enhancing the changed information and reducing the effect of speckle noise. Experiments on real SAR images show that the image fusion strategy integrates the advantages of the log-ratio and mean-ratio operators and achieves better performance. The change detection results obtained by the improved fuzzy clustering algorithm exhibit lower error than those of its predecessors.
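The two ratio operators combined by the fusion strategy above can be sketched as follows (minimal numpy versions; the window size and exact normalisation in the paper may differ from these assumptions):

```python
import numpy as np

def local_mean(img, win=3):
    # Mean over a win x win neighbourhood with edge padding.
    pad = win // 2
    p = np.pad(img, pad, mode="edge")
    h, w = img.shape
    views = [p[i:i + h, j:j + w] for i in range(win) for j in range(win)]
    return np.mean(views, axis=0)

def log_ratio(x1, x2, eps=1e-6):
    # Log-ratio operator: compresses the dynamic range, robust to
    # speckle but tends to weaken changed regions.
    return np.abs(np.log((x2 + eps) / (x1 + eps)))

def mean_ratio(x1, x2, win=3, eps=1e-6):
    # Mean-ratio operator on local means: enhances changed regions
    # but keeps more background noise.
    m1, m2 = local_mean(x1, win), local_mean(x2, win)
    return 1.0 - np.minimum(m1, m2) / (np.maximum(m1, m2) + eps)
```

Both operators vanish for unchanged areas; the fusion step in the paper then blends their wavelet coefficients to keep the background suppression of one and the change enhancement of the other.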
optical flow. It also provides a solution to the aperture problem at the edges. That paper focuses mainly on the lossless coding of the images and then compares the performance of the uniform and adaptive mesh-based methods. Results showed a comparable range of bit rates to three-dimensional wavelet-based techniques. The mesh-based method is efficient in compressing 3-D brain computed tomography images, and the adaptive mesh-based method gives better results than the uniform mesh-based methods, at higher complexity. Aaron T. Deever and Sheila S. Hemami proposed a method for lossless image compression with projection-based and adaptive reversible integer wavelet transforms. Here, a projection-based scheme is introduced to reduce the first-order entropy of transform coefficients and to increase the performance of reversible integer wavelet transforms. The projection method is framed for predicting a wavelet transform coefficient and yields optimal fixed prediction methods for lifting-based wavelet transforms. In addition, the projection technique was applied to an adaptive prediction method that adapts the final prediction step of the lifting-based transform on the basis of modeling context. The results showed that the projection technique performs very well on reversible integer wavelet transforms, with superior lossless compression compared to current fixed and adaptive lifting-based transforms. Zixiang Xiong et al. jointly proposed a technique for lossy-to-lossless compression using 3-D wavelet transforms. In this technique, the authors present a 3-D integer wavelet packet transform structure that supports implicit bit shifting of wavelet coefficients to approximate a 3-D unitary transformation. 3-D medical image compression using 3-D wavelet coders was developed by N. Sriraam and R. Shyamsunder.
Daubechies 4, Daubechies 6, Cohen–Daubechies–Feauveau 9/7, and Cohen–Daubechies–Feauveau 5/3 are the four wavelet transforms that were used in this method, with encoders such as 3-D SPIHT, the 3-D Set Partitioned Embedded Block Coder (SPECK), and 3-D Binary Set Splitting with K-d trees (BISK), to find the best wavelet–encoder combination. Two versions of the wavelet transform, known as the symmetric and decoupled wavelet transforms, have been used. Magnetic Resonance Images (MRI) and X-Ray Angiograms (XA) are used for testing the algorithm. The best compression result was obtained by the 3-D Cohen–Daubechies–Feauveau 9/7 symmetric wavelet with the 3-D SPIHT encoder.
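To give a flavour of why such transforms support lossless coding: a reversible integer wavelet like the CDF 5/3 is built from lifting steps that use only integer arithmetic, so the inverse reconstructs the input exactly. A minimal one-level sketch, assuming an even-length signal and using periodic boundary extension via `np.roll` for brevity (symmetric extension, as used in practice, is omitted):

```python
import numpy as np

def cdf53_forward(x):
    # One level of the reversible integer CDF 5/3 lifting transform.
    x = np.asarray(x, dtype=np.int64)
    s, d = x[0::2].copy(), x[1::2].copy()
    # Predict step: detail = odd - floor((left_even + right_even) / 2)
    d -= (s + np.roll(s, -1)) >> 1
    # Update step: approx = even + floor((d_left + d + 2) / 4)
    s += (np.roll(d, 1) + d + 2) >> 2
    return s, d

def cdf53_inverse(s, d):
    # Undo the lifting steps in reverse order: exact reconstruction.
    s, d = s.copy(), d.copy()
    s -= (np.roll(d, 1) + d + 2) >> 2
    d += (s + np.roll(s, -1)) >> 1
    x = np.empty(s.size + d.size, dtype=np.int64)
    x[0::2], x[1::2] = s, d
    return x
```

Because each lifting step is inverted by the identical integer expression with the opposite sign, no rounding information is ever lost, which is the property that makes these transforms suitable for lossless medical image coding.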
EER (%) per feature and channel combination:

Channel combination    LG     QSW    DCT    2DG
NIR + G + B            0.79   0.73   1.75   1.61
NIR + R + G + B        1.04   0.63   1.76   1.95
4.3. Multispectral Score-level Fusion
We further investigate the question: Which features and combinations thereof are most discriminatory for different wavelengths? A test of all possible channel combinations (see Table 2, best results in bold) for each of the features LG, QSW, DCT and 2DG revealed the following results: Among all tested 2-channel combinations, the best overall performance was provided by the combination of the NIR and Red channels (LG provided slightly better results for NIR and Green with 0.87% vs. 0.93% EER) with EERs in the range 0.6-0.93%, corresponding to a significant improvement (factors of 3.6-4.4) compared to single-channel NIR performance. Improvement was most evident for the QSW feature, followed by 2DG, LG and DCT, which, despite their different nature (especially DCT), revealed a similar level of improvement. Combinations of the color channels were not as successful and mostly did not improve the results of single channels (DCT and 2DG); only marginal improvement in the case of R+G (QSW) and R+B (QSW and LG) could be observed. Consequently, 3-channel fusion delivered better results for only these two features, LG and QSW (0.7% EER for LG and 0.59% EER for QSW, both when combining NIR+R+B). Especially the lower performance of 2DG for 3-channel fusion (1.34% EER for NIR+R+B) is surprising, given that this algorithm was tuned for VW and showed better performance on individual color channels. Best 2-channel and 3-channel performance ROCs on UTIRIS are illustrated in Fig. 6. While possibilities in score-level combination are limited compared to feature-level fusion, in this configuration it is advisable to aim for NIR+Red channel fusion, taking into account the additional processing and storage overhead for additional channels.
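Score-level fusion of per-channel comparison scores, as discussed above, is commonly implemented with a normalise-then-combine rule. A minimal sketch; the min-max normalisation and weighted mean here are illustrative choices, not necessarily those used in the cited experiments:

```python
import numpy as np

def minmax_norm(scores):
    # Map a channel's comparison scores to [0, 1].
    s = np.asarray(scores, dtype=float)
    return (s - s.min()) / (s.max() - s.min() + 1e-12)

def fuse_scores(score_lists, weights=None):
    # Sum-rule score-level fusion: normalise each channel's scores,
    # then take a (weighted) mean across channels.
    mats = np.stack([minmax_norm(s) for s in score_lists])
    if weights is None:
        weights = np.ones(len(score_lists)) / len(score_lists)
    return np.average(mats, axis=0, weights=weights)
```

The weights allow more discriminative channels (e.g. NIR) to dominate the fused score, which mirrors the observation in the text that NIR+Red carries most of the discriminatory power.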