A critical comparison of pansharpening algorithms

Most pansharpening methods proposed in the literature follow a general protocol composed of two operations: 1) extract from the PAN image the high-resolution geometrical details of the scene that are not present in the MS image; 2) incorporate this spatial information into the low-resolution MS bands (interpolated to meet the spatial scale of the PAN image) by properly modeling the relationships between the MS bands and the PAN image. This paper aims at providing a critical comparison among classical pansharpening approaches applied to two different datasets. The well-credited Wald protocol is used for the assessment procedure, and some useful guidelines for the comparison are given.
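As a concrete illustration of this two-step protocol, here is a minimal numpy/scipy sketch in the spirit of high-pass-filtering (HPF) injection; the box filter, the bicubic interpolation, and the unit injection gain are illustrative assumptions, not details taken from the paper.

```python
import numpy as np
from scipy.ndimage import uniform_filter, zoom

def hpf_pansharpen(ms, pan, ratio=4, win=5):
    """Minimal HPF-style pansharpening sketch.

    ms  : (bands, h, w) low-resolution multispectral image
    pan : (H, W) panchromatic image, with H = ratio*h, W = ratio*w
    """
    # Step 1: extract the high-resolution spatial details from PAN
    # as the residual of a low-pass (box) filter.
    details = pan.astype(float) - uniform_filter(pan.astype(float), size=win)

    # Step 2: interpolate each MS band to the PAN scale and inject
    # the details (unit gain here; real methods model the MS/PAN
    # relationship to weight the injection per band).
    sharpened = np.stack([
        zoom(band.astype(float), ratio, order=3) + details
        for band in ms
    ])
    return sharpened
```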

A Critical Comparison of Graph Clustering Algorithms Using the K-clique Percolation Technique

In the k-clique adjacency graph, a k-clique percolation cluster is very much like a regular (edge) percolation cluster: the vertices represent the k-cliques of the original graph, and there is an edge between two vertices if the corresponding k-cliques are adjacent. Rolling a k-clique template corresponds to moving a particle from one vertex of this adjacency graph to another along an edge, i.e., from one k-clique of the original graph to an adjacent one. A k-clique template can be thought of as an object that is isomorphic to a complete graph of k vertices. Such a template can be placed onto any k-clique of the original graph and rolled to an adjacent k-clique by repositioning one of its vertices while keeping its other k−1 vertices fixed. Thus, the k-clique percolation clusters of a graph are all those subgraphs that can be fully explored, but cannot be left, by rolling a k-clique template inside them. The threshold probability (critical point) of k-clique percolation is then presented as a general result derived from heuristic arguments: a giant k-clique component appears in an Erdős–Rényi graph of N vertices at p = p_c(k) = [(k−1)N]^(−1/(k−1)).
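The template-rolling clusters described above are exactly what networkx's k_clique_communities computes, so a small experiment around the stated threshold can be sketched as follows (the closed form p_c(k) = [(k−1)N]^(−1/(k−1)) is the Derényi–Palla–Vicsek result; the graph size and seed are arbitrary):

```python
import networkx as nx
from networkx.algorithms.community import k_clique_communities

N, k = 200, 3
p_c = ((k - 1) * N) ** (-1.0 / (k - 1))   # heuristic critical point from the text

for p in (0.5 * p_c, p_c, 2.0 * p_c):
    G = nx.erdos_renyi_graph(N, p, seed=42)
    # Each community is one k-clique percolation cluster.
    clusters = list(k_clique_communities(G, k))
    largest = max((len(c) for c in clusters), default=0)
    print(f"p = {p:.4f}: {len(clusters)} k-clique clusters, largest = {largest}")
```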

Comparison of Clustering Algorithms Based on Outliers

Saif et al. presented data analysis techniques for extracting hidden and interesting patterns from large datasets. DBSCAN (Density-Based Spatial Clustering of Applications with Noise) is a pioneering density-based algorithm. Many researchers have attempted to enhance the basic DBSCAN algorithm in order to overcome its drawbacks, producing variants such as VDBSCAN, FDBSCAN, DD_DBSCAN, and IDBSCAN. In their study, the authors survey the different variations of DBSCAN proposed so far; these variations are critically evaluated and their limitations are listed. Haizau et al. proposed a robust method for local outlier detection with statistical parameters, which incorporates clustering-based ideas for dealing with big data. Their experimental results demonstrate the efficiency and accuracy of the method in identifying both global and local outliers; moreover, it proved more robust than typical outlier detection methods such as LOF and DBSCAN. Hans et al. proposed a novel outlier detection model that finds outliers deviating from the generating mechanisms of normal instances by considering combinations of different subsets of attributes, as occur when there are local correlations in the dataset. This model makes it possible to search for outliers in arbitrarily oriented subspaces of the original feature space. Abir et al. used clustering algorithms to determine the critical groupings in a set of unlabeled data; many clustering methods require the number of clusters as input, which is hard to determine. Yiang et al. showed that the K-Medoids clustering algorithm solves the problem the K-Means algorithm has with outlier samples, but that it is not able to process big data because of its time complexity.
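For a hands-on feel of the two families compared here, density-based clustering and local-outlier detection, a minimal scikit-learn sketch; the synthetic data and parameter values are illustrative assumptions:

```python
import numpy as np
from sklearn.cluster import DBSCAN
from sklearn.neighbors import LocalOutlierFactor

rng = np.random.default_rng(0)
# Two dense clusters plus a few scattered outliers.
X = np.vstack([
    rng.normal(0, 0.3, (100, 2)),
    rng.normal(4, 0.3, (100, 2)),
    rng.uniform(-2, 6, (10, 2)),
])

# DBSCAN: points labeled -1 fall in no dense region (global outliers).
db_labels = DBSCAN(eps=0.5, min_samples=5).fit_predict(X)
print("DBSCAN outliers:", np.sum(db_labels == -1))

# LOF: flags points whose local density deviates from that of their
# neighbours (local outliers), as discussed in the comparison above.
lof_labels = LocalOutlierFactor(n_neighbors=20).fit_predict(X)
print("LOF outliers:", np.sum(lof_labels == -1))
```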

Comparison of multi-modal optimization algorithms based on evolutionary algorithms

Many engineering optimization tasks involve finding more than one optimum solution. The present study provides a comprehensive review of the existing work done in the field of multi-modal function optimization and provides a critical analysis of the existing methods. Existing niching methods are analyzed and an improved niching method is proposed. To achieve this purpose, we first give an introduction to niching and diversity preservation, followed by a discussion of a number of algorithms. Thereafter, a comparison of clearing, clustering, deterministic crowding, probabilistic crowding, restricted tournament selection, sharing, and species conserving genetic algorithms is made. A modified niching-based technique – the modified clearing approach – is introduced and also compared with the existing methods. For comparison, a versatile hump test function is also proposed and used together with two other functions. The ability of the algorithms in finding, locating, and maintaining multiple optima is judged using two performance measures: (i) number of peaks maintained, and (ii) computational time. Based on the results, we conclude that the restricted tournament selection and the proposed modified clearing approaches are better in terms of finding and maintaining the multiple optima.
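As a reference point for the comparison, here is a minimal sketch of the basic clearing procedure (Pétrowski-style), not the paper's modified variant; it assumes strictly positive, higher-is-better fitness, a Euclidean niche radius, and a default niche capacity of one:

```python
import numpy as np

def clearing(population, fitness, radius, capacity=1):
    """One clearing step: within each niche of the given radius, keep
    the fitness of the best `capacity` individuals and clear (zero)
    the rest. Assumes strictly positive fitness (0 marks "cleared").

    population : (n, d) array of candidate solutions
    fitness    : (n,) array, higher is better
    """
    cleared = fitness.astype(float).copy()
    order = np.argsort(fitness)[::-1]              # best individuals first
    for i, idx in enumerate(order):
        if cleared[idx] == 0:
            continue                               # already cleared
        winners = 1                                # idx is the niche winner
        for jdx in order[i + 1:]:
            if cleared[jdx] == 0:
                continue
            if np.linalg.norm(population[idx] - population[jdx]) < radius:
                if winners < capacity:
                    winners += 1                   # stays a niche winner
                else:
                    cleared[jdx] = 0.0             # cleared: loses its fitness
    return cleared
```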

Comparison of Monte Carlo Metropolis, Swendsen-Wang, and Wolff Algorithms in the Critical Region for the 2-dimensional Ising Model

Near the critical point the system shows large fluctuations in total magnetization and total energy, which implies that more measurements are needed to reduce statistical errors. It also exhibits critical slowing down, which implies that more updates are needed to obtain statistically independent states or configurations; more efficient algorithms are needed to reduce this critical slowing down. It has been documented that different Monte Carlo algorithms exhibit this critical slowing down at the critical point to different degrees. The Swendsen-Wang algorithm was invented by Swendsen and Wang [18]. The Metropolis algorithm is inefficient in the critical region because we encounter large areas in which the spins are oriented in the same direction. When attempting to flip one of these spins, the change in energy ΔE takes the value of 8J, assuming that the external magnetic field is zero. With this value of ΔE, the Metropolis probability p = min(1, exp(−βΔE)) becomes very small, so such flips are rarely accepted.
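A minimal numpy sketch of the single-spin-flip Metropolis update discussed here, with periodic boundaries and zero external field; the lattice size and seed are arbitrary. Note how a fully aligned neighbourhood gives ΔE = 8J and hence a small acceptance probability near the critical temperature:

```python
import numpy as np

rng = np.random.default_rng(1)
L, J, beta = 32, 1.0, 0.4407       # beta close to the 2D critical point

spins = rng.choice([-1, 1], size=(L, L))

def metropolis_sweep(spins):
    """One Metropolis sweep: L*L single-spin-flip attempts."""
    for _ in range(L * L):
        i, j = rng.integers(L, size=2)
        # Sum of the four nearest neighbours (periodic boundaries).
        nn = (spins[(i + 1) % L, j] + spins[(i - 1) % L, j]
              + spins[i, (j + 1) % L] + spins[i, (j - 1) % L])
        dE = 2.0 * J * spins[i, j] * nn     # energy change of flipping (i, j)
        # Metropolis acceptance p = min(1, exp(-beta*dE)); in an aligned
        # region nn = +-4, so dE = 8J and p is small near T_c.
        if dE <= 0 or rng.random() < np.exp(-beta * dE):
            spins[i, j] = -spins[i, j]
    return spins
```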

Testing algorithms for critical slowing down

Abstract. We present preliminary tests on two modifications of the Hybrid Monte Carlo (HMC) algorithm. Both algorithms are designed to travel much farther in the Hamiltonian phase space for each trajectory and reduce the autocorrelations among physical observables, thus tackling the critical slowing down towards the continuum limit. We present a comparison of the costs of the new algorithms with the standard HMC evolution for pure gauge fields, studying the autocorrelation times for various quantities including the topological charge.
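Autocorrelation times like those studied here are commonly estimated via the integrated autocorrelation time with a self-consistent summation window; a minimal sketch (the windowing constant c is a common heuristic, not a detail from this paper):

```python
import numpy as np

def integrated_autocorr_time(x, c=5.0):
    """Integrated autocorrelation time of a Monte Carlo time series,
    summing lags t < c * tau_int (self-consistent window)."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    x = x - x.mean()
    # Normalized autocorrelation function via direct sums.
    rho = np.array([np.dot(x[:n - t], x[t:]) for t in range(n // 2)])
    rho /= rho[0]
    tau = 0.5
    for t in range(1, len(rho)):
        tau += rho[t]
        if t >= c * tau:            # window criterion: stop summing noise
            break
    return tau
```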

An objective comparison of cell-tracking algorithms

Biologists, however, when using tracking algorithms, have specific biological questions and are therefore usually more interested in specific aspects of the final segmentation and tracking analysis. For this reason, we evaluated four additional aspects of biological relevance. Complete Tracks (CT) focuses on the fraction of ground truth cell tracks that a given method is capable of reconstructing in their entirety. The higher CT is, the larger the fraction of cells that are correctly tracked throughout the entire movie, from the frame they appear in to the frame they disappear from. CT is especially relevant when a perfect reconstruction of the cell lineages is required. Track Fractions (TF) selects for each reference track its longest matching algorithm-generated tracklet (continuous cell tracking subsequence), computes the percentage of overlap of these subsequences with respect to the full tracks, and takes the average of these values. Intuitively, this can be interpreted as the fraction of an average cell's trajectory that an algorithm reconstructs correctly, and therefore gives an indication of the algorithm's ability to measure cell speeds or trajectories. In cases where the reliable detection of dividing cells is critical, Branching Correctness (BC) measures how effective a method is at correctly detecting division events. Finally, Cell Cycle Accuracy (CCA) measures how accurate an algorithm is at reconstructing the length of the life of a cell, i.e., the time between two consecutive divisions. Both BC and CCA are informative about the ability of the algorithm to detect cell population growth. All biologically inspired measures take values in the interval [0,1], with higher values corresponding to better performance.
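To make the CT measure concrete, here is a deliberately simplified sketch that counts a ground-truth track as fully reconstructed only when its matched track covers its exact lifetime; the benchmark's actual matching rules are more involved, and the data layout here is hypothetical:

```python
def complete_tracks(reference_tracks, matched_tracks):
    """CT sketch: fraction of ground-truth tracks reconstructed entirely.

    reference_tracks : dict  track_id -> (start_frame, end_frame)
    matched_tracks   : dict  track_id -> (start_frame, end_frame) of the
                       matching reconstructed track, if any
    """
    complete = sum(
        1 for tid, span in reference_tracks.items()
        if matched_tracks.get(tid) == span   # covers the full lifetime
    )
    return complete / len(reference_tracks)

# Example: 2 of 3 ground-truth tracks fully reconstructed -> CT = 0.667
ref = {1: (0, 50), 2: (10, 40), 3: (0, 99)}
got = {1: (0, 50), 2: (12, 40), 3: (0, 99)}
print(complete_tracks(ref, got))
```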

Object-Based Area-to-Point Regression Kriging for Pansharpening

GSA. Although the boundaries in the results of HPF, AWLP, MF-HG and FE became a bit clearer, many spatial details were still seriously missing. The result of BDSD has better accuracy values than most of the other methods, and the boundaries of the various land covers became similar to the reference data, but the spectral distortion was serious. Spatial details in the result of PNN were well presented and the boundaries of many objects were enhanced, but serious spectral distortion occurred in the sharpened MS image (see Fig. 4(l)). GLP-BPT and GSA-BPT produced similar results that contained more spatial details than the CS- and MRA-based pansharpening algorithms, though the accuracy values for GSA-BPT were slightly better than those of GLP-BPT. The geostatistical pansharpening algorithm ATPRK produced a result whose colors are consistent with the reference image, but the boundaries were spatially blurred, like those of HPF, AWLP, MF-HG and FE. This is because the study site is covered by different land cover types (objects) with different spectral reflectance characteristics, which makes it difficult for the linear regression in ATPRK to capture the spatial details of the various objects. For the proposed OATPRK, however, as a unique regression model was fitted for each object, it produced the most accurate result in the quantitative comparison; spatial artifacts and blurred boundaries were reduced and more spatial details were recovered, making it the most similar to the reference image. Compared with GSA, BDSD, PRACS, HPF, AWLP, MF-HG, FE, PNN, GLP-BPT, GSA-BPT and ATPRK, the UIQI value of the OATPRK result increased by 10.52%, 2.61%, 14.10%, 7.49%, 6.08%, 4.44%, 5.77%, 5.69%, 5.39%, 4.78% and 6.55%, respectively, which demonstrates the superiority of OATPRK for fusing the IKONOS dataset.
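The UIQI values quoted above refer to the Universal Image Quality Index of Wang and Bovik; a minimal global-version sketch (in practice the index is averaged over sliding windows and over bands):

```python
import numpy as np

def uiqi(x, y):
    """Global Universal Image Quality Index between images x and y:
    Q = 4*cov(x,y)*mean(x)*mean(y) / ((var(x)+var(y))*(mean(x)^2+mean(y)^2)).
    Values closer to 1 indicate higher fidelity to the reference."""
    x, y = x.astype(float).ravel(), y.astype(float).ravel()
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return 4 * cov * mx * my / ((vx + vy) * (mx**2 + my**2))
```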

Comparison of Pansharpening Algorithms: Outcome of the 2006 GRS-S Data Fusion Contest

A variety of pansharpening techniques take advantage of the complementary characteristics of the spatial and spectral resolutions of the data [12]. Among them, component-substitution (CS) methods [13] are attractive because they are fast, easy to implement, and allow users' expectations to be fulfilled. When exactly three MS bands are concerned, the most widely used CS fusion method is based on the intensity-hue-saturation (IHS) transformation. The spectral bands are resampled and coregistered to the Pan image before the IHS transformation is applied. The smooth intensity component I is substituted with the high-resolution Pan image, and the result is transformed back to the spectral domain via the inverse IHS transformation. This procedure is equivalent to injecting, i.e., adding, the difference between the sharp Pan and the smooth intensity I into the resampled MS bands [14]. Usually, the Pan image is histogram-matched, i.e., radiometrically transformed by a constant gain and bias, so that it exhibits the same mean and variance as I before the substitution is carried out. However, since the histogram-matched Pan and the intensity component I do not generally have the same local mean, when the fusion result is displayed in color composition, large spectral distortion may be noticed as color changes. This effect occurs because the spectral response of the I channel, as synthesized by means of the MS bands, may be far different from that of Pan. Thus, not only spatial details but also slowly space-varying radiance offsets are locally injected. Generally speaking, if the spectral responses of the MS channels are not perfectly overlapped with the bandwidth of …
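A minimal numpy sketch of this CS procedure in its equivalent injection form, following the description above: synthesize I from the MS bands, histogram-match the Pan with a constant gain and bias, and add the difference Pan − I to every band (the equal band weights are an illustrative assumption):

```python
import numpy as np

def ihs_pansharpen(ms, pan, weights=(1/3, 1/3, 1/3)):
    """Fast IHS component substitution in injection form.

    ms  : (3, H, W) MS bands already resampled/coregistered to PAN
    pan : (H, W) panchromatic band
    """
    ms, pan = ms.astype(float), pan.astype(float)
    # Synthesize the smooth intensity component from the MS bands.
    I = np.tensordot(np.asarray(weights), ms, axes=1)

    # Histogram-match PAN to I: constant gain and bias so that the
    # matched PAN has the same mean and variance as I.
    pan_m = (pan - pan.mean()) * (I.std() / pan.std()) + I.mean()

    # Inject (add) the difference between the sharp PAN and the
    # smooth intensity into every resampled MS band.
    return ms + (pan_m - I)
```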

Performance Comparison of ACO Algorithms for MANETs

ARA is a purely reactive MANET routing algorithm; it does not use any HELLO packets to explicitly find its neighbours. ARA presents a detailed routing scheme for MANETs, including route discovery and maintenance mechanisms [13]. Route discovery is achieved by flooding forward ants to the destination while establishing reverse links to the source; a similar mechanism is employed in other algorithms such as AODV. Routes are maintained primarily by data packets as they flow through the network. If a failure occurs, an attempt is made to send the packet over an alternate link; otherwise, it is returned to the previous hop for similar processing. If the packet is eventually returned to the source, a new route discovery sequence is launched.
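A toy sketch of the forward-ant route discovery described above: ants flood the network, each node remembers the neighbour it first received the ant from, and those reverse links trace a route back to the source. Pheromone bookkeeping and duplicate-ant handling are omitted; the example graph is hypothetical:

```python
from collections import deque

def ant_route_discovery(graph, source, destination):
    """Flood forward ants from `source`; each node keeps a reverse link
    to the neighbour it first heard the ant from.

    graph : dict  node -> iterable of neighbour nodes
    """
    reverse_link = {source: None}
    queue = deque([source])
    while queue:
        node = queue.popleft()
        if node == destination:
            break
        for nbr in graph[node]:
            if nbr not in reverse_link:      # first copy of the ant wins
                reverse_link[nbr] = node
                queue.append(nbr)

    # Walk the reverse links from the destination back to the source.
    path, node = [], destination
    if destination not in reverse_link:
        return None                          # destination unreachable
    while node is not None:
        path.append(node)
        node = reverse_link[node]
    return path[::-1]

net = {1: [2, 3], 2: [1, 4], 3: [1, 4], 4: [2, 3, 5], 5: [4]}
print(ant_route_discovery(net, 1, 5))        # e.g. [1, 2, 4, 5]
```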

Comparison of Multiepisode Video Summarization Algorithms

The ever-growing availability of multimedia data creates a strong requirement for efficient tools to manipulate and present data in an effective manner. Automatic video summarization tools aim at creating, with little or no human interaction, short versions that contain the salient information of the original video. The key issue here is to identify what should be kept in the summary and how the relevant information can be automatically extracted. To perform this task, we consider several algorithms and compare their performance to define the most appropriate one for our application.

An Experimental Comparison of Face Detection Algorithms

Abstract—Human face detection and tracking is an important research area with wide applications in human-machine interfaces, content-based image retrieval, video coding, gesture recognition, crowd surveillance and face recognition. Human face detection is extremely important and simultaneously a difficult problem in computer vision, mainly due to the dynamics and high degree of variability of the head. A large number of effective algorithms have been proposed for face detection in grey-scale images, ranging from simple edge-based methods to composite high-level approaches using modern and advanced pattern recognition techniques. The aim of the paper is to compare Gradient vector flow and silhouettes, two of the most widely used algorithms in the area of face detection. Both algorithms were applied on a common database and the results were compared. This is the first paper that evaluates the runtime of the Gradient vector flow methodology and compares it with the silhouette segmentation technique. The paper also explains the factors affecting the performance and the error incurred by both algorithms. Finally, the results are presented, which demonstrate the superiority of the silhouette segmentation method over the Gradient vector flow method.

A Review of Audio Fingerprinting and Comparison of Algorithms

Linear Transforms: A linear transform maps a set of characteristics to a new set of features. An appropriate transform, when used, reduces redundancy. According to [11], there are optimal transforms such as the Karhunen-Loève (KL) transform or the Singular Value Decomposition (SVD); these are, however, complex to compute, so lower-complexity transforms are used instead. The most common are the Fast Fourier Transform (FFT), the Discrete Cosine Transform (DCT), the Haar Transform and the Walsh-Hadamard Transform. Compared to other transforms, the DFT has been found to be less sensitive to shifting [12].
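To make this concrete, a minimal sketch of one FFT-based fingerprinting step in the spirit of band-energy-difference fingerprints; the frame size, band count and bit rule are illustrative assumptions, not the scheme of any specific system reviewed:

```python
import numpy as np

def fingerprint_frame(frame, n_bands=16):
    """Sketch of one fingerprint step: window a frame, take the FFT
    magnitude spectrum, pool it into coarse energy bands, and derive
    a compact bit pattern from band-energy differences."""
    windowed = frame * np.hanning(len(frame))
    spectrum = np.abs(np.fft.rfft(windowed))

    # Pool the spectrum into n_bands coarse energy bands.
    edges = np.linspace(0, len(spectrum), n_bands + 1, dtype=int)
    energies = np.array([spectrum[a:b].sum() for a, b in zip(edges, edges[1:])])

    # One bit per adjacent band pair: is the energy increasing?
    return (np.diff(energies) > 0).astype(int)

rng = np.random.default_rng(0)
print(fingerprint_frame(rng.standard_normal(2048)))
```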

Comparison of Two Iris Localization Algorithms

Iris localization is the first task in any iris-based biometric authentication system, and if the iris part of an eye image is not detected accurately, it leads to errors in the overall identification method. In this work, two algorithms are studied for iris localization. The first algorithm, Daugman's operator, uses a basic integration and differentiation technique to highlight the iris region. The second algorithm, DRLSE, uses internal- and external-energy concepts to localize the iris boundary. From the results of both algorithms, it is concluded that the accuracy of iris localization depends on the intensity of the iris image: some images are of dark intensity and some of light intensity. DRLSE takes less time than Daugman's operator, and its accuracy is 98.6% compared to 96.7% for Daugman's operator.
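For reference, the integration-and-differentiation technique referred to here is Daugman's integrodifferential operator, which searches over circle parameters (r, x0, y0) for the maximum of the blurred radial derivative of the normalized contour integral of the image intensity:

```latex
\max_{(r,\,x_0,\,y_0)} \left|\, G_\sigma(r) * \frac{\partial}{\partial r}
\oint_{r,\,x_0,\,y_0} \frac{I(x,y)}{2\pi r}\, ds \,\right|
```

Here I(x,y) is the eye image, G_σ(r) a Gaussian smoothing kernel, and the contour integral is taken along the circle of radius r centered at (x0, y0).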

A Comparison of Algorithms for the Multivariate L1-Median

…proposed a steepest descent algorithm combined with a bisection algorithm. It is somewhat similar to the HoCr algorithm, but the use of the bisection method instead of step-halving considerably increases the computation time. Bedall and Zimmermann (1979) proposed using the Newton-Raphson procedure with the exact expression (4) for the Hessian matrix, which is similar to the NLM method with analytical second derivatives; it turned out to be much slower than the NLM procedure. The algorithms of Gower (1974) and Bedall and Zimmermann (1979) are included in the R package depth. Due to their similarity with algorithms already included in the simulation study, and since they are not competitive in terms of computation speed, we do not report their performance in this paper.
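For context, the classic baseline for the multivariate L1-median (the point minimizing the sum of Euclidean distances to the observations) is the Weiszfeld iteration; a minimal numpy sketch, not one of the modified algorithms discussed above:

```python
import numpy as np

def l1_median(X, tol=1e-8, max_iter=500):
    """Weiszfeld iteration: repeatedly replace the current estimate by
    the inverse-distance-weighted average of the observations.

    X : (n, d) data matrix
    """
    m = X.mean(axis=0)                     # starting value
    for _ in range(max_iter):
        d = np.linalg.norm(X - m, axis=1)
        d = np.maximum(d, 1e-12)           # guard: estimate hits a data point
        w = 1.0 / d
        m_new = (w[:, None] * X).sum(axis=0) / w.sum()
        if np.linalg.norm(m_new - m) < tol:
            return m_new
        m = m_new
    return m

rng = np.random.default_rng(0)
X = rng.standard_normal((100, 3))
print(l1_median(X))
```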

Supervised Learning Classification Algorithms Comparison

This classifier is used primarily for text classification, which generally involves training on high-dimensional datasets, and it is relatively faster than other algorithms in making predictions. It is called "naïve" because it assumes that each feature in the dataset is independent of the occurrence of the other features, i.e., it is conditionally independent. The classifier is based on Bayes' theorem, which states that the probability of an event Y given X equals the probability of X given Y multiplied by the prior probability of Y, divided by the probability of X: P(Y|X) = P(X|Y)·P(Y) / P(X).
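A minimal scikit-learn sketch of this classifier on a toy text task: bag-of-words counts feeding a multinomial Naive Bayes model; the corpus is made up for illustration:

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Tiny illustrative corpus (invented for the example).
texts = ["free prize now", "meeting at noon", "win a free prize",
         "project meeting notes", "claim your prize", "noon lunch meeting"]
labels = ["spam", "ham", "spam", "ham", "spam", "ham"]

# Bag-of-words features feed a multinomial Naive Bayes model, which
# applies Bayes' theorem under the conditional-independence assumption.
model = make_pipeline(CountVectorizer(), MultinomialNB())
model.fit(texts, labels)
print(model.predict(["free meeting prize"]))
```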

An objective comparison of cell-tracking algorithms

Generally speaking, cell tracking methods involve several steps: 1) preprocessing, in which the quality of the raw image data is enhanced to facilitate further analysis; 2) cell segmentation …

A Modest Comparison of Blockchain Consensus Algorithms

The constructed model can be used to analyze the BFT consensus algorithm and determine the performance impact of variations in factors such as packet error rate and number of misbehaving nodes. The model can also be further extended to account for more edge cases and reduce the number of assumptions and simplifications. It is, however, not possible to answer the initial research question, "Does the Exonum BFT consensus algorithm provide the same level of performance as the Bitcoin consensus algorithm?". This is because the definition of measurable network performance is considerably different for the two projects: the assumptions introduced to bridge these differences have more impact on the outcome of the comparison than the actual analysis does, making it possible to end up with any outcome one may want. While a proper comparison between different types of consensus algorithms is impossible, it might still be possible to compare consensus algorithms of the same type; this, however, requires further research.

Circular sequence comparison: algorithms and applications

Conventional tools, designed for linear sequences, could yield an incorrectly high genetic distance between closely related circular sequences. Indeed, when sequencing molecules, the position where a circular sequence starts can be totally arbitrary. Due to this arbitrariness, a suitable rotation of one sequence would give much better results for a pairwise alignment, and hence highlight a similarity that any linear alignment would miss. A practical example of the benefit this can bring to sequence analysis is the following. Linearized human (NC_001807) and chimpanzee (NC_001643) MtDNA sequences, obtained from GenBank [16], do not start in the same region. Their pairwise sequence alignment using EMBOSS Needle [17] (default parameters) gives a similarity of 85.1 % and consists of 1195 gaps. However, taking different rotations of these sequences into account yields a much more significant alignment with a similarity of 91 % and only 77 gaps. This example motivates the design of efficient algorithms that are specifically devoted to the comparison of circular sequences [18–20].
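The rotation idea can be sketched naively by minimizing an edit distance over all rotations of one sequence; this brute force multiplies the cost by the number of rotations, which is precisely what the cited algorithms are designed to avoid:

```python
def edit_distance(a, b):
    """Standard dynamic-programming edit distance."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[j - 1] + 1,              # insertion
                           prev[j - 1] + (ca != cb)))   # substitution
        prev = cur
    return prev[-1]

def circular_distance(a, b):
    """Minimum edit distance over all rotations of `a` (brute force)."""
    return min(edit_distance(a[i:] + a[:i], b) for i in range(len(a)))

print(edit_distance("aacgt", "cgtaa"))      # linear comparison: > 0
print(circular_distance("aacgt", "cgtaa"))  # 0: same circular sequence
```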
