Measurement of image quality plays an important role in numerous image processing applications such as forensic science, image enhancement, and medical imaging. In recent years there has been growing interest among researchers in creating objective image quality assessment (IQA) algorithms that correlate well with perceived quality, and significant progress has been made on the full-reference (FR) IQA problem in the past decade. In this paper, we compare five selected FR IQA algorithms on the TID2008 image dataset. The performance and evaluation results are shown in graphs and tables. The quantitative assessment showed that the wavelet-based IQA algorithms outperformed the non-wavelet-based IQA methods, with the exception of the WASH algorithm, whose predictions were superior only for certain distortion types, since it takes into account the essential structural content of the image.
Abstract—In the coming era of digitized image information, it is critical to achieve high compression performance while minimizing the amount of image data so that the data can be stored effectively. Compression using wavelet algorithms is one of the indispensable techniques for solving this problem. A wavelet algorithm comprises a transformation process, a quantization process, and lossy entropy coding. Wavelets are functions that allow analysis of signals or images according to scale or resolution, and they provide a powerful and remarkably flexible set of tools for handling fundamental problems in science and engineering, such as signal compression, image de-noising, image enhancement, and image recognition. The aim of this paper is to compare image quality across four wavelet-based image compression techniques, namely Set Partitioning In Hierarchical Trees (SPIHT), Embedded Zerotree Wavelet (EZW), Wavelet Difference Reduction (WDR), and Adaptively Scanned Wavelet Difference Reduction (ASWDR). For analysis, Peak Signal to Noise Ratio (PSNR), Mean Square Error (MSE), Compression Ratio (CR), and Bits Per Pixel (BPP) are used. The obtained results show that WDR outperforms the others in terms of compression efficiency.
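As a concrete reference for the evaluation measures named above, here is a minimal sketch of MSE, PSNR, CR, and BPP for 8-bit grayscale images (the helper names are illustrative, not from the paper):

```python
import math

def mse(original, compressed):
    """Mean Square Error between two equal-size images (lists of pixel rows)."""
    flat_o = [p for row in original for p in row]
    flat_c = [p for row in compressed for p in row]
    return sum((o - c) ** 2 for o, c in zip(flat_o, flat_c)) / len(flat_o)

def psnr(original, compressed, peak=255.0):
    """Peak Signal-to-Noise Ratio in dB; infinite for identical images."""
    err = mse(original, compressed)
    return float("inf") if err == 0 else 10.0 * math.log10(peak ** 2 / err)

def compression_ratio(original_bytes, compressed_bytes):
    """CR: size of the raw image over the size of its compressed form."""
    return original_bytes / compressed_bytes

def bpp(compressed_bits, num_pixels):
    """Bits per pixel of the compressed representation."""
    return compressed_bits / num_pixels
```

Higher PSNR (equivalently lower MSE) means the decompressed image is closer to the original, while higher CR and lower BPP mean a more compact representation; the paper's comparison trades these two axes off against each other.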
Power quality has become a major concern for utility, facility, and consulting engineers in recent years. International as well as local standards have been formulated to address power quality issues. For facility managers and end users, frequent complaints by tenants/customers about occasional power failures of computer and communication equipment, and the energy inefficiency of the LV electrical distribution system, are on the management's agenda. Harmonic currents produced by nonlinear loads cause extra copper loss in the distribution network, which on the one hand increases the energy cost and on the other hand increases the electricity tariff charge. The benefits of using power electronic devices in the LV distribution system in buildings, such as switch-mode power supplies and variable-speed drive units, to save energy are sometimes offset by the increased energy loss in the distribution cables due to current harmonics and by the cost of the remedial measures required. Voltage harmonics caused by harmonic voltage drops in the distribution cables also affect the normal operation of voltage-sensitive equipment.
Wavelets are mathematical functions that decompose data into different frequency components and then represent each component with a resolution matched to its scale. The wavelet transform can de-correlate an image in both space and frequency, thereby compacting the energy into a few low- and high-frequency coefficients. The effectiveness of a wavelet-based image compression scheme depends both on the wavelet filters chosen and on the coefficient quantization scheme.
B. Compression of 2D Images with the Haar Wavelet Technique
It was shown in the previous section how a one-dimensional image can be treated as a sequence of coefficients. Alternatively, we can think of images as piecewise-constant functions on the half-open interval [0, 1). To do so, the concept of a vector space is used. A one-pixel image is just a function that is constant over the entire interval [0, 1). Let V0 be the vector space of all these functions. A two-pixel image has two constant pieces over the intervals [0, 1/2) and [1/2, 1). We call the space containing all these functions V1. Continuing in this manner, the space Vj includes all piecewise-constant functions defined on [0, 1) with constant pieces over each of 2^j equal subintervals. We can now think of every one-dimensional image with 2^j pixels as an element, or vector, in Vj. Note that because these vectors are all functions defined on the unit interval, every vector in Vj is also contained in Vj+1. For example, we can always describe a piecewise-constant function with two intervals as a piecewise-constant function with four intervals, with each interval in the first function corresponding to a pair of intervals in the second. Thus, the spaces Vj are nested; that is, V0 ⊂ V1 ⊂ V2 ⊂ … This nested set of spaces Vj is a necessary ingredient for the mathematical theory of multiresolution analysis. It guarantees that every member of V0 can be represented exactly as a member of the higher-resolution space V1. The converse, however, is not true: not every function G(x) in V1 can be represented exactly in the lower-resolution space V0; in general some detail is lost.
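The nesting of the spaces Vj is what drives the Haar decomposition: each level replaces a signal in Vj by its pairwise averages (the projection onto Vj-1) plus the detail coefficients that record what the projection loses. A minimal sketch of this recursion for the unnormalized Haar transform (function names are illustrative):

```python
def haar_step(coeffs):
    """One level of the (unnormalized) Haar transform: pairwise averages
    (projection onto the coarser space) and the detail coefficients lost."""
    averages = [(coeffs[i] + coeffs[i + 1]) / 2 for i in range(0, len(coeffs), 2)]
    details = [(coeffs[i] - coeffs[i + 1]) / 2 for i in range(0, len(coeffs), 2)]
    return averages, details

def haar_decompose(pixels):
    """Full Haar decomposition of a 1-D image whose length is a power of two.
    Returns [overall average, detail coefficients from coarsest to finest]."""
    current = list(pixels)
    all_details = []
    while len(current) > 1:
        current, details = haar_step(current)
        all_details = details + all_details  # coarser details go in front
    return current + all_details
```

For example, the four-pixel image [9, 7, 3, 5] averages down to [8, 4] with details [1, -1], then to [6] with detail [2], giving the decomposition [6, 2, 1, -1]; applying the steps in reverse recovers the original exactly, which is the V0 ⊂ V1 ⊂ … nesting in action.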
A wavelet is a “small wave” whose energy is concentrated in time. It provides a tool for the analysis of transient, non-stationary signals. A wavelet can be described as a wave-like oscillation whose amplitude starts at zero, increases, and then decreases back to zero, forming one complete cycle. It not only has this oscillating, wave-like characteristic but also allows simultaneous time and frequency analysis on a flexible mathematical foundation. Wavelets are often designed for a specific purpose, which makes them useful for signal processing and image processing. Convolution is a technique that combines two signals by reversing one, shifting it, multiplying the overlapping samples, and summing the products.
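The reverse-shift-multiply-sum recipe just described is exactly discrete convolution; a minimal illustrative sketch:

```python
def convolve(signal, kernel):
    """Discrete 1-D convolution: reverse the kernel, shift it across the
    signal, multiply the overlapping samples, and sum ('full' output)."""
    n, m = len(signal), len(kernel)
    out = []
    for shift in range(n + m - 1):
        acc = 0.0
        for k in range(m):
            i = shift - k  # signal index under reversed-kernel tap k
            if 0 <= i < n:
                acc += signal[i] * kernel[k]
        out.append(acc)
    return out
```

Convolving a signal with a short averaging kernel such as [0.5, 0.5] is the simplest example of the filtering that wavelet analysis builds on.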
Nearly all early research noted the necessity of taking the human vision factor, which is very important for IQA, into account. Although there were not many IQA algorithms in the early papers on modelling the HVS, many properties such as contrast and luminance sensitivity were proposed in them. At first, IQA may not seem especially challenging, as stated in the literature. After all, digital processing changes an image's pixel values; to evaluate quality, these changes are quantified numerically, and the numerical changes are mapped to corresponding visual preferences. But since this process involves the Human Visual System (HVS), estimating quality is not that straightforward. The human perceptual system does not see images as collections of pixels; in human vision there are factors such as psychology, and the mapping varies depending on these factors. To date, no system fully evaluates quality, but remarkable progress has been made.
The operating scheme is shown in Fig. 1, and the generative method of selecting image-quality patches is illustrated in this chapter. Two neighboring pixels sampled from a natural image are strongly correlated, and the distribution range of the dots in a pixel-pair scatter plot represents the spread of pixel values in the image. The value difference is relatively large between pixels located at the edges of the scatter plot, and the more edges an image contains, the wider the distribution of the dots in its pixel-pair scatter plot. In other words, the distributed width of the dots in a scatter plot reflects the amount of edge and structure content in an image, and the human visual system is more sensitive to structures and edges. Since the goal is to represent image quality, the distributed width of the dots in a scatter plot is used to find the image-quality patches and thereby speed up the algorithm.
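One plausible way to turn the scatter-plot-width idea into a patch selector is sketched below; the specific spread measure (mean absolute difference between horizontal neighbors) and the non-overlapping block selection rule are assumptions for illustration, not the paper's exact formulation:

```python
def scatter_width(image):
    """Proxy for the distributed width of the pixel-pair scatter plot:
    mean absolute difference between horizontally adjacent pixels.
    Larger values indicate more edges/structure in the patch."""
    diffs = [abs(row[i] - row[i + 1]) for row in image for i in range(len(row) - 1)]
    return sum(diffs) / len(diffs)

def select_quality_patches(image, patch, top_k):
    """Rank non-overlapping patch x patch blocks by scatter width and
    return the top_k most structured ones (hypothetical selection rule)."""
    h, w = len(image), len(image[0])
    scored = []
    for y in range(0, h - patch + 1, patch):
        for x in range(0, w - patch + 1, patch):
            block = [row[x:x + patch] for row in image[y:y + patch]]
            scored.append((scatter_width(block), y, x))
    scored.sort(reverse=True)
    return scored[:top_k]
```

A flat patch scores zero while an edge-rich patch scores high, so scoring blocks this way and keeping only the top-ranked ones restricts the quality computation to the structured regions the HVS is most sensitive to.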
Specifically, the proposed RR-IQA feature, called the spectrum of spatial regularity (SSR), characterizes the spatial distribution of image structures based on fractal analysis. The spatial-frequency components of the image are first extracted by Log-Gabor filtering. Then the fractal dimension is used to measure the spatial regularity of the structural arrangements in each Log-Gabor subband. Finally, all the computed fractal dimensions are collected into a feature vector. Because fractal analysis correlates strongly with the HVS, the image structures are well encoded and the differences in their spatial arrangements between images can be well characterized.
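The fractal dimension of a point pattern is commonly estimated by box counting; the paper's exact estimator is not specified here, so the following is an illustrative sketch over a binary point set (e.g. the significant coefficients of one subband):

```python
import math

def fractal_dimension(points, sizes=(1, 2, 4, 8)):
    """Box-counting estimate of fractal dimension: count occupied boxes N(s)
    at several box sizes s, then least-squares fit log N(s) vs log(1/s)."""
    xs, ys = [], []
    for s in sizes:
        occupied = {(x // s, y // s) for x, y in points}
        xs.append(math.log(1.0 / s))
        ys.append(math.log(len(occupied)))
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    return (sum((a - mx) * (b - my) for a, b in zip(xs, ys))
            / sum((a - mx) ** 2 for a in xs))
```

A filled 2-D region yields a dimension near 2 and a straight line yields a dimension near 1, so the estimate captures how "space-filling" (regular vs. scattered) the structural arrangement in a subband is.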
information about how often a particular pixel is gazed at. On the other hand, local standard deviation and gradient information signify the local and contextual information of any pixel. The local correlation between the global information obtained from the reference and distorted images is computed. The local correlation between the gradient information obtained from the reference and distorted images is also calculated. These local correlations are combined with the local RMS contrast between the images. From the experimental results, we find that integrating simple visual details of global perceptual difference information and local information may result in an effective FR-IQA technique. The approach differs from its predecessors in its treatment of local and global features through regional correlation. It has been shown in  that gradient is structure-variant as well as contrast-variant; thus, similar variations in gradient magnitude and standard deviation are expected for a pixel. However, a change in standard deviation may not be caused by a change in gradient magnitude alone: gradient orientation is also affected by the presence of distortion. The proposed approach therefore applies all of these visual details to arrive at the quality score. Performance analysis of the technique on six benchmark databases shows the promise of the proposed method as a competitive FR-IQA technique. We also analyze the distortion-wise performance of the FR-IQA techniques using a color-based representation. This representation of the results clearly shows that the FR-IQA techniques fail to perform well under certain distortions, and it also depicts the competitive performance of the proposed method, as further analyzed in Section 4.4.3.
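The gradient-correlation ingredient above can be sketched as follows: compute a finite-difference gradient magnitude map for each image and correlate the two maps (here the Pearson correlation is computed globally for brevity, whereas the text uses local/regional correlations):

```python
def gradient_magnitude(image):
    """Gradient magnitude by forward finite differences (borders clamped)."""
    h, w = len(image), len(image[0])
    grad = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            gx = image[y][min(x + 1, w - 1)] - image[y][x]
            gy = image[min(y + 1, h - 1)][x] - image[y][x]
            grad[y][x] = (gx * gx + gy * gy) ** 0.5
    return grad

def pearson(a, b):
    """Pearson correlation between two equal-size maps (flattened)."""
    fa = [v for row in a for v in row]
    fb = [v for row in b for v in row]
    ma, mb = sum(fa) / len(fa), sum(fb) / len(fb)
    cov = sum((x - ma) * (y - mb) for x, y in zip(fa, fb))
    sa = sum((x - ma) ** 2 for x in fa) ** 0.5
    sb = sum((y - mb) ** 2 for y in fb) ** 0.5
    return cov / (sa * sb)
```

An undistorted image correlates perfectly with the reference gradient map, and distortion drives the correlation down; combining such correlations over local windows with RMS contrast is the pooling strategy the text describes.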
noise while preserving important features. There are several denoising techniques, such as the Wiener filter, median filter, average filter, wavelet thresholding, Principal Component Analysis (PCA), Independent Component Analysis (ICA), and Topographic ICA. Each technique has its own assumptions, merits, and demerits. The prime focus of this project is to perform a comparative study of diffusion filtering and the wavelet transform and to evaluate their performance in denoising data. The main advantages of the wavelet transform over other signal processing techniques are its space-frequency localization and its multi-scale view of a signal's components, as well as its ability to identify spatial structure in transect data. Furthermore, a suitable representation of the data is needed to facilitate any analysis procedure: transformation or decomposition techniques express the signal in a set of basis functions before it is processed in the transform domain. Digital images are subject to a wide variety of distortions during acquisition, processing, storage, transmission, and reproduction, which may degrade visual quality. Hence, measurement of image quality is very important for numerous image processing applications.
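Wavelet thresholding, listed among the techniques above, can be illustrated with a one-level Haar sketch: transform, shrink the detail coefficients toward zero, and invert (function names and the one-level depth are illustrative simplifications):

```python
def haar_1level(signal):
    """One level of the unnormalized Haar transform: averages and details."""
    avg = [(signal[i] + signal[i + 1]) / 2 for i in range(0, len(signal), 2)]
    det = [(signal[i] - signal[i + 1]) / 2 for i in range(0, len(signal), 2)]
    return avg, det

def inverse_haar_1level(avg, det):
    """Exact inverse of haar_1level."""
    out = []
    for a, d in zip(avg, det):
        out.extend([a + d, a - d])
    return out

def soft_threshold(x, t):
    """Shrink a coefficient toward zero: the classic denoising rule."""
    if x > t:
        return x - t
    if x < -t:
        return x + t
    return 0.0

def denoise(signal, t):
    """One-level Haar wavelet-thresholding denoiser (illustrative sketch)."""
    avg, det = haar_1level(signal)
    det = [soft_threshold(d, t) for d in det]
    return inverse_haar_1level(avg, det)
```

Small detail coefficients, which mostly carry noise, are zeroed out, while large ones, which carry edges, survive (shrunk); this is the space-frequency localization advantage in action.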
compared to other IQA methods. Performance on the JPEG compression distortion shows that the proposed metric matches both SPCRM-SCHARR  at a data rate of (image size)/32 and SIRR  at L/16+64, although the data rate of the SPCRM-SCHARR  metric is not acceptable here. For other distortion types, such as white noise, the performance of the proposed metric is comparable to other IQA metrics and better than the SPCRM-SCHARR  algorithm. Therefore, the overall performance of the proposed metric on the LIVE database is more desirable than that of the other IQA metrics and similar to the SIRR  metric. The superior behavior of the presented algorithm can be explained by considering the behavior of blob detection under different types of distortion. Because the blob detector extracts features at different scales and orientations, the structural changes caused by white Gaussian noise, JPEG, and JPEG2000 compression are evaluated well. Increasing the white noise distortion level increases the number of blobs, owing to the rise in local extrema in the luminance image. This sharp change in the number of blobs, together with the effect of distortion on the image structure and the associated features, causes the algorithm to respond well to white noise distortion. Under fast-fading and Gaussian blur distortion, the number of blobs decreases dramatically at high distortion levels, because these two distortion types suppress local luminance extrema. For these two distortion types, the performance of the proposed metric is acceptable but not as good as its performance on white noise and JPEG2000 compression. Considering both the data rate and the performance across distortions, the proposed method behaves more stably than other IQA metrics. Furthermore, the results of the proposed method across different distortion types are nearly the same.
Hence, the overall performance of the proposed metric outperforms the other state-of-the-art RR-IQA metrics and FR-IQA algorithms.

3-2- Overall performance comparison
The objective of image quality assessment (IQA)  is to provide computational models to measure the perceptual quality of an image. In recent years, a large number of methods have been designed to evaluate the quality of an image, which may be distorted during acquisition, transmission, compression, restoration, or processing, all of which can lead to image degradation. Over poor transmission channels, transmission errors or data dropping may occur, leading to imperfect quality and distortion of the received video data. Therefore, how to evaluate image quality has become a pressing problem. In recent years, digital cameras have been built into most mobile products, such as cellular phones, PDAs, and notebook computers, and image quality is one of the most important criteria for choosing among them. In some cases, product benchmarks or reviews are based on subjective image quality tests and are thus dependent on the tester and the environment; such subjective assessments often mislead the choice of image quality control parameters in the Image Signal Processing (ISP) algorithm. Recent years have witnessed tremendous demand for image quality assessment (IQA) methods in at least the following three ways: 1) they can be exploited to monitor image quality to control the quality of a processing system; 2) they can be employed to benchmark image processing systems and algorithms; and 3) they can be embedded into image processing systems to optimize algorithms and parameter settings. Existing IQA metrics
metrics have been developed including blur and noise simultaneously, such as the one by Zhu and Milanfar [6], introducing a new concept called true image content. Their measure is correlated with noise, sharpness, and intensity contrast, manifested in visually salient geometric features such as edges, showing that such a measure correlates well with subjective quality evaluation for both blur and noise distortions. However, the Zhu and Milanfar technique has been designed to compare images within the same context (images covering the same area but having different quality attributes), while the sharpness metric of Ferzli and Karam has been developed to predict the relative amount of blurriness in images regardless of their context (note that, in what follows, we will consider that images resulting from distorting a given original enclose the same context, and we will use the term different context for degraded or distorted images resulting from different originals). According to Zhu, the JNB technique fails to capture the trend of quality change in block-matching and three-dimensional (BM3D) [6] denoising experiments, since it cannot handle noise well. Later on, Narvekar and Karam [7] proposed an improved algorithm based on the JNB paradigm for a no-reference objective image sharpness metric, introducing a technique they called the cumulative probability of blur detection (CPBD). In this work, the sharpness metric converges to a finite number of quality classes. They used the LIVE [8] database to validate the performance of their metric. A training-based method determines the centroids of the quality classes that represent the perceived quality levels. Classification is based on assigning the image to one of the quality classes and then using the index of the corresponding quality class as the metric value for that image. They include measuring experiments for Gaussian blur and JPEG2000-compressed images, and they show that this metric performs better than other known metrics.
Anush Krishna Moorthy and Alan Conrad Bovik proposed an NSS-based NR IQA model , the Distortion Identification-based Image Verity and INtegrity Evaluation (DIIVINE) index, which uses summary statistics derived from an NSS wavelet-coefficient model within a two-stage framework for quality assessment: distortion identification followed by distortion-specific QA. DIIVINE is capable of assessing distorted image quality across multiple distortion categories, unlike most NR IQA algorithms, which are distortion-specific in nature. The DIIVINE index performs quite well on the LIVE IQA database , achieving statistical parity with the full-reference structural similarity (SSIM) index.
In this paper we have evaluated eight different spatial filtering techniques using nine full-reference image quality metrics. A comparative study of the spatial filtering techniques was carried out on the standard test image Lenna corrupted by Gaussian, speckle, and salt-and-pepper noise. The results show that each filter works well for certain noise models and not as well for others. The analysis was done using subjective interpretation of the filtered images as well as the full-reference image quality metrics. By comparing the results of the image quality metrics, we conclude that speckle noise can be reduced using the Lee, Kuan, and anisotropic diffusion filters; salt-and-pepper noise can be suppressed using the median filter and the AWMF; and for Gaussian noise, the mean and Wiener filters are highly effective. Although the results are good, there is still room to improve SSIM in the presence of speckle noise. One of the main future directions is to apply transform filtering in the wavelet domain to images corrupted by speckle noise.
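The median filter's effectiveness against salt-and-pepper noise follows from the fact that an impulse outlier almost never becomes the median of its window; a minimal sketch (the clamped border handling is an assumption for illustration):

```python
def median_filter(image, k=3):
    """k x k median filter; suppresses salt-and-pepper impulses because
    extreme outliers rarely end up as the window median."""
    h, w = len(image), len(image[0])
    r = k // 2
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            # gather the window, clamping indices at the image border
            window = [image[min(max(y + dy, 0), h - 1)][min(max(x + dx, 0), w - 1)]
                      for dy in range(-r, r + 1) for dx in range(-r, r + 1)]
            window.sort()
            out[y][x] = window[len(window) // 2]
    return out
```

A single "salt" pixel of 255 in a flat patch of 100s is removed entirely, whereas a mean filter would smear it into its neighbors; this is why the median (and its adaptive weighted variant, AWMF) dominates for impulse noise in the comparison above.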
The common problems reported for large-scale databases, including the variety of fingerprint types and poor image quality, can be solved with the help of the reference point [6,15,16]. Currently, core-point detection methods have only been proposed for scanner-based images. Numerous studies reported in the literature have proposed new methods for the analysis and detection of core points in scanner-based fingerprint images. In general, the existing methods can be classified into two categories. The first uses the Poincare index to locate the core point: this algorithm computes the total orientation variation around a point to determine whether a core point is present. The second uses template matching or ridge, probability, or shape analysis [8,16-18]. In real applications, Poincare index-based methods have proven more robust than the second category because they can handle image rotation; moreover, even though their computation cost is high, it is still acceptable.
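The Poincare index computation can be sketched as follows: sum the orientation differences (folded into (-pi/2, pi/2], since ridge orientations are pi-periodic) along the 8-neighbor loop around a candidate pixel; an index near +1/2 flags a core and near -1/2 a delta. The field layout and normalization below are illustrative, not taken from a specific cited method:

```python
import math

def poincare_index(orient, y, x):
    """Poincare index at (y, x) from an orientation field (radians in [0, pi)).
    Sums folded orientation differences along the 8-neighbor loop."""
    ring = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
            (1, 1), (1, 0), (1, -1), (0, -1)]
    total = 0.0
    for k in range(8):
        y1, x1 = y + ring[k][0], x + ring[k][1]
        y2, x2 = y + ring[(k + 1) % 8][0], x + ring[(k + 1) % 8][1]
        d = orient[y2][x2] - orient[y1][x1]
        # orientations are pi-periodic, so fold differences into (-pi/2, pi/2]
        while d > math.pi / 2:
            d -= math.pi
        while d <= -math.pi / 2:
            d += math.pi
        total += d
    return total / (2 * math.pi)
```

Because the index depends only on orientation differences around a closed loop, it is invariant to a global rotation of the fingerprint, which is exactly the robustness property noted above.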
Abstract. A wide variety of image denoising methods are available now. However, the performance of a denoising algorithm often depends on the individual input noisy image as well as on its parameter setting. In this paper, we present a no-reference image denoising quality assessment method that can be used to select, for an input noisy image, the right denoising algorithm with the optimal parameter setting. This is a challenging task as no ground truth is available. This paper presents a data-driven approach to learn to predict image denoising quality. Our method is based on the observation that while individual existing quality metrics and denoising models alone cannot robustly rank denoising results, they often complement each other. We accordingly design denoising quality features based on these existing metrics and models and then use Random Forests Regression to aggregate them into a more powerful unified metric. Our experiments on images with various types and levels of noise show that our no-reference denoising quality assessment method significantly outperforms the state-of-the-art quality metrics. This paper also provides a method that leverages our quality assessment method to automatically tune the parameter settings of a denoising algorithm for an input noisy image to produce an optimal denoising result.
In this research, instead of relying on global statistical features as NR-IQA-CDI  does, NRIQACDI is presented based on the hypothesis that image distortions alter local region statistics (local patch features), which can help improve the performance of IQA in predicting the image quality of contrast-distorted images. The experiments showed positive results: using local patch features with NSS significantly improves IQA performance, and the statistical tests indicate that the performance with local patch features and NSS is better than that of NR-IQA-CDI . The use of other statistical features and selection methods should be investigated further to improve prediction performance.
As further work, the blur assessment can easily be extended to digital video quality assessment to measure the level of blur in every consecutive frame of a video: the global blur measurement of a video would be the average of the local blur measurements over all successive frames. The blur assessment can also be improved so that it is applicable to real-time applications. Besides that, a blur identification algorithm is recommended to be implemented together with the blur assessment: instead of only measuring the amount of blur present in an image, the type of blur that degrades the image could be identified, so that the user knows which type of blur distorts the image quality. Finally, further research can be done on the blur assessment by measuring blurriness in color images on other color components, such as hue, instead of measuring only the luminance component.