Subjective quality assessment database of HDR images compressed with JPEG XT

A few studies appeared in 2014 that evaluated the performance of JPEG XT to various degrees. The work by Pinheiro et al. [5] compared four tone-mapping operators in how they affect the performance of three profiles of JPEG XT when used to generate the base layer of a compressed image. This evaluation demonstrated the sensitivity of the compression results to the choice of the tone-mapping operator for the base layer and showed that the profiles perform consistently across different bit rates when the Signal-to-Noise Ratio (SNR) and Feature SIMilarity (FSIM) metrics were used for measurements. Other studies were mostly limited to the performance evaluation of only one of the three available profiles of JPEG XT [6], [7]. The work by Mantel et al. [6] presented a subjective and objective evaluation for profile C. The objective grades were compared to subjective scores, concluding that the Mean Relative Square Error (MRSE) metric provides the best prediction performance. The authors of [7] investigated the correlation between thirteen well-known full-reference metrics and the perceived quality of compressed HDR content. Their evaluation was performed only on profile A of JPEG XT. In contrast to [6], their results showed that commonly used metrics, e.g., Peak SNR (PSNR), Structural SIMilarity (SSIM), and Multi-Scale SSIM (MS-SSIM), are unreliable in predicting the perceived quality of HDR content. They concluded that two metrics, HDR-VDP-2 and FSIM, predicted the human perception of visual quality reasonably well. The study by Valenzise et al. [8] compared the performance of three objective metrics, i.e., the HDR Visual Difference Predictor (HDR-VDP), PSNR, and SSIM, when considering HDR images compressed using one of the profiles of JPEG XT. The results of this study showed that simpler metrics can be effectively employed to assess image fidelity for applications such as HDR image compression.
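
The studies surveyed above all reduce to the same final step: correlating objective metric outputs with subjective scores. A minimal sketch of that computation, assuming per-image metric scores and MOS values have already been collected (the arrays below are illustrative placeholders, not data from the cited studies):

```python
import numpy as np
from scipy.stats import pearsonr, spearmanr

# Hypothetical per-image scores: objective metric outputs and subjective MOS.
metric_scores = np.array([42.1, 38.7, 35.2, 30.9, 27.4])  # e.g., PSNR in dB
mos = np.array([4.6, 4.1, 3.4, 2.7, 1.9])                 # mean opinion scores

# Pearson correlation measures linear agreement; Spearman measures monotonic
# agreement and is robust to the metric's nonlinear scale.
plcc, _ = pearsonr(metric_scores, mos)
srocc, _ = spearmanr(metric_scores, mos)
print(f"PLCC = {plcc:.3f}, SROCC = {srocc:.3f}")
```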

Overview and Evaluation of the JPEG XT HDR Image Compression Standard

The results of subjective experiments are crucial in the selection of the right image quality metric and as a ground truth reference, but a subjective experiment alone cannot cover the entire space of parameters. Moreover, due to the tedious nature of those experiments, only a limited number of images can be tested, which makes the outcomes difficult to generalize. For that reason, we analyze the compression performance based on the results of HDR-VDP-2, which was the best performing objective quality metric (see Table 2). Because of the scale of the required computation, the quality scores for 106 high-resolution images and in total 159 000 conditions were computed on an HPC cluster. The image quality computed for a range of base and extension layer quality settings may result in arbitrary bit rates, making the results difficult to aggregate. Therefore, the predicted quality values were linearly interpolated to find the HDR-VDP-2 Q-scores for each desired bit rate. This step was necessary to determine the average performance and confidence intervals for all tested profiles. In the rest of this section, we will refer to the predicted MOS, that is, a MOS predicted from the HDR-VDP-2 Q-score based on the logistic function fitted to the subjective evaluation data (Figure 5).
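
The two numerical steps described here, interpolating Q-scores onto a common bit-rate grid and mapping them to a predicted MOS through a fitted logistic function, could look roughly like the sketch below. The arrays and logistic parameters are placeholders, not the values fitted in the paper:

```python
import numpy as np

# Measured (bit rate, HDR-VDP-2 Q-score) pairs for one image/profile/TMO;
# placeholder values, not data from the paper.
bitrates = np.array([0.5, 1.0, 2.0, 4.0, 8.0])   # bpp
q_scores = np.array([45.0, 55.0, 63.0, 70.0, 74.0])

# Step 1: linearly interpolate Q-scores onto a common bit-rate grid so that
# averages and confidence intervals can be computed across images.
grid = np.linspace(0.5, 8.0, 16)
q_on_grid = np.interp(grid, bitrates, q_scores)

# Step 2: map Q-scores to predicted MOS via a logistic function fitted to
# the subjective data (a and b are illustrative fit parameters).
def predicted_mos(q, a=0.15, b=60.0, lo=1.0, hi=5.0):
    return lo + (hi - lo) / (1.0 + np.exp(-a * (q - b)))

print(predicted_mos(q_on_grid))
```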

JPEG XT: A Compression Standard for HDR and WCG Images [Standards in a Nutshell]

The JPEG committee has carried out a large number of experiments, using both subjective and objective methodologies, to assess the capabilities of JPEG XT. A set of 12 objective quality metrics was tested on 106 HDR images (resolutions ranging from full HD to 4K) covering a wide range of scenes typically captured in HDR images, including indoor and outdoor scenes, architecture, landscapes, portraits, frames from HDR video, and computer generated images. All images were carefully selected by experts in HDR imaging from the following publicly available datasets: Fairchild's HDR Photographic Survey [8] and EPFL's dataset of HDR images [9]. Since a TMO can be freely selected for encoding and its selection is not part of the JPEG XT specifications, we tested 5 different commonly used operators: a simple gamma-based operator gamma, a global logarithmic tone-mapping operator [4] drago03, a global version of the photographic operator [10] reinhard02, an operator optimized for encoding [11] mai11, and a local operator with strong contrast enhancement [12] mantiuk06. To fully understand the implications of the tone-mapping operators and JPEG XT parameters, all possible combinations of these parameters were tested. We used the combination of 10 base quality levels × 10 extension layer quality levels × 5 TMOs × 3 profiles, which results in a total of 1 500 conditions for each of the 106 images, resulting in 159 000 tests. However, such a large number of conditions clearly cannot be tested in a subjective experiment. Therefore, from the total of 106 HDR images, a subset of 20 images was selected by experts for subjective evaluations, and these images were adjusted for viewing on a SIM2 HDR monitor. Please refer to [13] for more details on the subjective evaluations.
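
The arithmetic behind these numbers is a plain Cartesian product over the encoding parameters; a minimal sketch of the enumeration (identifiers are illustrative):

```python
from itertools import product

base_q = range(10)                 # 10 base layer quality levels
ext_q = range(10)                  # 10 extension layer quality levels
tmos = ["gamma", "drago03", "reinhard02", "mai11", "mantiuk06"]
profiles = ["A", "B", "C"]
n_images = 106

conditions = list(product(base_q, ext_q, tmos, profiles))
assert len(conditions) == 1500     # 10 * 10 * 5 * 3
print(len(conditions) * n_images)  # 159000 encodings in total
```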

Benchmarking of objective quality metrics for HDR image quality assessment

In addition to the acquisition and display technologies, JPEG has been standardizing new codecs for HDR content. JPEG XT is a recent standard for JPEG backward-compatible compression of HDR images [6]. Using this compression standard, HDR images are coded in two layers. A tone-mapped version of the HDR image is encoded using the legacy JPEG format in a base layer, and the extra HDR information is encoded in a residual layer. The advantage of this layered scheme is that any conventional JPEG decoder can extract the tone-mapped image, keeping backward compatibility and allowing for display on a conventional LDR monitor. Furthermore, a JPEG XT compliant decoder can use the residual layer to reconstruct a lossy or even lossless version of the HDR image. Currently, JPEG XT defines four profiles (A, B, C, and D) for HDR image compression, of which profile D is a very simple entry-level decoder that roughly uses the 12-bit mode of JPEG. Profiles A, B, and C all take into account the non-linearity of the human visual system. They essentially differ in the strategy used for creating the residual information and in the pre- and post-processing techniques. In profile A, the residual is represented as a ratio of the luminance of the HDR image and the tone-mapped image after inverse gamma correction. The residual is log-encoded and compressed as an 8-bit greyscale image [7]. In profile B, the image is split into "overexposed" areas and LDR areas. The extension image is represented as a ratio of the HDR image and the tone-mapped image, after inverse gamma correction. Note that instead of a ratio, profile B uses a difference of logarithms. Finally, profile C computes the residual image as a ratio of the HDR image and the inverse tone-mapped image. Unlike the other profiles, the inverse TMO is not a simple inverse gamma, but rather a global approximation of the inverse of the (possibly local) TMO that was used to generate the base-layer image. Similarly to profile B, the ratio is implemented as a difference of logarithms. However, instead of using the exact mathematical log operation, profile C uses a piecewise linear approximation, defined by re-interpreting the bit pattern of the half-logarithmic IEEE representation of floating-point numbers as integers, which is exactly invertible [8]. MPEG is also starting a new standardization effort on HDR video [9], revealing the growing importance of HDR technologies.
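
The profile C device described above, a piecewise-linear and exactly invertible stand-in for the logarithm obtained by reinterpreting float bit patterns as integers, can be illustrated in a few lines. This is a sketch of the underlying idea using half-precision floats and numpy, not the normative JPEG XT bitstream operations:

```python
import numpy as np

def pseudo_log(x):
    # Reinterpreting the IEEE half-float bit pattern as an integer yields a
    # piecewise-linear approximation of log2: the exponent field supplies
    # the integer part, the mantissa a linear interpolation in between.
    return np.asarray(x, dtype=np.float16).view(np.uint16).astype(np.int32)

def pseudo_exp(i):
    # The mapping is exactly invertible: put the bits back.
    return np.asarray(i, dtype=np.int32).astype(np.uint16).view(np.float16)

hdr = np.array([0.25, 1.0, 7.5], dtype=np.float32)           # HDR luminance
inv_tmo_base = np.array([0.20, 1.1, 6.0], dtype=np.float32)  # inverse-TMO base

# Profile C style residual: a "difference of logarithms" in the pseudo-log
# domain; adding it back and inverting recovers hdr exactly (at float16).
residual = pseudo_log(hdr) - pseudo_log(inv_tmo_base)
restored = pseudo_exp(pseudo_log(inv_tmo_base) + residual)
print(residual, restored)
```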

Perceived dynamic range of HDR images

This paper has three main contributions. First, we construct a subjectively annotated dataset with perceived dynamic range, using complex stimuli and HDR viewing conditions (using an HDR display). This database is available upon request from the authors. Second, given the lack of standardized approaches to measuring this kind of perceptual attribute, we propose a novel test methodology for gauging perceived dynamic range, which is somewhat inspired by the subjective assessment methodology for video quality (SAMVIQ) [10]. Third, based on the results of the study, we analyze the correlations between mean opinion scores (MOS) and three image features, i.e., pixel-based dynamic range, image key, and spatial information. The rest of the paper is organized as follows. In Section II, related work is discussed, followed by the details of the experimental design (Section III). The obtained subjective scores are analysed and compared with the objective metrics in Section IV. Finally, the results are further discussed (Section V) and several conclusions are drawn in Section VI.
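
For reference, the three image features have simple, commonly used formulations; the sketch below shows plausible definitions (the paper's exact formulations may differ, e.g. robust percentiles instead of min/max):

```python
import numpy as np
from scipy.ndimage import sobel

def features(lum):
    """lum: 2-D array of absolute luminance values (cd/m^2), all > 0."""
    log_l = np.log10(lum)
    # Pixel-based dynamic range: log ratio of brightest to darkest pixel.
    dr = log_l.max() - log_l.min()
    # Image key: where the mean log-luminance sits in the range (0 dark, 1 bright).
    key = (log_l.mean() - log_l.min()) / (log_l.max() - log_l.min())
    # Spatial information (ITU-T P.910 style): std of Sobel gradient magnitude.
    si = np.hypot(sobel(lum, axis=0), sobel(lum, axis=1)).std()
    return dr, key, si

print(features(np.random.uniform(0.1, 1000.0, (64, 64))))
```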

Nonparametric Quality Assessment of Natural Images

Databases: The performance of an NR-IQA algorithm is usually evaluated using subjective image databases. There are several established subjective image evaluation databases within the IQA research area. In this work, two publicly available databases are utilized: LIVE [16] and CSIQ [17]. The LIVE database is probably the most widely used database for evaluating the performance of IQA algorithms. It consists of 29 undistorted reference images. Each of these reference images is subjected to 5 to 6 degradation levels in five different distortion types: JPEG2000 compression (JP2K), JPEG compression (JPEG), additive white noise (WN), Gaussian blur (GB), and a simulated fast fading channel (FF), yielding a total of 779 distorted images. These distorted images are provided with DMOS values in the range between 0 and 100. Meanwhile, the CSIQ database is composed of 866 distorted images, generated by applying a total of 6 different types of distortions to 30 reference images at 4 to 5 levels. In contrast to the LIVE database, each distorted image is assigned a DMOS value in the range between 0 and 1. In both databases, an image with a lower distortion level is assigned a lower DMOS value.
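
Since the two databases report DMOS on different scales (0-100 for LIVE, 0-1 for CSIQ, lower meaning better in both), cross-database experiments typically rescale the scores to a common range first; a trivial sketch of such a normalization (my own convention, not part of either database):

```python
import numpy as np

def normalize_dmos(dmos, lo, hi):
    # Map DMOS to [0, 1] so LIVE (0-100) and CSIQ (0-1) scores are comparable.
    return (np.asarray(dmos, dtype=float) - lo) / (hi - lo)

live_like = normalize_dmos([25.0, 60.0, 90.0], lo=0.0, hi=100.0)
csiq_like = normalize_dmos([0.25, 0.60, 0.90], lo=0.0, hi=1.0)
print(live_like, csiq_like)
```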

Enhancement of JPEG Compression for GPS Images

The memory limitations of GPS devices [31-32] make compression essential [33-34]. A JPEG-based compression technique for GPS images with a higher compression ratio has been introduced. This paper has shown that JPEG's assumption that the average color at the beginning of each line of blocks is quite similar to the average color at the end of the preceding line of blocks is incorrect for most GPS images. Changing the block order of the JPEG compression algorithm can therefore reduce the size of the compressed file. The results are encouraging. The differences between the values along each line of blocks can be dramatically decreased and, as a result, much smaller compressed images can be obtained.
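
The remedy implied by this observation is to scan the blocks in a serpentine (boustrophedon) order, so that JPEG's differential DC coding never jumps from the end of one line of blocks back to the start of the next. The sketch below is my illustration of that reordering, not the authors' exact algorithm:

```python
def serpentine_block_order(rows, cols):
    """Scan order over a rows x cols grid of 8x8 blocks: left-to-right on
    even lines, right-to-left on odd lines, so consecutive blocks in the
    scan are always spatial neighbours and the differentially coded DC
    values change smoothly instead of jumping at line boundaries."""
    order = []
    for r in range(rows):
        cs = range(cols) if r % 2 == 0 else range(cols - 1, -1, -1)
        order.extend((r, c) for c in cs)
    return order

print(serpentine_block_order(3, 4))
```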

Local MAP estimation for quality improvement of compressed color images

Milkdrop, Peppers, Mandrill), which are shown in Fig. 2. These images are 256 × 256 pixels in size and 24-bit per pixel (bpp) full-color images. The proposed restoration algorithm was applied to JPEG compressed color images and JPEG2000 compressed ones. Our previous algorithm [8] and


Quality Assessment of Stereoscopic Images

Finally, we have pointed out that the 3D quality assessment method based on the use of the 2D C4 metric is as efficient as the enhanced SSIM with the local disparity distortion measure introduced in this paper, but has a higher computational cost. In this paper, we proposed an approach involving 2D quality metrics while taking into account the stereo disparity information; this can be considered the final limit of the conventional 2D approaches. It is worth pointing out that dealing with stereo data introduces a new perspective; in fact, instead of quality assessment we should refer to quality of experience. Indeed, since 3D involves new perception factors such as the feeling of immersion and presence [14], image quality is no longer sufficient to represent the quality of the experience of an observer immersed in a stereo environment. It is therefore necessary to build a new setup which takes into account all the factors related to 3D. A first attempt was made in [16], where image quality contributes, together with depth information, to a more global "naturalness" model, which in turn contributes to a main "3D visual experience" model. But the impact of depth and visual comfort remains to be investigated. New test setups have to be defined to identify all the factors related to the 3D visual experience.

Visual attention in LDR and HDR images

…similar to FDMs of LDR images. The applied similarity metric demonstrated that these clusters are dissimilar in a statistically significant way. However, the similarity scores for clusters (i) and (ii) are not as small, compared to cluster (iii), as was expected, which means the metric did not capture the difference between FDMs adequately. Therefore, the impact of HDR on human visual attention is scene-dependent, and it is hard to measure it using existing metrics. Future work will focus on finding an automated way to classify scenes for a better understanding of the influence of HDR on visual attention. Different metrics of visual attention need to be investigated to identify the metric that captures the differences in visual attention patterns caused by HDR. The impact of HDR imaging on computational models of visual saliency will also be considered.

Crowdsourcing Subjective Quality Assessment of Multimedia Content

…itself. Moreover, the research can focus more on the subjects' expectations and reactions to quality changes over longer periods of time.

3.1.6 Comparison

Each test method has its own set of advantages, and choosing which methodology to use for an assessment study may not be as straightforward as one might imagine. An important issue in choosing a test method is the fundamental difference between methods that use explicit references (e.g. DCR) and methods that do not use any explicit reference (e.g. ACR, PC and SSCQE) [30, 31]. The latter do not test fidelity with regard to a source sequence, which is often important in the evaluation of high quality systems [30]. In this case, when the viewer's detection of impairment is an important factor, the DCR method is recommended. ACR is simple and fast to implement, and the presentation of the stimuli is similar to that of the common use of the systems under test. Thus, ACR is well suited for qualification tests [30, 31]. The PC test method takes advantage of a simple comparative judgement task in which to prioritise a set of stimuli. Because of its high discriminatory power, it is particularly valuable when several of the test items are nearly equal in quality [30]. However, the larger the number of items in the test, the more time consuming this procedure becomes, which may be an inconvenience in some cases. The methodologies that consider long-duration sequences (e.g. SSCQE and QELDAC) are obviously better suited to situations where sequences of a longer duration need to be assessed. The two methods vary slightly in what the final outcome of the assessment study is. SSCQE would be used when the preferred outcome is a score based on the perceived quality at certain intervals throughout the test item [30]. However, QELDAC may be better in situations where the researcher would like to know what quality level is acceptable for a potential user [3].

AN ADAPTIVE POWER ALLOCATION SCHEME FOR ROBUST TRANSMISSION OF JPEG COMPRESSED IMAGES OVER MIMO-OFDM SYSTEMS

The main challenge faced by future wireless communications, especially WLANs, is to provide high data rate wireless access at a high quality of service. With constraints on the total available bandwidth and total transmit power, it is possible to overcome this challenge by employing MIMO-OFDM. Exploiting the rich scattering nature of indoor environments, MIMO systems help in increasing the data rate linearly with the number of transmit antennas and improve spectral efficiency [1],[2]. There are two modes of employing MIMO: spatial multiplexing and diversity. The spatial multiplexing mode is aimed at transmitting independent data through each transmit antenna, thereby increasing the data transmission rate. In diversity mode, the same data is transmitted through more than one antenna, thus increasing the chances of the transmitted data reaching the receiver correctly. Current WLAN standards like IEEE 802.11 a/g achieve data rates of up to 54 Mbps [3],[4]. With a 4x4 MIMO system, it is possible to boost the maximum raw data rate from 54 Mbps to more than 200 Mbps [6],[11], since spatial multiplexing scales the raw rate roughly linearly with the number of antennas (4 × 54 Mbps ≈ 216 Mbps).

The Effective of Image Retrieval in Jpeg Compressed Domain

A method proposed by Clymer and Bhatia [9] organizes the DCT coefficients of an image into a quad-tree structure. This way, the system can use the coefficients at the nodes of the quad-tree as image features. However, although such a retrieval system can effectively extract features from DCT coefficients, the main weakness of this method is that the computation of the distances between images grows undesirably fast when the number of relevant images is immense or the threshold value is large.


Face Recognition of Database of Compressed Images using Local Binary Patterns

Both of these problems can be addressed with the single technique used in this research work. To address the problem of reducing the file size, a regular transformation technique can be used to reduce the frame size of the image; in this way, the size of the image is reduced. However, the image needs to be reconstructed back to its original size for face recognition. Since the quality of the image is altered when it is reconstructed from a lower frame size to a higher one, this may affect face recognition performance. In this work it is shown that the face recognition technique succeeds in recognizing faces even if the images are compressed to 10% of their original size. To address the second issue, the reconstructed images can be stored on the computers held by security agencies. These reconstructed images are not of good quality, being blurred to the maximum extent possible when reconstructed from 10% of the size back to the original size.
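
A sketch of the pipeline described above: shrink the frame (the "compression"), scale it back up (the reconstruction, where quality is lost), then extract Local Binary Pattern features for recognition. It assumes OpenCV for resizing and scikit-image for LBP; the filename, per-axis scale factor, and LBP parameters are illustrative assumptions, not the paper's exact settings:

```python
import cv2
import numpy as np
from skimage.feature import local_binary_pattern

# Hypothetical input; "face.png" is a placeholder filename.
img = cv2.imread("face.png", cv2.IMREAD_GRAYSCALE)
h, w = img.shape

# "Compression": shrink the frame to 10% of its original dimensions
# (one reading of "10% of its original size").
small = cv2.resize(img, None, fx=0.1, fy=0.1, interpolation=cv2.INTER_AREA)

# Reconstruction: scale back up to the original frame size; this is the
# step that blurs the image and degrades quality.
restored = cv2.resize(small, (w, h), interpolation=cv2.INTER_CUBIC)

# LBP histogram as the face descriptor to be matched against the gallery.
lbp = local_binary_pattern(restored, P=8, R=1, method="uniform")
hist, _ = np.histogram(lbp, bins=int(lbp.max()) + 1, density=True)
```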

A model of perceived dynamic range for HDR images

…the perception of such visual attributes. Therefore, in the future, we would like to further investigate this topic by looking at these and other perceptual factors that could be involved in this process at a lower level. Furthermore, the existing objective metrics have to be redesigned and, possibly, novel ones developed, targeted directly at HDR content. While this is beyond the scope of this paper, it was evident that a gap exists in this area. Another direction in which we would like to expand this work is the analysis of aesthetic attributes. Finally, we are interested in extending this work to video content and investigating the temporal aspects. In the case of video, the perceptual phenomena behind the perception of dynamic range can be more complex. While, for a static image, the luminance range reproducible by an HDR display matches the steady-state dynamic range of the HVS, temporal variations of this range, e.g., due to a change from a bright to a dark scene, can span a much broader interval of luminance than the HVS can process at a given adaptation level, causing maladaptation phenomena and visual discomfort [37]. It is known that light/dark adaptation is not instantaneous, which results in higher masking for larger temporal variations of the luminance range. This entails a loss of contrast sensitivity in the maladaptation phase, but could enhance the overall perception of bright-dark differences over short time segments. Therefore, initial studies will be conducted with a similar methodology, using short clips, and the scores will be correlated with dynamic range models similar to those discussed in this work.

Adjustable compression method for still JPEG images

As mentioned previously, in some applications it may be convenient to adjust the size and quality of the images to the operating context. As an example of the operation of the proposed system depicted in the previous figure, and of adjustable compression applications, a "real-time" navigation system can be considered. In this application it may be convenient to adjust the size and quality of the images to the operating context in order to maintain an acceptable service to the user. Nearly all cell phones on the market now include some sort of GPS capability, so it is common to run navigation apps on them. However, since there are multiple systems and multiple GPS manufacturers, the operating conditions will most likely not be the same for every one of them. The performance perceived by the user depends on both the cell phone's capabilities (e.g., processor, memory, battery) and the environment conditions (e.g., satellite coverage, user speed). Thus, real-time features can be included in the system to provide a homogeneous quality of service. The "Environment Analysis Module" checks the system and environment conditions and deduces from them the 'T' constraint for the "Time control unit". In this way, several scenarios could occur: for high-speed user movement that requires frequent map-image refresh, a variable T time can be set to allow the system to compute the maps in accordance with the system, user speed, and the necessary refresh rate; for low system capabilities or weak satellite coverage, the image processing can be done with less data in order to maintain response times; or, finally, low-battery modes of the system can make it advisable to set an appropriate T time in order to show the images with the least possible computation.
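
As an illustration of how the "Environment Analysis Module" might deduce the 'T' constraint from the scenarios just listed, here is a hypothetical decision rule; the thresholds and values are invented for illustration, as the paper does not specify them:

```python
def choose_T(speed_kmh, battery_pct, satellites_visible):
    """Return the refresh-period constraint T (in seconds) for the
    "Time control unit", based on system and environment conditions."""
    if battery_pct < 15:
        return 10.0   # low-battery mode: refresh rarely, compute less
    if satellites_visible < 4:
        return 5.0    # weak satellite coverage: positions are coarse anyway
    if speed_kmh > 80:
        return 1.0    # high-speed movement: frequent map-image refresh
    return 3.0        # default trade-off of quality and responsiveness
```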

Subjective and objective quality evaluation of synthetic and high dynamic range images

Recent years have seen tremendous growth in the acquisition, transmission, and storage of digital visual data [1]. With the proliferation of hand-held smart devices, the exponential increase in the amount of mobile image/video traffic will likely continue in the upcoming years. Some of the popular applications of visual data are streaming websites like YouTube, High Definition TVs, video-on-demand services like Hulu and Netflix, Digital Cinema, etc. On average, 350M photos are uploaded to Facebook every day, and YouTube has over a billion users, roughly one-third of all internet users. Apart from the images and videos captured by optical cameras, visual data traffic also comprises computer graphics generated content, such as that in animated movies and video games. The genre of massively multi-player online gaming has 23.4M subscribers worldwide. In addition, the fusion of natural and synthetic content is becoming increasingly popular due to the widespread use of augmented reality applications (such as Google Glass).

MODIFIED POINTWISE SHAPE-ADAPTIVE DCT FOR HIGH-QUALITY DEBLOCKING OF COMPRESSED IMAGES

The use of a transform with a shape-adaptive support actually involves two separate problems: not only should the transform adapt to the shape (i.e. a shape-adaptive transform), but the shape itself must adapt to the image features (i.e. an adaptive shape). The first problem has found a very satisfactory solution in the SA-DCT transform [1, 2]. The second problem is essentially application-dependent. It must be noted that conventional segmentation (or local-segmentation) techniques which are employed for video processing (e.g. [10]) are not suitable for degraded (noisy, blurred, highly compressed, etc.) data. In this approach, the SA-DCT is used in conjunction with the anisotropic LPA-ICI technique [11, 12, 13]. The approach is based on a method originally developed for pointwise adaptive estimation of 1-D signals [12, 13]. The technique has been generalized for 2-D image processing, where adaptive-size quadrant windows have been used [16]. A significant improvement of this approach has been achieved on the basis of anisotropic directional estimation [7, 11]. Multidirectional sectorial-neighborhood estimates are calculated for every point. Thus, the estimator is anisotropic and the shape of its support adapts to the structures present in the image. In Fig. 1, some examples of these anisotropic neighborhoods for the Lena and Cameraman images are shown.

Quality Assessment of Post-Processed Images

Enhancement of acutance is much more suitable for the sharpening of images. In general, there are two possible approaches to acutance enhancement. The first group is formed by algorithms that try to decrease the edge transition slope length. This length is defined as the distance between the minimum and maximum image intensity values in the neighborhood of the edge. A simplified example of edge profiles with such an enhancement is shown in Figure 7.2 (a). This is applicable particularly well for the restoration of heavily blurred images, where the second approach fails completely. One of the methods, developed by Arad and Gotsman [169], uses image-dependent warping. Like most of the other techniques, it also has problems with noise amplification and compression artifacts. Shavemaker et al. [170] proposed a different approach based on morphological filtering. It takes the intensity values in the slope and substitutes half of them with the minimum and the other half with the maximum value found in the slope, thus creating an ideal step edge (like the one in Figure 7.1 (a)). This is, however, not very convenient, because such edges look unnatural. An augmentation of this method for enlarged images can be found in [171] or [172], where instead of creating the step edge, the values are substituted by some kind of contrast stretching function between the minimum and maximum values in the slope.
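
The morphological substitution rule attributed to Shavemaker et al. [170] is easy to state in code: within the slope of an edge profile, replace half of the samples with the slope minimum and the other half with the slope maximum, producing an ideal step. A 1-D sketch of that rule as I read it, not the authors' implementation:

```python
import numpy as np

def step_sharpen(profile, lo, hi):
    """Replace the slope samples between indices lo..hi (inclusive) with an
    ideal step: first half -> slope minimum, second half -> slope maximum."""
    out = profile.astype(float).copy()
    slope = out[lo:hi + 1]
    vmin, vmax = slope.min(), slope.max()
    half = len(slope) // 2
    out[lo:lo + half] = vmin
    out[lo + half:hi + 1] = vmax
    return out

edge = np.array([10, 10, 12, 20, 35, 55, 70, 78, 80, 80], dtype=float)
print(step_sharpen(edge, lo=2, hi=7))
# -> [10. 10. 12. 12. 12. 78. 78. 78. 80. 80.]
```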

Subjective and objective quality assessment for volumetric video compression

Volumetric video is becoming easier to capture and display with recent technical developments in acquisition and display technologies. Using point clouds is a popular way to represent volumetric video for augmented or virtual reality applications. This representation, however, requires a large number of points to achieve a high quality of experience and needs compression before storage and transmission. In this paper, we study subjective and objective quality assessment results for volumetric video compression, using a state-of-the-art compression algorithm: MPEG Point Cloud Compression Test Model Category 2 (TMC2). We conduct subjective experiments to find the perceptual impact on compressed volumetric video with different quantization parameters and point counts. Additionally, we find the relationship between state-of-the-art objective quality metrics and the acquired subjective quality assessment results. To the best of our knowledge, this study is the first to consider TMC2 compression for volumetric video represented as coloured point clouds and to study its effects on perceived quality. The results show that the effect of the input point count on TMC2 compression is not meaningful, and that some geometry distortion metrics disagree with the perceived quality. The developed database is publicly available to promote the study of volumetric video compression.
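
One family of the geometry distortion metrics in question, point-to-point (D1) error, reduces to a symmetric nearest-neighbour MSE between the reference and degraded clouds; a sketch assuming scipy, not the exact MPEG evaluation software:

```python
import numpy as np
from scipy.spatial import cKDTree

def d1_mse(ref, deg):
    """Symmetric point-to-point MSE between two (N, 3) point clouds: each
    direction averages squared nearest-neighbour distances, and the worse
    (larger) of the two directions is reported."""
    d_rd, _ = cKDTree(deg).query(ref)   # ref -> nearest point in deg
    d_dr, _ = cKDTree(ref).query(deg)   # deg -> nearest point in ref
    return max(float((d_rd ** 2).mean()), float((d_dr ** 2).mean()))

ref = np.random.rand(1000, 3)
deg = ref + np.random.normal(0.0, 0.01, ref.shape)  # simulated distortion
print(d1_mse(ref, deg))
```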
