Novel high-quality variational image decomposition models and their applications

After a brief overview of the use of partial differential equations in image processing, which has become very widespread in recent years, a novel image decomposition model called Impr…

Introducing anisotropic tensor to high order variational model for image restoration

Second order total variation (SOTV) models have advantages for image restoration over their first order counterparts, including their ability to remove the staircase artefact in the restored image. However, such models tend to blur the reconstructed image when discretised for numerical solution [1–5]. To overcome this drawback, we introduce a new tensor weighted second order (TWSO) model for image restoration. Specifically, we develop a novel regulariser for the SOTV model that uses the Frobenius norm of the product of the isotropic SOTV Hessian matrix and an anisotropic tensor. We then adapt the alternating direction method of multipliers (ADMM) to solve the proposed model by breaking down the original problem into several subproblems. All the subproblems have closed-form solutions and can be solved efficiently. The proposed method is compared with state-of-the-art approaches such as tensor-based anisotropic diffusion, total generalised variation, and Euler's elastica. We validate the proposed TWSO model using extensive experimental results on a large number of images from the Berkeley BSDS500. We also demonstrate that our method effectively reduces both the staircase and blurring effects and outperforms existing approaches for image inpainting and denoising applications.
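
The abstract does not give the discrete form of the regulariser, so the following is only a minimal NumPy sketch of how a tensor-weighted second-order penalty of this kind could be evaluated. The finite-difference Hessian and the simple edge-derived tensor below are illustrative assumptions, not the authors' exact construction.

```python
import numpy as np

def hessian_entries(u):
    """Finite-difference second derivatives of a 2-D image u."""
    ux, uy = np.gradient(u)
    uxx, uxy = np.gradient(ux)
    _,   uyy = np.gradient(uy)
    return uxx, uxy, uyy

def twso_penalty(u, tau=0.1):
    """Illustrative tensor-weighted second-order penalty.

    A toy anisotropic tensor T is built from the image gradient so that the
    second-order penalty is reduced across strong edges; the Frobenius norm
    of T @ H is then summed over the image (a simplification of the TWSO
    regulariser described in the abstract).
    """
    gx, gy = np.gradient(u)
    w = 1.0 / (1.0 + (gx**2 + gy**2) / tau**2)   # small weight across edges
    uxx, uxy, uyy = hessian_entries(u)
    penalty = 0.0
    for i in range(u.shape[0]):
        for j in range(u.shape[1]):
            H = np.array([[uxx[i, j], uxy[i, j]],
                          [uxy[i, j], uyy[i, j]]])
            T = np.array([[w[i, j], 0.0],
                          [0.0,     1.0]])       # toy anisotropic tensor
            penalty += np.linalg.norm(T @ H, 'fro')
    return penalty

if __name__ == "__main__":
    u = np.random.rand(32, 32)
    print("TWSO-style penalty:", twso_penalty(u))
```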

Variational models and algorithms for blind image deconvolution with applications

Figure 1.1: Examples of Colour Fundus images of varying quality, from excellent, where blood vessels and other details can clearly be seen, to inadequate, where most of the detail is not visible. Such inadequate images cannot be used for diagnosis or screening. Blurring in retinal images leads to a substantial number of unnecessary referrals and in turn a waste of valuable hospital resources. In a current programme which sees three million diabetic patients undergo annual photographic screening, approximately 10% of the images acquired (by high-resolution digital cameras) are considered to be too blurred for assessment and so ungradeable due to inadequate clarity or poor field definition. This proportion of inadequate scans is typical of retinal imaging and may result in further referrals or even misdiagnosis. Such visually ungradeable images are more likely to come from patients who have reached an advanced stage of Retinopathy. Blurring of images is due to many factors such as motion of the camera or the target scene, defocusing of the lens system, imperfections in the electronic, photographic, or transmission medium, and obstructions. In retinal imaging, there are many contributing factors influencing the quality of the received scan, including patient-related factors such as eye movement and the age of the patient. Those who are particularly young or old find it difficult to keep the eye still during the process, making it difficult to obtain an adequate scan. Advanced ocular diseases and other coexisting conditions such as Parkinson's disease also make it difficult for light to pass through the eye and can cause blur. Refractive error, difficulty maintaining careful focus, and the skill and experience of the photographer are also contributing factors.

Introducing oriented Laplacian diffusion into a variational decomposition model

The decomposition model proposed by Osher, Solé and Vese in 2003 (the OSV model) is known for its good denoising performance. This performance has been found to be due to its higher weighting of lower image frequencies in the H^{-1} norm modeling the noise component in the model. However, the OSV model tends to also move high-frequency texture into this noise component. Diffusion with an oriented Laplacian for oriented texture is introduced in this paper, in lieu of the usual Laplacian operator used to solve the OSV model, thereby significantly reducing the presence of such texture in the noise component. Results obtained from the proposed oriented Laplacian model for test images with oriented texture are given, and compared to those from the OSV model as well as the Mean Curvature model (MCM). In general, the proposed oriented Laplacian model yields higher signal-to-noise ratios and visually superior denoising results compared with either the OSV or MCM models. We also compare the proposed method to a non-local means model and find that although the proposed method generally yields slightly lower signal-to-noise ratios, it generally gives results of better perceptual visual quality.
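
As a rough illustration of the idea of replacing the isotropic Laplacian with an orientation-adapted one, the sketch below estimates a local texture orientation from the structure tensor and takes a second derivative mainly along that orientation. The smoothing weights and orientation estimate are illustrative assumptions, not the paper's exact scheme.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def oriented_laplacian(u, sigma=2.0, eps=0.1):
    """Second derivative taken mostly along the local texture orientation.

    The orientation is obtained from the smoothed structure tensor and
    rotated by 90 degrees so it points along the texture; 'eps' controls
    how much diffusion is allowed across it.
    """
    uy, ux = np.gradient(u)                      # axis 0 = y, axis 1 = x
    jxx = gaussian_filter(ux * ux, sigma)        # smoothed structure tensor
    jxy = gaussian_filter(ux * uy, sigma)
    jyy = gaussian_filter(uy * uy, sigma)
    theta = 0.5 * np.arctan2(2.0 * jxy, jxx - jyy) + np.pi / 2  # along texture
    c, s = np.cos(theta), np.sin(theta)
    uyy, _ = np.gradient(uy)                     # second derivatives
    uxy, uxx = np.gradient(ux)
    u_par  = c * c * uxx + 2 * c * s * uxy + s * s * uyy   # along texture
    u_perp = s * s * uxx - 2 * c * s * uxy + c * c * uyy   # across texture
    return u_par + eps * u_perp                  # oriented "Laplacian"

if __name__ == "__main__":
    img = np.random.rand(64, 64)
    out = img + 0.1 * oriented_laplacian(img)    # one explicit diffusion step
    print(out.shape)
```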

Variational Image Segmentation with Constraints

[45] are commonly used for this purpose. Aside from optimization methods, the design of new variational models also poses a challenge. With the recent surge in “big data”, the development of image segmentation algorithms has seen a shift in direction towards greater usage of data. The reason behind this trend is that a database of images contains a rich pool of prior knowledge. To draw an analogy, a human recognizes an object not only based on visual perception, but also on previously learned knowledge about that object. The way that variational models tap into prior knowledge is through prior constraints. A typical example is the use of prior shapes to segment similarly shaped objects in images [7, 40, 41]. Aside from shapes, priors can take on many other forms, such as probability maps in segmentation [21], noise distributions in denoising [20], etc. They can also be conveniently obtained through deep learning methods [46]. The works in this thesis explore two kinds of prior constraints in variational level set models for image segmentation, namely landmarks and topology.

Variational Bayes inference in high dimensional time varying parameter models

In this paper, we fill this gap in the literature by developing an iterative algorithm that can handle regressions with many time series observations and/or many predictors in the presence of time-varying parameters. We use variational Bayes (VB) methods which allow us to approximate the true high-dimensional posterior distribution in a simple and straightforward manner. The main idea behind VB methods is to approximate the high-dimensional and intractable posterior distribution using a simpler, tractable distribution. VB methods ensure that the approximation is good by minimizing the Kullback-Leibler distance between the true posterior and the proposed approximation. Following a large literature in physics and engineering where the mean field approximation was first developed, our proposed approximation to the posterior is decomposed into a series of simpler, independent densities that make inference scalable in high dimensions. We tackle computation by means of an optimization algorithm that has as output the first two moments of the posterior density and resembles the expectation-maximization (EM) algorithm, instead of relying on computationally intensive MCMC methods. The result is an algorithm that combines Kalman filter updates for time-varying coefficients and volatilities with trivial posterior updates of all other model parameters and, hence, we call it the Variational Bayes Kalman Filter (VBKF).
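
The abstract describes the VBKF only at a high level. The sketch below shows the Kalman-filter half of such an algorithm for a regression with random-walk time-varying coefficients, with fixed noise variances standing in for the variational updates of the remaining parameters; the function name and the simulated data are illustrative.

```python
import numpy as np

def kalman_tvp(y, X, q=1e-3, r=1.0):
    """Kalman filter for y_t = x_t' beta_t + e_t, beta_t = beta_{t-1} + u_t.

    q and r are the state and observation noise variances; in a full
    VBKF-style algorithm these would be updated by variational steps
    rather than held fixed.
    """
    T, p = X.shape
    beta = np.zeros(p)
    P = np.eye(p)
    betas = np.zeros((T, p))
    for t in range(T):
        P = P + q * np.eye(p)                 # predict
        x = X[t]
        S = x @ P @ x + r                     # innovation variance
        K = P @ x / S                         # Kalman gain
        beta = beta + K * (y[t] - x @ beta)   # update mean
        P = P - np.outer(K, x) @ P            # update covariance
        betas[t] = beta
    return betas

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    T, p = 200, 3
    X = rng.normal(size=(T, p))
    true_beta = np.cumsum(rng.normal(scale=0.05, size=(T, p)), axis=0)
    y = np.sum(X * true_beta, axis=1) + rng.normal(scale=0.5, size=T)
    est = kalman_tvp(y, X)
    print("final estimated coefficients:", est[-1].round(2))
```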

High-Dimensional Covariance Decomposition into Sparse Markov and Independence Models

A popular alternative sparse model, based on conditional independence relationships, has gained widespread acceptance in recent years (Lauritzen, 1996). In this case, sparsity is imposed not on the covariance matrix, but on the inverse covariance or the precision matrix. It can be shown that the zero pattern of the precision matrix corresponds to a set of conditional-independence relationships, and such models are referred to as graphical or Markov models. Going back to the stock market example, a first-order approximation is to model the companies in different divisions as conditionally independent given the S&P 100 index variable, which captures the overall trends of the stock returns, and thus removes much of the dependence between the companies in different divisions. High-dimensional estimation in models with sparse precision matrices has been widely studied, and guarantees for estimation have been provided under a set of sufficient conditions. See Section 1.2 for related works. However, sparse Markov models may not always be sufficient to capture all the statistical relationships among variables. Going back to the stock market example, the approximation of using the S&P index node to capture the dependence between companies of different divisions may not be enough. For instance, there can still be a large residual dependence between the companies in the manufacturing and mining divisions, which cannot be accounted for by the S&P index node.
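
To make the sparse-precision (Markov) part of this decomposition concrete, the sketch below estimates a sparse inverse covariance with scikit-learn's GraphicalLasso and reads off the implied conditional-independence pattern. The simulated data and the threshold are illustrative, and this is the standard Markov-model estimator rather than the paper's combined Markov-plus-independence method.

```python
import numpy as np
from sklearn.covariance import GraphicalLasso

rng = np.random.default_rng(0)
# simulate data from a sparse Gaussian graphical model (chain structure)
p, n = 5, 2000
prec_true = np.eye(p) + 0.4 * (np.eye(p, k=1) + np.eye(p, k=-1))
cov_true = np.linalg.inv(prec_true)
X = rng.multivariate_normal(np.zeros(p), cov_true, size=n)

model = GraphicalLasso(alpha=0.05).fit(X)
prec_hat = model.precision_

# zeros in the precision matrix <-> conditional independences
adj = (np.abs(prec_hat) > 1e-3).astype(int)
np.fill_diagonal(adj, 0)
print("estimated conditional-dependence graph:\n", adj)
```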

Image quality transfer and applications in diffusion MRI

compatibility of neighbouring patches, e.g. via a Markov random field or AutoContext (Tu and Bai, 2010), may help resolve remaining ambiguities. For example, close inspection of the 0.313 mm images in Fig. 12 reveals subtle artefacts, e.g. a blockiness to the image in the vicinity of the upward vertical arrow in the middle row, that arise when pushing the technique to its limit. Neighbourhood constraints and uncertainty evaluation can add further constraints to avoid such effects. All these candidate IQT implementations support many other important image reconstruction and analysis challenges, such as: synthesising other image contrasts, e.g. estimating T2-weighted images from T1 (Jog et al., 2015; Ye et al., 2013), to reduce acquisition time in clinical studies, or estimating X-ray CT images from MRI (Burgos et al., 2015), to avoid irradiating patients in cancer-therapy planning; learning mappings among imaging protocols to reduce confounds in multi-centre studies or studies that straddle scanner upgrades (Mirzaalian et al., 2016); or removing artefacts e.g. from signal drop-out or subject motion. The demonstrations we make here illustrate the potential of IQT on its own to bring the power of tomorrow's imaging techniques into today's clinical applications. However, even greater future potential lies in IQT's complementarity to rapid image acquisition strategies, such as compressed sensing (Lustig et al., 2007), MR fingerprinting (Ma et al., 2013), or simultaneous multislice (Moeller et al., 2010; Setsompop et al., 2012). The combination offers great promise in realising practical low-power MR, or other imaging, devices such as portable desktop, ambulance or battlefield MRI scanners (Cooley et al., 2015; Sarracanie et al., 2015), or in intra-operative imaging applications with a very tight acquisition-time budget, e.g. (Winston et al., 2012). Longer-term, these ideas support a future medical-imaging paradigm exploiting coupled design of a) bespoke high-powered devices to capture databases of high quality images, and b) widely deployed cheap and/or low-power devices designed specifically to exploit the rich information from (a).

Improving Quality of Image Using PCA and DSWT at Two Level Decomposition

ABSTRACT: The fast development of digital image processing has led to the growth of feature extraction from images, which in turn has led to the development of image fusion. The process of combining two different images into a new single image by retaining salient features from each image with extended information content is known as image fusion. Two approaches to image fusion are spatial fusion and transform fusion. The Discrete Wavelet Transform plays a vital role in image fusion since it minimizes structural distortions compared with various other transforms. Lack of shift invariance, poor directional selectivity, and the absence of phase information are the drawbacks of the Discrete Wavelet Transform. These drawbacks are overcome by the Stationary Wavelet Transform, the Dual Tree Complex Wavelet Transform, and Principal Component Analysis (PCA). An image resolution enhancement technique is proposed, based on interpolation of the high-frequency subband images obtained by the discrete wavelet transform (DWT), SWT, and PCA, together with the input image. The edges are enhanced by introducing an intermediate stage using the stationary wavelet transform (SWT). DWT is applied in order to decompose the input image into different subbands. With PCA, the high-frequency subbands as well as the input image are interpolated. The estimated high-frequency subbands are then modified using the high-frequency subbands obtained through SWT and DWT. All these subbands are then combined to generate a new high-resolution image by using the inverse DWT (IDWT), inverse SWT, and inverse PCA. The quantitative and visual results show the superiority of the proposed technique over conventional and state-of-the-art image resolution enhancement techniques, HE, and denoising using the MDBUTMF filter. Performance is also measured using parameters such as PSNR, MSE, normalized correlation, CoC, and elapsed time. The proposed PCA-with-DSWT technique yields better quality of visualization.
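
The pipeline in the abstract is easier to follow in code. The sketch below implements a simplified DWT-plus-SWT super-resolution step (decompose, interpolate the high-frequency subbands, correct them with SWT detail coefficients, reconstruct with the original image as the low-pass band), omitting the PCA stage; the wavelet choice, interpolation order, and image size are illustrative assumptions.

```python
import numpy as np
import pywt
from scipy.ndimage import zoom

def dwt_swt_upscale(img, wavelet="db1"):
    """Simplified DWT/SWT resolution enhancement (factor 2), no PCA stage."""
    # DWT: half-size approximation and detail subbands
    _, (cH, cV, cD) = pywt.dwt2(img, wavelet)
    # interpolate the DWT detail subbands back to the input size
    cH, cV, cD = (zoom(c, 2, order=1) for c in (cH, cV, cD))
    # SWT: full-size detail subbands used to correct the interpolated ones
    (_, (sH, sV, sD)), = pywt.swt2(img, wavelet, level=1)
    cH, cV, cD = cH + sH, cV + sV, cD + sD
    # IDWT with the original image as the low-frequency band -> 2x size
    return pywt.idwt2((img, (cH, cV, cD)), wavelet)

if __name__ == "__main__":
    img = np.random.rand(128, 128)
    hi = dwt_swt_upscale(img)
    print(img.shape, "->", hi.shape)   # (128, 128) -> (256, 256)
```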

Image Segmentation/Registration: a Variational Framework for 2-D and 3-D Applications

Video tracking, a popular topic of research, includes methods distinguished by the information they utilize: a) boundary-based [80, 81, 30] and b) region-based methods [82]. The former utilizes edge features, which may be obtained by segmentation, in the image sequence of interest. With the aid of segmentation, the edges thereby obtained may be used as pivots, and a prediction of the boundary locations in the next frame may be made. An active contour method is used, for example, in [81, 30] to drive the contours enclosing the object in each frame. The prediction of the contour location entails the calculation of the so-called optical flow [83, 84, 85, 86], or visual motion, along the image sequence. Optical flow refers to a 2-D vector field which characterizes the motion of each pixel across consecutive images. Besides the difference between two consecutive frames, the calculation of optical flow requires additional constraints, such as global smoothness, in order to obtain a reasonable solution [85]. Using the optical flow, we may propagate the active contours by the estimated motion to obtain a good prediction for the next frame, hence shortening the segmentation time for the next frame.
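
Since the paragraph invokes optical flow with a global smoothness constraint [85], i.e. the classical Horn-Schunck formulation, a minimal NumPy sketch of that iteration is given below; the derivative kernels, smoothness weight, and iteration count are illustrative choices.

```python
import numpy as np
from scipy.ndimage import convolve

def horn_schunck(im1, im2, alpha=1.0, n_iter=100):
    """Minimal Horn-Schunck optical flow between two grayscale frames."""
    im1, im2 = im1.astype(float), im2.astype(float)
    # spatial and temporal derivative estimates
    kx = np.array([[-1, 1], [-1, 1]]) * 0.25
    ky = np.array([[-1, -1], [1, 1]]) * 0.25
    kt = np.ones((2, 2)) * 0.25
    Ix = convolve(im1, kx) + convolve(im2, kx)
    Iy = convolve(im1, ky) + convolve(im2, ky)
    It = convolve(im2, kt) - convolve(im1, kt)
    # neighbourhood-average kernel used by the global smoothness term
    avg = np.array([[1, 2, 1], [2, 0, 2], [1, 2, 1]]) / 12.0
    u = np.zeros_like(im1)
    v = np.zeros_like(im1)
    for _ in range(n_iter):
        u_bar = convolve(u, avg)
        v_bar = convolve(v, avg)
        num = Ix * u_bar + Iy * v_bar + It
        den = alpha**2 + Ix**2 + Iy**2
        u = u_bar - Ix * num / den
        v = v_bar - Iy * num / den
    return u, v

if __name__ == "__main__":
    frame1 = np.random.rand(64, 64)
    frame2 = np.roll(frame1, 1, axis=1)      # synthetic 1-pixel shift
    u, v = horn_schunck(frame1, frame2)
    print("mean horizontal flow:", u.mean().round(2))
```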

Variational techniques for medical and image processing applications using generalized Gaussian distribution

In the first experiment, we choose an image (768 x 512) with two objects in the sky to demonstrate the capability of segmenting small objects against a large background (Fig. 4a). The goal is to cluster the image into two classes: the sky and the two birds. We set the number of components to K = 5. Comparing the outcomes for the K-means algorithm, GMM, and VGMM (Fig. 4c, Fig. 4d, Fig. 4e), there is substantial misclassification of the sky and of the space between the small object and the large object. Our method, VGGMM (Fig. 4f), is able to recognize the two birds and their components effectively. Compared with the other methods, the wings and the tail of the small bird (red square), and the big bird, are also shown in more detail.
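
As a point of reference for the clustering baselines mentioned (K-means and GMM), the sketch below segments an image's pixel intensities with scikit-learn's GaussianMixture. The synthetic image and the number of components are placeholders, and this is the plain GMM baseline, not the variational generalized-Gaussian model (VGGMM) proposed in the thesis.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def gmm_segment(img, n_components=5, seed=0):
    """Cluster pixel intensities with a plain Gaussian mixture model."""
    X = img.reshape(-1, 1).astype(float)
    gmm = GaussianMixture(n_components=n_components, random_state=seed).fit(X)
    return gmm.predict(X).reshape(img.shape)

if __name__ == "__main__":
    # synthetic "sky with two small dark objects" image
    rng = np.random.default_rng(0)
    img = 0.8 + 0.05 * rng.standard_normal((256, 384))
    img[100:110, 150:160] = 0.20    # small object 1
    img[180:200, 250:280] = 0.25    # small object 2
    seg = gmm_segment(img, n_components=2)
    print("pixels per class:", np.bincount(seg.ravel()))
```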

Variational theory and domain decomposition for nonlocal problems

To the best of the authors' knowledge, this article represents the first work on domain decomposition methods for nonlocal models. Our aim is to generalize iterative substructuring methods to a nonlocal setting and characterize the impact of nonlocality upon the scalability of these methods. To begin our analysis, we first develop a weak form for (1.1) in Section 3. The main theoretical construction for conditioning is in Section 4. We establish spectral equivalences to bound the condition numbers of the stiffness and Schur complement matrices. For that, we prove a nonlocal Poincaré inequality for the lower bound and a dimension-dependent estimate for the upper bound. This leads to the novel result that the condition number of the discrete nonlocal operator can be bounded independently of the mesh size. In Section 5, we construct a suitable nonlocal domain decomposition framework with special attention to transmission conditions. Then, we prove the equivalence of the boundary value problems corresponding to the single-domain and the two-domain decomposition. In Section 6, we first define a discrete energy minimizing extension, a nonlocal analog of the discrete harmonic extension in the local case, to study the conditioning of the Schur complement in the nonlocal setting. We discretize our two-domain weak form to arrive at a nonlocal Schur complement. We perform numerical studies to validate our theoretical results. Finally, in Section 7, we draw conclusions about conditioning and suggest future research directions for nonlocal domain decomposition methods.

Recursive Variational Mode Decomposition Algorithm for Real Time Power Signal Decomposition

reasoning for the same and arrived at similar techniques with slight variations and different interpretations. One such work by Daubechies [10] resulted in the synchrosqueezed wavelet transform, which proved to be a good decomposition technique but is heavily time-consuming, so a real-time realization of it is a challenge. Methods such as Variational Mode Decomposition (VMD) and the Empirical Wavelet Transform (EWT) [6], [1] then emerged, which also adopt an adaptive basis for signal representation. The letter [8] discusses the use of these methods for power signal analysis and shows that they are a good choice for the study of power signal distortions. Significant energy is present in only a very small number of frequency components in power signals, and this facilitates the easy decomposition of a power signal into Intrinsic Mode Functions (IMFs) as in VMD. Further, prior knowledge about the possible frequency components of power signals allows us to fix the modes in the decomposition of these signals, and thus facilitates easy identification of the presence of noise and other signal distortions and of the exact time of onset of these distortions. Further, the components responsible for distortion can be captured and hence analysed to identify the sources of these distortions. One major factor which prevents the real-time implementation of these methodologies is the heavy computation involved in these decompositions. This paper proposes a modification to the VMD algorithm by introducing the method of recursive FFT to estimate the Fourier transform as part of the VMD algorithm. In the real-time implementation of the algorithm, one new sample is introduced to the frame in each iteration to replace the oldest sample in that frame. Implementing the recursive FFT for the Fourier transform calculation in each stage saves considerable computation and hence results in an algorithm implementable in a real-time system.
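
The key computational trick described, updating the Fourier transform recursively as one new sample replaces the oldest in the frame, is the sliding DFT. A minimal NumPy sketch is below, checked against a full FFT; the frame length and test signal are illustrative.

```python
import numpy as np

def sliding_dft_update(X, x_old, x_new, N):
    """Update the length-N DFT when x_old leaves the frame and x_new enters.

    X_k_new = (X_k - x_old + x_new) * exp(j*2*pi*k/N), for each bin k.
    """
    k = np.arange(N)
    return (X - x_old + x_new) * np.exp(2j * np.pi * k / N)

if __name__ == "__main__":
    N = 64
    t = np.arange(N + 1) / N
    signal = np.sin(2 * np.pi * 5 * t) + 0.1 * np.sin(2 * np.pi * 13 * t)

    frame = signal[:N]
    X = np.fft.fft(frame)                 # transform of the initial frame
    # one new sample arrives; update recursively instead of recomputing
    X = sliding_dft_update(X, frame[0], signal[N], N)

    new_frame = signal[1:N + 1]
    assert np.allclose(X, np.fft.fft(new_frame))   # matches a full FFT
    print("recursive update matches direct FFT")
```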

Objective Stereo Image Quality Assessment Model based on Matrix Decomposition

Abstract—Stereo image quality assessment (SIQA) is a key issue in stereo image processing. Image pixels are strongly correlated and highly structured; since image quality mainly depends on the distortion of the image's structural information, an objective stereo image quality assessment (OSIQA) model based on matrix decomposition is proposed. Firstly, the concavity and convexity maps of the image are extracted through Hessian matrix decomposition, which reflects the complexity of the image, and the left-right image quality assessment (LR-IQA) value is obtained by judging the severity of loss in the concavity and convexity maps, using singular value decomposition of the left and right images. Secondly, eigenvalues and eigenvectors of the absolute difference map, i.e., the absolute difference between the left and right images of the stereo pair, are extracted. Eigenvalues can reflect the image energy in certain directions, and eigenvectors can reflect the directionality of the image. The depth perception quality assessment (DP-QA) value is obtained by calculating the degree of structural distortion in edge and non-edge regions. Finally, the OSIQA value is obtained through nonlinear fitting of the LR-IQA and DP-QA values. Experimental results show that the proposed OSIQA model has good consistency with subjective perception. The correlation coefficient and the Spearman rank-order correlation coefficient between the OSIQA model and subjective perception are above 0.92, and the root mean squared error is below 6.5.
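
To make the two ingredients concrete, the sketch below extracts a simple concavity/convexity map from Hessian eigenvalues and compares singular values of a reference and a distorted image. The combination into LR-IQA and DP-QA scores and all weights are not reproduced here; the functions and test data are illustrative.

```python
import numpy as np

def concavity_convexity_map(img):
    """Sign of the dominant Hessian eigenvalue per pixel (+1 convex, -1 concave)."""
    gy, gx = np.gradient(img)
    gyy, _ = np.gradient(gy)
    gxy, gxx = np.gradient(gx)
    # dominant eigenvalue of [[gxx, gxy], [gxy, gyy]] via the 2x2 closed form
    tr = gxx + gyy
    det = gxx * gyy - gxy * gxy
    lam_max = tr / 2 + np.sqrt(np.maximum(tr**2 / 4 - det, 0.0))
    return np.sign(lam_max)

def singular_value_distortion(ref, dist):
    """Mean absolute difference of singular values (a crude structural score)."""
    s_ref = np.linalg.svd(ref, compute_uv=False)
    s_dist = np.linalg.svd(dist, compute_uv=False)
    return float(np.mean(np.abs(s_ref - s_dist)))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    left = rng.random((64, 64))
    left_degraded = left + 0.1 * rng.standard_normal((64, 64))
    print("convex fraction:", (concavity_convexity_map(left) > 0).mean().round(2))
    print("SVD distortion score:", round(singular_value_distortion(left, left_degraded), 4))
```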

Automating the formulation and resolution of convex variational problems: applications from image processing to computational mechanics

class of situations concerns the case where J can be decomposed as the sum of a smooth and a non-smooth term. Such a situation arises in many variational models of image processing problems such as image denoising, inpainting, deconvolution, decomposition, etc. In some cases, such as limit analysis problems in mechanics for instance, smooth terms in J are absent so that numerical resolution of (1) becomes very challenging [27, 45]. Important problems in applied mathematics such as optimal control [36] or optimal transportation [9, 41, 42, 46] can also be formulated, in some circumstances, as convex variational problems. This is also the case for some classes of topology optimization problems [10], which can also be extended to non-convex problems involving integer optimization variables [28, 48]. Finally, robust optimization in which optimization is performed while taking into account uncertainty in the input data of (1) has been developed in the last decade [7, 8]. It leads, in some cases, to tractable optimization problems fitting the same framework, possibly with more complex constraints.
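
For the "smooth plus non-smooth" structure mentioned, a standard solver is the proximal gradient (ISTA) iteration. Below is a minimal sketch for the lasso-type instance J(u) = 0.5*||Au - b||^2 + gamma*||u||_1, which is an assumption chosen for illustration rather than a problem treated in the paper.

```python
import numpy as np

def ista(A, b, gamma=0.1, n_iter=500):
    """Proximal gradient descent for 0.5*||A u - b||^2 + gamma*||u||_1."""
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    step = 1.0 / L
    u = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ u - b)           # gradient of the smooth term
        z = u - step * grad
        # proximal operator of gamma*||.||_1 (soft-thresholding)
        u = np.sign(z) * np.maximum(np.abs(z) - step * gamma, 0.0)
    return u

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    A = rng.normal(size=(40, 100))
    u_true = np.zeros(100)
    u_true[[3, 17, 58]] = [1.0, -2.0, 1.5]  # sparse ground truth
    b = A @ u_true + 0.01 * rng.normal(size=40)
    u_hat = ista(A, b, gamma=0.05)
    print("recovered support:", np.flatnonzero(np.abs(u_hat) > 0.1))
```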

High performance high quality image demosaicing hardware designs

Since capturing three color channels (red (R), green (G), and blue (B)) per pixel increases the cost of digital cameras, most digital cameras capture only one color channel per pixel using a single image sensor. The images pass through a color filter array (CFA) before being captured by the image sensor. Several CFA patterns are shown in Figure 1.1. The Bayer pattern, shown in Figure 1.1 (a), is the most commonly used CFA pattern in digital cameras [1]. The Bayer pattern takes the human vision system's relatively higher sensitivity to green into account by sampling the green channel at twice the rate of the red and blue channels [1]. CFA interpolation, also known as demosaicing (or demosaicking), is the process of reconstructing the missing color channels of the pixels in the color filtered image using their available neighboring pixels. The demosaicing process is shown in Figure 1.2. There are many image demosaicing algorithms with varying reconstructed image quality and computational complexity.
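
A minimal reference for what demosaicing does is bilinear interpolation on an RGGB Bayer mosaic, sketched below. Real cameras and the hardware designs in the thesis use more sophisticated, edge-aware algorithms, and the RGGB layout assumed here is just one of the patterns in Figure 1.1.

```python
import numpy as np
from scipy.signal import convolve2d

def bilinear_demosaic(cfa):
    """Bilinear demosaicing of an RGGB Bayer mosaic (single-channel input)."""
    h, w = cfa.shape
    # sampling masks for an RGGB layout: R at (0,0), G at (0,1)/(1,0), B at (1,1)
    r_mask = np.zeros((h, w)); r_mask[0::2, 0::2] = 1
    b_mask = np.zeros((h, w)); b_mask[1::2, 1::2] = 1
    g_mask = 1 - r_mask - b_mask

    k_g = np.array([[0, 1, 0], [1, 4, 1], [0, 1, 0]]) / 4.0   # green kernel
    k_rb = np.array([[1, 2, 1], [2, 4, 2], [1, 2, 1]]) / 4.0  # red/blue kernel

    def interp(mask, kernel):
        # fill missing samples by averaging available neighbours
        return convolve2d(cfa * mask, kernel, mode="same", boundary="symm")

    return np.dstack([interp(r_mask, k_rb),
                      interp(g_mask, k_g),
                      interp(b_mask, k_rb)])

if __name__ == "__main__":
    mosaic = np.random.rand(64, 64)          # stand-in for a raw CFA image
    rgb = bilinear_demosaic(mosaic)
    print(rgb.shape)                          # (64, 64, 3)
```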

High-Dimensional Classification Models with Applications to Targeting

It is desirable that the results can be delivered as easily interpretable rules that can be used for directed marketing and creating business strategies. For example, individuals over fifty tend to be more interested in emails than the general user. These kinds of rules are useful to a company in a different and perhaps more important way than being told to target certain customers simply because the model says so [1, pp. 58–60]. Berry and Linoff illustrate this with the question “who is the yoghurt lover?”, referring to a supermarket setting where the task focused on finding out which customers were interested in buying yoghurt [1, p. 59]. The yoghurt lover is not someone described by a high score from the model, but perhaps someone described as a middle-aged woman living in the city. This information is useful to the company, which can then focus on advertising in areas where many middle-aged women live.

A Variational Approach to Hyperspectral Image Fusion

For future research, one could try to incorporate technical information about the satellite sensors into the variational framework, and other types of sensors and images could be used. To automate the sharpening process, a robust registration method is needed. To reduce the bleeding of colors over some of the edges, deblurring could be included in the sharpening method. Furthermore, this method could be combined with hyperspectral analysis methods like demixing to improve not only the spatial but also the spectral resolution of the image. New detection and classification methods especially suitable for sharpened images could be developed, which take the spectral as well as the spatial information into account.

Image denoising by a direct variational minimization

image, and λ is a Lagrange multiplier (see [11] or [13]). The role of λ as the trade-off between image smoothness and preservation of image features is lost: it becomes just a time step in the filtering process. The second problem related to the conventional PDE approach is that it is applied to the global image, so that local image features are not sufficiently taken into account. In recent times, in the fields of image analysis, processing and synthesis, patch-based techniques have emerged and met with success. Defined as local square neighborhoods of image pixels, patches are very simple objects to work with, but they have the intrinsic ability to capture large-scale structures and textures present in natural images. Some recent image denoising methods are patch-based, such as the “Non-Local Means” algorithm [14] and some of its derivatives [15,16]. In this work, we present a novel variational and at the same time patch-based image smoothing method, which combines the mathematical well-posedness of variational modeling with the efficiency of a patch-based approach. Moreover, the proposed method is based on the direct variational minimization of an appropriate energy functional, which (as in [10]) involves a fractional gradient. By doing so, we avoid the problems of finding the optimal stopping time and the optimal time step. The role of λ is retained, and the actual minimization is run until it converges (with respect to the predefined error bound of the particular optimization method). We note that the patch-based approach is also convenient for making the proposed direct variational method computationally feasible and applicable to real images. Actually, if working with the whole image, one
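
The contrast drawn here, minimizing the energy directly instead of evolving a PDE with an artificial time step, can be illustrated with a small smoothed-TV denoising energy handed to a generic optimizer. The energy below uses an ordinary (not fractional) gradient and is only a sketch of the idea, with λ, ε, and the image size chosen arbitrarily.

```python
import numpy as np
from scipy.optimize import minimize

def denoise_direct(f, lam=0.15, eps=1e-3):
    """Directly minimize E(u) = 0.5*||u-f||^2 + lam * sum sqrt(|grad u|^2 + eps)."""
    shape = f.shape

    def energy(u_flat):
        u = u_flat.reshape(shape)
        gy, gx = np.gradient(u)
        fidelity = 0.5 * np.sum((u - f) ** 2)
        smoothness = lam * np.sum(np.sqrt(gx**2 + gy**2 + eps))
        return fidelity + smoothness

    # finite-difference gradients keep the sketch short; an analytic gradient
    # would be supplied in practice. No artificial time step is involved.
    res = minimize(energy, f.ravel(), method="L-BFGS-B")
    return res.x.reshape(shape)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    clean = np.zeros((32, 32)); clean[8:24, 8:24] = 1.0     # simple square image
    noisy = clean + 0.2 * rng.standard_normal(clean.shape)
    denoised = denoise_direct(noisy)
    print("error std before/after:",
          round(float(np.std(noisy - clean)), 3),
          round(float(np.std(denoised - clean)), 3))
```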

Task Decomposition Exploration of Image Processing Applications on FPGA Based NoC

As a promising on-chip interconnection architecture, Network on Chip (NoC) provides effective connection between the variety of processing elements on a System on Chip (SoC). This modular and scalable architecture provides high-bandwidth, highly scalable, and low-complexity communication to SoC systems, especially for image processing applications with big data. To achieve optimal system performance, designers strive for efficiency in all parts of the NoC design. In particular, the energy consumption and latency penalty of on-chip communication have become extremely significant relative to other parts of the system. For an SoC system based on NoC, the load balance of the network is critical to enhancing the transmission capability of the NoC.