

While experimental design for well-posed inverse linear problems has been well studied, covering a vast range of well-established design criteria and optimization algorithms, its ill-posed counterpart is a rather new topic. The ill-posed nature of the problem entails the incorporation of regularization techniques. The consequent non-stochastic error introduced by regularization needs to be taken into account when choosing an experimental design. We discuss different ways to define an optimal design that controls both an average total error of regularized estimates and a measure of the total cost of the design. We also introduce a numerical framework that efficiently implements such designs and natively allows for the solution of large-scale problems. To illustrate the possible applications of the methodology, we consider a borehole tomography example and a two-dimensional function recovery problem.


ever, partitioning a polytope into mutually exclusive simplexes is a non-trivial operation. An efficient algorithm to produce such decompositions is needed. The moment-map sampler is a direct method that generalizes the construction of a Dirichlet distribution. However, the computation of the Jacobian of the moment map may be costly, and the distribution is difficult to control: the Jacobian may not be simple. Numerical methods to compute distributions involving a complicated Jacobian should alleviate this difficulty.

Straightforward solution of discrete ill-posed least-squares problems with error-contaminated data does not, in general, give meaningful results, because propagated error destroys the computed solution. Error propagation can be reduced by imposing constraints on the computed solution. A commonly used constraint is the discrepancy principle, which bounds the norm of the residual when applied in conjunction with Tikhonov regularization. Another approach, which recently has received considerable attention, is to explicitly impose a constraint on the norm of the computed solution. For instance, the computed solution may be required to have the same Euclidean norm as the unknown solution of the error-free least-squares problem. We compare these approaches and discuss numerical methods for their implementation, among them a new implementation of the Arnoldi–Tikhonov method. Solution methods which use both the discrepancy principle and a solution norm constraint are also considered.
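The interplay between Tikhonov regularization and the discrepancy principle can be sketched on a toy diagonal problem. All numbers below are made up for illustration (in SVD coordinates, any linear least-squares problem takes this diagonal form); this is a minimal sketch, not any paper's implementation:

```python
import math

# Toy discrete ill-posed problem in SVD coordinates: A = diag(s) with rapidly
# decaying singular values; b = s * x_exact + noise, with noise level delta.
s = [1.0, 0.5, 0.1, 0.01, 0.001]
x_exact = [1.0] * 5
noise = [1e-3, -1e-3, 1e-3, -1e-3, 1e-3]
b = [si * xi + ni for si, xi, ni in zip(s, x_exact, noise)]
delta = math.sqrt(sum(ni * ni for ni in noise))

def tikhonov(lam):
    """Tikhonov solution x_i = s_i b_i / (s_i^2 + lam^2) for diagonal A."""
    return [si * bi / (si * si + lam * lam) for si, bi in zip(s, b)]

def residual_norm(lam):
    """||b - A x_lam||; for Tikhonov this is an increasing function of lam."""
    x = tikhonov(lam)
    return math.sqrt(sum((bi - si * xi) ** 2 for si, bi, xi in zip(s, b, x)))

# Discrepancy principle: choose lam so that ||b - A x_lam|| = tau * delta,
# with tau slightly larger than 1. Monotonicity allows log-scale bisection.
tau = 1.1
lo, hi = 1e-10, 10.0
for _ in range(200):
    mid = math.sqrt(lo * hi)
    if residual_norm(mid) < tau * delta:
        lo = mid
    else:
        hi = mid
lam_dp = math.sqrt(lo * hi)
x_dp = tikhonov(lam_dp)
```

On this toy problem the regularized solution `x_dp` is far closer to `x_exact` than the naive solution `b_i / s_i`, whose error is dominated by noise divided by the smallest singular values.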


Iterative methods such as GMRES and LSQR pay particular attention to the residual vector r_k = b − A x_k, where x_k is the kth approximate solution of (1.1), and generate a decreasing sequence of residual norms. GMRES is a popular method [3] which is widely used for solving linear systems of equations. Several implementations of this method have been proposed for special goals, each with advantages and disadvantages. Here, some GMRES implementations for solving ill-posed linear problems are applied in order to determine which GMRES algorithms are applicable, and to what extent, for solving (nearly) singular problems, and which are not useful.
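The decreasing residual norms can be seen in a minimal, dependency-free full GMRES (no restarting) for a tiny dense system. The matrix, right-hand side, and the normal-equations solve of the projected least-squares problem are illustrative choices only, not any of the implementations compared in the text:

```python
import math

def matvec(A, v):
    return [sum(a * x for a, x in zip(row, v)) for row in A]

def solve_dense(M, rhs):
    """Gaussian elimination with partial pivoting (fine for tiny systems)."""
    n = len(rhs)
    M = [row[:] + [r] for row, r in zip(M, rhs)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            for k in range(c, n + 1):
                M[r][k] -= f * M[c][k]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][k] * x[k] for k in range(r + 1, n))) / M[r][r]
    return x

def gmres_residual_norms(A, b, iters):
    """Full GMRES from x0 = 0; returns ||b - A x_k|| for k = 1..iters."""
    beta = math.sqrt(sum(x * x for x in b))
    V = [[x / beta for x in b]]                    # Arnoldi basis
    H = [[0.0] * iters for _ in range(iters + 1)]  # upper Hessenberg matrix
    norms = []
    for j in range(iters):
        w = matvec(A, V[j])
        for i in range(j + 1):                     # modified Gram-Schmidt
            H[i][j] = sum(wi * vi for wi, vi in zip(w, V[i]))
            w = [wi - H[i][j] * vi for wi, vi in zip(w, V[i])]
        H[j + 1][j] = math.sqrt(sum(x * x for x in w))
        breakdown = H[j + 1][j] < 1e-14
        if not breakdown:
            V.append([x / H[j + 1][j] for x in w])
        # x_k = V_k y minimizes ||beta*e1 - Hbar_k y||; solve the small
        # projected least-squares problem via its normal equations.
        m, k = j + 2, j + 1
        Hk = [[H[r][c] for c in range(k)] for r in range(m)]
        g = [beta] + [0.0] * (m - 1)
        N = [[sum(Hk[r][a] * Hk[r][c] for r in range(m)) for c in range(k)]
             for a in range(k)]
        rhs = [sum(Hk[r][a] * g[r] for r in range(m)) for a in range(k)]
        y = solve_dense(N, rhs)
        res = [g[r] - sum(Hk[r][c] * y[c] for c in range(k)) for r in range(m)]
        norms.append(math.sqrt(sum(x * x for x in res)))
        if breakdown:
            break
    return norms

A = [[4.0, 1.0, 0.0], [1.0, 3.0, 1.0], [0.0, 1.0, 2.0]]
b = [1.0, 2.0, 3.0]
norms = gmres_residual_norms(A, b, 3)
```

For a well-conditioned 3 × 3 matrix the third step already reaches the exact solution; for (nearly) singular problems, as the text notes, the story is more delicate and implementation details matter.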


Clearly, the reconstructions are strongly dependent on the choice of the regularization parameters. There is a vast literature on choosing optimal regularization parameters for linear problems. However, there are, to our knowledge, no good methods for nonlinear problems like DOT. Hence, we chose the parameters ad hoc. That is, we used a computer cluster to run the algorithm with a large set of regularization-parameter choices, then evaluated the reconstructions and picked the visually best parameters for the TV and the mixed regularizations. Note that once this parameter was found, it was kept fixed for all reconstructions in figures 7.8-7.11.
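The ad hoc sweep described above amounts to a grid search over parameter pairs. A schematic sketch, with a hypothetical `reconstruction_score` standing in for the expensive solve-and-inspect step (in the text the actual evaluation was visual):

```python
import itertools
import math

def reconstruction_score(alpha_tv, alpha_mix):
    """Placeholder for running the reconstruction and scoring its quality;
    here a made-up smooth score with a minimum at (1e-2, 1e-3), purely
    for illustration of the search loop."""
    return (math.log10(alpha_tv) + 2) ** 2 + (math.log10(alpha_mix) + 3) ** 2

# Regularization parameters are usually swept on logarithmic grids.
tv_grid = [10.0 ** k for k in range(-4, 1)]     # 1e-4 ... 1e0
mix_grid = [10.0 ** k for k in range(-5, 0)]    # 1e-5 ... 1e-1
best = min(itertools.product(tv_grid, mix_grid),
           key=lambda p: reconstruction_score(*p))
```

Each grid point is an independent solver run, so this loop parallelizes trivially across a cluster, which matches the procedure described above.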


Abstract. In this paper, ill-posed linear inverse problems that arise in many applications are considered. The instability of a special kind of these problems, and its relation to the kernel, is described. For finding a stable solution to these problems, we need some kind of regularization, which is presented. The results have been applied to a singular equation

Abstract
Multilevel methods are popular for the solution of well-posed problems, such as certain boundary value problems for partial differential equations and Fredholm integral equations of the second kind. However, little is known about the behavior of multilevel methods when applied to the solution of linear ill-posed problems, such as Fredholm integral equations of the first kind, with a right-hand side that is contaminated by error. This paper shows that cascadic multilevel methods with a conjugate gradient-type method as the basic iterative scheme are regularization methods. The iterations are terminated by a stopping rule based on the discrepancy principle.
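The abstract's two ingredients, an iterative scheme plus a discrepancy-principle stopping rule, can be sketched on a single level with Landweber iteration standing in for the conjugate gradient-type method (a deliberate simplification; Landweber is not the paper's scheme, and all numbers are made up):

```python
import math

def norm(v):
    return math.sqrt(sum(x * x for x in v))

# Diagonal toy operator (its singular values) and noisy data.
s = [1.0, 0.5]
x_true = [1.0, 1.0]
noise = [1e-2, -1e-2]
b = [si * xi + ni for si, xi, ni in zip(s, x_true, noise)]
delta = norm(noise)

# Landweber iteration x_{k+1} = x_k + w * A^T (b - A x_k), stopped by the
# discrepancy principle ||b - A x_k|| <= tau * delta with tau > 1.
tau, w = 1.1, 1.0          # w < 2 / ||A||^2 guarantees convergence
x = [0.0, 0.0]
k = 0
while True:
    r = [bi - si * xi for si, bi, xi in zip(s, b, x)]
    if norm(r) <= tau * delta or k >= 1000:
        break
    x = [xi + w * si * ri for si, xi, ri in zip(s, x, r)]
    k += 1
```

Stopping when the residual first drops to the noise level is what makes the iteration a regularization method: iterating further would start fitting the noise (semi-convergence).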


In the past decade of research in the area of inverse and ill-posed problems, a great deal of attention has been devoted to regularization in the Banach space setting. The research on regularization methods in Banach spaces was driven by different mathematical viewpoints. On the one hand, there are various practical applications where models that use Hilbert spaces are not realistic or appropriate. Usually, in such applications sparse solutions of linear and nonlinear operator equations are to be determined, and models working in L^p spaces, non-Hilbertian Sobolev spaces or continuous function spaces are preferable. On the other hand, mathematical tools and techniques typical of Banach spaces can help to overcome the limitations of Hilbert models. In the monograph [82], a series of different applications ranging from non-destructive testing, such as X-ray diffractometry, via phase retrieval, to an inverse problem in finance are presented. All these applications can be


φ_i = σ_i²/(σ_i² + λ²) ∈ [0, 1] are the Tikhonov filter factors. As mentioned in the introduction, a variety of parameter choice methods can be used to determine λ. Here we just describe GCV, as preparation for the introduction of W-GCV in the next section. The basic idea of GCV is that a good choice of λ should predict missing values of the data. That is, if an arbitrary element of the observed data is left out, then the corresponding regularized solution should be able to predict the missing observation fairly well [4]. We leave out each data value b_j in turn and seek the value of λ that minimizes the
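The filter factors and the GCV function are easy to write down once the problem is expressed in SVD coordinates. A minimal sketch with made-up singular values and noise (the grid minimization is a simple stand-in for a proper 1-D minimizer):

```python
import math

# Singular values of a toy operator and noisy data coefficients u_i^T b;
# x_exact is all ones, the noise is +-1e-2 per entry (all values invented).
s = [1.0, 0.5, 0.1, 0.01, 0.001]
noise = [1e-2, -1e-2, 1e-2, -1e-2, 1e-2]
bt = [si + ni for si, ni in zip(s, noise)]     # u_i^T b = s_i * 1 + noise_i

def filter_factor(si, lam):
    """Tikhonov filter factor phi_i = sigma_i^2 / (sigma_i^2 + lambda^2)."""
    return si * si / (si * si + lam * lam)

def gcv(lam):
    """GCV function, up to a constant factor, for a square problem in SVD
    coordinates: ||b - A x_lam||^2 / trace(I - A A_lam)^2."""
    num = sum(((1.0 - filter_factor(si, lam)) * bi) ** 2
              for si, bi in zip(s, bt))
    den = sum(1.0 - filter_factor(si, lam) for si in s) ** 2
    return num / den

# Minimize over a logarithmic grid from 1e-6 to 10.
grid = [10.0 ** (k / 10.0) for k in range(-60, 11)]
lam_gcv = min(grid, key=gcv)
```

Note that GCV needs no estimate of the noise level, which is its main practical appeal over the discrepancy principle.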


Thus, the exact values of coefficients are never known and only estimates for the coefficients are available. The products ±s.ain are known as nominal-coefficient variations, or errors in nominal coefficients. Since all computations are always conducted with nominal coefficients, it is necessary to check how coefficient variation affects the computing accuracy. There exist problems in which solution errors are of the same order of magnitude as errors in setting coefficients; this case is simplest to treat. However, there are problems wherein solution errors are greater than coefficient errors. Finally, there are problems where even very small, practically unavoidable errors in setting coefficients, parameters, or initial and boundary conditions give rise to appreciable solution errors. Such problems are called ill-posed ones. Nevertheless, such problems are often encountered in practice, and methods enabling their adequate solution need to be put under scrutiny.

a smaller dimensional problem. Other methods do not require the exact knowledge of σ but try to derive such information from the right-hand side b. They are basically heuristic methods. Very popular methods of this type are the L-curve criterion [33, 31] and the Generalized Cross Validation (GCV) criterion [33, 53, 21]. For large problems these criteria are computationally very expensive, and therefore they are difficult to use in practical applications. Recently, in [8], the authors have proposed a computationally less expensive strategy based on computing an L-ribbon that contains the L-curve and its interior. For Tikhonov regularization, some strategies based on solving the quadratically constrained least squares problem (1.9) have been proposed. The algorithm presented in [11] makes use of “black box” solvers for the related unconstrained least squares problems. In [47, 46] a trust-region approach to (1.9) is discussed. In [22] the authors have presented a method for (1.9) based on Gauss quadrature; in [10] a modification of this latter method based on partial Lanczos bidiagonalization and Gauss quadrature is presented. The iterative scheme proposed in [14] is based on the works [46, 48] and requires the computation of the smallest eigenvalue and the corresponding eigenvector to determine the proper regularization parameter. In [7] a modular approach is proposed for solving the first order equation associated with (1.9). However, in spite of the numerous methods proposed in the literature, the selection of a suitable regularization parameter λ for Tikhonov regularization is still an open question (especially for large scale problems) and an active area of research. Moreover, to the best of the author's knowledge, little work has been done on developing methods for the choice of the regularization parameter for other regularization functionals, such as the Total Variation functional.


with a nonsymmetric and possibly rectangular matrix A ∈ ℝ^{m×n} or with a symmetric positive semidefinite matrix A ∈ ℝ^{n×n}. Throughout this paper ‖·‖ denotes the Euclidean vector norm or the associated induced matrix norm. The singular values of A are assumed to “cluster” at the origin. In particular, A is severely ill-conditioned and may be singular. Least-squares problems of this kind are often referred to as discrete ill-posed problems. They arise, for instance, from the discretization of linear ill-posed problems, such as Fredholm integral equations of the first kind with a smooth kernel. The vector b represents available data, which is contaminated by an error vector e. This error may stem from measurement inaccuracies or discretization. Thus,
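The severe error propagation caused by clustered singular values can be demonstrated on a tiny discretized Fredholm equation of the first kind with a Gaussian kernel. Everything below (kernel, grid size, perturbation) is invented for illustration:

```python
import math

def solve_dense(M, rhs):
    """Gaussian elimination with partial pivoting for a small dense system."""
    n = len(rhs)
    M = [row[:] + [r] for row, r in zip(M, rhs)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            for k in range(c, n + 1):
                M[r][k] -= f * M[c][k]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][k] * x[k] for k in range(r + 1, n))) / M[r][r]
    return x

def norm(v):
    return math.sqrt(sum(t * t for t in v))

# Midpoint-rule discretization of (Kx)(s) = \int_0^1 exp(-(s-t)^2) x(t) dt.
# The smooth kernel makes the singular values of A cluster at the origin.
n = 6
nodes = [(i + 0.5) / n for i in range(n)]
h = 1.0 / n
A = [[h * math.exp(-(si - tj) ** 2) for tj in nodes] for si in nodes]

x_true = [1.0 + ti for ti in nodes]          # smooth "exact" solution
b = [sum(a * x for a, x in zip(row, x_true)) for row in A]

# Tiny high-frequency data perturbation, the worst case for a smoothing kernel.
e = [1e-8 * (-1) ** i for i in range(n)]
b_noisy = [bi + ei for bi, ei in zip(b, e)]

x1 = solve_dense(A, b)
x2 = solve_dense(A, b_noisy)
amplification = (norm([a - c for a, c in zip(x1, x2)]) / norm(x1)) \
    / (norm(e) / norm(b))
```

Even a relative data perturbation below 1e-8 changes the naive solution by many orders of magnitude more, which is exactly why straightforward solution of these problems is not meaningful.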


ing a probability distribution. Random matrices have wide applications in various branches of science and technology; in this dissertation we have considered only their application in compressed sensing. It is very interesting that random matrices are considered by researchers to be the best fit for a measurement matrix in compressed sensing. This is due to the fact that random matrices satisfy the sufficient condition for a matrix to succeed for the purposes of compressed sensing, known as the restricted isometry property (RIP). To satisfy this property, a matrix needs all of its submatrices of a given size to be well-conditioned. This fits well in the circle of problems of non-asymptotic random matrix theory. It is natural to wonder which specific types of random matrices satisfy the RIP. The answer is that all basic models of random matrices are nice restricted isometries. These include Gaussian and Bernoulli matrices, more generally all matrices with sub-Gaussian independent entries, and even more generally all matrices with sub-Gaussian independent rows or columns. Also, the class of restricted isometries includes random Fourier matrices, more generally random sub-matrices of bounded orthogonal matrices, and even more generally matrices whose rows are independent samples from an isotropic distribution with uniformly bounded coordinates.
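The "all submatrices well-conditioned" reading of the RIP can be spot-checked for sparsity level 2, where the singular values of each column pair follow in closed form from its 2 × 2 Gram matrix. The Bernoulli model, the dimensions, and the tolerance are arbitrary illustrative choices:

```python
import math
import random

random.seed(0)
m, n = 200, 10
# Bernoulli measurement matrix with entries +-1/sqrt(m): unit-norm columns.
A = [[random.choice((-1.0, 1.0)) / math.sqrt(m) for _ in range(n)]
     for _ in range(m)]

def pair_singular_values(j, k):
    """Singular values of the m x 2 submatrix [a_j a_k], computed from the
    eigenvalues of its 2 x 2 Gram matrix."""
    g11 = sum(row[j] * row[j] for row in A)
    g22 = sum(row[k] * row[k] for row in A)
    g12 = sum(row[j] * row[k] for row in A)
    d = math.sqrt((g11 - g22) ** 2 + 4.0 * g12 * g12)
    return (math.sqrt((g11 + g22 + d) / 2.0),
            math.sqrt((g11 + g22 - d) / 2.0))

# Worst deviation of any pair-submatrix singular value from 1: an empirical
# proxy for the restricted isometry constant at sparsity level 2.
worst = max(abs(sv - 1.0)
            for j in range(n) for k in range(j + 1, n)
            for sv in pair_singular_values(j, k))
```

With m much larger than the sparsity level, every pair of columns is nearly orthonormal with overwhelming probability, which is the two-sparse instance of the RIP statement above.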


Chapter 4 is devoted to the issue of clustering in statistical ill-posed linear inverse problems investigated in [58]. Section 4.1 describes the notation. Section 4.2 introduces the problem, its formulation, and the assumptions used in the estimation procedure. Section 4.3 addresses the estimation procedure. Section 4.4 is devoted to evaluation of the error of the estimator. In particular, it provides an oracle inequality for the risk and studies the minimax lower and upper bounds for the risk under specific assumptions on the class of underlying functions. We compare the accuracy of estimation with and without clustering, theoretically in Section 4.4.4 and via a limited simulation study in Section 4.5. Proofs can be found in Appendix B.


Abstract
In this paper, we are concerned with the problem of approximating a solution of an ill-posed problem in a Hilbert space setting using the Lavrentiev regularization method and, in particular, expanding the applicability of this method by weakening the popular Lipschitz-type hypotheses considered in earlier studies such as
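The paper works with operator equations in Hilbert space; as a finite-dimensional toy sketch, Lavrentiev regularization replaces A x = b by the shifted system (A + αI) x = b, which is solvable even when the positive semidefinite A is singular. The matrix and numbers below are invented for illustration:

```python
def solve2(M, rhs):
    """Solve a 2x2 linear system by Cramer's rule."""
    det = M[0][0] * M[1][1] - M[0][1] * M[1][0]
    return [(rhs[0] * M[1][1] - M[0][1] * rhs[1]) / det,
            (M[0][0] * rhs[1] - rhs[0] * M[1][0]) / det]

A = [[1.0, 1.0], [1.0, 1.0]]      # singular, positive semidefinite
b = [1.0, 1.0]

def lavrentiev(alpha):
    """Solve the shifted system (A + alpha*I) x = b, alpha > 0."""
    M = [[A[0][0] + alpha, A[0][1]],
         [A[1][0], A[1][1] + alpha]]
    return solve2(M, b)

x = lavrentiev(1e-3)   # approaches the minimum-norm solution [0.5, 0.5]
```

Unlike Tikhonov regularization, the shift is applied to A itself rather than to AᵀA, which is why the method requires A to be (at least) positive semidefinite, or monotone in the nonlinear setting the paper addresses.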


(Received 28 January 2011; revised 24 June 2011)
Abstract
We treat two related moving boundary problems. The first is the ill-posed Stefan problem for melting a superheated solid in one Cartesian coordinate. Mathematically, this is the same problem as that for freezing a supercooled liquid, with applications to crystal growth. By applying a front-fixing technique with finite differences, we reproduce existing numerical results, concentrating on solutions that break down in finite time. This sort of finite time blow-up is characterised by the speed of the moving boundary becoming unbounded in the blow-up limit. The second problem, which is an extension of the first, is proposed to simulate aspects of a particular two phase Stefan problem with surface tension. We study this novel moving boundary problem numerically, and provide results that support the hypothesis that it exhibits a similar type of finite time blow-up as the more complicated two phase problem. The results are unusual in the sense that it appears


ILL-POSED INVERSE PROBLEMS IN ECONOMICS
1. INTRODUCTION
A parameter of an econometric model is said to be identified if it is uniquely determined by the probability distribution from which the available data are sampled (hereinafter the population distribution). In other words, a parameter is identified if there is a one-to-one or many-to-one mapping from the population distribution to the parameter. The parameter may be a scalar, vector, or function. In many familiar economic settings, such as least squares (LS) or instrumental variables (IV) estimation of a linear model, the parameter of interest is a scalar or vector, and the identifying mapping is continuous. That is, small changes in the population distribution of the data produce only small changes in the identified parameter. When this happens, the parameter of interest can be estimated consistently by replacing the unknown population distribution with a consistent sample analog, such as the empirical distribution of the data (Manski 1988). Consistency of the sample analog implies that the difference between the sample analog and true population distribution is small when the sample size is large. The estimated parameter is consistent for the true parameter because continuity of the identifying mapping implies that the difference between the estimated and true parameter values is small if the difference between the sample analog and true population distribution is small.


In Section 5, we present a new method of identifying the posterior distribution: we first characterize it through its Radon-Nikodym derivative with respect to the prior (Theorem 5.1).


Truncation criterion
This paper describes a new numerical method for the solution of large linear discrete ill-posed problems whose matrix is a Kronecker product. Problems of this kind arise, for instance, from the discretization of Fredholm integral equations of the first kind in two space-dimensions with a separable kernel. The available data (right-hand side) of many linear discrete ill-posed problems that arise in applications is contaminated by measurement errors. Straightforward solution of these problems generally is not meaningful because of severe error propagation. We discuss how to combine the truncated singular value decomposition (TSVD) with reduced rank vector extrapolation to determine computed approximate solutions that are fairly insensitive to the error in the data. Exploitation of the structure of the problem keeps the computational effort quite small. © 2010 Elsevier Inc. All rights reserved.
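The structure exploitation mentioned at the end rests on the Kronecker vec-identity (B ⊗ C) vec(X) = vec(C X Bᵀ), which lets one apply the large matrix B ⊗ C through two small matrix products. A small dependency-free check with made-up matrices (the paper's TSVD-plus-extrapolation method itself is not reproduced here):

```python
def kron(B, C):
    """Kronecker product of two small dense matrices (lists of rows)."""
    p, q = len(C), len(C[0])
    return [[B[i][j] * C[k][l] for j in range(len(B[0])) for l in range(q)]
            for i in range(len(B)) for k in range(p)]

def matvec(M, v):
    return [sum(a * x for a, x in zip(row, v)) for row in M]

def matmul(P, Q):
    return [[sum(P[i][k] * Q[k][j] for k in range(len(Q)))
             for j in range(len(Q[0]))] for i in range(len(P))]

def transpose(M):
    return [list(col) for col in zip(*M)]

def vec(X):
    """Column-major vectorization (stack the columns of X)."""
    return [X[i][j] for j in range(len(X[0])) for i in range(len(X))]

B = [[1.0, 2.0], [3.0, 4.0]]
C = [[0.0, 1.0], [1.0, 1.0]]
X = [[1.0, -1.0], [2.0, 0.5]]

lhs = matvec(kron(B, C), vec(X))               # explicit (mp) x (nq) matvec
rhs = vec(matmul(matmul(C, X), transpose(B)))  # two small products instead
```

For factors of size n the explicit product costs O(n⁴) per matrix-vector product versus O(n³) for the two small multiplications, and the SVD of B ⊗ C is likewise assembled from the SVDs of the small factors.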
