
To the best of our knowledge, present symbolic techniques are not able to handle strongly nonlinear equations of the kind (1), even when β < ∞. We therefore analyzed approach b), using the straightforward regularization technique obtained by replacing the Heaviside firing rate function with a Lipschitz continuous mapping. This yields an equation that falls within the scope of the Picard–Lindelöf theorem and standard stability estimates for ODEs; it is therefore well-posed and, at least in principle, approximately solvable by numerical methods.
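The effect of this regularization can be sketched in a few lines. The scalar model below, the tanh surrogate, the threshold `theta`, and the smoothing width `eps` are all illustrative assumptions, not the equation (1) of the text; the point is only that the smoothed right-hand side is Lipschitz, so Picard–Lindelöf applies and explicit time stepping behaves stably.

```python
import numpy as np

def heaviside(x):
    """Discontinuous firing-rate function (not Lipschitz)."""
    return np.where(x >= 0.0, 1.0, 0.0)

def smoothed(x, eps=0.05):
    """Lipschitz-continuous surrogate; eps controls the smoothing width."""
    return 0.5 * (1.0 + np.tanh(x / eps))

def euler(f, u0, t_end, dt):
    """Explicit Euler integration of u' = f(u)."""
    u, t = u0, 0.0
    while t < t_end:
        u = u + dt * f(u)
        t += dt
    return u

# Illustrative scalar model u' = -u + smoothed(u - theta):
# the regularized right-hand side is Lipschitz, hence well-posed.
theta = 0.3
u = euler(lambda u: -u + smoothed(u - theta), u0=1.0, t_end=5.0, dt=1e-3)
```

Away from the threshold the surrogate is indistinguishable from the Heaviside function, while near it the dynamics become amenable to standard ODE theory.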




Abstract. In this paper ill-posed linear inverse problems that arise in many applications are considered. The instability of a special kind of these problems, and its relation to the kernel, is described. To find a stable solution to these problems, some kind of regularization is needed, and one such regularization is presented. The results have been applied to a singular equation

is widely used for various transport phenomena in many fields. In the equation, u = u(x, t), α, and β are the substance concentration, the flow velocity, and the diffusion coefficient, respectively. The AD equation appeared to explain the unsteady heat transfer within the film by reducing the number of independent variables from three to two by a similarity transformation [1]. The same equation is used to express the transport of a solute based on mass conservation for a particular choice of the sink term as a function of solute concentration [2]. According to Chatwin and Allen [3], the AD equation (1) with constant β [4] holds when the velocity field is statistically steady, the cross-sectional area is independent of x and t, and the elapsed time is sufficiently large compared with the time taken for thorough mixing of the contaminant over the cross-sectional area [5]. Since it has analytical solutions in some cases, the AD equation attracts many researchers working on numerical methods, who use it to check the accuracy and validity of new methods. A problem with a steady-state solution is numerically solved by two unconditionally stable fourth-order compact implicit difference methods [6]. Several problems constructed on a one-dimensional form with constant coefficients of the AD equation are
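A minimal sketch of a standard scheme for the AD equation u_t + α u_x = β u_xx may help fix ideas; the first-order upwind/central discretization, the Gaussian initial pulse, and all parameter values below are illustrative choices, not the compact implicit methods of [6].

```python
import numpy as np

# Explicit scheme for u_t + alpha*u_x = beta*u_xx on [0, L]:
# first-order upwind for advection, central differences for diffusion.
alpha, beta = 1.0, 0.01                 # flow velocity, diffusion coefficient
nx, L = 101, 1.0
dx = L / (nx - 1)
dt = 0.4 * min(dx / alpha, dx**2 / (2 * beta))   # CFL-type stability restriction
x = np.linspace(0.0, L, nx)
u = np.exp(-200.0 * (x - 0.25) ** 2)             # Gaussian pulse as initial data

for _ in range(200):
    un = u.copy()
    u[1:-1] = (un[1:-1]
               - alpha * dt / dx * (un[1:-1] - un[:-2])          # upwind advection
               + beta * dt / dx**2 * (un[2:] - 2 * un[1:-1] + un[:-2]))  # diffusion
    u[0] = u[-1] = 0.0                           # homogeneous Dirichlet boundaries
```

The pulse advects downstream at speed α while diffusing; such simple runs are exactly the kind of validity check against analytical solutions mentioned above.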


We study regularization methods for solving equations with arbitrary accretive operators. We establish the strong convergence of these methods and their stability with respect to perturbations of operators and constraint sets in Banach spaces. Our research is motivated by the fact that fixed point problems with nonexpansive mappings can be reduced to such equations. Other important examples of applications are evolution equations and covariational inequalities in Banach spaces.


This paper presents two numerical methods for solving nonlinear constrained optimal control problems with a quadratic performance index. The methods are based upon linear B-spline functions, whose properties are presented. Two operational matrices of integration are introduced for the related procedures. These matrices are then utilized to reduce the solution of the nonlinear constrained optimal control problem to a nonlinear programming problem, to which existing well-developed algorithms may be applied. Illustrative examples are included to demonstrate the validity and applicability of the presented techniques.
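As a small illustration of the basis underlying such methods, the sketch below evaluates the linear B-spline (hat) functions on a uniform knot grid; the function name, knot vector, and evaluation points are illustrative choices, and this is not the paper's operational-matrix construction.

```python
import numpy as np

def hat_basis(t, knots):
    """Evaluate the linear B-spline (hat) basis at the points t.

    Returns B of shape (len(t), len(knots)), where B[j, i] is the value
    of the i-th hat function at t[j]. The functions sum to one everywhere
    on [knots[0], knots[-1]] (partition of unity).
    """
    t = np.asarray(t, dtype=float)
    B = np.zeros((t.size, knots.size))
    for i, k in enumerate(knots):
        left = knots[i - 1] if i > 0 else k               # boundary hats are half-hats
        right = knots[i + 1] if i + 1 < knots.size else k
        rise = (t - left) / (k - left) if k > left else np.ones_like(t)
        fall = (right - t) / (right - k) if right > k else np.ones_like(t)
        B[:, i] = np.clip(np.minimum(rise, fall), 0.0, 1.0)
    return B

knots = np.linspace(0.0, 1.0, 5)
t = np.linspace(0.0, 1.0, 11)
B = hat_basis(t, knots)
```

Each column is one hat function; the partition-of-unity and local-support properties are what make the operational matrices of integration sparse and easy to assemble.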


We consider an optimal control problem for a linear system with integrally constrained control, a linear terminal cost, and terminal constraints given by a set of linear inequalities. This is, in fact, an ill-posed problem because of the nonuniqueness of the optimal control, which always occurs when the end point of the optimal trajectory belongs to the interior of the reachable set of the control system. We propose a simple numerical algorithm for solving the optimal control problem, which uses a known explicit description of the reachable sets for linear systems with integral quadratic constraints on control functions. The algorithm is based on reducing the considered problem to a finite-dimensional convex programming problem in primal or dual variables. This method allows one to avoid the difficulties related to the nonuniqueness of the optimal control.

We can draw a number of conclusions from the numerical results. The first is that the HEUN algorithm, which has only linear convergence, requires significantly more evaluations than the other methods, as we would expect. Because of its high rate of ultimate convergence, the K3 algorithm is generally superior when the problem is simple, i.e. when the solution trajectory is smooth and does not approach close to regions where the Jacobian is singular. Where this is not the case, however, RK3 appears more efficient, and in particular we note that it is more reliable in that it


be of order N, which would correspond to the uniform mesh case [3]. The condition numbers of the matrices corresponding to the numerical tests of Section 4 are on the order of 100. The resulting linear system is solved by GMRES [22, 27], which is consequently expected to perform well here even without preconditioning. GMRES is restarted after 20 steps (i.e., the solver is GMRES(20)) and is stopped on small relative residuals; more precisely, the stopping criterion is
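A setup of this kind can be reproduced with SciPy's restarted GMRES. The tridiagonal matrix below is only a stand-in with condition number on the order of 100, not the system of Section 4, and the keyword lookup guards against SciPy's rename of the relative-tolerance argument from `tol` to `rtol`.

```python
import inspect
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import gmres

# Stand-in SPD tridiagonal system; eigenvalues lie in [0.05, 4.05],
# so the condition number is roughly 80 (order 100, as in the text).
n = 200
A = diags([-1.0, 2.05, -1.0], offsets=[-1, 0, 1], shape=(n, n), format="csr")
b = np.ones(n)

# SciPy renamed the relative-tolerance keyword from `tol` to `rtol`.
tol_kw = "rtol" if "rtol" in inspect.signature(gmres).parameters else "tol"

# GMRES(20): restart every 20 inner steps, stop on small relative residual.
x, info = gmres(A, b, restart=20, maxiter=1000, **{tol_kw: 1e-10})
rel_res = np.linalg.norm(b - A @ x) / np.linalg.norm(b)
```

Without preconditioning, a well-conditioned system like this converges comfortably; `info == 0` signals that the tolerance was met.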


• Profiling reveals that for reasonable meshes the relative costs of contouring and interface updating are very low (less than 1%), while most of the time is spent constructing and solving the linear system (3.3) (about 90%) and extending the solution (about 10%). Methods such as fast multipoles [5, 32] can be used to bring down the asymptotic complexity of the linear solver from O(N²) to O(N), where N is the number of elements of ∂Ω


Recently, Tautenhahn and Hämarik (1999) considered a monotone rule as a strategy for choosing the regularization parameter in the approximate solution of an ill-posed operator equation Tx = y, where T is a bounded linear operator between Hilbert spaces. Motivated by this, we propose a new discrepancy principle for simplified regularization, in the setting of Hilbert scales, when T is a positive and self-adjoint operator. When the data y is known only approximately, our method provides optimal order under certain natural assumptions on the ill-posedness of the equation and the smoothness of the solution. The result, in fact, improves an earlier work of the authors (1997). 2000 Mathematics Subject Classification: 65J20, 65R30, 65J15, 47A52, 47J06.
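A minimal numerical sketch of simplified (Lavrentiev) regularization, x_α = (T + αI)⁻¹ y, for a positive self-adjoint T; the synthetic decaying spectrum, the noise level, and the fixed choice of α below are illustrative assumptions, and no parameter-choice rule (monotone or discrepancy) is implemented here.

```python
import numpy as np

rng = np.random.default_rng(0)

# Positive self-adjoint operator with rapidly decaying spectrum (ill-posed).
n = 50
U, _ = np.linalg.qr(rng.standard_normal((n, n)))   # random orthonormal eigenbasis
T = U @ np.diag(10.0 ** -np.arange(n)) @ U.T       # eigenvalues 1, 0.1, 0.01, ...
T = 0.5 * (T + T.T)                                # enforce symmetry numerically

x_true = U[:, 0] + 0.5 * U[:, 1]                   # "smooth" exact solution
y = T @ x_true
y_delta = y + 1e-6 * rng.standard_normal(n)        # noisy data

# Simplified (Lavrentiev) regularization: solve (T + alpha*I) x = y_delta.
alpha = 1e-4
x_alpha = np.linalg.solve(T + alpha * np.eye(n), y_delta)
err_reg = np.linalg.norm(x_alpha - x_true)
```

Solving Tx = y_delta directly would amplify the noise by factors up to the reciprocal of the smallest eigenvalue; the shift by αI caps that amplification at 1/α, trading a small bias for stability.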


"ill-conditioned", in the sense that some values do not conform to a nicely rounded density function. For example, if we were to count the numbers of theatres in a survey of British towns, then almost all the values would be near 1 and would not exceed about 5. By comparison, the count for London would be of the order of 50, and the standardised value would probably also be about 50. The squared-distance component of a coefficient which compares any other town with London would therefore be of the order of 2500, which does not compare at all favourably with the mean of 2. We can assume that if a classification problem exists, then the density functions of some variables cannot be expected to exhibit nice unimodal distributions. Standardisation must therefore be used with care, and can dangerously influence the results if applied to ill-conditioned data.
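The arithmetic of this example is easy to reproduce. The counts below are invented to match the description (most towns near 1, a London-like value of 50); the point is how completely the outlier dominates squared distances after standardisation.

```python
import numpy as np

# Numbers of theatres in a hypothetical survey of towns: mostly near 1,
# with one London-like outlier of 50.
counts = np.array([1, 1, 2, 1, 3, 1, 2, 1, 1, 50], dtype=float)

z = (counts - counts.mean()) / counts.std()     # standardisation

# Squared-distance contribution of this variable between two typical
# towns, versus between a typical town and the outlier.
typical = (z[0] - z[2]) ** 2
vs_outlier = (z[0] - z[-1]) ** 2
ratio = vs_outlier / typical
```

The outlier's contribution exceeds a typical between-town contribution by roughly three orders of magnitude, which is exactly the distortion the passage warns about.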


We can clearly appreciate that, for s satisfying Assumption 1 (iii) (i.e. 0 < s ≤ 3), the numerical convergence rates fit very well the ones predicted by the theory. Note that the higher the regularity of the truth (i.e. the larger the s), the smaller the error w.r.t. the truth in the estimates. We note that for s = 4 the aforementioned assumption is violated and, in the case of Data Model 1, the slopes of the numerical convergence rates are slightly smaller than the theoretical ones. In this case (s = 4) there are also fluctuations of the error w.r.t. the truth obtained with 3DVAR. These fluctuations may be associated with the fact that the error w.r.t. the truth is very small for sufficiently large iterations, while for Data Model 1 the noise level is fixed a priori (recall γ = 5 × 10⁻⁴). For the Kalman filter, however, these fluctuations are not so evident; presumably updating the covariance has a stabilizing effect. For Data Model 2, as N increases the corresponding γ decreases, and so these fluctuations in the error are nonexistent.


Table 1 reports the computational results for Algorithm 1 and Karmarkar's Algorithm. The first column of Table 1 contains the example number, and the next two columns for each algorithm contain its total iterations and its times (in seconds). Table 1 shows that, in terms of the number of iterations required to complete the five numerical examples, Algorithm 1 is better than Karmarkar's Algorithm, but in terms of completion time Karmarkar's algorithm looks better than Algorithm 1. This can be explained by the fact that the proposed approach uses an exponential function, which generally requires more time to evaluate than the polynomial function used in Karmarkar's algorithm.

is closely related to filtering methods such as 3DVAR and the Kalman filter when applied to the partially observed linear dynamical system (1.2). The similarity between the two schemes provides a probabilistic interpretation of iterated regularization methods and allows the possibility of quantifying uncertainty via the variance. On the other hand, we will employ techniques from the convergence analysis arising in regularization theory [6] to shed light on the convergence of filter-based methods, especially when the linear observation operator is ill-posed. We do not employ Hilbert scales [7, 10, 17, 18] in our analysis, as this would lead to a different focus, but their study in this problem would also be of interest.
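A fixed-gain 3DVAR-style update for a partially observed linear system can be sketched as follows; the dynamics M, observation operator H, background covariance C, and noise level below are all illustrative choices rather than the system (1.2) of the text.

```python
import numpy as np

rng = np.random.default_rng(1)

# Partially observed linear dynamics: v_{k+1} = M v_k, y_k = H v_k + noise.
n, p = 4, 2
M = 0.9 * np.eye(n)                 # contractive dynamics (illustrative)
H = np.eye(p, n)                    # observe only the first p components
C = np.eye(n)                       # fixed background covariance
gamma = 1e-2                        # observational noise level

# Fixed 3DVAR gain: K = C H^T (H C H^T + gamma^2 I)^{-1}.
K = C @ H.T @ np.linalg.inv(H @ C @ H.T + gamma**2 * np.eye(p))

v = rng.standard_normal(n)          # truth
m = np.zeros(n)                     # estimate
for _ in range(100):
    v = M @ v
    y = H @ v + gamma * rng.standard_normal(p)
    m = M @ m + K @ (y - H @ (M @ m))   # forecast step + analysis update

err = np.linalg.norm(m - v)
```

Because the gain is fixed, this iteration is exactly an iterated regularization scheme in disguise; replacing K by a covariance that is itself updated each step would give the Kalman filter.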


Coordinate transformation plays a very important role in the numerical treatment of the global positioning system. For transforming GPS coordinates from the WGS-84 coordinate system to a local coordinate system, the Bursa model is generally used to solve for the transformation parameters: three translation parameters, three rotation parameters, and one scale parameter. From a theoretical point of view, a great number of algorithms have been developed to solve these problems. Early publications such as Grafarend et al. [1], Vanicek and Steeves [2], Vanicek et al. [3], Yang [4], and Grafarend and Awange [5] give detailed algorithms for coordinate transformation. In general, their algorithms for solving the transformation parameters are all based on the classical least squares criterion. Recently, better methods have become available in the literature, e.g. hard or soft fixing of certain transformation parameters (rotations around some coordinate axes are strongly correlated with translations), reduction of coordinates to the centre of "gravity", etc.; however, ill-posed problems were rarely considered in those methods. Actually, ill-posed problems often affect this kind of data processing.
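For concreteness, a linearised Bursa(-Wolf) transformation with the seven parameters named above can be sketched as below. The sign convention of the small-angle rotation matrix and the sample values are illustrative (conventions differ between references), and no least-squares estimation of the parameters is attempted here.

```python
import numpy as np

def bursa_wolf(xyz, t, rx, ry, rz, s):
    """Apply a linearised Bursa(-Wolf) seven-parameter transformation.

    xyz : (N, 3) Cartesian coordinates in the source frame;
    t   : translation vector, shape (3,);
    rx, ry, rz : small rotation angles in radians (one sign convention);
    s   : dimensionless scale correction (e.g. a few parts per million).
    """
    R = np.array([[0.0,  rz, -ry],
                  [-rz, 0.0,  rx],
                  [ ry, -rx, 0.0]])
    # For row vectors: (I + R) x  becomes  x @ (I + R)^T.
    return t + (1.0 + s) * (xyz @ (np.eye(3) + R).T)

# Illustrative point and translation-only parameters (all values invented).
pts = np.array([[4027894.0, 307045.0, 4919105.0]])
out = bursa_wolf(pts, t=np.array([-87.0, -98.0, -121.0]),
                 rx=0.0, ry=0.0, rz=0.0, s=0.0)
```

With the rotations and scale set to zero the transformation reduces to a pure translation, which makes the parameter roles easy to check before fitting them by least squares.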

For many years the subject of functional equations has held a prominent place in the attention of mathematicians. In more recent years this attention has been directed to a particular kind of functional equation, the integral equation, in which the unknown function occurs under the integral sign. Such equations occur widely in diverse areas of applied mathematics and physics, and they offer a powerful technique for solving a variety of practical problems. One obvious reason for using an integral equation rather than a differential equation is that all of the conditions specifying the initial value problem or boundary value problem for a differential equation are incorporated into the integral equation itself. In the case of PDEs, the dimension of the problem is reduced in this process, so that, for example, a boundary value problem for a partial differential equation in two independent variables transforms into an integral equation involving an unknown function of only one variable. This reduction of what may represent a complicated mathematical model of a physical situation into a single equation is itself a significant step, but there are other advantages to be gained by replacing differentiation with integration. Some of these advantages arise because integration is a smoothing process, a feature which has significant implications when approximate solutions are sought. Whether one is looking for an exact solution to a given problem or having to settle for an approximation to it, an integral equation formulation can often provide a useful way forward. For this reason integral equations have attracted attention for most of the last century and their theory is well developed.


The concept of dynamic optimization is the natural extension of the theory of static optimization. Some classic examples of static optimization problems are Linear Programming (LP), Quadratic Programming (QP), integer/mixed-integer programming (MIP), and the most famous case, Nonlinear Programming (NLP). In all these problems, the unknown variables are defined over the real or integer numbers R, Z. The theory of dynamic optimization looks instead at problems whose unknowns are real functions. The solution of these kinds of problems goes back to the origin of differential calculus and has become an independent branch of research, first in the Calculus of Variations and nowadays in Optimal Control. The first results are due to Leonhard Euler and to the Bernoulli brothers, who laid the foundations of the calculus of variations. In the second half of the XIX century, other important names in mathematics, such as Jacobi and Weierstrass, contributed theorems of existence. The passage from the calculus of variations to optimal control is attributed to the Russian mathematician Lev Pontryagin and to the American Richard Bellman in the fifties of the last century. The first is the founder of the indirect methods based on variational observations; the second discovered the Dynamic Programming Principle of optimality (DPP), which gave birth to Dynamic Programming (DP). Later, a new family of numerical methods for the solution of optimal control problems was introduced: the family of direct methods, based on the direct transcription of the optimal control problem (Figure 1.1). Suppose we tackle the following optimal


which guarantees convexity and hence convergence of the method. These developments imply the use of results from optimization, numerical computation, and approximation, and represent another enrichment of the original algebraic framework. The development of techniques that avoid the iterative nature is an additional challenge. DAP and the stabilisation problem: Frequency assignment may also guarantee stabilisation, but it is rather restrictive. There exist some stabilisation results for pole assignment (Byrnes, 1983), but the stabilisation version of DAP takes a different form. In this case we deal with semi-algebraic sets (the stability domain); the linear variety of DAP becomes a semi-algebraic set, and this is a study in the field of semi-algebraic geometry. This is an open field for further development of DAP. Partial decomposability of multivectors and restricted DAP: The study of DAP assumes that the design parameter is entirely free. However, in many practical cases parts of the design matrix (a row, or a number of them) are fixed. This is equivalent to decomposing a multivector as a product of lower-dimensional multivectors. When this is not possible, we examine the problem of approximate partial decomposability. This introduces a new dimension to the study of DAP which is relevant to design problems.
