Top PDF results for "convergence of the method":

New Highly Accurate Iterative Method of Third Order Convergence for Finding the Multiple Roots of Nonlinear Equations

We present a new third-order convergent iterative method for multiple roots (of multiplicity m) of nonlinear equations. The proposed method requires one evaluation of the function and two evaluations of its first derivative per iteration. Numerical tests show that the present method provides more accurate numerical results than other methods. The stability of the dynamical behaviour of the iterative method is investigated by displaying its basins of attraction. The basins of attraction display fewer black points, which gives a wider choice of initial guesses in computation. Keywords: Basin of attraction, Multi-point iterative methods, Multiple roots, Nonlinear equations, Order of convergence.
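The abstract states the per-step cost (one function and two derivative evaluations) but not the iteration formula. For orientation only, here is the classical modified Newton scheme for a root of known multiplicity m, x_{n+1} = x_n − m f(x_n)/f'(x_n) — the standard second-order baseline, not the paper's third-order method:

```python
# Classical modified Newton iteration for a root of known multiplicity m:
#   x_{n+1} = x_n - m * f(x_n) / f'(x_n)
# Second-order baseline only; the paper's third-order formula is not given
# in the abstract.

def modified_newton(f, df, x0, m, tol=1e-12, max_iter=50):
    x = x0
    for _ in range(max_iter):
        fx, dfx = f(x), df(x)
        if dfx == 0:          # exact root reached (or stationary point)
            break
        step = m * fx / dfx
        x -= step
        if abs(step) < tol:
            break
    return x

# f(x) = (x - 1)^3 has a root of multiplicity 3 at x = 1.
root = modified_newton(lambda x: (x - 1) ** 3,
                       lambda x: 3 * (x - 1) ** 2,
                       x0=2.0, m=3)
```

With the correct multiplicity supplied, the iteration regains the fast local convergence that plain Newton loses at a multiple root.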

Convergence criteria of Newton’s method on Lie groups

In a Riemannian manifold framework, an analogue of the well-known Kantorovich's theorem was given in [] for Newton's method for vector fields on Riemannian manifolds, while the extensions of the famous Smale's α-theory and γ-theory in [] to analytic vector fields and analytic mappings on Riemannian manifolds were done in []. In the recent paper [], the convergence criteria in [] were improved by using the notion of the γ-condition for the vector fields and mappings on Riemannian manifolds. The radii of uniqueness balls of singular points of vector fields satisfying the γ-conditions were estimated in [], while the local behavior of Newton's method on Riemannian manifolds was

Global convergence of a modified conjugate gradient method

A modified conjugate gradient method for solving unconstrained optimization problems is proposed which satisfies the sufficient descent condition under the strong Wolfe line search, and its global convergence is established in a straightforward way. The numerical results show that the proposed method is promising for the given test problems.
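The paper's modified search direction and its strong Wolfe line search are not given in this snippet. As an illustrative sketch only, a textbook nonlinear conjugate gradient method with the Fletcher-Reeves β and a plain Armijo backtracking line search (standing in for strong Wolfe) shows the general scheme:

```python
import numpy as np

# Nonlinear conjugate gradient, Fletcher-Reeves variant, with a plain
# Armijo backtracking line search. The paper's modified direction and the
# strong Wolfe line search are replaced by these textbook choices, so this
# is an illustrative sketch rather than the proposed method.

def cg_minimize(f, grad, x0, tol=1e-8, max_iter=500):
    x = np.asarray(x0, dtype=float)
    g = grad(x)
    d = -g                                   # initial steepest-descent step
    for _ in range(max_iter):
        if np.linalg.norm(g) < tol:
            break
        t = 1.0                              # Armijo backtracking
        for _ in range(60):
            if f(x + t * d) <= f(x) + 1e-4 * t * g.dot(d):
                break
            t *= 0.5
        x_new = x + t * d
        g_new = grad(x_new)
        beta = g_new.dot(g_new) / g.dot(g)   # Fletcher-Reeves coefficient
        d = -g_new + beta * d
        x, g = x_new, g_new
    return x

# Minimize f(x) = ||x - 1||^2 starting from the origin.
x_min = cg_minimize(lambda x: np.sum((x - 1.0) ** 2),
                    lambda x: 2.0 * (x - 1.0),
                    np.zeros(3))
```

The sufficient descent condition mentioned in the abstract is a bound of the form gᵀd ≤ −c‖g‖², which the modified β of the paper is designed to guarantee.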

Accelerating Convergence Method for Relaxation Cooperative Optimization

To address the high computational cost and slow convergence of collaborative optimization, a new relaxation cooperative optimization method with accelerated convergence is presented. The optimization in this method is divided into two stages. In the accelerating-convergence stage, the calculation of relaxation factors is improved: relaxation factors are constructed according to the inconsistency between each disciplinary optimal solution and their average value. In the optimization-solving stage, the optimal solution from the former stage is taken as the initial point, and relaxation factors with consistent precision are collaboratively optimized to obtain the final optimal solution. Finally, this optimization method is tested on a typical numerical example. Experimental results show that this method can greatly reduce the computational cost and accelerate convergence.

On Convergence of the Hardy Cross Method of Moment Distribution.

However, while the Hardy Cross method with the simultaneous joint balancing distribution sequence provides remarkable convergence in practice, a proof of its convergence was not published until much more recently [Volokh, 2002]. Volokh characterizes the method with the simultaneous joint balancing distribution sequence as a Jacobi iterative scheme. He starts with the classical displacement method of a structure and then shows an incremental form of the Jacobi iterative scheme that can be used to solve these simultaneous equations. By using a specific starting point, the incremental form of Jacobi iterative scheme can represent the process of applying the Hardy Cross method with the simultaneous joint balancing distribution sequence. Because of the diagonal dominance of the stiffness matrix from the displacement method’s simultaneous equations, the Jacobi iteration—and equivalently the Hardy Cross method—converges for any loading condition. Section 2.4 illustrates these mathematical transformations in detail.
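The Jacobi scheme at the heart of Volokh's argument is easy to state: split A into its diagonal D and off-diagonal remainder R and iterate x ← D⁻¹(b − Rx). A minimal sketch, assuming a generic strictly diagonally dominant system rather than an actual stiffness matrix:

```python
import numpy as np

# Jacobi iteration x^{k+1} = D^{-1}(b - R x^k), where D is the diagonal of
# A and R = A - D. For a strictly diagonally dominant A (as with the
# stiffness matrix of the displacement method), the iteration matrix has
# spectral radius < 1, so the scheme converges for any right-hand side and
# any starting point.

def jacobi(A, b, x0=None, tol=1e-10, max_iter=1000):
    A = np.asarray(A, dtype=float)
    b = np.asarray(b, dtype=float)
    x = np.zeros_like(b) if x0 is None else np.asarray(x0, dtype=float)
    D = np.diag(A)                  # diagonal entries
    R = A - np.diagflat(D)          # off-diagonal part
    for _ in range(max_iter):
        x_new = (b - R.dot(x)) / D
        if np.linalg.norm(x_new - x, ord=np.inf) < tol:
            return x_new
        x = x_new
    return x

# A strictly diagonally dominant example system.
A = [[4.0, 1.0, 1.0],
     [1.0, 5.0, 2.0],
     [1.0, 2.0, 6.0]]
b = [6.0, 8.0, 9.0]
x = jacobi(A, b)
```

Diagonal dominance here plays exactly the role the text describes: it bounds the spectral radius of the Jacobi iteration matrix below one, which is the convergence guarantee carried over to the Hardy Cross process.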

Convergence of Relaxed Hermitian and Skew-Hermitian Splitting Method

Abstract. In this paper, based on the method first considered by Bai, Golub and Ng [Hermitian and skew-Hermitian splitting methods for non-Hermitian positive definite linear systems, SIAM J. Matrix Anal. Appl., 24 (2003): 603-626], the relaxed Hermitian and skew-Hermitian splitting (RHSS) method is presented, and we then prove the convergence of the method for solving positive definite, non-Hermitian linear systems. Moreover, we find that the new scheme can outperform the standard HSS method and can be used as an effective preconditioner for certain linear systems, as shown through numerical experiments.
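The baseline HSS iteration of Bai, Golub and Ng alternates two shifted half-steps built from the Hermitian part H = (A + A*)/2 and skew-Hermitian part S = (A − A*)/2; the relaxation added by the RHSS variant is not specified in this abstract, so only the standard scheme is sketched:

```python
import numpy as np

# Standard HSS iteration for Ax = b with A = H + S, alpha > 0:
#   (alpha I + H) x^{k+1/2} = (alpha I - S) x^k       + b
#   (alpha I + S) x^{k+1}   = (alpha I - H) x^{k+1/2} + b
# For positive definite H it converges for any alpha > 0. The RHSS
# relaxation parameter of the paper is not included here.

def hss(A, b, alpha=1.0, tol=1e-10, max_iter=500):
    A = np.asarray(A, dtype=complex)
    b = np.asarray(b, dtype=complex)
    n = len(b)
    H = (A + A.conj().T) / 2          # Hermitian part
    S = (A - A.conj().T) / 2          # skew-Hermitian part
    I = np.eye(n)
    x = np.zeros(n, dtype=complex)
    for _ in range(max_iter):
        x_half = np.linalg.solve(alpha * I + H, (alpha * I - S).dot(x) + b)
        x_new = np.linalg.solve(alpha * I + S, (alpha * I - H).dot(x_half) + b)
        if np.linalg.norm(x_new - x) < tol:
            return x_new
        x = x_new
    return x

A = np.array([[3.0, 1.0], [-1.0, 4.0]])   # positive definite, non-symmetric
b = np.array([1.0, 2.0])
x = hss(A, b)
```

In practice the shifted solves are done once per matrix via factorizations; the dense `solve` calls above are only for clarity.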

On convergence of the immersed boundary method for elliptic interface problems

However, there was almost no rigorous proof in the literature until recently [14], in which the author proved the first-order accuracy of the IB method for the Stokes equations with a periodic boundary condition. The proof is based on some known inequalities between the fundamental solution and the discrete Green function with a periodic boundary condition for the Stokes equations. In [4], the author showed that the pressure obtained from the IB method has O(h^{1/2}) order of convergence in the L^2 norm for a 1D model. In [19, 20], the authors designed some level set methods based on discrete

CONVERGENCE OF ADOMIAN METHOD FOR SOLVING KDV–BURGERS EQUATION.

In this paper, convergence of the Adomian decomposition method (ADM) when applied to the KdV–Burgers equation is proved. Two approaches for extracting the soliton solutions to the nonlinear dispersive and dissipative KdV–Burgers equation are implemented. The first is the classical ADM, while the second is the modified ADM, which is called the general iteration method. Test examples are given and a comparison between the two approaches is carried out to illustrate the pertinent features of the general iteration method.

Global Convergence of a Modified Tri-Dimensional Filter Method

3) The tri-dimensional filter method can make full use of the information we gather along the algorithm's progress. This paper is divided into 4 sections. The next section introduces the concept of the modified tri-dimensional filter and the NCP function. In Section 3, a line search filter algorithm is given. The global convergence properties are proved in the last section.

On a new semilocal convergence analysis for the Jarratt method

Note that the p-Jarratt-type method (p ∈ [, ]) given in [] uses (.)-(.), but the sufficient convergence conditions are different from the ones given in this study and guarantee only third-order convergence (not the fourth obtained here) in the case of the Jarratt method (for p = /).

On the Convergence for an Iterative Method for Quasivariational Inclusions

In this paper, motivated by the research work going on in this direction (see, for instance, [2, 3, 7–21]), we introduce an iterative method for finding a common element of the set of fixed points of a strict pseudocontraction and of the set of solutions to problem (1.14) with a multivalued maximal monotone mapping and relaxed (δ, r)-cocoercive mappings. Strong convergence theorems are established in the framework of Hilbert spaces.

NEW METHOD FOR GENERATING CONVERGENCE ACCELERATION ALGORITHMS

ABSTRACT: In this work, we present a new method of generating convergence acceleration algorithms by performing an adequate approximation technique. Aitken's Δ² algorithm can be derived in this way, as can the more general E-algorithm. Moreover, a family of algorithms for logarithmic sequences has been derived and tested successfully on a set of logarithmic sequences.
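Aitken's Δ² transformation mentioned above is compact enough to sketch: from three consecutive terms it forms t_n = s_n − (Δs_n)² / (Δ²s_n), which for a linearly convergent sequence converges faster to the same limit (exactly, for a pure geometric error term):

```python
# Aitken's delta-squared transformation:
#   t_n = s_n - (s_{n+1} - s_n)**2 / (s_{n+2} - 2*s_{n+1} + s_n)
# For s_n = L + c*r^n (pure geometric error) it recovers the limit L
# exactly; for general linear convergence it accelerates markedly.

def aitken(seq):
    out = []
    for s0, s1, s2 in zip(seq, seq[1:], seq[2:]):
        denom = s2 - 2 * s1 + s0          # second difference
        out.append(s0 - (s1 - s0) ** 2 / denom if denom != 0 else s2)
    return out

# s_n = 1 + 0.5^n converges linearly to 1 with ratio 0.5.
seq = [1 + 0.5 ** n for n in range(10)]
acc = aitken(seq)
```

Logarithmically convergent sequences, the target of the algorithm family in the abstract, are precisely those where plain Δ² helps little and more elaborate transformations are needed.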

Strong Convergence Theorems of the CQ Method for Nonexpansive Semigroups

Motivated by T. Suzuki, we show strong convergence theorems of the CQ method for nonexpansive semigroups in Hilbert spaces via a hybrid method in mathematical programming. The results presented extend and improve the corresponding results of Kazuhide Nakajo and Wataru Takahashi (2003).

On the convergence of an iteration method for continuous mappings on an arbitrary interval

In this paper, we consider an iterative method for finding a fixed point of continuous mappings on an arbitrary interval. We then give necessary and sufficient conditions for the convergence of the proposed iterative method for continuous mappings on an arbitrary interval. We also compare the rate of convergence between iteration methods. Finally, we provide a numerical example which supports our theoretical results.
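The abstract does not name the iteration scheme it studies; the classical Mann iteration x_{n+1} = (1 − α_n)x_n + α_n T(x_n) is one of the schemes such interval results typically cover, sketched here with a constant step as an illustration only:

```python
import math

# Mann iteration x_{n+1} = (1 - alpha) * x_n + alpha * T(x_n) with a
# constant step. The specific scheme studied in the paper is not named in
# the abstract, so this is an illustrative stand-in.

def mann(T, x0, alpha=0.5, steps=200):
    x = x0
    for _ in range(steps):
        x = (1 - alpha) * x + alpha * T(x)
    return x

# T(x) = cos(x) is continuous on the interval [0, 2]; the iteration
# settles at its fixed point x = cos(x) (approximately 0.739085).
fp = mann(math.cos, 1.0)
```

Averaging with the previous iterate is what lets such schemes converge for continuous maps where plain Picard iteration x_{n+1} = T(x_n) may oscillate.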

Mathematical programming for the sum of two convex functions with applications to lasso problem, split feasibility problems, and image deblurring problem

mathematical programming for the sum of two convex functions. In infinite Hilbert space, we establish two strong convergence theorems as regards this problem. As applications of our results, we give strong convergence theorems as regards the split feasibility problem with modified CQ method, strong convergence theorem as regards the lasso problem, and strong convergence theorems for the mathematical programming with a modified proximal point algorithm and a modified

The Formulas to Compare the Convergences of Newton's Method and the Extended Newton's Method (Tsuchikura–Horiguchi Method) and the Numerical Calculations

We obtain the following proposition by comparing the coefficients of (x_k − α)^2 in formulas (4.5) and (4.12). Proposition 4.6. Let the equation h(x) = 0 be deformed from f(x) = 0. Let f(α) = h(α) = 0, and let α (≠ 0) be a simple root. Then the necessary and sufficient condition for the convergence to α of the q-th power of the TH-method of f(x) to be equal to or faster than that of the r-th power of the TH-method of h(x) is that the real numbers q and r satisfy the following condition.

Multi-step implicit iterative methods with regularization for minimization problems and fixed point problems

In this paper, we aim to find a common solution of the minimization problem (MP) for a convex and continuously Fréchet differentiable functional and the common fixed point problem of an infinite family of nonexpansive mappings in the setting of Hilbert spaces. Motivated and inspired by the research going on in this area, we propose two iterative schemes for this purpose. One is called a multi-step implicit iterative method with regularization, which is based on three well-known methods: the extragradient method, the approximate proximal method and the gradient projection algorithm with regularization. The other is an implicit hybrid method with regularization, which is based on the CQ method, the extragradient method and the gradient projection algorithm with regularization. Weak and strong convergence results for these two schemes are established, respectively. Recent results in this direction can be found, e.g., in [–].

Research and application of compact finite difference method of low reaction diffusion equation

The paper describes the theory of fractional derivatives and specific application examples in the engineering sciences. On this basis, it mainly studies the low reaction-diffusion equations. First, using a compact operator, the paper constructs a higher-order finite difference scheme. Then it proves the existence and uniqueness of the difference solution by the matrix method and analyzes the stability and convergence of the scheme by the Fourier method.

Robust estimation: limit theorems and their applications

the root to the M-estimating equations. Techniques justified by uniform convergence are used here. Uniform convergence also lends itself to the use of a graphical method of plotting "expectation curves". It can be used either for identifying the M-estimator from multiple solutions of the defining equations or, in large samples (e.g. > 50), as a visual indication of whether the fitted model is a good approximation for the underlying mechanism. Theorems based on uniform convergence are given that show a domain of convergence (in the numerical analysis sense) for the Newton-Raphson iteration method applied to M-estimating equations for the location parameter when redescending loss functions are used.

Solution of optimal control problems by a pointwise projected Newton method

convergence various difficulties. In another context this aspect has been the focus of other research activities, see e.g. [9], [8]. The estimate in Theorem 3.5 enables us to show a result on the identification of the set of active indices. It estimates the measure of the set on which the active set at the current iterate differs from the active set of the solution by the distance of the iterate from the solution in the X-norm. This result is the key for the convergence analysis of the algorithm.
