Peer Methods in Optimal Control

stiff problems and semi-discretizations of partial differential equations. Due to their high stage order, they do not suffer from order reduction and work especially well for high accuracy requirements. Our main motivation is to clarify the potential of implicit peer methods to overcome the deficiencies of linear multistep methods when applied to optimal control problems. We construct peer methods with high adjoint consistency in the interior of the integration interval and show that the well-known backward differentiation formulas are optimal with respect to the achievable order. We clearly identify that inappropriate adjoint initialization remains a crucial issue for implicit peer methods, restricting the overall consistency order to one, a fact that also holds for linear multistep methods. Given the low consistency order of the discrete adjoints, and therefore of the whole discretization, we have to conclude that implicit peer methods are not suitable for a first-discretize-then-optimize approach. The content of Chapter 4 has been published in [51].

Parallel methods for solving stochastic optimal control problems: control of drinking water networks

• Risk-averse stochastic optimal control. In this thesis the randomness in the cost function is quantified with the expectation operator, leading to a so-called risk-neutral formulation. In a pioneering paper, Artzner et al. (ADEH99) developed an axiomatic framework for risk measures which turned out to be suitable for the formulation of risk-averse multistage optimal control problems (SDR09; Sha09; GR11; Sha12) and has been used in various applications such as power systems (ZG13) and finance (AMRU01). When these measures are introduced in stochastic programming models, this leads to a new class of problems, risk-averse optimisation (Sha09; GR11; Sha12). In two recent papers, Asamov and Ruszczyński (AR15) and Collado et al. (CPR12) proposed methods to solve stochastic linear problems which, however, do not scale up well with the number of scenarios and the prediction horizon. Risk-averse problems are significantly more complex than their risk-neutral counterparts, and there are no algorithms that solve them fast enough to qualify for typical control applications, even for medium-scale problems. Developing optimisation methods for risk-averse optimisation problems in control applications is an interesting research direction.
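As a rough illustration of the contrast between the risk-neutral (expectation-based) formulation and a risk-averse one, the sketch below compares the empirical mean of a sampled cost with its conditional value-at-risk (CVaR), a representative coherent risk measure, computed via the Rockafellar-Uryasev representation; the cost distribution and the confidence level are placeholders, not taken from the thesis.

```python
import numpy as np

def cvar(costs, alpha=0.95):
    """Empirical CVaR_alpha via the Rockafellar-Uryasev formula
    CVaR_alpha(X) = t* + E[(X - t*)^+] / (1 - alpha), with t* the alpha-quantile."""
    t = np.quantile(costs, alpha)
    return t + np.mean(np.maximum(costs - t, 0.0)) / (1.0 - alpha)

rng = np.random.default_rng(0)
samples = rng.lognormal(mean=0.0, sigma=1.0, size=100_000)  # hypothetical scenario costs

print("risk-neutral (expectation):", samples.mean())
print("risk-averse (CVaR, alpha=0.95):", cvar(samples))
```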

Forward and Inverse Methods in Optimal Control and Dynamic Game Theory

a feature vector is passed to the inverse optimal control, and the inverse methods then try to learn the weight vector. The weight vector indicates the relative importance of the different features present in the feature vector. In this paper, the weight vector is always assumed to be positive. In reality, one never has access to the true cost function against which the learned cost function could be compared. The best one can do is to learn an underlying cost function which respects the constraints and compare the predicted trajectories from that cost function with the actual trajectories generated by the system of interest. Typically, the expert will have some knowledge about the ways in which a given system is constrained. These constraints should be passed to the inverse methods so that a correct cost function can be learned. If the expert fails to account for the constraints that may be present in a system, it is very likely that the learned cost function will perform poorly. Also, if not enough trajectories are observed, the learned cost function may be incorrect. This is because the inverse methods may not have been able to sample the feasible state/control space completely. Another point to note is that although the feature vectors in the examples provided were chosen to be quadratic for reasons of physical intuition, it is possible to choose the feature vectors to be non-quadratic if the problem so requires. The inverse methods will still be able to provide a locally optimal solution to the residual minimization problem in (5.11).
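As a rough illustration of the residual-minimization idea, the sketch below recovers a positive, normalized weight vector by minimizing a stacked stationarity residual; the residual matrix Phi, the reference weights, and the simplex normalization are illustrative assumptions, and the actual problem (5.11) may be formulated differently.

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical stacked residual matrix Phi: row j holds the optimality-condition
# residual contribution of the observed trajectory, with one column per feature.
rng = np.random.default_rng(1)
k = 4                                    # number of features (placeholder)
w_true = np.array([0.5, 0.3, 0.15, 0.05])
Phi = rng.normal(size=(50, k))
Phi[:, 0] -= Phi @ w_true / w_true[0]    # make w_true an exact null direction of Phi

def residual(w):
    """Squared norm of the stationarity residual, standing in for a problem like (5.11)."""
    return np.sum((Phi @ w) ** 2)

cons = ({"type": "eq", "fun": lambda w: np.sum(w) - 1.0},)   # normalisation
bnds = [(0.0, None)] * k                                      # positivity of the weights
res = minimize(residual, x0=np.full(k, 1.0 / k), bounds=bnds,
               constraints=cons, method="SLSQP")
print("recovered weights:", res.x)
```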

Fast numerical methods for robust nonlinear optimal control under uncertainty

The different types of spectral methods vary in the type of basis functionals and the derivation of the coefficients, which can be carried out analytically, by sampling techniques, or by projection techniques. Our main focus in this thesis is the polynomial chaos (PC) method, a spectral method in which Equation (2.19) represents a generalized global Fourier expansion, called polynomial chaos expansion, in a random polynomial basis. When probability distributions more general than the normal distribution are used, this method is also referred to as generalized polynomial chaos (gPC). First practical uses of the gPC framework can be found in sparse and linear systems, often complicated by high dimensionality in the state or uncertain parameter space originating from a discretization of PDEs or stochastic processes. Application areas include computational fluid dynamics [57, 151, 89], aerodynamic design [134], mechanics [147] and geology [114]. Current work primarily addresses this setting. Examples of recent applications to stochastic optimal control of PDEs include [37, 155, 85, 84], as well as shape and topology optimization [134, 147]. The first works on gPC in control systems with ordinary differential equations empirically studied questions such as convergence and stability on small test problems [73, 75, 55] or linear systems [138, 54]. Applications to real-world optimal control problems can be found in, e.g., [111, 80, 18] and the review article [81]. For recent work on the use of gPC for solving stochastic model predictive control, consider [76, 105, 106]. Fagiano [52] uses gPC to design a controller that induces convergence of the expected value of the state to the origin and satisfies state constraints in expectation.
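To make the projection route concrete, the sketch below computes polynomial chaos coefficients of a quantity of interest depending on a single Gaussian parameter, using probabilists' Hermite polynomials and Gauss-Hermite quadrature (the classical Wiener PC setting); the test function, expansion order, and quadrature size are illustrative choices, not taken from the thesis.

```python
import numpy as np
from numpy.polynomial.hermite_e import hermegauss, hermeval
from math import factorial, sqrt, pi

def gpc_coefficients(u, order, n_quad=40):
    """Coefficients of u(xi) in probabilists' Hermite polynomials He_n for
    xi ~ N(0,1), computed by Galerkin projection with Gauss-Hermite quadrature:
    c_n = E[u(xi) He_n(xi)] / n!   (the denominator is E[He_n^2] = n!)."""
    nodes, weights = hermegauss(n_quad)          # quadrature for weight exp(-x^2/2)
    weights = weights / sqrt(2.0 * pi)           # normalise to the N(0,1) density
    coeffs = []
    for n in range(order + 1):
        he_n = hermeval(nodes, [0.0] * n + [1.0])
        coeffs.append(np.sum(weights * u(nodes) * he_n) / factorial(n))
    return np.array(coeffs)

# Hypothetical quantity of interest depending on one Gaussian parameter.
u = lambda xi: np.exp(0.3 * xi)
c = gpc_coefficients(u, order=6)

# Sanity check: the truncated expansion evaluated at a point should approximate u.
x = 1.2
approx = sum(c[n] * hermeval(x, [0.0] * n + [1.0]) for n in range(len(c)))
print("u(x) =", u(x), " gPC approximation =", approx)
```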

Computational methods for solving optimal industrial process control problems

state and control variables. Amongst these various constraints, the continuous inequality constraints, often involving only state variables, are very difficult to handle directly. Thus, the constraint transcription technique introduced in [80] is used to convert each of them into an equivalent equality constraint in integral form. However, the integrand of each of these equality constraints is non-smooth. Thus, a local smoothing method is used to approximate the non-smooth functions by smooth functions. Consequently, each of these continuous inequality constraints is approximated by a sequence of inequality constraints in integral form, where the integrands are smooth approximating functions. These inequality constraints are known as the inequality constraints in canonical form, as they appear in the same form as the cost function. Then, by using the idea of the penalty function, these inequality constraints are appended to the cost function, forming an augmented cost function. Thus, the constrained time-delay optimal control problem is approximated by a sequence of time-delay optimal parameter selection problems subject to simple bounds on the control parameter vector. Each of them is to be solved as a nonlinear programming problem by using a gradient-based optimization technique, such as the sequential quadratic programming approximation scheme with active set strategy (see, for example, [80,147]). For this, the gradient formula of the augmented cost function with respect to the control parameter vector is derived. On this basis, an effective computational algorithm is developed for the time-delay constrained optimal control problem through solving a sequence of optimal parameter selection problems subject to simple bounds, each of which is regarded as a nonlinear programming problem.
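As an illustration of the local smoothing and penalty steps described above, the sketch below implements a common quadratic smoothing of max(g, 0) and adds the smoothed constraint integral to the cost as a penalty; the smoothing form, the penalty weight, and the sample trajectory are illustrative assumptions and may differ from the scheme of [80].

```python
import numpy as np

def smoothed_positive_part(g, eps=1e-2):
    """Smooth approximation of max(g, 0) used in constraint transcription
    (a common quadratic local smoothing; the exact form in [80] may differ):
      0                     if g < -eps
      (g + eps)^2 / (4 eps) if -eps <= g <= eps
      g                     if g > eps
    """
    g = np.asarray(g, dtype=float)
    return np.where(g < -eps, 0.0,
           np.where(g > eps, g, (g + eps) ** 2 / (4.0 * eps)))

def penalized_cost(base_cost, g_on_grid, t_grid, eps=1e-2, penalty=1e3):
    """Augmented cost: original cost plus a penalty on the smoothed integral of
    the continuous inequality constraint g(t) <= 0 (trapezoidal quadrature)."""
    violation = np.trapz(smoothed_positive_part(g_on_grid, eps), t_grid)
    return base_cost + penalty * violation

# Hypothetical example: constraint x(t) - 1 <= 0 along a sampled trajectory.
t = np.linspace(0.0, 2.0, 201)
x = 1.2 * np.sin(np.pi * t)          # placeholder state trajectory
print(penalized_cost(base_cost=0.5, g_on_grid=x - 1.0, t_grid=t))
```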

Computing interval-valued reliability measures: application of optimal control methods

In (Kozine and Krymsky 2012) the calculus of variations is used again to construct interval-valued probability measures. New constraints are introduced that have relevance for reliability applications: upper and lower bounds on the failure rate. This has a double effect: better precision in the results and avoidance of the troublesome parameter, the upper bound on the time to failure. Another promising outcome of that study was the prospect of departing from the use of a rather general and complicated mathematical tool, the calculus of variations. As was proven, under the constraints on the failure rate, optimal solutions are sought within the class of piecewise exponential distributions. In this way, the problem statement can simply be changed to finding the breaking points at which the probability density function either abruptly jumps up or drops.
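The sketch below evaluates the density of such a piecewise exponential distribution for given breaking points and failure-rate levels; the numerical values are placeholders, and how the optimal breakpoints are found is not addressed here.

```python
import numpy as np

def piecewise_exponential_pdf(t, breakpoints, rates):
    """Density of a lifetime whose failure rate is piecewise constant, switching
    at the given breakpoints (illustrative of the class of distributions on which
    the optimal solutions are sought):  f(t) = lambda(t) * exp(-int_0^t lambda(s) ds).
    `rates` has one entry per segment, i.e. len(breakpoints) + 1 values."""
    edges = np.concatenate(([0.0], breakpoints, [np.inf]))
    t = np.atleast_1d(np.asarray(t, dtype=float))
    seg = np.searchsorted(edges, t, side="right") - 1
    # cumulative hazard accumulated up to the left edge of every segment
    seg_lengths = np.diff(edges[:-1])
    cum_hazard = np.concatenate(([0.0], np.cumsum(np.array(rates[:-1]) * seg_lengths)))
    hazard = cum_hazard[seg] + np.array(rates)[seg] * (t - edges[seg])
    return np.array(rates)[seg] * np.exp(-hazard)

# Hypothetical bounds: the failure rate jumps between 0.2 and 1.0 at t = 2 and t = 5.
print(piecewise_exponential_pdf([1.0, 3.0, 6.0], breakpoints=[2.0, 5.0], rates=[0.2, 1.0, 0.2]))
```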

Regression-based Monte Carlo methods with optimal control variates

that is, with different numbers of time steps. It turned out that the complexity of the MLMC algorithm can at best be reduced to order ε⁻². Further interesting approaches that reduce the complexity via deterministic quadrature-based algorithms can be found in [47] and [48]. Our aim is to derive an efficient algorithm with a complexity rate better than ε⁻². This is achieved by using regression-type algorithms for the construction of control variates. In contrast to the SMC approach, our method takes advantage of the smoothness of µ, σ and f (which is needed for good convergence properties of regression methods) and hence is especially efficient for smooth problems.
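A minimal sketch of a regression-based control variate, under simplifying assumptions (a scalar standard normal input and a polynomial regression whose exact expectation is known in closed form), is given below; it is meant only to illustrate the construction, not the algorithm of the paper.

```python
import numpy as np

def gaussian_moment(k):
    """E[X^k] for X ~ N(0,1): zero for odd k, double factorial (k-1)!! for even k."""
    if k % 2 == 1:
        return 0.0
    return float(np.prod(np.arange(k - 1, 0, -2))) if k > 0 else 1.0

def cv_estimate(f, n=5_000, n_pilot=1_000, degree=6, seed=0):
    """Monte Carlo estimate of E[f(X)], X ~ N(0,1), with a regression-based
    control variate: a polynomial fit m(x) ~ f(x) whose exact Gaussian
    expectation is known, so the estimator is mean(f(X) - m(X)) + E[m(X)]."""
    rng = np.random.default_rng(seed)
    x_pilot = rng.standard_normal(n_pilot)
    coeffs = np.polyfit(x_pilot, f(x_pilot), degree)        # highest power first
    exact_mean_m = sum(c * gaussian_moment(degree - i) for i, c in enumerate(coeffs))
    x = rng.standard_normal(n)
    return np.mean(f(x) - np.polyval(coeffs, x)) + exact_mean_m

f = lambda x: np.exp(0.5 * x)            # smooth test integrand, E[f] = exp(1/8)
plain = np.mean(f(np.random.default_rng(1).standard_normal(5_000)))
print("plain MC:", plain, " with control variate:", cv_estimate(f), " exact:", np.exp(0.125))
```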

Optimal control methods for simulating the perception of causality in young infants

The results from the three sets of simulations highlight both the strengths and limitations of the optimal control model of infant causal perception. There are two major findings. First, OCM quickly learns a set of optimal tracking strategies for following a moving object. Second, when presented with a novel causal event, OCM appropriately anticipates the outcome of partially occluded, but not fully occluded, versions of the event.

Numerical Methods for Mixed-Integer Optimal Control with Combinatorial Constraints

It was a pleasure to work in the stimulating academic environment established at the Interdisciplinary Center for Scientific Computing and the Faculty of Mathematics and Computer Science at Heidelberg University. In particular, I would like to thank the members of the Simulation and Optimization, the Model-Based Optimizing Control and Experimental Design Group for contributing to this unique atmosphere. For the many cups of coffee we shared together and the interesting discussions, I would like to thank Anja Bettendorf, Lilli Bergner, Dr. Holger Diedam, Dr. Kathrin Hatz, María Elena Suárez Garcés, Jürgen Gutekunst, Dr. Christian Hoffmann, Dr. Dennis Janka, Johannes Herold, Dr. Robert Kircheis, Dr. Tom Kraus, Dr. Huu Chuong La, Conrad Leidereiter, Dr. Simon Lenz, Enrique Guerrero Merino, Andreas Meyer, Nadia Said, Dr. Andreas Schmidt, Dr. Andreas Sommer and Leonard Wirsching.

Interval structure of Runge-Kutta methods for solving optimal control problems with uncertainties

Interval methods have been introduced over the years [10]. But until 1996, interval analysis was just summarized in simple propositions [24]. In this paper, a discrete-time interval-based method is proposed for solving OCPs with interval uncertainties. Time discretization of OCPs has been studied since the 1960s [9]. A survey of some of the earlier works can be found in [27]. For instance, properties of Runge-Kutta methods have been analyzed in [19, 30]. Since Runge-Kutta methods have a significant role in the numerical treatment of differential systems [16], the main purpose of our study is to propose an interval version of Runge-Kutta methods for solving OCPs with interval uncertainties. The optimal control problem in this study is
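As a toy illustration of an interval version of a Runge-Kutta scheme, the sketch below propagates an interval enclosure of the state of a scalar linear system with an uncertain parameter through Heun (RK2) steps evaluated in naive interval arithmetic; a validated method would additionally enclose the truncation error, which is omitted here, and the dynamics and intervals are placeholders.

```python
class Interval:
    """Closed interval [lo, hi] with the elementary arithmetic needed below."""
    def __init__(self, lo, hi):
        self.lo, self.hi = float(lo), float(hi)
    def __add__(self, other):
        return Interval(self.lo + other.lo, self.hi + other.hi)
    def __mul__(self, other):
        p = [self.lo * other.lo, self.lo * other.hi, self.hi * other.lo, self.hi * other.hi]
        return Interval(min(p), max(p))
    def scale(self, c):                      # multiplication by a point scalar c >= 0
        return Interval(c * self.lo, c * self.hi)
    def __repr__(self):
        return f"[{self.lo:.4f}, {self.hi:.4f}]"

def interval_rk2_step(f, x, h):
    """One Heun (RK2) step x_{n+1} = x_n + h/2 (k1 + k2), evaluated as a natural
    interval inclusion of the scheme (no bound on the local truncation error)."""
    k1 = f(x)
    k2 = f(x + k1.scale(h))
    return x + (k1 + k2).scale(h / 2.0)

# Hypothetical dynamics x' = a * x with uncertain decay rate a in [-1.1, -0.9].
a = Interval(-1.1, -0.9)
x = Interval(0.95, 1.05)                     # uncertain initial state
for _ in range(10):
    x = interval_rk2_step(lambda y: a * y, x, h=0.1)
print("state enclosure after t = 1:", x)
```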

Stochastic approximation methods for PDE constrained optimal control problems with uncertain parameters

with randomized estimators to reduce the computational effort. The work [VBV18] considers a risk measure that involves only the mean of the objective functional (hereafter named mean-based risk), with an additional penalty on the variance of the state, and proposes a gradient type method, in which the expectation of the gradient is computed by a multilevel Monte Carlo method. In [BvW11], the authors also consider a mean-based risk problem and propose a reduced basis method on the space of controls to dramatically reduce the computational effort. In the work [Kou12], the author presents a more general type of OCP, using the general notion of a risk measure, and derives the corresponding optimality system of PDEs to be solved. For its numerical solution, a trust-region Newton conjugate gradient algorithm is proposed in [KHRvBW13], combined with an adaptive sparse grid collocation for the discretization of the PDE in the stochastic space. The work [KS16] considers derivative-based optimization methods for the robust CVaR risk measure, which are built upon introducing smooth approximations to the CVaR. Finally, in the work [GLL11], the authors consider a boundary OCP where the deterministic control appears as a Neumann boundary condition. Using again a mean-based risk, they derive an optimality system of equations and provide a complete error analysis of the finite element approximation, as well as of the random parameter space approximation.

Optimal Control of Microgrid Networks Using Gradient Descent and Differential Evolution Methods

Abstract - This thesis presents the application of various methods for the optimal control of power flow in a network of microgrids (MGs). The methods used for solving the mathematical formulation of microgrid control are the steepest descent method, the Newton method, and the differential evolution method. The first two are gradient-based methods, whereas differential evolution is an evolutionary algorithm. A comparative study is performed by calculating the cost function for each of the three methods. We consider a case of four microgrids collaborating in a network. In the proposed work, the optimal control of a microgrid system with energy storage systems is considered. The gradient-based methods and the evolutionary algorithm are applied, and optimized results for the energy storage systems are obtained. The results obtained are observed to be good and comparable across the three methods.
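For illustration, the sketch below compares steepest descent and Newton iterations on a toy quadratic dispatch cost for four units; the cost model, matrices, and step size are placeholders and not the microgrid formulation used in the thesis (differential evolution, being population-based, is omitted here).

```python
import numpy as np

# Toy quadratic dispatch cost J(p) = 1/2 p^T Q p - c^T p for four microgrids.
# Q, c, and the step size are placeholders, not the cost model of the thesis.
Q = np.diag([4.0, 3.0, 2.0, 1.0])
c = np.array([1.0, 2.0, 3.0, 4.0])
grad = lambda p: Q @ p - c

def steepest_descent(p, lr=0.2, iters=100):
    for _ in range(iters):
        p = p - lr * grad(p)
    return p

def newton(p, iters=5):
    for _ in range(iters):
        p = p - np.linalg.solve(Q, grad(p))   # Hessian of a quadratic cost is Q
    return p

p0 = np.zeros(4)
print("steepest descent:", steepest_descent(p0))
print("Newton:          ", newton(p0))
print("exact minimiser: ", np.linalg.solve(Q, c))
```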

Superconvergence of semidiscrete finite element methods for bilinear parabolic optimal control problems

optimal control problems (see, e.g., [–]). Although bilinear optimal control problems are frequently met in applications, they are much more difficult to handle in comparison to linear or semilinear cases. There is little work on bilinear optimal control problems. Recently, Yang et al. [] investigated a priori error estimates and superconvergence of finite element methods for bilinear elliptic optimal control problems. Hence, our results on bilinear parabolic optimal control problems are new.

Analytical and Numerical Methods for Optimal Control Problems on Manifolds and Lie Groups.

1.1.1 Combined Homotopy and Neighboring Extremal Optimal Control
For most OCPs in engineering applications, it is difficult to obtain analytical or closed-form solutions using Pontryagin's maximum principle (PMP) or dynamic programming (DP). Consequently, iterative/numerical methods are utilized for solving such OCPs [8], [78]. Two methods that have been used independently in optimal control theory are homotopy (see, e.g., [10], [18], [51], [79], [91], [100]) and neighboring extremal optimal control (NEOC) (see, e.g., [14]). However, the combination of these two techniques has not been investigated. With this motivation, we combine these two techniques and arrive at a method for obtaining sub-optimal control in OCPs defined on a Euclidean space.
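A generic homotopy continuation sketch, not the combined homotopy/NEOC scheme of the thesis, is shown below: the solution of an easy problem is continued toward a harder target problem by warm-started Newton solves along a convex-combination homotopy; the specific equations are illustrative placeholders.

```python
import numpy as np

def newton(F, dF, x, tol=1e-12, iters=50):
    """Plain Newton iteration for a scalar root-finding problem."""
    for _ in range(iters):
        step = F(x) / dF(x)
        x -= step
        if abs(step) < tol:
            break
    return x

def homotopy_solve(F, dF, x0, n_steps=20):
    """Numerical continuation: solve H(x, s) = 0 for s = 0 -> 1, warm-starting
    each Newton solve from the solution at the previous homotopy parameter."""
    x = x0
    for s in np.linspace(0.0, 1.0, n_steps + 1):
        x = newton(lambda y: F(y, s), lambda y: dF(y, s), x)
    return x

# Convex-combination homotopy between an easy equation (x - 2 = 0)
# and a hypothetical target equation (x^3 - 2x - 5 = 0).
F = lambda x, s: (1 - s) * (x - 2.0) + s * (x**3 - 2.0 * x - 5.0)
dF = lambda x, s: (1 - s) + s * (3.0 * x**2 - 2.0)
print("tracked root of the target problem:", homotopy_solve(F, dF, x0=2.0))
```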

Proximal Methods for Elliptic Optimal Control Problems with Sparsity Cost Functional

Notice that L := L(Ĵ₂) represents the smallest value of L such that (4.5) is satisfied. We remark that the discussion that follows is valid for L ≥ L(Ĵ₂) as in Lemma 4.5. However, as we discuss below, the efficiency of our proximal schemes depends on how close the chosen L is to the minimal, optimal value L(Ĵ₂). Now, since this value is usually not available analytically, we discuss and implement below some numerical strategies for determining a sufficiently accurate approximation of L(Ĵ₂). In particular, we consider a power iteration [21], and the backtracking approach discussed in Remark 5.1.
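As an illustration of the power-iteration strategy, the sketch below estimates the largest eigenvalue of a symmetric operator from matrix-vector products only; for a quadratic smooth part this eigenvalue is the Lipschitz constant of its gradient. The operator used here is a small placeholder matrix, not the reduced operator behind Ĵ₂.

```python
import numpy as np

def power_iteration(apply_A, dim, iters=100, tol=1e-10, seed=0):
    """Estimate the largest eigenvalue of a symmetric positive semidefinite
    operator given only matrix-vector products apply_A(v). For a quadratic
    functional with Hessian A, this eigenvalue is the Lipschitz constant L
    of the gradient."""
    rng = np.random.default_rng(seed)
    v = rng.standard_normal(dim)
    v /= np.linalg.norm(v)
    lam = 0.0
    for _ in range(iters):
        w = apply_A(v)
        lam_new = v @ w                     # Rayleigh quotient
        nw = np.linalg.norm(w)
        if nw == 0.0:
            return 0.0
        v = w / nw
        if abs(lam_new - lam) < tol * max(1.0, abs(lam_new)):
            return lam_new
        lam = lam_new
    return lam

# Illustrative symmetric operator (placeholder for the reduced Hessian).
A = np.array([[4.0, 1.0, 0.0], [1.0, 3.0, 0.5], [0.0, 0.5, 2.0]])
L_est = power_iteration(lambda v: A @ v, dim=3)
print("estimated L:", L_est, " largest eigenvalue:", np.linalg.eigvalsh(A).max())
```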

Semi-smooth Newton methods for time-optimal control for a class of odes

Time-optimal control problems for a class of linear multi-input systems are considered.

Globalization of nonsmooth Newton methods for optimal control problems

Carsten Gräser. Abstract: We present a new approach for the globalization of the primal-dual active set method, or equivalently the nonsmooth Newton method, applied to an optimal control problem. The basic result is the equivalence of this method to a nonsmooth Newton method applied to the nonlinear Schur complement of the optimality system. Our approach does not require the construction of an additional merit function or an additional descent direction. The nonsmooth Newton directions are naturally appropriate descent directions for a smooth dual energy and guarantee global convergence if standard damping methods are applied.
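For context, the sketch below shows a generic primal-dual active set iteration for a bound-constrained quadratic program, which is the nonsmooth Newton method referred to above; it does not include the Schur-complement globalization or the dual-energy damping proposed in the paper, and the problem data are placeholders.

```python
import numpy as np

def pdas(A, b, psi, c=1.0, max_iter=50):
    """Primal-dual active set method for  min 1/2 u^T A u - b^T u  s.t.  u <= psi
    (A symmetric positive definite). Equivalent to a nonsmooth Newton method
    applied to the complementarity relation lambda = max(0, lambda + c*(u - psi))."""
    n = len(b)
    u, lam = np.zeros(n), np.zeros(n)
    for _ in range(max_iter):
        active = lam + c * (u - psi) > 0          # predicted active constraints
        inactive = ~active
        u_new = np.empty(n)
        u_new[active] = psi[active]
        u_new[inactive] = np.linalg.solve(
            A[np.ix_(inactive, inactive)],
            b[inactive] - A[np.ix_(inactive, active)] @ psi[active])
        lam_new = np.zeros(n)
        lam_new[active] = b[active] - A[active] @ u_new
        if np.array_equal(active, lam_new + c * (u_new - psi) > 0):
            return u_new, lam_new                  # active set has settled
        u, lam = u_new, lam_new
    return u, lam

# Small illustrative problem (the matrices are placeholders, not from the paper).
A = np.array([[2.0, -1.0, 0.0], [-1.0, 2.0, -1.0], [0.0, -1.0, 2.0]])
b = np.array([3.0, 0.0, 3.0])
psi = np.array([1.0, 1.0, 1.0])
u, lam = pdas(A, b, psi)
print("u =", u, " lambda =", lam)
```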

Direct Transcription Methods in Optimal Control: Theory and Practice

Section 4.3 discusses the problem of initializing numerical algorithms for solving optimal control problems. The idea of using order reduction for initialization was suggested by the author, who worked on it independently in early 2004. Experiments in Matlab and SOCS showed the shortcomings of the approach originally adopted by the author. The section discusses possible solutions. Using monitor functions to initialize SOCS was a joint project of Dr. Campbell and a former student Mr. Kalla. In the spring of 2004, the author debugged, organized and rewrote some of the Matlab code written by Mr. Kalla and noted some numerical phenomena that had not been observed before.

Optimal Control Methods and the Variational Approach to Differential Equations

Therefore, (1.3)-(1.5) is, in fact, a reformulation of the classical Dirichlet principle associated with the biharmonic operator. Consequently, the control variational method is a modification of the classical variational approach via the use of optimal control theory. From this point of view, it is also important to notice that our method is essentially different from the optimal control approaches obtained via the least-squares fitting to the data procedure, applied in various situations. Moreover, the control variational approach allows many variants of such modifications. This flexibility may be very advantageous in certain applications. We shall exemplify it in the next section via some abstract schemes. Section 3 is devoted to applications to unilateral problems, where constrained control problems have to be used. The last section briefly discusses the case of time-dependent problems, which is more difficult and still under development. The paper ends with a short conclusion.
