# Genetic Algorithms in Stochastic Optimization and Applications in Power Electronics

### Genetic Algorithms in Stochastic Optimization and Applications in Power Electronics

CHAPTER 1 INTRODUCTION. Genetic Algorithms (GAs) are widely used in many fields, ranging from mathematics and physics to engineering, computational science, manufacturing, and even economics [1]. Stochastic optimization problems are important in power electronics and control systems [2], and most designs require choosing optimal parameters to ensure maximum control effect or minimum noise impact; however, they are hard to solve with exhaustive search, especially when the search domain covers a large area or is infinite [3]. Earlier researchers proposed random search algorithms, such as the Stochastic Ruler (SR) algorithm [4] and the Stochastic Comparison (SC) algorithm [5]. However, these algorithms are difficult to use in applications because it is hard to decide when to terminate them. Recently, meta-heuristic methods such as Genetic Algorithms, Particle Swarm Optimization, and Simulated Annealing have drawn researchers' attention. Combining these search methods with statistical analysis techniques has made progress in solving noisy function optimization.

### Stochastic Approximation Algorithms With Applications To Particle Swarm Optimization, Adaptive Optimization, And Consensus

Neural Networks with Markovian Switching. Neural Processing Letters (2013), 1–14. 5. Quan Yuan, Zhiqing He, A property of eigenvalue bounds for a class of symmetric tridiagonal interval matrices, Numerical Linear Algebra with Applications, 2010, 233: 1083–1090. 6. Quan Yuan, Feng Qian, Wenli Du, A Hybrid Genetic Algorithm with the Baldwin Effect,

### Novel particle swarm optimization algorithms with applications in power systems

The four states are Exploration, Exploitation, Convergence, and Jumping-out. Some limitations of APSO have been highlighted, and modifications have been made to develop an enhanced version, named Switching PSO (SPSO). Here the first state is considered Convergence, the second Exploration, the third Exploitation, and the fourth Jumping-out. A new mode-dependent switching parameter is introduced, and a suitable value for the acceleration coefficients is assigned according to the current state of the particle: a smaller value in the convergence state and a larger value in the jumping-out state. The inertia weight ω also depends on the evolutionary factor and the current state of the particle, and an adaptive value of ω is assigned in each iteration accordingly. SPSO has been tested on several mathematical benchmark functions and has also been applied to genetic regulatory networks (GRNs) for parameter identification. Recently, in [78], SPSO was combined with the concept of a Wavelet Neural Network (WNN) and applied to parameter optimization in face direction recognition. According to [136], SPSO has been modified and applied in a hybrid manner by introducing differential evolution (DE), and the resulting algorithm was applied to lateral flow immunoassay analysis. In [9], SPSO was used with a hybrid EKF for parameter estimation in lateral flow immunoassay. Following an extensive search for applications of SPSO, we have found only a very limited number. The idea of a Markovian jumping and switching mechanism has been widely used in other techniques, but after an extensive search and study of the literature, we have noticed that SPSO has never been applied in power systems.
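The state-dependent switching of the acceleration coefficients and inertia weight described above can be sketched as follows. This is a minimal illustrative sketch, not the published SPSO: the state estimate (iteration progress), the thresholds, and the coefficient values are all assumptions chosen for readability.

```python
import random

# Hedged sketch of a switching-PSO-style update: the inertia weight w and the
# acceleration coefficients (c1, c2) are switched according to a crude "state"
# proxy. All parameter values here are illustrative assumptions.
def switching_pso(f, dim=2, n=20, iters=100, seed=0):
    rng = random.Random(seed)
    pos = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n)]
    vel = [[0.0] * dim for _ in range(n)]
    pbest = [p[:] for p in pos]
    gbest = min(pbest, key=f)[:]
    for t in range(iters):
        progress = t / iters          # crude evolutionary-state proxy
        if progress < 0.5:            # "exploration": favor personal bests
            w, c1, c2 = 0.9, 2.0, 1.0
        else:                         # "convergence": favor the global best
            w, c1, c2 = 0.4, 1.0, 2.0
        for i in range(n):
            for d in range(dim):
                vel[i][d] = (w * vel[i][d]
                             + c1 * rng.random() * (pbest[i][d] - pos[i][d])
                             + c2 * rng.random() * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            if f(pos[i]) < f(pbest[i]):
                pbest[i] = pos[i][:]
                if f(pbest[i]) < f(gbest):
                    gbest = pbest[i][:]
    return gbest

sphere = lambda x: sum(v * v for v in x)
best = switching_pso(sphere)
```

In the real SPSO, the state is inferred from an evolutionary factor computed from inter-particle distances rather than from the iteration counter used here.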

### System Architecture Optimization Using Hidden Genes Genetic Algorithms with Applications in Space Trajectory Optimization

mechanism C and logic B) may not perform well when the number of hidden genes in the optimal solution is low or zero. On the other hand, some algorithms that favor a higher number of hidden genes (for example, mechanism B with logic C) do not perform well when the optimal solution has a high number of hidden genes. In some sense, the tags' evolution in these algorithms ignores (to some degree) the specifics of the problem being solved. The performance of each algorithm depends on the problem being solved, and in general we cannot claim that any algorithm performs well on all problems. However, in mechanisms A, E, F, G, H, logic A, and the alleles concept, the stochastic processes (crossover and mutation) have more effect on the evolution of the tags than the number of genes does, and they show better relative performance on all the tested problems. In logic A, the hidden tags are distributed to both children by assuming both the Hidden-OR and Active-OR concepts.
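The hidden-genes representation underlying these mechanisms can be sketched briefly. This is a generic illustration of the concept, assuming a simple one-point crossover and tag-flip mutation; the specific mechanisms (A–H) and logics (A–C) discussed above use their own tag-evolution rules, which are not reproduced here.

```python
import random

# Hedged sketch of the hidden-genes idea: each gene carries a tag; tagged
# ("hidden") genes are excluded from fitness evaluation but still undergo
# crossover and mutation, so variable-size designs can be searched with
# fixed-length chromosomes. The operators below are illustrative assumptions.
def active_genes(genes, tags):
    return [g for g, t in zip(genes, tags) if t == 0]  # tag 1 means hidden

def crossover(p1, p2, rng):
    # one-point crossover applied to genes and tags together
    (g1, t1), (g2, t2) = p1, p2
    cut = rng.randrange(1, len(g1))
    return (g1[:cut] + g2[cut:], t1[:cut] + t2[cut:])

def mutate(ind, rng, rate=0.1):
    genes, tags = ind
    genes = [g + rng.gauss(0, 0.2) if rng.random() < rate else g for g in genes]
    tags = [1 - t if rng.random() < rate else t for t in tags]  # flip visibility
    return genes, tags

rng = random.Random(0)
parent1 = ([0.5, 1.2, -0.3, 2.0], [0, 1, 0, 1])
parent2 = ([1.0, -0.5, 0.7, 0.1], [1, 0, 0, 0])
child = mutate(crossover(parent1, parent2, rng), rng)
```

Because hidden genes are carried along silently, a mutation that flips a tag can reactivate a gene later, which is what lets the chromosome length stay fixed while the effective design size evolves.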

### Parallel algorithms for two-stage stochastic optimization

There are a variety of applications that can be formulated as two-stage stochastic integer programs, for example, manufacturing [16], energy planning [17], logistics [18], etc. Gangammanavar et al. [19] propose a stochastic programming framework for the economic dispatch problem to address the integration of renewable energy resources into power systems. Munoz et al. [20] propose an approach for solving the stochastic transmission and generation investment planning problem in which they reduce the number of scenarios by clustering them and using a representative scenario (the centroid) from each cluster. Yue et al. [21] show that a stochastic programming model for scheduling adaptive signal timing plans at oversaturated traffic signals outperforms deterministic linear programming in total vehicle delay. Park et al. [22] propose a two-stage stochastic integer model for least-cost generation capacity expansion to control carbon dioxide (CO2) emissions. Ahmed et al. [23] propose a multi-stage stochastic integer programming approach for the problem of capacity expansion under uncertainty. Kim and Mehrotra [24] employ a two-stage stochastic integer programming approach for an integrated staffing and scheduling problem with application to nurse management. Ariyawansa et al. [25] provide free web access to a collection of stochastic programming test problems. SIPLIB [26] is another collection of test problems to facilitate computational and algorithmic research in stochastic integer programming. The depth and breadth of applications of stochastic optimization can be found in [27].
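To make the two-stage structure concrete, here is a minimal newsvendor-style sketch with hypothetical numbers: the first-stage decision (an order quantity) is made before the uncertainty resolves, and the second stage accounts for each demand scenario with its probability. All prices and scenarios below are invented for illustration.

```python
# Minimal two-stage stochastic program sketch (newsvendor-style; all numbers
# are hypothetical). First stage: choose an order quantity x. Second stage:
# revenue and salvage are realized per demand scenario.
scenarios = [(0.3, 80), (0.5, 100), (0.2, 130)]  # (probability, demand)
cost, price, salvage = 4.0, 10.0, 1.0

def expected_profit(x):
    total = 0.0
    for p, d in scenarios:
        sold = min(x, d)
        leftover = x - sold
        total += p * (price * sold + salvage * leftover - cost * x)
    return total

# With a small discrete decision space, the first stage can be solved by
# enumeration; real two-stage integer programs need decomposition methods.
best_x = max(range(0, 151), key=expected_profit)
```

Here the optimal order quantity is driven by the critical ratio (price − cost)/(price − salvage): order up to the smallest demand level whose cumulative probability exceeds it.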

### Genetic Algorithms: Basic Concept and Applications

The Genetic Algorithm is a class of search techniques that uses the mechanisms of natural selection and genetics to conduct a global search of the solution space; it can handle common characteristics of economic load dispatch that cannot be handled by other optimization techniques such as hill climbing, indirect and direct calculus-based methods, and random search. Chen [16] found that, for the hydro plant dispatch problem, an Artificial Neural Network combined with a Genetic Algorithm provides a more optimal solution than the conventional dynamic programming method. Much work has already been done to optimize the cost of electricity from thermal power plants, and many efforts have been made to apply GAs to economic load dispatch problems. In this paper, we apply a GA to optimize the cost of electricity in hydro power plants (Figs. 9 and 10) without any constraints. The current research shows how the cost curve varies in a non-linear manner when no constraints are considered.
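The selection–crossover–mutation loop at the heart of such a GA can be sketched in a few lines. This is a generic minimal sketch, not the paper's dispatch formulation: the quadratic "cost curve" f(x) = (x − 3)², the tournament size, and all rates are illustrative assumptions.

```python
import random

# Minimal generational GA sketch: minimize a hypothetical quadratic cost
# curve f(x) = (x - 3)^2 over x in [-10, 10]. All parameters are assumptions.
def fitness(x):
    return -(x - 3.0) ** 2  # the GA maximizes fitness, so negate the cost

def run_ga(pop_size=30, generations=60, mut_rate=0.2, seed=0):
    rng = random.Random(seed)
    pop = [rng.uniform(-10, 10) for _ in range(pop_size)]
    for _ in range(generations):
        new_pop = []
        for _ in range(pop_size):
            # tournament selection of two parents (tournament size 3)
            a, b = (max(rng.sample(pop, 3), key=fitness) for _ in range(2))
            child = 0.5 * (a + b)          # arithmetic crossover
            if rng.random() < mut_rate:    # Gaussian mutation
                child += rng.gauss(0, 0.5)
            new_pop.append(child)
        pop = new_pop
    return max(pop, key=fitness)

best = run_ga()
```

A real economic load dispatch GA would replace `fitness` with the negated fuel- or water-cost function and encode one decision variable per generating unit, with penalty terms for any constraints.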

### Hierarchical planning and stochastic optimization algorithms with applications to self-driving vehicles and finance

Throughout a work-cycle, a field may be ploughed, sowed or planted, fertilized, sprayed, and harvested. In general, operating widths vary. However, some operations are conducted multiple times in a repetitive manner using the same operating width, e.g., 36 m. This implies the possibility of reusing a particular path plan repeatedly. In nature, field contours are often irregularly shaped and include non-plantable islands prohibited from trespassing (trees, power line poles or masts, and the like). Especially then, it is not obvious how to efficiently plan paths for complete field coverage. Working sequentially lane-by-lane has the advantage of being intuitive and simple to implement, and it enables the operator to visually detect and control the area already worked, for example during ploughing, thus avoiding overlap. More efficient field coverage and path planning techniques can be developed that do not necessarily require sequential lane-by-lane operations. With the advent of modern sensor and actuation technologies, these alternative, more efficient field coverage path plans can be realized, and may even be executed by autonomous agricultural machinery.

### Approximation Algorithms For Stochastic Combinatorial Optimization, With Applications In Sustainability

failure objective (defined in Chapter 2) subject to a partition-matroid constraint. We exploit the Maximum Coverage property (which is strictly stronger than submodularity) to two key ends. First, to obtain a deterministic (1 − 1/e)-approximation for the base case where vaccination (a.k.a. colony establishment) is perfectly effective and spreading (matching the previous-best randomized approximation guarantee for that problem from [3]). Second, in the probabilistic-failure case, we can evaluate the objective function value in polynomial time (which is not necessarily possible if only submodularity holds), so that an existing algorithm of Calinescu, Chekuri, Pál, and Vondrák [11] gives a randomized (1 − 1/e)-approximation. Our reductions recognize that the partition-matroid constraint need not be wasted constraining action per time step (this is captured by making the failure probability a function of time); instead it is used to greatly expand the descriptive power of the model without losing (almost any) theoretical traction.
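For readers unfamiliar with the (1 − 1/e) bound, the classical setting where it arises is maximum coverage under a cardinality constraint, where the simple greedy rule already achieves it. The sketch below illustrates that baseline only; it is not the chapter's algorithm, and the sets and budget are invented for illustration (the matroid-constrained case above needs the continuous-greedy machinery of [11] instead).

```python
# Illustrative greedy maximum-coverage sketch: pick k sets maximizing the
# number of newly covered elements. The greedy rule gives a classical
# (1 - 1/e) approximation for this cardinality-constrained objective.
def greedy_max_coverage(sets, k):
    covered, chosen = set(), []
    for _ in range(k):
        best = max(sets, key=lambda s: len(s - covered))  # largest marginal gain
        if not (best - covered):
            break  # no set adds anything new
        chosen.append(best)
        covered |= best
    return chosen, covered

sets = [{1, 2, 3}, {3, 4}, {4, 5, 6, 7}, {1, 7}]
chosen, covered = greedy_max_coverage(sets, 2)
```

The Maximum Coverage property mentioned above is exactly what makes the objective's value exactly computable, which is why the chapter can plug it into an existing randomized rounding framework.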

### Stochastic Majorization-Minimization Algorithms for Large-Scale Optimization

Majorization-minimization algorithms consist of iteratively minimizing a majorizing surrogate of an objective function. Because of its simplicity and its wide applicability, this principle has been very popular in statistics and in signal processing. In this paper, we intend to make this principle scalable. We introduce a stochastic majorization-minimization scheme which is able to deal with large-scale or possibly infinite data sets. When applied to convex optimization problems under suitable assumptions, we show that it achieves an expected convergence rate of O(1/√n) after n iterations, and of O(1/n) for strongly convex functions. Equally important, our scheme almost surely converges to stationary points for a large class of non-convex problems. We develop several efficient algorithms based on our framework. First, we propose a new stochastic proximal gradient method, which experimentally matches state-of-the-art solvers for large-scale ℓ1-logistic regression. Second, we develop an online DC programming algorithm for non-convex sparse estimation. Finally, we demonstrate the effectiveness of our approach for solving large-scale structured matrix factorization problems.
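A stochastic proximal gradient iteration for ℓ1-logistic regression, of the general kind the abstract mentions, can be sketched as follows. This is a generic illustrative sketch under assumed step sizes and toy data, not the paper's scheme (which builds surrogates with their own averaging rules).

```python
import math
import random

# Hedged sketch of a stochastic proximal-gradient step for l1-regularized
# logistic regression: a stochastic gradient step on the smooth loss followed
# by soft-thresholding (the proximal operator of the l1 penalty).
def soft_threshold(w, t):
    return [math.copysign(max(abs(x) - t, 0.0), x) for x in w]

def sgd_prox_l1(data, lam=0.01, lr=0.1, steps=50, seed=0):
    rng = random.Random(seed)
    w = [0.0] * len(data[0][0])
    for _ in range(steps):
        x, y = data[rng.randrange(len(data))]     # sample one example
        margin = y * sum(wi * xi for wi, xi in zip(w, x))
        g = -y / (1.0 + math.exp(margin))         # logistic-loss gradient factor
        w = [wi - lr * g * xi for wi, xi in zip(w, x)]
        w = soft_threshold(w, lr * lam)           # proximal step for the penalty
    return w

# Toy data: labels in {-1, +1}, separable along the first coordinate.
data = [([1.0, 0.0], 1), ([-1.0, 0.0], -1), ([0.9, 0.1], 1), ([-0.8, -0.2], -1)]
w = sgd_prox_l1(data)
```

The soft-thresholding step is what produces exact zeros in the weight vector, which is the point of the ℓ1 penalty in sparse estimation.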

### Evolutionary Algorithms for Multiobjective Optimization with Applications in Portfolio Optimization

To summarize, one of the main shortcomings of classical methods is that some prior knowledge of the problem is required to assign reasonable weights. In classical methods, it is not possible to find multiple solutions in a single run, nor to find all the Pareto optimal solutions. This causes the decision maker to miss out on other desirable solutions to a problem. However, these algorithms are known to converge to a Pareto optimal solution of the multiobjective problem (Stewart [29]). For more details on classical methods, we refer the reader to Deb [5], Coello Coello et al. [2], and Miettinen [17].

### Approximation algorithms for 2-stage stochastic optimization problems

distributions (described in the input). Typically, there is an underlying set of elements (clients) and a scenario is generated by independent choices (setting the demands) made for each element. Independent distributions allow one to succinctly specify a class of distributions with exponentially many scenarios, and have been used in the Computer Science community to model uncertainty in various settings [13,18,5]. However, many of the underlying stochastic applications often involve correlated data (e.g., in stochastic facility location the client demands are expected to be correlated due to economic and/or geographic factors), which the independent-activation model clearly does not capture. A more general way of specifying the distribution is the black-box model, where the distribution is specified only via a procedure (a "black box") that one can use to independently sample scenarios from the distribution. In this model, each procedure call is treated as an elementary operation, and the running time of an algorithm is measured in terms of the number of procedure calls. The black-box model incorporates the desirable aspects of both the previous models: it allows one to specify distributions with exponentially many scenarios and correlation in a compact way that makes it reasonable to talk about polynomial-time algorithms.
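The black-box access model can be made concrete with a small sketch: the algorithm never sees the distribution, only a `sample()` procedure, and complexity is counted in sampling calls. The particular correlated-demand distribution below is a hypothetical example, chosen to show correlation that an independent-activation model could not express.

```python
import random

# Illustrative black-box scenario model: the optimizer interacts with the
# distribution ONLY through sample() calls, and running time is measured in
# the number of such calls. The distribution itself is a made-up example.
def make_sampler(seed=0):
    rng = random.Random(seed)
    def sample():
        # Correlated demands: one shared "economy" factor drives all clients,
        # so scenarios cannot be generated by independent per-client choices.
        economy = rng.gauss(0, 1)
        return [max(0.0, 5.0 + economy + rng.gauss(0, 0.5)) for _ in range(3)]
    return sample

def estimate_expected_total_demand(sample, n_calls=10_000):
    # A sample-average estimate, as an algorithm in this model would compute it.
    return sum(sum(sample()) for _ in range(n_calls)) / n_calls

sample = make_sampler()
est = estimate_expected_total_demand(sample)
```

Note that the sampler here hides a joint distribution with correlation across clients; any number of scenarios, however exponential, fits behind the same one-procedure interface.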

### A Framework for Analyzing Stochastic Optimization Algorithms Under Dependence

One of the key obstacles in developing stochastic extensions of quasi-Newton methods is the necessity of selecting appropriate step sizes. The analysis of the global convergence of the BFGS method [10] and other members of Broyden's convex class [11] assumes that an Armijo-Wolfe inexact line search is used. This is rather undesirable for a stochastic algorithm, as line search is both computationally expensive and difficult to analyze in a probabilistic setting. However, there is a special class of functions, the self-concordant functions, whose properties allow us to compute an adaptive step size based on local curvature and thereby avoid performing line searches. In [12], it is shown that the BFGS method [13–16] with adaptive step sizes converges superlinearly when applied to self-concordant functions.
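The curvature-based step size for self-concordant functions can be illustrated with the classical damped-Newton rule t = 1/(1 + λ), where λ is the Newton decrement; no line search is needed because self-concordance guarantees this step is safe. The sketch below uses plain Newton curvature on a one-dimensional example, not the BFGS approximation of [12], and the test function f(x) = x − log x is a standard self-concordant example with minimizer x = 1.

```python
import math

# Damped-Newton sketch with the self-concordant step size t = 1/(1 + lambda),
# where lambda = |f'(x)| / sqrt(f''(x)) is the Newton decrement. Illustrative
# only: f(x) = x - log(x) is self-concordant with its minimum at x = 1.
def damped_newton(x0=5.0, iters=20):
    x = x0
    for _ in range(iters):
        g = 1.0 - 1.0 / x              # f'(x)
        h = 1.0 / (x * x)              # f''(x), always positive for x > 0
        lam = abs(g) / math.sqrt(h)    # Newton decrement
        x = x - (1.0 / (1.0 + lam)) * g / h  # damped Newton step, no line search
    return x

x_star = damped_newton()
```

The damping factor shrinks the step when the decrement is large (far from the optimum) and approaches a full Newton step near the solution, which is what preserves fast local convergence without any line search.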

### Stochastic Optimization for Big Data Analytics: Algorithms and Libraries

- Stochastic Gradient Descent (Pegasos) for L1-SVM (primal)
- Stochastic Dual Coordinate Ascent (SDCA) for L2-SVM (dual)
- Stochastic Average Gradient (SAG) for Logistic Regression
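The first entry in the list, the Pegasos-style primal SGD update for the hinge-loss SVM, can be sketched as follows. This is a minimal sketch under assumed toy data; it uses the Pegasos learning-rate rule η_t = 1/(λt) but omits refinements such as the optional projection step.

```python
import random

# Hedged Pegasos-style SGD sketch for the primal hinge-loss SVM: at step t,
# sample one example, take a subgradient step with eta_t = 1 / (lam * t).
def pegasos(data, lam=0.1, iters=200, seed=0):
    rng = random.Random(seed)
    w = [0.0] * len(data[0][0])
    for t in range(1, iters + 1):
        x, y = data[rng.randrange(len(data))]
        eta = 1.0 / (lam * t)
        margin = y * sum(wi * xi for wi, xi in zip(w, x))
        if margin < 1:   # hinge loss active: subgradient includes -y * x
            w = [(1 - eta * lam) * wi + eta * y * xi for wi, xi in zip(w, x)]
        else:            # only the regularizer contributes
            w = [(1 - eta * lam) * wi for wi in w]
    return w

# Toy separable data with labels in {-1, +1}.
data = [([1.0, 1.0], 1), ([-1.0, -1.0], -1), ([0.8, 1.2], 1), ([-1.1, -0.7], -1)]
w = pegasos(data)
```

SDCA and SAG differ mainly in what they keep between iterations: SDCA maintains dual variables per example, while SAG stores and averages per-example gradients.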

### Area and Power Optimization of 802.15.4a UWB Pulse Low Noise Amplifiers by Genetic Algorithms

multicore machines are very powerful, they are not guaranteed to converge to the best solution. Thus, the initial pencil-and-paper design is necessary. Reference [1] provides analytical derivations for a similar cascode LNA, and the analysis can be used as a good starting point. The procedure here is a little different, since the input match is a first-order allpass section rather than a second-order bandpass filter. Sub-micron transistor models are very complex [10]. For accurate results, the circuit must be simulated using the SPICE models, or the BSIM models for shorter-channel-length devices. Keep in mind that the amplifier still needs to be tuned using a simulator to include the above-mentioned effects. The Genetic Algorithm was chosen to perform the optimization of this LNA. The GA is a very robust algorithm that has been discussed extensively in the literature [11].

### High Power Medium Frequency Magnetics for Power Electronics Applications.

Passive device designs must meet the stringent requirements brought about by the continued increase in capabilities and adoption rates of wide bandgap power semiconductors. Specifically, magnetic component designs need to consider new operating spaces and to take advantage of new soft magnetic materials and material processing. Generally, power converter controls and topologies are moving towards very high efficiencies through soft switching techniques that greatly reduce the losses associated with the switching devices. This in turn means that a dominant loss component of a power converter will be the passive devices, and specifically the magnetic components [124], [125], [126]. Further, magnetic components will continue to play a critical role in power converters due to their energy storage and isolation capabilities. The high reliability and wide design envelope of magnetic devices also make them critically important elements of any power converter design. It is because of the aforementioned factors that both magnetic design techniques and the components used need to be fully

### Contributions to the Theory and Applications of Genetic Algorithms

The second part of the thesis is more synthetic and tries to examine some applications which might profit from the power of genetic search. This was attempted not only because of the limitations of the results in the first part, but also due to my recognition of the difficulties in extending that analysis to address more fundamental questions related to the dynamical behaviour of GAs. However, the reason behind these attempts is more general than a quick browse through them may reveal. One can distinguish a gradation of problems in terms of the effectiveness with which they might be treated by a genetic algorithm approach. The simplest problems are those that require optimisation of a function given in "closed form" by combinations of simple expressions such as +1 or √y. The most relevant fact here is that parametrisation of the search space is immediate and hence, at least in principle, a GA used as a function optimiser may offer an alternative when no other approach proves to be effective. At the other extreme stand problems for which no parametrisation may exist and hence that may be impossible to model via a GA approach. In between these extremes are those problems that do not appear to be amenable to a GA treatment and yet, through careful modelling, can harness at least some of the power of GAs as search algorithms. The work reported in this thesis concentrates on problems of this class. We focus on the use of GAs as experimental tools to assist the design of different systems as part of an exploratory process. The interest is the search for potentially interesting design dimensions in domains for which no other approach can guarantee to make good design choices. I believe that Distributed Artificial Intelligence is one area that can profit significantly from this approach.
But there is a problem, as one needs to face the traditional maladies associated with the use of GAs as function optimisers and at the same time build systems that achieve reasonably robust behaviour, so that the conclusions from the experiments can be regarded as consistent throughout a series of different scenarios.

### Boolean lexicographic optimization: algorithms & applications

Table 5 shows the number of aborted instances, i.e. instances for which a given solver cannot prove the optimum within the allowed CPU time or memory limit. The results allow several conclusions. First, for some solvers, the use of dedicated lexicographic optimization algorithms can provide remarkable performance improvements. A concrete example is Minisat+. The default solver aborts most problem instances, whereas Minisat+ integrated in an iterative pseudo-Boolean BLO solver ranks among the best performing solvers, aborting only 56 problem instances (i.e. the number of aborted instances is reduced by more than 85%). Second, for some other solvers, the performance gains are significant. This is the case with MSUnCore: the use of unsatisfiability-based lexicographic optimization reduces the number of aborted instances by close to 40%. SAT4J-PB integrated in an iterative pseudo-Boolean solver reduces the number of aborted instances by close to 6%. Similarly, SCIP integrated in an iterative pseudo-Boolean solver reduces the number of aborted instances by more than 4%. Despite the promising results of iterative pseudo-Boolean solving, there are examples for which this approach is not effective, e.g. BSOLO. This suggests that the effectiveness of this solution depends strongly on the type of solver used and on the target problem instances. The results for the MaxSAT-based weight rescaling algorithm are less conclusive. There are several justifications for this. Given that existing

### Parallel Stochastic Evolution Algorithms for Constrained Multiobjective Optimization

approach can be a non-trivial exercise. The factors to be considered include the nature of the problem domain (solution landscape), the metaheuristic structure, and the parallel environment. Parallelization of metaheuristics is an actively researched topic [3], but unlike SA, GA, and TS [2, 4], parallelization of StocE has not been studied. In this work, parallel algorithms for StocE are presented, considering a complete spectrum of parallel models [2]. VLSI cell placement is used as the optimization problem, and the goal is to achieve scalable speed-ups using a low-cost cluster environment. It is found that parallelization of StocE increases its effectiveness in solving large, multi-objective optimization problems. A comparison with parallel SA strategies is also given, which further highlights the performance gains.