This paper proposes an opposition-based DE algorithm for global numerical optimization (GNO2DE). The algorithm employs the opposite point method to exploit the existing search space and speed up convergence [21-24]. Usually, different problems require different settings of the control parameters. Generally, adaptation is introduced into an evolutionary algorithm to improve its ability to solve a general class of problems without user interaction. In order to improve adaptation and reduce the number of control parameters, GNO2DE uses a mechanism to dynamically tune the crossover rate CR during the evolutionary process. Moreover, GNO2DE enhances the search ability by randomly selecting a candidate from the strategies "DE/rand/1/bin" and "DE/current to best/2/bin". Numerical experiments clearly show that GNO2DE is feasible and effective.
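The opposite point method mentioned above has a simple closed form: for a point x in a box [a, b], its opposite is a + b - x, taken coordinate-wise. The sketch below shows one common way opposition is used for population initialization; the function names and the keep-best-half policy are our own illustration, not necessarily GNO2DE's exact procedure.

```python
import random

def opposite(x, lower, upper):
    """Opposite point of x in the box [lower, upper]: x_opp = lower + upper - x."""
    return [lo + hi - xi for xi, lo, hi in zip(x, lower, upper)]

def opposition_init(pop_size, lower, upper, f):
    """Opposition-based initialization (illustrative): draw a random
    population, add the opposite of every point, keep the best half
    under the objective f (minimization)."""
    pop = [[random.uniform(lo, hi) for lo, hi in zip(lower, upper)]
           for _ in range(pop_size)]
    pop += [opposite(x, lower, upper) for x in pop]
    pop.sort(key=f)  # smaller objective value is better
    return pop[:pop_size]
```

Evaluating both each point and its opposite doubles the chance that at least one of the pair lies near the optimum, which is the intuition behind the reported speed-up.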

To improve the quality of the optimization result, the exploration property is important in the early stage of the optimization process and the exploitation property is significant in the late stage [11]. To ensure these properties, we modified the SaM to strictly control the balance between exploration and exploitation; we call the result the biased SaM (bSaM). We extended the Adaptive Cauchy Differential Evolution (ACDE) [10] by incorporating bSaM. ACDE shows good optimization performance through its adaptive parameter control, without adaptive strategy control. We compared bSaM with SaM, and the bSaM-extended ACDE with state-of-the-art DE algorithms, on various benchmark problems. The performance evaluation showed that bSaM is better than SaM, and that ACDE's adaptive parameter control is better than JADE's adaptive parameter control in terms of adapting the strategy probability. In addition, we found that a long-tailed distribution still performs better than a short-tailed distribution for bSaM.
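The Cauchy-based parameter control referenced above can be sketched as follows; the helper names and the truncation-by-resampling policy are illustrative assumptions, not the paper's exact formulation. The long tail of the Cauchy distribution is what occasionally produces large parameter jumps, which the abstract credits with better adaptation than short-tailed (e.g. normal) sampling.

```python
import math
import random

def sample_cauchy(loc, scale, lo=0.0, hi=1.0):
    """Draw from a Cauchy distribution via inverse-CDF sampling,
    truncated to (lo, hi] by resampling; the long tail permits
    occasional large parameter jumps."""
    while True:
        v = loc + scale * math.tan(math.pi * (random.random() - 0.5))
        if lo < v <= hi:
            return v

def adapt_param(successful_values, scale=0.1):
    """Hypothetical ACDE/JADE-style parameter adaptation: centre the
    Cauchy on the mean of parameter values that recently produced
    improvements, then sample a new trial value in (0, 1]."""
    loc = (sum(successful_values) / len(successful_values)
           if successful_values else 0.5)
    return sample_cauchy(loc, scale)
```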

In the past, many optimization algorithms based on gradient search were used to solve linear and non-linear equations, but in gradient-based methods the values of the objective function and constraints become unstable, and multiple peaks arise when a problem has more than one local optimum. The population-based GWO is a meta-heuristic optimization algorithm with the ability to avoid local optima and obtain a globally optimal solution, which makes it appropriate for practical applications: it can solve different constrained or unconstrained optimization problems without structural modifications to the algorithm. GWO integrated with an adaptive technique reduces the computation time for highly complex problems.

Table 3. The results show that EDABC and LDABC obtain competitive or better results than ABC for all benchmarks. In particular, it is clear from the results that for the Rastrigin function, the DABCs find the global best value and achieve a 100% success rate. The DABCs not only produce higher-quality solutions but are also more stable. This occurs because the differential operator maintains the diversity of the algorithm and fully expresses the characteristics of the solution space, regardless of the type of function considered. Furthermore, the Mean and Min values obtained by EDABC are better than those obtained by LDABC for almost all benchmarks, which demonstrates that the employed bees' behavior has an impact on ABC's global search ability. Fig. 7 and Fig. 8 show that EDABC converges faster than the LDABC algorithm for the Rosenbrock and Ackley functions. Fig. 9 and Fig. 10 show that EDABC has the ability to escape local optima after a long stagnation process for the Sphere and Griewank functions. As a consequence, the EDABC algorithm produces better results than LDABC and ABC.

In recent years, much research on developing optimization techniques has been inspired by animal behavior, for example the firefly algorithm (FA) [1] in 2008, cuckoo search (CS) [2] in 2009, the bat algorithm (BA) [2] in 2010, the artificial bee colony (ABC) [3] in 2007, the monkey algorithm (MA) [4] in 2008, and the shuffled frog-leaping algorithm (SFLA) [5], [6] in 2003. FA mimics the way fireflies' flashes attract mating partners or potential prey in order to find optima. CS simulates how a cuckoo chooses a nest where the host bird has just laid its own eggs, and how the first-hatched cuckoo evicts the host's eggs by blindly propelling them out of the nest. BA simulates bats' echolocation behavior, which uses frequency and loudness to find food or avoid obstacles. ABC simulates the foraging process of a natural bee colony, searching and optimizing the objectives through mutual cooperation. Because of their advantages of global search ability, parallel efficiency, robustness, and universality, these bio-inspired algorithms have been widely used in constrained optimization and engineering optimization [7], [8], [9], scientific computing, automatic control, and other fields [10], [11], [12], [13], [14].

Abstract: The Sine Cosine Algorithm (SCA) is a novel population-based optimization algorithm whose main feature, in contrast to other meta-heuristics, is that randomization plays a relevant role in both exploration and exploitation of the optimization problem. A novel randomization technique, termed the adaptive technique, is integrated with SCA and exercised on unconstrained benchmark test functions and on localization of partial discharge in a transformer-like geometry. A notable feature of the SCA algorithm is that it uses simple trigonometric sine and cosine terms for every unconstrained and complex constrained optimization problem. Integration of the new adaptive randomization technique enables the ASCA algorithm to attain globally optimal solutions and faster convergence with less parameter dependency. The Adaptive SCA (ASCA) solutions are evaluated, and the results show competitively better performance over the standard SCA optimization algorithm.
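The trigonometric terms the abstract refers to appear, in the commonly published form of SCA, in the following position-update rule; the parameter names (r1, r2, r3, a) follow that standard form, and ASCA's adaptive randomization is not reproduced here.

```python
import math
import random

def sca_update(x, best, t, t_max, a=2.0):
    """One Sine Cosine Algorithm position update in its commonly
    published form: the amplitude r1 decays linearly, shifting the
    search from exploration to exploitation as t approaches t_max."""
    r1 = a - t * (a / t_max)  # decays from a to 0 over the run
    new = []
    for xi, bi in zip(x, best):
        r2 = random.uniform(0.0, 2.0 * math.pi)  # random direction
        r3 = random.uniform(0.0, 2.0)            # weight on the best point
        if random.random() < 0.5:                # r4 switches sine/cosine
            new.append(xi + r1 * math.sin(r2) * abs(r3 * bi - xi))
        else:
            new.append(xi + r1 * math.cos(r2) * abs(r3 * bi - xi))
    return new
```

Because r1 reaches zero at the final iteration, late updates stay close to the current position, which is the exploitation phase the abstract describes.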

in a local optimum. In addition, the exploiter bat's search scope becomes very small during exploitation and it moves very slowly; therefore it may need too much time to reach the global optimum. These problems in DVBA have been eliminated by introducing probabilistic selection restart techniques in IDVBA.

Within the past few decades, nature-inspired optimization algorithms have increasingly been gaining popularity in scientific and engineering research all over the world (Rauff, 2015). This development has thrilled many researchers, who have deduced various reasons for it. Some researchers argue that these algorithms are successful because they were developed to replicate some of the most successful dynamics that occur naturally in biological, physical, and chemical processes (Gandomi & Alavi, 2012; Priami, 2009). In this situation, the issue of the choice of algorithm always surfaces whenever the need for optimization arises, since there are so many to choose from (Huang & Lam, 2002). There is a general understanding among researchers that the choice of the 'best' algorithm to solve a problem should largely be based on the nature of the problem being faced. The No Free Lunch optimization theorem reinforced this line of thought (Wolpert & Macready, 1997; Xu, Caramanis, & Mannor, 2012). In fact, there is no agreement on recommended principles to guide the choice of algorithm when faced with large-scale, nonlinear optimization problems (Ellison, Finn, Qin, & Tang, 2015).

The Homotopy Continuation Method (HCM) was introduced to solve nonlinear optimization problems and systems of nonlinear equations (Allgower and Georg, 1990). This method deforms a simple function into the function of interest by tracing a path: it computes a series of zeros and ends at a zero of the function of interest. Since homotopy methods converge to a solution from any arbitrarily chosen initial condition, they are said to be globally convergent.
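A minimal scalar illustration of the path-tracing idea described above: deform a trivial equation whose zero is known into f(x) = 0 and follow the zero with a few Newton corrections at each step along the path. The step count, corrector iterations, and the particular convex homotopy H(x, t) = (1 - t)(x - x0) + t f(x) are illustrative choices, not the method's only form.

```python
def homotopy_solve(f, df, x0, steps=50, newton_iters=5):
    """Scalar homotopy continuation sketch: deform g(x) = x - x0 = 0
    into f(x) = 0 via H(x, t) = (1 - t) * (x - x0) + t * f(x),
    tracking the zero with Newton corrections at each t."""
    x = x0
    for k in range(1, steps + 1):
        t = k / steps
        for _ in range(newton_iters):
            h = (1 - t) * (x - x0) + t * f(x)   # current homotopy residual
            dh = (1 - t) + t * df(x)            # its derivative in x
            x -= h / dh                          # Newton correction
    return x
```

For example, with f(x) = x^2 - 2 and x0 = 1, the traced path ends at the square root of 2.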

Topographical MLSL (TMLSL) [Ali and Storey, 1994] and Topographical Optimization with Local Search (TOLS) [Törn and Viitanen, 1994] are two more recently developed methods of this type. TMLSL and TOLS are both reported to be considerably superior to MLSL and other previously introduced clustering methods. Nevertheless, these methods may fail in two different ways. First, the resulting groups of points, or clusters, may contain several regions of attraction, so that the global minimum can be missed. Second, one region of attraction may be divided over several clusters, in which case the corresponding optimum will be located more than once [Rinnooy Kan and Timmer, 1987a].

section, we showed that DIRECT has trouble with functions with large values. These results will show that our modified version does not encounter these problems. We modified the termination criterion for this set of problems; for these tests, DIRECT and DIRECT-s were given identical function-evaluation budgets, and we report the results after each algorithm has exhausted its budget. This type of termination closely resembles the way DIRECT is used by optimization practitioners. The budgets given to DIRECT and DIRECT-s are based on the results found in Table 3.7. The results are presented in Table 3.8.

Despite their huge success in solving tough optimization problems, Yang [89] asserts that it is hard to affirm mathematically why metaheuristic algorithms are so efficient. Mathematical analysis of the rate of convergence and efficiency helps obtain in-depth information about the behavior of an algorithm on a specific problem. This will help to effectively modify existing methods or develop new ones with authentic (not ad hoc) results. A few efforts in the literature attempt to address this gap; however, to reach maturity in this area, metaheuristic researchers have much work ahead. Another open area in metaheuristic research identified by this work is measuring the balance between exploration and exploitation. On the matter of comparative performance measurement, the study urges agreement on common criteria instead of merely comparing objective function values and numbers of function evaluations. The authors of this research foresee more intelligent, self-adaptive, or in other words self-optimizing, next-generation metaheuristics in the future. These algorithms will be smart enough to tune their parameters in order to find optimum-quality solutions with minimum computational cost. In another article [7], the same author maintains that large-scale problems remain a challenge for metaheuristics, as these algorithms are mostly implemented and tested on small benchmark test problems with numbers of design variables ranging from a few to a hundred. Many engineering design, business, and industry problems involve thousands or even millions of variables. Moreover, the researcher also predicts the next eight to ten years to be significant in addressing this open problem, which resides in both theory and practice.

The Animal Migration Optimization (AMO) algorithm is a recent swarm-intelligence-based optimization technique proposed by X. Li [1]. The AMO algorithm is very popular among researchers and scientists working on optimization problems. There are a number of nontrivial multivariable optimization problems with arbitrarily high dimensionality that cannot be solved by exact search methods in reasonable time, so search algorithms capable of finding near-optimal or good solutions within acceptable computation time are very practical in real life. In recent years, the technical community has recognized the importance of a large number of nature-inspired metaheuristics and hybrids of these nature-inspired optimization methods. Metaheuristics may be regarded as a general algorithmic skeleton that can be applied to quite different optimization problems with comparatively few modifications to adapt them to a specific problem. Metaheuristics are intended to extend the capabilities of heuristics by combining one or more heuristic strategies using higher-level methodologies (hence 'meta'). Metaheuristics are strategies that provide guidance to the search process. Hyperheuristics are yet a further extension, focusing on heuristics that adapt their parameters in order to improve the efficacy of the result or the effectiveness of the computation process. Hyperheuristics provide high-level methodologies that may make use of machine learning and adapt their search behavior by modifying the application of the sub-procedures, or even which procedures are used [2]. Algorithms from the field of computational intelligence, biologically inspired intelligent computing, and metaheuristics are applied to difficult problems, to which more classical approaches may not be suited. Michalewicz and Fogel say that these problems are

numerical simulations. To validate the flow features of the optimal hollow projectile, the drag of the normal and optimal projectiles has been compared, and the drag-reduction effect at different Mach numbers has been obtained; it is larger than 30%. On the other hand, the flow structures of the optimal projectile at low Mach numbers have been simulated numerically. The bow shock wave structure and its variation with Mach number have been discussed. Finally, we obtained the variation of projectile drag versus Mach number.

A parameter-less SKF algorithm is successfully introduced. The proposed algorithm is tested on all types of optimization problems in the CEC2014 benchmark suite (unimodal, simple multimodal, hybrid, and composition functions) and is proven able to reach a near-optimal solution without any significant degradation compared to the original SKF algorithm. This enhancement enables users to apply the SKF algorithm directly, without the need to tune parameters, when solving any specific optimization problem in the future.

A new revised teaching-learning-based optimization (R-TLBO), which combines global exploration capability with fast convergence, is presented in this work. The revised method allows the learners and teachers of the teaching-learning-based optimization (TLBO) technique to guide and learn in directions that avoid premature convergence to local optima. In this paper, R-TLBO is used to perform such operations. Various optimization techniques are considered, and the obtained results are compared with others from related studies. Experimental outcomes show that the proposed technique is better than existing techniques in terms of performance and accuracy. Additionally, many experiments are performed on standard benchmark problems to prove the efficiency of the proposed technique. A sensitivity analysis is carried out on different parameters of R-TLBO to validate their effect on the overall problems and benchmarks.

measurement points across the combustion chamber [20] for the recreation of the unsteady pressure field and subsequent analysis, which in turn requires complex and expensive engine modifications. For this reason, most studies have resorted to numerical simulations in order to assess the noise source [21] instead. In particular, the use of computational fluid dynamics (CFD) is nowadays widely established in the automotive industry. Moreover, recent studies have demonstrated that CFD is a useful tool to recreate, visualize and study the combustion noise source [22, 23]. Despite the attractive benefits of this method, the simulation of an internal combustion engine is challenging due to complex geometry, spatially and temporally varying conditions, and complicated combustion chemistry. Therefore, additional efforts must be focused not only on developing more robust codes, but also on the validation procedure, to ensure a correct estimation of the involved physical phenomena [24].

From the numerical results, it can be concluded that vibration suppression in mechanical systems using the combination of the finite element method and heuristic optimization techniques is an efficient and useful tool for solving problems with no closed-form solution available, especially in the case of vibration modes with close natural frequencies. The presence of anti-resonance peaks at the predefined primary-structure natural frequencies was demonstrated in the frequency response function (FRF). In the time domain, a reduction of the vibration amplitude can also be observed when the system is excited at its natural frequencies once the MMDVA is attached.

Genetic Algorithms (GAs) are adaptive heuristic search algorithms premised on the evolutionary ideas of natural selection and genetics. The basic concept of GAs is to simulate the processes in natural systems necessary for evolution, specifically those that follow the principle of survival of the fittest. As such, they represent an intelligent exploitation of a random search within a defined search space to solve a problem. Genetic Algorithms have been widely studied, experimented with, and applied in many engineering fields. Not only do GAs provide alternative methods for solving problems, they consistently outperform traditional methods on most of these problems. Many real-world problems involve finding optimal parameters, which may prove difficult for traditional methods but ideal for GAs. However, because of their outstanding performance in optimization, GAs have been wrongly regarded as mere function optimizers. In fact, there are many ways to view genetic algorithms.
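The "intelligent exploitation of a random search" described above can be made concrete with a toy real-coded GA. The operators chosen here (tournament selection, uniform crossover, Gaussian mutation, elitist survival) are one common combination for illustration, not a canonical definition, and all names are our own.

```python
import random

def ga_minimize(f, dim, bounds, pop_size=30, gens=100, cr=0.9, mr=0.1):
    """Toy real-coded GA sketch: tournament selection, uniform
    crossover, Gaussian mutation, and elitist survival (minimization)."""
    lo, hi = bounds
    pop = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(pop_size)]

    def tournament():
        a, b = random.sample(pop, 2)
        return a if f(a) < f(b) else b

    for _ in range(gens):
        children = []
        while len(children) < pop_size:
            p1, p2 = tournament(), tournament()
            if random.random() < cr:  # uniform crossover: mix genes of parents
                child = [x if random.random() < 0.5 else y for x, y in zip(p1, p2)]
            else:
                child = p1[:]
            # Gaussian mutation, clipped back into the search box
            child = [min(hi, max(lo, g + random.gauss(0.0, 0.1)))
                     if random.random() < mr else g
                     for g in child]
            children.append(child)
        pop = sorted(pop + children, key=f)[:pop_size]  # elitist survival
    return min(pop, key=f)
```

Because survival is elitist, the best individual can only improve from generation to generation, while crossover and mutation keep supplying new candidate points.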

In other research, a selection operator for PSO was first introduced by Angeline [30]; it is similar to the one used in genetic algorithms (GAs). Other researchers brought parts of the crossover [31] and mutation [29] operations from GAs into PSO. Pant et al. proposed a quadratic crossover operator for the PSO algorithm, called quadratic interpolation PSO (QIPSO) [16]. An adaptive fuzzy particle swarm optimization (AFPSO) [19] was proposed to utilize fuzzy inference for adjusting the acceleration coefficients. Meanwhile, the quadratic crossover operator [16] was used in the proposed AFPSO algorithm (AFPSO-QI) [19] to achieve better performance in solving multimodal problems. Zhan et al. presented an adaptive particle swarm optimization (APSO) [18] using a real-time evolutionary state estimation procedure and an elitist learning strategy. A variant of the PSO algorithm based on an orthogonal learning strategy (OLPSO) [32] was introduced to guide particles to discover useful information from their personal best positions and from their neighborhood's best position, in order to fly in better directions. Gao et al. [33] used PSO with chaotic opposition-based population initialization and a stochastic search technique to solve complex multimodal problems. Their algorithm, called CSPSO, finds new solutions in the neighborhoods of the previous best positions in order to escape from local optima in multimodal functions. Beheshti et al. proposed median-oriented particle swarm optimization (MPSO) [14] and centripetal accelerated particle swarm optimization (CAPSO) [15], based on Newton's laws of motion, to accelerate learning and convergence in optimization problems.
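All the variants surveyed above modify the canonical inertia-weight PSO update, which for reference is sketched below; the coefficient values are common defaults, not those of any particular cited variant.

```python
import random

def pso_step(x, v, pbest, gbest, w=0.7, c1=1.5, c2=1.5):
    """Canonical PSO velocity/position update (inertia-weight form):
    each particle is pulled toward its personal best (pbest) and the
    swarm's global best (gbest), with inertia w damping the old velocity."""
    new_v = [w * vi
             + c1 * random.random() * (pi - xi)   # cognitive pull
             + c2 * random.random() * (gi - xi)   # social pull
             for xi, vi, pi, gi in zip(x, v, pbest, gbest)]
    new_x = [xi + vi for xi, vi in zip(x, new_v)]
    return new_x, new_v
```

The surveyed variants intervene at different points of this rule: selection and crossover operators act on the particles themselves, adaptive schemes like APSO and AFPSO tune w, c1, and c2 online, and OLPSO replaces the pbest/gbest guidance terms.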
