Swarm intelligence has been studied for the last two decades. It is closely related to self-organization, decentralized systems, dynamical networks, artificial intelligence, and related fields. Because swarm intelligence has no central controller, it is well suited to distributed problem solving. Its decision-making process is a form of natural selection: not in the sense that nature selects the fittest, but in the sense that the decision emerges naturally from the behavior of many agents. This kind of decision process is well suited to preserving the equality and fairness of members. Each individual follows a few simple rules, yet their combined actions produce powerful and economical collective behavior. Because the power of the approach lies in the interactions within a population, it is useful to describe the interactive system explicitly; in computer science this is simulated as a population-based model. Such population-based models belong to the metaheuristic class of nature-inspired algorithms [19].


the variable values in the nonlinear equation system need to be altered and tuned: the value of every variable is adjusted until the combination that produces the best result is found. This tuning becomes difficult when a large and complex biochemical system is involved, because such a system makes the nonlinear equation system correspondingly more complex. A large biochemical system contains many chemical reactions and many interactions between them, and is generally described by many variables representing its components and interactions [14]. With so many variables, manual tuning becomes hard and complicated, so an automated tuning approach is needed [15]. Many automated tuning methods are available, such as the genetic algorithm (GA), the differential evolution (DE) algorithm, the Particle Swarm Optimization (PSO) algorithm, and the Artificial Bee Colony (ABC) algorithm. Based on comparative studies, DE is the most suitable of these because it is more robust [16] and involves only a few control parameters [17], [18]. In this work, DE represents the variables of the nonlinear equation system in chromosome form; each chromosome is then tuned to search for the solution that produces the best result in the optimization process.
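As an illustration of the automated tuning idea, a minimal DE/rand/1/bin sketch is shown below; the toy two-equation system, the parameter values (F, CR, population size), and all function names are illustrative assumptions, not the configuration used in this work:

```python
import random

def differential_evolution(residual, bounds, pop_size=20, F=0.5, CR=0.9,
                           generations=200, seed=0):
    """Minimal DE/rand/1/bin sketch: tunes a variable vector (a 'chromosome')
    to minimize the residual of a nonlinear equation system."""
    rng = random.Random(seed)
    dim = len(bounds)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    fitness = [residual(x) for x in pop]
    for _ in range(generations):
        for i in range(pop_size):
            a, b, c = rng.sample([j for j in range(pop_size) if j != i], 3)
            jrand = rng.randrange(dim)  # guarantee at least one mutated gene
            trial = [pop[a][j] + F * (pop[b][j] - pop[c][j])
                     if (rng.random() < CR or j == jrand) else pop[i][j]
                     for j in range(dim)]
            trial = [min(max(v, lo), hi) for v, (lo, hi) in zip(trial, bounds)]
            f = residual(trial)
            if f <= fitness[i]:          # greedy one-to-one survivor selection
                pop[i], fitness[i] = trial, f
    best = min(range(pop_size), key=fitness.__getitem__)
    return pop[best], fitness[best]

# Toy nonlinear system: x^2 + y^2 = 4 and x*y = 1, solved by residual minimization.
res = lambda v: (v[0]**2 + v[1]**2 - 4)**2 + (v[0]*v[1] - 1)**2
x, err = differential_evolution(res, [(-3, 3), (-3, 3)])
```

The greedy selection step is what makes DE robust with so few control parameters: a trial vector only replaces its parent if it is at least as good.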

Organisms die for many reasons: some fall ill, some are eaten by predators, some lack nutrition for a long time, and so on. But regardless of the manner of death, in general those that survive are those adapted to the living environment, while those eliminated are those that are not; this is Darwin's "survival of the fittest". In this algorithm, when offspring are produced, the size of the population increases. An excess of individuals competing for limited resources inevitably leads to a struggle for existence, in which individuals with stronger vitality have a greater chance of survival and those with weaker vitality a smaller one.


variance structure and O(np) per-iteration cost to form the update equations. We observe that the convergence of the Newton-Stein method undergoes a phase transition from a quadratic rate to a linear rate. This observation is justified theoretically, along with several other guarantees for bounded as well as sub-Gaussian covariates, such as per-step convergence bounds, conditions for local rates, and global convergence with line search. Parameter selection guidelines for the Newton-Stein method are based on our theoretical results. Our experiments show that the Newton-Stein method provides significant improvement over classical optimization methods.


Several experiments were carried out to investigate the reliability of the results obtained by the NCGA. The reliability can be assessed from the average results given in Table 4 for case study 1 and Table 5 for case study 2. For case study 1, all component concentrations involved were within their optimum ranges, which shows that the NCGA was able to produce reliable results. In addition, the ethanol production rate was higher, and the total of the component concentrations involved was the lowest, compared with previous works. Based on these observations, it can be concluded that the NCGA demonstrated reliability in handling the optimization problem of case study 1. In case study 2, all component concentrations involved again fell within their optimal ranges, confirming that the NCGA was capable of producing reliable results. Moreover, the trp production rate was higher, and the NCGA reduced the total of the component concentrations involved further than previous works. Based on these findings, the NCGA has shown its capability in the optimization of case study 2.


Scalarization-based or weighted aggregation-based approaches (Ishibuchi et al., 2006; Zhang and Li, 2007; Liu et al., 2014; Wang et al., 2015; Mei et al., 2011) have become increasingly popular in recent years owing to their computational efficiency. These methods use traditional mathematical techniques to aggregate multiple objectives into a single parameterized objective, assigning scalar fitness values to population members. The most representative scalarization-based method is MOEA/D (Zhang and Li, 2007), which has received increasing attention due to its computational simplicity and attractive search performance (Denysiuk et al., 2015; Wang et al., 2015). Meanwhile, indicator-based approaches (Zitzler and Künzli, 2004; Bader and Zitzler, 2011; Rodríguez Villalobos and Coello Coello, 2012; Menchaca-Mendez and Coello, 2015), a relatively recent trend, employ performance indicators for fitness assignment. Further developments and applications of MOEAs are well reviewed by Zhou et al. (2011) and Eiben and Smith (2015).
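The aggregation step can be made concrete with the Tchebycheff scalarization commonly used in MOEA/D-style decomposition; the weight vectors and ideal point below are illustrative values:

```python
def tchebycheff(objectives, weights, ideal):
    """Tchebycheff scalarization: aggregates multiple objective values into a
    single scalar fitness, relative to a weight vector and an ideal point."""
    return max(w * abs(f - z) for f, w, z in zip(objectives, weights, ideal))

# Two candidate solutions evaluated on two objectives (ideal point at origin):
# with equal weights, the more balanced solution receives the better (lower) value.
f_a = tchebycheff([0.2, 0.8], weights=[0.5, 0.5], ideal=[0.0, 0.0])
f_b = tchebycheff([0.5, 0.5], weights=[0.5, 0.5], ideal=[0.0, 0.0])
```

Each population member is assigned to a different weight vector, so a single-objective comparison like `f_b < f_a` drives selection across the whole Pareto front.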


Figure 5: S-bend duct optimization: velocity streamlines plotted for the reference (left column) and optimized (right column) geometries. In the top four figures, streamlines are coloured by flow velocity; in the bottom four, by total pressure. The intense flow recirculation close to the bottom side of the wall (figs. c and g) has been drastically reduced (figs. d and h), leading to a reduction of about 60% in the objective function.


We experimentally study on-line investment algorithms, first proposed by Agarwal and Hazan and extended by Hazan et al., which achieve almost the same wealth as the best constant-rebalanced portfolio determined in hindsight. These algorithms are the first to combine optimal logarithmic regret bounds with efficient deterministic computability. They are based on the Newton method for offline optimization, which, unlike previous approaches, exploits second-order information. After analyzing the algorithm using the potential function introduced by Agarwal and Hazan, we present extensive experiments on actual financial data. These experiments confirm the theoretical advantage of our algorithms, which yield higher returns and run considerably faster than previous algorithms with optimal regret. Additionally, we perform financial analysis using mean-variance calculations and the Sharpe ratio.

The foundations of fractal image compression go back to Barnsley and Jacquin. In fractal image compression, an image is represented by fractals rather than pixels. Block segmentation, region segmentation, and the cross-searching method all fall under the umbrella of self-similarity algorithms. A fractal is encoded as an Iterated Function System (IFS), which is a group of affine transformations. The main task in fractal coding is to extract the fractals that approximate the original image; these fractals are represented as a set of affine transformations. The method offers a high compression ratio and a simple decompression procedure. Its main drawback is the large computation time required for compression, and various optimization techniques have been proposed to reduce it. It is found that the computational time can be improved by

The TLBO method is based on the influence of a teacher on learners. The teacher is considered the globally best learned person (gbest_t), who shares his knowledge with the learners. The TLBO process is divided into two phases: the 'Teacher Phase' and the 'Learner Phase'. The 'Teacher Phase' means learning from the teacher, and the 'Learner Phase' means learning through interaction between learners. TLBO was modeled as follows [18]:
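The two phases can be sketched in code; this is a generic TLBO sketch with illustrative parameter choices and a toy objective, not necessarily the exact model of [18]:

```python
import random

def tlbo(objective, bounds, pop_size=10, iterations=100, seed=1):
    """Minimal TLBO sketch: a 'Teacher Phase' pulls learners toward the best
    solution, then a 'Learner Phase' lets pairs of learners exchange knowledge."""
    rng = random.Random(seed)
    dim = len(bounds)
    clip = lambda x: [min(max(v, lo), hi) for v, (lo, hi) in zip(x, bounds)]
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    for _ in range(iterations):
        # Teacher Phase: move toward the teacher (global best), away from the mean.
        teacher = min(pop, key=objective)
        mean = [sum(x[j] for x in pop) / pop_size for j in range(dim)]
        tf = rng.choice([1, 2])  # teaching factor, randomly 1 or 2
        for i, x in enumerate(pop):
            cand = clip([x[j] + rng.random() * (teacher[j] - tf * mean[j])
                         for j in range(dim)])
            if objective(cand) < objective(x):
                pop[i] = cand
        # Learner Phase: learn from a randomly chosen classmate.
        for i, x in enumerate(pop):
            other = pop[rng.randrange(pop_size)]
            if objective(other) < objective(x):
                cand = clip([x[j] + rng.random() * (other[j] - x[j]) for j in range(dim)])
            else:
                cand = clip([x[j] + rng.random() * (x[j] - other[j]) for j in range(dim)])
            if objective(cand) < objective(x):
                pop[i] = cand
    return min(pop, key=objective)

# Toy objective: the 2-D sphere function, minimized at the origin.
best = tlbo(lambda v: sum(t * t for t in v), [(-5, 5)] * 2)
```

Note that TLBO needs no algorithm-specific control parameters beyond population size and iteration count, which is often cited as its main appeal.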


0 ≤ x, 0 ≤ Ax − b, xᵀ(Ax − b) = 0 (1.1)

is called the linear complementarity problem (LCP); we denote it LCP(A, b). It is well known that several problems in optimization and engineering can be expressed as LCPs. Cottle, Pang, and Stone [1], [2] provide a thorough discussion of the problem and its applications, as well as solution techniques.
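A candidate solution of LCP(A, b) can be verified directly against the three conditions x ≥ 0, Ax − b ≥ 0, xᵀ(Ax − b) = 0 (assuming the Ax − b sign convention); a minimal check on a toy instance:

```python
def is_lcp_solution(A, b, x, tol=1e-9):
    """Check the LCP(A, b) conditions: x >= 0, Ax - b >= 0, x^T (Ax - b) = 0."""
    n = len(b)
    w = [sum(A[i][j] * x[j] for j in range(n)) - b[i] for i in range(n)]
    nonneg = all(v >= -tol for v in x) and all(v >= -tol for v in w)
    complementary = abs(sum(xi * wi for xi, wi in zip(x, w))) <= tol
    return nonneg and complementary

# Toy instance: A = I, b = [1, -1]; x = [1, 0] gives w = Ax - b = [0, 1],
# so each component of x is complementary to the matching component of w.
A = [[1.0, 0.0], [0.0, 1.0]]
b = [1.0, -1.0]
ok = is_lcp_solution(A, b, [1.0, 0.0])
```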

In this research, the local search method used is the Newton-Raphson method (also known as Newton's method). This technique is powerful because it forms the basis of the most effective procedures in linear and nonlinear programming. The Newton-Raphson method is used to find a local minimum, which is then injected into the homotopy optimization method in order to find the global minimum.
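The basic iteration is easy to sketch; the function, starting point, and tolerances below are illustrative, not the ones used in this research:

```python
def newton_raphson(f, df, x0, tol=1e-10, max_iter=50):
    """Newton-Raphson iteration x_{k+1} = x_k - f(x_k)/f'(x_k) for a root of f.
    (To minimize a smooth function, apply the same iteration to its derivative.)"""
    x = x0
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:
            return x
        x -= fx / df(x)
    return x

# Root of f(x) = x^2 - 2 starting near 1: converges to sqrt(2).
root = newton_raphson(lambda x: x * x - 2, lambda x: 2 * x, 1.0)
```

Near a simple root the convergence is quadratic, which is why the method is attractive as the local-search component of a global scheme such as homotopy optimization.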


In this work, an improved method based on the NM and PSO algorithms was proposed. The proposed method aimed to overcome the optimization problem in biochemical systems production. Two issues arise in the optimization process: the two objectives that need to be considered, and the size of the biochemical systems. To deal with them, the presented study combined the NM and PSO algorithms. The proposed method viewed the biochemical systems as NES; NM was used to solve the NES, while the PSO algorithm was utilized to alter and fine-tune the variables in the NES. Two biochemical systems, the optimization of trp production in the E. coli pathway and the optimization of ethanol production in the S. cerevisiae pathway, were used to measure the performance of the proposed method. The experimental results showed that the proposed method outperformed the results produced by other works. In conclusion, the proposed method was able to overcome the optimization problem in biochemical systems and performed better than other works. For future work, the proposed method could be improved by drawing on various other works, such as [27]–[30].
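The fine-tuning role of PSO can be illustrated with a generic sketch; the toy residual function (a two-equation NES with solution (1, 1)) and all parameter values are assumptions for illustration, not those of the proposed method:

```python
import random

def pso(objective, bounds, swarm=15, iterations=100, w=0.7, c1=1.5, c2=1.5, seed=2):
    """Minimal PSO sketch: each particle is pulled toward its personal best
    and the global best, fine-tuning a variable vector of an equation system."""
    rng = random.Random(seed)
    dim = len(bounds)
    pos = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(swarm)]
    vel = [[0.0] * dim for _ in range(swarm)]
    pbest = [p[:] for p in pos]
    pbest_f = [objective(p) for p in pos]
    g = min(range(swarm), key=pbest_f.__getitem__)
    gbest, gbest_f = pbest[g][:], pbest_f[g]
    for _ in range(iterations):
        for i in range(swarm):
            for j in range(dim):
                vel[i][j] = (w * vel[i][j]
                             + c1 * rng.random() * (pbest[i][j] - pos[i][j])
                             + c2 * rng.random() * (gbest[j] - pos[i][j]))
                pos[i][j] = min(max(pos[i][j] + vel[i][j], bounds[j][0]), bounds[j][1])
            f = objective(pos[i])
            if f < pbest_f[i]:                      # update personal best
                pbest[i], pbest_f[i] = pos[i][:], f
                if f < gbest_f:                     # and, if needed, global best
                    gbest, gbest_f = pos[i][:], f
    return gbest, gbest_f

# Toy NES residual: x^2 + y = 2 and x + y^2 = 2 (one solution at (1, 1)).
best, err = pso(lambda v: (v[0]**2 + v[1] - 2)**2 + (v[0] + v[1]**2 - 2)**2,
                [(-3, 3)] * 2)
```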

There are also several HCMs, such as Newton-HCM, secant-HCM, and Adomian-HCM, as described by Wu (2005, 2006, 2007) [4,5,6]. For basic learning purposes, we choose Newton-HCM as our method for solving nonlinear equations. The formula of Newton-HCM can be written as follows
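For background, a standard Newton (fixed-point) homotopy construction for solving f(x) = 0 looks as follows; this is a sketch of the general idea only, not necessarily Wu's exact Newton-HCM formula:

```latex
% Newton (fixed-point) homotopy for f(x) = 0, starting from an initial guess x_0:
H(x,\lambda) = f(x) - (1-\lambda)\, f(x_0), \qquad \lambda \in [0,1],
% so that H(x_0, 0) = 0 and H(x, 1) = f(x). At each continuation step
% \lambda_k, the root of H(\cdot, \lambda_k) is refined by Newton iteration:
x_{k+1} = x_k - \frac{H(x_k, \lambda_k)}{\partial H(x_k, \lambda_k)/\partial x}.
```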


Among the many biometric approaches, iris recognition is known for its high reliability, but as databases grow ever larger, an approach is needed that can reduce matching time. This can be achieved by iris classification. This paper presents a fractal dimension box-counting method for classifying iris images into four categories according to texture pattern. Initially, the eye image is localized using a random circular contour method, and a preprocessed flattened iris image of size 256×64 is generated, which is further divided into sixteen regions: eight regions are drawn from the middle part of the iris image and the remaining eight from the bottom part. From these sixteen regions, sixteen 32×32 image blocks are generated, and the box-counting method is used to calculate their fractal dimensions, producing sixteen fractal dimensions. The mean values of the fractal dimensions of the two groups are taken as the upper and lower group fractal dimensions, respectively. A double-threshold algorithm is then used to classify the iris into the four categories. The performance of the implemented algorithms has been evaluated using a confusion matrix, and experimental results are reported. The classification method has been tested and evaluated on the CASIA V1 iris database.
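The box-counting estimate itself can be sketched as follows; the box sizes and the least-squares slope fit are illustrative choices, not necessarily those of the paper:

```python
import math

def box_counting_dimension(img, sizes=(1, 2, 4, 8, 16)):
    """Estimate the fractal (box-counting) dimension of a binary image block:
    count the boxes of side s that contain foreground, then fit the slope of
    log N(s) versus log(1/s) by least squares."""
    h, w = len(img), len(img[0])
    xs, ys = [], []
    for s in sizes:
        count = 0
        for r in range(0, h, s):
            for c in range(0, w, s):
                if any(img[rr][cc]
                       for rr in range(r, min(r + s, h))
                       for cc in range(c, min(c + s, w))):
                    count += 1
        xs.append(math.log(1.0 / s))
        ys.append(math.log(count))
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    return (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
            / sum((x - mx) ** 2 for x in xs))

# A completely filled 32x32 block behaves like a 2-D object (dimension ~2);
# textured iris blocks would fall somewhere between 1 and 2.
filled = [[1] * 32 for _ in range(32)]
dim = box_counting_dimension(filled)
```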

Studies that solve the linear complementarity problem based on the nonlinear penalized equation have produced good results, but no method has been given for solving the nonlinear penalized equation itself. In this paper, we propose a generalized Newton method for solving the nonlinear penalized equation under the assumption that [M, M + λI] is regular, so that the method is better able to solve the linear complementarity problem. We show that the proposed method is convergent, and numerical experiments are given to demonstrate its effectiveness.

An optimization problem arises when an objective function is either minimized or maximized over a set of constraints. In our study, the nonlinear least squares problem is formulated as an optimization problem without constraints, where the SSE is defined as the objective function. During the computation, the SSE is minimized, and the unknown parameters of the proposed model are thereby determined in the optimal sense [5]. Here, the nonlinear least squares problem, which is an unconstrained optimization problem, is defined by
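As a concrete illustration of minimizing an SSE objective without constraints, a damped Gauss-Newton sketch is shown below; the model y = a·exp(b·t) and the synthetic data are illustrative assumptions, not the proposed model of the study:

```python
import math

def sse(a, b, ts, ys):
    """SSE objective: sum of squared residuals of the model y = a*exp(b*t)."""
    return sum((a * math.exp(b * t) - y) ** 2 for t, y in zip(ts, ys))

def gauss_newton_exp(ts, ys, a=1.0, b=0.0, iters=50):
    """Damped Gauss-Newton sketch for the unconstrained nonlinear least
    squares problem: minimize SSE(a, b) over the two unknown parameters."""
    for _ in range(iters):
        r = [a * math.exp(b * t) - y for t, y in zip(ts, ys)]
        J = [(math.exp(b * t), a * t * math.exp(b * t)) for t in ts]
        # Normal equations (J^T J) d = -J^T r, solved directly for the 2x2 case.
        g11 = sum(j1 * j1 for j1, _ in J)
        g12 = sum(j1 * j2 for j1, j2 in J)
        g22 = sum(j2 * j2 for _, j2 in J)
        r1 = -sum(j1 * ri for (j1, _), ri in zip(J, r))
        r2 = -sum(j2 * ri for (_, j2), ri in zip(J, r))
        det = g11 * g22 - g12 * g12
        da = (r1 * g22 - r2 * g12) / det
        db = (g11 * r2 - g12 * r1) / det
        step = 1.0  # halve the step until the SSE decreases (keeps it robust)
        while step > 1e-12 and sse(a + step * da, b + step * db, ts, ys) >= sse(a, b, ts, ys):
            step *= 0.5
        a, b = a + step * da, b + step * db
    return a, b, sse(a, b, ts, ys)

# Synthetic noise-free data generated from a = 2, b = 0.5.
ts = [0.0, 0.5, 1.0, 1.5, 2.0]
ys = [2.0 * math.exp(0.5 * t) for t in ts]
a, b, final_sse = gauss_newton_exp(ts, ys)
```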

The Unit Commitment (UC) problem plays an important role in the daily operation and planning of power systems. It involves two scheduling decisions: the "unit commitment" decision and the "electrical power dispatch" (or load dispatch) decision. The unit commitment decision determines which generating units are to be running during each hour of the planning horizon, considering system capacity requirements (including reserve) and the constraints on the start-up and shut-down of units, while the electrical power dispatch decision allocates system demand and spinning reserve capacity among the operating units during each specific hour of operation. Because these two decisions are interrelated, the UC problem generally embraces both, and the objective is to obtain an overall least-cost solution for operating the power system over the scheduling horizon. The exact solution of the problem can only be obtained by complete enumeration, often at the cost of a prohibitively large computational time for a realistic generation system. Mathematically, the UC problem can be described as a nonlinear, large-scale, mixed-integer optimization problem with a nonlinear solution space. The dimension of the problem increases rapidly with the system size and the scheduling horizon [1].
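A minimal illustrative formulation (with assumed notation, not taken from the text) makes the mixed-integer structure explicit:

```latex
% u_{it} \in \{0,1\}: commitment of unit i in hour t; P_{it}: its dispatch level;
% C_i(\cdot): fuel cost; S_i: start-up cost; D_t: demand; R_t: spinning reserve.
\min \sum_{t=1}^{T}\sum_{i=1}^{N}\Big[ C_i(P_{it})\,u_{it} + S_i\,u_{it}(1-u_{i,t-1}) \Big]
\quad \text{s.t.}\quad
\sum_{i=1}^{N} P_{it}\,u_{it} = D_t,\qquad
\sum_{i=1}^{N} P_i^{\max}\,u_{it} \ge D_t + R_t,\qquad
P_i^{\min} \le P_{it} \le P_i^{\max}.
```

The binary variables u_{it} carry the commitment decision and the continuous variables P_{it} carry the dispatch decision, which is why the two cannot be solved independently.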
