results have been saved. The average, best, and worst clustering performances were calculated over 30 runs. As has been seen, population size and the maximum iteration number are two important parameters for obtaining the best solutions with nature-inspired optimization algorithms, so the two parameters must be selected in such a way that both solution time and solution quality are jointly optimized. With this aim, the population size was first held constant while the maximum number of iterations was varied over 100, 200, 300, 400 and 500. Only the results for 100, 200 and 300 iterations are shown in Table 8, to keep the number of table rows manageable.
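The tuning procedure above can be sketched as a small experiment harness. In this sketch, plain random search stands in for the nature-inspired optimizer, and the objective function, bounds, and run count are illustrative assumptions, not the paper's setup:

```python
import random

def random_search(max_iter, seed):
    """Placeholder optimizer: random search minimizing the sphere function f(x) = x^2."""
    rng = random.Random(seed)
    best = float("inf")
    for _ in range(max_iter):
        x = rng.uniform(-10.0, 10.0)
        best = min(best, x * x)
    return best

def summarize(max_iter, runs=30):
    """Best, average and worst objective value over independent runs."""
    results = [random_search(max_iter, seed) for seed in range(runs)]
    return {"best": min(results),
            "avg": sum(results) / len(results),
            "worst": max(results)}

# One table row per maximum-iteration setting, population held fixed.
for max_iter in (100, 200, 300):
    print(max_iter, summarize(max_iter))
```

Swapping the placeholder for a real population-based optimizer leaves the harness unchanged; only `random_search` is replaced.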

Previous clustering algorithms (Cui, Potok, & Palathingal, 2005; Jain, 2010; Zhong, Liu, & Li, 2010) are static methods that require a pre-defined number of clusters. Hence, such algorithms are not appropriate for clustering data collections that lack the relevant information (i.e., the number of classes or clusters). To date, such issues have been addressed with two approaches: estimation, or a dynamic swarm-based approach. The first approach employs a validity index for clustering, which can drive the selection of the optimal number of clusters. It starts by determining a range of cluster counts (a minimum and maximum number of clusters), then performs clustering with the various numbers of clusters and chooses the number of clusters that produces the best-quality result. In the work of Sayed, Hacid and Zighed (2009), the clustering is hierarchical agglomerative with a validity index (VI): at each level of the merging step, the index of the two closest clusters is calculated before and after merging. If the VI improves after merging, the merge of the clusters is finalized. This process continues until it reaches the optimal clustering solution. Similarly, Kuo and Zulvia (2013) proposed an automatic clustering method, known as Automatic Clustering using Particle Swarm Optimization (ACPSO). It is based on PSO, which identifies the number of clusters, combined with K-means, which adjusts the cluster centers. ACPSO determines the appropriate number of clusters in the range [2, Nmax]. The results show that ACPSO produces better accuracy and consistency than the Dynamic Clustering Particle Swarm Optimization and Genetic (DCPG) algorithm, the Dynamic Clustering Genetic Algorithm (DCGA) and Dynamic Clustering Particle Swarm Optimization (DCPSO).
In the work of Mahmuddin (2008), a modified K-means and the bees algorithm are integrated to estimate the total number of clusters in a dataset. The bees algorithm is used to place the centroids as close as possible to the right positions, while K-means is utilized to identify the best clusters. From the previous discussion, it can be seen that the estimation approach is suitable for problems about which little or no prior knowledge exists; however, it is difficult to determine the range of cluster counts (lower and upper bounds on the number of clusters) for each dataset.
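The estimation approach described above (try each candidate number of clusters and keep the one with the best validity index) can be sketched as follows. The tiny 1-D k-means and the Calinski-Harabasz-style variance-ratio index below are illustrative stand-ins, not the methods used in the cited works:

```python
import statistics

def kmeans_1d(data, k, iters=20):
    """Minimal deterministic 1-D k-means; centroids start at evenly spaced order statistics."""
    pts = sorted(data)
    cents = [pts[i * (len(pts) - 1) // (k - 1)] for i in range(k)]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for x in pts:
            nearest = min(range(k), key=lambda j: abs(x - cents[j]))
            clusters[nearest].append(x)
        cents = [statistics.mean(c) if c else cents[j]
                 for j, c in enumerate(clusters)]
    return clusters, cents

def ch_index(clusters, cents, grand_mean, n, k):
    """Calinski-Harabasz-style validity index: between/within variance ratio."""
    within = sum((x - cents[j]) ** 2 for j, c in enumerate(clusters) for x in c)
    between = sum(len(c) * (cents[j] - grand_mean) ** 2
                  for j, c in enumerate(clusters))
    if within == 0:
        return float("inf")
    return (between / (k - 1)) / (within / (n - k))

def estimate_k(data, k_max=4):
    """Cluster with every k in [2, k_max] and keep the k with the best index."""
    gm = statistics.mean(data)
    scores = {}
    for k in range(2, k_max + 1):
        clusters, cents = kmeans_1d(data, k)
        scores[k] = ch_index(clusters, cents, gm, len(data), k)
    return max(scores, key=scores.get)

data = [1.0, 1.1, 1.2, 9.0, 9.1, 9.2]   # two well-separated groups
print(estimate_k(data))                  # 2
```

The same loop structure carries over when the inner clustering step is K-means in higher dimensions or a swarm-based method.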

While the first two classes aim to improve the performance of MOEAs on MaOPs, the third class, known as the objective reduction approaches, tries to transform MaOPs into MOPs by reducing the number of redundant or irrelevant objectives, such that the problems can be directly solved with traditional MOEAs [109, 110]. In general, there are two types of objective reduction approaches: offline approaches and online approaches. Offline objective reduction approaches can be seen as pre-processing techniques, which work independently of specific MOEAs but require a set of approximate Pareto optimal solutions as a priori knowledge. In [111], Brockhoff and Zitzler have presented both deterministic and stochastic algorithms for objective reduction based on the objective conflicts measured by ϵ-dominance. Saxena et al. have proposed to use principal component analysis (PCA) [112] and maximum variance unfolding (MVU) [113] for linear and nonlinear objective reduction, respectively [110]. Singh et al. have proposed to use corner solutions for objective reduction, which are obtained using a Pareto corner search evolutionary algorithm (PCSEA) [109]. Online objective reduction approaches are usually embedded in an existing MOEA and run as part of the evolutionary procedure; the idea is to update the reduced objective set iteratively using the solutions obtained by the MOEA during the optimization procedure. In [111], Brockhoff et al. have proposed to embed the PCA approach into NSGA-II to build an online objective reduction approach, known as PCA-NSGA-II. In [114], Jaimes et al. have proposed an online objective reduction algorithm that finds a k-sized objective subset based on the correlations between the objectives. Recently, Guo et al. have proposed to adopt the partitioning around medoids (PAM) [115] clustering algorithm to detect the correlations between objectives and divide them into different clusters [116].
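As a rough illustration of offline objective reduction, the sketch below drops any objective that is almost perfectly correlated with an already-kept objective across a set of solutions. The cited works use more principled measures (ϵ-dominance conflicts, PCA, MVU); this plain correlation filter is only an assumed simplification:

```python
def pearson(a, b):
    """Pearson correlation of two equal-length sequences."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    sa = sum((x - ma) ** 2 for x in a) ** 0.5
    sb = sum((y - mb) ** 2 for y in b) ** 0.5
    return cov / (sa * sb)

def reduce_objectives(obj_matrix, threshold=0.95):
    """Keep an objective only if it is not near-perfectly correlated with
    an earlier kept objective; obj_matrix[j] holds objective j's values
    over the approximate Pareto set."""
    kept = []
    for j, col in enumerate(obj_matrix):
        if all(pearson(col, obj_matrix[i]) < threshold for i in kept):
            kept.append(j)
    return kept

# Three objectives evaluated on five solutions; f2 is a scaled copy of f0.
f0 = [1.0, 2.0, 3.0, 4.0, 5.0]
f1 = [5.0, 3.0, 4.0, 1.0, 2.0]
f2 = [2.0, 4.0, 6.0, 8.0, 10.0]
print(reduce_objectives([f0, f1, f2]))  # [0, 1]
```

Run offline, this acts as a pre-processing step; embedded inside an MOEA's loop and re-run on the current population, the same filter becomes an online reduction step.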

When a cloud system is used for big data, huge processing power is required for data processing; to obtain it, groups of nodes are united and the work is distributed across them. Big data cloud centers are characterized by huge transmission volume, high transmission frequency and hard transmission deadlines. Hence, process scheduling is an indispensable issue in cloud computing. To balance the load, nodes with less load and those at the shortest distance are identified, so that the Quality of Service (QoS) metrics [6][7] improve: throughput increases and latency decreases. The literature review covers the different algorithms related to this research, but improving the performance of cloud computing for big data remains a mandatory research problem. To that end, we apply the nature-inspired firefly algorithm in this paper.
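The node-selection idea (prefer lightly loaded, nearby nodes) can be sketched as a weighted score. The weights and node attributes below are illustrative assumptions, not the firefly-based method the paper applies:

```python
def pick_node(nodes, w_load=0.5, w_dist=0.5):
    """Score each node by a weighted sum of normalized load and distance
    and return the id of the best (lowest-score) node."""
    max_load = max(n["load"] for n in nodes) or 1
    max_dist = max(n["dist"] for n in nodes) or 1
    def score(n):
        return w_load * n["load"] / max_load + w_dist * n["dist"] / max_dist
    return min(nodes, key=score)["id"]

nodes = [
    {"id": "n1", "load": 0.9, "dist": 10},   # close but heavily loaded
    {"id": "n2", "load": 0.2, "dist": 12},   # close and lightly loaded
    {"id": "n3", "load": 0.6, "dist": 50},   # far away
]
print(pick_node(nodes))  # n2
```

Tuning `w_load` versus `w_dist` trades throughput against latency, which is the QoS trade-off described above.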

studied. These techniques, which use heuristic information, are derivative-free, easy to implement, and shorten the solution time. The first product of these studies was the genetic algorithm (GA) developed by Holland [1], which applied the evolutionary idea to the solution of optimization problems. Instead of evolving only one solution, a group of solutions called a population is used in the algorithm, and each solution is called an individual. In this way, running such algorithms on multiple processors becomes possible. After the GA, simulated annealing [2] is generally accepted as the second such algorithm, inspired by the annealing process of physical materials. At high temperatures, particles move randomly in order to explore the solution space; as the temperature decreases, the particles try to form a perfect crystalline structure using only local movements.
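The annealing analogy translates into code along these lines: moves are random and freely accepted at high temperature (exploration), and become effectively greedy local moves as the temperature decays. The objective, step size and cooling schedule are illustrative choices:

```python
import math
import random

def simulated_annealing(f, x0, t0=10.0, cooling=0.99, steps=3000, seed=1):
    """Minimize f from x0: improving moves are always accepted; worsening
    moves are accepted with probability exp(-delta / T)."""
    rng = random.Random(seed)
    x, fx = x0, f(x0)
    best, fbest = x, fx
    t = t0
    for _ in range(steps):
        cand = x + rng.uniform(-1.0, 1.0)     # random local move
        fc = f(cand)
        if fc < fx or rng.random() < math.exp(-(fc - fx) / t):
            x, fx = cand, fc
            if fx < fbest:
                best, fbest = x, fx           # track the best state seen
        t *= cooling                          # lower the temperature
    return best, fbest

best, fbest = simulated_annealing(lambda x: (x - 3.0) ** 2, x0=-5.0)
print(best, fbest)
```

At the final temperature almost no worsening move is accepted, which mirrors the "local movements only" crystallization phase described above.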

Abstract— Nature is the best guide: its designs and qualities are so immense and unusual that they motivate researchers to imitate nature in order to solve hard and complex problems in computer science. Bio-inspired computing has emerged as a new era of computation covering a wide variety of applications, and nature-inspired algorithms are increasingly popular, producing impactful results across application domains. This paper presents a detailed study of recent advances in nature-inspired optimization methods. It also surveys various optimization algorithms together with their aims, includes a comparative study of swarm intelligence algorithms, and discusses the applicability of each algorithm. Nature-inspired algorithms of this kind are widely used in many fields to solve a variety of problems, such as the travelling salesman problem, bioinformatics, scheduling, clustering and mining problems, image processing, and engineering design.

contradictory modifications of antenna performance. In this paper, a new mathematical weighted evaluation model involving antenna efficiency, center frequency, and bandwidth is proposed to evaluate the performance of a rectangular microstrip patch antenna (RMPA), based on both the transmission-line model and the cavity model. With this mathematical model of the RMPA, three bio-inspired optimization technologies, the Cuckoo Search (CS) algorithm, the Differential Evolution (DE) algorithm, and Quantum-behaved Particle Swarm Optimization (QPSO), are used to optimize the design of the RMPA under certain constraints. The simulation processes and the designed antennas' performances are also presented and compared. With the evaluation model as the objective function, the bio-inspired optimization approaches are utilized to determine the geometrical parameters of the optimal antenna under the given constraints.
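A weighted evaluation model of this kind might look like the following sketch. The weights, target frequency and normalization are invented for illustration and are not the values proposed in the paper:

```python
def antenna_score(efficiency, f_center, bandwidth,
                  f_target=2.45e9, weights=(0.4, 0.4, 0.2)):
    """Hypothetical weighted evaluation of an antenna design: reward
    efficiency and bandwidth, penalize deviation of the centre frequency
    from the target. Weights and target are illustrative only."""
    w_eff, w_freq, w_bw = weights
    freq_error = abs(f_center - f_target) / f_target
    return w_eff * efficiency - w_freq * freq_error + w_bw * bandwidth / f_target

a = antenna_score(0.8, 2.45e9, 100e6)   # on-target design
b = antenna_score(0.8, 2.60e9, 100e6)   # off-centre design scores lower
print(a, b)
```

An optimizer such as CS, DE or QPSO would then search the antenna's geometrical parameters to maximize this single scalar score.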

The BM algorithm mimics the behavior of the blue monkey (Cercopithecus mitis). To model these interactions, every group of monkeys is required to move over the search space. As mentioned earlier, the monkeys are divided into groups that begin searching for food sources at long distances, beyond the range of the stronger monkey's normal vision. Male blue monkeys have little to no interaction with the young. Because of the territorial nature of the species, young males must leave as early as possible in order to become successful: they challenge the dominant male of another family and, if they succeed in defeating him, they become the leader of that family, which provides food supplies, a place to live and socialization [20]. Normally, a group of blue monkeys has one male and a large number of females and babies [20].

Supervisor: Prof. Dr. Mohammad Isa Irawan, M.T.
ABSTRACT
Multiple sequence alignment is a fundamental procedure that is frequently required
when processing related sequences in bioinformatics. Once a multiple sequence alignment has been completed, further analyses can be carried out, such as phylogenetic analysis or protein structure prediction. Its many uses have made multiple sequence alignment one of the most widely studied problems. Many metaheuristic algorithms are based on natural phenomena; these are commonly called nature-inspired metaheuristic algorithms. Among the newer nature-inspired metaheuristic algorithms considered quite efficient are the firefly algorithm, cuckoo search, and flower

Genetic algorithms (GAs) are among the most popular evolutionary algorithms in terms of the diversity of their applications. Evolutionary algorithms mimic the process of evolution, hence the name. GAs are search algorithms based on the mechanics of natural selection and natural genetics. They rest on the survival-of-the-fittest concept (Darwinian theory), which says that only the fittest will survive, reproduce and procreate, so that successive generations become better and better than previous ones. Unlike traditional optimization algorithms, GAs search over a population of points rather than a single point, and in doing so they use stochastic transition rules in place of deterministic rules. GAs use objective function information only, not first or second derivatives.

A GA is a stochastic general search method [10]. It proceeds iteratively, generating new populations of individuals from the old ones, as shown in Figure 3. Every individual is an encoded (binary, real, etc.) version of a tentative solution. Figure 3 shows the selection and recombination phases of the genetic algorithm. The GA is one of the most popular evolutionary algorithms: a population of individuals evolves (moves through the fitness landscape) according to a set of rules such as selection, crossover and mutation.
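A minimal GA built from these three rules, maximizing the number of ones in a bit string (the standard OneMax toy problem), might look like this; all parameter values are illustrative:

```python
import random

def ga_onemax(n_bits=20, pop_size=40, gens=60, pc=0.9, pm=0.05, seed=7):
    """Minimal GA: tournament selection, one-point crossover, bit-flip
    mutation, with elitism, maximizing the number of ones (OneMax)."""
    rng = random.Random(seed)
    fitness = sum
    pop = [[rng.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
    for _ in range(gens):
        def pick():
            a, b = rng.sample(pop, 2)           # binary tournament selection
            return a if fitness(a) >= fitness(b) else b
        nxt = [max(pop, key=fitness)]           # elitism: keep the current best
        while len(nxt) < pop_size:
            p1, p2 = pick()[:], pick()[:]
            if rng.random() < pc:               # one-point crossover
                cut = rng.randrange(1, n_bits)
                p1 = p1[:cut] + p2[cut:]
            for i in range(n_bits):             # bit-flip mutation
                if rng.random() < pm:
                    p1[i] = 1 - p1[i]
            nxt.append(p1)
        pop = nxt
    best = max(pop, key=fitness)
    return best, fitness(best)

best, f = ga_onemax()
print(f)
```

Each individual is a binary-encoded tentative solution, and the selection/crossover/mutation loop is exactly the two-phase cycle of Figure 3.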

tion but also for reducing the complexity and computational burden. Additionally, nature/bio-inspired algorithms are highly flexible, as they can accept a mixture of variables in terms of type and continuity; this broadens the feature selection possible with such algorithms. In this chapter, we explore how nature/bio-inspired algorithms are applied to intrusion detection against different threats and attacks in various networks. The first section of this chapter gives an introduction and an in-depth explanation of how the most popular nature/bio-inspired algorithms operate; both the theoretical and practical concepts are explained, along with how these algorithms detect malicious behaviour in the context of cyber security. The second section includes a selection of the most notable and complete studies of anomaly detection using nature/bio-inspired algorithms in networks and in low-resource systems such as cyber-physical systems and the IoT. In the third section, the techniques used and the results produced are discussed. Finally, future directions on how nature-inspired algorithms could be applied to detecting anomalies in such systems are presented.

Grounded theory methodology (GTM) has been termed a systematic, inductive, and comparative approach to inquiry for the purpose of constructing theory. This approach differs from more conventional modes of inquiry, in which the researcher chooses a theoretical framework for a study, formulates hypotheses and tests them. It also differs from 'armchair' or 'desktop' theorising, and from research that aims only to provide descriptive accounts of the subject matter. In chapter eighteen, Smith-Tolken argues that grounded theory methodology is conducive to curriculum inquiry: the curriculum is a process involving an interaction of actors, which fits GTM well, and GTM also gives impetus to theorising about the curriculum in a scholarly manner. Drawing on her PhD research, she demonstrates this with a study of seven experiential learning modules that included engagement with non-academic communities external to the university.

authors' knowledge, there is no study that has evaluated the coupling of intelligent models with nature-inspired optimization algorithms for hydrological drought forecasting. Therefore, in this study an ANN was coupled with four optimization algorithms to forecast hydrological drought. For this purpose, the Dez basin in southwestern Iran was considered as the case study. The Dez dam is one of the highest dams in Iran, and its reservoir is used for hydropower generation and for irrigation of the farmland downstream of the dam; hydrological drought in this basin can therefore affect agricultural production and reduce hydropower generation. The SHDI was calculated based on the inflow to the dam. The ANN was then coupled with the SSA, BBO, GOA and PSO algorithms to forecast the hydrological drought. These four algorithms have recently shown promising results in optimizing machine learning models for hydrological applications [30], so they were confidently selected for this study. The rest of the paper is organized as follows. Section II contains the case study, the SPI and SHDI calculations, the selection of input-output combinations, the ANN and the optimization algorithms. In Section III, the results are presented and a detailed comparison between the different models is carried out. Finally, a brief summary of the findings of this research is drawn as the conclusion in Section IV.

Abstract — Data mining (DM) has become one of the most valuable tools for extracting and manipulating data and for establishing patterns in order to produce information useful for decision-making. Clustering is a data mining technique for finding important patterns in unorganized and huge data collections; this likelihood-based clustering technique is quite often used for classification because it is simple and easy to implement. In this work, we first use the Expectation-Maximization (EM) algorithm for sampling on the medical data obtained from the Pima Indian Diabetes (PID) data set. This work also presents a comparative study of GA-, ACO- and PSO-based data clustering methods. To compare and analyze the results, we use metrics such as the weighted arithmetic mean, standard deviation, normalized absolute error and precision. The results show that, in the classification process, the accuracy obtained using particle swarm optimization is higher than that of the other optimization algorithms considered, namely the genetic algorithm and the ant colony optimization algorithm. This work therefore indicates that particle swarm optimization is the best of the considered optimization techniques for this task.
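Two of the comparison metrics mentioned above can be computed as follows. The abstract does not give exact definitions, so these follow common conventions (the relative/normalized absolute error as used in tools like Weka, and the standard precision ratio) and should be treated as assumptions:

```python
import statistics

def normalized_absolute_error(pred, actual):
    """Sum of absolute errors, normalized by the error of a baseline
    that always predicts the mean of the actual values."""
    mean = statistics.mean(actual)
    denom = sum(abs(a - mean) for a in actual)
    return sum(abs(p - a) for p, a in zip(pred, actual)) / denom

def precision(tp, fp):
    """Fraction of positive predictions that are correct."""
    return tp / (tp + fp)

actual = [0, 1, 1, 0, 1, 0]
pred   = [0, 1, 0, 0, 1, 1]
print(normalized_absolute_error(pred, actual))
print(precision(tp=2, fp=1))
```

Values below 1.0 for the normalized absolute error mean the classifier beats the mean-predicting baseline; values near 0 mean near-perfect prediction.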

Fig. 3. Pseudo code of BAT algorithm.
A. Application in scheduling
The binary bat algorithm (BBA), the discrete version of the bat algorithm, was devised by Nakamura et al. [27]. The multi-objective bat algorithm (MOBA) is the extension of BA to multi-objective problems. A production scheduling problem involving multiple stages, machines and products was solved by Musikapun and Pongcharoen in 2012 using BA; they also suggested that, with a better choice of parameters, performance could improve by approximately 8.4% [28]. With execution time and mean flow time as optimization criteria, Marichelvam and Prabaharan used BA to solve flow shop scheduling problems [29]. For scheduling in clouds, BA was used by Jacob for resource scheduling with makespan as the optimization criterion [30]. A hybrid algorithm comprising PSO and MOBA was devised for maximizing profit in clouds.
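A much-simplified single-objective bat algorithm, using the frequency, velocity, loudness and pulse-rate ingredients named in the pseudo code of Fig. 3, might be sketched as follows; all parameter values are illustrative and the acceptance rule is greedy for simplicity:

```python
import random

def bat_algorithm(f, dim=2, n_bats=20, iters=300, fmin=0.0, fmax=2.0,
                  loudness=0.9, pulse_rate=0.5, seed=3):
    """Simplified bat algorithm minimizing f: each bat has a position and a
    velocity driven by a random frequency relative to the best-known bat;
    with some probability it instead takes a local walk around the best."""
    rng = random.Random(seed)
    pos = [[rng.uniform(-5.0, 5.0) for _ in range(dim)] for _ in range(n_bats)]
    vel = [[0.0] * dim for _ in range(n_bats)]
    best = min(pos, key=f)[:]
    for _ in range(iters):
        for i in range(n_bats):
            freq = fmin + (fmax - fmin) * rng.random()
            vel[i] = [v + (x - b) * freq for v, x, b in zip(vel[i], pos[i], best)]
            cand = [x + v for x, v in zip(pos[i], vel[i])]
            if rng.random() > pulse_rate:           # local walk around the best bat
                cand = [b + 0.1 * rng.gauss(0.0, 1.0) for b in best]
            if f(cand) <= f(pos[i]) and rng.random() < loudness:
                pos[i] = cand                       # accept only improving moves
                if f(pos[i]) < f(best):
                    best = pos[i][:]
    return best, f(best)

sphere = lambda x: sum(v * v for v in x)
best, fbest = bat_algorithm(sphere)
print(fbest)
```

The full algorithm also adapts loudness and pulse rate per bat over time; those schedules are omitted here to keep the sketch short.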

GP, proposed by John Koza in 1992 [4], is an extension of genetic algorithms that differs from the GA in the representation of the solution. GP uses an indirect encoding of a potential solution, and the solution can be a computer program. The basic difference between GP and GA is that GA uses fixed-length encoding, whereas GP employs variable-length encoding. In genetic programming, the individuals in the population are compositions of functions and terminals appropriate to the particular problem domain. The set of functions used typically includes arithmetic operations, mathematical functions, conditional logical operations, and domain-specific functions.
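Variable-length encoding in GP is naturally modelled as expression trees built from a function set and a terminal set. The sketch below, which represents trees as nested tuples, is purely illustrative:

```python
import operator
import random

# A GP individual is a variable-size expression tree: internal nodes are
# functions, leaves are terminals (the variable "x" or constants).
FUNCS = {"add": operator.add, "sub": operator.sub, "mul": operator.mul}

def evaluate(tree, x):
    """Recursively evaluate an expression tree at input x."""
    if tree == "x":
        return x
    if isinstance(tree, (int, float)):
        return tree
    op, left, right = tree
    return FUNCS[op](evaluate(left, x), evaluate(right, x))

def random_tree(rng, depth=2):
    """Grow a random tree; variable length arises from random depth choices."""
    if depth == 0 or rng.random() < 0.3:
        return rng.choice(["x", 1, 2])
    return (rng.choice(list(FUNCS)),
            random_tree(rng, depth - 1),
            random_tree(rng, depth - 1))

# The tree ("add", ("mul", "x", "x"), 1) encodes the program x*x + 1.
print(evaluate(("add", ("mul", "x", "x"), 1), 3))  # 10
print(random_tree(random.Random(0)))
```

Crossover in GP swaps subtrees between two such trees, which is why offspring can differ in size from their parents, unlike fixed-length GA strings.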

In (Al-Taharwa et al., 2008), a genetic algorithm was applied in a static environment where the positions of the obstacles were known to the robot prior to its movement. A simplified fitness function was used, employing the path length to determine the best individual in a generation; it was found that the genetic algorithm converges irrespective of the population size used. A modified particle swarm optimization was applied to the path-finding problem of a mobile robot by (Yarmohamadi et al., 2011). A penalty function was proposed for constraint handling, enabling the robot to find the shortest path to the destination by observing the size and position of the obstacle blocking its trajectory. This approach prevents the process from becoming trapped in a local optimum and ensures that a path is always found if one exists.
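The penalty-function idea (add a large cost for constraint violations so that obstructed paths lose to clear ones) can be sketched as follows. The circular obstacles and per-waypoint check are simplifying assumptions; real implementations check whole path segments:

```python
import math

def path_length(pts):
    """Total Euclidean length of a polyline given as (x, y) waypoints."""
    return sum(math.dist(a, b) for a, b in zip(pts, pts[1:]))

def path_fitness(pts, obstacles, penalty=100.0):
    """Penalty-based fitness (lower is better): path length plus a fixed
    penalty for each waypoint lying inside a circular obstacle (cx, cy, r)."""
    hits = sum(1 for p in pts for (cx, cy, r) in obstacles
               if math.dist(p, (cx, cy)) < r)
    return path_length(pts) + penalty * hits

obstacle = [(5.0, 5.0, 2.0)]
straight = [(0, 0), (5, 5), (10, 10)]    # cuts through the obstacle
detour   = [(0, 0), (8, 2), (10, 10)]    # goes around it
print(path_fitness(straight, obstacle), path_fitness(detour, obstacle))
```

With the penalty dominating the length term, any collision-free path outranks a shorter colliding one, so the optimizer is steered toward feasible routes without a hard constraint.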

B. Other Interaction Networks in Evolutionary Algorithms
In this initial investigation of self-organizing interaction networks in an evolutionary algorithm, we have intentionally focused on the simple but important interactions associated with competition and survival. There are other interaction types that are highly relevant and worth studying, such as the interactions associated with multi-parent search operations. At a smaller scale, one could also consider the evolution of interaction networks between genes within individual population members. Such work could follow the more traditional path of self-adaptation to create advanced search operators, or it could venture into less-explored territory such as indirect gene expression (e.g. via gene regulatory networks).
