A wireless sensor network (WSN) consists of a group of sensor nodes that sense data and convey the sensed information to a base station. Sensor nodes have finite energy and memory, so extending node lifetime requires minimizing communication. Data aggregation is a useful method for both goals: it combines and summarizes data from multiple nodes before the result is conveyed over the wireless medium. Because sensor nodes are often deployed in hostile and inaccessible areas, security is a necessary factor; the possibility of attacks makes confidentiality and integrity of the data essential, so data aggregation must be secured to obtain reliable data. This paper discusses a multiobjective optimization and metaheuristic approach to providing secure data aggregation.

In this paper, we have presented a metaheuristic, bio-inspired approach to selecting the best features for detecting intrusions. We have studied the impact of features selected using the BAT algorithm, analyzed the performance of a Neural Network-RIPPER ensemble classifier on the KDDCup'99 dataset, and compared it with a Neural Network-Decision Tree classifier. RIPPER uses the rules trained by the neural network and then learns a rule for a given class; it is used to maximize the information gain and the number of rules covering the non-negative examples. The results show that the neural network with RIPPER achieves better classification accuracy, the highest detection rate, and the lowest false alarm rate among the intrusion detection systems compared.

tion problem and they obtain better results than most local optimization methods. Sun and Huang [15] presented a genetic algorithm for polygonal approximation; an optimal set of dominant points can be found, but the method appears to be time-consuming. Yin [17] focused on computational effort and proposed a tabu search technique to reduce the computational cost and memory used in polygonal approximation. Horng [18] proposed a dynamic programming approach that improves the fitting quality of polygonal approximation by combining dominant point detection with dynamic programming. Generally, the quality of the approximation result depends on the initial condition from which the heuristics start and on the metric used to measure curvature. Metaheuristic techniques have been introduced to solve complex combinatorial optimization problems. Fred Glover [19] first coined the term metaheuristic for a strategy that guides another heuristic to search beyond local optimality, so that the search does not get trapped in local optima. Metaheuristic techniques combine two components in one framework: an exploration strategy and an exploitation heuristic. The exploration strategy searches for new regions and, once it finds a good region, the exploitation heuristic intensifies the search in that area. In this context, metaheuristics encompass several well-known approaches such as genetic algorithms (GA), simulated annealing, tabu search (TS), scatter search, ant colony optimization, and particle swarm optimization. Most of the central metaheuristic methods have been applied to polygonal approximation problems and have attained promising results.


interchangeable system. The precision decreases from the first group to the sixth group. The only benefit of the existing method is that precision assemblies can be segregated. In selective assembly, the corresponding groups are assembled interchangeably; the clearance variation for each assembly group is 7 µm, but for the population it is 42 µm. Instead of assembling corresponding selective groups, the optimum selective group combination is obtained using metaheuristic techniques. In selective grouping, the number of components in each group is not the same. In the proposed method, all the components are assembled in three stages, so there are no surplus parts. The assembly clearance variation achieved is smaller than in interchangeable systems, and a high-precision assembly is achieved with the same components through the better combination obtained by the proposed method.

The algorithms proposed for CC selection [14-16] are listed here. A G-factor-based CC selection algorithm is proposed in [14]; it targets the aggregation of inter-band non-contiguous CCs and takes into account both radio channel characteristics and traffic load. In [15], the authors proposed a received-signal-power-based CC selection algorithm targeting non-contiguous carrier aggregation. The traditional CC selection and scheduling algorithms that we use for comparison are random, round robin, and proportional fair; since we are proposing a joint approach, we treat selection and scheduling as the same step for the purpose of comparison. In random selection/scheduling (RS), the eNB randomly allocates resource blocks (RBs) from the pool of CCs with the goal of equalizing the UEs on each CC. Round robin (RR) distributes the load evenly across all CCs under the constraint of allocating the least number of CCs to the targeted UE. Neither of these techniques is suited to non-contiguous CA scenarios. The authors in [16] propose autonomous CC selection for femtocell networks; however, they employ expected interference management rather than the acquired information that we exploit in this study.
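The round-robin allocation described above can be sketched in a few lines. This is an illustrative toy, not the papers' scheduler: the UE/CC names, the one-RB-per-UE simplification, and the data structures are all assumptions made for the example.

```python
from itertools import cycle

# Toy round-robin (RR) component-carrier scheduler: resource blocks are
# handed out by cycling over the CCs so the load is spread evenly.
def round_robin_allocate(ues, ccs, rbs_per_cc):
    """Assign each UE one resource block, cycling evenly over the CCs."""
    allocation = {ue: [] for ue in ues}
    free = {cc: list(range(rbs_per_cc)) for cc in ccs}
    cc_cycle = cycle(ccs)
    for ue in ues:
        # walk the carrier cycle until a CC with a free RB is found
        for _ in range(len(ccs)):
            cc = next(cc_cycle)
            if free[cc]:
                allocation[ue].append((cc, free[cc].pop(0)))
                break
    return allocation

alloc = round_robin_allocate(["UE1", "UE2", "UE3"], ["CC1", "CC2"], rbs_per_cc=2)
```

Random selection (RS) would differ only in replacing the cycle with a random draw from the CCs that still have free RBs.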


We cannot imagine life without electricity; electrical energy is required in every corner of life. An electrical power system consists of generation, transmission, and distribution: power is generated at the generation side and delivered to customers over transmission and distribution lines made of conductors. In transmission lines the resistance-to-reactance ratio (R/X) is low, while in distribution lines it is high; because resistance dominates reactance in distribution systems, real power losses are higher there. In India, average transmission and distribution (T&D) losses have been officially indicated as 23 percent of the electricity generated. However, according to sample studies carried out by independent agencies, including The Energy and Resources Institute (TERI), these losses are estimated to be as high as 50 percent in some states. Network reconfiguration and capacitor placement are two major methods for reducing real power loss and improving the voltage profile. With ongoing improvements in technology, the penetration of distributed generation (DG) into the grid is increasing. Distributed generation systems are small-scale power generation or storage technologies, typically in the range of 1 kW to 10,000 kW [1]. Proper sizing and placement of DG can play a very important role in reducing distribution power losses and improving the voltage profile, while improper sizing and placement can lead to higher losses and a degraded voltage profile. In the literature, a wide range of methods has been proposed to find the best size and location of DG for loss minimization and voltage profile improvement; these methods can be broadly classified into two categories: analytical approaches and intelligent approaches.
Paper [2] proposes an analytical expression to find the optimal size and power factor of four different types of DG unit at one location. Paper [3] proposes an improved analytical expression for multiple DG placement. Paper [4] discusses an improved analytical method for DG placement in a primary distribution network. A genetic algorithm-based method is proposed in Paper [5] to find the optimal size and location of DG in a distribution system. Other intelligent methods are proposed in [14-18].

For performance evaluation, two series of experiments were conducted: the first series compares the proposed algorithms with the optimal solution on small instances, and the second compares them with a GA metaheuristic on medium and large instances. All the heuristics are coded in MATLAB, and all experiments are performed on a PC with an Intel Core 2 Duo 2.2 GHz CPU and 2 GB RAM. All optimal solutions are obtained with Lingo 9.0. Because of the NP-completeness of the batch sizing and scheduling problem in an FFS, it is very expensive to obtain optimal solutions for medium and large instances; the test problems in Table 4 are therefore limited to small instances. For this purpose, 20 instances are generated, and Table 4 presents the comparison of the exact method and MSGA:

This paper presents a forecasting model optimized by the DEPSO technique for short-term PV power output forecasting of a PV system stationed at Deakin University (Victoria, Australia). DEPSO is a new metaheuristic swarm-based algorithm that efficiently and rapidly addresses global optimization problems. The stochastic nature of the DEPSO algorithm makes the system independent of its power output, and the randomness retained in the search process keeps the metaheuristic robust, reliable, efficient, and straightforward for short-term power forecasting. The limitations of the DE and PSO algorithms, such as the slow convergence rate of PSO and the lack of randomness in DE, are adequately addressed in the hybrid DEPSO technique. The comparison made among the DE, PSO, and DEPSO algorithms shows that the combined evolutionary algorithm outperforms the other two. The RMSE, MAE, MBE, VAR, WME, and MRE values of the forecasting algorithm are reduced to 4.4%, 0.03, −1.63, 0.01, 0.16, and 3.1%, respectively, when DEPSO is used under a 1 h time horizon, whereas these values reach 14.2%, 0.05, −3.67, 0.03, 0.19, and 9.2% for PSO and 9.4%, 0.06, −8.25, 0.064, 0.2, and 6.3% for DE under the same horizon. A comparison under different time horizons is highlighted in Table 5. Traditional methods such as regression and autoregressive moving average models lack non-linear fitting capability, a drawback addressed by the proposed model. Finally, the use of the hybrid DEPSO metaheuristic in short-term forecasting is supported by its simplicity, robustness, and novelty of implementation. DEPSO is also more computationally efficient


To put it simply, solutions of interest can be divided into two categories: the feasible solutions of interest (FoI) and the infeasible solutions of interest (IoI). The FoI consists of feasible solutions which are suboptimal but have attractive resource usage distributions. The IoI consists of infeasible solutions with better objective values but small violation(s) of resource constraints. While it is always possible to obtain information about shadow prices, reduced costs, etc. for linear programming problems, traditional approaches yield little such information for problems other than LPs. On thirty generalized assignment benchmark problems (solution spaces ranging between 10^20 and 10^60), the feasible-infeasible two-population genetic algorithm (FI-2Pop GA) was shown to be very effective in finding optimal and near-optimal solutions, and the idea of collecting SoIs from the solutions sampled while searching for the optimum proved effective. The proposed approach is further tested on twenty-five generalized quadratic assignment problems and has consistently proven effective in providing SoIs. The provided SoIs offer useful insight that is otherwise normally unavailable to decision makers.
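The two-population idea can be illustrated on a toy constrained problem. This is a minimal sketch of the FI-2Pop mechanism only (feasible individuals ranked by objective, infeasible individuals ranked by smallness of violation, both populations breeding); the toy objective, mutation operator, and all parameters are assumptions, not the benchmark problems of the study.

```python
import random

# Toy problem: maximize sum(x) subject to sum(x) <= CAP, with x >= 0.
# Feasible and infeasible solutions evolve in separate populations.
random.seed(0)
CAP, N, POP = 10.0, 5, 20

def objective(x):           # to maximize
    return sum(x)

def violation(x):           # 0 means feasible
    return max(0.0, sum(x) - CAP)

def mutate(x):
    y = list(x)
    i = random.randrange(N)
    y[i] = max(0.0, y[i] + random.uniform(-1, 1))
    return y

feasible, infeasible = [], []
for _ in range(2 * POP):
    x = [random.uniform(0, 3) for _ in range(N)]
    (feasible if violation(x) == 0 else infeasible).append(x)

for _ in range(200):
    parents = feasible + infeasible
    children = [mutate(random.choice(parents)) for _ in range(POP)]
    for c in children:
        (feasible if violation(c) == 0 else infeasible).append(c)
    # feasible ranked by objective; infeasible by (small) violation
    feasible = sorted(feasible, key=objective, reverse=True)[:POP]
    infeasible = sorted(infeasible, key=violation)[:POP]

best = feasible[0]
```

The near-feasible individuals kept in the second population are exactly the kind of solutions the paper harvests as IoI.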


Metaheuristic approaches to feature selection can be categorized into two types: single-solution-based metaheuristics (SBM) and population-based metaheuristics (PBM). SBM manipulate and transform a single solution using search algorithms such as hill climbing [10], simulated annealing [11], and tabu search [12]. PBM, on the other hand, rely on optimization algorithms such as the genetic algorithm [13-15], ant colony optimization [16, 17], and particle swarm optimization [18, 19]. Population-based metaheuristic optimization algorithms iteratively improve a population of solutions as follows: first, the population is initialized; then a new population of solutions is generated; next, the new population is integrated into the existing one using selection procedures; finally, the search terminates when a stopping criterion is satisfied.
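The four-step PBM loop above can be sketched generically. The toy objective (minimize a sum of squares), the perturbation operator, and all parameter values are assumptions for illustration, not any specific algorithm from the cited works.

```python
import random

# Generic population-based metaheuristic loop:
# initialize -> generate -> integrate by selection -> repeat until done.
random.seed(1)

def fitness(x):
    return sum(v * v for v in x)          # lower is better

def perturb(x):
    return [v + random.uniform(-0.5, 0.5) for v in x]

# 1. initialize the population
population = [[random.uniform(-5, 5) for _ in range(3)] for _ in range(30)]
start_best = min(fitness(x) for x in population)

for generation in range(100):
    # 2. generate a new population of candidate solutions
    offspring = [perturb(random.choice(population)) for _ in range(30)]
    # 3. integrate offspring into the existing population by selection
    population = sorted(population + offspring, key=fitness)[:30]
    # 4. terminate when the stopping criterion is satisfied
    if fitness(population[0]) < 1e-3:
        break

best = population[0]
```

For feature selection specifically, a solution would be a 0/1 mask over the features and the fitness a classifier's validation score, but the loop structure is unchanged.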


that two overlapping features are acceptable to many researchers if their values are completely correlated, but at the same time it may not be easy to recognize redundant features when a feature is associated with a set of features. According to the definition provided by John and Kohavi, a feature is redundant, and should be removed, if it is weakly relevant and has a Markov blanket within the current feature set. Since irrelevant features need to be removed in any case, they are also discarded under this definition. The existence of thousands of information system programs has complicated the task of extracting useful information from collected data [1]. Feature selection (FS) is one of the most important pre-processing steps because its purpose is to eliminate redundant and irrelevant variables in a data set. FS methods are classified as wrapper and filter methods [2, 3]; more broadly, the approaches proposed for FS fall into three categories: wrapper, filter, and hybrid. In the wrapper method, a predetermined learning model is assumed, and features are selected that improve the performance of that particular learning model; in the filter method, the feature set is analyzed statistically and no learning model is required. Figure 1 shows schematically how wrapper and filter approaches find salient features. The hybrid approach tries to take advantage of the complementary strengths of the wrapper and filter approaches [6, 7].


In this section, MCS is compared with PSO, ABC, and CS. The performance of the proposed scheme is computed by determining fidelity parameters, namely SNR, MSE, and ME, given by Eqs. (12), (13), and (14) respectively. In this paper, the denoising performance of the PSO, ABC, Cuckoo Search (CS), and Modified Cuckoo Search (MCS) algorithms is compared on the basis of SNR, MSE, and ME. For this comparison, the PSO, ABC, CS, and MCS algorithms are executed in MATLAB R2012a. The performance of the various metaheuristic algorithms is shown in Figs. 2-4, and Tables I, II, and III show the results obtained by running the algorithms for input SNRs of 5 dB, 10 dB, and 15 dB respectively at 400 iterations. It was observed that the MCS algorithm gives better SNR, MSE, and ME values than the PSO, ABC, and CS algorithms.
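Since Eqs. (12)-(14) are not reproduced in this excerpt, here is a hedged sketch using the common textbook definitions of the three fidelity parameters: output SNR (in dB), mean squared error (MSE), and maximum error (ME) between a clean signal and its denoised estimate. The paper's exact formulas may differ.

```python
import math

def snr_db(x, y):
    """Output SNR in dB: signal power over residual-error power."""
    signal = sum(v * v for v in x)
    noise = sum((a - b) ** 2 for a, b in zip(x, y))
    return 10.0 * math.log10(signal / noise)

def mse(x, y):
    """Mean squared error between clean x and estimate y."""
    return sum((a - b) ** 2 for a, b in zip(x, y)) / len(x)

def max_error(x, y):
    """Maximum absolute sample-wise error (ME)."""
    return max(abs(a - b) for a, b in zip(x, y))

clean = [1.0, 2.0, 3.0, 4.0]
denoised = [1.1, 1.9, 3.0, 4.2]
```

A better denoiser drives MSE and ME down and SNR up, which is the direction of the comparison reported in Tables I-III.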

Cosmologies are developed by physicists and philosophers to explain our experiences of the evolving cosmos. Intelligent deep-learning metaheuristics provide original frameworks for cosmologies which are founded on quantum information. Mathematical standard models of physical cosmology and particle physics formalize an abundance of observations, yet there is no scientific consensus about how these models include our conscious experiences and fundamental philosophies of information. Furthermore, Naturalness in physics is coupled to the related problem of fine-tuning. To address these foundational problems, within the quantum information paradigm, whilst aligning with standard scientific models, I introduce a topological deep-learning cosmology metaheuristic. Braided, 3-coloured world-strands are proposed to be the fundamental quantum information tracts (ethereal fibre bundles) of our evolving Triuniverse. This Braided Loop Metaheuristic comprises eternally evolving deep-learning feedback loops of superposed, braided, 3-coloured quantum information world-strands, which process (in 3-level qutrit states) foundational properties coined Algebrus (labelled red), Algorithmus (labelled green) and Geometrus (labelled blue). Braids split from 1→2→3 (in knot representation respectively: closed loop→trefoil knot→Borromean loops) and then combine from 3→2→1 to form eternally evolving deep-learning loops. This cosmology metaheuristic simultaneously incorporates initial Laws of Form; Emergentism (from substrate Mathematics, through Quantum Physics to Life); Consciousness (as a superposed triunity of Implicate Order, Process Philosophy and Aesthetic Relationalism); Reductionism (from Life, through Quantum Physics to Pure Mathematics expressed as Logical Axioms, Laws of Parsimony and Ideal Form); and the Braided Loop Metaheuristic reboots its eternal cycle with the initial Laws of Form.
An agent's personal anthropic Braided Loop Metaheuristic represents one of many worlds, a meridional loop in a multiverse with horn-torus topology, where Nature's physical parameters vary equatorially. Fundamental information processing is driven by ψ-Epistemic Drive, the Natural appetite for information selected for advantageous knowledge. The meridional loops are ψ-Epistemic Field lines emanating


T. Liao et al. [3] proposed a new approach to ant colony optimization (ACO), a probabilistic technique for solving computational problems. ACO has been used for task scheduling in cloud and grid computing, among other applications, but it does not always achieve good performance because of open problems in pheromone updating and parameter selection; PACO suffers from the same difficulties. To obtain better performance from ant colony optimization, this paper proposes a self-adaptive ant colony optimization that improves on PACO.
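The two ACO components the paragraph calls hard to tune can be sketched as follows. This shows the standard probabilistic edge choice (weighted by pheromone τ and heuristic desirability η) and the standard evaporate-then-reinforce pheromone update; the parameter values (α, β, ρ) and the tiny two-edge example are assumptions, not taken from the paper.

```python
import random

random.seed(2)
ALPHA, BETA, RHO = 1.0, 2.0, 0.5   # assumed tuning parameters

def choose_edge(edges, tau, eta):
    """Pick an edge with probability proportional to tau^alpha * eta^beta."""
    weights = [tau[e] ** ALPHA * eta[e] ** BETA for e in edges]
    r = random.uniform(0, sum(weights))
    for e, w in zip(edges, weights):
        r -= w
        if r <= 0:
            return e
    return edges[-1]

def update_pheromone(tau, used_edges, quality):
    """Evaporate all trails, then reinforce the edges of the found tour."""
    for e in tau:
        tau[e] *= (1.0 - RHO)
    for e in used_edges:
        tau[e] += quality

tau = {"ab": 1.0, "ac": 1.0}
eta = {"ab": 2.0, "ac": 1.0}
edge = choose_edge(["ab", "ac"], tau, eta)
update_pheromone(tau, [edge], quality=1.0)
```

A "self-adaptive" variant in the spirit of the paper would adjust ALPHA, BETA, or RHO during the run instead of fixing them up front.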

Abstract: For terminals to accommodate the growth in international container transport, they must make significant changes to keep pace with increasing demand. One important way to increase existing terminal capacity is through greater efficiency. In this paper, we consider terminal efficiency from the perspective of simultaneously improving both berth and quay crane scheduling. The approach is applied to a discrete and dynamic berth allocation and crane assignment problem in both mono-objective and multi-objective variants. The problem is solved with a neighborhood metaheuristic called the Extended Great Deluge (EGD), which has produced better results than a genetic algorithm proposed in other works. A simulated annealing (SA) algorithm is also implemented to serve as a basis of comparison on new instances. Both algorithms (EGD and SA) have been applied, in the mono-objective variant, to instances of different sizes based on real-world and generated data. Two new EGD-based multi-objective approaches have been proposed. Computational results are presented and discussed.
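For readers unfamiliar with the metaheuristic named above, here is a sketch of the core Great Deluge acceptance rule that the Extended Great Deluge builds on: a candidate is accepted whenever it is no worse than a "water level" that decays over time. The toy one-dimensional objective and the linear decay schedule are assumptions; the paper's EGD adds extensions not reproduced here.

```python
import random

random.seed(3)

def great_deluge(initial, neighbor, cost, steps=500):
    current = initial
    level = cost(initial)            # start the water level at the
    decay = level / steps            # initial cost; lower it linearly
    best = current
    for _ in range(steps):
        cand = neighbor(current)
        if cost(cand) <= level:      # accept anything under the level
            current = cand
            if cost(cand) < cost(best):
                best = cand
        level -= decay               # the deluge: the level keeps falling
    return best

cost = lambda x: abs(x - 7.0)        # toy objective, minimized at 7
neighbor = lambda x: x + random.uniform(-1, 1)
best = great_deluge(0.0, neighbor, cost)
```

Unlike simulated annealing, acceptance here is deterministic given the level, which is one reason the method has few parameters to tune.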


In [75], Mahdavi and Rahnamayan reinforced the importance of metaheuristic research, noting the specialized conferences, journals, websites, and research groups that have been established. Creating a hierarchical classification, the study identifies two main approaches to solving optimization problems more efficiently: cooperative coevolution (CC) algorithms and non-decomposition methods. According to the authors, the former divides an optimization problem into subcomponents, solves these components independently, and later merges them into an aggregated solution, whereas non-decomposition methods solve the optimization problem as a whole. Highlighting some of the crucial challenges for metaheuristics, the study contends that metaheuristic methods lose efficiency and incur additional computational cost when the dimensionality of the problem at hand increases significantly: the curse of dimensionality grows with problem size and landscape complexity, making the exploration of potential solutions harder. Some gaps and future directions are also presented in the study and discussed in the subsequent sections.
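The cooperative coevolution decomposition described above can be sketched on a toy separable objective: split the decision variables into subcomponents, optimize each in turn while the others are held fixed, and keep the merged vector as the aggregated solution. The sphere objective, the fixed two-group decomposition, and the greedy coordinate search standing in for each subcomponent optimizer are all assumptions for the demo.

```python
def sphere(x):
    """Toy separable objective: minimize the sum of squares."""
    return sum(v * v for v in x)

def optimize_subcomponent(x, indices, step=0.1, iters=200):
    """Greedy coordinate descent on one subcomponent, others held fixed."""
    for _ in range(iters):
        improved = False
        for i in indices:
            for delta in (step, -step):
                trial = list(x)
                trial[i] += delta
                if sphere(trial) < sphere(x):
                    x = trial
                    improved = True
        if not improved:
            break
    return x

x = [3.0, -2.0, 4.0, -1.0]
groups = [[0, 1], [2, 3]]        # decomposition into two subcomponents
for _ in range(3):               # a few cooperative cycles
    for g in groups:
        x = optimize_subcomponent(x, g)

best_value = sphere(x)
```

On non-separable problems the grouping itself matters, which is exactly where the decomposition challenges discussed in the survey arise.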


In this paper, we present two novel approaches to lexical substitution which are knowledge-based, generally language-independent, and use a combination of traditional wordnets and Wiktionary. The first approach uses simulated annealing (Kirkpatrick et al., 1983), which was first proposed for use in WSD by Cowie et al. (1992) but has attracted relatively little attention since then. The second approach uses D-Bees (Abualhaija and Zimmermann, 2016), a relatively new, biologically inspired disambiguation algorithm that models swarm intelligence. Both algorithms are metaheuristic (Talbi, 2009) in that they treat WSD as an optimization problem and modify heuristic (approximate) solu-
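The simulated-annealing idea the first approach relies on can be sketched generically: treat the task as minimizing an energy over assignments, and accept worse moves with probability exp(-Δ/T) as the temperature T cools. The toy integer energy, the geometric cooling schedule, and all parameters are assumptions, not the authors' actual WSD objective.

```python
import math
import random

random.seed(4)

def anneal(initial, neighbor, energy, t0=1.0, cooling=0.99, steps=1000):
    current, best = initial, initial
    t = t0
    for _ in range(steps):
        cand = neighbor(current)
        delta = energy(cand) - energy(current)
        # always accept improvements; accept worse moves with prob e^(-d/T)
        if delta <= 0 or random.random() < math.exp(-delta / t):
            current = cand
        if energy(current) < energy(best):
            best = current
        t *= cooling                 # geometric cooling schedule
    return best

energy = lambda x: (x - 5) ** 2      # toy energy, minimized at 5
neighbor = lambda x: x + random.choice([-1, 1])
best = anneal(0, neighbor, energy)
```

In the WSD setting, a state would assign one sense to each content word and the energy would score the semantic coherence of the whole assignment.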


have been optimized on successively smaller sets. Multilevel programming is a useful approach if the hierarchical order among the objectives is of prime importance and the user is not interested in a continuous trade-off among the functions. However, problems lower in the hierarchy become very tightly constrained and often numerically infeasible, so that the less important objectives have no influence on the final result. Hence, multilevel programming should be avoided by users who desire a sensible compromise among the various objectives. This method is also called the lexicographic method.
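The "successively smaller sets" remark can be made concrete with the standard lexicographic formulation (a common textbook statement, not a formula from this text): with objectives $f_1, \dots, f_k$ ordered by priority, stage $i$ solves

$$\min_{x \in X} \; f_i(x) \quad \text{subject to} \quad f_j(x) \le f_j^{*} \quad \text{for } j = 1, \dots, i-1,$$

where $f_j^{*}$ is the optimal value obtained at stage $j$. Each stage inherits the optima of all higher-priority stages as hard constraints, which is exactly why the feasible set shrinks at every level and low-priority objectives can end up with no room to influence the result.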

The greedy randomized adaptive search procedure (GRASP) [44] was introduced by Resende in 1995. GRASP is a multi-start, iterative procedure consisting of two phases: a construction phase and a local search phase. The construction phase creates a feasible solution by applying a greedy randomized criterion; this solution is then improved during the local search phase until a local minimum is found, and the best overall solution is returned as the result. The construction phase combines two basic ingredients: a dynamic constructive heuristic and randomization. In the constructive heuristic, the elements not yet included in the partial solution are evaluated with a greedy function and the best elements are kept in a so-called restricted candidate list (RCL); through randomization, an element of the RCL is chosen at random and included in the partial solution, yielding a diversity of solutions. GRASP is a memoryless algorithm. Atkinson et al. (1998) applied GRASP to time-constrained vehicle scheduling, describing two forms of adaptive search called local and global adaptation [8]. Fleurent et al. (1999) applied GRASP with adaptive memory to the quadratic assignment problem [45]. Laguna et al. (1999) introduced GRASP with path relinking to improve performance [72]. Prais et al. (2000) introduced Reactive GRASP for a matrix decomposition problem in TDMA traffic assignment; the authors suggested a refinement of GRASP that self-adjusts the parameter values in the construction phase instead of fixing them [98]. Binato et al. (2001) introduced a new metaheuristic approach called greedy randomized adaptive path relinking (GRAPR), which uses generalized GRASP concepts in path relinking to explore different trajectories between two good solutions previously
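The two-phase GRASP scheme described above can be sketched on a toy problem (pick k numbers with maximum sum): a greedy randomized construction restricted to an RCL, then local search by swapping, repeated multi-start with the best overall solution kept. The toy problem, the RCL size, and the iteration counts are assumptions for the demo.

```python
import random

random.seed(5)

def construct(candidates, k, rcl_size=3):
    """Greedy randomized construction: repeatedly pick at random from the
    restricted candidate list (RCL) of the best remaining elements."""
    remaining, solution = list(candidates), []
    for _ in range(k):
        rcl = sorted(remaining, reverse=True)[:rcl_size]
        pick = random.choice(rcl)
        solution.append(pick)
        remaining.remove(pick)
    return solution

def local_search(solution, candidates):
    """Swap elements with unused candidates while the sum improves."""
    unused = [c for c in candidates if c not in solution]
    improved = True
    while improved:
        improved = False
        for i, s in enumerate(solution):
            for j, u in enumerate(unused):
                if u > s:
                    solution[i], unused[j] = u, s
                    improved = True
                    break
    return solution

def grasp(candidates, k, iterations=20):
    """Multi-start: construct + local search, keep the best solution."""
    best = None
    for _ in range(iterations):
        sol = local_search(construct(candidates, k), candidates)
        if best is None or sum(sol) > sum(best):
            best = sol
    return best

best = grasp([1, 9, 4, 7, 2, 8, 3, 6], k=3)
```

Note that no information flows between restarts, which is the "memoryless" property mentioned above; Reactive GRASP and path relinking were introduced precisely to add such memory.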


Effective regression testing is a trade-off between the number of regression tests needed and their cost. The greater the number of regression tests, the more complete the program revalidation; however, this also requires a large budget and greater resources, which may not be affordable in practice. In this paper, several techniques for minimizing the cost of regression testing are described and their relative abilities examined. The analysis indicates that the Greedy algorithm performs worse overall than the Additional Greedy, 2-Optimal, and genetic algorithms, and that the 2-Optimal algorithm overcomes the weaknesses of the Greedy and Additional Greedy algorithms. It can also be concluded that ABC outperforms the other approaches, i.e. GA, ACO, BCO, and PSO, in the test suite optimization process. As future work, different versions of ABC should be applied to minimizing the cost of regression testing, and an analytical study can be conducted to find the ABC version that best approaches a globally optimal solution; the performance of ABC can also be compared with other metaheuristic techniques for efficiency evaluation. A prioritization technique based on the Cuscuta search algorithm has been proposed to find a near-optimal solution that matches the results of the optimal and ACO orderings and improves on the unordered, random, and reverse orders. Cuscuta search can be implemented for regression test case prioritization and compared with existing metaheuristic techniques. Tools that can be used for implementation include MATLAB, Weka, and Java IDEs.
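The Greedy/Additional Greedy distinction discussed above can be illustrated on a toy coverage matrix: plain Greedy ranks tests by their total coverage once, while Additional Greedy repeatedly picks the test that covers the most not-yet-covered requirements. The test names and coverage data below are invented for illustration.

```python
# Toy requirement-coverage matrix: test id -> requirements it exercises.
coverage = {
    "t1": {"r1", "r2", "r3"},
    "t2": {"r1", "r2"},
    "t3": {"r4"},
    "t4": {"r3", "r4"},
}

def additional_greedy(coverage, requirements):
    """Select tests until all requirements are covered, always taking the
    test that covers the most still-uncovered requirements."""
    uncovered, selected = set(requirements), []
    while uncovered:
        best = max(coverage, key=lambda t: len(coverage[t] & uncovered))
        if not coverage[best] & uncovered:
            break               # remaining requirements are uncoverable
        selected.append(best)
        uncovered -= coverage[best]
    return selected

suite = additional_greedy(coverage, {"r1", "r2", "r3", "r4"})
```

Here the minimized suite drops t2 and t4 entirely: t1 subsumes t2, and t3 alone closes the gap t1 leaves, which is the kind of cost reduction the compared techniques aim for.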