At present, the role of optimisation techniques in industrial applications has attracted massive attention because of their high accuracy, efficiency and adaptability, which provide high-quality results [25-27]. Optimisation techniques have been widely explored in FLC-based TIM drives for the appropriate tuning of control parameters, resulting in high performance and efficiency [28,29]. Ali et al. [30] introduced a backtracking search algorithm (BSA) based FLC for controlling induction motor speed, thus avoiding the exhaustive traditional trial-and-error (TE) procedure for obtaining MFs. Ranjani & Murugesan [31] proposed a particle swarm optimization (PSO) based FLC to determine the optimal fuzzy parameters for achieving the minimum value of the objective function (OF). Pan et al. [32] developed an optimal FLC utilizing a genetic algorithm (GA) and PSO through the adjustment of control parameters to minimize the OF. Shareef et al. [33] established a lightning search algorithm (LSA) based FLC to overcome the TE process in achieving suitable values of MFs. Mutlag et al. [34] designed an advanced controller using a differential search optimization based FLC to obtain the lowest value of the OF and the best values of MFs. Ochoa et al. [35] deployed Type-1 and Interval Type-2 fuzzy systems to enhance the performance of the differential evolution (DE) algorithm, achieving dynamic adaptation of the mutation parameters as well as optimizing the MFs. Castillo et al. [36] analyzed and compared FLC optimization algorithms including bee colony optimization (BCO), DE, and harmony search algorithms. Melin et al. [37] applied shadowed type-2 fuzzy MFs to reduce the computational cost in control applications. Castillo et al. [38] optimized a generalized type-2 fuzzy logic system with BCO to achieve the optimal configuration of MFs. However, heuristic optimisation
In [22-25], comprehensive reviews of research on human motion analysis are presented. These reviews focus on three key issues in human motion analysis applications, namely human detection, tracking and activity understanding, and present various approaches for each issue. In , the main methods for human activity recognition from 3D data are condensed, with attention to methods that utilize depth data. Broad categories of algorithms are identified based on the use of various features, and the advantages and drawbacks of each algorithm in each category are addressed and analyzed. Most of the existing review papers relevant to abnormal behavior detection concentrate either on a single research area or on a specific application domain. The papers in [1,9,27-29] are related works that organize abnormal detection into multiple categories and discuss algorithms under each category. This review builds upon these works by substantially expanding the search in several directions. Different problems and challenges relevant to abnormal behavior detection algorithms, as well as their specific features, are discussed comprehensively in the next sections. In addition, this study addresses five important evaluation benchmarks of abnormal behavior detection from 2007 to 2017, with their distinct characteristics and limitations, used to evaluate the performance of abnormal detection algorithms. Details about them are given in Section 9.
A missile's steering system is one of the systems that use a Proportional Integral Derivative (PID) controller. The difficulty in using this controller lies in tuning its parameters, because the PID controller combines three control actions. There are many ways to obtain values for the controller's parameters, such as classical methods or evolutionary algorithms. One evolutionary algorithm is the Genetic Algorithm (GA). The GA is a search algorithm based on genetic principles and is commonly used for optimizing systems. In this research, the performance of a controller tuned with a GA is compared with that of a controller tuned with a conventional method, namely Ziegler-Nichols (Z-N), in order to optimize the missile's steering system. The simulation results show that the PID controller obtained using the GA drives the system towards the setpoint faster than the PID controller obtained using the Z-N method. Furthermore, the PID parameters obtained from the GA make the system more robust than the parameters from Z-N.
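For reference, the PID control law being tuned can be sketched in discrete time as follows. This is a minimal sketch only: the gains, time step and first-order plant below are illustrative assumptions, not the paper's missile model.

```python
# Minimal discrete PID controller sketch. Gains, time step and the
# first-order plant are assumptions for illustration, not the paper's.

class PID:
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def step(self, setpoint, measurement):
        error = setpoint - measurement
        self.integral += error * self.dt                 # I term accumulates error
        derivative = (error - self.prev_error) / self.dt # D term uses error slope
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# Drive a toy first-order plant y' = -y + u towards setpoint 1.0.
pid = PID(kp=2.0, ki=1.0, kd=0.1, dt=0.01)
y = 0.0
for _ in range(2000):
    u = pid.step(1.0, y)
    y += (-y + u) * 0.01
print(round(y, 3))
```

Tuning, whether by Z-N rules or a GA, amounts to choosing `kp`, `ki` and `kd` so that this loop settles quickly without excessive overshoot.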
A key management scheme was presented in the literature for providing security in MANETs. However, it does not improve the trust values of the nearby nodes in the network. Key-authentication-based secure routing was designed to provide security in the network; however, the security level was not sufficient. Therefore, the proposed SO-SKA technique uses spectral key authentication to obtain a higher security level. In addition, a certificate exchange scheme was developed for MANETs, but security was not improved. Therefore, the SO-SKA technique performs key authentication by certificate exchange to achieve higher security. However, during transmission, communication overhead may occur when a larger number of data packets is transmitted in the network, which degrades network performance. SO-SKA minimizes the computational cost of providing security in MANETs, while its security depends on the hardness of the spectral key management problem. However, the secure authentication rate is not at the required level, and the energy efficiency of the network is not considered.
In this paper, Fong compares the performance measures of BP with those of NB and HMM and shows that the performance measures of BP are better than those of NB and HMM. To increase the efficiency and accuracy of machine learning and neural networks, researchers use many feature selection algorithms, namely the minimal-redundancy maximal-relevance criterion (mRMR), feature weighting algorithms and subset search algorithms, all of which evaluate the goodness of features individually or through feature subsets. In the literature, several unsupervised feature selection methods have been proposed, where various criteria have been used to obtain a new structure of the original data. Some of these works are spectral feature selection (SPEC), discriminative feature selection, Multi-Cluster Feature Selection (MCFS), feature selection using an oppositional-based binary kidney-inspired algorithm, and feature selection using the trace ratio criterion. In this paper, a new unsupervised feature selection method has been developed using an auto-encoder; since it has the capacity to learn the input features without labeled data, an auto-encoder is ideal for unsupervised feature selection. The aim of an auto-encoder is to reconstruct a set of data, typically for the purpose of increasing efficiency and reducing dimensionality; it automatically learns features from unlabeled data and reconstructs a useful representation of them. In this paper, we show that the auto-encoder reconstructs data better than other feature selection techniques and consequently improves the processing result.
3. PROPOSED WORK
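The auto-encoder idea can be sketched with a minimal single-hidden-layer network trained by gradient descent to reconstruct its own input. The architecture, synthetic data and hyper-parameters below are illustrative assumptions, not the paper's design.

```python
import numpy as np

# Minimal single-hidden-layer auto-encoder trained by gradient descent.
# Data, architecture and learning rate are illustrative assumptions.

rng = np.random.default_rng(0)
latent = rng.normal(size=(200, 4))                          # 4 underlying factors
X = np.hstack([latent, latent @ rng.normal(size=(4, 4))])   # 8 redundant features

W1 = rng.normal(scale=0.1, size=(8, 4))   # encoder weights
W2 = rng.normal(scale=0.1, size=(4, 8))   # decoder weights

def recon_error():
    return float(np.mean((np.tanh(X @ W1) @ W2 - X) ** 2))

before = recon_error()
for _ in range(500):
    H = np.tanh(X @ W1)        # encode to a lower-dimensional representation
    err = H @ W2 - X           # reconstruction residual
    gW2 = H.T @ err / len(X)   # backpropagate squared error through decoder
    gW1 = X.T @ ((err @ W2.T) * (1 - H ** 2)) / len(X)
    W2 -= 0.01 * gW2
    W1 -= 0.01 * gW1
after = recon_error()
print(after < before)          # reconstruction error decreases with training
```

Because the 8 input features are built from only 4 underlying factors, a 4-unit bottleneck can reconstruct them well, which is exactly the redundancy an auto-encoder-based feature selector exploits.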
This paper presents the generation and prioritization of test cases using two different algorithms: the Genetic Algorithm and Ant Colony Optimization. Both are evolutionary, search-based algorithms, and both are applied to the same example. The performance of both algorithms has been studied, and the results show that the Genetic Algorithm requires a larger number of iterations than Ant Colony Optimization; Ant Colony Optimization gives results faster and in less time than the Genetic Algorithm. Also, in GA there is a need to select the fitness function, the best value for the chromosome population, and the probabilities for the crossover and mutation operators. In ACO the pheromone value and the heuristic value need to be updated, but ACO still takes fewer iterations than GA.
Based on Table 7, it can be observed that there are 16 correct classifications in category 0, i.e. rejected credit decisions, and 3 observations in category 0 are misclassified. In addition, there are 46 correct classifications in category 1, i.e. accepted credit decisions, and 2 misclassifications in category 1. The APER value of the logistic regression model is 7.46%, which shows that the regression model performs well on the classification problem for lending decisions. In addition, Press's Q test is used to assess the stability of the classification. Based on the above results, the Press's Q statistic exceeds the critical value of 3.84, so it can be concluded that the classification on the training data in the third dataset is consistent.
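The figures above can be checked directly. The Press's Q formula used here, Q = (N - nK)² / (N(K - 1)) with N total cases, n correctly classified cases and K groups, is the standard one.

```python
# Worked check of the confusion counts reported above.

correct_0, wrong_0 = 16, 3   # category 0: credit rejected
correct_1, wrong_1 = 46, 2   # category 1: credit accepted

N = correct_0 + wrong_0 + correct_1 + wrong_1   # 67 observations in total
n = correct_0 + correct_1                        # 62 correct classifications
K = 2                                            # two groups

aper = (wrong_0 + wrong_1) / N                   # apparent error rate
press_q = (N - n * K) ** 2 / (N * (K - 1))       # Press's Q statistic

print(f"APER = {aper:.2%}")                        # prints APER = 7.46%
print(f"Press's Q = {press_q:.2f} > 3.84")         # prints Press's Q = 48.49 > 3.84
```

Since 48.49 far exceeds the chi-square critical value of 3.84, the classification is significantly better than chance, matching the paper's conclusion.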
There is now a virtual reality laboratory at CIDETEC, which uses an immersion cabin as its primary tool; the cabin consists of three projectors, mirrors and a structure of three screens, which display virtual environments for educational and simulation purposes. It was noted that the virtual environment used to run the tests did not meet the requirements necessary for optimal performance of the immersion cabin. It was proposed to solve the problem caused by the use of VRML for creating the virtual environment by replacing that tool with one that also allows the use of new hardware devices and improves the visual quality of the represented models. After testing several tools, the decision was made to use Panda3D for the development of the virtual environment, since it can load models created in design tools such as Blender and 3ds Max, allowing optimal use of the endless-road system along with collision detection, and providing a better alternative for the use of virtual environments.
This section deals with the congestion control algorithms widely used in TCP. The TCP sender maintains a congestion window, which records the packets that have been sent but not yet acknowledged by the receiver. The idea of additive increase (AI) is that, when the network is not congested, the congestion window of a TCP source is increased by one packet per round-trip time (RTT). The idea of multiplicative decrease (MD) is that, when the network is congested, the congestion window of a TCP source is decreased to 'd' times the current congestion window size, where d is a constant coefficient less than 1. Slow Start is a congestion control algorithm, so called because the congestion window is increased from one. In the Congestion Avoidance algorithm, the expiry of a retransmission timer or the reception of duplicate ACKs can implicitly signal to the sender that network congestion is occurring. The sender immediately sets its transmission window to one half of the current window size (the minimum of the congestion window and the receiver's advertised window size), but to at least two segments. If congestion was indicated by a timeout, the congestion window is reset to one segment, which automatically puts the sender into Slow Start mode. If congestion was indicated by duplicate ACKs, the Fast Retransmit and Fast Recovery algorithms are invoked. Fast Retransmit means that a TCP source retransmits lost packets without waiting for its retransmission timer to expire; Fast Recovery takes place after Fast Retransmit.
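The AI and MD rules above can be sketched as a per-RTT window update. This is a toy simulation: the event sequence is invented, and d = 0.5 is the factor classic TCP uses on a loss.

```python
# Sketch of AIMD: additive increase of one packet per RTT when the
# network is uncongested, multiplicative decrease by factor d on
# congestion. The event sequence below is invented for illustration.

def aimd(events, d=0.5, cwnd=1.0):
    """events: one flag per RTT, True = congestion signalled."""
    trace = []
    for congested in events:
        cwnd = max(1.0, cwnd * d) if congested else cwnd + 1.0
        trace.append(cwnd)
    return trace

# Ten uncongested RTTs, one congestion event, then five more RTTs.
trace = aimd([False] * 10 + [True] + [False] * 5)
print(trace)  # window grows to 11.0, halves to 5.5, then climbs to 10.5
```

This produces the familiar AIMD "sawtooth": linear growth punctuated by halving on each congestion signal.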
In this research, we proposed different models to recognize the six basic emotions (sadness, joy, surprise, disgust, fear and anger) in Arabic tweets (Syrian dialectal tweets). In order to test our models, we built a balanced dataset of Syrian tweets and manually annotated it. We compared the results of several machine learning algorithms such as SVM, Naive Bayes and CRF, and we also compared the results of our proposed models. In the future, we intend to expand the size of our labeled dataset by acquiring more emotional Arabic Syrian tweets. We will validate this dataset by having three annotators label it, which will enable us to calculate inter-annotator agreement. We also intend to expand our special lexicons (curse words used in the dialect, emotional dialectal words and idioms, etc.) using automatic methods.
The given designs allow algorithms of consecutive cycle balancing to be applied to search for balance states of one layer. For example, we search for an arc for which NB_u ≈ 0 (a sufficiently small number); if there is no such arc, we stop the layer balancing, solve the task NB_u = 0 for the corresponding problem, and execute the algorithm over again. For multilayer systems, the arc search is performed across all layers and within each layer accordingly.
The accurate classification of data is a prime focus in data mining for providing the needed information. Classification is mostly performed by a classifier that examines the feature characteristics of objects and assigns classes according to its trained knowledge. For instance, consider a data set consisting of a collection of records, where each record instance has a set of attributes and one attribute from the set is taken as the class label. Based on the identified class knowledge, a classifier performs the classification of unobserved data objects. The objective of classification is to construct an accurate classifier that handles unobserved data accurately for real-time needs.
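The record/attribute/class setup described above can be illustrated with a minimal classifier sketch. This uses a simple 1-nearest-neighbour rule on invented records; it is not the classifier of any cited work.

```python
# Each record is a tuple of attributes whose last element is the class
# label; a 1-nearest-neighbour rule labels an unobserved record.
# Records and attribute values are invented for illustration.

def predict(training, query):
    """Return the class label of the training record closest to `query`."""
    def dist(rec):
        return sum((a - b) ** 2 for a, b in zip(rec[:-1], query))
    return min(training, key=dist)[-1]

records = [
    (1.0, 1.2, "A"),   # attributes..., class label
    (0.9, 1.0, "A"),
    (5.0, 5.5, "B"),
    (5.2, 4.8, "B"),
]
print(predict(records, (1.1, 1.1)))  # prints A (nearest records are class A)
print(predict(records, (5.1, 5.0)))  # prints B (nearest records are class B)
```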
Images used in the analyses include synthetic and natural images. These images contain varying degrees of contrast and edges of varying length and shape. To each image, three types of noise are added: Gaussian, speckle and salt & pepper. The results of several edge detectors, namely CED, EGT, SMC, NLFS, RKT, and the proposed algorithm, are compared. These algorithms are selected for the comparison based on the literature; each handles noise with a different method, which aids the comparison in this study. For example, CED reduces noise using a Gaussian filter, removing non-maxima edges and then applying hysteresis thresholding. EGT handles noise using a pseudo image generated from Canny and then applying receiver operating characteristic thresholding. SMC applies multiple scales of Canny to handle noise, since noise is not likely to appear at all scales. NLFS is a non-linear edge detector that is robust to impulse noise, and RKT is an example of using a large kernel.
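The three noise models mentioned above can be sketched for a grayscale image in [0, 1]. The variances and corruption ratio below are illustrative assumptions, not the study's settings.

```python
import numpy as np

# Sketch of the three noise models: additive Gaussian, multiplicative
# speckle, and salt & pepper. Noise parameters are assumptions.

rng = np.random.default_rng(0)

def add_gaussian(img, sigma=0.05):
    return np.clip(img + rng.normal(0, sigma, img.shape), 0, 1)

def add_speckle(img, sigma=0.1):
    # Speckle is multiplicative: pixel * (1 + noise).
    return np.clip(img * (1 + rng.normal(0, sigma, img.shape)), 0, 1)

def add_salt_pepper(img, ratio=0.05):
    out = img.copy()
    mask = rng.random(img.shape)
    out[mask < ratio / 2] = 0.0        # pepper: random black pixels
    out[mask > 1 - ratio / 2] = 1.0    # salt: random white pixels
    return out

img = np.full((64, 64), 0.5)           # flat toy image
g = add_gaussian(img)
s = add_speckle(img)
sp = add_salt_pepper(img)
for noisy in (g, s, sp):
    print(noisy.shape, round(float(noisy.mean()), 2))
```

Impulse (salt & pepper) noise corrupts isolated pixels completely, which is why gradient-based detectors suffer on it while the non-linear filtering in NLFS is more robust.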
Even though there is a significant literature in the field of ERP adoption, many vital issues in diverse areas remain sparse, especially for cloud ERPs. However,  argues that organizations which adopt ERPs perform better than non-adopters and yield benefits and expected values less than the risks that may appear. [26;27] argue that organizational culture and national issues should be considered before implementing ERPs.  show that resource availability, functional requirements, IT infrastructure, data security, internet connection, and total cost are very important factors that affect organizations' decisions to adopt cloud ERPs. Some researchers study cloud ERP adoption from a technological aspect, as follows:  study data security and privacy, flexibility, scalability and ease of implementation;  study on-demand IT resources; and  consider accessibility alone. Other researchers examine organizational aspects of cloud ERP adoption:  refer to top management support as an important factor that influences organizations' decisions on whether to adopt cloud ERPs,  studies IT readiness, and  study firm size. In addition, some researchers study cloud ERP drivers from an environmental aspect, such as adequate user and technical support from the provider, policy, government and competitors, and competitive pressure (; ). Also, (; ; ) look at drivers from the vendor aspect, studying the following factors: reputation, customer support, and co-creation of value.
5. CONSTRUCTION OF HFPN MODEL
Matsuno et al.  set out the fundamental notion of HFPN, which permits modeling of biological mechanisms without reformulating any mathematical descriptions or requiring any programming techniques. Formulating HFPN in this manner provides maximum flexibility, such as the modeling of both discrete and continuous processes. In addition, it requires no definition of consumed or produced quantities as functions of the marking. These features explain why HFPN is well suited to biological simulation. Consequently, based on these particular model features, it has been used with Genomic Object Net to represent the genetic regulatory network for the carbon starvation stress response in E. coli.
Leaf disease detection techniques have been proposed by various researchers using contrast enhancement, histogram equalization, HSI colour transformation and noise removal filters for image preprocessing, and edge detection, k-means clustering and masking of the green pixels for image segmentation [2-7]. Feature extraction has been performed using the Spatial Gray-Level Dependence Matrix (SGDM), the Gray-Level Co-occurrence Matrix (GLCM), local colour histograms and colour coherence vectors [8-11]. Different algorithms such as SVM, ANN, k-nearest neighbour, fuzzy logic, probabilistic neural networks and genetic algorithms have been proposed for classification [1-17]. The above-mentioned methods dealt merely with visible-light image processing. However, for the early detection of leaf diseases, it is essential to acquire images with a thermal camera, which offers features such as (i) detection of a disease at early points in time, (ii) differentiation among different diseases, (iii) separation of diseases caused by abiotic stresses, and (iv) quantification of disease severity. These parameters need to be assessed at a level equivalent to or higher than the accuracy attained with standard assessment methods, and with a shorter computation time.
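As a reference for the GLCM features mentioned above, the following sketch builds a co-occurrence matrix for a horizontal offset of one pixel and derives two standard texture features. The image and number of gray levels are toy values; contrast and energy follow the usual Haralick-style definitions.

```python
import numpy as np

# Sketch of GLCM texture features: co-occurrence probabilities for
# pixel pairs one step to the right, then contrast and energy.
# The 4x4 image and 4 gray levels are toy values for illustration.

def glcm(img, levels):
    """Normalized co-occurrence matrix for horizontal offset (0, 1)."""
    m = np.zeros((levels, levels))
    for a, b in zip(img[:, :-1].ravel(), img[:, 1:].ravel()):
        m[a, b] += 1
    return m / m.sum()

img = np.array([[0, 0, 1, 1],
                [0, 0, 1, 1],
                [0, 2, 2, 2],
                [2, 2, 3, 3]])

P = glcm(img, levels=4)
i, j = np.indices(P.shape)
contrast = float(np.sum(P * (i - j) ** 2))   # large for abrupt level changes
energy = float(np.sum(P ** 2))               # large for uniform texture
print(round(contrast, 3), round(energy, 3))  # prints 0.583 0.167
```

In leaf disease work, such statistics computed over the segmented lesion region serve as the feature vector fed to the classifier.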
This manuscript discusses methods for visualizing the architecture of software systems using a combination of reverse engineering tools and restoration of the software system's architecture. The visualization methods and the dependency analysis for software packages are implemented in Java. To use the resulting graph, the relationships between classes inside the analyzed packages and between classes of different packages must be described. The article discusses system visualization using matrices of incoming and outgoing package dependencies, which allow analysis of the existing dependencies between classes within a package and between classes of different packages. Obtaining such information allows us to understand the reasons for the emergence of dependencies between packages, which determine the architecture of the system, and also, where necessary, to refactor the system. The manuscript also describes the ability of the tools to provide the infrastructure for subsequent detection and correction of design errors in software systems and for refactoring. Keywords: Software Visualization, Reverse Engineering, Software Architecture, Dependency, Package.
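The incoming/outgoing dependency matrices described above can be illustrated with a toy example. Class names and edges are invented; rows count a class's outgoing dependencies and columns its incoming ones.

```python
# Toy dependency matrix: M[r][c] == 1 means class r depends on class c.
# Classes and edges below are invented for illustration only.

classes = ["A", "B", "C"]
deps = {("A", "B"), ("A", "C"), ("B", "C")}   # A uses B and C; B uses C

n = len(classes)
M = [[1 if (classes[r], classes[c]) in deps else 0 for c in range(n)]
     for r in range(n)]

outgoing = {cls: sum(M[r]) for r, cls in enumerate(classes)}             # row sums
incoming = {cls: sum(M[r][c] for r in range(n)) for c, cls in enumerate(classes)}  # column sums
print(outgoing)   # prints {'A': 2, 'B': 1, 'C': 0}
print(incoming)   # prints {'A': 0, 'B': 1, 'C': 2}
```

A class with many incoming dependencies (like C here) is a refactoring hot spot: changes to it ripple to every dependent class, which is exactly what the matrices make visible.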
The first of the two phases is the training phase, which is used to generate the training data set. In the mentioned control approach, the actual torque and the change in torque of the motor are generated in the form of vectors, and these data are provided to the neural network. The data are then trained using the back-propagation training algorithm with respect to the actual motor torque. The trained data are then applied to the fuzzy inference system to generate the fuzzy control rules. In ANFIS, the fuzzy rule-based control inference system is generated automatically.
The importance of loss minimization for induction motor drives may be appreciated from different perspectives: as far as energy consumption is concerned, electric motors consume more than 50% of the electrical energy produced, and of this the major share goes to induction motors, the main workhorse of industry. They are also the most widely used machines in electrical drives because of their robustness, ruggedness, reliability and low cost. Despite the many advantages of the induction motor, there are also some disadvantages: it is not a true constant-speed motor, with slip varying from less than 1% to more than 5%, and it is not capable of providing high efficiency and low losses. But since it is so useful for industry, a solution to these limitations must be found, and that solution is a loss minimization controller that can take the necessary actions to reduce motor losses. Such a controller can not only reduce losses but also control various parameters of the induction machine, such as flux, torque, voltage and stator current. Out of the several procedures for loss minimization of an induction motor drive, voltage reduction when the motor is operating at light load is found to be an effective method for obtaining reduced power losses. In general, an electrical motor drive incurs losses such as converter loss, motor loss and transmission loss. Thus, in an effort to minimize losses and improve the efficiency of induction motor drives with different techniques, the design and construction of loss minimizing schemes are needed. This work presents loss minimization model based techniques for induction motor drives. Loss minimization efforts can be made through improved design of the motor and converter and by introducing better control techniques.
Abstract: In this paper, a genetic algorithm based self-tuned neuro-fuzzy controller (NFC) for the speed control of an induction motor drive (IMD) is presented. The normalization parameters and the membership functions of the fuzzy controller are translated into binary bit strings, which are processed by the genetic algorithm (GA) in order to be optimized with respect to the fitness (objective) function. In the proposed NFC system, a fuzzy logic and artificial neural network (ANN) structure based on a genetic algorithm scheme is used. Only the speed error is given as input to the proposed NFC, unlike conventional NFCs, which employ both the speed error and its derivative as inputs. A genetic algorithm based NFC for indirect vector control of an induction motor is simulated in order to verify the validity and reliability of the proposed NFC method. The simulation results show a very significant improvement in development time and system performance of the proposed NFC over a conventional NFC. In practical applications, the proposed genetic algorithm based NFC has a lower computational burden and is easier to implement in Simulink. The effectiveness of the proposed NFC based induction motor drive is tested at various operating conditions using MATLAB/SIMULINK software.
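The bit-string encoding and GA loop described in the abstract can be sketched as follows. This is a generic GA over a single parameter: the quadratic fitness function, the parameter range and the GA settings are illustrative stand-ins for the paper's objective function and controller parameters.

```python
import random

# Generic bit-string GA sketch: encode a parameter as a binary string,
# decode it to a real value, and evolve the population against a
# fitness function. Fitness, range and GA settings are assumptions.

random.seed(1)
BITS = 12

def decode(bits, lo=0.0, hi=10.0):
    """Map a bit string to a real-valued parameter in [lo, hi]."""
    return lo + int(bits, 2) * (hi - lo) / (2 ** len(bits) - 1)

def fitness(bits):
    x = decode(bits)
    return -(x - 7.3) ** 2          # toy objective: optimum assumed at 7.3

def crossover(a, b):
    cut = random.randrange(1, BITS)
    return a[:cut] + b[cut:]        # single-point crossover

def mutate(bits, rate=0.02):
    return "".join(b if random.random() > rate else str(1 - int(b)) for b in bits)

pop = ["".join(random.choice("01") for _ in range(BITS)) for _ in range(30)]
for _ in range(60):
    pop.sort(key=fitness, reverse=True)
    parents = pop[:10]              # keep the ten fittest (truncation selection)
    pop = parents + [mutate(crossover(random.choice(parents),
                                      random.choice(parents)))
                     for _ in range(20)]

best = max(pop, key=fitness)
print(round(decode(best), 1))       # converges towards the assumed optimum
```

In the paper's setting, the decoded values would be the normalization parameters and membership-function positions, and the fitness would score the simulated speed response of the drive.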