Fuzzy modelling is considered one of the main techniques in computational intelligence. It is widely known for its ability to represent systems with a semantic description. Fuzzy systems are designed to produce a rule base composed of many fuzzy rules (i.e. IF-THEN rules). Natural language is used to express the terms involved in these rules. In fact, the reasoning form of fuzzy rules expressed in human language offers a significant feature that helps users, who are in charge of making crucial decisions, understand how the systems' outputs are derived. From the users' view, this feature (i.e. interpretability), which is provided by fuzzy set theory, grants any created system greater reliability.
Plamen P. Angelov (F'16, SM'04, M'99) holds a Personal Chair (full Professorship) in Intelligent Systems with the School of Computing and Communications, Lancaster University, UK. He obtained his Ph.D. (1993) and his D.Sc. (2015) from the Bulgarian Academy of Sciences. He is the Vice President of the International Neural Networks Society, a member of the Board of Governors of the IEEE Systems, Man and Cybernetics Society, and a Distinguished Lecturer of the IEEE. He is Editor-in-Chief of the Evolving Systems journal (Springer) and Associate Editor of the IEEE Transactions on Fuzzy Systems as well as of the IEEE Transactions on Cybernetics and several other journals. He has received various awards and is internationally recognized for pioneering results in on-line and evolving methodologies and algorithms for knowledge extraction in the form of human-intelligible fuzzy rule-based systems and autonomous machine learning. He holds a wide portfolio of research projects and leads the Data Science group at Lancaster.
In this paper, a novel on-line evolving fuzzy clustering method, called EFCM, that extends the evolving clustering method (ECM) of Kasabov and Song (2002) is presented. Since it is an on-line algorithm, the fuzzy membership matrix of the data is updated whenever an existing cluster expands or a new cluster is formed. EFCM does not need the number of clusters to be pre-defined. The algorithm is tested on several benchmark data sets, such as Iris, Wine, Glass, E-Coli, Yeast and Italian olive oils. EFCM yields the lowest objective function value compared to ECM and Fuzzy C-Means. It is significantly faster (by several orders of magnitude) than any of the off-line batch-mode clustering algorithms. A methodology is also proposed for using the Xie-Beni cluster validity measure to optimize the number of clusters.
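The on-line update loop described above can be sketched as follows. This is a generic evolving-clustering skeleton in the spirit of ECM/EFCM, with a hypothetical distance threshold `dthr` standing in for the algorithm's actual cluster-radius logic; it is not the exact EFCM update rule.

```python
import numpy as np

def evolving_cluster(stream, dthr=0.5):
    """Minimal sketch of an on-line evolving clustering loop.

    `dthr` is an illustrative distance threshold: a sample farther than
    `dthr` from every existing centre spawns a new cluster; otherwise it
    is absorbed into the nearest cluster, whose centre is updated
    incrementally. New clusters are created without pre-defining their
    number, mirroring the ECM/EFCM idea.
    """
    centers, counts, labels = [], [], []
    for x in stream:
        if not centers:
            centers.append(np.array(x, dtype=float))
            counts.append(1)
            labels.append(0)
            continue
        d = [np.linalg.norm(x - c) for c in centers]
        j = int(np.argmin(d))
        if d[j] <= dthr:                                 # fits an existing cluster
            counts[j] += 1
            centers[j] += (x - centers[j]) / counts[j]   # incremental mean update
            labels.append(j)
        else:                                            # starts a new cluster
            centers.append(np.array(x, dtype=float))
            counts.append(1)
            labels.append(len(centers) - 1)
    return centers, labels
```

A single pass over the stream suffices, which is what makes this family of methods attractive for on-line settings; the real EFCM additionally maintains a fuzzy membership matrix over the evolving clusters.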
[14,43] and the more general and philosophical concept of knowledge generation or discovery from the data. Both refer to the non-trivial process of identifying valid and understandable/interpretable structure in the data. In this respect, system identification is meant in this paper mostly as model structure identification rather than the more limited, and in practice more common, parameter identification. One can note that parameter identification under a fixed model structure is nothing more than adjustment or tuning, and thus has obvious limitations related first of all to the choice of the model structure. Since data streams are often non-stationary, it is logical to assume that the structure of the data is also dynamic, that is, that it evolves. An e-ntelligent system continuously learns from new data and integrates this data with its existing models. It develops its structure and functionality continuously, always adapting and modifying its knowledge representation. The e-ntelligent system approach is demonstrated here through two system modelling techniques that the authors have introduced recently and are continuing to develop, namely the evolving connectionist systems (ECOS) and evolving fuzzy systems [12,13] (EFS).
Clustering is an active research area, since it is widely used across different sciences as a solution technique. In recent years the method has been optimized, and the results of these optimizations have been published. The goal of optimization is to obtain the minimum number of repetitions and clusters with the most similar members. In this paper a comparison is made between two common algorithms for recognizing DoS attacks. The results derived from the K-means algorithm are better at identifying these kinds of attacks. We should mention that these results are not absolute: with different fields chosen for the study, it becomes clear that fuzzy k-means performs better than k-means.
Once the local regions (clusters) are elicited, they are projected from the high-dimensional space onto the one-dimensional axes to form the fuzzy sets serving as antecedent parts of the rules. Hereby, one cluster is associated with one rule. A visualization of this projection concept is shown in Figure 5 (a three-dimensional example, visualized as a ground plan), where three two-dimensional clusters are projected onto the two input axes (the output axis is the third dimension), forming the antecedent parts. The (linear) consequent parameters are estimated by a local learning approach, that is, for each rule separately. This is also because it has been reported that local learning has favourable advantages over global learning (estimating the parameters of all rules in one sweep), such as smaller matrices to be inverted (hence more stable and faster), a better interpretation of the consequent functions (as local piecewise hyper-planes snuggling along the real trend of the non-linear surface), and higher flexibility when adjoining new rules on demand (e.g. during an incremental learning phase). The underlying optimisation function is a weighted least squares problem, defined by
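The formula itself is elided above. For orientation, the standard weighted least squares objective used in local learning of Takagi-Sugeno consequents (a reconstruction from the surrounding description, so the notation may differ from the document's) is

```latex
J_i = \sum_{k=1}^{N} \Psi_i(\mathbf{x}_k)\,\bigl(y_k - \hat{y}_i(\mathbf{x}_k)\bigr)^2
      \;\longrightarrow\; \min_{\mathbf{w}_i},
```

where $\Psi_i(\mathbf{x}_k)$ is the normalised membership of sample $\mathbf{x}_k$ to the $i$-th rule, $\hat{y}_i$ is that rule's linear consequent function with parameters $\mathbf{w}_i$, and $N$ is the number of samples. Because each $J_i$ involves only one rule, the matrices to be inverted stay small, which is the stability and speed advantage mentioned above.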
This thesis first started with engineering a complex network by controlling its nodes. The goal was to identify the best drivers that facilitate synchronisation of the network over the widest range of coupling strengths. Central nodes could be good candidates, and heuristic centrality measures such as degree, betweenness or closeness centrality can be considered, although they are not related to the dynamics of networks. We proposed a new controllability centrality to find the best driver node(s). In order to engineer a network for better collective behaviour, this metric was proposed to identify the set of driver nodes most influential on the controllability of a dynamical network. The metric is based on a single eigen-decomposition of the Laplacian matrix of the graph; thus, it is computationally efficient and applicable to large-scale networks. Simulation results demonstrate the precision of this metric in networks with scale-free, Watts-Strogatz and random topologies. Interestingly, controllability centrality shows a sub-modularity feature: by only one eigen-decomposition of the Laplacian matrix, the best subset of nodes of any desired size can be identified. As an application, this metric successfully predicted the best frequency leader in secondary frequency control of distributed generation systems. This is one of the real-time requirements of future power management systems, where there are many small-capacity generators. The metric was also applied to identify brain areas of activation that may prevent disease from propagating in dementia networks. Results for these applications should prove of interest to the network community.
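The thesis' exact controllability-centrality formula is not reproduced in this excerpt; the sketch below only shows the single eigen-decomposition of the graph Laplacian that the metric is built from, which is what makes it cheap enough for large-scale networks.

```python
import numpy as np

def laplacian_spectrum(adj):
    """Eigen-decomposition of the combinatorial graph Laplacian L = D - A.

    `adj` is the (symmetric) adjacency matrix of an undirected graph.
    Since L is symmetric, `eigh` applies and returns real eigenvalues in
    ascending order. Any centrality built on this decomposition can then
    rank driver-node candidates without further expensive linear algebra.
    """
    A = np.asarray(adj, dtype=float)
    L = np.diag(A.sum(axis=1)) - A        # degree matrix minus adjacency
    eigvals, eigvecs = np.linalg.eigh(L)  # one decomposition, reused for all subsets
    return eigvals, eigvecs
```

The sub-modularity property mentioned above means the ranking derived from `eigvals`/`eigvecs` can be queried for any desired driver-set size without recomputing the decomposition.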
Abstract—Gene recruitment or co-option is defined as the placement of a gene under a foreign regulatory system. Such re-arrangement of pre-existing regulatory networks can lead to an increase in genomic complexity. This reorganization is recognized as a major driving force in evolution. We simulated the evolution of gene networks by means of the Genetic Algorithms (GA) technique. We used standard GA methods of (point) mutation and multi-point crossover, as well as our own operators for introducing or withdrawing new genes on the network. The starting point for our computer evolutionary experiments was a minimal 4-gene dynamic model representing the real genetic network controlling segmentation in the fruit fly Drosophila. Model output was fit to experimentally observed gene expression patterns in the early fly embryo. We found that the mutation operator, together with the gene introduction procedure, was sufficient for recruiting new genes into pre-existing networks. Reinforcement of the evolutionary search by crossover operators facilitates this recruitment. Gene recruitment causes outgrowth of an evolving network, resulting in structural and functional redundancy. Such redundancies can affect the robustness and evolvability of networks.
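The two standard GA operators named above can be sketched generically as follows; parameter names and settings are illustrative, not the paper's actual configuration, and the genome is reduced to a flat list of interaction weights.

```python
import random

def point_mutation(genome, rate=0.05, scale=0.1):
    """Point mutation: perturb each interaction weight with probability
    `rate` by Gaussian noise of std `scale` (both hypothetical settings)."""
    return [g + random.gauss(0, scale) if random.random() < rate else g
            for g in genome]

def multipoint_crossover(a, b, n_points=2):
    """Multi-point crossover: cut both equal-length parent genomes at
    `n_points` random positions and alternate segments between parents."""
    points = sorted(random.sample(range(1, len(a)), n_points))
    child, src, last = [], 0, 0
    for p in points + [len(a)]:
        child.extend((a if src == 0 else b)[last:p])
        src, last = 1 - src, p
    return child
```

The paper's gene-introduction and gene-withdrawal operators would additionally grow or shrink the genome itself; they are omitted here since their exact definition is not given in this excerpt.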
ABSTRACT: In recent years, advances in technology have led to the generation of large volumes of data, mainly numerical data, raising interest in processing them to extract knowledge and information. The main objective is to make the systems from which these data have been obtained more efficient and to help in decision making. The information in a database is implicit in the values that represent the different states of the systems, whereas the knowledge is implicit in the relations between the values of the different attributes or characteristics present. These relationships are identified by groups to be discovered and describe the relationships between the input and output states. One of the main human functions is to classify, differentiate and group different objects according to their attributes. This article investigates how to apply fuzzy grouping algorithms, which allow an element to belong to more than one group with a degree of membership, in order to obtain relevant characteristics or recognize patterns in a data set. We discuss a study involving 4 main fuzzy algorithms, explaining each algorithm, how they are related, and how each new algorithm solves problems that the previous one did not solve efficiently.
Evolving systems have been very popular; for instance, in Bordignon and Gomide (2014), a learning approach to train uninorm-based hybrid neural networks is presented. The use of evolving classifiers for activity recognition is described in Garcia-Cuesta and Iglesias (2012) and Ordoñez et al. (2013). In Gomide and Lughofer (2014), Iglesias and Skrjanc (2014), and Lughofer and Sayed-Mouchaweh (2015), novel efficient techniques of evolving intelligent systems are discussed. A dynamic pattern recognition method is introduced in Hartert and Sayed-Mouchaweh (2014). In Iglesias et al. (2015), an approach for classifying huge amounts of different news articles is designed. An evolving method that is able to keep track of computer users is proposed in Iglesias et al. (2014). In Klancar and Skrjanc (2015), a new approach called evolving principal component clustering is addressed. A new clustering method is suggested in Lughofer and Sayed-Mouchaweh (2015). In Lughofer et al. (2015) and Pratama et al. (2015), novel evolving fuzzy rule-based classifiers are addressed. An evolving neural fuzzy modeling approach is constructed in Marques Silva et al. (2014). In Sayed-Mouchaweh and Lughofer (2015), a novel approach to fault diagnosis is studied. Stable systems are characterized by the boundedness criterion, i.e., if bounded algorithm inputs are employed, then the outputs and parameters exponentially decay to a small and bounded zone. In Ahn (2014), the author uses an induced L∞ approach to create a new filter with a finite impulse response structure for state-space models with external disturbances. The model predictive stabilization problem for Takagi–Sugeno fuzzy multilayer neural networks with a general terminal weighting matrix is investigated in Ahn and Lim (2013). In Ahn (2012), an error passivation approach is used to derive a new passive and exponential filter for switched Hopfield neural networks with time-delay and noise disturbance.
Two robust intelligent controllers for nonlinear systems with dead-zone are addressed in Perez-Cruz et al. (2014) and Perez-Cruz et al. (2014). In Torres et al. (2014) and Zdesar et al. (2014), two stable controllers are introduced. However, most of these algorithms operate offline and are
node (cluster center, rule node) less than a certain threshold are allocated to the same cluster. Samples that do not fit into existing clusters form new clusters. Cluster centers are continuously adjusted according to new data samples, and new clusters are created incrementally. ECOS learn from data and automatically create or update a local fuzzy model/function, e.g.:
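The example itself is elided above. The allocation-plus-local-model idea it describes can be sketched as follows; the class name, radius parameter, and the crude local update rule are all hypothetical stand-ins for the ECOS machinery, shown only to illustrate the structure.

```python
import numpy as np

class LocalModelStore:
    """Sketch of an ECOS-style rule-node store: each node keeps a centre
    and a local linear function (w, b). Samples within `radius` of a
    centre update that node; samples that fit no node create a new one."""

    def __init__(self, radius=1.0):
        self.radius = radius
        self.centers = []   # rule-node centres
        self.models = []    # local linear models (w, b), illustrative

    def update(self, x, y):
        x = np.asarray(x, dtype=float)
        for i, c in enumerate(self.centers):
            if np.linalg.norm(x - c) <= self.radius:
                # crude local adaptation: nudge centre, refit intercept
                self.centers[i] = 0.9 * c + 0.1 * x
                w, b = self.models[i]
                self.models[i] = (w, 0.9 * b + 0.1 * (y - w @ x))
                return i
        self.centers.append(x)                       # new rule node
        self.models.append((np.zeros_like(x), float(y)))
        return len(self.centers) - 1
```

Each (centre, local function) pair corresponds to one local fuzzy rule of the form "IF x is near this node THEN y is given by its local function".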
models assign equal importance to all the samples seen so far. On-line model identification is advantageous, especially when convergence to an optimality criterion or a stable state of the model structure can be achieved. However, this only holds for data streams which are generated from the same underlying data distribution and do not show any drift or shift behavior to other parts of the input/output space. Drift (respectively shift) indicates the necessity of (gradual) out-dating of previously learned relationships (in terms of structure and parameters) during the incremental learning process, as they are no longer valid and should hence be eliminated from the model (for instance, consider completely new types of images in a surface inspection system). An alternative to gradual out-dating is the concept of re-learning, which can be done either based on all samples seen so far, providing lower weights for older samples in the learning process, or based on the latest data blocks only. The first variant slows down the learning process significantly over time, such that on-line real-time demands are hardly met. The second variant has the problem that older data is usually completely forgotten when extracting the models from scratch based on the new data blocks, causing a crisp switch between two models (from the old to the new). With gradual forgetting, a smooth transition from an old model to a new one can be achieved instead of an abrupt switch. Drift handling (in connection with gradual forgetting) has already been applied in other machine learning techniques, e.g. in connection with Support Vector Machines (SVMs), ensemble classifiers, and instance-based (lazy) learning approaches. However, to the best of our knowledge, this concept has not yet been applied to fuzzy systems (nor has the concept of re-learning).
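The gradual-forgetting idea contrasted above with abrupt re-learning is typically realised with an exponential weighting of the sample history; a minimal sketch, with an illustrative forgetting factor `lam`:

```python
import numpy as np

def forgetting_weights(n, lam=0.95):
    """Exponential forgetting weights for n samples, oldest first.

    The newest sample gets weight 1 and the k-th newest gets lam**k, so
    old relationships fade smoothly instead of being dropped in a crisp
    switch. `lam` close to 1 forgets slowly; smaller values forget fast.
    """
    return lam ** np.arange(n - 1, -1, -1)
```

Feeding these weights into a weighted least squares (or weighted recursive) estimator down-weights outdated samples without ever re-learning from scratch, which is exactly the smooth old-to-new model transition described above.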
before regulation. (c) First cluster formed after regulation. (B) Clustering. (a) Introduction of a novel data point with no left neighbor. (b) Creation of a new cluster before regulation. (c) Final appearance of the fuzzy partitioning after regulation. (d) Introduction of a novel data point with both left and right neighbors. (e) Creation of a new cluster before regulation. (f) Final appearance of the fuzzy partitioning after regulation (Tung et al., 2011).
Abstract—Evolving fuzzy systems (EFSs) are now well developed and widely used thanks to their ability to self-adapt both their structures and parameters online. Since the concept was first introduced two decades ago, many different types of EFSs have been successfully implemented. However, there are only very few works considering the stability of EFSs, and these studies were limited to certain types of membership functions with specifically pre-defined parameters, which largely increases the complexity of the learning process. At the same time, stability analysis is of paramount importance for control applications and provides the theoretical guarantees for the convergence of the learning algorithms. In this paper, we introduce the stability proof of a class of EFSs based on data clouds, which are grounded in the AnYa type fuzzy systems and the recently introduced empirical data analysis (EDA) methodological framework. By employing data clouds, the class of EFSs of AnYa type considered in this work avoids the traditional way of explicitly defining membership functions for each input variable, and its learning process is entirely data-driven. The stability of the considered EFS of AnYa type is proven through Lyapunov theory, and the proof shows that the average identification error converges to a small neighborhood of zero. Although the stability proof presented in this paper is specially elaborated for the considered EFS, it is also applicable to general EFSs. The proposed method is illustrated with the Box-Jenkins gas furnace problem, one nonlinear system identification problem, the Mackey-Glass time series prediction problem, eight real-world benchmark regression problems, as well as a high frequency trading prediction problem. Compared with other EFSs, the numerical examples show that the considered EFS provides guaranteed stability as well as better approximation accuracy.
Covariance provides a measure of the strength of the correlation between two or more sets of random variables. In general, when a mathematical model of the system can be obtained, the covariance matrices are chosen based on experience or through experiments. However, this can be a daunting process. It is very difficult (if not impossible) to derive accurate mathematical models for many complex systems in mechanical engineering. This leaves the system to be approximated by a Kalman filter (KF). As a consequence, the covariance matrices must also be approximated. This process leaves room for significant error, making the training technique somewhat unreliable. Therefore, a new method to update the process noise and observation error covariance matrices is proposed in this section to improve the robustness of the training technique, which is defined as:
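The definition itself is elided above. For orientation, a widely used innovation-based update of this kind (in the style of Mohamed and Schwarz's adaptive Kalman filtering, not necessarily the authors' exact formula) estimates both matrices from a sliding window of the last $N$ innovations $d_j$:

```latex
\hat{C}_k = \frac{1}{N}\sum_{j=k-N+1}^{k} d_j d_j^{\top}, \qquad
\hat{R}_k = \hat{C}_k - H_k P_{k|k-1} H_k^{\top}, \qquad
\hat{Q}_k \approx K_k \hat{C}_k K_k^{\top},
```

where $\hat{C}_k$ is the empirical innovation covariance, $H_k$ the observation matrix, $P_{k|k-1}$ the predicted state covariance, and $K_k$ the Kalman gain. Updates of this form let $\hat{Q}_k$ and $\hat{R}_k$ track the data when no accurate model-based choice is available.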
It is well known that fuzzy rule-based systems are universal function approximators; they are suitable for extracting interpretable knowledge. Therefore, they are viewed as a promising framework for designing effective and powerful classifiers. The type of classifiers that can be built using the recently introduced evolving fuzzy rule-based systems can be called evolving, which differs from 'evolutionary'. Evolving fuzzy rule-based classifiers develop and adapt the non-linear classification surface in on-line mode. Evolutionary/genetic algorithms have recently been used for the design of fuzzy rule-based systems in general and classifiers in particular. They are based on the off-line optimization of one or more criteria in designing the fuzzy rule base (classifier) using paradigms that stem from Nature, such as mutation, crossover, and reproduction. Evolving, in the sense that we use it in this paper and related works, includes self-organising and self-developing in terms of the classifier (rule-base) structure. In this sense this paradigm can be considered a higher level of adaptation (adaptation is usually related to parameters, not to the structure of the systems). Note that similar principles were used by the authors in developing evolving classifiers in earlier works. The concept is taken further in this paper by analysing different possible architectures of eClass. The backbone of the approach also differs across these works: here we use the evolving fuzzy Takagi-Sugeno (eTS) approach, while in related work we extended FLEXFIS and its modification FLEXFIS-Mod to the classification case (called FLEXFIS-Class), both families originally designed for fuzzy regression modelling tasks. The eTS family of evolving TS models (eTS, MIMO-eTS, exTS) has been recently
Abstract. This paper describes the results of the working group investigating the issues of empirical studies for evolving systems. The group found that there were many issues central to successful evolution, and concluded that this is a very important area within software engineering. Finally, nine main areas were selected for consideration. For each of these areas the central issues were identified, as well as success factors. In some cases success stories were also described and the critical factors accounting for the success analysed. In some cases it was later found that a number of areas were so tightly coupled that it was important to discuss them together.
Fuzzy clustering is a widely applied method for obtaining fuzzy models from data. It has been applied successfully in various fields including finance and marketing. Despite the successful applications, there are a number of issues that must be dealt with in practical applications of fuzzy clustering algorithms. This technical report proposes two extensions to objective-function-based fuzzy clustering for dealing with these issues. First, the (point) prototypes are extended to hypervolumes whose size is determined automatically from the data being clustered. These prototypes are shown to be less sensitive to a bias in the distribution of the data. Second, cluster merging by assessing the similarity among the clusters during optimization is introduced. Starting with an over-estimated number of clusters in the data, similar clusters are merged during clustering in order to obtain a suitable partitioning of the data. An adaptive threshold for merging is introduced. The proposed extensions are applied to the Gustafson–Kessel and fuzzy c-means algorithms, and the resulting extended algorithms are given. The properties of the new algorithms are illustrated in various examples.
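The merging step described above can be sketched as follows. The similarity measure here (normalised inner product of membership rows) and the fixed `threshold` are illustrative choices, not the report's exact measure or its adaptive threshold.

```python
import numpy as np

def merge_similar_clusters(U, threshold=0.8):
    """Sketch of similarity-driven cluster merging.

    U is a (c x n) fuzzy partition matrix: row i holds the memberships of
    all n samples in cluster i. Rows whose cosine similarity exceeds
    `threshold` are fused by summing their memberships, shrinking an
    over-estimated cluster count toward a suitable partition.
    """
    merged = np.asarray(U, dtype=float).copy()
    i = 0
    while i < merged.shape[0]:
        j = i + 1
        while j < merged.shape[0]:
            a, b = merged[i], merged[j]
            sim = a @ b / (np.linalg.norm(a) * np.linalg.norm(b))
            if sim > threshold:
                merged[i] = a + b                      # fuse memberships
                merged = np.delete(merged, j, axis=0)  # drop absorbed cluster
            else:
                j += 1
        i += 1
    return merged
```

In the report's setting this check runs during optimization rather than once afterwards, and the threshold adapts to the data instead of being fixed.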
Based on the analysis, the fuzzy clustering algorithms including FCM, SFCM and PCM are highly dependent on the features used. For example, FCM using PI is a suitable feature for segmenting objects in one type of image, while using PL produces better results for others. In some cases, FCM using CIL shows good segmentation performance [61, 72-76]. This raises an open question: which feature set produces the best segmentation results for which type of image? Addressing this issue, Ameer et al. proposed a new algorithm, merging initially segmented regions (MISR), which merges initially segmented similar regions produced by the clustering algorithm run separately using a pair of feature sets from PI, PL, and CIL. The detailed description of the MISR algorithm is given in Algorithm 4 below.
The evolution of the coefficients of the monetary rule in the structural VAR accords well with narrative accounts of post-WWII U.S. economic history, with (e.g.) significant increases in the long-run coefficients on inflation and money growth around the time of the Volcker disinflation. Overall, however, our evidence points towards a dominant role played by good luck in fostering the more stable macroeconomic environment of the last two decades. First, the Great Inflation was due, to a dominant extent, to large demand non-policy shocks, and to a lower extent to supply shocks. Second, bringing either Paul Volcker or Alan Greenspan back in time would only have had a limited impact on the Great Inflation episode. Although the systematic component of monetary policy clearly appears to have improved over the sample period, this does not appear to have been the dominant influence in post-WWII U.S. macroeconomic dynamics.