Fuzzy rough approximations of process data

important in the decision model. The condition attribute c3 could be omitted from the decision table when u = 0.8 is assumed. 6. Conclusions The presented variable precision fuzzy rough set (VPFRS) model with asymmetric bounds is a suitable tool for analyzing information systems with crisp or fuzzy attributes. We proposed to express the human operator's decision model in the form of a decision table with fuzzy attributes. The fuzzy character of the attributes corresponds with the human ability to reason using linguistic concepts rather than numbers. Particular steps of the VPFRS approach were illustrated by simple examples. It was shown that relaxing the strong requirement of inclusion of one fuzzy set in another (admitting a certain misclassification level in the human operator's control) increases the approximation quality of the decision system. In this paper, we discussed an extended and improved version of our VPFRS model. It is compatible with the fuzzy rough set concept of Dubois and Prade. However, there is more than one possibility to define fuzzy rough sets. This is not only due to the different forms of fuzzy operators (intersection, union, implication) that can be used in definitions of fuzzy rough approximations. We proposed a unified form of the crisp lower and upper approximations that could also be applied to defining new models of fuzzy rough sets. Thus, our aim in future research is to introduce and investigate new classes of fuzzy rough sets and VPFRS models that go beyond the widely used concept of Dubois and Prade.
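The thresholded inclusion degree that drives a VPFRS-style analysis can be illustrated with a short sketch. This is only a minimal reading of the idea under assumed choices (the Łukasiewicz implicator, a membership-weighted aggregation, and all function names are illustrative), not the paper's exact model with asymmetric bounds.

```python
import numpy as np

def inclusion_degree(a, b):
    """Degree to which fuzzy set a is included in fuzzy set b, using the
    Lukasiewicz implicator min(1, 1 - a + b), weighted by membership in a."""
    if a.sum() == 0:
        return 1.0
    implications = np.minimum(1.0, 1.0 - a + b)
    return float(np.sum(a * implications) / np.sum(a))

def vpfrs_lower(a, b, u=0.8):
    """Admit a into the lower approximation of b only when the inclusion
    degree reaches the precision threshold u (a misclassification level of
    up to 1 - u is tolerated)."""
    d = inclusion_degree(a, b)
    return d if d >= u else 0.0

# toy memberships of five objects in an equivalence granule a and a concept b
a = np.array([0.9, 0.8, 0.7, 0.2, 0.0])
b = np.array([1.0, 0.9, 0.5, 0.1, 0.0])
print(vpfrs_lower(a, b, u=0.8))
```

With u = 0.8 the granule above is still admitted into the lower approximation even though its inclusion in the concept is imperfect, which mirrors the relaxation of the strong inclusion requirement discussed in the conclusions.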

Missing data imputation using fuzzy-rough methods

Auto-associative neural networks (AANNs) are neural networks whose neurons are densely connected to the neurons of the preceding and following layers. In [16] the authors used an AANN to impute missing values. In the first step, an AANN is trained with instances that do not contain missing values. During the learning procedure, when instances containing missing values are given to the AANN, the weights are not updated; instead, the missing values are predicted and the process is repeated. Another method based on an AANN is introduced in [16]. In this model an AANN is used with inputs from both the dataset and the output of a genetic algorithm (GA) estimator. The GA estimates approximations for the missing values and gives them to the AANN. The outputs of the AANN are then examined to see whether the error is within a sensible range. If the error is minimized, the final values of the missing features are produced; otherwise, the GA component generates another approximation for the missing values, which is given to the AANN again. This cycle is repeated until the error is minimized. Multi-task learning (MTL) [20] is another type of neural network which solves more than one problem at the same time. Such networks usually have more outputs than standard networks because they solve more than one problem. In the missing-value imputation domain, an MTL network has one standard output, the decision feature of the dataset, and additional outputs that try to predict the missing values of features. Its major drawback is that it uses the quadratic error as the cost function to be minimized during the training phase.
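A minimal sketch of the GA-plus-AANN cycle described above is given below. It assumes a rank-k PCA reconstruction as a stand-in for the auto-associative network, and a mutation-only GA; all names, parameters, and data are illustrative assumptions, not the method of [16].

```python
import numpy as np

rng = np.random.default_rng(0)

def train_aann(complete_rows, k=2):
    """Stand-in for an auto-associative network: a rank-k PCA reconstruction
    (project onto the top-k principal directions and back).  A real AANN
    would be a bottlenecked neural network trained on the complete rows."""
    mean = complete_rows.mean(axis=0)
    _, _, Vt = np.linalg.svd(complete_rows - mean, full_matrices=False)
    return mean, Vt[:k]

def reconstruction_error(model, row):
    mean, components = model
    centred = row - mean
    reconstructed = centred @ components.T @ components + mean
    return float(np.linalg.norm(reconstructed - row))

def ga_aann_impute(model, row, missing_idx, generations=100, pop=30):
    """GA proposes candidate values for the missing entries; the AANN's
    reconstruction error scores them; the best candidates survive, are
    mutated, and the cycle repeats for a fixed budget."""
    population = rng.uniform(0.0, 1.0, size=(pop, len(missing_idx)))
    for _ in range(generations):
        trials = np.tile(row, (pop, 1))
        trials[:, missing_idx] = population
        scores = np.array([reconstruction_error(model, t) for t in trials])
        parents = population[np.argsort(scores)[: pop // 2]]
        children = parents + rng.normal(0.0, 0.05, size=parents.shape)
        population = np.vstack([parents, children])
    trials = np.tile(row, (pop, 1))
    trials[:, missing_idx] = population
    scores = np.array([reconstruction_error(model, t) for t in trials])
    return trials[np.argmin(scores)]

# toy data: the 4th feature is the mean of the first three, so the stand-in
# "AANN" can reconstruct consistent rows almost exactly
data = rng.uniform(size=(100, 3))
data = np.hstack([data, data.sum(axis=1, keepdims=True) / 3.0])
model = train_aann(data, k=3)
row = data[0].copy()
true_value, row[3] = row[3], np.nan          # hide one value, then impute it
imputed = ga_aann_impute(model, row, missing_idx=[3])
print(true_value, imputed[3])
```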

Fuzzy Entropy-Assisted Fuzzy-Rough Feature Selection

rules are more compact. A good feature selection step will remove unnecessary attributes that may otherwise harm both rule comprehension and rule prediction performance. The work on rough set theory (RST) offers an alternative, formal methodology that can be employed to reduce the dimensionality of datasets, as a preprocessing step to assist any chosen modelling method for learning from data. It helps to select the most information-rich features in a dataset, without transforming the data, whilst at the same time attempting to minimise information loss during the selection process. Computationally, the approach is highly efficient, relying on simple set operations, which makes it suitable as a preprocessor for techniques that are much more complex. Unlike statistical correlation-reduction approaches [5], RST requires no human input or intervention. Most importantly, however, it retains the underlying semantics of the data, which results in models that are more transparent to human scrutiny.

Fuzzy rough granular neural networks, fuzzy granules, and classification

Integration of fuzzy sets [6, 7] and rough sets [14–17] under fuzzy rough computing or rough fuzzy computing has recently drawn the attention of researchers. Many relationships have been established to extend and integrate the underlying concepts of these two methodologies judiciously to deal with additional aspects of data imperfection, especially in the context of granular computing. The main purpose of such hybridization [19] is to provide a high degree of flexibility [20], robust solutions and advanced tools for data analysis [21], and a framework for efficient uncertainty handling [22]. This rough fuzzy paradigm may also be considered as a tool for modeling the aforesaid f-granular characteristics of perception-based computing. In rough set theory, one starts with crisp equivalence classes, in the same way that fuzzy equivalence classes are central to the fuzzy rough sets approach. Each equivalence class may be used as a granule. The concept of crisp equivalence classes can be extended to fuzzy equivalence classes by introducing a fuzzy tolerance relation on the universe, which determines the extent to which two elements are similar. Fuzzy rough sets [11], based on a fuzzy tolerance relation, provide a means by which discrete or real-valued noisy data can be effectively reduced without any additional information about the data (such as thresholds on a particular domain of the universe) for its analysis. The granulation structure produced by an equivalence relation provides a partition of the universe. The intention is to approximate an imprecise concept in the domain of the universe by a pair of approximation concepts, called the lower and upper approximations. These approximations are used to define the positive degree of each object, which in turn defines the dependency factor of each conditional attribute, and all of these are then used to extract domain knowledge about the data.
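The lower and upper approximations, positive region, and dependency factor mentioned above can be illustrated with a small sketch built on a fuzzy tolerance relation. The implicator and t-norm choices (Kleene-Dienes and minimum) and all names below are assumptions for illustration, not the granular network of the paper itself.

```python
import numpy as np

def fuzzy_lower_upper(R, A):
    """Fuzzy-rough approximations in the style of Dubois and Prade:
    R -- n x n fuzzy tolerance relation (R[x, y] = similarity of x and y)
    A -- length-n membership vector of the concept being approximated.
    The lower approximation uses the Kleene-Dienes implicator max(1 - r, a);
    the upper approximation uses the minimum t-norm."""
    lower = np.min(np.maximum(1.0 - R, A[None, :]), axis=1)
    upper = np.max(np.minimum(R, A[None, :]), axis=1)
    return lower, upper

def dependency_degree(R, decision_classes):
    """Positive-region membership of each object is its maximal lower
    approximation membership over the decision classes; the dependency
    degree is the normalised sum."""
    pos = np.zeros(R.shape[0])
    for A in decision_classes:
        lower, _ = fuzzy_lower_upper(R, A)
        pos = np.maximum(pos, lower)
    return pos.sum() / R.shape[0]

# toy example: three objects, one fuzzy similarity relation, two crisp classes
R = np.array([[1.0, 0.7, 0.1],
              [0.7, 1.0, 0.2],
              [0.1, 0.2, 1.0]])
classes = [np.array([1.0, 1.0, 0.0]), np.array([0.0, 0.0, 1.0])]
print(dependency_degree(R, classes))
```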

Dominance-based fuzzy rough set analysis of uncertain and possibilistic data tables

Proof. The properties depend on the following crucial fact, which can be derived easily from (18) and (19): $Cl_t^{\leq}(x) = \neg\, Cl_{t+1}^{\geq}(x)$ (22) for x ∈ U and 1 ≤ t ≤ n − 1. □ The precisiation of data means any new information about objects in U that is added to the data tables. It can be a new criterion or a piece of information that further refines the subsets of evaluations or assignments of an object. Rough set approaches usually require that more precise information about objects does not reduce the lower approximations of the decision classes. Thus, two types of precisiation properties are investigated in [1]. One is for the situation when new attributes or criteria are added to the decision table; and the other is for when more specific information about the evaluations and assignments of objects is added. The first type depends on the monotonicity of the t-norm, s-norm, and implication operations. Recall that the t-norm and s-norm operations are non-decreasing in their respective arguments, while the implication operation should be non-increasing in its left argument and non-decreasing in its right argument.
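The monotonicity requirements recalled at the end of the passage can be checked numerically for one concrete operator family. The sketch below uses the Łukasiewicz connectives purely as an example; the paper's own choice of t-norm, s-norm, and implication may differ.

```python
import numpy as np

# Lukasiewicz connectives, used here only to illustrate the monotonicity
# requirements quoted above; the paper's actual operator choices may differ.
t_norm     = lambda a, b: np.maximum(0.0, a + b - 1.0)   # non-decreasing in both arguments
s_norm     = lambda a, b: np.minimum(1.0, a + b)          # non-decreasing in both arguments
implicator = lambda a, b: np.minimum(1.0, 1.0 - a + b)    # non-increasing in a, non-decreasing in b

grid, ok = np.linspace(0.0, 1.0, 101), True
for b in grid:
    ok &= bool(np.all(np.diff(t_norm(grid, b)) >= -1e-12))      # T rises with its left argument
    ok &= bool(np.all(np.diff(s_norm(grid, b)) >= -1e-12))      # S rises with its left argument
    ok &= bool(np.all(np.diff(implicator(grid, b)) <= 1e-12))   # I falls with its left argument
    ok &= bool(np.all(np.diff(implicator(b, grid)) >= -1e-12))  # I rises with its right argument
print("monotonicity holds on the grid:", ok)
```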

Parallel Fuzzy Rough Support Vector Machine for Data Classification in Cloud Environment

pen as part of the same transaction. We benefit both from the robust recovery architecture of the storage subsystem and from other features such as online backup mechanisms. The recovery operation for serial processing is more challenging because of the large amount of runtime state managed in main-memory structures by operators in the executor, and the decoupled manner in which durable state is originally written out by the archiver. Crash recovery therefore involves standard database-style recovery of all durable state, making all serial-processing archives self-consistent with each other as well as with the latest committed data from their underlying archive, and rebuilding the runtime state of the various operators in the executor. A robust recovery implementation capable of quickly recovering from a failure is essential. Furthermore, the longer it takes to recover from a failure, the more pent-up data accumulates and the longer it takes to catch up to the live data. Recovery from failure in distributed PFRSVM is assessed in terms of the delay, process, and correct (DPC) protocol, which handles crash failures of processing nodes and network failures [24]. Here, the choice is made explicit as the user specifies an availability bound, and the protocol attempts to minimize the resulting inconsistency between the node replicas while meeting the given delay threshold. The protocol tolerates multiple simultaneous failures that occur during recovery. In DPC, each node replica manages its own availability and consistency by implementing a state machine, as shown in Figure 9, that has three states: STABLE, UPSTREAM FAILURE (UP FAILURE), and STABILIZATION.
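A compact sketch of the three-state replica machine named above follows. The transition conditions are a simplified reading of the surrounding description, not the DPC specification from [24]; all names are illustrative.

```python
from enum import Enum, auto

class ReplicaState(Enum):
    """The three states named for the DPC protocol; the transition rules
    below are a simplified reading of the description above."""
    STABLE = auto()
    UPSTREAM_FAILURE = auto()
    STABILIZATION = auto()

def next_state(state, upstream_alive, caught_up):
    if state is ReplicaState.STABLE and not upstream_alive:
        return ReplicaState.UPSTREAM_FAILURE   # process tentatively while upstream is down
    if state is ReplicaState.UPSTREAM_FAILURE and upstream_alive:
        return ReplicaState.STABILIZATION      # reconcile tentative results with replicas
    if state is ReplicaState.STABILIZATION and caught_up:
        return ReplicaState.STABLE             # consistent again with the live data
    return state

s = ReplicaState.STABLE
for alive, caught in [(False, False), (True, False), (True, True)]:
    s = next_state(s, alive, caught)
    print(s)
```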

A Variable Precision Fuzzy Rough Set Approach to a Fuzzy Rough Decision Table

The rough set theory and fuzzy set theory are extensions of classical set theory, and they are related but distinct and complementary theories. The rough set theory [1,2] is mainly focused on crisp information granulation, and its basic concept is indiscernibility, for example, the indiscernibility between different objects deduced from the attribute values describing them in an information system; the fuzzy set theory, by contrast, is regarded as a mathematical tool for imitating the fuzziness in the human classification mechanism, and mainly deals with fuzzy information granulation. Because of its simplicity and similarity to the human mind, the concept is often used to express quantitative data through language and membership functions in intelligent systems. In fuzzy sets, the membership of elements may lie between yes and no. For example, we cannot simply place a beautiful scenery into a crisp yes-or-no category: the set of beautiful sceneries has no definite boundary. A fuzzy set cannot be described by a precise mathematical formula; rather, it reflects the physical and psychological processes of human thinking, since human reasoning does not rely on precise mathematical formulas, and fuzzy sets are therefore important in pattern classification. Essentially, these two theories both study problems of information granularity. The rough set theory [3,4] studies rough, non-overlapping granules and rough concepts, while the fuzzy set theory studies the fuzziness of overlapping sets; this naturally leads to investigating possible "hybrids" of rough sets and fuzzy sets. The hybrid of rough sets and fuzzy sets can be divided into three kinds of approximations: the approximation of fuzzy sets in a crisp approximation space, the approximation of crisp sets in a fuzzy approximation space, and the approximation of fuzzy sets in a fuzzy approximation space [8,9]. This paper mainly discusses the second case. To model this case, Dubois introduced the concept of fuzzy rough sets (Dubois and Prade, 1990) [6], which extends rough set approximation to a crisp set in a fuzzy approximation space.

A fuzzy neighborhood rough set method for anomaly detection in large scale data

2.5. A novel approach: A high-performance parallel and distributed computation using MapReduce

In order to compute an optimal set of cut-points, most discretization algorithms perform an iterative search in the space of candidate discretizations, using different types of scoring functions to evaluate a discretization, which takes a lot of time. In this paper, we propose a parallel discretization process based on MapReduce using a sliding grid. A sliding grid is specified by defining its range M and slide S. The range M is an interval of discretization, while the slide S specifies the portion of the grid that is moved forward. A sliding window is specified as a tuple (M, S). A smooth sliding specification, in which the slide S is small relative to the range M (S < M), is highly desirable. The proposed MapReduce-based algorithm, computed for each node i (P_i ⊆ A), is a parallel process that consists of three steps: map, shuffle, and reduce, as shown in Figure 5.
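The three-step map-shuffle-reduce flow over a sliding grid can be sketched locally as follows. This is a toy, single-process illustration under assumed helper names and an assumed per-window cut-point rule, not the Figure 5 algorithm itself.

```python
from collections import defaultdict

def map_phase(values, M, S):
    """Each mapper emits (window_index, value) pairs for a sliding grid with
    range M and slide S (S < M), so one value can fall into several windows."""
    pairs = []
    for v in values:
        w = 0
        while w * S <= v:                 # window w covers [w*S, w*S + M)
            if v < w * S + M:
                pairs.append((w, v))
            w += 1
    return pairs

def shuffle(pairs_per_node):
    """Group the mapped pairs by window index, as the MapReduce shuffle would."""
    buckets = defaultdict(list)
    for pairs in pairs_per_node:
        for w, v in pairs:
            buckets[w].append(v)
    return buckets

def reduce_phase(buckets):
    """One candidate cut-point per window: here simply the mid-range of the
    values that fell into it, standing in for the paper's scoring function."""
    return {w: (min(vs) + max(vs)) / 2 for w, vs in sorted(buckets.items())}

# two "nodes", each holding a partition P_i of the attribute values
nodes = [[0.1, 0.4, 0.9], [0.3, 0.7, 1.2]]
M, S = 0.5, 0.25
mapped = [map_phase(vals, M, S) for vals in nodes]
print(reduce_phase(shuffle(mapped)))
```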

Lattice for covering rough approximations

Covering is a common type of data structure, and covering-based rough set theory is an efficient tool for processing this type of data. The lattice is an important algebraic structure and is used extensively in investigating some types of generalized rough sets. This paper presents the lattice based on covering rough approximations and the lattice for covering numbers. An important result is established to illustrate the approach.


Axiomatic systems for rough sets and fuzzy rough sets

Keywords: Rough sets; Fuzzy sets; Fuzzy rough sets; Lower approximations; Upper approximations; Axioms

1. Introduction Rough set theory [16,17] is a new approach for reasoning about data. It has found a large number of real applications, for example in medicine, information analysis, data mining, control and linguistics. As a tool for handling imperfect data, it complements other theories that deal with uncertain data, such as probability theory, evidence theory and fuzzy set theory. The main idea of rough sets corresponds to the lower and upper approximations. Pawlak's definitions of the lower and upper approximations were originally introduced with reference to an equivalence relation. Many interesting properties of the lower and upper approximations have been derived by Pawlak [16,17] based on equivalence relations. In this paper, we study the reverse problem: can we characterize the notion of the lower and upper approximations in terms of those properties? We answer the question affirmatively.
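Pawlak's equivalence-relation-based approximations mentioned here admit a very small sketch; the function and variable names below are illustrative only.

```python
from collections import defaultdict

def approximations(universe, attr, concept):
    """Pawlak's lower and upper approximations of `concept` with respect to
    the equivalence relation induced by the attribute function `attr`."""
    classes = defaultdict(set)
    for x in universe:
        classes[attr(x)].add(x)                          # equivalence classes
    lower = {x for c in classes.values() if c <= concept for x in c}
    upper = {x for c in classes.values() if c & concept for x in c}
    return lower, upper

U = {1, 2, 3, 4, 5, 6}
colour = {1: "red", 2: "red", 3: "blue", 4: "blue", 5: "green", 6: "green"}.get
X = {1, 2, 3, 5}
print(approximations(U, colour, X))   # lower = {1, 2}, upper = {1, 2, 3, 4, 5, 6}
```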

Fuzzy-rough Classifier Ensemble Selection

Rough set theory (RST) has been successfully used as an attribute selection tool to discover data dependencies and reduce the number of attributes contained in a dataset by purely structural means [20]. Given a dataset with discretised attribute values, RST can find a subset (termed a reduct) of the original attributes that are the most informative; all other attributes can be removed from the dataset with minimal information loss. However, it is most often the case that attribute values may be both crisp and real-valued, and this is where traditional rough set theory encounters a problem. It is not possible in the theory to say whether two different attribute values are similar, or to what extent they are the same. For example, two close values may differ only as a result of noise, but in the standard RST-based approach they are considered to be as different as two values of a different order of magnitude. Dataset discretisation must therefore take place before reduction methods based on crisp rough sets can be applied. This is often still inadequate, however, as the degrees of membership of values to discretised values are not considered, which can result in information loss. To combat this, extensions of rough sets based on fuzzy-rough sets [7] have been developed. A fuzzy-rough set is defined by two fuzzy sets, the fuzzy lower and upper approximations, obtained by extending the corresponding crisp rough set notions. In the crisp case, elements either belong to the lower approximation with absolute certainty or not at all. In the fuzzy-rough case, elements may have a membership in the range [0,1], allowing greater flexibility in handling uncertainty.
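A crisp, greedy reduct search of the kind alluded to above can be sketched in a few lines; the fuzzy-rough extension replaces this dependency measure with its fuzzy counterpart. The QuickReduct-style loop, the toy data, and all names below are illustrative assumptions.

```python
from collections import defaultdict

def dependency(objects, attrs, decision):
    """Fraction of objects whose equivalence class (w.r.t. the chosen
    attributes) carries a single decision value -- the crisp positive region."""
    groups = defaultdict(list)
    for x in objects:
        groups[tuple(x[a] for a in attrs)].append(x)
    pos = sum(len(g) for g in groups.values()
              if len({o[decision] for o in g}) == 1)
    return pos / len(objects)

def greedy_reduct(objects, candidates, decision):
    """Forward selection: repeatedly add the attribute that raises the
    dependency degree most, until it matches that of the full attribute set."""
    target, reduct = dependency(objects, candidates, decision), []
    while dependency(objects, reduct, decision) < target:
        best = max((a for a in candidates if a not in reduct),
                   key=lambda a: dependency(objects, reduct + [a], decision))
        reduct.append(best)
    return reduct

data = [
    {"a": 0, "b": 1, "c": 0, "d": "yes"},
    {"a": 0, "b": 0, "c": 1, "d": "no"},
    {"a": 1, "b": 1, "c": 0, "d": "yes"},
    {"a": 1, "b": 0, "c": 1, "d": "yes"},
]
print(greedy_reduct(data, ["a", "b", "c"], "d"))   # attribute "c" is dropped
```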

Fuzzy-rough nearest neighbour classification and prediction

Fuzzy sets [ 39 ] and rough sets [ 24 ] are two natural computing paradigms that attempt to deal with characteristics of imperfect data and knowledge in a human-like fashion: the former model vague (typically, linguistic) information by expressing that objects belong to a set or relation to a given degree; on the other hand, the latter provide approximations of concepts in the presence of incomplete information, characterizing those objects that certainly, and possibly, belong to the concept. A hybrid fuzzy-rough set model was first proposed by Dubois and Prade in [ 10 ], was later extended and/or modified by many authors, and was applied successfully in various domains, most notably machine learning.

Towards scalable fuzzy-rough feature selection

data objects to have an impact upon the result of the implication operation are those of classes other than that of the object under consideration. Of these, the nearest object of a different class will produce the smallest value for the implication operation, and therefore it is this value only that is used, because the above definition takes the minimum of all implications. The process of considering all neighbours is naturally very time-consuming and is exacerbated further when the data contains a large number of objects. For feature selection (FS), it therefore requires the calculation of the nearest neighbours for each feature subset candidate considered by the selection algorithm; hence, there is very little saving in time when employing such a nearest neighbour approach. The first approach presented in this paper seeks to approximate the nearest neighbour calculations by computing the nearest neighbour(s) for each data object prior to computing the lower approximation. Although the final subsets produced may not be true reducts (in the fuzzy-rough sense), their computation is much less intensive, and thus methods based on this framework are applicable to larger data [19].
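Under an assumed similarity measure and the Kleene-Dienes implicator, the observation that only the nearest object of a different class matters, and the precomputation the paper proposes, can be sketched as follows; all names are illustrative.

```python
import numpy as np

def lower_approx_exact(X, y, similarity):
    """Lower approximation membership of each object to its own class.  With
    the Kleene-Dienes implicator, the minimum over different-class objects
    reduces to 1 minus the similarity to the nearest such object."""
    R = similarity(X)
    lower = np.empty(len(X))
    for i in range(len(X)):
        other = y != y[i]
        lower[i] = 1.0 - R[i, other].max()
    return lower

def lower_approx_nn(X, y, similarity):
    """Scalable variant sketched in the paper: precompute, once, the single
    nearest neighbour of a different class for every object, then reuse it
    for each feature-subset evaluation."""
    R = similarity(X)
    nn = np.array([np.argmax(np.where(y != y[i], R[i], -np.inf))
                   for i in range(len(X))])
    return 1.0 - R[np.arange(len(X)), nn]

# assumed similarity: 1 minus the mean absolute difference of the feature values
similarity = lambda X: 1.0 - np.abs(X[:, None, :] - X[None, :, :]).mean(axis=2)
X = np.array([[0.1], [0.2], [0.9]])
y = np.array([0, 0, 1])
print(lower_approx_exact(X, y, similarity), lower_approx_nn(X, y, similarity))
```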

Computing fuzzy rough approximations in large scale information systems

In (fuzzy) rough set theory, the data at hand is represented as an information system, consisting of objects described by a set of attributes. Examples include an information system of patients (objects) described by symptoms and lab results (attributes), or an information system of online customers (objects) described by their demographics and purchase history (attributes). Inconsistencies arise when objects in the information system have the same attribute values but belong to a different concept (i.e. a subset of objects in the data), meaning that the available attributes are not able to discern the concept. For instance, two patients with very similar symptoms might still have been diagnosed with a different disease, and of two customers with very similar demographics and purchase history, one might have clicked on an advertisement while the other did not. An important question is how to take such inconsistencies in training data into account when building models to predict medical diagnoses for patients or click behavior of future customers. To tackle this problem, Pawlak's rough set theory [11] relies heavily on the construction of so-called lower and upper approximations of concepts. These approximations are constructed from an indiscernibility relation induced by the attributes in the information system. The original rough set theory could only handle nominal (discrete) attributes and requires discretization of continuous attributes. Fuzzy rough set theory extends the original rough set theory to information systems with continuous-valued attributes, and detects gradual inconsistencies within the data.
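The kind of inconsistency described here (same attribute values, different concept) is easy to detect in the crisp case; the sketch below is an illustrative toy with assumed names, and fuzzy rough set theory generalises it to gradual inconsistencies between similar, rather than identical, objects.

```python
from collections import defaultdict

def inconsistent_objects(table, condition_attrs, decision_attr):
    """Group objects by their condition-attribute values; any group whose
    members carry more than one decision label is inconsistent, i.e. the
    attributes cannot discern the concept."""
    groups = defaultdict(list)
    for obj in table:
        groups[tuple(obj[a] for a in condition_attrs)].append(obj)
    return [g for g in groups.values()
            if len({o[decision_attr] for o in g}) > 1]

patients = [
    {"fever": "high", "cough": "yes", "diagnosis": "flu"},
    {"fever": "high", "cough": "yes", "diagnosis": "covid"},  # same symptoms, different disease
    {"fever": "none", "cough": "no",  "diagnosis": "healthy"},
]
print(inconsistent_objects(patients, ["fever", "cough"], "diagnosis"))
```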

Rough approximations of vague sets in fuzzy approximation space

1. Introduction Fuzzy set theory was first proposed by Zadeh [1]. It is an important mathematical approach to uncertain and vague data analysis, and has been widely used in the areas of fuzzy decision making, fuzzy control, fuzzy inference, and so on [2–5]. Thereafter, the theory of rough sets was proposed by Pawlak [6], which was thought of as another powerful tool for managing uncertainty that arises from inexact, noisy, or incomplete information. It turned out to be methodologically significant in the domains of artificial intelligence and cognitive science, especially for representing and reasoning with imprecise knowledge, machine learning, and knowledge discovery. In recent years, the combination of fuzzy set theory and rough set theory has been studied by many researchers [7–13,31–35]; hence, many new mathematical methods have been generated for dealing with uncertain and imprecise information, such as fuzzy rough sets and rough fuzzy sets. Meanwhile, many metric methods have been presented and investigated by different authors in order to measure the uncertainty and ambiguity of the different sets [14–20].

On bipolar fuzzy rough continuous functions

Pawlak [11, 12] proposed the theory of rough sets. Keyun Qin and Pei [7] successfully compared fuzzy rough set models and fuzzy topologies on a finite universe. Mathew and John [3] established and developed topological structures on rough sets. Rough topology in terms of rough sets was introduced by Lellis Thivagar et al. [8]. In [4, 5, 10], the concept of fuzzy rough sets was studied by replacing crisp binary relations with fuzzy relations on the universe.


3. Rough convergence of a sequence of fuzzy numbers

Abstract. We define the concept of the rough limit set of a sequence of fuzzy numbers and obtain the relation between the rough limit set and the extreme limit points of a sequence of fuzzy numbers. Finally, we investigate some properties of the rough limit set.


Extended Generalization of Fuzzy Rough Sets

Rolly Intan received his B.Sc. in Computer Engineering from Sepuluh Nopember Institute of Technology, Surabaya, Indonesia in 1991, and his M.A.Sc. in Computer Science from International Christian University, Tokyo, Japan in 2000. His Doctor of Engineering in Computer Science and Engineering was obtained from Meiji University, Tokyo, Japan in 2003. He is currently a professor in Informatics Engineering, Petra Christian University, Surabaya, Indonesia. He is the former rector (president) of Petra Christian University, Surabaya. He serves as chair, co-chair and committee member in many international conferences, including the IAENG International Conference on Data Mining and Applications since 2008. His primary research interests are Fuzzy Logic and Systems, Rough Sets, Granular Computing and Data Mining.

Gaussian process approximations for fast inference from infectious disease data

We therefore consider here a Gaussian process approximation approach to speed-up real time analysis of disease data when other methods are too complex. This simultaneously accounts for stochasticity and avoids the problems of direct fitting of ODE models. This approximation is applied initially to the stochastic SIR (susceptible-infectious-removed) model. We also consider the SEIR (susceptible-exposed-infectious-removed) model, although it is easy to use this approximation scheme with a more complex compartmental model or even models used outside epidemiology. We stress that our approach involves an additional approximation step beyond the derivation of a stochastic differential equation (SDE) limit, that must be controlled and which is the main technical component of our work.
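As context for the passage above, the SDE (diffusion) limit of the stochastic SIR model, which the Gaussian process approach then approximates further, can be simulated with a few lines of Euler-Maruyama. This is a generic illustration under assumed parameter names, not the paper's approximation scheme.

```python
import numpy as np

def sir_sde(beta, gamma, S0, I0, N, T=100, dt=0.1, seed=0):
    """Euler-Maruyama simulation of the diffusion (SDE) limit of the
    stochastic SIR model; parameter names are illustrative."""
    rng = np.random.default_rng(seed)
    S, I = float(S0), float(I0)
    path = [(0.0, S, I)]
    for k in range(int(T / dt)):
        inf_rate = beta * S * I / N          # rate of new infections
        rem_rate = gamma * I                 # rate of removals
        dW = rng.normal(0.0, np.sqrt(dt), size=2)
        dS = -inf_rate * dt - np.sqrt(inf_rate) * dW[0]
        dI = (inf_rate - rem_rate) * dt + np.sqrt(inf_rate) * dW[0] - np.sqrt(rem_rate) * dW[1]
        S, I = max(S + dS, 0.0), max(I + dI, 0.0)
        path.append(((k + 1) * dt, S, I))
    return path

print(sir_sde(beta=0.3, gamma=0.1, S0=990, I0=10, N=1000)[-1])
```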

ROUGH FUZZY-IDEALS OF Γ-NEAR-RINGS

L.A. Zadeh [12] introduced the notion of fuzzy sets, which is now a rigorous area of research with manifold applications ranging from engineering and computer science to medical diagnosis and social behaviour studies. Rosenfeld [8] applied the notion of fuzzy sets to algebra and introduced the notion of fuzzy subgroups. Jun et al. [4] defined fuzzy ideals in Γ-near-rings. Meenakumari and Tamizh Chelvan [5] discussed fuzzy bi-ideals in Γ-near-rings.
