Several methods have been used for handling missing values. The first is complete case analysis, which simply ignores the samples with missing values [3]. This method can only be used when the proportion of missing values is small; moreover, much useful information is directly discarded. The second approach, the imputation method, estimates the missing values by statistical or machine learning methods [4,5]. This kind of approach introduces additional bias into multivariate analysis [6]. The third method assumes a model for the covariates with missing values [7]. A disadvantage of this kind of method is that it implicitly assumes the data are missing at random. For **attribute** **reduction**, the above approaches are not suitable: they all make assumptions, which limits a complete analysis of the missing values. In addition, it is hard or even
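The contrast between the first two strategies can be shown on a toy table; this is a minimal sketch (the column names, values, and mean imputation choice are illustrative, not taken from [3]-[7]):

```python
# Toy table; None marks a missing value.
data = [
    {"age": 25,   "income": 30.0},
    {"age": None, "income": 45.0},
    {"age": 40,   "income": None},
    {"age": 35,   "income": 50.0},
]

# 1) Complete case analysis: discard every sample with a missing value.
complete = [row for row in data if None not in row.values()]

# 2) Mean imputation: replace each missing entry with the column mean
#    computed from the observed values (a simple statistical imputation).
def impute_mean(rows, key):
    observed = [r[key] for r in rows if r[key] is not None]
    mean = sum(observed) / len(observed)
    return [dict(r, **{key: r[key] if r[key] is not None else mean})
            for r in rows]

imputed = impute_mean(impute_mean(data, "age"), "income")
```

Note how complete case analysis keeps only two of the four samples, while imputation retains all of them at the price of injecting estimated values.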

The parameter setting in the proposed algorithm is rather static. It is recommended that BAAR be integrated with a method for flexibly tuning the parameters so as to make it adaptive or dynamic. The algorithm would then be able to deal with each dataset differently, relying on characteristics such as its dimensionality. Further investigation will focus on two other aspects, namely the classification accuracy and the time consumed by the above runs, to achieve a better minimal reduct. Applying BAAR to more datasets is currently under way. The results obtained can be useful for a deeper analysis of the theory, especially from the perspective of its behavior and its shortcomings (if any). Collectively, all of this future work will contribute to the natural extension of the proposed BAAR, making it a smarter and more robust technique for **attribute** **reduction**.

Rough sets can deal with missing values efficiently. The indiscernibility relation and the discernibility matrix of rough set theory are widely used to fill in missing values [1]-[2], and the core and reduct concepts have been used to impute a missing value from the most similar object [1]-[2]. Kryszkiewicz introduced the tolerance relation of rough sets and, for incomplete information, redefined the dispensability and indispensability of attributes, the core, and functional dependency [12]. Decision rules can be fetched directly from such an incomplete decision table. Three approaches to missing value handling are discussed briefly; it is shown that the main concept underlying these definitions is the **attribute**-value block. **Attribute**-value pair blocks can be used for incomplete decision tables to determine lower and upper approximations, characteristic relations and characteristic sets, and for rule induction [13]. Rough sets work efficiently on discrete data, but not on continuous or real-valued data. To impute missing values, the fuzzy-rough nearest neighbour concept has been used efficiently [14]. Dubois and Prade proposed the fuzzy rough set, a very popular tool for both crisp and real-valued data sets [15], which has been extended further in terms of properties and axioms [16]-[17]. Fuzzy rough sets can be used efficiently for **attribute** **reduction** and missing value imputation. The indiscernibility concept of rough sets and the vagueness of fuzzy sets are merged in fuzzy-rough sets to handle uncertainty better for all types of data: rough sets depend on crisp equivalence classes, whereas fuzzy-rough sets depend on fuzzy equivalence classes [15]. A fuzzy-rough dependency function and a fuzzy-rough quick reduct algorithm for computing a reduct have been proposed [23].
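The indiscernibility relation and the lower/upper approximations it induces can be sketched on a toy decision table (object names, attributes, and values here are illustrative):

```python
# Toy decision table: objects described by condition attributes a, b
# and a decision d.
table = {
    "x1": {"a": 0, "b": 0, "d": "yes"},
    "x2": {"a": 0, "b": 0, "d": "no"},
    "x3": {"a": 1, "b": 0, "d": "no"},
    "x4": {"a": 1, "b": 1, "d": "yes"},
}

def partition(attrs):
    """Equivalence classes of the indiscernibility relation IND(attrs)."""
    classes = {}
    for obj, row in table.items():
        classes.setdefault(tuple(row[a] for a in attrs), set()).add(obj)
    return list(classes.values())

target = {o for o, r in table.items() if r["d"] == "yes"}   # X = {x : d = yes}
blocks = partition(["a", "b"])
lower = {o for blk in blocks if blk <= target for o in blk}  # certainly in X
upper = {o for blk in blocks if blk & target for o in blk}   # possibly in X
# x1 and x2 are indiscernible on {a, b} but differ on d, so they fall
# into the boundary region (upper minus lower).
```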
It has been shown that the fuzzy-rough quick reduct algorithm may not converge [19]. Depending on the uncertainty degree, the fuzzy-rough quick reduct algorithm has been redefined [20]. To manage the noise of misclassification, variable precision fuzzy-rough sets were introduced [21]. **Reduction** algorithms have been proposed [22]-[23] for hybrid data based on the dependency between conditional and decision attributes, using Shannon's information entropy within fuzzy rough sets. A granular-computing-based data **reduction** model has been proposed for fuzzy rough sets [24], and a fuzzy-rough-set-based method for simultaneous attribute selection and feature extraction has been proposed [25]. Because of their running time, these algorithms may return an over-reduct or a sub-reduct but not a proper reduct. This type of information loss may misguide
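The greedy idea behind quick-reduct-style algorithms can be sketched in the crisp rough set setting (the fuzzy-rough variants of [19]-[23] replace the crisp dependency degree with a fuzzy one; the toy table is an illustrative assumption):

```python
# Crisp quick reduct: greedily add the attribute that most increases
# the dependency degree gamma(R) = |POS_R(d)| / |U|.
table = {
    "x1": {"a": 0, "b": 1, "c": 0, "d": "yes"},
    "x2": {"a": 0, "b": 0, "c": 1, "d": "no"},
    "x3": {"a": 1, "b": 1, "c": 0, "d": "no"},
    "x4": {"a": 1, "b": 0, "c": 1, "d": "yes"},
}
conds = ["a", "b", "c"]

def gamma(attrs):
    classes = {}
    for obj, row in table.items():
        classes.setdefault(tuple(row[x] for x in attrs), []).append(obj)
    # Positive region: blocks that are pure with respect to d.
    pos = sum(len(blk) for blk in classes.values()
              if len({table[o]["d"] for o in blk}) == 1)
    return pos / len(table)

def quick_reduct():
    reduct = []
    while gamma(reduct) < gamma(conds):
        best = max((x for x in conds if x not in reduct),
                   key=lambda x: gamma(reduct + [x]))
        reduct.append(best)
    return reduct
```

The non-convergence issue of [19] arises when no single attribute raises the (fuzzy) dependency, so the greedy loop can stall; the crisp version above terminates on this toy table.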

In this part, the average number of iterations is studied. The number of iterations gives an indication of the complexity of a dataset: a larger number of iterations indicates a more complex dataset, while a smaller number indicates a less complex one. The average numbers of iterations are studied for the Improved Water Cycle Algorithm for **Attribute** **Reduction** in Rough Set Theory (IWCA) against three algorithms: Investigating Composite Neighbourhood Structure (IS-CNS) [8], Hybrid Variable Neighbourhood Structure (HVNS) [11], and Ant Colony Optimization for **Attribute** **Reduction** (ACOAR) [15]. The large datasets required more iterations than the others, which leads us to conclude that the large datasets are more complex, while the small datasets are less complex. Table II shows the iterations of the four algorithms: IWCA's average number of iterations is significantly better than that of IS-CNS on all datasets, and also better than those of HVNS and ACOAR. These results indicate that using intelligent selection helps to find the minimal reduct in fewer iterations. The total average number of iterations for IWCA likewise outperforms IS-CNS, HVNS, and ACOAR. In conclusion, IWCA surpasses the other methods in terms of the number of reducts and has the best average iterations compared with the other methods.

In the scope of **attribute** **reduction** in RST applied to classification, any metaheuristic requires an evaluation function in order to rank the candidate solutions/reducts. Two different functions were evaluated in this work for the calculation of reducts in RST: the dependency degree between attributes and the relative dependency [8]. These functions, presented in Section 3.2, simplify the calculation of reducts in comparison with the standard approach based on the discernibility matrix, which has high computational complexity and is not feasible for large and complex datasets. As mentioned above, three alternative metaheuristics are evaluated for the calculation of reducts in RST: VNS, VND and DCS, described in Sections 4.1, 4.2 and 4.3, respectively. In addition, two local search schemes are also evaluated. Test results for some well-known databases are shown in Section 5, where processing time is also discussed. Each test case corresponds to the use of a specific metaheuristic for the calculation of reducts applied to the classification of a specific database.
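The two evaluation functions can be sketched in their usual crisp forms (this follows the standard definitions of the dependency degree and the relative dependency as we read them, on an illustrative toy table, not the exact formulation of [8]):

```python
table = {
    "x1": {"a": 0, "b": 1, "d": "yes"},
    "x2": {"a": 0, "b": 0, "d": "no"},
    "x3": {"a": 1, "b": 1, "d": "no"},
    "x4": {"a": 1, "b": 0, "d": "yes"},
}

def classes(attrs):
    """Equivalence classes of U under IND(attrs)."""
    part = {}
    for obj, row in table.items():
        part.setdefault(tuple(row[x] for x in attrs), []).append(obj)
    return list(part.values())

def dependency_degree(attrs):
    """gamma_R(d): fraction of objects whose R-class is pure in d."""
    pos = sum(len(b) for b in classes(attrs)
              if len({table[o]["d"] for o in b}) == 1)
    return pos / len(table)

def relative_dependency(attrs):
    """|U / IND(R)| / |U / IND(R + {d})|; equals 1 iff R preserves d."""
    return len(classes(attrs)) / len(classes(attrs + ["d"]))
```

Both functions avoid building the discernibility matrix: each evaluation is a single pass over the table rather than a pairwise comparison of all objects.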


In rough set theory, an important concept is **attribute** **reduction** [2,13,14,16-18,22,23], which can be considered as a kind of specific feature selection. In other words, based on rough set theory, one can select useful features from a given data set. Recently, more attention has been focused on the area of **attribute** **reduction**, and many scholars have studied **attribute** **reduction** based on fuzzy rough sets [2,16-18,22,23]. Dai et al. [2] proposed a fuzzy rough set model for set-valued data and investigated **attribute** **reduction** in set-valued information systems based on discernibility matrices and functions. Yao et al. [16] proposed an **attribute** **reduction** approach based on generalized fuzzy evidence theory in fuzzy decision systems. Shen et al. [17] studied an **attribute** **reduction** method based on fuzzy rough sets. Hu et al. [18] also proposed an **attribute** **reduction** approach using information entropy as a tool to measure the significance of attributes. Rajen B. Bhatt and M. Gopal [22] put forward the concept of fuzzy rough sets on a compact computational domain based on the properties of fuzzy t-norm and t-conorm operators and built an improved feature selection algorithm. Zhao et al. [23] revisited **attribute** reductions based on fuzzy rough sets, and then presented and proved some theorems describing the impact of fuzzy approximation operators on **attribute** **reduction**. However, **attribute** **reduction** based on fuzzy rough sets in interval and set-valued decision information systems has not been reported. In this paper, a fuzzy preference relation is defined and the upper and lower approximations of decision classes based on the fuzzy preference relation are given.
Moreover, definitions of the significance measure and the relative significance measure of condition attributes are given for interval and set-valued decision information systems through the introduction of the fuzzy positive region and the dependency degree. On this basis, a heuristic algorithm for computing a fuzzy positive region **reduction** in interval and set-valued decision information systems is given.
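The fuzzy positive region and the dependency degree built on it can be illustrated in a simplified single-attribute form; this is a hypothetical sketch (the data, the similarity function, and the crisp decision are all illustrative assumptions, not the paper's interval/set-valued construction):

```python
# Fuzzy positive region sketch: similarity is 1 - distance on one
# normalized numeric attribute; decisions are crisp.
U = {"x1": 0.1, "x2": 0.2, "x3": 0.8, "x4": 0.9}        # attribute values
d = {"x1": "yes", "x2": "yes", "x3": "no", "x4": "no"}  # decisions

def sim(x, y):
    return 1.0 - abs(U[x] - U[y])        # fuzzy similarity in [0, 1]

def pos_membership(x):
    """Lower-approximation membership of x in its own decision class:
    inf over objects of a different class of 1 - sim(x, y)."""
    return min(1.0 - sim(x, y) for y in U if d[y] != d[x])

def fuzzy_dependency():
    """Dependency degree: average membership in the fuzzy positive region."""
    return sum(pos_membership(x) for x in U) / len(U)
```

An object far from every differently-labelled object gets membership close to 1; the heuristic algorithm described above would rank attributes by how much they raise this dependency.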

Resources in a cloud computing environment are dynamic: different computation nodes have different resource **attribute** values, and tasks awaiting scheduling have uncertain **attribute** values. For a set of submitted tasks, users can input, according to their different demands, the weights they assign to completion time, cost, and load; the cloud scheduling system then allocates the right resources to each task according to the users' needs and the system load, making full use of the resources in the cloud computing environment. Finally, a satisfactory distribution plan is obtained, allocating each task to an appropriate computation node.
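The weighted matching described above can be sketched minimally as follows (the score formula, node attributes, and normalization are illustrative assumptions, not the paper's actual scheduler):

```python
# Each node advertises an estimated completion time, cost, and current
# load for a task; the user supplies weights over these criteria.
nodes = {
    "n1": {"time": 10.0, "cost": 2.0, "load": 0.7},
    "n2": {"time": 14.0, "cost": 1.0, "load": 0.3},
    "n3": {"time": 9.0,  "cost": 3.0, "load": 0.9},
}

def best_node(weights):
    """Pick the node minimizing the weighted sum of normalized criteria."""
    def score(attrs):
        return sum(weights[k] * attrs[k] / max(n[k] for n in nodes.values())
                   for k in weights)
    return min(nodes, key=lambda name: score(nodes[name]))
```

With weights favouring cost and low load, the cheap lightly-loaded node wins even though it is the slowest; with all weight on time, the fastest node wins.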

The proposed cubing algorithm has the following salient features. 1) It supports not only high-dimensional data cubes but also hierarchical data cubes with multiple levels in a dimension. 2) The decomposition of the data cube space leads to significant **reduction** of processing and I/O overhead for many queries by restricting the number of cube segments to be processed for both the fact table and bitmap indices. 3) The prefix bitmap index is designed to support efficient OLAP by allowing fast look-up of relevant tuples. 4) The proposed cubing algorithm supports parallel I/O, parallel processing, and load balancing among disks and processors.
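The fast tuple look-up in feature 3) can be illustrated with a generic bitmap index sketch (this shows plain per-value bit vectors, not the paper's prefix structure; the fact table is illustrative):

```python
# Generic bitmap index over a fact table: one bit vector per attribute
# value; a point query ANDs the vectors to get the matching tuple ids.
fact = [
    {"region": "east", "year": 2021},
    {"region": "west", "year": 2021},
    {"region": "east", "year": 2022},
]

index = {}
for tid, row in enumerate(fact):
    for col, val in row.items():
        index[(col, val)] = index.get((col, val), 0) | (1 << tid)

def lookup(preds):
    """Return tuple ids satisfying all (column, value) predicates."""
    bits = (1 << len(fact)) - 1          # start with all tuples
    for col_val in preds.items():
        bits &= index.get(col_val, 0)    # AND in each predicate's vector
    return [tid for tid in range(len(fact)) if bits >> tid & 1]
```

Each predicate costs one bitwise AND over the vectors, which is why bitmap indices pair well with the segment restriction in feature 2).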

The strength **reduction** finite element method was first proposed by Zienkiewicz et al. [1]. Wong analyzed the causes of inaccuracy in slope stability analysis by the finite element method [2]. Ugai, Matsui and San [3], Griffiths and Lane [4], Dawson and Roth [5], and Manzari and Nour [6] all carried out further research and analysis of the strength **reduction** finite element method and pushed forward its development and application. Duncan pointed out that the slope safety factor could be defined as the extent to which the shearing strength of the soil must be reduced for the slope to just reach the critical destructive state [7]. That is to say, the safety factor is the ratio between the actual shearing strength of the soil and the reduced shearing strength at which the slope just reaches the critical destructive state. This research indicates that the strength **reduction** finite element method and the limit equilibrium method agree in their definition of the safety factor, while the stress computed by the strength **reduction** finite element method is more reasonable. Besides, the strength **reduction** method attracts widespread interest among scholars because it does not need to assume the shape and position of the sliding surface in advance, and it can reflect the gradual destructive process of the earth slope to some extent.
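In the usual Mohr-Coulomb form, the reduction just described can be written as follows (standard notation, not quoted verbatim from [7]):

```latex
% Shear strength and its reduced counterpart for a trial factor F:
\[
  \tau_f = c + \sigma \tan\varphi, \qquad
  \tau_f^{r} = \frac{c}{F} + \sigma\,\frac{\tan\varphi}{F},
\]
% The safety factor F_s is the value of F at which the slope just
% reaches the critical (limit) state, i.e. the ratio of the actual to
% the reduced shear strength at that state:
\[
  F_s \;=\; \left.\frac{\tau_f}{\tau_f^{r}}\right|_{\text{critical state}}
      \;=\; \frac{c + \sigma\tan\varphi}{c_r + \sigma\tan\varphi_r},
  \qquad c_r = \frac{c}{F_s},\quad \tan\varphi_r = \frac{\tan\varphi}{F_s}.
\]
```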

of **reduction** based on the information entropy. In general, it is shown that the existing **attribute** **reduction** algorithms have some disadvantages, such as poor time performance and low quality of the **reduction** subset. To generate a **reduction** of attributes rapidly and accurately on data sets with core attributes, this paper introduces a difference degree between equivalence classes or
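An entropy-based attribute evaluation of the general kind referred to here can be sketched as follows (toy data; this illustrates conditional entropy over equivalence classes, not this paper's specific difference degree):

```python
from math import log2

# Toy data: (condition attributes, decision) pairs.
table = [
    ({"a": 0, "b": 0}, "yes"),
    ({"a": 0, "b": 0}, "no"),
    ({"a": 1, "b": 1}, "no"),
    ({"a": 1, "b": 0}, "no"),
]

def cond_entropy(attr):
    """H(d | attr): expected decision entropy within each equivalence
    class of the single attribute attr, weighted by class size."""
    blocks = {}
    for row, dec in table:
        blocks.setdefault(row[attr], []).append(dec)
    h = 0.0
    for decs in blocks.values():
        p_block = len(decs) / len(table)
        for dec in set(decs):
            p = decs.count(dec) / len(decs)
            h -= p_block * p * log2(p)
    return h

# The attribute with the lowest conditional entropy discriminates the
# decision best, so an entropy-driven reducer would add it first.
```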

In the light of the universality of uncertainty, we propose a decision making model for a complete information system. Considering **attribute** **reduction**, **attribute** importance, and mismatched information, a multiple **attribute** decision making model based on **attribute** importance is constructed. First, the decision table is obtained from the known knowledge by deleting the reduced attributes. An **attribute**-value **reduction** is then obtained to simplify the decision table, and rules are extracted. The rules are then used to make decisions for new problems. Finally, an example is given to illustrate the model.