Chen et al. [6] used a multiple regression analysis (MRA) model for thermal error compensation of a horizontal machining centre. With their experimental results, the thermal error was reduced from 196 to 8 μm. Yang et al. [14] also used the MRA model to form an error synthesis model which merges both the thermal and geometric errors of a lathe. With their experimental results, the error could be reduced from 60 to 14 μm. However, the thermal displacement usually changes with variation in the machining process and the environment, and it is difficult to apply MRA to a multiple-output-variable model. In order to overcome the drawbacks of MRA models, more attention has subsequently been given to Artificial Intelligence (AI) techniques such as Artificial Neural Networks (ANNs). Chen et al. [7] proposed an ANN model structured with 15 nodes in the input layer, 15 nodes in the hidden layer, and six nodes in the output layer in order to drive thermal error compensation of the spindle and lead-screws of a vertical machining centre. The ANN model was trained with 540 training data pairs and tested with a new cutting condition, which was not included within the training pairs. Test results showed that the thermal errors could be reduced from 40 to 5 μm after applying the compensation model, but no justification for the number of nodes or length of training data was provided. Wang [13] used a neural network trained by a hierarchy-genetic-algorithm (HGA) in order to map the temperature variation against the thermal drift of the machine tool. Wang [10] also proposed a thermal model merging Grey system theory GM(1,m) and an Adaptive Neuro-Fuzzy Inference System (ANFIS). A hybrid learning method, which is a combination of both the steepest descent and least-squares estimator methods, was used in the learning algorithms.
Experimental results indicated that the thermal error compensation model could reduce the thermal error to less than 9.2 μm under real cutting conditions. He used six inputs with three fuzzy sets per input, producing a complete rule set of 729 (3^6) rules in order to build an ANFIS model. Clearly, Wang's model is practically limited to low-dimensional modelling. Eskandari et al. [15] presented a method to compensate for the positional, geometric, and thermally induced errors of a three-axis CNC milling machine using an offline technique. Thermal errors were modelled by three empirical methods: MRA, ANN, and ANFIS. To build their models, the experimental data was collected every 10 min while the machine was running for 120 min. The experimental data was divided into training and checking sets. They found that ANFIS was a more accurate modelling method in comparison with ANN and MRA. Their test results on a free-form shape show an average improvement of 41% over the uncompensated errors. A common omission in the published research is discussion of, or scientific rigour regarding, the selection of the number and location of thermal sensors.


The architecture and learning procedure of the Adaptive Neuro-Fuzzy Inference System (ANFIS) have both been described by Jang [14]. According to Jang, the ANFIS is a neural network that is functionally the same as a Takagi-Sugeno type inference model. The ANFIS is a hybrid intelligent system that combines the advantages of ANN and fuzzy logic theory in a single system. By employing the ANN technique to update the parameters of the Takagi-Sugeno type inference model, the ANFIS is given the ability to learn from given training data, the same as an ANN. The solutions mapped out onto the Takagi-Sugeno type inference model can therefore be described in linguistic terms. The efficiency of any ANFIS model depends on the success in partitioning the input and output variable space correctly. This can be achieved by using a number of methods such as grid partitioning (ANFIS-Grid partition model), the subtractive clustering method (ANFIS-Scatter partition model) and fuzzy c-means clustering [15]. The equivalent ANFIS network with two variables is shown in Figure 2. The first layer implements fuzzification, the second layer executes the T-norm of the antecedent part of the fuzzy rules, the third layer normalizes the membership functions (MF), the fourth layer computes the consequent parameters, and finally the last layer calculates the overall output as the summation of all incoming signals [14].
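The five layers just listed can be made concrete with a minimal forward-pass sketch of a two-input, first-order Sugeno ANFIS. The Gaussian membership functions, the product T-norm, and every parameter value below are illustrative assumptions, not taken from Jang's paper:

```python
import numpy as np

def gauss_mf(x, c, sigma):
    """Gaussian membership function (Layer 1: fuzzification)."""
    return np.exp(-((x - c) ** 2) / (2 * sigma ** 2))

def anfis_forward(x1, x2, mf_params, consequents):
    """Forward pass of a two-input, first-order Sugeno ANFIS.

    mf_params:   {'x1': [(c, sigma), ...], 'x2': [(c, sigma), ...]}
    consequents: one (p, q, r) triple per rule; rule output = p*x1 + q*x2 + r
    """
    # Layer 1: membership degrees for each input
    mu1 = [gauss_mf(x1, c, s) for c, s in mf_params['x1']]
    mu2 = [gauss_mf(x2, c, s) for c, s in mf_params['x2']]

    # Layer 2: T-norm (product) gives each rule's firing strength
    w = np.array([m1 * m2 for m1 in mu1 for m2 in mu2])

    # Layer 3: normalise the firing strengths
    w_bar = w / w.sum()

    # Layer 4: rule consequents (first-order polynomials in the inputs)
    f = np.array([p * x1 + q * x2 + r for p, q, r in consequents])

    # Layer 5: overall output as the weighted sum of all incoming signals
    return float(np.dot(w_bar, f))

# Two fuzzy sets per input -> 2 * 2 = 4 rules
params = {'x1': [(0.0, 1.0), (1.0, 1.0)], 'x2': [(0.0, 1.0), (1.0, 1.0)]}
cons = [(1.0, 0.0, 0.0), (0.0, 1.0, 0.0), (0.5, 0.5, 0.0), (0.0, 0.0, 1.0)]
print(anfis_forward(0.5, 0.5, params, cons))
```

In training, the hybrid learning rule mentioned above would tune the (c, sigma) premise parameters by gradient descent and the (p, q, r) consequent parameters by least squares; only the inference path is shown here.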


Abdulshahed et al. [7] employed an Adaptive Neuro-Fuzzy Inference System (ANFIS) to forecast thermal error compensation on CNC machine tools. Two types of ANFIS model were built in this paper: using grid partitioning and using fuzzy c-means clustering. According to the results, the ANFIS with fuzzy c-means clustering produced better results, achieving up to 94% improvement in error with a maximum residual error of ±4 μm. In another work [8] they built a thermal model by integrating ANN and GMC(1, N) models. The thermal model can predict the Environmental Temperature Variation Error (ETVE) of a machine tool, with a reduction in error from over 20 μm to better than ±3 μm. Nevertheless, robust solutions for both principle-based and some data-driven models require the measurement of temperature and related thermal error components, which have to be obtained by time-consuming experiments. This is difficult to achieve in a working machine shop, because of the prohibitively costly downtime required to conduct the experiments.


Construction of the ANFIS model requires the division of the input-output data into rule patches. This can be achieved by using a number of methods such as grid partitioning, the subtractive clustering method and fuzzy c-means (FCM) [20]. According to Jang [7], grid partitioning is only suitable for problems with a small number of input variables (e.g. fewer than 6). A model with three inputs with three fuzzy sets per input produces a complete rule set of 27 rules, whereas a model with six inputs requires 729 (3^6) rules. Clearly, standard ANFIS models are practically limited to low-dimensional modelling. It is important to note that an effective partition of the input space can decrease the number of rules and thus increase the speed in both the learning and application phases. In order to obtain a small number of fuzzy rules, a fuzzy rule generation technique that integrates ANFIS with FCM clustering will be applied in this paper, where the FCM is used to systematically create the fuzzy MFs and fuzzy rule base for ANFIS. In addition, it also helps to determine the initial parameters of the fuzzy model. This is important because an initial value which is very close to the final value will eventually result in the quick convergence of the model towards its final value during the training process [21].
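As a concrete reference for how FCM partitions the data, here is a minimal, self-contained sketch of the standard algorithm (fuzzifier m = 2, random initial memberships). It is a generic textbook illustration, not the specific rule-generation procedure of [20] or [21]:

```python
import numpy as np

def fcm(X, n_clusters, m=2.0, max_iter=100, tol=1e-5, seed=0):
    """Minimal fuzzy c-means: returns cluster centres and membership matrix U.

    X: (n_samples, n_features) array; m > 1 is the fuzzifier (2.0 is common).
    """
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    # random initial memberships, each row sums to 1
    U = rng.random((n, n_clusters))
    U /= U.sum(axis=1, keepdims=True)
    for _ in range(max_iter):
        Um = U ** m
        # update centres as membership-weighted means
        centres = (Um.T @ X) / Um.sum(axis=0)[:, None]
        # squared distances from every point to every centre
        d2 = ((X[:, None, :] - centres[None, :, :]) ** 2).sum(axis=2)
        d2 = np.fmax(d2, 1e-12)            # avoid division by zero
        # standard FCM membership update
        inv = d2 ** (-1.0 / (m - 1.0))
        U_new = inv / inv.sum(axis=1, keepdims=True)
        if np.abs(U_new - U).max() < tol:  # converged
            U = U_new
            break
        U = U_new
    return centres, U

# two well-separated 1-D groups -> centres near 0.1 and 10.0
X = np.array([[0.0], [0.1], [0.2], [9.9], [10.0], [10.1]])
centres, U = fcm(X, n_clusters=2)
print(np.sort(centres.ravel()))
```

Each resulting centre (with a spread estimated from the memberships) can then seed one Gaussian MF and one rule per cluster, which is how an FCM-based partition keeps the rule count equal to the number of clusters rather than exponential in the number of inputs.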

Early work by Chen et al. [4] used both a multiple regression analysis (MRA) model and an artificial neural network (ANN) model for thermal error compensation of a horizontal machining centre. To build their models, 810 data sets were collected from five different tests; each test was run for 6 h for a heating cycle and then stopped for 10 h for a cooling-down cycle. With their experimental results, the thermal error was reduced from 196 to 8 μm. Wang [10] used a Hierarchy-Genetic-Algorithm (HGA) trained neural network in order to map the temperature change against the thermal response of the machine tool. Wang [8] also proposed a thermal model by using an Adaptive Neuro-Fuzzy Inference System (ANFIS) and optimised the number of sensors by the Grey system model GM(1,m). A hybrid learning method, which is a combination of both the steepest descent and least-squares estimator methods, was used in the learning algorithms. Experimental results indicated that the thermal error compensation model could reduce the thermal error to less than 9 μm under real cutting conditions. Wang in Refs. [10,8] used 150 min and 480 min of data acquisition in order to build the HGA and ANFIS models, respectively. However, both models require training cycles to teach the model how to respond to various changes in input conditions. Eskandari et al. [12] presented a method by which to compensate for the positional, geometric, and thermally induced errors of a three-axis CNC milling machine using an offline technique. Thermal errors are modelled by three empirical models: MRA, ANN, and ANFIS. To build their models, the experimental data were collected every 10 min while the machine was running for 120 min. The experimental data are divided into training and checking data sets. Their validated results on a free form show a significant average improvement of 41% of the errors. Abdulshahed et al. [13] proposed a thermal model by using an ANFIS with fuzzy c-means clustering. Different groups of key temperature points were identified from thermal images using a novel schema based on a GM(0, N) model and fuzzy c-means clustering. Experimental results indicated that the thermal error compensation model could reduce the thermal error to less than 2 μm. Also, similar works have been carried out by the same authors in Refs. [11,14,15].
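Grey models such as GM(1,m) and GM(0,N) recur throughout the works cited above. The univariate GM(1,1) below sketches the core mechanism (accumulated generating operation, then a least-squares fit of the whitening equation); it is the generic textbook form, not the multivariable variants used in the cited papers, and the sample series is invented:

```python
import numpy as np

def gm11_fit_predict(x0, n_ahead=1):
    """Basic GM(1,1) grey prediction: fit a short series and extrapolate it."""
    x0 = np.asarray(x0, dtype=float)
    x1 = np.cumsum(x0)                      # AGO: accumulated generating operation
    z1 = 0.5 * (x1[1:] + x1[:-1])           # mean sequence of x1
    B = np.column_stack([-z1, np.ones_like(z1)])
    Y = x0[1:]
    a, b = np.linalg.lstsq(B, Y, rcond=None)[0]   # develop coefficient, grey input
    k = np.arange(len(x0) + n_ahead)
    x1_hat = (x0[0] - b / a) * np.exp(-a * k) + b / a
    # inverse AGO restores the original-scale series plus the forecast points
    return np.concatenate([[x1_hat[0]], np.diff(x1_hat)])

# a near-exponential series, e.g. a warming-up displacement trend
series = [10.0, 12.0, 14.4, 17.28, 20.74]
print(gm11_fit_predict(series, n_ahead=1).round(2))
```

GM(1,1) needs only a handful of samples, which is why grey models are attractive for ranking and selecting temperature sensors from short measurement runs.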


The choice of inputs to the thermal model is a non-trivial decision which is ultimately a compromise between the ability to obtain data that sufficiently correlates with the thermal distortion and the cost of implementing the necessary feedback sensors. In this thesis, temperature measurement was supplemented by direct distortion measurement at accessible locations. The location of temperature measurement must also provide a representative measurement of the change in temperature that will affect the machine structure. The number of sensors and their locations are not always intuitive, and the time required to identify the optimal locations is often prohibitive, resulting in compromise and poor results. In this thesis, a new intelligent system for reducing the thermal errors of machine tools using data obtained from thermography is introduced. Different groups of key temperature points on a machine can be identified from thermal images using a novel schema based on Grey system theory and the Fuzzy C-Means (FCM) clustering method. This novel method simplifies the modelling process, enhances the accuracy of the system and reduces the overall number of inputs to the model, since otherwise a much larger number of thermal sensors would be required to cover the entire structure.


Image analysis generally refers to the processing of images by computer with the objective of discovering what objects are exhibited in the image [1]. Image segmentation is one of the fundamental and difficult tasks in many image and vision applications. It has been studied extensively over the past several decades, with a huge number of segmentation algorithms published in the literature. These image segmentation approaches can be divided broadly into four categories: thresholding, clustering, edge detection and region extraction.

Seismic surveys may be broadly defined as "surface seismic" and "borehole seismic", also known as "Vertical Seismic Profiling" (VSP). In a surface seismic study usually both the source and the receivers ("geophones"; sensitive ground velocity sensors) are located at the surface and the reflected waves are analysed. In a Vertical Seismic Profiling borehole investigation the receivers are located within the borehole and the source is usually located at the surface or, less frequently, downhole. The borehole seismic data recorded can provide calibrated, high resolution data that can be used alone, or in conjunction with surface seismic data, in order to make exploration decisions, and thus is a valuable technique for well characterisation. An additional application for borehole seismic logging tools is the monitoring of hydraulic fracturing ("fracking") sites.

The quality of the workpiece depends on the thermo-elastic behavior of the machine tool during the production process. Machine tool deformations occur due to waste heat from motors and frictional heat from guides, joints and the tool, while coolants act to reduce this influx of heat. Additional thermal influences come from the machine tool's environment and foundation. This leads to inhomogeneous, transient temperature fields inside the machine tool which displace the tool center point (TCP) and thus reduce production accuracy and, finally, product quality [1]. Besides approximation strategies such as characteristic-diagram-based correction as in [8] and structure-model-based correction as shown in [4], the most reliable way to predict the TCP displacement is via structure-mechanical finite element (FE) simulation. A CAD model of a given machine tool serves as the basis for this approach, on which an FE mesh is created. After establishing the partial differential equations (PDEs) describing the heat transfer within the machine tool and with its surroundings, FE simulations are run in order to obtain the temperature fields of the machine tool for specified load regimes. Using linear thermo-elastic expansion, the deformation can then be calculated from each temperature field, and the displacement of the TCP read from this deformation field, see [6]. The accuracy of this latter approach depends on the correct modelling of the heat flux within the machine tool and the exchange with its surroundings. In order to calculate the correct amount of heat being exchanged with the environment, one may use known parameters from well-established tables. However, if the surrounding air is in motion or otherwise changing, computational fluid dynamics (CFD) simulations are required to accurately determine these transient parameters.
This two-step approach makes realistic thermo-elastic simulations particularly complicated and time-consuming; its main drawback is the very compute-intensive CFD simulation. Some methods aiming at real-time thermo-elastic simulations based on model order reduction must therefore rely on inaccurate predetermined parameter sets [5]. This could be remedied if all the necessary CFD simulations could be run in advance and supplied to the thermo-elastic models when they are needed. Nevertheless, the full output of these CFD simulations is too large a volume of data for an effective computation of the correction steps. A reduction of this data is therefore desirable, which is what motivates the ideas of this paper.
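The chain "temperature field → linear thermo-elastic expansion → TCP displacement" can be made concrete with a deliberately tiny 1-D stand-in: an explicitly integrated heat equation on a rod with one heated end, followed by integration of the thermal strain. Every parameter value here is invented for illustration; a real machine tool requires the full 3-D FE/CFD treatment described above:

```python
import numpy as np

def rod_tcp_displacement(n=50, length=1.0, alpha=1e-5, kappa=1e-4,
                         t_end=500.0, dt=0.05, t_hot=20.0):
    """Toy 1-D analogue of the FE approach: explicit finite-difference heat
    conduction in a rod whose left end is held t_hot kelvin above ambient,
    then linear thermal expansion gives the free-end ('TCP') displacement."""
    dx = length / (n - 1)
    r = kappa * dt / dx ** 2
    assert r <= 0.5, "explicit scheme is unstable for r > 0.5"
    T = np.zeros(n)                       # temperature rise above ambient
    for _ in range(int(t_end / dt)):
        Tn = T.copy()
        Tn[1:-1] = T[1:-1] + r * (T[2:] - 2 * T[1:-1] + T[:-2])
        Tn[0] = t_hot                     # heated end (e.g. a motor flange)
        Tn[-1] = Tn[-2]                   # insulated free end
        T = Tn
    # linear thermo-elastic expansion: u = alpha * integral of T over the rod
    return alpha * (0.5 * (T[:-1] + T[1:]) * dx).sum()

u = rod_tcp_displacement()
print(f"free-end displacement: {u * 1e6:.1f} um")
```

Even this toy shows why the boundary conditions matter: the displacement is an integral over the whole temperature field, so an error in the heat exchanged at the boundary propagates directly into the predicted TCP position.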


field through the interaction of attraction and exclusion is similar to the objective of fuzzy clustering. There is a greater repulsive force between different clusters, which makes clusters more separate and data in the same cluster more compact, and which can reduce the impact of weak noise points. In addition, the theory of the data field, based on the concept of a physical field, can help us to find the centers of the potential field in the data space. The number and location of the potential centers offer a very important reference for determining the number of clusters and selecting the initial cluster centers. This can remedy the defect of the traditional FCM algorithm of being overly sensitive to the initial clustering centers. Kumar et al. [4] proposed a hybrid approach to solve the problem of random initialization of cluster centers. They used BBO, a population-based evolutionary algorithm motivated by the migration mechanism of ecosystems. Aldahdooh and Ashour [5] proposed a selection method for the initial cluster centroids in K-means clustering instead of the random selection method. In recent years, under the inspiration of social and natural laws, researchers have combined the fuzzy clustering algorithm with other algorithms such as the genetic algorithm, the ant colony optimization algorithm and the particle swarm algorithm, and achieved great success. The EFCM algorithm proposed in this paper improves the selection process of the initial clustering centers and the iterative updating process of the traditional FCM algorithm based on the theory of charge interaction, Coulomb's law and data field theory, which makes the algorithm more realistic. Experiments are used to demonstrate the advantages of the improved EFCM algorithm compared to the traditional FCM algorithm.
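The idea of reading candidate cluster centers off the peaks of a data-field potential can be sketched as follows. The Gaussian influence function, the separation rule and all values are simplifying assumptions for illustration; this is not the EFCM algorithm itself:

```python
import numpy as np

def potential_centers(X, k, sigma=1.0):
    """Seed k cluster centers at the highest-potential points of a Gaussian
    'data field': each point's potential is the summed Gaussian influence of
    all points, and picked peaks are kept mutually distant to avoid duplicates."""
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(axis=2)
    phi = np.exp(-d2 / (2 * sigma ** 2)).sum(axis=1)   # field potential per point
    order = np.argsort(-phi)                           # highest potential first
    centers = [X[order[0]]]
    for idx in order[1:]:
        if len(centers) == k:
            break
        # accept a peak only if it is far enough from all chosen centers
        if all(((X[idx] - c) ** 2).sum() > sigma ** 2 for c in centers):
            centers.append(X[idx])
    return np.array(centers)

# two tight groups of points -> one seed per group, not two from the same group
X = np.vstack([np.zeros((5, 2)), np.full((5, 2), 5.0)])
print(potential_centers(X, 2))
```

Seeding FCM from such peaks, rather than from random memberships, is the kind of initialization improvement the surrounding text is arguing for.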

Currently, the fuzzy c-means algorithm plays a certain role in remote sensing image classification. However, it easily falls into locally optimal solutions, which leads to poor classification. In order to improve the accuracy of classification, this paper puts forward a fuzzy c-means clustering optimization algorithm based on improved marked watershed segmentation. Because watershed segmentation and fuzzy c-means clustering are sensitive to noise in the image, this paper uses an adaptive median filtering algorithm to eliminate the noise. During this process, the number of classes and the initial cluster centers of fuzzy c-means are determined by the result of fuzzy similar relation clustering. Through a series of comparative simulation experiments, the results show that the method proposed in this paper is more accurate than the ISODATA method, and that it is a feasible training method.
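A minimal version of the adaptive median filter mentioned above looks like this (the textbook two-stage scheme: grow the window until its median is not an impulse, then replace the pixel only if the pixel itself is one; window sizes and the test image are illustrative):

```python
import numpy as np

def adaptive_median(img, s_max=7):
    """Adaptive median filter for salt-and-pepper noise on a 2-D float array."""
    pad = s_max // 2
    padded = np.pad(img, pad, mode='edge')
    out = img.copy()
    H, W = img.shape
    for i in range(H):
        for j in range(W):
            for s in range(3, s_max + 1, 2):      # odd window sizes 3, 5, ...
                h = s // 2
                win = padded[i + pad - h:i + pad + h + 1,
                             j + pad - h:j + pad + h + 1]
                zmin, zmed, zmax = win.min(), np.median(win), win.max()
                if zmin < zmed < zmax:            # median is not an impulse
                    if not (zmin < img[i, j] < zmax):
                        out[i, j] = zmed          # pixel is an impulse: replace
                    break                         # otherwise keep the pixel
            else:
                out[i, j] = zmed                  # max window reached: use median
    return out

img = np.full((7, 7), 10.0)
img[3, 3] = 255.0                                 # one salt impulse
print(adaptive_median(img, s_max=5)[3, 3])
```

Unlike a plain median filter, the adaptive variant preserves non-impulse pixels untouched, which is why it is preferred before segmentation stages that are sensitive to smoothing, such as watershed.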

pling would allow for measurements of the dynamic evolution of plumes, and feature tracking could then be used as a means to determine gas emission rates. A more fundamental limitation is the NEΔT of the spectrally filtered channels. There are cameras available commercially with NEΔTs of < 20 mK and 60 Hz frame rates that can provide retrieval errors in SCDs below 10%. Many improvements to the system can be envisaged. By viewing a target using three cameras arranged with an angular spacing of 120°, a three-dimensional image could be acquired and quantitative measures of plume dimensions and plume morphology derived. The addition of filters centred at different wavelengths would also permit a range of other gases to be measured. The camera could also be used in atmospheric research for studies of the radiative effects of clouds on the Earth's radiation balance (Smith and Toumi, 2008) and to image toxic gases from industrial accidents or from deliberate gas releases, where personal safety is a major issue.



The current study focuses on the spindle system of a box-type precision CNC coordinate boring machine. Thermal balance experiments were performed using a temperature-displacement acquisition system to measure the distribution of the temperature field and the thermal deformation at different spindle speeds. The study analyzes how different spindle speeds affect the thermal characteristics, then uses a fuzzy clustering regression analysis method to optimize the temperature variables and select those that are thermal-error-sensitive; finally, MIMO artificial neural network models were established for the spindle axial thermal elongation and radial thermal tilts. Subsequently, a new set of sample data is used to validate the model. The results indicate that the model has high prediction accuracy and generalizes well; one can obtain an exact model for subsequent thermal error compensation that provides references for the characteristic parameters of thermal equilibrium.

The Electroencephalogram (EEG) signal is a voltage signal arising from synchronized neural activity. EEG can be used to classify different mental states and to find abnormalities in neural activity. To check for abnormality in neural activity, the EEG signal is classified using classifiers. In this project, k-means clustering and fuzzy c-means (FCM) clustering are used to cluster the input data set for a neural network. NeuroIntelligence is a neural network tool used to classify unknown data points. The nonlinear time series (NLTS) data set is initially clustered into Normal or Abnormal categories using the k-means or FCM clustering methods. This clustered data set is used to train the neural network. When an unknown EEG signal is taken, the NLTS measurements are first extracted and input to the trained neural network to classify the EEG signal. The proposed classification method makes it straightforward to classify EEG signals.

thin blood vessels may result in small round spots that are locally similar to MAs in appearance. Vessel segments may be disconnected from the overall tree and appear as small, dark objects of various shapes. Almost every state-of-the-art technique includes some form of image preprocessing phase, which usually covers noise reduction, filtering or colour modification. Retinal images have the largest contrast in the green channel; accordingly, it is common practice to use the green channel for segmentation purposes. For noise reduction, convolution with Gaussian kernels and median filtering are commonly used techniques. The number of pixels to be processed is considerably reduced by only considering the local maxima of the preprocessed image. We perform maximum detection on each image and determine a set of rules that describe the size, height, and shape of the main peak. The fundus image features are produced with success rates of 99, 94 and 100% for optic disc localization, optic disc boundary detection and fovea localization, respectively. These designs can be refined on larger databases and also used for clinical purposes. The region growing segmentation technique gives a good segmentation result, delineating the region with appropriate parameters, but it takes too much time to complete the clustering process, so it is expensive. The region splitting and merging technique splits the images until the appropriate quality is achieved [3]; it is not suitable when a larger number of images must be processed simultaneously. Watershed is an edge-based image segmentation technique that provides a large variety of segmented images with high reliability, but it also suffers from over-segmentation. Fuzzy C-Means (FCM) is a data clustering technique in which a data set is grouped into 'n' clusters, with every data point in the dataset belonging to every cluster to a certain degree. A conventional FCM algorithm does not incorporate spatial information, which makes it sensitive to noise and other image artefacts, whereas the Spatial Fuzzy C-Means clustering algorithm incorporates the spatial information into the objective function for clustering. The Modified Spatial Fuzzy C-Means clustering method is used to identify glaucoma present in the retina at various spatial coordinates.

Many approaches have been proposed to segment masses from the surrounding tissue in digital mammograms. Mahfuzah Mustafa et al. [3] used the Chan-Vese Active Contour and the Localized Active Contour for segmenting lesions in digitized mammogram images; the effectiveness of the two techniques is then compared, and the results obtained by the Chan-Vese Active Contour are shown to be better than those of the Localized Active Contour method. J. Quintanilla et al. [4] applied mathematical morphology to enhance potential MCs. Afterwards, three algorithms (Fuzzy C-Means, K-Means, and Possibilistic Fuzzy C-Means) are used and compared in order to segment ROI images, aiming to improve the detection of microcalcification clusters.

Abstract—The problem of mining high dimensional data involves a high computational cost: a high dimensional dataset is composed of thousands of attributes and/or instances. The efficiency of an algorithm, specifically its speed, is often sacrificed when this kind of dataset is supplied to the algorithm. The Fuzzy C-Means algorithm is one which suffers from this problem; this clustering algorithm requires high computational resources whether it processes low or high dimensional data. The Netflix rating data, the small round blue cell tumors (SRBCTs) data and the Colon Cancer data (52, 2,308 and 2,000 attributes and 1,500, 83 and 62 instances, respectively) were identified as high dimensional datasets. As such, the Manhattan distance measure employing the trigonometric function was used to enhance the fuzzy c-means algorithm. Results show an increase in the efficiency of processing large amounts of data: on the Netflix, Colon Cancer and SRBCT datasets, the Euclidean distance measure took (39,296, 38,952 and 85,774 milliseconds to complete the different clusters, respectively) an average of 54,674 milliseconds, while the Manhattan distance measure took an average of (36,858, 36,501 and 82,86 milliseconds, respectively) 52,703 milliseconds for the entire dataset to cluster. On the other hand, the enhanced Manhattan distance measure took (33,216, 32,368 and 81,125 milliseconds, respectively) 48,903 milliseconds on clustering the datasets. Given the said result, the enhanced Manhattan distance measure is 11% more efficient than the Euclidean distance measure and 7% more efficient than the Manhattan distance measure.
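How a distance measure plugs into FCM can be seen in the membership update alone. The sketch below swaps Euclidean for Manhattan (L1) distance; the paper's additional trigonometric enhancement is not reproduced here, and the sample points are invented:

```python
import numpy as np

def memberships(X, centres, m=2.0, metric="euclidean"):
    """FCM membership update under a chosen distance measure.

    u_ik = d_ik^(-2/(m-1)) / sum_j d_ij^(-2/(m-1))
    """
    diff = X[:, None, :] - centres[None, :, :]
    if metric == "euclidean":
        d = np.sqrt((diff ** 2).sum(axis=2))
    elif metric == "manhattan":
        d = np.abs(diff).sum(axis=2)        # L1 distance: cheaper, no squares
    else:
        raise ValueError(f"unknown metric: {metric}")
    d = np.fmax(d, 1e-12)                   # avoid division by zero at a centre
    inv = d ** (-2.0 / (m - 1.0))
    return inv / inv.sum(axis=1, keepdims=True)

X = np.array([[0.0, 0.0], [1.0, 1.0], [9.0, 9.0]])
C = np.array([[0.5, 0.5], [9.0, 9.0]])
print(memberships(X, C, metric="manhattan").round(3))
```

The speed difference reported above comes from exactly this inner loop: the L1 distance replaces the square and square-root operations with absolute differences, which is executed once per point-centre pair on every iteration.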

To test the procedure, the TCP displacements and boundary conditions were measured for a thermal load case during operation. For a finishing operation, we reproduced the compensation procedure on a standard PC, which has nearly the same power as is available on a standard machine tool control. A time step of ten minutes was chosen for updating the compensation values, based on measurements previously carried out. Figure 6 shows the calculated thermally induced TCP displacements after six hours of machining time, with an augmentation factor of 6'000. These calculated values were compared with measurements at five locations in the work space and showed a good correlation. After numerically applying the compensation scheme based on location and component errors, the remaining maximum TCP displacements were reduced by a factor of 50.
