The Adaptive Neuro Fuzzy Inference System (ANFIS) has become an attractive and powerful modelling technique, combining the well-established learning laws of Artificial Neural Networks (ANNs) with the linguistic transparency of fuzzy logic theory. In ANFIS models, different methods can be used to train the model and optimise the fuzzy rules. A sufficient number of data samples should be used to obtain an accurate model. There is no formula to estimate the number of data samples needed to train the ANFIS network; the number can vary greatly depending on the complexity of the system under consideration. However, many ANFIS networks have been trained successfully with small amounts of training data. Buragohain and Mahanta proposed an ANFIS-based modelling method in which the number of data samples employed for training was minimised by applying an engineering statistical technique called full factorial design. They have also applied another method called the V-fold technique. Although their techniques were able to construct an ANFIS model with a small number of training samples (as few as 7), they still used all the experimental samples in order to select the optimal ones. Data transformation can also change the smoothness and comparability of the data. For instance, Huang and Chi Chu proposed a data transformation technique to simplify the fuzzy modelling procedure. The transformation maps the whole raw data set to another domain such that there is no need to adjust the membership functions, and fuzzification simply takes place on the fixed ones. Shmilovici and Aguilar-Martin have also utilised the Box-Cox transform to improve the quality of the fuzzy model before parameter optimisation. Optimising the number of training patterns and the data domain used for training is therefore of prime concern in the field of fuzzy modelling.
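As an illustration of the kind of data transformation discussed above, a Box-Cox transform can be applied to a skewed sample before fuzzy modelling. The following is a minimal sketch; the lognormal sample is a synthetic stand-in, not data from the cited works:

```python
import numpy as np
from scipy.stats import boxcox

# Hypothetical positively skewed data standing in for raw training samples.
raw = np.random.default_rng(0).lognormal(mean=0.0, sigma=0.8, size=200)

# Box-Cox requires strictly positive data; it returns the transformed
# values and the lambda that maximises the log-likelihood of normality.
transformed, lam = boxcox(raw)

print(f"estimated lambda: {lam:.3f}")
```

After the transform, the sample is far closer to symmetric, so fixed membership functions cover it more evenly.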
Thermal errors can have significant effects on CNC machine tool accuracy. The errors usually come from thermal deformations of the machine elements created by heat sources within the machine structure or from ambient changes. The performance of a thermal error compensation system inherently depends on the accuracy and robustness of the thermal error model. In this paper, Adaptive Neuro Fuzzy Inference System (ANFIS), Artificial Neural Network (ANN) and Particle Swarm Optimization (PSO) techniques were employed to design four thermal prediction models: ANFIS dividing the data space into rule patches (ANFIS-Scatter partition model); ANFIS dividing the data space into rectangular sub-spaces (ANFIS-Grid partition model); ANN with a back-propagation algorithm (ANN-BP model); and ANN with a PSO algorithm (ANN-PSO model). Grey system theory was also used to obtain the influence ranking of the input sensors on the thermal drift of the machine structure. Four different models were designed, based on the sensors ranked as having the greatest influence on the thermal drift of the spindle. According to the results, the ANFIS models are superior in terms of the accuracy of their predictive ability; the results also show ANN-BP to have a relatively good level of accuracy. Across all the models used in this study, the accuracy of the results produced by the ANFIS models was higher than that produced by the ANN models.
Early work by Chen et al. used both a multiple regression analysis (MRA) model and an artificial neural network (ANN) model for thermal error compensation of a horizontal machining centre. To build their models, 810 data sets were collected from five different tests; each test was run for 6 h for a heating cycle and then stopped for 10 h for a cooling-down cycle. With their experimental results, the thermal error was reduced from 196 μm to 8 μm. Wang used a Hierarchy-Genetic-Algorithm (HGA) trained neural network in order to map the temperature change against the thermal response of the machine tool. Wang also proposed a thermal model using an Adaptive Neuro Fuzzy Inference System (ANFIS) and optimised the number of sensors with the Grey system model GM(1,m). A hybrid learning method, combining the steepest descent and least-squares estimator methods, was used in the learning algorithms. Experimental results indicated that the thermal error compensation model could reduce the thermal error to less than 9 μm under real cutting conditions. Wang in Refs. [10,8] used 150 min and 480 min of data acquisition in order to build the HGA and ANFIS models, respectively. However, both models require training cycles to calibrate how the model responds to various changes in input conditions. Eskandari et al. presented a method to compensate for positional, geometric, and thermally induced errors of a three-axis CNC milling machine using an offline technique. Thermal errors were modelled by three empirical models: MRA, ANN, and ANFIS. To build their models, the experimental data were collected every 10 min while the machine was running for 120 min. The experimental data were divided into training and checking data sets. Their validated results on a free form show a significant average improvement of 41% in the errors. Abdulshahed et al. proposed a thermal model using an ANFIS with fuzzy c-means clustering.
Different groups of key temperature points were identified from thermal images using a novel schema based on a GM(0, N) model and fuzzy c-means clustering. Experimental results indicated that the thermal error compensation model could reduce the thermal error to less than 2 μm. Similar work has also been carried out by the same authors in Refs. [11,14,15].
Grey systems theory, established by Deng, is a methodology that focuses on solving problems involving incomplete information or small samples. The technique can be applied to uncertain systems with partially known information by generating, mining, and extracting useful information from available data, so that system behaviours and their hidden laws of evolution can be accurately described. It uses a Black-Grey-White colour description for complex systems. GM(1, N) is the most widely used implementation in the literature; it establishes a first-order differential equation featuring a comprehensive and dynamic analysis of the relationships between system parameters. The accumulated generating operation (AGO) is the most important characteristic of Grey system theory; its benefit is to increase the linearity and reduce the randomness of the samples. Based on the existing GM(1, N) model, Tien proposed the GMC(1, N) model, an improved Grey prediction model in which the values modelled by GM(1, N) are corrected by including a convolution integral. Traditionally, these models have been calibrated by the least squares method. However, due to the nonlinearity of the problem, the least squares solution may not be satisfactory.
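The AGO itself is simply a cumulative sum over the raw sequence. A minimal sketch (the sample values are illustrative, not data from the cited works) shows the 1-AGO and its inverse:

```python
import numpy as np

# First-order accumulated generating operation (1-AGO): the k-th entry of
# the generated sequence is the cumulative sum of the first k raw samples.
def ago(x):
    return np.cumsum(x, dtype=float)

# Inverse AGO (1-IAGO) recovers the raw sequence by first differences.
def iago(x1):
    return np.diff(x1, prepend=0.0)

raw = np.array([2.87, 3.28, 3.34, 3.72, 3.60])   # illustrative samples
x1 = ago(raw)
print(x1)          # monotonically increasing, quasi-exponential
print(iago(x1))    # recovers the raw sequence
```

Because the accumulated sequence is monotone and smooth, it is far easier to fit with the Grey differential equation than the noisy raw samples.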
The choice of inputs to the thermal model is a non-trivial decision, ultimately a compromise between the ability to obtain data that sufficiently correlates with the thermal distortion and the cost of implementing the necessary feedback sensors. In this thesis, temperature measurement was supplemented by direct distortion measurement at accessible locations. The locations of temperature measurement must also provide a representative measurement of the change in temperature that will affect the machine structure. The number of sensors and their locations are not always intuitive, and the time required to identify the optimal locations is often prohibitive, resulting in compromise and poor results. In this thesis, a new intelligent system for reducing the thermal errors of machine tools using data obtained from thermography is introduced. Different groups of key temperature points on a machine can be identified from thermal images using a novel schema based on Grey system theory and the Fuzzy C-Means (FCM) clustering method. This novel method simplifies the modelling process, enhances the accuracy of the system and reduces the overall number of inputs to the model, since otherwise a much larger number of thermal sensors would be required to cover the entire structure.
In this paper, a new intelligent compensation system for reducing the thermal errors of machine tools using data obtained from a thermal imaging camera is introduced. Different groups of key temperature points were identified from thermal images using a novel schema based on a Grey model GM(0, N) and the fuzzy c-means (FCM) clustering method. An Adaptive Neuro-Fuzzy Inference System with fuzzy c-means clustering (FCM-ANFIS) was employed to design the thermal prediction model. In order to optimise the approach, a parametric study was carried out by changing the number of inputs and the number of membership functions to the FCM-ANFIS model, and comparing the relative robustness of the designs. According to the results, the FCM-ANFIS model with four inputs and six membership functions achieves the best performance in terms of the accuracy of its predictive ability. The residual value of the model is smaller than ±2 μm, which represents a 95% reduction in the thermally induced error on the machine. Finally, the proposed method is shown to compare favourably against an Artificial Neural Network (ANN) model.
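As a rough illustration of the fuzzy c-means step used in FCM-ANFIS, the following minimal NumPy implementation clusters synthetic two-dimensional data; the data and all parameters are hypothetical stand-ins, not those of the paper:

```python
import numpy as np

def fcm(X, c=2, m=2.0, iters=100, tol=1e-6, seed=0):
    """Minimal fuzzy c-means: returns cluster centres and the membership
    matrix U (n_samples x c), with each row summing to one."""
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    U = rng.dirichlet(np.ones(c), size=n)            # random fuzzy partition
    for _ in range(iters):
        Um = U ** m
        centres = (Um.T @ X) / Um.sum(axis=0)[:, None]
        # squared distance from every sample to every centre
        d2 = ((X[:, None, :] - centres[None, :, :]) ** 2).sum(axis=2)
        d2 = np.maximum(d2, 1e-12)
        inv = d2 ** (-1.0 / (m - 1.0))               # standard FCM update
        U_new = inv / inv.sum(axis=1, keepdims=True)
        if np.abs(U_new - U).max() < tol:
            U = U_new
            break
        U = U_new
    return centres, U

# Two well-separated blobs standing in for grouped temperature readings.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 0.3, (50, 2)), rng.normal(3, 0.3, (50, 2))])
centres, U = fcm(X, c=2)
print(np.round(centres, 2))
```

In the paper's pipeline, each cluster of temperature points would then contribute one representative input to the ANFIS model.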
After analysing the principle of RSSI ranging and considering the environmental factors affecting positioning, a localization algorithm with weighted compensation of the coordinate error is put forward, and simulations are performed under different reference-distance conditions. The results show that the algorithm is greatly improved in accuracy and stability and, with its strong resistance to environmental interference, can be used in an actual indoor environment.
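RSSI ranging of the kind referred to above is commonly based on the log-distance path-loss model. The sketch below illustrates that ranging step with hypothetical calibration constants A and n; the paper's actual weighted-compensation algorithm and parameters are not reproduced here:

```python
import math

# Log-distance path-loss model: RSSI(d) = A - 10*n*log10(d / d0), d0 = 1 m.
# A and n are hypothetical calibration values; real deployments fit them
# from measurements at known reference distances.
A = -45.0   # RSSI in dBm at the 1 m reference distance (assumed)
n = 2.5     # path-loss exponent for an indoor environment (assumed)

def rssi_to_distance(rssi_dbm):
    """Invert the path-loss model to estimate distance in metres."""
    return 10 ** ((A - rssi_dbm) / (10.0 * n))

def distance_to_rssi(d):
    """Expected RSSI in dBm at distance d metres."""
    return A - 10.0 * n * math.log10(d)

print(round(rssi_to_distance(-45.0), 2))                    # reference: 1 m
print(round(rssi_to_distance(distance_to_rssi(4.0)), 2))    # round-trips
```

A weighted localization scheme would combine several such distance estimates, down-weighting the less reliable (more distant or noisier) anchors.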
This paper has surveyed many useful security measures that are being used to handle various security threats in cloud-based e-learning technology. The third-party cloud server is only partially trusted, so security must be provided. As more users become involved in e-learning, access control restricts unauthorized users from accessing the cloud content. To add further security to e-learning in the cloud, key management supports the organization by providing the access key only to authorized group members for accessing content from the cloud; key management techniques can be implemented alongside the access control mechanism to achieve security in e-learning systems. The system was implemented and tested by two groups of users, one of teachers and one of students, who were asked to run the system and then evaluate its simplicity, the material and the user interface. They reported that the system overcomes most of the problems in teaching with a group-based e-learning scheme. In the proposed system, we designed a cloud-based e-learning architecture, which includes SaaS, PaaS, IaaS, and also Operational and Maintenance Services. Devices such as mobiles and laptops are used to access the cloud e-learning system and form the access layer. Cloud-based education will help students, staff, trainers, institutions and learners to a very high extent, and in particular students from rural parts of the world will get the opportunity to access knowledge shared by a professor in another part of the world. In future work we will develop a conceptual cloud computing security requirements model with four components: data security; risk assessment; legal and compliance requirements; and business and technical requirements.
In many cases, if we want to search efficiently, some data have to be recalled. Humans can recall visual information more easily using, for example, the shape of an object or the arrangement of colours and objects. Since humans are visual by nature, we look for images using other images, and we follow this approach also when categorizing. In this case we search using some features of images, and these features act as the keywords. At the moment, unfortunately, there are no widely used retrieval systems that retrieve images using the non-textual information of a sample image. Our purpose is to develop a content-based image retrieval system that can retrieve from frequently used databases using sketches. The user has a drawing area where he can draw the sketches that form the basis of the retrieval method. A sketch-based system can be very important and efficient in many areas of life. In some cases we can jog our memory with the help of a guess or a drawing. CBIR systems are of great significance in criminal investigation; the identification of unsubstantial images and tattoos can be supported by these systems. Another possible application area of sketch-based information retrieval is searching for analog circuit graphs in a large database: the user makes a sketch of the analog circuit, and the system can provide many similar circuits from the database. Sketch-based image retrieval (SBIR) was introduced in the QBIC and VisualSEEk systems, in which the user draws colour sketches and blobs on the drawing area. The images were divided into grids, and the colour and texture features were determined in these grids.
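The grid-based colour features mentioned above can be sketched as follows. This is an illustrative reconstruction, not the actual QBIC/VisualSEEk implementation; the grid size, bin count and the random test image are assumptions:

```python
import numpy as np

def grid_color_features(img, grid=(4, 4), bins=8):
    """Split an H x W x 3 image into grid cells and compute a per-cell
    histogram of pixel values (all colour channels pooled); concatenating
    the cell histograms gives the retrieval feature vector."""
    H, W, _ = img.shape
    gh, gw = grid
    feats = []
    for i in range(gh):
        for j in range(gw):
            cell = img[i*H//gh:(i+1)*H//gh, j*W//gw:(j+1)*W//gw]
            hist, _ = np.histogram(cell, bins=bins, range=(0, 256))
            feats.append(hist / hist.sum())      # normalise per cell
    return np.concatenate(feats)

# Toy 64x64 image; a retrieval system would compare feature vectors,
# e.g. by Euclidean distance, between the sketch and every stored image.
img = np.random.default_rng(0).integers(0, 256, (64, 64, 3), dtype=np.uint8)
f = grid_color_features(img)
print(f.shape)   # (128,) = 4*4 cells * 8 bins
```

Keeping the features per grid cell preserves the rough spatial layout of the sketch, which plain global histograms would lose.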
The temporal structure that existed would be that which is tracked by our temporal phenomenology. However, it is not clear that a circular temporal topology satisfies each of the other requirements. For instance, if time is circular, then agents cannot sensibly take the future to be different from the past. As Dowden puts it: '[i]f any part of time were circular, then the future—at least this part of it—is also the past, and every event in that part occurs before itself.' Moreover, since any action performed is just as much directed towards the past as it is the future, it is not at all clear that this model underscores a past/future asymmetry. Last, it is not at all clear that a circular model of time provides us with any scaffolding for the asymmetry of counterfactual dependence. Witness the following. It seems just as correct to say that: were it not the case that Billy threw the ball, then it would not be the case that the window broke, as it would be to say that, were it not the case that the window broke, then it would not be the case that Billy threw the ball. After all, there is a global symmetry at the world described, and we may also have global causal loops. Given B&M's requirements, that would appear to make circular time not a model of time at all, and such a world temporally error theoretic. But that seems (intuitively) wrong; the folk do recognise circular temporal models as being models that preserve the reality of time.
Several assumptions were made, such as modelling the machine parts as flat-plate structures with linear temperature gradients. Yun et al. used the Modified Lumped Capacitance Method (MLCM) and the Genius Education Algorithm (GEA) to model the thermal behaviour of the ball screw, and an FEA technique to model the thermal behaviour of the guideway. After modelling, the total thermal error was obtained by adding both. The experimental validation was performed by measuring the linear positioning error of the Z-axis feed drive system using a laser interferometer, achieving approximately 90% improvement in accuracy when compared with the model estimations. The modelling technique would be useful if applied to the full machine structure. Attia et al. [75, 76] and Fraser et al. developed a generalized modelling methodology to compensate the thermal errors of machine tools in real time. This empirically based model used the inverse heat conduction problem (IHCP), in which temperature measurements are carried out at only two points near the heat source to estimate the time variation of the heat input in real time. The computations involved in solving the IHCP include transfer functions, which are sensitive particularly in the case of temperature sensor failure in real time. A thin flat plate was used as a reference for the development of the generalized model; similarly, a simple structure was used to represent a vertical milling machine. Two types of heat input, ramped and sinusoidal, were used to compare the performance of the generalized model and FEA. The results were compared using only two 30-minute tests. Although the generalized model exhibited good accuracy and is quoted as being two orders of magnitude faster than FEA, validation on a real machine, i.e. on a complex geometry, would be of great interest with respect to the modelling time this empirical model may take. Tseng et al. proposed a thermal error prediction model derived from neuro-fuzzy theory.
IC-type temperature sensors and a Renishaw MP4 probe system were used to measure the temperature changes and thermal deformations, respectively. Sensors were attached to the spindle motor and the spindle sleeve side, with one sensor measuring environmental variations. The prediction model improved
(e.g. Naive Bayes) as some of our features are not. Since we approach the problem as a binary decision, we picked a tree-based classifier: Decision Tree. We also studied the performance of another classifier (Random Forest), but even though Random Forest performed better in cross-validation experiments, Decision Tree performed better in cross-domain experiments, suggesting that it would be more reliable in a real situation (where the negative topics are several). We use the Decision Tree implementation of the Weka toolkit (Witten and Frank, 2005).
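The classification setup can be sketched with scikit-learn in place of Weka; the features and labels below are synthetic stand-ins, not the actual topic features used in the paper:

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import cross_val_score

# Synthetic binary-decision data: 200 samples, 5 numeric features,
# with the label determined by a simple rule over two of them.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)

# A shallow decision tree, evaluated with 5-fold cross-validation
# (analogous to the cross-validation experiments described above).
clf = DecisionTreeClassifier(max_depth=4, random_state=0)
scores = cross_val_score(clf, X, y, cv=5)
print(f"mean CV accuracy: {scores.mean():.2f}")
```

Cross-domain evaluation would instead train on one topic's data and test on another's, which is the setting where the paper found the single tree more reliable.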
This order was chosen because, while persistency is an often overlooked factor in usability, it can have devastating effects on the user-friendliness of a system. Usability problems do not all appear or disappear at the same rate (Kjeldskov, Skov, & Stage, 2010). If a user encounters the same problem every single time, even if it is a small problem, this may lower their willingness to use a system. After each problem was given a persistency rate based on its code, frequency had to be decided. Frequency was based on the percentage of participants who encountered the problem in each round and was therefore also given per round. An initial order of severity was decided based on the persistency group a problem was in, but within a group, rules for frequency were set up as well. Taken into account was the cutoff from Rubin and Chisnell (2008), who define a problem encountered by 30% or fewer of all participants as a problem that is not severe but nevertheless needs improvement. That cutoff was based on a single measure, not on a longitudinal study. As this study was longitudinal and included a group of participants that have, in general, a different attitude towards and less experience with technology than younger
variable Y from the explanatory variable X. Then the random variable E becomes the error variable E = Y − f(X) when a predictor f(X) is used, and the MEE principle aims at searching for a predictor f(X) that contains the most information about the response variable by minimizing information entropies of the error variable E = Y − f(X). This principle is a substitute for the classical least squares method when the noise is non-Gaussian. Note that $\mathbb{E}[Y - f(X)]^2 = \int e^2 p_E(e)\,de$. The least squares method minimizes the variance of the error variable E and is perfect for dealing with problems involving Gaussian noise (such as some from linear signal processing). But it only puts the first two moments into consideration, and does not work very well for problems involving heavy-tailed non-Gaussian noise. For such problems, MEE might still perform very well in principle, since moments of all orders of the error variable are taken into account by entropies. Here we only consider Rényi's entropy of order α = 2: $H_R(E) = H_{R,2}(E) = -\log \int (p_E(e))^2\,de$. Our analysis does not
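In practice, MEE implementations typically estimate the order-2 Rényi entropy from error samples with a Parzen (Gaussian kernel) window, since the density p_E is unknown. The sketch below uses that standard estimator; the kernel width and the synthetic error samples are assumptions, not taken from this analysis:

```python
import numpy as np

def renyi_quadratic_entropy(e, sigma=0.5):
    """Parzen-window estimator of Renyi's order-2 entropy:
    H_R2 = -log( (1/N^2) * sum_ij G(e_i - e_j) ),
    where G is a Gaussian kernel of variance 2*sigma^2 (the convolution
    of two width-sigma kernels). The double sum is the 'information
    potential' maximised when MEE training concentrates the errors."""
    e = np.asarray(e, dtype=float)
    d = e[:, None] - e[None, :]                 # all pairwise differences
    s2 = 2.0 * sigma ** 2
    G = np.exp(-d**2 / (2.0 * s2)) / np.sqrt(2.0 * np.pi * s2)
    return -np.log(G.mean())

rng = np.random.default_rng(0)
tight = rng.normal(0, 0.1, 500)   # concentrated errors -> low entropy
wide = rng.normal(0, 1.0, 500)    # dispersed errors -> high entropy
print(renyi_quadratic_entropy(tight) < renyi_quadratic_entropy(wide))
```

Minimising this estimate over the predictor's parameters penalises spread in the whole error distribution, not just its variance, which is why it copes with heavy-tailed noise.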
Although GA can provide good solutions in modelling applications, it requires large memory and fast processing units with large word lengths to execute a huge number of repeated computations. Moreover, for highly multi-modal problems, the solutions may lose diversity and become trapped in local minima unless a special method is adopted to avoid premature convergence to a suboptimal region of the search space. Therefore, in this work, particle swarm optimisation (PSO) is employed to train the antecedent parameters, and recursive least squares (RLS) is used to optimise the consequent parameters of a fuzzy inference system. The proposed methodology is used to identify a non-parametric model of the twin rotor system.
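A minimal global-best PSO loop of the kind used to tune antecedent parameters can be sketched as follows. Here a generic sphere function stands in for the fuzzy model's error surface, and the inertia and acceleration constants are common textbook choices, not the values used in this work:

```python
import numpy as np

def pso(f, dim, n_particles=30, iters=200, seed=0):
    """Minimal global-best PSO minimising f over R^dim. In the paper's
    setting f would be the model error as a function of the antecedent
    (membership-function) parameters."""
    rng = np.random.default_rng(seed)
    w, c1, c2 = 0.72, 1.49, 1.49            # common inertia/acceleration
    x = rng.uniform(-5, 5, (n_particles, dim))
    v = np.zeros_like(x)
    pbest, pbest_f = x.copy(), np.array([f(p) for p in x])
    g = pbest[pbest_f.argmin()].copy()      # global best position
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = x + v
        fx = np.array([f(p) for p in x])
        better = fx < pbest_f
        pbest[better], pbest_f[better] = x[better], fx[better]
        g = pbest[pbest_f.argmin()].copy()
    return g, pbest_f.min()

sphere = lambda p: float(np.sum(p ** 2))    # stand-in error surface
best, best_f = pso(sphere, dim=4)
print(round(best_f, 6))
```

In the hybrid scheme described above, each PSO evaluation would fix the antecedent parameters and let RLS solve for the consequent parameters in closed form before computing the error.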
To improve the performance of conventional SSSC-based damping controllers, ANFIS technology has been employed in this paper; the fuzzy rules are trained using this technology. The proposed method has been evaluated on a 3-machine, 9-bus power system. From the simulation results, it is understood that the ANFIS-based SSSC controllers can provide better damping of the speed deviation, power angle and real power oscillations, in terms of reduced settling times, than the conventional SSSC-based damping controllers under different operating conditions. The ANFIS control scheme is easy to tune and quite robust. Such a nonlinear adaptive SSSC controller will also yield better and faster damping under small and large disturbances, even with changes in system operating conditions.
thresholding the 3-axis foot force/torque sensor readings. The strain gages on the robot are not perfect even after calibration. Often a nonzero value up to ±50N is measured on the swing foot. Sometimes a much bigger value can be observed during swing. This value can change from step to step (see Fig. 7), which makes robust contact state detection difficult. We implemented a simple Schmitt trigger for contact state detection. Instead of a single threshold, a Schmitt trigger has both low and high thresholds. If the foot is in contact and the force/torque sensor reading drops below the low threshold, the contact state of that foot switches to swing. If the foot is in swing and the sensed force is above the high threshold, the contact state changes to stance. We choose the low and high thresholds to be 80N and 100N respectively. The Schmitt trigger eliminates a lot of error compared to a single threshold. Even with the Schmitt trigger, the contact detection can go wrong sometimes due to bad sensor readings. We suspect the strain gage fluctuation is due to thermal effects, and a possible change in the strain gage bonding with the substrate.
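The Schmitt trigger described above can be sketched directly from the stated thresholds (80 N low, 100 N high); the force readings in the example are illustrative:

```python
# Schmitt-trigger contact detection: two thresholds with hysteresis so
# that noise inside the 80-100 N band cannot flip the contact state.
LOW, HIGH = 80.0, 100.0

def update_contact(in_contact, force_n):
    """Return the new contact state given the current state and the
    vertical foot force reading in newtons."""
    if in_contact and force_n < LOW:
        return False          # stance -> swing
    if not in_contact and force_n > HIGH:
        return True           # swing -> stance
    return in_contact         # inside the hysteresis band: keep state

# A noisy reading hovering near 90 N does not chatter between states.
readings = [120, 95, 85, 90, 70, 90, 105, 95]
state, states = True, []
for r in readings:
    state = update_contact(state, r)
    states.append(state)
print(states)   # [True, True, True, True, False, False, True, True]
```

With a single 90 N threshold, the same readings would toggle the state several times; the hysteresis band absorbs that fluctuation.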
The objective of the current research is to apply the predicted values to recalculate the net sales and provide them to the Company, in order to obtain good predictions for production and business operation policies. As the statistics provided by the Vietnam Dairy Products Joint Stock Company (Vinamilk) show that net sales have been increasing in past years, this study applies Grey System Theory to forecast the next two years. As a result, the research can anticipate the future figures. This is a very important objective, since if the forecast is good, or even nearly exact, the Company can adapt its strategic management to the predicted change. With a good forecast, it is easier to build the strategy.
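A single-variable Grey forecast of the kind described is usually built with the classical GM(1,1) model. The sketch below uses hypothetical yearly sales figures, not Vinamilk's actual data:

```python
import numpy as np

def gm11_forecast(x0, steps=2):
    """Classical GM(1,1) Grey forecast: fit a first-order Grey differential
    equation on the 1-AGO of the series and return fitted plus predicted
    values of the original series, extended by `steps` points."""
    x0 = np.asarray(x0, dtype=float)
    n = len(x0)
    x1 = np.cumsum(x0)                         # 1-AGO sequence
    z1 = 0.5 * (x1[1:] + x1[:-1])              # background (mean) values
    B = np.column_stack([-z1, np.ones(n - 1)])
    a, b = np.linalg.lstsq(B, x0[1:], rcond=None)[0]
    k = np.arange(n + steps)
    x1_hat = (x0[0] - b / a) * np.exp(-a * k) + b / a
    x0_hat = np.empty(n + steps)
    x0_hat[0] = x0[0]
    x0_hat[1:] = np.diff(x1_hat)               # inverse AGO
    return x0_hat

# Hypothetical yearly net-sales figures in arbitrary units.
sales = [26.6, 31.0, 35.1, 40.1, 46.0]
forecast = gm11_forecast(sales, steps=2)
print(np.round(forecast, 1))
```

The last two entries of `forecast` are the two-year-ahead predictions; because GM(1,1) fits an exponential trend to the accumulated series, it suits short, monotonically growing samples like annual sales.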
To test the procedure, the TCP displacements and boundary conditions were measured for a thermal load case during operation. For a finishing operation, we reproduced the compensation procedure on a standard PC with nearly the same computing power as is available on a standard machine tool control. Based on measurements previously carried out, a time step of ten minutes was chosen for updating the compensation values. Figure 6 shows the calculated thermally induced TCP displacements after six hours of machining time with an augmentation factor of 6'000. These calculated values were compared with measurements at five locations in the work space and showed a good correlation. After numerically applying the compensation scheme based on location and component errors, the remaining maximum TCP displacements were reduced by a factor of 50.
Solid mechanics is an interdisciplinary subject that mainly deals with the deformation of materials and structures under external loading conditions. In many applications, safety is an important concern. For instance, a small crack can cause catastrophic damage to an airplane, resulting in the loss of many lives (see Fig. 1a). Moreover, such incidents can cause significant damage to the environment, which may take years to recover from fully, as with an oil spill from a damaged tanker (see Fig. 1b), a corroded pipeline, etc. To prevent such undesired incidents, several approaches are commonly followed within the solid mechanics framework. One common approach is to perform experiments to test the durability of materials and structures. Although such an approach can provide realistic data, experiments are not always possible and are mostly expensive. Today, engineers and researchers commonly use theoretical or computational approaches as an alternative for analysing the problems they are working on. Theoretical approaches mostly depend on various assumptions to simplify the calculations; moreover, they are restricted to specific geometries and loading conditions. Hence, it is essential to use a technique that does not have such limitations and is also economically feasible. Computational techniques are very good candidates for this purpose and are commonly used both in industry and academia.