During training, the neurons compete with one another, and the neuron node with the maximum output is the winner. The winning node suppresses its competitors and activates its neighboring nodes. Because only the winning node best matches the input patterns, only the winner is allowed to produce output, and only the weights of the winner and its adjacent nodes are adjusted [14]. That is to say, the **SOM** **network** imitates the distribution of the input patterns: it extracts their characteristics and classifies the input patterns according to those shared characteristics.
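The winner-take-all step described above can be sketched as follows (a minimal Python illustration, not any particular paper's implementation; the "maximum output" winner is found here as the node whose weight vector is closest to the input):

```python
import numpy as np

def som_step(weights, x, lr=0.5, radius=1):
    """One competitive-learning step: find the winning node and pull it
    (and its grid neighbours) toward the input vector x."""
    # distance from the input to every neuron's weight vector
    dists = np.linalg.norm(weights - x, axis=1)
    winner = int(np.argmin(dists))          # best-matching node
    # only the winner and nodes within `radius` on the 1-D grid adapt
    for i in range(len(weights)):
        if abs(i - winner) <= radius:
            weights[i] += lr * (x - weights[i])
    return winner

weights = np.array([[0.0, 0.0], [1.0, 1.0], [2.0, 2.0]])
win = som_step(weights, np.array([1.9, 2.1]))   # node 2 wins; nodes 1-2 move
```

Nodes outside the neighbourhood are left untouched, which is exactly the "only the winner and its adjacent nodes are adjusted" behaviour described in the text.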

classification on a set of test tuples and estimates the performance of the rules produced. A study of previously proposed multi-relational classification algorithms shows that, in classification by multi-dimensional association rule mining, classification faces the problem of continuity of frequent rules [18]. Multiple classification rules are approved by their support and confidence, which are used to build a classifier. When classifying a test tuple, the rule with the maximum confidence classifies it. However, all of these algorithms are based on the support-and-confidence threshold framework, which gives rise to the small-disjunct mining problem, because generating weights and maintaining the weight value of support is difficult [19]. Therefore, we incorporate a new automatic level-threshold generation method into our algorithm to solve the small-disjunct mining problem, and we optimize the classification rate of MrCAR with the **SOM** **network** approach to classify assorted kinds of databases. Finally, the results demonstrate that our proposed algorithm has high accuracy. If multiple rules share the same maximum confidence, we use selection among them; if no rule satisfies the test tuple, the default class, which is the majority class in the training set, is assigned as the class label.
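The maximum-confidence classification step with fallback to the majority class can be sketched as follows (a hypothetical rule representation; MrCAR's actual data structures are not shown in this excerpt):

```python
def classify(tuple_attrs, rules, default_class):
    """Pick the class of the matching rule with the highest confidence;
    fall back to the training set's majority class when no rule fires."""
    # a rule matches when its antecedent item set is contained in the tuple
    matching = [r for r in rules if r["antecedent"] <= tuple_attrs]
    if not matching:
        return default_class
    best = max(matching, key=lambda r: r["confidence"])
    return best["class"]

rules = [
    {"antecedent": {"a", "b"}, "class": "X", "confidence": 0.9},
    {"antecedent": {"a"},      "class": "Y", "confidence": 0.7},
]
print(classify({"a", "b", "c"}, rules, "Z"))  # both rules match; "X" wins
print(classify({"d"}, rules, "Z"))            # no rule matches; default "Z"
```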

Recent research activities in ANN have also shown that ANNs have powerful classification (Dorothea Heiss, C. and Bajla, I., 2005) and pattern recognition (Xin-Hua, S. and Hopke, P.K., 1996) capabilities. Inspired by biological systems, ANNs are able to learn from and generalize from experience. ANNs explore many competing hypotheses simultaneously using a massively parallel **network** composed of relatively simple nonlinear computational elements interconnected by links with variable weights. It is this interconnected set of weights that contains the knowledge generated by the ANN (Adya, M. and Collopy, F., 1998).


Figures 8 and 9 show the winning neuron, or best-centroid neuron, of GloveMAP hand grasping, used to justify the bottle-grasping feature. Figure 8 also shows three features/clusters extracted from the initial bottle grasping, where each cluster contains a centroid called the best matching unit (BMU). The groups of features may differ between subjects, an effect of hand size and grasping style, but the differences are small compared with Fig. 9. However, the more output-space dimensions the **SOM** **network** has, the more complex its topological structure must be, and consequently more neurons will be generated.

Combining with the special environment of the Chinese market, this paper defines the listed corporation's financial crisis and analyzes the shortcomings of existing financial early warning models. In order to further improve the accuracy of financial early warning, and to adaptively select optimal training samples, a short-term forecasting model of listed corporations based on the **SOM** **network** fused with a BP **network** is proposed. The model firstly extracts the initial training samples relying on the **SOM** **network** and obtains the optimum ST samples and non-ST samples among all training samples. Furthermore, the extracted samples are utilized to construct a financial early warning system of five different levels based on the **SOM** **network**. Finally, the model is compared with other model algorithms. The results show that the financial early-warning model proposed in this paper possesses higher recognition accuracy on short-term forecasting and monitoring of enterprise finance compared with other recognition models. Moreover, a smaller data size is needed in this model on the premise that effectiveness is guaranteed. Therefore, the early warning model proposed in this paper can better realize enterprise financial monitoring, so as to effectively prevent and defuse financial risks and crises.


The number of unique URLs generated by pre-processing is 190. We used a fixed value of 20 as the number of clusters, so the input to the **SOM** **network** is a 190 by 20 array. We have tested different parameters for the **SOM** **network** as follows: α varies from 0.2 to 0.9 and ω varies from 1 to 40, where α represents the learning rate and ω determines the number of times a URL is presented within one learning cycle before the neighbourhood size is decreased. In our algorithm, there are 18 learning cycles for organizing the Web pages. In particular, we decreased the neighbourhood size from its initial value of 17 to 0. Fig. 2 and Fig. 3 show the **SOM** map with (α = 0.1, ω = 40) and (α = 0.5, ω = 40), respectively.
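A simplified version of this training schedule — a fixed learning rate and a neighbourhood radius shrinking from 17 to 0 over the learning cycles — might look like the following sketch (the ω-based presentation count and the 2-D map topology of the real algorithm are omitted, and the input matrix here is a random placeholder):

```python
import numpy as np

def train_som(data, n_nodes=20, cycles=18, alpha=0.5, radius0=17):
    """Toy 1-D SOM: the neighbourhood radius shrinks linearly
    from radius0 down to 0 across the learning cycles."""
    rng = np.random.default_rng(0)
    w = rng.random((n_nodes, data.shape[1]))
    for c in range(cycles):
        # linear radius decay: radius0 at the first cycle, 0 at the last
        radius = round(radius0 * (1 - c / max(cycles - 1, 1)))
        for x in data:
            winner = int(np.argmin(np.linalg.norm(w - x, axis=1)))
            for j in range(n_nodes):
                if abs(j - winner) <= radius:
                    w[j] += alpha * (x - w[j])   # pull neighbourhood toward x
    return w

# placeholder stand-in for the 190-by-20 input array described above
codebook = train_som(np.random.default_rng(1).random((190, 20)))
```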

The above-mentioned tests of geotextiles as drain filters must be considered extreme compared with the application that is the subject of this report. Very large amounts of water from the surrounding soil volume are led through drain filters on their way into drain pipes, compared with ordinary drainage of rainwater through the field profile down to the groundwater. When geotextiles are used as root barriers, it is the latter situation that applies. Based on the conclusions in Waagepetersen (1988), we judge that clogging with soil will not usually pose a problem for the medium-dense geotextiles tested in this trial (pore sizes down to 80 µm) when used as root barriers in the open. It is, however, doubtful whether this also holds when using the densest


performed on all 5 test images to form the input data for testing the recognition system. Similarly, the image database for training uses 30 images and forms a 64 × 30 matrix with 64 rows and 30 columns. The input vectors defined for the **SOM** are distributed over a 2D input space varying over [0 255], which represents the intensity levels of the grayscale pixels. These are used to train the **SOM** with dimensions [64 2], where 64 minimum and 64 maximum values of the pixel intensities are represented for each image sample. The resulting **SOM** created with these parameters is a single-layer feed-forward **SOM** map with 128 weights and a competitive transfer function. The weight function of this **network** is the negative of the Euclidean distance [13]. As many as 5 test images are used with the image database for performing the experiments. Training and testing sets were used without any overlap. Fig. 4 shows the result of training and testing simulated in MATLAB using the image database and a test input image.

five sub-processes by using a Kohonen neural **network**. Zhao et al. [15] analyzed the changing rate of impedance and divided the entire deterioration process of an organic coating into three main stages by using an **SOM** neural **network**. Xu et al. [16] analyzed the deterioration process of an organic coating using the changing rate of the phase angle at high frequency combined with a neural **network**. However, these neural **network** analyses were based on immersion or cyclic wet-dry conditions, and such experiments are time-consuming for some complicated coating systems; using an **SOM** neural **network** to study organic coatings with different degrees of breakage would therefore provide a simpler coating-evaluation method.

react according to predefined rules. This problem resembles a classification problem, thus contributing to the machine learning field. In [2] the author states that security is the key requirement for a **network** system. Different soft-computing methods have been proposed over the last decades, but these methods fail to detect attacks that deviate from the examined patterns. In [3] the authors state that PCA is a useful technique for finding patterns in data. Artificial Neural Networks (ANN) therefore came into existence, as they can detect attacks with limited, nonlinear data sources. Intrusion detection is not a straightforward task, so different detection approaches emerged, including the use of ANNs. In real-world environments, intrusion detection poses different problems related to feature selection and classification. In this work we consider a **network** dataset, the NSL-KDD dataset, for our experiments; since this dataset contains a huge number of records, we take only a subset of the test set and a subset of the training set. We apply PCA to select a subset of features from the training dataset. The **SOM** then separates abnormal connections from normal connections. Section I gives the introduction to our contribution, Section II describes intrusion detection and the dataset we considered for our work, Section III presents the proposed methods for anomaly detection, and Section IV presents our results and conclusions.
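The PCA feature-reduction step described above can be sketched as follows (a generic SVD-based projection; the component count and preprocessing are assumptions, not taken from the paper, and the data here is a random stand-in for the NSL-KDD records):

```python
import numpy as np

def pca_reduce(X, k):
    """Project records onto the top-k principal components,
    shrinking the feature set before it is fed to the SOM."""
    Xc = X - X.mean(axis=0)                      # centre each feature
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:k].T                         # k-dimensional representation

# 41 features per record, as in NSL-KDD; the record count is arbitrary here
X = np.random.default_rng(0).random((100, 41))
Z = pca_reduce(X, 10)
```

Components come out ordered by explained variance, so keeping the first `k` columns retains as much of the data's spread as a `k`-dimensional linear projection can.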

This paper deals with 3 different techniques for feature extraction from images. Face detection is a necessary first step in face recognition systems, with the purpose of localizing and extracting the face region from the background. The Self-Organizing Map (**SOM**) Neural **Network** has been used for training on the database and for simulation of the FR system. The developed algorithm for the face recognition system takes an image-based approach, using the discrete wavelet transform (DWT), discrete cosine transform (DCT), and Sobel edge detection, simulated in MATLAB. Simulation results are very promising.

each node. This allows the function to determine whether a neuron can be activated. There are many activation functions that can be used for forecasting models, such as binary step, Sigmoid, Softmax, ReLU, and Tanh. However, the choice of activation function depends entirely on the problem. For example, the Sigmoid function is better suited to binary classification tasks, while the Softmax function is geared towards multi-class classification tasks. The recurring issue for all these systems is gradient drift in the neural **network**. Sometimes the gradients are too steep in a specific direction, and other times they can be too small or zero. This creates a problem for the optimal selection of the learning parameters. The gradients of the activation function are inherently the main issue when using a neural **network**: choosing an unsuitable activation function makes the final forecasting model extremely inaccurate and can have a devastating effect on the overall predictions. We cannot use the step and identity functions, as they are constant or linear: the gradient of a linear function remains constant, so the model cannot adjust as it works in the backward direction during the learning phase, which makes linear functions inefficient in backpropagation networks. Gradients are used for error calculation and optimizing the inputs. When moving backwards through a backpropagated neural **network**, the gradients of the Sigmoid and Tanh functions get smaller and smaller. This makes Sigmoid and Tanh nearly useless as hidden-layer activation functions, as they cause the vanishing gradient problem [12]: no additional improvement occurs once the gradients have shrunk away. To solve this issue, we used ReLU as the activation function for our two hidden layers and the Sigmoid function for the output layer.
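The vanishing-gradient contrast between Sigmoid and ReLU can be seen directly from their derivatives (a small illustration, not the forecasting model itself):

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def sigmoid_grad(x):
    s = sigmoid(x)
    return s * (1.0 - s)          # peaks at 0.25, shrinks toward 0 in the tails

def relu_grad(x):
    return 1.0 if x > 0 else 0.0  # stays 1 for any active unit

# the Sigmoid gradient vanishes for large |x|; ReLU's does not
print(sigmoid_grad(10.0), relu_grad(10.0))
```

Multiplying several such Sigmoid derivatives together across layers (as backpropagation does) drives the product toward zero, which is exactly the vanishing gradient problem the passage describes; ReLU's unit gradient on active paths avoids it.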

The state vector will converge to a fixed point if the energy E decreases. Zeros on the main diagonal of the weight matrix W and asynchronous updating guarantee that the energy function (2) decreases with each step [2, 5]. To ensure convergence to a fixed point, asynchronous updating is essential. Instead of a fixed point, a **network** with a periodic cycle can develop as the terminal state of an attractor if we permit the input vector to be corrected at each iteration.
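The energy function referred to as (2) is not reproduced in this excerpt; the standard Hopfield energy (a hedged reconstruction in the usual notation, with states $s_i$ and weights $w_{ij}$) is

```latex
E = -\tfrac{1}{2} \sum_{i} \sum_{j \neq i} w_{ij}\, s_i s_j
```

With symmetric weights, a zero diagonal ($w_{ii} = 0$), and asynchronous updates, flipping a single state $s_k$ to $\operatorname{sign}\bigl(\sum_j w_{kj} s_j\bigr)$ changes the energy by $\Delta E = -\,\Delta s_k \sum_j w_{kj} s_j \le 0$, which is why the energy decreases at each step and the network settles to a fixed point.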

Step 4: According to the set time interval, acquire in real time the microgrid system's three-phase voltage at the static switch, the three-phase current of the public bus, and the zero-sequence current; the data after wavelet processing form the input feature vector of the trained **SOM** neural **network**. The cause of the fault is then judged from the output state of the **SOM** neural **network** at this time.


of three basic parts for simplicity. The recognition rates under different noise types and the ROC results validate that the NR-**Network** is evidently more effective than other well-known noise-resistant face recognition algorithms. With its hierarchical high-level and low-level feature extraction mechanism, the presented **network** can still work well even at high noise levels based on a single face image. We also analyze the feature size of the approaches compared in our experiments. The final output layer, with just 256 hidden neurons, makes the NR-**Network** rather economical. One should note that we refrain from directly comparing our tailored noise-resistant **network** against other state-of-the-art deep learning models. The main reason is that, to the best of our knowledge, there is no specialized **network** designed to solve the problem of face recognition affected by noise as addressed by our model. One possible direction for future work is to involve sparsity-based models [54], matching-based methods [55], and error-correction-based models [56] to further improve cost effectiveness and robustness.


Abstract: Image enhancement is a preprocessing method for improving image quality. Echocardiography is a critical subject in information mining and machine learning and can be broadly applied in many fields. In this dissertation, a hybrid enhancement method based on the wavelet transform and neural networks is proposed. **SOM**s were used to find the correlation between noised and original DWT coefficients and approximations. Experimental results showed the capability of the proposed method to remove noise in terms of peak signal-to-noise ratio and visual quality. Different architectures and different activation functions were considered. The experimental results show that, compared with traditional enhancement methods, this threshold-based algorithm for mixed digital image enhancement produces relatively clear results, and it performs better especially in cases with extra noise and extra complexity. To achieve a better enhancement effect, the system requires more computation time; in addition, it has not yet produced good results for color digital images. Future work will therefore focus on improving the efficiency of color image enhancement. Finally, the algorithm has the drawback of needing more computing time when a larger hybrid generation is selected; this will be a key problem to solve in the following work.

side and the amount of information exchanged between the MN and neighbor networks, by delegating the user profiles and the computing task among networks. Briefly, the DVHD process is as follows: the MN initiates the handoff process and sends its reference to all available neighbor networks, each of which computes its **Network** Quality Value (NQV). The NQVs are passed back to the MN, which compares all received NQVs and chooses the **network** with the highest NQV as the **network** to which it will redirect its connections.

CNN: A convolutional neural **network** (CNN, or ConvNet) is a type of feed-forward artificial neural **network** characterized by the connectivity pattern between its neurons. The convolutional neural **network** is also known as a shift-invariant or space-invariant artificial neural **network** (SIANN), named for its shared-weights architecture and translation-invariance characteristics. We present an automatic brain tumor segmentation system based on a deep learning convolutional neural **network**. Rather than training the entire neural **network** on a fixed set of features at once, the **network** is trained in several stages, where the result of one stage is passed on to the next and each stage determines its own feature vectors based on the probability of detection. In this way a **network** is built in which each stage has selected its own feature vector. When a brain tumor image is processed by such a **network**, the tumor area can be segmented with extremely high accuracy compared with other systems; the resulting system is very fast, and the time needed to segment an entire brain with it is very short.

The Hopfield **Network** is probably the best-known example of a recurrent neural **network** working as an associative memory [1, 2] for the storage and recall of any type of graphical image pattern. These pattern-storage networks, with their energy surfaces, behave similarly to the associative memory architectures of the brain, where complete information can be recalled given only partial knowledge of its content [3]. The dynamical behavior of the neuron states strongly depends on the synaptic strengths between neurons and the state-updating scheme used [4]. The specification of the synaptic weights is conventionally referred to as learning. Initially, Hopfield used Hebbian learning to determine the **network** weights, and since then a number of learning rules have been suggested to improve its performance [5, 6]. The Hebbian rule has the advantages of being local and incremental: the update of a particular connection depends only on the information available on either side of the connection, and patterns can be added to the **network** incrementally. This learning rule exhibits the following limitations:
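The local, incremental Hebbian (outer-product) rule described above can be sketched as follows (a minimal illustration with bipolar patterns, not the authors' code; the zero diagonal matches the convention noted elsewhere in this section):

```python
import numpy as np

def hebbian_weights(patterns):
    """Hebbian (outer-product) learning for a Hopfield network:
    each bipolar pattern is added incrementally, and the diagonal
    is zeroed so no neuron feeds back onto itself."""
    n = patterns.shape[1]
    W = np.zeros((n, n))
    for p in patterns:
        W += np.outer(p, p)      # local, incremental update
    np.fill_diagonal(W, 0)
    return W / len(patterns)

patterns = np.array([[1, -1,  1, -1],
                     [1,  1, -1, -1]])
W = hebbian_weights(patterns)
# each stored pattern is a fixed point: sign(W @ p) reproduces p
```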


In this paper, we have presented a novel wind power output interval prediction approach that combines an **SOM** neural **network** with a BP neural **network**. First, the relevant parameter data are clustered by the **SOM** neural **network**; this method overcomes the disadvantages of traditional clustering methods. Then we establish the prediction model based on the BP neural **network**. In order to enhance reliability, we consider the interval of each data point: the core idea of our method is to take the upper and lower bounds as the interval of each cluster. The experiment was conducted on one set of measured data, and the results demonstrated the reliability and accuracy of our model.