Top PDF final hidden layer:

Improved Neural Network-based Multi-label Classification with Better Initialization Leveraging Label Co-occurrence

In multi-label text classification, one text can be associated with multiple labels (label co-occurrence) (Zhang and Zhou, 2014). Since label co-occurrence itself contains information, we would like to leverage the label co-occurrence to improve multi-label classification using a neural network (NN). We propose a novel NN initialization method that treats some of the neurons in the final hidden layer as dedicated neurons for each pattern of label co-occurrence. These dedicated neurons are initialized to connect to the corresponding co-occurring labels with stronger weights than to others. While initialization of an NN is an important research topic (Glorot and Bengio, 2010; Sutskever et al., 2013; Le et al., 2015), to the best of our knowledge, there has been no attempt to leverage label co-occurrence for NN initialization.
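A minimal NumPy sketch of this initialization idea, assuming the final hidden layer feeds a weight matrix into the label outputs; the function name, weight magnitudes, and the set-of-labels representation of co-occurrence patterns are illustrative assumptions, not the paper's exact procedure:

```python
import numpy as np

def init_output_weights(n_hidden, n_labels, patterns,
                        strong=1.0, weak_scale=0.01, seed=0):
    """One dedicated hidden unit per label co-occurrence pattern,
    connected more strongly to its co-occurring labels than to others."""
    rng = np.random.default_rng(seed)
    # Baseline: small random weights for all hidden-to-label connections.
    W = rng.normal(0.0, weak_scale, size=(n_hidden, n_labels))
    for unit, labels in enumerate(patterns):
        W[unit, list(labels)] = strong   # stronger links for this pattern's labels
    return W

# Example: 5 hidden units, 4 labels, two co-occurrence patterns seen in training.
W = init_output_weights(5, 4, [{0, 2}, {1, 3}])
```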

Initializing neural networks for hierarchical multi-label text classification

The number of units in the final hidden layer can exceed the number of label co-occurrences in the training data. We must therefore decide what to do with the remaining hidden units. Kurata et al. (2016) assign random values to these units (shown in Figure 3 (B)). We will also use this scheme, but in addition we propose another variant: we assign the value zero to these neurons, so that the hidden layer will only be initialized with nodes that represent label co-occurrence.
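A sketch of the proposed variant under the same illustrative assumptions as the previous snippet: the leftover hidden units get all-zero initial weights instead of random ones, so only the dedicated co-occurrence units carry non-zero connections at the start.

```python
import numpy as np

def init_output_weights_zero(n_hidden, n_labels, patterns, strong=1.0):
    # Leftover (non-dedicated) hidden units start with all-zero weights.
    W = np.zeros((n_hidden, n_labels))
    for unit, labels in enumerate(patterns):
        W[unit, list(labels)] = strong   # dedicated units as before
    return W
```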

Initializing neural networks for hierarchical multi-label text classification

Many tasks in the biomedical domain require the assignment of one or more predefined labels to input text, where the labels are part of a hierarchical structure (such as a taxonomy). The conventional approach is a one-vs.-rest (OVR) classification setup, in which a binary classifier is trained for each label in the taxonomy or ontology, with all instances not belonging to the class treated as negative examples. The main drawbacks of this approach are that dependencies between classes are not leveraged in the training and classification process, and that training parallel classifiers incurs additional computational cost. In this paper, we apply a new method for hierarchical multi-label text classification that initializes the final hidden layer of a neural network model so that it leverages label co-occurrence relations such as hypernymy. This approach elegantly lends itself to hierarchical classification. We evaluated this approach on two hierarchical multi-label text classification tasks in the biomedical domain, using both sentence- and document-level classification. Our evaluation shows promising results for this approach.

The Impact of the Neural Network Structure by the Detection of Undesirable Network Packets

According to the literature [12], [13], a neural network with one hidden layer (a three-layer neural network) and a sufficient number of hidden neurons is capable of approximating any binary or continuous function to the desired accuracy. In our case, we assume nine input neurons and two output neurons. The number of neurons in the input layer is given by the number of parameters describing the communication. We use the neural network for classification into two groups, which in principle requires only one output neuron; for software implementation reasons, however, we use two output neurons. The other parameters of the neural network are as follows.
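A minimal NumPy sketch of such a three-layer network, assuming a tanh hidden layer and a softmax over the two output neurons; the hidden-layer size of 12 is an illustrative assumption (the excerpt leaves it unspecified):

```python
import numpy as np

rng = np.random.default_rng(0)
n_in, n_hidden, n_out = 9, 12, 2   # 9 inputs, 2 outputs as in the text; 12 is assumed

W1 = rng.normal(0, 0.1, (n_in, n_hidden)); b1 = np.zeros(n_hidden)
W2 = rng.normal(0, 0.1, (n_hidden, n_out)); b2 = np.zeros(n_out)

def classify(x):
    h = np.tanh(x @ W1 + b1)            # single hidden layer
    z = h @ W2 + b2                     # one score per group
    z -= z.max()                        # numerical stability
    return np.exp(z) / np.exp(z).sum()  # softmax over the two groups
```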

Traffic flow Prediction with Big Data Using SAE’S Algorithm

An SAE model is created by stacking autoencoders to form a deep network, taking the output of the autoencoder on the layer below as the input of the current layer. For an SAE with l layers, the first layer is trained as an autoencoder, with the training set as input. After the first hidden layer is obtained, the output of the kth hidden layer is used as the input of the (k + 1)th hidden layer; in this way, multiple autoencoders can be stacked hierarchically. This is shown in Fig. 2. To use the SAE network for traffic flow prediction, we need to add a standard predictor on the top layer. In this paper, we put a logistic regression layer on top of the network for supervised traffic flow prediction. The SAEs plus the predictor comprise the whole deep architecture model for traffic flow prediction.
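A compact NumPy sketch of this greedy layer-wise stacking, assuming sigmoid autoencoders trained by plain gradient descent on reconstruction error; the learning rate, epoch count, and function names are illustrative assumptions, and the top-layer logistic regression predictor is omitted:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def train_autoencoder(X, n_hidden, lr=0.1, epochs=50, seed=0):
    """Train one autoencoder on reconstruction error by gradient descent."""
    rng = np.random.default_rng(seed)
    W = rng.normal(0, 0.1, (X.shape[1], n_hidden))   # encoder weights
    V = rng.normal(0, 0.1, (n_hidden, X.shape[1]))   # decoder weights
    for _ in range(epochs):
        H = sigmoid(X @ W)                 # encode
        E = H @ V - X                      # reconstruction error
        gV = H.T @ E / len(X)
        gW = X.T @ ((E @ V.T) * H * (1 - H)) / len(X)
        V -= lr * gV
        W -= lr * gW
    return W

def train_sae(X, layer_sizes):
    """Greedy layer-wise stacking: the k-th hidden layer's output
    becomes the input of the (k+1)-th autoencoder."""
    weights, inp = [], X
    for n_hidden in layer_sizes:
        W = train_autoencoder(inp, n_hidden)
        weights.append(W)
        inp = sigmoid(inp @ W)             # features fed to the next layer
    return weights, inp   # 'inp' would feed the logistic regression predictor
```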

Novel weighting in single hidden layer feedforward neural networks for data classification

The ELM is fast in classification tasks and also achieves high generalization performance compared with most existing methods, such as backpropagation (BP) networks and support vector machines (SVMs), as reported in [14]. Moreover, the experimental results in [14] show that the standard deviations of the results obtained by the ELM algorithm are smaller than those of other methods. Here, our procedure for training the networks is the same as in the theory of the ELM algorithm, with a strong emphasis on attribute weighting and the hidden weights of the SLFNs.
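For reference, a minimal NumPy sketch of standard ELM training for an SLFN, which is the baseline this paper builds its attribute weighting on; the sigmoid activation and uniform weight ranges are common choices, not necessarily the paper's:

```python
import numpy as np

def elm_train(X, Y, n_hidden=100, seed=0):
    """Standard ELM: random, untrained input-side weights; output weights
    solved in closed form by least squares."""
    rng = np.random.default_rng(seed)
    W = rng.uniform(-1, 1, (X.shape[1], n_hidden))   # random input weights
    b = rng.uniform(-1, 1, n_hidden)                 # random hidden biases
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))           # hidden-layer output matrix
    beta, *_ = np.linalg.lstsq(H, Y, rcond=None)     # output weights
    return W, b, beta

def elm_predict(X, W, b, beta):
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))
    return H @ beta
```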

Calculating the turbulent fluxes in the atmospheric surface layer with neural networks

data completely unknown to them; this simulates the situations in which ANNs would be used in climate models (where grid points play the role of stations). To test this, we choose the NL-Cab station for validation and DE-Keh as the unknown station. We selected these two stations because the MOST method performed best for these stations; therefore it is a strong challenge for the ANNs to produce equivalent results. The results of the networks that perform best on the validation set are summarised in Table 4, where we compare the ANNs according to the increasing complexity of their network architecture. For comparison, and in view of reducing CPU time, we also show the results of the best simple networks (as defined in Sect. 2.6) in this table. Table 4 shows that all ANNs perform better than the MOST method on the validation data set (NL-Cab), in terms of the MSE and correlation coefficient (r). Applying these ANNs to the test data set (DE-Keh) results in an increased MSE and a lower correlation coefficient, whereas the MOST method performs better on the test data set. Among the ANNs, the 6–5–3–2 ANN displayed the best test performance with an MSE of 0.68×10⁻², but the simpler 6–3–2 ANN was second best (also in terms of the MSE); thus, simple networks can be almost as good as larger networks. Networks with seven inputs have no substantial advantage over networks with six inputs in our research. ANNs with two hidden layers perform slightly better on the test data than ANNs with a single hidden layer. The overall correlation between network outputs and target values is quite high (r ≥ 0.85) in all cases.
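The architecture labels read input–hidden–…–output sizes, so 6–5–3–2 means six inputs, hidden layers of five and three units, and two outputs. A NumPy sketch of building such nets for comparison; the tanh hidden activation and weight scale are assumptions, as the excerpt does not state them:

```python
import numpy as np

def make_mlp(sizes, seed=0):
    """Weights for a fully connected net; sizes=(6, 5, 3, 2) gives the
    6-5-3-2 ANN, sizes=(6, 3, 2) the simpler 6-3-2 ANN."""
    rng = np.random.default_rng(seed)
    return [(rng.normal(0, 0.1, (m, n)), np.zeros(n))
            for m, n in zip(sizes[:-1], sizes[1:])]

def forward(x, layers):
    for i, (W, b) in enumerate(layers):
        x = x @ W + b
        if i < len(layers) - 1:
            x = np.tanh(x)   # tanh on hidden layers is an assumption here
    return x                 # two linear outputs, one per predicted flux

net = make_mlp((6, 5, 3, 2))
print(forward(np.zeros(6), net))
```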

Neural Network for the Prediction of Retrofitting/Reconditioning/Upgradation cost of CNC Machines

In this section I present the neural network model used for the prediction of the retrofitting/reconditioning/upgradation cost of CNC machines. I used a multilayer neural network with either two hidden layers, h1 and h2, or a single hidden layer, h1; the number of neurons in layers h1 and h2 is determined by the training performance of the network. The activation function used for layer h1 is a linear function, and for layer h2 it is the tan-sigmoid.
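A minimal sketch of the two-hidden-layer variant's forward pass; the linear output layer is an assumption (the excerpt does not state the output activation), and the weight shapes follow whatever neuron counts the training experiments select:

```python
import numpy as np

def predict_cost(x, W1, b1, W2, b2, W3, b3):
    h1 = x @ W1 + b1             # layer h1: linear activation
    h2 = np.tanh(h1 @ W2 + b2)   # layer h2: tan-sigmoid activation
    return h2 @ W3 + b3          # linear output: predicted cost (assumed)
```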

Soft Computation Of Permissible Stresses Of Cold Formed Compression Members Using Back Propagation System

As mentioned in the preceding chapter, the configuration and training of neural networks is a trial-and-error process due to such undetermined parameters as the number of nodes in the hidden layer, the learning parameter, and the number of training patterns. Hence, the I-section of 2.5 mm thickness is chosen so as to gain the experience needed to configure and train the neural network. The parameters used to produce the training data are shown in Table 5.1. Moreover, Young's Modulus is 250000 N/mm².

Artificial Neural Networks Application in Prediction of Water Quality

The ANN always consists of at least three layers: an input layer, a hidden layer, and an output layer. Each layer consists of neurons, and each neuron is connected to the next layer through weights. Neurons in the input layer send their outputs as inputs to the neurons in the hidden layer, and the connection between the hidden and output layers is similar. The number of hidden layers and the number of neurons in each hidden layer change according to the problem to be solved. The numbers of input and output neurons are the same as the numbers of input and output variables [2].
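A minimal NumPy sketch of this layer-to-layer wiring, where each weight matrix holds the connections between consecutive layers; the layer sizes and the tanh activation are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
n_in, n_hidden, n_out = 4, 8, 1    # e.g. 4 input variables, 1 output variable

W_ih = rng.normal(0, 0.1, (n_in, n_hidden))    # input-to-hidden connection weights
W_ho = rng.normal(0, 0.1, (n_hidden, n_out))   # hidden-to-output connection weights

def forward(x):
    h = np.tanh(x @ W_ih)   # hidden neurons receive the input layer's outputs
    return h @ W_ho         # output neurons receive the hidden layer's outputs
```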

The Principled Design of Large-Scale Recursive Neural Network Architectures–DAG-RNNs and the Protein Structure Prediction Problem

Finally, the six-layered DAG-RNN architectures used to process 2D contact maps may shed some broader light on neural-style computations in multi-layered systems, including their distant biological relatives. First, preferential directions of propagation can be used in each hidden layer to integrate context along multiple cardinal directions. Second, the computation of each visible output requires the computation of all hidden outputs within the corresponding column. Thus the final output converges to its correct value first in the center of an output sheet, and then progressively propagates towards the boundaries. Third, weight sharing is unlikely to be exact in a physical implementation, and the effect of its fluctuations ought to be investigated. In particular, additional, but locally limited, degrees of freedom may provide increased flexibility without substantially increasing the risk of overfitting. Finally, in the 2D DAG-RNN architectures, lateral propagation is massive. This stands in sharp contrast with conventional connectionist architectures, where the primary focus has remained on the feedforward and sometimes feedback pathways, with lateral propagation used for mere lateral inhibition or “winner-take-all” operations.

Optimizing the Multilayer Feed Forward Artificial Neural Networks Architecture and Training Parameters using Genetic Algorithm

Another application of genetic algorithms is to search for optimal hidden layer architectures, connectivity, and training parameters of an ANN for predicting community-acquired pneumonia among patients with respiratory complaints. A feedforward ANN that uses the backpropagation algorithm, with 35 nodes in the input layer, one node in the output layer, and between 0 and 15 nodes in each of 0, 1, or 2 hidden layers, is determined by the developed genetic algorithm. The neural network structure and training parameters are represented by haploid chromosomes consisting of ‘‘genes’’ of binary numbers. Each chromosome has five genes. The first two genes are 4-bit binary numbers representing the numbers of nodes in the first and second hidden layers of the network, each of which can range from 0 to 15 nodes. The third and fourth genes are 2-bit binary numbers representing the learning rate and momentum with which the network is trained, each of which can assume the discrete values 0.01, 0.05, 0.1, or 0.5. The fifth gene is a 1-bit binary number representing whether implicit within-layer connectivity using the competition algorithm is used [16].
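A small sketch of decoding such a 13-bit chromosome into network and training parameters, following the five-gene layout described above; the exact bit ordering used in [16] is an assumption:

```python
def decode_chromosome(bits):
    """Decode a 13-bit chromosome string: 4 bits per hidden-layer size,
    2 bits each for learning rate and momentum, 1 connectivity bit."""
    values = [0.01, 0.05, 0.1, 0.5]         # discrete lr/momentum values
    h1 = int(bits[0:4], 2)                  # nodes in hidden layer 1 (0-15)
    h2 = int(bits[4:8], 2)                  # nodes in hidden layer 2 (0-15)
    lr = values[int(bits[8:10], 2)]         # learning rate
    momentum = values[int(bits[10:12], 2)]  # momentum
    lateral = bits[12] == '1'               # within-layer connectivity flag
    return h1, h2, lr, momentum, lateral

# Example: 7 nodes in h1, no second hidden layer, lr 0.05, momentum 0.5.
print(decode_chromosome('0111000001110'))
```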

Artificial neural network modeling for predicting final moisture content of individual Sugi (Cryptomeria japonica) samples during air-drying

As one of the approaches to addressing these problems, the cascade-correlation learning algorithm was developed by Fahlman and Lebiere (1991) [16] and showed significant improvements. Cascade-correlation is a method of incrementally adding processing elements. Instead of adjusting the weights in an ANN of fixed topology, cascade-correlation begins with a minimal network, then automatically trains and adds new hidden units one by one, creating a multi-layer structure. Once a new hidden unit has been added to the ANN, its input-side weights are frozen. This unit then becomes a permanent feature detector in the ANN, available for producing outputs or for creating other, more complex feature detectors. The NeuralWorks Predict (NWP) software (NeuralWare Inc., Pittsburgh, PA, USA), which implements the cascade-correlation learning algorithm, was used in this study. NWP outperforms other neural network tools in that it also builds ANNs with a clever strategy of stopping rules against over-fitting on empirical data. Moreover, NWP applies nonlinear transformations to the input variables and produces input neurons for each transformation in advance of the learning process, to avoid a complex representation of the model. The types of transformation used include linear (scaling), log, log-log, exponential, exponential of exponent, square root, square, inverse, inverse of square root, inverse of square, and so on, depending on the complexity of the problem [17]. NWP also uses a genetic algorithm to make a suitable choice of input variables from the set of all input variables and their transformations [17], since it efficiently explores the large space of subsets of possible input variables.
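A simplified NumPy skeleton of the cascade-correlation growth loop. It is a sketch under stated assumptions, not the NWP implementation: Fahlman and Lebiere train candidate units by gradient ascent on the correlation with the residual error, whereas here a random candidate pool stands in for that inner training, and the output layer is refit by least squares.

```python
import numpy as np

def fit_outputs(H, Y):
    """(Re)train output weights on the current feature set by least squares."""
    W, *_ = np.linalg.lstsq(H, Y, rcond=None)
    return W

def cascade_correlation(X, Y, n_units=5, pool=20, seed=0):
    rng = np.random.default_rng(seed)
    H = np.hstack([X, np.ones((len(X), 1))])    # start minimal: inputs + bias
    frozen = []
    for _ in range(n_units):
        W_out = fit_outputs(H, Y)
        residual = Y - H @ W_out                # current network error
        # Candidate units see all existing features; keep the one whose
        # output correlates most with the residual error, then freeze it.
        C = rng.normal(0, 1, (H.shape[1], pool))
        A = np.tanh(H @ C)
        score = np.abs((A - A.mean(0)).T @ (residual - residual.mean(0)))
        score = score.reshape(pool, -1).sum(axis=1)
        best = C[:, score.argmax()]
        frozen.append(best)                     # input-side weights frozen forever
        H = np.hstack([H, np.tanh(H @ best)[:, None]])  # permanent feature detector
    return frozen, fit_outputs(H, Y)
```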

Development of a Neural-Networks Tool-Wear Monitoring System for a Turning Process

The number of neurons in the hidden layer was determined by experiments comparing the network performances with different numbers of neurons in the hidden layer. During the experiment, networks were tested with two to seven neurons in the hidden layer, and for every topology several trainings with the same training set were performed so that the performance of every topology could be estimated as objectively as possible. Networks with a small number of neurons (two and three) in the hidden layer did not give satisfactory results, which can be attributed to an insufficiently rich network structure implying a small capacity for function approximation. Networks with five or more neurons in the hidden layer successfully approximated the input-output dependence, so any of those topologies was appropriate for implementation. In selecting the final topology, a general guideline was followed, namely that the total number of neurons in the neural network should be as small as possible, since this increases the generalization abilities of the network and avoids "over-fitting". Considering all of the above, a network with five neurons in the hidden layer was selected for the final network structure.
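A sketch of this selection experiment as a loop; `train` and `evaluate` are placeholders for the paper's (unspecified) training and testing procedures, and the repeat count is an assumption:

```python
import numpy as np

def select_hidden_size(train, evaluate, sizes=range(2, 8), repeats=5):
    """For each candidate hidden-layer size, repeat training on the same
    training set and average a performance score per topology."""
    mean_error = {}
    for n in sizes:
        errors = [evaluate(train(n_hidden=n)) for _ in range(repeats)]
        mean_error[n] = np.mean(errors)
    # Final choice: the smallest size whose error is already satisfactory,
    # keeping the network small to favour generalization.
    return mean_error
```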

Neural Network in Image Compression

The capabilities of the single-layer perceptron are limited to linear decision boundaries and simple logic functions. However, by cascading perceptrons in layers, we can implement complex decision boundaries and arbitrary Boolean expressions. Perceptrons in the network are called neurons or nodes and differ from the Rosenblatt perceptron in the activation function used. The output of each layer feeds into each of the second-layer perceptrons, and so on. Often, nodes are fully connected between layers, i.e., every node in the first layer is connected to every node in the next layer. Referring to Figure 1.10, the multiple nodes in the output layer typically correspond to the multiple classes of a multiclass pattern recognition problem.
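A classic illustration of this point is XOR, which no single perceptron can compute but a two-layer cascade can. A small NumPy sketch with hand-chosen weights; the OR/AND decomposition is one standard construction, not taken from the excerpt:

```python
import numpy as np

step = lambda z: (z >= 0).astype(int)   # threshold activation

def xor(x1, x2):
    """XOR via two cascaded perceptron layers: first compute OR and AND,
    then output 'OR and not AND'."""
    x = np.array([x1, x2])
    h = step(np.array([[1, 1], [1, 1]]) @ x - np.array([0.5, 1.5]))  # [OR, AND]
    return step(np.array([1, -1]) @ h - 0.5)

for a in (0, 1):
    for b in (0, 1):
        print(a, b, xor(a, b))   # prints the XOR truth table
```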

A LVQ-like Network Approach for Classification

An iterative self-constructing clustering algorithm is used to determine the number of hidden nodes in the hidden layer. Data are described by clusters with appropriate centers and deviations.

Novel control of a high performance rotary wood planing machine

rotational speed increases, the general trend is for the value of Q to increase also, and in general the smaller network, with 20 hidden neurons in the hidden layer, trained [r]

1. Performance evaluation of the neural network diagnostic system for the re-emerging arboviral infection -dengue

The second experiment is done using an Artificial Neural Network with one hidden layer and the default ten neurons in the hidden layer, with Bayesian regularization backpropagation. trainbr is a network training function that updates the weight and bias values according to Levenberg-Marquardt optimization [8]. It minimizes a combination of squared errors and weights, and then determines the correct combination so as to produce a network that generalizes well.

Radon Neural Network for Biomedical Image Processing

FEEDFORWARD NEURAL NETWORK: The feedforward neural network was the first and simplest type of artificial neural network devised. In this network, the information moves in only one direction, forward, from the input nodes, through the hidden nodes (if any), to the output nodes. There are no cycles or loops in the network. In the present work we use a feedforward neural network with the backpropagation algorithm, employing the Levenberg-Marquardt (LM) algorithm as the backward propagation algorithm.

Radial basis function neural network for software engineering measures: A survey

These processing units are connected in a definite, layered manner; such a layer structure is called a neural network architecture. It is most widely used for optimization problems, and it can have multiple layers of processing units arranged in a feed-forward manner. The neural network is used as a predictor that computes the formal model parameters and discovers the process itself. Note also that back-error propagation is the most commonly used neural network training method and has been applied effectively in application studies across a wide range of areas.
