The Littoral Acoustic Demonstration Center has collected passive acoustic monitoring data in the northern Gulf of Mexico since 2001. Recordings made in 2007 near the Deepwater Horizon site provide a baseline for an extensive study of regional marine mammal populations in response to the disaster. Animal density estimates can be derived from detections of echolocation signals in the acoustic data. Beaked whales are of particular interest because they remain one of the least understood groups of marine mammals, and relatively few abundance estimates exist. Efficient methods for classifying detected echolocation transients are essential for mining long-term passive acoustic data. In this study, three data clustering routines, using k-means, self-organizing maps, and spectral clustering, were tested with various features of the detected echolocation transients. Several methods effectively isolated the echolocation signals of regional beaked whales at the species level. Feedforward neural network classifiers were also
Despite attracting a significant amount of research interest, a precise characterization of adversarial examples remains elusive. In this paper, we derive lower bounds on the norms of adversarial perturbations in terms of the model parameters of feedforward neural network classifiers consisting of convolutional layers, pooling layers, fully-connected layers and softmax layers. The bounds can be computed efficiently and thus may serve as an aid in model selection or in the development of methods to increase the robustness of classifiers. They enable one to assess the robustness of a classifier without running extensive tests, so they can be used to compare different models and quickly select the one with the highest robustness. Furthermore, the bounds enjoy a theoretical guarantee that no adversarial perturbation can ever be smaller, so methods which increase these bounds may make classifiers more robust. We tested the validity of our bounds on MNIST and CIFAR-10 and found no violations. Comparisons with adversarial perturbations generated using the fast gradient sign method suggest that these bounds can be close to the actual norms in the worst case.
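For context, the fast gradient sign method used in the comparisons can be sketched as below. This is a generic NumPy illustration on a plain linear softmax classifier, not the convolutional models analyzed in the paper; the toy dimensions and ε are illustrative.

```python
import numpy as np

def fgsm_perturbation(x, y, W, b, eps):
    """Fast gradient sign method on a linear softmax model: move each
    input coordinate by eps in the direction that increases the
    cross-entropy loss for the true class y."""
    logits = W @ x + b
    logits = logits - logits.max()              # numerical stability
    p = np.exp(logits) / np.exp(logits).sum()   # softmax probabilities
    p[y] -= 1.0                                 # d(loss)/d(logits) = p - onehot(y)
    grad_x = W.T @ p                            # chain rule back to the input
    return x + eps * np.sign(grad_x)

# Toy usage: a 3-class model on a 4-dimensional input.
rng = np.random.default_rng(0)
W, b = rng.normal(size=(3, 4)), rng.normal(size=3)
x = rng.normal(size=4)
x_adv = fgsm_perturbation(x, y=1, W=W, b=b, eps=0.1)
print(np.max(np.abs(x_adv - x)))  # infinity-norm of the perturbation, at most eps
```

A lower bound of the kind derived in the paper would certify that no perturbation with norm below the bound can change the predicted class, which is what such FGSM perturbations are compared against.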
Paulraj et al. [1, 2] developed a moving vehicle recognition and classification system based on a time-domain approach, with a probabilistic neural network (PNN) and multi-classifier systems developed to classify the vehicle type and its distance. Nooralahiyan and Kirby applied a directional microphone connected to a DAT (Digital Audio Tape) recorder. The digital signal was pre-processed by LPC (Linear Predictive Coding) parameter conversion based on autocorrelation analysis. A Time Delay Neural Network (TDNN) was chosen to classify individual travelling vehicles, based on their speed-independent acoustic signatures, into four broad categories: buses or lorries, small or large saloons, various types of motorcycles, and light goods vehicles or vans. Michael N. J. and Andrew Woodward compared two machine learning algorithms, artificial neural networks (ANN) and naïve Bayesian classifiers (NBC), applied to audio samples captured from vehicle engine sounds. The same wavelet-based feature extraction was also used by Amir Averbuch [5, 6] for classification and detection of vehicle types.
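LPC parameter conversion based on autocorrelation analysis is typically computed with the Levinson-Durbin recursion; a minimal sketch follows (an illustration of the standard method, not the cited authors' implementation, and the decaying test signal is made up).

```python
import numpy as np

def lpc_autocorrelation(signal, order):
    """Estimate LPC coefficients a[0..order] (with a[0] = 1) by running
    the Levinson-Durbin recursion on the autocorrelation sequence."""
    n = len(signal)
    r = np.array([signal[:n - k] @ signal[k:] for k in range(order + 1)])
    a = np.zeros(order + 1)
    a[0] = 1.0
    err = r[0]                                   # prediction error power
    for i in range(1, order + 1):
        acc = r[i] + a[1:i] @ r[i - 1:0:-1]      # sum_j a[j] * r[i - j]
        k = -acc / err                           # reflection coefficient
        a_prev = a.copy()
        for j in range(1, i):
            a[j] = a_prev[j] + k * a_prev[i - j]
        a[i] = k
        err *= 1.0 - k * k
    return a, err

# An exponentially decaying signal is well modeled by a first-order predictor.
a, err = lpc_autocorrelation(0.5 ** np.arange(50), order=1)
print(np.round(a, 3))  # approximately [ 1.  -0.5]
```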
The flowchart of the ANFIS procedure is shown in Figure 4. ANFIS distinguishes itself from ordinary fuzzy logic systems by its adaptive parameters, i.e., both the premise and consequent parameters are adjustable. The most remarkable feature of ANFIS is its hybrid learning algorithm, which divides the adaptation of the parameters into two steps. In the first step, the consequent parameters are trained with the least squares (LS) method, because the output of the ANFIS is a linear combination of the consequent parameters; the premise parameters are held fixed at this step. After the consequent parameters have been adjusted, the approximation error is back-propagated through every layer to update the premise parameters in the second step. This part of the adaptation procedure is based on the gradient descent principle, the same as in the training of a BP neural network. The consequent parameters identified by the LS method are optimal in the least-squares sense under the condition that the premise parameters are fixed.
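The first (least squares) step can be sketched as follows for a one-input, first-order Sugeno model with Gaussian membership functions; the rule centers and widths stand in for the fixed premise parameters. This is an illustrative sketch, not the ANFIS implementation from the text.

```python
import numpy as np

def consequent_ls_step(x, y, centers, widths):
    """One hybrid-learning LS step: with the Gaussian premise parameters
    (centers, widths) held fixed, the ANFIS output is linear in the
    first-order consequent parameters (p_i, r_i) of each rule
    f_i = p_i * x + r_i, so they are identified in one least-squares solve."""
    # Firing strengths of each rule, then normalization (layers 1-3)
    w = np.exp(-((x[:, None] - centers[None, :]) ** 2)
               / (2.0 * widths[None, :] ** 2))
    wbar = w / w.sum(axis=1, keepdims=True)
    # Output = sum_i wbar_i * (p_i * x + r_i): linear in theta = (p, r)
    A = np.hstack([wbar * x[:, None], wbar])
    theta, *_ = np.linalg.lstsq(A, y, rcond=None)
    return theta, A @ theta

# A globally linear target is fit exactly by the consequent parameters alone.
x = np.linspace(0.0, 1.0, 20)
y = 2.0 * x + 1.0
theta, y_hat = consequent_ls_step(x, y, np.array([0.25, 0.75]),
                                  np.array([0.3, 0.3]))
print(np.max(np.abs(y_hat - y)))  # essentially zero
```

In a full hybrid iteration, this solve would be followed by the gradient-descent update of the centers and widths described above.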
An automatic skin cancer classification system is developed, and the relationship of skin cancer images across different types of neural networks with different types of pre-processing is studied. The collected images are fed into the system and pass through different image processing procedures to enhance the image properties. Then the normal skin is removed from the affected area, leaving the cancerous region in the image. Useful information can be extracted from these images and passed to the classification system for training and testing. The recognition accuracy of the 3-layer back-propagation neural network classifier is 89.9% and that of the auto-associative neural network is 80.8% on an image database that includes dermoscopy and digital photographs.
The optimum network was selected as the one with the smallest cross-validation error. Table 1 shows that the Elman Network, with a validation error of 0.0023 and 18 hidden nodes, was optimal compared to the Feedforward Network with a validation error of 0.0024. Although the difference in validation error between these two networks was only 0.0001, the testing error of the Elman Network was also lower: 0.0033 compared to 0.0040 for the Feedforward Network, a difference of 0.0007. Therefore, the testing performance of the Feedforward Network in Figure 4(a) is worse than that of the Elman Network.
III. EXPERIMENTAL RESULTS AND ANALYSIS The image datasets were implemented in Matlab 2009a for BPN, FFNN and MLPN, then tested and compared. Each algorithm was trained and tested on each dataset under the same model (kernel with the corresponding parameters) to allow a like-for-like comparison. The feedforward BPN, formed by generalizing the Widrow-Hoff learning rule to multiple-layer networks with nonlinear differentiable transfer functions, is implemented with a learning rate of 0.5 and a momentum factor of 0.95. The activation function maps the output of the summing junction to the final output. A value of less than 0.5 is labeled 0, and the network classifies the input image features as benign; a value greater than 0.5 is labeled 1, and the network classifies the input image features as malign. The accuracies of all classifiers on each dataset were calculated under the same validation scheme, i.e., the same validation method and the same data realizations.
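The decision rule described above can be sketched as a forward pass with a sigmoid output unit thresholded at 0.5; the layer sizes and weights below are illustrative placeholders, not the trained BPN.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def classify_image_features(features, W1, b1, W2, b2, threshold=0.5):
    """Forward pass through one hidden layer with a nonlinear differentiable
    transfer function, then threshold the sigmoid output: label 0 (benign)
    below the threshold, label 1 (malign) at or above it."""
    hidden = np.tanh(W1 @ features + b1)
    output = sigmoid(W2 @ hidden + b2)   # a value in (0, 1)
    return int(output >= threshold)

# Toy usage with random (untrained) weights on a 5-dimensional feature vector.
rng = np.random.default_rng(1)
W1, b1 = rng.normal(size=(3, 5)), rng.normal(size=3)
W2, b2 = rng.normal(size=3), 0.0
label = classify_image_features(rng.normal(size=5), W1, b1, W2, b2)
print("malign" if label else "benign")
```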
Abstract This paper presents a methodology for short-term load forecasting based on genetic algorithm feature selection and artificial neural network modeling. A feedforward artificial neural network is used to model the 24-h-ahead load based on past consumption, weather and stock index data. A genetic algorithm is used to find the best subset of variables for modeling. Three datasets from different geographical locations, encompassing areas of different dimensions with distinct load profiles, are used to evaluate the methodology. The developed approach was found to generate models achieving a minimum mean average percentage error under 2%. The feature selection algorithm was able to significantly reduce the number of features used and increase the accuracy of the models.
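A genetic algorithm over feature bitmasks of the kind described above can be sketched as follows; the encoding (one bit per candidate variable), the simple truncation selection and bit-flip mutation, and the least-squares surrogate model are all illustrative choices, not the paper's configuration.

```python
import numpy as np

def ga_feature_selection(X, y, generations=30, pop_size=20, seed=0):
    """Each chromosome is a boolean mask over the columns of X; fitness is
    the validation MSE of a least-squares model fit on the selected columns.
    Truncation selection keeps the best half; children are bit-flip mutants."""
    rng = np.random.default_rng(seed)
    n_tr = len(y) // 2                           # simple train/validation split

    def fitness(mask):
        if not mask.any():
            return np.inf                        # empty subsets are invalid
        Xs = X[:, mask]
        w, *_ = np.linalg.lstsq(Xs[:n_tr], y[:n_tr], rcond=None)
        return float(np.mean((Xs[n_tr:] @ w - y[n_tr:]) ** 2))

    pop = rng.integers(0, 2, size=(pop_size, X.shape[1])).astype(bool)
    for _ in range(generations):
        order = np.argsort([fitness(m) for m in pop])
        elite = pop[order[: pop_size // 2]]
        children = elite ^ (rng.random(elite.shape) < 0.1)  # bit-flip mutation
        pop = np.vstack([elite, children])
    return pop[np.argmin([fitness(m) for m in pop])]

# Toy usage: only the first two of six candidate variables drive the target.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 6))
y = X[:, 0] + 2.0 * X[:, 1] + 0.01 * rng.normal(size=100)
mask = ga_feature_selection(X, y)
print(mask)  # the relevant features 0 and 1 should be selected
```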
Neural network classifiers. The limitations of feature engineering motivate classification methods that can implicitly discover relevant features. Badjatiya et al. (2017) and Gambäck and Sikdar (2017) were the first to use recurrent neural networks (RNNs) and convolutional neural networks (CNNs), respectively, for hate speech detection in tweets. A comprehensive comparative study by Zhang et al. (2018) used a combined CNN and gated recurrent unit (GRU) network to outperform the state of the art on 6 out of 7 publicly available hate speech datasets by 1-13 F1 points. The authors hypothesize that CNN layers capture co-occurring word n-grams, but they do not perform an analysis of the features that their model actually captures. Deep learning classifiers have also been explored for related tasks such as personal attacks and user comment moderation (Wulczyn et al., 2017; Pavlopoulos et al., 2017). Pavlopoulos et al. (2017) propose an RNN model with a self-attention mechanism, which learns a set of weights to determine the words in a sequence that are most important for classification.
In this paper, we investigate a novel DSS control strategy: an adaptive controller with NN feedforward compensation, and propose a novel adaptive law for online NN weight learning to improve the convergence performance. We first transform the existing generalized DSS framework into a modified framework, so that the DSS control design can be considered as a regulation problem with measured disturbance rejection. With this observation, an adaptive NN compensator can be designed and superimposed upon a pre-designed linear two-degree-of-freedom (DOF) DSS controller to achieve improved synchronization. The linear feedback controller is employed to guarantee the stability of the closed-loop system, while the NN provides an extra feedforward compensation action to cope with uncertainties and nonlinearities. The salient feature of the proposed method is that the NN feedforward control design does not require any information about the plant, and the effect of system modeling uncertainties can also be diminished through the NN feedforward compensation. Experimental results on a QM DSS test rig demonstrate the superior performance of NN feedforward compensation over a linear feedforward controller alone. The NN compensation strategy was also generalized from the single-input, single-output (SISO) case to generic multiple-input, multiple-output (MIMO) cases, which makes it possible to apply this strategy to the coupled multivariable DSS control problem. In particular, in contrast to our previous work, we also propose a novel adaptive law for NN weight learning beyond the conventional e-modification or σ-modification, where an appropriate weight error between the ideal weights and their estimates is derived and used to update the NN weights so as to further improve the overall performance.
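For contrast with the proposed law, the conventional σ-modification mentioned above can be sketched in discrete time as below. This is a textbook form with illustrative gains and a made-up tracking loop; the paper's novel weight-error-based law is not reproduced here.

```python
import numpy as np

def sigma_mod_update(W, phi, e, gamma=0.2, sigma=0.01):
    """One discrete-time sigma-modification step: drive the output-layer
    weights W along the regressor-error product phi * e, with a small
    leakage term -sigma * W that keeps the weight estimates bounded."""
    return W + gamma * (phi * e - sigma * W)

# Toy usage: adapt W so that the NN output W @ phi tracks a fixed target.
rng = np.random.default_rng(0)
phi = rng.normal(size=4)
phi /= np.linalg.norm(phi)                  # unit-norm regressor
target = np.array([1.0, -2.0, 0.5, 3.0]) @ phi
W = np.zeros(4)
e0 = target - W @ phi
for _ in range(200):
    e = target - W @ phi                    # tracking error
    W = sigma_mod_update(W, phi, e)
print(abs(target - W @ phi))  # error shrinks, up to a small leakage bias
```

The leakage term trades a small steady-state bias for robustness, which is exactly the limitation a weight-error-based law aims to overcome.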
Technological advances and the enormous flood of papers have motivated many researchers and companies to innovate new technologies. In particular, handwriting recognition is a very useful technology to support applications like electronic books (eBooks), postcode readers (which sort mail in post offices), and some bank applications. This paper proposes three systems to discriminate handwritten graffiti digits (0 to 9) and some commands, with different architectures and abilities. It introduces three classifiers, namely a single neural network (SNN) classifier, a parallel neural networks (PNN) classifier and a tree-structured neural network (TSNN) classifier. The three classifiers have been designed by adopting feedforward neural networks. In order to optimize the network parameters (connection weights), the back-propagation algorithm has been used. Several architectures are applied and examined to present a comparative study of these three systems from different perspectives. The research focuses on examining their accuracy, flexibility and scalability. The paper presents an analytical study of the impacts of three factors on the accuracy of the systems and the behavior of the neural networks: the number of hidden neurons, the choice of activation functions and the learning rate. Future directions have also been considered in this paper by designing particularly flexible systems that allow many more classes to be added in the future without retraining the current neural networks.
Out of the 10 speech samples recorded for each speaker, 5 samples are taken for feature extraction and training, i.e., the input data size for training the network is 200 × 65 for a cluster size of 5. The ANN is designed and trained for input cluster sizes of 10, 8, 5 and 4. The False Acceptance Rate (FAR) and False Rejection Rate (FRR) of the system for the various cluster sizes are shown in Fig. 8 and Fig. 9. The Equal Error Rate (EER) corresponding to each cluster size is shown in Table 5. It is seen that the minimum EER is obtained for a cluster size of 5. Hence a cluster size of 5 can be considered the best option for recognition, as the EER is approximately 0.0382 within a matching threshold of 0.152-0.167. Thus the maximum genuine acceptance rate, or recognition rate, achieved for this system is 96.18%. The mean-squared differences (using a minimum distance classifier) between the testing and training vectors for the various cluster sizes were also calculated; the EER obtained with a cluster size of 5 is about 0.1662, the minimum among all the clusters, so only an 83.38% recognition rate can be obtained using the direct method based on a minimum distance classifier. The comparison of the neural network based and minimum distance based classifiers is shown in Table 6, from which it can be concluded that the neural network method can be adopted for speaker identification with minimum error, as the EER is only 3.82% for a cluster size of 5.
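An EER of the kind reported in Table 5 is read off the FAR/FRR curves at the threshold where the two error rates cross; a minimal sketch on made-up monotone curves:

```python
import numpy as np

def equal_error_rate(far, frr, thresholds):
    """Return the equal error rate and its matching threshold: the operating
    point where |FAR - FRR| is smallest, with the EER taken as the mean of
    the two rates there."""
    i = int(np.argmin(np.abs(far - frr)))
    return (far[i] + frr[i]) / 2.0, thresholds[i]

# Illustrative curves: FAR falls and FRR rises as the threshold tightens.
thresholds = np.linspace(0.0, 1.0, 101)
far = 1.0 - thresholds
frr = thresholds
eer, t = equal_error_rate(far, frr, thresholds)
print(eer, t)  # EER 0.5 at threshold 0.5 for these symmetric toy curves
```

The recognition rate quoted above is then 100 × (1 − EER); e.g., an EER of 0.0382 gives 96.18%.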
The GNN model is developed using Matlab code. The network is trained and tested to match a subgraph on 5 nodes with graphs of 6 to 10 nodes. Graphs are generated randomly with a fixed number of nodes (n). Each pair of nodes in the graph is connected with some probability (p = 0.2). Each graph is checked for connectivity; if it is not connected, random edges between non-adjacent nodes are added until the graph becomes connected. A random graph H on 5 nodes is selected to be matched with the generated graph G, and the vertices of H are labeled randomly from 20 to 30. The generated graph G may or may not contain the subgraph H, so a copy of H is included in the generated graph G to assure the existence of the subgraph in G. A graph G may contain several copies of H; they are identified by considering all possible C(n, m) combinations of the nodes in G. The sub-
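The generation step described above can be sketched as follows (a plain-Python illustration of the procedure, not the authors' Matlab code):

```python
import random

def random_connected_graph(n, p=0.2, seed=None):
    """Generate an Erdos-Renyi style graph on n nodes where each pair is
    connected with probability p; if the result is disconnected, add random
    edges between nodes in different components until it is connected."""
    rng = random.Random(seed)
    edges = {(i, j) for i in range(n) for j in range(i + 1, n)
             if rng.random() < p}

    def components():
        adj = {v: set() for v in range(n)}
        for i, j in edges:
            adj[i].add(j)
            adj[j].add(i)
        seen, comps = set(), []
        for v in range(n):
            if v not in seen:
                stack, comp = [v], set()
                while stack:                 # depth-first search
                    u = stack.pop()
                    if u not in comp:
                        comp.add(u)
                        stack.extend(adj[u] - comp)
                seen |= comp
                comps.append(comp)
        return comps

    comps = components()
    while len(comps) > 1:                    # repair connectivity
        a = rng.choice(sorted(comps[0]))
        b = rng.choice(sorted(comps[1]))
        edges.add((min(a, b), max(a, b)))
        comps = components()
    return edges

# Toy usage: an 8-node graph, guaranteed connected.
print(sorted(random_connected_graph(8, p=0.2, seed=3)))
```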
For the maze exploration experiment the results are encouraging: a neural network of any of the three types is able to develop the exploration behavior. The trained network is able to control the robot in a previously unseen environment. Typical behavioral patterns, like following the right wall, have been developed, which in turn resulted in very efficient exploration of an unknown maze. The best results achieved by the network architectures are quite comparable, with the simpler perceptron networks (such as the 5-hidden-unit perceptron) marginally outperforming the Elman and RBF networks.
The main part of the proposed system is the master data manager module, or admin module. The administrator controls the whole system, managing the various credit card types, the credit card company, vendor management and dataset management. Initially a user registers with the system, after which that user can request a credit card. Depending on the user's financial status, the administrator can either accept or deny the request. Various credit card types, like plain vanilla, are listed in this section, and the customer can select a particular card with the proper credit limit and interest limit; these limits vary according to the standard of the card. The credit card company section includes the various banks that provide money and also suggests their approved cards. In vendor management, various vendors can offer their particular products and services; here various shopping companies and water and electricity services are coded as vendors. In dataset management, the transactions occurring in the payment part are converted into a dataset. Apart from that, a real dataset is also uploaded for fraud detection. The dataset is first converted into an ARFF file, after which machine learning algorithms, namely Naïve Bayes, Decision Tree, Random Forest and Convolutional Neural Network, are applied. A comparison of these algorithms is made based on precision, recall and accuracy values. In addition, to improve fraud detection performance, the Adaptive Boosting algorithm is also used. After this, a Majority Voting algorithm is used as a combination of two algorithms, where each classifier makes its own prediction. An application for a credit card is processed, and the company can determine whether a card should be provided to the person concerned.
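The majority-voting combination mentioned above can be sketched in a few lines; the class labels below are illustrative, not from the system's real dataset.

```python
from collections import Counter

def majority_vote(predictions):
    """Combine classifier outputs by majority voting: each classifier casts
    its own prediction for a transaction and the label with the most votes
    wins (ties broken by first-seen order)."""
    return Counter(predictions).most_common(1)[0][0]

# e.g. three base classifiers each label one transaction
print(majority_vote(["fraud", "legit", "fraud"]))  # → fraud
```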
Neural networks take a different approach to solving a problem compared to conventional computers. Conventional computers use an algorithmic approach: the computer follows a set of instructions in order to solve the problem. Unless we know the specific steps the computer needs to follow, the computer cannot solve the problem. This restricts the problem-solving ability of conventional computers to problems that we already understand and know how to solve. But computers would be much more useful if they could do things that we don't know exactly how to do.
Cervical cancer is one of the most threatening diseases of Indian women. According to the ICMR, an institute for cancer prevention and research, nearly 122,844 women are affected by cervical cancer, and of those nearly 67,477 became victims. This malignant disease develops in the cells of the cervix, on the neck of the uterus. However, it can be prevented and/or cured if it is diagnosed at an early stage. Due to the complexity of the cell nature, automating this procedure remains a continuous problem. Various algorithms and methodologies have been proposed by various researchers, under various situations, for segmenting and classifying cancer cells at an early stage into different categories. In this paper, various research papers related to early prediction of cervical cancer are analyzed. This paper discusses machine learning algorithms like GLCM (Gray Level Co-occurrence Matrix), SVM (Support Vector Machines), k-NN (k-Nearest Neighbours), CNNs (Convolutional Neural Networks), MARS (Multivariate Adaptive Regression Splines), PNNs (Probabilistic Neural Networks), spatial fuzzy clustering algorithms, Genetic Algorithms, C5.0, RFT (Random Forest Trees), hierarchical clustering algorithms and CART (Classification and Regression Trees) for feature extraction, cell segmentation and classification. The proposed work compares the merits and demerits of the different algorithms that obtain good accuracy in classifying cervical cancer cells using machine learning.
This research focuses on improving indoor localization using wireless networks and an artificial neural network (ANN). It involves a strategic study of wireless signal behavior and propagation inside buildings, a suitable propagation model to simulate indoor propagation, and evaluations of different localization methods such as distance based, direction based, time based and signature based. It has been identified that indoor signal propagation impairments are severe, non-linear and specific to a given indoor location. To accommodate these impairments, an ANN is proposed as a viable solution for indoor location prediction, as it learns the location-specific parameters during training and then performs positioning based on the trained data, while being robust to severe and non-linear propagation effects. The versatility of the ANN allows different setup and optimization possibilities that affect location prediction capability. This research identified the best feedforward backpropagation neural network configuration for the generated simulation data and introduced a new optimization method. Indoor-specific received signal strength data were generated with Lee's in-building model according to a custom indoor layout. Simulation work was done to test localization performance with different feedforward backpropagation neural network setups, with the generated received signal strength data as input. A data preparation method was applied that converts the raw received signal strength data into average, median, min and max values before being fed into the neural network. The method increased location prediction performance, with a feedforward neural network with two hidden layers trained with the Bayesian Regularization algorithm producing a root mean squared error of 0.0821 m, which is 50% better than existing research work. Additional tests conducted with six different relevant scenarios verified the robustness of the scheme's localization performance.
In conclusion, the research has improved the performance of indoor localization using wireless network and ANN.
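The data-preparation step described above can be sketched as follows; the sample values are illustrative RSS readings in dBm, not data from the study.

```python
import numpy as np

def rss_features(samples):
    """Reduce raw received-signal-strength samples (e.g. per access point)
    to the four summary values fed to the neural network:
    average, median, minimum and maximum."""
    s = np.asarray(samples, dtype=float)
    return np.array([s.mean(), np.median(s), s.min(), s.max()])

print(rss_features([-61.0, -63.0, -59.0, -62.0]))  # average, median, min, max
```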
This proposed method enables the channels to be programmed easily for packet forwarding. The utilization of multiple channels can potentially be optimized, thereby maximizing the network capacity of the WMN. However, that work focused on the framework; thus, the effective use of multiple channels was not completely addressed. The feasibility of the proposed channel utilization method, which balances the amount of traffic among multiple channels to maximize the network capacity, is assessed using several important metrics. These metrics are compared with routing metrics in terms of throughput, number of hops, and node count per channel or link capacity.