Abstract — The Internet has become an essential component of our everyday social and financial activities. Nevertheless, Internet users may be vulnerable to different types of web threats, which may cause financial damage, identity theft, loss of private information, damage to brand reputation and loss of customer confidence in e-commerce and online banking. Phishing is a form of web threat defined as the art of impersonating the website of an honest enterprise with the aim of obtaining confidential information such as usernames, passwords and social security numbers. So far, no single solution can capture every phishing attack. In this article, we propose an intelligent model for predicting phishing attacks based on artificial neural networks (ANNs), particularly self-structuring neural networks. Phishing is a continuous problem in which the features significant for determining the type of web page are constantly changing. Thus, we need to constantly improve the network structure in order to cope with these changes. Our model solves this problem by automating the process of structuring the network, and it shows high tolerance of noisy data, fault tolerance and high prediction accuracy. Several experiments were conducted in our research, with the number of epochs differing in each experiment. From the results, we find that all produced structures have high generalization ability.
The other class of neural network architecture is the recurrent neural network, which contains feedback connections from units in subsequent layers to units in preceding layers. Recurrent networks have feedback connections between neurons of different layers or loop-type self-connections. This implies that the output of the network depends not only on the external inputs but also on the state of the network in the previous training iteration. Determining the network architecture is one of the most difficult tasks in constructing any model, but also one of the most essential steps. The neural network architecture employed in this study is a feed-forward network with one hidden layer, sometimes called a multi-layer perceptron. The advantage of the multi-layer perceptron is that the number of neurons in the hidden layer can be changed to adapt to the complexity of the relationship between the input and output variables. Although neural network construction has been widely researched, there is no known procedure or algorithm for the general case. However, one of the experimental objectives of this study was to determine the size of the hidden layer that produces the best predictive performance.
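As an illustrative sketch only (not the study's model), a feed-forward network with one hidden layer whose size is a free parameter can be written in a few lines of numpy; the XOR problem shows why a hidden layer is needed at all, since it is not linearly separable.

```python
import numpy as np

# Minimal one-hidden-layer MLP trained by backpropagation on squared error.
# The hidden size n_hidden is the capacity knob discussed in the text.
rng = np.random.default_rng(0)

def train_mlp(X, y, n_hidden, epochs=5000, lr=0.5):
    n_in = X.shape[1]
    W1 = rng.normal(0, 1, (n_in, n_hidden)); b1 = np.zeros(n_hidden)
    W2 = rng.normal(0, 1, (n_hidden, 1));    b2 = np.zeros(1)
    sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
    for _ in range(epochs):
        h = sigmoid(X @ W1 + b1)            # hidden activations
        out = sigmoid(h @ W2 + b2)          # network output
        d_out = (out - y) * out * (1 - out) # backprop through output sigmoid
        d_h = (d_out @ W2.T) * h * (1 - h)  # backprop through hidden layer
        W2 -= lr * h.T @ d_out; b2 -= lr * d_out.sum(0)
        W1 -= lr * X.T @ d_h;   b1 -= lr * d_h.sum(0)
    return lambda Xn: sigmoid(sigmoid(Xn @ W1 + b1) @ W2 + b2)

# XOR: a problem no single-layer network can solve.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], float)
y = np.array([[0], [1], [1], [0]], float)
predict = train_mlp(X, y, n_hidden=4)
print(np.round(predict(X)).ravel())   # ideally [0, 1, 1, 0]
```

Varying `n_hidden` and comparing validation error is the usual trial-and-error route to the hidden-layer size mentioned above.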
The virtual concept drift that occurs in phishing websites has a peculiarity: it might be guided by some malevolent intelligent agent rather than occurring naturally. Therefore, another aim of this thesis is to minimize the cause of intentional virtual concept drift by making use of the black-box nature of neural networks (NNs). The black-box nature of an NN means that the only visible parts of any NN classification model are the input and output, whereas the process that transforms the inputs into outputs is obscured. This characteristic makes the task of picking a new set of features that can circumvent the classification model considerably more difficult. Nevertheless, most NN classification models are traditionally created using trial and error. Thus, one more aim of this thesis is to create an algorithm that simplifies structuring NN classifiers. The algorithm plays an important role in the proposed framework, since it derives the classifiers that are added to the ensemble. After confirming the presence of a concept drift, a new classifier is created using the algorithm. Such a classifier is added to the previously derived classifiers, forming an ensemble of classifiers, each of which is considered an expert in a particular part of the input feature space.
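The drift-triggered ensemble loop described above can be sketched as follows. This is a hedged illustration, not the thesis's algorithm: the self-structuring NN classifier is replaced by a trivial least-squares base learner, and the drift test is a simple accuracy-drop check on each new batch.

```python
import numpy as np

rng = np.random.default_rng(1)

def train_base(X, y):
    """Stand-in base learner (least squares + sign threshold), playing the
    role of the NN classifier derived by the structuring algorithm."""
    w = np.linalg.lstsq(np.c_[X, np.ones(len(X))], 2 * y - 1, rcond=None)[0]
    return lambda Xn: (np.c_[Xn, np.ones(len(Xn))] @ w > 0).astype(int)

def ensemble_predict(ensemble, X):
    votes = np.mean([clf(X) for clf in ensemble], axis=0)  # majority vote
    return (votes >= 0.5).astype(int)

def stream_learn(batches, drift_threshold=0.7):
    ensemble = []
    for X, y in batches:
        acc = np.mean(ensemble_predict(ensemble, X) == y) if ensemble else 0.0
        if acc < drift_threshold:                 # drift confirmed on this batch
            ensemble.append(train_base(X, y))     # add a new expert
    return ensemble

# Two synthetic "concepts": the decision boundary flips between batches.
X1 = rng.normal(size=(100, 2)); y1 = (X1[:, 0] > 0).astype(int)
X2 = rng.normal(size=(100, 2)); y2 = (X2[:, 0] < 0).astype(int)
ensemble = stream_learn([(X1, y1), (X2, y2)])
print(len(ensemble))   # → 2: a second expert is added after the concept flips
```

Each member remains an "expert" on the concept it was trained on, which is exactly the ensemble behavior the framework relies on.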
Prediction means forecasting the possible crime rate for the near future and the places that may become hotspots for a crime type. Crime-rate prediction can be done for a type of crime, for a place, or for both. Hotspot prediction is done across a state or country, and all the hotspots can be displayed; this analysis can also be done by crime type. For both analyses, historical data is important. We use temporal data such as year and month to calculate the crime rate using regression. Prediction is a mathematical model that estimates future data from past data, and regression techniques are the ones most commonly used. Regression is the method of modelling the relationship between variables to analyze how they jointly contribute to the outcome. Linear regression is used when the relationship between the variables is linear; if the relationship is non-linear, we can use polynomial regression. Logistic regression, also called binomial regression, can be used when the prediction has only two states. Overfitting is an issue with regression models. In deep neural networks we can use the LSTM model, as shown in Fig. 4, for prediction. The Long Short-Term Memory model, a form of recurrent neural network, can remember past states and uses this past information to make predictions.
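The linear and polynomial regression step can be illustrated with numpy on a made-up yearly series (the numbers below are synthetic, not real crime data):

```python
import numpy as np

# Fit a linear and a quadratic regression to a yearly crime-rate series
# and extrapolate one year ahead. Values are invented for illustration.
years = np.arange(2010, 2020)
rate = np.array([50, 53, 55, 60, 62, 67, 70, 76, 80, 85], float)  # per 100k

lin = np.polyfit(years, rate, deg=1)    # linear: rate ≈ a*year + b
quad = np.polyfit(years, rate, deg=2)   # polynomial captures curvature

next_year = 2020
print(round(np.polyval(lin, next_year), 1))    # linear forecast
print(round(np.polyval(quad, next_year), 1))   # polynomial forecast
```

Raising the polynomial degree too far is exactly the overfitting risk the text mentions: the fit chases noise in the historical data and extrapolates poorly.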
Ananthi S. and Vishnu Varthini S. studied methods of image pre-processing for the recognition of crop diseases. They used cucumber powdery mildew, spot disease and downy mildew as study samples and reported a comparative investigation of the effect of basic and median filters. They stated that leaves with spots must first be pre-processed in order to carry out intelligent crop diagnosis based on image processing, and that appropriate features should be extracted on this basis. A prediction approach based on support vector machines for developing weather-based prediction models of plant diseases is proposed by Rakesh & Amar. The performance of conventional multiple regression, artificial neural networks (back-propagation neural network, generalized regression neural network) and the support vector machine (SVM) was compared. The stereomicroscopic method and the image-analysis method are compared by Mix & Pico for the usefulness of image analysis as an efficient and precise method to measure fruit traits such as size and shape-dispersal-related structures. Brendon J. Woodford, Nikola K. Kasabov and C. Howard Wearing, in the paper titled "Fruit Image Advances in Image Processing for Detection of Plant Diseases", proposed a wavelet-based image processing technique and a neural network to develop a method for on-line identification of pest damage in pip fruit in orchards. Three pests prevalent in orchards were selected as candidates for this research: the leaf-roller, codling moth, and apple leaf curling midge. A novel approach is proposed for integrating image-analysis techniques into a diagnostic expert system. A CLASE (Central Lab. of Agricultural Expert System) diagnostic model is used to manage the cucumber crop. The expert system identifies diseases from the user's observations.
In order to diagnose a disorder from a leaf image, four image-processing stages are used: enhancement, segmentation, feature extraction and classification. They tested three different disorders, namely leaf miner, powdery mildew and downy mildew, and this approach has greatly reduced
This paper aims to demonstrate the importance and possible value of housing predictive power, which provides independent real-estate-market forecasts of home prices using data-mining tasks. A feed-forward back-propagation (FFBP) network model and a cascade-forward back-propagation (CFBP) network model are used in this research, and their results are compared. We estimate the median value of owner-occupied homes in Boston suburbs given 13 neighborhood attributes. An estimator can be found by fitting the inputs and targets. This data set has 506 samples: the "housing inputs" form a 13 × 506 matrix, and the "housing targets" form a 1 × 506 matrix of median values of owner-occupied homes in $1000's. The paper concludes which of the two networks is the better indicator of the relation between the output data and the target data. The CFBP network gives the best result: the network output for all samples is described by the equation output = 0.95 × Target + 1.2. The regression value is approximately 1 (R = 0.964), which means the network output matches the target data set (median value of owner-occupied homes in $1000's), and the percentage correctly predicted in the simulation sample is 96%.
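The reported diagnostic (a regression line between network outputs and targets, plus the correlation R) is easy to reproduce in outline. The sketch below uses synthetic output/target pairs built to imitate the reported relation output = 0.95 × Target + 1.2, not the actual Boston data or trained networks:

```python
import numpy as np

rng = np.random.default_rng(2)
targets = rng.uniform(5, 50, 506)                        # median values, $1000's
outputs = 0.95 * targets + 1.2 + rng.normal(0, 2, 506)   # imitates the paper's fit

m, c = np.polyfit(targets, outputs, 1)      # slope and intercept of output vs target
R = np.corrcoef(targets, outputs)[0, 1]     # regression value R
print(round(m, 2), round(c, 1), round(R, 3))   # slope ≈ 0.95, R close to 1
```

An R near 1 with slope near 1 and small intercept is what justifies the paper's claim that the network output tracks the target set.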
Gabor and Seyal introduce a neural network algorithm that relies primarily on the spike field distribution. MLP networks with the number of input and hidden nodes equal to the number of channels in the record and a single output node are used. Five bipolar 8-channel records from the EMU, with durations ranging from 7.1 to 23.3 min, are used for training and testing. Two networks are trained on only the slopes of the spike's half-waves, and there is no notion of background context: the first uses the slope of the half-wave before the spike's apex for all 8 channels as inputs, and the second uses the slope after the apex. The output of the algorithm is a weighted combination of the two network outputs, with a value near 1.0 indicating that a spike has been found. The duration (not specified) of the spike half-waves is fixed, so that no waveform decomposition is required. The algorithm slides along the data one sample at a time and identifies a spike when the output is greater than a threshold (e.g. 0.9). The method requires a distinct network for each patient and spike focus, so seven networks were trained because two of the patients had independent foci. The training required 4–6 example spikes, and the non-spikes were generated by statistical variation, resulting in four times more non-spikes. Although this method does not seem well suited for general detection, it might be a promising method for finding 'similar' events.
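The sliding detection loop has a simple shape. In the sketch below the two trained MLPs are replaced by stand-in scoring functions (so only the mechanics follow the description: per-sample sliding, half-wave slopes before and after the apex, a weighted combination, and the 0.9 threshold):

```python
import numpy as np

rng = np.random.default_rng(3)

def half_wave_slopes(window):
    """Slopes of the half-waves before and after the window's apex."""
    apex = int(np.argmax(np.abs(window)))
    pre = (window[apex] - window[0]) / max(apex, 1)
    post = (window[-1] - window[apex]) / max(len(window) - 1 - apex, 1)
    return pre, post

# Stand-ins for the two trained MLPs: score rises with half-wave steepness.
def net_pre(s):  return 1 / (1 + np.exp(-5 * (abs(s) - 0.5)))
def net_post(s): return 1 / (1 + np.exp(-5 * (abs(s) - 0.5)))

signal = rng.normal(0, 0.1, 200)
signal[100] = 5.0                        # one sharp spike in background noise
win, thresh, detections = 8, 0.9, []
for t in range(len(signal) - win):       # slide along the data one sample at a time
    pre, post = half_wave_slopes(signal[t:t + win])
    score = 0.5 * net_pre(pre) + 0.5 * net_post(post)  # weighted combination
    if score > thresh:
        detections.append(t)
print(len(detections) > 0)   # → True
```

The detections cluster around the spike position, while the low-amplitude background never clears the threshold.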
The ANN model, a computational model inspired by biological nervous systems, has been used to build mathematical models in many different fields. The ANN was selected for this work because of its ability to model non-linear systems. In this method, we have a set of procedures, shown in Fig. 2, which presents the main steps of the ANN method (Margarita, 2002; Uğur, 2004). In the first step, the input and the target (output) data must be defined. In this work, the inputs to the neural networks are the numeric values of significant parameters such as laser power, laser speed and laser frequency. These inputs influence the LDS outputs, such as the groove profile, the groove dimensions, the lap dimensions, the interactive width and the surface roughness (Ra); see Table 1.
This work combines ideas from nature with the knowledge and experience that human beings have acquired. We introduce a new approach to the combination of NNs and GAs that also addresses the overfitting problem. In our work, the GA is the learning algorithm for the NN; the structure of the goal function is not important to us, because it is hidden in the neural network. Our solution is based on two models of genetic operators: we realize the mutation and crossover operators as changes to the NN weights and structure. Of course, changes to the structure of the neural network have to be meaningful. We show two methods with different challenges and results.
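The weight-level variant of this idea can be sketched as follows. This is a hedged illustration, not the paper's method: individuals are flat weight vectors of a fixed small network, crossover splices two parents at a random cut, and mutation perturbs weights with Gaussian noise; all sizes and rates are made up.

```python
import numpy as np

rng = np.random.default_rng(4)
X = np.linspace(-1, 1, 40).reshape(-1, 1)
y = X**2                                      # toy target function

def forward(w, X, n_hidden=6):
    """Decode a flat weight vector into a 1-n_hidden-1 tanh network."""
    W1 = w[:n_hidden].reshape(1, -1); b1 = w[n_hidden:2 * n_hidden]
    W2 = w[2 * n_hidden:3 * n_hidden].reshape(-1, 1); b2 = w[-1]
    return np.tanh(X @ W1 + b1) @ W2 + b2

def fitness(w):
    return -np.mean((forward(w, X) - y) ** 2)  # higher is better

n_w = 3 * 6 + 1
pop = rng.normal(0, 1, (30, n_w))
for gen in range(200):
    scores = np.array([fitness(w) for w in pop])
    parents = pop[np.argsort(scores)[-10:]]        # selection: keep the best 10
    children = []
    for _ in range(20):
        a, b = parents[rng.integers(10, size=2)]
        cut = rng.integers(1, n_w)
        child = np.r_[a[:cut], b[cut:]]            # one-point crossover on weights
        child += rng.normal(0, 0.1, n_w) * (rng.random(n_w) < 0.2)  # mutation
        children.append(child)
    pop = np.vstack([parents, children])
best = pop[np.argmax([fitness(w) for w in pop])]
print(round(-fitness(best), 4))   # final MSE on the toy problem
```

Because the GA only ever evaluates the network's error, the internal structure of the goal function stays hidden inside the NN, exactly as the text notes.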
The main objective of this system is to enhance the speech signal in order to obtain a clean signal of higher quality. The signal-processing problem of noise reduction and speech enhancement has received considerable attention within the adaptive filter community, and such systems have been widely used in long-distance telephony applications. One of the most challenging areas of this research is the development of adaptive algorithms for hearing aids. A closely linked problem, which has been the focus of research in recent years in the artificial neural network (ANN) community, is that of blind separation of sources, and in particular the convolutive blind source separation problem.
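A standard adaptive-filter formulation of this noise-reduction problem is the LMS noise canceller (shown here as a generic illustration, not the specific system of the text): a reference noise input is adaptively filtered to estimate the noise in the primary signal, and the error output is the enhanced speech.

```python
import numpy as np

rng = np.random.default_rng(5)
n, taps, mu = 5000, 8, 0.01
speech = np.sin(2 * np.pi * 0.01 * np.arange(n))           # stand-in for speech
noise = rng.normal(0, 1, n)                                 # reference noise input
primary = speech + np.convolve(noise, [0.6, 0.3], 'same')   # speech + filtered noise

w = np.zeros(taps)
out = np.zeros(n)
for t in range(taps, n):
    x = noise[t - taps + 1:t + 1][::-1]   # reference tap vector (newest first)
    noise_est = w @ x                      # adaptive estimate of the noise
    out[t] = primary[t] - noise_est        # enhanced signal = error signal
    w += mu * out[t] * x                   # LMS weight update

mse_before = np.mean((primary - speech) ** 2)
mse_after = np.mean((out[1000:] - speech[1000:]) ** 2)
print(mse_after < mse_before)   # → True: noise power is reduced
```

Because the speech is uncorrelated with the reference noise, the filter converges to cancel only the noise path, leaving the speech in the error output.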
Econometric literature has established that financial volatility and its underlying processes are often nonlinear in nature; see Martens, De Pooter, and Van Dijk (2004). Some authors, for example Halbleib-Chiriac and Voev (2011) and Heiden (2015), have shown that covariance matrices, as well as their Cholesky factors, exhibit long-term dependencies. As a result, linear models may not be suited to studying the behavior of these phenomena. Extending a previous paper (see Bucci (2019)) to the multivariate context, I decided to approximate the relationship between the elements of the Cholesky decomposition and a set of macroeconomic and financial variables through a universal approximator such as artificial neural networks (ANNs) with multiple outputs.
The artificial neural network is the most widely used and most mature technology in artificial-intelligence information fusion, and classification recognition is one of its main applications. In this article, a 6-13-6 three-layer BP neural network classifier is designed for white blood cell images, and the BP network is trained and simulated using the feature data extracted from the white blood cell images. The simulation results show that the classifier can classify the cells rapidly and accurately.
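A 6-13-6 architecture means 6 input features, 13 hidden units and 6 output classes. The sketch below trains such a network with plain batch back-propagation; since the extracted cell features are not available, synthetic clusters stand in for them, so this illustrates the architecture rather than reproducing the paper's results.

```python
import numpy as np

rng = np.random.default_rng(6)
centers = rng.normal(0, 3, (6, 6))                   # one cluster per cell class
X = np.vstack([c + rng.normal(0, 0.3, (50, 6)) for c in centers])
X = (X - X.mean(0)) / X.std(0)                       # standardize the 6 features
y = np.repeat(np.arange(6), 50)
Y = np.eye(6)[y]                                     # one-hot targets

W1 = rng.normal(0, 0.5, (6, 13)); b1 = np.zeros(13)  # 6 -> 13
W2 = rng.normal(0, 0.5, (13, 6)); b2 = np.zeros(6)   # 13 -> 6
sig = lambda z: 1 / (1 + np.exp(-z))
for _ in range(2000):                                # batch back-propagation
    H = sig(X @ W1 + b1)
    Z = H @ W2 + b2
    Z -= Z.max(1, keepdims=True)                     # numerically stable softmax
    P = np.exp(Z); P /= P.sum(1, keepdims=True)
    dZ2 = (P - Y) / len(X)                           # cross-entropy gradient
    dZ1 = (dZ2 @ W2.T) * H * (1 - H)
    W2 -= 1.0 * H.T @ dZ2; b2 -= 1.0 * dZ2.sum(0)
    W1 -= 1.0 * X.T @ dZ1; b1 -= 1.0 * dZ1.sum(0)

acc = np.mean(P.argmax(1) == y)
print(round(acc, 2))   # well-separated clusters are classified almost perfectly
```

With real cell features the same topology applies; only the training data changes.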
This paper focuses on multivariate statistical and artificial neural network techniques for data reduction. Each method has a different rationale for preserving the relationships between input parameters during analysis. Principal Component Analysis, a multivariate technique, and the Self-Organising Map, a neural network technique, are presented in this paper. In addition, a hierarchical clustering approach has been applied to the reduced data set. A case study of air-quality measurement is considered to evaluate the performance of the proposed techniques.
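The PCA half of this reduction pipeline can be sketched with numpy alone (synthetic data standing in for the air-quality measurements): standardize, take the singular value decomposition, and keep the leading components.

```python
import numpy as np

rng = np.random.default_rng(7)
latent = rng.normal(0, 1, (200, 3))                    # 3 true underlying factors
mixing = rng.normal(0, 1, (3, 10))
X = latent @ mixing + rng.normal(0, 0.1, (200, 10))    # 10 correlated "sensors"

Xs = (X - X.mean(0)) / X.std(0)                        # standardization
U, S, Vt = np.linalg.svd(Xs, full_matrices=False)      # PCA via SVD
explained = S**2 / np.sum(S**2)                        # variance per component
reduced = Xs @ Vt[:3].T                                # project onto first 3 PCs
print(reduced.shape, round(explained[:3].sum(), 3))    # ≈ 0.99 of the variance
```

The reduced matrix is what a hierarchical clustering step would then operate on, at a fraction of the original dimensionality.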
Abstract: We present an Artificial Neural Network (ANN) approach to predicting stock market indices, in particular forecasting their upward or downward trend movements. Exploiting different neural network architectures, we provide numerical analyses of concrete financial time series. In particular, after a brief résumé of the existing literature on the subject, we consider the Multi-layer Perceptron (MLP), the Convolutional Neural Network (CNN), and the Long Short-Term Memory (LSTM) recurrent neural network techniques. We focus on the importance of choosing the correct input features, along with their preprocessing, for the specific learning algorithm one wants to use. Finally, we consider the S&P500 historical time series, predicting the trend on the basis of data from past days, and propose a novel approach based on the combination of wavelets and CNNs, which outperforms the basic neural network approaches. We show that neural networks are able to predict financial time-series movements even when trained only on plain time-series data, and we propose further ways to improve the results.
In this paper, intrusion detection (ID) with different combinations of neural networks is used to achieve good accuracy. We use five different data sets, namely DEFCON, NSL-KDD, DARPA, ISCX-UNB and KDD Cup 1999. The attacks in the data sets fall into four categories: DoS (denial of service), R2L (remote to user), U2R (user to root) and probing. To reduce the false-positive rate and increase the detection ability, the paper also proposes a new Swarm Intelligence (SI) approach to pre-process the data; Figure 1 shows the architecture of the proposed work. The pre-processing converts non-numerical values into numerical values and is also used to simplify a complex optimization problem. After pre-processing, the data is used to train five different types of neural networks: the Feed-Forward Neural Network (FFNN), Deep Neural Network (DNN), Joint Evolution Neural Network (JENN), Radial Basis Function Neural Network (RBFNN) and Hybrid Neural Network (HNN). Optimization is a technique for directing resources toward the best possible effect. After implementing these networks, an artificial bee colony (ABC) optimization method is applied to the Joint Evolution Neural Network to achieve a better accuracy rate and improve the efficiency of the system.
Abstract The data collected from electronic nose systems are multidimensional and usually contain a great deal of redundant information. In order to extract only the relevant data, different computational techniques have been developed. The article presents and compares selected pattern-recognition algorithms applied to the qualitative determination of different brands of tea. The measured responses of an array of 18 semiconductor gas sensors formed the input vectors used for further analysis. The initial data processing consisted of standardization, principal component analysis, data normalization and reduction. Soft computing can be divided into single-method systems using neural networks or fuzzy systems, and hybrid systems such as evolutionary-neural, neuro-fuzzy and evolutionary-fuzzy systems. All the presented systems were evaluated based on accuracy (generated error) and complexity (number of parameters and training time) criteria. A novel method of forming the input data vector by aggregating the first three principal components is also presented.
Abstract— Activity detection based on likelihood ratios in the presence of high-dimensional multimodal data is a challenging problem, as the estimation of joint probability density functions (pdfs) with intermodal dependence is tedious. Existing methods fail due to poor performance in the presence of strongly dependent data. This paper proposes a compressive-sensing-based detection method for multi-sensor signals using deep learning. The proposed Tree copula-Grasshopper optimization based Deep Convolutional Neural Network (TC-GO based DCNN) detection method comprises three main steps: compressive sensing, fusion and detection. The signals are initially collected from the sensors in order to subject them to tensor-based compressive sensing. The compressed signals are then fused together using tree copula theory, and the parameters are estimated with the Grasshopper Optimization Algorithm (GOA). The activity detection is finally performed using the DCNN, which is trained with the Stochastic Gradient Descent (SGD) optimizer. The performance of the proposed method is evaluated using metrics such as the probability of detection and the probability of false alarm. The proposed method achieves a highest probability of detection of 0.9083 and a lowest probability of false alarm of 0.0959, which shows its effectiveness in activity detection.
political will affect markets, leading to total confusion of investors, mistrust of the performance of the market, the existence of asymmetric information and, thereby, loss of public confidence in the markets (Zhou and Sornette, 2006). Therefore, over the past few decades, in order to create optimal conditions for allocating financial resources and evaluating the performance of risk management, the accurate forecasting of the price changes of financial assets has attracted the attention of researchers and policy-makers (Cox and Loomis, 2006). The classical methods, such as regression and structural models, despite their relative success in forecasting the variables, have not produced the desired results, according to researchers, because these methods generally rely on information obtained from historical events. Mainly because the economic and financial issues in the stock market lead to the formation of complex and non-linear relations, the use of flexible non-linear models, such as neural network models, in modeling and forecasting market indexes can yield impressive results (Aladag et al., 2009). On the other hand, the use of flexible nonlinear models such as neural networks is also a response to the lack of consensus on the rejection or acceptance of the efficient markets hypothesis. Despite the complexity of these methods in the pricing process, they are able to forecast future prices with acceptable error. So far, several results on forecasting stock market prices have been published. Melin et al. (2012), Soni (2011), Dase and Pawar (2010), Li and Liu (2009), and Thenmozhi (2006) examined stock markets in different regions of the world using artificial neural network models. Also, Sahin et al. (2012), Georgescu and Dinucă (2011), Mehrara et al. (2010), Tong-Seng (2007), Ghiassi et al. (2006), and Sheta and Jong (2001) forecasted time series using multilayer feed-forward neural network (MFNN), Nonlinear Neural Network Auto-Regressive model with exogenous inputs (NNARX), and Adaptive Neuro-Fuzzy Inference System (ANFIS) methods. The striking point in all these studies is that the various neural network models achieve very high forecasting accuracy in comparison with the classical models.
3.8 × 10− rad, improved by about 12.6 times. (c) shows that the output torque of the controller is maintained within a small range of variation both with and without friction compensation, which indicates that the friction compensation does not consume too much energy; this method can save energy compared with conventional high-gain friction compensation. Furthermore, the boundedness of the network weights in (d) also supports this conclusion.