The sediments of this subsided unit comprise tectonically and stratigraphically distinct sets of Miocene and Pliocene rocks. There are roughly two rock complexes with different geologic, lithological, petrographic, and genetic characteristics. The first complex contains Paleozoic and Mesozoic rocks in the subsurface, usually grouped in one unit known as the pre-Neogene basement (Fig. 2). It includes various metamorphic and magmatic rocks (e.g., gneiss and phyllonite) of Paleozoic age and quartz–chlorite–sericite schists for which a Mesozoic age can be presumed (Fig. 2; Pandžić 1979; Pamić et al. 1998). Sporadically, Mesozoic carbonates can be found in outcrops (Belak et al. 1998), but none were determined in the well Tek-1. The other complex incorporates Neogene and Quaternary rocks, comprising sediments and effusives spanning the Lower Miocene to the Holocene. These have traditionally been subdivided into lithostratigraphic units at the rank of formations according to Šimon (1980) and Hernitz (1980). These units are bracketed by E-log marker horizons (Tg, H, G, B, and A; Fig. 2), taken in a broader sense: some of them are actually unconformities, whereas many of them in specific settings can be treated as chronohorizons (Vrbanac 2002). This conventional correlation is nowadays obsolete and leaves much to be improved, since very few of these E-log marker horizons follow regional trends (Cvetković 2017). However, it serves well for this (local) example.
Weather forecasting (especially rainfall forecasting) is one of the most important and challenging operational tasks carried out by meteorological services all over the world. It is a complicated procedure that draws on multiple specialized fields of expertise. Researchers in this field have separated weather forecasting methodologies into two main branches: numerical modeling and scientific processing of meteorological data. The most widespread techniques used for rainfall forecasting are therefore numerical and statistical methods. Although research in these fields has been conducted for a long time, the success of these models has been limited. Numerical models have had limited success in forecasting weather parameters: their accuracy depends on initial conditions that are inherently incomplete, and they are unable to produce satisfactory results for local and short-term cases. Their performance is also poor for long-range prediction of monsoon rainfall, even at larger spatial scales and particularly for the Indian region. As an alternative, statistical methods, in which rainfall time series are treated as stochastic, are widely used for long-range prediction of rainfall. The IMD has been using statistical models for predicting monsoon rainfall. These models were successful in years of normal monsoon rainfall but failed markedly during extreme monsoon years such as 2002 and 2004. Moreover, it is very difficult to achieve the same or better skill in predicting district-level monsoon rainfall as at the all-India level using these statistical models. Two main drawbacks of these statistical models are:
Abstract Rolling is one of the most complicated processes in metal forming. Knowing the exact values of its basic parameters, especially the inter-stand tensions, can be effective in controlling the other parameters of the process. Inter-stand tensions affect rolling pressure, rolling force, forward and backward slip, and the neutral angle, so calculating this effect is an important step in continuous rolling design and control. Since inter-stand tensions cannot be calculated analytically, an approach based on an artificial neural network (ANN) is described for identifying the applied parameters in a cold tandem rolling mill. Because of the limited experimental data on this subject, a five-stand tandem cold rolling mill is simulated by the finite element method. The outputs of the FE simulation are used to train the network, and the network is then employed to predict the tensions in a tandem cold rolling mill. After testing different network designs, an 11-42-4 structure with one hidden layer was selected as the best network. The verification factor of the ANN results against the experimental data is above R = 0.9586 for the training and testing data sets; the experimental results were obtained from the five-stand tandem cold rolling mill. This paper proposes a new ANN for the prediction of inter-stand tensions, together with a fuzzy control algorithm for investigating the effect of front and back tensions on reducing the thickness deviations of hot-rolled steel strips. The figure of 0.9586 quoted for the training and testing data sets is a lower bound; the individual values vary and are discussed in detail in Section 4. According to Table 7, the proposed ANN model has correlation coefficients of 0.9586, 0.9798, 0.9762, and 0.9742 for the training data sets and 0.9905, 0.9798, 0.9762, and 0.9803 for the testing data sets, respectively.
These numbers indicate the acceptable accuracy of the ANN method in predicting the inter-stand tensions of the tandem rolling mill. The method provides a highly accurate solution with reduced computational time and is suitable for on-line control or optimization in tandem cold rolling mills. Because of the limited experimental data, a 2D tandem cold rolling process was simulated in ABAQUS 6.9 to extract data for the ANN. To design a network for this rolling problem, various neural network structures were studied in MATLAB 7.8.
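As a hedged illustration of how such a verification factor can be computed, the sketch below evaluates the Pearson correlation coefficient R between predicted and measured tensions; the sample values are invented for illustration and are not the mill data.

```python
import math

def pearson_r(pred, meas):
    """Pearson correlation coefficient between predictions and measurements."""
    n = len(pred)
    mp = sum(pred) / n
    mm = sum(meas) / n
    cov = sum((p - mp) * (m - mm) for p, m in zip(pred, meas))
    sp = math.sqrt(sum((p - mp) ** 2 for p in pred))
    sm = math.sqrt(sum((m - mm) ** 2 for m in meas))
    return cov / (sp * sm)

# Illustrative tension values only (not from the mill experiments)
predicted = [10.1, 12.3, 9.8, 11.5, 13.0]
measured  = [10.0, 12.5, 9.6, 11.8, 12.9]
print(round(pearson_r(predicted, measured), 4))
```

A value close to 1 indicates close agreement between the ANN output and the measurements, which is how the R figures in Table 7 are read.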
Since the model is intended to act as an intra-day trading predictor, computational effort must be balanced against execution speed. Choosing the right number of hidden layers in the model can be tricky. According to Cybenko, for a continuous function with a limited set of discontinuities, a single hidden layer should be sufficient for an ANN model. Because the data used in this study fit this definition, only one hidden layer is used in the model. Figure 1 shows the ANN used in the simulation.
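As a rough sketch (not the paper's implementation), a one-hidden-layer feedforward network of the kind covered by Cybenko's result can be written as follows; the layer sizes and random weights are placeholders, not trained values.

```python
import math, random

random.seed(0)

def forward(x, W1, b1, W2, b2):
    """One-hidden-layer MLP: sigmoid hidden units, linear output."""
    hidden = [1.0 / (1.0 + math.exp(-(sum(w * xi for w, xi in zip(row, x)) + b)))
              for row, b in zip(W1, b1)]
    return [sum(w * h for w, h in zip(row, hidden)) + b
            for row, b in zip(W2, b2)]

# Placeholder dimensions and untrained random weights (illustrative only)
n_in, n_hidden, n_out = 3, 5, 1
W1 = [[random.uniform(-1, 1) for _ in range(n_in)] for _ in range(n_hidden)]
b1 = [0.0] * n_hidden
W2 = [[random.uniform(-1, 1) for _ in range(n_hidden)] for _ in range(n_out)]
b2 = [0.0] * n_out

print(forward([0.2, 0.5, 0.1], W1, b1, W2, b2))
```

Training (e.g., by backpropagation) would adjust W1, b1, W2, b2; only the forward pass is shown here.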
The second model uses precipitation data from 24 stations, along with the discharge of the output station from the previous time step and the precipitation index values for the last 4 to 8 days. A neural network with a 29-8-1 architecture was trained through several trials. Based on a sensitivity analysis, 15 inputs were found to be effective for the ANN model. The sensitivity analysis was carried out by changing one normalized input at a time while the other inputs remained constant; an example result is shown in Fig. 5. Accordingly, the initial 29-8-1 MLP network was replaced by a 15-5-1 network. The values of MRE, E, and R² were 13.07, 0.967, and 0.905, respectively. Fig. 6 shows even better results than those obtained with the second model. However, none of these models performed reasonably in predicting large flood flows. The reason may be that these models use time-step patterns for both rainy and non-rainy days; these were separated in the next two models.
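The one-at-a-time sensitivity analysis described above can be sketched as follows; the stand-in model and its weights are illustrative assumptions only (the study used the trained 29-8-1 MLP, not a weighted sum).

```python
def sensitivity(model, base_input, index, delta=0.1):
    """Change one normalized input by +/- delta, holding the rest constant,
    and return the resulting change in the model output."""
    up = list(base_input); up[index] += delta
    down = list(base_input); down[index] -= delta
    return model(up) - model(down)

# Stand-in model: a simple weighted sum (hypothetical, for illustration)
weights = [0.9, 0.05, 0.4, 0.01]
model = lambda x: sum(w * xi for w, xi in zip(weights, x))

base = [0.5, 0.5, 0.5, 0.5]
effects = [abs(sensitivity(model, base, i)) for i in range(len(base))]
# Inputs ranked by influence; low-impact inputs are candidates for removal,
# which is how the 29 inputs were reduced to 15 in the study
ranked = sorted(range(len(base)), key=lambda i: -effects[i])
print(ranked)
```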
Huang et al. used the GSOM as a visualization tool to cluster fMRI finger-tapping and non-tapping data. The finger-tapping experiment during an fMRI scan is commonly conducted for clinical purposes; for instance, it enables researchers to profile Parkinson's disease characteristics, because finger-tapping deficits have already been observed in Parkinson's patients. The GSOM starts with a minimal number of nodes (usually 4) and grows new nodes on the boundary based on a heuristic. Through the SF, the data analyst can control the growth of the GSOM. All the starting nodes of the GSOM are boundary nodes, i.e. each node has the freedom to grow in its own direction at the beginning. New nodes are grown from the boundary nodes: once a node is selected for growing, new nodes are grown at all of its free neighbouring positions. Owing to its flexible structure and dynamic node-adding capacity, the GSOM has been shown to provide better visualization as well as faster processing compared to the SOM. A further key application has been the use of the SF parameter to develop GSOMs at different levels of spread, thus enabling the generation of hierarchies of clusters. The results of the GSOM-based analysis performed by Huang et al. are shown below. Fig. 6 shows 36 GSOM clustering images corresponding to 36 horizontal brain slices when the subject was at rest, and Fig. 7 shows the corresponding 36 GSOM clustering partitions when the subject was tapping fingers.
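In the GSOM literature the spread factor is commonly mapped to a growth threshold as GT = -D·ln(SF), where D is the input dimensionality; assuming that formulation, a minimal illustration of how SF controls growth:

```python
import math

def growth_threshold(dim, spread_factor):
    """GSOM growth threshold (assumed form GT = -D * ln(SF)):
    a lower SF gives a higher threshold and hence fewer new nodes."""
    return -dim * math.log(spread_factor)

# A node whose accumulated quantization error exceeds GT triggers growth
# at all of its free neighbouring grid positions (if it is a boundary node).
for sf in (0.1, 0.5, 0.9):
    print(sf, round(growth_threshold(10, sf), 3))
```

This is how the same data can yield coarse maps (small SF) or spread-out maps (large SF), enabling the cluster hierarchies mentioned above.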
on VMS data for a subset of vessels belonging to length class [12–15 m) returned an estimate of the total annual fishing effort for the whole Italian fleet in this fleet segment. The model outputs were validated in two ways: internally (cross-validation using test data not used for training; see Figure 8) and externally (i.e., through comparison with independent estimates from logbook data; see Figure 10). The latter type of validation, in particular, indicated substantial agreement between the model output and the patterns depicted by the logbook data. The remaining discrepancies between the model output and the logbook-based patterns can be explained in different ways. First, of course, these discrepancies could be related to limitations in the model's predictive power. If this is the case, it is worth noting that the model was trained with VMS data for approximately 3% of the whole fleet in the length class [12–15 m) (see Figure 1); the model output should therefore be judged bearing in mind that the predictions were based on a small subset of the target universe. Moreover, it is reasonable to expect that model performance will improve with the progressive coverage of VMS (or AIS) in this fleet segment. Second, mismatches between model outputs and logbook-based patterns could also be due to misreporting or gaps in the logbook data; indeed, logbooks are characterized by consistency and accuracy issues.
point-valued quantities. However, the main problem with interval filtering is that, due to the conservative nature of interval computation, the estimates tend to be over-conservative, limiting their practical usage.[2] In practice, a single estimate is often required, and several studies have aimed at inferring point-valued estimates from the interval estimates of the IKF. Chui and Chen[9] suggested using a weighted average of the IKF boundaries and, in the absence of any weighting criteria, taking the arithmetic average of the boundaries. As demonstrated in this article, the wIKF methodology developed here provides estimates that are much improved over the simple arithmetic average of the interval bounds. Another method was proposed by Weng et al.,[10] in which evolutionary programming is used as a global search method to find the point estimate that minimises the maximum estimation error covariance. However, on the one hand, this method requires running an iterative search algorithm at each time step, and on the other hand, it does not use actual measurement data to infer the desired point-valued estimate, being based on statistical principles alone. In the approach used here, the training of the network is done offline, so that it is only used for prediction during an actual mission. This requires only forward propagation of information through the network, which can be computed efficiently using a vectorised implementation.
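Chui and Chen's suggestion reduces to a convex combination of the interval bounds; a minimal sketch of that baseline (the weighting scheme here is generic, not the wIKF developed in this article):

```python
def point_estimate(lower, upper, weight=0.5):
    """Convex combination of the interval Kalman filter bounds.
    weight=0.5 gives the arithmetic-average fallback suggested by
    Chui and Chen when no weighting criterion is available."""
    return weight * lower + (1.0 - weight) * upper

# An interval estimate [2.0, 3.0] collapses to 2.5 with no weighting criterion
print(point_estimate(2.0, 3.0))
```

The wIKF improves on this baseline by learning the weights from measurement data rather than fixing them a priori.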
Flores et al. [17-18] transformed the traditional ABC analysis for inventory classification by taking into consideration a second significant criterion. This method, the so-called bi-criteria inventory classification, uses the traditional ABC analysis to classify inventory by the first criterion and then by the second criterion. Its main disadvantage is that the weights of the two criteria are assumed to be equal. A weighted linear optimization model was later developed, based on the concept of data envelopment analysis (DEA). A weighted additive function (score) is used, which aggregates an item's performance in terms of the different criteria, and a linear optimization model is defined for each item. Solving this model yields the optimal inventory score for each item as well as the weight values for all the criteria. For a large number of items this method is time consuming, but it provides an objective way of determining the weights. The author proposes a simplified model very similar to Ramanathan's model.
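A hedged sketch of the per-item weighted linear optimization: for a fixed weight direction the binding constraint fixes the optimal scaling, so with two criteria the model can be approximated by a grid search over weight directions rather than a full LP solver. The item data below are hypothetical.

```python
def optimal_score(items, i, steps=100):
    """Best weighted additive score for item i (DEA-style model):
    maximize sum_j w_j * y_ij subject to sum_j w_j * y_kj <= 1 for every k,
    w_j >= 0. For a fixed weight direction u the optimal scaling is
    1 / max_k(score_k), so we grid-search over directions (two criteria)."""
    best = 0.0
    for s in range(steps + 1):
        a = s / steps
        u = (a, 1.0 - a)
        scores = [u[0] * y1 + u[1] * y2 for y1, y2 in items]
        top = max(scores)
        if top > 0:
            best = max(best, scores[i] / top)
    return best

# Hypothetical items scored on two normalized criteria
# (e.g., annual dollar usage and criticality; values are illustrative)
items = [(1.0, 0.2), (0.5, 0.9), (0.3, 0.3)]
for i in range(len(items)):
    print(i, round(optimal_score(items, i), 3))
```

Items that excel on at least one criterion achieve a score of 1 under their most favourable weights, which is the DEA notion of efficiency the text refers to; an exact solution would use a linear programming solver per item.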
A second ANN SFF model was developed for forecasting stream flows at downstream locations based on precipitation and stream flows at the upstream station (Station I). The upstream station was chosen as the reference station to generate forecasted flows for the two downstream stations (Station II and Station III). The average travel times of peak flow from the reference station to the first and second downstream stations are approximately one hour and three hours, respectively. Preliminary investigation indicated a reasonable correlation between stream flows at the upstream station and those at either downstream station when the upstream data were lagged by appropriate time intervals. If the time lags were too long, part of the upstream data would exhibit no relation to the downstream flows; this would also prolong training time and reduce model performance. At each time step, the data input to the ANN includes the preceding 6-hr record of precipitation and stream flows at the upstream station. A 15-minute time step was chosen to be consistent with the USGS data format. The predicted stream flows at the first and second downstream stations are thus one hour and three hours ahead in real time; see Table 2.6. The SFF model differs from a hydrologic routing model in that it has been trained to recognize additional watershed runoff contributing to the downstream stations.
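The lagged-input construction described above can be sketched as follows; the 4-step window, 4-step lead, and series values are illustrative assumptions, not the actual 6-hr/15-min USGS configuration (which would use 24 lagged readings per variable).

```python
def build_samples(precip, upstream_flow, downstream_flow, window=4, lead=4):
    """Build (input, target) pairs: each input holds the preceding `window`
    upstream precipitation and flow readings; the target is the downstream
    flow `lead` steps ahead (e.g., 4 x 15-min steps = 1-hour lead time)."""
    samples = []
    for t in range(window, len(upstream_flow) - lead):
        x = precip[t - window:t] + upstream_flow[t - window:t]
        y = downstream_flow[t + lead]
        samples.append((x, y))
    return samples

# Illustrative short series (not USGS data); 15-min time steps assumed
precip = [0.0, 0.1, 0.3, 0.2, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0]
up     = [5.0, 5.2, 6.1, 7.5, 8.0, 7.6, 7.0, 6.5, 6.2, 6.0]
down   = [4.0, 4.0, 4.1, 4.5, 5.2, 6.0, 6.8, 7.1, 7.0, 6.8]
pairs = build_samples(precip, up, down)
print(len(pairs), len(pairs[0][0]))
```

The resulting pairs would feed the ANN's training; choosing `lead` to match the travel time is what makes the forecast one or three hours ahead for the two downstream stations.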
Another method for evaluating companies is the use of neural networks, which can also be applied in business practice. An advantage of artificial neural networks is their ability to predict future periods. Sánchez and Melin state that neural networks have a huge range of applications in many different areas. Artificial neural networks have numerous advantages over conventional methods: they allow complicated patterns to be analysed quickly and with high precision and, according to Santin, they are flexible in use. The disadvantage of these networks is their need for large data samples, since many test observations are required to produce such data, which can be very complicated for users.
d) Possibility of considering problems such as the uncertainty of some quantities, load variations, etc. Therefore, combinations of artificial intelligence methods and ordinary methods have been presented for optimal reactive power control. Among the most important, one can point to the utilization of fuzzy set theories, expert systems, and artificial neural networks. In [5,6], fuzzy-based reactive power control has been presented: fuzzy sets are applied to the upper and lower limits of the variables and to the coefficients of the objective function. The difficulty with these methods lies in the precise determination of those upper and lower limits and coefficients. Furthermore, because linear programming is used in the original solution, the calculation time is longer than with the application of neural networks. In [7], an expert system has been utilized to compensate the reactive power. The advantage of the expert system approach is its very fast response time. However, these techniques have problems in the case of large power systems, because the expected time saving decreases and, as the knowledge base grows, the search time increases. Also, in [8], utilization of mixed integer programming for reactive power control has shown that the length of time required for full precision in the results is one of the disadvantages of this method.
By trial and error it is shown in  that a network with eight or more neurons is sufficient for this application of ANNs to fault classification and detection in medium-voltage DC shipboard power systems. The network is designed with nine inputs (extracted features) and two outputs. Different methods have been tested on the provided data, and the designed network is general enough for use under different fault and operating conditions. One of the highlighted contributions of that paper is that variations in electrical parameters do not affect the method's performance. For the purpose of power system fault identification, different fault signals were generated through simulations performed in . The fault signals are analyzed with multi-wavelet packets, and the extracted features are used as the ANN input; the network outputs are ten fault types. The number of neurons in the hidden layer was chosen empirically and set to twice the size of the input layer. The potential application of the Teager Energy Operator (TEO) and the Discrete Energy Separation Algorithm (DESA), in combination with the Kalman filter, the Hilbert transform, and the wavelet transform, to power system control and practical areas is highlighted in . In the mentioned paper, TEO and DESA are applied to the detection of different distortions of the voltage waveform.
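The discrete Teager energy operator mentioned above has the standard form ψ[x(n)] = x(n)² − x(n−1)·x(n+1); a minimal sketch (the test signal is illustrative, not the voltage data of the cited paper):

```python
def teager_energy(x):
    """Discrete Teager energy operator: psi[n] = x[n]^2 - x[n-1]*x[n+1].
    For a sampled sinusoid the output is constant (A^2 * sin^2(Omega));
    a sudden distortion in the waveform shows up as a spike."""
    return [x[n] ** 2 - x[n - 1] * x[n + 1] for n in range(1, len(x) - 1)]

import math
# Clean 50 Hz sinusoid sampled at 1 kHz (illustrative signal)
fs, f = 1000.0, 50.0
clean = [math.sin(2 * math.pi * f * n / fs) for n in range(100)]
energy = teager_energy(clean)
print(round(min(energy), 4), round(max(energy), 4))
```

The near-constant output for a clean tone is what makes deviations from it usable as a distortion detector in voltage waveforms.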
Artificial neural networks are computing systems whose architecture is modeled after the brain; in other words, an ANN is a computational system inspired by the structure, processing method, and learning ability of a biological brain. They typically consist of many hundreds of simple processing units wired together in a complex communication network. Each unit or node is a simplified model of a real neuron, which fires (sends off a new signal) if it receives a sufficiently strong input signal from the other nodes to which it is connected. Artificial neural networks (ANNs) have been applied to several civil engineering problems that are difficult to solve or interpret through conventional approaches of engineering mechanics, including tide forecasting, earthquake-induced liquefaction, and wave-induced seabed instability. ANN models can provide reasonable accuracy for civil engineering problems and offer an effective tool for engineering applications. A natural disaster is the effect of a natural hazard (e.g., flood, tornado, hurricane, volcanic eruption, earthquake, heat wave, or landslide). Earthquakes, landslides, tsunamis, and volcanoes are complex physical phenomena that lead to financial, environmental, or human losses, and prediction of these disasters is a complex process that depends on many physical and environmental parameters. Many approaches exist in the literature based on scientific and statistical analysis; data mining techniques can also be used to predict these natural hazards. Unfortunately, successful earthquake predictions are extremely rare. There are two basic categories of earthquake predictions: forecasts (months to years in advance) and short-term predictions (hours or days in advance). Forecasts are based on a variety of research, including the history of earthquakes in a specific region and the identification of fault characteristics (including length, depth
estimations are 7.3°, 4.3° and 28.5° with variance values of 0.2°, 0.03° and 0.4° for stratified, bubble and annular flows, respectively. Most of the interface orientation estimation errors for stratified and bubble flow patterns are smaller than those for annular flows. The largest error, about 36.5°, was produced when estimating the interface orientation of an annular flow. The large mean error in the interface orientation estimations of annular flows is likely due to network confusion arising because the same interface orientation value (of one) is used both for a pipe that is full of water and for all annular flow patterns during network training. Another factor contributing to the larger estimation errors in annular flows is that only a limited number of annular flow patterns could be generated and used to train the MLP estimators, in comparison to stratified and bubble flow patterns.
Cost estimation is the evaluation of many factors, the most prominent of which are labor and material (Smith and Mason, 1997). Many methods and procedures have been developed to calculate the cost of a product, each with its own pros and cons. The majority of the models rely on historical data, which do not consistently provide an accurate picture of current conditions and are not always available; because of these limitations, historical data models often produce results with a low level of accuracy. Statistical models in current use include regression, bottom-up, parametric, and, more recently, neural networks (Layer et al., 2002). Regression is "the mathematical nature of the association between two variables" according to the NASA parametric cost estimation handbook (2004). Regression derives an association by using historical data about the part or process to find the best relationship between the causal attributes and the output value (Walpole et al., 2002).
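As a hedged illustration of the regression approach, the sketch below fits a single-driver least-squares line to hypothetical historical cost data; the cost driver, figures, and rates are invented for illustration.

```python
def fit_line(xs, ys):
    """Ordinary least squares fit of y = a + b*x (single cost driver)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return my - b * mx, b

# Hypothetical historical data: machining hours -> unit cost
hours = [1.0, 2.0, 3.0, 4.0, 5.0]
cost  = [120.0, 155.0, 210.0, 248.0, 300.0]
a, b = fit_line(hours, cost)
print(round(a, 2), round(b, 2))   # intercept and cost-per-hour slope
estimate = a + b * 3.5            # predicted cost for a new 3.5-hour part
```

This is the simplest instance of "finding the best relationship between the causal attributes and the output value"; multi-attribute regression and neural networks generalize the same idea.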
This paper presents an attempt to design an encryption system based on artificial neural networks of the GRNN type that is invariant to the secret keys. The proposed NN has been tested for various numbers of training iterations and for different numbers of hidden neurons and input data. The simulation results are very good, with performance better than that of traditional encryption methods.
analogies, of course, are not sufficient to justify the treatment of the problem with eigenvalue equations, as happens in the physical systems modeled by the Schrödinger equation, and are used in this paper exclusively as a starting point that deserves further study. However, it is a line of research that can clarify intimate aspects of the optimization of an artificial neural network and propose a new point of view on this process. We will demonstrate in the following sections that meaningful conclusions can be reached and that the proposed treatment actually allows one to optimize artificial neural networks by applying the formalism to some datasets available in the literature. A first observation on the model is that it allows the energy of the network to be defined naturally, a concept already used in some types of ANNs, such as Hopfield networks, for which Lyapunov or energy functions can be derived for binary-element networks, allowing a complete characterization of their dynamics; it also permits the concept of energy to be generalized to any type of ANN.
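For concreteness, the Hopfield energy function referred to above can be sketched as follows; the three-neuron pattern and Hebbian weights are an illustrative toy example, not part of this paper's formalism.

```python
def hopfield_energy(weights, states):
    """Lyapunov energy of a Hopfield network:
    E = -1/2 * sum_{i,j} w_ij * s_i * s_j (zero thresholds assumed)."""
    n = len(states)
    return -0.5 * sum(weights[i][j] * states[i] * states[j]
                      for i in range(n) for j in range(n) if i != j)

# Symmetric weights storing the pattern (+1, -1, +1) via a Hebbian rule
pattern = [1, -1, 1]
W = [[0 if i == j else pattern[i] * pattern[j] for j in range(3)]
     for i in range(3)]

# The stored pattern sits at an energy minimum; a flipped bit lies higher
print(hopfield_energy(W, [1, -1, 1]), hopfield_energy(W, [1, 1, 1]))
```

Because this energy never increases under asynchronous updates, it fully characterizes the network's dynamics; generalizing such an energy to arbitrary ANNs is the point the text makes.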
This seminar is about artificial neural network applications in the processing industry. An artificial neural network is a computing system made up of a number of simple, highly interconnected processing elements, which process information through their dynamic state response to external inputs. In recent times the study of ANN models has gained rapid and increasing importance because of their potential to offer solutions to some of the problems in the areas of computer science and artificial intelligence. Instead of performing a program of instructions sequentially, neural network models explore many competing hypotheses simultaneously using parallel nets composed of many computational elements. No assumptions need to be made, because no explicit functional relationship is imposed. The computational elements in neural networks are nonlinear and also fast; this nonlinearity can make the results more accurate than those of other methods. The algorithms presented clearly illustrate how a multilayer neural network identifies the system using forward and
Data mining is the process of analyzing data from different perspectives to uncover hidden patterns and categorize them into useful information. The data are collected and assembled in common areas, such as data warehouses, for efficient analysis. Data mining tools predict future trends and behaviors, thus allowing businesses to make proactive, knowledge-driven decisions. Data mining principles have been around for many years, but with the advent of big data they have become even more prevalent. Big data caused an explosion in the use of more extensive data mining techniques, partly because the size of the information is much larger and partly because the information tends to be more varied and extensive in its very nature and content.