It is now desired to find trade-off optimum points to be used for design; to this end, two methods are explained. A break point, marked C in the figure, can be identified. As Figure 5 makes clear, a small reduction in the weight of the structure relative to the break point's weight leads to a large increase in the deflection of the structure. The key point is that increasing the weight of the structure beyond the break point's weight does not yield a remarkable reduction in deflection. Hence, it is concluded that the trade-off Pareto point should lie in the neighbourhood of the break point. Each method proposes a point in this area; the point proposed by this method is point C. Figures 6 to 8 show the variation of weight and deflection with the face-sheet distance, the face-sheet thickness, and the core thickness, respectively. Figure 6 exposes the fact that increasing the face-sheet distance is beneficial as long as this distance is less than the value of h at the break point. According to this figure, the mentioned point is a desirable design point: decreasing the face-sheet distance below this value causes an unacceptably large increase in deflection, while increasing it beyond this value only adds weight.
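The break-point reasoning above amounts to locating the "knee" of the Pareto front. A minimal sketch of one common heuristic, the point of maximum distance from the chord joining the extreme Pareto points, is shown below; the weight/deflection pairs are illustrative, not the data of Figure 5.

```python
# Illustrative knee-point (break-point) detection on a Pareto front using
# the maximum-distance-to-chord heuristic. The sample data are made up.
import numpy as np

def knee_point(front):
    """Return the index of the point farthest from the chord joining the
    first and last Pareto points (a common knee-detection heuristic)."""
    pts = np.asarray(front, dtype=float)
    start, end = pts[0], pts[-1]
    chord = end - start
    chord /= np.linalg.norm(chord)          # unit vector along the chord
    rel = pts - start
    # perpendicular distance of each point from the chord (2-D cross product)
    dist = np.abs(rel[:, 0] * chord[1] - rel[:, 1] * chord[0])
    return int(np.argmax(dist))

# (weight, deflection) pairs sorted by weight -- illustrative values only
front = [(10, 9.0), (12, 6.5), (14, 4.2), (16, 1.2), (18, 1.0), (20, 0.9)]
idx = knee_point(front)
```

On this toy front the heuristic picks the fourth point, where further weight savings start to cost a disproportionate amount of deflection.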
Sandwich composite panels are increasingly used in the construction of marine vehicles because of their outstanding strength, stiffness, and light weight. However, the use of composite panels complicates the design process because of the large number of design variables involved, including the composite material, topology, and laminate scheme. Hence, this work presents an optimal design of laminated composite sandwich marine structures subjected to underwater explosion. The optimization is performed using a genetic algorithm (GA) coupled with the finite element method (FEM) for the structural analysis. In this procedure, a finite element model of the sandwich composite panel is built, and the coupled acoustic–structural algorithm of the widely used finite element program ABAQUS is used to simulate and analyze the transient dynamic response of a sandwich composite panel loaded by the acoustic pressure shock wave of an underwater explosion (UNDEX). This approach is well suited to enhancing the response of orthotropic and/or laminated composites, which involve many design variables. In the GA, a new approach is considered to improve the evolutionary algorithm for laminate stacking-sequence and material selection of the face layers and core. Simple crossover, modified ply mutation, and a new operator called "ply swap" are applied to achieve these goals.

Keywords: optimization, genetic algorithm, finite element method, sandwich panel, underwater explosion, cavitation
The genetic algorithm is a method for solving both constrained and unconstrained optimization problems that is based on natural selection, the process that drives biological evolution. The genetic algorithm repeatedly modifies a population of individual solutions. At each step, the genetic algorithm selects individuals at random from the current population to be parents and uses them to produce children for the next generation. Over successive generations, the population "evolves" toward an optimal solution. The genetic algorithm can be used to solve a variety of optimization problems that are not well suited to standard optimization algorithms, including problems in which the objective function is discontinuous, non-differentiable, stochastic, or highly nonlinear.
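The loop described above (select parents, recombine, mutate, repeat) can be sketched in a few lines. This is an illustrative toy, not any paper's implementation: it evolves 8-bit strings toward the all-ones pattern ("one-max").

```python
# Minimal genetic-algorithm sketch on the one-max toy problem.
import random

random.seed(0)

def fitness(bits):
    return sum(bits)  # one-max: count of 1-bits, maximized

def select(pop):
    # tournament selection: keep the fitter of two random individuals
    a, b = random.sample(pop, 2)
    return a if fitness(a) >= fitness(b) else b

def crossover(p1, p2):
    cut = random.randrange(1, len(p1))  # single-point crossover
    return p1[:cut] + p2[cut:]

def mutate(bits, rate=0.05):
    # flip each bit independently with a small probability
    return [b ^ 1 if random.random() < rate else b for b in bits]

pop = [[random.randint(0, 1) for _ in range(8)] for _ in range(20)]
for _ in range(40):  # generations
    pop = [mutate(crossover(select(pop), select(pop))) for _ in pop]

best = max(pop, key=fitness)
```

Tournament selection, single-point crossover, and bit-flip mutation are the standard textbook operators; real applications replace `fitness` with the problem-specific objective.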
a total of 144 data patterns obtained from the numerical simulations is used to train the GMDH-type neural networks. However, in order to demonstrate the prediction ability of the evolved GMDH-type neural networks, the data have been divided into two different sets, namely training and testing sets. The training set, which consists of 116 of the 144 input-output data pairs, is used for training the neural network models using the method presented in section two. The testing set, which consists of 28 input-output data samples unseen during the training process, is used only to demonstrate the prediction ability of the evolved GMDH-type neural network models. The GMDH-type neural networks are then applied to these input-output data to find the polynomial models of the lift and drag coefficients with respect to their effective input parameters. In order to genetically design such GMDH-type neural networks as described in the previous section, a population of 25 individuals with a crossover probability of 0.7 and a mutation probability of 0.07 has been evolved over 200 generations, after which no further improvement was achieved for this population size. The structures of the evolved 2-hidden-layer GMDH-type neural networks are shown in Figures 11 and 12, corresponding to the genome representations acbbaabc for the lift coefficient and abbcaaac for the drag coefficient, where a, b, and c stand for reduced frequency, momentum coefficient, and angle w.r.t. the wall, respectively. The corresponding polynomial representation of the model for the lift coefficient is as follows:
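The training/testing split and the polynomial fit of a single GMDH node can be sketched as follows; the data are synthetic placeholders (the real 144 samples are not reproduced here), and each GMDH node fits the standard two-input quadratic y = w0 + w1·xi + w2·xj + w3·xi·xj + w4·xi² + w5·xj² by least squares.

```python
# Sketch: 116/28 split of 144 samples and a least-squares fit of one
# GMDH node's quadratic polynomial. The target below is a synthetic toy.
import numpy as np

rng = np.random.default_rng(0)
xi, xj = rng.uniform(-1, 1, 144), rng.uniform(-1, 1, 144)
y = 1.0 + 2.0 * xi - 3.0 * xi * xj          # toy target, exactly quadratic

# design matrix of the standard two-input GMDH node
A = np.column_stack([np.ones(144), xi, xj, xi * xj, xi**2, xj**2])

train, test = slice(0, 116), slice(116, 144)   # 116 training, 28 testing
w, *_ = np.linalg.lstsq(A[train], y[train], rcond=None)
test_err = np.max(np.abs(A[test] @ w - y[test]))
```

Because the toy target lies exactly in the polynomial's span, the fitted weights recover it and the held-out error is essentially zero; with real simulation data the testing error measures the node's prediction ability.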
The MSQP algorithm is a global optimization algorithm resulting from the combination of the Multi-start algorithm and the SQP method. It retains the principal mechanisms of the SQP method, to which other mechanisms are added to treat multimodal problems. The solutions found by the SQP method during each iteration are refined so that the best (global) solution is always kept and inferior local solutions are discarded. Thus, at the end of the treatment, we obtain the global solution, that is, the solution of the multimodal problem. In the MSQP algorithm, the global-search aspect of the Multi-start algorithm is used to maintain diversity. In fact, since the MSQP algorithm converges to a local solution after a small number of iterations, once a local solution has been found it is not useful to continue the search from that point, and it is better to start a new search.
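The multi-start idea can be sketched as follows; as an assumption for illustration, a numerical gradient descent stands in for the SQP local solver, and a 1-D multimodal test function replaces the real problem: the local search is restarted from several points and only the best local solution is kept.

```python
# Multi-start sketch: restart a toy local solver from several points and
# keep the best local optimum found. Gradient descent stands in for SQP.
import math

def f(x):
    # multimodal test function: local minima near the integers,
    # global minimum at x = 0 with f(0) = 0
    return x * x + 10.0 * (1.0 - math.cos(2.0 * math.pi * x))

def local_search(x0, lr=0.002, steps=3000, h=1e-6):
    """Toy local solver: gradient descent with a central-difference
    numerical derivative (an SQP step would be used in MSQP)."""
    x = x0
    for _ in range(steps):
        g = (f(x + h) - f(x - h)) / (2.0 * h)
        x -= lr * g
    return x

starts = [-3.0, -2.2, -1.4, -0.6, 0.2, 1.0, 2.5]   # deterministic restarts
best = min((local_search(x0) for x0 in starts), key=f)
```

Most restarts fall into local minima near the integers; the restart at 0.2 lies in the basin of the global minimum, so the best-of-all-runs rule recovers it.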
All the parameter values have also been determined after a set of tests to find a balance between solution quality and running time. For example, to obtain a proper range of values for the population size and the starting temperature, a set of initial tests was carried out on the NSF network shown in Figure 2 to compare the performance of MOSAGA with different parameter settings. The optimal Pareto front (PF) of the NSF benchmark problem, which comprises 16 solutions, was found by an exhaustive search method in (Crichigno and Baran 2004b), where the four objectives (2), (3), (4) and (5) were considered in the algorithms. We thus considered the same four objectives in our MOSAGA algorithms in this group of tests. Table 2 presents the maximum, minimum and average number of non-dominated solutions found by each variant of MOSAGA in 50 runs. The setting of population size pop = 50 and starting temperature t_max = 50 provides the best solutions while requiring less computing time, and is thus adopted.
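Counting non-dominated solutions, as reported in Table 2, relies on a Pareto-dominance test. A minimal sketch follows; the objective vectors are made up, and all objectives are treated as minimized.

```python
# Pareto-dominance test and non-dominated filtering (minimization).
def dominates(a, b):
    """True if solution a is at least as good as b in every objective
    and strictly better in at least one (all objectives minimized)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(solutions):
    # keep solutions not dominated by any other candidate
    return [s for s in solutions
            if not any(dominates(o, s) for o in solutions if o is not s)]

# made-up two-objective candidates
candidates = [(4.0, 10), (3.0, 12), (5.0, 9), (4.5, 10), (3.0, 15)]
front = pareto_front(candidates)
```

Here (4.5, 10) and (3.0, 15) are dominated and drop out; the remaining three solutions are mutually non-dominated and would be what a table like Table 2 counts.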
In order to overcome the non-uniform random distribution generated by the traditional population-initialization method, real-coded chaotic initialization based on Pareto optimality is used in this paper. Zhang Xi et al. analyzed the real-number coding method and summarized its characteristics. In general, first, using real numbers for the initial population makes the encoding process simple. Second, real coding eliminates the "Hamming cliff" problem of common binary codes. Finally, real coding is easy to control, especially for chaotic initialization and chaotic mutation. Population initialization is based on the range of the problem variables and the constraints. The algorithm proposed in this paper is based on the triangular tent map:
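The tent-map equation itself is not reproduced above; a minimal sketch assuming the standard tent map, x_{k+1} = 2·x_k for x_k < 0.5 and 2·(1 − x_k) otherwise, is given below. The seed value and variable bounds are illustrative.

```python
# Chaotic population initialization using the standard tent map.
def tent(x):
    # standard tent map on [0, 1]
    return 2.0 * x if x < 0.5 else 2.0 * (1.0 - x)

def chaotic_population(pop_size, dim, lower, upper, seed=0.37):
    """Generate real-coded individuals by iterating the tent map and
    scaling each chaotic value from [0, 1] to [lower, upper]."""
    x = seed
    pop = []
    for _ in range(pop_size):
        ind = []
        for _ in range(dim):
            x = tent(x)
            ind.append(lower + (upper - lower) * x)
        pop.append(ind)
    return pop

pop = chaotic_population(5, 3, lower=-2.0, upper=2.0)
```

The chaotic sequence visits the unit interval more evenly than short pseudo-random runs tend to, which is the motivation for this initialization; note that seeds like 0.0 or 0.5 are degenerate fixed points and should be avoided.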
Buzacott (1975) developed the first EOQ model taking inflationary effects into account. In this model, a uniform inflation rate was assumed for all the associated costs, and an expression for the EOQ was derived by minimizing the average annual cost. Misra (1975, 1979) investigated inventory systems under the effects of inflation. Bierman and Thomas (1977) suggested an inventory decision policy under inflationary conditions. An economic order quantity inventory model for deteriorating items was developed by Bose et al. (1995). The authors developed an inventory model with a linear trend in demand, allowing inventory shortages and backlogging; the effects of inflation and the time value of money were incorporated into the model. Hariga and Ben-Daya (1996) then discussed the inventory replenishment problem over a fixed planning horizon for items with linearly time-varying demand under inflationary conditions. Ray and Chaudhuri (1997) developed a finite-time-horizon deterministic economic order quantity inventory model with shortages, where the demand rate at any instant depends on the on-hand inventory at that instant; the effects of inflation and the time value of money were taken into account. The effects of inflation and the time value of money on an economic order quantity model were discussed by Moon and Lee (2000). Two-warehouse inventory models for deteriorating items with a constant demand rate under inflation were developed by Yang (2004); shortages were allowed and fully backlogged in the models, and some numerical examples were provided for illustration. Models for ameliorating/deteriorating items with a time-varying demand pattern over a finite planning horizon were proposed by Moon et al. (2005), again taking the effects of inflation and the time value of money into account. An inventory model for deteriorating items with a stock-dependent consumption rate and shortages was produced by Hou (2006). The model was developed under the
The ABE process is basically a type of case-based reasoning. However, as argued in , analogy-based techniques have advantages over rule-based systems: for example, users are more willing to accept solutions derived by analogy than solutions derived from hard-to-follow chains of rules or neural nets. Naturally, there are some difficulties with this approach, such as a lack of appropriate analogues and issues with selecting and using them. Choosing an appropriate set of projects to participate in the cost estimation process is very important for any organization seeking to achieve its goals. Several factors are involved in this process, such as the number of investment projects, the existence of various decision criteria (for example, value maximization and risk minimization), and many management activities. Moreover, project selection becomes a difficult task when there are interrelations between projects across various selection criteria and decision-maker preferences, especially when the number of projects is large.
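The analogy-based idea can be illustrated with a minimal nearest-neighbour sketch (all project features and costs below are hypothetical): the cost of a new project is estimated as the mean cost of its k most similar past projects.

```python
# Analogy-based estimation sketch: mean cost of the k nearest analogues.
import math

def estimate(new_proj, history, k=2):
    """history: list of (feature_vector, cost) pairs.
    Rank past projects by Euclidean distance and average the k closest."""
    ranked = sorted(
        history,
        key=lambda case: math.dist(new_proj, case[0]),
    )
    return sum(cost for _, cost in ranked[:k]) / k

# hypothetical analogues: (size, team_size) -> cost
history = [
    ((10.0, 3.0), 120.0),
    ((12.0, 3.0), 140.0),
    ((40.0, 9.0), 600.0),
]
cost = estimate((11.0, 3.0), history, k=2)
```

The two small projects are the nearest analogues of the new one, so the estimate averages their costs; in practice features are normalised and weighted before computing similarity.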
A significant number of extensions to the basic travelling salesman problem (TSP) have been proposed in order to make the mathematical models more realistic. The problem has been studied increasingly in recent decades, and a review of previous work shows that most of these models tend to be probabilistic. Generally, the TSP can be categorized into four main versions: the symmetric travelling salesman problem (sTSP), the asymmetric travelling salesman problem (aTSP), the multiple travelling salesman problem (mTSP), and the probabilistic travelling salesman problem (PTSP). From another point of view, factors such as time windows, prizes and penalties, pickup and delivery, and draft limits have been added to the classic TSP model.
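As a concrete illustration of the basic (symmetric) version, a minimal nearest-neighbour construction heuristic is sketched below; the city coordinates are made up.

```python
# Nearest-neighbour construction heuristic for the symmetric TSP.
import math

def tour_length(tour, cities):
    # sum of edge lengths, closing the cycle back to the start
    return sum(
        math.dist(cities[tour[i]], cities[tour[(i + 1) % len(tour)]])
        for i in range(len(tour))
    )

def nearest_neighbour(cities, start=0):
    """Greedily visit the closest unvisited city until all are visited."""
    unvisited = set(range(len(cities))) - {start}
    tour = [start]
    while unvisited:
        last = cities[tour[-1]]
        nxt = min(unvisited, key=lambda c: math.dist(last, cities[c]))
        tour.append(nxt)
        unvisited.remove(nxt)
    return tour

cities = [(0, 0), (0, 1), (1, 1), (1, 0)]  # corners of the unit square
tour = nearest_neighbour(cities)
```

On the unit square the greedy tour is already optimal (length 4); in general the heuristic only gives an upper bound and is typically refined by local search such as 2-opt.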
prohibited operating zones and valve point opening. A diversity-enhancement mechanism is proposed that spreads the solutions along the Pareto-optimal front so as to cover the whole front. The proposed algorithm has been simulated on three- and six-unit systems with transmission losses and non-smooth objective functions.
Structural analysis is performed on the disc rotor for two materials, stainless steel and aluminum alloy. The material currently used for the disc brake is stainless steel. We replace this material with aluminum alloy, whose density is lower than that of stainless steel, thereby reducing the weight of the disc brake. The stress values obtained in the structural analysis are below the yield stress of aluminum alloy, so using aluminum alloy for the disc brake is safe. Compared with the other materials, the stress value is also lowest for aluminum alloy; moreover, the temperature rise in aluminum is lower than in stainless steel. Using aluminum alloy is therefore better. By reducing the weight of the disc rotor, the braking system becomes less bulky and more compact. Moreover, since the braking force required increases with weight, weight reduction can result in more effective performance of the disc brake.
This study aims to balance the workload of each production line and to reduce the number of replenishments in a pick-and-pass warehouse system, rather than to find a single optimal solution. Thus, we present a multi-objective genetic algorithm that searches for as many Pareto-optimal solutions as possible. The flow of the algorithm is shown in Fig. 1.

Encoding
Parallel coordinate plots also have a representational complexity of O(n), as each polyline crossing the axes of the plot represents a single point in high-dimensional space. Parallel coordinates lose no data in the representation process, which in turn ensures that there is a unique representation for each unique set of data. Unlike Chernoff faces, parallel coordinate plots treat each dimension of the data in the same way, making data points easy to plot. The main weaknesses of this method are that it requires multiple views to see different trade-offs and that it can be difficult to distinguish individual points when many data points are represented.
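The O(n) mapping behind such a plot can be sketched as follows; rendering is omitted, and only the per-axis min-max normalisation that produces each data point's polyline vertices is shown.

```python
# Parallel-coordinates mapping: each row becomes a polyline whose vertex
# on axis j is the row's value in dimension j, normalised per axis.
def polylines(data):
    dims = len(data[0])
    lo = [min(row[j] for row in data) for j in range(dims)]
    hi = [max(row[j] for row in data) for j in range(dims)]
    return [
        # (axis index, normalised height) for every axis; a constant axis
        # (hi == lo) maps to 0 via the "or 1.0" guard
        [(j, (row[j] - lo[j]) / (hi[j] - lo[j] or 1.0)) for j in range(dims)]
        for row in data
    ]

data = [(1.0, 200.0, 0.3), (2.0, 100.0, 0.6), (3.0, 150.0, 0.9)]
lines = polylines(data)
```

Each row yields exactly one polyline, so the representation is unique per data set, and every dimension is handled identically, as the paragraph above notes.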
The modern development of the finite element method began in the 1940s in the field of structural engineering, with work in 1941 and in 1943 that used a framework of line (one-dimensional) elements (bars and beams) for the solution of stresses in continuous solids. Courant proposed a variational formulation for setting up the stresses. Later he presented piecewise interpolation (or shape) functions over triangular subregions making up the whole region as a method of obtaining approximate numerical solutions. In 1947 Levy developed a new method, the force (or flexibility) method, and in another work the same author suggested a further method, the stiffness (or displacement) method.
and text, each of which has its specific characteristics. To describe these components and their characteristics, the study conducted by Yu et al. was used. Table 1 presents the components and the number of bits relating to each component. The RGB system was used for the color characteristic, with 8 bits for each of the three primary colors; each color can therefore take a value between 0 and 255. The font characteristic can be one of four popular fonts, i.e., Cambria, Calibri, Times New Roman, and Arial; therefore 2 bits are used to describe it. The size characteristic can take 4 different values for each of the image and text components; hence, 2 bits are considered for this characteristic. Since the cover design sheet is treated as a grid with two columns and four rows, the position characteristic has eight possible values and is therefore represented by 3 bits. Considering the characteristics of each component, the total size of a chromosome describing a cover design is 60 bits.
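The bit budget above can be illustrated with a small encode/decode sketch for a single hypothetical text component (8 bits per RGB channel, 2 bits for the font, 2 for the size, 3 for the position). The field ordering is an assumption made for illustration, and the 60-bit total in the paper covers all components together, not just this one.

```python
# Bit-level encoding sketch for one hypothetical cover-design component.
# Field widths follow the paragraph above; the ordering is assumed.
FIELDS = [("red", 8), ("green", 8), ("blue", 8),
          ("font", 2), ("size", 2), ("position", 3)]

def encode(values):
    """Pack the named fields into a bit string, fixed-width per field."""
    bits = ""
    for name, width in FIELDS:
        v = values[name]
        assert 0 <= v < 2 ** width, f"{name} out of range"
        bits += format(v, f"0{width}b")
    return bits

def decode(bits):
    """Recover the field values from a bit string produced by encode()."""
    out, pos = {}, 0
    for name, width in FIELDS:
        out[name] = int(bits[pos:pos + width], 2)
        pos += width
    return out

genes = {"red": 255, "green": 128, "blue": 0, "font": 2, "size": 1, "position": 5}
bits = encode(genes)
```

One such component occupies 31 bits; the remaining bits of the 60-bit chromosome would describe the other components' fields in the same fixed-width fashion.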
cluster, variance between clusters, and the Davies-Bouldin (DB) validity index. For the iris data, cluster counts of 3, 4, 5, and 6 are used. The DB index value was selected by choosing the smaller index value, in order to minimize the probability of inter-cluster similarity. A summary of the results is shown in Table 1 below. The selected k-means clustering result for the iris data is 3 clusters, with a minimum DB index of 0.20.
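The Davies-Bouldin index used above can be sketched directly from its definition: DB = (1/k) Σᵢ maxⱼ≠ᵢ (Sᵢ + Sⱼ)/Mᵢⱼ, where Sᵢ is the mean distance of cluster i's points to their centroid and Mᵢⱼ the distance between centroids; lower is better. The toy 2-D clusters below are illustrative stand-ins for the iris clusters.

```python
# Davies-Bouldin index from its definition (lower = better separation).
import numpy as np

def davies_bouldin(clusters):
    """clusters: list of (n_i, d) arrays of points, one per cluster."""
    cents = [c.mean(axis=0) for c in clusters]
    # S_i: mean distance of cluster i's points to its centroid
    scatter = [np.mean(np.linalg.norm(c - m, axis=1)) for c, m in zip(clusters, cents)]
    k = len(clusters)
    total = 0.0
    for i in range(k):
        total += max(
            (scatter[i] + scatter[j]) / np.linalg.norm(cents[i] - cents[j])
            for j in range(k) if j != i
        )
    return total / k

# tight, well-separated clusters vs. loose, adjacent ones
tight = [np.array([[0.0, 0.0], [0.0, 0.2]]), np.array([[5.0, 5.0], [5.0, 5.2]])]
loose = [np.array([[0.0, 0.0], [0.0, 2.0]]), np.array([[0.0, 3.0], [0.0, 5.0]])]
db_tight, db_loose = davies_bouldin(tight), davies_bouldin(loose)
```

Compact, well-separated clusters score lower, which is why the smallest DB value across candidate cluster counts is the one selected.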
The lens is located in front of the vitreous body, which is filled with a hydrogel whose modulus is less than 10 Pa [36,37]. Besides, the lens is connected to the circular zonules [38,39]. When the pressure differential between the anterior chamber and the posterior chamber increased, the circular ciliary zonules deformed significantly, which led to the backward movement of the lens. Therefore, the lens was taken as a rigid body and allowed to move only in the Y direction (Ux = Uz = 0.0). The moving distance of the lens was determined by the anterior chamber depth and the deformation of the cornea. As the apical rise of the cornea varied little when the IOP was more than 15 mmHg, the moving distance was taken as equal to the change in the anterior chamber depth. In the finite element model, the lens was subjected to displacement constraints with the same amplitude as the moving distance of the lens at the current IOP level, which could be calculated from the linear relationship between the anterior chamber depth and IOP shown in Figure 4.