A Tuned Mass Damper (TMD) is a mechanical energy-dissipating system consisting of a mass, a spring, and a damper (see Figure 2.1) that can be tuned to a particular frequency. Generally, TMDs are tuned relative to a particular structural response frequency so that when the structure vibrates at that frequency, the TMD resonates out of phase and produces forces that oppose the structural motion. According to the literature, the TMD concept was first applied by Frahm in 1909 to mitigate the rolling motion of a ship. Since then, TMDs have been widely utilized for vibration control of dynamic systems as well as civil engineering structures such as bridges [28, 29, 30, 31, 32] and buildings [33, 34, 35, 36, 37, 38, 39, 40, 41]. A well-known example of such an application is Taipei 101, a 500-meter-high building designed to withstand typhoons and strong earthquakes, which uses a pendulum as a TMD, suspended from the 92nd floor and reaching down to the 87th floor (see Figure 2.2). Studies show that a single TMD at the top level of a tall building can effectively reduce wind-induced motions [16, 42]. This is because the structural response to wind excitation is generally dominated by the first structural mode, in which the top levels of the building have the largest modal responses compared with the other stories. Therefore, placing a single TMD on the top level, with a tuning frequency close to the fundamental structural frequency, can efficiently reduce the structural responses.
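As a sketch of how such tuning works in practice, the classic Den Hartog formulas (not given in the text above) relate a TMD's optimal frequency and damping to the chosen mass ratio; the structural values used below are purely illustrative, not taken from any of the cited buildings.

```python
import math

def tmd_design(m_structure, f_structure_hz, mass_ratio):
    """Den Hartog's classic TMD tuning for an undamped primary structure.

    mass_ratio (mu) = TMD mass / modal mass of the structure.
    Returns the TMD mass, optimal frequency [Hz], spring stiffness [N/m],
    and optimal damping ratio of the absorber.
    """
    mu = mass_ratio
    m_tmd = mu * m_structure
    # Optimal frequency ratio: f_tmd / f_structure = 1 / (1 + mu)
    f_tmd_hz = f_structure_hz / (1.0 + mu)
    omega_tmd = 2.0 * math.pi * f_tmd_hz
    k_tmd = m_tmd * omega_tmd ** 2
    # Optimal damping ratio: sqrt(3*mu / (8*(1 + mu)^3))
    zeta_opt = math.sqrt(3.0 * mu / (8.0 * (1.0 + mu) ** 3))
    return m_tmd, f_tmd_hz, k_tmd, zeta_opt
```

For a hypothetical tall building with a 1,000-tonne modal mass and a 0.15 Hz fundamental frequency, a 2% mass ratio yields a 20-tonne damper tuned slightly below the structural frequency.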
Traditionally, structures are allowed to undergo severe inelastic deformations under strong earthquakes; this means saving lives at the expense of structures incurring excessive economic losses. Recently, the seismic design of structures has evolved towards a performance-based design philosophy, creating a need for new structural members and systems that possess enhanced deformation capacity and ductility, higher damage tolerance, and recovered or reduced residual deformations. In the 1960s, Buehler and Wiley developed a series of nickel-titanium alloys, with a composition of 53 to 57% nickel by weight, that exhibited an unusual effect: severely deformed specimens of the alloys, with residual strains of up to 15%, regained their original shape after a thermal cycle. This effect became known as the shape-memory effect, and the alloys exhibiting it were named shape-memory alloys (SMAs). It was later found that at sufficiently high temperatures such materials also possess the property of superelasticity, that is, the ability of recovering large deformations in loading-unloading cycles performed at constant temperature.
In GP, candidate solutions are represented using a tree structure (Figure 1). In order to create new solutions, GP uses stochastic operators called genetic operators, typically entitled crossover and mutation. In the standard version of GP, these two operators work as follows: given two solutions (called parents), crossover builds two new solutions (offspring) by swapping a subtree of the first parent with a subtree of the second parent. The subtrees are usually chosen at random. Mutation acts on one solution: given a tree, it creates a new solution by replacing a randomly chosen subtree with a newly generated subtree. These operators act on the structure (i.e., the syntax) of the individuals and ignore the information related to semantics. The application of standard crossover and mutation operators yields new tree structures (Figures 2 and 3).
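The two operators can be sketched on a minimal tree representation (nested tuples standing in for the trees of Figure 1; the operator and terminal sets below are illustrative assumptions, not from the text).

```python
import random

# A tree is either a terminal or a tuple (operator, left_subtree, right_subtree).
OPS = ['+', '-', '*']
TERMINALS = ['x', 'y', 1, 2]

def random_tree(depth):
    """Generate a random expression tree of bounded depth."""
    if depth == 0 or random.random() < 0.3:
        return random.choice(TERMINALS)
    return (random.choice(OPS), random_tree(depth - 1), random_tree(depth - 1))

def subtree_points(tree, path=()):
    """Enumerate every subtree position as a path of child indices."""
    yield path
    if isinstance(tree, tuple):
        yield from subtree_points(tree[1], path + (1,))
        yield from subtree_points(tree[2], path + (2,))

def get(tree, path):
    for i in path:
        tree = tree[i]
    return tree

def replace(tree, path, new):
    """Return a copy of `tree` with the subtree at `path` replaced by `new`."""
    if not path:
        return new
    children = list(tree)
    children[path[0]] = replace(tree[path[0]], path[1:], new)
    return tuple(children)

def crossover(p1, p2):
    """Swap a random subtree of p1 with a random subtree of p2 (two offspring)."""
    a = random.choice(list(subtree_points(p1)))
    b = random.choice(list(subtree_points(p2)))
    return replace(p1, a, get(p2, b)), replace(p2, b, get(p1, a))

def mutate(tree, depth=2):
    """Replace a randomly chosen subtree with a newly generated subtree."""
    a = random.choice(list(subtree_points(tree)))
    return replace(tree, a, random_tree(depth))
```

Note that both operators manipulate only the syntax of the parents, exactly as described above; no semantic information about the expressions is consulted.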
As illustrated in Figure 1.4, the lift force (FL) and drag force (FD) provide a resultant force (FR). The resultant force can be decomposed into a tangential component (FT), which produces rotational torque, and an axial component (FA), which pushes the blade backward. Rotor speed control in pitch-controlled turbines exploits the fact that changing the blade pitch angle (α) with respect to the rotor plane changes the wind angle of attack (β) and the resultant wind force on the blade, and thereby the tip speed (u) at a given wind speed (vW). This consequently alters the tip speed ratio. The tip speed ratio affects the rotor power coefficient and therefore the power captured by the turbine. In pitch-controlled rotors, high pitch angles are chosen to start up the turbine (low tip speed ratio) and increase the rotor speed. On the other hand, at higher wind speeds a lower angle of attack is chosen, by increasing the pitch angle, to decrease the power captured by the blades. Pitch control is also helpful in storm conditions, when the wind generator is shut down for protection; this is achieved by adjusting the blades to the feathered position, which effectively decreases the mechanical power input of the turbine.
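A minimal sketch of the quantities involved, assuming the standard definitions of tip speed ratio (λ = ωR/vW) and captured power (P = ½ρACpvW³); the turbine dimensions and Cp value in the usage example are hypothetical.

```python
import math

def tip_speed_ratio(rotor_speed_rpm, rotor_radius_m, wind_speed_ms):
    """Tip speed ratio: blade tip speed divided by wind speed."""
    omega = rotor_speed_rpm * 2.0 * math.pi / 60.0  # rotor speed in rad/s
    return omega * rotor_radius_m / wind_speed_ms

def captured_power(wind_speed_ms, rotor_radius_m, cp, air_density=1.225):
    """P = 0.5 * rho * A * Cp * v^3; Cp depends on tip speed ratio and pitch."""
    area = math.pi * rotor_radius_m ** 2            # swept rotor area [m^2]
    return 0.5 * air_density * area * cp * wind_speed_ms ** 3
```

For example, a hypothetical 40 m rotor turning at 15 rpm in an 8 m/s wind runs at a tip speed ratio of about 7.9; pitching the blades changes Cp and hence the captured power, which is how the controller limits power at high wind speeds.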
AI is at the centre of a new enterprise to build computational models of intelligence. The main assumption is that intelligence (human or otherwise) can be represented in terms of symbol structures and symbolic operations which can be programmed in a digital computer. There is much debate as to whether such an appropriately programmed computer would be a mind, or would merely simulate one, but AI researchers need not wait for the conclusion to that debate, nor for the hypothetical computer that could model all of human intelligence. Aspects of intelligent behavior, such as solving problems, making inferences, learning, and understanding language, have already been coded as computer programs, and within very limited domains, such as identifying diseases of soybean plants, AI programs can outperform human experts. Now the great challenge of AI is to find ways of representing the commonsense knowledge and experience that enable people to carry out everyday activities such as holding a wide-ranging conversation, or finding their way along a busy street. Conventional digital computers may be capable of running such programs, or we may need to develop new machines that can support the complexity of human thought.
Owing to the ability of FISs to generate fuzzy rules from a given input-output data set, two types of FISs are widely used. The two best-known FISs, labeled by their originators, are the Mamdani-type FIS developed by Mamdani and the Takagi-Sugeno-Kang (TSK) type FIS proposed by Takagi and Sugeno. The two types differ somewhat in the way the outputs are determined, in accordance with the construction of the rule consequent. The main distinction between them is in generating the crisp value: the outputs are fuzzy in the Mamdani type and crisp in the TSK type. Moreover, the Mamdani-type FIS applies defuzzification techniques, whereas the TSK-type FIS applies weighted-average or weighted-sum methods for calculating the crisp output variables. In the Mamdani-type FIS both the antecedent and the consequent are linguistic (fuzzy sets), whereas in the TSK-type FIS the antecedent consists of fuzzy sets and the consequent is comprised of linear equations. Zaher et al. outline a comparison between the Mamdani and Sugeno fuzzy methods to assess which approach provides the best performance for predicting fund prices in the Egyptian market. Mamdani and Takagi-Sugeno-Kang models have been widely used for solving problems such as decision analysis, expert systems, prediction (forecasting), data classification, image processing, optimization, control, and system identification. Further comparisons and transformations between the two types are discussed in the literature.
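A minimal sketch of the TSK-style weighted-average output described above; the two rules, membership functions, and linear consequents are hypothetical, chosen only to show that the crisp output needs no defuzzification step.

```python
def tsk_inference(x, rules):
    """Zero-order TSK sketch: each rule is (membership_fn, consequent_fn).

    The crisp output is the firing-strength-weighted average of the
    rule consequents -- in contrast with Mamdani, no defuzzification
    of an output fuzzy set is needed.
    """
    weights = [mu(x) for mu, _ in rules]
    outputs = [f(x) for _, f in rules]
    total = sum(weights)
    if total == 0:
        raise ValueError("no rule fires for this input")
    return sum(w * y for w, y in zip(weights, outputs)) / total

# Hypothetical two-rule system on the input range [0, 5]:
def low(x):  return max(0.0, 1.0 - x / 5.0)       # "x is low"
def high(x): return max(0.0, min(1.0, x / 5.0))   # "x is high"

rules = [
    (low,  lambda x: 2.0 * x + 1.0),    # IF x is low  THEN y = 2x + 1
    (high, lambda x: 0.5 * x + 10.0),   # IF x is high THEN y = 0.5x + 10
]
```

At x = 2.5 both rules fire with strength 0.5 and the crisp output is simply the average of the two linear consequents.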
Experimental results indicate the existence of azimuthal (toroidal) plasma rotation in tokamaks subjected to neutral beam heating. In field-reversed configurations, azimuthal rotation is responsible for a type of instability that may destroy plasma confinement. One possible approach to investigate the effects of rotation on MHD equilibrium and stability properties would be to obtain numerical solutions of the 3-dimensional ideal MHD equations. Magnetic flux surfaces rotate rigidly with the plasma, according to Alfvén's theorem, and are characterized by poloidal flux and current functions that satisfy an elliptic partial differential equation which, in the limit of vanishing rotation, reduces to the Grad-Shafranov equation. For these purposes it is convenient to apply a fuzzy-mathematics approach. In this case, fuzzy Markovian processes describe the behavior of the solution of the Fokker-Planck equation, as a generalization of Brownian motion, for the Vlasov-Poisson-Fokker-Planck equation.
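For reference, the Grad-Shafranov equation to which the rotating-equilibrium equation reduces can be written, in cylindrical coordinates (R, φ, Z), as

```latex
\Delta^{*}\psi \;\equiv\; R\,\frac{\partial}{\partial R}\!\left(\frac{1}{R}\,\frac{\partial \psi}{\partial R}\right) + \frac{\partial^{2}\psi}{\partial Z^{2}} \;=\; -\,\mu_{0} R^{2}\,\frac{dp}{d\psi} \;-\; F\,\frac{dF}{d\psi},
```

where ψ is the poloidal flux, p(ψ) the pressure profile, and F(ψ) = R B_φ the poloidal current function.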
The first work that is now generally recognized as AI was done by Warren McCulloch and Walter Pitts (1943). They drew on three sources: knowledge of the basic physiology and function of neurons in the brain; a formal analysis of propositional logic due to Russell and Whitehead; and Turing’s theory of computation. They proposed a model of artificial neurons in which each neuron is characterized as being “on” or “off,” with a switch to “on” occurring in response to stimulation by a sufficient number of neighboring neurons. The state of a neuron was conceived of as “factually equivalent to a proposition which proposed its adequate stimulus.” They showed, for example, that any computable function could be computed by some network of connected neurons, and that all the logical connectives (and, or, not, etc.) could be implemented by simple net structures. McCulloch and Pitts also suggested that suitably defined networks could learn. Donald Hebb (1949) demonstrated a simple updating rule for modifying the connection strengths between neurons. His rule, now called Hebbian learning, remains an influential model to this day.
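The McCulloch-Pitts construction of the logical connectives can be sketched directly; the weights and thresholds below are one standard textbook choice, not taken from the 1943 paper itself.

```python
def mp_neuron(inputs, weights, threshold):
    """A McCulloch-Pitts unit: fires ("on") iff the weighted input sum
    reaches the threshold, otherwise stays "off"."""
    s = sum(w * x for w, x in zip(weights, inputs))
    return 1 if s >= threshold else 0

# Each connective as a single unit over binary (0/1) inputs:
def AND(a, b): return mp_neuron([a, b], [1, 1], 2)   # needs both inputs on
def OR(a, b):  return mp_neuron([a, b], [1, 1], 1)   # needs at least one on
def NOT(a):    return mp_neuron([a], [-1], 0)        # inhibitory input
```

Since any Boolean function can be built from these connectives, networks of such units suffice to compute any Boolean function, which is the core of McCulloch and Pitts's result.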
Reinforcement learning raises its own problems: when systems become very capable and general, an effect similar to Goodhart's Law is likely to occur, in which sophisticated agents attempt to manipulate or directly control their reward signals. This motivates research areas that could improve our ability to engineer systems that can learn or acquire values at run-time. For example, inverse reinforcement learning may offer a viable approach, in which a system infers the preferences of another actor, assumed to be a reinforcement learner itself. Other approaches could use different assumptions about underlying cognitive models of the actor whose preferences are being learned (preference learning), or could be explicitly inspired by the way humans acquire ethical values. As systems become more capable, more epistemically difficult methods could become viable, suggesting that research on such methods could be useful; for example, Bostrom reviews preliminary work on a variety of methods for specifying goals indirectly.
The International Monitoring System (IMS) exemplifies conventional processing of an extremely high volume of sensor data. By capturing data in a worldwide network of measurement stations, the IMS can diagnose nuclear explosions. Seismographs, hydroacoustic sensors, infrasonic sound stations and radionuclide detectors generate large amounts of data. Central data processing in Vienna analyzes and reduces the raw sensor data in order to distinguish signals from the constant noise and to classify them as either “nuclear explosion” or “not nuclear explosion” (Russell et al. 2010: 32). This process cannot yet be performed completely automatically. Analysts still have to laboriously rework the results of the automatic systems, since the automatic processing is prone to errors due to heavy noise, incorrect classification or false associations. In order to further optimize the results, new methods were proposed by various project groups as early as 2009 at the International Scientific Studies (ISS) conference in Vienna. The project groups agreed unanimously on the problem: the data recorded by the sensors of the IMS is too complex for conventional statistical analysis based on linear discrimination – a linear function that defines the boundary between groups. All four project groups are testing the ability of MLpAI to make non-linear discriminations. The project groups have already used classified data from the last ten years as “ground truth” (a training set for the machine learning algorithm). The prototypes analyzed seismic data (Kleiner et al. 2009), hydroacoustic data (Tuma/Igel 2009), infrasound data (Procopio et al. 2009), and radionuclide data (Stocki et al. 2010). All prototypes were able to make the analysts’ task easier and deliver more accurate automated evaluations. Nevertheless, the researchers came to the conclusion that the systems are not yet operational and would need to be further refined.
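The limitation of linear discrimination can be illustrated on a toy XOR-like dataset (entirely hypothetical, not IMS data): no linear boundary separates the two classes, while a simple non-linear feature map makes them separable.

```python
# XOR-like toy data: no line w1*x1 + w2*x2 + b = 0 splits the two classes.
data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]

def linear(x, w, b):
    """Linear discrimination: a linear function defines the class boundary."""
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b >= 0 else 0

def accuracy(classify, data):
    return sum(classify(x) == y for x, y in data) / len(data)

# Brute-force a small grid of linear boundaries: none exceeds 75% accuracy.
grid = [i / 4 for i in range(-8, 9)]
best_linear = max(
    accuracy(lambda x, w=(w1, w2), b=b: linear(x, w, b), data)
    for w1 in grid for w2 in grid for b in grid
)

# A non-linear feature map (appending the product x1*x2) makes the same
# data separable by a "linear" boundary in the lifted feature space.
def phi(x):
    return (x[0], x[1], x[0] * x[1])

def nonlinear(x):
    return linear(phi(x), (1, 1, -2), -0.5)
```

This is the essence of the project groups' shared diagnosis: the IMS data demands discriminators that can draw non-linear boundaries between the classes.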
Overall, these research projects and further developments (Arora et al. 2013) have demonstrated the high potential of MLpAI for verification in arms control based on the IMS data.
Howarth developed a framework for task-level programming of assembly systems in which the realisation of self-learning manipulative skills plays a key role in ensuring autonomous operation. An approach to the control of peg-in-hole insertions is presented, based on a multi-layer perceptron (MLP) network trained with error back-propagation. The applicability of the technique to the assembly problem in 4 DOF is first successfully evaluated by training the network using supervised learning; this required the prior establishment of desired input-output pairs. The proposed ANN controller's layout consists of a three-layered MLP with 5 inputs, including the three forces acting on the peg (Fx, Fy, Fz), the torque around the peg's axis (Mz) and the required direction of insertion α. The output layer features 4 nodes representing velocity commands for linear motions along the peg axes (δx, δy, δz) and rotation around the direction of insertion (δθz).
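The controller's forward pass can be sketched as follows; the hidden-layer size, random weights, and sigmoid activations are assumptions for illustration, not the trained network from Howarth's work.

```python
import math
import random

def mlp_forward(x, weights):
    """Forward pass of a fully-connected MLP with sigmoid units.

    `weights` is a list of layer matrices; each matrix row holds one
    neuron's weights with a trailing bias term.
    """
    a = list(x)
    for layer in weights:
        a = [1.0 / (1.0 + math.exp(-(sum(w * v for w, v in zip(row, a)) + row[-1])))
             for row in layer]
    return a

def rand_layer(n_out, n_in, rng):
    """Random layer matrix: n_out neurons, each with n_in weights + 1 bias."""
    return [[rng.uniform(-0.5, 0.5) for _ in range(n_in + 1)]
            for _ in range(n_out)]

# Hypothetical layout matching the description: 5 inputs (forces, torque,
# insertion direction) -> 8 hidden units -> 4 velocity-command outputs.
rng = random.Random(1)
weights = [rand_layer(8, 5, rng), rand_layer(4, 8, rng)]
```

Back-propagation would then adjust `weights` to reproduce the desired input-output pairs established during supervised training.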
An electric motor is a device for converting electrical power into mechanical power. An electric motor will try to deliver the required power even at the risk of self-destruction. In on-site use, motors often suffer overload failures for various reasons. Motor overload causes the motor to overheat, which can burn the motor out and cause significant economic losses. Therefore, to prevent this from happening, a smart control method is needed to overcome the motor overload problem. One of the parameters affected in case of overload is the motor torque. Torque is one of the important parameters of a motor; it is related to the rotational speed through the delivered mechanical power (P = Tω).
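A minimal sketch of the torque-power-speed relation and a naive overload check; the ratings and threshold in the usage example are illustrative, not from any particular motor.

```python
import math

def motor_torque(power_w, speed_rpm):
    """Mechanical torque delivered at a given power and speed: T = P / omega."""
    omega = speed_rpm * 2.0 * math.pi / 60.0   # angular speed in rad/s
    return power_w / omega

def overload_check(load_torque, rated_torque, margin=1.0):
    """Flag an overload when the load torque exceeds the rated torque."""
    return load_torque > margin * rated_torque
```

For example, a motor delivering 1 kW at 955 rpm produces about 10 N·m of torque; a protection scheme would trip when the measured load torque exceeds the rated value.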
(HP), also called clusters, represent groups of similar patterns with respect to an inter-cluster separation criterion or, alternatively, an intra-cluster homogeneity criterion. Both approaches rely on a priori selected distances (typically lp-norm metrics) measuring the inter-cluster similarity or the intra-cluster dissimilarity, respectively. According to the criterion to optimize, a number of partitional clustering algorithms have been proposed in the literature. Some algorithms are indicated in the case of numerical attributes describing the patterns to be clustered; others are more suitable when dealing with mixed attributes. Most clustering procedures take the number K of clusters as an input parameter (e.g., k-means). There also exist approaches not requiring the number of clusters as an input parameter (e.g., the Clique Partitioning Problem). The selection of a suitable partitional clustering algorithm depends on the characteristics of the user profiling problem. In this work, we consider patterns I(Q(u)) including all numeric attributes (see Table 2.2), and adopt the k-means algorithm for selecting the optimal partition in k clusters by minimizing the so-called Sum of Squares Error (SSE), i.e. the sum of the squared Euclidean distances between each pattern I(Q(u)) and its cluster centroid. This choice is particularly suitable, since the Euclidean distance captures the differences between patterns once their entries are normalized (in our case, we consider a normalization in [0, 1]). The selection of the parameter k of the clustering algorithm is a key feature of our approach.
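A bare-bones k-means computing the SSE criterion might look as follows; this is a didactic sketch, not the implementation used in this work, and the sample points below are hypothetical.

```python
import random

def dist2(a, b):
    """Squared Euclidean distance between two patterns."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

def mean(pts):
    """Component-wise mean (the cluster centroid)."""
    return tuple(sum(p[d] for p in pts) / len(pts) for d in range(len(pts[0])))

def kmeans(points, k, iters=50, seed=0):
    """Plain k-means minimizing the Sum of Squares Error (SSE)."""
    rng = random.Random(seed)
    centroids = rng.sample(points, k)
    for _ in range(iters):
        # Assignment step: each pattern goes to its nearest centroid.
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k), key=lambda i: dist2(p, centroids[i]))
            clusters[nearest].append(p)
        # Update step: move each centroid to the mean of its cluster.
        centroids = [mean(c) if c else centroids[i]
                     for i, c in enumerate(clusters)]
    sse = sum(min(dist2(p, c) for c in centroids) for p in points)
    return centroids, sse
```

Running the algorithm for several values of k and comparing the resulting partitions is exactly the kind of analysis used below to select K.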
In fact, even if the clustering procedure is completely unsupervised, we deal with the problem of selecting a suitable number K of clusters by analyzing the results of the clustering obtained with different values of the input parameter k and by selecting the value K corresponding to the minimum number of distinct optimality criteria ocr(s ∗ (Q(u))) represented in each cluster. In this way, similar users belong to the same cluster W i and share the same user
In mass customization, customers select the options and alternatives they prefer to specify their own products. This customization leads to a large product diversity to manage, as every product may be different. For manufacturing companies, it induces small sets of variable quantities of different products to manufacture. This diverse manufacturing model has a large impact on the manufacturing process and severely affects product quality, lead time, and cost. The manufacturing plant layout has to be designed taking all these characteristics into account. When all products are similar, manufacturing plant layouts are relatively easy to design. However, difficulties arise when products within a product family are different and require specific manufacturing operations. In a product family, all final products may differ depending on the set of options and alternatives selected by diversified customers. At the manufacturing level, these differences may require extra manufacturing or control operations. When a manufacturing company produces many sets of variable quantities of different products, the plant layout may be problematic to define. It is estimated that 20-50% of the cost of a manufactured part is constituted by material handling, and handling is the activity most affected by the layout design. Thus efficient plant layout design is essential for low production costs and higher competitiveness; effective plant layout design can reduce these costs to 10-30%. With the need to manufacture a large number of products in small quantities, it is difficult for plant layout designers and production managers to optimize the plant layout quickly for new products. Hence, there is a need to use Artificial Intelligence techniques based on a soft computing approach to empower plant layout designers to optimize the process workflow within the plant and quickly reconfigure workflows for new products.
D. Automated Test Case Generation Based on Coverage Analysis by Tim A. Majchrzak & Herbert Kuchen. This paper focuses on automated generation of unit tests. The authors developed a tool that symbolically executes Java byte code to search for execution paths through a program; choice-point generation, constraint solving, and backtracking are used for this search. They propose a novel way to reduce the number of test cases, based on each case's contribution to the global coverage of the data flow and control flow. The basic aim of the work is the elimination of repeated and meaningless test cases. For example, a method for handling AVL trees should only be tested with AVL trees rather than with arbitrary trees.
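The coverage-based reduction idea can be sketched as a greedy selection that keeps only tests contributing new coverage; the data structures below are illustrative, not the authors' actual tool.

```python
def reduce_suite(coverage):
    """Greedy reduction: keep only tests that add to global coverage.

    `coverage` maps test name -> set of covered items (branches,
    def-use pairs, ...). Repeatedly pick the test contributing the most
    still-uncovered items; tests contributing nothing new are dropped.
    """
    remaining = dict(coverage)
    covered, kept = set(), []
    while remaining:
        name, gain = max(
            ((t, len(items - covered)) for t, items in remaining.items()),
            key=lambda pair: pair[1],
        )
        if gain == 0:            # every leftover test is redundant
            break
        kept.append(name)
        covered |= remaining.pop(name)
    return kept, covered
```

A test whose covered items are already covered by earlier selections is exactly the kind of "repeated and meaningless" case the paper aims to eliminate.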
Name and range checking is automatically triggered whenever the expert physician introduces a new term or value within the description of an action in a guideline, and forces her/him to use only terms/values that have already been defined within the Clinical DB. Whenever the expert physician introduces a node or arc, different controls are automatically activated to check whether the new element is consistent with several logical design criteria. For example, alternative arcs may only exit from a decision action. Finally, a “semantic” checking regards the consistency of temporal constraints in the guideline. This checking is automatically triggered whenever the expert physician saves a guideline. In fact, alternative sequences of actions and sub-actions may form graph structures, and the constraints on the minimum and maximum durations of actions and minimum and maximum delays between actions have to be propagated throughout the graph, to verify consistency.
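Such propagation of minimum/maximum durations and delays is commonly cast as a Simple Temporal Network, whose consistency reduces to negative-cycle detection; the sketch below is an assumption about how such a check could be implemented, not the system's actual algorithm.

```python
def stn_consistent(n, edges):
    """Consistency check for a Simple Temporal Network via Bellman-Ford.

    Time points 0..n-1 stand for action start/end instants; an edge
    (u, v, w) encodes the constraint t_v - t_u <= w. A maximum duration
    or delay gives a positive-weight edge; a minimum one gives a
    negative-weight edge in the opposite direction. The constraint set
    is consistent iff the graph contains no negative cycle.
    """
    dist = [0.0] * n                     # implicit source at distance 0
    for _ in range(n + 1):
        changed = False
        for u, v, w in edges:
            if dist[u] + w < dist[v]:
                dist[v] = dist[u] + w
                changed = True
        if not changed:
            return True                  # fixed point: no negative cycle
    return False                         # still relaxing: inconsistent
```

For example, an action whose duration must lie between 5 and 10 time units gives the consistent edge pair (start, end, 10) and (end, start, -5); swapping the bounds yields a negative cycle and hence an inconsistent guideline.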
Artificial intelligence could add value to the review of disaggregated patient records. Where unique personal identifiers are commonly unavailable, an intelligent system could still trace an individual person's “signature” over time within large datasets. This would be especially important in TB surveillance, for instance to trace people with clinical manifestations suggestive of TB, disease relapse, or adverse drug reactions, and to complete the monitoring of mortality and the linkage with HIV. In addition, this approach could identify predictors of treatment failure, which in turn could be useful in patient care as well as TB drug development [24, 25]. Artificial intelligence has also been proposed as a means to detect outbreaks of TB early on.
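A toy sketch of such signature tracing without unique identifiers, using fuzzy name matching; the record fields, threshold, and example names are illustrative, and a real surveillance system would combine many more attributes in a principled record-linkage model.

```python
from difflib import SequenceMatcher

def similarity(a, b):
    """Fuzzy string similarity in [0, 1] (difflib ratio)."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def same_person(rec1, rec2, name_threshold=0.85):
    """Heuristic 'signature' match on (name, birth_date) records:
    exact birth date plus a fuzzy name match above the threshold."""
    name1, dob1 = rec1
    name2, dob2 = rec2
    return dob1 == dob2 and similarity(name1, name2) >= name_threshold
```

Records such as ("Jon Smith", "1980-01-02") and ("John Smith", "1980-01-02"), which would not link on an exact key, are matched by the fuzzy rule, allowing the same person's episodes to be traced over time.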
Abstract - Artificial Intelligence (AI) is the capability of a machine to function as if it could think like a human. In the automotive industry, AI plays an important role in developing vehicle technology. Vehicular automation involves the use of mechatronics and, in particular, AI to assist in the control of the vehicle, thereby relieving the driver of responsibilities or making a responsibility more manageable. Autonomous vehicles sense the world with techniques such as laser, radar, lidar, the Global Positioning System (GPS), and computer vision. This paper discusses several AI methodologies often used by autonomous cars: fuzzy logic, neural networks, and swarm intelligence. Self-driving cars are not yet in widespread use, but their introduction could produce several direct advantages: fewer crashes, reduced oil consumption and air pollution, elimination of redundant passengers, etc., pointing to a future in which everyone can use a car and the way we use our highways changes.