The design of high lift aircraft configurations through multi-objective optimisation

Furthermore, the flow is here considered to be fully turbulent on both the lower and upper surfaces of the aerofoil. This simplification will affect the resultant lift and drag coefficients of the analysed configurations, as illustrated by Rumsey et al. [51]. In their work the authors showed the importance of specifying the transition location correctly in order to compute the boundary layer's velocity profiles accurately, although no method was developed to estimate the transition location itself. Similar results have been shown for 3D configurations by Fares and Nolting [52] and by Eliasson et al. [53, 54], where laminar conditions were imposed on some regions of the RANS domain. However, in the work presented here, laminar-to-turbulent transition is neglected with a view to achieving a more consistent solution within the optimisation process. Although recent applications of transition prediction techniques (e.g. the laminar boundary layer and e^N-database method [55], or the correlation-based γ–Re_θ transition model [56]) to both 2D test cases [55] and 3D ones [56, 57] have shown promising results, the dependence of such methods on mesh resolution and test-case geometry might result in incorrect transition locations for some of the analysed designs. Such an inconsistency within an optimisation process might mislead the optimisation algorithm toward spurious optimum regions. An example of such a case is presented by Steed [56], where the transition model predicts early separation of the analysed high-lift configuration, whereas the fully turbulent case continues to follow the wind tunnel data.

Multi-objective optimisation of water distribution systems design using metaheuristics

Long term memory may be employed to reduce the probability of revisiting solutions [101]. It may also be required as a mechanism for escaping local optima, especially in cases of extended valleys in objective space. One technique in use is an additional tabu list which prohibits moves whose frequency of occurrence has exceeded a predefined threshold; another is the penalization of solution fitness in proportion to the frequency of its appearance in the search. This penalization method may also be of benefit in cases where the objective function assumes a limited number of values. An obvious problem is the selection of an appropriate penalty factor, which should be neither too small nor too large. Another mechanism for achieving long term memory is to force the execution of moves which have not been performed for a large number of iterations. It is possible to oscillate periodically between long term and short term memory, or to use the two simultaneously. Such oscillation may be understood in terms of alternating periods of diversification (long term memory) and intensification (short term memory) [73, 101]. Tabu search was applied to WDS design optimisation by Fanni et al. [85] in 2000, and by Cunha and Ribeiro [47] in 2004, who used a neighbourhood wherein an individual differs from its neighbour by exactly one pipe whose diameter is one size larger or smaller. They employed moves to increase and decrease pipe sizes in periods of diversification and intensification, making use of frequency memory, including the number of times a pipe's diameter was changed, the number of times it was assigned a particular value, and the number of iterations for which a pipe maintained its current size. Although they were able to achieve high quality solutions in times competitive with alternative methods, such as genetic algorithms, no general conclusions could be drawn about which metaheuristic is the most appropriate for design [47]. Tabu search is not considered further as a solution methodology for WDS design in this dissertation, as the focus is on population-based metaheuristics, which lend themselves more readily to multi-objective optimisation.
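As a rough illustration of the move scheme and frequency memory just described (a sketch, not Cunha and Ribeiro's actual implementation), the following Python fragment performs one-pipe diameter moves under a recency tabu list and a frequency-based long term penalty; the diameter catalogue and the cost function are hypothetical placeholders for a real pipe-cost model with hydraulic constraints.

```python
# Sketch of tabu search with short term (recency) and long term (frequency)
# memory for discrete pipe sizing. DIAMETERS and cost() are hypothetical.
DIAMETERS = [100, 150, 200, 250, 300, 400, 500]  # catalogue sizes (mm)

def cost(design):
    # Placeholder: a real WDS model would price the pipes and penalise
    # hydraulic infeasibility computed by a network solver.
    return sum(design)

def neighbours(design):
    # A neighbour differs in exactly one pipe, one catalogue size up or down.
    for i, d in enumerate(design):
        k = DIAMETERS.index(d)
        for nk in (k - 1, k + 1):
            if 0 <= nk < len(DIAMETERS):
                yield i, DIAMETERS[nk]

def tabu_search(design, iters=500, tenure=10, freq_weight=5.0):
    best, best_cost = list(design), cost(design)
    tabu = {}   # (pipe index, diameter) -> iteration until which the move is tabu
    freq = {}   # (pipe index, diameter) -> how often this assignment was made
    for it in range(iters):
        candidates = []
        for i, nd in neighbours(design):
            trial = list(design)
            trial[i] = nd
            # Long term memory: penalise frequently repeated assignments.
            c = cost(trial) + freq_weight * freq.get((i, nd), 0)
            # Short term memory with aspiration: a tabu move is allowed
            # if it improves on the best solution found so far.
            if tabu.get((i, nd), -1) < it or c < best_cost:
                candidates.append((c, (i, nd), trial))
        if not candidates:
            break
        c, (i, nd), trial = min(candidates)
        tabu[(i, design[i])] = it + tenure  # forbid reverting pipe i for a while
        freq[(i, nd)] = freq.get((i, nd), 0) + 1
        design = trial
        if cost(design) < best_cost:
            best, best_cost = list(design), cost(design)
    return best, best_cost

print(tabu_search([300] * 8))  # eight pipes, all initially 300 mm
```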

Parametric design and multi-objective optimisation of containerships

ABSTRACT: The introduction of new regulations by the International Maritime Organisation (IMO), the fluctuation of fuel price levels, and the continuous endeavour of the shipping industry for economic growth and profits have led the shipbuilding industry to explore new and cost-efficient designs for various types of merchant ships. In this respect, proper use of modern computer-aided design/computer-aided engineering (CAD/CAE) systems extends the design space while generating competitive designs in short lead time. The present paper deals with the parametric design and optimisation of containerships. The developed methodology, which is based on the CAESES/Friendship-Framework software system, is demonstrated by the conceptual design and multi-objective optimisation of a midsized, 6,500 TEU containership. The methodology includes a complete parametric model of the ship's external and internal geometry and the development and coding of all models necessary for the determination of the design constraints and the design efficiency indicators, which are used for the evaluation of parametrically generated designs. The indicators defining the objective functions of the multi-objective optimisation problem are herein the energy efficiency design index (EEDI), the required freight rate (RFR), the ship's zero ballast (Z.B.) container box capacity and the ratio of the above to below deck number of containers. The resulting multi-objective optimisation problem is solved by use of genetic algorithms, and clear Pareto fronts are generated.


Multi-objective optimisation using surrogate models for the design of VPSA systems

Vacuum/pressure swing adsorption is an attractive and often energy-efficient separation process for some applications. However, there is often a trade-off between the different objectives: purity, recovery and power consumption. Identifying those trade-offs is possible through multi-objective optimisation methods, but this is computationally challenging due to the size of the search space and the need for high-fidelity simulations arising from the inherently dynamic nature of the process. This paper presents the use of surrogate modelling to address the computational requirements of the high-fidelity simulations needed to evaluate alternative designs. We present SbNSGA-II ALM, a robust and fast surrogate-based multi-objective optimisation method built on kriging surrogate models and NSGA-II with the Active Learning MacKay (ALM) design criterion. The method is evaluated by application to an industrially relevant case study: a two-column, six-step system for CO2/N2 separation. A five-fold reduction in computational effort is observed.
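As a hedged sketch of the surrogate-in-the-loop idea (under stated assumptions, not the paper's SbNSGA-II ALM implementation): kriging surrogates stand in for the expensive simulator, and new high-fidelity evaluations are placed where the surrogates are most uncertain, which is the spirit of an ALM-style criterion. Here expensive_sim is a hypothetical stand-in for a VPSA simulation, and a random candidate scan stands in for NSGA-II's search.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

rng = np.random.default_rng(0)

def expensive_sim(x):
    # Hypothetical stand-in for a high-fidelity VPSA simulation
    # returning (purity, power) for a design vector x in [0, 1]^2.
    purity = 1.0 - 0.5 * np.sum((x - 0.7) ** 2)
    power = np.sum(x ** 2)
    return np.array([purity, power])

X = rng.random((10, 2))                      # initial design of experiments
Y = np.array([expensive_sim(x) for x in X])

for _ in range(20):                          # surrogate-update loop
    # One kriging (GP) surrogate per objective.
    gps = [GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True)
           .fit(X, Y[:, k]) for k in range(Y.shape[1])]
    cand = rng.random((2000, 2))             # stand-in for NSGA-II's search
    # ALM-style infill: pick the candidate with the largest total
    # predictive standard deviation across the objective surrogates.
    std = sum(gp.predict(cand, return_std=True)[1] for gp in gps)
    x_new = cand[np.argmax(std)]
    X = np.vstack([X, x_new])
    Y = np.vstack([Y, expensive_sim(x_new)])

print(f"{len(X)} expensive evaluations used to train the final surrogates")
```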

An investigation of higher-order multi-objective optimisation for 3D aerodynamic shape design

With a modified implementation of the Intensification Memory (IM) of the PCMOTS algorithm we can rapidly explore complicated design spaces and also identify the extreme optimum areas for each objective function. At the same time, though, the Pareto front is not very rich, reflecting incomplete exploration of the compromise optimum design area, which penalises the quality of the Pareto set. It is anticipated that, with proper tuning of the IM size parameter, PCMOTS will reveal trade-off surfaces of the same quality as those revealed by PRMOTS, but at considerably reduced computational cost. This characteristic is extremely important for real-world computational engineering design problems, in particular when robustness criteria are considered during the design process.

Comparing Design Of Experiments and Evolutionary Approaches To Multi-Objective Optimisation Of Sensornet Protocols

A set of three typical sensornets was defined and reused for all experiments. Each sensornet consisted of 250 static motes of identical capability modelled on the Crossbow MICA2 mote. Motes were distributed randomly within a square of side length 21 km, yielding a geographic distribution of uniform planar density. The resulting average degree of connectivity, the number of peer nodes with which a given node can feasibly communicate, is approximately 20, which is typical of sensornets [8]. All internodal communication was defined to occur through anisotropic radio broadcast in an obstacle-free vacuum. Signal propagation and attenuation were modelled using the Friis free space model with an exponent of 2.0. The simulated motes ran a simulated distributed sensing application in which every node periodically produces a small data packet. The destination of each packet is randomly selected from all motes in the network to prevent bias from any implicit structure in the mote distribution.
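The Friis calculation behind that connectivity figure is simple enough to sketch; the transmit power, frequency and receiver sensitivity below are illustrative assumptions, not the MICA2 parameters used in the paper.

```python
import math

def friis_received_power_dbm(pt_dbm, gt_dbi, gr_dbi, freq_hz, dist_m, exponent=2.0):
    """Received power under a log-distance Friis model.

    Pr[dBm] = Pt + Gt + Gr + 10 * n * log10(lambda / (4 * pi * d));
    with exponent n = 2.0 this is exactly the standard Friis free-space model.
    """
    wavelength = 3.0e8 / freq_hz
    path_gain_db = 10.0 * exponent * math.log10(wavelength / (4.0 * math.pi * dist_m))
    return pt_dbm + gt_dbi + gr_dbi + path_gain_db

# Illustrative link budget: 433 MHz radio, 5 dBm transmit, unity-gain antennas.
pr = friis_received_power_dbm(5.0, 0.0, 0.0, 433e6, dist_m=1000.0)
print(f"received power at 1 km: {pr:.1f} dBm")
print("link feasible" if pr > -101.0 else "link infeasible")  # -101 dBm: assumed sensitivity
```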

Multi-objective design optimization for high-lift aircraft configurations supported by surrogate modeling

An intuitive example: people prepare warm clothes in advance when winter is coming in the northern hemisphere. This notion, built up from years of life experience, is a surrogate model for forecasting general weather trends; occasionally, abnormal winter weather lets people expose more of their skin, so such statistics-based estimates are a low-fidelity evaluation method. Modern meteorological science and technology, by contrast, can use information collected from meteorological satellites, ground weather stations, statistical data and simulations to forecast the weather with high fidelity. Comparing these two approaches to evaluating the objective(s) for given variables, the surrogate model gives a faster, cheaper, but lower-fidelity result than the high-fidelity approach. A new ADO methodology can be built by combining the two approaches; call it CFD-surrogate-based optimization: it uses the surrogate model to predict the design objective(s), an optimizer to search for the optimum over the surrogate model, and finally the high-fidelity approach to check the objective(s) (figure 3-2). This ADO method makes the optimization more effective, especially for tasks with a multi-dimensional design space and multiple local optima.
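A minimal, numpy-only sketch of the loop just described, under obvious simplifications: a cheap quadratic fit plays the surrogate, a dense grid scan plays the optimizer, and high_fidelity is a hypothetical stand-in for a CFD evaluation.

```python
import numpy as np

rng = np.random.default_rng(1)

def high_fidelity(x):
    # Stand-in for an expensive CFD evaluation of a single design variable.
    return (x - 0.3) ** 2 + 0.1 * np.sin(8.0 * x)

X = list(rng.random(5))                  # initial samples in [0, 1]
Y = [high_fidelity(x) for x in X]

for _ in range(10):
    coeffs = np.polyfit(X, Y, deg=2)     # cheap quadratic surrogate
    grid = np.linspace(0.0, 1.0, 1001)   # optimizer stand-in: dense scan
    x_star = grid[np.argmin(np.polyval(coeffs, grid))]
    X.append(x_star)                     # high-fidelity check of the predicted optimum
    Y.append(high_fidelity(x_star))      # ...and refit the surrogate with it

best = min(zip(Y, X))
print(f"best verified design x={best[1]:.3f}, objective={best[0]:.4f}")
```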

Parametric design and multi-objective optimisation of containerships

The energy efficiency design index, the ratio of the above to below deck number of containers, the required freight rate, the ship's zero-ballast container capacity and the total …


Multi-objective Optimisation of Marine Propellers

Due to the relatively high density of water, the efficiency of propellers is considerably more important for marine vehicles than for aircraft. Propeller efficiency refers to the proportion of motor power that is converted into thrust. In addition to being efficient, this conversion should be achieved with a minimum level of vibration and noise. The third characteristic of a good propeller is low surface erosion, which is caused by cavitation. Finding a balance between these three features is a challenging task which should be considered during the design process of a propeller. The main part of a propeller is its blades, and the geometric shape of these blades should satisfy all the above-mentioned requirements.

Bayesian single- and multi-objective optimisation with nonparametric priors

At each step, the input that is expected to reduce the entropy of the Pareto set the most is evaluated. The proposed approach is called predictive entropy search for multi-objective optimisation (PESMO). Several experiments involving real-world and synthetic optimisation problems show that PESMO can lead to better performance than related methods from the literature. Furthermore, in PESMO the acquisition function is expressed as a sum across the different objectives, allowing for decoupled scenarios in which we can choose to evaluate only a subset of objectives at any given location. In the robotics example, one might be able to decouple the problems by estimating energy consumption from a simulator even if the locomotion speed could only be evaluated via physical experimentation. Another example, inspired by Gelbart et al. [2014], might be the design of a low-calorie cookie: one wishes to maximise taste while minimising calories, but calories are a simple function of the ingredients, while taste could require human trials. The results obtained show that PESMO can obtain better results with a smaller number of evaluations of the objective functions in such scenarios. Furthermore, we have observed that decoupled evaluation provides significant improvements over coupled evaluation when the number of objectives is large. Finally, unlike other methods [Ponweiser et al., 2008; Picheny, 2015], the computational cost of PESMO grows linearly with the number of objectives.
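The additive structure can be illustrated with a short sketch (an assumption-laden toy, not PESMO's actual entropy computation): each objective contributes its own acquisition term, so the algorithm can either sum the terms (coupled) or pick a single (objective, location) pair (decoupled). GP predictive variance stands in here for the per-objective entropy-reduction terms.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

rng = np.random.default_rng(2)

# Independent GP surrogates, one per objective (e.g. speed and energy),
# trained on whatever evaluations each objective has received so far.
X1, y1 = rng.random((8, 2)), rng.random(8)      # objective 1: 8 evaluations
X2, y2 = rng.random((4, 2)), rng.random(4)      # objective 2: only 4 (decoupled)
gps = [GaussianProcessRegressor().fit(X1, y1),
       GaussianProcessRegressor().fit(X2, y2)]

cand = rng.random((1000, 2))                    # candidate locations

# Per-objective acquisition terms; predictive variance is a crude stand-in
# for PESMO's per-objective entropy reduction.
alpha = np.stack([gp.predict(cand, return_std=True)[1] for gp in gps])

# Coupled decision: evaluate all objectives where the summed acquisition peaks.
x_coupled = cand[np.argmax(alpha.sum(axis=0))]

# Decoupled decision: evaluate only the single (objective, location) pair
# with the largest individual term, e.g. the cheap simulator-based objective.
k, i = np.unravel_index(np.argmax(alpha), alpha.shape)
print("coupled evaluation at", x_coupled)
print(f"decoupled: evaluate objective {k} at", cand[i])
```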

Evolutionary multi-objective worst-case robust optimisation

Jin & Sendhoff (2003) consider robustness as an additional objective, so that the single-objective optimisation problem becomes a multi-objective optimisation problem; a trade-off between performance and robustness is considered. Gunawan & Azarm (2005) introduced the sensitivity region concept to measure the multi-objective sensitivity of a design. Considering that the objective function contains design variables and parameters, if the variation in objective value is small when a parameter changes, then the design is not sensitive to that parameter's variation. The approach does not require the parameter distribution, so it also applies to objective functions that are non-differentiable or discontinuous. Li et al. (2005) describe robust optimal solutions as those solutions that are less sensitive to parameter variations, when the multi-objective optimisation problem involves parameters that are uncontrollable. A new Robust Multi-Objective Genetic Algorithm (RMOGA) is presented in (Forouraghi 2000) to obtain the trade-off between performance and a robustness index defined on the basis of a worst-case sensitivity region. Luo & Zheng (2008) propose a new method to search for robust solutions by converting a multi-objective robust optimisation problem into a bi-objective optimisation problem, in which one objective represents the solutions' quality and the other optimises the solutions' robustness.
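A minimal sketch of that bi-objective conversion, with hypothetical forms throughout: the first objective is the nominal quality f(x) and the second a sampled worst-case degradation over a perturbation neighbourhood, both to be minimised.

```python
import numpy as np

rng = np.random.default_rng(3)

def f(x):
    # Hypothetical quality objective (to be minimised).
    return np.sum(x ** 2) + 0.5 * np.sin(10.0 * x[0])

def worst_case_degradation(x, radius=0.05, samples=64):
    # Sampled approximation of max_{|d| <= radius} f(x + d) - f(x):
    # the robustness objective (to be minimised) in the converted problem.
    deltas = rng.uniform(-radius, radius, size=(samples, x.size))
    return max(f(x + d) for d in deltas) - f(x)

def bi_objective(x):
    # Converted problem: (quality, robustness), both minimised.
    return f(x), worst_case_degradation(x)

# Evaluate a few random designs and report the non-dominated ones.
designs = rng.uniform(-1.0, 1.0, size=(50, 2))
points = [bi_objective(x) for x in designs]
pareto = [p for p in points
          if not any(q[0] <= p[0] and q[1] <= p[1] and q != p for q in points)]
print(f"{len(pareto)} non-dominated (quality, robustness) pairs")
```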

Multi-objective optimisation in scientific workflow

without requiring significant adaptation to each problem [10]. Most meta-heuristic algorithms share a number of characteristics. First, they are typically stochastic, rather than deterministic as in classic optimisation algorithms [11], [12]. Second, they do not make use of gradient information of the objective functions to guide the search. Gradient-based methods are found to be ill-suited to real-world multi-objective optimisation problems due to large search spaces with many local minima [1]. Instead, the search is guided by nature-inspired principles from physics, biology, etc. Meta-heuristic algorithms can be classified into two main groups: trajectory-based (or single-solution) and population-based. Trajectory-based approaches deal with only a single candidate solution: they start with one candidate solution and then improve on it, describing a trajectory in design space [12]. Examples of the trajectory-based approach are simulated annealing, tabu search, etc. On the other hand, population-based algorithms often use population characteristics to guide the search [12]. Some popular algorithms in this group are evolutionary algorithms, genetic algorithms and particle swarms.

Multi-objective optimisation in the presence of uncertainty

Evolutionary computation (EC) techniques are now extensively used when attempting to discover the optimal or near-optimal parameterisation for problems with unknown or complex function transformations from parameters to objective(s) (see for instance the Adaptive Computing in Design and Manufacture series (Parmee, 2004)). Almost all optimisation procedures search the parameter space by evaluating the objectives for a given parameterisation before proposing a new, hopefully better, parameterisation. It is generally assumed that repeated evaluation of the objectives for a single parameterisation yields the same objective values. However, a special, but not insubstantial, class of these problems exists in which there is additional uncertainty in the veracity of the results obtained from the system model. One clear example arises when measurement error or stochastic elements in a physical system lead to different results for repeated evaluations at the same parameter values (Büche et al., 2002; Stagge, 1998). Our own interest in this topic arises from the optimisation of classification error rates in pattern recognition tasks: precise error rates depend upon the particular data set used; different, but statistically equivalent, data sets arising from bootstrap samples yield different error rates (Fieldsend & Everson, 2004; Everson & Fieldsend, 2005), and it is important to evaluate the uncertainty associated with optimal classifiers.
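The bootstrap phenomenon described here is easy to reproduce in a few lines (assumed synthetic data and a deliberately trivial fixed-threshold classifier): statistically equivalent resamples yield a spread of error rates, and that spread is the uncertainty attached to the objective value.

```python
import numpy as np

rng = np.random.default_rng(4)

# Assumed data: 1-D feature, binary label, with deliberate class overlap.
n = 400
labels = rng.integers(0, 2, size=n)
features = labels + rng.normal(0.0, 0.8, size=n)

def error_rate(feat, lab, threshold=0.5):
    # Trivial fixed classifier: predict 1 when the feature exceeds the threshold.
    return np.mean((feat > threshold).astype(int) != lab)

# Error rates over bootstrap resamples of the evaluation set.
rates = []
for _ in range(200):
    idx = rng.integers(0, n, size=n)          # sample with replacement
    rates.append(error_rate(features[idx], labels[idx]))
rates = np.array(rates)

print(f"error rate: {rates.mean():.3f} +/- {rates.std():.3f} (bootstrap spread)")
```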

An overview of population-based algorithms for multi-objective optimisation

Assume that the problem under consideration is being solved for practical purposes, that is, it is a real-world problem and a design choice is to be made. Even if multiple competing objectives are under consideration, usually only one solution can be employed. This requires a human expert or decision maker (DM) to resolve the situation by selecting a small subset of the solutions presented by the algorithm. In this scenario, or more formally under a posteriori preference articulation, the algorithm is tasked with finding a set of alternative solutions. Subsequently the DM evaluates these solutions according to his or her preferences and makes a selection. To facilitate this process the algorithm guides a population of decision vectors toward a trade-off surface, be it convex, concave, or even discontinuous. This trade-off surface should enable the DM to choose a solution that they believe would serve their purposes well. It thus becomes apparent that in MOPs a DM plays an integral role in the solution procedure. There are various paradigms of interaction between the DM and the solution process, but in the present work a posteriori preference articulation is the main focus; for other types see (Miettinen 1999). This choice is based on the fact that this particular method of interaction allows a greater degree of separation between the algorithm and the decision-making process. This is also invariably the reason why this paradigm is usually employed by researchers when developing new optimization techniques: it enables testing to be conducted independently of the problem or application and, most importantly, it need not involve the DM at this stage. It should also be noted that we do not mention specific applications in this work; the only exception is when an application explicitly results in the creation of an algorithm highly regarded in the community. The reason for this decision is that it enables us to present a more concise conceptual view of PBOTs, one that we hope will be helpful to the practitioner in search of an approach to a particular problem.

Multi-objective optimisation of the cure of thick components

The standard cure profile results in a range of final degrees of cure (0.903–0.960) through the thickness. The corner design point results in a more uniform distribution (0.903–0.920) by increasing the second dwell temperature to 215 °C, decreasing the first dwell duration to 3000 s and increasing the ramp rate to 3.1 °C/min. This set of parameter values generates a controlled temperature overshoot of 22 °C, resulting in a slightly lower maximum degree of cure and a milder through-thickness temperature gradient, which contributes to a more uniform degree of cure distribution. The design in the horizontal region of the Pareto set uses a first dwell temperature of 135 °C and a first dwell duration of 8600 s, resulting in a small temperature overshoot and temperature gradient and therefore a virtually uniform degree of cure distribution through the thickness (0.899–0.906). The standard profile and the corner solution (Figs. 9 and 10) show qualitative similarities in the evolution of the cure process. In both, the overshoot occurs in the first dwell, and they have a similar peak reaction rate. The corner solution manages to reduce the overshoot by moving up to the second dwell as soon as the peak of the overshoot has occurred and the very fast part of the reaction has been completed. This yields a significant time saving compared to the standard profile, which continues within the first dwell while the risk of overshoot is minimised beyond the peak reaction point, and an additional saving as the tool temperature increases right after the overshoot, making the temperature more uniform and resulting in a weaker temperature lag and a more uniform degree of cure. The evolution of the cure is markedly different in the very low temperature overshoot solution corresponding to the horizontal region of the Pareto set (Fig. 11). The long first dwell at a relatively low temperature ensures maximum progress of the reaction (about 60%) with very low overshoot. This is followed by the ramp and second dwell to complete the cure at higher temperature, with the occurrence of a small overshoot. The result is negligible overall exothermic effects combined with a high level of uniformity in the degree of cure through the thickness, with a variation lower than 1%.

Asynchronous Multi-objective Optimisation in Unreliable Distributed Environments

Optimisation of solutions using computational models is becoming an increasingly common practice in engineering design and scientific investigations. It has been applied to a wide variety of problems, from biomechanics [26] to avionics [44], and the demand is increasingly for the simultaneous optimisation of several, possibly competing, objectives. Such real-world problems usually require complex, time-consuming computer simulations to evaluate potential solutions, and so parallelisation of optimisation algorithms is a practical necessity. Originally confined to dedicated parallel computing clusters or special-purpose parallel computers, the emergence of grid computing as a common approach to the provision of computational capacity [1] has dictated the development of optimisation algorithms capable of efficient and effective performance in potentially unreliable distributed environments. Such algorithms can also be applied effectively to small-scale, ad hoc grids of networked computers, bringing the benefits of computational optimisation within the reach of small to medium enterprises.

Multi-objective optimisation using the Bees Algorithm

The first direct use of the immune system to solve MOOPs reported in the literature is the work of Yoo and Hajela (1999). This approach uses a linear aggregating function to combine objective function and constraint information into a scalar value that is used as the fitness function of a GA. The best designs according to this value are defined as antigens and the rest of the population as a pool of antibodies. Freschi et al. (2009) give a good review of Multi-Objective Optimisation with Artificial Immune Systems (MOAIS), and Coello et al. (2007) also document MOAIS well. More recently, Tan et al. (2008) developed an Evolutionary Multi-Objective Immune Algorithm (EMOIA) to solve several benchmark problems. In order to design an algorithm capable of exploiting the complementary features of EAs and AIS, features such as an archive, an Entropy-based Density Assessment Scheme (EDAS), clonal selection and genetic operators are incorporated into EMOIA.

Multi-objective optimisation of low-thrust trajectories

Assuming a scenario in which a single IBSC needs to de-orbit multiple pieces of debris, one would need to solve an interesting mission design problem: the optimisation of the de-orbit sequence and trajectories for multiple target objects in minimum time and with minimum propellant. In the hypothetical mission scenario analysed in this work, it is assumed that a number of pieces of debris have been shortlisted as priority targets due to the threat they pose to satellites operating in LEO. For example, Johnson et al. [124] propose criteria for choosing the objects whose removal will be most effective in mitigating the risk of collisions. They underline that an effective removal strategy must target first the large objects in crowded orbits up to 1500 km. Thus, a removal mission by means of an IBSC is planned to be launched from the Earth, its task being to remove five objects in different low Earth orbits. The design of such a mission is a complex optimisation problem, because it requires the computation of multiple low-thrust, many-revolution transfers. Therefore, this case study proposes an approach to the fast estimation and optimisation of the cost and time duration of the fetch and de-orbit sequences, adopting a MOO approach.

Multi-objective optimisation methods applied to aircraft techno-economic and environmental issues

The optimisation of noise abatement trajectories for departures and arrivals was studied in [174–178]. Both generic indices (e.g., noise footprints) and site-specific criteria, which consider the population distribution around the airports, were included in the studies. First, noise models and a geographic information system were integrated into a dynamic trajectory optimisation code [176]. Given any airport, the tool delivers the analysis and design of noise abatement procedures. Two objectives were considered (i.e., noise and fuel consumption) for the departure of a Boeing 737-300 from Schiphol (Amsterdam Airport). In the optimal trajectories, the noise affected about 35% fewer people compared to the reference fuel-optimal trajectory, while requiring only 1% additional fuel. Following the previous case studies, the same tools and scenarios were also employed for the arrival of the aircraft in [177], with the aim of producing noise-optimal arrival trajectories. As a consequence, about 50% fewer people were disturbed compared to the reference, fuel-optimal trajectory, at the expense of about 15% more fuel and 10% longer endurance. In general, the shapes of the descent profiles are very similar. However, an operational limitation in both of the aforementioned cases forced modifications, as described in [178]: by slightly modifying the flight path, similar results were obtained. Similarly, it would be useful to combine noise indices with other principles. In [174], the previous tool was extended by including other noise performance criteria; two extra parameters were added: population and area. Considering the arrival trajectories, the author performed a parametric analysis of a single composite noise objective. The single-goal optimisation case shows unsatisfactory performance, because it is not possible to demonstrate a trade-off. This indicates that one must use individual (conflicting) objective functions, so as to demonstrate a trade-off which could be used by the decision makers. However, these studies focused on noise, whereas more objectives should be considered so as to acquire a trade-off.
