Materials by design is a core driver of enhanced sustainability and improved efficiency across a broad spectrum of industries. To this end, thermo-mechanical processes and many of the underlying phenomena have been studied extensively in the context of specific cases. The goal of this thesis is threefold: First, we aim to establish a novel numerical model on the micro- and mesoscale that captures dynamic recrystallization in a generalized framework. Inheriting the idea of state switches, we term this scheme the Field-Monte-Carlo Potts method. We employ a finite deformation framework in conjunction with a continuum-scale crystal plasticity formulation and extend the idea of state switches to cover both grain migration and nucleation. We introduce physically motivated state-switch rules, through which we achieve a natural marriage between the deterministic nature of crystal plasticity and the stochastic nature of dynamic recrystallization. Using a novel approach that performs the state switches in a transient manner, the new scheme benefits from enhanced stability and can therefore handle arbitrary levels of anisotropy. We demonstrate this functionality using the example of pure Mg at room temperature, which exhibits strong anisotropy through the different hardening behavior of the ⟨c+a⟩-pyramidal and prismatic slip systems as opposed to the basal slip systems, as well as through the presence of twinning as an alternative strain-accommodating mechanism. Building on this generalized approach, we demonstrate spatial convergence of the scheme along with the ability to capture the transformation from single- to multi-peak stress-strain behavior.
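The coupling of deterministic plasticity with stochastic state switches can be illustrated by a minimal Metropolis-style Potts update. This is a generic sketch under assumed names and a deliberately simplified stored-energy model (a fresh grain's energy replaces the consumed site's); it is not the thesis's actual formulation:

```python
import numpy as np

def potts_switch_attempt(states, energy, kT, rng):
    """One stochastic state-switch attempt (minimal sketch).

    states : 2D int array of grain/orientation IDs
    energy : 2D float array of stored energy per site (e.g., from plasticity)

    A random site may adopt a random neighbor's state; the switch is accepted
    with Metropolis probability based on the stored-energy difference, so
    boundaries migrate preferentially into highly deformed grains.
    """
    ny, nx = states.shape
    i, j = rng.integers(ny), rng.integers(nx)
    neighbors = [(1, 0), (-1, 0), (0, 1), (0, -1)]
    di, dj = neighbors[rng.integers(4)]
    ni, nj = (i + di) % ny, (j + dj) % nx      # periodic boundaries
    if states[ni, nj] == states[i, j]:
        return False                            # interior site, nothing to do
    dE = energy[ni, nj] - energy[i, j]          # stored-energy change on switching
    if dE <= 0 or rng.random() < np.exp(-dE / kT):
        states[i, j] = states[ni, nj]
        energy[i, j] = energy[ni, nj]
        return True
    return False

# toy demonstration: a low-stored-energy grain consumes a deformed neighbor
rng = np.random.default_rng(0)
states = np.zeros((8, 8), dtype=int)
states[:, 4:] = 1                               # grain 1 occupies the right half
energy = np.where(states == 0, 1.0, 0.0)        # grain 0 is heavily deformed
for _ in range(5000):
    potts_switch_attempt(states, energy, kT=0.1, rng=rng)
grown = int(np.sum(states == 1))                # grain 1 grows beyond its 32 sites
```

The deterministic side (the crystal plasticity solve that produces the stored-energy field) is absent here; the sketch only shows how a stochastic switch rule can consume it.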
Efforts were placed in the development of new high-temperature, high-strength materials, air-cooling methods, and protective coatings. Alloys with improved strength capabilities were developed to resist higher temperatures and allow increased turbine inlet temperatures. Developments progressed from heat-resisting steels and wrought nickel-chromium alloys in the 1950s to extruded nickel-base superalloy blades in the 1960s, which increased alloy temperature capabilities but reduced resistance to oxidation and hot corrosion. In the early 1970s, equiaxed cast blades were developed, showing a high number of grain boundaries and random crystal orientation, which facilitated the inclusion of internal cooling in the blade design. Directionally solidified casting began to be used in the late 1970s, leading to fewer grain boundaries and a crystal orientation aligned with the radial direction, so that maximum resistance to the stress produced by centrifugal forces was obtained. The development of single-crystal turbine blades, maintaining the radially oriented crystal but eliminating grain boundaries, further enhanced strength capabilities. The introduction of air-cooling technology in the 1970s was crucial to increasing the turbine entry temperature, requiring at the same time high-level design and casting techniques, and evolving from single-pass to multi-pass cooling to maximize heat transfer between the blade and the cooling air. Figure 3.1 describes the increase in turbine entry temperature due to the combination of these developments in materials and techniques [65].
• The multiscale link between the VMM macro-crack and micro-crack models, described in Chapter III, has been accomplished by Shang et al. [111, 141] for monotonic failure; thus, the extension to fatigue failure can be completed. The link between the LEFM-based irreversible cohesive model (Chapter II) and the VMM macro-crack model (Chapter III) is identified as a critical area for future research. Linking these two will provide fast calibration of the irreversible cohesive model parameters and macro-scale high-cycle fatigue failure simulation. • At the micro-scale, the incorporation of the wide variety of crystal plasticity methods developed in our research group [142, 143, 144, 145, 146, 147, 148, 149] into a variational multiscale cohesive framework will provide new simulation tools for the analysis of cracks and low-cycle fatigue. Some work in this direction has been performed by Shang et al. [111, 141] for 2D problems. In three dimensions, Regueiro has implemented the VMM to model strong discontinuities in rocks; thus, this implementation can be combined with the irreversible cohesive model to model polycrystalline fatigue failure in three dimensions. Alternative 3D microstructure crack modeling methods, such as smeared crack methods and graph-cut-based methods, could also be explored.
through automation. The application of concepts from graph theory, such as the directed graph and the directed multigraph, gives us hierarchical information that helps us understand the causal connections among events and mechanisms. As pointed out in , this model-building approach has an advantage in interpretability over model-free machine learning approaches. Considerable evidence has indicated that model-based planning, such as the one introduced in this chapter, is not only an essential ingredient of human intelligence but also the key step in enabling flexible adaptation to new tasks and goals. The importance of using a multigraph is that it enables us to form complex ideas, knowledge, predictions, inferences, and responses from a rather small set of simple elements. This application of the principle of combinatorial generalization has long been regarded as a key signature of intelligence [100, 14]. Third, we also introduce a cooperative mechanism to integrate data exploration into the modeling process. In this way, the framework can not only generate constitutive models that make the best predictions from the limited data, but also estimate the most efficient way to select experiments such that the most needed information is included to generate the knowledge closure.
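As an illustration of how a directed multigraph encodes such causal hierarchy, the sketch below keeps parallel edges so that two distinct mechanisms linking the same pair of events remain distinguishable. The event and mechanism names are hypothetical, and this is not the chapter's actual implementation:

```python
from collections import defaultdict

class EventMultigraph:
    """Directed multigraph as adjacency lists; parallel edges are preserved
    because distinct mechanisms may connect the same pair of events."""

    def __init__(self):
        self.edges = defaultdict(list)   # event -> list of (effect, mechanism)

    def add_link(self, cause, effect, mechanism):
        self.edges[cause].append((effect, mechanism))

    def mechanisms(self, cause, effect):
        """All mechanisms recorded between a given cause and effect."""
        return [m for e, m in self.edges[cause] if e == effect]

    def reachable(self, start):
        """All events causally downstream of `start` (iterative DFS)."""
        seen, stack = set(), [start]
        while stack:
            node = stack.pop()
            for effect, _ in self.edges.get(node, []):
                if effect not in seen:
                    seen.add(effect)
                    stack.append(effect)
        return seen

# hypothetical events and mechanisms, purely for illustration
g = EventMultigraph()
g.add_link("loading", "plastic strain", "dislocation slip")
g.add_link("loading", "plastic strain", "twinning")        # parallel edge
g.add_link("plastic strain", "hardening", "dislocation storage")
```

Querying `g.mechanisms("loading", "plastic strain")` returns both parallel edges, which is exactly the information a simple directed graph would collapse.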
We consider the problem of blind separation of unknown source signals or images from a given set of their linear mixtures. It was discovered recently that exploiting the sparsity of sources and their mixtures, once they are projected onto a proper space of sparse representation, improves the quality of separation. In this study we take advantage of the properties of multiscale transforms, such as wavelet packets, to decompose signals into sets of local features with various degrees of sparsity. We then study how the separation error is affected by the sparsity of decomposition coefficients, and by the misfit between the probabilistic model of these coefficients and their actual distribution. Our error estimator, based on the Taylor expansion of the quasi-ML function, is used in selection of the best subsets of coefficients and utilized, in turn, in further separation. The performance of the algorithm is evaluated by using noise-free and noisy data. Experiments with simulated signals, musical sounds and images, demonstrate significant improvement of separation quality over previously reported results.
Our aim is to understand the evolution of defects and defect clusters in silicon and to estimate the thermophysical properties of defect clusters under external conditions of stress and temperature. Since the defects are very small (less than a nanometer in size), a direct measurement of the thermophysical properties of defect clusters is not possible with current experimental techniques. Fortunately, a few simulation and modeling methodologies developed over the last two decades capture the physics underlying a physical phenomenon reasonably well. These techniques have in turn been used extensively to model material behavior under a given set of external conditions and have resulted in significant gains in product quality and process safety. Depending on the level of detail and accuracy one wants to achieve with these simulations, the techniques can be broadly divided into four types.
An important starting point is the concept of multi-resolution analysis (MRA) intro- duced in Mallat (1989) and Meyer (1995), of which wavelets are particularly popular examples. One main advantage of MRA is that it comes naturally with fast decompo- sition and reconstruction algorithms, and this has been essential for making wavelets a practical tool in signal processing (Daubechies et al., 2003; Shen, 2010). Although our work builds upon the theory of wavelet frames in the continuous setting, we decide to introduce our model in a purely discrete setup. This has the advantage that it is more direct and more easily linked with existing machine learning models, including dictionary learning and convolutional networks. However, as noted in Han (2010), there is a canonical link between affine systems in the continuous setting and fast algorithms in the discrete framework.
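The fast decomposition and reconstruction property can be made concrete with the simplest MRA example, one level of the orthonormal Haar transform. This is a generic textbook sketch, not the paper's discrete model:

```python
import numpy as np

def haar_decompose(x):
    """One level of the orthonormal Haar fast wavelet transform: O(n)."""
    x = np.asarray(x, dtype=float)
    approx = (x[0::2] + x[1::2]) / np.sqrt(2)   # low-pass: local averages
    detail = (x[0::2] - x[1::2]) / np.sqrt(2)   # high-pass: local differences
    return approx, detail

def haar_reconstruct(approx, detail):
    """Inverse of one Haar level: perfect reconstruction, also O(n)."""
    x = np.empty(2 * len(approx))
    x[0::2] = (approx + detail) / np.sqrt(2)
    x[1::2] = (approx - detail) / np.sqrt(2)
    return x

signal = np.array([2.0, 4.0, 6.0, 6.0, 5.0, 3.0, 2.0, 2.0])
a, d = haar_decompose(signal)
rec = haar_reconstruct(a, d)      # matches `signal` to machine precision
```

Because the filters are orthonormal, the transform also preserves energy, which is the discrete counterpart of the tight-frame property used in the continuous theory.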
methods with the decoupled nature of MC sampling, the so-called stochastic collocation method. This framework represents the stochastic solution as a polynomial approximation. This interpolant is constructed via independent function calls to the deterministic problem at different interpolation points. This strategy has emerged as a very attractive alternative to the spectral stochastic paradigm. However, the construction of the set of interpolation points is nontrivial, especially in multi-dimensional random spaces. In , a methodology was proposed wherein the Galerkin approximation is used to model the physical space and a collocation scheme is used to sample the random space. A tensor product rule was used to interpolate the variables in stochastic space using products of one-dimensional (1D) interpolation functions based on Gauss quadrature points. Though this scheme leads to the solution of uncoupled deterministic problems as in the MC method, the number of realizations required to build the interpolation scheme increases as a power of the number of random dimensions. On the other hand, the sparse grid resulting from the Smolyak algorithm depends only weakly on dimensionality . Sparse grids have been applied in many fields, such as high-dimensional integration , interpolation [26, 27, 28], and the solution of PDEs . For an in-depth review, the reader may refer to . In [31, 32, 33], the authors used the Smolyak algorithm to build sparse grid interpolants in high-dimensional stochastic spaces based on Lagrange interpolation polynomials. Using this method, interpolation schemes can be constructed with orders-of-magnitude reductions in the number of sampled points while giving the same level of approximation (up to a logarithmic factor) as interpolation on a uniform grid. Hereafter, this method is referred to as the conventional sparse grid collocation (CSGC) method.
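In one random dimension, the collocation idea reduces to evaluating the deterministic model only at Gauss quadrature nodes and forming statistics from those uncoupled samples. A minimal sketch, in which `u` is a stand-in for an expensive deterministic solver call and all names are illustrative:

```python
import numpy as np

def u(xi):
    """Stand-in for an independent deterministic solver call at sample xi."""
    return np.exp(xi)

# Probabilists' Gauss-Hermite rule: weight exp(-x^2/2), total weight
# sqrt(2*pi), matching a standard normal random input xi ~ N(0, 1).
nodes, weights = np.polynomial.hermite_e.hermegauss(10)

# Each u(node) is a decoupled deterministic run, as in MC sampling;
# the quadrature weights turn the ten runs into a mean estimate.
mean_u = np.sum(weights * u(nodes)) / np.sqrt(2 * np.pi)
# exact value: E[exp(xi)] = exp(1/2); ten nodes already match it closely
```

In d random dimensions, the tensor product of such rules requires 10^d solver calls, which is precisely the power-of-dimension growth that the Smolyak sparse grid alleviates.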
Despite the successful prediction of precipitation boundaries in this work, there are still questions that thermodynamic methods fail to address. We assumed that counterion binding fractions on micelles were constant, partially because of the difficulty of quantifying their variation via thermodynamic methods. MD methods may offer a solution to this problem because they can effectively visualize the microscale objects of interest (25-30). A possible approach is to perform MD simulations to observe what occurs around a surfactant micelle. By comparing the modeling results under different conditions, we could determine how binding varies. We could also obtain evidence of surfactant precipitation, and thus derive the precipitation boundary, from the direct observation of surfactant/counterion aggregates in simulated systems. MD simulations have not yet been used to investigate surfactant precipitation; such simulations would advance the understanding of the important issues involved in anionic surfactant precipitation in calcium salt solutions.
Addiction is a complex psychological and neurophysiological manifestation, defined in terms of drug-using behaviors. Because of the importance of behavior in defining the syndrome, and because the syndrome depends on the availability and accessibility of the drug of abuse, which in turn depend on social interactions, it is useful to extend the concept of multiscale upwards to the level of these social interactions . Underlying mechanisms drive an individual to uncontrolled use and create feelings of craving as well as a physiological state of withdrawal. These mechanisms can be defined spatially, at levels from the genome to neural circuitry, and temporally, at multiple scales ranging from milliseconds to years, influencing each other through systems of feedback loops . For example, genetics will determine the functioning of certain receptors in the brain and their response and adaptation to repeated drug intake. This adaptation in turn can gradually change cognitive pathways and lead to an intrinsic demand for more drugs, which can translate into drug-seeking behavior involving other individuals. Success in drug-seeking behavior results in drug use and reinforces these multiscale cycles.
In Fig. 5, it is clear that the D2Q9 model is unable to describe the Knudsen layer, as also reported previously [29–31], while the M-D2Q9-36 model can obtain satisfactory results with the global Knudsen number up to 0.5. When the global Knudsen number is larger than 0.5, the multiscale method starts to deviate more from the linearized BGK (LBGK) results. This is not surprising, since the Knudsen layers overlap and the rarefaction effect becomes important for the whole flow domain. Note that a typical Navier-Stokes/DSMC hybrid model usually becomes problematic when the Knudsen number exceeds 0.1, e.g., see Fig. 4 in Ref. . To some extent, this indicates the advantage of coupling the kinetic-based LB models.
Recently, several new multiscale schemes have been constructed purely on the basis of gas kinetic theory [18–21]; we call these kinetic multiscale schemes (KMS) here. A distinctive feature of KMS is that the same evolving quantity (i.e., the molecular velocity distribution function) is used to describe flow fields with different rarefaction levels, leading to relatively easy information exchange at the model coupling interfaces [19,20]. To capture different levels of rarefaction, different discrete velocity sets are usually necessary. In general, more discrete velocities are needed for higher levels of rarefaction, and fewer for lower levels. In particular, there have been efforts to design schemes specialized for continuum-level modeling, e.g., the gas-kinetic Bhatnagar-Gross-Krook (BGK) Burnett solutions . This provides a good opportunity to improve the efficiency of multiscale solvers: it is practically important to use fewer discrete velocities, or specialized continuum-level solvers, as widely as possible wherever in the flow field they are valid.
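For concreteness, the smallest commonly used discrete velocity set in 2D is the standard D2Q9 lattice. A quick check (generic lattice Boltzmann background, not the paper's code) confirms that its nine weights reproduce the low-order moments a Navier-Stokes-level description requires:

```python
import numpy as np

# The nine D2Q9 lattice velocities (rest, four axis, four diagonal)
# and their standard weights.
c = np.array([[0, 0], [1, 0], [0, 1], [-1, 0], [0, -1],
              [1, 1], [-1, 1], [-1, -1], [1, -1]])
w = np.array([4/9] + [1/9] * 4 + [1/36] * 4)

mass = w.sum()                               # zeroth moment: 1
momentum = (w[:, None] * c).sum(axis=0)      # first moment: [0, 0]
second = np.einsum('i,ia,ib->ab', w, c, c)   # second moment: (1/3) * identity
```

The second moment gives the lattice sound speed c_s^2 = 1/3; higher moments are not constrained, which is why D2Q9 cannot resolve the Knudsen layer and larger velocity sets such as D2Q36 variants become necessary at higher rarefaction.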
The understanding of spin transport and spin torque has been of increasing importance for spintronic device applications since the discovery of giant magnetoresistance (GMR) [1, 2] and tunnelling magnetoresistance (TMR) [3, 4]. These phenomena have opened a new path for spintronic device design, such as magnetic tunnelling junction (MTJ) sensors  and magnetoresistive random access memory (MRAM) , leading to the development of new generations of computer architecture. In addition, read sensors for conventional magnetic recording rely on transport properties to achieve the desired functionality. Both spin transport and spin torque are strongly affected by the interface structure and properties, which therefore play a crucial role in determining the resistance arising from spin-dependent scattering at the interface [7, 8, 9, 10, 11]. From the theoretical point of view, the simulation of a general interface between two different materials is of great complexity. The usual
A similar statistical estimation problem arises in the context of "equation-free" modeling. In this case, coarse-grained equations exist only locally and are locally fitted to the data. The main idea of "equation-free" modeling is to use these locally fitted coarse-grained equations in combination with a global algorithm (for example, Newton-Raphson) in order to answer questions about the global dynamics of the coarse-grained model (for example, finding the roots of the drift). In this process, we go through the following steps: we simulate short paths of the system for given initial conditions; these are used to locally estimate the effective dynamics; then we carefully choose the initial conditions for the following simulations so that we answer whatever question we have posed about the global dynamics of the system as quickly and efficiently as possible (see ). The statistical inference problem is similar to the one before: we have data coming from the full model, we have a model for the effective local dynamics, and we want to fit the data to this model. However, there is also an important difference: the available data are short paths of the full model. This issue has not been addressed in [15, 14] or , where it is assumed that the time horizon is either fixed or goes to infinity at a certain rate. We will address this problem in Section 3 by letting the time horizon T be of order O(ε^α), where ε is the scale separation variable and α > 0. Another important issue that we will address here is that of estimating the scale separation variable ε.
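The steps above can be sketched on a toy problem: the "full model" is replaced by a simple SDE with drift b(x) = -(x - 1), short bursts are simulated to estimate the effective drift locally, and Newton-Raphson is run on the estimated drift to find its root. All names and parameter values are illustrative assumptions, not the paper's setup:

```python
import numpy as np

rng = np.random.default_rng(0)
T = 0.1                                   # short burst horizon

def short_bursts(x0, n_paths=20000, dt=0.01, sigma=0.5):
    """Simulate short paths of the 'full model' from x0 (Euler-Maruyama).
    The stand-in SDE is dX = -(X - 1) dt + sigma dW, so the drift root is 1."""
    x = np.full(n_paths, x0, dtype=float)
    for _ in range(int(T / dt)):
        x += -(x - 1.0) * dt + sigma * np.sqrt(dt) * rng.standard_normal(n_paths)
    return x

def estimated_drift(x0):
    """Locally fitted effective drift: (E[X_T] - x0) / T from burst data."""
    return (short_bursts(x0).mean() - x0) / T

# Newton-Raphson on the *estimated* drift, with a finite-difference
# derivative; no global equation for the dynamics is ever written down.
x, h = 3.0, 0.1
for _ in range(5):
    b = estimated_drift(x)
    db = (estimated_drift(x + h) - estimated_drift(x - h)) / (2 * h)
    x -= b / db                            # converges near the drift root x = 1
```

Each Newton step chooses new initial conditions (x, x ± h) for the next bursts, which is exactly the "carefully choose the initial conditions" step in the description above.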
It can be seen from the table that the change of pseudopotential improves both the lattice constant and the magnetic moment of iron, the resulting magnetic moment being even closer to the experimental data than those from the Wien2K and VASP calculations. To check whether the pseudopotential can be applied to calculations of the structure in the liquid state, we have simulated the structure of liquid iron, which is shown in Fig. 2 (along with experimental data and RDFs from classical MD calculations). It can be seen that the structure recovered with the new pseudopotential is at least as close to the X-ray diffraction data as that obtained with the old one.
The architecture of VPH-HF is inspired by the concept of modularity: each component can be used in isolation or in ensemble with others to offer more sophisticated functionalities. This approach ensured an effective extension of the VPH-HF prototype developed in the VPH-OP project to the new requirements and scenarios of the CHIC project. The whole back-end VPH-HF architecture is hidden from the users, since the target goal is to allow the adoption in the
Despite the shared theme of population demography between biophysics and population genetics, the two are rarely integrated to study microbial evolution. A few theoretical models implementing both fields have been used to study evolution, but none have been used to study the emergence of resistance. Consequently, the application of evolutionary models and theories to the resistance problem is largely unexplored, and the role and contribution of molecular biophysics and population genetics to the emergence of resistance remain unclear. In this thesis, we implemented evolutionary models considering principally population genetics constraints, using concepts from theoretical evolutionary studies to investigate the emergence of resistance. The evolutionary models developed in this thesis will form the basis of the multiscale model for the prediction of microbial evolution, to study resistance in a single unified model. Further integration of complex biophysical constraints, such as protein folding, will contribute significantly to increasing the predictive accuracy of our evolutionary models. Understanding these complex biophysical constraints has been part of the Serohijos group's interest, and a significant amount of research in the group has been focused on elucidating epistasis using biophysical and population genetics approaches. This research resulted in two important findings on epistasis. It was determined that a significant fraction of amino acid substitutions would have experienced epistasis due to simple selection for folding stability, thereby linking epistasis to the strength of molecular selection (Dasmeh and Serohijos, 2018). Also, a proteome-wide scan in E. coli revealed that epistasis is stronger among highly expressed genes, highlighting the combination of selection and epistasis in long-term evolution (Dasmeh et al., 2017).
Combined with the recent advancements by the Serohijos group, the evolutionary perspective provided in this memoir will be an important foundation for research in antibiotic resistance.
Comparing the presented multiscale modelling method with a foldcore micromodel simulation of the experiment, the main challenge of the multiscale method is the requirement of developing a 3-D failure envelope for each fold pattern. If explicit simulations are used for the identification of the failure characteristics, the inaccuracies existing in the material description cannot be excluded by the application of a macro-scale modelling method. Additionally, the homogenised representation of the foldcore is not able to capture all failure modes, such as intercellular buckling modes, which depend on the detailed architecture of the cellular core. Since an intercellular buckling failure of the facesheet cannot be identified by the modelling method due to the homogenisation of the foldcore, this failure mode has to be investigated separately. Therefore, a micromodel simulation approach with a detailed representation of the foldcore seems favourable if the influence of a fold pattern modification on the panel failure has to be studied. On the other hand, with the presented multiscale modelling method, large structures made of foldcore sandwich panels can be analysed efficiently by obtaining the safety margin against core failure for each homogenised core element.
A few years later, Stokowski met Walt Disney, who was working on "Fantasia". During their meeting, they hatched the idea of a cartoon featuring classical music. Stokowski encouraged Disney's engineers to contact Bell Labs in order to use their earlier experimental stereophonic recording system in the film. After the collaboration with Bell Labs was confirmed, Walt Disney had an inspiration: he thought that if some sounds of the movie came from all around the audience, instead of just in front of them, it would add extra realism to the movie. Thereby, the soundtrack of the film was produced with a three-channel recording system, in which the rear channels were electronically "steered in" when desired. For this reason, Walt Disney is considered the inventor of surround sound. The resulting know-how was named "Fantasound" and was demonstrated in New York, Los Angeles, and a few other places, but it did not achieve the desired effect and was retired. During World War II, Stalin's government expressed an interest in using it for Soviet films; an agreement was made, and all of the equipment was sent to Europe by ship. Unfortunately, this ship was torpedoed by a German submarine, and "Fantasound" went down to the bottom of the ocean.
Longitudinal data combine features of cross-sectional and time-series data: the subjects are independent, while the repeated observations within each subject are dependent. The advantages of longitudinal studies are that they reveal individual changes and require fewer subjects because of the repeated observations. Moreover, the estimation is more efficient because it is performed jointly for all subjects and observations. Some nonparametric methods for modeling longitudinal data are kernel, spline, local polynomial, Fourier, and wavelet methods. The wavelet method was developed to overcome the Fourier method's weakness in modeling non-stationary data. In the wavelet method, the Discrete Wavelet Transform (DWT) is used. However, the DWT has a limitation: it can only be used to model data whose length is N = 2^p
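The N = 2^p restriction comes from the repeated halving in the DWT pyramid. A small sketch (a generic workaround, not the study's method) shows how many complete levels a given length admits and how zero-padding restores a dyadic length:

```python
import numpy as np

def dwt_levels(n):
    """Number of complete halving levels the DWT pyramid can perform
    on a length-n signal before an odd length stops the recursion."""
    levels = 0
    while n > 1 and n % 2 == 0:
        n //= 2
        levels += 1
    return levels

def pad_to_dyadic(x):
    """Zero-pad a signal up to the next power-of-two length.
    (Boundary handling matters in practice; zeros are the crudest choice.)"""
    n = len(x)
    p = int(np.ceil(np.log2(n)))
    out = np.zeros(2 ** p)
    out[:n] = x
    return out
```

A length-8 signal admits the full three levels, whereas a length-6 signal stops after one; padding length 6 up to 8 recovers the full pyramid at the cost of artificial boundary values.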