master equation can also lose positivity when other interactions in the Hamiltonian are modified. We demonstrate in this particular case that treating the non-Markovian term as generic results in an equation of motion that is unstable (with respect to positivity) when one turns on additional interactions in the system's Hamiltonian. Thus, it does not seem likely that one can have a generic non-Markovian term in the master equation that can be used, even phenomenologically and keeping the system-reservoir interaction constant, to investigate the effects of dissipation as one varies parameters of the system. Having such a generic term, though, would allow one to build reusable tools and approximations for the investigation of non-Markovian effects. We therefore suggest an alternative approach for simulating real-time, open-system dynamics that does not rely on non-Markovian master equations. The methodology is based on performing a sequence of transformations on the reservoir Hamiltonian that create auxiliary quantum degrees of freedom, which can then be included explicitly in the simulation. Their inclusion allows one to simulate highly non-Markovian processes without the drawbacks of non-Markovian master equations. Further, for systems with strong dissipation or long memory times, such as the spin-boson model, the structure of the transformed reservoir is such that one can use matrix product state techniques. Indeed, this is the basis of the numerical renormalization group, but its implications go well beyond renormalization schemes. With such a generic approach, computational and analytic tools can be developed to handle a wide variety of open systems [58, 73]. This approach also gives insight into the mechanism by which an
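One concrete example of such a reservoir transformation is the star-to-chain mapping that underlies chain-mapped and NRG-style treatments: a Lanczos tridiagonalization turns a discretized star-coupled bath into a nearest-neighbour chain amenable to matrix product states. The following is a minimal sketch under assumed inputs (bath frequencies `omega` and system-bath couplings `g` are illustrative placeholders, not the specific construction of the text):

```python
import numpy as np

def star_to_chain(omega, g, n_chain):
    """Map a star-coupled bath (mode frequencies omega, couplings g) onto a
    nearest-neighbour chain via Lanczos tridiagonalization.
    Returns chain on-site energies alpha, hoppings beta, and the
    system-to-first-chain-mode coupling |g|."""
    H = np.diag(omega)
    q = g / np.linalg.norm(g)   # first chain mode = normalized coupling vector
    alpha, beta = [], []
    Q = [q]
    for n in range(n_chain):
        w = H @ Q[-1]
        alpha.append(Q[-1] @ w)             # on-site energy of chain site n
        for qk in Q:                        # full reorthogonalization
            w = w - (qk @ w) * qk
        b = np.linalg.norm(w)
        if n == n_chain - 1 or b < 1e-12:   # chain ends or Krylov space exhausted
            break
        beta.append(b)                      # hopping between sites n and n+1
        Q.append(w / b)
    return np.array(alpha), np.array(beta), np.linalg.norm(g)
```

Since the mapping is a unitary change of bath basis, the tridiagonal chain Hamiltonian has exactly the original bath spectrum, which provides a direct consistency check.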
Chapter 3 explores the use of RPMD to directly simulate ET reactions between mixed-valence transition metal ions in water. We compare the RPMD approach against benchmark semiclassical and quantum dynamics methods in both atomistic and system-bath representations for ET in a polar solvent. Without invoking any prior mechanistic or transition state assumptions, RPMD correctly predicts the ET reaction mechanism and quantitatively describes the ET reaction rate over twelve orders of magnitude in the normal and activationless regimes of ET. Detailed analysis of the dynamical trajectories reveals that the accuracy of the method lies in its exact description of statistical fluctuations, with regard to both the reorganization of classical nuclear degrees of freedom and the electron tunneling event. The vast majority of ET reactions in biological and synthetic systems occur in the normal and activationless regimes, and this work provides the foundation for future studies of ET and PCET reactions in condensed-phase systems. Additionally, this study reveals a shortcoming of the method in the inverted regime of ET, which arises from an inadequate description of the quantization of the real-time electronic-state dynamics and directly motivates further methodological refinement. This work has been published as Menzeleev, A. R., Ananth, N. and Miller, T. F., “Direct simulation of electron transfer using ring polymer molecular dynamics: Comparison with semiclassical instanton theory and exact quantum methods,” Journal of Chemical Physics, 135, 074106 (2011).
The study of hemodynamics is an important component in the design and validation of artificial heart valves. The removal of natural valves and their replacement by artificial valves may considerably alter the physiological flow. This necessitates an extensive verification and validation process involving in-vivo, in-vitro and in-silico models. The pulse duplicator is an in-vitro setup used to validate the performance of the valve dynamics. Physiological pressure and flow waveforms across the valve can be simulated in this setup using lumped impedances representing the systemic resistances and compliances. The use of Computational Fluid Dynamics (CFD) models to simulate the pulse duplicator allows access to parameters such as velocity profiles, shear stresses, and areas of separation and recirculation, which are critical in the design of a heart valve prosthesis. A CFD analysis of the dynamics of a 25 mm tilting-disc aortic valve was performed with geometry and flow waveforms as in the in-vitro pulse duplicator setup. Two phases of the flow were simulated: the opening phase and the fully open phase. The results of the simulation were validated against results from the in-vitro experimental setup.
Mesoscopic simulation aims at describing supramolecular phenomena at the nanometer (length) and microsecond (time) scale for large interacting physical ensembles (representing millions of atoms) within comparatively short computational time frames (hours) by “coarse-grained” neglect of uninteresting degrees of freedom. Dissipative Particle Dynamics (DPD) is a mesoscopic simulation technique for isothermal complex fluids and soft matter systems that combines features from Molecular Dynamics (MD), Langevin Dynamics and Lattice-Gas Automata [1–5]. It satisfies Galilean invariance and isotropy, conserves mass and momentum, and achieves a rigorous sampling of the canonical ensemble due to soft particle pair potentials that diminish entanglements or caging effects. DPD is expected to show correct hydrodynamic behavior and to obey the Navier–Stokes equations. Whereas DPD particles in general may be arbitrarily defined “fluid packets”, the Molecular Fragment DPD variant [5–12] identifies each particle with a distinct
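The combination of features named above can be made concrete with the standard pairwise DPD force, which is the sum of a soft conservative repulsion, a dissipative (friction) term, and a random term, with the random and dissipative amplitudes tied together by the fluctuation-dissipation relation σ² = 2γk_BT that guarantees canonical sampling. A minimal sketch; all parameter values are illustrative defaults, not taken from the text:

```python
import numpy as np

def dpd_pair_force(r_i, r_j, v_i, v_j, a=25.0, gamma=4.5, kT=1.0,
                   r_c=1.0, dt=0.01, rng=np.random.default_rng(0)):
    """Pairwise DPD force on particle i from particle j:
    F = F_C + F_D + F_R, all vanishing beyond the cutoff r_c."""
    r_ij = r_i - r_j
    r = np.linalg.norm(r_ij)
    if r >= r_c:
        return np.zeros(3)
    e = r_ij / r
    w = 1.0 - r / r_c                     # standard DPD weight function
    sigma = np.sqrt(2.0 * gamma * kT)     # fluctuation-dissipation relation
    xi = rng.standard_normal()            # symmetric Gaussian noise
    f_c = a * w * e                                   # soft conservative repulsion
    f_d = -gamma * w**2 * np.dot(e, v_i - v_j) * e    # dissipative (friction) force
    f_r = sigma * w * xi / np.sqrt(dt) * e            # random force
    return f_c + f_d + f_r
```

The soft, bounded conservative force a(1 − r/r_c) is what allows the large time steps and absence of caging mentioned above; the 1/√dt scaling of the random force is the discrete-time form of the Langevin noise.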
Both applications have demonstrated that the CCSB method converges with the number of configurations K and the number of quantum states included in the basis Ω. We have also demonstrated that appropriate sampling of the initial coherent states via a compression parameter σ is necessary to ensure a reliable and accurate calculation. Further developments of the method that we envisage in the future include methods to generate initial conditions, since imaginary-time propagation is unstable with trajectories; the incorporation of SU(n) coherent states, as demonstrated in Ref. ; and the combination of the method with one that treats identical fermions , in order to study Bose-Fermi mixtures, as has been carried out previously with MCTDH  and ML-MCTDH .
Using the nanotube potential (5) and the methods of classical molecular dynamics, it is possible to study the permeability of a single nanotube as well as the permeability of materials produced by compacting homogeneous or heterogeneous tubes. It is also important to study the sorption properties of the nanotubes. Some conclusions about these properties can be drawn from the resulting potential pattern (Figures 2 and 3).
Our QMD model also offers the possibility of simulating the dynamical evolution of infinite nuclear matter, as occurs in supernova explosions, neutron-star glitches, and the initial stage of the universe. An intensive and systematic study of nuclear matter with the present model will be important, since it makes fewer assumptions about the structure of matter than the foregoing models.
As yet, there have been no experiments aiming to create PREs (involving non-orthogonal pure states). It would certainly be difficult to implement the required high-efficiency adaptive measurement schemes, though a suggestion has been made , involving quantum transport with feedback, which alleviates some of the difficulties. However, PREs might have applications in quantum simulation, irrespective of experimental realization. A generic quantum trajectory, which involves periods of non-trivial continuous evolution, will occupy an infinite ensemble of states, though this is reduced to a finite number when simulated with finite precision. Even so, the memory required will still be exponentially large in the system dimension, D. In contrast, a finite PRE with only K states will have memory requirements scaling generically only as D² (see Eq. (18)). The catch, as the reader of this paper will appreciate, is that there is a one-time large resource cost associated with identifying a PRE (and adaptive measurement scheme) applicable to the ME. This cost depends on the algorithm applied, with the monodromy extension to polynomial homotopy continuation offering the greatest potential for finding PREs with sub-exponential (or very small constant) complexity. If finding PREs can be made tractable for the ME in question, then trajectory simulation could perhaps be most efficiently undertaken using the PRE unraveling of the ME.
Coordinate transformations are another instance of an unphysical operation. Indeed, the instantaneous application of a coordinate transformation would violate the laws of relativity. However, in [13, 15] it is shown that linear coordinate transformations—including Galilean boosts—can be implemented in an embedded quantum simulator. An embedded quantum simulator is a device which enables the realisation of unphysical operations by encoding the dynamics of the simulated system in a larger Hilbert space. The instantaneous action of a suitable physical operation in the enlarged Hilbert space corresponds to the action of the unphysical operation in the simulated space. This is a multiplatform notion that was initially explored in ion traps [14, 15].
trajectories are a mixture of both active and inactive ones . Here, the two dynamical phases coexist [52,53]. In order to access the active phase as well as the coexistence region, the system needs to be biased. This means that the Hamiltonian and the jump operators are adjusted such that the rare dynamics found at values of s away from 0 becomes the typical dynamics. The technical details of this procedure are discussed in the following section, but we already illustrate the results in Figs. 2(b) and 2(c). Here, we bias the three-qubit system such that the dynamics is in the active phase. In Fig. 2(b) we see that the resulting SCGF, θ̃(s), is the original SCGF but shifted in such a way that the crossover (formerly at s_c < 0) is now
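As a generic illustration of what enters the biasing construction (a sketch for an assumed single driven dissipative qubit, not the three-qubit system of the figures): the SCGF θ(s) for a counted jump process is the eigenvalue with the largest real part of the s-tilted Lindblad generator, in which the quantum-jump term is weighted by e^(−s):

```python
import numpy as np

def tilted_scgf(s, omega=1.0, kappa=1.0):
    """SCGF theta(s) for photon emissions of a resonantly driven qubit,
    obtained as the largest real eigenvalue of the s-tilted generator.
    Column-stacking convention: vec(A rho B) = (B^T kron A) vec(rho)."""
    sx = np.array([[0, 1], [1, 0]], dtype=complex)
    sm = np.array([[0, 1], [0, 0]], dtype=complex)   # lowering operator
    H = omega * sx
    c = np.sqrt(kappa) * sm
    I = np.eye(2, dtype=complex)
    cc = c.conj().T @ c
    L = (-1j * (np.kron(I, H) - np.kron(H.T, I))      # coherent part
         + np.exp(-s) * np.kron(c.conj(), c)          # tilted jump term
         - 0.5 * (np.kron(I, cc) + np.kron(cc.T, I))) # anticommutator part
    return np.max(np.linalg.eigvals(L).real)
```

At s = 0 the generator is trace preserving, so θ(0) = 0; s > 0 suppresses jumps (θ < 0, probing the inactive side) and s < 0 enhances them (θ > 0, probing the active side), which is the shift illustrated in Fig. 2(b).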
Abstract. We present analytical and simulation studies of the nonlinear instability and dynamics of an electron–hole/anti-electron (hereafter referred to as polaritons) system, which are common in ultra-small devices (semiconductors and micromechanical systems) as well as in dense astrophysical environments and the next-generation intense laser–matter interaction experiments. Starting with three coupled nonlinear equations (two Schrödinger equations for interacting polaritons at quantum scales and the Poisson equation determining the electrostatic interactions and the associated charge separation effect), we demonstrate novel modulational instabilities and nonlinear polaritonic structures. It is suggested that the latter can transport information at quantum scales in high-density, ultracold quantum systems.
Memory effects in open quantum dynamics are often incorporated in the equation of motion through a superoperator known as the memory kernel, which encodes how past states affect future dynamics. However, the usual prescription for determining the memory kernel requires information about the underlying system-environment dynamics. Here, by deriving the transfer tensor method from first principles, we show how a memory kernel master equation, for any quantum process, can be entirely expressed in terms of a family of completely positive dynamical maps. These can be reconstructed through quantum process tomography on the system alone, either experimentally or numerically, and the resulting equation of motion is equivalent to a generalised Nakajima-Zwanzig equation. For experimental settings, we give a full prescription for the reconstruction procedure, rendering the memory kernel operational. When simulation of an open system is the goal, we show how our procedure yields a considerable advantage for numerically calculating dynamics, even when the system is arbitrarily periodically (or transiently) driven or initially correlated with its environment. Namely, we show that the long-time dynamics can be efficiently obtained from a set of reconstructed maps over a much shorter time.
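The recursion at the heart of the transfer tensor method can be sketched directly: given the family of maps E_n as superoperator matrices, the transfer tensors are T_n = E_n − Σ_{m=1}^{n−1} T_{n−m} E_m, and the long-time state is obtained by convolving them with the recent history. A minimal sketch; the amplitude-damping maps used for illustration below are our own toy example of a Markovian family, for which all tensors beyond T_1 vanish:

```python
import numpy as np

def transfer_tensors(E):
    """Transfer tensors from dynamical maps E[0] = E_1, E[1] = E_2, ...:
    T_n = E_n - sum_{m=1}^{n-1} T_{n-m} E_m."""
    T = []
    for n, En in enumerate(E):
        Tn = En.copy()
        for m in range(n):
            Tn = Tn - T[n - 1 - m] @ E[m]
        T.append(Tn)
    return T

def propagate(T, rho_vec0, n_steps):
    """Long-time dynamics from the (short) memory of transfer tensors;
    memory length K = len(T), states are vectorized density matrices."""
    rhos = [rho_vec0]
    for n in range(1, n_steps + 1):
        rhos.append(sum(T[m - 1] @ rhos[n - m]
                        for m in range(1, min(n, len(T)) + 1)))
    return rhos
```

Because the maps alone determine the tensors, this realizes the claim above: no access to the underlying system-environment dynamics is needed, only the (tomographically reconstructed) maps over a short initial window.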
Figure 2A demonstrates the strength of the Bohmian approach in modeling quantum systems with strong interactions and nonlinear ion dynamics. For static and thermodynamic properties, we obtain results in very close agreement with DFT simulations. In addition, while the standard implementation of DFT involves the Born-Oppenheimer approximation, our Bohmian approach can treat electrons and ions nonadiabatically, retaining the full coupling of the electron and ion dynamics. As a result, we can investigate changes of the ion modes due to dynamic electron-ion correlations that are inaccessible to standard DFT. In contrast to a Langevin model, we have no free parameters and can thus predict the strength of the electron drag on the ion motion. Simulations based on time-dependent DFT (22) represent another way to avoid the Born-Oppenheimer approximation. However, this method is numerically extremely expensive, drastically limiting particle numbers and simulation times; at present, this limitation precludes results for the ion modes as presented here. The principal advantage of our approach is its relative numerical speed, which allows for the modeling of quantum systems with large numbers of particles. For comparison, the recent time-dependent DFT simulation of (23) models a system of 128 electrons at approximately 0.001 as per central processing unit (CPU) core and second. The
Over the last decade and a half, AMO systems (cold atoms and molecules, as well as trapped ions and photons) have been increasingly used to study strongly interacting many-body systems. This has reached a level where these systems can be used to engineer microscopic Hamiltonians for the purposes of quantum simulation [6, 70–76]. As a result, it has become important to explore the application of existing experimental techniques and theoretical formalisms from quantum optics in this new many-body context. Particularly strong motivation in this respect has come from the desire to engineer especially interesting many-body states, which often require very low temperatures (or entropies) in the system. An example is provided by low-temperature states of the fermionic Hubbard model, which can be engineered in optical lattices [75, 76]. Many sensitive states in this model arise from dynamics based on superexchange interactions, which appear in perturbation theory and can be very small in optical lattice experiments. A key current target of optical lattice experiments is the realisation of magnetic order driven by such interactions [77–80], with eventual goals to realise more complex spin physics or even pairing and superfluidity of repulsively interacting fermions . For these and other fragile states, understanding dissipative dynamics in the many-body system is then necessary both to control heating processes and to provide new means of driving the systems to lower temperatures.
The main advantage of POD is that the subspace produced by the process is linear, which allows us to easily project the full master equation onto the subspace and develop filtering equations for the density matrix as a whole. However, this same linearity limits the reduced models, as they are unable to capture the large deviations from the mean that characterize the nonlinear systems. Local Tangent Space Alignment produces a point-wise constructed nonlinear manifold, which can capture the nonlinear behavior. I was able to discover low-dimensional manifolds which capture the essential structure of the system dynamics. However, the lack of a functional form for the map between the manifold space and the density matrix space complicates the search for a reduced filter. In the case of phase bistability, I was able to fit the manifold coordinates by a linear combination of (linear and nonlinear) system observables, in order to derive a system-specific set of “Maxwell-Bloch” equations, which make a very good filter. This sort of fit, however, depends on having a good basis set of functions of observables with which to fit the data, which proved to be missing in the case of absorptive bistability. A structured way to create a much larger basis set might lead to success here, and in models of other quantum systems.
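The POD projection described above can be sketched on a toy linear system (the propagator `M` and state `x0` below are illustrative stand-ins, not the cavity models studied here): collect snapshots of the trajectory, take an SVD, keep the leading left singular vectors, and Galerkin-project the dynamics onto them:

```python
import numpy as np

def pod_basis(snapshots, r):
    """Leading r POD modes (left singular vectors) of a snapshot matrix
    whose columns are states; also returns the singular values."""
    U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
    return U[:, :r], s

# toy discrete-time linear dynamics x_{k+1} = M x_k, standing in for the
# projected (linear) master equation; M and x0 are illustrative
rng = np.random.default_rng(1)
n = 20
M = 0.95 * np.eye(n) + 0.01 * rng.standard_normal((n, n))
x0 = rng.standard_normal(n)
snaps = np.column_stack([np.linalg.matrix_power(M, k) @ x0 for k in range(50)])

V, s = pod_basis(snaps, r=5)
M_red = V.T @ M @ V                       # Galerkin-projected propagator
x_red = V @ (np.linalg.matrix_power(M_red, 49) @ (V.T @ x0))
x_full = np.linalg.matrix_power(M, 49) @ x0
rel_err = np.linalg.norm(x_red - x_full) / np.linalg.norm(x_full)
```

The linearity advantage mentioned above shows up directly: because the subspace is linear, the reduced propagator `M_red` is obtained by a single projection of `M`, with no point-wise manifold construction required.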
Open quantum systems come in two variants. The first variant (on which we will focus more) are scattering systems in which the dynamics allow particles to enter and leave (Newton, 2002; Messiah, 2014). One then normally defines a scattering region, outside of which particles move free of any external forces or interactions. This situation is realised (at least to some level of approximation) in many decay or radiation processes (Weidenmüller and Mitchell, 2009), but is also useful to describe phase-coherent transport in mesoscopic devices (Datta, 1997; Beenakker, 1997; Blanter and Büttiker, 2000; Nazarov and Blanter, 2009) or photonic structures (Cao and Wiersig, 2015). The second variant (which we will encounter only briefly) are interacting systems in which the studied dynamical degrees of freedom are influenced by other degrees of freedom in the environment (Breuer and Petruccione, 2002). This situation spans from the quantum-statistical foundations of thermodynamics (Gemmer et al., 2010) to the description of decoherence (Weiss, 2008), with ample applications to quantum optics (Carmichael, 2009), quantum-critical phenomena (Sachdev, 1999) and quantum information processing (Nielsen and Chuang, 2010).
We developed a general framework to define a thermodynamic metric for systems that evolve under a Lindblad master equation, in such a way that geodesics correspond to minimally dissipative paths in the quasistatic regime. The connection with the non-equilibrium free energy makes it extremely versatile, and an extension to more general settings seems foreseeable with only minor modifications. For example, it should be possible to extend our approach to generalised Gibbs ensembles [60–63] simply by redefining the non-equilibrium free energy (1) so as to take into account the extra conserved quantities. We can even consider a dynamics which equilibrates to a non-equilibrium steady state, simply by substituting the thermal state in the definition (4) of the entropy production rate with the appropriate steady state π(H) . For non-equilibrium steady states, an interesting future direction is to consider relations between this geometric approach and quantum thermodynamic uncertainty relations [42, 64]. Moreover, although we assume a Lindblad master equation, and therefore weak coupling, an extension to the strong-coupling regime is possible thanks to the reaction-coordinate mapping [65–72]. In this context, since the spectral density of the bath is modified under the mapping, we expect qualitative changes in the behaviour of the optimal trajectories in the strong-coupling regime, similar to what we showed in Fig. 3. The same framework can also be applied to other types of thermalisation, such as in collisional models . Another interesting research direction is to investigate the connection between optimal thermodynamic processes and shortcuts to thermalisation . Overall, it seems that the introduction of a metric structure on the space of thermodynamic states is not only natural, but worthy of further investigation.
atoms, which makes the transition |g⟩ → |r⟩ resonant provided a neighbor is already in state |r⟩ (Fig. 1(a)). This situation has already been explored in a two-level setup [12, 13, 16, 45], and very recently also in the considered three-state setting . We remark that it is crucial for infection to occur only locally, i.e. in the neighborhood of an already infected site. In our case, it is therefore important that the interactions decay sufficiently fast with distance. A variety of different behaviors are known to emerge in systems where they do not [46–49].
The appearance of an effective overall disorder-onset rate in dense quantum ensembles complicates the understanding of their behaviour. Although I have provided an estimate of the severity of this disorder's effect on dense-ensemble dynamics, the empirical fitting parameters in the previous chapter do not provide a sufficiently clear picture of the underlying physical processes involved. Therefore, I felt it would be beneficial to see whether I could find a model that described the dense-ensemble behaviour with a modified single-particle evolution scheme. If such a method were to exist, it would greatly reduce the computational time involved in investigating these systems, and it would also provide further understanding of which underlying processes cause this evolution. Understanding these processes would then allow one to predict how these systems will behave without needing a full, complicated numerical solution.
The other half of the pool included several Commercial-Off-The-Shelf (COTS) software products. Considering these products enabled us to evaluate how well the open-source offerings measured up against the commercial products using the selected criteria. The purpose was to see whether there were any critical functional capabilities included in COTS products that were not available in free, open-source solutions. It also helped to clarify and understand the value offered by commercial vendors and solutions. One could assume that, since commercial products have more dedicated paid resources for enhancement than a typical open-source project, the quality and capabilities of commercial products might trump free open-source solutions. A possible counter-argument to that assumption is that, due to the lack of source-code availability, fewer developers can take part in the development process to enhance the product and correct problems (i.e., bugs). The development of Linux strongly supports the counter-argument for operating systems.