a spherically decomposed 1+1 scheme. Following the work of Moncrief, we write down an action for perturbations in the space-time geometry, combine it with the action for a point particle moving through this space-time, and then obtain Hamiltonian equations of motion for the metric perturbations and the particle's coordinates, as well as their canonical momenta. The Hamiltonian equations for the metric perturbations and their conjugate momenta, for even and odd parities, reduce to the Zerilli-Moncrief and Regge-Wheeler master equations with source terms, which are gauge invariant, plus auxiliary equations that specify the gauge. The Hamiltonian equations for the particle, on the other hand, now include the effect of the metric perturbations, with these new terms derived from the same interaction Hamiltonian that had led to those well-known source terms. In this way, the space-time geometry and the particle motion can be evolved self-consistently, in principle in any gauge. However, the point-particle nature of our source requires regularization, and we outline how the Detweiler-Whiting approach can be applied. In this approach, a singular field can be obtained analytically using the Hadamard decomposition of the Green's function, while the regular field, which must be evolved numerically, is obtained by subtracting the singular field from the total metric perturbation. In principle, any gauge that admits the singular-regular field decomposition is suitable for our self-consistent scheme. In practice, however, this freedom is available only if the singular field is sufficiently smooth. For a singular field of minimal smoothness, one can adopt the Lorenz gauge condition, which we have recast into our formalism: for each l and m, we have 2 wave equations to evolve the odd- and even-parity gauge-invariant quantities and 8 first-order differential equations to fix the Lorenz gauge and determine all the metric components.
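The structure of such a 1+1 evolution can be illustrated with a toy numerical sketch. The Python code below is not the authors' implementation: the grid parameters, the bisection inversion of the tortoise coordinate, and the Gaussian-smoothed stand-in for the point-particle source are all assumptions made purely for illustration. It evolves an odd-parity, Regge-Wheeler-type master equation with a narrow forced source using a simple leapfrog scheme:

```python
import numpy as np

def regge_wheeler_potential(rstar, M=1.0, ell=2):
    """Odd-parity (Regge-Wheeler) potential for a Schwarzschild hole,
    evaluated on a tortoise-coordinate grid; r*(r) is inverted by bisection."""
    r = np.empty_like(rstar)
    for i, x in enumerate(rstar):
        lo, hi = 2.0 * M * (1.0 + 1e-12), 2.0 * M + abs(x) + 50.0 * M
        for _ in range(200):
            mid = 0.5 * (lo + hi)
            if mid + 2.0 * M * np.log(mid / (2.0 * M) - 1.0) < x:
                lo = mid
            else:
                hi = mid
        r[i] = 0.5 * (lo + hi)
    f = 1.0 - 2.0 * M / r
    return f * (ell * (ell + 1) / r**2 - 6.0 * M / r**3)

def evolve(nx=2001, nt=3000, L=400.0, M=1.0, ell=2):
    """Leapfrog evolution of psi_tt = psi_xx - V psi + S(t, x)."""
    x = np.linspace(-L / 2, L / 2, nx)
    dx = x[1] - x[0]
    dt = 0.5 * dx                          # CFL-stable time step
    V = regge_wheeler_potential(x, M, ell)
    # Gaussian-smoothed "point particle" fixed at x = 20M (illustrative only)
    S = np.exp(-0.5 * ((x - 20.0) / (2 * dx))**2) / (2 * dx * np.sqrt(2 * np.pi))
    psi_old = np.zeros(nx)
    psi = np.zeros(nx)
    for n in range(nt):
        lap = np.zeros(nx)
        lap[1:-1] = (psi[2:] - 2 * psi[1:-1] + psi[:-2]) / dx**2
        src = S * np.sin(0.1 * n * dt)     # periodic forcing
        psi_new = 2 * psi - psi_old + dt**2 * (lap - V * psi + src)
        psi_new[0] = psi[1]                # crude outgoing boundaries
        psi_new[-1] = psi[-2]
        psi_old, psi = psi, psi_new
    return x, psi
```

A production scheme would use the actual gauge-invariant source terms and proper outgoing boundary conditions; this sketch only shows the shape of the 1+1 evolution described above.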
One of the primary scientific requirements for LISA (the Laser Interferometer Space Antenna) is to map, in exquisite detail, the spacetime geometries of massive black holes (and, if they exist, other massive, compact bodies) by using the gravitational waves emitted by inspiraling white dwarfs, neutron stars, and small-mass black holes. This emission process has come to be called "Extreme Mass Ratio Inspiral" (EMRI, pronounced emm-ree). The possibility of making such maps from EMRI waves was discussed by Thorne in the early 1990s (e.g., in [1, 2]). In 1995 Ryan laid the first detailed foundation for such mapping: he showed that, when the massive, central body is general-relativistic, axisymmetric, and reflection-symmetric, and when the orbiting object is in a near-equatorial, near-circular orbit in the vacuum region surrounding the body, the full details of the central body's metric are encoded in (i) the phase evolution of the waves and also in (ii) the evolution of the frequencies (or phases) of wave modulation produced by orbital precession. Phinney has given the name "bothrodesy" to the mapping of a black hole's metric via EMRI waves, and bothrodesy has been identified, by the LISA International Science Team (LIST), as one of the prime goals for LISA. The initial phase of scoping out LISA's data analysis challenges for EMRI waves is now underway [6, 7].
As we are analyzing the final cycles before merger, having accepted that the bodies were compact, one might still ask whether Eq. 7 correctly describes the chirp mass in the non-Newtonian regime. In fact, for the last orbits, it does not: in Newtonian dynamics, stable circular orbits may exist all the way down to merger, and energy lost to gravitational waves drives the inspiral between them. In general relativity, however, close to the merger of compact objects (at least when one of the objects is much larger than the other) there are no such orbits past the innermost stable circular orbit (ISCO), whose typical location is given below. Allowed interior trajectories must be non-circular and "plunge" inwards (see pp. 911 of ). The changes in orbital separation and frequency in the final revolutions are thus not driven by the gravitational-wave emission given by Eq. 7. This is why we used f_max^GW at the peak, rather than the final frequency.
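The ISCO location invoked above follows from the Schwarzschild test-particle result r_ISCO = 6GM/c^2, with the quadrupolar gravitational-wave frequency equal to twice the orbital frequency. A minimal sketch (the constants and function names are our own, not from the source):

```python
import math

G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
C = 2.998e8        # speed of light, m/s
MSUN = 1.989e30    # solar mass, kg

def isco_radius(m_total_kg):
    """Schwarzschild ISCO radius r = 6GM/c^2 (test-particle limit), in meters."""
    return 6.0 * G * m_total_kg / C**2

def f_gw_isco(m_total_kg):
    """Gravitational-wave frequency at the ISCO, in Hz.

    The orbital frequency at r = 6GM/c^2 is c^3 / (2 pi 6^{3/2} G M);
    the dominant quadrupolar GW frequency is twice that.
    """
    return C**3 / (math.pi * 6.0**1.5 * G * m_total_kg)
```

For one solar mass this gives f_GW roughly 4.4 kHz; the frequency scales inversely with the total mass, which is why heavy binaries merge at lower frequencies.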
Experiments to detect gravitational waves began with Weber and his resonant mass detectors in the 1960s, followed by an international network of cryogenic resonant detectors. Interferometric detectors were first suggested in the early 1960s and the 1970s. A study of the noise and performance of such detectors, and further concepts to improve them, led to proposals for long-baseline broadband laser interferometers with the potential for significantly increased sensitivity [29–32]. By the early 2000s, a set of initial detectors was completed, including TAMA 300 in Japan, GEO 600 in Germany, the Laser Interferometer Gravitational-Wave Observatory (LIGO) in the United States, and Virgo in Italy. Combinations of these detectors made joint observations from 2002 through 2011, setting upper limits on a variety of gravitational-wave sources while evolving into a global network. In 2015, Advanced LIGO became the first of a significantly more sensitive network of advanced detectors to begin observations [33–36].
have been estimated by this means in about 20 galaxies. The most viable scenario for modeling active galactic nuclei involves a supermassive black hole accreting galactic matter from its vicinity. At the distance of the Virgo cluster, 15 Mpc, the sphere of influence of a supermassive black hole (SBH) would shrink to a projected radius of 0.07, well beyond the reach of any ground-based telescope and beyond even HST capabilities. Assuming an isotropic, spherically symmetric system, Sargent et al. detected a central dark mass
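The quoted angular scale can be reproduced with the standard sphere-of-influence estimate r_infl = G M_BH / sigma^2. The snippet below is our own illustration; the black hole mass and stellar velocity dispersion used (5 x 10^7 solar masses and 200 km/s) are assumed values chosen only to show the order of magnitude, not numbers taken from the source:

```python
G = 6.674e-11      # m^3 kg^-1 s^-2
MSUN = 1.989e30    # kg
PC = 3.086e16      # meters per parsec

def influence_radius_pc(m_bh_msun, sigma_kms):
    """Sphere-of-influence radius r_infl = G M_BH / sigma^2, in parsecs."""
    return G * m_bh_msun * MSUN / (sigma_kms * 1e3)**2 / PC

def projected_angle_arcsec(r_pc, distance_mpc):
    """Small-angle projection of a length r_pc at a distance in Mpc, in arcsec."""
    return r_pc / (distance_mpc * 1e6) * 206265.0
```

With these assumed inputs, r_infl is about 5.4 pc, which subtends roughly 0.07 arcsec at 15 Mpc, consistent with the scale quoted above.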
This thesis details research I have completed during my time as a graduate student at Caltech. My focus has been on the analytic treatment of problems in General Relativity. My research roughly divides into two broad topics, and so I have split this thesis into two parts. The first part, Part I, deals exclusively with investigations of perturbed black holes. Part II details a new program for the visualization of curved spacetime; the intended application for this program is to interface with numerical relativity and provide a means for drawing physical insights from the dynamics of simulated spacetimes. In the papers I include in the second part of this thesis, the visualization technique is introduced, developed, and applied to various simple, analytically tractable situations in relativity. The most interesting and intricate application is to the study of perturbed black holes, and in this sense the study of perturbed black holes forms a unifying theme for my work. As such, in this summary chapter, I will first introduce the theory of linearized perturbations of the exact spacetimes which represent black holes. This is done in Section 1.2. Afterward, I will briefly summarize the work presented in this thesis, discussing each of the two parts in turn. Section 1.3 deals with studies of the spectra of perturbed black holes given in Part I, Chapters 2, 3, and 4. Section 1.4 discusses the research I have participated in which develops and applies the new visualization methods, which make up Part II, Chapters 5, 6, 7, 8, and 9. Finally, in Section 1.5, I conclude this introductory chapter with some remarks on future work which may grow out of the topics detailed here.
law it follows that the dimensionless quantity M^2/K is the adiabatic invariant, which in principle can be quantized if one follows the Bekenstein conjecture. From the Euclidean action for the black hole it follows that K and A serve as dynamically conjugate variables. This allows us to calculate the quantum tunneling from the black hole to the white hole, and to determine the temperature and entropy of the white hole.
Technology, India, Science & Engineering Research Board (SERB), India, Ministry of Human Resource Development, India, the Spanish Ministerio de Economía y Competitividad, the Conselleria d'Economia i Competitivitat and Conselleria d'Educació, Cultura i Universitats of the Govern de les Illes Balears, the National Science Centre of Poland, the European Commission, the Royal Society, the Scottish Funding Council, the Scottish Universities Physics Alliance, the Hungarian Scientific Research Fund (OTKA), the Lyon Institute of Origins (LIO), the National Research Foundation of Korea, Industry Canada and the Province of Ontario through the Ministry of Economic Development and Innovation, the Natural Sciences and Engineering Research Council of Canada, Canadian Institute for Advanced Research, the Brazilian Ministry of Science, Technology, and Innovation, Russian Foundation for Basic Research, the Leverhulme Trust, the Research Corporation, Ministry of Science and Technology (MOST), Taiwan, and the Kavli Foundation. The authors gratefully acknowledge the support of the NSF, STFC, MPS, INFN, CNRS and the State of Niedersachsen, Germany, for provision of computational resources. This article has been assigned the document numbers LIGO-P150914 and VIR-0015A-16.
wave packet travels faster than c in the deep X-ray spectral region where total external reflection occurs and is also near the fundamental edge in some ionic materials; 3) in electron-electron interactions, the phase can act as a hidden variable to determine the scattering angle and energy loss in a single interaction; 4) in the stable wave packet, relativity implies a 5-dimensional space-time mass; 5) consistency requires the antiparticle to the electron to have negative mass with positive charge; 6) a graphic Hamiltonian to describe the relativistic dynamics of an electron in magnetic fields is derived; 7) measurement of the stable wave packet is described as partly probabilistic and partly determined; 8) an inconsistency in the dynamics of Dirac's positron is graphed, i.e., when its momentum k is equal to the rest mass m_0; 9) new conservation laws are described.
Although most of this manuscript is framed in the language of a general quantum communication problem, the topics discussed herein directly relate to computation. While the individual computational steps will be executed in the same frame of reference, to implement ideas like distributed computing, we need to consider scenarios where the quantum information is generated in a different locale and/or frame of reference from where it is processed or stored. As an example, since quantum computers have proven to be expensive and fragile, it is not difficult to imagine that, in the near term, there may be a few, central, quantum computers which are responsible for doing the bulk of the world's quantum computation. In such a scenario, it would be necessary to transfer the quantum information across long distances, most likely via satellite. In such an instance, the transferred information would be rotated via Wigner rotations. This discussion is especially germane to "blind computation" (Fitzsimons 2017), where it is necessary to send actual quantum states, not just a classical signal telling the quantum computer how to construct the states. Lastly, we believe the effects we study here serve as an exploration into the non-standard dynamics that occur when we discard some of the usual assumptions made in quantum information science. For example, a quantum computer on Earth's surface is moving through a gravitational field, which causes the computational states to rotate slightly during each step of the process. Lanzagorta and Uhlmann (2016) showed that these effects can increase the computational complexity of quantum algorithms. As such, Grover's search algorithm can turn exponential, negating the computational speed-up.
In Chapter 2 a pseudospectral numerical code is applied to a set of analytic or near-analytic solutions to Einstein's equations which comprise a testbed for numerical-relativity codes. We then discuss methods for extracting gravitational-wave data from numerical simulations of black-hole binary systems, and introduce a practical technique for obtaining the asymptotic form of that data from finite simulation domains in Chapter 3. A formula is also developed to estimate the size of near-field effects from a compact binary. In Chapter 4 the extrapolated data is then compared to post-Newtonian (PN) approximations. We compare the phase and amplitude of the numerical waveform to a collection of Taylor approximants, cross-validating the numerical and PN waveforms, and investigating the regime of validity of the PN waveforms. Chapter 5 extends that comparison to include Padé and effective-one-body models, and investigates components of the PN models. In each case, a careful accounting is made of errors. Finally, we construct a long post-Newtonian–numerical hybrid waveform and evaluate the performance of LIGO's current data-analysis methods with it. We suggest certain optimizations of those methods, including extending the range of template mass ratios to unphysical ranges for certain values of the total mass, and a simple analytic cutoff frequency for the templates which results in nearly optimal matches for both Initial and Advanced LIGO.
Abstract Statistical classical mechanics and quantum mechanics are developed and well-known theories that form a basis for modern physics. Statistical classical mechanics enables the derivation of the properties of large bodies by investigating the movements of the small atoms and molecules which comprise these bodies using Newton's classical laws. Quantum mechanics defines the laws of movement of small particles at small atomic distances by considering them as probability waves. The laws of quantum mechanics are described by the Schrödinger equation. The laws of such movements are significantly different from the laws of movement of large bodies, such as planets or stones. The two described theories are well known and have been well studied. As these theories contain numerous paradoxes, many scientists doubt their internal consistency. However, these paradoxes can be resolved within the framework of existing physics without the introduction of new laws. To make the paper clear for the inexperienced reader, we include certain necessary basic concepts of statistical physics and quantum mechanics without the use of formulas. Exact formulas and explanations are included in the Appendices. The text is supplemented by illustrations to enhance understanding. The paradoxes underlying thermodynamics and quantum mechanics are also discussed, and approaches to resolving them are suggested. The first approach relies on the influence of the external observer (environment), which disrupts the correlations in the system. The second approach is based on the limits of the self-knowledge of the system for the case in which both the external observer and the environment are included in the considered system. The concepts of observable dynamics, ideal dynamics, and unpredictable dynamics are introduced. The phenomenon of complex (living) systems is contemplated from the point of view of these dynamics.
From a survey of the literature, it is found that no work is available on the quantum volume of discrete space and its growth rate, or on the size of quantum primordial black holes and their gravitational field. Of course, Pandey has recently derived a similar expression for the quantum volume and its growth rate using a different approach from the present one. In the present theoretical approach, an attempt has been made to find the quantum volume of discrete space using the first-rank tensorial Einstein-Gauss gravitation law, Einstein's mass-energy equivalence, and Heisenberg's uncertainty principle. The size of a quantum primordial black hole of quantum mass and its gravitational field have been calculated and reported.
seems to point in this direction. The idea is that the black hole evaporation process seems to imply that unitarity and the equivalence principle cannot both be true at the same time. This is because, on one side, for Hawking radiation to occur, the emitted particles must get entangled with the "twins" that fall into the black hole. On the other, if information is to come out with the radiation, then each emitted particle must also get entangled with all the radiation emitted before it. However, the monogamy of entanglement holds that a quantum system cannot be fully entangled with two independent systems at the same time, and so unitarity and the equivalence principle cannot coexist. In  it is suggested that we must forego the equivalence principle, allowing the event horizon to become a firewall. We, however, find it much wiser to do without unitarity. After all, the predictions of quantum mechanics are consistent with what we in fact perceive only after unitarity is broken. What we propose, then, is to study the black hole formation and evaporation process from the point of view of a quantum theory which incorporates, at the fundamental level, some kind of non-unitary evolution: a theory which allows information to be lost not only in exotic scenarios such as black holes but also, albeit to a smaller degree, in all situations and at all times. Of course, theories with such characteristics already exist in the form of objective-collapse or dynamical-reduction models (see [6, 7] for a general overview). The motivation behind such theories is to construct an alternative quantum formalism which solves the measurement problem. In order to do so, they modify the dynamical equation of the standard theory with the addition of stochastic and nonlinear terms, such that the resulting theory is able to deal with both microscopic and macroscopic systems. In the next section we describe how this type of model may help in the solution of the information loss paradox.
One remarkable aspect of these departures from classical relativistic symmetries is the possibility that the deformed kinematics they introduce might lead to experimentally testable consequences. We focused in particular on scenarios that predict Planck-scale suppressed modifications of the relativistic energy-momentum dispersion relation, which may lead to a violation of Lorentz symmetry or be associated with "quantum" deformations of relativistic symmetries. We saw, in Chapter 2, how observations of GRBs and UHECRs might carry signatures of such quantum space-time models. In particular we focused on the threshold anomalies that a specific kinematical framework based on an MDR might induce in the chain of production of very high energy neutrinos associated with UHECRs. We showed that different choices of the parameters of the model lead to different modifications of the bound, proposed by Bahcall and Waxman, on the flux of such high energy neutrinos.
Although our analysis breaks down in the limit of an extremal black hole, the results (44) and (45) suggest that there may be information loss even in this case. This is because it is no longer true in general at the quantum level that the entropy is proportional to the area of the horizon (4). There will be information loss (entropy increase) even in the absence of any finite-temperature effects, if there is entanglement with modes beyond the horizon at the quantum level, as we have illustrated. This observation is related to the phenomenon of entropy generation during inflation in cosmology, which may also be regarded as a non-equilibrium process associated with information loss beyond the Hubble horizon. In the context of non-critical strings, we have discussed in ref.  how such an information loss can lead to a stochastic framework for time evolution during such non-equilibrium processes. The stochasticity of the time evolution in the Liouville string, where the time variable is identified with an RG scale, can be derived from some specific properties of the RG evolution in two-dimensional spaces (world sheets) [17, 3]. One can hope that a similar framework may be developed here, identifying a renormalization group (UV) scale log ε̂ with time. However, we are not yet in a position to prove that a similar stochasticity characterizes the RG evolution in the four-dimensional case. Nevertheless, the presence of logarithmic infinities in the (entanglement) entropy of black holes does seem to be a generic phenomenon, independent of the dimensionality of space-time, in view of the fact that they are present even in two-dimensional models.
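For reference, the classical-level area-entropy relation invoked above, S = k_B A c^3 / (4 G ħ), together with the Hawking temperature T = ħ c^3 / (8 π G M k_B) of a Schwarzschild hole, can be evaluated directly. This is a small sketch with our own constant definitions, included only to make the classical relation that the quantum corrections modify concrete:

```python
import math

G = 6.674e-11       # m^3 kg^-1 s^-2
C = 2.998e8         # m/s
HBAR = 1.0546e-34   # J s
KB = 1.381e-23      # J/K
MSUN = 1.989e30     # kg

def hawking_temperature(m_kg):
    """Hawking temperature of a Schwarzschild black hole, in kelvin."""
    return HBAR * C**3 / (8.0 * math.pi * G * m_kg * KB)

def bh_entropy_over_kb(m_kg):
    """Bekenstein-Hawking entropy S/k_B = A c^3 / (4 G hbar).

    With horizon area A = 16 pi G^2 M^2 / c^4 this reduces to
    4 pi G M^2 / (hbar c), a dimensionless number.
    """
    return 4.0 * math.pi * G * m_kg**2 / (HBAR * C)
```

For a solar-mass hole this gives T of order 6 x 10^-8 K and S/k_B of order 10^77; the quantum-level entanglement corrections discussed above are departures from this proportionality to the area.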
In order to elaborate what is involved, let us consider the Penrose diagram in figure 1 (top left) describing a gravitationally collapsing body. It is clear from the figure that there are four distinct spacetime regions, marked A, B, C and D. Of these, region D, which is inside the collapsing body and outside the event horizon, is the least interesting one for our purposes. Even though the time dependence of the metric will lead to particle production in this region, we do not expect any universal behaviour here; the results will depend on the details of the collapse. Let us next consider region C, which is outside both the collapsing body and the event horizon. This region is of primary importance, and has been extensively investigated in the literature, in connection with black hole radiation. This is schematically illustrated in figure 1 (top right) by an outgoing null ray that straddles just outside the horizon and escapes to future null infinity. The thermal nature of the black hole radiation arises essentially from the exponential redshift suffered by this null ray as it travels from just outside the collapsing matter to future null infinity. While this ray is inside the collapsing matter during part of its travel, the details of the collapse are sub-dominant to the effect of the exponential redshift at late times. We can investigate the black hole evaporation scenario vis-a-vis different kinds of observers in this region: e.g., asymptotic and non-asymptotic static observers, radial and inspiraling free-fallers, observers moving in circular orbits, etc.; all these cases have indeed been studied in the literature (see, for some recent work, refs. [19–21], which contain references to earlier papers). In this paper too, we will briefly discuss the physics in this region, since recovering the standard results provides a 'calibration test' for our approach and calculations.
II. EVOLUTION OF SPINNING BINARY SYSTEMS We briefly review the current literature regarding the formation and evolution of spinning binary systems. The available literature focuses mainly on neutron star–black hole (NS-BH) binaries (rather than BH-BH binaries). Later we shall show that the template bank used in this search is most sensitive to binaries with unequal masses, such as NS-BH binaries. It is likely that the formation of BH-BH and NS-BH (and indeed NS-NS) systems is qualitatively similar and that the discussion here will be relevant to all cases. A typical NS-BH evolution would involve two main sequence stars in binary orbit. As it evolves away from the main sequence, the more massive star would expand until it fills its Roche lobe before transferring mass to its companion. The more massive body would eventually undergo core collapse to form a BH, and the system as a whole would become a high-mass x-ray binary. As the second body expands and evolves, it would eventually fill its own Roche lobe and the binary would then go through a common-envelope phase. This common-envelope phase, characterized by unstable mass transfer, would be highly dissipative and would probably lead to both contraction and circularization of the binary's orbit. Accretion of mass can allow the BH to spin up. It has been argued that the common-envelope phase, and the associated orbital contraction, is essential in the formation of a binary which will coalesce within the Hubble time. Finally, the secondary body would undergo core collapse to form a NS (or, if massive enough, a BH). Prior to the supernova associated with the core collapse of the secondary body, we would expect the spin of the BH to be aligned with the binary's orbital angular momentum. However, the "kick" associated with the supernova of the secondary body could cause the orbital angular momentum of the post-supernova binary to become tilted with respect to the orbital angular momentum of the pre-supernova binary.
Since the BH would have a small cross section with respect to the supernova kick, we expect any change to the direction of its spin angular momentum to be negligible, and that the BH spin would be misaligned with respect to the post-supernova orbital angular momentum. The misalignment between the spin and orbital angular momentum is expected to be preserved until the system becomes detectable to ground-based interferometers.
The reduced overall noise floors of the updated advanced detectors place more stringent requirements on disturbances introduced by potential squeezing sub-systems than previous detectors did. The goal is improvement in absolute sensitivity. A range of 'technical' and fundamental noise floors limit the sensitivity of interferometric gravitational-wave detectors over different frequency bands of the detectors' sensitivity range. For example, see the previous Figure 2.6 for a breakdown of the contributing noises predicted to limit the Advanced LIGO detector sensitivity. The limits to the effective sensitivity improvement from squeezing are determined principally by: the degree of squeezing; the squeezed field coupling efficiency in and out of the detector (total losses); the quadrature (phase) fluctuations that project anti-squeezing into the measurement quadrature; and the coupling of relative motion via backscattered light. Together these couple environmental noise into the detection channels of the interferometer, limiting the benefits of squeezed state injection. Much of the environmentally induced disturbance coupling in through these channels originates in the relative motion, pointing and cavity length noise of the OPO squeezer. Although locking control loops offer some significant improvements in sensing and cancelling these disturbances, they are subject to a number of bandwidth and lock-point error limitations [144, 145]. Passive isolation of this environmental noise from the squeezed vacuum injection sub-system is essential for the best possible performance below the noise floors of the instrument. For this reason, installation of a squeezed light source within the interferometer vacuum envelope on an isolated stage has been proposed as a way to maximise the benefits of squeezing.