Even if the Karhunen-Loève compression were computationally feasible, it suffers from another drawback: although it is optimal at maximizing the expected SNR, for typical CMB behaviors it is poor at retaining the information we actually want to preserve in the compression. What we desire in a compression method is to reproduce the uncompressed spectrum with as few estimators as possible, which in practice is very different from maximizing the SNR. The problem is essentially one of dynamic range. For an interferometer, the response of a baseline to the CMB falls like one over the baseline length squared, because long baselines have more fringes: a long baseline averages over many more independent patches on the sky than a short one. On top of this, Cℓ generally falls rather quickly with increasing ℓ, so the intrinsic signal on a long baseline is much weaker than that on a short baseline. These two factors combined can easily produce factors of several hundred between the expected variance on a short baseline and that on a long baseline. To see why this causes optimal subspace filtering to perform very poorly, picture a simple experiment consisting of two pairs of visibilities, one pair at low ℓ and one at high ℓ, where the visibilities within each pair sample almost the same CMB. Clearly, we would like our compression to keep one number for each pair, roughly corresponding to the average value within that pair. If the measurements within a pair are sufficiently similar, there is essentially no other information contained in them. If they are slightly different, though, the K-L transform will find some power in the modes corresponding to the differences. So here is the problem:
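The dynamic-range failure can be made concrete with a small numerical sketch (illustrative only, not from the original analysis; the variance ratio of 400 and the within-pair correlation of 0.99 are assumed numbers):

```python
import numpy as np

# Two pairs of visibilities (hypothetical numbers for illustration).
# Pair A (short baseline, low ell): large signal variance.
# Pair B (long baseline, high ell): signal variance 400x smaller.
rho = 0.99                        # visibilities within a pair sample almost the same CMB
pair = np.array([[1.0, rho], [rho, 1.0]])
S = np.zeros((4, 4))
S[:2, :2] = 400.0 * pair          # pair A signal covariance
S[2:, 2:] = 1.0 * pair            # pair B signal covariance
N = 0.25 * np.eye(4)              # uniform noise, so KL modes = eigenmodes of S

# Karhunen-Loeve: rank modes by their signal-to-noise eigenvalue
snr, modes = np.linalg.eigh(np.linalg.inv(N) @ S)
order = np.argsort(snr)[::-1]

# The two top-ranked modes are the SUM and the DIFFERENCE of pair A:
# the nearly information-free difference mode of the strong pair
# outranks the sum mode of the weak pair, so truncating to two
# modes discards pair B entirely.
```

Keeping the best two modes by SNR therefore retains both modes of the low-ℓ pair and none of the high-ℓ pair, exactly the pathology described above.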
Ben Mazin’s no-nonsense point of view helped me keep things in perspective over the years. Dave Vakil has become a true friend, someone I can always count on, even though we almost killed each other when we first arrived. Alice Shapley is an irreplaceable person in my life, and I never would have made it through 6 years without her friendship. Her caring, enthusiastic support and encouragement have meant so much to me, and she is one of the most fun people I know. Ashley Borders preserved my sanity with vital ice cream, singing, and chamber music breaks, and she has an uncanny way of knowing when I need to talk. Megan and Bryan Jacoby have both provided support and friendship, and Bryan was a real trouper for putting up with me and Jon in the office. Jackie Kessler’s cheerful late-night company helped keep me from really losing it those last few weeks. The St. Andrew’s 5 o’clock choir helped to remind me that there was life outside of grad school, and I thank them for their prayers and support.
The instrumental polarization measurements described in the preceding section sample the response of the system at the boresight of the primary beam. Observations of extended polarized emission, however, require an understanding of the response of the system across the entire primary beam; any departures from uniform behavior would affect the interpretation of the cross-polarized visibilities for extended sources. These abnormalities might arise, for example, from the illumination pattern of the feeds. We measured the off-axis characteristics of the beams with a series of instrumental polarization observations of 3C279 at the beam half-power points in the four cardinal directions and compared these values to the instrumental polarization measured at the beam center. The initial incarnation of CBIPOLCAL operated under the assumption that the calibrator is a point source at the phase center, so we derived the leakages at the half-power points by forcing CBIPOLCAL to assume that the calibrator is at the phase center. Under this assumption, pointings at the beam half-power positions introduce phase errors in the visibilities, so a new command, offset, was added to CBIPOLCAL which allows the user to supply a position offset to correct the phases. No analogous correction is required for the amplitude; since CBIPOLCAL employs the fractional polarization rather than the absolute polarization, the rolloff in the primary beam should affect P and I equally. In fact, the use of the fractional polarization eliminates an additional layer of complication: it allows us to ignore the ∼ 30% change in width of the primary beam across the 26–36 GHz band. This approach relegates
ter output versus FRM bias current, displays the same hysteresis as illustrated in Figure 2.6 when the telescope observes a polarized source.
Although the FRMs are successful polarization modulation devices with sufficiently low instrumental polarization, the death sentence for FRMs in BICEP came in a noise performance showdown between FRM-demodulated and PSB-differenced data. Stability data taken shortly before deploying the instrument indicated that the FRMs introduced excess white noise, and the power spectral densities of the FRM-demodulated bolometer timestreams were significantly higher than those from differenced PSBs. As a result, all but six FRMs were removed from the BICEP focal plane for the 2006 observing season, and the remaining six were removed during the 2006–2007 Austral summer. The simple, yet effective, PSB differencing strategy has worked extremely well for BICEP. Upcoming CMB polarization experiments seeking to probe lower tensor-to-scalar limits may benefit from polarization modulation, as systematics become dominant limiting factors. BICEP2 and SPIDER, which are both small-aperture experiments, plan to use a rotating waveplate at the aperture so that only polarized emission from the sky is modulated.
3. Why Is ∆T/T ≈ 10⁻⁵?
As discussed in the previous section, metric fluctuations in the cosmic seed eventually lead to the production of primordial dark energy stars with masses ∼ 10³ M⊙. The size of these dark energy stars is many orders of magnitude smaller than the overall size of the cosmic seed, which accounts for the overall isotropy and homogeneity of the observable universe. However, density fluctuations during the process of break-up of the cosmic seed will lead to large variations in the densities of the quantum droplets on length scales small compared to the horizon radius. There are two obvious sources of the metric fluctuations in the cosmic seed: 1) density and velocity fluctuations in the collapsing cloud that was the precursor to the cosmic seed would have left an imprint in the cosmic seed, and 2) fluctuations associated with the quantum critical layers. We believe that the second possibility is the more important. Furthermore, because the thickness of the quantum critical layers is small compared to the horizon size, the spectrum of metric fluctuations generated by these quantum critical layers should be approximately independent of k for length scales larger than the horizon size ct₀. By the well-known arguments of Harrison and Zeldovich this leads to a density fluctuation spectrum of the form (4). Furthermore, the space-time inside a horizon surface is approximately described by the interior de Sitter metric, where g₀₀ decreases from −1 farthest away from the horizon to nearly zero near the quantum critical layers. Thus we expect that the spectrum of fluctuations in the density of quantum droplets at the time of break-up of the cosmic seed has the form (4) with ε₀ ∼ 1; i.e., δρ/ρ ≈ (ct₀/R₀)². On the other hand, observations of variations in the temperature of the CMB, as well as theories of the evolution of the large-scale structures found in the present-day universe, are consistent with ε ≈ 10⁻⁵.
Remarkably our theory offers a simple explanation for this difference.
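To make the implied scale hierarchy explicit (illustrative arithmetic, not part of the original text), matching the predicted amplitude to the observed one gives

```latex
\frac{\delta\rho}{\rho}
  \approx \left(\frac{ct_0}{R_0}\right)^{2}
  \approx 10^{-5}
  \quad\Longrightarrow\quad
  \frac{ct_0}{R_0} \approx 3\times 10^{-3},
```

i.e., the horizon scale at break-up need only be a few hundred times smaller than the size R₀ of the cosmic seed to account for the observed amplitude.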
2 Stratospheric Balloons
Stratospheric balloons are near-space carriers able to reach altitudes of ∼ 40 km (corresponding to an ambient pressure of ∼ 3 mbar). Conventional stratospheric balloons are not sealed, and the gas pressure inside the balloon is the same as the surrounding atmospheric pressure. These balloons provide flight durations of up to 2 months (long-duration balloons, LDBs) if the solar illumination is stable. Recently operated super-pressure balloons are sealed and provide much longer flight durations (ultra-long-duration balloons, ULDBs). The lifted payload masses range from a few kg to more than two tons, depending on the balloon size, which ranges from a few thousand cubic meters at float to more than 1 million cubic meters. The heaviest payloads (2 tons) are larger and heavier than what can reasonably be carried by a satellite, at a mission cost roughly 100 times lower than for a satellite mission. Once launched, operated, and terminated, a stratospheric balloon payload is recovered and can be flown again, for improved measurements, after a relatively short time. This allows a staged development of the instrumentation, which is not possible in the case of satellite missions. Stratospheric balloons are thus important as precursors of satellite missions: they allow the community to test and qualify innovative space instrumentation and improve it before using it in space. Last but not least, stratospheric missions are a very good way to educate and prepare students and young researchers, who can participate directly in all phases of the mission: from the idea, to optimization, planning, construction, testing, calibration, flight, and data analysis.
1.1 Big bang models
We are entering an era of “precision cosmology” in which our basic models of the universe are being refined. Our picture of the universe has evolved remarkably over the past century, from a view limited to our Galaxy to a cosmos with billions of similar structures. Today, cosmology appears to be concerned with improving estimates of the parameters that describe our cosmological models rather than searching for alternatives. Recent observations of the large-scale structure (e.g., 2dFGRS; Percival et al., 2001) and the Wilkinson Microwave Anisotropy Probe (WMAP; Bennett et al., 2003a) observations of the cosmic microwave background (CMB) appear to confirm our key ideas on structure formation. However, there is the suggestion that our confidence may be misplaced. The WMAP observations, for example, have thrown up a number of unusual features (Chiang et al., 2003; Efstathiou, 2003; Naselsky, Doroshkevich & Verkhodanov, 2003; Chiang & Naselsky, 2004; Eriksen et al., 2004b; Land &
stochastic laws of CMB radiation are invariant with respect to the choice of coordinates. There is some (quite inconclusive) evidence from WMAP data that isotropy may fail; that is, some authors have suggested that data on CMB radiation may show asymmetries which would be inconsistent with isotropy [see, e.g., Park (2004), Hansen et al. (2004)]. The existence of these asymmetries remains highly disputed, though, and it actually provides yet another intriguing area for statistical research. It is in fact hotly debated whether these asymmetries should be ascribed to experimental features or to truly cosmological causes. From the theoretical point of view, cosmological models that would produce asymmetries do indeed exist, but they are highly nonstandard, ranging from globally rotating solutions of Einstein’s field equations to unconventional topological structures for the whole Universe. Much more methodological and applied research is needed in this area, but the question will most probably remain unsolved at least until the first releases of Planck data are available in a few years’ time. By now, it is fair to say that the vast majority of cosmologists are still sticking to the isotropy assumption, and this is what we shall do in the present paper. Some of the procedures we shall consider in Section 4 for testing non-Gaussianity, however, are known to also have power against nonisotropic behavior; see, for instance, the local curvature approach below.
Key words: methods: statistical – cosmic microwave background – cosmological parameters – cosmology: observations – cosmology: theory – large-scale structure of Universe.
1 INTRODUCTION
A simultaneous analysis of the constraints placed on the cosmological parameters by different kinds of data is essential because each probe typically constrains a different combination of the parameters. By considering these constraints together, one can overcome any intrinsic degeneracies to estimate each fundamental parameter and its corresponding random uncertainty. The comparison of constraints can also provide a test for the validity of the assumed cosmological model or, alternatively, a revised evaluation of the systematic errors in one or all of the data sets. Recent papers that combine information from several data sets simultaneously include Bond & Jaffe (1998), Gawiser & Silk (1998), Lineweaver (1998), Bahcall et al.
In figure 2 we show our results for network evolution and in figure 3 the corresponding CMB anisotropies computed in our modified version of CMBact. In both figures we also show the corresponding results of the original CMBact code for comparison. Regarding figure 2, we note that the accuracy of CMBact is comparatively worse at low redshifts; this explains why the effects of the matter-to-acceleration transition seemingly become visible around redshifts of a few, while the onset of acceleration occurs below z = 1. This point is not crucial for our analysis, since our goal is to make a comparative study of the effects of the additional degrees of freedom on the strings. Moreover, these low redshifts have a relatively small effect on the overall CMB signal. Nevertheless, this is an issue which should be addressed if this code is to be used for quantitative comparisons with current or forthcoming CMB data.
7.3 Targeted Cluster Observations
Since November 2006, we have observed nine clusters and one blank test field at 150 GHz with Bolocam at the CSO, and each cluster has been observed for a total of approximately ten hours. We have been granted twenty nights per year for the next three years to continue this project, and we expect to image thirty more clusters assuming a weather-related observing efficiency of ≃ 50%. We scanned the telescope in a Lissajous pattern, which was essentially 100% efficient at mapping the center of the cluster and 50% efficient at a radius of ≃ 5 arcminutes. To remove the atmospheric noise from these data we have used the quadratic subtraction algorithm, masking off the data within 2 arcminutes of the cluster center as described in Section 5.3.1. This removal algorithm acts like a spatial high-pass filter on our data, with an effective length scale given by the eight-arcminute diameter of the focal plane. From simulations, we estimate that the peak signal from a massive cluster is reduced by ≃ 2/3 due to our atmospheric noise removal; see Fig. 7.4. In the future we plan to use an iterative map-making procedure in order to recover more of the cluster flux.
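As a rough illustration of how a masked quadratic subtraction of this kind works, here is a minimal sketch (hypothetical detector geometry and signal amplitudes, not the actual Bolocam pipeline):

```python
import numpy as np

# Hypothetical focal-plane geometry: 100 detectors at random arcmin offsets.
rng = np.random.default_rng(0)
n_det = 100
x, y = rng.uniform(-4, 4, n_det), rng.uniform(-4, 4, n_det)

def quadratic_subtract(samples, x, y, mask):
    """Fit a 2-D quadratic in focal-plane position to one time sample,
    using only the unmasked detectors, then subtract it from all of them."""
    A = np.column_stack([np.ones_like(x), x, y, x * y, x**2, y**2])
    coef, *_ = np.linalg.lstsq(A[mask], samples[mask], rcond=None)
    return samples - A @ coef

# Exclude detectors within 2 arcmin of the (centered) cluster from the fit.
mask = np.hypot(x, y) > 2.0
atmosphere = 5.0 + 0.3 * x - 0.2 * y + 0.05 * x**2   # smooth common-mode signal
cluster = -1.0 * np.exp(-(x**2 + y**2) / 2.0)        # compact negative SZ source
cleaned = quadratic_subtract(atmosphere + cluster, x, y, mask)
# The smooth atmospheric template is removed almost exactly, while the
# compact cluster signal (masked from the fit) largely survives.
```

Because the quadratic template only tracks structure on scales comparable to the focal-plane diameter, the subtraction behaves as the spatial high-pass filter described above.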
(9 km) with further modifications and testing, and careful operation from a suitable 8–10 000 feet operating base.
There are several features of Airlander 10 that lend it well to CMB observations. Airlander 10’s potential payload mass and flight duration are both comparable to LDB, but the turnaround time for Airlander 10 flights could be days rather than years, with minimal risk of payload damage during landing. As a result, Airlander 10 has the potential to carry LDB-esque CMB observatories, offsetting the reduction in altitude with increased integration time. HAV indicate that Airlander 10’s basic design could be modified to mount a CMB telescope on the aircraft’s lower fins, yielding an instantaneous field of view of around a quarter of the sky. Should civil regulations for remote piloting of Airlander 10 be in place by 2019, HAV anticipate that a remotely piloted flight would be feasible within 5 yr. We assume that such a scenario is plausible without attempting to assess the technical challenges associated with deploying a telescope on this platform. From here onwards, we refer to this modified aircraft as AirlanderCMB. AirlanderCMB’s utility in obtaining large-scale polarization information rests on the frequency range attainable at commercial airline altitudes: here we investigate this in detail.
Actually, the first part – how fast were they going – was the easy part. Each chemical element of which stars are composed has a very particular set of colors that it emits.
When Hubble compared these colors to the same ones observed on Earth, he noticed that they were very slightly shifted toward the red, the long-wavelength (low-frequency) end of the visible spectrum. He recognized that this was due to the so-called Doppler shift caused by the movement of the stars as they emitted the light. Just as the whine of truck tires drops in frequency as the truck passes by and starts to recede into the distance, the light from the stars shifted to lower frequencies as the stars receded. Figuring out the distance to the stars was harder. Not all stars and galaxies are the same size, so their apparent brightness is not a firm indicator of their distance; in fact, this remains the hardest part of the problem. Hubble used so-called Cepheid stars, identifiable by the characteristic variability of their luminosity, as a sort of “standard candle.” Actually, Hubble’s data were so scattered in 1929 that it is amazing he could have drawn the conclusions he did. More recently, scientists here at Vanderbilt, among others, have been using Type Ia supernovas, all of which are very nearly the same brightness, as standard candles, and the results have improved.
Syst. error   0.71 × 10⁻³
Total error   0.92 × 10⁻³
In order to reduce spurious contamination of the Stokes polarization parameters, LiteBIRD takes advantage of an L2-specific scanning strategy with a relatively large precession angle, giving a high level of redundancy of sky observations on many different timescales and large-angle coverage for most of the pixels on the sky. This strategy has the advantage of reducing the level of intensity-to-polarization systematics, and it also allows many different consistency tests. The use of an HWP, while significantly reducing the impact of 1/f noise in the polarization maps, also mitigates important systematic errors that were observed in Planck data, since it allows instantaneous estimation of the Stokes parameters.
2. The Square Kilometre Array
The Square Kilometre Array (SKA) is a huge, new-technology radio telescope that, with an extension of ∼ 3000 km and a collecting area of ∼ 1 km², has been designed to provide a significant breakthrough in our knowledge of the Universe. 25 Moreover, its extremely high sensitivity and resolution can contribute to setting new constraints on CMB spectral distortions and dissipation processes beyond current limits. SKA will observe in the radio band with different antenna concepts and a continuous frequency coverage from 50 MHz to 14 GHz, allowing all-sky surveys and redshift-depth observations. 26 The project will consist of two consecutive phases, the Phase 1
In this note we investigate the effects of perturbations in a dark energy component with a constant equation of state on large-scale cosmic microwave background anisotropies.
The inclusion of perturbations increases the large-scale power. We also investigate more speculative dark energy models with w < −1 and find the opposite behaviour. Overall, the inclusion of perturbations in the dark energy component increases the degeneracies. We generalise the parameterization of the dark energy fluctuations to allow for an arbitrary constant sound speed and show how constraints from cosmic microwave background experiments change if this is included. Combining cosmic microwave background data with large-scale structure, Hubble parameter, and Supernovae observations, we obtain w = −1.02 ± 0.16 (1σ) as a constraint on the equation of state, which is almost independent of the sound speed chosen. With the presented analysis we find no significant constraint on the constant sound speed of the dark energy component.
cluster observables: it is an essential ingredient to obtain unbiased results given the presence of selection effects.
Finally, in Chapter 4 we calibrated the bias and the scatter of our preferred cluster CMB lensing mass observable, the CMB lensing signal-to-noise, with mock observations obtained from an N-body simulation. We also quantified deviations from the commonly-assumed log-normality in the scatter. This chapter demonstrates how crucial it is to understand the response of a given cluster mass observable to realistic mass distributions of galaxy clusters and their environments, with simulations being a convenient way to do this. In particular, we argued that the bias in the observable must be accurately determined, as it is degenerate with the cluster mass and therefore with parameters such as Ω_m and σ_8. The intrinsic scatter has to be quantified as well if accurate cosmological constraints are to be obtained. In addition, in Chapter 4 we emphasised that these parameters must be determined for the precise specifications of the observable as it is used in the real analysis. As we illustrated quantitatively, changes in, for example, the convergence model used in the matched filter (e.g., NFW profile vs. NFW profile + two-halo term), or in the experimental specifications, significantly affect the observable bias and scatter. This will be especially important for future cluster surveys (e.g., from CMB-S4): to achieve their full constraining potential, the cluster mass scale will have to be determined to percent-level accuracy, which means that sub-percent-level calibration of the mass observables used to set this mass scale will have to be attained.
Figure 4. Left panel: power spectra for the map from Fig. 3. The black histogram shows the total power spectrum, the blue dots are for the m = 0 mode only, and the red histogram shows the power associated with m ≥ 1 modes.
2.1 Collective flow
Two important physics signals underpin the discovery of the s-QGP. One is the quenching of jets that propagate through the dense medium, which is understood as due to energy loss of scattered partons through stimulated gluon emission in an environment characterized by a high density of color charges. The other is the presence of significant anisotropic azimuthal flow in semi-peripheral nuclear collisions at high energy. This can be understood in terms of hydrodynamical streaming: hydrodynamic models are able to reproduce observations using short re-interaction times and significant coupling between the partons.
We have used Bayesian parameter estimation to estimate low-resolution polarized CMB maps, marginalized over foreground contamination. These may then be used as inputs for a likelihood analysis. The emission model is parameterized according to our physical understanding of the Galactic emission. The method has been tested on simulated maps and found to produce unbiased estimates of the CMB power, quantified by the optical depth to reionization. With the five-year WMAP data we find a result consistent with template cleaning: τ = 0.090 ± 0.019 from this method and 0.086 ± 0.016 from the standard template-cleaning method. This method captures the increase in errors where the foreground uncertainty is larger, and so depends less on a Galactic mask. Estimates of the polarized Galactic components indicate a synchrotron spectral index of order β = −3.0 in the Fan region toward the Galactic anti-center and in the North Polar Spur area.
Received 2 March 2015 / Accepted 29 July 2015
The polarization modes of the cosmic microwave background are an invaluable source of information for cosmology and a unique window to probe the energy scale of inflation. Extracting this information from microwave surveys requires distinguishing between foreground emissions and the cosmological signal, which means solving a component separation problem. Component separation techniques have been widely studied for the recovery of cosmic microwave background (CMB) temperature anisotropies, but very rarely for the polarization modes. In this case, most component separation techniques make use of second-order statistics to distinguish between the various components. More recent methods, which instead emphasize the sparsity of the components in the wavelet domain, have been shown to provide low-foreground, full-sky estimates of the CMB temperature anisotropies. Building on sparsity, we here introduce a new component separation technique dubbed polarized generalized morphological component analysis (PolGMCA), which refines previous work to specifically target the estimation of the polarized CMB maps: i) it benefits from a recently introduced sparsity-based mechanism to cope with partially correlated components; ii) it builds upon estimator aggregation techniques to further improve the trade-off between noise contamination and non-Gaussian foreground residuals. The PolGMCA algorithm is evaluated on full-sky polarized microwave sky simulations generated with the Planck Sky Model (PSM). The simulations show that the proposed method achieves a precise recovery of the CMB map in polarization, with low noise and foreground contamination residuals. It provides improvements over standard methods, especially toward the Galactic center, where estimating the CMB is challenging.