For more than a century, the development of modern physics has given human beings the deepest understanding of nature in terms of the building blocks of matter and their interactions. In particular, it has been established that all matter consists of a number of particles more fundamental than previously thought. From the 6th century BC through the 19th century, it was believed that each element in nature consisted of atoms, named after the Greek word "atomos", meaning indivisible. In 1897 J. J. Thomson discovered the electron, which weighed much less than the lightest atom and was believed to be a constituent of atoms. In 1911 the discovery of the nucleus by E. Rutherford established the nuclear structure of atoms. In 1964 M. Gell-Mann and G. Zweig proposed the quark model to explain the constituents of atomic nuclei. At that time, only three flavors of quarks were proposed: up, down, and strange. The three-quark model was confirmed by deep inelastic scattering experiments [4, 5]. Three more quarks, charm, top, and bottom, were proposed and confirmed by experiments [8–11]. Quarks, together with leptons and bosons, form the elementary particles known today.
The success of the search for the Higgs boson in the two-photon decay channel relies on four main points: high invariant-mass resolution, vital to distinguish a tiny peak on a huge background; effective photon identification, essential to separate prompt photons from photons secondary to neutral-meson decays produced in jets; event categorization, effective in probing signal categories with different signal-to-background ratios or different mass resolutions; and finally the background modeling, which after photon identification is mostly (∼70%) irreducible, due to genuine isolated two-photon QCD events. The inter-channel calibration of the electromagnetic calorimeter (ECAL), as well as the corrections for the crystal transparency loss due to radiation, are the fundamental components to ensure good energy resolution. Energy corrections are further applied to clustered electromagnetic energy in order to ensure complete containment of the shower and of converted photons, as well as to correct the dependence on the pile-up event rate. The energy corrections are derived with a multivariate energy regression trained on simulated Higgs boson events.
A Higgs boson signal in this channel would appear as a small and narrow peak above a large and smooth prompt di-photon background. A search is performed for a Higgs boson decaying into two photons. The analysis uses a dataset recorded by the CMS experiment at the LHC from pp collisions at a centre-of-mass energy of 7 TeV, corresponding to an integrated luminosity of 4.8 fb⁻¹. To improve the sensitivity of the search, selected diphoton events are subdivided into classes according to indicators of mass resolution and signal-to-background ratio. Five mutually exclusive event classes are defined as shown in Figure 1: four in terms of the pseudorapidity and the shower shapes of the photons, and a fifth class into which are put all events containing a pair of jets passing selection requirements designed to select Higgs bosons produced by the vector boson fusion process. Two photon classifiers are used: the minimum value of a variable R9 of the two photons, R9^min, and the maximum
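The five-class split described above can be sketched as a simple decision function. The class boundaries below (the barrel |η| limit and the R9 threshold) are illustrative placeholders, not the actual values used in the CMS analysis:

```python
# Illustrative sketch of the five mutually exclusive diphoton event classes.
# BARREL_ETA_MAX and R9_THRESHOLD are hypothetical placeholder values,
# not the thresholds used in the real analysis.

BARREL_ETA_MAX = 1.44   # assumed barrel/endcap boundary in pseudorapidity
R9_THRESHOLD = 0.94     # assumed "good shower shape" threshold on R9

def classify_event(eta1, eta2, r9_1, r9_2, passes_dijet_tag):
    """Return one of the five mutually exclusive event classes."""
    # The dijet (VBF-like) class takes precedence over the four inclusive ones.
    if passes_dijet_tag:
        return "dijet"
    both_barrel = abs(eta1) < BARREL_ETA_MAX and abs(eta2) < BARREL_ETA_MAX
    r9_min = min(r9_1, r9_2)  # the R9^min classifier from the text
    good_r9 = r9_min > R9_THRESHOLD
    if both_barrel:
        return "barrel, high R9" if good_r9 else "barrel, low R9"
    return "endcap, high R9" if good_r9 else "endcap, low R9"
```

Checking the dijet tag first makes the classes mutually exclusive by construction, which is what allows the per-class results to be combined statistically without double counting.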
Since the ATLAS and CMS collaborations reported the observation [1, 2] of a new particle with a mass of about 125 GeV and with properties consistent with those expected for the Higgs boson of the Standard Model (SM) [3–5], more precise measurements have strengthened the hypothesis that the new particle is indeed the Higgs boson [6–10]. These measurements were performed primarily in the bosonic decay modes of the new particle: H → γγ, ZZ, W⁺W⁻. It is essential to study whether it also decays directly into fermions, as predicted by the SM. Recently CMS and ATLAS reported evidence for the H → τ⁺τ⁻ decay mode at significance levels of 3.4 and 4.5 standard deviations, respectively [11–13], and the combination of these results qualifies as an observation. However, the H → bb̄ decay mode has not yet been observed [15–20], and the only direct evidence of its existence so far has been obtained by the CDF and D0 collaborations at the Tevatron collider.
The tt̄H search with H → (WW(∗), ττ, ZZ(∗)) → leptons exploits several multilepton signatures resulting from Higgs boson decays to vector bosons and/or τ leptons. Events are categorised based on the number of charged leptons and/or hadronically decaying τ leptons in the final state. The categorisation includes events with two same-charge leptons, three leptons, four leptons, one lepton and two hadronic τ leptons, as well as two same-charge leptons with one hadronically decaying τ lepton. Backgrounds include events with electron charge misidentification, which are estimated using data-driven techniques; non-prompt leptons arising from semileptonic b-hadron decays, mostly from tt̄ events, again estimated with data-driven techniques; and production of tt̄ + W and tt̄ + Z, which is estimated using MC simulations. Signal and background event yields are obtained from a simultaneous fit to all channels.
Several MC generators were used to simulate the signal and background processes. The WH and ZH signal processes are modelled using MC events produced by the Pythia event generator, interfaced with the MRST modified leading-order (LO*) parton distribution functions (PDFs), using the AUET2B tune for the parton shower, hadronization and multiple parton interactions. The total cross sections for these channels, as well as their corresponding uncertainties, are taken from the LHC Higgs Cross Section Working Group report. Differential next-to-leading-order (NLO) electroweak corrections as a function of the W or Z transverse momentum have also been applied [22,12]. The Higgs boson decay branching ratios are calculated with HDECAY.
data to look for hh production in the h → bb and h → γγ channels. Both resonant and non-resonant anomalous production of hh pairs is searched for. Upper limits on the production of a narrow-width heavy scalar boson decaying to hh as a function of its mass are shown in Fig. 2. The cross section for non-resonant hh production is constrained to be less than 2.2 pb at 95% confidence level. As a reminder, the SM hh production cross section is ∼10 fb, i.e. about two orders of magnitude smaller than the sensitivity of this search.
The electromagnetic calorimeter is a total-absorption calorimeter which detects and measures the energies and positions of electrons, positrons and photons ranging from tens of MeV to 100 GeV. It provides neutral-pion/photon discrimination and, in conjunction with the central tracking system, electron/hadron discrimination. It consists of three large overlapping assemblies of lead-glass blocks (the barrel and the two end caps). Most electromagnetic showers are initiated before the lead-glass itself because of material such as the magnet coil and the pressure vessel in front of the calorimeter. For this reason, presampling devices are installed in both the barrel and end-cap regions, immediately in front of the lead-glass, to measure the position and to sample the energies of these pre-showers, thus improving the energy resolution. The intrinsic resolution of the calorimeter is 5–6%/√E, where E is the energy in GeV. This resolution is degraded by a factor of about two by the material in front of the calorimeter. The effect of the material is most significant near the overlap of the barrel and end-cap calorimeters, i.e. at polar angles defined by 0.72 < |cos θ| < 0.84. The angular resolution of electromagnetic clusters is approximately 4 mrad in both θ and φ for energies above 10 GeV.
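Written out, the resolution quoted above has the standard stochastic form (the constant and noise terms, which the text does not quote, are omitted here):

```latex
\frac{\sigma_E}{E} \;\approx\; \frac{a}{\sqrt{E/\mathrm{GeV}}}\,,
\qquad a \approx 5\text{--}6\,\%
```

So at E = 10 GeV the intrinsic resolution is roughly 1.6–1.9%, degraded to roughly 3–4% by the factor of about two from the upstream material.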
√s = 7 TeV and √s = 8 TeV datasets as scale factors with respect to the predicted Monte Carlo cross sections. Consistent scale factors are obtained for both datasets except for Z + c production. This is expected, because the two samples were generated using different programs, ALPGEN and SHERPA. Some of the Z + c events arise from charm production in the parton shower of Z + light-parton events. However, in the ALPGEN samples used for the √
In this paper, a search for the associated production of the Higgs boson with a vector boson, where the Higgs boson decays to a pair of tau leptons, is presented. This production mechanism is referred to in the following as VH, where V is either a W or Z boson. The analysis is part of a comprehensive program by the ATLAS Collaboration at the LHC to measure the Higgs boson production mechanisms, its couplings, and other characteristics. Similar studies have been performed with the VH production mechanism and subsequent decays of the Higgs boson to WW [17,18] and bb̄ [19,20] by the ATLAS and CMS Collaborations, and to tau lepton pairs by the CMS Collaboration. The associated production is particularly useful in the decays of the Higgs boson to tau lepton pairs when both tau leptons decay hadronically, where the trigger can be a challenge. For VH production and leptonic decays of the W or Z boson, the W and Z boson decay products satisfy the trigger requirements with high efficiency.
and hadronically decaying τ leptons, as well as events containing high missing transverse energy, are identified in the detector and passed on to the HLT for further processing. The HLT consists of two software-based triggers. First, the L2 trigger applies a more refined event selection using information from the whole detector and reduces the data rate to 2 kHz. The EF then uses reconstruction algorithms similar to those used in the full ATLAS reconstruction, and reduces the data rate to the required 200 Hz. The complex algorithms used in the EF lead to a latency of around 4 seconds, compared to 40 ms at L2 and 2.5 µs at L1. Following the HLT, the data are separated into different streams and recorded for offline analysis.
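The rate reduction achieved by the chain above can be summarized numerically. The L2 and EF output rates and the latencies come from the text; the L1 output rate of 75 kHz is not given in the text and is assumed here as a typical design value:

```python
# Sketch of the three-level ATLAS trigger chain described in the text.
# L2 and EF output rates and latencies are from the text; the 75 kHz L1
# output rate is an ASSUMED typical design value, used only for illustration.

trigger_chain = [
    # (level, output rate [Hz], approximate decision latency [s])
    ("L1", 75_000, 2.5e-6),  # hardware trigger (output rate assumed)
    ("L2", 2_000, 40e-3),    # software: refined selection, whole-detector info
    ("EF", 200, 4.0),        # Event Filter: offline-like reconstruction
]

def rejection_factors(chain):
    """Rate reduction achieved by each successive trigger level."""
    return {cur[0]: prev[1] / cur[1]
            for prev, cur in zip(chain, chain[1:])}
```

For example, the EF step alone reduces the 2 kHz L2 output by a factor of 10 to reach the required 200 Hz.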
On July 4, 2012, the discovery of a new boson, with a mass around 125 GeV and with properties compatible with those of a standard-model Higgs boson, was announced at CERN by the ATLAS and CMS collaborations [1, 2]. The reported excess is most significant in the SM Higgs searches using the decay modes into γγ and ZZ. The results in the ττ decay mode showed no excess of observed events in the mass range near 125 GeV, compatible both with a downward fluctuation of the background-only hypothesis and with the background-plus-SM-Higgs-boson hypothesis. In this document a search for the SM Higgs boson is reported using final states with pairs of τ leptons in proton-proton collisions at √s = 7 and 8 TeV at the LHC, using the data collected in 2011 and 2012, corresponding to an integrated luminosity of 17 fb⁻¹ recorded by the CMS experiment. This luminosity splits into 4.9 fb⁻¹ of data taken at a 7 TeV center-of-mass energy and 12.1 fb⁻¹ at 8 TeV.
We have simulated the production and decay of the two Higgs states of the B-L model at the LHC at different energies using the MC programs MadGraph 5, Pythia 8 and CalcHEP. We calculated the production cross sections of the light and heavy Higgs using the standard methods of the Standard Model and the new methods of the B-L model, and we found that the heavy Higgs has relatively small cross sections but is nevertheless accessible at the LHC. We also presented all possible decay channels for the light and heavy Higgs states and their branching ratios, as well as the total width of each state, and focused on the new decay channels of the heavy Higgs boson in the B-L model into a pair of heavy neutrinos or a pair of the new gauge bosons, through which the heavy Higgs could be detected at the LHC.
After the discovery of the Standard Model Higgs boson at CERN's Large Hadron Collider (LHC) in 2012, it is now time to test the many possible extensions of the Standard Model (SM) using Monte Carlo simulation techniques and the different computational tools of HEP. The SM does not contain any elementary charged scalar particle; the observation of a charged Higgs boson would therefore indicate new physics beyond the SM. In the Standard Model of the electroweak interactions, the masses of both bosons and fermions are explained by the Higgs mechanism. This implies the existence of one new doublet of complex scalar fields which, in turn, leads to a single neutral scalar Higgs boson. One of the simplest ways to extend the scalar sector of the Standard Model is to add one more complex doublet to the model. Some extensions of the Standard Model contain more than one Higgs doublet and predict Higgs bosons which can be lighter than the Standard Model Higgs. Models with two complex Higgs doublets predict two charged Higgs bosons H±, which can be pair-produced in proton-proton collisions (LHC) and proton-antiproton collisions (Tevatron); examples of such models are the Two-Higgs-Doublet Model (2HDM) and the Minimal Supersymmetric Standard Model (MSSM). The 2HDM can provide additional CP violation from the scalar sector and can easily accommodate dark matter candidates; the MSSM likewise contains two Higgs doublets. The 2HDMs have a richer particle spectrum, with two charged and three neutral Higgs bosons. Any of the neutral Higgs bosons could in principle be the scalar discovered at the LHC. The SM picks up the ideas of local gauge invariance and spontaneous symmetry breaking (SSB) to implement the Higgs mechanism. The symmetry breaking is implemented by introducing a scalar doublet
Yet with the keystone of the Standard Model finally hoisted into place, the bedrock is already fractured: the Standard Model is an incomplete theory. Precise measurements of the Cosmic Microwave Background (CMB) and of the angular momenta of galaxies, as well as many other observations, all indicate that around 80% of the total matter in the universe is non-baryonic. The CP violation observed in the Standard Model is insufficient to explain the dominance of matter over anti-matter in the universe. The mass of the Higgs boson itself is subject to radiative corrections that scale quadratically with any scale Λ of physics beyond the SM, unless protected by some new symmetry. These questions and others motivate the coming runs of the LHC, and demand a continued broad program of searches and precision measurements. Perhaps the answers will come in the form of a simple resonance, suddenly accessible thanks to the increase in √s. Maybe hints of new physics will instead begin as whispers from rare flavor processes, from heavy states beyond the LHC reach running in loops. Yet again, the Higgs itself could open a portal to physics beyond the SM. The effective couplings to gluons or photons could hint at new colored or electrically charged states. Measurements of the h → γγ and h → VV rates constrain the effective scale of higher-dimensional operators. Current limits on the unobserved or (non-SM) invisible width of Higgs decays still leave substantial space for new physics, so continued indirect measurements of the width through interference [176–178], direct searches for invisible decays, and clever parameterization of coupling measurements will all help to 'root out' new physics, if it is hiding there. Direct searches for exotic decays will be ever more exciting, and there is a panoply of potential enhancements of Higgs boson pair production.
Abstract. The High-Luminosity Large Hadron Collider (HL-LHC) is a major upgrade of the LHC, expected to deliver an integrated luminosity of up to 3000 fb⁻¹ over one decade. The very high instantaneous luminosity will lead to about 200 proton-proton collisions per bunch crossing (pileup) superimposed on each event of interest, providing extremely challenging experimental conditions. The scientific goals of the HL-LHC physics program include precise measurements of the properties of the recently discovered standard model Higgs boson and searches for physics beyond the standard model (heavy vector bosons, SUSY, dark matter and exotic long-lived signatures, to name a few). In this contribution we present the strategy of the CMS experiment to investigate the feasibility of such searches and quantify the increase in sensitivity in the HL-LHC scenario.
Jets are reconstructed using the anti-kt algorithm with radius parameter R = 0.4. At least two jets with |η| < 4.5 and pT > 25 GeV are required in the 2-jet selection. In the analysis of the 8 TeV data, the pT threshold is raised to 30 GeV for jets with 2.5 < |η| < 4.5. For jets in the ID acceptance (|η| < 2.5), the fraction of the sum of the pT of tracks associated with the jet and matched to the selected primary vertex, with respect to the sum of the pT of tracks associated with the jet (jet vertex fraction, JVF), is required to be at least 0.75. This requirement on the JVF reduces the number of jets from proton–proton interactions not associated with the primary vertex. Motivated by the VBF topology, three additional cuts are applied in the 2-jet selection: the difference in pseudorapidity between the leading and sub-leading jets (tag jets) is required to be larger than 2.8, the invariant mass of the tag jets has to be larger than 400 GeV, and the azimuthal angle difference between the diphoton system and the system of the tag jets has to be larger than 2.6. About 70% of the signal events in the 2-jet category come from the VBF process.
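The cuts above can be sketched as two small functions, one for the per-jet preselection and one for the VBF topological requirements. The cut values come from the text; the dict-based jet representation and the function names are a hypothetical illustration, not the actual analysis code:

```python
# Minimal sketch of the 2-jet (VBF-tagged) selection described in the text.
# Jets are represented as simple dicts with keys "eta", "pt" (GeV), "jvf";
# this representation and the function names are illustrative assumptions.

def passes_jet_preselection(jet, sqrt_s=7):
    """Kinematic and pile-up preselection for a single jet."""
    pt_min = 25.0
    # In the 8 TeV analysis the threshold is raised for forward jets.
    if sqrt_s == 8 and 2.5 < abs(jet["eta"]) < 4.5:
        pt_min = 30.0
    if not (abs(jet["eta"]) < 4.5 and jet["pt"] > pt_min):
        return False
    # Jet vertex fraction cut, applied only inside the tracker acceptance.
    if abs(jet["eta"]) < 2.5 and jet["jvf"] < 0.75:
        return False
    return True

def passes_vbf_topology(jet1, jet2, m_jj, dphi_diphoton_dijet):
    """Topological cuts on the two leading (tag) jets."""
    return (abs(jet1["eta"] - jet2["eta"]) > 2.8   # tag-jet eta separation
            and m_jj > 400.0                        # tag-jet invariant mass, GeV
            and dphi_diphoton_dijet > 2.6)          # diphoton-dijet azimuthal gap
```

Note that the JVF cut is deliberately skipped outside |η| < 2.5, where there are no tracks to compute it from.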
vertex. Such events are quite rare in SM processes, so the analysis has an advantageous signal-to-background ratio, despite the small cross section for the pp → H → ZZ → 4l process, which ranges from 0.5 fb to 7 fb depending on the Higgs mass. The main background is the irreducible non-resonant ZZ production, with both Z bosons decaying leptonically, with a yield of ∼14 fb. A small contamination from the reducible Z+X background (which is highly suppressed by lepton ID cuts) survives at small masses, around
and moderate missing transverse energy and multiple jets are exploited to increase the sensitivity to different production modes. Events with two leptons are split into six categories with different background compositions and signal-to-background ratios, from the three possible lepton-flavor pairs (ee, eμ, μμ) and either 0 or 1 jet. Events with two forward jets are tagged as the VBF production mode. If those two jets do not satisfy the VBF criteria, events are tagged as VH. In all categories, WW di-boson production is an important and irreducible background, and its normalisation and shape are estimated from MC simulation. In order to suppress the overwhelming DY background in the same-flavor (ee or μμ) channels, the invariant mass of the lepton pair is required to be far from the Z boson mass, i.e. outside the 76 to 106 GeV window. Apart from the DY background, the other reducible backgrounds are tt̄, QCD and W+jets, which are estimated from data. The statistical interpretation in the different-flavor 0- and 1-jet events is based on the two-dimensional distribution in the (mll, mT) plane, where mll is the invariant mass of the di-lepton
One of the most important ways to confirm that the observed particle is the Higgs boson is to measure its couplings to bosons and fermions and test the predictions of the Standard Model. At the same time as the observation of the new boson, significant excesses with respect to the background were observed in H → γγ, H → ZZ and H → W⁺W⁻, strongly indicating that the new boson has non-zero couplings to vector bosons. Because the excesses in these channels are observed in searches for a Higgs boson produced via gluon fusion, they also indicate a non-zero coupling of the Higgs to quarks.