Search for Anomalous Single Top Quark Production in Association with a Photon in pp Collisions at √ s = 8 TeV

This thesis reports the results of the first search for flavour-changing neutral currents (FCNC) through the anomalous production of a single top quark in association with a photon, induced by anomalous tqγ interactions (q = u or c), in pp collisions. The search is performed using 19.8 fb−1 of data collected with the CMS detector at a center-of-mass energy of 8 TeV. The study concentrates on the muonic decay of the W boson from the top quark decay. The search is conducted in final states with an isolated muon, an isolated photon, missing transverse momentum, and jets, at most one of which is consistent with originating from the evolution of a b quark, corresponding to top quark decays in which the W boson is detected in the μν channel. A multivariate classification approach is chosen to achieve powerful discrimination between signal-like events and standard model backgrounds. No evidence for FCNC processes is observed. Upper limits at 95% confidence level on the strengths of the anomalous couplings are found to be κtuγ < 0.025 and κtcγ < 0.091. The corresponding upper bounds on the branching ratios are Br(t → uγ) < 0.013% and Br(t → cγ) < 0.17%. These are the most stringent limits to date. Upper limits on the signal cross sections are also reported for a restricted phase-space region, defined similarly to the final analysis phase space and requiring exactly one identified b jet, to provide results that can be more easily compared with theoretical predictions. The observed upper limits on the cross sections are found to be 47 fb and 39 fb at 95% CL for tuγ and tcγ production, respectively. These are the first results on anomalous tγ production within a restricted phase-space region.

Within the SM, Flavour Changing Neutral Currents (FCNC) are absent at the tree level and are highly suppressed at higher orders by the GIM mechanism [1].
In the top quark sector, the GIM suppression is much stronger than in the bottom-quark sector due to the large mass of the top quark. For this reason, the SM predicts very small branching ratios for top quark FCNC decays to an up-type quark and a neutral gauge boson: Br(t → X(= γ, Z, g) + q(= c, u)) < 10−10 [2].
On the other hand, many models of new physics predict heavy particles and interactions that can contribute to top quark FCNCs through quantum loops and enhance the branching ratios of top quark FCNC decays by orders of magnitude with respect to the SM expectations. These models include two-Higgs-doublet models [3], exotic quarks [4], supersymmetry [5], and technicolour [6]. The predicted branching ratios for top quarks decaying to an up-type quark and a photon, Z boson, or gluon can be as large as 10−7 to 10−5 for certain regions of the parameter space of these models [2].
Although the branching ratios of top quark FCNC decays predicted in the SM are far below the current experimental sensitivity, FCNC top decays with branching ratios of the order of 10−5 are around the limit of the projected high-luminosity reach of the LHC [7]. Therefore, any evidence for FCNC in the top-quark sector would be a clear indication of physics beyond the SM.
The LHC (Large Hadron Collider) is a top quark factory, producing large numbers of top quarks at its design center-of-mass energies and luminosity. This enables physicists to probe various properties of the top quark precisely. Searching for top quark FCNC interactions is one of the topics pursued by both the ATLAS and CMS collaborations [8].
In order to search for physics beyond the SM through top quark FCNC processes, one can choose a specific new-physics scenario or follow a model-independent approach [9]. Experimental collaborations choose the latter approach in order to find model-independent signs of new physics or to quantify the accuracy with which new physics is excluded. FCNC interactions of top quarks can be probed through anomalous top quark decays and through anomalous single top quark production. From an experimental point of view, each of the anomalous production and decay channels has its specific features, and dedicated analyses are defined by the experimental collaborations to search for them [8].
In this dissertation, a search is conducted for FCNC couplings of the top quark to a light up-type quark and a photon, using the effective Lagrangian approach. Chapter 5 describes the analysis strategy: it starts by explaining the signal, the background processes, and the datasets; continues with the background estimation and signal extraction procedures; and concludes by presenting the world's best limit on the branching fraction Br(t → qγ) at 95% confidence level.

Chapter 2
Theoretical motivations and experimental review

Particles and interactions
Particle physics is concerned with the ultimate constituents of matter at the smallest scale and the interactions among them. The set of particles regarded as the most fundamental building blocks of matter has changed with time as technology and physicists' knowledge have improved. The fundamental particles of our time are six flavours each of leptons and quarks with spin 1/2, four gauge bosons with spin 1, and one spin-0 particle, all of which are currently regarded as point-like, without internal structure or excited states. At present, we know four fundamental interactions among these elementary particles: the electromagnetic, weak, strong, and gravitational interactions. Among these, gravity is negligibly weak at the elementary-particle level and is usually left out of particle physics.
The theory that explains the phenomena of particle physics in terms of the properties and interactions of these particles is called the Standard Model (SM) of particle physics [10]. The SM is a gauge theory based on the SU(3)C ⊗ SU(2)L ⊗ U(1)Y symmetry group. The strong, weak and electromagnetic interactions are described via the exchange of various spin-1 bosons amongst the spin-1/2 particles.
The SU(3)C colour gauge group, which describes the strong interaction, is found to be unbroken. Quarks are assigned to the fundamental 3 representation and antiquarks to the conjugate 3̄ representation. The eight massless spin-1 gauge bosons of SU(3)C, the gluons, mediate the strong interaction. In the SM, the left- and right-handed components of the quark and lepton fields are assigned to different representations of the electroweak gauge group SU(2)L ⊗ U(1)Y, which gives the weak interactions their chiral structure. The left-handed fields are SU(2)L doublets, while the right-handed fields transform as SU(2)L singlets.
The SU(2)L ⊗ U(1)Y gauge symmetry is broken spontaneously to the electromagnetic subgroup when the scalar field acquires a non-zero vacuum expectation value,

SU(2)L ⊗ U(1)Y → U(1)EM. (2.1)

The spontaneous symmetry breaking (SSB) of the electroweak group generates the masses of the three weak gauge bosons (W+, W− and Z) and of the fermions. The SSB ensures that the photon remains massless and gives rise to the appearance of a scalar particle in the model, the Higgs boson. We focus on the interactions and properties that correspond to the SU(2)L ⊗ U(1)Y factor of the SM gauge group. The electroweak Lagrangian is given by

L_EW = L_gauge + L_Higgs + L_matter + L_Yukawa.

The term L_gauge, which describes the kinetic terms of the gauge fields, is given by

L_gauge = −(1/4) W_Aµν W^{Aµν} − (1/4) B_µν B^{µν},

where W_Aµν (A = 1, 2, 3) and B_µν are the field strength tensors of the SU(2)L and U(1)Y gauge fields, respectively. This term includes both triple and quartic self-couplings of the electroweak gauge bosons.
The Higgs self-couplings and the Higgs–gauge-boson couplings after the SSB are described by L_Higgs, which is given by

L_Higgs = (D_µ φ)†(D^µ φ) − µ² φ†φ − λ (φ†φ)²,

where D_µ is the covariant derivative, with the form

D_µ = ∂_µ − i g (τ⃗/2) · W⃗_µ − i g′ (Y/2) B_µ.

Here g and g′ are the gauge coupling constants, Y is the hypercharge, equal to −1 (−2) for the left-handed (right-handed) components of the lepton fields, and τ⃗ denotes the Pauli matrices.
The Yukawa Lagrangian has the form

L_Yukawa = − Γ^u_ij Q̄_Li φ̃ U_Rj − Γ^d_ij Q̄_Li φ D_Rj − Γ^e_ij L̄_Li φ E_Rj + h.c.,

where the Yukawa couplings Γ_ij are, in general, complex parameters. After the SSB, one can easily obtain the fermion mass matrices from L_Yukawa.
Each mass matrix can be diagonalized by means of a bi-unitary transformation, and the elements of the resulting diagonal matrix correspond to the fermion masses. The basis of fields in which the mass matrix is diagonal is called the mass eigenstate basis.
The interactions between the fermions and the electroweak gauge bosons are described by L_matter. The right-handed fermion fields are singlets of SU(2)L and hence do not couple to W^i_µ. L_matter can be written as the sum of the charged- and neutral-current weak interactions,

L_matter = L_CC + L_NC,

where θ_W is the weak mixing angle (Weinberg angle) and the g_V and g_A values of the neutral current are given in Table 2.2. The electroweak gauge bosons are written in terms of the mass eigenstates as

W^±_µ = (W^1_µ ∓ i W^2_µ)/√2,
Z_µ = cos θ_W W^3_µ − sin θ_W B_µ,
A_µ = sin θ_W W^3_µ + cos θ_W B_µ.

The observed states are the mass eigenstates, which differ from the weak eigenstates. To write the interactions in terms of the mass basis of the fermion fields, we need to transform the fields using two unitary matrices, as discussed previously.
Because of the unitarity of the transformation matrices, the form of the neutral-current interactions is not changed by rotating the fermion fields to the mass eigenstate basis. Therefore, there are no flavour-changing neutral currents at tree level in the SM. Since there is no mass term for neutrinos 1 , the neutrino fields can be redefined while their kinetic terms remain unchanged; therefore, there is no flavour mixing between the leptons in the weak charged-current interactions. The charged-current Lagrangian L_CC in the mass basis can be written as

L_CC = −(g/√2) ū_Li γ^µ V_ij d_Lj W^+_µ + h.c., (2.14)

where V is a unitary matrix called the Cabibbo–Kobayashi–Maskawa (CKM) matrix, which describes the mixing between different generations. Assuming unitarity, the magnitudes of the CKM elements are given in [12]. The off-diagonal unitarity conditions impose relations such as V_ud V*_ub + V_cd V*_cb + V_td V*_tb = 0, which can be represented as a closed triangle in the complex plane, as shown in the corresponding figure.
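The unitarity conditions can also be checked numerically. The sketch below uses approximate PDG magnitudes for the first-row CKM elements (illustrative values, not inputs from this thesis) to verify the first-row normalization |V_ud|² + |V_us|² + |V_ub|² ≈ 1:

```python
# First-row unitarity check of the CKM matrix using approximate
# PDG magnitudes (illustrative values, not fitted inputs).
V_ud, V_us, V_ub = 0.9743, 0.2253, 0.0036

row_sum = V_ud**2 + V_us**2 + V_ub**2
print(f"|V_ud|^2 + |V_us|^2 + |V_ub|^2 = {row_sum:.4f}")

# Unitarity requires the sum to equal 1; a significant deviation
# would signal e.g. a fourth generation or other new physics.
assert abs(row_sum - 1.0) < 0.01
```

A measured deficit in such a row sum, beyond the experimental uncertainties, would be one way to detect mixing with states outside the three-generation SM.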

Top quark
Introduction
The top quark is the most recently discovered quark; it was discovered in 1995 at the Fermilab Tevatron, a proton–antiproton collider with a center-of-mass energy of 1.8 TeV [13][14][15]. The top quark had been predicted as the weak-isospin partner of the b quark in the standard model after the discovery of the b quark in 1977. The existence of the top quark provides a natural way to suppress the experimentally unobserved flavour-changing neutral currents through the GIM mechanism [16] and makes a renormalisable gauge theory of the weak interactions possible by cancelling the anomaly.
The top quark mass was successfully predicted before its discovery through radiative corrections in the standard model. The top quark modifies the W and Z masses and widths through quantum loop corrections; therefore, precise measurements of the W and Z boson properties constrain the top quark mass. The most recent indirect determination of the top quark mass, using the Z-pole data, the W-boson mass and total width, and several other electroweak quantities, is given in [17]. The most recent direct measurement yields a top quark mass of 173.21 ± 0.51 (stat) ± 0.71 (syst) GeV [18]. The top quark is the heaviest of the known quarks, and its mass has been measured with higher precision than that of any other quark. Because the top quark is heavy, much heavier than the W boson, and has a very short lifetime, it decays before it can hadronize, to a W boson and a b quark with a branching fraction of almost 100%. This provides a unique opportunity to study the effects of its spin through the angular correlations among its decay products.
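As a quick numerical cross-check, the statistical and systematic components of the direct measurement quoted above can be combined in quadrature (a minimal sketch, assuming the two uncertainties are uncorrelated):

```python
import math

# Statistical and systematic uncertainties of the direct top-mass
# measurement quoted in the text, combined in quadrature
# (assumes uncorrelated components).
stat, syst = 0.51, 0.71  # GeV
total = math.sqrt(stat**2 + syst**2)
print(f"m_t = 173.21 +/- {total:.2f} GeV (total uncertainty)")
```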
In the Standard Model, the top quark Yukawa coupling (y_t = √2 m_t/v, where v ≈ 246 GeV is the vacuum expectation value) is very close to unity.
Because of this observation, it has often been speculated that new physics might be accessed via top quark physics, especially in electroweak symmetry breaking scenarios. Therefore, precise measurements of the top quark and its interactions may reveal effects of new physics.
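The near-unity value can be checked with one line of arithmetic (assuming the common convention m_t = y_t v/√2, so that y_t = √2 m_t/v):

```python
import math

m_t = 173.21   # top quark mass in GeV (direct measurement quoted above)
v = 246.22     # Higgs vacuum expectation value in GeV

# Convention m_t = y_t * v / sqrt(2)  =>  y_t = sqrt(2) * m_t / v
y_t = math.sqrt(2) * m_t / v
print(f"y_t = {y_t:.3f}")  # very close to 1
```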

Top quark production and decays
At the LHC, the dominant production mechanism of the top quark is top quark pair production mediated by gluons. Representative Feynman diagrams are shown in Figure 2.3. Since the energy scale of the interaction is around the top quark mass, much larger than the QCD scale, tt production at the LHC can be described by quantum chromodynamics using a perturbative approach.
At the LHC with √s = 8 (14) TeV, around 80 (90)% of the total tt cross section is due to gg fusion, while the remainder is mostly due to qq̄ annihilation.
The minimal energy for tt production is 2m_t, which corresponds to a parton momentum fraction x ≈ 0.05 (0.025) at the 8 (14) TeV LHC (x is the momentum fraction of the proton carried by a parton). Since the gluon distribution inside the proton rises more steeply towards small x than the valence- and sea-quark distributions, gluon fusion dominates at these x values.
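The quoted momentum fractions follow from the threshold condition ŝ = x₁x₂s ≥ (2m_t)², which for x₁ = x₂ gives x = 2m_t/√s. A quick numerical check (m_t ≈ 173 GeV assumed) reproduces the rounded values quoted above:

```python
m_t = 173.0  # top quark mass in GeV (illustrative value)

def x_threshold(sqrt_s_gev):
    """Typical parton momentum fraction at the tt-bar threshold,
    assuming both partons carry the same fraction x."""
    return 2 * m_t / sqrt_s_gev

for sqrt_s in (8000.0, 14000.0):
    print(f"sqrt(s) = {sqrt_s/1000:.0f} TeV: x = {x_threshold(sqrt_s):.3f}")
```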
In addition to pair production, top quarks can be produced singly through the electroweak interaction. There are three different channels for electroweak single top production, shown in Figure 2.4. All three processes involve the top quark charged current and allow a direct measurement of the CKM matrix element |Vtb|². Therefore, the unitarity of the CKM matrix can be tested without any assumption on the number of generations, opening a window to search for a fourth generation. Measuring the properties of standard model single top production is also important because it is a background to several new-physics scenarios, whose presence could produce deviations from the SM prediction. For example, the existence of a flavour-changing neutral current gu → t would lead to single top quark production with a signature very similar to the SM t-channel. At the LHC, the t-channel production mode is dominant, followed by the tW channel. The s-channel production cross section is very small compared to the huge background from tt, with little chance of being observed at the 14 TeV LHC.
The top quark decays almost exclusively to a W boson and a b quark, with a total width computed in the SM at NLO QCD [19].

In this context, the effective approach can be viewed as a low-energy description of new physics with heavy states.

Just as the 4-Fermi effective theory corresponds to the low-energy limit of the electroweak theory, the well-tested SM could be the low-energy limit of a new physics model. If we knew the complete fundamental theory at high energy scales, we could find the effective theory at an arbitrary scale by integrating out the fields that are heavy compared to the SM ones. The effective field theory at low energy would be an infinite tower of higher-dimension operators, symmetric under SU(3)C ⊗ SU(2)L ⊗ U(1)Y and built from standard model fields. This is a "top-down" application of effective field theory [20]. If the theory at high energy respects the SM gauge symmetries, the same operators are obtained at low energy for different theories; the differences between the theories are reflected in the coupling constants and numerical factors associated with each operator.
Since the complete fundamental theory is unknown, we use the "bottom-up" approach of effective field theory. In this approach, the SM Lagrangian is extended by introducing higher-dimension operators whose coefficients carry inverse powers of mass, so that they are suppressed by powers of the new-physics scale Λ. Therefore, the higher the energy scale of new physics, the smaller its effects on low-energy experiments.
The effective coupling constants of these operators should be determined by the experiments. The observation of any deviation from the prediction of the SM will require a non-zero value of some effective coupling constants. The value of the effective coupling constants can distinguish between different beyond standard model scenarios.
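The suppression by powers of Λ can be made concrete: a dimension-six operator contributes to amplitudes at relative order (E/Λ)², so its imprint on low-energy observables shrinks rapidly as Λ grows. A rough illustration (the energy and scales below are hypothetical, for orientation only):

```python
# Relative size (E/Lambda)^2 of a dimension-six operator's contribution
# for a process at energy E, for several hypothetical new-physics scales.
E = 200.0  # characteristic process energy in GeV (illustrative)

for Lam in (1e3, 5e3, 1e4):  # Lambda = 1, 5, 10 TeV
    suppression = (E / Lam) ** 2
    print(f"Lambda = {Lam/1e3:>4.0f} TeV: (E/Lambda)^2 = {suppression:.1e}")
```

Even a tenfold increase in Λ thus reduces the expected low-energy effect by two orders of magnitude, which is why precision measurements can probe scales far beyond the collision energy.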
As discussed, the higher-dimension operators contain powers of the new-physics scale in the denominator. Where is the energy scale of new physics? Is it close to the electroweak scale, or somewhere between the electroweak and Planck scales? FCNC processes can provide an interesting clue to this fundamental question.

Dimension-six operators with top quark FCNC interactions
The effective Lagrangian can be written as a series,

L_eff = L_SM + (1/Λ) L (5) + (1/Λ²) L (6) + …,

where L_SM is the SM Lagrangian of dimension four and L (5) and L (6) contain all the dimension-five and dimension-six operators. All terms are invariant under the SU(3)C ⊗ SU(2)L ⊗ U(1)Y gauge symmetries of the SM. Given the required symmetries, there is just one allowed term in L (5); it breaks lepton number conservation and generates a Majorana mass for the left-handed neutrinos. Assuming lepton and baryon number conservation, a list of the dimension-six operators is given in References [21,22]. We will focus on the operators that lead to top quark FCNC interactions.
The relevant operators are expressed in equations 2.20 and 2.21 [23,24], where i, j = 1, 2, 3 are flavour indices and C ij x are complex dimensionless couplings. Here Q̄ Li, U Ri and D Ri are the quark fields introduced in Section 2.1.
The operators that contribute to FCNC decays of the form t → u(c)g and affect the strong sector are given in equation 2.20. The operators in equation 2.21 are the electroweak analogues of those in 2.20 and contribute to top decays of the form t → u(c)γ and t → u(c)Z. The hermitian conjugates of these operators must also be included in the effective Lagrangian.

All operators in the left columns of equations 2.20 and 2.21 yield γ µ and σ µν q ν terms, while those in the right columns give k µ ≡ (p i + p j ) µ and q µ terms. Not all of these operators are independent: the equations of motion can be used to remove redundant operators from the effective Lagrangian. It is shown in reference [23] that all effective operators contributing to trilinear fermion-fermion-gauge (f i f j V) vertices involving a W or Z boson, a photon or a gluon can be reduced to γ µ and σ µν q ν forms, and that the γ µ operators contribute only to the t → qZ decay, because the corresponding photon term cancels after SSB. Therefore, there is no γ µ term in the tqγ Lagrangian [26].
The most general effective Lagrangian describing the top quark FCNC interactions with an up-type quark (u or c) and a gauge boson can be written as follows [2]:
where e is the electron electric charge, g is the weak coupling constant, g s is the strong coupling constant, θ w is the Weinberg angle, P L,R = (1 ∓ γ 5 )/2, σ µν = (i/2)[γ µ , γ ν ], and the symbols q and t represent the up (or charm) and top quark spinor fields.
The parameters X qt , κ qZ , κ qγ and κ qg define the strengths of the real and positive anomalous couplings of the currents to the Z boson, photon and gluon, respectively.
The relative contributions of the left and right currents are determined by X L,R , z L,R , γ L,R , g L,R and h L,R , which are normalized as |X L | 2 + |X R | 2 = 1, |z L | 2 + |z R | 2 = 1, etc. In the Lagrangian, q is the momentum of the gauge boson and Λ is the new-physics cutoff, which by convention is set to the top quark mass.
The partial widths for the FCNC decays are given in reference [2].

FCNC top quark decays in the standard model
In the 1950s, the universality of the weak coupling constant was exhibited after the pion, muon and neutron decays were described by a "vector minus axial vector" (V−A) type of interaction. In that sense, it was expected that all particles decaying through the weak interaction should have comparable lifetimes. Experiments showed that the lifetimes of particles containing the strange quark did not follow this expectation: the strangeness-non-conserving weak decays are suppressed relative to the strangeness-conserving ones. For example, the lifetime for K + → π + π 0 was measured to be about 20 times longer than that for π + → µ + ν. This observation contradicted the universality of the weak interactions.
The universality of the weak interaction was resurrected by Cabibbo in 1963 [27] by introducing the Cabibbo angle, which rotates between the strangeness-conserving and strangeness-changing processes while keeping the total weak hadronic current unchanged.

Experimentally, this could explain all strangeness-changing processes consistently with sin θ c ≈ 0.26. Therefore, the weak interactions were again universal [28].
A further step was the consideration of the neutral weak interaction. Generically, one would expect charged currents and neutral currents of similar strength, in particular flavour-changing neutral currents. However, a strong suppression of a kaon FCNC decay mode was observed quite early. The weak processes were understood as transitions between different quark flavours once the quark substructure of hadrons had been recognised; in the 1960s only one quark doublet was known, and the hadronic current was written in terms of the Cabibbo-rotated quark fields. To see the reason for the suppression clearly, one can write the order of magnitude of the top quark FCNC decay width, assuming that the loop amplitudes are dominated by the bottom quark [29]. Despite its successes, the standard model leaves several questions open:
• It does not have enough sources of CP violation to explain the observed ratio of matter to antimatter.
• Why do we observe the fermions in three generations?
• Why are the off-diagonal elements of the CKM matrix so small?
• Why are the quark masses (except for the top quark) so small compared to the electroweak vacuum expectation value?
• Does the top quark, with a mass near the electroweak vacuum expectation value, play a more fundamental role in the electroweak symmetry breaking mechanism?
There is an enormous range of new physics scenarios attempting to resolve the standard model problems. We will focus on the models that predict an enhancement on the top quark FCNC branching ratios.
The decays t → V c (V = γ, Z, g) induced through loop processes in the minimal supersymmetric standard model were calculated for the first time in [30]. Supersymmetric QCD violates flavour symmetry, and there are flavour-changing interactions between gluinos (g̃), squarks (q̃) and quarks (q) [31]. These QCD flavour-changing interactions can contribute to top quark FCNC decays through loops. The diagrams for t → V c through supersymmetric QCD loops are shown in figure 2.6. The top quark FCNC decay width depends strongly on the gluino and squark masses; for example, for mg̃, mq̃ < 120 GeV, the new contributions can enhance the branching ratio of t → V c by as much as 3–4 orders of magnitude compared to the SM prediction.
In addition to the supersymmetric QCD loop effects, FCNC can be induced through chargino loops. In reference [32], it is shown that generation mixing can arise through the chargino (χ + ), squark (q̃) and quark (q) interaction; therefore, FCNC top quark decay is possible through a chargino loop, as shown in the corresponding figure. Several extensions of the SM involve an extended Higgs sector with more than one Higgs doublet, such as supersymmetry, models with spontaneous CP violation and some grand unification theories [33]; such models can also enhance the top quark FCNC rates [6,35–38].

Top quark FCNC processes, as a window to new physics, can be explored at the LHC in different ways. In order to be independent of the underlying new physics model responsible for the FCNC process, the effective Lagrangian approach is chosen to search for the new physics signal, as discussed in section 2.3. The effective Lagrangian contributes to both the production and the decay of top quarks.
If top quark FCNC anomalous couplings to the gauge bosons exist, the decay properties of the top quark are affected. One of the most prominent signatures of FCNC processes at the LHC would be the direct observation of a top quark decaying into an up-type quark together with a photon, gluon or Z boson [39]. In order to have enough statistics, it is appropriate to search for top quark FCNC decays in tt events, where one top decays to W b as expected in the SM and the other decays anomalously to an up-type quark and a neutral gauge boson, as shown in figure 2.8. The different decay modes have different signatures and search strategies, which will be discussed in the next section.
FCNC interactions of top quarks can also be probed through anomalous top quark production. Some interesting production processes where the effect of the FCNC coupling could be significant are:
• Direct top quark production ((2 → 1) process): The presence of tqg anomalous couplings leads to the production of a top quark (u(c) + g → t) without any additional particle in proton-proton collisions, through the diagram in figure 2.9 (a) [40]. The signature of this process differs from SM single top production, where the top quark is always accompanied by other particles. Here the top quark is produced singly, with transverse momentum arising only from initial-state QCD radiation, so its decay products tend to be back to back in the azimuthal plane. Due to the larger parton distribution function (PDF) of the u quark in the proton compared to the gluon and other sea quarks, the top quark is produced with a boost and its decay products have a smaller opening angle. The difference between the PDFs of u and ū also leads to the production of more top quarks than anti-top quarks. Figure 2.9 also shows same-sign top production (qq → tt) and the associated production of a top quark with a photon or Z boson (gq → tγ(Z)) via tqg (c) and tqγ-tqZ (d) anomalous interactions.
• Single top quark production with one associated jet ((2 → 2) process): There are four different sub-processes which lead to one top quark in the final state together with one associated jet [41]. Although this final state can be sensitive to the tqZ and tqγ anomalous couplings, the tqg effects are more significant. The final state contains a top quark and a light quark or gluon, a topology similar to SM t-channel single top quark production. The related Feynman diagrams are shown in figure 2.10.
• tZ and tγ associated production: All of the anomalous couplings may contribute to anomalous tZ and tγ associated production [42]. The diagram in figure 2.9 (c) can also lead to the appearance of same-sign top quarks at hadron colliders [43,44]. Due to the presence of two anomalous vertices for tt production, this final state has very little SM background and is sensitive to new physics effects. Therefore, the same-sign dilepton final state provides a new window for searching for FCNC interactions [45].
In the effective Lagrangian approach, the cross section of anomalous top production and the anomalous decay width are functions of the anomalous couplings, which must be determined from experiment. It is nearly impossible to discriminate experimentally between the tcV and tuV anomalous interactions through anomalous top quark decay; in production, however, the different parton distribution functions of the valence and sea quarks provide a good opportunity to discriminate between them [40,46].

Experimental results and searches for top quark FCNC interactions
Over the years, different experiments have searched for FCNC processes in the anomalous decays of top quarks in tt events and in the anomalous production of single top quarks. In this section we review previous experimental results on top quark FCNC interactions [8]. In the literature there are many alternative conventions for normalizing the coupling constants in L eff ; limits on top-quark branching ratios are therefore more easily compared among experiments. The limits on the anomalous couplings are quoted using the notation of their corresponding publications.

Search for top quark FCNC processes at TEVATRON
The top quark was discovered at the TEVATRON pp̄ collider, with a center-of-mass energy of √s = 1.8 TeV, in 1995. After the discovery, the TEVATRON experiments CDF and D0 collected more data at a center-of-mass energy of √s = 1.96 TeV, leading to precise measurements of the top quark properties and good limits on new-physics parameters involving the top quark.
CDF performed a search for the FCNC top quark decays t → qZ and t → qγ using 110 pb−1 of data at √s = 1.8 TeV [47]. tt events, the dominant source of top quarks, were used, with one top decaying anomalously. No excess over the SM prediction was observed, and upper limits were set at 95% C.L. on the top quark FCNC branching ratios. The analysis was updated using 1.9 fb−1 of data at √s = 1.96 TeV for the t → qZ channel by the CDF Collaboration, and an upper limit of BR(t → qZ) < 3.7% was obtained at 95% C.L. [48]. A similar search was performed using 4.1 fb−1 of data at √s = 1.96 TeV by the D0 Collaboration, and an upper limit of BR(t → qZ) < 3.2% was obtained at 95% C.L. [49].
Among the FCNC top quark decays, t → qg is very difficult to distinguish from generic multijet production via quantum chromodynamics (QCD) at a hadron collider. It has therefore been suggested to search for these couplings in anomalous single top-quark production. The first limits on tqg FCNC couplings to the top quark were obtained in a D0 analysis based on 0.23 fb−1 of integrated luminosity [50].

Search for top quark FCNC processes at HERA
In ep collisions at the HERA collider at DESY, top quarks can only be produced singly, through the charged current (CC) reaction ep → νtbX. The SM single top production cross section at HERA is less than 1 fb, making this channel sensitive to contributions from new physics [52]. Anomalous tqγ and tqZ FCNC interactions would induce the neutral current reaction ep → etX, which could lead to a sizeable top quark production cross section. Due to the large mass of the Z boson, single top production is dominated by the t-channel exchange of a photon. Producing a top quark in the final state requires a large proton momentum fraction, where the u-quark parton distribution function is dominant; thus, this process is mainly sensitive to the tuγ anomalous coupling.
H1 and ZEUS have both searched for single top quark production in ep collisions at HERA [53][54][55][56]. As no clear evidence for anomalous single top production was observed, upper limits on the anomalous tuγ coupling were set at 0.16 and 0.12 at 95% C.L. by the H1 and ZEUS experiments, respectively.

Search for top quark FCNC processes at LEP
In e−e+ collisions at the LEP collider at CERN, top quarks could only be produced singly, through the e−e+ → e−ν̄tb process, given the large top quark mass and the available center-of-mass energy. The cross section for single top quark production is around 10−4 fb at the LEP2 center-of-mass energy, which provided a good opportunity to observe the FCNC interaction through the s-channel process e−e+ → tc. Single top quark production through the FCNC interaction is sensitive to the tqZ and tqγ anomalous couplings simultaneously; therefore, the observed upper limit excludes a region in the BR(t → qγ)–BR(t → qZ) plane. The LEP experiments ALEPH, DELPHI, L3 and OPAL have searched for anomalous single top quark production via tqZ and tqγ anomalous interactions [57][58][59][60].

Search for top quark FCNC processes at LHC

ATLAS and CMS searched for t → qZ in events produced through the decay chain tt → Zq + W b [64,65]. In addition to the anomalous decay modes, CMS has searched for the anomalous production of a single top quark in association with a Z boson, which is sensitive to the tqZ and tqg FCNC couplings simultaneously (see figure 2.9 (c) and (d)) [66]. The results are summarised in table 2.5.
As discussed, anomalous tqg interactions can induce various rare processes at hadron colliders. The ATLAS collaboration has searched for the production of a single top quark without any additional particle (see figure 2.9 (a)) [69]. The anomalous tqγ FCNC interaction was searched for, for the first time at the LHC, in the production of a single top quark in association with a photon by the CMS collaboration [67]. No excess over the SM prediction was observed, and upper limits on the anomalous couplings and branching ratios were set; they can be found in table 2.5.
We will present that analysis in more detail in this thesis.

Chapter 3 Experimental setup
Essentially, the Rutherford α-particle scattering experiment is repeated over and over, with energies far larger than the binding energy of a system, in order to probe the substructure of that system. Although the scientific method is the same, the energies and techniques have changed. Nowadays, accelerators are able to bring particles to extraordinary energies in the multi-TeV range.
There are two possibilities for colliding a beam of accelerated particles: with another beam, or with a fixed target. In both cases one can study the substructure of the colliding particles. By using a fixed target, one can furthermore produce a beam of secondary particles. These particles may be stable, unstable, charged or neutral, so the problem of accelerating unstable or neutral particles can be circumvented. On the other hand, the center-of-mass energy of a fixed-target experiment increases only with the square root of the beam energy, while it increases linearly with the beam energy in beam-beam collisions. Therefore, in order to reach higher energies, it is much more efficient to collide two beams moving in opposite directions.
Since the laws of physics at sub-atomic distance scales are governed by quantum mechanics, the outcome of each collision cannot be known ahead of time; the theory can only predict the probabilities of the various possible outcomes. Thus, the probability of a specific outcome of a collision connects experiment to theory and vice versa. On the theory side, there is a well-developed formalism for predicting cross sections based on quantum field theory for a given model. On the experimental side, the performance of an accelerator is characterised by its luminosity. The machine luminosity depends only on the beam parameters and can be written for a Gaussian beam distribution as [70]

L = (N_b^2 n_b f_rev γ_r)/(4π ε_n β*) · F,

where N_b is the number of particles per bunch, n_b the number of bunches per beam, f_rev the revolution frequency, γ_r the relativistic gamma factor, ε_n the normalized transverse beam emittance, β* the beta function at the collision point, and F the reduction factor due to the crossing angle at the interaction point (IP),

F = 1/√(1 + (θ_c σ_z / 2σ*)^2),

where θ_c is the full crossing angle at the IP, σ_z is the RMS bunch length, and σ* is the transverse RMS beam size at the IP. The above expression assumes round beams, with σ_z ≪ β*, and equal beam parameters for both beams. Finally, the expected number of events of a particular kind recorded per second can be found by multiplying the measured luminosity by the theoretically calculated cross section. The luminosity can be increased by reducing the transverse beam emittance, by increasing the number of particles in the beam, or by increasing the revolution frequency. The integral of the delivered luminosity over time is called the integrated luminosity and is a measure of the collected data size. All collider experiments aim to maximize their integrated luminosity, since the higher the integrated luminosity, the more data is available to analyze.
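For illustration, the luminosity formula above can be evaluated numerically. The sketch below uses approximate LHC design-like parameter values, which are assumptions for the example rather than measured quantities:

```python
import math

def luminosity(N_b, n_b, f_rev, gamma_r, eps_n, beta_star,
               theta_c, sigma_z, sigma_star):
    """Instantaneous luminosity for round Gaussian beams (cm^-2 s^-1)."""
    # Geometric reduction factor from the crossing angle at the IP.
    F = 1.0 / math.sqrt(1.0 + (theta_c * sigma_z / (2.0 * sigma_star)) ** 2)
    return (N_b ** 2 * n_b * f_rev * gamma_r) / (4.0 * math.pi * eps_n * beta_star) * F

# Illustrative LHC design-like parameters (assumed values):
L = luminosity(
    N_b=1.15e11,         # protons per bunch
    n_b=2808,            # bunches per beam
    f_rev=11245.0,       # revolution frequency [Hz]
    gamma_r=7461.0,      # relativistic gamma for 7 TeV protons
    eps_n=3.75e-4,       # normalized transverse emittance [cm]
    beta_star=55.0,      # beta function at the IP [cm]
    theta_c=285e-6,      # full crossing angle [rad]
    sigma_z=7.55,        # RMS bunch length [cm]
    sigma_star=16.7e-4,  # transverse RMS beam size at the IP [cm]
)
print(f"L = {L:.2e} cm^-2 s^-1")  # of order 1e34 for these parameters
```

With these inputs the result is of order 10^34 cm−2 s−1, consistent with the LHC design luminosity quoted later in the text.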

The Large Hadron Collider
The Large Hadron Collider (LHC) is a collider located at CERN 1 near Geneva (Switzerland) [70]. It is the largest particle accelerator ever built. Four main detectors are installed at its interaction points.
• ATLAS: A Toroidal LHC ApparatuS ATLAS is one of the two general-purpose detectors.
• CMS: Compact Muon Solenoid [73] CMS is the other general-purpose detector. Its mission is to study physics similar to ATLAS.
• ALICE: A Large Ion Collider Experiment [74] ALICE is an experiment that involves the collision of lead ions rather than protons. When heavy ions collide a new state of matter called quark-gluonplasma will be created which can bring good information from the very early universe.
• LHCb: Large Hadron Collider beauty [75] The focus of the LHCb experiment is to study phenomena involving b hadrons, in particular CP violation and rare decays. In addition to the four main detectors, two other small detectors operate near the ATLAS and CMS detectors.
• LHCf: Large Hadron Collider forward experiment [77] The LHCf is the smallest detector at the LHC which stands about 460 feet in front of the ATLAS collision point. It is intended to measure the properties of forward-moving particles produced when protons crash together. The goal is to test the capability of cosmic ray measuring devices.
• TOTEM: TOTal Elastic and diffractive cross section Measurement [78] TOTEM is a long, thin detector connected to the LHC beam pipe, located about 650 feet away from the CMS detector. This experiment studies forward particles, aiming toward ultra high-precision measurements of the cross-sections (effective sizes) of protons.

The Compact Muon Solenoid
Different particles leave characteristic signatures in the CMS subdetectors:
• A photon leaves no trace in the tracking system, and deposits all its energy in the electromagnetic calorimeter.
• An electron bends in the magnetic field and leaves a trajectory in the tracking system. It deposits all its energy in the electromagnetic calorimeter.
• A muon bends in the magnetic field, leaves a trajectory in the tracking system, and passes through the ECAL, the HCAL and the superconducting coil. It then penetrates the layers of the muon chambers, bending in the opposite direction due to the magnetic field outside the superconducting solenoid, leaves trajectories in all muon chambers, and exits the detector volume.
• A charged hadron (like pion, kaon and proton) bends in the magnetic field and leaves trajectories in the tracking system. It will pass through the electromagnetic calorimeter and deposit most of its energy in the hadronic calorimeter.
• A neutral hadron (like a K0L or a neutron) leaves no trace in the tracking system and, after passing through the electromagnetic calorimeter, deposits most of its energy in the hadronic calorimeter.
For more detailed information, please refer to the Technical Design Report of CMS [79].

Coordinate conventions
The CMS coordinate system has its origin at the nominal collision point; the x-axis points radially inward, toward the center of the LHC ring, the y-axis points vertically upward, and the z-axis lies along the beam direction, pointing toward the Jura mountains from LHC Point 5. The azimuthal angle φ is measured from the x-axis in the x − y plane, and the radial coordinate in this plane is denoted by r. The polar angle θ is defined in the r − z plane and is measured from the positive z-axis. The polar angle is often replaced by the pseudorapidity, defined as η = −ln(tan(θ/2)).
The plane transverse to the beam direction is called transverse plane (r − φ plane).
The component of momentum in the transverse plane is denoted by p_T and the transverse energy is defined as E_T = E sin(θ).
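The coordinate conventions above can be illustrated with two short helper functions; the sketch simply evaluates the definitions of η and E_T given in the text:

```python
import math

def pseudorapidity(theta):
    """eta = -ln(tan(theta/2)), with theta measured from the +z (beam) axis."""
    return -math.log(math.tan(theta / 2.0))

def transverse_energy(E, theta):
    """E_T = E * sin(theta)."""
    return E * math.sin(theta)

# A particle emitted at 90 degrees to the beam has eta = 0 and E_T = E,
# while small polar angles (forward directions) give large |eta|.
print(pseudorapidity(math.pi / 2), transverse_energy(100.0, math.pi / 2))
print(pseudorapidity(0.1))  # a forward particle: |eta| ~ 3
```

This makes explicit why the forward calorimeters, which cover small polar angles, correspond to the large-|η| region 3 < |η| < 5 mentioned later.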

Tracker
The inner tracker system of the CMS detector is responsible for a precise measurement of the trajectories of charged particles as well as a precise reconstruction of secondary vertices produced at LHC collisions. A precise measurement of secondary vertices is necessary in many of the interesting physics channels, especially those related to b-jets and τ physics.
The CMS inner tracker system surrounds the interaction point with a radius of 115 cm, over a length of 270 cm on each side of the interaction point. At the LHC design luminosity of 10^34 cm−2 s−1, there are more than 20 overlapping proton-proton interactions and around 750 particles in each bunch crossing, which produce a few thousand hits in the tracker. In order to perform precise track reconstruction in such a dense environment, a tracker system with high granularity and high hit resolution is required. In addition, the 25 ns spacing between bunch crossings requires a fast-response tracker, which rules out gas detectors because of their slow response. The intense particle flux also causes severe radiation damage to the tracking system, which was the main challenge in its design. These requirements on granularity, speed and radiation hardness are met by using silicon detectors.
The CMS silicon tracker consists of two tracking devices, the inner pixel and the outer strip detectors. The pixel detector consists of a central part (barrel) with three pixel layers, complemented by endcap disks. The high resolution pixel detector is closest to the interaction region. It contributes to unambiguous hit recognition and precise vertex reconstruction. It also provides a small impact parameter resolution, needed to distinguish secondary vertices arising from the decay of short-lived particles after they have traveled only a few hundred micrometers from the original collision point. In addition to the reconstruction of secondary vertices, the pixel detector is used to form seed tracks for the outer track reconstruction and for high level triggering.
The pixel detector is composed of pixel devices which provide a fine granularity in both r − φ and z. When a charged particle passes through the silicon, it deposits enough energy for electrons to be ejected from the silicon atoms, creating thousands or tens of thousands of electron-hole pairs. Each pixel uses an electric field to collect these charges on the surface as a small electric signal. A particle's trajectory can be deduced by knowing which pixels have been touched. Since the detector is made of two-dimensional tiles, rather than strips, and has a number of layers, a three-dimensional picture can be created. The silicon pixel size (100 × 150 µm2 in r − φ and z) makes it possible to reach the desired impact parameter resolution. The silicon strip detectors work in much the same way as the pixels: as a charged particle crosses the material it knocks electrons from atoms, and within the applied electric field these move, giving a very small pulse of current lasting a few nanoseconds. This small amount of charge is then amplified, giving hits when a particle passes and allowing its path to be reconstructed.
The CMS tracker was operated successfully during Run 1 of the LHC, which ended in February 2013. As mentioned in previous sections, the LHC delivered an integrated luminosity of about 6.1 fb−1 at 7 TeV and about 23.3 fb−1 at 8 TeV, and CMS recorded overall 93% of these data. During this time, less than 3% of the detector became inactive and less than 5% of the delivered luminosity was lost due to the tracker. By the time of the shutdown in 2013, about 2.3% (7.2%) of the barrel (endcap) modules of the pixel detector and 2.5% of the strip detector were inactive. The hit reconstruction efficiencies exceed 99% and 99.5% in the strip and pixel detectors, respectively (with the exception of the innermost pixel layer) [81].

Electromagnetic calorimeter
The Electromagnetic Calorimeter (ECAL) is responsible for identifying electrons and photons and for precisely measuring their energies and positions. One of the driving criteria in the CMS ECAL design was the requirement on energy resolution, in order to be sensitive to the decay of a Higgs boson into two photons. This capability is enhanced by the good energy resolution provided by a homogeneous crystal calorimeter. Crystal calorimeters have the potential to provide fast response, radiation tolerance and excellent energy resolution.
The CMS ECAL is a hermetic homogeneous calorimeter made of lead tungstate (PbWO 4 ) crystals which have high density (8.28 g/cm 3 ) and a short radiation length (0.89 cm) allowing for a fine granularity and a very compact calorimeter system.
The scintillation decay time of these crystals is of the same order of magnitude as the LHC bunch crossing time and about 80% of the light is emitted in 25 ns.
The light output is relatively low and depends on the temperature; it is about 4.5 photoelectrons per MeV. Each endcap is divided into two halves, or Dees, each of which holds 3662 crystals. The crystals have a front (rear) face cross section of 28.62 × 28.62 mm2 (30 × 30 mm2) and a length of 220 mm, which corresponds to 24.7 X0.
A preshower detector is placed in front of the endcaps to provide π0−γ separation.
As was mentioned, detecting the photons from the Higgs boson decay is one of the ECAL's main jobs. Neutral pions decay to two photons immediately after they are produced in collisions. These two low-energy photons can mimic a single high-energy photon when they are so close together that the ECAL picks them up as one. A preshower detector sits in front of the ECAL endcaps, within the fiducial region 1.653 < |η| < 2.6, to identify neutral pions. The preshower has a much finer granularity than the ECAL and can see each of the pion-produced photons separately; it also helps discriminate electrons against minimum ionizing particles and improves the position determination of electrons and photons.
The CMS preshower consists of two lead radiators, about 2 and 1 radiation lengths thick respectively, each followed by a layer of silicon micro strip detectors to measure the deposited energy and the transverse shower profiles. The two layers of detectors have their strips orthogonal to each other to measure the vertical and horizontal position of particles. Figure 3.7 shows the arrangement of crystal modules, supermodules and endcaps, with the preshower in front in a schematic picture.
In general the ECAL energy resolution can be parametrized as

(σ/E)^2 = (S/√E)^2 + (N/E)^2 + C^2,

where S is the stochastic term, N is the noise term, and C is the constant term.
The energy resolution of the CMS ECAL for electrons in beam tests with energies ranging between 20 and 250 GeV has been measured to be [82]

(σ/E)^2 = (2.8%/√E)^2 + (0.12/E)^2 + (0.30%)^2,

where E is measured in GeV. The resolution has also been measured in data and simulation using the 2010 and 2011 LHC data. The resolution for E_T ≈ 45 GeV electrons from Z boson decays is better than 2% in the EB, and is between 2% and 5% elsewhere.
The resulting energy resolution for photons with E T ≈ 60 GeV from 125 GeV Higgs boson decays varies across the EB from 1.1% to 2.6% and from 2.2% to 5% in the EE [83].
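The three-term resolution parametrization can be evaluated numerically; in this sketch the default S, N, C are representative test-beam values (assumed for illustration), showing how the stochastic and noise terms dominate at low energy while the constant term takes over at high energy:

```python
import math

def ecal_sigma_over_e(E, S=0.028, N=0.12, C=0.0030):
    """Relative ECAL energy resolution: stochastic (S/sqrt(E)), noise (N/E)
    and constant (C) terms added in quadrature; E in GeV.
    The default S, N, C are assumed representative test-beam values."""
    return math.sqrt((S / math.sqrt(E)) ** 2 + (N / E) ** 2 + C ** 2)

for E in (20.0, 45.0, 250.0):
    print(f"E = {E:5.0f} GeV -> sigma/E = {100 * ecal_sigma_over_e(E):.2f}%")
```

The test-beam resolution is better than the in-situ figures quoted above, since the latter include additional effects such as material in front of the calorimeter and intercalibration.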

Hadronic calorimeter
The primary purpose of the Hadronic CALorimeter (HCAL) is to measure the energy of hadron jets. In addition, the HCAL is able to perform a precise time measurement for each energy deposit. Precise time measurements can be used for excluding calorimeter noise and energy deposits from beam halo and cosmic ray muons. Time information can also be valuable for identifying some new physics signals, such as long-lived particle decays and slow high-mass charged particles [84].
The HCAL is a sampling calorimeter, meaning that it finds the position, energy and arrival time of incident particles using alternating layers of 'absorber' and fluorescent 'scintillator' materials. It has thin layers of scintillator interleaved between brass absorber plates. To maximize the absorber thickness in the small available space (about 1 m radially) inside the solenoid, the brass plates are relatively thick (5.5 cm) and the scintillator is relatively thin (3.8 mm). This structure produces a rapid light pulse. The light pulses are shifted into the visible region via wavelength-shifting fibers and fed into readout boxes, where they are amplified by photodetectors. The total amount of light summed over many layers of tiles in depth, called a 'tower', is a measure of a particle's energy.
The HCAL sits behind the tracker and the electromagnetic calorimeter as seen from the interaction point.
The HB is a sampling calorimeter located in the central region, |η| < 1.3. The plastic scintillator is divided into 16 η sectors, resulting in a segmentation of (∆η, ∆φ) = (0.087, 0.087). The HE covers a substantial portion of the rapidity range, 1.3 < |η| < 3. It consists of 19 layers of scintillator tiles sandwiched between 70 mm brass absorbers.
The total length of the calorimeter, including the electromagnetic crystals, is about 10 interaction lengths (λI). The energy resolution for a pion in the HE is σ/E ≈ 100%/√E(GeV).
In the central region, the HB is not thick enough to fully contain hadronic showers, particularly those fluctuating showers which develop deep inside the HCAL. The shower leakage has a direct consequence on the measurement of missing transverse energy. The outer calorimeter (HO) is therefore placed outside the solenoid, using the coil as an additional absorber of 1.4/sinθ interaction lengths, and is used to identify late starting showers and to measure the shower energy deposited after the HB. The energy resolution for a pion in the HO is σ/E ≈ 120%/√E(GeV) [73].
The very forward calorimeter, HF, is located in the forward region outside the magnetic field volume and covers a large pseudo-rapidity range, 3 < |η| < 5. The HF significantly improves jet detection and the missing transverse energy resolution, which are essential in single top quark production studies, Standard Model Higgs searches, and all SUSY particle searches [86]. The HF experiences the highest particle fluxes.
Around 760 GeV per proton-proton interaction is deposited into the two forward calorimeters on average, compared to only 100 GeV for the rest of the detector.
These high particle fluxes present a unique challenge to calorimetry and drove the HF design. Quartz fibers were chosen to satisfy the requirement of surviving these harsh conditions, providing about 10 λI. Iron absorbers with embedded quartz fibers parallel to the beam make a fast (∼10 ns) detector which collects the Cherenkov radiation.

Solenoid
One of the most important features of the CMS apparatus is the presence of a high solenoidal magnetic field. It is the largest superconducting magnet ever built. The superconducting magnet for CMS has been designed to reach a uniform magnetic induction of 3.8 T in a long superconducting solenoid of 12.5 m length and 6 m diameter with a stored energy of 2.6 GJ at full current.
The configuration and parameters of the magnetic field allow the measurement of muon momenta with good resolution, without making stringent demands on the spatial resolution of the muon chambers. A magnetic field of 3.8 T brings substantial benefits not only for muon tracking and inner tracking but also for electromagnetic calorimetry, through the higher momentum resolution obtained in the tracker.
The geometry of the CMS magnet is shown in the figure. The coil contains a four-layer superconducting thin solenoid built in five modules.
It is indirectly cooled by saturated helium at a temperature of 4.5 K. In the core, a 3.8 T magnetic field is provided. The thick saturated iron yoke returns the magnetic flux generated by the coil and provides a 2 T return field used to measure the muon momentum. The yoke is composed of 11 elements: five three-layered barrel wheels and three endcap disks on each side. The innermost yoke layer in the barrel region is 295 mm thick, and each of the two outermost ones is 630 mm thick. The barrel rings are approximately 2.5 m long. The central barrel ring, centred on the interaction point, supports the superconducting coil. The main role of the yoke is to increase the field homogeneity in the tracker volume and to reduce the stray field by returning the magnetic flux of the solenoid. In addition, the steel plates act as absorbers for the four interleaved layers (stations) of muon chambers that will be explained in the next section.

Muon system
As implied by the experiment's name, 'Compact Muon Solenoid', detecting muons is one of CMS's most important tasks. Muons provide a clean signal over the very high background rate at the LHC. For example, the Higgs boson decay to four muons through the H → ZZ → µ+µ−µ+µ− decay chain, called the 'gold plated channel', provides the best four-particle mass resolution and contributed to the recent discovery of the Higgs boson. In addition, many new physics scenarios require clean muon detection to be observed at the LHC. The CMS muon system is designed to identify and reconstruct muons over the entire kinematic range of the LHC. Muons can penetrate several meters of iron without interacting; unlike most particles, they pass through all layers of the detector. Therefore, the muon system is placed at the very edge of the experiment. To surround the solenoid magnet, the muon system was designed with a cylindrical barrel section and two planar endcap regions. The CMS muon system uses three types of gas-ionization particle detectors: drift tube (DT) chambers in the barrel, cathode strip chambers (CSC) in the endcaps, and resistive plate chambers (RPC) in both. In the endcap region, where the magnetic field is more intense and the background rate is higher than in the barrel, CSCs are used. CSCs are multiwire proportional chambers consisting of 6 anode wire planes interleaved with 7 cathode panels, on which the strips are milled perpendicular to the wires. Because the strips and the wires are perpendicular, two position coordinates are obtained for each passing particle. The CSC detectors, with fast response time, radiation resistance and fine segmentation (∼1 mm position measurement in the (r − φ) plane), identify muons in the range 0.9 < |η| < 2.4.
DTs and CSCs in the barrel and endcaps provide excellent position and time resolution. In order to assign the muon to the right bunch crossing when the LHC reaches full luminosity, a complementary, dedicated trigger system consisting of resistive plate chambers (RPC) was added in both the barrel and endcap regions. The RPC system provides excellent timing with somewhat poorer spatial resolution over a large portion of the rapidity range of the muon system (|η| < 2.1). In addition to serving as a dedicated trigger, the RPC system plays a role in the muon reconstruction procedure. A higher trigger efficiency and greater rate capability are obtained by processing the signals coming from the DTs, CSCs and RPCs in parallel.
The CMS RPC consists of two gaps with common read-out strips in between. Six layers of RPC chambers are embedded in the barrel iron yokes. The two innermost DT layers are sandwiched between RPC layers and the third and fourth DT layers are complemented with a single RPC layer as is shown in figure 3.9. In the endcap region, there is a plane of RPCs in each of the first 3 stations.

Trigger
The LHC is designed to operate at 14 TeV center-of-mass energy and a high luminosity (10^34 cm−2 s−1), at which about 22 inelastic interactions are expected to occur per bunch crossing. About one billion proton-proton interactions take place every second inside the detector, and it is impossible to store and process this large amount of data. Since experiments are typically searching for 'interesting' events (such as decays of rare particles) that occur at a relatively low rate, trigger systems are used to identify the events that should be recorded for later analysis. Therefore, the CMS trigger system has the formidable task of reducing the input data rate to O(100) Hz for writing to permanent storage. The CMS trigger system is based on two steps, called the Level 1 (L1) Trigger and the High-Level Trigger (HLT).
The CMS L1 trigger is a hardware-based system which uses coarse local data from the calorimeter and muon systems to form electron/photon, jet, energy sum, and muon triggers. The L1 trigger has three components: local, regional and global. The local component, also called the Trigger Primitive Generators (TPG), is based on the information coming from the local calorimeter trigger and the local muon trigger. The local calorimeter trigger looks for energy deposits in ECAL crystals and HCAL towers, and the local muon trigger searches for signals from the DT, CSC and RPC systems. The regional calorimeter trigger uses all available information to form e/γ candidates, calculate the transverse energy sums per calorimeter region, and prepare the isolation information for the e/γ and muon candidates. The global trigger then takes the final L1 decision to accept or reject the event. The HLT is a software-based system which further reduces the event rate to about 100 Hz on average. The HLT uses full-granularity detector data to perform reconstruction and filtering algorithms on a large computing cluster. Only data accepted by the HLT are recorded for offline physics analysis. The starting selection of the HLT is based on the L1 candidates; the reconstruction and filtering are then refined by also using the tracker information. The tracking is very important at the HLT level due to its role in the reconstruction. For example, improving the momentum resolution of the muon can reduce the muon trigger rate; by assigning a track to the calorimeter cluster, better electron identification is obtained; by finding the transverse impact parameter of tracks from a secondary vertex, it is possible to trigger on jets produced by b quarks; and the tracking provides good information for tagging hadronically decaying tau leptons.
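The two-step rate reduction can be summarized with simple arithmetic. The 40 MHz bunch-crossing rate and the 100 kHz L1 accept rate below are typical round figures used as assumptions for the illustration:

```python
# Approximate trigger rate budget (illustrative figures, not exact CMS values):
bunch_crossing_rate = 40e6  # Hz, for 25 ns bunch spacing
l1_output_rate = 100e3      # Hz, typical L1 accept rate (assumption)
hlt_output_rate = 100.0     # Hz, written to permanent storage

l1_rejection = bunch_crossing_rate / l1_output_rate   # factor ~400 at L1
hlt_rejection = l1_output_rate / hlt_output_rate      # factor ~1000 at the HLT
overall = bunch_crossing_rate / hlt_output_rate       # ~4e5 overall reduction
print(l1_rejection, hlt_rejection, overall)
```

The hardware L1 stage must decide within a fixed latency, which is why it uses only coarse calorimeter and muon data, while the software HLT can afford full-granularity reconstruction at its much lower input rate.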

Chapter 4 Event Reconstruction
In this chapter, we will review the procedure in which particles are produced in proton-proton collisions and are identified with the CMS detector.

Collider physics
In quantum field theory, one calculates the cross section of processes with one or two incoming fundamental particles interacting to form a final state. This can be used directly to find the expected number of events when the incoming particles are fundamental and without internal sub-structure, as in e−e+ collisions. When the interacting fundamental particles are confined in a composite particle, like quarks and gluons inside the proton, the hard collision of interest only occurs when partons with the right quantum numbers happen to have the right center-of-mass energy to make the desired final state. Thus, a precise knowledge of the probability f_i(x, Q2) that a parton of flavour i carries a fraction x of the proton momentum at the scale Q2 is needed; these are the parton distribution functions (PDFs). Cross sections (σ) are calculated by convoluting the parton level cross section σ̂ with the PDFs. The total cross section can be written as

σ = Σ_{i,j} ∫ dx1 dx2 f_i(x1, µF) f_j(x2, µF) σ̂_{ij}(x1, x2, µr, µF),    (4.1)

where σ̂ depends on the renormalization (µr) and factorization (µF) scales. It can be calculated perturbatively in QCD for hard scattering with an energy scale much larger than ΛQCD. For an arbitrary hard process, the electroweak and QCD next-to-leading order (NLO) corrections can modify the parton level cross section. Therefore, the calculation of the higher order effects is very important to estimate the contributions of the different processes more accurately and to make sure that the cross sections are under control for precision measurements.
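The convolution structure of the hadronic cross section can be sketched with a toy Monte Carlo integration. The parton density and partonic cross section below are invented toy functions (not real PDF fits or matrix elements), chosen only to show the double integral over the momentum fractions x1 and x2:

```python
import random

random.seed(42)

def toy_pdf(x):
    """A toy, normalized parton density (not a real PDF fit)."""
    return 6.0 * x * (1.0 - x)

def sigma_hat(s_hat, M2=0.04):
    """Toy partonic cross section: zero below the threshold M2,
    falling as 1/s_hat above it (arbitrary units)."""
    return 1.0 / s_hat if s_hat > M2 else 0.0

def hadronic_xsec(n=200_000, s=1.0):
    """sigma = int dx1 dx2 f(x1) f(x2) sigma_hat(x1*x2*s), estimated
    by Monte Carlo with uniform sampling on the unit square."""
    total = 0.0
    for _ in range(n):
        x1, x2 = random.random(), random.random()
        total += toy_pdf(x1) * toy_pdf(x2) * sigma_hat(x1 * x2 * s)
    return total / n

print(f"toy hadronic cross section ~ {hadronic_xsec():.3f} (arbitrary units)")
```

Real calculations replace the toy ingredients with fitted PDF sets and perturbative matrix elements, and use far more sophisticated phase-space integration, but the convolution in equation 4.1 has exactly this structure.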

Event Generation
In order to study a signal from SM processes or extract a signal of new physics from the SM backgrounds, one needs to generate and simulate events similar to what is expected in real data. At high energy colliders like the LHC, several issues make this procedure challenging. In each hard interaction, hundreds of SM or BSM particles can be produced with momenta ranging over many orders of magnitude. The calculation of matrix elements is too laborious at higher orders of perturbation theory.
At low energies, all soft hadronic phenomena (like hadronization and the underlying event) must rely on QCD-inspired models and cannot be computed from first principles. Many divergences and near-divergences must be addressed after the calculation of the matrix element. Finally, the matrix elements must be integrated over a final-state phase space of huge dimension in order to obtain predictions for experimental observables [91]. There is a very broad spectrum of event generators, from general purpose ones to matrix element generators. The general purpose MC event generators, such as HERWIG [92], Pythia [93] and Sherpa [94], provide a comprehensive list of LO matrix elements for the SM and some BSM processes. In addition to the LO matrix elements, multi-purpose MC generators contain theory and models for a number of physics aspects, such as hard and soft interactions, parton distributions, initial and final state parton showers, multiple interactions, fragmentation and decay. In order to compute the hard process matrix element at higher order and cope with arbitrary final states, matrix element generators have been constructed. Parton level events generated by the matrix element generators are processed by general purpose event generators to perform the remaining steps. The most widely used matrix element generators in CMS are ALPGEN [95], POWHEG [96] and MADGRAPH [97].

Renormalization and factorization scales
As can be seen in equation 4.1, the cross section for hadronic collisions can be expressed as the convolution of hard processes (short distance, calculable in perturbation theory) and soft processes (long distance, e.g. PDFs). The factorization scale separates the short-distance physics of the hard-scattering cross section from the long-distance hadronic physics [98].
The hard and soft processes have expansions in powers of the strong coupling constant (αs), while the coefficients of these expansions are known only to a certain order in αs. Therefore, the series must be truncated at a given order, and the uncalculated higher order terms are missing from the perturbation series. Due to these missing higher order corrections, there are theoretical uncertainties on the cross sections. They are traditionally estimated by varying the scale µ = µF = µr between Q/2 and 2Q, where Q is set to the natural scale, typically the mass scale of the process.
It is worth noting that the inherent uncertainty deriving from the dependence of the cross section on the unphysical renormalization and factorization scales is often large in a lowest order calculation. This dependence is reduced by calculating the cross section at higher orders.
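The traditional scale-variation estimate can be illustrated with one-loop running of αs applied to a toy LO quantity proportional to αs². The choice Q = 173 GeV (roughly the top quark mass) and the toy normalization are assumptions for the example:

```python
import math

def alpha_s(mu, alpha_mz=0.118, mz=91.19, nf=5):
    """One-loop running of the strong coupling from alpha_s(M_Z)."""
    b0 = (33.0 - 2.0 * nf) / (12.0 * math.pi)
    return alpha_mz / (1.0 + b0 * alpha_mz * math.log(mu ** 2 / mz ** 2))

Q = 173.0  # natural scale of the process: the top-quark mass [GeV] (example)
# A toy LO cross section proportional to alpha_s^2, evaluated at Q/2, Q, 2Q:
xsec = {mu: alpha_s(mu) ** 2 for mu in (Q / 2, Q, 2 * Q)}
central = xsec[Q]
spread = (max(xsec.values()) - min(xsec.values())) / (2 * central)
print(f"scale variation uncertainty ~ +/-{100 * spread:.0f}% at LO")
```

For this toy αs²-dependent quantity, the [Q/2, 2Q] variation yields a spread of roughly 20%, which is typical of the size of LO scale uncertainties; an NLO calculation, whose explicit logarithms of µ partly cancel this dependence, would show a much smaller spread.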

Parton showers
The momenta of the outgoing partons in a hard process can be calculated using matrix elements at leading order, or in a few cases at next-to-leading order, in αs. The effect of higher orders can also be simulated through a parton shower algorithm. It is typically formulated as a chain of momentum transfers from the high scale down to the low scale at which the partons are confined into hadrons (≈ 1 GeV).
High-momentum coloured partons emit QCD radiation in the form of gluons, just as photons are radiated from accelerated charged particles. Unlike the uncharged photons, gluons carry colour charge and can split into further gluons, causing further radiation and leading to parton showers. The cross section for a parton i splitting into partons j and k, for example q → q + g, has two infra-red divergences, soft and collinear. The soft divergence occurs when the radiated gluon energy tends to zero, and the collinear divergence occurs when the j and k partons are collinear. Therefore, the enhanced higher order terms are associated with the emission of soft or collinear gluons, for which the relevant QCD matrix elements are large.
The parton branching is formalized in terms of the Sudakov form factor, which is given by [99]

∆_i(q1², q2²) = exp[ − ∫_{q2²}^{q1²} (dq²/q²) ∫ dz (αs/2π) P_{ji}(z) ],

where αs is the strong coupling constant, z is the energy fraction of i carried by j, P_{ji} is the splitting function, in which several types of splitting are included, and Q0 is the cutoff scale at which the shower is terminated. The Sudakov form factor is the probability of evolving from q1² to q2² without branching. The MC production of the parton shower follows the structure below.
Starting from the scale q1², the scale q2² is found using the ratio of Sudakov factors at the two scales, in such a way that no further splitting occurs in between.
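The branching chain described above can be sketched in a few lines. Replacing the full z-integral of (αs/2π) P_ji(z) by a constant effective kernel c is a toy assumption that makes the Sudakov factor analytically invertible, ∆(q1², q2²) = (q2²/q1²)^c:

```python
import random

random.seed(7)

def next_scale(q2, c=0.6):
    """Sample the next branching scale from a simplified Sudakov factor.
    With a constant effective kernel c (toy assumption), solving
    Delta(q2, q2') = (q2'/q2)**c = r for uniform r in (0,1) gives
    q2' = q2 * r**(1/c)."""
    r = random.random()
    return q2 * r ** (1.0 / c)

def shower(q2_start, q2_cut=1.0):
    """Generate the ordered chain of branching scales down to the cutoff Q0^2."""
    scales, q2 = [], q2_start
    while True:
        q2 = next_scale(q2)
        if q2 < q2_cut:
            break  # shower terminates at the cutoff scale
        scales.append(q2)
    return scales

chain = shower(q2_start=1.0e4)  # start at the hard-process scale (GeV^2, example)
print("branching scales (GeV^2):", [round(s, 1) for s in chain])
```

Each generated scale is strictly below the previous one, reproducing the ordered evolution from the hard scale down to the hadronization scale; a real shower would in addition sample z from the splitting functions and apply the veto algorithm for the running of αs.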

Parton distribution functions
The cross section defined by equation 4.1 is influenced by the choice of PDF set.
Moreover, the event shape can vary for different PDF sets, given the role of the PDFs in parton showers and multiple parton interactions. As was discussed, f_i(x, µF) cannot be extracted from first QCD principles. Nevertheless, perturbative QCD can predict the scale dependence of the PDF evolution through the DGLAP equations.
Over the years, different ansätze have been made and developed by different groups using the relevant data, e.g. from Deeply Inelastic Scattering (DIS).

Hadronization
Due to the asymptotic freedom of QCD, coloured quarks and gluons can be regarded as free particles during a hard interaction. After the parton shower has terminated, we enter the low-momentum-transfer, long-distance regime in which colour confinement organizes the partons into colourless hadrons. This is called hadronization or fragmentation. Fragmentation is governed by non-perturbative QCD and cannot be calculated from first principles. Different models have been developed and parametrised to describe the transition between the partonic final state and the hadronic final state, including the Lund string model and the cluster model [100]. The colour-connected pairs of partons produce jets of hadrons.

Tracks and vertices
The CMS tracker was designed to achieve good spatial resolution during the high-luminosity LHC running. In such a dense environment, efficient track-finding algorithms are also needed to deliver the desired performance.
In the first step of the reconstruction process, signals above specified thresholds in the pixel and strip channels are clustered into hits. The positions of the hits are then determined using dedicated algorithms. The average hit efficiency, i.e. the probability to find a cluster in a given sensor traversed by a charged particle, is more than 99% and 99.8% for the pixel and strip detectors, respectively [101]. In the second step, the hits are used to reconstruct the tracks of the charged particles. The Combinatorial Track Finder (CTF) algorithm produces the collection of reconstructed tracks iteratively: first, the easiest tracks (e.g. large-P_T tracks) are searched for and the associated hits are removed from the hit collection; the search is then repeated for more difficult and challenging tracks on the reduced set of hits.
Each iteration proceeds in four steps: • A seed is generated from a few hits to provide initial track candidates.
• The seed trajectories are extrapolated along the expected flight path of a charged particle to find additional hits that can be assigned to the track candidate.
• The track parameters are obtained from the best fit.
• The tracks are selected by applying quality criteria.
The magnetic field bends charged particles onto helical paths, so five parameters are needed to define a trajectory. These parameters can be extracted from three 3-dimensional points, or from two 3-dimensional points together with the assumption that the track originates near the interaction point. The high granularity of the pixel detector reduces the fraction of channels that are hit compared to the outer strip layers: for example, around 0.002-0.02% of the channels in the pixel detector and 0.1-0.8% in the strip detector were occupied during data taking with a 'zero-bias' trigger, with about nine pp interactions per bunch crossing. In addition, the pixel detector provides 3-dimensional spatial measurements, which are essential for the estimation of the trajectory parameters. Therefore, track finding begins with trajectory seeds created in the inner region of the tracker. This also enables the reconstruction of low-momentum tracks that are deflected by the strong magnetic field before reaching the outer part of the tracker.
The seed-generation procedure needs the position of the centre of the reconstructed beam spot and the locations of the primary vertices in the event (including those from pileup). A very fast track and vertex reconstruction algorithm is run on the hits from the pixel detector to provide this initial information. The tracks and primary vertices found at this level are known as pixel tracks and pixel vertices, respectively.
A series of six iterations of the track reconstruction algorithm is applied for the full track reconstruction at CMS. Iteration 0 is designed for prompt tracks near the interaction point with P_T > 0.8 GeV and three pixel hits. The seeds at this step are built from three pixel hits (pixel triplets); the three precise 3-dimensional space points provide high-quality seeds and well-measured starting trajectories. Iteration 1 recovers prompt tracks that have only two pixel hits; the seeds are built from two hits plus a third space point given by the location of a pixel vertex, of which there is usually more than one because of pileup (mixed pairs with vertex). Iteration 2 is responsible for finding low-P_T prompt tracks, seeded from pixel triplets. Iterations 3-5 are intended to find tracks produced outside the pixel detector volume, or tracks that do not leave hits in the pixel detector. Iterations 3 and 4 use the two inner TIB layers and rings 1-2 of the TID/TEC, and iteration 5 uses the two inner TOB layers and ring 5 of the TEC for seeding. Table 4.1 shows the configuration of the track seeding for each of the six iterative tracking steps: the layers used to seed the tracks, as well as the requirements on the minimum P_T and on the maximum transverse (d_0) and longitudinal (z_0) impact parameters relative to the centre of the beam spot; σ denotes the Gaussian standard deviation of the beam-spot length along the z direction [101].

After the track-finding procedure, tracks are selected so as to reduce the fraction of fake tracks, i.e. tracks not associated with a charged particle. Tracks with an acceptable fit quality (in terms of the χ² per number of degrees of freedom, ndf) that are compatible with originating from a primary interaction vertex are selected, using the track impact parameters: the distance from the centre of the beam spot in the plane transverse to the beam line, and the distance along the beam line from the closest pixel vertex.
The z coordinates of the selected tracks at their points of closest approach to the beam line are used to cluster them into vertices. The clustering algorithm must balance between resolving all vertices, including those from pileup, and not splitting a single vertex in two. Track clustering is performed with a deterministic annealing (DA) algorithm [102], which can find the global minimum of a problem with many degrees of freedom. In this method, the z coordinates of the points of closest approach of the tracks to the centre of the beam spot, together with their associated uncertainties, are used to build a function whose minimum gives the most probable vertex positions.
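A minimal one-dimensional sketch of the deterministic-annealing idea (illustrative only; the CMS algorithm additionally uses per-track uncertainties and adaptive splitting of vertex prototypes) could look as follows:

```python
import math


def da_vertex_positions(z_tracks, n_vertices=2, t_start=16.0, t_stop=0.5, n_iter=50):
    """Toy 1D deterministic-annealing clustering of track z coordinates.

    Tracks are softly assigned to vertex prototypes with Boltzmann weights
    exp(-(z - zv)^2 / T); lowering the temperature T gradually hardens the
    assignments, which helps avoid local minima of the objective function.
    """
    zmin, zmax = min(z_tracks), max(z_tracks)
    # spread the initial prototypes across the track range
    zv = [zmin + (i + 0.5) * (zmax - zmin) / n_vertices for i in range(n_vertices)]
    t = t_start
    while t > t_stop:
        for _ in range(n_iter):
            num = [0.0] * n_vertices
            den = [0.0] * n_vertices
            for z in z_tracks:
                w = [math.exp(-((z - v) ** 2) / t) for v in zv]
                s = sum(w)
                for i in range(n_vertices):
                    p = w[i] / s  # soft assignment probability
                    num[i] += p * z
                    den[i] += p
            # move each prototype to the weighted mean of its assigned tracks
            zv = [num[i] / den[i] if den[i] > 0.0 else zv[i]
                  for i in range(n_vertices)]
        t /= 2.0  # cooling schedule
    return sorted(zv)
```

At high temperature all tracks contribute to every prototype; as T falls below the critical value set by the track spread, the prototypes separate and settle on the individual vertex positions.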
The tracking algorithm provides good track reconstruction for charged particles with P_T > 0.1 GeV over the full pseudorapidity range of the tracker, |η| < 2.5. The average track-reconstruction efficiency for charged particles with P_T > 0.9 GeV is 94% (85%) in the barrel (endcaps), measured with tt simulated events under typical 2011 LHC pileup conditions. The achieved vertex position resolution for vertices with many tracks is 10-12 µm in each of the three spatial dimensions.

Particle Flow
The CMS experiment uses the particle-flow event reconstruction technique to reconstruct and identify all stable particles in the event (i.e., electrons, photons, muons, charged hadrons and neutral hadrons). In this technique the direction, energy and type of each particle are determined by combining the information from all CMS subdetectors [103]. The resulting particle list is then used to build jets, determine the missing transverse energy, reconstruct decay products, and so on.
As discussed in section 4.3, an iterative tracking strategy is adopted in the tracker to achieve both high efficiency and a low fake rate. The tracker is the cornerstone of particle-flow event reconstruction, given its good momentum resolution and its precise measurement of the charged-particle direction at the production vertex. Stable neutral particles such as photons and neutral hadrons, on the other hand, are not reconstructed by the tracker; the information from the calorimeters is used to determine their energy and direction. A specific clustering algorithm has been developed for particle-flow event reconstruction and is performed separately in each subdetector (ECAL barrel and endcap, HCAL barrel and endcap, first and second PS layers), except for HF.
The particle-flow elements, tracks and calorimeter clusters, are connected to each other by a link algorithm. In this algorithm, the track is extrapolated from its last hit to the PS layers, the ECAL and the HCAL, and is linked to a given cluster if the extrapolated position lies within the cluster boundaries. Similarly, a link between two calorimeter clusters, or between an ECAL and a PS cluster, can be established. This yields 'blocks' containing two or three elements, which form the basis of particle reconstruction and identification. When a block is classified as a certain type of particle, it is removed from the list of unclassified blocks.
The particle-flow algorithm is able to reconstruct, with good precision, the more than 90% of the jet energy carried by charged particles, photons and neutral hadrons. In figure 4.4, particle-flow reconstructed jets are compared to jets built from calorimeter information alone (calo jets). The difference between the transverse momenta of the reconstructed and generated jets, divided by the generated-jet P_T, is used to quantify the resolution of the jet reconstruction procedure. The particle-flow jets have a response close to 100% independent of P_T, whereas the calo-jet response is markedly lower, particularly at low P_T.

Photon
Photons are reconstructed from the clusters of crystals within the ECAL. Photon identification is based on shower-shape and isolation variables. Some of the most commonly used variables are itemized in the following: • The weighted cluster RMS along η inside the 5 × 5 region of the supercluster,

σ_iηiη² = Σ_i ω_i (η_i − η̄_{5×5})² / Σ_i ω_i,

where the index i runs over the 5 × 5 crystals surrounding the seed crystal of the supercluster, η_i is the pseudorapidity of the i-th crystal, η̄_{5×5} is the energy-weighted mean pseudorapidity of the 5 × 5 crystals, and ω_i = 4.7 + ln(E_i/E_{5×5}).
• The ratio of the energy in the HCAL towers just behind the electromagnetic seed cluster, within a cone of radius ∆R = 0.15, to the energy of the electromagnetic supercluster (H/E).
• the scalar sum of energy of the hadronic particle flow candidates reconstructed in a cone of ∆R < 0.3 around the photon candidate (PF charged hadron isolation). This variable quantifies the amount of hadronic activity in the vicinity of a photon candidate. The PF charged hadron isolation is able to distinguish photon candidates originating from jet misidentification because they are more likely to be reconstructed close to charged hadronic particles than isolated prompt photons are.
• the sum of energy of neutral hadrons in a cone of ∆R < 0.3 (PF neutral hadron isolation).
• the sum of energy of photons in a cone of ∆R < 0.3 (PF photon isolation).
The PF photon isolations must be corrected for the pileup energy contribution as

Iso^corrected = max(0, Iso − ρ_event · A_eff),

where the energy density ρ_event, computed using the FastJet package [106], is the median background density per unit area and a measure of the pileup activity in the event [107]. The effective area A_eff is the area of the isolation region weighted by a factor that takes into account the dependence of the pileup transverse energy density on η. Once this extra pileup contribution is subtracted, the photon isolation becomes largely insensitive to pileup.
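The pileup subtraction above amounts to a one-line correction; a minimal sketch follows (the clamping at zero, preventing a negative corrected isolation, is the convention assumed here):

```python
def corrected_pf_isolation(iso_raw, rho, a_eff):
    """Pileup-corrected PF isolation.

    The median pileup energy density rho, scaled by the eta-dependent
    effective area a_eff of the isolation cone, is subtracted from the
    raw isolation sum; the result is clamped at zero.
    """
    return max(0.0, iso_raw - rho * a_eff)
```

For example, a raw isolation of 3 GeV with ρ = 10 GeV per unit area and A_eff = 0.2 is corrected down to 1 GeV.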

Electrons
The CMS detector benefits from a high-resolution silicon tracker and electromagnetic calorimeter. Nevertheless, the identification and reconstruction of electrons is a challenging task, and relies on the following variables: • The η difference between the supercluster and the associated inner track extrapolated from the interaction vertex to the ECAL surface (|∆η_in|).
• The φ difference between the supercluster and the associated inner track extrapolated from the interaction vertex to the ECAL surface (|∆φ_in|).
• The weighted cluster RMS along η, as introduced in section 4.5.
• The transverse and longitudinal impact parameters with respect to the vertex (d_xy^vtx and d_z^vtx).
• The difference between the inverse of the supercluster energy and the inverse of the track momentum (|1/E − 1/p|).
The isolation variable is computed by summing the transverse momenta of the photon, charged-hadron and neutral-hadron PF candidates within a cone of radius ∆R = 0.3 around the electron candidate. The electron PF isolation is defined as

I_e^PF = Σ P_T^charged + max(0, Σ P_T^neutral had + Σ P_T^γ − ρ · A_eff),

where ρ is the median energy density and A_eff is the effective area. The effective-area correction is applied to the isolation sum to remove the effect of pileup.
The ratio I_e^PF / P_T is used to apply the isolation cut. To select electron candidates for the analysis performed in this dissertation, a series of cuts, summarized in Table 4, is applied to the variables described above.

Muons
The muon reconstruction at the CMS is based on the track reconstruction at the tracker and muon detectors. The matched energy deposits in the calorimeters are also used in the muon reconstruction. The tracks in the silicon tracker and muon spectrometers are reconstructed independently and are called tracker tracks and standalone tracks, respectively. Then, two complementary approaches are used for the muon reconstruction from these tracks.
• Global muon reconstruction: a standalone muon track is matched with a tracker track, and a global muon track is fitted to the combined set of hits. This approach is very efficient for muons with large transverse momenta (P_T > 200 GeV).
• Tracker muon reconstruction: all tracker tracks are extrapolated to the muon system, taking into account the expected energy loss. If at least one muon segment in the muon system matches the extrapolated track, the tracker track is considered a tracker muon. This approach is more efficient at low momentum than the global-muon approach.
The majority of muons are reconstructed as either global or tracker muons. If both approaches fail and a standalone track is left without any matching tracker track, a third category of muons, called standalone-only muons, is saved (≈ 1%). All muon candidates are merged into a single collection, each one containing the information from the tracker, standalone and global fits; candidates reconstructed by both approaches are merged into a single candidate [109].
As with electrons, additional information is associated with the muon candidate and is useful for muon identification and quality selection. Some of the main variables are [110]: • The number of hits in both the tracker and the muon system, as well as the number of segments in the muon system.
• The transverse and longitudinal impact parameters with respect to the primary vertex (d_0 and d_z).
• The χ 2 of the fit for both tracker and global muon.
The muon PF isolation is computed as

I_µ^PF = Σ P_T^charged + max(0, Σ P_T^neutral had + Σ P_T^γ − 0.5 · P_T^PU),

where P_T^PU is the sum of the transverse momenta of the tracks associated with non-leading (pileup) vertices.
Muons originating from the decays of W and Z bosons frequently have higher P_T than those from other sources such as hadron decays. The separation between muons from different sources is therefore improved by normalizing the isolation energy to the muon P_T, giving the relative isolation variable I_rel^µ = I_µ^PF / P_T. Selection cuts on these quantities minimize the contribution of muons originating from cosmic rays, heavy-flavour decays and hadronic showers. Two selection working points, called tight and loose, are used in this analysis. Table 4.4 lists the cuts applied to select loose and tight muons. Tight muons are required to have P_T > 26 GeV and |η| < 2.1 to match the HLT criterion used to collect the data, as described in section 5.1.2. Less restrictive cuts are applied to select loose muons, with P_T > 10 GeV and |η| < 2.5, which are used to veto events with additional muons.
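Combining the isolation sums with the P_T^PU term introduced above, the pileup-corrected relative isolation can be sketched as follows (the factor 0.5, approximating the neutral-to-charged pileup ratio, is the standard "delta-beta" choice assumed here):

```python
def muon_rel_isolation(pt_mu, ch_had, neut_had, photons, pu_charged):
    """Delta-beta-corrected relative PF isolation for a muon.

    Half of the charged-particle sum from pileup vertices (pu_charged)
    estimates the neutral pileup contribution, which is subtracted from
    the neutral sums (clamped at zero) before normalizing to the muon pT.
    """
    neutral = max(0.0, neut_had + photons - 0.5 * pu_charged)
    return (ch_had + neutral) / pt_mu
```

A tight or loose working point then corresponds to an upper cut on the returned value.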

Jets
QCD confinement implies that the original quark or gluon is never seen in its free state; the partons bind into colourless hadrons. In a hard scattering process, a quark or gluon fragments or hadronizes immediately after being produced. The resulting spray of hadrons travels more or less in the direction of the final-state parton and is collectively called a jet. Over the years, various methods have been proposed and used to cluster hadrons and define jets [111]. Jet algorithms usually involve one or more parameters that determine whether two particles belong to the same jet or to separate jets.
CMS uses the anti-k_t algorithm to define jets [112].
The anti-k_t algorithm is a sequential recombination algorithm which uses the distances

d_ij = min(1/P_Ti², 1/P_Tj²) ∆_ij²/R²,    d_iB = 1/P_Ti²,

where ∆_ij² = (y_i − y_j)² + (φ_i − φ_j)², and P_Ti, y_i = ½ ln[(E + p_z)/(E − p_z)] and φ_i are the transverse momentum, rapidity and azimuth of particle i. R is a radius parameter similar to the radius in cone algorithms. The distances in equation 4.6 are used as follows: the smallest distance is found; if it is a d_ij, particles i and j are combined, while if it is a d_iB, particle i is declared a jet and removed from the list. The value of d_ij is determined by the transverse momenta of the particles and their separation ∆_ij. If there is a hard, high-transverse-momentum particle among soft particles, the minimum of d_ij occurs when i is the hard particle and j is a soft particle close to it. A hard particle therefore simply accumulates all the soft particles within a circle of radius R, and the anti-k_t algorithm yields a perfectly conical jet. If another hard particle is present at a distance R < ∆_12 < 2R from the first one, there will be two hard jets; if ∆_12 < R, the two hard particles are clustered into a single jet. Jets are reconstructed from several types of inputs: • gen-jets: stable simulated particles, except for neutrinos, are clustered after hadronization and before any interaction with the detector.
• PF-jets: all PF candidates are clustered, without distinction of type and without any energy threshold. The four-momentum vectors of the PF candidates are used to reconstruct jets with the anti-k_t algorithm with R = 0.5 in CMS. PF jets take advantage of the excellent momentum and spatial resolutions for the charged hadrons and photons inside a jet, which together constitute about 85% of the jet energy. The PF-jet momentum and spatial resolutions are greatly improved with respect to calorimeter jets. Gen-jets are used as a reference to compare the PF-jet performance to the calo-jet performance in [103]; PF-jets match the gen-jets better than calo-jets do, and can be used in analyses down to P_T as small as 5 GeV/c.
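The clustering logic of the distances d_ij and d_iB can be illustrated with a minimal, deliberately slow implementation; this is a sketch with a simplified pT-weighted recombination, not the FastJet package used in practice.

```python
import math


def antikt_cluster(particles, r=0.5):
    """Minimal anti-kt clustering of (pt, y, phi) tuples.

    d_ij = min(1/pt_i^2, 1/pt_j^2) * DeltaR_ij^2 / R^2 and d_iB = 1/pt_i^2;
    the smallest distance is either a merge (d_ij) or a jet promotion (d_iB).
    """
    jets = []
    objs = list(particles)
    while objs:
        # smallest beam distance
        best_ib = min((1.0 / pt ** 2, i) for i, (pt, y, phi) in enumerate(objs))
        # smallest pairwise distance
        best_ij = None
        for i in range(len(objs)):
            for j in range(i + 1, len(objs)):
                pti, yi, phii = objs[i]
                ptj, yj, phij = objs[j]
                dphi = abs(phii - phij)
                if dphi > math.pi:
                    dphi = 2.0 * math.pi - dphi
                dr2 = (yi - yj) ** 2 + dphi ** 2
                d = min(1.0 / pti ** 2, 1.0 / ptj ** 2) * dr2 / r ** 2
                if best_ij is None or d < best_ij[0]:
                    best_ij = (d, i, j)
        if best_ij is not None and best_ij[0] < best_ib[0]:
            # merge i and j (pt-weighted recombination, a common simplification)
            _, i, j = best_ij
            pti, yi, phii = objs[i]
            ptj, yj, phij = objs[j]
            pt = pti + ptj
            y = (pti * yi + ptj * yj) / pt
            phi = (pti * phii + ptj * phij) / pt
            objs = [o for k, o in enumerate(objs) if k not in (i, j)] + [(pt, y, phi)]
        else:
            # promote the pseudojet with the smallest beam distance to a jet
            _, i = best_ib
            jets.append(objs.pop(i))
    return jets
```

A hard particle surrounded by nearby soft ones absorbs them first (tiny d_ij driven by 1/pt_hard²) and is then promoted as a single conical jet, reproducing the behaviour described above.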
Whereas several correction factors are needed to bring the energy scale of calo-jets up to unity, PF-jets have an energy scale very close to unity and need only small residual corrections. The default jet-energy correction [113] brings the jet energy measured in the detector to the energy of the final-state gen-jet or parton-jet, and depends on the jet P_T and η. First, the energy clustered inside a jet due to the underlying event, electronic noise and pileup is measured using minimum-bias events (L1 correction). Then the energy of the reconstructed jet is corrected to match the gen-jet and to be uniform in P_T and η, using dijet, Z+jet and γ+jet samples (L2 and L3 corrections) [114].
The analysis considers jets within |η| < 2.5 whose calibrated transverse energy is greater than 30 GeV and which pass a set of quality cuts: PF jets must have more than one constituent; their neutral hadronic, charged electromagnetic and neutral electromagnetic energy fractions must each be smaller than 99%; and their charged hadronic energy fraction must be larger than 1%, with at least one charged particle in the jet.

b-tagging
B tagging, the identification of jets originating from b quarks, is a critical tool for many high-energy-physics analyses; in top physics, for example, b tagging reduces otherwise overwhelming backgrounds. The properties of bottom hadrons can be used to identify a jet originating from a b quark.
Hadrons containing bottom quarks have lifetimes (τ ≈ 10⁻¹² s) long enough that they travel some distance (of the order of hundreds of micrometres) before decaying. This leads to the presence of a secondary vertex originating from the b-hadron decay. When the tracks from the secondary vertex are extrapolated back to the primary vertex, they have rather large impact parameters, where the impact parameter is defined as the smallest distance between the track trajectory and the primary vertex. These properties cause b jets to be wider, to have higher multiplicities and invariant masses, and to contain low-energy leptons with momentum transverse to the jet axis.
A variety of algorithms have been developed and used by the CMS Collaboration to discriminate between bottom and light-parton jets, based on variables such as the impact parameters of charged-particle tracks, the presence or absence of a lepton, the properties of reconstructed decay vertices, and combinations of these properties [115]. Each algorithm yields a single discriminator value for each jet; thresholds on this discriminator define working points with given tagging and mistagging efficiencies (the latter being the efficiency to tag non-b jets).
The Combined Secondary Vertex (CSV) algorithm is one of the most powerful and successful of these algorithms. CSV is a sophisticated approach which makes use of secondary vertices together with track-based lifetime information. Secondary-vertex candidates must pass the following requirements: • the fraction of tracks shared between the secondary and primary vertices must be less than 65%.
• secondary-vertex candidates with a radial distance of more than 2.5 cm with respect to the primary vertex and a mass exceeding 6.5 GeV/c² are rejected.
• the candidates are rejected if the flight direction of each candidate is outside a cone of ∆R < 0.5 around the jet direction.
The efficiency of the secondary-vertex reconstruction is about 65%. In addition to the secondary vertex itself, CSV uses variables such as the significance of the flight distance in the transverse plane, the vertex mass, the number of tracks at the vertex, the ratio of the energy carried by the tracks at the vertex to that of all tracks in the jet, the pseudorapidities of the tracks at the vertex with respect to the jet axis, and the number of tracks in the jet.
The distributions of most of these variables differ significantly between c jets and other light-flavour jets. Therefore, two likelihood ratios, with different weights for the c-jet and light-jet backgrounds, are built from these variables and combined into a single discriminating variable. The CSV b-tag discriminator varies between 0 and 1.
Three working points are defined in this range, called loose (CSV discriminator > 0.244), medium (> 0.679) and tight (> 0.898), for which the mistag rates are 10%, 1% and 0.1%, respectively. The loose selection achieves a b-jet tagging efficiency of ≈ 80%, while the medium and tight selections achieve ≈ 55%. The CSV algorithm gives the best b-tagging performance at the medium and tight working points. In this dissertation, the medium working point is used to select b-tagged jets. Beyond the selection of b jets, the CSV discriminator is also a powerful variable for extracting signal events, as discussed in section 5.6.1.
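Applying a working point then reduces to a threshold on the discriminator; a small sketch using the thresholds quoted above (the dict and function names are illustrative):

```python
# CSV discriminator thresholds quoted in the text
CSV_WORKING_POINTS = {"loose": 0.244, "medium": 0.679, "tight": 0.898}


def count_btags(jet_csv_values, working_point="medium"):
    """Count jets passing a CSV b-tag working point.

    The analysis uses the medium point to select b-tagged jets; the
    b-jet count then drives the event categorization and the b-jet veto.
    """
    threshold = CSV_WORKING_POINTS[working_point]
    return sum(1 for csv in jet_csv_values if csv > threshold)
```

For instance, an event with jet discriminators (0.1, 0.7, 0.95) has two medium b tags but only one tight b tag.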

Missing Transverse Energy
The missing transverse energy is the magnitude of the vector momentum imbalance in the plane perpendicular to the beam direction, and is extracted from the data samples as described in [116].

Chapter 5 Analysis strategy
In the previous chapters, all the necessary pieces to search for anomalous top quark FCNC processes have been presented. In the following, they are used to perform a full analysis of the anomalous tqγ (q = u, c) interactions in the production of a single top quark in association with a photon in proton-proton collisions.

Signal modelling and generation
As discussed in section 2.3.2, a model-independent approach is followed by various experiments to search for signs of anomalous top quark FCNCs, and this approach is also adopted in this analysis. It is worth recalling the Lagrangian of the anomalous tqγ interactions from equation 2.22, with small modifications, where Q_t = 2/3 is the top quark electric charge and the other parameters are defined there. As discussed in section 2.6, this Lagrangian leads to the production of a top quark in association with a photon.
Signal events are generated with the PROTOS (PROgram for TOp Simulations) event generator [118], a leading-order generator for several new-physics processes. To remove the infrared divergence of the cross section caused by soft photons in the final state, a minimum threshold on the photon transverse momentum, P_T > 30 GeV, is applied at generator level. In the production of the signal events, the branching ratio of the top quark to a bottom quark and a W boson is assumed to be 100%, and the W boson is then allowed to decay only into a charged lepton (e, µ or τ) and a neutrino. The quadratic dependence of the cross section on the anomalous couplings, given the minimum cut on the photon P_T, can be written as σ(pp → tγ → lνb γ) = 29.86 |κ_tuγ|² pb (for κ_tcγ = 0). In this analysis we focus on the muonic decay of the W boson from the top quark. Because the u-quark density in the proton is larger than the ū density, anomalous top quarks are produced more often than anti-top quarks in the tuγ signal channel.
The partial decay width of the top quark through the flavour-violating interaction follows from this Lagrangian. For the numerical calculation we set m_t = 172.5 GeV, m_W = 80.419 GeV, s_W² = 0.234 and α = 1/128.92, from which the corresponding branching ratio is obtained. Note that PROTOS generates the signal samples at leading order. The full next-to-leading-order QCD corrections to the signal cross section have been calculated in [119]; the NLO corrections can enhance the total cross section by up to 40% at the LHC. Figure 5.2 shows the K factor σ_NLO/σ_LO as a function of the cut on the photon transverse momentum; the K factor decreases with an increasing cut. It has also been shown in [119] that the NLO corrections do not depend strongly on the scale choice, which makes the theoretical predictions stable. A K factor of 1.375 is used for both signal channels.
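Putting the quoted numbers together, the NLO-corrected signal cross section for a given coupling can be evaluated as follows (a sketch using the quadratic dependence and the K factor of 1.375 given above; the function name is illustrative):

```python
def signal_xsec_pb(kappa_tu, k_factor=1.375):
    """Signal cross section in pb for a given anomalous coupling.

    Uses the LO quadratic dependence quoted in the text,
    sigma = 29.86 * |kappa_tugamma|^2 pb (photon pT > 30 GeV, leptonic W,
    kappa_tcgamma = 0), scaled by the NLO K factor of 1.375.
    """
    return 29.86 * abs(kappa_tu) ** 2 * k_factor
```

Because the cross section scales with |κ|², an upper limit on the cross section translates directly into an upper limit on the coupling.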

Background simulated samples
The SM backgrounds that can mimic the signatures of the signal processes fall into two categories: those with a genuine prompt photon in the final state, and those with a fake photon. Each category can be further divided into processes with and without a top quark in the final state. The background processes containing a real photon are: • Wγ-jets and Zγ-jets: among the various diboson processes produced at hadron colliders, Wγ and Zγ have the highest rates. These two SM backgrounds contribute to the signal region when W → µν and Z → µ⁺µ⁻. Wγ and Zγ are produced at tree level through three mechanisms: initial-state radiation (ISR), where a photon is emitted by one of the incoming partons; final-state radiation (FSR), where a photon is radiated off one of the charged leptons from the vector-boson decay; and s-channel production via the triple gauge coupling (WWγ). These samples are generated with MadGraph [97].
• WWγ: associated triple-gauge-boson production.
• γ-jets: this SM background contributes to the final state when a muon from a jet is misidentified as an isolated tight muon. It is generated with Pythia [93] in different photon P_T bins.
• tγ and t̄γ: single top quark production through the s-channel, t-channel and tW channel with ISR and FSR photons is an irreducible background in this analysis; it is generated with MadGraph [97].
The backgrounds with a fake photon are:
• Dibosons: Pythia is used to model the diboson processes WW, WZ and ZZ. Their cross sections are calculated at NLO accuracy in [121].
• W-jets and Z/γ*(→ l⁺l⁻)-jets: for these two SM processes, inclusive samples with any number of jets are generated. The huge cross sections of these processes lead to a considerable contribution to the signal region through fake photons. The cross sections of the inclusive W-jets and Z/γ*(→ l⁺l⁻)-jets processes are calculated at NNLO accuracy in [122].
• tt and single top processes: an inclusive tt sample is generated with MadGraph, including all decay modes of the two W bosons from the top quark decays. Separate samples for single top and anti-top quark production are generated with Powheg [123]. The cross sections for tt and single top quark production are calculated at next-to-next-to-leading-logarithmic (NNLL) accuracy in [124].

NLO modelling of W γ-jets and Zγ-jets
For the theoretical SM cross sections, the MadGraph event generator is used to simulate the Wγ-jets and Zγ-jets processes. These cross sections are calculated at LO, so the LO prediction must be scaled to NLO.
For the Wγ-jets and Zγ-jets processes, the K factor is defined as a function of the transverse photon energy,

K(E_T^γ) = (dσ_NLO/dE_T^γ) / (dσ_LO/dE_T^γ),

where dσ_NLO/dE_T^γ and dσ_LO/dE_T^γ are the next-to-leading-order and leading-order differential cross sections, respectively.
The K factor for the Wγ-jets process is obtained using the Baur NLO generator [125], which calculates the Wγ-jets cross section including the NLO QCD corrections. The K factor versus the transverse momentum of the photon is presented in figure 5.3 (left). To obtain a functional form of the K factor, the distribution of the photon transverse momentum is fitted to a second-order polynomial, and the fitted coefficients are used to weight the simulated events. The same procedure is applied to calculate the K factor of the Zγ-jets process as a function of the photon E_T. An E_T^γ-dependent K factor is applied because the shape of the photon transverse energy is an important variable in this analysis. Events must contain at least one jet satisfying the conditions defined in section 4.8.
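The event-by-event reweighting can be sketched as below; note that the polynomial coefficients used here are hypothetical placeholders, not the fitted values from the analysis:

```python
def wgamma_k_factor(et_photon, coeffs=(1.8, -0.003, 2.0e-6)):
    """ET-dependent K factor K(ET) = a + b*ET + c*ET^2 used to scale a
    LO W-gamma event weight to NLO.

    The default coefficients are illustrative placeholders; the fitted
    values come from the second-order polynomial fit described in the text.
    """
    a, b, c = coeffs
    return a + b * et_photon + c * et_photon ** 2
```

Each simulated Wγ-jets event is then multiplied by `wgamma_k_factor(event_photon_et)` before filling histograms.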
Another useful handle to reduce the background contributions is the presence of a b jet in the signal final state: any event with more than one b-tagged jet is vetoed. This criterion suppresses the background from tt and ttγ events.
Finally, ∆R(l, γ) > 0.7 and ∆R(b-jet, γ) > 0.7 are required, to have well-separated objects and to reject FSR photons emitted from the high-p_T muon or from final-state partons.

Top quark reconstruction
Once the pre-selection is done, a supposedly signal-enriched sample has been selected from the data. An important feature of the signal is that the selected muon and b jet are the decay products of the top quark, while the photon recoils against the top quark. It is therefore essential to reconstruct the top quark from the physics objects, to better distinguish the signal from the SM backgrounds.
Before reconstructing the top quark, one needs to reconstruct the W boson from its decay products. It is assumed that the x and y components of the missing momentum are entirely due to the neutrino from the W boson decay, and the selected muon is assumed to be the other W decay product. The W boson mass constraint,

m_W² = (E_µ + E_ν)² − (p_µ + p_ν)²,

is used to find the z component of the neutrino momentum [126].
This quadratic equation has in general two solutions,

P_z,ν = (Λ P_z,µ / P_T,µ²) ± sqrt( Λ² P_z,µ² / P_T,µ⁴ − (E_µ² E_T^miss,2 − Λ²) / P_T,µ² ),  with  Λ = m_W²/2 + p_T,µ · E_T^miss.

The P_z,ν obtained from equation 5.7 has an imaginary part if the discriminant becomes negative. This can happen because of the finite resolution of the missing transverse energy, the lepton momentum resolution and the finite W boson width. If only imaginary solutions exist, the real part of the solution is taken, in order to retain these events. If there are two real solutions, the one with the smallest absolute value is chosen [127].
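The procedure just described can be sketched as follows (a minimal implementation of the quadratic W-mass constraint; variable names are illustrative):

```python
import math


def neutrino_pz(px_mu, py_mu, pz_mu, e_mu, met_x, met_y, m_w=80.4):
    """Neutrino longitudinal momentum from the W-boson mass constraint.

    The transverse neutrino momentum is identified with the missing
    transverse energy. The quadratic constraint m_W^2 = (p_mu + p_nu)^2
    yields two solutions; if the discriminant is negative (finite MET and
    lepton resolution, W width), the real part is kept, otherwise the
    solution with the smaller |pz| is chosen, following the text.
    """
    pt2_mu = px_mu ** 2 + py_mu ** 2
    lam = 0.5 * m_w ** 2 + px_mu * met_x + py_mu * met_y
    met2 = met_x ** 2 + met_y ** 2
    a = lam * pz_mu / pt2_mu
    disc = a ** 2 - (e_mu ** 2 * met2 - lam ** 2) / pt2_mu
    if disc < 0.0:
        return a  # complex solutions: keep the real part
    root = math.sqrt(disc)
    sol1, sol2 = a + root, a - root
    return sol1 if abs(sol1) < abs(sol2) else sol2
```

For a back-to-back W decay in the transverse plane with a central muon, the constraint correctly returns a vanishing longitudinal neutrino momentum.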
The W boson four momentum can be reconstructed after finding the P z,ν . Then one should assign a jet to reconstruct the top quark. A jet with the highest value for b-tag discriminator in each event is tagged as b-jet. The four momentum of the W boson and b-jet are used to reconstruct the top quark.

Main selection
The requirement on the number of b-tagged jets determines whether the tt or the Wγ-jets background dominates. The phase space defined by the pre-selection and the top-mass window cuts is called the signal region.

Multivariate analysis
One of the most important challenges in searches for rare signals in large data sets is to find suitable variables with high discrimination power after the pre-selection; a multivariate classifier based on boosted decision trees is therefore employed. There are different boosting algorithms, which use different prescriptions for updating the event weights at each training step and combine the trees in different ways. One of the most popular boosting algorithms is the so-called AdaBoost (adaptive boost) algorithm [129].
In the AdaBoost algorithm, the first tree is trained with the original event weights.
The event weights are then modified for the training of the next tree: the weights of misclassified events are multiplied by a common boost weight exp(α), where the factor α is derived from the fraction f of misclassified training events in the previous classifier as α = ln((1 − f)/f). The total event weight is renormalized such that the sum of weights remains constant.
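One such reweighting step can be sketched as follows (using α = ln((1 − f)/f) as in the standard algorithm; implementations often add a tunable learning-rate exponent, not shown here):

```python
import math


def adaboost_reweight(weights, misclassified):
    """One AdaBoost step: boost misclassified events and renormalize.

    f is the weighted misclassification fraction of the previous tree
    (assumed to satisfy 0 < f < 0.5); misclassified events are multiplied
    by exp(alpha) with alpha = ln((1 - f) / f), and the weights are then
    rescaled so their sum is unchanged.
    """
    total = sum(weights)
    f = sum(w for w, bad in zip(weights, misclassified) if bad) / total
    alpha = math.log((1.0 - f) / f)
    boosted = [w * math.exp(alpha) if bad else w
               for w, bad in zip(weights, misclassified)]
    norm = total / sum(boosted)
    return [w * norm for w in boosted], alpha
```

With one misclassified event out of four equal weights, f = 0.25, α = ln 3, and the boosted weights become (2, 2/3, 2/3, 2/3) after renormalization.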

Artificial neural networks
Artificial Neural Networks (ANN) are one of the oldest machine learning techniques, widely used in high-energy physics; many important physics results have been extracted with this method over the last decades. ANNs are layered networks of artificial neurons, each of which receives signals from other neurons and forms an output signal. One can therefore view the neural network as a mapping from a space of input variables to an output variable in the case of a classification problem. In the following, neural networks are briefly discussed, while a comprehensive overview can be found in [130]. The ANN is used in the background estimation procedure described in section 5 of this thesis. Each neuron performs a weighted sum of the incoming signals. For many problems a linear approximation is the most appropriate method, and it is also used in this analysis; other, nonlinear approximations may lead to low accuracy or lengthy computations. The neural network is trained on known signal and background points of the input variables: the weights W_ji are found such that, for given inputs x, the neural network yields a response close to 1 for signal-like events and 0 for background-like ones.
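As a sketch, the mapping performed by a single artificial neuron is a weighted sum of the incoming signals x_j with weights W_ji, passed through an activation that compresses the response into the (0, 1) classification range (a sigmoid here; the names are illustrative):

```python
import numpy as np

def neuron_output(x, w, bias=0.0):
    """Weighted sum of the incoming signals followed by a sigmoid activation,
    mapping the inputs to a response between 0 (background-like) and
    1 (signal-like)."""
    s = np.dot(w, x) + bias
    return 1.0 / (1.0 + np.exp(-s))
```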

Background estimation
After the full selection is applied, the dominant background comes from the SM Wγ-jets process. It can mimic the signal if the W boson decays to a muon and a neutrino in association with heavy-flavour jets. The contribution of Wγ-jets is estimated from data. In addition, final states containing a photon suffer from a significant, non-trivial background arising from jets faking photons, in this analysis mostly originating from the jets in W-jets events. A jet can be misidentified as a photon, also called a fake photon, if it fluctuates to one or two leading π0's, which decay via π0 → γγ, resulting in an electromagnetic object indistinguishable from a highly energetic photon. The probability of a jet faking a photon depends on whether the jet is in the ECAL barrel or endcap, and on the pT of that jet. As the models describing jet fragmentation are not accurately known in this new energy regime, we extract the contribution of the backgrounds with a jet faking a photon from data.

Data-driven estimation of the W − jets shape
Among the SM backgrounds which contribute to the defined signal region with a fake photon, the W-jets process has the largest cross section and is the most signal-like background. Therefore, we focus on estimating the contribution of this important background from data and rely on simulated events for the other backgrounds in this category. The main difference between the signal or the dominant background (Wγ-jets) and the W-jets process is the origin of the selected photon which has passed our tight selection criteria. We can use this feature to find a region very similar to the signal region which is populated by events with a photon originating from a jet.
A particular shower shape variable, denoted σ_iηiη and defined in section 4.5, measures the effective width of the photon supercluster in the η direction, where the sum runs over the 5 × 5 crystal matrix around the most energetic crystal in the SC. This variable is used in the photon identification to discriminate prompt photons against photon candidates that arise from the misidentification of jet fragments. The two photons from a π0 decay produced in jet fragments lead to wider showers in the ECAL compared to a single isolated prompt photon. An almost uniform distribution of fake photons in this variable is also expected. Therefore, this variable can be used to select fake photons when the cuts on σ_iηiη, listed in table 4.2, are reversed.
In order to estimate the contribution of the W-jets process, a control region, called the W-jets control region, is defined using the σ_iηiη variable. The W-jets control region is defined similarly to the signal region, while the photon candidates are selected by requiring σ_iηiη > 0.011 and σ_iηiη > 0.031 in the barrel and endcaps, respectively.
In order to remove the contribution of tt events in the W-jets control region, an extra condition is applied on the number of b-jets (N_b−jets = 0).
The distributions of the σ_iηiη variable in the W-jets control region for data and SM backgrounds are shown in figure 5.7. The W-jets control region is enriched in W-jets events, which are shown in blue. There are also some contaminations from the tt and Drell-Yan processes, which are considered as a source of systematic uncertainty in the background estimation procedure. The W-jets control region is almost free from signal contribution because signal events contain a prompt photon and a b-jet. It can be seen from figure 5.7 that a purer control region can be obtained by increasing the cut value on the σ_iηiη variable; however, the statistics would decrease. One expects no different behaviour for W-jets in the signal and control regions for many kinematic variables, because a jet is misidentified as a photon in both regions. From now on, the data events in the W-jets control region are used to estimate the W-jets shape for the distributions of a large number of variables in the signal region, after checking for similar behaviour of the W-jets sample in the signal and control regions using the W-jets MC sample.

Data-driven estimation of the W -jets and W γ-jets normalizations
We use a template fit method to estimate the normalizations of W-jets and Wγ-jets in the signal region. The idea is to divide the data into three components: W-jets, Wγ-jets and other backgrounds. Then a suitable variable is used to find the normalizations of the W-jets and Wγ-jets from the fit. The data can thus be parametrized as

Data(X) = C_W−j · S_W−j(X) + C_Wγ−j · S_Wγ−j(X) + b · B(X),    (5.14)

where X is an arbitrary kinematic variable. In equation 5.14, C_W−j and C_Wγ−j are the normalizations of the W-jets and Wγ-jets samples, respectively. These normalization factors are determined from the fit. The total number of predicted SM background events except for W-jets and Wγ-jets is denoted by b, which is obtained from the simulated samples normalized to the related cross sections and to an integrated luminosity of 19.76 fb−1. The probability distribution functions for the W-jets, Wγ-jets and the sum of the other backgrounds are given by S_W−j(X), S_Wγ−j(X) and B(X), respectively. These functions are obtained from
• The MC simulated sample of Wγ-jets for S_Wγ−j(X).
• The data sample in W -jets control region for S W −j (X).
• The sum of the MC simulated sample of all SM backgrounds except for the W -jets and W γ-jets for B(X).
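A minimal sketch of such a binned template fit (equation 5.14) with synthetic histograms, solving for the two free normalizations by least squares (the analysis uses a proper fit; all numbers and names here are illustrative):

```python
import numpy as np

# Illustrative binned templates (unit-normalized shapes) and the fixed b*B(X) term
s_wj  = np.array([0.4, 0.3, 0.2, 0.1])   # S_W-j(X), from data in the control region
s_wgj = np.array([0.1, 0.2, 0.3, 0.4])   # S_Wgamma-j(X), from simulation
bkg   = np.array([5.0, 5.0, 5.0, 5.0])   # b*B(X), other backgrounds from simulation

# Pseudo-data built with known normalizations C_W-j = 300, C_Wgamma-j = 500
data = 300.0 * s_wj + 500.0 * s_wgj + bkg

# Least-squares solution for (C_W-j, C_Wgamma-j) after subtracting the fixed part
templates = np.column_stack([s_wj, s_wgj])
coeffs, *_ = np.linalg.lstsq(templates, data - bkg, rcond=None)
```

Because the two templates peak in opposite tails of X, the fit can separate the two components; this is exactly why a discriminating variable is needed, as discussed next.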
In order to perform a reliable and stable template fit to estimate the unknown parameters, a variable which has a different template for each component is needed. Various distributions were tried to find the best variable to discriminate between the W-jets and Wγ-jets events. It was found that the cosine of the angle between the two gauge bosons (γ, W) behaves differently for W-jets and Wγ-jets. Although the cos(γ, W) variable shows a reasonable discrimination power, other variables were found to have almost similar power to separate the events of these two backgrounds. Therefore, the selected variables are combined through a neural network multivariate method to reach optimum separation between the W-jets and Wγ-jets backgrounds. It is observed that using the multivariate classification output decreases the fit errors significantly and leads to more stable results with respect to the systematic uncertainty variations compared to the cos(γ, W) variable. The neural network input variables are listed below:
• cos(W, γ): the γ and W boson are expected to be more back-to-back in Wγ-jets events compared to W-jets events.
• transverse momentum of the selected photon: prompt photons, produced in the hard interaction, are more energetic than a photon produced inside a jet from a π0 decay.
• transverse momentum of the selected b-jet.
• ∆φ(γ, MET): uses the balance of the γ and W boson in the transverse plane in the Wγ-jets process.
• H/E of the selected photon: although fake photons can pass our tight selections, they tend to show their jet characteristics in the distribution of the H/E variable.
As the signal events contain a prompt photon recoiling against the top quark, the distributions of cos(γ, W) and the other selected variables are very similar to those of the dominant Wγ-jets background. In addition, we know from previous experimental results that only a small number of signal events is expected in the signal region. Before performing the fit, one should make sure that W-jets predicts a similar shape for the neural network output in the signal and control regions, using W-jets MC simulated events; then the shape of the data sample in the W-jets control region can be used as S_W−j(X) in the fit procedure. Figure 5.12 shows the distribution of the neural network output for W-jets in the signal region and the control region. Various sources of systematic uncertainties can affect the fit results; in the following we investigate these effects.
A systematic uncertainty can arise from the definition of the control region. As discussed in the previous section, the cut on σ_iηiη can be varied to define a new control region. In order to estimate the systematic uncertainty due to the definition of the control region, we increase the cut on σ_iηiη in two steps of 5% and 10%. It is clear from figure 5.7 that the cut on σ_iηiη cannot be increased to higher values, because the remaining data events would not be statistically sufficient to perform the fit. The template fit is redone with the new W-jets shape obtained from each new control region to find the related uncertainty on the data-driven background estimation. The uncertainties on the fit results obtained from a 30% variation of the normalization of each of the other backgrounds are shown in table 5.5. It was discussed that our control region is enriched in W-jets events but contains small contributions from other backgrounds. In order to estimate this effect, the contribution of the other backgrounds is subtracted from data in the W-jets control region, a new W-jets shape is obtained, and the fit is redone. The effect is less than 7% on both the W-jets and Wγ-jets estimated numbers of events. This source of error is therefore not significant, and our control region is almost pure.

Data-driven estimation of the W γ-jets shape
The contribution of the Wγ-jets was estimated from data as discussed in the previous section. In order to distinguish signal events from this important background, we also need to know its shape accurately for several variables. Therefore, it is essential to extract these shapes directly from data.
In order to estimate the Wγ-jets shape for an arbitrary variable in the signal region, a side-band region of the data is used, after accounting for the W-jets contribution. To obtain the shape of W-jets from data in the side-band region, we use the data events in the side-band with the cut on the σ_iηiη variable reversed and no b-tagged jet required (W-jets control region, see section 5.4.1). The number of W-jets events in the side-band is normalized to its data-driven value in the signal region using the following equation:

N_out = N_in · (α_out / α_in),

where N_out is the number of W-jets events in the side-band and N_in is the number of W-jets events in the top mass window, which has been estimated from the template fit. The parameter α_in is the fraction of W-jets events in the top mass window and α_out = 1 − α_in; both are taken from data in the W-jets control region.
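Numerically, the side-band normalization amounts to scaling the data-driven in-window yield by the ratio of out-of-window to in-window fractions measured in the W-jets control region (the function name and numbers below are illustrative):

```python
def sideband_yield(n_in, alpha_in):
    """Scale the data-driven W-jets yield inside the top mass window (n_in)
    to the side-band, using the in-window fraction alpha_in measured in the
    W-jets control region: N_out = N_in * alpha_out / alpha_in."""
    alpha_out = 1.0 - alpha_in
    return n_in * alpha_out / alpha_in
```

For example, with 200 fitted in-window events and an in-window fraction of 0.8, the expected side-band yield is 50 events.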

Other backgrounds
The W γ-jets and W -jets backgrounds described above are the major backgrounds for the signal processes considered in this dissertation. Some background categories are not included in these data-driven estimations. These backgrounds contribute only a small number of events after the final selections or can be described well by MC simulation. The shape and normalization of all SM backgrounds except for W γ-jets and W -jets are estimated from simulation.

Event weights and DATA/MC comparison
In order to achieve a better agreement between the SM prediction from simulated events and measured data, several types of weights need to be applied on the MC events. In this section different event weights that have to be used in this analysis will be presented. Finally, the measured data is compared to the MC prediction.

Cross section
The number of events in a generated sample is independent of the event rate of a certain process in measured data. However, more events are needed for high-rate processes to describe the behaviour of the process accurately. Therefore, each event in an MC sample should be weighted according to the expected rate in the integrated luminosity of the dataset used. The weight is calculated as

ω = σ × L / N_generated,    (5.16)

where σ is the cross section of the related process, L is the total integrated luminosity and N_generated is the number of generated events.
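Equation 5.16 in code, with illustrative numbers (a 100 pb process, the 19.76 fb−1 dataset expressed in pb−1, and a hypothetical sample size):

```python
def mc_event_weight(xsec_pb, lumi_invpb, n_generated):
    """Per-event weight normalizing an MC sample to the expected yield:
    omega = sigma * L / N_generated  (equation 5.16)."""
    return xsec_pb * lumi_invpb / n_generated

# A 100 pb process with 10^6 generated events at L = 19760 pb^-1
w = mc_event_weight(100.0, 19760.0, 1_000_000)   # -> 1.976
```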

Pileup reweighting
As already stated in chapter 4, multiple proton interactions happen during one bunch crossing, and additional particles are produced that are not related to the single interaction we want to study. In order to account for the pileup noise, CMS generated MC events with a specific pileup distribution model for the 8 TeV simulated samples. Although the CMS pileup conditions are modelled appropriately, they need to be corrected to match the observed data. In figure 5.14, the number of pileup interactions in data and simulation are compared.
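The correction is a per-event weight given by the ratio of the normalized data and MC pileup distributions; a minimal sketch with illustrative histograms:

```python
import numpy as np

def pileup_weights(n_pu_mc_events, data_hist, mc_hist):
    """Per-event weights w(n_PU) = P_data(n_PU) / P_MC(n_PU), where both
    pileup distributions are normalized to unit area.

    n_pu_mc_events     -- number of pileup interactions for each MC event
    data_hist, mc_hist -- event counts per n_PU bin in data and MC
    """
    p_data = data_hist / data_hist.sum()
    p_mc = mc_hist / mc_hist.sum()
    # protect against empty MC bins
    ratio = np.divide(p_data, p_mc, out=np.zeros_like(p_data), where=p_mc > 0)
    return ratio[n_pu_mc_events]
```

After applying these weights, the MC pileup distribution reproduces the one observed in data by construction.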

B-tag discriminator reshaping
The correction factors applied to the simulation for the b-tagging and mis-tagging efficiencies need to be accounted for. As described in section 4.9, we use the CSV algorithm with the medium working point to tag b-jet candidates. Events containing more than one b-jet are discarded to suppress multi-b-jet background events.
It will be seen in the next section that the b-tag discriminator distribution is also an important variable to suppress the zero b-jet background events.
Therefore, we need to correct the CSV discriminant not only at the medium working point but over the whole range of the CSV discriminant. In order to correct the MC CSV discriminant to match the measured data, each CSV discriminant value in MC is mapped to the value at which the MC efficiency equals the efficiency measured in data. The MC efficiency is calculated as the fraction of jets with a CSV value above a given threshold.
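The matching of MC and data efficiencies over the full discriminant range amounts to a quantile mapping; a minimal numpy sketch under that interpretation (names and inputs are illustrative, not the CMS implementation):

```python
import numpy as np

def reshape_csv(mc_values, data_values):
    """Map each MC CSV discriminant value to the data value at the same
    efficiency (fraction of jets above the cut), i.e. quantile matching:
    the corrected MC value is the one for which the efficiency measured in
    data equals the original efficiency in MC."""
    mc_sorted = np.sort(mc_values)
    # efficiency of a cut at v in MC = fraction of MC jets above v
    eff_mc = 1.0 - np.searchsorted(mc_sorted, mc_values, side="left") / len(mc_sorted)
    # invert: the data discriminant value with the same efficiency
    return np.quantile(data_values, 1.0 - eff_mc)
```

The mapping is monotonic, so cuts on the corrected discriminant keep their ordering while reproducing the efficiencies measured in data.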

Other weights and correction factors
Additional scale factors are applied to the selected events in simulation to take into account further differences between data and MC; the muon scale factors are applied as a function of muon η in this analysis. The signal distribution is shown with a red line in all the histograms. It is clear that the photon pT is a very good discriminant between SM processes and signal events.

DATA/MC comparison
In figures 5.19 and 5.20, the distributions of the jet multiplicity, b-tag discriminant, cos(γ, W), ∆R(γ, b−jet), ∆R(γ, µ) and cos(top, γ) are shown. Among these variables, the b-tag discriminant shows a very good discriminating power between signal and background events. Figure 5.21 shows the charge distribution of the selected muon in data, SM backgrounds and signal. As was discussed in section 2.6, one of the important features of the tuγ signal channel is the asymmetry between top and anti-top production rates, which makes the anomalous production of the top quark more sensitive to the coupling characteristics compared to anomalous top decays. This feature will be employed to discriminate signal from backgrounds in the following sections.

Signal extraction
The agreement between the SM prediction and the measured data was tested in both the number of events and the shapes of several variables. Although some variables present a reasonable separation between signal and background events, better separation power can be achieved by combining these variables. In this section a multivariate technique is employed to search for a signal excess.

Training of BDT
In order to perform the MVA in this analysis, a Boosted Decision Tree (BDT) is trained using the TMVA framework. The BDT is trained for the tuγ and tcγ signal channels separately, using Wγ-jets, tt and di-boson simulated background events.
The following variables are chosen as input variables for tuγ and tcγ BDT training.

• jet multiplicity,
• cosine of the angle between the reconstructed top quark and the photon,
• muon charge (only for tuγ).
Photon, muon and b-jet properties are distinctive features of signal events in this analysis. The CSV discriminant nicely separates SM processes with one b-jet from those with no b-jet. Although ∆R cuts between the photon, b-jet and muon are applied in the pre-selection, these objects tend to be closer to each other in some SM processes.
The jet multiplicity can distinguish between multi-jet SM processes like tt and the signal.
Finally, the cosine of the angle between the reconstructed top quark and the photon encodes many properties of the signal events. The variables with the most discriminating power are the photon pT and the CSV discriminator. Input variables can contribute many times in the construction of the trees, and some are used more often because they separate signal events with high signal efficiency and high background rejection. Therefore, TMVA provides a variable importance list which shows how much a variable contributed to the BDT compared to the other variables. Table 5.7 shows the variable importance for the tuγ and tcγ signal channels. As expected, the transverse momentum of the selected photon and the CSV discriminant value of the selected b-jet play important roles in the BDT training.

BDT output
The final BDT output is a single discriminant ranging from −1 to +1, discussed earlier in equation 5.11. Events with higher (lower) BDT output values are signal-like (background-like). Therefore, one expects to see signal events gathered close to +1 while background events lie close to −1. As was discussed in section 5.3, a statistically independent MC sample should be used to test the BDT training. In figure 5.25, the BDT output distributions for the test and training samples are shown for both signal channels. There is good agreement between the BDT output distributions of the test and training samples.

Systematic uncertainties
Numerous sources of systematic uncertainty are associated with both the background estimations and the simulation of the signals. Three types of systematic effect are considered in this analysis: those that affect only the rates of signal or background processes, those that affect only the shapes of the BDT discriminants for signal or background processes, and those that affect both the rate and the shape. In the last case, the rate and shape effects are treated simultaneously, so that they are considered completely correlated. Below is a list of the systematic effects considered for this analysis: • Luminosity: The overall uncertainty on the integrated luminosity of the data used in the analysis is estimated based on cluster counting in the silicon pixel detector [133]. An uncertainty of 2.6% is applied to the signal and background rates, except for the backgrounds estimated with the data-driven method.
• Pileup re-weighting: The systematic uncertainty due to Pileup re-weighting is determined by varying the minimum-bias cross section used to calculate the  pileup re-weighting by 5% from the default value of 69.3 mb. New pileup weights are applied to determine the uncertainty on both the rate and shapes [134].
• Muon, photon and trigger scale factors: The scale factors used to take into account the muon, photon and trigger efficiencies differences between measured data and simulation are varied up and down by their statistical and systematic uncertainties [135,136]. These variations are small and vary the shape and rates of the signal and background processes slightly.
• Photon energy scale: The uncertainty on the nominal photon energy scale is taken to be 1% in the barrel and 3% in the endcap [136]. This uncertainty can vary both the shape and rate of background and signal processes; however, the effect on the shape is more significant because the photon pT is the most important input variable of the BDT.
• Cross sections: The expected yields of some of the background processes are derived from theoretical predictions. Uncertainties affecting these normalizations are conservatively taken to be 30%.
• Background estimations: As discussed in section 5.4.2, different sources of uncertainty influence the fit results. These errors are considered on the normalizations of the W-jets and Wγ-jets processes. The uncertainties on the data-driven background rates are calculated to be 17% and 23% on the Wγ-jets and W-jets rates, respectively.
• Uncertainty due to PDF: The systematic uncertainty on the signal cross section originating from the proton parton distribution functions is estimated using the PDF4LHC recommendation [138]. In this method the cross section of tγ production due to the anomalous tuγ and tcγ interactions is calculated with the 22 eigenvector variations of the CTEQ PDF sets as a function of the photon pT. Table 5.8 shows the PDF error in different photon pT bins, which can change both the shape and rate of the BDT output distribution of the signal.
• Signal NLO corrections: In section 5.1.1, the NLO k-factor is given as a function of the photon pT. An uncertainty of 5% is assumed on the k-factors reported in [119].
All systematic uncertainties discussed above are accounted for in the limit calculation via nuisance parameters, which are discussed in section 5.8. It is worth mentioning again that among the systematic uncertainties, the luminosity uncertainty only affects the normalization, while the uncertainties from the pileup, trigger, lepton and photon selection efficiencies, b tagging, and jet energy scale and resolution also affect the shape of the BDT discriminant output for signal or background.
The PDF, renormalization and factorization scales, and top quark mass uncertainties affect both the shape and the normalization of the BDT discriminant of signal. The uncertainty on the normalization of the SM backgrounds is considered to affect the rate of these processes.
The impact of the systematic uncertainties on the result of the analysis is quantified by their relative impact on the expected cross-section upper limit. For each uncertainty, the limit is derived with and without including the corresponding nuisance parameter in the fit. The relative variation is defined as

∆σ_exp = [σ_exp(without nuisance) − σ_exp(with nuisance)] / σ_exp(with nuisance).    (5.18)

The relative variation of the expected cross-section limit obtained for the various uncertainties is listed in table 5.9 for both the tuγ and tcγ channels. It can be seen from table 5.9 that removing sources of systematics leads to tighter upper bounds (negative values of ∆σ_exp) for most of the uncertainties, although not for all sources. This behaviour is mostly related to statistical fluctuations and the smallness of the variations due to certain systematic sources. The BDT output for the tuγ channel shows a more powerful discrimination between signal and backgrounds compared to the tcγ channel. Therefore, the shape and normalization variations of the SM backgrounds due to the systematic uncertainties have smaller effects in the tuγ channel, and a positive value of ∆σ_exp is consequently more probable.

Results
After estimating the contributions of all SM backgrounds, taking into account all the scale factors and training the BDT, the BDT output distributions for signal, backgrounds and data are shown in figure 5.27. As was discussed in section 5.3.1, independent BDTs are trained for the tuγ and tcγ signal channels. It can be seen in figure 5.27 that the tuγ and tcγ signal distributions are well separated from the SM background distributions.
Furthermore, the measured data is described well by the SM prediction in the whole range of the BDT output and there is no evidence for signal events in both channels.
The BDT output distributions of the data-driven backgrounds (W-jets and Wγ-jets) are included in the SM prediction. As no excess over the SM prediction is observed, we constrain the contribution of the signal processes and, consequently, the top quark FCNC anomalous couplings.

Limits
The results of these searches are published in reference [67]. The distributions of the BDT discriminant for data, SM backgrounds and signal are used to set upper limits on the signal cross sections. The limit setting procedure is performed using the CLs method implemented in the theta framework [139]. The effects of systematic uncertainties on the BDT discriminant templates are modelled by varying each of the systematic sources by ±1σ in simulation and re-deriving the templates of the BDT discriminant. The systematic uncertainties which only change the rate of a process are applied to the background and signal normalizations.
All systematic uncertainties are quantified as nuisance parameters with a Gaussian prior.
The 95% C.L. upper limits obtained from the shape analysis of the BDT output distributions for the tuγ and tcγ signals are summarized in tables 5.10 and 5.11. The limit setting procedure at leading order and next-to-leading order is performed including the systematic uncertainties discussed in section 5.7, while the uncertainty on the signal NLO corrections is removed for the LO calculations.
Figure 5.28: The 95% confidence level exclusion limits in terms of the anomalous couplings tuγ (top) and tcγ (bottom). The expected upper limit on σ_tqγ × Br(W → lν_l) is shown with a dashed line accompanied by one and two sigma error bands. The observed limit is shown with a black line. The red curve shows the theoretical cross section as a function of the anomalous coupling.
As was discussed in section 5.1.1, the cross section of the anomalous production of single top quark in association with a photon is a function of anomalous couplings.
Therefore, the upper limits on the cross sections are used to constrain the tuγ and tcγ anomalous couplings. In order to calculate the bounds at NLO, it is assumed that the k-factor increases the cross section by a factor of 1.375 for both signal channels. It can be seen in tables 5.10 and 5.11 that the upper limits obtained on the tuγ anomalous coupling and the related branching ratio are stronger than the corresponding tcγ limits. This is due to the larger cross section of the tuγ signal compared to tcγ, which is directly related to the larger PDF of the up quark compared to the c quark in the proton.
Therefore, the anomalous production of a single top quark in association with a photon is more sensitive to the tuγ anomalous coupling. As was discussed in section 5.17, this feature enables us to discriminate between the anomalous tuγ and tcγ interactions. The upper bounds on the branching ratios obtained in this analysis improve the previous limits by around two orders of magnitude and are the most stringent limits to date.

Chapter 6 Fiducial cross sections
The limits obtained in this analysis, given in the previous section, depend on our specific model, which predicts the existence of anomalous tqγ FCNC couplings. In this section, we provide upper limits that are as model independent as possible, to allow comparison to an arbitrary theoretical prediction.

Motivations
The results of this analysis, which lead to upper limits on the cross section of anomalous single top quark production in association with a photon (σ95%_tqγ) multiplied by the leptonic branching ratio of the W boson decay (1/3), are first calculated in a defined signal region. Events in which a muon, a photon and an invisible particle (for example a neutrino) are produced in association with QCD jets are selected to define a region in the total phase space of pp collisions called the signal region. The upper limits obtained in the signal region (restricted phase space) are then extrapolated to the total phase space using information extracted from the signal models. Although our main purpose is the search for anomalous top quark FCNC couplings, the restricted phase space can be used to test other new physics models which lead to similar final state objects.
In order to see the dependence of the results on the tqγ FCNC model clearly, we define new variables. The 'visible cross section' is defined as

σ95%_vis = N95% / L,    (6.1)

where σ95%_vis is the 95% CL upper limit on the cross section of any new physics signal inside the signal region, N95% is the 95% CL upper limit on the number of new physics events inside the signal region and L is the integrated luminosity.
In our specific FCNC model, σ95%_vis can be written as

σ95%_vis = σ95%_tqγ × Br(W → lν_l) × A,    (6.2)

where A is the signal selection efficiency that accounts for the effect of the selection cuts and the detector inefficiencies and resolutions. The signal selection efficiency is the ratio of the number of events remaining after all selection cuts to the total number of events before applying the cuts, using the fully leptonic samples discussed in section 5.
A = (number of events remaining after all selection cuts) / (total number of events)    (6.3)

A is found to be 1.86% and 2.42% for the tuγ and tcγ samples, respectively (see table 5.6). Therefore, one can write σ95%_tqγ × Br(W → lν_l) = σ95%_vis / A. It is clear that the upper limits on σ95%_tqγ × Br(W → lν_l) given in table 5.10 depend on the new physics model through the factor A, whereas σ95%_vis depends only on the number of background events, the number of data events and the uncertainty on the number of background events in the signal region, if we consider a simple cut-and-count analysis (the uncertainty on the signal selection efficiency can also vary σ95%_vis slightly, which can be accounted for easily), and is independent of new physics model effects.
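The extrapolation logic of equations 6.1 to 6.3 can be sketched as follows (function names and the example yield are illustrative; A = 1.86% is the tuγ acceptance quoted above):

```python
def visible_xsec_limit(n95, lumi):
    """sigma_vis^95 = N^95 / L  (equation 6.1): model-independent upper limit
    on the cross section of any new physics signal inside the signal region."""
    return n95 / lumi

def total_xsec_limit(sigma_vis, acceptance):
    """Extrapolate the visible limit to the total phase space of a specific
    model: sigma^95 x Br = sigma_vis^95 / A."""
    return sigma_vis / acceptance

# Hypothetical example: 20 allowed events in 19.76 fb^-1, tuγ acceptance 1.86%
sigma_vis = visible_xsec_limit(20.0, 19.76)        # in fb
sigma_tot = total_xsec_limit(sigma_vis, 0.0186)    # in fb
```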
As mentioned before, although the signal region defined in this analysis is optimised for the anomalous production of a top quark in association with a photon through FCNC interactions, this signal region can be used to test the signatures of other new physics models which lead to processes with a muon, a neutrino, a b-jet and a photon in the final state. In other words, we should remove the dependence of the limits on the A factor as much as possible, to make the results as model independent as possible. Therefore, σ95%_vis is a model-independent quantity which, if reported in experimental particle physics papers, can be used to constrain any arbitrary new physics model.
One should note that the detector-level objects, reconstructed from the detector response in various parts of the CMS detector, are used to define the signal region. Although the reconstructed objects originate from generated particles, they are subject to many complicated detector effects, which are known to a reasonable extent and are accounted for by the full simulation in this analysis. Therefore, if a phenomenologist wants to test a new physics model with the same final state (a muon, a photon, MET and jets) using σ95%_vis, it is necessary to redo this complicated detector simulation to extract the information (A) needed for extrapolating the results from the signal region to the total phase space and to find the upper limit on the inclusive cross section of the new physics model (σ95%_new−physics = σ95%_vis / A). For example, in [140] the authors develop a global analysis at NLO in QCD of the most constraining limits on top-quark FCNC operators. In that paper many experimental results are examined to constrain the top-quark FCNC couplings in different final states; one of the important final states is the single top and photon production channel. If the experimental results are presented in a way that can easily be used by phenomenologists, the final state can additionally be employed for checking different effects on the same signal (NLO effects, generator effects, ...) or for testing other signal models.
Although σ95%_vis is widely reported in experimental papers for various final states and is widely employed by phenomenologists to constrain the parameter space of new physics models, the complication of the detector simulation and unclear detector inefficiencies make the interpretation of the results somewhat vague [45,141]. The most straightforward way to use experimental results is through fiducial cross sections. The idea is to report the measured cross section (or the upper limit on the cross section), similarly to σ_vis, in a 'simple' restricted phase space. In addition, it is desirable to 'remove detector effects' to avoid the complex detector simulations.

Fiducial phase space and cross section definitions
As noted in the previous subsection, the model dependence of the results enters through the factor A, which includes the effects of both the selection cuts and the detector. The dependence on the selection cuts can be reduced significantly by defining a restricted phase space similar to the reconstructed signal region: the more similar the restricted phase space is to the reconstructed signal region, the less model dependent the results.
The restricted phase space can be defined at different levels of the event simulation. It could be defined at parton level, using the hard-interaction output particles that are readily available from the matrix element generator; in that case, the effects of final state radiation and hadronisation would also have to be removed from the results.
On the other hand, final state radiation and fragmentation effects are easily available through general purpose event generators like PYTHIA and HERWIG [92,93].
Events after final state radiation and fragmentation contain the list of particles that pass through the detector; events at this step are called particle-level events. Therefore, the restricted phase space can be defined at particle level without losing the simplicity of applying the results.
The fiducial phase space is defined by applying a set of selection cuts to particle-level events. Before listing the fiducial phase space requirements, we need to define the particle-level objects: photons, muons, jets, etc. Photons and leptons are required not to originate from the decay of a hadron, so lepton candidates come from W boson, Z boson or τ decays; electrons or muons from τ decays must satisfy the same requirements as prompt leptons. The missing transverse momentum is defined as the magnitude of the vectorial sum of the transverse momenta of all neutrinos present in the event, rejecting neutrinos from hadron decays. Jets are reconstructed from all particles with cτ > 10 mm, excluding muons and neutrinos, using the anti-kt algorithm with a radius parameter of 0.5. Particle-level jets are b-tagged by B-hadron matching: a jet is tagged if at least one B hadron is found within it.
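The particle-level MET definition above can be sketched as follows. The flat particle record (PDG id, px, py, hadron-ancestor flag) is a hypothetical format for illustration, not the analysis' actual data structures:

```python
import math

# PDG codes of the three neutrino flavours
NEUTRINO_IDS = {12, 14, 16}

def particle_level_met(particles):
    """Magnitude of the vector sum of neutrino transverse momenta,
    rejecting neutrinos that originate from hadron decays.

    particles: iterable of (pdg_id, px, py, from_hadron_decay) tuples
    (a hypothetical minimal event record)."""
    mex = mey = 0.0
    for pdg, px, py, from_hadron in particles:
        if abs(pdg) in NEUTRINO_IDS and not from_hadron:
            mex += px
            mey += py
    return math.hypot(mex, mey)
```

For example, an event with one prompt muon neutrino of (30, 40) GeV, one neutrino from a hadron decay and a muon yields a particle-level MET of 50 GeV, since only the prompt neutrino enters the sum.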
Top quark candidates are reconstructed from the muon, MET and b-jet candidates as described in section 5.2.2. If no b jet is found, the highest-p_T jet is used to reconstruct the top quark.
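Section 5.2.2 gives the actual reconstruction procedure; a common ingredient of such top reconstructions, shown here as a hedged sketch, is solving for the unmeasured neutrino longitudinal momentum by constraining the muon-plus-neutrino invariant mass to the W boson mass (80.4 GeV assumed). The choice of the smaller-|pz| root and the treatment of complex solutions are conventions, not necessarily those of this analysis:

```python
import math

MW = 80.4  # assumed W boson mass in GeV

def neutrino_pz(muon, met):
    """Solve m(mu, nu) = MW for the neutrino longitudinal momentum.

    muon = (px, py, pz, E) for a (near-)massless muon; met = (mex, mey).
    The W-mass constraint gives a quadratic in pz; the smaller-|pz| real
    solution is returned, and for complex solutions the real part is kept."""
    px, py, pz, e = muon
    mex, mey = met
    pt2 = px * px + py * py
    # mu = mW^2/2 + pT(l).pT(nu)
    mu = 0.5 * MW * MW + px * mex + py * mey
    a = mu * pz / pt2
    disc = a * a - (e * e * (mex * mex + mey * mey) - mu * mu) / pt2
    if disc < 0.0:
        return a  # complex solutions: keep the real part
    root = math.sqrt(disc)
    return min(a + root, a - root, key=abs)
```

By construction, the returned pz makes the muon-neutrino invariant mass close on MW whenever the discriminant is non-negative.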
The selected particle-level candidates are used to define the fiducial phase space, which is summarized in table 6.1. The fiducial phase space is chosen to be as similar as possible to the reconstructed-level signal region introduced in section 5, while cuts on variables that are difficult to define at particle level, such as isolation requirements, are removed.
In order to find the upper limit on the cross section of new physics in the fiducial phase space defined in table 6.1, one needs a map from the reconstructed-level signal region, in which all analysis steps are performed, to this nearly identical fiducial phase space. The signal selection efficiency for an arbitrary signal model can be written as the product

signal selection efficiency = A × ε (6.5)

where A is the fraction of generated signal events at particle level that pass the fiducial phase space requirements of table 6.1. It is defined as

A = (number of events in the fiducial region) / (total number of events) (6.6)

In equation 6.5, ε accounts for the difference between the fiducial phase space and the reconstructed-level signal region. Owing to the similarity of the fiducial phase space and the signal region definitions, ε is mostly related to detector inefficiencies and resolutions.
Although ε appears to be independent of the input signal model, the signal characteristics within the defined fiducial phase space can still affect ε [142]. For example, a signal model with photons and jets produced mostly in the barrel experiences a different reconstruction efficiency than a new physics signal model with photons and jets produced mostly in the endcaps. A and A × ε can be calculated using particle-level and fully simulated samples, respectively. Therefore ε, which is also denoted ε_fid, can be written as

ε_fid = (A × ε) / A (6.7)

We can rewrite equation 6.4 using equation 6.5:

σ95%_tqγ × Br(W → lν_l) = σ95%_vis / (A × ε) (6.8)

A is different for each signal model, while ε and σ95%_vis are mostly model independent.
Therefore the above equation can be rewritten in the form

σ95%_tqγ × Br(W → lν_l) × A = σ95%_vis / ε (6.9)

The quantity σ_vis / ε_fid is called the fiducial cross section, σ_fid; it is independent of the input signal model to a reasonable extent, and all detector effects are removed.
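The chain from the visible (detector-level) limit to the fiducial one can be sketched as follows; the event counts, efficiency and limit are placeholder values for illustration:

```python
def fiducial_limit(sigma_vis_95, n_fiducial, n_generated, a_times_eps):
    """Convert a visible upper limit into a fiducial one.

    n_fiducial / n_generated is the particle-level acceptance A (eq. 6.6);
    a_times_eps is the detector-level selection efficiency A*eps;
    eps_fid = (A*eps)/A (eq. 6.7) and sigma_fid = sigma_vis/eps_fid (eq. 6.10)."""
    a = n_fiducial / n_generated      # acceptance A
    eps_fid = a_times_eps / a         # fiducial efficiency
    return sigma_vis_95 / eps_fid     # fiducial cross-section limit

# e.g. 500 of 1000 generated events in the fiducial region (A = 0.5) with a
# detector-level efficiency A*eps = 0.25 give eps_fid = 0.5, doubling the limit
example = fiducial_limit(10.0, 500, 1000, 0.25)
```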
σ95%_fid = σ95%_vis / ε_fid (6.10)

The way a phenomenologist can use σ_fid for a given fiducial phase space to constrain the parameter space of an arbitrary new physics model with a similar final state is summarized below.
• find the cross section of new physics model in pp collisions as a function of the model parameters (σ new−physics ).
• generate events and include radiation and hadronization effects.
• apply fiducial phase space requirements and find factor A.
• use σ_new-physics(model parameters) = σ95%_fid / A to find the constraint on the new physics parameters.
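The recipe above can be sketched as follows. The model, its normalization and the acceptance value are placeholders; a real reinterpretation would compute A from generated events after radiation and hadronisation:

```python
def sigma_new_physics(kappa, sigma_ref=1000.0):
    """Hypothetical model: cross section (fb) scaling with coupling squared."""
    return sigma_ref * kappa ** 2

def excluded(sigma_model, a_fiducial, sigma_fid_95):
    """A parameter point is excluded when its predicted rate in the fiducial
    region, sigma_model * A, exceeds the fiducial upper limit."""
    return sigma_model * a_fiducial > sigma_fid_95

# scan coupling values against e.g. a 47 fb fiducial limit, assuming A = 0.4
excluded_points = [k for k in (0.1, 0.3, 0.5, 0.9)
                   if excluded(sigma_new_physics(k), 0.4, 47.0)]
```

In this toy scan, couplings of 0.5 and 0.9 predict fiducial rates above 47 fb and are excluded, while 0.1 and 0.3 survive.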

Upper limits on fiducial cross sections
In this section the upper limit on σ_fid is reported. In section 5, the signal region was optimised for single top production in association with a photon, and the SM contributions from different sources were estimated. In the defined detector-level signal region, the BDT output distributions of the SM background, the tqγ signal and the observed data are shown in figure 5.27. Upper limits on σ_tqγ × Br(W → lν_l), obtained from a shape analysis of the BDT outputs, are given in equation 6.11. One would expect σ95%_vis-observed to be independent of the signal model in a given signal region (assuming the same uncertainty on the signal selection efficiency), but the obtained values of σ95%_vis-observed(tuγ) and σ95%_vis-observed(tcγ) are not the same. The reason is that a shape analysis is used to set the upper limits, and the limits are sensitive to the shape of the BDT output for the signal and the SM backgrounds: the better the separation between SM backgrounds and signal events, the more stringent the upper limit on σ95%_vis. For example, the tuγ signal is better separated from the SM backgrounds than the tcγ signal, and consequently σ95%_vis-observed(tuγ) is more stringent than σ95%_vis-observed(tcγ). As discussed, σ95%_vis-observed(tuγ) and σ95%_vis-observed(tcγ) cannot be used for testing new physics models; on the other hand, they may be used for testing different aspects of the tqγ processes, such as next-to-leading-order effects on the tqγ FCNC processes [140]. Therefore, in addition to the shape-dependent upper limits given in equation 6.11, we perform a counting analysis in the signal region to remove the dependence on the BDT shape.
In the signal region there are 1794 data events and 1805.44 ± 215 expected background events. One can find the upper limit on the visible cross section by performing a simple counting analysis, considering a 10% uncertainty on the signal selection efficiency and using the CLs method. Although this bound could be useful, it is rather loose because of the loose selection cuts used to define the signal region, and the analysis suffers from large systematic uncertainties; in the shape analysis their impact is reduced by the excellent signal discrimination power of the BDT. In order to report more useful results, an additional requirement is imposed to define a tighter signal region. Events with no b-tagged jets are kept in order to control the W+jets and Wγ+jets contributions from data with sufficient statistics; after determining their contributions in data, as discussed in section 5, we select events with exactly one identified b jet. The total numbers of background events and data events for both signal regions are reported in table 6.2; the uncertainties in the SM expectation include both statistical and systematic uncertainties. The total number of observed events decreases by a factor of approximately 6.5 after requiring exactly one identified b jet, while the expected number of SM events decreases by a factor of 7. The combined relative uncertainty in the number of expected SM events increases from 12% to 19% when this b jet requirement is included. The upper limits on the fiducial cross sections, the visible cross sections, A, A × ε and ε_fid are summarized in table 6.3 for the shape analysis of the BDT output distributions and for the cut-and-count analysis in the two signal regions. Observed upper limits on the cross section in the restricted phase space are found to be 122 fb and 102 fb at 95% CL for tuγ and tcγ production, respectively, when at most one identified b jet is required in the data.
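To illustrate the counting analysis, the sketch below implements the CLs construction for a simple counting experiment. It deliberately omits systematic uncertainties (the analysis itself folds in the 10% signal efficiency uncertainty, which loosens the limit), so it is a toy, not a reproduction of the quoted result:

```python
import math

def poisson_cdf(n, lam):
    """P(N <= n) for a Poisson mean lam, summed in log space for stability."""
    return sum(math.exp(k * math.log(lam) - lam - math.lgamma(k + 1))
               for k in range(n + 1))

def cls_upper_limit(n_obs, b, alpha=0.05, step=0.01):
    """Smallest signal yield s with CLs = CL(s+b) / CL(b) below alpha,
    found by scanning s upward; no systematic uncertainties included."""
    clb = poisson_cdf(n_obs, b)
    s = 0.0
    while poisson_cdf(n_obs, s + b) / clb >= alpha:
        s += step
    return s
```

For zero observed events the construction reproduces the familiar limit of about three signal events at 95% CL, independent of the background expectation, since the background factor cancels in the CLs ratio for n_obs = 0.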
These limits are found to be 47 fb and 39 fb at 95% CL for tuγ and tcγ production, respectively, when exactly one identified b jet is required in the data. Table 6.3: Model independent results in two fiducial regions: (1) the region determined by the 'basic selection cuts' of this analysis; (2) the region determined by the 'basic selection cuts' of this analysis plus the requirement of exactly one b-tagged jet. In fiducial region 1, two different methods are used to set limits: a shape analysis and a counting analysis. The variables are: the upper limits on the number of new physics events and on the cross section (N95%_observed and σ95%_vis-observed) in the fiducial region at detector level, the signal selection efficiency at detector level (A × ε) and at particle level (A), the fiducial efficiency (ε_fid), and the upper limit on the cross section of the new physics model in the fiducial region at particle level (the fiducial cross section σ95%_fid-observed).
The increase in the centre-of-mass energy enhances the cross section of anomalous tγ production by a factor of 2.5 and 3.1 for the tuγ and tcγ signal channels, respectively. With this increase in cross section, the same exclusion limit is expected to be reached with half of the data used in this analysis. However, the higher luminosity in Run 2 of the LHC leads to more pile-up, which would affect the analysis reach.