Maximising the DUNE early physics output with current experiments

The Deep Underground Neutrino Experiment (DUNE) is a proposed next-generation superbeam experiment at Fermilab. Its aims include measuring the unknown neutrino oscillation parameters -- the neutrino mass hierarchy, the octant of the mixing angle $\theta_{23}$ and the CP-violating phase $\delta_{CP}$. The current and upcoming experiments T2K, NOvA and ICAL@INO will also be collecting data for the same measurements. In this paper, we explore the sensitivity reach of DUNE in combination with these other experiments. We evaluate the least exposure required by DUNE to determine the above three unknown parameters with reasonable confidence. We find that in each case, the inclusion of data from T2K, NOvA and ICAL@INO helps to achieve the same sensitivity with a reduced exposure from DUNE, thereby economizing the configuration. Further, we quantify the effect of the proposed near detector on systematic errors and study the consequent improvement in sensitivity. We also examine the role played by the second oscillation cycle in furthering the physics reach of DUNE. Finally, we present an optimization study of the neutrino-antineutrino running of DUNE.


I. INTRODUCTION
Ever since their discovery, neutrinos have puzzled physicists. In the process of answering the questions they raise, we have gained new insights into the world of elementary particles. While some of the mysteries involving neutrinos have been solved, others are still being worked on.
The mixing of neutrinos leading to neutrino oscillations was confirmed by the Super-Kamiokande experiment [1] more than a decade ago. In the years since, we have measured most of the neutrino oscillation parameters to some precision. Solar neutrino experiments like SNO and the reactor neutrino experiment KamLAND [2] have measured the solar oscillation parameters $\theta_{12}$ and $\Delta_{21}$ ($\equiv m_2^2 - m_1^2$) quite precisely. The atmospheric parameters $\theta_{23}$ and $|\Delta_{31}|$ ($\equiv |m_3^2 - m_1^2|$) have been measured by Super-Kamiokande, MINOS and T2K [3][4][5]. The smallest mixing angle, $\theta_{13}$, has been measured quite recently by the reactor neutrino experiments Double Chooz, Daya Bay and RENO [6][7][8]. The combined fit to world neutrino data significantly constrains most of the oscillation parameters today [9][10][11].
Some quantities, however, still remain unmeasured. The sign of the atmospheric mass-squared difference $\Delta_{31}$ is currently unknown. The case with $m_3 > m_1$ ($m_3 < m_1$) is called the Normal (Inverted) Hierarchy, or NH (IH). The octant in which the atmospheric mixing angle lies is another unknown. If $\theta_{23} < 45^\circ$ ($\theta_{23} > 45^\circ$), then $\theta_{23}$ is said to lie in the Lower (Higher) Octant, or LO (HO). Finally, the value of the CP-violating phase $\delta_{CP}$ is completely undetermined, with the whole range $0-2\pi$ allowed at $3\sigma$ C.L.; however, the best-fit value is $\sim 251^\circ$ [11], driven mainly by T2K data. There are many other fundamental questions, such as the absolute masses of the neutrinos and their Dirac/Majorana nature. However, these cannot be probed by neutrino oscillation experiments.
The primary task before the current and next generations of neutrino oscillation experiments is, therefore, to measure the unknown parameters (the mass hierarchy, the octant of $\theta_{23}$ and $\delta_{CP}$) and to put more precise constraints on the values of the known ones. These goals can be achieved by experiments that probe the $\nu_\mu \to \nu_e$ and $\nu_\mu \to \nu_\mu$ oscillation channels at scales relevant to the atmospheric mass-squared difference. The operational superbeam experiments T2K [12] and NOνA [13] are the two current-generation long-baseline experiments. Atmospheric neutrino experiments like ICAL@INO [14], with a magnetized iron calorimeter detector, can also throw light on the above issues. The combined capabilities of the long-baseline experiments T2K and NOνA and the atmospheric neutrino experiment ICAL have been discussed extensively; see, e.g., Refs. [15][16][17][18][19][20][21][22][23][24][25][26][27][28][29][30][31].
The main obstacle to determining the oscillation parameters is the problem of parameter degeneracy [15,16,[32][33][34], i.e. two different sets of oscillation parameters giving the same value of the probability. Therefore, in the degenerate parts of the parameter space, it is difficult for any one experiment to measure all the unknown parameters. Depending on the values of the oscillation parameters in nature, the current and upcoming experiments may be able to measure one or more of the unknown parameters over the next few years. If not, we will need next-generation facilities with enhanced capabilities to achieve the same goals. The LBNE experiment [35] in the United States and the LBNO experiment [36] in Europe are two of the proposals for such a facility. Many studies have explored the physics reach of these experiments [37][38][39][40][41][42][43]. Recently, there have been discussions on converging these different proposals into a unified endeavour: a long-baseline experiment using a Megawatt beam from Fermilab. The proposed detector is a modular 40 kton liquid argon detector at the Sanford Underground Research Facility in South Dakota. This is called the Long-Baseline Neutrino Facility (LBNF) [44]. The first phase of this will be a 10 kton detector. There are also proposals for future atmospheric neutrino experiments, such as Hyper-Kamiokande [45], which is a water Cherenkov detector, and PINGU [46], which is a multi-Megaton ice detector using the Cherenkov technique. Some phenomenological studies involving these experiments have been presented in Refs. [47,48].
By the time the next-generation experiments start collecting data, we will also have information from the current generation of experiments NOνA and T2K, and from the upcoming ICAL experiment. It is therefore pertinent to ask what is the minimum amount of information needed from LBNE or LBNO in light of this data.
This question was addressed in Ref. [41] in the context of the LBNO experiment. It was shown that there exists a synergy between experiments and channels, because of which the combined analysis of many experiments gives very good sensitivity. In this work, we carry out a similar analysis for LBNE. We determine the most conservative specifications that this experiment needs, in order to measure the remaining unknown parameters to a specified level of precision.
In Section II, we discuss the configurations of the experiments considered in this work. The next section explores the question posed above: determining the minimal or 'adequate' configuration required for LBNE in order to determine the unknown parameters. We then discuss the effect of systematics in Section IV and the significance of the second oscillation maximum for LBNE in Section V. Finally, in Section VI, we optimize the neutrino-antineutrino running of LBNE to get the best possible results.

II. SIMULATION DETAILS
Among the current generation of neutrino oscillation experiments, in this work we consider NOνA, T2K and ICAL@INO. NOνA and T2K are currently operational, while ICAL@INO is still under construction. The precise configuration of LBNE is still being worked out, and in this work we allow its specifications to vary. We have simulated the long-baseline experiments using the GLoBES package [49,50] along with its auxiliary data [51,52].
The T2K experiment in Japan shoots a beam of muon neutrinos from J-PARC to the Super-Kamiokande detector in Kamioka, through 295 km of earth. This experiment will run with a total integrated beam strength of around $8 \times 10^{21}$ pot (protons on target). The specifications used for this detector are as given in Refs. [12][53][54][55]. We assume in our study that T2K will run only in the neutrino mode with the above pot. The T2K collaboration also has plans to run in the antineutrino mode; for the advantages of neutrino vis-à-vis antineutrino runs, we refer to Ref. [56]. The NOνA experiment at Fermilab takes neutrinos from the NuMI beam, with a beam power of 0.7 MW. The planned run of this experiment is for 6 years, divided into 3 years in the neutrino and 3 years in the antineutrino mode. The neutrinos are intercepted at the TASD detector in Ash River, 812 km away and 14 milliradians off the beam axis. The off-axis nature of this experiment helps impose cuts to reduce the neutral-current background. After the measurement of the moderately large value of $\theta_{13}$, the event selection criteria were re-optimized with the intention of exploiting higher statistics [25,57]. We have used this new configuration for the NOνA experiment in our work.
ICAL@INO is a magnetized iron detector for observing atmospheric neutrinos [14]. Magnetization allows for a separation of $\mu^+$ and $\mu^-$ events, and hence a distinction between neutrinos and antineutrinos. The total exposure taken for this experiment is 500 kton-yr, i.e. 10 years of data collection with a 50 kton detector. We assume an energy resolution of 10% and an angular resolution of $10^\circ$ for the neutrinos in the detector. These give results comparable to the muon analysis [26,58] that has been developed by the INO collaboration. The new '3-d analysis', which also includes hadronic energy information [59], is expected to give better results. The statistical procedure followed in calculating the sensitivity of this experiment is the treatment outlined in Ref. [60].
LBNE is a proposed next-generation neutrino oscillation experiment. The beam of neutrinos will travel 1300 km from Fermilab to a liquid argon detector at SURF, South Dakota. The flux of neutrinos for LBNE [61] has a wide-band profile. The higher energies available, along with the long baseline, mean that the neutrinos will experience greater matter effects than in NOνA or T2K. Two options are being considered for the proton beam: 80 GeV and 120 GeV. For a given beam power, a higher proton energy implies fewer protons on target per unit time, and hence a lower flux of neutrinos. In this work, we have chosen the 120 GeV beam, which gives us a lower flux of neutrinos and hence a conservative estimate in our results. The specifications for the liquid argon detector have been taken from Ref. [62]. If the LBNE detector is built underground, it will also be possible for it to observe atmospheric neutrinos. In this work, we have not considered this possibility. A detailed study of atmospheric neutrinos at LBNE is presented in Ref. [42].
The sensitivity of LBNE to the mass hierarchy, the octant of $\theta_{23}$ and $\delta_{CP}$ comes primarily from the $\nu_\mu \to \nu_e$ oscillation probability $P_{\mu e}$. An approximate analytical formula for this probability can be derived perturbatively [63][64][65] in terms of the two small parameters $\alpha = \Delta_{21}/\Delta_{31}$ and $\sin\theta_{13}$:
$$
P_{\mu e} \simeq 4\sin^2\theta_{13}\sin^2\theta_{23}\,\frac{\sin^2[(1-\hat{A})\Delta]}{(1-\hat{A})^2}
+ \alpha\,\sin 2\theta_{13}\sin 2\theta_{12}\sin 2\theta_{23}\cos(\Delta+\delta_{CP})\,\frac{\sin(\hat{A}\Delta)}{\hat{A}}\,\frac{\sin[(1-\hat{A})\Delta]}{1-\hat{A}}
+ \alpha^2\cos^2\theta_{23}\sin^2 2\theta_{12}\,\frac{\sin^2(\hat{A}\Delta)}{\hat{A}^2}
$$
Here, $\Delta = \Delta_{31}L/4E$ is the oscillating phase, and the effect of neutrinos interacting with matter in the earth is given by the matter term $\hat{A} = 2\sqrt{2}G_F n_e E/\Delta_{31}$, where $n_e$ is the number density of electrons in the earth. Note that this expression is valid for matter of constant density. This approximate formula is useful for understanding the physics of neutrino oscillations. However, in our simulations we use the full numerical probability calculated by GLoBES.
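As an illustration, this perturbative expression can be evaluated directly. The following is a minimal Python sketch, not our analysis code; the default parameter values are illustrative assumptions, and the matter term uses an earth-crust approximation with $Y_e = 0.5$:

```python
import math

def pmue_approx(E, L, dcp, s13sq=0.0234, s23sq=0.5, s12sq=0.304,
                dm31=2.4e-3, dm21=7.5e-5, rho=2.8):
    """Approximate P(nu_mu -> nu_e) in constant-density matter, to second
    order in alpha = dm21/dm31 and sin(theta13). E in GeV, L in km, dcp in
    radians, mass-squared differences in eV^2, rho in g/cm^3. All default
    values are illustrative assumptions, not the paper's fit inputs."""
    alpha = dm21 / dm31
    D = 1.267 * dm31 * L / E        # Delta = dm31 * L / 4E (natural units)
    A = 7.6e-5 * rho * E / dm31     # A-hat = 2 sqrt(2) G_F n_e E / dm31 (Y_e = 0.5)
    s2_13 = 2.0 * math.sqrt(s13sq * (1.0 - s13sq))   # sin(2 theta_13)
    s2_12 = 2.0 * math.sqrt(s12sq * (1.0 - s12sq))
    s2_23 = 2.0 * math.sqrt(s23sq * (1.0 - s23sq))
    t1 = 4.0 * s13sq * s23sq * math.sin((1.0 - A) * D) ** 2 / (1.0 - A) ** 2
    t2 = (alpha * s2_13 * s2_12 * s2_23 * math.cos(D + dcp)
          * math.sin(A * D) / A * math.sin((1.0 - A) * D) / (1.0 - A))
    t3 = alpha ** 2 * (1.0 - s23sq) * s2_12 ** 2 * math.sin(A * D) ** 2 / A ** 2
    return t1 + t2 + t3
```

Near the first oscillation maximum ($E \approx 2.5$ GeV at $L = 1300$ km), this reproduces the characteristic $\delta_{CP}$ dependence: for neutrinos and NH, $\delta_{CP} = -90^\circ$ enhances $P_{\mu e}$ while $+90^\circ$ suppresses it.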
In the analyses that follow, we have evaluated the $\chi^2$ for determining the mass hierarchy, octant of $\theta_{23}$ and CP violation using a combination of LBNE and the current/upcoming experiments T2K, NOνA and ICAL. For each set of 'true' values assumed, we evaluate the $\chi^2$ marginalized over the 'test' parameters. In our simulations, we have used the effective atmospheric parameters corrected for three-flavour effects [66][67][68]. The true value assumed for the solar mixing angle is $\sin^2\theta_{12} = 0.304$; the true values of $\theta_{23}$ and $\delta_{CP}$ are varied over the ranges specified in the following sections. The test hierarchy is varied as well. The solar parameters are already measured quite accurately, and their variation does not impact our results significantly. Therefore, we have not marginalized over them. We have imposed a prior of $\sigma(\sin^2 2\theta_{13}) = 0.005$ on the value of $\sin^2 2\theta_{13}$, which is the expected precision from the reactor neutrino experiments [69]. The systematic uncertainties are parametrized in terms of four nuisance parameters: signal normalization error (2.5%), signal tilt error (2.5%), background normalization error (10%) and background tilt error (2.5%).
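The way the reactor prior enters the marginalization can be sketched in a few lines. This toy (not our GLoBES analysis) adds the quadratic prior penalty to a stand-in statistical $\chi^2$ and minimizes over the test value of $\sin^2 2\theta_{13}$; the best-fit value 0.09 and the quadratic stand-in are hypothetical:

```python
def chi2_with_prior(chi2_stat, s22th13_test, s22th13_best, sigma=0.005):
    """Reactor prior: a quadratic penalty added to the statistical chi^2
    before marginalizing over the test parameters."""
    return chi2_stat + ((s22th13_test - s22th13_best) / sigma) ** 2

# Toy marginalization: scan test sin^2(2 theta13) and keep the minimum.
# The quadratic 'experiment' chi2 and the value 0.09 are stand-ins.
best = 0.09
tests = [0.07 + 1e-4 * i for i in range(401)]
chi2_marg = min(chi2_with_prior(200.0 * (t - 0.095) ** 2, t, best)
                for t in tests)
```

The prior pulls the marginalized minimum towards the reactor best fit, exactly as the full analysis does for each set of true values.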
In Section III, our aim is to economize the configuration of LBNE with the help of the current generation of experiments. We do this by evaluating the 'adequate' exposure for LBNE. The qualifier 'adequate', as defined in Ref. [41] in the context of LBNO, means the exposure required from the experiment to determine the hierarchy and octant with $\chi^2 = 25$, and to detect CP violation with $\chi^2 = 9$. To do so, we have varied the exposure of LBNE, and determined the combined sensitivity of LBNE along with T2K, NOνA and ICAL. The variation of the total sensitivity with LBNE exposure tells us what the adequate exposure should be. In this work, we have quantified the exposure for LBNE in units of MW-kt-yr. This is the product of the beam power (in MW), the runtime of the experiment (in years) and the detector mass (in kilotons). As a phenomenological study, we will only specify the total exposure in this paper. This may be interpreted experimentally as different combinations of beam power, runtime and detector mass whose product gives this value of exposure. For example, an exposure of 20 MW-kt-yr could be achieved by using a 10 kt detector for 2 years (in each of the $\nu$ and $\bar{\nu}$ modes), with a 1 MW beam. We use events down to 0.5 GeV for LBNE, so that both the first and second oscillation maxima are covered. The relative contribution of the second oscillation maximum is discussed in Section V.
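The exposure bookkeeping above is simple arithmetic; a minimal sketch, assuming (as the worked example in the text suggests) that the runtime is counted per running mode:

```python
def exposure_mw_kt_yr(beam_power_mw, detector_kt, years_per_mode):
    """Exposure = beam power x detector mass x runtime (MW-kt-yr).
    Following the example in the text, the runtime here is taken per
    running mode; this interpretation is an assumption on our part."""
    return beam_power_mw * detector_kt * years_per_mode

# The text's example: a 1 MW beam and a 10 kt detector for 2 years
# in each mode gives 20 MW-kt-yr.
print(exposure_mw_kt_yr(1.0, 10.0, 2.0))
```

Any other factorization with the same product (e.g. a heavier detector with a shorter run) gives the same exposure, which is why only the product is quoted.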

III. ADEQUATE EXPOSURE FOR LBNE

A. Hierarchy sensitivity
In the left panel of Fig. 1, we show the combined sensitivity of LBNE, NOνA, T2K and ICAL for determining the mass hierarchy, as the exposure of LBNE is varied. Note that the hierarchy sensitivity depends very strongly on the true values of $\delta_{CP}$ and $\theta_{23}$. In this work, we are interested in finding the least exposure needed by LBNE, irrespective of the true values of the parameters in nature. Therefore, we have evaluated the $\chi^2$ for the various true values of these parameters listed in Section II, and taken the most conservative case among them. Thus, the exposure plotted here is for the most unfavourable values of true $\delta_{CP}$ and $\theta_{23}$. Since the hierarchy sensitivity of the $P_{\mu e}$ channel increases with $\theta_{23}$, the worst case is usually found at the lowest value considered, $\theta_{23} = 39^\circ$. The most unfavourable value of $\delta_{CP}$ is around $+(-)90^\circ$ for NH (IH) [24]. Separate curves are shown for the two hierarchies, but the results are almost the same in both cases. We find that the adequate exposure for LBNE in this case is around 22 MW-kt-yr for both NH and IH. For the benchmark values of 1.2 MW power and a 10 kt detector, this corresponds to just under 2 years of running in each mode. The two intermediate curves show the same sensitivity, but without including ICAL data in the analysis. In this case, the adequate exposure is around 39 MW-kt-yr. Thus, in the absence of ICAL data, LBNE would have to increase its exposure by over 75% to achieve the same results. Finally, we show the sensitivity from LBNE alone in the lowermost curves. For the range of exposures considered, LBNE by itself can achieve hierarchy sensitivity up to the $\chi^2 = 16$ level. The first row of Table I shows the adequate exposure required for hierarchy sensitivity reaching $\chi^2 = 25$ for LBNE alone, and after adding the data from T2K, NOνA and ICAL. The numbers in parentheses correspond to IH. With only LBNE, the exposure required to reach $\chi^2 = 25$ for hierarchy sensitivity is seen to be much higher.

B. Octant sensitivity
The mass hierarchy as well as the values of $\delta_{CP}$ and $\theta_{23}$ in nature affect the octant sensitivity of experiments significantly. In our analysis, we have considered various true values of $\delta_{CP}$ across its full range, and two representative true values of $\theta_{23}$: $39^\circ$ and $51^\circ$. Having evaluated the minimum $\chi^2$ for each of these cases, we have chosen the lower value. Thus, we have ensured that the adequate exposure shown here holds irrespective of the true octant of $\theta_{23}$. Note that the octant sensitivity decreases as the true $\theta_{23}$ approaches $45^\circ$; thus the above choice of true $\theta_{23}$ does not correspond to the most conservative case.
The middle panel of Fig. 1 shows the combined octant sensitivity of the experiments, as a function of LBNE exposure. Around 35-37 MW-kt-yr is the exposure required for LBNE, NOνA, T2K and ICAL to collectively measure the octant for both hierarchies. This implies a runtime of around 3 years in each mode for the 'standard' configuration of LBNE. Without information from ICAL, however, LBNE would have to increase its exposure to around 65 (50) MW-kt-yr for NH (IH) to measure the octant with $\chi^2 = 25$, while LBNE alone would need a still higher exposure of 84 (76) MW-kt-yr for NH (IH). This is summarized in the second row of Table I.

C. Detecting CP violation

The CP detection ability of an experiment is defined as its ability to distinguish the true value of $\delta_{CP}$ in nature from the CP-conserving cases of $0$ and $180^\circ$. This obviously depends on the true value of $\delta_{CP}$. If $\delta_{CP}$ in nature is close to $0$ or $180^\circ$, this ability will be poor, while if it is close to $\pm 90^\circ$, it will be high. The CP detection ability also depends on $\theta_{23}$, and is typically a decreasing function of $\theta_{23}$ [31]. Here, we have tried to determine the fraction of the entire $\delta_{CP}$ range for which our setups can detect CP violation with at least $\chi^2 = 9$. We have always chosen the smallest fraction over the various values of $\theta_{23}$ ($39^\circ$, $45^\circ$ and $51^\circ$), so as to get a conservative estimate.
We find from the right panel of Fig. 1 that, for the range of exposures considered, the fraction of $\delta_{CP}$ values lies between 0.35 and 0.55. Even as the exposure increases by a factor of two, the fraction of $\delta_{CP}$ grows only slowly. In Ref. [29], it was shown that the addition of information from ICAL to NOνA and T2K increases their CP detection ability. This is because ICAL data breaks the hierarchy-$\delta_{CP}$ degeneracy that NOνA and T2K suffer from. However, LBNE data itself is also capable of lifting this degeneracy for most values of $\delta_{CP}$ [70]. Therefore, the inclusion of ICAL data does not make any difference in this case. This combination of experiments can detect CP violation over 40% of the $\delta_{CP}$ range with an exposure of about 65 MW-kt-yr at LBNE for NH (i.e. a runtime of around 5.5 years for LBNE). Without including T2K and NOνA information, the exposure required for 40% coverage for the discovery of CP violation is 114 MW-kt-yr.
In the following sections, we fix the exposure in each case to be the adequate exposure as listed in Table I, for the most conservative parameter values.

IV. ROLE OF THE NEAR DETECTOR IN REDUCING SYSTEMATICS
The role of the near detector (ND) in long-baseline neutrino experiments has been well discussed in the literature, see for example Refs. [71][72][73]. The measurement of events at the near and far detector (FD) reduces the uncertainty associated with the flux and cross-section of neutrinos. Thus the role of the near detector is to reduce systematic errors in the oscillation experiment. It has recently been seen that the near detector for the T2K experiment can bring about a spectacular reduction of systematic errors [74].
In this study, we have tried to quantify the improvement in the results once the near detector is included. Instead of putting in reduced systematics by hand, we have explicitly simulated the events at the near detector using GLoBES. The design of the near detector is still being planned. For our simulations, we assume that the near detector has a mass of 5 tons and is placed 459 metres from the source. The flux at the near detector site has been provided by the LBNE collaboration [61]. The detector characteristics for the near detector are as follows [75]. The muon (electron) detection efficiency is taken to be 95% (50%). The NC background can be rejected with an efficiency of 20%. The energy resolution for electrons is $6\%/\sqrt{E\,{\rm (GeV)}}$, while that for muons is 37 MeV across the entire energy range of interest. Therefore, for the neutrinos, we use a (somewhat conservative) energy resolution of $20\%/\sqrt{E\,{\rm (GeV)}}$. The systematic errors that the near detector setup suffers from are assumed to be the same as those of the far detector.
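For concreteness, the resolution figures quoted above translate into absolute widths as follows; a small sketch (note that $6\%/\sqrt{E}$ is a fractional resolution, so $\sigma(E) = 0.06\sqrt{E}$ GeV):

```python
import math

def sigma_e_electron(E):
    """Electron energy resolution at the ND: sigma/E = 6%/sqrt(E[GeV]),
    i.e. sigma(E) = 0.06 * sqrt(E) GeV."""
    return 0.06 * math.sqrt(E)

def sigma_e_muon(E):
    """Muon resolution: a flat 37 MeV across the energy range of interest
    (the argument is unused; kept for a uniform interface)."""
    return 0.037

def sigma_e_neutrino(E):
    """Conservative neutrino energy resolution: sigma(E) = 0.20 * sqrt(E) GeV."""
    return 0.20 * math.sqrt(E)
```

At 1 GeV, for example, these give 60 MeV for electrons, 37 MeV for muons and 200 MeV for neutrinos.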
In order to simulate the ND+FD setup for LBNE, we use GLoBES to generate events at both detectors, treating them as separate experiments. We then use these two data sets to perform a correlated systematics analysis using the method of pulls [76]. This gives us the combined sensitivity of LBNE using both near and far detectors. (We have explained our methodology in Appendix A.) Thereafter, the procedure of combining results with other experiments and marginalizing over oscillation parameters continues in the usual manner. The results are shown in Fig. 2. The effect of reduced systematic errors is felt most significantly in regions where the results are best. This is because for those values of δ CP , the experiment typically has high enough statistics for systematic errors to play an important role.
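The structure of the pulls method can be illustrated with a single normalization systematic. This is a toy sketch, not our analysis: the full treatment (Appendix A) uses several correlated pulls across the ND and FD event samples:

```python
import numpy as np

def chi2_pulls(n_obs, n_pred, pi_norm):
    """Method of pulls with one normalization systematic:
    chi2(xi) = sum_i [N_obs,i - (1 + pi*xi) N_pred,i]^2 / N_pred,i + xi^2,
    minimized over the pull xi by a simple grid scan."""
    n_obs = np.asarray(n_obs, dtype=float)
    n_pred = np.asarray(n_pred, dtype=float)
    xi_grid = np.linspace(-3.0, 3.0, 601)   # scan the pull in units of sigma
    chi2 = [np.sum((n_obs - (1.0 + pi_norm * xi) * n_pred) ** 2 / n_pred)
            + xi ** 2
            for xi in xi_grid]
    return min(chi2)

# A 5% normalization pull can absorb part of the data/prediction offset,
# so the minimized chi2 never exceeds the no-systematics value.
obs, pred = [95.0, 110.0, 80.0], [100.0, 100.0, 90.0]
chi2_sys = chi2_pulls(obs, pred, 0.05)
chi2_stat = chi2_pulls(obs, pred, 0.0)
```

Combining ND and FD amounts to sharing (correlating) the same pulls between the two event samples, which is what constrains the flux and cross-section uncertainties.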
Next, we have tried to quantify the reduction in systematic errors seen by the experiment when the near detector is included. To be more specific, if the systematic errors seen by the far detector setup are denoted by $\pi$, then what is the effective set of errors $\pi_{\rm eff}$ for the far detector setup, once the near detector is also included? In other words, for given systematic errors $\pi$, we have found the value of $\pi_{\rm eff}$ that satisfies the relation
$$
\chi^2_{\rm FD}(\pi_{\rm eff}) = \chi^2_{\rm FD+ND}(\pi),
$$
where the right-hand side denotes the correlated combination as described in Appendix A. The result of the computation is shown in Fig. 3, for the case of hierarchy determination. We have chosen typical values of systematic errors for the detector: $\nu_e$ appearance signal norm error of 2.5%, $\nu_\mu$ disappearance signal norm error of 7.5%, $\nu_e$ appearance background norm error of 10% and $\nu_\mu$ disappearance background norm error of 15%. The tilt error is taken as 2.5% in both the appearance and disappearance channels. The first four numbers constitute $\pi$, as labelled in the figure. We have not varied the tilt errors in this particular analysis because their effect on the overall results is quite small. The sensitivity of FD+ND obtained using these numbers is matched by an FD-only setup with the following effective errors: $\nu_e$ appearance signal norm error of 1%, $\nu_\mu$ disappearance signal norm error of 1%, $\nu_e$ appearance background norm error of 5% and $\nu_\mu$ disappearance background norm error of 5%. Similar results are obtained for the octant and CP sensitivities. Thus, inclusion of the near detector brings the systematic errors down to 13-50% of their original values. These results are summarized in Table II.

Systematic error                              | FD    | FD+ND
$\nu_e$ app signal norm error                 | 2.5%  | 1%
$\nu_\mu$ disapp signal norm error            | 7.5%  | 1%
$\nu_e$ app background norm error             | 10%   | 5%
$\nu_\mu$ disapp background norm error        | 15%   | 5%

V. SIGNIFICANCE OF THE SECOND OSCILLATION MAXIMUM

For a baseline of 1300 km, the oscillation probability $P_{\mu e}$ has its first oscillation maximum around 2-2.5 GeV.
This is easy to explain from the peak condition $\Delta^m L/4E = \pi/2$, where $\Delta^m$ is the matter-modified atmospheric mass-squared difference. In the limit $\Delta_{21} \to 0$, it is given by $\Delta^m = \Delta_{31}\sqrt{(\cos 2\theta_{13} - \hat{A})^2 + \sin^2 2\theta_{13}}$. The second oscillation maximum, for which the oscillating term takes the value $3\pi/2$, occurs at an energy of around 0.6-1.0 GeV. Studies have discussed the advantages of using the second oscillation maximum to obtain information on the oscillation parameters [77][78][79]. The neutrino flux that LBNE will use has a wide-band profile, which can extract physics from both the first and second maxima. Figure 4 shows $P_{\mu e}$ for the LBNE baseline, superimposed on the $\nu_\mu$ flux. This is in contrast with NOνA, which uses a narrow-band off-axis beam concentrated on its first oscillation maximum, in order to reduce the $\pi^0$ background at higher energies.
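The quoted peak positions can be reproduced in vacuum (matter effects shift them slightly); a minimal sketch, using an illustrative $\Delta_{31} = 2.4 \times 10^{-3}$ eV$^2$:

```python
import math

def e_peak(n, dm31=2.4e-3, L=1300.0):
    """Vacuum energy of the n-th oscillation maximum: the phase
    Delta = 1.267 * dm31[eV^2] * L[km] / E[GeV] equals (2n - 1) * pi/2.
    dm31 here is an illustrative value; matter effects shift the peaks."""
    return 1.267 * dm31 * L / ((2 * n - 1) * math.pi / 2)

print(round(e_peak(1), 2))   # first maximum, ~2.5 GeV
print(round(e_peak(2), 2))   # second maximum, ~0.8 GeV
```

The second maximum thus sits squarely in the 0.5-1.1 GeV bins discussed below.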
In order to understand the impact of the second oscillation maximum, we have considered two different energy ranges. Above 1.1 GeV, only the first oscillation cycle is relevant. If we also include the energy range from 0.5 to 1.1 GeV, we get information from the second oscillation maximum as well. Figure 5 compares the sensitivity to the hierarchy, octant and CP violation obtained from the first oscillation cycle alone with that obtained from both oscillation cycles. We see that the inclusion of data from the second oscillation maximum increases the $\chi^2$ only by a small amount. As expected, the effect is pronounced in the region where the effect of parameter degeneracies is significant. However, if the second oscillation maximum is not used, the exposure required to reach the same level of sensitivity is very high, as shown in Table III. This is because it is difficult to increase the sensitivity in regions plagued by degeneracies.

VI. OPTIMIZING THE NEUTRINO-ANTINEUTRINO RUNS
One of the main questions while planning any beam-based neutrino experiment is the ratio of the neutrino to the antineutrino run. Since the functional dependences of the neutrino and antineutrino oscillation probabilities on the oscillation parameters are different, an antineutrino run provides a different set of data which may be useful in the determination of the parameters. However, the interaction cross-section for antineutrinos in the detector is a factor of 2.5-3 smaller than that for neutrinos. Therefore, an antineutrino run typically has lower statistics. Thus, the choice of the neutrino-antineutrino ratio is often a compromise between new information and lower statistics.
It is now well known that the neutrino and antineutrino oscillation probabilities suffer from the same form of the hierarchy-$\delta_{CP}$ degeneracy [24]. However, the octant-$\delta_{CP}$ degeneracy has the opposite form for neutrinos and antineutrinos [27]. Thus, the inclusion of an antineutrino run helps in lifting this degeneracy for most values of $\delta_{CP}$ [70]. Therefore, for the hierarchy and the octant, the optimum balance between the neutrino and antineutrino runs depends on the type and extent of the degeneracy that afflicts the measurement. The situation is different for the detection of CP violation. For the T2K experiment, it has been shown that once the octant degeneracy has been lifted by including some amount of antineutrino data, a further antineutrino run does not help much in CP discovery; in fact, it is then better to run with neutrinos to gain statistics [56]. But this conclusion may change for a different baseline and matter effect. From Fig. 4 we see that for NOνA the oscillation peak does not coincide with the flux peak. Around the energy where the flux peaks, the probability spectra with $\delta_{CP} = 0, 180^\circ$ are not equidistant from the $\delta_{CP} = \pm 90^\circ$ spectra. In the antineutrino mode, the curves for $\pm 90^\circ$ switch position. Hence for neutrinos $\delta_{CP} = 0^\circ$ is closer to $\delta_{CP} = -90^\circ$ and $\delta_{CP} = 180^\circ$ is closer to $\delta_{CP} = 90^\circ$, while the opposite is true for antineutrinos. This gives a synergy, and hence running in both the neutrino and antineutrino modes can be helpful. For T2K, the energy of the flux peak coincides with the oscillation peak. At this point the curves for $\delta_{CP} = 0, 180^\circ$ are equidistant from $\delta_{CP} = \pm 90^\circ$, and hence this synergy is not present. Thus, the role of the antineutrino run is only to lift the octant degeneracy. In what follows, we vary the proportions of the neutrino and antineutrino runs to ascertain the optimal combination. The adequate exposure is split into various combinations of neutrinos and antineutrinos: $1/6\,\nu + 5/6\,\bar{\nu}$, $2/6\,\nu + 4/6\,\bar{\nu}$, ..., $6/6\,\nu + 0/6\,\bar{\nu}$.
The intermediate configuration $3/6\,\nu + 3/6\,\bar{\nu}$ corresponds to the equal-run configuration used in the other sections. For convenience of notation, these configurations are referred to simply as 1+5, etc., i.e. without the '/6'. The results are shown in Fig. 6.
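The statistics side of the trade-off described above can be made explicit with a toy count. Assuming a factor-2.5 antineutrino cross-section suppression (the text quotes 2.5-3) and, purely for illustration, identical fluxes and efficiencies in both modes:

```python
def relative_events(nu_units, nubar_units, xsec_ratio=2.5):
    """Relative event count for a given run split, with the antineutrino
    sample suppressed by the cross-section ratio. A deliberately crude
    toy: real event rates also depend on fluxes and efficiencies."""
    return nu_units + nubar_units / xsec_ratio

splits = [(n, 6 - n) for n in range(1, 7)]   # 1+5, 2+4, ..., 6+0
counts = [relative_events(n, nb) for n, nb in splits]
# Statistics rise monotonically with the neutrino fraction; the optimal
# split trades this gain against degeneracy-breaking antineutrino data.
```

This is why a pure neutrino run maximizes statistics, while the best split for each measurement depends on which degeneracy needs to be lifted.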
The top row of Fig. 6 shows the hierarchy sensitivity of LBNE for various combinations of the neutrino and antineutrino runs. Normal hierarchy and $\theta_{23} = 39^\circ$ have been assumed as the true parameters. For LBNE, we have chosen an exposure of 22 MW-kt-yr, which was found to be the adequate exposure in Section III assuming equal neutrino and antineutrino runs. The left panel shows the results for LBNE alone. In the favourable region $\delta_{CP} \in [-180^\circ, 0]$, the best sensitivity comes from the combinations 3+3 and 4+2. Although the statistics are higher for neutrinos, the antineutrino run is required to remove the wrong-octant regions. For normal hierarchy, $\delta_{CP} \in [0, 180^\circ]$ is the unfavourable region for hierarchy determination [24], as is evident from the figure. In this region, we see that the results are worst for a pure neutrino run. The best sensitivity comes for the case 5+1: this amount of antineutrino run is required to remove the octant degeneracy, while the higher proportion of neutrino run ensures better statistics. In the right panel, along with LBNE we have also combined the data from NOνA, T2K and ICAL. With the inclusion of these data, the hierarchy sensitivity increases further, and even in the unfavourable region $\chi^2 = 25$ sensitivity is possible with a neutrino-only run of LBNE. This is because NOνA, which will run in the antineutrino mode for 3 years, will provide the necessary information to lift the parameter degeneracies that reduce the hierarchy sensitivity. Therefore, the best option for LBNE is to run only in the neutrino mode, which has the added advantage of increased statistics. In the favourable region too, the sensitivity is now better for 6+0 and 5+1, i.e. a smaller amount of antineutrino running from LBNE is required because of the antineutrino information coming from NOνA. Note that, overall, the optimal amount of antineutrino run depends on the value of $\delta_{CP}$.
However, combining the information from all the experiments, 4+2 seems to be the best option over the largest fraction of $\delta_{CP}$ values.
In the middle row of Fig. 6, we show the octant sensitivity of LBNE alone (left panel) and in combination with the current experiments (right panel). We have fixed the true hierarchy to be inverted, and $\theta_{23} = 39^\circ$. We have used an exposure of 37 MW-kt-yr for LBNE. From the left panel, we see that the worst results come from the full neutrino run for most values of $\delta_{CP}$. This is because the removal of wrong-octant solutions requires both neutrino and antineutrino data [27]. On the other hand, the statistics are higher for the neutrino data. The best compromise is reached for 2+4, i.e. a one-third neutrino and two-thirds antineutrino combination, which gives the best results over the widest range of $\delta_{CP}$ values. The addition of NOνA, T2K and ICAL data increases the octant sensitivity, which is best for the combinations having more antineutrinos.
The results are somewhat different when the true hierarchy is normal (not shown in the plots). In this case, the optimal configuration depends on the value of $\delta_{CP}$. When $\delta_{CP} \in [-180^\circ, 0]$, lifting the octant degeneracy requires both neutrinos and antineutrinos; as before, the neutrinos provide more statistics, and the best results are again obtained for the 2+4 combination. For $\delta_{CP} \in [0, 180^\circ]$, the octant degeneracy is absent and the best sensitivity is achieved with a neutrino-only run, owing to the higher statistics. The data from NOνA, T2K and ICAL already include antineutrino information. It is because of the synergy between these experiments and LBNE that the pure neutrino runs also fare well, depending on the value of $\delta_{CP}$. It is likely that $\delta_{CP}$ will not be measured before LBNE starts running, which will make planning the run difficult. However, for all combinations of the neutrino and antineutrino runs, the octant sensitivity reaches (or nearly reaches) $\chi^2 = 25$. Therefore, the exact combination chosen does not make much difference to the final result for octant determination for NH.
The left and right panels of the bottom row in Fig. 6 show the ability of LBNE (by itself, and in conjunction with the current generation of experiments, respectively) to detect CP violation. Here the true hierarchy is NH and the true $\theta_{23}$ is $51^\circ$. Although this true combination does not suffer from any octant degeneracy, we see in the left panel that 6+0 is not the best combination. This is due to the synergy between the neutrino and antineutrino runs for longer baselines, as discussed earlier. In both cases, we find that the best option is to run LBNE with antineutrinos for around a third of the total exposure. On adding the information from the other experiments, we find a large improvement in the CP sensitivity. From the right panel, we see that the range of $\delta_{CP}$ for which $\chi^2 = 9$ detection of CP violation is possible is almost the same for most combinations of the neutrino and antineutrino runs. Therefore, as in the case of octant determination, the exact choice of the combination is not very important.

VII. SUMMARY
The LBNE experiment at Fermilab has promising physics potential. Its baseline is long enough for matter effects to break the δ_CP-related degeneracies and determine the neutrino mass hierarchy and the octant of θ_23. The experiment is also well suited to detecting CP violation in the neutrino sector. The current and upcoming experiments T2K, NOνA and ICAL@INO will also provide some indications of the values of the unknown parameters. In this work, we have explored the physics reach of LBNE given the data that these other experiments will collect. We have evaluated the adequate exposure for LBNE (in units of MW-ktyr), i.e. the minimum exposure with which LBNE, in combination with the other experiments, can determine the unknown parameters for all values of the oscillation parameters. The threshold for determination is taken to be χ² = 25 for the mass hierarchy and the octant, and χ² = 9 for detecting CP violation. The results are summarized in Table I. We find that adding information from NOνA and T2K reduces the exposure required by LBNE alone to determine all three unknowns: the hierarchy, the octant and δ_CP. Adding ICAL data to this combination further helps in achieving the same level of sensitivity with a reduced LBNE exposure. Thus the synergy between the various experiments can be helpful in economizing the LBNE configuration. We have also probed the role of the near detector in improving the results by reducing systematic errors. We have simulated events at the near and far detectors and performed a correlated systematics analysis of both sets of events. We find an improvement in the physics reach of LBNE when the near detector is included, and we have evaluated the corresponding drop in systematics. These results are shown in Table II.
Further, we have checked the role of the information from the lowest energy bins, which are affected by the second oscillation maximum of the probability. Although the overall increase in sensitivity due to the second oscillation cycle is small, it is most significant in the regions of parameter space where degeneracies reduce the sensitivity. In such regions, even a very large increase in exposure brings about only a small gain in sensitivity because of the degeneracy, so the independent information from the second oscillation cycle plays an important role in improving it.
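For orientation, the energies of the oscillation maxima follow from the vacuum phase 1.267 Δm²[eV²] L[km]/E[GeV] being equal to (2n−1)π/2. The sketch below assumes the 1300 km LBNE baseline and a representative Δm²_31 = 2.4×10⁻³ eV²; both are illustrative inputs, not values fitted in this work.

```python
import math

def oscillation_maximum_energy(n, L_km=1300.0, dm2_eV2=2.4e-3):
    """Energy (GeV) of the n-th vacuum oscillation maximum,
    where 1.267 * dm2 * L / E = (2n - 1) * pi / 2."""
    phase_coeff = 1.267 * dm2_eV2 * L_km
    return phase_coeff / ((2 * n - 1) * math.pi / 2.0)

E1 = oscillation_maximum_energy(1)  # first maximum, ~2.5 GeV
E2 = oscillation_maximum_energy(2)  # second maximum, ~0.8 GeV
```

With these inputs the first maximum sits near 2.5 GeV and the second near 0.8 GeV, which is why the second-cycle information resides in the lowest energy bins of the spectrum.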
Finally, we have performed an optimization study of the neutrino-antineutrino running of LBNE. The amount of antineutrino running required depends on the true value of δ_CP. It serves two purposes: (i) reducing the octant degeneracy and (ii) exploiting the synergy between neutrino and antineutrino data for detecting CP violation. We find that the optimal combination for LBNE in conjunction with the other experiments differs from that for LBNE alone. For LBNE alone, a pure neutrino run fares worse than mixed neutrino-antineutrino combinations. However, when combined with NOνA, T2K and ICAL, the pure neutrino run is better than or comparable to the mixed runs (except for the octant sensitivity when the hierarchy is inverted). This is because the lifting of degeneracies by antineutrinos is partly accomplished by the antineutrino data from NOνA and ICAL, so the pure neutrino run of LBNE, which has higher statistics, performs better. For the combined analysis of all the experiments, a run with two-thirds neutrinos and one-third antineutrinos seems to be the optimal choice.
To conclude, the LBNE experiment can measure the mass hierarchy, the octant and δ_CP with considerable precision. Including data from experiments such as T2K, NOνA and ICAL can help LBNE attain the same level of precision with a reduced exposure. Thus the synergistic aspects of the different experiments can help in planning a more economical configuration for LBNE.

APPENDIX: TREATMENT OF SYSTEMATIC ERRORS

The theoretical event rate $N_i^{\rm th}$ in the $i$-th energy bin is modified due to systematic errors as
$$
N_i^{\rm th} \rightarrow N_i^{\rm th}\left[\,1 + \sum_k c_i^k\,\xi_k + \sum_l \tilde{c}_i^{\,l}\,\xi_l\,\frac{E_i - E_{\rm av}}{E_{\rm max} - E_{\rm min}}\,\right]\,,
$$
where the index k (l) runs over the relevant normalization (tilt) systematic errors for a given experimental observable. All the pull variables {ξ_j} take values in the range (−3, 3), so that the errors can vary from −3σ to +3σ. Here, E_i is the mean energy of the i-th energy bin, E_min and E_max are the limits of the full energy range, and E_av is their average. The Poissonian χ² is calculated for each detector as
$$
\chi^2_{\rm det} = 2\sum_i \left[\, N_i^{\rm th} - N_i^{\rm ex} + N_i^{\rm ex}\,\ln\frac{N_i^{\rm ex}}{N_i^{\rm th}} \,\right]\,,
$$
where $N_i^{\rm ex}$ is the observed event rate. The results from the two detector setups are then combined, along with a penalty for each source of systematic error, and the final χ² is calculated by minimizing over all combinations of the ξ_j:
$$
\chi^2 = \min_{\{\xi_j\}} \left[\, \chi^2_{\rm ND} + \chi^2_{\rm FD} + \sum_j \xi_j^2 \,\right]\,.
$$
Usually, for two experiments with uncorrelated systematics, the addition of penalties and the minimization over the pull variables are done independently, and the resulting χ² values are added. In contrast, here we apply the same pulls to both detector setups and then minimize over the pull variables. This takes care of the correlations between the systematic effects of the two setups.
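As an illustration, the correlated pull treatment described above can be sketched in a few lines of Python. Everything here is a toy: the spectra, the 5% normalization and 2.5% tilt coefficients, and the bin counts are invented for illustration, and `scipy` stands in for the dedicated fitting code used in the analysis. The essential point is that the same pull variables rescale both the near- and far-detector predictions before a single joint minimization.

```python
import numpy as np
from scipy.optimize import minimize

def modified_prediction(n_th, xi_norm, xi_tilt, c_norm, c_tilt,
                        E, E_av, E_min, E_max):
    """Prediction with one normalization and one tilt pull applied."""
    return n_th * (1.0 + c_norm * xi_norm
                   + c_tilt * xi_tilt * (E - E_av) / (E_max - E_min))

def poisson_chi2(n_th, n_ex):
    """Poissonian chi^2 between prediction and 'data'."""
    return 2.0 * np.sum(n_th - n_ex + n_ex * np.log(n_ex / n_th))

def total_chi2(pulls, setups, c_norm=0.05, c_tilt=0.025):
    """Sum the chi^2 of both detectors with the SAME pulls, plus penalties."""
    xi_norm, xi_tilt = pulls
    chi2 = 0.0
    for n_th, n_ex, E in setups:
        E_min, E_max = E[0], E[-1]
        E_av = 0.5 * (E_min + E_max)
        pred = modified_prediction(n_th, xi_norm, xi_tilt, c_norm, c_tilt,
                                   E, E_av, E_min, E_max)
        chi2 += poisson_chi2(pred, n_ex)
    return chi2 + xi_norm**2 + xi_tilt**2  # one penalty per systematic

# Toy near- and far-detector spectra: (predicted, observed, bin energies).
E_bins = np.linspace(0.5, 8.0, 16)
near = (1.0e4 * np.exp(-E_bins), 1.02e4 * np.exp(-E_bins), E_bins)
far = (50.0 * np.exp(-0.5 * E_bins), 51.0 * np.exp(-0.5 * E_bins), E_bins)

# Minimize over the shared pull variables within the +/- 3 sigma range.
res = minimize(total_chi2, x0=[0.0, 0.0], args=([near, far],),
               bounds=[(-3, 3), (-3, 3)])
chi2_min = res.fun
```

Because both toy detectors see the same 2% normalization shift, a single shared pull absorbs it in both spectra at once; with uncorrelated pulls, each detector would pay its own penalty instead.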