B-mode constraints from Planck low multipole polarisation data

We present constraints on primordial B modes from large angular scale cosmic microwave background polarisation anisotropies measured with the Planck satellite. To remove Galactic polarised foregrounds, we use a Bayesian parametric component separation method, modelling synchrotron radiation as a power law and thermal dust emission as a modified blackbody. This method propagates uncertainties from the foreground cleaning into the noise covariance matrices of the maps. We construct two likelihoods: (i) a semi-analytical cross-spectrum-based likelihood-approximation scheme (momento) and (ii) an exact polarisation-only pixel-based likelihood (pixLike). Since momento is based on cross-spectra it is statistically less powerful than pixLike, but is less sensitive to systematic errors correlated across frequencies. Both likelihoods give a tensor-to-scalar ratio, r, that is consistent with zero from low multipole (2 ≤ ℓ < 30) Planck polarisation data. From full-mission maps we obtain r_0.05 < 0.274, at 95 per cent confidence, at a pivot scale of k = 0.05 Mpc^-1, using pixLike. momento gives a qualitatively similar but weaker 95 per cent confidence limit of r_0.05 < 0.408.


INTRODUCTION
Measurements of the polarised cosmic microwave background (CMB) on large angular scales provide a unique opportunity to constrain the physics of the early Universe. In particular, a detection of primordial B-modes, parameterised by the tensor-to-scalar ratio r, would provide strong evidence that the Universe underwent an inflationary phase and would shed light on the dynamics involved.
Scalar perturbations generated in the early Universe yield a divergence-like 'E-mode' polarisation pattern in the CMB, whereas tensor perturbations give an additional distinctive curl-like 'B-mode' pattern (Seljak & Zaldarriaga 1997; Kamionkowski et al. 1997), of approximately equal amplitude to their E-mode contribution. A definitive detection of primordial B-modes would provide strong constraints on models of inflation and would fix the energy scale of inflation through the relation (Lyth 1985)

V^{1/4} ≈ 1.04 × 10^16 GeV (r/0.01)^{1/4},    (1)

where V is the potential energy of the inflaton and r the ratio of the amplitudes of the tensor and scalar power spectra at an arbitrary pivot scale k_* ≡ 0.05 Mpc^-1.
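For illustration, the energy-scale relation above is trivial to evaluate numerically; the following sketch (function name ours) tabulates the implied scale for a given r.

```python
import numpy as np

def inflation_energy_scale_GeV(r):
    """Potential energy scale V^(1/4) implied by a tensor-to-scalar
    ratio r, via V^(1/4) ~= 1.04e16 GeV * (r/0.01)^(1/4) (Lyth 1985)."""
    return 1.04e16 * (np.asarray(r, dtype=float) / 0.01) ** 0.25
```

For example, r = 0.036 (the BK18 limit discussed below) corresponds to V^{1/4} ≈ 1.4 × 10^16 GeV, illustrating how weakly the inferred energy scale depends on r.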
One of the main science goals of next-generation CMB experiments from space (LiteBIRD; Sugai et al. 2020) and from the ground, such as the Atacama Cosmology Telescope (ACTPol; Thornton et al. 2016), the South Pole Telescope (SPTpol; Austermann et al. 2012), CMB-S4 (The CMB-S4 Collaboration et al. 2022) and the Simons Observatory (SO; Ade et al. 2019), is to achieve a statistically significant detection of primordial B-modes down to r ∼ 10^-3. Galactic polarised foregrounds, noise, and instrumental systematics, however, pose a real challenge for this ambitious science goal. [★ E-mail: rmvd2@cam.ac.uk]
Currently, the tightest constraint on the tensor-to-scalar ratio comes from CMB polarization measurements by the BICEP/Keck experiment on scales of the recombination peak (Bicep/Keck Collaboration 2021, hereafter BK18). BK18 measures B modes at intermediate multipoles (20 < ℓ < 330) using roughly 400 deg^2 of the sky. They place a tight constraint of r_0.05 < 0.036 (95% C.L.) when fixing the ΛCDM cosmological model to Planck 2018 best-fit values (Planck Collaboration 2020c, hereafter PCP18) and using a pivot scale of k_* = 0.05 Mpc^-1. The recent analysis of the first flight of the SPIDER balloon-borne experiment gives a weaker, but independent, 95% confidence limit of r_0.05 < 0.19 from the recombination peak (Ade et al. 2022).
Since the Planck satellite observed the full sky, the Planck polarization maps contain information in the low multipole regime (2 ≤ ℓ < 30). Thus, with Planck one has potential access to the reionization bump feature caused by tensor perturbations. It is important to have a reliable analysis of the Planck B-mode constraints, since there is no other experiment at present that can access these large angular scales (which are unconstrained by BK18). [Footnote 1: To be consistent with BK18, we use a pivot scale k_* = 0.05 Mpc^-1 in this paper also. Note that PCP18 uses k_* = 0.002 Mpc^-1 as pivot scale. Assuming a scaling relation of (0.05/0.002)^{-r/8} and adopting the single-field-inflation consistency relation (n_t = -r/8), we estimate the deviations in r for different k_* to be ∼ 5-10% for r ∼ 0.1.] B-mode observations over a wide range of angular scales can provide tests of inflationary dynamics, for example, whether there was an early period of fast-roll inflation (see e.g. Contaldi 2014). The goal of the present paper is to establish an accurate upper limit on r from low-ℓ Planck polarisation data.
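The pivot rescaling in the footnote can be made concrete; the minimal sketch below (function name ours) keeps, like the footnote, only the tensor-tilt factor and neglects the scalar tilt.

```python
import numpy as np

def r_at_new_pivot(r_old, k_old=0.002, k_new=0.05):
    """Approximate rescaling of r between pivot scales (in Mpc^-1),
    keeping only the tensor tilt n_t = -r/8 from the single-field
    consistency relation: r(k_new) = r(k_old) * (k_new/k_old)**n_t.
    A sketch only; the exact mapping also involves n_s."""
    n_t = -r_old / 8.0
    return r_old * (k_new / k_old) ** n_t
```

For r_0.002 = 0.1 this gives r_0.05 ≈ 0.096, a ∼4% shift; folding in the scalar tilt via an extra factor (k_new/k_old)^{1-n_s} with n_s = 0.96 instead shifts r upwards by ∼9%, bracketing the 5-10% quoted in the footnote.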
Recently, tight constraints on r using Planck low-ℓ maps and simulations from the NPIPE processing pipeline (Planck Collaboration 2020d) have been presented in Tristram et al. (2021, 2022). Using the reionization bump spectrum in the multipole range 2 ≤ ℓ ≤ 35, and allowing negative values of r_0.05, they found the constraint

r_0.05 = -0.014 (+0.108, -0.111) (Tristram et al. 2021)    (2)

at the 95% confidence level (C.L.). Extending the multipole range to 2 ≤ ℓ ≤ 150, thus including data from the recombination bump, they found r_0.05 < 0.158 (95% C.L.). Combining temperature and polarisation data and adding a high-ℓ CMB temperature likelihood gave a surprisingly tight upper limit of r_0.05 < 0.056 (95% C.L.) (Tristram et al. 2021). Combining the NPIPE and BK18 likelihoods, Tristram et al. (2021) find r_0.05 < 0.032 (95% C.L.), i.e. the addition of Planck produces a negligible improvement to constraints from BK18. The Tristram et al. (2021) results disagree strongly with the previous analysis of Planck polarization maps in Planck Collaboration (2020b), which found a relatively weak constraint of r_0.002 < 0.41 (95% C.L.) from Planck B-mode measurements at low multipoles. This is much weaker than the Planck constraint on r from temperature measurements alone of r_0.002 < 0.102 (PCP18). At low multipoles, the Tristram et al. (2021) analysis uses a heuristic low-multipole EE, BB, EB likelihood motivated by the Hamimeche-Lewis ansatz (Hamimeche & Lewis 2008, hereafter HL08) with covariance matrices computed from 400 end-to-end numerical simulations. A recent analysis presented in Beck et al. (2022), however, shows that inaccuracies in simulation-based noise covariance matrices (NCMs) can bias results towards low values of r. Further, Beck et al. (2022) repeated the Planck analysis of Tristram et al. (2021) by 'conditioning' the numerical covariance matrices (i.e. setting off-diagonal terms to zero), finding significantly weaker constraints of r_0.05 < 0.13 (95% C.L.)
when combined with a high-ℓ likelihood. The weaker constraint comes primarily from the impact of conditioning on the Tristram et al. (2021) low multipole likelihood. Campeti & Komatsu (2022) found similar results to Beck et al. (2022) from a profile likelihood analysis.
In this paper, our aim is to reassess the Planck constraint on r using only the low 'reionization' range of multipoles (2 ≤ ℓ < 30). It is important to establish a reliable limit from this multipole range, since it is not covered by BK18 (BK18 has much higher signal-to-noise than Planck over the multipole range of the recombination feature, and so the Planck B-mode constraints over that range offer little more than a crude consistency test of BK18).
Determining r from large angular scale CMB data is challenging. Galactic polarised foregrounds need to be subtracted to high accuracy, and instrument noise and systematics reduced to low levels, to be able to set interesting limits on primordial B-modes. In this paper we use NCMs that were introduced in de Belsunce et al. (2021, hereafter B21), capturing aspects of systematic effects from end-to-end simulations. We employ the Bayesian parametric component separation method from de Belsunce et al. (2022, hereafter B22), instead of subtracting single-coefficient foreground templates. This foreground removal scheme propagates uncertainties from cleaning through to effective NCMs for the CMB, which are then used in the likelihoods. We constrain the tensor-to-scalar ratio r using two approaches: (i) a semi-analytical cross-spectrum-based likelihood-approximation scheme, momento, using all available polarisation cross-spectra between the half-mission Planck maps, namely EE, BB, EB and BE, and (ii) a polarisation-only pixel-based likelihood, pixLike, using full-mission polarised Planck maps. Each of these likelihood schemes has a sound theoretical basis.
We assume adiabatic perturbations, with primordial power spectra for the scalar (s) and tensor (t) modes given by

P_s(k) = A_s (k/k_*)^{n_s - 1},    (3)
P_t(k) = A_t (k/k_*)^{n_t},    (4)

with constant spectral indices (i.e. no 'running', d n_{s,t}/d ln k = 0). The tensor-to-scalar ratio is defined as the ratio of the perturbation amplitudes, r ≡ A_t/A_s, at some chosen pivot scale k_* ≡ 0.05 Mpc^-1. As in PCP18, we fix the tensor spectral index in terms of the tensor-to-scalar ratio via the single-field-inflation consistency relation n_t = -r/8. This paper is organised as follows. Section 2 discusses the Planck data that we employ along with the required pixel-pixel noise covariance matrices. The likelihoods are introduced in Sec. 3. Tests of the likelihoods on simulations are given in Sec. 4. Results using Planck polarisation data are presented in Sec. 5 and our conclusions are summarised in Sec. 6. Two appendices address aspects of the noise modelling.
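The power-law spectra and consistency relation above can be written out directly; the A_s value below is illustrative only (the paper instead fixes the combination 10^9 A_s e^{-2τ}).

```python
import numpy as np

K_STAR = 0.05  # pivot scale k_* in Mpc^-1

def P_scalar(k, A_s=2.1e-9, n_s=0.96):
    # Scalar spectrum: A_s (k/k_*)^(n_s - 1), no running.
    return A_s * (k / K_STAR) ** (n_s - 1.0)

def P_tensor(k, r=0.1, A_s=2.1e-9):
    # Tensor spectrum with A_t = r * A_s at the pivot scale and
    # the single-field consistency relation n_t = -r/8.
    n_t = -r / 8.0
    return r * A_s * (k / K_STAR) ** n_t
```

At k = k_* the ratio P_tensor/P_scalar recovers r by construction, and both spectra are red-tilted (decreasing with k) for r > 0.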

DATA
We use the Planck 2018 Low Frequency Instrument (LFI) maps, as processed with the Madam map-making algorithm, and Planck High Frequency Instrument (HFI) data, as processed with the SRoll2 map-making algorithm. The Madam and SRoll2 map-making algorithms are described in detail in Planck Collaboration (2016, 2020a) and Delouis et al. (2019). Recently, two new sets of Planck maps have been introduced, NPIPE (Planck Collaboration 2020d, hereafter NPIPE) for LFI and HFI jointly, and BeyondPlanck (BeyondPlanck Collaboration 2020, hereafter BeyondPlanck) for LFI alone. We extend our analysis to NPIPE data products in Sec. 5.1.

Map compression
We are interested in a signal that affects the polarised CMB anisotropies on large angular scales corresponding to multipoles 2 ≤ ℓ ≤ 30. Therefore, we degrade the high-resolution Planck maps with HEALPix parameter N_side = 2048 (corresponding to 5.03 × 10^7 pixels) and perform the relevant matrix and map computations at low resolution (lr) with N_side^lr = 16 (corresponding to 3072 pixels). This makes our analysis computationally tractable. In this paper, we closely follow the analysis methodology described in B21 and B22 and so describe only the key steps briefly here.
We first degrade the high resolution Planck Q and U maps. To do this, we apply an apodised mask with f_sky = 0.85 to suppress the Galactic plane region. (This procedure is required for the foreground-contaminated CMB sky maps to avoid smearing high amplitude foreground emission from the Galactic plane to high Galactic latitudes.) The maps are then smoothed using a harmonic-space smoothing operator with ℓ_1 = N_side^lr and ℓ_2 = 3 N_side^lr. We apply the smoothing operator and a HEALPix pixel window function at the map level to remove small-scale power, as discussed in B21.
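We do not reproduce the exact B21 smoothing kernel here, but a typical taper of this kind, unity up to ℓ_1 = 16 and rolling off to zero at ℓ_2 = 48, can be sketched as follows (the cosine form is our assumption, for illustration):

```python
import numpy as np

def smoothing_window(lmax, l1=16, l2=48):
    """Illustrative harmonic-space taper: unity for ell <= l1, a
    cosine roll-off for l1 < ell < l2, and zero for ell >= l2.
    The exact operator used in B21 may differ in detail."""
    ell = np.arange(lmax + 1, dtype=float)
    w = np.ones(lmax + 1)
    taper = (ell > l1) & (ell < l2)
    w[taper] = 0.5 * (1.0 + np.cos(np.pi * (ell[taper] - l1) / (l2 - l1)))
    w[ell >= l2] = 0.0
    return w
```

Multiplying the spherical-harmonic coefficients of the masked maps by such a window (together with the pixel window function) removes power above the low-resolution band limit before repixelisation at N_side^lr = 16.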

Pixel-pixel noise covariance matrices
The fidelity of our posteriors on cosmological parameters depends on the noise and systematics modelling entering into the likelihoods. Therefore, we fit an NCM at each frequency to end-to-end simulations, introducing large-scale modes to capture systematics. Such modes are required to describe large-scale polarisation data from HFI, which are affected by analogue-to-digital-converter non-linearities. The fitting scheme has been tested in the context of large-angular-scale CMB inference of the optical depth to reionization parameter in B21.
In more detail, we fit parametric models to sample covariance matrices computed from a number of end-to-end simulations of Madam (for LFI) and SRoll2 (for HFI) data, assuming the noise between frequencies to be independent. We omit 100 end-to-end simulations from the fitting, reserving them to test our likelihood approximations. The sample noise- and systematics-only covariance matrix is constructed as

N̂ = (1/N_sims) Σ_i (n_i - n̄)(n_i - n̄)^T ,

with the overbar denoting an average over simulations and n_i the simulated sky maps for the Q and U Stokes parameters. A smoothed template n̄ from the simulation maps is first subtracted when computing this covariance.
We approach the problem of fitting a model to each N̂ as a maximum likelihood inference problem. We assume a Gaussian probability distribution for each noise realisation n,

-2 ln L(n|M) = n^T M^{-1} n + ln det M + const,    (5)

where M is the model for the noise covariance matrix. We take M to consist of two terms,

M = α N_0 + Y Λ Y^T .    (6)

Here the parameter α scales the low resolution (lr) map-making covariance matrix N_0 and Y Λ Y^T models additional large-scale effects. The lr covariance matrices N_0 were constructed as part of the Planck FFP8 end-to-end simulation effort. They contain structure representing the scanning strategy, detector white noise and '1/f'-type noise, and are used as the starting point for our fitting procedure. The additional term Y Λ Y^T is an outer product of low-multipole spherical harmonics Y weighted with covariance Λ. We solve analytically for Λ as a function of α; see Eq. (29) in B21. To solve for M, we minimise the negative log-likelihood from Eq. (5) with respect to α and hence solve for the desired NCM. We refer the reader to B21 for details. [Footnote 5: This corresponds to a smoothed mean of the maps, i.e. the maximum likelihood solution of the map-making equation; see e.g. Tegmark (1997).]
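A stripped-down version of this fit, with only the scaling parameter and no large-scale modes (dimensions, matrices and seed all invented for illustration), can be sketched as follows; for a model M = αN_0 the Gaussian negative log-likelihood is minimised analytically.

```python
import numpy as np

rng = np.random.default_rng(0)
npix, nsim = 20, 500

# Toy "map-making" base covariance N0 (symmetric positive definite).
A = rng.standard_normal((npix, npix))
N0 = A @ A.T + npix * np.eye(npix)

# Draw noise simulations with true covariance 1.3 * N0.
true_scale = 1.3
L = np.linalg.cholesky(true_scale * N0)
n = L @ rng.standard_normal((npix, nsim))

# Sample covariance N_hat = sum_i (n_i - nbar)(n_i - nbar)^T / nsim.
nbar = n.mean(axis=1, keepdims=True)
N_hat = (n - nbar) @ (n - nbar).T / nsim

# For M = alpha * N0, minimising the Gaussian negative log-likelihood
# gives alpha = tr(N0^{-1} N_hat) / npix (a one-parameter sketch of the
# B21 fit, which additionally solves for the large-scale Y-mode term).
alpha = np.trace(np.linalg.solve(N0, N_hat)) / npix
```

With 500 simulations the recovered scaling lands close to the input value of 1.3, up to sampling noise.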

Galactic polarised foreground cleaning
Foregrounds pose a significant challenge for the measurement of low amplitude primordial B-modes. In the present paper, we use a Bayesian parametric component separation method. We assume that the main Galactic polarised foregrounds, namely synchrotron and thermal dust radiation, have spectral energy distributions that can be described with a small number of parameters; see B22 for a detailed analysis in the context of Planck data.
We assume that the observed sky can be modelled as d = Am + n, where the measured data vector d consists of stacked sky maps, the sky signal Am consists of a scaling (mixing) matrix A and the model m, and n is the instrumentation and measurement noise. The sky signal encodes the primordial CMB together with our foreground modelling, namely a power law for synchrotron and a modified blackbody emission law for thermal dust. Eq. (3) in B22 summarises our foreground model.
Our aim is to disentangle Galactic foregrounds from the primordial CMB and propagate the uncertainties from the cleaning procedure through to the NCMs and the subsequent parameter constraints. To do so, we estimate the posterior distribution P(A, m|d) of the sky signal Am given the observed data d in a Bayesian framework. The procedure proceeds iteratively, performing a multi-dimensional Newton-Raphson (NR) minimisation on the spectral index parameters; henceforth we will call this procedure the NR method.
A key aspect of our approach is the introduction of a spatial correlation length for the spectral indices, penalizing variations below this scale but allowing variations on larger scales across the sky. The induced small-scale rigidity 'regularizes' the recovered component map solutions, reducing any overfitting of noise. The total covariance matrix output by the procedure contains correlations both within and between the model components, naturally incorporating information from the pixel-pixel frequency noise correlation matrices mentioned in Sec. 2.2 above.
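The linear step of such a parametric fit, for fixed spectral indices, reduces to a generalised-least-squares solve in which the cleaning uncertainty is propagated into a component covariance. A toy single-pixel sketch (frequencies, amplitudes, spectral index and noise levels are all invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy single-pixel, 4-frequency model: a CMB component (flat in these
# units) plus a synchrotron power law with a fixed index, whereas the
# NR method of B22 iterates on the index.
nu = np.array([30.0, 70.0, 100.0, 143.0])
Amix = np.column_stack([np.ones(4), (nu / 30.0) ** -3.0])  # mixing matrix
m_true = np.array([10.0, 5.0])          # [CMB, synchrotron] amplitudes
N = np.diag([4.0, 1.0, 0.5, 0.5])       # frequency noise covariance

d = Amix @ m_true + rng.multivariate_normal(np.zeros(4), N)

# Generalised least squares: maximum-likelihood amplitudes, with the
# cleaning uncertainty propagated into cov_m = (A^T N^-1 A)^-1.
Ninv = np.linalg.inv(N)
cov_m = np.linalg.inv(Amix.T @ Ninv @ Amix)
m_hat = cov_m @ Amix.T @ Ninv @ d
```

The CMB row/column of cov_m is the analogue of the effective NCM contribution that the full scheme feeds into the likelihoods.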
Our main constraints on r in the present paper will be derived using NR-cleaned CMB maps: from LFI we use the 30 and 70 GHz Madam frequency channels and from HFI we use the 100, 143, 217 and 353 GHz SRoll2 frequency channels. For inference in the map domain, we use full-mission maps. For the likelihood-approximation scheme, we measure the quadratic cross spectra (QCS) between CMB maps made from differing half-mission (HM1/HM2) sets of Planck maps.

LIKELIHOODS
We now briefly introduce and compare the two likelihoods that we shall use to constrain the tensor-to-scalar ratio r from low-ℓ Planck data. In Sec. 3.1 we present a semi-analytic likelihood approximation scheme for cross-spectrum estimators based on the principle of maximum entropy, called momento. In Sec. 3.2 we present an 'exact' polarisation-only pixel-based likelihood, called pixLike. Effects of regularising noise are discussed in Appendix A.

Cross-spectrum likelihood-approximation scheme
Our first likelihood is momento, built according to the principles of the 'General Likelihood Approximate Solution Scheme' (GLASS; Gratton 2017). The main rationale is to construct a likelihood using only aspects of the data that one is relatively confident in, here quadratic cross-spectra.
GLASS is then useful precisely because it allows one to compute posteriors when exact analytic likelihoods for such statistics are either unknown or too expensive to compute. momento was extensively tested on simulations and data for the inference of the optical depth to reionization, τ, from low-ℓ CMB temperature and polarisation data, as presented in B21. GLASS may be thought of as approximating the sampling distribution of the chosen data by the least presumptive one consistent with a chosen set of the moments of the data, according to the principle of maximum entropy. The moments may ideally be obtained analytically from the model in question but could also be obtained from forward simulations. If the models considered are continuous functions of some parameters, θ say, one need not explicitly construct this distribution. Rather, one can compute an approximation of its gradient, to leading order

∂ ln L/∂θ = (∂⟨s⟩/∂θ)^T ⟨s s^T⟩_c^{-1} (s - ⟨s⟩).    (7)

This procedure is generally applicable to a multitude of statistics s_i, i ∈ [1, . . . , N], and s above denotes a vector of these statistics. The ⟨. . .⟩ denotes the ensemble average of the enclosed quantity, while ⟨. . .⟩_c denotes the appropriate cumulant or connected moment. One can then integrate along a path in parameter space to obtain the difference in log-likelihood between any two models. We refer the reader to Gratton (2017) and B21 for a detailed description of the method.
To summarise, momento uses near-optimal quadratic cross-spectra (QCS) to compress the CMB data down to a set of power spectrum elements (here we typically consider multipoles ℓ = 2-30). A fiducial angular power spectrum, C_ℓ^fid, is needed in order to compute the QCS of the data. Subsequently, at each step in the likelihood evaluation momento takes the following steps: (i) take an input set of theory C_ℓ's; (ii) compute the residual between fiducial and theory spectra, ΔC_ℓ = C_ℓ - C_ℓ^fid; (iii) choose the path C_ℓ^fid + λ ΔC_ℓ, with λ ∈ [0, 1]; (iv) evaluate the gradients of the log-likelihood using Eq. (7) from computed moments, as a function of λ; (v) perform a numerical integration using these gradients to evaluate the change in log-likelihood going from C_ℓ^fid to C_ℓ. In the present paper, we constrain r with momento by using all available QCS polarisation spectra, EE, BB, EB and BE, on foreground-cleaned Planck polarisation maps.
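The path-integration logic of steps (i)-(v) can be illustrated with a one-statistic toy model whose sampling distribution is Gaussian with fixed variance (all numbers invented); in this case the integrated gradient reproduces the exact log-likelihood difference.

```python
import numpy as np

s_data, sigma = 1.2, 0.5     # "measured" statistic and its spread
C_fid, C_th = 1.0, 1.6       # fiducial and theory predictions
dC = C_th - C_fid            # step (ii): residual

def dlogL_dlam(lam):
    # step (iv): gradient of log L along the path C_fid + lam * dC
    mu = C_fid + lam * dC
    return dC * (s_data - mu) / sigma**2

# step (v): trapezoidal integration of the gradient over lam in [0, 1]
lam = np.linspace(0.0, 1.0, 1001)
g = dlogL_dlam(lam)
delta_logL = np.sum(0.5 * (g[1:] + g[:-1]) * np.diff(lam))

# For this Gaussian toy the path integral matches the exact difference:
exact = ((s_data - C_fid) ** 2 - (s_data - C_th) ** 2) / (2.0 * sigma**2)
```

In momento the statistic vector is the set of QCS band powers and the moments come from the model and the fitted NCMs, but the path-integration structure is the same.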

Low multipole pixel-based likelihood
Under the assumption that signal and noise are Gaussian and the noise covariance matrix is known accurately, it is possible to write down a pixel-based likelihood for low resolution CMB maps; see e.g. Page et al. (2007). This likelihood is exact in real space, even for incomplete sky coverage. Although it is theoretically well motivated to assume that the primordial CMB signal is inherently Gaussian, noise properties and systematics may lead to a breakdown of the Gaussian assumption. Nevertheless, since a pixel-based likelihood is easy to compute, it provides a cross-check of the momento scheme. Pixel-based likelihoods have two further disadvantages: (i) the NCMs need to be known accurately; (ii) the contribution of individual multipoles to the measurement cannot be isolated or visualized.
The pixel-based polarisation-only likelihood is given by the probability of the data given the theoretical noise and signal model,

L(d|θ) ∝ |C|^{-1/2} exp[ -(1/2)(d - d̄)^T C^{-1} (d - d̄) ],    (8)

where d is the data vector and d̄ its smoothed mean (a map-based systematics offset determined from fits to the end-to-end simulations), both containing the polarisation Q and U maps. The model parameters are denoted by θ. The covariance matrix C consists of a signal S and noise N component, C = S + N. The signal matrix can be constructed following the procedure outlined in e.g. Tegmark & de Oliveira-Costa (2001).
Computing the pixel-based likelihood given in Eq. (8) requires calculating the signal covariance matrix as a function of the theoretical power spectra C_ℓ^th in question. For each pixLike evaluation L(d|θ) at some set of parameters θ, the determinant and the inverse of the covariance matrix are computed. However, both operations are of complexity O(N_pix^3), rendering the problem computationally expensive and only feasible at low resolution.
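A direct implementation of such a Gaussian pixel likelihood (a sketch with a function name of our choosing; the real pixLike operates on the stacked Q/U data vector with the fitted NCMs) makes the O(N_pix^3) cost explicit through a Cholesky factorisation:

```python
import numpy as np

def gaussian_pixel_loglike(d, dbar, S, N):
    """Log of the Gaussian likelihood with covariance C = S + N,
    evaluated stably via a Cholesky factorisation; the factorisation
    and solve are the O(Npix^3) operations discussed above."""
    C = S + N
    resid = d - dbar
    L = np.linalg.cholesky(C)           # C = L L^T
    x = np.linalg.solve(L, resid)       # whitened residual, L x = resid
    logdet = 2.0 * np.sum(np.log(np.diag(L)))
    return -0.5 * (x @ x + logdet + d.size * np.log(2.0 * np.pi))
```

One such evaluation is needed per theory point, since S depends on C_ℓ^th; this is why the analysis is only feasible at low resolution.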
For our default constraints from the pixel-based method we use the output and maps and covariance matrices from the foreground fitting scheme acting on all input maps. To allow for cross-checking, we also consider results from cleaned 100 GHz maps (i.e. from the foreground fitting with 143 GHz maps omitted) and from cleaned 143 GHz maps (i.e. from the foreground fitting with the 100 GHz maps omitted).

TESTS ON SIMULATIONS
To investigate robustness, we test both likelihoods on realistic simulations with correlated noise. We generate 100 Gaussian realisations of the CMB with an input τ = 0.06 and with (i) r = 0 and (ii) r = 0.3. We co-add these signal maps with realistic noise and systematics from SRoll2 end-to-end (E2E) simulations. We perform a two-dimensional scan in τ and r. When scanning over τ, we allow A_s to change whilst keeping 10^9 A_s exp(-2τ) = 1.870 fixed. For the two-dimensional scans we explore a broad parameter space, using a range of τ = 0.01-0.1 with Δτ = 0.005 and r = 0.0-1.5 with Δr = 0.02. The other parameters of the base ΛCDM cosmology are fixed to Planck 2018 best-fit values (PCP18) when generating theoretical angular power spectra C_ℓ^th. Posteriors for r after marginalizing over τ, normalised to unity peak height, are shown in Fig. 1 for pixLike and in Fig. 2 for momento. Each solid black line represents the posterior of a simulation and the red dashed line shows the geometric mean of the normalised posteriors, given by P̄ = (Π_{i=1}^{N} P_i/P_{i,max})^{1/N} with N = 100. The mean recovered upper limits (95% C.L.) for both tests are quoted in each figure legend. Both methods perform qualitatively correctly for the given test, i.e. the input values of r are recovered at the 2σ level and the mean upper limits on r move upwards by ∼ 0.2-0.5 when increasing r from 0 to 0.3. From the distributions shown in Figs. 1 and 2, pixLike appears to return slightly tighter constraints, with the 143 GHz pixLike likelihood appearing to be biased low and the momento likelihood biased slightly high.
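The geometric-mean summary statistic P̄ used in the figures is simple to compute; a minimal sketch (assuming strictly positive posteriors sampled on a common grid, one per row):

```python
import numpy as np

def geometric_mean_posterior(P):
    """P: array of shape (nsim, ngrid), one posterior per row.
    Peak-normalises each row and returns the geometric mean,
    P_bar = (prod_i P_i / P_i,max)^(1/nsim), evaluated in log
    space for numerical stability."""
    Pn = P / P.max(axis=1, keepdims=True)
    return np.exp(np.mean(np.log(Pn), axis=0))
```

Working in log space avoids underflow when multiplying a hundred narrow posteriors together.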
We have looked in more detail for biases in our likelihoods by analysing the number of posteriors peaking at (or below) our input value of r compared to the number peaking above that value. [Footnote 6: We remind the reader that for these testing purposes we use end-to-end simulations that were kept back from entering into the NCM fitting described in Sec. 2.2.] [Footnote 7: We cross-check that the input values of τ can be recovered with high fidelity when marginalising over r, and we obtain results matching figure 6 in B21.] [Footnote 8: Assuming a binomial distribution, we expect a scatter of √N/2 in the distribution of posterior peaks around the input values of r, i.e. for 100 simulations this corresponds to 5 or 10 peaks at the 1σ or 2σ level respectively.] For pixLike we obtain 61 and 48 posteriors that peak above zero for 100 and 143 GHz, respectively, for r = 0. Increasing r to 0.3 yields 46 and 24 posteriors that peak above r = 0.3. Note that the spread in the posteriors is smaller for the 143 GHz channel than for the 100 GHz one. For momento we obtain 68 posteriors that peak above zero for r = 0, while r = 0.3 leads to 72 posteriors peaking above the input value. The spread of the posteriors for momento is significantly larger than for pixLike, and the mean upper limits of the momento posteriors are higher by ∼ 0.2-0.5 compared to the pixLike ones. The geometric means of the posteriors, shown as the red dashed curves, paint a similar picture. We evaluate the geometric means P̄ at the input values of r. The likelihood ratios relative to the peak correspond to 1.3σ and 0.5σ deviations for the 100 and 143 GHz pixLike likelihoods respectively. For r = 0.3 we obtain 0.1σ and 0.9σ deviations, again for 100 and 143 GHz respectively. For momento, this test yields a difference of 1.2σ for r = 0 and 1.1σ for r = 0.3. This illustrates that the posteriors are behaving reasonably and are unbiased at the ∼ 1σ level.
To further investigate whether the noise modelling or the likelihood methods themselves may be responsible for these potential biases, we present additional tests in Appendix B. These tests used Gaussian noise realisations generated directly from the assumed NCMs rather than noise realisations from end-to-end simulations. Whilst these tests give us confidence that the likelihoods give unbiased results, they illustrate their sensitivity to the noise and systematics modelling captured in the covariance matrices. Both likelihoods perform reasonably well at the 2σ level.
The tests on simulations illustrate the challenges associated with constraining r from low-ℓ Planck data alone. The spread in the distribution of posteriors shows the need for high precision polarisation cross-spectrum measurements. These, in turn, depend sensitively on the signal-to-noise ratio in the data and on the handling of systematics in the noise description. Because of the sensitivity of our methods to the noise modelling, the tests described here suggest that our results on r should be interpreted as indicative rather than as rigorous 95% confidence limits. [Footnote 9: In the present analysis we explicitly do not introduce an 'eff' rescaling parameter as done in Tristram et al. (2021).] [Figure 2 caption: One hundred posteriors, normalised to unity peak height, for r (marginalised over τ) from a two-dimensional scan using the momento likelihood-approximation scheme. The CMB realisations have been co-added with realistic noise and systematics from SRoll2 end-to-end simulations. We use 100×143 full-mission cross-spectra with theory input values of r = 0 (left panel) and r = 0.3 (right panel). The red dashed curve is the geometric mean of the normalised posteriors.]

RESULTS
In this section, we present results for the tensor-to-scalar ratio r using the likelihoods pixLike and momento discussed above. To quantify the Planck constraints coming from low-multipole polarisation data and to reduce the computational burden, we perform two-dimensional scans in τ and r rather than combining with a high-multipole likelihood and scanning over the full set of ΛCDM parameters (which would return constraints on τ dominated by the fit to the temperature power spectrum). Throughout the paper, we fix the other cosmological parameters to those of the Planck best-fit base ΛCDM model (PCP18): H_0 = 67.04 km s^-1 Mpc^-1, Ω_b h^2 = 0.0221, Ω_c h^2 = 0.12, Ω_ν h^2 = 0.00064, 100θ_* = 1.0411, n_s = 0.96, and we vary A_s as a function of τ to keep 10^9 A_s e^{-2τ} = 1.870 fixed.
For our quoted constraints, we use NR-foreground-cleaned maps with Planck 2018 Madam data for LFI (30 and 70 GHz) and SRoll2 data for HFI (100, 143, 217, and 353 GHz), with the corresponding covariance matrices. pixLike uses full-mission polarisation Q and U maps, whereas momento uses half-mission polarisation maps. For momento the cleaning procedure is applied twice, once using polarisation maps from the first half of the mission (HM1) and then using maps from the second half of the mission (HM2). momento then computes HM1×HM2 EE, BB, EB, and BE quadratic cross spectra to constrain r. We find the following upper limits at the 95% confidence limit (C.L.):

r_0.05 < 0.408 (momento, Planck low-ℓ HM1×HM2),    (9a)
r_0.05 < 0.274 (pixLike, Planck low-ℓ full mission),    (9b)

the latter from the full-mission maps. So in comparison to momento, pixLike effectively uses auto-spectra in addition to cross-spectra and does not lose any other information in the compression from low-resolution maps to QCS, leading to a theoretical √2 improvement in signal-to-noise in the noise-dominated regime.
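The √2 factor quoted above can be checked with a toy Monte Carlo in the noise-dominated limit (all numbers illustrative): splitting the data into halves doubles the per-map noise variance, and the scatter of the cross estimator exceeds that of the noise-debiased auto estimator by √2.

```python
import numpy as np

rng = np.random.default_rng(2)
nsamp = 200_000
sigma_full = 1.0                           # full-mission noise rms
sigma_half = np.sqrt(2.0) * sigma_full     # each half has double variance

n1 = rng.normal(0.0, sigma_half, nsamp)    # half-mission 1 (noise only)
n2 = rng.normal(0.0, sigma_half, nsamp)    # half-mission 2 (noise only)
n_full = (n1 + n2) / 2.0                   # full-mission combination

cross = n1 * n2                            # cross estimator: no noise bias
auto = n_full**2 - sigma_full**2           # auto estimator, bias subtracted

ratio = np.std(cross) / np.std(auto)       # expected: sqrt(2) ~ 1.414
```

The cross estimator pays this √2 penalty in exchange for being insensitive to a misestimated noise bias, which is the trade-off between momento and pixLike described above.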
The NR-cleaned HM1×HM2 EE, BB, EB and BE cross spectra used in momento are shown in Fig. 4. The solid line shows a theoretical model with τ = 0.06 and r = 0 and, to guide the eye, the dashed line shows the same model but with r = 0.3 (close to the 95% C.L. upper limits of Eqs. (9a) and (9b)). The tensor power at the reionization bump scales as τ^2. The EE spectrum constrains τ and, given this, the constraint on r then comes almost entirely from the BB spectrum. The EB and BE spectra provide a consistency check since, in the absence of parity violating physics, these spectra should be identically zero. Indeed, eliminating the EB and BE spectra from momento we find a similar constraint (Planck low-ℓ EE, BB). Further eliminating the EE spectrum makes the constraint sensitively dependent on the (lower limit of the) τ prior (as τ is no longer constrained). For example, with a uniform prior 0.04 ≤ τ ≤ 0.1 we find r_0.05 < 0.439 with momento (Planck low-ℓ BB, τ prior as above).
The BB spectrum plotted in Fig. 4 is consistent with zero, though the negative points at ℓ = 5-7 may indicate residual systematic biases in the Planck BB spectra. This run of negative points would cause a momento posterior for r using ℓ = 2-7 to peak even more strongly around r = 0 than the one plotted in Fig. 3. The latter posterior, however, includes high points at ℓ = 8, 14 and 15 that favour positive values of r, partially compensating the downward pull.
For our constraints on the optical depth to reionization, τ, in B22, we obtained consistent results for momento and pixLike from NR-foreground-cleaned Planck maps. Together with the rough agreement seen here for r also, this gives us some confidence that the NCMs and smoothed templates we have constructed capture noise and systematic effects well enough for outputs from the pixel-based scheme to be of interest, at least when using SRoll2 products.

Comparison to NPIPE results
In the following we apply our low-multipole polarised CMB noise fitting and inference pipeline to the NPIPE maps (Planck Collaboration 2020d). The NPIPE maps use slightly more time-ordered data than the maps produced for the Planck 2018 data release and apply different algorithms to clean and calibrate the time-ordered data. In addition, the NPIPE map-making procedure applies a polarisation prior, effectively suppressing power at the largest angular scales (ℓ < 10); see Planck Collaboration (2020d) for details. The CMB maps thus have to be corrected by transfer functions measured from end-to-end simulations. These transfer functions are defined as the ratio of the estimated to the theoretical input power spectrum in harmonic space. For an exact treatment, the maps would have to be deconvolved by transfer functions which depend on multipole ℓ and on the varying cosmology, i.e. on τ and r. For the present analysis, however, we follow the approach used in Tristram et al. (2021) and apply the EE transfer function provided by NPIPE. This has been computed using a sky mask with f_sky = 0.6 (footnote 10), and we analogously apply the same transfer function to BB, EB and BE (see appendix A in Tristram et al. 2021). While this is clearly sub-optimal, it allows for an approximate comparison of the SRoll2 and NPIPE noise levels and illustrates the challenges associated with using NPIPE for inference at the lowest multipoles (ℓ < 10) (footnote 11).
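The transfer-function correction amounts to a simple division in harmonic space; a toy sketch (the suppression shape, spectrum and simulation count are invented, standing in for the NPIPE polarisation-prior effect):

```python
import numpy as np

rng = np.random.default_rng(3)
lmax, nsim = 29, 1000
ell = np.arange(2, lmax + 1)

# Toy input spectrum and a hypothetical low-ell suppression.
cl_in = 1.0 / ell**2
suppression = 1.0 - 0.5 * np.exp(-ell / 5.0)

# Mean estimated spectrum over nsim end-to-end sims (scatter emulated
# by a chi^2 draw); the transfer function is the ratio to the input.
cl_est = suppression * cl_in * rng.chisquare(2 * nsim, ell.size) / (2 * nsim)
T_ell = cl_est / cl_in

# Debias a "data" spectrum affected by the same suppression.
cl_data = suppression * cl_in
cl_corrected = cl_data / T_ell
```

The division recovers the input spectrum up to simulation scatter; because the Monte Carlo noise in T_ell is amplified wherever the suppression is strong, the correction is least reliable exactly at the lowest multipoles, as discussed above.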
For the present NPIPE analysis, we construct NCMs using the noise fitting procedure presented in Sec. 2.2. As the base matrix we use the NPIPE low-resolution map-making covariance matrix, N_0, and add large-scale modes into N_0. Using Eq. (5) we fit the base matrix to the covariance matrix computed from NPIPE end-to-end simulations. This procedure is similar to the modelling of NCMs from the SRoll2 simulations, but for NPIPE we find lower noise levels at low multipoles compared to SRoll2, resulting in error bars for the NPIPE quadratic cross-spectra that are tighter by a factor of 1.5-3. The QCS measurements for NPIPE full-mission 100 × 143 GHz data are shown in black in Fig. 4.
In general, the QCS estimates for NPIPE are less noisy than for SRoll2, especially at multipoles ℓ > 10, i.e. where the effect of the transfer function is below 10%. At the lowest multipoles 2 ≤ ℓ ≤ 4, however, several strong outliers appear in all polarisation spectra. To constrain cosmological parameters, we perform a two-dimensional scan over r and τ using all the available NPIPE polarisation spectra (EE, BB, and EB) for 100 × 143 full-mission data, yielding a 95% confidence limit for the tensor-to-scalar ratio of r_0.05 < 0.38 from momento, which is shown in Fig. 5. To assess the impact of the very strong ℓ = 3 outlier in the cross-spectrum, we also compute r using the multipole range 4 ≤ ℓ ≤ 29 and obtain, as expected, a tighter result of r < 0.33 (95% C.L.).
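A two-dimensional scan of this kind reduces to gridding the likelihood in (r, τ), marginalising over τ, and reading off the 95% quantile of the 1-D posterior. The sketch below assumes flat priors and uniform grids, with an invented Gaussian toy likelihood standing in for momento.

```python
import numpy as np

def upper_limit_95(r_grid, tau_grid, loglike):
    """Marginalise a 2-D (r, tau) log-likelihood grid over tau (flat
    priors, uniform grids assumed) and return the 95% upper limit on r."""
    like = np.exp(loglike - loglike.max())  # subtract max for stability
    post_r = like.sum(axis=1)               # marginalise over tau
    cdf = np.cumsum(post_r)
    cdf /= cdf[-1]
    return r_grid[np.searchsorted(cdf, 0.95)]

# toy Gaussian likelihood, centred at r = 0 with sigma_r = 0.2
r = np.linspace(0.0, 1.0, 2001)
tau = np.linspace(0.02, 0.10, 81)
R, T = np.meshgrid(r, tau, indexing="ij")
logL = -0.5 * (R / 0.2) ** 2 - 0.5 * ((T - 0.055) / 0.007) ** 2
r95 = upper_limit_95(r, tau, logL)
assert abs(r95 - 0.392) < 0.01  # half-Gaussian: 1.96 * sigma_r
```

For a likelihood peaked at r = 0 with the positivity prior r ≥ 0, the 95% limit lands at roughly 1.96σ, which the toy recovers.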
We do not apply the Bayesian component separation method to NPIPE frequency maps, discussed in Sec. 2, given that we do not have the required transfer functions. We did, however, compute constraints on r with the pixel-based likelihood using template-cleaned NPIPE frequency maps at 100 GHz and at 143 GHz. These both returned posteriors very tightly peaked at zero (r ≲ 0.12 and r ≲ 0.11 at 95% C.L., respectively). This behaviour left us concerned that our noise covariance matrices and transfer-function handling are not accurate enough for constraining r using our pixel-based scheme with NPIPE template-cleaned maps. See App. A for a further discussion of the sensitivity of pixLike to the NCMs and regularising noise.

Footnote 10: We note that our sky mask uses f_sky ≈ 0.54.
Footnote 11: We emphasise that while the present treatment of the transfer function is approximate, we do not perform the NR-cleaning procedure on NPIPE maps. This treatment suffices for a cross-spectrum-based comparison of our SRoll2-based analysis to NPIPE-based ones (presented in e.g. Planck Collaboration 2020d; Tristram et al. 2021, 2022; Beck et al. 2022).

Figure 5. Posteriors for r (marginalised over τ) from two-dimensional low-ℓ scans, using a cross-spectrum-based likelihood (momento), on template-cleaned 100 × 143 full-mission NPIPE maps. momento uses all the NPIPE polarisation spectra (EE, BB, and EB). The dashed posterior uses the full multipole range (2 ≤ ℓ ≤ 29), giving r < 0.38 (95% C.L.), and the solid posterior removes the first two multipoles (4 ≤ ℓ ≤ 29) of all spectra, since these are very noisy and most affected by the transfer function.
To cross-check our results, we compute constraints on the optical depth to reionization and obtain, using the same NPIPE polarisation cross-spectra, a value of τ = 0.055 ± 0.0056 (NPIPE low-ℓ EE, BB, EB).
Our result is in agreement with the value quoted in Planck Collaboration (2020d) obtained using only the EE cross-spectrum, albeit ∼ 0.5σ higher, which gives us some confidence in the computed QCS estimates, the noise fitting procedure, and the likelihood in this situation.

SUMMARY AND CONCLUSION
Measuring primordial B-modes, parametrised by the tensor-to-scalar ratio r, from large-angular-scale (2 ≤ ℓ ≤ 30) polarised CMB data is challenging, given the low amplitude of the signal in comparison to Galactic polarised foregrounds and potential sources of systematic error. The determination of r is a key science goal of upcoming CMB experiments and crucial to our understanding of inflation and the dynamics involved. This paper has coupled the significantly improved map-making algorithms for HFI (Delouis et al. 2019) with mathematically well-motivated techniques for foreground cleaning and likelihood construction. We have used maps that have been cleaned using a Bayesian parametric foreground fitting model (B22), and propagated the uncertainties from the cleaning through to the final constraints on r. We have compared two likelihoods: (i) a polarisation-only pixel-based likelihood (pixLike), formally exact even on a masked sky, and (ii) a likelihood-approximation scheme (momento) that uses all the available polarisation cross-spectra, namely EE, BB, and EB, to effectively construct a least presumptive sampling distribution (Gratton 2017). The corresponding constraints are given in Eqs. (9a) and (9b). The pixel-based likelihood yields tighter results by ∼ 30% because it uses more data than our implementation of momento, which uses only cross-spectra for robustness considerations. It is reassuring that the pixel-based likelihood gives consistent results given our simplified models for the noise covariance matrices and the removal of systematic template maps derived from end-to-end simulations. Additionally, we extend our cross-spectrum-based likelihood to full-mission 100 × 143 GHz NPIPE data, quoted in Eq. (12), and obtain similar constraints as for SRoll2.
Our main conclusion is that both likelihoods are in agreement with each other but do not support the claims of very tight constraints on r from Planck data alone made in Tristram et al. (2021, 2022). This is an important result because at present Planck is the only CMB experiment that can probe low multipoles in polarisation. The likelihood approximation used by Tristram et al. (2021, 2022) is heuristic and has difficulties in handling cross-spectral multipoles that happen to be negative, and their results are unusually sensitive to the multipole range ℓ ∼ 30–40. In addition, a reanalysis of the above results by Beck et al. (2022) using 'conditioned' matrices, where effectively off-diagonal terms are set to zero, substantially broadens the constraints on r from Tristram et al. (2021, 2022) on NPIPE data, leading to results consistent with this paper. Our results agree with what we would expect from estimates of the noise levels in the Planck data. Given the noise levels and residual systematics in the Planck polarisation maps, it is unlikely that the large-angular-scale constraints on r from Planck can be improved with further data processing. Tighter limits on the optical depth to reionization and large-angle B-modes will therefore require a new space mission. The CMB satellite LiteBIRD (Sugai et al. 2020) will be optimised for such measurements and will require careful systematics control, likelihood construction and cosmological data analysis.

Figure A1. Marginalised posteriors for r (marginalised over τ) from a two-dimensional low-ℓ scan using the pixel-based likelihood (pixLike) for Planck polarisation data. The data have been cleaned using the NR method. The results for r are shown as a function of the regularising noise "reg noise" (in units of µK) that is squared and added to the diagonals of the NCMs; e.g. reg noise = 0.10 gives r < 0.181 (95% C.L.). The solid curve is our best estimate for r without any additional regularising noise added.
The dashed (reg noise = 0.01) and dash-dotted (reg noise = 0.1) curves illustrate two levels of regularising noise. The area under the posteriors is normalised to unity. The upper limit on r is quoted at the 95% confidence limit (C.L.).
(Note that the corresponding constraints on r found after marginalising over τ are indistinguishable.)

Regularising noise is typically added for one of two reasons. The first is to render the covariance matrices invertible (typically one uses smoothed maps to prevent aliasing, and the corresponding smoothed covariance matrices are then necessarily rank deficient). The second is to render a likelihood insensitive to the form of the theory spectrum above a certain scale. This is desired when combining a low-ℓ likelihood with a high-ℓ one, stopping differences between the theory and the data being too highly penalised in the overlap region. Depending upon the expected signal, the second application typically requires significantly more noise to be added than the first, and to avoid biases it also needs an additional compensating term in the numerator of the likelihood (Gratton 2011).
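The first use of regularising noise can be made concrete with a short sketch: adding the squared noise level to the diagonal renders a rank-deficient covariance positive definite, hence invertible. The matrices below are small toys, not Planck NCMs.

```python
import numpy as np

def add_regularising_noise(ncm, reg_noise):
    """Add white regularising noise, squared, to the diagonal of a
    pixel-pixel noise covariance matrix (NCM)."""
    return ncm + reg_noise ** 2 * np.eye(ncm.shape[0])

# a smoothed, rank-deficient covariance becomes invertible after regularisation
rng = np.random.default_rng(1)
v = rng.standard_normal((5, 2))
rank_deficient = v @ v.T                       # rank 2: singular, not invertible
regularised = add_regularising_noise(rank_deficient, 0.01)
assert np.linalg.matrix_rank(rank_deficient) < 5
assert np.all(np.linalg.eigvalsh(regularised) > 0)  # now positive definite
```

The trade-off discussed in the text is visible here: the larger the regularising noise, the better conditioned the matrix, but the more the likelihood is desensitised to the data.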
B-modes have small amplitudes compared to the magnitude of other present signals and foregrounds. In Fig. A1 we illustrate the effect of adding varying amounts of regularising noise to the NCMs (without any addition of a compensating term). We note that increasing the regularising noise from 0.0 to 0.1 pushes the upper limits on r closer to zero and yields artificially tight constraints on r. As we are not coupling the low-ℓ likelihood to a high-ℓ one for the B-mode-focused analysis presented here, in contrast with our E-mode-focused analysis of B21, we need not be concerned about overlap effects. Hence we proceed with minimal amounts of regularising noise.
These findings illustrate the importance of accurate and well-tested NCMs for parameter inference, in a complementary manner to the findings of Beck et al. (2022).

APPENDIX B: TESTS ON GAUSSIAN REALISATIONS OF THE NCMS
Given the results of our tests on likelihoods using end-to-end simulations, showing potentially moderate effects of bias, we performed further investigations of the likelihoods, as detailed here. The aim was to distinguish between (i) issues arising from the noise in the simulations being different from, or more complicated than, that described by the NCMs and (ii) issues in the likelihood schemes themselves. Therefore we generated constraints from new sets of noise realisations, with the noise this time being generated as Gaussian realisations directly from the NCMs themselves. The new realisations are co-added with the same Gaussian realisations of the CMB generated for the tests performed in Sec. 4.
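Drawing correlated noise realisations directly from an NCM is conveniently done via its Cholesky factor. The sketch below uses a small, dense, positive-definite toy matrix; the actual NCMs are far larger, but the construction is the same.

```python
import numpy as np

def gaussian_noise_realisations(ncm, n_real, seed=0):
    """Draw Gaussian noise maps whose covariance is the given pixel-pixel
    noise covariance matrix, via its Cholesky factor L (ncm = L L^T)."""
    rng = np.random.default_rng(seed)
    L = np.linalg.cholesky(ncm)
    # white draws of shape (n_real, n_pix), coloured by L
    return rng.standard_normal((n_real, ncm.shape[0])) @ L.T

# toy positive-definite NCM; check the sample covariance converges to it
ncm = np.array([[2.0, 0.5, 0.0, 0.1],
                [0.5, 1.5, 0.2, 0.0],
                [0.0, 0.2, 1.0, 0.3],
                [0.1, 0.0, 0.3, 2.5]])
sims = gaussian_noise_realisations(ncm, 200_000)
sample_cov = np.cov(sims, rowvar=False)
assert np.allclose(sample_cov, ncm, atol=0.05)
```

By construction, such realisations match the NCM exactly in distribution, which is what isolates likelihood issues from any mismatch between the NCMs and the end-to-end simulation noise.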
The results for the two-dimensional tests are shown in Fig. B1, for momento on the left and pixLike on the right. Each plot shows 100 posteriors for r = 0 and r = 0.3 on Gaussian realisations of the respective NCMs. We use two sets of NCMs: (i) in Figs. B1a and B1b, the NCMs for full-mission maps, labelled 100 × 143, and (ii) in Figs. B1c and B1d, the half-mission NCMs obtained from the foreground cleaning procedure, labelled HM1, HM2 and HM1×HM2. The former choice aligns with the NCMs used in B21, while the latter corresponds to the work done in B22 and this paper, including the propagation of foreground-cleaning uncertainties.
In both situations the likelihoods agree qualitatively and perform sensibly, not showing any strong bias. For posteriors in the top row, based on 100 × 143 spectra, for r = 0 we obtain 43 posteriors for momento, and 45 and 47 posteriors for pixLike, out of 100 that peak above zero. With a significant tensor contribution of r = 0.3, we obtain the expected behaviour, as previously observed on the end-to-end simulations, that the posteriors move to higher values of r: we get 61 posteriors for momento, and 47 and 51 posteriors for pixLike, out of 100 that peak above r = 0.3. The posteriors in the bottom row of Fig. B1 show Gaussian realisations of the half-mission covariance matrices, which is the situation closest to the constraints on r presented in the body of the paper. For this test we obtain the following posterior peak distributions: for momento, 43 posteriors peaking above zero and 52 above r = 0.3, for input values of r = 0 and r = 0.3 respectively. For pixLike we obtain 48 and 42 posteriors (for HM1 and HM2, respectively) that peak above zero when r = 0, and 45 and 49 above r = 0.3 for an input value of r = 0.3.
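The posterior-peak counting used in these tests can be sketched as follows, with invented Gaussian mock posteriors standing in for the momento and pixLike outputs. For unbiased, symmetrically scattered posteriors, roughly half of the peaks should land above the input value.

```python
import numpy as np

def fraction_peaking_above(posteriors, r_grid, r_true):
    """Fraction of posterior curves (rows) whose peak lies above the
    input value: a simple unbiasedness diagnostic, expected near 0.5."""
    peaks = r_grid[np.argmax(posteriors, axis=1)]
    return np.mean(peaks > r_true)

rng = np.random.default_rng(2)
r = np.linspace(-1.0, 1.0, 401)
# 100 mock Gaussian posteriors whose centres scatter symmetrically about r = 0
centres = 0.2 * rng.standard_normal(100)
posts = np.exp(-0.5 * ((r[None, :] - centres[:, None]) / 0.1) ** 2)
frac = fraction_peaking_above(posts, r, 0.0)
assert 0.3 < frac < 0.7  # consistent with ~50 of 100 peaks above the truth
```

Counts near 50 out of 100, as reported above, are thus exactly what an unbiased likelihood should produce.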
For both sets of NCMs and for both likelihoods, when there is no tensor signal we see that indeed no posteriors strongly disfavour r = 0. We also note that the spread in the recovered posteriors on the Gaussian realisations of the half-mission NCMs, which involve a different data split and include the foreground-cleaning uncertainties, is larger than that for the full-mission noise maps.
These tests on correlated simulations give us confidence in the methods and also show that one should not expect to be able to constrain r from low-ℓ Planck data to better than about 0.2.

Figure B1. Sets of 100 posteriors, normalised to unit peak height, for r (marginalised over τ) from two-dimensional scans on Gaussian realisations of the CMB co-added with Gaussian realisations of the appropriate NCMs. These are shown for momento (left panels) and pixLike (right panels), with input tensor-to-scalar ratios of r = 0 (left subpanels) and r = 0.3 (right subpanels). The top row uses 100 × 143 full-mission NCMs (without any foreground cleaning uncertainties) and the bottom row uses the HM1 and HM2 half-mission NCMs from the NR foreground cleaning (which include uncertainties from the foreground cleaning procedure). The red dashed curves are the geometric mean of the normalised posteriors.