Developments in Sensitivity Methodologies and the Validation of Reactor Physics Calculations

Sensitivity methodologies have had a remarkable record of success since their adoption in the reactor physics field. Sensitivity coefficients can be used for different objectives, such as uncertainty estimates, design optimization, determination of target accuracy requirements, adjustment of input parameters, and evaluation of the representativity of an experiment with respect to a reference design configuration. A review of the methods used is provided, and several examples illustrate the success of the methodology in reactor physics. A new application, the improvement of basic nuclear parameters using integral experiments, is also described.


Introduction
Reactor physics calculations can be quite complex. The governing equation for neutronics is the integro-differential Boltzmann equation for neutron transport, a linear equation requiring the treatment of seven independent variables: three in space, two in angle, one in energy of the incident neutrons, and time. The difficulty in obtaining accurate solutions for problems in reactor core physics, shielding, and related applications is further aggravated by a number of factors. The nuclear data (i.e., the neutron cross-sections) frequently fluctuate rapidly over orders of magnitude in the energy variable. The neutron population is often sharply peaked in a particular angular direction, and those directions may vary strongly in space and energy. Finally, the geometric configurations that must be addressed are complex three-dimensional ones, with many intricate interfaces resulting from arrays of fuel rods, coolant channels, and control rods, as well as reflectors and shielding penetrated by ducting and other irregularities. While a great deal of effort has been expended in developing computational methods to deal with these problems (they fall into two classes: Monte Carlo and deterministic), a major hurdle still exists, connected with the scarce knowledge of the neutron cross-sections. In the past, many experiments have been performed in order to derive information that would reduce the uncertainties associated with the major neutronics design parameters. This would improve both the economic aspects of the design and the safety margins. The challenge that the reactor physicist is confronted with is how to transpose the experimental information to the reference design in order to reduce uncertainties. This has been done using several approaches, including bias correction factors, parameters that characterize systems, and data adjustment. One specific methodology, which makes use of sensitivity coefficients, has been particularly successful in reactor
physics. Sensitivity coefficients are determined and assembled, using different methodologies, in such a way that, when multiplied by the variation of the corresponding input parameter, they quantify the impact on the targeted quantities to which the sensitivity refers. Sensitivity coefficients can be used for different objectives, such as uncertainty estimates, design optimization, determination of target accuracy requirements, adjustment of input parameters, and evaluation of the representativity of an experiment with respect to a reference design configuration. In the following, first the theory of the methods used is presented, and then some past applications are shown to illustrate the remarkable success they have obtained.

Theory
Sensitivity analysis and uncertainty evaluation are the main instruments for dealing with the sometimes scarce knowledge of the input parameters used in simulation tools [1]. For sensitivity analysis, sensitivity coefficients are the key quantities that have to be evaluated. They are determined and assembled, using different methodologies, in such a way that, when multiplied by the variation of the corresponding input parameter, they quantify the impact on the targeted quantities to which the sensitivity refers. Sensitivity coefficients can be used for different objectives, such as uncertainty estimates, design optimization, determination of target accuracy requirements, adjustment of input parameters, and evaluation of the representativity of an experiment with respect to a reference design configuration.
In uncertainty evaluation, the sensitivity coefficients are multiplied by the uncertainties of the input parameters in order to obtain the uncertainty of the targeted parameter of interest. The origin and quality of the uncertainties of the input parameters can differ and vary quite a lot. In some cases, they are provided by the expert judgment of qualified designers. In other cases, more useful information is available, for instance from experimental values, and the uncertainties are cast in a more rigorous formalism. This is the case, for instance, of the covariance matrix for neutron cross-sections, where correlations in energy and among the different input parameters (reactions, isotopes) are also provided.
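This "sensitivity times covariance" propagation can be sketched numerically. The sensitivity vector and covariance matrix below are hypothetical values chosen only for illustration, not data from any evaluation:

```python
import numpy as np

# Hypothetical relative sensitivity coefficients S_j of one target
# parameter Q with respect to three input cross-sections.
S = np.array([0.9, -0.3, 0.15])

# Hypothetical relative covariance matrix D of the three inputs
# (diagonal: variances; off-diagonal: correlations among reactions).
D = np.array([
    [0.0004, 0.0001, 0.0],
    [0.0001, 0.0009, 0.0],
    [0.0,    0.0,    0.0025],
])

# First-order ("sandwich") propagation: var(Q)/Q^2 = S^T D S.
rel_variance = S @ D @ S
rel_uncertainty = np.sqrt(rel_variance)  # 1-sigma relative uncertainty on Q
print(f"relative uncertainty on Q: {rel_uncertainty:.4%}")
```

The same three lines of algebra apply unchanged when S has hundreds of thousands of entries (energy groups x isotopes x reactions), which is the situation the adjoint method is designed for.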
Target accuracy assessment is the inverse problem of uncertainty evaluation. To establish priorities and target accuracies for data uncertainty reduction, a formal approach can be adopted by defining target accuracies on design parameters and determining the required accuracies on the data. In fact, the unknown data uncertainty requirements can be obtained by solving a minimization problem in which the sensitivity coefficients, in conjunction with the existing constraints, provide the quantities needed to find the solutions.
Sensitivity coefficients are also used in input parameter adjustments. In this case, the coefficients are used within a fitting methodology (e.g., least-squares fitting, or Lagrange multipliers with a maximum-likelihood function) in order to reduce the discrepancies between measured and calculated results. The resulting adjusted input parameters can subsequently be used, sometimes in combination with bias factors, to obtain calculated results with which a reduced uncertainty is associated.
A further use of sensitivity coefficients is, in conjunction with a covariance matrix, the representativity analysis of proposed or existing experiments. In this case, the calculation of correlations between the design and the experiments makes it possible to determine how representative the latter are of the former and, consequently, to optimize the experiments and to reduce their number. Formally, one can reduce the estimated uncertainty on a design parameter by a quantity that represents the knowledge gained by performing the experiment.
There are two main methodologies developed for sensitivity and uncertainty analysis. One is the forward (direct) calculation method, based either on numerical differentiation or on a stochastic (Monte Carlo type) method; the other is the adjoint method, which is based on perturbation theory and employs adjoint importance functions. In general, the forward approach is preferable when there are few input parameters that can vary and many output parameters of interest. The contrary is true for the adjoint methodology. The adjoint methodology has been the one mainly adopted in reactor physics, as the source of uncertainty is mainly related to the neutron cross-sections, which can represent a very large number of variables (up to several hundred thousand). Moreover, the linear property of the Boltzmann equation makes the adjoint approach even more attractive.

Historical Notes.
Perturbation theory was introduced into reactor physics in the 1950s, and one can find a classical presentation in the Weinberg and Wigner book [2] (see also [3]). This is the perturbation theory applied to the k_eff of the critical reactor, and Usachev gave a comprehensive development in an article published at the Geneva conference of 1955 [4].
It is interesting to note that perturbation theory applied to reactors makes use of a function (the adjoint flux) that has a specific physical meaning when one is dealing with a nonconservative system, as in the case of a nuclear reactor. This physical interpretation of the adjoint flux was the focus of extensive studies during the 1960s, in particular by Lewins [5, 6].
Perturbation theory, mostly developed and applied for reactivity coefficient studies, was soon used [7] for an application, sensitivity studies, which saw spectacular development in the 1970s and 1980s. This development was made possible by a generalization of the perturbation theory (thanks again to Usachev) that deals with the general problem of a variation of any kind of neutron flux functional. Usachev derived an explicit formulation that relates the functional variation to any change of the Boltzmann operator [8].
This development and its further generalization by Gandini, to the case of any kind of linear and bilinear functional of the real and adjoint flux [9], opened a new territory for the perturbation theory.It was now possible to relate explicitly the variation of any type of integral parameter (multiplication factor, reaction rates, reactivity coefficients, source values, etc.) to any kind of change of the operator that characterizes the system.
The application of the generalized perturbation theory to real life problems led to new interesting developments that allowed clarification of specific characteristics of the new theory with implications for the computation of the generalized importance functions introduced by the theory [10].
Starting from the early 1970s, the generalized perturbation methods, which were essentially developed and used in Europe, became popular also in the rest of the world and in particular with new developments in several U.S. laboratories, ANL [11,12] and ORNL [13], and in Japan [14].
The perturbation methods, and their main application in the field of sensitivity analysis, have been used mostly in their first-order formulation. Actually, as for any perturbation theory, the power of the method is particularly evident when one considers small perturbations (for instance, of cross-sections σ) that therefore induce little change in the functions (e.g., the neutron flux ϕ) that characterize the system, for which one can neglect the second-order product (for instance, δσδϕ). However, there have been theoretical developments that take into account higher-order effects without losing all the advantages typical of the first-order formulations [15-17].
Among the theoretical developments after the 1970s that had significant practical impact, one has to mention the extension of the perturbation theory to the nuclide field, which allows study of the burnup due to irradiation in the reactor at first order [18-21] and at higher orders [22]. Subsequently, a new formulation, the "equivalent generalized perturbation theory" (EGPT) [23], allowed treatment, in a very simple and efficient way, of perturbation and sensitivity analyses for reactivity coefficients.
Among the most recent developments, it is worth mentioning those related to the ADS (accelerator-driven systems) case, with functionals that allow calculation of the sensitivity of the source importance (ϕ*) and of the inhomogeneous reactivity [24].
Finally, one should remember that, besides the neutronics field, there have been several studies extending the perturbation theory developed for reactor physics to other domains (thermal-hydraulics, safety, etc.), with very interesting theoretical developments [25-28].

Sensitivity Coefficients and Perturbation Theories.
The variations of any integral parameter Q due to variations of cross-sections σ can be expressed using perturbation theories [29, 30] to evaluate sensitivity coefficients S:

δQ/Q = Σ_j S_j (δσ_j/σ_j), (1)

where the sensitivity coefficients S_j are formally given by

S_j = (σ_j/Q) (∂Q/∂σ_j). (2)

For practical purposes, in the general expression of any integral parameter Q, the explicit dependence on some cross-sections (e.g., σ_i^e) and the implicit dependence on some other cross-sections (e.g., σ_j^im) are kept separated:

Q = Q(σ_i^e, Φ(σ_j^im)). (3)

As an example, we consider a reaction rate

R = ⟨σ_e, Φ⟩, (4)

where the brackets ⟨ , ⟩ indicate integration over the phase space. In the case of a source-driven system, Φ is the inhomogeneous flux driven by the external source, and it is the homogeneous flux in the case of critical core studies. In (4), σ_e can be an energy-dependent detector cross-section; R is "explicitly" dependent on σ_e and "implicitly" dependent on the cross-sections that characterize the system, described by the flux Φ. In other terms, R depends on the system cross-sections via Φ. Equation (1) can be rewritten as follows:

δQ/Q = Σ_j S_j^im (δσ_j^im/σ_j^im) + (σ_e/Q)(∂Q/∂σ_e)(δσ_e/σ_e), (5)

where we have made the hypothesis of an explicit dependence of Q on only one σ_e. If we drop the index "im",

δQ/Q = Σ_j S_j (δσ_j/σ_j) + (σ_e/Q)(∂Q/∂σ_e)(δσ_e/σ_e) = I + D, (6)

where the term I is generally called the "indirect" effect and the term D the "direct" effect. While the direct effects can be obtained with explicit expressions of the derivatives of Q, the indirect effects (i.e., the sensitivity coefficients S) can be obtained with perturbation expressions, most frequently at first order [29, 30].
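A relative sensitivity coefficient S_j = (σ_j/Q)(∂Q/∂σ_j) can also be estimated by the forward (direct) method, simply rerunning the model with one perturbed input at a time. The toy integral parameter below, a fission-to-total reaction-rate ratio with hypothetical cross-section values, stands in for the implicit flux solution of a real transport code:

```python
import numpy as np

# Toy integral parameter: a ratio Q = sigma_f / (sigma_f + sigma_c),
# playing the role of a functional computed by a transport code.
def Q(sigma):
    sigma_f, sigma_c = sigma
    return sigma_f / (sigma_f + sigma_c)

sigma0 = np.array([1.8, 0.6])  # hypothetical cross-sections (barns)
eps = 1e-6                     # relative perturbation size

# Relative sensitivities S_j = (sigma_j / Q) * dQ/dsigma_j, obtained
# by forward finite differences: perturb one input, recompute Q.
Q0 = Q(sigma0)
S = np.empty_like(sigma0)
for j in range(len(sigma0)):
    pert = sigma0.copy()
    pert[j] *= 1.0 + eps
    S[j] = (Q(pert) - Q0) / (eps * Q0)

print(S)  # analytic values: +sigma_c/(sigma_f+sigma_c), -sigma_c/(sigma_f+sigma_c)
```

With two inputs this costs two extra model runs; with the several hundred thousand cross-section variables mentioned above it would cost as many runs, which is precisely why the adjoint formulation dominates in reactor physics.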

Reactivity Coefficients.
A reactivity coefficient (like the Doppler effect) can be expressed as a variation of the reactivity of the unperturbed system (characterized by a value K of the multiplication factor, a Boltzmann operator M, a flux Φ, and an adjoint flux Φ*):

Δρ = 1/K − 1/K_p, (7)

where K_p corresponds to a variation of the Boltzmann operator such that

M_p = M + δM. (8)

The sensitivity coefficients (at first order) for Δρ to variations of the σ_j are given as in [23]:

S_j = (σ_j/Δρ) [⟨Φ_p*, (∂M_p/∂σ_j) Φ_p⟩/I_f^p − ⟨Φ*, (∂M/∂σ_j) Φ⟩/I_f], (9)

where I_f = ⟨Φ*, FΦ⟩ and I_f^p = ⟨Φ_p*, FΦ_p⟩, F being the neutron fission production part of the M (= F − A) operator.

Reaction Rates.
The classical formulations found, for example, in [29, 30] can be applied to the case of, for example, the damage rate or the He-production rate in the structures, or to the power peak factor in the core:

R = ⟨Σ_d, Φ⟩. (10)

The sensitivity coefficients are given by

S_j = −σ_j ⟨Ψ_R*, (∂M/∂σ_j) Φ⟩, (11)

where Φ has been defined above and Ψ_R* is the solution of

M* Ψ_R* = Σ_d/R, (12)

M* being the adjoint of the operator M. In the specific case of the power peak, this parameter can be expressed as the ratio

Q = ⟨Σ_p Φ⟩_MAX / ⟨Σ_p Φ⟩_Reactor, (13)

with Σ_p the power cross-section, essentially represented by E_f · Σ_f, E_f being the average energy released per fission. The sensitivity coefficients are defined as

S_j = −σ_j ⟨Ψ*, (∂M/∂σ_j) Φ⟩, (14)

where Ψ* is the importance function solution of

M* Ψ* = Σ_p,MAX/⟨Σ_p Φ⟩_MAX − Σ_p,Reactor/⟨Σ_p Φ⟩_Reactor, (15)

where Σ_p,MAX is the Σ_p value at the spatial point where ⟨Σ_p Φ⟩ ≡ ⟨Σ_p Φ⟩_MAX, and Σ_p,Reactor is the Σ_p value at each spatial point of the reactor. In (15), effects due to Σ_p,MAX and Σ_p,Reactor variations are assumed to be negligible.

Nuclide Transmutation.
The transmutation of a generic nuclide K during irradiation can be represented as the nuclide density variation between time t_0 and t_F. If we denote by n_F^K the "final" density, the appropriate sensitivity coefficient is given by

S_j = (σ_j/n_F^K) ∫ from t_0 to t_F of ⟨n*, (∂A/∂σ_j) n⟩ dt, (16)

where A is the transmutation matrix of the nuclide evolution equation dn/dt = An, and the time-dependent equations to obtain n* and n, together with their boundary conditions, are defined in [18-21].

Uncertainty Analysis, Experiment Representativity, and Target Accuracy Assessment.
Uncertainty evaluation and experiment representativity factors are computed in ERANOS with covariance matrices provided in different general formats.
The uncertainties associated with the cross-sections can be represented in the form of a variance-covariance matrix:

D = (d_ij), (17)

where the elements d_ij represent the expected values related to the parameters σ_i and σ_j.
The variance of Q can then be obtained as

ΔQ² = Σ_i Σ_j S_i d_ij S_j. (18)

In order to plan for specific experiments able to reduce uncertainties on selected design parameters, a formal approach, initially proposed by Usachev and Bobkov [31], has been applied by Palmiotti and Salvatores [32] and further developed by Gandini [33].
In the case of a reference parameter R, once the sensitivity coefficient matrix S_R and the covariance matrix D are available, the uncertainty on the integral parameter can be evaluated by the equation

ΔR_0² = S_R^T D S_R. (19)

We can consider an integral experiment conceived in order to reduce the uncertainty ΔR_0². Let us indicate by S_E the sensitivity matrix associated with this experiment. If we give the "representativity factor" the following expression:

r = (S_R^T D S_E) / [(S_R^T D S_R)(S_E^T D S_E)]^(1/2), (20)

it can be shown [31] that the uncertainty on the reference parameter R_0 is reduced by

ΔR² = ΔR_0² (1 − r²). (21)

If more than one experiment is available, (21) can be generalized. In the case of two experiments, characterized by sensitivity matrices S_E1 and S_E2, the following expression [33] can be derived:

ΔR² = ΔR_0² [1 − (r_1² + r_2² − 2 r_1 r_2 r_12)/(1 − r_12²)], (22)

where r_1 and r_2 are the representativity factors of the two experiments with respect to R_0, and r_12, defined analogously to (20), is the correlation between the two experiments:

r_12 = (S_E1^T D S_E2) / [(S_E1^T D S_E1)(S_E2^T D S_E2)]^(1/2). (23)

The approach outlined here can be used to plan optimized integral experiments to reduce uncertainties on a set of integral parameters of a reference system. A successive step is the assessment of target accuracy requirements. Target accuracy assessment [32] is the inverse problem of uncertainty evaluation. To establish priorities and target accuracies for data uncertainty reduction, a formal approach can be adopted by defining target accuracies on design parameters and determining the required accuracies on the data in order to meet them. In fact, the unknown data uncertainty requirements can be obtained by solving a minimization problem in which the sensitivity coefficients, in conjunction with the constraints on the integral parameters, provide the quantities needed for finding the solutions.
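The representativity factor and the associated uncertainty reduction can be sketched numerically. The sensitivity vectors for the design parameter and the experiment, and the covariance matrix, are hypothetical values chosen only for illustration:

```python
import numpy as np

# Hypothetical sensitivities of the design parameter (S_R) and of a
# proposed experiment (S_E) to the same nuclear data, with covariance D.
S_R = np.array([1.0, -0.4, 0.2])
S_E = np.array([0.9, -0.5, 0.1])
D = np.diag([0.0004, 0.0009, 0.0016])

var_R = S_R @ D @ S_R        # a priori variance of the design parameter
var_E = S_E @ D @ S_E        # variance of the experimental parameter
cov_RE = S_R @ D @ S_E       # covariance between design and experiment

r = cov_RE / np.sqrt(var_R * var_E)   # representativity factor
var_R_post = var_R * (1.0 - r**2)     # reduced variance after the experiment

print(f"r = {r:.3f}; uncertainty {np.sqrt(var_R):.4f} -> {np.sqrt(var_R_post):.4f}")
```

With these numbers the representativity comes out close to 1, so performing the experiment removes most of the a priori variance; an experiment with r near 0 would remove almost none.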
The unknown data uncertainty requirements d_i can be obtained by solving the following minimization problem for the functional Q:

Q = Σ_i λ_i / d_i² = min, (24)

with the following constraints:

Σ_i S_ni² d_i² + Σ_(i≠i′) S_ni S_ni′ d_i d_i′ Corr_ii′ ≤ (R_n^T)², n = 1, …, N, (25)

where N is the total number of integral design parameters, S_ni are the sensitivity coefficients for the integral parameter R_n, and R_n^T are the required target accuracies on the N integral parameters; the λ_i are "cost" parameters related to each σ_i, which should give a relative figure of merit of the difficulty of improving that parameter (e.g., reducing uncertainties with an appropriate experiment), and Corr_ii′ are the correlation values between the variables i and i′.
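For the simplest case of a single integral parameter and uncorrelated data, this constrained minimization has a closed-form Lagrange-multiplier solution, which the sketch below evaluates with hypothetical sensitivities, cost factors, and target accuracy:

```python
import numpy as np

# Hypothetical inputs for one integral parameter: sensitivities S_i,
# "cost" weights lambda_i, and target accuracy R_T on the parameter.
S = np.array([0.8, 0.3, 0.1])
lam = np.array([1.0, 2.0, 1.0])
R_T = 0.01  # 1% target accuracy

# Minimizing sum(lam_i / d_i^2) subject to sum(S_i^2 d_i^2) = R_T^2
# (single constraint, no correlations) gives analytically:
#   d_i^2 = sqrt(lam_i) R_T^2 / ( |S_i| * sum_j |S_j| sqrt(lam_j) )
d2 = np.sqrt(lam) * R_T**2 / (np.abs(S) * np.sum(np.abs(S) * np.sqrt(lam)))
d = np.sqrt(d2)  # required (1-sigma) relative accuracies on each datum

print("required data accuracies:", d)
print("achieved parameter accuracy:", np.sqrt(np.sum(S**2 * d2)))
```

Note how the datum with the largest sensitivity (S = 0.8) is assigned the tightest required accuracy, while weakly contributing data are allowed to stay relatively uncertain; this is exactly the prioritization the target accuracy assessment is meant to produce.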

Data Assimilation.
Uncertainty and sensitivity analysis can be used to effectively combine nuclear data covariance information, integral experiments, their "representativity," and their associated experimental uncertainties in order to reduce the a priori uncertainties on performance parameters (like k_eff or reactivity coefficients) that characterize a reference design configuration. Several approaches (usually called "bias factor" methods; see, e.g., [33]) have been attempted. A more rigorous approach is the so-called data assimilation (also called adjustment, calibration, or tuning).
If we define B_p as the "a priori" nuclear data covariance matrix and S_B as the sensitivity matrix of the performance parameters B (B = 1, …, B_TOT) to the J nuclear data, the "a priori" covariance matrix of the performance parameters is given by

B_B = S_B B_p S_B^T. (26)

It can be shown that, using a set of I integral experiments A characterized by a sensitivity matrix S_A, besides a set of statistically adjusted cross-section data, a new ("a posteriori") covariance matrix B̃_p can be obtained (see, e.g., [34]):

B̃_p = B_p − B_p S_A^T (S_A B_p S_A^T + B_A)^(−1) S_A B_p, (27)

where B_A is the integral experiment uncertainty matrix; in the case of uncorrelated experiments,

B_A = diag(b_ii) (28)

(the b_ii are the experimental uncertainties of each experiment i), and S_A is the sensitivity matrix of the I experiments to the J nuclear parameters (cross-sections by energy group, isotope, and reaction type):

S_A = (S_ij), i = 1, …, I, j = 1, …, J. (29)

This matrix B̃_p can then be used to define a new ("a posteriori") covariance matrix B̃_B for the performance parameters:

B̃_B = S_B B̃_p S_B^T. (30)

If we consider only one performance parameter B and only one experiment i, and if we put B_A = 0, we obtain from (27) and (30) the expression of the "representativity" of one integral experiment, as defined in [32]. Then, we can consider (30) as a generalized expression for the reference parameter uncertainty reduction as given in [32]. This generalized expression accounts for more than one experiment and allows estimating the impact of any new experiment in the reduction of the "a priori" uncertainty of the design performance parameters [35].
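The assimilation update of the data covariance matrix and its effect on a design parameter can be sketched as follows; all matrices below are hypothetical toy values (4 nuclear data, 2 experiments, 1 design parameter), not data from any evaluation:

```python
import numpy as np

# Hypothetical a priori data covariance, experiment sensitivities,
# experimental uncertainties, and design-parameter sensitivities.
B_p = np.diag([0.0004, 0.0009, 0.0016, 0.0025])
S_A = np.array([[0.9, -0.4, 0.2, 0.05],
                [0.3,  0.7, 0.1, 0.40]])
B_A = np.diag([1e-5, 1e-5])
S_B = np.array([1.0, -0.5, 0.3, 0.2])

# Generalized-least-squares update of the data covariance matrix:
#   B_post = B_p - B_p S_A^T (S_A B_p S_A^T + B_A)^-1 S_A B_p
gain = B_p @ S_A.T @ np.linalg.inv(S_A @ B_p @ S_A.T + B_A)
B_post = B_p - gain @ S_A @ B_p

# Propagate to the design parameter before and after assimilation.
prior = np.sqrt(S_B @ B_p @ S_B)
post = np.sqrt(S_B @ B_post @ S_B)
print(f"design uncertainty: {prior:.4f} -> {post:.4f}")
```

The posterior covariance is symmetric and never larger than the prior in the quadratic-form sense, so any experiment whose sensitivities overlap those of the design parameter can only reduce (or leave unchanged) the design uncertainty.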
In fact, we can define an "assimilated" representativity factor that characterizes the uncertainty reduction obtained through the assimilation process. Let us first define the uncertainties on an integral parameter (e.g., k_eff) for a specific design target system (e.g., a reactor to be designed) using the nuclear data covariances before, B_p, and after, B̃_p, assimilation:

ΔB² = S_B B_p S_B^T,   ΔB̃² = S_B B̃_p S_B^T; (32)

then, using (21), we can derive the "assimilated" representativity factor r²_RE,A associated with the assimilation process:

r²_RE,A = 1 − ΔB̃²/ΔB². (33)

If we define the uncertainty reduction factor UR as

UR = ΔB̃/ΔB, (34)

we can obtain, using (33),

UR = (1 − r²_RE,A)^(1/2). (35)

(Figure 1. Residual E/C values as a function of the spectrum indicator r; the r value corresponding to the Superphenix core is indicated.)

An Example from the Past: Superphenix
In the 1960s and 1970s, a "practical" (and powerful) method of nuclear data assimilation (or adjustment) and bias factor assessment, together with associated uncertainties, was developed in France. The method was based on the following steps.
(i) Conceive and perform simple, "clean" integral experimental configurations in zero-power critical assemblies, each of them characterized by a meaningful parameter "r" (a "spectrum indicator"). In each configuration, several integral parameters i (e.g., k_eff, critical buckling, reaction rate ratios, etc.) are measured.
(ii) The observed experiment-to-calculation ratio E_i/C_i for each specific integral parameter was interpreted in terms of nuclear data statistical adjustments (see later).
(iii) The adjusted data are associated with a new calculated value C̃_i. Residual E_i/C̃_i values are displayed as a function of the "r" parameter (see Figure 1).
(iv) The reference design core is also characterized by a specific value of "r". Its integral parameters R are calculated with the adjusted nuclear data.
(v) The expected E_R/C_R for the reference design core is deduced by interpolation (see Figure 1) and used as a "bias factor" for the calculated value of the reference design core parameter C_R.
In Europe, this approach was preferred to the "mock-up" approach, mainly used in the USA.
A milestone was reached in 1984, when this approach allowed prediction of the critical mass of Superphenix to within approximately 3 (out of ∼300) subassemblies (corresponding to ∼0.3% Δk/k).

Target Accuracy Requirements: The OECD/NEA Subgroup 26
The first and most significant recent initiative aiming at a systematic assessment of the impact of nuclear data uncertainties was taken by the Working Party on Evaluation Cooperation (WPEC) of the OECD Nuclear Energy Agency Nuclear Science Committee when it established a subgroup (called Subgroup 26) to develop a systematic approach to defining data needs for advanced reactor systems and to make a comprehensive study of such needs for Generation-IV (Gen-IV) reactors. This subgroup was established at the end of 2005, and a final report was published in 2008 [36]. A comprehensive sensitivity and uncertainty study was performed to evaluate the impact of neutron cross-section uncertainties on the most significant integral parameters related to the core and fuel cycle of a wide range of innovative systems, even beyond the Gen-IV range of systems. In particular, results have been obtained for the sodium-cooled advanced burner reactor (ABR), the sodium-cooled low conversion ratio fast reactor (SFR), the sodium-cooled European fast reactor (EFR), the gas-cooled fast reactor (GFR), the lead-cooled fast reactor (LFR), and the accelerator-driven lead-cooled minor actinide burner (ADMAB). These systems correspond to current studies within the Generation-IV initiative, the Advanced Fuel Cycle Initiative (AFCI), and the advanced fuel cycle and partitioning/transmutation studies in Japan and Europe. The integral parameter uncertainties were initially calculated using covariance data developed in a joint effort by several laboratories contributing to the subgroup activity. This set of covariance matrices was referred to as BOLNA [37].
The calculated integral parameter uncertainties resulting from the initially assessed uncertainties on the nuclear data of the BOLNA set were found to be rather acceptable for the early phases of design feasibility studies. In fact, the uncertainty on k_eff was found to be less than 2% for all systems (with the exception of the ADS), and reactivity coefficient uncertainties were below 20%. Power distribution uncertainties are also relatively small, except in the case of the ADS. However, later conceptual and design optimization phases of selected reactor and fuel cycle concepts will need improved data and methods in order to reduce margins, for both economic and safety reasons. For this purpose, a compilation of preliminary "Design Target Accuracies" was put together, and a target accuracy assessment was performed to provide an indicative quantitative evaluation of nuclear data improvement requirements by isotope, nuclear reaction, and energy range in order to meet the Design Target Accuracies. First priorities were formulated on the basis of common needs for fast reactors and, separately, for thermal systems. These priority items (see Table 1) were included in the High Priority Request List (HPRL) of the OECD-NEA Data Bank.

A More Recent Example of Data Assimilation
The purpose of the work performed in this example [38] was to provide a first series of guidelines for improving the methods and data used in the preliminary study of a sodium-cooled fast spectrum "advanced burner" reactor (ABR), as defined within the GNEP initiative and the AFCI program [39]. The reference 1000 MWt ABR core concepts were developed with ternary metal and mixed oxide fuels. Compact core concepts of medium TRU conversion ratio (∼0.8 for the start-up core and ∼0.7 for the recycled equilibrium core) were developed by a tradeoff between the burn-up reactivity loss and the TRU conversion ratio. Two enrichment zones are used for the metal core, whereas three enrichment zones are used for the oxide core. In both cases, there is a steel reflector surrounding the core and no fertile blanket.
The selected integral experiments should meet a series of requirements: (a) low and well-documented experimental uncertainties, (b) the ability to separate effects (e.g., capture and fission), and (c) the ability to validate global energy- and space-dependent effects.
As for point (b) above, irradiation experiments, in particular of separate isotope samples, provide very significant information on capture data, while fission rate experiments in well-characterized spectra provide high-accuracy information on fission data. As for point (c), the global energy validation should be envisaged using, as far as possible, "representative" experiments, according to the definition given above, while specific spatial effects (such as reflector effects in the ABR cores) should be singled out with appropriate experiments (e.g., experiments with or without blankets, to underline possible specific effects due to the presence of a steel reflector).
A series of experiments following the indicated requirements was selected for the purpose of reducing the current uncertainties on the targeted cores (ABR with metal or oxide fuels). Table 2 shows the list of significant experiments chosen in the present study, together with the main integral parameters that have been measured and calculated. These experiments cover a wide range of fuel types, including those of the reference systems. As far as representativity is concerned, we considered a range of different ZPPR and ZPR experiments, in particular assemblies ZPPR-2, ZPPR-9, and ZPR6-7 with Pu oxide fuel, ZPPR-15 with Pu metal fuel, and ZPR6-6 with enriched UO2 fuel. We performed a representativity study on the criticality of these experiments with respect to the two ABR cores. We added, for comparison purposes, the ZPR3-53 and ZPR3-54 experiments. The results shown in Table 3 indicate that the ZPPR-15 experiment is the best suited to "represent" both ABR reference cores and that the other cores will not add significant information. In fact, if we consider the extra information brought by, for example, ZPPR-9 with respect to ZPPR-15, we find, using (20), that adding ZPPR-9 has only a very limited impact on the ABR k_eff uncertainty reduction, since the r_12 value relative to ZPPR-15 and ZPPR-9 is too close to 1 (0.978).
Table 4 gives the C/E values with associated uncertainties before and after adjustment for the 44 integral experiments used in this study. The first remark is that ENDF/B-VII performs rather well in general. However, for a number of parameters (higher Pu isotopes and some minor actinides), there is a clear need for substantial improvements.
After adjustment, the "a posteriori" C/E values show a definite improvement and, with a few exceptions, all residual calculation-versus-experiment discrepancies are reduced to within the "a posteriori" experimental uncertainties. To obtain this result, and in order to obtain a statistically sound adjustment (i.e., as indicated by a χ² test), it has been necessary in very few cases to modify (i.e., increase) the diagonal uncertainty values of the BOLNA covariance matrix for a specific reaction of a specific isotope.
After the adjustment, we have applied the new cross-section covariance matrix to evaluate the "a posteriori" uncertainty on the k_eff of the two reference systems (ABR with metal or oxide fuel). The results are given in Table 5. The uncertainty on the k_eff of the two reference ABR configurations is reduced significantly, from ∼1.5% to ∼0.6% in both cases. One interesting ancillary result is given by the "assimilated" representativity factors. As reported in Table 5, they are significantly higher than those of Table 3 and indicate the global representativity of the 44 experiments used in the data assimilation process with respect to the target systems for the k_eff parameter. As for the nuclear-data-related uncertainty, it is possible to further reduce the value obtained here (∼0.6%) by including more nuclear data in the adjustment (e.g., more structural material data), by including a few more integral experiments carefully selected for that purpose, and by using the "representativity" approach outlined previously more extensively.
Finally, it is interesting to note that the proposed adjustment reduces not only the uncertainty on the k_eff but also the uncertainties on the local TRU nuclide densities after irradiation and, as a consequence, the uncertainty on the reactivity loss per cycle.

The New Approach: Consistent Data Assimilation
The major drawback of the classical adjustment method is the potential limitation of the domain of application of the adjusted data, since adjustments are made on multigroup data, and the multigroup structure, the neutron spectrum used as weighting function, and the code used to process the basic data file are significant constraints. A new approach [43] has been developed in order to adjust physical parameters rather than multigroup nuclear data, the objective now being to correlate the uncertainties of the basic parameters that characterize the neutron cross-section description with the discrepancies between calculated and experimental values for a large number of clean, high-accuracy integral experiments.
This new approach is the first attempt to build a link between the wealth of precise integral experiments and the basic theory of nuclear reactions. A large amount of exceptionally precise integral measurements has been accumulated over the last 50 years. These experiments were driven by the necessities of nuclear applications but were never fully exploited for improving the predictive power of nuclear reaction theory. Recent advances in nuclear reaction modeling and neutron transport calculations, combined with sensitivity analysis methods, offer a reasonable possibility of deconvoluting the results of the integral experiments in a way that provides feedback on the parameters entering nuclear reaction models. Essential ingredients of such a procedure are covariances for model parameters and sensitivity matrices; the latter provide a direct link between reaction theory and integral experiments. By using integral reactor physics experiments (meter scale), information is propagated back to the nuclear level (femtometers), covering a range of more than 13 orders of magnitude.
The assimilation procedure results in more accurate and more reliable evaluated data files that will be of universal validity rather than tailored to a particular application. These files will naturally come with cross-section covariances incorporating both microscopic and integral measurements, as well as constraints imposed by the physics of nuclear reactions. Thus, these covariances will encompass the entire relevant knowledge available at the time of evaluation.
On the physics side, the assimilation improves knowledge of model parameters, increasing the predictive power of nuclear reaction theory, and it would bring a new quality into nuclear data evaluation as well as refinements in nuclear reaction theory. On the application side, the procedure would, like the classical adjustment methods [34], provide adjusted multigroup nuclear data for applications, together with new, improved covariance data and reduced uncertainties for the required design parameters, in order to meet target accuracies.
One should, however, set up a strategy to cope with the drawbacks of the methodology, which are related to the energy group structure and energy weighting functions adopted in the adjustment.
In fact, the classical statistical adjustment method can be improved by "adjusting" reaction model parameters rather than multigroup nuclear data. The objective is to associate the uncertainties of certain model parameters (such as those determining neutron resonances, optical model potentials, level densities, and strength functions), and the uncertainties of the theoretical nuclear reaction models themselves (such as the optical, compound nucleus, preequilibrium, and fission models), with observed discrepancies between calculations and experimental values for a large number of integral experiments.
The experiments should be clean (i.e., well documented, with high QA standards) and high accuracy (i.e., with experimental uncertainties and systematic errors as low as possible), and they should be carefully selected to provide complementary information on different features and phenomena: for example, different average neutron spectrum energies, different adjoint flux shapes, different leakage components in the neutron balance, and different isotopic mixtures and structural materials.
In the past, a few attempts were made [44][45][46] to apply a consistent approach to improving basic nuclear data, in particular to the inelastic discrete levels and evaporation temperature data of 56 Fe for shielding applications, and to the resolved resonance parameters of actinides (e.g., Γ and total widths and peak positions). This effort indicated not only the validity of the approach but also the challenges to be overcome for its practical application, mainly related to the way the sensitivity coefficients were obtained and to the need for reliable covariance information.
The consistent data assimilation methodology allows both of these difficulties to be overcome, using an approach that involves the following steps.

(i) Selection of the appropriate reaction mechanisms, along with the respective model parameters, to reproduce adopted microscopic cross-section measurements with EMPIRE [47] code calculations. The use of coupled channels, quantum-mechanical preequilibrium theories, and an advanced statistical model accounting for width fluctuations and the full gamma cascade ensures state-of-the-art modelling of all relevant reaction mechanisms.
(ii) Determination of covariance matrices for the set of nuclear reaction model parameters obtained in the previous step. This is achieved by combining initial estimates of parameter uncertainties with uncertainties/covariances for the adopted experimental data through the KALMAN [48] code. In this way, the resulting parameter covariances contain the constraints imposed by nuclear reaction theory and microscopic experiments. Several parameters have been considered, including resonance parameters for a few dominating resonances, optical model parameters for neutrons, level density parameters for all nuclei involved in the reaction, parameters entering preequilibrium models, and parameters determining gamma-strength functions.
(iii) Sensitivities of the cross-sections to perturbations of the above-mentioned reaction model parameters are calculated with the EMPIRE code.
(iv) Use of the adjoint technique to evaluate sensitivity coefficients of integral reactor parameters to the cross-section variations, as described in the previous step. To perform this task, the ERANOS code system [49], which computes sensitivity coefficients based on generalized perturbation theory, is employed.
(v) Analysis of the selected experiments using the best calculation tools available (in general, Monte Carlo codes such as MCNP).
(vi) Consistent data assimilation on basic nuclear parameters, using the integral experiment analysis performed with the best methodology available to provide the discrepancies between calculated and measured quantities. Once the C/E values are available, they are used, together with the sensitivity coefficients coming from the previous steps, in a data assimilation code.
(vii) Construction of new ENDF/B-type data files, based on the modified reaction theory parameters, for use by neutronic designers.

Evaluation of Nuclear Physics Parameter Covariances.
As indicated in the outline of the methodology, the first step is to provide the estimated ranges of variation of the nuclear physics parameters, including their covariance data. To this end, the EMPIRE code [47], coupled to the KALMAN code [48], is employed. KALMAN is an implementation of the Kalman filter technique based on minimum variance estimation. It naturally combines covariances of model parameters, of experimental data, and of cross-sections; this universality is a major advantage of the method. KALMAN uses measurements, along with their uncertainties, to constrain the covariances of the model parameters via the sensitivity matrix. The final cross-section covariances can then be calculated from the updated covariances of the model parameters. This procedure consistently accounts for the experimental uncertainties and the uncertainties of the nuclear physics parameters. We emphasize that, under the term "reaction model," we also include the resonance region, described by models such as the multilevel Breit-Wigner formalism.
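The minimum-variance update at the heart of this step can be sketched for the simplest case: a single model parameter constrained by a single microscopic measurement. The function below is a hypothetical one-dimensional illustration of the technique, not the KALMAN code itself.

```python
def kalman_update(p, var_p, y, var_y, s):
    """One minimum-variance (Kalman) update of a reaction-model parameter.

    p, var_p : prior parameter value and its variance
    y, var_y : measured cross section and its variance
    s        : sensitivity d(sigma)/d(p) of the cross section to the parameter
    """
    residual = y - s * p                         # measurement vs. model prediction
    gain = var_p * s / (s * s * var_p + var_y)   # weighs prior vs. measurement
    p_new = p + gain * residual
    var_new = (1.0 - gain * s) * var_p           # posterior variance never grows
    return p_new, var_new

# A measurement 10% above the model prediction, with a variance smaller than
# the prior one, pulls the parameter most of the way toward it:
p1, v1 = kalman_update(p=1.0, var_p=0.04, y=1.1, var_y=0.01, s=1.0)
# p1 ≈ 1.08, v1 ≈ 0.008
```

The same algebra, written with matrices, handles many parameters and many measurements at once; the sensitivity matrix then plays the role of the scalar s above.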

Evaluation of Sensitivity Coefficients for Integral Experiments.
In order to evaluate the sensitivity coefficients of the integral parameters measured in a reactor physics experiment to the nuclear parameters, a folding procedure is applied, in which the sensitivities calculated by EMPIRE are folded with those calculated by ERANOS (i.e., the multigroup cross-section sensitivity coefficients of the integral parameters).
Following this procedure, the sensitivities of the integral experiments to the nuclear parameters p k are defined as

S pk = (p k /R)(∂R/∂p k ) = Σ j (σ j /R)(∂R/∂σ j ) × (p k /σ j )(∂σ j /∂p k ), (36)

where R is an integral reactor physics parameter (e.g., K eff , reaction rates, and reactivity coefficients) and σ j a multigroup cross section (the index j accounts for isotope, cross-section type, and energy group).
In general, to compute σ j one (a) uses EMPIRE with an appropriate set of parameters p k to generate (b) an ENDF/B file for the specific isotope and then (c) runs NJOY on that file to obtain the multigroup cross sections.
As specified in the previous section, one can compute the variation of the cross sections, Δσ j , resulting from a variation of each parameter p k .
Specifically, the procedure consists in generating the Δσ j corresponding to fixed, well-chosen variations of each p k taken separately, and therefore in generating the Δσ j /Δp k . Following each EMPIRE calculation, an ENDF/B file for the isotope under consideration is generated, and a subsequent run of NJOY on this file produces multigroup cross sections in the same energy structure used for the computation of the reactor physics integral parameters. The multigroup cross-section variations associated with the individual fundamental parameter varied in the corresponding EMPIRE calculation are readily computed by difference with the reference NJOY calculation for the isotope under consideration.
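The one-parameter-at-a-time finite-difference generation of the Δσ j /Δp k can be sketched as follows. The EMPIRE → ENDF/B → NJOY processing chain is represented here by a generic callable; both the callable and the toy two-group model at the end are assumptions made purely for illustration.

```python
def parameter_sensitivities(sigma_of, p_ref, rel_step=0.01):
    """Finite-difference sensitivities Delta sigma_j / Delta p_k.

    sigma_of : callable returning the multigroup cross sections for a given
               parameter set (stand-in for the EMPIRE -> ENDF/B -> NJOY chain)
    p_ref    : reference values of the nuclear parameters p_k
    """
    sigma_ref = sigma_of(p_ref)               # reference NJOY calculation
    sens = []
    for k, pk in enumerate(p_ref):
        dp = rel_step * pk                    # fixed, well-chosen variation
        p_pert = list(p_ref)
        p_pert[k] = pk + dp                   # vary one parameter at a time
        sigma_pert = sigma_of(p_pert)         # rerun the processing chain
        # variation computed by difference with the reference calculation
        sens.append([(sp - sr) / dp for sp, sr in zip(sigma_pert, sigma_ref)])
    return sens                               # sens[k][j] ~ d sigma_j / d p_k

# Toy two-group, two-parameter model standing in for the real chain:
toy_sigma = lambda p: [2.0 * p[0], p[0] + p[1]]
sens = parameter_sensitivities(toy_sigma, [1.0, 2.0])
# sens ≈ [[2.0, 1.0], [0.0, 1.0]]
```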
In parallel, the cross-section sensitivity coefficients of the integral parameter R are provided, using standard generalized perturbation theory in the ERANOS code system. Folding the two contributions (from EMPIRE and ERANOS), one obtains the sensitivity coefficients of the integral measured parameters to the nuclear physics parameters; see (36). Finally, as far as data adjustment (or data "assimilation") is concerned, the methodology makes use of (i) quantified uncertainties and associated variance-covariance data, (ii) well-documented, high-accuracy, and "representative" integral experiments, and (iii) sensitivity coefficients for a variety of integral parameters.
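The folding of (36) is a plain chain-rule contraction over the multigroup index j; a minimal sketch with hypothetical toy numbers:

```python
def fold_sensitivities(s_r_sigma, s_sigma_p):
    """Fold cross-section and parameter sensitivities via the chain rule.

    s_r_sigma : s_r_sigma[j]    = relative sensitivity of R to sigma_j
                (from ERANOS, generalized perturbation theory)
    s_sigma_p : s_sigma_p[j][k] = relative sensitivity of sigma_j to p_k
                (from EMPIRE)
    Returns s_r_p[k] = sum over j of s_r_sigma[j] * s_sigma_p[j][k].
    """
    n_par = len(s_sigma_p[0])
    return [sum(s_r_sigma[j] * s_sigma_p[j][k] for j in range(len(s_r_sigma)))
            for k in range(n_par)]

# Two groups, two parameters (toy numbers only):
s_r_p = fold_sensitivities([1.0, 2.0], [[0.5, 0.0], [0.25, 1.0]])
# s_r_p == [1.0, 2.0]
```

In practice the same contraction is carried out for every isotope and reaction type present in the sensitivity sets.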
6.4. 23Na Consistent Data Assimilation. As a practical example, we have considered the case of the 23 Na isotope. For this case we have used experiments on the propagation of neutrons in a medium dominated by this specific isotope. These kinds of experiments were specifically intended for improving the data used in the shielding design of fast reactors. Two experimental campaigns taken from the SINBAD database [50] have been used in this practical application: the EURACOS campaign and the JANUS-8 campaign.
In order to perform the consistent data assimilation on 23 Na, a set of 136 nuclear parameters was selected, and the sensitivities of the multigroup cross sections to them were calculated ([51] provides the details of this step). The selected parameters include the scattering radius, the bound level and 33 resonances (for each one: the resonance peak energy E n , the neutron width Γ n , and the radiative width Γ g , for a total of 102 parameters), and 33 optical model parameters (see Table 7). As far as the experiments are concerned, a set of reaction rate slopes (one for each detector in the two experimental campaigns) was selected. The selection was based on low experimental and calculational uncertainty, a good depiction of the neutron attenuation in the energy range to be characterized by the corresponding detector, complementarity of information (assessed by correlation calculations using the sensitivity coefficients), and good consistency among the C/E values of the selected slopes. The selected slopes were the ratios of the fourth position to the first one for both detectors in the EURACOS experiment, while for the JANUS-8 experiment we selected the fourth-to-first position ratio for the 32 S and 197 Au detectors, the fourth-to-second position ratio for 55 Mn (there was no measurement in the first position), and the third-to-first ratio for 103 Rh (the fourth position has a very large experimental uncertainty). A 41-group energy structure was adopted, specifically to better describe the resonance structure of 23 Na. The ERANOS code was used to calculate the multigroup sensitivities for the selected reaction rate slopes.
A specific code was written in order to manipulate the two sets of sensitivities (of the multigroup cross sections to the nuclear parameters, and of the integral experiments to the multigroup cross sections), check their consistency, calculate the uncertainties on the measured parameters, and perform the folding of (36).
Once the sensitivities of the integral experiments to the nuclear parameters were obtained, they were used, together with the C/E values of the computational analysis shown in Section 3, for a statistical adjustment. Table 6 shows the C/E values after the adjustments for the selected reaction rate slopes.
As can be observed, except for the gold detectors, which already showed good C/E agreement, a remarkable improvement is obtained after the adjustments.
The corresponding variations of the nuclear parameters needed to obtain such an improvement are shown in Table 7. Only the parameters that require at least a 0.3% variation are reported; for the meaning of the parameter names we refer to [51].
All the variations are within 1σ of the initial uncertainties and therefore look acceptable. Some important parameters show a significant improvement in the a posteriori standard deviation (e.g., the scattering radius), which will translate into reduced uncertainties on design parameters when the "assimilated" cross sections are used.
The χ2 test after adjustment provided a value of 5.95, which is quite good in view of the fact that, for the statistical adjustment methodology adopted, the number of degrees of freedom of the problem equals the number of experiments used in the adjustment, in this case 6.
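The consistency check reported above can be sketched under the simplifying assumption of uncorrelated C/E uncertainties; the actual test uses the full covariance matrix, and the numbers below are hypothetical, not the values of Table 6.

```python
def chi_square(c_over_e, rel_unc):
    """Chi-square of adjusted C/E values against unity.

    c_over_e : calculation-to-experiment ratios after adjustment
    rel_unc  : combined relative uncertainties, assumed uncorrelated here
               (the full test would use the complete covariance matrix)
    Returns (chi2, chi2 per experiment); consistency requires chi2 to be
    comparable to the number of experiments, i.e., the degrees of freedom.
    """
    chi2 = sum(((ce - 1.0) / u) ** 2 for ce, u in zip(c_over_e, rel_unc))
    return chi2, chi2 / len(c_over_e)

# Six adjusted slopes, each within about one standard deviation of unity:
chi2, chi2_per_dof = chi_square(
    [1.02, 0.98, 1.01, 0.99, 1.03, 0.97],
    [0.02, 0.02, 0.02, 0.02, 0.03, 0.03])
# chi2 per degree of freedom close to 1 indicates a consistent adjustment
```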

Conclusions
We have shown that the sensitivity methodologies have been a remarkable success story when adopted in the reactor and fuel cycle physics field. Besides providing a unique tool to gain physics insight in reactor design and experiment analysis, sensitivity coefficients have been used for different objectives such as uncertainty estimates, design optimization, determination of target accuracy requirements, adjustment of input parameters, and evaluation of the representativity of an experiment with respect to a reference design configuration. Several key examples of importance for fast reactor assessment have been provided to corroborate the success of the methodology in "real life" and its impact on industrial applications.
Even though so much success has been achieved by the validation methodology in reactor physics, new challenges still lie ahead. One of the current major hurdles that reactor physicists are confronted with is how to provide effective feedback, coming from the results of integral experiments, to nuclear physicists. As explained previously, in the past, through the multigroup adjustment, the reactor physicist would produce ad hoc nuclear data, needed for his specific reactor design, without providing feedback to the evaluators. Of course, this approach limits the range of applicability of any findings coming out of the integral experiments.
We have illustrated, by proposing the consistent method, a comprehensive approach that copes with this problem. However, the consistent method is still in its infancy; for the moment it is restricted to single-isotope experiments and to a limited energy range of applicability. Systematics for the nuclear parameters of isotope families should be the next frontier, as well as the extension of the energy ranges to cover the whole neutron spectrum of interest for the different types of reactors.
In the same category lies the problem of the new correlations created after adjustment. In fact, (30), which defines the new a posteriori covariance matrix after adjustment, is a full matrix that correlates all the cross sections (isotopes, reactions, and energy groups), while the initial one, provided by evaluators, is quite sparse: in practice, only energy correlations are provided, with a few reaction cross-correlations. Do the new correlations calculated by the adjustment provide physically sound indications, or are they just the result of a mathematical procedure? In the former case, how should one use the information to improve evaluated nuclear data? This question is currently tackled by the OECD/NEA Subgroup 33 on "methods and issues for the combined use of integral experiments and covariance data" and surely deserves the attention of the reactor physics validation community. Subgroup 33 has already provided a comprehensive assessment of current adjustment methodologies [52].
Finally, it should be recalled that reactor and fuel cycle physics is not the only field in the nuclear energy domain where sensitivity methods have been developed. We would like to mention that there are books, not only journal articles, that present applications of adjoint sensitivity and uncertainty analysis to large-scale thermal hydraulics and thermomechanics; see, for example, [53][54][55], mostly due to the pioneering and systematic work of Professor D. Cacuci and coworkers.
In summary, sensitivity and validation methodologies in the nuclear energy domain are expected to play an even wider role in the future development of nuclear energy, in particular if advanced fuel cycles and innovative reactors are implemented.

Figure 1 :
Figure 1: (E − C)/C trend following the spectrum indicator r for different experiments.

Table 1 :
Summary of highest-priority target accuracies for fast reactors from subgroup 26.

Table 2 :
List of integral experiments to be used in the statistical adjustment.
a Experiments performed in the MASURCA facility [42]. b Irradiation experiments performed in the Phenix reactor.
Fuels (oxide, metal): a wide range of Pu/(Pu + U) ratios and corresponding spectrum types (including both fission spectrum-type experiments and softer spectra), separated capture (PROFIL irradiation experiments in PHENIX [40]) and fission rate effects for TRU (COSMO fission rate experiments [41]), combined capture and fission effects (TRAPU irradiated fuels in PHENIX with different Pu vectors), and finally reflector versus blanket effects (ZPR3-53 with blanket and ZPR3-54 with reflector, CIRANO with reflector [42]).

Table 3 :
Representativity factors for k eff .

Table 4 :
C/E and associated uncertainties (σ) before and after adjustment.
Isotope A/B: atom density ratio at the end of irradiation of a sample of isotope A. a Isotope atom density at the end of irradiation of TRAPU fuel pins with different initial Pu vectors. b Normalized fission rates and k eff in the COSMO critical experiment at MASURCA. c JEZEBEL9: Pu-239 sphere. d JEZEBEL0: Pu-239 sphere with high Pu-240 content. e k eff of the critical experiment CIRANO (high Pu content) at MASURCA.

Table 5 :
k eff Uncertainties (pcm) calculated with BOLNA and adjusted covariance.

Table 6 :
C/E before and after statistical adjustment.

Table 7 :
Parameter variations and standard deviations obtained by data assimilation.
a Nuclear scattering radius. b Bound-level resonance. c Resonance peak energy. d Optical model real volume radius for the target nucleus. e Optical model real surface diffuseness for the target nucleus. f Optical model real volume diffuseness for the target nucleus. g Optical model scaling of total cross sections due to intrinsic model uncertainty. h Optical model scaling of absorption cross sections due to intrinsic model uncertainty.