Simulating counterfactuals

Counterfactual inference considers a hypothetical intervention in a parallel world that shares some evidence with the factual world. If the evidence specifies a conditional distribution on a manifold, counterfactuals may be analytically intractable. We present an algorithm for simulating values from a counterfactual distribution where conditions can be set on both discrete and continuous variables. We show that the proposed algorithm can be presented as a particle filter, leading to asymptotically valid inference. The algorithm is applied to fairness analysis in credit scoring.


Introduction
A counterfactual distribution is the probability distribution of a random variable under a hypothetical scenario that differs from the observed reality. "What would have been the outcome for this individual if they had received a different treatment?" is an example of a counterfactual question. Here the personal data of the individual constitute the evidence that specifies the observed reality, and the interest lies in the distribution of the outcome under a hypothetical treatment. Counterfactual questions belong to the third and highest level in the causal hierarchy (Shpitser and Pearl, 2008) and are in general more difficult than associational (first level) or interventional (second level) questions. Counterfactual queries (Shpitser and Pearl, 2007; Tikka, 2022) may often be non-identifiable from observational and experimental data in a non-parametric form (Wu et al., 2019).
Counterfactual inference consists of three steps (Pearl, 2009). First, the distribution of the latent background variables is updated given the evidence expressed as a set of conditions on the observed variables. Second, the causal model is modified according to the hypothetical intervention. Third, the counterfactual distribution of the target variable is calculated in the model that represents the counterfactual scenario under the updated distribution. The first step is already a major challenge because it requires determining a multivariate conditional distribution, even when the parametric form of the causal model is known. This can be done analytically only in special cases where, for instance, all the variables are either discrete or normally distributed. Evidence that includes continuous variables leads to a conditional distribution concentrated on a manifold, which is generally difficult to simulate from.
Counterfactuals are often linked with questions about fairness, guilt, and responsibility. Notably, the fairness of prediction models has become a major concern in automated decision making (Pessach and Shmueli, 2022) where the requirement for fairness often arises directly from the legislation. For instance, the constitution of Finland states "No one shall, without an acceptable reason, be treated differently from other persons on the ground of sex, age, origin, language, religion, conviction, opinion, health, disability or other reason that concerns his or her person". Several definitions and measures of fairness have been proposed (Mehrabi et al., 2021; Caton and Haas, 2020). Among these proposals, counterfactual definitions of fairness (Kusner et al., 2017; Nabi and Shpitser, 2018; Chiappa, 2019; Wu et al., 2019; Richens et al., 2022) are intuitively appealing as they compare whether an individual decision would remain the same in a hypothetical counterfactual world. A classic example is credit scoring where the bank's decision to lend must not depend on sensitive variables such as gender or ethnicity (Bartlett et al., 2022).
There is often considerable uncertainty about the true causal mechanisms in real applications. Nevertheless, a prediction model should be fair under any reasonable causal model. This implies that a fairness evaluator can operate with a causal model that does not need to be correct and use data simulated according to the chosen causal model. Deviations from fairness in these scenarios indicate that the prediction model is not fair in general. In other words, the decision making should be fair not only for the cases seen so far but also for cases that could be potentially encountered in the future.
"Counterfactual explanations", better called "contrastive explanations" (Karimi et al., 2021), are important for explainable artificial intelligence (XAI) and interpretable machine learning (Guidotti, 2022). Our work differs essentially from most of the contrastive explainers in the literature because we operate with known structural causal models (SCM) and define counterfactuals following Pearl (2009). Differently from Karimi et al. (2021), we allow the SCM to contain unobserved confounders that affect two or more observed variables, which makes it significantly harder to simulate the counterfactual distribution.
In this paper, we present an algorithm for simulating counterfactual distributions. The algorithm is applicable in settings where the causal model is fully known in a parametric form and the structural equations for continuous variables have an additive error term or, more generally, are strictly monotonic with respect to an error term. The algorithm is essentially tuning-free.
The algorithm processes the conditioning variables one by one in a topological order and obtains an approximate sample from the updated distribution of the background variables (step 1). The challenge of continuous conditioning variables is overcome by using binary search to find solutions that satisfy the conditions and then applying sequential Monte Carlo to calibrate the distribution of these solutions. Discrete conditioning variables are processed by resampling observations that fulfil the condition. Next, the causal model is modified (step 2) and a sample from the counterfactual distribution is acquired by simulating the modified causal model with the obtained sample of the background variables (step 3).
We show that the conditional simulation at step 1 can be interpreted as a particle filter/sequential Monte Carlo algorithm (see, e.g., Gordon et al., 1993; Doucet et al., 2000; Del Moral, 2004; Cappé et al., 2005; Chopin and Papaspiliopoulos, 2020). Theoretical results from the particle filter literature guarantee good asymptotic properties of the sample. In particular, we state a mean square error convergence rate and a central limit theorem for samples obtained with the proposed algorithm.
We use the counterfactual simulation algorithm as the main component in a fairness evaluation algorithm for prediction models. We demonstrate with a credit scoring example that the fairness evaluation algorithm can be applied to opaque AI models without access to real data. As a result, we find out whether an AI model is fair in the tested setting, and, if not, how large the differences in the outcome are under different counterfactual interventions.
The rest of the paper is organized as follows. The notation and the basic definitions are given in Section 2. In Section 3, the counterfactual simulation algorithm and the fairness evaluation algorithm are introduced, and the conditional simulation is presented as a particle filter. In Section 4, the performance of the simulation algorithm is tested in benchmark cases where the counterfactual distributions can be derived analytically. In Section 5, the fairness evaluation algorithm is applied to a credit scoring example. Section 6 concludes the paper.

Notation and definitions
We use uppercase letters to denote variables, lowercase letters to denote values, and bold letters to denote sets of variables or values. The primary object of counterfactual inference is a causal model:
Definition 1 (Structural causal model). A structural causal model (SCM) is a tuple M = (V, U, F, p(u)) where
• V is a set of observed (endogenous) variables that are determined by other variables in the model.
• U is a set of background variables that are unobserved random variables.
• F is a set of functions {f_V : V ∈ V} such that each f_V is a mapping from a subset of V ∪ U to V and such that F forms a mapping from U to V. Symbolically, the set of equations F can be represented by writing V := f_V(Pa*(V)), where Pa*(V) ⊆ (V ∪ U) \ {V} is the unique minimal set of variables sufficient for representing f_V.
• p(u) is the joint probability distribution of U.
An SCM is associated with a directed graph G where there is an edge W → V if and only if W ∈ Pa * (V ). In other words, Pa * (V ) denotes the (observed and unobserved) parents of V in the graph G. We will consider only recursive SCMs where the set F defines a topological order of the variables V ∪ U, i.e., an order where Pa * (Z) < Z for all Z ∈ V ∪ U. The graph associated with a recursive SCM is a directed acyclic graph (DAG). Notation τ (V ) refers to the variables that precede V in the topological order meaning variables W ∈ V ∪ U such that W < V . We assume that variables U precede V in the topological order. Notation Pa(V ) is a shorthand notation for Pa * (V ) ∩ V meaning the observed parents. Similarly, the notation An * (V ) denotes the ancestors of V in G and An(V ) = An * (V )∩V. Notations τ (v), Pa * (v), Pa(v), An * (v) and An(v) refer to the values of the variables τ (V ), Pa * (V ), Pa(V ), An * (V ) and An(V ), respectively.
Data can be simulated from an SCM by generating the values u of the background variables U from the distribution p(u) and then applying the functions F in the topological order to obtain values v of V. For a data matrix D with named columns (variables), the following notation is used: D[i, ] refers to the ith row, D[C = c, ] where C is a column name of D refers to rows where condition C = c is fulfilled, D[, S] refers to data on variables S, i.e., columns whose variable names are in the set S, and D[i, S] refers to the ith row of variables S. The notation is similar to many high-level programming languages.
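The data matrix notation can be illustrated with a small sketch (plain Python with hypothetical data and rows as dicts, used here only for illustration; the implementation itself uses data.tables):

```python
# Illustration of the data matrix notation with hypothetical data
# (plain Python rows-as-dicts standing in for a data matrix D).

D = [
    {"X": 0, "Z": 2.0, "Y": 1},
    {"X": 1, "Z": 3.5, "Y": 0},
    {"X": 1, "Z": 1.2, "Y": 1},
]

S = ["Z", "Y"]                                # a set of variables S
row_2 = D[1]                                  # D[i, ]: the i-th row
where_x1 = [r for r in D if r["X"] == 1]      # D[X = x, ]: rows with X = 1
cols_S = [{k: r[k] for k in S} for r in D]    # D[, S]: columns of S
cell = {k: D[1][k] for k in S}                # D[i, S]: row i restricted to S
```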
An intervention do(X = x) targeting SCM M induces a submodel, denoted by M do(X=x) , where those functions in M that determine the variables X are replaced by constant functions that output the corresponding values x. The joint distribution of V \ X in the submodel M do(X=x) is known as the interventional distribution or causal effect. The effects of interventions typically relate to interventional considerations such as the effect of a treatment on a response, but they also facilitate the analysis of counterfactuals that consider hypothetical actions that run contrary to what was observed in reality.
For an SCM where all variables are discrete, a counterfactual distribution can be evaluated by marginalizing over those values of the background variables that result in the specified evidence. More specifically, the probability distribution of a set of observed variables W ⊂ V \ X in the submodel M_do(X=x) conditional on a set of observations (the evidence) C = c such that C, X ⊆ V can be written as
p(W_do(X=x) = w | C = c) = Σ_u p(W_do(X=x) = w | U = u) p(U = u | C = c).
The submodel M_do(X=x) and the updated distribution p(u | C = c) together define a parallel world. In contrast to interventional distributions, the sets of variables C and X defining the evidence C = c and the intervention do(X = x) need not be disjoint for a counterfactual distribution. If the sets are disjoint, a counterfactual distribution simply reduces to a conditional interventional distribution.
To evaluate counterfactual probabilities when some of the variables in the conditioning set C are continuous, we follow the general procedure introduced by Pearl (2009).
Definition 2 (Evaluation of a counterfactual distribution). Let M = (V, U, F, p(u)) be a recursive SCM and let C, X ⊆ V and W ⊆ V \ X. The counterfactual distribution p(W do(X=x) = w | C = c) can be evaluated using the following three steps.
1. Update p(U = u) by the evidence C = c to obtain p(U = u | C = c) (Bayes' theorem).
2. Construct the submodel M_do(X=x) corresponding to the counterfactual scenario.
3. Use the submodel M_do(X=x) and the updated distribution p(u | C = c) to compute the counterfactual distribution p(W_do(X=x) = w | C = c).
A simple illustration of counterfactual inference in a case where p(W_do(X=x) = w | C = c) can be solved analytically is presented in Appendix A.
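The three steps can be sketched on a toy additive-noise SCM (a hypothetical model chosen for illustration, not one from the paper): X := U_X and Y := 2X + U_Y. With point evidence on both X and Y, the background variables are pinned down exactly, so the counterfactual is deterministic:

```python
# Hypothetical toy SCM: X := U_X, Y := 2*X + U_Y (additive noise).
# Point evidence on X and Y pins the background variables down exactly.

def abduction(x_obs, y_obs):
    # Step 1: update the background variables given the evidence.
    u_x = x_obs               # invert X := U_X
    u_y = y_obs - 2 * x_obs   # invert Y := 2*X + U_Y
    return u_x, u_y

def counterfactual_y(x_obs, y_obs, x_do):
    u_x, u_y = abduction(x_obs, y_obs)
    # Step 2: replace the equation of X by the constant x_do (submodel).
    # Step 3: propagate the updated background variables in the submodel.
    return 2 * x_do + u_y

# "What would Y have been under do(X = 0), given X = 1 and Y = 3?"
y_cf = counterfactual_y(x_obs=1.0, y_obs=3.0, x_do=0.0)
```

In general the evidence only constrains the background variables, and step 1 yields a distribution rather than a point; that is the case the simulation algorithm of this paper addresses.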
Our objective is to simulate observations from counterfactual distributions p(W_do(X=x) = w | C = c) in the general case where an analytical solution is not available. Before the evaluation of a counterfactual probability, the SCM can be pruned similarly to what is done in causal effect identification algorithms (Tikka and Karvanen, 2018) by restricting our attention to only those variables that are relevant for the task. For this purpose, we define an ancestral SCM as follows:
Definition 3 (Ancestral SCM). Let M = (V, U, F, p(u)) be a recursive SCM and let A ⊆ V. The ancestral SCM M_A is obtained from M by removing all variables that are not in An*(A), together with their functions, and by restricting p(u) to the remaining background variables.
To evaluate a counterfactual distribution, it suffices to restrict our attention to the relevant ancestral SCM.
Theorem 1. Let M = (V, U, F, p(u)) be a recursive SCM and let η = p(W_do(X=x) = w | C = c) be a counterfactual distribution such that W ∪ X ∪ C ⊆ V. Then η evaluated via Definition 2 in M is equivalent to η evaluated via Definition 2 in M_{W∪X∪C}.
Proof. Consider a variable Y ∉ An*(W ∪ X ∪ C). As Y is not an ancestor of W, X or C, it does not have any impact on steps 1-3 of Definition 2.
This type of pruning is a useful operation because it often reduces the number of variables in the SCM and thus allows for more efficient simulation.
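The pruning step amounts to collecting the ancestors of W ∪ X ∪ C in the DAG. A minimal sketch (the graph encoding as a dict of parent lists is an illustrative choice):

```python
# Sketch of the pruning step: keep only An*(W ∪ X ∪ C) in the DAG.
# The graph encoding ({node: [parents]}) is an illustrative choice.

def ancestors(graph, targets):
    """Return the targets together with all their ancestors in a DAG."""
    result = set()
    stack = list(targets)
    while stack:
        node = stack.pop()
        if node not in result:
            result.add(node)
            stack.extend(graph.get(node, []))
    return result

# Hypothetical DAG: A -> B -> D, C -> D, E -> F; E and F are irrelevant for D.
graph = {"B": ["A"], "D": ["B", "C"], "F": ["E"]}
relevant = ancestors(graph, {"D"})
```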
The main application of counterfactual simulation is counterfactual fairness, for which different definitions have been proposed. Let S be a sensitive variable (sometimes referred to as a protected variable), Y the outcome of interest and Ŷ an estimator of Y. Kusner et al. (2017) define the decisions based on Ŷ to be counterfactually fair if
p(Ŷ_do(S=s) = y | X = x, S = s) = p(Ŷ_do(S=s') = y | X = x, S = s)
under any context X = x and for any values s and s' of S. Other authors (Nabi and Shpitser, 2018; Chiappa, 2019) have found this definition too restrictive and have instead considered path-specific counterfactual inference where the sensitive variable may affect the decision via a fair pathway. For instance, even if gender affects education, it would be fair to use education in a recruitment decision. However, it would be discriminatory to base the recruitment decision on gender or its proxies such as the given name. The recognition of fair and unfair pathways requires an understanding of the underlying causal mechanisms. Along these lines, we will use the following definition.
Definition 4 (Counterfactual fairness). Let S be a set of sensitive variables, let W be the non-sensitive parents of the outcome Y, let Z = V \ (W ∪ S ∪ {Y}), and let the conditioning variables be C = W ∪ Z ∪ S. The decisions based on an estimator Ŷ are counterfactually fair if
p(Ŷ_do(S=s, W=w) = y | C = c) = p(Ŷ_do(S=s', W=w) = y | C = c)
under any context W = w, Z = z and for any values s and s' of S.
Definition 4 fixes the non-sensitive parents W of the outcome to their observed values under any counterfactual intervention of the sensitive variables. In the recruitment example, this would mean that a counterfactual intervention on gender could change the given name, but the education would remain unchanged because it has a direct effect on job performance. This kind of definition is mentioned by Wu et al. (2019) who call it "individual indirect discrimination".

Algorithms
We proceed to construct an algorithmic framework for counterfactual simulation and fairness evaluation. The general principle of the counterfactual simulation is straightforward: the proposed simulation algorithm randomly generates the values of some background variables and numerically solves the rest of the background variables from the conditions. The obtained candidate observations are iteratively resampled with unequal probabilities so that the selected sample is an approximate random sample from the conditional distribution of the background variables. The counterfactual distribution is approximated using this sample of background variables together with the intervened SCM.
This section is organized as follows. First, the assumptions and some basic results are presented in Section 3.1. Next, the formal algorithms are presented in Sections 3.2 and 3.3, and their software implementation is briefly described in Section 3.4. Finally, in Section 3.5 we show that the simulation algorithm leads to asymptotically consistent inference and can be interpreted as a sequential Monte Carlo algorithm.

Assumptions and basic results
In order to simulate counterfactuals in cases where the set of conditioning variables C contains continuous variables, we refine the definition of the SCM by imposing some restrictions on the background variables. First, we define a special type of background variable:
Definition 5 (Dedicated error term). In an SCM M = (V, U, F, p(u)), a background variable U ∈ U is a dedicated error term if it is independent of the other background variables and has exactly one child. If an observed variable V ∈ V has only one dedicated error term, it is denoted by U_V and the notation ξ_V is used for its density function.
With this definition, the background variables can be divided into two categories U = Ũ ∪ Û, where Û is the set of dedicated error terms and Ũ is the set of other background variables, which we call global background variables henceforth. Further, let Û_C and Û_{V\C} represent the dedicated error terms of the variables in C and in V \ C, respectively. From the independence of the dedicated error terms it follows that
p(u) = p(ũ) ∏_{V ∈ V} ξ_V(u_V).
The variables Ũ act as unobserved confounders while each dedicated error term only affects the variation of one specific observed variable. We will often consider the dedicated error terms separately from the other parents of a variable and for this purpose, we introduce the notation Pa•(V) = Pa*(V) \ {U_V}. Our SCMs of interest have dedicated error terms with a monotonic effect in their respective functions.
Definition 6 (u-monotonic SCM). A recursive SCM M = (V, U, F, p(u)) is u-monotonic with respect to a set of continuous variables W ⊆ V if it has the following properties:
1. Each V ∈ V is univariate and has exactly one dedicated error term that is a continuous univariate random variable.
2. For all W ∈ W, it holds that given Pa•(W), the value of W is determined as
w = g_W(u_w; Pa•(w)),
where g_W is a continuous, differentiable and strictly increasing function of the dedicated error term u_w, and the parameters of the function are Pa•(W).
The inverse (in its domain) and the derivative of the function g_W exist and are denoted by g_W^{-1} and g'_W, respectively. Cases where g_W would be a strictly decreasing function can be accommodated in Definition 6 by redefining u_w^new = −u_w and g_W^new(u) = g_W(−u). The second property of Definition 6 is trivially fulfilled in the important special case of the additive noise model
w = g̃_W(Pa•(w)) + u_w,
where g̃_W is some function that does not depend on u_w. The formulation of Definition 6 permits g_W(u_w; Pa•(w)) to be a complicated function, which allows for heteroscedastic noise models.
The first step of Definition 2 involves deriving the conditional distribution p(U = u | C = c). The following results characterize conditional distributions in a u-monotonic SCM.
Lemma 1. In a u-monotonic SCM, let W ∈ W, let pa•(w) denote the values of Pa•(W) and let u_{W=w} = g_W^{-1}(w; pa•(w)) denote the value of the dedicated error term that solves w = g_W(u_w; pa•(w)). Then, by the standard transformation formula, W has, conditionally on pa•(w), the density
ω_W(u_{W=w}) = ξ_W(u_{W=w}) / g'_W(u_{W=w}; pa•(w))
for all w in its domain, and zero elsewhere.
The notation ω_W(u_{W=w}) is used later to assign a weight to u_{W=w}. Note that for an additive noise model, we simply have ω_W(u_{W=w}) = ξ_W(u_{W=w}). The next lemma describes the conditional distribution of the background variables.
Lemma 2. In a u-monotonic SCM, the conditional density of Ũ, Û_{V\C} | (C = c) satisfies
p(ũ, û_{V\C} | C = c) ∝ p(ũ) ( ∏_{V ∈ V\C} ξ_V(u_V) ) ( ∏_{C ∈ C} ω_C(u_{C=c}) ),
where the terms ω_C(u_{C=c}) in the last product are defined in Lemma 1.
Proof. The expression on the right-hand side is proportional to the joint density of (Ũ, Û_{V\C}, C), which can be written by the chain rule as
p(ũ, û_{V\C}, c) = p(ũ) ( ∏_{V ∈ V\C} p(u_V | ũ, τ(u_V)) ) ( ∏_{C ∈ C} p(c | ũ, û_{V\C}, τ(c)) ).
The claim follows by applying the mutual independence of the dedicated error terms to the first product and Lemma 1 to the second product.
Combining the results above, the conditional distribution of all background variables can be written as
p(u | C = c) ∝ p(ũ) ( ∏_{V ∈ V\C} ξ_V(u_V) ) ( ∏_{C ∈ C} ω_C(u_{C=c}) I(u_C = u_{C=c}) ),
where I(·) is an indicator function. In other words, ũ and û_{V\C} have a density, and û_C and the observed values depend on them deterministically as above.
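As a concrete illustration of the weight ω_W defined in Lemma 1, consider a hypothetical heteroscedastic mechanism W = g_W(u; pa) = pa + exp(pa)·u with a standard normal dedicated error term; the resulting weight coincides with the density of N(pa, exp(2·pa)) evaluated at w:

```python
import math

# Weight ω_W of Lemma 1 for a hypothetical heteroscedastic mechanism
# W = g_W(u; pa) = pa + exp(pa) * u with U_W ~ N(0, 1):
# ω_W(u_{W=w}) = ξ_W(u_{W=w}) / g'_W(u_{W=w}; pa).

def xi(u):                      # standard normal density of the error term
    return math.exp(-0.5 * u * u) / math.sqrt(2 * math.pi)

def g(u, pa):
    return pa + math.exp(pa) * u

def g_inv(w, pa):
    return (w - pa) / math.exp(pa)

def g_prime(u, pa):             # derivative of g with respect to u
    return math.exp(pa)

def omega(w, pa):
    u = g_inv(w, pa)
    return xi(u) / g_prime(u, pa)

# omega(w, pa) equals the density of N(pa, exp(2 * pa)) evaluated at w.
```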

Algorithms for conditional simulation
Next, we will consider algorithms that simulate observations from a u-monotonic SCM under given conditions (step 1 of Definition 2). First, we will present simulation algorithms for the cases with a single conditioning variable. Algorithm 1 considers the case of a continuous conditioning variable and Algorithm 2 the case of a discrete conditioning variable. Algorithm 3 processes multiple conditioning variables and calls Algorithms 1 and 2. The full workflow of counterfactual inference is implemented later in Algorithm 4. Algorithm 5 applies Algorithm 4 in fairness evaluation.

We describe the operation of Algorithm 1. On lines 3 and 5, the procedure SimulateSCM is called. The unconditional call on line 5 first simulates n realizations of the background variables from p(u) and then applies the functions F to obtain the values of the observed variables V. The conditional version on line 3 is similar, but the values of the variables in D_0 are not generated but taken directly from D_0. As a result of line 3 or 5, the data matrix D contains n rows and the values of all variables in V and U. In practice, it is not necessary to simulate the descendants of U_C on line 3 or 5 because these variables will be updated later.
Algorithm 1 An algorithm for simulating n observations from a u-monotonic SCM M = (V, U, F, p(u)) on the condition that the value of a continuous variable C is c. The optional argument D 0 is an n-row data matrix containing the values of some variables V 0 ⊂ V, U 0 ⊂ U that precede C in the topological order and have been already fixed.

Starting from line 6, the values of the error term U_C in the data matrix D are modified so that the condition C = c is fulfilled. The procedure FindRoot on line 7 uses binary search to find the value u_i of U_C so that
f_C(D[i, Pa•(C)], u_i) = c, (2)
where f_C ∈ F is the function that determines the value of C in the u-monotonic SCM M. Due to the monotonicity assumption, there is at most one value u_i that solves equation (2), and if a solution exists it can be found by binary search. If a solution cannot be found, D[i, U_C] is set as "not available". The binary search is not needed for additive noise models, where u_i can be solved analytically. On line 8, the sampling weights are calculated with the function ω_C defined in Lemma 1. If D[i, U_C] is not available, ω[i] is set to 0.
On line 9, a re-sample is drawn with the weights ω = (ω[1], ..., ω[n]) (it is assumed here that at least one of the weights is positive). The sampling is carried out with replacement, which leads to conditionally independent but non-unique observations. Uniqueness could be achieved by adding low-variance noise to the background variables before line 10, but this would lead to a small divergence from the required condition C = c. On line 10, the observed variables of M are updated because the change in U_C affects its descendants.
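The core of Algorithm 1 can be sketched as follows (an illustrative Python sketch, not the R6causal implementation; the monotone mechanism g is hypothetical): binary search for the error-term value that satisfies the condition, followed by weighted resampling.

```python
import random

# Minimal sketch of the core of Algorithm 1 (illustrative, not R6causal):
# binary search for the error-term value and weighted resampling.

def find_root(g, c, pa, lo=-50.0, hi=50.0, tol=1e-10):
    """Find u with g(u, pa) = c for g strictly increasing in u, or None."""
    if g(lo, pa) > c or g(hi, pa) < c:
        return None                    # no solution inside the bracket
    while hi - lo > tol:
        mid = (lo + hi) / 2.0
        if g(mid, pa) < c:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2.0

def resample(rows, weights, n, rng=None):
    """Draw n rows with replacement, proportional to the weights."""
    rng = rng or random.Random(1)
    return rng.choices(rows, weights=weights, k=n)

# Hypothetical monotone mechanism: C = pa + u**3, strictly increasing in u.
g = lambda u, pa: pa + u ** 3
u_root = find_root(g, c=9.0, pa=1.0)   # solves 1 + u**3 = 9, i.e. u = 2
```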
Next, we consider the case of a single discrete conditioning variable and describe the operation of Algorithm 2. Lines 3 and 5 are similar to lines 3 and 5 of Algorithm 1, respectively. On line 6, n observations are sampled with replacement from those observations that fulfil the target condition C = c. On line 7, the observed variables of M are updated because the change in U C affects its descendants.
Algorithm 2 An algorithm for simulating n observations from causal model M = (V, U, F, p(u)) on the condition that the value of discrete variable C is c. The optional argument D 0 is an n-row data matrix containing the values of some variables V 0 ⊂ V, U 0 ⊂ U that precede C in the topological order and have been already fixed.

When multiple conditions are present in the counterfactual, sequential calls of Algorithms 1 and 2 are needed. Algorithm 3 presents the required steps where the type of the conditioning variable decides whether Algorithm SimulateContinuousCondition or SimulateDiscreteCondition is called. On line 3 or 5, the data are simulated according to the condition that is the first in the topological order. Starting from line 6, the remaining conditions are processed in the topological order. On line 7, the set S contains variables that have already been processed. On line 9 or 11, the data are simulated according to each condition in the topological order taking into account that the variables in S have been already fixed in D.
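The dispatch logic of Algorithm 3 can be sketched as a loop over the conditions in topological order, with placeholder handlers standing in for Algorithms 1 and 2 (illustrative only):

```python
# Sketch of the dispatch loop of Algorithm 3: the conditions are processed
# in topological order and the handler depends on the variable type. The
# handlers are placeholders standing in for Algorithms 1 and 2.

def simulate_multiple_conditions(n, conditions, simulate_continuous,
                                 simulate_discrete):
    """conditions: list of (variable, value, is_continuous) in topological
    order; each handler receives and returns the current data matrix D."""
    D = None
    for var, value, is_continuous in conditions:
        if is_continuous:
            D = simulate_continuous(n, var, value, D)
        else:
            D = simulate_discrete(n, var, value, D)
    return D
```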

Algorithms for counterfactual inference and fairness
Next, we present high-level algorithms that use Algorithm 3 to simulate data from a multivariate conditional distribution. Algorithm 4 simulates observations from a counterfactual distribution.
Algorithm 3 An algorithm for simulating n observations from an SCM M = (V, U, F, p(u)) that is u-monotonic with respect to all continuous variables in C_1, ..., C_K under the conditions C = (C_1 = c_1) ∧ · · · ∧ (C_K = c_K). The topological order of the variables in the conditions is C_1 < C_2 < ... < C_K. The batch size n* = n is used in Algorithm 2.

Implementation
Algorithms 1-5 are implemented in the R package R6causal (Karvanen, 2023), which contains R6 (Chang, 2021) classes and methods for SCMs. SCMs are defined as a list of functions as in Definition 1, and observations can be simulated by calling these functions in a topological order. The simulation method has an option that allows some variables to be fixed to the given values (corresponding to the argument D_0 in Algorithms 1 and 2). The user has to provide the weight functions ω_C that are defined in Lemma 1 and are needed in Algorithm 1.
The data are stored using data.tables (Dowle and Srinivasan, 2023) which are fast and scale well in the number of observations and variables. Interventions explicitly modify the functions that define the SCM. With these methods for simulation and interventions, Algorithms 1-5 can be implemented as they are written in Sections 3.2 and 3.3.

Conditional simulation as a particle filter
We show that Algorithm 3 can be interpreted as a particle filter. This interpretation allows us to study the theoretical properties of the simulated sample. The notation used in this section is conventional for particle filters and differs from the rest of the paper. More specifically, we assume without loss of generality the topological order V 1 < V 2 < . . . < V J which implies Pa(V j ) ⊆ {V 1 , . . . , V j−1 }. We also assume that U j , j = 1, . . . , J is the dedicated error term of V j . We introduce a shortcut notation V 1:j = {V 1 , . . . , V j } and use V i j to denote the i th observation of V j . We will first review a basic particle filter algorithm and then show how it corresponds to Algorithm 3.
Particle filters are sequential Monte Carlo algorithms, which use sequentially defined 'proposal distributions' consisting of the initial distribution M_0(dz_0) on Z_0 and transition probabilities M_j(dz_j | z_{1:j−1}) to Z_j, and non-negative 'potential functions' G_j(z_{0:j}), where j is the time index (cf. Del Moral, 2004). Algorithm 6, ParticleFilter(M_{0:J}, G_{1:J}, n), summarises the basic particle filter: starting from draws Z_0^{1:n} from M_0, it repeats for j = 1, ..., J the steps of propagating each particle with M_j, computing the normalised weights W_j^{1:n} from the potentials G_j, drawing resampling indices A_j^{1:n} ∼ Categorical(W_j^{1:n}) and setting Z_j^i = (Z_{j−1}^{A_j^i}, Z_j^i) for i = 1:n; finally it outputs Z_J^{1:n}. The algorithm yields (approximate and dependent) samples from the probability distribution π_J(B) = γ_J(B)/γ_J(Z_J), where B ⊆ Z_J and the unnormalised 'smoothing distribution' γ_J is defined as
γ_J(dz_{0:J}) = M_0(dz_0) ∏_{j=1}^{J} G_j(z_{0:j}) M_j(dz_j | z_{0:j−1}).
For the reader's convenience, we state a mean square error convergence and central limit theorem for particle filter estimates (Chopin and Papaspiliopoulos, 2020, Propositions 11.2 and 11.3).
Theorem 2. Assume that the potential functions are bounded: ‖G_j‖_∞ = sup_z G_j(z) < ∞. Then, there exist constants b, σ_h < ∞ such that for any bounded test function h : Z_0 × · · · × Z_J → R, the output of Algorithm 6 satisfies
E[ ( (1/n) Σ_{i=1}^n h(Z_J^i) − π_J(h) )² ] ≤ b ‖h‖²_∞ / n
and
√n ( (1/n) Σ_{i=1}^n h(Z_J^i) − π_J(h) ) → N(0, σ_h²) in distribution,
where π_J(h) stands for the expected value of h(X) where X follows the distribution π_J.
Theorem 2 is stated for bounded test functions only. It is possible to prove similar results for unbounded functions, too; see for instance Rohrbach and Jack (2022) and references therein.
In order to cast Algorithm 3 as a particle filter, we need the following ingredients:
• Initial particle states contain the global background variables: Z_0 = Ũ.
• The particle states for j ≥ 1 contain the observed variables and their dedicated error terms: Z_j = (V_j, U_j).
• If V_j ∈ C, then the value of V_j is set to be v_j.
• Proposal distributions on Z_j = R²:
– If V_j is continuous and V_j ∉ C: draw U_j from its density ξ_j and set V_j = g_j(U_j; Pa•(V_j)).
– If V_j is continuous and V_j ∈ C: set V_j = v_j and U_j = g_j^{-1}(v_j; Pa•(V_j)).
– If V_j is discrete: draw U_j from its density ξ_j and set V_j = f_j(Pa•(V_j), U_j).
• Potentials:
– If V_j ∈ C and V_j is continuous: G_j = ω_j(u_{V_j=v_j}).
– If V_j ∈ C and V_j is discrete: G_j = I(V_j = v_j).
– Otherwise: G_j = 1.
We conclude that Algorithm 3 is a particle filter algorithm and therefore Theorem 2 is applicable:
Theorem 3. Let M = (V, U, F, p(u)) be a u-monotonic SCM and obtain a sample D by calling D = SimulateMultipleConditions(n, M, C), where C is a set of conditions on C ⊂ V. Further, let h : dom(V) → R be a bounded function and let π_J be the conditional distribution of V given C. If the conditional density of V_j given Pa•(V_j) is bounded for all V_j ∈ C, then the mean square error and central limit theorem as in Theorem 2 hold for h(D) and π_J.
Proof. According to Theorem 2, the potential functions must be bounded. The potential functions arising from Algorithm 1 are the weights ω_C(u_{C=c}), which equal the conditional density of C given Pa•(C) and are therefore bounded by assumption; the potential functions arising from Algorithm 2 are indicator functions and thus bounded.

We conclude the section with a number of remarks about possible extensions and elaborations of the algorithm. Algorithm 6 is similar to the seminal algorithm of Gordon et al. (1993), which can be elaborated in a number of ways so that the algorithm remains (asymptotically) valid. The resampling indices A_j^{1:n} in step 5 need not be independent, as long as they form an unbiased 'sample' from the categorical distribution (see, e.g., Douc and Cappé, 2005; Chopin et al., 2022, and references therein). Furthermore, resampling can sometimes be omitted and replaced by accumulation of the weights, and this can be done in an adaptive fashion (Liu and Chen, 1995). As a special case of adaptive resampling, we could omit the resampling on line 9 of Algorithm 1 entirely and return a weighted sample instead. In Algorithm 3, the weights would then be multiplied after processing each condition and the resampling would be carried out only as the final step. However, this approach may not work well with multiple conditions because on line 9 of Algorithm 3, the current data D[, S] would mostly contain observations with low weights. Resampling after each condition makes sure that new data are simulated only for relevant cases.
The particle filter is known to be often efficient if we are interested in the variables Z_J only, or Z_{J−ℓ} for a moderate lag ℓ. However, as we care also about Z_0, a large J often causes the samples to contain only a few or even a single unique value for Z_0. This is a well-known problem, which is referred to as 'sample degeneracy' (Li et al., 2014). Algorithm 3 remains effective for an SCM with a high total number of variables, as long as the number of conditioning variables J remains moderate.
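The basic particle filter of Algorithm 6 can be sketched generically as follows (an illustrative Python sketch with user-supplied toy proposals and potentials, not the paper's implementation):

```python
import random

# Generic bootstrap particle filter in the spirit of Algorithm 6
# (illustrative sketch; the proposals and potentials are user-supplied).

def particle_filter(m0, m_step, potential, n, steps, rng=None):
    """m0(rng) draws Z_0; m_step(j, z, rng) draws Z_j given the particle
    history z; potential(j, z) is the non-negative weight G_j."""
    rng = rng or random.Random(0)
    particles = [[m0(rng)] for _ in range(n)]
    for j in range(1, steps + 1):
        # propagate each particle through the proposal M_j
        particles = [z + [m_step(j, z, rng)] for z in particles]
        # weight by the potential G_j and resample with replacement
        weights = [potential(j, z) for z in particles]
        particles = rng.choices(particles, weights=weights, k=n)
    return particles
```

Because the whole particle history is kept, the sketch also exhibits the sample degeneracy discussed above: after many resampling steps, few unique values of Z_0 survive.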

Simulations
The critical part of Algorithm 4 is the quality of the sample returned by Algorithm 3. In this section, the performance of Algorithm 3 is studied using randomly generated linear Gaussian SCMs. Importantly, we can analytically derive the true distribution for comparison in this particular scenario.
Consider a linear Gaussian SCM with observed variables V = (V_1, ..., V_J) and mutually independent background variables U = (U_1, ..., U_H) that follow the standard normal distribution. The model is written as
V = b_0 + B_1 V + B_2 U,
where b_0 is a constant vector, B_1 is a J × J strictly triangular matrix, and B_2 is a J × H matrix. The observed variables can be expressed in terms of the background variables as
V = (I − B_1)^{−1} (b_0 + B_2 U),
and because b_0 is a constant and U ∼ N(0, I), the joint distribution of V and U is normal,
(V, U)^T ∼ N( ( (I − B_1)^{−1} b_0, 0 )^T, Σ ),
with the covariance matrix
Σ = [ (I − B_1)^{−1} B_2 B_2^T ((I − B_1)^{−1})^T , (I − B_1)^{−1} B_2 ; B_2^T ((I − B_1)^{−1})^T , I ].
Next, we consider the conditional distribution when the values of some observed variables are fixed. We partition the observed variables as V = V_1 ∪ V_2 such that the values of V_2 are fixed, and partition the joint mean vector and covariance matrix of (V_1, U)^T and V_2 accordingly as μ = (μ_1, μ_2)^T and Σ = [Σ_11 Σ_12 ; Σ_21 Σ_22]. The distribution of (V_1, U)^T conditional on V_2 = c is a normal distribution with the mean vector
μ_1 + Σ_12 Σ_22^{−1} (c − μ_2) (3)
and the covariance matrix
Σ_11 − Σ_12 Σ_22^{−1} Σ_21. (4)
Finally, we derive the distribution after the intervention. The observed variables after the intervention can be expressed as
V = b_0^• + B_1^• V + B_2^• U,
where b_0^• is obtained from b_0 by setting the constants of the intervened variables to the value of the intervention, B_1^• and B_2^• are obtained from B_1 and B_2, respectively, by setting the values in the row vectors of the intervened variables to zero, and the distribution of U | V_2 = c is defined by equations (3) and (4).
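The conditional mean and covariance in equations (3) and (4) can be sketched in the bivariate case, where the matrix inverse reduces to a division (pure-Python, illustrative only):

```python
# Bivariate special case of equations (3) and (4): for (V1, V2) jointly
# normal, V1 | V2 = c is normal (pure-Python sketch; the general case
# replaces the divisions by matrix inverses).

def condition_bivariate_normal(mu1, mu2, s11, s12, s22, c):
    """Return the conditional mean and variance of V1 given V2 = c."""
    k = s12 / s22                      # regression coefficient Σ12 Σ22^{-1}
    cond_mean = mu1 + k * (c - mu2)    # equation (3)
    cond_var = s11 - k * s12           # equation (4)
    return cond_mean, cond_var
```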
In the simulation experiment, we generate linear Gaussian SCMs with a random graph structure and random coefficients, apply Algorithm 3, and compare the simulated observations with the true conditional normal distribution given in equations (3) and (4). The parameters of the simulation experiment can be divided into the SCM parameters, the counterfactual parameters, and the parameters of Algorithm 3. The SCM parameters include the number of observed variables, the average number of neighbouring observed variables per observed variable, the average number of unobserved confounders per observed variable, and the probability distributions from which the coefficients in B_1, B_2, and b_0 are drawn. The counterfactual parameters include the number of conditioning variables. The only free parameter of Algorithm 3 is the sample size n.
The performance measures of the simulation experiment evaluate the proportion of unique observations in the sample, univariate statistics of observed variables, and the covariance structure between variables. The proportion of unique observations is calculated by dividing the number of unique observations by the sample size. The repeated observations originate from the resampling on line 9 of Algorithm 1.
The univariate statistics are calculated for an arbitrarily chosen observed variable. The chosen variable is standardised by subtracting the true mean and dividing by the true standard deviation obtained from equation (4). The mean and standard deviation of the standardised variable z should be 0 and 1, respectively. In addition, the Kolmogorov-Smirnov statistic, which measures the largest difference between the true cumulative distribution function and the empirical cumulative distribution function, is reported. Finally, the correlation coefficient between two arbitrarily chosen observed variables is compared to the true correlation coefficient.
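A minimal sketch of these performance measures in Python (the function name and implementation details are illustrative, not taken from the paper's code):

```python
import numpy as np
from math import erf

def performance_measures(sample, true_mean, true_sd):
    """Diagnostics described above: proportion of unique observations,
    mean and standard deviation of the standardised variable, and the
    Kolmogorov-Smirnov statistic against the standard normal CDF."""
    n = len(sample)
    uniq_pct = 100.0 * len(np.unique(sample)) / n  # duplicates come from resampling
    z = (sample - true_mean) / true_sd             # standardise with true moments
    zs = np.sort(z)
    cdf = np.array([0.5 * (1.0 + erf(x / 2.0 ** 0.5)) for x in zs])  # Phi(z)
    # K-S statistic: largest gap between the empirical CDF and Phi
    ks = max((np.arange(1, n + 1) / n - cdf).max(), (cdf - np.arange(n) / n).max())
    return uniq_pct, z.mean(), z.std(ddof=1), ks
```

With a correct sampler, the unique percentage stays near 100, the standardised mean and standard deviation approach 0 and 1, and the K-S statistic shrinks at the usual n^(-1/2) rate.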
The key results are summarised in Table 1. The sample sizes in the simulation were 1000, 10000, 100000, and 1000000. For each setting, there were 1000 simulation rounds. Case A with five observed variables and one condition is expected to be an easy task. Cases B and C with ten observed variables represent moderate difficulty, and cases D and E with fifty observed variables are expected to be challenging.
It can be seen that the percentage of unique observations decreases as the setting becomes more challenging. In all cases, the mean of the sample means of z was unbiased, and the variation of sample means between simulation rounds decreased as the sample size increased. The mean of sample standard deviations was close to 1 in all cases except in case E, where the variation was too small on average. This was also reflected in the Kolmogorov-Smirnov statistics, which indicated major differences from the true distribution. The correlation coefficients were unbiased or almost unbiased in all cases. The average runtime per simulation round in case E with n = 1000000 was 121 seconds. The simulation code is available at https://github.com/JuhaKarvanen/simulating_counterfactuals.

Table 1: The results of the simulation experiment. The first panel shows the SCM parameters: the number of observed variables |V|, the number of conditions, the expected number of neighbours for the observed variables, and the expected number of global background variables per observed variable. The second panel reports the performance measures for cases A-E with different sample sizes n. The measures are the percentage of unique observations out of the total of n observations (column 'uniq. %', ideally 100%), the mean of sample means of standardised variables (z, ideally 0) and its minimum and maximum, the mean of sample standard deviations (s z , ideally 1) and its minimum and maximum, the average of Kolmogorov-Smirnov statistics (K-S, ideally 0), and the average difference between the true and the sample correlation coefficients (diff. cor., ideally 0).

Application to fairness in credit scoring
In this section, we show how Algorithm 5 can be used in the fairness analysis of a scenario where the details of the prediction model are unknown and real data are not available. We consider credit scoring, where the decision to grant a bank loan to a consumer depends on personal data and the amount of the loan. The credit decision is made by an automated prediction model that can be accessed via an application programming interface (API) but is otherwise unknown to the fairness evaluator. The outcome of the prediction model is the default risk. We consider three models:

A) The prediction model has no restrictions on the predictors.
B) The prediction model does not directly use sensitive variables.
C) The prediction model uses only non-sensitive variables that have a direct causal effect on the outcome.
It is expected that the evaluation should indicate that only prediction model C is fair according to Definition 5. For the fairness evaluation, we use an SCM with the causal diagram shown in Figure 1. We consider variables that are similar to those typically present in credit-scoring datasets, such as Statlog (German Credit Data) (Hofmann, 1994). Here, default is the outcome variable that indicates whether the customer will repay the loan or not. The annual income (in euros, continuous), the savings (in euros, continuous), the credit amount (in euros, continuous), the type of housing (categorical), the level of education (ordinal), the type of job (categorical), the length of employment (in months, discrete), and ethnicity (categorical) have a direct effect on the probability of default. The marital status (categorical), the number of children (discrete), age (in years, continuous), and gender (categorical) have only an indirect effect on the risk of default. Ethnicity and gender are the sensitive variables. The address does not have a causal effect on the risk of default but is included here as a potential proxy for ethnicity. The causal diagram also has five unobserved confounders U_1, . . . , U_5. The dedicated error terms are not displayed. The exact structure of the SCM is documented in the R code fairness example.R in the repository https://github.com/JuhaKarvanen/simulating_counterfactuals.
To set up the example, we simulated training data from an SCM corresponding to the causal diagram of Figure 1 and fitted prediction models A, B, and C for the default risk using XGBoost (Chen and Guestrin, 2016;Chen et al., 2023). These models are opaque AI models for the fairness evaluator who can only see the probability of default predicted by the models.
Algorithm 5 was applied to prediction models A, B, and C in 1000 cases that were again simulated from the same SCM. In the algorithm, Y (·) was one of the prediction models and M was the SCM whose causal diagram is depicted in Figure 1. The counterfactual probability of default was estimated for each combination of gender and ethnicity. For a fair prediction model, these probabilities should be the same regardless of gender and ethnicity, as stated in Definition 4. Consequently, the difference between the largest and the smallest probability was chosen as a measure of fairness. For instance, if the estimated counterfactual probability of default was 0.09 for one combination of gender and ethnicity and 0.08 for all other combinations, the counterfactual difference would be 0.09 − 0.08 = 0.01. The fairness results based on the 1000 cases are presented in Table 2. Only prediction model C was evaluated to be fair. Even though the median counterfactual difference was close to zero for models A and B, there were cases where the difference was very large. Consequently, the use of models A and B in decision making would potentially lead to illegal discrimination.

Table 2: The fairness results for the credit scoring models. The first row indicates the percentage of cases where the counterfactual difference was exactly 0, i.e., the fairness criterion held exactly. The second row shows the percentage of cases where the difference was smaller than 0.01, i.e., the counterfactual probabilities were not necessarily the same but close to each other. The third and fourth rows report the median and maximum of the counterfactual difference, respectively.
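The counterfactual-difference measure can be sketched as follows. Here `estimate_default_prob` is a hypothetical stand-in for the combination of the simulation algorithm and the opaque prediction model accessed via the API; all names and numbers are illustrative:

```python
from itertools import product

def counterfactual_difference(estimate_default_prob, genders, ethnicities):
    """Largest minus smallest counterfactual default probability over all
    combinations of the sensitive attributes; 0 indicates fairness."""
    probs = [estimate_default_prob(g, e) for g, e in product(genders, ethnicities)]
    return max(probs) - min(probs)

# Toy case matching the worked example in the text: one combination of
# gender and ethnicity yields 0.09 and all others yield 0.08.
probs = {("F", "a"): 0.08, ("F", "b"): 0.08, ("M", "a"): 0.09, ("M", "b"): 0.08}
diff = counterfactual_difference(lambda g, e: probs[(g, e)], ["F", "M"], ["a", "b"])
```

A per-case difference of exactly 0 corresponds to the fairness criterion holding exactly, as in the first row of Table 2.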

Discussion
We presented an algorithm for simulating data from a counterfactual distribution (Algorithm 4). The algorithm can be interpreted as a particle filter and it returns an approximate sample that yields asymptotically valid inference. The algorithm possesses multiple practical strengths. It is applicable with SCMs that may simultaneously have both continuous and discrete variables and may also have unobserved confounders. The algorithm solves the non-trivial problem of conditioning on continuous variables and operates rapidly with SCMs of sizes typically found in the literature. A documented open-source software implementation is available.
The counterfactual simulation algorithm has some requirements and limitations. In contrast to causal discovery and the estimation of causal effects, the functions F and the distributions p(u) of the (ancestral) SCM must be known. This is a property of counterfactual simulation in general rather than a limitation of the proposed algorithm. The SCM must be u-monotonic with respect to the continuous conditioning variables. We do not consider this a serious restriction because the monotonicity requirement concerns only a subset of variables and still allows flexibility in the functional form of the dedicated error terms' impact.
We note that our method can be generalised to multivariate observations (clustered variables (Tikka et al., 2021)) with multivariate dedicated error terms. Namely, if the multivariate transformation is a differentiable bijection, we may modify Lemma 1 using a multivariate transformation formula. However, in this case, the inverse transformation and the Jacobian must be available for point-wise evaluations.
Sample degeneracy is a well-known problem of particle filters that also affects the counterfactual simulation algorithm when the number of conditioning variables increases. The literature on particle filters proposes ways to address the degeneracy problem. One possibility is to run independent particle filters and weight their outputs, which leads to consistent estimates (cf. Vihola et al., 2018, Proposition 23). This is appealing because of its direct parallelisability but leads to a weighted sample. Another, perhaps even more promising, approach is to iterate the conditional particle filter (CPF) (Andrieu et al., 2010), which is a relatively easy modification of the particle filter algorithm and defines a valid Markov chain targeting the desired conditional distribution.
When carefully configured, the CPF can work with other resampling algorithms (Chopin and Singh, 2015;Karppinen et al., 2022). CPFs with adaptive resampling have been suggested as well (Lee, 2011). We leave the practical efficiency of the CPF and its variants for future research.
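Degeneracy is commonly quantified with the effective sample size, a standard diagnostic from the particle-filter literature (an illustrative sketch, not part of the proposed algorithm):

```python
import numpy as np

def effective_sample_size(weights):
    """ESS = 1 / sum(w_i^2) for normalised weights w_i. Equal weights give
    ESS = n; a few dominant weights collapse ESS towards 1, signalling
    degeneracy before resampling turns it into repeated observations."""
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()
    return 1.0 / np.sum(w ** 2)
```

A low ESS before resampling foreshadows a low proportion of unique observations in the final sample, which is the degeneracy symptom reported in the simulation experiment.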
The significance of counterfactual simulation emerges from the context of fairness evaluation. The fairness evaluation algorithm (Algorithm 5) uses only simulated data, which extends the scope of evaluation to encompass data that could be potentially encountered. We perceive the proposed fairness evaluation algorithm as complementary to methods that are based on real data. It is important to note that in addition to Definition 4, the simulation algorithm can also be applied to other definitions of counterfactual fairness. Other potential applications of the counterfactual simulation algorithm include counterfactual explanations and algorithmic recourse (Karimi et al., 2021). Furthermore, the conditional simulation (Algorithm 3) is applicable to Bayesian networks that lack a causal interpretation.
wards well-informed decisions: Predicting long-term effects of policy reforms on life trajectories, grant 331817).