Pricing multi-asset derivatives by finite difference method on a quantum computer

Following recent great advances in quantum computing technology, there is growing interest in its applications to industries, including finance. In this paper, we focus on derivative pricing based on solving the Black-Scholes partial differential equation by the finite difference method (FDM), which is a suitable approach for some types of derivatives but suffers from the {\it curse of dimensionality}, that is, exponential growth of complexity in the case of multiple underlying assets. We propose a quantum algorithm for FDM-based pricing of multi-asset derivatives with exponential speedup with respect to dimensionality compared with classical algorithms. The proposed algorithm utilizes a quantum algorithm for solving differential equations, which is based on quantum linear system algorithms. Addressing the issue specific to derivative pricing, that is, extracting the derivative price for the present underlying asset prices from the output state of the quantum algorithm, we present the whole calculation process and estimate its complexity. We believe that the proposed method opens a new possibility of accurate and high-speed derivative pricing by quantum computers.


Introduction
Recently, we have been witnessing great advances in quantum computing 1 , which can speed up some computational tasks compared with existing classical computers, and there is strong interest in its industrial applications. Finance is one of the promising fields. Since large financial institutions perform enormous computational tasks in their daily business 2 , it is naturally expected that quantum computers will tremendously speed them up and make a large impact on the industry. In fact, some recent papers have already discussed applications of quantum algorithms to concrete problems in financial engineering: for example, derivative pricing [4][5][6][7][8][9][10][11][12][13][14][15][16], risk measurement [17][18][19], portfolio optimization [20][21][22], and so on. See [23][24][25] for comprehensive reviews. * koichi.miyamoto@qiqb.osaka-u.ac.jp † kenjikun@mercari.com 1 For readers who are unfamiliar with quantum computing, we refer to [1] as a standard textbook. 2 For readers who are unfamiliar with financial engineering or, more specifically, derivative pricing, we refer to [2,3].
Among a variety of applications, we here consider the quantum method for derivative pricing. Especially, we focus on the approach based on solving the partial differential equation (PDE) by the finite difference method (FDM). Let us describe the outline of the problem. First of all, we explain what financial derivatives, or simply derivatives, are. They are products whose values are determined by the prices of other simple assets (underlying assets) such as stocks, bonds, foreign currencies, commodities, and so on. They can be characterized by the payoffs paid and/or received between the parties involved in a derivative contract, whose amounts are determined by the underlying asset prices. One of the simplest examples of derivatives is a European call (resp. put) option, the right to buy (resp. sell) some asset at a predetermined price (strike) K and time (maturity) T. This is equivalent to the contract in which the option buyer receives the payoff f_pay(S_T) = max{S_T − K, 0} (call) or max{K − S_T, 0} (put) at T, where S_t is the underlying asset price at time t. Besides this, there are many types of derivatives, some of which contain complicated contract terms and are called exotic derivatives.
Since large banks hold a large number of exotic derivatives, pricing them is crucial for their business. We can evaluate a derivative price by modeling the random time evolution of the underlying asset prices by some stochastic processes and calculating the expected value of the payoff 3 under some probability measure. Since analytical formulas for the derivative price are available only in limited settings, we often resort to numerical methods. One of the major approaches is Monte Carlo simulation. That is, we generate many paths of underlying asset price evolution on some discretized time grid, and then take the average of the payoffs over the paths. We can also take another approach: since the expected value obeys some PDE, which is called the Black-Scholes (BS) PDE, we can obtain the derivative price by solving it 4 . More concretely, starting from the maturity T, at which the derivative price is trivially determined as the payoff itself, we solve the PDE backward to the present, and then find the present value of the derivative.
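As a rough classical illustration of the Monte Carlo approach described above, the following sketch prices a single-asset European call under the BS model and compares the result with the closed-form Black-Scholes price; all market parameters here are hypothetical, chosen only for illustration.

```python
import numpy as np
from math import log, sqrt, exp, erf

# Hypothetical market parameters for illustration only.
S0, K, T, r, sigma = 100.0, 100.0, 1.0, 0.05, 0.2

# Monte Carlo: simulate terminal prices under the risk-neutral measure,
# S_T = S0 * exp((r - sigma^2/2) T + sigma sqrt(T) Z), Z ~ N(0, 1).
rng = np.random.default_rng(0)
Z = rng.standard_normal(200_000)
S_T = S0 * np.exp((r - 0.5 * sigma**2) * T + sigma * np.sqrt(T) * Z)

# Discounted average payoff approximates the option price.
mc_price = exp(-r * T) * np.maximum(S_T - K, 0.0).mean()

# Closed-form Black-Scholes price for comparison.
def norm_cdf(x):
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

d1 = (log(S0 / K) + (r + 0.5 * sigma**2) * T) / (sigma * sqrt(T))
d2 = d1 - sigma * sqrt(T)
bs_price = S0 * norm_cdf(d1) - K * exp(-r * T) * norm_cdf(d2)
```

With 200,000 paths the Monte Carlo estimate agrees with the analytic price to a few cents, while its standard error shrinks only as the inverse square root of the number of paths.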
We should choose the appropriate approach according to the nature of the problem. For example, the PDE approach is suitable for derivatives whose price is subject to some continuous boundary conditions. One prominent example is the barrier option. In a barrier option contract, one or multiple levels of underlying asset prices, which are called barriers, are set. Then, they determine whether the payoff is paid at the maturity or not. For example, in a knock-out barrier option, the payoff is not paid if any of the barriers is reached at least once by T, regardless of S_T 5 . This means that the price of the knock-out barrier option is 0 at the barriers. Such a boundary condition is difficult to take into account strictly in the Monte Carlo approach because of the discretized time evolution, but it can be dealt with in the PDE approach.
Although the PDE approach is suitable in these cases, it is difficult to apply it to multi-asset derivatives, that is, the case where the number of underlying assets d is larger than 1. This is because of the exponential growth of complexity with respect to d, which is known as the curse of dimensionality. We can see this as follows. The BS PDE is (d + 1)-dimensional, where d and 1 correspond to asset prices and time, respectively. In FDM, which is often adopted for solving a PDE numerically, discretization grid points are set in the asset price directions, and partial derivatives are replaced with matrices which correspond to finite difference approximations. This converts the PDE into a linear ordinary differential equation (ODE) system, in which the dependent variables are the derivative prices on the grid points and the independent variable is time. Then, we solve the resulting ODE system. The point is that this calculation contains manipulations of matrices with exponentially large size, that is, n_gr^d × n_gr^d, where n_gr is the number of grid points in one direction. Since we have to take n_gr proportional to ǫ^{-1/2} in order to accomplish the error level ǫ, as shown later, the time complexity of this approach grows as O(poly(ǫ^{-d/2})) = O((1/ǫ)^{poly(d)}). Besides, the space complexity also grows exponentially, since we have to store the derivative prices on the grid points during the calculation. This makes the PDE approach, at least in combination with FDM, intractable on classical computers.
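The O(h²) accuracy of central finite differences is what makes n_gr proportional to ǫ^{-1/2}: halving the grid spacing reduces the error by roughly a factor of four. A small classical sketch (with sin as an arbitrary test function, not taken from the paper) checks this scaling.

```python
import numpy as np

def second_derivative_error(n_gr):
    """Max error of the central-difference d^2/dx^2 applied to sin on (0, pi),
    with n_gr interior grid points (sin vanishes at both boundaries)."""
    h = np.pi / (n_gr + 1)
    x = np.linspace(h, np.pi - h, n_gr)
    f = np.sin(x)
    d2f = np.empty(n_gr)
    d2f[1:-1] = (f[2:] - 2 * f[1:-1] + f[:-2]) / h**2
    d2f[0] = (f[1] - 2 * f[0] + 0.0) / h**2     # boundary value sin(0) = 0
    d2f[-1] = (0.0 - 2 * f[-1] + f[-2]) / h**2  # boundary value sin(pi) = 0
    return np.max(np.abs(d2f + np.sin(x)))      # exact second derivative is -sin

e_coarse = second_derivative_error(16)
e_fine = second_derivative_error(32)  # ~4x smaller: error is O(h^2)
```

Since the error is O(h²) = O(n_gr^{-2}), reaching error ǫ requires n_gr ~ ǫ^{-1/2} points per direction, hence n_gr^d ~ ǫ^{-d/2} grid points in total.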
Fortunately, quantum computers might change the situation. This is because there are some quantum algorithms for solving linear ODE systems whose time complexities depend on dimensionality only logarithmically [28][29][30][31]. This means that, in combination with these algorithms, we can remove the exponential dependence of the time complexity of FDM on dimensionality. In fact, some quantum algorithms for solving PDEs, including not only FDM-based ones but also different approaches, have already been proposed, and quantum speedup is obtained in some cases [32][33][34][35][36][37][38][39]. Note also that the space complexity can be reduced exponentially, since, using an n-qubit system, we can encode a vector of size exponential in n into the amplitudes of the quantum state.
In light of the above, this paper aims to speed up FDM-based pricing of multi-asset derivatives, utilizing the quantum algorithm. Although one might think that this is just a straightforward application of an existing algorithm to some problem, there is a nontrivial issue specific to derivative pricing. The issue is how to extract the present value of the derivative from the output of the quantum algorithm. By solving the BS PDE up to the present (t = 0) using the quantum algorithm, we obtain the vector V(0), which consists of the derivative prices on the grid points in the space of the underlying asset prices. However, it is given not as classical data but as a quantum state |V(0)⟩, in which the elements of V(0) are encoded as amplitudes of computational basis states. On the other hand, typically, we are interested in only one element of V(0), that is, V_0, the derivative price for the present underlying asset prices. This means that we have to obtain the amplitude of one specific computational basis state in |V(0)⟩. Since this amplitude is exponentially small if the number of grid points is exponentially large, reading it out requires exponentially large time complexity, which ruins the quantum speedup.
We circumvent this issue by solving the PDE up to not the present but some future time t_ter. The key observation is that V_0 can be expressed as the expected value of the derivative price 6 at an arbitrary future time. Concretely, we may proceed as follows. First, we generate two states: |V(t_ter)⟩, in which the derivative prices at t_ter are encoded, and |p(t_ter)⟩, in which the probability distribution of the underlying asset prices at t_ter is encoded. Then, we estimate the inner product ⟨p(t_ter)|V(t_ter)⟩, which is an approximation of V_0. Note that the amplitude of each basis state in |V(t_ter)⟩ contains information that determines V_0, differently from |V(0)⟩, in which the amplitude of one specific basis state is the sole necessary information. This makes the time complexity of the above procedure much smaller than that of reading V_0 out from |V(0)⟩.
In the following sections, we describe the entire process of the above calculation: setting t_ter, generating |V(t_ter)⟩ by the quantum algorithm, generating |p(t_ter)⟩, and estimating V_0. Besides, we estimate the complexity of the proposed method. We see that, in the expression of the complexity, there are no factors like (1/ǫ)^{poly(d)} but only some logarithmic factors to the power of d, which means a substantial speedup compared with classical FDM.
The rest of this paper is organized as follows. Sections 2 and 3 are preliminary ones, which outline derivative pricing based on solving a PDE by FDM and the quantum algorithm for solving ODE systems, respectively. In Section 4, we discuss approximating V 0 as the expected value of the price at t ter . Here, we also discuss how to set t ter , taking into account the probability that underlying assets reach the barrier. Section 5 presents the main result, that is, the quantum calculation procedure for V 0 and its complexity. Section 6 summarizes this paper. All proofs are shown in the appendix.

Notations
Here, we explain the notations used in this paper.
R_+ denotes the set of all positive real numbers. For a positive integer n, [n] := {1, ..., n}. 6 Again, strictly speaking, the price must be divided by some numeraire.
For a positive integer n, I_n denotes the n × n identity matrix. ‖·‖ denotes the Euclidean norm for a vector and the spectral norm for a matrix. We simply call each of them a "norm". For an n × n matrix A, µ(A) denotes the logarithmic norm associated with ‖·‖: µ(A) := lim_{h→0+} (‖I_n + hA‖ − 1)/h. When a matrix A has at most s nonzero entries in any row and column, we say that the sparsity of A is s.
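For the spectral norm, the logarithmic norm defined above equals the largest eigenvalue of the symmetric part of A (a standard fact, not stated in the text); a small sketch with an arbitrary test matrix checks this against the defining limit.

```python
import numpy as np

def log_norm(A):
    """Logarithmic norm associated with the spectral norm:
    mu(A) = lim_{h->0+} (||I + hA|| - 1)/h = lambda_max((A + A^T)/2)."""
    return float(np.max(np.linalg.eigvalsh(0.5 * (A + A.T))))

A = np.array([[-2.0, 1.0], [0.0, -3.0]])  # arbitrary test matrix
mu = log_norm(A)

# Finite-h check of the defining limit.
h = 1e-7
mu_fd = (np.linalg.norm(np.eye(2) + h * A, 2) - 1.0) / h
```

Note that µ(A) can be negative (here it is, since the symmetric part of A is negative definite), even though the norm ‖A‖ itself is always nonnegative.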
In this paper, we consider quantum states of systems consisting of some quantum registers with some qubits. For a real number x, |x⟩ denotes one of the computational basis states on some register, whose bit string corresponds to the binary representation of x. For i ∈ {0, 1}, we let |i⟩ and |ī⟩ denote a state on a multi-qubit register and a state on one qubit, respectively, in order to distinguish them. For x := (x_1, ..., x_d)^T ∈ R^d, |x⟩ denotes the (unnormalized) state in which the elements of x are encoded in the amplitudes of computational basis states, that is, |x⟩ := Σ_{i=1}^d x_i |i⟩. For an (unnormalized) state |ψ⟩, its norm is defined as ‖|ψ⟩‖ := √(⟨ψ|ψ⟩). If a state |ψ⟩ satisfies ‖|ψ⟩ − |ψ′⟩‖ < ǫ, where ǫ is a positive real number and |ψ′⟩ is another state, we say that |ψ⟩ is ǫ-close to |ψ′⟩.
2 Derivative pricing based on solving the PDE by FDM

2.1 Derivative pricing problem and the Black-Scholes PDE

In this paper, we consider the following problem.
Problem 1. Find V(t, S) satisfying the BS PDE

∂V/∂t + (1/2) Σ_{i,j=1}^d ρ_ij σ_i σ_j S_i S_j ∂²V/(∂S_i ∂S_j) + r Σ_{i=1}^d S_i ∂V/∂S_i − rV = 0 (1)

on [0, T) × D and boundary conditions. Here, t ∈ [0, T], S := (S_1, ..., S_d)^T ∈ D, σ_1, ..., σ_d and r are positive real constants, and ρ_ij (i, j ∈ [d]) are real constants such that |ρ_ij| < 1 for i ≠ j, ρ_11 = ··· = ρ_dd = 1, and the matrix ρ := (ρ_ij)_{1≤i,j≤d} is positive definite. Here, we make some comments. (1) is the so-called BS PDE, which corresponds to the following derivative pricing problem. Under some probability space (Ω, F, P), we consider the d-dimensional stochastic process S(t) := (S_1(t), ..., S_d(t))^T, t ≥ 0, obeying the following stochastic differential equation (SDE) system:

dS_i(t) = r S_i(t) dt + σ_i S_i(t) dW_i(t), i ∈ [d], (3)

where W_1, ..., W_d are Brownian motions on (Ω, F, P) satisfying dW_i dW_j = ρ_ij dt for i, j ∈ [d] and the initial value is S(0) = S_0. S_1, ..., S_d correspond to the prices of d underlying assets and (3) describes the random time evolution of S(t) under the so-called risk-neutral measure, where any asset price grows at the risk-free rate r in expectation. σ_i is the parameter called volatility, which quantifies how volatile the random movement of S_i is. This is the so-called BS model. Then, the derivative price is given by the conditional expected value of the payoff discounted by the risk-free rate. That is, the price at time t of the derivative in which the payoff f_pay(S(T)) arises at maturity T is

V(t, S) = E[e^{−r(T−t)} 1_NB f_pay(S(T)) | S(t) = S],

if S(t) = S. Here, 1_NB is a stochastic variable taking 1 if the condition for the payoff to be paid (e.g., a barrier condition) is satisfied and 0 otherwise. It is known that V(t, S) satisfies (1) and appropriate boundary conditions, which should be set according to the product characteristics of the derivative, such as barrier conditions [26,27]. We here present some typical choices: • If U_i is a knock-out barrier, that is, S_i reaching U_i leads to the payoff not being paid, the boundary value at S_i = U_i is set to 0. • Suppose that f_pay(S(T)) takes the form max{a_0 + Σ_{i=1}^d a_i S_i(T), 0} with a_0, a_1, ..., a_d ∈ R, which is the case for many types of derivatives including call and put options.
In such a case, when one of the S_i's is extremely high or low, the derivative can be far in-the-money, which means that it is highly likely that a positive payoff will be paid (e.g., a_0 < 0, a_1, ..., a_d > 0 (i.e., a basket call option) and S_i ≫ −a_0/a_i). In this situation, the derivative price is nearly equal to the discounted payoff. Therefore, we can set the boundary value V_i^UB to the discounted payoff for sufficiently large U_i. In some cases, V_i^LB can be set in a similar way.
For later convenience, we here transform the PDE (1) by the change of variables x_i := log S_i and τ := T − t, obtaining the constant-coefficient PDE (6) with the initial condition (7).

Application of FDM to the BS PDE
FDM is a method for solving a PDE by replacing partial derivatives with finite difference approximations.
In the case of (6), the approximation is as follows. First, letting n_gr be a positive integer, we introduce the grid points in the directions of x. Namely, there are n_gr equally spaced grid points in one direction and the total number of grid points in D is N_gr := n_gr^d, excluding those on the boundaries. For later convenience, we set x_i^{(−1)} = l_i and x_i^{(n_gr)} = h_i. Hereafter, we assume that n_gr is a power of 2 for simplicity, the details of which are explained in Section 5, and define m_gr := log_2 n_gr.
Then, (6) is transformed into the N_gr-dimensional ODE system (9) with the initial value Ỹ(0) = (f_pay(x^{(1)}), ..., f_pay(x^{(N_gr)}))^T =: f_pay (10).
Here, Ỹ(τ), F and C(τ), which newly appear in (9), are as follows. Ỹ(τ) := (Ỹ_1(τ), ..., Ỹ_{N_gr}(τ))^T ∈ R^{N_gr} and its k-th element is an approximation of Y(τ, x^{(k)}). F is an N_gr × N_gr real matrix, which is expressed as a sum of Kronecker products of n_gr × n_gr matrices (11), where I is the n_gr × n_gr identity matrix and the other factors are n_gr × n_gr tridiagonal matrices. C(τ) := (C_1(τ), ..., C_{N_gr}(τ))^T is necessary to take the boundary conditions into account, and its k-th element is given by (13). Then, let us discuss the accuracy of the approximation (9). First, we make the following assumption.
Assumption 2.1. Y(τ, x), the solution of (6) and (7), is four-times differentiable with respect to x_1, ..., x_d, and there exist ζ, ξ ∈ R bounding its derivatives. We then obtain the following lemma, as proved in Appendix A.1. Lemma 2.1. Let Y(τ, x) be the solution of (6) and (7), and Ỹ(τ) be that of (9) and (10). Under Assumption 2.1, if, for a given ǫ ∈ R_+, the grid is taken fine enough, then, for any τ ∈ (0, T), the stated inequality holds. Lemma 2.1 means that the root mean square of the differences between Ỹ_i(τ) and Y(τ, x^{(i)}) is upper bounded by ǫ. This result will be reflected in the estimation of the error in the proposed method for Problem 1.
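The Kronecker-product structure of F described above can be sketched classically. In the sketch below, D is a generic second-difference tridiagonal block standing in for the paper's n_gr × n_gr blocks (whose actual entries involve σ_i, ρ_ij, r and the grid spacing), and only the non-mixed (single-asset) terms are formed.

```python
import numpy as np
from functools import reduce

n_gr, d = 4, 3
I = np.eye(n_gr)

# Generic tridiagonal second-difference block (placeholder coefficients).
D = (np.diag(np.full(n_gr - 1, 1.0), 1)
     - 2.0 * np.eye(n_gr)
     + np.diag(np.full(n_gr - 1, 1.0), -1))

def kron_at(block, i):
    """I ⊗ ... ⊗ block (at position i) ⊗ ... ⊗ I over d factors."""
    return reduce(np.kron, [block if j == i else I for j in range(d)])

# Non-mixed part of an F-like matrix: sum over the d asset directions.
F_diag = sum(kron_at(D, i) for i in range(d))
nonzeros_per_row = (F_diag != 0).sum(axis=1)
```

Even for this toy size the matrix is n_gr^d × n_gr^d = 64 × 64, while each row has only O(d) nonzeros: the matrix is exponentially large but sparse, which is exactly the structure the quantum ODE solver exploits.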

Quantum algorithm for solving ordinary differential equation systems
In this section, we outline the algorithm of [29]. This is an algorithm for solving the linear ODE system

dx(t)/dt = A x(t) + b (17)

with the initial condition x(0) = x_ini. Here, x(t) ∈ R^N, A ∈ R^{N×N} is a constant diagonalizable matrix, and b ∈ R^N is a constant vector. Suppose that we want to find x(T) for some T ∈ R_+. The algorithm is based on the formal solution of (17),

x(T) = e^{AT} x_ini + (e^{AT} − I_N) A^{−1} b. (18)

In order to calculate this, we consider the linear equation system (19) on the tensor product space V := R^{q+1} ⊗ R^N, where the former factor is the auxiliary space and the latter is the original space on which A operates. Here, m, p, k are positive integers set large enough (see the statement of Theorem 3.1), q is determined by m, k and p, and the matrix C_{m,k,p} is designed based on the Taylor expansion of (18). The solution of (19) can be written in terms of some vectors x_{i,j}, x_m ∈ R^N, and x_m becomes close to x(T), which we want to find. Note that x_m is repeated p times in the solution X, which enhances the probability of obtaining the desired vector in the output quantum state of the algorithm. Although C_{m,k,p}(Ah_t) is an extremely large matrix, quantum algorithms for solving linear equation systems (QLS algorithms) [33][40][41][42] can output the solution of (19) with complexity only logarithmic in the number of rows (or columns) of C_{m,k,p}(Ah_t). The quantum algorithm in [29] leverages the algorithm in [42]. In order to use it, [29] assumes that the following oracles (i.e., unitary operators) are available: • O_{A,1}: for the matrix A, given a row index j and an integer l, this returns ν(j, l), the column index of the l-th nonzero entry in the j-th row. • O_{A,2}: for the matrix A, given a row index j and a column index k, this returns the (j, k) entry. • O_{x_ini}: this prepares (1/‖x_ini‖)|x_ini⟩ under the control of another qubit. • O_b: this prepares (1/‖b‖)|b⟩ under the control of another qubit; when b = 0, this is an identity operator.
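The truncated-Taylor time stepping that C_{m,k,p} encodes can be mimicked classically as follows; the toy system, step count and Taylor order below are arbitrary illustrations, not values prescribed by [29].

```python
import numpy as np

def taylor_step_matrix(A, h, k):
    """Degree-k Taylor polynomial of exp(A h)."""
    N = A.shape[0]
    term = np.eye(N)
    total = np.eye(N)
    for j in range(1, k + 1):
        term = term @ (A * h) / j
        total = total + term
    return total

def march(A, b, x0, h, m, k):
    """m steps of x(t+h) = e^{Ah} x(t) + (e^{Ah} - I) A^{-1} b,
    with e^{Ah} replaced by its degree-k Taylor polynomial."""
    Ainv_b = np.linalg.solve(A, b)
    P = taylor_step_matrix(A, h, k)
    I = np.eye(A.shape[0])
    x = x0.copy()
    for _ in range(m):
        x = P @ x + (P - I) @ Ainv_b
    return x

# Small toy system (values are arbitrary illustrations).
A = np.array([[-1.0, 2.0], [0.0, -3.0]])
b = np.array([0.5, 0.5])
x0 = np.array([1.0, 0.0])
h, m = 0.1, 10

x_k8 = march(A, b, x0, h, m, k=8)    # modest Taylor order
x_ref = march(A, b, x0, h, m, k=30)  # essentially exact at this step size
```

The quantum algorithm does not march step by step like this; instead it encodes all m steps of the truncated recursion into the single sparse linear system (19) and hands that system to a QLS solver.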
Then, we present the theorem (Theorem 9 in [29]), which states the query complexity of the algorithm, with a slight modification.
Theorem 3.1. Suppose that A has at most s nonzero entries in any row and column, and that we have the oracles O_{A,1}, O_{A,2} as above. Suppose that x_ini and b are N-dimensional vectors with known norms and that we have the oracles O_{x_ini} and O_b as above. Let x evolve according to the differential equation (17) with the initial condition x(0) = x_ini. Then there exists a quantum algorithm that produces a state |Ψ⟩, which is ǫ-close to Σ_j |j⟩|ψ_j⟩ with some unnormalized states |ψ_0⟩, |ψ_1⟩, ..., |ψ_{p(k+1)−1}⟩ and satisfies ⟨Ψ_gar|Ψ_gar⟩ = O(g²(p + 1)‖x(T)‖²).
The modifications from Theorem 9 in [29] are as follows. First, in [29], it is assumed that we perform post-selection and obtain |x(T)⟩/‖|x(T)⟩‖ (strictly speaking, a state close to it). On the other hand, in Theorem 3.1, the output state is not purely |x(T)⟩/‖|x(T)⟩‖ but contains |x(T)⟩ as a part, in addition to the unnecessary state |Ψ_gar⟩. This is because, in this paper, we use the algorithm of [29] as a subroutine in quantum amplitude estimation (QAE) [43][44][45][46][47], as explained in Section 5, and the iterated subroutine in QAE must be a unitary operation. This means that we cannot perform post-selection, since it is a non-unitary operation. Note also that we do not perform amplitude amplification for |Ψ⟩, which is done before post-selection in [29], and thus a factor g, which exists in the expression of the complexity (112) in [29], has dropped from (28) in this paper. Moreover, the meaning of the closeness ǫ is different between Theorem 3.1 in this paper and Theorem 9 in [29]. In the former, ǫ is the closeness between the output state and Σ_j |j⟩|ψ_j⟩, which corresponds to δ in [29]. On the other hand, Theorem 9 in [29] refers to the closeness of the state after post-selection to |x(T)⟩/‖|x(T)⟩‖. This difference also makes (28) different from (112) in [29].

4 Approximating the present derivative price as the expected value of the price at a future time

As we explained in the introduction, we aim to calculate V_0 as the expected value of the discounted price at some future time. Concretely, we set t_ter ∈ (0, T) and calculate

V_0 = e^{−r t_ter} ∫ p_NB(t_ter, S) V(t_ter, S) φ(t_ter, S) dS, (29)

where φ(t, S) is the probability density function of S(t), and p_NB(t, S) is the conditional probability that no event which leads to extinction of the payoff happens by t, given S(t) = S. Although (29) holds for any t_ter, for effective numerical calculation, t_ter should be set carefully.
Recalling our motivation to evade the exponential complexity of reading out V_0, which is explained in Section 1, we want to set t_ter as large as possible. On the other hand, there are some reasons to set t_ter small because of the existence of boundaries. First, note that it is difficult to find p_NB(t_ter, S) explicitly in the multi-asset case. However, for sufficiently small t_ter, p_NB(t_ter, S) is nearly equal to 1, since the payoff is paid at least if S(t) does not reach any boundary, and the probability that S(t) reaches any boundary can be neglected for times close to 0. Besides, note that, by solving the PDE, we obtain the derivative prices only at points inside the boundaries. For small t_ter, we can approximately calculate V_0 using only the information inside the boundaries, since the probability mass of S(t_ter) beyond the boundaries is negligible. In summary, we should set t_ter as large as possible within the range in which the probability distribution of S(t_ter) is almost confined within the boundaries. For such t_ter, we can approximate (29) by an integral over the bounded domain, or, equivalently, by (31), where τ_ter := T − t_ter and φ̃(t, x) is the probability density of x(t) under the BS model (3), which will be explicitly given later.
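The trade-off in choosing t_ter can be illustrated in one dimension: the mass of the terminal log-price distribution inside fixed boundaries is close to 1 for small t_ter and decays as t_ter grows. The parameters and boundaries below are hypothetical, and the sketch looks only at the terminal distribution, not at path-wise barrier crossings.

```python
from math import erf, sqrt, log

# Hypothetical 1-d setup: log price x(t) ~ N(log S0 + (r - sigma^2/2) t, sigma^2 t).
S0, r, sigma = 100.0, 0.05, 0.2
l, h = log(50.0), log(200.0)  # hypothetical boundaries in the log price

def Phi(z):
    """Standard normal CDF."""
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

def mass_inside(t):
    """P(l < x(t) < h): terminal-distribution mass inside the boundaries.
    A rough guide for setting t_ter; it ignores crossings before time t."""
    mu = log(S0) + (r - 0.5 * sigma**2) * t
    sd = sigma * sqrt(t)
    return Phi((h - mu) / sd) - Phi((l - mu) / sd)
```

For this setup `mass_inside(0.5)` is essentially 1, while for large t (say t = 10) a substantial fraction of the mass has escaped, so such a t would be too large a choice of t_ter.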
Considering the above points, we obtain a lemma which gives a criterion for setting t_ter. First, we make an assumption, which is necessary to upper bound the contribution to the integral (29) from outside the boundaries.
for any S ∈ D.
That is, we assume that the payoff is upper bounded by some linear function, which holds in many cases, such as call/put options on linear combinations of S_1, ..., S_d (i.e., basket options). Then, the following lemma holds.
the inequality (36) holds, where D̃ is the region of integration in (35). The proof is given in Appendix A.2. Note that, in (35), the region of integration is slightly different from D, the interior of the boundary in the x domain. This is just for interpreting the finite-sum approximation of the integral as the midpoint rule, as explained in the proof of Lemma 5.1.

Quantum method for derivative pricing by FDM
In this section, we finally present the quantum method for derivative pricing by FDM. Our idea is to calculate the present derivative price V_0 as (29), the expected value of the price at the future time t_ter. As explained in Section 4, we approximate (29) as (31). In fact, we have to approximate (31) further, since we obtain the derivative prices only on the grid points by solving the PDE using FDM. Therefore, we approximate (31) by a sum over the grid points, where p_k, defined explicitly below, is the probability that x(t_ter), the log prices of the underlying assets at t_ter, lies at the k-th grid point. In other words, we calculate (39), where p := (p_1, ..., p_{N_gr})^T. Hereafter, we discuss how to estimate this inner product.
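The inner product p · Ỹ is, classically, just a midpoint-rule expectation. A one-dimensional sketch (standard normal distribution and value function x², so the exact expectation is 1; both are arbitrary stand-ins for the paper's φ̃ and Ỹ) illustrates the approximation:

```python
import numpy as np

# Midpoint-rule expectation as an inner product p . v on a grid.
# X ~ N(0, 1); value function v(x) = x^2, so the exact expectation is 1.
mu, sigma, n_gr = 0.0, 1.0, 256
edges = np.linspace(mu - 8 * sigma, mu + 8 * sigma, n_gr + 1)
x = 0.5 * (edges[:-1] + edges[1:])  # grid midpoints
dx = edges[1] - edges[0]

density = np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))
p = density * dx                    # p_k ~ probability of X being near x_k
v = x**2                            # values on the grid

estimate = p @ v                    # inner product ~ E[X^2] = 1
```

On a quantum computer, the same inner product is not computed termwise; it is read off as an amplitude overlap between the state encoding p and the state encoding the grid values, which is what Section 5 implements with QAE.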

Generating the probability vector
Firstly, let us discuss how to generate p, the vector which represents φ̃(t_ter, x), the probability distribution of x(t_ter), as a quantum state. As we will see below, although we aim to generate a quantum state in which the amplitudes of the basis states are proportional to φ̃(t_ter, x), we can apply the method for generating a state in which the amplitudes are square roots of probabilities [10,48], since φ̃(t_ter, x) can be regarded as the square root of a probability density under another distribution. Concretely speaking, we aim to generate the vector p := (p_1, ..., p_{N_gr})^T as a normalized quantum state |p⟩, where φ̃(t, x), the probability density of x(t), is explicitly given as the density of the d-dimensional normal distribution with mean µ and covariance matrix Σ. Here, note that (φ̃(t, x))² is ϕ(x) times a constant independent of x, where ϕ(x) is the probability density function of another d-dimensional normal distribution. Therefore, |p⟩ is approximately the state |ϕ⟩, in which ϕ(x) is encoded into the squares of the amplitudes: the squared amplitude of |k⟩ is close to ϕ(x^{(k)}) Π_{i=1}^d h_i, and the normalization factor is close to 1. Then, the task boils down to generating |ϕ⟩. This can be done by a multivariate extension of the method of [48] for univariate distributions. The concrete procedure is Algorithm 1. Here, note that |k⟩ can be decomposed as |k⟩ = |k_1⟩ ··· |k_d⟩, where each |k_i⟩ is a state on an m_gr-qubit register (recall that n_gr = 2^{m_gr}), and |k_i⟩ can be further decomposed into the states of the individual qubits. In Algorithm 1, for i = 1 to d and j = 1 to m_gr, using k_1, ..., k_{i−1}, indicated by the first to (i−1)-th registers, and k_i^{[1]}, ..., k_i^{[j−1]}, the bits on the first to (j−1)-th qubits of the i-th register, we rotate the j-th qubit in the i-th register, transforming the entire state step by step. Here, k̄ denotes the integer whose m_gr-bit representation is k_i^{[1]} ··· k_i^{[j]} followed by m_gr − j zeros.
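The square-root-of-probability trick above rests on the fact that the square of a normal density is, up to a constant, another normal density with halved variance; a one-dimensional numerical check (with arbitrary µ and σ) is:

```python
import numpy as np

# Squaring the N(mu, sigma^2) density gives, up to normalization, the
# N(mu, sigma^2/2) density: amplitudes proportional to the density therefore
# yield measurement probabilities drawn from a normal with halved variance.
mu, sigma = 0.3, 0.8                             # arbitrary illustration values
x = np.linspace(mu - 6 * sigma, mu + 6 * sigma, 512)

phi = np.exp(-0.5 * ((x - mu) / sigma) ** 2)     # unnormalized N(mu, sigma^2)
amp = phi / np.linalg.norm(phi)                  # normalized amplitudes
prob = amp**2                                    # measurement probabilities

narrow = np.exp(-0.5 * ((x - mu) / (sigma / np.sqrt(2.0))) ** 2)
narrow = narrow / narrow.sum()                   # discretized N(mu, sigma^2/2)
```

This is why a state-preparation routine designed for square roots of probabilities can be reused here: loading the square roots of the narrower distribution's probabilities produces exactly the amplitudes proportional to φ̃.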
At the end of this subsection, let us evaluate the error of (39) as an approximation of (31). As preparation, we evaluate the normalization factor P. Besides, we make an additional assumption.
Assumption 5.1. For Y(τ, x), the solution of (6) and (7), andφ(t, x), the probability density function of x(t) under the BS model (3), there exists η ∈ R such that Then, we obtain the following lemma, which guarantees us that we can approximate the integral by the finite sum over the grid points.
the following inequality holds, where p is defined as (40) and Ỹ is the solution of (9).
The proof is given in Appendix A.3.

Generating the derivative price vector
Next, let us consider how to generate V, the vector which encodes the derivative prices on the grid points at t_ter. Precisely speaking, since we solve (9), we actually obtain the vector Ỹ, which encodes the approximations of Y(τ_ter, x) on the grid points. Furthermore, by the algorithm presented in Section 3, we obtain not Ỹ itself but a quantum state like (27), which contains a state corresponding to Ỹ along with a garbage state. For a precise discussion, let us first make some assumptions in order to satisfy the preconditions for using the quantum algorithm. The first one is as follows: C(τ) in (9) is independent of τ.
Then, hereafter, we simply write C(τ) as C. We make this assumption in order to fit the current setting to [29], which considered solving (17) for constant A and b (note that F in (9) is constant). Although C(τ) is not generally time-independent, the assumption is satisfied in some cases: • In some cases, a derivative is far in-the-money for a party at some points on the boundary, which means that the party would receive a constant payoff K at the maturity with high probability. For example: -The payoff is of cash-or-nothing type.
-The payoff is capped, that is, the payoff function takes the form of f pay ( S ) = min{ f ( S ), K} with some function f ( S ).
In these cases, we can approximate V(t, S) ≈ e^{−r(T−t)} K, which means that Y(τ, x) ≈ K, on such points.
• If a boundary corresponds to a knock-out barrier, V(t, S ) = 0 on it.
Of course, there are many cases where C(τ) is time-dependent, and it is desirable to extend our method to such cases. We leave this as future work. The second assumption is that we have the following oracles: O_{F,1}, which, given j ∈ [N_gr] and l ∈ [s_F], where s_F is the sparsity of F, returns ν(j, l), the column index of the l-th nonzero entry in the j-th row; and O_{F,2}, which, given j, k ∈ [N_gr], returns the (j, k) entry z ∈ R. Besides, for f_pay in (10) and C in (13), we assume the state-preparation oracles O_{f_pay} and O_C for C ≠ 0, while O_C is an identity operator for C = 0.
Since F is explicitly given as (11), a sum of Kronecker products of tridiagonal matrices, the construction of O_{F,1} and O_{F,2} is straightforward. On the other hand, f_pay and C are highly problem-dependent, and so are O_{f_pay} and O_C. Therefore, we simply assume their availability in this paper, referring to some specific cases.
• By analogy with the preparation of |p⟩, we see that we can prepare |f_pay⟩, where x_{i,j}^L and x_{i,j}^R are defined as (53), for i ∈ [d], j ∈ {0, 1, ..., m_gr}, k_1, ..., k_d ∈ {0, 1, ..., n_gr − 1} and b_1, ..., b_j ∈ {0, 1}. Although it is difficult to calculate the required quantities analytically in general, there are some cases where it is possible. An example is the case where f_pay depends on only one underlying asset price, say S_1 (the other assets being relevant only to the barrier), and has a simple functional form, e.g., f_pay(S) = max{S_1 − K, 0}.
• If all boundaries correspond to knock-out barriers, C = 0, and therefore O C is just an identity operator.
Then, we obtain the following lemma, whose proof is presented in Appendix A.4.

Proposed algorithm
Finally, based on the above discussions, we present the quantum method to calculate the present derivative price V_0. Our strategy is to calculate this as (39). More concretely, we aim to extract the information of p · Ỹ(τ_ter) from |Ψ⟩ in (66), the output state of the algorithm of [29].
In order to do this, we first modify the algorithm slightly. That is, we aim to solve, by the QLS algorithm, not (19) but the system (70). Here, m, p, k are integers defined in the statement of Lemma 5.2, q := m(k + 1) + 2p + 1, h_t = τ_ter/m, X ∈ R^{N_gr(q+1)}, {e_i}_{i=0,1,...,q} is an orthonormal basis of R^{q+1}, and γ := (γ, ..., γ)^T ∈ R^{N_gr} for some γ ∈ R_+. Hereafter, we make the following assumption on γ: Assumption 5.4. We are given γ ∈ R_+ satisfying (71), where Ȳ(τ_ter) is the root mean square of Y(τ_ter, x) on the grid points. This means that γ is comparable with that root mean square. Besides, the N_gr(q+1) × N_gr(q+1) matrix C̃_{m,k,p}(Fh_t) is now defined analogously to C_{m,k,p}(Ah_t). The solution of (70) is written in terms of some vectors Ỹ_{i,j}, Ỹ(τ_ter) ∈ R^{N_gr}, where Ỹ(τ_ter) becomes close to the solution of (9) at τ_ter. Note that, in X, Ỹ(τ_ter) and γ are each repeated (p + 1) times. Then, applying the quantum algorithm, we can generate the quantum state |Ψ_mod⟩, which is ǫ-close to the corresponding ideal state. Note that the query complexity for generating |Ψ_mod⟩ is (67), similarly to |Ψ⟩. This is because the complexity of the QLS algorithm depends only on the condition number and sparsity of the matrix and on the tolerance [42], and the condition number and sparsity of C̃_{m,k,p}(Fh_t) are the same as those of C_{m,k,p}(Fh_t).
Using |Ψ_mod⟩, we can estimate p · Ỹ(τ_ter). The outline is as follows. First, we estimate the inner product ⟨Π|Ψ_mod⟩ by estimating the amplitude of |0⟩|0⟩ in U_Π^† U_{Ψ_mod} |0⟩|0⟩ using QAE, where U_{Ψ_mod} and U_Π are the unitary operators such that (80) and (81) hold, respectively. Note that, if we can generate |p⟩, we can also generate |Π⟩, since it is just the tensor product of (1/√(p+1)) Σ_{j=p(k+1)}^{p(k+2)} |j⟩ and |p⟩. Next, by QAE, we estimate the probability that we obtain j ∈ {p(k+2)+1, ..., p(k+3)+1} in the first register when we measure |Ψ_mod⟩, and thereby obtain an estimate of γ √((p+1)N_gr) / Z. Finally, using E_1 and E_2, the outputs of the first and second estimations, respectively, we calculate an estimate of e^{−rT} p · Ỹ(τ_ter). The detailed procedure is described as Algorithm 2.

Algorithm 2 Calculate e^{−rT} p · Ỹ(τ_ter)
Require: γ ∈ R_+ satisfying (71); ǫ ∈ R_+ satisfying (33) and (34); ǫ_1 ∈ R_+ satisfying (83); ǫ_2 ∈ R_+ satisfying (84); ǫ_{Ψ_mod} ∈ R_+ satisfying (87); accesses to the oracle U_{Ψ_mod} satisfying (80) and (86), and its inverse; accesses to the oracle U_Π satisfying (81), and its inverse.

Here, taking some ǫ ∈ R_+, we require the tolerances ǫ_1 and ǫ_2 in calculating E_1 and E_2 to satisfy (83) and (84), which involve V̄, the root mean square of the derivative prices on the grid points at time t_ter. Besides, we require that ǫ_{Ψ_mod} satisfies (87). These requirements guarantee that the overall error is smaller than ǫ. We formally state these points, along with the complexity of the procedure, in Theorem 5.1, whose proof is presented in Appendix A.5. Then, for any ǫ ∈ R_+ satisfying (33) and (34), Algorithm 2 outputs a real number ω satisfying the stated error bound, with the stated complexity. Here, σ_max := max_{i∈[d]} σ_i, Ξ := max{ξ, ζ/d}, τ_ter := T − t_ter, t_ter is defined as (36), Δ_i is defined as (57), V̄ := √((1/N_gr) Σ_{k=1}^{N_gr} (V(t_ter, S^{(k)}))²), and κ_V := ‖V‖ ‖V^{−1}‖ is the condition number of the matrix V, which diagonalizes F.
Let us make some comments. First, note that the upper bound (89) on the complexity does not contain any factor like (1/ǫ)^{poly(d)}, which means a tremendous speedup with respect to ǫ and d compared with the classical FDM. On the other hand, the exponential dependence on d has not completely disappeared. In fact, (89) contains some constants raised to the power of d, as well as factors involving ∆_i, the width between the boundaries in the direction of x_i, the logarithm of the i-th underlying asset price, divided by σ_i√t_ter, the latter of which roughly measures the extent of the probability distribution of x_i at time t_ter. Since the boundary widths grow only logarithmically as the tolerance decreases, these factors are just logarithmic factors raised to the power of d.
Second, we note that some calculation parameters are difficult to determine in advance of pricing. For example, although we have assumed that we know γ such that (71) holds, finding such a γ in advance is difficult, because we do not know Ȳ(τ_ter). Besides, although we set p = ⌈||F||τ_ter⌉ in using the algorithm of [29], it is difficult to set p to this specific value, since we can upper bound ||F|| but cannot calculate it precisely. Even in [29], the way to set p = ⌈||F||τ_ter⌉ is not presented. A similar discussion applies to other parameters: k, h_i, and so on. Fortunately, the algorithm works not only for these specific parameter values but also for comparable ones. The factors 1/2 and 2 in (71) can be replaced with comparable values (say, 1/3 and 3), which changes the complexity only by some O(1) factor. Taking p larger than but comparable with ⌈||F||τ_ter⌉ (say, 2⌈||F||τ_ter⌉) results in computational accuracy and complexity comparable with those for p = ⌈||F||τ_ter⌉. In practice, we may perform the computation for various parameter values and search for the appropriate ranges, in which the calculated derivative price converges. In practical business, once we find a set of appropriate calculation parameters, we can continue to use it, with periodic checks of convergence, since we typically perform pricing many times in different but similar settings of model parameters (e.g., σ_i) and contract terms (e.g., barrier level).
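The parameter search described above can be sketched as a simple convergence loop. Here the pricing routine is a hypothetical classical stand-in (a truncated-Taylor approximation of w · e^A y0), and the doubled parameter plays the role of, e.g., the truncation order k or the repetition count p.

```python
import numpy as np

# Hypothetical stand-in for the pricing routine: "price(k)" returns
# w · T_k(A) y0, with T_k the order-k Taylor polynomial of e^A.
rng = np.random.default_rng(2)
A = 0.3 * rng.standard_normal((6, 6))
y0 = rng.standard_normal(6)
w = rng.random(6)

def price(k):
    term, total = y0.copy(), y0.copy()
    for j in range(1, k + 1):
        term = A @ term / j          # accumulates (A^j / j!) y0
        total = total + term
    return w @ total

# double the parameter until two successive results agree within tolerance
tol, k = 1e-8, 1
prev = price(k)
while True:
    k *= 2
    cur = price(k)
    if abs(cur - prev) < tol:
        break
    prev = cur
print(k, cur)   # converged truncation order and the corresponding "price"
```

The same pattern applies to the quantum setting: rerun the pricer with the parameter doubled until the output stabilizes, then keep that parameter set across similar pricing runs.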

Summary
In this paper, we studied how to apply the quantum algorithm of [29] for solving linear differential equations to pricing multi-asset derivatives by FDM. As we explained, FDM is an appropriate method for pricing some types of derivatives, such as barrier options, but suffers from the so-called curse of dimensionality, which makes FDM infeasible for large d, the number of underlying assets, since the dimension of the corresponding ODE system grows as (1/ǫ)^{poly(d)} for the tolerance ǫ, and so does the complexity. We saw that the quantum algorithm for solving ODE systems, which provides an exponential speedup with respect to the dimensionality compared with classical methods, is beneficial also for derivative pricing. In order to address the issue specific to derivative pricing, that is, extracting the present price from the output state of the quantum algorithm, we adopted the strategy of calculating the present price as the expected value of the price at some appropriate future time t_ter. We then constructed the concrete calculation procedure, which is a combination of the algorithm of [29] and QAE. We also estimated the query complexity of our method, which does not contain any dependence like (1/ǫ)^{poly(d)} and thus shows a tremendous speedup with respect to ǫ and d.
We believe that this paper is a first step for research in this direction, but there remain many points to be improved. First, we should consider whether the assumptions we made can be mitigated. For example, although we assume that C(τ) is time-independent (Assumption 5.2), some products do not fit this condition: e.g., when we consider the upper boundary condition in the case of the European-call-like payoff f_pay(S) = max{S − K, 0} with some constant K, V(t, S) ≈ S − e^{−r(T−t)}K, and therefore Y(τ, x) = e^{rτ}V(t, S) cannot be regarded as constant for large S. In order to remove this assumption, we might be able to extend the algorithm of [29] so that it can be applied to time-dependent C(τ).^7 Another important aspect is pricing early-exercisable derivatives. American-type (resp. Bermudan-type) derivatives, in which either party can terminate the contract at any time (resp. on any of some predetermined dates) before the final maturity T, are widely traded, and their pricing is important for banks. FDM is suitable and often used for pricing such products, since it determines the derivative price backward from T and can take early exercise into account. However, it is not straightforward to apply the quantum method proposed in this paper to pricing early-exercisable products. This is because, at an exercisable date t_exe, we need the operation V(t_exe, S) = max{V(t_exe + 0, S), f_pay(S)}, where V(t_exe + 0, S) is the derivative price right after t_exe, but nonlinear operations on amplitudes, such as the max function, cannot be naively implemented on a quantum computer.
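For contrast, the classical 1D explicit-FDM sketch below shows how naturally the early-exercise update V ← max(V, f_pay) fits into backward induction; all model numbers are illustrative, and this pointwise max is exactly the operation that lacks a naive amplitude-level analogue.

```python
import numpy as np
from math import erf, exp, log, sqrt

# Classical 1D explicit FDM for a put under Black-Scholes, with and
# without the early-exercise step V <- max(V, payoff).
r, sigma, K, T = 0.05, 0.2, 1.0, 1.0
n_x, n_t = 201, 2000
x = np.linspace(log(K) - 3.0, log(K) + 3.0, n_x)   # grid in x = log S
dx, dt = x[1] - x[0], T / n_t                      # dt small enough for stability
S = np.exp(x)
payoff = np.maximum(K - S, 0.0)

def solve(american):
    V = payoff.copy()                              # terminal condition at maturity
    for n in range(n_t):
        tau = (n + 1) * dt                         # time to maturity after this step
        d2 = 0.5 * sigma**2 * (V[2:] - 2 * V[1:-1] + V[:-2]) / dx**2
        d1 = (r - 0.5 * sigma**2) * (V[2:] - V[:-2]) / (2 * dx)
        V[1:-1] += dt * (d2 + d1 - r * V[1:-1])
        V[0] = K * exp(-r * tau) - S[0]            # deep in-the-money boundary
        V[-1] = 0.0                                # deep out-of-the-money boundary
        if american:
            V = np.maximum(V, payoff)              # early-exercise update
    return V

i0 = n_x // 2                                      # grid point with S = K (at the money)
euro, amer = solve(False)[i0], solve(True)[i0]

def bs_put(S0):
    """Analytic Black-Scholes European put for comparison."""
    d1 = (log(S0 / K) + (r + 0.5 * sigma**2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    N = lambda z: 0.5 * (1.0 + erf(z / sqrt(2.0)))
    return K * exp(-r * T) * N(-d2) - S0 * N(-d1)

print(euro, bs_put(S[i0]), amer)
```

Classically the max is a trivial pointwise operation on the grid values; on a quantum computer the grid values sit in amplitudes, and applying max to amplitudes is a nonlinear map, which unitary evolution cannot implement directly.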
Including these points, we will investigate in future work the possibility that the quantum FDM speeds up pricing for a wider range of derivatives. 7 Actually, the algorithm in [31], which is based on the spectral method, can deal with time-dependent C(τ). However, in order to apply this algorithm, V(t, S) must be sufficiently smooth in the direction of t. On the other hand, in practice, the BS model parameters are often not smooth: for example, piecewise constant volatilities are often used, which deteriorates the smoothness of V(t, S). In such a case, the algorithm of [29] is expected to be more suitable than that of [31], since the formal solution (18) is valid also for piecewise constant model parameters. That is, if the model parameters are piecewise constant with a switching time t_dis ∈ (0, T), x(t) can be written as a product of matrix exponentials.

A Proofs
A.1 Proof of Lemma 2.1
First, we prove the following property of F in (11).

A.2 Proof of Lemma 4.1
A.2.1 Upper bound on the probability that the underlying asset prices reach the boundaries
In order to prove Lemma 4.1, we need some subsidiary lemmas. First, we prove the following one on the probability that the underlying asset prices reach the boundaries: under Assumption (3), for any t ∈ (0, t̃_u], the following holds: (99); similarly, for any t ∈ (0, t̃_l), the corresponding bound for the lower boundary holds. Proof. It is well-known (see, e.g., [3]) that S_i(t) can be written in terms of a drifted Brownian motion. Then, using a formula on the distribution of the maximum of a Brownian bridge with drift (Theorem 3.1 in [50]), we obtain, for t ∈ (0, t̃_u], the bound (99). The latter part of the statement is proven similarly.
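For reference, the unconditional counterpart of such hitting-probability formulas is the classical reflection-principle result; the bridge version used in the proof (Theorem 3.1 in [50]) additionally conditions on the endpoint. In local notation, for X(u) = μu + σW(u) with a standard Brownian motion W and a level b > 0,

```latex
\Pr\!\left[\max_{0\le u\le t} X(u) \ge b\right]
  = \Phi\!\left(\frac{\mu t - b}{\sigma\sqrt{t}}\right)
  + e^{2\mu b/\sigma^{2}}\,
    \Phi\!\left(\frac{-b-\mu t}{\sigma\sqrt{t}}\right),
```

which applies to S_i hitting an upper boundary U with b = log(U/S_{i,0}) and μ = r − σ_i²/2, since S_i(t) = S_{i,0} exp((r − σ_i²/2)t + σ_i W_i(t)).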
Using Lemma A.2, we can prove the following.
Under the same assumptions and for any t ∈ (0, t_b], the following holds, where p_NB(t, S) is defined below (29). Proof.
p_NB(t, S) ≥ P[S(u) does not reach any boundary by t | S(t) = S] = 1 − P[S(u) reaches either boundary by t | S(t) = S] ≥ ..., where we used Lemma A.2 at the last inequality.

A.2.2 Upper bound the integral on the outside of the boundaries
Besides, we need the following lemmas in order to upper bound the contribution from outside the boundaries to the integral (29). Under Assumption (3), let H be a real number such that H > S_{i,0} and let ǫ be a positive real number satisfying the stated smallness condition. Then, for any t ∈ (0, t_cu), the following holds. Proof. Because of (102) and basic properties of the Brownian motion, the probability density of log S_i(t) is Gaussian. Therefore, we can bound ∫_H^∞ s φ_i(t, s) ds. Here, we used inequalities which hold for any c ∈ R_+. Besides, because of (110) and (112), the bound holds for t ∈ (0, t_cu). Combining (112), (114) and (116), we obtain the claim for t ∈ (0, t_cu). On the other hand, using (115) again, and combining this with (r − σ_i²/2)t < (1/5) log(H/S_{i,0}), which holds for t ∈ (0, t_cu) because of (116), we obtain the final bound. Similarly, under Assumption (3), let L be a real number such that L < S_{i,0} and let ǫ be a positive real number satisfying the analogous condition. Then, for any t ∈ (0, t_cl), the following holds. Proof. Similarly to (114), for t ∈ (0, t_cl), the bound holds, where we used (115) at the first inequality and (122) at the last inequality.
On the other hand, using (115) again, we obtain the complementary bound. Then, for t ∈ (0, t_cl), (124), together with the inequality which follows from (120), leads to the claim. Combining these lemmas, we obtain the following.
where t ter is defined as (36) and D half is defined as (106).
Proof. First, note that, under Assumption 4.1, for S = (S_1, ..., S_d)^T ∈ R_+^d, the payoff is bounded as stated. Therefore, we obtain a decomposition of the error into four terms. On the right-hand side, the first term involves A_1 e^{−rt_ter} ∫ s φ_i(t, s) ds, where φ_i is the marginal density of S_i(t), and therefore its bound holds by Lemma A.4.^8 Similarly, from Lemma A.5, the second term is bounded analogously.
On the other hand, from Lemma A.4, we see that the third term is bounded similarly, and the fourth term is bounded by ǫ A_0 e^{−rt_ter} times the corresponding factor, where we used Lemma A.3 and (138) at the first inequality, and (129) at the third inequality. On the other hand, the second term of (139) is bounded using Lemma A.6 and t_ter < t_c at the last inequality. Combining these, we obtain (35).
Since smaller h_i's lead to a larger ||F||, and hence a larger complexity, we take h_i's as large as possible, that is, h_i = h̃_i. Then, we can evaluate the complexity by substituting s and A in (28) with the sparsity and the norm of F, respectively. The sparsity of F is O(d²), since the matrices constituting F as in (11) have sparsity at most 4 and their total number is O(d²). Besides, for h_i = Θ(h̃_i), (148) holds, as we will show soon. Using these, we obtain (67) by simple algebra. The remaining task is to show (148). Note that (149) holds. Here, we used ||D_1st|| = 2 and ||D_2nd|| = 4, which follow from the fact that the eigenvalues of the n × n tridiagonal Toeplitz matrix with diagonal entries b, subdiagonal entries a, and superdiagonal entries c, where a, b, c ∈ C, are b + 2√(ac) cos(jπ/(n+1)), j = 1, ..., n [53]. Then, substituting h_i in (149) by h̃_i, we obtain (148) by simple algebra. Here, note that, because of (69) and Assumption 5.4, Z = O(√((p+1)N_gr) Ȳ(τ_ter)) = O(√((p+1)N_gr) γ) holds. Thus, if ǫ_1, ǫ_2 and ǫ_{Ψ_mod} satisfy (83), (84) and (87), respectively, combining (158), (159) and (56) leads to |e^{−rT} γ √(N_gr) ||p|| E_1/E_2 − e^{−rT} p · Ỹ(τ_ter)| = O(ǫ).
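The eigenvalue formula for tridiagonal Toeplitz matrices quoted above can be checked numerically; the size and entries below are arbitrary illustrations.

```python
import numpy as np

# Check: the n x n tridiagonal Toeplitz matrix with subdiagonal a,
# diagonal b, superdiagonal c has eigenvalues
# b + 2*sqrt(a*c)*cos(j*pi/(n+1)), j = 1, ..., n.
n, a, b, c = 8, 1.0, 0.5, 2.0
M = (np.diag(np.full(n, b))
     + np.diag(np.full(n - 1, a), -1)
     + np.diag(np.full(n - 1, c), 1))

numeric = np.sort(np.linalg.eigvals(M).real)
formula = np.sort(b + 2 * np.sqrt(a * c) * np.cos(np.arange(1, n + 1) * np.pi / (n + 1)))
print(np.max(np.abs(numeric - formula)))
```

For the first-order difference operator (a = −c, b = 0) the eigenvalues are purely imaginary with magnitude below 2, and for the second-order operator (a = c = 1, b = −2) they lie in (−4, 0), which is the source of the bounds on ||D_1st|| and ||D_2nd|| used above.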