Eigenvalues and Eigenvectors of Tau Matrices with Applications to Markov Processes and Economics

In the context of matrix displacement decomposition, Bozzo and Di Fiore introduced the so-called $\tau_{\varepsilon,\varphi}$ algebra, a generalization of the better-known $\tau$ algebra originally proposed by Bini and Capovani. We study the properties of eigenvalues and eigenvectors of the generator $T_{n,\varepsilon,\varphi}$ of the $\tau_{\varepsilon,\varphi}$ algebra. In particular, we derive the asymptotics for the outliers of $T_{n,\varepsilon,\varphi}$ and the associated eigenvectors; we obtain equations for the eigenvalues of $T_{n,\varepsilon,\varphi}$, which also provide the eigenvectors of $T_{n,\varepsilon,\varphi}$; and we compute the full eigendecomposition of $T_{n,\varepsilon,\varphi}$ in the specific case $\varepsilon\varphi=1$. We also present applications of our results in the context of queuing models, random walks, and diffusion processes, with special attention to their implications for the study of wealth/income inequality and portfolio dynamics.

For all ε, ϕ ∈ R, the asymptotic spectral distribution of T_{n,ε,ϕ} in Weyl's sense can be easily obtained from the theory of generalized locally Toeplitz sequences [18,19], which immediately yields for T_{n,ε,ϕ} the asymptotic spectral distribution function (or symbol) 2 cos θ. Precise eigenvalue estimates can also be given on the basis of classical interlacing results [21, Section 4.3] after observing that T_{n,ε,ϕ} is a small-rank perturbation of T_{n,0,0} and the eigenvalues of T_{n,0,0} are known. It should be noted, however, that both asymptotic spectral distribution results and interlacing estimates completely ignore the outliers of T_{n,ε,ϕ}, i.e., the eigenvalues lying outside the interval [−2, 2] (the range of the symbol 2 cos θ). On the other hand, the outliers, which are determined by the parameters ε, ϕ, are precisely the objects one is interested in when dealing with several noteworthy applications. Such applications include, for example, queuing models and Markov chains/processes [4,8,20,23], where the eigenvector corresponding to the (unique) outlier of (a suitable transform of) T_{n,ε,ϕ} corresponds to the steady-state distribution of the considered chain/process.
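As a quick numerical illustration of the outlier phenomenon, the following sketch (assuming for T_{n,ε,ϕ} the tridiagonal form with unit off-diagonals and diagonal (ε, 0, …, 0, ϕ)) checks that, for |ε| > 1 and |ϕ| < 1, exactly one eigenvalue escapes the interval [−2, 2] and sits near ε + ε⁻¹:

```python
import numpy as np

def T(n, eps, phi):
    """Generator of the tau_{eps,phi} algebra (assumed form: unit
    off-diagonals, diagonal (eps, 0, ..., 0, phi))."""
    M = np.diag(np.ones(n - 1), 1) + np.diag(np.ones(n - 1), -1)
    M[0, 0] = eps
    M[-1, -1] = phi
    return M

# Parameters as in Table 3.1: eps = 3, phi = 1/2, so eps + 1/eps = 3.33...
eps, phi, n = 3.0, 0.5, 50
lams = np.linalg.eigvalsh(T(n, eps, phi))

outliers = lams[np.abs(lams) > 2 + 1e-9]   # eigenvalues outside [-2, 2]
```

Already for moderate n, the computed outlier agrees with ε + ε⁻¹ to machine precision.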
In this paper, we study the spectral properties of T_{n,ε,ϕ} and present a few applications in the context of Markov chains/processes, with a special focus on queuing models, random walks, diffusion processes and economics issues. The structure of the paper, including a summary of our contributions, is given below.
• In Section 2, we study some basic spectral properties of T_{n,ε,ϕ} that will simplify the analysis of later sections.
• In Section 3, we derive the asymptotics of the outliers of T_{n,ε,ϕ} and the associated eigenvectors. Our main results in this regard are Theorems 3.1-3.3, which are validated through numerical experiments in Tables 3.1-3.3.
• In Section 4, we derive equations for the eigenvalues of T_{n,ε,ϕ}. For all ε, ϕ ∈ R for which these equations can be solved, one obtains not only the eigenvalues but also the eigenvectors of T_{n,ε,ϕ}. Our main results in Section 4 are Theorems 4.1-4.5.
• In Section 5, we solve the equations obtained in Section 4 for specific values of ε, ϕ. In particular, we show how it is possible to re-obtain through our equations the eigendecomposition of T_{n,ε,ϕ} for ε, ϕ ∈ {0, 1, −1}; and we address the new case εϕ = 1, which is the case of interest for the applications presented in Section 6.
• In Section 6, we present a few applications in the context of Markov chains/processes, with a special focus on queuing models, random walks in a multidimensional lattice, multidimensional reflected diffusion processes and economics issues. In particular, we investigate the implications of our results within a model for wealth/income inequality and portfolio dynamics with an arbitrary number of assets: we provide analytical formulas for the steady-state (stationary) distribution of the underlying stochastic process (a multidimensional reflected diffusion process), we compute the convergence speed towards the steady state, and we derive closed-form expressions for relevant moments of the stationary distribution such as the average wealth and the wealth variance.
• In Section 7, we draw conclusions and outline possible future lines of research.

Basic Properties of the Eigenvalues and Eigenvectors of T_{n,ε,ϕ}
In this section, we collect some basic properties of the eigenvalues and eigenvectors of T_{n,ε,ϕ} which will allow us to tackle the analysis of the next sections with useful a priori knowledge. Throughout this paper, the eigenvalues of T_{n,ε,ϕ} which do not belong to the interval [−2, 2] are referred to as outliers. We denote by e_1, …, e_n the vectors of the canonical basis of R^n, and by E_n the symmetric permutation matrix (flip matrix) whose rows are those of the identity matrix I_n in reverse order, i.e., (E_n)_{ij} = 1 if i + j = n + 1 and (E_n)_{ij} = 0 otherwise. The result follows immediately from the fact that the matrix T_{n,ε,ϕ} is irreducible and from the so-called Gershgorin's third theorem [10, p. 80].

Asymptotics of the Outliers of T_{n,ε,ϕ}
If |ε| > 1 and n is large enough, property 2 of Theorem 2.1 says that (ε + ε⁻¹, v_n) is substantially an eigenpair of T_{n,ε,ϕ} (it is an exact eigenpair if εϕ = 1). A similar consideration applies to (ϕ + ϕ⁻¹, w_n). The next theorems formalize this intuition. We remark that, for every x > 0, x + x⁻¹ ≥ 2, with equality holding if and only if x = 1. In what follows, Λ(X) denotes the spectrum of the matrix X.
the eigenvalue ν_n is eventually an outlier.
If x, y ∈ R^n, we set (x, y) = x⊤y. If u ∈ R^n, we denote by P_u the orthogonal projector onto the subspace ⟨u⟩ generated by u. In the case where u ≠ 0, the projector P_u is explicitly given by P_u = uu⊤/(u, u).

Theorem 3.1. Suppose that |ε| > 1 and ϕ ≠ ε. Let (µ_n, x_n) be an eigenpair of T_{n,ε,ϕ} such that µ_n → ε + ε⁻¹ as n → ∞ and ‖x_n‖₂ = 1 for all n. Then, the following properties hold.
1. Eventually, µ_n is an outlier of T_{n,ε,ϕ} and any other eigenvalue λ_n ∈ Λ(T_{n,ε,ϕ}) satisfies |λ_n − (ε + ε⁻¹)| ≥ c for some positive constant c independent of n.
The next theorem is completely analogous to Theorem 3.1 and can be proved by the same type of argument or by using the relation between T_{n,ε,ϕ} and T_{n,ϕ,ε} (see Theorem 2.1).

Theorem 3.2. Suppose that |ϕ| > 1 and ε ≠ ϕ. Let (ν_n, y_n) be an eigenpair of T_{n,ε,ϕ} such that ν_n → ϕ + ϕ⁻¹ as n → ∞ and ‖y_n‖₂ = 1 for all n. Then, the following properties hold.
1. Eventually, ν_n is an outlier of T_{n,ε,ϕ} and any other eigenvalue λ_n ∈ Λ(T_{n,ε,ϕ}) satisfies |λ_n − (ϕ + ϕ⁻¹)| ≥ c for some positive constant c independent of n.

To conclude our analysis, we address the case where |ε|, |ϕ| > 1 and ε = ϕ.

Theorem 3.3. Suppose that |ε|, |ϕ| > 1 and ε = ϕ. Then, the following properties hold.
1. There exist exactly two distinct eigenvalues µ_n, ν_n of T_{n,ε,ϕ} which are eventually the unique two outliers of T_{n,ε,ϕ} and satisfy µ_n, ν_n → ε + ε⁻¹ = ϕ + ϕ⁻¹.
2. Let x_n and y_n be eigenvectors of T_{n,ε,ϕ} associated with µ_n and ν_n, respectively, and satisfying ‖x_n‖₂ = ‖y_n‖₂ = 1 for all n. Then, up to a renaming of µ_n and ν_n, we eventually have E_n x_n = x_n and E_n y_n = −y_n. Moreover, ‖x_n − P_{v_n+w_n} x_n‖₂ → 0 and ‖y_n − P_{v_n−w_n} y_n‖₂ → 0 as n → ∞.

Proof. 1. We first recall that all eigenvalues of T_{n,ε,ϕ} are distinct by Theorem 2.1. Also, an eigenvalue converging to ε + ε⁻¹ exists for sure by Lemma 3.1, and more than two eigenvalues converging to ε + ε⁻¹ cannot exist by Theorem 2.1. Suppose by contradiction that there exists a unique eigenvalue µ_n converging to ε + ε⁻¹ and let x_n be a corresponding eigenvector with ‖x_n‖₂ = 1. Let {u_{1,n}, …, u_{n,n} = x_n} be an orthonormal basis of R^n formed by eigenvectors of T_{n,ε,ϕ} with corresponding eigenvalues λ_{1,n}, …
…, λ_{n,n} = µ_n. We expand the vector v_n on this basis as in (3.1) and we get that there exists a positive constant c independent of n such that (3.8) holds frequently as n → ∞. Passing to a subsequence of indices n, if necessary, we may assume that (3.8) is satisfied for all n. Note that (3.8) is the same as (3.5). Hence, by reasoning as before, we infer that (3.6)-(3.7) hold and we conclude that ‖x_n − P_{v_n} x_n‖₂ → 0 (for the considered subsequence of indices n). This is impossible for the following reasons.
• Since ε = ϕ, we have T_{n,ε,ϕ} = T_{n,ϕ,ε} and, by Theorem 2.1, (λ, u) is an eigenpair of T_{n,ε,ϕ} if and only if the same is true for (λ, E_n u).
• By Theorem 2.1, each eigenvalue λ of T_{n,ε,ϕ} is simple, and so E_n u = ±u for all eigenvectors u of T_{n,ε,ϕ}.
In particular, we obtain relations which are clearly incompatible with E_n x_n = ±x_n.
2. Expand the vectors v_n + w_n and v_n − w_n on this basis as in (3.9)-(3.10). Keeping in mind that ε = ϕ, the resulting equations yield, after passing to the norms, (3.15)-(3.16). Now, recall from the proof of item 1 that (in the present case where ε = ϕ) all eigenvectors u of T_{n,ε,ϕ} satisfy E_n u = ±u. For the eigenvectors u_{i,n} satisfying E_n u_{i,n} = u_{i,n} we have τ_{i,n} = 0 in the expansion (3.10), and for the eigenvectors u_{i,n} satisfying E_n u_{i,n} = −u_{i,n} we have ρ_{i,n} = 0 in the expansion (3.9). It follows that, eventually, one among x_n and y_n (say x_n) must satisfy E_n x_n = x_n and the other (say y_n) must satisfy the "opposite" equation E_n y_n = −y_n. Indeed, if we frequently had E_n x_n = x_n and E_n y_n = y_n, then we would also have τ_{n−1,n} = τ_{n,n} = 0 frequently, which is impossible by (3.16). Similarly, if we frequently had E_n x_n = −x_n and E_n y_n = −y_n, then we would also have ρ_{n−1,n} = ρ_{n,n} = 0 frequently, which is impossible by (3.15). By renaming µ_n and ν_n (if necessary), we can assume that the eigenvector x_n associated with µ_n eventually satisfies E_n x_n = x_n, and the eigenvector y_n associated with ν_n eventually satisfies E_n y_n = −y_n. Thus, by applying (3.9), (3.11), (3.15) and (3.17), we eventually obtain ‖x_n − P_{v_n+w_n} x_n‖₂² → 0. Similarly, one can show that ‖y_n − P_{v_n−w_n} y_n‖₂² → 0.
In Tables 3.1-3.3, we validate through numerical experiments the results presented in Theorems 3.1-3.3. The experiments have been performed via the high-performance computing language Julia [7] with a machine precision equal to 1.1 × 10⁻³⁰⁸ (1024-bit precision). We note that the convergences predicted by Theorems 3.1-3.3 are quite fast. Actually, this could be expected on the basis of property 2 in Theorem 2.1, where we see that for |ε|, |ϕ| > 1 the pairs (ε + ε⁻¹, v_n) and (ϕ + ϕ⁻¹, w_n) are substantially eigenpairs of T_{n,ε,ϕ} already for moderate n due to the exponential convergence to 0 of the error terms ε⁻ⁿ(εϕ − 1)e_n and ϕ⁻ⁿ(εϕ − 1)e_1.
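The exponential smallness of these error terms can be checked directly. In the sketch below (assuming the tridiagonal form of T_{n,ε,ϕ} with unit off-diagonals and diagonal (ε, 0, …, 0, ϕ), and taking v_n with components ε^{1−i}), the residual of the approximate eigenpair (ε + ε⁻¹, v_n) is supported on the last entry and equals ε⁻ⁿ(εϕ − 1), as stated above:

```python
import numpy as np

def T(n, eps, phi):
    # Assumed form of the generator: unit off-diagonals, diagonal (eps, 0, ..., 0, phi).
    M = np.diag(np.ones(n - 1), 1) + np.diag(np.ones(n - 1), -1)
    M[0, 0] = eps
    M[-1, -1] = phi
    return M

n, eps, phi = 20, 3.0, 0.5
v = eps ** (-np.arange(n, dtype=float))     # v_i = eps^{1-i}, i = 1, ..., n
r = T(n, eps, phi) @ v - (eps + 1/eps) * v  # residual of the approximate eigenpair
```

All entries of the residual vanish except the last one, which matches ε⁻ⁿ(εϕ − 1) ≈ 1.4 × 10⁻¹⁰ for these parameters.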

Equations for the Eigenvalues and Eigenvectors of T_{n,ε,ϕ}
In this section, we derive equations for the eigenvalues of T_{n,ε,ϕ}. As we shall see, the equations for the outliers are formally the same as the equations for the non-outliers, with the only difference that the trigonometric functions sin x and cos x must be replaced by the corresponding hyperbolic functions sinh x and cosh x. For all ε, ϕ ∈ R for which these equations can be solved, one obtains not only the eigenvalues but also the eigenvectors of T_{n,ε,ϕ}. A special role in the following derivation is played by the theory of linear difference equations [22].
Let λ ∈ R and v ∈ C^n\{0}, so that (λ, v) is a candidate eigenpair for the real symmetric matrix T_{n,ε,ϕ}. We have T_{n,ε,ϕ} v = λv if and only if there exists a sequence (w_0, w_1, …) such that w_i = v_i for i = 1, …, n and

w_{i−1} + w_{i+1} = λw_i for all i ≥ 1,  w_0 = εw_1,  w_{n+1} = ϕw_n.  (4.1)

The characteristic equation of the linear difference equation (4.1) is given by

x² − λx + 1 = 0.  (4.2)

We consider five different cases.
If λ ∈ (−2, 2), we set λ = 2 cos θ with θ ∈ (0, π). The roots of the characteristic equation (4.2) are x = e^{iθ} and x = e^{−iθ}, and they are distinct because θ ∈ (0, π). The general solution of (4.1) is w_i = Ae^{iθi} + Be^{−iθi}, where A, B ∈ C are arbitrary constants. Keeping in mind that v ≠ 0, we have that T_{n,ε,ϕ} v = λv if and only if there exists a sequence (w_0, w_1, …) of this form such that w_i = v_i for i = 1, …, n and the boundary conditions w_0 = εw_1, w_{n+1} = ϕw_n are satisfied. We summarize the result in Theorem 4.1, where a corresponding eigenvector v = (v_1, …, v_n) is given by the restriction of the solution (w_i) to the indices i = 1, …, n.

If λ > 2, we set λ = 2 cosh θ with θ ∈ (0, ∞). The roots of the characteristic equation (4.2) are x = e^θ and x = e^{−θ}, and they are distinct because θ ∈ (0, ∞). The general solution of (4.1) is w_i = Ae^{θi} + Be^{−θi}, where A, B ∈ C are arbitrary constants. Keeping in mind that v ≠ 0, we impose the boundary conditions as before. If 1 − εe^θ = 0, i.e., e^{−θ} = ε, then the equation A + B = εAe^θ + εBe^{−θ} is equivalent to B = 0, and the analysis simplifies accordingly. As often happens in mathematics, the "limit" case 1 − εe^θ = 0 merges with the case 1 − εe^θ ≠ 0. We summarize the result in Theorem 4.2, where a corresponding eigenvector v = (v_1, …, v_n) is again given by the restriction of the solution (w_i) to the indices i = 1, …, n.

If λ = 2, the characteristic equation (4.2) has only one root x = 1, with multiplicity 2. The general solution of (4.1) is w_i = A + Bi, where A, B are arbitrary constants. Keeping in mind that v ≠ 0, we have that T_{n,ε,ϕ} v = λv if and only if there exists a sequence (w_0, w_1, …) of this form such that w_i = v_i for i = 1, …, n and the boundary conditions w_0 = εw_1, w_{n+1} = ϕw_n are satisfied. If 1 − ε = 0, then the equation A = εA + εB is equivalent to B = 0. The case 1 − ε = 0 merges with the case 1 − ε ≠ 0, because if 1 − ε = 0 then ε = 1 and the resulting conditions coincide. We summarize the result in the next theorem.
Theorem 4.3. The number λ = 2 is an eigenvalue of T_{n,ε,ϕ} if and only if condition (4.7) is satisfied. In this case, a corresponding eigenvector v = (v_1, …, v_n) is given by the restriction to i = 1, …, n of a solution w_i = A + Bi of (4.1).

If λ < −2, we set λ = −2 cosh θ with θ ∈ (0, ∞). The derivation is essentially the same as in Section 4.2; we leave the details to the reader and report the analog of Theorem 4.2.

εϕ = 1
We focus in this section on the case εϕ = 1, which is crucial for the applications presented in Section 6. To the best of the authors' knowledge, this case has never been addressed in the literature. Besides εϕ = 1, we also assume that:
• ε, ϕ > 0 (because no additional difficulties are encountered if ε, ϕ < 0);
• ε, ϕ ≠ 1 (because the case ε = ϕ = 1 has already been addressed in Section 5.1).
Using sine addition/subtraction formulas, we see that equation (4.3) simplifies and yields n − 1 eigenvalues. We still have to find one eigenvalue, which can be neither 2 nor −2 because, under our assumptions, equations (4.7) and (4.11) are not satisfied. In other words, the eigenvalue we are looking for is an outlier. Equation (4.5) has a unique solution in (0, ∞) given by θ = |log ε|. We then obtain the outlier λ and the corresponding eigenvector v from Theorem 4.2. After straightforward manipulations, involving also a renormalization of v, we get for the outlier eigenpair (λ, v) the simplified expressions λ = ε + ε⁻¹ and v = (ε⁻¹, ε⁻², …, ε⁻ⁿ). Note that this outlier eigenpair could also be obtained from property 2 of Theorem 2.1. In conclusion, collecting the n − 1 eigenpairs coming from (4.3) and the outlier eigenpair above gives the full eigendecomposition of T_{n,ε,ϕ}.
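A minimal numerical check of this eigendecomposition (again assuming the tridiagonal form of T_{n,ε,ϕ} with unit off-diagonals and diagonal (ε, 0, …, 0, ϕ)): for εϕ = 1, the pair (ε + ε⁻¹, (ε⁻¹, …, ε⁻ⁿ)) is an exact eigenpair, cf. property 2 of Theorem 2.1, and the remaining n − 1 eigenvalues come out as 2 cos(kπ/n), k = 1, …, n − 1 (a claim of ours, consistent with the M/M/1/K spectrum used in Section 6.1):

```python
import numpy as np

def T(n, eps, phi):
    # Assumed form of the generator: unit off-diagonals, diagonal (eps, 0, ..., 0, phi).
    M = np.diag(np.ones(n - 1), 1) + np.diag(np.ones(n - 1), -1)
    M[0, 0] = eps
    M[-1, -1] = phi
    return M

n, eps = 30, 2.0
A = T(n, eps, 1/eps)                              # the case eps*phi = 1
v = eps ** (-np.arange(1, n + 1, dtype=float))    # v = (eps^{-1}, ..., eps^{-n})
residual = A @ v - (eps + 1/eps) * v              # exact eigenpair: residual vanishes

lams = np.sort(np.linalg.eigvalsh(A))             # last entry is the outlier
```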

Applications
In this section, we present a few applications of our results in the context of Markov chains and processes. Section 6.1 deals with a queuing model. Sections 6.2 and 6.3 are devoted to random walks in unidimensional and multidimensional lattices, respectively. Finally, Sections 6.4 and 6.5 focus on multidimensional reflected diffusion processes and related economics applications.

Queuing Model
Consider a continuous-time Markov chain with n states 0, …, n − 1 and with transition rate matrix (infinitesimal generator) Q_{n,λ,µ} given by (6.1), where λ, µ > 0. Markov chains of this kind are referred to as M/M/1/K queues (with K = n − 1). They find applications in queuing theory [8,20,23], especially in telecommunications [20, Section 5.7]. In this section, we derive the eigendecomposition of Q⊤_{n,λ,µ}. We begin with the following lemma, which can be proved by direct computation.
Lemma 6.1. Let A be a real tridiagonal matrix whose superdiagonal entries b_i and subdiagonal entries c_i satisfy b_i c_i > 0 for all i = 1, …, n − 1. Then A is similar, via an invertible diagonal matrix, to the symmetric tridiagonal matrix with the same diagonal as A and off-diagonal entries √(b_i c_i). By applying Lemma 6.1 to the matrix Q⊤_{n,λ,µ}, we obtain the symmetric tridiagonal matrix X_{n,λ,µ}.
A direct verification shows that X_{n,λ,µ} is, up to shifting and scaling, a matrix T_{n,ε,ϕ} with εϕ = 1. Since εϕ = 1, the eigendecomposition of X_{n,λ,µ} (and hence also of Q⊤_{n,λ,µ}) is immediately obtained from the results in Section 5.2. In particular, the eigenpairs of Q⊤_{n,λ,µ} are given by (ν_k, w_k), k = 0, …, n − 1, where ν_0 = 0 and, for k = 1, …, n − 1, ν_k and w_k are given by (6.3). The steady-state (or stationary/limiting) distribution of the considered queuing model, i.e., the normalized positive eigenvector of Q⊤_{n,λ,µ} associated with the eigenvalue 0, is the normalized geometric vector with ratio ρ = λ/µ, where it is understood that in the case ρ = 1 we take the limit ρ → 1. For a different derivation of this result, see [20, Section 5.7].
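A numerical cross-check of these facts (a sketch: `Q` below is the standard M/M/1/K generator with arrival rate λ and service rate µ, and the geometric form p_i ∝ ρ^i of the stationary vector is the classical result recalled above):

```python
import numpy as np

def Q(n, lam, mu):
    """Transition-rate matrix of the M/M/1/K queue (K = n - 1):
    arrivals at rate lam, services at rate mu, states 0, ..., n-1."""
    A = np.zeros((n, n))
    for i in range(n - 1):
        A[i, i + 1] = lam    # arrival: i -> i+1
        A[i + 1, i] = mu     # service: i+1 -> i
    np.fill_diagonal(A, -A.sum(axis=1))   # rows of a generator sum to zero
    return A

n, lam, mu = 12, 2.0, 3.0
rho = lam / mu
p = rho ** np.arange(n)
p /= p.sum()                 # normalized geometric stationary distribution

# Eigenvalues of Q^T (all real); the largest is 0, the second largest is the
# value -(lam + mu) + 2*sqrt(lam*mu)*cos(pi/n) discussed in Remark 6.2.
eigs = np.sort(np.linalg.eigvals(Q(n, lam, mu)).real)
```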
Remark 6.2 (Second Eigenvalue). It is clear from (6.3) and the geometric-arithmetic mean inequality √(λµ) ≤ ½(λ + µ) that all nonzero eigenvalues of Q⊤_{n,λ,µ} are negative. The largest of them, i.e., the second largest eigenvalue after 0, is ν_1 = −(λ + µ) + 2√(λµ) cos(π/n). The second eigenvalue gives information about the convergence speed towards the steady-state distribution (cf. the analysis of power methods [10, p. 371]); see also [16] and [23, Section 7.2]. We will return to the role of the second eigenvalue in Section 6.5.

Remark 6.3. The above derivation of the eigendecomposition of Q⊤_{n,λ,µ} requires only the hypothesis λµ > 0. In other words, the eigendecomposition of Q⊤_{n,λ,µ} is given by (6.2)-(6.3) for all λ, µ ∈ R such that λµ > 0.

Random Walk in a Unidimensional Lattice
Consider a discrete-time Markov chain with n states 1, …, n and with matrix of transition probabilities P_{n,p,q} given by (6.4), where p, q > 0 and p + q ≤ 1. Markov chains of this kind are often referred to as random walks in the unidimensional lattice {1, …, n}; see Figure 6.1. The difference with respect to traditional random walks in Z is that states 1 and n act as absorbing/reflecting barriers: when the system is in state 1, it cannot move to a hypothetical previous state 0 with probability q (as happens for all other states 2, …, n), because the probability q of going to a previous state 0 is absorbed into the probability of staying in state 1, which grows from 1 − p − q to 1 − p; a similar discussion applies to state n.
Remark 6.4 (Steady-State Distribution). The steady-state distribution of the unidimensional random walk, i.e., the normalized positive eigenvector of P⊤_{n,p,q} associated with the eigenvalue 1, is the normalized geometric vector with ratio β = p/q, where it is understood that in the case β = 1 we take the limit β → 1.
Figure 6.2: Random walk in a bidimensional lattice.
We refer the reader to [19, Section 2.1.2] for more details on the multi-index notation. Consider a discrete-time Markov chain with N(n) = n_1⋯n_d states 1, …, n and with matrix of transition probabilities P_{n,p,q} = ⊗_{r=1}^d P_{n_r,p_r,q_r}, where
• p = (p_1, …, p_d) and q = (q_1, …, q_d) satisfy p, q > 0 and p + q ≤ 1,
• the matrix P_{n_r,p_r,q_r} is defined by (6.4) for (n, p, q) = (n_r, p_r, q_r),
• ⊗ denotes the tensor (Kronecker) product.
Markov chains of this kind are often referred to as random walks in the d-dimensional lattice {1, …, n}. They are a generalization of the unidimensional random walks discussed in Section 6.2. By the properties of tensor products [19, Section 2.5], for all i, j = 1, …, n, the probability of going from state i to state j is given by (P_{n,p,q})_{ij} = ∏_{r=1}^d (P_{n_r,p_r,q_r})_{i_r j_r}, and it is equal to the product over r = 1, …, d of the probabilities of going from state i_r to state j_r in a unidimensional random walk with transition matrix P_{n_r,p_r,q_r} as considered in Section 6.2. In short, a d-dimensional random walk is the result of d independent unidimensional random walks (one for each space dimension); see Figure 6.2 for a bidimensional illustration.
By the properties of tensor products and the results of Section 6.2, we can immediately obtain the eigendecomposition of P⊤_{n,p,q}. In particular, the eigenpairs of P⊤_{n,p,q} are given by (µ_k, w_k), k = 0, …, n − 1, where µ_k = ∏_{r=1}^d µ_{k_r}, w_k = ⊗_{r=1}^d w_{k_r}, and (µ_{k_r}, w_{k_r}) is defined by (6.5)-(6.6) for (k, n, p, q, α) = (k_r, n_r, p_r, q_r, α_r) with α_r = p_r/q_r.

Remark 6.5 (Steady-State Distribution). The steady-state distribution of the d-dimensional random walk is the tensor product of the steady-state distributions of the individual unidimensional random walks that compose it.
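This tensor structure is easy to verify numerically. The sketch below (assuming for `P` the standard random-walk transition matrix described in Section 6.2, with a geometric stationary vector of ratio β = p/q) builds a 2-dimensional walk as a Kronecker product and checks that the Kronecker product of the 1-D stationary distributions is stationary for it:

```python
import numpy as np

def P(n, p, q):
    """1-D random-walk transition matrix on {1, ..., n} (Section 6.2)."""
    A = np.zeros((n, n))
    for i in range(n - 1):
        A[i, i + 1] = p      # step right
        A[i + 1, i] = q      # step left
    np.fill_diagonal(A, 1 - A.sum(axis=1))   # barriers absorb the excess mass
    return A

def stationary(n, p, q):
    beta = p / q
    pi = beta ** np.arange(n, dtype=float)
    return pi / pi.sum()

# d = 2: the transition matrix is the Kronecker product of the 1-D factors,
# and the stationary distribution is the Kronecker product of the 1-D ones.
P2 = np.kron(P(6, 0.3, 0.2), P(8, 0.1, 0.4))
pi2 = np.kron(stationary(6, 0.3, 0.2), stationary(8, 0.1, 0.4))
```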

Multidimensional Diffusion Processes
Consider a d-dimensional diffusion process, where the diffusions in each dimension are independent of each other and subject to a reflecting boundary condition at each side. We assume for simplicity that, for every r = 1, …, d, the direction x_r is discretized uniformly with n_r nodes separated by a discretization step ∆_r > 0. This discretization gives rise to an n_1 × ⋯ × n_d lattice whose points x_i are naturally indexed by a multi-index i = 1, …, n, with n = (n_1, …, n_d). The diffusion in direction x_r is a Brownian motion characterized by two parameters: a drift µ_r ∈ R and a variance σ²_r > 0. For the direction x_r, the infinitesimal generator L_{n_r,µ_r,σ_r} coincides with the generator of a 1-dimensional diffusion process with drift µ_r and variance σ²_r discretized uniformly with n_r nodes separated by a discretization step ∆_r. In formulas, L_{n_r,µ_r,σ_r} is an n_r × n_r matrix, which both in the case µ_r ≤ 0 and in the case µ_r ≥ 0 is expressed in terms of the matrix Q_{n,λ,µ} defined in (6.1), with λ and µ depending on µ_r, σ_r, ∆_r. The differential operator (infinitesimal generator) of the d-dimensional diffusion process is L_{n,µ,σ}, given by (6.7), where µ = (µ_1, …, µ_d) and σ = (σ_1, …, σ_d). More details on the discretized multidimensional diffusion process considered here will be given in Section 6.5 along with an economics application; for more on diffusion processes, see [4] for a mathematical treatment and [1,2,16] for an economics-oriented, application-focused approach.
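To make the construction concrete, here is one standard upwind discretization of a 1-D reflected diffusion (a sketch under our own assumptions, not necessarily the exact form of the paper's displays): the diffusion contributes σ²/(2∆²) to each neighbor, the drift |µ|/∆ is added to the downwind neighbor only, and the reflecting barriers are enforced by making the rows sum to zero. The stationary vector is then geometric, consistent with the exponential stationary density of a reflected Brownian motion:

```python
import numpy as np

def L1d(n, mu, sigma, delta):
    """Upwind finite-difference generator of a 1-D reflected diffusion
    (a sketch of one standard scheme; hypothetical helper, not the paper's (6.1))."""
    up = sigma**2 / (2 * delta**2) + max(mu, 0.0) / delta     # rate to x + delta
    down = sigma**2 / (2 * delta**2) + max(-mu, 0.0) / delta  # rate to x - delta
    A = np.zeros((n, n))
    for i in range(n - 1):
        A[i, i + 1] = up
        A[i + 1, i] = down
    np.fill_diagonal(A, -A.sum(axis=1))   # reflecting barriers: rows sum to 0
    return A

n, mu, sigma, delta = 40, -0.5, 1.0, 1.0 / 39
G = L1d(n, mu, sigma, delta)
ratio = G[0, 1] / G[1, 0]                 # up/down rate: stationary p_i ~ ratio^i
p = ratio ** np.arange(n, dtype=float)
p /= p.sum()
```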
Remark 6.6 (Steady-State Distribution). The steady-state distribution of the d-dimensional diffusion process generated by L_{n,µ,σ}, i.e., the normalized positive eigenvector of L⊤_{n,µ,σ} associated with the eigenvalue 0, is given by (6.9), i.e., it is the tensor product of the steady-state distributions p_r of the individual unidimensional diffusion processes generated by the operators L_{n_r,µ_r,σ_r}, r = 1, …, d.

Dynamics of Wealth and Income Inequality
In this section, we present an economic application of the results obtained in Section 6.4.We begin with an overview of the topic, which may not be so familiar to non-economists.

Modeling the Evolution of Wealth and Income
The sources of the vast wealth and income inequality observed in practice are a key topic of study within macroeconomics and finance; see [1,2,3,5,6] for empirical evidence and modeling approaches. Central to the questions of inequality are the following.
• What is the source of heterogeneity that drives the stationary distribution of income or wealth?
• How would the income or wealth distribution evolve over time given aggregate changes? For example, researchers can ask how the stationary distribution of wealth will change (and how long it will take to be reached) given experiments such as a new income tax, technological changes driving more volatile wages, or increases in the returns on an asset such as housing.
Methodologically, the analysis of income inequality is carried out by examining the stationary distribution of discrete- or continuous-time stochastic processes associated with income or wealth. Typically, researchers proceed as follows.
• They choose a stochastic process for the assets of interest (for example, housing wealth, human wealth (i.e., wages), stocks, bonds, social security income, etc.).
• They use data to estimate the parameters of the stochastic process for that "portfolio" of assets; see [1] for a survey intended to bridge the continuous-time versions of these models. In some cases, the parameters are derived from optimal control of a Hamilton-Jacobi-Bellman equation [1,2,6].
• They solve for the stationary distribution associated with the stochastic process. In this way, they can examine properties of the distribution, relate it back to the data, and conduct hypotheticals on the impact of policy.
With this approach, the emphasis on the steady-state distribution has come out of necessity. Even the speed of convergence towards the steady state has recently become an active research field; see [17] for a theory of the convergence rates largely focused on infinite-dimensional univariate models, and [24] for earlier evidence and theory on transition rates of the firm size distribution (methodologically, much of the literature on income/wealth inequality is similar to the firm dynamics literature, where the goal is to understand the distribution of firm sizes or productivity as well as the role of firm or worker heterogeneity in generating that distribution [16,24,25]).
The function p_r(x_r) does not evolve over time and determines the limiting (equilibrium) density function p(x) = ∏_{r=1}^d p_r(x_r) characterizing the steady-state probability distribution of the process.
• Any function W that maps a state x ∈ [0, 1]^d to a scalar "wealth" or "payoff" W(x). Clearly, W(X(t)) is a random variable evolving over time together with the portfolio X(t), and we are interested in quantities like the average wealth E[W(X)] and the wealth variance Var[W(X)] computed in the steady-state distribution p(x).

Discrete-State Formulation
Suppose we discretize the hypercube [0, 1]^d by introducing an n_1 × ⋯ × n_d lattice with n_r points in direction x_r separated by a discretization step ∆_r > 0, as in Section 6.4. This essentially means that we allow each random variable (asset) X_r(t) to assume only a finite number of values. Consequently, the portfolio X(t) = (X_1(t), …, X_d(t)) can only be in a finite number of states x_1, …, x_n. The use of upwind finite differences allows us to convert the 2d PDEs (6.10)-(6.11) into a single system of ODEs

dp/dt(t) = L⊤_{n,µ,σ} p(t)  (6.12)

subject to an initial condition p(0), where L_{n,µ,σ} is the infinitesimal generator (6.7) and p_i(t) is the probability that the portfolio X(t) is in state x_i at time t. After this discretization, the continuous-state continuous-time Markov process of Section 6.5.2 is changed into a discrete-state continuous-time Markov chain. Here, the objects of interest are the discrete counterparts of those mentioned in Section 6.5.2, i.e., the following.
• The stationary distribution p = (p_1, …, p_n) of the process, that is, the probability vector independent of t satisfying (6.12). Clearly, p is the normalized positive eigenvector of L⊤_{n,µ,σ} associated with the zero eigenvalue and is given by (6.9).
• Any function W that maps a state x_i ∈ [0, 1]^d to a scalar "wealth" or "payoff" W(x_i). Clearly, W(X(t)) is a random variable evolving over time together with the portfolio X(t), and we are interested in quantities like the average wealth E[W(X)] and the wealth variance Var[W(X)] computed in the steady-state distribution p, that is,

E[W(X)] = Σ_{i=1}^n p_i W_i,  (6.13)
Var[W(X)] = Σ_{i=1}^n p_i W_i² − (E[W(X)])²,  (6.14)

where W = (W_1, …, W_n) is the vector (tensor) of payoffs and W² is the componentwise square of W (in general, operations on vectors that have no meaning in themselves must be interpreted in the componentwise sense).
Considering that p is known from (6.9), formulas (6.13)-(6.14) allow us to compute both the average wealth and the wealth variance in the steady state of the process. This lets us analyze different hypothetical scenarios. For example, if the drift µ_1 of the housing component of an individual's portfolio increases, what would the impact be on the average wealth? Alternatively, we could ask how the wealth variance (a simple measure of inequality) would change if the variance of wages increases.
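Given any steady-state vector p and payoff vector W (both purely illustrative below), the two moments are one line each:

```python
import numpy as np

# Illustrative steady-state distribution p and payoff vector W.
p = np.array([0.1, 0.2, 0.3, 0.4])
W = np.array([0.0, 1.0, 2.0, 3.0])

mean_wealth = p @ W                        # E[W(X)], cf. (6.13)
var_wealth = p @ W**2 - mean_wealth**2     # Var[W(X)], cf. (6.14)
```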

Convergence Speed to the Steady State
The results of Section 6.4 allow us to quantify the convergence speed to the steady state of the Markov chain presented in Section 6.5.3. Indeed, as we know from Section 6.4, all nonzero eigenvalues of L_{n,µ,σ} are negative and the largest of them, i.e., the second largest eigenvalue after 0, is given by ν = max_{r=1,…,d} ν_r, where ν_r is the second largest eigenvalue of L_{n_r,µ_r,σ_r}. The second eigenvalue provides a measure of the convergence speed towards the steady state. The reason is the following: for essentially every choice of the initial distribution p(0), the quantities p(t), E[W(X(t))], Var[W(X(t))] converge to their stationary counterparts p, E[W(X)], Var[W(X)] in (6.9), (6.13), (6.14). For more details on the role of the second eigenvalue as a measure of the asymptotic convergence rate towards the steady state, see, e.g., [16] and [23, Section 7.2].
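The decay rate can be observed empirically. The sketch below reuses the M/M/1/K generator of Section 6.1 as a stand-in for L_{n,µ,σ} (with hypothetical parameters), propagates p(t) = e^{tQ⊤}p(0) through the eigendecomposition, and checks that the distance to the stationary vector eventually decays at the rate given by the second eigenvalue:

```python
import numpy as np

def Q(n, lam, mu):
    # M/M/1/K transition-rate matrix (Section 6.1), used here as the generator.
    A = np.zeros((n, n))
    for i in range(n - 1):
        A[i, i + 1] = lam
        A[i + 1, i] = mu
    np.fill_diagonal(A, -A.sum(axis=1))
    return A

n, lam, mu = 8, 1.0, 2.0
A = Q(n, lam, mu).T
w, V = np.linalg.eig(A)
nu = np.sort(w.real)[-2]              # second eigenvalue: asymptotic decay rate

p0 = np.zeros(n); p0[0] = 1.0         # start with all mass in state 0
def propagate(t):                     # p(t) = exp(t*A) p0 via eigendecomposition
    return (V @ (np.exp(w * t) * np.linalg.solve(V, p0))).real

p_inf = propagate(200.0)              # effectively the stationary distribution
d1 = np.linalg.norm(propagate(10.0) - p_inf)
d2 = np.linalg.norm(propagate(12.0) - p_inf)
rate = np.log(d2 / d1) / 2.0          # empirical decay rate over [10, 12]
```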

Derivatives with Respect to Drifts and Variances
For the convenience of economists, we here report the derivatives of the steady-state distribution p in (6.9), the average wealth E[W(X)] in (6.13), and the wealth variance Var[W(X)] in (6.14) with respect to the drifts and variances. We remark that the above derivatives are defined even in the case µ_r = 0, and their values in this case are obtained by taking the limit of the corresponding expression as µ_r → 0. The derivatives (6.19)-(6.20) enable an analysis of how the steady state changes when properties of the underlying process change. For example, if the volatility of housing prices σ²_1 increases, equations (6.19)-(6.20) provide the resulting impact on the steady state. The derivatives (6.21)-(6.24) can be used to examine how key moments of the stationary distribution change. For example, a researcher could analyze the impact on the steady-state variance of the wealth distribution, i.e., Var[W(X)], in the case where the volatility of housing prices σ²_1 increases. Figure 6.3 illustrates this by showing how the mean and variance of the stationary wealth distribution change with respect to the parameters of the underlying stochastic process. The figure has been realized through a discretization of the square [0, 1]² by an n_1 × n_2 lattice with n_1 = n_2 = 31 points in each direction and (consequently) two equal discretization steps ∆_1 = ∆_2 = 1/30. It should be noted, however, that the graphs in Figure 6.3 do not really depend on n_1 and n_2, because they converge to limiting graphs as n_1, n_2 → ∞ (and convergence is already attained for n_1 = n_2 = 31).

Table 3.1: Validation of Theorem 3.1 in the case ε = 3 and ϕ = 1/2, where ε + ε⁻¹ = 3.33…. For every n, we have denoted by µ_n the unique outlier of T_{n,ε,ϕ} and by x_n the corresponding normalized eigenvector computed by Julia.

Table 3.2: Validation of Theorems 3.1 and 3.2 in the case ε = 4 and ϕ = −2, where ε + ε⁻¹ = 4.25 and ϕ + ϕ⁻¹ = −2.5. For every n, we have denoted by µ_n, ν_n the unique two outliers of T_{n,ε,ϕ} and by x_n, y_n the corresponding normalized eigenvectors computed by Julia. We have called µ_n the outlier closest to ε + ε⁻¹ and ν_n the other outlier.

Table 3.3: Validation of Theorem 3.3 in the case ε = ϕ = 8/5, where ε + ε⁻¹ = ϕ + ϕ⁻¹ = 2.225. For every n, we have denoted by µ_n, ν_n the unique two outliers of T_{n,ε,ϕ} and by x_n, y_n the corresponding normalized eigenvectors computed by Julia. We have called µ_n the outlier whose eigenvector x_n is the closest to its projection onto ⟨v_n + w_n⟩ and ν_n the other outlier. We have numerically verified that, up to rounding errors, E_n x_n = x_n and E_n y_n = −y_n for all the considered n.
σ²_1, …, σ²_d), and with the edges of the hypercube [0, 1]^d acting as reflecting barriers. The probability density function p_r(x_r, t) for the asset X_r(t) at time t is determined by the Kolmogorov forward equation (Fokker-Planck equation)