Canonical Typicality For Other Ensembles Than Micro-Canonical

We generalize L\'evy's lemma, a concentration-of-measure result for the uniform probability distribution on high-dimensional spheres, to a much more general class of measures, so-called GAP measures. For any given density matrix $\rho$ on a separable Hilbert space $\mathcal{H}$, GAP$(\rho)$ is the most spread out probability measure on the unit sphere of $\mathcal{H}$ that has density matrix $\rho$ and thus forms the natural generalization of the uniform distribution. We prove concentration-of-measure whenever the largest eigenvalue $\|\rho\|$ of $\rho$ is small. We use this fact to generalize and improve well-known and important typicality results of quantum statistical mechanics to GAP measures, namely canonical typicality and dynamical typicality. Canonical typicality is the statement that for ``most'' pure states $\psi$ of a given ensemble, the reduced density matrix of a sufficiently small subsystem is very close to a $\psi$-independent matrix. Dynamical typicality is the statement that for any observable and any unitary time-evolution, for ``most'' pure states $\psi$ from a given ensemble the (coarse-grained) Born distribution of that observable in the time-evolved state $\psi_t$ is very close to a $\psi$-independent distribution. So far, canonical typicality and dynamical typicality were known for the uniform distribution on finite-dimensional spheres, corresponding to the micro-canonical ensemble, and for rather special mean-value ensembles. Our result shows that these typicality results hold also for GAP$(\rho)$, provided the density matrix $\rho$ has small eigenvalues. Since certain GAP measures are quantum analogs of the canonical ensemble of classical mechanics, our results can also be regarded as a version of equivalence of ensembles.

Roughly speaking, "canonical typicality" is the statement that the reduced density matrix of a subsystem obtained from a pure state of the total system is nearly deterministic if the pure state is randomly drawn from a sufficiently large subspace and the subsystem is not too large. More precisely, the original statement of canonical typicality [26,7,31,18] asserts that for most pure states ψ from a high-dimensional (e.g., micro-canonical) subspace H_R of the Hilbert space H_S of a macroscopic quantum system S, and for a subsystem a of S = a ∪ b so that H_S = H_a ⊗ H_b, the reduced density matrix

ρ_a^ψ := tr_b |ψ⟩⟨ψ|

is close to the partial trace of ρ_R := P_R/d_R (the normalized projection to H_R) and thus deterministic, provided that d_R := dim H_R is sufficiently large:

ρ_a^ψ ≈ tr_b ρ_R .

Here, the words "most ψ" refer to the uniform distribution u_R (normalized surface area measure) over the unit sphere in H_R. The name "canonical typicality" comes from the fact that if H_R = H_mc is a micro-canonical subspace and thus ρ_R = ρ_mc a micro-canonical density matrix, then tr_b ρ_mc is close to the canonical density matrix for a with suitable β, provided b is large and the interaction between a and b is weak; see, e.g., [18] for a summary of the standard derivation of this fact.
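As a quick numerical illustration of this statement (our own sketch, not part of the paper; the dimensions are arbitrary, and we take the simplest case H_R = H_S, so that ρ_R = I/D), one can draw ψ uniformly from the unit sphere and check that ρ_a^ψ is close to I_a/d_a:

```python
import numpy as np

rng = np.random.default_rng(0)
d_a, d_b = 4, 2000                      # small subsystem a, large environment b

# A normalized vector of i.i.d. complex Gaussians is uniformly distributed
# on the unit sphere of H_a ⊗ H_b ≅ C^(d_a*d_b).
psi = rng.normal(size=(d_a, d_b)) + 1j * rng.normal(size=(d_a, d_b))
psi /= np.linalg.norm(psi)

# With psi stored as a d_a x d_b matrix, the partial trace over b is
# rho_a = tr_b |psi><psi| = psi @ psi^dagger.
rho_a = psi @ psi.conj().T

# Canonical typicality with rho_R = I/(d_a*d_b): rho_a ≈ I_a/d_a.
deviation = np.linalg.norm(rho_a - np.eye(d_a) / d_a, ord="nuc")  # trace norm
print(deviation)    # small, and shrinks as d_b grows
```

Increasing d_b (at fixed d_a) makes the trace-norm deviation shrink, in line with the error bounds discussed below.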
In this paper, we replace the uniform distribution by other, much more general distributions, so-called GAP measures, and show that for them a generalized canonical typicality remains valid. For any density matrix ρ replacing ρ_R in H_S, GAP(ρ) is the most spread-out distribution over S(H_S) with density matrix ρ; the acronym stands for Gaussian adjusted projected measure [23,19]. For ρ = ρ_can, it arises as the distribution of wave functions in thermal equilibrium [19,17]. If a system is initially in thermal equilibrium for the Hamiltonian H_0 but then driven out of equilibrium by means of a time-dependent H_t, its wave function will still be GAP(ρ)-distributed for suitable ρ. For general ρ, we think of GAP(ρ) as the natural ensemble of wave functions with density matrix ρ; for a more detailed description, see Section 2.2.
We prove quantitative bounds asserting that for any ρ with small eigenvalues (so ρ is far from pure) and GAP(ρ)-most ψ ∈ S(H_S),

ρ_a^ψ ≈ tr_b ρ .    (5)

Some reasons for seeking this generalization are: first, that it is mathematically natural; second, that in situations in which we can ask what the actual distribution of ψ is (more detail later), this distribution might not be uniform; third, that it shows that the sharp cut-off of energies involved in the definition of H_mc actually plays no role; and finally, that it informs and extends our picture of the equivalence of ensembles. A more detailed discussion of these reasons is given in Section 2.1.
As a direct consequence of generalized canonical typicality let us mention that, just as canonical typicality implies that for most pure states ψ ∈ S(H_S) the entanglement entropy −tr(ρ_a^ψ log ρ_a^ψ) has nearly the maximal value log d_a with d_a = dim H_a [22] (because ρ_a^ψ ≈ tr_b I_S/D = I_a/d_a with I the identity operator and D = d_a d_b = dim H_S), generalized canonical typicality implies that GAP(ρ)-typical ψ have entanglement entropy −tr(ρ_a^ψ log ρ_a^ψ) ≈ −tr(ρ_a log ρ_a) with ρ_a = tr_b ρ. Since different probability distributions over the unit sphere in a Hilbert space H can have the same density matrix, and since the outcome statistics of any experiment depend only on the density matrix, it may seem at first irrelevant to even consider distributions over S(H). However, for example, an ensemble of spins prepared so that (about) half are in state ↑ and the others in ↓ is physically different from a uniform ensemble over S(C²), even though both ensembles have density matrix ½ I. Likewise, for an ensemble of particles prepared by taking them from a system in thermal equilibrium, the wave function is GAP-distributed (see Section 2.2). More basically, probability distributions play a key role in any typicality statement, i.e., one saying that some condition is satisfied by most wave functions, "most" relative to a certain distribution; such a statement cannot be formulated in terms of density matrices.
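The spin example can be made concrete in a few lines (a sketch of our own; the Monte Carlo sample size is arbitrary). Both ensembles below have density matrix ½ I, yet as distributions over S(C²) they are clearly different:

```python
import numpy as np

rng = np.random.default_rng(1)

# Ensemble 1: half the spins in state "up", half in "down".
up = np.array([1, 0], dtype=complex)
down = np.array([0, 1], dtype=complex)
rho1 = 0.5 * np.outer(up, up.conj()) + 0.5 * np.outer(down, down.conj())

# Ensemble 2: spins drawn uniformly from the unit sphere S(C^2);
# we estimate its density matrix by Monte Carlo.
N = 100_000
psis = rng.normal(size=(N, 2)) + 1j * rng.normal(size=(N, 2))
psis /= np.linalg.norm(psis, axis=1, keepdims=True)
rho2 = np.einsum("ni,nj->ij", psis, psis.conj()) / N

print(np.allclose(rho1, np.eye(2) / 2))      # exactly I/2
print(np.abs(rho2 - np.eye(2) / 2).max())    # ≈ 0 up to Monte Carlo error
```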
We note that the generalization of canonical typicality from uniform measures to GAP measures is not straightforward. First, not every measure µ over S(H_S) with a given density matrix ρ with small eigenvalues makes it true that for µ-most ψ, ρ_a^ψ ≈ tr_b ρ. We give a counter-example in Remark 15 in Section 3. Second, if ρ is not close to a multiple of a projection, then GAP(ρ) is far from uniform; specifically, its density will at some points be larger than at others by a factor like exp(D) (see Remark 13). And third, even measures close to uniform (for example the von Mises-Fisher distribution, see again Remark 13) can fail to satisfy generalized canonical typicality.
In this paper, we prove generalized canonical typicality in rigorous form by providing error bounds for (5) at any desired confidence level that is implicit in the word "most"; see Theorem 1 and Theorem 3. Compared to the known error bounds based on u_R, we can prove more or less the same bounds with d_R replaced by the reciprocal of the largest eigenvalue ∥ρ∥ of ρ, with ∥·∥ the operator norm. Thus, the approximation is good as soon as no single direction contributes too much to ρ. In particular, for ρ = ρ_R, our results essentially reproduce the known error bounds. As one central part of our proof, we also establish a variant of Lévy's lemma [25,27,24] (a statement about the concentration of measure on a high-dimensional sphere, see below) for GAP measures instead of the uniform measure (Theorem 2). In particular, our version of Lévy's lemma holds also on infinite-dimensional spheres, where the uniform measure does not exist. Furthermore, we provide several corollaries. The first one shows that for any observable and GAP(ρ)-most ψ, the coarse-grained Born distribution is near a ψ-independent one (see Remark 4 in Section 3.1 for discussion). The second arises from evolving the observable with time and provides a form of dynamical typicality [2], which means that for typical initial wave functions, the time evolution "looks" the same; here, "typical" refers to the GAP(ρ) distribution, and "look" (which in [48] meant the macroscopic appearance) refers to the Born distribution for the observable considered. In fact, Corollary 2 even shows that the relevant kind of closeness (to a t-dependent but ψ-independent distribution) holds jointly for most t ∈ [0, T]. As a further variant (Corollary 3), dynamical typicality also holds when "look" refers to ρ_a^ψ. Put differently, the statement here is that for GAP(ρ)-most ψ and most t ∈ [0, T],

ρ_a^{ψ_t} ≈ tr_b ρ_t ,

where ψ_t = U_t ψ and ρ_t = U_t ρ U_t* for an arbitrary unitary time evolution U_t (allowing for time-dependent H_t). In the original version of canonical typicality, one particularly considers for ρ_R the micro-canonical density matrix ρ_mc for a fixed Hamiltonian H, for which the time evolution yields nothing interesting because ρ_mc is invariant anyway; but if we consider arbitrary ρ, then ρ can evolve in a non-trivial way even for fixed H.
Another corollary (Corollary 4) concerns the conditional wave function ψ_a of a (which is the natural notion of the subsystem wave function for a, see Section 2.2 for the definition): It is known that if d_R is large, then for u_R-most ψ and most bases of H_b, the Born distribution of ψ_a is approximately GAP(tr_b ρ_R). We generalize this statement as follows: if d_b is large and ρ has small eigenvalues, then for GAP(ρ)-most ψ and most bases of H_b, the Born distribution of ψ_a is approximately GAP(tr_b ρ).
The results of this paper can also be regarded as a variant of equivalence of ensembles in quantum statistical mechanics, i.e., as a new instance of the well-known phenomenon in statistical mechanics that it does not make a big difference whether we use the micro-canonical ensemble or the canonical one (for suitable β) or another equilibrium ensemble. Indeed, the uniform distribution over the unit sphere in a micro-canonical subspace can be regarded as a quantum analog of the micro-canonical distribution in classical statistical mechanics, and the GAP measure associated with a canonical density matrix as a quantum analog of the canonical distribution; see also Remark 11 in Section 3.2.
Our results on generalized canonical typicality (5) provide two kinds of error bounds based on two strategies of proof. They are roughly analogous to the following two bounds on the probability that a random variable X deviates from its expectation EX by more than n standard deviations √Var(X): First, the Chebyshev inequality yields the bound 1/n², which is valid for any distribution of X. Second, the Gaussian distribution has very light tails, so if X is Gaussian distributed, then the aforementioned probability is actually smaller than e^{−n²/2} (a type of bound known as a Chernoff bound), so the Chebyshev bound would be very coarse. Likewise, the two kinds of bound we provide are based, respectively, on the Chebyshev inequality and the Chernoff bound (in the form of Lévy's lemma). The former is polynomial in p_max (the largest eigenvalue of ρ), the latter exponential as in e^{−1/p_max}. For the original statement of canonical typicality (using u_R), the Chebyshev-type bounds were first given by Sugita [46], the Chernoff-type bounds by Popescu et al. [30]. Our proof of the Chebyshev-type bounds makes heavy use of results of Reimann [35].
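The gap between the two kinds of bounds is easy to quantify numerically (a self-contained sketch; for a Gaussian X the deviation probability can be written exactly as erfc(n/√2)):

```python
import math

# Probability that X deviates from EX by at least n standard deviations:
# Chebyshev bound 1/n^2 (valid for any distribution) vs. the exact Gaussian tail.
for n in [2, 3, 5]:
    chebyshev = 1 / n**2
    gauss_tail = math.erfc(n / math.sqrt(2))   # = P(|X - EX| >= n*sigma), X Gaussian
    print(n, chebyshev, gauss_tail)
# At n = 5: Chebyshev gives 0.04, while the Gaussian tail is below 1e-6.
```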
A version of Lévy's lemma was also established for the mean-value ensemble on a finite-dimensional Hilbert space H [28]. This is the uniform distribution on S(H) restricted to the set {ψ ∈ S(H) : ⟨ψ|A|ψ⟩ = a} for a given observable A and a value a satisfying further conditions. However, as also the authors of [28] point out, the physical relevance of this ensemble remains unclear. Dynamical typicality has also been established for the mean-value ensemble; see [39] for an overview.
The remainder of this paper is organized as follows: In Section 2, we elucidate the motivation and background. In Section 3, we formulate and discuss our results. In Section 4, we provide the proofs. In Section 5, we conclude.

Motivation
Canonical typicality is often (rightly) used as a justification and derivation of the canonical density matrix ρ_can from something simpler, viz., from the uniform distribution over the unit sphere in an appropriate subspace H_mc. So it may appear surprising that here we consider other distributions instead of the uniform one. That is why we give some elucidation in this section.
The uniform distribution for ψ can appear in either of two roles: as a measure of probability or a measure of typicality. What is the difference? The concept of probability, in the narrower sense used here, refers to a physical situation that occurs many times or can be made to occur many times, so that one can meaningfully speak of the empirical distribution of part of the physical state, such as ψ, over the ensemble of trials. In contrast, the concept of typicality, in the sense used here, refers to a hypothetical ensemble and applies also in situations that do not occur repeatedly, such as the universe as a whole, or occur at most a few times; it defines what a typical solution of an equation or theory looks like, or the meaning of "most." Typicality is used in defining what counts as thermal equilibrium (e.g., [10] and references therein), but also in certain laws of nature such as the past hypothesis (a proposed law about the initial micro-state of the universe serving as the basis of the arrow of time; see [21, Sec. 5.7] for a formulation in terms of typicality). Moreover, it plays a key role in the explanation of certain phenomena by showing that they occur in "most" cases.
The mathematical statements apply regardless of whether we think of the measure as probability or typicality. If we use u_mc as probability, then the question naturally arises whether the actual distribution of ψ is uniform, and generalizations to other measures are called for. The GAP measures are then particularly relevant, not just as a natural choice of measures, but also because they arise as the thermal equilibrium distribution of wave functions.
But also for u_mc as a measure of typicality, which is perhaps the more important or more widely used case, the generalization is relevant. The way we practically think of canonical typicality is that if ψ is just "any old" wave function of S, then ρ_a^ψ will be approximately canonical. But the theorem of original canonical typicality (using u_mc) would require that the coefficients of ψ relative to energy levels of S outside of the micro-canonical energy interval [E − ∆E, E] are exactly zero, which of course goes against the idea of ψ being "any old" ψ. Of course, we would expect that the canonicality of ρ_a^ψ does not depend much on whether other coefficients are exactly zero or not. And the theorems in this paper show that this is correct! They show that if the ρ we start from is not ρ_mc, then the crucial part of the reasoning (the typical-ψ part) still goes through, just with corrections reflected in the deviation of tr_b ρ from tr_b ρ_mc (which, by the way, will be minor for ρ = ρ_can with appropriate inverse temperature β). More generally, the theorems in this paper prove the robustness of canonical typicality towards changes in the underlying measure.
The results of this paper also show that when computing the typical reduced state ρ_a^ψ for "any old" ψ, we can start from various choices of ρ of the whole, as long as they yield approximately the same tr_b ρ. The results thus provide researchers with a new angle of looking at canonical typicality: it is OK to imagine "any old" ψ, and not crucial to start from u_mc. More generally, our results are a kind of equivalence-of-ensembles statement in the quantum case, and thus add to the picture consisting of various senses in which different thermal equilibrium ensembles are practically equivalent, in this case with "ensemble" meaning ensemble of wave functions (i.e., measures over the unit sphere). Again, it plays a role that the GAP measures arise as the thermal equilibrium distribution of wave functions, and thus as an analog of the canonical ensemble in classical statistical mechanics. This means also that if ψ is itself a conditional wave function, a case in which we know [19,17] that (for high dimension and most orthonormal bases) ψ is approximately GAP-distributed, then canonical typicality applies. A special application concerns the thermodynamic limit, for which it is desirable to think of the conditional wave function ψ_A of a region A in 3-space as obtained from ψ_{A′} for a larger region A′ ⊃ A, which in turn is obtained from ψ_{A″} for an even larger A″ ⊃ A′, and so on. Then for each step, ψ_{A′} (etc.) is GAP-distributed.
By the way, the results here also have the converse implication of supporting the naturalness of the GAP measures.One might even consider a version of the past hypothesis that uses, as the measure of typicality, a GAP measure instead of the uniform distribution over the unit sphere in some subspace of the Hilbert space of the universe.

Mathematical Setup and Some Background
One often considers the uniform distribution over the unit sphere in a subspace H′ of a system's Hilbert space H. While this distribution is associated with a density matrix given by the normalized projection to H′, the measure GAP(ρ) forms an analog of it for an arbitrary density matrix. We now give its definition and that of some other mathematical concepts we use.
Throughout this paper, all Hilbert spaces H are assumed to be separable, i.e., to have either a finite or a countably infinite orthonormal basis (ONB).The unit sphere S(H) is always equipped with the Borel σ-algebra.
Density matrix. To any probability measure µ on S(H) we can associate a density matrix ρ_µ by

ρ_µ := ∫_{S(H)} µ(dψ) |ψ⟩⟨ψ|

(which always exists [49, Lemma 1]). Note that if µ has mean zero then ρ_µ is the covariance matrix of µ. It will turn out for µ = GAP(ρ) that ρ_µ = ρ.
GAP measure.The measure GAP(ρ) was first introduced for finite-dimensional H by Jozsa, Robb, and Wootters [23], who named it Scrooge measure.
A complex-valued random variable Z will be said to be Gaussian with mean z ∈ C and variance σ² > 0 if and only if Re Z and Im Z are independent real Gaussian random variables with means Re z and Im z, respectively, and each with variance σ²/2. Let ρ = Σ_n p_n |n⟩⟨n| be a spectral decomposition of ρ, and let (Z_n)_{n=1,…,dim H} be a sequence of independent C-valued Gaussian random variables with mean 0 and variances E|Z_n|² = p_n. Then we define G(ρ) to be the distribution of the random vector

Ψ^G := Σ_n Z_n |n⟩ ,

i.e., the Gaussian measure on H with mean 0 and covariance operator ρ. (It is known [32] in general that for every φ ∈ H and every positive trace-class operator ρ there exists a unique Gaussian measure on H with mean φ and covariance operator ρ.) Note that

E ∥Ψ^G∥² = Σ_n E|Z_n|² = tr ρ = 1 ,    (12)

which also shows that ∥Ψ^G∥ < ∞ almost surely, but in general ∥Ψ^G∥ ≠ 1, i.e., G(ρ) is not a distribution on the sphere S(H). Projecting the measure G(ρ) to the sphere S(H) would not result in a measure with density matrix ρ; therefore we first adjust the density of G(ρ) and define the adjusted Gaussian measure GA(ρ) on H as the measure that has density ∥ψ∥² relative to G(ρ), i.e.,

GA(ρ)(dψ) = ∥ψ∥² G(ρ)(dψ) ,

which is a probability measure by virtue of (12). It will turn out below that ∥ψ∥² is the right factor to ensure that ρ_{GAP(ρ)} = ρ. Let Ψ^{GA} be a GA(ρ)-distributed random vector. We define GAP(ρ) to be the distribution of

Ψ^{GAP} := Ψ^{GA} / ∥Ψ^{GA}∥ .

Note that the denominator is almost surely non-zero (because every 1-element subset of H has G(ρ)-measure 0, because every Z_n has continuous distribution). With this we find that indeed ρ_{GAP(ρ)} = ρ. See [49] for a complete proof of existence and uniqueness of GAP(ρ) for every density matrix ρ. GAP(ρ) can also be characterized as the minimizer of the "accessible information" functional under the constraint that its density matrix is ρ [23]. If all eigenvalues of ρ are positive and D := dim H < ∞, then GAP(ρ) possesses a density relative to the uniform distribution u on S(H) [19,17], proportional to ⟨ψ|ρ⁻¹|ψ⟩^{−(D+1)}. It was argued in [19] and mathematically justified in [17] that GAP measures describe the thermal equilibrium distribution of the (conditional) wave function of the system if ρ is a canonical density matrix.

¹ Named after Ebenezer Scrooge, a fictional character in and the protagonist of Charles Dickens' novella A Christmas Carol (1843) who is known for being very stingy. As Jozsa et al. argue, the GAP measure is in some sense the most spread-out distribution on S(H) with density matrix ρ, and they choose the name "Scrooge measure" because the measure is "particularly stingy with its information."
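The three-step construction G(ρ) → GA(ρ) → GAP(ρ) can be turned into an exact sampler. The following sketch (our own; the trick of realizing the density ∥ψ∥² by size-biasing one coordinate is an implementation detail not taken from the text) works in the eigenbasis of ρ and checks empirically that the density matrix of GAP(ρ) is ρ:

```python
import numpy as np

rng = np.random.default_rng(2)

def sample_gap(p, n_samples, rng):
    """Draw n_samples from GAP(rho) for rho = diag(p), in the eigenbasis of rho."""
    p = np.asarray(p)
    D = len(p)
    # Step 1 -- G(rho): independent complex Gaussians Z_n with E|Z_n|^2 = p_n.
    z = rng.normal(size=(n_samples, D)) + 1j * rng.normal(size=(n_samples, D))
    z *= np.sqrt(p / 2)
    # Step 2 -- GA(rho): density ||psi||^2 w.r.t. G(rho). Since
    # ||psi||^2 = sum_n |z_n|^2, GA is a mixture with weights p_n in which
    # the n-th coordinate is size-biased: |Z_n|^2 ~ Gamma(shape 2, scale p_n),
    # with uniform phase.
    idx = rng.choice(D, size=n_samples, p=p)
    r2 = rng.gamma(shape=2.0, scale=p[idx])
    phase = rng.uniform(0.0, 2 * np.pi, size=n_samples)
    z[np.arange(n_samples), idx] = np.sqrt(r2) * np.exp(1j * phase)
    # Step 3 -- GAP(rho): project to the unit sphere.
    return z / np.linalg.norm(z, axis=1, keepdims=True)

p = [0.5, 0.3, 0.2]
psis = sample_gap(p, 200_000, rng)
rho_emp = np.einsum("ni,nj->ij", psis, psis.conj()) / len(psis)
print(np.abs(rho_emp - np.diag(p)).max())   # ≈ 0: GAP(rho) has density matrix rho
```

The mixture step uses that the total mass contributed by the factor |z_n|² is exactly p_n, so picking the size-biased coordinate with probability p_n reproduces GA(ρ) without any rejection step.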
It was also shown in [19] that GAP is equivariant under unitary transformations, i.e., for all density matrices ρ, all unitary operators U on H, and all measurable sets M ⊂ S(H), one has

GAP(UρU*)(M) = GAP(ρ)(UM) .    (17)
In particular, GAP is equivariant under unitary time evolution, and, as a consequence, GAP(ρ_t) is the relevant distribution on S(H) whenever the system starts in thermal equilibrium with respect to some Hamiltonian H_0 and evolves according to any Hamiltonian H_t at later times. More generally, the results of [17] (and their extension in Corollary 4) show that if a system has density matrix ρ arising from entanglement, then its (conditional) wave function (relative to a typical basis, see below) is asymptotically GAP-distributed. Thus, GAP is the correct distribution in many practically relevant cases. On top of that, when we have no further restriction than that the density matrix is ρ, then the natural concept of a "typical ψ" should refer to the most spread-out distribution compatible with ρ, which is GAP(ρ).
Finally, let us remark that GAP(ρ) is also invariant under global phase changes, i.e., GAP(ρ)(M) = GAP(ρ)(e^{iϕ}M) for all measurable M ⊂ S(H) and ϕ ∈ R. Hence, GAP(ρ) naturally also defines a probability distribution on the projective space of complex rays in H, and all results presented in the following can be equivalently formulated for rays instead of vectors.
Remark 1. In terms of ρ_µ, we can easily formulate and prove a weaker version of our main result (5); this version is related to (5) in more or less the same way as the statement that in a certain population the average height is 170 cm is related to the stronger statement that in that population most people are 170 cm tall. The weaker version asserts that the average of ρ_a^ψ over ψ using the GAP(ρ) distribution is equal to tr_b ρ, whereas the statement about (5) was that most ψ relative to GAP(ρ) have ρ_a^ψ (approximately) equal to tr_b ρ. On the other hand, the statement about the average is stronger because it asserts, not approximate equality, but exact equality. On top of that, the average statement is not limited to the GAP measure but holds for any probability measure µ. Here is the full statement: for separable H = H_a ⊗ H_b, any probability measure µ on S(H), and a random vector ψ with distribution µ,

E_µ ρ_a^ψ = tr_b ρ_µ .    (18)

Indeed, tr_b commutes with µ-integration,² so

E_µ tr_b |ψ⟩⟨ψ| = tr_b E_µ |ψ⟩⟨ψ| = tr_b ρ_µ .

⋄
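The identity of Remark 1 is pure linearity and can be checked mechanically (a minimal sketch with an arbitrary discrete measure µ of our own choosing; the reshape-based partial trace is a standard implementation, not notation from the paper):

```python
import numpy as np

rng = np.random.default_rng(3)
d_a, d_b = 2, 3

def partial_trace_b(rho, d_a, d_b):
    """tr_b of a (d_a*d_b) x (d_a*d_b) matrix, with index convention a*d_b + b."""
    return np.trace(rho.reshape(d_a, d_b, d_a, d_b), axis1=1, axis2=3)

# A discrete probability measure mu on S(H): four unit vectors with weights.
vecs = rng.normal(size=(4, d_a * d_b)) + 1j * rng.normal(size=(4, d_a * d_b))
vecs /= np.linalg.norm(vecs, axis=1, keepdims=True)
weights = np.array([0.1, 0.2, 0.3, 0.4])

# Average of tr_b |psi><psi| versus tr_b of the average (= tr_b rho_mu):
avg_reduced = sum(w * partial_trace_b(np.outer(v, v.conj()), d_a, d_b)
                  for w, v in zip(weights, vecs))
rho_mu = sum(w * np.outer(v, v.conj()) for w, v in zip(weights, vecs))
print(np.allclose(avg_reduced, partial_trace_b(rho_mu, d_a, d_b)))  # True
```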
Norms. The distance between two density matrices will be measured in the trace norm

∥M∥_tr := tr √(M*M) ,

where M* denotes the adjoint operator of M. If M can be diagonalized through an orthonormal basis (ONB), then ∥M∥_tr is the sum of the absolute values of the eigenvalues. We will also sometimes use the operator norm

∥M∥ := sup_{∥ψ∥=1} ∥Mψ∥ ,

which, if M can be diagonalized through an ONB, is the largest absolute value of an eigenvalue.
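Numerically, both norms are conveniently obtained from the singular values of M (a small sketch; `nuc` and `2` are numpy's names for the trace and operator norms):

```python
import numpy as np

rng = np.random.default_rng(4)
M = rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3))

s = np.linalg.svd(M, compute_uv=False)   # eigenvalues of sqrt(M* M)
trace_norm = s.sum()                     # ||M||_tr
operator_norm = s.max()                  # ||M||

# For a self-adjoint matrix these equal the sum resp. the maximum of the
# absolute values of the eigenvalues:
A = (M + M.conj().T) / 2
eig = np.linalg.eigvalsh(A)
print(np.isclose(np.abs(eig).sum(), np.linalg.norm(A, ord="nuc")))  # True
print(np.isclose(np.abs(eig).max(), np.linalg.norm(A, ord=2)))      # True
```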
Purity. For a density matrix ρ, its purity is defined as tr ρ². In terms of the spectral decomposition ρ = Σ_n p_n |n⟩⟨n|, the purity is tr ρ² = Σ_n p_n², which can be thought of as the average size of p_n. In particular, the purity is positive and ≤ 1; it is = 1 if and only if ρ is pure, i.e., a 1d projection; for a normalized projection ρ_R = P_R/d_R, the purity is 1/d_R; conversely, 1/purity can be thought of as the effective number of dimensions over which ρ is spread out. It also easily follows that

∥ρ∥² ≤ tr ρ² ≤ ∥ρ∥ ,

because p_n² ≤ p_n ∥ρ∥ for every n, and, if p_{n₀} is the largest eigenvalue, p_{n₀}² ≤ Σ_n p_n² because all other terms are ≥ 0. In words, the average p_n is no greater than the maximal p_n, which is bounded by the square root of the average p_n (and the square root of the maximal p_n).
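The inequality chain and the reading of 1/purity as an effective dimension are easy to test on a random density matrix (a sketch; the Wishart-type construction of ρ is an arbitrary choice of ours):

```python
import numpy as np

rng = np.random.default_rng(5)

# A random density matrix: positive and of trace one.
X = rng.normal(size=(6, 6)) + 1j * rng.normal(size=(6, 6))
rho = X @ X.conj().T
rho /= np.trace(rho).real

purity = np.trace(rho @ rho).real          # sum_n p_n^2
p_max = np.linalg.eigvalsh(rho)[-1]        # ||rho|| = largest eigenvalue

print(p_max**2 <= purity <= p_max <= np.sqrt(purity))   # True
print(1 / purity)   # effective number of dimensions over which rho spreads
```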

Conditional wave function. For H = H_a ⊗ H_b, an ONB B = {|m⟩_b} of H_b, and ψ ∈ S(H), the conditional wave function ψ_a [5,6,19] of system a is a random vector in H_a that can be constructed by choosing a random one of the basis vectors |m⟩_b, let us call it |M⟩_b, with the Born distribution

P(M = m) = ∥⟨m|ψ⟩_b∥² ,    (23)

taking the partial inner product of |M⟩_b and ψ, and normalizing:

ψ_a := ⟨M|ψ⟩_b / ∥⟨M|ψ⟩_b∥ .

(Note that the event that ⟨M|ψ⟩_b = 0 has probability 0 by (23). In the context of Bohmian mechanics, the expression "conditional wave function" refers to the position basis and the Bohmian configuration of b [5]; but for our purposes, we can leave it general.) We can also think of ψ_a as arising from ψ through a quantum measurement with eigenbasis B on system b, which leads to the collapsed quantum state ψ_a ⊗ |M⟩_b. Correspondingly, we call the distribution of ψ_a in S(H_a) the Born distribution of ψ_a and denote it by Born_a^{ψ,B}. However, when considering ψ_a, we will not assume that any observer actually, physically carries out such a quantum measurement; rather, we use ψ_a as a theoretical concept of a wave function associated with the subsystem a. It is related to the reduced density matrix ρ_a^ψ in a way similar to how a conditional probability distribution is related to a marginal distribution:

E |ψ_a⟩⟨ψ_a| = ρ_a^ψ .

ψ_a is also related to the GAP measure, in fact in two ways. First, when we average Born_a^{ψ,B} over all ONBs B (using the uniform distribution corresponding to the Haar measure), then we obtain GAP(ρ_a^ψ) [17, Lemma 1]. Put differently, if we think of both M and B as random and ψ_a thus as doubly random, then its (marginal) distribution is GAP(ρ_a^ψ); put more briefly, GAP(ρ_a^ψ) is the distribution of the collapsed pure state in a after a purely random quantum measurement in b on ψ. Second, if d_b is large, then even conditionally on a single given B, the distribution of ψ_a is close to a GAP measure for most B and most ψ according to a GAP measure on H_a ⊗ H_b; this is the content of Corollary 4 below.
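The construction of ψ_a and its relation to ρ_a^ψ can be checked in coordinates (a sketch of our own; we take B to be the standard basis of C^{d_b}, so the partial inner products are just the columns of ψ viewed as a d_a × d_b matrix):

```python
import numpy as np

rng = np.random.default_rng(6)
d_a, d_b = 3, 5

# psi in H_a ⊗ H_b as a d_a x d_b matrix; column m is <m|psi>_b.
psi = rng.normal(size=(d_a, d_b)) + 1j * rng.normal(size=(d_a, d_b))
psi /= np.linalg.norm(psi)

born = np.linalg.norm(psi, axis=0) ** 2        # P(M = m) = ||<m|psi>_b||^2
M = rng.choice(d_b, p=born)                    # the random basis vector |M>_b
psi_a = psi[:, M] / np.linalg.norm(psi[:, M])  # conditional wave function

# Averaging |psi_a><psi_a| over M with the Born weights recovers the
# reduced density matrix rho_a^psi = tr_b |psi><psi| = psi @ psi^dagger.
avg = np.zeros((d_a, d_a), dtype=complex)
for m in range(d_b):
    c = psi[:, m] / np.linalg.norm(psi[:, m])
    avg += born[m] * np.outer(c, c.conj())
print(np.allclose(avg, psi @ psi.conj().T))    # True
```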

Main Results
In this section, we present and discuss our main results about generalized canonical typicality. In the following, we use the notation µ(f) for the average of the function f under the measure µ,

µ(f) := ∫_{S(H)} µ(dψ) f(ψ) .    (26)

Note that, by (18), the GAP(ρ)-average of ρ_a^ψ is exactly tr_b ρ. The statement of our generalized canonical typicality differs in that it concerns approximate equality and holds for the individual ρ_a^ψ, not only for its average.

Statements
We first formulate our main theorem on canonical typicality for GAP measures and the underlying variant of Lévy's lemma for GAP measures. We then give a list of further consequences of this generalized version of Lévy's lemma, including results on dynamical typicality and the fact that the typical Born distribution of conditional wave functions is itself a GAP measure. At the end of this section, we also state a slightly weaker version of our main theorem that is not based on Lévy's lemma but instead allows for a rather elementary proof based on the Chebyshev inequality. Finally, the known bounds for uniformly distributed ψ will be stated in Remark 12 in Section 3.2 for comparison.
Theorem 1 (Generalized canonical typicality, exponential bounds). Let H_a and H_b be Hilbert spaces with H_a having finite dimension d_a and H_b being separable, and let ρ be a density matrix on H = H_a ⊗ H_b. Then for every δ > 0, the bound (28) holds, where c = 48π.
Remark 2. The relation (28) can equivalently be formulated as a bound on the confidence level, given the allowed deviation: for every ε ≥ 0, where C = 1/(2304π²). This form makes it visible why we call Theorem 1 an "exponential bound": because the bound on the probability of too large a deviation is exponentially small in 1/∥ρ∥. In contrast, the bound (37) is polynomially small in tr ρ². ⋄

A key tool for proving Theorem 1 is Theorem 2 below, a variant of Lévy's lemma for GAP measures. Recall the notation (26).
Theorem 2 (Lévy's lemma for GAP measures). Let H be a separable Hilbert space, let f : S(H) → R be a Lipschitz continuous function with Lipschitz constant³ η, let ρ be a density matrix on H, and let ε ≥ 0. Then the concentration bound (30) holds, where C = 1/(288π²).

Remark 3. The statement remains true for complex-valued f if we replace the constant factor 6 in (30) by 12 and C by C/2, as follows from considering the real and imaginary parts of f separately. ⋄

³ A Lipschitz constant refers to a metric on the domain, and two metrics are often considered on the sphere: the spherical metric (distance along the sphere, d_sph(ψ, φ) = arccos Re⟨ψ|φ⟩) and the Euclidean metric (distance in the ambient space across the interior of the sphere, d_Eucl(ψ, φ) = ∥ψ − φ∥). We use the spherical metric, as did [27,30,31], but since d_Eucl(ψ, φ) ≤ d_sph(ψ, φ) ≤ (π/2) d_Eucl(ψ, φ), using the Euclidean metric would at most change the Lipschitz constants by a factor of π/2.
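The concentration phenomenon is easy to observe numerically in the uniform special case ρ = I/D, where GAP(ρ) reduces to the uniform distribution (a sketch of our own, with the Lipschitz function f(ψ) = ⟨ψ|B|ψ⟩ for a diagonal bounded B):

```python
import numpy as np

rng = np.random.default_rng(7)
D, N = 2000, 500
b = rng.uniform(-1, 1, size=D)        # spectrum of a diagonal observable B, ||B|| <= 1

# N samples of psi drawn uniformly from S(C^D):
psis = rng.normal(size=(N, D)) + 1j * rng.normal(size=(N, D))
psis /= np.linalg.norm(psis, axis=1, keepdims=True)

# f(psi) = <psi|B|psi> = sum_i |psi_i|^2 b_i for each sample:
vals = (np.abs(psis) ** 2) @ b

print(np.std(vals))                   # small: f is nearly constant over the sphere
print(abs(vals.mean() - b.mean()))    # the near-constant value is tr(B)/D = tr(rho B)
```

The observed standard deviation shrinks as D grows, matching the heuristic that the deviation probability is exponentially small in 1/∥ρ∥ = D.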
As an immediate consequence of Theorem 2 for f(ψ) = ⟨ψ|B|ψ⟩, which has Lipschitz constant η ≤ 2∥B∥ [30, Lemma 5], we obtain:

Corollary 1. Let ρ be a density matrix and B a bounded operator on the separable Hilbert space H. For every ε ≥ 0, the bound (31) holds, with C = 1/(2304π²).

Remark 4. Corollary 1 provides an extension to GAP measures of the known fact [33] that ⟨ψ|B|ψ⟩ has nearly the same value for most ψ relative to the uniform distribution. This kind of near-constancy is different from the near-constancy property of a macroscopic observable, viz., that most of its eigenvalues (counted with multiplicity) in the micro-canonical energy shell are nearly equal. Here, in contrast, nothing (except boundedness) is assumed about the distribution of eigenvalues of B. In particular, if B is a self-adjoint observable, then a typical ψ may well define a non-trivial probability distribution over the spectrum of B, not necessarily a sharply peaked one. The near-constancy property asserted here is that the average of this probability distribution is the same for most ψ. In fact, it also follows that the probability distribution itself is the same for most ψ ("distribution typicality"), at least on a coarse-grained level (by covering the spectrum of B with not-too-many intervals) and provided that many dimensions participate in ρ. This follows from inserting spectral projections of the observable for B in (31). ⋄

In contrast to the uniform distribution on the sphere in the micro-canonical subspace, which is invariant under the unitary time evolution, GAP(ρ_0) will in general evolve, in fact to GAP(ρ_t) by (17). This leads to questions about what the history t ↦ ψ_t looks like. Inserting U_t* B U_t for B in (31) leads us to the first equation in the following variant of "dynamical typicality" for GAP measures.
Corollary 2. Let H be a separable Hilbert space, B a bounded operator and ρ a density matrix on H, and t ↦ U_t a measurable family of unitary operators. Then for every ε, t ≥ 0,

Clearly, for U_t we have in mind either a unitary group U_t = exp(−iHt) generated by a time-independent Hamiltonian H, or a unitary evolution family U_t satisfying i (d/dt) U_t = H_t U_t and U_0 = I generated by a time-dependent Hamiltonian H_t. However, the group resp. cocycle structure plays no role in the proof. (In [48], a similar result for the uniform distribution over the sphere in a large subspace was formulated only for time-independent Hamiltonians, but the proof given there actually applies equally to time-dependent ones.)

The last two corollaries were applications of Lévy's lemma that did not involve reduced density matrices. We now turn to bi-partite systems again and present two further corollaries. We first ask whether, for GAP(ρ_0)-typical ψ_0, the reduced density matrix ρ_a^{ψ_t} remains close to tr_b ρ_t over a whole time interval [0, T]. The following corollary answers this question affirmatively for most times in this interval.
Corollary 3. Let H_a and H_b be Hilbert spaces with H_a having finite dimension d_a and H_b being separable, ρ a density matrix on H = H_a ⊗ H_b, and t ↦ U_t a measurable family of unitary operators on H. Then for every ε, T > 0,

GAP(ρ)({ψ ∈ S(H) : (1/T) ∫_0^T ∥ρ_a^{ψ_t} − tr_b ρ_t∥_tr dt < ε}) ≥ 1 − δ ,

where δ is an explicit error bound of the same type as in Theorem 1, and ψ_t = U_t ψ and ρ_t = U_t ρ U_t* as before.

The next corollary expresses that for GAP(ρ)-typical ψ, large d_b, and small tr ρ², the conditional wave function ψ_a (relative to a typical basis) has Born distribution close to GAP(tr_b ρ). (Note that we are considering the distribution of ψ_a conditionally on a given ψ, rather than the marginal distribution of ψ_a for random ψ, which would be ∫_{S(H)} GAP(ρ)(dψ) Born_a^{ψ,B}(·).) Recall the notation (26).
where Born^{ψ,B}_a is the distribution of the conditional wave function, ONB(H_b) is the set of all orthonormal bases of H_b, and u_ONB is the uniform distribution over this set.
Remark 5. We conjecture that the closeness between Born^{ψ,B}_a and GAP(tr_b ρ) is even better than stated in Corollary 4, at least when 0 is not an eigenvalue of tr_b ρ, in the sense that (35) holds not only for continuous f but even for bounded measurable f, and in fact uniformly in f with given ‖f‖_∞. This conjecture is suggested by using Lemma 6 of [17] instead of Lemma 5, or rather a variant of it with more explicit bounds. ⋄

Whereas Theorem 1 is based on the rather technical concentration-of-measure result Theorem 2, a slightly weaker statement can be obtained using only the Chebyshev inequality and a bound on the variance of random variables of the form ψ ↦ ⟨ψ|A|ψ⟩ with respect to GAP(ρ), given in Proposition 1 in Section 4.2. The latter bound is also of interest in its own right and has already been established for self-adjoint A by Reimann in [35].
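The mechanism behind this variance bound can be illustrated numerically (this is an illustration, not part of the paper's argument). Using the construction of GAP(ρ) from the Gaussian measure G(ρ) and the adjusted measure GA(ρ) of Section 2.2, GAP(ρ)-averages can be estimated by self-normalized importance sampling, since GA(ρ) has density ‖z‖² relative to G(ρ). The diagonal ρ = I/D and the observable B below are illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(0)

def gap_average(g, p, n_samples=20000):
    """Estimate the GAP(rho)-average of g for rho = diag(p).
    GA(rho) has density ||z||^2 w.r.t. the Gaussian G(rho) (note
    E||Z||^2 = tr rho = 1), so E_GAP[g(psi)] = E[||Z||^2 g(Z/||Z||)]."""
    D = len(p)
    Z = (rng.normal(size=(n_samples, D))
         + 1j * rng.normal(size=(n_samples, D))) * np.sqrt(np.asarray(p) / 2)
    w = np.sum(np.abs(Z) ** 2, axis=1)        # importance weights ||Z||^2
    psi = Z / np.sqrt(w)[:, None]             # projection to the unit sphere
    return np.average(g(psi), weights=w)

D = 400
p = np.ones(D) / D                            # rho = I/D: small ||rho||
b = np.linspace(-1.0, 1.0, D)                 # diagonal observable B, ||B|| = 1

f = lambda psi: np.sum(b * np.abs(psi) ** 2, axis=1)   # <psi|B|psi>
mean = gap_average(f, p)
var = gap_average(lambda psi: f(psi) ** 2, p) - mean ** 2

# E<psi|B|psi> = tr(B rho) = 0 here, and the variance is tiny because
# many dimensions participate in rho
print(mean, var)
```

The estimated mean is close to tr(Bρ) = 0 and the variance is of order 1/D, in line with the near-constancy of ψ ↦ ⟨ψ|B|ψ⟩ asserted above.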
Theorem 3 (Generalized canonical typicality, polynomial bounds). Let H_a and H_b be Hilbert spaces with H_a having finite dimension d_a and H_b being separable. Let ρ be a density matrix on H = H_a ⊗ H_b with ‖ρ‖ < 1/4. Then for every δ > 0, the bound (36) holds.

Remark 6. Again, we can equivalently express Theorem 3 as a bound on the confidence level 1 − δ for any given allowed deviation of ρ^ψ_a from tr_b ρ: for every ρ with ‖ρ‖ < 1/4 and every ε > 0,

⋄
Remark 7. While our main motivation for developing Theorem 3 is the different strategy of proof, and while the exponential bound of Theorem 1 will usually be tighter than the polynomial bound of Theorem 3, this is not always the case: the bound of Theorem 3 is actually sometimes better, as the following example shows. Suppose that ‖ρ‖ = p_1 = 1/√D and that all other p_j are equal, i.e., p_j = (1 − 1/√D)/(D − 1) for all j > 1. Then, for, e.g., d_a = 1000 and ε = 0.01, we find that the polynomial bound is smaller than the exponential one for 4.67·10^13 < D < 9.17·10^31; i.e., in this example there is a regime in which D is already very large but the polynomial bound is still smaller than the exponential one. ⋄

Discussion
Remark 8. System size. Theorem 3 shows, roughly speaking, that as soon as the smallness condition (41) holds, GAP(ρ)-most wave functions ψ have ρ^ψ_a close to tr_b ρ. If we think of 1/tr ρ² as the effective number of dimensions participating in ρ, and if this number of dimensions is comparable to the full number D = dim H = d_a d_b of dimensions, then (41) reduces to, roughly, d_a ≲ D^{1/5}. Since the dimension is exponential in the number of degrees of freedom, this condition roughly means that the subsystem a comprises fewer than 20% of the degrees of freedom of the full system. (The same consideration was carried out in [13,14] for the original statement of canonical typicality.) The stronger exponential bound yields that a can even comprise up to 50% of the degrees of freedom [13,14]. ⋄

Remark 9. Canonical density matrix. A ρ of particular interest is the canonical density matrix ρ_can = Z^{−1} e^{−βH}. The relevant condition for generalized canonical typicality to apply to ρ = ρ_can is that it have small purity tr ρ² and small largest eigenvalue ‖ρ‖. We argue that indeed it does. One heuristic reason is equivalence of ensembles: since ρ_mc has purity 1/d_mc and largest eigenvalue 1/d_mc, which is small, the values for ρ_can should be similarly small. Another heuristic argument is based on the idealization that the system consists of many non-interacting constituents, so that H = H_1^{⊗N} and H = Σ_{j=1}^N I^{⊗(j−1)} ⊗ H_1 ⊗ I^{⊗(N−j)}, so ρ_can = ρ_{1,can}^{⊗N}. It is a general fact that for tensor products ρ_1 ⊗ ρ_2 of density matrices, the purities multiply, tr(ρ_1 ⊗ ρ_2)² = (tr ρ_1²)(tr ρ_2²), and the largest eigenvalues multiply, ‖ρ_1 ⊗ ρ_2‖ = ‖ρ_1‖ ‖ρ_2‖. Thus, the purity of ρ_can is the N-th power of that of ρ_{1,can}, and likewise for the largest eigenvalue. Since N ≫ 1 and the values for ρ_{1,can} are somewhere between 0 and 1, and not particularly close to 1, the values for ρ_can are close to 0, as claimed. We expect that mild interaction does not change this picture very much. ⋄

Remark 10. Classical vs.
quantum. While classically a typical phase point from a canonical ensemble is also a typical phase point from some micro-canonical ensemble, a typical wave function from GAP(ρ_β) does not lie in any micro-canonical subspace H_mc (if H ≠ H_mc), and even if it does lie in an H_mc, it is not typical relative to that subspace; that is because typical wave functions are superpositions of many energy eigenstates, and the weights of these eigenstates in ρ_mc resp. ρ_can are reflected in the weights of these eigenstates in the superposition. Therefore, already in the case that ρ is a canonical density matrix, Theorems

Then for every δ > 0, the bound (44) holds. When we apply our Theorem 3 to ρ = ρ_R (and assume d_R ≥ 4), we obtain that GAP(ρ) = u_R and tr ρ² = 1/d_R, and almost exactly the bound (44), except for a (rather irrelevant) factor √28 and d_a^{2.5} instead of d_a². Further explanation of how this different exponent comes about can be found in Section 4.6.
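The two multiplicativity facts invoked in Remark 9, that purity and largest eigenvalue of a tensor product factorize, are easy to confirm numerically; the following sketch (with illustratively chosen random density matrices) does so:

```python
import numpy as np

rng = np.random.default_rng(1)

def random_density_matrix(d):
    """A random density matrix: normalized A A* for a complex Gaussian A."""
    A = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    rho = A @ A.conj().T
    return rho / np.trace(rho).real

rho1, rho2 = random_density_matrix(3), random_density_matrix(4)
rho = np.kron(rho1, rho2)               # the tensor product rho1 (x) rho2

purity = lambda r: np.trace(r @ r).real
largest = lambda r: np.linalg.eigvalsh(r)[-1]

# purity and largest eigenvalue are both multiplicative under tensor products
print(purity(rho), purity(rho1) * purity(rho2))
print(largest(rho), largest(rho1) * largest(rho2))
```

Both printed pairs agree up to floating-point error, reflecting that the eigenvalues of ρ_1 ⊗ ρ_2 are exactly the products of the eigenvalues of the factors.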
Theorem 5 (Canonical typicality, exponential bounds [30,31]). With the notation and hypotheses as in Theorem 4, the exponential bound holds for every δ > 0 satisfying the smallness condition (46).

This theorem was stated slightly differently in [30,31]; we give the derivation of this form in Section 4.6. Again, the bound agrees with the bound (28) provided by Theorem 1 for ρ = ρ_R (so ‖ρ‖ = 1/d_R) up to worse constants and additional factors of d_a.
Next, here is the standard statement of Lévy's lemma:

Theorem 6 (Lévy's Lemma [27]). Let H be a Hilbert space of finite dimension D := dim H ∈ N, let f : S(H) → R be a function with Lipschitz constant η, let u be the uniform distribution over S(H), and let ε > 0. Then the bound (47) holds, where Ĉ = 2/(9π³).

When we apply our Theorem 2 to ρ = I/D, we obtain that GAP(ρ) = u and ‖ρ‖ = 1/D, and exactly the bound (47), except for worse constants. Note that Theorem 2 holds also for infinite-dimensional separable H.
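The content of Lévy's lemma can be seen in a quick simulation (an illustration; the diagonal observable below is an arbitrary choice of Lipschitz function): the fluctuations of f(ψ) = ⟨ψ|B|ψ⟩ over the uniform distribution on the sphere shrink as the dimension grows.

```python
import numpy as np

rng = np.random.default_rng(2)

def spread(D, n=4000):
    """Standard deviation of f(psi) = <psi|B|psi> over uniform psi in S(C^D),
    for a fixed diagonal observable B with ||B|| = 1 (so f is Lipschitz)."""
    b = np.linspace(-1.0, 1.0, D)
    Z = rng.normal(size=(n, D)) + 1j * rng.normal(size=(n, D))
    psi = Z / np.linalg.norm(Z, axis=1)[:, None]   # uniform on the sphere
    f = np.sum(b * np.abs(psi) ** 2, axis=1)
    return f.std()

spreads = [spread(D) for D in (10, 100, 1000)]
# concentration of measure: the fluctuations decay roughly like 1/sqrt(D)
print([round(s, 4) for s in spreads])
```

The observed decay of the standard deviation with D is the polynomial shadow of the exponential concentration asserted by Theorem 6.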
We turn to previous results for dynamical typicality. In [48], an inequality analogous to the bound (32) of Corollary 2 was proven for the uniform distribution over the sphere in a subspace. In [28], variants of Lévy's lemma and dynamical typicality were established for the mean-value ensemble of an observable A for a value a ∈ R, defined by restricting the uniform distribution on S(C^D) to the set {ψ ∈ S(C^D) : ⟨ψ|A|ψ⟩ = a} and normalizing afterwards. However, the physical relevance of this ensemble is unclear, since, in general, the mean value of an observable is itself not an observable, and thus it is unclear how this ensemble could be prepared or occur in an experiment. ⋄

Remark 13. Lévy's lemma for other distributions. Lévy's lemma, although it applies to the uniform and GAP measures, does not apply to all rather-spread-out distributions on the sphere; it is thus a non-trivial property of the family of GAP measures. This can be illustrated by means of the von Mises-Fisher (VMF) distribution, a well-known and natural probability distribution on the unit sphere S(R^D) in R^D that is different from the GAP measure. It has parameters κ ∈ R_+ and µ ∈ S(R^D) and can be obtained from a Gaussian distribution in R^D with mean µ and covariance κ^{−1}I by conditioning on S(R^D). The analog of Lévy's lemma for the von Mises-Fisher distribution is false; this can be seen as follows. Its density g with respect to the uniform distribution u on S(R^D) varies at most by a factor of e^{2κ} when varying x (while keeping D and κ fixed). For a given Lipschitz function F on the sphere, insertion of F(x)g(x) for f(x) in a real variant of Lévy's lemma for the uniform distribution (Theorem 6 above) yields that F(x)g(x) is, for u-most x, close to the u-average of Fg, which equals the VMF-average of F (where the Lipschitz constant of f = Fg could be a bit worse than that of F). The set of exceptional x has small u-measure, and since C(D, κ) ∈ [e^{−κ}, e^{κ}] and thus g(x) ∈ [e^{−2κ}, e^{2κ}], it also has small VMF-measure (larger at most by a factor of e^{2κ}). Thus, for VMF-most x, F(x) is close to VMF(F)/g(x), and thus not constant at all. The same argument shows that Lévy's lemma is violated for any sequence of measures (µ_D)_{D∈N} on S(R^D) whose density g_D relative to u is bounded uniformly in D, has Lipschitz constant bounded uniformly in D, but deviates significantly from 1 on a non-negligible set in S(R^D).

(Footnote to Theorem 6: Lévy's lemma in its usual formulation asserts that the ε-neighborhood of any subset of the sphere of measure 1/2 has nearly full area if the dimension d is large enough. As pointed out by, e.g., Milman and Schechtman [27], it follows for a function f : S(R^d) → R with Lipschitz constant η (by taking S = f^{−1}(m) and m the median of f) that most points ψ have f(ψ) close to m if d is large enough. The variant quoted here, referring to the mean instead of the median, is due to Maurey and Pisier [29] and is also described in [27, App. V].)
For GAP measures the situation is very different. From (16) one can see, for example, that if the eigenvalue p_{n_2} of ρ = Σ_n p_n |n⟩⟨n| is twice as large as another eigenvalue p_{n_1}, then the density (16) at ψ = |n_2⟩ is 2^{D+1} times as large as that at ψ = |n_1⟩. Thus, the density and its Lipschitz constant are not (for relevant choices of ρ) bounded uniformly in D; rather, non-uniform GAP measures become more and more singular with respect to the uniform distribution for large D. ⋄

Remark 14. Generalized canonical typicality from conditional wave function? One might imagine a different strategy of deriving generalized canonical typicality, based on regarding ψ itself as a conditional wave function and using the known fact [19,17] that conditional wave functions are approximately GAP-distributed. ⋄

Remark 15. Let µ be the measure that is concentrated on the finite set {|n⟩ : 1 ≤ n ≤ D} and gives weight p_n to each |n⟩. This measure is the narrowest, most concentrated measure with density matrix ρ, and thus a kind of opposite of GAP(ρ), the most spread-out measure with density matrix ρ. A random vector ψ with distribution µ is a random eigenvector |n⟩. If the eigenbasis is a product basis, |n⟩ = |ℓ⟩_a ⊗ |m⟩_b with eigenvalues p_{ℓm}, then the reduced density matrix ρ^ψ_a is always a pure state and thus far away from tr_b ρ = Σ_{ℓ,m} p_{ℓm} |ℓ⟩_a⟨ℓ| if that is highly mixed. Note, however, that if instead of a product basis we had taken (|n⟩)_{n=1,…,D} to be a purely random ONB of H, then (with overwhelming probability, if D is large) each ρ^ψ_a would be close to d_a^{−1} I_a, and thus also tr_b ρ (which by (18) is the µ-average of ρ^ψ_a) is close to d_a^{−1} I_a, so ρ^ψ_a ≈ tr_b ρ for µ-most ψ, despite the narrowness of µ. ⋄

Remark 16. Canonical typicality with respect to GAP(ρ) does not hold for every ρ.
Let us consider the special case in which ρ has one eigenvalue that is large (e.g., 10^{−1}), while all others are very small (e.g., 10^{−1000}). Such a situation occurs, for example, for N-body quantum systems with a gapped ground state |0⟩ at very low temperature T of order (log N)^{−1}. So call the large eigenvalue p and suppose for definiteness that all other eigenvalues are equal, with O(1/D) referring to the trace norm and the limit D → ∞. In that case, tr ρ² ≈ p² (e.g., 10^{−2}, while d_a may be 10^{100}), so the smallness condition (41) for generalized canonical typicality is strongly violated. To investigate ρ^ψ_a, note that any vector ψ ∈ S(H) can be written as ψ = cos θ e^{iα}|0⟩ + sin θ|φ⟩ with θ ∈ [0, π/2], α ∈ [0, 2π), and |φ⟩ ⊥ |0⟩. If ψ has distribution GAP(ρ), then φ has distribution u_{S(|0⟩^⊥)} and is independent of θ and α, α is independent of θ and uniformly distributed, and a lengthy computation yields the density of the distribution of θ. Since ρ^ψ_a depends on θ, and θ is not deterministic but has a non-trivial distribution, it follows that ρ^ψ_a is not close to tr_b ρ with high probability. ⋄

Remark 17. Comparison to large deviation theory. In large deviation theory [50], one studies another version of concentration of measure: one considers a sequence of probability distributions (P_N)_{N∈N} on (say) the real line and studies whether (and at which rate) P_N[x, ∞) tends to 0 exponentially fast as N → ∞ for fixed x ∈ R. Our situation is a bit similar, with the role of x played by ε in (29), and that of P_N by the distribution of ‖ρ^ψ_a − tr_b ρ‖_tr in R for GAP(ρ)-distributed ψ. However, our situation does not quite fit the standard framework of large deviations because we do not necessarily consider a sequence ρ_N of density matrices, but rather a fixed ρ with small ‖ρ‖. That is why we have provided error bounds in terms of the given ρ. ⋄

4 Proofs

Proof of Remark 1
What needs proof here is that also in infinite dimension the partial trace commutes with the expectation, where we used Fubini's theorem in the third and the definition of ρ_µ in the fourth line. Since a bounded operator A is uniquely determined by the quadratic form φ ↦ ⟨φ|A|φ⟩, it follows that E_µ(ρ^ψ_a) = tr_b ρ_µ.

Proof of Theorem 3
We start with the proof of the polynomial version of generalized canonical typicality and thereby introduce approximation techniques for infinite-dimensional Hilbert spaces, which will also be used later in the proof of the exponential bounds of Theorem 1.
For the proof of Theorem 3 we make use of a result of Reimann [35]. Let (|n⟩)_{n=1,…,D} be an orthonormal basis of eigenvectors of ρ and p_1, …, p_D the corresponding (positive) eigenvalues. Reimann used the density of the GAP measure GAP(ρ) to compute expressions involving the coordinates c_j = ⟨j|ψ⟩ of ψ ∈ S(H) with respect to the orthonormal basis (|j⟩)_{j=1,…,D}, where the expectation is taken with respect to GAP(ρ). With the help of these expressions he derived an upper bound for the variance Var⟨ψ|A|ψ⟩ (also taken with respect to GAP(ρ)) for self-adjoint operators A : H → H. We show that Reimann's upper bound for Var⟨ψ|A|ψ⟩ remains essentially valid also for non-self-adjoint A, and this bound will be a main ingredient in our proof of Theorem 3. We start by computing the expectation E⟨ψ|A|ψ⟩ and an upper bound for the variance Var⟨ψ|A|ψ⟩ for an arbitrary operator A : H → H, where expectation and variance are with respect to the measure GAP(ρ). We closely follow Reimann [35], who did these computations in the case that A is self-adjoint. We arrive at the same bound for the variance (with the distance between the largest and smallest eigenvalue of A replaced by its operator norm); however, one step in the proof needs to be modified to account for A not necessarily being self-adjoint. Moreover, we show that the expression for E⟨ψ|A|ψ⟩ and the upper bound for Var⟨ψ|A|ψ⟩ remain valid if H has countably infinite dimension, i.e., if it is separable.

Proposition 1. Let ρ be a density matrix on a separable Hilbert space H with positive eigenvalues p_n such that p_max = ‖ρ‖ < 1/4, and let dim H ≥ 4.
For GAP(ρ)-distributed ψ and any bounded operator A : H → H, E⟨ψ|A|ψ⟩ = tr(Aρ), and Var⟨ψ|A|ψ⟩ satisfies an upper bound of the order ‖A‖² tr ρ².

Proof. We first assume that D := dim H < ∞. The formula for the expectation follows immediately from the fact that the density matrix of GAP(ρ) is ρ. For a complex-valued random variable X the variance can be computed by Var X = E|X|² − |E X|². Since the variance of a random variable does not change when a constant is added, we can assume for its computation without loss of generality that E⟨ψ|A|ψ⟩ = 0. Let (|n⟩)_{n=1,…,D} be an orthonormal basis of H consisting of eigenvectors of ρ. For ψ ∈ S(H) we write ψ = Σ_l c_l |l⟩ with c_l = ⟨l|ψ⟩ and A_ml = ⟨m|A|l⟩, and evaluate X = ⟨ψ|A|ψ⟩ in these coordinates. Reimann [35] computed the relevant fourth moments of the coefficients. Because of |A_mm| ≤ ‖A‖ it follows from the computation in [35] that the diagonal contribution can be controlled. Moreover, as was shown in [35], K_ml ≤ 1/(1 − p_max) for all l and m, and therefore the corresponding double sum can be bounded as well. Since A is not necessarily self-adjoint, we have to proceed in a different way than Reimann [35] did to bound the remaining term. To this end we make use of the Cauchy-Schwarz inequality for the trace, i.e.,
… = tr(AA*ρ²) tr(A*Aρ²). (67b)

Combining (64), (65), (66) and (67c) proves the bound for the variance and thus finishes the proof in the finite-dimensional case.

Now suppose that H has a countably infinite ONB. The expectation can be computed as before, since GAP(ρ)(|ψ⟩⟨ψ|) = ρ remains true in the infinite-dimensional setting [49]. For the variance, we approximate ρ by density matrices ρ_n, n ∈ N, of finite rank, defined as in (68). Then ‖ρ_n − ρ‖_tr → 0 as n → ∞, and therefore Theorem 3 in [49] implies that GAP(ρ_n) ⇒ GAP(ρ) (weak convergence). Note also that from some n_0 onwards, Σ_{m=n}^∞ p_m ≤ p_1, and thus ‖ρ_n‖ = ‖ρ‖ for n ≥ n_0. Since f is continuous, it follows from the weak convergence of the measures GAP(ρ_n) that GAP(ρ_n)(f) → GAP(ρ)(f), and therefore altogether that GAP(ρ_n)(f_n) → GAP(ρ)(f). Since, as one easily verifies, tr ρ_n² → tr ρ², the bound for the variance in the finite-dimensional case remains valid in the infinite-dimensional setting.

Proof of Theorem 3. Without loss of generality assume that all eigenvalues of ρ are positive. Proposition 1 together with Chebyshev's inequality implies, for any operator A and any ε > 0, that the GAP(ρ)-probability of |⟨ψ|A|ψ⟩ − tr(Aρ)| > ε is at most Var⟨ψ|A|ψ⟩/ε². We apply this to the operators A^{(lm)} = |l⟩_a⟨m| ⊗ I_b, where I_b is the identity on H_b, for which ‖A^{(lm)}‖ = 1. For any d_a × d_a matrix M = (M_ij) it holds that ‖M‖_tr ≤ √d_a ‖M‖_2, where ‖M‖_2 denotes the Hilbert-Schmidt norm of M; see, e.g., Lemma 6 in [30]. Therefore we obtain a bound on the probability that ‖ρ^ψ_a − tr_b ρ‖_tr exceeds ε, where we used (69c), (71c), (72c) and ‖A^{(lm)}‖ = 1 in the last step. Replacing ε by ε/√d_a in the intermediate estimate and solving for ε gives (36) and thus finishes the proof.
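The norm inequality ‖M‖_tr ≤ √d_a ‖M‖_2 used in this proof is Cauchy-Schwarz applied to the vector of singular values of M; a quick numerical check (with an arbitrary complex matrix as an illustrative example) is:

```python
import numpy as np

rng = np.random.default_rng(3)

d = 6
M = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))

s = np.linalg.svd(M, compute_uv=False)   # singular values of M
trace_norm = s.sum()                     # ||M||_tr: sum of singular values
hs_norm = np.sqrt(np.sum(s ** 2))        # ||M||_2: Hilbert-Schmidt norm

# Cauchy-Schwarz on the singular-value vector: ||M||_tr <= sqrt(d) ||M||_2
print(trace_norm <= np.sqrt(d) * hs_norm)
```

Equality holds exactly when all singular values coincide, which is why the factor √d_a cannot be improved in general.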

Proof of Theorem 1
The proof of Theorem 1 largely follows the proof of canonical typicality given in [30]; the crucial differences concern our generalization of Lévy's lemma and the steps needed to cover infinite dimension.
Let U_a be a unitary operator on H_a. Then the function f : S(H) → C, f(ψ) = tr_a(U_a ρ^ψ_a) = ⟨ψ|U_a ⊗ I_b|ψ⟩, is Lipschitz continuous with Lipschitz constant η ≤ 2‖U_a‖ = 2 (see, e.g., Lemma 5 in [30]). By Theorem 2 and Remark 3, f(ψ) is, for most ψ, close to its GAP(ρ)-average, which by (27) equals tr_a(U_a tr_b ρ). Let (U_a^j)_j be unitary operators that form a basis of the space of operators on H_a such that tr_a(U_a^{j*} U_a^k) = d_a δ_jk. (One possible choice, built from an orthonormal basis (|k⟩)_{k=0,…,d_a−1} of H_a, is given in [30].) As in [30], the density matrix ρ^ψ_a can be expanded in this basis with coefficients C_j(ρ^ψ_a) = tr_a(U_a^{j*} ρ^ψ_a), and (81) becomes a bound for each coefficient.
This implies a corresponding bound on ‖ρ^ψ_a − tr_b ρ‖; after an appropriate replacement of ε, setting the right-hand side equal to δ and solving for ε finishes the proof.

Proof of Theorem 2
The proof begins with an auxiliary theorem, formulated as Theorem 8 below. For better orientation, we also state the analogous fact about Gaussian distributions as Theorem 7, and start by quoting its real version:

Lemma 1 ([27]). Let F : R^D → R be a Lipschitz function with constant η. Let X = (X_1, …, X_D) be a vector of independent (real) standard Gaussian random variables. Then for every ε > 0,
P(F(X) ≥ E F(X) + ε) ≤ exp(−2ε²/(π²η²)).

Now let ρ = Σ_{n=1}^D p_n |n⟩⟨n| be a density matrix on the D-dimensional Hilbert space H, and let Z be a random vector in H whose distribution is G(ρ), the Gaussian measure with mean 0 and covariance ρ as defined in Section 2.2; equivalently, Z = Σ_{n=1}^D Z_n |n⟩, where the Z_n are independent complex mean-zero Gaussian random variables with variances E|Z_n|² = p_n. Then we can write Z = √(ρ/2) Z̃, where the components Z̃_n of Z̃ = Σ_{n=1}^D Z̃_n |n⟩ are D independent complex mean-zero Gaussian random variables with variances E|Z̃_n|² = 2, which can in a natural way be identified with a vector of 2D independent real standard Gaussian variables.
If F : H → R is Lipschitz with constant η, then F ∘ √(ρ/2) : H → R is also Lipschitz, with constant η√(‖ρ‖/2). This function can also naturally be considered as a function on R^{2D}, and then an application of Lemma 1 immediately proves the following theorem.

Theorem 7. Let dim H < ∞, let ρ be a density matrix on H, let Z be a random vector with distribution G(ρ), and let F : H → R be a Lipschitz function with Lipschitz constant η. Then for every ε > 0,
P(F(Z) ≥ E F(Z) + ε) ≤ exp(−4ε²/(π²‖ρ‖η²)).

However, instead of using Theorem 7, we will use Theorem 8 below, a similar result for the Gaussian adjusted measure GA(ρ) defined in Section 2.2, which has density ‖ψ‖² relative to G(ρ). Its proof closely follows the proof of Lévy's lemma in [27]; for the convenience of the reader we provide all the details.

Theorem 8. Let dim H < ∞, let ρ be a density matrix on H, let Z be a random vector with distribution GA(ρ), and let F : H → R be a Lipschitz function with Lipschitz constant η. Then for every ε > 0, the bound (91) holds.

Proof. We identify H with C^D by means of the ONB (|n⟩)_{n=1,…,D}. Let ϕ : R → R be a convex function and let Z̃ = (Z̃_1, …, Z̃_D) be a vector with the same distribution as Z but independent of it. With the help of Jensen's inequality and Hölder's inequality we obtain a first estimate, in which we use the notation F(Z) and F(ψ) interchangeably. We can write Z_n = Re Z_n + i Im Z_n, where Re Z_n and Im Z_n are independent real-valued Gaussian random variables with mean 0 and variance p_n/2. Since E|Re Z_n|² = p_n/2 and E|Re Z_n|⁴ = 3p_n²/4, we obtain (93). We identify Z with the vector X := (Re Z_1, Im Z_1, Re Z_2, …, Re Z_D, Im Z_D) of real Gaussian random variables, and similarly Z̃ with Y := (Re Z̃_1, Im Z̃_1, Re Z̃_2, …, Re Z̃_D, Im Z̃_D).
For each 0 ≤ θ ≤ π/2 set X_θ = X sin θ + Y cos θ. One easily sees that the joint distribution of X and Y, which is the multivariate normal distribution with mean vector 0 and covariance matrix diag(p_1, p_1, …, p_D, p_D, p_1, p_1, …, p_D, p_D)/2, is the same as the joint distribution of X_θ and (d/dθ)X_θ = X cos θ − Y sin θ, since linear combinations of independent Gaussian random variables are again Gaussian and the entries of the mean vector and covariance matrix are easily computed.
Since F can be approximated uniformly by continuously differentiable functions, we can without loss of generality assume that F is continuously differentiable.
Let us now assume that ϕ is non-negative; then ϕ² is also convex. With the help of Jensen's inequality we obtain a further estimate, where in the last step we used Fubini's theorem and the fact that the joint distribution of X_θ and (d/dθ)X_θ is the same as the joint distribution of X and Y. Let λ ∈ R and set ϕ(x) = exp(λx). Combining the estimates yields a bound on the exponential moment, and by Markov's inequality a corresponding tail bound. Since λ ∈ R was arbitrary, we can minimize the right-hand side over λ. The minimum is attained at λ_min = 4ε/(π²‖ρ‖η²), and inserting this value in (98c) finally yields (91).
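The dimension-free character of the Gaussian concentration results (Lemma 1, Theorems 7 and 8) can be illustrated numerically with the 1-Lipschitz function F(X) = ‖X‖ (an illustrative choice): its mean grows with the dimension, but its fluctuations do not.

```python
import numpy as np

rng = np.random.default_rng(4)

def norm_stats(D, n=20000):
    """Mean and std of F(X) = ||X|| for X a standard Gaussian in R^D.
    F is 1-Lipschitz, so Lemma 1 predicts O(1) fluctuations uniformly in D."""
    X = rng.normal(size=(n, D))
    r = np.linalg.norm(X, axis=1)
    return r.mean(), r.std()

stats = {D: norm_stats(D) for D in (10, 100, 1000)}
# the mean grows like sqrt(D) while the std stays of order 1
for D, (m, s) in stats.items():
    print(D, round(m, 2), round(s, 3))
```

The standard deviation stays near 1/√2 for every D, which is exactly the kind of dimension-independent bound that Lemma 1 asserts for Lipschitz functions of Gaussian vectors.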
The last ingredient we need for the proof of Theorem 2 is the following lemma.

Lemma 2. For all r > 0, the tail bound (99) holds.

Proof. With the help of Hölder's inequality we obtain a first estimate; note that in the third line we used (93) and that Σ_n p_n = 1. We can write ‖Z‖² = Σ_n p_n |Z̃_n|², where the Z̃_n are independent complex standard Gaussian random variables. For a random variable Y, let M_Y(t) = E(e^{tY}) denote its moment generating function. The Chernoff bound states that for any a ∈ R, P(Y ≥ a) ≤ inf_{t>0} e^{−ta} M_Y(t). Next note that 2(Re Z̃_n)² and 2(Im Z̃_n)² are chi-squared distributed random variables with one degree of freedom, and that the moment generating function of a random variable Y with distribution χ²_1 is given by M_Y(t) = (1 − 2t)^{−1/2} for t < 1/2. Combining these facts yields an exponential bound, in which we choose s = ‖ρ‖^{−1} in the last line. Inserting the result into (100c) finishes the proof.
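The two probabilistic ingredients of this proof, the χ²_1 moment generating function M(t) = (1 − 2t)^{−1/2} and the Chernoff bound, can be checked by simulation (the evaluation points t = 0.2 and a = 4 are arbitrary illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(5)

Y = rng.normal(size=200000) ** 2        # chi^2 with one degree of freedom

# MGF of chi^2_1: M(t) = (1 - 2t)^(-1/2) for t < 1/2
t = 0.2
mgf_mc = np.exp(t * Y).mean()
mgf_exact = (1 - 2 * t) ** (-0.5)
print(mgf_mc, mgf_exact)

# Chernoff bound: P(Y >= a) <= e^(-t a) M(t) for every 0 < t < 1/2;
# t = (1 - 1/a)/2 minimizes the right-hand side
a = 4.0
topt = (1 - 1 / a) / 2
chernoff = np.exp(-topt * a) * (1 - 2 * topt) ** (-0.5)
print((Y >= a).mean(), chernoff)
```

The empirical MGF matches the closed form, and the empirical tail probability sits safely below the optimized Chernoff bound, as it must.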
Proof of Theorem 2. We first assume that D = dim H < ∞. Without loss of generality we can assume that GAP(ρ)(f) = 0. Due to the continuity of f, there exists a ϕ ∈ S(H) such that f(ϕ) = 0. This implies for all φ ∈ S(H) that |f(φ)| = |f(φ) − f(ϕ)| ≤ πη, where we used in the last step that the distance (in the spherical metric) between two points on the unit sphere is bounded by π. Thus f is bounded by πη. Let 0 < r < 1 and define an extension f̃ of f to H. For every ψ, ϕ ∈ H with ‖ψ‖, ‖ϕ‖ ≥ r one finds that f̃ is Lipschitz continuous with constant η/r on {ψ ∈ H : ‖ψ‖ ≥ r}. Now let ψ, ϕ ∈ H be such that ‖ψ‖, ‖ϕ‖ ≤ r and ‖ϕ‖ ≤ ‖ψ‖; then one obtains an analogous estimate, and due to the symmetry of the argument in ψ and ϕ, the same estimate holds in the case ‖ψ‖ ≤ ‖ϕ‖, so that f̃ is Lipschitz continuous with constant 5η/r on {ψ ∈ H : ‖ψ‖ ≤ r}. Finally, let ψ, ϕ ∈ H with ‖ψ‖ ≤ r and ‖ϕ‖ ≥ r, and define γ : [0, 1] → H, γ(t) = (1 − t)ψ + tϕ. Then there exists a t_0 ∈ [0, 1] such that ‖γ(t_0)‖ = r, and combining the previous estimates we conclude that f̃ is Lipschitz continuous with Lipschitz constant 6η/r on all of H. Using the definition of f̃, we split the quantity to be estimated into two terms. By Lemma 2, the second term can be bounded by √2 exp(−(1/2 − r²)/(2‖ρ‖)). In order to estimate the first term in (119e), we first derive an upper bound for |GA(ρ)(f̃)| and obtain, again by Lemma 2, a corresponding estimate. With the help of Theorem 8 this implies a bound that is valid provided that ε > 5η exp(−(1/2 − r²)/(2‖ρ‖)). Altogether we arrive at (125). We can assume without loss of generality that
ε < πη, (126)
because otherwise the left-hand side of (30) vanishes: indeed, the distance between any two points on the sphere is at most π, so their f-values can differ by at most πη, and for the same reason f(ψ) can differ from its average relative to any measure by at most πη.
Likewise, we can assume without loss of generality that
ε ≥ 10η exp(−1/(8‖ρ‖)), (127)
because otherwise the right-hand side of (30) is greater than 1. As a consequence of (126) and (127), the first exponent in (125) is greater than the second, so the claimed bound follows by (127). This finishes the proof in the finite-dimensional case.

Now suppose that H has a countably infinite ONB. Consider the density matrices ρ_n defined as in (68), and let ε′ > 0. Because the relevant set is open in S(H), it follows from the weak convergence of the measures GAP(ρ_n) to GAP(ρ) by the portmanteau theorem [3, Thm. 2.1] that the corresponding estimate holds for some large enough N ∈ N with N ≥ n_0. Recall that n_0 ∈ N is chosen such that ‖ρ_n‖ = ‖ρ‖ for all n ≥ n_0. Let H_N := span{|n⟩ : n = 1, …, N}. Then, since ρ_N is a density matrix on H_N and GAP(ρ_N) is concentrated on H_N, it follows from what we have already proven in the finite-dimensional case that the finite-dimensional bound holds with C = 1/(288π²). Noting that ‖ρ_N‖ = ‖ρ‖ and that ε′ > 0 was arbitrary, we can altogether conclude that the bound (130) remains true in the infinite-dimensional setting.

Proofs of Corollaries 2, 3, 4
Proof of Corollary 2. As already noted before Corollary 2, the first inequality follows immediately from Corollary 1 by inserting U_t* B U_t for B. For the proof of the second inequality we define the relevant auxiliary quantity and find, for every s > 0, an exponential Markov estimate, which, with δ := e^{sε}, yields the claimed bound on the GAP(ρ)-measure of the exceptional set of ψ ∈ S(H).

Further Explanations to Remark 12
As discussed after Theorem 4, applying Theorem 3 to ρ = ρ_R yields the worse factor d_a^{2.5} instead of d_a². Here we want to give some details of why, in this special case of Theorem 3, slightly better bounds can be obtained. In [30,31], Theorem 5 was stated in a slightly different form; more precisely, there it was shown that for every ε > 0 a two-term bound holds. The first square root dominates as soon as a corresponding condition on δ is satisfied, which we can of course assume without loss of generality, since otherwise we would have δ > 1 and then the lower bound on the probability would be trivial. This immediately implies (46).

Summary and Conclusions
Typicality theorems assert that, for large systems, some condition holds for most points or, here, most wave functions. The word "most" usually refers to a uniform distribution u (say, over the unit sphere S(H_R) in some Hilbert subspace H_R), but here we use the GAP measure as the natural analog of the uniform distribution in cases with given density matrix ρ. Since the GAP measure for ρ = ρ_can is the thermal equilibrium distribution of wave functions, our typicality theorems can be understood as expressing a kind of equivalence of ensembles between a micro-canonical ensemble of wave functions (u_{S(H_mc)}) and a canonical ensemble of wave functions (GAP(ρ_can)). Yet, our results apply to arbitrary ρ.
The key mathematical step is the generalization of Lévy's lemma to GAP measures, that is, of the concentration of measure on high-dimensional spheres.The fact that the pure states of a quantum system are always the points on a sphere then allows us to deduce very general typicality theorems from this kind of concentration of measure.In particular, these typicality statements are largely independent of the properties of the Hamiltonian and require only that many dimensions participate in ρ.
Specifically, some of these statements concern a bi-partite quantum system a ∪ b, where b is macroscopically large. We have shown that for most ψ from the GAP(ρ) ensemble, the reduced density matrix ρ^ψ_a is close to its average tr_b ρ, assuming that the largest eigenvalue (Theorem 1) or at least the average eigenvalue (Theorem 3) of ρ is small. That is, we have established an extension of canonical typicality to GAP measures. This family of measures is particularly natural in this context because it arises anyway in the context of bi-partite systems as the typical asymptotic distribution of the conditional wave function [19,17], a fact extended further in Corollary 4.

Another important application of concentration of measure for GAP yields (Corollary 1) that for any observable B, most ψ from the GAP(ρ) ensemble have nearly the same Born distribution (when suitably coarse-grained). Moreover (Corollaries 2 and 3), if the initial wave function ψ_0 is GAP(ρ)-distributed, then for any unitary time evolution the whole curves t ↦ ⟨ψ_t|B|ψ_t⟩ and t ↦ ρ^{ψ_t}_a are nearly deterministic (and given by tr(Bρ_t) and tr_b ρ_t).

In sum, our results describe simple relations between the following concepts: reduced density matrix, many participating dimensions, and GAP measures. That is, if many dimensions participate in ρ, then for GAP(ρ)-most ψ, the reduced density matrix ρ^ψ_a is nearly independent of ψ.

First suppose that H_R = H. Similarly to the proof of Theorem 3 one finds that
u{ψ ∈ S(H) : ‖ρ^ψ_a − tr_b ρ‖_tr > ε} ≤ (d_a/ε²) Σ_{l,m} Var⟨ψ|A^{(lm)}|ψ⟩, (146)
where A^{(lm)} = |l⟩_a⟨m| ⊗ I_b and (|l⟩_a)_{l=1,…,d_a} is an orthonormal basis of H_a. Instead of bounding the sum by d_a² times a uniform bound on the variances Var⟨ψ|A^{(lm)}|ψ⟩, one can now make use of the fact that for uniformly distributed ψ ∈ S(H), the second and fourth moments of the coefficients c_n of ψ in an orthonormal basis (|n⟩)_{n=1,…,D} of eigenvectors of ρ can be computed explicitly. More precisely, they satisfy
E(|c_n|²) = 1/D,  E(|c_n|² |c_k|²) = (1 + δ_nk)/(D(D + 1)), (147)
and all other second and fourth moments vanish; see, e.g., [8, App. A.2 and C.1]. With this one finds that Var⟨ψ|A^{(lm)}|ψ⟩ can be expressed through Σ_{k,n} |A^{(lm)}_{kn}|² (1 + δ_kn)/(D(D + 1)), and that
Σ_{l,m} tr(A^{(lm)*} A^{(lm)}) = d_a Σ_l tr(|l⟩_a⟨l| ⊗ I_b) = d_a D, (149)
and therefore one obtains the bound (150). If H_R is a proper subspace of H, then this bound remains valid after replacing ρ by P_R/d_R, u by u_R, H by H_R, and D by d_R. This follows immediately from the previous computations after noting that
Σ_{l,m} tr(A^{(lm)*} P_R A^{(lm)} P_R) ≤ Σ_{l,m} tr(A^{(lm)*} P_R A^{(lm)}) = d_a Σ_l tr((|l⟩_a⟨l| ⊗ I_b) P_R) = d_a d_R. (151)
Setting δ := d_a⁴/(ε² d_R) and solving for ε finally gives Theorem 4.
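The moment formulas (147) for the coefficients of a uniformly distributed ψ are easy to test by Monte Carlo (the dimension D = 8 below is an illustrative choice):

```python
import numpy as np

rng = np.random.default_rng(6)

D, n = 8, 400000
Z = rng.normal(size=(n, D)) + 1j * rng.normal(size=(n, D))
psi = Z / np.linalg.norm(Z, axis=1)[:, None]    # uniform on S(C^D)
c2 = np.abs(psi) ** 2                           # |c_n|^2 for each sample

second = c2[:, 0].mean()                        # E|c_n|^2          -> 1/D
fourth_diag = (c2[:, 0] ** 2).mean()            # E|c_n|^4          -> 2/(D(D+1))
fourth_off = (c2[:, 0] * c2[:, 1]).mean()       # E|c_n|^2 |c_k|^2  -> 1/(D(D+1))

print(second, 1 / D)
print(fourth_diag, 2 / (D * (D + 1)))
print(fourth_off, 1 / (D * (D + 1)))
```

Each empirical moment matches the corresponding closed-form value in (147) up to Monte Carlo error.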
3 and 1 are not just simple consequences of canonical typicality but independent results. ⋄

Remark 11. Equivalence of ensembles. We can now state more precisely the sense in which our results provide a version of equivalence of ensembles. It is well known that if a and b interact weakly and b is large enough, then both ρ_mc and ρ_can in H_S = H_a ⊗ H_b lead to reduced density matrices close to the canonical density matrix (4) for a, tr_b ρ_mc ≈ ρ_{a,can} ≈ tr_b ρ_can, provided the parameter β of ρ_can and ρ_{a,can} is suitable for the energy E of ρ_mc. Hence, Theorems 3 and 1 yield that we can start from either u_mc or GAP(ρ_can) and obtain for both ensembles of ψ that ρ^ψ_a is nearly constant and nearly canonical. ⋄

Remark 12. Comparison to the original theorems. The original, known theorems about canonical typicality, which refer to the uniform distribution over a suitable sphere instead of a GAP measure, are still contained in our theorems as special cases, except for worse constants and, in some places, additional factors of d_a (which we usually think of as constant as well). For more detail, let us begin with the known theorem analogous to Theorem 3 (formulated this way in [14, Eq. (32)], based on arguments from [46]):

Theorem 4 (Canonical typicality, polynomial bounds). Let H_a and H_b be Hilbert spaces of respective dimensions d_a, d_b ∈ N, let H = H_a ⊗ H_b, let H_R be any subspace of H of dimension d_R, let ρ_R be 1/d_R times the projection to H_R, and let u_R be the uniform distribution over S(H_R).
(For finite-dimensional H_b, tr_b is a finite sum and thus trivially commutes with E_µ.) So suppose that H_b has a countable ONB (|l⟩_b)_{l∈N}, and let |φ⟩_a ∈ H_a. Then ⟨φ|_a E_µ(tr_b |ψ⟩⟨ψ|)|φ⟩_a can be written as an integral over S(H).