Analysis of the Laplacian and Spectral Operators on the Vicsek Set

We study the spectral decomposition of the Laplacian on a family of fractals $\mathcal{VS}_n$ that includes the Vicsek set for $n=2$, extending earlier research on the Sierpinski Gasket. We implement an algorithm [24] for spectral decimation of eigenfunctions of the Laplacian, and explicitly compute these eigenfunctions and some of their properties. We give an algorithm for computing inner products of eigenfunctions. We explicitly compute solutions to the heat equation and wave equation with Neumann boundary conditions. We study gaps in the ratios of eigenvalues and eigenvalue clusters. We give an explicit formula for the Green's function on $\mathcal{VS}_n$. Finally, we explain how the spectrum of the Laplacian on $\mathcal{VS}_n$ converges as $n \to \infty$ to the spectrum of the Laplacian on two crossed lines (the limit of the sets $\mathcal{VS}_n$).

1. Introduction. Kigami [15] has developed a theory of Laplacians on a class of fractals called pcf self-similar fractals. One example, the Sierpinski gasket SG, has become the "poster child" for this theory [21] in the belief that it is the simplest nontrivial example. As a result, a lot of very concrete results have been obtained for SG. This paper extends some of these concepts and results to a different family of finitely ramified self-similar fractals, the Vicsek sets VS_n, with n = 2 corresponding to the Vicsek set VS. We also obtain results for VS that have no analogs on SG.
To review the standard theory, a pcf self-similar fractal V will be a compact set in the plane, defined as the limit of a sequence of graphs Γ_0, Γ_1, . . . with vertices V_0 ⊂ V_1 ⊂ · · · . The property of self-similarity takes the form of a family of mappings {F_i} from V to itself, which are contractive similarities and have the property that V = ∪_i F_i(V). For example, the Sierpinski Gasket is defined by three similarities, each of which sends the entire set SG to one of its three smaller triangular component copies. We refer to the graph at stage m of the approximation as the mth level graph approximation. The Vicsek set (specifically the second order Vicsek set VS_2, but sometimes simply called the Vicsek set) is the fractal defined by the similarities F_i : VS_2 → VS_2,

F_0(x) = x/3 + (2/3)(1/2, 1/2),
F_1(x) = x/3,
F_2(x) = x/3 + (2/3)(1, 0),
F_3(x) = x/3 + (2/3)(1, 1),
F_4(x) = x/3 + (2/3)(0, 1).

The first graph approximation Γ_0 is the complete graph on four vertices (that is, the vertices of a unit square with an edge connecting every pair of vertices). The next approximation Γ_1 consists of five miniature copies of Γ_0 arranged in an X shape with branches of length 2 (hence VS_2). Further graph approximations likewise consist of five copies of the previous level; they display finer levels of branching. Higher order Vicsek sets VS_n are similar, except that Γ_1 is an X-shaped graph consisting not of five but 4n − 3 copies of Γ_0, with arms of length n. Instead of five similarities, we have 4n − 3 similarities.
It is intuitive from the picture and also easy to demonstrate that as n → ∞, VS_n approaches the pair of crossed line segments between (0, 0) and (1, 1) and between (1, 0) and (0, 1). (That is, the maximum Euclidean distance of any point in VS_n from the crossed lines approaches zero.) This is important to note because it suggests a connection between fractal analysis on the Vicsek sets and classical analysis on the line; later in this paper we show that the spectrum of the (Neumann) Laplacian as defined on the Vicsek sets does, in fact, approach the spectrum of the classical Neumann Laplacian on the cross.
On VS_n, we can define a standard self-similar probability measure as follows: for each graph approximation, let ν_m be the probability measure which weights each vertex by its degree. Then the standard measure µ on VS_n is defined by µ = lim_{m→∞} ν_m. The renormalization factor for VS_n is 2n − 1, so the renormalized graph energy on Γ_m is

E_m(u) = (2n − 1)^m Σ_{x∼y} (u(x) − u(y))²,

and we can define the fractal energy E(u) = lim_{m→∞} E_m(u). We define dom E as the space of continuous functions with finite energy. Now we have the tools to define a fractal Laplacian. On dom E, E extends by the polarization formula to a bilinear form E(u, v) which defines an inner product in this space. If µ is the standard measure, we can define the Laplacian with a weak formulation: ∆u = f if f is continuous, u ∈ dom E, and

E(u, v) = −∫ f v dµ for all v ∈ dom_0 E,

where dom_0 E = {v ∈ dom E : v|_bdry = 0}. There is also a pointwise formula (which is proven to be equivalent in [21]) which, for nonboundary points x in VS_n, computes

∆u(x) = K lim_{m→∞} ((4n − 3)(2n − 1))^m ∆_m u(x)

with K a constant, and where ∆_m is a discrete Laplacian associated with the graph Γ_m, defined by ∆_m u(x) = (1/deg x) Σ_{y∼x} (u(y) − u(x)), for x not on the boundary.
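The discrete Laplacian ∆_m is straightforward to implement. As an illustration (a small Python toy of our own, not the paper's MATLAB code), here it is applied to Γ_0, the complete graph on four vertices; constants are annihilated, as they must be.

```python
# Minimal sketch of the discrete Laplacian Delta_m on a graph, applied to
# Gamma_0 for VS_n (the complete graph on four vertices).
def graph_laplacian(adj, u):
    """Delta_m u(x) = (1/deg x) * sum over neighbors y of (u(y) - u(x))."""
    return {x: sum(u[y] - u[x] for y in nbrs) / len(nbrs)
            for x, nbrs in adj.items()}

# Gamma_0: every pair of the four vertices is joined by an edge.
adj = {i: [j for j in range(4) if j != i] for i in range(4)}

print(graph_laplacian(adj, {i: 1.0 for i in range(4)}))  # constants: all zeros
print(graph_laplacian(adj, {0: 1.0, 1: 0.0, 2: 0.0, 3: 0.0}))
```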
The Laplacian satisfies the scaling property ∆(u ∘ F_i) = (4n − 3)(2n − 1)(∆u) ∘ F_i, and by iteration ∆(u ∘ F_w) = ((4n − 3)(2n − 1))^m (∆u) ∘ F_w for a word w of length m. In this paper, we restrict attention to the Laplacian defined with Neumann boundary conditions. The Neumann boundary conditions are "natural," in the sense that the weak formulation need only be modified to allow all v ∈ dom E, and the pointwise formulation is also valid at boundary points. It is also possible to define a normal derivative ∂_n u(q_i) at each boundary point, and the Neumann condition is ∂_n u(q_i) = 0. Moreover, there are infinitely many points in VS_n that have neighborhoods isometric to neighborhoods of boundary points; the Neumann boundary conditions treat the boundary points no differently from these equivalent points. (Note that this is not true on SG.) These are ample reasons to prefer Neumann to Dirichlet boundary conditions. An additional benefit is that the theory is considerably simpler.
The Laplacian on a fractal such as SG or VS_n has a discrete spectrum of nonnegative eigenvalues λ_0 < λ_1 < λ_2 < · · · , which can be computed explicitly by the method of spectral decimation developed by Fukushima and Shima, and applied to the Vicsek set in [23]. Spectral decimation is a method of relating eigenfunctions and eigenvalues from one graph approximation to a finer one. In Section 2, we describe the method and explicitly compute an algorithm for spectral decimation on VS_2, which allows us to numerically calculate eigenfunctions on the Vicsek set and observe patterns in the data.
Let {λ_j} denote the spectrum of the Laplacian, and let {u_j} denote an orthonormal basis of eigenfunctions. Then for any bounded function f, we can define the spectral operator f(−∆) on L²(VS_n) by

f(−∆)g = Σ_j f(λ_j) ⟨g, u_j⟩ u_j.

These operators include the fundamental solutions to the heat and wave equations, and solutions for other space-time equations. Because of the importance of spectral operators to classical analysis, understanding spectral operators and the Laplacian on VS is a key goal in the development of analysis on fractals.
In computing a spectral operator, we can group terms in the sum corresponding to the same eigenvalue, and write

f(−∆)g(x) = Σ_λ f(λ) P_λ g(x), where, at a given point x, P_λ(x, y) = Σ_j u_j(x) u_j(y),

{u_j} being an orthonormal basis of the λ-eigenspace E_λ. In Section 3 we show how, for certain special points x, we can simplify this sum to a single term. Fixing a point x on the boundary, or at the center, and letting E^x_λ denote the subspace of E_λ of functions vanishing at x, we can choose the orthonormal basis so that the first element u_1 is in (E^x_λ)^⊥ and the rest belong to E^x_λ. Then P_λ(x, y) = u_1(x) u_1(y).
Additionally, in Section 3 we prove a formula for the inner product of two eigenfunctions on a graph approximation, and show that it converges in the limit to the inner product on the Vicsek set. This ensures that functions which are orthogonal on graph approximations remain orthogonal on the Vicsek set, and makes it possible to compute P λ when x is a point on the boundary or at the center. Here we follow some of the ideas in [2].
In Section 4, we give some numerical data using our MATLAB algorithms for the eigenvalues and eigenfunctions of the Laplacian on VS_2 and VS_n. We also give data on the eigenvalue counting function N(x) and the Weyl ratio N(x)/x^α, for the appropriate power α.
In Section 5, we give numerical results for the heat kernel, the propagator for the wave equation, and the spectral projections onto the 0-series.
In Section 6, we show that each 0-series eigenfunction is determined by its restriction to the diagonal of the Vicsek set.
In Section 7, we prove, following [4], the existence of a ratio gap in the spectrum of the Laplacian. A ratio gap is an interval (a, b) such that the ratio of any two eigenvalues must fall outside the interval; this is a measure of the sparseness of the spectrum. Related results have been obtained in [14].
In Section 8, we show the existence of eigenvalue clusters; that is, arbitrarily many distinct eigenvalues in an arbitrarily small interval.
In Section 9, we calculate an explicit Green's function for the Laplacian on the Vicsek set.
In Section 10, we examine the convergence of eigenfunctions and eigenvalues of the Laplacian on VS n as n → ∞ and show that they approach the corresponding values for the Laplacian on the cross.
In Section 11 we establish some properties of the Weyl ratio on VS n that begin to explain the curious apparent convergence to a function that is unrelated to the Weyl ratio on the cross.
For more data and programs, refer to www.math.cornell.edu/~mhw33 ([6]).

It is possible to describe VS_n as the closure of a countable union of straight line segments; start with the two diagonals, and take all images under all iterates of {F_i}.
(Some images will be proper subsets of other line segments and should be deleted to eliminate redundancy.) We call this the skeleton of VS_n, SK(VS_n) = ∪_{j=1}^∞ I_j, where the line segments I_j intersect only at points. Since the skeleton is dense, any continuous function is uniquely determined by its restriction to the skeleton, but the skeleton is not all of the Vicsek set, since it has µ-measure zero.
Each line segment I_j has a simple one-dimensional energy E_j(u) = c ∫_{I_j} |u′(t)|² dt for the appropriate constant c. From this point of view, the energy form on VS_n is trivial. Because we combine the trivial energy with the unrelated measure µ, we obtain a nontrivial Laplacian. On the other hand, there is a natural measure on the skeleton: just take the sum of Lebesgue measure on each I_j. By the embedding of the skeleton in VS_n we may also regard this as a measure ν on VS_n. Of course it is not a finite measure, as the sum of the lengths of the line segments I_j diverges. It satisfies the self-similar identity ν = (2n − 1)^{−1} Σ_i ν ∘ F_i^{−1} (each similarity contracts lengths by the factor 2n − 1). There is good reason to consider ν as the universal energy measure on VS_n. If f ∈ dom E then we may define an associated energy measure ν_f with E(f, f) = ν_f(VS_n); roughly speaking, ν_f(A) is the contribution to E(f, f) coming from the set A, for any simple set A (for example, a finite union of cells). For each I_j consider the function f_j defined by f_j(s_j(t)) = t on I_j (with s_j an arclength parametrization of I_j), which is constant on every other interval that intersects I_j. Then f_j is harmonic at every point except the endpoints of I_j, and ν_{f_j} is exactly Lebesgue measure on I_j. So ν = Σ_{j=1}^∞ ν_{f_j}. We can also see that f = Σ_{j=1}^∞ f_j is a finite sum on each I_j and ν = ν_f, although f does not have finite energy. One can also show that ν_f ≪ ν for every function f ∈ dom E. This is the "universal" property of ν.
On SG one can define the Kusuoka measure ν = ν_{h_1} + ν_{h_2}, where {h_1, h_2} is an orthonormal basis of global harmonic functions (modulo constants) in the energy norm, and this serves as a universal energy measure. A similar approach would not work on VS_n, since it would produce a measure supported on the two diagonals alone.
It is possible to define an energy Laplacian on VS n using the energy E and the energy measure ν in place of µ, although there are some technical problems because ν is not finite. Such a Laplacian would be rather "trivial", since it would amount to the second derivative along each line segment I j , together with matching conditions on first derivatives at points of intersection. We will not consider this Laplacian further in this paper.
We hope that this paper makes a strong case that the Vicsek sets deserve to be considered the simplest nontrivial examples of pcf self-similar fractals. There are two sides to this statement. The first is that the analysis is nontrivial. Indeed, if you just restrict attention to harmonic functions on VS n , the theory is basically trivial: these are just linear functions on each of the arms of VS n that are constant on all trees that attach to an arm. But the graphs we have obtained for eigenfunctions of the Laplacian reveal that these are nontrivial functions.
The other side of our assertion is that VS_n is simpler than SG. The expression for the Green's function and the numerical data for solutions of the wave equation are good a posteriori evidence for this. We can also point to two structural features that can be considered a priori evidence. The first is topological: VS_n is contractible, while SG has infinite dimensional homology. Indeed, the cycles in SG play a role in the description of the structure of some of the eigenspaces of the Laplacian (the 5-series in the terminology of [21]). The second relates to symmetry: while SG only has a 6-element symmetry group, VS_n has an infinite symmetry group. Indeed this group is a semidirect product of one copy of S_4 and infinitely many copies of S_3 and S_2. (S_k denotes the permutation group on k letters.) The S_4 symmetries are the permutations of the 4 arms, which fix the center point q_0. For any cell F_w V with center point F_w q_0, with w_m ≠ 0, there will be either S_2 or S_3 symmetries permuting 2 or 3 of the arms of the cell, depending on whether the cell F_w V has 2 or 1 neighboring cells (the permutable arms are the ones with no neighbors).

2. Spectral decimation. The method of spectral decimation was invented by Fukushima and Shima [11] for SG to relate eigenfunctions and eigenvalues on the graph approximations to each other and to the eigenfunctions and eigenvalues on SG.
In essence, an eigenfunction on Γ_m with eigenvalue λ_m can be extended to an eigenfunction on Γ_{m+1} with eigenvalue λ_{m+1}, where λ_m = R(λ_{m+1}) for an explicit function R, except for certain specified forbidden eigenvalues, and all eigenfunctions on SG arise as limits of this process starting at some level m. This is true regardless of the boundary conditions, but if we specify Dirichlet or Neumann boundary conditions we can describe explicitly all eigenspaces and their multiplicities. This method was extended to the Vicsek sets by Zhou [23].
We describe the procedure briefly here. First, there is a local extension algorithm that shows how to uniquely extend an eigenfunction u defined on V_m to a function defined on V_{m+1} such that the λ-eigenvalue equations hold at all points of V_{m+1} \ V_m. Then there is a rational function R(λ) such that if u satisfies a λ_m-eigenvalue equation on V_m, then the extended function will satisfy the λ_{m+1}-eigenvalue equation on V_{m+1} if λ_m = R(λ_{m+1}) and λ_{m+1} is not a forbidden eigenvalue. (Forbidden eigenvalues are singularities of the spectral decimation function R. It is "forbidden" to decimate to a forbidden eigenvalue. Because forbidden eigenvalues have no predecessor, that is, there is no λ_{m−1} corresponding to λ_m, we speak of forbidden eigenvalues being "born" at a level of approximation m.) We have the following theorem from [23].

Theorem 2.1. There are polynomials f_n and g_n, defined in terms of the Chebyshev polynomials T_n and U_n of the first and second kind, such that the spectral decimation function R is a polynomial of degree 2n − 1 expressed through f_n and g_n. Moreover, the forbidden eigenvalues are 4/3 and the zeroes of f_n and g_n.
We also have a matrix equation for the eigenfunction extension formula: if u|_{V_0} is the vector of values of u on V_0 and u|_{V_1\V_0} is defined analogously, then u|_{V_1\V_0} is obtained from u|_{V_0} by an explicit matrix depending on λ_1, built from J, X, and M, where J is the V_0 × (V_1 \ V_0) adjacency matrix, X is the adjacency matrix for V_1 \ V_0 with the degrees of each vertex as its diagonal entries, and M is a diagonal matrix with M_ii = −X_ii. Multiplying this matrix by the values of u on any k-cell (with λ_1 replaced by λ_{k+1}), we similarly get the values of u on the (k + 1)-cells contained in that k-cell.
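The extension step amounts to a small linear solve. The following Python sketch is our own illustration (not the paper's code): rather than assembling the J, X, M matrices above, it derives the linear system directly from the eigenvalue equation for ∆_1 on the graph Γ_1 of VS_2, built here with the unit square scaled to [0, 3]² so all vertices have integer coordinates.

```python
import numpy as np
from itertools import combinations

# Gamma_1 for VS_2: five 1-cells (complete graphs on the four corners of
# five small squares) arranged in an X inside [0,3]^2.
offsets = [(0, 0), (2, 0), (2, 2), (0, 2), (1, 1)]
cells = [[(ox + dx, oy + dy) for dx, dy in [(0, 0), (1, 0), (1, 1), (0, 1)]]
         for ox, oy in offsets]
verts = sorted(set(v for c in cells for v in c))
idx = {v: i for i, v in enumerate(verts)}
N = len(verts)                                   # 16 vertices

A = np.zeros((N, N))
for c in cells:
    for p, q in combinations(c, 2):
        A[idx[p], idx[q]] = A[idx[q], idx[p]] = 1.0
deg = A.sum(axis=1)                              # 6 at junctions, 3 elsewhere

Bnd = [idx[v] for v in [(0, 0), (3, 0), (3, 3), (0, 3)]]     # V_0
Int = [i for i in range(N) if i not in Bnd]                  # V_1 \ V_0

def extend(u0, lam):
    """Extend boundary values u0 to V_1 \\ V_0 so that the eigenvalue
    equation  sum_{y~x} u(y) = deg(x)(1 - lam) u(x)  holds at every
    interior vertex x (the lam-eigenvalue equation for Delta_1)."""
    M = np.diag(deg[Int] * (1 - lam)) - A[np.ix_(Int, Int)]
    u = np.zeros(N)
    u[Bnd] = u0
    u[Int] = np.linalg.solve(M, A[np.ix_(Int, Bnd)] @ u0)
    return u

print(np.allclose(extend(np.ones(4), 0.0), 1.0))  # harmonic extension of 1 is 1 -> True
```

By construction the extension satisfies the eigenvalue equation at all interior points; the matrix becomes singular precisely at the values of λ where extension fails.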
In the case of VS_2, we have R(λ) = 36λ³ − 48λ² + 15λ. The forbidden eigenvalues are 0, 1/2, 4/3, and (7 ± √17)/12. There is a 0-eigenvalue born at level 0, and a 4/3-eigenvalue born at every level thereafter, and continued eigenvalues are formed by successively choosing one of the three inverse functions of R (see Figure 3), so long as this does not lead to a forbidden eigenvalue. Using the labeling system described in Figure 4, the matrix which allows us to continue eigenfunctions is given by (2.1), whose entries involve the quantities γ, a, b, c, and d as rational functions of λ. (Note that the only roots of −3/γ = (1 − 2λ)f(λ) are forbidden eigenvalues, so γ is well-defined as long as λ is not forbidden.) We denote by the 4/3-series those eigenvalues continued from a 4/3-eigenvalue, and by the 0-series those continued from the 0-eigenvalue. To find λ_m from λ_{m−1} we have to invert R; in the case of VS_2, there are three inverses, shown in Figure 3. Note that for the sequence 15^m λ_m to converge to an eigenvalue λ on VS, we need λ_m to approach zero, so we must choose the smallest of the three inverses all but finitely many times.
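For VS_2 these quantities are concrete enough to compute directly. A minimal Python sketch, assuming only the formula R(λ) = 36λ³ − 48λ² + 15λ above: it checks that a 4/3-eigenvalue decimates down from R(4/3) = 20, and follows the renormalized sequence 15^m λ_m obtained by repeatedly applying the smallest inverse branch of R.

```python
def R(lam):
    # Spectral decimation polynomial for VS_2.
    return 36 * lam**3 - 48 * lam**2 + 15 * lam

def phi1(y):
    """Smallest inverse branch of R, by bisection on [0, 0.2], part of the
    first interval on which R is increasing (R(0.2) = 1.368 > 4/3)."""
    lo, hi = 0.0, 0.2
    for _ in range(200):
        mid = (lo + hi) / 2
        if R(mid) < y:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

print(R(4/3))   # 20, up to rounding

# Renormalized decimation sequence 15^m * lam_m starting from a 4/3-eigenvalue.
lam, vals = 4/3, []
for m in range(1, 25):
    lam = phi1(lam)
    vals.append(15**m * lam)
print(vals[-1])  # the sequence stabilizes, approximating an eigenvalue on VS_2
```

Since λ_m → 0 and R(t) = 15t + O(t²) near 0, the rescaled sequence stabilizes rapidly.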
A proof in [23] guarantees that spectral decimation produces all possible eigenvalues and eigenfunctions (up to linear combination), so this formula allows us to explicitly determine the values of eigenfunctions at arbitrarily high graph approximations. We make several observations from numerical calculation of the eigenfunctions (see Section 6). One is that the restrictions of certain eigenfunctions to the diagonal (the segment in R² between (0, 0) and (1, 1)) are periodic with period proportional to 1/m and approximate sine functions; this suggests that the higher Vicsek sets VS_n, as they converge to a cross, will have eigenfunctions approaching the sine and cosine functions of the classical case. We will prove this fact in Section 10.
Secondly, we observe that for the 0-series eigenfunctions, choosing the smallest inverse function of R first means that λ_1 = 0, so the eigenfunction is extended to be constant on V_1. On each of the five 1-cells, we start as before, with the eigenfunction having a value of 1 on all boundary points; so the eigenfunction is miniaturized into identical copies at each graph approximation, and the eigenvalue is multiplied by 15. The same thing happens for any number of initial choices of the smallest inverse function.

We next describe the structure of the spectrum of the Neumann Laplacian on VS_n in complete detail. Let φ_1, φ_2, . . . , φ_{2n−1} denote the inverse functions of the polynomial R in Theorem 2.1, in increasing order. We note that φ_j is an increasing function when j is odd and a decreasing function when j is even. We write ρ_n = (4n − 3)(2n − 1) for the Laplacian renormalization factor, and 0 = λ_0 < λ_1 < λ_2 < · · · for the distinct eigenvalues. The spectral decimation rules are summarized as follows:

(i) Each eigenvalue has the form λ = lim_{m→∞} ρ_n^m λ_m, where either λ_0 = 0 and λ_m = φ_{w_m}(λ_{m−1}) for all m ≥ 1, or λ_k = 4/3 and λ_m = φ_{w_m}(λ_{m−1}) for all m > k; in the first case the eigenvalue is in the 0-series and in the second case it is in the 4/3-series and born on level k.
(ii) All but a finite number of the w_m are equal to 1.
(iii) For the 0-series, the first w_j with w_j ≠ 1 must be an odd number; for the 4/3-series, w_1 must be an odd number but w_1 ≠ 2n − 1.
(iv) The multiplicity of each 0-series eigenvalue is 1, while the multiplicity of each 4/3-series eigenvalue born on level k is 2(4n − 3)^k + 1.
Condition (ii) is required in order that the limits in (i) exist. Let m_0 denote the largest value of m for which w_m ≠ 1 (if this never happens, let m_0 = 0). Then we can rewrite the limits in (i) in terms of a single function ψ_n defined by ψ_n(t) = lim_{m→∞} ρ_n^m φ_1^{∘m}(t) (where φ_1^{∘m} denotes the m-fold composition of φ_1). This limit exists because the Taylor expansion of R(t) about t = 0 is ρ_n t + O(t²), so the Taylor expansion of φ_1(t) about t = 0 is ρ_n^{−1} t + O(t²). Then (i) says the eigenvalues are of the form ρ_n^{m_0} ψ_n(λ_{m_0}). Condition (iii) spells out explicitly the rules for avoiding forbidden eigenvalues. We may explain the multiplicities in (iv) as follows. To satisfy the 4/3-eigenvalue equation on level k we may assign initial values at the points in V_k so that the sum of the values on the four boundary points of every k-cell is 0. This gives a space of dimension #{V_k} − #{k-cells}, and it is easy to see that #{V_k} − #{k-cells} = 2(4n − 3)^k + 1.

Theorem 2.2. Eigenvalues in the 0-series and 4/3-series alternate: λ_j is 0-series for j even and 4/3-series for j odd. More precisely, the spectrum consists of an initial segment of length 2n followed by segments of length 4n − 2. In each segment all the 4/3-series eigenvalues are born on level 0 (hence have multiplicity 3) except the last one.
Inductively, we define Σ_k to be the sequence of level-k eigenvalues. Then {0, ρ_n^k ψ_n(Σ_k)} is an initial segment of the spectrum, and after {0, ρ_n ψ_n(Σ_1)} it breaks up into segments of length 4n − 2 with 0-series and 4/3-series eigenvalues alternating, and all but the last 4/3-series eigenvalues are born on level 0.
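The dimension count #{V_k} − #{k-cells} behind the multiplicities can be checked combinatorially. A small Python sketch, assuming that the 4n − 3 subcells of each cell form a tree (so each subdivision of a cell creates 4n − 4 new junction points, each shared by exactly two cells):

```python
def cells(n, k):
    # Number of k-cells in VS_n: each cell subdivides into 4n - 3 subcells.
    return (4 * n - 3) ** k

def vertices(n, k):
    """Number of vertices in V_k.  Each k-cell has 4 corners; junctions are
    shared by exactly two cells, and the tree structure of subdivisions
    gives (4n-3)^k - 1 junctions in total."""
    junctions = cells(n, k) - 1
    return 4 * cells(n, k) - junctions

# Dimension count behind the multiplicity 2(4n-3)^k + 1 in (iv):
for n in (2, 3, 5):
    for k in (0, 1, 2, 3):
        assert vertices(n, k) - cells(n, k) == 2 * (4 * n - 3) ** k + 1
print("ok")
```

For example, for VS_2 at level 1 this gives 16 vertices, 5 cells, and multiplicity 11.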

2.1. Scaling inner products.
In order to find an orthonormal basis for eigenspaces, we have to relate the graph inner product ⟨f, g⟩_m to the inner product on the next graph approximation, ⟨f, g⟩_{m+1}. This is necessary because we need to compute the inner product exactly, and we would like to be able to show that functions orthogonal on one graph approximation will remain orthogonal when spectrally decimated to higher levels. We now prove, as [17] does for the Sierpinski Gasket, a multiplicative formula for ⟨f, g⟩_{m+1} in terms of ⟨f, g⟩_m and the current discrete eigenvalue λ_m.
Theorem. The products relating successive graph inner products converge, and in the limit give the inner product on VS_2, for u and v eigenfunctions born on level 0 with the same eigenvalue.

Proof. On a graph approximation of the Vicsek set, we call two points neighbors if they are connected by an edge. All points have either three or six neighbors. We define junction points to be those with six neighbors, and non-junction points to be those with three neighbors. For simplicity we take u = v, as the general case is essentially the same. The graph inner product of two functions on the graph approximation V_m is defined by summing over the cells F_w of level m, where each w is a "word," that is, a string of numbers corresponding to the five similarities that define VS_2, and where we need to multiply by 1/4 so that ⟨1, 1⟩_m = 1; this makes the limit µ a probability measure. At each graph approximation, these similarities map two distinct points to the junction points, and only one point to the boundary points, so we account for double-counting as follows. Fix an (m − 1)-cell C and let u_1, u_2, u_3, u_4 be the values of u on its boundary. Using the extension formula (2.1) we compute the contribution to ‖u‖²_{m−1} due to C, and applying this to all (m − 1)-cells we obtain a sum over V_{m−1} as in (2.3). To deal with the cross-terms in (2.3) we apply the Gauss-Green formula, and combining this with (2.3) we obtain (2.4). Simplifying using the values for γ, a, b, c, d, and λ_{m−1} in terms of λ_m, we get the normalization formula relating ‖u‖²_m to ‖u‖²_{m−1}. This allows us to compute the norm of a function on the Vicsek set at any graph approximation, and, in the limit, on the Vicsek set itself.

2.2. Center values.
It is also useful to have a formula for the value of an eigenfunction at the center q_0 of VS_2. Using (2.1) to continue a function u on V_0 to V_1, we see that the values u_6, u_10, u_11, and u_15 are related to the values of u on V_0 through the entries d, b, and γ of the extension matrix. Substituting for d, b, and γ and continuing this process, we obtain the value of u at q_0 as a multiple of u_1 + u_2 + u_3 + u_4. In particular, since any 4/3-series eigenfunction satisfies u_1 + u_2 + u_3 + u_4 = 0, all 4/3-series eigenfunctions vanish at q_0.
3. Spectral projections at boundary points. We would like to be able to solve differential equations such as the wave equation by expanding in eigenfunctions,

u(t, x) = Σ_j S(t, λ_j) (∫ f u_j dµ) u_j(x),

where S depends on the equation and the expression in parentheses is a Fourier coefficient. Usually the sum and integral can be interchanged to yield

u(t, x) = ∫ K_t(x, y) f(y) dµ(y) with K_t(x, y) = Σ_j S(t, λ_j) u_j(x) u_j(y),

where f is defined by the initial conditions; for instance S(t, λ) = e^{−tλ} for the heat equation and S(t, λ) = cos(t√λ) for the wave equation. We can get a better understanding of the projection kernels K_t(x, y) when we restrict one of the variables to specific boundary points. Suppose y = q_i, i = 1, 2, 3, 4, and note that E^0_λ = {u ∈ E_λ : u(q_i) = 0} has codimension 1. We can compute a normalized function u^λ_0 defined to be perpendicular to this space. In that case we can simplify: P_λ(x, q_i) = u^λ_0(x) u^λ_0(q_i). If λ is a 4/3-series eigenvalue born on some level m_0 ≥ 1, there is an easy characterization of u^λ_0; spectral decimation works "in reverse," i.e. u^λ_0 is an eigenfunction of ∆_{m_0−1} with eigenvalue R(4/3) = 20. We can then continue spectral decimation back to all levels < m_0 because we will never encounter a forbidden eigenvalue.
Theorem 3.1. If λ is a 4/3-series eigenvalue born on level m_0 ≥ 1, then u^λ_0 is an eigenfunction of ∆_{m_0−1} with eigenvalue 20.

Proof. Fix a point x ∈ V_{m_0−1}. First assume x is part of only a single 1-cell in Γ_{m_0}, and let y_1, y_2, y_3 be the other boundary points of that cell. Then the function v shown in Figure 5a is a 4/3-series eigenfunction born on level m_0 (this is easy to see since the sum around any small square is 0). Since u^λ_0 is a 4/3-series eigenfunction born on level m_0, we also know that the sum around any small square in Γ_{m_0} is 0. Taking a linear combination of these equations, with coefficients given by Figure 5b, and recalling that the inner product weights the center 4 vertices by 2, we see that ⟨u^λ_0, w⟩_{m_0} = 0, where w is given by Figure 5c. Writing ⟨u^λ_0, v + w⟩_{m_0} = 0, we obtain the eigenvalue equation for u^λ_0 at x with eigenvalue 20. A similar argument works when x is instead part of two 1-cells in Γ_{m_0−1}, with the function in Figure 6a playing the role of v and the one in Figure 6b playing the role of w.
Another special case occurs when we fix y = q_0, where q_0 is the center point of the Vicsek set. At q_0, all the eigenfunctions associated with 4/3-series eigenvalues are equal to zero (see Section 2.2). This is a fortunate result because, in calculating the projection kernel at q_0, all the terms from the 4/3-series contribute zero, so we only need to consider the eigenfunctions associated with 0-series eigenvalues, and these form a one-dimensional vector space.

4. Numerical data for eigenvalues and eigenfunctions. Using our implementation of spectral decimation on VS_n, we can compute the eigenvalues of the graph Laplacians ∆_m on the graph approximations Γ_m. By repeatedly applying the smallest inverse of the spectral decimation function R, we can use these to approximate the eigenvalues λ_i of the standard Laplacian ∆. Figure 7 shows plots of the eigenvalue counting function N(x) = #{i : λ_i ≤ x}. Since the eigenvalue counting function grows as x^α, where α = log(4n−3)/log((4n−3)(2n−1)), it is also useful to look at the Weyl ratio N(x)/x^α, shown in Figure 8. For each n, these functions are asymptotically periodic as a function of log x, as predicted in [16]. What is rather striking and somewhat mysterious is that there appears to be a convergence as n → ∞, after appropriate rescaling. We will attempt to explain some of this in Section 11.
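The exponent α is explicit and easy to tabulate. A small Python sketch (our own aside: as n grows, α decreases toward 1/2, the exponent of the eigenvalue counting function on an interval, in keeping with the convergence to the cross discussed in Section 10):

```python
import math

def alpha(n):
    # Exponent of the eigenvalue counting function on VS_n:
    # N(x) ~ x^alpha with alpha = log(4n - 3) / log((4n - 3)(2n - 1)).
    return math.log(4 * n - 3) / math.log((4 * n - 3) * (2 * n - 1))

for n in (2, 4, 8, 16, 10**6):
    print(n, alpha(n))
```

For VS_2 this gives α = log 5 / log 15 ≈ 0.594.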
We can also compute eigenfunctions of the graph Laplacians. Figure 9 shows 0-series eigenfunctions and their restrictions to the diagonal, and Figure 10 shows the same for some 4/3-series eigenfunctions. The eigenfunctions in the diagonal plots have been continued with the lowest inverse several times to increase the number of data points. For n > 2, our implementation can only compute eigenfunctions restricted to the diagonal. Figures 11 and 12 show these plots for VS_8. There is more data on the website [6].
We observe from the data a phenomenon known as miniaturization [3]. Taking a 0-series eigenfunction on the mth level approximation to VS 2 , if the function is continued by spectral decimation to the (m + 1)th level of approximation, the new eigenfunction is composed of 5 copies of the previous one; it is "miniaturized." Thus, eigenfunctions of higher eigenvalue are composed of copies of eigenfunctions of lower eigenvalue.
5. The heat equation and wave equation. We begin with the heat equation ∂u/∂t = ∆u with initial data u(0, x) = f(x). Formally this is solved by u(t, x) = e^{t∆} f(x), and since the Laplacian has a discrete spectrum with an orthonormal basis {u_j} of eigenfunctions, −∆u_j = λ_j u_j, the solution to the heat equation is

u(t, x) = Σ_j e^{−λ_j t} (∫ f u_j dµ) u_j(x).

Usually the sum and integral can be interchanged to yield u(t, x) = ∫ h(t, x, y) f(y) dµ(y), where h is defined to be

h(t, x, y) = Σ_j e^{−λ_j t} u_j(x) u_j(y)

and called the heat kernel. From the eigenvalues and eigenfunctions we can construct the heat kernel on the standard Vicsek set. This is especially easy when one of the arguments is the center point of the Vicsek set, since then we only need to consider 0-series eigenfunctions. Plots of the heat kernel on the m = 4 approximating graph are shown in Figure 13.
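The eigenfunction expansion for h is easy to realize on any finite graph. A minimal Python sketch on a toy graph (a cycle standing in for a genuine Γ_m; an illustration of ours, not the paper's code), using the degree-weighted probability measure as in the definition of ν_m:

```python
import numpy as np

# Toy stand-in for a graph approximation: a cycle on N vertices with the
# degree-weighted probability measure mu.
N = 12
A = np.zeros((N, N))
for i in range(N):
    A[i, (i + 1) % N] = A[(i + 1) % N, i] = 1.0
deg = A.sum(axis=1)
w = deg / deg.sum()                      # mu({x})

# -Delta = I - D^{-1} A; symmetrize with D^{1/2} so that eigh yields
# eigenfunctions orthonormal in the weighted inner product sum_x w_x u v.
S = np.diag(deg**-0.5) @ (np.diag(deg) - A) @ np.diag(deg**-0.5)
lam, V = np.linalg.eigh(S)
U = np.diag(deg**-0.5) @ V * np.sqrt(deg.sum())

def heat_kernel(t):
    # h(t, x, y) = sum_j exp(-lam_j t) u_j(x) u_j(y)
    return (U * np.exp(-lam * t)) @ U.T

H = heat_kernel(0.5)
print(np.allclose(H @ w, 1.0))  # h integrates to 1 against mu  -> True
```

The weighted row sums equal 1 for every t because λ_0 = 0 with constant eigenfunction, mirroring conservation of heat under Neumann conditions.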
Our data allows us to examine the behavior of the heat kernel h(t, q_0, x) in greater detail. Estimates for the heat kernel are known, but they involve constants of unknown size. It is expected that h(t, q_0, x) should involve a factor of t^{−α} multiplying a term that drops off exponentially as x moves away from q_0. Since the data suggest that the t^{−α} factor is modified by an oscillating factor, we look at the ratio h(t, q_0, x)/h(t, q_0, q_0) = H(t, x). Actually, it seems more plausible that h(t, q_0, x)/(h(t, q_0, q_0) h(t, x, x))^{1/2} will be better behaved than H(t, x), but since we don't know how to compute h(t, x, x) effectively, this isn't an option. Note that H(t, x) is normalized so that H(t, q_0) = 1. Also, if we ignore the influence of the boundary, which is certainly very slight for small t, we expect H(t/15, F_0 x) to be very close to H(t, x). Figure 14 illustrates this invariance property. First we look at the behavior of H(t, x) for x restricted to the diagonal. Figure 15 shows some typical graphs. We also look at − log H(t, x), again shown in Figure 15. Since − log H(t, x) vanishes at x = q_0, we try to fit a power law − log H(t, x) ≈ a|x|^b, where the constants a and b depend on t and |x| denotes the distance to q_0. However, we find that the power b varies significantly as we vary the neighborhood of q_0 where we do the fit. This leads us to doubt the power law model. Figure 15 shows a log-log plot of − log H(t, x) for some choices of t.
There is no compelling reason to restrict x to the diagonal in studying the heat kernel. In a crude sense, the heat kernel h(t, q_0, x) should depend on the distance of x to q_0 in the resistance metric, which coincides with geodesic distance in VS_2. But in fact, this is not very accurate. What we want to look at are what might be called "heatballs", sets of the form {x : h(t, q_0, x) ≥ s} for different choices of t and s. A naive guess would be that the heatballs form a 1-parameter family of sets, at least if we stay toward the center of VS_2, where the influence of the boundary is small. Again this is only valid in a crude sense. Figure 16 shows some examples of heatballs for two different choices of t and a variety of s-values. One observation is that heatballs tend to spread further in directions perpendicular to the diagonal. Decreasing the value of s increases the size of the heatballs, so we may imagine that the heatballs for fixed t represent an "invasion" that spreads out from the center point q_0. By and large the invasion follows an orderly pattern, with cells that lie on the diagonal being invaded first at the point closest to q_0. However, there are examples where the invasion jumps around, and this produces examples of heatballs that are disconnected. Apparently, disconnected heatballs may also occur in the setting of manifolds [13]. Of course, it is also possible to study invasions with s fixed and t increasing.
The trace of the heat kernel and its value at the center, when multiplied by t^α, are both periodic in log t (see [12]). This is shown in Figures 17 and 18 on the m = 7 graph approximation. The approximate sinusoidal behavior is explained for the trace in [2], and at the center in [12]. We note that the approximate sines are out of phase: fitting to a + b sin(c log t + d) we get a = .90, b = .045, and c = 2.33.

We next turn to the wave equation. Its solution with initial data f may be written u(t, x) = ∫ W(t, x, y) f(y) dµ(y), where the wave propagator W(t, x, y) is given by

W(t, x, y) = Σ_j cos(√λ_j t) u_j(x) u_j(y).

From the eigenvalues and eigenfunctions we can also construct the wave propagator on the standard Vicsek set. As with the heat kernel, this is easiest to compute when one of the arguments is the center point of the Vicsek set, since then we only need to consider 0-series eigenfunctions. This is shown in Figure 19.
As already observed in the case of SG in [7] the wave propagator W (t, q 0 , x) is not supported in a small neighborhood of q 0 for fixed t; in other words, waves propagate at infinite speed. This is easily explained because the differential operators on either side of the wave equation do not have the same order. However, the amount of energy that propagates at high speed is relatively small. So we can expect a weak substitute for finite propagation speed. Attempts to understand this in [7] and [5] were stymied by the complexity of the wave propagator on SG (in [2] it was shown that time integrals of the wave propagator are computationally tamer on SG, but this did not help with a weak finite propagation speed).
On VS_2 the wave propagator at the center point may be effectively computed. In particular, when we increase the level of approximation the graph does not change appreciably: Figure 20 shows L^2 distances between w_m(t, q_0, ·) and w_{m−1}(t, q_0, ·), where w_m is the level m approximation to the wave propagator. In Figure 19 we display the graphs for some values of t. Unlike the heat kernel, the wave propagator is not known to be positive, and indeed we see times where negative values occur. We know ∫ W(t, q_0, x) dµ(x) = t, so positive values predominate, and it seems from the data that ∫ |W(t, q_0, x)| dµ(x) is bounded by a multiple of t. Recall that in Euclidean space, the singularity of the wave propagator worsens as the dimension increases. Our data is more in line with the n = 1 case.
Our data strongly suggests an approximate finite propagation speed. We can quantify this by choosing a small cutoff ε and looking for the maximum value of |x| where |w m (t, q 0 , x)| ≥ ε for fixed t, and then letting t vary. In Figure 21 we show plots of this function, both in the case when x is restricted to the diagonal and in the case where x varies over all VS 2 , for different choices of ε. Notice that in both cases the slope of the function increases with ε.
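The front-extraction procedure just described (the largest |x| with |w| ≥ ε, for each t) can be sketched generically; the sampling grid and the exact-front toy example are ours, not the VS_2 data:

```python
def front_radius(w, dist, eps):
    """Largest distance |x| (from the center) at which |w(x)| >= eps,
    given samples w[j] at points with distances dist[j]; 0.0 if none."""
    return max((d for v, d in zip(w, dist) if abs(v) >= eps), default=0.0)

# synthetic example: an exact unit-speed front |x| <= t with amplitude 1/2,
# so the recovered radius should equal t for any cutoff eps < 1/2
def w_exact(t, d):
    return 0.5 if d <= t else 0.0

dist = [k / 100 for k in range(101)]   # sample points on [0, 1]
radii = [front_radius([w_exact(t, d) for d in dist], dist, 0.25)
         for t in [0.1, 0.5, 0.9]]
```

Plotting `front_radius` against t for several values of ε reproduces the kind of curves shown in Figure 21.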

Spectral projections.
Another important class of spectral operators are the spectral projections. Let Λ be a subset (usually infinite) of the spectrum, and define P_Λ f = Σ_{λ∈Λ} Σ_j ⟨f, u_j⟩ u_j, where for each λ ∈ Λ, {u_j} is an orthonormal basis of the λ-eigenspace. Such operators are always bounded on L^2 (with norm 1) and usually not bounded on L^1 or L^∞. A natural question to ask is: under what conditions is P_Λ bounded on L^p for 1 < p < ∞?
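In matrix form the definition reads as follows; this is a minimal sketch, with a path-graph Laplacian standing in for the fractal one:

```python
import numpy as np

def spectral_projection(L, in_Lambda):
    """Matrix of P_Lambda for a symmetric matrix L: the sum of orthogonal
    projections onto the eigenspaces whose eigenvalue lies in Lambda."""
    lam, U = np.linalg.eigh(L)
    mask = np.array([1.0 if in_Lambda(l) else 0.0 for l in lam])
    return (U * mask) @ U.T

# toy example: project onto the low part of the spectrum of a Neumann
# path-graph Laplacian (eigenvalues 2(1 - cos(k*pi/6)), k = 0..5)
n = 6
L = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
L[0, 0] = L[-1, -1] = 1.0
P = spectral_projection(L, lambda l: l < 0.9)
```

The L^2 statements are automatic (P is a self-adjoint idempotent, so its operator norm is 1); the interesting question in the text concerns the L^1 and L^∞ norms of such kernels.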
In the classical setting such results can be obtained from the Marcinkiewicz multiplier theorem [19], and we expect that analogous results should be valid in the fractal setting, perhaps related to the transplantation theorems of [9] and [18]. We note that the results of [20] imply that we can always "segment" such problems; we write Λ_k = Λ ∩ [0, N_k] for a natural sequence of cutoffs N_k that lie at the beginning of spectral gaps (in our case we take spectral decimation through level k). Then P_Λ is bounded on L^p if and only if P_{Λ_k} is uniformly (in k) bounded on L^p. In [2] we looked at some spectral projections on SG, but it was difficult to arrive at meaningful predictions because of the computational complexity of the data. Here we are able to examine one example in detail: the case that Λ consists of the 0-series eigenvalues. Because these eigenvalues all have multiplicity one, it is straightforward to compute the kernels K_k(x, y) of the segmented projection operators P_{Λ_k} for k ≤ 5 on VS_2. We make a few simple observations. The first is that ∫ K_k(x, y) dµ(y) = 1 for every x.
This follows from the fact that the constant 1 is in the 0-series, and every other 0-series eigenfunction is orthogonal to it. The second is that K_k(Φx, Φy) = K_k(x, y), where Φ is any isometry of VS_2. This is an immediate consequence of the fact that each 0-series eigenfunction is invariant under Φ (if u is a 0-series eigenfunction then so is u ∘ Φ, with the same eigenvalue, and the multiplicities are all one). This argument does not extend beyond the 0-series spectrum; it is easy to construct 4/3-series eigenfunctions (on a higher level) for which this invariance fails. We examine the behavior of ∫ |K_k(x, y)| dµ(y) as a function of k. Table 1 shows the maximum over x for k ≤ 5. This is overwhelming evidence that this maximum tends to infinity as k → ∞, which implies that P_Λ is not bounded on L^1 or L^∞. Next we ask, for fixed x, what are the y values where |K_k(x, y)| is large? Looking at the graphs of K_k(x, ·) in Figure 22 we see evidence that the answer is the values of y that are close to Φ(x) for some isometry Φ. Note that for some choices of x, the set of all Φ(x) is finite, but for other choices it may be infinite. (For example, if x is a boundary point, then it is a dense subset of a Cantor set that includes the intersection of VS_2 with the boundary of the unit square.) In Figure 23 we show the restriction to the diagonal of K_k(x, ·) when x is the junction point between two 1-cells, for 3 ≤ k ≤ 6. The behavior is certainly more complicated than the kernels in the standard Calderón-Zygmund theory. On the other hand, the graphs do appear to be converging to some limiting shape. It would be interesting to make this statement more precise, and to investigate whether there is L^p boundedness of P_Λ for some values of p in 1 < p < ∞ other than p = 2.
6. Diagonals and the 0-series. We can write L 2 (VS) = H 0 ⊕ H 4/3 where H 0 represents the eigenfunctions associated with the 0-series, and H 4/3 represents those associated with the 4/3-series. These are orthogonal because the eigenvalues are distinct.
Theorem 6.1. Each 0-series eigenfunction of the Laplacian on the Vicsek Set is determined by its restriction to the diagonal.
Proof. Working with the fractal Laplacian, we can view VS as the union of the diagonals with little trees T attached, each tree a small copy of 1/4 VS, one arm of the Vicsek set. The restriction u|_T satisfies −∆u = λu, with ∂_n u = 0 at the outer boundary, and u(q_0) is a specified value if q_0 is the center point, because the center point lies on the diagonal.
Let v_λ denote the function on 1/4 VS that satisfies −∆v_λ = λv_λ, ∂_n v_λ(q) = 0 if q is a boundary point, and v_λ(q_0) = 1. To show existence and uniqueness, we have to show that −∆u = λu on 1/4 VS, ∂_n u = 0 at the boundary, and u(q_0) = 0 imply that u must be identically zero. Indeed, given such a function u, extend it by odd reflection across the center to the opposite arm of the Vicsek set, and set it identically zero on the other two arms. Then we obtain a global eigenfunction satisfying u(q_j) = 0 for the boundary points q_j, so it belongs to the 4/3-series. But λ is a 0-series eigenvalue, and by spectral decimation, there are no simultaneous 0-series and 4/3-series eigenvalues; the only way out is if u = 0. Now let T denote any tree of level m that attaches to the diagonal at y. Then there exists a similarity ψ_T : T → 1/4 VS with ψ_T(y) = q_0 and ψ_T(bdry(T)) = bdry(1/4 VS), which rescales the Laplacian by the factor (15)^{−m}. This says that any tree can be put in one-to-one correspondence with an arm of the Vicsek set in such a way as to respect similarities.
Let u be our 0-series eigenfunction. Then u|_T ∘ ψ_T^{−1} satisfies the eigenvalue equation on 1/4 VS with eigenvalue (15)^{−m}λ and vanishing normal derivatives at the boundary. Since (15)^{−m}λ is not a 4/3-series eigenvalue (if it were, then so would λ be) we have u|_T = u(y) v_{(15)^{−m}λ} ∘ ψ_T. So λ and u|_{diagonal} determine u according to the above equation.
We would like to go further and say that any function in H 0 is determined by its restriction to the diagonal, and aside from symmetry there are essentially no other conditions on the restrictions to the diagonal of H 0 functions. We begin with the analogous statement on the discrete approximations.
Let D_m denote the intersection of V_m with one arm of the diagonal. Note that #D_0 = 1, #D_1 = 2 and #D_m = 3 #D_{m−1} − 1. Let Z_m denote the span of the 0-series eigenfunctions through level m. We may consider elements of Z_m as functions either on V_m or VS_2. Note that dim Z_0 = 1, dim Z_1 = 2 and dim Z_m = 3 dim Z_{m−1} − 1, so dim Z_m = #D_m = (3^m + 1)/2. Thus it is plausible that every function on D_m is the restriction of a function in Z_m, and every function in Z_m is uniquely determined by its restriction to D_m. In fact, these statements are equivalent. We conjecture a little more.
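The counting argument can be checked mechanically; the recurrence below is the one stated in the text (dim Z_m satisfies the same recurrence with the same initial value, so the same function computes both):

```python
def D_count(m):
    """#D_m: vertices of V_m on one arm of the diagonal, from the
    recurrence #D_0 = 1, #D_m = 3 #D_{m-1} - 1."""
    d = 1
    for _ in range(m):
        d = 3 * d - 1
    return d
```

The closed form (3^m + 1)/2 follows by induction: 3(3^{m−1} + 1)/2 − 1 = (3^m + 1)/2.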
Examples are shown in Figure 24, and many more (for m = 4) are available on our website [6].
To pass from the discrete to the continuous version we consider functions in H_0 ∩ C (here C denotes the continuous functions on VS_2). Such functions have well-defined restrictions to D (one arm of the diagonal). To show that the restriction f|_D of such a function determines f, it suffices to show that it determines f|_{V_m} for all m, since ∪_m V_m is dense in VS_2 and f is continuous. Let f_m denote the projection of f onto Z_m. By the results of [20] we know f_m converges to f uniformly. If the conjecture is valid then f_m can be written as a finite sum of 0-series eigenfunctions with coefficients determined by the values of f on D_m. Despite the fact that this is a finite sum for each m, it is a rather peculiar formula. The coefficients oscillate rapidly but do not go to zero as m increases. It does not seem likely that we can make any sense out of it if we do not assume that f is continuous. It seems unlikely that the existence of a continuous restriction to D for a function in H_0 implies that it is continuous on VS_2. A more plausible conjecture is that if the restriction to D is Hölder continuous of some order then the function is Hölder continuous of the same order on VS_2. Another reasonable conjecture is that the restrictions of H_0 ∩ C to D form a dense subset of the continuous functions on D. A less likely conjecture is that the restrictions give all continuous functions on D.

7. Ratio gaps. In [4] it was shown that on SG there exist gaps in the ratios of eigenvalues. As a consequence, it is possible to define operators of the form Δ_1 − aΔ_2 on the product of two copies of SG (Δ_1 and Δ_2 denote the Laplacian acting in each factor) where a lies in a gap, and these operators paradoxically behave in some ways like elliptic operators, despite the fact that the coefficient −a has the wrong sign. These operators were called quasielliptic in [4]. There are no analogous operators in classical PDE theory. Thus it is of great interest to know whether similar operators exist for products of fractals other than SG. In fact [14] shows that this is the case for VS_2 and VS_3.
Also [8] investigates this question for a variant of the SG type fractal. The method used in [8], which we follow here, yields a computer-assisted proof. The idea is that the method introduced in [4] leads to a large number of tedious calculations, and these are best left to the computer. In our method there is a parameter ℓ that may be chosen at will. Increasing ℓ will do a better job of finding gaps, at the cost of increasing the number of computations. Let λ_{m_0} be a graph eigenvalue born on level m_0; then spectral decimation expresses the corresponding fractal eigenvalues in terms of λ_{m_0} as in (7.1). Fix ℓ > 0. Any fractal eigenvalue λ arises by applying a sequence of inverse branches φ_{v_j} of the spectral decimation function, where all but finitely many of the v_j = 1. Thus there must be a word w of length ℓ and some graph eigenvalue λ_{m_0} so that λ is obtained from φ_w(λ_{m_0}). Combining this with (7.1) we see that every fractal eigenvalue λ can be written in the form (7.2) for some integer r.
Consider the contribution of a word w to the eigenvalues described by (7.2). If w ends in a 1, then as long as m > m_0 we can rewrite the expression using some other word w′ of length ℓ (with one fewer 1 at the end), while for m = m_0 we must retain (7.2) for every word w ending in 1. Furthermore we can discard φ_w(λ_{m_0}) if it is forbidden.
So far we have found finitely many intervals [a_i, b_i] (allowing a_i = b_i) so that each eigenvalue λ must satisfy λ ∈ ρ^r [a_i, b_i] for some i and r. Therefore any ratio of eigenvalues λ/µ must satisfy λ/µ ∈ ρ^r [R_{ij}, S_{ij}] for some r, i, and j, where R_{ij} = a_i/b_j and S_{ij} = b_i/a_j. Since ρλ is an eigenvalue if λ is, we can restrict our attention to ratios λ/µ ∈ [1, ρ] and hence to the finite number of intervals ρ^r [R_{ij}, S_{ij}] which intersect [1, ρ]. The gaps in the union of these intervals are then guaranteed to be ratio gaps. Figure 25 shows the ratio gaps that are proved to exist by this method for n = 2, 3, 4 using values of ℓ = 1, 2, 3. For all of these n there are ratio gaps containing √ρ_n, given in Table 2. We see clearly that the number and size of the ratio gaps increase with ℓ. However, we have not been able to confirm the existence of ratio gaps for n ≥ 5. For n = 5 none are revealed for ℓ ≤ 2 and our MATLAB implementation (see [6]) runs into memory problems for ℓ ≥ 3. For ℓ ≥ 3 we can, however, use a modified algorithm which searches only for ratio gaps containing a particular point. These searches have failed to find ratio gaps containing √ρ_5 ≈ 12.3693. It is not clear if these failed searches should be interpreted as experimental evidence for the nonexistence of ratio gaps, or just as evidence that we need to consider higher values of ℓ to find ratio gaps.
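The interval bookkeeping in the last step can be sketched as follows. This is only the final stage of the method: it assumes the intervals [a_i, b_i] have already been produced from the spectral decimation data, and the function names are ours:

```python
import math

def ratio_gaps(intervals, rho):
    """Gaps inside [1, rho] not covered by any interval
    rho**r * [a_i/b_j, b_i/a_j], where the [a_i, b_i] (0 < a_i <= b_i)
    are intervals known to contain all eigenvalues up to powers of rho."""
    covers = []
    for a1, b1 in intervals:
        for a2, b2 in intervals:
            lo, hi = a1 / b2, b1 / a2
            # shift by the powers of rho that can bring [lo, hi] into [1, rho]
            r_min = math.floor(-math.log(hi, rho)) - 1
            r_max = math.ceil(math.log(rho / lo, rho)) + 1
            for r in range(r_min, r_max + 1):
                lo_r, hi_r = lo * rho**r, hi * rho**r
                if hi_r >= 1.0 and lo_r <= rho:
                    covers.append((max(lo_r, 1.0), min(hi_r, float(rho))))
    covers.sort()
    gaps, reach = [], 1.0
    for lo_r, hi_r in covers:
        if lo_r > reach:
            gaps.append((reach, lo_r))
        reach = max(reach, hi_r)
    if reach < rho:
        gaps.append((reach, float(rho)))
    return gaps
```

A single interval covering a full multiplicative period leaves no gaps, while a single point interval leaves all of (1, ρ) as a gap; realistic inputs fall in between.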
8. Eigenvalue clusters. We say the spectrum of a Laplacian exhibits spectral clustering if the following holds: for every integer n and ε > 0 there exists an interval I of length ε that contains n distinct eigenvalues. This says, for example, that you can find a million distinct eigenvalues within a millionth of each other. The eigenvalues will have to be very large, so it becomes computationally challenging to find such tight and large clusters. Clustering does not occur on the Sierpinski gasket SG. Experimental evidence suggests that it does occur on the pentagasket [1] and on Julia sets [10]. The following lemma allows us to prove that it holds on VS_2. Lemma 8.1. Suppose spectral decimation holds with spectral renormalization factor ρ and spectral renormalization function R(λ). Suppose R has a fixed point t (R(t) = t) such that |R′(t)| > ρ. Then spectral clustering occurs.
By taking j_0 large enough, we can make all the values φ_k^{(j_0)}(λ_p) close enough to t so that |φ′_k(x)| ≤ a < ρ^{−1} in a neighborhood containing all the φ_k^{(j_0)}(λ_p). This means that {φ_k^{(j_0+j_1)}(λ_p)} belongs to an interval of length no more than c a^{j_1}, where c is the length for j_1 = 0. Then {ρ^{m+j_0+j_1} g(φ_k^{(j_0+j_1)}(λ_p))} belongs to an interval of length at most cMρ^{m+j_0}(aρ)^{j_1}. Since aρ < 1, this can be made ≤ ε by taking j_1 large enough. Thus we can find n distinct eigenvalues in an interval of length no more than ε.
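For VS_2 the hypothesis of the lemma can be verified numerically, using the values of R and ρ given in the text:

```python
rho = 15                        # spectral renormalization factor for VS_2

def R(lam):
    """Spectral decimation function for VS_2, as given in the text."""
    return 36 * lam**3 - 48 * lam**2 + 15 * lam

def R_prime(lam):
    """Derivative of R."""
    return 108 * lam**2 - 96 * lam + 15

# R(t) = t reads 2t(18t^2 - 24t + 7) = 0; the largest root is (4 + sqrt 2)/6
t = (4 + 2**0.5) / 6
```

Since R′(t) > 15 is equivalent to 108t² > 96t, i.e. t > 8/9, the check reduces to comparing (4 + √2)/6 ≈ 0.902 with 8/9 ≈ 0.889.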
On VS_2, ρ = 15 and R(λ) = 36λ^3 − 48λ^2 + 15λ. So R(t) = t means 2t(18t^2 − 24t + 7) = 0, with solutions 0 and (4 ± √2)/6. We are interested in the largest solution, t = (4 + √2)/6. To show R′(t) > 15 we need t > 8/9; but (4 + √2)/6 = 0.902..., so this is true. We thus have clustering in VS_2. Computing the largest fixed point t of R on VS_n for n = 3, . . . , 9 we also get R′(t) > ρ (see Table 3), and hence spectral clustering occurs. Because the ratio R′(t)/ρ increases rapidly with n we conjecture that spectral clustering occurs for all n.

Table 3. The largest fixed point t of the spectral decimation function R on VS_n satisfies R′(t) > ρ for n = 2, . . . , 9.

9. The Green's function. As a function of x, G should be harmonic in the complement of y. Suppose y lies in the upper right arm of VS_n. The boundary points are labeled q_1, q_2, q_3, q_4, with q_1 corresponding to the arm where y is. Let z be the projection of y onto the diagonal of VS_n. (In the case that y is on the diagonal already, z = y.) Now G(q_j, y) = 0 for j = 1, 2, 3, 4. Define G(q_0, y) = a (where q_0 is the center point), G(z, y) = b, and G(y, y) = c. The values a, b, c determine G because G(x, y) is linear on the arms (q_0, q_2), (q_0, q_3), (q_0, q_4), on (q_0, z) and (z, q_1), and along the unique path joining z to y. It is constant on every component of the complement of these 6 sets. Figure 26 shows the seven points q_0, q_1, q_2, q_3, q_4, y, z together with the values of G(·, y) at these points.

10. Higher Vicsek sets. It is clear that VS_n converges to a cross. The eigenfunctions of the Laplacian on the cross are well understood: the restriction to either diagonal is an eigenfunction on the unit interval, while at the center point the function is required to be continuous and to have the sum of its normal derivatives equal to zero. Thus any eigenfunction is either cos πkx on each diagonal, or a_j sin π(k + 1/2)x on the jth half diagonal with Σ_j a_j = 0, for some integer k.
We call the first type symmetric, and the second nonsymmetric. The symmetric eigenvalues (obtained by taking second derivatives) are π^2 k^2, and the nonsymmetric eigenvalues are π^2 (k + 1/2)^2.
We claim that the symmetric spectrum is the limit of the spectrum of the 0-series on VS_n as n → ∞ (these are symmetric eigenfunctions), and the nonsymmetric spectrum is the limit of the spectrum of the 4/3-series born on level 0 (the 4/3-series born on levels ≥ 1 does not contribute to the limit because those eigenvalues go to infinity). We also claim that the limits of the symmetric eigenfunctions are cosines, and the limits of the nonsymmetric eigenfunctions are sines.
To understand the behavior of the eigenvalues as n → ∞ we can restrict attention to the initial segment consisting of the 0-series eigenvalues ρ_n ψ_n(φ_{2j−1}(0)) and the 4/3-series eigenvalues ρ_n ψ_n(φ_{2j−1}(4/3)). The values φ_{2j−1}(4/3) are computed in [23]. If j is small compared to n, which will always happen if we fix j and let n → ∞, then φ_{2j−1}(4/3) ≈ (π^2/6)((2j−1)/(2n−1))^2. Since ψ_n(t) ∼ t for t near 0, the corresponding eigenvalues behave like ρ_n φ_{2j−1}(4/3). There is no exact computation of the zeroes of h_n, but the zeroes of g_n are known, so φ_{2j}(0) = (1 + cos((2n−1−2j)π/(2n−1)))/3 = (2/3) sin^2((π/2)(2j/(2n−1))), and we have interlacing of zeroes of g_n and h_n, so φ_{2j−2}(0) ≤ φ_{2j−1}(0) ≤ φ_{2j}(0). If we assume that the lower bound is the asymptotically correct value, then we obtain the expected value (4π^2/3)(j − 1)^2 for the limit. We will show below that this is indeed correct.
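The two closed forms for φ_{2j}(0) can be checked against each other numerically (a sanity check on the identity 1 + cos(π − θ) = 2 sin²(θ/2); the function names are ours):

```python
import math

def phi_2j_at_zero(j, n):
    """phi_{2j}(0) = (1 + cos((2n-1-2j) pi/(2n-1))) / 3."""
    return (1 + math.cos((2 * n - 1 - 2 * j) * math.pi / (2 * n - 1))) / 3

def phi_2j_at_zero_alt(j, n):
    """Equivalent closed form (2/3) sin^2((pi/2) * 2j/(2n-1))."""
    return (2 / 3) * math.sin((math.pi / 2) * (2 * j) / (2 * n - 1)) ** 2
```

For fixed j and large n the small-angle approximation gives (2/3)(πj/(2n−1))² = (π²/6)(2j/(2n−1))², matching the quadratic behavior seen for the 4/3-series values.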
We can also understand why the VS_n eigenfunctions, restricted to the cross, converge to the eigenfunctions of the cross. To see this, we look at the graph eigenvalue equation on V_1. Note that V_1 consists of four arms of n − 1 squares joined at a central square. We label the diagonal vertices of one arm x_1, x_2, . . . , x_n and the below- and above-diagonal vertices y_1, . . . , y_{n−1} and z_1, . . . , z_{n−1} (see Figure 27). By symmetry we will have u(y_j) = u(z_j) for every eigenfunction. The eigenvalue equation (with eigenvalue λ_1) at y_j lets us solve for u(y_j) in terms of the diagonal values u(x_j) and u(x_{j+1}). For 2 ≤ j ≤ n − 1 we may then eliminate the off-diagonal values from the eigenvalue equation at x_j, and the simplified equation is exactly the eigenvalue equation (for eigenvalue 3λ_1) on the interior of the linear graph x_1, . . . , x_n. Similarly, the eigenvalue equation at the endpoint x_n simplifies to the correct eigenvalue equation (for eigenvalue 3λ_1) with Neumann conditions at that endpoint. The equation at the endpoint x_1 will depend on whether we are looking at the 0-series or the 4/3-series: for the 0-series the values along all four arms are identical, while for the 4/3-series the sum of the values on all four arms is zero, and in each case the eigenvalue equation at x_1 simplifies accordingly. These should be compared with the eigenvalue equation for the eigenfunction ũ with eigenvalue 3λ̃_1 on two copies of the linear graph with even and odd symmetries, namely (1 − 3λ̃_1)ũ(x_1) = (1/2)(ũ(x_2) ± ũ(x_1)).
Note that we get the identical equation in the odd case, but in the even case we get (1 − 6λ̃_1)ũ(x_1) = ũ(x_2), so there is a significant distinction. In the case of the 4/3-series, we can therefore identify the restriction of the eigenfunctions to the diagonal with u(x_k) = sin(π(j − 1/2)(2k − 1)/(2n − 1)), 1 ≤ j ≤ n − 1. Figure 28 shows some 0-series eigenfunctions plotted against the symmetric eigenfunctions on the cross for n = 3, 6, 9. It appears that u(x_k) closely approximates cos(πj(2k − 1)/(2n − 1)).
We now sketch a proof that the eigenvalues 3λ_1 approach 3λ̃_1 = 1 − cos(2πj/(2n − 1)) = 2 sin^2(πj/(2n − 1)) and the eigenvectors u(x_k) approach ũ(x_k) as n → ∞. Here we fix the value of j, and we require the appropriate error estimate since both 3λ_1 and 3λ̃_1 tend to zero. The idea is to use standard perturbation theory, using the fact that the two eigenvalue equations differ only at the single point x_1, and the fact that the eigenvector ũ(x_k) is fairly uniformly distributed, so the value ũ(x_1) is relatively small.
Note that the first equation is not a linear generalized eigenvalue equation because G depends on λ_1, but this does not really matter in our argument. The gist of the argument is that G̃ − G is a matrix with only one non-zero entry (G̃_{11} − G_{11}), and we can bound this entry since λ_1 is bounded away from 4/3 for the 0-series; also we know ũ exactly, hence |ũ(x_1)| ≤ 1 while ⟨G̃ũ, ũ⟩ = n/2. This yields an estimate for λ_1 − λ̃_1. With a little more work, we can get the estimate for the first N eigenvalues, since we know λ_1 = O(1/n^2). With a little more work we can show that u − ũ = O(1/n) when u is properly normalized. So far we have dealt with the level 1 eigenvalues λ_1. The actual eigenvalues λ on VS_n are given by λ = ψ_n(λ_1) for the lowest segment of the spectrum (this will include the first N eigenvalues once n is large enough). Figure 29 gives experimental evidence for the estimate t ≤ ψ_n(t) ≤ t + ct^2 on 0 ≤ t ≤ 1 for a constant c independent of n. This shows that λ − λ_1 = O(1/n^3) as n → ∞ for the first N eigenvalues.
11. Weyl ratio. We now describe in more detail the Weyl ratio W_n(t) = N_n(t)/t^{α_n} on VS_n, where α_n = log(4n − 3)/log ρ_n and N_n(t) = Σ_{λ_j ≤ t} m(λ_j) is the eigenvalue counting function (counting multiplicity). According to a general theorem of Kigami and Lapidus [16], w_n(t) = lim_{k→∞} W_n(ρ_n^k t) exists. In order to compare w_n for different values of n, we normalize by w̃_n(s) = w_n(λ_1 ρ_n^s), so that w̃_n is a periodic function of period 1 with w̃_n(0) = w_n(λ_1). From the data it appears that w̃_n is converging to a limit as n → ∞, but this limit has nothing to do with the Weyl ratio on the cross, which tends to a constant. While we cannot supply a complete explanation of this phenomenon, we can make a few observations about the behavior of w̃_n(s) for some values of s. Because of high multiplicities the functions w_n and w̃_n have jump discontinuities. We write w_n(t^−) = lim_{s→t^−} w_n(s) and similarly for w̃_n(s^−). First we note that it is possible to compute w_n(λ_i) for small values of i. Lemma 11.1. For 1 ≤ j ≤ n − 1 we have w_n(λ_{2j−1}^−) = (4j − 3)/(λ_{2j−1})^{α_n}, w_n(λ_{2j−1}) = (4j − 1)/(λ_{2j−1})^{α_n}, and w_n(λ_{2j}) = 4j/(λ_{2j})^{α_n}.
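The normalization N_n(t)/t^{α_n} can be illustrated with a toy one-dimensional spectrum; this sketch does not compute the VS_n eigenvalues or α_n:

```python
def weyl_ratio(eigs, alpha, t):
    """W(t) = N(t)/t**alpha, with N(t) the number of eigenvalues <= t
    (eigs listed with multiplicity)."""
    return sum(1 for lam in eigs if lam <= t) / t**alpha

# toy check: eigenvalues k^2 and alpha = 1/2 give N(t) = floor(sqrt(t)),
# so the ratio stays pinned between fixed positive constants
eigs = [k**2 for k in range(1, 200)]
vals = [weyl_ratio(eigs, 0.5, t) for t in (10.0, 100.0, 1000.0, 10000.0)]
```

In the one-dimensional toy the ratio tends to 1; on VS_n the analogous normalized quantity instead oscillates, which is exactly the behavior the section studies.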