Finding Structural Anomalies in Star Graphs Using Quantum Walks: A General Approach



Abstract
In previous papers about searches on star graphs, several patterns have become apparent: the speed-up only occurs when graphs are "tuned" so that their time step operators have degenerate eigenvalues, and only certain initial states are effective. Moreover, the searches are never faster than O(√N) time. In this paper the problem is defined rigorously, the causes of all of these patterns are identified, necessary and sufficient conditions for quadratic-speed searches for any connected subgraph are demonstrated, the tolerance of these conditions is investigated, and it is shown that (unfortunately) we can do no better than O(√N) time. Along the way, a formalism is established that may be useful in future work involving highly symmetric graphs.

Introduction and Review of Quantum Walks
A quantum walk is a quantum version of a random walk [1]. In a quantum walk, a particle moves on a general structure, a graph, which is a collection of vertices and edges connecting them, and its motion is governed by amplitudes, whereas in a classical random walk it would be governed by probabilities. There are a number of types of quantum walks. First, the time can be either continuous [2] or advance in discrete steps [3,4]. Within the discrete-time walks, there are another two types. In the coined walk, the particle sits on the vertices and an extra degree of freedom, a quantum coin, is needed to make the dynamics of the walk unitary. In the scattering walk, the particle sits on the edges, and no coin is necessary [5]. There have been a number of experimental implementations of quantum walks, some using trapped ions [6,7] and others using photons in optical networks [8]–[11].
Quantum walks have proven useful in finding new quantum algorithms and expanding the applicability of ones that are known. We are interested in searches. In most cases there is a distinguished vertex, one whose behavior is different from the others, and the object is to find this vertex [12]–[16]. More recently, quantum walks have been shown to be useful in finding marked structures, such as cliques [17], or structures that break the symmetry of a graph [18,19]. In this paper a generalization of the latter problem will be pursued.
In the scattering model of quantum walks [5] the states are defined on edges, with two states on every edge, one for each of the two possible directions. So, if two vertices connected by some edge are labeled a and b, then |a,b⟩ is the state on the edge that points from a to b, and |b,a⟩ is the state on the edge pointing from b to a. If the vertices are labeled 1, …, m, then a state of the system is written |Ψ⟩ = Σ_{j,k=1}^{m} α_{j,k} |j,k⟩, where α_{j,k} ∈ ℂ, 0 ≤ |α_{j,k}| ≤ 1, and Σ_{j,k=1}^{m} |α_{j,k}|² = 1. These α's are probability amplitudes, and |α_{j,k}|² is the probability of measuring |Ψ⟩ and detecting the particle in the state |j,k⟩.
Each vertex hosts a local unitary operator that maps all of the incoming states to outgoing states. That is, if v is a vertex and {a_j} is the set of connected vertices, then the unitary operator defined on v, denoted U_v, performs a mapping U_v : {|a_j, v⟩} → {|v, a_j⟩}. Notice that the notation used to describe the states indicates which vertex operator to apply. The only operator that will act on the state |s,t⟩ is U_t, and the vertex operator most recently applied to |s,t⟩ was U_s. The time step operator, U, is defined as all of the U_v taken together; U = ⊕_v U_v, where this direct sum is taken over all vertices. Like each U_v, U is unitary, but unlike each of the vertex-specific operators, U is an endomorphism. As such, we can talk about the eigenvalues and eigenvectors (equivalently, "eigenstates") of U. Because the edge state immediately implies which of the vertex operators to apply, we will in general only talk about U as a whole and abandon the vertex-specific notation. For example, U|s,t⟩ = (⊕_v U_v)|s,t⟩ = U_t|s,t⟩, but it is unnecessary on the right-hand side to indicate which vertex operator is being used, since it is already announced by the state |s,t⟩.
The eigenvalues of U all have modulus 1, and there is no "steady state". In an intuitive, non-rigorous sense, because U must be unitary (it is a quantum mechanical time evolution operator) it must conserve information. That is, for any given state there is a unique preimage. But a particle on a vertex at time t could have been on any adjacent vertex at time t − 1. In the earliest attempts to define a quantum analog of discrete random walks the states were again defined on the vertices, but it was quickly found that an ancillary "coin space" needed to be attached to each vertex to maintain unitarity and keep track of the state's previous vertex. The edge state formalism (e.g., |a,b⟩) is an equivalent formalism that eliminates the need for ancillary spaces, while still carrying the additional information required.
Definition For the purposes of this paper a "search" is defined to be a process used to distinguish between N − 1 identical elements and one marked element that a priori can be any of the N total elements. Under this definition the Shor algorithm, for example, is not a search, because it is used to find numbers with certain properties that set them apart; the set of numbers being considered is not a set of elements that are all equally likely to be "marked". Part of the reasoning behind this definition is that it leads naturally to graphs with very high symmetry, which substantially reduces the dimension and difficulty of the problem.
Define a mapping φ : E → E, where E is the set of edges, to be some rearrangement of edges such that U = φ⁻¹ ∘ U ∘ φ, or equivalently φ ∘ U = U ∘ φ. φ is called a quantum graph automorphism [20], and it is a rearrangement of edges and vertices that leaves the effect of U invariant. If there is a set of edges that can be mapped into each other by some quantum graph automorphism, then we say that these edges belong to the same equivalence class, and a uniform superposition of states on these edges is seen as a single edge on the "collapsed graph".
For example, consider the complete graph K_3 with vertices labeled A, B, C. Define A to be a strictly reflecting vertex (i.e., U|B,A⟩ = |A,B⟩) and B and C as strictly transmitting (i.e., U|B,C⟩ = |C,A⟩). We find that the only non-trivial quantum graph automorphism is the one that exchanges B and C. That is, φ makes the following exchanges: |A,B⟩ ↔ |A,C⟩, |B,A⟩ ↔ |C,A⟩, and |B,C⟩ ↔ |C,B⟩.
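The K_3 example can be checked directly. The sketch below (a numerical aside, not from the original text) builds the 6-dimensional time step operator on the edge states of K_3 and the permutation φ that exchanges B and C, and verifies that φ commutes with U:

```python
import numpy as np

# Edge states of K3, ordered:
states = ["AB", "BA", "AC", "CA", "BC", "CB"]
idx = {s: i for i, s in enumerate(states)}

U = np.zeros((6, 6))
# Vertex A strictly reflects: |B,A> -> |A,B>, |C,A> -> |A,C>
U[idx["AB"], idx["BA"]] = 1
U[idx["AC"], idx["CA"]] = 1
# Vertices B and C strictly transmit:
U[idx["BC"], idx["AB"]] = 1   # |A,B> -> |B,C>
U[idx["BA"], idx["CB"]] = 1   # |C,B> -> |B,A>
U[idx["CB"], idx["AC"]] = 1   # |A,C> -> |C,B>
U[idx["CA"], idx["BC"]] = 1   # |B,C> -> |C,A>

# The automorphism exchanging B and C, as a permutation on edge states
phi = np.zeros((6, 6))
for a, b in [("AB", "AC"), ("BA", "CA"), ("BC", "CB")]:
    phi[idx[a], idx[b]] = phi[idx[b], idx[a]] = 1

assert np.allclose(U @ U.T, np.eye(6))   # U is unitary
assert np.allclose(phi @ U, U @ phi)     # phi commutes with U
print("phi is a quantum graph automorphism of K3")
```

Since φ commutes with U, the symmetric superpositions of the exchanged edge states form a 3-dimensional invariant subspace, which is the collapsed graph described above.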
Definition For any graph G with a time step operator U, define the "collapsed graph" or "automorphism graph" as G_A ≡ {|ψ⟩ : φ|ψ⟩ = |ψ⟩, ∀φ where U = φ⁻¹Uφ}. That is, the collapsed graph is composed only of those states that are left invariant under every quantum graph automorphism.

It's straightforward to demonstrate that G_A is closed under U. If |ψ⟩ is a state on G_A, then

|ψ⟩ ∈ G_A ⇒ φ|ψ⟩ = |ψ⟩ ⇒ Uφ|ψ⟩ = U|ψ⟩ ⇒ φU|ψ⟩ = U|ψ⟩ ⇒ U|ψ⟩ ∈ G_A.

By using collapsed quantum graphs we can decrease the dimension of the state space substantially. In the example above the number of dimensions was cut in half, but the greater the symmetry, the greater the decrease in dimension.

Star Graphs with an Unknown Flaw

This section describes the model that will be used throughout this paper. In figure 2 the 0 vertex is the "hub vertex": a vertex with N connections, where N is allowed to vary. In analogy to a spoked wheel, vertex 0 is called the "hub". It is generally assumed that the reflection and transmission coefficients at the hub are the same for each of the N edges, so any incoming state |j,0⟩ will be mapped to

U|j,0⟩ = r|0,j⟩ + t Σ_{k≠j} |0,k⟩.

Unitarity at the hub requires that |r|² + (N − 1)|t|² = 1 and 2Re(r*t) + (N − 2)|t|² = 0. For the examples in this paper we'll assume that the hub is a standard "diffusive vertex", which means that r = −1 + 2/N and t = 2/N, although it's easy to generalize away from this. The subgraph, G, is attached to any one of the N edges radiating from vertex 0. Although we don't know which edge is connected to G, we can, without loss of generality, assume that it is attached to vertex 1. In this way, vertex 1 is the "marked" vertex.
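As a quick sanity check, the diffusive-vertex coefficients can be tested against both unitarity conditions numerically (a minimal sketch; r and t are real here, so Re(r*t) = rt):

```python
# Verify the hub unitarity conditions for the diffusive vertex,
# r = -1 + 2/N and t = 2/N, for several values of N.
for N in [2, 3, 10, 1000]:
    r, t = -1 + 2 / N, 2 / N
    assert abs(r**2 + (N - 1) * t**2 - 1) < 1e-12    # |r|^2 + (N-1)|t|^2 = 1
    assert abs(2 * r * t + (N - 2) * t**2) < 1e-12   # 2 Re(r*t) + (N-2)|t|^2 = 0
print("diffusive hub satisfies both unitarity conditions")
```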
Vertices 0 and 1, the states between them, and everything in G make up the Right side of the graph.
Vertices 2 through N "reflect with a phase of φ", which means that U|0,j⟩ = e^{iφ}|j,0⟩. The value of φ is left unspecified, so that it can be used to "dial in" certain eigenvalues of U (how this is done will be shown momentarily).
Vertices 0 and 2 through N, as well as the edges connecting them, are called the Left side of the graph.
Throughout this paper I will be referring to Right or Left eigenvalues or eigenvectors. This will always indicate that the object in question is native to that side of the graph. The goal of a quantum search on a star graph is to get the probability amplitude on the states |0,1⟩ and |1,0⟩ as high as possible, so that when a measurement is made, the result is likely to be the marked edge. In this way the marked edge is found, and the search completed. Typically it is assumed that measurements cannot be made on G itself since, in some sense, if we had access to G we wouldn't be looking for it. So if a particle would have been measured on an edge in G, the measurement is assumed to be a null result. This turns out to be a smaller problem than it might seem at first, and so a better, less-specific goal is to get the state supported mostly on the Right side.
The effect of U on |0,1⟩, as well as on any of the states in G, is different for different graphs, and will be left as a "black box". For every value of N ≥ 2 we have another, different graph, and a new problem to solve. However, by taking advantage of the obvious symmetries on the Left side, this graph can be collapsed substantially. For ease of notation define the following:

|out⟩ ≡ (1/√(N−1)) Σ_{j=2}^{N} |0,j⟩ and |in⟩ ≡ (1/√(N−1)) Σ_{j=2}^{N} |j,0⟩.

This is the collapsing discussed in the previous section. While there may be additional collapsing/simplification taking place inside of the subgraph, G, that will not be directly addressed in this paper. At the hub vertex we find that U does this:

U|in⟩ = (r + (N−2)t)|out⟩ + t√(N−1) |0,1⟩,
U|1,0⟩ = t√(N−1) |out⟩ + r|0,1⟩.

Rewriting this in terms of a new perturbation variable, ε ≡ 1/N, we find:

U|in⟩ = (1 − 2ε)|out⟩ + 2√(ε(1−ε)) |0,1⟩,
U|1,0⟩ = 2√(ε(1−ε)) |out⟩ + (−1 + 2ε)|0,1⟩.

Note that U is a function of ε, and define U_0 ≡ U|_{ε=0}. The number of edges, N, connected to the hub vertex is the only variable in the graph, and studying the dependence of the eigenvalues and eigenvectors on ε = 1/N will be the focus of the rest of this paper. Unless otherwise noted, assume that every variable (eigenvalues, eigenvectors, etc.) is a function of ε. In general, denote f_0 ≡ f(ε)|_{ε=0}. For example, suppose |w⟩ is an eigenvector of U. It may depend on ε, because U depends on ε. |w_0⟩ is the corresponding eigenvector of U_0, defined as |w_0⟩ ≡ |w⟩|_{ε=0}, and it is constant. When ε = 0, which is the N → ∞ limit, the states of the two sides of the graph, Left ≡ {|in⟩, |out⟩} and Right ≡ {G, |0,1⟩, |1,0⟩}, are kept separate by U_0. The eigenvalues and vectors of both the Left and Right sides are changed relatively little by ε (this will be proven), so once they have been found they can be used without modification. Clearly, ε = 0 is the easiest case to work with, and dealing with very large values of N (where search algorithms are useful) corresponds to very small values of ε. Phrased in this way, the problem lends itself naturally to a perturbative approach.
It's worth taking a moment to repeat that, and to point out how profound it is. The original problem involved an infinite family of graphs indexed by the number of edges in the star graph, N, each of which had a Hilbert space of dimension 2N + |G|, where |G| is the number of edge states in G. But by using the symmetry of vertices 2 through N to collapse the graph, the problem now only considers a single graph with a Hilbert space of dimension 4 + |G| and a single non-constant vertex (the "hub") that behaves like a valve between the Left and Right sides in a simple and predictable way.
Because the location of the 1 vertex is unknown, the initial states we have access to are of the form

|Ψ_0⟩ = (1/√N) Σ_{j=1}^{N} (a|0,j⟩ + b|j,0⟩),

which can also be written

|Ψ_0⟩ = √(1−ε) (a|out⟩ + b|in⟩) + √ε (a|0,1⟩ + b|1,0⟩).

In other words, the initial state is (to within a small error) just a superposition of |in⟩ and |out⟩.
There's absolutely nothing stopping us from starting with a state like (1/√N) Σ_{j=1}^{N} (−1)^j |0,j⟩. However, this state breaks the graph's symmetry a bit, since we now have to treat the odd and even vertices separately, and the graph becomes that much more unwieldy.
A cursory look at U_0 reveals that the eigenvalues from the Left side of the graph are λ = ±e^{iφ/2}, and that the corresponding eigenvectors are (1/√2)(|out⟩ ± e^{iφ/2}|in⟩). It will be shown that if the Right side of the graph shares one or both of these eigenvalues, then a "rotation" (as described in section 4) may occur, and an initial state of the form |ℓ_0⟩ = (1/√2)(|out⟩ ± e^{iφ/2}|in⟩) will move into another eigenstate with the same eigenvalue, |r_0⟩, in the Right side in O(√N) time. So, finding G boils down to finding one of the eigenvalues and eigenvectors of the Right side, and this can be done by "dialing in" φ so that the Left side shares an eigenvalue with the Right. This, by the way, is the great advantage of star graphs; any eigenvalue can be easily attained, and there are only two dimensions (|in⟩ and |out⟩). That said, the results and techniques of this paper can be applied far more generally.
The "paired" eigenvalues of U that are involved in searches with a quadratic speed-up take the form λ(ε) = Σ_{j=0}^{∞} a_j (±√ε)^j (this will be proven). In what follows, it will often be advantageous to write this as λ = λ_0 e^{±ic√ε} + O(ε). It will be shown that finding a known graph merely requires matching Left and Right eigenvalues of U_0. Finding an unknown graph means finding an eigenvalue of the Right side, which consists of finding λ_0 and dealing with c.

Grover Graph Example
For the Grover graph, G is simply a vertex that reflects with a phase of π. The Grover graph is named for the Grover algorithm, which it exactly emulates. This example is simple enough that it can be completed by hand. The pairing of eigenvectors, the √ε dependences and errors, as well as the O(√N) time required to execute the search are all evident here. The basis states of the (collapsed) Grover graph are {|out⟩, |in⟩, |0,1⟩, |1,0⟩}. The time step matrix, U, is defined as before with the addition that U|0,1⟩ = −|1,0⟩. So, using the basis states as listed above,

U(ε) =
[ 0         1−2ε          0     2√(ε(1−ε)) ]
[ e^{iφ}    0             0     0          ]
[ 0         2√(ε(1−ε))    0     −1+2ε      ]
[ 0         0             −1    0          ],

and the characteristic polynomial is C(z, ε) = (z² − 1)(z² − e^{iφ}) + 2εz²(1 + e^{iφ}). This clean factoring at ε = 0 is symptomatic of the separation of the Right and Left sides when ε = 0. Clearly, double roots can be found when φ = 0, so in what follows φ will be set to 0. The eigenvectors on the Left are |ℓ(1)⟩ = (1/√2)(|out⟩ + |in⟩) and |ℓ(−1)⟩ = (1/√2)(|out⟩ − |in⟩), for λ = ±1, and the eigenvectors on the Right are |r(1)⟩ = (1/√2)(|0,1⟩ − |1,0⟩) and |r(−1)⟩ = (1/√2)(|0,1⟩ + |1,0⟩), for λ = 1 and λ = −1 respectively.
The only reasonable initial states we have access to, under the assumption that the target vertex, 1, is unknown, are even superpositions of the form (1/√N) Σ_{j=1}^{N} |0,j⟩ and (1/√N) Σ_{j=1}^{N} |j,0⟩. These states are located almost completely on the Left side. This means that the initial states will always start on the Left, up to a tiny error. Setting the initial state to be (approximately) in the 1-eigenspace, |Ψ_0⟩ ≈ |ℓ(1)⟩ = (1/√2)(|out⟩ + |in⟩), and applying U repeatedly rotates the state from |ℓ(1)⟩ toward |r(1)⟩. If U is iterated m ≈ (π/2)√N times, then to within an error of O(1/√N) the system will be on the Right side, in either the state |0,1⟩ or |1,0⟩. A measurement at this time will complete the search.
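The whole search can be simulated directly on the uncollapsed star graph. The sketch below is my own numerical illustration (N = 100 is an assumption for the demo): it builds the full 2N-dimensional time step operator with a diffusive hub, a phase-π marked vertex, and phase-0 outer vertices, then tracks the probability on the marked edge. The peak should land near m ≈ (π/2)√N with probability close to 1:

```python
import numpy as np

N = 100
U = np.zeros((2 * N, 2 * N))  # basis: |0,j> at index j-1, |j,0> at N+j-1

# Hub vertex 0 (diffusive): |j,0> -> r|0,j> + t * sum_{k != j} |0,k>
r, t = -1 + 2 / N, 2 / N
for j in range(N):
    for k in range(N):
        U[k, N + j] = r if j == k else t

# Outer vertices: marked vertex 1 reflects with phase pi,
# vertices 2..N reflect with phase 0
U[N, 0] = -1.0
for j in range(1, N):
    U[N + j, j] = 1.0
assert np.allclose(U @ U.T, np.eye(2 * N))  # sanity check: U is unitary

psi = np.zeros(2 * N)
psi[:N] = 1 / np.sqrt(N)  # uniform superposition over the |0,j>

best_p, best_m = 0.0, 0
for m in range(1, 3 * int(np.sqrt(N))):
    psi = U @ psi
    p = psi[0]**2 + psi[N]**2  # probability on the marked edge states
    if p > best_p:
        best_p, best_m = p, m
print(best_p, best_m)  # best_p close to 1, best_m near (pi/2) sqrt(N)
```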
It's worth pointing out that if we hadn't set φ = 0 then the "true" eigenvectors |V^{(k)}(ε)⟩ would have been almost entirely constrained to one side or the other. This is because the |V^{(k)}(0)⟩ span the same eigenspaces as the Right and Left eigenvectors of U_0. If the Right and Left sides share no common eigenvalues, then we have four distinct eigenvalues and four distinct eigenspaces. So, rather than being a rearrangement of the eigenvectors of U_0, as was the case above, the |V^{(k)}(ε)⟩ are merely tiny perturbations of each of them individually.
Grover originally described his algorithm with a diffusion operator and an oracle operator. The diffusion operator is here replaced by the hub vertex, and the oracle with the differently-phased reflections from the remaining vertices. The result is exactly the same, and we can see that Grover's proof of optimality is merely a statement about the nature of the hub. That is, there's no way to modify the hub alone to get a better speed up.

Algebraic Functions and the Behavior of Zeros
In this section we'll consider the relationship between the zeros of a polynomial and the coefficients of that polynomial. Later these results will be applied to characteristic polynomials. For more background on the math used in this section see [21] and [22].
Definition A "globally analytic function" is an analytic function that is defined as every possible analytic continuation of an analytic function from a particular point in a domain. Globally analytic functions can be many-valued.
Just to emphasize the point that globally analytic functions can be many-valued: if f(z) = z^{1/4}, then f(16) = 2, 2i, −2, −2i; that is, f is "4-valued". For the different branches,

Proof The proof of this theorem is included in the appendix.
Define P(z, ε) to be a polynomial in z and ε, written P(z, ε) = Σ_{j=0}^{d} a_j(ε) z^j, where each a_j(ε) is a polynomial.

Theorem 3.2.
There exists an open disk D containing 0, and d analytic functions, f^{(1)}, …, f^{(d)}, such that for every ε ∈ D the zeros of P(z, ε) are exactly f^{(1)}(ε), …, f^{(d)}(ε). Note that the f^{(k)} are indexed functions, and not necessarily branches of the same globally analytic function. It is true that when P(z, ε) is not simultaneously reducible in both z and ε the zeros are all branches of a single globally analytic function of ε; however, it isn't necessary to know that here.
Proof In appendix.
Issues can crop up with the theorem above, specifically if P(z, 0) has a repeated zero. When this is the case, we can concern ourselves with the annulus 0 < |z − λ^{(k)}| < δ, and 0 < ε < σ, where δ and σ are as defined in the theorem above.
Before dealing with higher order roots we need a few more tools.
Theorem 3.3. If P(z, ε) is an irreducible polynomial in z and ε, then all of the double roots of P are isolated in the ε-plane. That is, if for some value ε_0, P(z, ε_0) has a double root, then there exists δ > 0 such that when 0 < |ε − ε_0| < δ, P(z, ε) does not have a double root in z.
Proof In appendix.
Theorem 3.4. In the neighborhood of a zero of P(z, ε) of multiplicity s > 1, the zeros take the form f_α(ε) = Σ_n A_n ω^{nα} ε^{n/H}, where ω ≡ e^{2πi/H} and H ≤ s. Specifically, the zeros are branches of one or more H_i-valued globally analytic functions, with the given Puiseux series expansion, such that Σ_i H_i = s.

Proof In appendix
Notice that the only new case that this last theorem applies to is repeated roots. We've continued to assume that a_d(ε) ≠ 0.

A Note on the Analyticity of the Characteristic Polynomial
In the section above it was assumed that P(z, ε) is a polynomial with respect to both z and ε. Yet a quick look at U reveals that this may not necessarily be the case for the characteristic polynomial, since U contains entries of O(√ε). However, it can be shown that for the situation being considered in this write-up C(z, ε) = |U(ε) − zI| is always a polynomial with respect to ε. In fact, it is affine with respect to ε.
Theorem 3.5. Assume that U is a time step matrix as described so far. That is, there is a Left and a Right side, and these are connected only through a hub vertex with N edges where the reflection and transmission coefficients are r = −1 + 2/N and t = 2/N. Then C(z, ε) = |U − zI| is an affine polynomial in ε = 1/N, and can be written C(z, ε) = C_0(z) + εf(z).
Proof The proof of this is in the appendix, in section 9.2.
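Theorem 3.5 can be checked numerically on the collapsed Grover graph from the example above (a sketch assuming φ = 0; the closed form in the final assertion is my own hand computation, not an equation from the original text): the determinant is affine in ε even though U itself has O(√ε) entries.

```python
import numpy as np

def U_grover(eps, phi=0.0):
    # Collapsed Grover-graph time step matrix, basis {|out>, |in>, |0,1>, |1,0>}
    s = 2 * np.sqrt(eps * (1 - eps))  # the O(sqrt(eps)) hub entries
    return np.array([[0,                1 - 2 * eps, 0,  s],
                     [np.exp(1j * phi), 0,           0,  0],
                     [0,                s,           0, -1 + 2 * eps],
                     [0,                0,          -1,  0]])

def C(z, eps):
    return np.linalg.det(U_grover(eps) - z * np.eye(4))

z = 0.3 + 0.7j  # an arbitrary test point
vals = [C(z, e) for e in (0.1, 0.2, 0.3)]
assert abs(vals[0] - 2 * vals[1] + vals[2]) < 1e-12  # second difference: affine in eps
assert abs(C(z, 0.2) - ((z**2 - 1)**2 + 4 * 0.2 * z**2)) < 1e-10
print("C(z, eps) = (z^2 - 1)^2 + 4 eps z^2 when phi = 0")
```

The sqrt(eps) entries only ever enter the determinant through their squares, which is why the result is a polynomial in ε.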

Pairing
In the last section it was seen that the zeros, λ^{(k)}, of a polynomial with coefficients that are polynomials of ε take the form λ^{(k)}(ε) = Σ_n A_n ω^{nk} ε^{n/H}, and these eigenvalues are grouped together into sets of size H which permute when ε loops around zero. For any given eigenvalue of U_0 there can be more than one of these sets, and they may have different sizes. When the polynomial in question is the characteristic polynomial of the transition matrix of a quantum walk, C(z, ε) = a_0(ε) + a_1(ε)z + ⋯ + a_{d−1}(ε)z^{d−1} + z^d, then some new restrictions are brought into play.
The perturbation is assumed to be set up in such a way that for some path in the ε-plane starting at ε = 0, U is unitary, and along this path |λ^{(k)}(ε)| = 1. In a star graph U is unitary for all positive integer values of N, so since ε = 1/N this path is along the positive real axis in the ε-plane.
Theorem 4.1. The eigenvalues, λ, of the matrix for a quantum walk, U, with a characteristic polynomial that is a polynomial in both λ and ε, can only take the form λ(ε) = Σ_{n=0}^{∞} a_n (±√ε)^n; that is, the Puiseux series has no negative powers and H ≤ 2.

Proof Since U_0 is unitary, lim_{ε→0} λ(ε) exists and is finite (|λ^{(k)}(0)| = 1, for all k). Therefore the Puiseux series for λ^{(k)}(ε) has no negative terms.

The angle between any two branches is at least 2π/H. Indeed, the H directions that the different zeros take as they move away from λ_0 are initially evenly spaced. The only effect of taking different paths from zero is to rotate every (λ^{(k)}(ε) − λ_0). If there is a path that maintains the unitarity of U(ε), then for values of ε along that path |λ^{(k)}(ε)| = 1. For small values of ε the unit circle is essentially a line. Specifically, if the λ^{(k)}(ε) are restricted to the unit circle and the λ^{(k)}(ε) − λ_0 are evenly spaced, then either the angle between λ^{(1)}(ε) − λ_0 and λ^{(2)}(ε) − λ_0 must be π, or λ(ε) is single-valued (H = 1). But that rules out values of H above 2.
Definition Two eigenvalues are said to be "paired" when a small loop around 0 in the ε-plane causes them to switch (H = 2). As seen above, paired eigenvalues vary on the order of O(√ε), and non-paired eigenvalues vary by O(ε). In addition, the associated eigenvectors and eigenprojections, P, are also said to be paired.
Proof The proof of this is included in the appendix.
In that proof it was shown that P^{(k)}(ε) can be expressed as a power series in √ε. This means that since we can define |V^{(k)}⟩ using these projections, not only do we find that the eigenvectors can likewise be written as power series in √ε, but we automatically gain normalization: |⟨V^{(k)}|V^{(k)}⟩| = 1, ∀ε. What follows barely deserves to be a theorem, but it needs to be emphasized.
Proof Since all eigenprojections can be written as power series in √ε, eigenvectors with distinct eigenvalues can as well, and it follows that a linear combination of such eigenvectors can also be written as a power series in √ε.

An extremely useful consequence of these theorems is that eigenvectors of U_0, which are easy to find, can be used as approximations of the exact eigenvectors of U, which are difficult to find, are variable, and needlessly complex.
Define S to be the span of a collection of eigenvectors of U_0, and P_S as the projection operator onto S.

If |u⟩ ∈ S, then ∀m, U_0^m|u⟩ is also in S, and U^m|u⟩ is almost entirely in S.

Proof The proof of this theorem is trivial, but takes up a lot of space. It can be found in the appendix.
The essential idea behind this theorem is that if an initial state starts in an eigenstate, or collection of eigenstates, of U 0 , then it will approximately stay there under arbitrarily many applications of U. This means that, when setting up a search, only one eigenspace/eigenvalue of U 0 needs to be considered. This keeps the situation much simpler.
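This can be seen numerically in the Grover graph example (a sketch of my own, assuming φ = 0 and ε = 10⁻⁴): a state that starts in the λ_0 = 1 eigenspace of U_0 stays almost entirely in that eigenspace for arbitrarily many steps, even while it rotates from the Left eigenvector toward the Right one.

```python
import numpy as np

eps = 1e-4  # corresponds to N = 10**4
s = 2 * np.sqrt(eps * (1 - eps))
# Collapsed Grover-graph walk, basis {|out>, |in>, |0,1>, |1,0>}, phi = 0
U = np.array([[0, 1 - 2 * eps, 0, s],
              [1, 0,           0, 0],
              [0, s,           0, -1 + 2 * eps],
              [0, 0,          -1, 0]])

# The lambda_0 = 1 eigenspace of U_0 is spanned by the Left and Right vectors:
l1 = np.array([1, 1, 0, 0]) / np.sqrt(2)   # (|out> + |in>)/sqrt(2)
r1 = np.array([0, 0, 1, -1]) / np.sqrt(2)  # (|0,1> - |1,0>)/sqrt(2)

psi, worst = l1.copy(), 1.0
for _ in range(1000):
    psi = U @ psi
    worst = min(worst, np.dot(l1, psi)**2 + np.dot(r1, psi)**2)
print(worst)  # stays within O(sqrt(eps)) of 1 for all times
```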

Necessary and Sufficient Conditions for Pairing
In this section it will be shown that the eigenvectors of U_0 can be disjointly divided up into "constant eigenvectors" and "active eigenvectors".

Theorem 4.5 (the three-case theorem). If λ_0 is a root of C_0(z) with multiplicity s, then only one of the following cases applies to the "λ_0 family" of roots of C(z, ε):
i) λ^{(k)}(ε) = λ_0 for every k. That is, all of the roots are constant.
ii) λ^{(k)}(ε) = λ_0 for all but one value of k. This root takes the form λ(ε) = λ_0 + a_1 ε + O(ε²).
iii) λ^{(k)}(ε) = λ_0 for all but two values of k. These two are paired and take the form λ(ε) = λ_0 ± a_1 √ε + O(ε).

Proof From theorem 3.5, the characteristic equation can be written C(z, ε) = C_0(z) + εf(z). Suppose λ_0 is a zero of C_0(z) with multiplicity s and a zero of f(z) with multiplicity t. Near λ_0 the non-constant members of the λ_0 family satisfy (z − λ_0)^{s−t} ∝ −ε, so they move away from λ_0 like ε^{1/(s−t)}. However, by theorem 4.1 we know that unitarity implies s − t ≤ 2. Therefore, if λ_0 is a zero of C_0(z) with multiplicity s, then it must also be a zero of f(z) with a multiplicity of at least s − 2. This leads to three cases:
i) t ≥ s. Here the entire λ_0 family is identically equal to λ_0, and has no ε dependence.
ii) t = s − 1. In this case s − 1 of the members of the λ_0 family are constant, and the last root is an analytic function of ε.
iii) t = s − 2. In this case s − 2 of the members are constant, and the last two are paired (H = 2).

Since there is only one paired set of eigenvectors for a given λ_0, we can label them |V^+⟩ and |V^−⟩. In the next few theorems the following properties will be important.
Proof These three results are immediate, upon inspection.
Theorem 4.7. Paired eigenvectors "straddle the hub". That is, if |V is a paired eigenvector, then |V 0 has both a Left and Right side component.
Proof Paired eigenvectors have eigenvalues that always take the form λ = λ_0 e^{ic√ε} + O(ε). Since U_1, the O(√ε) part of U, is only involved in the transmission between the Left and Right sides, in order for ⟨V_0|U_1|V_0⟩ to be non-zero, |V_0⟩ must show up on both sides. That is, if |V_0⟩ were entirely supported on the Right side, then U_1|V_0⟩ would be entirely supported on the Left side.

Theorem 4.8. Let the Right side λ_0-eigenspace of U_0 be D dimensional.
1) If the λ_0-eigenspace of U_0 is bound in G, then the λ_0-eigenspace of U is D dimensional and all of the associated eigenvectors are constant. This is case i) of the Three Case Theorem.
2) If the λ_0-eigenspace of U_0 is in contact with the hub vertex, then the Right side λ_0-eigenspace of U is D − 1 dimensional and the D − 1 associated eigenvectors are constant and bound in G. This leaves one eigenvector which is non-constant in ε, and is in contact with the hub vertex. This is either case ii) or case iii) of the Three Case Theorem.
Proof The first result is trivial. If an eigenvector is bound in G, then varying ε (which only affects reflection and transmission across the hub vertex) can't have any impact on it. So, for eigenvectors bound in G, U|V⟩ = U_0|V⟩ = λ_0|V⟩. The contrapositive is likewise clear; if an eigenvector is non-constant, then it must be in contact with the hub vertex.
The second result is far more hard-won, and is included in the appendix.
In the proof of the above theorem (in the appendix), the Left side was given a particular form; it consisted of the states |in⟩ and |out⟩, with U|out⟩ = e^{iφ}|in⟩. It may seem strange that a particular form for the Left side can be used to say such general things about the Right side. But keep in mind that what's been shown is that a certain number of the Right side λ_0 eigenvectors are bound in G. Once we know that an eigenvector on the Right is definitely not in contact with the hub vertex, it doesn't matter what the Left side is doing.
This last theorem can be stated far more succinctly as,

Theorem 4.9. Excluding a special case, a Right side eigenvector is constant and has a constant eigenvalue if and only if it is not in contact with the hub vertex.
That special case is λ_0² + e^{iφ} = 0. When this happens the eigenvectors on each side are "balanced", and there is no net flow of probability across the hub. In the Grover graph example, this happens when e^{iφ} = −1, which means that all of the edges become identical to the marked edge. As a result, the quantum graph can be collapsed to just two states; without two sides, there is no net flow. In general, even though there may not be a further collapsing of the graph, when λ_0² + e^{iφ} = 0 there is no net flow of probability across the hub.
Definition This unique non-constant member of the λ_0 family of eigenvectors, |w⟩, depends on the Left side. But while |w⟩ may depend on φ and ε, |w_0⟩ ≡ lim_{ε→0} |w⟩ does not. It is unique so long as λ_0² ≠ e^{iφ} (so long as there is no pairing). |w_0⟩ is precisely the λ_0 eigenvector of U_0 lost when moving from the ε = 0 case to the ε ≠ 0 case. This |w_0⟩ is a uniquely defined Right side λ_0 eigenvector of U_0, and it is called the "Right side active λ_0 eigenvector". An essentially identical proof demonstrates the existence of "Left side active λ_0 eigenvectors".
Definition All of the other λ_0 eigenvectors are bound in G, and so are called "bound λ_0 eigenvectors". Bound eigenvectors are independent of ε and of any structure on the far side of the hub. If the λ_0-eigenspace of U_0 is D-dimensional overall, then the space of bound λ_0 eigenvectors of U will be D, D − 1, or D − 2 dimensional, depending on whether the Left and Right sides have active eigenvectors.
Definition The "active subspace" is the span of all of the active eigenvectors of U_0, for all eigenvalues. Since it is the complement of the span of all of the bound eigenvectors, and each non-constant eigenvector is perpendicular to every bound eigenvector, every non-constant eigenvector is contained in the active subspace.

Proof The proof of this is somewhat involved and so is included in the appendix.

Among the useful results of the Fundamental Pairing Theorem is the fact that U takes a particularly simple form on the active subspace and that, with correctly chosen complex phases, the paired eigenvectors take the form

|V^±⟩ = (1/√2)(|ℓ_0⟩ ± |r_0⟩) + O(√ε).

So, on each side of the hub vertex we have a unique active eigenvector, and paired eigenvectors are just combinations of the active eigenvectors from both sides, up to an error of O(√ε). This makes the situation pretty simple, and we can ignore the bound states entirely.

The Efficiency of Searches Using Paired Eigenvectors
In order to execute a search in O(√N) time it is necessary to use paired eigenspaces, since this is where changes of O(√ε) can take place. The question addressed in this section is: given an initial state on one side of the hub vertex, how much of it can be transferred to the other? That is, what is the lower bound on the probability of a successful measurement under ideal conditions? Happily, the answer is 1 − O(√ε). In the course of proving the fundamental pairing theorem an important additional fact was also proven, and is worth noting separately.
Theorem 4.11. Paired eigenvectors are always evenly divided across the hub vertex. That is, if P is a projection onto either the Left or Right side and |V^±⟩ is a paired eigenvector, then ⟨V^±|P|V^±⟩ = 1/2 + O(√ε).

Proof As established in the fundamental pairing theorem, |V^±⟩ = (1/√2)(|ℓ_0⟩ ± |r_0⟩) + O(√ε). Since |ℓ_0⟩ is entirely on the Left, P_L|V^±⟩ = (1/√2)|ℓ_0⟩ + O(√ε), and so ⟨V^±|P_L|V^±⟩ = 1/2 + O(√ε). The same holds for the Right side projector.
Proof In the last subsection it was established that |V^±⟩ = (1/√2)(|ℓ_0⟩ ± |r_0⟩) + O(√ε). From this we can infer that c is real, and we can therefore use the somewhat simpler form λ = λ_0 e^{±ic√ε} + O(ε).

Proof To show that iterating U interchanges |ℓ_0⟩ and |r_0⟩, it suffices to consider m = ⌊π/(2c√ε)⌋ applications, since the floor function rounds down by at most 1.
This means that searches with |ℓ_0⟩ as the initial state are almost guaranteed to succeed, and all of the work that goes into setting up the search (choosing an initial state, knowing what to look for afterward, and figuring out how many times to iterate U) has now been pushed back to finding the active eigenvectors |ℓ_0⟩ and |r_0⟩.
Even this is relatively straightforward. |r_0⟩ can be found by looking for the one eigenvector of P_R U_0 with eigenvalue λ_0 that is in contact with the hub vertex. This can be done in a few ways, for example by removing each bound λ_0 eigenvector, or by comparing the λ_0-eigenspaces of U_0 and of a quantum graph with the hub vertex's behavior replaced with something simple, like U|1,0⟩ = 0. Trivially, if the Right side λ_0-eigenspace is one dimensional and in contact with the hub, then the λ_0 eigenvector is the active eigenvector.
The value of N (or ε), and the entire Left side, are unimportant to determining the active eigenvectors.
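This selection procedure can be sketched numerically. A minimal sketch, assuming a toy 4-state Right side block (the matrix U0R, the hub-adjacent index, and the tolerance are all invented for illustration): project the hub-adjacent state onto the λ₀-eigenspace of U₀; whatever survives, normalized, is the active eigenvector, and the rest of the eigenspace is bound.

```python
import numpy as np

# Toy Right-side block of U0 (a hypothetical example, not one of the
# paper's graphs): two decoupled 2-state swaps. State 0 is hub adjacent;
# states 2 and 3 never touch the hub, so they form the bound sector.
U0R = np.array([[0., 1., 0., 0.],
                [1., 0., 0., 0.],
                [0., 0., 0., 1.],
                [0., 0., 1., 0.]])

lam0 = 1.0                                    # target eigenvalue (degenerate)
vals, vecs = np.linalg.eig(U0R)
space = vecs[:, np.abs(vals - lam0) < 1e-9]   # basis of the lambda0-eigenspace
space, _ = np.linalg.qr(space)                # orthonormalize the basis

# Project the hub-adjacent state (index 0) onto the eigenspace; whatever
# survives, normalized, is the unique active eigenvector.
hub = np.zeros(4)
hub[0] = 1.0
r0 = space @ (space.conj().T @ hub)
r0 /= np.linalg.norm(r0)
print(np.abs(r0))    # concentrated on states 0 and 1: (|0> + |1>)/sqrt(2)
```

Note that N never enters: only the Right side block of U₀ is needed, exactly as the text claims.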

Best Choice Selection and the Best Time Bound
The bigger c is, the faster a search will proceed. So, when the eigenvalue can be selected, the best choice is the eigenvalue with the largest value of c.
The specificity of the star graph described in section 2 allows us to be more precise about the value of c. Regardless of the value of φ, the Right side has some number d of active eigenvectors |r₀^(j)⟩, for j = 1, ..., d. Each of these eigenvectors has a value of c, c^(j). While there is no lower bound for all of these, we can construct a lower bound for at least some of them.
Theorem. If d is the number of Right side active eigenvectors, then Σ_(j=1)^d (c^(j))² = 2, and there exists at least one j such that c^(j) ≥ √(2/d).
Proof |1,0⟩ is adjacent to the hub vertex and therefore |1,0⟩ is entirely contained in the active subspace. It follows that 1 = Σ_(j=1)^d |⟨1,0|r₀^(j)⟩|², and since c^(j) = √2|⟨1,0|r₀^(j)⟩|, we have Σ_(j=1)^d (c^(j))² = 2. The largest term in a sum of d terms is at least as large as the average, so there is at least one j such that c^(j) ≥ √(2/d).
Definition There is at least one eigenvalue such that m ≤ (π/(2√2))√(dN). This is the "best time bound". If the eigenvalue with the highest value of c is used, then the search will require this many time steps at most.
where |G| is the number of states in G. Examples can be constructed where c^(j) = √2|⟨1,0|r₀^(j)⟩| is arbitrarily small for some values of j, but not for all. So long as there is a bound on the size of G, there is at least one eigenvalue for which c^(j) can be bounded from below. Equivalently, there is always at least one eigenvalue such that when the initial state is prepared in the corresponding Left eigenstate, the number of time steps required for a search is bounded from above.
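The counting argument above can be checked numerically. A sketch in which the overlaps ⟨1,0|r₀^(j)⟩ are drawn at random but normalized so that |1,0⟩ lies entirely in the active subspace (d, N, and the random seed are arbitrary choices):

```python
import numpy as np

# Hypothetical overlaps <1,0|r0^(j)> for d active eigenvectors, normalized
# so that sum_j |<1,0|r0^(j)>|^2 = 1. Then sum_j (c^(j))^2 = 2 and at least
# one c^(j) is >= sqrt(2/d), so the best time bound holds.
rng = np.random.default_rng(7)
d, N = 5, 10**6

overlaps = rng.normal(size=d) + 1j * rng.normal(size=d)
overlaps /= np.linalg.norm(overlaps)        # normalize the decomposition
c = np.sqrt(2) * np.abs(overlaps)           # c^(j) = sqrt(2)|<1,0|r0^(j)>|

assert np.isclose(np.sum(c**2), 2.0)
assert c.max() >= np.sqrt(2.0 / d)

# The search time with the best eigenvalue never exceeds the best time bound.
m_best = np.pi / (2 * c.max()) * np.sqrt(N)
assert m_best <= np.pi / (2 * np.sqrt(2)) * np.sqrt(d * N)
print(m_best)
```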

Tolerances
For any of a number of reasons the Left and Right eigenvalues may not exactly match. However, the pairing doesn't break immediately, but instead fades away quickly as the disagreement between the eigenvalues increases. The eigenvalues don't have to be exactly equal in order for an O(√N) search, but they do have to be nearly equal, by an amount dictated by ε.
This section will make heavy use of the math introduced in section 3.

Grover Graph example revisited
This is exactly the same situation previously seen in section 2.1, but now we are allowing e iφ to take values other than 1. The advantage of the Grover graph is that it is simple enough that it is exactly solvable, even with this generalization.
The characteristic polynomial can be written down directly, and the four eigenvalues, expressed as one global function, follow by the quadratic formula, where the ±'s are independent. In the original example, φ = 0, so ((e^(iφ) − 1)/2)² = 0 and looping around ε = 0 permutes pairs of eigenvalues. When φ ≠ 0 the characteristic polynomial no longer has a double zero at ε = 0. We can still expand around ε = 0, but we find that the power series is in ε, not √ε.
Clearly the pairing has been lost. However, the double zero required for pairing hasn't vanished, merely moved. This new location is ε₀. To find it we set the discriminant equal to zero and solve for ε. In this particular case the discriminant is exactly what's found under the inner radical, D = (e^(iφ) − 1)² − 4ε(e^(iφ) + 1)² + 4ε²(e^(iφ) + 1)². We find the two zeros of D in ε immediately. We're interested in the zero that corresponds to ε₀|_(φ=0) = 0, and find that ε₀ = 1/2 − 1/(2cos(φ/2)). Expanding λ as a power series in √(ε − ε₀), which we find can be done, the eigenvalues can now be written in paired form. Notice that if |ε| ≫ |ε₀|, then the quadratic behavior is recovered. This will be discussed in substantially more detail below.
There's something subtle that happens with the eigenvectors here as well. When φ = 0, all four of the eigenvectors of U are evenly divided between the Left and Right sides, to within O(√ε). That is, U has four eigenvectors and when φ = 0 the limit of all of them, as ε → 0, is evenly divided between the two sides, since all four are paired.
When φ ≠ 0, U₀ has four distinct eigenvalues: {1, e^(iφ/2), −1, −e^(iφ/2)}. Therefore, the four eigenvectors of U, in the limit as ε → 0, converge to two couples of vectors: two on the Left for λ₀ = e^(iφ/2), −e^(iφ/2), and two on the Right for λ₀ = 1, −1. Clearly there is a dramatic difference between these limits. This is a "twisting" of the eigenvectors of U that occurs around a double zero. A careful reading of theorem 4.2 reveals that it says that eigenvectors are continuous when they have distinct eigenvalues. When an eigenvalue is degenerate this "continuity of eigenvectors" becomes a "continuity of eigenspaces". There are two degenerate eigenspaces when ε = 1/2 − 1/(2cos(φ/2)) (when φ = 0 these are the λ = ±1 eigenspaces of the original graph). Picking either, the degenerate space turns into two paired eigenvectors as ε moves away from this point, and it turns into two non-paired, single-sided eigenvectors as φ moves away from this point.
From either direction the two eigenvectors of U converge to the same degenerate eigenspace. The fact that the vectors they converge to are different is unimportant.

Altering the Graph
We know that eigenvalues can pair, and that their values vary on the order of √ε = 1/√N, when U₀ has an identical eigenvalue on both the Left and Right sides. However, we can hope that there should be some leeway, and that there will be pairing for nearly equal eigenvalues. The concern here is that changing the graph will turn a double root of C(z, 0) into a closely-spaced pair of distinct roots of C(z, 0). Since pairing and quadratic speed searches depend on the existence of double roots, these may be lost along with the double root.
Define a new parameter, ξ, which describes a change in the graph, such as a change in the reflection and transmission coefficients of some of the vertices. Further, define ξ such that C(z, ε, ξ) is a polynomial in all of its arguments, and so that C(z, 0, 0) has a double root in z. ξ = e^(iφ) − 1 ≈ iφ, from section 5.1, is the motivating example.
Theorem 5.1. If U has entries that are analytic functions of a given variable, ξ, then the eigenvectors of U, and by extension the eigenvalues, vary by O(ξ).
Proof Using the same argument seen in the proof of theorem 4.2 (which is in the appendix) we can show that the resolvent, R(ζ, ε, ξ) = (U − ζI)⁻¹, can be expressed as a power series in ζ, √ε, and ξ. Since R(ζ, ε, ξ) is a power series in ξ, it follows that the eigenprojections, P^(k)(ε, ξ) = −(1/2πi)∮_(λ^(k)) R(ζ, ε, ξ)dζ, and the eigenvectors, |V^(k)⟩, are also expressible as power series in ξ.
Notice that this theorem does not rule out ordinary pairing of O(√ε), since U has entries of the form 2√(ε − ε²). ξ has been defined so that C(z, 0, 0) has a double root. By applying the arguments of section 3 to isolated roots of C(z, ε, ξ) we find that those roots, λ^(k)(ε, ξ), are analytic with respect to both ε and ξ.
Theorem 5.2. Changing ξ causes the location of double zeros to drift. That is, a new value of ε that depends on ξ, labeled ε₀(ξ), may be found such that C(z, ε₀(ξ), ξ) has a double root in z, and ε₀(ξ) is a continuous function of ξ.
Proof Consider the discriminant of C, defined as D = Π_(j>k)(λ^(j) − λ^(k))², where {λ^(k)} are the roots of C in the variable z. The well-known and relevant properties of D(ε, ξ) are 1) D(ε, ξ) = 0 if and only if C(z, ε, ξ) has a double root in z, and 2) if C(z, ε, ξ) is a polynomial in its arguments, then D(ε, ξ) is also a polynomial in its arguments.
Because C(z, 0, 0) has a double root, we know that D(0, 0) = 0. Using the same trick that was used to describe the behavior of λ with respect to ε (theorem 3.2), we can describe the new location of the double root in the ε-plane with respect to ξ as ε₀(ξ) = Σ_j a_j ξ^(j/s), where s is less than or equal to the multiplicity of the root of D(ε, ξ) in the variable ε. The important thing to notice here is that ε₀(ξ) is continuous with respect to ξ.
As shown in thm. 3.4, the only time that the Puiseux series of λ^(k)(ε) can have noninteger powers is when the expansion is taken about the location in the ε-plane of a double root of C(z, ε). With the introduction of ξ, as shown in the last theorem, the location of the double root is a function, ε₀(ξ). In order for the Puiseux series of λ^(k)(ε) to have half-integer powers it must be expanded about ε₀, and so it takes the form λ^(k)(ε, ξ) = Σ_(n=0)^∞ a_n(ξ)(ε − ε₀(ξ))^(n/2). There are no new issues with expanding around ε = ε₀ as opposed to ε = 0, and the above series can certainly be constructed.
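The drift of the double zero can be illustrated with the Grover-graph example of section 5.1. The discriminant used below is a reconstruction from the fragments quoted in that example, so treat it as an assumption; it is quadratic in ε, and its zero near the origin tracks ε₀(ξ):

```python
import numpy as np

# Reconstructed (assumed) discriminant of the Grover-graph example:
# D(eps, phi) = (e^{i phi}-1)^2 - 4 eps (e^{i phi}+1)^2 + 4 eps^2 (e^{i phi}+1)^2
phi = 0.2
u = np.exp(1j * phi)
coeffs = [4 * (u + 1)**2, -4 * (u + 1)**2, (u - 1)**2]   # powers of eps
roots = np.roots(coeffs)

# Keep the zero that reduces to eps_0 = 0 at phi = 0 ...
eps0 = roots[np.argmin(np.abs(roots))]

# ... and compare with the closed form 1/2 - 1/(2 cos(phi/2)),
# which is approximately -phi^2/16 for small phi.
assert np.isclose(eps0, 0.5 - 1.0 / (2 * np.cos(phi / 2)))
print(eps0.real)   # about -0.0025
```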

Nearly-Paired Eigenvalues
Assume that the graph is altered in the manner described in the last subsection. Define the Left and Right eigenvalues when ε = 0 as λ_ℓ and λ_r respectively. These are both analytic functions of ξ, and thus the phase difference between them can also be described as an analytic function of ξ. Define this phase difference as δ.
So long as λ_ℓ λ_r* is invertible as a function of ξ, we can express ξ as a power series in δ, and δ ∝ ξ + O(ξ²). This is a fairly reasonable assumption. In the Grover graph example provided at the beginning of this section, δ = φ/2.
So, we can reasonably assume that U, the eigenvectors, and the eigenvalues are all analytic functions of δ, where e^(iδ) = λ_ℓ λ_r*. In addition, the point in the ε-plane about which the eigenvalues permute, ε₀, is a continuous function of δ.
We can now write λ as a power series in δ and √(ε − ε₀).
Since e^x is an analytic function, we can express this in a somewhat more convenient form: λ(ε, δ) = e^(ib₀(δ) + ib₁(δ)√(ε − ε₀(δ)) + ib₂(δ)(ε − ε₀(δ)) + ···). Already we can make a few observations. Since these eigenvalues must have modulus 1 over a range of values of ε and δ, it follows that b_j(δ) is real for all j. This is important for the following proof.
Theorem. ε₀ is a power series in δ², and to lowest order ε₀(δ) = −δ²/(4c²).
Proof Since λ is a power series in √(ε − ε₀) and δ, it follows that λ(0, δ) = e^(ib₀(δ) + ib₁(δ)√(−ε₀(δ)) + ib₂(δ)(−ε₀(δ)) + ···) must be a power series in δ. Therefore, either ε₀(δ) is a power series in δ², or all of the odd terms (b₁, b₃, ...) are zero. The latter can't be the case, since b₁(0) = c (this is the same c used throughout the rest of this paper). So, ε₀ is a power series in δ². From the definition of δ, e^(iδ) = λ_ℓ λ_r*, the two values of λ(0, δ) must differ in phase by δ, which to lowest order gives 2c√(−ε₀) = δ, and therefore ε₀(δ) = −δ²/(4c²) + O(δ⁴). We can now write λ as a power series in δ and √(ε − ε₀(δ)), with ε₀ known to lowest order.

Nearly-Paired Eigenvectors
Eigenvalues and eigenspaces vary by O(δ, √ε). This is an important distinction to make. The fundamental pairing theorem is essentially the statement that when δ = 0, then |V₀±⟩ ≡ lim_(ε→0) |V±⟩ = (1/√2)(|ℓ₀⟩ ± |r₀⟩). However, when δ ≠ 0 we find that the |V₀±⟩ must converge to |ℓ₀⟩ and |r₀⟩ separately. |V₀±⟩ is defined as a limit of eigenvectors of U, and must itself be an eigenvector of U₀. But if λ_ℓ ≠ λ_r, then no non-trivial combination of the Left and Right active eigenvectors can be an eigenvector of U₀. Therefore, lim_(δ,ε→0) |V±(δ, ε)⟩ doesn't exist. Clearly, the "angle" between |V±(δ, ε)⟩ and the active eigenvectors, |ℓ₀⟩ and |r₀⟩, is somehow dependent on δ and possibly some relationship between δ and ε.
Theorem 5.4. The angle between the paired eigenvectors and the active eigenvectors, ω, is to lowest order a function of δ²/(4c²ε).
Proof The proof of this is included in the appendix.
In the first version of this proof, when dealing with exactly-matched eigenvalues, there was no issue with taking the limit ε → 0 to find the value of ω. However, in this case we have terms involving δ²/ε making that difficult.

Tuning
We now make the declaration that δ²/(4c²ε) ≡ t = O(1), while δ and ε themselves remain small. In this way we can take the limit as both ε and δ go to zero, but fix a relationship between them. When t ≈ 0, N ≪ (2c/δ)² and ω ≈ π/4. In this case the graph behaves like a normal, paired system. That is, the mismatching of the eigenvalues isn't large enough to affect the algorithm. The paired eigenvectors are each equal combinations of both the Right and Left active eigenvectors (to within O(√ε, δ)), and quadratic speed searches can be executed using the vectors |ℓ₀⟩ and |r₀⟩.
For t ≫ 1, N ≫ (2c/δ)² and ω ≈ 0. The system does not behave like a paired system, but instead behaves as though there are no matched eigenvalues at all. So, for large values of t the initial state stays where it is. The Left and Right eigenstates are decoupled, and are no longer useful for a search.
For intermediate values, "t" describes how "well tuned" a pair of active eigenvectors is for given values of ε and δ. It also provides an easy way to move back and forth between ε and ε − ε₀, since ε − ε₀ = ε(1 + t) to lowest order. Graphs with a large value of t are "poorly tuned", and graphs with a small value of t are "well tuned".
We now wish to calculate P(m) = |⟨r₀|U^m|ℓ₀⟩|², which is the probability of a successful search, using |ℓ₀⟩ as an initial state and |r₀⟩ as a target state, after m time steps. How this depends on t will be considered in the following theorem.
Theorem 5.5. There is a better than 50% chance of a successful search of the N edges of the hub vertex using the states |ℓ₀⟩ and |r₀⟩ after m = ⌊(π/2c)√N⌋ iterations of the time step operator, whenever δ < c√(2/N), where δ is the difference in phase between the Left and Right eigenvalues, and c = |⟨r₀|U₁|ℓ₀⟩|.
Proof The proof of this theorem will be included in the appendix.
So, when the error between the Left and Right eigenvalues is less than c√(2/N), we can ignore that error, and the algorithm will work normally more than half of the time. We can do slightly better. A careful reading of the proof reveals that the lower bound on the probability is closer to ≈ 58.7%. The usual O(√ε, δ) error does appear here; however, that extra 8% can be used to absorb these terms for sufficiently large values of N (N ≥ O(100)) and sufficiently small values of δ (δ ≤ O(0.1)).
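The threshold behavior can be sanity-checked in a hypothetical effective two-level model: an assumed reduction to span{|ℓ₀⟩, |r₀⟩} with per-step coupling c√ε and detuning δ/2, chosen to be consistent with the tuning parameter t = δ²/(4c²ε). This is an illustration, not the full walk dynamics:

```python
import numpy as np

# Assumed two-level model of the detuned search, restricted to
# span{|l0>, |r0>}: per-step coupling c*sqrt(eps), detuning delta/2.
def success_probability(N, c, delta):
    eps = 1.0 / N
    H = np.array([[delta / 2.0, c * np.sqrt(eps)],
                  [c * np.sqrt(eps), -delta / 2.0]])
    vals, vecs = np.linalg.eigh(H)
    U_step = vecs @ np.diag(np.exp(1j * vals)) @ vecs.conj().T
    m = int(np.floor((np.pi / (2 * c)) * np.sqrt(N)))   # iteration count
    U_m = np.linalg.matrix_power(U_step, m)
    return abs(U_m[1, 0]) ** 2      # P(m) = |<r0|U^m|l0>|^2

N, c = 10**6, np.sqrt(3) / 2
print(success_probability(N, c, 0.0))                   # tuned: ~1.0
print(success_probability(N, c, c * np.sqrt(2.0 / N)))  # threshold: ~0.587
```

At the threshold δ = c√(2/N) the model reproduces the ≈ 58.7% lower bound quoted above, and at δ = 0 the search succeeds with probability near 1.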

Summary of Results
Take as given a star graph as described in section 2, with an attached subgraph G and a time step operator U. Define ε = 1/N, where N is the number of edges attached to the hub vertex.
-There are two kinds of eigenvectors of U₀: bound eigenvectors, and active eigenvectors. For a given eigenvalue, λ₀, of U₀ there is at most one active eigenvector, |r₀⟩. The "Right side λ₀ active eigenvector" is unique, and will exist if and only if the λ₀-eigenspace of U₀ is in contact with the Right side of the hub.
Bound eigenvectors are completely isolated inside of G and are independent of ε.
-The same statements apply to the Left side (although frequently throughout this paper we have assumed it to have a fixed, simple structure), and the Left side active λ₀ eigenvector is denoted |ℓ₀⟩. Writing U as U = U₀ + √ε U₁ + ···, the value of c is c = |⟨r₀|U₁|ℓ₀⟩|. For a Left side consisting of only the states |out⟩ and |in⟩, we find that c = √2|⟨1,0|r₀⟩|.
-We can quickly find λ₀, |ℓ₀⟩, and |r₀⟩ because all of them are determined entirely by U₀, which tends to be easy to work with. This is because, in addition to being block-diagonal (the blocks corresponding to the Left and Right sides), U₀ is often sparse, and unlike U, it has no dependence on ε. With |ℓ₀⟩ and |r₀⟩ in hand, the values of c and m follow immediately.
-λ₀ doesn't need to be an exact double eigenvalue in order for a quadratic speed up to occur; however, the allowable difference in the eigenvalues, δ, is smaller for larger values of N. That is, larger searches need more exact control.
When δ < c√(2/N), the probability of a successful search after m = ⌊(π/2c)√N⌋ iterations of U is greater than one half.

Errors
In practice, errors are produced by:
-Approximations of all of the important terms (i.e., λ = λ₀e^(ic√ε) + O(ε)).
-The initial state is not exactly equal to |ℓ₀⟩, since it has a component on the Right side.
-|ℓ₀⟩ and |r₀⟩ are eigenvectors of U₀, not U, but are approximated by linear combinations of |V±⟩, the actual eigenvectors of U with eigenvalues λ±.
-Rounding error due to the fact that m must be an integer.
-λ₀ may not be an exact double root, but instead a pair of Left and Right eigenvalues separated by a small phase difference δ.
All of these produce errors of O(√ε) or less in the final state. This is unlikely to be a problem on any individual run of the algorithm, and can easily be dealt with by repetition.

Generalizations and Future Work
Generalized Values of r and t
In section 2, r and t were introduced, along with the unitarity conditions: |r|² + (N − 1)|t|² = 1 and 2Re(tr*) + (N − 2)|t|² = 0 (23). Throughout this paper the standard solution, r = −1 + 2ε, t = 2ε, has been used, where ε = 1/N as usual. This is found by assuming arg(r) = π and arg(t) = 0. However, by allowing arbitrary phase angles, arg(r) = x and arg(t) = y, we find a family of solutions, valid where cos(x − y) < 1. This condition is necessary for the second unitarity condition to have solutions. In the collapsed graph we find R_R = r and R_L = r + (N − 2)t to be the reflection coefficients for the hub vertex from the Right and Left sides respectively, and T = t√(N − 1) as the transmission coefficient. By unitarity, |R_R|² + |T|² = |R_L|² + |T|² = 1 and R_R T* + T R_L* = 0. We can use this last relation to quickly find the generalized coefficients. We can now use this to re-visit theorems 3.5 and 4.1. In the proof of theorem 3.5 we found that T² − R_R R_L = 1 for the standard r and t, but this generalized solution may put that clean result at risk. Luckily, the result survives. In the proof of 3.5 it was shown that C(z, ε) = p₁(z) + p₂(z)R_R(ε) + p₃(z)R_L(ε), where p₁, p₂, p₃ are polynomials. So, while we still have a clean result, C(z, ε) is no longer a polynomial in ε. But if a substitution is made, r = e^(ix)(1 − 2µ), then C(z, µ) is a polynomial with respect to µ. Notice that µ is real when ε is real, and when x = π and y = 0 we find µ = ε. Using µ instead of ε in section 3 we find that paired eigenvalues are of the form λ = λ₀e^(±ic√µ) + O(µ). However, because µ = O(ε), this result is the same as the original, and so all of the other results in this paper remain the same.
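The standard solution and the collapsed-graph identities can be verified directly; a minimal check (the value N = 100 is an arbitrary choice):

```python
import numpy as np

# Check the standard hub-vertex solution r = -1 + 2/N, t = 2/N against
# the unitarity conditions, and verify R_R R_L - T^2 = -1 for the
# collapsed-graph coefficients. N = 100 is an arbitrary choice.
N = 100
r, t = -1 + 2.0 / N, 2.0 / N

assert np.isclose(abs(r)**2 + (N - 1) * abs(t)**2, 1.0)            # unitarity
assert np.isclose(2 * (t * np.conj(r)).real + (N - 2) * abs(t)**2, 0.0)

R_R = r
R_L = r + (N - 2) * t
T = t * np.sqrt(N - 1)
assert np.isclose(abs(R_R)**2 + abs(T)**2, 1.0)
assert np.isclose(abs(R_L)**2 + abs(T)**2, 1.0)
assert np.isclose(R_R * R_L - T**2, -1.0)
print("all unitarity conditions satisfied")
```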

Multiple Copies of G
Assume there are N total edges connected to the hub vertex, as before, but rather than 1 edge connected to G there are M edges connected to M copies of G. These various copies of G can be connected to each other, but (for this generalization) in such a way that in the automorphism graph there is only one remaining edge on the Left and Right sides.
The reflection and transmission coefficients are

Multiple Subgraphs
If there are multiple subgraphs that share a common eigenvalue but have different forms, then it can be shown that they will behave collectively as though they were a single subgraph. The probability of a particular subgraph being the result of a search increases with increasing c, as one might expect.

General Highly-Symmetric Graphs
The star graph gives us a way of looking at the behavior of a hub vertex in depth. For highly symmetric graphs with bounded diameter (the maximum distance between any pair of vertices is bounded), we will find at least one hub vertex. Using the techniques of this paper it should be fairly straightforward to generalize to automorphism graphs with multiple hubs, for example in the investigation of the behavior of finite-depth tree graphs. The bolo graph (which resembles a bolo tie) has a bound state, and 4 Right side active eigenvectors. Using techniques from the paper we can quickly decide on the best eigenvalue and Left side eigenvector to use as an approximate initial state to ensure the quickest search. For the purposes of this example, N = 10⁶. We can get all the information we need from U₀, so we can ignore the Left side entirely. Define basis vectors for the Right side, and in this basis define the effect of U₀ as an equal scattering in three directions at vertex 1. The difference between the arms is that a signal returns from the b arm in 1 time step, and from the A arm in 2.
Step 1) Find the eigenvalues and eigenvectors of the Right side.
The characteristic polynomial is C₀(z) = z⁵ + (1/3)z⁴ − (2/3)z³ + (2/3)z² − (1/3)z − 1, and the five eigenvalues are then found to be: λ = −1, −1, 1, (1 ± 2√2 i)/3. Already we know that there must be at least one bound eigenstate with eigenvalue −1, since the active eigenvector for each eigenvalue is unique and −1 is a degenerate eigenvalue. The last four eigenvectors, those for λ = −1, 1, and (1 ± 2√2 i)/3, are all active eigenvectors. This is immediately obvious because ⟨0,1|r^(j)⟩ ≠ 0 for each of them, so they are hub adjacent. The λ₀ = −1 active eigenvector can be found by first finding the unique bound eigenvector; what remains in the −1 eigenspace must be the active eigenvector.
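These eigenvalues are easy to confirm numerically; the complex pair (1 ± 2√2 i)/3 follows from factoring out the roots z = ±1 of the quoted quintic:

```python
import numpy as np

# Verify the spectrum of the bolo-graph Right side by finding the roots
# of C0(z) = z^5 + z^4/3 - 2 z^3/3 + 2 z^2/3 - z/3 - 1.
coeffs = [1, 1/3, -2/3, 2/3, -1/3, -1]
roots = np.roots(coeffs)

expected = np.array([-1, -1, 1,
                     (1 + 2j * np.sqrt(2)) / 3,
                     (1 - 2j * np.sqrt(2)) / 3])

# Every root has modulus 1, as required of a unitary's eigenvalues ...
assert np.allclose(np.abs(roots), 1.0)
# ... and the root multiset matches the quoted eigenvalues.
assert np.allclose(np.sort_complex(roots), np.sort_complex(expected))
```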
The active eigenspace is 6-dimensional and is spanned by these four Right side and the two Left side active eigenvectors. Paired, or otherwise ε-dependent, eigenvectors are always expressible as superpositions of these six active eigenvectors. Even if the Left side were replaced with something more interesting, these four vectors would provide all the information necessary to analyze how the Right side interacts.
Step 2) Select a target eigenvalue. We already have enough information to see which of these states, or rather which of these eigenvalues, is the best target for a search. The state that "leans" toward the hub is always the best choice. There are two reasons: first, because the optimal number of iterations for a quadratic search is given by m = (π/2c)√N, and since c = |⟨ℓ₀|U₁|r₀⟩| = √2|⟨1,0|r₀⟩|, the state that overlaps |1,0⟩ most will yield the shortest search time. And second, a large value of c^(j) means that the Right side active eigenvector is concentrated closer to the hub, which means that states on the edge between 0 and 1 are more likely to be measured.
Step 3) Tune the Left side eigenvalues. The Left side λ₀ = −1 active eigenvector is |ℓ^(−1)⟩ = (1/√2)(|out⟩ − |in⟩), and this eigenstate is only possible when e^(iφ) = λ₀² = (−1)² = 1. So, by setting φ = 0, the eigenvalues on the Left become ±1. This does mean that, since both sides now share both 1 and −1 as eigenvalues, paired states can exist for both eigenspaces. However, by initializing with the −1 eigenstate, the +1 state is unimportant. We want to match the −1 state because it is the fastest (largest value of c^(j)).
Step 5) Iterate the time step operator, U. Use U to step time forward m = 1813 = ⌊(π/√3)√10⁶⌋ = ⌊(π/2c^(−1))√10⁶⌋ times. This process will cause the state to rotate from |ℓ^(−1)⟩ to |r^(−1)⟩, to within an additional error of O(0.1%) produced by the rounding of the floor function, and the fact that the paired eigenvectors, |V±⟩, are only very closely approximated by combinations of |ℓ^(−1)⟩ and |r^(−1)⟩.
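The iteration count is just m = ⌊(π/2c)√N⌋, with c^(−1) = √3/2 (consistent with the π/√3 prefactor quoted above):

```python
import math

# m = floor((pi / 2c) * sqrt(N)) with c = sqrt(3)/2 and N = 10^6.
c = math.sqrt(3) / 2
N = 10**6
m = math.floor((math.pi / (2 * c)) * math.sqrt(N))
print(m)   # 1813
```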
Finally, the Left and Right eigenvalues need not match exactly. According to theorem 5.5, the search will still work more than half of the time when the difference between the Left and Right eigenvalues, δ, satisfies δ < c^(−1)√(2/N) = √((3/4)(2/10⁶)) ≈ 0.001. So, for N = 10⁶, the complex phase of the Left eigenvalue should be in the range [π − 0.001, π + 0.001].
Notice that, aside from finding the value of m and estimating errors, N (or ε) was never considered at all. Indeed, the Left and Right sides are handled separately from beginning to end.

Proofs from section 3 (Algebraic Functions and the Behavior of Zeros)
Theorem. 3.1 If F(z) is globally analytic in an annulus around 0, and is m-valued, then F(z) can be expressed as a Puiseux series (a Laurent series with certain rational powers) of the form F(z) = Σ_(n=−∞)^∞ A_n z^(n/m). Moreover, the m different branches of F, F^(j), can be separated by an arbitrary branch cut through the annulus and expressed as F^(j)(z) = Σ_(n=−∞)^∞ A_n ω^(jn) z^(n/m), where ω is a primitive mth root of unity, ω = e^(i2π/m).
Proof Consider the annulus in the z-plane defined by D_z = {z : 0 < r < |z| < R}, and the mapping z = ζ^m. Define G(ζ) ≡ F(ζ^m) on the annulus D_ζ = {ζ : r^(1/m) < |ζ| < R^(1/m)}. In addition, G is single valued: defining any one of the branches of F to be the principal branch, F^(0), and taking the value G(x) = F^(0)(x^m), x ∈ R, we can then define G(ζ) to be the terminal value of any analytic continuation of G from x. While F(z) has m branches, G(ζ) has m corresponding 2π/m wedges. Continuation along a path, C_ζ, that starts at x ∈ R⁺ and traverses once around the circle |ζ| = x corresponds to traversing the circle C_z, defined by |z| = x^m, in D_z, m times. But since F(z) is m-valued, traversing |z| = x^m m times will return F to the principal branch. Thus, G(xe^(2πi)) = G(x), and more generally G(ζe^(2πi)) = G(ζ), for ζ ∈ D_ζ.
Since G(ζ) is analytic and single-valued in the annulus D_ζ, it admits a Laurent series: G(ζ) = Σ_(n=−∞)^∞ A_n ζ^n. Therefore, the globally analytic m-valued function F can be written F(z) = Σ_(n=−∞)^∞ A_n z^(n/m). Notice that if the initial branch, F^(0)(z) = Σ_(n=−∞)^∞ A_n z^(n/m), is analytically continued once around C_z, a 2π phase is added to z and we find the function taking values on the next branch (so a very natural ordering of the branches is here defined by subsequent loops around z = 0). We find that F^(1)(z) = Σ_(n=−∞)^∞ A_n ω^n z^(n/m), and in general F^(j)(z) = Σ_(n=−∞)^∞ A_n ω^(jn) z^(n/m).
Theorem. 3.2 If the zeros of P(z, 0) are distinct, then there exist d functions f^(1)(ε), ..., f^(d)(ε), analytic near ε = 0, such that P(f^(k)(ε), ε) = 0.
Proof Note that the f^(k) are indexed functions, and not necessarily branches of the same globally analytic function. While it is true that when P(z, ε) is not simultaneously reducible in both z and ε the zeros, as functions of ε, are all branches of the same globally analytic function, it isn't necessary to know that here.
Since the zeros of P(z, 0) are distinct, ∃δ such that |z − λ^(k)| ≤ δ contains no zeros other than λ^(k). Define C_k to be the loop defined by |z − λ^(k)| = δ. By the argument principle (1/2πi)∮_(C_k) (∂_z P(z, 0)/P(z, 0))dz = 1. Here we are assuming that the zeros of P(z, 0) are all of degree 1, and there is only one zero inside of C_k. The case of higher degree zeros is dealt with in the next theorem.
The zeros of P(z, ε) are continuous functions of the coefficients of P(z, ε) (so long as the leading coefficient a_d ≠ 0), and the coefficients of P(z, ε) are continuous functions of ε. By definition ∃σ > 0 such that when |ε| < σ, |f^(k)(ε) − λ^(k)| < δ. In other words, the zero will stay within C_k, so for small values of ε, (1/2πi)∮_(C_k) (∂_z P(z, ε)/P(z, ε))dz = 1. Notice that we've only used the definition of f^(k)(ε) as the zero of P(z, ε) corresponding to λ^(k), independent of its analytic properties.
This integral can be used to pick out the value of the zero, f^(k)(ε), by multiplying the integrand by z. By the residue calculus: f^(k)(ε) = (1/2πi)∮_(C_k) z(∂_z P(z, ε)/P(z, ε))dz. The important thing to notice here is that, since P(z, ε) is a polynomial in ε near ε = 0, f^(k)(ε) is analytic.
Repeating this process for each of the d simple zeros of P(z, 0) yields the d analytic functions f^(1)(ε), ..., f^(d)(ε) that are the zeros of P(z, ε).

Theorem. 3.3 If P(z, ε) is an irreducible polynomial in z and ε, then all of the double roots of P are isolated in the ε-plane. That is, if for some value ε₀, P(z, ε₀) has a double root, then there exists δ > 0 such that when 0 < |ε − ε₀| < δ, P(z, ε) does not have a double root in z.

The proof of this requires the introduction of a new object: the discriminant.
Definition The "discriminant of P", D[P], is a function of the coefficients of a polynomial P, and D[P] ≡ a_d^(2d−2) Π_(j>k)(λ^(j) − λ^(k))², where a_d is the leading coefficient of P, and {λ^(k)} are the zeros of P.
Clearly the discriminant is zero if and only if P has a repeated root, and this is the property that makes it so appealing.
Proof A known fact about the discriminant is that it is the resultant of P and P_z, which is in some sense like being the gcd(P, P_z). One of the methods of finding the discriminant is very much like Euclid's algorithm for finding the gcd of two numbers, but rather than finding combinations of the two numbers that produce the lowest non-zero value, the discriminant involves finding the combination of polynomials that produces the lowest degree non-zero polynomial. For example, if P(z) = Az² + Bz + C, then P_z(z) = 2Az + B, and D[P] = B² − 4AC = (−4A)P + (2Az + B)P_z, which is the well-known discriminant for quadratic equations.
The things to keep in mind here are: i) D = 0 if and only if P (z) has a repeated zero. ii) D is not a function of z. iii) D is a polynomial in the coefficients of P . iv) If the coefficients of P are polynomials of , then so is D.
Proof D(ε) inherits its analyticity from P(z, ε). Because D is analytic, if it has a non-isolated zero, then it must be identically zero. But if the discriminant of P is always zero, then P always has a double root, and must therefore be reducible into factors. Because the original assumption was that P(z, ε) is irreducible, D cannot be identically zero, and therefore any zeros of D(ε) are isolated. This means that changing ε splits a multiple zero into simple zeros in general, and if P(z, 0) has a repeated root, then within a small punctured disk about ε = 0, P(z, ε) has only simple roots.
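The worked quadratic example above can be checked numerically (the coefficients below are arbitrary):

```python
import numpy as np

# For P(z) = A z^2 + B z + C, compare the root-product definition
# D[P] = a_d^(2d-2) * prod_{j>k} (l_j - l_k)^2 with B^2 - 4AC.
# Both equal 49 for this (arbitrary) choice of coefficients.
A, B, C = 2.0, 3.0, -5.0
l1, l2 = np.roots([A, B, C])
D_from_roots = A**2 * (l1 - l2)**2

assert np.isclose(D_from_roots, B**2 - 4 * A * C)
print(D_from_roots)
```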
Theorem. 3.4 In the neighborhood of a zero of P(z, ε) of multiplicity s > 1, the zeros take the form f^(j)(ε) = Σ_(n=−∞)^∞ A_n ω^(jn) ε^(n/H), where H ≤ s. Specifically, the zeros are branches of one or more H_i-valued global analytic functions, with the given Puiseux series expansion, such that Σ_i H_i = s.
Proof Without loss of generality, assume that the zero of multiplicity s is found at ε = 0, so f^(1)(0) = f^(2)(0) = ··· = f^(s)(0). In the last theorem it was shown that within a small disk excluding ε = 0 these s functions are different. For some ε₀ within this punctured disk we can apply theorem 3.2 almost verbatim to show that f^(1)(ε), ..., f^(s)(ε) are analytic functions.
However, they may not necessarily be single-valued. If they are analytically continued in a loop around ε = 0 they may come back with a different value. However, by definition P(f^(k)(ε), ε) = 0, and there are only d possible such functions. As a result, if f^(k)(ε) does not return to its original value it must return as one of the other functions. Therefore, looping around ε = 0 permutes f^(1), ..., f^(s). If H loops bring f^(k) back to its original value, then f^(k) is a branch of an H-valued global analytic function. Clearly, H ≤ s since there are only s available functions to permute. The s different functions can be grouped according to which of the global analytic functions they're branches of, where Σ_i H_i = s. Since these functions are branches of a global analytic function in an annulus, by theorem 3.1 these functions must be of the form f^(j)(ε) = Σ_(n=−∞)^∞ A_n ω^(jn) ε^(n/H).
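The permutation of branches under a loop around ε = 0 can be watched numerically with the simplest multiple zero, P(z, ε) = z² − ε (a toy polynomial chosen for illustration, not one of the paper's characteristic polynomials): following one zero continuously around the loop ε = e^(iθ) lands on the other zero.

```python
import numpy as np

# Track one zero of P(z, eps) = z^2 - eps continuously as eps traverses
# the unit circle once. At each step pick the root nearest the previous
# position; after a full loop the branch has permuted: f -> -f.
steps = 1000
f = 1.0 + 0j                                  # zero of z^2 - eps at eps = 1
for theta in np.linspace(0, 2 * np.pi, steps):
    eps = np.exp(1j * theta)
    candidates = np.roots([1, 0, -eps])       # the two zeros +/- sqrt(eps)
    f = candidates[np.argmin(np.abs(candidates - f))]

assert np.isclose(f, -1.0)                    # returned as the other branch
print(f)
```

Two loops (H = 2) would return the branch to its starting value, exactly as the grouping Σ_i H_i = s describes.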

Proof That the Characteristic Polynomial is Affine in
For a hub vertex with reflection and transmission coefficients r and t, we know that unitarity implies |r|² + (N − 1)|t|² = 1 and 2Re(rt*) + (N − 2)|t|² = 0. The most commonly used solution, and the one assumed throughout this paper, is r = −1 + 2ε, t = 2ε. The generalized solution is handled in section 7.
In the automorphism graph we found that the reflection coefficients are R_L = r + (N − 2)t = 1 − 2ε and R_R = r = −1 + 2ε, for the Left and Right sides respectively, and the transmission coefficient between the two sides is T = t√(N − 1) = 2√(ε − ε²).
Theorem. 3.5 Assume that U is a time step matrix as described so far. That is, there is a Left and a Right side and these are connected only through a hub vertex with N edges, where the reflection and transmission coefficients are r = −1 + 2/N and t = 2/N. Then C(z, ε) = |U − zI| is an affine polynomial in ε = 1/N, and can be written C(z, ε) = C₀(z) + εf(z).
Proof First, some cumbersome notation. Define |M| to be the determinant of the matrix M, |M| (i,j) to be the determinant with the (i, j) term replaced with a zero, and |M| <i,j> to be the determinant of M with the ith column and jth row removed. Note that if M is an n × n matrix, then |M| (i,j) is also n × n, and |M| <i,j> is (n − 1) × (n − 1).
In what follows we'll make use of the fact that we can remove an element from the determinant of M, but must include a determinant of a minor matrix. For example, expanding a 3 × 3 determinant along an entry g at position (1, 3) can be succinctly written |M| = |M|_(1,3) + g|M|_<1,3>. Notice that |M|_(1,3) is 3 × 3, while |M|_<1,3> is 2 × 2.
If we list all of the Left side states first, then U has the block form U = [U_L, T_(ij); T_(kl), U_R], where U_L and U_R are the restrictions of U to the Left and Right sides, and T_(ij) and T_(kl) are all zero except for a single T = 2√(ε − ε²) at the indicated coordinate. Note that if the T's are at the coordinates (i, j) and (k, l), then the reflection coefficient for the Left side, R_L, can be found at (k, j), and similarly R_R can be found at (i, l). It is now straightforward to remove the T, R_L, and R_R entries one at a time using the minor-determinant identity above. (There's a slight abuse of notation in this step: once the jth row has been removed, the coordinate of T_(kl) is not (k, l), but is instead (k, l − 1).) The terms in which only one row or only one column has been removed are equal to zero: removing a single row from the square block U_R − zI leaves its columns linearly dependent, so the matrix is degenerate and the determinant is zero, and a similar argument holds for a single removed column. Both of the non-zero blocks of the remaining term are square matrices, and are therefore not necessarily zero. This is enough to show that C(z, ε) is a polynomial in both z and ε. However, this can be taken a step further. Repeating the same trick, and noting that matrices missing the same rows and columns have equal determinants, we can collect C(z, ε) into terms proportional to 1, R_R, R_L, and R_R R_L − T². And finally, using the fact that R_R = −1 + 2ε, R_L = 1 − 2ε, and T = 2√(ε − ε²), we find R_R R_L − T² = −1. It's worth noting that R_R R_L − T² = −1 is not a coincidence dependent on how the solutions to the unitarity condition are chosen, but is in fact the unitarity condition itself. Thus, if we had used a solution different from r = −1 + 2ε, t = 2ε, we would find that R_R R_L − T² is always a constant.
The long manipulation in this proof is essentially just a careful removal of every ε-dependent element of U, so the explicit ε in the above equation is in fact the only remaining ε. Clearly, the characteristic polynomial is a polynomial in z and an affine polynomial in ε. With C_0(z) ≡ |U_0 − zI| we can now write
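In symbols, a reconstruction consistent with the statements above (C_1(z), collecting the ε-linear terms produced by the minor expansion, is a label introduced here):

```latex
C(z,\varepsilon) \;=\; |U - zI| \;=\; C_0(z) + \varepsilon\, C_1(z),
\qquad C_0(z) \equiv |U_0 - zI| .
```

Setting ε = 0 recovers C(z, 0) = C_0(z), the characteristic polynomial of the unperturbed operator U_0.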

Proofs from section 4 (Pairing)
Proof The "resolvent" is the matrix defined as R(ζ, ε) = (U − ζI)^(−1), which exists whenever ζ is not an eigenvalue of U. Since U_0 is a known unitary matrix for which ζ_0 is not an eigenvalue, we know that R(ζ_0, 0) is well-defined. The entries of U can be expanded as power series in √ε, and since the entries of U are continuous functions of ε, ||U − U_0|| can be made arbitrarily small. Therefore, for small values of ε and (ζ − ζ_0) we find that ||(ζ − ζ_0)I + (U − U_0)|| < ||R(ζ_0, 0)||^(−1), and therefore R(ζ, ε) can be written as a double power series in ζ and √ε. Interesting things happen when ζ = λ^(k). The projection operator onto the λ^(k)-eigenspace can be expressed as P^(k) = −(1/2πi) ∮ R(ζ, ε) dζ, where the integral is taken over a curve that loops once around λ^(k), and no other eigenvalues. This can be shown by applying P^(k) to the eigenvectors |V^(j)⟩.
The important thing is that since R(ζ, ε) is expressible as a power series in √ε, so is P^(k).
Away from ε = 0 the eigenvalues are distinct, and thus the eigenprojections are 1-dimensional and can be written P^(k) = |V^(k)⟩⟨V^(k)|. So if the entries of P^(k) take the form Σ_{n=0}^∞ c_n (√ε)^n, then so do the entries of the corresponding eigenvector.
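The contour-integral formula for the eigenprojection is easy to test numerically. The 3 × 3 unitary below is a hypothetical example, and the contour is a circle of radius half the spectral gap around the chosen eigenvalue:

```python
import numpy as np

# A small unitary with three distinct eigenvalues: e^{±0.7i} and i.
theta = 0.7
U = np.array([[np.cos(theta), -np.sin(theta), 0],
              [np.sin(theta),  np.cos(theta), 0],
              [0,              0,             1j]])

evals, evecs = np.linalg.eig(U)
k = 0
lam = evals[k]
gap = min(abs(lam - evals[j]) for j in range(len(evals)) if j != k)

# P = -(1/2*pi*i) * contour integral of (U - zeta*I)^(-1) d(zeta),
# discretized with the trapezoid rule on a circle around lam.
n = 400
P = np.zeros_like(U, dtype=complex)
for t in np.linspace(0, 2 * np.pi, n, endpoint=False):
    zeta = lam + (gap / 2) * np.exp(1j * t)
    dzeta = 1j * (gap / 2) * np.exp(1j * t) * (2 * np.pi / n)
    P += (-1 / (2j * np.pi)) * np.linalg.inv(U - zeta * np.eye(3)) * dzeta

# For a simple eigenvalue, P equals the rank-1 projector |V><V|.
v = evecs[:, [k]]
assert np.allclose(P, v @ v.conj().T, atol=1e-6)
```

The trapezoid rule converges exponentially fast here because the integrand is analytic in an annulus around the contour, so a few hundred points already reach near machine precision.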

Theorem. 4.8
Assume that the Right side λ_0-eigenspace of U_0 is D-dimensional. 1) If the λ_0-eigenspace of U_0 is bound in G, then the λ_0-eigenspace of U is D-dimensional and all of the associated eigenvectors are constant in ε. This is case i) of the Three Case Theorem.
2) If the λ_0-eigenspace of U_0 is in contact with the hub vertex, then the Right sided λ_0-eigenspace of U is (D−1)-dimensional and the D−1 associated eigenvectors are constant in ε and bound in G. This leaves one eigenvector which is non-constant in ε, and is in contact with the hub vertex. This is either case ii) or case iii) of the Three Case Theorem.
Proof The first result is trivial. If an eigenvector is bound in G, then varying ε (which only affects reflection and transmission across the hub vertex) can't have any impact on it. So, for eigenvectors bound in G, U|V⟩ = U_0|V⟩ = λ_0|V⟩.
For the second result we assume that the λ_0-eigenspace is in contact with the hub vertex, and we'll use the setup described in section 2. Define |V⟩ = α|in⟩ + β|out⟩ + γ|1, 0⟩ + δ|0, 1⟩ + |G⟩, where |G⟩ = P_G|V⟩ and U|V⟩ = λ_0|V⟩. So |V⟩ is in contact with the hub vertex, is an eigenvector of U, and has a constant eigenvalue. It may seem too restrictive to assume that the Left side has a particular form, but we'll find that it makes no difference.
The rough idea of the proof is to show that if |V_0⟩ is a hub-adjacent λ_0 eigenvector of U_0, then |V⟩ cannot be a λ_0 eigenvector of U. Instead, the eigenvalue must be a non-constant function of ε. Since a λ_0 eigenvector is lost when we change from the ε = 0 case to the ε ≠ 0 case, we can say that if the λ_0-eigenspace of U_0 is D-dimensional, then the λ_0-eigenspace of U is (D−1)-dimensional.
Assume that there is no Left side λ_0-eigenspace. By the Three Case Theorem and theorem 4.7, the only possibilities are that all of the eigenvectors in the λ_0 family are constant, or there is one unique non-constant eigenvector. Since the λ_0-eigenspace of U_0 is entirely Right sided, |V_0⟩ takes the form |V_0⟩ = γ_0|1, 0⟩ + δ_0|0, 1⟩ + |G_0⟩. Two innocent-looking results follow; keep both in mind for a moment. Now extending to the ε ≠ 0 case, the eigenvalue equation is difficult to solve directly, but fortunately the last of the four resulting relations can be used to eliminate a variable. Notice that U[δ|0, 1⟩ + |G⟩] = U_0[δ|0, 1⟩ + |G⟩], since both |0, 1⟩ and |G⟩ are unaffected by the hub vertex. Beginning with the last relation from both the ε = 0 and ε ≠ 0 cases, we are left with four straightforward equations in the four variables α, β, γ, δ, each carrying a factor of (λ_0² + e^{iφ}). Unless φ was specifically chosen so that e^{iφ} + λ_0² = 0, it follows that α = β = γ = δ = 0, which contradicts the statement that |V_0⟩ is in contact with the hub vertex. What has just been shown is that if |V_0⟩ is a λ_0 eigenvector of U_0 in contact with the hub vertex, then |V⟩ either has a different eigenvalue or e^{iφ} + λ_0² = 0.

First assume that λ_0² + e^{iφ} ≠ 0 and λ_0² − e^{iφ} ≠ 0 (there is no Left side λ_0-eigenspace). If the λ_0-eigenspace of U_0 is in contact with the hub vertex, then by moving from the ε = 0 case to the ε ≠ 0 case at least one eigenvector is lost. Since λ_0² − e^{iφ} ≠ 0, by thm. 4.7 there can be no pairing. The Three Case Theorem then implies that there is at most one non-constant eigenvalue in the λ_0 family. So, only one eigenvector is lost, and if the Right side λ_0-eigenspace of U_0 is D-dimensional, then the Right side λ_0-eigenspace of U must be (D−1)-dimensional.

Assume that |P⟩ and |Q⟩ are eigenvectors with eigenvalues in the λ_0 family, and that both are in contact with the hub vertex. They may be paired, or one or both may be constant.
|P⟩ and |Q⟩ are in the active subspace, and |P_0⟩ and |Q_0⟩ are in the λ_0-eigenspace of U_0, so it follows that span{|P_0⟩, |Q_0⟩} = span{|ℓ_0⟩, |r_0⟩}. Define the eigenvalues of |P⟩ and |Q⟩ to be λ_0 e^{ip(ε)} and λ_0 e^{iq(ε)}, where p(0) = q(0) = 0.

The Fundamental Pairing Theorem
Because the spans are equal and two-dimensional, there exists a transformation between the orthonormal bases {|ℓ_0⟩, |r_0⟩} and {|P_0⟩, |Q_0⟩}, which we can write as

(|ℓ_0⟩, |r_0⟩)ᵀ = [ e^{iγ} cos(ω)  e^{iδ} sin(ω) ; e^{i(γ+η)} sin(ω)  −e^{i(δ+η)} cos(ω) ] (|P_0⟩, |Q_0⟩)ᵀ.

By carefully choosing the relative phases between all four vectors, we can assume without loss of generality that

(|ℓ_0⟩, |r_0⟩)ᵀ = [ cos(ω)  sin(ω) ; −sin(ω)  cos(ω) ] (|P_0⟩, |Q_0⟩)ᵀ,

and use this to define |ℓ⟩ and |r⟩ as the same combinations of the ε-dependent eigenvectors: |ℓ⟩ ≡ cos(ω)|P⟩ + sin(ω)|Q⟩ and |r⟩ ≡ −sin(ω)|P⟩ + cos(ω)|Q⟩. Notice that while |ℓ_0⟩ and |r_0⟩ are eigenvectors of U_0, |ℓ⟩ and |r⟩ are not eigenvectors of U; they're merely defined in terms of the eigenvectors |P⟩ and |Q⟩. Since eigenvectors can be expressed as power series in √ε, we can write |ℓ⟩ = |ℓ_0⟩ + √ε|ℓ_1⟩ + O(ε), and similarly for |r⟩. We can calculate |U_11 − U_22| and |U_12| directly; in the last step here we used the fact that |ℓ_0⟩ and |r_0⟩ are one-sided, and since U_1 is only involved in transmitting between the Left and Right sides, ⟨ℓ_0|U_1|ℓ_0⟩ = ⟨r_0|U_1|r_0⟩ = 0.
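The general change-of-basis matrix quoted above is unitary for every choice of the angles γ, δ, η, ω, which can be spot-checked numerically (the random sampling below is purely illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
for _ in range(100):
    gamma, delta, eta, omega = rng.uniform(0, 2 * np.pi, size=4)
    # General transformation between two orthonormal 2D bases (up to an
    # overall phase): rows expand |l_0>, |r_0> in terms of |P_0>, |Q_0>.
    W = np.array([
        [np.exp(1j * gamma) * np.cos(omega),
         np.exp(1j * delta) * np.sin(omega)],
        [np.exp(1j * (gamma + eta)) * np.sin(omega),
         -np.exp(1j * (delta + eta)) * np.cos(omega)],
    ])
    assert np.allclose(W @ W.conj().T, np.eye(2))
```

Setting γ = δ = 0 and η = π recovers the real rotation matrix used in the without-loss-of-generality step.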
The reverse implication, that the λ_0-eigenspace of U_0 is adjacent to both sides of the hub vertex if there exist paired eigenvectors, is a direct result of theorem 4.7.
Proof The (arbitrary) phase of each of the eigenvectors can be carefully chosen so that each of these amplitudes is real, and so that 0 ≤ ω ≤ π/2.
Theorem. 5.5 There is a better than 50% chance of a successful search of the N edges of the hub vertex using the states |ℓ_0⟩ and |r_0⟩ after m = π√N/(2c) iterations of the time step operator, whenever δ < c²/N, where δ is the difference in phase between the Left and Right eigenvalues, and c = |⟨r_0|U|ℓ_0⟩|.
Proof First, we find an expression for P(m, t). Notice that this isn't a function of ε alone; it's a function of (1 + t)ε. This raises issues, because we may choose the wrong value of m. m = π/(2c√ε) is the value that would be chosen if the graph were assumed to be "correctly tuned", with δ = 0. Knowing only that the "error" between the eigenvalues is small, this value of m is the natural choice. The corrected value, with ε replaced by (1 + t)ε, is the value of m that should be chosen if δ is known and is being compensated for. That is, if the exact difference between the eigenvalues is known, then the number of iterations can be adjusted to give a slightly better chance of success.
Taking into account the difference between the eigenvalues gives one expression for the success probability; not taking the difference δ into account, but instead assuming that δ = 0, gives another, expressed in terms of (1 + t)ε over the interval 0 ≤ t ≤ 1/2. This condition can be rewritten to show that P(π√N/(2c)) > 1/2 whenever δ < c²/N.