1 Introduction

Despite widespread knowledge of how a Riemann–Hilbert formulation allows us to describe the solutions of the Painlevé equations, the corresponding description remains incomplete for discrete Painlevé equations. In this paper, we provide such a formulation for an important equation known as the q-difference sixth Painlevé equation and show that (under certain conditions) the corresponding Riemann–Hilbert problem is solvable, the resulting monodromy mapping is bijective, and the monodromy manifold is an algebraic surface given by an explicit equation.

Assuming \(q\in {\mathbb {C}}\), \(0<|q|<1\), and given nonzero parameters \(\kappa =(\kappa _0,\kappa _t,\kappa _1,\kappa _\infty )\in {\mathbb {C}}^4\), the system known as the q-difference sixth Painlevé equation is

$$\begin{aligned} q\text {P}_{\text {VI}}:\ {\left\{ \begin{array}{ll} f{\overline{f}}&{}=\dfrac{({\overline{g}}-\kappa _0\,t)({\overline{g}}-\kappa _0^{-1}t)}{({\overline{g}}-\kappa _\infty )({\overline{g}}-q^{-1}\kappa _\infty ^{-1})},\\ g{\overline{g}}&{}=\dfrac{(f-\kappa _t\,t)(f-\kappa _t^{-1}t)}{q(f-\kappa _1)(f-\kappa _1^{-1})}, \end{array}\right. } \end{aligned}$$
(1.1)

where \(f,g:T\rightarrow \mathbb{C}\mathbb{P}^1\) are complex functions defined on a domain T invariant under multiplication by q and we have used the abbreviated notation \(f=f(t)\), \(g=g(t)\), \({\overline{f}}=f(qt)\), \({\overline{g}}=g(qt)\), for \(t\in T\). We will refer to Eq. (1.1) by the abbreviation \(q\text {P}_{\text {VI}}\).
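It may help to see the system (1.1) as an explicit birational recursion. The following minimal Python sketch (ours, not part of the paper) iterates (1.1) along the q-spiral \(t_0, qt_0, q^2t_0,\ldots \), solving the second equation for \({\overline{g}}\) and then the first for \({\overline{f}}\); all parameter values and initial data are arbitrary illustrations, and no care is taken near the base points (1.3) discussed below.

```python
q = 0.4
k0, kt, k1, kinf = 1.3, 0.8, 1.7, 0.6   # kappa_0, kappa_t, kappa_1, kappa_inf
t, f, g = 0.5, 0.9, 1.2                 # t_0 and an (arbitrary) initial value (f, g)

for _ in range(5):
    # second equation of (1.1), solved for gbar = g(qt)
    gbar = (f - kt*t) * (f - t/kt) / (q * g * (f - k1) * (f - 1/k1))
    # first equation of (1.1), solved for fbar = f(qt)
    fbar = (gbar - k0*t) * (gbar - t/k0) / (f * (gbar - kinf) * (gbar - 1/(q*kinf)))
    f, g, t = fbar, gbar, q*t
    print(t, f, g)
```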

\(q\text {P}_{\text {VI}}\) was first derived by Jimbo and Sakai [20] as the compatibility condition of a pair of linear q-difference systems. They showed that this formulation could be interpreted as a q-difference version of isomonodromic deformation, in close parallel to the role played by the classical sixth Painlevé equation as the isomonodromic condition for a rank-two Fuchsian system with four regular singular points at 0, 1, \(\infty \), t, where t is allowed to move in \({\mathbb {C}}\setminus \{0,1\}\) [12, 21].

The sixth Painlevé equation (\(\text {P}_{\text {VI}}\)) plays an important role in many settings in mathematics and physics. We mention the construction of self-dual Einstein metrics in general relativity [33], classification of 2D-topological field theories [7], mirror symmetry, and quantum cohomology [24] as noteworthy examples.

Letting \(q\rightarrow 1\) in \(q\text {P}_{\text {VI}}\), with \(\kappa _j=q^{k_j}\) for \(j=0,t,1,\infty \), under the assumption that \(f\rightarrow u\) and \(g\rightarrow (u-t)/(u-1)\), the system reduces to \(\text {P}_{\text {VI}}\):

$$\begin{aligned} u_{tt}=&\left( \frac{1}{u}+\frac{1}{u-1}+\frac{1}{u-t}\right) \frac{u_t^2}{2}-\left( \frac{1}{t}+\frac{1}{t-1}+\frac{1}{u-t}\right) u_t\\&+\frac{u(u-1)(u-t)}{t^2(t-1)^2}\left( \alpha +\frac{\beta t}{u^2}+\frac{\gamma (t-1)}{(u-1)^2}+\frac{\delta t(t-1)}{(u-t)^2}\right) , \end{aligned}$$

where

$$\begin{aligned} \alpha =\frac{(2 k_\infty +1)^2}{2},\quad \beta =-2 k_0^2,\quad \gamma =2k_1^2,\quad \delta =\frac{1-4 k_t^2}{2}. \end{aligned}$$

Due to its relation to \(\text {P}_{\text {VI}}\), the q-difference equation \(q\text {P}_{\text {VI}}\) has drawn increasing interest in recent times. Mano [25] derived the generic leading order asymptotics of solutions near \(t=0\) and \(t=\infty \) and gave an implicit solution to the corresponding nonlinear connection problem. Jimbo et al. [22] extended Mano’s asymptotic result near \(t=0\) to an explicit asymptotic expansion beyond all orders for the generic solution. They obtained this asymptotic representation through an interesting connection of \(q\text {P}_{\text {VI}}\) with conformal field theory, analogous to the one for \(\text {P}_{\text {VI}}\) established by Gamayun et al. [13].

In this paper, we study \(q\text {P}_{\text {VI}}\) via the Jimbo–Sakai linear problem [20]. Using Birkhoff’s theory [1], we define an associated Riemann–Hilbert problem (RHP), which captures the general solution of \(q\text {P}_{\text {VI}}\). The jump matrices of this RHP, posed on a single closed contour, form a corresponding monodromy manifold, which is a focal point of this paper.

Recently, this monodromy manifold was the object of an extensive study by Ohyama et al. [27], who showed that such a manifold forms an algebraic surface. Furthermore, they conjectured, see [27, Conjecture 7.10], that the algebraic surface is smooth, under additional conditions. In this paper, we prove a stronger version of this conjecture, see Theorem 2.17 and Remark 2.18.

Consider the general class of solutions (f, g) of \(q\text {P}_{\text {VI}}\) defined on a domain T given by a discrete q-spiral, i.e., \(T=q^{\mathbb {Z}}t_0\), for some \(t_0\in {\mathbb {C}}^*\). The deformation of the Jimbo–Sakai linear problem (see §3.2) yields an auxiliary equation associated with \(q\text {P}_{\text {VI}}\),

$$\begin{aligned} \frac{{\overline{w}}}{w}=\kappa _\infty \frac{q \kappa _\infty {\overline{g}}-1}{{\overline{g}}-\kappa _\infty }. \end{aligned}$$
(1.2)

We refer to (f, g) as a solution of \(q\text {P}_{\text {VI}}(\kappa ,t_0)\) and call the triplet (f, g, w) a solution of \(q\text {P}_{\text {VI}}^{\text {aux}}(\kappa ,t_0)\).

Starting with an initial value of (f, g) in \({\mathbb {C}}^*\times {\mathbb {C}}^*\) and iterating in t, \(q\text {P}_{\text {VI}}\) can become apparently singular when (f, g) takes the value of one of the following eight base points,

$$\begin{aligned} \begin{aligned}&b_1=(0,q^{-1}\kappa _0^{+1}t),{} & {} b_3=(\kappa _t^{+1}t,0),{} & {} b_5=(\kappa _1^{+1},\infty ),{} & {} b_7=(\infty ,\kappa _\infty ^{+ 1}),\\&b_2=(0,q^{-1}\kappa _0^{-1}t),{} & {} b_4=(\kappa _t^{-1}t,0),{} & {} b_6=(\kappa _1^{-1},\infty ),{} & {} b_8=(\infty ,\kappa _\infty ^{-1}q^{-1}). \end{aligned} \end{aligned}$$
(1.3)

Each of these can be resolved through a blow-up, so that the iteration is once again well-defined [31]. There are, however, formal solutions of Eq. (1.1) that never take a value in \({\mathbb {C}}^*\times {\mathbb {C}}^*\). We exclude such solutions from our consideration.
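The indeterminacy at a base point, and its resolution by the blow-up coordinate, can be seen numerically. The hypothetical sketch below (ours) approaches \(b_3=(\kappa _t t,0)\) along \(f=\kappa _t t+F\,\varepsilon \), \(g=\varepsilon \); the next iterate \({\overline{g}}\) then converges to a limit that depends on \(F=(f-\kappa _t t)/g\), which is exactly the coordinate introduced by blowing up \(b_3\).

```python
q, k0, kt, k1, kinf, t = 0.4, 1.3, 0.8, 1.7, 0.6, 0.5   # arbitrary test values

def g_next(f, g):
    # second equation of (1.1): g * gbar = (f - kt t)(f - t/kt) / (q (f - k1)(f - 1/k1))
    return (f - kt*t) * (f - t/kt) / (q * g * (f - k1) * (f - 1/k1))

for F in (1.0, 2.0, -3.0):
    print([g_next(kt*t + F*eps, eps) for eps in (1e-4, 1e-6, 1e-8)])
# each row converges to a different limit as eps -> 0, so the value (f, g) = b_3 alone
# does not determine the next iterate; the blow-up coordinate F does.
```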

1.1 Main results

The main results of this paper are given by Theorems 2.12, 2.15, 2.17 and 2.20 in Sect. 2. Throughout the paper, it is assumed that the parameters \(\kappa \) and \(t_0\) satisfy the non-resonance conditions,

$$\begin{aligned} \kappa _0^2,\kappa _t^2,\kappa _1^2,\kappa _\infty ^2\notin q^{\mathbb {Z}},\qquad (\kappa _t\kappa _1)^{\pm 1}, (\kappa _t/\kappa _1)^{\pm 1}\notin t_0q^{\mathbb {Z}}. \end{aligned}$$
(1.4)

As in Ohyama et al. [27], the non-splitting conditions

$$\begin{aligned}&\kappa _0^{\epsilon _0}\kappa _t^{\epsilon _t} \kappa _1^{\epsilon _1} \kappa _\infty ^{\epsilon _\infty }\notin q^{\mathbb {Z}}, \end{aligned}$$
(1.5a)
$$\begin{aligned}&\kappa _0^{\epsilon _0} \kappa _\infty ^{\epsilon _\infty }\notin t_0q^{\mathbb {Z}}, \end{aligned}$$
(1.5b)

where \(\epsilon _j\in \{\pm 1\}\), \(j=0,t,1,\infty \), also play an important role. The monodromy manifold contains reducible monodromy when one or more of these conditions are violated – see Lemma 2.10.

The RHP corresponding to \(q\text {P}_{\text {VI}}\) is given by Definition 2.7. Our first main result, Theorem 2.12, shows that the RHP with irreducible monodromy is always solvable. This has important ramifications for the mapping that sends solutions of \(q\text {P}_{\text {VI}}\) to points on the monodromy manifold, which we will refer to as the monodromy mapping. In particular, Corollary 2.13 shows that the monodromy mapping is bijective when the non-splitting conditions are satisfied.

The RHP may be solvable in some cases of reducible monodromy. In Sect. 4.2, we show that in such cases the RHP can be solved explicitly in terms of certain orthogonal polynomials, yielding special function solutions of \(q\text {P}_{\text {VI}}\).

Our second main result, Theorem 2.15, constructs an embedding of the monodromy manifold into \((\mathbb{C}\mathbb{P}^1)^4/{\mathbb {C}}^*\), where the quotient is taken with respect to scalar multiplication. The image of this embedding is described as the zero set of a polynomial, given explicitly in Definition 2.14, minus a curve.

This embedding allows us to study algebro-geometric properties of the monodromy manifold. Our third main result, Theorem 2.17, focuses on the singularities of the monodromy manifold and proves that it is smooth if and only if it excludes reducible monodromy, i.e., if and only if the non-splitting conditions hold true.

Finally, our fourth main result, Theorem 2.20, identifies the monodromy manifold with an explicit affine algebraic surface when the non-splitting conditions are satisfied.

1.2 Notation

Here, we briefly describe the notation used in this paper. The symbol \(\sigma _3\) is the well-known Pauli matrix \(\sigma _3={\text {diag}}(1,-1)\). The q-Pochhammer symbol is the (convergent) product

$$\begin{aligned} (z;q)_\infty =\prod _{k=0}^{\infty }{(1-q^kz)}\qquad (z\in {\mathbb {C}}). \end{aligned}$$

Note that the entire function \((z;q)_\infty \) satisfies

$$\begin{aligned} (qz;q)_\infty =\frac{1}{1-z}(z;q)_\infty , \end{aligned}$$

with \((0;q)_\infty =1\) and, moreover, possesses simple zeros at \(q^{-{\mathbb {N}}}\). The \( q \)-theta function

$$\begin{aligned} \theta _q(z)=(z;q)_\infty (q/z;q)_\infty \quad (z\in {\mathbb {C}}^*),\quad {\mathbb {C}}^*:={\mathbb {C}}\setminus \{0\}, \end{aligned}$$
(1.6)

is analytic on \({\mathbb {C}}^*\), with essential singularities at \(z=0, \infty \), and has simple zeros on the q-spiral \(q^{\mathbb {Z}}\). It satisfies

$$\begin{aligned} \theta _q(qz)=-\frac{1}{z}\theta _q(z)=\theta _q(1/z). \end{aligned}$$
(1.7)
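The q-Pochhammer relation above and the functional equation (1.7) are easy to check numerically with truncated products; the following sketch (ours, with arbitrary test values) does so.

```python
import numpy as np

def qpoch(z, q, n=200):
    # truncated q-Pochhammer symbol (z; q)_infinity = prod_{k >= 0} (1 - q^k z)
    return np.prod(1 - q**np.arange(n) * z)

def theta(z, q):
    # q-theta function (1.6)
    return qpoch(z, q) * qpoch(q / z, q)

q = 0.3 + 0.1j                  # arbitrary test value with |q| < 1
z = 1.7 - 0.4j
print(abs(qpoch(q*z, q) - qpoch(z, q) / (1 - z)))   # (qz; q)_inf = (z; q)_inf / (1 - z)
print(abs(theta(q*z, q) + theta(z, q) / z))         # theta_q(qz) = -theta_q(z)/z, cf. (1.7)
print(abs(theta(q*z, q) - theta(1/z, q)))           # theta_q(qz) = theta_q(1/z),  cf. (1.7)
```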

For \(n\in {\mathbb {N}}^*\), we use the common abbreviation for repeated products of these functions

$$\begin{aligned} \theta _q(z_1,\ldots ,z_n)&=\theta _q(z_1)\cdot \ldots \cdot \theta _q(z_n),\\ (z_1,\ldots ,z_n;q)_\infty&=(z_1;q)_\infty \cdot \ldots \cdot (z_n;q)_\infty . \end{aligned}$$

We will refer to the complex projective space \({\mathbb {C}}{\mathbb {P}}^1\) as \({\mathbb {P}}^1\) and, for positive integer k, denote the k-fold direct product \({\mathbb {P}}^1\times \ldots \times {\mathbb {P}}^1\) by \(({\mathbb {P}}^1)^k\). (We remind the reader that \({\mathbb {P}}^1\times {\mathbb {P}}^1\) is not the same space as \({\mathbb {P}}^2\).)

1.3 Outline of the paper

In Sect. 2, we give the precise statements of the main results of the paper. Section 3 is devoted to the Jimbo–Sakai linear system. There, we renormalize the linear system of [20] and describe the outcomes of Birkhoff’s classical theory [1] for this system. In Sect. 4, we study the solvability of RHP I, defined in Definition 2.7, and prove Theorem 2.12. Section 5 concerns the monodromy manifold; the proofs of Theorems 2.15, 2.17 and 2.20 are given there. We end the paper with concluding remarks in Sect. 6.

2 Detailed Statement of Results

In order to state our main results, we recall the Jimbo–Sakai linear problem for \(q\text {P}_{\text {VI}}\) and define the corresponding monodromy manifold and mapping in Sect. 2.1. In Sect. 2.2, we formulate the associated RHP via Birkhoff’s theory. In Sect. 2.3 we state our first main result, Theorem 2.12. Then, in Sect. 2.4, we state our main results on the monodromy manifold, that is, Theorems 2.15, 2.17 and 2.20.

2.1 The Jimbo–Sakai linear system

Suppose \(\kappa =(\kappa _0,\kappa _t,\kappa _1,\kappa _\infty )\in \mathbb C^4\), with all entries nonzero, is given and \(t\in T\) lies on a discrete q-spiral \(T=q^{\mathbb {Z}}t_0\). Consider the linear system

$$\begin{aligned} Y(qz)&=A(z,t)Y(z),\end{aligned}$$
(2.1)
$$\begin{aligned} A(z,t)&=A_0(t)+z A_1(t)+z^2 A_2, \end{aligned}$$
(2.2)

where \(A(z,t)\) is a \(2\times 2\) matrix polynomial with determinant given by

$$\begin{aligned} |A(z,t)|=(z-\kappa _t^{+1}t)(z-\kappa _t^{-1}t)(z-\kappa _1^{+1})(z-\kappa _1^{-1}), \end{aligned}$$
(2.3)

and assume that

$$\begin{aligned} A_0(t)=H(t)\begin{pmatrix} \kappa _0^{+1}t &{} 0\\ 0 &{}\quad \kappa _0^{-1}t \end{pmatrix}H(t)^{-1},\quad A_2=\begin{pmatrix} \kappa _\infty ^{+1} &{} 0\\ 0 &{}\quad \kappa _\infty ^{-1} \end{pmatrix}. \end{aligned}$$
(2.4)

for an \(H=H(t)\in GL_2({\mathbb {C}})\). This is the Jimbo–Sakai linear problem [20], which we have scaled to remove redundant parameters (see Sect. 3.1 for details). Throughout this paper we assume that the parameters \(\kappa \) and \(t_0\) satisfy the non-resonance conditions (1.4), which ensure that the linear problem is fully non-resonant (see [23, Definition 1.1]).

By Carmichael [2], the linear system (2.1) has solutions \(Y_0(z,t)\) and \(Y_\infty (z,t)\) respectively given by convergent series expansions around \(z=0\) and \(z=\infty \) of the following form,

$$\begin{aligned} Y_0(z,t)&=z^{\log _q(t)}\Psi _0(z,t)z^{k_0\sigma _3},&\Psi _0(z,t)&=H(t)+\sum _{n=1}^\infty z^n M_n(t),\end{aligned}$$
(2.5a)
$$\begin{aligned} Y_\infty (z,t)&=z^{\log _q(z)-1}\Psi _\infty (z,t) z^{k_\infty \sigma _3},&\Psi _\infty (z,t)&=I+\sum _{n=1}^\infty z^{-n} N_n(t), \end{aligned}$$
(2.5b)

where \(q^{k_j}=\kappa _j\) for \(j=0,\infty \). The matrix functions \(\Psi _\infty (z,t)\) and \(\Psi _0(z,t)^{-1}\) extend to single-valued analytic functions in z on \({\mathbb {P}}^1\setminus \{0\}\) and \({\mathbb {C}}\) respectively. Furthermore, their determinants are explicitly given by

$$\begin{aligned}&|\Psi _\infty (z,t)|=\left( \kappa _t^{+1}\frac{qt}{z},\kappa _t^{-1}\frac{qt}{z},\kappa _1^{+1}\frac{q}{z},\kappa _1^{-1}\frac{q}{z};q\right) _\infty ,\end{aligned}$$
(2.6a)
$$\begin{aligned}&|\Psi _0(z,t)|^{-1}=|H|^{-1}\left( \kappa _t^{+1}\frac{z}{t},\kappa _t^{-1}\frac{z}{t},\kappa _1^{+1}z,\kappa _1^{-1}z;q\right) _\infty . \end{aligned}$$
(2.6b)

A central object of study in this paper is the connection matrix

$$\begin{aligned} C(z,t):=\Psi _0(z,t)^{-1}\Psi _\infty (z,t). \end{aligned}$$

This matrix is single-valued in z on \({\mathbb {C}}^*\) and is related to Birkhoff’s connection matrix

$$\begin{aligned} P(z,t):=Y_0(z,t)^{-1}Y_\infty (z,t), \end{aligned}$$

by

$$\begin{aligned} P(z,t)=z^{\log _q(z/qt)} z^{-k_0\sigma _3} C(z,t)z^{k_\infty \sigma _3}. \end{aligned}$$

For our purposes, it is more convenient to work with \(C(z,t)\), rather than \(P(z,t)\), due to its single-valuedness. We will also refer to the connection matrix \(C(z,t)\) as the monodromy of the linear system (2.1).

For any fixed t, \(C(z,t)\) has the following analytic characterisation in z.

  (1)

    It is a single-valued analytic function in \(z\in {\mathbb {C}}^*\).

  (2)

    It satisfies the q-difference equation

    $$\begin{aligned} C(qz,t)=\frac{t}{z^2}\kappa _0^{\sigma _3}C(z,t)\kappa _\infty ^{-\sigma _3}. \end{aligned}$$
  (3)

    Its determinant is given by

    $$\begin{aligned} |C(z,t)|=c\, \theta _q\left( \kappa _t^{+1}\frac{z}{t},\kappa _t^{-1}\frac{z}{t},\kappa _1^{+1}z,\kappa _1^{-1}z\right) , \end{aligned}$$

    for some \(c\in {\mathbb {C}}^*\).

We correspondingly make the following definition.

Definition 2.1

We denote by \({\mathfrak {C}}(\kappa ,t)\), for any fixed \(t\in {\mathbb {C}}^*\), the set of all \(2\times 2\) matrix functions satisfying properties (1)–(3) above.

Next, we consider deformations of the linear system (2.1), as \(t\rightarrow qt\), which leave the matrix function \(P(z,t)\) invariant, i.e. such that \(P(z,qt)=P(z,t)\), which is equivalent to

$$\begin{aligned} C(z,qt)=z \,C(z,t). \end{aligned}$$

We call such a deformation isomonodromic.

Jimbo and Sakai [20] showed that, upon introducing the following coordinates (f, g, w) on A,

$$\begin{aligned} A_{12}(z,t)&=\kappa _\infty ^{-1} w(z-f),\end{aligned}$$
(2.7a)
$$\begin{aligned} A_{22}(f,t)&=q(f-\kappa _1)(f-\kappa _1^{-1})g, \end{aligned}$$
(2.7b)

isomonodromic deformation of the linear system (2.1) is locally equivalent to (f, g, w) satisfying \(q\text {P}_{\text {VI}}^{aux}(\kappa ,t_0)\). Building on this, we prove the following lemma in Sect. 3.2.

Lemma 2.2

Let (f, g, w) be any solution of \(q\text {P}_{\text {VI}}^{aux}(\kappa ,t_0)\) and denote

$$\begin{aligned} {\mathfrak {M}}=\left\{ m\in {\mathbb {Z}}:(f(q^mt_0),g(q^mt_0))\ne (\infty ,\kappa _\infty )\right\} . \end{aligned}$$
(2.8)

Then, the linear system A(zt) is regular in t on \(q^{\mathfrak {M}} t_0\) and the corresponding connection matrix is given by

$$\begin{aligned} C(z,t)=z^m D(t)C_0(z),\quad (t=q^mt_0,m\in {\mathfrak {M}}), \end{aligned}$$
(2.9)

for a matrix \(C_0(z)\in {\mathfrak {C}}(\kappa ,t_0)\), unique up to left-multiplication by diagonal matrices. Here D(t) is a diagonal matrix which may be eliminated from Eq. (2.9) by rescaling \(H(t)\mapsto H(t)D(t)\) in Eq. (2.4).

In Lemma 2.2, we have the freedom of rescaling the auxiliary variable w by \(w\mapsto {\widetilde{w}}=dw\), \(d\in {\mathbb {C}}^*\), which is equivalent to gauging the linear system by a constant diagonal matrix,

$$\begin{aligned} A(z,t)\rightarrow D^{-1}A(z,t)D,\quad D=\begin{pmatrix} 1 &{}\quad 0\\ 0 &{}\quad d\\ \end{pmatrix}, \end{aligned}$$

and thus rescaling the matrix \(C_0(z)\in {\mathfrak {C}}(\kappa ,t_0)\) as

$$\begin{aligned} C_0(z)\rightarrow C_0(z)D. \end{aligned}$$

Hence, Lemma 2.2 provides us with a mapping

$$\begin{aligned} (f,g)\rightarrow [C_0(z)], \end{aligned}$$
(2.10)

which associates with any solution (f, g) of \(q\text {P}_{\text {VI}}(\kappa ,t_0)\) the equivalence class of \(C_0(z)\) in \({\mathfrak {C}}(\kappa ,t_0)\) quotiented by arbitrary left and right-multiplication by invertible diagonal matrices. This warrants the following definition.

Definition 2.3

We define \({\mathcal {M}}(\kappa ,t_0)\) to be the space of connection matrices \({\mathfrak {C}}(\kappa ,t_0)\) quotiented by arbitrary left and right-multiplication by invertible diagonal matrices. We refer to \({\mathcal {M}}(\kappa ,t_0)\) as the monodromy manifold of \(q\text {P}_{\text {VI}}(\kappa ,t_0)\).

Correspondingly, we call the mapping (2.10), which associates with any solution (f, g) of \(q\text {P}_{\text {VI}}(\kappa ,t_0)\) a point on the monodromy manifold, the monodromy mapping.

Remark 2.4

The space \({\mathcal {M}}(\kappa ,t_0)\) was first introduced and studied in Ohyama et al. [27, §4.1.1], where it is denoted as \({\mathcal {F}}\). Ohyama et al. [27] showed how this space can naturally be endowed with the structure of a complex algebraic variety, under certain assumptions of genericity including the non-resonance (1.4) and non-splitting conditions (1.5). Compatible with this structure, we endow \({\mathcal {M}}(\kappa ,t_0)\) with the structures of a complex manifold and algebraic variety, in Theorems 2.17 and 2.20 respectively. The proof that these structures are compatible with those in [27] is postponed to the end of the paper, see Remark 5.6.

In Sect. 3.3, we prove the following lemma concerning injectivity of the monodromy mapping.

Lemma 2.5

The monodromy mapping, defined in Definition 2.3, is injective.

2.2 The main Riemann–Hilbert problem

In this paper, we analyse the monodromy mapping through the corresponding Riemann–Hilbert problem (RHP), obtained via Birkhoff’s theory [1].

To introduce this RHP, we return to the single-valued matrix functions \(\Psi _0(z,t)\) and \(\Psi _\infty (z,t)\), defined in Eq. (2.5). Let us denote \(t_m=q^mt_0\) for \(m\in {\mathbb {Z}}\). By Lemma 2.2, we may choose H such that

$$\begin{aligned} \Psi _\infty (z,t_m)=\Psi _0(z,t_m)\,z^mC_0(z), \end{aligned}$$
(2.11)

for \(m\in {\mathfrak {M}}\).

Next, we need to choose Jordan curves \(\gamma ^{{(}{m}{)}}\), \(m\in {\mathbb {Z}}\), which separate the points in the complex plane where \(\Psi _\infty (z,t_m)\) and \(\Psi _0(z,t_m)\) are respectively non-invertible and singular. These points are precisely the zeros of the determinants (2.6a) and (2.6b) respectively. We thus make the following definition.

Definition 2.6

Consider a family \((\gamma ^{{(}{m}{)}})_{m\in {\mathbb {Z}}}\) of positively oriented Jordan curves in \({\mathbb {C}}^*\) and denote by \(D_+^{{(}{m}{)}}\) and \(D_-^{{(}{m}{)}}\) the inside and outside of \(\gamma ^{{(}{m}{)}}\) respectively, for \(m\in {\mathbb {Z}}\). Then we call this family of curves admissible if, for \(m\in {\mathbb {Z}}\),

$$\begin{aligned} q^{{\mathbb {Z}}_{>0}}\cdot \{\kappa _t t_m,\kappa _t^{-1} t_m,\kappa _1,\kappa _1^{-1}\}&\subseteq D_-^{{(}{m}{)}},\\ q^{{\mathbb {Z}}_{\le 0}}\cdot \{\kappa _t t_m,\kappa _t^{-1} t_m,\kappa _1,\kappa _1^{-1}\}&\subseteq D_+^{{(}{m}{)}}, \end{aligned}$$

where we use the notation \(U\cdot V=\{uv:u\in U,v\in V\}\) for compatible sets U and V, and

$$\begin{aligned} D_-^{{(}{m+1}{)}}\subseteq D_-^{{(}{m}{)}}, \end{aligned}$$

see Fig. 1.

Fig. 1: An example of two contours \(\gamma ^{{(}{m}{)}}\) and \(\gamma ^{{(}{m+1}{)}}\) satisfying the conditions in Definition 2.6, where \(t=q^mt_0\) and the red lines denote the four spirals \(q^{{\mathbb {R}}}\cdot x\), \(x\in \{\kappa _t^{\pm 1}t_0,\kappa _1^{\pm 1}\}\).

We can always construct an admissible family of curves and it follows that

$$\begin{aligned} \Psi ^{{(}{m}{)}}(z)={\left\{ \begin{array}{ll} \Psi _\infty (z,t_m) &{} z\in D_+^{{(}{m}{)}},\\ \Psi _0(z,t_m) &{} z\in D_-^{{(}{m}{)}}, \end{array}\right. } \end{aligned}$$
(2.12)

defines a solution of the following RHP, with \(C(z)=C_0(z)\), for \(m\in {\mathfrak {M}}\).

Definition 2.7

(RHP I). Given a connection matrix \(C\in {\mathfrak {C}}(\kappa ,t_0)\) and a family of admissible curves \((\gamma ^{{(}{m}{)}})_{m\in {\mathbb {Z}}}\), for \(m\in {\mathbb {Z}}\), find a matrix function \(\Psi ^{{(}{m}{)}}(z)\) which satisfies the following conditions.

  (i)

    \(\Psi ^{{(}{m}{)}}(z)\) is analytic on \({\mathbb {C}}\setminus \gamma ^{{(}{m}{)}}\).

  (ii)

    \(\Psi ^{{(}{m}{)}}(z')\) has continuous boundary values \(\Psi _+^{{(}{m}{)}}(z)\) and \(\Psi _-^{{(}{m}{)}}(z)\) as \(z'\) approaches \(z\in \gamma ^{{(}{m}{)}}\) from \(D_+^{{(}{m}{)}}\) and \(D_-^{{(}{m}{)}}\) respectively, related by

    $$\begin{aligned} \Psi _+^{{(}{m}{)}}(z)=\Psi _-^{{(}{m}{)}}(z)z^mC(z),\quad z\in \gamma ^{{(}{m}{)}}. \end{aligned}$$
  (iii)

    \(\Psi ^{{(}{m}{)}}(z)\) satisfies

    $$\begin{aligned} \Psi ^{{(}{m}{)}}(z)=I+{\mathcal {O}}\left( z^{-1}\right) \quad z\rightarrow \infty . \end{aligned}$$

The matrix function \(\Psi ^{{(}{m}{)}}(z)\), defined in Eq. (2.12), is uniquely characterised as the solution of RHP I. Indeed, we have the following lemma, which we prove in Sect. 3.3.

Lemma 2.8

For any fixed \(m\in {\mathbb {Z}}\), if RHP I in Definition 2.7 has a solution \(\Psi ^{{(}{m}{)}}(z)\), then this solution is globally invertible on the complex plane and unique.

From here on we say that \(\Psi ^{{(}{m}{)}}(z)\) exists if and only if RHP I has a solution for that particular value of m, as justified by the uniqueness in the above lemma.

If RHP I is solvable, then we can construct a corresponding isomonodromic linear system, by setting

$$\begin{aligned} A(z,q^mt_0):={\left\{ \begin{array}{ll} z^2 \Psi ^{{(}{m}{)}}(qz)\kappa _\infty ^{\sigma _3} \Psi ^{{(}{m}{)}}(z)^{-1} &{} \text {if } z\in q^{-1}(D_+^{{(}{m}{)}}\cup \gamma ^{{(}{m}{)}}),\\ q^m t_0 \Psi ^{{(}{m}{)}}(qz)\kappa _0^{\sigma _3}C(z) \Psi ^{{(}{m}{)}}(z)^{-1} &{} \text {if } z\in D_+^{{(}{m}{)}}\cap q^{-1}D_-^{{(}{m}{)}},\\ q^m t_0 \Psi ^{{(}{m}{)}}(qz)\kappa _0^{\sigma _3} \Psi ^{{(}{m}{)}}(z)^{-1} &{} \text {if } z\in D_-^{{(}{m}{)}}\cup \gamma ^{{(}{m}{)}}. \end{array}\right. }\qquad \quad \end{aligned}$$
(2.13)

This defines a matrix polynomial of the form (2.2) and the values of (f, g, w) may be read directly from the solution of the RHP as follows (details are given in Sect. 3.3). Let

$$\begin{aligned} \Psi ^{{(}{m}{)}}(z)&=H(t_m)+{\mathcal {O}}(z){} & {} (z\rightarrow 0), \end{aligned}$$
(2.14)
$$\begin{aligned} \Psi ^{{(}{m}{)}}(z)&=I+z^{-1}U(t_m)+{\mathcal {O}}(z^{-2}){} & {} (z\rightarrow \infty ), \end{aligned}$$
(2.15)

and denote \(H=(h_{ij})\) and \(U=(u_{ij})\), then

$$\begin{aligned} w&=(q^{-1}-\kappa _\infty ^2)u_{12}, \end{aligned}$$
(2.16a)
$$\begin{aligned} f&=t_m\kappa _\infty \left( \kappa _0-\kappa _0^{-1}\right) \frac{h_{11}h_{12}}{w|H|}, \end{aligned}$$
(2.16b)
$$\begin{aligned} g&=q^{-1}\kappa _\infty ^{-1}(f-\kappa _t t_m)(f-\kappa _t^{-1} t_m)g_1^{-1}, \end{aligned}$$
(2.16c)
$$\begin{aligned} g_1&=f^2+f\left( (q^{-1}-1)u_{11}+\frac{h_{21}w}{h_{11}\kappa _\infty ^2}\right) +\frac{\kappa _0 t}{\kappa _\infty }. \end{aligned}$$
(2.16d)

2.3 Solvability of the main RHP

The notion of reducible monodromy, given in the following definition, plays an important role in our main results.

Definition 2.9

We call a connection matrix \(C(z)\in {\mathfrak {C}}(\kappa ,t_0)\) irreducible when none of its entries are identically zero, otherwise we call it reducible. Similarly, we call monodromy \([C(z)]\in {\mathcal {M}}(\kappa ,t_0)\) irreducible when C(z) is irreducible and reducible otherwise.

Lemma 2.10

The monodromy manifold \({\mathcal {M}}(\kappa ,t_0)\) does not contain reducible monodromy if and only if the non-splitting conditions (1.5) hold true.

Remark 2.11

This lemma can be inferred from Ohyama et al. [27, Theorem 4.3]. We give a proof in Sect. 4.1.

We are now in a position to state our first main result, which we prove in Sect. 4.1.

Theorem 2.12

Consider RHP I defined in Definition 2.7. If the connection matrix \(C(z)\in {\mathfrak {C}}(\kappa ,t_0)\) is irreducible, see Definition 2.9, then this RHP is solvable. More precisely, for any \(m\in {\mathbb {Z}}\), at least one of the solutions \(\Psi ^{{(}{m}{)}}(z)\) and \(\Psi ^{{(}{m+1}{)}}(z)\) of RHP I exists.

Let (f, g, w) be the unique corresponding solution of \(q\text {P}_{\text {VI}}^{aux}(\kappa ,t_0)\) via Eq. (2.16). Then, for \(m\in {\mathbb {Z}}\), \(\Psi ^{{(}{m}{)}}(z)\) fails to exist if and only if \((f(t_m),g(t_m))=(\infty ,\kappa _\infty )\).

Corollary 2.13

If the non-splitting conditions (1.5) hold true, then the monodromy mapping is bijective.

Proof

Due to Lemma 2.5, the monodromy mapping is injective. Take any monodromy in the monodromy manifold. Then, by Lemma 2.10, it must be irreducible. Theorem 2.12 thus shows that there exists a solution of \(q\text {P}_{\text {VI}}\) with that monodromy. So the monodromy mapping is also surjective and the corollary follows. \(\square \)

For reducible monodromy, solvability of RHP I is more subtle than in the irreducible case handled in Theorem 2.12. We discuss this in Sect. 4.2, where we show that the RHP with reducible monodromy can be transformed into the standard Fokas-Its-Kitaev RHP [8, 9] for certain orthogonal polynomials. We further show that the corresponding solutions of \(q\text {P}_{\text {VI}}\) can be expressed in terms of determinants containing Heine’s basic hypergeometric functions. We thus see that special function solutions occur when the monodromy of the linear problem is reducible, a phenomenon well-known for the classical sixth Painlevé equation [26].

2.4 Results on the monodromy manifold

Our second main result is the identification of the monodromy manifold with an explicit surface. To state this result, we define a set of coordinates on the monodromy manifold \({\mathcal {M}}(\kappa ,t_0)\), using a construction introduced in our previous paper [23].

Firstly, we require the following notation: for any \(2\times 2\) matrix R of rank one, let \(R_1\) and \(R_2\) be its first and second column respectively; we then define \(\pi (R)\in {\mathbb {P}}^1\) by

$$\begin{aligned} R_1=\pi (R)R_2, \end{aligned}$$

with \(\pi (R)=0\) if and only if \(R_1=(0,0)^T\) and \(\pi (R)=\infty \) if and only if \(R_2=(0,0)^T\).
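In floating point, \(\pi (R)\) can be computed directly from the columns of R; the short sketch below (ours, with a hypothetical rank-one test matrix) illustrates this, and is used in the same way to evaluate the coordinates \(\rho _k\) introduced next.

```python
import numpy as np

def pi(R, tol=1e-12):
    # pi(R) in P^1 for a rank-one 2x2 matrix R, defined by R[:, 0] = pi(R) * R[:, 1]
    c1, c2 = R[:, 0], R[:, 1]
    if np.allclose(c1, 0, atol=tol):
        return 0.0                      # first column vanishes
    if np.allclose(c2, 0, atol=tol):
        return np.inf                   # second column vanishes
    i = np.argmax(np.abs(c2))           # use the largest entry of the second column
    return c1[i] / c2[i]

v = np.array([1.0 + 0.5j, -0.3 + 2.0j])
R = np.outer(v, np.array([2.0 - 1.0j, 1.0]))   # rank one, first column = (2 - 1j) * second
print(pi(R))                                   # (2 - 1j)
```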

Take a connection matrix \(C(z)\in {\mathfrak {C}}(\kappa ,t_0)\) and denote

$$\begin{aligned} (x_1,x_2,x_3,x_4)=(\kappa _t t_0,\kappa _t^{-1} t_0,\kappa _1,\kappa _1^{-1}). \end{aligned}$$
(2.17)

Let \(1\le k\le 4\), then |C(z)| has a simple zero at \(z=x_k\) and thus \(C(x_k)\), while nonzero, is not invertible. We define the coordinates

$$\begin{aligned} \rho _k=\pi (C(x_k)),\quad (1\le k\le 4). \end{aligned}$$

Note that \((\rho _1,\rho _2,\rho _3,\rho _4)\) are invariant under left multiplication of C(z) by diagonal matrices. However, multiplication by diagonal matrices from the right has the effect of scaling

$$\begin{aligned} (\rho _1,\rho _2,\rho _3,\rho _4)\rightarrow (c\rho _1,c\rho _2,c\rho _3,c\rho _4), \end{aligned}$$
(2.18)

for some \(c\in {\mathbb {C}}^*\).

Therefore, the coordinates \(\rho \) naturally lie in \(({\mathbb {P}}^1)^4/{\mathbb {C}}^*\) and we obtain a mapping

$$\begin{aligned} {\mathcal {P}}:{\mathcal {M}}(\kappa ,t_0)\rightarrow ({\mathbb {P}}^1)^4/{\mathbb {C}}^*,[ C(z)]\mapsto [\rho ], \end{aligned}$$
(2.19)

which is easily seen to be an embedding (see Lemma 5.1).

We proceed by giving an explicit description of the image of the monodromy manifold under \({\mathcal {P}}\). To this end, we make the following definition.

Definition 2.14

Define the quadratic polynomial

$$\begin{aligned} T(\rho :\kappa ,t_0)=T_{12}\rho _1\rho _2+T_{13}\rho _1\rho _3+T_{14}\rho _1\rho _4+T_{23}\rho _2\rho _3+T_{24}\rho _2\rho _4+T_{34}\rho _3\rho _4, \end{aligned}$$

with coefficients given by

$$\begin{aligned} T_{12}&= \theta _q\left( \kappa _t^2,\kappa _1^2\right) \theta _q\left( \kappa _0\kappa _\infty ^{-1}t_0,\kappa _0^{-1}\kappa _\infty ^{-1}t_0\right) \kappa _\infty ^2,\\ T_{34}&= \theta _q\left( \kappa _t^2,\kappa _1^2\right) \theta _q\left( \kappa _0\kappa _\infty t_0,\kappa _0^{-1}\kappa _\infty t_0\right) ,\\ T_{13}&=- \theta _q\left( \kappa _t\kappa _1^{-1}t_0,\kappa _t^{-1}\kappa _1t_0\right) \theta _q\left( \kappa _t\kappa _1\kappa _0^{-1}\kappa _\infty ^{-1},\kappa _0\kappa _t\kappa _1\kappa _\infty ^{-1}\right) \kappa _\infty ^2,\\ T_{24}&=-\theta _q\left( \kappa _t\kappa _1^{-1}t_0,\kappa _t^{-1}\kappa _1t_0\right) \theta _q\left( \kappa _0\kappa _t\kappa _1\kappa _\infty ,\kappa _t\kappa _1\kappa _\infty \kappa _0^{-1}\right) ,\\ T_{14}&= \theta _q\left( \kappa _t\kappa _1t_0,\kappa _t^{-1}\kappa _1^{-1}t_0\right) \theta _q\left( \kappa _1\kappa _\infty \kappa _0^{-1}\kappa _t^{-1},\kappa _0\kappa _1\kappa _\infty \kappa _t^{-1}\right) \kappa _t^2,\\ T_{23}&= \theta _q\left( \kappa _t\kappa _1t_0,\kappa _t^{-1}\kappa _1^{-1}t_0\right) \theta _q\left( \kappa _t\kappa _\infty \kappa _0^{-1}\kappa _1^{-1},\kappa _0\kappa _t\kappa _\infty \kappa _1^{-1}\right) \kappa _1^2. \end{aligned}$$

Note that T is homogeneous and multilinear in the variables \(\rho =(\rho _1,\rho _2,\rho _3,\rho _4)\). Therefore, if we denote its homogeneous form by

$$\begin{aligned} T_{hom}(\rho _1^x,\rho _1^y,\rho _2^x,\rho _2^y,\rho _3^x,\rho _3^y,\rho _4^x,\rho _4^y)=\rho _1^y\rho _2^y\rho _3^y\rho _4^yT\left( \frac{\rho _1^x}{\rho _1^y},\frac{\rho _2^x}{\rho _2^y},\frac{\rho _3^x}{\rho _3^y},\frac{\rho _4^x}{\rho _4^y}\right) , \end{aligned}$$
(2.20)

then, using homogeneous coordinates \(\rho _k=[\rho _k^x: \rho _k^y]\in {\mathbb {P}}^1\), \(1\le k\le 4\), the equation

$$\begin{aligned} T_{hom}(\rho _1^x,\rho _1^y,\rho _2^x,\rho _2^y,\rho _3^x,\rho _3^y,\rho _4^x,\rho _4^y)=0, \end{aligned}$$
(2.21)

defines a surface in \(({\mathbb {P}}^1)^4/{\mathbb {C}}^*\). We denote this surface by

$$\begin{aligned} {\mathcal {S}}(\kappa ,t_0)=\{[\rho ]\in ({\mathbb {P}}^1)^4/{\mathbb {C}}^*:T(\rho :\kappa ,t_0)=0\}. \end{aligned}$$

Our second main result is given by the following theorem, which is proven in Sect. 5.1.

Theorem 2.15

Denote by \({\widehat{\kappa }}\) the tuple of complex parameters \(\kappa \) after replacing \(\kappa _0\mapsto 1\). Then the image of the monodromy manifold \({\mathcal {M}}(\kappa ,t_0)\) under the mapping \({\mathcal {P}}\), defined in Eq. (2.19), is given by the surface \({\mathcal {S}}(\kappa ,t_0)\), minus the curve

$$\begin{aligned} {\mathcal {X}}(\kappa ,t_0):={\mathcal {S}}(\kappa ,t_0)\cap {\mathcal {S}}({\widehat{\kappa }},t_0). \end{aligned}$$
(2.22)

Let us denote

$$\begin{aligned} {\mathcal {S}}^*(\kappa ,t_0)={\mathcal {S}}(\kappa ,t_0)\setminus {\mathcal {X}}(\kappa ,t_0), \end{aligned}$$
(2.23)

then, the mapping

$$\begin{aligned} {\mathcal {M}}(\kappa ,t_0)\rightarrow {\mathcal {S}}^*(\kappa ,t_0),\ \text {where}\ [C(z)] \mapsto {\mathcal {P}}([C(z)]), \end{aligned}$$

is a bijection.

The curve \({\mathcal {X}}(\kappa ,t_0)\) in the above theorem has a geometric interpretation, which is described in the following remark.

Remark 2.16

The curve \({\mathcal {X}}={\mathcal {X}}(\kappa ,t_0)\) does not depend on \(\kappa _0\) and can be written as the intersection

$$\begin{aligned} {\mathcal {X}}= \bigcap _{\lambda _0\in {\mathbb {C}}^*}{\mathcal {S}}(\lambda _0,\kappa _t,\kappa _1,\kappa _\infty ,t_0). \end{aligned}$$
(2.24)

Informally, one can think of points on the curve \({\mathcal {X}}\) in \({\mathcal {S}}(\kappa ,t_0)\) as corresponding to connection matrices \(C(z)\in {\mathfrak {C}}(\kappa ,t_0)\) whose determinant is identically zero, i.e. they satisfy properties (1) and (2) of Definition 2.1, but property (3) with \(c=0\). Therefore, these coordinate values do not lie in the image of \({\mathcal {P}}\). In the proof of Theorem 2.15, we obtain an explicit parametrisation of \({\mathcal {X}}\), see Eq. (5.20).

We note that any point \([\rho ]\in {\mathcal {S}}(\kappa ,t_0)\) with more than two coordinates equal to zero, or more than two coordinates equal to infinity, necessarily lies on the closed curve \({\mathcal {X}}\), defined in Eq. (2.22), and is thus not a point on the surface \({\mathcal {S}}^*(\kappa ,t_0)\).

However, when one of the non-splitting conditions (1.5) is violated, one of the coefficients of the polynomial \(T(\rho )\), in Definition 2.14, vanishes. In that case, there exist points \([\rho ]\in {\mathcal {S}}(\kappa ,t_0)\) with precisely two coordinates zero or two coordinates infinite. Such points cannot lie on the closed curve \({\mathcal {X}}\) (as this would imply that \(\kappa _0\in q^{\mathbb {Z}}\)), and are in one-to-one correspondence with reducible monodromy on the monodromy manifold.

For example,

$$\begin{aligned} \left\{ [\rho ]\in {\mathcal {S}}^*:\rho _1=\rho _2=0\right\} ={\left\{ \begin{array}{ll} \{[(0,0,\rho _3,\rho _4)]:\rho _3,\rho _4\in {\mathbb {P}}^1\setminus \{0\}\} &{} \text { if }T_{34}=0,\\ \emptyset &{} \text { otherwise,} \end{array}\right. } \end{aligned}$$

and

$$\begin{aligned} \left\{ [\rho ]\in {\mathcal {S}}^*:\rho _3=\rho _4=\infty \right\} ={\left\{ \begin{array}{ll} \{[(\rho _1,\rho _2,\infty ,\infty )]:\rho _{1},\rho _2\in {\mathbb {C}}\} &{} \text { if }T_{34}=0,\\ \emptyset &{} \text { otherwise.} \end{array}\right. } \end{aligned}$$

If \(\kappa _0=\kappa _\infty t_0\), so that \(T_{34}=0\), then these two subspaces correspond respectively to the equivalence classes of the collection of upper-triangular connection matrices

$$\begin{aligned} C(z)=\begin{pmatrix} \theta _q\left( \frac{z}{\kappa _t t_0},\frac{z\kappa _t}{t_0}\right) &{}\quad c\,\theta _q\left( \frac{z}{\nu t_0},\frac{z\nu }{\kappa _0\kappa _\infty }\right) \\ 0 &{}\quad \theta _q\left( \frac{z}{\kappa _1},z\kappa _1\right) \end{pmatrix}\qquad (c\in {\mathbb {C}},\nu \in {\mathbb {C}}^*), \end{aligned}$$

and the equivalence classes of the collection of lower-triangular connection matrices

$$\begin{aligned} C(z)=\begin{pmatrix} \theta _q\left( \frac{z}{\kappa _t t_0},\frac{z\kappa _t}{t_0}\right) &{}\quad 0\\ c\,\theta _q\left( \frac{z}{\nu t_0},z\nu \kappa _0\kappa _\infty \right) &{}\quad \theta _q\left( \frac{z}{\kappa _1},z\kappa _1\right) \end{pmatrix}\qquad (c\in {\mathbb {C}},\nu \in {\mathbb {C}}^*), \end{aligned}$$

in the monodromy manifold.

Furthermore, these two subspaces intersect at the single point \([(0,0,\infty ,\infty )]\in {\mathcal {S}}^*(\kappa ,t_0)\), which corresponds to the equivalence class of the diagonal connection matrix in the monodromy manifold given by setting \(c=0\) in any of the above two formulas.
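As a concrete sanity check (ours, with arbitrary test values satisfying \(\kappa _0=\kappa _\infty t_0\)), the following sketch verifies numerically that the upper-triangular matrix above satisfies property (2) of Definition 2.1 with \(t=t_0\), and property (3) with the constant there equal to 1, using truncated q-Pochhammer products.

```python
import numpy as np

def qpoch(z, q, n=300):
    return np.prod(1 - q**np.arange(n) * z)

def th(z, q):
    return qpoch(z, q) * qpoch(q / z, q)

q = 0.35
kt, k1, kinf, t0 = 1.3 + 0.2j, 0.7 - 0.1j, 1.1 + 0.3j, 0.6 + 0.4j
k0 = kinf * t0                          # so that T_34 = 0
c0, nu = 0.8 - 0.5j, 1.4 + 0.1j         # the free constants c and nu in the display above

def C(z):
    # the upper-triangular connection matrix displayed above
    return np.array([
        [th(z/(kt*t0), q) * th(z*kt/t0, q), c0 * th(z/(nu*t0), q) * th(z*nu/(k0*kinf), q)],
        [0.0,                               th(z/k1, q) * th(z*k1, q)],
    ])

z = 0.9 - 0.7j
lhs = C(q*z)
rhs = (t0/z**2) * np.diag([k0, 1/k0]) @ C(z) @ np.diag([1/kinf, kinf])
print(np.max(np.abs(lhs - rhs)))             # ~ 0 : property (2) of Definition 2.1, with t = t_0
det_target = th(kt*z/t0, q) * th(z/(kt*t0), q) * th(k1*z, q) * th(z/k1, q)
print(abs(np.linalg.det(C(z)) - det_target)) # ~ 0 : property (3), with constant equal to 1
```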

By Theorem 2.15, the monodromy manifold inherits the topological properties of the space \({\mathcal {S}}^*(\kappa ,t_0)\) via the mapping \({\mathcal {P}}\). Diagonal and anti-diagonal monodromy form singularities on the monodromy manifold; this is the content of our third main result, proven in Sect. 5.2.

Theorem 2.17

If the non-splitting conditions (1.5) hold true, then the monodromy manifold \({\mathcal {M}}(\kappa ,t_0)\) is a smooth complex surface.

On the other hand, if one or more of the non-splitting conditions are violated, then the set

$$\begin{aligned} {\mathcal {M}}_{sing}:=\{[C(z)]\in {\mathcal {M}}(\kappa ,t_0):C(z)\text { is diagonal or anti-diagonal}\}, \end{aligned}$$

is non-empty (but finite); its elements form singularities of the monodromy manifold, and away from them the monodromy manifold is smooth.

Remark 2.18

We note that the above theorem implies the assertion in Conjecture 7.10 of Ohyama, Ramis and Sauloy [27]. This conjecture is made under the conditions (1.4), (1.5) and additional assumptions on the parameters, but our proof shows that the result holds without these additional assumptions.

In our fourth and final result we identify the monodromy manifold with an explicit affine algebraic surface via an embedding into \({\mathbb {C}}^6\). To construct this embedding, let us denote by

$$\begin{aligned} T'(\rho )=T_{12}'\rho _1\rho _2+T_{13}'\rho _1\rho _3+T_{14}'\rho _1\rho _4+T_{23}'\rho _2\rho _3+T_{24}'\rho _2\rho _4+T_{34}'\rho _3\rho _4, \end{aligned}$$

the quadratic polynomial \(T(\rho )=T(\rho :\kappa ,t_0)\) after replacing \(\kappa _0\mapsto 1\).

Take \(1\le i<j\le 4\) and consider the coordinate

$$\begin{aligned} \eta _{ij}:=\frac{T_{ij}\rho _i\rho _j}{\theta _q(\kappa _0,\kappa _0^{-1})T'(\rho )}. \end{aligned}$$
(2.25)

So, for example, \(\eta _{12}\) is given by

$$\begin{aligned} \eta _{12}=\frac{1}{\theta _q(\kappa _0,\kappa _0^{-1})}\frac{T_{12}\rho _1^x\rho _2^x\rho _3^y\rho _4^y}{T_{12}'\rho _1^x\rho _2^x\rho _3^y\rho _4^y+T_{13}'\rho _1^x\rho _2^y\rho _3^x\rho _4^y+\ldots +T_{34}'\rho _1^y\rho _2^y\rho _3^x\rho _4^x}, \end{aligned}$$

in homogeneous coordinates.

Note that \(\eta _{ij}\) is invariant under scalar multiplication \(\rho \mapsto c \rho \), \(c\in {\mathbb {C}}^*\). Furthermore, the denominator of \(\eta _{ij}\) does not vanish on \({\mathcal {S}}^*(\kappa ,t_0)\), as any such point \([\rho ]\) would necessarily lie on the curve \({\mathcal {X}}\), see Eq. (2.22).

This means that the \(\eta _{ij}\), \(1\le i<j\le 4\), are six well-defined coordinates on \({\mathcal {S}}^*(\kappa ,t_0)\), and thus on the monodromy manifold \({\mathcal {M}}(\kappa ,t_0)\), taking values in \({\mathbb {C}}^6\). Furthermore, by construction, they satisfy the following four equations,

$$\begin{aligned}&\eta _{12}+\eta _{13}+\eta _{14}+\eta _{23}+\eta _{24}+\eta _{34}=0,\end{aligned}$$
(2.26a)
$$\begin{aligned}&a_{12}\eta _{12}+a_{13}\eta _{13}+a_{14}\eta _{14}+a_{23}\eta _{23}+a_{24}\eta _{24}+a_{34}\eta _{34}=1,\end{aligned}$$
(2.26b)
$$\begin{aligned}&\eta _{13}\eta _{24}-\eta _{12}\eta _{34}b_1=0,\end{aligned}$$
(2.26c)
$$\begin{aligned}&\eta _{14}\eta _{23}-\eta _{12}\eta _{34}b_2=0, \end{aligned}$$
(2.26d)

where the coefficients \(a_{ij}=\theta _q(\kappa _0,\kappa _0^{-1})\,T_{ij}'/T_{ij}\), \(1\le i<j\le 4\), read

$$\begin{aligned}&a_{12}=\prod _{\epsilon =\pm 1}\frac{\theta _q\big (\kappa _0^\epsilon \big )\theta _q\big (\kappa _\infty ^{-1}t_0\big )}{\theta _q\big (\kappa _0^{\epsilon }\kappa _\infty ^{-1}t_0\big )},{} & {} a_{34}=\prod _{\epsilon =\pm 1}\frac{\theta _q\big (\kappa _0^\epsilon \big )\theta _q\big (\kappa _\infty t_0\big )}{\theta _q\big (\kappa _0^{\epsilon }\kappa _\infty t_0\big )},\\&a_{13}=\prod _{\epsilon =\pm 1}\frac{\theta _q\big (\kappa _0^\epsilon \big )\theta _q\big (\kappa _t\kappa _1\kappa _\infty ^{-1}\big )}{\theta _q\big (\kappa _0^{\epsilon }\kappa _t\kappa _1\kappa _\infty ^{-1}\big )},{} & {} a_{24}=\prod _{\epsilon =\pm 1}\frac{\theta _q\big (\kappa _0^\epsilon \big )\theta _q\big (\kappa _t\kappa _1\kappa _\infty \big )}{\theta _q\big (\kappa _0^{\epsilon }\kappa _t\kappa _1\kappa _\infty \big )},\\&a_{14}=\prod _{\epsilon =\pm 1}\frac{\theta _q\big (\kappa _0^\epsilon \big )\theta _q\big (\kappa _t^{-1}\kappa _1\kappa _\infty \big )}{\theta _q\big (\kappa _0^{\epsilon }\kappa _t^{-1}\kappa _1\kappa _\infty \big )},{} & {} a_{23}=\prod _{\epsilon =\pm 1}\frac{\theta _q\big (\kappa _0^\epsilon \big )\theta _q\big (\kappa _t\kappa _1^{-1}\kappa _\infty \big )}{\theta _q\big (\kappa _0^{\epsilon }\kappa _t\kappa _1^{-1}\kappa _\infty \big )},\\ \end{aligned}$$

and

$$\begin{aligned} b_1=\frac{T_{13}T_{24}}{T_{12}T_{34}},\qquad b_2=\frac{T_{14}T_{23}}{T_{12}T_{34}}. \end{aligned}$$
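This construction can be tested numerically. The following sketch (ours; the parameter values are arbitrary and generic) samples a point \([\rho ]\) on \({\mathcal {S}}(\kappa ,t_0)\) by solving \(T(\rho )=0\) for \(\rho _4\), computes the \(\eta \)-coordinates (2.25), and checks that they satisfy Eq. (2.26), with the \(a_{ij}\) taken from the product formulas above.

```python
import numpy as np

def qpoch(z, q, n=300):
    # truncated q-Pochhammer symbol (z; q)_infinity
    return np.prod(1 - q**np.arange(n) * z)

def th(q, *zs):
    # theta_q(z_1, ..., z_n) = theta_q(z_1) ... theta_q(z_n)
    out = 1.0 + 0j
    for z in zs:
        out *= qpoch(z, q) * qpoch(q / z, q)
    return out

q = 0.3
k0, kt, k1, kinf, t0 = 1.7 + 0.2j, 0.9 - 0.3j, 1.4 + 0.1j, 0.8 + 0.5j, 0.6 - 0.2j

def T_coeffs(k0):
    # the coefficients T_ij of Definition 2.14
    return {
        (1, 2):  th(q, kt**2, k1**2) * th(q, k0*t0/kinf, t0/(k0*kinf)) * kinf**2,
        (3, 4):  th(q, kt**2, k1**2) * th(q, k0*kinf*t0, kinf*t0/k0),
        (1, 3): -th(q, kt*t0/k1, k1*t0/kt) * th(q, kt*k1/(k0*kinf), k0*kt*k1/kinf) * kinf**2,
        (2, 4): -th(q, kt*t0/k1, k1*t0/kt) * th(q, k0*kt*k1*kinf, kt*k1*kinf/k0),
        (1, 4):  th(q, kt*k1*t0, t0/(kt*k1)) * th(q, k1*kinf/(k0*kt), k0*k1*kinf/kt) * kt**2,
        (2, 3):  th(q, kt*k1*t0, t0/(kt*k1)) * th(q, kt*kinf/(k0*k1), k0*kt*kinf/k1) * k1**2,
    }

T, Tp = T_coeffs(k0), T_coeffs(1.0)          # T and T' (kappa_0 replaced by 1)

# sample a point on S(kappa, t0): fix rho_1, rho_2, rho_3 and solve T(rho) = 0 for rho_4
r = {1: 0.7 + 0.4j, 2: -1.1 + 0.2j, 3: 0.5 - 0.9j}
r[4] = -(T[(1, 2)]*r[1]*r[2] + T[(1, 3)]*r[1]*r[3] + T[(2, 3)]*r[2]*r[3]) \
       / (T[(1, 4)]*r[1] + T[(2, 4)]*r[2] + T[(3, 4)]*r[3])

Tp_rho = sum(Tp[ij] * r[ij[0]] * r[ij[1]] for ij in Tp)
eta = {ij: T[ij] * r[ij[0]] * r[ij[1]] / (th(q, k0, 1/k0) * Tp_rho) for ij in T}

def a_coeff(y):
    # the displayed product formula for a_ij, with y the kappa-combination appearing in it
    return np.prod([th(q, k0**e) * th(q, y) / th(q, k0**e * y) for e in (+1, -1)])

a = {(1, 2): a_coeff(t0/kinf),    (3, 4): a_coeff(kinf*t0),
     (1, 3): a_coeff(kt*k1/kinf), (2, 4): a_coeff(kt*k1*kinf),
     (1, 4): a_coeff(k1*kinf/kt), (2, 3): a_coeff(kt*kinf/k1)}
b1 = T[(1, 3)] * T[(2, 4)] / (T[(1, 2)] * T[(3, 4)])
b2 = T[(1, 4)] * T[(2, 3)] / (T[(1, 2)] * T[(3, 4)])

print(abs(sum(eta.values())))                                     # (2.26a) ~ 0
print(abs(sum(a[ij] * eta[ij] for ij in eta) - 1))                # (2.26b) ~ 0
print(abs(eta[(1, 3)]*eta[(2, 4)] - b1*eta[(1, 2)]*eta[(3, 4)]))  # (2.26c) ~ 0
print(abs(eta[(1, 4)]*eta[(2, 3)] - b2*eta[(1, 2)]*eta[(3, 4)]))  # (2.26d) ~ 0
```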

Definition 2.19

We denote by \({\mathcal {F}}(\kappa ,t_0)\) the affine algebraic surface in

$$\begin{aligned} \{(\eta _{12},\eta _{13},\eta _{14},\eta _{23},\eta _{24},\eta _{34})\in {\mathbb {C}}^6\} \end{aligned}$$

defined by Eq. (2.26). We correspondingly denote by

$$\begin{aligned} \Phi :{\mathcal {S}}^*(\kappa ,t_0)\rightarrow {\mathcal {F}}(\kappa ,t_0), [\rho ]\rightarrow \eta , \end{aligned}$$

the mapping defined through the \(\eta \)-coordinates (2.25) and write

$$\begin{aligned} \Phi _{{\mathcal {M}}}=\Phi \circ {\mathcal {P}}:{\mathcal {M}}(\kappa ,t_0)\rightarrow {\mathcal {F}}(\kappa ,t_0), [ C(z)]\rightarrow \eta , \end{aligned}$$

where \({\mathcal {P}}\) is the mapping defined in Eq. (2.19).

Our fourth and final main result is given by the following theorem, which is proved in Sect. 5.3.

Theorem 2.20

Let \(\kappa \) and \(t_0\) be parameters satisfying the non-resonance conditions (1.4) and the non-splitting conditions (1.5). Then the mapping \(\Phi _{{\mathcal {M}}}\), given in Definition 2.19, is an isomorphism between the monodromy manifold \({\mathcal {M}}(\kappa ,t_0)\) and the affine algebraic surface \({\mathcal {F}}(\kappa ,t_0)\).

Remark 2.21

We note that the algebraic surface \({\mathcal {F}}(\kappa ,t_0)\) is invariant under the translations

$$\begin{aligned} t_0\mapsto q\, t_0,\qquad \kappa _j\mapsto q\,\kappa _j\quad (j=0,t,1,\infty ), \end{aligned}$$

since the coefficients in Eq. (2.26) are invariant under them.

The surface \({\mathcal {F}}(\kappa ,t_0)\) can be identified with the intersection of two quadrics in \({\mathbb {C}}^4\). This can be seen by using Eq. (2.26a) and (2.26b) to eliminate any two of the six variables.

For example, consider eliminating \(\{\eta _{24},\eta _{34}\}\) from (2.26) using (2.26a) and (2.26b). The relevant determinant is given by

$$\begin{aligned} \begin{vmatrix} 1&\quad 1\\ a_{24}&\quad a_{34}\\ \end{vmatrix}=\kappa _t\kappa _1\kappa _\infty \theta _q(\kappa _t^{-1}\kappa _1^{-1}t_0,\kappa _t\kappa _1\kappa _\infty ^2t_0)\prod _{\epsilon =\pm 1}{\frac{\theta _q(\kappa _0^\epsilon )^2}{\theta _q(\kappa _0^\epsilon \kappa _\infty t_0, \kappa _0^\epsilon \kappa _t \kappa _1 \kappa _\infty )}}. \end{aligned}$$

Let us assume that \(\kappa _t\kappa _1\kappa _\infty ^2t_0\notin q^{\mathbb {Z}}\). If not, then we can instead choose another pair of coordinates to eliminate. The non-resonance conditions (1.4) and non-splitting conditions (1.5) now guarantee that the above determinant is non-zero. Upon eliminating \(\{\eta _{24},\eta _{34}\}\), Eqs. (2.26c) and (2.26d) respectively become

$$\begin{aligned} \begin{aligned} u_{0} \eta _{12}^2+u_{1} \eta _{12}\eta _{13}+u_{2} \eta _{12}\eta _{14} +u_{3} \eta _{12}\eta _{23}+u_{4}\eta _{14}\eta _{23} +u_5 \eta _{12}=0,\\ v_{0}\hspace{0.3mm} \eta _{13}^2+v_{1} \hspace{0.3mm}\eta _{12}\eta _{13}+v_{2} \hspace{0.3mm}\eta _{13}\eta _{14} +v_{3}\hspace{0.3mm} \eta _{13}\eta _{23}+v_{4}\hspace{0.3mm}\eta _{14}\eta _{23} +v_5\hspace{0.3mm} \eta _{13}=0, \end{aligned} \end{aligned}$$
(2.27)

with coefficients given by

$$\begin{aligned} u_0&=-\kappa _t^2\kappa _1^2\kappa _\infty ^2\, \theta _q\left( t_0\kappa _t\kappa _1,\frac{t_0}{\kappa _t\kappa _1\kappa _\infty ^2}\right) \prod _{\epsilon =\pm 1}\frac{\theta _q\left( \kappa _0^\epsilon \right) }{\theta _q\left( \frac{t_0}{\kappa _\infty }\kappa _0^\epsilon \right) },\\ u_1&=\kappa _t^2\kappa _1^2\,\theta _q\left( \kappa _t^2\kappa _1^2,\kappa _\infty ^2\right) \prod _{\epsilon =\pm 1}\frac{\theta _q\left( \kappa _0^\epsilon \right) }{\theta _q\left( \frac{\kappa _t\kappa _1}{\kappa _\infty }\kappa _0^\epsilon \right) },\\ u_2&=\kappa _1^2\kappa _\infty ^2\,\theta _q\left( \kappa _1^2\kappa _\infty ^2,\kappa _t^2\right) \prod _{\epsilon =\pm 1}\frac{\theta _q\left( \kappa _0^\epsilon \right) }{\theta _q\left( \frac{\kappa _1\kappa _\infty }{\kappa _t}\kappa _0^\epsilon \right) },\\ u_3&=\kappa _t^2\kappa _\infty ^2\,\theta _q\left( \kappa _t^2\kappa _\infty ^2,\kappa _1^2\right) \prod _{\epsilon =\pm 1}\frac{\theta _q\left( \kappa _0^\epsilon \right) }{\theta _q\left( \frac{\kappa _t\kappa _\infty }{\kappa _1}\kappa _0^\epsilon \right) },\\ u_4&=-\kappa _\infty ^4 \frac{\theta _q\left( \kappa _t^2,\kappa _1^2\right) ^2 \theta _q(t_0\kappa _t\kappa _1\kappa _\infty ^2)}{\theta _q(t_0\kappa _t\kappa _1)^2\, \theta _q\left( \frac{t_0}{\kappa _t\kappa _1}\right) } \prod _{\epsilon =\pm 1}\frac{\theta _q\left( \kappa _0^\epsilon ,\frac{t_0}{\kappa _\infty }\kappa _0^\epsilon \right) }{\theta _q\left( \frac{\kappa _1\kappa _\infty }{\kappa _t}\kappa _0^\epsilon ,\frac{\kappa _t\kappa _\infty }{\kappa _1}\kappa _0^\epsilon \right) },\\ u_5&=\kappa _t\kappa _1\kappa _\infty \prod _{\epsilon =\pm 1}\frac{\theta _q\left( \kappa _t\kappa _1\kappa _\infty \kappa _0^\epsilon \right) }{\theta _q\left( \kappa _0^\epsilon \right) }, \end{aligned}$$

and

$$\begin{aligned} v_0&=-\kappa _t^2\kappa _1^2\, \theta _q\left( t_0\kappa _t\kappa _1,\frac{t_0 \kappa _\infty ^2}{\kappa _t\kappa _1}\right) \prod _{\epsilon =\pm 1}\frac{\theta _q\left( \kappa _0^\epsilon \right) }{\theta _q\left( \frac{\kappa _t\kappa _1}{\kappa _\infty }\kappa _0^\epsilon \right) },\\ v_1&=t_0\kappa _t\kappa _1\,\theta _q\left( t_0^2,\kappa _\infty ^2\right) \prod _{\epsilon =\pm 1}\frac{\theta _q\left( \kappa _0^\epsilon \right) }{\theta _q\left( \frac{t_0}{\kappa _\infty }\kappa _0^\epsilon \right) },\\ v_2&=\kappa _1^2\kappa _\infty ^2\,\theta _q\left( \frac{t_0 \kappa _t}{\kappa _1},\frac{t_0\kappa _1 \kappa _\infty ^2}{\kappa _t}\right) \prod _{\epsilon =\pm 1}\frac{\theta _q\left( \kappa _0^\epsilon \right) }{\theta _q\left( \frac{\kappa _1\kappa _\infty }{\kappa _t}\kappa _0^\epsilon \right) },\\ v_3&=\kappa _t^2\kappa _\infty ^2\,\theta _q\left( \frac{t_0 \kappa _1}{\kappa _t},\frac{t_0\kappa _t \kappa _\infty ^2}{\kappa _1}\right) \prod _{\epsilon =\pm 1}\frac{\theta _q\left( \kappa _0^\epsilon \right) }{\theta _q\left( \frac{\kappa _t\kappa _\infty }{\kappa _1}\kappa _0^\epsilon \right) },\\ v_4&=\kappa _\infty ^4 \frac{\theta _q\left( \frac{t_0\kappa _t}{\kappa _1},\frac{t_0\kappa _1}{\kappa _t}\right) ^2 \theta _q(t_0\kappa _t\kappa _1\kappa _\infty ^2)}{\theta _q(t_0\kappa _t\kappa _1)^2\, \theta _q\left( \frac{t_0}{\kappa _t\kappa _1}\right) } \prod _{\epsilon =\pm 1}\frac{\theta _q\left( \kappa _0^\epsilon ,\frac{\kappa _t\kappa _1}{\kappa _\infty }\kappa _0^\epsilon \right) }{\theta _q\left( \frac{\kappa _1\kappa _\infty }{\kappa _t}\kappa _0^\epsilon ,\frac{\kappa _t\kappa _\infty }{\kappa _1}\kappa _0^\epsilon \right) },\\ v_5&=\kappa _t\kappa _1\kappa _\infty \prod _{\epsilon =\pm 1}\frac{\theta _q\left( t_0\kappa _\infty \kappa _0^\epsilon \right) }{\theta _q\left( \kappa _0^\epsilon \right) }, \end{aligned}$$

Thus, for generic parameter values, the monodromy manifold of \(q\text {P}_{\text {VI}}\) is isomorphic to the intersection of the two quadrics defined by Eq. (2.27) in \({\mathbb {C}}^4\). Intersections of two quadrics in \({\mathbb {P}}^4\) are known as Segre surfaces and it is well-known that they are isomorphic to Del Pezzo surfaces of degree four, see e.g. [15].

It is interesting to contrast this with the monodromy manifolds of the classical Painlevé equations. They are isomorphic to affine cubic surfaces [35]. In particular, their corresponding projective completions are Del Pezzo surfaces of degree three [15].

We further note that Chekhov et al. [3] conjectured explicit affine Del Pezzo surfaces of degree three as the monodromy manifolds of the q-Painlevé equations higher up in Sakai’s classification scheme [31] than \(q\text {P}_{\text {VI}}\).

From Corollary 2.13 and Theorems 2.17 and 2.20, we obtain the following corollary.

Corollary 2.22

Let \(\kappa \) and \(t_0\) be such that the non-resonance conditions (1.4) and non-splitting conditions (1.5) are fulfilled. Then, composition of the monodromy mapping with \(\Phi _{{\mathcal {M}}}\), defined in Definition 2.19, yields a bijective mapping from the solution space of \(q\text {P}_{\text {VI}}(\kappa ,t_0)\) to the smooth algebraic surface \({\mathcal {F}}(\kappa ,t_0)\),

$$\begin{aligned} \{(f,g)\text { solution of }q\text {P}_{\text {VI}}(\kappa ,t_0)\}\rightarrow {\mathcal {F}}(\kappa ,t_0). \end{aligned}$$
(2.28)

In particular, we may write the general solution of \(q\text {P}_{\text {VI}}(\kappa ,t_0)\) as

$$\begin{aligned} f(t)&=f(t;\kappa ,t_0,\eta ),\\ g(t)&=g(t;\kappa ,t_0,\eta ), \end{aligned}$$

with \(t\in q^{\mathbb {Z}} t_0\) and \(\eta \) varying in \({\mathcal {F}}(\kappa ,t_0)\).

Remark 2.23

By identifying the domain of the mapping (2.28) with the initial value space of \(q\text {P}_{\text {VI}}\) at \(t=t_0\), the mapping becomes a bijective correspondence between complex (algebraic) surfaces. One can show that this correspondence is a biholomorphism using standard arguments. Namely, one observes that the matrix functions \(\Psi _j(z,t_0)\), \(j=0,\infty \), defined in Eq. (2.5), can be chosen locally analytically in (f, g) as long as one stays away from the exceptional lines above the base points \(b_7\) and \(b_8\). The corresponding connection matrix is then locally analytic in (f, g) and, consequently, so are the \(\eta \)-coordinates. To prove the latter statement around points on the exceptional lines above \(b_7\) and \(b_8\), one simply applies the argument with \(t=q\,t_0\) rather than \(t=t_0\), recalling that the time-evolution is a biholomorphism between the initial value spaces at \(t=t_0\) and \(t=q\, t_0\). It follows that the mapping (2.28) is a holomorphic bijection and thus a biholomorphism.

Remark 2.24

By specialising to the parameter setting

$$\begin{aligned} \kappa _0=\kappa _t,\quad \kappa _\infty =p^{-1}\kappa _1,\quad p=q^{\frac{1}{2}}, \end{aligned}$$
(2.30)

the \(q\text {P}_{\text {VI}}(\kappa )\) equation collapses to its symmetric form \(q\text {SP}_{\text {VI}}\), a single q-difference equation for a function h, which is related to (f, g) as

$$\begin{aligned} h(p^{2m} t_0)=f(q^mt_0),\quad h(p^{2m-1}t_0)=g(q^mt_0)\quad (m\in {\mathbb {Z}}). \end{aligned}$$

As both the non-resonance and non-splitting conditions (1.4) and (1.5) are generically not violated by (2.30), all the aspects of our treatment of \(q\text {P}_{\text {VI}}\) can be carried over to \(q\text {SP}_{\text {VI}}\). We further note that \(q\text {SP}_{\text {VI}}\) is also known as \(q\text {P}_\text {III}\) in the literature [14].

Remark 2.25

Regarding Painlevé VI, and its associated standard linear problem, the corresponding monodromy mapping was thoroughly studied by Inaba et al. [17]. The associated monodromy manifold can be identified with an explicit affine cubic surface, a fact which first appeared in Fricke and Klein [11] and was rediscovered by Jimbo [19] in the context of Painlevé VI. Our construction of the surface \({\mathcal {F}}(\kappa ,t_0)\), in Theorem 2.20, may be considered as a q-analog of this. Iwasaki [18] studied the smoothness of the Painlevé VI monodromy manifold and associated cubic. Theorem 2.17 can be considered a q-analog of [18, Theorem 1] in the non-resonant parameter regime.

3 The Linear Problem

Consider the linear system

$$\begin{aligned} Y(qz)=A(z)Y(z), \end{aligned}$$
(3.1)

where A(z) is a complex \(2\times 2\) matrix polynomial of degree two,

$$\begin{aligned} A(z)=A_0+zA_1+z^2 A_2, \end{aligned}$$

with both \(A_0\) and \(A_2\) invertible and semi-simple.

Jimbo and Sakai [20] showed that isomonodromic deformation of such a linear system, as the eigenvalues of \(A_0\) as well as two of the zeros of the determinant of A(z) evolve via multiplication by q, defines an evolution of the coefficient matrix A(z) which is birationally equivalent to \(q\text {P}_{\text {VI}}^{aux}\).

In Sect. 3.1, we show that the linear system (3.1) can always be normalised to the standard form (2.1) we use in this paper. Then, in Sect. 3.2, we formulate the main results of Jimbo and Sakai [20] regarding isomonodromic deformation of the linear system (2.1) and prove Lemma 2.2.

Finally, in Sect. 3.3, we show how the linear system (2.1) can be recovered from RHP I, defined in Definition 2.7, yielding in particular Lemma 2.5.

3.1 Normalising the linear system

In this section we normalise the linear system (3.1) to the standard form (2.1).

Recall that \(A_0\) and \(A_2\) are semi-simple and we denote their eigenvalues by \(\{\sigma _1,\sigma _2\}\) and \(\{\mu _1,\mu _2\}\) respectively. By means of gauging the linear system with a constant matrix, \(Y(z)\mapsto G Y(z)\), so that \(A(z)\mapsto G A(z)G^{-1}\), we may ensure that \(A_2={\text {diag}}(\mu _1,\mu _2)\) is diagonal.

We further denote the zeros of the determinant of A(z) by \(x_k\), \(1\le k\le 4\), so that

$$\begin{aligned} |A(z)|=\mu _1\mu _2(z-x_1)(z-x_2)(z-x_3)(z-x_4). \end{aligned}$$
(3.2)

Evaluating this determinant at \(z=0\) gives the identity

$$\begin{aligned} \sigma _1\sigma _2=\mu _1\mu _2 x_1 x_2 x_3 x_4. \end{aligned}$$

By means of a scalar gauge as well as a scaling of the independent variable,

$$\begin{aligned} Y(z)\mapsto g(z)Y(cz),\quad g(z):=z^{\log _q(s)},\quad c,s\in {\mathbb {C}}^*, \end{aligned}$$

so that the linear system transforms as \(A(z)\mapsto s A(cz)\), we may ensure that

$$\begin{aligned} \mu _1\mu _2=1,\quad x_3x_4=1,\quad \sigma _1\sigma _2=x_1 x_2. \end{aligned}$$

We introduce a time variable t, satisfying \(t^2=\sigma _1\sigma _2\), and four nonzero parameters \(\kappa =(\kappa _0,\kappa _t,\kappa _1,\kappa _\infty )\), through

$$\begin{aligned}&\sigma _1=\kappa _0^{+1}t,{} & {} x_1=\kappa _t^{+1}t,{} & {} x_3=\kappa _1^{+1},{} & {} \mu _1=\kappa _\infty ^{+1},\\&\sigma _2=\kappa _0^{-1}t,{} & {} x_2=\kappa _t^{-1}t,{} & {} x_4=\kappa _1^{-1},{} & {} \mu _2=\kappa _\infty ^{-1}, \end{aligned}$$

and note that the linear system (3.1) has now been normalised to the form (2.1).
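A minimal numerical sketch of this normalisation (ours): starting from a random degree-two matrix polynomial with \(A_2\) already diagonal, it determines c and s from the eigenvalues of \(A_2\) and the zeros of |A(z)|, forms \(B(z)=sA(cz)\), and verifies the three conditions \(\mu _1\mu _2=1\), \(x_3x_4=1\), \(\sigma _1\sigma _2=x_1x_2\).

```python
import numpy as np

rng = np.random.default_rng(1)
A2 = np.diag([1.3 + 0.4j, 0.7 - 0.2j])                      # semi-simple and already diagonalised
A1 = rng.standard_normal((2, 2)) + 1j * rng.standard_normal((2, 2))
A0 = rng.standard_normal((2, 2)) + 1j * rng.standard_normal((2, 2))

def det_poly(A0, A1, A2):
    # coefficients (descending powers of z) of |A(z)| for A(z) = A0 + z A1 + z^2 A2
    p = lambda i, j: np.array([A2[i, j], A1[i, j], A0[i, j]])
    return np.polymul(p(0, 0), p(1, 1)) - np.polymul(p(0, 1), p(1, 0))

x  = np.roots(det_poly(A0, A1, A2))        # the zeros x_1, ..., x_4 of |A(z)|
mu = np.diag(A2)                           # the eigenvalues mu_1, mu_2 of A_2
c  = np.sqrt(x[2] * x[3])                  # which pair is labelled (x_3, x_4) is a convention
s  = 1 / (c**2 * np.sqrt(mu[0] * mu[1]))   # then s^2 c^4 mu_1 mu_2 = 1

# normalised system B(z) = s A(cz), realised by Y(z) -> z^{log_q(s)} Y(cz)
B0, B1, B2 = s * A0, s * c * A1, s * c**2 * A2
print(np.prod(np.diag(B2)))                                     # ~ 1 : mu_1 mu_2 = 1
xn = np.sort_complex(np.roots(det_poly(B0, B1, B2)))
print(np.allclose(xn, np.sort_complex(x / c)), x[2]*x[3]/c**2)  # zeros scale to x_k / c, x_3 x_4 = 1
print(np.linalg.det(B0), x[0] * x[1] / c**2)                    # equal : sigma_1 sigma_2 = x_1 x_2
```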

3.2 Isomonodromic deformation of the linear system

In this section we state important results by Jimbo and Sakai [20] on the isomonodromic deformation of the linear system (2.1). Here we recall that isomonodromic deformation stands for deformation as \(t\rightarrow q\,t\) such that \(P(z,qt)=P(z,t)\), or equivalently, such that the connection matrix satisfies

$$\begin{aligned} C(z,qt)=z\,C(z,t) \end{aligned}$$
(3.3)

Theorem 3.1

(Jimbo and Sakai [20]). Considering the linear system (2.1), Eq. (3.3) holds if and only if both \(Y_0(z,t)\) and \(Y_\infty (z,t)\), defined in Eq. (2.5), satisfy

$$\begin{aligned} Y(z,qt)=B(z,t)Y(z,t), \end{aligned}$$
(3.4)

for an (a posteriori unique), rational in z, matrix function B(zt), which takes the form

$$\begin{aligned} B(z,t)=\frac{z^2 I+zB_0(t)}{(z-q\kappa _t^{+1}t)(z-q\kappa _t^{-1}t)}. \end{aligned}$$

We proceed by making the time-evolution defined by (3.4) more explicit. Note that compatibility of the linear system (2.1) and time deformation (3.4) amounts to the following evolution of the coefficient matrix A,

$$\begin{aligned} A(z,qt)B(z,t)=B(qz,t)A(z,t), \end{aligned}$$
(3.5)

as well as the following evolution of the diagonalising matrix H(t) in (2.4),

$$\begin{aligned} H(qt)=B_0(t)H(t). \end{aligned}$$
(3.6)

We use the standard coordinates \(f=f(t),g=g(t)\) and \(w=w(t)\), defined by Eq. (2.7), on the linear system; we repeat their definition here for the convenience of the reader,

$$\begin{aligned} A_{12}(z,t)&=\kappa _\infty ^{-1} w(z-f),\nonumber \\ A_{22}(f,t)&=q(f-\kappa _1)(f-\kappa _1^{-1})g. \end{aligned}$$
(3.7)

Then the linear system is given in terms of \(\{f,g,w\}\) by

$$\begin{aligned} A(z,t)=\begin{pmatrix} \kappa _\infty ((z-f)(z-\alpha )+g_1) &{}\quad \kappa _\infty ^{-1} w(z-f)\\ \kappa _\infty w^{-1}(\gamma z+\delta ) &{}\quad \kappa _\infty ^{-1}((z-f)(z-\beta )+g_2) \end{pmatrix}, \end{aligned}$$

where

$$\begin{aligned} g_1&=q^{-1}\kappa _\infty ^{-1}(f-\kappa _t t)(f-\kappa _t^{-1}t)g^{-1},\nonumber \\ g_2&=q\kappa _\infty (f-\kappa _1)(f-\kappa _1^{-1})g, \end{aligned}$$
(3.8)

and, temporarily using the notation \(\mathring{\kappa }=\kappa +\kappa ^{-1}\),

$$\begin{aligned} \alpha&=\frac{1}{(1-\kappa _\infty ^2)f} \left( \kappa _\infty ^2 g_1-\kappa _\infty \mathring{\kappa }_0t+g_2+(\mathring{\kappa }_tt+\mathring{\kappa }_1)f-2 f^2\right) ,\\ \beta&=\frac{1}{(\kappa _\infty ^2-1)f} \left( \kappa _\infty ^2 g_1-\kappa _\infty \mathring{\kappa }_0t+g_2+\kappa _\infty ^2(\mathring{\kappa }_tt+\mathring{\kappa }_1)f-2 \kappa _\infty ^2f^2\right) ,\\ \gamma&=g_1+g_2+f^2+2(\alpha +\beta )f+\alpha \beta -(t^2+\mathring{\kappa }_t\mathring{\kappa }_1t+1),\\ \delta&=f^{-1}(t^2-(g_1+\alpha f)(g_2+\beta f)), \end{aligned}$$
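The parametrisation above can be verified symbolically. The following SymPy sketch (ours) checks that it reproduces the determinant (2.3), and that \(A_0\) has trace \((\kappa _0+\kappa _0^{-1})t\) and determinant \(t^2\), so that its eigenvalues are \(\kappa _0^{\pm 1}t\) as required by (2.4).

```python
import sympy as sp

z, t, f, g, w, q = sp.symbols('z t f g w q', nonzero=True)
k0, kt, k1, kinf = sp.symbols('kappa_0 kappa_t kappa_1 kappa_inf', nonzero=True)

ring = lambda k: k + 1/k                      # the abbreviation  k + k^{-1}  used above

g1 = (f - kt*t)*(f - t/kt)/(q*kinf*g)         # Eq. (3.8)
g2 = q*kinf*(f - k1)*(f - 1/k1)*g
S = kinf**2*g1 - kinf*ring(k0)*t + g2
alpha = (S + (ring(kt)*t + ring(k1))*f - 2*f**2)/((1 - kinf**2)*f)
beta  = (S + kinf**2*((ring(kt)*t + ring(k1))*f - 2*f**2))/((kinf**2 - 1)*f)
gamma = g1 + g2 + f**2 + 2*(alpha + beta)*f + alpha*beta - (t**2 + ring(kt)*ring(k1)*t + 1)
delta = (t**2 - (g1 + alpha*f)*(g2 + beta*f))/f

A = sp.Matrix([[kinf*((z - f)*(z - alpha) + g1), w*(z - f)/kinf],
               [kinf*(gamma*z + delta)/w,        ((z - f)*(z - beta) + g2)/kinf]])

target = (z - kt*t)*(z - t/kt)*(z - k1)*(z - 1/k1)
print(sp.cancel(A.det() - target))                  # 0 : determinant (2.3)
A0 = A.subs(z, 0)
print(sp.cancel(A0.trace() - ring(k0)*t))           # 0 : tr A_0 = (kappa_0 + kappa_0^{-1}) t
print(sp.cancel(A0.det() - t**2))                   # 0 : |A_0| = t^2
```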

Equation (3.5) is equivalent to the following conditions on the matrix \(B_0(t)\),

$$\begin{aligned} \begin{aligned}&A(q\kappa _t^{\pm 1}t, qt)(q\kappa _t^{\pm 1}t I+B_0(t))=0,\\&(q\kappa _t^{\pm 1}t I+B_0(t))A(\kappa _t^{\pm 1}t, t)=0,\\&A_0(qt)B_0(t)=qB_0(t)A_0(t). \end{aligned} \end{aligned}$$

The first two equations follow from the fact that both the left and right-hand side of (3.5) are necessarily analytic in \(z\in {\mathbb {C}}\) and the third follows from equating the degree one terms in z of both sides of Eq. (3.5).

These equations form an over-determined system for \(B_0=B_0(t)\). They allow one to express \(B_0\) explicitly in terms of \(\{f,g,w\}\), for example

$$\begin{aligned} B_0=\begin{pmatrix} \frac{q}{1-q}({\overline{f}}+{\overline{\beta }}-f-\beta ) &{}\quad -\frac{q({\overline{w}}-w)}{q\kappa _\infty ^2-1}\\ \frac{q\kappa _\infty ^2}{\kappa _\infty ^2-q}\left( \frac{{\overline{\gamma }}}{{\overline{w}}}-\frac{\gamma }{w}\right) &{}\quad \frac{q}{1-q}({\overline{f}}+{\overline{\alpha }}-f-\alpha ) \end{pmatrix}, \end{aligned}$$

and Jimbo and Sakai [20] showed that the above equations are equivalent to the \(q\text {P}_{\text {VI}}^{\text {aux}}\) time evolution of (f, g, w).

Furthermore, by means of a direct computation, one can check that Eqs. (2.4) and (3.6) translate to the elements of the diagonalising matrix \(H=(h_{ij})_{1\le i,j\le 2}\) satisfying

$$\begin{aligned} \frac{{\overline{h}}_{11}}{h_{11}}&=-\frac{q t}{f}\frac{\kappa _\infty ({\overline{g}}-\kappa _0 t)}{\kappa _0({\overline{g}}-\kappa _\infty )},\end{aligned}$$
(3.9a)
$$\begin{aligned} \frac{{\overline{h}}_{12}}{h_{12}}&=-\frac{q t}{f}\kappa _0\kappa _\infty \frac{({\overline{g}}-t/\kappa _0 )}{({\overline{g}}-\kappa _\infty )},\end{aligned}$$
(3.9b)
$$\begin{aligned} \frac{h_{21}}{h_{11}}&=\kappa _\infty \frac{\kappa _\infty g_1+\kappa _\infty f \alpha -t\kappa _0}{fw},\end{aligned}$$
(3.9c)
$$\begin{aligned} \frac{h_{22}}{h_{12}}&=\kappa _\infty \frac{\kappa _\infty g_1+\kappa _\infty f \alpha -t\kappa _0^{-1}}{fw}. \end{aligned}$$
(3.9d)

We are now in a position to prove Lemma  2.2.

Proof of Lemma 2.2

We start by showing that the linear system \(A=A(z,t)\) is regular in t away from values where \((f,g)=(\infty ,\kappa _\infty )\). To this end, consider the parametrisation of \(A=A(z,t)\) with respect to (f, g, w). By direct inspection, one can see that this parametrisation is regular for all values of \((f,g)\in {\mathbb {C}}^*\times {\mathbb {C}}^*\) and \(w\in {\mathbb {C}}^*\). The same is true near each of the six basepoints \(b_k\), \(1\le k\le 6\), defined in Eq. (1.3).

For example, consider the basepoint \(b_3=(\kappa _t t,0)\). We apply a change of variables,

$$\begin{aligned} f-\kappa _t t=FG,\quad g=G, \end{aligned}$$

so that \(\{F\in {\mathbb {C}},G=0\}\) lies on the exceptional line above \(b_3\), after a local blow up. The parametrisation of the matrix polynomial A is regular at \(G=0\), and takes the form

$$\begin{aligned} A(z,t)=\begin{pmatrix} \kappa _\infty ((z-\kappa _t t)(z-\alpha )+g_1) &{}\quad \kappa _\infty ^{-1} w(z-\kappa _t t)\\ \kappa _\infty w^{-1}(\gamma z+\delta ) &{}\quad \kappa _\infty ^{-1}(z-\kappa _t t)(z-\beta ) \end{pmatrix}, \end{aligned}$$

with

$$\begin{aligned} g_1=q^{-1}\kappa _\infty ^{-1}(\kappa _1-\kappa _1^{-1})t F. \end{aligned}$$

Geometrically, the line \(\{F\in {\mathbb {C}},G=0\}\), above \(b_3\), parametrises coefficient matrices A whose second column vanishes at \(z=\kappa _t t\). The one remaining point on the exceptional line above \(b_3\), which does not lie on this line, is an inaccessible initial value. Namely, the corresponding formal solution of \(q\text {P}_{\text {VI}}\) never takes values in \({\mathbb {C}}^*\times {\mathbb {C}}^*\) and is thus not a genuine solution. We conclude that A is regular for (f, g) near \(b_3\). One shows similarly that A is regular near the other basepoints \(b_k\), \(1\le k\le 6\), \(k\ne 3\).

The situation is slightly more involved for the remaining basepoints \(b_7\) and \(b_8\), as the auxiliary equation (1.2) is singular at these points. Firstly, as (f, g) approaches \(b_8=(\infty ,\kappa _\infty ^{-1}q^{-1})\), \({\overline{g}}\) approaches \(\kappa _\infty \) and consequently w vanishes, due to the auxiliary equation. Consider thus the change of variables

$$\begin{aligned} f=F^{-1},\quad g-\kappa _\infty ^{-1}q^{-1}=FG,\quad w=FW. \end{aligned}$$

In the local chart \(\{F,G,W\}\), the coefficient matrix A is regular at \(F=0\). Geometrically, the line \(\{F=0,G\in {\mathbb {C}}\}\), above \(b_8\), parametrises coefficient matrices A for which the entry \(A_{12}(z)\) is constant. In particular, A is regular near \(b_8\).

Finally, by the same reasoning, it follows that \(w\rightarrow \infty \) as (f, g) approaches \(b_7=(\infty ,\kappa _\infty )\), and that the coefficient matrix A is thus singular there.

We conclude that A(z, t) is singular at \(t=t_*\) if and only if \((f(t_*),g(t_*))=(\infty ,\kappa _\infty )\). Correspondingly, we write

$$\begin{aligned} {\mathfrak {M}}=\left\{ m\in {\mathbb {Z}}:(f(q^mt_0),g(q^mt_0))\ne (\infty ,\kappa _\infty )\right\} . \end{aligned}$$
(3.10)

For every \(t\in q^{{\mathfrak {M}}}t_0\), we choose any H(t) satisfying (2.4), but not necessarily (3.6), and let C(z, t) denote the corresponding connection matrix. We proceed with proving Eq. (2.9) in the lemma.

To prove (2.9), it is enough to show that, for any \(m\in {\mathfrak {M}}\),

$$\begin{aligned} C(z,qt_m)=z \Delta C(z,t_m), \end{aligned}$$

for some diagonal matrix \(\Delta \), if \(m+1\in {\mathfrak {M}}\), and

$$\begin{aligned} C(z,q^2t_m)=z^2 \Delta C(z,t_m), \end{aligned}$$
(3.11)

for some diagonal matrix \(\Delta \), if \(m+1\notin {\mathfrak {M}}\) (so that necessarily \(m+2\in {\mathfrak {M}}\)).

The first case is a direct consequence of Theorem 3.1. We may further ensure that \(\Delta =I\) by imposing Eq. (3.6) at \(t=t_m\).

As to the second case, we note that, analogously to the proof of Theorem 3.1 by Jimbo and Sakai [20], one can show that \(P(z,q^2t)=P(z,t)\) if and only if \(Y_0(z,t)\) and \(Y_\infty (z,t)\) both satisfy

$$\begin{aligned} Y(z,q^2t)=F(z,t)Y(z,t), \end{aligned}$$

for a matrix function F(z, t), rational in z and a posteriori unique, which takes the form

$$\begin{aligned} F(z,t)=\frac{z^4 I+z^3 F_1(t)+z^2F_0(t)}{(z-\kappa _t^{+1}qt)(z-\kappa _t^{-1}qt)(z-\kappa _t^{+1}t)(z-\kappa _t^{-1}t)}. \end{aligned}$$

The corresponding time evolution of the coefficient matrix

$$\begin{aligned} A(z,q^2t)=F(qz,t)A(z,t)F(z,t)^{-1}, \end{aligned}$$

is equivalent to two iterations of \(q\text {P}_{\text {VI}}^{aux}\), and

$$\begin{aligned} F(z,t)=B(z,qt)B(z,t). \end{aligned}$$

By specialising to \(t=t_m\), we obtain (3.11). We may further ensure that \(\Delta =I\), by imposing

$$\begin{aligned} H(q^2 t_m)=F_0(t_m)H(t_m). \end{aligned}$$

This establishes Eq. (2.9).

The last statement of the lemma follows from the fact that rescaling \(H(t)\mapsto H(t)D(t)\) yields \(\Psi _0(z,t)\mapsto \Psi _0(z,t)D(t)\) and thus \(C(z,t)\mapsto D(t)^{-1} C(z,t)\). \(\square \)

3.3 On the \(q\text {P}_{\text {VI}}\) RHP

In Sect. 2.2, we formulated the main Riemann–Hilbert problem for the \(q\text {P}_{\text {VI}}\) equation, RHP I, in Definition 2.7. Let (f, g) be a solution of \(q\text {P}_{\text {VI}}(\kappa ,t_0)\) and [C(z)] be its corresponding monodromy in the monodromy manifold via the monodromy mapping, see Definition 2.3. Then Eq. (2.12) defines a solution of RHP I. In this section, we show how we may reconstruct the solution (f, g) from the solution of RHP I, giving in particular formulas (2.16). This furthermore yields a proof of Lemma 2.5.

Firstly, we prove Lemma 2.8.

Proof of Lemma 2.8

Note that the determinant of \(z^{m}C(z)\) may be written as

$$\begin{aligned} z^{2m}|C(z)|=c_m^{-1}\theta _q\left( \kappa _t^{+1}\frac{z}{t_m},\kappa _t^{-1}\frac{z}{t_m},\kappa _1^{+1}z,\kappa _1^{-1}z\right) ,\quad t_m=q^mt_0, \end{aligned}$$

for some \(c_m\in {\mathbb {C}}^*\). Assume we have a solution \(\Psi ^{{(}{m}{)}}(z)\) of RHP I, defined in Definition 2.7. Then its determinant \(\Delta ^{{(}{m}{)}}(z)\) is analytic on \({\mathbb {C}}\setminus \gamma ^{{(}{m}{)}}\), satisfies the jump condition

$$\begin{aligned} \Delta _+^{{(}{m}{)}}(z)=\Delta _-^{{(}{m}{)}}(z)\, c_m^{-1}\theta _q\left( \kappa _t^{+1}\frac{z}{t_m},\kappa _t^{-1}\frac{z}{t_m},\kappa _1^{+1}z,\kappa _1^{-1}z\right) \quad (z\in \gamma ^{{(}{m}{)}}), \end{aligned}$$

and \(\Delta ^{{(}{m}{)}}(z)=1+{\mathcal {O}}(z^{-1})\) as \(z\rightarrow \infty \).

This scalar RHP is uniquely solved by

$$\begin{aligned} \Delta ^{{(}{m}{)}}(z)= {\left\{ \begin{array}{ll} \left( \kappa _t^{+1}\frac{qt}{z},\kappa _t^{-1}\frac{qt}{z},\kappa _1^{+1}\frac{q}{z},\kappa _1^{-1}\frac{q}{z};q\right) _\infty &{} \text { if }z\in D_+,\\ c_m\left( \kappa _t^{+1}\frac{z}{t},\kappa _t^{-1}\frac{z}{t},\kappa _1^{+1}z,\kappa _1^{-1}z;q\right) _\infty ^{-1} &{} \text { if }z\in D_-. \end{array}\right. } \end{aligned}$$
(3.12)

Indeed, the right-hand side satisfies this scalar RHP and, denoting the ratio of the left- and right-hand sides of (3.12) by r(z), it follows that r(z) is an entire function on the complex plane satisfying \(r(z)\rightarrow 1\) as \(z\rightarrow \infty \). By Liouville’s theorem, \(r(z)\equiv 1\), which yields Eq. (3.12). In particular, the solution \(\Psi ^{{(}{m}{)}}(z)\) is globally invertible on \({\mathbb {C}}\).

Suppose we have another solution \({\widetilde{\Psi }}^{{(}{m}{)}}(z)\) of RHP I. Then the quotient

$$\begin{aligned} R(z)={\widetilde{\Psi }}^{{(}{m}{)}}(z)\Psi ^{{(}{m}{)}}(z)^{-1}, \end{aligned}$$

is analytic on \({\mathbb {C}}\setminus \gamma ^{{(}{m}{)}}\). Furthermore, R(z) has a trivial jump on \(\gamma ^{{(}{m}{)}}\), i.e. \(R_+(z)=R_-(z)\). Therefore, R(z) extends to an analytic function on the entire complex plane. Finally, we know that \(R(z)=I+{\mathcal {O}}(z^{-1})\) as \(z\rightarrow \infty \), thus \(R(z)\equiv I\), again by Liouville’s theorem, and the lemma follows. \(\square \)
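A quick numerical check of the explicit solution (3.12) used in the proof above is possible once a convention for the q-theta function is fixed. The Python sketch below assumes \(\theta _q(x)=(x;q)_\infty (q/x;q)_\infty \), with the usual multiplicative shorthand for several arguments, truncates the infinite products, and sets \(c_m=1\), which is harmless since \(c_m\) cancels between the jump and the \(D_-\) branch; it then verifies the scalar jump relation at a sample point.

```python
import numpy as np

q, N = 0.4 + 0.0j, 200                  # |q| < 1; truncation order of the products
kt, k1, t = 1.3, 0.8, 0.9               # kappa_t, kappa_1 and t = t_m (illustrative)

def qpoch(*xs):                         # truncated (x_1, ..., x_r; q)_infinity
    out = 1.0 + 0.0j
    for xx in xs:
        out *= np.prod(1 - xx * q ** np.arange(N))
    return out

def theta(*xs):                         # assumed convention: theta_q(x) = (x;q)_inf (q/x;q)_inf
    return np.prod([qpoch(xx) * qpoch(q / xx) for xx in xs])

z = 0.65 * np.exp(0.3j)                 # a sample point on the jump contour

delta_plus = qpoch(kt * q * t / z, q * t / (kt * z), k1 * q / z, q / (k1 * z))
delta_minus = 1 / qpoch(kt * z / t, z / (kt * t), k1 * z, z / k1)   # with c_m = 1
jump = theta(kt * z / t, z / (kt * t), k1 * z, z / k1)              # c_m^{-1} theta_q(...)

print(np.isclose(delta_plus, delta_minus * jump))                   # True
```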

Starting with a solution of \(q\text {P}_{\text {VI}}\), we showed in Sect. 2.2 how to obtain a connection matrix and hence, via (2.12), a solution of RHP I. We now describe how, conversely, any solution of RHP I leads to a solution of \(q\text {P}_{\text {VI}}\).

Take a connection matrix \(C(z)\in {\mathfrak {C}}(\kappa ,t_0)\) and suppose RHP I has a solution for at least one \(m\in {\mathbb {Z}}\). We write

$$\begin{aligned} {\mathfrak {M}}:=\{m\in {\mathbb {Z}}:\Psi ^{{(}{m}{)}}(z) \text { exists}\}. \end{aligned}$$

For \(m\in {\mathfrak {M}}\), define \(A(z,q^mt_0)\) by Eq. (2.13). Due to the jump conditions of \(\Psi ^{{(}{m}{)}}(z)\) in RHP I, the matrix \(A(z,q^mt_0)\) has trivial jumps on \(\gamma ^{{(}{m}{)}}\) and \(q^{-1}\gamma ^{{(}{m}{)}}\) and thus extends to a single-valued function on the complex z-plane. Furthermore, it follows from the global analyticity and invertibility of \(\Psi ^{{(}{m}{)}}(z)\), see Lemma 2.8, that \(A(z,q^mt_0)\) is entire. Finally, as \(\Psi ^{{(}{m}{)}}(z)=I+{\mathcal {O}}(z^{-1})\) as \(z\rightarrow \infty \), it follows that \(A(z,q^mt_0)\) is a degree two matrix polynomial satisfying

$$\begin{aligned} A(z,q^mt_0)&=z^2\kappa _\infty ^{\sigma _3}+{\mathcal {O}}(z)\quad (z\rightarrow \infty ),\\ A(0,q^mt_0)&=H(q^m t_0) q^m t_0\kappa _0^{\sigma _3} H(q^m t_0)^{-1},\quad H(q^m t_0):=\Psi ^{{(}{m}{)}}(0), \end{aligned}$$

and, due to Eqs. (3.12) and (2.13),

$$\begin{aligned} |A(z,q^mt_0)|=(z-\kappa _tq^m t_0)(z-\kappa _t^{-1}q^m t_0)(z-\kappa _1)(z-\kappa _1^{-1}). \end{aligned}$$

Thus, \(A(z,q^mt_0)\) is a coefficient matrix of the form (2.2), for \(m\in {\mathfrak {M}}\). By construction, the connection matrix associated with \(A(z,q^mt_0)\) is given by \(z^mC(z)\), \(m\in {\mathfrak {M}}\).

For all \(m\in {\mathfrak {M}}\), assume that

$$\begin{aligned} A_{12}(z,q^mt_0)\not \equiv 0. \end{aligned}$$
(3.13)

Then the corresponding coordinates (f, g, w) are well-defined on A, via Eq. (2.7), and they form a solution of \(q\text {P}_{\text {VI}}^{\text {aux}}(\kappa ,t_0)\). Furthermore, we can read the values of (f, g, w) directly from the solution \(\Psi ^{{(}{m}{)}}(z)\) of the RHP through formulas (2.16).

These formulas are derived as follows. By expanding Eq. (2.13) around \(z=\infty \), and considering the (1, 2) and (1, 1) entries, we obtain respectively

$$\begin{aligned} w=(q^{-1}-\kappa _\infty ^2)u_{12},\quad \alpha =(1-q^{-1})u_{11}-f. \end{aligned}$$
(3.14)

The first equation is precisely Eq. (2.16a) for w. The formula (2.16b) for f follows by subtracting (3.9c) from (3.9d) and solving for f. By substituting \(\alpha =(1-q^{-1})u_{11}-f\) in Eq. (3.9c) we obtain Eq. (2.16d) for \(g_1\). Finally formula (2.16c) for g now follows from Eq. (3.8).

We are now in a position to prove Lemma 2.5.

Proof of Lemma 2.5

We have shown that, for any solution (f, g) of \(q\text {P}_{\text {VI}}(\kappa ,t_0)\), there exists a connection matrix \(C(z)\in {\mathfrak {C}}(\kappa ,t_0)\), such that the values of (f, g) may be read directly from the solution \(\Psi ^{{(}{m}{)}}(z)\) of RHP I in Definition 2.7, via Eq. (2.13). Here \([C(z)]=\textsc {M}\in {\mathcal {M}}(\kappa ,t_0)\) is the monodromy attached to (f, g) via the monodromy mapping.

To prove the lemma, it remains to be shown that these formulas are invariant under choosing a different representative \({\widetilde{C}}(z)\), with \([{\widetilde{C}}(z)]=\textsc {M}\), of the monodromy, so that (f, g) indeed only depends on the class \(\textsc {M}\). We proceed in proving this statement.

As \([{\widetilde{C}}(z)]=[C(z)]\), there exist invertible diagonal matrices \(D_{1,2}\) such that

$$\begin{aligned} {\widetilde{C}}(z)=D_1C(z)D_2. \end{aligned}$$

Thus, the solution \({\widetilde{\Psi }}^{{(}{m}{)}}(z)\) of RHP I, with \(C(z)\rightarrow {\widetilde{C}}(z)\), is related to \(\Psi ^{{(}{m}{)}}(z)\) by

$$\begin{aligned} {\widetilde{\Psi }}^{{(}{m}{)}}(z)={\left\{ \begin{array}{ll} D_2^{-1}\Psi ^{{(}{m}{)}}(z)D_2 &{} \text {if }z\in D_+^{{(}{m}{)}},\\ D_2^{-1}\Psi ^{{(}{m}{)}}(z)D_1^{-1} &{} \text {if }z\in D_-^{{(}{m}{)}}.\\ \end{array}\right. } \end{aligned}$$

Consequently, the matrix functions \({\widetilde{H}}\) and \({\widetilde{U}}\), defined by Eqs. (2.14) and (2.15) for \({\widetilde{\Psi }}^{{(}{m}{)}}(z)\), are related to H and U by

$$\begin{aligned} {\widetilde{H}}(t)=D_2^{-1}H(t)D_1^{-1},\quad {\widetilde{U}}(t)=D_2^{-1}U(t)D_2. \end{aligned}$$

The formulas (2.16b) and (2.16c) for f and g are invariant under such rescaling and the lemma follows. \(\square \)

We finish this section with some remarks on assumption (3.13). Firstly, note that this is a necessary assumption for the coordinates (f, g, w) to be well-defined. Now, suppose that \(A_{12}(z,q^mt_0)\equiv 0\), for some \(m\in {\mathfrak {M}}\), and write \(t_m=q^m t_0\). Then, we have

$$\begin{aligned} A_{11}(z,t_m)=\kappa _\infty (z-v_1)(z-v_2),\quad A_{22}(z,t_m)=\kappa _\infty ^{-1}(z-v_3)(z-v_4), \end{aligned}$$

where, by Eq. (2.3),

$$\begin{aligned} \{v_1,v_2,v_3,v_4\}=\{\kappa _t^{+1}t_m,\kappa _t^{-1}t_m,\kappa _1^{+1},\kappa _1^{-1}\}. \end{aligned}$$

Furthermore, as the eigenvalues of \(A(0,t_m)\) are \(\kappa _0^{\pm 1}t_m\), necessarily

$$\begin{aligned} \{\kappa _\infty v_1v_2,\kappa _\infty ^{-1}v_3 v_4\}=\{A_{11}(0,t_m),A_{22}(0,t_m)\}=\{\kappa _0 t_m,\kappa _0^{-1}t_m\}. \end{aligned}$$

By comparing the different possible values of \(v_{1},\ldots , v_4\) in the above two equations, it follows that the parameters must satisfy

$$\begin{aligned} \kappa _0^{\epsilon _0}\kappa _t^{\epsilon _t} \kappa _1^{\epsilon _1} \kappa _\infty ^{\epsilon _\infty }=1\quad \text {or}\quad \kappa _0^{\epsilon _0} \kappa _\infty ^{\epsilon _\infty }t_m=1, \end{aligned}$$

for some \(\epsilon _j\in \{\pm 1\}\), \(j=0,t,1,\infty \). So, at least one of the non-splitting conditions (1.5) is violated.

Furthermore, from the defining equations of \(\Psi _{0}\) and \(\Psi _{\infty }\), Eq. (2.5), it follows that \(\Psi _{\infty }(z,t_m)\) is lower-triangular and either \(\left( \Psi _{0}\right) _{11}(z,t_m)\) or \(\left( \Psi _{0}\right) _{12}(z,t_m)\) is identically zero. In particular, either \(C_{12}(z)\equiv 0\) or \(C_{22}(z)\equiv 0\), which means that C(z) is reducible, see Definition 2.9.

We discuss RHP I with reducible monodromy in further detail in Sect. 4.2.

4 Solvability, Reducible Monodromy and Orthogonal Polynomials

In this section we study the solvability of RHP I, defined in Definition 2.7, and consequently the invertibility of the monodromy mapping introduced in Definition 2.3. In Sect. 4.1, we prove Lemma 2.10 and Theorem 2.12. In Sect. 4.2, we discuss RHP I with reducible monodromy.

4.1 Solvability

We start this section by proving Lemma 2.10. To this end, we briefly recall some fundamental properties of q-theta functions, i.e. analytic functions \(\theta (z)\) on \({\mathbb {C}}^*\) such that \(\theta (z)/\theta (qz)\) is a monomial. For \(\alpha \in {\mathbb {C}}^*\) and \(n\in {\mathbb {N}}\), we denote by \(V_n(\alpha )\) the set of all analytic functions \(\theta (z)\) on \({\mathbb {C}}^*\), satisfying

$$\begin{aligned} \theta (qz)=\alpha z^{-n}\theta (z). \end{aligned}$$
(4.1)

We note that \(V_n(\alpha )\) is a vector space of dimension n if \(n\ge 1\), see e.g. [29].

For \(r\in {\mathbb {R}}_+\), we call

$$\begin{aligned} D_q(r):=\{|q|r\le |z|<r\}, \end{aligned}$$

a fundamental annulus. As described in the following lemma, q-theta functions are, up to scaling, completely determined by the location of their zeros within any fixed fundamental annulus.

Lemma 4.1

Let \(\alpha \in {\mathbb {C}}^*\), \(n\in {\mathbb {N}}\) and \(\theta (z)\) be a nonzero element of \(V_n(\alpha )\). Then, within any fixed fundamental annulus, \(\theta (z)\) has precisely n zeros, counting multiplicity, say \(\{a_1,\ldots ,a_n\}\), and there exist unique \(c\in {\mathbb {C}}^*\) and \(s\in {\mathbb {Z}}\) such that

$$\begin{aligned} \theta (z)=c z^s\theta _q(z/a_1,\ldots ,z/a_{n}),\quad \alpha =(-1)^n q^s a_1\cdot \ldots \cdot a_n. \end{aligned}$$
(4.2)

Conversely, for any choice of the parameters, Eq. (4.2) defines an element of \(V_n(\alpha )\).

Proof

See for instance [29]. \(\square \)
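As a numerical sanity check of Lemma 4.1, the following sketch (same assumed convention \(\theta _q(x)=(x;q)_\infty (q/x;q)_\infty \), with arbitrary illustrative data) builds \(\theta (z)=cz^s\theta _q(z/a_1,\ldots ,z/a_n)\) as in Eq. (4.2) and verifies the functional equation (4.1) with \(\alpha =(-1)^nq^sa_1\cdots a_n\).

```python
import numpy as np

q, N = 0.35 + 0.0j, 200
def qpoch(x): return np.prod(1 - x * q ** np.arange(N))
def theta_q(*xs):                       # assumed convention: theta_q(x) = (x;q)_inf (q/x;q)_inf
    return np.prod([qpoch(x) * qpoch(q / x) for x in xs])

n, s, c = 3, 2, 1.7 - 0.4j
a = np.array([0.9 * np.exp(0.5j), 1.2 * np.exp(2.1j), 0.7 * np.exp(-1.3j)])   # the zeros a_j

theta = lambda z: c * z**s * theta_q(*(z / a))      # Eq. (4.2)
alpha = (-1) ** n * q**s * np.prod(a)

z = 0.8 * np.exp(0.45j)
print(np.isclose(theta(q * z), alpha * z ** (-n) * theta(z)))       # Eq. (4.1): True
```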

We proceed in proving Lemma 2.10.

Proof of Lemma 2.10

Take a connection matrix \(C(z)\in {\mathfrak {C}}(\kappa ,t_0)\) and suppose that C(z) is reducible. Then C(z) is triangular or anti-triangular.

Assume C(z) is triangular, then

$$\begin{aligned} C_{11}(z)C_{22}(z)=|C(z)|=c\theta _q(z\kappa _t t_0^{-1},z\kappa _t^{-1} t_0^{-1},z\kappa _1,z\kappa _1^{-1}), \end{aligned}$$

for some \(c\in {\mathbb {C}}^*\), where the second equality follows from Definition 2.1. Writing

$$\begin{aligned} (x_1,x_2,x_3,x_4)=(\kappa _t t_0,\kappa _t^{-1} t_0,\kappa _1,\kappa _1^{-1}), \end{aligned}$$

it follows from Lemma 4.1 that

$$\begin{aligned} C_{11}(z)=c_{11}\theta _q(z/x_i,z/x_j)z^n,\quad C_{22}(z)=c_{22}\theta _q(z/x_k,z/x_l)z^{-n}, \end{aligned}$$
(4.3)

for some labeling \(\{i,j,k,l\}=\{1,2,3,4\}\), \(c_{11},c_{22}\in {\mathbb {C}}^*\) and \(n\in {\mathbb {Z}}\).

Furthermore, by Definition 2.1,

$$\begin{aligned} \frac{C_{11}(qz)}{C_{11}(z)}=z^{-2}\frac{\kappa _0}{\kappa _\infty }t_0,\quad \frac{C_{22}(qz)}{C_{22}(z)}=z^{-2}\frac{\kappa _\infty }{\kappa _0}t_0, \end{aligned}$$

which implies

$$\begin{aligned} \frac{\kappa _0}{\kappa _\infty }t_0=x_i x_j q^n,\quad \frac{\kappa _\infty }{\kappa _0}t_0=x_k x_l q^{-n}, \end{aligned}$$
(4.4)

violating the non-splitting conditions (1.5).

Similarly, if C(z) is anti-triangular, then

$$\begin{aligned} \kappa _0 \kappa _\infty t_0=x_i x_j q^n,\quad \frac{1}{\kappa _0 \kappa _\infty }t_0=x_k x_l q^{-n}, \end{aligned}$$
(4.5)

for some re-labeling \(\{i,j,k,l\}=\{1,2,3,4\}\) and \(n\in {\mathbb {Z}}\), again violating the non-splitting conditions (1.5).

Conversely, if the non-splitting conditions (1.5) do not hold, then either the equalities (4.4) or the equalities (4.5) can be realised by a re-labeling \(\{i,j,k,l\}=\{1,2,3,4\}\), for some \(n\in {\mathbb {Z}}\). In the former case, Eq. (4.3) with \(C_{12}(z)\equiv C_{21}(z)\equiv 0\) defines a reducible connection matrix in \({\mathfrak {C}}(\kappa ,t_0)\).

It follows similarly that \({\mathfrak {C}}(\kappa ,t_0)\) contains reducible monodromy in the latter case and the lemma follows. \(\square \)

To study the solvability of RHP I, in Definition 2.7, it is helpful to consider the following slightly more general RHP.

Definition 4.2

(RHP II). Given a connection matrix \(C\in {\mathfrak {C}}(\kappa ,t_0)\) and a family of admissible curves \((\gamma ^{{(}{m}{)}})_{m\in {\mathbb {Z}}}\), for \(m,n\in {\mathbb {Z}}\), find a matrix function \(\Psi ^{{(}{m,n}{)}}(z)\) which satisfies the following conditions.

  1. (i)

    \(\Psi ^{{(}{m,n}{)}}(z)\) is analytic on \({\mathbb {C}}\setminus \gamma ^{{(}{m}{)}}\).

  2. (ii)

    \(\Psi ^{{(}{m,n}{)}}(z')\) has continuous boundary values \(\Psi _-^{{(}{m,n}{)}}(z)\) and \(\Psi _+^{{(}{m,n}{)}}(z)\) as \(z'\) approaches \(z\in \gamma ^{{(}{m}{)}}\) from \(D_-^{{(}{m}{)}}\) and \(D_+^{{(}{m}{)}}\) respectively, related by

    $$\begin{aligned} \Psi _+^{{(}{m,n}{)}}(z)=\Psi _-^{{(}{m,n}{)}}(z)z^mC(z),\quad z\in \gamma ^{{(}{m}{)}}. \end{aligned}$$
  3. (iii)

    \(\Psi ^{{(}{m,n}{)}}(z)\) satisfies

    $$\begin{aligned} \Psi ^{{(}{m,n}{)}}(z)=\left( I+{\mathcal {O}}\left( z^{-1}\right) \right) z^{n\sigma _3}\quad z\rightarrow \infty . \end{aligned}$$

By comparison with RHP I in Definition 2.7, we can identify \(\Psi ^{{(}{m,0}{)}}(z)=\Psi ^{{(}{m}{)}}(z)\). More generally, for any fixed \(n\in {\mathbb {Z}}\), RHP II is equivalent to RHP I, with C(z) replaced by \(C(z)z^{-n\sigma _3}\). In particular, we have the following analog of Lemma 2.8.

Lemma 4.3

For any fixed \(m,n\in {\mathbb {Z}}\), if RHP II in Definition 4.2 has a solution \(\Psi ^{{(}{m,n}{)}}(z)\), then this solution is globally invertible on the complex plane and unique.

Proof

The proof is analogous to that of Lemma 2.8. \(\square \)

Given the uniqueness in the above lemma, we say that \(\Psi ^{{(}{m,n}{)}}(z)\) exists if and only if RHP II has a solution for that value of \(m,n\in {\mathbb {Z}}\).

The main reason for considering the more general RHP above is that we have the following result, due to Birkhoff [1].

Lemma 4.4

For any fixed \(m\in {\mathbb {Z}}\), the solution \(\Psi ^{{(}{m,n}{)}}(z)\) to RHP II, in Definition 4.2, exists for at least one \(n\in {\mathbb {Z}}\).

Proof

See Birkhoff [1, §21] or the proof of Lemma 4.4 in [23]. \(\square \)

Our next step is to study the dynamics of \(\Psi ^{{(}{m,n}{)}}(z)\) as n varies, with the ultimate goal of obtaining criteria for the existence of \(\Psi ^{{(}{m,n}{)}}(z)\) at \(n=0\), as these allow us to prove solvability of RHP I and thus Theorem 2.12.

To this end, if \(\Psi ^{{(}{m,n}{)}}(z)\) exists, we denote its expansion around \(z=\infty \) by

$$\begin{aligned} \Psi ^{{(}{m,n}{)}}(z)=\left( I+z^{-1}U^{{(}{m,n}{)}}+z^{-2}V^{{(}{m,n}{)}}+z^{-3}W^{{(}{m,n}{)}}+{\mathcal {O}}(z^{-4})\right) z^{n\sigma _3}, \end{aligned}$$
(4.6)

as \(z\rightarrow \infty \), and associate a coefficient matrix \(A^{{(}{m,n}{)}}(z)\) as in Eq. (2.13),

$$\begin{aligned} A^{{(}{m,n}{)}}(z)={\left\{ \begin{array}{ll} z^2 \Psi ^{{(}{m,n}{)}}(qz)\kappa _\infty ^{\sigma _3} \Psi ^{{(}{m,n}{)}}(z)^{-1} &{} \text {if } z\in q^{-1}(D_+^{{(}{m}{)}}\cup \gamma ^{{(}{m}{)}}),\\ q^m t_0 \Psi ^{{(}{m,n}{)}}(qz)\kappa _0^{\sigma _3}C(z) \Psi ^{{(}{m,n}{)}}(z)^{-1} &{} \text {if } z\in D_+^{{(}{m}{)}}\cap q^{-1}D_-^{{(}{m}{)}},\\ q^m t_0 \Psi ^{{(}{m,n}{)}}(qz)\kappa _0^{\sigma _3} \Psi ^{{(}{m,n}{)}}(z)^{-1} &{} \text {if } z\in D_-^{{(}{m}{)}}\cup \gamma ^{{(}{m}{)}}. \end{array}\right. } \end{aligned}$$
(4.7)

Then \(A^{{(}{m,n}{)}}(z)\) is a degree two matrix polynomial of the form (2.2) except for a generally different normalisation at \(z=\infty \),

$$\begin{aligned} A^{{(}{m,n}{)}}(z)=z^2 (q^n\kappa _\infty )^{\sigma _3}+{\mathcal {O}}(z)\quad (z\rightarrow \infty ). \end{aligned}$$

In particular, the corresponding coordinates \(f^{{(}{n}{)}}(q^m t_0)\), \(g^{{(}{n}{)}}(q^m t_0)\) and \(w^{{(}{n}{)}}(q^m t_0)\) define a solution of \(q\text {P}_{\text {VI}}(\kappa ^{{(}{n}{)}},t_0)\) with

$$\begin{aligned} \kappa ^{{(}{n}{)}}=(\kappa _0,\kappa _t,\kappa _1,q^n\kappa _\infty ), \end{aligned}$$

if RHP II is solvable in m for that value of n.

We have the following lemma regarding solvability of RHP II as n varies.

Lemma 4.5

Fix \(m,n\in {\mathbb {Z}}\) and suppose that the solution \(\Psi ^{{(}{m,n}{)}}(z)\) of RHP II in Definition 4.2 exists. Then, recalling the definition of the matrices \(U=(u_{ij})\) and \(V=(v_{ij})\) in Eq. (4.6), either

  1. (i)

    \(u_{12}^{{(}{m,n}{)}}\ne 0\), in which case \(\Psi ^{{(}{m,n+1}{)}}(z)\) exists.

  2. (ii)

    \(u_{12}^{{(}{m,n}{)}}= 0\) but \(v_{12}^{{(}{m,n}{)}}\ne 0\), in which case \(\Psi ^{{(}{m,n+1}{)}}(z)\) does not exist but \(\Psi ^{{(}{m,n+2}{)}}(z)\) does exist.

  3. (iii)

    \(u_{12}^{{(}{m,n}{)}}= 0\) and \(v_{12}^{{(}{m,n}{)}}= 0\), in which case \(\Psi ^{{(}{m,n+k}{)}}(z)\) does not exist for any \(k>0\) and necessarily \(C_{12}(z)\equiv 0\) or \(C_{22}(z)\equiv 0\).

Similarly, either

  1. (I)

    \(u_{21}^{{(}{m,n}{)}}\ne 0\), in which case \(\Psi ^{{(}{m,n-1}{)}}(z)\) exists.

  2. (II)

    \(u_{21}^{{(}{m,n}{)}}= 0\) but \(v_{21}^{{(}{m,n}{)}}\ne 0\), in which case \(\Psi ^{{(}{m,n-1}{)}}(z)\) does not exist but \(\Psi ^{{(}{m,n-2}{)}}(z)\) does exist.

  3. (III)

    \(u_{21}^{{(}{m,n}{)}}= 0\) and \(v_{21}^{{(}{m,n}{)}}= 0\), in which case \(\Psi ^{{(}{m,n-k}{)}}(z)\) does not exist for any \(k>0\) and necessarily \(C_{11}(z)\equiv 0\) or \(C_{21}(z)\equiv 0\).

Proof

We start with the fundamental observation that, for any \(k\in {\mathbb {Z}}\), the solution \(\Psi ^{{(}{m,n+k}{)}}(z)\) exists if and only if there exists a matrix polynomial R(z) which satisfies

$$\begin{aligned} R(z)\Psi ^{{(}{m,n}{)}}(z)=(I+{\mathcal {O}}(z^{-1}))z^{(n+k)\sigma _3}. \end{aligned}$$
(4.8)

Indeed, if such a matrix R(z) exists, then \(\Psi ^{{(}{m,n+k}{)}}(z)=R(z)\Psi ^{{(}{m,n}{)}}(z)\) solves RHP II. Conversely, suppose that \(\Psi ^{{(}{m,n+k}{)}}(z)\) exists and define

$$\begin{aligned} R(z)=\Psi ^{{(}{m,n+k}{)}}(z)\Psi ^{{(}{m,n}{)}}(z)^{-1}, \end{aligned}$$

then R(z) has a trivial jump on \(\gamma ^{{(}{m}{)}}\) and consequently extends to an analytic matrix function on the whole complex plane, satisfying

$$\begin{aligned} R(z)=(I+{\mathcal {O}}(z^{-1}))z^{k\sigma _3}(I+{\mathcal {O}}(z^{-1}))\quad (z\rightarrow \infty ). \end{aligned}$$

It follows that R(z) is a matrix polynomial and Eq. (4.8) follows directly from the normalisation of \(\Psi ^{{(}{m,n+k}{)}}(z)\) at \(z=\infty \).

By the above observation, the existence of \(\Psi ^{{(}{m,n+k}{)}}(z)\) can be studied through examining the solvability of Eq. (4.8), which is how we proceed in establishing the lemma.

Firstly, we consider \(k=1\). The matrix R(z) must take the form

$$\begin{aligned} R(z)=z\begin{pmatrix}1 &{}\quad 0\\ 0 &{}\quad 0\end{pmatrix}+ \begin{pmatrix}r_{11} &{}\quad r_{12}\\ r_{21} &{}\quad 0\end{pmatrix}, \end{aligned}$$

and Eq. (4.8) reduces to the following linear system of equations,

$$\begin{aligned} \begin{pmatrix} 0 &{}\quad 1 &{}\quad 0\\ u_{12}^{{(}{m,n}{)}} &{}\quad u_{22}^{{(}{m,n}{)}} &{}\quad 0 \\ 0 &{}\quad 0 &{}\quad u_{12}^{{(}{m,n}{)}} \\ \end{pmatrix} \begin{pmatrix} r_{11}\\ r_{12}\\ r_{21}\\ \end{pmatrix}= \begin{pmatrix} -u_{12}^{{(}{m,n}{)}}\\ -v_{12}^{{(}{m,n}{)}}\\ 1\\ \end{pmatrix} \end{aligned}$$

This system is solvable if and only if \(u_{12}^{{(}{m,n}{)}}\ne 0\). Consequently, \(\Psi ^{{(}{m,n+1}{)}}(z)\) exists if and only if \(u_{12}^{{(}{m,n}{)}}\ne 0\). This establishes part (i) of the lemma.
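This dichotomy can also be checked symbolically. The short sketch below, which assumes the SymPy library is available and is not part of the argument, computes the determinant of the \(3\times 3\) coefficient matrix above and exhibits the inconsistency of the last equation when \(u_{12}^{{(}{m,n}{)}}=0\).

```python
import sympy as sp

u12, u22, v12 = sp.symbols('u12 u22 v12')
r11, r12, r21 = sp.symbols('r11 r12 r21')

M = sp.Matrix([[0, 1, 0], [u12, u22, 0], [0, 0, u12]])
rhs = sp.Matrix([-u12, -v12, 1])

print(M.det())                                   # -u12**2, so M is invertible iff u12 != 0
print(sp.linsolve((M, rhs), r11, r12, r21))      # the generic solution, with r21 = 1/u12

# for u12 = 0 the third equation reads 0 = 1, so the system is inconsistent
print(sp.linsolve((M.subs(u12, 0), rhs.subs(u12, 0)), r11, r12, r21))   # EmptySet
```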

Next, assume that \(u_{12}^{{(}{m,n}{)}}=0\); we proceed by studying the solvability of Eq. (4.8) with \(k=2\). The matrix R(z) must take the form

$$\begin{aligned} R(z)=z^2\begin{pmatrix}1 &{}\quad 0\\ 0 &{}\quad 0\end{pmatrix}+z \begin{pmatrix}r_{11}^{{(}{1}{)}} &{}\quad 0\\ r_{21}^{{(}{1}{)}} &{}\quad 0\end{pmatrix}+ \begin{pmatrix}r_{11}^{{(}{0}{)}} &{}\quad r_{12}^{{(}{0}{)}}\\ r_{21}^{{(}{0}{)}} &{}\quad 0\end{pmatrix}, \end{aligned}$$

and (4.8) reduces to

$$\begin{aligned} \begin{pmatrix} 0 &{}\quad 1 &{}\quad 0 &{}\quad 0 &{}\quad 0\\ 0 &{}\quad u_{22}^{{(}{m,n}{)}} &{}\quad 0 &{}\quad v_{12}^{{(}{m,n}{)}} &{}\quad 0\\ 0 &{}\quad 0 &{}\quad 0 &{}\quad 0 &{}\quad v_{12}^{{(}{m,n}{)}}\\ v_{12}^{{(}{m,n}{)}} &{}\quad v_{22}^{{(}{m,n}{)}} &{}\quad 0 &{}\quad w_{12}^{{(}{m,n}{)}} &{}\quad 0\\ 0 &{}\quad 0 &{}\quad v_{12}^{{(}{m,n}{)}} &{}\quad 0 &{}\quad w_{12}^{{(}{m,n}{)}} \end{pmatrix} \begin{pmatrix} r_{11}^{{(}{0}{)}}\\ r_{12}^{{(}{0}{)}}\\ r_{21}^{{(}{0}{)}}\\ r_{11}^{{(}{1}{)}}\\ r_{21}^{{(}{1}{)}}\\ \end{pmatrix}= \begin{pmatrix} -v_{12}^{{(}{m,n}{)}}\\ -w_{12}^{{(}{m,n}{)}}\\ 0\\ 0\\ 1\\ \end{pmatrix}. \end{aligned}$$

It follows from direct computation that the above linear system has a solution if and only if \(v_{12}^{{(}{m,n}{)}}\ne 0\). We therefore conclude that, if \(u_{12}^{{(}{m,n}{)}}=0\), then \(\Psi ^{{(}{m,n+2}{)}}(z)\) exists if and only if \(v_{12}^{{(}{m,n}{)}}\ne 0\). This establishes part (ii) of the lemma.

Finally, consider the case when both \(u_{12}^{{(}{m,n}{)}}=0\) and \(v_{12}^{{(}{m,n}{)}}=0\). Then it follows directly from Eq. (4.7) that the entry \(A_{12}^{{(}{m,n}{)}}(z)\) of the matrix polynomial \(A^{{(}{m,n}{)}}(z)\) is identically zero, by considering its expansion around \(z=\infty \). Furthermore, as

$$\begin{aligned} \Psi ^{{(}{m,n}{)}}(qz)=z^{-2}A^{{(}{m,n}{)}}(z) \Psi ^{{(}{m,n}{)}}(z)\kappa _\infty ^{-\sigma _3}, \end{aligned}$$

for \(z\in q^{-1}D_+^{{(}{m}{)}}\), it follows that \(\Psi _{12}^{{(}{m,n}{)}}(z)\equiv 0\) on \(D_+^{{(}{m}{)}}\).

Now, consider Eq. (4.8) for any \(k>0\). Its (2, 2)-entry reads

$$\begin{aligned} R_{22}(z)\Psi _{22}^{{(}{m,n}{)}}(z)=z^{-n-k}(1+{\mathcal {O}}(z^{-1})), \end{aligned}$$
(4.9)

as \(z\rightarrow \infty \). However, recall that

$$\begin{aligned} \Psi _{22}^{{(}{m,n}{)}}(z)=z^{-n}(1+{\mathcal {O}}(z^{-1})), \end{aligned}$$

and thus Eq. (4.9) has no polynomial solution \(R_{22}(z)\). It follows that \(\Psi ^{{(}{m,n+k}{)}}(z)\) does not exist, for any \(k>0\).

Finally, we prove that one of the entries of C(z) must be identically zero. To this end, note that

$$\begin{aligned} \Psi ^{{(}{m,n}{)}}(qz)=q^m t_0A^{{(}{m,n}{)}}(z) \Psi ^{{(}{m,n}{)}}(z)\kappa _0^{-\sigma _3}, \end{aligned}$$
(4.10)

for \(z\in D_-^{{(}{m}{)}}\). There are two options, either

$$\begin{aligned} A_{11}^{{(}{m,n}{)}}(0)=q^m t_0\kappa _0,\quad A_{22}^{{(}{m,n}{)}}(0)=q^m t_0\kappa _0^{-1}, \end{aligned}$$

in which case it follows from (4.10) that \(\Psi _{12}^{{(}{m,n}{)}}(z)\equiv 0\) on \(D_-^{{(}{m}{)}}\) and consequently that \(C_{12}(z)\equiv 0\); or

$$\begin{aligned} A_{11}^{{(}{m,n}{)}}(0)=q^m t_0\kappa _0^{-1},\quad A_{22}^{{(}{m,n}{)}}(0)=q^m t_0\kappa _0, \end{aligned}$$

in which case it follows from (4.10) that \(\Psi _{11}^{{(}{m,n}{)}}(z)\equiv 0\) on \(D_-^{{(}{m}{)}}\) and consequently that \(C_{22}(z)\equiv 0\). This proves part (iii) of the lemma.

Parts (I)–(III) of the lemma are proven analogously. \(\square \)

Note that we have the following immediate corollary from Lemmas 4.4 and 4.5.

Corollary 4.6

Consider RHP II in Definition 4.2 and assume C(z) is irreducible. Then, for any \(m,n\in {\mathbb {Z}}\), the solution \(\Psi ^{{(}{m,n}{)}}(z)\) or \(\Psi ^{{(}{m,n+1}{)}}(z)\) exists.

We now have all the ingredients to prove Theorem 2.12.

Proof of Theorem 2.12

Take an irreducible connection matrix \(C(z)\in {\mathfrak {C}}(\kappa ,t_0)\). In RHP II, see Definition 4.2, we have an additional integer parameter n and we denote its solution by \(\Psi ^{{(}{m,n}{)}}(z)\), when it exists. For \(n=0\), this RHP is precisely RHP I. Proving the first part of Theorem 2.12 is thus equivalent to showing that, for any fixed \(m\in {\mathbb {Z}}\), the solution \(\Psi ^{{(}{m,0}{)}}(z)\) or \(\Psi ^{{(}{m+1,0}{)}}(z)\) of RHP II exists. We do this via a proof by contradiction.

Take \(m\in {\mathbb {Z}}\) and suppose that neither \(\Psi ^{{(}{m,0}{)}}(z)\) nor \(\Psi ^{{(}{m+1,0}{)}}(z)\) exists. As C(z) is irreducible, Corollary 4.6 implies that \(\Psi ^{{(}{m,-1}{)}}(z)\) and \(\Psi ^{{(}{m+1,-1}{)}}(z)\) necessarily exist.

To deduce a contradiction, we define the following matrix function

$$\begin{aligned} B(z)={\left\{ \begin{array}{ll} \Psi ^{{(}{m+1,-1}{)}}(z)\Psi ^{{(}{m,-1}{)}}(z)^{-1} &{} \text {if }z\in D_+^{{(}{m}{)}}\cup \gamma ^{{(}{m}{)}},\\ \Psi ^{{(}{m+1,-1}{)}}(z)z^{-m}C(z)^{-1}\Psi ^{{(}{m,-1}{)}}(z)^{-1} &{} \text {if }z\in D_-^{{(}{m}{)}}\cap D_+^{{(}{m+1}{)}},\\ z\Psi ^{{(}{m+1,-1}{)}}(z)\Psi ^{{(}{m,-1}{)}}(z)^{-1} &{} \text {if }z\in D_-^{{(}{m+1}{)}}\cup \gamma ^{{(}{m+1}{)}}. \end{array}\right. } \end{aligned}$$
(4.11)

The jump conditions of RHP II that \(\Psi ^{{(}{m+1,-1}{)}}(z)\) and \(\Psi ^{{(}{m,-1}{)}}(z)\) satisfy imply that B(z) has only trivial jumps on \(\gamma ^{{(}{m}{)}}\) and \(\gamma ^{{(}{m+1}{)}}\). Consequently, B(z) extends to a meromorphic function on the complex plane.

The only possible source of singularities (i.e., poles) on the right-hand side of Eq. (4.11) is the term \(C(z)^{-1}\). (Note that \(\Psi ^{(m,n)}\) are analytic functions of z, which moreover are invertible for all z, see Lemma 4.3.) In \(D_-^{{(}{m}{)}}\cap D_+^{{(}{m+1}{)}}\), we know that the determinant of C(z) only vanishes at \(z=\kappa _t^{\pm 1}q^{m+1} t_0\), so that \(C(z)^{-1}\) has (simple) poles there. Therefore, B(z) has simple poles at \(z=\kappa _t^{\pm 1}q^{m+1} t_0\). This, combined with the fact that \(B(0)=0\) and \(B(\infty )=I\), yields

$$\begin{aligned} B(z)=\frac{z^2 I+z B_0}{(z-\kappa _tq^{m+1} t_0)(z-\kappa _t^{-1}q^{m+1} t_0)}, \end{aligned}$$

for a constant matrix \(B_0\).

We now turn our attention to the coefficient matrices \(A^{{(}{m,-1}{)}}(z)\) and \(A^{{(}{m+1,-1}{)}}(z)\) related to \(\Psi ^{{(}{m,-1}{)}}(z)\) and \(\Psi ^{{(}{m+1,-1}{)}}(z)\) via Eq. (4.7). It follows from the defining equation of B(z), Eq. (4.11), that these coefficient matrices are related by

$$\begin{aligned} A^{{(}{m+1,-1}{)}}(z)B(z)=B(qz)A^{{(}{m,-1}{)}}(z). \end{aligned}$$
(4.12)

To deduce this, it suffices to note that, for \(z\in q^{-1}D_+^{{(}{m}{)}}\), compatibility of the first rows of the right-hand sides of Eqs. (4.7) and (4.11) yields Eq. (4.12). By analytic continuation, Eq. (4.12) holds globally.

Now, recall that \(\Psi ^{{(}{m,n}{)}}(z)\) has an asymptotic expansion at infinity, see Eq. (4.6), of the form

$$\begin{aligned} \Psi ^{{(}{m,n}{)}}(z)=\left( I+z^{-1}U^{{(}{m,n}{)}}+z^{-2}V^{{(}{m,n}{)}}+{\mathcal {O}}(z^{-3})\right) z^{n\sigma _3},\quad U=(u_{ij}),\quad V=(v_{ij}). \end{aligned}$$

Due to part (i) of Lemma 4.5, we know that \(u_{12}^{{(}{m,-1}{)}}=0\) and \(u_{12}^{{(}{m+1,-1}{)}}=0\). We will proceed in showing that also \(v_{12}^{{(}{m,-1}{)}}=0\), which, due to part (iii) of Lemma 4.5, means that C(z) is not irreducible, giving us the desired contradiction.

To get there, we first note that, by considering the expansion of B(z) as \(z\rightarrow \infty \) in the first row of Eq. (4.11), we obtain \((B_0)_{12}=0\).

Similarly, as \(u_{12}^{{(}{m,-1}{)}}=0\), it follows from the first row of the right-hand side of Eq. (4.7) that the (1, 2) entry of A satisfies \(A_{12}^{{(}{m,-1}{)}}(z)={\mathcal {O}}(1)\) as \(z\rightarrow \infty \). Namely

$$\begin{aligned} A_{12}^{{(}{m,-1}{)}}(z)\equiv c, \end{aligned}$$
(4.13)

where c is a constant.

We now show that Eqs. (4.12) and (4.13), together with the fact that \((B_0)_{12}=0\), imply that \(v_{12}^{{(}{m,-1}{)}}=0\).

Firstly, by comparing the determinants of the left and right-hand sides of Eq. (4.12), we obtain

$$\begin{aligned} |zI+B_0|=(z-\kappa _tq^{m+1} t_0)(z-\kappa _t^{-1}q^{m+1} t_0). \end{aligned}$$

As \((B_0)_{12}=0\), this implies the following dichotomy: either

  1. (I)

    \(B_0=\begin{pmatrix} -\kappa _tq^{m+1} t_0 &{}\quad 0\\ b_{21} &{}\quad -\kappa _t^{-1}q^{m+1} t_0 \end{pmatrix}\),     or

  2. (II)

    \(B_0=\begin{pmatrix} -\kappa _t^{-1}q^{m+1} t_0 &{}\quad 0\\ b_{21} &{}\quad -\kappa _t q^{m+1} t_0 \end{pmatrix}\),

for some \(b_{21}\in {\mathbb {C}}\).

Secondly, the left-hand side of Eq. (4.12) is analytic at \(z=\kappa _t^{\pm 1}q^{m} t_0\), but B(qz), on the right-hand side, has a pole at those two points. This means that

$$\begin{aligned} (\kappa _t^{\pm 1}q^{m+1}t_0 I+B_0)A^{{(}{m,-1}{)}}(\kappa _t^{\pm 1}q^{m}t_0)=0. \end{aligned}$$
(4.14)

We now consider the (1, 2)-entry of Eq. (4.14) for the two choices of the sign ±. The positive choice leads to a tautology in Case (I), while the negative choice gives

$$\begin{aligned} (\kappa _t^{-1}-\kappa _t)q^{m+1} t_0 c=0. \end{aligned}$$

On the other hand, the positive choice in Case (II) gives

$$\begin{aligned} (\kappa _t-\kappa _t^{-1})q^{m+1} t_0 c=0, \end{aligned}$$

while the negative choice is a tautology. Due to the non-resonance conditions (1.4), \(\kappa _t^2\ne 1\), and so it follows from the above results that \(c=0\). Therefore, \(A_{12}^{{(}{m,-1}{)}}(z)\) is identically zero, by Eq. (4.13).

Since A is lower triangular, it follows from Eq. (4.7) that \(\Psi ^{{(}{m,-1}{)}}(z)\) must be lower triangular for \(z\in D_+^{{(}{m}{)}}\). In particular, \(u_{12}^{{(}{m,-1}{)}}=v_{12}^{{(}{m,-1}{)}}=0\), which, due to part (iii) of Lemma 4.5, means that C(z) is not irreducible, giving us the desired contradiction.

We conclude that the solution \(\Psi ^{{(}{m}{)}}(z)\) or \(\Psi ^{{(}{m+1}{)}}(z)\) of RHP I exists for any \(m\in {\mathbb {Z}}\), establishing the first part of the theorem.

Let (f, g, w) be the corresponding solution of \(q\text {P}_{\text {VI}}^{\text {aux}}(\kappa ,t_0)\) via (2.16). The second part of the theorem asserts that, for \(m\in {\mathbb {Z}}\), \(\Psi ^{{(}{m}{)}}(z)\) fails to exist if and only if \((f(t_m),g(t_m))=(\infty ,\kappa _\infty )\).

So, suppose \(m\in {\mathbb {Z}}\) is such that \(\Psi ^{{(}{m}{)}}(z)\) fails to exist. If \((f(t_m),g(t_m))\ne (\infty ,\kappa _\infty )\), then it follows from Lemma 2.2 that the coefficient matrix \(A(z,t_m)\) is well-defined. But then Eq. (2.12) would yield a solution of RHP I, that is, \(\Psi ^{{(}{m}{)}}(z)\) exists, which contradicts our assumption. Thus \((f(t_m),g(t_m))= (\infty ,\kappa _\infty )\).

On the other hand, if \(\Psi ^{{(}{m}{)}}(z)\) exists, then \(A(z,t_m)\) is well-defined, via Eq. (2.13), and consequently \((f(t_m),g(t_m))\ne (\infty ,\kappa _\infty )\), by Lemma 2.2. So, indeed, \(\Psi ^{{(}{m}{)}}(z)\) fails to exist if and only if \((f(t_m),g(t_m))=(\infty ,\kappa _\infty )\). This completes the proof of the theorem. \(\square \)

4.2 Reducible monodromy, orthogonal polynomials and special function solutions

In the case of \(\text {P}_{\text {VI}}\), it is well-known that reducible monodromy yields special function solutions – see Mazzocco [26]. Furthermore, in that case the standard RHP for \(\text {P}_{\text {VI}}\), when solvable, can be solved explicitly in terms of certain orthogonal polynomials [6, 10].

In this subsection, we show that the same phenomenon occurs for \(q\text {P}_{\text {VI}}\). Recall that the monodromy manifold contains reducible monodromy if and only if conditions (1.5a) or conditions (1.5b) are violated. We discuss one example from each of these two sets of non-splitting conditions.

Firstly, we consider the case where

$$\begin{aligned} \kappa _0=\kappa _t\kappa _1 \kappa _\infty , \end{aligned}$$

violating one of the conditions in (1.5a), and consider RHP II, defined in Definition 4.2, with the following upper-triangular connection matrix \(C(z)\in {\mathfrak {C}}(\kappa ,t_0)\),

$$\begin{aligned} C(z)=\begin{pmatrix} \theta _q\left( \frac{z}{\kappa _t t_0},\frac{z}{\kappa _1}\right) &{}\quad c\,\theta _q\left( \frac{z}{\nu t_0},\frac{z\nu }{\kappa _0\kappa _\infty }\right) \\ 0 &{}\quad \theta _q\left( \frac{z\kappa _t}{t_0},z\kappa _1\right) \end{pmatrix}. \end{aligned}$$
(4.15)

Here \(c\in {\mathbb {C}}\) and \(\nu \in {\mathbb {C}}^*\) are two monodromy data that can be chosen freely.

Writing \(t_m=q^m t_0\), the jump matrix of \(\Psi ^{{(}{m,n}{)}}(z)\) in RHP II can be written as

$$\begin{aligned} z^mC(z)=(-1)^m q^{\frac{1}{2}m(m+1)}t_0^m\begin{pmatrix} \kappa _t^m\theta _q\left( \frac{z}{\kappa _t t_m},\frac{z}{\kappa _1}\right) &{}\quad c\nu ^m\theta _q\left( \frac{z}{\nu t_m},\frac{z\nu }{\kappa _0\kappa _\infty }\right) \\ 0 &{}\quad \kappa _t^{-m}\theta _q\left( \frac{z\kappa _t}{t_m},z\kappa _1\right) \end{pmatrix}. \end{aligned}$$
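This rewriting uses only the quasi-periodicity \(\theta _q(q^mx)=(-1)^mq^{-m(m-1)/2}x^{-m}\theta _q(x)\). As a check, the sketch below compares the two expressions entrywise for a concrete choice of parameters; the values are illustrative, subject to \(\kappa _0=\kappa _t\kappa _1\kappa _\infty \), and the \(\theta _q\) convention is the same assumed one as before.

```python
import numpy as np

q, N = 0.45 + 0.0j, 200
def qpoch(x): return np.prod(1 - x * q ** np.arange(N))
def theta(*xs):                         # assumed convention: theta_q(x) = (x;q)_inf (q/x;q)_inf
    return np.prod([qpoch(x) * qpoch(q / x) for x in xs])

kt, k1, kinf = 1.3, 0.8, 1.1
k0 = kt * k1 * kinf                     # kappa_0 = kappa_t kappa_1 kappa_inf
t0, nu, c, m = 0.9, 0.7 + 0.2j, 2.0, 3
tm = q**m * t0
z = 0.6 * np.exp(0.8j)

C = np.array([[theta(z / (kt * t0), z / k1), c * theta(z / (nu * t0), z * nu / (k0 * kinf))],
              [0.0, theta(kt * z / t0, k1 * z)]])

pref = (-1) ** m * q ** (m * (m + 1) / 2) * t0**m
rhs = pref * np.array([[kt**m * theta(z / (kt * tm), z / k1),
                        c * nu**m * theta(z / (nu * tm), z * nu / (k0 * kinf))],
                       [0.0, kt ** (-m) * theta(kt * z / tm, k1 * z)]])

print(np.allclose(z**m * C, rhs))       # the two expressions for the jump matrix agree
```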

We bring RHP II into the standard Fokas-Its-Kitaev RHP form [8, 9] for orthogonal polynomials by applying a transformation

$$\begin{aligned} Y^{{(}{m,n}{)}}(z)={\left\{ \begin{array}{ll} D_1^{-1}\Psi ^{{(}{m,n}{)}}(z)F_\infty ^{{(}{m}{)}}(z)^{-1}D_1 &{} \text { if }z\in D_+^{{(}{m}{)}},\\ D_1^{-1}\Psi ^{{(}{m,n}{)}}(z)F_0^{{(}{m}{)}}(z)^{-1}D_2 &{} \text { if }z\in D_-^{{(}{m}{)}},\\ \end{array}\right. } \end{aligned}$$

where \(D_1\) and \(D_2\) are diagonal matrices and \(F_0^{{(}{m}{)}}(z)\) and \(F_\infty ^{{(}{m}{)}}(z)\) are analytic and invertible matrix functions on \(D_-^{{(}{m}{)}}\) and \(D_+^{{(}{m}{)}}\), respectively.

After such a transformation, the jump matrix of \(Y^{{(}{m,n}{)}}(z)\) reads

$$\begin{aligned} J^{{(}{m}{)}}(z)=D_2^{-1}F_0^{{(}{m}{)}}(z)z^m C(z)F_\infty ^{{(}{m}{)}}(z)^{-1}D_1, \end{aligned}$$

and we wish to choose \(D_{1,2}\) and \(F_{0,\infty }\) such that this jump matrix is upper-triangular with diagonal entries constant and equal to 1. To this end, we choose \(F_{0,\infty }\) so that they cancel the q-theta functions on the diagonal,

$$\begin{aligned} F_\infty ^{{(}{m}{)}}(z)&= \begin{pmatrix} \left( \frac{q\kappa _tt_m}{z},\frac{q\kappa _1}{z};q\right) _\infty &{}\quad 0\\ 0 &{}\quad \left( \frac{qt_m}{\kappa _t z},\frac{q}{\kappa _1z};q\right) _\infty \\ \end{pmatrix},\\ F_0^{{(}{m}{)}}(z)&=\begin{pmatrix} \left( \frac{z}{\kappa _t t_m},\frac{z}{\kappa _1};q\right) _\infty &{}\quad 0\\ 0 &{}\quad \left( \frac{z\kappa _t}{t_m},z\kappa _1;q\right) _\infty \\ \end{pmatrix}, \end{aligned}$$

and we choose \(D_1\) and \(D_2\) to normalise the now constant diagonal entries so that they equal 1,

$$\begin{aligned} D_1=\begin{pmatrix} \nu ^{m} &{}\quad 0\\ 0 &{}\quad \kappa _t^{m}\\ \end{pmatrix},\quad D_2=(-1)^m q^{\frac{1}{2}m(m+1)}t_0^m\begin{pmatrix} \nu ^{m}\kappa _t^m &{}\quad 0\\ 0 &{}\quad 1\\ \end{pmatrix}. \end{aligned}$$

Then the jump matrix reads

$$\begin{aligned} J^{{(}{m}{)}}(z)=\begin{pmatrix} 1 &{}\quad w(z,t_m)\\ 0 &{}\quad 1 \end{pmatrix}, \end{aligned}$$

where

$$\begin{aligned} w(z,t)=\frac{c\,\theta _q\left( \frac{z}{\nu t},\frac{z \nu }{\kappa _0\kappa _\infty }\right) }{\left( \frac{z}{\kappa _t t},\frac{z}{\kappa _1};q\right) _\infty \left( \frac{qt}{\kappa _t z},\frac{q}{\kappa _1z};q\right) _\infty }, \end{aligned}$$
(4.16)

and \(Y^{{(}{m,n}{)}}(z)\) solves the following RHP, if it exists.

Definition 4.7

(RHP III). For \(m,n\in {\mathbb {Z}}\), find a matrix function \(Y^{{(}{m,n}{)}}(z)\) which satisfies the following conditions.

  1. (i)

    \(Y^{{(}{m,n}{)}}(z)\) is analytic on \({\mathbb {C}}\setminus \gamma ^{{(}{m}{)}}\).

  2. (ii)

    \(Y^{{(}{m,n}{)}}(z')\) has continuous boundary values \(Y_-^{{(}{m,n}{)}}(z)\) and \(Y_+^{{(}{m,n}{)}}(z)\) as \(z'\) approaches \(z\in \gamma ^{{(}{m}{)}}\) from \(D_-^{{(}{m}{)}}\) and \(D_+^{{(}{m}{)}}\) respectively, related by

    $$\begin{aligned} Y_+^{{(}{m,n}{)}}(z)=Y_-^{{(}{m,n}{)}}(z)\begin{pmatrix} 1 &{}\quad w(z,t_m)\\ 0 &{}\quad 1 \end{pmatrix}\qquad (z\in \gamma ^{{(}{m}{)}}), \end{aligned}$$

    where w(zt) is the weight function defined in Eq. (4.16).

  3. (iii)

    \(Y^{{(}{m,n}{)}}(z)\) satisfies

    $$\begin{aligned} Y^{{(}{m,n}{)}}(z)=\left( I+{\mathcal {O}}\left( z^{-1}\right) \right) z^{n\sigma _3}\quad z\rightarrow \infty . \end{aligned}$$

RHP III is the standard Fokas-Its-Kitaev RHP for orthogonal polynomials on the contour \(\gamma ^{{(}{m}{)}}\) with respect to the weight function \(w(z,t_m)\). We refer to Deift [4] for more background information on the theory of orthogonal polynomials and corresponding RHPs.

We proceed to draw some immediate conclusions from the equivalence between the RHPs II and III, given in Definitions 4.2 and  4.7 respectively, and the theory of orthogonal polynomials. If \(n<0\), then RHP III is unsolvable for every \(m\in {\mathbb {Z}}\) and thus the same holds true for RHP II.

When \(n=0\), RHP III is solvable for every \(m\in {\mathbb {Z}}\) and the solution is explicitly given by

$$\begin{aligned} Y^{{(}{m,0}{)}}(z)=\begin{pmatrix} 1 &{}\quad -{\mathcal {C}}^{{(}{m}{)}}\left[ w(\cdot ,t_m)\right] (z)\\ 0 &{}\quad 1 \end{pmatrix}, \end{aligned}$$

where \({\mathcal {C}}^{{(}{m}{)}}\) denotes the Cauchy operator on \(\gamma ^{{(}{m}{)}}\),

$$\begin{aligned} {\mathcal {C}}^{{(}{m}{)}}\left[ h(\cdot )\right] (z)=\frac{1}{2\pi i}\oint _{\gamma ^{{(}{m}{)}}}\frac{h(x)}{x-z}dx\quad (h(\cdot )\in L^2(\gamma ^{{(}{m}{)}})). \end{aligned}$$
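The reason this matrix solves RHP III for \(n=0\) is the Sokhotski–Plemelj formula: the boundary values of the Cauchy transform from the two sides of the contour differ exactly by the integrand, so the (1, 2) entry above produces the required jump \(w(z,t_m)\). The sketch below illustrates this additive jump on the unit circle with a Laurent-polynomial test function; the contour, orientation and test function are illustrative choices, not taken from the text.

```python
import numpy as np

# Sokhotski-Plemelj on the unit circle: for h(x) = sum_k a_k x^k the Cauchy transform
# equals the nonnegative-power part h_plus inside the circle and minus the
# negative-power part h_minus outside, so its boundary values jump by h.
a = {-2: 0.3 - 0.1j, -1: 1.2, 0: -0.5j, 1: 0.7, 3: 0.2 + 0.4j}
h = lambda x: sum(coef * x**k for k, coef in a.items())
h_plus = lambda x: sum(coef * x**k for k, coef in a.items() if k >= 0)
h_minus = lambda x: sum(coef * x**k for k, coef in a.items() if k < 0)

M = 2000
x = np.exp(2j * np.pi * np.arange(M) / M)       # quadrature nodes, positive orientation
dx = 2j * np.pi * x / M

def cauchy(z):                                  # (1/2 pi i) * oint h(x)/(x - z) dx
    return np.sum(h(x) / (x - z) * dx) / (2j * np.pi)

z_in, z_out, z_bd = 0.4 * np.exp(0.7j), 1.8 * np.exp(0.7j), np.exp(0.7j)
print(np.isclose(cauchy(z_in), h_plus(z_in)))                 # value inside:  h_plus
print(np.isclose(cauchy(z_out), -h_minus(z_out)))             # value outside: -h_minus
print(np.isclose(h_plus(z_bd) - (-h_minus(z_bd)), h(z_bd)))   # jump across the circle is h
```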

When \(n>0\), RHP III is solvable if and only if the Hankel determinant of moments

$$\begin{aligned} \Delta _n(t_m):=\begin{vmatrix} \mu _{0}&\quad \mu _{1}&\quad \ldots&\quad \mu _{n-1}\\ \mu _{1}&\quad \mu _{2}&\quad \ldots&\quad \mu _{n}\\ \vdots&\quad \vdots&\quad \ddots&\quad \vdots \\ \mu _{n-1}&\quad \mu _n&\quad \ldots&\quad \mu _{2n-2} \end{vmatrix},\quad \mu _k:=\frac{1}{2\pi i}\oint _{\gamma ^{{(}{m}{)}}}z^kw(z,t_m)dz\quad (k\in {\mathbb {Z}}), \end{aligned}$$

is nonzero, in which case the solution of the RHP is explicitly given by

$$\begin{aligned} Y^{{(}{m,n}{)}}(z)=\frac{1}{\Delta _n(t_m)}\begin{pmatrix} p_n(z;t_m) &{}\quad -{\mathcal {C}}^{{(}{m}{)}}\left[ p_n(\cdot ;t_m)w(\cdot ,t_m)\right] (z)\\ p_{n-1}(z;t_m) &{}\quad -{\mathcal {C}}^{{(}{m}{)}}\left[ p_{n-1}(\cdot ;t_m)w(\cdot ,t_m)\right] (z) \end{pmatrix}, \end{aligned}$$

where \(p_n(z;t_m)\), for \(n\ge 0\), denotes the (generically) degree n polynomial

$$\begin{aligned} p_n(z;t_m)=\begin{vmatrix} \mu _0&\quad \mu _1&\quad \ldots&\quad \mu _{n}\\ \mu _1&\quad \mu _2&\quad \ldots&\quad \mu _{n+1}\\ \vdots&\quad \vdots&\quad \ddots&\quad \vdots \\ \mu _{n-1}&\mu _n&\quad \ldots&\quad \mu _{2n-1}\\ 1&\quad z&\quad \ldots&\quad z^n \end{vmatrix}. \end{aligned}$$

The latter polynomials satisfy the orthogonality condition

$$\begin{aligned} \frac{1}{2\pi i}\oint _{\gamma ^{{(}{m}{)}}}p_l(z,t_m)p_n(z,t_m)w(z,t_m)dz=\Delta _n(t_m)\Delta _{n+1}(t_m)\delta _{l,n}\quad (l,n\in {\mathbb {N}}), \end{aligned}$$

and thus form a sequence of orthogonal polynomials with respect to the complex functional

$$\begin{aligned} {\mathbb {C}}[z]\rightarrow {\mathbb {C}},p(z)\mapsto \frac{1}{2\pi i}\oint _{\gamma ^{{(}{m}{)}}}p(z)w(z,t_m)dz, \end{aligned}$$
(4.17)

when none of the Hankel determinants vanish.
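To see the construction at work numerically, the following sketch computes a few moments of the weight (4.16) on a circle standing in for \(\gamma ^{{(}{m}{)}}\), forms the Hankel determinants and the determinantal polynomials \(p_n\), and checks the orthogonality relation above. All parameter values are illustrative, chosen so that \(\kappa _0=\kappa _t\kappa _1\kappa _\infty \) holds and so that a circle of radius 0.65 separates the two q-spirals of poles of the weight; the \(\theta _q\) convention is again the assumed one \(\theta _q(x)=(x;q)_\infty (q/x;q)_\infty \).

```python
import numpy as np

q, N = 0.4 + 0.0j, 200
def qpoch(*xs):                         # truncated (x_1, ..., x_r; q)_infinity
    out = 1.0 + 0.0j
    for xx in xs:
        out *= np.prod(1 - xx * q ** np.arange(N))
    return out
def theta(*xs):                         # assumed convention: theta_q(x) = (x;q)_inf (q/x;q)_inf
    return np.prod([qpoch(xx) * qpoch(q / xx) for xx in xs])

kt, k1, kinf = 1.3, 0.8, 1.1
k0 = kt * k1 * kinf                     # kappa_0 = kappa_t kappa_1 kappa_inf
tm, nu, c = 0.9, 0.7, 1.0               # t_m and the monodromy data (c, nu)

def w(z):                               # the weight function (4.16)
    return c * theta(z / (nu * tm), z * nu / (k0 * kinf)) / (
        qpoch(z / (kt * tm), z / k1) * qpoch(q * tm / (kt * z), q / (k1 * z)))

# a circle of radius 0.65 separates the poles of w accumulating at the origin
# from those marching off to infinity, for these parameter values
M = 4000
x = 0.65 * np.exp(2j * np.pi * np.arange(M) / M)
dx = 2j * np.pi * x / M
wvals = np.array([w(xx) for xx in x])
mu = [np.sum(x**k * wvals * dx) / (2j * np.pi) for k in range(8)]   # moments mu_k

def Delta(n):                           # n-th Hankel determinant of moments
    return np.linalg.det(np.array([[mu[i + j] for j in range(n)] for i in range(n)]))

def p(n, z):                            # the determinantal polynomial p_n(z)
    rows = [[mu[i + j] for j in range(n + 1)] for i in range(n)] + [[z**j for j in range(n + 1)]]
    return np.linalg.det(np.array(rows))

def pairing(l, n):                      # (1/2 pi i) oint p_l p_n w dz
    return np.sum(np.array([p(l, xx) * p(n, xx) for xx in x]) * wvals * dx) / (2j * np.pi)

print(np.isclose(pairing(1, 2), 0, atol=1e-8))           # orthogonality for l != n
print(np.isclose(pairing(2, 2), Delta(2) * Delta(3)))    # normalisation for l = n
```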

We denote

$$\begin{aligned} {\mathfrak {M}}_n:=\{m\in {\mathbb {Z}}:\Psi ^{{(}{m,n}{)}}(z) \text { exists}\}, \end{aligned}$$

and assume \(c\ne 0\). We may employ an argument similar to the one in the proof of Theorem 2.12 to show that, for any \(m\in {\mathbb {Z}}\), \(m\in {\mathfrak {M}}_n\) or \(m+1\in {\mathfrak {M}}_n\). We thus obtain a corresponding solution \((w^{{(}{n}{)}},f^{{(}{n}{)}},g^{{(}{n}{)}})\) of \(q\text {P}_{\text {VI}}^\text {aux}(\kappa ^{{(}{n}{)}},t_0)\), where

$$\begin{aligned} \kappa ^{{(}{n}{)}}=(\kappa _0,\kappa _t,\kappa _1,\kappa _{\infty ,n}),\quad \kappa _{\infty ,n}:=q^n\kappa _\infty , \quad \kappa _0=\kappa _t\kappa _1\kappa _\infty , \end{aligned}$$

for \(n\ge 0\).

We proceed to derive explicit formulas for \(f^{{(}{n}{)}}\) and \(g^{{(}{n}{)}}\). To this end, we note that the next-to-highest order coefficient in the asymptotic expansion

$$\begin{aligned} Y^{{(}{m,n}{)}}(z)=\left( I+z^{-1}Y_1^{{(}{m,n}{)}}+{\mathcal {O}}\left( z^{-2}\right) \right) z^{n\sigma _3}\quad z\rightarrow \infty , \end{aligned}$$

can be written explicitly as

$$\begin{aligned} Y_1^{{(}{m,n}{)}}=\displaystyle \begin{pmatrix} -\frac{\Gamma _n(t_m)}{\Delta _n(t_m)} &{}\quad \frac{\Delta _{n+1}(t_m)}{\Delta _n(t_m)}\\ \frac{\Delta _{n-1}(t_m)}{\Delta _n(t_m)} &{}\quad \frac{\Gamma _n(t_m)}{\Delta _n(t_m)} \end{pmatrix}, \end{aligned}$$

where \(\Delta _n(t_m)\) denotes the n-th Hankel determinant of moments and

$$\begin{aligned} \Gamma _n(t_m):=\begin{vmatrix} \mu _{0}&\quad \mu _{1}&\quad \ldots&\quad \mu _{n-2}&\quad \mu _{n}\\ \mu _{1}&\quad \mu _{2}&\quad \ldots&\quad \mu _{n-1}&\quad \mu _{n+1}\\ \vdots&\quad \vdots&\quad \ddots&\quad \vdots&\quad \vdots \\ \mu _{n-2}&\quad \mu _{n-1}&\quad \ldots&\quad \mu _{2n-4}&\quad \mu _{2n-2}\\ \mu _{n-1}&\quad \mu _n&\quad \ldots&\quad \mu _{2n-3}&\quad \mu _{2n-1} \end{vmatrix}, \end{aligned}$$

with \(\Gamma _1(t_m)=\mu _1\) and \(\Gamma _0(t_m)=0\).

By direct substitution of the corresponding asymptotic expansion of \(\Psi ^{{(}{m,n}{)}}(z)\) around \(z=\infty \) into Eq. (2.13), we find

$$\begin{aligned} w^{{(}{n}{)}}(t_m)=-q^{-1}(q\kappa _{\infty ,n}^2-1)\left( \frac{\nu }{\kappa _t}\right) ^m \frac{\Delta _{n+1}(t_m)}{\Delta _n(t_m)}, \end{aligned}$$

and

$$\begin{aligned} f^{{(}{n}{)}}(t)=\frac{\kappa _{\infty ,n}^2-1}{q\kappa _{\infty ,n}^2-1}\frac{\Gamma _n(t)}{\Delta _n(t)}- \frac{q^2\kappa _{\infty ,n}^2-1}{q\kappa _{\infty ,n}^2-1}\frac{\Gamma _{n+1}(t)}{\Delta _{n+1}(t)}+L(t), \end{aligned}$$
(4.18)

where \(t=t_m\) and the linear term L reads

$$\begin{aligned} L(t)=\kappa _t t+\kappa _1+\frac{\kappa _t(\kappa _1^2-1)+\kappa _1(\kappa _t^2-1)t}{\kappa _t\kappa _1(q\kappa _{\infty ,n}^2-1)}. \end{aligned}$$

Upon substituting the explicit formula for \(w^{{(}{n}{)}}\) into the auxiliary Eq. (1.2), and solving for g, we obtain

$$\begin{aligned} g^{{(}{n}{)}}(t)=\kappa _{\infty ,n}\frac{\nu \Delta _n(t/q)\Delta _{n+1}(t)-\kappa _t\Delta _n(t)\Delta _{n+1}(t/q)}{\nu \Delta _n(t/q)\Delta _{n+1}(t)-q\kappa _{\infty ,n}^2\,\kappa _t\Delta _n(t)\Delta _{n+1}(t/q)}. \end{aligned}$$
(4.19)

Note that, by the above formulas, \(\Delta _n(t)=0\) if and only if \(f^{{(}{n}{)}}(t)=\infty \) and \(g^{{(}{n}{)}}(t)=\kappa _{\infty ,n}\), consistent with Theorem 2.12.

Furthermore, the moments \(\mu _k=\mu _k(t)\) can be expressed explicitly in terms of Heine’s basic hypergeometric functions. Indeed, a residue computation yields that the k-th moment equals

$$\begin{aligned} \mu _k=&\,S_1+S_2,\\ S_1=&\frac{c\,\kappa _0^2\, \theta _q(q\kappa _t\nu )}{(q;q)_\infty (q/\kappa _t^2;q)_\infty }\frac{\left( q^{1+k}\frac{q\kappa _0^2}{\kappa _t^2};q\right) _\infty }{\left( q^{1+k}\kappa _0^2;q\right) _\infty }\left( \frac{qt}{\kappa _t}\right) ^{k+1}\frac{\theta _q\left( \frac{\kappa _1\nu t}{\kappa _0^2}\right) }{\theta _q\left( \frac{\kappa _1 t}{\kappa _t}\right) }\\&\times \;_{2}\phi _1\left[ \begin{matrix} \kappa _1^2, q^{1+k}\kappa _0^2 \\ q^{2+k}\frac{\kappa _0^2}{\kappa _t^2} \end{matrix} ; q,\frac{qt}{\kappa _t\kappa _1} \right] ,\\ S_2=&\frac{c\,\kappa _0^2\, \theta _q\left( \frac{\kappa _t\nu }{\kappa _0^2}\right) }{\nu \kappa _t(q;q)_\infty (q/\kappa _1^2;q)_\infty }\frac{\left( q^{1+k}\frac{q\kappa _0^2}{\kappa _1^2};q\right) _\infty }{\left( q^{1+k}\kappa _0^2;q\right) _\infty }\left( \frac{q}{\kappa _1}\right) ^{k+1}\frac{\theta _q\left( \kappa _1\nu t\right) }{\theta _q\left( \frac{\kappa _1 t}{\kappa _t}\right) }\\&\times \;_{2}\phi _1 \left[ \begin{matrix} \kappa _t^2, q^{1+k}\kappa _0^2 \\ q^{2+k}\frac{\kappa _0^2}{\kappa _1^2} \end{matrix} ; q,\frac{q}{\kappa _t\kappa _1t} \right] . \end{aligned}$$

Sakai [30] first derived special function solutions of \(q\text {P}_{\text {VI}}\), written in terms of Casorati determinants of Heine’s basic hypergeometric functions, which correspond to setting \(\nu =\kappa _t^{-1}\) or \(\nu =\kappa _0^2/\kappa _t\) in the above, so that \(S_1=0\) or \(S_2=0\) respectively.

Ormerod et al. [28] related a family of semi-classical orthogonal polynomials to \(q\text {P}_{\text {VI}}\), via the Jimbo–Sakai linear system, and derived formulas similar to (4.18) and (4.19) above. To relate the orthogonal polynomials in this section to those in [28], we write the complex functional (4.17) in terms of q-Jackson integrals. Assuming that \(|\kappa _0|<|q|^{-\frac{1}{2}}\), a residue computation gives

$$\begin{aligned} \frac{1}{2\pi i}\oint _{\gamma ^{{(}{m}{)}}} p(z)w(z,t_m)dz=&\alpha _1(t_m;c,\nu ) \int _0^{q \kappa _t^{-1}t_m}p(z)W(z,t_m)d_qz\\&+\alpha _2(t_m;c,\nu ) \int _0^{q \kappa _1^{-1}}p(z)W(z,t_m)d_qz, \end{aligned}$$

for any entire function p(z), where the right-hand side integrals are standard Jackson integrals, W(zt) is the weight function

$$\begin{aligned} W(z,t):=z^\sigma \frac{\left( \frac{\kappa _t z}{ t},\kappa _1 z;q\right) _\infty }{\left( \frac{z}{\kappa _t t},\frac{z}{\kappa _1};q\right) _\infty },\quad \sigma :=2\log _q\kappa _0, \end{aligned}$$

and the dependence of the integral operator on the monodromy data \(\{c,\nu \}\) is hidden in the coefficients in front of the Jackson integrals,

$$\begin{aligned} \alpha _1(t;c,\nu )&=\frac{c\,(t/\kappa _t)^{-\sigma }}{(1-q)(q;q)_\infty ^2}\frac{\theta _q\left( q\kappa _t\nu ,\frac{\kappa _1 \nu t}{\kappa _0^2}\right) }{\theta _q\left( \frac{\kappa _1 t}{\kappa _t}\right) },\\ \alpha _2(t;c,\nu )&=\frac{c\,\kappa _1^{\sigma }}{(1-q)(q;q)_\infty ^2}\frac{\theta _q\left( q\kappa _1 \nu t,\frac{\kappa _t \nu }{\kappa _0^2}\right) }{\theta _q\left( \frac{\kappa _t}{\kappa _1 t}\right) }. \end{aligned}$$

Note that both coefficients satisfy \(\alpha (qt)=\frac{1}{\kappa _t \nu }\alpha (t)\). The orthogonal polynomials in Ormerod et al. [28] then coincide, up to scalar multiplication, with the polynomials \(p_n\) above when \(\nu \) is chosen such that \(\alpha _1(t)=-\alpha _2(t)\). In other words, \(\nu =\nu (t_0)\) is chosen such that

$$\begin{aligned} \left( \frac{\kappa _1 t_0}{\kappa _t}\right) ^{1+\sigma }=\frac{\theta _q\left( q\kappa _t\nu ,\frac{ \kappa _1 \nu t_0}{\kappa _0^2}\right) }{\theta _q\left( q\nu \kappa _1 t_0,\frac{\kappa _t \nu }{\kappa _0^2}\right) }. \end{aligned}$$

Next, we briefly consider an example coming from one of the conditions in (1.5b) being violated. Namely, we set

$$\begin{aligned} \kappa _0=\kappa _\infty t_0, \end{aligned}$$

and consider RHP I, defined in Definition 2.7, with a corresponding upper-triangular connection matrix of the form,

$$\begin{aligned} C(z)=\begin{pmatrix} \theta _q\left( \frac{z}{\kappa _t t_0},\frac{z\kappa _t}{t_0}\right) &{}\quad c\,\theta _q(\frac{z}{\kappa _0}\nu ,\frac{z}{\kappa _0}\nu ^{-1})\\ 0 &{}\quad \theta _q\left( \frac{z}{\kappa _1},z\kappa _1\right) \end{pmatrix}, \end{aligned}$$

where the monodromy data \(c\in {\mathbb {C}}\) and \(\nu \in {\mathbb {C}}^*\) can again be chosen freely.

We note that the jump matrix \(z^mC(z)\) can be rewritten as

$$\begin{aligned} z^mC(z)= \begin{pmatrix} q^{m(m+1)}t_0^{2m}\theta _q\left( \frac{z}{\kappa _t t_m},\frac{z\kappa _t}{t_m}\right) &{}\quad c\,\theta _q(\frac{z}{\kappa _0}\nu ,\frac{z}{\kappa _0}\nu ^{-1})\\ 0 &{}\quad \theta _q\left( \frac{z}{\kappa _1},z\kappa _1\right) \end{pmatrix}z^{-m\sigma _3}, \end{aligned}$$

where we denoted \(t_m:=q^m t_0\).

We apply the transformation

$$\begin{aligned} Y^{{(}{m}{)}}(z)={\left\{ \begin{array}{ll} D_1^{-1}\Psi ^{{(}{m}{)}}(z)F_\infty ^{{(}{m}{)}}(z)^{-1}D_1 z^{m\sigma _3} &{} \text { if }z\in D_+^{{(}{m}{)}},\\ D_1^{-1}\Psi ^{{(}{m}{)}}(z)F_0^{{(}{m}{)}}(z)^{-1} &{} \text { if }z\in D_-^{{(}{m}{)}},\\ \end{array}\right. } \end{aligned}$$

where

$$\begin{aligned} F_\infty ^{{(}{m}{)}}(z)&= \begin{pmatrix} \left( \frac{q\kappa _tt_m}{z},\frac{qt_m}{\kappa _tz};q\right) _\infty &{}\quad 0\\ 0 &{}\quad \left( \frac{q\kappa _1}{z},\frac{q}{\kappa _1z};q\right) _\infty ,\\ \end{pmatrix}\\ F_0^{{(}{m}{)}}(z)&=\begin{pmatrix} \left( \frac{z}{\kappa _t t_m},\frac{\kappa _t z}{ t_m};q\right) _\infty &{}\quad 0\\ 0 &{}\quad \left( \frac{z}{\kappa _1},z\kappa _1;q\right) _\infty \\ \end{pmatrix}, \end{aligned}$$

and

$$\begin{aligned} D_1=\begin{pmatrix} t_0^{-2m}q^{-m(m+1)} &{}\quad 0\\ 0 &{}\quad 1\\ \end{pmatrix}. \end{aligned}$$

Then the jump matrix for \(Y^{{(}{m}{)}}(z)\) reads

$$\begin{aligned} J^{{(}{m}{)}}(z)=\begin{pmatrix} 1 &{}\quad {\widehat{w}}(z,t_m)\\ 0 &{}\quad 1 \end{pmatrix}, \end{aligned}$$

where

$$\begin{aligned} {\widehat{w}}(z,t)=\frac{c\,\theta _q(\frac{z}{\kappa _0}\nu ,\frac{z}{\kappa _0}\nu ^{-1})}{\left( \frac{z}{\kappa _t t},\frac{\kappa _t z}{ t};q\right) _\infty \left( \frac{q\kappa _1}{z},\frac{q}{\kappa _1z};q\right) _\infty }, \end{aligned}$$
(4.20)

and \(Y^{{(}{m}{)}}(z)\) solves the following RHP, if it exists.

Definition 4.8

(RHP IV). For \(m\in {\mathbb {Z}}\), find a matrix function \(Y^{{(}{m}{)}}(z)\) which satisfies the following conditions.

  1. (i)

    \(Y^{{(}{m}{)}}(z)\) is analytic on \({\mathbb {C}}\setminus \gamma ^{{(}{m}{)}}\).

  2. (ii)

    \(Y^{{(}{m}{)}}(z')\) has continuous boundary values \(Y_-^{{(}{m}{)}}(z)\) and \(Y_+^{{(}{m}{)}}(z)\) as \(z'\) approaches \(z\in \gamma ^{{(}{m}{)}}\) from \(D_-^{{(}{m}{)}}\) and \(D_+^{{(}{m}{)}}\) respectively, related by

    $$\begin{aligned} Y_+^{{(}{m}{)}}(z)=Y_-^{{(}{m}{)}}(z)\begin{pmatrix} 1 &{}\quad {\widehat{w}}(z,t_m)\\ 0 &{}\quad 1 \end{pmatrix}\qquad (z\in \gamma ^{{(}{m}{)}}), \end{aligned}$$

    where \({\widehat{w}}(z,t)\) is the weight function defined in Eq. (4.20).

  3. (iii)

    \(Y^{{(}{m}{)}}(z)\) satisfies

    $$\begin{aligned} Y^{{(}{m}{)}}(z)=\left( I+{\mathcal {O}}\left( z^{-1}\right) \right) z^{m\sigma _3}\quad z\rightarrow \infty . \end{aligned}$$

This RHP takes the form of the Fokas-Its-Kitaev RHP for orthogonal polynomials, but with the contour \(\gamma ^{{(}{m}{)}}\) and weight function \({\widehat{w}}(z,t_m)\) scaling with the ‘degree’ m of the corresponding orthogonal polynomials. In particular, RHP IV is unsolvable for \(m<0\) and thus so is RHP I in Definition 2.7.

For \(m=0\), RHP IV is solvable and its solution is given by

$$\begin{aligned} \Psi ^{{(}{0}{)}}(z)=\begin{pmatrix} 1 &{}\quad -{\mathcal {C}}^{{(}{0}{)}}\left[ w(\cdot ,t_0)\right] (z)\\ 0 &{}\quad 1 \end{pmatrix}. \end{aligned}$$

From Eq. (2.13) it follows that the corresponding linear system A(z, t) at \(t=t_0\) takes the upper-triangular form

$$\begin{aligned} A(z,t_0)=\begin{pmatrix} \kappa _\infty (z-\kappa _t t_0)(z-\kappa _t^{-1} t_0) &{}\quad A_{12}(z,t_0)\\ 0 &{}\quad \kappa _\infty ^{-1}(z-\kappa _1)(z-\kappa _1^{-1}) \end{pmatrix}. \end{aligned}$$
(4.21)

For \(m\ge 0\), RHP IV is solvable if and only if the mth Hankel determinant of moments for the weight function \({\widehat{w}}(z,t_m)\), with respect to the contour \(\gamma ^{{(}{m}{)}}\), is nonzero. We denote

$$\begin{aligned} {\mathfrak {M}}:=\{m\in {\mathbb {Z}}:\Psi ^{{(}{m}{)}}(z) \text { exists}\}, \end{aligned}$$

then \({\mathfrak {M}}\subseteq {\mathbb {N}}\) and, if \(c\ne 0\), by the same argument as in the proof of Theorem 2.12 we may show that, for any \(m\ge 0\), \(m\in {\mathfrak {M}}\) or \(m+1\in {\mathfrak {M}}\). Thus the domain of the corresponding solution (f, g) is given by the semi q-spiral \(q^{{\mathbb {N}}}t_0\).
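To make this solvability criterion concrete, the following Python sketch computes Hankel determinants of moments of the weight \({\widehat{w}}(\cdot ,t_m)\) numerically. It is only an illustration: the parameter values are hypothetical, the contour \(\gamma ^{{(}{m}{)}}\) is assumed (for the values of m shown) to be deformable to a circle \(|z|=r\) separating the poles of \({\widehat{w}}\) accumulating at the origin from those escaping to infinity, the moments are normalised by \(1/(2\pi i)\) (which does not affect whether the determinants vanish), and \(\theta _q\) is implemented up to an irrelevant constant factor.

import numpy as np

q = 0.5
kt, k1, k0 = 0.95, 1.1, 7.2   # hypothetical parameters, for illustration only
t0 = 6.0                      # kappa_inf = k0/t0 is then implied by kappa_0 = kappa_inf*t0
c, nu = 1.0, 1.3              # the free monodromy data of this example

def qpoch(a, N=300):
    # truncated q-Pochhammer symbol (a; q)_infty
    a = np.asarray(a, dtype=complex)
    return np.prod(1.0 - a[..., None] * q ** np.arange(N), axis=-1)

def theta(x):
    # theta_q(x), up to a constant factor that is irrelevant here
    return qpoch(x) * qpoch(q / x)

def w_hat(z, t):
    # the weight function of Eq. (4.20)
    num = c * theta(z * nu / k0) * theta(z / (nu * k0))
    den = qpoch(z / (kt * t)) * qpoch(kt * z / t) * qpoch(q * k1 / z) * qpoch(q / (k1 * z))
    return num / den

def hankel(m, t, r=0.6, K=2000):
    # det[mu_{i+j}], i,j = 0..m-1, with mu_j = (1/2 pi i) times the integral of z^j w_hat(z,t)
    # over the circle |z| = r, assumed here to be an admissible deformation of gamma^(m)
    if m == 0:
        return 1.0
    z = r * np.exp(2j * np.pi * np.arange(K) / K)
    w = w_hat(z, t)
    mu = [np.mean(z ** (j + 1) * w) for j in range(2 * m - 1)]
    return np.linalg.det(np.array([[mu[i + j] for j in range(m)] for i in range(m)]))

for m in range(4):
    print(m, abs(hankel(m, q ** m * t0)))   # nonzero values indicate membership of m in the set above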

Note that, by Eq. (4.21), the value of g at \(t=t_0\) is given by

$$\begin{aligned} g(t_0)=q^{-1} \kappa _\infty ^{-1}=q^{-1}\kappa _0^{-1}t_0, \end{aligned}$$

and thus Eq. (1.1) has a singularity at \(t=q^{-1} t_0\) which cannot be resolved. In particular, there exists no isomonodromic continuation of the solution past \(t=t_0\), see also [5, Prop. 4.1].

This phenomenon has also been observed for solutions of other discrete Painlevé equations associated with orthogonal polynomials, see e.g. Assche [34].

We emphasise that, in this case too, one can derive explicit expressions for \(f(q^mt_0)\) and \(g(q^mt_0)\), \(m\ge 0\), in terms of determinants of moments, though the sizes of the determinants grow with m.

Finally, note that, if we set \(c=0\), so that C(z) is diagonal, we have \({\widehat{w}}(z,t)=0\) and \(A_{12}(z,t_0)\equiv 0\). In this singular case, \({\mathfrak {M}}=\{0\}\) and there is no solution (f, g) of \(q\text {P}_{\text {VI}}(\kappa ,t_0)\) corresponding to this monodromy.

We finish this section by noting that, in general, the domain where RHP I, defined in Definition 2.7, is solvable,

$$\begin{aligned} {\mathfrak {M}}:=\{m\in {\mathbb {Z}}:\Psi ^{{(}{m}{)}}(z) \text { exists}\}, \end{aligned}$$

can take one of five particular forms when C(z) is reducible, characterised by

  1. (1)

    \(\forall m\in {\mathbb {Z}}\), \(m\in {\mathfrak {M}}\) or \(m+1\in {\mathfrak {M}}\);

  2. (2)

    \(\exists {m_0\in {\mathfrak {M}}}\) such that \({\mathfrak {M}}\subseteq {\mathbb {Z}}_{\ge m_0}\) and \(\forall m\ge m_0\): \(m\in {\mathfrak {M}}\) or \(m+1\in {\mathfrak {M}}\);

  3. (3)

    \(\exists {m_0\in {\mathfrak {M}}}\) such that \({\mathfrak {M}}\subseteq {\mathbb {Z}}_{\le m_0}\) and \(\forall m\le m_0\): \(m\in {\mathfrak {M}}\) or \(m-1\in {\mathfrak {M}}\);

  4. (4)

    \(\exists {m_0\in {\mathfrak {M}}}\) such that \({\mathfrak {M}}=\{m_0\}\);

  5. (5)

    \({\mathfrak {M}}=\emptyset \).

In the first example of this section, we saw cases (1) and (5). In the second example, we saw cases (2) and (4) with \(m_0=0\).

5 The Monodromy Manifold

This section is devoted to the monodromy manifold defined in Definition 2.3. In Sects. 5.1, 5.2 and 5.3 we prove Theorems 2.15, 2.17 and 2.20, respectively.

5.1 On the embedding of the monodromy manifold

In Sect. 2.4, see Eq. (2.19), we defined a mapping \({\mathcal {P}}\) of the monodromy manifold to \(({\mathbb {P}}^1)^4/{\mathbb {C}}^*\). In this section we show that this mapping is an embedding and determine its image, proving Theorem 2.15.

Firstly, we have the following lemma.

Lemma 5.1

The mapping \({\mathcal {P}}\), defined in Eq. (2.19), is injective.

Proof

Take any two connection matrices \(C(z),{\widetilde{C}}(z)\in {\mathfrak {C}}(\kappa ,t_0)\) and suppose that their respective coordinate values \(\rho \) and \({\widetilde{\rho }}\) are identical up to scaling, i.e. \({\widetilde{\rho }}=c\rho \), for some \(c\in {\mathbb {C}}^*\). Then the matrix function

$$\begin{aligned} D(z)={\widetilde{C}}(z)\begin{pmatrix} 1 &{}\quad 0\\ 0 &{}\quad c\\ \end{pmatrix} C(z)^{-1}, \end{aligned}$$

is analytic on \({\mathbb {C}}^*\). But D(z) satisfies

$$\begin{aligned} D(qz)=\kappa _0^{\sigma _3}D(z)\kappa _0^{-\sigma _3}, \end{aligned}$$

and, as \(\kappa _0^2\notin q^{\mathbb {Z}}\), it follows from the general theory of q-theta functions, see e.g. Lemma 4.1, that \(D(z)\equiv D\) must be a constant diagonal matrix: the off-diagonal entries of D(z) satisfy \(d(qz)=\kappa _0^{\pm 2}d(z)\) and hence vanish, while the diagonal entries satisfy \(d(qz)=d(z)\) and are hence constant. Therefore, \([{\widetilde{C}}(z)]\) and [C(z)] represent the same point on the monodromy manifold, which proves the lemma. \(\square \)

To determine the image of the monodromy manifold under \({\mathcal {P}}\), it is convenient to consider a related embedding into \(({\mathbb {P}}^1)^4\) of a finer quotient of the space \({\mathfrak {C}}(\kappa ,t_0)\), given in the following definition.

Definition 5.2

We define \(M(\kappa ,t_0)\) to be the space of connection matrices \({\mathfrak {C}}(\kappa ,t_0)\) quotiented by arbitrary left-multiplication by invertible diagonal matrices. We denote the equivalence class of \(C(z)\in {\mathfrak {C}}(\kappa ,t_0)\) in \(M(\kappa ,t_0)\) by \(\llbracket C(z)\rrbracket \) and denote by

$$\begin{aligned} \iota _M: M(\kappa ,t_0)\rightarrow {\mathcal {M}}(\kappa ,t_0),\llbracket C(z)\rrbracket \mapsto [C(z)], \end{aligned}$$

the quotient mapping of \(M(\kappa ,t_0)\) onto the monodromy manifold.

Note that the coordinates \(\rho =(\rho _1,\rho _2,\rho _3,\rho _4)\) introduced in Sect. 2.4, i.e.

$$\begin{aligned} \rho _k=\pi (C(x_k)),\quad (1\le k\le 4),\quad (x_1,x_2,x_3,x_4)=(\kappa _t t_0,\kappa _t^{-1} t_0,\kappa _1,\kappa _1^{-1}), \end{aligned}$$

are invariant under left-multiplication by diagonal matrices and are thus well defined on equivalence classes in \(M(\kappa ,t_0)\). We thus obtain a mapping

$$\begin{aligned} P:M(\kappa ,t_0)\rightarrow ({\mathbb {P}}^1)^4,\llbracket C(z)\rrbracket \mapsto \rho . \end{aligned}$$
(5.1)

This mapping is an embedding, by the same argument as given in the proof of Lemma 5.1, with c set equal to 1.

Let \(\iota _{\mathbb {P}}\) denote the quotient mapping

$$\begin{aligned} \iota _{\mathbb {P}}:({\mathbb {P}}^1)^4\rightarrow ({\mathbb {P}}^1)^4/{\mathbb {C}}^*. \end{aligned}$$
(5.2)

The proof of Theorem 2.15 revolves around the diagram

$$\begin{aligned} \begin{array}{ccc} M(\kappa ,t_0) &{}\quad \xrightarrow {\ P\ } &{}\quad ({\mathbb {P}}^1)^4\\ \Big \downarrow {\scriptstyle \iota _M} &{} &{}\quad \Big \downarrow {\scriptstyle \iota _{{\mathbb {P}}}}\\ {\mathcal {M}}(\kappa ,t_0) &{}\quad \xrightarrow {\ {\mathcal {P}}\ } &{}\quad ({\mathbb {P}}^1)^4/{\mathbb {C}}^* \end{array} \end{aligned}$$
(5.3)

which is commutative, because right multiplication by a diagonal matrix translates to scalar multiplication of \(\rho \) as shown in Eq. (2.18). We first determine the image of \(M(\kappa ,t_0)\) under P, following the technique developed in our previous paper [23], and then obtain Theorem 2.15 by projecting this image into \(({\mathbb {P}}^1)^4/{\mathbb {C}}^*\) via \(\iota _{\mathbb {P}}\).

To describe the image of \(M(\kappa ,t_0)\) under P, we make the following definition.

Definition 5.3

Recall the definition of the quadratic polynomial \(T(\rho :\kappa ,t_0)\) as well as its homogeneous form \(T_{hom}\) in Definition 2.14. Using homogeneous coordinates \(\rho _k=[\rho _k^x: \rho _k^y]\in {\mathbb {P}}^1\), \(1\le k\le 4\), the equation

$$\begin{aligned} T_{hom}(\rho _1^x,\rho _1^y,\rho _2^x,\rho _2^y,\rho _3^x,\rho _3^y,\rho _4^x,\rho _4^y)=0 \end{aligned}$$

defines a threefold in \(({\mathbb {P}}^1)^4\), which we denote by

$$\begin{aligned} S(\kappa ,t_0)=\{\rho \in ({\mathbb {P}}^1)^4:T(\rho :\kappa ,t_0)=0\}. \end{aligned}$$

Regarding the image of \(M(\kappa ,t_0)\) under P, we have the following result.

Proposition 5.4

Denote by \({\widehat{\kappa }}\) the tuple of complex parameters \(\kappa \) after replacing \(\kappa _0\mapsto 1\). The image of \(M(\kappa ,t_0)\) under the mapping P, defined in Eq. (5.1), is given by the threefold \(S(\kappa ,t_0)\) minus the codimension one subspace

$$\begin{aligned} X(\kappa ,t_0):=S(\kappa ,t_0)\cap S({\widehat{\kappa }},t_0) =\bigcap _{\lambda _0\in {\mathbb {C}}^*}S(\lambda _0,\kappa _t,\kappa _1,\kappa _\infty ,t_0). \end{aligned}$$
(5.4)

Denoting by \(S^*(\kappa ,t_0)\) the space obtained by removing this subspace from \(S(\kappa ,t_0)\), the mapping

$$\begin{aligned} M(\kappa ,t_0)\rightarrow S^*(\kappa ,t_0),\ \text {where}\ \llbracket C(z)\rrbracket \mapsto P(\llbracket C(z)\rrbracket ), \end{aligned}$$

is a bijection.

Proof

Let us take a connection matrix \(C(z)\in {\mathfrak {C}}(\kappa ,t_0)\). It will be convenient to work with the following uniform notation,

$$\begin{aligned}&(\sigma _1,\sigma _2)=(\kappa _0 t_0,\kappa _0^{-1} t_0),\quad (\mu _1,\mu _2)=(\kappa _\infty ,\kappa _\infty ^{-1}), \end{aligned}$$
(5.5a)
$$\begin{aligned}&(x_1,x_2,x_3,x_4)=(\kappa _t t_0,\kappa _t^{-1} t_0,\kappa _1 ,\kappa _1^{-1}). \end{aligned}$$
(5.5b)

For any \(1\le i,j\le 2\), the matrix-entry \(C_{ij}(z)\) is an element of the two-dimensional vector space

$$\begin{aligned} V_{ij}:=\left\{ \text {analytic functions } \theta :{\mathbb {C}}^*\rightarrow {\mathbb {C}}\text { satisfying }\theta (qz)=\frac{\sigma _i}{\mu _j}z^{-2}\theta (z)\right\} , \end{aligned}$$

see Eq. (4.1), and we know that

$$\begin{aligned} C_{11}(z)C_{22}(z)-C_{12}(z)C_{21}(z)=c\theta _q(z/x_1,z/x_2,z/x_3,z/x_4), \end{aligned}$$
(5.6)

for some \(c\in {\mathbb {C}}^*\).

For each \(1\le k\le 4\), the equation \(\pi (C(x_k))=\rho _k\) translates to

$$\begin{aligned} \rho _k^y C_{11}(x_k)-\rho _k^x C_{12}(x_k)=0,\quad \rho _k^y C_{21}(x_k)-\rho _k^x C_{22}(x_k)=0, \end{aligned}$$
(5.7)

where we used homogeneous coordinates \(\rho _k=[\rho _k^x:\rho _k^y]\).

We proceed to study Eq. (5.7) by choosing explicit bases of the vector spaces \(V_{ij}\), \(1\le i,j\le 2\). To this end, we introduce the following eight q-theta functions,

$$\begin{aligned} u_1^{11}(z)&=\theta _q\left( z/x_1,z x_1\frac{\mu _1}{\sigma _1}\right) ,&u_2^{11}(z)&=\theta _q\left( z/x_2,z x_2\frac{\mu _1}{\sigma _1}\right) ,\\ u_1^{12}(z)&=\theta _q\left( z/x_3,z x_3\frac{\mu _2}{\sigma _1}\right) ,&u_2^{12}(z)&=\theta _q\left( z/x_4,z x_4\frac{\mu _2}{\sigma _1}\right) ,\\ u_1^{21}(z)&=\theta _q\left( z/x_1,z x_1\frac{\mu _1}{\sigma _2}\right) ,&u_2^{21}(z)&=\theta _q\left( z/x_2,z x_2\frac{\mu _1}{\sigma _2}\right) ,\\ u_1^{22}(z)&=\theta _q\left( z/x_3,z x_3\frac{\mu _2}{\sigma _2}\right) ,&u_2^{22}(z)&=\theta _q\left( z/x_4,z x_4\frac{\mu _2}{\sigma _2}\right) . \end{aligned}$$

For any \(1\le i,j\le 2\), the collection \(\{u_1^{ij}(z),u_2^{ij}(z)\}\) forms a basis of \(V_{ij}\). We may thus write

$$\begin{aligned} C_{ij}(z)=\alpha _{1}^{ij}u_1^{ij}(z)+\alpha _{2}^{ij}u_2^{ij}(z), \end{aligned}$$
(5.8)

for some coefficients \(\alpha _{1}^{ij},\alpha _{2}^{ij}\in {\mathbb {C}}\).
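That each \(u_n^{ij}\) indeed belongs to \(V_{ij}\) follows from the quasi-periodicity of \(\theta _q\), independently of its overall normalisation; for instance, \(u_1^{11}(qz)=\frac{\sigma _1}{\mu _1}z^{-2}u_1^{11}(z)\). A minimal numerical sketch of this check (hypothetical parameter values; \(\theta _q\) implemented up to an irrelevant constant factor; the same computation applies verbatim to the other seven basis functions):

import numpy as np

q = 0.3
kt, k1, k0, kinf, t0 = 1.4, 0.7, 2.1, 0.8, 0.9   # hypothetical values, illustration only
sigma1, mu1 = k0 * t0, kinf                      # the uniform notation (5.5a)
x1 = kt * t0                                     # from (5.5b)

def qpoch(a, N=200):
    # truncated q-Pochhammer symbol (a; q)_infty
    return np.prod(1.0 - a * q ** np.arange(N))

def theta(*args):
    # theta_q(a, b, ...) = theta_q(a) theta_q(b) ..., up to a constant factor
    out = 1.0
    for a in args:
        out *= qpoch(a) * qpoch(q / a)
    return out

def u1_11(z):
    return theta(z / x1, z * x1 * mu1 / sigma1)

for z in [0.4 + 0.3j, 1.7 - 0.2j]:
    lhs = u1_11(q * z)
    rhs = (sigma1 / mu1) * z ** (-2) * u1_11(z)
    print(abs(lhs - rhs) / abs(rhs))             # machine-precision zero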

Equation (5.7) now translates into eight equations among the coefficients in (5.8), which we group into the following two homogeneous systems,

$$\begin{aligned} \begin{pmatrix} 0 &{}\quad \rho _1^y u_2^{11}(x_1) &{}\quad -\rho _1^x u_1^{12}(x_1) &{}\quad -\rho _1^x u_2^{12}(x_1)\\ \rho _2^y u_{1}^{11}(x_2) &{}\quad 0 &{} -\rho _2^x u_1^{12}(x_2) &{}\quad -\rho _2^x u_2^{12}(x_2)\\ \rho _3^y u_{1}^{11}(x_3) &{}\quad \rho _3^y u_2^{11}(x_3) &{}\quad 0 &{}\quad -\rho _3^x u_2^{12}(x_3)\\ \rho _4^y u_{1}^{11}(x_4) &{}\quad \rho _4^y u_2^{11}(x_4) &{}\quad -\rho _4^x u_1^{12}(x_4) &{}\quad 0\\ \end{pmatrix} \begin{pmatrix} \alpha _1^{11}\\ \alpha _2^{11}\\ \alpha _1^{12}\\ \alpha _2^{12}\\ \end{pmatrix}= \begin{pmatrix} 0\\ 0\\ 0\\ 0\\ \end{pmatrix}, \end{aligned}$$
(5.9)

and

$$\begin{aligned} \begin{pmatrix} 0 &{}\quad \rho _1^y u_2^{21}(x_1) &{} \quad -\rho _1^x u_1^{22}(x_1) &{}\quad -\rho _1^x u_2^{22}(x_1)\\ \rho _2^y u_{1}^{21}(x_2) &{}\quad 0 &{} -\rho _2^x u_1^{22}(x_2) &{}\quad -\rho _2^x u_2^{22}(x_2)\\ \rho _3^y u_{1}^{21}(x_3) &{}\quad \rho _3^y u_2^{21}(x_3) &{} 0 &{}\quad -\rho _3^x u_2^{22}(x_3)\\ \rho _4^y u_{1}^{21}(x_4) &{}\quad \rho _4^y u_2^{21}(x_4) &{}\quad -\rho _4^x u_1^{22}(x_4) &{}\quad 0\\ \end{pmatrix} \begin{pmatrix} \alpha _1^{21}\\ \alpha _2^{21}\\ \alpha _1^{22}\\ \alpha _2^{22}\\ \end{pmatrix}= \begin{pmatrix} 0\\ 0\\ 0\\ 0\\ \end{pmatrix}. \end{aligned}$$
(5.10)

As the determinant of C(z) cannot be identically zero, we know that both coefficient vectors on the left-hand side of Eqs. (5.9) and (5.10) are nonzero. This in turn implies that the determinants of the \(4\times 4\) matrices on the left-hand side are zero. By means of a lengthy calculation, one can check that both determinants coincide, up to some nonzero scalar multipliers, with the left-hand side of the equation

$$\begin{aligned} T_{hom}(\rho _1^x,\rho _1^y,\rho _2^x,\rho _2^y,\rho _3^x,\rho _3^y,\rho _4^x,\rho _4^y)=0, \end{aligned}$$

where \(T_{hom}\) is defined in Definition 2.14. We refer the interested reader to our previous work [23, Appendix B], where an analogous computation is given.

It follows that P embeds \(M(\kappa ,t_0)\) into the threefold \(S(\kappa ,t_0)\).

We proceed to determine those coordinate-values in \(S(\kappa ,t_0)\) which cannot be realised by any connection matrix \(C(z)\in {\mathfrak {C}}(\kappa ,t_0)\).

Take any \(\rho \in S(\kappa ,t_0)\), then we know that both homogeneous equations (5.9) and (5.10) have non-trivial solutions. Let us take a solution of each respectively,

$$\begin{aligned} \left( \alpha _1^{11}, \alpha _2^{11}, \alpha _1^{12}, \alpha _2^{12}\right) ^T,\quad \left( \alpha _1^{21}, \alpha _2^{21}, \alpha _1^{22}, \alpha _2^{22}\right) ^T, \end{aligned}$$
(5.11)

and let C(z) denote the corresponding matrix function via Eq. (5.8).

Then we know that C(z) is analytic on \({\mathbb {C}}^*\), it satisfies

$$\begin{aligned} C(qz)=z^{-2}t_0\kappa _0^{\sigma _3}C(z)\kappa _\infty ^{-\sigma _3}, \end{aligned}$$
(5.12)

and \(|C(x_k)|=0\) for \(1\le k \le 4\). Furthermore, by construction,

$$\begin{aligned}&C_{11}(z)\not \equiv 0\quad \text { or }\quad C_{12}(z)\not \equiv 0, \text { and } \end{aligned}$$
(5.13a)
$$\begin{aligned}&C_{21}(z)\not \equiv 0\quad \text { or }\quad C_{22}(z)\not \equiv 0. \end{aligned}$$
(5.13b)

There are two options: either Eq. (5.6) holds for some \(c\in {\mathbb {C}}^*\), which means that \(C(z)\in {\mathfrak {C}}(\kappa ,t_0)\) and thus \(\rho \) lies in the range of P; or the determinant of C(z) is identically zero,

$$\begin{aligned} C_{11}(z)C_{22}(z)=C_{12}(z)C_{21}(z). \end{aligned}$$
(5.14)

In the latter case, \(\rho \) does not lie inside the range of P. To show this, suppose on the contrary that there is a \({\widetilde{C}}(z)\in {\mathfrak {C}}(\kappa ,t_0)\) with \(\pi ({\widetilde{C}}(x_k))=\rho _k\) for \(1\le k\le 4\). Then, by the same argument as in the proof of Lemma 5.1, we have

$$\begin{aligned} C(z)=D{\widetilde{C}}(z), \end{aligned}$$
(5.15)

for some diagonal matrix D. However, as the determinant of C(z) is identically zero, we must have \(|D|=0\), so that at least one diagonal entry of D vanishes and hence, by Eq. (5.15), at least one row of C(z) is identically zero, contradicting equations (5.13). It follows that, in the case when the determinant of C(z) is identically zero, \(\rho \) indeed does not lie in the range of P.

Therefore, to prove the proposition, it remains to be shown that the determinant of the matrix C(z), constructed above, is identically equal to zero if and only if the coordinate-values \(\rho \) lie in \(X=X(\kappa ,t_0)\), and that this space X is a codimension one subspace of \(S(\kappa ,t_0)\).

To this end, let us note that Eqs. (5.13) and (5.14) imply that either

  1. (i)

    \(C_{11}(z)\equiv 0\) and \(C_{21}(z)\equiv 0\),

  2. (ii)

    \(C_{12}(z)\equiv 0\) and \(C_{22}(z)\equiv 0\), or

  3. (iii)

    \(C_{11}(z)C_{22}(z)=C_{12}(z)C_{21}(z)\not \equiv 0\).

Case (i) corresponds, via Eqs. (5.9) and (5.10), to the four lines

$$\begin{aligned} \{\rho \in ({\mathbb {P}}^1)^4:\rho _i=\rho _j=\rho _k=0\}\quad (1\le i<j<k\le 4). \end{aligned}$$
(5.16)

Indeed, \(C_{11}(z)\equiv 0\) implies that the coefficients \(\alpha _{1}^{11},\alpha _2^{11}\) in Eq. (5.9) are zero. A non-trivial solution of (5.9) with these constraints exists if and only if the coordinate-values \(\rho \) lie on one of the above four lines.

Similarly, case (ii) corresponds to the four lines

$$\begin{aligned} \{\rho \in ({\mathbb {P}}^1)^4:\rho _i=\rho _j=\rho _k=\infty \}\quad (1\le i<j<k\le 4). \end{aligned}$$
(5.17)

Note that the eight lines, defined in (5.16) and (5.17), indeed lie inside X.

Finally, in case (iii), C(z) must take the form

$$\begin{aligned} C(z)=\begin{pmatrix} c_{11}\theta _q(z/u_1)\theta _q(z/v_1) &{}\quad c_{12}\theta _q(z/u_1)\theta _q(z/u_2)\\ c_{21}\theta _q(z/v_1)\theta _q(z/v_2) &{}\quad c_{22}\theta _q(z/u_2)\theta _q(z/v_2) \end{pmatrix}, \end{aligned}$$

with

$$\begin{aligned} u_1=\kappa _0 t_0 \tau ,\quad u_2=\kappa _\infty \tau ^{-1},\quad v_1=\kappa _\infty ^{-1}\tau ^{-1},\quad v_2=\kappa _0^{-1} t_0\tau , \end{aligned}$$

for some \(\tau \in {\mathbb {C}}^*\) and nonzero constant multipliers satisfying \(c_{11}c_{22}=c_{12}c_{21}\). The corresponding \(\rho \)-coordinates of this matrix are given by

$$\begin{aligned} \rho _k=c\,\phi (\tau x_k)\quad (1\le k\le 4),\quad \phi (x):=\frac{\theta _q(x\kappa _\infty )}{\theta _q(x/\kappa _\infty )},\quad c=\frac{c_{11}}{c_{12}}\in {\mathbb {C}}^*. \end{aligned}$$
(5.18)

Consequently, for any choice of \(c,\tau \in {\mathbb {C}}^*\), Eq. (5.18) defines a point on the threefold \(S(\kappa ,t_0)\). We now make the important observation that formulae (5.18) are \(\kappa _0\)-independent. That is, Eq. (5.18) defines a point on \(S(\lambda _0,\kappa _t,\kappa _1,\kappa _\infty ,t_0)\), for any value of \(\lambda _0\). Thus these points lie in the subspace X.
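The following Python sketch verifies these statements numerically for hypothetical parameter values (with \(\theta _q\) again implemented up to an irrelevant constant factor): the matrix has identically vanishing determinant, satisfies the functional equation (5.12), and its coordinate values \(\rho _k=C_{11}(x_k)/C_{12}(x_k)\), cf. Eq. (5.7), agree with \(c\,\phi (\tau x_k)\) as in Eq. (5.18).

import numpy as np

q = 0.3
kt, k1, k0, kinf, t0 = 1.4, 0.7, 2.1, 0.8, 0.9        # hypothetical values, illustration only
tau = 1.2
c11, c12, c21 = 0.7, 1.5, 0.4
c22 = c12 * c21 / c11                                  # enforce c11*c22 = c12*c21

u1, u2 = k0 * t0 * tau, kinf / tau
v1, v2 = 1 / (kinf * tau), t0 * tau / k0
xs = [kt * t0, t0 / kt, k1, 1 / k1]                    # the points x_1, ..., x_4 of (5.5b)

def qpoch(a, N=200):
    return np.prod(1.0 - a * q ** np.arange(N))

def theta(a):
    return qpoch(a) * qpoch(q / a)                     # theta_q up to a constant factor

def C(z):
    return np.array([[c11 * theta(z / u1) * theta(z / v1), c12 * theta(z / u1) * theta(z / u2)],
                     [c21 * theta(z / v1) * theta(z / v2), c22 * theta(z / u2) * theta(z / v2)]])

def phi(x):
    return theta(x * kinf) / theta(x / kinf)

z = 0.8 + 0.5j
print(abs(np.linalg.det(C(z))))                        # numerically zero: |C(z)| = 0
lhs = C(q * z)
rhs = z ** (-2) * t0 * np.diag([k0, 1 / k0]) @ C(z) @ np.diag([1 / kinf, kinf])
print(np.max(np.abs(lhs - rhs)))                       # numerically zero: Eq. (5.12)
print(max(abs(C(x)[0, 0] / C(x)[0, 1] - (c11 / c12) * phi(tau * x)) for x in xs))   # Eq. (5.18)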

To prove the proposition, it suffices to show that, conversely, any point in X lies either on one of the eight lines (5.16) and (5.17), or is given by (5.18) for a choice of \(c,\tau \in {\mathbb {C}}^*\). To this end, let us take a point \(\rho \in X\) which is not on one of the eight lines. Construct a corresponding matrix C(z) via Eqs. (5.9) and (5.10), see Eq. (5.11). Then C(z) is analytic on \({\mathbb {C}}^*\), it satisfies (5.12), and Eqs. (5.13) hold true.

As \(\rho \in X\subseteq S(1,\kappa _t,\kappa _1,\kappa _\infty ,t_0)\), we can similarly construct a matrix \({\widetilde{C}}(z)\) via Eqs. (5.9) and (5.10), which satisfies

$$\begin{aligned} {\widetilde{C}}(qz)=z^{-2}t_0{\widetilde{C}}(z)\kappa _\infty ^{-\sigma _3}. \end{aligned}$$

This matrix function is also analytic on \({\mathbb {C}}^*\), and satisfies (5.13).

Now suppose, for the sake of obtaining a contradiction, that \(\rho \) is not given by (5.18) for some \(c,\tau \in {\mathbb {C}}^*\), so that \(|C(z)|\not \equiv 0\). Consider the quotient \(D(z)={\widetilde{C}}(z)C(z)^{-1}\). As, by construction, C(z) and \({\widetilde{C}}(z)\) have the same \(\rho \)-coordinate values, it follows that D(z) is an analytic function on \({\mathbb {C}}^*\). However, D(z) satisfies the q-difference equation

$$\begin{aligned} D(qz)=D(z)\kappa _0^{-\sigma _3}, \end{aligned}$$

and therefore, by Lemma 4.1, \(D(z)\equiv 0\) and consequently \({\widetilde{C}}(z)\equiv 0\), which contradicts the fact that \({\widetilde{C}}(z)\) satisfies (5.13).

We conclude that the subspace X is explicitly parametrised by

$$\begin{aligned} X={\text {cl}}\left( \{(c\,\phi (\tau x_1),c\,\phi (\tau x_2),c\,\phi (\tau x_3),c\,\phi (\tau x_4)):c,\tau \in {\mathbb {C}}^*\}\right) , \end{aligned}$$
(5.19)

where \(\phi \) is the function defined in (5.18) and the closure is taken in \(({\mathbb {P}}^1)^4\). Thus X is a codimension one closed subspace of \(S(\kappa ,t_0)\). Furthermore, we have shown that X consists precisely of the points in the threefold \(S(\kappa ,t_0)\) that cannot be realised as coordinate-values \(\rho \) of any connection matrix \(C(z)\in {\mathfrak {C}}(\kappa ,t_0)\). Thus the image of \(M(\kappa ,t_0)\) under the embedding P is given by \(S(\kappa ,t_0)\setminus X\) and the proposition follows. \(\square \)

Proof of Theorem 2.15

Recall from Definition 5.2 that elements of \(M(\kappa ,t_0)\) are equivalence classes of connection matrices C(z) under left multiplication by invertible diagonal matrices, while the elements of \({\mathcal {M}}(\kappa ,t_0)\) are equivalence classes under both left and right multiplication by invertible diagonal matrices. The desired bijection is already proved for \(M(\kappa ,t_0)\) in Proposition 5.4, so the present theorem follows by passing to the quotients: from \(M(\kappa ,t_0)\) to \({\mathcal {M}}(\kappa ,t_0)\) and, correspondingly, from \(S^*(\kappa ,t_0)\) to \({\mathcal {S}}^*(\kappa ,t_0)\). Recall that Definition 5.2 denotes the former quotient mapping by \(\iota _M\); the latter is induced by \(\iota _{{\mathbb {P}}}\), defined in Eq. (5.2).

Now, consider the commutative diagram (5.3). By Proposition 5.4, the image of P is given by \(S^*(\kappa ,t_0)\). Therefore, the image of the composition \(\iota _{{\mathbb {P}}}\circ P\) is given by \({\mathcal {S}}^*(\kappa ,t_0)\). As \(\iota _M\) is surjective, it follows from the commutativity of diagram (5.3) that the image of \({\mathcal {P}}\) is given by \({\mathcal {S}}^*(\kappa ,t_0)\).

In Lemma 5.1, it was shown that \({\mathcal {P}}\) is injective and it thus follows that \({\mathcal {P}}\) is a bijection, which proves the theorem. \(\square \)

Proof of Remark 2.16

We note that, by Eq. (5.19), we have the following explicit parametrisation of the curve \({\mathcal {X}}={\mathcal {X}}(\kappa ,t_0)\),

$$\begin{aligned} \begin{aligned} {\mathcal {X}}&={\text {cl}}\left( \{(\phi (\tau x_1),\phi (\tau x_2),\phi (\tau x_3),\phi (\tau x_4)):\tau \in {\mathbb {C}}^*\}\right) ,\\ \phi (x)&=\frac{\theta _q(x\kappa _\infty )}{\theta _q(x/\kappa _\infty )}, \end{aligned} \end{aligned}$$
(5.20)

where the closure is taken in \(({\mathbb {P}}^1)^4/{\mathbb {C}}^*\) and \(x_k,1\le k\le 4\), are as defined in Eq. (2.17). Note that this parametrisation is \(\kappa _0\)-independent, which implies

$$\begin{aligned} {\mathcal {X}}\subseteq \bigcap _{\lambda _0\in {\mathbb {C}}^*}{\mathcal {S}}(\lambda _0,\kappa _t,\kappa _1,\kappa _\infty ,t_0).\end{aligned}$$

By the definition of \({\mathcal {X}}\), Eq. (2.22), the right-hand side is also a subset of \({\mathcal {X}}\) and they are therefore equal, yielding the desired result, Eq. (2.24). \(\square \)

5.2 Smoothness of the monodromy manifold

In this subsection, we study the smoothness of the monodromy manifold \({\mathcal {M}}(\kappa ,t_0)\) and prove Theorem 2.17.

The monodromy manifold does not naturally come with a topology. However, due to Theorem 2.15 and Proposition 5.4, we have the following refined version of the commutative diagram (5.3),

$$\begin{aligned} \begin{array}{ccc} M(\kappa ,t_0) &{}\quad \xrightarrow {\ P\ } &{}\quad S^*(\kappa ,t_0)\\ \Big \downarrow {\scriptstyle \iota _M} &{} &{}\quad \Big \downarrow {\scriptstyle \iota _{S^*}}\\ {\mathcal {M}}(\kappa ,t_0) &{}\quad \xrightarrow {\ {\mathcal {P}}\ } &{}\quad {\mathcal {S}}^*(\kappa ,t_0) \end{array} \end{aligned}$$
(5.21)

where both P and \({\mathcal {P}}\) are bijective, and \(\iota _{S^*}\) denotes the quotient mapping \(\iota _{{\mathbb {P}}}\) restricted to \(S^*(\kappa ,t_0)\). The monodromy manifold inherits a topology from \({\mathcal {S}}^*(\kappa ,t_0)\) via \({\mathcal {P}}\). Similarly, \(M(\kappa ,t_0)\) inherits a topology from the threefold \(S^*(\kappa ,t_0)\).

To prove Theorem 2.17, we first study the smoothness of the space \(S^*(\kappa ,t_0)\). We then deduce corresponding results for the surface \({\mathcal {S}}^*(\kappa ,t_0)\), by taking the quotient with respect to scalar multiplication. Finally, we translate the results for \({\mathcal {S}}^*(\kappa ,t_0)\) to the monodromy manifold.

The following proposition describes the singular set of the space \(S^*(\kappa ,t_0)\) and shows that it is empty if and only if the non-splitting conditions hold.

Proposition 5.5

The space \(S^*(\kappa ,t_0)\) is a complex 3-manifold with singularities at points in the finite set

$$\begin{aligned} S^*_{sing}:=S^*(\kappa ,t_0)\cap \Theta , \end{aligned}$$
(5.22)

where

$$\begin{aligned} \Theta :=\{&(0,0,\infty ,\infty ),(0,\infty ,0,\infty ),(0,\infty ,\infty ,0),\end{aligned}$$
(5.23)
$$\begin{aligned}&(\infty ,0,0,\infty ),(\infty ,0,\infty ,0),(\infty ,\infty ,0,0)\}. \end{aligned}$$
(5.24)

Furthermore, all these singularities are ordinary double-point singularities.

In particular, the following statements are equivalent.

  1. (i)

    The space \(S^*(\kappa ,t_0)\) is smooth.

  2. (ii)

    The set \(S^*_{sing}\) is empty.

  3. (iii)

    The non-splitting conditions (1.5) hold true.

Proof

Recall that the space \(S^*(\kappa ,t_0)\) is defined as \(S(\kappa ,t_0)\setminus X(\kappa ,t_0)\), where \(S(\kappa ,t_0)\) is the zero locus of the polynomial \(T(\rho ;\kappa ,t_0)\) in \(({\mathbb {P}}^1)^4\) and \(X(\kappa ,t_0)\) denotes a subspace of \(S(\kappa ,t_0)\), defined in Eq. (5.4). From here on, we will often suppress the explicit parameter dependence on \((\kappa ,t_0)\) of \(T(\rho ),S,X\) and \(S^*=S{\setminus } X\).

Firstly, as X is, by definition, the zero locus of two polynomials, it is closed in S. Hence, \(S^*\) is open in S. To prove the first part of the proposition, we study whether the gradient of \(T(\rho )\) vanishes anywhere on the open subset \(S^*\) of S.

We start by considering whether \(S^*\) has any singularities in its affine part \(S^*\cap {\mathbb {C}}^4\). The zero locus of the gradient of \(T(\rho _1,\rho _2,\rho _3,\rho _4)\) is characterised by the linear equation

$$\begin{aligned} H\cdot (\rho _1,\rho _2,\rho _3,\rho _4)^T=0, \end{aligned}$$
(5.25)

where H is the Hessian matrix of T, i.e.

$$\begin{aligned} H=\begin{pmatrix} 0 &{}\quad T_{12} &{}\quad T_{13} &{}\quad T_{14}\\ T_{12} &{}\quad 0 &{}\quad T_{23} &{}\quad T_{24}\\ T_{13} &{}\quad T_{23} &{}\quad 0 &{}\quad T_{34}\\ T_{14} &{}\quad T_{24} &{}\quad T_{34} &{}\quad 0\\ \end{pmatrix}. \end{aligned}$$
(5.26)

We proceed to show that the determinant of H is nonzero. This implies that Eq. (5.25) has only one solution \({\underline{0}}:=(0,0,0,0)\in X\), which does not lie in \(S^*\). In particular, \(S^*\) has no singularities in its affine part.

In fact, we will prove that the determinant of H is given explicitly by

$$\begin{aligned} |H|=\kappa _0^{-2}\kappa _t^{2}\kappa _1^{2}\kappa _\infty ^{2}\theta _q\left( \kappa _0^2,\kappa _t^2,\kappa _1^2,\kappa _\infty ^2\right) ^2\theta _q\left( \kappa _t\kappa _1 t_0, \kappa _t\kappa _1^{-1} t_0, \kappa _t^{-1}\kappa _1 t_0, \kappa _t^{-1}\kappa _1^{-1} t_0\right) ^2, \end{aligned}$$
(5.27)

so that \(|H|\ne 0\), due to the non-resonance conditions (1.4).

To this end, we first note that |H| depends analytically on each of the parameters \(\kappa _j\in {\mathbb {C}}^*\), \(j=0,t,1,\infty \), and \(t_0\in {\mathbb {C}}^*\). We begin by studying the dependence of the determinant on \(\kappa _0\) and denote

$$\begin{aligned} h=h(\kappa _0):=|H|. \end{aligned}$$

Since each of the entries of H satisfies the q-difference equation

$$\begin{aligned} T_{ij}(q\,\kappa _0)=q^{-1}\kappa _0^{-2}T_{ij}(\kappa _0), \end{aligned}$$

\(1\le i<j\le 4\), and the determinant is a homogeneous polynomial of degree four in these entries, we have

$$\begin{aligned} h(q\,\kappa _0)=q^{-4}\kappa _0^{-8}h(\kappa _0). \end{aligned}$$
(5.28)

It follows from Lemma 4.1 that h has precisely eight zeros, counting multiplicity, in \(\{\kappa _0\in {\mathbb {C}}^*\}\), modulo \(q^{\mathbb {Z}}\). We further note the following helpful symmetries,

$$\begin{aligned} h(\kappa _0^{-1})=h(\kappa _0),\qquad h'(\kappa _0^{-1})=-\kappa _0^{2}\, h'(\kappa _0). \end{aligned}$$
(5.29)

A direct calculation yields that h, evaluated at \(\kappa _0=1\), formally factorises as

$$\begin{aligned} h(1)= \prod _{\epsilon _1,\epsilon _2\in \{\pm 1\}}\big [&+\kappa _\infty \theta _q(\kappa _t^2,\kappa _1^2,\kappa _\infty ^{-1}t_0,\kappa _\infty t_0)\\&+ \epsilon _1 \kappa _\infty \theta _q(\kappa _t\kappa _1\kappa _\infty ^{-1},\kappa _t\kappa _1\kappa _\infty ,\kappa _t \kappa _1^{-1}t_0,\kappa _1 \kappa _t^{-1}t_0)\\&+ \epsilon _2 \kappa _t\kappa _1 \theta _q(\kappa _t^{-1}\kappa _1\kappa _\infty ,\kappa _t\kappa _1^{-1}\kappa _\infty ,\kappa _t \kappa _1 t_0,\kappa _1^{-1} \kappa _t^{-1}t_0)\big ]. \end{aligned}$$

The factor with \(\epsilon _1=\epsilon _2=-1\) vanishes identically by the addition law for theta functions, hence \(h(1)=0\). It furthermore follows from the symmetries (5.29) that \(h'(1)=0\), so that \(\kappa _0=1\) is at least a double zero of h.

An analogous computation shows that \(\kappa _0=-1\) is at least a double zero of h.

Similarly, it follows that \(\kappa _0=q^{\frac{1}{2}}\) is a zero of h. To show that it is at least a double zero, we take the derivative of Eq. (5.28),

$$\begin{aligned} qh'(q\,\kappa _0)=q^{-4}\kappa _0^{-8}h'(\kappa _0)-8\,q^{-4}\kappa _0^{-9}h(\kappa _0). \end{aligned}$$

By evaluating this identity, and the second equation in (5.29), at \(\kappa _0=q^{-\frac{1}{2}}\), it follows that \(h'(q^{\frac{1}{2}})=0\) so that \(\kappa _0=q^{\frac{1}{2}}\) is at least a double zero of h. The same statement follows analogously for \(\kappa _0=-q^{\frac{1}{2}}\).
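Explicitly, combining the preceding identity at \(\kappa _0=q^{-\frac{1}{2}}\) with the symmetries (5.29) and \(h(q^{-\frac{1}{2}})=h(q^{\frac{1}{2}})=0\) gives

$$\begin{aligned} q\,h'(q^{\frac{1}{2}})=h'(q^{-\frac{1}{2}})-8\,q^{\frac{1}{2}}\,h(q^{-\frac{1}{2}})=h'(q^{-\frac{1}{2}})=-q\,h'(q^{\frac{1}{2}}), \end{aligned}$$

whence \(h'(q^{\frac{1}{2}})=0\).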

In conclusion, we have found four zeros of h, \(\kappa _0=\pm 1,\pm q^{\frac{1}{2}}\), each of multiplicity at least two, while h has precisely eight zeros modulo \(q^{\mathbb {Z}}\), counting multiplicity. It follows from this, and Eq. (5.28), that

$$\begin{aligned} h=\kappa _0^{-2}\theta _q\left( \kappa _0^2\right) ^2 {\widetilde{h}}, \end{aligned}$$

where \({\widetilde{h}}\) is a function independent of \(\kappa _0\).
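Indeed, the prefactor \(\kappa _0^{-2}\theta _q\left( \kappa _0^2\right) ^2\) has double zeros precisely at \(\kappa _0=\pm 1,\pm q^{\frac{1}{2}}\) modulo \(q^{{\mathbb {Z}}}\) (the zeros of \(\theta _q\) being simple) and satisfies the same q-difference equation (5.28):

$$\begin{aligned} (q\kappa _0)^{-2}\theta _q\left( q^2\kappa _0^2\right) ^2 =q^{-2}\kappa _0^{-2}\left( q^{-1}\kappa _0^{-4}\,\theta _q\left( \kappa _0^2\right) \right) ^2 =q^{-4}\kappa _0^{-8}\,\kappa _0^{-2}\theta _q\left( \kappa _0^2\right) ^2. \end{aligned}$$

Hence the quotient \({\widetilde{h}}\) is analytic on \({\mathbb {C}}^*\) and invariant under \(\kappa _0\mapsto q\kappa _0\), and is therefore constant in \(\kappa _0\).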

By following the same procedure with respect to the variables \(\kappa _t,\kappa _1,\kappa _\infty \), we obtain

$$\begin{aligned} h=c\,\kappa _0^{-2}\kappa _t^{2}\kappa _1^{2}\kappa _\infty ^{2}\theta _q\left( \kappa _0^2,\kappa _t^2,\kappa _1^2,\kappa _\infty ^2\right) ^2\theta _q\left( \kappa _t\kappa _1 t_0, \kappa _t\kappa _1^{-1} t_0, \kappa _t^{-1}\kappa _1 t_0, \kappa _t^{-1}\kappa _1^{-1} t_0\right) ^2, \end{aligned}$$

for some constant c which may only depend on \(t_0\) and q.

At this point, one simply evaluates both sides at \(\kappa _0=\kappa _t=\kappa _1=\kappa _\infty =i\), to obtain \(c=1\), which yields Eq. (5.27).

We now return to the proof of the proposition. We have already established that \(S^*\) has no singularities in its affine part. It remains to study whether \(S^*\) has singularities with one or more of their coordinates equal to \(\infty \). Note that we only have to check the cases where one or two of the coordinates are equal to \(\infty \), as points with more than two coordinates equal to \(\infty \) lie in X and thus not in \(S^*\).

Let us start by considering whether there are any singularities in

$$\begin{aligned} S^*\cap \{(\rho _1,\rho _2,\rho _3,\infty ):\rho _{k}\in {\mathbb {C}}\text { for }1\le k\le 3\}. \end{aligned}$$
(5.30)

To this end, we evaluate the gradient of

$$\begin{aligned} F=\rho _4^yT\left( \rho _1^x,\rho _2^x,\rho _3^x,\frac{1}{\rho _4^y}\right) \end{aligned}$$

at \(\rho _4^y=0\), yielding

$$\begin{aligned} \nabla F|_{\rho _4^y=0}=(T_{14},T_{24},T_{34},T_{12}\rho _1^x\rho _2^x+T_{13}\rho _1^x\rho _3^x+T_{23}\rho _2^x\rho _3^x)^T. \end{aligned}$$

For this gradient to vanish, it is required that \(T_{14}=T_{24}=T_{34}=0\), which cannot be realised without violating one of the non-resonance conditions (1.4). Therefore, \(S^*\) has no singularities with \(\rho _4=\infty \) and the remaining coordinates finite. Applying the same argument in the three other cases, it follows that the manifold \(S^*\) has no singularities with precisely one of their coordinates equal to \(\infty \).

Next, we consider the existence of singularities on \(S^*\) with two of their coordinates infinite. Let us for example consider \(\rho _3=\rho _4=\infty \) with \(\rho _1\) and \(\rho _2\) finite. Setting \(\rho _3^y=\rho _4^y=0\) in

$$\begin{aligned} \rho _1^y \rho _2^y \rho _3^y \rho _4^yT\left( \frac{\rho _1^x}{\rho _1^y},\frac{\rho _2^x}{\rho _2^y},\frac{\rho _3^x}{\rho _3^y},\frac{\rho _4^x}{\rho _4^y}\right) =0, \end{aligned}$$

reduces it to

$$\begin{aligned} T_{34}\rho _1^y \rho _2^y\rho _3^x\rho _4^x=0. \end{aligned}$$

Therefore,

$$\begin{aligned} \left\{ \rho \in S^*:\rho _3=\rho _4=\infty \right\} ={\left\{ \begin{array}{ll} \{(\rho _1,\rho _2,\infty ,\infty ):\rho _{1},\rho _2\in {\mathbb {C}}\} &{} \text { if }T_{34}=0,\\ \emptyset &{} \text { otherwise.} \end{array}\right. } \end{aligned}$$
(5.31)

In turn, \(T_{34}=0\) if and only if \(\kappa _0^{+1}\kappa _\infty t_0\in q^{\mathbb {Z}}\) or \(\kappa _0^{-1}\kappa _\infty t_0\in q^{\mathbb {Z}}\). Thus \(T_{34}\ne 0\) when the non-splitting conditions (1.5) hold true.

More generally, if the non-splitting conditions (1.5) hold true, then all of the coefficients \(T_{ij}\), \(1\le i<j\le 4\), are nonzero and consequently there are no points on \(S^*\) with two coordinates equal to \(\infty \). Thus we can conclude that \(S^*\) is smooth when the non-splitting conditions hold true.

Returning to the example above, i.e. \(\rho _3=\rho _4=\infty \), under the assumption that \(T_{34}=0\), evaluation of the gradient of

$$\begin{aligned} F=\rho _3^y\rho _4^yT\left( \rho _1^x,\rho _2^x,\frac{1}{\rho _3^y},\frac{1}{\rho _4^y}\right) \end{aligned}$$

at \(\rho _3^y=\rho _4^y=0\), yields

$$\begin{aligned} \nabla F|_{\rho _3^y,\rho _4^y=0}=(0,0,T_{14}\rho _1^x+T_{24}\rho _2^x,T_{13}\rho _1^x+T_{23}\rho _2^x)^T, \end{aligned}$$

which vanishes at \(\rho _1^x=\rho _2^x=0\), and only at this point, as

$$\begin{aligned} \begin{vmatrix} T_{14}&\quad T_{24}\\ T_{13}&\quad T_{23} \end{vmatrix}^2=|H|\ne 0, \end{aligned}$$

where H is the Hessian of T defined in Eq. (5.26).

The determinant of the Hessian of F at the point \((\rho _1^x,\rho _2^x,\rho _3^y,\rho _4^y)={\underline{0}}\) equals |H|, which is nonzero, and thus this point is a non-degenerate saddle point of F. In particular, \(\{F=0\}\) has an ordinary double point singularity at \({\underline{0}}\), by the complex Morse lemma. Therefore, the manifold \(S^*\) has an ordinary double point singularity at \(\rho =(0,0,\infty ,\infty )\), when \(\kappa _0^{+1}\kappa _\infty t_0\in q^{\mathbb {Z}}\) or \(\kappa _0^{-1}\kappa _\infty t_0\in q^{\mathbb {Z}}\).
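The linear-algebra identity invoked here, namely that \(|H|\) reduces to the square of this \(2\times 2\) determinant when \(T_{34}=0\), can be confirmed symbolically; a minimal sketch:

import sympy as sp

T12, T13, T14, T23, T24 = sp.symbols('T12 T13 T14 T23 T24')
T34 = 0                                    # the degenerate case considered above
H = sp.Matrix([[0,   T12, T13, T14],
               [T12, 0,   T23, T24],
               [T13, T23, 0,   T34],
               [T14, T24, T34, 0  ]])
M = sp.Matrix([[T14, T24],
               [T13, T23]])
print(sp.expand(H.det() - M.det()**2))     # prints 0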

More generally, if some of the non-splitting conditions (1.5) are violated, then the intersection \(S_{sing}^*\) of \(\Theta \) and \(S^*\) is non-empty, and at each point in \(S_{sing}^*\), \(S^*\) has an ordinary double point singularity and \(S^*\) is smooth elsewhere. Otherwise, \(S_{sing}^*\) is empty and in that case we have already shown that \(S^*\) has no singularities. This completes the proof of the proposition. \(\square \)

We now proceed to prove Theorem 2.17 by using Proposition 5.5.

Proof of Theorem 2.17

The first part of the proof is to show that the smoothness properties of the 3-manifold \(S^*(\kappa ,t_0)\), established in Proposition 5.5, are preserved by the quotient map to \({\mathcal {S}}^*(\kappa ,t_0)\). The second step will be to translate these results to the monodromy manifold \({\mathcal {M}}(\kappa ,t_0)\).

Recall that \(S^*(\kappa ,t_0)\) is obtained from the zero set of the polynomial \(T(\rho :\kappa ,t_0)\), given in Definition 2.14, by removing the subspace \(X(\kappa ,t_0)\). Due to Proposition 5.5, it can be singular only at points in the finite set \(\Theta \), given in Eq. (5.23). Recall also that \(S^*_{sing}\) refers to the subset of singular points lying on the 3-manifold \(S^*(\kappa ,t_0)\). Consider the smooth complex 3-manifold

$$\begin{aligned} {\widetilde{S}}^*(\kappa ,t_0)=S^*(\kappa ,t_0)\setminus S^*_{sing}. \end{aligned}$$

We denote the image of \(\Theta \) under the quotient map \(\iota _{S^*}\) by \({\widehat{\Theta }}\), so that the image of \({\widetilde{S}}^*(\kappa ,t_0)\) under \(\iota _{S^*}\) is given by

$$\begin{aligned} \widetilde{{\mathcal {S}}}^*(\kappa ,t_0)={\mathcal {S}}^*(\kappa ,t_0)\setminus {\mathcal {S}}_{sing}^*,\quad {\mathcal {S}}_{sing}^*:={\mathcal {S}}^*(\kappa ,t_0)\cap {\widehat{\Theta }}. \end{aligned}$$

As (non-zero) scalar multiplication acts smoothly on \({\widetilde{S}}^*(\kappa ,t_0)\), and no element of \({\widetilde{S}}^*(\kappa ,t_0)\) is invariant under this operation, it follows that \(\widetilde{{\mathcal {S}}}^*(\kappa ,t_0)\) is a smooth complex surface.

Now, consider a point \(\rho _0\in S_{sing}\). Since this point is invariant under the smooth action \(\rho \mapsto c\,\rho \), \(c\in {\mathbb {C}}^*\), it is easy to see that the quotient space \({\mathcal {S}}^*\) is not Hausdorff near its image \([\rho _0]\). In fact, near points in \(S_{sing}\), the space \({\mathcal {S}}^*\) even fails to locally be a \(T_1\) space. In particular, the smooth structure on \(\widetilde{{\mathcal {S}}}^*(\kappa ,t_0)\) cannot be extended to include points in \(S_{sing}\).

To complete the proof of the theorem, we translate the results on \({\mathcal {S}}^*(\kappa ,t_0)\) to \({\mathcal {M}}(\kappa ,t_0)\) via the mapping \({\mathcal {P}}\). To this end, recall that \({\mathcal {P}}\) maps the finite set \({\mathcal {M}}_{sing}\) onto \({\mathcal {S}}_{sing}\).

We have shown that \({\mathcal {S}}^*(\kappa ,t_0){\setminus } {\mathcal {S}}_{sing}\) is a smooth complex surface. Hence \({\mathcal {M}}(\kappa ,t_0)\setminus {\mathcal {M}}_{sing}\) is a smooth complex surface. Furthermore, elements of \({\mathcal {M}}_{sing}\) form singularities on the monodromy manifold, as points in \({\mathcal {S}}_{sing}\) are singularities on \({\mathcal {S}}^*(\kappa ,t_0)\).

Finally, we note that \({\mathcal {M}}_{sing}\) is non-empty if and only if \({\mathcal {S}}_{sing}\) is non-empty, and the latter holds true if and only if some of the non-splitting conditions are violated, by the equivalence in Proposition 5.5. The theorem follows. \(\square \)

5.3 The monodromy manifold as an algebraic surface

In this section, we prove Theorem 2.20, which allows us to identify the monodromy manifold with an affine algebraic surface embedded in \({\mathbb {C}}^6\). Furthermore, we describe how the monodromy manifold can also be embedded in \(({\mathbb {P}}^1)^3\).

Proof of Theorem 2.20

The mapping \(\Phi _{{\mathcal {M}}}\) is composed of two parts: \(\mathcal P: {\mathcal {M}}\rightarrow {\mathcal {S}}^*\) and \(\Phi :{\mathcal {S}}^*\rightarrow \mathcal F\). The mapping \({\mathcal {P}}\) is a (topological) isomorphism due to Theorem 2.15. Hence, it only remains to show that the mapping \(\Phi \) is an isomorphism. To prove this, we construct a continuous inverse of \(\Phi \), which we denote by \(\Psi \).

We start by recalling that \({\mathcal {S}}^*\), defined in Eq. (2.23), is locally described by coordinates \([\rho ]\) in the ambient space \(({\mathbb {P}}^1)^4/{\mathbb {C}}^*\). Similarly, \({\mathcal {F}}\) is described by the coordinates \(\eta _{ij}\), \(1\le i<j \le 4\), in \({\mathbb {C}}^6\).

The mapping \(\Phi \) is a continuous mapping from \({\mathcal {S}}^*\) to \({\mathcal {F}}\), described by Eq. (2.25) with respect to the above coordinates. In particular, note that, due to Eq. (2.25), for any labeling \(\{i,j,k,l\}=\{1,2,3,4\}\), we have

$$\begin{aligned} \eta _{ij}=0 \iff \rho _i=0\text { or }\rho _j=0\text { or }\rho _k=\infty \text { or }\rho _l=\infty . \end{aligned}$$
(5.32)

This means that \(\Phi \) maps the open subdomain \({\mathcal {S}}_0\subseteq {\mathcal {S}}^*\), given by

$$\begin{aligned} {\mathcal {S}}_0:=\{[\rho ]\in {\mathcal {S}}^*:\rho _k\ne 0,\infty \text { for }1\le k\le 4\}, \end{aligned}$$

into the subspace

$$\begin{aligned} {\mathcal {F}}_0:={\mathcal {F}}\cap ({\mathbb {C}}^*)^6, \end{aligned}$$

of the co-domain.

We proceed by defining an inverse of \(\Phi \) on this subdomain and co-domain, and subsequently extending this inverse to one on the full domain.

The relevant mapping on \({\mathcal {F}}_0\) is the following,

$$\begin{aligned} \Psi |_{{\mathcal {F}}_0}:{\mathcal {F}}_0\rightarrow ({\mathbb {P}}^1)^4/{\mathbb {C}}^*, \eta \mapsto \left[ \left( \frac{T_{34}\eta _{13}}{T_{13}\eta _{34}},\frac{T_{34}\eta _{23}}{T_{23}\eta _{34}},\frac{T_{24}\eta _{23}}{T_{23}\eta _{24}},1\right) \right] , \end{aligned}$$

which we now show to be an inverse of \(\Phi |_{{\mathcal {S}}_0}\). By Eqs. (2.26a), (2.26c) and (2.26d), the image of \(\Psi |_{{\mathcal {F}}_0}\) is contained in \({\mathcal {S}}\). Furthermore, due to (2.26b), no point in the image can lie in \({\mathcal {X}}\). It thus follows that the image of \(\Psi |_{{\mathcal {F}}_0}\) is contained in \({\mathcal {S}}^*\). Moreover, as no \(\eta \)-coordinate of a point in \({\mathcal {F}}_0\) equals zero, \(\Psi |_{{\mathcal {F}}_0}\) maps \({\mathcal {F}}_0\) into \({\mathcal {S}}_0\). Finally, note that, for any point \(\rho \in {\mathcal {S}}_0\),

$$\begin{aligned} \Psi |_{{\mathcal {F}}_0}\circ \Phi |_{{\mathcal {S}}_0}([\rho ])&=\left[ \left( \frac{T_{34}\eta _{13}}{T_{13}\eta _{34}},\frac{T_{34}\eta _{23}}{T_{23}\eta _{34}},\frac{T_{24}\eta _{23}}{T_{23}\eta _{24}},1\right) \right] \\&=[(\rho _1/\rho _4,\rho _2/\rho _4,\rho _3/\rho _4,1)]\\&=[(\rho _1,\rho _2,\rho _3,\rho _4)], \end{aligned}$$

where, in the second equality, we used Eq. (2.25).

Similarly, it can be seen that \(\Phi |_{{\mathcal {S}}_0}\circ \Psi |_{{\mathcal {F}}_0}\) is the identity map on \({\mathcal {F}}_0\). It follows that \(\Psi |_{{\mathcal {F}}_0}\) is a (continuous) inverse of \(\Phi |_{{\mathcal {S}}_0}\).

The set \({\mathcal {S}}_0\) is an open dense subset of the domain \({\mathcal {S}}\) and, similarly, \({\mathcal {F}}_0\) is an open dense subset of the co-domain. It remains to deal with the special cases where one or more of the \(\rho _k\), \(1\le k\le 4\), is zero or infinite, and equivalently one or more of the \(\eta _{ij}\), \(1\le i<j\le 4\) is zero.

We handle each of these cases separately. The cases are described by

$$\begin{aligned} {\left\{ \begin{array}{ll} &{}{\mathcal {S}}_i^0:=\{[\rho ]\in {\mathcal {S}}^*:\rho _i=0,\rho _k\notin \{0,\infty \}\text { for }k\ne i\},\\ &{}{\mathcal {S}}_j^\infty :=\{[\rho ]\in {\mathcal {S}}^*:\rho _j=\infty ,\rho _k\notin \{0,\infty \}\text { for }k\ne j\},\\ &{}{\mathcal {S}}_{i,j}^{0,\infty }:=\{[\rho ]\in {\mathcal {S}}^*:\rho _i=0,\rho _j=\infty ,\rho _k\notin \{0,\infty \}\text { for }k\ne i,j\}, \end{array}\right. } \end{aligned}$$
(5.33)

for \(1\le i,j\le 4\) with \(i\ne j\). Note that \({\mathcal {S}}_{i,j}^{0,\infty }\) provides the boundaries of \({\mathcal {S}}_i^0\) and \({\mathcal {S}}_j^\infty \). Since no point on \({\mathcal {S}}^*\) can have two or more components all zero or all infinite, the sets defined in Eq. (5.33) glue together to provide all the boundaries or limit sets of \({\mathcal {S}}_0\) within \({\mathcal {S}}^*\).

We now express the surface \({\mathcal {S}}^*\) as a disjoint union of all of these cases with \({\mathcal {S}}_0\), that is,

$$\begin{aligned} {\mathcal {S}}^*=\,&{\mathcal {S}}_0\sqcup {\mathcal {S}}_1^0\sqcup {\mathcal {S}}_2^0\sqcup {\mathcal {S}}_3^0\sqcup {\mathcal {S}}_4^0\\&\sqcup {\mathcal {S}}_1^\infty \sqcup {\mathcal {S}}_2^\infty \sqcup {\mathcal {S}}_3^\infty \sqcup {\mathcal {S}}_4^\infty \\&\sqcup {\mathcal {S}}_{1,2}^{0,\infty }\sqcup {\mathcal {S}}_{1,3}^{0,\infty }\sqcup {\mathcal {S}}_{1,4}^{0,\infty }\sqcup {\mathcal {S}}_{2,1}^{0,\infty }\sqcup \ldots \sqcup {\mathcal {S}}_{4,2}^{0,\infty }\sqcup {\mathcal {S}}_{4,3}^{0,\infty }, \end{aligned}$$

where the last line indicates disjoint union of all \({\mathcal {S}}_{i,j}^{0,\infty }\), \(1\le i,j\le 4\), with \(i\ne j\).

We correspondingly decompose the codomain \({\mathcal {F}}\) into disjoint components. Motivated by Eq. (5.32), we define these components by

$$\begin{aligned} {\left\{ \begin{array}{ll} &{}{\mathcal {F}}_i^0:=\{\eta \in {\mathcal {F}}: i\in \{k,l\}\iff \eta _{kl}=0,\text { for }1\le k<l\le 4\},\\ &{}{\mathcal {F}}_j^\infty :=\{\eta \in {\mathcal {F}}: j\notin \{k,l\}\iff \eta _{kl}=0,\text { for }1\le k<l\le 4\},\\ &{}{\mathcal {F}}_{i,j}^{0,\infty }:=\{\eta \in {\mathcal {F}}: i\in \{k,l\}\text { and }j\notin \{k,l\} \iff \eta _{kl}=0,\\ &{}\qquad \qquad \quad \text { for }1\le k<l\le 4\}. \end{array}\right. } \end{aligned}$$
(5.34)

Equations (2.26) imply that any element \(\eta \) of \({\mathcal {F}}\) has either zero, three or four components equal to zero, so that the components defined in (5.34) indeed cover all of \({\mathcal {F}}\setminus {\mathcal {F}}_0\).

Then, inspired by (5.32), we correspondingly decompose \({\mathcal {F}}\) as a disjoint union,

$$\begin{aligned} {\mathcal {F}}=\,&{\mathcal {F}}_0\sqcup {\mathcal {F}}_1^0\sqcup {\mathcal {F}}_2^0\sqcup {\mathcal {F}}_3^0\sqcup {\mathcal {F}}_4^0\\&\sqcup {\mathcal {F}}_1^\infty \sqcup {\mathcal {F}}_2^\infty \sqcup {\mathcal {F}}_3^\infty \sqcup {\mathcal {F}}_4^\infty \\&\sqcup {\mathcal {F}}_{1,2}^{0,\infty }\sqcup {\mathcal {F}}_{1,3}^{0,\infty }\sqcup {\mathcal {F}}_{1,4}^{0,\infty }\sqcup {\mathcal {F}}_{2,1}^{0,\infty }\sqcup \ldots \sqcup {\mathcal {F}}_{4,2}^{0,\infty }\sqcup {\mathcal {F}}_{4,3}^{0,\infty }, \end{aligned}$$

Due to (5.32), \(\Phi \) maps each component in the decomposition of \({\mathcal {S}}^*\) into the corresponding component in the decomposition of \({\mathcal {F}}\). We extend \(\Psi \) to a global inverse of \(\Phi \) on \({\mathcal {F}}\), by defining it locally on each of the components in the decomposition of \({\mathcal {F}}\). The arguments for the three types of components are similar, so we give the details for one component of each type below.

For example, for

$$\begin{aligned} {\mathcal {F}}_1^0=\{\eta \in {\mathcal {F}}: \eta _{12}=\eta _{13}=\eta _{14}=0\text { and }\eta _{23},\eta _{24},\eta _{34}\ne 0\}, \end{aligned}$$

we set

$$\begin{aligned} \Psi |_{{\mathcal {F}}_1^0}:{\mathcal {F}}_1^0\rightarrow {\mathcal {S}}_1^0,\eta \mapsto \left[ \left( 0,\frac{T_{34}\eta _{23}}{T_{23}\eta _{34}},\frac{T_{24}\eta _{23}}{T_{23}\eta _{24}},1\right) \right] , \end{aligned}$$

which defines an inverse of \(\Phi |_{{\mathcal {S}}_1^0}\). Similarly, for

$$\begin{aligned} {\mathcal {F}}_1^\infty =\{\eta \in {\mathcal {F}}:\eta _{23}=\eta _{24}=\eta _{34}=0\text { and } \eta _{12},\eta _{13},\eta _{14}\ne 0\}, \end{aligned}$$

we define

$$\begin{aligned} \Psi |_{{\mathcal {F}}_1^\infty }:{\mathcal {F}}_1^\infty \rightarrow {\mathcal {S}}_1^\infty ,\eta \mapsto \left[ \left( \infty ,\frac{\eta _{12}}{T_{12}},\frac{\eta _{13}}{T_{13}},\frac{\eta _{14}}{T_{14}}\right) \right] , \end{aligned}$$

which is an inverse of \(\Phi |_{{\mathcal {S}}_1^\infty }\). For the third and final example

$$\begin{aligned} {\mathcal {F}}_{1,2}^{0,\infty }=\{\eta \in {\mathcal {F}}:\eta _{12}=\eta _{13}=\eta _{14}=\eta _{34}=0\text { and } \eta _{23},\eta _{24}\ne 0\}, \end{aligned}$$

we take

$$\begin{aligned} \Psi |_{{\mathcal {F}}_{1,2}^{0,\infty }}:{\mathcal {F}}_{1,2}^{0,\infty }\rightarrow {\mathcal {S}}_{1,2}^{0,\infty },\eta \mapsto \left[ \left( 0,\infty ,\frac{\eta _{23}}{T_{23}},\frac{\eta _{24}}{T_{24}}\right) \right] , \end{aligned}$$

which is an inverse of \(\Phi |_{{\mathcal {S}}_{1,2}^{0,\infty }}\).

This extends \(\Psi \) to a global inverse of \(\Phi \) on \({\mathcal {F}}\). \(\Psi \) is continuous on each of the separate components and it is straightforward to check that its continuations to common boundary points of different components agree with each other. \(\square \)

We finish this section by describing an embedding of the monodromy manifold into \(({\mathbb {P}}^1)^3\). We assume that the non-splitting conditions (1.5) hold true. In particular, all the coefficients of the polynomial \(T(\rho :\kappa ,t_0)\) are nonzero and, therefore, there are no points \(\rho \in S^*(\kappa ,t_0)\) with two or more components all zero or all infinite. Thus,

$$\begin{aligned} \rho _{ij}=\frac{\rho _i}{\rho _j}\in {\mathbb {P}}^1,\quad (1\le i<j\le 4), \end{aligned}$$

form six well-defined coordinates on the surface \({\mathcal {S}}^*(\kappa ,t_0)\) and thus also on the monodromy manifold \({\mathcal {M}}(\kappa ,t_0)\).

Ohyama et al. [27] study the \(q\text {P}_{\text {VI}}\) monodromy manifold using these coordinates. Theorem 2.15 yields explicit algebraic relations among them. For example, \(\rho _{12},\rho _{23}\) and \(\rho _{34}\) are related by

$$\begin{aligned} T_{12}\rho _{12}\rho _{23}^2 +T_{13}\rho _{12}\rho _{23} +T_{14}\rho _{12}\rho _{23}\rho _{34}^{-1} +T_{23}\rho _{23} +T_{24}\rho _{23}\rho _{34}^{-1} +T_{34}\rho _{34}^{-1}=0. \end{aligned}$$
(5.35)
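In the affine chart, writing \(\rho _{12}=\rho _1/\rho _2\), \(\rho _{23}=\rho _2/\rho _3\), \(\rho _{34}=\rho _3/\rho _4\) and multiplying Eq. (5.35) through by \(\rho _3^2\), the relation takes the symmetric form

$$\begin{aligned} T_{12}\rho _1\rho _2+T_{13}\rho _1\rho _3+T_{14}\rho _1\rho _4+T_{23}\rho _2\rho _3+T_{24}\rho _2\rho _4+T_{34}\rho _3\rho _4=0, \end{aligned}$$

which displays the six coefficients \(T_{ij}\) symmetrically in the four coordinates \(\rho _k\).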

Analogously to the proof of Theorem 2.20, we can show that these three coordinates yield an embedding of the monodromy manifold into \(({\mathbb {P}}^1)^3\),

$$\begin{aligned} {\mathcal {M}}(\kappa ,t_0)\rightarrow ({\mathbb {P}}^1)^3,[C(z)]\mapsto (\rho _{12},\rho _{23},\rho _{34}), \end{aligned}$$

with range given by the surface (5.35) minus a curve. This curve is given by the intersection of the surfaces (5.35) obtained as \(\kappa _0\) varies over \({\mathbb {C}}^*\).

Remark 5.6

Assuming the non-splitting conditions (1.5), the six coordinates \(\rho _{ij}\), \(1\le i<j\le 4\), are analytic rational functions from \({\mathcal {F}}(\kappa ,t_0)\) to \(\mathbb{C}\mathbb{P}^1\), which together embed the surface into \((\mathbb{C}\mathbb{P}^1)^6\). The same statement holds true for these coordinates, as functions on the monodromy manifold \({\mathcal {M}}(\kappa ,t_0)\), with respect to the structure of a complex algebraic variety defined in Ohyama et al. [27]. It follows that this structure is compatible with the one induced by Theorem 2.20.

6 Conclusion

In this paper, we studied the \(q\text {P}_{\text {VI}}\) equation through its associated linear problem. Assuming non-resonant parameter conditions, we defined the corresponding Riemann–Hilbert problem, which captures the general solution of \(q\text {P}_{\text {VI}}\). This problem was shown to be solvable for irreducible monodromy, leading to a one-to-one correspondence between solutions of \(q\text {P}_{\text {VI}}\) and points on the corresponding monodromy manifold, when the non-splitting conditions are satisfied.

In turn, we constructed an explicit embedding of the monodromy manifold into \((\mathbb{C}\mathbb{P}^1)^4/{\mathbb {C}}^*\), whose image is described by the zero locus of a single quadratic polynomial, minus a curve. This allowed us to show that the monodromy manifold is a smooth complex surface, when the non-splitting conditions hold true. We further proved that it can be identified with an affine algebraic surface, under the same assumptions. This surface can be described as the intersection of two quadrics in \({\mathbb {C}}^4\) and its projective completion is thus a Segre surface.

The results of this paper suggest a possible framework for tackling several open questions. These include, for example, the classification of algebraic or symmetric solutions of \(q\text {P}_{\text {VI}}\), the construction of (classes of) special transcendental solutions via the geometry of the monodromy manifold, and the derivation of solutions with distinctive (e.g. bounded) global asymptotic behaviours.