1 Introduction

In this paper we analyze some aspects of the tetradiagonal Hessenberg matrix of the form

$$\begin{aligned} T=\begin{bmatrix} c_0 & 1 & & & \\ b_1 & c_1 & 1 & & \\ a_2 & b_2 & c_2 & 1 & \\ & a_3 & b_3 & c_3 & \ddots \\ & & \ddots & \ddots & \ddots \end{bmatrix}, \end{aligned}$$
(1)

where we assume that \(a_n>0\), and its bidiagonal factorization

$$\begin{aligned} T= L_{1} L_{2} U, \end{aligned}$$
(2)

with bidiagonal matrices given by

$$\begin{aligned} L_1=\begin{bmatrix} 1 & & & \\ \alpha _2 & 1 & & \\ & \alpha _5 & 1 & \\ & & \ddots & \ddots \end{bmatrix},&L_2=\begin{bmatrix} 1 & & & \\ \alpha _3 & 1 & & \\ & \alpha _6 & 1 & \\ & & \ddots & \ddots \end{bmatrix},&U=\begin{bmatrix} \alpha _1 & 1 & & \\ & \alpha _4 & 1 & \\ & & \alpha _7 & \ddots \\ & & & \ddots \end{bmatrix}. \end{aligned}$$
(3)
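In computational terms the factorization is easy to set up; the following minimal sketch (Python with NumPy is assumed, and the helper name `bidiagonal_factors` is ours) assembles truncations of \(L_1\), \(L_2\) and \(U\) from a positive sequence \(\alpha _k\), that is, from a PBF, and forms the corresponding tetradiagonal Hessenberg product:

```python
import numpy as np

def bidiagonal_factors(alpha, N):
    """(N+1)-truncations of L_1, L_2, U in (3); alpha[k] stores alpha_k."""
    L1, L2 = np.eye(N + 1), np.eye(N + 1)
    U = np.zeros((N + 1, N + 1))
    for n in range(N + 1):
        U[n, n] = alpha[3 * n + 1]
        if n < N:
            U[n, n + 1] = 1.0
            L1[n + 1, n] = alpha[3 * n + 2]
            L2[n + 1, n] = alpha[3 * n + 3]
    return L1, L2, U

rng = np.random.default_rng(0)
N = 5
alpha = np.concatenate(([0.0], rng.uniform(0.5, 2.0, 3 * N + 1)))  # positive sequence: a PBF
L1, L2, U = bidiagonal_factors(alpha, N)
T = L1 @ L2 @ U   # one unit superdiagonal, diagonal c_n, subdiagonals b_n and a_n
print(np.round(T, 3))
```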

If the requirement

$$\begin{aligned} \alpha _j>0,\quad j\in \mathbb {N}, \end{aligned}$$

is fulfilled, we say that we have a positive bidiagonal factorization (PBF). In [5], this factorization was shown to be the key for a Favard theorem for bounded banded Hessenberg semi-infinite matrices and for the existence of positive measures such that the recursion polynomials are multiple orthogonal polynomials and the Hessenberg matrix is the recursion matrix of this set of multiple orthogonal polynomials. We also gave a multiple Gauss quadrature together with explicit degrees of precision. Then, in [6], we studied for the tetradiagonal case, in terms of continued fractions, when such a PBF exists for oscillatory matrices. Oscillatory tetradiagonal Toeplitz matrices were shown to admit a PBF. Moreover, it was proven that oscillatory banded Hessenberg matrices are organized in rays, with the origin of the ray not having a positive bidiagonal factorization and all the interior points of the ray having one.

In the next section of this paper we succinctly discuss two cases that appear in the literature: the Jacobi–Piñeiro [11, 16] and the hypergeometric [3, 12] families. In the final section, the Darboux and Christoffel transformations [4] are connected with the PBF: the coefficients \(\alpha \) in the PBF are reconstructed in terms of the values of the type II and type I polynomials at 0 (see Theorem 2), and the Darboux transformations are discussed in the light of the spectral Christoffel perturbations [1] (see Theorem 3).

1.1 Preliminary material

For multiple orthogonal polynomials see [1, 11, 14].

Let us denote by \( T^{[N]}=T[\{0,1,\ldots ,N\}]\in \mathbb {R}^{(N+1)\times (N+1)}\) the \((N+1)\)-th leading principal submatrix of the banded Hessenberg matrix T:

$$\begin{aligned} T^{[N]}=\begin{bmatrix} c_0 & 1 & & & \\ b_1 & c_1 & 1 & & \\ a_2 & b_2 & c_2 & \ddots & \\ & \ddots & \ddots & \ddots & 1 \\ & & a_N & b_N & c_N \end{bmatrix}. \end{aligned}$$
(4)

Definition 1

(Recursion polynomials of type II) The type II recursion vector of polynomials

$$\begin{aligned} B(x)=\begin{pmatrix} B_0(x)&B_1(x)&\cdots \end{pmatrix}^\top \end{aligned}$$

is determined by the following eigenvalue equation

$$\begin{aligned} TB(x)=x B(x). \end{aligned}$$
(5)

Uniqueness is ensured by taking as initial condition \(B_0=1\). We call the components \(B_n\) type II recursion polynomials. One obtains that \(B_1=x- c_0\), \(B_2=(x-c_0)(x-c_1)-b_1\), and higher degree recursion polynomials are constructed by means of the 4-term recurrence relation

$$\begin{aligned} B_{n+1}&=(x-c_n)B_n-b_nB_{n-1}-a_n B_{n-2},&n&\in \{2,3,\ldots \}. \end{aligned}$$
(6)
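As an illustration (SymPy assumed; helper names are ours), the following sketch generates the \(B_n\) from the recurrence (6) and checks them against the characteristic polynomials of the truncations \(T^{[N]}\), anticipating Proposition 1:

```python
import random
import sympy as sp

def hessenberg_truncation(a, b, c, N):
    """(N+1)x(N+1) leading principal submatrix T^[N] of the matrix in (1)."""
    T = sp.zeros(N + 1, N + 1)
    for n in range(N + 1):
        T[n, n] = c[n]
        if n < N:
            T[n, n + 1] = 1
        if n >= 1:
            T[n, n - 1] = b[n]
        if n >= 2:
            T[n, n - 2] = a[n]
    return T

def type_II(a, b, c, n_max):
    """[B_0, ..., B_{n_max}] from the four-term recurrence (6)."""
    x = sp.Symbol('x')
    B = [sp.Integer(1), x - c[0], sp.expand((x - c[0]) * (x - c[1]) - b[1])]
    for n in range(2, n_max):
        B.append(sp.expand((x - c[n]) * B[n] - b[n] * B[n - 1] - a[n] * B[n - 2]))
    return B

random.seed(1)
x = sp.Symbol('x')
N = 4
a, b, c = ([random.randint(1, 5) for _ in range(N + 2)] for _ in range(3))
B = type_II(a, b, c, N + 1)
# cf. Proposition 1 below: B_{N+1}(x) = det(x I - T^[N])
assert sp.expand(B[N + 1] - (x * sp.eye(N + 1) - hessenberg_truncation(a, b, c, N)).det()) == 0
```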

Definition 2

(Recursion polynomials of type I) Dual to the polynomial vector B(x) we consider the two following dual vectors of polynomials

$$\begin{aligned} A^{(1)}(x)&=\begin{pmatrix} A^{(1)}_0(x)&A^{(1)}_1(x)&\cdots \end{pmatrix},&A^{(2)}(x)&=\begin{pmatrix} A^{(2)}_0(x)&A^{(2)}_1(x)&\cdots \end{pmatrix}, \end{aligned}$$

that are left eigenvectors of the semi-infinite matrix T, i.e.,

$$\begin{aligned} A^{(1)}(x) T&=x A^{(1)}(x),&A^{(2)}(x) T&=xA^{(2)}(x). \end{aligned}$$

The initial conditions, that determine these polynomials uniquely, are taken as

$$\begin{aligned} A^{(1)}_0&=1,&A^{(1)}_1&=\nu ,&A^{(2)}_0&=0,&A^{(2)}_1&=1, \end{aligned}$$

with \(\nu \ne 0\) being an arbitrary constant. Then, from the first relation

$$\begin{aligned} c_0A^{(a)}_0+b_{1}A^{(a)}_{1}+a_2A^{(a)}_{2}&=xA^{(a)}_0,&a&\in \{1,2\}, \end{aligned}$$

we get \(A^{(1)}_2=\frac{x}{a_2}-\frac{c_0+b_1\nu }{a_2}\) and \(A^{(2)}_2=-\frac{b_1}{a_2}\). The other polynomials in these sequences are determined by the following four-term recursion relation

$$\begin{aligned} A^{(a)}_n a_n&=-A^{(a)}_{n-1}b_{n-1}+A^{(a)}_{n-2}(x-c_{n-2})-A^{(a)}_{n-3},&n&\in \{3,4,\ldots \},&a&\in \{1,2\}. \end{aligned}$$
(7)

For example, one finds

$$\begin{aligned} A^{(1)}_3 a_3&=-A^{(1)}_2b_2+A^{(1)}_1(x-c_1)-A^{(1)}_0{=} -b_2\Big (\frac{x}{a_2}{-}\frac{c_0+b_1\nu }{a_2}\Big ){+}\nu (x-c_1)-1,\\ A^{(2)}_3 a_3&=-A^{(2)}_2b_2+A^{(2)}_1(x-c_1)-A^{(2)}_0= b_2\frac{b_1}{a_2}+x-c_1. \end{aligned}$$
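A companion sketch (again SymPy; names ours) produces the type I sequences from their initial data, the first relation and the four-term recursion (7):

```python
import sympy as sp

def type_I(a, b, c, A0, A1, n_max):
    """[A_0, ..., A_{n_max}]: A_2 from the first relation in Definition 2,
    then the four-term recursion (7)."""
    x = sp.Symbol('x')
    A = [sp.sympify(A0), sp.sympify(A1)]
    # first relation: c_0 A_0 + b_1 A_1 + a_2 A_2 = x A_0
    A.append(sp.expand(((x - c[0]) * A[0] - b[1] * A[1]) / a[2]))
    for n in range(3, n_max + 1):
        A.append(sp.expand((-A[n - 1] * b[n - 1] + A[n - 2] * (x - c[n - 2]) - A[n - 3]) / a[n]))
    return A

x, nu = sp.symbols('x nu')
a = [0, 0, 2, 3]; b = [0, 1, 2]; c = [1, 1]    # arbitrary sample data
print(type_I(a, b, c, 1, nu, 3))   # A^(1): A_0 = 1, A_1 = nu
print(type_I(a, b, c, 0, 1, 3))    # A^(2): A_0 = 0, A_1 = 1
```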

Second kind polynomials are also relevant in the theory of multiple orthogonality.

Definition 3

(Recursion polynomials of type II of the second kind) Let us consider the recursion relation (6) in the form

$$\begin{aligned} a_nB_{n-2}+b_nB_{n-1}+c_nB_n+B_{n+1}=xB_n, \end{aligned}$$
(8)

set \(b_0=a_0=a_1=-1\) and \(n\in \mathbb {N}_0\). Initial values for \(B_{-2}, B_{-1}, B_0\) are required to generate the values \(B_n\) for \(n\in \mathbb {N}\). The polynomials of type II correspond to the choice

$$\begin{aligned} B_{-2}&=0,&B_{-1}&=0,&B_0&=1. \end{aligned}$$
(9)

Two sequences of polynomials of type II of the second kind \(\big \{B_n^{(1)}\big \}_{n=0}^\infty \) and \(\big \{B_n^{(2)}\big \}_{n=0}^\infty \) are defined by the following initial conditions

$$\begin{aligned} B^{(1)}_{-2}&=1,&B^{(1)}_{-1}&=0,&B^{(1)}_0&=0, \end{aligned}$$
(10)
$$\begin{aligned} B^{(2)}_{-2}&=-1-\nu ,&B^{(2)}_{-1}&=1,&B^{(2)}_0&=0. \end{aligned}$$
(11)

Proposition 1

(Determinantal expressions)

  1. (i)

    For the recursion polynomials we have the determinantal expressions

    $$\begin{aligned} B_{N+1}(x)=\det \big (xI_{N+1}-T^{[N]}\big ). \end{aligned}$$
    (12)

    Hence, they are the characteristic polynomials of the leading principal submatrices \(T^{[N]}\).

  2. (ii)

    For the recursion polynomials of type II of the second kind, \(B^{(1)}_{N+1} \) and \(B^{(2)}_{N+1} \), we have adjugate and determinantal expressions; in particular, in terms of the characteristic polynomials (15) of the truncations introduced below,

    $$\begin{aligned} B^{(1)}_{N+1}&=B^{[1]}_{N+1},&B^{(2)}_{N+1}&=B^{[2]}_{N+1}-\nu B^{[1]}_{N+1}. \end{aligned}$$

Finite truncations of these matrices having this positive bidiagonal factorization are oscillatory matrices. In fact, in this paper we will be dealing with totally nonnegative matrices and oscillatory matrices and, consequently, we require some definitions and properties, which we now present succinctly.

Further truncations are, for \(k \in \{0,1,\ldots , N\}\),

$$\begin{aligned} T^{[N,k]}&{:=}T[\{k,k+1,\ldots ,N\}]\in \mathbb {R}^{(N+1-k)\times (N+1-k)}, \end{aligned}$$
(13)
$$\begin{aligned} T^{[N,k]}&=\begin{bmatrix} c_k & 1 & & \\ b_{k+1} & c_{k+1} & \ddots & \\ a_{k+2} & \ddots & \ddots & 1 \\ & \ddots & b_{N} & c_{N} \end{bmatrix}, \end{aligned}$$
(14)

i.e., the submatrices obtained from \(T^{[N]}\) by erasing its first k rows and columns; notice that \(T^{[N]}=T^{[N,0]}\). The corresponding characteristic polynomials are:

$$\begin{aligned} B^{[k]}_{N+1}(x)&{:=}\det \big (xI_{N+1-k}-T^{[N,k]}\big ),&B^{[0]}_{N+1}&=B_{N+1}. \end{aligned}$$
(15)

Totally nonnegative (TN) matrices are those with all their minors nonnegative [8, 9], and the set of nonsingular TN matrices is denoted by InTN. Oscillatory matrices [9] are totally nonnegative, irreducible [10] and nonsingular. Notice that the set of oscillatory matrices is denoted by IITN (irreducible invertible totally nonnegative) in [8]. An oscillatory matrix A can equivalently be defined as a totally nonnegative matrix A such that \(A^n\) is totally positive (all minors are positive) for some n. From the Cauchy–Binet theorem one can deduce the invariance of these sets of matrices under the usual matrix product. Thus, following [8, Theorem 1.1.2], the product of matrices in InTN is again InTN (similar statements hold for TN or oscillatory matrices). We have the following important result:

Theorem 1

(Gantmacher–Krein Criterion) [9, Chapter 2, Theorem 10] A totally nonnegative matrix A is oscillatory if and only if it is nonsingular and its elements on the first subdiagonal and on the first superdiagonal are positive.
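These notions can be tested by brute force on small matrices; the following sketch (NumPy assumed, names ours) enumerates all minors, so it is only feasible for small sizes:

```python
from itertools import combinations
import numpy as np

def is_totally_nonnegative(A, tol=1e-12):
    """All minors nonnegative; exponential cost, small matrices only."""
    n = A.shape[0]
    for k in range(1, n + 1):
        for rows in combinations(range(n), k):
            for cols in combinations(range(n), k):
                if np.linalg.det(A[np.ix_(rows, cols)]) < -tol:
                    return False
    return True

def is_oscillatory(A, tol=1e-12):
    """Gantmacher-Krein: TN + nonsingular + positive first sub/superdiagonal."""
    return (is_totally_nonnegative(A, tol)
            and abs(np.linalg.det(A)) > tol
            and np.all(np.diag(A, -1) > 0)
            and np.all(np.diag(A, 1) > 0))

# e.g. a truncation T = L1 @ L2 @ U built from a positive alpha sequence,
# as in the earlier sketch, should pass is_oscillatory
```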

The Gauss–Borel factorization of the matrix \(T^{[N]}\) in (4) is the following factorization

$$\begin{aligned} T^{[N]}=L^{[N]} U^{[N]} \end{aligned}$$
(16)

with \(L^{[N]}\) a lower unitriangular banded matrix with two nonzero subdiagonals and \(U^{[N]}\) an upper triangular banded matrix with nonzero entries only on the diagonal and on the first superdiagonal.

Proposition 2

The Gauss–Borel factorization exists if and only if all the leading principal minors \(\delta ^{[n]}\), \(n\in \{0,1,\ldots ,N\}\), of \(T^{[N]}\) are nonzero.

For \(n\in \mathbb {N}\), the following expressions for the coefficients hold

(17)

where \(\delta ^{[-1]}=1\) and \(a_1=0\). Moreover, the following recurrence relation for the determinants is satisfied:

$$\begin{aligned} \delta ^{[n]}= a_n\delta ^{[n-3]}-b_n\delta ^{[n-2]}+c_n\delta ^{[n-1]}, \end{aligned}$$
(18)

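The recurrence (18) is easily checked against directly computed determinants; a sketch (NumPy assumed):

```python
import numpy as np

def truncation(a, b, c, N):
    """Dense (N+1)x(N+1) truncation T^[N]."""
    T = np.zeros((N + 1, N + 1))
    for n in range(N + 1):
        T[n, n] = c[n]
        if n < N: T[n, n + 1] = 1.0
        if n >= 1: T[n, n - 1] = b[n]
        if n >= 2: T[n, n - 2] = a[n]
    return T

def minors_by_recurrence(a, b, c, N):
    """delta^[n] via (18); delta^[-1] = 1 and the convention a_1 = 0
    (delta^[-2], delta^[-3] never contribute)."""
    d = [0.0, 0.0, 1.0]                      # delta^[-3], delta^[-2], delta^[-1]
    for n in range(N + 1):
        an = a[n] if n >= 2 else 0.0         # a_0 = a_1 = 0
        bn = b[n] if n >= 1 else 0.0
        d.append(an * d[-3] - bn * d[-2] + c[n] * d[-1])
    return d[3:]

rng = np.random.default_rng(5)
N = 6
a, b, c = (rng.uniform(0.5, 2.0, N + 1) for _ in range(3))
deltas = minors_by_recurrence(a, b, c, N)
for n in range(N + 1):
    assert np.isclose(deltas[n], np.linalg.det(truncation(a, b, c, n)))
```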

For oscillatory matrices the Gauss–Borel factorization exists and both triangular factors belong to InTN. See [13] for a modern account of the role of the Gauss–Borel factorization problem in the realm of standard and nonstandard orthogonality.

2 Hypergeometric and Jacobi–Piñeiro examples

Now we discuss two cases of tetradiagonal Hessenberg matrices that appear as recurrence matrices of two families of multiple orthogonal polynomials. For each of them we consider bidiagonal factorizations and their positivity.

2.1 Hypergeometric multiple orthogonal polynomials

In [12], a new set of multiple hypergeometric polynomials was introduced by Lima and Loureiro. The corresponding recursion matrix \(T_{LL}\) was used in [7] to construct stochastic matrices and associated Markov chains beyond birth and death. For the hypergeometric case, the bidiagonal factorization \(T_{LL}=L_1L_2U\) is provided in [12, Equations 107–110], which ensures the regular oscillatory character of the matrix \(T_{LL}\). Notice that the correspondence between Lima–Loureiro’s \(\lambda _n\) and our \(\alpha _n\) is \( \lambda _{3n+2}\rightarrow \alpha _{3n+1}\), \(\lambda _{3n+1}\rightarrow \alpha _{3n}\) and \(\lambda _{3n}\rightarrow \alpha _{3n-1}\). These coefficients were obtained in [12] from [15, Theorem 14.5] as the coefficients of a branched-continued-fraction representation for \(_{3}F_{2}\). For more on this see the recent paper [17]. This sequence is TP, so that \(T_{LL}\) is a regular oscillatory banded Hessenberg matrix.

2.2 Jacobi–Piñeiro multiple orthogonal polynomials

Jacobi–Piñeiro multiple orthogonal polynomials, associated with the weights \(w_1=x^{\alpha }(1-x)^\gamma \), \(w_2=x^{\beta }(1-x)^\gamma \), supported on [0, 1], with \(\alpha ,\beta ,\gamma >-1\) and \(\alpha -\beta \not \in \mathbb {Z}\), are a well-studied case. This system is an AT system, and the corresponding orthogonal polynomials and linear forms have interlacing zeros, see [11], even though, as we will discuss now, the recursion matrix \(T_{JP}\) is not always oscillatory. The corresponding monic recursion matrix \(T_{JP}\) was considered in [7, Section 4.3], where we showed that this recursion matrix is oscillatory whenever the parameters \(\alpha ,\beta \) lie in the strip given by \(|\alpha -\beta |<1\).

Lemma 1

(Jacobi–Piñeiro’s recursion matrix bidiagonal factorization) The Jacobi–Piñeiro recursion matrix has bidiagonal factorizations as in Eq. (2) with at least the following two sets of parameters:

$$\begin{aligned} \alpha _{6n+1}&= \frac{( n + 1+\alpha ) (2 n + 1 + \alpha + \gamma ) (2 n + 1 + \beta + \gamma )}{(3 n + 1 + \alpha + \gamma ) (3 n + 2 + \alpha + \gamma ) (3 n + 1 + \beta + \gamma )},\\ {\tilde{\alpha }}_{6n+1}&= \frac{( n + 1+\alpha ) (2 n + 1 + \alpha + \gamma ) (2 n + 1 + \beta + \gamma )}{(3 n + 1 + \alpha + \gamma ) (3 n + 2 + \alpha + \gamma ) (3 n + 1 + \beta + \gamma )},\\ \alpha _{6n+2}&= \frac{n (2 n + 1 + \gamma ) (2 n + 1 + \alpha + \gamma )}{(3 n + 2 + \alpha + \gamma ) (3 n + 1 + \beta + \gamma ) (3 n + 2 + \beta + \gamma )},\\ {\tilde{\alpha }}_{6n+2}&= \frac{(n-\alpha +\beta ) (2 n + 1 + \gamma ) (2 n + 1 + \beta + \gamma )}{(3 n + 2 + \alpha + \gamma ) (3 n + 1 + \beta + \gamma ) (3 n + 2 + \beta + \gamma )},\\ \alpha _{6n+3}&= \frac{(n + 1) (2 n + 1 + \gamma ) (2 n + 2 + \beta + \gamma )}{(3 n + 2 + \alpha + \gamma ) (3 n + 3 + \alpha +\gamma ) (3 n + 2 + \beta + \gamma )},\\ {\tilde{\alpha }}_{6n+3}&= \frac{(n + 1+\alpha -\beta ) (2 n + 1 + \gamma ) (2 n + 2 + \alpha + \gamma )}{(3 n + 2 + \alpha + \gamma ) (3 n + 3 + \alpha +\gamma ) (3 n + 2 + \beta + \gamma )},\\ \alpha _{6n+4}&=\frac{( n + 1+\beta ) (2 n + 2 + \alpha + \gamma ) (2 n + 2 + \beta + \gamma )}{(3 n + 3 + \alpha + \gamma ) (3 n + 2 + \beta + \gamma ) (3 n + 3 + \beta + \gamma )},\\ {\tilde{\alpha }}_{6n+4}&=\frac{( n + 1+\beta ) (2 n + 2 + \alpha + \gamma ) (2 n + 2 + \beta + \gamma )}{(3 n + 3 + \alpha + \gamma ) (3 n + 2 + \beta + \gamma ) (3 n + 3 + \beta + \gamma )},\\ \alpha _{6n+5}&= \frac{(n + 1+\alpha -\beta ) (2 n + 2 + \gamma ) (2 n + 2 + \alpha + \gamma )}{(3 n + 3 + \alpha + \gamma ) (3 n + 4 + \alpha + \gamma ) (3 n + 3 + \beta + \gamma )},\\ {\tilde{\alpha }}_{6n+5}&= \frac{( n + 1) (2 n + 2 + \gamma ) (2 n + 2 + \beta + \gamma )}{(3 n + 3 + \alpha + \gamma ) (3 n + 4 + \alpha + \gamma ) (3 n + 3 + \beta + \gamma )},\\ \alpha _{6n+6}&= \frac{(n + 1-\alpha +\beta ) (2 n + 2 +\gamma ) (2 n + 3 + \beta + \gamma )}{(3 n + 4 + \alpha + \gamma ) (3 n + 3 + \beta + \gamma ) (3 n + 4 + \beta + \gamma )},\\ {\tilde{\alpha }}_{6n+6}&= \frac{( n + 1) (2 n + 2 +\gamma ) (2 n + 3 + \alpha + \gamma )}{(3 n + 4 + \alpha + \gamma ) (3 n + 3 + \beta + \gamma ) (3 n + 4 + \beta + \gamma )}, \end{aligned}$$

here \(n\in \mathbb {N}_0\).

Proof

Identifying the entries of \(T\) with those of the product \(L_1L_2U\) gives, with the convention \(\alpha _k=0\) for \(k\leqslant 0\),

$$\begin{aligned} c_n&=\alpha _{3n+1}+\alpha _{3n}+\alpha _{3n-1},&b_n&=(\alpha _{3n}+\alpha _{3n-1})\alpha _{3n-2}+\alpha _{3n-1}\alpha _{3n-3},&a_n&=\alpha _{3n-1}\alpha _{3n-3}\alpha _{3n-5}. \end{aligned}$$

We have \( \alpha _1 = c_0\), and \(m_1{:=}\alpha _3+\alpha _2\) and \(\alpha _4\) are obtained from \(b_1=m_1\alpha _1\) and \(c_1=m_1+\alpha _4\). Then, the products \(\pi _n{:=}\alpha _{3n-1}\alpha _{3n-3}\), the sums \(m_n{:=}\alpha _{3n}+\alpha _{3n-1}\) and the coefficients \(\alpha _{3n+1}\), \(n = 2,3,\ldots \), are determined recursively according to \(a_n=\pi _n\alpha _{3(n-2)+1}\), \(b_n=m_n\alpha _{3(n-1)+1}+\pi _n\) and \( m_n + \alpha _{3n+1} = c_n\) (from the first relation we get \(\pi _n\), from the second \(m_n\) and from the third \(\alpha _{3n+1}\)). Now, these expressions for the \(\pi \)'s and \(m\)'s lead to the remaining \(\alpha \)'s. Indeed, given the value of \(\alpha _2\), which the relations leave free and which distinguishes the two factorizations, we have \(\alpha _3=m_1-\alpha _2\) and \(\alpha _5=\pi _2/\alpha _3\) (we get \(\alpha _3\) and \(\alpha _5\), respectively), and then we apply the recursion, \(n \in \mathbb {N}\), \(\alpha _{3(n+1)}=m_{n+1}-\alpha _{3n+2}\), \(\alpha _{3(n+1)+2}=\pi _{n+2}/\alpha _{3(n+1)}\) (in each iteration we obtain \(\alpha _{3(n+1)}\) and \(\alpha _{3(n+1)+2}\), respectively). A direct check shows that the two displayed sequences solve these relations. \(\square \)
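A numerical sketch of Lemma 1 (NumPy assumed; all names ours): generate both parameter sequences from the closed formulas and feed each into the product \(L_1L_2U\); if the lemma holds, the two products agree to machine precision.

```python
import numpy as np

def jp_alphas(al, be, ga, n_max, akv=False):
    """alpha_1, ..., alpha_{6(n_max+1)} from Lemma 1 (AKV sequence if akv)."""
    out = []
    for n in range(n_max + 1):
        d1 = (3*n+1+al+ga) * (3*n+2+al+ga) * (3*n+1+be+ga)
        d2 = (3*n+2+al+ga) * (3*n+1+be+ga) * (3*n+2+be+ga)
        d3 = (3*n+2+al+ga) * (3*n+3+al+ga) * (3*n+2+be+ga)
        d4 = (3*n+3+al+ga) * (3*n+2+be+ga) * (3*n+3+be+ga)
        d5 = (3*n+3+al+ga) * (3*n+4+al+ga) * (3*n+3+be+ga)
        d6 = (3*n+4+al+ga) * (3*n+3+be+ga) * (3*n+4+be+ga)
        if not akv:
            out += [(n+1+al)*(2*n+1+al+ga)*(2*n+1+be+ga) / d1,
                    n*(2*n+1+ga)*(2*n+1+al+ga) / d2,
                    (n+1)*(2*n+1+ga)*(2*n+2+be+ga) / d3,
                    (n+1+be)*(2*n+2+al+ga)*(2*n+2+be+ga) / d4,
                    (n+1+al-be)*(2*n+2+ga)*(2*n+2+al+ga) / d5,
                    (n+1-al+be)*(2*n+2+ga)*(2*n+3+be+ga) / d6]
        else:
            out += [(n+1+al)*(2*n+1+al+ga)*(2*n+1+be+ga) / d1,
                    (n-al+be)*(2*n+1+ga)*(2*n+1+be+ga) / d2,
                    (n+1+al-be)*(2*n+1+ga)*(2*n+2+al+ga) / d3,
                    (n+1+be)*(2*n+2+al+ga)*(2*n+2+be+ga) / d4,
                    (n+1)*(2*n+2+ga)*(2*n+2+be+ga) / d5,
                    (n+1)*(2*n+2+ga)*(2*n+3+al+ga) / d6]
    return out

def factor_product(alpha, N):
    """(N+1)-truncation of L_1 L_2 U; alpha lists alpha_1, alpha_2, ... in order."""
    L1, L2 = np.eye(N + 1), np.eye(N + 1)
    U = np.zeros((N + 1, N + 1))
    g = lambda k: alpha[k - 1]
    for n in range(N + 1):
        U[n, n] = g(3 * n + 1)
        if n < N:
            U[n, n + 1] = 1.0
            L1[n + 1, n] = g(3 * n + 2)
            L2[n + 1, n] = g(3 * n + 3)
    return L1 @ L2 @ U

al, be, ga = 0.3, 0.7, 0.5            # a sample point of the natural region (in R_3)
T1 = factor_product(jp_alphas(al, be, ga, 3), 4)
T2 = factor_product(jp_alphas(al, be, ga, 3, akv=True), 4)
print(np.max(np.abs(T1 - T2)))        # should be ~1e-16: both factor the same T_JP
```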

Remark 1

The bidiagonal factorization \(\{{\tilde{\alpha }}_n\}_{n=1}^\infty \) was found in [2, Section 8.1]; this is why we refer to it as the Aptekarev-Kalyagin-Van Iseghem (AKV) bidiagonal factorization.

For \(n\in \mathbb {N}_0\), given these two bidiagonal factorizations, the entries of the lower unitriangular factor \(L=L_1L_2={\tilde{L}}_1{\tilde{L}}_2\) in the Gauss–Borel factorization of the Jacobi–Piñeiro Hessenberg recursion matrix can be expressed in the following two manners:

$$\begin{aligned} L_{n+1,n}&=\alpha _{3n+2}+\alpha _{3n+3}={\tilde{\alpha }}_{3n+2}+{\tilde{\alpha }}_{3n+3},&L_{n+2,n}&=\alpha _{3n+5}\alpha _{3n+3}={\tilde{\alpha }}_{3n+5}{\tilde{\alpha }}_{3n+3}. \end{aligned}$$
(19)

To better understand the dependence on the set of Jacobi–Piñeiro parameters \((\alpha ,\beta )\) we define some regions in the plane. Let us denote by \({\mathscr {R}}{:=}\{(\alpha ,\beta )\in \mathbb {R}^2:\alpha ,\beta >-1,\,\alpha -\beta \not \in \mathbb {Z}\}\) the natural region, where the orthogonality is well defined, and divide it into the following four regions:

$$\begin{aligned} {\mathscr {R}}_1&{:=}\{(\alpha ,\beta )\in {\mathscr {R}}: \alpha -\beta >1\},&{\mathscr {R}}_2&{:=}\{(\alpha ,\beta )\in {\mathscr {R}}: 0<\alpha -\beta<1\}, \\ {\mathscr {R}}_3&{:=}\{(\alpha ,\beta )\in {\mathscr {R}}: -1<\alpha -\beta<0\},&{\mathscr {R}}_4&{:=}\{(\alpha ,\beta )\in {\mathscr {R}}: \alpha -\beta <-1\}. \end{aligned}$$

We show these regions in the following figure:

(Figure: the four regions \({\mathscr {R}}_1,{\mathscr {R}}_2,{\mathscr {R}}_3,{\mathscr {R}}_4\) of the \((\alpha ,\beta )\)-plane, separated by the lines \(\alpha -\beta =1\), \(\alpha -\beta =0\) and \(\alpha -\beta =-1\).)

Lemma 2

  1. (i)

    For the sequence \(\{\alpha _n\}_{n\in \mathbb {N}}\) of the first bidiagonal factorization, we have

    1. (a)

      In the region \({\mathscr {R}}\) the sequence is TN, except for \(\alpha _5\), which is negative in \({\mathscr {R}}_4\), and \(\alpha _6\), which is negative in \({\mathscr {R}}_1\).

    2. (b)

      It is a TN sequence in the strip \({\mathscr {R}}_2\cup {\mathscr {R}}_3\); excluding \(\alpha _2=0\), the sequence is TP.

  2. (ii)

    For the AKV sequence \(\{{\tilde{\alpha }}_n\}_{n\in \mathbb {N}}\), we have

    1. (a)

      In the region \({\mathscr {R}}\) the sequence is TP, except for \({\tilde{\alpha }}_2\), which is negative in \({\mathscr {R}}_1\cup {\mathscr {R}}_2\), \({\tilde{\alpha }}_8\), which is negative in \({\mathscr {R}}_1\), and \({\tilde{\alpha }}_3\), which is negative in \({\mathscr {R}}_4\).

    2. (b)

      It is a TP sequence in the half strip \({\mathscr {R}}_3\).

Proof

For the first set of bidiagonal parameters \(\{\alpha _n\}_{n\in \mathbb {N}}\), we check that all are positive in \({\mathscr {R}}\), except for \(\alpha _2=0\) and possibly \(\alpha _5,\alpha _6\). From direct inspection we get that \(\alpha _5<0\) when \(1+\alpha -\beta <0\), i.e. in region \({\mathscr {R}}_4\), and \(\alpha _6<0\) when \(1-\alpha +\beta <0\), i.e. in region \({\mathscr {R}}_1\). Hence, the sequence \(\{\alpha _n\}_{n=1}^\infty \) is TN (and TP except for \(\alpha _2=0\)) in the region \({\mathscr {R}}_2\cup {\mathscr {R}}_3\), while in \({\mathscr {R}}\) it is TN except for \(\alpha _5\) in \({\mathscr {R}}_4\) and except for \(\alpha _6\) in \({\mathscr {R}}_1\). For the AKV parameters \(\{{\tilde{\alpha }}_n\}_{n\in \mathbb {N}}\), all are positive in \({\mathscr {R}}\), except for \({\tilde{\alpha }}_2,{\tilde{\alpha }}_8\) and \({\tilde{\alpha }}_3\). The entry \({\tilde{\alpha }}_2<0\) when \(\alpha >\beta \), that is, in \({\mathscr {R}}_1\cup {\mathscr {R}}_2\); \({\tilde{\alpha }}_8<0\) when \(1-\alpha +\beta <0\), i.e. in \({\mathscr {R}}_1\); and \({\tilde{\alpha }}_3<0\) when \(1+\alpha -\beta <0\), i.e. in \({\mathscr {R}}_4\). \(\square \)
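The sign patterns behind Lemma 2 can be tabulated at sample points of the four regions; a plain Python sketch that specializes Lemma 1 at \(n=0\) (and at \(n=1\) for \({\tilde{\alpha }}_8\)):

```python
def signs(al, be, ga=0.5):
    """Signs of the critical entries of Lemma 2 at a point (al, be)."""
    a5 = (1+al-be)*(2+ga)*(2+al+ga) / ((3+al+ga)*(4+al+ga)*(3+be+ga))
    a6 = (1-al+be)*(2+ga)*(3+be+ga) / ((4+al+ga)*(3+be+ga)*(4+be+ga))
    t2 = (be-al)*(1+ga)*(1+be+ga)   / ((2+al+ga)*(1+be+ga)*(2+be+ga))
    t3 = (1+al-be)*(1+ga)*(2+al+ga) / ((2+al+ga)*(3+al+ga)*(2+be+ga))
    t8 = (1-al+be)*(3+ga)*(3+be+ga) / ((5+al+ga)*(4+be+ga)*(5+be+ga))
    vals = dict(alpha5=a5, alpha6=a6, t_alpha2=t2, t_alpha3=t3, t_alpha8=t8)
    return {k: ('+' if v > 0 else '-') for k, v in vals.items()}

# one sample point per region (alpha - beta = 2.3, 0.5, -0.5, -2.3)
for label, pt in [('R1', (2.5, 0.2)), ('R2', (0.7, 0.2)),
                  ('R3', (0.2, 0.7)), ('R4', (0.2, 2.5))]:
    print(label, signs(*pt))
```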

Lemma 3

For the two first subdiagonals of the lower triangular matrix L in the Gauss–Borel factorization of the Jacobi–Piñeiro’s Hessenberg recursion matrix \(T_{JP}\) we have

  1. (i)

    The first-subdiagonal sequence \(\{L_{n+1,n}\}_{n\in \mathbb {N}_0}\) is TP in the definition region \({\mathscr {R}}\).

  2. (ii)

    The second-subdiagonal sequence \(\{L_{n+2,n}\}_{n\in \mathbb {N}_0}\) is TP but for \(L_{2,0}\) (\(L_{3,1}\)), which is negative in \({\mathscr {R}}_4\) (\({\mathscr {R}}_1\)).

Proof

From (19) and the first bidiagonal factorization we get that the lower triangular factor L has all its entries in the two first subdiagonals positive, but for \(L_{2,0}=\alpha _5\alpha _3\) (\(L_{3,1}=\alpha _8\alpha _6\)), which is negative in \({\mathscr {R}}_4\) (\({\mathscr {R}}_1\)), and possibly \(L_{2,1}=\alpha _5+\alpha _6\). Looking now at the AKV factorization we see that \(L_{2,1}={\tilde{\alpha }}_5+{\tilde{\alpha }}_6>0\) in \({\mathscr {R}}\). \(\square \)

Proposition 3

The Jacobi–Piñeiro’s recursion matrix satisfies:

  1. (i)

    It is oscillatory if and only if the parameters \((\alpha ,\beta )\) belong to the strip \({\mathscr {R}}_2\cup {\mathscr {R}}_3\).

  2. (ii)

    It admits a positive bidiagonal factorization at least if the parameters belong to the lower half strip \({\mathscr {R}}_3\).

  3. (iii)

    The retraction of the complementary matrix \(T_{JP}^{(4)}\) (\(T_{JP}^{(5)}\)), described in [6, Theorem 11], is oscillatory in the region \({\mathscr {R}}_2\cup {\mathscr {R}}_3\cup {\mathscr {R}}_4\) (\({\mathscr {R}}\)).

Proof

  1. (i)

    It follows from Lemma 2.

  2. (ii)

    The AKV bidiagonal factorization sequence is TP in \({\mathscr {R}}_3\).

  3. (iii)

    We use the Gauss–Borel factorization of these retractions described in [6, Theorem 11], which we know have bidiagonal factorizations with TP sequences.

\(\square \)

Remark 2

From the previous discussion of the Jacobi–Piñeiro recursion matrix it becomes clear that demanding the matrix to have a positive bidiagonal factorization is sufficient, but not necessary, for the existence of spectral measures. In the natural region \({\mathscr {R}}\) the Jacobi–Piñeiro weights exist and are positive with support on [0, 1]. However, we know that the recursion matrix is oscillatory only in the strip \({\mathscr {R}}_2\cup {\mathscr {R}}_3\). The associated matrix that is regular oscillatory in the whole natural region \({\mathscr {R}}\) is the retracted complementary matrix \(T_{JP}^{(5)}\). This observation leads to the question: for a spectral Favard theorem and positive measures, is it enough that a retracted complementary matrix of the banded Hessenberg matrix be oscillatory?

3 Applications to Darboux transformations

3.1 Darboux transformations of oscillatory banded Hessenberg matrices

We now show how our construction connects with those of the seminal paper [2] by Aptekarev, Kalyagin and Van Iseghem on genetic sums, vector convergents, Hermite–Padé approximants and Stieltjes problems, and with the Darboux–Christoffel transformations discussed in [4]. We identify the Darboux transformations of the oscillatory banded Hessenberg matrices with Christoffel transformations of the spectral measures. Recall that \(\nu \) is the initial condition given in Definition 2.

Definition 4

(Darboux transformed Hessenberg matrices). Given an oscillatory banded lower Hessenberg matrix T and its bidiagonal factorization as in (2), we consider the semi-infinite matrices

$$\begin{aligned} {\hat{T}}&{:=}L_2 U L_1,&\hat{{\hat{T}}}&{:=}U L_1 L_2. \end{aligned}$$

We will refer to these matrices as the first and second Darboux transformations of the banded Hessenberg matrix T.

Remark 3

These auxiliary matrices \({\hat{T}}^{[N]}\) are not the \((N+1)\)-th leading principal submatrices of the matrix \({\hat{T}}{:=}L_2UL_1\). The difference is in the last diagonal entry. The entries of \({\hat{T}}\) are

$$\begin{aligned}&\left\{ \begin{aligned} {\hat{c}}_n&= \alpha _{3n+2} + \alpha _{3n+1} + \alpha _{3n} , \\ {\hat{b}}_n&= \alpha _{3n} \alpha _{3n-1}+ \alpha _{3n+1} \alpha _{3n-1}+ \alpha _{3n}\alpha _{3n-2} , \\ {\hat{a}}_n&=\alpha _{3n}\alpha _{3n-2} \alpha _{3n-4} , \end{aligned}\right.&\end{aligned}$$
(20)

All the entries of the \((N+1)\)-th leading principal submatrix of \(\,{\hat{T}}\) coincide with those of \({\hat{T}}^{[N]}\) but for the last diagonal entry, as \(({\hat{T}}^{[N]})_{N+1,N+1}=\alpha _{3N}+\alpha _{3N+1}\) while \({\hat{c}}_{N}=\alpha _{3N}+\alpha _{3N+1}+\alpha _{3N+2}\).
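A sketch (NumPy assumed) of Definition 4 and the formulas (20): form \({\hat{T}}=L_2UL_1\) from truncated factors and compare the interior entries with (20); by the remark above, the last row of the truncated product is not exact and is skipped.

```python
import numpy as np

rng = np.random.default_rng(3)
N = 6
alpha = np.concatenate(([0.0], rng.uniform(0.5, 2.0, 3 * N + 1)))  # alpha[k] = alpha_k
L1, L2 = np.eye(N + 1), np.eye(N + 1)
U = np.zeros((N + 1, N + 1))
for n in range(N + 1):
    U[n, n] = alpha[3 * n + 1]
    if n < N:
        U[n, n + 1] = 1.0
        L1[n + 1, n] = alpha[3 * n + 2]
        L2[n + 1, n] = alpha[3 * n + 3]
That = L2 @ U @ L1
g = lambda k: alpha[k] if k >= 1 else 0.0        # alpha_k = 0 for k <= 0
for n in range(1, N):                            # interior rows are exact
    assert np.isclose(That[n, n], g(3*n+2) + g(3*n+1) + g(3*n))
    assert np.isclose(That[n, n-1],
                      g(3*n)*g(3*n-1) + g(3*n+1)*g(3*n-1) + g(3*n)*g(3*n-2))
    if n >= 2:
        assert np.isclose(That[n, n-2], g(3*n)*g(3*n-2)*g(3*n-4))
print("entries of hat-T match (20)")
```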

Remark 4

  1. i)

    From the definition it is immediately checked that both Darboux transformed Hessenberg matrices have the same banded structure as T.

  2. ii)

    For the coefficients of the second Darboux transform \(\hat{{\hat{T}}} \) we have

    $$\begin{aligned} \left\{ \begin{aligned} \hat{{\hat{c}}}_n&= \alpha _{3n+3} + \alpha _{3n+2} + \alpha _{3n+1}, \\ \hat{{\hat{b}}}_n&= \alpha _{3n+1} \alpha _{3n}+ \alpha _{3n+2} \alpha _{3n}+ \alpha _{3n+1}\alpha _{3n-1} , \\ \hat{{\hat{a}}}_n&=\alpha _{3n+1}\alpha _{3n-1} \alpha _{3n-3}. \end{aligned}\right. \end{aligned}$$
  3. iii)

    If \(\,T\) has a positive bidiagonal factorization, so that we can take \(\alpha _2>0\), then the Darboux transforms \({\hat{T}}, \hat{{\hat{T}}}\) are oscillatory.

Associated with these Hessenberg matrices we introduce the vectors of polynomials

$$\begin{aligned} \hat{{\hat{B}}}&{:=}UB,&{\hat{B}}&{:=}L_2\hat{{\hat{B}}}=L_2 U B. \end{aligned}$$
(21)

Notice that \({\hat{B}}_n\) and \( \hat{{\hat{B}}}_n\) are monic with \(\deg {\hat{B}}_n= \deg \hat{{\hat{B}}}_n= n+1\).

Lemma 4

We have

$$\begin{aligned} L_1 {\hat{B}}=xB. \end{aligned}$$
(22)

Proof

Equations (21) imply \( L_1 {\hat{B}}=L_1L_2 U B=TB=xB\). \(\square \)

Proposition 4

The eigenvalue properties \( {\hat{T}} {\hat{B}}=x {\hat{B}}\) and \(\hat{{\hat{T}}}\hat{{\hat{B}}}= x\hat{{\hat{B}}}\) are satisfied.

Proof

Equations (21) and (22) lead by direct computation to

$$\begin{aligned} {\hat{T}} {\hat{B}}&=L_2 U L_1{\hat{B}}=x L_2 U B=x {\hat{B}},&\hat{{\hat{T}}}\hat{{\hat{B}}}&= U L_1 L_2 UB=UTB=xUB=x\hat{{\hat{B}}}. \end{aligned}$$

\(\square \)

Let us denote by \({\tilde{T}}^{[n]}\) and \(\tilde{{\tilde{T}}}^{[n]}\) the \((n+1)\)-th leading principal submatrices of \({\hat{T}}\) and \(\hat{{\hat{T}}}\), respectively.

Lemma 5

The polynomials \({\hat{B}}_n\) and \(\hat{{\hat{B}}}_n\) can be expressed as \( \hat{B}_n=x{\tilde{B}}_n\) and \( \hat{{\hat{B}}}_n=x \tilde{{\tilde{B}}}_n\), with the monic polynomials \({\tilde{B}}_n, \tilde{{\tilde{B}}}_n\) having degree n.

Proof

One has that \(\hat{{\hat{B}}}_0={\hat{B}}_0=\alpha _1+B_1=\alpha _1+x-c_0=x\). Then, as the sequences of polynomials are generated by the recurrences determined by the banded Hessenberg matrices \({\hat{T}}\) and \(\hat{{\hat{T}}}\), respectively, we find the desired result. \(\square \)

We call these polynomials \({\tilde{B}}_n\) and \( \tilde{{\tilde{B}}}_n\) Darboux transformed polynomials of type II.

Proposition 5

The entries of the Darboux transformed polynomial sequences of type II

$$\begin{aligned} {\tilde{B}}&{:=}\frac{1}{x} L_2 U B,&\tilde{{\tilde{B}}}&{:=}\frac{1}{x}UB, \end{aligned}$$
(23)

read

$$\begin{aligned} {\tilde{B}}_n&=\frac{1}{x}\big (B_{n+1}+(\alpha _{3n+1}+\alpha _{3n})B_n+\alpha _{3n}\alpha _{3n-2} B_{n-1}\big ),&\tilde{ {\tilde{B}}}_n&=\frac{1}{x}\big (B_{n+1}+\alpha _{3n+1}B_{n}\big ) \end{aligned}$$
(24)

where we take \(\alpha _k=0\) for \(k\in \mathbb {Z}_-\). The following determinantal expressions hold

$$\begin{aligned} {\tilde{B}}_{n+1}&=\det \big (xI_{n+1}-{\tilde{T}}^{[n]}\big ),&\tilde{ {\tilde{B}}}_{n+1}&=\det \big (xI_{n+1}-\tilde{ {\tilde{T}}}^{[n]}\big ). \end{aligned}$$

Proof

The expressions (24) are the entries of the defining relations (23). The determinantal expressions follow from the fact that their expansions along the last row satisfy the same recursion relations with adequate initial conditions. \(\square \)
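Proposition 5 can also be verified symbolically; a sketch (SymPy assumed, names ours) building \(T=L_1L_2U\), the type II polynomials, the \({\tilde{B}}_n\) from (24) and the truncations of \({\hat{T}}\):

```python
import sympy as sp

x = sp.Symbol('x')
N = 4
size = N + 3
alpha = {k: sp.Rational(k % 5 + 1, 2) for k in range(1, 3 * size + 2)}  # arbitrary positive data

def banded(entry, size):
    M = sp.zeros(size, size)
    for i in range(size):
        for j in range(size):
            M[i, j] = entry(i, j)
    return M

L1 = banded(lambda i, j: 1 if i == j else (alpha[3*j+2] if i == j + 1 else 0), size)
L2 = banded(lambda i, j: 1 if i == j else (alpha[3*j+3] if i == j + 1 else 0), size)
U  = banded(lambda i, j: alpha[3*i+1] if i == j else (1 if j == i + 1 else 0), size)
T = L1 * L2 * U

# type II polynomials from the four-term recurrence generated by T
B = [sp.Integer(1), x - T[0, 0], sp.expand((x - T[1, 1]) * (x - T[0, 0]) - T[1, 0])]
for n in range(2, size - 1):
    B.append(sp.expand((x - T[n, n]) * B[n] - T[n, n - 1] * B[n - 1] - T[n, n - 2] * B[n - 2]))

def tilde_B(n):
    """first formula in (24); the division by x is exact by Lemma 5"""
    s = B[n + 1] + (alpha[3*n+1] + alpha[3*n]) * B[n] + alpha[3*n] * alpha[3*n-2] * B[n - 1]
    return sp.expand(sp.cancel(s / x))

That = L2 * U * L1
for n in range(3):   # tilde-B_{n+1}(x) = det(x I - tilde-T^[n])
    assert sp.expand(tilde_B(n + 1) - (x * sp.eye(n + 1) - That[:n+1, :n+1]).det()) == 0
```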

Following definitions given in (13) and (15) we consider similar objects in this context. That is, we denote by \({\tilde{T}}^{[n,k]}\) (\(\tilde{{\tilde{T}}}^{[n,k]}\)) the matrix obtained from \({\tilde{T}}^{[n]}\) (\(\tilde{{\tilde{T}}}^{[n]}\)) by erasing the first k rows and columns. The corresponding characteristic polynomials are

$$\begin{aligned} {\tilde{B}}^{[k]}_{n+1}&=\det \big (xI_{n+1-k}-{\tilde{T}}^{[n,k]}\big ),&\tilde{ {\tilde{B}}}^{[k]}_{n+1}&=\det \big (xI_{n+1-k} -\tilde{ {\tilde{T}}}^{[n,k]}\big ). \end{aligned}$$

These polynomials \({\tilde{B}}_{n}^{[k]}\) (\(\tilde{{\tilde{B}}}_n^{[k]}\)) satisfy the same recursion relations, determined by \({\hat{T}}\) (\(\hat{{\hat{T}}}\)), as do \({\tilde{B}}_{n}\) (\(\tilde{{\tilde{B}}}_n\)), but with different initial conditions. Following ii) in Proposition 1 we have the transformed recursion polynomials of type II

$$\begin{aligned} {\tilde{B}}_{n+1}^{(1)}&={\tilde{B}}^{[1]}_{n+1},&\tilde{{\tilde{B}}}_{n+1}^{(1)}&=\tilde{{\tilde{B}}}^{[1]}_{n+1},&{\tilde{B}}_{n+1}^{(2)}&= {\tilde{B}}^{[2]}_{n+1}-\nu {\tilde{B}}^{[1]}_{n+1},&\tilde{{\tilde{B}}}_{n+1}^{(2)}&= \tilde{ {\tilde{B}}}^{[2]}_{n+1}-\nu \tilde{{\tilde{B}}}^{[1]}_{n+1}. \end{aligned}$$

Then we consider the following vectors of polynomials

$$\begin{aligned} {\hat{B}}_{n+1}^{(1)}&= x{\tilde{B}}_{n+1}^{(1)},&\hat{{\hat{B}}}_{n+1}^{(1)}&=x\tilde{{\tilde{B}}}^{[1]}_{n+1},&{\hat{B}}_{n+1}^{(2)}&= x {\tilde{B}}^{(2)}_{n+1},&\hat{{\hat{B}}}_{n+1}^{(2)}&= x \tilde{{\tilde{B}}}^{(2)}_{n+1}. \end{aligned}$$

Proposition 6

(Vector Convergents) These recursion polynomials correspond to the vector convergent \(y^1_n=(A_{n,0},A_{n,1},A_{n,2})\) discussed in [2] as follows

$$\begin{aligned} B_n&=A_{3n,0},&{\hat{B}}_n&=A_{3n+1,0},&\hat{{\hat{B}}}_n&=A_{3n+2,0},\\ B_n^{(1)}&=A_{3n,1},&{\hat{B}}_n^{(1)}&=A_{3n+1,1},&\hat{{\hat{B}}}_n^{(1)}&=A_{3n+2,1},\\ -\frac{1}{\nu } B_n^{(2)}&=A_{3n,2},&-\frac{1}{\nu } {\hat{B}}_n^{(2)}&=A_{3n+1,2},&-\frac{1}{\nu } \hat{{\hat{B}}}_n^{(2)}&=A_{3n+2,2}, \end{aligned}$$

Proof

It follows from the fact that they satisfy the recursion relation [2, Equation (23)] and adequate initial conditions. \(\square \)

Then, following this dictionary, the important [2, Lemma 5] applies for \(x\geqslant 0\).

Remark 5

Using these facts, Aptekarev, Kalyagin and Van Iseghem in [2, Lemmata 6 & 7] deduce the degrees of the polynomials, the simplicity of their zeros and the interlacing properties of \(B_n\) with \(B_{n-1}\), \(B_n^{(1)}\) and \(B^{(2)}_n\). Notice that we derive the same results by just using the spectral properties of regular oscillatory matrices.

For recursion polynomials of type I, we introduce the following polynomials \( {\hat{A}}^{(2)}{:=}A^{(1)}L_1\), \({\hat{A}}^{(1)}{:=}A^{(2)}L_1\), \(\hat{ {\hat{A}}}^{(1)}{:=}A^{(1)}L_1L_2\) and \(\hat{{\hat{A}}}^{(2)}{:=}A^{(2)}L_1L_2\).

Proposition 7

Vectors \( {\hat{A}}^{(1)}\), \( {\hat{A}}^{(2)}\) are left eigenvectors of \({\hat{T}}\) and \(\hat{ {\hat{A}}}^{(1)}\), \(\hat{ {\hat{A}}}^{(2)}\) are left eigenvectors of \(\hat{{\hat{T}}}\).

Proof

A direct computation shows that \( {\hat{A}}^{(2)}{\hat{T}}=A^{(1)}L_1 L_2UL_1= A^{(1)}T L_1=xA^{(1)}L_1= x {\hat{A}}^{(2)}\). The other cases are proven similarly. \(\square \)

Lemma 6

Let us assume that \(1+\alpha _{2}\nu =0\). Then, \({\hat{A}}^{(2)}_0=0\) and \({\hat{A}}^{(2)}_1=\frac{1}{\alpha _3\alpha _1}x\).

Proof

Let us consider the vector \( {\hat{A}}^{(2)}=A^{(1)} L_1\) with components \( {\hat{A}}^{(2)}_n=A^{(1)}_n+\alpha _{3n+2}A^{(1)}_{n+1}\), \(n\in \mathbb {N}_0\). The first two entries are

$$\begin{aligned} {\hat{A}}^{(2)}_0&=A^{(1)}_0+\alpha _{2}A^{(1)}_{1}=1+\alpha _{2}\nu ,\\ {\hat{A}}^{(2)}_1&=A^{(1)}_1+\alpha _{5}A^{(1)}_{2}=\nu -\alpha _5\frac{c_0+b_1\nu }{a_2}+\frac{\alpha _5}{a_2}x=\nu -\alpha _5\frac{\alpha _1+(\alpha _3+\alpha _2)\alpha _1\nu }{\alpha _5\alpha _3\alpha _1}+\frac{\alpha _5}{\alpha _5\alpha _3\alpha _1}x\\ {}&=-\frac{1+\alpha _2\nu }{\alpha _3} +\frac{1}{\alpha _3\alpha _1}x. \end{aligned}$$

Here we have used Definition 2 and \(c_0=\alpha _1\), \(b_1=(\alpha _3+\alpha _2)\alpha _1\) and \(a_2=\alpha _5\alpha _3\alpha _1\). Then, as \(1+\alpha _{2}\nu =0\), we find the stated result. \(\square \)

Proposition 8

If \( 1+\nu \alpha _2=0\), we can write \({\hat{A}}^{(2)}_n= x {\tilde{A}}^{(2)}_n\) and \(\hat{{\hat{A}}}^{(1)}_n= x \tilde{{\tilde{A}}}^{(1)}_n\), for some polynomials \( {\tilde{A}}^{(2)}_n,\tilde{{\tilde{A}}}^{(1)}_n\).

Proof

The recursion relation \({\hat{c}}_0 {\hat{A}}^{(2)}_0+{\hat{b}}_1{\hat{A}}^{(2)}_1+{\hat{a}}_2 {\hat{A}}^{(2)}_2=x{\hat{A}}^{(2)}_0\) and Lemma 6 give \({\hat{A}}^{(2)}_2=-\frac{{\hat{b}}_1}{{\hat{a}}_2 \alpha _3\alpha _1}x\). Hence, induction leads to the conclusion that \({\hat{A}}^{(2)}_n=x{\tilde{A}}^{(2)}_n\), for some polynomial \({\tilde{A}}^{(2)}_n\). \(\square \)

Lemma 7

We have \(\hat{\hat{A}}^{(2)}_0=0\) and \(\hat{\hat{A}}^{(2)}_1=\frac{1}{\alpha _4}x\).

Proof

Let us consider the vector \( {\hat{A}}^{(1)}=A^{(2)} L_1\) with components \( {\hat{A}}^{(1)}_n=A^{(2)}_n+\alpha _{3n+2}A^{(2)}_{n+1}\), \(n\in \mathbb {N}_0\). The first three entries are

$$\begin{aligned} {\hat{A}}^{(1)}_0&=A^{(2)}_0+\alpha _{2}A^{(2)}_{1}=\alpha _{2},\\ {\hat{A}}^{(1)}_1&=A^{(2)}_1+\alpha _{5}A^{(2)}_{2}=1-\alpha _5\frac{b_1}{a_2}=1-\alpha _5\frac{(\alpha _3+\alpha _2)\alpha _1}{\alpha _5\alpha _3\alpha _1}=-\frac{\alpha _2}{\alpha _3}, \\ {\hat{A}}^{(1)}_2&=A^{(2)}_2+\alpha _{8}A^{(2)}_{3}=-\frac{b_1}{a_2}+\frac{\alpha _8}{a_3} \Big (b_2\frac{b_1}{a_2}+x-c_1\Big ) =\frac{b_1}{a_2}\Big (\frac{\alpha _8b_2}{a_3}-1\Big )-\frac{\alpha _8c_1}{a_3}+\frac{\alpha _8}{a_3}x\\ {}&= \frac{\alpha _3+\alpha _2}{\alpha _5\alpha _3}\Big ( \frac{\alpha _6\alpha _4+\alpha _5\alpha _4+\alpha _5\alpha _3}{\alpha _6\alpha _4}-1 \Big )-\frac{\alpha _4+\alpha _3+\alpha _2}{\alpha _6\alpha _4}+\frac{1}{\alpha _6\alpha _4}x\\ {}&= \frac{\alpha _3+\alpha _2}{\alpha _5\alpha _3} \frac{\alpha _5\alpha _4+\alpha _5\alpha _3}{\alpha _6\alpha _4} -\frac{\alpha _4+\alpha _3+\alpha _2}{\alpha _6\alpha _4}+\frac{1}{\alpha _6\alpha _4}x\\&=\frac{1}{\alpha _6\alpha _4}\Big ( \frac{\alpha _3+\alpha _2}{\alpha _3} (\alpha _4+\alpha _3) -\alpha _4-\alpha _3-\alpha _2+x\Big )\\ {}&= \frac{\alpha _2}{\alpha _6\alpha _3}+\frac{1}{\alpha _6\alpha _4}x. \end{aligned}$$

Then, we consider \( \hat{ \hat{A}}^{(2)}=\hat{A}^{(1)} L_2\) with components \(\hat{{\hat{A}}}^{(2)}_n={\hat{A}}^{(1)}_n+\alpha _{3n+3}{\hat{A}}^{(1)}_{n+1}\), \(n\in \mathbb {N}_0\). The first two components are

$$\begin{aligned} \hat{{\hat{A}}}^{(2)}_0&={\hat{A}}^{(1)}_0+\alpha _{3}{\hat{A}}^{(1)}_{1}=\alpha _2+\alpha _3\Big (-\frac{\alpha _2}{\alpha _3}\Big )=0,\\ \hat{{\hat{A}}}^{(2)}_1&={\hat{A}}^{(1)}_1+\alpha _{6}{\hat{A}}^{(1)}_{2}=-\frac{\alpha _2}{\alpha _3}+\alpha _6\Big ( \frac{\alpha _2}{\alpha _6\alpha _3}+\frac{1}{\alpha _6\alpha _4}x\Big )=\frac{1}{\alpha _4}x, \end{aligned}$$

and the result follows. \(\square \)

Proposition 9

There are polynomials \(\tilde{{\tilde{A}}}^{(2)}_n \) such that \(\hat{{\hat{A}}}^{(2)}_n =x\tilde{{\tilde{A}}}^{(2)}_n \).

Proof

It holds for the two first entries \(\hat{{\hat{A}}}^{(2)}_0\) and \(\hat{{\hat{A}}}^{(2)}_1\). Hence, from the recursion relation \(\hat{{\hat{A}}}^{(2)}\hat{{\hat{T}}}= x\hat{{\hat{A}}}^{(2)}\) we get that it holds for any natural number n. \(\square \)

We name the polynomials \({\hat{A}}^{(1)}_n\), \(\tilde{{\tilde{A}}}^{(1)}_n \), \({{\tilde{A}}}^{(2)}_n\) and \(\tilde{{\tilde{A}}}^{(2)}_n \) Darboux transformed polynomials of type I.

Proposition 10

Let us assume that \(1+\nu \alpha _2=0\). The entries of the Darboux transformed polynomial sequences of type I

$$\begin{aligned} {\tilde{A}}^{(2)}&=\frac{1}{x} A^{(1)}L_1,&\tilde{{\tilde{A}}}^{(1)}&=\frac{1}{x} A^{(1)}L_1L_2,&{\hat{A}}^{(1)}&= A^{(2)}L_1,&\tilde{{\tilde{A}}}^{(2)}&=\frac{1}{x} A^{(2)}L_1L_2 \end{aligned}$$

are given by

$$\begin{aligned} {\tilde{A}}^{(2)}_n&=\frac{1}{x}\big (A^{(1)}_n+\alpha _{3n+2}A^{(1)}_{n+1}\big ), \\ \tilde{{\tilde{A}}}^{(1)}_n&=\frac{1}{x}\big (A^{(1)}_n+(\alpha _{3n+2}+\alpha _{3n+3})A^{(1)}_{n+1}+\alpha _{3n+5}\alpha _{3n+3}A^{(1)}_{n+2}\big ), \\ {\hat{A}}^{(1)}_n&=A^{(2)}_n+\alpha _{3n+2}A^{(2)}_{n+1},\\ \tilde{{\tilde{A}}}^{(2)}_n&=\frac{1}{x}\big (A^{(2)}_n+(\alpha _{3n+2}+\alpha _{3n+3})A^{(2)}_{n+1}+\alpha _{3n+5}\alpha _{3n+3}A^{(2)}_{n+2}\big ). \end{aligned}$$

3.2 Spectral representation and Christoffel transformations

We identify the entries in the bidiagonal factorization (2) with simple rational expressions in terms of the recursion polynomials evaluated at the origin.

Theorem 2

(Parametrization of the bidiagonal factorization) The \(\alpha \)’s in the bidiagonal factorization (2) can be expressed in terms of the recursion polynomials evaluated at \(x=0\) as follows:

$$\begin{aligned} \alpha _{3n+1}&=-\frac{B_{n+1}(0)}{B_{n}(0)}, \end{aligned}$$
(25)
$$\begin{aligned} \alpha _{3n+2}&=-\frac{A^{(1)}_{n}(0)}{A^{(1)}_{n+1}(0)},&&\text {where } 1+\nu \alpha _2=0 \text { is required,} \end{aligned}$$
(26)
$$\begin{aligned} \alpha _{3n+3}&= -\frac{A^{(1)}_{n}(0)A^{(2)}_{n+1}(0)-A^{(1)}_{n+1}(0)A^{(2)}_{n}(0)}{A^{(1)}_{n+1}(0) A^{(2)}_{n+2}(0)-A^{(1)}_{n+2}(0)A^{(2)}_{n+1}(0)} \frac{A^{(1)}_{n+2}(0)}{A^{(1)}_{n+1}(0)}. \end{aligned}$$
(27)

The relation

$$\begin{aligned} \begin{pmatrix} \alpha _{3n+2}+\alpha _{3n+3}\\ \alpha _{3n+5}\alpha _{3n+3} \end{pmatrix} =-\begin{pmatrix} A^{(1)}_{n+1}(0) & A^{(1)}_{n+2}(0)\\ A^{(2)}_{n+1}(0) & A^{(2)}_{n+2}(0) \end{pmatrix}^{-1} \begin{pmatrix} A^{(1)}_{n}(0)\\ A^{(2)}_{n}(0) \end{pmatrix} \end{aligned}$$
(28)

is satisfied as well.

Proof

Equation (21) and the fact that \({\hat{B}}(0)=0\) give \(UB(0)=0\). Hence, we get \(\alpha _{3n+1}B_n(0)+B_{n+1}(0)=0\), and (25) follows. Now, as \({\hat{A}}^{(2)}= A^{(1)}L_1\) and, by Proposition 8, \({\hat{A}}^{(2)}(0)=0\), we get \(A^{(1)}(0)L_1=0\). Hence, \(A_{n}^{(1)}(0)+A_{n+1}^{(1)}(0)\alpha _{3n+2}=0\) and we find (26). To prove (28) we observe that

$$\begin{aligned} A^{(1)}_n(0)+(\alpha _{3n+2}+\alpha _{3n+3})A^{(1)}_{n+1}(0)+\alpha _{3n+5}\alpha _{3n+3}A^{(1)}_{n+2}(0)&=0,\\ A^{(2)}_n(0)+(\alpha _{3n+2}+\alpha _{3n+3})A^{(2)}_{n+1}(0)+\alpha _{3n+5}\alpha _{3n+3}A^{(2)}_{n+2}(0)&=0, \end{aligned}$$

so that

$$\begin{aligned} \begin{pmatrix} A^{(1)}_{n+1}(0) & A^{(1)}_{n+2}(0)\\ A^{(2)}_{n+1}(0) & A^{(2)}_{n+2}(0) \end{pmatrix} \begin{pmatrix} \alpha _{3n+2}+\alpha _{3n+3}\\ \alpha _{3n+5}\alpha _{3n+3} \end{pmatrix} =-\begin{pmatrix} A^{(1)}_{n}(0)\\ A^{(2)}_{n}(0) \end{pmatrix}, \end{aligned}$$

and Eq. (28) follows.

This equation implies component-wise the following relations

$$\begin{aligned}&\alpha _{3n+2}+\alpha _{3n+3} =-\frac{A^{(1)}_{n}(0)A^{(2)}_{n+2}(0)-A^{(1)}_{n+2}(0)A^{(2)}_{n}(0)}{A^{(1)}_{n+1}(0) A^{(2)}_{n+2}(0)-A^{(1)}_{n+2}(0)A^{(2)}_{n+1}(0)}, \\&\alpha _{3n+5}\alpha _{3n+3}=-\frac{A^{(1)}_{n+1}(0)A^{(2)}_{n}(0)-A^{(1)}_{n}(0)A^{(2)}_{n+1}(0)}{A^{(1)}_{n+1}(0) A^{(2)}_{n+2}(0)-A^{(1)}_{n+2}(0)A^{(2)}_{n+1}(0)}. \end{aligned}$$

Thus, we get Eq. (27). \(\square \)
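Theorem 2 lends itself to a direct numerical check; a sketch (NumPy assumed) starting from a positive sequence \(\alpha _k\), evaluating the recursion polynomials at \(x=0\) with \(\nu =-1/\alpha _2\), and recovering the \(\alpha _k\) from (25), (26) and (27):

```python
import numpy as np

rng = np.random.default_rng(7)
N = 8
alpha = np.concatenate(([0.0], rng.uniform(0.5, 2.0, 3 * N + 1)))  # alpha[k] = alpha_k
L1, L2 = np.eye(N + 1), np.eye(N + 1)
U = np.zeros((N + 1, N + 1))
for n in range(N + 1):
    U[n, n] = alpha[3 * n + 1]
    if n < N:
        U[n, n + 1] = 1.0
        L1[n + 1, n] = alpha[3 * n + 2]
        L2[n + 1, n] = alpha[3 * n + 3]
T = L1 @ L2 @ U
cc, bb, aa = np.diag(T), np.diag(T, -1), np.diag(T, -2)   # c_n, b_{n+1}, a_{n+2}

# type II polynomials at x = 0 via the recurrence (6)
B = [1.0, -cc[0], cc[0] * cc[1] - bb[0]]
for n in range(2, N):
    B.append(-cc[n] * B[n] - bb[n - 1] * B[n - 1] - aa[n - 2] * B[n - 2])

def type_I_at_zero(A0, A1, n_max):
    """type I values at x = 0: A_2 from the first relation, then (7)."""
    A = [A0, A1, (-cc[0] * A0 - bb[0] * A1) / aa[0]]
    for n in range(3, n_max + 1):
        A.append((-A[n - 1] * bb[n - 2] - A[n - 2] * cc[n - 2] - A[n - 3]) / aa[n - 2])
    return A

nu = -1.0 / alpha[2]                   # the normalization 1 + nu*alpha_2 = 0
A1v = type_I_at_zero(1.0, nu, N)       # A^(1)(0)
A2v = type_I_at_zero(0.0, 1.0, N)      # A^(2)(0)

for n in range(3):
    assert np.isclose(alpha[3 * n + 1], -B[n + 1] / B[n])                    # (25)
    assert np.isclose(alpha[3 * n + 2], -A1v[n] / A1v[n + 1])                # (26)
    det = A1v[n + 1] * A2v[n + 2] - A1v[n + 2] * A2v[n + 1]
    num = A1v[n] * A2v[n + 1] - A1v[n + 1] * A2v[n]
    assert np.isclose(alpha[3 * n + 3], -(num / det) * (A1v[n + 2] / A1v[n + 1]))  # (27)
print("alphas recovered from values at the origin")
```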

With the previous identification we are ready to show the complete correspondence of the described Darboux transformations of the oscillatory banded Hessenberg matrix T with Christoffel perturbations of the corresponding pair of positive Lebesgue–Stieltjes measures \(({\text {d}}\psi _1,{\text {d}}\psi _2)\).

Theorem 3

(Darboux vs Christoffel transformations) For \(\alpha _2=-\frac{1}{\nu }>0\), the multiple orthogonal polynomial sequences \(\big \{{ {\tilde{B}}}_n,{\hat{A}}^{(1)}_n,{\tilde{A}}^{(2)}_n\big \}_{n=0}^\infty \) and \(\big \{\tilde{ {\tilde{B}}}_n,\tilde{ {\tilde{A}}}^{(1)}_n, \tilde{ {\tilde{A}}}^{(2)}_n\big \}_{n=0}^\infty \) correspond to the Christoffel transformations given in [4, Theorems 4 & 6] of the multiple orthogonal polynomial sequence \(\{ B_n, A^{(1)}_n, A^{(2)}_n\}_{n=0}^\infty \). If the original couple of Lebesgue–Stieltjes measures is \(({\text {d}}\psi _1,{\text {d}}\psi _2)\), then the corresponding transformed pairs of measures are \(({\text {d}}\psi _2,x{\text {d}}\psi _1)\) and \((x{\text {d}}\psi _1,x{\text {d}}\psi _2)\), respectively.

Proof

Recalling (26), that \(c_n=\alpha _{3n+1}+\alpha _{3n}+\alpha _{3n-1}\) and that \(a_{n+1}=\alpha _{3n+2}\alpha _{3n}\alpha _{3n-2}\) we write

$$\begin{aligned} \alpha _{3n+1}+\alpha _{3n}&=\frac{A^{(1)}_{n-1}(0)}{A^{(1)}_{n}(0)}+c_n,&\alpha _{3n}\alpha _{3n-2}&=-\frac{A^{(1)}_{n+1}(0)}{A^{(1)}_{n}(0)} a_{n+1}. \end{aligned}$$

Then, using Theorem 2 and the first equation in (24) we get

$$\begin{aligned} {\tilde{B}}_n&=\frac{1}{x}\bigg (B_{n+1}+\Big (\frac{A^{(1)}_{n-1}(0)}{A^{(1)}_{n}(0)}+c_n\Big )B_n-\frac{A^{(1)}_{n+1}(0)}{A^{(1)}_{n}(0)} a_{n+1}B_{n-1}\bigg ), \\ {\hat{A}}^{(1)}_n&=A^{(2)}_n-\frac{A^{(1)}_{n}(0)}{A^{(1)}_{n+1}(0)}A^{(2)}_{n+1},\\ {\tilde{A}}^{(2)}_n&=\frac{1}{x}\bigg (A^{(1)}_n-\frac{A^{(1)}_{n}(0)}{A^{(1)}_{n+1}(0)}A^{(1)}_{n+1}\bigg ). \end{aligned}$$

These three equations are the Christoffel formulas in [4, Theorem 4] for the permuting Christoffel transformation \(({\text {d}}\psi _1,{\text {d}}\psi _2)\rightarrow ({\text {d}}\psi _2,x{\text {d}}\psi _1)\). Also, using again Theorem 2, the second equation in (24) and (28), we get

$$\begin{aligned} \tilde{{\tilde{B}}}_n&=\frac{1}{x}\Big (B_{n+1}-\frac{B_{n+1}(0)}{B_{n}(0)}B_n\Big ),\\ \tilde{{\tilde{A}}}^{(a)}_n&=\frac{1}{x}\Big (A^{(a)}_n-\frac{A^{(1)}_{n}(0)A^{(2)}_{n+2}(0)-A^{(1)}_{n+2}(0)A^{(2)}_{n}(0)}{A^{(1)}_{n+1}(0) A^{(2)}_{n+2}(0)-A^{(1)}_{n+2}(0)A^{(2)}_{n+1}(0)}A^{(a)}_{n+1}-\frac{A^{(1)}_{n+1}(0)A^{(2)}_{n}(0)-A^{(1)}_{n}(0)A^{(2)}_{n+1}(0)}{A^{(1)}_{n+1}(0) A^{(2)}_{n+2}(0)-A^{(1)}_{n+2}(0)A^{(2)}_{n+1}(0)}A^{(a)}_{n+2}\Big ),&a&\in \{1,2\}. \end{aligned}$$

These three equations are the Christoffel formulas in [4, Theorem 6] for the Christoffel transformation \(({\text {d}}\psi _1,{\text {d}}\psi _2)\rightarrow (x{\text {d}}\psi _1,x{\text {d}}\psi _2)\). \(\square \)

4 Conclusions and outlook

In this paper we have discussed examples of structured tetradiagonal matrices of oscillatory type connected to two families of multiple orthogonal polynomials, and the possibility of their having a positive bidiagonal factorization. Moreover, we have shown the relation between Darboux transformations of these matrices and Christoffel formulas for Christoffel perturbations of the corresponding multiple orthogonal polynomials.

Other open questions are:

  1. (i)

    What happens when the banded recursion matrix has several superdiagonals as well as subdiagonals? What about the corresponding Darboux transformations?

  2. (ii)

    Chebyshev (T) systems appear in [9] in relation with influence kernels and oscillatory matrices. Is there any connection between the AT property and the oscillation of the matrix or some submatrix of it?