1 Introduction

Copositive programming gives a common framework to formulate many difficult optimization problems as convex conic ones. In fact, many \(\text {NP}\)-hard problems are known to have such reformulations (see for example the surveys [4, 11]). All the difficulty of these problems appears to be “converted” into the difficulty of understanding the cone of copositive matrices \({\mathcal {COP}}^{n}\) which consists of all symmetric \(n \times n\) matrices \(B \in {\mathcal {S}}^n\) with \(x^\mathsf{T} B x \ge 0\) for all \(x \in {\mathbb {R}}^n_{\ge 0}\). Its dual cone is the cone

$$\begin{aligned} \begin{aligned} {\mathcal {CP}}^{n}\; = \;&{{\,\mathrm{cone}\,}}\{xx^\mathsf{T} : x \in {\mathbb {R}}^n_{\ge 0}\}\\ \; = \;&\left\{ \sum _{i=1}^m \alpha _i x_i x_i^\mathsf{T} : m \in {\mathbb {N}}, \alpha _i \in {\mathbb {R}}_{\ge 0}, x_i \in {\mathbb {R}}^n_{\ge 0}, i = 1, \ldots , m\right\} \end{aligned} \end{aligned}$$

of completely positive \(n\times n\) matrices. Therefore, it seems no surprise that many basic questions about this cone are still open and appear to be very difficult.

One important problem is to find an algorithmic test deciding whether or not a given symmetric matrix A is completely positive. If possible one would like to obtain a certificate for either \(A \in {\mathcal {CP}}^{n}\) or \(A \not \in {\mathcal {CP}}^{n}\). Dickinson and Gijben [8] showed that this (strong) membership problem is \(\text {NP}\)-hard.

In view of the definitions, the most natural certificate for \(A \in {\mathcal {CP}}^{n}\) is a cp-factorization

$$\begin{aligned} A = \sum _{i=1}^m \alpha _i x_i x_i^\mathsf{T} \quad \text {with} \quad m \in {\mathbb {N}}, \alpha _i \in {\mathbb {R}}_{\ge 0}, \; x_i \in {\mathbb {R}}^n_{\ge 0}, \; i = 1, \ldots , m. \end{aligned}$$
(1)

For \(A \not \in {\mathcal {CP}}^{n}\) it is natural to give a separating hyperplane defined by a matrix \(B \in {\mathcal {COP}}^{n}\) so that the inner product of A and B satisfies \(\langle B, A \rangle < 0\).

From the algorithmic side, different ideas have been proposed. One can divide the relevant literature according to two complementary approaches:

  (1) Numerical methods which are practical but can “only” find approximate cp-factorizations. The papers by Jarre and Schmallowsky [20], Nie [25], Sponsel and Dür [30], Elser [13], and Groetzner and Dür [16] fall into this category.

  (2) Theoretical methods which can compute exact cp-factorizations in finitely many algorithmic steps. The factorization method of Anstreicher, Burer, and Dickinson [7, Section 3.3] uses the ellipsoid method and works for all matrices which have a rational cp-factorization and lie in the interior of the cone \({\mathcal {CP}}^{n}\). Berman and Rothblum [1] use quantifier elimination for first-order formulae over the reals to compute the cp-rank of a given matrix, that is, the minimum number m of vectors used in a cp-factorization (1).

In this paper, in Sect. 3, we describe a new procedure that is based on pivoting like the simplex algorithm. To define the pivoting we apply the notion of the copositive minimum which we introduce in Sect. 2. Our algorithm (Algorithm 1) works for all matrices in the rational cone

$$\begin{aligned} \begin{aligned} \widetilde{\mathcal {CP}}^{n}\; = \;&{{\,\mathrm{cone}\,}}_{{\mathbb {Q}}}\{xx^\mathsf{T} : x \in {\mathbb {Q}}^n_{\ge 0}\} \\ \; = \;&\left\{ \sum _{i=1}^m \alpha _i x_i x_i^\mathsf{T} : m \in {\mathbb {N}}, \alpha _i \in {\mathbb {Q}}_{\ge 0}, x_i \in {\mathbb {Q}}^n_{\ge 0}, i = 1, \ldots , m\right\} . \end{aligned} \end{aligned}$$

Moreover, we conjecture that a variant of our algorithm (Procedure 3) always computes a separating hyperplane if the input matrix is not completely positive. In fact, our procedure works for matrices with coefficients in any computable subfield F of the real numbers; in that case the coefficients \(\alpha _i\) in the formula above belong to \(F_{\ge 0}\) and the algorithm proceeds just as in the rational case considered in this paper.

Our algorithm uses only rational numbers if the input matrix is rational and thus allows, in principle, exact computations. As a consequence, to the best of our knowledge, our algorithm is currently the only one that can find a rational cp-factorization whenever one exists. In [7] a similar result was obtained, but restricted to matrices in the interior of \({\mathcal {CP}}^{n}\). A related question is whether every rational completely positive matrix has a rational cp-factorization, see the survey [29]. In general this is open, but from the results in [7] and [10] it follows that it is true for matrices in the interior of \({\mathcal {CP}}^{n}\).

If the input matrix A is integral, one can also ask whether it admits an integral cp-factorization, i.e. a cp-factorization of the form \(A = \sum _{i = 1}^m x_i x_i^\mathsf{T}\) with \(x_i \in {\mathbb {Z}}_{\ge 0}^n\) for all \(i = 1, \ldots , m\). For \(n \ge 3\) it is known that there are integral matrices \(A \in {\mathcal {CP}}^{n}\) which do not have an integral cp-factorization, see [2, Theorem 6.4]. For \(n = 2\) it was conjectured by Berman and Shaked-Monderer [2, Conjecture 6.13] that every integral matrix \(A \in {\mathcal {CP}}^{2}\) possesses an integral cp-factorization. This conjecture was recently proved by Laffey and Šmigoc [22]. In Sect. 4 we show that our simplex algorithm can be used to give a short, alternative proof of this result.

In Sect. 5 we describe how an implementation of our algorithm performs on some examples.

2 The copositive minimum and copositive perfect matrices

2.1 Copositive minimum

By \({{\mathcal {S}}}^{n}\) we denote the Euclidean vector space of symmetric \(n\times n\) matrices with inner product \(\langle A, B \rangle = {{\,\mathrm{Trace}\,}}(AB) = \sum _{i,j=1}^n A_{ij}B_{ij}\). With respect to this inner product we have the following duality relations between the cone of copositive matrices and the cone of completely positive matrices

$$\begin{aligned} {\mathcal {COP}}^{n}= ({\mathcal {CP}}^{n})^* = \{B \in {\mathcal {S}}^n : \langle A, B \rangle \ge 0 \text { for all } A \in {\mathcal {CP}}^{n}\}, \end{aligned}$$

and

$$\begin{aligned} {\mathcal {CP}}^{n}= ({\mathcal {COP}}^{n})^*. \end{aligned}$$

So, in order to show that a given symmetric matrix A is not completely positive, it suffices to find a copositive matrix \(B \in {\mathcal {COP}}^{n}\) with \(\langle B, A\rangle < 0\). We call B a separating witness for \(A \not \in {\mathcal {CP}}^{n}\) in this case, because the linear hyperplane orthogonal to B separates A and \({\mathcal {CP}}^{n}\).

Using the notation B[x] for \(x^\mathsf{T} B x = \langle B, xx^\mathsf{T} \rangle \), we obtain

$$\begin{aligned} {\mathcal {COP}}^{n}= \{B \in {\mathcal {S}}^n : B[x] \ge 0 \text { for all } x\in {\mathbb {R}}^n_{\ge 0}\}. \end{aligned}$$

Obviously, the cone of positive semidefinite matrices \({{\mathcal {S}}}^{n}_{\ge 0}\), whose interior is the open cone of positive definite matrices \({{\mathcal {S}}}^{n}_{>0}\), lies between the completely positive cone and the copositive cone: \({\mathcal {CP}}^{n}\subseteq {{\mathcal {S}}}^{n}_{\ge 0}\subseteq {\mathcal {COP}}^n\).

Definition 2.1

For a symmetric matrix \(B\in {{\mathcal {S}}}^{n}\) we define the copositive minimum as

$$\begin{aligned} {{\,\mathrm{min_{\mathcal {COP}}}\,}}(B) = \inf \left\{ B[v] : v \in {{\mathbb {Z}}}^n_{\ge 0}{\setminus }\{0\} \right\} , \end{aligned}$$

and we denote the set of vectors attaining it by

$$\begin{aligned} {{\,\mathrm{Min_{\mathcal {COP}}}\,}}(B) = \left\{ v \in {{\mathbb {Z}}}^n_{\ge 0} : B[v] = {{\,\mathrm{min_{\mathcal {COP}}}\,}}(B) \right\} . \end{aligned}$$
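For example, the identity matrix \(I_n\) lies in the interior of \({\mathcal {COP}}^{n}\) and satisfies \({{\,\mathrm{min_{\mathcal {COP}}}\,}}(I_n) = 1\) with \({{\,\mathrm{Min_{\mathcal {COP}}}\,}}(I_n) = \{e_1, \ldots , e_n\}\): indeed, \(I_n[v] = \sum _{i=1}^n v_i^2 \ge 1\) for every \(v \in {\mathbb {Z}}^n_{\ge 0} {\setminus } \{0\}\), with equality exactly for the standard unit vectors.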

The following lemma shows that matrices in the interior of the cone of copositive matrices attain their copositive minimum.

Lemma 2.2

Let B be a matrix in the interior of the cone of copositive matrices. Then the copositive minimum of B is strictly positive and it is attained by only finitely many vectors.

Proof

Since B is copositive, we have the inequality \({{\,\mathrm{min_{\mathcal {COP}}}\,}}(B) \ge 0\). Suppose that \({{\,\mathrm{min_{\mathcal {COP}}}\,}}(B) = 0\). Then there is a sequence \(v_i \in {{\mathbb {Z}}}_{\ge 0}^n {\setminus } \{0\}\) of pairwise distinct lattice vectors such that \(B[v_i]\) tends to zero when i tends to infinity. From the sequence \(v_i\) we construct a new sequence \(u_i\) of vectors on the unit sphere \(S^{n-1}\) by setting \(v_i = \Vert v_i\Vert u_i\). The sequence \(u_i\) belongs to the compact set \({{\mathbb {R}}}^n_{\ge 0} \cap S^{n-1}\). Thus, by taking a subsequence if necessary, we may assume that \(u_i\) converges to a point \(u\in {{\mathbb {R}}}^n_{\ge 0} \cap S^{n-1}\). The sequence of norms \(\Vert v_i\Vert \) tends to infinity since the set of lattice vectors of bounded norm is finite. Thus we get

$$\begin{aligned} 0 = \lim _{i \rightarrow \infty } B[v_i] = \lim _{i \rightarrow \infty } \Vert v_i\Vert ^2 B[u_i], \end{aligned}$$

which implies that \(B[u] = 0\), contradicting our assumption \(B \in {{\,\mathrm{int}\,}}({\mathcal {COP}}^{n})\). Hence, \({{\,\mathrm{min_{\mathcal {COP}}}\,}}(B) > 0\).

By the same argument one can show that \({{\,\mathrm{Min_{\mathcal {COP}}}\,}}(B)\) only contains finitely many vectors. \(\square \)

2.2 A locally finite polyhedron

In our previous paper [10] and in this paper the set

$$\begin{aligned} {\mathcal {R}} = \{B \in {{\mathcal {S}}}^{n}: B[v] \ge 1 \text { for all } v \in {\mathbb {Z}}^n_{\ge 0} {\setminus } \{0\} \} \end{aligned}$$

plays a central role. The set \({\mathcal {R}}\) is a locally finite polyhedron, meaning that every intersection of \({\mathcal {R}}\) with a polytope is a polytope itself. In [10, Lemma 2.3] we showed that \({\mathcal {R}}\) is contained in the interior of the cone of copositive matrices. Thus, we can rewrite \({\mathcal {R}}\) as

$$\begin{aligned} {\mathcal {R}} = \{B \in {{\mathcal {S}}}^{n}: {{\,\mathrm{min_{\mathcal {COP}}}\,}}(B) \ge 1 \}. \end{aligned}$$
(2)

Note also that [10, Lemma 2.3] together with Lemma 2.2 implies

$$\begin{aligned} {{\,\mathrm{cone}\,}}\mathcal {R} \setminus \{0\} = {{\,\mathrm{int}\,}}{\mathcal {COP}}^n. \end{aligned}$$
(3)

The following theorem gives a tight outer approximation of the cone of completely positive matrices in terms of the boundary structure (its 1-skeleton to be precise) of the convex set \({\mathcal {R}}\). Similarly, Yıldırım [32] discusses uniform polyhedral approximations of the cone of copositive matrices.

Theorem 2.3

We have

$$\begin{aligned} \begin{aligned} {\mathcal {CP}}^{n}= \{ Q\in {{\mathcal {S}}}^{n}: \langle Q, B \rangle \ge 0&\text { for all vertices and for all generators}\\&\hbox { of extreme rays } B \hbox { of } {\mathcal {R}} \}. \end{aligned} \end{aligned}$$
(4)

Proof

We have

$$\begin{aligned} {\mathcal {CP}}^{n}= ({\mathcal {COP}}^{n})^* = ({{\,\mathrm{int}\,}}({\mathcal {COP}}^{n}))^* = ({{\,\mathrm{cone}\,}}{\mathcal {R}})^*, \end{aligned}$$

where the identity \(K^* = ({{\,\mathrm{int}\,}}(K))^*\) is generally true for full dimensional convex cones and the last identity is (3). Since \({\mathcal {R}}\) is a locally finite polyhedron, \(({{\,\mathrm{cone}\,}}{\mathcal {R}})^*\) is equal to the right hand side of (4). \(\square \)

2.3 A linear program for finding a rational cp-factorization

In [10, Lemma 2.4] we showed that for \(A\in {{\,\mathrm{int}\,}}({\mathcal {CP}}^{n})\) and all sufficiently large \(\lambda >0\) the set

$$\begin{aligned} {\mathcal {P}}(A,\lambda ) = \{B\in {\mathcal {R}} : \langle A, B \rangle \le \lambda \} \end{aligned}$$

is a full-dimensional polytope.

In principle (cf. [10, Proof of Theorem 1.1]), this gives a way to compute a cp-factorization for a given matrix \(A\in {{\,\mathrm{int}\,}}({\mathcal {CP}}^{n})\) by solving the linear program

$$\begin{aligned} \min \left\{ \langle A, B \rangle : B \in {\mathcal {P}}(A,\lambda )\right\} . \end{aligned}$$
(5)

This is because the minimum is attained at a vertex \(B^*\) of \({\mathcal {P}}(A,\lambda )\). Hence, due to the minimality of \(\langle A, B^* \rangle \), the matrix A is contained in the (inner) normal cone

$$\begin{aligned} {{\mathcal {V}}}(B^*) = {{\,\mathrm{cone}\,}}\left\{ vv^\mathsf{T} : v \in {{\,\mathrm{Min_{\mathcal {COP}}}\,}}B^* \right\} \end{aligned}$$
(6)

of \({\mathcal {R}}\) at \(B^*\). For a rational matrix \(A\in {{\,\mathrm{int}\,}}({\mathcal {CP}}^{n})\) we obtain a rational cp-factorization in this way, that is, a decomposition of the form

$$\begin{aligned} A = \sum _{i=1}^m \alpha _i v_i v_i^\mathsf{T} \quad \text {with } \alpha _i \in {{\mathbb {Q}}}_{\ge 0 }\text { and } v_i \in {{\mathbb {Z}}}^n_{\ge 0}, \quad \text {for } i = 1, \ldots , m. \end{aligned}$$
(7)

To find this factorization, we apply Carathéodory’s theorem (see for example [27, Corollary 7.1i]) and choose a subset \(v_1, \ldots , v_m \in {{\,\mathrm{Min_{\mathcal {COP}}}\,}}B^*\) such that \(v_iv_i^\mathsf{T}\) are linearly independent and \(A \in {{\,\mathrm{cone}\,}}\{v_i v_i^\mathsf{T} : i = 1, \ldots , m\}\). So we can find unique non-negative rational coefficients \(\alpha _1, \ldots , \alpha _m\) giving the rational cp-factorization (7).
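Algorithmically, this Carathéodory step can be carried out in exact rational arithmetic: starting from any non-negative combination, one repeatedly eliminates a term along a linear dependence between the matrices \(v_iv_i^\mathsf{T}\). The following is a minimal sketch of this reduction (the function names are ours, not part of the implementation [18]), using sympy for exact nullspace computations.

```python
from sympy import Matrix, Rational

def flatten(v):
    """Upper triangle of v v^T, as a coordinate vector of length n(n+1)/2."""
    n = len(v)
    return [v[i] * v[j] for i in range(n) for j in range(i, n)]

def caratheodory_reduce(vectors, coeffs):
    """Given A = sum_i coeffs[i] * v_i v_i^T with rational coeffs[i] >= 0,
    prune the combination until the remaining v_i v_i^T are linearly
    independent; the surviving coefficients are then unique."""
    vectors, coeffs = list(vectors), [Rational(c) for c in coeffs]
    while True:
        support = [i for i, c in enumerate(coeffs) if c > 0]
        if not support:
            return []
        # columns of M are the flattened rank-1 matrices on the support
        M = Matrix([flatten(vectors[i]) for i in support]).T
        null = M.nullspace()
        if not null:                       # linearly independent: done
            return [(coeffs[i], vectors[i]) for i in support]
        mu = null[0]                       # a dependence between the terms
        if all(mu[k] <= 0 for k in range(len(support))):
            mu = -mu
        # largest step keeping all coefficients non-negative;
        # at least one coefficient drops to zero
        t = min(coeffs[support[k]] / mu[k]
                for k in range(len(support)) if mu[k] > 0)
        for k in range(len(support)):
            coeffs[support[k]] -= t * mu[k]
```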

However, for solving the linear program (5) one needs an explicit finite algorithmic description of the set \({\mathcal {P}}(A, \lambda )\), for example by a finite list of linear inequalities. The proof of the polyhedrality of \({\mathcal {P}}(A, \lambda )\) in [10, Lemma 2.4] relies on an indirect compactness argument (similar to the one in the proof of Lemma 2.2) which does not yield such an explicit algorithmic description. In the remainder of this paper we are therefore concerned with finding a finite list of linear inequalities.

2.4 Copositive perfect matrices

In the next step we characterize the vertices of \({\mathcal {R}}\). The following definitions and the algorithm in the following section are inspired by Voronoi’s classical algorithm for the classification of perfect positive definite quadratic forms. These can for instance be used to classify all locally densest lattice sphere packings (see for example [23] or [28]). In (6) we use the letter \({{\mathcal {V}}}\) to denote the normal cone of a vertex, as it is a generalization of the Voronoi cone used in the classical setting. In fact, our generalization of Voronoi’s work can be viewed as an example of a broader framework described by Opgenorth [26]. In analogy with Voronoi’s theory for positive definite quadratic forms we define the notion of perfectness for copositive matrices:

Definition 2.4

A copositive matrix \(B \in {{\,\mathrm{int}\,}}({\mathcal {COP}}^{n})\) is called \({\mathcal {COP}}\)-perfect if it is uniquely determined by its copositive minimum \({{\,\mathrm{min_{\mathcal {COP}}}\,}}B\) and the set \({{\,\mathrm{Min_{\mathcal {COP}}}\,}}B\) attaining it.

In other words, \(B \in {{\,\mathrm{int}\,}}({\mathcal {COP}}^{n})\) is \({\mathcal {COP}}\)-perfect if and only if it is the unique solution X of the system of linear equations

$$\begin{aligned} \langle X, vv^\mathsf{T}\rangle = {{\,\mathrm{min_{\mathcal {COP}}}\,}}B, \text { for all } v \in {{\,\mathrm{Min_{\mathcal {COP}}}\,}}B. \end{aligned}$$

Hence, \({\mathcal {COP}}\)-perfect matrices are, up to scaling, exactly the vertices of \({\mathcal {R}}\).

Lemma 2.5

\({\mathcal {COP}}\)-perfect matrices exist in all dimensions (dimension \(n = 1\) being trivial): For dimension \(n \ge 2\) the following matrix

$$\begin{aligned} Q_{{\mathsf {A}}_n} = \begin{pmatrix} 2 &{}\quad -1 &{}\quad 0 &{}\quad \ldots &{}\quad 0 \\ -1 &{}\quad 2 &{}\quad \ddots &{}\quad \ddots &{}\quad \vdots \\ 0 &{}\quad \ddots &{}\quad \ddots &{}\quad \ddots &{}\quad 0 \\ \vdots &{}\quad \ddots &{}\quad \ddots &{}\quad 2 &{}\quad -1 \\ 0 &{}\quad \ldots &{}\quad 0 &{}\quad -1 &{}\quad 2 \\ \end{pmatrix} \end{aligned}$$
(8)

is \({\mathcal {COP}}\)-perfect; \(\frac{1}{2} Q_{{\mathsf {A}}_n}\) is a vertex of \({\mathcal {R}}\).

The matrix \(Q_{{\mathsf {A}}_n}\) is also known as a Gram matrix of the root lattice \({\mathsf {A}}_n\), a very important lattice, for instance in the theory of sphere packings (see for example [5]).

Proof

The matrix \(Q_{{\mathsf {A}}_n}\) is positive definite since

$$\begin{aligned} Q_{{\mathsf {A}}_n}[x] = x_1^2 + \sum _{i=1}^{n-1} (x_i - x_{i+1})^2 + x_n^2 \end{aligned}$$

is a sum of squares and \(Q_{{\mathsf {A}}_n}[x] = 0\) if and only if \(x = 0\). Thus \(Q_{{\mathsf {A}}_n}\) lies in the interior of the copositive cone. Furthermore,

$$\begin{aligned} {{\,\mathrm{min_{\mathcal {COP}}}\,}}Q_{{\mathsf {A}}_n} = 2 \quad \text {with} \quad {{\,\mathrm{Min_{\mathcal {COP}}}\,}}Q_{{\mathsf {A}}_n} = \left\{ \sum _{i=j}^k e_i : 1 \le j \le k \le n\right\} , \end{aligned}$$

where \(e_i\) is the i-th standard unit basis vector of \({\mathbb {R}}^n\). Thus, the \({n+1 \atopwithdelims ()2}\) vectors attaining the copositive minimum have a contiguous block of 1s in their coordinates and 0s otherwise. Now it is easy to see that the rank-1 matrices

$$\begin{aligned} \left( \sum _{i=j}^k e_i\right) \left( \sum _{i=j}^k e_i\right) ^\mathsf{T}, \text { where } 1 \le j \le k \le n, \end{aligned}$$

are linearly independent and span the space of symmetric matrices which shows that \(Q_{{\mathsf {A}}_n}\) is \({\mathcal {COP}}\)-perfect. \(\square \)
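For small n, the statement about \({{\,\mathrm{min_{\mathcal {COP}}}\,}}Q_{{\mathsf {A}}_n}\) is easy to cross-check by brute force. The following sketch (ours, independent of the implementation [18]) enumerates all vectors in \({\mathbb {Z}}^n_{\ge 0}\) with entries up to a small bound, which suffices here since \(Q_{{\mathsf {A}}_n}[v]\) grows with the entries of v.

```python
import itertools
import numpy as np

def q_an(n):
    """The tridiagonal Gram matrix Q_{A_n} from (8)."""
    Q = 2 * np.eye(n, dtype=int)
    for i in range(n - 1):
        Q[i, i + 1] = Q[i + 1, i] = -1
    return Q

def cop_min_brute_force(B, box=4):
    """min_COP(B) and Min_COP(B), restricted to vectors with entries <= box.
    Only a cross-check for small examples, not a general algorithm."""
    best, argmin = None, []
    for v in itertools.product(range(box + 1), repeat=B.shape[0]):
        if not any(v):
            continue                      # skip the zero vector
        val = np.array(v) @ B @ np.array(v)
        if best is None or val < best:
            best, argmin = val, [v]
        elif val == best:
            argmin.append(v)
    return best, argmin

# For n = 3: minimum 2, attained by the 6 = binom(4,2) vectors with a
# contiguous block of ones, e.g. (1, 1, 0) and (0, 1, 1).
print(cop_min_brute_force(q_an(3)))
```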

3 Algorithms

In this section we show how one can solve the linear program (5). Our algorithm is similar to the simplex algorithm for linear programming. It walks along a path of subsequently constructed \({\mathcal {COP}}\)-perfect matrices, which are vertices of the polyhedral set \({\mathcal {R}}\) that are connected by edges of \({\mathcal {R}}\).

We start with a simple version assuming that the input matrix lies in \(\widetilde{\mathcal {CP}}^{n}\). Of course, this assumption usually cannot be checked easily beforehand, and the rational cp-factorization is only given as the output of the algorithm. In this sense, the algorithm gets the promise that the input matrix possesses a rational cp-factorization. Promise problems of this kind are common in theoretical computer science; for practical purposes we propose an extended procedure at the end of this section, see Procedure 3.

[Algorithm 1: the simplex-like algorithm for rational cp-factorization; its steps are described in Sect. 3.1.]

3.1 Description and analysis of the algorithm

In the following we describe the steps of Algorithm 1 in more detail:

In Step 1, we can choose for instance the initial vertex \(P=\frac{1}{2}Q_{{\mathsf {A}}_n}\) of \({{\mathcal {R}}}\) with \(Q_{{\mathsf {A}}_n}\) as in (8). Then the algorithm subsequently constructs vertices of \({{\mathcal {R}}}\).

In Step 2 we determine whether A lies in the polyhedral cone \({{\mathcal {V}}}(P)\). For this we consider all \(v\in {{\,\mathrm{Min_{\mathcal {COP}}}\,}}(P)\): they give the generators \(vv^\mathsf{T}\) of the polyhedral cone \({{\mathcal {V}}}(P)\) and, at the same time, the linear inequalities defining the dual cone \(({{\mathcal {V}}}(P))^*\). Testing \(A\in {{\mathcal {V}}}(P)\) can then be done by solving an auxiliary linear program

$$\begin{aligned} \min \left\{ \langle A,Q \rangle : Q \in ({{\mathcal {V}}}(P))^*\right\} . \end{aligned}$$
(9)

The minimum equals 0 if and only if A lies in \({{\mathcal {V}}}(P)\); otherwise the linear program is unbounded, and a direction of unboundedness yields the ray R used in Step 2(a). If \(A \in {{\mathcal {V}}}(P)\), then we can find non-negative coefficients \(\lambda _v\), with \(v \in {{\,\mathrm{Min_{\mathcal {COP}}}\,}}(P)\), to get a cp-factorization

$$\begin{aligned} A = \sum _{v \in {{\,\mathrm{Min_{\mathcal {COP}}}\,}}(P)} \lambda _v vv^\mathsf{T}. \end{aligned}$$
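A floating-point sketch of this membership test (ours, using scipy; the actual implementation [18] works exactly) phrases Step 2 directly as the feasibility problem of finding \(\lambda _v \ge 0\) with \(\sum _v \lambda _v vv^\mathsf{T} = A\):

```python
import numpy as np
from scipy.optimize import linprog

def in_voronoi_cone(A, min_vectors):
    """Test A in V(P) = cone{ v v^T : v in Min_COP(P) } by searching for
    lambda >= 0 with sum_v lambda_v v v^T = A. Returns the coefficient
    vector if A lies in the cone, and None otherwise."""
    n = A.shape[0]
    iu = np.triu_indices(n)    # one linear equation per upper-triangle entry
    M = np.column_stack([np.outer(v, v)[iu] for v in min_vectors])
    res = linprog(c=np.zeros(M.shape[1]), A_eq=M, b_eq=A[iu],
                  bounds=[(0, None)] * M.shape[1])
    return res.x if res.status == 0 else None
```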

Using (an algorithmic version of) Carathéodory’s theorem we can choose in Step 3 a subset \(\{v_1, \ldots , v_m\} \subseteq {{\,\mathrm{Min_{\mathcal {COP}}}\,}}(P)\) so that we get a rational cp-factorization \(A = \sum _{i = 1}^m \alpha _i v_iv_i^\mathsf{T}\) with non-negative rational numbers \(\alpha _i\); see Sect. 2.3.

If the minimum of the auxiliary linear program (9) is negative, we can find in Step 2(a) a generator R of an extreme ray of \(({{\mathcal {V}}}(P))^{*}\) with \(\langle A,R \rangle < 0\). Here, several choices of R with \(\langle A,R \rangle < 0\) may be possible, and the performance depends on the choices made in this “pivot step”. A good heuristic for a “pivot rule” seems to be the choice of R with \(\langle A,R / \Vert R\Vert \rangle \) minimal, where \(\Vert R\Vert ^2 = \langle R, R\rangle \). A random choice of R among the extreme rays of \(({{\mathcal {V}}}(P))^{*}\) with \(\langle A,R \rangle < 0\) also seems to perform quite well. With the right choices of pivots R in Step 2(a), Algorithm 1 always terminates, as shown by Theorem 3.1 below.

In Step 2(b), Algorithm 2 (see Sect. 3.2) is used to determine a new contiguous \({\mathcal {COP}}\)-perfect matrix N of P in direction \(R\not \in {\mathcal {COP}}^{n}\), that is, a vertex of \({\mathcal {R}}\) connected to P via an edge in direction R. Note that such a vertex exists (in particular, \({\mathcal {R}}\) is not unbounded in the direction R) under the assumption \(R \not \in {\mathcal {COP}}^{n}\), because \({\mathcal {R}} \subseteq {{\,\mathrm{int}\,}}({\mathcal {COP}}^{n})\), see [10, Lemma 2.3]. We can exclude \(R \in {\mathcal {COP}}^{n}\) here, since together with \(\langle A,R \rangle < 0\) it would contradict the promise \(A\in \widetilde{\mathcal {CP}}^{n}\) on the input. Note also that as a byproduct of Algorithm 2 we obtain the generators of the cone \({\mathcal {V}}(N)\).

Finally, we observe that since \(N = P + \lambda R\) for some \(\lambda > 0\) and \(\langle A,R \rangle < 0\), we have \(\langle A,N \rangle < \langle A,P \rangle \) in each iteration (Step 2) of the algorithm.
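Putting the pieces together, the main loop of Algorithm 1 can be sketched as follows. The helpers in_voronoi_cone and caratheodory_reduce are the sketches given earlier; copositive_min_vectors, pivot_ray and contiguous_perfect are hypothetical stand-ins for the subroutines of Sect. 3.4, Step 2(a) and Algorithm 2, and none of this is the actual interface of the implementation [18].

```python
def cp_factorize(A, P0):
    """Simplex-like walk of Algorithm 1 (a sketch). P0 is a COP-perfect
    starting vertex of R, e.g. (1/2) * Q_{A_n} from (8)."""
    P = P0
    while True:
        mins = copositive_min_vectors(P)    # Min_COP(P), cf. Sect. 3.4
        lam = in_voronoi_cone(A, mins)      # Step 2: is A in V(P)?
        if lam is not None:
            # Step 3: prune to linearly independent rank-1 terms
            return caratheodory_reduce(mins, lam)
        R = pivot_ray(A, mins)              # Step 2(a): extreme ray of
                                            # (V(P))* with <A, R> < 0
        P = contiguous_perfect(P, R)        # Step 2(b): Algorithm 2
```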

The following theorem shows that we can set up an algorithm for the promise problem.

Theorem 3.1

For \(A\in \widetilde{\mathcal {CP}}^{n}\), Algorithm 1 with suitable choices in Step 2(a) ends after finitely many iterations giving a rational cp-factorization of A.

In particular, with breadth-first-search added to Algorithm 1 we can guarantee finite termination (but this of course would be far less efficient).

Proof

For \(A\in {{\,\mathrm{int}\,}}({\mathcal {CP}}^{n}) \cap \widetilde{\mathcal {CP}}^{n}\) the assertion follows from Lemma 2.4 in [10].

So let us assume \(A\in {{\,\mathrm{bd}\,}}{\mathcal {CP}}^{n}\cap \widetilde{\mathcal {CP}}^{n}\). Note that \(\widetilde{\mathcal {CP}}^{n}\) is tessellated into the cones \({{\mathcal {V}}}(P)\) of \({\mathcal {COP}}\)-perfect matrices P. In fact, the convex hull of \(D = \{xx^\mathsf{T} : x \in {{\mathbb {Z}}}^n_{\ge 0}, x \ne 0\}\) is a locally finite polyhedral set whose facets are in one-to-one correspondence with the \({\mathcal {COP}}\)-perfect matrices (see [26]). For any \(A\in \widetilde{\mathcal {CP}}^{n}\), the ray \(\{ \lambda A : \lambda \ge 0\}\) meets (at least) one facet of \({{\,\mathrm{conv}\,}}D\), and A lies in the cone \({{\mathcal {V}}}(P)\) of the corresponding \({\mathcal {COP}}\)-perfect matrix P.

Let \(\{ R_1, R_2, \ldots \}\) be a possible sequence of generators of rays constructed in Step 2(a) of Algorithm 1. For all of these generators, the inequality \(\langle A, R_i\rangle < 0\) holds. For k such generators, the conditions \(\langle Q, R_i\rangle < 0\) for \(i=1,\ldots , k\) are satisfied not only for \(Q=A\), but also for all Q in an \(\varepsilon \)-neighborhood of A (with a suitable \(\varepsilon \) depending on k). For any k, this neighborhood also contains points of \({{\,\mathrm{int}\,}}({{\mathcal {V}}}(P)) \subseteq {{\,\mathrm{int}\,}}({\mathcal {CP}}^{n})\). For these interior points Q, however, Algorithm 1 finishes after finitely many steps (when checking for \(Q \in {{\mathcal {V}}}(P)\) in Step 2). Thus, for a suitable finite sequence of choices in Step 2(a), the algorithm also ends for A. \(\square \)

3.2 Computing contiguous \({\mathcal {COP}}\)-perfect matrices

Our algorithm for computing contiguous \({\mathcal {COP}}\)-perfect matrices is inspired by a corresponding algorithm for computing contiguous perfect positive definite quadratic forms which is a subroutine in Voronoi’s classical algorithm. The following algorithm is similar to [9, Section 6, Erratum to algorithm of Section 2.3] and [31].

[Algorithm 2: computing the contiguous \({\mathcal {COP}}\)-perfect matrix in direction R; its steps are analyzed below.]

Computationally the most involved parts of Algorithm 2 are checking if a matrix lies in the interior of the cone of copositive matrices, and if so, computing its copositive minimum \({{\,\mathrm{min_{\mathcal {COP}}}\,}}\) and all vectors \({{\,\mathrm{Min_{\mathcal {COP}}}\,}}\) attaining it. We discuss these tasks in Sects. 3.3 and 3.4.

In the while loop (Step 2) of Algorithm 2, a lower bound l and an upper bound u for the desired value \(\lambda \) are computed, such that \(P+ l R\) and \(P + u R\) lie in \({{\,\mathrm{int}\,}}({\mathcal {COP}}^{n})\) and satisfy

$$\begin{aligned} {{\,\mathrm{min_{\mathcal {COP}}}\,}}(P+ l R)={{\,\mathrm{min_{\mathcal {COP}}}\,}}(P) > {{\,\mathrm{min_{\mathcal {COP}}}\,}}(P + u R). \end{aligned}$$

In other words, \(P + lR\) lies on the edge \([P,N] \subseteq {\mathcal {R}}\), but \(P + uR\) lies outside of \({\mathcal {R}}\).

The set S in Step 3 contains all vectors \(v \in {\mathbb {Z}}^n_{\ge 0}\) defining a hyperplane \(\{X \in {\mathcal {S}}^n : \langle X, vv^\mathsf{T} \rangle = 1\}\) that separates \({\mathcal {R}}\) and \(P + uR\); equivalently, all \(v \in {\mathbb {Z}}^n_{\ge 0}\) with \((P + uR)[v] < 1\).

If \(v \in S\), then \(R[v] < 0\) (since \(P[v] \ge 1\) while \((P + uR)[v] < 1\)) and

$$\begin{aligned} (P+\lambda R)[v] = P[v] + \min _{w \in S} \left( \frac{1-P[w]}{R[w]}\right) R[v] \ge 1. \end{aligned}$$

If \(v \not \in S\) and \(R[v] \ge 0\), then clearly \((P + \lambda R)[v] \ge 1\), since \(\lambda \ge 0\). Finally, if \(v \not \in S\) and \(R[v] < 0\), then since \(\lambda \le u\), we have

$$\begin{aligned} (P+\lambda R)[v] \ge (P+u R)[v] \ge 1. \end{aligned}$$

Therefore, the choice of \(\lambda \) in Step 4 guarantees that \(P + \lambda R\) is the contiguous \(\mathcal {COP}\)-perfect matrix of P: we have \({{\,\mathrm{min_{\mathcal {COP}}}\,}}(P + \lambda R) = 1\), but \({{\,\mathrm{Min_{\mathcal {COP}}}\,}}(P + \lambda R) \not \subseteq {{\,\mathrm{Min_{\mathcal {COP}}}\,}}(P)\).

In practice, the set S in Step 3 may be too big for a complete enumeration. In this case partial enumerations can help: they successively produce smaller values of u, which are not necessarily equal to the desired \(\lambda \); see [31].
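Once S is available, the computation of \(\lambda \) in Step 4 is a one-liner in exact rational arithmetic; a small sketch, assuming P and R are given as rational matrices and S as a list of integer vectors:

```python
from fractions import Fraction

def quad(Q, v):
    """Q[v] = v^T Q v, evaluated exactly over the rationals."""
    n = len(v)
    return sum(Fraction(Q[i][j]) * v[i] * v[j]
               for i in range(n) for j in range(n))

def step_length(P, R, S):
    """Step 4 of Algorithm 2 (a sketch): the largest lambda such that
    (P + lambda * R)[v] >= 1 still holds for all v in S; note that
    R[v] < 0 for every v in S."""
    return min((1 - quad(P, w)) / quad(R, w) for w in S)
```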

3.3 Checking copositivity

From a complexity point of view, checking whether or not a given symmetric matrix is copositive is known to be co-NP-complete by a result of Murty and Kabadi [24].

Nevertheless, in our algorithms we need to check whether or not a given symmetric matrix lies in the cone of copositive matrices (Step 2(c) of Procedure 3) or in its interior (Step 2 of Algorithm 2). This can be done with the following recursive characterization of Gaddum [15, Theorems 3.1 and 3.2], which of course does not yield a polynomial-time algorithm: By

$$\begin{aligned} \Delta = \left\{ x \in {\mathbb {R}}^n : x \ge 0, \sum _{i=1}^n x_i = 1\right\} \end{aligned}$$

we denote the \((n-1)\)-dimensional standard simplex in dimension n. A matrix \(B \in {\mathcal {S}}^n\) lies in \({\mathcal {COP}}^{n}\) (in \({{\,\mathrm{int}\,}}({\mathcal {COP}}^{n})\)) if and only if each of its principal submatrices of size \((n-1)\times (n-1)\) lies in \({\mathcal {COP}}^{n-1}\) (in \({{\,\mathrm{int}\,}}({\mathcal {COP}}^{n-1})\)) and the value

$$\begin{aligned} v = \max _{x \in \Delta } \min _{y \in \Delta } x^\mathsf{T} B y = \min _{y \in \Delta } \max _{x \in \Delta } x^\mathsf{T} B y \end{aligned}$$
(10)

of the two-player game with payoff matrix B is non-negative (strictly positive).

One can compute the value of v in (10) by a linear program:

$$\begin{aligned} v = \max \{\lambda : \lambda \in {\mathbb {R}}, y \in \Delta , By \ge \lambda e\}, \end{aligned}$$

where \(e = (1, \ldots , 1)^\mathsf{T}\) is the all-ones vector.
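A direct transcription of this recursive test could look as follows; a floating-point sketch with exponential running time, whereas the actual implementation [18] uses exact arithmetic.

```python
import numpy as np
from scipy.optimize import linprog

def game_value(B):
    """The value v of (10) via the LP v = max{ lam : y in Delta, By >= lam*e }."""
    n = B.shape[0]
    c = np.zeros(n + 1); c[-1] = -1.0             # maximize lam
    A_ub = np.hstack([-B, np.ones((n, 1))])       # -B y + lam * e <= 0
    A_eq = np.hstack([np.ones((1, n)), [[0.0]]])  # sum(y) = 1
    res = linprog(c, A_ub=A_ub, b_ub=np.zeros(n), A_eq=A_eq, b_eq=[1.0],
                  bounds=[(0, None)] * n + [(None, None)])
    return res.x[-1]

def is_copositive(B, strict=False, tol=1e-9):
    """Gaddum's recursive characterization: check all (n-1) x (n-1)
    principal submatrices, then the sign of the game value."""
    n = B.shape[0]
    if n == 1:
        return B[0, 0] > tol if strict else B[0, 0] >= -tol
    for i in range(n):
        idx = [j for j in range(n) if j != i]
        if not is_copositive(B[np.ix_(idx, idx)], strict, tol):
            return False
    v = game_value(B)
    return v > tol if strict else v >= -tol
```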

3.4 Computing the copositive minimum

Once we know that a given symmetric matrix B lies in the interior of the copositive cone (i.e. after Step 2 of Algorithm 2), we apply the idea of simplex partitioning, initially developed by Bundfuss and Dür [3], to compute its copositive minimum \({{\,\mathrm{min_{\mathcal {COP}}}\,}}(B)\) and all vectors \({{\,\mathrm{Min_{\mathcal {COP}}}\,}}(B)\) attaining it. Again we note that this is not a polynomial-time algorithm.

First we recall some facts and results from [3]. A family \({\mathcal {P}} = \{\Delta ^1, \ldots , \Delta ^m\}\) of simplices is called a simplicial partitioning of the standard simplex \(\Delta \) if

$$\begin{aligned} \Delta = \bigcup _{i=1}^m \Delta ^i \quad \text {with} \quad {{\,\mathrm{int}\,}}(\Delta ^i) \cap {{\,\mathrm{int}\,}}(\Delta ^j) = \emptyset \quad \text {whenever } i \ne j. \end{aligned}$$

Let \(v^k_1, \ldots , v^k_n\) be the vertices of simplex \(\Delta ^k\). It is easy to verify that if a symmetric matrix \(B \in {\mathcal {S}}^n\) satisfies the strict inequalities

$$\begin{aligned} (v^k_i)^\mathsf{T} B v^k_j > 0 \text { for all } i, j = 1, \ldots , n, \text{ and } k= 1, \ldots , m, \end{aligned}$$
(11)

then it lies in \({{\,\mathrm{int}\,}}({\mathcal {COP}}^{n})\). Bundfuss and Dür [3, Theorem 2] proved the following converse: if \(B \in {{\,\mathrm{int}\,}}({\mathcal {COP}}^{n})\), then there exists an \(\varepsilon > 0\) so that the strict inequalities (11) hold for every finite simplex partition \({\mathcal {P}} = \{\Delta ^1, \ldots , \Delta ^m\}\) of \(\Delta \) in which the diameter of every simplex \(\Delta ^k\) is at most \(\varepsilon \). Here, the diameter of \(\Delta ^k\) is defined as \(\max \{\Vert v^k_i - v^k_j\Vert : i, j = 1, \ldots , n\}\).

We assume now that \(B \in {{\,\mathrm{int}\,}}({\mathcal {COP}}^{n})\) and that we have a finite simplex partition \({\mathcal {P}}\) so that (11) holds. We furthermore assume that all the vertices \(v^k_i\) have rational coordinates. Such a simplex partition exists as shown by Bundfuss and Dür [3, Algorithm 2].

Each simplex \(\Delta ^k = {{\,\mathrm{conv}\,}}\{v^k_1, \ldots , v^k_n\}\) defines a simplicial cone by \({{\,\mathrm{cone}\,}}\{v^k_1, \ldots , v^k_n\}\). From now on we only work with the simplicial cones and not with the simplices any more, so we may scale the rational \(v^k_i\)’s to have integral coordinates.

The goal is now to find all integer vectors v in the simplicial cone \({{\,\mathrm{cone}\,}}\{v^k_1, \ldots , v^k_n\}\) which minimize B[v]. To do this we adapt the algorithm of Fincke and Pohst [14], which solves the shortest lattice vector problem, the corresponding problem for positive definite matrices. The adapted algorithm solves the following problem: Given a matrix \(B \in {{\,\mathrm{int}\,}}({\mathcal {COP}}^{n})\), a simplicial cone generated by integer vectors \(v_1, \ldots , v_n\) with \(v_i^\mathsf{T} B v_j \ge 0\), and a positive constant M, find all integer vectors v in the cone with \(B[v] \le M\). Then, by successively reducing M to B[v] whenever a non-trivial integer vector v with \(B[v] \le M\) is found, we obtain the copositive minimum of B in the simplicial cone, as well as all integer vectors attaining it.

The first step of the algorithm is to compute the Hermite normal form of the matrix V which contains the vectors \(v_1, \ldots , v_n\) as its columns (see for example Kannan and Bachem [21] or Schrijver [27], where it is shown that the Hermite normal form can be computed in polynomial time). We find a unimodular matrix \(U \in \mathsf {GL}_n({\mathbb {Z}})\) such that \(UV = W\) holds, where W is an upper triangular matrix with columns \(w_1, \ldots , w_n\) and coefficients \(W_{i,j}\). Note that the diagonal coefficients of W are not zero since W has full rank. Moreover, denoting the matrix \((U^{-1})^\mathsf{T} B U^{-1}\) by \(B'\) we have for all i, j

$$\begin{aligned} 0\le v_i^\mathsf{T} B v_j = w_i^\mathsf{T} (U^{-1})^\mathsf{T} B U^{-1} w_j = w_i^\mathsf{T} B' w_j , \end{aligned}$$
(12)

where the inequality is strict whenever \(i = j\).

We want to find all vectors \(v \in {{\,\mathrm{cone}\,}}\{v_1, \ldots , v_n\} \cap {\mathbb {Z}}^n\) so that \(B[v] \le M\). In other words, the goal is to find all rational coefficients \(\alpha _1, \ldots , \alpha _n\) satisfying the following three properties:

  (i) \(\alpha _1, \ldots , \alpha _n \ge 0\),

  (ii) \(\sum _{i=1}^n \alpha _i v_i \in {\mathbb {Z}}^n\),

  (iii) \(B\left[ \sum _{i=1}^n \alpha _i v_i\right] \le M\).

Since the matrix U lies in \(\mathsf {GL}_n({\mathbb {Z}})\), a vector \(\sum _{i=1}^n \alpha _i v_i\) is integral if and only if \(\sum _{i=1}^n \alpha _i w_i\) is integral. Writing the latter vector componentwise we have

$$\begin{aligned} \sum _{i=1}^n \alpha _i w_i = \left( \sum _{j=1}^n \alpha _j W_{1,j}, \sum _{j=2}^n \alpha _j W_{2,j}, \ldots , \alpha _{n-1} W_{n-1,n-1} + \alpha _n W_{n-1,n}, \alpha _n W_{n,n} \right) . \end{aligned}$$

We first consider the possible values of the last coefficient \(\alpha _n\), and then continue to other coefficients \(\alpha _{n-1}, \ldots , \alpha _1\), one by one via a backtracking search. Conditions (i) and (ii) imply that

$$\begin{aligned} \alpha _n \in \{k/W_{n,n} : k = 0, 1, 2, \ldots \}. \end{aligned}$$

Condition (iii) gives an upper bound for \(\alpha _n\): Write \(\alpha = (\alpha _1, \ldots , \alpha _n)^\mathsf{T}\), then

$$\begin{aligned} M \ge (V\alpha )^\mathsf{T} B V\alpha = \alpha ^\mathsf{T} W^\mathsf{T} B' W \alpha = B'\left[ \sum _{i=1}^n \alpha _i w_i\right] \ge B'[\alpha _n w_n] = \alpha ^2_n B'[w_n], \end{aligned}$$

where the last inequality follows from (12). Hence, \(\alpha _n \le \sqrt{M/B'[w_n]}\) and so

$$\begin{aligned} \alpha _n \in \left\{ k/W_{n,n} : k = 0, 1, \ldots , \left\lfloor \sqrt{M/B'[w_n]} \right\rfloor W_{n,n}\right\} . \end{aligned}$$

Now suppose \(\alpha _n\) is fixed and we want to compute all possible values of the coefficient \(\alpha _{n-1}\). The second-to-last component \(\alpha _{n-1} W_{n-1,n-1} + \alpha _n W_{n-1,n}\) has to be integral and \(\alpha _{n-1}\) has to be non-negative. Thus,

$$\begin{aligned} \alpha _{n-1} \in \left\{ (k - \alpha _n W_{n-1,n})/W_{n-1,n-1} : k = \lceil \alpha _n W_{n-1,n} \rceil , \lceil \alpha _n W_{n-1,n} \rceil + 1, \ldots \right\} . \end{aligned}$$

Again we use condition (iii) to get an upper bound for \(\alpha _{n-1}\):

$$\begin{aligned} \begin{aligned}&M \ge B'\left[ \sum _{i=1}^n \alpha _i w_i\right] \ge B'[\alpha _{n-1} w_{n-1} + \alpha _n w_n]\\&\qquad = \alpha _{n-1}^2 B'[w_{n-1}] + 2\alpha _{n-1}\alpha _n w_{n-1}^\mathsf{T} B' w_{n} + \alpha _n^2 B'[w_n], \end{aligned} \end{aligned}$$

and solving the corresponding quadratic inequality gives the desired upper bound.

Now suppose \(\alpha _n\) and \(\alpha _{n-1}\) are fixed. We want to compute all possible values of the coefficient \(\alpha _{n-2}\), and in this way we proceed inductively.
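The complete backtracking search can be summarized as follows. This is a sketch under the assumptions of this subsection, presuming W and \(B'\) have already been computed (the implementation [18] differs in its details). It exploits that, by (12), \(B'[x + \alpha _i w_i]\) is non-decreasing in \(\alpha _i \ge 0\), so each level of the recursion can simply count upwards until the bound M is exceeded.

```python
from fractions import Fraction
from math import ceil

def quad(Bp, x):
    """x^T B' x in exact rational arithmetic."""
    n = len(x)
    return sum(Bp[i][j] * x[i] * x[j] for i in range(n) for j in range(n))

def enumerate_short_vectors(W, Bp, M):
    """W: upper triangular with positive diagonal (W = UV), as Fractions;
    Bp: B' = (U^{-1})^T B U^{-1} with w_i^T B' w_j >= 0 (> 0 for i = j).
    Yields all integral x = sum_i alpha_i w_i with alpha_i >= 0 and
    B'[x] <= M; the vector in original coordinates is v = U^{-1} x."""
    n = len(W)
    cols = [[W[i][j] for i in range(n)] for j in range(n)]   # cols[j] = w_j

    def search(i, x):
        if i < 0:
            if any(x):                    # exclude the zero vector
                yield [int(c) for c in x]
            return
        # component i of x is fixed by alpha_i, ..., alpha_n and must be an
        # integer k = x[i] + alpha_i * W[i][i]; so alpha_i = (k - x[i]) / W[i][i]
        k = ceil(x[i])
        while True:
            alpha = (k - x[i]) / W[i][i]
            y = [x[m] + alpha * cols[i][m] for m in range(n)]
            if quad(Bp, y) > M:           # value only grows with alpha: stop
                break
            yield from search(i - 1, y)
            k += 1

    yield from search(n - 1, [Fraction(0)] * n)
```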

3.5 Modifying the algorithm for general input

In this section we discuss an adaptation of Algorithm 1 for general symmetric matrices A as input. If A is not in \({\mathcal {CP}}^{n}\) and the procedure terminates, then it ends with a separating witness matrix W. However, we currently do not know whether our Procedure 3 always terminates in this case (cf. Conjecture 3.2).

[Procedure 3: the extension of Algorithm 1 to general symmetric input, with the additional witness tests in Steps 2(a) and 2(c) described below.]

The difference between Algorithm 1 and Procedure 3 lies in the new Steps 2(a) and 2(c). There it is tested whether or not we can already certify that the input matrix A is not in \({\mathcal {CP}}^{n}\).

In Step 2(a) we check whether or not the current \({\mathcal {COP}}\)-perfect matrix P is already a separating witness. In this way, the algorithm successively constructs an outer approximation of the cone \({\mathcal {CP}}^{n}\):

$$\begin{aligned} {\mathcal {CP}}^{n}\subseteq \{ Q\in {{\mathcal {S}}}^{n}: \langle Q, B \rangle \ge 0 \hbox { for all constructed vertices}\ B \hbox { of } {\mathcal {R}} \} \end{aligned}$$

This procedure gives a tighter and tighter outer approximation of the completely positive cone (cf. Theorem 2.3).

In Step 2(c) it is checked whether or not R is a separating witness for A, that is, if not only \(\langle A,R \rangle < 0\) but also \(R\in {\mathcal {COP}}^{n}\) holds. The copositivity test of R can be realized as explained in Sect. 3.3.

For \(A\not \in {\mathcal {CP}}^{n}\), we do not know whether Procedure 3 always provides a separating witness W after finitely many iterations. With a suitably chosen rule in Step 2(b), however, we conjecture that the computation finishes with a certificate:

Conjecture 3.2

For \(A\not \in {\mathcal {CP}}^{n}\), Procedure 3 with a suitable “pivot rule” in Step 2(b) ends after finitely many iterations with a separating witness W.

We close this subsection with a few observations that can be made in the remaining “non-rational boundary cases”, that is, for \(A \in {{\,\mathrm{bd}\,}}{\mathcal {CP}}^{n}{\setminus } \widetilde{\mathcal {CP}}^{n}\). In this case, Procedure 3 may not terminate after finitely many steps, as shown in a 2-dimensional example in the following section. Assuming there is an infinite sequence of vertices \(P^{(i)}\) of \({{\mathcal {R}}}\) constructed in Procedure 3, we know however at least the following:

  (i) The \({\mathcal {COP}}\)-perfect matrix \(P^{(i)}\) is in \( \{ B\in {\mathcal {COP}}^{n}: \langle P^{(i-1)}, A\rangle > \langle B, A\rangle \ge 0 \}. \)

  (ii) The norms \(\Vert P^{(i)} \Vert \) are unbounded. Otherwise, following the arguments in the proof of Lemma 2.4 in [10], we could construct a convergent subsequence with limit \(P \in {{\mathcal {R}}}\), for which we could then find a \(u\in {{\mathbb {R}}}^n_{\ge 0}\) of norm \(\Vert u\Vert =1\) with \(P[u]=0\) (contradicting \(P\in {{\,\mathrm{int}\,}}({\mathcal {COP}}^{n})\)).

  (iii) \(P^{(i)} / \Vert P^{(i)}\Vert \) contains a convergent subsequence with limit \(P \in \{ X \in {{\mathcal {S}}}^{n}: \langle X, A \rangle = 0\} \). It can be shown that this P is in \({{\,\mathrm{bd}\,}}{\mathcal {COP}}^{n}\). Infinite sequences of vertices \(P^{(i)}\) of \({{\mathcal {R}}}\) with such a limit P exist: for \(n=2\) we give an example in Sect. 4, in which A is from the “irrational boundary part” \(({{\,\mathrm{bd}\,}}{\mathcal {CP}}^{n}) {\setminus } \widetilde{\mathcal {CP}}^{n}\).

4 A 2-dimensional example

In this section we demonstrate how Algorithm 1 and Procedure 3 work for \(n = 2\). In doing so, we discover a relation to beautiful classical results in elementary number theory. In particular, we consider the case when the input matrix A lies on the boundary of \({\mathcal {CP}}^{2}\), see Fig. 1.

Fig. 1 Subdivision of \({\mathcal {CP}}^{2}\) by Voronoi cones \({\mathcal {V}}(P)\). Matrices \(A = (a_{ij})\) are drawn with 2-dimensional coordinates \((x,y) = \frac{1}{a_{11}+a_{22}} (a_{11}-a_{22}, a_{12})\). Integers \(\alpha ,\beta \) indicate that the shown point is on a ray spanned by the rank-1 matrix \(A=v v^\mathsf{T}\) with \(v=( \alpha , \beta )^\mathsf{T}\)

4.1 Input on the boundary

The boundary of \({\mathcal {CP}}^{2}\) splits into a part of diagonal matrices

$$\begin{aligned} A = \begin{pmatrix} \alpha &{}\quad 0 \\ 0 &{}\quad \beta \end{pmatrix} \quad \text {with} \quad \alpha , \beta \ge 0 \end{aligned}$$

and into rank-1 matrices \(A=xx^\mathsf{T}\). In the first case, Procedure 3 finishes already in its first iteration, if we use \(Q_{{\mathsf {A}}_2}\) as a starting perfect matrix, where

$$\begin{aligned} Q_{{\mathsf {A}}_2} = \begin{pmatrix} 2 &{}\quad -1 \\ -1 &{}\quad 2 \end{pmatrix} \quad \text {and} \quad {{\,\mathrm{Min_{\mathcal {COP}}}\,}}(Q_{{\mathsf {A}}_2}) = \left\{ \begin{pmatrix} 1 \\ 0 \end{pmatrix} , \begin{pmatrix} 0 \\ 1 \end{pmatrix} , \begin{pmatrix} 1 \\ 1 \end{pmatrix} \right\} . \end{aligned}$$
(13)

Let us consider the other boundary cases for \(n=2\), where \(A = xx^\mathsf{T}\) is a rank-1 matrix. Without loss of generality we can assume that \(x = (\alpha , 1)^\mathsf{T}\). As we explain in the following, if \(\alpha \) is rational, Procedure 3 terminates after finitely many iterations with a \({\mathcal {COP}}\)-perfect matrix P whose set \({{\,\mathrm{Min_{\mathcal {COP}}}\,}}P\) contains the integral vector proportional to x. For irrational \(\alpha \) the procedure does not terminate.

The first observation is that Procedure 3 successively replaces a \({\mathcal {COP}}\)-perfect matrix P by a contiguous \({\mathcal {COP}}\)-perfect matrix N in such a way that one of the three vectors in \({{\,\mathrm{Min_{\mathcal {COP}}}\,}}(P)\) is replaced by the sum of the remaining two. Let P be a copositive matrix with

$$\begin{aligned} {{\,\mathrm{Min_{\mathcal {COP}}}\,}}P = \left\{ \begin{pmatrix} a \\ b \end{pmatrix} , \begin{pmatrix} c \\ d \end{pmatrix} , \begin{pmatrix} e \\ f \end{pmatrix} \right\} . \end{aligned}$$
(14)

It is known (see for example [17, Section “Determinants Determine Edges”]) that \(\det \left( {\begin{matrix} a &{} c\\ b &{} d\end{matrix}}\right) = \pm 1\) and we get a contiguous \({\mathcal {COP}}\)-perfect matrix N with

$$\begin{aligned} {{\,\mathrm{Min_{\mathcal {COP}}}\,}}N = \left\{ \begin{pmatrix} a \\ b \end{pmatrix} , \begin{pmatrix} c \\ d \end{pmatrix} , \begin{pmatrix} a+c \\ b+d \end{pmatrix} \right\} \ne {{\,\mathrm{Min_{\mathcal {COP}}}\,}}P \end{aligned}$$

by

$$\begin{aligned} N = P + 4 \begin{pmatrix} bd &{}\quad -\frac{1}{2}(ad + bc)\\ -\frac{1}{2}(ad + bc) &{}\quad ac \end{pmatrix}. \end{aligned}$$

For instance, starting with \(P = Q_{{\mathsf {A}}_2}\) as in (13)

$$\begin{aligned} \begin{pmatrix} 1 \\ 0 \end{pmatrix} \text { is replaced by } \begin{pmatrix} 1 \\ 2 \end{pmatrix} \text { if } \alpha < 1 \text { yielding } N = \begin{pmatrix} 6 &{}\quad -3\\ -3 &{}\quad 2 \end{pmatrix}, \end{aligned}$$

or

$$\begin{aligned} \begin{pmatrix} 0 \\ 1 \end{pmatrix} \text { is replaced by } \begin{pmatrix} 2 \\ 1 \end{pmatrix} \text { if } \alpha > 1 \text { yielding } N = \begin{pmatrix} 2 &{}\quad -3\\ -3 &{}\quad 6 \end{pmatrix}. \end{aligned}$$

Note that for \(\alpha =1\), Algorithm 1 also finishes already in the first iteration. The way these vectors are constructed corresponds to the way the famous Farey sequence is obtained. This relation between the Farey diagram/sequence and quadratic forms was first investigated in a classical paper of Adolf Hurwitz [19] in 1894 inspired by a lecture of Felix Klein; see also the book by Hatcher [17], which contains the proofs.

For concreteness, let us choose \(\alpha = \sqrt{2}\). Then \({{\,\mathrm{Min_{\mathcal {COP}}}\,}}(P)\) is changed by successively replacing a suitable vector, producing

$$\begin{aligned} \begin{pmatrix} 2 \\ 1 \end{pmatrix} , \begin{pmatrix} 3 \\ 2 \end{pmatrix} , \begin{pmatrix} 4 \\ 3 \end{pmatrix} , \begin{pmatrix} 7 \\ 5 \end{pmatrix} , \begin{pmatrix} 10 \\ 7 \end{pmatrix} , \begin{pmatrix} 17 \\ 12 \end{pmatrix} , \begin{pmatrix} 24 \\ 17 \end{pmatrix} , \begin{pmatrix} 41 \\ 29 \end{pmatrix} , \begin{pmatrix} 58 \\ 41 \end{pmatrix} , \begin{pmatrix} 99 \\ 70 \end{pmatrix} , \ldots \end{aligned}$$

Note that there is always a unique choice in Step 2(b) of Procedure 3 in case A is a \(2\times 2\) rank-1 matrix. Note also that the vectors represent fractions that converge to \(\sqrt{2}\). Every second vector corresponds to a convergent of the continued fraction expansion of \(\sqrt{2}\): We have

$$\begin{aligned} \sqrt{2} = 1 + \frac{1}{2 +\frac{1}{2+\frac{1}{2 + \frac{1}{2 + \frac{1}{2 + \ddots }}}}} \end{aligned}$$

and

$$\begin{aligned} 3/2 = 1 + \frac{1}{2}, \; 7/5 = 1 + \frac{1}{2 + \frac{1}{2}},\ldots , \; 99/70 = 1 + \frac{1}{2 +\frac{1}{2+\frac{1}{2 + \frac{1}{2 + \frac{1}{2}}}}},\ldots \end{aligned}$$

The \({\mathcal {COP}}\)-perfect matrix after ten iterations of the algorithm is

$$\begin{aligned} P^{(10)}= \begin{pmatrix} 7422 &{}\quad -10497 \\ -10497 &{}\quad 14846 \end{pmatrix}. \end{aligned}$$
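This walk is easy to simulate; the following sketch (ours) reproduces the sequence of vectors above and the matrix \(P^{(10)}\). It uses that the mediant, i.e. the vector of \({{\,\mathrm{Min_{\mathcal {COP}}}\,}}(P)\) equal to the sum of the other two, is never replaced, and that comparisons with \(\alpha = \sqrt{2}\) can be done in integer arithmetic via \(p/q < \sqrt{2} \Leftrightarrow p^2 < 2q^2\).

```python
def farey_walk(steps, alpha2=2):
    """Simulate the n = 2 walk for A = x x^T, x = (sqrt(alpha2), 1):
    Min_COP(P) always has the form {u, v, u + v}, and the non-mediant
    vector on the same side of alpha as the mediant u + v is replaced."""
    def below(v):                     # p/q < alpha  <=>  p^2 < alpha2 * q^2
        p, q = v
        return q > 0 and p * p < alpha2 * q * q
    P = [[2, -1], [-1, 2]]            # start at Q_{A_2}, see (13)
    mins = [(1, 0), (0, 1), (1, 1)]
    for _ in range(steps):
        m = max(mins, key=lambda v: v[0] + v[1])    # the mediant u + v
        u, v = (w for w in mins if w != m)
        old = u if below(u) == below(m) else v      # gets replaced
        (a, b), (c, d) = m, (v if old is u else u)  # the kept pair
        # update to the contiguous COP-perfect matrix, see the formula above
        P = [[P[0][0] + 4 * b * d, P[0][1] - 2 * (a * d + b * c)],
             [P[1][0] - 2 * (a * d + b * c), P[1][1] + 4 * a * c]]
        mins = [(a, b), (c, d), (a + c, b + d)]
        print((a + c, b + d), P)

farey_walk(10)   # ends with (99, 70) and the matrix P^(10) shown above
```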

It can be shown that the normalized matrices \(P^{(i)}/\Vert P^{(i)}\Vert \) converge to a positive multiple of

$$\begin{aligned} B = \begin{pmatrix} 1 &{}\quad -\sqrt{2} \\ -\sqrt{2} &{}\quad 2 \end{pmatrix} \text { satisfying } \langle A,B\rangle = 0 \text { and } \langle X,B \rangle \ge 0 \text { for all } X \in {\mathcal {CP}}^{2}. \end{aligned}$$

However, every one of the infinitely many perfect matrices \(P^{(i)}\) satisfies

$$\begin{aligned} \langle X,P^{(i)}\rangle > 0 \text { for all } X \in {\mathcal {CP}}^{2}. \end{aligned}$$

4.2 Input outside

In case the input matrix \(A=(a_{ij})\) is outside of \({\mathcal {CP}}^{2}\) we distinguish two cases using the starting \({\mathcal {COP}}\)-perfect matrix \(Q_{{\mathsf {A}}_2}\): If \(a_{12} = a_{21} < 0\) then Procedure 3 finishes already in its first iteration (in Step 2(c)) with a separating witness

$$\begin{aligned} W = R = \begin{pmatrix} 0 &{}\quad 1 \\ 1 &{}\quad 0 \end{pmatrix} . \end{aligned}$$

If \(a_{12} = a_{21} \ge 0\), Procedure 3 terminates after finitely many iterations (in Step 2(a)) with a separating \({\mathcal {COP}}\)-perfect witness matrix \(W = P\).

We additionally note that it is a special feature of the \(n=2\) case that we can conclude that the input matrix A is outside of \({\mathcal {CP}}^{2}\) if we have a choice between two possible R with \(\langle A,R \rangle < 0\) in Step 2(b) of Procedure 3.

4.3 Integral input

Laffey and Šmigoc [22] showed that every integral matrix \(A \in {\mathcal {CP}}^{2}\) possesses an integral cp-factorization. This can also be seen as follows: If P is a copositive matrix with \({{\,\mathrm{Min_{\mathcal {COP}}}\,}}P\) as in (14) then the matrices

$$\begin{aligned} \begin{pmatrix} a\\ b \end{pmatrix} \begin{pmatrix} a\\ b \end{pmatrix}^\mathsf{T}, \begin{pmatrix} c\\ d \end{pmatrix} \begin{pmatrix} c\\ d \end{pmatrix}^\mathsf{T}, \begin{pmatrix} e\\ f \end{pmatrix} \begin{pmatrix} e\\ f \end{pmatrix}^\mathsf{T} \end{aligned}$$

form a Hilbert basis of the convex cone which they generate. This means that every integral matrix in this cone is an integral combination of the three matrices above. To show this, one first verifies the fact in the special case \(P = Q_{\mathsf{A}_2}\): an integral matrix \(A = (a_{ij})\) in the cone generated by \(e_1e_1^\mathsf{T}\), \(e_2e_2^\mathsf{T}\) and \((e_1+e_2)(e_1+e_2)^\mathsf{T}\) decomposes as \(A = (a_{11}-a_{12})\, e_1e_1^\mathsf{T} + (a_{22}-a_{12})\, e_2e_2^\mathsf{T} + a_{12}\, (e_1+e_2)(e_1+e_2)^\mathsf{T}\), and all three coefficients are non-negative integers. All other cones are then equivalent to this one by conjugating with a matrix in \(\mathsf {GL}_2({\mathbb {Z}})\).

5 Computational experiments

We implemented our algorithm; the source code, written in C++, is available on GitHub [18]. In this section we report on its performance on several examples, most of them previously discussed in the literature. Generally, the running time of the procedure is hard to predict: the number of necessary iterations in Algorithm 1 and Procedure 3 varies drastically among the considered examples. Most of the computation time is spent on computing the copositive minimum as described in Sect. 3.4.

5.1 Matrices in the interior

For matrices in the interior of the completely positive cone, our algorithm terminates with a certificate in form of a cp-factorization. Note that in [12] and in [6] characterizations of matrices in the interior of the completely positive cone are given. For example, we have that \(A \in {{\,\mathrm{int}\,}}({\mathcal {CP}}^{n})\) if and only if A has a factorization \(A = BB^\mathsf{T}\) with \(B > 0\) and \({{\,\mathrm{rank}\,}}B = n\).

The matrix

$$\begin{aligned} \begin{pmatrix} 6 &{}\quad 7 &{}\quad 8 &{}\quad 9 &{}\quad 10 &{}\quad 11\\ 7 &{}\quad 9 &{}\quad 10 &{}\quad 11 &{}\quad 12 &{}\quad 13\\ 8 &{}\quad 10 &{}\quad 12 &{}\quad 13 &{}\quad 14 &{}\quad 15\\ 9 &{}\quad 11 &{}\quad 13 &{}\quad 15 &{}\quad 16 &{}\quad 17\\ 10 &{}\quad 12 &{}\quad 14 &{}\quad 16 &{}\quad 18 &{}\quad 19\\ 11 &{}\quad 13 &{}\quad 15 &{}\quad 17 &{}\quad 19 &{}\quad 21 \end{pmatrix} \end{aligned}$$

for example lies in the interior of \({\mathcal {CP}}^{6}\): it has a cp-factorization with the vectors (1, 1, 1, 1, 1, 1), (1, 1, 1, 1, 1, 2), (1, 1, 1, 1, 2, 2), (1, 1, 1, 2, 2, 2), (1, 1, 2, 2, 2, 2) and (1, 2, 2, 2, 2, 2), which, as columns of a factor B, are entrywise positive and of full rank 6. This factorization is found after 8 iterations of our algorithm.

5.2 Matrices on the boundary

For matrices in \(\widetilde{\mathcal {CP}}^{n}\) there exists a cp-factorization by definition. However, on the boundary of the cone these are often difficult to find.

The following example is from [16]; it lies on the boundary of \({\mathcal {CP}}^{5}\) and belongs to \(\widetilde{\mathcal {CP}}^{5}\):

$$\begin{aligned} \begin{pmatrix} 8 &{}\quad 5 &{}\quad 1 &{}\quad 1 &{}\quad 5\\ 5 &{}\quad 8 &{}\quad 5 &{}\quad 1 &{}\quad 1\\ 1 &{}\quad 5 &{}\quad 8 &{}\quad 5 &{}\quad 1\\ 1 &{}\quad 1 &{}\quad 5 &{}\quad 8 &{}\quad 5\\ 5 &{}\quad 1 &{}\quad 1 &{}\quad 5 &{}\quad 8\\ \end{pmatrix} \end{aligned}$$

Starting from \(Q_{{{\mathsf {A}}_5}}\) our algorithm needs 5 iterations to find the cp-factorization with the ten vectors (0, 0, 0, 1, 1), (0, 0, 1, 1, 0), (0, 0, 1, 2, 1), (0, 1, 1, 0, 0), (0, 1, 2, 1, 0), (1, 0, 0, 0, 1), (1, 0, 0, 1, 2), (1, 1, 0, 0, 0), (1, 2, 1, 0, 0) and (2, 1, 0, 0, 1).

While the above example can be solved within seconds on a standard computer, the matrix

$$\begin{aligned} A = \begin{pmatrix} 41 &{}\quad 43 &{}\quad 80 &{}\quad 56 &{}\quad 50\\ 43 &{}\quad 62 &{}\quad 89 &{}\quad 78 &{}\quad 51\\ 80 &{}\quad 89 &{}\quad 162 &{}\quad 120 &{}\quad 93\\ 56 &{}\quad 78 &{}\quad 120 &{}\quad 104 &{}\quad 62\\ 50 &{}\quad 51 &{}\quad 93 &{}\quad 62 &{}\quad 65 \end{pmatrix} \end{aligned}$$

from Example 7.2 in [16] took roughly 10 days and 70 iterations to find a factorization with only three vectors (3, 5, 8, 8, 2), (4, 1, 7, 2, 5) and (4, 6, 7, 6, 6). The second algorithm suggested in [16] found the following approximate cp-factorization in 0.018 seconds

$$\begin{aligned} A = {\tilde{B}}{\tilde{B}}^\mathsf{T}, \; \text { with } \; {\tilde{B}} = \begin{pmatrix} 0.0000 &{}\quad 3.3148 &{}\quad 4.3615 &{}\quad 3.3150 &{}\quad 0.0000\\ 0.0000 &{}\quad 0.7261 &{}\quad 4.3485 &{}\quad 6.5241 &{}\quad 0.0000\\ 0.0000 &{}\quad 4.5242 &{}\quad 9.9675 &{}\quad 6.4947 &{}\quad 0.0000\\ 0.0000 &{}\quad 0.1361 &{}\quad 7.4192 &{}\quad 6.9955 &{}\quad 0.0000\\ 0.0000 &{}\quad 5.3301 &{}\quad 3.8960 &{}\quad 4.6272 &{}\quad 0.0000 \end{pmatrix}. \end{aligned}$$

We also considered the following family of completely positive \((n+m)\times (n+m)\) matrices, generalizing the family of examples considered in [20]: The matrices

$$\begin{aligned} \begin{pmatrix} n {{\,\mathrm{Id}\,}}_m &{}\quad J_{m,n}\\ J_{n,m} &{}\quad m {{\,\mathrm{Id}\,}}_n \end{pmatrix}, \end{aligned}$$

with \(J_{\cdot ,\cdot }\) denoting an all-ones matrix of suitable size, are known to have cp-rank nm, that is, they have a cp-factorization with nm vectors, but not with fewer. These factorizations are found by our algorithm, starting from the \({\mathcal {COP}}\)-perfect matrix \(Q_{{{\mathsf {A}}_{m+n}}}\), for all \(n,m\le 3\) in fewer than 6 iterations.

5.3 Matrices that are not completely positive

For matrices that are not completely positive, our algorithm can find a certificate in form of a witness matrix that is copositive.

The following example is taken from [25, Example 6.2]: the matrix

$$\begin{aligned} A= \begin{pmatrix} 1 &{}\quad 1 &{}\quad 0 &{}\quad 0 &{}\quad 1\\ 1 &{}\quad 2 &{}\quad 1 &{}\quad 0 &{}\quad 0\\ 0 &{}\quad 1 &{}\quad 2 &{}\quad 1 &{}\quad 0\\ 0 &{}\quad 0 &{}\quad 1 &{}\quad 2 &{}\quad 1\\ 1 &{}\quad 0 &{}\quad 0 &{}\quad 1 &{}\quad 6 \end{pmatrix} \end{aligned}$$

is positive semidefinite, but not completely positive. Starting from \(Q_{{{\mathsf {A}}_5}}\) our algorithm needs 18 iterations to find the copositive witness matrix

$$\begin{aligned} B = \begin{pmatrix} 363/5 &{}\quad -2126/35 &{}\quad 2879/70 &{}\quad 608/21 &{}\quad -4519/210\\ -2126/35 &{}\quad 1787/35 &{}\quad -347/10 &{}\quad 1025/42 &{}\quad 253/14\\ 2879/70 &{}\quad -347/10 &{}\quad 829/35 &{}\quad -1748/105 &{}\quad 371/30\\ 608/21 &{}\quad 1025/42 &{}\quad -1748/105 &{}\quad 1237/105 &{}\quad -601/70\\ -4519/210 &{}\quad 253/14 &{}\quad 371/30 &{}\quad -601/70 &{}\quad 671/105 \end{pmatrix} \end{aligned}$$

with \(\langle A, B \rangle = -2/5\), verifying \(A \not \in {\mathcal {CP}}^{5}\).