A function field variant of Pillai's problem

In this paper, we consider a variant of Pillai's problem over function fields $ F $ in one variable over $ \mathbb{C} $. For given simple linear recurrence sequences $ G_n $ and $ H_m $, defined over $ F $ and satisfying some weak conditions, we will prove that the equation $ G_n - H_m = f $ has only finitely many solutions $ (n,m) \in \mathbb{N}^2 $, which can be effectively bounded, for any non-zero $ f \in F $. Furthermore, we prove that under suitable assumptions there are only finitely many effectively computable $ f $ with more than one representation of the form $ G_n - H_m $.


Introduction
A problem going back about a hundred years to Pillai [12] considers exponential Diophantine equations of the form
(1) $a^n - b^m = f$
for given positive integers $a, b, f$, to be solved in positive integers $n, m \geq 2$. If $a$ and $b$ are given, then topics of interest are to determine for which $f$ equation (1) has infinitely many solutions, finitely many solutions or at most one solution $(n, m) \in \mathbb{N}^2$, respectively, where $\mathbb{N}$ denotes the set of natural numbers, i.e. positive integers. Furthermore, we can ask for a bound on the number or size of solutions $(n, m)$ if there are only finitely many of them. In [13] Pillai proved, extending work of Herschfeld [9] for the case $a = 3$, $b = 2$, that if $a$ and $b$ are coprime, $a > b \geq 1$, and $|f|$ is sufficiently large, then equation (1) has at most one solution. He also claimed that (1) can have at most one solution if $a$ and $b$ are not coprime, but this is incorrect, as was shown by the example $6^4 - 3^4 = 1215 = 6^5 - 3^8$ in [3]. The finiteness of the number of solutions of (1) was already mentioned by Pólya in [14], where instead of Siegel's theorem on integral points on curves the approximation theorem of Thue is used in the proof. Bennett proved in [1] that for any triple $(a, b, f)$ of non-zero integers with $a, b \geq 2$, equation (1) has at most two solutions $(n, m) \in \mathbb{N}^2$. If one allows $a$ and $b$ to vary, Pillai conjectured in [13] that there are only finitely many solutions with $n \geq 2$, $m \geq 2$. Here for $f = 1$ we get the famous Catalan conjecture, completely proved by Mihăilescu [11]. Other results with varying $a$ are listed in the paper [3] already mentioned above. In [10] Luca used the abc-conjecture to prove that the equation $p^{n_1} - p^{n_2} = q^{m_1} - q^{m_2}$ has only finitely many solutions $(p, q, n_1, n_2, m_1, m_2)$ in positive integers with $p \neq q$ primes and $n_1 \neq n_2$ (see also [17] for a quantitative version of Pillai's conjecture that follows from the abc-conjecture).
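The counterexample above is easy to check numerically. The following short script (a quick sanity check of ours, not part of the paper's argument) verifies that $f = 1215$ has the two representations $6^4 - 3^4$ and $6^5 - 3^8$, and searches a small box of exponents for further ones.

```python
# Quick numerical sanity check for the counterexample 6^4 - 3^4 = 1215 = 6^5 - 3^8.
a, b, f = 6, 3, 1215

# Collect all representations f = a^n - b^m with small exponents.
reps = [(n, m) for n in range(1, 30) for m in range(1, 30) if a**n - b**m == f]

print(reps)  # → [(4, 4), (5, 8)]
```

By Bennett's theorem quoted below, two solutions is already the maximum possible for fixed $(a, b, f)$.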
For a rather complete historical summary before 2009 on Pillai's problem we refer to [17].
A natural generalisation of this problem is to replace $a^n$ and $b^m$ by simple linear recurrence sequences $A_n = a_1\alpha_1^n + \cdots + a_d\alpha_d^n$ and $B_m = b_1\beta_1^m + \cdots + b_t\beta_t^m$ of integers. Pillai's equation is obtained when $A_n = a^n$ and $B_m = b^m$. Since Waldschmidt's survey [17], a significant number of papers have considered this generalisation for special recurrences (e.g. $k$-generalized Fibonacci numbers and powers of 2 and 3; just in order to give a concrete reference we mention [5]). The authors aimed for and proved complete results in the sense that all exceptions, in which more than one solution exists, were explicitly determined. Such a result was proved by Stroeker and Tijdeman (cf. [16]) for the case $a = 3$, $b = 2$, proving another conjecture of Pillai. In [4] Chim, Pink and Ziegler proved that if $A_n$ and $B_m$ are strictly increasing in absolute value and have dominant roots $\alpha$ and $\beta$, respectively, which are multiplicatively independent, then there exists an effectively computable finite set $E$ such that $A_n - B_m = f$ has more than one solution $(n, m) \in \mathbb{N}^2$ if and only if $f \in E$. The independence of the dominant roots is a natural condition, since otherwise one can find counterexamples, as given in [4].
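To get a feeling for this generalisation, the following small experiment (ours; the choice of the Fibonacci numbers for $A_n$ and $B_m = 2^m$ is arbitrary) counts, in a bounded range, the values $f$ with more than one representation $A_n - B_m$.

```python
# Small illustrative experiment: A_n = Fibonacci numbers, B_m = 2^m.
# Count how often a value f = A_n - B_m occurs more than once in a box.
from collections import Counter

fib = [1, 2]                          # Fibonacci numbers without the duplicate 1
while fib[-1] < 10**6:
    fib.append(fib[-1] + fib[-2])

diffs = Counter(F - 2**m for F in fib for m in range(1, 21) if F - 2**m > 0)

# Values f with more than one representation in this range:
multi = sorted(f for f, c in diffs.items() if c > 1)
print(multi[:10])  # e.g. f = 1 = 3 - 2 = 5 - 4 and f = 5 = 13 - 8 = 21 - 16 appear
```

Most values of $f$ occur at most once; the finitely many exceptional $f$ are exactly what results of the type [4] (and Theorems 3 and 4 below, in the function field setting) describe.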
In the present paper we consider a function field analogue of the Pillai problem. Silverman worked in [15] with the Cassels-Catalan equation $ax^m + by^n = c$ for fixed $a, b, c$ over function fields. We are interested in solutions $(n, m) \in \mathbb{N}^2$ of
(2) $G_n - H_m = f,$
where $G_n$ and $H_m$ are simple linear recurrence sequences defined over a function field $F$ in one variable over $\mathbb{C}$ and $f \in F^*$. We will prove that under weak assumptions there are only finitely many solutions $(n, m) \in \mathbb{N}^2$ of (2) for any non-zero $f \in F$, and that $n, m$ are bounded by an effectively computable constant depending only on $f$, the genus $g$ of $F$, and the characteristic roots and coefficients of $G_n$ and $H_m$. Moreover, we will show that under some suitable conditions there are only finitely many $f \in F$, which can be effectively computed, with two distinct representations of the form (2).

Notations and Results
Throughout the paper we denote by $F$ a function field in one variable over $\mathbb{C}$ and by $g$ the genus of $F$. For the convenience of the reader we give a short wrap-up of the notion of valuations, which can e.g. also be found in [6]: For $c \in \mathbb{C}$ and $f(x) \in \mathbb{C}(x)^*$, where $\mathbb{C}(x)$ is the rational function field over $\mathbb{C}$, we denote by $\nu_c(f)$ the unique integer such that $f(x) = (x-c)^{\nu_c(f)}\, p(x)/q(x)$ with polynomials $p, q$ satisfying $p(c)q(c) \neq 0$; moreover, for $f = p/q$ with $p, q \in \mathbb{C}[x]$ we set $\nu_\infty(f) = \deg q - \deg p$. Each of these valuations on $\mathbb{C}(x)$ extends to finitely many valuations on the finite extension $F$ of $\mathbb{C}(x)$. Our first result is the following theorem, which states that any fixed non-zero element $f \in F$ has only finitely many representations of the form $G_n - H_m$:

Theorem 1. Let $G_n = a_1\alpha_1^n + \cdots + a_d\alpha_d^n$ and $H_m = b_1\beta_1^m + \cdots + b_t\beta_t^m$ be two simple linear recurrence sequences such that $a_i, \alpha_i, b_j, \beta_j \in F^*$ for $i = 1, \ldots, d$ and $j = 1, \ldots, t$. Assume that no $\alpha_i$ or $\beta_j$ and no ratio $\alpha_i/\alpha_j$ or $\beta_i/\beta_j$ for $i \neq j$ lies in $\mathbb{C}$. Moreover, let $f \in F^*$ be a given non-zero element. Then there exists an effectively computable constant $C$, which depends only on the $a_i, \alpha_i, b_j, \beta_j$, $f$ and $g$, such that for all $(n, m) \in \mathbb{N}^2$ with $G_n - H_m = f$ we have $\max(n, m) \leq C$.
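Over $\mathbb{C}(x)$ these valuations can be computed directly: $\nu_c(p/q)$ is the multiplicity of $c$ as a zero of $p$ minus its multiplicity as a zero of $q$, and $\nu_\infty(p/q) = \deg q - \deg p$. The following minimal sketch (our illustration, using integer data for exactness; polynomials are coefficient lists $[p_0, p_1, \ldots]$) implements this.

```python
# Sketch of the valuations nu_c and nu_infinity on C(x), with integer data.

def poly_eval(p, c):
    """Evaluate the polynomial p at the point c."""
    return sum(coef * c**k for k, coef in enumerate(p))

def synth_div(p, c):
    """Divide p by (x - c) via synthetic division (assumes p(c) == 0)."""
    out, acc = [], 0
    for coef in reversed(p):
        acc = coef + c * acc
        out.append(acc)
    return list(reversed(out[:-1]))  # drop the remainder p(c)

def mult(p, c):
    """Multiplicity of c as a zero of the non-zero polynomial p."""
    k = 0
    while poly_eval(p, c) == 0:
        p, k = synth_div(p, c), k + 1
    return k

def nu_c(p, q, c):
    """nu_c(p/q): order of vanishing of p/q at c (negative at a pole)."""
    return mult(p, c) - mult(q, c)

def nu_inf(p, q):
    """nu_infinity(p/q) = deg q - deg p (inputs without trailing zeros)."""
    return (len(q) - 1) - (len(p) - 1)

# Example: f = x^2 / (x - 1) has a double zero at 0, simple poles at 1 and infinity.
print(nu_c([0, 0, 1], [-1, 1], 0), nu_c([0, 0, 1], [-1, 1], 1), nu_inf([0, 0, 1], [-1, 1]))
# → 2 -1 -1
```

As expected for a principal divisor, the valuations of $f = x^2/(x-1)$ sum to zero.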
In Corollary 4 in [8] the case $G_n - H_m = 0$ is completely solved: there is an effectively computable upper bound for $\max(n, m)$ unless $G_n$ and $H_m$ do not differ significantly, a situation which is described there precisely.
In the special case that $G_n$ and $H_m$ are pure powers of non-constant polynomials in $\mathbb{C}[x]$ we get:

Corollary 2. Let $p, q \in \mathbb{C}[x]$ be non-constant polynomials and let $f \in \mathbb{C}[x]$ be non-zero. Then for all $(n, m) \in \mathbb{N}^2$ with $p^n - q^m = f$ we have
$\max(n, m) \leq 1 + \deg p + \deg q + 2\deg f.$

For our next theorem we will need some further notation. In the case that either $d = 1$ and $\alpha_1 \notin \mathbb{C}$, or $d \geq 2$ and there is a valuation $\nu$ with $\nu(\alpha_1) < 0$ as well as $\nu(\alpha_1) < \min_{i=2,\ldots,d} \nu(\alpha_i)$, we call $\alpha_1$ the $\nu$-dominant root of $G_n$. In the second case, if the $\alpha_i$ are polynomials and $\nu = \nu_\infty$, this can be rewritten as $\nu_\infty(\alpha_1) < \min_{i=2,\ldots,d} \nu_\infty(\alpha_i)$, and since the $\alpha_i$ are polynomials the inequality $\nu_\infty(\alpha_1) < 0$ is then automatically satisfied.
We will now consider elements $f \in F$ with more than one representation of the form $G_n - H_m$. Our goal is to prove that, under some not too restrictive conditions, more than one representation is possible only for finitely many $f$. Hence it is obvious that we must exclude situations where $G_{n_1} = G_{n_2}$ for arbitrarily large distinct indices $n_1, n_2$. Thus we have to assume that there is a bound $N_0$ such that for distinct $n_1, n_2 > N_0$ we have $G_{n_1} \neq G_{n_2}$. By throwing away the first $N_0$ elements of the recurrence sequence and considering only the remaining ones, we may assume that the map $n \mapsto G_n$ is injective. We will say that $G_n$ has no multiple values for this assumption.
Furthermore, if $\alpha_1$ is the $\nu$-dominant root of $G_n$, there is an effectively computable bound $N_1$ such that for $n > N_1$ we have $\nu(a_1\alpha_1^n) < \min_{i=2,\ldots,d} \nu(a_i\alpha_i^n)$. By the same argument as above we may assume that this inequality holds for all $n \in \mathbb{N}$. We will refer to this by saying that the $\nu$-dominant root has immediate effect.
Last but not least, we call two elements $\alpha, \beta \in F$ multiplicatively independent if $\alpha^r\beta^s \in \mathbb{C}$ for $r, s \in \mathbb{Z}$ implies $r = s = 0$.
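For concreteness: $\alpha^r\beta^s$ is constant exactly when $r \cdot \mathrm{div}(\alpha) + s \cdot \mathrm{div}(\beta) = 0$, so $\alpha$ and $\beta$ are multiplicatively dependent precisely when their vectors of valuations are proportional over $\mathbb{Q}$. The sketch below (a hypothetical helper of ours, not from the paper) encodes an element of $\mathbb{C}(x)^*$, up to constant factors, by its zero/pole orders at its finite places; the valuation at infinity is determined by these and may be omitted.

```python
# Hypothetical helper (not from the paper): test multiplicative independence of
# elements of C(x)* given by their zero/pole orders at finitely many places.

def mult_independent(va, vb):
    """True iff alpha^r * beta^s lies in C only for r = s = 0."""
    places = sorted(set(va) | set(vb))
    a = [va.get(c, 0) for c in places]
    b = [vb.get(c, 0) for c in places]
    # alpha, beta are dependent iff their valuation vectors are Q-proportional,
    # i.e. iff every 2x2 minor of the 2 x |places| matrix vanishes.
    for i in range(len(places)):
        for j in range(i + 1, len(places)):
            if a[i] * b[j] != a[j] * b[i]:
                return True
    return False

# alpha = x^2 (x - 1), beta = x (x - 1)^2: minor 2*2 - 1*1 = 3, independent.
print(mult_independent({0: 2, 1: 1}, {0: 1, 1: 2}))  # → True
# alpha = x (x - 1), beta = (x (x - 1))^2: proportional vectors, dependent.
print(mult_independent({0: 1, 1: 1}, {0: 2, 1: 2}))  # → False
```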
The result is now the following statement, which implies that under the given conditions there are only finitely many $f$ with at least two representations of the form $G_n - H_m$:

Theorem 3. Let $G_n = a_1\alpha_1^n + \cdots + a_d\alpha_d^n$ and $H_m = b_1\beta_1^m + \cdots + b_t\beta_t^m$ be two simple linear recurrence sequences such that $a_i, \alpha_i, b_j, \beta_j \in F^*$ for $i = 1, \ldots, d$ and $j = 1, \ldots, t$. Assume that there exists a valuation $\nu$ in $F$ such that $\alpha_1$ and $\beta_1$ are the $\nu$-dominant roots with immediate effect of $G_n$ and $H_m$, respectively, that $\alpha_1, \beta_1 \notin \mathbb{C}$, and that $\alpha_1$ and $\beta_1$ are multiplicatively independent. Then there exists an effectively computable constant $C$, which depends only on the $a_i, \alpha_i, b_j, \beta_j$ and $g$, such that for all distinct pairs $(n_1, m_1), (n_2, m_2) \in \mathbb{N}^2$ with $G_{n_1} - H_{m_1} = G_{n_2} - H_{m_2}$ we have $\max(n_1, m_1, n_2, m_2) \leq C$.

Note that the assumptions in Theorem 3 already imply that $G_n$ has no multiple values. More precisely, this is implied by the assumption $\alpha_1 \notin \mathbb{C}$ in the case $d = 1$ and by the fact that $\alpha_1$ is the $\nu$-dominant root with immediate effect for $d > 1$. The same holds for $H_m$. One could ask whether we can relax the dominant root condition. Indeed, it is possible to relax the dominant root condition at the cost of requiring more multiplicative independence and prove a similar statement. To fix ideas we restrict ourselves to the polynomial case, i.e. we will assume that $a_i, \alpha_i \in \mathbb{C}[x]$. Moreover, we will assume (by throwing away the first few elements of the recurrences if necessary; compare with the dominant root case) that $\deg(a_i\alpha_i^n) > \deg a_j$ for all $i, j$ and all $n \in \mathbb{N}$, and refer to this by saying that $G_n$ has weak coefficients.
Now we describe what we mean by the relevant set of characteristic roots for a recurrence with weak coefficients. In a preparatory step we order the characteristic roots $\alpha_i$ such that
$\deg \alpha_1 = \cdots = \deg \alpha_k > \deg \alpha_{k+1} \geq \cdots \geq \deg \alpha_d.$
Then we call the set $R_G = \{\alpha_1, \ldots, \alpha_k\}$ the relevant set of characteristic roots of $G_n$. In this language our result is the following statement:

Theorem 4. Let $G_n = a_1\alpha_1^n + \cdots + a_d\alpha_d^n$ and $H_m = b_1\beta_1^m + \cdots + b_t\beta_t^m$ be two simple linear recurrence sequences such that $a_i, \alpha_i, b_j, \beta_j \in \mathbb{C}[x]$ for $i = 1, \ldots, d$ and $j = 1, \ldots, t$. Assume that $G_n$ and $H_m$ both have weak coefficients. Denote by $R_G$ and $R_H$ the relevant sets of characteristic roots of $G_n$ and $H_m$, respectively, and assume that no element of $R_G$ or $R_H$ as well as no quotient of two distinct elements of $R_G$ or $R_H$ lies in $\mathbb{C}$. Moreover, suppose that all pairs in the set $\{(\alpha_1, \gamma) : \gamma \in R_H\} \cup \{(\delta, \beta_1) : \delta \in R_G\}$ are pairs of multiplicatively independent elements, and that neither $G_n$ nor $H_m$ has multiple values. Then there exists an effectively computable constant $C$, which depends only on the $a_i, \alpha_i, b_j, \beta_j$ and $g$, such that for all distinct pairs $(n_1, m_1), (n_2, m_2) \in \mathbb{N}^2$ with $G_{n_1} - H_{m_1} = G_{n_2} - H_{m_2}$ we have $\max(n_1, m_1, n_2, m_2) \leq C$.

Note that this theorem can be generalized to more general elements of $F$ if we replace the degree conditions by suitable valuation conditions, as we have done in Theorem 3.

Preliminaries
The proofs in the next section will make use of height functions in function fields. Let us therefore define the height of an element $f \in F^*$ by
$\mathcal{H}(f) = -\sum_{\nu} \min(0, \nu(f)),$
where the sum is taken over all valuations on the function field $F/\mathbb{C}$. Additionally we define $\mathcal{H}(0) = \infty$. This height function satisfies some basic properties that are listed in the lemma below, which is proven in [7]:

Lemma 5. Denote as above by $\mathcal{H}$ the height on $F/\mathbb{C}$. Then for $f, g \in F^*$ the following properties hold:
(a) $\mathcal{H}(f) \geq 0$ and $\mathcal{H}(f) = \mathcal{H}(1/f)$,
(b) $\mathcal{H}(f) = 0$ if and only if $f \in \mathbb{C}^*$,
(c) $\mathcal{H}(fg) \leq \mathcal{H}(f) + \mathcal{H}(g)$,
(d) $\mathcal{H}(f + g) \leq \mathcal{H}(f) + \mathcal{H}(g)$,
(e) $\mathcal{H}(f^n) = |n| \cdot \mathcal{H}(f)$ for $n \in \mathbb{Z}$.

Furthermore, the following theorem due to Brownawell and Masser plays an essential role within our proofs. It is an immediate consequence of Theorem B in [2]:

Theorem 6 (Brownawell-Masser). Let $F/\mathbb{C}$ be a function field in one variable of genus $g$. Moreover, for a finite set $S$ of valuations, let $u_1, \ldots, u_k$ be $S$-units with
$1 + u_1 + \cdots + u_k = 0,$
where no proper subsum of the left-hand side vanishes. Then
$\max_{i=1,\ldots,k} \mathcal{H}(u_i) \leq \frac{k(k-1)}{2}\left(|S| + \max(0, 2g-2)\right).$
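Over $F = \mathbb{C}(x)$ (genus 0) this height is concrete: writing $f = p/q$ in lowest terms, $\mathcal{H}(f)$ is the degree of the pole divisor, i.e. $\max(\deg p, \deg q)$. The following sketch (our illustration; it assumes the input fraction is already in lowest terms) demonstrates two of the listed properties.

```python
# Height over C(x): for f = p/q in lowest terms, H(f) = max(deg p, deg q),
# the degree of the pole divisor (poles at zeros of q, plus one at infinity
# if deg p > deg q). Polynomials are coefficient lists [p0, p1, ...].

def deg(p):
    """Degree of a non-zero polynomial given as a coefficient list."""
    while len(p) > 1 and p[-1] == 0:
        p = p[:-1]
    return len(p) - 1

def height(p, q):
    """H(p/q) for coprime p, q."""
    return max(deg(p), deg(q))

print(height([0, 0, 0, 1], [1]))     # H(x^3) = 3 = 3 * H(x), property (e)
print(height([1, 0, 0, 1], [0, 1]))  # H(x^2 + 1/x) = H((x^3 + 1)/x) = 3 <= 2 + 1, property (d)
```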

Proofs
During this section $C_1, C_2, \ldots$ will denote effectively computable constants. To keep the indices small we will start a new numbering in each proof; consequently there is no dependence between the constants occurring in different proofs. We begin with the proof of our first theorem:

Proof of Theorem 1. Let $G_n, H_m, f$ be as in the theorem. If we insert the sum representations of $G_n$ and $H_m$ into the equation $G_n - H_m = f$, bring all terms to one side, and divide by $f$, we get
(3) $\frac{a_1}{f}\alpha_1^n + \cdots + \frac{a_d}{f}\alpha_d^n - \frac{b_1}{f}\beta_1^m - \cdots - \frac{b_t}{f}\beta_t^m - 1 = 0.$
Now let $S$ be a finite set of valuations such that $f$ as well as all $\alpha_i$ and $a_i$ for $i = 1, \ldots, d$ and all $\beta_j$ and $b_j$ for $j = 1, \ldots, t$ are $S$-units, and assume that $(n, m) \in \mathbb{N}^2$ satisfies equation (3).
Our plan is to apply Theorem 6. Therefore we consider a minimal vanishing subsum of the left-hand side of (3) which contains the summand 1, i.e. a vanishing subsum such that no proper sub-subsum of it vanishes. This subsum contains at least one other summand. Without loss of generality we may assume that the summand $-\frac{a_{i_0}}{f}\alpha_{i_0}^n$ is contained therein. By Theorem 6 we get the upper bound
$\mathcal{H}\left(\frac{a_{i_0}}{f}\alpha_{i_0}^n\right) \leq C_1.$
Thus we have
$n\,\mathcal{H}(\alpha_{i_0}) = \mathcal{H}(\alpha_{i_0}^n) \leq C_1 + \mathcal{H}(a_{i_0}) + \mathcal{H}(f),$
and since $\alpha_{i_0} \notin \mathbb{C}$ implies $\mathcal{H}(\alpha_{i_0}) \geq 1$, this yields $n \leq C_2$. Now there are two possible cases. If also a summand with a $\beta_j$ is contained in the minimal vanishing subsum with 1, then the same calculations show that $m \leq C_3$ and we are done. Otherwise we consider a minimal vanishing subsum of the left-hand side of (3) of the form
$z_1 + z_2 + \cdots + z_r = 0$
which contains a summand involving some $\beta_j^m$. After dividing by $z_1$ we can apply Theorem 6 to this subsum, which yields
$\mathcal{H}\left(\frac{z_l}{z_1}\right) \leq C_4 \qquad (l = 2, \ldots, r).$
Let us first assume that $z_1 = -\frac{b_{j_0}}{f}\beta_{j_0}^m$ for some $j_0$. If the subsum contains a summand $z_l = -\frac{a_{i_1}}{f}\alpha_{i_1}^n$, then $\mathcal{H}(z_l)$ is bounded since $n \leq C_2$, and together with the bound in the last displayed expression we get a bound on $\mathcal{H}(\beta_{j_0}^m)$; if instead all other summands $z_l$ involve powers $\beta_{j_l}^m$, then $m\,\mathcal{H}(\beta_{j_l}/\beta_{j_0}) \leq \mathcal{H}(z_l/z_1) + \mathcal{H}(b_{j_l}/b_{j_0})$ and $\beta_{j_l}/\beta_{j_0} \notin \mathbb{C}$. In both situations we end up with $m \leq C_5$. Assume now that $z_1 = -\frac{a_{i_1}}{f}\alpha_{i_1}^n$ for some $i_1$. In this situation $\mathcal{H}(z_1)$ is bounded since $n \leq C_2$, and for every summand $z_l$ involving a power $\beta_{j_l}^m$ we end up with the bounds
$m\,\mathcal{H}(\beta_{j_l}) = \mathcal{H}(\beta_{j_l}^m) \leq \mathcal{H}(z_l) + \mathcal{H}(b_{j_l}) + \mathcal{H}(f) \leq C_6,$
hence $m \leq C_7$. Thus, by putting all things together, we get for the exponential variables $n, m$ the final bound
$\max(n, m) \leq \max(C_2, C_3, C_5, C_7),$
which proves the theorem.
In the special case of pure powers of polynomials the proof as well as the constant become much simpler:

Proof of Corollary 2. Let $p, q$ be non-constant polynomials in $\mathbb{C}[x]$ and $f$ a non-zero polynomial in $\mathbb{C}[x]$. For $(n, m) \in \mathbb{N}^2$ with $p^n - q^m = f$ we get the equation
$\frac{p^n}{f} - \frac{q^m}{f} - 1 = 0.$
Since there are only three summands and each of them is non-zero, there cannot be a proper vanishing subsum. Let $S$ be the set containing $\nu_\infty$ as well as the valuations corresponding to the zeros of $p$, $q$ and $f$. As we have three summands and the genus of $\mathbb{C}(x)$ is zero, the bound in Theorem 6 simplifies to $|S|$, which can be bounded above by $1 + \deg p + \deg q + \deg f$. Hence
$n \deg p - \deg f \leq \mathcal{H}\left(\frac{p^n}{f}\right) \leq 1 + \deg p + \deg q + \deg f,$
which yields $n \leq \frac{1 + \deg p + \deg q + 2\deg f}{\deg p} \leq 1 + \deg p + \deg q + 2\deg f$. The same bound also holds for $m$, with the same calculations.
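The statement of Corollary 2 can be sanity-checked by brute force. In the sketch below (our illustration with arbitrarily chosen $p$, $q$ and $f$; coefficient lists represent polynomials), we search a box of exponents for solutions of $p^n - q^m = f$ and observe that the solution found is indeed small compared to any bound linear in the degrees.

```python
# Brute-force check of a Pillai equation over C[x] (illustrative choice of p, q, f).

def pmul(a, b):
    """Product of two polynomials given as coefficient lists."""
    out = [0] * (len(a) + len(b) - 1)
    for i, x in enumerate(a):
        for j, y in enumerate(b):
            out[i + j] += x * y
    return out

def ppow(a, n):
    """n-th power of a polynomial."""
    out = [1]
    for _ in range(n):
        out = pmul(out, a)
    return out

def psub(a, b):
    """Difference of two polynomials (padded to equal length)."""
    n = max(len(a), len(b))
    a, b = a + [0] * (n - len(a)), b + [0] * (n - len(b))
    return [x - y for x, y in zip(a, b)]

p = [1, 0, 1]                        # p = x^2 + 1
q = [0, 1]                           # q = x
f = psub(ppow(p, 1), ppow(q, 2))     # f = p - q^2 = 1, so (n, m) = (1, 2) is a solution

sols = [(n, m) for n in range(1, 10) for m in range(1, 10)
        if all(c == 0 for c in psub(psub(ppow(p, n), ppow(q, m)), f))]
print(sols)  # → [(1, 2)]
```

Here $\max(n, m) = 2$, comfortably below the shape of bound arising in the proof of Corollary 2.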
In preparation for the proofs of the other theorems we will formulate and prove a short lemma which will be used several times later on:

Lemma 7. Let $\gamma, \delta \in F \setminus \mathbb{C}$ be multiplicatively independent and $n, m \in \mathbb{N}$. Assume that
(4) $\mathcal{H}\left(\gamma^n\delta^{-m}\right) \leq L.$
Then there exists an effectively computable constant $C$, depending only on $\gamma, \delta, g$ and $L$, such that $\max(n, m) \leq C$.
Proof. If γ has a zero that is not a zero of δ, then we have n ≤ L. Analogously, if γ has a pole that is not a pole of δ, we also have n ≤ L. If vice versa δ has a zero/pole that is not a zero/pole of γ, this would imply m ≤ L. Thus without loss of generality we may assume that either n ≤ L or each zero/pole of γ is also a zero/pole of δ and vice versa.
Let us now focus on the second case, i.e. that $\gamma$ and $\delta$ have the same zeros and poles. Since $\gamma$ and $\delta$ are multiplicatively independent and non-constant, there exist two valuations $\nu$ and $\mu$ such that $\nu(\gamma)\nu(\delta)\mu(\gamma)\mu(\delta) \neq 0$ and
$\nu(\gamma)\mu(\delta) - \mu(\gamma)\nu(\delta) \neq 0.$
From inequality (4), and since $|\nu(h)| \leq \mathcal{H}(h)$ for every valuation $\nu$ and every $h \in F^*$, we get
$|n\nu(\gamma) - m\nu(\delta)| \leq L \qquad \text{and} \qquad |n\mu(\gamma) - m\mu(\delta)| \leq L.$
Eliminating $m$ from these two inequalities yields
$n \leq \frac{L\left(|\nu(\delta)| + |\mu(\delta)|\right)}{|\nu(\gamma)\mu(\delta) - \mu(\gamma)\nu(\delta)|} =: C_2.$
Hence we have the upper bound $n \leq \max(L, C_2) =: C_3$. Using properties of the height in the same manner as in the proofs above, we also get the upper bounds
$m\,\mathcal{H}(\delta) = \mathcal{H}(\delta^m) \leq \mathcal{H}(\gamma^n\delta^{-m}) + \mathcal{H}(\gamma^n) \leq L + C_3\,\mathcal{H}(\gamma)$
and thus $m \leq C_4$. This proves the lemma.

Now we will use this lemma to prove our second theorem:

Proof of Theorem 3. Let $(n_1, m_1), (n_2, m_2) \in \mathbb{N}^2$ be two distinct pairs with $G_{n_1} - H_{m_1} = G_{n_2} - H_{m_2}$. Since neither $G_n$ nor $H_m$ has multiple values, we have $n_1 \neq n_2$ and $m_1 \neq m_2$. We write $N = \max(n_1, n_2)$ and $M = \max(m_1, m_2)$. If we insert the sum representations into $G_{n_1} - H_{m_1} = G_{n_2} - H_{m_2}$ and bring all terms to one side, we get
(5) $\sum_{i=1}^{d} a_i\alpha_i^{n_1} - \sum_{j=1}^{t} b_j\beta_j^{m_1} - \sum_{i=1}^{d} a_i\alpha_i^{n_2} + \sum_{j=1}^{t} b_j\beta_j^{m_2} = 0.$
Let $S$ be a finite set of valuations such that all $\alpha_i$ and $a_i$ for $i = 1, \ldots, d$ as well as all $\beta_j$ and $b_j$ for $j = 1, \ldots, t$ are $S$-units. Now we distinguish four cases. Firstly, we assume $d = t = 1$. Then equation (5) reduces to
(6) $a_1\alpha_1^{n_1} - b_1\beta_1^{m_1} - a_1\alpha_1^{n_2} + b_1\beta_1^{m_2} = 0.$
If there is no proper vanishing subsum, then we divide by $a_1\alpha_1^N$ and apply Theorem 6. Thus there is an effectively computable constant $C_1$ such that
$\mathcal{H}\left(\frac{b_1\beta_1^M}{a_1\alpha_1^N}\right) \leq C_1.$
Therefore we have
$\mathcal{H}\left(\alpha_1^N\beta_1^{-M}\right) \leq C_1 + \mathcal{H}(a_1) + \mathcal{H}(b_1) =: C_2,$
and by Lemma 7
$\max(n_1, m_1, n_2, m_2) = \max(N, M) \leq C_3.$
Otherwise we can split equation (6) into two vanishing subsums of the shape
$a_1\alpha_1^{k_i} \pm b_1\beta_1^{l_i} = 0 \qquad (i = 1, 2),$
where $\{k_1, k_2\} = \{n_1, n_2\}$ and $\{l_1, l_2\} = \{m_1, m_2\}$; note that a vanishing subsum $a_1\alpha_1^{n_1} - a_1\alpha_1^{n_2} = 0$ is impossible since $\alpha_1 \notin \mathbb{C}$ and $n_1 \neq n_2$. Hence $\mathcal{H}(\alpha_1^{k_i}\beta_1^{-l_i}) \leq \mathcal{H}(a_1) + \mathcal{H}(b_1)$, and again by Lemma 7
$\max(n_1, m_1, n_2, m_2) = \max(k_1, l_1, k_2, l_2) \leq C_4.$
Secondly, we assume $d = 1$ and $t > 1$. Let $\{M, m_0\} = \{m_1, m_2\}$ and $\{k_1, k_2\} = \{n_1, n_2\}$. Since $\beta_1$ is the $\nu$-dominant root with immediate effect of $H_m$, we have $\nu(\beta_1) < 0$ and $\nu(b_1\beta_1^m) < \min_{j=2,\ldots,t} \nu(b_j\beta_j^m)$ for all $m \in \mathbb{N}$. We claim that there is a minimal vanishing subsum of (5) containing $b_1\beta_1^M$ and $a_1\alpha_1^{k_1}$ (after relabelling $k_1$ and $k_2$ if necessary). If this were not so, then $b_1\beta_1^M$ could be written as a sum of elements with $\nu$-valuation strictly greater than $\nu(b_1\beta_1^M)$, but this is impossible. Hence we divide this minimal vanishing subsum by $b_1\beta_1^M$ and the application of Theorem 6 gives us
$\mathcal{H}\left(\frac{a_1\alpha_1^{k_1}}{b_1\beta_1^M}\right) \leq C_5.$
As we have seen above, this yields, under use of Lemma 7, $\max(k_1, m_1, m_2) \leq C_6$.
The summand $a_1\alpha_1^{k_2}$ must be part of a minimal vanishing subsum with at least one other summand $\omega$. Since the exponential variable occurring in $\omega$ is among $k_1, m_1, m_2$, the height $\mathcal{H}(\omega)$ can be bounded by an effectively computable constant. Therefore we have
$k_2\,\mathcal{H}(\alpha_1) = \mathcal{H}(\alpha_1^{k_2}) \leq \mathcal{H}(a_1) + \mathcal{H}(\omega) + \mathcal{H}\left(\frac{a_1\alpha_1^{k_2}}{\omega}\right) \leq C_7,$
because the height of the quotient in the last line is bounded by Theorem 6. Altogether we have
$\max(n_1, m_1, n_2, m_2) = \max(k_2, k_1, m_1, m_2) \leq C_8.$
The third case, $d > 1$ and $t = 1$, is handled analogously. Finally, assume that $d > 1$ and $t > 1$. Let $\{M, m_0\} = \{m_1, m_2\}$ and $\{N, n_0\} = \{n_1, n_2\}$. Since $\alpha_1$ and $\beta_1$ are the $\nu$-dominant roots with immediate effect of $G_n$ and $H_m$, respectively, we have $\nu(a_1\alpha_1^n) < \min_{i=2,\ldots,d} \nu(a_i\alpha_i^n)$ and $\nu(b_1\beta_1^m) < \min_{j=2,\ldots,t} \nu(b_j\beta_j^m)$ for all $n, m \in \mathbb{N}$. Note that no summand of a vanishing subsum of the left-hand side of (5) can have $\nu$-valuation strictly smaller than each of the other summands; otherwise this summand could be written as a sum of elements with $\nu$-valuation strictly greater than its own $\nu$-valuation, which is impossible. Thus it must hold that $\nu(a_1\alpha_1^N) = \nu(b_1\beta_1^M)$. Moreover, $a_1\alpha_1^N$ and $b_1\beta_1^M$ are in the same minimal vanishing subsum. Dividing this minimal vanishing subsum by $a_1\alpha_1^N$ and applying Theorem 6 yields
$\mathcal{H}\left(\frac{b_1\beta_1^M}{a_1\alpha_1^N}\right) \leq C_9,$
so that by Lemma 7 we get $\max(N, M) \leq C_{10}$, and hence $\max(n_1, m_1, n_2, m_2) = \max(N, M) \leq C_{10}$. Thus the theorem is proven.
Finally, by using similar ideas, we prove our last theorem:

Proof of Theorem 4. As in the proof of Theorem 3 we let $S$ be a finite set of valuations such that all $\alpha_i$ and $a_i$ for $i = 1, \ldots, d$ as well as all $\beta_j$ and $b_j$ for $j = 1, \ldots, t$ are $S$-units. Let again $(n_1, m_1), (n_2, m_2) \in \mathbb{N}^2$ be two distinct pairs with $G_{n_1} - H_{m_1} = G_{n_2} - H_{m_2}$. Since neither $G_n$ nor $H_m$ has multiple values, we have $n_1 \neq n_2$ and $m_1 \neq m_2$. We write $N = \max(n_1, n_2)$ and $M = \max(m_1, m_2)$, and consider once again equation (5). We distinguish four cases. The case $d = t = 1$ is covered by Theorem 3. Now we consider the case $d = 1$ and $t > 1$. In a minimal vanishing subsum containing $b_1\beta_1^M$, for degree reasons, there must also be contained either another summand $b_{j_1}\beta_{j_1}^M$ with $\beta_{j_1} \in R_H$ or a summand $a_1\alpha_1^{k_1}$ with $k_1 \in \{n_1, n_2\}$.
In the first subcase we divide this minimal vanishing subsum by $b_1\beta_1^M$ and Theorem 6 gives an effectively computable constant $C_1$ such that
$\mathcal{H}\left(\frac{b_{j_1}\beta_{j_1}^M}{b_1\beta_1^M}\right) \leq C_1.$
In the second subcase we divide this minimal vanishing subsum by $a_1\alpha_1^N$ as well, and Theorem 6 yields
$\mathcal{H}\left(\frac{b_1\beta_1^M}{a_1\alpha_1^N}\right) \leq C_{11}.$
As we have seen above this gives a bound $N \leq C_{12}$. Then we consider a minimal vanishing subsum of (5) containing $b_1\beta_1^M$. If it contains another summand of the form $b_{j_3}\beta_{j_3}^M$ with $\beta_{j_3} \in R_H$, then we get an upper bound $M \leq C_{13}$ in the same manner as for $N$. Otherwise this minimal vanishing subsum must contain a summand $a_{i_3}\alpha_{i_3}^{k_2}$ with $k_2 \in \{n_1, n_2\}$ and $i_3 \in \{1, \ldots, d\}$. Here we get a bound