Bivariate Infinite Series Solution of Kepler's Equations

A class of bivariate infinite series solutions of the elliptic and hyperbolic Kepler equations is described, adding to the handful of 1-D series that have been found throughout the centuries. This result is based on the exact analytical computation of the partial derivatives of the eccentric anomaly with respect to the eccentricity $e$ and mean anomaly $M$ in a given base point $(e_c,M_c)$ of the $(e,M)$ plane. Explicit examples of such bivariate infinite series are provided, corresponding to different choices of $(e_c,M_c)$, and their convergence is studied. In particular, the polynomials that are obtained by truncating the infinite series up to the fifth degree reach high levels of accuracy in significantly large regions of the parameter space $(e,M)$.


Introduction
In the Newtonian approximation, the time dependence of the relative position of two distant or spherically symmetric bodies that move in each other's gravitational field can be written with explicit analytical formulas involving a finite number of terms only when the eccentricity, $e$, is equal to 0 or 1, corresponding to circular and parabolic orbits, respectively [1]. For $0 < e < 1$ and for $e > 1$, such evolution can be obtained by solving for $E$ one of the following two Kepler Equations (KEs) (see e.g. Chap. 4 of Ref. [1]),

$$M = E - e \sin E \quad (0 < e < 1), \qquad M = e \sinh E - E \quad (e > 1), \tag{1}$$

where $M$ and $E$ are measures of the epoch and the angular position called the mean and the eccentric anomaly, respectively (for convenience, the same symbols are used here for the elliptic and hyperbolic anomalies, even though they are defined in different ways). A handful of infinite series solutions of Eqs. (1) have been found throughout the centuries (see Chap. 3 in Ref. [2]). The solution for $0 < e < 1$ has been written as an expansion in powers of $e$ [3], as an expansion in the basis functions $\sin(nM)$ with coefficients proportional to the values $J_n(e)$ of Bessel functions [4], or in the form proposed by Levi-Civita [5,6]. The present work describes a class of bivariate infinite series solutions of the form

$$E = \sum_{k, q \geq 0} c_{k,q}\, (e - e_c)^k (M - M_c)^q, \tag{2}$$

with coefficients $c_{k,q}$ depending on the choice of the base values $(e_c, M_c)$. These solutions converge locally around $(e_c, M_c)$, and may be used to devise 2-D spline algorithms for the numerical computation of the eccentric anomaly $E$ for every $(e, M)$, generalising the 1-D spline methods that have been described recently [8,9].

Methods
Let the unknown exact solution of Eq. (1) be

$$E = g(e, M). \tag{3}$$

If the analytical expression of the partial derivatives of $g(e, M)$ were known, a bivariate Taylor expansion could be written for any choice of base values $e_c$, $E_c$, $M_c = f(e_c, E_c)$, where $M = f(e, E)$ denotes the KE of Eq. (1) solved for $M$, so that Eq. (2) would be demonstrated with the coefficients given by

$$c_{k,q} = \frac{1}{k!\, q!} \left. \frac{\partial^{k+q} g}{\partial e^k\, \partial M^q} \right|_{(e_c, M_c)}. \tag{4}$$

In order to obtain such derivatives, we notice that the definitions in Eqs. (1) and (3) imply the identity

$$E \equiv g(e, f(e, E)). \tag{5}$$

In this expression, $e$ and $E$ are considered as the independent variables. Therefore, by taking the differential, we obtain

$$dE = \frac{\partial g}{\partial e}\, de + \frac{\partial g}{\partial M} \left( \frac{\partial f}{\partial e}\, de + \frac{\partial f}{\partial E}\, dE \right). \tag{6}$$

Since $e$ and $E$ are independent, the coefficients of $dE$ and $de$ must cancel separately. This condition can be used to obtain the partial derivatives of $g$. Taking also into account Eqs. (1) and (3), the cancellation of the coefficient of $dE$ gives

$$\frac{\partial g}{\partial M} = \frac{\lambda}{1 - eC}, \tag{7}$$

where we have defined the parameter $\lambda$ and the function $C$ such that $\lambda = 1$ and $C = \cos g(e, M)$ for $e < 1$, or $\lambda = -1$ and $C = \cosh g(e, M)$ for $e > 1$. As could be expected, Eq. (7) coincides with the usual rule for the derivative of the inverse function when $e$ is considered to be a fixed parameter. The cancellation of the coefficient of $de$ in Eq. (6) implies

$$\frac{\partial g}{\partial e} = \frac{S}{1 - eC}, \tag{8}$$

with $S$ defined as $S = \sin g(e, M)$ for $e < 1$, or $S = \sinh g(e, M)$ for $e > 1$.
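As a numerical sanity check, the first-order derivatives of Eqs. (7) and (8) for the elliptic branch ($\lambda = 1$, $C = \cos E$, $S = \sin E$) can be compared against finite differences of a standard Newton solver. This is an illustrative sketch, not part of the original text; the solver, starter value, step size, and tolerances are arbitrary choices:

```python
import math

def kepler_E(e, M, tol=1e-14):
    """Solve the elliptic Kepler equation M = E - e*sin(E) for E by Newton iteration."""
    E = M if e < 0.8 else math.pi  # common starter choice
    for _ in range(60):
        dE = (E - e * math.sin(E) - M) / (1.0 - e * math.cos(E))
        E -= dE
        if abs(dE) < tol:
            break
    return E

e, M = 0.3, 1.0
E = kepler_E(e, M)

# Analytical first-order derivatives, Eqs. (7)-(8) with lambda = 1:
dEdM = 1.0 / (1.0 - e * math.cos(E))          # dg/dM
dEde = math.sin(E) / (1.0 - e * math.cos(E))  # dg/de

# Independent check via central finite differences
h = 1e-6
fd_M = (kepler_E(e, M + h) - kepler_E(e, M - h)) / (2 * h)
fd_e = (kepler_E(e + h, M) - kepler_E(e - h, M)) / (2 * h)
print(abs(dEdM - fd_M), abs(dEde - fd_e))
```

Both differences come out far below the finite-difference noise floor, consistent with the analytical expressions.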
Eqs. (7) and (8), taken together with Eqs. (1) and (3), can be used for the iterative computation of all the higher order derivatives entering Eq. (2) for $e \neq 1$. The calculations can be simplified by expressing all the derivatives in terms of only $\lambda$, $S$, and $C$, and using the following identities, which can be derived from the definitions of $S$ and $C$ and Eqs. (7) and (8),

$$\frac{\partial S}{\partial e} = C \frac{\partial g}{\partial e}, \qquad \frac{\partial S}{\partial M} = C \frac{\partial g}{\partial M}, \tag{9}$$

$$\frac{\partial C}{\partial e} = -\lambda S \frac{\partial g}{\partial e}, \qquad \frac{\partial C}{\partial M} = -\lambda S \frac{\partial g}{\partial M}. \tag{10}$$

The second order derivatives can then be obtained by applying the operators $\partial/\partial e$ and $\partial/\partial M$ and the rules of Eqs. (9) and (10) to the first order derivatives given in Eqs. (7) and (8). The result is

$$\frac{\partial^2 g}{\partial M^2} = -\frac{\lambda e S}{(1 - eC)^3}, \tag{11}$$

$$\frac{\partial^2 g}{\partial e\, \partial M} = \frac{\lambda C}{(1 - eC)^2} - \frac{e S^2}{(1 - eC)^3}, \tag{12}$$

$$\frac{\partial^2 g}{\partial e^2} = \frac{2 S C}{(1 - eC)^2} - \frac{\lambda e S^3}{(1 - eC)^3}. \tag{13}$$

Similarly, the third order derivatives can be obtained by applying the same operators and rules to the second order derivatives, Eqs. (11), (12), and (13), and all the higher order derivatives follow by iterating this procedure. These expressions are exact, but they depend on the unknown function $g$ through $S$ and $C$. Nevertheless, taken together with Eqs. (1) and (3), they can be evaluated exactly at any base point $(e_c, M_c)$ where $E_c = g(e_c, M_c)$ is known, yielding the coefficients of Eq. (4). The determination of the radius of convergence for the univariate series has been a formidable mathematical problem (see Chap. 6 of Ref. [2]). Although the rigorous generalisation of such analyses to the bivariate series of Eqs. (2) and (4) is beyond the scope of this letter, a practical criterion for estimating the region of convergence will still be given in Sec. 3.
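The iteration described above can be automated symbolically. The following sketch (assuming the sympy library; the operator names `dM` and `de` are mine) encodes the rules of Eqs. (9) and (10) as differentiation operators acting on expressions in $(e, S, C, \lambda)$, recovers the second order derivatives, and checks that the two routes to the mixed partial agree:

```python
import sympy as sp

e, S, C, lam = sp.symbols('e S C lam')

gM = lam / (1 - e * C)  # Eq. (7): dg/dM
ge = S / (1 - e * C)    # Eq. (8): dg/de

def dM(expr):
    """Apply d/dM using dS/dM = C*gM and dC/dM = -lam*S*gM (Eqs. (9)-(10))."""
    return sp.diff(expr, S) * C * gM - sp.diff(expr, C) * lam * S * gM

def de(expr):
    """Apply d/de using dS/de = C*ge and dC/de = -lam*S*ge."""
    return sp.diff(expr, e) + sp.diff(expr, S) * C * ge - sp.diff(expr, C) * lam * S * ge

g_MM = sp.simplify(dM(gM))  # second derivative in M
g_eM = sp.simplify(de(gM))  # mixed derivative, route 1
g_Me = sp.simplify(dM(ge))  # mixed derivative, route 2 (must agree with route 1)
print(g_MM)
```

Substituting $\lambda = \pm 1$ reduces `g_MM` to $\mp eS/(1-eC)^3$, the familiar second derivative of the eccentric anomaly with respect to the mean anomaly.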

Examples, discussion and results
In this section, three examples of bivariate infinite series solutions of KEs are given. They have been obtained from Eqs. (2) and (4) with different choices of the base values $(e_c, M_c)$. In all cases, it is convenient to define approximate solutions $S_n$, obtained by truncating the infinite series of Eq. (2) to the terms with $k + q \leq n$, with coefficients given by Eq. (4). The errors $E_n$ of the approximate solutions $S_n$ can then be evaluated in a self-consistent way. From a practical point of view, the convergence of the infinite series for certain values of $(e, M)$ means that $E_n(e, M)$ should tend to decrease for increasing $n$. This idea is used for obtaining an estimate of the region of convergence in the $(e, M)$ parameter space by comparing the average errors for lower and higher values of $n$, with the rule of thumb of Eq. (20). Choosing $e_c = 0$, $E_c = 0$, so that $M_c = 0$ rad, $\lambda_c = 1$, $S_c = 0$, $C_c = 1$, the series of Eqs. (2) and (4) becomes

$$E = M + eM + e^2 M + \left( e^3 M - \tfrac{1}{6} e M^3 \right) + \left( e^4 M - \tfrac{2}{3} e^2 M^3 \right) + \dots \tag{21}$$

For almost all values of $e \lesssim 0.6$ the error decreases as $n$ increases, as expected for a convergent series. Below this value, inversions in the order of the $E_n$'s only occur around $e \sim 0.3$ and $e \sim 0.4$, when the second order and the first order approximations, respectively, become smaller than higher order ones. For $e \gtrsim 0.8$, the inversions become more evident and cannot be justified any longer with oscillatory terms in $f$ and $g$. These considerations hint at the conclusion that in this specific direction the limit for convergence should be somewhere in between $e \sim 0.6$ and $e \sim 0.8$. A more precise, though still approximate, criterion of convergence may be obtained with the rule of thumb of Eq. (20). In the case of the $M = \pi e$ line, this rule gives the values $e < 0.72$ (or $M < 2.2$ rad), in agreement with the conclusion obtained from the qualitative analysis of Fig. 1. Fig. 2 shows the contour levels in the $(e, M)$ plane of the error $E_5$ affecting the fifth degree polynomial approximation, $S_5$, as given by Eq. (21).
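The coefficients around the base point $(e_c, M_c) = (0, 0)$ can be generated with the same symbolic machinery. The sketch below (assuming sympy; all names are my own) builds the table of derivatives of $g$ in terms of $(e, S, C)$ for the elliptic branch ($\lambda = 1$) up to total order 5, then evaluates the Taylor coefficients of Eq. (4) at $e = 0$, $S_c = 0$, $C_c = 1$:

```python
import sympy as sp
from math import factorial

e, S, C = sp.symbols('e S C')  # elliptic branch: lambda = 1, S = sin E, C = cos E

gM = 1 / (1 - e * C)  # Eq. (7)
ge = S / (1 - e * C)  # Eq. (8)

def dM(x):
    # d/dM via dS/dM = C*gM, dC/dM = -S*gM (Eqs. (9)-(10) with lambda = 1)
    return sp.diff(x, S) * C * gM - sp.diff(x, C) * S * gM

def de(x):
    # d/de via dS/de = C*ge, dC/de = -S*ge
    return sp.diff(x, e) + sp.diff(x, S) * C * ge - sp.diff(x, C) * S * ge

n = 5
D = {(1, 0): ge, (0, 1): gM}  # D[(k, q)] = d^{k+q} g / de^k dM^q as expression in (e, S, C)
for total in range(2, n + 1):
    for k in range(total + 1):
        q = total - k
        D[(k, q)] = sp.simplify(dM(D[(k, q - 1)])) if q > 0 else sp.simplify(de(D[(k - 1, 0)]))

# Coefficients c_{k,q} of Eq. (4) at the base point e_c = 0, M_c = 0 (S = 0, C = 1)
base = {e: 0, S: 0, C: 1}
c = {(k, q): expr.subs(base) / (factorial(k) * factorial(q)) for (k, q), expr in D.items()}
print(c[(1, 1)], c[(1, 3)])
```

The resulting nonzero coefficients through total degree 5 reproduce the terms of the series displayed above, e.g. $c_{1,1} = 1$ and $c_{1,3} = -1/6$.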
The limit $e < 0.6627434193$ for the convergence of Lagrange's univariate series (see page 26 of Ref. [2]) is represented by the vertical dashed magenta line. The continuous black curve marks the boundary of the region of convergence of the bivariate series of Eq. (21), as estimated with Eq. (20). The error $E_5$ is kept below $\sim 10^{-4}$ rad for $e \lesssim 0.5$ and $M \lesssim \pi/2$, and is reduced to the level $\sim 10^{-13}$ rad for $e \sim 0.01$ and $M \sim \pi/1000$. Moreover, the fifth order approximation reaches machine precision $\epsilon_{\rm double}$ in an entire neighbourhood of size $\Delta e \sim 2 \times 10^{-3}$, $\Delta M \sim 3 \times 10^{-3}$ rad around the point $(e_c, M_c)$.
The errors $E_n(e, M)$ shown in Fig. 3 for five polynomials $S_n$ are obtained by truncating the corresponding bivariate series.
where now $E$ and $M$ indicate the (dimensionless) hyperbolic anomalies. The resulting bivariate series is expected to converge in a suitable neighbourhood of $(e_c, M_c)$; a rule of thumb for estimating the actual size of the region of convergence has also been given with Eq. (20).
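The hyperbolic branch can be checked numerically in the same spirit: solve $M = e \sinh E - E$ by Newton iteration and compare Eq. (7) with $\lambda = -1$, $C = \cosh E$ against a finite difference. This is an illustrative sketch; the solver, starter value, and test values are my own choices:

```python
import math

def kepler_H(e, M, tol=1e-13):
    """Solve the hyperbolic Kepler equation M = e*sinh(E) - E by Newton iteration."""
    E = math.asinh(M / e)  # crude but adequate starter for moderate M
    for _ in range(100):
        dE = (e * math.sinh(E) - E - M) / (e * math.cosh(E) - 1.0)
        E -= dE
        if abs(dE) < tol:
            break
    return E

e, M = 1.5, 2.0
E = kepler_H(e, M)

# Eq. (7) with lambda = -1 and C = cosh E: dg/dM = -1/(1 - e*cosh E)
dEdM = -1.0 / (1.0 - e * math.cosh(E))

h = 1e-6
fd = (kepler_H(e, M + h) - kepler_H(e, M - h)) / (2 * h)
print(abs(dEdM - fd))
```

Note that for $e > 1$ one has $eC > 1$, so the denominator $1 - eC$ is always negative and $\partial g/\partial M$ remains positive, as it must.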
Three explicit examples of such series have then been provided, two for the elliptic and one for the hyperbolic KE. Each of them, for a fixed base point, turns out to converge in large parts of the $(e, M)$ plane. For $(e, M)$ close to $(e_c, M_c)$ within a range $\Delta e \sim \Delta M / \pi = O(10^{-3})$, the polynomial obtained by truncating the infinite series up to the fifth degree reaches an accuracy at the level of machine double precision. Farther away from $(e_c, M_c)$, but still within the region of convergence, higher order terms should be introduced to maintain such an accuracy.
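Since the series converge only locally, one practical use is to precompute truncated expansions at a grid of base points and evaluate the nearest one at run time. A minimal 1-D sketch at fixed $e$, sliding the base point along $M$ (the grid spacing, degree, and reference solver are illustrative assumptions, not the algorithm of Refs. [8,9]):

```python
import math

def kepler_E(e, M, tol=1e-14):
    """Reference Newton solver, used here only to precompute the patch centres."""
    E = M
    for _ in range(60):
        dE = (E - e * math.sin(E) - M) / (1.0 - e * math.cos(E))
        E -= dE
        if abs(dE) < tol:
            break
    return E

e = 0.3          # fixed eccentricity for this 1-D illustration
step = 0.05      # spacing of the base points M_c (an arbitrary choice)

# Precompute degree-2 local expansions E ~ E_c + g_M*d + g_MM*d^2/2 at each M_c,
# with g_M and g_MM from Eqs. (7) and (11) for the elliptic branch (lambda = 1).
patches = []
M_c = 0.0
while M_c <= math.pi:
    E_c = kepler_E(e, M_c)
    S, C = math.sin(E_c), math.cos(E_c)
    g_M = 1.0 / (1.0 - e * C)
    g_MM = -e * S / (1.0 - e * C) ** 3
    patches.append((M_c, E_c, g_M, g_MM))
    M_c += step

def eval_patch(M):
    """Evaluate the nearest precomputed expansion; no iteration at run time."""
    i = min(int(round(M / step)), len(patches) - 1)
    M_c, E_c, g_M, g_MM = patches[i]
    d = M - M_c
    return E_c + g_M * d + 0.5 * g_MM * d * d

M = 1.234
err = abs(eval_patch(M) - kepler_E(e, M))
print(err)
```

A 2-D version of this idea, with bivariate polynomials centred on a grid of $(e_c, M_c)$, is precisely the spline construction discussed below.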
Since these new solutions converge locally around $(e_c, M_c)$, a suitable set of them, centred around different $(e_c, M_c)$ and truncated up to a certain degree, may be used to design an algorithm for the numerical computation of the function $E(e, M)$ for every value of $(e, M)$. The resulting polynomials would form a 2-D spline, generalising the 1-D spline that has been proposed in Refs. [8,9] for solving the KE for every $M$ when $e$ is fixed.

Acknowledgements. … e Formacion Profesional, and FIS2017-83762-P from Ministerio de Economia, Industria y Competitividad, Spain.