Analytical approximations for the inverse Langevin function via linearization, error approximation, and iteration

This paper details an analytical framework, based on an intermediate function, which facilitates analytical approximations for the inverse Langevin function, a function without an explicit analytical form. The approximations have relative error bounds that are typically much lower than those reported in the literature and which can be made arbitrarily small. Results include convergent series expansions, in terms of polynomials and sinusoids, which have modest relative error bounds and convergence properties but are convergent over the domain of the inverse Langevin function. An important advance is to use error approximations, and then iterative relationships, which allow simple initial approximations for the inverse Langevin function, with modest relative errors, to generate approximations with arbitrarily low relative errors. One example is that of an initial approximating function, with a relative error bound of 0.00969, which yields relative error bounds of 2.77 × 10⁻⁶ and 2.66 × 10⁻¹⁶ after the use of first-order error approximation and then first-order iteration. Functions with much lower error bounds are possible and are detailed. First- and second-order Taylor series can be used to simplify the error- and iteration-based approximations.


Introduction
The Langevin function arises in diverse contexts including the classical model of paramagnetism (Langevin 1905) and the ideal freely jointed chain model (e.g., Fiasconaro and Falo 2018). In the ideal freely jointed chain model, as illustrated in Fig. 1, a chain comprises n identical rigid segments and, in an equilibrium energy environment, each segment moves independently with an angle consistent with a uniform distribution on the interval (−π, π]. With an applied point force, the expected chain extension, normalized, is given by the Langevin function (e.g., Iliafar et al. 2013), which is defined according to

y = L(x) = coth(x) − 1/x,

and whose graph is shown in Fig. 2 for the case of x > 0. In this equation, the non-descriptive variables x (representing the normalized force for the freely jointed chain) and y (representing the normalized chain extension) are used. The inverse Langevin function, denoted L⁻¹, is shown in Fig. 2 for the case of y > 0, and is of interest as, for the freely jointed chain case, it defines the normalized force required for a given expected normalized chain extension. The freely jointed chain model, and chain-based network models, have applications, for example, in rubber elasticity (e.g., Ehret 2015), the micromodelling of polymers (e.g., Arruda and Boyce 1993; Boyce and Arruda 2000; Hossain and Steinmann 2012), and the biophysics of macromolecules (e.g., Holzapfel 2005). As the Langevin function and the inverse Langevin function are antisymmetric, it is sufficient to consider these functions, respectively, over the intervals [0, ∞) and [0, 1). As is evident in Fig. 2, the inverse Langevin function has a singularity at the end point of the interval, which complicates analysis, and an explicit analytical expression for this function is an unsolved problem.
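The definition above, and the numerical results quoted throughout, rely on L(x) = coth(x) − 1/x and a numerically evaluated inverse. A minimal reference implementation is sketched below; the bisection-based inverse is an assumption of this sketch (the paper itself uses Mathematica), not the authors' code.

```python
import math

def langevin(x: float) -> float:
    """Langevin function L(x) = coth(x) - 1/x; the x -> 0 limit is
    handled via the Taylor series L(x) = x/3 - x^3/45 + ..."""
    if abs(x) < 1e-6:
        return x / 3.0 - x**3 / 45.0
    return 1.0 / math.tanh(x) - 1.0 / x

def inv_langevin_numeric(y: float, tol: float = 1e-14) -> float:
    """Reference (non-analytical) inverse for y in [0, 1), via bisection."""
    lo, hi = 0.0, 1.0
    while langevin(hi) < y:   # expand the bracket; L(x) -> 1 as x -> infinity
        hi *= 2.0
    while hi - lo > tol * max(1.0, hi):
        mid = 0.5 * (lo + hi)
        if langevin(mid) < y:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

The singularity at y = 1 shows up in the bracket-expansion loop: L⁻¹(0.999) is of the order of 10³, consistent with the asymptote L⁻¹(y) ≈ 1/(1 − y).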
Finding an approximate analytical expression has received attention over a considerable period of time, e.g., Kuhn and Grün (1942), Treloar (1954, 1975), and Cohen (1991), and improved approximations have been detailed in recent years: Itskov et al. (2012), Nguessong et al. (2014), Darabi and Itskov (2015), Jedynak (2015), Kröger (2015), Marchi and Arruda (2015), Rickaby and Scott (2015), Petrosyan (2017), Jedynak (2018), and Marchi and Arruda (2019). These papers provide overviews of the approaches used, which include the use of Taylor series, several custom series, and the dominant approach of Padé approximants, including the use of minimax optimization techniques. The Cohen approximation (Cohen 1991) has a maximum relative error of 0.0494, and improved approximations with maximum relative errors approaching 10⁻⁴ (Marchi and Arruda 2015; Jedynak 2018) have been reported, with the recent work of Marchi and Arruda (2019) detailing approximations with maximum relative errors less than 10⁻⁶. Benitez and Montáns (2018) utilized discretization and interpolation via cubic splines to obtain highly accurate numerical results; a maximum relative error of the order of 10⁻¹¹ can be achieved with the use of 10,000 points.

This paper details approaches for finding analytical approximations to the inverse Langevin function that can be made arbitrarily accurate. The approach taken is distinctly different from those reported in the literature and is based on determining an intermediate function that linearizes the Langevin function. This approach was initially motivated by the potential of higher-order spline functions for function approximation (Howard 2019). The use of an intermediate function facilitates, first, the definition of convergent series for the inverse Langevin function over its complete domain, an unsolved problem. Second, the use of an intermediate function facilitates error approximation and functional iteration.
When combined, these lead to approximations for the inverse Langevin function which can be made arbitrarily accurate over its complete domain. For example, a first-order iteration based on a simple approximating function has relative error bounds of better than 6 × 10⁻⁸, 3 × 10⁻¹⁶, and 3 × 10⁻³⁰ depending on the order of error approximation used. Higher-order error approximation and/or higher levels of iteration lead to significantly lower relative error bounds. The error-based approximations can be simplified by using first- and second-order Taylor series.
The structure of the paper is illustrated in Fig. 3. Section "Approximation via an intermediate function" details the theory underpinning the use of an intermediate function for finding approximations to the inverse Langevin function. This section includes a direct linearization approach which results in a non-convergent series. Section "Convergent series for inverse Langevin function" details convergent series for the inverse Langevin function which are based on utilizing suitable basis sets to approximate the error arising from linearization. Section "Improved approximation via error approximation" details how error approximation, of different orders, can be used to yield improved approximations. Section "Improved approximation via iteration" details approximations, with lower relative error bounds, that arise from function iteration. Associated results are detailed in Section "Results: Error approximation and function iteration". Section "Simplified approximation via Taylor series" details how the error-based approximations can be simplified by using first- and second-order Taylor series. Conclusions are provided in Section "Conclusion".

Notation
For a function f defined over the interval [α, β], an approximating function f_A has a relative error, at a point x₁, defined according to

re(x₁) = [f_A(x₁) − f(x₁)] / f(x₁).

The relative error bound for the approximating function over the interval [α, β] is defined according to

re_B = max{ |re(x₁)| : x₁ ∈ [α, β] }.

The notation x ∈ {0⁺, 1⁻} is used with the meaning, respectively, of the limits as x approaches zero from above and one from below. The notation f^(k)(x) = dᵏf(x)/dxᵏ is used in some instances. Mathematica has been used to facilitate analysis and to obtain numerical results. In general, relative error results for approximations to the inverse Langevin function have been obtained by sampling the interval (0, 1) with a resolution of 0.001.

Fig. 1 Model of a freely jointed chain with n segments. The left end is anchored whilst the free end is subject to a force F
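The stated relative error bound can be computed exactly as described, by sampling (0, 1) at a resolution of 0.001. The sketch below applies this to the Cohen (1991) approximation, whose bound of about 0.0494 is quoted in the introduction; the bisection-based numeric inverse is an implementation assumption of this sketch.

```python
import math

def langevin(x):
    return x / 3.0 - x**3 / 45.0 if abs(x) < 1e-6 else 1.0 / math.tanh(x) - 1.0 / x

def inv_langevin(y, tol=1e-13):
    # reference inverse via bisection, valid for y in [0, 1)
    lo, hi = 0.0, 1.0
    while langevin(hi) < y:
        hi *= 2.0
    while hi - lo > tol * max(1.0, hi):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if langevin(mid) < y else (lo, mid)
    return 0.5 * (lo + hi)

def cohen(y):
    # Cohen (1991): L^{-1}(y) ~ y (3 - y^2) / (1 - y^2)
    return y * (3.0 - y**2) / (1.0 - y**2)

def relative_error_bound(f_approx, n=999):
    """max |(f_A - L^{-1}) / L^{-1}| over y = 0.001, 0.002, ..., 0.999."""
    return max(abs((f_approx(k / 1000.0) - inv_langevin(k / 1000.0))
                   / inv_langevin(k / 1000.0)) for k in range(1, n + 1))

cohen_bound = relative_error_bound(cohen)   # approximately 0.0494
```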

Approximation via an intermediate function
The Langevin function is defined over the interval [0, ∞) and the determination of an approximation to the inverse of this function is facilitated if a monotonically increasing intermediate function, denoted f, can be defined which, as illustrated in Fig. 4, changes from zero to infinity as its argument changes from zero to one and creates an approximately linear function L[f(x₁)] with respect to x₁.

Theorem 1 Approximation via an intermediate function
Consider the case of a monotonically increasing intermediate function f which, as illustrated in Fig. 4, is such that

L[f(x₁)] = x₁ + ε(x₁), x₁ ∈ [0, 1),

where L[f(x₁)] is close to being linear with a slope close to unity for x₁ ∈ [0, 1). When the linearity is such that

|ε(x₁)| ≪ x₁,

then the inverse Langevin function can be approximated according to

L⁻¹(y) ≈ f(y), y ∈ [0, 1).

As the inverse Langevin function is antisymmetric, L⁻¹(−y) = −L⁻¹(y) ≈ −f(y).

Proof The relationship y = L[f(x₁)] = x₁ + ε(x₁) implies x₁ = y − ε(x₁) and L⁻¹(y) = f(x₁) = f[y − ε(x₁)]. With the assumption of |ε(x₁)| ≪ x₁, it follows that y ≈ x₁ and the required relationship then follows. The alternative relationships follow in a similar manner with the first-order approximation being made.
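Theorem 1 can be checked numerically. The paper's intermediate functions (Eqs. 15 to 19) are developed later, so the Cohen form is used here purely as a stand-in candidate f: the transformed function L[f(x₁)] should stay close to the identity, so that f(y) itself approximates L⁻¹(y).

```python
import math

def langevin(x):
    return x / 3.0 - x**3 / 45.0 if abs(x) < 1e-6 else 1.0 / math.tanh(x) - 1.0 / x

def f(x):
    # Cohen-style candidate intermediate function (an illustrative choice,
    # not one of the paper's Eqs. 15 to 19)
    return x * (3.0 - x**2) / (1.0 - x**2)

# the model L[f(x1)] = x1 + eps(x1) requires |eps(x1)| << x1;
# sample the ratio |eps(x1)| / x1 over (0, 1)
eps_ratio = max(abs(langevin(f(k / 1000.0)) - k / 1000.0) / (k / 1000.0)
                for k in range(1, 1000))
```

For this candidate the ratio stays below a few percent over the whole interval, which is the sense in which f "effectively linearizes" the Langevin function.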

Implication
These results imply that, to find approximations to the inverse Langevin function, it is sufficient to find functions f which effectively linearize the Langevin function such that Eq. 4 holds. For the case where f(x₁) changes monotonically between zero and infinity as x₁ changes between zero and one, the following are valid approximations for the transformed Langevin function L[f(x₁)] for, respectively, the right neighbourhood of the point zero and the left neighbourhood of the point one:

L[f(x₁)] ≈ f(x₁)/3, x₁ → 0⁺, and L[f(x₁)] ≈ 1 − 1/f(x₁), x₁ → 1⁻.

These approximations follow from L(u) = u/3 − u³/45 + ⋯ for small u, and from coth(u) approaching one exponentially fast, so that L(u) ≈ 1 − 1/u, for large u.

Determining intermediate function
Proof The proof is detailed in Appendix 1.

Transformed Langevin function with unity rate of change at interval end points
The goal is for the transformed function L[f(x₁)] to be linear with unity slope over the interval [0, 1). A starting requirement is for L[f(x₁)] to have unity slope at x₁ ∈ {0⁺, 1⁻}.

Theorem 3 Transformed Langevin function with unity rate of change
For the case where f(x₁) changes monotonically between zero and infinity as x₁ changes between zero and one, and in a manner such that the approximations stated in Theorem 2 are valid, it follows that L[f(x₁)] has unity slope at x₁ = 0⁺, and at x₁ = 1⁻, when

f^(1)(x₁) = 3 for x₁ = 0⁺, and f^(1)(x₁)/f(x₁)² = 1 for x₁ = 1⁻.

Proof
The proof is detailed in Appendix 2.
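The two unity-slope conditions (f^(1) = 3 at 0⁺, from L(u) ≈ u/3, and f^(1)/f² = 1 at 1⁻, from L(u) ≈ 1 − 1/u) can be verified numerically for a candidate intermediate function. The Cohen form is again used purely as an illustration, and the slope of L[f(x₁)] is estimated by central differences.

```python
import math

def langevin(x):
    return x / 3.0 - x**3 / 45.0 if abs(x) < 1e-6 else 1.0 / math.tanh(x) - 1.0 / x

def f(x):
    # illustrative candidate: f'(0) = 3 and f(x) ~ 1/(1 - x) near x = 1
    return x * (3.0 - x**2) / (1.0 - x**2)

def g(x):
    return langevin(f(x))                     # transformed Langevin function

def slope(x, h=1e-5):
    return (g(x + h) - g(x - h)) / (2.0 * h)  # central-difference derivative

slope_at_zero = slope(0.001)   # should be close to unity
slope_at_one = slope(0.999)    # should also be close to unity
```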

Notes
These constraints allow the coefficients in a valid function form for f(x₁) to be simultaneously solved (see Kröger 2015 for the Padé approximant case). Petrosyan (2017) solves for a valid function approximation by combining two function forms which satisfy, separately, the two asymptotic constraints. The result that an approximation for the inverse Langevin function should have a derivative of 3 at the origin is well known, widely used (e.g., Darabi and Itskov 2015), and consistent with a Taylor series expansion for this function, e.g., Itskov et al. (2012).

Low-order forms for intermediate function
There are many choices for a function that changes from zero to infinity as its argument changes from zero to one; however, an arbitrary such form does not, in general, satisfy the constraints specified in Theorem 3. The polynomial form defined by Eq. 14 has potential, with the first-, second-, and third-order expressions being solved consistent with the constraints specified in Theorem 3. Examples of second- and third-order functions that are close to optimum, in the sense of minimizing the magnitude of the maximum relative error over the interval [0, 1) in the approximation L⁻¹(y) ≈ f(y) for the inverse Langevin function, are given by Eqs. 18 and 19. These two functions lead to maximum relative error magnitudes in the approximation L⁻¹(y) ≈ f(y), respectively, of 0.00969 and 0.00583.

Higher-order forms for intermediate function
The functions f defined by Eqs. 15, 18, and 19 have the required properties for the intermediate function, namely, being monotonically increasing between zero and infinity as the argument changes between zero and one and being such that L[f(x₁)] has a unity rate of change as the points of zero and one are approached. These functions are low-order approximations and higher-order approximations are possible as stated in the following theorem:

Theorem 4 Higher-order forms for intermediate function

The coefficients in the general form for the intermediate function f, defined by Eq. 14, can be solved based on imposing the constraints of zero rate of change for higher-order derivatives at the points zero and one, according to Eq. 20, to yield, potentially, increasingly linear forms for L[f(x₁)]. The results for first-, second-, and sixth-order approximations are given, respectively, by Eqs. 21 to 23. The third-, fourth-, fifth-, seventh-, and eighth-order approximations are detailed in Appendix 3.

Proof
Mathematica was used to solve for the coefficients by using the approximations stated in Theorem 2.

Results
The linearity of the function L[f(x₁)] is illustrated in Fig. 5 for the first-order case as specified by Eq. 21. Higher orders closely approximate the straight line of unity slope. The relative errors in the various orders of approximation for the inverse Langevin function, based on L⁻¹(y) ≈ f(y), where f is specified by Eqs. 21 to 23 and Eqs. 122 to 126, are shown in Fig. 6. The relative error bounds, respectively, are 0.13, 0.0264, 0.0137, 0.0106, 6.06 × 10⁻³, 2.61 × 10⁻³, 3.15 × 10⁻³, and 6.18 × 10⁻³ for the first- to eighth-order approximations. These results indicate that the series does not converge, with the sixth-order approximation yielding the smallest relative error bound of 2.61 × 10⁻³.

Notes
The polynomial-based approximations, as specified by Eqs. 21 to 23 and Eqs. 122 to 126, and resulting from the linearization constraints specified by Eq. 20, show modest convergence and then divergence with respect to providing an approximation L⁻¹(y) ≈ f(y) for the inverse Langevin function. The relative error bound of 2.61 × 10⁻³ for the optimum sixth-order series, as specified by Eq. 23, is between that of the inverse Langevin approximation proposed by Kröger (2015), with a bound of 2.8 × 10⁻³, and the non-linear function proposed by Petrosyan (2017), with a bound of 1.8 × 10⁻³. Optimized Padé approximants, e.g., Marchi and Arruda (2019), yield better convergence, with a [4/4] expression having a relative error bound of 1.8 × 10⁻⁴.
Simulation results indicate that the second-, third-, and fourth-order approximations, as specified by Eqs. 22, 122, and 123, represent lower bounds for the inverse Langevin function over the interval [0, 1).
A convergent series for the inverse Langevin function has been an unsolved problem. The use of an intermediate function allows convergent series to be defined for the inverse Langevin function, and this is detailed in the following section.

Convergent series for inverse Langevin function
Consider the approximation for the inverse Langevin function L⁻¹(y) ≈ f(y) and an associated relative error function ε(y) = L⁻¹(y)/f(y) − 1, where the approximation is valid when |ε(y)| ≪ 1. When the approximation L⁻¹(y) ≈ f(y) is good over the domain [0, 1), the associated relative error is well defined and can be approximated arbitrarily accurately using a suitable basis set. This is the basis for a convergent series for the inverse Langevin function, as detailed in the following theorem.

Theorem 5 Convergent series for inverse Langevin function
Consider an intermediate function f which leads to the error function ε₁(y) = L⁻¹(y)/f(y) − 1 being a smooth, bounded, integrable function on the interval [0, 1). For this case, an orthonormal basis set {b₀, b₁, …} for the interval [0, 1] leads to the convergent series

L⁻¹(y) = f(y)[1 + Σᵢ₌₀^∞ cᵢ bᵢ(y)],

where bᵢ, i ∈ {0, 1, 2, …}, is the ith orthonormal basis function and the ith coefficient, cᵢ, is defined according to

cᵢ = ∫₀¹ ε₁(y) bᵢ*(y) dy.

Here bᵢ* is the conjugate of bᵢ.

Proof Convergence is guaranteed because ε₁(y) has been assumed to be a smooth, integrable function and the set of functions bᵢ, i ∈ {0, 1, 2, …}, has been assumed to be an orthonormal basis set for the interval [0, 1]. A general reference is Debnath and Mikusinski (1999).
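Theorem 5 can be illustrated with a small numerical experiment: project the error function ε₁(y) = L⁻¹(y)/f(y) − 1 onto a low-order Legendre basis and reconstruct L⁻¹(y) ≈ f(y)[1 + Σ cᵢ bᵢ(y)]. The Cohen form stands in for the paper's intermediate functions, and NumPy's least-squares Legendre fit is used in place of the exact inner-product coefficients; both are assumptions of this sketch.

```python
import math
import numpy as np
from numpy.polynomial import Legendre

def langevin(x):
    return x / 3.0 - x**3 / 45.0 if abs(x) < 1e-6 else 1.0 / math.tanh(x) - 1.0 / x

def inv_langevin(y, tol=1e-13):
    lo, hi = 0.0, 1.0
    while langevin(hi) < y:
        hi *= 2.0
    while hi - lo > tol * max(1.0, hi):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if langevin(mid) < y else (lo, mid)
    return 0.5 * (lo + hi)

def f(x):
    # stand-in intermediate function (Cohen form)
    return x * (3.0 - x**2) / (1.0 - x**2)

ys = np.linspace(0.001, 0.999, 999)
exact = np.array([inv_langevin(y) for y in ys])
eps1 = exact / f(ys) - 1.0                 # error function on (0, 1)

series = Legendre.fit(ys, eps1, deg=10)    # 10th-order Legendre expansion
improved = f(ys) * (1.0 + series(ys))      # L^{-1}(y) ~ f(y)[1 + sum c_i b_i(y)]

base_bound = np.max(np.abs(f(ys) / exact - 1.0))
series_bound = np.max(np.abs(improved / exact - 1.0))
```

The series-based bound improves on the base approximation, consistent with the modest (but guaranteed) convergence discussed in the notes below.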

Suitable basis sets and results
Suitable basis sets for the interval [0,1] include the Legendre and the standard sinusoidal basis set and these are detailed in Appendix 4.
To illustrate the potential for a convergent series, consider the function f defined by Eq. 18. The associated error function ε₁ is shown in Fig. 7.

Notes
The rate of convergence with both the Legendre and sinusoidal basis sets is modest. In general, a starting approximation for the inverse Langevin function, with a lower relative error bound, is likely to have a more oscillatory error function that needs to be approximated, and this leads to more terms in the series approximation before a better approximation for the inverse Langevin function arises. For example, a tenth-order Legendre series approximation based on the initial approximating function given by Kröger (2015) leads to an improved approximation with a relative error bound of 1.2 × 10⁻⁴. This relative error bound is close to that achieved by the approximation specified by Eq. 31, which is based on an initial starting approximation with a higher relative error. The coefficients in the convergent series for the inverse Langevin function can be computed with arbitrarily high accuracy. Importantly, the series converges over the domain [0, 1). This is in contrast with a Taylor series, e.g., Itskov et al. (2012), where convergence breaks down around 0.904. Dargazany (2013) details an algorithm that facilitates the efficient evaluation of the higher-order derivatives involved in a Taylor series approximation.

Fig. 8 Relative error in approximations for L⁻¹(y) based on the second-, fourth-, sixth-, eighth-, and tenth-order Legendre basis set approximations for the error function ε₁(y)

Fig. 9 Relative error in approximations for L⁻¹(y) based on the second-, fourth-, sixth-, eighth-, and tenth-order sinusoidal basis set approximations for the error function ε₁(y)
The following section details approaches with significantly lower relative error bounds.

Improved approximation via error approximation
Consider the result, consistent with Theorem 1, for an intermediate function f which is such that L[f(x₁)] = x₁ + ε(x₁) is close to being linear: L⁻¹(y) = f[y − ε(x₁)]. If an approximation to the error function ε can be made, then an improved estimate for the inverse Langevin function results. Specifically, for y fixed, an approximation to the error ε at the point x₁, which is in terms of y, is required. Consider the general case, and the illustration shown in Fig. 10, of a function q which is close to being linear with a slope close to unity such that q(x₁) = x₁ + ε(x₁) is an appropriate model. The goal is to find approximations to the error ε(x₁) = q(x₁) − x₁, at the point x₁, which are in terms of q(y₁) and y₁, where y₁ = q(x₁), x₁ = q⁻¹(y₁).
Theorem 6 Zero-, first-, and second-order error approximations

For a function q, as illustrated in Fig. 10, which is such that the model q(x₁) = x₁ + ε(x₁), |ε(x₁)| ≪ x₁, is valid, approximations for x₁ = q⁻¹(y₁) and ε(x₁) = q(x₁) − x₁ are, first, for a zero-order error approximation:

ε(x₁) ≈ q(y₁) − y₁, x₁ ≈ 2y₁ − q(y₁).

Second, for a first-order error approximation:

ε(x₁) ≈ [q(y₁) − y₁]/q^(1)(y₁), x₁ ≈ y₁ − [q(y₁) − y₁]/q^(1)(y₁).

Third, for a second-order error approximation, the corresponding expressions involve q^(2) and assume, in addition, that the associated second-order terms are small. The proofs for these approximations are detailed in Appendix 5.
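The zero- and first-order recovery rules of Theorem 6 can be exercised on any near-identity map. Below, a hypothetical q(x) = x + 0.01 sin(πx) (an assumed example, not from the paper) plays the role of the approximately linear function: the zero-order estimate is x₁ ≈ 2y₁ − q(y₁), and the first-order estimate divides the residual by q^(1)(y₁).

```python
import math

def q(x):
    # hypothetical near-identity function: q(x) = x + eps(x), |eps(x)| << x
    return x + 0.01 * math.sin(math.pi * x)

def q1(x):
    return 1.0 + 0.01 * math.pi * math.cos(math.pi * x)   # q'(x)

x1 = 0.4
y1 = q(x1)                          # known value; recover x1 = q^{-1}(y1)

x_zero = y1                         # "no correction" baseline
x_zero_order = 2.0 * y1 - q(y1)     # eps(x1) ~ q(y1) - y1
x_first_order = y1 - (q(y1) - y1) / q1(y1)   # eps(x1) ~ [q(y1) - y1]/q'(y1)
```

Each successive order removes most of the remaining error, which is the mechanism the following subsections exploit with q(x₁) = L[f(x₁)].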

Utilizing zero-, first-, and second-order error approximations
For the case where L[f(x₁)] is approximately linear according to L[f(x₁)] = x₁ + ε(x₁), it is the case, according to Theorem 1, that L⁻¹(y) = f[y − ε(x₁)]. The approximations for ε(x₁), detailed in Theorem 6, lead to the following approximations, with improved accuracy, for the inverse Langevin function:

Theorem 7 Zero-, first-, and second-order error-based approximations

Consider an initial approximation to the inverse Langevin function of L⁻¹(y) ≈ f(y) which is based on an intermediate function f that linearizes the Langevin function such that the model L[f(x₁)] = x₁ + ε(x₁) is valid with |ε(x₁)| ≪ x₁. Approximations with a lower relative error bound are defined in terms of the zero-, first-, and second-order error approximations specified by Eqs. 42 to 44. The second-order error function can be approximated, leading to Eq. 45. The auxiliary functions appearing in these expressions are defined by Eqs. 46 and 47.

Proof Using the result, consistent with Theorem 1, of L⁻¹(y) = f[y − ε(x₁)], these results follow from the approximations for ε(x₁) specified in Theorem 6.

Fig. 10 Illustration of relationships for a function q which is approximately linear such that the model q(x₁) = x₁ + ε(x₁), |ε(x₁)| ≪ x₁, is valid

Consider the relatively simple intermediate function f, defined by Eq. 18, and the approximation L⁻¹(y) ≈ f(y), which has a modest relative error bound of 9.69 × 10⁻³. The magnitudes of the relative errors in approximations to the inverse Langevin function, based on this function, and for the cases of zero-, first-, and second-order error approximations, as specified by Theorem 7, are shown in Fig. 11. The relative error bounds for these cases are detailed in Table 1, along with those for a function, with a lower relative error bound, that is specified by Eq. 34.
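Since Eq. 18 is not reproduced here, the sketch below applies the same zero-order construction, f₀(y) = f(y − ε₀(y)) with ε₀(y) = L[f(y)] − y, to the Cohen form instead; the sharp drop in the relative error bound illustrates the improvement this section describes, not the specific Table 1 figures.

```python
import math

def langevin(x):
    return x / 3.0 - x**3 / 45.0 if abs(x) < 1e-6 else 1.0 / math.tanh(x) - 1.0 / x

def inv_langevin(y, tol=1e-13):
    lo, hi = 0.0, 1.0
    while langevin(hi) < y:
        hi *= 2.0
    while hi - lo > tol * max(1.0, hi):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if langevin(mid) < y else (lo, mid)
    return 0.5 * (lo + hi)

def f(y):
    return y * (3.0 - y**2) / (1.0 - y**2)    # base approximation (Cohen form)

def f0(y):
    # zero-order error approximation: eps0(y) = L[f(y)] - y, so that
    # f0(y) = f(y - eps0(y)) = f(2y - L[f(y)])
    return f(2.0 * y - langevin(f(y)))

def bound(approx):
    return max(abs(approx(k / 1000.0) / inv_langevin(k / 1000.0) - 1.0)
               for k in range(1, 1000))

base_bound, improved_bound = bound(f), bound(f0)
```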

Notes
These results show the significant improvement obtained by utilizing accurate approximations for the error function. The approximation to the second-order error function, as defined by Eq. 45, yields results that are consistent with the precise form as specified by Eq. 44. For example, for the function f defined by Eq. 18, the approximation specified by Eq. 44 yields a relative error bound of 1.61 × 10⁻⁸ while the approximation specified by Eq. 45 yields a slightly better error bound of 1.53 × 10⁻⁸.

Fig. 11 Magnitude of the relative errors in the approximations for L⁻¹(y) based on the function defined in Eq. 18, and for the cases of zero-, first-, and second-order error approximations as specified by Theorem 7

Higher-order error approximation
Given the significant improvement in approximations for the inverse Langevin function that can be obtained by zero-, first-, and second-order error approximations, it is of interest whether higher-order approximations for the error function can be specified. Such approximations are detailed in the following theorem:

Theorem 8 Higher-order approximations for error function

Consider an initial approximation to the inverse Langevin function of L⁻¹(y) ≈ f(y). Improved approximations, with a lower maximum relative error bound, are defined by L⁻¹(y) ≈ f₀(y) where f₀(y) = f[y − ε_{A,k}(y)] and ε_{A,k}, k ∈ {1, 2, …}, is a kth-order approximation for the error function. First-, second-, and third-order approximations are defined explicitly, and higher-order approximations can be specified by iteration.

Proof The proofs for these results are detailed in Appendix 6.

Results
Results, for the functions f defined by Eqs. 18 and 34, are tabulated in Table 2 and, as expected, show the significant improvement in accuracy with higher-order error approximations, and the improved accuracy obtained by starting with an initial function with a lower relative error bound. The usual trade-off of complexity for accuracy applies.
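As a concrete instance of the first-order error approximation with q(x₁) = L[f(x₁)], one Newton-style correction divides the residual by the slope of the transformed function: ε(y) ≈ [L(f(y)) − y] / {L^(1)(f(y)) f^(1)(y)}, before composing with f. The Cohen form stands in for the paper's base functions, so this is a sketch of the construction rather than the paper's exact expressions.

```python
import math

def langevin(x):
    return x / 3.0 - x**3 / 45.0 if abs(x) < 1e-6 else 1.0 / math.tanh(x) - 1.0 / x

def langevin_d1(x):
    # L'(x) = 1/x^2 - 1/sinh(x)^2; for large x the sinh term is negligible
    if x > 20.0:
        return 1.0 / x**2
    if abs(x) < 1e-6:
        return 1.0 / 3.0
    return 1.0 / x**2 - 1.0 / math.sinh(x) ** 2

def inv_langevin(y, tol=1e-13):
    lo, hi = 0.0, 1.0
    while langevin(hi) < y:
        hi *= 2.0
    while hi - lo > tol * max(1.0, hi):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if langevin(mid) < y else (lo, mid)
    return 0.5 * (lo + hi)

def f(y):
    return y * (3.0 - y**2) / (1.0 - y**2)

def f_d1(y):
    # analytic derivative of the Cohen form
    return (3.0 * (1.0 - y**2) ** 2 + 2.0 * y**2 * (3.0 - y**2)) / (1.0 - y**2) ** 2

def f_first_order(y):
    eps = (langevin(f(y)) - y) / (langevin_d1(f(y)) * f_d1(y))
    return f(y - eps)

bound_first = max(abs(f_first_order(k / 1000.0) / inv_langevin(k / 1000.0) - 1.0)
                  for k in range(1, 1000))
```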

Proof
These results follow directly from Theorem 7 and Theorem 8.

Approximation via second-order iteration
Consider any of the approximations to the inverse Langevin function as specified by L⁻¹(y) ≈ f₁(y), where f₁ has one of the forms specified in Theorem 9. Iteration by using the results stated in Theorem 7 or Theorem 8 leads to improved approximations for the inverse Langevin function:

Theorem 10 Inverse Langevin approximation: second-order iteration

Each of the approximations for the inverse Langevin function, as specified in Theorem 9, can be used as a basis for an improved approximation according to

L⁻¹(y) ≈ f₂(y) = f₁[y − ε(y)],

where ε(y) has one of the forms specified by Theorem 7 or Theorem 8, defined with respect to f₁. Examples include the iterated forms based on the zero-, first-, and second-order error approximations.

Proof These results follow directly from Theorem 7 and Theorem 9.

Approximation via higher-order iteration
By using the same order of error approximation at each iteration, general expressions for approximations to the inverse Langevin function, based on iteration, can be specified:

Theorem 11 Iteration with set order of error approximation

Direct iteration, with a set error approximation type at each stage, leads to general results for approximations of arbitrary iteration order. First-order iteration forms, for zero- and first-order error approximations, are specified by Eqs. 57 and 58, respectively. Second-order iteration forms for a zero-order error approximation are specified by Eq. 63, with analogous forms applying for a first-order error approximation. Higher-order iteration forms follow in a consistent manner.

Proof
The iteration results follow directly from Theorem 7, Theorem 9, and Theorem 10. The proofs for the specific first- and second-order iteration forms are detailed in Appendix 7.
Evaluation of some of the expressions in these equations is facilitated by the use of Eqs. 46 and 47.

Alternative forms for iterative approximations

The following theorem details two alternative forms for iterative approximations to the inverse Langevin function. These functions illustrate the complex nature of iterative approximate expressions for the inverse Langevin function.
Theorem 12 Alternative forms for iterative approximations

First, consider the function f₁ defined by a first-order iteration, as specified by Eq. 57, where iteration is based on a zero-order error approximation (Eq. 42). Second, consider the function f₂ defined by a second-order iteration, as specified by Eq. 63, where iteration is based on a zero-order error approximation (Eq. 42). Expanded expressions for both functions can be stated explicitly.

Proof The proofs for these results are detailed in Appendix 8.

Lower error bounds via iterative error approximation
Much lower relative error bounds are possible when error approximation and iteration are utilized. Indicative results are detailed in Table 3 for the case of the function specified by Eq. 80, which has a relative error bound of 9.69 × 10⁻³. Lower relative error bounds arise by using an initial function approximation with a lower error bound. For example, the function specified by Nguessong (Eq. 85), which has a relative error bound of 7.2 × 10⁻⁴, yields the results specified in Table 4.
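The iterated improvement that Tables 3 and 4 quantify can be sketched as repeated application of the same correction step. Since the paper's Eq. 80 base function is not reproduced here, the Cohen form is used as the base f, with a zero-order error approximation fixed at each stage.

```python
import math

def langevin(x):
    return x / 3.0 - x**3 / 45.0 if abs(x) < 1e-6 else 1.0 / math.tanh(x) - 1.0 / x

def inv_langevin(y, tol=1e-13):
    lo, hi = 0.0, 1.0
    while langevin(hi) < y:
        hi *= 2.0
    while hi - lo > tol * max(1.0, hi):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if langevin(mid) < y else (lo, mid)
    return 0.5 * (lo + hi)

def f(y):
    return y * (3.0 - y**2) / (1.0 - y**2)    # stand-in base function

def iterate_zero_order(g):
    # one iteration step: g_next(y) = g(y - eps(y)) with eps(y) = L[g(y)] - y,
    # i.e. g_next(y) = g(2y - L[g(y)])
    return lambda y: g(2.0 * y - langevin(g(y)))

def bound(approx):
    return max(abs(approx(k / 1000.0) / inv_langevin(k / 1000.0) - 1.0)
               for k in range(1, 1000))

g0 = f
g1 = iterate_zero_order(g0)
g2 = iterate_zero_order(g1)
bounds = [bound(g0), bound(g1), bound(g2)]   # strictly decreasing
```

Each step roughly compounds the previous improvement, at the cost of doubling the nesting depth of the expression, which is the complexity-for-accuracy trade-off noted in the text.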

Lower-error bounds via better base functions
By utilizing a base function with a lower relative error bound, improved approximations for the inverse Langevin function are possible with error approximation and iteration. Results are detailed in Table 5.

Notes
The variation, with different base functions, in the relative error bound of approximations for the inverse Langevin function is illustrated in Fig. 12 for the case of a first iteration based on a zero-order error approximation (see Eq. 57). In general, the upper bound on the magnitude of the relative error occurs in the mid-band region of the interval [0, 1], the exception being the case of g₇(y) defined by Eq. 86. For this function, the coefficients defining the approximation are not of sufficient accuracy to yield high-order approximations with iteration, and a floor in the relative error is evident in Table 5. This problem can simply be overcome by specifying the coefficients with higher precision.
As is evident in the results tabulated in Tables 3, 4, and 5, the improvement in minimizing the relative error bound is dramatic with error approximation and iteration. The function specified by Eq. 80 approximates the inverse Langevin function with a relative error bound of 0.00969. As detailed in Table 3, this relative error bound can be reduced to 5.08 × 10 −8 , 2.66 × 10 −16 , and 2.06 × 10 −30 by a first-order iteration based, respectively, on zero-, first-, and second-order error approximations. A comparison of the results between Tables 3 and 4 shows the natural improvement with error approximation and iteration when a base function with a lower relative error bound is used.

Approximation with high accuracy and modest complexity
The results detailed in Table 5 indicate that approximations for the inverse Langevin function, based on a first-order iteration and a first-order error approximation, yield relative error bounds of the order of 10⁻¹⁶ or better, a level of accuracy that is higher than that required for most applications. For a chosen base function f, the approximation is defined by Eq. 58. For y fixed, the evaluation of L⁻¹(y) requires the determination of f(y), f^(1)(y), …

Simplified approximation via Taylor series
Consider the initial error-based approximations for the inverse Langevin function as specified by Eqs. 42 to 45, where it is expected that the error term is small, i.e., |ε_k(y)| ≪ y. This justifies the simplified approximations for the inverse Langevin function as stated in the following theorem:

Fig. 12 Graphs of the magnitude of the relative error in approximations to L⁻¹(y) based on a first iteration and on a zero-order error approximation (Eq. 57). The base functions, f, are defined by Eqs. 80 to 86

First-and second-order iteration
The approximations detailed in Theorem 14 can be used to specify first-, second-, and higher-order iteration-based approximations for the inverse Langevin function:

Theorem 15 First- and second-order iteration: first-order error and first-order Taylor

Consider the case of a first-order error approximation with either a first- or second-order Taylor series approximation (Eq. 96). A first-order iteration approximation to the inverse Langevin function is defined, for the case of a first-order Taylor series approximation, in terms of f₀ as defined by Eq. 100; for a second-order Taylor series approximation, f₀ is defined by Eq. 101. A second-order iteration approximation is defined, for a first-order Taylor series approximation, in terms of f₁ as defined by Eq. 103 and, for a second-order Taylor series approximation, in terms of f₁ as defined by Eq. 104.

Proof Consider first-order iteration based on the first-order Taylor series approximation specified by Eq. 96. It then follows from Eq. 58 that the iterated form can be written in terms of f₀, and the first-order Taylor series approximation, as specified by Eq. 96, then yields the required result. The other results follow in an analogous manner.

Results
A first-order Taylor series approximation, as defined in Eq. 94, in general, leads to significantly higher relative errors in approximations to the inverse Langevin function. In contrast, a second-order Taylor series approximation leads to relative errors comparable with the non-approximated case. Illustrative results are shown in Table 6, where the base function defined by Eq. 80 has been used.

Table 6 The relative error bounds, over the interval (0, 1), for approximations to the inverse Langevin function based on the function f defined by Eq. 80 and with the use of first- and second-order Taylor series approximations. The tabulated cases are: f(y); f₀(y) with ε₀(y) (Eq. 42); f₀(y) with ε₁(y) (Eq. 43); f₁(y) with ε₀(y) (Eq. 57); f₁(y) with ε₁(y) (Eq. 58); and f₂(y) with ε₀(y) (Eq. 63)

Example
Consider the relatively simple expression, as defined by Eq. 100, which is for a first-order Taylor approximation to a first-order error-based approximation. When the function f is that defined by Petrosyan (Eq. 84, with a relative error bound of 1.8 × 10⁻³) or Nguessong (Eq. 85, with a relative error bound of 7.2 × 10⁻⁴), the relative error bounds, respectively, are 3.20 × 10⁻⁶ and 3.81 × 10⁻⁷. These bounds are comparable with the [7/7] approximation given in Marchi and Arruda (2019), which has a relative error bound of 6.92 × 10⁻⁷. If this function is used iteratively, then the relative error bounds reduce, respectively, to 1.03 × 10⁻¹¹ and 1.09 × 10⁻¹³. As is clear from the results detailed in Table 6, these error bounds can be significantly improved upon by utilizing a second-order Taylor series approximation.
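The kind of simplification this example describes can be sketched as follows: replace the composition f(y − ε) by its first-order Taylor expansion f(y) − ε f^(1)(y). Lacking the Petrosyan and Nguessong base functions (Eqs. 84 and 85), the Cohen form and a Newton-style first-order error approximation are used below, so the bounds illustrate the structure of the simplification rather than reproducing the quoted figures.

```python
import math

def langevin(x):
    return x / 3.0 - x**3 / 45.0 if abs(x) < 1e-6 else 1.0 / math.tanh(x) - 1.0 / x

def langevin_d1(x):
    if x > 20.0:
        return 1.0 / x**2
    if abs(x) < 1e-6:
        return 1.0 / 3.0
    return 1.0 / x**2 - 1.0 / math.sinh(x) ** 2

def inv_langevin(y, tol=1e-13):
    lo, hi = 0.0, 1.0
    while langevin(hi) < y:
        hi *= 2.0
    while hi - lo > tol * max(1.0, hi):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if langevin(mid) < y else (lo, mid)
    return 0.5 * (lo + hi)

def f(y):
    return y * (3.0 - y**2) / (1.0 - y**2)    # stand-in base function

def f_d1(y):
    return (3.0 * (1.0 - y**2) ** 2 + 2.0 * y**2 * (3.0 - y**2)) / (1.0 - y**2) ** 2

def eps1(y):
    # first-order error approximation for the model L[f(x1)] = x1 + eps(x1)
    return (langevin(f(y)) - y) / (langevin_d1(f(y)) * f_d1(y))

def approx_composed(y):
    return f(y - eps1(y))                     # exact composition

def approx_taylor1(y):
    return f(y) - eps1(y) * f_d1(y)           # first-order Taylor simplification

def bound(a):
    return max(abs(a(k / 1000.0) / inv_langevin(k / 1000.0) - 1.0)
               for k in range(1, 1000))

b_base, b_comp, b_tay = bound(f), bound(approx_composed), bound(approx_taylor1)
```

Consistent with the text, the first-order Taylor simplification loses accuracy relative to the full composition but still improves markedly on the base function.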

Computational complexity
For Padé-based approximations, where a measure of the computational complexity involved in the evaluation of the inverse Langevin function for a set argument can be clearly defined, a graph of the relative error bound versus computational complexity can readily be generated, e.g., Kröger (2015) and Jedynak (2018). The results shown in Fig. 13 are indicative of the relative error improvement that is possible with an increase in functional and computational complexity. An unsolved problem is the determination of the computational complexity, in terms of the number of basic operations (addition, subtraction, multiplication, division, …), for the approximations detailed in the paper, in particular for the Taylor series approximations detailed in Eqs. 100, 101, 103, 104, 106, and 107. There is potential, e.g., Muller (2006) and Brent (2018), for the computational efficiency of specific functions to be enhanced by innovative approaches. Further research is warranted.

Conclusion
In this paper, an analytical framework has been detailed which underpins, first, convergent series approximations for the inverse Langevin function and, second, analytical approximations for this function with potentially arbitrarily small maximum relative error magnitudes. The basis for both approaches is the definition of an intermediate function f that leads to L[f(x₁)] being a closely linear function with a slope that is close to one over the interval [0, 1). This function allows, first, the definition of an error function for the inverse Langevin function which can be approximated via a basis set decomposition. Second, it allows error approximation and then function iteration which leads, potentially, to arbitrarily low relative error bounds in approximations. Basis set decomposition for the defined error function underpins convergent series approximations for the inverse Langevin function. A tenth-order Legendre basis set, based on an initial approximating function as specified by Eq. 80, leads to a series approximation for the inverse Langevin function with a relative error bound of 1.2 × 10⁻⁴. A twentieth-order series has a relative error bound of 3.2 × 10⁻⁶.
A modest approximating function (Eq. 80), with a relative error bound of 0.00969, leads to relative error bounds of 1.31 × 10⁻⁴, 2.77 × 10⁻⁶, and 1.61 × 10⁻⁸ with zero-, first-, and second-order error approximation. First-order iteration, based on a first-order error approximation, leads to a relative error bound of 2.66 × 10⁻¹⁶. Significantly lower relative error bounds can be obtained by higher-order iteration and by using a second-, or higher-, order error approximation. These results represent significant improvements on published analytical approximations for the inverse Langevin function.
First- and second-order Taylor series for the error-based approximations to the inverse Langevin function can be used to obtain simplified function forms. Whilst the first-order Taylor series leads to results with a much lower accuracy, the second-order Taylor series results in expressions without accuracy compromise. As usual, there is a trade-off between complexity and accuracy.