On the error-sum function of Pierce expansions

We introduce the error-sum function of Pierce expansions. Some basic properties of the error-sum function are analyzed. We also examine the fractal nature of its graph by calculating the Hausdorff dimension, the box-counting dimension, and the covering dimension of the graph.


Introduction
The notion of the error-sum function was first studied by Ridley and Petruska [21] in the context of the regular continued fraction expansion. For any real number x, the error-sum function of the continued fraction expansion is defined by

P(x) := ∑_{n=0}^{∞} (q_n(x)x − p_n(x)),

where p_n(x)/q_n(x) := [a_0(x); a_1(x), a_2(x), . . ., a_n(x)] is the nth convergent (or approximant) of the continued fraction expansion of x, with p_n(x), q_n(x) coprime, a_0(x) an integer, and a_1(x), . . ., a_n(x) positive integers. For any rational number x, the digits a_k(x) are undefined from some index on, and hence P(x) is a series of finitely many terms. In such a case, x = [a_0(x); a_1(x), . . ., a_n(x)] for some n ≥ 0, and if, further, n ≥ 1, then a_n(x) > 1. In fact, Petruska [18] used the error-sum function to prove the existence of a q-series F(z) = 1 + ∑_{n=1}^{∞} ∏_{k=1}^{n} (A − q^k) z^n with radius of convergence R for an arbitrary given R > 1. Here, A = e^{2πiP(β)} and q = e^{2πiβ}, where β is some irrational number satisfying certain conditions in terms of the q_n(β). Moreover, there are a number of studies using error-sum functions to obtain number-theoretical results. See, e.g., [1, 6-9] for further applications of error-sum functions.
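The definition of P is straightforward to evaluate numerically. The following sketch (hypothetical helper name `cf_error_sum`; exact arithmetic with `fractions.Fraction`, so that rational inputs terminate) accumulates the terms q_n(x)x − p_n(x) via the standard convergent recurrences p_n = a_n p_{n−1} + p_{n−2} and q_n = a_n q_{n−1} + q_{n−2}.

```python
from math import floor

def cf_error_sum(x, n_terms=30):
    """Error-sum P(x) of the regular continued fraction expansion of x:
    P(x) = sum over n >= 0 of (q_n(x)*x - p_n(x)), with p_n/q_n the
    convergents of x.  Exact when x is a fractions.Fraction."""
    p_prev, q_prev = 1, 0          # p_{-1}, q_{-1}
    p, q = floor(x), 1             # p_0 = a_0(x), q_0 = 1
    t = x - floor(x)               # fractional part drives the digit recursion
    total = q * x - p              # n = 0 term
    for _ in range(n_terms):
        if t == 0:                 # rational x: expansion (and series) ends
            break
        a = floor(1 / t)           # next partial quotient a_n(x)
        t = 1 / t - a
        p, p_prev = a * p + p_prev, p
        q, q_prev = a * q + q_prev, q
        total += q * x - p         # add q_n(x)*x - p_n(x)
    return total
```

For x = 7/10 = [0; 1, 2, 3], the terms are 7/10, −3/10, 1/10, 0, so P(7/10) = 1/2.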
The continued fraction expansion, along with the decimal expansion, is one of the most famous representations of a real number. Since there is, as is well known, a wide range of representations of real numbers (see [13] and [22] for details), it was natural for intrigued researchers to define the error-sum function for other types of representations and to investigate its basic properties. To name but a few, error-sum functions were defined and studied in the context of the tent map base series [4], the classical Lüroth series [27], the alternating Lüroth series [26], and the α-Lüroth series [2]. In these previous studies, the list of examined basic properties includes, but is not limited to, boundedness, continuity, integrability, and the intermediate value property (or Darboux property) of the error-sum function, as well as the Hausdorff dimension of the graph of the function.
The Pierce expansion is another classical representation of real numbers, introduced by Pierce [19] about a century ago. Since then, a number of studies have investigated the arithmetic and metric properties of Pierce expansions. See, e.g., [5, 10, 12, 17, 20, 23-25, 28, 29]. The Pierce expansion has proven to be useful in number theory, and we mention two applications among others. Firstly, the Pierce expansion provides a simple irrationality proof for a real number (see [24, p. 24]): a real number has an infinite Pierce expansion if and only if it is irrational. For instance, the irrationalities of 1 − e^{−1}, sin 1, and cos 1 follow, respectively, from their infinite Pierce expansions, which coincide with the usual series expansions obtained from the Maclaurin series. As for the other application, Varona [28] constructed transcendental numbers by means of Pierce expansions.
Although the Pierce expansion has a long history and is widely studied, its error-sum function has not yet been investigated, in contrast with the other representations mentioned above. In this paper, we define the error-sum function of the Pierce expansion and analyze its basic properties as well as the fractal properties of its graph.
The paper is organized as follows. In Section 2, we introduce some elementary notions of Pierce expansions and then define the error-sum function of Pierce expansions. In Section 3, we investigate the basic properties, e.g., boundedness and continuity, of the error-sum function. In Section 4, we determine the Hausdorff dimension, the box-counting dimension, and the covering dimension of the graph of the error-sum function.
Throughout the paper, N denotes the set of positive integers, N_0 the set of nonnegative integers, and N_∞ := N ∪ {∞} the set of extended positive integers. Following the convention, we define ∞ + c := ∞ and c/∞ := 0 for any constant c ∈ R. We denote the Lebesgue measure on [0, 1] by λ. For any subset A of a topological space X, the closure of A is denoted by \overline{A}. Given any function g : A → B, we write the preimage of any singleton {b} ⊆ B under g simply as g^{−1}(b) instead of g^{−1}({b}).
The classical Pierce expansion concerns the numbers in the half-open unit interval (0, 1]. In this paper, we extend our scope to the numbers in the closed unit interval I := [0, 1]. This extension is consistent with our use of N_∞ instead of N in this paper.
To dynamically generate the Pierce expansion of x ∈ I, we begin with two maps d_1 : I → N_∞ and T : I → I given by

d_1(x) := ⌊1/x⌋ if x ∈ (0, 1], and d_1(0) := ∞;    T(x) := 1 − d_1(x)x if x ∈ (0, 1], and T(0) := 0,

respectively, where ⌊y⌋ denotes the largest integer not exceeding y ∈ R. Observe that, by definition, for each n ∈ N we have d_1(x) = n if and only if x ∈ I lies in the interval (1/(n + 1), 1/n], on which T is linear. For each n ∈ N, we write T^n for the nth iterate of T, and T^0 := id_I. For notational convenience, we write T^n x for T^n(x) whenever no confusion can arise. Given x ∈ I, we define the sequence of digits (d_n(x))_{n∈N} by d_n(x) := d_1(T^{n−1}x) for each n ∈ N. Then, for any n ∈ N, by the definitions of the map T and the digits, we have

T^n x = 1 − d_n(x) T^{n−1}x.    (2.1)

We recall two well-known facts about the digits in the following proposition. In particular, part (i) characterizes the digit sequence, and it is stated, with or without proof, in any study of Pierce expansions. We include the proof to make it clear that replacing (0, 1] and N by I and N_∞, respectively, does not violate the basic properties.

Proposition 2.1 (See [24] and [12, Proposition 2.2]). Let x ∈ I and n ∈ N. Then the following hold.
(i) d_n(x) < d_{n+1}(x) whenever d_n(x) < ∞, and d_{n+1}(x) = ∞ whenever d_n(x) = ∞.
(ii) d_1(x) d_2(x) ⋯ d_n(x) ≥ n!.

We shall consider a symbolic space, a subspace of N_∞^N, closely related to Pierce expansions. Let Σ_0 := {(σ_k)_{k∈N} ∈ {∞}^N}, and for each n ∈ N, let

Σ_n := {(σ_k)_{k∈N} ∈ N_∞^N : σ_1 < σ_2 < ⋯ < σ_n, where σ_k ∈ N for 1 ≤ k ≤ n and σ_k = ∞ for k > n}.

For ease of notation, we will occasionally write (σ_k)_{k∈N} ∈ Σ_n as (σ_1, . . ., σ_n) in place of (σ_1, . . ., σ_n, ∞, ∞, . . .). We also define

Σ_∞ := {(σ_k)_{k∈N} ∈ N^N : σ_k < σ_{k+1} for all k ∈ N}.

Then Σ_n, for n ∈ N_0, consists of the sequences with strictly increasing initial n terms and ∞ for the remaining terms, and Σ_∞ consists of the strictly increasing infinite sequences of positive integers. Finally, let

Σ := Σ_∞ ∪ ⋃_{n∈N_0} Σ_n.

Each element of Σ is said to be a Pierce sequence. In view of Proposition 2.1(i), for any x ∈ I, the digit sequence (d_n(x))_{n∈N} is a Pierce sequence. We say σ := (σ_k)_{k∈N} ∈ Σ is realizable if there exists x ∈ I such that d_k(x) = σ_k for all k ∈ N, and we denote by Σ_re the collection of all realizable Pierce sequences. Note that, for any (σ_n)_{n∈N} ∈ Σ, we have

σ_1 σ_2 ⋯ σ_n ≥ n!    (2.2)

for all n ∈ N, which is analogous to Proposition 2.1(ii).
It is well known that for each x ∈ I, the iterations of T yield the unique expansion

x = 1/d_1(x) − 1/(d_1(x)d_2(x)) + 1/(d_1(x)d_2(x)d_3(x)) − ⋯ = ∑_{n=1}^{∞} (−1)^{n+1}/(d_1(x) ⋯ d_n(x)),    (2.3)

where the digit sequence (d_n(x))_{n∈N} is a realizable Pierce sequence. (See Proposition 2.4 below.) The expression (2.3) is called the Pierce expansion, Pierce (ascending) continued fraction, or alternating Engel expansion of x. We denote (2.3) by x = [d_1(x), d_2(x), . . .]_P. For brevity, if the digit is ∞ from some point on, i.e., if x = [d_1(x), . . ., d_n(x), ∞, ∞, . . .]_P with d_n(x) < ∞ for some n ∈ N, then we write x = [d_1(x), . . ., d_n(x)]_P and say that x has a finite Pierce expansion of length n. As mentioned in Section 1, it is a classical result that the Pierce expansion of x ∈ (0, 1] is finite if and only if x is rational. Since 0 = [∞, ∞, . . .]_P, we may write the Pierce expansion of 0 as [ ]_P, which is of length zero. Thus, x ∈ I has a finite Pierce expansion if and only if x is rational.

Proposition 2.2 (See [24, pp. 23-24]). For any x ∈ I with a finite Pierce expansion of length n ≥ 2, we have d_n(x) > d_{n−1}(x) + 1.

Proof. The result follows from the definition of the digits. To see this, suppose otherwise. Put M := d_{n−1}(x) for some M ∈ N, so that d_n(x) = M + 1 by Proposition 2.1(i). Since d_{n−1}(x) = M, we have T^{n−2}x ∈ (1/(M + 1), 1/M], and so, by (2.1), T^{n−1}x = 1 − M T^{n−2}x ∈ [0, 1/(M + 1)). On the other hand, since d_{n+1}(x) = ∞, we have T^n x = 0, while T^{n−1}x ≠ 0 because d_n(x) < ∞. By (2.1), we see that 0 = T^n x = 1 − (M + 1)T^{n−1}x, and so T^{n−1}x = 1/(M + 1), which is a contradiction.
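The digit-generating procedure above is easy to implement. The following sketch (hypothetical helper names `pierce_digits` and `from_digits`; exact arithmetic with `fractions.Fraction`, so that rational inputs yield finite digit lists, in line with the finiteness criterion) iterates d_1 and T and reconstructs x from its digits.

```python
from fractions import Fraction
from math import floor

def pierce_digits(x, max_terms=30):
    """Pierce digits d_1(x), d_2(x), ... obtained by iterating the shift:
    d_1(x) = floor(1/x) and T(x) = 1 - d_1(x)*x, stopping once the orbit
    hits 0 (all remaining digits are infinity)."""
    x = Fraction(x)
    digits = []
    for _ in range(max_terms):
        if x == 0:
            break
        d = floor(Fraction(1) / x)   # current digit
        digits.append(d)
        x = 1 - d * x                # apply T
    return digits

def from_digits(digits):
    """Reconstruct x = 1/d_1 - 1/(d_1 d_2) + 1/(d_1 d_2 d_3) - ... ."""
    total, prod = Fraction(0), 1
    for k, d in enumerate(digits):
        prod *= d
        total += Fraction((-1) ** k, prod)
    return total
```

For instance, 7/10 = [1, 3, 10]_P: the digits are strictly increasing, as Proposition 2.1(i) requires, and reconstructing from [1, 3, 10] returns 7/10.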
We denote by f : I → Σ the map sending a number in I to the sequence of its Pierce expansion digits; that is, for each x ∈ I, f(x) := (d_n(x))_{n∈N}. Clearly, f is well defined. We also note that f(I) = Σ_re by definition.
Conversely, we shall introduce a function mapping a Pierce sequence to a real number in I by means of a formula modelled on (2.3). Define a map ϕ : Σ → I by

ϕ(σ) := ∑_{k=1}^{∞} (−1)^{k+1}/(σ_1 σ_2 ⋯ σ_k)

for σ := (σ_k)_{k∈N} ∈ Σ. The map is well defined, since 0 ≤ ϕ(σ) ≤ 1, where the first inequality follows from (2.2).
We rephrase [12, Proposition 2.1] in terms of the maps f and ϕ in the following proposition. According to Fang [12], the proposition is credited to Remez [20]. See also [5, Section 4.1].
For each x ∈ I and n ∈ N, define the nth Pierce convergent (or approximant) s_n : I → R by

s_n(x) := ∑_{k=1}^{n} (−1)^{k+1}/(d_1(x) d_2(x) ⋯ d_k(x)).
Then s_n(x) is nothing but the nth partial sum of the Pierce expansion (2.3). Using (2.1) repeatedly, we find that

x − s_n(x) = (−1)^n T^n x/(d_1(x) d_2(x) ⋯ d_n(x)).    (2.4)

For every x ∈ I, we define

E(x) := ∑_{n=1}^{∞} (x − s_n(x)),

and call E : I → R the error-sum function of Pierce expansions on I. Note that, for any x ∈ I, by (2.4), Proposition 2.1(ii), and the boundedness of T, we have |x − s_n(x)| ≤ 1/(d_1(x) ⋯ d_n(x)) ≤ 1/n!, so E(x) is well defined as an absolutely and uniformly convergent series (or as a series with finitely many non-zero terms if T^n x = 0 for some n ∈ N).
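At rational points the series defining E terminates, so E can be evaluated exactly. The sketch below (hypothetical helper name `error_sum`; exact rational arithmetic) accumulates the error terms x − s_n(x) while iterating the shift map T.

```python
from fractions import Fraction
from math import floor

def error_sum(x, max_terms=40):
    """E(x) = sum over n >= 1 of (x - s_n(x)), with s_n the n-th partial
    sum of the Pierce expansion of x.  Exact for rational x, whose
    expansion (and hence the series) is finite."""
    x = Fraction(x)
    total = partial = Fraction(0)
    prod, sign, t = 1, 1, x
    for _ in range(max_terms):
        if t == 0:                       # all remaining error terms vanish
            break
        d = floor(Fraction(1) / t)       # next digit d_n(x)
        prod *= d
        partial += Fraction(sign, prod)  # partial sum s_n(x)
        total += x - partial             # accumulate the n-th error term
        sign, t = -sign, 1 - d * t       # iterate the shift T
    return total
```

For x = 7/10 = [1, 3, 10]_P, the error terms are 7/10 − 1 = −3/10, 7/10 − 2/3 = 1/30, and 0, so E(7/10) = −4/15.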
Defining an error-sum function on Σ is now in order. For each n ∈ N, define the nth partial sum ϕ_n : Σ → R by

ϕ_n(σ) := ∑_{k=1}^{n} (−1)^{k+1}/(σ_1 σ_2 ⋯ σ_k).

For every σ ∈ Σ, we define

E*(σ) := ∑_{n=1}^{∞} (ϕ(σ) − ϕ_n(σ)),    (2.5)

and call E* : Σ → R the error-sum function of Pierce sequences on Σ. Notice that, for any σ := (σ_n)_{n∈N} ∈ Σ, by (2.2), we have |ϕ(σ) − ϕ_n(σ)| ≤ 1/(σ_1 σ_2 ⋯ σ_n) ≤ 1/n!. Hence, the series converges absolutely and uniformly on Σ, and it follows that E*(σ) is well defined.

Some basic properties of E(x)
This section is devoted to investigating some basic properties of the error-sum function of Pierce expansions E : I → R. This will usually be done with the aid of the symbolic space Σ and the error-sum function of Pierce sequences E* : Σ → R.
3.1. Symbolic space Σ. We equip N with the discrete topology and consider (N_∞, T) as its one-point compactification, so that a subset of (N_∞, T) is open if and only if it is either a subset of N or a set whose complement with respect to N_∞ is a finite subset of N.
For a metric space (X, d), we denote by B_d(x; r) the d-open ball centered at x ∈ X with radius r > 0, i.e., B_d(x; r) := {y ∈ X : d(x, y) < r}.
Define ρ : N_∞ × N_∞ → R by ρ(m, n) := |1/m − 1/n| for m, n ∈ N_∞, with the convention 1/∞ := 0.

Lemma 3.1. The function ρ is a metric on N_∞ and induces T.

Proof. It is straightforward to check that ρ is a metric on N_∞, so we prove the second assertion only. Let m ∈ N. Then B_ρ(m; r) = {m} for all sufficiently small r > 0, so the singleton {m} is ρ-open. On the other hand, for any r > 0, the ball B_ρ(∞; r) consists of ∞ together with all sufficiently large elements of N; that is, its complement with respect to N_∞ is a finite subset of N. It follows that the ρ-open sets are exactly the open sets of T.

Tychonoff's theorem tells us that N_∞^N is compact in the product topology, as a (countable) product of the compact space (N_∞, T). It is easy to see that any nonempty open set in the product topology contains a non-Pierce sequence, so that Σ is not open in N_∞^N. However, Σ is closed in the product topology.

Lemma 3.2. The subspace Σ is closed in the product space N_∞^N, and so Σ is compact in the product topology.
Proof. Let σ := (σ_k)_{k∈N} ∈ N_∞^N \ Σ. Then either σ_{k+1} ≤ σ_k < ∞ for some k ∈ N, or σ_k = ∞ and σ_{k+1} < ∞ for some k ∈ N. In either case, prescribing the coordinates k and k + 1 appropriately yields a basic open set in the product topology that contains σ and is disjoint from Σ. Hence every point of N_∞^N \ Σ has an open neighborhood contained in N_∞^N \ Σ, and this proves that Σ is closed in N_∞^N. The second assertion follows immediately since N_∞^N is compact in the product topology by Tychonoff's theorem, so that its closed subspace Σ is compact.
Lemma 3.3. Define ρ_N : N_∞^N × N_∞^N → R by

ρ_N(σ, τ) := ∑_{k=1}^{∞} 2^{−k} ρ(σ_k, τ_k)

for σ := (σ_k)_{k∈N}, τ := (τ_k)_{k∈N} ∈ N_∞^N. Then ρ_N is a metric on N_∞^N, and the topology induced by ρ_N is equivalent to the product topology on N_∞^N.

The proof of the second assertion is almost identical to the standard proof of the well-known fact that any countable product of metric spaces is metrizable, so we omit the details.
Lemma 3.4. The space (Σ, ρ_N) is a compact metric space.

Proof. The lemma is immediate from Lemmas 3.2 and 3.3.
For a given σ := (σ_k)_{k∈N} ∈ Σ, we define the truncation σ^(n) := (τ_k)_{k∈N} ∈ Σ, for each n ∈ N, by τ_k := σ_k for 1 ≤ k ≤ n and τ_k := ∞ for k > n. Fix n ∈ N and σ ∈ Σ_n. Let Υ_σ be the collection of sequences in Σ defined as

Υ_σ := {τ ∈ Σ : τ^(n) = σ^(n)},

and we call Υ_σ the cylinder set of order n associated with σ. Then Υ_σ consists of all sequences in Σ whose initial n terms agree with those of σ. By Lemma 3.2, it is clear that Υ_σ is compact in Σ, being a closed set in a compact space. Since Υ_σ is open in Σ as well, it follows that Σ \ Υ_σ is compact by the same lemma. We also define the fundamental interval of order n associated with σ := (σ_k)_{k∈N} by

I_σ := {x ∈ I : d_k(x) = σ_k for 1 ≤ k ≤ n}.

Then any number x ∈ I_σ has a Pierce expansion beginning with (σ_k)_{k=1}^{n}, i.e., x = [σ_1, . . ., σ_n, d_{n+1}(x), d_{n+2}(x), . . .]_P. In view of the following proposition, the reason I_σ is called an interval should be clear.
For each n ∈ N and σ := (σ_k)_{k∈N} ∈ Σ_n, we write σ̂ := (σ̂_k)_{k∈N} ∈ Σ_n, where the σ̂_k are given by σ̂_k := σ_k for k ≠ n and σ̂_n := σ_n + 1. If σ ∈ Σ \ Σ_re, we have instead that I_σ is an open interval with the same endpoints, i.e.,

I_σ = (ϕ(σ̂), ϕ(σ)) or I_σ = (ϕ(σ), ϕ(σ̂)),    (3.1′)

according as n is odd or even. Consequently, the length of I_σ is

λ(I_σ) = |ϕ(σ) − ϕ(σ̂)| = 1/(σ_1 ⋯ σ_{n−1} σ_n (σ_n + 1)).    (3.2)

We illustrate the exclusion of the endpoint ϕ(σ) in (3.1′) with an example. Consider the two sequences σ := (2) ∈ Σ_1 ∩ Σ_re and σ′ := (1, 2) ∈ Σ_2 ∩ (Σ \ Σ_re). The values ϕ(σ) and ϕ(σ′) are equal, and so they have the same Pierce expansion, namely [2]_P. It follows from the definition of fundamental intervals that I_σ contains ϕ(σ), whereas I_σ′ fails to contain ϕ(σ′).
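The endpoint description and the length formula (3.2) can be checked numerically. In the sketch below (hypothetical helper names; a finite list stands for a Pierce sequence padded with ∞), `interval_length` computes |ϕ(σ) − ϕ(σ̂)| by bumping the last term of σ.

```python
from fractions import Fraction

def phi(sigma):
    """phi(sigma) = sum_k (-1)^(k+1) / (sigma_1 ... sigma_k) for a finite
    Pierce sequence sigma (remaining terms understood to be infinity)."""
    total, prod = Fraction(0), 1
    for k, s in enumerate(sigma):
        prod *= s
        total += Fraction((-1) ** k, prod)
    return total

def interval_length(sigma):
    """lambda(I_sigma) = |phi(sigma) - phi(sigma_hat)|, where sigma_hat
    increases the last term of sigma by one, as in (3.2)."""
    hat = list(sigma[:-1]) + [sigma[-1] + 1]
    return abs(phi(sigma) - phi(hat))
```

For σ = (1, 3), the endpoints are ϕ((1, 3)) = 2/3 and ϕ((1, 4)) = 3/4, and the length is 1/12 = 1/(1 · 3 · 4), matching (3.2).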
For later use, we record an upper bound for λ(I_σ) derived from (3.2): for each n ∈ N and σ ∈ Σ_n, we have λ(I_σ) ≤ 1/(n + 1)!, since σ_1 ⋯ σ_{n−1} σ_n (σ_n + 1) ≥ (n − 1)! · n · (n + 1) = (n + 1)! by (2.2).

3.2. Mappings ϕ : Σ → I and f : I → Σ. By definition, the following observation is immediate.
For a fixed σ ∈ Σ_n, for some n ∈ N, we can explicitly describe the relation between the cylinder set Υ_σ ⊆ Σ and the fundamental interval I_σ ⊆ I in terms of the map f : I → Σ. We first observe from the definitions that f(I_σ) ⊆ Υ_σ. The inclusion is proper, i.e., f(I_σ) ⊊ Υ_σ; indeed, by Proposition 2.4, the sequences in Υ_σ \ f(I_σ) are exactly the non-realizable sequences υ := (υ_k)_{k∈N} ∈ Υ_σ ∩ Σ_m with υ_{m−1} + 1 = υ_m for some m ≥ n. Any such υ can nevertheless be approximated arbitrarily closely by sequences in f(I_σ): it suffices to consider the sequences (τ_k)_{k∈N} that agree with υ up to the mth term and carry one further, arbitrarily large, finite term.

Lemma 3.6. We have Υ_σ = \overline{f(I_σ)}.

Similarly, any sequence in Σ can be approximated arbitrarily closely by sequences in f(I).

Lemma 3.7. We have Σ = \overline{f(I)}.
Proof. Since f(I) ⊆ Σ, it suffices to show that any point of Σ \ f(I) is a limit point of f(I). Let σ := (σ_k)_{k∈N} ∈ Σ \ f(I). Then σ ∈ Σ \ Σ_re, so by Proposition 2.4, σ ∈ Σ_n for some n ≥ 2 with σ_{n−1} + 1 = σ_n. Now, an argument similar to the one in the proof of Lemma 3.6 shows that there is a sequence in f(I) converging to σ. Hence the result.
We are now concerned with the continuity of the two maps of interest. We first show that ϕ : Σ → I is a Lipschitz mapping.
Lemma 3.8. The mapping ϕ : (Σ, ρ_N) → I is Lipschitz continuous.

Proof. Let σ := (σ_k)_{k∈N}, τ := (τ_k)_{k∈N} ∈ Σ. If σ = τ, there is nothing to prove, so we suppose that σ and τ are distinct. If σ_1 ≠ τ_1, the desired bound follows directly; otherwise, assume that σ and τ share an initial block of length n ∈ N, i.e., σ^(n) = τ^(n) and σ_{n+1} ≠ τ_{n+1}. Then |ϕ(σ) − ϕ(τ)| can be bounded by a constant multiple of ρ_N(σ, τ), where we use (2.6) and (2.2) for the second and third inequalities of the estimate, respectively.

Now we prove that f : I → Σ is continuous at every irrational number and at the two rational numbers 0 and 1.
Lemma 3.9. The mapping f : I → Σ is continuous at every point of I \ E′.

Proof. By Proposition 2.3(ii), it suffices to show that f is continuous at every x ∈ I for which ϕ^{−1}(x) is a singleton. Suppose otherwise. Put {σ} := ϕ^{−1}(x) for some σ ∈ Σ. Then f(x) = σ by Proposition 2.3(ii). Since f is not continuous at x, we can find an ε > 0 and a sequence (x_n)_{n∈N} in I converging to x such that ρ_N(f(x_n), σ) ≥ ε for all n ∈ N. Write τ_n := f(x_n) for each n ∈ N. Since Σ is a compact metric space (Lemmas 3.2 and 3.3), there is a subsequence (τ_{n_k})_{k∈N} converging to some τ ∈ Σ. By the continuity of ϕ (Lemma 3.8), x_{n_k} = ϕ(τ_{n_k}) → ϕ(τ) as k → ∞, and so ϕ(τ) = x, whence τ = σ because ϕ^{−1}(x) is a singleton. But then ρ_N(τ, τ_{n_k}) ≥ ε for all k ∈ N. This contradicts the convergence of (τ_{n_k})_{k∈N} to τ. Therefore, f is continuous at every x ∈ I for which ϕ^{−1}(x) is a singleton, and hence at every x ∈ I \ E′.

However, the continuity does not hold at any rational number in the open unit interval (0, 1). Notice in Proposition 2.3(i) that σ ∉ Υ_{σ′} and σ′ ∉ Υ_σ.

Lemma 3.10. Let x ∈ E′ and put ϕ^{−1}(x) = {σ, τ}. Then f is not continuous at x; in particular, we have

lim_{t→x, t∈I_σ} f(t) = σ  and  lim_{t→x, t∈I\I_σ} f(t) = τ.

Proof. The argument is similar to the proof of Lemma 3.9. The main difference in this proof is the use of the compactness of Υ_σ and Σ \ Υ_σ.
Suppose, to the contrary, that the first convergence fails to hold. Then we can find an ε > 0 and a sequence (x_n)_{n∈N} in I_σ converging to x such that ρ_N(f(x_n), σ) ≥ ε for all n ∈ N. Write υ_n := f(x_n) for each n ∈ N. Since υ_n ∈ f(I_σ) ⊆ Υ_σ for each n ∈ N and Υ_σ is a compact metric space, there is a subsequence (υ_{n_k})_{k∈N} converging to some υ ∈ Υ_σ. Note that x_{n_k} = ϕ(f(x_{n_k})) = ϕ(υ_{n_k}) for each k ∈ N. Now, by the continuity of ϕ (Lemma 3.8), we see that x_{n_k} → ϕ(υ) as k → ∞. Since x is the limit of (x_n)_{n∈N}, it follows that x = ϕ(υ). Thus υ = σ or υ = τ by the doubleton assumption. Since τ ∉ Υ_σ by Proposition 2.3(i), it must be that υ = σ. But then ρ_N(υ, f(x_{n_k})) = ρ_N(υ, υ_{n_k}) ≥ ε for all k ∈ N, by our choice of ε and (x_n)_{n∈N}. This contradicts the convergence of (υ_{n_k})_{k∈N} to υ. Therefore, lim_{t→x, t∈I_σ} f(t) = σ.
The proof of the second convergence is similar. First note that, since Υ_σ \ f(I_σ) and f(I) are disjoint by (3.4), we have f(I \ I_σ) = f(I) \ f(I_σ) ⊆ Σ \ Υ_σ, where the first equality follows from the injectivity of f. Now, repeating the argument of the preceding paragraph with I_σ and Υ_σ replaced by I \ I_σ and Σ \ Υ_σ, respectively, and with the roles of σ and τ exchanged, we obtain the desired result.
Notice that in the preceding lemma there is no additional assumption on σ and τ; compare this with Proposition 2.3(i). Hence, Lemma 3.10 holds both in the case where σ ∈ Σ_re and τ ∈ Σ \ Σ_re and in the case where σ ∈ Σ \ Σ_re and τ ∈ Σ_re.
Lemma 3.11. We have E = E* ∘ f.

Proof. Let x ∈ I. Then ϕ(f(x)) = x, and, by the definitions of ϕ_n and s_n, we have, for each n ∈ N, ϕ_n(f(x)) = s_n(x). Hence

E*(f(x)) = ∑_{n=1}^{∞} (ϕ(f(x)) − ϕ_n(f(x))) = ∑_{n=1}^{∞} (x − s_n(x)) = E(x).

Thus (E* ∘ f)(x) = E(x) for all x ∈ I.
by Lemma 3.12. Thus, we may change the order of summation in the double series to obtain the desired result.
The boundedness of E : I → R readily follows.
Proof. We make use of (3.5) and Lemma 3.12 to obtain both the desired upper and lower bounds. On the one hand, for any σ := (σ_n)_{n∈N} ∈ Σ, we obtain the upper bound, in which the last inequality follows from (2.2); notice that the equalities hold if and only if σ = (1, 2) ∈ Σ_2 ∩ (Σ \ Σ_re). On the other hand, for any σ := (σ_n)_{n∈N} ∈ Σ, we obtain the lower bound similarly. The second assertion is immediate in view of Lemma 3.11 and the fact that (1, 2) ∉ f(I).
Lemma 3.15. The error-sum function E* : Σ → R is continuous.

Proof. We showed that the series in (2.5) converges uniformly on Σ. But ϕ is continuous by Lemma 3.8 and each ϕ_n is clearly continuous, so each term of the series defining E* is continuous. Therefore, E* is continuous, being the sum of a uniformly convergent series of continuous functions.
The λ-almost everywhere continuity theorem for E(x) is now immediate.
Theorem 3.16. The error-sum function E : I → R is continuous on I \ E′, and so E is continuous λ-almost everywhere.
Proof. Let x ∈ I \ E′. By Lemma 3.9, we know that f : I → Σ is continuous at x.
Moreover, E* : Σ → R is continuous by Lemma 3.15. But E = E* ∘ f by Lemma 3.11, and therefore E is continuous at x. For the second assertion, it is enough to recall that E′ = Q ∩ (0, 1), which has zero λ-measure. Thus I \ E′ is of full λ-measure, and the result follows.
On the other hand, we will show that E : I → R fails to be continuous at every point of E′ (Theorem 3.18). The following lemma plays a key role in the proof of the theorem.

Lemma 3.17. Let x ∈ E′ and put ϕ^{−1}(x) = {σ, σ′}, where σ ∈ Σ_re ∩ Σ_n and σ′ ∈ (Σ \ Σ_re) ∩ Σ_{n+1} for some n ∈ N. Then

lim_{t→x, t∈I\I_σ} E(t) = E*(σ′).
Proof. By Lemma 3.11, the continuity of E* (Lemma 3.15), and Lemma 3.10, we obtain the first equality:

lim_{t→x, t∈I\I_σ} E(t) = lim_{t→x, t∈I\I_σ} E*(f(t)) = E*(σ′).

Since x ∈ E′, by Proposition 2.3(i), x has a finite Pierce expansion of positive length, say x = [d_1(x), d_2(x), . . ., d_n(x)]_P for some n ∈ N. Then, since ϕ(σ′) = x, the value E*(σ′) can be computed explicitly in terms of E(x) and the digits d_1(x), . . ., d_n(x).
Now we are ready to prove that E is discontinuous at every point of the dense subset E ′ ⊆ I.
Theorem 3.18. Let x ∈ E′ and write x = [d_1(x), . . ., d_n(x)]_P for some n ∈ N. Then the following hold.
(i) If n is odd, then E is left-continuous but has a right jump discontinuity at x; more precisely, (3.6) holds.
(ii) If n is even, then E is right-continuous but has a left jump discontinuity at x; more precisely, (3.7) holds.

Proof. By Proposition 2.3(i), we have ϕ^{−1}(x) = {σ, σ′}, where σ := f(x) ∈ Σ_re ∩ Σ_n and σ′ ∈ (Σ \ Σ_re) ∩ Σ_{n+1}.

(i) Assume n is odd. Since I_σ = (ϕ(σ̂), ϕ(σ)] by (3.1) and x = ϕ(σ), we have that t → x^+ if and only if t → x with t ∈ I \ I_σ. For the right-hand limit, apply Lemma 3.17 to obtain (3.6). For the left-hand limit, note that t → x^− if and only if t → x with t ∈ I_σ \ {x}. Then, by Lemma 3.11, the continuity of E* (Lemma 3.15), and Lemma 3.10, we deduce that

lim_{t→x^−} E(t) = lim_{t→x, t∈I_σ} E*(f(t)) = E*(σ) = E*(f(x)) = E(x),

and therefore we conclude that E is left-continuous at x.
(ii) The proof is similar to that of part (i), so we omit the details.
Note that, for every point x ∈ E′, lim_{t→x^−} E(t) is strictly greater than lim_{t→x^+} E(t), regardless of whether the discontinuity is from the left or the right.
(ii) The proof is similar to that of part (i), so we omit the details.
Using the preceding lemma, we can describe the supremum and the infimum of E : I → R on each fundamental interval I_σ. We show that approaching the left endpoint from the right yields the infimum, while approaching the right endpoint from the left yields the supremum. (See Proposition 3.5 for the left and right endpoints of the fundamental intervals.)

Lemma 3.20. Let n ∈ N and σ ∈ Σ_n. Then the following hold.
(i) If n is odd, we have (3.8) and (3.9). Indeed, by Lemma 3.11, Lemma 3.10, and the continuity of E* (Lemma 3.15), we obtain the displayed identities, where the last two equalities in both (3.8) and (3.9) follow from Lemma 3.19 and its proof.
For the infimum, notice that, by Proposition 3.5, t → (ϕ(σ̂))^+ if and only if t → ϕ(σ̂) with t ∈ I_σ. Hence Lemma 3.17 gives the value of the corresponding limit, and combining this with (3.9) gives the result.
(ii) The proof is similar to that of part (i), so we omit the details.
The following lemma is an analogue of [27, Lemma 2.6], where the error-sum function of the Lüroth series is discussed. This lemma will serve as the key ingredient in finding a suitable covering of the graph of E(x) in Section 4.
One might be tempted to say that E : I → R is fairly regular, in the sense of λ-almost everywhere continuity (Theorem 3.16). However, the following theorem tells us that E is not well behaved in the sense of bounded variation.
Theorem 3.22.The error-sum function E : I → R is not of bounded variation.
Proof. Let V_I(E) denote the total variation of E on I, and let n ∈ N. We consider the collection I := {I_σ : σ ∈ Σ_n}, i.e., the collection of all fundamental intervals of order n. Note that ∑_{σ∈Σ_n} λ(I_σ) = 1. Then, by Lemma 3.21, we obtain a lower bound on V_I(E) that tends to infinity with n, where the inequality involved follows from the fact that the I_σ ∈ I are mutually disjoint intervals. Since n ∈ N is arbitrary, it follows that V_I(E) is not finite. This completes the proof.
We prove that E : I → R enjoys an intermediate value property in some sense, which is an analogue of [21, Theorem 4.3]. A similar result can also be found in [26, Theorem 2.5]. In fact, each of the aforementioned results is a consequence of the following theorem.
Theorem 3.23. Suppose that g : J → R is a function on an interval J ⊆ R satisfying the following conditions.
(i) There exists a subset D of the interior of J such that g is continuous on J \ D.
(ii) For any x ∈ D, g is either left-continuous or right-continuous at x, with lim_{t→x^−} g(t) > lim_{t→x^+} g(t).
Let a, b ∈ J with a < b. If g(a) < y < g(b), then there exists an x ∈ (a, b) \ D such that g(x) = y.

Proof. Consider the set

S := {t ∈ [a, b] : g(t) < y},

and let t_0 := sup S. Since g(a) < y by assumption, we have a ∈ S, and hence S is non-empty. So t_0 ≠ −∞ and t_0 ∈ [a, b]. Our aim is to show that t_0 is a desired root, that is, g(t_0) = y and t_0 ∈ (a, b) \ D.
We claim that t_0 > a. We consider three cases depending on the continuity behavior at a.

Case I. Assume a ∈ J \ D, so that g is continuous at a by condition (i). Then, since g(a) < y, there is an η_1 ∈ (0, b − a) such that g(t) < y for all t ∈ (a − η_1, a + η_1) ∩ J. So t_0 ≥ a + η_1, and hence t_0 > a.

Case II. Assume that a ∈ D and g is left-continuous at a. Then lim_{t→a^+} g(t) < lim_{t→a^−} g(t) = g(a) < y by condition (ii) and the assumption. By the definition of the right-hand limit, there exists an η_2 ∈ (0, b − a) such that g(t) < y for all t ∈ (a, a + η_2). So t_0 ≥ a + η_2, and hence t_0 > a.

Case III. Assume that a ∈ D and g is right-continuous at a. Then lim_{t→a^+} g(t) = g(a) < y by the assumption. By the definition of the right-hand limit, there exists an η_3 ∈ (0, b − a) such that g(t) < y for all t ∈ (a, a + η_3). So t_0 ≥ a + η_3, and hence t_0 > a.
By a similar argument, which we omit here, we can show that t_0 < b. We have thus shown that t_0 ∈ (a, b). It remains to prove that g(t_0) = y with t_0 ∉ D.

We show first that t_0 ∉ D. Suppose t_0 ∈ D, to argue by contradiction. Since t_0 = sup S, we can find a sequence (a_n)_{n∈N} in S such that a_n ≤ t_0 for each n ∈ N and a_n → t_0 as n → ∞. (We can choose a_n ∈ S such that t_0 − 1/n < a_n ≤ t_0 for each n ∈ N.) Similarly, we can find a sequence (b_n)_{n∈N} in [a, b] \ S such that b_n ≥ t_0 for each n ∈ N and b_n → t_0 as n → ∞. Then, by our choice of the two sequences, g(a_n) < y and g(b_n) ≥ y for all n ∈ N. Now note that, since t_0 ∈ D, g is either left-continuous or right-continuous at t_0 by condition (ii). If g is left-continuous at t_0, then by condition (ii) we have

y ≤ lim_{t→t_0^+} g(t) < lim_{t→t_0^−} g(t) = g(t_0) = lim_{n→∞} g(a_n) ≤ y,

which is a contradiction. If g is right-continuous at t_0, then by condition (ii) we have

y ≤ lim_{n→∞} g(b_n) = g(t_0) = lim_{t→t_0^+} g(t) < lim_{t→t_0^−} g(t) = lim_{n→∞} g(a_n) ≤ y,

which is a contradiction. This proves that t_0 ∉ D, as desired. Since t_0 ∈ J \ D, we know from condition (i) that g is continuous at t_0. Hence, g(t_0) = y by the definitions of S and t_0. For, if not, say g(t_0) < y, we can find a δ ∈ (0, min{t_0 − a, b − t_0}) such that g(t) < y on the interval (t_0 − δ, t_0 + δ), which contradicts t_0 = sup S. Similarly, g(t_0) > y gives a contradiction. This completes the proof that t_0 ∈ (a, b) \ D is a root of g(x) = y, as sought.

Remark 3.24. In Theorem 3.23, the assumption g(a) < y < g(b) for a < b is stricter than that of the standard intermediate value theorem on R. This additional assumption is necessary because g drops suddenly at every discontinuity; to be precise, for every x ∈ D, we have lim_{t→x^−} g(t) > lim_{t→x^+} g(t) by condition (ii).

Proof of Corollary 3.26. Let x be rational. Then the regular continued fraction expansion of x is of finite length, say x = [a_0(x); a_1(x), . . ., a_n(x)] for some n ∈ N_0. By [21, Lemma 1.1] and [21, Theorem 2.3], the following hold:
(i) If n is odd, then P is left-continuous but has a right jump discontinuity at x, with right-hand limit lim_{t→x^+} P(t) = P(x) − 1/q_n(x).
(ii) If n is even, then P is right-continuous but has a left jump discontinuity at x, with left-hand limit lim_{t→x^−} P(t) = P(x) + 1/q_n(x).
Since q_n(x) > 0 by definition (see [21, p. 274]), it follows that lim_{t→x^−} P(t) > lim_{t→x^+} P(t) for every x ∈ Q. Moreover, by [21, Theorem 2.3], P is continuous at every irrational point. Therefore, taking J := R and D := Q in Theorem 3.23, the result follows.
We showed that E is bounded on I (Theorem 3.14) and that it is continuous λ-almost everywhere (Theorem 3.16). Hence, E is Riemann integrable on I. Before calculating the integral, we first find a useful formula for E.

Lemma 3.27. For every x ∈ I and each n ∈ N, we have

E(x) = ∑_{k=1}^{n} (x − s_k(x)) + (−1)^n E(T^n x)/(d_1(x) d_2(x) ⋯ d_n(x)).

Proof. Let x ∈ I and n ∈ N. From the definition of the digits, we have d_{n+j}(x) = d_j(T^n x) for any j ∈ N. Then, by making use of (2.4), we obtain the formula above; the second inequality in the estimate involved follows from Lemma 3.13. For E*(σ^(n)), we just need to take σ_k = ∞ for all k ≥ n + 1 in the formula (3.5); combining this with Lemma 3.12 then yields the corresponding expression for E*.
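The formula of Lemma 3.27 can be verified numerically at rational points. In the sketch below (hypothetical helper `E`, exact rational arithmetic), the case n = 1 reads E(x) = (x − 1/d_1(x)) − E(Tx)/d_1(x); for x = 7/10 we have d_1(x) = 1 and Tx = 3/10.

```python
from fractions import Fraction
from math import floor

def E(x):
    """Direct evaluation of E(x) = sum_{n>=1} (x - s_n(x)) for rational x,
    whose Pierce expansion (and hence the series) is finite."""
    x = Fraction(x)
    total, partial, prod, sign, t = Fraction(0), Fraction(0), 1, 1, x
    while t != 0:
        d = floor(Fraction(1) / t)       # digit d_n(x)
        prod *= d
        partial += Fraction(sign, prod)  # partial sum s_n(x)
        total += x - partial
        sign, t = -sign, 1 - d * t       # orbit point T^n x
    return total
```

Both sides of the n = 1 instance evaluate to −4/15 for x = 7/10.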

The dimension of the graph of E(x)
In this section, we determine three widely used and well-known dimensions of the graph of the error-sum function E : I → R, namely the Hausdorff dimension, the box-counting dimension, and the covering dimension. In fact, although E is discontinuous on a dense subset of I (Theorem 3.18) and is not of bounded variation (Theorem 3.22), it is not sufficiently irregular for its graph to have dimension strictly greater than one. Nevertheless, we show that the Hausdorff dimension of the graph is strictly greater than its covering dimension. This leads to the conclusion that the graph is indeed a fractal according to Mandelbrot's definition in his prominent book [16], where he coined the term fractal and defined a fractal in Euclidean space as a set whose covering dimension is strictly less than its Hausdorff dimension.
Throughout this section, for a subset F of R or of R², we denote by H^s(F) the s-dimensional Hausdorff measure of F and by dim_H F the Hausdorff dimension of F. In addition, we denote by \underline{dim}_B F and \overline{dim}_B F the lower and upper box-counting dimensions of F, respectively. If \underline{dim}_B F = \overline{dim}_B F, we call the common value the box-counting dimension of F and denote it by dim_B F. Lastly, the covering dimension of F is denoted by dim_cov F.
We refer the reader to [11] for details on the Hausdorff measure, the Hausdorff dimension, and the box-counting dimension, and to [3, Chapters 1-2] for the covering dimension, which is called the topological dimension in that book.

The Hausdorff dimension of the graph of E(x). Define G(I) := {(x, E(x)) ∈ R² : x ∈ I}, the graph of E, and define Γ : Σ → R² by Γ(σ) := (ϕ(σ), E*(σ)) for σ ∈ Σ.
It should be mentioned that the proof idea of the following theorem is borrowed from earlier studies, e.g., [2, 21, 26, 27].

Lemma 4.3. The following hold.
(i) Γ : Σ → Γ(Σ) is a homeomorphism.
(ii) Γ(Σ) is compact.
Proof. (i) It is enough to show that Γ is a continuous injection, since Σ is compact (Lemma 3.2) and Γ(Σ) ⊆ R² is Hausdorff. Since ϕ : Σ → I and E* : Σ → R are continuous by Lemmas 3.8 and 3.15, respectively, it follows that Γ is continuous.
(ii) Since Σ is compact by Lemma 3.2, the result follows from part (i).
Lemma 4.4. We have Γ(Σ) = \overline{G(I)}.

Proof. First note that, since ϕ ∘ f = id_I and E* ∘ f = E (Lemma 3.11), we have Γ(f(x)) = (ϕ(f(x)), E*(f(x))) = (x, E(x)) for any x ∈ I, and hence (Γ ∘ f)(I) = G(I). Since Σ is compact (Lemma 3.2), the continuity of Γ (Lemma 4.3(i)) tells us that Γ(\overline{f(I)}) = \overline{Γ(f(I))}. Then, by Lemma 3.7, we have

Γ(Σ) = Γ(\overline{f(I)}) = \overline{Γ(f(I))} = \overline{G(I)}.

The following proposition gives a general relation among dim_H, \underline{dim}_B, and \overline{dim}_B for certain subsets of R².

Proposition 4.5 ([11, Proposition 3.4]). If F ⊆ R² is non-empty and bounded, then

dim_H F ≤ \underline{dim}_B F ≤ \overline{dim}_B F.

To prove Theorem 4.2, we first find a lower bound for the lower box-counting dimension. We need the following proposition to find an upper bound for the upper box-counting dimension. The accompanying lemma provides an upper bound for the number of finite sequences whose length and whose product of terms are dominated, respectively, by prescribed numbers. The logarithm without a base, denoted log, always means the natural logarithm.
Proof. Let ε := 2e^{−M} with M > 0 large enough, and take n = n(M) ∈ N such that (n − 1)! ≤ e^M ≤ n!. Clearly, n → ∞ as M → ∞, and vice versa. Then, for any (σ_k)_{k∈N} ∈ Σ, by (2.2), we have σ_1 σ_2 ⋯ σ_k ≥ k! for every k ∈ N. We obtain lower and upper bounds for M by means of Proposition 4.9; in particular, since (n − 1)!/e^M ≤ 1 by our choice of n, it must be that n < M.
We first write Σ as a union of finitely many sets. Define

Λ_1 := {σ := (σ_j)_{j∈N} ∈ Σ : σ_1 ≥ e^M},

and for k ≥ 2, define

Λ_k := {σ := (σ_j)_{j∈N} ∈ Σ : ∏_{j=1}^{k} σ_j ≥ k e^M and ∏_{j=1}^{k−1} σ_j < (k − 1) e^M}.

We claim that Σ = ⋃_{k=1}^{n+1} Λ_k. To prove the claim, we need only show that Σ ⊆ ⋃_{k=1}^{n+1} Λ_k, since the reverse inclusion is obvious. Let σ := (σ_j)_{j∈N} ∈ Σ and assume σ ∉ ⋃_{k=1}^{n} Λ_k; then, inductively, ∏_{j=1}^{n} σ_j < n e^M since σ ∉ Λ_n. Since ∏_{j=1}^{n+1} σ_j ≥ (n + 1) e^M by (4.1), it must be that σ ∈ Λ_{n+1}. Therefore, σ ∈ ⋃_{k=1}^{n+1} Λ_k, and this proves the claim. For each 1 ≤ k ≤ n + 1, our aim is to find a covering of Γ(Λ_k) consisting of squares of side length ε = 2e^{−M} and to determine an upper bound, which we denote by a_k, on the number of required squares.
Hence, Γ(Λ_1) can be covered by a_1 := 1 square of side length ε = 2e^{−M}. Let k ∈ {2, . . ., n + 1}. For every σ := (σ_j)_{j∈N} ∈ Λ_k, since ∏_{j=1}^{k} σ_j ≥ k e^M, we see by Lemma 3.29 that, for a fixed τ := (τ_j)_{j∈N} ∈ Σ_{k−1}, we can cover Γ(Λ_k ∩ Υ_τ) by one square of side length 2e^{−M} = ε. Since ∏_{j=1}^{k−1} σ_j < (k − 1) e^M by the definition of Λ_k, using Lemma 4.8, we bound the number a_k of required squares, where the last inequality in the bound holds since M > n. So a_{n+1} > a_n > ⋯ > a_1, and it follows that Γ(Σ) can be covered by at most (n + 1) a_{n+1} squares of side length ε. Recall that, by our choice of n, we have e^M ≤ n! and n < M; we then estimate the upper limit of each of the three terms in the resulting expression. Clearly, the limit of the first term is 0 as M → ∞. For the second term, we use (4.2).

The covering dimension of the graph of E(x). In this subsection, we show that the covering dimension of the graph of E is zero, so that it is strictly smaller than the Hausdorff dimension.
Theorem 4.12. The graph of the error-sum function E : I → R has covering dimension zero, i.e., dim_cov G(I) = 0.
We say that a topological space X is totally separated if, for every pair of distinct points x, y ∈ X, there are disjoint open sets U and V such that x ∈ U, y ∈ V, and X = U ∪ V. The following propositions will be used in the proof of the theorem. The theorem is a consequence of the following lemma.

Lemma 4.15. We have dim_cov Γ(Σ) = 0.
Proof. Obviously, Γ(Σ) ⊆ R² is non-empty and Hausdorff and, furthermore, compact by Lemma 4.3(ii). By Proposition 4.13, it is sufficient to show that Γ(Σ) is totally separated. To see this, first recall from Lemma 4.3(i) that Γ : Σ → Γ(Σ) is a homeomorphism. It is clear that N_∞ is totally separated, and so is its (countable) product N_∞^N. It follows that Σ ⊆ N_∞^N is also totally separated. Hence its homeomorphic image Γ(Σ) is totally separated. This proves the result.
Proof of Theorem 4.12. On the one hand, since G(I) ≠ ∅, we have dim_cov G(I) ≥ 0 by [3, Example 1.1.9]. On the other hand, since G(I) is a subset of the metrizable space Γ(Σ) ⊆ R², Proposition 4.14 and Lemma 4.15 tell us that dim_cov G(I) ≤ 0. This completes the proof.

For the same phenomenon for E : I → R, see Theorem 3.18 and equations (3.6) and (3.7) therein.

Corollary 3.25. Let a, b ∈ I with a < b. If E(a) < y < E(b), then there exists an x ∈ (a, b) \ Q such that E(x) = y.

Proof. By Theorems 3.16 and 3.18, E satisfies the two conditions of Theorem 3.23 with J := I and D := E′. Since (a, b) \ Q = (a, b) \ E′, the result follows from Theorem 3.23.

Using Theorem 3.23, we can prove the intermediate value property of P : R → R, the error-sum function of the regular continued fraction expansion, defined as in Section 1. Compare the following corollary with [21, Theorem 4.3], where the authors considered P|_I, the restriction of P to I.

Corollary 3.26. Let a, b ∈ R with a < b. If P(a) < y < P(b), then there exists an x ∈ (a, b) \ Q such that P(x) = y.

Theorem 4.1. The graph of the error-sum function E : I → R has Hausdorff dimension one, i.e., dim_H G(I) = 1.

Lemma 4.6. We have \underline{dim}_B \overline{G(I)} ≥ 1.

Proof. By Proposition 4.5, we have \underline{dim}_B \overline{G(I)} ≥ dim_H \overline{G(I)}. By the monotonicity of the Hausdorff dimension and by Theorem 4.1, we further have dim_H \overline{G(I)} ≥ dim_H G(I) = 1. Combining the inequalities, the result follows.

Proposition 4.13 ([3, Theorem 2.7.1]). Let X be a non-empty compact Hausdorff space. Then X is totally separated if and only if dim_cov X = 0.