$C^r$-right equivalence of analytic functions

Let $f,g:(\mathbb{R}^n,0)\rightarrow (\mathbb{R},0)$ be analytic functions. We will show that if $\nabla f(0)=0$ and $g-f \in (f)^{r+2}$, then $f$ and $g$ are $C^r$-right equivalent, where $(f)$ denotes the ideal generated by $f$ and $r\in \mathbb{N}$.


Introduction and result
By $\mathbb{N}$ we denote the set of positive integers. The norm in $\mathbb{R}^n$ is denoted by $|\cdot|$, and $\operatorname{dist}(x, V)$ denotes the distance of a point $x \in \mathbb{R}^n$ to a set $V \subset \mathbb{R}^n$ (we put $\operatorname{dist}(x, V) = 1$ if $V = \emptyset$).
Let $f, g : (\mathbb{R}^n, 0) \to (\mathbb{R}, 0)$ be analytic functions. We say that $f$ and $g$ are $C^r$-right equivalent if there exists a $C^r$ diffeomorphism $\varphi : (\mathbb{R}^n, 0) \to (\mathbb{R}^n, 0)$ such that $f = g \circ \varphi$ in a neighbourhood of $0$.
Let $f : (\mathbb{R}^n, 0) \to \mathbb{R}$ be an analytic function. By $J_f$ we denote the ideal generated by $\frac{\partial f}{\partial x_1}, \dots, \frac{\partial f}{\partial x_n}$ in the ring of analytic functions $(\mathbb{R}^n, 0) \to \mathbb{R}$. The ideal $J_f$ is called the Jacobi ideal. Moreover, by $(f)$ we denote the ideal in the ring of analytic functions $(\mathbb{R}^n, 0) \to \mathbb{R}$ generated by $f$.
The aim of this paper is to prove the following theorem.

Main Theorem. Let $f, g : (\mathbb{R}^n, 0) \to (\mathbb{R}, 0)$ be analytic functions and let $\nabla f(0) = 0$. If $g - f \in (f)^{r+2}$, then $f$ and $g$ are $C^r$-right equivalent, where $r \in \mathbb{N}$.
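To keep the hypothesis concrete, here is a minimal numerical sketch; the functions $f$, $g$ below are our own illustrative choices, not taken from the paper. For $f(x) = x^2$ and $r = 1$, the function $g = f + f^{r+2}$ satisfies $g - f = f^{r+2} \in (f)^{r+2}$, so the Main Theorem applies to this pair.

```python
# Illustrative example (our own choice, not from the paper):
# f(x) = x^2, r = 1, g = f + f^{r+2}, so g - f = f^{r+2} ∈ (f)^{r+2}
# and the Main Theorem gives C^1-right equivalence of f and g.
r = 1

def f(x):
    return x ** 2

def g(x):
    return f(x) + f(x) ** (r + 2)

# the analytic coefficient h with g - f = h * f^{r+2} is identically 1,
# so the ratio below is ≈ 1.0 at every sample point away from 0
for x in [0.1, -0.3, 0.7]:
    print((g(x) - f(x)) / f(x) ** (r + 2))
```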
The above theorem is a modification of the author's result on $C^r$-right equivalence of $C^{r+1}$ functions. In [8, Theorem 5] and [9, Theorem 1] the following theorem was proved.

Theorem 1. Let $f, g : (\mathbb{R}^n, 0) \to (\mathbb{R}, 0)$ be $C^k$ functions, let $k, r \in \mathbb{N}$ be such that $k \ge r + 1$, and let $\nabla f(0) = 0$. If $g - f \in (J_f C^{k-1}(n))^{r+2}$, then $f$ and $g$ are $C^r$-right equivalent. By $J_f C^{k-1}(n)$ we mean the Jacobi ideal defined in the set of $C^{k-1}$ functions $(\mathbb{R}^n, 0) \to \mathbb{R}$.
The methods of proof of the above theorems are similar: first we construct a suitable vector field of class $C^r$, and then we integrate this vector field. The idea of constructing the vector field goes back to N. H. Kuiper and T. C. Kuo ([4], [5]), whereas the integration of the vector field goes back to Ch. Ehresmann ([2]; see also [3]).
There is one more result which deals with $C^r$-right equivalence of functions under a similar condition on $g - f$. Namely, J. Bochnak proved the following theorem ([1, Theorem 1]).

Theorem 2. Let $f, g : (\mathbb{R}^n, 0) \to (\mathbb{R}, 0)$ be $C^k$ functions, let $k, r \in \mathbb{N}$ be such that $k \ge r + 2$, and let $\nabla f(0) = 0$. If $g - f \in m(J_f C^{k-1}(n))^2$, then $f$ and $g$ are $C^r$-right equivalent. By $J_f C^{k-1}(n)$ and $m$ we mean respectively the Jacobi ideal and the maximal ideal in the set of $C^{k-1}$ functions $(\mathbb{R}^n, 0) \to \mathbb{R}$.
The proof of this theorem is based on Tougeron's implicit function theorem ([10]). Comparing the above results, we see that Theorem 1 deals with $C^r$-right equivalence of $C^{r+1}$ functions, whereas Theorem 2 deals with $C^r$-right equivalence of $C^{r+2}$ functions. Since in Theorem 2 the power of the Jacobi ideal does not depend on $r$, it is difficult to say which theorem is stronger. Additionally, in the Main Theorem $g - f$ belongs to a power of the ideal generated by $f$, whereas in Theorems 1 and 2 it belongs to a power of the ideal generated by the partial derivatives of $f$; thus these results are of a different type.

Auxiliary results
We start by defining the Łojasiewicz exponent in the gradient inequality. Let $f : (\mathbb{R}^n, 0) \to (\mathbb{R}, 0)$ be an analytic function. It is known that there exist a neighbourhood $U$ of $0 \in \mathbb{R}^n$ and constants $C > 0$, $\eta \in [0, 1)$ such that the following Łojasiewicz gradient inequality holds:
$$|\nabla f(x)| \ge C |f(x)|^{\eta} \quad \text{for } x \in U.$$
The smallest exponent $\eta$ in the above inequality is called the Łojasiewicz exponent in the gradient inequality and is denoted by $\varrho_0(f)$ (cf. [6], [7]).
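The exponent can be computed by hand in simple cases. The following numerical check uses our own illustrative example (not one from the paper): for $f(x, y) = x^2 + y^2$ one has $|\nabla f| = 2\sqrt{x^2 + y^2} = 2|f|^{1/2}$, so the gradient inequality holds with $\eta = 1/2$ and $C = 2$.

```python
# Numerical illustration (our own example): for f(x, y) = x^2 + y^2 we have
# |∇f(x, y)| = 2*sqrt(x^2 + y^2) = 2*|f(x, y)|^(1/2), so the gradient
# inequality |∇f| >= C*|f|^η holds with η = 1/2 and C = 2 near the origin.
import math

def f(x, y):
    return x ** 2 + y ** 2

def grad_norm(x, y):
    return math.hypot(2 * x, 2 * y)

eta = 0.5
for (x, y) in [(0.1, 0.0), (0.01, 0.02), (-0.3, 0.1)]:
    print(grad_norm(x, y) / f(x, y) ** eta)   # ≈ 2.0 at every point
```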
From the above inequality (since $\eta < 1$ and $|f(x)| \le 1$ near $0$) we obtain immediately that there exist a neighbourhood $U$ of $0 \in \mathbb{R}^n$ and a constant $C > 0$ such that
$$(1)\qquad |\nabla f(x)| \ge C |f(x)| \quad \text{for } x \in U.$$
Let $M, m, r \in \mathbb{N}$, $M > r$. Moreover, let $p, q_1, \dots, q_m : (\mathbb{R}^n, 0) \to \mathbb{R}$ be analytic functions and let $Q$ denote the ideal generated by $q_1, \dots, q_m$.
Lemma 1 (see [9]). If $p \in Q^M$, then $|p(x)| \le C\,|(q_1(x), \dots, q_m(x))|^M$ in a neighbourhood of $0$, for some positive constant $C$.
Lemma 2. Let $f : (\mathbb{R}^n, 0) \to (\mathbb{R}, 0)$ be an analytic function and let $Z$ denote the zero set of $\nabla f$. Then there exist a neighbourhood $U$ of $0 \in \mathbb{R}^n$ and a constant $C > 0$ such that for any $x \in U$,
$$|\nabla f(x)| \le C \operatorname{dist}(x, Z).$$
Proof. Suppose, contrary to our claim, that for any neighbourhood $U$ and any constant $C > 0$ there exists $x \in U$ with $|\nabla f(x)| > C \operatorname{dist}(x, Z)$. This contradicts the local Lipschitz condition for $\nabla f$.
where $A_1, A_2, A_3 > 0$ are some positive constants and $U \subset \mathbb{R}^n$ is some neighbourhood of the origin. Now we will prove (3). Let us take $k \in \mathbb{N}_0^n$ and let $|k| = m$. First, consider the case when $m$ is even.
Note that for $m \ge j \ge \frac{1}{2}m + 1$ and for any sequence $i_1, \dots, i_j \in \mathbb{N}_0^n$, $|i_k| \ge 1$, such that $|i_1| + \dots + |i_j| = m$, there exist at least $2j - m$ elements of this sequence whose moduli are equal to $1$. Therefore we may assume that $|i_{m-j+1}| = \dots = |i_j| = 1$ for $m \ge j \ge \frac{1}{2}m + 1$. From this and (2) we obtain the required estimate, where $B_1, B_2, B_3$ are some positive constants. Let us now consider the case when $m$ is odd. Note that for $m \ge j \ge \frac{1}{2}(m + 1)$ and for any sequence $i_1, \dots, i_j \in \mathbb{N}_0^n$, $|i_k| \ge 1$, such that $|i_1| + \dots + |i_j| = m$, there exist at least $2j - m$ elements of this sequence whose moduli are equal to $1$. Knowing this fact, similarly as before, we obtain the corresponding estimate for some positive constants $B_4, B_5$. This proves (3).
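Since the moduli $|i_k|$ are just positive integers summing to $m$, the counting claim above reduces to the statement that any $j$ positive integers with sum $m$ contain at least $2j - m$ ones (if $t$ of them equal $1$, the remaining $j - t$ are at least $2$, so $m \ge t + 2(j - t) = 2j - t$). A brute-force check for small $m$, as an illustration only:

```python
# Brute-force check of the counting claim: if i_1, ..., i_j are positive
# integers with i_1 + ... + i_j = m, then at least 2j - m of them equal 1.

def compositions(m, j):
    """All tuples of j positive integers summing to m."""
    if j == 1:
        yield (m,)
        return
    for first in range(1, m - j + 2):
        for rest in compositions(m - first, j - 1):
            yield (first,) + rest

ok = all(
    c.count(1) >= 2 * j - m
    for m in range(2, 9)
    for j in range(m // 2 + 1, m + 1)
    for c in compositions(m, j)
)
print(ok)  # True
```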

Proof of Main Theorem
Let $Z$ be the zero set of $\nabla f$ and let $U \subset \mathbb{R}^n$ be a neighbourhood of $0$ on which $f$ and $g$ are well defined. By Lemma 2 there exists a positive constant $A$ such that
$$(5)\qquad |\nabla f(x)| \le A \operatorname{dist}(x, Z) \quad \text{for } x \in U.$$
Define the function $F : \mathbb{R} \times U \to \mathbb{R}$ by the formula
$$F(\xi, x) = f(x) + \xi\,(g(x) - f(x)),$$
and put $G = (-2, 2) \times U$. Since $g - f \in (f)^{r+2}$ and $r \ge 1$, from Lemma 1 and (1), diminishing $U$ if necessary, we get that there exists a constant $C_1 > 0$ such that
$$(6)\qquad |g(x) - f(x)| \le C_1 |\nabla f(x)|^{r+2} \quad \text{for } x \in U.$$
Moreover, from the definition of $F$ we get at once that there exists a positive constant $C_3$ such that
$$(7)\qquad |\nabla f(x)| \ge C_3 |\nabla F(\xi, x)| \quad \text{for } (\xi, x) \in G.$$
Now we will show that the mapping $X : G \to \mathbb{R} \times \mathbb{R}^n$ defined by
$$X(\xi, x) = \begin{cases} \dfrac{g(x) - f(x)}{|\nabla F(\xi, x)|^2}\, \nabla F(\xi, x) & \text{for } x \notin Z, \\[1ex] 0 & \text{for } x \in Z, \end{cases}$$
is a $C^r$ mapping. The proof of this fact will be divided into several steps.

Step 1. The mapping $X$ is continuous in $G$.

Indeed, let us fix $\xi$. Then for $x \in U$, $x \notin Z$, from (1) and Lemma 1 we have
$$(8)\qquad |X(\xi, x)| \le A_1 |\nabla f(x)|^{r+1} \le A' \operatorname{dist}(x, Z)$$
for some positive constants $A_1$, $A'$. The above inequality also holds for $x \in Z$. Since $A'$ does not depend on the choice of $\xi$, we obtain (8) for all $(\xi, x) \in G$. Therefore $X$ is continuous in $G$.
Summing up, from Steps 1, 2 and 3 we obtain that the components $X_i$ are $C^r$ functions in $G$. Therefore $X$ is a $C^r$ mapping in $G$.
Diminishing $U$ if necessary, we may assume that $A' \operatorname{dist}(x, Z) < \frac{1}{2}$ for $x \in U$. From (8) we obtain $|X(\xi, x)| < \frac{1}{2}$ for $(\xi, x) \in G$; in particular, $X_1(\xi, x) - 1 \neq 0$ in $G$. Hence the field
$$W(\xi, x) = \frac{1}{X_1(\xi, x) - 1}\,\bigl(X_2(\xi, x), \dots, X_{n+1}(\xi, x)\bigr)$$
is well defined and it is a $C^r$ mapping. Consider the following system of ordinary differential equations:
$$(12)\qquad \frac{dy}{dt} = W(t, y).$$
Since $r \ge 1$, $W$ is at least of class $C^1$ on $G$, so it is a locally Lipschitz vector field. As a consequence, the above system has the uniqueness-of-solutions property in $G$. Since $y_0(t) = 0$, $t \in (-2, 2)$, is one of the solutions of (12), this implies the existence of a neighbourhood $U \subset \mathbb{R}^n$ of $0$ such that every integral solution $y_x$ of (12) with $y_x(0) = x$, where $x \in U$, is defined at least on $[0, 1]$. Now let us define a mapping $\varphi : U \to \mathbb{R}^n$ by the formula
$$\varphi(x) = y_x(1),$$
where $y_x$ stands for the integral solution of (12) with $y_x(0) = x$. The mapping $\varphi$ is a $C^r$ bijection and gives a $C^r$ diffeomorphism of some neighbourhood of the origin. Indeed, considering solutions $\bar{y}_x : [0, 1] \to \mathbb{R}^n$ of (12) with $\bar{y}_x(1) = x$, where $x$ is from some neighbourhood of the origin, we get $\varphi(\bar{y}_x(0)) = x$. Similar reasoning shows that the mapping $x \mapsto \bar{y}_x(0)$ is of class $C^r$ in a neighbourhood of the origin. Consequently, $\varphi : (\mathbb{R}^n, 0) \to (\mathbb{R}^n, 0)$ is a $C^r$ diffeomorphism and maps a neighbourhood of the origin onto a neighbourhood of the origin. Finally, note that for any $x \in U$ the function $t \mapsto F(t, y_x(t))$ is constant on $[0, 1]$. Indeed, from the definition of $W$ we derive the formula
$$[1, W(\xi, x)] = \frac{1}{X_1(\xi, x) - 1}\,\bigl(X(\xi, x) - e_1\bigr),$$
where $e_1 = [1, 0, \dots, 0] \in \mathbb{R}^{n+1}$ and $[1, W] : G \to \mathbb{R} \times \mathbb{R}^n$. Thus, if we denote by $\langle a, b \rangle$ the scalar product of two vectors $a$, $b$, then for $t \in [0, 1]$ we have
$$\frac{dF(t, y_x(t))}{dt} = \bigl\langle (\nabla F)(t, y_x(t)),\, [1, W(t, y_x(t))] \bigr\rangle = \frac{1}{X_1(t, y_x(t)) - 1} \Bigl( \bigl\langle (\nabla F)(t, y_x(t)),\, X(t, y_x(t)) \bigr\rangle - \frac{\partial F}{\partial \xi}(t, y_x(t)) \Bigr) = \frac{1}{X_1(t, y_x(t)) - 1} \bigl( g(y_x(t)) - f(y_x(t)) - g(y_x(t)) + f(y_x(t)) \bigr) = 0.$$
Hence
$$f(x) = F(0, y_x(0)) = F(1, y_x(1)) = g(\varphi(x))$$
for $x \in U$. This ends the proof.
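The mechanism of the proof can be illustrated numerically in one variable. Below we use a simplified Thom-type homotopy field, our own sketch rather than the exact field $W$ constructed above: for $F(t, x) = f(x) + t(g(x) - f(x))$, along solutions of $y' = -(g(y) - f(y))/\frac{\partial F}{\partial x}(t, y)$ the value $F(t, y(t))$ stays constant, so the time-one map $\varphi$ satisfies $f = g \circ \varphi$.

```python
# Numerical sketch (one variable; a simplified Thom-type homotopy field,
# not literally the field W of the proof): f(x) = x^2, r = 1,
# g = f + f^3, F(t, x) = f(x) + t*(g(x) - f(x)) = x^2 + t*x^6.
# Along dy/dt = -(g(y) - f(y)) / dF/dx(t, y), F(t, y(t)) is constant.

def f(x):
    return x ** 2

def g(x):
    return x ** 2 + x ** 6

def Fx(t, x):                     # dF/dx for F(t, x) = x^2 + t*x^6
    return 2 * x + 6 * t * x ** 5

def rhs(t, x):                    # the homotopy vector field
    return -(g(x) - f(x)) / Fx(t, x)

def flow(x0, steps=2000):         # classical RK4 from t = 0 to t = 1
    h = 1.0 / steps
    t, y = 0.0, x0
    for _ in range(steps):
        k1 = rhs(t, y)
        k2 = rhs(t + h / 2, y + h / 2 * k1)
        k3 = rhs(t + h / 2, y + h / 2 * k2)
        k4 = rhs(t + h, y + h * k3)
        y += h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
        t += h
    return y

x0 = 0.5
phi = flow(x0)
print(phi, g(phi) - f(x0))        # g(phi(x0)) ≈ f(x0)
```

Here `flow` is a stand-in for the time-one map $\varphi$ of the proof; the conserved quantity $F$ forces $g(\varphi(x_0)) = f(x_0)$ up to integration error.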

Remark
In the Main Theorem we cannot omit the assumption that the functions $f$ and $g$ are analytic. This follows from the fact that the Łojasiewicz gradient inequality holds for analytic functions but may fail for arbitrary smooth functions.
Note that the condition $g - f \in (f)^{r+2}$ in the Main Theorem can be equivalently stated as $g = f(hf^{r+1} + 1)$, where $h : (\mathbb{R}^n, 0) \to \mathbb{R}$ is an analytic function. It seems natural to try to replace this condition by $g = hf$, where $h : (\mathbb{R}^n, 0) \to \mathbb{R}$ is an analytic function such that $h(0) \neq 0$. But then the theorem would not hold. Indeed, let $f(x) = x^2$, $g(x) = -x^2$ and $h(x) = -1$; then $g = hf$, but $f$ and $g$ are not right equivalent, since $f$ attains positive values while $g \circ \varphi \le 0$ for any map $\varphi$.
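The counterexample is easy to verify directly, as the following sanity check illustrates:

```python
# Sanity check of the counterexample: g = h*f with h ≡ -1, yet f and g
# cannot be right equivalent, since f takes positive values while g <= 0.
def f(x):
    return x ** 2

def g(x):
    return -(x ** 2)

def h(x):
    return -1.0

samples = [i / 10 for i in range(-10, 11)]
assert all(g(x) == h(x) * f(x) for x in samples)        # g = h*f holds
print(max(f(x) for x in samples), max(g(x) for x in samples))
# f reaches positive values; g never exceeds 0, so f != g ∘ phi for any phi
```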