SOLVING UNCONSTRAINED FUZZY PROBLEMS USING IH DIFFERENTIABILITY

In this paper, we develop a new notion of differentiability for fuzzy valued functions, termed improved Hukuhara (iH) differentiability, and use it to solve unconstrained fuzzy optimization problems. The advantage of iH differentiability over generalized Hukuhara (gH) differentiability is that it provides a unique solution to fuzzy optimization problems. We use the Newton approach to find the non-dominated solution, and we provide an example to demonstrate the proposed method.


INTRODUCTION
Zadeh [16] first proposed fuzzy set theory in 1965. Various scholars have explored fuzzy sets since then, and many applications have been discovered. Fuzzy optimization is one of them; it deals with uncertainty in optimization problems. Bellman and Zadeh [1] introduced the notion of fuzzy optimization problems in 1970, in which they examined decision-making in a fuzzy environment. Following that, a great deal of work has been done in this area; we mention a few contributions here. Delgado et al. [3], Kacprzyk [7], and Rommelfanger [11] provide comprehensive overviews of fuzzy optimization, including recent developments and applications.
In fuzzy optimization problems, differentiability is very significant. As a result, several scholars have investigated fuzzy differentiability in various ways. The concept of fuzzy Hukuhara (H) differentiability was proposed by Puri et al. [10]. To solve fuzzy differential equations, Ding [4], Kaleva [6], and Seikkala [12] used H differentiability. Wu [15] employed H differentiability to identify optimality criteria for fuzzy optimization problems. Bede and Stefanini [13] established a more general approach for fuzzy valued functions termed generalized Hukuhara (gH) differentiability. However, gH differentiability has the drawback of not providing a unique solution to the problem.
H differentiability of fuzzy functions is a highly restrictive concept. Consider a function F̃ : X → F_R, where F_R denotes the set of all fuzzy numbers on R. Let F̃(x) = c·g(x), with c ∈ F_R and g(x) ∈ R. This function is not H differentiable if g′(x) < 0. However, the function above is gH differentiable. The only drawback of gH differentiability is that it does not give a single value. To address this issue, we introduce a new notion known as the improved Hukuhara derivative; its benefit is that it provides a unique answer.
To obtain the solution for the function F̃(x) = c·g(x) when g′(x) < 0, we use the improved Hukuhara derivative instead of the gH derivative.
The authors of [9] used Hukuhara differentiability. However, the examples offered in that paper do not meet the conditions of Theorem 4.1 in [9]. In addition, the objective functions in those examples are not H differentiable. As a result, we can deduce that the solutions obtained in [9] are not non-dominated.
We propose a new notion termed improved Hukuhara (iH) differentiability in this study. Using the Newton technique and iH differentiability, we find non-dominated solutions to fuzzy optimization problems. In this work we revisit the examples in [9]; their objective functions are iH differentiable, and each problem has a non-dominated solution.
PRELIMINARIES
2. The value of ũ(λx + (1 − λ)y) should be greater than or equal to min{ũ(x), ũ(y)} for all x, y ∈ R and λ ∈ [0, 1] (fuzzy convexity). Let K_c denote the space of all compact intervals in R. The α-level of a fuzzy number is a compact interval for every α ∈ (0, 1]. For fuzzy numbers ũ, ṽ ∈ F_R, the α-levels ũ_α and ṽ_α are denoted by [u_α, ū_α] and [v_α, v̄_α], respectively; for any real number λ and α ∈ [0, 1], we define the arithmetic operations using α-level sets. A metric on F_R is given by D(ũ, ṽ) = sup_{α∈[0,1]} d_H(ũ_α, ṽ_α) for all ũ, ṽ ∈ F_R, which is well defined since ũ_α, ṽ_α are compact intervals in R.
If there exists a sequence (c_n) of positive reals satisfying (1) lim_{n→∞} c_n = 0, then we say that a_0 is a cluster point of F̃ on the right of x_0. The set of cluster points of F̃ on the right of x_0 is denoted by C_{R(x_0)}(F̃). Similarly, we can define the cluster points of F̃ on the left of x_0; their set is denoted by C_{L(x_0)}(F̃).
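As an illustration of these α-level operations, the following is our own sketch (not code from the paper); the triangular representation of fuzzy numbers and the finite α-grid used in the sup-metric are assumptions made for the example:

```python
# Sketch (ours): alpha-level interval arithmetic for triangular fuzzy numbers
# and a finite-grid approximation of the sup-metric D(u, v).

def alpha_cut(a, b, c, alpha):
    """Alpha-level [u_lo, u_hi] of the triangular fuzzy number (a, b, c)."""
    return (a + alpha * (b - a), c - alpha * (c - b))

def add(u, v):
    """Interval addition of two alpha-levels."""
    return (u[0] + v[0], u[1] + v[1])

def scale(lam, u):
    """Scalar multiplication of an alpha-level; endpoints swap if lam < 0."""
    lo, hi = lam * u[0], lam * u[1]
    return (min(lo, hi), max(lo, hi))

def hausdorff(u, v):
    """Hausdorff distance between two compact intervals of R."""
    return max(abs(u[0] - v[0]), abs(u[1] - v[1]))

def metric(tri_u, tri_v, n=101):
    """Approximate D(u, v) = sup_alpha d_H(u_alpha, v_alpha) on an alpha-grid."""
    return max(
        hausdorff(alpha_cut(*tri_u, k / (n - 1)), alpha_cut(*tri_v, k / (n - 1)))
        for k in range(n)
    )
```

For instance, the α = 1 cut of (−1, 0, 1) collapses to the single point {0}, and scaling by a negative λ swaps the interval endpoints, which is exactly the behaviour that makes the H-difference delicate later on.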
Definition 2.5. Let F̃ : X → F_R be a fuzzy valued function and x_0 ∈ X. Define the function ψ_F̃(h), where h satisfies x_0 + h ∈ X. The function ψ_F̃(h) is called the slope function of secants at x_0.
Definition 2.6. Let F̃_1 and F̃_2 be two fuzzy functions, well defined in (x_0, x_0 + δ). Suppose the two functions satisfy the following conditions:

DIFFERENTIABILITY OF FUZZY VALUED FUNCTIONS
Let X be a subset of R^n. A function F̃ : X → F_R is said to be a fuzzy function; for each α ∈ [0, 1] we define the family of interval valued functions F̃_α : X → K_c, where the endpoint functions f̄_α and f_α are called the upper and lower functions of F̃, respectively.
Consider the function F̃(x) = c·g(x), where c ∈ F_R and g(x) ∈ R. If g′(x) < 0, then this function is not H differentiable.
However, the given function is gH differentiable, and we may use gH differentiability to determine the answer. The only drawback of gH differentiability is that it does not provide a unique solution. As a result, we developed an improved Hukuhara derivative (iH derivative). The given function is iH differentiable, and we can use iH differentiability to obtain the answer; the key benefit of iH differentiability is that it provides a unique solution.
Definition 3.2. A function F̃ : X → F_R is said to be iH differentiable at x_0 ∈ X if there exists an element F̃′(x_0) ∈ F_R such that, for all h > 0 sufficiently small, the corresponding limits exist and equal F̃′(x_0).
For the function F̃(x) = c·g(x) with g′(x) < 0, the Hukuhara derivative does not exist. Thus we propose the new derivative: if g′(x) < 0, the function is iH differentiable, and we solve it as follows. Since g′(x) < 0, the minimum and the maximum values are −c̄_α g′(x) and −c_α g′(x).
The quantity g′(x) is negative; thus the minimum and maximum values are −c̄_α g′(x) and −c_α g′(x).
Thus the left-hand limit and the right-hand limit are equal.
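The endpoint computation for F̃(x) = c·g(x) can be sketched at the α-level as follows; this is our own reconstruction, assuming g(x) > 0 so that F̃_α(x) = [c_α, c̄_α]g(x), with the underline/overline notation ours:

```latex
% Sketch (ours): endpoint derivatives of \tilde{F}(x) = c\,g(x), assuming
% g(x) > 0 and c_\alpha = [\underline{c}_\alpha, \overline{c}_\alpha].
\tilde{F}_\alpha(x) = \bigl[\underline{c}_\alpha\, g(x),\ \overline{c}_\alpha\, g(x)\bigr]
\quad\Longrightarrow\quad
\frac{d}{dx}\tilde{F}_\alpha(x) =
\begin{cases}
\bigl[\underline{c}_\alpha\, g'(x),\ \overline{c}_\alpha\, g'(x)\bigr], & g'(x) \ge 0,\\[4pt]
\bigl[\overline{c}_\alpha\, g'(x),\ \underline{c}_\alpha\, g'(x)\bigr], & g'(x) < 0.
\end{cases}
```

In the case g′(x) < 0 the endpoints swap, which is precisely why the classical H-difference, and hence the H derivative, fails to exist there.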
Theorem 3.1. Suppose the function F̃ : X → F_R is iH differentiable; then so is the interval valued function F̃_α for each α ∈ [0, 1]. Proof. The proof is obvious from the definition of iH differentiability.
Theorem 3.2. F̃ : X → F_R is iH differentiable at x_0 if and only if one of the following cases holds. Next we explain the partial derivatives of F̃ on X ⊂ R^n: if the corresponding limit in the i-th coordinate exists, then clearly F̃ has the i-th partial iH derivative at x_0. If F̃ is iH differentiable at x_0, then (∂F̃/∂x_i)(x_0) is a fuzzy number; thus for all α ∈ [0, 1] we denote its α-level accordingly. Now we state a proposition which will be used in our main result.
Proof. The proof follows directly from Theorem 3.2. Now we can define the gradient of a fuzzy function.
Definition 3.5. The gradient of F̃ : X → F_R at x_0, ∇̃F̃(x_0), is defined componentwise, where (∂F̃/∂x_j)(x_0) denotes the j-th partial iH derivative of F̃ at x_0. Here ∇̃F̃(x) denotes the n-dimensional fuzzy vector; ∇̃ represents the gradient of a fuzzy function, whereas ∇ represents the gradient of a real valued function.
Definition 3.6. Let F̃ : X ⊂ R^n → F_R be a fuzzy function. Assume that x_0 ∈ X is such that ∇̃F̃ is itself iH differentiable at x_0, i.e., for each i, ∂F̃/∂x_i : X → F_R is iH differentiable at x_0. The iH partial derivative of ∂F̃/∂x_i with respect to x_j is denoted by ∂²F̃/∂x_i∂x_j. If F̃ is twice iH differentiable at each x_0 ∈ X, then we say that F̃ is twice iH differentiable on X; if, in addition, for each i, j = 1, 2, ..., n the cross partial derivative ∂²F̃/∂x_i∂x_j is continuous from X to F_R, then we say that F̃ is twice continuously iH differentiable on X.
Next we define p-times continuous iH differentiability of fuzzy valued functions similarly to the above definition.
That is, F̃ : X → F_R is p-times continuously iH differentiable on X if and only if all partial iH derivatives of order p ∈ N exist and are continuous.
If F̃ is iH differentiable, then f_α and f̄_α need not be differentiable. But by Proposition 3.1, f_α + f̄_α is always differentiable for all α ∈ [0, 1]. The analogous property holds for p-times iH differentiability of f_α + f̄_α.
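To illustrate this point numerically (our own sketch, not an example from the paper), take F̃(x) = (−1, 0, 1)x: each endpoint function has a kink at 0, yet their sum vanishes identically and is therefore differentiable everywhere:

```python
# Illustration (ours): for F(x) = (-1, 0, 1) * x the endpoint functions are
# f_lo(x) = (alpha - 1)|x| and f_hi(x) = (1 - alpha)|x|; each has a kink at 0,
# but their sum is identically zero, hence smooth.

def f_lo(x, alpha):
    """Lower endpoint of the alpha-level of (-1, 0, 1) * x."""
    return (alpha - 1) * abs(x)

def f_hi(x, alpha):
    """Upper endpoint of the alpha-level of (-1, 0, 1) * x."""
    return (1 - alpha) * abs(x)

def one_sided_slope(f, x, h):
    """Difference quotient from x in the direction of h."""
    return (f(x + h) - f(x)) / h

alpha = 0.5
h = 1e-6
left = one_sided_slope(lambda x: f_lo(x, alpha), 0.0, -h)   # slope from the left
right = one_sided_slope(lambda x: f_lo(x, alpha), 0.0, h)   # slope from the right
# left and right slopes of f_lo at 0 disagree, so f_lo is not differentiable
# there; the sum f_lo + f_hi is the zero function and is differentiable.
s = lambda x: f_lo(x, alpha) + f_hi(x, alpha)
```

This is exactly the situation the proposition covers: neither endpoint function is differentiable at the kink, but their sum is.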
Proof. It is clear from Proposition 3.1.
Since ≺, ⪯ denote the partial orderings on F_R, we may follow the same solution procedure used in multiobjective problems. The fuzzy valued optimization problem is written as
min F̃(x), x ∈ X ⊂ R^n,  (4.4)
where F̃ : X → F_R.
Definition 4.1. A point x* ∈ X ⊂ R^n is a non-dominated solution of the fuzzy optimization problem (4.4) if there exists no x ∈ X such that F̃(x) ≺ F̃(x*).
Next we discuss the drawbacks of a theorem proved by Pirzada and Pathak; this theorem is applied in all the problems of [9].
Theorem 4.1 ([9]). Let F̃ : X → F_R be a fuzzy function. If x* is a locally non-dominated solution of (4.4) and, for any direction d and any δ > 0, there exists λ ∈ (0, δ) such that F̃(x* + λd) and F̃(x*) are comparable, then x* is a local minimizer of the real valued functions f_α and f̄_α for all α ∈ [0, 1].
For example, consider the function F̃(x) = (−1, 0, 1)x. We can see that the above conditions are not satisfied. Moreover, for all x ≠ y, F̃(x) and F̃(y) are not comparable. But 0 is a non-dominated solution of F̃.
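The incomparability claim can be checked numerically; the following is our own sketch, with comparability taken as endpointwise ordering of the α-levels on a finite α-grid (an assumption made for the example):

```python
# Sketch (ours): two fuzzy values are taken as comparable when one dominates
# the other endpointwise at every alpha-level. For F(x) = (-1, 0, 1) * x the
# alpha-level is [-(1 - alpha), (1 - alpha)] * |x|, so the levels of F(x) and
# F(y) nest rather than order when |x| != |y|, making them incomparable.

def level(x, alpha):
    """Alpha-level of F(x) = (-1, 0, 1) * x as a sorted interval."""
    lo, hi = (alpha - 1) * x, (1 - alpha) * x
    return (min(lo, hi), max(lo, hi))

def comparable(x, y, alphas):
    """True if F(x) <= F(y) endpointwise on all alphas, or the reverse."""
    le = all(level(x, a)[0] <= level(y, a)[0] and level(x, a)[1] <= level(y, a)[1]
             for a in alphas)
    ge = all(level(x, a)[0] >= level(y, a)[0] and level(x, a)[1] >= level(y, a)[1]
             for a in alphas)
    return le or ge

alphas = [k / 10 for k in range(11)]
# At alpha = 0.5, the level of F(1) is [-0.5, 0.5] and that of F(2) is [-1, 1]:
# one interval sits strictly inside the other, so neither dominates.
```

Since no x yields F̃(x) with both endpoints below those of F̃(0) = 0̃ at every level, no point dominates 0, consistent with 0 being non-dominated.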
Again, the statement in Theorem 4.1 that x* is a local minimizer of the real valued functions f_α and f̄_α for all α ∈ [0, 1] is restrictive. By this condition, x* is an ideal point, and it is very difficult to find an ideal point for a fuzzy valued function. Thus none of the examples considered in [9] has an ideal point. Consider an example in [9].
Thus we rewrite the above statement using the sum of the endpoint functions.
If x* is a local minimizer of f_α + f̄_α for all α ∈ [0, 1], then x* is a locally non-dominated solution of (4.4).
Proof. Assume that x* is not a locally non-dominated solution of (4.4). Then there exists x ∈ N_ε(x*) such that F̃(x) ≺ F̃(x*). Thus there exists α* ∈ [0, 1] such that f_{α*}(x) + f̄_{α*}(x) < f_{α*}(x*) + f̄_{α*}(x*), contradicting the assumption that x* is a local minimizer of f_α + f̄_α. Hence the proof.

NEWTON METHOD TO FIND THE NON DOMINATED SOLUTION
Here we define the Newton method for finding a non-dominated solution of (4.4). Suppose that at each iterate x^(k) we can evaluate F̃(x^(k)), ∇̃F̃(x^(k)) and ∇̃²F̃(x^(k)). Using Proposition 1 and Proposition 2 we can compute the corresponding real valued quantities, and by Taylor's formula the quadratic real valued function h_α can be obtained; given x^(k), we approximate a minimizer of f_α + f̄_α by finding a minimizer of h_α for all α ∈ [0, 1]. From the first order necessary condition for h_α, we have ∇h_α(x*) = 0 for all α ∈ [0, 1].
Using (5.3) we can find the stationary points of f_α + f̄_α for all α ∈ [0, 1]. To check whether these points are minimizers of f_α + f̄_α, we need the second order sufficient condition, or convexity or generalized convexity of f_α + f̄_α for all α ∈ [0, 1].
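As a sketch of this procedure (our own illustration, not an example from [9]): for the hypothetical problem min F̃(x) with F̃(x) = (1, 2, 3)x², the α-level of F̃(x) is [(1+α)x², (3−α)x²], so f_α(x) + f̄_α(x) = 4x² for every α, and the Newton iteration on this sum can be written as:

```python
# Sketch (ours): Newton iteration on phi(x) = f_lo(x) + f_hi(x) for the
# hypothetical problem min F(x), F(x) = (1, 2, 3) * x**2, where the alpha-level
# sum is phi(x) = 4 x^2 for every alpha.

def newton(grad, hess, x0, tol=1e-10, max_iter=50):
    """Classical 1-D Newton iteration for a stationary point grad(x) = 0."""
    x = x0
    for _ in range(max_iter):
        step = grad(x) / hess(x)
        x -= step
        if abs(step) < tol:
            break
    return x

phi_grad = lambda x: 8.0 * x   # phi'(x) for phi(x) = 4 x^2
phi_hess = lambda x: 8.0       # phi''(x), constant and positive

x_star = newton(phi_grad, phi_hess, x0=1.0)
# phi'' > 0, so the stationary point is a minimizer of f_lo + f_hi for every
# alpha, and hence a locally non-dominated candidate for the fuzzy problem.
```

In this particular example the minimizer is independent of α; in general one would run the iteration on f_α + f̄_α for each α of interest and then verify the second order condition level by level.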

Convergence.
Now we discuss the convergence of the above method.
Theorem 5.1. Assume F̃ is three times continuously iH differentiable on R^n and x* ∈ R^n is a point at which the required conditions hold; then for every x^(0) sufficiently close to x*, the Newton method is well defined for all k and converges to x* with order of convergence at least 2.
Proof. Since the fuzzy function F̃ is three times continuously iH differentiable, φ is a three times continuously differentiable function. Thus the proof is the same as the proof for the classical Newton method.

Examples.
In this section we consider the examples given in [9] and correct them using the proposed method.
The endpoint functions f_α and f̄_α are defined accordingly; we can clearly see that the endpoint functions obtained here are not differentiable, and hence F̃ is not H differentiable. Thus we cannot apply the method given by Pirzada and Pathak [9].

CONCLUSION
We introduced a new notion called iH differentiability of fuzzy valued functions in this study.
The advantage of iH differentiability over gH differentiability is that it provides a unique solution. We also examined the examples in [9] and found that the objective functions are not H differentiable; as a result, the Newton approach of Pirzada and Pathak [9] cannot be applied. However, the same examples are iH differentiable, and the proposed method yields a non-dominated solution to these fuzzy valued problems.