A Preconditioning Algorithm for the Positive Solution of Fully Fuzzy Linear System



Introduction
When information is imprecise and only vague knowledge about the actual values of the parameters is available, it is convenient to make use of fuzzy numbers [13]. One of the major applications of fuzzy number arithmetic is the solution of linear systems whose parameters are all or partially represented by fuzzy numbers. Systems of fully fuzzy linear equations (FFLS) have attracted growing interest in recent years. Many financial problems, for instance option valuation or portfolio selection, are modeled as linear systems. In the presence of uncertainty, those financial models are said to operate as FFLS [9]. In this paper, the notion of a fuzzy matrix is of central importance; we follow the definition proposed in [4], that is, a matrix with fuzzy numbers as its elements.
In [5], a general model is introduced for solving n × n fuzzy linear systems whose coefficient matrix is crisp and whose right-hand side is an arbitrary fuzzy vector. Friedman et al. solved the fuzzy linear system by first reducing it to a 2n × 2n crisp linear system. A review of methods for solving such systems can be found in [10]. Here we consider another kind of fuzzy linear system, in which all the parameters are fuzzy numbers. Recently, in [1,2], computational methods such as Cramer's rule, Gaussian elimination, LU decomposition and the Adomian decomposition method have been used for solving FFLS. Iterative techniques for the solution of FFLS are presented in [3], where the Richardson, Jacobi, Jacobi over-relaxation, Gauss-Seidel, SOR, accelerated over-relaxation, symmetric and unsymmetric successive over-relaxation and extrapolated modified Aitken methods are studied. For other methods to solve FFLS, one may refer to [8].
Here, we consider FFLS of the form Ψ ⊗ ξ = φ, where Ψ = (Φ, Λ, Θ) and the modal value (center) matrix Φ is positive definite. We consider the use of the conjugate gradient method with a preconditioning technique for approximating the positive solution of the above FFLS. We demonstrate with examples that the preconditioned algorithm converges to the exact solution more rapidly than the iterative Jacobi, Gauss-Seidel and SOR methods. This paper is structured as follows. In the next section, we give some preliminaries concerning fuzzy set theory. In Section 3, the new procedure based on preconditioning is introduced. Numerical examples are presented in Section 4 to illustrate the efficiency of the method.

Preliminaries
In this section, we present some brief background and notions of fuzzy set theory [4,10].

Fuzzy Numbers
Definition 2.1. Assume Ω to be a universal set; then a fuzzy subset Φ of Ω is defined by its membership function µ_Φ : Ω → [0, 1], where the value µ_Φ(ω) at ω shows the grade of membership of ω in Φ.
A fuzzy subset Φ can be characterized as a set of ordered pairs of elements ω and grades µ_Φ(ω), and is often described as Φ = {(ω, µ_Φ(ω)) : ω ∈ Ω}. A fuzzy set Φ in Ω is said to be normal if there exists ω_0 ∈ Ω such that µ_Φ(ω_0) = 1.

Definition 2.2. An arbitrary fuzzy number may be represented by a pair of functions (η(r), η̄(r)), 0 ≤ r ≤ 1, which satisfies the following conditions:
(i) η(r) is a bounded left-continuous nondecreasing function over [0, 1];
(ii) η̄(r) is a bounded left-continuous nonincreasing function over [0, 1];
(iii) η(r) ≤ η̄(r), 0 ≤ r ≤ 1.
The set of all fuzzy numbers is a convex cone which is embedded isomorphically and isometrically into a Banach space. When η(r) = η̄(r), the fuzzy number is simply referred to as a crisp number.
On the other hand, a fuzzy matrix is a matrix Ψ = (ψ_ij) whose entries ψ_ij are fuzzy numbers. The matrix Ψ is positive if each of its elements is positive. Also, an n × n fuzzy matrix Ψ may be represented as Ψ = (Φ, Λ, Θ), where the crisp matrices Φ = (ϕ_ij), Λ = (λ_ij) and Θ = (θ_ij) are called the modal value (center) matrix and the left and right spread matrices, respectively.

Arithmetic Operations on Fuzzy Numbers
Let F be the set of LR fuzzy numbers and let η = (ς, σ, β) and ζ = (ϱ, γ, κ) be two fuzzy numbers in F, written as (center, left spread, right spread). The following operations are valid [9]:
(i) addition: η ⊕ ζ = (ς + ϱ, σ + γ, β + κ);
(ii) approximate multiplication, for positive η and ζ: η ⊗ ζ ≈ (ςϱ, ςγ + ϱσ, ςκ + ϱβ).
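These triple-based rules can be checked with a few lines of code. The sketch below (ours, for illustration only; plain tuples stand for the triples (center, left spread, right spread)) implements the addition rule and the standard approximate product for positive LR fuzzy numbers:

```python
def f_add(a, b):
    # (ς, σ, β) ⊕ (ϱ, γ, κ) = (ς + ϱ, σ + γ, β + κ)
    return (a[0] + b[0], a[1] + b[1], a[2] + b[2])

def f_mul(a, b):
    # For positive fuzzy numbers: (ς, σ, β) ⊗ (ϱ, γ, κ) ≈ (ςϱ, ςγ + ϱσ, ςκ + ϱβ)
    return (a[0] * b[0],
            a[0] * b[1] + b[0] * a[1],
            a[0] * b[2] + b[0] * a[2])
```

For example, f_mul((2, 1, 1), (3, 1, 2)) gives (6, 5, 7): the centers multiply exactly, while the spreads combine linearly, which is what makes the FFLS decomposition of the next section possible.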

Fully Fuzzy Linear Systems
In this section we review the solution procedure for FFLS. Consider the n × n FFLS of the form

Ψ ⊗ ξ = φ, (3.1)

which, writing Ψ = (Φ, Λ, Θ), ξ = (ξ1, ξ2, ξ3) and φ = (φ1, φ2, φ3), may be represented as (Φ, Λ, Θ) ⊗ (ξ1, ξ2, ξ3) = (φ1, φ2, φ3). Assuming that the coefficient matrix and the solution vector in (3.1) are positive, then following §2.2 we obtain the crisp systems

Φξ1 = φ1,  Φξ2 + Λξ1 = φ2,  Φξ3 + Θξ1 = φ3. (3.2)

From (3.2), we find that once a solution for ξ1, say ξ1^sol, is obtained, solutions for ξ2 and ξ3 may be computed by solving the crisp linear systems Φξ2 = φ2 − Λξ1^sol and Φξ3 = φ3 − Θξ1^sol. In this paper, we assume that the crisp matrix Φ is positive definite, that is, for all nonzero vectors ϖ ∈ R^n we have ϖ^T Φϖ > 0, where ϖ^T denotes the transpose of ϖ. Positive definite matrices are of great importance in various applications. They are used, for instance, in optimization algorithms and in the construction of a wide variety of linear regression models [6]. We shall mention that the eigenvalues of a positive definite matrix are real and positive. In this respect, the next section describes the procedure for solving the crisp linear systems (3.2) when Φ is positive definite.
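Under the positivity assumptions, the whole solution process therefore reduces to three crisp solves with the same matrix Φ. A minimal sketch (ours; np.linalg.solve stands in here for the conjugate gradient solver developed in the next section):

```python
import numpy as np

def solve_ffls(Phi, Lam, The, phi1, phi2, phi3):
    """Solve (Phi, Lam, The) ⊗ (xi1, xi2, xi3) = (phi1, phi2, phi3), positive case."""
    xi1 = np.linalg.solve(Phi, phi1)               # Phi xi1 = phi1
    xi2 = np.linalg.solve(Phi, phi2 - Lam @ xi1)   # Phi xi2 = phi2 - Lam xi1
    xi3 = np.linalg.solve(Phi, phi3 - The @ xi1)   # Phi xi3 = phi3 - The xi1
    return xi1, xi2, xi3
```

Note that only the modal value matrix Φ ever appears as a coefficient matrix, which is why a single preconditioner for Φ accelerates all three solves.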

A New Approach For Solving FFLS
Consider the crisp linear system of the form

Φξ = φ, (3.3)

where Φ is a positive definite matrix of order n and ξ, φ ∈ R^(n×1). One of the methods for solving (3.3) is the conjugate gradient method [7], which can be summarized by the following algorithm.

The Conjugate Gradient Algorithm (CGA)
Input: Φ, φ, ξ^(0), k_max and ϵ
Output: An approximation to ξ
1. γ^(0) = φ − Φξ^(0), ρ^(0) = γ^(0)
2. For k = 0, 1, ..., k_max do
3. α^(k) = (γ^(k))^T γ^(k) / (ρ^(k))^T Φρ^(k)
4. ξ^(k+1) = ξ^(k) + α^(k) ρ^(k)
5. γ^(k+1) = γ^(k) − α^(k) Φρ^(k)
6. Terminate the iteration process if ∥γ^(k+1)∥/∥φ∥ < ϵ
7. β^(k) = (γ^(k+1))^T γ^(k+1) / (γ^(k))^T γ^(k)
8. ρ^(k+1) = γ^(k+1) + β^(k) ρ^(k)
9. End for
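The paper's experiments are run in MATLAB; purely as an illustrative transcription (names are ours), the conjugate gradient iteration can be sketched in NumPy as follows:

```python
import numpy as np

def cga(Phi, phi, xi0, kmax=1000, eps=1e-10):
    """Conjugate gradient sketch for a symmetric positive definite Phi."""
    xi = xi0.astype(float).copy()
    gamma = phi - Phi @ xi            # residual gamma^(0)
    rho = gamma.copy()                # conjugate direction rho^(0)
    phi_norm = np.linalg.norm(phi)
    for _ in range(kmax):
        if np.linalg.norm(gamma) / phi_norm < eps:
            break                     # relative residual below tolerance
        Phi_rho = Phi @ rho
        alpha = (gamma @ gamma) / (rho @ Phi_rho)
        xi = xi + alpha * rho         # update the approximation
        gamma_new = gamma - alpha * Phi_rho
        beta = (gamma_new @ gamma_new) / (gamma @ gamma)
        rho = gamma_new + beta * rho  # new conjugate direction
        gamma = gamma_new
    return xi
```

In exact arithmetic this iteration terminates in at most n steps; in floating point, the stopping test on the relative residual governs the actual iteration count.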
In CGA, ξ^(0) is the starting vector of the process, with k_max and ϵ being the maximum number of iterations and the convergence tolerance, respectively. The quantity ρ^(k+1) is known as the conjugate direction and γ^(k+1) is the residual vector of the linear system (3.3); (ρ^(k))^T denotes the transpose of the vector ρ^(k). It can be verified that the number of iterations required for the residual norm to satisfy the convergence tolerance ϵ is proportional to the square root of the condition number of the matrix Φ, that is, √cond(Φ). For descent methods [7], it is well known that the speed of convergence increases as cond(Φ) → 1. In this respect, to accelerate the convergence of CGA, we replace the solution of the system (3.3) by that of an equivalent system Υ^(−1)Φξ = Υ^(−1)φ such that cond(Υ^(−1)Φ) is as close as possible to one. This process is commonly known as preconditioning.
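In practice, Υ^(−1) is applied implicitly to the residual at each step rather than forming Υ^(−1)Φ. The text does not specify which preconditioner Υ the PCGA of the experiments uses, so the sketch below (ours) takes the simple Jacobi choice Υ = diag(Φ) purely for illustration:

```python
import numpy as np

def pcga(Phi, phi, xi0, kmax=1000, eps=1e-10):
    """Preconditioned CG sketch with the Jacobi choice Upsilon = diag(Phi)."""
    d = np.diag(Phi)                  # applying Upsilon^{-1} = division by the diagonal
    xi = xi0.astype(float).copy()
    gamma = phi - Phi @ xi            # residual of the original system
    z = gamma / d                     # preconditioned residual Upsilon^{-1} gamma
    rho = z.copy()
    phi_norm = np.linalg.norm(phi)
    for _ in range(kmax):
        if np.linalg.norm(gamma) / phi_norm < eps:
            break
        Phi_rho = Phi @ rho
        alpha = (gamma @ z) / (rho @ Phi_rho)
        xi = xi + alpha * rho
        gamma_new = gamma - alpha * Phi_rho
        z_new = gamma_new / d
        beta = (gamma_new @ z_new) / (gamma @ z)
        rho = z_new + beta * rho
        gamma, z = gamma_new, z_new
    return xi
```

Any symmetric positive definite Υ that is cheap to invert can be substituted for the diagonal here; the closer cond(Υ^(−1)Φ) is to one, the fewer iterations are needed.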

Examples
In this section, we consider five examples to demonstrate the efficiency of PCGA for approximating the solution of FFLS. Note that for the first four examples the pseudospectra of the crisp matrices are provided to demonstrate that the matrix Φ is positive definite. EIGTOOL is used for generating the pseudospectra [12]. All the experiments are run in MATLAB, and the function rand() is used for creating random matrices and vectors. The pair (k_max, ϵ) is chosen as (1000, 1E−10). In all the tables, 'itr' represents the number of iterations, 'nc' implies no convergence within k_max iterations, 'cpu' denotes the total computational time for computing approximations to ξ1, ξ2 and ξ3, and 'res' is the relative residual norm at the last iteration with respect to the crisp linear systems. We note that 'itr' = (x, y, z) implies that a method takes x, y and z iterations to find approximate solutions to ξ1, ξ2 and ξ3, respectively. The same holds for the parameter 'res'.

Fig. 1 shows the pseudospectra of the crisp matrices Φ, Λ and Θ. We find that the eigenvalues of the matrix Φ are all real and positive. From Table 1 we observe that both CGA and PCGA perform better than the iterative Jacobi, Gauss-Seidel and SOR methods. For comparison purposes, CGA takes five iterations to reach an approximate solution of ξ1, while the Gauss-Seidel and SOR methods take 183 iterations. Also, we find that the Jacobi algorithm does not converge within 1000 iterations. On the other hand, PCGA converges faster than CGA, with a total computational time of 6.00E−3 as compared to 2.80E−2.

Example 4.2. Here, we consider a random matrix Ψ of order 10, and the vectors {φ_i}, i = 1, 2, 3, are chosen to be random vectors of length 10. Fig. 2 shows the respective pseudospectra of the crisp matrices. From Table 2, we observe that PCGA is more efficient than CGA: in 12 iterations, PCGA approximates {ξ_i}, i = 1, 2, 3, more accurately than CGA. We shall mention that within 1000 iterations the iterative Jacobi, Gauss-Seidel and SOR procedures do not converge.

Example 4.3. We consider a random matrix Ψ of order 30, and the vectors {φ_i}, i = 1, 2, 3, are chosen to be random vectors of length 30. Fig. 3 shows the respective pseudospectra of the crisp matrices. We note that for this example the iterative Jacobi, Gauss-Seidel and SOR procedures do not converge within the maximum number of iterations allowed. Table 3 confirms the superiority of PCGA over CGA. For comparison purposes, we find that PCGA requires about a third as many iterations as CGA for convergence of {ξ_i}, i = 1, 2, 3.

Example 4.5. Finally, we consider a manufacturing problem [11]. A manufacturing company has decided to produce three products, namely P1, P2 and P3. The company asks its production manager to determine how many units of each product should be produced so that optimum use of the three available machines M1, M2 and M3 is made. The available time of each machine is described in Table 5, and Table 6 depicts the number of machine hours required for each unit of the products P1, P2 and P3, respectively. This leads to an FFLS Ψ ⊗ ξ = φ, where ξ is a fuzzy vector representing the quantity of each product to be produced to optimize the use of the three machines. Table 7 shows the results obtained when solving the above FFLS using the CGA and PCGA methods. We observe that the PCGA algorithm converges faster and more accurately than the CGA procedure. The quantity of each product to be produced to satisfy the machine constraints is illustrated in Fig. 5.

Conclusion
In this paper, we developed a preconditioned method for finding an approximation of the positive solution of FFLS in which the modal value matrix is positive definite. For comparison purposes, we considered several examples, including two applications of FFLS. The numerical results show that the preconditioned scheme converges to the exact solution more rapidly than the iterative Jacobi, Gauss-Seidel and SOR methods when solving FFLS.

Fig. 5. Illustration of the quantity of P1 (red), P2 (green) and P3 (black) to be produced to satisfy the machine constraints.

Table 1
Results for Example 4.1.

Table 2
Results for Example 4.2.

Table 3
Results for Example 4.3.

Example 4.4. Next, we consider a random matrix Ψ of order 50, and the vectors {φ_i}, i = 1, 2, 3, are chosen to be random vectors of length 50. Fig. 4 shows the respective pseudospectra of the crisp matrices. Similar conclusions as in Example 4.3 may be drawn from Table 4.

Table 5
Available machine times.

Table 6
Machine hours required for each unit of the respective product.

Table 7
Results for Example 4.5.