Relaxed TS Fuzzy Model Transformation to Improve the Approximation Accuracy/Complexity Tradeoff and Relax the Computation Complexity

The primary goal of the article is to introduce the relaxed TS fuzzy model transformation, a method that enhances the original TS fuzzy model transformation in two ways. First, it focuses on achieving a more efficient reduction of the number of antecedent fuzzy sets, and hence of the fuzzy rules of the TS fuzzy models, while minimizing the approximation error. Second, it aims to reduce the computational load required for the transformation process. With the first enhancement, the proposed transformation strikes a better balance between the number of fuzzy rules and the approximation accuracy of TS fuzzy models. With the second enhancement, a unique pre- and postprocessing of the TS fuzzy model transformation is introduced, leading to radical computational improvements. The core part of the original TS fuzzy model transformation is the higher order singular value decomposition (HOSVD), used to balance the approximation quality with the number of fuzzy rules by truncating singular values. The HOSVD itself is a computationally intensive algorithm; the possibilities for advancements in its implementation seem limited, as much research has focused on its optimization in the past, and it reached its pinnacle in terms of computational complexity more than a decade ago. Therefore, the approach presented in this article does not concentrate directly on enhancing HOSVD further, but instead proposes a unique pre- and postprocessing technique for the tensor on which HOSVD is applied, tailored to the special characteristics of the TS fuzzy model and the system model under consideration. Following a description of the proposed enhancements, the article presents numerical examples and two examples of real-world engineering models to demonstrate the effectiveness of the relaxed TS fuzzy model transformation compared to the original TS fuzzy model transformation.


Péter Baranyi
Index Terms-Approximation-complexity tradeoff, TS fuzzy model transformation, TS fuzzy model.
I. INTRODUCTION

The mathematical perspective of the TP model transformation involves extending the concept of the higher order singular value decomposition (HOSVD) [10] to continuous bounded functions. It serves as a method for numerically reconstructing the HOSVD of functions [1], [2], [3]. Besides generating higher order singular value based orthonormed structures, the TP model transformation is capable of deriving convex tensor product forms [1], [2], [3], [4], [5], [6], [7], [8], [9], [11] equivalent to the transfer functions of commonly used TS fuzzy models, such that the antecedent fuzzy sets form Ruspini partitions. Therefore, the TP model transformation is also referred to as the TS fuzzy model transformation in the literature on fuzzy modeling.

A. Novel Contributions of the Article
The article proposes an improved variant of the TS fuzzy model transformation, referred to as the relaxed TS fuzzy model transformation. Its key properties are as follows: 1) It provides an improved accuracy and complexity tradeoff for fuzzy rule base reduction.
2) It requires a lower computational load, which considerably extends the class of models to which the TS fuzzy model transformation can efficiently be applied.

B. Challenges and the Core Idea of the Solution
In the original TS fuzzy model transformation, HOSVD is executed on a tensor representing the discretized variant of the given model over a hyperrectangular grid. The resulting higher order singular values express the relative importance, defined based on approximation error, of the different fuzzy rules. Therefore, rank reduction of the discretized tensor obtained by truncating the smallest singular values directly leads to a tradeoff between the number of fuzzy rules and the approximation accuracy of the resulting TS fuzzy model. The sum of the truncated singular values represents an upper bound on the resulting approximation error. Thus, the fundamental procedure in the original TS fuzzy model transformation involves the application of HOSVD, a method that is inherently computationally intensive.
The evolution of HOSVD has been marked by extensive efforts to enhance its implementation, as detailed in [10]. This pursuit has led to incremental improvements in the associated computational methods. For instance, the sequentially truncated HOSVD (ST-HOSVD) algorithm was introduced in 2012 [42]. Approximately a decade ago, the development of HOSVD achieved its peak in terms of computational complexity and accuracy in handling singular values. Possibilities for further breakthroughs in this domain seem to be limited.
Therefore, the approach presented in this article does not concentrate on enhancing HOSVD further, but instead proposes a unique preprocessing and postprocessing technique for the tensor on which HOSVD is applied, tailored to the special characteristics of the TS fuzzy model and the system model under consideration, which leads to radical improvements. The article reveals and proves that the convexity of the output of the TS fuzzy model transformation, achieved by further transforming the result of the HOSVD using convex transformation techniques, makes it possible to remove blocks of identical elements within a given tensor. Depending on the number of such elements, a considerable reduction in computational requirements can be achieved.
Based on the above, the proposed approach does not serve as a substitute computational model for HOSVD, but instead represents a supplementary step to HOSVD in the case of the TS fuzzy model transformation, such that the data are tailored to the unique attributes of the TS fuzzy model and the system model at hand.

C. Outline of the Proposed Solution
The key idea behind the relaxed TS fuzzy model transformation is to modify the inputs to and the outputs from the HOSVD based step of the original TS fuzzy model transformation.
In the proposed transformation, certain appropriately selected elements are temporarily separated from the discretized tensor. This results in smaller singular values and, further, the HOSVD is executed on a tensor that has fewer elements. In the final step, after the convex form is determined, the resulting decomposition is restructured and the previously separated elements are reinserted.

D. Structure of the Article
Section II serves to define the notations used throughout the article and establish the fundamental concepts necessary for the development of the relaxed TS fuzzy model transformation. Section III recalls the algorithm of the original TS fuzzy model transformation. Section IV proposes an improved HOSVD based rank reduction for tensors having identical blocks. Section V develops the relaxed TS fuzzy model transformation based on the improved reduction proposed in Section IV. Section VI presents two demonstrative examples to provide further details on the execution of the original and the relaxed TS fuzzy model transformation, and to enable a comprehensive comparison of the two. Sections VII and VIII provide a further comparison of the two approaches based on the real-world engineering benchmark problems of translational oscillators with rotating actuator (TORA) and an aeroelastic wing section model. Finally, Section IX concludes the article.

II. NOTATION AND BASIC CONCEPTS
A. Notation
1) i, j, m, n, g, . . . are indices with the lower bound 1 and upper bounds I, J, M, N, G, . . .
2) Lowercase bold, uppercase bold, and calligraphic letters denote vectors, matrices, and tensors, respectively, where the notation R^{I_N} is equivalent to R^{I_1 × I_2 × ... × I_N}.
3) 1 denotes a vector whose elements are all 1.
4) rank(A) denotes the rank of matrix A.
5) rank_n(A) denotes the n-mode rank of tensor A; see [10].
6)-8) . . . dimension i of A. Thus, the size of . . ., where I_n = Π_{k=1}^{K} J_k.
9) A ∈ co{∀n : B_n} represents the fact that A is within the convex hull defined by the vertices B_n.
10) U denotes an orthonormed matrix.
11) W denotes a matrix such that 1 = W1 and ∀i, j : 0 ≤ [W]_{i,j}.     (1)

B. Basic Concepts

Definition 2.1: Transfer function of a TS fuzzy model
Let us consider a set of fuzzy rules in the form

 IF p_1 is A_{1,i_1} AND p_2 is A_{2,i_2} AND . . . AND p_N is A_{N,i_N} THEN y = B_{i_1,i_2,...,i_N}.

Here, the membership values of the Ruspini-partitioned antecedent fuzzy sets A_{n,i_n} are defined by w_{n,i_n}(p_n). The consequents B_{i_1,i_2,...,i_N} can represent scalar, vector, matrix, or even tensor elements, stored in the consequent tensor B. The transfer function of the TS fuzzy model, based on a product-sum-gravity approach and with singleton observations located at p_n, can be expressed as [1], [2], [3], [9]

 f(p) = Σ_{i_1=1}^{I_1} . . . Σ_{i_N=1}^{I_N} ( Π_{n=1}^{N} w_{n,i_n}(p_n) ) B_{i_1,i_2,...,i_N}.

With the tensor-product operation, this transfer function takes the form of

 f(p) = B ⊠_{n=1}^{N} w_n(p_n),     (5)

where w_n(p_n) = [w_{n,1}(p_n) w_{n,2}(p_n) . . . w_{n,I_n}(p_n)]. The tensor product operation utilized in (5) is a concept that was introduced in tensor algebra in the year 2000 [10]. Since then, (5) has been widely employed in the literature on TP model and TS fuzzy model transformation.
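As a concrete illustration of the product-sum-gravity transfer function with the tensor-product operation of (5), the sketch below (our own helper, not the paper's toolbox; the membership function and tensor values are made-up) contracts each antecedent dimension of the consequent tensor with the corresponding membership vector:

```python
import numpy as np

def ts_transfer(B, weight_fns, p):
    """Evaluate f(p) = B (x)_n w_n(p_n): contract each antecedent
    dimension of the consequent tensor B with the membership vector
    w_n(p_n) returned by weight_fns[n] (Ruspini: entries sum to 1)."""
    out = np.asarray(B, dtype=float)
    for wfn, pn in zip(weight_fns, p):
        w = np.asarray(wfn(pn), dtype=float)
        out = np.tensordot(w, out, axes=([0], [0]))  # n-mode product
    return out  # remaining axes hold the (possibly tensor-valued) consequent

# two triangular membership functions per dimension on [0, 1]
w = lambda pn: np.array([1.0 - pn, pn])
B = np.array([[0.0, 1.0], [2.0, 3.0]])     # scalar consequents B_{i1,i2}
print(ts_transfer(B, [w, w], (0.5, 0.5)))  # -> 1.5 (bilinear blend)
```

With Ruspini-partitioned weights, the output is always a convex combination of the consequents, which is the convexity property the article builds on.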

Definition 2.2: Discretized tensor F of function f(p) over grid G

The discretized tensor F ∈ R^{G_N × J_M} represents the discretized version of the function f(p) ∈ R^{J_M} over a rectangular grid defined by the grid tensor G ∈ R^{G_N × N}. It can be expressed as

 [F]_{g_1,g_2,...,g_N} = f(p_{g_1,g_2,...,g_N}), where p_{g_1,g_2,...,g_N} = [g_{1,g_1} g_{2,g_2} . . . g_{N,g_N}].

Here, the grid tensor G defines the coordinates of a hyperrectangular grid via the grid vectors g_n ∈ R^{G_n}, defined on each interval ω_n of the N-dimensional hyperspace Ω. The grid covers Ω, meaning that the grid vectors satisfy the conditions g_{n,1} = ω_n^min and g_{n,G_n} = ω_n^max.
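A minimal sketch of building the discretized tensor over a hyperrectangular grid (the helper name and the sampled function are ours, not the paper's):

```python
import numpy as np

def discretize(f, grids):
    """Sample f over the hyperrectangular grid spanned by the grid
    vectors g_n; returns F with shape G_1 x ... x G_N (x consequent dims)."""
    grid_shape = tuple(len(g) for g in grids)
    first = np.asarray(f(tuple(g[0] for g in grids)), dtype=float)
    F = np.empty(grid_shape + first.shape)
    for idx in np.ndindex(*grid_shape):
        F[idx] = f(tuple(g[k] for g, k in zip(grids, idx)))
    return F

g1, g2 = np.linspace(0.0, 1.0, 3), np.linspace(0.0, 2.0, 5)
F = discretize(lambda p: p[0] * p[1], [g1, g2])
print(F.shape)  # -> (3, 5)
```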

Definition 2.3: Piecewise multilinear approximation of function f(p) over grid G

The piecewise multilinear approximation of f(p), denoted by a bar on top, is defined using the discretized tensor F of f(p) over grid G as

 f̄(p) = F ⊠_{n=1}^{N} i_n(p_n).     (6)

In this equation, the vector i_n(p_n) is given by

 i_n(p_n) = (1 − t) [I]_g + t [I]_{g+1}, with t = (p_n − [g]_g) / ([g]_{g+1} − [g]_g),

where [g]_g ≤ p_n ≤ [g]_{g+1}, and [I]_g represents the gth row of the identity matrix I.
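The interpolation vector i_n(p_n) and the resulting piecewise multilinear approximation can be sketched as follows (helper names are ours; this is a plain bilinear-interpolation illustration, not the toolbox code):

```python
import numpy as np

def interp_vector(g, pn):
    """i_n(p_n): zero everywhere except at the two grid points bracketing
    p_n, carrying the linear-interpolation coefficients (1 - t) and t."""
    g = np.asarray(g, dtype=float)
    k = int(np.clip(np.searchsorted(g, pn, side='right') - 1, 0, len(g) - 2))
    t = (pn - g[k]) / (g[k + 1] - g[k])
    w = np.zeros(len(g))
    w[k], w[k + 1] = 1.0 - t, t
    return w

def multilinear(F, grids, p):
    """bar f(p) = F (x)_n i_n(p_n): contract every grid dimension of F."""
    out = np.asarray(F, dtype=float)
    for g, pn in zip(grids, p):
        out = np.tensordot(interp_vector(g, pn), out, axes=([0], [0]))
    return out

g = [np.array([0.0, 1.0]), np.array([0.0, 2.0])]
F = np.array([[0.0, 0.0], [0.0, 2.0]])  # samples of p1 * p2 over the grid
print(multilinear(F, g, (0.5, 1.0)))    # -> 0.5 (bilinear is exact here)
```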

III. ORIGINAL TS FUZZY MODEL TRANSFORMATION
The original TS fuzzy model transformation is built on the following three methods [1], [2], [3].

Method 3.1: CHOSVD-based piecewise multilinear approximation of function f(p)

Let us determine F of f(p) over G. We perform the compact HOSVD (CHOSVD) [10], also known as truncated HOSVD, on F, where "compact/truncated" means that all the zero singular values and their corresponding columns in the singular matrices are discarded:

 F = S ⊠_{n=1}^{N} U_n.     (8)

Here, S denotes the core tensor and the U_n denote the singular matrices. Substituting (8) into (6), the approximation is defined as

 f̄(p) = S ⊠_{n=1}^{N} u_n(p_n), where u_n(p_n) = i_n(p_n) U_n.

Method 3.2: Accuracy-complexity tradeoff
The CHOSVD of F is determined via the execution of SVD for each dimension as follows:

 {F}_(n) = U_n D_n V_n^T,

where {F}_(n) denotes the n-mode unfolding of F.
Diagonal matrix D_n contains the singular values in decreasing order as follows: σ_{n,1} ≥ σ_{n,2} ≥ . . . ≥ σ_{n,R_n} > 0. R_n, the number of singular values, determines the absolute minimum number of antecedent fuzzy sets in dimension n.
Finally, the decomposition leads to (8). If further nonzero singular values and the corresponding columns of the singular matrices U_n are discarded, then the following approximation error results:

 ε = ||F − S ⊠_{n=1}^{N} U_n||_{L2} ≤ Σ (discarded σ_{n,i}),

where ε expresses the L2 norm error, which is bounded by the sum of the discarded singular values; see [10]. Note that this is not the best approximation of tensor F under the decreased rank constraint if N > 2; see [10].
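A compact sketch of the mode-wise SVD procedure with rank truncation (a generic numpy implementation under our own naming, not the toolbox code; `ranks` caps the number of kept singular values per dimension):

```python
import numpy as np

def truncated_hosvd(F, ranks):
    """CHOSVD with optional further rank reduction: for each mode, SVD
    the mode-n unfolding of the current core, keep at most ranks[n]
    columns of U_n, and contract; returns the core S, the U_n, and the
    discarded singular values (their sum bounds the L2 error, see [10])."""
    S = np.array(F, dtype=float)
    Us, discarded = [], []
    for n, r_max in enumerate(ranks):
        M = np.moveaxis(S, n, 0).reshape(S.shape[n], -1)  # mode-n unfolding
        U, sv, _ = np.linalg.svd(M, full_matrices=False)
        r = min(r_max, int((sv > 1e-12).sum()))           # compact + truncate
        Us.append(U[:, :r])
        discarded.extend(sv[r:].tolist())
        core = Us[n].T @ M
        rest = S.shape[:n] + S.shape[n + 1:]
        S = np.moveaxis(core.reshape((r,) + rest), 0, n)
    return S, Us, discarded

# rank-1 toy tensor: exact reconstruction with a 1 x 1 core
F = np.outer([1.0, 2.0, 3.0], [4.0, 5.0])
S, (U1, U2), disc = truncated_hosvd(F, [1, 1])
recon = np.einsum('ab,ia,jb->ij', S, U1, U2)
print(np.allclose(recon, F))  # -> True
```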

Method 3.3: Determination of the Ruspini-partitioned antecedent fuzzy sets
Each column of U_n determines a candidate for one antecedent fuzzy set. Each element in a given column defines the values of the possible antecedent fuzzy sets over the grid g_n. In order to determine the values of the Ruspini-partitioned antecedent fuzzy sets over the grid, we can apply one of many different kinds of convex transformations to U_n. These transformations guarantee various advantageous features of the resulting convex hull defined by the consequents. This leads to

 F = S ⊠_{n=1}^{N} W_n,

where the core tensor S is transformed accordingly. Here, W_n possesses the properties defined in (1). Therefore, each column of W_n determines one antecedent fuzzy set over the grid, and for all inputs,

 w_n(p_n) = i_n(p_n) W_n.

It is important to note that the convex transformations may add one column to U_n, resulting in W_n; therefore, in this case, I_n = R_n + 1. Finally, we arrive at the piecewise multilinear TS fuzzy model as

 f̄(p) = S ⊠_{n=1}^{N} w_n(p_n).

This is a convex form, meaning that f̄(p) ∈ co{∀i_1, i_2, . . . , i_N : B_{i_1,i_2,...,i_N}}. The resulting error is bounded by ε + β, where ε is the approximation error over the grid caused by the rank reduction, as mentioned above, and β is caused by the piecewise linear approximation of the antecedent fuzzy sets between the grid points. When the grid density approaches infinity, β → 0. The resolution of the antecedent fuzzy sets (the grid density over which the antecedents are defined) can be further enhanced, as detailed in [7]. Therefore, β → 0 is well supported, and the article focuses only on ε from this point on.
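The defining properties of W_n in (1), and the fact that a convex transformation is a change of basis leaving the product unchanged, can be sketched as follows (the transformation matrix T here is a generic invertible matrix illustrating the mechanism, not a specific SNNN construction):

```python
import numpy as np

def is_ruspini(W, tol=1e-9):
    """Check (1): every row of W sums to 1 and all entries are
    nonnegative, i.e., each grid point carries a convex combination
    of the antecedent fuzzy sets."""
    W = np.asarray(W, dtype=float)
    return bool(np.allclose(W.sum(axis=1), 1.0, atol=tol) and (W >= -tol).all())

def change_basis(U, S, T):
    """Replace U by W = U T and the core by T^{-1} S (matrix case):
    W @ (T^{-1} S) == U @ S, so the model output is unchanged; the
    convex transformations pick T so that is_ruspini(U @ T) holds."""
    return U @ T, np.linalg.solve(T, S)

W = np.array([[1.0, 0.0], [0.4, 0.6], [0.0, 1.0]])
print(is_ruspini(W))                        # -> True
print(is_ruspini(np.array([[1.2, -0.2]])))  # -> False (negative entry)
```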

IV. IMPROVED HOSVD-BASED TENSOR RANK REDUCTION
Before discussing the method used to achieve an improved accuracy and complexity tradeoff, let us first introduce the following definitions.
Definition 4.1: Block identical tensor F^b

A tensor F^b ∈ R^{G_N × J_M} is referred to as a block identical tensor, denoted by the superscript "b," if it is constructed from identical blocks C, such that ∀g_1, g_2, . . . , g_N : [F^b]_{g_1,g_2,...,g_N} = C.     (21)

Lemma 4.1: Any convex combination of the tensor blocks C stored in a block identical tensor F^b leads to a block identical tensor; thus, ∀g_1, g_2, . . . , g_N and g'_1, g'_2, . . . , g'_N :

 [F^b ⊠_{n=1}^{N} W_n]_{g_1,...,g_N} = [F^b ⊠_{n=1}^{N} W_n]_{g'_1,...,g'_N} = C.

Definition 4.2: Partially block identical tensor F^p

A tensor F^p ∈ R^{G_N × J_M} is referred to as partially block identical, denoted by the superscript "p," if it is not block identical, but some of the elements of the blocks [F^p]_{g_1,g_2,...,g_N} are identical for all g_1, g_2, . . . , g_N.

Definition 4.3: Vector block variant F^v of tensor F

The vector block variant, denoted by the superscript "v," of tensor F ∈ R^{G_N × J_M} is F^v ∈ R^{G_N × L}, where L = Π_{m=1}^{M} J_m and the ordering of the elements is defined by l, which represents a linear index equivalent to the array index j_1, j_2, . . . , j_M as

 l = ordering(j_1, j_2, . . . , j_M).     (26)

Method 4.1: Separation of identical elements
Let us consider a partially block identical tensor F^p. We can define the ordering in (26) in such a way that the resulting vectors of F^v can be partitioned into two vectors as

 [F^v]_{g_1,g_2,...,g_N} = [ [F^β]_{g_1,g_2,...,g_N}  c ],     (27)

where vector c ∈ R^K contains the K elements that are identical for all g_1, g_2, . . . , g_N. Consequently, tensor F^v can be partitioned in dimension N + 1 into two tensors as

 F^v = [ F^β  C^b ],     (28)

where F^β ∈ R^{G_N × (L−K)} stores the nonidentical elements, and the block identical tensor C^b ∈ R^{G_N × K} stores c in every block.

Method 4.2: HOSVD-based rank reduction of partially block identical tensors
Consider a partially block identical tensor F^p. Furthermore, let its vector block variant F^v be derived and partitioned as shown in (28). The CHOSVD of F^β results in

 F^β = S^u ⊠_{n=1}^{N} U^β_n.     (29)

Based on Method 3.2, the CHOSVD is computed by executing SVD in each dimension. Diagonal matrix D^β_n contains the singular values σ^β_{n,i} in decreasing order. Because the elements of C^b are excluded from F^β, the following property holds for the singular values:

 ∀n, i : σ^β_{n,i} ≤ σ_{n,i}.     (32)

If nonzero singular values and the corresponding singular vectors of U^β_n are discarded, then an approximation error ε^β results; however, it is bounded by the sum of the discarded singular values. Let us execute a convex transformation U^β_n → W^β_n, which leads to

 F^β = S^w ⊠_{n=1}^{N} W^β_n,     (34)

where ∀n : W^β_n possesses the properties defined in (1). The convex transformation does not affect the error, and because of (32), we finally have a smaller upper bound for ε^β when smaller singular values σ^β < σ are discarded.

In order to reinsert the elements of the excluded C^b, let us create a block identical tensor C^β ∈ R^{(I^β)_N × K} from the vector blocks c as follows:

 ∀i_1, i_2, . . . , i_N : [C^β]_{i_1,i_2,...,i_N} = c.     (35)

Based on Lemma 4.1,

 C^b = C^β ⊠_{n=1}^{N} W^β_n.     (36)

By substituting (29) and (36) into (28), we obtain

 F^v = S^v ⊠_{n=1}^{N} W^β_n,     (38)

where S^v = [ S^w  C^β ] in dimension N + 1. If we rearrange the core tensor S^v using (26), then we arrive at

 F^p ≈ S^β ⊠_{n=1}^{N} W^β_n,     (40)

where S^β denotes the restructured core tensor. An alternative way of determining S^β is to directly derive it from F^p, once we have the matrices W^β_n. Therefore, instead of (34)-(40), we can calculate

 S^β = F^p ⊠_{n=1}^{N} (W^β_n)^+,     (41)

where superscript + denotes the pseudoinverse. As a matter of fact, in this case there is no guarantee that the partially block identical elements will be the same or will even remain partially block identical, especially when a numerical computational error occurs. The conclusion of this section is that if we separate the partially identical elements, then the CHOSVD will find smaller singular values, which leads to an improved complexity tradeoff. Because of the convex form, the separated elements can be reinserted after the tradeoff is performed.
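The separation and reinsertion steps of Methods 4.1 and 4.2 amount to bookkeeping on the last (vectorized) dimension; a minimal sketch with our own helper names:

```python
import numpy as np

def separate(Fv):
    """Method 4.1: split the vector block variant F^v (grid dims x L)
    into F^beta (varying columns) and c (the K columns whose value is
    identical at every grid point)."""
    L = Fv.shape[-1]
    flat = Fv.reshape(-1, L)
    const = np.all(flat == flat[0], axis=0)  # identical across the grid
    return Fv[..., ~const], flat[0, const].copy(), const

def reinsert(S_beta, c, const):
    """Method 4.2, reinsertion: because every row of W_n sums to 1, the
    convex combination of identical blocks is exact (Lemma 4.1), so the
    constants are simply broadcast back into every core cell."""
    S = np.empty(S_beta.shape[:-1] + (len(const),))
    S[..., ~const] = S_beta
    S[..., const] = c
    return S

Fv = np.array([[[1., 7., 2.], [3., 7., 4.]],
               [[5., 7., 6.], [8., 7., 9.]]])    # middle element constant (7)
Fb, c, mask = separate(Fv)
print(Fb.shape, c)                               # -> (2, 2, 2) [7.]
print(np.array_equal(reinsert(Fb, c, mask), Fv)) # -> True (roundtrip)
```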

V. INTRODUCING THE RELAXED TS FUZZY MODEL TRANSFORMATION
Based on the improved tensor rank reduction introduced in the previous section, this section proposes the relaxed TS fuzzy model transformation, which is an extension of the original TS fuzzy model transformation with two additional steps. Fig. 1 illustrates the block diagram of the relaxed TS fuzzy model transformation, which will be detailed in subsequent sections. For comparison, the original TS fuzzy model transformation is also depicted in Fig. 1. The blocks that are vertically aligned along the left-hand side of the figure reflect the original version, while the two additional blocks on the right show the extensions that make up the relaxed TS fuzzy model transformation. Furthermore, the equations delineating the crucial steps are provided in Fig. 1.
Method 5.1: Relaxed TS fuzzy model transformation

Assume function f(p) is given in a closed form, such that all of the inner formulas of the function are known. Then, the following relaxed TS fuzzy model transformation can be executed.

Step 0 (additional step to the original TS fuzzy model transformation): Define f^β(p) by rearranging the parameter-dependent elements of f(p) into a vector and excluding the constant elements.

Step 1: Discretization. Define the discretization grid based on G_n. Then, derive the discretized tensor F^β of f^β(p).

Step 2: Approximation and complexity tradeoff. Execute the CHOSVD and perform further rank reduction, if required, by discarding nonzero singular values.

Step 3: Convex transformation. Execute a convex transformation on the resulting singular matrices, which yields W^β_n.

Step 4: Restructuring the core tensor and reinsertion of the partially identical elements (additional step to the original TS fuzzy model transformation). There are two possible approaches here, as discussed in the context of (41). One way is to derive the core tensor; see (34). Then, one can reinsert the separated constants, see (38), where C^β is defined by (35). Then, restructuring the core tensor S^v, counter to Step 0, leads to S^β. The alternative way is to directly calculate the elements of the core tensor based on (41). This does not guarantee that the elements will be partially identical, as mentioned in the context of (41).
Step 5: Determination of the piecewise linear antecedent fuzzy sets. The antecedent fuzzy sets are determined over the grid g_n as in Method 3.3; each column of W^β_n defines one antecedent fuzzy set.

The density of the discretization grid in Step 1 is limited by the available computational capacity. In order to reveal all the ranks, a high-density grid is required. However, Step 2 executes HOSVD on the discretized tensor, which is computationally expensive. The overall computational complexity of the HOSVD of tensor F ∈ R^{G_N × J_M} is detailed in [10] and can be described as

 (Σ_{n=1}^{N} G_n) · (Π_{n=1}^{N} G_n) · L operational steps.     (51)

One can see that when N increases, G_n is strictly limited. The relaxed TS fuzzy model transformation restructures the elements into vectors and removes the constant elements. If K different elements are removed, see (27), then the computational complexity is reduced to

 (Σ_{n=1}^{N} G_n) · (Π_{n=1}^{N} G_n) · (L − K),     (52)

where L = Π_{m=1}^{M} J_m; see (25). Another aspect is that the resolution (the grid density over which the antecedents are defined) of the piecewise linear antecedent fuzzy sets is also determined by G_n; see Step 5. Thus, once the number of elements in the tensor is decreased, the grid density can be increased, which leads to an improved resolution of the antecedent fuzzy sets as well. If further improvement of the resolution of the antecedents is needed, then the refining technique introduced in Section V of the recently published paper [7] can be applied. The end result is that the error β can be eliminated in a numerical sense.
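The step counts quoted in the examples (e.g., 62 × 10^6 versus 20 × 10^6 in Example 1) are consistent with a simple per-mode SVD cost model; the sketch below is a back-of-the-envelope estimate under that assumed model, not the exact formulas of [10]:

```python
import numpy as np

def hosvd_ops(grid_sizes, vec_len):
    """Rough HOSVD operation count: one SVD per grid mode, each costing
    about G_n * prod(G) * vec_len steps, summed over the modes (an
    assumed cost model that reproduces the counts quoted in the text)."""
    G = np.asarray(grid_sizes, dtype=np.int64)
    return int(G.sum() * G.prod() * vec_len)

# Example 1: L = 2*3*2 = 12 vectorized elements, K = 8 constants removed
orig = hosvd_ops([137, 137], 12)         # ~62e6 steps
relaxed = hosvd_ops([137, 137], 12 - 8)  # ~21e6 steps
print(orig, relaxed, round(1 - relaxed / orig, 2))  # reduction ~0.67
```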
Remark 5.1: The relaxed TS fuzzy model transformation does not yield a computational reduction in the absence of constant elements that can be separated. Its application is straightforward when the internal formulations of the provided functions are visible, allowing for the easy separation of constant elements. If the constant and nonconstant components are obscured, for instance when a black-box form of the functions is given, it becomes necessary to use an additional algorithm to uncover the partially block identical elements.

VI. DEMONSTRATIVE EXAMPLES
The primary objective behind the examples presented in this section is to show that the proposed relaxed TS fuzzy model transformation offers better complexity and accuracy tradeoffs than the original TS fuzzy model transformation. Furthermore, these examples serve as evidence that the proposed transformation requires fewer computational resources.

A. Example 1
Consider the tensor function f(p) given in (54). To ensure a convex TP structure, the SNNN transformation is applied to (55). In the current scenario, the SNNN transformation does not increase the number of columns in W_n. Therefore, we have the convex form (56). We proceed with the proposed relaxed TS fuzzy model transformation. In Step 0, the given function is relaxed by rearranging its elements into a vector, excluding the constant elements.

Moving on to Step 1 of the proposed method, CHOSVD is executed on the discretized tensor F^β of size 137 × 137 × 4, giving ε^β = 8.5867 × 10⁻¹⁰. Note that ε^β < ε and that the size of D^β is smaller than the size of D in the first two dimensions. The singular values obtained from this transformation are as follows. Dimensions assigned to p_1 and p_2 (they are the same in the current scenario): 1.493 × 10⁶; 1.102 × 10⁵; 2.36 × 10⁴; . . . One can observe that the resulting singular values are considerably smaller than in the case of (55). When performing the SNNN transformation on U^β_n to obtain W^β_n, it is observed that the transformation does not increase the number of columns in W^β_n. Consequently, we have S^β ∈ R^{3×3×4} and W^β_n ∈ R^{137×3}. Finally, we restructure the core tensor, which leads to F ≈ S^β ⊠_{n=1}^{2} W^β_n with error ε^β. Once again, the size of the core tensor S^β is considerably (37.5%) smaller and, further, the resulting approximation error obtained via the relaxed TS fuzzy model transformation over the grid is also smaller, as ε = 1.2303 × 10⁻⁹ > ε^β = 8.5867 × 10⁻¹⁰. This means that the number of fuzzy rules is decreased from 6 × 4 = 24 to 3 × 3 = 9, i.e., by 62.5%, with better approximation accuracy over the grid. Let us investigate the computational complexity. The CHOSVD is executed on a tensor of size 137 × 137 × 2 × 3 × 2, which leads to 62 × 10⁶ operational steps, see (51), in the case of the original TS fuzzy model transformation, while in the case of the relaxed TS fuzzy model transformation the size of the tensor is only 137 × 137 × 4, which leads to 20 × 10⁶ operational steps, see (52). This is a reduction of around 67%.
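A quick arithmetic check of the figures above (values copied from the text):

```python
# fuzzy rule reduction: 6 x 4 = 24 rules down to 3 x 3 = 9
rules_orig, rules_relaxed = 6 * 4, 3 * 3
rule_reduction = 1 - rules_relaxed / rules_orig  # 0.625 -> 62.5 %
print(rule_reduction)  # -> 0.625

# the relaxed variant is also more accurate over the grid
eps, eps_beta = 1.2303e-9, 8.5867e-10
print(eps_beta < eps)  # -> True
```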
Let us proceed by evaluating the complexity and accuracy tradeoff. The results are summarized in Tables I and II. Table I shows the L2 norm error over the grid resulting from the original and the relaxed TS fuzzy model transformation. The first column, denoted by R_1, and the first row, denoted by R_2, show the kept number of singular values assigned to dimensions p_1 and p_2, respectively. The column denoted by I_1 and the row denoted by I_2 show the number of antecedent fuzzy sets over dimensions p_1 and p_2, respectively, resulting from the convex transformation; see (56). Each cell is partitioned into two cells: the left one shows the L2 norm error ε, while the right one shows the number of resulting fuzzy rules #R, as denoted in the third row. The tenth row, denoted by R^β_2, and the column denoted by R^β_1 show the number of singular values that are kept when the relaxed TS fuzzy model transformation is executed. The 11th row, denoted by I^β_2, and its corresponding column, denoted by I^β_1, show the number of antecedent fuzzy sets resulting from the convex transformation; see (57). The tables demonstrate well that the greater the number of singular values that are kept, the better the approximation that results. Table II compares the results of the original and the relaxed TS fuzzy model transformation. The column denoted by ε% shows the reduction of the approximation error as (1 − ε^β/ε)%, and the column denoted by R% shows the reduction of the fuzzy rules as (1 − #R^β/#R)%. We can observe that a considerably better tradeoff results from the relaxed TS fuzzy model transformation in contrast to the original TS fuzzy model transformation.
For instance, the fourth column and the fifth row of Table I show that if we keep two singular values, R_1 = R_2 = 2, in both dimensions, then the convex transformation increases the number of antecedent fuzzy sets to I_1 = I_2 = 3. The resulting TS fuzzy model has nine fuzzy rules and an L2 norm error of 8 × 10⁻⁹ over the grid. One can see that in this case, the relaxed TS fuzzy model also results in nine fuzzy rules, but with an L2 norm approximation error of only 1.8 × 10⁻⁹. Table II shows that this corresponds to a 77% reduction in approximation error.
In summary, the tables demonstrate that the relaxed TS fuzzy model leads to a considerable approximation error reduction in all variations and results in a significant fuzzy rule base reduction in many cases. Therefore, the relaxed TS fuzzy model transformation provides significantly better approximation and complexity tradeoffs than the original TS fuzzy model transformation. Further, the computational complexity is also reduced, by 67%. Obviously, when h increases, the L2 norm of the separated constant elements also increases, and the comparison becomes even more favorable to the relaxed TS fuzzy model transformation. Table III shows a case in which h = 10 000. The result of the relaxed TS fuzzy model transformation is the same as above, since the value of the constant element h is excluded from the computation. Table IV shows the approximation error reduction. Comparing Table IV to Table II, we can observe that the approximation error reduction increases with h.

VII. EXAMPLE BASED ON AN ENGINEERING BENCHMARK PROBLEM
In the literature related to TS fuzzy model transformations, the real-world benchmark example of the TORA often appears [1], [2], [3], [34]. It is an underactuated system, which has one actuated rotor and one unactuated translational cart. For comparability, we also use this example in this article.
Assume the following qLPV model of the TORA system:

 [ẋ(t); y(t)] = S(p(t)) [x(t); u(t)],

where x(t), u(t), and y(t) are the state, input, and output vectors, respectively. Here, p_1(t) = x_3(t) and p_2(t) = x_4(t). The system matrix S(p(t)) depends on p(t) = [p_1(t) p_2(t)]. Executing the original TS fuzzy model transformation, the number of fuzzy rules is 4 × 2 = 8.
In the case of F^β, after applying the convex transformation and reinserting the separated constant elements, we can observe that the original TS fuzzy model transformation results in eight fuzzy rules with error 0.1837, while the relaxed TS fuzzy model transformation can achieve an error of 0.1837 with only six fuzzy rules. If we keep two singular values in the first dimension using the original TS fuzzy model transformation, then the resulting TS fuzzy model has six fuzzy rules; however, the error increases to 0.2212. The different variations of this complexity-accuracy tradeoff are given in Table VI.
Thus, we can increase the resolution of the grid to 4484 (1.5×) in the case of the relaxed TS fuzzy model transformation with the same computational power as was used for the original TS fuzzy model transformation, at least in the current example.

VIII. EXAMPLE OF A REAL-WORLD ENGINEERING PROBLEM
In the literature related to TP model transformations, the real-world example of a very complex aeroelastic wing section often appears [4], [5], [31], [43], which is taken from a real engineering control problem. The data and parameters of the model were identified based on real-life measurements conducted by NASA. The above papers refer to further papers, which detail the physical measurement system and the identification processes.
For comparability, we also use this example in this article. The challenge when it comes to TS fuzzy modeling and control design with respect to this problem lies in the strong nonlinearities and complexity of the model. The state-space model of the 2-D aeroelastic wing section has state vector x(t) ∈ R⁴ as x(t) = [x_1(t) x_2(t) x_3(t) x_4(t)]^T = [h(t) α(t) ḣ(t) α̇(t)]^T, where x_1(t) is the plunging displacement and x_2(t) is the pitching displacement. The system matrix S(p(t)) of the state-space model depends on the parameter vector p(t). Here, the free stream velocity U(t) is an external parameter. Let us first execute the original TS fuzzy model transformation. Let the grid density be 1000 × 1000. The resulting singular values are as follows. Dimension assigned to p_1(t): 9.17 × 10⁶, 6.26 × 10⁴, 100. Dimension assigned to p_2(t): 8.82 × 10⁶, 2.49 × 10⁶. The computational complexity is 5 × 10⁹ operational steps; see (51).
We can observe that the relaxed TS fuzzy model transformation results in smaller singular values, leading to a better tradeoff, and requires 60% less computational power.

IX. CONCLUSION
The primary goal of the TS fuzzy model transformation is to convert a given model into various alternative TS fuzzy model representations with advantageous properties that can enhance subsequent design outcomes. A key feature of this transformation is its ability to identify the absolute minimum number of fuzzy rules and to balance the tradeoff between the number of fuzzy rules and the approximation accuracy, in case further reduction is necessary. At the same time, a notable limitation is the intensive computational resources required, particularly for high-resolution execution with a larger number of inputs. In this context, a new variant of the TS fuzzy model transformation was introduced in this article, which is referred to as the relaxed TS fuzzy model transformation. This variant aims to provide an improved balance between the number of fuzzy rules and the approximation accuracy, while significantly reducing the computational complexity. These benefits are amplified with an increase in the number and values of constant elements within the function. To demonstrate the effectiveness of the proposed approach and to facilitate a thorough comparison with the original TS fuzzy model transformation, the article includes four numerical examples. The first two examples are well-known benchmarks in the literature concerning the development of TS fuzzy model transformations. The third and the fourth examples involve real engineering models frequently employed as benchmarks in related studies. As a conclusion, a practical design guideline can be advocated: use the relaxed TS fuzzy model transformation instead of the original version in all scenarios, irrespective of the necessity for further reduction of the fuzzy rule base beyond the minimum requirement. This recommendation is underpinned by the significant reduction in computational complexity offered by the relaxed TS fuzzy model transformation. Future work on the further development of the relaxed TS fuzzy model transformation could focus on interval type-2 TS fuzzy models and their convex hull manipulation possibilities, enhancing subsequent control design and improving the resulting control performance.
Conflict of interest: The author declares that there is no conflict of interest regarding the publication of this paper.

Fig. 1. Block diagram of the algorithms of the original and the relaxed TS fuzzy model transformation.

(54) with h = 1.
Let us begin by applying the original TS fuzzy model transformation directly to the tensor function f(p). In the examples, we utilize the TS fuzzy model transformation and convex transformations available in the TS fuzzy model transformation MATLAB Toolbox, available on the Wikipedia site of the TP model transformation. We define a G_1 = G_2 = 137 equidistant rectangular grid that covers the domain Ω. The first step of the transformation (CHOSVD) yields the TP structure (55) (singular values less than 10⁻⁹ are discarded).
{A}_(i) is a matrix whose columns are the vectors from A along dimension i.

TABLE I: COMPLEXITY AND ACCURACY TRADEOFF WITH THE ORIGINAL AND THE RELAXED TS FUZZY MODEL TRANSFORMATION (EXAMPLE 1)

TABLE V: COMPLEXITY AND ACCURACY TRADEOFF WITH THE ORIGINAL AND THE RELAXED TS FUZZY MODEL TRANSFORMATION (EXAMPLE 2)

Dimension assigned to p_1 and p_2: 250.1711; 12.16; 0.087. Again, one can observe that the singular values are considerably smaller in the case of the relaxed TS fuzzy model transformation. Note that the number of elements of the discretized tensor upon which CHOSVD is executed in the case of the original TS fuzzy model transformation is 137 × 137 × 2 × 2 × 2 = 150 152, which leads to a complexity of 41 × 10⁶ operational steps, see (51), while in the case of the relaxed TS fuzzy model transformation, it has 18 769 elements, which leads to a complexity of 5 × 10⁶ operational steps, see (52). This is a reduction of about 88%. The results are summarized in Table V. The table demonstrates that the relaxed TS fuzzy model transformation provides a considerably improved approximation and complexity tradeoff. For instance, if we decrease the number of fuzzy rules to 3 × 2 using the original TS fuzzy model transformation, then the resulting L2 norm error is 0.9685, while the relaxed TS fuzzy model transformation results in an L2 norm error of only 0.6833.
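The element counts and the reduction percentage quoted above check out by direct arithmetic:

```python
# Example 2: element counts of the tensor fed to CHOSVD
orig_elems = 137 * 137 * 2 * 2 * 2   # 150152 (original transformation)
relaxed_elems = 137 * 137            # 18769 (after removing constants)
step_reduction = 1 - 5e6 / 41e6      # ~0.88, the quoted ~88 % reduction
print(orig_elems, relaxed_elems, round(step_reduction, 2))
```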