Abstract

This paper proposes two parameter identification methods for a nonlinear membership function. An equation-conversion method is introduced to transform the nonlinear function into a concise model. Then a stochastic gradient algorithm and a gradient-based iterative algorithm are derived to estimate the unknown parameters of the nonlinear function. A numerical example shows that the proposed algorithms are effective.

1. Introduction

Parameter estimation has many applications in system modelling and signal processing [1–5]. Many parameter estimation approaches have been developed, such as recursive least squares (RLS) methods [6–8], stochastic gradient (SG) methods [9–11], and iterative identification methods [12–14]. For example, Chen et al. proposed an iterative method for Hammerstein systems with saturation and dead-zone nonlinearities [15]. Li et al. presented a gradient iterative estimation algorithm and a Newton iterative estimation algorithm for a nonlinear function [16].

In recent years, the approximation capability of fuzzy systems has received much attention, especially for control purposes [17–19]. The approximation theory of fuzzy systems is often applied to approximate the unknown functions of control systems, and the identification of the membership functions plays an essential role in ensuring this approximation capability. Many different types of membership functions are used in the approximation theory of fuzzy systems, such as Gaussian membership functions [20–22] and other membership functions [23, 24].

Since membership functions usually have complex nonlinear structures, traditional least squares may be infeasible for their identification because the derivative functions sometimes have no analytical solutions [25–27]. The gradient descent (GD) algorithm avoids the need for an analytical solution and can therefore be extended to complex nonlinear membership functions. The GD algorithm generates a sequence of estimates by an iterative update that consists of two parts: the negative gradient direction and the step size [28, 29]. With a correct direction and an optimal step size, the GD algorithm ensures that the estimates converge to the true values.
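For illustration, a minimal sketch of such a GD update on a generic least-squares cost is given below; the model f, its Jacobian jac, and the fixed step size are placeholder assumptions rather than the specific membership function and step-size rules discussed later.

```python
import numpy as np

def gradient_descent(f, jac, x, y, theta0, step=0.01, iters=200):
    """Minimize J(theta) = 0.5 * sum_t (f(x_t, theta) - y_t)^2 by
    theta_{k+1} = theta_k - step * grad J(theta_k).

    f   : model, f(x, theta) -> predictions of shape (N,)
    jac : Jacobian of f with respect to theta, shape (N, n)
    """
    theta = np.asarray(theta0, dtype=float)
    for _ in range(iters):
        residual = f(x, theta) - y          # prediction errors
        grad = jac(x, theta).T @ residual   # gradient of the quadratic cost
        theta = theta - step * grad         # move along the negative gradient direction
    return theta
```

A fixed step size is used here only for simplicity; Section 2 mentions several ways of choosing the step size.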

In this paper, we propose two methods, both based on the GD method, to estimate the unknown parameters of a nonlinear membership function. First, a gradient-based iterative algorithm is introduced, which estimates the parameters using all the collected data and therefore has a heavy computational cost. To reduce this cost, we transform the model of the nonlinear function into a regression model and then use two identification algorithms to estimate the unknown parameters.

The rest of the paper is organized as follows. Section 2 introduces the nonlinear function and the gradient-based iterative algorithm. Section 3 develops the model transformation-based stochastic gradient method and iterative method. Section 4 provides an illustrative example. Finally, concluding remarks are given in Section 5.

2. The Nonlinear Membership Function and the Gradient-Based Iterative Algorithm

Let us introduce some notation first. The symbol I stands for an identity matrix of appropriate size; the norm of a matrix X is defined as ||X||^2 := tr[XX^T]; and the superscript T denotes the matrix transpose.

Consider the nonlinear membership function in [30], where the measured data are contaminated with noise and the parameters are unknown and to be estimated. When these parameters are known, this nonlinear membership function is often used in single-input single-output fuzzy systems.

Define the cost function and the parameter vector as

The gradient of the cost function with respect to the parameter vector is

Let k be the iteration variable and consider the estimate of the parameter vector at iteration k. We obtain the following gradient-based iterative algorithm:

There are many methods for computing the step size, e.g., the steepest descent method, the stochastic gradient method, and the projection method [31–35].

Define

Then, equation (1) can be simplified as

Define the cost function and the parameter vector as

The gradient of this cost function with respect to the parameter vector is

Again letting k be the iteration variable and considering the estimate of the parameter vector at iteration k, we obtain the following gradient-based iterative algorithm:

The above iterative algorithms are computationally demanding because they update the parameters using all the collected data at every iteration. To reduce the computational effort, we propose two modified methods in the next section.

3. The Model Transformation-Based Stochastic Gradient Method and Iterative Method

Convert the nonlinear function (6) to an identification model:

Then define the parameter vector and the information vector as

Without loss of generality, a noise term with zero mean is introduced to the identification model in (10), and the identification model is put into a concise form
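In generic terms, such a concise form is a regression model of the following type, where the exact entries of the information vector depend on the membership function; this form is stated here as an assumption for later reference.

```latex
% Assumed generic regression form of the transformed identification model:
% y(t): measured output, \varphi(t): information vector, \theta: parameter vector,
% v(t): zero-mean noise term.
y(t) = \varphi^{\mathsf T}(t)\,\theta + v(t)
```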

Each element of the parameter vector is a known function of the original parameters of the membership function; hence, once the parameter vector has been estimated, the original parameters, and with them the estimate of the membership function, can be recovered.

Define the estimates of the parameter vector and the information vector at time t as

Using the gradient search and minimizing the quadratic criterion function lead to the model transformation-based stochastic gradient (MT-SG) algorithm:

The steps of computing the parameter estimation vector by the MT-SG algorithm in (15)–(18) are listed as follows:
(1) Collect the measured data.
(2) To initialize, set the initial values.
(3) Compute the quantities in (16).
(4) Compute the quantity in (17).
(5) Choose the step size according to (18).
(6) Update the parameter estimation vector by (15) and compare it with the previous estimate: if they are sufficiently close, that is, if their difference is smaller than a preset small tolerance, terminate the procedure and obtain the estimate; otherwise, increase the recursion index by 1 and go to step 3.
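A minimal sketch of such a stochastic-gradient recursion is given below, assuming the transformed model takes the regression form y(t) = φᵀ(t)θ + v(t) shown above; the construction of the information vectors, the initialization, and the stopping tolerance are illustrative choices rather than the exact quantities in (15)–(18).

```python
import numpy as np

def mt_sg(phi, y, theta0=None, tol=1e-6):
    """Stochastic-gradient recursion for y(t) = phi(t)^T theta + v(t).

    phi : (N, n) array, one information vector per row
    y   : (N,) array of measured outputs
    """
    N, n = phi.shape
    theta = np.zeros(n) if theta0 is None else np.asarray(theta0, dtype=float)
    r = 1.0                                   # step-size normalizer, r(0) = 1
    for t in range(N):
        r += phi[t] @ phi[t]                  # r(t) = r(t-1) + ||phi(t)||^2
        innovation = y[t] - phi[t] @ theta    # prediction error at time t
        theta_new = theta + (phi[t] / r) * innovation
        if np.linalg.norm(theta_new - theta) < tol:
            return theta_new                  # consecutive estimates close enough: stop
        theta = theta_new
    return theta
```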

In general, the MT-SG algorithm is suitable for online identification, whereas the iterative algorithm is used for offline identification. The iterative algorithm updates the estimate using a fixed data batch of finite length and thus achieves a higher estimation accuracy than the SG algorithm. Next, we use a finite batch of measured input-output data and denote the iteration index by the subscript k.

Define the stacked output vector and the stacked information matrix as

Let the iterative estimates at iteration k be denoted accordingly, and define the corresponding stacked quantities as

Define a quadratic criterion function

Minimizing this criterion function by the negative gradient search leads to the following model transformation-based gradient iterative (MT-GI) algorithm for computing the parameter estimates:

The steps of computing the parameter estimation vector by the MT-GI algorithm are listed as follows:
(1) Collect the input data.
(2) To initialize, let k = 1 and set the initial parameter estimate using a column vector whose entries are all unity.
(3) Compute the quantity given by (23).
(4) Form the quantities according to (24) and (25).
(5) Choose the step size according to the maximum given by (26).
(6) Compute the parameter estimate by (22).
(7) Compare the estimates at iterations k and k-1: if they are sufficiently close, that is, if their difference is smaller than a preset small tolerance, terminate the procedure, record the number of iterations k, and obtain the estimate; otherwise, increase k by 1 and go to step 3.
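A minimal sketch of such a batch gradient-iterative update is given below, again assuming the stacked regression form Y = Φθ + V; the eigenvalue-based step-size bound, the small initial estimate, and the stopping tolerance are common choices in this family of algorithms and stand in for the exact quantities in (22)–(26).

```python
import numpy as np

def mt_gi(Phi, Y, theta0=None, tol=1e-6, max_iter=1000):
    """Gradient-iterative update theta_k = theta_{k-1} + mu * Phi^T (Y - Phi theta_{k-1})."""
    L, n = Phi.shape
    # Small nonzero initial estimate, e.g., 1e-6 times a column vector of ones.
    theta = np.full(n, 1e-6) if theta0 is None else np.asarray(theta0, dtype=float)
    # Any 0 < mu < 2 / lambda_max(Phi^T Phi) guarantees convergence; 1 / lambda_max is used here.
    mu = 1.0 / np.linalg.eigvalsh(Phi.T @ Phi).max()
    for k in range(1, max_iter + 1):
        theta_new = theta + mu * Phi.T @ (Y - Phi @ theta)
        if np.linalg.norm(theta_new - theta) < tol:
            return theta_new, k               # converged: return estimate and iteration count
        theta = theta_new
    return theta, max_iter
```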

4. Example

Consider a nonlinear function of the form above, whose parameters take the specified true values; then, we can conclude

Define

Assume that the input is taken as a persistent excitation signal with zero mean and unit variance and that the noise is white with zero mean and the specified variance. Applying the MT-SG algorithm and the MT-GI algorithm to estimate the parameters of this system, the parameter estimates and their errors are shown in Tables 1 and 2, and the parameter estimation errors versus the data length and the iteration number are shown in Figures 1 and 2.
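The comparison can be reproduced in outline with the two sketches above; the driver below uses a hypothetical regression with made-up parameter values (not the true values of this example) purely to illustrate how the relative estimation errors of the two algorithms would be computed.

```python
import numpy as np

# Assumes the mt_sg and mt_gi sketches above are defined in scope.
rng = np.random.default_rng(0)
theta_true = np.array([0.8, -0.5, 1.2])                # hypothetical parameters
N = 2000
Phi = rng.standard_normal((N, theta_true.size))        # zero-mean, unit-variance excitation
Y = Phi @ theta_true + 0.10 * rng.standard_normal(N)   # white noise with variance 0.01

theta_sg = mt_sg(Phi, Y)
theta_gi, iters = mt_gi(Phi, Y)
for name, est in [("MT-SG", theta_sg), ("MT-GI", theta_gi)]:
    err = np.linalg.norm(est - theta_true) / np.linalg.norm(theta_true)
    print(f"{name}: estimate = {est}, relative error = {err:.4%}")
```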

From Tables 1 and 2 and Figures 1 and 2, we can draw the following conclusions:
(1) The MT-GI algorithm has higher estimation accuracy than the MT-SG algorithm.
(2) The parameter estimation errors of the MT-SG algorithm become smaller and smaller and go to zero as the data length increases.
(3) The parameter estimation errors of the MT-GI algorithm become smaller and smaller and go to zero as the iteration number k increases.

5. Conclusions

This paper presents two identification methods for a nonlinear membership function. An equation-conversion method is proposed to convert the nonlinear function into a concise model, and then an MT-SG algorithm and an MT-GI algorithm are derived to identify the nonlinear function. The simulation results verify the effectiveness of the proposed algorithms.

Data Availability

The simulation data used to support the findings of this study are available from the corresponding author upon request.

Conflicts of Interest

The authors declare that they have no conflicts of interest.

Acknowledgments

This work was supported by the Natural Science Foundation of Jiangsu Province (no. BK20131109).