Abstract

Local polynomial regression (LPR) is applied to solve partial differential equations (PDEs). The usual analytical approaches, separation of variables and eigenfunction expansion, rarely yield closed-form solutions, so numerical solutions must be sought. In this paper, two test problems are considered for the numerical illustration of the method. Comparisons are made between the exact solutions and the results of the LPR. The results of applying this method to the PDEs reveal that the LPR method possesses very high accuracy, adaptability, and efficiency; more importantly, the numerical illustrations indicate that the new method is much more efficient than the B-spline and AGE methods derived for the same purpose.

1. Introduction

There are some effective and convenient numerical methods for partial differential equation problems with initial and boundary values; for example, radial basis functions are used to solve the two-dimensional sine-Gordon equation in [1], a family of second-order methods is used to solve variable-coefficient fourth-order parabolic partial differential equations in [2], and a fifth-degree B-spline solution is available for fourth-order parabolic partial differential equations in [3]. Recently, H. Caglar and N. Caglar [4] used the local polynomial regression (LPR) method for the numerical solution of fifth-order boundary value problems. Moreover, they solved linear Fredholm and Volterra integral equations with the LPR method [5]. Consequently, in this paper the LPR method is applied to some PDEs with initial and boundary values, and the numerical results demonstrate that local polynomial fitting is accurate, simple, and efficient. First, we consider the numerical approximation of a simple nonhomogeneous PDE with initial values by local polynomial regression:

$$\frac{\partial^2 u}{\partial t^2}-\frac{\partial^2 u}{\partial x^2}=g(x,t),\qquad u|_{t=0}=\varphi(x),\qquad \left.\frac{\partial u}{\partial t}\right|_{t=0}=\psi(x). \tag{1.1}$$

2. Bivariate Local Polynomial Regression

Bivariate local polynomial regression is an attractive method from both a theoretical and a practical point of view. The multivariate local polynomial method has a small mean squared error compared with the Nadaraya-Watson estimator, which leads to an undesirable form of the bias, and the Gasser-Müller estimator, which pays a price in variance when dealing with a random design model. Multivariate local polynomial fitting also has other advantages. The method adapts to various types of designs, such as random and fixed designs and highly clustered and nearly uniform designs. Furthermore, there is an absence of boundary effects: the bias at the boundary automatically stays of the same order as in the interior, without the use of specific boundary kernels. The local polynomial approximation approach is appealing on general scientific grounds; the least squares principle it applies opens the way to a wealth of statistical knowledge and thus to easy generalizations. All the above-mentioned advantages can be found in the literature [6-10]. The basic idea of multivariate local polynomial regression is also presented in [11-14]. In this section, we briefly outline the extension of local polynomial fitting to bivariate regression.

Suppose that the state vector at point $T$ is $X_T$:

$$y_{i,T}=f_i(X_T). \tag{2.1}$$

Our purpose is to obtain the estimate $\hat{y}_{i,T}=\hat{f}_i(X_T)$. In this paper, we use the $d$th-order multivariate local polynomial $f_i(X)$ to predict the value at the fixed point $X_T$. The polynomial function can be described as

$$f_i(X)\approx\sum_{0\le|j|\le d}\frac{1}{j!}D^{(j)}f_i(X_T)\,(X-X_T)^j=\sum_{0\le|j|\le d}b_j(X_T)\,(X-X_T)^j, \tag{2.2}$$

where

$$j=(j_1,j_2),\qquad j!=j_1!\,j_2!,\qquad |j|=\sum_{l=1}^{2}j_l,\qquad \sum_{0\le|j|\le d}=\sum_{|j|=0}^{d}\;\sum_{\substack{j_1,j_2\ge 0\\ j_1+j_2=|j|}},\qquad X^j=x_1^{j_1}x_2^{j_2}, \tag{2.3}$$

$d$ is the order of the expansion, and

$$D^{(j)}f_i(X_T)=\left.\frac{\partial^{|j|}f_i(X)}{\partial x_1^{j_1}\,\partial x_2^{j_2}}\right|_{X=X_T},\qquad b_j(X_T)=\frac{1}{j!}D^{(j)}f_i(X_T). \tag{2.4}$$

In the bivariate regression method, the behaviour of $X_T$ on the attractor is assumed to be the same as that of the nearby points $X_{T-a}$ $(a=1,2,\dots,A)$, ordered by distance. Using the $A$ pairs $(X_{T-a},y_{i,T-a})$, whose values are already known, the coefficients of $f_i$ are determined by minimizing

$$\sum_{a=1}^{A}\Bigl[y_{i,T-a}-\sum_{0\le|j|\le d}b_j(X_T)\,(X_{T-a}-X_T)^j\Bigr]^2 K_H(X_{T-a}-X_T). \tag{2.5}$$

For this weighted least squares problem, when $X^{\mathsf T}WX$ is invertible, the solution can be written as

$$B=(X^{\mathsf T}WX)^{-1}X^{\mathsf T}W\Upsilon, \tag{2.6}$$

where

$$\Upsilon=(y_{i,T-1},y_{i,T-2},\dots,y_{i,T-A})^{\mathsf T},\qquad B=(b_0(X_T),b_1(X_T),\dots,b_d(X_T))^{\mathsf T},$$
$$W=\operatorname{diag}\bigl(K_H(X_{T-1}-X_T),K_H(X_{T-2}-X_T),\dots,K_H(X_{T-A}-X_T)\bigr), \tag{2.7}$$

and $X$ is the $A\times S$ design matrix, $S$ being the number of multi-indices $j$ with $0\le|j|\le d$:

$$X=\begin{pmatrix}1&(X_{T-1}-X_T)&\cdots&(X_{T-1}-X_T)^d\\ 1&(X_{T-2}-X_T)&\cdots&(X_{T-2}-X_T)^d\\ \vdots&\vdots&&\vdots\\ 1&(X_{T-A}-X_T)&\cdots&(X_{T-A}-X_T)^d\end{pmatrix}. \tag{2.8}$$

Then we can obtain the estimate $\hat{y}_{i,T}=\hat{f}_i(X_T)$:

$$\hat{y}_{i,T}=\hat{f}_i(X_T)=E_i(X^{\mathsf T}WX)^{-1}X^{\mathsf T}W\Upsilon, \tag{2.9}$$

where $E_i=(1,0,0,\dots,0)_{1\times S}$. Several important issues remain to be discussed: the bandwidth, the order of the multivariate local polynomial, and the kernel function.
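To make the weighted least squares estimator (2.5)-(2.9) concrete, the following Python/NumPy sketch fits a local quadratic surface ($d=2$) around a point and returns the intercept $b_0=\hat{f}(X_T)$. It is an illustration under stated assumptions, not the paper's MATLAB implementation; the helper name `local_quadratic_fit`, the bandwidth matrix $H=hI_2$, and the tiny ridge term are our own choices.

```python
import numpy as np

def local_quadratic_fit(X, y, x0, h):
    """Local quadratic fit (d = 2) at x0 = (x_T, t_T) by weighted least
    squares, returning the intercept b_0, i.e. the estimate f_hat(x0).
    X: (A, 2) array of nearby points, y: (A,) responses, h: bandwidth."""
    dx = X[:, 0] - x0[0]
    dt = X[:, 1] - x0[1]
    # One column per monomial (x - x_T)^j1 (t - t_T)^j2 with |j| <= 2.
    D = np.column_stack([np.ones_like(dx), dx, dt, dx**2, dx*dt, dt**2])
    # Spherical Epanechnikov weights K_H(X_a - X_T) with H = h * I_2.
    w = np.maximum(1.0 - (dx**2 + dt**2) / h**2, 0.0)
    # Normal equations of (2.6): B = (X'WX)^{-1} X'W y; a tiny ridge
    # term guards against a singular local design.
    M = D.T @ (w[:, None] * D) + 1e-10 * np.eye(6)
    B = np.linalg.solve(M, D.T @ (w * y))
    return B[0]  # E_1 B, as in (2.9)
```

Because the monomial basis contains $1,x,t,xt$, the fit reproduces a bilinear test function exactly at the evaluation point.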

2.1. Parameters Estimations and Selections

There are many algorithms for determining the embedding dimension [15, 16]. For a univariate series $\{y_{i,n}\}_{n=1}^{N}$, $i=1,2,\dots,M$, a popular method for finding the embedding dimension $m_i$ is the so-called false nearest-neighbor method [17, 18]. Here, we apply this method to the bivariate case.

For the multivariate local polynomial estimator, three issues have a significant influence on prediction accuracy and computational complexity. First of all, there is the choice of the bandwidth matrix, which plays a rather crucial role. The bandwidth matrix $H$ is taken to be diagonal; for simplicity, we set $H=hI_2$, where $I_2$ is the identity matrix of order two. In theory, there exists an optimal bandwidth $h_{\mathrm{opt}}$ in the mean squared error sense, such that

$$h_{\mathrm{opt}}=\arg\min_{h}\int\bigl[\hat{f}_i(x)-f_i(x)\bigr]^2\,dx. \tag{2.10}$$

The optimal bandwidth cannot be solved for directly, so we discuss how to obtain an asymptotically optimal bandwidth by a search method. As the bandwidth varies from small to large, we compare the values of the objective function and choose the smallest bandwidth that attains the minimum of the objective function; this bandwidth is taken as the optimal one.

Let $h_l=C_l^{\,l}h_{\min}$, where $h_{\min}$ is the minimum bandwidth and $C_l$ is the expansion coefficient. We search for the bandwidth that minimizes the objective function over the interval $[h_{\min},h_{\max}]$, where the objective function is the mean squared error (MSE).

First, we set $h=h_{\min}$ and then repeatedly enlarge $h$ by the expansion coefficient $C_l$, stopping when $h>h_{\max}$; the bandwidth at which $E_{\mathrm{MS}}(h)$ attains its minimum is taken as the optimal bandwidth. $E_{\mathrm{MS}}(h)$ can be replaced by

$$e_{\mathrm{MS}}=\frac{1}{I}\sum_{i=1}^{I}\bigl[\hat{f}_i(x)-f_i(x)\bigr]^2, \tag{2.11}$$

where $I$ is the number of predicted points. In this paper, we choose

$$h_{\min}=\frac{2h_m}{n},\qquad h_{\max}=\frac{h_m}{2}, \tag{2.12}$$

where $h_m=\max\|X_i-X_j\|$ and $C_l=1.1$. Compared with other methods, this one is more convenient.

In order to get closer to the ideal optimal bandwidth, we search once more, narrowing the interval on the basis of the previous search. Suppose $h_j=C_l^{\,j}h_{\min}$ is the optimal bandwidth found in the search above. The small interval $[C_l^{\,j-1}h_{\min},\,C_l^{\,j+1}h_{\min}]$ is then divided into $n$ equal parts, and we set

$$h_i=C_l^{\,j-1}h_{\min}\Bigl(1+\frac{i}{n}\bigl(C_l^{2}-1\bigr)\Bigr),\qquad i=1,\dots,n-1. \tag{2.13}$$

Among these $n-1$ bandwidths, the one that minimizes $e_{\mathrm{MS}}$ is the optimal bandwidth.
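The two-stage search of (2.12)-(2.13) can be sketched in Python as follows. To keep the example self-contained we use a simple Epanechnikov-weighted local average as the inner estimator and a leave-one-out estimate of $e_{\mathrm{MS}}$; the function names and these simplifications are our own assumptions, not the paper's implementation.

```python
import numpy as np

def local_fit(X, y, x0, h):
    """Epanechnikov-weighted local average at x0 (a local constant fit,
    standing in for the full local polynomial estimator)."""
    r2 = np.sum((X - x0)**2, axis=1) / h**2
    w = np.maximum(1.0 - r2, 0.0)
    return np.inf if w.sum() == 0 else float(w @ y / w.sum())

def e_ms(h, X, y):
    """Leave-one-out estimate of the objective e_MS(h) of (2.11)."""
    errs = [(local_fit(np.delete(X, k, 0), np.delete(y, k), X[k], h) - y[k])**2
            for k in range(len(y))]
    return float(np.mean(errs))

def search_bandwidth(X, y, C_l=1.1):
    """Two-stage search of (2.12)-(2.13): a coarse geometric grid over
    [h_min, h_max], then refinement around the coarse optimum."""
    h_m = max(np.linalg.norm(a - b) for a in X for b in X)
    h_min, h_max = 2 * h_m / len(y), h_m / 2
    grid, h = [], h_min
    while h <= h_max:          # coarse pass: h_min * C_l**l
        grid.append(h)
        h *= C_l
    best = min(grid, key=lambda h: e_ms(h, X, y))
    # Refinement pass: subdivide the bracketing interval around `best`.
    fine = np.linspace(best / C_l, best * C_l, 11)
    return float(min(fine, key=lambda h: e_ms(h, X, y)))
```

Bandwidths too small to capture any neighbour get an infinite objective and are skipped automatically by the minimization.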

Another issue in multivariate local polynomial fitting is the choice of the order of the polynomial. Since the modeling bias is primarily controlled by the bandwidth, however, this issue is less crucial. For a given bandwidth $h$, a large value of $d$ reduces the modeling bias but causes a large variance and considerable computational cost. Since the bandwidth controls the modeling complexity, and because local data are sparse in multidimensional space, a higher-order polynomial is rarely used; we therefore apply local quadratic or cubic regression to fit the model (i.e., $p=2,3$). The third issue is the selection of the kernel function. In this paper, we choose as our kernel the optimal spherical Epanechnikov kernel [6, 7], which minimizes the asymptotic mean squared error (MSE) of the resulting multivariate local polynomial estimators.

3. LPR Solutions for PDE

Consider the following nonhomogeneous PDE with initial and boundary values:

$$\frac{\partial^2 u}{\partial t^2}-\frac{\partial^2 u}{\partial x^2}=g(x,t),\qquad u|_{t=0}=\varphi(x),\qquad \left.\frac{\partial u}{\partial t}\right|_{t=0}=\psi(x),\qquad 0\le x\le 1,\ 0\le t\le 1. \tag{3.1}$$

Let $X_T=(x_T,t_T)$. Our purpose is to obtain the estimate $\hat{y}_{i,T}=\hat{f}_i(X_T)$ given $n^2$ points $X_{T_i}=(x_{T_i},t_{T_i})$, where $x_{T_1}=0<x_{T_2}<\cdots<x_{T_n}=1$ and $t_{T_1}=0<t_{T_2}<\cdots<t_{T_n}=1$. We then have

$$u_{tt}(X_T)-u_{xx}(X_T)\approx g_i(X_T),\qquad 0\le x\le 1,\ 0\le t\le 1, \tag{3.2}$$

which, for the local quadratic expansion, is described by

$$\frac{\partial^2}{\partial t^2}\Bigl[\beta_0+\beta_1(x-x_0)+\beta_2(t-t_0)+\beta_3(x-x_0)^2+\beta_4(x-x_0)(t-t_0)+\beta_5(t-t_0)^2\Bigr]-\frac{\partial^2}{\partial x^2}\Bigl[\beta_0+\beta_1(x-x_0)+\beta_2(t-t_0)+\beta_3(x-x_0)^2+\beta_4(x-x_0)(t-t_0)+\beta_5(t-t_0)^2\Bigr]\approx g_i(X_T). \tag{3.3}$$

Consequently, the corresponding minimization function is

$$\sum_{T_1=1}^{n}\bigl[\varphi(x_{T_1})-u(x_{T_1},0)\bigr]^2 K_H(X_{T_1}-X_T)+\sum_{T_2=1}^{n}\bigl[\psi(x_{T_2})-u_t(x_{T_2},0)\bigr]^2 K_H(X_{T_2}-X_T)+\sum_{j=2}^{n-1}\sum_{i=1}^{n}\bigl[g(x_{T_i},t_{T_j})-u_{tt}(x_{T_i},t_{T_j})+u_{xx}(x_{T_i},t_{T_j})\bigr]^2 K_H(X_{T_{ij}}-X_T). \tag{3.4}$$

In order to find the expression for $X$, we first need to find its elements.

Since the $p$th-order Taylor expansion of a binary function has $(p^2+3p+2)/2$ terms, we obtain expressions (3.5), (3.6), and (3.7):

$$X=\begin{pmatrix}
a_{11}&a_{12}&a_{13}&a_{14}&a_{15}&a_{16}\\
a_{21}&a_{22}&a_{23}&a_{24}&a_{25}&a_{26}\\
\vdots&\vdots&\vdots&\vdots&\vdots&\vdots\\
a_{n1}&a_{n2}&a_{n3}&a_{n4}&a_{n5}&a_{n6}\\
d_{(n+1),1}&d_{(n+1),2}&d_{(n+1),3}&d_{(n+1),4}&d_{(n+1),5}&d_{(n+1),6}\\
\vdots&\vdots&\vdots&\vdots&\vdots&\vdots\\
d_{n(n-1),1}&d_{n(n-1),2}&d_{n(n-1),3}&d_{n(n-1),4}&d_{n(n-1),5}&d_{n(n-1),6}\\
g_{[n(n-1)+1],1}&g_{[n(n-1)+1],2}&g_{[n(n-1)+1],3}&g_{[n(n-1)+1],4}&g_{[n(n-1)+1],5}&g_{[n(n-1)+1],6}\\
g_{[n(n-1)+2],1}&g_{[n(n-1)+2],2}&g_{[n(n-1)+2],3}&g_{[n(n-1)+2],4}&g_{[n(n-1)+2],5}&g_{[n(n-1)+2],6}\\
\vdots&\vdots&\vdots&\vdots&\vdots&\vdots\\
g_{n^2,1}&g_{n^2,2}&g_{n^2,3}&g_{n^2,4}&g_{n^2,5}&g_{n^2,6}
\end{pmatrix}, \tag{3.5}$$

where $X$ is an $n^2\times(p+1)(p+2)/2$ matrix whose entries (for $p=2$) are

$$\begin{aligned}
&i=1,2,\dots,n:\quad a_{i1}=1,\quad a_{i2}=x_{T_i}-x_T,\quad a_{i3}=t_{T_i}-t_T,\\
&\qquad a_{i4}=(x_{T_i}-x_T)^2,\quad a_{i5}=(x_{T_i}-x_T)(t_{T_i}-t_T),\quad a_{i6}=(t_{T_i}-t_T)^2;\\
&i=n+1,n+2,\dots,n(n-1):\quad d_{ij}=0\ (j=1,2,3,5),\quad d_{i4}=-2,\quad d_{i6}=2;\\
&i=n(n-1)+1,n(n-1)+2,\dots,n^2:\quad g_{ij}=0\ (j=1,2,4),\quad g_{i3}=1,\\
&\qquad g_{i5}=x_{T_k}-x_T,\quad g_{i6}=2(t_{T_k}-t_T),\quad k=1,2,\dots,n,
\end{aligned} \tag{3.6}$$

$$\Upsilon^{\mathsf T}=\bigl(\varphi(x_{T_1}),\varphi(x_{T_2}),\dots,\varphi(x_{T_n}),\,g(x_{T_1},t_{T_2}),g(x_{T_2},t_{T_2}),\dots,g(x_{T_n},t_{T_2}),\dots,\,g(x_{T_1},t_{T_{n-1}}),g(x_{T_2},t_{T_{n-1}}),\dots,g(x_{T_n},t_{T_{n-1}}),\,\psi(x_{T_1}),\psi(x_{T_2}),\dots,\psi(x_{T_n})\bigr)_{1\times n^2}. \tag{3.7}$$

Substituting (3.6) and (3.7) into

$$B=(X^{\mathsf T}WX)^{-1}X^{\mathsf T}W\Upsilon, \tag{3.8}$$

we obtain the estimate $\hat{y}_{i,T}=\hat{f}_i(X_T)$, $i=1,2,\dots,n$:

$$\hat{y}_{i,T}=\hat{f}_i(X_T)=E_i(X^{\mathsf T}WX)^{-1}X^{\mathsf T}W\Upsilon, \tag{3.9}$$

where $E_i=(1,0,0,\dots,0)_{1\times S}$ and $S=(p+1)(p+2)/2$.
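The assembly of the design matrix and response vector of (3.5)-(3.7) can be sketched as follows, for a local quadratic ($p=2$, six monomials). This is an illustrative reading of the row structure, not the paper's code: the name `assemble_system` is hypothetical, and the entries of the interior rows follow from differentiating the quadratic in (3.3), for which $u_{tt}-u_{xx}=2\beta_5-2\beta_3$.

```python
import numpy as np

def assemble_system(xs, ts, x0, t0, phi, psi, g):
    """Assemble the n^2 x 6 design matrix X and response vector Y of
    (3.5)-(3.7) for a local quadratic centred at (x0, t0).
    Row blocks: n rows for u(x, 0) = phi(x), n(n-2) interior rows for
    u_tt - u_xx = g, and n rows for u_t(x, 0) = psi(x)."""
    rows, rhs = [], []
    for xi in xs:                      # a-rows: value of u at t = 0
        dx, dt = xi - x0, 0.0 - t0
        rows.append([1.0, dx, dt, dx**2, dx * dt, dt**2])
        rhs.append(phi(xi))
    for tj in ts[1:-1]:                # d-rows: u_tt - u_xx = 2*b5 - 2*b3
        for xi in xs:
            rows.append([0.0, 0.0, 0.0, -2.0, 0.0, 2.0])
            rhs.append(g(xi, tj))
    for xi in xs:                      # g-rows: u_t = b2 + b4*dx + 2*b5*dt
        dx, dt = xi - x0, 0.0 - t0
        rows.append([0.0, 0.0, 1.0, 0.0, dx, 2.0 * dt])
        rhs.append(psi(xi))
    return np.array(rows), np.array(rhs)
```

The assembled pair would then be passed to the weighted least squares solve of (3.8)-(3.9).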

4. Numerical Illustrations and Discussions

In this section we present the numerical results obtained by applying the scheme discussed above to the following second-order and fourth-order initial-boundary value problems. All computations are carried out using MATLAB 7.0.

Furthermore, in order to evaluate accuracy and effectiveness, we apply the following indices, namely, the mean squared error (MSE)

$$\mathrm{MSE}=\frac{1}{n}\sum_{i=1}^{n}\bigl[\hat{f}_i(x)-f_i(x)\bigr]^2 \tag{4.1}$$

and the absolute error

$$\mathrm{AE}_i(x)=\bigl|\hat{f}_i(x)-f_i(x)\bigr|. \tag{4.2}$$
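These two indices are straightforward to compute; the following NumPy helpers (names are our own, for illustration) evaluate (4.1) and (4.2) from arrays of fitted and exact values:

```python
import numpy as np

def mse(f_hat, f_exact):
    """Mean squared error (4.1) between fitted and exact values."""
    d = np.asarray(f_hat, float) - np.asarray(f_exact, float)
    return float(np.mean(d**2))

def abs_error(f_hat, f_exact):
    """Pointwise absolute error (4.2)."""
    return np.abs(np.asarray(f_hat, float) - np.asarray(f_exact, float))
```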

Example 4.1. First, we consider the following second-order nonhomogeneous PDE:

$$\frac{\partial^2 u}{\partial t^2}-\frac{\partial^2 u}{\partial x^2}=t\sin x,\qquad u|_{t=0}=0,\qquad \left.\frac{\partial u}{\partial t}\right|_{t=0}=\sin x. \tag{4.3}$$

The exact solution of problem (4.3) is $u(x,t)=t\sin x$. We solve Example 4.1 with $n=20,30,50$, choosing $p=3,4$ and the various values of the parameter $H$ presented in Table 1. The errors in the solutions are computed by our method (3.9). For $n=20$ and $p=3,4$, the magnitude of the MSE lies between $10^{-3}$ and $10^{-4}$; for $n=30$, it lies between $10^{-4}$ and $10^{-7}$; and for $n=50$, it lies between $10^{-5}$ and $10^{-8}$. We conclude that the MSE decreases as $n$ increases, while the order $p$ has little influence on the MSE for large $n$. Concerning the parameter $H$: the MSE attains its minimum for $p=3$, $n=20$ at $H_3=1/25$; for $p=3$, $n=30$ at $H_4=3/100$; and for $p=3$, $n=50$ at $H_5=1/40$. Using method (2.12)-(2.13), we find that the optimal bandwidth $H$ lies in the interval $[0.025I_2,\,0.04I_2]$ for $n=20,30,50$. For $p=4$ the situation is the same as for $p=3$. Here, $I_2$ is the identity matrix of order two. The fit is depicted in Figure 1, produced with MATLAB 7.0.
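The stated exact solution can be checked symbolically; the SymPy sketch below (SymPy is our own tooling choice, separate from the paper's MATLAB computations) confirms that $u(x,t)=t\sin x$ satisfies the PDE and both initial conditions of (4.3):

```python
import sympy as sp

x, t = sp.symbols('x t')
u = t * sp.sin(x)  # claimed exact solution of (4.3)

# Residual of the PDE u_tt - u_xx = t*sin(x) and the two initial conditions.
pde = sp.simplify(sp.diff(u, t, 2) - sp.diff(u, x, 2) - t * sp.sin(x))
ic0 = sp.simplify(u.subs(t, 0))                          # u|_{t=0} = 0
ic1 = sp.simplify(sp.diff(u, t).subs(t, 0) - sp.sin(x))  # u_t|_{t=0} = sin x
print(pde, ic0, ic1)  # all three residuals are zero
```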

Example 4.2. Next we consider the following fourth-order nonhomogeneous PDE:

$$\frac{\partial^2 u}{\partial t^2}+\frac{\partial^4 u}{\partial x^4}=(\pi^4-1)\sin\pi x\cos t,\qquad 0\le x\le 1,\ t>0, \tag{4.4}$$

with the initial conditions

$$u(x,0)=\sin\pi x,\qquad u_t(x,0)=0,\qquad 0\le x\le 1, \tag{4.5}$$

and the boundary conditions at $x=0$ and $x=1$ of the form

$$u(0,t)=u(1,t)=\frac{\partial^2 u}{\partial x^2}(0,t)=\frac{\partial^2 u}{\partial x^2}(1,t)=0,\qquad t\ge 0. \tag{4.6}$$

The exact solution of problem (4.4)-(4.6) is $u(x,t)=\sin\pi x\cos t$. This problem has been solved by several authors [19, 20]. Here, we solve Example 4.2 with $n=30,50$, choosing $p=3,4$ and the various values of the parameter $H$ presented in Table 2. The errors in the solutions computed by our method (3.9), by the splines method for three sets of values of $p,q,s,\sigma$ in [19], and by the AGE method for two sets of values of its parameters in [20] are presented in Table 2. For $n=30$, the magnitude of the MSE lies between $10^{-4}$ and $10^{-6}$ for $p=3$, and between $10^{-6}$ and $10^{-8}$ for $p=4$; the order $p$ thus has only a small influence on the MSE. Concerning the parameter $H$: the MSE attains its minimum for $p=3$, $n=20$ at $H_3=1/25$; for $p=3$, $n=30$ at $H_4=3/100$; and for $p=3$, $n=50$ at $H_5=1/40$. As in Example 4.1, the optimal bandwidth $H$ lies in the interval $[0.025I_2,\,0.03I_2]$ by method (2.12)-(2.13), for $n=30$ and $n=50$ with $p=4,5$. Furthermore, we have tabulated the absolute errors (AEs) at $x=0.10,0.20$ for different combinations of the parameters $p,H$ for $n=30$ and $n=50$, respectively. For $n=30$, $p=4,5$, the magnitude of the AE lies between $10^{-3}$ and $10^{-6}$ at the point $x=0.1$, and for $n=50$, $p=4,5$, it lies between $10^{-5}$ and $10^{-7}$ at the point $x=0.2$. Here, $I_2$ is the identity matrix of order two. The fit is illustrated in Figure 2, produced with MATLAB 7.0.
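As for Example 4.1, the exact solution can be verified symbolically; the SymPy sketch below (again a tooling choice of ours, not the paper's computation) checks that $u(x,t)=\sin\pi x\cos t$ satisfies the PDE (4.4), the initial conditions (4.5), and the boundary conditions (4.6):

```python
import sympy as sp

x, t = sp.symbols('x t')
u = sp.sin(sp.pi * x) * sp.cos(t)  # claimed exact solution of (4.4)-(4.6)

# Residual of the PDE (4.4): u_tt + u_xxxx - (pi^4 - 1) sin(pi x) cos(t).
pde = sp.simplify(sp.diff(u, t, 2) + sp.diff(u, x, 4)
                  - (sp.pi**4 - 1) * sp.sin(sp.pi * x) * sp.cos(t))
# Initial conditions (4.5): u(x, 0) = sin(pi x), u_t(x, 0) = 0.
ic = [sp.simplify(u.subs(t, 0) - sp.sin(sp.pi * x)), sp.diff(u, t).subs(t, 0)]
# Boundary conditions (4.6): u and u_xx vanish at x = 0 and x = 1.
uxx = sp.diff(u, x, 2)
bc = [u.subs(x, 0), u.subs(x, 1), uxx.subs(x, 0), uxx.subs(x, 1)]
print(pde, ic, bc)  # every residual is zero
```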

5. Conclusions

In this paper, the LPR method has been applied to the numerical solution of several kinds of PDEs. The LPR method has also been used to solve fifth-order boundary value problems [4] and Fredholm and Volterra integral equations [5], where the maximum absolute errors are very small and the calculation process is simple and feasible. Compared with the splines method [19] and the AGE method [20], the LPR method converges to the solution with fewer nodes, and the maximum absolute errors are somewhat smaller. Moreover, it is more flexible, since problems can be treated by adjusting only the parameters $h$ and $p$. However, the LPR method has the shortcoming of requiring more computation time than the splines method [19] for the same problems, which we intend to address in future work. In any case, we conclude that the LPR method is a powerful tool for the numerical solution of differential equations with initial and boundary values.

Acknowledgment

This work was supported by Natural Science Foundation Projects of CQ CSTC of China (CSTC2010BB2310, CSTC2011jjA40033, CSTC2012jjA00037) and Chongqing CMEC Foundations of China (KJ080614, KJ120829).