Using the Residual Correction Method and the Monotone Iterative Technique to Calculate Upper and Lower Approximate Solutions of Singularly Perturbed Non-linear Boundary Value Problems

This paper uses the proposed residual correction method in coordination with the monotone iterative technique to obtain upper and lower approximate solutions of singularly perturbed non-linear boundary value problems. First, the monotonicity of the non-linear differential equation is reinforced using the monotone iterative technique; the cubic-spline method is then applied to discretize and convert the differential equation into an inequality-constrained mathematical programming problem; finally, based on the residual correction concept, the complex constrained problem is transformed into a simpler problem of equation-based iteration. As verified by the four examples given in this paper, the proposed method can be used to obtain the upper and lower solutions of problems of this kind quickly, and to easily identify the error range between the mean approximate solution and the exact solution.


Introduction
Let us begin by considering a singularly perturbed boundary value problem of the following form:

ε u″(x) = f(x, u, u′),  x ∈ (a, b),  (1)
u(a) = α,  u(b) = β,  (2)

where ε (0 < ε ≪ 1) is a small parameter and f(x, u, u′) is a non-linear term. For the existence and uniqueness of solutions of such problems, see Bender and Orszag [1]; related studies and applications can be found in [2]-[6]. Usually, in the process of obtaining upper and lower solutions of a differential equation, the maximum principle of the differential equation must be relied on to establish the monotonic relation between the residual function of the differential equation and its solutions. To exhibit this monotonic relation, Equations (1) and (2) are reformulated into the following residual equations:

R(x, ū) = ε ū″ − f(x, ū, ū′),  x ∈ (a, b),  (3)
R(a, ū) = ū(a) − α,  (4)
R(b, ū) = ū(b) − β,  (5)

where the function R is called the residual function of the differential equation in (a, b) or on the boundary, abbreviated as the residual. The exact solution u(x) makes the residuals of Equations (3)-(5) all equal to zero; the approximate solution ū(x) is further assumed to have continuous derivatives up to second order. Based on the maximum principle of a second-order monotone differential equation (Protter [7]; Wang and Hu [13]), when the condition

∂f/∂u ≥ 0,  x ∈ (a, b),  (6)

is satisfied together with

R(x, ū) ≥ 0,  x ∈ (a, b),  (7)
R(a, ū) ≤ 0,  (8)
R(b, ū) ≤ 0,  (9)

the inequality given below is also established, namely:

ū(x) ≤ u(x),  x ∈ [a, b].  (10)

In Inequality (10), ū(x) and u(x) denote the approximate and the exact solution respectively; reversing the inequality signs in (7)-(9) yields ū(x) ≥ u(x) instead. A differential equation that possesses such a relation is said to be monotone.
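As a small numeric illustration of implication (10), consider an assumed linear model problem (not one of the paper's examples): ε u″ = u on (0, 1) with u(0) = 0, u(1) = 1, so that f(x, u) = u and ∂f/∂u = 1 ≥ 0. The trial function ū(x) = x reproduces the boundary values and has residual R(x, ū) = ε·0 − x ≤ 0, so the maximum principle predicts ū(x) ≥ u(x) throughout; the exact solution sinh(x/√ε)/sinh(1/√ε) confirms this:

```python
# Verify implication (10) on the model problem eps*u'' = u, u(0)=0, u(1)=1.
import numpy as np

eps = 0.01
x = np.linspace(0.0, 1.0, 101)
u_exact = np.sinh(x / np.sqrt(eps)) / np.sinh(1.0 / np.sqrt(eps))

u_bar = x.copy()                 # trial (candidate upper) solution
residual = eps * 0.0 - u_bar     # R(x, u_bar) = eps*u_bar'' - u_bar, u_bar'' = 0
# residual <= 0 everywhere and boundary residuals are zero,
# so the maximum principle gives u_bar >= u_exact on [0, 1].
```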
Actually, when solving a singularly perturbed non-linear problem, Inequality (6) is not always satisfied. Therefore, this paper presents a "monotone iterative" concept to reinforce the monotonicity of a differential equation, allowing the maximum principle of differential equations to be applied to a wider range of singularly perturbed non-linear problems. For this purpose, a monotonicity correction parameter λ is first added into Equation (1), replacing the original equation with the following one:

ε ū″ − λ²(ū − u*) = f(x, u*, u*′).  (11)

In the above equation, the superscript * denotes an initially assumed function or a value obtained from the previous iteration. If the difference between the approximate solution ū(x) and the exact solution u(x) is written as

Δu(x) = ū(x) − u(x),  (12)

then the residual of Equation (11), rewritten from Equation (3), is

R(x, ū) = ε ū″ − λ²(ū − u*) − f(x, u*, u*′),  (13)

and putting Equation (12) into Equation (13) and rearranging generates the following equation:

ε Δu″ − λ² Δu = R(x, ū) − [f(x, u, u′) − f(x, u*, u*′)] + λ²(u − u*).  (14)

The equation connecting the function Δu is linear in Δu with coefficient λ² ≥ 0, so Equation (14) satisfies Inequality (6); this indicates that Equation (13), which is rewritten from Equation (3) using the monotone iterative technique, has secured its monotonicity through the addition of λ². It is worth noticing that, when Equation (13) is used to obtain the lower solution of the exact solution, besides R(x, ū) being greater than or equal to zero, the sum of the last two terms in Equation (14) must also be greater than or equal to zero in (a, b), so as to ensure that the whole right-hand side of Equation (14) is non-negative. The reason for this condition is that when the right-hand side of Equation (14) is greater than or equal to zero, the maximum principle implies that Δu(x) is smaller than or equal to zero, and Expression (12) indicates that only a non-positive Δu(x) makes the obtained approximate solution ū(x) smaller than or equal to the exact solution u(x).
Similarly, this reasoning also applies to obtaining the upper solution. Summing up all constraint conditions: besides the monotonicity required by Expression (6), the following must also be satisfied in the process of obtaining the lower solution:

R(x, ū) ≥ 0,  x ∈ (a, b),  (16)
R(a, ū) ≤ 0,  (17)
R(b, ū) ≤ 0,  (18)
λ²(u − u*) − [f(x, u, u′) − f(x, u*, u*′)] ≥ 0,  x ∈ (a, b),  (19)

so that an optimal lower solution ū(x) may be found and ū(x) ≤ u(x) may be ensured. Similarly, in order to obtain the upper solution, the following conditions, besides Expression (6), must also be satisfied:

R(x, ū) ≤ 0,  x ∈ (a, b),  (20)
R(a, ū) ≥ 0,  (21)
R(b, ū) ≥ 0,  (22)
λ²(u − u*) − [f(x, u, u′) − f(x, u*, u*′)] ≤ 0,  x ∈ (a, b),  (23)

so that an approximate solution ū(x) which is the optimal upper solution may also be obtained, with ū(x) ≥ u(x) ensured.

Residual Correction Method
Though obtaining upper and lower approximate solutions is useful for analyzing the accuracy or credibility of a solution, deriving the optimal solutions of mathematical programming problems under such constraint conditions is often considerably complex and difficult; such theory has therefore long remained under purely theoretical study and has not been applied practically to complex problems. To the author's knowledge, only Lin and Chen [8] and Chang and Lee [9] have recently attempted to solve mathematical programming problems with such inequality constraints, using genetic algorithms. In light of this situation, based on the concept of the residual correction method for initial value problems proposed previously by the author, Wang [10], this paper simplifies the inequality-constrained mathematical programming problems of Expressions (16)-(23) into equation-based problems that resemble the traditional way of solving differential equations. For this purpose, an additional residual correction term is used in combination with the iteration technique to discretize Formula (16) or (20), transforming these inequalities into the following equation:

ε ū_i″ − λ²(ū_i − u_i*) − f(x_i, u_i*, u_i*′) = R_i^m,  (24)

where the superscript m stands for the number of residual correction iterations, and the subscript i is the serial number of the discretized calculation grid points. R_i^m serves as the residual correction value at the calculation grid points, correcting the residual values there so as to ensure that the residual values within the subintervals (x_{i−1}, x_{i+1}) adjacent to grid point i are all positive or all negative. The complete steps for residual correction are as follows: 1. Let m = 0, and assume that the residual correction value at each grid point is R_i^0 = 0; 2. Use Equation (24) to get the new ū_i^{m+1} and its derivatives up to second order; 3. Take the residual values in the subintervals (x_{i−1}, x_{i+1}) adjacent to grid point i as the basis to estimate the correction value R_i^{m+1} required for the next residual correction.
The residual correction relations are given as follows: i. For obtaining the lower approximate solution:

R_i^{m+1} = R_i^m − min{ R(x, ū^{m+1}) : x ∈ (x_{i−1}, x_{i+1}) };  (25)

ii. For obtaining the upper approximate solution:

R_i^{m+1} = R_i^m − max{ R(x, ū^{m+1}) : x ∈ (x_{i−1}, x_{i+1}) },  (26)

where R(x, ū^{m+1}) is the residual value function generated by putting the function resulting from the previous step, together with its derivative values, into Expression (13).
4. Advance to the next m value and repeat Steps 2 and 3 until the residual values in all subintervals satisfy the sign condition of Formula (16) (for a lower solution) or Formula (20) (for an upper solution).
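Steps 1-4 can be sketched numerically on an assumed linear model problem ε u″ = u on (0, 1), u(0) = 0, u(1) = 1 (not one of the paper's four examples; since f(u) = u is linear with ∂f/∂u = 1 ≥ 0, no monotone outer iteration or λ correction is needed). Finite differences stand in for the paper's cubic-spline discretization, and a fitted cubic spline is used to probe the residual between grid points:

```python
# Residual-correction loop (steps 1-4) for a lower approximate solution of
# eps*u'' = u, u(0)=0, u(1)=1: lift R_i until the residual eps*s'' - s of
# the spline s is non-negative everywhere, so that s <= u_exact.
import numpy as np
from scipy.interpolate import CubicSpline

eps, N = 0.25, 32
x = np.linspace(0.0, 1.0, N + 1)
h = x[1] - x[0]
exact = np.sinh(x / np.sqrt(eps)) / np.sinh(1.0 / np.sqrt(eps))

def solve_with_correction(R):
    """Solve eps*d2u_i - u_i = R_i at interior nodes (Dirichlet BCs)."""
    n = N - 1
    A = np.zeros((n, n))
    b = np.array(R[1:N], dtype=float)
    for k in range(n):
        A[k, k] = -2.0 * eps / h**2 - 1.0
        if k > 0:
            A[k, k - 1] = eps / h**2
        if k < n - 1:
            A[k, k + 1] = eps / h**2
    b[-1] -= eps / h**2 * 1.0          # boundary value u(1) = 1
    u = np.zeros(N + 1)
    u[N] = 1.0
    u[1:N] = np.linalg.solve(A, b)
    return u

R = np.zeros(N + 1)                    # step 1: R_i^0 = 0
for m in range(30):                    # steps 2-4
    u = solve_with_correction(R)
    s = CubicSpline(x, u)              # continuous probe of the residual
    xf = np.linspace(0.0, 1.0, 20 * N + 1)
    resid = eps * s(xf, 2) - s(xf)
    if resid.min() >= -1e-8:           # lower solution: residual >= 0
        break
    for i in range(1, N):              # step 3: lift R_i by the worst dip
        mask = (xf >= x[i - 1]) & (xf <= x[i + 1])
        R[i] -= min(0.0, resid[mask].min())

lower = u                              # u_bar(x) <= u(x) once resid >= 0
```

Because the corrections only ever raise R_i, the iteration drives the subinterval residuals monotonically toward the required sign, and the resulting nodal values bound the exact solution from below.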

The Cubic Spline Method
The residual distribution within the adjacent subintervals must be obtained before the residual correction Equations (25) or (26) can be solved. However, the traditional finite difference method yields function values and derivative values only at the grid points, and these functions and derivatives are discontinuous at non-grid points. For this reason, this paper adopts the cubic spline as the approximate function to discretize and transform Equation (13) into Equation (24), thus ensuring that the function itself and its derivatives up to second order are continuous not only at the grid points but also within every subinterval. Based on the theory of the cubic spline, the general linear discretized formula for a one-dimensional second-order differential equation is of the form

F ū″ + G ū′ + S ū = T,  (27)

where F, G and S are constant coefficients. To solve such an expression, the cubic spline functional relations are used to convert the formula into an algebraic equation that involves only the function values u_i, or only the first derivatives m_i, or only the second derivatives M_i. For example, Equation (27) can be transformed into equation sets containing only the second derivatives (Wang and Kahawita [11]), subject to the boundary conditions. These formulae are recurrence relations and can be expressed in matrix form, where B0, C0, D0, AN, BN and DN are acquired from the boundary conditions. Since Formula (34) consists of N+1 equations and takes the form of a tridiagonal matrix, the Thomas algorithm can be used to calculate the second-order derivative values of the differential equation swiftly. This paper uses Formula (28) to obtain the second derivatives M_i^{n+1} of the second-order differential equation, and then traces back to extract the first derivatives m_i^{n+1} and the function values u_i^{n+1} using the basic relations of the cubic spline function.
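The Thomas algorithm mentioned above can be sketched as follows; this is a generic O(N) tridiagonal solver, with illustrative coefficient names a, b, c, d rather than the paper's B0, C0, D0, AN, BN, DN:

```python
# Thomas algorithm for a tridiagonal system
# a_i*x_{i-1} + b_i*x_i + c_i*x_{i+1} = d_i  (a[0] and c[-1] unused).
import numpy as np

def thomas(a, b, c, d):
    """a: sub-diagonal, b: main diagonal, c: super-diagonal, d: RHS."""
    n = len(d)
    cp, dp = np.empty(n), np.empty(n)
    cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
    for i in range(1, n):               # forward elimination
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m if i < n - 1 else 0.0
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    x = np.empty(n)
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):      # back substitution
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x
```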
In addition, the value of the function at any point within a subinterval can then be expressed through the cubic spline relations. Generally, when the cubic spline approach is used, each time a different type of boundary condition is encountered, the cubic spline relations must be employed individually to derive the formulae for B0, C0, D0, AN, BN and DN, making the method rather inconvenient. On this ground, this paper relies on mixed boundary conditions to deduce general formulae for B0, C0, D0, AN, BN and DN. Let us begin by considering the mixed boundary conditions

α_0 u(a) + β_0 u′(a) = γ_0,  α_N u(b) + β_N u′(b) = γ_N,  (36)

where α_0, β_0, γ_0 and α_N, β_N, γ_N are constants.
Based on the theory of the cubic spline, Formula (36) can be discretized at the boundary x = a; similarly, at the boundary of the other end, the corresponding expression can be generated.
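As a minimal sketch of these spline relations (simplified, as an assumption, to natural boundary conditions M_0 = M_N = 0 rather than the paper's mixed conditions (36)), the nodal second derivatives M_i follow from the interior recurrence (h/6)M_{i−1} + (2h/3)M_i + (h/6)M_{i+1} = (u_{i−1} − 2u_i + u_{i+1})/h, after which the spline can be evaluated at any point between the nodes:

```python
# Nodal second derivatives M_i of a natural cubic spline, and pointwise
# evaluation of the spline anywhere in [0, 1].
import numpy as np

N = 20
x = np.linspace(0.0, 1.0, N + 1)
h = x[1] - x[0]
u = np.sin(np.pi * x)                 # sample data with f''(0) = f''(1) = 0

# Tridiagonal (N-1)x(N-1) system for the interior M_1..M_{N-1}.
A = (2.0 * h / 3.0) * np.eye(N - 1) \
    + (h / 6.0) * (np.eye(N - 1, k=1) + np.eye(N - 1, k=-1))
rhs = (u[:-2] - 2.0 * u[1:-1] + u[2:]) / h
M = np.zeros(N + 1)
M[1:N] = np.linalg.solve(A, rhs)      # natural BCs: M[0] = M[N] = 0

def spline_eval(xq):
    """Evaluate the cubic spline at a query point xq in [0, 1]."""
    i = min(int(xq / h), N - 1)
    a, b = x[i], x[i + 1]
    return (M[i] * (b - xq) ** 3 + M[i + 1] * (xq - a) ** 3) / (6.0 * h) \
        + (u[i] / h - M[i] * h / 6.0) * (b - xq) \
        + (u[i + 1] / h - M[i + 1] * h / 6.0) * (xq - a)
```

By construction the spline interpolates the data at the nodes, and the recovered M_i approximate the true second derivative, here −π² sin(πx).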

Result and Discussion
In order to verify the correctness of the method proposed in this paper, the following four examples of singularly perturbed non-linear problems are given for validation purposes. Example 1 (Kadalbajoo and Patidar [3]). In order to derive the upper and lower approximate solutions, we adopt the technique proposed in this paper to rewrite Equation (46) into a monotone iterative equation. As this equation satisfies Formula (6) even when λ = 0, an optimal lower approximate solution is found when the inequalities (49)-(51) are satisfied; vice versa, when (52)-(54) are satisfied, an optimal upper approximate solution is obtained. As described previously for the cubic spline, we discretize Expressions (49)-(54) and use the residual correction method to obtain solutions and residual values. The distribution of these solutions and of the residual values before and after correction is shown in Table 1 and Fig. 1. As Fig. 1 indicates, for a differential equation without residual correction, the residual value satisfies the zero distribution only at grid points (i.e. R_i = 0), and it is not ensured that the residual values in the subintervals between grid points are all greater than, or all smaller than, zero. Thus, as shown in Fig. 1, because the residual value is smaller than or equal to zero near the point x = 0.5 but greater than or equal to zero on both sides of that point, the relationship between the approximate solution obtained without residual correction and the exact solution cannot be determined there. Additionally, regardless of the original residual distribution, it can be seen that after residual correction as described in this paper, the residual is corrected to satisfy the distribution required by Formula (49), in which the residual is greater than or equal to zero, or by Formula (52), in which the residual is smaller than or equal to zero.
Therefore, the relation between the obtained approximate solutions and the exact solutions is as shown in Table 1, where the obtained upper approximate solution is always greater than the obtained lower approximate solution, and both tend toward the exact solution from either side as the number of grid points increases. This suggests that the residual correction method presented in this paper can correctly obtain the upper and lower approximate solutions. Secondly, the upper and lower approximate solutions obtained with this method make it easy to analyze the maximum error of the approximate solution. In other words, if the final approximate solution ū(x) is taken as the mean value of the upper and lower solutions, one can still be confident that the error of ū(x) at any point must be smaller than the maximum possible error, even though the exact solution is not known. As Table 1 indicates, the maximum possible error obtained in this paper is roughly the same as the actual maximum error obtained by Kadalbajoo and Patidar [3]. It is worth noting, however, that the actual maximum error reported by Kadalbajoo and Patidar [3] is the difference between the numerical solution and the exact solution, while the maximum possible error E_max.p(x) mentioned in this paper is the maximum possible error of the mean approximate solution under conditions where the exact solution is unknown. If the exact solution is known, the actual maximum error E_max.r(x) resulting from this paper is as outlined in the last column of Table 1, where one can see that its magnitude is far smaller than the maximum possible error obtained in this paper. This is still the case for the fewest grid points (e.g. N = 16 or 32): owing to the residual correction, which leads to a roughly symmetrical distribution of the residual values on both sides of zero, as shown in Fig. 1, the obtained upper and lower approximate solutions are basically symmetric about the exact solution. The mean approximate solution derived from the average of the upper and lower solutions is therefore in extreme proximity to the exact solution, allowing the method proposed in this paper to obtain optimal approximate solutions with few grid points. Fig. 2 shows the numerical solutions of this example, while Table 2 and Fig. 3 present the error-range analysis of the approximate solutions of this example as the parameter ε² varies, with the number of grid points fixed at 256. As shown in Table 2 and Fig. 3, the maximum possible error (i.e. the difference between the upper and lower solutions) and the actual maximum error both grow as ε² decreases. Because there is an inflexion point at x = 0.5, as demonstrated in Fig. 2, the error increase at this point is particularly apparent, as shown in Fig. 3. In any case, the actual maximum error remains far smaller than the maximum possible error.
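The error-bound argument above can be stated in a few lines: whenever lower(x) ≤ u(x) ≤ upper(x), the mean of the two bounds misses the unknown exact solution by at most E_max.p(x) = (upper(x) − lower(x))/2. A toy numeric check, with an arbitrary stand-in for the exact solution and bracketing pair rather than the paper's data:

```python
# Mean approximate solution and its maximum possible error bound.
import numpy as np

x = np.linspace(0.0, 1.0, 11)
exact = x**2                                # stand-in "unknown" exact solution
lower = exact - 0.01 * np.sin(np.pi * x)    # any valid bracketing pair
upper = exact + 0.03 * np.sin(np.pi * x)

mean = 0.5 * (lower + upper)
e_max_possible = 0.5 * (upper - lower)      # computable without the exact solution
actual_error = np.abs(mean - exact)         # always <= e_max_possible
```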
Example 2. Subject to its boundary conditions, Equation (57) does not meet the requirements of Formula (6), so whether this differential equation is monotone is uncertain. In order to calculate its upper and lower approximate solutions correctly, we apply the technique proposed in this paper to restructure Equation (57) into a monotone iterative equation. In this example, the sum of the last two terms of Equation (59) is always either greater than or equal to zero or smaller than or equal to zero, so a correction parameter λ is not necessary to secure the monotonicity.
Accordingly, when the corresponding inequality conditions are satisfied, an optimal lower approximate solution ū(x) can be found; similarly, an optimal upper approximate solution can also be secured.
Examples 3 and 4 (Kadalbajoo and Patidar [3]). Their residual equations after monotone iteration take the form of Equation (13). Similarly, to make the formula monotone, the value of λ can be chosen, on the basis of observations of the approximate solutions, to be large enough to ensure ∂f/∂u + λ² ≥ 0. In these examples, however, the first-order derivative of the solution is always smaller than or equal to zero, as shown in Fig. 2, so we can directly set the value of λ² to zero. Figs. 4-6 and Table 2 show how the error of the approximate solutions extracted in Examples 2-4 varies with the parameter ε².
In Figs. 4-6, the difference between the upper and lower solutions is always positive, which indicates that the monotone iterative method proposed in this paper can be applied to strengthen the monotonicity of a differential equation, thus ensuring that the obtained upper and lower approximate solutions are respectively greater and smaller than the exact solution over the whole interval of x. Secondly, although the maximum possible errors of the approximate solutions increase as ε² decreases, the method presented in this paper is still effective in bounding the maximum possible errors of the solutions when the exact solutions are unknown. Besides, if, based on the double mesh principle (Doolan et al. [12]), refined approximate solutions are taken as the exact solutions, the actual maximum error obtained in this paper is far smaller than the estimated maximum possible error, as displayed in Table 2. This indicates that the obtained upper and lower approximate solutions not only bound the error range of the approximate solutions, but also yield a mean approximate solution of considerably higher accuracy.
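The double mesh principle cited above can be sketched on an assumed linear model problem ε u″ = u, u(0) = 0, u(1) = 1 with a known exact solution, used here only to show that the difference between the solutions on N and 2N intervals tracks the true error:

```python
# Double-mesh error estimate: compare solutions on N and 2N intervals
# at the shared nodes and use the difference as a computable error proxy.
import numpy as np

def fd_solve(eps, N):
    """Second-order finite-difference solution of eps*u'' = u on (0,1)."""
    x = np.linspace(0.0, 1.0, N + 1)
    h = x[1] - x[0]
    A = np.diag([-2.0 * eps / h**2 - 1.0] * (N - 1)) \
        + np.diag([eps / h**2] * (N - 2), 1) \
        + np.diag([eps / h**2] * (N - 2), -1)
    b = np.zeros(N - 1)
    b[-1] = -eps / h**2            # moves u(1) = 1 to the right-hand side
    u = np.zeros(N + 1)
    u[-1] = 1.0
    u[1:-1] = np.linalg.solve(A, b)
    return x, u

eps, N = 0.25, 16
x1, u1 = fd_solve(eps, N)
x2, u2 = fd_solve(eps, 2 * N)
est_error = np.abs(u1 - u2[::2]).max()      # double-mesh estimate
exact = np.sinh(x1 / np.sqrt(eps)) / np.sinh(1.0 / np.sqrt(eps))
true_error = np.abs(u1 - exact).max()       # known only for this toy case
```

For a second-order scheme the double-mesh difference is about three quarters of the true coarse-mesh error, so it is a reliable stand-in when the exact solution is unavailable.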

Conclusions
As verified by the four examples in this article, the monotone iterative technique and residual correction method proposed in this paper have the following characteristics:
a. They can strengthen the monotonicity of a differential equation;
b. Few residual correction iterations are required, so the inequality-constrained mathematical programming problems are solved relatively rapidly;
c. They correctly and effectively obtain the upper and lower approximate solutions of singularly perturbed problems;
d. The maximum possible error range is easy to determine even when the exact solution is unknown;
e. The differences between the upper/lower approximate solutions and the exact solution are nearly symmetric, leading to improved accuracy of the mean approximate solution.