The Frontier of Estimators in Agricultural Economics: An Application to Rice Products in the Top 7 Countries in Asia

This paper econometrically compares estimators for parametric estimation. Two powerful estimators, the James-Stein estimator (JSE) and the maximum likelihood estimator (MLE), were employed to find the most efficient parametric forecast. Experimentally, yearly time-series data on rice exports from 2006 to 2016 were collected for seven Asian countries that are major exporters on the world market: Thailand, India, Pakistan, Vietnam, Myanmar, China, and Cambodia. Methodologically, the two estimators were compared through linear regressions that provide residual terms, and the estimated error terms were examined to find the best estimator for predicting trends in rice products. Empirically, the results imply that the JSE can substantially replace the MLE in multiple-factor estimation. Consequently, it is sensible to introduce the James-Stein estimator as an interesting tool for modern econometric research.


Introduction
In modern econometrics, two large problems that many quantitative researchers have been trying to solve computationally are parametric estimation and prediction. To estimate parameters precisely, is the maximum likelihood estimator really still the best estimator for predictive estimation? Moreover, for roughly a century, quantitative econometric papers have often considered only two dimensions, that is, one dependent variable related to one independent factor (a variable with unknown mean), with all other factors fixed as constants by assumption. The cost of this tradition is that fixing and assuming in the traditional way loses part of what the data could truthfully express. Can we add more dimensions of exogenous variables and still make a precise estimation without such assumptions? This question leads the authors to conduct a mathematical, experimental study comparing the James-Stein estimator (JSE) with the maximum likelihood estimator (MLE). Consequently, the empirical result of this paper aims to extend the frontier of econometric estimation and forecasting in modern econometrics.
In agricultural economics, and especially for rice export production, the trends of this staple food have long been predicted econometrically, both to find favourable occasions to bargain on futures markets and to manage rice supplies in the physical market. Many internal and external factors affect the fluctuations of these trends, and they should not be considered separately. The purpose of this paper is therefore to improve econometric estimation by comparing two estimators, the J-S estimator and the MLE. The working hypothesis is that J-S estimation is a new tool that can overcome the MLE's difficulty in capturing outliers. The paper uses sampled data on an agricultural good, rice products, from the top 7 countries in Asia; yearly data were collected for 2006 to 2016. In the literature, it is undeniable that mathematical and even econometric results can be strikingly contrary to generally held belief even when an obviously valid proof is given. The James-Stein estimator, sometimes called "Stein's idea", made this point dramatically in 1961. It is the beginning of the story of "shrinkage estimation", in which deliberate biases are introduced to improve overall performance, and of its modern implementations. Because the J-S estimator has not yet become widely adopted in econometrics, empirical applications of it are still rare at the present moment.
Examples from the literature include the original paper on estimation with quadratic loss by Stein [3]; the discussion paper "Stein's Paradox in Statistics" by Efron and Morris [2]; the paper by Brown [1] on the admissibility of invariant estimators of location parameters; the applied time-series paper on James-Stein estimators by Senda and Taniguchi [5]; and the quantum-physics paper using the James-Stein estimator by Stander [6].

Simulation Method for Estimator Comparison
Before estimating on real samples, the comparison between the MLE and JSE is investigated by simulated computation. In this paper, the bootstrap was the computational method, relying on the relationship between samples and (unknown) populations through a comparable linkage between the samples at hand and appropriately designed (observable) resamples [7]. Let q be the statistic of interest. The samples q_1, ..., q_n are assumed to be independent and identically distributed (i.i.d.), and P(q | θ) stands for the chosen parametric density [8]. For each θ, the bootstrap percentile is then given by (1)
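The percentile-bootstrap procedure described above can be sketched as a short Python routine. This is a minimal illustration of the general method, not the paper's code; the function names, sample size, and confidence level are illustrative assumptions.

```python
import numpy as np

def bootstrap_percentile(sample, statistic, alpha=0.05, n_boot=1000, seed=0):
    """Percentile bootstrap for a statistic q computed from an i.i.d. sample."""
    rng = np.random.default_rng(seed)
    n = len(sample)
    # Resample with replacement and recompute the statistic each time
    boots = np.array([
        statistic(rng.choice(sample, size=n, replace=True))
        for _ in range(n_boot)
    ])
    # Empirical (alpha/2, 1 - alpha/2) percentiles of the resampled statistic
    lo, hi = np.quantile(boots, [alpha / 2, 1 - alpha / 2])
    return lo, hi

# Usage: percentile interval for the sample mean of simulated data
data = np.random.default_rng(1).normal(loc=5.0, scale=2.0, size=100)
lo, hi = bootstrap_percentile(data, np.mean)
```

The interval (lo, hi) brackets the statistic's sampling variability without any distributional assumption beyond the i.i.d. resampling scheme.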

The James-Stein Estimator
The James-Stein estimator (JSE) estimates the mean of a multivariate normal distribution, and it dominates the classical maximum likelihood estimator (MLE). Its key properties and limitations are as follows: 1) the JSE is a biased estimator; essentially, it trades bias for risk; 2) the JSE is a minimax estimator, that is, an estimator whose largest risk is no greater than that of any other estimator; since the MLE of the mean of a multivariate normal distribution is minimax, any estimator that dominates the MLE must also be minimax; 3) the JSE is nevertheless not admissible, where an admissible estimator is one for which no other estimator exists that dominates it; James and Stein showed that the MLE of the mean of a multivariate normal distribution is not an admissible estimator when the dimension exceeds two [4]. In this paper, the J-S estimator is employed for the linear regression problem. The JSE extends readily to a non-identity covariance matrix, and this leads to two crucial concepts, namely the "effective dimension" and "shifting the origin". Let X ∈ R^p denote a normally distributed random vector with mean μ and positive-definite covariance matrix Σ, X ~ N(μ, Σ). (2) When the effective dimension, a real-valued quantity, is too small, no spherically symmetric estimator dominates the MLE. Being aware of this restriction on Σ, and proving the effective-dimension condition, is crucial for observing the problem from a different angle. For a diagonal covariance matrix Σ = diag(λ_1, ..., λ_p), the squared error can be written as Σ_j (θ̂_j − μ_j)². Because the JSE of μ does not in general take a small risk for every individual element, if one of the λ_j is relatively large it is no longer possible to compensate for the possibility of introducing a larger MSE into that particular element [4].
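The dominance claim above can be checked in a few lines of Python. This minimal sketch applies the basic James-Stein formula, shrinking a single observation toward the origin, and verifies by Monte Carlo that its total squared error falls below that of the MLE; the true mean, dimension, and trial count are illustrative assumptions, not values from the paper.

```python
import numpy as np

def james_stein(x, sigma2=1.0):
    """Basic James-Stein shrinkage of one observation x of a p-variate
    normal mean (p >= 3), shrinking toward the origin."""
    p = len(x)
    shrink = 1.0 - (p - 2) * sigma2 / np.dot(x, x)
    return shrink * x  # the MLE is just x itself

# Monte Carlo risk check: the JSE should have lower average squared error
rng = np.random.default_rng(0)
p, trials = 6, 2000
theta = np.full(p, 0.5)               # true mean (assumed for illustration)
mse_mle = mse_js = 0.0
for _ in range(trials):
    x = rng.normal(theta, 1.0)        # one observation, identity covariance
    mse_mle += np.sum((x - theta) ** 2)
    mse_js  += np.sum((james_stein(x) - theta) ** 2)
mse_mle /= trials
mse_js  /= trials
```

With an identity covariance the MLE's risk equals p, while the JSE's risk is strictly smaller for p ≥ 3; a positive-part variant, which clips the shrinkage factor at zero, improves on this further.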
The next technique is shifting the origin. It is well known that the risk of the JSE decreases as the true mean approaches the shrinkage origin, so shrinking toward a well-chosen origin rather than toward zero can reduce the risk further. In this construction Σ is the covariance matrix, p is the dimension of x, n the dimension of z, and x̂_ML denotes the MLE, from which the effective dimension is computed; if p < n and σ² is unknown, σ² is estimated from the residuals.
For the MLE and JSE, the following empirical properties were obtained. In the case of 20 samples with 2 factor dimensions, the results show that the JSE and MLE produce similar error values. The difference becomes more obvious as the dimensions are increased to multiple factors. With 6 factor dimensions, the JSE is the more efficient estimator: the errors estimated by the JSE and the MLE are 0.4268 and 1.0775, respectively. The estimated outputs are clear in Figure 1: the mean squared error loss estimated by the JSE is nearly zero when there are more than six dimensions, whereas the errors produced by the MLE remain constant as the dimensions increase. This shows that the MLE cannot efficiently maintain robustness in situations with multiple variables. To verify that the JSE is indeed better than the MLE, a second simulation analysis was employed, generating 50 bootstrapped observations; the results are presented in Table 2. Evidently, from Table 2, the errors estimated by J-S estimation are lower than those of the MLE as the factor dimensions increase, even though the increment in simulated observations is not large. This strongly confirms that the J-S estimator is efficient enough to substitute for the MLE, especially in multi-dimensional analyses. Furthermore, the results are crystal clear when we consider Figure 2.
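The dimension effect reported above can be reproduced qualitatively with a small simulation; the numbers will not match the paper's tables, since the design here is an assumption (a true mean at the origin, identity covariance). In this setting the MLE's average loss grows linearly with the dimension p, while the plain James-Stein loss stays near 2.

```python
import numpy as np

rng = np.random.default_rng(42)

def risks_by_dimension(dims, trials=3000):
    """Average squared-error loss of the MLE vs James-Stein by dimension."""
    out = {}
    for p in dims:
        theta = np.zeros(p)   # true mean at the origin: most favourable JS case
        se_mle = se_js = 0.0
        for _ in range(trials):
            x = rng.normal(theta, 1.0)
            se_mle += np.sum((x - theta) ** 2)
            if p >= 3:
                js = (1 - (p - 2) / np.dot(x, x)) * x
            else:
                js = x        # JS only improves on the MLE for p >= 3
            se_js += np.sum((js - theta) ** 2)
        out[p] = (se_mle / trials, se_js / trials)
    return out

risks = risks_by_dimension([2, 6, 16])
```

The gap between the two losses widens as p grows, which mirrors the pattern the paper reports in Figures 1 and 2.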

Case 2: Changing only the factor dimensions.
This section employs 100 bootstrapped observations with factor dimensions varying from 1 to 16 elements. Empirically, the numerical results show that the error terms estimated by the JSE are clearly lower than those of the MLE as the dimensions are raised continuously. This implies that the JSE is better than the MLE in the case of multiple independent variables, which is more realistic and closer to the true values in parametric estimation. These results are represented graphically in Figure 3.
For the application, the sampled data on rice-export indexes in the top 7 countries in Asia were processed in a linear regression estimated with both the MLE and the JSE to provide error terms. In this case, the estimated residuals were used to indicate the precision of estimation. The results displayed in Figure 5 show that the J-S estimator fits the estimation better than the MLE in 4 of the 7 parametric estimations, namely Vietnam, Thailand, Pakistan, and India; on the other hand, the MLE fits better for the rice exports of Myanmar, China, and Cambodia.
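A hedged sketch of the kind of regression comparison described here, using synthetic data in place of the rice-export series: with an orthonormalized design the OLS/MLE coefficient vector is itself a multivariate-normal mean, so positive-part James-Stein shrinkage can be applied to it and the coefficient-estimation errors compared. All sizes and values are illustrative assumptions, and the simple σ̂²-based shrinkage constant is a simplification of the exact James-Stein rule.

```python
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical synthetic stand-in for a multi-factor export regression:
# n yearly observations, p exogenous factors, Gaussian noise.
n, p, trials = 50, 8, 500
beta_true = rng.normal(0.0, 0.3, size=p)
X, _ = np.linalg.qr(rng.normal(size=(n, p)))    # orthonormal columns: X'X = I

tot_mle = tot_js = 0.0
for _ in range(trials):
    y = X @ beta_true + rng.normal(0.0, 1.0, size=n)
    beta_mle = X.T @ y                          # OLS = MLE under Gaussian errors
    resid = y - X @ beta_mle
    sigma2_hat = resid @ resid / (n - p)        # unbiased noise-variance estimate
    shrink = max(0.0, 1.0 - (p - 2) * sigma2_hat / (beta_mle @ beta_mle))
    beta_js = shrink * beta_mle                 # positive-part JS, origin at zero
    tot_mle += np.sum((beta_mle - beta_true) ** 2)
    tot_js += np.sum((beta_js - beta_true) ** 2)

avg_mle, avg_js = tot_mle / trials, tot_js / trials
```

Comparing coefficient-estimation error (rather than in-sample residuals, which OLS minimizes by construction) is what makes the shrinkage advantage visible in this sketch.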

Conclusion
This paper has completed a comparison of two statistical estimators, the James-Stein estimator (JSE) and the maximum likelihood estimator (MLE), applied to an agricultural indicator. Experimentally, the main subject of the parametric estimation is a single dependent variable: the data on rice exports in 7 Asian countries. The problem that motivated this paper, however, is the precision of that estimation, especially when the dimensions, meaning multiple-factor analyses, grow beyond two elements. Consequently, the comparison between the two estimators provides a better way to improve the precision of econometric forecasting.
Empirically, the results of the estimator comparison made clear that the JSE is a better estimator than the MLE in multiple-factor estimation. Additionally, the bootstrap simulation approach was employed to extend the investigation of robustness in time-series data. The results showed that the JSE can capture outliers in simulated data with high numbers of factor dimensions more efficiently than the MLE. In the application to agricultural data (rice exports in top Asian countries), it is evident that the efficiency of parametric forecasting between the JSE and MLE based on univariate linear regression can be strongly