A new dynamic geometric approach for empirical analysis of financial ratios and bankruptcy

This paper presents a complementary technique for the empirical 
analysis of bankruptcy risk using financial ratios. Within this 
framework, we propose a new measure of risk, the Dynamic Risk 
Space (DRS) measure. We provide evidence of the extent to which 
changes in the values of this index are associated with changes 
in each axis's values and how this may alter our economic 
interpretation of changes in patterns and directions. In addition, 
the model is generally useful for predicting financial distress 
and bankruptcy. The method offers a general methodological 
guideline for financial data, resolving several methodological 
problems concerning financial ratios, such as non-proportionality, 
asymmetry and lack of scaling. To test the procedure, Multiple 
Discriminant Analysis (MDA), Logistic Analysis (LA) and Genetic 
Programming (GP) are employed to compare the results obtained 
with common and modified ratios for bankruptcy prediction. All 
three classification methods performed better under the DRS 
approach.


1. Introduction. In recent decades, business failure prediction has been one of the major domains of financial research in evaluating the financial health of companies (Grice and Dugan, 2001). Bankruptcy involves significant costs, and corporate failure prediction has been stimulated by both the private and government sectors all over the world (Charitou et al., 2004). Moreover, company failure may inflict a negative shock on each of the shareholders, making the total cost of failure large in economic and social terms (Shumway, 2001). Bankruptcy prediction models have proven necessary to obtain more accurate statements of firms' financial situations than those made by decision-makers or independent auditors, whose performance in classifying companies is, in particular, not sufficiently accurate (Keasey and Watson, 1991). Beaver (1967) first emphasized that corporate failure could be reliably predicted through the combined use of sophisticated quantitative methods and selected financial ratios. Altman (1968) then extended this narrow interpretation by investigating a set of financial ratios as well as economic ratios as possible determinants of corporate failure using multiple discriminant analysis. Since Altman (1968), the literature on predicting bankruptcy has witnessed numerous extensions and modifications, and various techniques have been developed to measure risk and predict bankruptcy (Ravi Kumar and Ravi, 2007). However, none of them has a perfectly predictive functional form, and all procedures utilize common ratios without any theoretical basis.¹
In recent decades, while attempts have been made to solve the problems of using accounting-based financial ratios in statistical analysis, none has been entirely successful in developing quantitative and objective systems for bankruptcy prediction (Andres et al., 2005). Some attempts included trimming sample ratios, eliminating negative observations, and using various transformations, such as logarithms and square roots, to achieve more normal distributions (Canbas et al., 2004). However, most of these attempts have utilized common ratios, which may have increased the cost of errors in the analysis and the problem of misspecification. In general, no equally convenient or superior alternative transformed ratio has been developed and applied.²
According to the literature, the variables used in previous studies have generally exhibited non-normal distributions (Barnes, 1982; Ooghe and Verbaere, 1985; McLeay and Omar, 2000; Jagdeep and Weiss, 1996). Some researchers have made corrections for univariate non-normality and tried to approximate univariate normality by transforming the variables prior to the estimation of their model. Deakin (1976) used a logarithmic transformation to address the lack of normality of the distributions, after which Foster (1986) used square root and lognormal transformations of financial ratios. However, logarithmic and square root transformations may also be arbitrary (So, 1987 and Zavgren, 1983). Other researchers approximate univariate normality by 'trimming', which involves segregating outliers with reference to the normal distribution (Ezzamel et al., 1990). The rank transformation used by Kane et al. (1996) reportedly improved fit and produced less biased results relative to linear models with untransformed data sets. Logarithmic and rank transformations and square roots are even more difficult to interpret because they can alter the natural monotonic relationships among the data (Canbas et al., 2004). Recently, Ooghe et al. (2005) used a Logit transformation to achieve better accuracy. Moreover, when data are arbitrarily truncated, important information concerning the research questions may be discarded, greatly constraining the research task (Sun et al., 2007; Altman and Hotchkiss, 2006).
According to the latest literature, there is no general guideline indicating which transformation is suitable for achieving an approximately normal functional form. In the domain of bankruptcy and financial analysis, the conventional application of ratio analysis as a rationale for measuring the operational performance of enterprises has long been the dominant method (Griffin and Lemmon, 2002). Additionally, the conventional process of utilizing common ratios, which does not consider the real factors producing the values in the denominator and numerator of the ratio, has shown significant weaknesses, both practically and theoretically, throughout the literature. Providing an alternative model of evaluation to replace ratio analysis in this domain therefore appears essential.
Our objective in this paper is to propose a new approach, called the Dynamic Risk Space (DRS) measure, that involves data representation, and to illustrate the use of this methodology to measure financial risk in ratio analysis and to predict bankruptcies. To illustrate this new methodology, X and Y values are used as the numerator and denominator, respectively, of an X/Y ratio. The X and Y values are represented as Cartesian coordinates in our constructed modification box, in which we derive the isoclines of the associated components' ratios and their application to bankruptcy risk.
The remainder of this paper proceeds as follows. Section 2 discusses a summary of the statistical methods of prediction and their general framework. In Section 3, we briefly derive our new method, the Dynamic Risk Space (DRS), and its components, X_i and Y_i, analytically. Subsequently, the changes in each risk component associated with changes in DRS coordinates are viewed geometrically. Section 4 illustrates an empirical application using Multiple Discriminant Analysis (MDA), Logistic Analysis (LA) and Genetic Programming (GP) as classification methods, and we summarize and conclude in Section 5.
2. Literature on statistical methods of prediction.

2.1.
Multiple Discriminant Analysis (MDA). Altman (1968) employed multiple discriminant analysis (MDA), which has since been applied to bankruptcy prediction in many different countries. For model building, multiple discriminant analysis (MDA), which takes the form Z = β_1 V_a + β_2 V_b + ... + β_n V_n, is based on a stepwise approach usually adopted to select the best discriminating variables to separate distressed and non-distressed companies. This model can be compared with logit analysis and the hazard model to examine which provides a higher degree of accuracy in predicting financially distressed companies in some cases (Altman et al., 1997; Joo and Jinn, 2000). Otherwise, the exposition of the simple linear regression method can be extended to Z_i = β x_i + u_i. Cases with Z_i > 0 are classified as non-distressed companies, and the remaining Z_i are classified as distressed. In the above model, x_i represents the company's financial ratios, u_i is the error term, and Z_i ranges from −∞ to +∞.
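As a sketch of the decision rule just described, the discriminant score Z_i = βx_i can be computed and thresholded at zero. The coefficients and ratios below are invented for illustration; they are not the fitted model of this paper.

```python
# Sketch of the MDA decision rule: Z_i = beta . x_i, with Z_i > 0
# classified as non-distressed and Z_i <= 0 as distressed.
# Coefficients and ratios are illustrative only.

def mda_classify(betas, ratios):
    """Return 'non-distressed' if the discriminant score Z is positive."""
    z = sum(b * x for b, x in zip(betas, ratios))
    return "non-distressed" if z > 0 else "distressed"

betas = [1.2, -0.8, 0.5]       # illustrative discriminant coefficients
healthy = [0.9, 0.1, 0.4]      # illustrative financial ratios (positive Z)
troubled = [0.1, 0.9, 0.05]    # illustrative financial ratios (negative Z)

print(mda_classify(betas, healthy))
print(mda_classify(betas, troubled))
```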
MDA depends largely on restrictive assumptions such as linearity, normality, independence among the input variables and a pre-existing functional form relating the independent variables and the dependent variable. Later studies used logistic (Ohlson, 1980) or probit (Zmijewski, 1984) models to relax these assumptions and improve the degree of prediction accuracy.

2.2.
Logistic Analysis (LA). MDA is a prevalent technique in bankruptcy prediction in terms of classification or prediction ability among traditional models (Aziz and Dar, 2006). Some studies have found the Logit model to be superior to MDA (Gu, 2002). However, the research by Aziz and Dar (2006) has shown that the two models are equally efficient. The probability and likelihood function for the non-distressed firm is defined as P_i = E(Y = 1 | x_i) = 1 / (1 + e^{−(β x_i + u_i)}), which is the logistic distribution function for a non-bankrupt company. For ease of exposition, this function can be written as P_i = 1 / (1 + e^{−Z_i}), in which Z_i = β x_i + u_i. If P_i represents the probability of a firm's being non-distressed, then 1 − P_i is the probability of its being distressed. The probability of a firm's being distressed is obtained by substituting into the cumulative probability function in standardized equation format. In decision-making, a specific company is classified as distressed if the probability calculated from the logit model is more than 0.5; otherwise, it is classified as non-distressed. To date, there have been many applications of the logistic method.
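The logistic scoring and 0.5 cut-off rule can be sketched as follows. The coefficient and input values are illustrative assumptions, not estimates from the paper.

```python
import math

def logit_prob(beta, x, u=0.0):
    """P_i = 1 / (1 + exp(-(beta . x_i + u_i))): probability of being
    non-distressed under the logistic model."""
    z = sum(b * xi for b, xi in zip(beta, x)) + u
    return 1.0 / (1.0 + math.exp(-z))

def classify(p, cutoff=0.5):
    """Distressed if the implied probability of distress (1 - P_i)
    exceeds the 0.5 cut-off, non-distressed otherwise."""
    return "distressed" if (1.0 - p) > cutoff else "non-distressed"

p = logit_prob([2.0], [1.0])   # z = 2.0, so p is about 0.88
print(classify(p))             # "non-distressed"
```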

2.3.
Genetic programming (GP). Recently, some sophisticated alternative methods have produced better-performing failure predictions. These methods include the fuzzy rule-based classification model, the multi-logit model, dynamic event history analysis, multidimensional scaling, rough set analysis, and expert systems such as neural networks, genetic algorithms and genetic programming (Landajo et al., 2008). Genetic programming (GP) is a search methodology belonging to the family of evolutionary computation. Classification rules for predicting the classes of bankrupt and non-bankrupt firms in a sample require the values of certain financial ratios, called predicting variables, to be given (Ravi et al., 2008). Given a number of variables describing each firm and their related domains, an exhaustive search enumerating all possible solutions is computationally impracticable. Hence, GP is a powerful search method inspired by natural selection (Koza, 1992).
The Darwinian theory of evolution inspired genetic programming models. In the most common implementations, a population of candidate solutions is maintained, and after each generation is accomplished, the population fits the given problem better. Genetic programming uses tree-like individuals that can represent mathematical expressions. Such a GP individual is shown in Figure 1. Three genetic operators are most often used in these algorithms: reproduction, crossover, and mutation. First, the reproduction operator simply chooses an individual in the current population and copies it without changes into the new population. In the second step, two parent individuals are selected and a sub-tree is picked on each one. Crossover then swaps the nodes and their relative sub-trees from one parent to the other. If a depth condition is violated, the too-large offspring is simply replaced by one of the parents. Other parameters specify the frequency with which internal or external points are selected as crossover points. Figures 2 and 3 show an example of the crossover operator. The mutation operator can be applied to either a function node or a terminal node, which is randomly selected in the tree.
If the chosen node is a terminal node, it is simply replaced by another terminal; if it is a function node and point mutation is to be performed, it is replaced by a new function with the same arity (Lee, 2004). When tree mutation is to be carried out, a new function node is chosen, and a new randomly generated sub-tree substitutes the original node together with its relative sub-tree. A depth ramp is used to set bounds on size when generating the replacement sub-tree. Naturally, one must check that this replacement does not violate the depth limit; if it does, mutation simply reproduces the original tree into the new generation. Further parameters specify the probability with which internal or external points are selected as mutation points. An example of the mutation operator is shown in Figure 4. The major steps in genetic programming are first to generate an initial population of rules representing potential solutions to the prediction problem and then to evaluate each rule on a training set by means of a fitness function. After selecting the rules, GP applies the genetic operators crossover, mutation, and reproduction to produce new rules and then reinserts these offspring to create the new population. These steps are repeated until an acceptable classification rule is found. Several fitness functions, such as number of hits, sensitivity, specificity, relative squared error (RSE) and mean squared error (MSE), can be applied to evaluate the performance of the generated classification rules. We used "number of hits" as the fitness function because of its simplicity and efficiency; it is based on the number of samples correctly classified. Formally, the fitness f_i of an individual program corresponds to the number of hits and is evaluated by f_i = h, where h is the number of fitness cases correctly evaluated (the number of hits). Thus, for this classification fitness function, the maximum fitness f_max is given by f_max = n, where n is the number of fitness cases.
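The "number of hits" fitness function reduces to counting correctly classified cases; a minimal sketch with invented predictions:

```python
def fitness_hits(predictions, targets):
    """f_i = h: the number of fitness cases correctly classified."""
    return sum(p == t for p, t in zip(predictions, targets))

preds  = [1, 0, 1, 1, 0]   # illustrative GP rule outputs
actual = [1, 0, 0, 1, 0]   # illustrative true classes
# f_max = n = 5 would be reached if every case were classified correctly.
print(fitness_hits(preds, actual))  # 4
```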
In Genetic Programming, the fitness measure f_i is replaced by a "raw fitness" measure rf_i complemented with a parsimony term. In classification cases, the maximum raw fitness is therefore rf_max = n, and the overall fitness f_pp_i is evaluated by f_pp_i = rf_i × [1 + (1/5000) × (S_max − S_i)/(S_max − S_min)], where 5000 is the number of reproductions (in this case, the best reproduction cycle found by the software; it can be changed by the user), S_i is the size of the program, and S_max and S_min represent, respectively, the maximum and minimum program sizes, given by S_max = G(h + t) and S_min = G, where G is the number of genes and h and t are the head and tail sizes. Thus, when rf_i = rf_max and S_i = S_min, the fitness reaches its maximum f_pp_max = 1.0002 × rf_max, and the process is optimized. Although these alternative methods are computationally more complex and sophisticated than classical statistical methods, it is not clear whether they produce better-performing corporate failure prediction models or whether the use of statistical techniques is valid under very restrictive assumptions (Ooghe and Speanjers, 2006). Moreover, the alternative methods have various weaknesses, such as dichotomous dependent variables, sampling methods and non-stationarity, along with the use of annual accounting information in the ratios.
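A hedged sketch of the parsimony-pressured fitness, reconstructed so that f_pp_max = 1.0002 × rf_max when S_i = S_min, as the text states (the parameter values below are illustrative):

```python
def parsimony_fitness(rf, size, genes, head, tail, c=5000.0):
    """Reconstructed parsimony-pressured fitness:
    f_pp = rf * (1 + (1/c) * (S_max - S) / (S_max - S_min)),
    with S_max = G * (h + t) and S_min = G, so that the maximum
    is (1 + 1/c) * rf_max = 1.0002 * rf_max at S = S_min."""
    s_min = genes
    s_max = genes * (head + tail)
    return rf * (1.0 + (1.0 / c) * (s_max - size) / (s_max - s_min))

# At rf = rf_max and S = S_min, fitness peaks at 1.0002 * rf_max.
print(parsimony_fitness(rf=200, size=3, genes=3, head=4, tail=5))
```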

3. Methodology.
3.1. Dynamic Risk Space (DRS). Following the static framework proposed by Bahiraie et al. (2009), which is a two-dimensional box, we introduce a new dynamic geometric device named the Dynamic Risk Space (DRS). This new approach allows for the visualization of the evolution of the transformation associated with ratio value changes, in which the pair of values of each risk ratio (X_i, Y_i) is represented as Cartesian coordinates. For expositional purposes, suppose that our chosen proxy for risk employs X_i as the numerator and Y_i as the denominator of the X_i/Y_i ratio. For any number of firms, ∀i = 1, 2, 3, ..., n, the proposed Dynamic Risk Space (DRS_i) is defined as a function of X_i and Y_i. Consider a square two-dimensional space that captures all changes in the numerator X_i and the denominator Y_i, for any firm i and any period t, where the changes in X (∆X) and Y (∆Y) can be positive, negative or zero.³ Let the risk flows for any hypothetical firm i consist of the set of all ∆X and ∆Y for n years, ∀t = 1, 2, 3, ..., n. Ratio values are usually available at uniform discrete time intervals: annually and quarterly. The dimensions of the DRS are centered with respect to max(∆X) and max(∆Y). The essential stipulation is that the length of any side is set at twice the largest absolute change in the numerator or denominator values (whichever is bigger) recorded during the period t under consideration.
Correspondingly, the side of the DRS for i ∈ t has length 2 × max(max |∆X_i|) = 2L if the largest absolute value is from the ∆X_i values, or 2 × max(max |∆Y_i|) = 2L if the largest value is from the ∆Y_i values, where L is the largest absolute change. The ∆Y_i values are depicted on the vertical axis (±∆Y) and the ∆X_i values on the horizontal axis (±∆X); they are labeled in Figure 5 as (±∆X_max) and (±∆Y_max). Ultimately, the actual values of ±∆X_max and ±∆Y_max depend on which of the two is largest. As shown in Figure 5, points L and J on the same dynamic risk line share equal risk values, and K has higher risk compared to points J and L. According to Figure 5, the ∆Y_i values fall and the ∆X_i values remain unchanged from K to J, while the ∆X_i values rise and the ∆Y_i values remain unchanged from K to L. Correspondingly, the further a point such as J or K lies from the line AOC, the greater the risk.
One of the primary innovations of the DRS index is its scaling and ranking ability, which stems directly from the DRS construction: the scale is twice the absolute maximum of the largest change over the period of study, equivalent to 2L in Figure 5. Note that the values for ∆X and ∆Y in the denominator and numerator will be equal only when either ∆X or ∆Y is also the largest change during the period of study.
Assume that: (1) changes are a monotonically increasing function; and (2) the risk value requirements for the ∆X_i values and the ∆Y_i values are equal. The measure of dynamic risk space depicted in Figure 5 that satisfies criteria (I)-(IV) for n years, ∀t = 1, 2, 3, ..., n, and m firms, ∀i = 1, 2, 3, ..., m, is given by DRS_i = (∆X_i − ∆Y_i) / (2(max{max |∆X_i|, max |∆Y_i|})).
Thus, the DRS index has a range of −1 < DRS i < 1.
For example, to illustrate the implications and applicability of this method, consider for company i the cash flow to total assets ratio (CF/TA), one of the ratios most frequently used in such studies. In our methodology, the transformation DRS_i = (∆CF_i − ∆TA_i) / (2(max{max |∆CF_i|, max |∆TA_i|})) will be imposed on this ratio. The values of this equation lie in the range −1 < DRS_i < 1, and the argument can be extended as follows. If ∆CF > ∆TA, meaning that the changes in cash flow are greater than the changes in total assets, DRS will be positive (0 < DRS_i < 1); in this case, companies with higher cash flow are safer. This is true in the short term, which is the focus of our study; in the long term, the company may still be affected by liquidation. In the other case, ∆CF < ∆TA, the changes in total assets are greater than the changes in cash flow, DRS takes negative values (−1 < DRS_i < 0), and the company is more likely to fail. While more assets do not necessarily mean more earnings, they may generate more cash flow; if the investment is not sufficiently productive, however, companies are more likely to go bankrupt.
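A minimal sketch of the DRS transformation for a series of annual changes; the figures below are invented for illustration:

```python
def drs(dX, dY):
    """DRS_i = (dX_i - dY_i) / (2 * max(max|dX|, max|dY|)).
    Each value is bounded in (-1, 1) by construction."""
    scale = 2.0 * max(max(abs(x) for x in dX), max(abs(y) for y in dY))
    return [(x - y) / scale for x, y in zip(dX, dY)]

# Illustrative annual changes in cash flow (numerator) and total assets
# (denominator) for one firm over three years.
d_cf = [0.4, -0.2, 0.1]
d_ta = [0.1, 0.3, -0.5]
print(drs(d_cf, d_ta))  # positive where dCF outpaces dTA, negative otherwise
```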

3.2.
Geometrical relation between (∆X, ∆Y) and DRS. To show the geometrical relation between (∆X_i, ∆Y_i) and DRS_i, consider the proposed measure of Dynamic Risk Space, DRS_i = (∆X_i − ∆Y_i) / (2(max{max |∆X_i|, max |∆Y_i|})), which can be rewritten as ∆Y_i = ∆X_i − 2(max{max |∆X_i|, max |∆Y_i|}) × DRS_i. Hence, for every DRS index value there exists a unique straight line with slope of unity and intercept α = −2(max{max |∆X_i|, max |∆Y_i|}) × DRS_i. This implies that the DRS value will be the same for every point (∆X_i, ∆Y_i) on the same line.
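A quick numeric check of this iso-risk property, using invented values for L and the DRS level:

```python
def drs_value(dx, dy, L):
    """Single-point DRS with L = max(max|dX|, max|dY|) taken as given."""
    return (dx - dy) / (2.0 * L)

L = 2.0        # illustrative maximum absolute change
target = 0.25  # illustrative DRS level

# Points generated on the line dY = dX - 2*L*target all share DRS = target.
for dx in (0.5, 1.5):
    dy = dx - 2.0 * L * target
    print(drs_value(dx, dy, L))  # 0.25
```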

3.3.
Symmetry about the diagonal of ∆X and ∆Y. Consider again DRS_i = (∆X_i − ∆Y_i) / (2(max{max |∆X_n|, max |∆Y_n|})), which is f(∆X_i, ∆Y_i); for the reflected point (−∆Y_i, −∆X_i), it will be f(−∆Y_i, −∆X_i) = (−∆Y_i − (−∆X_i)) / (2(max{max |∆X_n|, max |∆Y_n|})) = DRS_i. This shows that the DRS index is symmetrical about the diagonal ∆X_i = −∆Y_i for every point (∆X_i, ∆Y_i).
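The symmetry can likewise be verified numerically; the point coordinates and L below are invented:

```python
def drs_point(dx, dy, L):
    """Single-point DRS with a fixed L = max(max|dX|, max|dY|)."""
    return (dx - dy) / (2.0 * L)

# Reflecting a point across the diagonal dX = -dY maps (dx, dy) -> (-dy, -dx)
# and leaves the DRS value unchanged.
L = 1.0
print(drs_point(0.75, -0.25, L))  # 0.5
print(drs_point(0.25, -0.75, L))  # 0.5 (the reflected point)
```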

3.4.
Scaling of DRS. To obtain a scaled measure, scaling with respect to the largest value for a given time scale (this could be months, years or even decades) allows us to observe the progress of risk changes over time. The DRS rate of change can be verified by the partial derivatives, ∂DRS_i/∂∆X_i = 1 / (2(max{max |∆X_n|, max |∆Y_n|})) and, correspondingly, ∂DRS_i/∂∆Y_i = −1 / (2(max{max |∆X_n|, max |∆Y_n|})). Hence, the DRS index is a proportionally scaled measure. Proportionality is a theoretical assumption that may not, in fact, hold, and the degree of departure varies across industries and size classes. Thus, if the relationship between the elements of a ratio is constant over time, size and industry, the proportionality effect will be satisfied for ratios using the DRS method.
3.5. Construction of weights for the DRS. When using this proposed DRS index to measure changing levels, we want to be able to sum it at a disaggregated level. Thus, we have to choose appropriate weights with which to measure the changes. The solution for the index weighted by the significance of each sector is the weighted sum DRS* = Σ_i w_i DRS_i, where w_i is the weight of sector i.

ALIREZA BAHIRAIE, A.K.M. AZHAR AND NOOR AKMA IBRAHIM
It is interesting to note that this formulation enables us to cultivate a multilayered view of the changes to each value of the ratio. Thus, DRS * i (as the topmost layer) will encapsulate all the DRS i cells.
Ratio-based methods are popular in operations research, with many applications in assessing operational performance and financial analysis in the public and private sectors. The natural distribution of the DRS transformation ensures that the data are not skewed and should be more robust to the assumptions of Gaussian statistical methods. In addition, our new measure (DRS) is naturally bounded and unaffected by the distance between observations, so the outlier effect (if present) is reduced. Within our new transformation, negative values are transformed to specific variations, removing the need to delete negative data as in previous studies. Finally, with the use of pooled data over time, the DRS method reduces the effects of history and maturation across a population.

4. Illustrative empirical application. The database used in our illustrative empirical study consists of 400 UK companies listed on the London Stock Exchange (LSE). One hundred companies went bankrupt, where a firm is considered bankrupt when the total value of its retained earnings is equal to or greater than 50% of its listed capital; three hundred companies were non-bankrupt over the period 1995-2005. Bankrupt companies are coded as 1 and non-failed companies as 0. Thus, a firm has a higher failure probability and is classified in the failing group if its score is higher than the cut-off point for each approach. In this study, based on the financial ratios successfully identified by past studies and on availability, 40 indices were built using balance-sheet data. The ratios and the significance of the mean differences for each group are tested and presented in Table 1. These indices reflect different aspects of firm structure and performance: liquidity, turnover, operating structure and efficiency, capitalization and, finally, profitability. The data are partitioned into two sets: the training set and the test set.
The training set contains the known firms used during the evolution process to find an explicit classification rule able to separate an instance of the class of bankrupt firms from one of the non-bankrupt class. In contrast, the test set is used to evaluate the generalization ability of the rule identified. Whereas previous studies used the periods one and two years prior to bankruptcy separately, this new method uses changes between two annual reports, t and t − 1, for each company; each ratio can thus capture the firm's performance over two years. For primary variable selection, and to test each variable's effectiveness in terms of discriminating power, CartProEx V.6.0 software was employed, using the Mahalanobis D² measure, to select the variables that produced the greatest degree of separation between the two groups. This creates a more stable and well-balanced model. Subsequently, we tested the selected variables using Multiple Discriminant Analysis (MDA), Logistic Analysis (LA) and Genetic Programming (GP) to illustrate that this new transformation produces statistically more accurate predictions and can be used as an alternative to common ratios. Final regressions with different significant variables were obtained as significant indicators for each procedure.
To test the accuracy of the common and newly transformed ratios, we first apply Multiple Discriminant Analysis (MDA) using the forward stepwise technique.⁴ The results obtained using SPSS V.15 show the MDA model with variable coefficients, cut-off points and their classification accuracy; the resulting equations are reported in Table 2. The results show 86% and 82% accuracy levels for the transformed and original ratio groups, respectively. One value V_i for each company serves against the cut-off point, and decisions are made according to the following rules: V_i ≤ cut-off ⇒ non-bankrupt, and V_i > cut-off ⇒ bankrupt. Second, testing was performed with logistic regression, which has fewer constraints and limitations than the MDA procedure, with the selected variable coefficients, cut-off points and their classification accuracy as reported by the S-Plus software; these equations are also presented in Table 2. The logistic probability function is P_i = 1 / (1 + e^{−Z_i}). Since there are two classes in bankruptcy classification problems, the cut-off probability for the bankrupt and non-bankrupt cases is 0.5: according to the classification rule of logistic regression, a specific company is classified as distressed if the probability calculated from the logit model is more than 0.5; otherwise, it is classified as non-distressed. The transformed and original ratio accuracy levels show that 92.5% and 96% of cases were correctly classified, respectively (*, **, *** denote significance at the 10%, 5% and 1% levels, respectively). Following recent research by Etemadi et al. (2008), we then tested the selected variables with the Genetic Programming (GP) method as a more sophisticated classification method. This is to obtain a fitness function tree and to illustrate that the DRS transformation allows more accurate predictions and can be used as an alternative to common ratios, even with GP.
To implement the GP process and develop the bankruptcy model, the GeneXproTools software, version 4.1, was employed.
The best crossover and mutation rates were found to be 0.44 and 0.05, respectively. In the final regressions, different variables were identified as significant indicators from the selected list for each procedure.
… case of 200 firms applies; hence, there is no need for a normality test. MDA is a prevalent technique in bankruptcy prediction (Aziz and Dar, 2006) in terms of classification ability among traditional models. Some techniques, such as Probit, produced the same results.

Figure 6. The best GP model obtained for the DRS method.

Figures 6 and 7 show the best GP model obtained for each approach. These models are divided into three sub-trees, with each tree representing a gene, meaning that the model is a chromosome consisting of three genes. The sum of the returns of the sub-trees for a firm is compared with the "Rounding Threshold" to determine the class of the firm. From the classification sub-trees depicted in Figure 6, the decision trees for the DRS approach are obtained with a 98% accuracy rate.
From the classification sub-trees in Figure 7, the decision trees for the common-ratios approach reach a 96.5% accuracy level. The significant variables in each process are presented in Table 3. The representation of a solution provided by the GP algorithm takes the form of a decision sub-tree. The decision trees for each approach consist of three sub-trees, presented as T1, T2 and T3. Each node is represented by a constant value, an independent variable or a combination operator.
The constant values in each of the sub-trees T1, T2 and T3 are labeled "C" and the independent variables "d". For example, T1C1 represents the constant value in the first sub-tree, which is "-5.57389". The node assignments are listed in Table 4. Since there are two possible classes of events, the genetic programming algorithm takes "0.5" as the benchmark for two-class classification decision-making problems. Thus, the benchmark value of 0.5 is used to decide whether a firm is bankrupt or non-bankrupt through the genetic programming decision tree. If the value of the GP model for a specific training or test firm is greater than or equal to 0.5, the firm is marked as a "bankrupt firm"; if it is less than 0.5, the firm is classified as a "non-bankrupt firm". Comparing the real classes of firms with the classes predicted by the GP model determines the accuracy of the model, as reported in Table 6.

4.1. Misclassification test. The error rate may be explored through a misclassification test as an alternative method. Within this method, a penalty number is assigned for making any specific type of mistake. Penalty numbers represent the costs of the error rates, and the average cost of misclassification is the weighted average of all errors. Table 5 shows the possible classifications and misclassifications. Table 6 compares the accuracy of each classification model with respect to the different data representations; clearly, the results improve under the data transformation procedure. Note that this better performance under the new transformation is achieved even though the data set is not collected from a particular industry type or from firms of similar size, nor is any outlier deletion applied.
Thus, our process is free of any potential explanatory effect errors that might otherwise have been caused by the distribution of the independent variables.⁵ The new model's properties are briefly explained in the description of the methodology.
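The average cost of misclassification described above can be sketched as follows; the penalty values and counts are illustrative, not those of the paper:

```python
# Sketch of the misclassification-cost idea: each error type carries a
# penalty, and the average cost weights the error counts by those penalties.

def average_misclassification_cost(confusion, penalties):
    """confusion[(actual, predicted)] = count; penalties map each
    (actual, predicted) error type to its cost (correct cells cost 0)."""
    total = sum(confusion.values())
    cost = sum(confusion[k] * penalties.get(k, 0.0) for k in confusion)
    return cost / total

conf = {("bankrupt", "bankrupt"): 45, ("bankrupt", "healthy"): 5,
        ("healthy", "healthy"): 140, ("healthy", "bankrupt"): 10}
# Missing a bankruptcy is typically penalized more heavily than a false alarm.
pens = {("bankrupt", "healthy"): 10.0, ("healthy", "bankrupt"): 1.0}
print(average_misclassification_cost(conf, pens))  # (5*10 + 10*1) / 200 = 0.3
```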

4.2. K-fold cross-validation.
In order to observe the effects of bias, we conduct a K-fold cross-validation procedure, dividing the data into K equal parts. For each k = 1, 2, ..., K, the model is fitted to the other K − 1 parts, which serve as the training set, and the kth part is used as the test set. This cross-validation procedure allows mean error rates to be calculated, which gives useful insight into the classifier's decisions. When K is set to the number of data instances, the procedure has the advantage of allowing the largest amount of training data to be used in each run and, conversely, makes the testing procedure deterministic; with large data sets, however, this is computationally infeasible, and in certain situations the deterministic nature of testing results in errors. Furthermore, K-fold cross-validation is the primary method for estimating tuning parameters. In our experiment, we use 5-fold cross-validation; Table 7 presents the comparison of the 5-fold accuracy results. In summary, the results highlight the evidence that better classification accuracy is achieved under the transformation process. It is not only the pattern of liquidity variation that favors active companies but also that of turnover; these indices are higher for active firms. The ratio of assets to operating income is higher for failed firms because of their reduced capital resources. Earnings indices display greater solvency for active firms, even though debts have increased for those firms with respect to the likelihood of bankruptcy. Operating structure ratios show a lower incidence of interest charges on sales and value added for active companies, as well as higher depreciation charges over gross fixed assets for failed ones.
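A minimal sketch of the K-fold procedure, with a toy mean-threshold classifier standing in for MDA/LA/GP and invented data:

```python
# Each of the K subsets serves once as the test set while the remaining
# K-1 subsets train the classifier; accuracies are averaged over the K runs.

def k_fold_indices(n, k):
    """Partition indices 0..n-1 into k interleaved folds."""
    return [list(range(i, n, k)) for i in range(k)]

def cross_validate(data, labels, k=5):
    folds = k_fold_indices(len(data), k)
    accuracies = []
    for test_idx in folds:
        train_idx = [i for i in range(len(data)) if i not in test_idx]
        # Toy classifier: threshold at the mean of the training scores.
        threshold = sum(data[i] for i in train_idx) / len(train_idx)
        correct = sum((data[i] > threshold) == labels[i] for i in test_idx)
        accuracies.append(correct / len(test_idx))
    return sum(accuracies) / k  # mean accuracy over the K runs

scores = [0.1, 0.2, 0.8, 0.9, 0.15, 0.85, 0.25, 0.75, 0.05, 0.95]
bankrupt = [False, False, True, True, False, True, False, True, False, True]
print(cross_validate(scores, bankrupt))  # 1.0 on this separable toy data
```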
Capitalization ratios clearly reflect the superior growth of active versus failed firms. The results also suggest that some indicators, such as the earnings to total debt ratio, are traditionally considered in empirical analysis but are not significant in any of the three models considered (Ugur and Weber, 2007). Profitability ratios emphasize the overall higher profitability of active enterprises. Finally, additional indices such as market shareholders' dividends, sales, returns and operating assets are significantly higher for healthy companies.

5. Summary and conclusions.
One of the most well-known anomalies in terms of risk factors is the effect of certain ratios on bankruptcy risk and firm returns. One possible explanation for this effect that is consistent with the "efficient market hypothesis" (Modigliani and Miller, 1958) is that the ratio is a proxy for risk. In fact, in line with extensive academic research on the impact of market imperfections, financial ratios are still used as a vital source of information and as indicators in financial studies. In banking, too, ratios are taken as a proxy for the charter value of banks (Landskroner et al., 2006). The convenience of financial ratios may outweigh the cost of errors in analysis caused by ratio-related model misspecification, and, in general, no equally convenient or superior alternative to untransformed ratios has been developed and applied to financial ratio analysis. Hence, this research was motivated to develop an alternative to the ratio-based methodology for financial studies.
The properties of our new method, called DRS, may contribute general guidelines for ratio analysis in which there is no arbitrary conditioning, because the number of transformations is equal to the number of observations. Furthermore, the natural distribution of the DRS transformation ensures that the data are not skewed and should be more robust to the assumptions of Gaussian statistical methods. In addition, DRS can be applied equally to a variety of distributional forms, making the technique particularly useful in ratio analysis, where a diverse set of distributional functions has been identified. Moreover, because the new DRS transformation is naturally bounded and unaffected by the distance between observations, any outlier effect present will be reduced. Similarly, white noise in distance data is damped, and the sensitivity and power of statistical tests are improved. Negative values are transformed to specific variations, removing the need to delete negative data, as was done in previous studies. Furthermore, the DRS method provides a proportional, asymmetric, scaled nature as a theoretical assumption that holds across various degrees of departure across industries and size classes (Cui et al., 2009). Thus, if the relationship between the elements of a ratio is constant over time, size and industry, then the proportionality effect will be satisfied for ratios using the DRS method. Our simple new methodology, the DRS index, provides a geometric illustration of the proposed risk measure and its transformation behavior. Classification accuracy rises with the application of this new transformation, compared to the performance of common ratios, using Multiple Discriminant Analysis (MDA), Logistic Analysis (LA) and Genetic Programming (GP) as statistical prediction techniques.
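The DRS transformation itself is defined earlier in the paper and is not reproduced here. Purely to illustrate how a bounded, angle-based view of a ratio acquires the properties listed above (boundedness, tolerance of zero and negative components, damped outliers), consider the following generic sketch; the `angle_view` function is an illustrative stand-in, not the DRS formula.

```python
import math

def angle_view(numerator, denominator):
    """Generic bounded view of a ratio: the angle of the point
    (denominator, numerator) in the plane, lying in (-pi, pi].
    Unlike the raw quotient, it is defined when the denominator is zero,
    remains meaningful for negative components, and extreme outliers
    cannot escape the bound."""
    return math.atan2(numerator, denominator)

# A moderate and an extreme ratio land close together on the bounded scale,
# although the raw quotients 10.0 and 1000.0 differ by two orders of magnitude.
moderate = angle_view(10.0, 1.0)
extreme = angle_view(1000.0, 1.0)
```

Because every observation maps into the same bounded interval, a handful of extreme quotients can no longer dominate the sample moments, which is the intuition behind the reduced outlier effect claimed above.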
Genetic programming is a powerful heuristic nonparametric technique for forecasting and prediction. Whereas conventional methods output a quantity, the output of genetic programming is another computer program (Ding et al., 2010). Genetic programming works best for several types of problems and for large data sets. The first type is where there is no ideal solution (for example, a program that drives a car or one that chooses a portfolio); genetic programming will find a solution that compromises and is the most efficient among a large set of variables and candidate solutions. Genetic programming is also useful for finding solutions where the variables are constantly changing. The best computer program that appears in any generation, the best-so-far solution, is designated as the result of the genetic programming run (Koza, 1992).
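The generational loop behind this can be sketched as follows. This is a deliberately minimal example on a hypothetical toy classification task, with expression trees over two features and mutation-only evolution; the actual GP configuration, operators and data used in this paper differ.

```python
import random

random.seed(1)

# Hypothetical toy data: label 1 when x0 + x1 > 1, else 0
DATA = []
for _ in range(200):
    x0, x1 = random.random(), random.random()
    DATA.append((x0, x1, 1 if x0 + x1 > 1 else 0))

def random_tree(depth=3):
    """Grow a random expression tree over x0, x1, constants and +, -, *."""
    if depth == 0 or random.random() < 0.3:
        return random.choice([('x0',), ('x1',), ('const', random.uniform(-1, 1))])
    op = random.choice(['add', 'sub', 'mul'])
    return (op, random_tree(depth - 1), random_tree(depth - 1))

def evaluate(tree, x0, x1):
    """Recursively evaluate an expression tree at one data point."""
    tag = tree[0]
    if tag == 'x0': return x0
    if tag == 'x1': return x1
    if tag == 'const': return tree[1]
    left, right = evaluate(tree[1], x0, x1), evaluate(tree[2], x0, x1)
    return left + right if tag == 'add' else left - right if tag == 'sub' else left * right

def fitness(tree):
    """Classification accuracy: predict class 1 when the program output > 0."""
    correct = sum((evaluate(tree, x0, x1) > 0) == (y == 1) for x0, x1, y in DATA)
    return correct / len(DATA)

def mutate(tree):
    """Subtree mutation: replace the whole tree or one branch with a new random tree."""
    if tree[0] in ('x0', 'x1', 'const') or random.random() < 0.3:
        return random_tree()
    branch = random.choice([1, 2])
    return tuple(mutate(sub) if i == branch else sub for i, sub in enumerate(tree))

def evolve(pop_size=50, generations=20):
    population = [random_tree() for _ in range(pop_size)]
    best = max(population, key=fitness)
    for _ in range(generations):
        # Tournament selection followed by subtree mutation
        population = [mutate(max(random.sample(population, 3), key=fitness))
                      for _ in range(pop_size)]
        champion = max(population, key=fitness)
        if fitness(champion) > fitness(best):
            best = champion  # the best-so-far solution (Koza, 1992)
    return best

best = evolve()
```

The returned `best` is itself a program (an expression tree), which matches the point above that GP's output is another computer program rather than a single quantity.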
Given the mathematical properties and the prediction results, we suggest the use of this new methodology for studies involving financial ratios, as it can provide a conceptual and complementary methodological solution to many of the problems associated with the use of ratios. While previous studies used data from one and two years prior to bankruptcy, this new method uses the changes between two annual reports of each company, and it can thus capture the firm's performance over the period between those two years. Consequently, further generalizing the model by including an additional year is recommended for future studies. Furthermore, as reported by the IMF, it is necessary to undertake research to understand capital structures and other financial indicators, such as macro- and microeconomic variables, that might simultaneously affect firms' performance; this could eventually improve bankruptcy predictions. Therefore, testing this model with respect to the above issues will be an important task for the future.