Omega-optimized Portfolios: Applying Stochastic Dominance Criterion for the Selection of the Threshold Return

Purpose of the article: When asymmetric risk-return measures are used, the selection of the investor's required or threshold rate of return plays an important role. The scientific literature usually states that every investor should define this rate according to their degree of risk aversion. This paper attempts to look at the problem from a different perspective: the empirical research aims to determine the influence of the threshold rate of return on portfolio characteristics. Methodology/methods: A stochastic dominance criterion was used to determine the threshold rate of return. The results are verified using the commonly applied method of backtesting. Scientific aim: The aim of this paper is to propose a method that allows the threshold rate of return to be selected reliably and objectively. Findings: The empirical research confirms that stochastic dominance criteria can be successfully applied to determine the rate of return preferred by the investor. Conclusions: The risk-free rate, or simply a zero rate of return, commonly used in practice is often justified by neither theoretical nor empirical studies. This work suggests determining the threshold rate of return by applying the stochastic dominance criterion.


Introduction
This work extends and complements the author's previous study of Omega-optimized portfolios (Vilkancas, 2014). Portfolio optimization with respect to the Omega ratio proposed by Keating and Shadwick (2002) has proved very successful, but the question of what criteria should be used when selecting the threshold, or minimum desired, rate of return remains unanswered. In the previous study, the out-of-sample returns indicated that when the threshold return varied within the range of 1 to 2 percent (monthly data), the returns and the final value of the Omega-optimized portfolios increased significantly and exceeded those of all competing portfolio strategies. This result is rather important, as in practice the threshold rate of return is generally set to the risk-free rate or simply to zero. However, in practical terms, this result is still not satisfactory for several reasons: i) an investor performing portfolio optimization at the beginning of each portfolio formation or rebalancing period must choose one specific "point" of threshold return; ii) the selected 1-2 percent range of threshold returns is rather arbitrary; there is no guarantee that another data sample, or a different period, would not result in lower or higher range limits; iii) finally, under the traditional mean-variance criterion, a higher portfolio return is associated with higher risk (again, within a certain range, beyond which only the risk continues to grow), and to make a decision the investor is required to use a utility function.
This article suggests a possible solution to the problem: to apply the stochastic dominance criterion when selecting the threshold rate of return. The essence of this method is that at the initial stage of portfolio formation a dominant portfolio is selected from a set of Omega-optimized portfolios generated using different threshold levels of return, and the weights of this portfolio are used to calculate and evaluate performance in the next period. Dominant portfolios were identified using Anderson's (1996) stochastic dominance methodology. To the author's best knowledge, this method has not previously been used to determine the threshold rate of return and thus to select investment portfolios. After backtesting the model with historical data, the results essentially confirmed the expectation that this method can be successfully applied in selecting the threshold rate of return.

Literature review
Investment decisions commonly rely on relative risk-reward measures. In the classical Markowitz model this measure is the ratio of the mean of returns to their variance, which, in order to avoid the drawbacks associated with dispersion, was later supplemented by a considerable number of other "modern" measures, such as the Sortino, Omega, Kappa and similar ratios. An alternative approach, deeply rooted in economic theory, suggests that when making an investment decision a rational investor explicitly optimizes an expected utility function. The origins of Expected Utility Theory are associated with the St. Petersburg paradox described by Bernoulli in 1738, the outcome of which is that the appropriate value of a lottery or an uncertain decision is not the expected value but the expected utility of the gain. In 1947 Von Neumann and Morgenstern formulated four rational behavior rules (axioms) that ensure the existence of a utility function, which can be used to express an individual's assessment of risk. According to Expected Utility Theory, when choosing an investment portfolio, investors maximize expected utility. Two characteristics of the utility function are commonly recognized: i) the utility function is strictly increasing, i.e. the investor's utility increases with increasing wealth; ii) the utility function is concave. The concavity feature is very important: it describes the investor's tendency to avoid risk. The only essential condition for this feature is a declining marginal utility of wealth; however, if we replace oranges or potatoes with wealth or money, the concept of declining marginal utility is not so easy to accept, as it is not entirely clear why institutional investors' marginal utility of wealth should decline.
Although it has helped to resolve many paradoxes in economic theory, utility theory was not very "productive" in finance and quickly gave way to portfolio theory and the associated risk-reward measures proposed by Markowitz. As the Allais and Ellsberg paradoxes show, utility theory is not perfect even in theoretical terms; however, the biggest problem of expected utility optimization lies in the proper selection of the utility function: every investor may possess a subjective utility and, accordingly, a different optimal investment portfolio. The idea of "objective" or "universal" risk-reward measures greatly simplifies investment selection, but the "universal" risk-reward measurement concept does not escape criticism either. When investment returns are normally distributed, Markowitz's portfolio theory is correct, but where returns are not symmetrical, the mean-variance criterion can lead to wrong decisions. For example, it is well known that the Sharpe ratio widely used in the financial sector tends to "rank" investments incorrectly when returns are positively skewed or when the numerator of the ratio is negative. The range of other risk-reward measures offered in the literature partially solves the problems arising when investment returns are characterized by asymmetry and heavy tails, but the fundamental problem of rational investment selection under uncertainty remains.
The search for solutions to these problems forced a reconsideration of models of utility, risk and rational investor behavior. Although very diverse perceptions of utility and risk usually prevent the formulation of a universal "recipe" for how investors should maximize expected utility, a concept called Stochastic Dominance (SD) allows investment opportunities to be ranked according to their attractiveness without knowing the specific form of the utility function, based solely on those properties of the utility function that characterize investors' attitudes to risk. The main idea of SD is that investors prefer investments with low probabilities of negative or small returns and high probabilities of positive or large returns. In practice, the first- and second-degree stochastic dominance rules (FSD and SSD, respectively) are used most commonly.
Given two alternatives A and B, each described by its cumulative distribution function (CDF), A dominates B in terms of FSD only when the distribution function of A is always below and to the right of the distribution function of B, i.e. F_A(x) ≤ F_B(x), ∀x. A sufficient condition for first-degree SD is an increasing investor's utility function (U' > 0), i.e. the investor is rational and prefers more wealth to less. This dominance feature is quite obvious, yet rare in practice: a first-degree stochastically dominant portfolio would normally present an arbitrage opportunity. Second-degree SD means that the investor is rational and tends to avoid risk: his utility function is increasing and concave, i.e. U'(x) > 0 and U''(x) < 0. Alternative A dominates alternative B when

∫_{−∞}^{x} [F_B(t) − F_A(t)] dt ≥ 0, ∀x. (1)

The advantage of SD, compared with other risk-reward measures, is that the SD criterion does not need to rely on simplifying assumptions about the distributions of the assessed processes; moreover, the assessment of the alternatives uses all the information available in the distribution function, not just its individual moments. The disadvantage of SD is that determining dominance is not a trivial task. Population distribution curves may dominate while the sample curves do not, or vice versa. Statistical tests are applied to determine dominance: McFadden (1989), Davidson and Duclos (2000), Barrett and Donald (2003), Linton, Maasoumi and Whang (2005). These tests differ in the way the null hypothesis is formulated (whether dominance or non-dominance is hypothesized), in their ability to deal with correlated samples (an important point when working with financial time series) and in the methodology for determining critical test values.
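The SSD condition in (1) can be checked empirically by comparing integrated empirical CDFs on a common grid. The following Python sketch illustrates the idea; the function name, the fixed grid and the tolerances are illustrative assumptions, not part of the statistical tests cited above:

```python
import numpy as np

def ssd_dominates(a, b, grid_points=200):
    """Empirical second-degree stochastic dominance check of sample a over b.

    A dominates B (SSD) when the integrated CDF of A never exceeds that of B,
    i.e. int_{-inf}^x [F_B(t) - F_A(t)] dt >= 0 for all x, strictly somewhere.
    """
    a, b = np.asarray(a, float), np.asarray(b, float)
    lo = min(a.min(), b.min())
    hi = max(a.max(), b.max())
    xs = np.linspace(lo, hi, grid_points)
    # Empirical CDFs evaluated on the common grid.
    Fa = np.searchsorted(np.sort(a), xs, side="right") / a.size
    Fb = np.searchsorted(np.sort(b), xs, side="right") / b.size
    # Cumulative integrals of the CDFs (uniform-grid Riemann sums).
    dx = xs[1] - xs[0]
    Ia = np.cumsum(Fa) * dx
    Ib = np.cumsum(Fb) * dx
    return bool(np.all(Ia <= Ib + 1e-12) and np.any(Ia < Ib - 1e-12))
```

Unlike the formal tests of McFadden or Davidson and Duclos, this naive check ignores sampling error, so it should be read only as a transcription of condition (1) into code.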
The second-degree stochastic dominance criterion may not be sufficient to determine the attractiveness of potential investments, and one may need to use a higher degree of dominance or even additional subjective criteria of investment "discrimination". Finally, even non-dominant securities may be useful in the process of portfolio formation. Forming a portfolio splits and thus reduces risk, and the effect of diversification may be stronger than that of stochastic dominance; therefore, in most cases non-dominant securities should not be immediately rejected as unsuitable (Post, 2003; Kuosmanen, 2004).

The Omega Ratio
The Omega ratio suggested by Keating and Shadwick (2002) is equivalent to the returns distribution itself, as it incorporates all higher-order moments. Thus, no assumptions about investors' risk tolerance or their utility functions are necessary when using it; hence, according to the researchers, it is a "universal ratio" that helps with an objective assessment of investment performance.
The Omega ratio is defined as

Ω(τ) = ∫_τ^{R_max} (1 − F(x)) dx / ∫_{R_min}^{τ} F(x) dx,

where τ is the threshold return, F is the cumulative distribution function of returns, and R_min and R_max are the minimum and maximum values of returns, respectively.
When τ is closer to R_min, the BCU area is larger than the LAB area and the Omega value is high, and vice versa. The Omega calculation takes into account the threshold return relative to which a result is considered a gain or a loss; thus, if τ is seen as the required rate of return, the Omega ratio shows to what extent the obtained results exceed the investor's expectations. Accordingly, a higher Omega ratio means better performance, i.e. return.
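For a discrete sample of returns, the integral ratio above reduces to the mean gain above τ divided by the mean shortfall below τ, which makes Omega straightforward to compute. A minimal sketch (the function name is an illustrative assumption):

```python
import numpy as np

def omega_ratio(returns, tau):
    """Sample Omega: mean gain above the threshold tau
    divided by mean shortfall below it."""
    r = np.asarray(returns, float)
    expected_gain = np.maximum(r - tau, 0.0).mean()   # upside partial moment
    expected_loss = np.maximum(tau - r, 0.0).mean()   # downside partial moment
    return expected_gain / expected_loss
```

Setting tau equal to the sample mean returns a value of 1, consistent with the property Ω(τ) = 1 at τ = μ noted in the text.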
Although Keating and Shadwick introduced the Omega as a "universal measure of efficiency" which fully characterizes the return-risk distribution and is intuitive, easy to understand and easy to calculate, they themselves soon recognized that, in order to obtain full information about the return-risk distribution, the Omega function should be assessed not at a single threshold point τ but over the whole range. Later the authors' position became even more critical: according to them, "a function estimated at only one point can be completely misleading". Although the interpretation of an Omega estimate at a single threshold point ("more is better") is indeed very simple, interpreting estimates obtained over a range of thresholds is far from simple.
The Omega function is strictly decreasing in τ: where τ is lower than the distribution mean μ, Omega is higher than one (i.e. Ω > 1 for τ < μ); where τ is higher than μ, Omega is lower than one (i.e. Ω < 1 for τ > μ); and Ω = 1 when τ = μ. It is intuitively clear that the higher the threshold return, the lower the chance of achieving it; therefore, as the threshold increases, the value of Omega approaches zero. Beyond this, however, the situation becomes more complicated.
The level of investment risk is reflected in the shape of the Omega function (Figure 1): the steeper the plot, the lower the risk, i.e. the lower the probability of "extreme" return variations; accordingly, the flatter the plot, the higher the risk. Figure 1 presents Omega plots for UnitedHealth Group (UNH), Exxon Mobil Corporation (XOM) and Verizon Communications Inc. (VZ) with threshold returns ranging from 0 to 5 percent. The figure shows that UNH is more attractive than XOM or VZ regardless of the selected threshold; the attractiveness of XOM compared to VZ, however, depends on the selected threshold return. Therefore, assessing the attractiveness of assets on the basis of a single threshold value is dangerous: assessment over the whole range is required. The Omega plot obtained by varying the threshold indeed allows a more efficient assessment of investment attractiveness, but a legitimate question arises as to how the ratio is better than a direct comparison of the return distributions, i.e. investment assessment using stochastic dominance criteria (Frey, 2009).
Despite the warnings about Omega's deficiencies when taking "point" estimates of investments with respect to this function, the issue of threshold selection is not analyzed more explicitly in the literature; it is often simply acknowledged that it is not clear how the threshold should be specified, or stated that the threshold should depend on the risk-return preferences of each investor (Mausser et al., 2006). In practice, the threshold is generally set to the risk-free rate, the average expected return or simply zero.

Optimization with Respect to the Omega Function
When performing Omega optimization of a portfolio, the asset weights are chosen so as to maximize the ratio of the expected gain over the threshold to the expected loss below it. Formally, the Omega optimization problem can be written as

max_w Ω_τ(w) = E[max(r_p(w) − τ, 0)] / E[max(τ − r_p(w), 0)], s.t. Σ_i w_i = 1, x_l ≤ w_i ≤ x_u,

where r_p(w) is the portfolio return for the weight vector w, and x_l, x_u are the lower and upper bounds on the weights.
When choosing the interval of threshold returns, i.e. τ_min and τ_max, within which the Omega function will be optimized, the maximum threshold should not exceed the maximum historical or simulated return of the portfolio securities. Obviously, ex post portfolio returns can never be higher than the returns of the component securities. For a higher threshold, meta-heuristic optimization algorithms, as opposed to traditional exact algorithms, can still return a solution, but such a solution would make no logical sense (Shaw, 2011).
The Omega ratio is easy to use when assessing past performance, but the function is non-convex and may have many local optima; thus, portfolio optimization using this function is quite tricky. Mausser, Saunders and Seco (2006) proposed a method that, under certain conditions, allows the Omega-optimized portfolio problem to be solved using linear programming techniques; in general, however, this method is not applicable. In other cases, the portfolio is optimized using heuristic optimization (Gilli, Schumann, 2010) or other global optimization techniques (Kane et al., 2009). The return series used in portfolio optimization can be obtained in two ways: from historical data or by applying simulation techniques. As various studies show, relying solely on historical data leads to so-called model over-fitting, i.e. the model fits the construction or testing sample perfectly but has little predictive power beyond this period. A theoretically better method is to generate returns using simulation techniques, but in fact the process generating returns is not known (or may not even exist); thus, historical data are often used in the hope that past scenarios will remain relevant in the future, or at least for some time.
In this paper, the optimization has been carried out using historical data, applying the differential evolution algorithm implemented in the R package DEoptim (Ardia et al., 2011).
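The idea behind such an evolutionary optimization can be sketched in Python with a small hand-rolled differential evolution loop (the classic rand/1/bin scheme). All names, the toy return matrix in the test, and the weight normalization inside the objective are illustrative assumptions, not the DEoptim settings actually used in the study:

```python
import numpy as np

def sample_omega(w, R, tau):
    """Omega of the portfolio with (normalized, long-only) weights w on return matrix R."""
    w = np.abs(w)
    w = w / (w.sum() + 1e-12)          # enforce full investment
    pr = R @ w                          # portfolio return series
    gain = np.maximum(pr - tau, 0.0).mean()
    loss = np.maximum(tau - pr, 0.0).mean()
    return gain / (loss + 1e-12)

def de_omega(R, tau=0.01, pop=30, gens=150, F=0.8, CR=0.9, seed=1):
    """Minimal differential evolution (rand/1/bin) maximizing sample Omega."""
    rng = np.random.default_rng(seed)
    n = R.shape[1]
    P = rng.uniform(0.0, 1.0, size=(pop, n))        # candidate weight vectors
    fit = np.array([sample_omega(p, R, tau) for p in P])
    for _ in range(gens):
        for i in range(pop):
            a, b, c = P[rng.choice(pop, size=3, replace=False)]
            mutant = np.clip(a + F * (b - c), 0.0, 1.0)
            mask = rng.uniform(size=n) < CR
            mask[rng.integers(n)] = True             # keep at least one mutant gene
            trial = np.where(mask, mutant, P[i])
            f = sample_omega(trial, R, tau)
            if f > fit[i]:                           # greedy one-to-one selection
                P[i], fit[i] = trial, f
    best = np.abs(P[fit.argmax()])
    return best / best.sum()
```

Because the weight-sum constraint is handled by normalization rather than penalties, every candidate is feasible, which keeps the loop short; production code would add convergence checks and the bound constraints x_l, x_u.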
Despite the fact that the Omega ratio has attracted much attention in both the academic and financial sectors, there are not many studies that answer the question of whether this ratio is in any way better than classic portfolio optimization techniques or other portfolio construction strategies. One of the reasons is that Omega optimization is itself quite complex and the object of a number of studies. A review of the studies conducted is presented in Table 1.

Material and methods
The study uses four data sets from different stock markets: monthly returns of all constituents of the DJIA index for the period from 30/01/1998 to 31/12/2013 (a total of 30 stocks and 192 periods); monthly returns of 25 stocks belonging to the DAX30 for the period from 31/01/2002 to 31/12/2013 (stocks with insufficient price history were excluded); monthly returns of 25 stocks belonging to the CAC40 for the same period from 31/01/2002 to 31/12/2013; and weekly returns of 50 stocks belonging to the EURO STOXX 50 index for the period from 04/01/2002 to 31/12/2013.
The performance of the Omega portfolios is compared with that of portfolios constructed using other optimization techniques. A total of eight competing benchmark portfolios are considered: the classic minimum-variance portfolio and the tangency, or maximum Sharpe ratio, portfolio (C.MV and C.TG, respectively); tangency portfolios optimized using the uniform correlation and "shrunk" covariance matrices proposed by Ledoit and Wolf (LWCC.TG and LW1F.TG, respectively); the minimum conditional value-at-risk and maximum return-to-risk CVaR portfolios (minCVaR and maxCVaR); the equally weighted portfolio (EQW); and the equal risk contribution portfolio (EQRC) proposed by Maillard et al. (2010).
The performance of the selected strategies was evaluated using the moving sample window method often applied in scientific studies (DeMiguel et al., 2009). The choice of this method is usually motivated by the well-known heteroskedasticity of financial data series. First, the duration of one testing period is selected; this paper uses M = 36 months (or 104 weeks). Based on the return series of the first testing period, the parameters required for implementing a particular strategy are estimated and then used to calculate optimal portfolio weights, which in turn are used to calculate the portfolio return for the next period, i.e. M+1. The process continues, adding a new period and excluding the earliest one, until the end of the entire data period is reached. This moving-window backtesting yields a series of T − M monthly (or weekly) out-of-sample portfolio returns, i.e. returns calculated using data that were not included in the sample during portfolio optimization. The procedure is applied to each tested strategy and each stock market data set.
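The moving-window procedure described above can be sketched as follows. The optimizer callback and the per-period rebalancing are simplifying assumptions for illustration (the study itself rebalances semi-annually):

```python
import numpy as np

def rolling_backtest(R, optimize, window=36):
    """Moving-window backtest: estimate weights on the last `window`
    periods only, then record the next period's out-of-sample return."""
    T = R.shape[0]
    oos = []
    for t in range(window, T):
        w = optimize(R[t - window:t])   # in-sample data only
        oos.append(float(R[t] @ w))     # out-of-sample portfolio return
    return np.array(oos)                # T - window returns in total
```

Any weight-producing strategy, from equal weighting to Omega optimization, can be passed in as the `optimize` callback, which makes the comparison across strategies uniform.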
Another important concern is how often the investment portfolio is reallocated. Fund managers usually reallocate portfolio positions either at a specified frequency, or when portfolio weights deviate from a specified allowable threshold, or, more commonly, at a certain interval if, at that time, the weights have drifted beyond the specified "threshold". Such portfolio management can be called tactical, as it allows deviations from the basic strategy, i.e. the optimal weights, for certain tactical objectives such as reducing transaction costs. The reallocation frequency may also be adjusted for other reasons, such as optimizing the taxes paid. Based on the results of the author's previous study of the Omega portfolio, a half-year reallocation frequency was selected for the present study; it has been empirically shown to be a good compromise between not deviating significantly from the selected strategy and the aim of reducing turnover costs (Vilkancas, 2014).
The characteristics of the Omega portfolios were assessed with the threshold value τ varying from 0 to 5 percent in steps of 0.001% (for brevity, the detailed results are omitted and only cumulative net returns after transaction costs are presented in Figures 2 through 5 below).
Finally, at the end of each testing period three dominant portfolios were selected: the first from the set of Omega portfolios obtained with the threshold rate τ varying from 0 to 1 percent (referred to as "SD L0-10"), the second from the set of portfolios with τ varying from 1 to 2 percent ("SD L10-20"), and the third, "SD L20-30", from the set of portfolios with τ varying from 2 to 3 percent. In most cases, attempts to move the threshold range even higher proved unsuccessful, as such high threshold rates simply exceeded the in-sample returns of the underlying stocks. Dominant portfolios were identified using Anderson's (1996) test for second-degree stochastic dominance (see also Vinod, 2004). Although the stochastic dominance test of Davidson and Duclos (2000) is more recent and has been recognized by different authors as one of the most effective, in this experiment the most important evaluation criterion is the out-of-sample performance of the selected portfolios.
All tested portfolios were assessed from various angles, including overall return, risk, the portfolio turnover required by a particular strategy, portfolio concentration, and the net return after deducting the costs incurred in reallocating portfolio weights. A total of 14 different indicators were used to assess portfolio performance. In addition to the conventional risk indicators, the annualized standard deviation (AnnSD) and the Sharpe ratio (AnnSR), the paper also presents the maximum and average drawdowns (Max.DD and Avg.DD) as well as the maximum and minimum annual returns over the period. To assess portfolio turnover, the average annual turnover and the total turnover covering the whole period are given (Ann.Turn and Tot.Turn). Portfolio concentration is assessed using the Gini coefficient, which ranges from 0 to 1: a value of 1 indicates complete concentration (the portfolio consists of only one position), while the Gini coefficient of a well-diversified, equally weighted portfolio equals 0. The table of results reports the average Gini coefficient over the entire study period. Finally, the net annual returns and the net cumulative value (NetCumRet) obtained after deducting proportional turnover charges of 1 percent per 100 percent of portfolio turnover are given, as well as the Sharpe ratio estimated using net returns (NetAnn.SR).
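The concentration measure just described can be computed directly from the weight vector. In this sketch, the normalization by n/(n−1), which makes a single-position portfolio score exactly 1 as the text describes, is an assumption about the paper's exact convention (the raw Gini of a one-position portfolio is (n−1)/n):

```python
import numpy as np

def weight_gini(w, normalize=True):
    """Gini coefficient of portfolio weights: 0 for equal weights,
    1 (with small-n normalization) when everything sits in one position."""
    w = np.sort(np.asarray(w, float))             # ascending order
    n = w.size
    ranks = np.arange(1, n + 1)
    # Standard rank-weighted formula for the Gini coefficient.
    g = ((2 * ranks - n - 1) @ w) / (n * w.sum())
    return g * n / (n - 1) if normalize else g
```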

Results and discussion
The results are summarized in Tables 2-4 and Figures 2-5. Figures 2-5 show that, at a certain level of the threshold return, the Omega-optimized portfolios outperform all competing portfolios in absolute terms. Moreover, the Omega-optimized portfolios are characterized by stability of results, while the list of the other "leaders" changes in a seemingly random order when the alternative portfolio optimization strategies are applied in different markets. However, a major challenge remains the selection of the minimum required, or threshold, rate of return to be used in the optimization process. This is especially evident in the results obtained in the German stock market, where portfolio returns continued to grow even after the threshold rate exceeded the proposed "safe" 1-2 percent range (Figure 5).
The proposed stochastic dominance method of picking portfolios again yielded exceptionally good and robust out-of-sample performance in terms of growth in investment value. The "SD L10-20" strategy outperformed all competing strategies in all tested stock markets except the FT100, where it came second. While total net value is arguably the most important single ex-post performance indicator, comparing portfolios with different risk-return profiles on this single indicator alone may be misleading. The Sharpe ratio shows that the "SD L10-20" strategy no longer yields a significantly superior result in all markets. But as the Sharpe ratio considers only the first two moments of returns, the real performance of portfolios with positively skewed returns may be misjudged. This proposition is confirmed using the Upside Potential Ratio (UPR) proposed by Sortino et al. (1999): when the UPR, which captures the asymmetric nature of returns, was employed, the "SD L10-20" strategy was again ranked best. As Table 3 indicates, it was impossible to calculate the UPR for the Omega-optimized portfolios of the FT100 stocks because the denominator of the UPR formula was zero; however, if the zero denominator is replaced with some small value, the whole ratio becomes large and the strategy is again ranked as the best performing (the UPR is difficult to interpret in absolute terms, so it is used here only to rank the performance of different assets). As the Gini AV coefficients indicate, all portfolios except the EQW and EQRC portfolios generally suffer from the drawback of concentration.
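The UPR used for this ranking, including the zero-denominator case noted for the FT100 portfolios, can be sketched as follows; returning infinity when no observations fall below the threshold is an illustrative convention, not necessarily the one used in the paper's tables:

```python
import numpy as np

def upside_potential_ratio(returns, tau=0.0):
    """Sortino's UPR: mean upside above tau over downside deviation below tau."""
    r = np.asarray(returns, float)
    upside = np.maximum(r - tau, 0.0).mean()                   # upside potential
    downside = np.sqrt((np.minimum(r - tau, 0.0) ** 2).mean()) # downside deviation
    if downside == 0.0:
        return float("inf")   # no returns below tau: the ratio is undefined
    return upside / downside
```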

Conclusions
Selection of the investor's required, or threshold, rate of return plays an important role in the portfolio construction process when asymmetric risk measures are used. This study makes several important contributions to the problem area.
Firstly, the results obtained reaffirm that Omega-optimized portfolios are characterized by good performance and stable results.
Secondly, the study empirically confirms that a zero or near-zero threshold, often used as the investor's required rate of return, is not the best option.
Finally, and most importantly, this paper attempts to look at the threshold selection problem from a different perspective: the empirical research aims to determine the impact of the threshold return on the risk-return characteristics of investment portfolios and to select the appropriate threshold rate by applying the stochastic dominance criterion. Theoretically, the stochastic dominance rules are appealing, as they require less restrictive assumptions about investors' utility functions, and from this study it can be concluded that investment portfolios selected by stochastic dominance rules may produce superior results and eliminate much of the guesswork in selecting threshold rates.