What we know about the low-risk anomaly: a literature review

It is well documented that less risky assets tend to outperform their riskier counterparts across asset classes. This paper provides a structured summary of the current state of the literature on this so-called low-risk anomaly. It gives an overview of empirical findings across implementation methodologies and asset classes and presents the most prevalent proposed causes, namely exposure to other factors, coskewness risk, investor constraints, behavioral biases, and agency problems. The paper concludes that despite some critiques there are good reasons to believe that the low-risk anomaly can be regarded as an investment factor. It also identifies that more research is required to disentangle the proposed causes and to understand the full picture of the anomaly with certainty.


Introduction
Factor investing has gained increasing traction in academia, and thousands of papers have been published in this strand of literature (Jensen et al. 2022; Harvey 2017). Pioneering works developed investment factors based on the findings that small (big) firms, firms with low (high) book-to-equity ratios, and firms showing positive (negative) return momentum outperform (underperform) the market (Fama and French 1992; Basu 1977; Carhart 1997; Jegadeesh and Titman 1993). In a similar fashion, a persistent outperformance was discovered for low-risk assets in many asset classes [e.g. Frazzini and Pedersen (2014)]. Unlike for the above-mentioned factors, there is no agreed-upon measure to capture low-risk assets, resulting in a variety of proposed low-risk measures. Because the outperformance of low-risk assets directly violates the common assumption that taking risk is rewarded with returns in financial markets, the low-risk investment effect is also called the low-risk anomaly. To date, multiple causes for the anomaly have been identified, but there is still no consensus about which rationale prevails. The goal of this paper is to provide a comprehensive overview of the current state of research that examines the empirical identification of the low-risk anomaly across asset classes as well as the causes of its existence. This is not the first time that the literature covering the low-risk anomaly is reviewed in a stand-alone paper. However, existing reviews either focus solely on the causes of the low-risk anomaly (Blitz et al. 2014) or are strongly tilted towards equities (Blitz et al. 2020). The contribution of this paper is thus threefold. First, it summarizes current findings on the low-risk anomaly across asset classes, markets, and methods. Second, it provides an overview of the most commonly employed empirical methods to capture the low-risk anomaly.
Thus, rather than just presenting the findings of current research, it also gives insights into how those findings were generated. Third, it includes studies that dispute the existence of the low-risk anomaly and therefore provides a balanced overview of the current state of research.
The paper proceeds as follows. Section 2 provides a short historical background on the development of the anomaly. Section 3 collects empirical evidence in various asset classes. Section 4 lists and explains proposed causes for the anomaly. Section 5 discusses the current state of research and provides a general conclusion of the findings.

Background
The capital asset pricing model (CAPM) revolutionized the finance literature as it replaced the notion that an asset's risk is its own volatility with the idea that the relevant measure of risk is how an asset covaries with the market. The CAPM is based on the work of Markowitz (1952), who defines risk as the variance of returns. He finds that diversification among assets reduces risk and hence improves the risk-return profile of an investment. Idiosyncratic risk can be eliminated so that only systematic risk, the market risk of an investment, remains. The effect of diversification is illustrated in Fig. 1.
The idea of the CAPM was first developed by Jack Treynor in a private manuscript (Treynor 1961) but was published independently by Sharpe (1964), Lintner (1965) and Mossin (1966). The model is based on Markowitz's assumption that investors are risk-averse and seek portfolios that provide the highest return at a given level of risk. As unsystematic risk can be diversified away, systematic risk was thought to be the only variable that mattered to investors. Based on this, the CAPM formula describes the relationship between risk and return for an asset i as

ṙ_i = r_f + β_{i,m} (ṙ_m − r_f) (1)

The return of an asset i (ṙ_i) is defined as the risk-free rate (r_f) plus the asset's market sensitivity (β_{i,m}) times the excess return of the market (ṙ_m − r_f). Hence, the only variable controllable by investors to set portfolio returns is their investment's exposure to the market. The dots indicate absolute returns in Eq. (1). We can also rewrite Eq. (1) in a shorter form for the asset's excess return as

r_i = β_{i,m} r_m (2)

Here, returns without a dot represent excess returns. This notation will be kept throughout the paper.

Fig. 1 Based on Markowitz (1952). It shows that with an increasing number of portfolio holdings the standard deviation of portfolio returns can be reduced. This is due to the reduction of idiosyncratic risk, which vanishes as a result of diversification

After its publication the CAPM was criticized by a large body of literature because the linear relation between market exposure and returns could not be confirmed empirically with certainty. Among the early studies, some confirm that the relation exists (Fama and MacBeth 1973), others find that it is much flatter than implied by the CAPM (Friend and Blume 1970; Miller and Scholes 1972), and some even find a negative relationship between risk and return (Haugen and Heins 1975).
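The CAPM relation in Eq. (1) can be illustrated with a small numeric sketch; the return figures below are purely hypothetical and not taken from any of the cited studies.

```python
# Numeric sketch of the CAPM relation in Eq. (1); all inputs are hypothetical.
def capm_return(rf: float, beta: float, rm: float) -> float:
    """Absolute expected return of an asset: r_f + beta * (r_m - r_f)."""
    return rf + beta * (rm - rf)

rf, rm = 0.02, 0.08                 # hypothetical risk-free and market return
low = capm_return(rf, 0.5, rm)      # low-beta asset, approx. 0.05
high = capm_return(rf, 1.5, rm)     # high-beta asset, approx. 0.11
```

Under the CAPM the spread between the high- and low-beta asset equals the beta spread times the market excess return; the flatter empirical relation discussed below means realized return spreads fall short of this prediction.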
Doubts about the CAPM were further fostered as later studies could not confirm the linear relationship between market risk and return (Haugen and Baker 1991, 1996; Fama and French 1992; Frazzini and Pedersen 2014).
Similarly, other studies find that risky stocks (in terms of idiosyncratic volatility) underperform their low-risk counterparts (Ang et al. 2006, 2009; Clarke et al. 2010). Another body of literature that examines the empirical performance of the theoretical minimum-variance portfolio comes to a related conclusion. These studies find that while the minimum-variance portfolio achieves a significant reduction in volatility, it also delivers comparable or even higher average returns than the market portfolio (Clarke et al. 2006, 2011; Haugen and Baker 1991; Jagannathan and Ma 2003).
All of the above studies document the so-called "low-risk anomaly". As different concepts to measure an asset's risk are applied in the literature, the investment style that captures the anomaly was given various names such as defensive, low-risk, low-volatility, or quality factor. I will call all of these related factors low-risk factors. An overview of the different low-risk factors that were implemented across asset classes is provided in the next section.

Evidence in markets
Like most factors, low-risk factors are most extensively researched in equity markets. Nevertheless, there is a considerable literature that has taken the concept from equity markets and adapted it to other asset classes. Given this imbalance in the available literature, the equity section is more extensive than the others.

Equities
In equity markets, the literature assesses three different measures of a stock's riskiness: its market beta, its idiosyncratic volatility (IV), and its total return volatility. Relatedly, there also exists a strand of literature that examines the minimum variance portfolio. In this section I will first review the studies that focus on confirming one of the three measures empirically and on the minimum variance portfolio, before I review studies that compare multiple concepts.

Market beta
Among the first to detect the low-risk anomaly, Black et al. (1972) not only find in their empirical assessment that the CAPM does not hold but also propose a new model that captures the flatter linear relationship between systematic risk and return. According to this model an asset's return is defined as

r_i = β_{i,m} r_m + (1 − β_{i,m}) r_β

In addition to the market risk, Black et al. (1972) introduce a second factor return (r_β) that captures the flatter relationship between market risk and returns. They call this factor "beta factor" because its coefficient is a function of the asset's beta. They show empirically that in the US r_β is generally positive. This means that stocks with a low beta earn positive CAPM excess returns whereas high-beta stocks underperform the predictions made by the CAPM. Black (1993) replicates the study of Black et al. (1972) and empirically confirms their suggestion that r_β is positive in the US. In doing so, he sorts stocks based on beta and builds ten portfolios, with the 10% of stocks with the largest beta in the first portfolio, and so on. He then takes the excess returns from the ten portfolios and weights them by (1 − β_j), where β_j represents portfolio j's beta. The resulting portfolio goes long low-beta stocks and short their high-beta counterparts. Black's empirical tests show that this beta-neutral portfolio earns positive, statistically significant excess returns over the sample period from 1931 to 1991. He concludes that the beta factor indeed earns a positive risk premium. Frazzini and Pedersen (2014) further confirm the existence of a risk factor related to an asset's beta. They estimate the market beta of security i according to the following formula

β̂_i = ρ̂_im (σ̂_i / σ̂_m) (3)

where σ̂_i and σ̂_m are the one-year rolling standard deviations of asset i and the market portfolio m, and ρ̂_im is the five-year rolling correlation between the two. They use separate time horizons because correlations appear to move more slowly than volatilities (De Santis and Gerard 1997).
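The beta estimate in Eq. (3) and a rank-based long-short weighting in the spirit of the betting against beta construction can be sketched as follows. This is a simplified illustration: the window lengths are assumptions, and the rescaling of each leg to unit beta used by Frazzini and Pedersen is omitted.

```python
import numpy as np

def estimate_beta(asset: np.ndarray, market: np.ndarray,
                  vol_window: int = 250, corr_window: int = 1250) -> float:
    """Beta as in Eq. (3): rolling correlation times the ratio of volatilities,
    with a shorter window for volatilities than for the correlation."""
    sigma_i = asset[-vol_window:].std(ddof=1)
    sigma_m = market[-vol_window:].std(ddof=1)
    rho_im = np.corrcoef(asset[-corr_window:], market[-corr_window:])[0, 1]
    return rho_im * sigma_i / sigma_m

def bab_weights(betas: np.ndarray) -> np.ndarray:
    """Rank-based weights: long low-beta stocks, short high-beta stocks."""
    ranks = betas.argsort().argsort() + 1      # 1 = lowest beta
    z = ranks - ranks.mean()                   # demeaned ranks sum to zero
    long_leg = np.where(z < 0, -z, 0.0)        # low-beta names
    short_leg = np.where(z > 0, z, 0.0)        # high-beta names
    return long_leg / long_leg.sum() - short_leg / short_leg.sum()
```

The resulting weight vector sums to zero, so the portfolio is self-financing; the lowest-beta stock receives the largest positive weight and the highest-beta stock the largest negative weight.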
In 20 developed equity markets they rank stocks on estimated betas every month and form simple portfolios that are long low-beta stocks and short high-beta stocks. Within the long and short legs of the portfolios, stocks are weighted by their beta ranks so that stocks with low/high betas have higher weightings in the long/short side of the portfolio. Both legs of the portfolio are then rescaled to have a beta of one, thus making the aggregated portfolio beta neutral. Over the sample period, 19 out of the 20 betting against beta (BAB) country portfolios earn positive excess returns, of which eleven are statistically significant at the 5% level. Han et al. (2020) apply the same factor construction methodology to Chinese A-shares (a region not covered by Frazzini and Pedersen's study) and show that the BAB factor also earns a statistically significant alpha in this region. Another study also follows the methodology of Frazzini and Pedersen to investigate BAB portfolios in the US, Japan, Europe ex-UK, and the UK. It deviates from the previous methodology by using BARRA risk models to estimate the volatilities and correlations in Eq. (3). To form country portfolios, z-scores are calculated based on a stock's beta rank. These z-scores serve as standard weights for the long-short portfolio construction as they add up to zero. The long and short sides of the portfolio are then volatility-adjusted to have equal volatilities, and the final portfolio is scaled to a volatility of 10%. Each month such low-risk long-short portfolios are created in each region and aggregated to a global low-risk portfolio with the following weights: US, 50%; Japan, 16.7%; Europe ex-UK, 16.7%; and UK, 16.7%. The resulting portfolio earns positive excess returns with an average Sharpe ratio of 0.61 in the sample period from 1990 to 2013. Ang et al. (2006) take a different angle at quantifying the low-risk anomaly.

Idiosyncratic volatility

Given the empirical failure of the CAPM and the ubiquity of the three-factor model of Fama and French (1992) (FF3), Ang et al. (2006) quantify a firm's riskiness as its residual volatility relative to the FF3 model. More precisely, they define a stock's idiosyncratic risk as √var(ε_i) resulting from the following equation

r_i = α_i + β_{i,m} r_m + β_{i,SMB} SMB + β_{i,HML} HML + ε_i (4)

where SMB and HML refer to the size and value factors of the FF3 model. Ang et al. (2006) compute monthly IVs for US stocks with the above regression using daily return data. They sort stocks into quintile portfolios based on their IV, for which they calculate value-weighted returns and rebalance them every month. Over their sample period from 1963 to 2000 they report statistically significant negative CAPM and FF3 alphas for a portfolio that goes long high-risk stocks and shorts their low-risk counterparts. Ang et al. (2009) reinforce the findings in a subsequent study in which they apply the same methodology to a larger sample of 23 developed countries. Again they find that high-risk stocks tend to earn significantly lower returns than low-risk stocks. A more recent study by Dimson et al. (2017) further confirms the out-of-sample validity of the results. They estimate a stock's idiosyncratic risk as the three-month volatility of ε_i in Eq. (4). In their two samples covering the US and UK stock markets, low-risk stocks earned average annual returns of 10.9% and 11.6% while high-risk stocks earned 4.1% and 4.2%, respectively.
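A minimal sketch of the IV estimation in Eq. (4): regress a stock's excess returns on the three FF3 factors by ordinary least squares and take the standard deviation of the residuals. The factor and return data below are simulated purely for illustration.

```python
import numpy as np

def idiosyncratic_vol(r, rm, smb, hml) -> float:
    """Standard deviation of the residuals from the FF3 regression in Eq. (4)."""
    X = np.column_stack([np.ones_like(rm), rm, smb, hml])
    coef, *_ = np.linalg.lstsq(X, r, rcond=None)
    resid = r - X @ coef
    return resid.std(ddof=X.shape[1])          # adjust for estimated parameters

rng = np.random.default_rng(42)
rm, smb, hml = rng.normal(0.0, 0.01, size=(3, 500))   # simulated daily factors
r = 0.001 + 1.2 * rm + 0.3 * smb - 0.1 * hml + rng.normal(0.0, 0.02, 500)
iv = idiosyncratic_vol(r, rm, smb, hml)       # close to the true residual vol of 0.02
```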

Total return volatility
Arguably the simplest measure to quantify the riskiness of stocks is employed by Blitz and van Vliet (2007) for large-cap stocks in the FTSE World Developed index between 1985 and 2006. They rank stocks on their past three-year volatility of weekly returns and construct monthly rebalanced, equally-weighted decile portfolios. Moving from the highest- to the lowest-volatility portfolio, excess returns rise while volatilities fall, so that Sharpe ratios decrease monotonically from 0.72 for the low-volatility portfolio to 0.05 for the high-volatility portfolio. Following Blitz and van Vliet's research on developed markets, Blitz et al. (2013) reinforce the findings by expanding the methodology to emerging markets. They sort stocks covered in the S&P/IFC Investable Emerging Markets Index based on historical three-year volatility of weekly returns. Their quintile country-neutral portfolios with the lowest volatility significantly outperform the counterparts exhibiting the highest volatility. Joshipura (2016, 2019) uses the same sorting methodology for the Indian stock markets and confirms the finding that low-volatility stocks generate outperformance. Baker and Haugen (2012) extend the study to 21 developed countries and 12 emerging markets. Every month they build decile portfolios based on a stock's return volatility over the past 24 months using monthly return data. In their sample period from 1990 to 2011 they find that in every market the lowest-risk decile portfolio earns higher returns than the portfolio holding the riskiest stocks.
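The volatility sorting used in these studies can be sketched as follows. The return matrix is simulated, and the window of 156 weekly observations (roughly three years) is an assumption matching the text, not the exact data of the cited papers.

```python
import numpy as np

def decile_assignments(past_returns: np.ndarray) -> np.ndarray:
    """past_returns: (T, N) matrix; returns decile 0 (lowest vol) .. 9 (highest)."""
    vols = past_returns.std(axis=0, ddof=1)    # trailing volatility per stock
    ranks = vols.argsort().argsort()           # 0 = least volatile stock
    return ranks * 10 // past_returns.shape[1]

rng = np.random.default_rng(1)
true_vols = np.linspace(0.01, 0.05, 100)       # stocks differ in riskiness
past = rng.normal(0.0, true_vols, size=(156, 100))
deciles = decile_assignments(past)             # 10 stocks per decile
```

Equally-weighted decile returns would then simply be the mean of the next period's returns within each bucket, rebalanced every month.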

Minimum variance portfolio
The minimum variance methodology differs from the methods examined above in that it does not attempt to separate risky stocks from their less risky counterparts. Instead, it attempts to form the portfolio with the minimum total variance. In doing so, risky stocks may be added to the portfolio if they reduce total risk, for example if they covary negatively with other stocks. Haugen and Baker (1991) investigate the performance of the minimum variance portfolio versus the market portfolio between 1972 and 1989. They use the largest 1000 US stocks to form the minimum variance portfolio quarterly based on the historical covariance matrix over the last 24 months. They also impose constraints to ensure diversification and rule out short selling. In their sample period, the minimum variance portfolio has a lower volatility and earns higher returns than the Wilshire 5000 index, which proxies the market portfolio. Further evidence that the minimum variance portfolio generates outperformance, even when reallocated less frequently, is gathered by Jagannathan and Ma (2003). Similar to Haugen and Baker (1991), they build the minimum variance portfolio imposing diversification and short-selling constraints for 500 randomly chosen stocks traded on the NYSE and AMEX. They reallocate the portfolio on a yearly basis and show that it earns much higher returns at lower variation than the market portfolio. Clarke et al. (2006) extend the study of Haugen and Baker (1991) to cover the time horizon from 1968 to 2005 and to use additional covariance structuring methodologies. More precisely, they use principal components following Connor and Korajczyk (1988) and Bayesian shrinkage following Ledoit and Wolf (2004). Based on monthly (daily) returns of the largest 1000 stocks in the US, they estimate two structured covariance matrices over the past 60 months (250 days) and compose the minimum variance portfolio imposing short-selling and diversification constraints.
For both structuring methodologies they find that minimum variance portfolios have about 75% of the realized risk of the general market but earn similar returns. They replicate their results using the same methodology in Clarke et al. (2011) from 1968 to 2009 and find that the minimum variance portfolio outperforms the market in terms of both risk and return. Chen et al. (2018) investigate minimum variance portfolios calculated from daily returns over the past two years in the Chinese A-shares market and find that these portfolios strongly outperform the market regarding risk and return.
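In its unconstrained form the minimum variance portfolio has the closed-form solution w = C⁻¹1 / (1ᵀC⁻¹1). The sketch below implements this closed form on a simulated covariance matrix; the short-selling and diversification constraints imposed by the studies above have no closed form and would instead require a numerical quadratic-programming solver.

```python
import numpy as np

def min_variance_weights(cov: np.ndarray) -> np.ndarray:
    """Unconstrained, fully invested minimum variance weights: C^{-1}1 / (1'C^{-1}1)."""
    inv_ones = np.linalg.solve(cov, np.ones(cov.shape[0]))
    return inv_ones / inv_ones.sum()

rng = np.random.default_rng(3)
A = rng.normal(size=(120, 30))                 # simulated return history
cov = A.T @ A / 120 + 0.01 * np.eye(30)        # well-conditioned covariance estimate
w = min_variance_weights(cov)
port_var = w @ cov @ w                         # lowest variance among fully invested portfolios
```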

Comparison of measures
With multiple measures employed to capture the low-risk factor, the question naturally arises which one captures it best. This question has been investigated by multiple studies, though only a few investigate all four methods presented above. Blitz and van Vliet (2007) compare the performance of their proposed measure of total return volatility to market beta. As for return volatility, they rank stocks based on three-year historical regional betas. Though they find the same pattern in portfolio returns, namely that lower-risk portfolios outperform, this pattern is more irregular and less pronounced for portfolios sorted on beta than for those sorted on total return volatility. Baker et al. (2011) perform the same comparison and sort stocks on five-year volatility and beta into quintile portfolios. They find that low-risk portfolios consistently outperform from 1968 to 2008 in the US and also report that the effect of volatility is larger than that of beta. Soe (2012) also compares two of the proposed methodologies, low-volatility and minimum-volatility strategies, and finds that the two approaches exhibit very similar performance characteristics. De Carvalho et al. (2012) confirm the similarity of low-risk factor strategies. They investigate five popular low-risk portfolios and find that they generally load on low-beta and low-volatility factors. Furthermore, they find that the performance of these portfolios is relatively similar. Scherer (2011) and Chow et al. (2014) come to the same conclusions in their studies on widely used low-risk strategies.
Walkshäusl (2014) compares the low-volatility, low-beta, and minimum volatility strategies for the MSCI universe of developed and emerging markets between 2001 and 2011. For low-volatility and low-beta he follows Baker et al. (2011) and builds portfolios based on the historical five-year measure of the respective style. The minimum volatility portfolio is proxied by MSCI minimum volatility indices. Walkshäusl compares the performance of the quintile portfolios with the lowest betas/volatilities to that of the minimum volatility portfolio. Parts of his analysis are visualized in Fig. 2. He finds that though the strategies generate higher returns than the market benchmark, these differences are not statistically significant. Also, return differences between strategies are relatively small. Standard deviations of the three strategies are substantially lower than that of the market, and the reductions are statistically significant for all style portfolios. Again, differences between strategies tend to be small.

Bonds
Comparable to the pattern in equity markets, the literature finds that low-risk bonds tend to outperform in bond markets. Most of the literature defines specific factors for government bonds and corporate bonds separately. Because of this, the two asset classes will be examined separately in this section. In general, the most promising risk-adjusted returns in both asset classes can be earned with bonds that are characterized by short maturities and high credit ratings (Ilmanen et al. 2004).

Fig. 2 Based on Walkshäusl (2014). It shows CAPM and FF3 alphas for low-volatility, low-beta, and minimum volatility strategies in developed markets (DM), the European Union (EU), Japan (JP), the US, and emerging markets (EM). Strategies are based on the MSCI universe between 2001 and 2011. Low-volatility and low-beta returns are represented by the quintile of stocks that have the lowest respective measure each month. The minimum volatility strategy is proxied by MSCI minimum volatility indices

Government bonds

Pilotte and Sterbenz (2006) study Treynor and Sharpe ratios for US Treasury securities between 1959 and 1997. They find that both ratios are highest for short-term bills and decrease as maturity rises. As for equities, Frazzini and Pedersen (2014) use beta to determine the riskiness of a bond. They define beta as the sensitivity of the bond to an equally-weighted portfolio of all Treasuries used in the analysis and build seven beta-ranked portfolios. They find that between 1952 and 2012 low-beta Treasuries earn positive alphas that turn negative for their high-beta counterparts. Also, Sharpe ratios decrease from low-beta to high-beta portfolios, and a long-short portfolio of the lowest and highest portfolios delivers the highest significant Sharpe ratio of 0.81. It has to be noted that though they rank Treasuries based on beta, this is empirically equivalent to ranking on duration or maturity. Hence, they confirm the results of Pilotte and Sterbenz (2006). Brooks et al. (2018) take a similar route and employ effective duration as the measure of government bond risk. They sort bonds into three portfolio buckets based on duration across 13 developed markets. A long-short portfolio that buys (sells) the most (least) favorable bucket earns an average return of 1.04% p.a. The same methodology is used in a later analysis by Kothe et al. (2021), who report significant excess returns of 0.77% p.a. for this strategy. Focusing on 20 non-US developed bond markets, Durham (2016) confirms the return generation of low-risk government bond strategies. He uses the methodology of Frazzini and Pedersen (2014) but performs minor tweaks such as taking daily instead of monthly data for the portfolio construction. Over his sample period from 1962 to 2013 he reports an average excess return of 25 bps per month for the BAB long-short portfolio.

Few studies extend the analyzed assets beyond developed markets. Zaremba and Czapkiewicz (2017) perform a broad market study across 25 developed and emerging markets. They take modified duration as the relevant low-risk measure and build long-short portfolios after sorting bonds into tercile buckets. They report that this strategy earns significant average returns of 0.27% p.a. across all markets.

Corporate bonds
The methodologies used for corporate bonds are similar to those for government bonds. Derwall et al. (2009) find that short-maturity corporate bonds outperform their long-maturity counterparts in the US. Aussenegg et al. (2015) report the same finding for European corporate bond markets. Using the same methodology as in their government bond analysis, Frazzini and Pedersen (2014) build long-short portfolios for US corporate bonds based on their sensitivity to an equally-weighted portfolio of all corporate bonds used in the analysis. They report that alphas and Sharpe ratios of corporate bond portfolios increase for portfolios that hold more low-risk assets.
To enhance the robustness of low-risk corporate bond strategies, a strand of literature has developed that extends these strategies with fundamental measures of risk. This more comprehensive approach is used by Brooks et al. (2018), who focus on public companies in 13 developed markets, and Israel et al. (2018), who solely cover the US market. Similar to the strategies above, low duration is one criterion characterizing a low-risk security. Additionally, market leverage, computed as

market leverage = (book debt + minority interest + preferred stock − cash) / (net debt + market value of equity) (5)

and gross profitability, computed as

gross profitability = gross profit / assets (6)

are taken as measures to identify low-risk corporate bonds. In both studies, assets are broken down into quintiles according to the aggregated low-risk factor. From the least to the most favorable quintile, returns rise while volatility either decreases or remains at the same level. The best risk-return ratio is achieved with a long-short portfolio of the first and the last quintile. A more simplified approach that incorporates fundamental measures is employed by Houweling and van Zundert (2017), who use maturity and rating to define a low-risk portfolio for the US corporate bond market. They construct two asset pools, one for investment-grade and one for high-yield bonds, arguing that the assets in those pools differ substantially in terms of market participants and other market characteristics. Within both pools, they favor short-dated, high-rated bonds and create a long-short portfolio that buys the top 10% of the most favorable bonds and sells the bottom 10% of investible assets. In both markets, significant excess returns can be earned with this strategy.
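The two fundamental measures in Eqs. (5) and (6) reduce to simple ratios. In the sketch below, all balance-sheet inputs are hypothetical numbers, not data from the cited studies.

```python
def market_leverage(book_debt, minority_interest, preferred_stock, cash,
                    net_debt, market_equity):
    """Eq. (5): (book debt + minority interest + preferred stock - cash)
    divided by (net debt + market value of equity)."""
    return (book_debt + minority_interest + preferred_stock - cash) / (
        net_debt + market_equity)

def gross_profitability(gross_profit, assets):
    """Eq. (6): gross profit over total assets."""
    return gross_profit / assets

lev = market_leverage(400.0, 10.0, 0.0, 110.0, 300.0, 700.0)   # 300 / 1000 = 0.3
gp = gross_profitability(250.0, 1000.0)                        # 0.25
```

Lower leverage and higher profitability mark a bond issuer as low-risk; both ratios enter the aggregated low-risk factor alongside low duration.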
De Carvalho et al. (2014) test five risk metrics in developed investment-grade corporate bond markets that are denominated in USD, EUR, GBP, and JPY between 1997 and 2012. The risk metrics are duration-times-yield (or yield elasticity), modified duration, yield to maturity, duration-times-spread, and option-adjusted spread. Based on these measures they form quintile portfolios and find that duration-times-yield sorted portfolios show the biggest differences in Sharpe ratios between the lowest and the highest portfolio irrespective of the underlying currency. Also, alphas for low-risk portfolios are positive and turn negative for the corresponding high-risk portfolios for four out of the five risk measures used in the analysis.
Lastly, Chung et al. (2019) run a comprehensive analysis of low-risk investing in corporate bond markets and find that bonds with more exposure to aggregate market volatility earn lower returns, even when adjusted for bond characteristics such as ratings and maturities. In contrast to the equity literature, they also find that bonds with a higher IV over the past six months, calculated similarly to Eq. (4) but with different factors, have higher returns. However, they also document that bonds with high stock return volatility have lower expected returns.

Others
The low-risk anomaly is apparent in almost every market. This section presents only a small excerpt of the existing literature, primarily to show how widespread the outperformance of low-risk assets is across markets.
In addition to stocks and bonds Frazzini and Pedersen (2014) apply their BAB factor to credit indices, equity indices, commodities and foreign exchange. Following the same methodology as discussed in Sect. 3.1, they form portfolios sorted on beta and document that their long-short BAB portfolios generate positive excess returns. Frazzini and Pedersen (2022) further investigate the low-risk anomaly in options and levered ETFs. They find that the higher the leverage, the lower the risk-adjusted return of such products. They further demonstrate that BAB factor portfolios earn large and statistically significant abnormal returns with Sharpe ratios above one. Similarly, Cao and Han (2013) find that returns of delta-hedged options decrease with an increase in the IV of the underlying stock.
The low-risk anomaly does not only hold for investments of all kinds but also for investment professionals. Jordan and Riley (2015) document that the low-risk anomaly is also present in the cross-section of mutual fund returns, meaning that funds with low return volatility tend to outperform their peers.
In more fragmented financial markets, the pattern can also be observed. Eraker and Ready (2015) find that very risky over-the-counter stocks have very poor average returns. Similarly, Moskowitz and Vissing-Jørgensen (2002) find that despite being more risky, private equity stocks do not deliver higher returns than publicly traded stocks, making them unattractive from a risk-return standpoint. Relatedly, Adhami et al. (2023) observe an inverse relationship between risk and return in crowdlending markets.
Lastly, a very comprehensive study by Falkenstein (2010) detects the low-risk effect in 20 asset classes. This study does not solely focus on common financial assets such as equities and bonds but also incorporates more exotic fields such as movie production, lotteries, and sports bets.

Replicability and robustness
In the presence of multiple factors that claim to capture the low-risk anomaly, a discussion has developed about whether it makes sense to claim significance for factors that are almost identical. Harvey (2017) addresses this issue and states that too much attention is paid to p-values. He claims that since significant results are more likely to be published, there is a strong incentive for p-hacking and data manipulation in factor research. Harvey et al. (2016) share this view and argue that t-statistic hurdles to assess significance should be much higher for factors. They show that there is a publication bias, meaning that new factors are more likely to be published than replication studies of existing ones. Furthermore, they demonstrate that the performance of published factors tends to degrade after publication, which, as they argue, is an indicator of data mining and p-hacking in the factor literature. In a very comprehensive study, Jensen et al. (2022) address this critique. They replicate 153 factors, clustered into 13 themes, in 93 countries and assess their replicability. The theme "Low risk" is found to have a replication rate of 100% in the US, developed ex-US, and emerging equity markets. Other studies deliver contrasting results. One study replicates 452 anomalies across different time horizons and shows that for low-risk equity factors, low-risk stocks outperform their high-risk peers but in most settings do not pass the 5% significance hurdle (Hou et al. 2020). Pyun (2021) similarly finds that the IV premium significantly decreased after the seminal paper by Ang et al. (2006) was published. Though all of these studies find a premium, the contrasting results regarding its significance indicate that a critical view of the anomaly may be worthwhile.
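The significance debate above ultimately revolves around the t-statistic of a factor's mean return. A minimal sketch, using a simulated monthly return series and comparing the conventional 1.96 cutoff with the stricter hurdle of roughly 3.0 advocated by Harvey et al. (2016); the return parameters are hypothetical.

```python
import numpy as np

def factor_tstat(returns: np.ndarray) -> float:
    """t-statistic of the mean return: mean / (std / sqrt(n))."""
    n = len(returns)
    return returns.mean() / (returns.std(ddof=1) / np.sqrt(n))

rng = np.random.default_rng(7)
factor = rng.normal(0.004, 0.03, size=360)   # 30 years of hypothetical monthly returns
t = factor_tstat(factor)
passes_classic = t > 1.96                    # conventional 5% hurdle
passes_strict = t > 3.0                      # stricter hurdle for new factors
```

A factor that clears the classic hurdle but not the strict one is exactly the kind of result the multiple-testing critique targets.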
Regarding robustness, many of the studies that examine the low-risk anomaly perform robustness tests with respect to other common risk factors and macroeconomic variables. For example, Frazzini and Pedersen (2014) calculate alphas of their BAB factor relative to the FF3 factors and momentum and find that the BAB alphas remain significant. Another example is the study by Ang et al. (2009), who control for the FF3 factors, momentum, volume, and liquidity. They also find that their excess returns cannot be explained by the control variables.
Based on the findings of Baker et al. (2011), who report that the low-risk effect is stronger for smaller stocks than for larger stocks, low-risk factors have been criticized because many anomalies are known to be concentrated in small-cap stocks and are therefore difficult to exploit. For example, Novy-Marx and Velikov (2022) criticize the methodology behind the BAB factor because large parts of its premium are driven by short selling highly illiquid micro-caps. Other scholars do not support those claims and show, for example, that the IV factor is especially robust when penny stocks are excluded (Chen et al. 2020). Relatedly, Auer and Schuhmacher (2015) show that the low-risk anomaly is strongly present among the largest, most liquid US stocks.
Other studies deliver more support for low-risk factors and show that their excess returns do not stem from industry or country bets (Baker et al. 2014). Exposure to interest rate risk as a possible explanation was examined by De Franco et al. (2017), who conclude that, although low-risk stocks are significantly exposed to interest rate risk, this only explains a very small part of their outperformance.
However, there are also more critical voices in the literature. Cederburg and O'Doherty (2016) investigate the performance of beta-sorted portfolios. They find that properly accounting for time-series variation in portfolio risk explains the low-risk anomaly. In other words, taking into account that beta is not constant over time makes the anomaly vanish. In their analysis they use a conditional CAPM, following Boguth et al. (2011), within which beta is modeled as a function of lagged state variables, a procedure also known as the standard instrumental variable method. Another critical study attributes the IV effect to liquidity. It shows that the IV effect vanishes when quote-midpoint returns are used and attributes the anomaly to bounces in trade prices (Han et al. 2015).

Proposed causes
Even though a lot of empirical evidence supports the existence of a low-risk anomaly, no consensus about the rationales behind it has been reached. There are rational and irrational theories for why low-risk factors generate excess returns. First, I will cover the rational explanations, namely coskewness risk, exposure to other factors, and investor constraints, before elaborating on studies that deliver behavioral explanations. Though I attempt to clearly separate the proposed causes, note that there are strong interrelations and the categorization here only serves to provide some structure.

Exposure to other factors
In the presence of multiple factors that attempt to measure the low-risk anomaly, there are a number of studies that investigate the interrelations of different low-risk factors. In his comparison of low-risk strategies, Walkshäusl (2014) regresses the returns of minimum variance, low-beta, and minimum total return volatility strategies in different regions on the returns of the same strategy in developed markets. He finds that all strategies in developed markets share a significant common return component that is not shared with emerging markets. Asness et al. (2019) dig deeper into this discovery and directly disentangle the two driving components of the BAB strategy, volatility and correlation. They find that both volatility and correlation contribute equally to the performance of the BAB factor. In a similar fashion, Chen and Petkova (2012) decompose the IV factor into exposure to market variance and exposure to market correlation. They find that only variance is priced and has a negative premium. Further, they show that the differences in performance between high (low) IV factor portfolios have positive (negative) loadings with respect to innovations in average stock variance. Because of the negative premium, such portfolios earn lower (higher) expected returns. In his study, Scherer (2011) investigates the performance of minimum-variance portfolios. He finds that when controlling for low-beta, residual return volatility, and market return factors, 79% of the variance in returns of minimum-variance portfolios can be explained. He infers that the outperformance of minimum-variance portfolios is subsumed by other low-risk factors.
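The decomposition exploited by Asness et al. (2019) rests on the identity beta_i = rho_{i,m} * sigma_i / sigma_m: market beta is the product of a correlation component and a relative-volatility component. A minimal sketch with made-up return series (all data below are illustrative, not from any of the cited studies):

```python
from statistics import mean, stdev

# Hypothetical monthly returns for the market and one stock (illustrative data)
r_mkt = [0.012, -0.020, 0.015, 0.030, -0.010, 0.005, -0.025, 0.020]
r_stk = [0.020, -0.035, 0.022, 0.050, -0.012, 0.010, -0.040, 0.030]

n = len(r_mkt)
mm, ms = mean(r_mkt), mean(r_stk)
cov = sum((x - mm) * (y - ms) for x, y in zip(r_mkt, r_stk)) / (n - 1)

beta_direct = cov / stdev(r_mkt) ** 2              # beta = Cov(r_i, r_m) / Var(r_m)
corr = cov / (stdev(r_mkt) * stdev(r_stk))         # correlation component
beta_decomp = corr * stdev(r_stk) / stdev(r_mkt)   # beta = rho * sigma_i / sigma_m

# The two expressions are algebraically identical
assert abs(beta_direct - beta_decomp) < 1e-12
print(round(beta_direct, 3), round(corr, 3))
```

Sorting stocks on the correlation term and the volatility ratio separately is what allows the two drivers of a BAB-style strategy to be disentangled.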
Beyond low-risk factors, there are also other studies suggesting that low-risk factors have exposure to other factors and thus generate outperformance. In his influential paper, Fu (2009) finds that the high returns of high-volatility stocks are largely explainable by the short-term return reversal of small-cap, high-volatility stocks. In other words, stocks with high risk have high contemporaneous returns that tend to reverse in the following month. This claim is supported by the findings of Huang et al. (2010), who report that return reversals can explain the low-risk anomaly. However, other studies document that the results in Fu (2009) are prone to look-ahead bias and thus not useful to overturn the findings of low-risk premia (Park et al. 2020; Fink et al. 2012; Guo et al. 2014). Size is another common factor that is claimed to render the low-risk anomaly insignificant. Riskier stocks tend to be small while low-risk stocks are most likely large caps. Creating buckets with equal market capitalization share in the portfolio sorts based on IV causes all buckets to generate the same returns, thus eliminating the low-risk effect (Bali and Cakici 2008). These findings are further confirmed for other low-risk strategies by Hou et al. (2020) and Pyun (2021). Novy-Marx (2016) challenges low-risk strategies in an integrated approach by controlling for size, profitability, and value. In line with the studies mentioned in Sect. 3.4, when controlling for the FF3 factors, he confirms that low-risk strategies generate alpha. However, he shows that this outperformance is generated because low-risk strategies short unprofitable small growth firms. Hence, the low-risk anomaly can be explained by combining value, size, and profitability factors. Fama and French (2016) further support this finding and argue that low-beta returns can be explained by their profitability and investment factors.
This finding is also bolstered by the theory of Johnson (2004), who shows that firms' leverage ratios (usually a measure of profitability) can explain the low-risk anomaly. Generally, when comparing common profitability factors [e.g. Asness et al. (2018)] to low-risk factors, some commonalities can be identified, especially in corporate bond markets. For example, gross profitability and leverage, both profitability-related measures, are used as variables to identify low-risk corporate bonds. The inclusion of such fundamental values has also led to a discussion of whether profitability factors are related to value factors (Novy-Marx 2013).

Skewness risk
Another popular explanation for the low-risk anomaly is that it compensates investors for skewness risk. There are two sides of this theory that are going to be covered separately here. While this section focuses on the rational explanations of the skewness risk premium, the behavioral side of the premium is covered later in Sect. 4.4 under "Preference for Lotteries".
The reasoning for attributing the low-risk anomaly to skewness is straightforward. Most asset pricing models do not reach beyond the second moment of returns (e.g. covariance in the CAPM) and hence skewness is not included in most pricing kernels. The fact that the skewness of returns can change the expected value of asset returns is thus ignored, and traditional pricing models diverge from empirically observed prices. The studies of Rubinstein (1973) and Kraus and Litzenberger (1976) suggest that the empirical failure of the CAPM may be attributable to ignoring the effect of skewness in asset returns. More precisely, their findings indicate that investors demand compensation for accepting negatively skewed returns. Harvey and Siddique (2000) build upon this idea and develop an asset pricing model that incorporates skewness. They extend the CAPM by an additional term, which makes their pricing function

r_{i,t} = α_i + β_i r_{m,t} + β_{i,s} r²_{m,t} + ε_{i,t},

where the additional term β_{i,s} represents the sensitivity of stock i to the squared market return (r²_m) and thus measures the stock's coskewness with the market. Harvey and Siddique (2000) test their model empirically and find that the model indeed helps to explain the cross-section of equity returns. Schneider et al. (2020) take this enhanced asset pricing model and directly relate it to low-risk anomalies. First, they show that the residual returns of long-short portfolios based on beta and IV are associated with negative skewness. To further attribute the performance of low-risk strategies to negative skewness, they build factors based on ex-ante skewness. In doing so, they extract implied skewness from option prices following Schneider and Trojani (2015) and build four implied-skewness factors that earn positive excess returns, with which they control for skew exposure in low-risk strategies.
Their results show that controlling for each of their four factors significantly reduces negative coskewness of residual returns and largely eliminates alphas of low-risk strategies built on beta and IV. The reductions of alpha are so pronounced that the remaining alpha of the low-risk strategies becomes statistically insignificant. Boyer et al. (2010) also investigate the relation of implied skewness and low-risk strategies. They follow Chen et al. (2001) using firm-level variables to predict idiosyncratic skewness of stocks. The results of their analysis indicate that idiosyncratic volatility is a strong predictor of idiosyncratic skewness. When they control for their skew measure in IV factors they find that excess returns become much lower and statistically insignificant. Relatedly, Bali et al. (2020) argue that idiosyncratic skewness of returns can be used to measure firms' growth options. With their skewness-based measure they reason that investors prefer stocks with a large number of embedded growth opportunities. They show that when controlling for their measure of firms' growth options the IV anomaly can be strongly reduced. Their line of reasoning is supported by other studies that also find that the low-risk anomaly vanishes when controlling for growth options (Bhamra and Shim 2017;Barinov and Chabakauri 2021).
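The two-regressor form used by Harvey and Siddique (2000), a regression of a stock's return on the market return and its square, can be estimated by ordinary least squares. The sketch below is illustrative only: it uses synthetic, noise-free data with made-up coefficients, so the solver recovers the chosen beta and coskewness loading exactly.

```python
# Estimate r_i = alpha + beta * r_m + beta_s * r_m^2 via the OLS normal equations.
# Synthetic, noise-free data with made-up coefficients (for illustration only).

r_m = [0.010, -0.020, 0.015, 0.030, -0.010, 0.005, -0.025, 0.020]
alpha_true, beta_true, beta_s_true = 0.001, 0.9, -2.0
r_i = [alpha_true + beta_true * x + beta_s_true * x**2 for x in r_m]

X = [[1.0, x, x**2] for x in r_m]  # design matrix: intercept, r_m, r_m^2
k = 3

# Normal equations (X'X) b = X'y, solved by Gauss-Jordan elimination with pivoting
XtX = [[sum(row[a] * row[c] for row in X) for c in range(k)] for a in range(k)]
Xty = [sum(row[a] * y for row, y in zip(X, r_i)) for a in range(k)]
A = [XtX[a] + [Xty[a]] for a in range(k)]  # augmented matrix

for col in range(k):
    piv = max(range(col, k), key=lambda r: abs(A[r][col]))
    A[col], A[piv] = A[piv], A[col]
    for r in range(k):
        if r != col:
            f = A[r][col] / A[col][col]
            A[r] = [a - f * c for a, c in zip(A[r], A[col])]
b = [A[r][k] / A[r][r] for r in range(k)]

print([round(c, 4) for c in b])  # recovers [alpha, beta, beta_s]
```

A negative estimated β_{i,s} flags a stock whose returns co-move with squared market moves in the unfavorable direction, i.e. negative coskewness, for which investors demand compensation.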

Investor constraints
The literature has identified multiple investor constraints that are thought to cause the low-risk anomaly. I split those causes into borrowing/leverage constraints and short-selling constraints.

Borrowing/leverage constraints
The reasoning behind this theory is straightforward. If agents are constrained in their leverage, the points on the capital market line that lie above the tangency portfolio are out of reach for those agents. The only way they can earn higher returns is to buy riskier assets that are located further up on the efficient frontier. These constrained agents bid up the prices of riskier assets and thereby cause the low-risk anomaly. This theory is illustrated in Fig. 3, which follows Frazzini and Pedersen (2014). It shows the efficient frontier of an investment universe (grey line) and the respective capital market lines for investors who can or cannot use leverage (blue lines). An unconstrained investor can reach investments on the capital market line above the tangency portfolio by using leverage (dotted blue line). An investor who cannot use leverage instead falls back on the efficient frontier above the tangency portfolio (dashed blue line); above the tangency portfolio, his capital market line is thus less steep than that of the unconstrained investor. The red utility curves demonstrate that the unconstrained investor (dashed red line) can invest in more favorable portfolios above the tangency portfolio than the investor facing leverage constraints (solid red line). (Colour figure online) Contemporaneous to the first empirical findings that detected the low-risk anomaly, Black (1972) and Brennan (1971) showed in their theoretical works that a restriction on investor borrowing reduces the slope of the capital market line. This extension of the CAPM brings it closer to the empirical findings of that time.
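The flattening can be stated formally. In Black's (1972) zero-beta CAPM, the risk-free rate in the pricing equation is replaced by the expected return of a zero-beta portfolio, E(r_z):

E(r_i) = E(r_z) + β_i [E(r_m) − E(r_z)], with E(r_z) > r_f under borrowing constraints.

Because constrained borrowing pushes E(r_z) above the risk-free rate, the slope of the security market line, E(r_m) − E(r_z), is flatter than the standard CAPM slope E(r_m) − r_f, so low-beta assets earn more, and high-beta assets less, than the standard CAPM predicts.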
When Black (1993) revisited the low-risk anomaly, he also claimed that borrowing restrictions enforced by law (like margin rules, bankruptcy laws, and tax rules) are the reason why the anomaly exists.
Frazzini and Pedersen (2014) lay out convincing arguments for leverage constraints being the cause of the low-risk anomaly. They predict that if a funding liquidity shock occurs, required returns on their BAB factor rise, leading to losses in the current BAB portfolio. This is expected because agents may need to de-lever their current BAB portfolios or stretch further to buy high-beta assets. Furthermore, they reason that when a liquidity shock occurs, all prices drop simultaneously and betas converge towards one. They are able to bolster their theory empirically, as they show that their BAB factor performs worse when funding constraints are tighter and also confirm that betas converge towards one when funding conditions are more uncertain. In doing so, they take the TED spread and its volatility as proxies for funding conditions and funding uncertainty. In addition to the empirical support of their theory, they further show that investors who are constrained in leverage, such as mutual funds (because they have to hold cash to meet redemptions), hold portfolios with betas above one, while unconstrained investors like leveraged buyout funds hold assets with betas below one. Jylhä (2018) adds to the findings of Frazzini and Pedersen (2014) and further confirms their theory empirically. He measures investors' leverage constraints using the minimum margin level, set by the Federal Reserve, required when purchasing stock on credit. Consistent with the theory, he finds that when margin requirements increase, the security market line significantly flattens, making the BAB factor more profitable. Boguth and Simutin (2018) take a different route to the problem. They invert the argument and take the average market beta of mutual funds (which are assumed to face leverage restrictions) as a proxy for investors' leverage constraints.
They show that this proxy predicts returns of the BAB portfolio, strengthening the argument for a connection between leverage constraints and abnormal low-risk returns. Adrian et al. (2014) also approach the explanation from another angle, as they construct an intermediary stochastic discount factor based on theory covering shocks to the leverage of security brokers. They find that this leverage factor correlates strongly with BAB portfolios and explains the cross-section of returns sorted on betas. In their decomposition of the BAB factor, Asness et al. (2019) also test the leverage constraint theory. They show that the BAB factor can be predicted by margin debt held by customers at NYSE member organizations relative to the market capitalization of those NYSE firms, a measure of the severity of leverage constraints. In their study focusing on leveraged options and ETFs, Frazzini and Pedersen (2022) show that BAB factors also work in those products. Their findings relate to the theory that leverage constraints cause the anomaly because they show that investors are willing to pay a premium for securities with embedded leverage, and intermediaries who meet this demand need to be compensated for their costs and risk. Lastly, Asness et al. (2012) lay out the theory in a portfolio allocation problem between bonds and stocks. They also reason that leverage constraints lead investors to allocate their portfolios suboptimally from a theoretical perspective.

Short-selling constraints
The theory behind the reasoning why short-selling constraints cause the low-risk anomaly is closely related to behavioral reasons that are covered hereafter. In contrast to the leverage constraint theory, it does not explain the low-risk anomaly as a whole but rather gives insights into why risky stocks are overpriced.
Generally, this literature assumes that there is divergence of opinion about the fair price of an asset. The scarcity of short-selling can thus cause a price to rise above the aggregate valuation of investors. Pessimistic investors are unable to express their view on the development of the asset in trades, and thus prices are generally set by optimists. This effect of short-selling constraints on prices is strongest when differences of opinion are high. Hence, there should be a negative correlation between risk-adjusted returns and the dispersion of beliefs. This theoretical outline of the effect of short-selling constraints was pioneered by Miller (1977). Similar asset pricing predictions are outlined in other theoretical papers such as those by Duffie et al. (2002), Morris (1996), Chen et al. (2002) and Hong and Sraer (2016).
The claims made by the theories above have been both challenged and confirmed by empirical studies. Diether et al. (2002) use the standard deviation of analysts' forecasts about future earnings to proxy for divergence of opinion among investors. They show that stocks with a wide range of opinions earn lower returns than otherwise similar stocks. These results are confirmed by other studies that follow their methodology to identify controversial stocks (Hong and Sraer 2016; Gebhardt et al. 2001). Figlewski (1981) takes a different route and sorts constituents of the S&P 500 index into decile portfolios based on short sale interest relative to the total number of outstanding shares for each stock. According to his reasoning, high short interest indicates high divergence of opinion. He finds that with increasing short interest, the performance of the decile portfolios gets worse. Thus, he confirms that stocks with more controversial information are overpriced and consequently earn lower returns. Chen et al. (2002) criticize the assumption that short interest can be used to proxy divergence of opinion and instead take breadth of ownership as a proxy. They define breadth of ownership as the number of investors who have long positions in a particular stock. Low breadth means high divergence of opinion because many investors do not express their pessimistic view due to short-selling constraints. Using mutual fund data, they find support for their hypothesis and show that stocks with the lowest breadth of ownership underperform their peers. Danielsen and Sorescu (2001) tackle the problem from a different angle. They argue that with the introduction of options, short-selling constraints on the underlying stock are mitigated. In their empirical analysis, they show that option introductions are accompanied by a contemporaneous increase in short interest of that stock and a price decline.
This finding also supports the hypothesis that short-selling constraints cause overpricing of assets.
A narrower study is provided by Stambaugh et al. (2015), who investigate the IV factor and attribute the low-risk anomaly to arbitrage risk. According to them, stocks with high IV have the highest arbitrage risk. At a given IV level, this effect should be more pronounced in overpriced stocks because, due to short-selling constraints, more investors could arbitrage away mispricing in underpriced stocks. Stambaugh et al. (2015) investigate their claims and measure a stock's mispricing via 11 return anomalies that survive adjustment to the FF3 model. Given this proxy for mispricing, they show that, as predicted, the low-risk anomaly is significantly negative (positive) among the most overpriced (underpriced) stocks, and the negative effect among the overpriced stocks is significantly stronger. Further, they show that the negative effect among overpriced stocks is stronger for stocks less easily shorted, as proxied by low institutional ownership. Liu et al. (2018) expand the approach of Stambaugh et al. (2015) and attempt to explain the low-risk anomaly based on beta. They show that the anomaly is only present in the most overpriced segment of stocks. Further, they show that when controlling for IV, the anomaly vanishes.

Investor behavior
In the area of finance there is a large strand of literature focusing on investors' behavior. Numerous behavioral biases can be related to irrational pricing and can thus be argued to cause the low-risk anomaly. Within this section, I will present two of the most commonly referred-to biases, which are overconfidence and investors' preference for lotteries. Regardless, these two behavioral biases are certainly not an exhaustive list of behavioral explanations for the low-risk anomaly.

Overconfidence
Overconfidence is a common phenomenon that is by no means confined to finance [e.g. Fischhoff et al. (1977), Kahneman and Tversky (1973)]. Generally, overconfidence refers to the fact that most individuals think they can perform a certain task better than average. In finance this implies that investors put too much emphasis on their predictions. Furthermore, they are likely to be active in more volatile stocks because in this segment the highest returns can be earned and an active investor can best demonstrate his skill. Both of the above lead to an overpricing of risky stocks (Blitz et al. 2014). Overconfidence is also closely related to short-selling constraints. Similarly, the phenomenon is most pronounced when the extent of disagreement among investors is high. As argued before, this is especially the case for high-volatility assets (Cornell 2009; Baker et al. 2011). The claims are bolstered empirically in various studies. The findings are that, on average, self-directed individual investors trade suboptimally, lowering their expected returns through excessive trading (Odean 1999; Barber and Odean 2000, 2002). A related area of study focuses on attention-grabbing stocks. This literature argues that investors are likely to buy stocks that grab their attention. In a sense, they overestimate their knowledge about such stocks, which is why this literature can be related to overconfidence. The study of Barber and Odean (2008) focuses on attention-grabbing stocks and argues that high-risk stocks are more likely to grab attention (e.g. because of extreme price movements or news coverage). This causes a strong upward pressure on the prices of such stocks, which flattens the risk-return relationship postulated by the CAPM. The claim that investors are likely to buy stocks that are more attention-grabbing is supported by the findings of Grullon et al. (2004).
They document that firms with high advertising spending have a larger number of individual and institutional investors.
Viewing investor overconfidence more broadly, it can also be interpreted as investor sentiment. Baker and Wurgler (2006) measure investor sentiment by aggregating six proxies (e.g. share turnover) that they orthogonalize to several macroeconomic conditions. They find that after times of low sentiment, small and risky stocks tend to earn excessive returns. This indicates that as sentiment improves, investors bid up the prices of risky stocks, which is in line with the theories outlined above. Further, Stambaugh et al. (2015) use the methodology of Baker and Wurgler (2006) to investigate the IV factor. They show that the negative (positive) returns of overpriced high-volatility (underpriced low-volatility) stocks are significantly stronger when investor sentiment is high (low). In other words, strong investor sentiment increases the magnitude of the low-risk anomaly.

Preference for lotteries
While Sect. 4.2 already showed that coskewness risk is an explanatory variable for the low-risk anomaly, there is also a strand of literature that focuses more on the behavioral aspect of investors' preference for skewness. Kumar (2009) documents that some individual investors prefer stocks with lottery-like payoffs. Such stocks are characterized by a low price with high volatility and skewness. In other words, they offer a small chance of high returns in the short term at low cost. Barberis and Huang (2008) model this preference with the cumulative prospect theory of Tversky and Kahneman (1992). Cumulative prospect theory is a modified version of the prospect theory of Kahneman and Tversky (1979), within which individuals evaluate risks using a utility function that is defined over gains and losses. Barberis and Huang (2008) demonstrate that in their model a lottery-like security with positively skewed returns can be overpriced by investors, thus earning negative average excess returns. They attribute this to investors' preference for lotteries and argue that rational investors cannot arbitrage away this irrationality. Brunnermeier et al. (2007) also provide a theoretical model that captures investors' preference for lottery-like stocks. It builds on the model of Brunnermeier and Parker (2005), within which investors care about expected future utility flows and are happier if they overestimate the probabilities of the states of the world in which their investments pay off well. Hence, this theory is also very closely related to the above-discussed issue of overconfidence. With their model, Brunnermeier et al. (2007) show that in equilibrium investors make suboptimal decisions and favor lottery-like stocks. Kumar et al. (2011) confirm the existence of lottery preferences by comparing investors' holdings of lottery-like stocks across geographic regions where different attitudes towards gambling prevail due to religious differences.
Han and Kumar (2013) provide further empirical support for the lottery-preference theory. They find that stocks with a high retail trade proportion have lottery-like features and earn low alphas. Bali et al. (2011) directly test the proposal that lottery preferences cause low-risk anomalies. They create decile stock portfolios based on stocks' maximum daily return over the past month. They find that stocks with the highest past maximum daily returns significantly underperform those with the lowest past maximum daily returns. Though similar in construction, they show that when controlling for their factor that selects stocks based on past maximum daily returns, the low-risk anomaly measured by IV diminishes. In summary, they attribute their finding to investors' willingness to overpay for stocks that may exhibit extreme positive returns. Bali et al. (2017) use the same framework to explain the return anomaly of stocks sorted on beta. In their analysis they conclude that the beta anomaly is a manifestation of the effect of lottery demand on stock returns. More empirical support for this claim is gathered by Asness et al. (2019), who decompose the BAB factor into its correlation and variance components. They proxy lottery demand with a long-short factor portfolio built on maximum return divided by volatility. In doing so, they confirm that parts of the low-risk factor returns can be attributed to lottery demand. The preference for lottery-like payoffs is also documented by Moskowitz and Vasudevan (2022) in sports betting markets. The authors argue that since betting markets are not prone to systematic risk, only the behavioral preference for lottery-like payoffs can explain the existence of the low-risk anomaly in both betting and financial markets. A more comprehensive overview of lottery preferences with more examples is provided by Ilmanen (2012).
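The lottery-demand measure of Bali et al. (2011) is simple to compute: for each stock, take the maximum daily return over the previous month and sort stocks on it. A minimal sketch; the ticker names and return series below are made up for illustration:

```python
# MAX measure in the spirit of Bali et al. (2011): maximum daily return
# over the past month. All tickers and returns are hypothetical.
daily_returns = {
    "LOTTERY": [-0.02, 0.01, 0.12, -0.03, 0.00, 0.02],      # one extreme up-day
    "STEADY":  [0.003, 0.002, -0.001, 0.004, 0.001, 0.002],  # low, stable returns
    "MIXED":   [0.01, -0.02, 0.03, 0.00, -0.01, 0.02],
}

max_measure = {tic: max(rets) for tic, rets in daily_returns.items()}

# Sort descending: the top of the ranking proxies for lottery-like stocks,
# which Bali et al. (2011) find to subsequently underperform.
ranking = sorted(max_measure, key=max_measure.get, reverse=True)
print(ranking)
```

In a full replication, these rankings would be rebuilt monthly and turned into decile portfolios whose subsequent returns are compared.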

Agency problems
As with behavioral biases, the literature on agency problems is relatively wide and many issues can be related to the low-risk anomaly. Again, I will only present the most commonly referred-to explanations: benchmarks and option-like reward systems.

Benchmarks
Benchmarks are argued to cause the low-risk anomaly because, in the presence of a benchmark, relative performance becomes more important than absolute performance. Sensoy (2009) reports that 94.6% of US mutual funds are benchmarked to some popular US index. In an effort to beat benchmarks, fund managers tend to increase their exposure to stocks with a beta above one because most of them cannot take on leverage directly. In doing so, they can beat their benchmark while aiming to maintain relatively low tracking errors. As a result, riskier stocks are overpriced and earn lower returns (Christoffersen and Simutin 2017; Baker et al. 2011). Cornell (2009) develops a model that incorporates this idea. He finds that, in particular, a benchmark makes institutional investment managers less likely to exploit the low-volatility anomaly. Karceski (2002) takes the opposite view on the problem and focuses on the mutual fund investor. He shows that investors evaluate the performance of mutual funds cross-sectionally and care more about the outperformance of funds in bull markets than about their underperformance in bear markets. As a result, fund managers care most about outperforming their peers in bull markets. This can most easily be done by increasing the risk of the fund's investments, causing overpricing of risky assets.
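The mechanism can be illustrated with a stylized calculation: a fully invested manager who cannot lever can still raise the expected return relative to the benchmark by tilting toward high-beta stocks, since under the CAPM the expected active return is (β_p − 1) times the market risk premium. All numbers below are hypothetical:

```python
# Stylized benchmark-relative calculation (all numbers are made up).
weights = [0.5, 0.3, 0.2]   # fully invested portfolio, no leverage
betas   = [1.3, 1.2, 0.9]   # tilt toward high-beta stocks
mkt_premium = 0.05          # assumed expected market excess return

beta_p = sum(w * b for w, b in zip(weights, betas))   # portfolio beta > 1
active_return = (beta_p - 1.0) * mkt_premium          # CAPM-expected return over benchmark

assert abs(sum(weights) - 1.0) < 1e-12  # the tilt requires no borrowing
print(round(beta_p, 2), round(active_return, 4))
```

Of course, this "free" expected outperformance is exactly the bidding-up mechanism that, in aggregate, depresses the subsequent returns of high-beta stocks.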

Option-like reward system
Another agency issue that incentivizes mutual fund managers to reach for riskier stocks is their reward system. Baker et al. (2011) argue that because most fund managers' remuneration is directly linked to the fund's performance, they have an incentive to increase risk. Hsu et al. (2013) make a similar argument for analysts. They state that analysts inflate earnings forecasts more aggressively for volatile stocks, in part because the inflation is more difficult for investors to detect. They do so because, just like fund managers, they have an incentive to stand out: to advance their careers, analysts have to identify the highest-performing stocks and outperform their peers.
Evidence for these claims can be found in empirical studies. It is documented that mutual funds are tilted toward smaller stocks with higher volatility and average betas above one. In other words, they have consistently negative exposure toward the low-risk anomaly (Sias 1996; Frazzini and Pedersen 2014; Beveratos et al. 2017; Ang et al. 2017). Relatedly, Agarwal et al. (2022) find that smaller, younger mutual funds with poor recent performance own more risky stocks, arguably to attract more capital, which ultimately increases fee earnings and thus manager remuneration. Furthermore, a lab-in-the-field experiment by Kirchler et al. (2018) with a large group of investment professionals reveals that ranking and tournament incentives (which are typically present in the mutual fund industry) drive the risk-taking of participants.

Conclusion
Before concluding, it makes sense to recapitulate the claims made by Harvey et al. (2016) and Harvey (2017). From those, the question arises whether the low-risk anomaly can be classified as an investment factor or may actually be a false positive, i.e. a non-existent factor that is falsely believed to be relevant. Though concerns about the ever-growing factor zoo are rightfully raised, in light of the above review it is rather unlikely that the low-risk anomaly is a false positive.
First, unlike other proposed risk factors that may be exposed to p-hacking, the low-risk anomaly did not evolve from a search for potential alpha-generating factors. Instead, it was discovered by research that performed empirical tests of the predictions of the CAPM. The development of outperforming investment strategies was not the incentive of these early studies, making the storyline of the low-risk anomaly more convincing. Second, as outlined in Sect. 3, there are multiple measures for low-risk factors that were tested across different asset classes, samples, time horizons and markets. The critiques of publication bias and performance degradation are rightfully raised but demand further investigation, as results differ among studies. Third, many studies show that the effect is robust to widely accepted investment factors and other macroeconomic variables that drive asset returns. Fourth, as outlined in Sect. 4, there are various clear economic and behavioral rationales for the empirical findings, most of which are likely to persist in the future. Overall, while it is hard to deny the low-risk anomaly as an investment factor, it can still be questioned whether all its implementation methodologies have a raison d'être. The presented literature shows that most of them have exposure to the same risks and are almost indistinguishable.
Though the proposed explanations strengthen the argument that the low-risk anomaly can be interpreted as a risk factor, their large number raises the question which of them is the actual cause of the low-risk effect. While there are studies that attempt to disentangle the effect into its different components (Asness et al. 2019), more such work is required to identify the primary causes with certainty. The main problem with this task is that many proposed causes are interrelated and may result in the same phenomenon. In their literature review, Blitz et al. (2014) suggest promising future research directions and methods that would help to further clarify the current root-cause problem.
Overall, it can be concluded that low-risk factors in asset returns are a widely researched area of finance. Unlike for other factors, various implementation methodologies and causes have been identified by the current literature. This paper provides a structured overview of current findings, focusing on the models and methods employed. It shows that in equity markets the factor is predominantly measured via market beta, idiosyncratic volatility, total return volatility or the minimum-variance portfolio. In bond markets, it is implemented by filtering for low-maturity and creditworthy bonds. The proposed causes for the anomaly include exposure to other factors, coskewness risk, investor constraints, behavioral biases, and agency problems. The low-risk anomaly as such appears to be quite robust given its empirical evidence in various markets, despite some critical voices regarding its replication and independence from other investment factors. Regarding its causes, however, it still has to be validated which of the proposed reasons prevails. Future research in this direction faces the intricate task of disentangling the different roots of the problem but could help to further the understanding of what ultimately causes the low-risk anomaly.