Table 3 presents the downside beta (systematic downside risk) and unsystematic downside risk of PSs. The downside beta for the small size group was 1.13, indicating that when the market return falls by 1%, this group falls by 1.13%, magnifying the market's downside swings by 13% relative to the risk-free rate. Conversely, the downside betas for the Medium and Large size groups were 0.93 and 0.96 respectively; in other words, on average both groups fall by only about 0.95% for a 1% fall in the market. Another important observation is that there is no obvious relationship between downside beta and size; moreover, the downside betas of the various size groups did not vary noticeably.
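A downside beta of this kind can be estimated, in the spirit of Ang, Chen and Xing (2006), as the covariance of the asset's return with the market conditional on the market falling below a threshold, divided by the conditional market variance. A minimal sketch; the zero threshold and the return series are illustrative assumptions, not the data behind Table 3:

```python
def downside_beta(asset, market, threshold=0.0):
    """cov(r_i, r_m | r_m < threshold) / var(r_m | r_m < threshold)."""
    pairs = [(a, m) for a, m in zip(asset, market) if m < threshold]
    if len(pairs) < 2:
        raise ValueError("not enough downside observations")
    a_mean = sum(a for a, _ in pairs) / len(pairs)
    m_mean = sum(m for _, m in pairs) / len(pairs)
    cov = sum((a - a_mean) * (m - m_mean) for a, m in pairs) / len(pairs)
    var = sum((m - m_mean) ** 2 for _, m in pairs) / len(pairs)
    return cov / var

# Hypothetical returns (in %): an asset that overreacts on the downside.
asset = [-3.0, -1.0, 1.0, 2.0]
market = [-2.0, -1.0, 1.0, 2.0]
print(downside_beta(asset, market))  # 2.0 on this toy sample
```

A value above 1 means the asset amplifies market declines, exactly the interpretation given to the small size group above.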
Existing explanations of the momentum effect are largely behavioral in nature and use models with imperfect formation and updating of investors’ expectations in response to new information (Barberis, Shleifer and Vishny, 1998; Daniel, Hirshleifer and Subrahmanyam, 1998; Hong and Stein, 1999). These explanations rely on the assumption that arbitrage is limited, so that arbitrageurs cannot eliminate the apparent profitability of momentum strategies. Mispricing may persist because arbitrageurs need to bear factor risk, and risk-averse arbitrageurs demand compensation for accepting such risk (Hirshleifer, 2001). In particular, Jegadeesh and Titman (2001) show that momentum has persisted since its discovery. We show that momentum strategies have high exposures to a systematic downside risk factor.
by numerous psychological works on the way people perceive and deal with risk, with subjects ranging from students to business managers and professional investors. For instance, in their reviews of many of these studies, Slovic (1972) and Libby and Fishburn (1977) conclude that variance seems to be a poor descriptive measure of managerial risk preferences; instead, a model that trades off expected return against risk defined by below-target returns (like semi-variance) seems most appropriate. Moreover, Cooley (1977) finds that institutional investors are mainly concerned with downside risk. Kahneman and Tversky (1979) and Tversky and Kahneman (1991, 1992) show that people care disproportionately more about losses than gains, a finding they term loss aversion. Furthermore, Ang, Chen and Xing (2006) show the relevance of systematic downside risk for the cross-section of stock returns. In fact, results reported by Petkova and Zhang (2005) suggest that downside risk aversion may have an especially large influence on value-sorted portfolios if an investor’s portfolio also carries a substantial fixed income exposure. Their results show that value stocks have a higher downside beta than growth stocks with respect to variables known to predict bond returns (as documented, for example, by Keim and Stambaugh, 1986 and Fama and French, 1989). Interestingly, downside risk aversion may also help to explain why a substantial fraction of investable wealth is invested in fixed income instruments, despite the sizeable equity premium (see Benartzi and Thaler, 1995, Barberis and Huang, 2001, Barberis, Huang and Santos, 2001, and Berkelaar, Kouwenberg and Post, 2005).
A similar result holds for the lower partial moment measure of risk.
By assumption, the distribution of returns is not affected by the reallocation. These equalities should also hold for a given security, and indeed for the entire portfolio. This means that there can be three variants of the capital asset pricing model, each with a different risk measure, with the most familiar being the one from the mean-variance framework:
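Under standard definitions (our notation, not necessarily the author's exact specification), the three variants can be sketched as pricing relations that differ only in how the beta is computed:

```latex
% Mean-variance CAPM beta
\beta_i = \frac{\operatorname{Cov}(r_i, r_m)}{\operatorname{Var}(r_m)}
% Downside (semivariance) beta, conditioning on the market falling below a target \delta
\beta_i^{-} = \frac{\operatorname{Cov}(r_i, r_m \mid r_m < \delta)}{\operatorname{Var}(r_m \mid r_m < \delta)}
% Lower-partial-moment beta (Bawa--Lindenberg form, target r_f)
\beta_i^{\mathrm{LPM}} = \frac{\operatorname{E}\!\left[(r_i - r_f)\,\min(r_m - r_f, 0)\right]}{\operatorname{E}\!\left[\min(r_m - r_f, 0)^2\right]}
% In each case the pricing relation takes the same form:
\operatorname{E}[r_i] - r_f = \beta\,\bigl(\operatorname{E}[r_m] - r_f\bigr)
```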
where µ_h is the mean return on the hedged portfolio, γ is the coefficient of relative risk aversion, and LPM_2(r_h; δ) is the second-order lower partial moment of the portfolio return r_h with target δ.
The CRRA utility function is consistent with agents’ observed behavior since it exhibits constant relative risk aversion and decreasing absolute prudence. Unlike the constant absolute risk aversion utility function, the CRRA utility function also exhibits risk vulnerability, which Gollier and Pratt (1996) argue is a “natural” restriction on utility functions. Risk vulnerability means that the addition of an unfair background risk to initial wealth causes risk-averse decision makers to become more risk averse toward any other independent risk. In a price hedging context, this implies that an increase in revenue variability caused by stochastic production should increase the optimal hedge. The coefficient of relative risk aversion γ is specified to be 3, which is slightly more risk averse than average estimates of farmer risk preferences. Nelson and Escalante (2004) found coefficients of relative risk aversion derived from historical financial attributes of Illinois farms to range from 0.27 to 4.95. Order two of the lower partial moment is chosen because it is most comparable to the traditional measure of variance. The targets are arbitrarily set at five levels: zero and four percentiles of the return distribution (the 50th, 25th, 10th, and 5th). A target of zero means the hedger is concerned only with negative returns, while targets set to lower percentiles imply the hedger is mainly concerned with extreme losses.
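The second-order lower partial moment used here can be computed directly from a return sample. A minimal sketch; the sample returns and the crude percentile rule are illustrative assumptions:

```python
def lower_partial_moment(returns, target, order=2):
    """LPM_n(target) = E[ max(target - r, 0)^n ] over the sample."""
    return sum(max(target - r, 0.0) ** order for r in returns) / len(returns)

returns = [-2.0, -1.0, 1.0, 2.0]  # hypothetical returns

# Target of zero: only negative returns are penalized.
lpm2 = lower_partial_moment(returns, target=0.0, order=2)
print(lpm2)  # (4 + 1 + 0 + 0) / 4 = 1.25

# A percentile target (here a crude empirical 25th percentile)
# shifts attention toward more extreme losses.
q25 = sorted(returns)[len(returns) // 4]
lpm2_tail = lower_partial_moment(returns, target=q25, order=2)
```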
Risk has been at the core of finance theory from the very beginning. Early research efforts on how investors facing risk allocate their capital across different assets culminated in two groundbreaking papers, Markowitz (1952) and Roy (1952), that marked the emergence of finance as a separate discipline. The former suggested that variance be used as a proxy for risk, while the latter recognized the importance of downside risk in the investor’s decision making. Mainly because of its computational convenience, variance, along with standard deviation, quickly became widely accepted as a measure of risk in the mainstream finance literature. Nevertheless, variance has been widely criticized for being symmetric in terms of upside and downside deviations: it treats returns above and below the expected return equally. Intuitively, it makes more sense to punish the investor or fund manager for low returns and reward them for high returns, so minimizing variance is counter-intuitive, as it penalizes low and high returns alike.
In this article we consider the portfolio selection problem of an agent with robust preferences in the sense of Gilboa and Schmeidler [Itzhak Gilboa, David Schmeidler, Maxmin expected utility with non-unique prior, Journal of Mathematical Economics 18 (1989) 141–153] in an incomplete market. Downside risk is constrained by a robust version of utility-based shortfall risk. We derive an explicit representation of the optimal terminal wealth in terms of certain worst case measures which can be characterized as minimizers of a dual problem. This dual problem involves a three-dimensional analogue of f -divergences which generalize the notion of relative entropy.
Behavioural approaches to the home bias puzzle draw upon psychological aspects of individual behaviour. So far, the literature has put forward familiarity with domestic companies, overly optimistic predictions of domestic companies’ performance, and (perceived) subjective competence in the home market as possible explanations. These features are difficult to factor into a model of optimal portfolio choice in order to successfully address the home equity bias. The paper applies behavioural insights such as prospect theory, familiarity and ambiguity aversion to one of the classical problems in the finance literature: the investor’s optimal asset allocation under risk. In particular, we investigate the use of downside risk, which focuses on negative movements in stock markets for the assessment of risk, to see whether the downside risk approach to asset allocation can provide greater insight into the equity home bias puzzle.
Volatility forecasts are crucial for many investment decisions: they are relevant for option pricing, asset allocation, and risk management, and accurate estimates of volatility are essential prerequisites for good forecasts. In light of this, the development of high-frequency estimators, based on the idea of increasing the sampling frequency, has moved research in this field a step forward. In contrast with model-based estimates, realized volatility, advocated by Andersen et al. (2001a) and Barndorff-Nielsen and Shephard (2002), among others, consistently estimates the integrated volatility of the return process under the assumption that it follows a continuous sample path. Realized measures thus form the foundation of any volatility forecast. Supporting this, Hansen and Lunde (2006a) suggest that model comparisons be based on realized volatility as a proxy for the latent volatility, and the results in Andersen et al. (2003) indicate that simple autoregressive forecasting models based on realized volatility outperform GARCH-related models out of sample.
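In its simplest form, the realized variance referred to above is the sum of squared intraday log returns over a day, which converges to the integrated variance as the sampling frequency rises. A minimal sketch; the five-minute price path is made up for illustration:

```python
import math

def realized_variance(intraday_prices):
    """Sum of squared intraday log returns for one trading day."""
    rets = [math.log(p1 / p0)
            for p0, p1 in zip(intraday_prices, intraday_prices[1:])]
    return sum(r * r for r in rets)

# Hypothetical 5-minute prices for one trading day (abbreviated).
prices = [100.0, 100.5, 99.8, 100.2, 100.9]
rv = realized_variance(prices)
realized_vol = math.sqrt(rv)  # daily realized volatility
```

In practice the sampling frequency is limited by market microstructure noise, which is one reason Hansen and Lunde and others study how to use such measures as proxies.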
Markowitz in his pioneering work assumed that the returns on the portfolio followed a multivariate normal distribution; in this framework the mean-variance methodology encloses downside-risk measures. However, over the last forty years empirical analyses of the distribution of returns have consistently rejected this hypothesis, pointing instead towards heavy-tailed distributions; see Fama (1965) or modern books on risk management and heavy tails such as Embrechts (2000) or Malevergne and Sornette (2006). This stylized fact has gained further popularity during the last decade, in which more sophisticated statistical and probabilistic techniques have been developed to study heavy tails and extreme events; see Chavez-Demoulin, Embrechts, and Nešlehová (2006) in an operational risk context. The use of these techniques has also made possible the revival of portfolio theories based on downside-risk measures (Hyung and de Vries, 2005).
Three conclusions can be drawn from the efficient portfolio frontier. First, it is clear that Portfolio C is ruled out by both mean-variance and downside-risk averse investors. This portfolio has higher positive comovements and higher variance, and the analysis of the efficient portfolio frontier reveals that shorting one of the assets, and thereby benefiting from the existence of positive comovements, does not lead to portfolios outperforming A and B. This result is in accordance with the existing literature on portfolio diversification and flight to quality, where it is commonly agreed that investors prefer to invest in bonds and stocks rather than solely in stocks in different marketplaces. Second, the ranking of mean-variance averse investors differs from that of downside-risk averse investors who penalize negative returns on the portfolio. The choice of a threshold τ = 0 is motivated by our aim of studying individuals with a highly risk-averse profile: these investors are concerned about the occurrence of any losses in the portfolio, not just about large negative returns. The efficient curves in the middle and right panels of figure 5.3 suggest that they prefer portfolio A to B. This outcome is not surprising, since at this high level of risk aversion the choice of A over B can be attributed to country risk, that is, to investors overvaluing domestic assets relative to foreign investments. Mean-variance averse investors, on the other hand, prefer cross-border diversification; flight to quality in this case includes fleeing to other international markets. The rationale for this diversification seems to differ from that of downside-risk averse investors: the latter minimize losses by exploiting the complementarity of domestic financial markets, while the former smooth investment returns by investing in diverse assets that are a priori more independent.
And third, different downside-risk measures provide the same ranking of portfolios, as observed in Danielsson et al. (2006). Note that the efficient sets derived from LPM_0 are not convex. This
Drechsler and Yaron (2011) argue that the variance index, e.g. VIX, is a measure of the market’s concerns of surprise economic shocks. Assuming Epstein and Zin (1989) preferences combined with time-varying economic uncertainty, they are able to explain the time-varying variance premium. They further argue that the representative investor must feature a preference for early resolution of uncertainty (a trait which is not found in standard CRRA utility) combined with stochastic volatility in consumption. In a related paper, Drechsler (2013) extends the robust control methods of Hansen and Sargent (2001), and introduces model uncertainty – or Knightian uncertainty – as a potential explanation of risk premia in general. Here, the representative agent is ambiguity averse and considers only the worst-case combination of parameters describing the economy. This model is able to produce large premia in option prices and variance risks which are driven by variations in the level of uncertainty.
As investors may prefer portfolios designed for risk minimization, several downside risk measures (e.g. Value-at-Risk (VaR), Expected Shortfall (ES) and the Omega ratio) have been considered by financial practitioners as alternatives to the variance measure over the past decade. Allocating weights to minimize the probability of returns falling below a given level of these risk measures has been applied to both passive and active portfolio management. For passive portfolio management, similar asset allocation problems date back to studies from over 50 years ago. For example, Roy  discussed optimum distributions under the so-called ‘safety first’ rule when portfolio returns were assumed to be normally distributed. Rockafellar and Uryasev  studied resource allocation problems with VaR or ES minimization provided that the loss functions were convex and continuously differentiable. Gilli et al.  discussed portfolio selection problems designed to minimize downside risks subject to certain real-world constraints. Vassiliadis et al.  propose a hybrid ant colony optimization algorithm for active portfolio management under a downside risk framework. Copula theory has been employed to estimate these risk measures in portfolio management. Most VaR-copula studies reveal that portfolios which account for nonlinear and asymmetric dependence structures are more robust than those constructed under the assumption of a multivariate normal distribution (see Embrechts et al.  and Bradley and Taqqu ).
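For intuition, VaR and ES can be estimated nonparametrically from a return sample. A minimal sketch using a simple order-statistic convention; the cutoff rule and the toy sample are illustrative assumptions, not the estimators used in the studies cited above:

```python
def historical_var_es(returns, alpha=0.05):
    """Return (VaR, ES) at level alpha from the empirical distribution.
    VaR is the loss at the alpha-quantile; ES is the average loss beyond it."""
    ordered = sorted(returns)                 # worst returns first
    k = max(1, int(alpha * len(ordered)))     # number of tail observations
    tail = ordered[:k]
    var = -ordered[k - 1]                     # loss at the alpha-quantile
    es = -sum(tail) / len(tail)               # mean loss in the tail
    return var, es

# Hypothetical daily returns.
returns = [-0.10, -0.05, -0.02, 0.00, 0.01, 0.01, 0.02, 0.02, 0.03, 0.04]
var, es = historical_var_es(returns, alpha=0.10)
print(var, es)  # 0.1 0.1 on this toy sample (k = 1, single tail point)
```

ES always weakly exceeds VaR, since it averages over the losses beyond the quantile; this is the sense in which ES captures tail severity while VaR captures only a threshold.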
The proposed algorithm can be employed either as an alternative valuation approach to the traditional DCF approach or as an approach for calculating a downside risk adjusted beta factor. In this study the optimal portfolio directly determines the value of the firm under equilibrium conditions, assuming that investors maximize the slope of the expected return-risk tradeoff line. Only when risk is measured by variance do the DCF approach and our optimal portfolio valuation produce identical valuations. However, when we use a downside risk measure under non-normal distribution conditions, the equilibrium values under our approach differ from the value obtained by the conventional DCF approach and some of its extensions. It should be noted that the valuation under the three-moments extension of the CAPM is not, and should not be, identical to the valuation by the below-mean variance algorithm presented here, despite some commonalities. The valuation according to Estrada’s (2007) downside semi-variance beta conflicts in many ways with our semi-variance optimal portfolio valuation because there is no mathematical relationship between the co-skewness of the individual security and the skewness of the portfolio (see Cheremushkin (2009)).
The aim of this study is to investigate whether the two types of accounting conservatism (conditional and unconditional) mitigate the risk of falling operating cash flows in the presence of cash holdings, for a sample of 160 Jordanian companies listed on the Amman Stock Exchange (ASE) over the period 2005–2014. Using principal components analysis in SPSS, we generate a composite measure of conditional conservatism (CC_CM) consisting of three measures: negative accruals (CC_NACC), current accruals to total accruals (CC_CACC), and accounting conservatism to good news (CC_ACGN); and a second composite measure of unconditional conservatism (UC_CM) consisting of three measures: total accruals (UC_TACC), the book-to-market ratio (UC_BTM), and skewness (UC_Skew). To measure the downside risk of operating cash flows, we use the root lower partial moment of operating cash flow (RLPM_OCF). We find that both types of accounting conservatism have a significantly positive effect on cash holdings. In addition, we find a significantly negative indirect effect of accounting conservatism on the downside risk of operating cash flows in Jordanian companies with cash holdings: increasing accounting conservatism increases cash holdings, which in turn mitigates the downside risk of operating cash flows.
First, we decided to clarify cost behavior in local public enterprises after 1999. We focused on “market monopoly” and “business environment”, which Nagasawa and Hosomi (2016) point out as factors influencing cost behavior, and in particular on “change in demand” within the business environment. Public economics and public finance point out that changes in public service demand are influenced by demographic change. Therefore, we decided to use demographic change as a proxy index of demand and to verify the relation between demand and cost behavior. Banker et al. (2014b) argue that demand uncertainty affects cost behavior, and point out that when demand uncertainty is large, cost stickiness becomes stronger. However, it has not been verified how cost behavior changes when demand uncertainty is small. When market share is high, demand can be forecast accurately, so demand uncertainty will be small. Local public enterprises run businesses with high market shares. Therefore, in the case of a business with a high market share, managers can predict demand accurately and will not make unnecessary adjustments of management resources, so we expect cost stickiness to weaken. Thus, in our research we examined the relationship between market share and cost behavior. Finally, we examined the relationship between the downside risk of demand and cost behavior.
The complexities associated with a global analysis of aversion to downside risk suggest a need to explore a different approach. Like Pratt , Arrow  and others, it is natural to start with the risk premium as a measurement of the cost of risk. But while Arrow and Pratt examined the linkages between the risk premium and risk exposure “around the mean”, such an approach appears less fruitful when focusing on risk located in the lower tail of the payoff distribution. This paper examines the cost of risk (as measured by the risk premium) using a quantile approach. This is done by dividing the range of stochastic payoffs into intervals, each interval characterizing a quantile of the underlying distribution. This allows us to examine the nature and welfare effect of risk exposure in each interval/quantile. We first show that the risk premium can be decomposed into additive components across quantiles. This result is global and applies “in the large”. It provides a direct measure of the cost of exposure to downside risk. Indeed, defining downside risk as the risk located in the lower quantile(s) of the payoff distribution, the cost of downside risk is just the component of the risk premium associated with such quantile(s). We also show that our global decomposition can generate local measures that apply “in the small”. In turn, these local measures provide some useful information about linkages between risk preferences and moment-based measures of risk exposure.
These theories cast some doubt on CAPM formulations; as a result, some authors began to postulate other models based on downside-risk measures. Markowitz (1959) proposed the semivariance, which was later refined by Hogan and Warren (1974). Bawa (1975) and Bawa and Lindenberg (1977) propose minimizing Lower Partial Moments (LPM) of the distribution of returns as an alternative to the variance for constructing optimal portfolios. Building on this theory, Harlow and Rao (1989) introduce the generalized Mean-Lower Partial Moment (MLPM) model, which they use for asset pricing. This theory generalizes earlier pricing formulations, encompassing static CAPM and LPM models as particular cases.
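Under a common parameterization (our notation; the exact target and order conventions in Harlow and Rao's specification may differ), the n-th order lower partial moment with target τ and the MLPM beta built from the co-lower-partial-moment take the form:

```latex
\mathrm{LPM}_n(\tau; r) = \int_{-\infty}^{\tau} (\tau - r)^n \, dF(r)
                        = \operatorname{E}\!\left[ \max(\tau - r, 0)^n \right]
% Co-lower-partial-moment of asset i with the market
\mathrm{CLPM}_n(\tau; r_i, r_m) = \operatorname{E}\!\left[ (\tau - r_i)\,\max(\tau - r_m, 0)^{\,n-1} \right]
% Generalized MLPM beta
\beta_i^{\mathrm{MLPM}} = \frac{\mathrm{CLPM}_n(\tau; r_i, r_m)}{\mathrm{LPM}_n(\tau; r_m)}
```

Setting n = 2 and τ equal to the risk-free rate recovers the Bawa-Lindenberg case, while τ equal to the mean recovers the Hogan-Warren semivariance case.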
One approach to the specification and verification of asset pricing models posits that the relationship between systematic risk and expected return depends on companies’ conditions. Amorim, Lima and Murcia (2012) conducted research into the relationship between accounting information and systematic risk in the Brazilian financial market, concluding that accounting variables are a valuable complement to the market beta in risk analysis. Sarmiento-Sabogal and Sadeghi (2015) found similarities between accounting betas and market betas for US-listed firms, but also discovered that accounting betas overestimate market risk. Campbell, Polk and Vuolteenaho (2010) emphasised the role of accounting information in risk analysis and in the calculation of the cost of capital; they focus on the correlation of stock cash flow and profitability and provide an example of the application of accounting risk measures using the Morningstar stock rating system.
In this paper we estimated, for the period 1990–2015, the Sortino ratio and the return level, using a Generalized Pareto Distribution, to evaluate the downside risk of large-cap companies, proxied by the S&P 500 Index, and small-cap companies, proxied by the Russell 2000 Index. Small-cap stocks exhibited higher downside risk than large-cap stocks.
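The Sortino ratio used here replaces the standard deviation in the Sharpe ratio with the downside deviation below a target return. A minimal sketch; the zero target and the sample returns are illustrative assumptions, not the paper's data:

```python
import math

def sortino_ratio(returns, target=0.0):
    """Mean excess return over the downside deviation below `target`."""
    mean_excess = sum(returns) / len(returns) - target
    # Downside deviation: root of the mean squared shortfall below the target.
    downside_sq = sum(min(r - target, 0.0) ** 2 for r in returns) / len(returns)
    if downside_sq == 0.0:
        raise ValueError("no returns below target")
    return mean_excess / math.sqrt(downside_sq)

returns = [0.02, -0.01, 0.03, -0.02]  # hypothetical monthly returns
print(round(sortino_ratio(returns), 4))  # 0.4472
```

Because only below-target returns enter the denominator, two series with equal mean and variance can have different Sortino ratios, which is exactly why it discriminates between the large-cap and small-cap indices above when their downside behavior differs.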