Pursuing our previous work, in which the classical notion of the increasing convex stochastic dominance relation with respect to a probability has been extended to the case of a normalised monotone (but not necessarily additive) set function, also called a capacity, the present paper gives a generalization to the case of a capacity of the classical notion of the increasing stochastic dominance relation. This relation is characterized by using the notions of distribution function and quantile function with respect to the given capacity. Characterizations, involving Choquet integrals with respect to a distorted capacity, are established for the classes of monetary risk measures (defined on the space of bounded real-valued measurable functions) satisfying the properties of comonotonic additivity and consistency with respect to a given generalized stochastic dominance relation. Moreover, under suitable assumptions, a "Kusuoka-type" characterization is proved for the class of monetary risk measures having the properties of comonotonic additivity and consistency with respect to the generalized increasing convex stochastic dominance relation. Generalizations to the case of a capacity of some well-known risk measures (such as the Value at Risk or the Tail Value at Risk) are provided as examples. It is also established that some
over the target return as profit. Downside risks thus become important components in the construction of performance measures. Risk measures based on below-target returns were first proposed by Fishburn (1977) in the context of portfolio optimization. Classic measures of downside risk include semi-deviation (Markowitz, 1959, 1987), Value-at-Risk (Jorion, 2000) and the conditional Value-at-Risk (Rockafellar and Uryasev, 2000). Farinelli and Tibiletti (F-T, 2008) propose a general risk-reward ratio, which is suitable for comparing skewed returns with respect to a benchmark. The F-T ratios are essentially ratios of average above-benchmark returns (gains) to average below-benchmark returns (losses), each raised to some power index, p and q, to proxy for the investor's degree of risk aversion. When the power index is equal to one for both numerator and denominator, the performance measure is the Omega ratio (Keating and Shadwick, 2002). On the other hand, when the power index is equal to one and two for numerator and denominator, respectively, the performance measure becomes the upside potential ratio (Sortino et al., 1999).
measure of risk because it penalizes upside deviation as well as downside deviation. In fact, people view upside and downside deviations differently. The occurrence of returns below some target return is considered risk, whereas returns above the target are what we would like to see. Downside risks thus become important components in the construction of performance measures. Classic measures of downside risk include semi-deviation and absolute semi-deviation (see Markowitz, 1959, 1987). Risk measures based on below-target returns were first proposed by Fishburn (1977) in the context of portfolio optimization. Farinelli and Tibiletti (2008) propose a general risk-reward ratio suitable for comparing skewed returns with respect to a benchmark. The F-T ratios are essentially ratios of average above-benchmark returns (gains) to average below-benchmark returns (losses), each raised to some power index to proxy for the investor's degree of risk aversion. When the power index is equal to one for both numerator and denominator, the performance measure is the Omega ratio, first discussed by Keating and Shadwick (2002).
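The gains-over-losses construction described above can be sketched numerically. The following is a minimal illustration of the F-T ratio family on a return sample; the function name and the equal-weight empirical averages are illustrative assumptions, not the authors' implementation.

```python
# Sketch of a Farinelli-Tibiletti-style ratio on a return sample.
# The equal-weight empirical averages are an illustrative assumption.

def ft_ratio(returns, benchmark=0.0, p=1.0, q=1.0):
    """Ratio of the p-th root of the average p-th power of gains over the
    benchmark to the q-th root of the average q-th power of losses below it."""
    gains = [max(r - benchmark, 0.0) ** p for r in returns]
    losses = [max(benchmark - r, 0.0) ** q for r in returns]
    num = (sum(gains) / len(returns)) ** (1.0 / p)
    den = (sum(losses) / len(returns)) ** (1.0 / q)
    return num / den

# p = q = 1 recovers the Omega ratio of Keating and Shadwick (2002);
# p = 1, q = 2 gives the upside potential ratio of Sortino et al. (1999).
returns = [0.05, -0.02, 0.03, -0.01, 0.04]
omega = ft_ratio(returns, benchmark=0.0, p=1, q=1)  # average gain / average loss
```

With this sample, average gains are 0.024 and average losses 0.006, so the Omega ratio is 4.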
In this section comparisons between the scenario using a seasonal forecast (with-forecast scenario) and those not using a seasonal forecast (without) are presented. In order to examine the consistency of the efficacy of the with-forecast scenario, comparisons were carried out for six irrigation nodes in the Queensland part of the BRC. These nodes had 15 203 ha of land developed for irrigated cotton in the 1997/1998 water year, which is 69 per cent of the total land developed for irrigated cotton (in that year). The development statistics for the irrigation nodes used in the analysis are summarised in table 2. Both pump capacity and size of on-farm storages (OFS) are reported, as the ratio of these is important in determining how much off-allocation water can be pumped compared to the volume available for pumping. Irrigation efficiency accounts for transmission losses between the OFS and the plant-root zone. An irrigation efficiency of 66 per cent means that of every megalitre of water released from the OFS only 66 per cent actually gets to the root zone. Therefore, if 1 ML is needed by the crop, approximately 1.5 ML needs to be released from the OFS.
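The efficiency arithmetic in the last sentence amounts to dividing crop demand by the efficiency factor; a one-line sketch (the function name is illustrative):

```python
# Minimal arithmetic behind the irrigation-efficiency statement: with
# efficiency e, delivering `need_ml` ML to the root zone requires
# need_ml / e ML released from the on-farm storage (OFS).

def ofs_release(need_ml, efficiency):
    """Volume (ML) to release from the OFS so that need_ml reaches the root zone."""
    return need_ml / efficiency

# At 66 per cent efficiency, 1 ML of crop demand requires roughly 1.5 ML released.
release = ofs_release(1.0, 0.66)  # ~1.52 ML
```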
In general, measuring risk based on a single measure such as variance can provide unreliable results (Hanoch & Levy, 1969). Variance has several limitations as a measure of risk: it is a symmetric measure, treating positive and negative deviations from the mean alike. There is also compelling evidence that the distributions of many assets are in reality not normal; many assets exhibit skewness and fat tails, which means that variance mismeasures their risk. While several other risk measures, such as Value at Risk (VaR), have been introduced, selecting the correct measure of risk is challenging (De Giorgi & Post, 2008). Stochastic dominance does not imply any quantified measure of risk; risk is treated as a qualitative criterion that relies on the preference orderings of the risky options (Kuosmanen, 2004).
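Since VaR recurs throughout these passages, a minimal empirical sketch may help; the ceiling-based quantile convention used below is one of several in use and is an assumption here, with losses recorded as positive numbers.

```python
import math

# Hedged sketch of empirical Value-at-Risk and Tail Value-at-Risk on a loss
# sample (losses positive). The ceiling-based sample-quantile convention is
# an illustrative assumption, not a fixed standard.

def var(losses, alpha=0.95):
    """Empirical VaR: the alpha-quantile of the loss sample."""
    xs = sorted(losses)
    k = math.ceil(alpha * len(xs)) - 1      # 0-based index of the quantile
    return xs[k]

def tvar(losses, alpha=0.95):
    """Empirical TVaR: average of losses at or beyond the VaR level."""
    v = var(losses, alpha)
    tail = [x for x in losses if x >= v]
    return sum(tail) / len(tail)

losses = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0, 9.0, 10.0]
v = var(losses, alpha=0.75)   # 8.0: the 0.75-quantile of the sample
t = tvar(losses, alpha=0.75)  # 9.0: mean of the losses 8, 9, 10
```

TVaR averages the tail beyond VaR, which is why it is always at least as large as VaR at the same level.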
mean (expected return) and variance of portfolios. Variance or standard deviation is the most commonly used risk measure, and the mean-variance rule is by far the most popular investment decision rule, widely adopted by both academics and practitioners (Levy 2016). The mean-variance rule provides a foundation for the development of the Capital Asset Pricing Model by Sharpe (1964) and Lintner (1965). It has long been criticized, however, on the basis that the variance is not consistent with investors' actual perception of risk, since it weights negative (undesirable) and positive (desirable) deviations of returns evenly. A large body of empirical studies in finance, economics and psychology has argued that individuals view dispersions of returns in an asymmetric manner: losses weigh more heavily than gains (see, e.g., Markowitz 1959, Kahneman and Tversky 1979).
and it is a supermartingale under all absolutely continuous local martingale measures Q with "finite V-expectation", E[V(dQ/dP)] < ∞, for the conjugate function V(y) = sup_{x∈ℝ}(U(x) − xy), y > 0, of U. Throughout, we suppose that the market admits an equivalent local martingale measure (i.e., satisfies NFLVR) and that for each y > 0 the dual problem inf_Q E[V(y dQ/dP)] is finite, with a dual minimizer Q̂(y) in the set of equivalent local martingale measures. Sufficient conditions for the validity of the latter assumption can be found in ; in particular, it holds if the market is complete, or if the utility function under consideration is exponential, U(x) = −e^{−γx} with γ > 0, and an equivalent local martingale measure Q with finite entropy, E[(dQ/dP) log(dQ/dP)] < ∞, exists.
To compare these two main portfolio strategies, we search for stochastic dominance (SD) properties, since SD takes account of the entire return distribution. The major argument for stochastic dominance is that it does not require any specific knowledge about the preferences of investors. Indeed, the first stochastic dominance order is related to investors with an increasing utility function. Stochastic dominance of order two focuses on investors having an increasing and concave utility, meaning that they are risk-averse. However, at a given order (for example 1 or 2), the stochastic dominance criterion cannot always rank all portfolios; there exist cases where no stochastic dominance is observable. But there exists a stochastic dominance criterion at each order and, the higher the order, the less stringent the criterion. Thus it is reasonable to expect that there exists an order for which one portfolio strategy dominates another (or vice versa). De Giorgi shows that, in a market without friction, the market portfolio can be efficient according to the criterion of second-order stochastic dominance. Therefore the test of stochastic dominance is consistent with the theory of portfolio choice. To compare with alternative approaches such as those based on performance measures, note that Darsinos and Satchell show that n-order stochastic dominance implies Kappa(n − 1) dominance. This means, for example, that second-order stochastic dominance implies Omega dominance, while third-order SD implies dominance according to the Sortino measure.
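For equally weighted samples of the same size, second-order stochastic dominance reduces to comparing partial sums of order statistics, which gives a simple empirical check. The sketch below relies on that equal-size characterization; everything else (names, sample values) is illustrative.

```python
# Sketch of an empirical second-order stochastic dominance (SSD) check for two
# equally weighted samples of the same size: x SSD-dominates y iff every
# partial sum of x's order statistics is at least the corresponding partial
# sum of y's. The equal-size, equal-weight setting is an assumption here.

def ssd_dominates(x, y):
    """True iff the empirical distribution of x SSD-dominates that of y."""
    assert len(x) == len(y)
    xs, ys = sorted(x), sorted(y)
    cx = cy = 0.0
    for a, b in zip(xs, ys):
        cx += a
        cy += b
        if cx < cy:
            return False
    return True

x = [1.0, 2.0, 3.0, 4.0]
y = [0.0, 2.0, 3.0, 4.0]
# x SSD-dominates y: its partial sums 1, 3, 6, 10 weakly exceed y's 0, 2, 5, 9.
```

As the text notes, such a ranking need not exist: when the two cumulative sequences cross, neither sample dominates the other at order two.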
We investigate the effect of changes in wealth on the demand for the risky asset. We derive the effects of wealth on the optimal portfolio from the first-order conditions with respect to the portfolio share of the risky asset α and the agent's end-of-period wealth W_i. We first have the following result
Starting from the reward-risk model for portfolio selection introduced in De Giorgi (2004), we derive the reward-risk Capital Asset Pricing Model (CAPM) analogously to the classical mean-variance CAPM. The reward-risk portfolio selection arises from an axiomatic definition of reward and risk measures based on a few basic principles, including consistency with second-order stochastic dominance. With complete markets, we show that at any financial market equilibrium, investors' optimal allocations are comonotonic, and therefore the capital market equilibrium model can be reduced to a representative investor model. Moreover, the pricing kernel is an explicitly given, monotone function of the market portfolio return, corresponding to the increments of the distortion function characterizing the representative investor's risk perceptions. Finally, an empirical application shows that the reward-risk CAPM captures the cross-section of US stock returns better than the mean-variance CAPM does.
The existence of a background risk has been seen to be of particular interest in the research on risk behavior. Ross (1981) observed that the Arrow-Pratt measure of risk aversion is inadequate in the sense that upon the introduction of a background risk a more risk averse agent may not behave in a more risk averse manner whenever a foreground risk is present. Other research has explored the effects of aversion to risk upon the introduction of a stochastically independent background risk for certain types of risk preferences as defined in Pratt and Zeckhauser (1987), Kimball (1993), and Gollier and Pratt (1996). Gollier and Pratt demonstrate how these previously defined classes of risk preferences are related by identifying sufficient conditions for any agent possessing these qualities to react to the introduction of a small, unfair, and independent background risk by becoming more risk averse to bearing a foreground risk. Their notion of preferences possessing these qualities, known as risk vulnerability, captures this common quality for all of these classes of risk preferences and is the widest set of preferences upon which the introduction of background risk will generate more risk averse behavior. However, Gollier and Pratt do not present a systematic method for obtaining the different risk measures that arise in these various cases.
Is the mean-risk (MR) model consistent with the SD rule? Markowitz (1952b) defines a mean-variance (MV, or mean-standard deviation) rule for risk averters, and Wong (2007) defines an MV rule for risk seekers. Wong (2007) further establishes the consistency of the MV rules with second-order SD (SSD) and second-order RSD (SRSD) rules under some conditions. Ogryczak and Ruszczyński (1999) show that under some conditions the standard semi-deviation and absolute semi-deviation make the mean-risk model consistent with the second-order SD (SSD). Ogryczak and Ruszczyński (2002) establish the equivalence between TVaR and the SSD. In addition, Leitner (2005) shows that AV@R, as a profile of risk measures, is equivalent to the SSD under certain conditions. Ma and Wong (2010) establish the equivalence between SSD and the C-VaR criteria.
As an alternative, a robust approach might seek weak uniform rankings over entire classes of evaluation functions, and consider nonparametric distributions of DCC values. In this respect, Stochastic Dominance (SD) tests have been developed to test for statistically significant rankings of prospects. Assuming F and G are the distribution functions of DCC produced by model 1 and model 2, respectively, model 1 first-order SD model 2, over the support of DCC values dcc, iff F(dcc) ≤ G(dcc) for all dcc, with strict inequality over some values of dcc. This means that the model that produces G is dominant over all merely increasing evaluation functions since, for any DCC level, the probability that capital charges are smaller under G is greater than under F. In particular, the distribution F will have a higher median DCC than G. Similarly, each percentile (quantile) of the F distribution will be at a higher DCC level than the corresponding percentile of the G distribution. Consequently, model 2 will be preferred to model 1 on the basis of lower capital charges.
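On samples of DCC values, the first-order comparison described above can be checked by evaluating the two empirical distribution functions on the pooled sample points. A minimal sketch, with illustrative names and sample values:

```python
# Empirical sketch of the first-order SD comparison of two DCC samples:
# model 2's distribution G yields lower capital charges than model 1's F
# when G(t) >= F(t) at every pooled sample point, strictly at some point.

def ecdf(sample, t):
    """Empirical distribution function of `sample` evaluated at t."""
    return sum(1 for s in sample if s <= t) / len(sample)

def fsd_lower_charges(sample_g, sample_f):
    """True iff G lies weakly above F everywhere (strictly somewhere)."""
    grid = sorted(set(sample_g) | set(sample_f))
    strict = False
    for t in grid:
        g, f = ecdf(sample_g, t), ecdf(sample_f, t)
        if g < f:
            return False
        if g > f:
            strict = True
    return strict

dcc_model2 = [1.0, 2.0, 3.0]   # illustrative DCC samples
dcc_model1 = [2.0, 3.0, 4.0]
```

Here every quantile of model 2's sample sits at a lower DCC level than the corresponding quantile of model 1's, so the check succeeds in one direction and fails in the other.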
max{μ_X − λδ̄_X : X ∈ Q}, 0 < λ ≤ 1, (19) is efficient under the SSD rules, if it is unique. In the case of multiple optimal solutions, the optimal set of problem (19) contains a solution which is efficient under the SSD rules, but it may also contain some SSD-dominated solutions. More precisely, due to Corollary 4, an optimal solution X ∈ Q can be SSD-dominated only by another optimal solution Y ∈ Q which generates a μ/δ̄ tie with X (i.e., μ_Y = μ_X and δ̄_Y = δ̄_X). A question arises: how different can the random variables be that generate a tie (are indifferent) in the μ/δ̄ mean-risk model? Absolute semideviation is a linear measure of the dispersion space, and therefore many different distributions may tie in the μ/δ̄ comparison. Note that two random variables X and Y with the same expected value μ_X = μ_Y are μ/δ̄-indifferent if F^{(2)}
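The mean/absolute-semideviation objective in problem (19) is easy to state on a sample: δ̄ is the expected shortfall below the mean, E[max(μ − X, 0)]. The sketch below uses equal-weight empirical estimates; names are illustrative.

```python
# Sample version of the mean/absolute-semideviation objective in (19):
# mu - lam * delta_bar, with delta_bar = E[max(mu - X, 0)].
# Equal-weight empirical estimates; names are illustrative assumptions.

def mean_semideviation(sample):
    """Return (mu, delta_bar) for an equally weighted sample."""
    mu = sum(sample) / len(sample)
    delta_bar = sum(max(mu - x, 0.0) for x in sample) / len(sample)
    return mu, delta_bar

def objective(sample, lam):
    """Objective of problem (19) for trade-off coefficient 0 < lam <= 1."""
    assert 0.0 < lam <= 1.0
    mu, delta_bar = mean_semideviation(sample)
    return mu - lam * delta_bar

sample = [0.0, 2.0]
# mu = 1, delta_bar = (max(1, 0) + max(-1, 0)) / 2 = 0.5, so objective(lam=1) = 0.5
```

Two samples with the same mu and delta_bar produce the same objective value for every lam, which is exactly the μ/δ̄ tie discussed above.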
We then examine whether similar results can be obtained with other reasonable measures of global concavity. To this end, we consider two natural alternative measures of global concavity: the supremum of the ARA index and the average ARA index of the utility function over its domain. These alternative measures turn out to yield much weaker results concerning our original problem. For the supremum, our results say that there is a threshold level such that no utility function with lower global concavity gives a common transformation of the original random variables permitting their ranking by MSOSD. Further, we demonstrate that for higher degrees of global concavity we could obtain unanimity. For the case of the average ARA as a global measure of concavity, we show that above some threshold value of this measure we can always find utility functions allowing for the ranking of the two commonly transformed risks by MSOSD. Clearly, neither of the two measures yields as sharp a characterization of preferences as the one we obtained with the infimum of the ARA index as the global measure of concavity.
investors with prospect preferences, and investors with reverse S-shaped utility functions, called Markowitz investors or investors with Markowitz preferences. The SD tests allow us to simultaneously identify the assets preferred by risk averters and risk seekers in both the positive and negative return domains. Incorporating this result leads to a complete test framework that can be used to infer risk-averse and risk-seeking behaviors in the entire domain as well as in the positive and negative domains. This enables us to infer the preferences of risk averters, risk seekers, prospect investors, and Markowitz investors. We apply this framework to examine the different types of risk preferences associated with the index spot and futures returns in Hong Kong. The research will shed some light on the relationships between the Hong Kong stock and futures markets and provide useful information to investors, the exchange, and policymakers.
A major drawback of Mean-Variance and Stochastic Dominance investment criteria is that they may fail to determine dominance even in situations when all "reasonable" decision-makers would clearly prefer one alternative over another. Levy and Leshno suggest Almost Stochastic Dominance (ASD) as a remedy. This paper develops algorithms for deriving the ASD efficient sets. Empirical application reveals that the improvement to the efficient sets implied by ASD is substantial (a 64% reduction for FSD). Direct expected utility maximization shows that investment portfolios excluded from the ASD efficient set would not have been chosen by any investor with reasonable preferences.
the preferences on different prospects for different types of investors, it is not necessary to measure the utilities for different types of investors and analyze their expected utilities. One only needs to find out the orders and the types of SD for different prospects. This information could then enable us to draw conclusions on the preferences of different types of investors over the prospects. This is the basic principle by which academics apply SD theory to many areas such as economics and finance. For example, Qiao, et al. (2012) find that stocks SASD dominate futures and futures SDSD dominate stocks, and conclude that risk averters prefer to buy stocks, whereas risk seekers prefer long index futures. Davidson and Duclos (2000) and others develop test statistics for ASD, while Bai, et al. (2015) extend their work by developing test statistics for both ASD and DSD. The tests could be used to apply the theory of ASD and DSD to empirical issues.
harvesting all other stands but acquiring almost equal amounts of additional forest land similar to Stand #162, that is, mature spruce forest with over 70 per cent of the timber suitable for sawlogs. Of course, one could favor more homogeneous spruce stands in long-term forest planning. In this respect, the result is well in line with the recent trends in forestry practices in Finland. In the short run, however, this would require trading the forest land. Moreover, it is highly unlikely that the present land ownership could be instantaneously traded for large areas of old spruce forest. Therefore, the unconstrained case does not seem very realistic for short-run planning. It should also be noted that the expected gains of 0.08 percentage points are very minimal. This should be balanced against the transaction costs of making radical changes to the investment strategy, the environmental and aesthetic losses due to increased homogeneity of the tree species, as well as the option value of postponing the semi-irreversible harvesting decision, none of which have been duly accounted for in the present analysis.