worst-case expected value

Worst-Case Conditional Value-at-Risk with Application to Robust Portfolio Management

the expected values and the CVaRs at confidence level 0.95 of the corresponding portfolios for each time period. It is obvious that the larger the required minimal expected/worst-case expected return is set, the larger the associated risk will be. From the expected values, we find that the robust optimal portfolio policy always guarantees the required worst-case expected value of µ. However, the nominal optimal portfolio policy usually results in a small worst-case expected value although it may have a large expected value (see lines for µ = 0.0005, 0.00055, 0.00095). For the same value of µ, the risk of the robust optimal portfolio policy appears to be larger than the risk of the nominal optimal portfolio policy. It should be mentioned that we only list the CVaRs calculated according to the three sets of samples, and they do not necessarily reveal the real worst-case CVaRs. However, the larger risk is usually rewarded by a higher return, especially a higher worst-case return. Figure 2 illustrates the evolution of the values of the robust optimal portfolio and the nominal optimal portfolio generated by setting µ = 0.0005. It shows that the robust optimal portfolio almost always outperforms the nominal optimal portfolio. For µ = 0.00095, the robust portfolio optimization problem is infeasible. But if µ = 0.00095 is required as the minimal worst-case expected return, the corresponding nominal optimal portfolio, which is obtained by solving (55) with l = 1 and S_1 = 2700, becomes infeasible to problem (55) with l = 3 and S_i = 900, …
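
For orientation, the two statistics compared above (the sample expected return and the CVaR at confidence level 0.95), together with the worst-case expected return over several sample sets, can be estimated as in the following minimal Python sketch. The function names, the loss convention (CVaR of the loss -r), and the synthetic sample sets are our own illustration, not the paper's code:

import numpy as np

def cvar(returns, beta=0.95):
    # CVaR at confidence level beta of the loss L = -r: the mean loss in
    # the worst (1 - beta) tail, with the beta-quantile (VaR) as cutoff.
    losses = -np.asarray(returns)
    var = np.quantile(losses, beta)
    return losses[losses >= var].mean()

# Worst-case expected return over l = 3 sample sets of size S_i = 900,
# mirroring the robust criterion (synthetic data, for illustration only).
rng = np.random.default_rng(0)
sample_sets = [rng.normal(0.0005, 0.01, 900) for _ in range(3)]
worst_case_mean = min(s.mean() for s in sample_sets)
print(worst_case_mean, [cvar(s) for s in sample_sets])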

Worst Case Reliability Prediction Based on a Prior Estimate of Residual Defects

So with a single defect, there is a bimodal distribution of actual failure intensities around the expected value. However, when we consider multiple defects with the same λ value, the sum of the failure intensities of the surviving defects tends to a normal distribution. If all N defects have the worst-case failure rate of 1/T, the distribution has a mean of N/eT and a standard deviation of √N/eT. So for large N there is a 50% chance that the actual failure intensity is less than N/eT and a 98% chance of its being less than (N+2√N)/eT. For other values of λ, the probability that the observed failure intensity is within the bound will be much higher. So the actual failure intensity is also likely to be within the bound in most cases.
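
As a quick numeric illustration of these bounds (our own sketch; the values N = 100 and T = 1000 hours are assumptions, not taken from the paper):

import math

N, T = 100, 1000.0                        # assumed example values
mean = N / (math.e * T)                   # expected intensity N/(eT)
sd = math.sqrt(N) / (math.e * T)          # standard deviation sqrt(N)/(eT)
bound_98 = mean + 2 * sd                  # (N + 2*sqrt(N))/(eT), ~98% bound
print(f"50% bound: {mean:.3e}/h, 98% bound: {bound_98:.3e}/h")
# -> 50% bound: 3.679e-02/h, 98% bound: 4.415e-02/h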

CAS: Computing Semiparametric Bounds on the Expected Payments of Insurance Instruments via Column Generation

Generating semiparametric bounds using the CG approach outlined above has several key advantages over the semidefinite programming (SDP) solution approach introduced by Bertsimas and Popescu (2002) and Popescu (2005). First, in most practically relevant situations all that is required is a linear programming solver for (2) and the ability to find the roots of polynomials of degree no more than four for (3). This means that the methodology can use any commercial LP solver, allowing for rapid and numerically stable solutions. Second, the problem does not need to be reformulated for changes in the support of the underlying risk distribution p of X. Accounting for alternate support requires only limiting the search space in the subproblem (3). Third, problem (2) is explicitly defined in terms of the distribution used to generate the bound value. So, for any bound computed, the worst-case (respectively, for the lower bound, the best-case) distribution that yielded that bound can also be analyzed; with the SDP approach no such insight into the resulting distribution is, to the best of our knowledge, readily possible. The ability to analyze the resulting distribution would be of particular use to practitioners in the insurance and risk management industry and will be further discussed in Section 5.2. Finally, the CG approach works analogously for both the upper and lower semiparametric bound problems. In contrast, the SDP approach commonly results in SDP formulations of the problem that are more involved for the lower than for the upper semiparametric bound. …
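
A schematic of the CG loop just described, in Python (our own illustrative sketch: the master LP and the subproblem stand in for problems (2) and (3), the paper's degree-four root-finding subproblem is replaced by a grid search, and the payoff, moments and tolerances are assumptions):

import numpy as np
from scipy.optimize import linprog

# Upper bound on E[f(X)] over all distributions on [lo, hi] matching the
# moments m_0..m_2, computed by column generation over support points.
f = lambda x: np.maximum(x - 1.0, 0.0)    # example stop-loss-style payoff
m = np.array([1.0, 0.5, 0.5])             # target moments E[X^0], E[X], E[X^2]
lo, hi = 0.0, 5.0
pts = list(np.linspace(lo, hi, 6))        # initial columns (support points)

for _ in range(50):
    A = np.array([[x ** k for x in pts] for k in range(3)])
    # Master LP in the role of (2): maximize sum_i p_i f(x_i) subject to
    # the moment constraints (posed as a minimization for linprog).
    res = linprog(c=-f(np.array(pts)), A_eq=A, b_eq=m,
                  bounds=[(0, None)] * len(pts), method="highs")
    y = -res.eqlin.marginals              # dual prices (sign flipped: min form)
    # Subproblem in the role of (3): the point with the most positive
    # reduced cost f(x) - sum_k y_k x^k enters as a new column.
    grid = np.linspace(lo, hi, 2001)
    reduced = f(grid) - sum(y[k] * grid ** k for k in range(3))
    if reduced.max() < 1e-9:
        break                             # no violated column: bound attained
    pts.append(float(grid[np.argmax(reduced)]))

print("upper bound:", -res.fun)           # worst-case distribution lives on pts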

On Stochastic and Worst-case Models for Investing

Another approach to connecting worst-case to average-case investing was taken by Jamshidian [Jam92] and Cross and Barron [CB03]. They considered a model of “continuous trading”, where there are T “trading intervals”, and in each the online investor chooses a fixed portfolio which is rebalanced k times with k → ∞. They prove familiar regret bounds of O(log T) (independent of k) in this model w.r.t. the best fixed portfolio which is rebalanced T × k times. In this model our algorithm attains the tighter regret bounds of O(log Q), although our algorithm has more flexibility. Furthermore, their algorithms, being extensions of Cover's algorithm, may require exponential time in general.

Worst-Case to Average-Case Reductions for Module Lattices

Note that the existing results on R-LWE and R-SIS already imply that these problems are no easier than some SIVP instances: For example, one can embed an R-SIS instance into the first row of an M-SIS instance, and generate the other row(s) independently. However, with this approach, the hardness of the corresponding worst-case instances is related to n-dimensional instances of SIVP. With our new approach, we can show that the M-SIS instance above is no easier than solving SIVP for a (2n)-dimensional lattice (or, more generally, a (dn)-dimensional lattice, if the number of rows of the M-SIS matrix is d). If SIVP is exponentially hard to solve (with respect to the lattice dimension), the module approach provides a complexity lower bound for solving this simple M-SIS instance that is the square (resp. dth power) of the lower bound provided by relying on R-SIS. This assertion relies on the reasonable conjecture that the module structure is harder to exploit in SIVP solvers than the ideal structure (because an ideal lattice can be embedded in a module lattice of any dimension).
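
To make the embedding concrete, here is a toy sketch (our own illustration; the parameters are assumptions, and coefficient vectors stand in for elements of R_q = Z_q[x]/(x^n + 1)):

import numpy as np

rng = np.random.default_rng(1)
n, m, d, q = 8, 16, 2, 12289              # toy parameters (assumptions)
# An R-SIS instance: m uniformly random ring elements, each represented
# by its length-n coefficient vector.
r_sis = rng.integers(0, q, size=(m, n))
# Embed it as the first row of a d x m M-SIS instance and draw the other
# row(s) independently and uniformly, as described above; any short
# solution of the module instance in particular satisfies the first row,
# i.e. it solves the embedded R-SIS instance.
m_sis = np.stack([r_sis] + [rng.integers(0, q, size=(m, n))
                            for _ in range(d - 1)])
print(m_sis.shape)                        # (d, m, n)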

Real-Time Systems and Limiting Event Streams: A General Model

For a wide acceptance of real-time analysis by industrial software developers, tight bounds on the real worst-case response times of tasks are important. A successful approach is the consideration of dependencies in chained task-sets. Previous work considers different kinds of dependencies. Mutual exclusion of tasks, offsets between task stimulations, task chains, or tasks competing for the same resource are some examples of such dependencies. But the integration of many other types of dependencies is still open. In particular, what is missing is a holistic model for dependencies as a new abstraction layer. The idea is to separate the calculation and the concrete type of dependencies …

A comparison of worst case performance of priority queues used in Dijkstra's shortest path algorithm

5.2 Worst Case Graph: In order to ensure the maximum possible number of decrease-key operations and the worst-case time for a decrease-key operation on each data structure, it was necessary to …
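
For context, a minimal sketch of Dijkstra's algorithm in Python (our own illustration, not the paper's code). Python's heapq has no native decrease-key, so the standard lazy-deletion idiom stands in for it; counting the extra pushes counts the logical decrease-key operations whose worst case is at issue here:

import heapq

def dijkstra(graph, source):
    # graph: {u: [(v, w), ...]} adjacency lists with non-negative weights.
    dist = {u: float("inf") for u in graph}
    dist[source] = 0.0
    pq = [(0.0, source)]                  # (tentative distance, node)
    decrease_keys = 0
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist[u]:
            continue                      # stale entry: lazy deletion
        for v, w in graph[u]:
            if d + w < dist[v]:
                dist[v] = d + w
                heapq.heappush(pq, (dist[v], v))  # logical decrease-key
                decrease_keys += 1
    return dist, decrease_keys

graph = {"a": [("b", 1.0), ("c", 4.0)], "b": [("c", 2.0)], "c": []}
print(dijkstra(graph, "a"))               # ({'a': 0.0, 'b': 1.0, 'c': 3.0}, 3)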

Worst Case Synchronous Grammar Rules

such that tabular parsing strategies must take at least Ω(N^{c√n}) time, that is, the exponent of the algorithm is proportional to the square root of the rule length. In this paper, we improve this result, showing that in the worst case the exponent grows linearly with the rule length. Using a probabilistic argument, we show that the number of easily parsable permutations grows slowly enough that most permutations must be difficult, where by difficult we mean that the exponent in the complexity is greater than a constant factor times the rule length. Thus, not only do there exist permutations that have complexity higher than the square root case of Satta and Peserico (2005), but in fact the probability that a randomly chosen permutation will have higher complexity approaches one as the rule length grows.

GOAL: A Load-Balanced Adaptive Routing Algorithm for Torus Networks

We introduce a load-balanced adaptive routing algorithm for torus networks, GOAL (Globally Oblivious Adaptive Locally), that provides high throughput on adversarial traffic patterns, matching or exceeding fully randomized routing and exceeding the worst-case performance of Chaos [2], RLB [14], and minimal routing [8] by more than 40%. GOAL also preserves locality to provide up to 4.6× the throughput of fully randomized routing [19] on local traffic. GOAL achieves global load balance by randomly choosing the direction to route in each dimension. Local load balance is then achieved by routing in the selected directions adaptively. We compare the throughput, latency, stability and hot-spot performance of GOAL to six previously published routing algorithms on six specific traffic patterns and 1,000 randomly generated permutations.
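
As a sketch of the global step described above (the random per-dimension direction choice), the following Python uses a distance-proportional weighting; the weighting and all names are our assumptions for illustration, not taken from the GOAL paper:

import random

def choose_directions(src, dst, k):
    # For each dimension of a k-ary torus, pick a ring direction at random,
    # weighting the shorter way around more heavily so that, over many
    # packets, channel load stays balanced (distance-proportional weights).
    dirs = []
    for s, d in zip(src, dst):
        delta = (d - s) % k               # hop count going the "plus" way
        p_plus = (k - delta) / k          # shorter "plus" path, higher weight
        dirs.append(+1 if random.random() < p_plus else -1)
    return dirs                           # adaptive routing then follows these

print(choose_directions((0, 0), (1, 5), 8))   # e.g. [1, -1]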

Minimizing Conditional Value at Risk under Constraint on Expected Value

It has been observed in Kondor et al. [18] and Cherny [8] that the optimal portfolio normally does not exist for the mean-CVaR optimization in a static setting if the portfolio value is unbounded. When solutions do exist in some limited cases, they take the form of a binary option. We confirm this result in a dynamic setting with a simple criterion in Theorem 3.17, where the portfolio value is allowed to be unbounded from above but restricted to be bounded from below, since this is crucial in excluding arbitrage opportunities for continuous-time investment models. In the case where the portfolio value is bounded both from below and from above, Schied [27], Sekine [28], and Li and Xu [20] find the optimal solution to be binary for CVaR minimization without the return constraint. We call this binary solution, in which the optimal portfolio takes either the value of the lower bound or a higher level, the ‘Two-Line Configuration’ in this paper. The key to finding the solution is the observation that the core part of CVaR minimization can be transformed into shortfall risk minimization using the representation given by Rockafellar and Uryasev [23] (CVaR is the Fenchel-Legendre dual of the expected shortfall). Föllmer and Leukert [13] characterize the solution to the latter problem in general semimartingale models to be binary (‘Two-Line Configuration’), and they have demonstrated its close relationship to the Neyman-Pearson lemma. The main contribution of our paper is that, when adding the return constraint to the CVaR minimization objective, we prove the optimal solution to be a ‘Three-Line Configuration’ in Theorem 3.15. This can be viewed in part as a generalization of the binary solution of the Neyman-Pearson lemma with an additional constraint on expectation. The new solution can take not only the upper or the lower bound, but also a level in between.
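
For reference, the Rockafellar and Uryasev [23] representation invoked above, in its standard form (with L the loss and α the confidence level):

\[
\mathrm{CVaR}_{\alpha}(L) = \min_{z \in \mathbb{R}} \Big\{ z + \tfrac{1}{1-\alpha}\, \mathbb{E}\big[(L - z)^{+}\big] \Big\},
\]

with the minimum attained at z* = VaR_α(L); this is what reduces the core part of CVaR minimization to a shortfall-type minimization.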

CCG Parsing Algorithm with Incremental Tree Rotation

It is useful to look at the mentioned combinatory rules in a generalised way. For instance, if we look at forward combinatory rules we can see that they all follow the same pattern of combining X/Y with a category that starts with Y. The only difference among them is how many subcategories follow Y in the secondary category. In the case of forward function application there will be nothing following Y, so we can treat forward function application as a generalised forward composition combinator of the zeroth order >B0. Standard forward function composition >B will be a generalised composition of first order >B1, while second-order composition >B² will be >B2. The same generalisation can be applied to backward combinators. In practice, there is a low bound on the order of combinatory rules, around 2 or 3.
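
A toy sketch of the generalised forward composition >Bn on a simplified category encoding (categories as strings or (result, slash, argument) tuples; the encoding and function are our own illustration, not the paper's parser):

def forward_compose(left, right, n):
    # >Bn: X/Y combined with ((...(Y |1 Z1) ...) |n Zn) yields
    # ((...(X |1 Z1) ...) |n Zn); n = 0 is plain forward application.
    if not (isinstance(left, tuple) and left[1] == "/"):
        return None                       # primary category must be X/Y
    x, _, y = left
    args, cat = [], right
    for _ in range(n):                    # peel the n outermost arguments
        if not isinstance(cat, tuple):
            return None
        cat, slash, arg = cat
        args.append((slash, arg))
    if cat != y:
        return None                       # secondary part must start with Y
    for slash, arg in reversed(args):
        x = (x, slash, arg)
    return x

print(forward_compose(("S", "/", "NP"), "NP", 0))              # >B0: 'S'
print(forward_compose(("S", "/", "NP"), ("NP", "/", "N"), 1))  # >B1: ('S', '/', 'N')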

Compact Accumulator using Lattices

In recent years, there has been rapid development in the use of lattices for constructing rich cryptographic schemes (these include digital signatures [GPV08,Boy10,CHKP12], identity-based encryption [GPV08] and hierarchical IBE [CHKP12, ABB10], non-interactive zero knowledge [PV08], and even a fully homomorphic cryptosystem [Gen09]). Among other reasons, this is because such schemes have yet to be broken by quantum algorithms, and their security can be based solely on worst-case computational assumptions.

The Performance of ECC Algorithms in DNSSEC: A Model based Approach

Additionally, in the scenario evaluation the worst case specifies that domains are signed one by one. In reality, it is more likely that large groups of domains (i.e. those operated by a single service provider) will be signed, rather than a single domain within a zone. Some of the domains in the zones may be very popular while others may not. For example, consider a zone with a popular web server (www) and a rarely used mail server (mx) in the same zone. These domains lie in very different places on the domain popularity curve but are still (presumably) signed simultaneously. Lastly, DNSSEC deployment may be heavily stimulated at the TLD level, which causes domains within a particular TLD to be signed in quick succession. For example, within the .nl TLD, 43.7% of all domains are already signed, while within .com only 0.45% of all domains are signed. Thus, DNSSEC deployment is presumably not (heavily) dependent on domain name popularity, but rather on whether the parent TLD of a domain actively encourages deployment of DNSSEC.

CPU load: In Sec. 8.1, the different versions of OpenSSL already showed that there was room for improvement in the implementation of cryptographic algorithms. There was a major improvement between version 1.0.0 (from March 2010) and version 1.0.1f (from January 2014), and compared to RSA, ECDSA is a recently developed algorithm. Therefore, it is likely that more improvement is possible. The case for Ed25519 is even more favourable. The main implementation of the protocol is a proof-of-concept from its inventors, which indicates that the development of the algorithm is in its infancy and has much room for improvement. Therefore, the performance of ECC algorithms may become relatively better in the near future, which makes the deployment of ECC instead of RSA even more favourable.

Worst of the Worst Franchises

Franchisee satisfaction is one of the most important factors every entrepreneur should consider before investing in a franchise. While we don’t name any specific companies, this report includes franchisee satisfaction data compiled from some of the worst companies that we have surveyed. This information serves as a great benchmark for all the things you want to AVOID in any franchise business.

Modelling of the parametric yield in decananometer SRAM-Arrays

In order to find the worst-case distance, an optimization problem in a multidimensional space must be solved. For this task we used a tool called WiCkeD from MunEDA (http://www.muneda.com). This tool uses a sequential quadratic programming (SQP) algorithm to find the worst-case distance [Antreich, K. (2000)]. An external TCL script was used to control the WiCkeD program to run the worst-case distance analysis, first with just global variation, and then with local variations using the worst-case parameters obtained by the global WCD analysis. A SPICE simulator simulated the performances of the SRAM core cell for a set of parameter values, and a Perl script wrote the simulation results back to WiCkeD for further evaluation.
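
For orientation, the worst-case distance problem that the SQP algorithm solves can be stated in the following generic form (a textbook formulation added for context; the notation is ours, not WiCkeD's):

\[
d_{\mathrm{wc}} = \min_{\Delta p} \, \lVert \Delta p \rVert_{2}
\quad \text{subject to} \quad f(p_{0} + \Delta p) = f_{\mathrm{spec}},
\]

i.e. the smallest deviation from the nominal parameter set p_0, measured in normalized (standard-deviation) units, that drives a cell performance f to its specification limit f_spec; the larger the worst-case distance, the higher the parametric yield.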

Quid pro Quo: National Institutions and Sudden Stops in International Capital Movements

This setup highlights the “insurance” component of international credit markets, since with the world rate of interest equal to the domestic marginal product of capital, there is no expected gain or loss from international borrowing. But borrowing does help shield the home country from unexpected shocks, since loan repayment can be made contingent on the state of nature (the shock ε), in such a way that the country makes relatively larger net payments when output is high and relatively smaller payments when output is low. Note that if international credit markets did not provide some insurance against stochastic shocks (i.e., β = 0), then …

It’s not about what, it’s about who you know: social media use in organisations

This paper investigates the impact of social media use on communication processes within organisations. Findings from three qualitative comparative case studies are analysed through the lens of the resource-based view of organisations. The analysis follows comparative logic, focusing on similarities and differences in case settings and outcomes. Each of the cases represents an organisation with a workforce of similar size, composition and distribution, but with a qualitatively different approach to social media use and, as expected, different effects of social media on processes and capabilities. The findings suggest that the value of social media, in contrast to other IT technologies, is derived from its use for relationship-building (who actors are connected to and how) rather than information storage and dissemination (what actors know and where they find it).

An optimal fixed-priority assignment algorithm for supporting fault-tolerant hard real-time systems

The kinds of faults with which we are dealing are those that can be treated at the task level. Consider, for example, design faults. It may be possible to use techniques such as exception handling or recovery blocks to perform appropriate recovery actions [3], modeled here as alternative tasks. In addition, one may consider some kinds of transient faults, where either the re-execution of the faulty task or the execution of some compensation action is effective. For example, suppose that transient faults in a sensor (or network) prevent an expected signal from being correctly received (or received at all) by the control system. This kind of system fault can easily be modeled by alternative tasks, which can be released to carry out a compensation action. However, it is important to emphasize that we are not considering more severe kinds of faults that cannot be treated at the task level. For example, if a memory fault causes the value of one bit to be arbitrarily changed, the operating system may fail, compromising the whole system. Tolerating these kinds of faults requires spatial redundancy (perhaps using a distributed architecture) and is not covered in this paper. Our work fits the engineering approach that uses temporal redundancy at the processor level and spatial redundancy at the system level.

Finite volume analysis of temperature effects induced by active MRI implants: 2. Defects on active MRI implants causing hot spots

Movie 1: time development of the temperature map for tissue. This movie (animated GIF) shows the time development for the case of a hot spot in tissue without wire and with blood perfusion over a period of 900 s, which is the maximum permitted time for imaging the trunk with an SAR of 4 W/kg (sequence of table 2, manufacturer declaration of 4 W/kg). For complete information, a 3D perspective view is shown in figure a. In figure b, for a part of the simulation volume, all temperature increases above 10 K are replaced by 10 K to allow a scaling which shows the temperature increases around the hot spot more clearly. In figure c a cross section along the center symmetry axis (the wire) is shown. This plot uses the symmetry to the center axis and the symmetry to the center plane at x = 0 orthogonal to the center axis to show the temperature map of the entire implant, i.e. the total simulation volume. Figure d contains similar information to c: instead of a 3D perspective view of only tissue temperature increases, a map including the wire temperatures coded in colors is shown. At the end of the movie, two different simulations are shown alternately. They indicate the changes of the temperature map after 900 s due to a larger simulation volume shifting the heat sink further away from the hot spot. One of the alternating results was calculated using a 250 × 250 matrix for a distance of 0 mm to 12.5 mm for r and x respectively. The second map was calculated for a 500 × 500 matrix for a distance of 0 mm to 25 mm for r and x respectively. Only the inner 250 × 250 points are plotted, for the same size in both calculations. It can be seen, especially in figure b, that the temperature distribution is almost identical apart from the fact that, for x ≈ 12.5 mm and r ≈ 12.5 mm, the simulation with more cells shows a slight deviation from zero. The simulation with the smaller matrix shows a straight zero line, which is natural because this is the boundary condition for this simulation. The small difference points out that the boundary condition with a heat sink works very well as long as the absolute value of the gradient at the boundary is low.

The incremental cost effectiveness of introducing PCR for early detection of E coli infections associated with acute diarrhea compared to conventional microbiological methods

A one-way sensitivity analysis for the prevention of HUS development was performed. The results of the one-way sensitivity analysis for no HUS development showed that the ICER is most influenced by changes in the length of hospital stay in the ICU, especially for scenario 3, since this was associated with a higher incidence of HUS. This indicates that when the use of the PCR technique leads to a shorter hospital stay in the ICU (in days), there is a decrease in cost per HUS case prevented of €29,778 compared to the base-case ICER of €301,189. Furthermore, varying the sensitivity and specificity of the PCR for a true positive diagnosis produced no change in the costs, because the lower and upper limits did not differ from 100%. In addition, no change in the costs was seen for the specificity of the CMM. The difference in percentages between the lower and upper limits of the CMM was very small and did not influence the ICER.
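
For reference, the ICER reported in this analysis has the standard definition (added for context; the subscripts are ours):

\[
\mathrm{ICER} = \frac{C_{\mathrm{PCR}} - C_{\mathrm{CMM}}}{E_{\mathrm{PCR}} - E_{\mathrm{CMM}}},
\]

the difference in costs between the PCR-based and conventional (CMM) strategies divided by the difference in effects, here expressed as the cost per HUS case prevented.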
