We compare MS delta with deltas for European options computed by other kinds of binomial trees, namely the tree methods introduced by Chung and Shackleton, Tian, and Leisen and Reimer; these are summarized in Figure 11. To obtain delta using the binomial trees of Chung and Shackleton and of Tian, we employed the extended binomial tree approach. For Leisen and Reimer's tree we used the finite difference approach instead, because we wanted to keep the calculations simple. Leisen and Reimer introduced a new kind of binomial tree that computes option prices efficiently; they construct two kinds of trees using two different transform formulas. Because no significant difference is observed between the two methods of Leisen and Reimer, we used "Method-1" as described in their article. As shown in Figure 11, the Greeks calculated by the trees of Chung and Shackleton and of Tian converge to the true value smoothly; however, MS delta converges faster than these methods. Delta computed by Leisen and Reimer's tree converges considerably fast if one uses trees with an odd number of steps. It should be pointed out that it is not easy to compute vega and rho with Leisen and Reimer's binomial tree.
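The extended binomial tree approach mentioned above can be sketched in a few lines of Python: start a CRR tree two steps before the valuation date, so that delta can be read directly from the three time-0 nodes without re-pricing under a bumped spot. This is a generic illustration in the style of the Pelsser-Vorst extended tree, not the exact construction used with the Chung-Shackleton or Tian trees; the function name and arguments are our own.

```python
import math

def crr_extended_delta(S, K, r, sigma, T, N, payoff):
    """European option price and delta from a CRR tree extended two
    steps back in time (Pelsser-Vorst style extended tree sketch)."""
    dt = T / N
    u = math.exp(sigma * math.sqrt(dt))
    d = 1.0 / u
    p = (math.exp(r * dt) - d) / (u - d)   # risk-neutral up probability
    disc = math.exp(-r * dt)
    M = N + 2                               # two extra steps before time 0
    # terminal payoffs at step M; node j has price S * u**j * d**(M - j)
    v = [payoff(S * u**j * d**(M - j)) for j in range(M + 1)]
    # backward induction down to step 2, which is time 0 of the option
    for step in range(M - 1, 1, -1):
        v = [disc * (p * v[j + 1] + (1 - p) * v[j]) for j in range(step + 1)]
    # at step 2 the nodes are S*d^2, S, S*u^2 (CRR has u*d = 1)
    delta = (v[2] - v[0]) / (S * u * u - S * d * d)
    return v[1], delta                      # middle node is today's price
```

The middle node reproduces the ordinary CRR price, while the two outer nodes give delta from prices that already live on the tree.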
The interests of vulnerable groups cannot be guaranteed in water pollution conflicts because of their weaker capacity and the limited channels for expressing their interests. The protection of vulnerable people's interests in water pollution conflicts has attracted the attention of international scholars. This paper constructs a market mechanism that enables vulnerable people to participate in emission trading. Vulnerable people can buy American put options in the emission trading market. When the emission price runs below the contract price, they can profit by exercising the option; when it runs above the contract price, they can let the right lapse. The binomial tree option pricing model can help vulnerable people make this decision through analysis of the value of the American put option.
First of all, the model in this paper is exactly the same as the binomial tree in my earlier paper, Brogi (2014). What differs now is that, while in my previous paper the tree was implemented by Monte Carlo simulation, i.e. by simulating price trajectories along the tree, in this paper the whole (recombining) underlying price tree is calculated without resorting to Monte Carlo, just like, for example, the classic Cox, Ross and Rubinstein (1979) binomial tree (CRR tree). This means that the option price is obtained virtually instantly using, for example, Matlab on a standard PC, whereas Monte Carlo simulation was rather lengthy and the resulting option price had a standard error. The main features that make the tree appealing are unchanged: excess kurtosis and negative skewness of the price distribution of the underlying security. For more details, please see the simulation in Brogi (2014).
The above tree can easily be implemented to price European call or put options. Thousands of paths along the tree are simulated; for each path the payoff of the European option is calculated, and the price of the option equals the arithmetic mean of the thousands of discounted payoffs.
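That procedure can be sketched in a few lines of Python. For illustration a plain CRR parameterization is assumed here (the paper's own tree differs, e.g. in its skewness and kurtosis); the function name is ours.

```python
import math
import random

def european_price_mc(S, K, r, sigma, T, N, n_paths, call=True, seed=0):
    """Monte Carlo pricing of a European option by simulating
    up/down moves along a recombining binomial (CRR-style) tree."""
    dt = T / N
    u = math.exp(sigma * math.sqrt(dt))
    d = 1.0 / u
    p = (math.exp(r * dt) - d) / (u - d)    # risk-neutral up probability
    rng = random.Random(seed)
    disc = math.exp(-r * T)
    total = 0.0
    for _ in range(n_paths):
        ups = sum(rng.random() < p for _ in range(N))  # up moves on this path
        ST = S * u**ups * d**(N - ups)                 # terminal price
        total += max(ST - K, 0.0) if call else max(K - ST, 0.0)
    return disc * total / n_paths            # mean of discounted payoffs
```

The estimate carries a standard error of order one over the square root of the number of paths, which is exactly the drawback of the simulation approach noted in the preceding excerpt.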
Admittedly, the American option price and the critical stock price are two closely related quantities; indeed, every method for pricing American options must deal with the critical stock price boundary. However, it is important for researchers to realize that, although deeply related, approximating the American option price and approximating the current critical stock price are two different problems. In principle, one could have a very accurate approximation for the prices whose performance on the current critical stock price is not as good. This is indeed the case for the EXP3 method in Ju (1998): a quick look at Figure 1 in Ju (1998) shows that EXP3 gives a very inaccurate current critical stock price, by its design. Conversely, in principle one could have a very accurate approximation for the critical stock price that gives large errors in option prices. One simple example is an approximation that uses the critical stock price from a very accurate method, such as a very fine binomial tree, but uses −1 as the price for the American option. This example is of course dull, but it illustrates the point. A somewhat less dull but far less obvious example is the quadratic approximation in Ju and Zhong (1999): numerical experiments show that if one uses the true critical stock price in their method instead of the rough quadratic approximation, the method's accuracy often actually decreases.
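To make the distinction concrete, here is a minimal sketch of how a fine binomial tree yields the current critical stock price: price the American put on a CRR tree, then bisect for the boundary spot below which the put's value equals its intrinsic value. Function names, parameters, and tolerances are illustrative, not taken from the methods discussed above.

```python
import math

def american_put(S, K, r, sigma, T, N):
    """American put price on a CRR binomial tree."""
    dt = T / N
    u = math.exp(sigma * math.sqrt(dt)); d = 1.0 / u
    p = (math.exp(r * dt) - d) / (u - d)
    disc = math.exp(-r * dt)
    v = [max(K - S * u**j * d**(N - j), 0.0) for j in range(N + 1)]
    for step in range(N - 1, -1, -1):   # backward induction with early exercise
        v = [max(disc * (p * v[j + 1] + (1 - p) * v[j]),
                 K - S * u**j * d**(step - j))
             for j in range(step + 1)]
    return v[0]

def critical_price(K, r, sigma, T, N=200, tol=1e-3):
    """Bisect for the current critical stock price S*: at and below S*,
    the American put's value equals its intrinsic value K - S."""
    lo, hi = 0.01 * K, K
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if american_put(mid, K, r, sigma, T, N) - (K - mid) > tol:
            hi = mid        # still strictly above intrinsic: above the boundary
        else:
            lo = mid        # price equals intrinsic: at or below the boundary
    return 0.5 * (lo + hi)
```

Note that a single accurate price at one spot says nothing about where this boundary sits, which is the asymmetry the paragraph above emphasizes.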
Programming the same binomial American option pricing problem on the Quadro NVS 160M is very different from working with the Intel P8600, because of the SIMT (single instruction, multiple threads) execution model of the NVIDIA GPU. The NVIDIA CUDA 4.0 SDK comes with an example in which thousands of European calls are priced using the binomial method; a single one-dimensional thread block is used to price a single call option. The algorithm used in the pricing of a single call is briefly explained in . To avoid frequent access to the off-chip global memory and to make use of the on-chip shared memory as much as possible, the algorithm partitions the binomial tree into blocks of multiple levels. The partition pattern is very similar to the one shown in Fig. 3, except that NVIDIA's algorithm requires all blocks to have the same number of levels, and this number must be a multiple of two. The algorithm also uses two buffers in shared memory. It begins by allocating a one-dimensional buffer in global memory; all the threads in the thread block compute the option's payoffs at the leaf nodes and save them into this buffer. When processing a block of interior nodes, the threads first load the computed option values from the global buffer into one of the two shared buffers. The computation is then carried out between the two shared buffers, after which the results are copied back to the appropriate positions in the global buffer. The threads then move on to the next block of levels to repeat the same processing.
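A rough sequential analogue of this double-buffer scheme can be written in Python (not CUDA): the tree is processed in multi-level blocks, with values staged between two small working buffers standing in for shared memory and one "global" array standing in for off-chip memory. The function name and block size are illustrative, not NVIDIA's code.

```python
import math

def binomial_call_blocked(S, K, r, sigma, T, N, block=16):
    """Backward induction over a CRR tree processed in blocks of
    `block` levels, mimicking the shared-memory double-buffer scheme."""
    dt = T / N
    u = math.exp(sigma * math.sqrt(dt)); d = 1.0 / u
    p = (math.exp(r * dt) - d) / (u - d)
    disc = math.exp(-r * dt)
    # "global" buffer: option payoffs at the leaf nodes
    glob = [max(S * u**j * d**(N - j) - K, 0.0) for j in range(N + 1)]
    level = N
    while level > 0:
        steps = min(block, level)       # levels folded in this pass
        buf_a = glob[: level + 1]       # load from the global buffer
        for s in range(steps):
            n = level - s               # size of the next level up, minus 1
            buf_b = [disc * (p * buf_a[j + 1] + (1 - p) * buf_a[j])
                     for j in range(n)]
            buf_a = buf_b               # swap the two "shared" buffers
        level -= steps
        glob[: level + 1] = buf_a       # store back to the global buffer
    return glob[0]
```

Each pass touches the global array only once on load and once on store, which is the point of the blocking on the GPU.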
The second and more attractive contribution is to provide a "financial forecast" of the index. For that purpose we use the Binomial Tree Model (BTM) as the distributional assumption for the stock price. The standard Geometric Brownian Motion (GBM) could also be used, but it requires simulation. To avoid that computational burden, we provide an asymptotic expansion for the first moment of the RSI which agrees with the BTM. We believe the binomial model is a good motivation for showing that forecasting the RSI is similar to pricing a European option.
While this approach addresses key limitations of previous approaches, such as defining health states by categorical event frequency or response status, some potential improvements could be made. The implementation of a negative binomial regression with an upper bound (28 MDs) could be considered, and treatment-visit interactions could be included. Additionally, the data are required to fit the smooth distributions of the model; however, this is not always the case. The predicted distributions observed in the CM study did not fit as well as in the EM study, owing to the greater spread of the distribution in the CM study and possibly also to differences in the patient populations between the EM and CM cohorts. Therefore, alternative approaches may be required to better model these cohorts.
Therefore, we apply the Beta(α, β) prior distribution and a normal approximation to construct an approximate Bayesian CI. Recall that the Beta(α, β) prior is conjugate for the binomial distribution: if X ~ Binomial(n, p) and p has the prior distribution Beta(α, β), then the posterior distribution of p is Beta(x + α, n − x + β). Using the normal approximation, we get:
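Under these assumptions the interval is straightforward to compute: take the posterior mean a/(a+b) and posterior standard deviation of the Beta(a, b) posterior and form a symmetric normal-approximation interval. A minimal sketch, with an illustrative uniform Beta(1, 1) default prior and z = 1.96 for 95% coverage (names are ours):

```python
import math

def bayes_approx_ci(x, n, alpha=1.0, beta=1.0, z=1.96):
    """Approximate Bayesian CI for a binomial proportion: Beta(alpha, beta)
    prior, Beta(x + alpha, n - x + beta) posterior, normal approximation."""
    a, b = x + alpha, n - x + beta
    mean = a / (a + b)                               # posterior mean
    sd = math.sqrt(a * b / ((a + b) ** 2 * (a + b + 1)))  # posterior std dev
    return mean - z * sd, mean + z * sd
```

For example, x = 40 successes in n = 100 trials with the uniform prior gives a posterior Beta(41, 61) and an interval of roughly (0.31, 0.50).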
Texture is an important characteristic of many natural images. It plays an important role in human visual perception and provides information for recognition and interpretation; to enhance realism in graphical systems, the study of textures is necessary. We consider a texture to be a stochastic, possibly periodic, two-dimensional image field, and we use Markov random fields as texture models. In particular, we consider the binomial model, in which each point in the texture has a binomial distribution whose parameter is controlled by its neighbors and by the number of gray levels; the field parameters control the strength and direction of clustering in the image. Experiments show that directionality, coarseness, gray level distribution, and sharpness can all be controlled by the choice of parameters, and the power of the binomial model to produce blurry, sharp, line-like, and blob-like textures is demonstrated. Generated textures are then estimated using an approximate maximum likelihood estimation called the coding method. The estimated parameters are used as input to the generation procedure to see whether the statistical approach is sufficient when structural information is not known.
We make comparisons between the negative binomial random variable and the negative binomial-Lindley random variable with respect to the likelihood ratio order, stochastic order, convex order, expectation order, and uniform more variable order. The following lemma will be useful in proving the main results.
As mentioned earlier, we aim to extend chain binomial models to a model which includes immigration. The application of the theory of Markov chains to such a model will be discussed in Chapter Three. Gani and Jerwood (1971) show that, for the Greenwood model, the susceptible counts at times t = 0,1, 2, ... may be regarded as a Markov chain embedded in a (continuous time) pure death process. Similarly, for the Reed-Frost model, over a unit time interval (t, t + 1], the susceptible count at time t + 1 (given the susceptible count at time t) may also be approximated by a continuous time process.
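As a concrete illustration of the Greenwood model mentioned above (our own sketch, not code from Gani and Jerwood): the susceptible count forms a Markov chain in which, at each step, every remaining susceptible independently escapes infection with a fixed probability q, so the next count is a binomial thinning of the current one.

```python
import random

def greenwood_chain(s0, q, steps, seed=1):
    """Simulate susceptible counts in the Greenwood chain binomial model:
    S_{t+1} | S_t ~ Binomial(S_t, q), where q is the per-step escape
    probability (in the Reed-Frost model q would instead depend on the
    current number of infectives)."""
    rng = random.Random(seed)
    s = s0
    path = [s0]
    for _ in range(steps):
        s = sum(rng.random() < q for _ in range(s))   # binomial(S_t, q) draw
        path.append(s)
    return path
```

Because each count depends only on the previous one, the sequence is exactly the embedded Markov chain discussed in the text; adding immigration would amount to adding new susceptibles at each step.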
In this work, we present a novel method for approximating a normal distribution with a weighted sum of normal distributions. The approximation is used for splitting normally distributed components in a Gaussian mixture filter, such that components have smaller covariances and cause smaller linearization errors when nonlinear measurements are used for the state update. Our splitting method uses weights from the binomial distribution as component weights. The method preserves the mean and covariance of the original normal distribution, and in addition, the resulting probability density and cumulative distribution functions converge to the original normal distribution when the number of components is increased. Furthermore, an algorithm is presented to do the splitting such as to keep the linearization error below a given threshold with a minimum number of components. The accuracy of the estimate provided by the proposed method is evaluated in four simulated single-update cases and one time series tracking case. In these tests, it is found that the proposed method is more accurate than other Gaussian mixture filters found in the literature when the same number of components is used and that the proposed method is faster and more accurate than particle filters.
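A sketch of binomial-weight splitting under simplifying assumptions (equal component variances and symmetric, equally spaced means; the spacing rule below is our illustrative choice, not necessarily the paper's algorithm): weights come from Binomial(n, 1/2), and the spacing is chosen so the mixture preserves the original mean and variance.

```python
import math

def split_normal(mu, var, n, ratio=0.5):
    """Split N(mu, var) into n+1 components with binomial weights
    C(n, i) / 2**n. Each component gets variance ratio*var, and the
    means are spaced so the mixture keeps mean mu and variance var."""
    w = [math.comb(n, i) / 2 ** n for i in range(n + 1)]
    comp_var = ratio * var
    # Binomial(n, 1/2) has variance n/4; choose spacing c so that
    # comp_var + c**2 * n / 4 == var
    c = 2.0 * math.sqrt((1.0 - ratio) * var / n)
    means = [mu + (i - n / 2) * c for i in range(n + 1)]
    return w, means, comp_var
```

The shrunken component variance (here half the original, controlled by `ratio`) is what reduces linearization error when each component is propagated through a nonlinear measurement.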
We introduced some useful preliminary concepts from probability theory and basic concepts of stochastic calculus as background for the paper. Later, we introduced two common option pricing models, the multi-step binomial model and the Black-Scholes model, as discrete-time and continuous-time models respectively, using probability theory and basic stochastic calculus.
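The two models can be put side by side in a short sketch: a multi-step (CRR-style) binomial price converges to the Black-Scholes price as the number of steps grows. This is a generic illustration; the function names are ours.

```python
import math

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def bs_call(S, K, r, sigma, T):
    """Black-Scholes European call (continuous-time model)."""
    d1 = (math.log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * math.sqrt(T))
    d2 = d1 - sigma * math.sqrt(T)
    return S * norm_cdf(d1) - K * math.exp(-r * T) * norm_cdf(d2)

def crr_call(S, K, r, sigma, T, N):
    """Multi-step binomial European call (discrete-time model)."""
    dt = T / N
    u = math.exp(sigma * math.sqrt(dt)); d = 1.0 / u
    p = (math.exp(r * dt) - d) / (u - d)
    disc = math.exp(-r * dt)
    v = [max(S * u**j * d**(N - j) - K, 0.0) for j in range(N + 1)]
    for step in range(N - 1, -1, -1):
        v = [disc * (p * v[j + 1] + (1 - p) * v[j]) for j in range(step + 1)]
    return v[0]
```

With a few hundred steps the discrete-time price already agrees with the Black-Scholes price to within a few cents for typical at-the-money parameters.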
Background & Aim: Consider a sequence of independent Bernoulli trials with p denoting the probability of success at each trial. With this definition, the probability that the nth success is preceded by r failures follows the negative binomial (NB) distribution. The NB model can be derived in two different ways. First, the NB can be viewed as a Poisson-gamma mixture. Second, the NB can be derived as a full member of a single-parameter exponential family distribution, and therefore considered within the framework of generalized linear models (GLMs).
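The Poisson-gamma mixture form can be checked by simulation: draw λ from a gamma distribution with shape r and scale (1 − p)/p, then draw Y from Poisson(λ). Marginally, Y is NB with mean r(1 − p)/p and variance r(1 − p)/p². A minimal sketch (names are ours; the Poisson sampler is Knuth's method, adequate for small means):

```python
import math
import random

def poisson_draw(lam, rng):
    """Knuth's multiplicative Poisson sampler (fine for moderate lam)."""
    L = math.exp(-lam)
    k, prod = 0, rng.random()
    while prod > L:
        prod *= rng.random()
        k += 1
    return k

def nb_poisson_gamma(r, p, n, seed=0):
    """Draw n NB(r, p) variates (failures before the r-th success) as a
    Poisson-gamma mixture: lam ~ Gamma(shape=r, scale=(1-p)/p), then
    Y | lam ~ Poisson(lam)."""
    rng = random.Random(seed)
    return [poisson_draw(rng.gammavariate(r, (1 - p) / p), rng)
            for _ in range(n)]
```

With r = 5 and p = 0.5 the draws should show mean about 5 and variance about 10, i.e. the overdispersion (variance exceeding the mean) that motivates the NB over the Poisson.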
In the application of the Markov chain binomial model to the Brassica data of Skellam using the method of moments, Table 2 illustrates that the variance expression (5) results in a fit little different from that provided by the method of moments using Rudolfer's corrected values.
If the dependent variable Y counts the number of events during a specified time interval t, then the observed rate Y/t can be modeled using the traditional negative binomial model above, with a slight adjustment. We note that t can also be thought of as an area or subpopulation size, among other interpretations that lead to considering Y/t a rate.
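The "slight adjustment" is the usual offset: with a log link, modeling the rate amounts to writing log E[Y] = log t + xβ, with the coefficient of log t fixed at one. A minimal sketch with an intercept-only Poisson model, where the offset MLE has a closed form (the NB case is fitted numerically, but the offset mechanics are the same; names and parameter values are ours):

```python
import math
import random

def poisson_draw(lam, rng):
    """Knuth's multiplicative Poisson sampler (fine for moderate lam)."""
    L = math.exp(-lam)
    k, prod = 0, rng.random()
    while prod > L:
        prod *= rng.random()
        k += 1
    return k

def fit_log_rate(ys, ts):
    """Intercept-only count model with offset log(t): log E[Y] = log t + b0.
    For a Poisson likelihood the MLE is b0 = log(sum(y) / sum(t))."""
    return math.log(sum(ys) / sum(ts))

rng = random.Random(3)
ts = [(i % 4 + 1) * 0.5 for i in range(4000)]     # exposures 0.5 .. 2.0
true_rate = 1.5
ys = [poisson_draw(t * true_rate, rng) for t in ts]
rate_hat = math.exp(fit_log_rate(ys, ts))          # recovers ~1.5
```

The fitted exp(b0) is simply the total event count divided by the total exposure, i.e. the observed rate.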
An efficient new implementation incorporating neighborhood-based segmentation and binomial classifier tree-based sorting is presented and applied to lung cancer images. This representation provides overall control in detecting noisy speckle in biomedical images, reducing time and computational cost. The proposed approach largely alleviates the problem of texture classification on medical images using the rich set of features obtained from the neighborhood-based segmentation, and noisy speckle images are detected efficiently using the tree-based sorting. Finally, each collection of features is sorted by the binomial classifier tree-based sorting into wounded and non-wounded areas, to effectively categorize the rich features. Simulations conducted in Matlab demonstrated the effectiveness of the method in terms of time consumption, computational cost, feature categorization efficiency, and running time: on average, the NS-BCTS approach required 30 s and 17% of the running time of the other two models. The experimental results also show that the proposed method segments lung cancer images of different scales and quality levels with improved accuracy and efficiency.
The purpose of this paper is to introduce an alternative to standard binomial and trinomial trees, called the "Willow" tree because of its characteristic shape. The motivation for this research is the following. Consider the pricing of a European stock option using a binomial tree with 100 time steps. Using this model, the nodes at maturity are spread uniformly across 20 standard deviations of the theoretical distribution of the underlying. The range of nodes in a standard tree spreads out faster, namely linearly in time, than does the (normal) marginal probability distribution, which expands as the square root of time. As a result, many very low probability events are included in the backwards induction algorithm. This inefficiency can be mitigated by "pruning" the tree, but this solves only part of the problem, and in an awkward way that is difficult to generalize to multiple factors. A more elegant and convenient remedy for this situation is presented that offers other important benefits, including radically accelerated convergence, particularly for multi-factor models.
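The growth rates quoted above are easy to verify: in a CRR-style tree with N steps, the extreme log-price nodes at step k sit k·σ·√Δt from the center (linear in time), while the marginal distribution's standard deviation is σ·√t (square root of time). At maturity the ratio is √N, so a 100-step tree indeed spans ±10, i.e. 20, standard deviations. A small sketch (names are ours):

```python
import math

def crr_node_range(sigma, T, N):
    """Half-width (in log-price) of a CRR tree at several levels versus
    the one-standard-deviation band of the lognormal marginal."""
    dt = T / N
    rows = []
    for k in (N // 4, N // 2, N):
        t = k * dt
        tree_halfwidth = k * sigma * math.sqrt(dt)   # grows linearly in t
        dist_sd = sigma * math.sqrt(t)               # grows like sqrt(t)
        rows.append((t, tree_halfwidth, dist_sd, tree_halfwidth / dist_sd))
    return rows
```

The final ratio equals √N exactly, which is the gap the Willow tree's node placement is designed to eliminate.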
Almost all data on road traffic accidents are count data. Conventional models (Poisson or negative binomial regression) have long been used to analyze accident frequency [9-12]. However, Poisson and negative binomial models do not account for the many observed zeros common in accident data. For data with many observed zeros, extended conventional models using Zero-Inflated Poisson or Zero-Inflated Negative Binomial specifications have been applied, and Zero-Inflated models are found to deal much better with accident data containing many zeros. However, these models are not broadly used for accident data in Malaysia. When we have combined time series and cross-sectional data (also known as panel data), the appropriate count models are Fixed Effects Poisson, Fixed Effects Negative Binomial, Random Effects Poisson, Random Effects Negative Binomial, and Dynamic Panel models. The Fixed Effects Negative Binomial model was applied to panel count data for 25 countries from 1970 to 1999, and the results showed that the implementation of road safety regulation, improvements in the quality of political institutions and medical care, and technological developments have contributed to reducing motorcycle deaths. One study presented an analysis of road accident occurrence using a panel data approach: Fixed Effects Poisson and Negative Binomial models were used to analyze accident data for 14 states in Malaysia over the period 1996 to 2007. Among the factors considered were the monthly number of registered vehicles in each state, the amount of rainfall, the number of rainy days, a time trend, and monthly seasonal effects.
Their results indicate that road accident occurrence is positively associated with increases in the number of registered vehicles, the amount of rain, and time, while by season, accident occurrence is higher in October, November, and December.
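The excess-zero problem that motivates the zero-inflated models can be illustrated with a small simulation (illustrative names, not the cited studies' estimation code): a zero-inflated Poisson produces far more zeros than a plain Poisson with the same mean would predict.

```python
import math
import random

def zip_draws(pi_zero, lam, n, seed=0):
    """Zero-inflated Poisson: with probability pi_zero emit a structural
    zero, otherwise draw from Poisson(lam) (Knuth's sampler)."""
    rng = random.Random(seed)
    out = []
    for _ in range(n):
        if rng.random() < pi_zero:
            out.append(0)                   # structural zero
        else:
            L = math.exp(-lam)
            k, prod = 0, rng.random()
            while prod > L:
                prod *= rng.random()
                k += 1
            out.append(k)                   # ordinary Poisson count
    return out
```

With 40% structural zeros and λ = 3, roughly 43% of draws are zero, while a plain Poisson matched to the sample mean (about 1.8) would predict only about exp(−1.8) ≈ 17% zeros; this gap is what a conventional model cannot absorb.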