The analysis of Lesaffre & Spiessens (2001) on the Toenail dataset found that parameter estimates could vary significantly even amongst several Gauss-Hermite quadrature approximations. Hence it is not surprising that there are large discrepancies between the AGHQ, GVA and PQL approximations for this dataset. On the other hand, the PQL and GVA approximations for the bacteria data are, in comparison, reasonable. Both PQL and GVA methods are based on Gaussian approximations of p(u|y). In particular, for fixed β and σ₀, PQL is equivalent to the
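The Gaussian approximation of p(u|y) that both methods rest on can be sketched as a Laplace-type approximation: locate the posterior mode by Newton iterations, then use the negative inverse Hessian there as the covariance. This is a generic sketch under assumed names (`laplace_approx`, user-supplied gradient and Hessian), not the exact PQL or GVA algorithm.

```python
import numpy as np

def laplace_approx(grad, hess, u0, iters=50):
    """Gaussian (Laplace-type) approximation to a posterior p(u | y):
    Newton iterations locate the mode u*, and the approximation is
    p(u | y) ~= N(u*, -H(u*)^-1), with H the Hessian of the log posterior.
    Generic sketch only; PQL and GVA refine this idea in different ways."""
    u = np.asarray(u0, dtype=float)
    for _ in range(iters):
        # Newton step: u <- u - H(u)^-1 g(u)
        u = u - np.linalg.solve(hess(u), grad(u))
    return u, np.linalg.inv(-hess(u))
```

For a Gaussian log posterior the mode and curvature are recovered exactly; for a logistic mixed model the same iteration gives the working Gaussian approximation.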
The combination of unit-level ascertainment and a full-rank, non-sparse covariance structure is very common in statistical genetics (Golan et al., 2014), and is often encountered in other scientific domains, such as geostatistics and GP classification (Diggle et al., 1998; Chu et al., 2010; Ziegler et al., 2014; Young et al., 2013). Ascertained sampling is almost inevitable when studying rare phenomena, and the increasing dimensionality of studied data often necessitates the introduction of random rather than fixed effects, which in turn induces full-rank, non-sparse dependency structures. Additionally, it is often more convenient to sample densely within a small number of clusters than to collect a large number of clusters (Bellamy et al., 2005; Zhang, 2004; Glidden and Vittinghoff, 2004), leading to non-sparse, full-rank dependency structures at the cluster level. Hence, we expect our work to be applicable in diverse scientific fields.
Abstract: In the present paper, we test the use of Markov-Switching (MS) models with time-fixed or Generalized Autoregressive Conditional Heteroskedasticity (GARCH) variances to enhance the performance of a U.S. dollar-based portfolio that invests in the S&P 500 (SP500) stock index, the 3-month U.S. Treasury bill (T-BILL) or the 1-month volatility index (VIX) futures. For the investment algorithm, we propose the use of two- and three-regime, Gaussian and Student-t, MS and MS-GARCH models to forecast the probability of high-volatility episodes in the SP500 and to determine the investment level in each asset. To test the algorithm, we simulated 8 portfolios that invested in these three assets on a weekly basis from 23 December 2005 to 14 August 2020. Our results suggest that the use of MS and MS-GARCH models and VIX futures leads the simulated portfolio to outperform a buy-and-hold strategy in the SP500. We also found that this result holds only in high- and extreme-volatility periods. As a recommendation for practitioners, we found that our investment algorithm should be used only by institutional investors, given the impact of stock trading fees.
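The regime-probability forecasts such an algorithm relies on come from the Hamilton filter. The sketch below, under assumed names (`hamilton_filter`) and a simplified time-fixed-variance two-regime Gaussian MS model rather than the paper's full MS-GARCH specification, shows how filtered probabilities of the high-volatility regime are produced:

```python
import numpy as np

def hamilton_filter(y, mu, sigma, P):
    """Filtered regime probabilities for a Gaussian Markov-switching model.
    y: observed returns; mu, sigma: per-regime means and std devs;
    P: transition matrix with P[i, j] = Pr(next regime = j | current = i).
    Time-fixed variances only (a sketch, not the full MS-GARCH model)."""
    # start from the ergodic (stationary) distribution of P
    evals, evecs = np.linalg.eig(P.T)
    xi = np.real(evecs[:, np.argmax(np.real(evals))])
    xi = xi / xi.sum()
    probs = np.zeros((len(y), len(mu)))
    for t, yt in enumerate(y):
        pred = P.T @ xi                              # one-step-ahead regime probs
        lik = np.exp(-0.5 * ((yt - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))
        xi = pred * lik
        xi = xi / xi.sum()                           # Bayes update
        probs[t] = xi
    return probs
```

The investment rule then reduces to thresholding `probs[:, 1]` (the high-volatility regime) to switch the portfolio between the SP500, T-BILL and VIX futures.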
2) Extensions of the EOQ model: EOQ extensions include an application to retail cycle stock inventories, the addition of cost changes under a finite or infinite time horizon, the inclusion of storage size considerations, and the addition of damage costs.  provided an exhaustive summary of the research on EOQ models that handle partial backordering, and some more recent models with partial or full backordering include those by  and .
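All of these extensions build on the classic EOQ formula, Q* = sqrt(2DS/H), which balances ordering and holding costs. A minimal sketch (the function name `eoq` and the example figures are illustrative):

```python
import math

def eoq(demand_rate, order_cost, holding_cost):
    """Classic economic order quantity Q* = sqrt(2*D*S / H), which
    minimises the sum of annual ordering and holding costs.
    D: annual demand, S: fixed cost per order, H: holding cost per unit-year."""
    return math.sqrt(2 * demand_rate * order_cost / holding_cost)

# e.g. annual demand of 1200 units, $50 per order, $2 per unit-year holding
q_star = eoq(1200, 50, 2)  # ~= 244.9 units per order
```

The extensions cited above modify either the cost structure (damage costs, cost changes over a horizon) or the constraints (storage size, backordering) around this same optimisation.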
Since ocean warming affects marine ecosystems at a global scale, management strategies should be directed towards minimizing dredging activities in coastal areas during periods of coral spawning, and improving water quality (i.e., suspended sediments) associated with river discharges, in order to maintain population replenishment success in inshore reefs. Assessing the risks associated with the simultaneous effects of suspended sediments and high temperatures on inshore reefs of the GBR requires further studies to determine the impacts of other sediment types in combination with temperature stress on thresholds for sensitive early life stages of corals and other tropical species. These climate-adjusted thresholds could then be incorporated into improved spatial models in order to generate effective risk maps for identification of vulnerable habitats and opportunities for management intervention.
Abstract. The purpose of this article is to present existence and uniqueness results for fixed points of cyclic generalized weakly contractive mappings, as well as for cyclic F-contraction mappings, in metric spaces. In this way, we extend and improve the conclusions of Xue [Zhiqun Xue, Fixed point theorems for generalized weakly contractive mappings, Bull. Aust. Math. Soc., (2015) 1-9] and Wardowski [Dariusz Wardowski, Fixed points of a new type of contractive mappings in complete metric spaces, Fixed Point Theory Appl., (2012) 1-6]. Examples are given to illustrate the usability of our conclusions.
If the modeled shape of the object is characterized by high smoothness, then only a few basis functions are needed to create a good approximation of the shape, hence the name of the low-dimensional representation of the shape. The challenge with this approach is the computational complexity of evaluating the model; therefore, with a larger number of model points, Nyström approximations are used . One of the major advantages of a generalized statistical model based on a multidimensional Gaussian process, relative to the classical statistical model, is the freedom to choose the kernel function. For a small training set, shape statistics can be represented by modeling properties of kernel functions known from image registration, such as radial basis functions or spline functions. The method presented above was adapted for liver segmentation, using the following main stages:
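The Nyström idea referred to above can be sketched as follows: sample m landmark points, evaluate only the corresponding kernel blocks, and reconstruct the full Gram matrix at O(n·m²) rather than O(n³) cost. The function names (`nystrom`, `rbf`) are illustrative:

```python
import numpy as np

def nystrom(X, kernel, m, rng=None):
    """Low-rank Nystrom approximation of the kernel matrix K(X, X).
    Samples m landmark rows and returns factors (C, W_pinv) such that
    K ~= C @ W_pinv @ C.T, avoiding the full n x n eigendecomposition."""
    rng = np.random.default_rng(rng)
    idx = rng.choice(len(X), size=m, replace=False)
    C = kernel(X, X[idx])           # n x m cross-kernel block
    W = kernel(X[idx], X[idx])      # m x m landmark block
    return C, np.linalg.pinv(W)

def rbf(A, B, gamma=1.0):
    """Gaussian (radial basis function) kernel between point sets A and B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)
```

With m equal to the number of points the reconstruction is exact for a positive-definite kernel; with m much smaller, the approximation error is governed by how fast the kernel spectrum decays, which is exactly the high-smoothness case described above.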
In this section we derive and solve a Gaussian model for financial bubbles, our approach later serving to motivate a non-Gaussian model in Section 3. An alternative formulation of the basic model in  leads naturally to a stochastic generalisation of the original model as follows. Let P(t) denote the price of an asset at time t. Our starting point is the equation
Gaussian geostatistical models (GGMs) and Gaussian Markov random fields (GMRFs) are two distinct approaches commonly used to model point-referenced and areal data, respectively. In this work the relations between GMRFs and GGMs are explored via approximations of GMRFs by GGMs, and vice versa. The proposed framework for comparing GGMs and GMRFs is based on minimizing the distance between the corresponding spectral density functions. In particular, the Kullback-Leibler discrepancy of spectral densities and the chi-squared distance between spectral densities are used as the metrics for the approximation. The proposed methodology is illustrated using simulation studies. We also apply the methods to an air pollution dataset in California to study the relation between GMRFs and GGMs.
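The two spectral metrics mentioned in the abstract can be sketched numerically by discretising the defining integrals on a frequency grid. The normalisation of the Whittle-type KL discrepancy below is one common convention; the paper's exact form may differ, and the function names are assumptions:

```python
import numpy as np

def _trapz(y, x):
    """Trapezoidal rule on a grid."""
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2.0)

def spectral_kl(f1, f2, omegas):
    """Whittle-type Kullback-Leibler discrepancy between spectral densities,
        KL(f1, f2) ~= (1 / 4*pi) * integral of [f1/f2 - log(f1/f2) - 1] d omega,
    discretised on the grid `omegas` (one common convention)."""
    r = f1(omegas) / f2(omegas)
    return _trapz(r - np.log(r) - 1.0, omegas) / (4.0 * np.pi)

def chi2_distance(f1, f2, omegas):
    """Chi-squared distance between spectral densities on the same grid."""
    a, b = f1(omegas), f2(omegas)
    return _trapz((a - b) ** 2 / b, omegas)
```

Minimising either quantity over the parameters of one model, holding the other model's spectral density fixed, gives the matched approximation described in the abstract.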
A number of extensions of the Banach contraction principle have appeared in the literature. One of its most important extensions is known as Caristi's fixed point theorem. It is well known that Caristi's fixed point theorem is equivalent to Ekeland's variational principle [1], which is nowadays an important tool in nonlinear analysis. Many authors have studied and generalized Caristi's fixed point theorem in various directions; for example, see [2-8]. Kada et al. [9] and Suzuki [10] introduced the concepts of w-distance and τ-distance on metric spaces, respectively. Using these generalized distances, they improved Caristi's fixed point theorem and Ekeland's variational principle for single-valued maps. In this paper, using the concepts of w-distance and τ-distance, we present some generalizations of Caristi's fixed point theorem for multivalued maps. Our results either improve or generalize the corresponding results due to Bae [4, 11], Kada et al. [9], Suzuki [8, 10], Khamsi [5], and many others.
Note that if we take ω = d, then the definition of a generalized w-contractive map reduces to the definition of a generalized contractive map due to Klim and Wardowski [4]. In particular, if we take a constant map k(t) = h < b with h ∈ [0, 1), then the map T is weakly contractive (in short, w-contractive) [8]; if, further, we take ω = d, then we obtain J_b^x = I_b^x and T is contractive [3].
Statistical models describe a phenomenon in the form of mathematical equations. Thus, a large number of observations, say 100 or 1,000, can be summarized in an equation with, say, two unknown quantities (called the parameters of the model). Such reduction is certainly necessary for the human mind. Of the large number of methods and tools developed so far for analyzing data (in the life sciences etc.), statistical models are among the latest innovations. In the literature (Hogg & Craig (1970), Johnson & Kotz (1970), Lawless (1982) etc.) we come across different types of models, e.g., linear models, nonlinear models, generalized linear models, generalized additive models, analysis of variance models, stochastic models, empirical models, simulation models, diagnostic models, operations research models, catalytic models, deterministic models, etc. Since the beginning of the 1970s, the attention of research workers in discrete distributions appears to have shifted to a new class of discrete distributions known as Lagrangian distributions. The Lagrangian distributions generalize the classical discrete distributions and thus are more general in nature and wider in scope. Consul and Shenton (1972) gave a method for obtaining generalized discrete distributions using the following Lagrange expansion
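The Lagrange expansion in question is commonly stated as follows; the paper's exact form and notation may differ slightly. For a function f analytic about the origin, under the transformation z = u g(z) with g analytic and g(0) ≠ 0:

```latex
% Lagrange expansion under the transformation z = u\,g(z),
% with g analytic and g(0) \neq 0:
f(z) \;=\; f(0) \;+\; \sum_{n=1}^{\infty} \frac{u^{n}}{n!}
\left[ \frac{\partial^{\,n-1}}{\partial z^{\,n-1}}
\Big( \{ g(z) \}^{n} \, f'(z) \Big) \right]_{z=0}
```

Choosing f and g to be probability generating functions and reading off the coefficients of uⁿ is what yields the generalized (Lagrangian) discrete distributions.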
This paper builds on the now well-established analogy between financial crashes and phase transitions in critical phenomena. In a stochastic version of the original model of Johansen et al. (2000), crashes are seen to represent a phase transition from random to deterministic behaviour in prices. Crash precursors are super-exponential growth accompanied by an "illusion of certainty", characterised by a decrease in the volatility function prior to the crash. A Gaussian model is introduced and then extended to incorporate a heavy-tailed version of the model based around the NIG distribution. Under both settings a range of potential applications to economics is discussed. These include statistical tests for bubbles, crash-size distributions, predictions of a post-crash increase in volatility (related to Omori-style power laws in complex systems) and simple estimates of fundamental-value and speculative-bubble components. As an empirical application we test whether a bubble is present in the FTSE 100 following the introduction of the Bank of England's policy of quantitative easing. Some evidence of a bubble and subsequent over-pricing is found. However, the level of over-pricing does not appear very large, particularly in comparison to the recent UK housing bubble, and prices appear to have converged towards estimated fundamental values during the latter half of 2010.
Flexible generators may be used to estimate market share models in a way similar to Berry (1994). Berry starts from the perspective of a discrete choice model and inverts market shares to determine utility levels (up to a constant) associated with a set of products in a number of markets. These utility levels form the basis for a regression where instrumental variable techniques may be used to deal with endogeneity, notably occurring if there are unobserved quality attributes that are correlated with prices. Here we shall exploit Theorem 1, which delivers utility levels (up to a constant) as a flexible generator applied to a vector of market shares. Models specified in terms of flexible generators thus circumvent the need to invert market shares numerically, while offering the opportunity to use functional forms that generalize the nested logit model.
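For the plain logit model, Berry's inversion has the well-known closed form δ_j = log(s_j) − log(s_0); the flexible-generator result generalizes this kind of closed-form mapping to richer models. A minimal sketch of the logit special case (the function name is an assumption, and this is not the paper's flexible-generator construction itself):

```python
import numpy as np

def berry_logit_inversion(shares, outside_share):
    """Berry (1994) inversion for the plain logit model: mean utilities
    (relative to the outside good) recovered as log(s_j) - log(s_0).
    These deltas are then the dependent variable in an IV regression."""
    return np.log(shares) - np.log(outside_share)

# round trip: utilities -> logit shares -> recovered utilities
delta = np.array([1.0, -0.5, 0.3])
denom = 1.0 + np.exp(delta).sum()
s = np.exp(delta) / denom     # inside-good shares
s0 = 1.0 / denom              # outside-good share
```

For nested logit and beyond, the inversion is no longer this simple, which is precisely where a closed-form flexible generator replaces numerical inversion.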
Recently, it has been proposed that the classes of generalized extreme value distributions and generalized Pareto distributions (Diebold et al., 2000; Bali, 2003; Rocco, 2014), and the class of generalized hyperbolic distributions (Eberlein & Keller, 1995; Eberlein & Prause, 2002; Hu & Kercheval, 2007), provide more robust models of the distribution of financial returns. However, to the best of our knowledge, there has been limited cross-comparison of the performance of these models and of their related applications, such as in the context of risk management. In particular, a gap exists in the current literature as to which model best forecasts the rate of occurrence of extreme events and, as a result, yields the most precise value-at-risk (VaR) estimates for financial institutions to measure market risk and ensure adequate capitalization under the Basel Regulatory Framework.
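The generalized-Pareto route to VaR can be sketched via the standard peaks-over-threshold formula. The function name and the already-fitted tail parameters (ξ, σ) are assumptions; in practice they would come from maximum likelihood on the exceedances:

```python
import numpy as np

def gpd_var(losses, u, xi, sigma, q):
    """Peaks-over-threshold VaR from a fitted generalized Pareto tail:
        VaR_q = u + (sigma/xi) * [ (n/N_u * (1 - q))^(-xi) - 1 ],
    where u is the threshold, n the sample size, and N_u the number of
    losses exceeding u. Assumes xi != 0 and (xi, sigma) already estimated."""
    losses = np.asarray(losses)
    n = len(losses)
    n_u = int((losses > u).sum())
    return u + (sigma / xi) * (((n / n_u) * (1 - q)) ** (-xi) - 1)
```

Comparing such GPD-based VaR forecasts against GEV- and generalized-hyperbolic-based ones, e.g. by backtesting exceedance counts, is exactly the cross-comparison the paragraph identifies as missing.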
Nonparametric regression estimators are very flexible, but their statistical precision decreases greatly if we include several explanatory variables in the model. This caveat has been aptly termed the curse of dimensionality. Consequently, researchers have tried to develop models and estimators which offer more flexibility than standard parametric regression but overcome the curse of dimensionality by employing some form of dimension reduction. Such methods usually combine features of parametric and nonparametric techniques and are therefore usually referred to as semiparametric methods. Further advantages of semiparametric methods are the possible inclusion of categorical variables (which can often only be included in a parametric way), an easy (economic) interpretation of the results, and the possibility of a partial specification of the model.
or words with large appearance variability. In this case we may wish to enrich the query with contextual words that disambiguate the visual meaning of the query. With regular vector-based queries, the typical approach is to sum the word vectors. For contextual disambiguation of polysemy, we may hope that vec(‘bank’) + vec(‘river’) retrieves a very different set of images than vec(‘bank’) + vec(‘finance’). For specification of a particular subcategory or variant, we may hope that vec(‘plane’) + vec(‘military’) retrieves a different set of images than vec(‘plane’) + vec(‘passenger’). By using distributions rather than vectors, our framework provides a richer means to make such queries that accounts for the intra-class variability of each concept. When each word is represented by a Gaussian, a two-word query can be represented by their product, which is the new Gaussian N((Σ₁⁻¹ + Σ₂⁻¹)⁻¹(Σ₁⁻¹µ₁ + Σ₂⁻¹µ₂), (Σ₁⁻¹ + Σ₂⁻¹)⁻¹).
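The product-of-Gaussians query can be computed directly from the two word distributions; this is the standard closed form for the (renormalised) product of Gaussian densities, with illustrative function and variable names:

```python
import numpy as np

def gaussian_product(mu1, S1, mu2, S2):
    """Parameters of the renormalised product of two Gaussian densities,
    N(mu1, S1) * N(mu2, S2) proportional to N(mu, S), with
        S  = (S1^-1 + S2^-1)^-1
        mu = S @ (S1^-1 mu1 + S2^-1 mu2)."""
    P1, P2 = np.linalg.inv(S1), np.linalg.inv(S2)
    S = np.linalg.inv(P1 + P2)
    mu = S @ (P1 @ mu1 + P2 @ mu2)
    return mu, S
```

Intuitively, the product mean is a precision-weighted average of the two word means, and the product covariance is tighter than either input, reflecting the reduced ambiguity of the two-word query.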
In this paper, we introduce some generalized (α, ψ)-contractive mappings in the setting of generalized metric spaces and, based on the very recent paper (Kirk and Shahzad in Fixed Point Theory Appl. 2013:129, 2013), we omit the Hausdorff hypothesis to prove some fixed point results involving such mappings. Some consequences for existing fixed point theorems are also derived.