In this paper we show consistency of the posterior distribution of p, where the prior is assigned through a Gaussian process as in [4]. Statistical procedures are often justified by asymptotics, and posterior consistency plays a major role in validating a Bayesian method. The posterior distribution is said to be consistent if the posterior probability of any small neighborhood of the true parameter value converges to one. Because the notion of consistency is dependent on the topology used to define the neighborhoods, one needs to consider an appropriate topology such as the one based on the L1-distance. Because consistency of p is directly related
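The definition sketched above can be written compactly; a minimal formulation (assuming i.i.d. observations X_1, …, X_n with true density p_0, and writing Π(· | X_1, …, X_n) for the posterior) is:

```latex
% L1-consistency at p_0: for every \varepsilon > 0,
\Pi\bigl(\, p : \|p - p_0\|_1 < \varepsilon \;\big|\; X_1,\dots,X_n \bigr)
\;\longrightarrow\; 1
\qquad \text{as } n \to \infty, \ P_0\text{-almost surely.}
```

The almost-sure version is one common convention; convergence in probability is also used.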

We investigate Bayesian non-parametric inference of the Λ-measure of Λ-coalescent processes with recurrent mutation, parametrised by probability measures on the unit interval. We give verifiable criteria, given an identifiability assumption, on the prior for posterior consistency when observations form a time series, and prove that any non-trivial prior is inconsistent when all observations are contemporaneous. We then show that the likelihood given a data set of size n ∈ N is constant across Λ-measures whose leading n − 2 moments agree, and focus on inferring truncated sequences of moments. We provide a large class of functionals which can be extremised using finite computation given a credible region of posterior truncated moment sequences, and a pseudo-marginal Metropolis-Hastings algorithm for sampling the posterior. Finally, we compare the efficiency of the exact and noisy pseudo-marginal algorithms, with and without delayed acceptance acceleration, using a simulation study.
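The pseudo-marginal Metropolis-Hastings idea mentioned above can be sketched generically: an intractable likelihood is replaced by a non-negative unbiased estimate that is re-drawn only when a new state is proposed, and the current estimate is carried along with the current state. The function names and the toy symmetric proposal below are illustrative assumptions, not the paper's algorithm.

```python
import math
import random

def pseudo_marginal_mh(log_lik_estimate, log_prior, proposal, theta0, n_iters=1000):
    """Pseudo-marginal Metropolis-Hastings sketch: `log_lik_estimate(theta)`
    returns a (possibly noisy) log of a non-negative unbiased likelihood
    estimate; the estimate is re-drawn only for proposals, and the accepted
    estimate is stored with the accepted state."""
    theta, log_l = theta0, log_lik_estimate(theta0)
    samples = []
    for _ in range(n_iters):
        prop = proposal(theta)                     # symmetric proposal assumed
        log_l_prop = log_lik_estimate(prop)
        log_alpha = (log_l_prop + log_prior(prop)) - (log_l + log_prior(theta))
        if math.log(random.random()) < log_alpha:
            theta, log_l = prop, log_l_prop        # keep estimate with the state
        samples.append(theta)
    return samples
```

Delayed acceptance would insert a cheap first-stage accept/reject test before computing the expensive estimate, which is where the paper's efficiency comparison comes in.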

In [50] it has been shown for the location problem that these priors lead to inconsistent posteriors. The example provided in [50] illustrates the importance of posterior consistency and its dependence on the choice of the prior. For this reason, it seems plausible to choose priors that have the best possible posterior consistency properties for a very large class of possible truths. However, this contradicts the philosophical idea behind the Bayesian approach, because the prior is only supposed to represent a priori knowledge. A philosophical justification for studying posterior consistency can be seen in the fact that posterior consistency is equivalent to the property that the posteriors arising from different priors merge, for a broad class of models as considered in [51]. For an in-depth discussion, we refer the reader to [76]. In practice, priors are often chosen based on their computational performance, and some of their parameters are adapted to represent subjective knowledge. This is the case for the choice of the base measure and the intensity of a Dirichlet process prior in clustering [91]. It is worth noting that the bulk of the field has moved towards establishing consistency, after initial works such as [50] showed the care needed in selecting priors.

In this chapter, we fill this gap by studying posterior consistency for various semi-parametric models, including multiple linear regression with an unknown error distribution, the exponential frailty model, the generalized linear model (GLM) with unknown link function, the Cox proportional hazard model with unknown baseline hazard function, accelerated failure time (AFT) models and the partial linear regression model. We begin with a general posterior consistency theorem for semiparametric models, much in the spirit of a celebrated result of Schwartz (1965), involving conditions on the existence of certain exponentially consistent tests for the complement of neighborhoods of the “true” value of the parameter, and the prior positivity of a Kullback-Leibler type neighborhood of the true distribution of the observations. Our result also applies to the case of independent, non-identically distributed observations. In the examples, we then verify the existence of exponentially consistent tests and the prior positivity condition.
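For orientation, the two Schwartz-type conditions referred to above are, in one standard formulation (notation assumed, not quoted from the chapter): for the true value θ_0,

```latex
% (i) exponentially consistent tests: for each neighborhood U of \theta_0
%     there exist tests \varphi_n and a constant c > 0 with
E_{\theta_0}\,\varphi_n \le e^{-cn},
\qquad
\sup_{\theta \in U^{c}} E_{\theta}\,(1 - \varphi_n) \le e^{-cn};
% (ii) prior positivity of Kullback-Leibler neighborhoods: for every \varepsilon > 0,
\Pi\bigl(\{\theta : \mathrm{KL}(p_{\theta_0}\,\|\,p_{\theta}) < \varepsilon\}\bigr) > 0 .
```

Together, (i) controls the posterior mass outside neighborhoods of θ_0 and (ii) keeps enough prior mass near the truth.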

Bayesian partially identified models have received growing attention in recent years in the econometric literature, due to their broad applications in empirical studies. The classical Bayesian approach in this literature assumes a parametric model, specifying an ad hoc parametric likelihood function. However, econometric models usually only identify a set of moment inequalities, so assuming a known likelihood function carries the risk of misspecification and may result in inconsistent estimation of the identified set. On the other hand, moment-condition based likelihoods such as the limited information and exponentially tilted empirical likelihood, though they guarantee consistency, lack a probabilistic interpretation. We propose a semi-parametric Bayesian partially identified model, placing a nonparametric prior on the unknown likelihood function. Our approach thus only requires a set of moment conditions but still possesses a pure Bayesian interpretation. We study the posterior of the support function, which is essential when the object of interest is the identified set. The support function also enables us to construct two-sided Bayesian credible sets (BCS) for the identified set. It is found that, while the BCS of the partially identified parameter is too narrow from the frequentist point of view, that of the identified set has asymptotically correct coverage probability in the frequentist sense. Moreover, we establish posterior consistency for both the structural parameter and its identified set. We also develop the posterior concentration theory for the support function, and prove a semi-parametric Bernstein-von Mises theorem. Finally, the proposed method is applied to analyze a financial asset pricing problem.
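The support function mentioned above has the standard form (Θ_I denotes the identified set; this notation is assumed, not taken from the paper):

```latex
h_{\Theta_I}(u) \;=\; \sup_{\theta \in \Theta_I} u^{\top}\theta ,
\qquad \|u\| = 1 ,
```

and, when Θ_I is closed and convex, it characterizes the set as an intersection of half-spaces, \Theta_I = \bigcap_{\|u\|=1}\{\theta : u^{\top}\theta \le h_{\Theta_I}(u)\}, which is why a credible band for h translates into a credible set for Θ_I.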

Practical implementation of inference algorithms is beyond the scope of this paper, but we note that algorithms based on exact simulation for jump diffusions are available, at least in the scalar case (Casella and Roberts [10], Gonçalves [24]). Exact simulation of jump diffusions is an active area of research (Gonçalves and Roberts [25], Pollock [41], Pollock, Johansen and Roberts [42]) and well suited for applications in Monte Carlo inference algorithms, with preliminary results in the continuous diffusion setting indicating that nonparametric algorithms can be feasibly implemented (Papaspiliopoulos et al. [39], van Zanten [52], van der Meulen, Schauer and van Zanten [49]). As a final remark, we note that presently such algorithms are only available for processes with jumps driven by compound Poisson processes of finite intensity, and with coefficients satisfying regularity assumptions comparable to those in Proposition 1. Thus our Theorem 1 brings the theory on nonparametric posterior consistency in line with current state-of-the-art algorithms in one dimension, and anticipates the development of comparable methods in higher dimensions.

In our technique, the first step is creation of the posterolateral portal. We do not rely on palpation of the lateral soft spot or on the transillumination technique. We identify the posterolateral capsular portal endoscopically [11, 12]. The biceps femoris tendon and the lateral collateral ligament are more distinct and easier to identify than the lateral head of the gastrocnemius. The common peroneal nerve is protected by the biceps femoris tendon [10]. Endoscopic visualization allows creation of the posterolateral portal at the most posterior corner of the posterolateral capsule. This allows the portal to be far enough from the lateral femoral condyle that instruments placed through this portal are never oriented in an anterior-to-posterior direction toward the popliteal neurovascular bundle [10]. Moreover, insertion of the Wissinger rod under endoscopic visualization prevents the rod from going extra-capsular toward the popliteal neurovascular bundle. Furthermore, perforation of the septum between the posteromedial and posterolateral compartments in the lateral-to-medial direction is less risky to the popliteal neurovascular bundle, as the bundle is located just lateral to the septum and the rod is moved away from the bundle. Creation of the posteromedial portal is performed in an inside-out manner by identification of the posteromedial capsular folds [14]. However, the folds may not be easily identified in the presence of capsular fibrosis or synovitis. Therefore, we still rely on the transillumination technique to locate the portal. Unlike transcondylar notch visualization, transillumination through the posterolateral portal can be performed with a 30° arthroscope rather than a 70° arthroscope. Creation of the posteromedial portal by this inside-out Wissinger rod technique allows the portal to be located far enough toward the posterior aspect of the knee to ensure that the arthroscope or instruments will be directed exactly in the coronal plane [13].
A portal located too anteriorly would place the arthroscope and instruments in a

awarded grades for each of the Teachers’ Standards and collated these judgements to arrive at an overall teaching grade. Professional mentors moderated assessments made by different teachers within their school, and school liaison tutors from the HE provider visited schools to conduct training and quality-assure the mentoring and assessment processes. However, in our experience across the partnerships, despite this high level of professional, organisational and individual effort, assessment and grading continue to challenge new and experienced mentors and tutors. In turn, achieving and gathering evidence of consistency in assessment within and across multiple partnerships and programmes is a challenge for those with quality assurance roles. An obvious place to look for evidence was the assessment data and

While translation consistency is generally assumed to be desirable, it does not guarantee correctness: SMT translations of repeated phrases can be consistent and incorrect, or inconsistent and correct. In order to evaluate correctness automatically, we check whether translations of repeated phrases are found in the corresponding reference sentences. This is an approximation, since the translation of a source phrase can be correct even if it is not found in the reference, and a target phrase found in the reference sentence is not necessarily a correct translation of the source phrase considered. Post-edited references alleviate some approximation errors for the Parliament tasks: if the translated phrase matches the references, it means that it was considered correct by the human post-editor who left it in. However, phrases modified during post-editing are not necessarily incorrect. We will address this approximation in Section 6.
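The check described above can be sketched in a few lines: for each repeated source phrase, test whether its translations agree with each other (consistency) and whether each appears in its reference sentence (approximate correctness). The data layout and plain substring matching are illustrative assumptions.

```python
def consistency_and_correctness(occurrences):
    """For each repeated source phrase, report (a) whether its translations
    are mutually consistent and (b) the fraction found in the corresponding
    reference sentences. `occurrences` maps a source phrase to a list of
    (translation, reference_sentence) pairs; matching is plain substring,
    which is the approximation discussed in the text."""
    report = {}
    for phrase, pairs in occurrences.items():
        translations = [t for t, _ in pairs]
        consistent = len(set(translations)) == 1
        correct = sum(1 for t, ref in pairs if t in ref) / len(pairs)
        report[phrase] = (consistent, correct)
    return report
```

A phrase can thus score consistent-but-incorrect or inconsistent-but-partly-correct, exactly the two failure modes the paragraph distinguishes.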

This obvious fact points to an incompatibility between the standard mathematical meaning of necessity and what Inada defines as such. This incompatibility is due to a very simple reason. The "input" of a SWF is a preference profile, not a vector of distinct individual preferences. As in any other class of functions, if we want to find necessary and sufficient conditions such that the "output" of a SWF has a certain property (that it yields a transitive relation, in our case), then these conditions should refer to the "input" of the SWF, that is, to the domain of preference profiles and not to the domain of individual preferences from which a preference profile may be formed. Single-peakedness and value-restrictedness (and all conditions of such form) are conditions on the domain of individual preferences that may form a preference profile and, therefore, restrict the domain of preference profiles not directly but indirectly. For this reason they are very "strong" conditions, at some distance from the pure mathematical meaning of necessity. This simple observation allows us to revisit the issue of consistency of SWFs from the "preference profile domain" perspective and derive new results.

Abstract. This paper introduces the standards used for transcribing texts of the Arabic Learner Corpus (ALC) from hand-written sheets into an electronic format. It describes the transcription process, which was performed by three transcribers, and the measures taken to keep the transcription consistent. The paper concludes with a description of the corpus file produced based on the electronic format.

In this genre, accuracy and readability are important and it is acceptable to produce a “repetitive” or “boring” text. It may, therefore, be appropriate to encourage translational consistency of nouns, rare verbs and adjectives in instructions. Unlike with novels, it would make sense that all entities in an instruction manual are of importance. Public Information: In the French Revolution to 1945 and Nuclear Testing documents, adjectives score highest, followed by nouns. Word-level HHI scores for the most frequent (and aligned) adjectives in the French Revolution to 1945 document are presented in Table 5.
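The word-level HHI scores mentioned above follow, on the usual Herfindahl-Hirschman definition, from summing squared translation shares; a minimal sketch (the dict-of-counts layout is an assumption):

```python
def hhi(translation_counts):
    """Herfindahl-Hirschman index of a source word's translation distribution:
    the sum of squared relative frequencies. 1.0 means the word is always
    translated the same way; values near 0 mean its translations are highly
    dispersed. `translation_counts` maps target word -> occurrence count."""
    total = sum(translation_counts.values())
    return sum((c / total) ** 2 for c in translation_counts.values())
```

High HHI for adjectives in the Public Information documents would thus indicate they are translated very consistently.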

Comprehensive consistency management requires a strong mechanism for repair once inconsistencies have been detected. In this paper we present a repair framework for inconsistent distributed documents. The core piece of the framework is a new method for generating interactive repairs from full first-order logic formulae that constrain these documents. We present a full implementation of the components in our repair framework, as well as their application to the UML and related heterogeneous documents such as EJB deployment descriptors. We describe how our approach can be used as an infrastructure for building higher-level, domain-specific frameworks and provide an overview of related work in the database and software development environment community.

Alignment relates words in a source language and words in a target language, potentially mediated by phrase nodes. Following the variation n-gram method, we define the units of data, i.e., the variation nuclei, as strings. Then, we break the problem into two different source-to-target mappings, mapping a source variation nucleus to a target language label. With a German-English aligned corpus, for example, we look for the consistency of aligning German words to their English counterparts and separately examine the consistency of aligning English words with their German “labels.” Because a translated word can be used in different parts of a sentence, we also normalize all target labels to lower case, preventing variation between, e.g., the and The.
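One direction of the check described above can be sketched as follows: collect, for each source nucleus, the set of lower-cased target labels it aligns to, and flag nuclei with more than one label as candidate inconsistencies. The tuple layout is an assumed simplification of an aligned corpus.

```python
from collections import defaultdict

def alignment_variations(aligned_pairs):
    """Variation-n-gram style consistency check for one mapping direction.
    `aligned_pairs` is a list of (source_nucleus, target_label) tuples;
    labels are lower-cased so that e.g. 'The' and 'the' do not count as a
    variation. Returns nuclei mapped to more than one distinct label."""
    labels = defaultdict(set)
    for src, tgt in aligned_pairs:
        labels[src].add(tgt.lower())
    return {src: tgts for src, tgts in labels.items() if len(tgts) > 1}
```

Running the same function with source and target swapped gives the second mapping direction described in the text.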

In existing systems that use cloud storage services, end users can access information stored in the cloud anytime, anywhere, and from any device. The user does not need to make a large capital investment in the underlying hardware infrastructure during the deployment phase. The cloud service provider (CSP) stores multiple copies of the data, i.e. replicas, on geographically distributed servers, so a user may read unwanted or stale data that has not been updated for some time. The domain name system (DNS) is considered one of the best-known application systems that implements eventual consistency: updates to a name are not visible immediately, but the system guarantees that clients will eventually see them.

consistency auditing: if the dictating write of a new read does not exist in the UOT and the dictating write was issued by the user himself, the user can conclude that he has failed to read his own last update, and asserts that read-your-write consistency is violated. If the dictating write of this read happens before the dictating write of his last read recorded in the UOT, the user can conclude that he has read an old value, and asserts that monotonic-read consistency is violated. If the dictating write of a new read is not present in the user's UOT and comes from another user, the violation will be exposed by the auditor. In global consistency auditing, if a read has no dictating write at all, the auditor concludes that the value of this read is too stale, and states that causal consistency is violated.
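The per-user audit rules above can be sketched as a single decision function. The names, the list-of-write-ids layout of the UOT, and the return values are illustrative assumptions, not the paper's API.

```python
def audit_read(read_write_id, uot_writes, last_read_write_id, issued_by_user):
    """Sketch of the per-user audit rules described above. `uot_writes` is the
    user's operation table as an ordered list of write ids (older first).
    Returns the name of the violated property, or None if no violation."""
    if read_write_id not in uot_writes:
        if issued_by_user:
            return "read-your-write"      # user's own write missing from UOT
        return "reported-to-auditor"      # another user's write: escalate
    if last_read_write_id in uot_writes and \
       uot_writes.index(read_write_id) < uot_writes.index(last_read_write_id):
        return "monotonic-read"           # read an older value than before
    return None
```

The global causal-consistency rule (a read with no dictating write at all) sits with the auditor rather than the individual user, so it is not part of this per-user sketch.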

data with the number of missed updates. Each replica is assigned an update window, which is the maximum number of updates that can be buffered without consensus. The concept of probabilistically bounded staleness (PBS), proposed in [7], provides a probabilistic approach using partial quorums to find expected bounds on staleness with respect to both replica versions and wall-clock time. Inconsistency is measured as the number of conflicting updates on data in [8]. The Infrastructure for Detection-based Adaptive Consistency control in replicated services (IDEA) model [9], an Internet-scale middleware, uses an inconsistency detection mechanism that finds version differences in terms of numerical error, order error and staleness. This is an extension of the TACT model [5] for adaptive consistency guarantees with better performance.

1999) can be applied to any seismic object by providing a set of locations labeled by an expert as “object” and “non-object” and tuning the input attributes for the classifier. In order to rank the relative importance of each seismic attribute in the classification problem, we use regularized discriminant analysis (RDA) with a forward and backward search strategy. This allows defining a rank for each seismic attribute that efficiently results in lower combined classification errors. Two well-known non-linear classifiers, namely the multilayer perceptron (MLP) and the support vector classifier (SVC), are used to find output posterior probabilities of the chimney and non-chimney classes separately. These classifiers have different properties that become evident in their corresponding chimney prediction results. This implies that their different natural characteristics for finding a multi-dimensional hyper-plane boundary will appear in their output. In order to have a mixed sense of both, a classifier-combination stage is applied with three logical rules: mean, minimum and maximum.
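The combination stage described above amounts to merging the two classifiers' per-location chimney posteriors element-wise; a generic sketch (not the paper's exact pipeline, and the list-of-probabilities layout is an assumption):

```python
def combine_posteriors(p_mlp, p_svc, rule="mean"):
    """Combine two classifiers' chimney-class posterior probabilities with
    the mean, minimum, or maximum rule. `p_mlp` and `p_svc` are equal-length
    sequences of per-location probabilities in [0, 1]."""
    rules = {
        "mean": lambda a, b: [(x + y) / 2 for x, y in zip(a, b)],
        "min":  lambda a, b: [min(x, y) for x, y in zip(a, b)],
        "max":  lambda a, b: [max(x, y) for x, y in zip(a, b)],
    }
    return rules[rule](p_mlp, p_svc)
```

The minimum rule is conservative (both classifiers must agree a location is a chimney), the maximum rule is permissive, and the mean sits between the two.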

The fact that “natural” theories, i.e. theories which have something like an “idea” to them, are almost always linearly ordered with regard to logical strength has been called one of the great mysteries of the foundation of mathematics. However, one easily establishes the existence of theories with incomparable logical strengths using self-reference (Rosser-style). As a result, PA + Con(PA) is not the least theory whose strength is greater than that of PA. But still we can ask: is there a sense in which PA + Con(PA) is the least “natural” theory whose strength is greater than that of PA? In this paper we exhibit natural theories in strength strictly between PA and PA + Con(PA) by introducing a notion of slow consistency.

(alternatively, we might calibrate by multiplying by an appropriate scalar, etc.). Then, while g is consistent on the set of distributions with Ey = 0 (using the consistency of f on all distributions), g does not behave locally. This can be seen, for example, by considering g on a distribution with x uniform on [−1, 1] and y = sign(x). This distribution fulfills Ey = 0, and g is consistent on it, but is neither UAL nor UALC, since smoothed local versions of g in fact return values tending towards 0. Thus, in summary, the fact that the set of all distributions is localizable is what causes consistency to imply local behavior. If we are concerned about that fact (e.g., because we suspect local behavior might lead to the curse of dimensionality; Bengio et al., 2006), then we must do away with consistency on the set of all distributions and instead talk about consistency on a more limited set, one which is not localizable. However, part of the reason why nonparametric methods often outperform parametric ones on real-world data is precisely because they make as few assumptions as possible about the unknown distribution. Consequently, we may find that local behavior is hard to avoid.
