Maximum Likelihood Estimates

MLEP: an R package for exploring the maximum likelihood estimates of penetrance parameters

genotypes, respectively, for a set of pedigree members V. The parameter vector θ contains the penetrance parameters, the disease allele frequency, and the recombination fraction, so that maximum likelihood estimates are obtained for all parameters simultaneously. The evaluated maximum likelihood estimate of the penetrance parameters is therefore affected by the estimates of both the recombination fraction and the disease allele frequency. Because penetrance and marker genotype observations are independent (unless the marker and disease loci are extremely close), the method is not suitable for penetrance estimation in which a single disease allele determines whether the disease will manifest. This method is implemented in GENEHUNTER-MODSCORE [2-5], in which the ratio of the likelihoods, or mod score function [6], can be maximized in practice. The second approach considers the likelihood of affected status; that is, the likelihood is expressed as p_θ̃(a_V). The maximized likelihood is a func…

Parameter redundancy and the existence of maximum likelihood estimates in log linear models

constraints that hold this equality for finite values of the model parameters are the esoteric constraints. These extra constraints, along with the estimable quantities in θ0, may make more parameters estimable and permit one to obtain unique maximum likelihood estimates for parameters that would otherwise not have been estimable. Also, reducing the parameter space according to the esoteric constraints, and therefore removing the flat ridge, can make it possible to uniquely maximise the likelihood. If α^T(θ)U(θ) cannot be zero for finite θ, then the esoteric constraints do…

Robustness of maximum likelihood estimates for mixed Poisson regression models

Gustafson (1996) used an influence-function approach (Hampel et al., 1986; Huber, 1981) to examine the robustness of maximum likelihood estimates for certain conjugate mixture models un…

On Maximum Likelihood Estimates for the Shape Parameter of the Generalized Pareto Distribution

The asymptotic properties of the maximum likelihood estimator of the GPD parameters have been studied in many articles, including the important works of Davison [2] and R. L. Smith [12]; Smith showed that the maximum likelihood estimators admit a consistent estimator of the variance, which he used in place of the asymptotic variance of the unknown parameters. The maximum likelihood estimates must be derived numerically for the GPD because there is no obvious simplification of the nonlinear likelihood equation defined in (8).
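As a concrete illustration of this numerical derivation, the sketch below fits a simulated GPD sample by maximum likelihood with SciPy; the shape 0.3, scale 2.0, and sample size are assumed values for illustration, not taken from the paper.

```python
# Numerical MLE for the Generalized Pareto Distribution (GPD):
# no closed-form solution of the likelihood equations exists, so
# scipy maximizes the log-likelihood numerically.
import numpy as np
from scipy.stats import genpareto

rng = np.random.default_rng(0)
data = genpareto.rvs(c=0.3, scale=2.0, size=5000, random_state=rng)

# floc=0 fixes the threshold (location) at zero, as is common for
# exceedance data, leaving shape and scale to be estimated.
c_hat, loc_hat, scale_hat = genpareto.fit(data, floc=0)
print(c_hat, scale_hat)
```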

Computing Maximum Likelihood Estimates in Recursive Linear Models with Correlated Errors

In recursive linear models, the multivariate normal joint distribution of all variables exhibits a dependence structure induced by a recursive (or acyclic) system of linear structural equations. These linear models have a long tradition and appear in seemingly unrelated regressions, structural equation modelling, and approaches to causal inference. They are also related to Gaussian graphical models via a classical representation known as a path diagram. Despite the models' long history, a number of problems remain open. In this paper, we address the problem of computing maximum likelihood estimates in the subclass of 'bow-free' recursive linear models. The term 'bow-free' refers to the condition that the errors for variables i and j be uncorrelated if variable i occurs in the structural equation for variable j. We introduce a new algorithm, termed Residual Iterative Conditional Fitting (RICF), that can be implemented using only least squares computations. In contrast to existing algorithms, RICF has clear convergence properties and yields exact maximum likelihood estimates after the first iteration whenever the MLE is available in closed form.
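The closed-form case mentioned at the end is easy to illustrate: when all error terms are uncorrelated, the bow-free model's MLE reduces to equation-by-equation least squares, which is why RICF finishes after one iteration there. A minimal sketch with an assumed two-variable system (coefficient 1.5 chosen for illustration):

```python
# Recursive linear model x2 = b*x1 + e2 with uncorrelated errors:
# the MLE of b is exactly the OLS coefficient.
import numpy as np

rng = np.random.default_rng(1)
n = 10000
x1 = rng.normal(size=n)
x2 = 1.5 * x1 + rng.normal(size=n)

# Least squares = maximum likelihood here, matching the closed-form
# case in which RICF converges after a single iteration.
b_hat = np.linalg.lstsq(x1[:, None], x2, rcond=None)[0][0]
print(b_hat)
```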

Maximum likelihood estimates of pairwise rearrangement distances

This paper introduces a maximum likelihood estimator for the evolutionary distance between two genomes under a large-scale genome rearrangement model. One may view this as a correction method for a family of models which can be interpreted using non-abelian groups. Methods of correcting distances for multiple changes are commonly used, because the use of uncorrected distances can lead to poor inference regarding topology (see Felsenstein (2004)). These corrections for multiple changes are typically implemented in the context of single nucleotide polymorphisms, in an environment in which changes at each site are considered to be independent. The large-scale rearrangements discussed in this paper differ in several ways, but the key difference is that rearrangements can affect overlapping regions and hence interact with each other. The "correction" involved in the context of this paper is to account for evolutionary paths between two genomes that might not be the shortest path.

Simple Penalties on Maximum-Likelihood Estimates of Genetic Parameters to Reduce Sampling Variation

A central component of the success of regularized estimation is the choice of how much to penalize. A common practice is to scale the penalty by a so-called "tuning factor" to regulate the stringency of penalization. Various studies (again for a single covariance matrix, see above) demonstrated that this factor can be estimated reasonably well from the data at hand, using cross-validation techniques. Adopting these suggestions for genetic analyses and using k-fold cross-validation, Meyer (2011) estimated the appropriate tuning factor as that which maximized the average, unpenalized likelihood in the validation sets. However, this procedure was laborious and afflicted by problems in locating the maximum of a fairly flat likelihood surface for analyses involving many traits and not-so-large data sets. These technical difficulties have all but prevented practical applications so far. Moreover, the procedure was generally less successful than reported for studies considering a single covariance matrix. This led to the suggestion of imposing a mild penalty, determining the tuning factor as the largest value that did not cause a decrease in the (unpenalized) likelihood equivalent to a significant change in a single parameter. This pragmatic approach yielded reductions in loss that were generally of comparable magnitude to those achieved using cross-validation (Meyer 2011). However, it still required multiple analyses and thus considerably increased computational demands compared to standard, unpenalized estimation.
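The k-fold scheme described above can be sketched in a toy setting. Here a ridge-penalized regression stands in for the genetic covariance problem, and the tuning factor is chosen as the candidate that maximizes the average unpenalized (Gaussian) log-likelihood over the validation folds; all data and candidate factors are assumed for illustration.

```python
# Choose a tuning factor by k-fold cross-validation, scoring each
# candidate by the unpenalized log-likelihood on the held-out folds.
import numpy as np

rng = np.random.default_rng(6)
n, p = 200, 10
X = rng.normal(size=(n, p))
beta = np.zeros(p)
beta[:3] = 1.0                      # sparse true coefficients (assumed)
y = X @ beta + rng.normal(size=n)

def ridge(Xt, yt, lam):
    # Penalized (ridge) estimate for a given tuning factor lam.
    return np.linalg.solve(Xt.T @ Xt + lam * np.eye(p), Xt.T @ yt)

folds = np.array_split(rng.permutation(n), 5)

def cv_loglik(lam):
    # Sum of unpenalized Gaussian log-likelihoods (up to constants)
    # over the validation folds, as in the cross-validation scheme.
    ll = 0.0
    for f in folds:
        mask = np.ones(n, bool)
        mask[f] = False
        b = ridge(X[mask], y[mask], lam)
        r = y[f] - X[f] @ b
        ll += -0.5 * np.sum(r ** 2)
    return ll

lams = [0.0, 0.1, 1.0, 10.0, 100.0]
best = max(lams, key=cv_loglik)
print(best)
```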

A General Procedure for Obtaining Maximum Likelihood Estimates in Generalized Regression Models

This procedure, long known to the profession as the Cochrane–Orcutt iterative method, represents a convenient way of calculating the maximum likelihoo…
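A minimal sketch of the Cochrane–Orcutt iteration for a regression with AR(1) errors: alternate between re-estimating the autocorrelation ρ from the residuals and refitting the coefficients by OLS on quasi-differenced data. The coefficients, ρ = 0.6, and the sample size are assumed values for illustration.

```python
# Cochrane-Orcutt iteration for y = b0 + b1*x + e, e_t = rho*e_{t-1} + u_t.
import numpy as np

rng = np.random.default_rng(7)
n, rho_true = 2000, 0.6
x = rng.normal(size=n)
e = np.zeros(n)
for t in range(1, n):
    e[t] = rho_true * e[t - 1] + rng.normal()
y = 1.0 + 2.0 * x + e

X = np.column_stack([np.ones(n), x])
b = np.linalg.lstsq(X, y, rcond=None)[0]        # step 0: plain OLS
for _ in range(20):
    r = y - X @ b
    rho = (r[:-1] @ r[1:]) / (r[:-1] @ r[:-1])  # new value of rho
    # Quasi-difference the data; the intercept column becomes (1 - rho),
    # so the fitted coefficient on it is b0 directly.
    Xs = X[1:] - rho * X[:-1]
    ys = y[1:] - rho * y[:-1]
    b = np.linalg.lstsq(Xs, ys, rcond=None)[0]
print(b, rho)
```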

Deriving generalized means as least squares and maximum likelihood estimates

generalized means can be derived in a unified way, as least squares estimates for a transformed data set…
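The unified derivation can be illustrated directly: transform the data by a function g, take the ordinary mean (the least squares estimate of a constant for the transformed data), and invert g. With g = log this gives the geometric mean; with g(x) = 1/x, the harmonic mean. The sample values are assumed.

```python
# Generalized (quasi-arithmetic) means as least squares estimates
# on transformed data: g, then the ordinary mean, then g inverse.
import numpy as np

x = np.array([1.0, 2.0, 4.0, 8.0])

# The least squares estimate of a constant for g(x) is mean(g(x));
# back-transforming gives the generalized mean.
geometric = np.exp(np.mean(np.log(x)))   # g = log
harmonic = 1.0 / np.mean(1.0 / x)        # g(x) = 1/x
print(geometric, harmonic)
```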

On the folded normal distribution

Also reviewed here are the maximum likelihood estimates (for an introduction, see [1]), with examples from simulated data given for illustration. Simulation studies will be performed to assess the validity of the estimates, with and without bootstrap calibration, in small-sample cases. Numerical optimization of the log-likelihood will be carried out using the simplex method [10].
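A sketch of the numerical optimization just described: the folded normal log-likelihood is maximized with the Nelder–Mead simplex method via SciPy. The values μ = 2, σ = 1 and the sample size are assumed for illustration.

```python
# Maximum likelihood for the folded normal via the Nelder-Mead
# simplex method.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(2)
mu, sigma = 2.0, 1.0
data = np.abs(rng.normal(mu, sigma, size=5000))  # folded normal sample

def neg_loglik(params):
    m, s = params
    if s <= 0:
        return np.inf
    # Folded normal density: sum of two reflected normal components.
    z1 = (data - m) / s
    z2 = (data + m) / s
    dens = (np.exp(-0.5 * z1**2) + np.exp(-0.5 * z2**2)) / (s * np.sqrt(2 * np.pi))
    return -np.sum(np.log(dens))

res = minimize(neg_loglik, x0=[1.0, 0.5], method="Nelder-Mead")
m_hat, s_hat = res.x
print(m_hat, s_hat)
```

Note that the folded normal likelihood is symmetric in the sign of μ, so only |μ| is identifiable.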

Some New Aspects of Statistical Inference for Multistage Dose-Response Models with Applications

to be on the boundary. It turns out that, as one may expect, the form of the LRT becomes quite complex as the number of nuisance parameters lying on the boundary increases. In the sequel, we also derive the maximum likelihood estimates of all parameters based on Z under the condition that some of the parameters may lie on the boundary, and derive their joint and marginal distributions. As mentioned before, these results are useful when one is interested in deriving Wald-type asymptotic inference based on X about a smooth function of the parameters, and in particular about a linear function of them, which is the case for the dose-response multistage Weibull models mentioned in Section 1.

Failure Process Modeling with Censored Data in Accelerated Life Tests

The method is based on the unknown population parameters. By optimizing the likelihood function over the parameters, the values most consistent with the observed sample are determined. In other words, the idea behind maximum likelihood parameter estimation is to find the parameter values that maximize the likelihood (or, equivalently, the log-likelihood). In the case of censored data, other estimation methods such as least squares are less precise; the MLE method is therefore considered more robust and yields estimators with good statistical properties. Usually, explicit expressions cannot be obtained by directly solving the likelihood equations. Instead, numerical methods such as the Newton-Raphson method, or powerful tools for the analysis of incomplete data such as the Expectation-Maximization (EM) algorithm, can be used. The Fisher information matrix, and consequently the asymptotic variances of the estimates, are obtained directly from the numerical methods. The EM algorithm possesses several advantageous properties, such as stable convergence, compared to the Newton-Raphson method. So, as a new approach, mathematical programming tools were used to obtain the maximum likelihood estimates of the unknown parameters. The general form of the mathematical programming problem is…
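The censored-data likelihood can be sketched for the simplest case: observed failures contribute the density f(t) and censored units the survival function S(t). Exponential lifetimes with rate 0.5 and Type-I censoring at t = 3 are assumed values; for this distribution the numerical optimum can be checked against the closed form (number of failures) / (total time on test).

```python
# Maximum likelihood with Type-I censored data:
# log L = sum_i [ delta_i * log f(t_i) + (1 - delta_i) * log S(t_i) ].
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(3)
true_rate, c = 0.5, 3.0
life = rng.exponential(1 / true_rate, size=5000)
t = np.minimum(life, c)              # observed time (failure or censoring)
delta = (life <= c).astype(float)    # 1 = failure observed, 0 = censored

def neg_loglik(rate):
    # Exponential: log f(t) = log(rate) - rate*t, log S(t) = -rate*t.
    return -np.sum(delta * (np.log(rate) - rate * t) + (1 - delta) * (-rate * t))

res = minimize_scalar(neg_loglik, bounds=(1e-6, 10), method="bounded")
rate_hat = res.x
print(rate_hat)
```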

Behrens-Fisher Analogs for Discrete and Survival Data

The assumption that the shape parameters are equal is not always satisfied in practice when testing the equality of two Weibull scale parameters. We also observe that the available test procedures are based on the maximum likelihood estimates of the parameters. Apart from the maximum likelihood estimates, several method-of-moments estimators have been proposed by different authors. Our objective in this chapter is to develop procedures for testing the equality of the scale parameters of two Weibull distributions whose shape parameters are assumed unequal and unknown, and to compare the performance of these test procedures. We compare performance, through simulation studies, in terms of the empirical size and power of the tests. The test procedures are developed in Section 5.2, simulation studies are presented in Section 5.3, and illustrative examples and discussion are given in Section 5.4.
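The per-sample maximum likelihood fits underlying such test procedures can be sketched with SciPy's two-parameter Weibull (location fixed at zero); the shapes and scales below are assumed simulation values, not taken from the chapter.

```python
# Per-sample Weibull MLEs: unequal shapes, common scale (assumed).
import numpy as np
from scipy.stats import weibull_min

rng = np.random.default_rng(5)
x = weibull_min.rvs(1.5, scale=2.0, size=4000, random_state=rng)
y = weibull_min.rvs(2.5, scale=2.0, size=4000, random_state=rng)

# floc=0 fixes the location at zero, i.e. the standard two-parameter
# Weibull; shape and scale are then estimated by maximum likelihood.
kx, _, sx = weibull_min.fit(x, floc=0)
ky, _, sy = weibull_min.fit(y, floc=0)
print(kx, sx, ky, sy)
```

A test of equal scales would then be built from (sx, sy) and their estimated variances, without assuming kx = ky.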

A new method of inference of ancestral nucleotide and amino acid sequences.

A model of nucleotide or amino acid substitution was employed to analyze data of the present-day sequences, and maximum likelihood estimates of parameters such as bra…

Profit Efficiency among Paddy Farmers: A Cobb-Douglas Stochastic Frontier Production Function Analysis

A multiple regression model based on a stochastic frontier profit function, assumed to take the Cobb-Douglas specification, was estimated using cross-sectional data obtained from a sample of 397 paddy households via multi-stage and simple random sampling techniques. Maximum likelihood estimates of the specified profit model revealed that the profit efficiencies of the producers varied between 30.5% and 94.8% with a mean of 73.2%, suggesting that an estimated 26.8% of profit is lost to a combination of technical and allocative inefficiencies in paddy production. Results from the technical inefficiency model revealed that credit, education, farming experience, extension service, the MR219 seed variety, the broadcast planting method, the machine broadcasting method, and herbicides were significant factors influencing profit inefficiency. This shows that profit inefficiency in paddy production could be reduced significantly with improvement in the above socio-economic characteristics of the sampled farmers.

Testing a multivariate process for a unit root using unconditional likelihood

Unlike the stationary case, where least squares and exact maximum likelihood estimates converge to the same normal distribution, the estimates and test statistics converge to different di…

Combining Likelihood Information from Independent Investigations

In recent years, many likelihood-based asymptotic methods have been developed to produce highly accurate p-values. In particular, both the Lugannani and Rice [2] method and the Barndorff-Nielsen [3] [4] method produce p-values with third-order accuracy, i.e. a rate of convergence of O(n^{-3/2}). Fraser and Reid [5] showed that both methods require the signed log-likelihood ratio statistic and the standardized maximum likelihood estimate departure calculated on the canonical parameter scale. In this paper, we propose a method that combines the likelihood functions, and the standardized maximum likelihood estimate departures calculated on the canonical parameter scale, obtained from independent investigations into a combined p-value.
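The basic combination step can be illustrated without the higher-order machinery: for independent investigations the joint log-likelihood is the sum of the individual log-likelihoods. A sketch for a common normal mean, with assumed study summaries, where the combined MLE works out to the precision-weighted mean of the estimates:

```python
# Combining likelihood information from independent investigations:
# joint log-likelihood = sum of per-study log-likelihoods.
import numpy as np
from scipy.optimize import minimize_scalar

# (estimate, standard error) summaries from two hypothetical studies.
studies = [(1.2, 0.5), (1.8, 0.3)]

def neg_joint_loglik(theta):
    # Each study contributes a normal log-likelihood for theta
    # (constants dropped).
    return sum(0.5 * ((theta - m) / se) ** 2 for m, se in studies)

theta_hat = minimize_scalar(neg_joint_loglik).x

# The combined MLE equals the precision-weighted mean of the estimates.
w = [1 / se ** 2 for _, se in studies]
expected = sum(wi * m for wi, (m, _) in zip(w, studies)) / sum(w)
print(theta_hat, expected)
```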

The Dual of the Maximum Likelihood

The Maximum Likelihood method estimates the parameter values of a statistical model that maximize the corresponding likelihood function, given the sample information. This is the primal approach, presented in this paper as a mathematical programming specification whose solution requires the formulation of a Lagrange problem. A result of this setup is that the Lagrange multipliers associated with the linear statistical model (where the sample observations are regarded as a set of constraints) are equal to the vector of residuals scaled by the variance of those residuals. The novel contribution of this paper consists in deriving the dual model of the Maximum Likelihood method under normality assumptions. This model minimizes a function of the variance of the error terms subject to orthogonality conditions between the model residuals and the space of explanatory variables. An intuitive interpretation of the dual problem appeals to basic elements of information theory and an economic interpretation of Lagrange multipliers to establish that the dual maximizes the net value of the sample information. The paper presents the dual ML model for a single regression and provides a numerical example of how to obtain maximum likelihood estimates of the parameters of a linear statistical model using the dual specification.
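The primal setup and the multiplier characterization can be checked numerically: under normality the ML coefficients are the least squares solution, the residuals are orthogonal to the explanatory variables, and the candidate multipliers are the residuals scaled by the ML error variance. All data below are assumed for illustration.

```python
# Numerical check of the primal ML setup for a linear model.
import numpy as np

rng = np.random.default_rng(4)
n = 500
X = np.column_stack([np.ones(n), rng.normal(size=n)])
y = X @ np.array([1.0, 2.0]) + rng.normal(size=n)

beta_hat = np.linalg.lstsq(X, y, rcond=None)[0]  # ML = OLS under normality
resid = y - X @ beta_hat
sigma2_hat = resid @ resid / n                   # ML variance estimate

# Candidate Lagrange multipliers: residuals scaled by the variance,
# per the characterization described in the abstract.
lam = resid / sigma2_hat
print(beta_hat, sigma2_hat)
```

The orthogonality conditions X'e = 0 of the dual problem hold for these residuals up to floating-point error.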

Statistical analysis of heaped duration data

• If all variables in the model had been observed (i.e. true as well as reported durations), then standard maximum likelihood techniques would yield maximum likelihood estimates (MLE), along with variances, of the parameters of interest as well as of the heaping effect. However, this so-called 'full data' likelihood contains unknown, unobserved variables and therefore cannot be calculated.

Determinants of Land Contracts and Efficiency in Ethiopia: The Case of Libokemkem District of Amhara Region

With the help of maximum likelihood estimates, the effect of various factors on the total operated fields was examined. The maximum likelihood results indicate that oxen ownership, family size, age, and total income determine the total cultivated land. It was also shown that the choice of crops (tef and wheat) resulted in positive estimates, indicating that crop type determines the land area operated. The choice of tenure arrangement, on the other hand, depends on livestock units, large family size, food shortages, and access to markets. This finding calls for interventions that can support the operation of informal land markets to be more efficient than under the prevailing situation.
