The Weibull MPR model for interval censored survival data

Summary. The Weibull multi-parameter regression (MPR) model with frailty is developed for interval censored survival data. The basic MPR model, which is wholly parametric with non-proportional hazards, was developed by Burke and MacKenzie in their 2016 Biometrics paper. We describe the basic model, develop the interval-censored likelihood and extend the model to include Gamma frailty. We present a simulation study and re-analyse data from the Signal Tandmobiel study. The MPR model is shown to be superior to a proportional hazards competitor.

Parametric Model Based on Imputations Techniques for Partly Interval Censored Data

simplifies the calculation process, especially when dealing with larger data sets. The imputation techniques were relatively easy to apply to our data. Using R software, we developed programming code to apply different imputation methods to the data sets and proceed with the parametric analysis. In this study, several imputation techniques were used to estimate the survival function, and the results were compared with the estimate obtained by Turnbull's method for interval-censored and PIC failure time data. In the next two sections, the parametric model and imputation techniques are discussed.
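As a minimal sketch of the imputation-then-parametric-analysis idea described above (in Python rather than R, with invented interval data and a Weibull working model), one can replace each censoring interval by its midpoint and then maximize the ordinary exact-time Weibull likelihood:

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical interval-censored observations: the event is only known
# to lie in (L_i, R_i].
L = np.array([1.0, 2.0, 0.5, 3.0, 1.5])
R = np.array([2.0, 4.0, 1.5, 5.0, 2.5])

# Simple (midpoint) imputation turns each interval into an "exact" time.
t = (L + R) / 2.0

def neg_loglik(params):
    # Weibull log-likelihood for exact times; optimise on the log scale
    # so that shape k and scale lam stay positive.
    k, lam = np.exp(params)
    z = t / lam
    return -np.sum(np.log(k / lam) + (k - 1.0) * np.log(z) - z ** k)

res = minimize(neg_loglik, x0=[0.0, 0.0], method="Nelder-Mead")
k_hat, lam_hat = np.exp(res.x)
```

After imputation any standard complete-data routine applies, which is the practical appeal noted in the abstract; the cost is the understated variability that multiple imputation addresses.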

Power and Sample Size Calculations for Interval-Censored Survival Analysis

A special case of interval-censored data is current-status data, where individuals are seen only once after enrollment. Current-status data often arise in cross-sectional surveys, where the purpose is estimation of the distribution of age at onset for a disease or life event. Thus, the observations are either of the form (0, C] or (C, ∞) (i.e., left- or right-censored). These data are also commonly referred to as case 1 interval-censored data [14]. Current status data are common in demography [15, 16], economics, and epidemiology [17, 18]. In the medical sciences, animal tumorigenicity and HIV studies often result in such data because the investigator cannot measure the outcome directly or accurately [19]. The proportional hazards models and tests referenced above for analyzing interval-censored data can be used for the analysis of current status data. Murphy and van der Vaart [20] considered semiparametric likelihood ratio inference and proposed a test for significance of the regression coefficient in Cox's regression model for current status data. Banerjee [21] examined the power of the test under contiguous alternatives.
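The (0, C] / (C, ∞) structure of current-status data yields a simple binary-type likelihood under a parametric model: subjects contribute F(C) if the event has occurred by the inspection time and 1 − F(C) otherwise. A sketch, assuming an exponential working model and made-up inspection data:

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Hypothetical current-status data: each subject inspected once at time c;
# delta = 1 if the event had already occurred by c (interval (0, c]),
# delta = 0 otherwise (interval (c, inf)).
c = np.array([1.0, 2.0, 3.0, 1.5, 2.5, 4.0])
delta = np.array([0, 1, 1, 0, 1, 1])

def neg_loglik(rate):
    # Exponential working model: F(t) = 1 - exp(-rate * t).
    F = 1.0 - np.exp(-rate * c)
    return -np.sum(delta * np.log(F) + (1 - delta) * np.log(1.0 - F))

res = minimize_scalar(neg_loglik, bounds=(1e-6, 10.0), method="bounded")
rate_hat = res.x
```

The same binomial-likelihood shape underlies the semiparametric tests cited above; only the model for F changes.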

A Gamma-frailty proportional hazards model for bivariate interval-censored data

For correlated survival times, there are two basic modeling approaches; i.e., marginal or frailty modeling. The marginal approach specifies a marginal model for each failure time, adopts a working independence assumption in the likelihood construction, obtains point estimates of the regression parameters under this assumption, and then uses the so-called sandwich estimator to obtain standard error estimates (Wei et al., 1989). Various marginal models have been proposed along the lines of this general approach for multivariate interval-censored data; e.g., the proportional hazards (PH) model (Goggins and Finkelstein, 2000; Kim and Xue, 2002), the proportional odds (PO) model (Chen et al., 2007), the additive hazards model (Tong et al., 2008), the linear transformation model (Chen et al., 2013), and the additive transformation model (Shen, 2015). Moreover, a goodness-of-fit test for assessing the appropriateness of the marginal Cox model for multivariate interval-censored data was proposed by Wang et al. (2006). Even though the marginal approach provides robust inference, it does not adequately account for the correlation that naturally exists between the multiple failure times.

Comparing survival functions with interval-censored data in the presence of an intermediate clinical event

data. True variability may be underestimated when using simple imputation. In multiple imputation, the variance is adjusted for both within-imputation covariance and between-imputation variance. Pan (2000a) studied a multiple imputation method for estimating the coefficients of the Cox regression model with interval-censored data. Exact times were imputed in m sets from an initial estimate of the Cox proportional hazards model and the baseline survival; right-censored data were left unchanged. With the imputed exact times, a Cox model was fitted to obtain new estimates and a new baseline survival, and the iteration was repeated until the estimates converged. Huang et al. (2008) studied a log-rank test via multiple imputation for interval-censored data. After estimating the NPMLE using Turnbull's algorithm, they imputed exact times for all data points, including right-censored data, from the conditional probability of the NPMLE. Unlike Pan (2000a), Huang et al. (2008) used a large number of imputations (M = 100). The covariance matrix estimator was formed by subtracting the between-imputation variance from the within-imputation covariance (Follmann et al., 2003).
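The within/between variance adjustment mentioned above is conventionally done with Rubin's rules. A small illustration with hypothetical per-imputation Cox coefficient estimates (the numbers are invented, not from Pan (2000a) or Huang et al. (2008)):

```python
import numpy as np

# Hypothetical per-imputation results: a coefficient estimate and its
# variance from each of m Cox fits on m imputed data sets.
beta_m = np.array([0.52, 0.48, 0.55, 0.50, 0.47])
var_m = np.array([0.040, 0.042, 0.039, 0.041, 0.043])

m = len(beta_m)
beta_bar = beta_m.mean()              # pooled point estimate
W = var_m.mean()                      # within-imputation variance
B = beta_m.var(ddof=1)                # between-imputation variance
total_var = W + (1.0 + 1.0 / m) * B   # Rubin's rules: inflate, don't deflate
```

Note the contrast with the Follmann et al. (2003)-style estimator cited in the text, which combines the two components by subtraction rather than addition because of how the imputations enter the test statistic.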

Simulation of left-truncated and case-k interval censored survival data with time-varying covariates

Following Cox and Oakes (1984), Klein and Moeschberger (2003), and Nardi and Schemper (2003), parametric models often remain a useful tool, as they are fitted much faster and offer more efficient estimates under conditions such as dependency of the survival times on the covariates (either fixed or time-varying) and when parameter values are far from zero. Simulation procedures thus enable a researcher to assess the performance of a proposed parametric estimator while determining suitable inferential procedures for the parameters of a specified model. Such procedures are crucial for drawing reliable and precise information from the sample data at hand.

Modeling Arbitrarily Interval-Censored Survival Data with External Time-Dependent Covariates

calculate the conditional expectations of the local log-likelihoods, given the observed data and the current estimate of the hazard function, and the M-step, in which these expected log-likelihoods are maximized with respect to their parameters. On the other hand, this method requires manual entry of a bandwidth parameter that determines the amount of smoothing for the hazard function estimate (Betensky et al., 2002). Further, analytic standard errors are not derived, necessitating the use of the bootstrap, which is quite computationally intensive in this setting (Cai & Betensky, 2003). Lastly, local likelihood suffers from numerical stability problems in regions of sparse data, such as the right-hand tail of the hazard function. For the same problem, Cai and Betensky (2003) proposed a penalized spline-based approach: they weakly parameterized the log-hazard function with a piecewise-linear spline and obtained a smoothed estimate of the hazard function by maximizing the penalized likelihood through a mixed model-based approach. One disadvantage of this approach is that, in small samples, the variability due to estimation of the smoothing parameter appears difficult to quantify from the data within the frequentist framework.

Comparing survival functions with interval-censored data in the presence of an intermediate clinical event

A few methods have been suggested for left-truncated and interval-censored (LTIC) data. Turnbull's characterization was corrected to accommodate both truncation and interval-censoring time points [13], and was extended to the regression model under the proportional hazards assumption [14]. Pan and Chappell noted that the NPMLE is inconsistent for early times with LTIC data, while the conditional NPMLE is consistent [15]. Estimation of the parameters in the Cox model with LTIC data and a rank-based test of survival functions with LTIC data have also been studied [16, 17]. However, the length-biased problem was not considered in those methods.

Sieve Estimation with Bivariate Interval Censored Data

In this paper, we investigate the association and joint distribution of two event times with case 2 interval-censored data using a spline-based sieve estimation method. Recently, spline-based sieve maximum likelihood estimation has often been used in survival analysis; see, for example, Lu et al. [13] and Zhang et al. [28]. Under a copula model, we adopt a two-stage approach and apply the spline-based sieve method to estimate the marginal distributions first, and then the association parameter. We make three contributions in this paper: the proposed two-stage estimators are asymptotically consistent; thanks to the spline procedure, the computation of the two-stage estimator for the association parameter is much more efficient than the two-stage semiparametric method of Sun et al. [19]; and, again thanks to the spline procedure, the estimate of the joint distribution of the two failure times is smooth and explicit.

Variable selection in a flexible parametric mixture cure model with interval-censored data

of the disease [1]. In the management of at-risk populations (i.e., the elderly), it is therefore important to study the time to aMCI conversion and to identify risk factors associated with it. Several studies have been performed in this respect [2, 3, 4]. In particular, we consider here a study [5] conducted from 1988 to 2008 which included 241 healthy elderly people (average age of 72 years) and presents several interesting features. Since participants were followed at regular interviews, the endpoint of interest in this study, the time to aMCI conversion, is only known to occur between two successive visits. That is, all the observed data are interval-censored. Participants who had not experienced conversion by their last follow-up date are right-censored. Also, it is known that even in this at-risk population, some individuals will never experience conversion [6]; therefore, a fraction of the population is “immune” to the event, or “cured”, as opposed to “susceptible” or “uncured”. It is of interest to identify which covariates impact the probability of being susceptible, the time until conversion, or both. We thus need a method that allows such variable selection and analysis. Up to now, these data have been analyzed without variable selection and without accounting for a possible cure fraction, though dealing with the interval-censored nature of the data. Most statistical software packages propose methods for right-censored data, but few of them allow data to be interval-censored [7]. In a non-parametric setting, the Kaplan-Meier estimator is no longer available since, in most cases, the events can no longer be ordered. To overcome this, the Turnbull non-parametric survival estimator was developed [8], and only recently was a generalization allowing for continuous covariates proposed [9]. Regression models have also been studied under this type of censoring [10, 11, 12, 13, 14, 15].
However, all these methods usually make use of complex algorithms, such as the Expectation-Maximization (EM) algorithm [16], the self-consistency algorithm [8], the Iterative Convex Minorant algorithm [12], or B-spline smoothing techniques [13]. By contrast, assuming a specific distribution for the event times makes the analysis much simpler in the presence of interval-censoring.
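To illustrate why a fully parametric assumption simplifies interval-censored analysis, here is a sketch: each observation contributes S(L) − S(R) to the likelihood (with S(R) = 0 for right-censored cases), so a Weibull fit reduces to a two-parameter optimization. Data and starting values are hypothetical:

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical interval-censored data: event in (L_i, R_i];
# R_i = inf encodes right censoring at L_i.
L = np.array([0.5, 1.0, 2.0, 1.5, 3.0])
R = np.array([1.5, 2.0, np.inf, 2.5, np.inf])

def surv(t, k, lam):
    # Weibull survival function; S(inf) = 0 covers right censoring.
    return np.exp(-(t / lam) ** k)

def neg_loglik(params):
    k, lam = np.exp(params)  # log-parameterisation keeps k, lam > 0
    return -np.sum(np.log(surv(L, k, lam) - surv(R, k, lam)))

res = minimize(neg_loglik, x0=[0.0, 0.5], method="Nelder-Mead")
k_hat, lam_hat = np.exp(res.x)
```

No EM, self-consistency, or convex-minorant machinery is needed; a generic optimizer suffices once the distribution is specified.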

A semiparametric Bayesian proportional hazards model for interval censored data with frailty effects

Bellamy et al. [8] extend parametric event time models to clustered and interval-censored settings by introducing additive frailties into the linear predictor. Frailty comes into play when multiple events are considered for a given unit; it can also account for unobserved covariates. Bellamy et al. [8] implement their algorithm in existing commercial statistical computing software, modeling the dependency between the multiple events of a patient (doctor visits during the subject's time in the study) by frailty. Bellamy's idea is easily implemented in a Bayesian framework: WinBUGS [9] allows analyses for interval-censored data with frailty to be implemented in the same model. A parametric approach via the Weibull model or an Accelerated Failure Time (AFT) model is easily realized, and a frailty can be accommodated in the linear predictor. However, the implementation of semiparametric proportional hazards models in WinBUGS is cumbersome (see Example Leuk in Example Volume 1 of the WinBUGS software).

Bayes Interval Estimation on the Parameters of the Weibull Distribution for Complete and Censored Tests

2. By constructing a confidence interval, we condition the uncertainty on the observations. As a result, the stages involved in constructing a confidence interval are the same as those of Bayesian estimation, which leads to a Bayesian approach to constructing a CI. First, a prior distribution is identified to model the uncertainty, and then information from the given data is used to design the CI. The unknown parameter is treated as a random variable, and the observed data are utilized to obtain the posterior distribution. To conduct the above two-step process, let f_X(x; θ) be the joint pdf
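The two-step process described here (fix a prior, then obtain the posterior from the data) can be sketched for a conjugate pair; the exponential model for f_X(x; θ), the Gamma prior values, and the data are illustrative assumptions, not taken from the paper:

```python
import numpy as np
from scipy import stats

# Hypothetical complete sample from an exponential model with rate theta,
# paired with a conjugate Gamma(a0, b0) prior on theta.
x = np.array([1.2, 0.7, 2.3, 1.9, 0.4, 1.1])
a0, b0 = 1.0, 1.0                 # assumed prior shape and rate

# Step 1 done (prior chosen); step 2: posterior is Gamma(a0 + n, b0 + sum x).
a_post = a0 + len(x)
b_post = b0 + x.sum()

# Equal-tailed 95% Bayesian credible interval for theta.
lo, hi = stats.gamma.ppf([0.025, 0.975], a_post, scale=1.0 / b_post)
```

The interval (lo, hi) is the Bayesian analogue of a CI: it is computed from the posterior of the parameter, now treated as a random variable, rather than from a sampling distribution.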

Estimation in Inverse Weibull Distribution Based on Randomly Censored Data

This article deals with the estimation of the parameters and reliability characteristics in inverse Weibull (IW) distribution based on the random censoring model. The censoring distribution is also taken as an IW distribution. Maximum likelihood estimators of the parameters, survival and failure rate functions are derived. Asymptotic confidence intervals of the parameters based on the Fisher information matrix are constructed. Bayes estimators of the parameters, survival and failure rate functions under squared error loss function using non-informative and gamma informative priors are developed. Furthermore, Bayes estimates are obtained using Tierney-Kadane's approximation method and Markov chain Monte Carlo (MCMC) techniques. Also, highest posterior density (HPD) credible intervals of the parameters based on MCMC techniques are constructed. A simulation study is conducted to compare the performance of various estimates. Finally, a randomly censored real data set supports the estimation procedures developed in this article.

The nonparametric analysis of interval-censored failure time data

The third part of this dissertation deals with the regression analysis of multivariate interval-censored data with informative censoring. Multivariate interval-censored failure time data often occur in clinical trials that involve several related event times of interest, all of which suffer interval censoring. Different types of models have been proposed for the regression analysis (Zhang et al. (2008); Tong et al. (2008); Chen et al. (2009); Sun (2006)). However, most of these methods only deal with the situation where the observation time is independent of the underlying survival time, either completely or given covariates. In this chapter, we discuss regression analysis of multivariate interval-censored data when the observation time may be related to the underlying survival time. An estimating equation based approach is proposed for regression coefficient estimation under the additive hazards frailty model, and the asymptotic properties of the proposed estimates are established using counting processes. A major advantage of the proposed method is that it does not involve estimation of any baseline hazard function. Simulation results suggest that the proposed method works well in practical situations.

Exact and Asymptotic Weighted Logrank Tests for Interval Censored Data: The interval R Package

For parametric methods, it is straightforward to form the likelihood for interval-censored data under the accelerated failure time model, and standard likelihood-based methods may be applied (see Equation 1). These methods are provided in the survival package using the survreg function (Therneau and Lumley 2009). For right-censored data, a more common regression method is semi-parametric Cox proportional hazards regression. In this model the baseline hazard function is completely nonparametric, but it does not need to be estimated; the score test from this model is the logrank test. The generalization of the model to interval-censored data typically uses the marginal likelihood of the ranks (see Satten 1996; Goggins, Finkelstein, and Zaslavsky 1999). The only available software for fitting these models of which we are aware is an S function (Goggins 2007) – which calls a compiled C program requiring access to a SPARC-based workstation – to perform a Monte-Carlo EM algorithm for the proportional hazards models described in Goggins et al. (1999). Another approach to semi-parametric modeling is to estimate the nonparametric part of the model explicitly with a piecewise constant intensity model (see Farrington 1996; Carstensen 1996). This is the approach taken with the Icens function in the Epi package (Carstensen, Plummer, Laara, Hills, et al. 2010).

Double Bootstrap Confidence Interval Estimates with Censored and Truncated Data

survival. Gupta et al. (1999) proved analytically that unique maximum likelihood estimates exist for the parameters of this model and analyzed a lung cancer data set. Mazucheli et al. (2005) compared the accuracy of the Wald confidence interval with the B-p and B-t intervals for the mode of the hazard function of the log-logistic distribution. Other authors who have done significant work using this model include Cox and Lewis (1966) and Cox, Oakes, O'Quigley and Struthers (1982). The model can also easily be extended to accommodate covariates, truncated data, and all types of censored observations, such as left, right and interval censoring. More discussion of truncated data can be found in Lawless (1982).

Use of interval-censored survival data as an alternative to Kaplan-Meier survival curves: studies of oral lesion occurrence in liver transplants and cancer recurrence

In the literature, several estimators of the survival function are available. Currently, the Kaplan-Meier estimate is the simplest method for computing survival over time; however, it is only adequate for right-censored data (i.e., when the event occurs after the last follow-up). Another important estimator of survival is Turnbull's algorithm [13], which takes interval-censored survival data into account. The survival curves generated with the Kaplan-Meier estimate and Turnbull's algorithm are both easily interpreted. Various approaches for analyzing interval-censored data have been proposed in the literature. For example, Peto [14] provided a method to estimate a cumulative distribution function from interval-censored data. This method is similar to the life-table technique and to the presented algorithm for estimating survival [15]. Semiparametric approaches based on the proportional hazards model have been developed for interval-censored data [16–21]. Moreover, a wide variety of parametric models can also be used to estimate the distribution of the time to an event of interest in the presence of interval-censored data [22–24]. In a comprehensive review, Gómez et al. [25] present the most frequently applied non-parametric, parametric, and semiparametric estimation approaches used to analyze interval-censored data. Rodrigues et al. [26] present an application of interval-censored methodology to a data set on boys' first use of marijuana.
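A minimal sketch of the self-consistency iteration behind Turnbull's estimator may clarify how it generalizes Kaplan-Meier to intervals. This is a simplification (the support is restricted to observed interval endpoints rather than the exact innermost intervals) and the data are hypothetical:

```python
import numpy as np

# Hypothetical interval-censored observations (L_i, R_i].
L = np.array([0.0, 1.0, 0.0, 2.0])
R = np.array([2.0, 3.0, 1.0, 4.0])

# Candidate support points: here, simply the distinct interval endpoints.
s = np.unique(np.concatenate([L, R]))
alpha = (s[None, :] > L[:, None]) & (s[None, :] <= R[:, None])  # membership

p = np.full(len(s), 1.0 / len(s))      # initial mass on each support point
for _ in range(200):                    # self-consistency (EM) iterations
    num = alpha * p[None, :]
    cond = num / num.sum(axis=1, keepdims=True)  # E-step: per-subject masses
    p = cond.mean(axis=0)                        # M-step: average over subjects

survival = 1.0 - np.cumsum(p)  # Turnbull-style survival estimate at s
```

Each pass redistributes every subject's unit mass over the support points falling in its interval, exactly the self-consistency idea [13]; with exact event times the iteration collapses to the Kaplan-Meier estimate.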

Simple parametric survival analysis with anonymized register data: A cohort study with truncated and interval censored event and censoring times

Table 3 presents relative bias, coverage probabilities, and inflation of standard errors when using either single midpoint imputation or multiple imputation for interval-censored censoring events. When interval censoring is induced by intervals with lengths up to six months, both analytic strategies perform well, with negligible bias, coverage probabilities of confidence intervals close to the nominal value, and standard errors that are only marginally increased relative to the ordinary situation, i.e. when only ordinary right censoring is present. As a general tendency, however, multiple imputation has lower relative bias and better coverage, in particular when censoring becomes more dominant in terms of higher censoring rates and wider censoring intervals. Only in the extreme cases of 6% and 9% annual censoring proportions, censoring intervals of one year in length, and larger sample sizes of 10,000 do the coverage probabilities of the multiple imputation strategy decrease unacceptably, to levels around 75% to 80%, albeit not as low as when single midpoint imputation is used. This poor performance in extreme cases is to be expected, as the conditional censoring distribution is taken to be uniform in the multiple imputation analysis, while censoring times are actually generated from an exponential model. (Table 2 gives the parameterization of the distributions.)

Bayesian Survival Analysis of Regression Model Using Weibull

where δ_i is an indicator variable that takes the value 1 if the observation is censored and 0 otherwise. Section III presents hypothetical survival data, with censoring, for fifteen patients suffering from a disease who were treated with two different treatments. Sections IV and V consist of Bayesian regression analyses of treatment 1 and treatment 2 using the LaplacesDemon package (Hall [1]), which is available in R (R Development Core Team [2]). The goal of LaplacesDemon is to provide a complete and self-contained Bayesian environment within R. The main function of this package used in the paper is LaplaceApproximation, which gives approximated posterior estimates of the parameters in a Bayesian framework. In order to deal with the censoring mechanism, we have developed a function which works well for the analysis of survival data. A comparison of the survival curves for the two treatments using the Weibull model is reported in Section VI. Finally, a brief discussion and conclusion are given in Section VII.

Survival and Hazard Estimation of Weibull Distribution Based on Interval Censored Data

In Table 2, comparing the mean squared error (MSE) and absolute bias of the hazard function of the Weibull distribution with interval-censored data under maximum likelihood (MLE), Bayesian estimation using Lindley's approximation (BL), and Bayesian estimation using Gaussian quadrature (BG), we found that the Bayesian estimator using Gaussian quadrature is better than the others in all cases except when n = 100 with shape parameter equal to 3. Moreover, maximum likelihood is better than the Bayesian estimator using Lindley's approximation in all cases except when n = 25 with shape parameter 0.5 or 1.5.
