Survival data

Top PDF Survival data:

Bayesian analysis and model selection for interval censored survival data

Robust Learning for Optimal Treatment Strategy with Survival Data

There are also existing works aiming to identify the optimal treatment regime with survival data. Zhao et al. (2011) used a modification of support vector regression to approximate the Q-function and estimated the optimal treatment regime accordingly. Utilizing a flexible regression model helps to better approximate the Q-function and thus leads to a high-quality regime. Goldberg and Kosorok (2012) proposed a Q-learning procedure that works for censored data with a flexible number of decision stages. They also proved the consistency of the estimated optimal survival time when the optimal Q-function belongs to the approximation space, and derived finite-sample risk bounds on the generalization error of the estimated regime. Tian et al. (2014) proposed positing the proportional hazards model on modified covariates to estimate the optimal treatment regime, which can be viewed as a variant of A-learning. Zhao et al. (2015) extended outcome weighted learning to censored data, providing a doubly robust and consistent estimator of the optimal treatment regime that maximizes the restricted mean survival time.

Transformation Models for Survival Data Analysis with Applications

The proposed generalized transformation models can be applied to a variety of survival data. Even though cure models are motivated by clinical trials where the end point is not death, such as relapse-free survival time, they can be applied to overall survival time as well. In this article, the applications of the proposed models are illustrated by examining the relationship between the survival time of a patient and several risk factors, based on two cohorts of data from the First National Health and Nutrition Examination Survey Epidemiologic Follow-up Study. In terms of Brier scores, the selected proposed model provides a better fit than the Cox proportional hazards model and the Zeng et al. [1] transformation cure model. It should be pointed out that even though the Brier score is commonly used in practice for model comparison, it has its own disadvantages. For instance, although the Brier score can be calculated at any time point, it does not discriminate competing models over the whole time period. Other model comparison methodologies will be explored in our future study. For example, receiver operating characteristic (ROC) curves may be used to measure the differences between the models over all the relevant time periods.
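
Since the abstract leans on the Brier score for model comparison, a minimal sketch may help. It assumes fully observed (uncensored) data, and all function and variable names are illustrative; real survival applications reweight these terms by an inverse-probability-of-censoring estimate.

```python
# Brier score at a fixed horizon t, sketched for fully observed
# (uncensored) data; survival analyses in practice reweight these
# terms by an inverse-probability-of-censoring estimate.

def brier_score(times, predicted_survival, t):
    """Mean squared error between the survival indicator 1{T > t}
    and the model's predicted survival probability S_hat(t)."""
    errors = [
        ((1.0 if observed > t else 0.0) - s_hat) ** 2
        for observed, s_hat in zip(times, predicted_survival)
    ]
    return sum(errors) / len(errors)

# Three subjects; the model predicts S(5) = 0.4, 0.7, 0.9 for them
score = brier_score(times=[3.0, 7.2, 10.1],
                    predicted_survival=[0.4, 0.7, 0.9],
                    t=5.0)
```

As the abstract notes, this quantity can be evaluated at any single horizon t, which is exactly why it cannot by itself discriminate models over the whole follow-up period.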

Survival data management in patients with acute myocardial infarction

ABSTRACT: The aim of this study is to analyse survival data of the patients with acute myocardial infarction. We studied a sample of 424 patients with a mean age of 67.1±12.3 years. The overall mortality rate after 5 years was 233.4‰. The presence of late potentials in patients with associated arrhythmias, low ejection fraction or low RR variability is associated with higher mortality risk.

Individual patient data meta-analysis of survival data using Poisson regression models

Poisson regression is used in the modelling of count data and contingency tables; however, the extension to modelling survival data via a piecewise exponential model [19] serves as an alternative to the widely used Cox model. It has been shown that the Cox model can be fitted using a Poisson GLM, owing to the shared form of the contribution to the partial log-likelihood, by splitting follow-up time into as many intervals as there are events [20]. However, this method can be computationally intensive. Alternatively, we can choose a smaller number of fixed-length time intervals, within which patients are at risk of experiencing events [21], to closely approximate the Cox model. We also obtain direct estimates of the baseline hazard rate. Fine splitting of the timescale has been used to allow splines and fractional polynomials to model the baseline hazard continuously [21,22].
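
The interval-splitting step described above can be sketched as follows; the cut points, record layout, and function name are illustrative assumptions, not the exact scheme of references [19-21].

```python
# Sketch of the follow-up splitting step: one subject's record is
# expanded into person-time intervals, each with its exposure time and
# an event indicator.  A Poisson GLM on such records, with
# log(exposure) as offset, fits the piecewise exponential model.

def split_follow_up(time, event, cuts):
    """Expand (time, event) into one record per interval at risk."""
    records = []
    start = 0.0
    for end in cuts:
        if time <= start:
            break  # subject no longer under observation
        exposure = min(time, end) - start
        fell_here = 1 if (event and start < time <= end) else 0
        records.append({"interval": (start, end),
                        "exposure": exposure,
                        "event": fell_here})
        start = end
    return records

# Subject followed for 3.5 years with an event; yearly cuts up to 5 years
rows = split_follow_up(time=3.5, event=True, cuts=[1, 2, 3, 4, 5])
```

Using a handful of fixed-length intervals, as above, keeps the expanded dataset small; the finer the cuts, the closer the fit comes to the Cox partial-likelihood result.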

An Application of Generalized Entropy Optimization Methods in Survival Data Analysis

In this paper, survival data analysis is carried out by applying Generalized Entropy Optimization Methods (GEOM). It is known that all statistical distributions can be obtained as MaxEnt distributions by choosing the corresponding moment functions. However, Generalized Entropy Optimization Distributions (GEOD), in the form of MinMaxEnt and MaxMaxEnt distributions, which are obtained on the basis of the Shannon measure and supplementary optimization with respect to characterizing moment functions, represent the given statistical data more exactly. For this reason, survival data analysis by GEOD acquires a new significance. In this research, the data of the life table for engine failure data (1980) are examined. The performances of GEOD are established by the Chi-Square criterion, the Root Mean Square Error (RMSE) criterion, the Shannon entropy measure, and the Kullback-Leibler measure. Comparison of the GEOD with each other in these different senses shows that among these distributions

Some Importance Models inspect to Survival Data Inference

This paper applied several models to survival data analysis and focused on the characteristics of the Cox regression model. It used liver cancer data from the Khartoum State Health Ministry (Military Medical Hospital). The Cox regression model was estimated with a number of descriptive variables, such as sex and housing status, and a quantitative variable, age. The paper found that the age variable has a significant effect on the time to event, while the other variables have no significant effect.

Bayesian semiparametric methods for longitudinal, multivariate, and survival data

Bayesian approaches have several advantages for data collected from tumorigenicity studies, including ease of computation via MCMC, the ability to incorporate prior information (e.g., from historical controls), and exact inferences on different aspects of the tumor response (time to first tumor, total tumor burden, etc.). Unfortunately, in the Bayesian literature there has been limited consideration of dynamic frailty models, and of methods for multiple event time data in general. For recent Bayesian references on frailty models for multiple event time and multivariate survival data, refer to papers by Gustafson (1997), Sahu et al. (1997), Walker and Mallick (1997), Sargent (1998), Aslanidou et al. (1998), Sinha (1998), Chen and Sinha (2002), Dunson and Chen (2004), and Sinha and Maiti (2004), as well as the review in Ibrahim et al. (2001). Härkänen et al. (2003) proposed an innovative approach based on a model that allows subject-specific frailty trajectories to vary according to a latent class structure. In many settings, including animal tumorigenicity studies, it may be more natural to suppose that the age-specific risk trajectories vary according to a continuum, with each subject potentially having their own unique pattern.

Enhanced secondary analysis of survival data: reconstructing the data from published Kaplan-Meier survival curves

The primary objective of this previous work was neither the reconstruction of Kaplan-Meier data nor the reconstruction of life-table data; it was a necessary step the authors had to take in order to illustrate methods for combining survival data from several studies. All these previous attempts reconstructed the data in the form of life-table data at a limited number of time points, whereas we have tried to reconstruct the original KM intervals. Published survival curves are almost always based on the KM method, justifying our approach of using inverted KM equations instead of life-table equations and solving, at the same time, the problem of pre-specification of intervals. For the 'all information' and 'no total events' cases, the censoring pattern varied over the published numbers-at-risk intervals, as in Williamson [11]. For the 'no number at risk' case, the censoring pattern is assumed constant over the interval, and for the 'neither' case, no censoring is assumed. Except for the 'neither' case, where there is no information on the number at risk or the total number of events, we believe that our method represents an improvement over the other approaches described above. This is
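
The product-limit inversion at the heart of this kind of reconstruction can be sketched under simplifying assumptions (no censoring within intervals; function and variable names are illustrative, and the paper's full censoring-pattern handling is omitted).

```python
# Sketch of inverting the KM equations: each product-limit step
# satisfies S_i = S_{i-1} * (1 - d_i / n_i), so the number of events
# at step i is d_i = n_i * (1 - S_i / S_{i-1}).

def invert_km(surv_probs, n_risk):
    """Recover event counts from published survival probabilities
    and numbers at risk read at each event time."""
    events = []
    s_prev = 1.0
    for s, n in zip(surv_probs, n_risk):
        events.append(round(n * (1.0 - s / s_prev)))
        s_prev = s
    return events

# Published curve drops to 0.9 then 0.72, with 100 and 90 at risk
counts = invert_km([0.9, 0.72], [100, 90])  # -> [10, 18]
```

Because this inverts the KM steps directly, the recovered intervals follow the curve's own event times rather than a pre-specified life-table grid, which is the advantage the abstract describes.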

Strategies for power calculations in predictive biomarker studies in survival data

For the retrospective study, we point out its distinct design, which requires special treatment to generate appropriate power. Specifically, the retrospective study has fixed survival times and censoring statuses (i.e., the survival data have already been collected). This feature challenges the hypothesized parameters. For example, suppose the preliminary data for discovering the biomarker have an overall MST of 6 years, but the study cohort you plan to validate has an overall MST of 2 years. The two studies then have different overall MSTs. As a result, directly plugging the preliminary results into the study cohort becomes problematic. Our strategy provides an approach to harmonize both studies such that their MSTs and censoring rates are comparable. As for the prospective study, the 'PowerPredictiveBiomarker' R package generates a template statistical plan that investigators can easily incorporate into their retrospective study proposal. Example data from the published MR signature give end users hands-on experience in preparing the statistical plan for predictive biomarker validation.
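
One way to see why differing MSTs matter, and how hazards could be put on a comparable footing, is to assume exponential survival; this is an illustrative simplification, not the internal method of the 'PowerPredictiveBiomarker' package.

```python
import math

# Under S(t) = exp(-lam * t), the median survival time satisfies
# exp(-lam * MST) = 0.5, so lam = ln(2) / MST.  Harmonizing the two
# cohorts in the abstract's example then amounts to rescaling hazards.

def hazard_from_mst(mst):
    """Exponential hazard rate implied by a median survival time."""
    return math.log(2.0) / mst

lam_prelim = hazard_from_mst(6.0)  # preliminary cohort, MST = 6 years
lam_study = hazard_from_mst(2.0)   # planned validation cohort, MST = 2 years
scale = lam_study / lam_prelim     # rescale preliminary hazards by this factor
```

With MSTs of 6 and 2 years, the validation cohort's implied hazard is three times the preliminary one, which is why plugging preliminary estimates in unadjusted would misstate the power.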

Anatomic distribution, clinical features, and survival data of 87 cases primary gastrointestinal lymphoma

Surgical treatment was traditionally considered the main treatment method for PGIL, and most patients underwent radical resection; palliative resection might be due to a huge tumor or extensive lymph node involvement. However, as lymphoma is highly sensitive to chemotherapy, the main treatment of PGIL is now non-surgical. A prospective study comparing surgery plus chemotherapy with chemotherapy alone showed that surgical treatment could not improve the 10-year survival rate of PGIL [6]. Recently, a study showed equivalent efficacy whether or not patients underwent an operation [11]. Moreover, more and more studies have demonstrated that non-surgical strategies yield better OS [15, 16]. In our study, 50 patients received non-surgical treatment, such as CHOP or R-CHOP, accounting for 54.5% of all patients. Rituximab is a chimeric monoclonal antibody against the protein CD20, which is primarily found on the surface of immune system B cells. Rituximab destroys both normal and malignant B cells that have CD20 on their surfaces. The addition of rituximab has improved the overall survival of lymphoma patients.

Network meta-analysis of survival data with fractional polynomials

Weibull, lognormal or log-logistic). This baseline hazard function is multiplied by the constant hazard ratio for each of the competing interventions relative to this baseline to obtain hazard functions for the interventions of interest. The assumption of a constant hazard ratio implies that only the scale of these parametric functions is affected by treatment, and accordingly all the competing interventions have the same shape. Since the tail of the survival function has a great impact on the expected survival, this assumption may lead to biased, or at least highly uncertain, estimates of differences in expected survival and therefore of cost-effectiveness estimates. Given the multi-dimensional treatment effect of the approach presented in this paper, the parametric hazard functions of the competing interventions can be
Table 4: Functions of parameter estimates for different fractional polynomials
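
A fractional polynomial log-hazard of the kind this approach builds on can be sketched as follows; the powers and coefficients below are illustrative placeholders, not estimates from the paper.

```python
import math

# Second-order fractional polynomial for the log hazard,
# ln h(t) = b0 + b1 * t**p1 + b2 * t**p2, with the usual convention
# t**0 = ln(t).  Letting treatment shift b1 and b2 (not just b0)
# yields time-varying hazard ratios, relaxing the constant-shape
# assumption criticized above.

def fp2_log_hazard(t, b0, b1, b2, p1=0.0, p2=1.0):
    def fp(t, p):
        return math.log(t) if p == 0.0 else t ** p
    return b0 + b1 * fp(t, p1) + b2 * fp(t, p2)

# Log hazard ratio of an arm with shifted b1, b2 versus the baseline arm
lhr_at_1yr = fp2_log_hazard(1.0, 0.0, -0.5, 0.02) - fp2_log_hazard(1.0, 0.0, 0.0, 0.0)
lhr_at_4yr = fp2_log_hazard(4.0, 0.0, -0.5, 0.02) - fp2_log_hazard(4.0, 0.0, 0.0, 0.0)
```

Because the log hazard ratio here changes with t, the tails of the competing survival functions can diverge in shape, which is precisely what the constant-hazard-ratio setup rules out.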

Malignant Pediatric Gliosarcoma Defies General Survival Data

Pediatric gliosarcoma is evidently rare, and survival with the disease is poor even with the best multimodality approaches. This case of an 11-year-old preadolescent boy, treated with near-total excision and chemoradiotherapy, who has weathered the storm of the disease and survived into adolescence with good performance despite residual disease, is worth reporting.

Behrens-Fisher Analogs for Discrete and Survival Data

In many biomedical applications the primary endpoint of interest is the survival time or time to a certain event, such as time to death, time it takes a patient to respond to a therapy, or time from response until disease relapse (i.e., the disease returns). The importance of parametric models in the analysis of lifetime data is well known (Lawless (1982); Nelson (1982); Mann et al. (1974)). The use of the Weibull distribution to describe real phenomena has a long and rich history, and it has been considered a successful model for many product failure mechanisms. Lloyd (1967); Ku et al. (1972); Hammitt (2004) and McCool (1998), among others, have extended the use of the Weibull distribution to other branches of statistics, such as reliability, risk, and quality control.
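
The Weibull survival and hazard functions mentioned above, in the standard shape-scale parameterization, can be sketched as a short example of why the model suits product failure mechanisms.

```python
import math

# Weibull survival and hazard in the shape-scale parameterization:
# shape < 1 gives a decreasing hazard (early failures), shape > 1 an
# increasing hazard (wear-out), shape = 1 the constant-hazard
# exponential special case.

def weibull_survival(t, shape, scale):
    """S(t) = exp(-(t / scale) ** shape)."""
    return math.exp(-((t / scale) ** shape))

def weibull_hazard(t, shape, scale):
    """h(t) = (shape / scale) * (t / scale) ** (shape - 1)."""
    return (shape / scale) * (t / scale) ** (shape - 1)

# At t = scale, survival equals exp(-1) ~ 0.368 for any shape
s_at_scale = weibull_survival(2.0, shape=1.5, scale=2.0)
```

The single shape parameter thus covers both infant-mortality and wear-out failure regimes, which is what makes the family so widely applicable in reliability work.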

Change Point Analysis of Survival Data with Application in Clinical Trials

The goal of this paper is to find efficient change-point detection methods for piecewise constant failure rate models [5] [6] [8] [15] with unknown pre-change and post-change parameters. Maximum likelihood estimation of the change point in the presence of nuisance parameters is reviewed; it appears consistent under certain conditions. A new alternative estimation procedure is proposed, based on Kaplan-Meier estimation of the survival function [16] followed by least-squares estimation of the change point. For this scheme, strong consistency of all the estimators is established. This is a rather constitutive distinction from the classical change-point models, where consistent estimation of the change-point parameter is not possible.
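
The KM-then-least-squares scheme can be sketched under simplifying assumptions: a noise-free survival estimate, a grid of candidate change points, and illustrative names throughout; this is a simplification, not the authors' exact estimator.

```python
import math

# Under a piecewise constant failure rate the cumulative hazard
# H(t) = -ln S(t) is piecewise linear:
#   H(t) = lam1 * min(t, tau) + lam2 * max(t - tau, 0).
# For each candidate change point tau, solve the 2x2 normal equations
# for (lam1, lam2) and keep the tau with the smallest residual sum of
# squares.

def fit_change_point(times, surv, candidates):
    H = [-math.log(s) for s in surv]
    best = None
    for tau in candidates:
        x1 = [min(t, tau) for t in times]
        x2 = [max(t - tau, 0.0) for t in times]
        a11 = sum(u * u for u in x1)
        a12 = sum(u * v for u, v in zip(x1, x2))
        a22 = sum(v * v for v in x2)
        b1 = sum(u * h for u, h in zip(x1, H))
        b2 = sum(v * h for v, h in zip(x2, H))
        det = a11 * a22 - a12 * a12
        if det == 0.0:
            continue  # all observations on one side of tau
        lam1 = (b1 * a22 - b2 * a12) / det
        lam2 = (a11 * b2 - a12 * b1) / det
        rss = sum((h - lam1 * u - lam2 * v) ** 2
                  for h, u, v in zip(H, x1, x2))
        if best is None or rss < best[0]:
            best = (rss, tau, lam1, lam2)
    return best[1:]  # (tau, lam1, lam2)

# Failure rate 0.1 before the change point at t = 5, then 0.4 after
ts = [float(t) for t in range(1, 11)]
S = [math.exp(-(0.1 * min(t, 5) + 0.4 * max(t - 5, 0))) for t in ts]
tau, lam1, lam2 = fit_change_point(ts, S, candidates=[3, 4, 5, 6, 7])
```

In practice the exact survival probabilities above would be replaced by the Kaplan-Meier estimate, so the least-squares fit absorbs its sampling noise, which is the setting in which the paper establishes strong consistency.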

"Smooth" Inference for Clustered Survival Data

"Smooth" Inference for Clustered Survival Data

Louis (2000) and Pan and Connett (2001) for right-censored data. They proposed an EM-like algorithm in which censored survival times are imputed through a Buckley-James type approach, but they required that the random effect and error term have a common marginal distribution, approximated using the Kaplan-Meier estimator. This difficulty stems in part from their focus on being "non-parametric" with respect to the marginal survival distribution. Therneau et al. (2003) proposed a penalized method to estimate the unknown parameters in the subject-specific AFT model (1.2) by treating the frailty as additional regression coefficients. Lambert, Kimber, and Johnson (2004) used a parametric method to estimate all the unknown parameters. Komárek and Lesaffre (2007) proposed a Bayesian approach for the subject-specific AFT model (1.2) with interval-censored data. A Markov chain Monte Carlo (MCMC) algorithm was developed to fit the model, in which the random effects are assumed normally distributed and i.i.d. and the e_ij are assumed to arise from a distribution

On Hybrid Censored Inverse Lomax Distribution: Application to the Survival Data

In this section, we consider the Bayes procedure to obtain point and interval estimates of the parameters α and β in the presence of hybrid censored data. The Bayes estimators are derived under Jeffreys' non-informative priors with the squared error loss function. The considered priors are improper, but they lead to a proper posterior. Thus, the joint prior is given as:

Statistical Methods For Truncated Survival Data

Truncation is a well-known phenomenon that may be present in observational studies of time-to-event data. For example, autopsy-confirmed survival studies of neurodegenerative diseases are subject to selection bias due to the simultaneous presence of left and right truncation, also known as double truncation. While many methods exist to adjust for either left or right truncation, there are very few methods that adjust for double truncation. When time-to-event data are doubly truncated, the regression coefficient estimators from the standard Cox regression model will be biased. In this dissertation, we develop two novel methods to adjust for double truncation when fitting the Cox regression model. The first method uses a weighted estimating equation approach and assumes the survival and truncation times are independent. The second method relaxes this independence assumption to one of conditional independence between the survival and truncation times. As opposed to methods that ignore truncation, we show that both proposed methods result in consistent and asymptotically normal regression coefficient estimators and have little bias in small samples. We use these proposed methods to assess the effect of cognitive reserve on survival in

Non-parametric survival modelling of time to employment amongst 09/10 cohort of mathematics graduates

Statistical methods for survival data analysis are a branch of mathematics that has continued to flourish in the last two decades. Their historical use in cancer and reliability research, extending to business, criminology, epidemiology, and the social and behavioral sciences, has widened the application of these methods (Lee and Wang, 2003). Moreover, biomedical researchers, consulting statisticians, economists, and epidemiologists now use survival time studies extensively. This study deals with statistical methods for analyzing survival data developed from laboratory studies of animals, clinical and epidemiologic studies of humans, and other appropriate applications (Lee and Wang, 2003).

Multistate recursively imputed survival trees for time-to-event data analysis: an application to AIDS and mortality post-HIV infection data

We adapted two newly developed data mining techniques (RSF and RIST) for multistate models (MSRSF and MSRIST) to identify important risk factors at two different stages of the disease. Several studies have confirmed RSF's promising performance in survival analysis compared with the traditional Cox proportional hazards model [26, 27, 41]. Zhu and Kosorok [24] also showed that RIST outperforms RSF and the Cox model in classical survival data settings (with just one event of interest), and they provided a detailed discussion of why RIST works. In the present study, it was also shown that the proposed method based on RIST works in multistate data analysis as well. The usual multistate regression methods are
Table 2: Integrated Brier score (IBS) and C-index values for three methods (Cox, MSRIST and MSRSF) over 500 repetitions
