Predictive models

A systematic review of predictive models for asthma development in children

after review of titles and abstracts, underwent full-text review. Of those fully reviewed, 32 references describing 30 predictive models met the inclusion criteria and are discussed in this paper. The other 42 were excluded because they did not primarily focus on predictive models for asthma development in children that include ≥2 attributes. The included articles cover only studies on predictive models; no systematic reviews or randomized controlled trials were found. In this section, we describe the state of the art of predictive models for asthma development in children. A summary of the predictive models for asthma development in children is given in Table 1. Our narrative description and the content of Table 1 are based on article details extracted into the data abstraction spreadsheet, with additional information added for context. For question Q3, used for assessing article quality, the answer is "no" for

Building Predictive Models For Data Mining Projects

Abstract: This paper focuses on building predictive models for data mining projects and knowledge discovery. The objectives are 1) data selection and transformation, 2) generation of prediction models using classification data mining techniques, 3) identification of the attributes that affect student retention and performance, and 4) comparison of the accuracy of the classification techniques used in the prediction models. The study used a dataset of students enrolled in the BS Computer Engineering program. Decision tree classifiers such as ID3, J48 and CART were used to build the models. Results showed that when attribute evaluation was conducted using WEKA (Waikato Environment for Knowledge Analysis), the College Entrance Test (CET) had the highest significance among the identified attributes in predicting student retention and performance, while J48 achieved the highest accuracy when classifying instances. However, further research should investigate other factors that influence retention and performance, and include other programs in the University, to improve the accuracy of the classification results.
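The attribute-ranking step described above can be sketched with an information-gain calculation, the splitting criterion behind ID3-style decision trees. The data below is a hypothetical toy set, not the paper's CET dataset, and WEKA's evaluator is not reproduced:

```python
# Sketch: ID3-style attribute evaluation by information gain.
# The dataset is hypothetical; attribute 0 stands in for a CET-like score.
from collections import Counter
from math import log2

def entropy(labels):
    """Shannon entropy of a label sequence, in bits."""
    n = len(labels)
    return -sum((c / n) * log2(c / n) for c in Counter(labels).values())

def information_gain(rows, labels, attr_index):
    """Entropy reduction from splitting on one attribute."""
    base = entropy(labels)
    groups = {}
    for row, label in zip(rows, labels):
        groups.setdefault(row[attr_index], []).append(label)
    remainder = sum(len(g) / len(labels) * entropy(g) for g in groups.values())
    return base - remainder

# Toy data: (score_band, gender) -> retained?
rows = [("high", "f"), ("high", "m"), ("low", "f"), ("low", "m")]
labels = ["yes", "yes", "no", "no"]
gains = [information_gain(rows, labels, i) for i in range(2)]
# Attribute 0 separates the classes perfectly, so its gain is 1 bit;
# attribute 1 carries no information, so its gain is 0.
```

An evaluator like WEKA's ranks attributes by exactly this kind of score and selects the highest.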

DALEX: Explainers for Complex Predictive Models in R

This paper describes a consistent collection of explainers for predictive models, a.k.a. black boxes. Each explainer is a technique for exploring a black-box model. The presented approaches are model-agnostic, which means that they extract useful information from any predictive method irrespective of its internal structure. Each explainer is linked to a specific aspect of a model: some are useful for decomposing predictions, some serve better for understanding performance, while others help in understanding the importance and conditional response of a particular variable.
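As a rough illustration of what a model-agnostic explainer can do, the sketch below computes a permutation-style variable importance for an arbitrary predict function. It is a minimal stand-in for the idea, not the DALEX API:

```python
# Sketch of a model-agnostic explainer: permutation-style variable
# importance. Only a predict function is needed, so the technique
# applies to any black-box model regardless of internal structure.
import random

def mse(pred, y):
    return sum((p - t) ** 2 for p, t in zip(pred, y)) / len(y)

def permutation_importance(predict, X, y, feature, seed=0):
    """Loss increase after shuffling one feature column."""
    base = mse(predict(X), y)
    rng = random.Random(seed)
    col = [row[feature] for row in X]
    rng.shuffle(col)
    X_perm = [row[:feature] + [v] + row[feature + 1:]
              for row, v in zip(X, col)]
    return mse(predict(X_perm), y) - base

def black_box(X):
    """A hypothetical black box that in truth uses only feature 0."""
    return [2.0 * row[0] for row in X]

X = [[float(i), float(i % 3)] for i in range(30)]
y = black_box(X)
imp0 = permutation_importance(black_box, X, y, 0)
imp1 = permutation_importance(black_box, X, y, 1)
# Shuffling the used feature hurts the loss; shuffling the unused one
# leaves predictions unchanged, so its importance is zero.
```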

Learning Instance-Specific Predictive Models

Typically, methods that learn predictive models from data, including those that learn BN models, perform model selection. In model selection a single model is selected that summarizes the data well; it is then used to make future predictions. However, given finite data, there is uncertainty in choosing one model to the exclusion of all others, and this can be especially problematic when the selected model is one of several distinct models that all summarize the data more or less equally well. A coherent approach to dealing with the uncertainty in model selection is Bayesian model averaging (BMA) (Hoeting et al., 1999). BMA is the standard Bayesian approach wherein the prediction is obtained from a weighted average of the predictions of a set of models, with more probable models influencing the prediction more than less probable ones. In practical situations, the number of models to be considered is enormous and averaging the predictions over all of them is infeasible. A pragmatic approach is to average over a few good models, termed selective Bayesian model averaging, which serves to approximate the prediction obtained from averaging over all models. The instance-specific method that we describe in this paper performs selective BMA over a set of models that have been selected in an instance-specific fashion.
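The weighted-average step of selective BMA can be sketched in a few lines; the predictions and posterior probabilities below are hypothetical:

```python
# Sketch: selective Bayesian model averaging. Each selected model
# contributes its prediction weighted by its posterior probability;
# weights are renormalised over the selected subset.

def selective_bma(predictions, posteriors):
    """Posterior-weighted average over a selected set of models."""
    total = sum(posteriors)
    weights = [p / total for p in posteriors]
    return sum(w * pred for w, pred in zip(weights, predictions))

# Three selected models predicting P(outcome = positive):
preds = [0.9, 0.8, 0.2]
posts = [0.5, 0.3, 0.2]   # model posteriors; here they already sum to 1
avg = selective_bma(preds, posts)
# 0.5*0.9 + 0.3*0.8 + 0.2*0.2 = 0.73: the more probable models
# dominate, as the text describes.
```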

When are purely predictive models best?

Return now to the issue of model error – the second great conceptual challenge to weather forecasting noted earlier. Recall: even minuscule errors in a model can lead to quickly escalating errors in that model’s predictions. So how can this danger be averted, given that weather forecasters’ current models are clearly not literally true and therefore are in error in the relevant sense? A purely theoretical solution is unavailable. The answer instead is a further advertisement for brute emphasis on predictive accuracy above all else. Simply put, many different versions of a model, including different stochastic adjustments, are tested against the empirical data. The ones selected are those that, as a matter of fact, predict the best. This method has proven effective at avoiding the problem of model error – simply pick those models whose errors of representation turn out not to lead to errors of prediction.

Predictive Models for Min-Entropy Estimation

This means that the strategy of applying a large set of very different predictors to the sequence of noise source samples, and taking the minimum estimate, is workable. If five predictors whose underlying models are very bad fits for the noise source are used alongside one predictor whose underlying model fits the source's behavior well, the predictor that fits well will determine the entropy estimate; predictors that are a bad fit will never give large systematic underestimates. Further, this means that it's reasonable to include many different predictors: adding one more predictor seldom does any harm, and occasionally will make the entropy estimate much more accurate.
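A minimal sketch of this minimum-over-predictors strategy, assuming each predictor's estimate is simply -log2 of its observed success rate (the confidence-interval adjustments used in practice are omitted):

```python
# Sketch: min-entropy estimation from a set of predictors. Each
# predictor's estimate is -log2 of its rate of correctly guessing the
# next sample; the final estimate takes the minimum, so one
# well-fitting predictor dominates the poorly fitting ones.
from math import log2

def predictor_estimate(hits, trials):
    """Per-sample min-entropy estimate implied by a predictor's accuracy."""
    p = hits / trials
    return -log2(p) if p > 0 else float("inf")

def min_entropy_estimate(results):
    """Take the minimum estimate over all predictors."""
    return min(predictor_estimate(h, t) for h, t in results)

# Five poorly fitting predictors and one that models the source well:
results = [(10, 1000)] * 5 + [(500, 1000)]   # (correct guesses, trials)
est = min_entropy_estimate(results)
# The well-fitting predictor (p = 0.5) gives -log2(0.5) = 1 bit,
# far below the ~6.6 bits the bad predictors would each imply.
```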

Estimating Connecticut Stream Temperatures Using Predictive Models

The purpose of this project was to create a predictive model of Connecticut stream temperatures based upon physical parameters and then classify the streams into thermal regimes (cold, cold transitional, warm transitional and warm water). The Connecticut Department of Energy and Environmental Protection (CT DEEP) has been interested in classifying streams using the Lyons et al. (2009) thermal regimes established for Michigan and Wisconsin; this classification system is depicted in Table 1: Lyons Thermal Regime Classifications (taken from Lyons et al. (2009)). The thermal regime of particular concern is cool water streams, which is not recognized as a major management category even though walleye and northern pike, both major game fish, are classified as cool water species. The misclassification of streams could also lead to missed opportunities to establish and expand fisheries: for example, trout can still survive in cool water streams, but if a cool water stream is grouped with warm water streams, opportunities to expand trout fisheries may be overlooked (Ibid).
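The regime-assignment step can be sketched as a threshold classification. The temperature cutoffs below are hypothetical placeholders, not the actual Lyons et al. (2009) Table 1 values:

```python
# Sketch: classifying a stream into a thermal regime from a predicted
# summer temperature. The cutoffs are hypothetical placeholders only.
CUTOFFS = [          # (upper bound in deg C, regime label)
    (17.0, "cold"),
    (20.0, "cold transitional"),
    (24.0, "warm transitional"),
]

def classify_regime(temp_c):
    """Map a predicted stream temperature to a thermal regime."""
    for bound, label in CUTOFFS:
        if temp_c < bound:
            return label
    return "warm"

labels = [classify_regime(t) for t in (15.0, 19.0, 22.0, 27.0)]
# One predicted temperature per regime, coldest to warmest.
```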

Implementation of Predictive Models and Text Classification on Twitter Data

depends on each and every word being classified; the trained model is then applied to new words to classify them. 3.4 Forecast Sentiments using Regression Models: once sentiments are obtained from the Twitter data set, future sentiments (positive, negative and neutral) are forecast by applying regression methods to the numerical values of the sentiments.


Examining characteristics of predictive models with imbalanced big data

The use of average values across repetitions of model building statistically strengthens the scores assigned to the models. In addition, to demonstrate statistical significance of the observed experimental results, a hypothesis test is performed using ANOVA [58], with post hoc analysis via Tukey's HSD test [59]. ANOVA is a statistical test determining whether the means across levels of one or several independent variables (or factors) differ significantly. Six rows of two-way ANOVA analysis are shown in Table 4, i.e. three rows for the ECBDL'14 case study and three for the POST case study. The two factors in the table are the number of selected features (referred to as F in the tables) generated by FI, and the sampling class distribution ratios (referred to as R in the tables) generated by RUS. We investigated the intersection of both factors to learn how they affect the respective learner (GBT, RF, LR). If the p-value in the ANOVA table is less than or equal to a certain level (0.05), the associated factor is significant. A 95% confidence level (α = 0.05) is the most commonly used value for ANOVA and other statistical tests.
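For illustration, the F statistic underlying such a test can be computed directly. The sketch below shows a one-way ANOVA (the paper uses a two-way design) on hypothetical scores for one factor:

```python
# Sketch: the F statistic behind a one-way ANOVA. The p-value would
# come from the F distribution with (k-1, n-k) degrees of freedom;
# that lookup is omitted here.

def one_way_f(groups):
    """Between-group mean square over within-group mean square."""
    all_vals = [v for g in groups for v in g]
    n, k = len(all_vals), len(groups)
    grand = sum(all_vals) / n
    ss_between = sum(len(g) * (sum(g) / len(g) - grand) ** 2 for g in groups)
    ss_within = sum((v - sum(g) / len(g)) ** 2 for g in groups for v in g)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

# Hypothetical repeated scores for a learner at three feature counts:
groups = [[0.70, 0.71, 0.69], [0.75, 0.76, 0.74], [0.80, 0.81, 0.79]]
f_stat = one_way_f(groups)
# Clearly separated group means give a large F; if the corresponding
# p-value were <= 0.05, the factor would be declared significant.
```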

Predictive Models for Disaggregate Stock Market Volatility

new forecasting models. Nevertheless, in some cases, the results under the rolling estimation display more significant model differences. For example, the augmented model of the Fin3 (Trading) industry outperforms the AR(6) regressions in the same periods, except for the recursive scheme during the 1972-2012 and 1972-2002 periods, which cover the turbulent times during the 1970s. Meanwhile, the augmented models improve the reliability of predictions in each sub-period, by adding the factors of industrial production growth and inflation rate only under the recursive scheme. Under the rolling estimation, the forecast sample is limited to 240 months (20 years) of data throughout the whole sample period, while under the recursive scheme, the forecast sample grows continually one step ahead. Hence, the forecasts under the recursive scheme might become less volatile and superior in the out-of-sample performance as the sample size increases. This intuition is pronounced in many industries, including Utils2 (Utilities), Medeq3 (Medical Equipment) and Cnstr3 (Construction). Such prominence could be due to the greater volatility of stock return in those industries.
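The two estimation schemes differ only in how the fitting sample is chosen at each forecast date, which a short sketch makes concrete (the 240-month width matches the text; the series itself is a placeholder):

```python
# Sketch: rolling vs recursive estimation windows. Each function
# returns the sample used to fit the model for the forecast at time t.

def rolling_window(series, t, width=240):
    """Fixed-width estimation sample ending just before t."""
    return series[max(0, t - width):t]

def recursive_window(series, t):
    """Expanding estimation sample: everything observed before t."""
    return series[:t]

series = list(range(600))            # 50 years of hypothetical months
roll = rolling_window(series, 500)   # always 240 observations
rec = recursive_window(series, 500)  # grows as t advances
# The recursive sample keeps accumulating data, which is why its
# forecasts can become less volatile as the sample size increases.
```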

Efficacy Assessments of Z-Score and Operating Cash Flow Insolvency Predictive Models

mies in all aspects. Nigeria was one of the prospering economies of the world during the 1970s; thus [1] classified her as an "emerging economy" with large territories, consumer markets and growing populations. Nigeria, helped by the oil boom of the 1970s, was undertaking extraordinary developmental projects that called for new infrastructure, such as power-generating plants, huge electrification of the entire country, construction of large networks of roads, provision of improved educational facilities, harbouring flourishing corporate bodies and carrying out massive investment in telecommunications. These developments caused increased demand for consumer goods, social goods and capital equipment. Nigeria vigorously pursued economic policies leading to faster growth and expanding trade and investment with the rest of the world. The International Trade Administration cited Nigeria, South Africa, Brazil, Turkey, India, and Malaysia as emerging economies/markets, but by the year 2000 and thereafter, things were not the same anymore. Between 1995 and 1998, according to [2], the number of commercial banks declined from 64 to 51, while merchant banks declined from 54 in 1991 to 38 in 1998. It is known that over two thousand companies have been delisted and many products or services of existing companies have disappeared from the markets, prompting Nigerians to ask for the reasons for such disappearances. Consumers are alarmed as to what has happened. Policy makers and professionals are deeply concerned about the increasing corporate failures in recent years. Investors are worried about the efficacy of tools for early warning signals of insolvency in the affairs of corporate bodies in the country. These have made it necessary to investigate the efficacy of the Z-Score and Operating Cash Flow models as tools for assessing early warning signals of corporate liquidation in Nigeria.
It is also pertinent that there is increasing economic meltdown in many "credit" economies, where the efficacy of the Z-Score and Operating Cash Flow insolvency models has been tested, which enhances the need to examine their efficacy in a "cash" economy. Furthermore, as Nigeria is planning to transition into a "credit" economy, scholars and researchers with a profound interest in the affairs of the country need to investigate and document the efficacy of the Z-Score insolvency prediction model, adapted for "credit" economies, and compare it with the Operating Cash Flow insolvency prediction model, which is adapted to "cash" economies, in readiness to analyse corporate dynamics when the transition to a credit economy is completed.

Predictive Models in Brain Connectivity Analysis

Evaluating the models, the linear discriminant technique had good performance using the optimal prior, but it broke down in the cross-validation procedure, dropping from 88.28 to 49.08 percent accuracy. For this reason, it was the first of the three gender-modelling techniques to be discarded. Random Forest also performed well with mtry held constant, and a little better when the parameter was dynamic, going from 72.80 to 73.02 percent accuracy. It was the most robust technique, allowing gender to be modelled using over a thousand predictors. Results with more than 200 predictors were not included because they did not much affect the accuracy rate.
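The cross-validation procedure that exposed the accuracy drop can be sketched generically. The classifier below is a trivial majority-class stand-in, not the linear discriminant used in the study:

```python
# Sketch: k-fold cross-validation of an accuracy estimate. The model
# is refit on each training split and scored only on held-out data,
# which is what reveals overfitting.
from collections import Counter

def cross_val_accuracy(X, y, k=5):
    """Mean held-out accuracy of a majority-class classifier."""
    scores = []
    for fold in range(k):
        test_idx = set(range(fold, len(X), k))
        train_y = [y[i] for i in range(len(y)) if i not in test_idx]
        majority = Counter(train_y).most_common(1)[0][0]
        test_y = [y[i] for i in sorted(test_idx)]
        scores.append(sum(t == majority for t in test_y) / len(test_y))
    return sum(scores) / k

y = ["f"] * 60 + ["m"] * 40          # hypothetical 60/40 label split
acc = cross_val_accuracy(list(range(100)), y)
# A majority-class baseline scores about 0.60 here. A model whose
# in-sample accuracy sits far above its cross-validated accuracy is
# overfitting, which is what a drop like 88.28 -> 49.08 suggests.
```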

Multi-method approach to wellness predictive modeling

Patient wellness and preventative care are increasingly becoming a concern for many patients, employers, and healthcare professionals. The federal government has increased spending for wellness alongside new legislation which gives employers and insurance providers some new tools for encouraging preventative care. Not all preventative care and wellness programs produce net positive savings, however. Our research attempts to create a patient wellness score which integrates many lifestyle components and a holistic patient perspective. Using a large comprehensive survey conducted by the Centers for Disease Control and Prevention, models are built combining both medical professional input and machine learning algorithms. Models are compared, and 8 out of 9 models are shown to have a statistically significant (p = 0.05) increase in area under the receiver operating characteristic curve when using the hybrid approach compared to expert-only models. Models are then aggregated and linearly transformed for patient-friendly output. The resulting predictive models provide patients and healthcare providers a comprehensive numerical assessment of a patient's health, which may be used to track patient wellness so as to help maintain or improve their current condition.
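The area under the receiver operating characteristic curve (AUC) used to compare the models can be computed from pairwise ranks; the scores and labels below are hypothetical:

```python
# Sketch: AUC via the rank (Mann-Whitney U) formulation, which equals
# the probability that a randomly chosen positive case scores above a
# randomly chosen negative case.

def auc(scores, labels):
    """AUC by pairwise comparison; ties count one half."""
    pos = [s for s, l in zip(scores, labels) if l == 1]
    neg = [s for s, l in zip(scores, labels) if l == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

scores = [0.9, 0.8, 0.7, 0.4, 0.3, 0.2]   # hypothetical wellness scores
labels = [1,   1,   0,   1,   0,   0]     # 1 = outcome present
value = auc(scores, labels)
# Positives outrank negatives in 8 of the 9 pairs, so AUC = 8/9.
```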

Predicting the outcomes of traumatic brain injury using accurate and dynamic predictive model

Various types of predictive models have been designed for prediction, but these models have limitations and drawbacks. Existing predictive models are not well established for TBI patients and give unsatisfactory results because multi-class prediction is unavailable; multi-class prediction is very significant for improving the performance of predictive models for TBI outcomes. Different types of predictive models are used to provide classifications and predictions, such as Artificial Neural Network (ANN), AdaBoost, Support Vector Machine (SVM), Logistic Regression (LR), Bayesian Network (BN), Decision Tree (DT) and Discriminant Analysis (DA) [9, 10]. There is thus a need to develop a new predictive model that improves on the predictive performance of existing models. Another issue in TBI predictive modelling is the use of affinity: affinity has not been used in TBI to develop and provide multi-class prediction. In addition, the features from existing TBI predictive models need to be evaluated and approved by neurology experts for better predictive performance.

The impact of contextual factors on the predicted bulk water pipe repair times in Wellington City : a thesis presented in partial fulfilment of the requirements for the degree of Master in Emergency Management at Massey University, Wellington, New Zealand

crossover of international and Christchurch pipe classes, there were not many pipe classes that obtained regression lines. Of those that received a regression line, most had very low R-squared values, with only three of the eight having values higher than 0.7. The Sherson (2015) and Sherson et al. (2015) reports explain that these poor relationships are a result of 1) the uniqueness of the Christchurch earthquakes, where damage was much higher than expected and thus did not combine well with the international plots, and 2) the use of instrumental MMI instead of actual MMI, which did not register the influence of liquefaction, creating flat relationships between break rate and MMI. Interestingly, O'Rourke et al. (2014) produced much better R-squared values with the same information. However, these regression lines either did not incorporate any data outside of Christchurch or included only a small amount of such data. For example, Fig 5 (pp 11) added only three plots from Christchurch to a large pool of international data, and Fig 5 (pp 11) included only one plot from the Northridge earthquake alongside the Christchurch data. Thus, a significant portion of information was missed. Overall, both studies provide an excellent understanding of how water pipes can be damaged by an earthquake, owing to the depth of investigation and the questions they ask. Many of the limitations and difficulties that arose also add insight into the factors that need to be considered when mapping damage, such as localised geological effects. However, the models only look at the physical damage of one lifeline, missing the importance of including other lifelines in the calculations, as many of the other local predictive models do.

Min-redundancy and max-relevance multi-view feature selection for predicting ovarian cancer survival using multi-omics data

Developing multi-omics data-driven machine learning models for predicting clinical outcome, including cancer survival, is a promising cost-effective computational approach. However, the heterogeneity and extreme high-dimensionality of omics data present significant methodological challenges in applying the state-of-the-art machine learning algorithms to training such models from multi-omics data. In this paper, we have described, to the best of our knowledge, the first attempt at applying multi-view feature selection to address these challenges. We have introduced a two-stage feature selection framework that can be easily customized to instantiate a variety of approaches to integrative analyses and predictive modeling from multi-omics data. We have proposed MRMR-mv, a novel maximum relevance and minimum redundancy based multi-view feature selection algorithm. We have applied the resulting framework and algorithm to build predictive models for ovarian cancer survival using multi-omics data derived from the Cancer Genome Atlas (TCGA).
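A generic greedy mRMR loop (not the paper's multi-view MRMR-mv algorithm) can be sketched with absolute Pearson correlation standing in for both relevance and redundancy; the data is a toy example:

```python
# Sketch: greedy minimum-redundancy-maximum-relevance (mRMR) feature
# selection. Relevance is |corr(feature, outcome)|; redundancy is the
# mean |corr| with features already selected.
from math import sqrt

def corr(a, b):
    """Pearson correlation of two equal-length sequences."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    va = sqrt(sum((x - ma) ** 2 for x in a))
    vb = sqrt(sum((y - mb) ** 2 for y in b))
    return cov / (va * vb) if va and vb else 0.0

def mrmr(features, target, k):
    """Pick k features maximising relevance minus mean redundancy."""
    selected = []
    while len(selected) < k:
        best, best_score = None, float("-inf")
        for name, col in features.items():
            if name in selected:
                continue
            relevance = abs(corr(col, target))
            redundancy = (sum(abs(corr(col, features[s])) for s in selected)
                          / len(selected)) if selected else 0.0
            if relevance - redundancy > best_score:
                best, best_score = name, relevance - redundancy
        selected.append(best)
    return selected

# Toy data: g1 and g2 are near-duplicates; g3 is an independent signal.
g1 = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0]
g2 = [1.1, 2.0, 2.9, 4.2, 5.0, 6.1]
g3 = [1.0, -1.0, 1.0, -1.0, 1.0, -1.0]
target = [x + z for x, z in zip(g1, g3)]
picked = mrmr({"g1": g1, "g2": g2, "g3": g3}, target, 2)
# mRMR keeps one of the redundant pair and adds the independent g3.
```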

A comparative study of continuum and structural modelling approaches to simulate bone adaptation in the pelvic construct

The base FE model used in this study was generated from a tetrahedral mesh previously developed by the authors [30]. The volumetric mesh of the pelvis was generated in Mimics (Materialise, Leuven, Belgium) from a CT scan (399 × 3mm thick slices, 512 × 512 pixels, 0.91mm/pixel) of a cadaveric pelvis and lower limbs (Male, age: 55, weight: 94.3 kg, height: 188 cm) provided by the Royal British Legion Centre for Blast Injury Studies at Imperial College London, to which the volunteer was height and weight matched. The base FE model consisted of 377,362 tetrahedral elements with an average edge length of 3.76 mm. In the present study, four predictive models of the pelvic construct were developed: two purely continuum models with orthotropic and isotropic material properties, and two hybrid models where trabecular and cortical bone were differentiated using continuum and shell elements respectively, while assigning orthotropic and isotropic material properties to elements representing trabecular bone (Table 1). In addition, a structural model previously developed by the authors [30] was included for comparison.

A STUDY OF PREDICTIVE ANALYTICS AND ITS MODELING TECHNIQUES

Predictive analytics is the enhancement of business intelligence and data discovery that predicts the future using statistical methods on data, and it also works beyond the complexity limits of many OLAP implementations. Predictive analytics not only answers what is likely to happen next and what to do next; it also predicts how and when to act, and can explain what-if scenarios for making better decisions. By implementing predictive analytics, organizations can gain the competitive advantage of predicting the future beforehand. Predictive analytics helps to increase sales by enabling organizations to anticipate customers' needs and purchasing habits, which results in reduced inventory cost, increased profit and spotting fraudulent purchases at the right time. By applying combined knowledge of business and statistical methods to business data, predictive analytics produces insights that organizations use to understand how people will behave as distributors, buyers, sellers and customers. The insights produced by multiple predictive models are used by senior management for making strategic decisions. Without the correct tools and techniques, implementing predictive analytics is hard for organizations; it is important to know which technique to use, when to use it, and on which data. In this paper, the different types of predictive analytics models, the stages in building models, and the types of algorithms and methodologies used for building models are discussed.

The Historical Significance of Textual Distances

actually conforms to those priors more closely. Moreover, it provides surprises that retrospectively make sense (e.g. “Bildungsroman” is close to “Juvenile” fiction.) Evidence of this kind tends to demonstrate that predictive models provide reliable guidance about the relationships between genres. One can have roughly the same amount of confidence in time-adjusted topic vectors (the method described in 3.2.3). The value of the other three measures of textual distance is more dubious. On inspection, it seems likely that those distance matrices are dominated by a chronological gradient that doesn’t correspond to readers’ intuitions about the relationships between categories. In other words, the gap between the top two methods in Figure 1, and the bottom three methods, may be bigger than it appears.

A predictive algorithm for wetlands in deep time paleoclimate models

Similarly, poor predictive models were obtained whether derived for all sites or just those restricted to specific plant functional types. These outcomes likely occur because linear regression optimises a function by minimising the error between predicted and observed values. As most grid cells have FW ≈ 0 (Fig. 1), the "best" regression equation is one that predicts FW to be very low almost everywhere, since in the majority of cases this is quite accurate. Efforts were made to use other optimisation criteria, with customised functions that attempted to put more weight on predicting high FW correctly at the expense of larger errors where FW is low. However, these simply over-predicted FW. Therefore, we were unable to find any satisfactory solution based on linear regression. The fact that we did not find a satisfactory regression equation for FW on the reference data suggests that any relationship between FW and the environmental…
