Top PDF Linear Regression Using R: An Introduction to Data Modeling

The first step in developing the multi-factor regression model is to identify all possible predictors that we could include in the model. To the novice model developer, it may seem that we should include all factors available in the data as predictors, because more information is likely to be better than not enough information. However, a good regression model explains the relationship between a system's inputs and output as simply as possible. Thus, we should use the smallest number of predictors necessary to provide good predictions. Furthermore, using too many or redundant predictors builds the random noise in the data into the model. In this situation, we obtain an over-fitted model that is very good at predicting the outputs from the specific input data set used to train the model. It does not accurately model the overall system's response, though, and it will not appropriately predict the system output for a broader range of inputs than those on which it was trained. Redundant or unnecessary predictors can also lead to numerical instabilities when computing the coefficients.
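The last point, that redundant predictors cause numerical instability, can be made concrete with a short sketch (Python here, though the book works in R; the data are invented): duplicating a predictor column makes the normal-equations matrix X'X exactly singular, so the coefficients cannot be computed uniquely.

```python
def det(A):
    """Determinant via Gaussian elimination with partial pivoting."""
    M = [row[:] for row in A]
    n = len(M)
    d = 1.0
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        if abs(M[p][c]) < 1e-12:
            return 0.0  # a zero pivot: the matrix is singular
        if p != c:
            M[c], M[p] = M[p], M[c]
            d = -d
        d *= M[c][c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            M[r] = [a - f * b for a, b in zip(M[r], M[c])]
    return d

x = [1.0, 2.0, 3.0, 4.0, 5.0]
# Design matrix with an intercept, x, and a redundant copy of x.
X = [[1.0, xi, xi] for xi in x]
k = len(X[0])
XtX = [[sum(row[a] * row[b] for row in X) for b in range(k)] for a in range(k)]
print(abs(det(XtX)) < 1e-6)  # True: X'X is singular, coefficients not unique
```

In practice a solver does not fail this cleanly: near-duplicate predictors give a nearly singular X'X, and the computed coefficients become huge and unstable instead of obviously undefined.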

Using Baseball Data as a Gentle Introduction to Teaching Linear Regression

To improve the ability to "explain" the variation in the team's winning percentage, the concept of multiple linear regression is introduced, noting that adding new predictor variables can increase the R² value; that is, one can do a better job of explaining the variation in the team's winning percentage. It is also emphasized that we must be frugal in adding predictor variables, as many predictors in a single model can make the model difficult to interpret. In the context of our baseball example, Payroll is removed and five new predictor variables are introduced. Table 2 shows the new data set.
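The claim that adding a predictor can only increase R² is easy to verify numerically. The sketch below uses invented baseball-style numbers (not the paper's Table 2: y is a winning percentage, x1 a run differential per game, x2 a team ERA, all hypothetical) and plain Python rather than a statistics package:

```python
def solve(A, b):
    """Solve A m = b by Gauss-Jordan elimination with partial pivoting."""
    n = len(A)
    M = [A[i][:] + [b[i]] for i in range(n)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        M[c] = [v / M[c][c] for v in M[c]]
        for r in range(n):
            if r != c:
                f = M[r][c]
                M[r] = [a - f * b_ for a, b_ in zip(M[r], M[c])]
    return [M[i][n] for i in range(n)]

def r_squared(X, y):
    """Fit OLS via the normal equations and return R^2 = 1 - SSE/SST."""
    k = len(X[0])
    XtX = [[sum(r[a] * r[b] for r in X) for b in range(k)] for a in range(k)]
    Xty = [sum(r[a] * yi for r, yi in zip(X, y)) for a in range(k)]
    beta = solve(XtX, Xty)
    yhat = [sum(b * v for b, v in zip(beta, row)) for row in X]
    ybar = sum(y) / len(y)
    sse = sum((yi - yh) ** 2 for yi, yh in zip(y, yhat))
    sst = sum((yi - ybar) ** 2 for yi in y)
    return 1 - sse / sst

y  = [0.55, 0.60, 0.45, 0.50, 0.65, 0.40]   # winning percentage (invented)
x1 = [0.4, 0.8, -0.3, 0.1, 1.0, -0.6]       # run differential per game
x2 = [3.9, 3.5, 4.4, 4.1, 3.2, 4.8]         # team ERA

r2_one = r_squared([[1.0, a] for a in x1], y)
r2_two = r_squared([[1.0, a, b] for a, b in zip(x1, x2)], y)
print(r2_one <= r2_two + 1e-9)  # True: R^2 never drops when a predictor is added
```

This monotonicity is exactly why the excerpt urges frugality: a rising R² alone never justifies an extra predictor.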

Modeling and Prediction of Changes in Anzali Pond Using Multiple Linear Regression and Neural Network

Abstract: Iranian ponds and water ecosystems are valuable assets which play decisive roles in economic, social, security and political affairs. Within the past few years, many Iranian water ecosystems, such as Urmia Lake, the Karoun River and Anzali Pond, have been under threat of disappearance. Ponds are habitats which cannot be replaced, and this makes it necessary to investigate their changes in order to save these valuable ecosystems. The present research aims to investigate and evaluate the trend of variations in Anzali Pond using meteorological data from 1991-2010 by means of GMDH, a technique based on the genetic algorithm that is powerful for modeling complex dynamic non-linear systems, and the linear regression technique. Input variables of both methods include all factors (inside-system and outside-system factors) which affect variations in Anzali Pond. The accuracy of the linear regression method was 78%, while the accuracy of the GMDH neural network method was more than 97%. As a result, the GMDH neural network method is significantly more accurate than the regression model.

The flare Package for High Dimensional Linear Regression and Precision Matrix Estimation in R

shares the advantage of LAD Lasso and SQRT Lasso; (4) Dantzig selector, which can tolerate missing values in the design matrix and response vector (Candes and Tao, 2007). By adopting the column by column regression scheme, we further extend these regression methods to sparse precision matrix estimation, including: (5) TIGER, which is tuning insensitive (Liu and Wang, 2012); (6) CLIME, which can tolerate missing values in the data matrix (Cai et al., 2011). The developed solver is based on the alternating direction method of multipliers (ADMM), which is further accelerated by a multistage screening approach (Boyd et al., 2011; Liu et al., 2014b). The global convergence result of ADMM has been established in He and Yuan (2015, 2012). The numerical simulations show that the flare package is efficient and can scale up to large problems.

Linear Regression Analysis for Symbolic Interval Data

This paper is organized as follows. Section 2 gives an introduction to symbolic interval data, Model 1 and Model 2. In Section 3, we propose two methods to estimate regression coefficients for symbolic interval data. In Section 4, comparisons of the proposed methods with some existing methods are performed via simulations. In Section 5, we analyze two real datasets with the proposed approaches. Finally, we make some concluding remarks in Section 6.


Modeling of Yarn Strength Utilization in Cotton Woven Fabrics using Multiple Linear Regression

The data in Table III show that 82.36% of the variation in % SU can be explained through Eq. (9). A look at the % contribution values of the individual parameters indicates NL and NT to be the major contributors, followed by % CL and weave or float length. NL and NT together contribute 49.91% of the variation in % SU, while % CL and FL contribute 10.7% and 5.7% respectively. The coefficients for NL and NT are positive in both Eq. (8) and Eq. (9). This suggests that an increase in the number of load-bearing and transverse yarns per cm is reflected in a higher % SU of the fabric.


Comparison of the Prediction Accuracy thru Artificial Neural Networks with Respect to Multiple Linear Regression using R

Model selection is an indispensable step in developing a prediction model or a model for understanding the data. ANNs are structures with mathematical and statistical behavior and the capacity to learn. They are flexible tools that have already proven useful as function estimators, and interest has grown in using them as prediction tools in areas such as Data Mining and Machine Learning. For these purposes, the R language is a flexible, open-source and increasingly used tool in the data science area, with which one can train statistical and artificial intelligence models. It is also easy to use, provides very good performance and does not consume a lot of computational resources.

Review on Weather Forecasting using Linear Regression and SVM in Big Data

D. R. P. Singh clarifies why a cloud-based solution is required, describes a prototype implementation, and examines several case applications that demonstrate individual data ownership, control, and analysis. He addresses these issues by designing and implementing a cloud-based architecture that provides consumers with fast access and fine-grained control over their usage data, as well as the capacity to analyze this data with algorithms of their choosing, including third-party applications that analyze the data in a privacy-preserving fashion.

Regression Models for Count Data in R

In R (R Development Core Team 2008), GLMs are provided by the model fitting functions glm() (Chambers and Hastie 1992) in the stats package and glm.nb() in the MASS package (Venables and Ripley 2002) along with associated methods for diagnostics and inference. Here, we discuss the implementation of hurdle and zero-inflated models in the functions hurdle() and zeroinfl() in the pscl package (Jackman 2008), available from the Comprehensive R Archive Network (CRAN) at http://CRAN.R-project.org/package=pscl. The design of both modeling functions as well as the methods operating on the associated fitted model objects follows that of the base R functionality so that the new software integrates easily into the computational toolbox for modeling count data in R.

Geographically and Temporally Weighted Regression (GTWR) for Modeling Economic Growth using R

To check for risky linear correlation among the explanatory variables, a multicollinearity assumption test is performed by examining the Variance Inflation Factor (VIF) value of each variable. Table 4 shows that the VIF value for each variable is less than 5, both in the combined data and in each individual year.
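A hedged sketch of how such a VIF check can be computed: each predictor is regressed on all the others (with an intercept), and VIF_j = 1/(1 - R²_j). The predictor columns below are invented for illustration, not the paper's data; x3 is deliberately made near-collinear with x1 and x2 so that the rule of thumb (VIF < 5 is acceptable) is violated.

```python
def solve(A, b):
    """Solve A m = b by Gauss-Jordan elimination with partial pivoting."""
    n = len(A)
    M = [A[i][:] + [b[i]] for i in range(n)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        M[c] = [v / M[c][c] for v in M[c]]
        for r in range(n):
            if r != c:
                f = M[r][c]
                M[r] = [a - f * b_ for a, b_ in zip(M[r], M[c])]
    return [M[i][n] for i in range(n)]

def r_squared(X, y):
    """Fit OLS via the normal equations and return R^2 = 1 - SSE/SST."""
    k = len(X[0])
    XtX = [[sum(r[a] * r[b] for r in X) for b in range(k)] for a in range(k)]
    Xty = [sum(r[a] * yi for r, yi in zip(X, y)) for a in range(k)]
    beta = solve(XtX, Xty)
    yhat = [sum(b * v for b, v in zip(beta, row)) for row in X]
    ybar = sum(y) / len(y)
    sse = sum((yi - yh) ** 2 for yi, yh in zip(y, yhat))
    sst = sum((yi - ybar) ** 2 for yi in y)
    return 1 - sse / sst

x1 = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0]
x2 = [-1.0, 1.0, 1.0, -1.0, -1.0, 1.0]
# x3 is almost a linear combination of x1 and x2 -> severe multicollinearity.
noise = [0.05, -0.05, 0.05, -0.05, 0.05, -0.05]
x3 = [0.5 * a + b + e for a, b, e in zip(x1, x2, noise)]

cols = [x1, x2, x3]
vifs = []
for j, col in enumerate(cols):
    others = [c for i, c in enumerate(cols) if i != j]
    X = [[1.0] + [c[t] for c in others] for t in range(len(col))]
    vifs.append(1.0 / (1.0 - r_squared(X, col)))

print([v >= 5 for v in vifs])  # every VIF is inflated: the trio is collinear
```

Dropping x3 from this toy set would bring the remaining VIFs back toward 1, which is the situation the paper's Table 4 reports (all VIFs below 5).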


Linear methods for regression and classification with functional data

But the use of principal components for prediction is heuristic because they are computed independently of the response: the components corresponding to the q largest eigenvalues are not necessarily the q most predictive, but it is difficult to rank an infinite number of components according to R² ...
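The point that the highest-variance components need not be the most predictive can be shown with a tiny constructed example. The two predictors below are exactly uncorrelated, so the principal components coincide with the original axes and PC1 is simply the higher-variance variable; yet the (invented) response depends only on PC2:

```python
def corr(u, v):
    """Sample Pearson correlation of two equal-length sequences."""
    n = len(u)
    mu, mv = sum(u) / n, sum(v) / n
    du = [a - mu for a in u]
    dv = [b - mv for b in v]
    num = sum(a * b for a, b in zip(du, dv))
    den = (sum(a * a for a in du) * sum(b * b for b in dv)) ** 0.5
    return num / den

pc1 = [10.0, -10.0, 10.0, -10.0]   # high variance, unrelated to the response
pc2 = [1.0, 1.0, -1.0, -1.0]       # low variance, drives the response
y   = [1.1, 0.9, -1.0, -1.0]       # invented response, essentially pc2 + noise

print(abs(corr(pc1, y)) < abs(corr(pc2, y)))  # True: PC2 is far more predictive
```

Ranking components by their correlation with the response, rather than by eigenvalue, is exactly the kind of supervised alternative the excerpt says is hard to carry out over infinitely many functional components.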


Modeling Macroeconomic Variables Using Principal Component Analysis and Multiple Linear Regression: The Case of Ghana’s Economy

The paper aimed at modeling the relationship between GDP and 29 macroeconomic variables in Ghana using PCA and multiple linear regression methods. Economic data with 583 data points were collected from January 1990 through May 2018. Seven factors were retained (explaining 74% of the overall variation) after using multiple extraction approaches (scree test, Kaiser criterion, and parallel analysis) to avoid over- and under-extraction. Regression analysis was performed in which component scores were used to develop a relationship between the uncorrelated components and GDP. Closed Economy without Government Activities explicitly contained seven indicators (Consumer Price Index (Food), Consumer Price Index (Non-food), Consumer Price Index (overall), Monetary Policy Rate, 91-Day Treasury Bill, 182-Day Treasury Bill, crude oil, and Core Inflation adjusted for Energy & Utility) and was significant and positively related to GDP (B = 0.6, p < 0.01). Closed Economy with Government Activities explicitly contained two indicators, the tax-equivalent rate on the 28-Day Treasury Bill and the tax-equivalent rate on the 56-Day Treasury Bill, and had a significant

Weather Prediction using Linear Regression & Support Vector Machine vide Big Data

Linear regression here yields a two-dimensional model, evaluated via a confusion matrix, for the case where the data points are linearly separable; the underlying mathematics is the support vector machine equation that follows. The core algorithm used was linear regression, which attempts to predict the high and low temperatures as a linear combination of the features. Since linear regression cannot be used with categorical data, this algorithm did not use the daily weather classification. Accordingly, only eight features were used: the maximum temperature, minimum temperature, mean humidity, and mean atmospheric pressure for each of the past two days. Thus, for the i-th pair of consecutive days, x(i) ∈ R⁹ is a nine-dimensional feature vector, where x₀ = 1 is the intercept term. There are 14 values to be predicted for each pair of consecutive days: the high and low temperatures for each of the following seven days. Let y(i) ∈ R¹⁴ denote the 14-dimensional vector that contains these values for the i-th pair of consecutive days; linear regression, and subsequently support vector machine classification, minimize the error function using:
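The regression step described above can be sketched as follows, with a reduced feature vector (an intercept plus the highs of the previous two days, instead of the full nine features) and invented temperatures. By construction the least-squares fit achieves a squared error no larger than the constant (mean) predictor:

```python
# Invented daily high temperatures; the review's data set is not shown.
highs = [30.0, 31.0, 29.0, 28.0, 32.0, 33.0, 30.0, 29.0, 31.0, 30.0]

def solve(A, b):
    """Solve A m = b by Gauss-Jordan elimination with partial pivoting."""
    n = len(A)
    M = [A[i][:] + [b[i]] for i in range(n)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        M[c] = [v / M[c][c] for v in M[c]]
        for r in range(n):
            if r != c:
                f = M[r][c]
                M[r] = [a - f * b_ for a, b_ in zip(M[r], M[c])]
    return [M[i][n] for i in range(n)]

# Feature vector per example: intercept (x0 = 1), today's high, yesterday's high.
X = [[1.0, highs[t], highs[t - 1]] for t in range(1, len(highs) - 1)]
y = [highs[t + 1] for t in range(1, len(highs) - 1)]  # target: tomorrow's high

k = len(X[0])
XtX = [[sum(row[a] * row[b] for row in X) for b in range(k)] for a in range(k)]
Xty = [sum(row[a] * yi for row, yi in zip(X, y)) for a in range(k)]
beta = solve(XtX, Xty)  # least-squares coefficients

yhat = [sum(b * v for b, v in zip(beta, row)) for row in X]
sse_ols = sum((yi - yh) ** 2 for yi, yh in zip(y, yhat))
ybar = sum(y) / len(y)
sse_mean = sum((yi - ybar) ** 2 for yi in y)
print(sse_ols <= sse_mean + 1e-9)  # True: OLS beats the constant predictor
```

The full model in the review stacks 14 such regressions, one per predicted high or low over the following seven days.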

Analyzing Returns and Pattern of Financial Data Using Log linear Modeling

The stock market always attracts investors to invest money according to their choice, from which large profits can be earned. The fundamental driver behind maximizing profit is the strategy of buying and selling stocks. The buying and selling behaviour of investors is also affected by the turn of the year [Ritter 1988]. It is well documented that at the turn of the year the average ratio of buying to selling is higher in the first 9 days of January than from mid-January to mid-December and in the last 9 days of December. Rozeff and Kinney (1976) also explained the January effect: the average return of stocks is higher in January than in other months. A number of articles discuss the turn-of-the-year effect. Jay R. Ritter (1988) proposed a theory based on tax-loss selling, named "parking the proceeds", to explain the turn-of-the-year effect on NYSE daily returns from 17 Dec 1970 to 16 Dec 1985 using the t-statistic. Barber and Odean (2008) tested a hypothesis based on attention-grabbing stocks. These statistical tests confirm that the behaviour of individuals and institutions differs when buying and selling

Improving Fodder Biomass Modeling in The Sahelian Zone of Niger Using the Multiple Linear Regression Method

The model's biophysical data are derived from the NDVI of SPOT-VEGETATION; the agro-meteorological input data are the rainfall measured at all the rainfall stations in the country, the rainfall estimated by the FEWSNET RFE2 satellite, and potential evapotranspiration (ETP) from the European Center for Medium-Range Weather Forecasts (ECMWF). The VAST computer program and the Agrometshell (AMS) software were used to generate the explanatory variables. The explained (dependent) variable is the fodder yield measured at the ground control sites by the MEIA; the statistical analyses were carried out with the statistical processing software SAS JMP. The statistical processing is subdivided into seven stages: (i) elimination of unnecessary variables; (ii) selection of the most relevant variables; (iii) comprehensive model search (2^k possible models); (iv) selection of the

Linear regression for data having multicollinearity, heteroscedasticity and outliers

challenging to the regression user. On the one hand, extra effort is often required to find appropriate methods that handle these problems in order to obtain better parameter estimates. In this thesis the wild bootstrap techniques proposed by Wu (1986) and Liu et al. (1988), which are efficient both under homoscedasticity and under heteroscedasticity of unknown form, will be used to estimate the model parameters. These wild bootstrap methods are based on OLS, however, and hence the estimator can be affected by the presence of outliers. Under this scenario the robust wild bootstrap methods, which are resistant to outliers, are introduced. The robust wild bootstrap methods introduced by Rana et al. (2012) are based on the MM-estimator, and our investigation revealed that these techniques are resistant to residual outliers but not to leverage points (Simpson, 1995a). This motivates the introduction of a new robust wild bootstrap technique that is not sensitive to high leverage points. Based on this understanding of the limitations of the robust wild bootstrap methods, the GM-estimator, described by Wilcox (2005) and Andersen (2008) as a highly efficient and bounded-influence estimator, will be considered in this study. The GM-estimator will be applied in two different techniques. The first technique involves the GM-estimator based on the initial estimate of the highly efficient, high-breakdown-point MM-estimator, while the second involves the GM-estimator based on the initial estimate of the highly efficient, high-breakdown-point S-estimator. Moreover, the robust wild bootstrap techniques are resistant to residual outliers and heteroscedasticity but not to multicollinearity.
This is a further challenge faced by the robust wild bootstrap, and no work in the literature addresses the combined problems of multicollinearity and heteroscedasticity in the presence of residual outliers and high leverage points using the wild bootstrap approach. The multicollinearity diagnostic methods of PC and PLS, together with the bounded-influence GM-estimator introduced by Krasker and Welsch (1982), will be explored.
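The basic (non-robust) wild bootstrap of Wu (1986) and Liu et al. (1988) that the thesis builds on can be sketched as follows: keep the design fixed, perturb each fitted residual by an independent Rademacher (+1/-1) weight, refit, and use the spread of the refitted coefficients as a standard-error estimate that remains valid under heteroscedasticity. The data and simple-regression model below are invented, and this sketch omits the robust MM/GM refinements the thesis develops:

```python
import random

def fit_line(x, y):
    """OLS intercept and slope for simple linear regression."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    slope = sxy / sxx
    return my - slope * mx, slope

random.seed(1)
x = [float(i) for i in range(1, 21)]
# Heteroscedastic toy data: the noise variance grows with x.
y = [2.0 + 0.5 * xi + random.gauss(0.0, 0.1 * xi) for xi in x]

a, b = fit_line(x, y)
resid = [yi - (a + b * xi) for xi, yi in zip(x, y)]

# Wild bootstrap: fixed design, each residual flipped by a Rademacher weight.
slopes = []
for _ in range(500):
    ystar = [a + b * xi + ri * random.choice((-1.0, 1.0))
             for xi, ri in zip(x, resid)]
    slopes.append(fit_line(x, ystar)[1])

m = sum(slopes) / len(slopes)
se = (sum((s - m) ** 2 for s in slopes) / (len(slopes) - 1)) ** 0.5
print(se > 0)  # the bootstrap spread serves as a standard error for the slope
```

Because the Rademacher weights preserve each residual's magnitude, the scheme automatically respects the unequal variances; the robust variants replace the OLS fit inside the loop with an MM- or GM-estimate so that outliers and leverage points do not contaminate the replicates.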

Estimating monotonic rates from biological data using local linear regression

When used with rankLocReg objects, the plot function generates several diagnostic plots to help determine the most appropriate method for a given analysis. Users can examine results from alternative L metrics by using the reRank function. Fig. 1 provides a schematic overview of a typical workflow using LoLinR to estimate biological rates. Crucially, analyses using LoLinR can be fully reproduced from (1) the time series data and (2) any one of the following: summary plots, summary tables or the R code used to perform the analysis. All are easily included as appendices or supplementary material to published articles, making LoLinR analyses extremely easy to reproduce.

Convenient Way of Extend Linear Expenditure System Modeling without Regression

The Extended Linear Expenditure System (ELES) model is a collection of multiple linear models, and building it is a clearly tedious process. The innovation of this paper is to find a simple way of ELES modeling: in order to avoid building the models one by one, we use Excel functionality to create a model workspace. As long as you replace the original sample data in the workspace, you can get the results you want.


Mixed effects modeling with missing data using quantile regression and joint modeling

There are many scenarios in which multiple responses from the same subject are recorded simultaneously. In many cases it is likely that these variables will be highly correlated. Searle et al. (1992) suggested an approach to simultaneously model multiple responses within the linear model framework. Linearizing the response vector and the design matrices allows traditional computational approaches to be applied. Such modeling can lead to more efficient inference than separate univariate analyses. Shah et al. (1997) used this approach to simultaneously model bivariate responses from two randomized trials evaluating a daily prophylactic treatment. This joint modeling approach determined the efficacy of the treatment and also estimated the correlation between the CD4 and CD8 cell counts over time. Wu (2010) described the model and its estimation in general form. Extensions of this approach incorporating measurement error and missing responses were considered by Liu and Wu (2007). This work was later expanded with a greater emphasis on modeling the missing data mechanism (Wu et al., 2009).

An introduction to hierarchical linear modeling

By bringing upper level variables down to level-1, shared variance is no longer accounted for and the assumption of independence of errors is violated. If teaching style influences student breakfast consumption, for example, the effects of the level-1 (student) and level-2 (classroom) variables on the outcome of interest (GPA) cannot be disentangled. In other words, the impact of being taught in the same classroom on students is no longer accounted for when partitioning variance using the disaggregation approach. Dependencies in the data remain uncorrected, the assumption of independence of observations required for simple regression is violated, statistical tests are based only on the level-1 sample size, and the risk of partitioning variance incorrectly and making inaccurate statistical estimates increases (Beaubien et al., 2001; Gill, 2003; Osborne, 2000). As a general rule, HLM is recommended over disaggregation for dealing with nested data because it addresses each of these statistical limitations.
