Common Factor Model

Top PDF Common Factor Model:

On the effectiveness of natural hedging for insurance companies and pension plans

In this paper, we assess the potential effectiveness of natural hedging between annuity and life products. We apply the correlated Poisson Lee-Carter model, the Poisson common factor model, the product-ratio model, and historical simulation to the actual mortality experience of assured lives, pensioners, annuitants, and the general population in England and Wales. In particular, this is the first attempt to adapt the product-ratio model from the demographic literature to an actuarial problem. Beyond the initial settings, we also consider a variety of scenarios and product features and perform sensitivity analysis. In general, we find that there is an optimal mix for each case under consideration, and our simulations suggest that the level of risk reduction is too significant to be overlooked in practical work such as reserving and capital allocation. Financial institutions with mortality-linked liabilities would benefit from holding more diversified portfolios. Nevertheless, our numerical results appear to be model-dependent, which indicates that an actuary should use a number of different models in order to obtain a better view of the hedging effects.
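
For reference, a minimal sketch of the two best-known ingredients named above, in the standard notation of the demographic literature (the paper's Poisson and multi-population variants add structure not reproduced here): the Lee-Carter model for a single population is

    \ln m_{x,t} = a_x + b_x k_t + \varepsilon_{x,t}, \qquad \sum_x b_x = 1, \quad \sum_t k_t = 0,

while the product-ratio method for two related populations (e.g. annuitants versus assured lives) models the geometric mean and ratio of their rates,

    p_{x,t} = \sqrt{m_{x,t}^{(1)} m_{x,t}^{(2)}}, \qquad r_{x,t} = \sqrt{m_{x,t}^{(1)} / m_{x,t}^{(2)}},

each with its own time-series dynamics; forecasting the ratio with stationary processes keeps the forecasts for the two populations coherent.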

Assessing Reflective Models in Marketing Research: A Comparison Between PLS and PLSc Estimates

In general, the present study complements and extends prior research examining PLS's performance on common factor model data (Dijkstra & Henseler, 2015a, b). Findings from the present study suggest that PLSc estimation has the potential to play a greater role in future SEM applications when estimating common factor models (true reflective models). As a concluding note, this paper further proposes the conditions under which PLSc and traditional PLS are more appropriate to use in marketing studies. Figure 3 exhibits a guideline that could help researchers choose between PLSc and traditional PLS when reflective models are used in a study (i.e., brand involvement, brand interactivity, brand equity, and purchase intention). First, researchers must consider the nature of the measurement model (reflective or formative), which expresses how the construct is measured by means of a set of indicators (Jarvis et al., 2003). This can be done by considering the conceptualization or operationalization of the construct. Since a reflective measurement model dictates that all items reflect the same construct, indicators associated with a construct should be highly correlated with each other (Edwards & Bagozzi, 2000). Also, individual items should be interchangeable, and any single item can generally be left out without changing the meaning of the construct, provided the construct has sufficient reliability (Jarvis et al., 2003). Because the relationship runs from the construct to its indicators, a change in the evaluation of the latent trait (e.g., because of a change in the standard of comparison) implies that all indicators will change simultaneously (e.g., Diamantopoulos & Winklhofer, 2001). In addition, researchers must be aware that the indicators are error-prone manifestations of an underlying construct, with relationships going from the construct to its indicators (Bollen, 1989).

Aggregate demand for narrow and broad money: a study for the Brazilian economy (1970-1983)

Figures 1A through 1D show the plotted values for M1. From the analysis of these figures, one can see that the ARIMA model is reasonable for the first subperiod (1970,IV-1978,IV), but misleading for the second one (1979,I-1983,IV). The restricted model (regression nr. 1) and the unrestricted model (regression nr. 8) are very close in their prediction values. Between the common factor model (regression nr. 14) and the first differences model (regression nr. 16), one has to choose the common factor model. This also seems to be the best model, in terms of prediction values, of the several models presented in Figures 1A through 1D.
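
In the dynamic-regression sense used here, a common factor (COMFAC) model is a static regression with AR(1) errors; as a sketch (the paper's actual regressions nr. 1-16 are not reproduced in this excerpt), the specification

    y_t = x_t' \beta + u_t, \qquad u_t = \rho u_{t-1} + \varepsilon_t

implies the restricted autoregressive distributed-lag form

    y_t = \rho y_{t-1} + x_t' \beta - \rho x_{t-1}' \beta + \varepsilon_t,

and the first-differences model corresponds to the limiting case \rho = 1.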

How to Make a Mudsparkler

McCrae, 1985) and the Social Problem Solving Inventory-Revised (SPSI-R; D'Zurilla, Nezu, & Maydeu-Olivares, 2002). Data (N=794) were taken from Maydeu-Olivares et al. (2000). In both cases, the questionnaires used 5-point item response options: 0, 1, 2, 3 and 4. We fit a one-factor model to each questionnaire in its original form, then again fit a one-factor model after collapsing the extreme categories to turn the data into 3-point response option items (i.e., 0 & 1 = 0; 2 = 1; 3 & 4 = 2). Both variants were examined under two conditions: (a) the common factor model, where items were treated as continuous, and (b) an ordinal factor model, where the items were treated as discrete. Under the common factor model, maximum likelihood (ML) estimation was used with a mean- and variance-adjusted χ² test statistic. For the ordinal factor model, unweighted least squares (ULS) estimation was used, again with a mean- and variance-adjusted χ² test statistic based on polychoric correlations. Results are shown in Table 2. We provide the mean- and variance-adjusted χ², the Root Mean Squared Error of Approximation
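
The category-collapsing step described above can be written directly; a minimal sketch in Python (variable names are illustrative, and the actual ML and ULS factor-model fits would be carried out in dedicated SEM software, which is not shown here):

import pandas as pd

# items: responses to one questionnaire, coded 0-4 on 5-point scales.
items = pd.DataFrame({"item1": [0, 1, 2, 3, 4],
                      "item2": [4, 3, 2, 1, 0]})

# Collapse the extreme categories to obtain 3-point items:
# 0 & 1 -> 0, 2 -> 1, 3 & 4 -> 2.
collapsed = items.replace({0: 0, 1: 0, 2: 1, 3: 2, 4: 2})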

A Grouped Factor Model

In this formulation it is clear that k̄ is the penalty due to the average number of factors and h̄ is the penalty due to the dispersion of groups. Compared to the PC criterion in Bai and Ng (2002), this model selection criterion is evidently a variant of a weighted average of PC criteria over all groups, with an additional penalty on the dispersion of groups in a model. Condition 1 ensures that the proportion of a group does not vanish asymptotically. Because we are considering the asymptotic properties of the model selection criterion, the proportion of a group in a candidate model should not be vanishing. Hence we assume that, for all candidate models, there exists a constant lower bound on the ratio of the number of variables in a group to the total number of variables in the model. Condition 2 yields the right rate of convergence for the penalty term, and Condition 3 ensures that the average number of factors is the dominating parameter of the model while the dispersion of groups is a dominated parameter. When comparing two models, we first compare the dominating parameters; only when the dominating parameters are equal do we compare the dispersion of groups in the two models.
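
Schematically (this is only a paraphrase of the structure described above, not the paper's exact definition), the criterion behaves like a group-weighted version of the Bai-Ng PC criterion plus a dispersion penalty:

    C(\text{model}) = \sum_j \frac{N_j}{N} \, PC_j(k_j) + \text{penalty on the dispersion of groups},

where N_j/N is the weight of group j, PC_j(k_j) is a Bai-Ng (2002) type criterion for group j with k_j factors, the average number of factors k̄ = \sum_j (N_j/N) k_j drives the dominating penalty, and h̄ is the additional penalty attached to the dispersion of groups.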

On applications of the factor analysis in the agricultural research

The factor analysis model is an attempt to characterize the structure of the covariance/correlation matrix of the data. The model is built in such a way that the importance of the common factors in the model is decreasing. That is, the first factor F1 explains the largest portion of the total observed variance of the variables X1, X2, ..., Xv and represents the most important class of intercorrelated variables. The second factor F2 explains the largest portion of the total observed variance not explained by the factor F1. The last factor Fc included in the final factor analysis model should associate into one class at least 2 variables with significant factor loadings. The significance of a factor loading can be tested analogously to the statistical significance of a correlation coefficient. For the sake of comparability of factor analysis models across samples of different sizes, the limit of significance of a factor loading is often set subjectively (frequently to |a_jp| = 0.5). The class of variables associated by a common factor is then constituted by those variables with significant factor loadings.
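
In standard notation, the model sketched here is the orthogonal common factor model

    X_j = a_{j1} F_1 + a_{j2} F_2 + \dots + a_{jc} F_c + e_j, \qquad j = 1, \dots, v,

with uncorrelated, unit-variance common factors F_p and specific errors e_j, so that

    \operatorname{Var}(X_j) = \sum_{p=1}^{c} a_{jp}^2 + \psi_j,

where \sum_p a_{jp}^2 is the communality of X_j explained by the common factors and \psi_j its unique variance; the a_{jp} are the factor loadings whose significance is discussed above.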

The Warped One: Nationalist Adaptations of the Cuchulain Myth

The positions of the intersection point and the source point of MC rays are the most important information for calculating the view factor between surfaces. Since random numbers are used in MC sampling, the intersection point or source point may lie at any coordinate. To decide whether a ray will be counted or not, it is necessary to know the point's exact coordinates and to determine whether it lies inside or outside the element side. To test whether a point lies on an element side, area can be used as the criterion. Unlike using the center point to calculate the area of an element side, as discussed previously for the function getArea(), an arbitrary point lying on the same plane as the element side is used in the area calculations. If the point is inside, then the total area of the triangles will equal the area of the element side. On the other hand, if it is outside, then the total area will be greater than the actual area of the element side. Therefore, the total area can be used as a criterion to check whether a point lies on the element side or not.
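
A minimal sketch of this area-sum test in Python (assuming a convex, planar element side; the function names are illustrative and are not the code's actual getArea() routine):

import numpy as np

def triangle_area(a, b, c):
    # Area of a triangle in 3-D via the cross product.
    return 0.5 * np.linalg.norm(np.cross(b - a, c - a))

def point_on_element_side(point, vertices, tol=1e-9):
    # Area-sum criterion: the point lies inside the element side exactly when
    # the triangles formed by the point and each edge tile the side's area.
    vertices = np.asarray(vertices, dtype=float)
    point = np.asarray(point, dtype=float)
    n = len(vertices)
    # Reference area of the element side (fan triangulation from vertex 0).
    side_area = sum(triangle_area(vertices[0], vertices[i], vertices[i + 1])
                    for i in range(1, n - 1))
    # Total area of the triangles (point, vertex i, vertex i+1).
    total_area = sum(triangle_area(point, vertices[i], vertices[(i + 1) % n])
                     for i in range(n))
    # Inside (or on the boundary) when the two areas agree within tolerance;
    # an outside point makes the total strictly larger than the side area.
    return abs(total_area - side_area) <= tol * max(side_area, 1.0)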

A comparison of nested and cross cutting common ingroup identities and the role of ingroup projection, distinctiveness and intergroup threat on outgroup attitudes

interventions used to foster a more positive picture of outgroup members (Marcus-Newhall, Miller, Holtz, & Brewer, 1993; Kunst, Thomsen, Sam, & Berry, 2015). For the current study, this does not exclude the possibility that both CIIs had such a strong positive effect that no difference between the effects of the CIIs on outgroup attitudes could be detected. On the basis of the existing data, it cannot be concluded whether subgroup members had a more negative picture of the outgroup before they were assigned to the CIIs or whether they already had a positive picture of the outgroup before being manipulated into either condition. For this, the attitudes of the two subgroups would have had to be measured before the common ingroup identity manipulation. Thus, future research could either assess outgroup attitudes before the CII manipulation is implemented (i.e., a repeated measure of outgroup attitudes) or include a third, control condition measuring the relevant variables for the two subgroups. With the help of a control condition, it should be possible to determine whether the null effects are due to the manipulation of the CIIs or whether the CII simply has no effect.

The Effect of Investor Sentiment on Betting Against Beta: A SEM Approach Towards Beta Anomaly

expected returns. Frazzini and Pedersen (2014) construct the BAB factor, a portfolio that holds low-beta assets leveraged to a beta of one and shorts high-beta assets de-leveraged to a beta of one. They find that BAB factors have a positive average return that increases in the spread between the betas of high- and low-beta securities. Explanations for BAB may also have behavioral roots, for these very reasons: there are important market frictions, such as leverage constraints (Jensen et al., 1972) and benchmarking (Baker et al., 2011), that make high-beta stocks particularly attractive. Moreover, psychological reasoning, notably lottery demand (Bali et al., 2014), has strong explanatory power for these types of issues.
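
For reference, the construction just described is usually written as (this is the textbook form of the Frazzini-Pedersen factor, not a formula quoted from the present paper)

    r^{BAB}_{t+1} = \frac{1}{\beta^{L}_{t}} \left( r^{L}_{t+1} - r_f \right) - \frac{1}{\beta^{H}_{t}} \left( r^{H}_{t+1} - r_f \right),

where r^L and r^H are the returns of the low- and high-beta portfolios, \beta^L_t < \beta^H_t their ex-ante betas, and r_f the risk-free rate, so that the long and short legs each have a beta of one and the factor is market-neutral by construction.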

Discrepancies in Cornell Scale for Depression in Dementia (CSDD) items between residents and caregivers, and the CSDD's factor structure

Patients and methods: A cross-sectional study was conducted of 84 elderly residents (46 women, 38 men, age range 60–94 years) in a long-term residential home setting in Thailand between March and June 2011. The selected residents went through a comprehensive geriatric assessment that included use of the Mini-Mental State Examination, Mini-International Neuropsychiatric Interview, and CSDD instruments. Intraclass correlation (ICC) was calculated in order to establish the level of agreement between the residents and caregivers, in light of the residents’ cognitive status. Confirmatory factor analysis (CFA) was adopted to evaluate the alternative CSDD models.
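
The agreement analysis described here can be reproduced with standard tools; a sketch using the pingouin library (the column names and toy values are illustrative, not the study's data):

import pandas as pd
import pingouin as pg

# Long format: one row per (resident, rater) pair for a given CSDD score.
df = pd.DataFrame({
    "resident": [1, 1, 2, 2, 3, 3],
    "rater":    ["resident", "caregiver"] * 3,
    "score":    [4, 6, 2, 2, 8, 5],
})

# Two-way intraclass correlation estimates; ICC2 (absolute agreement,
# single rater) is a common choice for resident-caregiver agreement.
icc = pg.intraclass_corr(data=df, targets="resident", raters="rater",
                         ratings="score")
print(icc[["Type", "ICC", "CI95%"]])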

Common faith or parting ways? A time varying parameters factor analysis of euro area inflation

in the parameters makes these models highly nonlinear and possibly non-Gaussian, so that computationally intensive simulation-based methods are typically required for estimation. In contrast, when the parameters are driven by the score, the model remains Gaussian conditional on past data. In this case, Delle Monache et al. (2015) develop a set of recursions that, running in parallel with the standard Kalman filter, allow the evaluation of the score vector at each point in time. Once the score is known, the model parameters can be updated. The likelihood function, which remains Gaussian, can then be evaluated by means of the Kalman filter and maximized through standard procedures. A second stream of the econometric literature related to our work deals with dynamic factor models; see for example Giannone et al. (2008) and Camacho and Perez-Quiros (2010). Within this branch of the literature our paper is close to the studies that extend traditional dynamic factor models to nonlinear settings, such as those by Del Negro and Otrok (2008), Mumtaz and Surico (2012) and Marcellino et al. (2013). There are a number of differences between our method and those just mentioned. The most important one is that all these papers adopt a Bayesian standpoint and rely on computationally intensive Bayesian methods to estimate the model parameters. In our setup, estimation can be carried out by straightforward maximization of the likelihood function, with some advantages in terms of computational simplicity.
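
To illustrate the last point, a minimal sketch of likelihood evaluation via the Kalman filter in Python, for a simple local-level model rather than the paper's score-driven factor model (the prediction-error decomposition is the same idea):

import numpy as np
from scipy.optimize import minimize

def local_level_loglik(params, y):
    # Gaussian log-likelihood of y_t = mu_t + eps_t, mu_{t+1} = mu_t + eta_t,
    # evaluated through the Kalman filter's prediction-error decomposition.
    sigma_eps2, sigma_eta2 = np.exp(params)   # variances, kept positive
    a, p = y[0], 1e7                          # near-diffuse initialisation
    loglik = 0.0
    for t in range(1, len(y)):
        a_pred, p_pred = a, p + sigma_eta2    # prediction step
        v = y[t] - a_pred                     # prediction error
        f = p_pred + sigma_eps2               # prediction-error variance
        loglik += -0.5 * (np.log(2 * np.pi) + np.log(f) + v ** 2 / f)
        k = p_pred / f                        # Kalman gain
        a, p = a_pred + k * v, p_pred * (1 - k)  # update step
    return loglik

# Maximize the likelihood with a standard optimizer.
y = np.cumsum(np.random.randn(200)) + np.random.randn(200)
res = minimize(lambda th: -local_level_loglik(th, y), x0=[0.0, 0.0],
               method="L-BFGS-B")
sigma_eps2_hat, sigma_eta2_hat = np.exp(res.x)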

Validity and reliability study of the attitude scale towards second foreign language learning

The validity and reliability analyses of the scale were carried out on the data obtained from administering the preliminary form to 252 teacher candidates. The SPSS program was used in the analysis of the data: the exploratory factor analysis was done with the SPSS 18.0 package program, while the confirmatory factor analysis was done with the Lisrel 8.50 program (Linear Structural Relations statistics package). In determining the number of factors, generally more than one criterion is taken into account. Among these criteria are the Kaiser criterion (eigenvalues ≥ 1), the scree plot test, total variance explained, and parallel analysis. The scree plot analysis indicated a breaking point at three factors, and the total variance explained falls within the range (50-60%) considered acceptable in the social sciences (Koyuncu & Kılıç, 2019). The Cronbach alpha reliability coefficient and item-total correlations of the scale items were calculated first on the data. Items with item-total correlations below 0.30 or with negative values were excluded from the scale. With respect to the construct validity of the scale, the KMO (Kaiser-Meyer-Olkin) measure and Bartlett's test were applied. Eighteen items whose factor loadings were under .45, or which loaded to a similar degree on more than one factor at the same time, were excluded from the scale (De Vellis, 2003; Field, 2013). An independent-samples t-test comparing the upper and lower 27% groups was performed to determine the discriminative power of all items in terms of attitudes towards learning a second foreign language. Finally, confirmatory factor analysis (CFA) was conducted using the Lisrel 8.50 package program in order to determine whether the construct of the 43-item, three-dimensional attitude scale was validated.
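
The exploratory steps described above (KMO, Bartlett's test, eigenvalues, loadings) can be sketched in Python with the factor_analyzer package; this is an illustrative outline only, since the original analyses were run in SPSS 18.0 and Lisrel 8.50, and the file name below is a placeholder:

import pandas as pd
from factor_analyzer import FactorAnalyzer
from factor_analyzer.factor_analyzer import (calculate_bartlett_sphericity,
                                              calculate_kmo)

# df: respondents x items matrix of Likert responses (placeholder file name).
df = pd.read_csv("attitude_items.csv")

chi2, p_value = calculate_bartlett_sphericity(df)   # Bartlett's test of sphericity
kmo_per_item, kmo_total = calculate_kmo(df)          # KMO sampling adequacy

fa = FactorAnalyzer(n_factors=3, rotation="varimax")
fa.fit(df)
eigenvalues, _ = fa.get_eigenvalues()                # for the Kaiser/scree criteria
loadings = pd.DataFrame(fa.loadings_, index=df.columns)

# Flag items with no loading of at least .45 on any factor.
weak_items = loadings[loadings.abs().max(axis=1) < 0.45].index.tolist()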

Cognitive reactivity: Structure and validity of the LEIDS-R

Combining the RAV and RUM subscales seems logical considering the high correlation between these two subscales. In the LEIDS-30-R, the subscales are now as follows: Hopelessness/Suicidality (HOP), Acceptance/Coping (ACC), Aggression/Hostility (AGG), Control/Perfectionism (CON), and Avoidance Coping (AVC). While the use of the five-subscale, 30-item questionnaire (LEIDS-30-R) is recommended for future research, the factor structure of the LEIDS-30-R remains similar to that of the LEIDS-R (34 items), suggesting that the interpretation of previous studies using the LEIDS-R is not altered by these new results. The LEIDS-R and the LEIDS-30-R both showed good predictive validity, in accordance with similar results previously shown for the original LEIDS (Van der Does, 2002). Higher scores on the HOP and AVC subscales appeared to be the most indicative of a past depressive episode and had the highest predictive value.

Delayed Default Dependency and Default Contagion

A semi-analytical parametric model for pricing such correlation products is presented in Balakrishna [2006]. It is based on simultaneous defaults and needs to be generalized to allow for delayed defaults. This generalization leads naturally to a reduced-form model belonging to a class of jump-diffusion processes discussed in Duffie and Garleanu [2001]. Simultaneous default is a characteristic feature of the so-called shock models based on the Marshall-Olkin copula, which involve discontinuities in their joint probability densities. The present model can be viewed as an extension of such models offering a natural smoothing of those distributions. More importantly, it is a dynamic model that exhibits, in its multi-factor setting, the clustering tendency of credit defaults known as default contagion. It admits an efficient Monte Carlo simulation algorithm applicable to homogeneous or heterogeneous collections. This can be used to provide exact fits to CDX.NA.IG and iTraxx Europe CDOs, just as its version with simultaneous defaults can.

A functional dynamic factor model

also exhibit a day-of-the-week effect. Exceptionally long time series like this might also display multiple seasonal components; in fact, this is typical of this type of data (Taylor, 2008). Further, with the smallest data unit being a quarter-hourly call volume, an ARMA model may be expected to forecast tomorrow's volumes reasonably well. But for longer forecast horizons, even a few days ahead, the ARMA forecast would exhibit the usual mean-reversion seen in these models, resulting in either over- or under-staffing and the aforementioned costs associated with each. A better way to account for multiple periodicity would be to consider periodic auto-regressive (PAR) models (Hurd and Miamee, 2007). A connection to those types of models will be made in Chapter 6; here, however, the method proposed is of a functional nature. It is a method capable of forecasting both within-day (intra-day) call volumes and inter-day call volumes, and one that, similar to a PAR model, accounts for the multiple periodic components evident in the data. Consider the following proposed model, beginning with the actual data that motivated it.

Effectiveness of common risk factor approach based health education module: A quasi experimental study

, attitude & practice for imparting health education calls for need of training teachers for strengthening the health promoting school initiative in India. This is the first study of its kind which evaluates the effectiveness of an interventional educational module based on common risk factor approach on the higher primary school teachers. Our study had certain limitations. There was the problem of obtaining convenient time slots for conducting the study from school teachers and school management out of their busy schedule. The other major limitation was that we did not have a comparison group to take into account any changes that would have occurred due to factors other than the interventional education programme. Further, longer follow-up is required to determine whether changes are sustained over the longer term, and to be able to detect changes in health- related behaviours. The data for the study relied heavily on the information received from the respondents and so may be biased by social desirability. While the study had its limitations, the information presented allows for recommendations for further research studies and offers a starting point for school teachers to begin dialogue regarding health promotion within the teaching profession.

The common ancestry of life

Second, I have a problem with multiple statements in both papers about this or that thing not being dependent on the hypothesis of common ancestry. It does not require any simulation to point out, e.g., a flaw of this class in Theobald's work, when he says one cannot conclude anything direct about common ancestry from a BLAST P-value and has to infer it somehow. Surely, one must know that the inference in this case is possible because Karlin-Altschul statistics relies on the scoring system (s parameter) that is derived from a large dataset of alignments of bona fide homologous proteins! This and other examples seem to indicate that Theobald's argument may be based on tautology. Can the authors elaborate on whether their simulation is testing the circularity of the argument (and whether it is even able to do so, as the simulation itself is also not completely devoid of the evolutionary signal, having been built by sampling from models that are derived from alignments of orthologs), or is it doing something else?

Motor function in Parkinson's disease and supranuclear palsy: simultaneous factor analysis of a clinical scale in several populations

To examine similarities and differences between the samples, restrictions were placed on the baseline model. In particular, equality constraints were placed on classes of parameters in the following order: factor loadings, factor correlations, factor variances, and correlations of residuals. By increasing the number of restrictions in an ordered fashion, a set of nested or hierarchical models was defined which could be compared with respect to their fit. The detailed results of the multigroup factor analysis are reported in its common metric completely standardised solution (for details and considerations with respect to standardising multigroup solutions see Jöreskog & Sörbom, 1996, p. 290 ff [17]). The major characteristic of the common metric standardisation is that the weighted average of the within-group factor covariance matrices is a correlation matrix, unlike the individual factor covariance matrices. This has the advantage that the invariant loading matrices remain invariant in the standardised solution. By using the completely standardised solution, the original variables are also standardised to a common metric across groups, which facilitates the comparison of the factor variances and covariances (Jöreskog & Sörbom, 1996, p. 293) [17].

Heritability of plasma concentrations of clotting factors and measures of a prethrombotic state in a protein C-deficient family

14 Kamphuisen PW, Eikenboom JC, Rosendaal FR, Koster T, Blann AD, Vos HL, Bertina RM. High factor VIII antigen levels increase the risk of venous thrombosis but are not associated with polymorphisms in the von Willebrand factor and factor VIII gene. Br J Haematol 2001; 115: 156–8.
15 Souto JC, Almasy L, Borrell M, Garí M, Martínez E, Mateo J, Stone WH, Blangero J, Fontcuberta J. Genetic determinants of hemostasis phenotypes in Spanish families. Circulation 2000; 101: 1546–51.
16 de Lange M, Snieder H, Ariëns RA, Spector TD, Grant PJ. The genetics


Assessing uncertainty in Europe and the US: is there a common uncertainty factor?

Each measure shows an increase in uncertainty during the last years, marked by the financial turmoil. Given the rise in uncertainty, the question arises whether this uncertainty is driven by the same underlying forces. For the Euro Zone, I show that uncertainty can be separated into driving forces of short- and long-term uncertainty. In the US there is a sharp distinction between uncertainty that drives the stock market and "real" variables on the one hand and inflation (short- and long-term) on the other. Combining both data sets, factor analysis delivers (1) an international stock market factor, (2) a common European uncertainty factor and (3) a US inflation uncertainty factor.
