The aim of this study is to detect multiple change points for linear processes under NSD. We propose a CUSUM-type change point estimator in model (1) and establish the weak convergence rate of the estimator, with the mean parameter μ estimated by its LS estimator. Moreover, simulations are implemented in R to compare the CUSUM-type estimator with several existing methods. The results indicate that the CUSUM-type change point estimator is broadly comparable with those obtained by the typical methods. The remainder of this paper is organized as follows. In Sect. 2, we describe the CUSUM-type multiple change point estimator and give the weak convergence rate of this estimator. We also give a multiple variance-change iterative (MVCI) algorithm to evaluate the estimator. In Sect. 3, simulations are presented to show the performance of the estimator. Finally, the proofs of the main results are given in Sect. 4.
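For intuition, the basic CUSUM idea for a single mean change, with the unknown mean replaced by its least-squares (sample) average, can be sketched as follows. This is an illustration only: the paper's estimator handles multiple change points under NSD dependence, and the simulated data here are a hypothetical example.

```python
import numpy as np

def cusum_change_point(x):
    """Single change-point estimator from a normalised CUSUM statistic.

    Illustrative sketch: centre the data at the sample (LS) mean,
    accumulate partial sums, and locate the maximum of the weighted
    absolute CUSUM |S_k| / sqrt(k (n - k) / n).
    """
    x = np.asarray(x, dtype=float)
    n = len(x)
    s = np.cumsum(x - x.mean())          # partial sums of centred data
    k = np.arange(1, n)
    stat = np.abs(s[:-1]) / np.sqrt(k * (n - k) / n)
    return int(np.argmax(stat)) + 1      # estimated change location

# Hypothetical example: mean shifts from 0 to 3 after time 100.
rng = np.random.default_rng(0)
y = np.concatenate([rng.normal(0, 1, 100), rng.normal(3, 1, 100)])
print(cusum_change_point(y))  # close to the true change at 100
```

With a jump of three noise standard deviations the estimate concentrates tightly around the true break; the simulation studies in Sect. 3 quantify this behaviour in the dependent-error setting.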
contrast, a Bayesian approach is very applicable to this problem. We therefore adopt a Markov-switching model and use Bayesian inference, following Kim and Nelson (1999); we also extend their models, which deal with only a single change-point, to models that consider multiple change-points, in the manner of Chib (1998). Koop and Potter (1999) demonstrated that a Bayesian approach is superior to the classical approach for nonlinear models, of which structural break models are a subset. In addition, only a Bayesian approach allows a comparison among models with various numbers of change-points, and a selection of the model with the most appropriate number and timing of such points, using the Bayes factor. Using this model selection procedure, we specify the turning points and evaluate the precision with which the turning points detected via the selected model are estimated.
Remark. Note that a preliminary analysis of the accumulated observed mean (i.e., the mean obtained directly from the data) shows the possible existence of change-points. In order to either confirm or discard their presence, several versions of the model were considered. The different results were compared using the DIC and the graphical fit. Hence, we have tested the different hypotheses about the model that would best explain the behaviour of the data. Also, note that the change-points were estimated along with the parameters of the rate functions. We assigned a prior distribution to them and, using a sample drawn from the respective posterior distribution, estimated those change-points as well as their credible intervals. Note that plots of the estimated means were drawn using the estimated parameters, which were the product of a sample from their posterior distributions. We would like to call attention to the fact that, in addition to the plots of the accumulated observed and estimated means, a measure of the discrepancy between observed and estimated means was also used, namely the DIC.
When analysing fMRI data, it is common to assume that the data arise from a known experimental design (Worsley et al., 2002). However, this assumption is very restrictive, particularly in experiments common in psychology where the exact timing of the expected reaction is unknown, with different subjects reacting at different times and in different ways to an equivalent stimulus (Lindquist et al., 2007). Change point methodology has therefore been proposed as a possible solution to this problem, where the change points effectively act as a latent design for each time series. Significant work has been done in designing methodology for these situations in the at-most-one-change setting using control chart type methods (Lindquist et al., 2007; Robinson et al., 2010). Using the methodology developed in this paper, we are able to define an alternative approach based on HMMs that allows not only multiple change points to be taken into account, but also the inclusion of autoregressive (AR) error process assumptions and detrending within a unified analysis. These features need to be accounted for in fMRI time series (Worsley et al., 2002) and will be shown to have an effect on the conclusions that can be drawn from the associated analysis.
The only way to categorically measure the skill of a homogenisation algorithm for realistic conditions is to test it against a benchmark. Test data sets for previous benchmarking efforts have included one or more of the following: as homogeneous as possible real data, synthetic data with added inhomogeneities, or real data with known inhomogeneities. Although valuable, station test cases are often relatively few in number (e.g. Easterling and Peterson, 1995) or lacking real-world complexity of both climate variability and inhomogeneity characteristics (e.g. Vincent, 1998; Ducré-Robitaille et al., 2003; Reeves et al., 2007; Wang et al., 2007; Wang, 2008a, b). A relatively comprehensive but regionally limited study is that of Begert et al. (2008), who used the manually homogenised Swiss network as a test case. The European homogenisation community (the HOME project; www.homogenisation.org; Venema et al., 2012) is the most comprehensive benchmarking exercise to date. HOME used stochastic simulation to generate realistic networks of ∼ 100 European temperature and precipitation records. Their probability distribution, cross- and autocorrelations were reproduced using a “surrogate data approach” (Venema et al., 2006). Inhomogeneities were added such that all stations contained multiple change points and the magnitudes of the inhomogeneities were drawn from a normal distribution. Thus, small undetectable inhomogeneities were also present, which influenced the detection and adjustment of larger inhomogeneities. Those methods that addressed the presence of multiple change points within a series (e.g. Caussinus and Lyazrhi, 1997; Lu et al., 2010; Hannart and Naveau, 2012; Lindau and Venema, 2013) and the presence of change points within the reference series used in relative homogenisation (e.g. Caussinus and Mestre, 2004; Menne and Williams, 2005, 2009; Domonkos et al., 2011) clearly performed best in the HOME benchmark.
The World Café method has been applied in various settings, including different sectors, organizations and contexts, and has proven to be an effective way of generating input, stimulating innovative thinking and sharing knowledge. 136 By running multiple rounds, the validity of the emerging constructs is ensured. 137 However, no scientific literature exists that investigates the reliability of the World Café method. 138 Although reliability and validity are distinguishable from each other, they are also related, because validity presumes reliability. This means that a method cannot be valid if it is not reliable. 139 Using a homogeneous group and consistent topics across multiple World Café sessions should therefore lead to reliable and consistent outcomes, leading to the following hypothesis:
bereavement was the most common pattern but was observed in less than a majority (45.9%) of their participants (Bonanno et al. 2002a). Thus, although many widowed older people are resilient, not all of those who appear to adjust are resilient. Bonanno, Papa and O’Neill (2002b) argued that most bereaved people do not require professional help and do not have prolonged grief reactions. They outlined a number of factors that contribute to resilience, including worldview, self-enhancement, continuity of identity, continuing bonds and concrete aspects of the self such as roles and behaviours. In his 2004 discussion paper, Bonanno argued that resilience is more common than has been thought, and that the absence of distress does not equate to problematic grieving or a delayed grief reaction. Finally, he suggested that there are multiple and somewhat unexpected pathways to resilience. Some people display hardiness, others self-enhancement, others positive affect and humour, and yet others successfully use repressive coping. In Bonanno (2004), he argued that resilience is the ‘ability to maintain a stable [psychological] equilibrium’ (p. 20) following the loss, without long-term consequences (see also Boerner, Wortman and Bonanno 2005). Thus, the focus is on the ability to withstand the stress of bereavement.
direction and historical tracks are obtained from the GPS module. Secondly, an adaptive Kalman filter is used to filter out the dynamic noise and measurement noise in the raw GPS data. Thirdly, a coordinate transform is used to map the GPS data points onto Baidu Map. Fourthly, the direct projection method is applied to obtain the optimal position, which is displayed on the electronic map. Finally, the optimal result obtained from this process is interpolated so that the trajectory has more detail.
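As a rough illustration of the filtering step, here is a plain (non-adaptive) one-dimensional Kalman filter applied to a synthetic noisy track. The constant-position model, the noise settings, and all variable names are assumptions for this sketch; the system described above uses an adaptive variant that tunes the noise covariances online.

```python
import numpy as np

def kalman_1d(z, q=0.05, r=0.25):
    """Smooth a noisy 1-D track (e.g. one GPS coordinate).

    q: process noise variance, r: measurement noise variance.
    Each step: predict (inflate covariance by q), then correct the
    state towards the measurement with the Kalman gain k.
    """
    x, p = z[0], 1.0                 # initial state and covariance
    out = []
    for zk in z:
        p = p + q                    # predict
        k = p / (p + r)              # Kalman gain
        x = x + k * (zk - x)         # update with measurement zk
        p = (1 - k) * p
        out.append(x)
    return np.array(out)

# Hypothetical data: straight-line motion with GPS-like noise (sd 0.5).
rng = np.random.default_rng(1)
truth = np.linspace(0.0, 10.0, 200)
noisy = truth + rng.normal(0, 0.5, 200)
smooth = kalman_1d(noisy)
print(np.mean((smooth - truth) ** 2) < np.mean((noisy - truth) ** 2))
```

The filtered track has a noticeably smaller mean squared error than the raw measurements; an adaptive filter would additionally re-estimate q and r from the innovation sequence as the vehicle dynamics change.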
The paper makes three contributions. First, we derive the asymptotic expectation of the residual sum of squares in models with breaks in the coefficients at unknown dates. For linear or nonlinear regression models with exogenous regressors, this expectation depends on the numbers of estimated break points and estimated mean parameters, with the former having a weight of three relative to each mean parameter. For the linear model, this finding reproduces that of Kurozumi and Tuvaandorj (2011), but the extension to nonlinear models is new. In addition, our derivation differs from that of Kurozumi and Tuvaandorj (2011) and is based on a decomposition of the residual sum of squares that, we believe, provides interesting insights into the derived result. Although the expression is more complicated in linear models estimated via 2SLS, the principal result, namely that each estimated break date has the same impact on the expectation as three estimated mean parameters, carries over to this context. Second, we propose a statistic for testing the joint hypothesis that the breaks occur at specified points in the sample. Under its null hypothesis, this statistic is shown to have a limiting distribution that is non-standard but, under certain assumptions, asymptotically pivotal after normalization; percentiles are provided for this limiting distribution. Although the same distribution is obtained by Hansen (2000) (see also Hansen, 1997) in the context of testing the location of the single threshold in a threshold autoregressive (TAR) model, no joint test appears to have been proposed previously in the literature. This statistic can be used to construct confidence sets for the breaks. This issue has recently received some attention in linear models; e.g., see Elliott and Mueller (2007), Eo and Morley (2015), and Chang and Perron (2015). Unlike these previous methods, our approach treats the breaks jointly rather than constructing individual intervals for each break.
Our third contribution is to examine breaks in U.S. monetary policy, shedding new light on the common assumption that Volcker taking over as Fed chair marked an immediate policy change [Clarida et al. (2000)].
As a closely related reference, we quote Chen et al. (2017), who proposed a method for detecting multiple change-points in the generalized O-U process. In this thesis, we study the inference problem in generalized O-U processes with multiple unknown change-points where the drift parameter is suspected to satisfy some restrictions. We also revisit the conditions for the main results in Chen et al. (2017) to hold. In particular, we show that the results in Chen et al. (2017) hold without their Assumption 2. Nevertheless, the authors of the quoted paper omitted an important condition on the initial value of the SDE that is needed for their main results to hold. In the subsequent section, we highlight the main contribution of this thesis.
Networks are fundamental structures that are commonly used to describe interactions between sets of actors or nodes. In many applications, the behaviors of the actors are observed over time and one is interested in recovering the underlying network connecting these actors. High-dimensional versions of this problem, where the number of actors is large (compared to the number of time points), are of special interest. In the statistics and machine learning literature, this problem is typically framed as fitting large graphical models with sparse parameters, and significant progress has been made recently, both in terms of the statistical theory (Meinshausen and Buhlmann, 2006; Yuan and Lin, 2007; Banerjee et al., 2008; Ravikumar et al., 2011; Hastie et al., 2015), and practical algorithms (Friedman et al., 2007; Höfling and Tibshirani, 2009; Atchade et al., 2017).
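The link between networks and sparse parameters can be seen in a small numpy sketch (our illustration, not taken from the cited papers): in a Gaussian chain, two nodes are conditionally independent exactly when the corresponding entry of the inverse covariance, equivalently the partial correlation, is zero. Graphical lasso methods add an ℓ1 penalty so that such entries are estimated as exactly zero in high dimensions; here we only inspect the unpenalized estimate for intuition.

```python
import numpy as np

rng = np.random.default_rng(2)
n, p = 2000, 5
# Chain dependence: node k depends only on node k-1, so the true
# precision (inverse covariance) matrix is tridiagonal (sparse),
# even though the covariance matrix itself is dense.
X = np.zeros((n, p))
X[:, 0] = rng.normal(size=n)
for k in range(1, p):
    X[:, k] = 0.7 * X[:, k - 1] + rng.normal(size=n)

theta = np.linalg.inv(np.cov(X, rowvar=False))   # estimated precision
# Partial correlations: near zero iff conditionally independent.
pcor = -theta / np.sqrt(np.outer(np.diag(theta), np.diag(theta)))
np.fill_diagonal(pcor, 1.0)
print(np.round(pcor, 2))   # large on the off-diagonal band, tiny elsewhere
```

Non-adjacent pairs such as (0, 4) have partial correlations near zero, while chain neighbours have large entries; the penalized estimators cited above recover this zero pattern reliably even when p exceeds n.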
Dijkstra's algorithm is widely used for finding the shortest paths from a starting point to the other vertices [12-14]. The advantage of the algorithm lies in finding the unsettled vertex closest to the starting point, automatically taking it as the new starting point (knee point), and updating the original path information simultaneously. In this paper, an improved Dijkstra algorithm is used to find the matrix of minimum distances between points. The method can be divided into three steps. The main computation procedure of Dijkstra's algorithm is described as follows.
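The procedure described above (repeatedly settling the vertex closest to the start and relaxing its outgoing edges) corresponds to the standard heap-based implementation. The following minimal Python version is a generic illustration, not the paper's improved variant:

```python
import heapq

def dijkstra(graph, start):
    """Shortest distances from `start` in a weighted graph.

    graph: {node: [(neighbour, weight), ...]} with non-negative weights.
    At each step the unsettled vertex closest to the start is popped
    from the heap and becomes the new expansion point.
    """
    dist = {start: 0}
    heap = [(0, start)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue                      # stale heap entry, skip
        for v, w in graph[u]:
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd              # relax edge (u, v)
                heapq.heappush(heap, (nd, v))
    return dist

# Small hypothetical road network.
g = {
    "A": [("B", 4), ("C", 1)],
    "B": [("D", 1)],
    "C": [("B", 2), ("D", 5)],
    "D": [],
}
print(dijkstra(g, "A"))  # {'A': 0, 'B': 3, 'C': 1, 'D': 4}
```

Running this from every vertex in turn yields the full minimum-distance matrix that the improved algorithm computes.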
recognition and many others, greatly benefit from having color and depth information fused together. Traditional high-cost 3D profiling cameras often result in lengthy acquisition and slow processing of massive amounts of information. With the invention of the low-cost Microsoft Kinect sensor, high-resolution depth and visual (RGB) sensing has created many opportunities for multimedia computing. Our objective is to design and implement a system for calibrating multiple Kinect sensors in different views.
“We are one week away from changing America,” Obama proclaimed, reminding his audience of the presidential election that drew nearer as the long trails of the campaigns came to an end in his closing argument for the presidency at the Canton Civic Center in Ohio. To his country, in the grip of the economic crisis, Obama promised to restore the nation's economic prosperity and sense of higher purpose (Feller, 2008a). As Americans had eight more days to go, Obama was anxious to bring to the forefront of his audience's consciousness his main political platforms, such as bringing change to the Republican-dominated government, putting an end to the unauthorized war in Iraq, providing affordable universal healthcare coverage, improving education and cutting taxes for those who earned below $250,000 (Obama, 2008; cf. Feller, 2008b; Baker, 2008). He was to underscore the persuasive messages of his political agenda that differentiated him from John McCain, the Republican presidential candidate.
state nature of the model, a change into state 2 represents the first change point (i.e. transitioning into state A) while being in state 2 is equivalent to being in the continuation state C, and thus even though state 2 is in S, it is not needed in Z. The required posterior transition probabilities for this model can easily be found using the results from the original Chib paper. The analysis given here is based on the posterior means of the parameters, and compared to the sampled state sequences (which incorporate parameter uncertainty). As can be seen in Figure 1, the change point distributions for the first and second change points (blue and red respectively) are almost identical using the posterior mean parameters (top plot, not incorporating parameter estimation error) and the sampled state sequence (bottom plot, incorporating parameter estimation error). The only area where small differences occur is in the region of 20-30 time steps, where there is a very small probability of a change (erroneous, given the change is at time 50) in the plots which incorporate parameter estimation error. In addition, a heuristic approximation to the change point distribution incorporating parameter uncertainty can be found by using each set of posterior parameters from the MCMC output and then estimating the change point distribution using the methodology above. The pointwise mean of all these change point distributions (5 × 10^4, one for each parameter set in the MCMC run) can then be found, and
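The heuristic of averaging change point distributions over posterior parameter draws can be sketched as follows. This is a toy stand-in under our own assumptions: a simple known-mean-change likelihood and simulated "posterior draws" replace the actual HMM parameters and MCMC output.

```python
import numpy as np

rng = np.random.default_rng(3)
T, n_draws = 100, 200
# Synthetic series with one mean change at time 50.
y = np.concatenate([rng.normal(0, 1, 50), rng.normal(2, 1, 50)])

# Stand-in for MCMC output: draws of the two segment means
# (a real run would sample all model parameters jointly).
mu1 = rng.normal(0, 0.1, n_draws)
mu2 = rng.normal(2, 0.1, n_draws)

dists = np.empty((n_draws, T - 1))
for i in range(n_draws):
    # Log-likelihood of a change after time t, for this parameter draw.
    ll = np.array([np.sum(-0.5 * (y[:t] - mu1[i]) ** 2) +
                   np.sum(-0.5 * (y[t:] - mu2[i]) ** 2)
                   for t in range(1, T)])
    w = np.exp(ll - ll.max())
    dists[i] = w / w.sum()              # change point distribution

pointwise_mean = dists.mean(axis=0)     # average over parameter draws
print(int(np.argmax(pointwise_mean)) + 1)  # near the true change at 50
```

Each row of `dists` is the change point distribution conditional on one parameter draw, and the pointwise mean propagates parameter uncertainty into the final distribution, mirroring the comparison in Figure 1.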
The use of technology has increased vastly, and today computer systems are interconnected via different communication media. The use of distributed systems in our day-to-day activities has greatly improved with data distribution. This is because distributed systems enable nodes to organise and share their resources among the connected systems or devices, integrating people with geographically distributed computing facilities. Distributed systems may, however, suffer from a lack of service availability due to multiple system failures at multiple failure points. This article highlights the different fault tolerance mechanisms used in distributed systems to prevent multiple system failures at multiple failure points, considering replication, high redundancy and high availability of the distributed services.
Let T be a mapping from X into itself. T is called a nonexpansive mapping if the inequality d(Tx, Ty) ≤ d(x, y) is satisfied for any x, y ∈ X. A point z ∈ X is called a fixed point of T if Tz = z holds. We denote the set of all fixed points of T by F(T). A subset C ⊂ X is said to be convex if, for any x, y ∈ C, [x, y] is included in C. We know that F(T) is a closed convex subset of X if T is nonexpansive.
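A concrete example may help fix these definitions (our illustration, not from the source):

```latex
Take $X=\mathbb{R}$ with $d(x,y)=|x-y|$ and $Tx=-x$. Then
\[
  d(Tx,Ty) = |(-x)-(-y)| = |x-y| = d(x,y),
\]
so $T$ is nonexpansive (in fact an isometry). The fixed-point equation
$Tz=z$ reads $-z=z$, hence $F(T)=\{0\}$, which is closed and convex,
as the statement above guarantees. Note that the Picard iterates
$x_{n+1}=Tx_n$ merely oscillate between $\pm x_0$, while the averaged
(Krasnosel'skii--Mann) iterate $x_{n+1}=\tfrac12(x_n+Tx_n)$ reaches
the fixed point $0$ in one step.
```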
at 4-h has the largest AUC of 0.865, followed by 3-h, 2-h, and 1-h, respectively (Table 3). The rate of false diagnosis by the decision tree model with 4-h is 14.4% (28 out of 123 for patients without gastroparesis, and 18 out of 197 for those with gastroparesis), less than half the rate of those who would be wrongly diagnosed by the 1-h and 2-h points, and 37% ((73-46)/73) less than that at 3-h. Including 2-h or 3-h along with 4-h in the decision tree did not increase the number of correct diagnoses over using 4-h alone, as indicated by jackknife cross-validation. These results differ from those obtained with the LDA and DF approaches, in which the linear combination of 3-h and 4-h showed a slight improvement over using 4-h alone. However, the decision tree model with either 4-h alone or its combination with 2-h or 3-h did not suffer in diagnostic utility compared with its counterpart models identified with either the LDA or DF approach, regardless of data transformation. The CART model using all four hourly GES measures along with patient age was very interesting. For the criteria of gastric retention >10% at 4-h and <53% at 2-h, patients
The remainder of the article is structured as follows. Section 2 deals with piecewise-constant TV-AR model estimation preliminaries. In Section 3, the problem at hand is recast as a sparse linear regression, and the novel group Lasso approach is introduced. An efficient block-coordinate descent algorithm is developed in Section 4, while tuning issues and uniqueness of the group Lasso solution are addressed in Section 5. Section 6 introduces a non-convex segmentation method based on the SCAD regularization to enhance the sparsity of the solution, which translates to retrieving the change instants more precisely. Numerical tests are presented in Section 7, and concluding remarks are summarized in Section 8. The Appendix is devoted to technical proofs. Notation: