When a time-varying system is subject to rare but abrupt (jumping) changes, the parameters estimated by conventional adaptive algorithms cannot track the variations of the true system parameters in the vicinity of these jumping locations, resulting in so-called “lag” estimation. Three methods can be used to mitigate this effect. The first is to use variable-forgetting-factor RLS algorithms. The second is to increase the system estimation covariance matrix at the jumping locations. The third comprises various Bayesian Kalman filtering algorithms. In this paper, the second method is adopted to track abrupt changes of the system parameters. One difficulty of this method is identifying the unknown locations and amplitudes of the abrupt changes online. Some approaches have been developed for this task, but they exhibit an obvious trade-off between detection sensitivity and robustness. It has been shown that modifying the covariance matrix only with respect to the detected jumping parameter branch(es) improves identification accuracy. Hence, it is desirable to have a simple yet efficient detection and modification algorithm.
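To fix ideas, the second method can be sketched as follows. This is not the paper's algorithm; it is a minimal illustration, with an assumed (hypothetical) jump detector that simply flags prediction errors that are large relative to a running error-variance estimate, and then inflates the whole covariance matrix.

```python
import numpy as np

def rls_with_covariance_reset(phi, y, lam=0.98, c_reset=100.0, thresh=3.0):
    """Illustrative RLS with covariance inflation at detected jumps.

    phi : (T, n) regressor matrix, y : (T,) observations.
    When the prediction error is large relative to a running variance
    estimate (a crude stand-in for a proper jump detector), the
    covariance matrix P is inflated by c_reset, which speeds up
    re-adaptation after an abrupt parameter change.
    """
    T, n = phi.shape
    theta = np.zeros(n)
    P = np.eye(n) * 1e3
    sigma2 = 1.0                      # running error-variance estimate
    est = np.zeros((T, n))
    for t in range(T):
        x = phi[t]
        err = y[t] - x @ theta
        if err**2 > thresh**2 * sigma2:
            P = P + c_reset * np.eye(n)   # inflate covariance at the jump
        k = P @ x / (lam + x @ P @ x)     # RLS gain
        theta = theta + k * err
        P = (P - np.outer(k, x) @ P) / lam
        sigma2 = 0.95 * sigma2 + 0.05 * err**2
        est[t] = theta
    return est
```

Inflating only the row/column of the detected jumping branch, as discussed above, would replace the `c_reset * np.eye(n)` term with a rank-one update on the detected coordinate.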
Other studies have used either a filter in the frequency domain or a windowing function in the time domain [10, 11]. The main purpose of the filter is to reduce the amplitude of the side lobes, which leads to a significant decrease in the ICI power. This method increases performance noticeably in comparison with ICI self-cancellation methods; however, its drawbacks are its high complexity and its suitability only for a constant frequency shift. Other authors developed a method to compensate the signal dispersion caused by time-varying channels; however, this method is only applicable to media in which the line-of-sight signal is extremely dominant. A partial FFT method was introduced in order to avoid the frequency-domain equalizer. Although this approach has an acceptable complexity, it assumes that the channel parameters vary slowly compared to the OFDM symbol duration, which makes it weak in the LTE-R scenario. A low-complexity equalizer has also been proposed using the channel matrix in the frequency domain; however, the analyzed channel matrix lacked important corner elements, and with a full channel matrix such low complexity cannot be attained. Using the same channel matrix, a frequency-domain equalizer based on sub-matrices has also been proposed. Usually, the best performance is achieved by the conventional equalizer, which inverts the entire channel matrix; however, the conventional method suffers from high complexity and a singular-matrix problem. The sub-matrix method solves these problems but pays a penalty in performance.
In this paper, we extend this analysis by examining the fractional cointegration case using time-varying vector autoregression model. We specify the vector error correction model (VECM) with a cointegrating vector that varies with time and we approximate this vector by a linear combination of orthogonal Chebyshev time polynomials.
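The Chebyshev device used here is standard: a smoothly time-varying cointegrating coefficient is written as a linear combination of Chebyshev polynomials of the time index. A minimal sketch of such a basis (the polynomial order m and the least-squares projection are illustrative choices, not the paper's specification):

```python
import numpy as np

def chebyshev_basis(T, m):
    """First m Chebyshev polynomials of the first kind, evaluated on
    the sample t = 1..T mapped to [-1, 1] -- a common device for
    approximating a smoothly time-varying coefficient beta_t ~ B @ c."""
    x = 2.0 * np.arange(T) / (T - 1) - 1.0       # map 1..T onto [-1, 1]
    B = np.zeros((T, m))
    B[:, 0] = 1.0
    if m > 1:
        B[:, 1] = x
    for k in range(2, m):
        B[:, k] = 2.0 * x * B[:, k - 1] - B[:, k - 2]   # recurrence
    return B
```

A time-varying coefficient path beta_t can then be fitted by ordinary least squares on the columns of B, e.g. `c, *_ = np.linalg.lstsq(B, beta, rcond=None)`.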
1. Introduction. Recently, special attention has been devoted to the problem of the exponential stability of nonlinear autonomous systems with time-varying delays, since such systems have been successfully applied to various signal processing problems such as optimization, image processing, associative memory and many other fields; see [1-14] and the references therein. However, as we know, time-varying phenomena occur in many realistic systems. In particular, when we consider the long-time dynamical behavior of a system, its parameters are usually subject to environmental disturbances and frequently vary with time. In this case, time-varying system models can describe the evolutionary processes of networks in detail. Thus, the study of the dynamical properties of time-varying systems with time-varying delays is very interesting, and there are some excellent results on time-varying systems; see [15-20] and the references therein.
The first algorithm for finding the solution of the Lasso was presented by Tibshirani (1996) in the work introducing the Lasso method itself. Osborne et al. (2000) then developed an algorithm which works not only in the case p < n but also when p > n. In order to make the computation more efficient, Efron et al. (2004) proposed the least angle regression algorithm (LARS). The latter procedure is as efficient as a single least squares fit and can also be used when the number of parameters of the investigated model is much larger than the number of observations. As a selection criterion for λ̂ in LARS, Efron et al. (2004) suggested a C_p-type selection criterion. Zou et al. (2007) then defined model selection criteria such as
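For concreteness, the optimization problem that all of these algorithms solve can be illustrated with a plain coordinate-descent solver. This is not LARS (which computes the entire solution path at roughly the cost of one least-squares fit); it is only a minimal sketch of the Lasso objective itself:

```python
import numpy as np

def lasso_cd(X, y, lam, n_iter=200):
    """Minimal coordinate-descent solver for the Lasso
        min_b  (1/2n) ||y - X b||^2 + lam ||b||_1.
    Each coordinate update is a soft-thresholding step."""
    n, p = X.shape
    b = np.zeros(p)
    col_sq = (X ** 2).sum(axis=0) / n
    for _ in range(n_iter):
        for j in range(p):
            r = y - X @ b + X[:, j] * b[j]        # partial residual
            rho = X[:, j] @ r / n
            b[j] = np.sign(rho) * max(abs(rho) - lam, 0.0) / col_sq[j]
    return b
```

The soft-thresholding step is what sets some coefficients exactly to zero, which is the variable-selection property the criteria of Zou et al. (2007) exploit.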
In this paper we propose a robust form of the Nth-order complex-lag time-frequency distribution (CTD). An arbitrarily high concentration can be achieved by increasing the distribution order N. The standard CTD is defined as the convolution of the WD and the Fourier transform of the higher-order complex-lag moment, called the concentration function (CF) [17, 18]. Similarly, the robust CTD can be obtained as the convolution of the robust forms of the WD and CF. Additionally, a cross-term-free robust complex-lag time-frequency distribution is proposed for multicomponent signals.
Next let us compare our three different TVD models. Broadly speaking, they yield similar results. However, our first TVD model is sometimes a bit different from the other two. It exhibits less time variation and, loosely speaking, tends to either include a variable or exclude it. This is not surprising in light of the properties of this model (see Section 2.2). That is, the longer a predictor is excluded from the model, the larger the variance in the state equation prior becomes (and the less shrinkage applies). This could be an attractive feature if there are big structural breaks in the coefficients. But, in our data set, this is a less attractive property. The third TVD approach shares these properties with the first TVD model, but this is partly counteracted by the added dimension reduction noted in Section 2.4. These considerations suggest that the second TVD approach might be most suitable for data sets with frequent small breaks.
In recent years, the relationship between inflation and nominal interest rates has been the subject of numerous studies in the empirical literature. This relationship is a major part of monetary economics, because whether monetary policy influences the real interest rate is one of the most important questions facing monetary authorities. A positive association between movements of the nominal interest and inflation rates, known as the Fisher hypothesis, has been widely accepted as an empirical regularity in macroeconomics ever since Fisher (1930). The Fisher hypothesis formulates the real interest rate as the difference between the nominal interest rate and the expected inflation rate. For the strong form of the Fisher hypothesis to hold, the nominal interest rate must move one-for-one with the expected inflation rate. If the nominal interest rate and the inflation rate are each integrated of order one, I(1), then the two variables should be cointegrated with a slope coefficient of unity, so that the real interest rate is covariance stationary. The concept of cointegration is associated with a long-run equilibrium relationship between two or more variables. This empirical relationship has important implications for macroeconomic policy: if the hypothesis holds in the long run, monetary policy does not affect the real interest rate, leaving the ex ante real interest rate unchanged. This would mean that real interest rates are unrelated to expected inflation and are determined by real factors in an economy, such as capital productivity and time preference (Payne and Ewing, 1997; Million, 2004).
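The logic of the strong-form test can be illustrated on simulated data. This is only a toy demonstration of the cointegration argument above (all series and parameters are invented for the illustration): when the nominal rate equals an I(1) expected-inflation series plus a stationary real rate, the cointegrating regression recovers a slope near one and mean-reverting residuals.

```python
import numpy as np

def fisher_cointegration_demo(seed=1, T=2000):
    """Toy check of the strong Fisher hypothesis on simulated data:
    pi is a random walk (I(1) expected inflation) and i = 2 + pi + real,
    where the real rate is stationary AR(1).  Under cointegration with
    unit slope, OLS of i on pi gives a slope near 1 and residuals with
    an AR(1) coefficient well below 1 (mean reversion)."""
    rng = np.random.default_rng(seed)
    pi = np.cumsum(rng.normal(0, 1, T))            # I(1) inflation
    real = np.zeros(T)
    for t in range(1, T):
        real[t] = 0.7 * real[t - 1] + rng.normal(0, 0.5)
    i = 2.0 + pi + real                            # nominal rate
    X = np.column_stack([np.ones(T), pi])
    beta = np.linalg.lstsq(X, i, rcond=None)[0]    # cointegrating regression
    resid = i - X @ beta
    rho = (resid[:-1] @ resid[1:]) / (resid[:-1] @ resid[:-1])
    return beta[1], rho
```

In an actual application one would replace the first-order autocorrelation check with a formal residual-based unit-root test.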
Current models utilizing an equilibrium term structure for interest rates are often based on a representative-agent framework with specific parametric assumptions about the preferences of the representative agent. When dealing with a Vasicek bond-pricing problem with time-varying relationships, whether the short rate r should be unbounded remains essentially an open question, because the time to maturity is a function of the short rate. Some approaches to this may be seen in Stampfli and Goodman (2001), Christiansen (2005), Lemke (2008), Realdon (2009), Jiang and Yan (2009), Laurini and Moura (2010), Jang and Yoon (2010), Yu and Zivot (2011), Wang (1996), Honda, Tamaki, and Shioham (2010), Chen, Liu, and Cheng (2010), Mahdavi (2008), and Fouque, Papanicolaou, and Sircar (2000).
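For reference, the textbook constant-parameter Vasicek model, which the time-varying settings in the works cited above generalize, has a closed-form zero-coupon bond price. The sketch below states only that standard formula, not any of the extended models:

```python
import numpy as np

def vasicek_bond_price(r, tau, a, b, sigma):
    """Zero-coupon bond price P(tau) under the textbook Vasicek model
        dr = a (b - r) dt + sigma dW,
    with B(tau) = (1 - e^{-a tau}) / a and the usual affine form
    P = A(tau) exp(-B(tau) r)."""
    B = (1.0 - np.exp(-a * tau)) / a
    A = np.exp((b - sigma**2 / (2 * a**2)) * (B - tau)
               - sigma**2 * B**2 / (4 * a))
    return A * np.exp(-B * r)
```

Two sanity checks follow directly from the formula: P(0) = 1, and with sigma = 0 and r = b the price reduces to the deterministic discount factor exp(-b tau).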
In part A of this research, we reported adsorbent generation, characterization and optimization of the factors affecting sorption. A destructive distillation technique was used to transform biomass into biosorbents: pyrolysed Moringa oleifera pods (PMOP) and shells (PMOS). The adsorbents were characterized for surface morphology, crystallographic pattern, active functional sites and elemental composition using SEM, TEM, PXRD, FTIR and a CHNS/O analyzer. Performance assessment of the adsorbents was based on removal efficiency. The effects of pH, initial adsorbate concentration, contact time, adsorbent dose and temperature on chromium uptake were studied in column mode. The results show the role of both the physical and the chemical characteristics of the adsorbents. The maximum adsorption capacity of PMOS is 277.3 mg/g. The performance of the derived sorbents, compared with commercially available activated carbon, shows no statistically significant difference at p < 0.05.
Linear systems (Mullis, 1976) serve as basic tools in various fundamental areas, and the problems of system identification and signal modeling have attracted considerable attention due to their large number of applications in diverse fields (Astrom, 1971), (Al-Shoshan, 1996), (Broersen, 2000), (Choi, 1992), (Lobato, 2018), (Diggle, 1990), (Grenier, 1983), (Wood, 1992). The time-invariant case, where the signals are stationary and the system's operations are steady, is well established. In this case, the representation of the signals and the characterization and design of the systems are conducted in either the time or the frequency domain (Haykin, 1991), (Huang, 1980), (Ding, 2016). Whenever the signal of interest or the desired system operations are non-stationary, such approaches are quite limited, see for example (Kayhan, 1994), (Kenny, 1993), as they often do not express explicitly the signal or system non-stationarity. Whenever slow temporal variations are presumed, the problem can be resolved by partitioning the signal into time sections that are sufficiently short to be considered locally time-invariant and sufficiently long to yield the desired frequency resolution. In this paper, we develop tools for dealing with LTV systems and non-stationary signals without these assumptions.
Next consider the time-varying aspect of the TVD models. Clearly there is substantial time variation in many of the lines in Figures 1 through 5, indicating that the impact of the corresponding predictor is changing over time. We can also be sure any patterns are not an artifact of the statistical methodology, since the patterns are so different in the different figures. For instance, one might expect that the lines in the figures would uniformly tend to increase over time since the available data is increasing, leading to more “significant” coefficients. This is not the case. Although there are some cases where p(K_{j,t} = 1 | y) is increasing over time, there are many where it is
Temporal variation. Shen uses the span space to define the temporal variation of a cell’s values. The area over which the points corresponding to a cell’s min-max values over time are spread out gives a good measure of its temporal variation: the larger the area of spread, the greater the variation. In particular, subdivide the span space into ℓ × ℓ non-uniformly spaced rectangles called lattice elements using the lattice subdivision scheme [SHLJ96]. To perform the subdivision, lexicographically sort all extreme values of the cells in ascending order and find ℓ + 1 values that partition the sorted list into ℓ sublists, each with the same number of cells. Use these ℓ + 1 values to draw the vertical and horizontal lines of the required subdivision. Note that this subdivision does not guarantee that each lattice element contains the same number of cells. Given a time interval [i, j], a cell is defined to have low temporal variation in that interval if its j − i + 1 min-max interval points are located over an area of 2 × 2 lattice elements.
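The subdivision and point-location steps above can be sketched as follows. This is a simplified reading of the [SHLJ96]-style scheme (function names and the equal-count cut selection are illustrative):

```python
import numpy as np

def lattice_partition(mins, maxs, ell):
    """Choose ell + 1 cut values so that the sorted list of all extreme
    values (cell mins and maxs together) is split into ell sublists of
    roughly equal size; the same cuts are used on both axes of the
    ell x ell non-uniform lattice over the span space."""
    vals = np.sort(np.concatenate([mins, maxs]))
    idx = np.linspace(0, len(vals) - 1, ell + 1).astype(int)
    return vals[idx]

def lattice_element(vmin, vmax, cuts):
    """(row, col) of the lattice element containing the span-space
    point (vmin, vmax); searchsorted locates the bracketing cuts."""
    last = len(cuts) - 2
    col = int(np.clip(np.searchsorted(cuts, vmin, side='right') - 1, 0, last))
    row = int(np.clip(np.searchsorted(cuts, vmax, side='right') - 1, 0, last))
    return row, col
```

The low-temporal-variation test for an interval [i, j] then reduces to mapping the j − i + 1 min-max points of a cell through `lattice_element` and checking that the resulting rows and columns each span at most two values.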
An increasing number of models have been developed in the empirical literature for financial instruments and other commodity markets. However, those models may be inadequate to deal with the uniqueness of electricity as a commodity. Indeed, electricity markets are characterized by extreme changes in spot prices known as jumps or spikes. Several nonlinear models have been proposed to include discontinuous components in realistic models of electricity price dynamics. Since the seminal work of Hamilton (1989), the Markov switching class of models has become very popular. The key attractive feature of Markov switching models is that the conditional distribution of a time series depends on an underlying latent state or regime, which can take only a finite number of different values. The discrete state evolves through time as a discrete Markov chain and its statistical properties are summarized by a transition probability matrix. Ethier and Mount (1998) proposed a two-state specification in which both regimes were governed by first-order autoregressive processes with different or common variances. Huisman and Mahieu (2003) introduced a third regime, the jump reversal regime, that describes how prices move back to the baseline regime after the initial jump has occurred. Huisman and De Jong (2003) proposed a model with only two regimes, a stable mean-reverting AR(1) regime and a spike regime. In order to cope with the heavy-tailed nature of spikes, Weron et al. (2004) and Janczura and Weron (2009, 2010) replaced the normal distribution of the spike regime with log-normal and Pareto distributions.
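A minimal simulation makes the two-regime structure concrete. This toy sketch is only in the spirit of the Huisman and De Jong (2003) specification (all transition probabilities, AR and log-normal parameters below are invented for illustration):

```python
import numpy as np

def simulate_two_regime_price(T=500, p_stay=0.95, p_spike_stay=0.4, seed=3):
    """Toy two-regime electricity-price simulation: regime 0 is a
    mean-reverting AR(1) base regime, regime 1 draws log-normal
    spikes; the latent state follows a 2-state Markov chain whose
    statistical properties are summarized by the transition matrix P."""
    rng = np.random.default_rng(seed)
    P = np.array([[p_stay, 1 - p_stay],
                  [1 - p_spike_stay, p_spike_stay]])
    s = 0
    x = np.zeros(T)
    states = np.zeros(T, dtype=int)
    for t in range(1, T):
        s = rng.choice(2, p=P[s])
        states[t] = s
        if s == 0:                         # mean-reverting base regime
            x[t] = 0.8 * x[t - 1] + rng.normal(0, 1)
        else:                              # heavy-tailed spike regime
            x[t] = rng.lognormal(mean=1.5, sigma=0.5)
    return x, states
```

Replacing the log-normal draw with a Pareto draw gives the heavier-tailed spike variants of Weron et al. (2004) and Janczura and Weron (2009, 2010).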
In this paper, an algorithm for clipped speech restoration using linear prediction has been presented and tested. It is able to restore the clipped speech completely. Two different methods for estimating the prediction parameters have been tested. The first consists of block least-squares estimation, while the second is a recursive method. It appears that the recursive method is
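The block least-squares variant can be sketched as follows. This is not the paper's algorithm, only a minimal illustration of the idea: estimate AR prediction coefficients from reliable samples, then replace clipped samples by forward prediction (a real system would also estimate the coefficients from unclipped samples only and may smooth both directions).

```python
import numpy as np

def ar_fit(x, p):
    """Block least-squares estimate of AR(p) prediction coefficients
    from a reliable (unclipped) segment: x[t] ~ sum_k a_k x[t-k]."""
    N = len(x)
    X = np.array([[x[t - k] for k in range(1, p + 1)] for t in range(p, N)])
    a, *_ = np.linalg.lstsq(X, x[p:], rcond=None)
    return a

def declip_forward(x, clipped, a):
    """Replace samples flagged in `clipped` by the AR forward
    prediction from the preceding (already restored) samples."""
    y = x.astype(float).copy()
    p = len(a)
    for t in np.flatnonzero(clipped):      # ascending, so runs chain up
        if t >= p:
            y[t] = a @ y[t - p:t][::-1]    # x[t] ~ a_1 x[t-1] + ... + a_p x[t-p]
    return y
```

On a signal that exactly satisfies an AR(2) recursion, such as a sinusoid, this restoration is exact up to floating-point error.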
In liner shipping, Ronen looked at the effect of oil price on the trade-off between reducing sailing speed and increasing fleet size for container ships, under the assumption that ship speeds are constant for the entire voyage rather than varying per leg. Løfstedt et al. presented a set of benchmarks for liner shipping network design problems, with fuel costs taken into account. Meng and Wang optimised the service frequency, fleet deployment plan and sailing speeds for a liner service route, including variable sailing speeds on different legs of the route. They used a Mixed Integer Non-Linear Programming (MINLP) model, approximated by a MILP model to give a lower bound. Branch-and-bound was then used to find a solution to the MINLP within a given error tolerance. The Liner Shipping Fleet Repositioning Problem [Tierney et al., 2012a], discussed in more detail in Section 2.3.4, also involves decisions on the ship speed in each leg of the voyage. Notteboom and Vernimmen presented an overview of how liner service operators have adapted their services to high fuel prices in practice: reducing speed; adding new ships to maintain service frequency; assigning the largest vessels to the longest routes with the largest ratio of travel time to time spent in ports; and adapting the number of port calls on each route based on a trade-off between demand and cost.
Results in Figure 2 indicate a fading of the effects of fiscal policy over time, much more evident for net taxes than for expenditure. Such a pattern corroborates the common belief that the effectiveness of fiscal policy in the U.S. has lost strength in recent decades, but puts almost all the burden for this on net taxes, not on government expenditure. Further, although for net taxes there is evidence of a sizeable one-off break in the mid-seventies, in general the responses evolve in a way that is well described by the gradual-change hypothesis. Further still, in spite of the observed time variation, the multipliers keep conventional signs and reasonable sizes throughout. Hall (2009) summarizes the evidence on spending multipliers coming from regressions and VARs (SVAR and event-study approaches) as lying in the interval from 0.5 to 1.0. The figures we get broadly conform to this interval. They are only marginally above it in the initial years and slightly below toward the end of the period. Evidence on net tax multipliers is much scarcer, but values from −2.0 to −1.0 are in the usual range as well.
The term in β improves the rate of convergence to the equilibrium state: the farther the state is from equilibrium, the faster the convergence. However, terminal sliding mode control may not be optimal in convergence time, mainly because the convergence rate of the nonlinear sliding mode of formula (1) is slower than that of the linear sliding mode (p = q) when the system state is far from equilibrium. To this end, a new global fast terminal sliding mode has been proposed.
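The convergence comparison can be checked numerically on the scalar reaching dynamics. The sketch below is illustrative (the coefficients and exponent are placeholders, not those of formula (1)): x' = -αx - β sign(x)|x|^(q/p), where α = 0 gives the pure terminal mode, β = 0 the pure linear mode, and both together the global fast terminal mode.

```python
import numpy as np

def settle_time(alpha, beta, r, x0=5.0, dt=1e-3, tol=1e-3, t_max=50.0):
    """Euler-integrated time for x' = -alpha*x - beta*sign(x)*|x|**r
    to reach |x| < tol, with r = q/p < 1 the terminal exponent."""
    x, t = x0, 0.0
    while abs(x) > tol and t < t_max:
        x -= dt * (alpha * x + beta * np.sign(x) * abs(x) ** r)
        t += dt
    return t
```

Starting far from equilibrium (x0 = 5), the combined mode settles faster than either pure mode; starting near equilibrium (x0 = 0.5), the terminal term alone already beats the linear mode, which only converges exponentially.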
in the parameters makes these models highly nonlinear and possibly non-Gaussian, so that computationally intensive simulation-based methods are typically required for estimation. In contrast, when the parameters are driven by the score, the model remains Gaussian conditional on past data. In this case, Delle Monache et al. (2015) develop a set of recursions that, running in parallel with the standard Kalman filter, allow the evaluation of the score vector at each point in time. Once the score is known, the model parameters can be updated. The likelihood function, which remains Gaussian, can then be evaluated by means of the Kalman filter and maximized through standard procedures. A second stream of the econometric literature related to our work deals with dynamic factor models; see for example Giannone et al. (2008) and Camacho and Perez-Quiros (2010). Within this branch of the literature, our paper is close to the studies that extend traditional dynamic factor models to nonlinear settings, like those by Del Negro and Otrok (2008), Mumtaz and Surico (2012) and Marcellino et al. (2013). There are a number of differences between our method and those just mentioned. The most important one is that all these papers adopt a Bayesian standpoint and rely on computationally intensive Bayesian methods to estimate the model parameters. In our setup, estimation can be carried out with straightforward maximization of the likelihood function, with some advantages in terms of computational simplicity.
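The key device, evaluating the Gaussian likelihood through the Kalman filter's prediction-error decomposition, can be illustrated on the simplest possible case. This scalar local-level filter is a textbook sketch, not the recursions of Delle Monache et al. (2015):

```python
import numpy as np

def kalman_loglik(y, q, r, a0=0.0, p0=1e4):
    """Gaussian log-likelihood of a local-level model
        y_t = a_t + e_t,   a_t = a_{t-1} + u_t,
    with e ~ N(0, r) and u ~ N(0, q), via the prediction-error
    decomposition of the Kalman filter; maximizing this in (q, r)
    is the kind of standard likelihood maximization referred to above."""
    a, p, ll = a0, p0, 0.0
    for yt in y:
        f = p + r                       # prediction-error variance
        v = yt - a                      # one-step prediction error
        ll += -0.5 * (np.log(2 * np.pi * f) + v**2 / f)
        k = p / f                       # Kalman gain
        a = a + k * v                   # filtered state
        p = p * (1 - k) + q             # predicted variance for t+1
    return ll
```

Because the log-likelihood is an explicit by-product of the filter, it can be handed directly to any standard numerical optimizer, which is the computational advantage claimed over simulation-based Bayesian estimation.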