Agents are individual entities that act independently of one another. Agent-based models consist of multiple agents (possibly of different kinds) interacting with each other when necessary. When the model runs, the agents behave according to their rules, generating a dynamic system. For example, a model of sheep predation by wolves may consist of three kinds of agents: wolves, sheep, and grass. These agents act according to different rules depending on their type: wolves try to catch sheep to eat, sheep move around the landscape eating grass wherever it grows, and grass does nothing but regrow after a period of time. As a consequence of the agents' behavior, wolves or sheep may become extinct, or the numbers of sheep and wolves may remain roughly constant (despite fluctuations). That is the point of agent-based models: simulating systems that involve many agents (even hundreds) and observing the results of their interaction. NetLogo lets us create agent-based models: we can create hundreds or thousands of agents, give them instructions, and then watch the collective behavior that results from all of them operating independently at the same time. This allows us to explore the connection between individual actions and the macro-level patterns that emerge from agent interaction.
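NetLogo ships a Wolf Sheep Predation model implementing exactly this idea. A minimal stand-alone sketch of the same scheme in Python (the class names, energy values, and probabilities here are illustrative assumptions, not NetLogo's actual parameters):

```python
import random

random.seed(42)

GRASS_REGROW = 0.3  # probability a bare patch regrows each tick (assumed)

class Agent:
    """A wolf or a sheep; each carries an energy budget."""
    def __init__(self, energy):
        self.energy = energy

def tick(wolves, sheep, grass):
    """One step: every agent acts by its own rule; returns the new populations."""
    # Grass: bare patches regrow with fixed probability.
    grass = [g or (random.random() < GRASS_REGROW) for g in grass]
    # Sheep: each sheep tries a random patch and gains energy if it is grassy.
    for s in sheep:
        i = random.randrange(len(grass))
        if grass[i]:
            grass[i] = False
            s.energy += 4
        s.energy -= 1
    sheep = [s for s in sheep if s.energy > 0]
    # Wolves: each wolf catches a sheep with probability rising in sheep density.
    for w in wolves:
        if sheep and random.random() < min(1.0, len(sheep) / 100):
            prey = sheep.pop(random.randrange(len(sheep)))
            w.energy += prey.energy
        w.energy -= 1
    wolves = [w for w in wolves if w.energy > 0]
    return wolves, sheep, grass

wolves = [Agent(10) for _ in range(5)]
sheep = [Agent(5) for _ in range(50)]
grass = [True] * 200
for _ in range(20):
    wolves, sheep, grass = tick(wolves, sheep, grass)
print(len(wolves), len(sheep), sum(grass))
```

Running the loop longer and plotting the three counts per tick reproduces the qualitative outcomes described above: extinction, or fluctuating coexistence.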


imposed. However, the posterior mean probability of having an outlier at 1975Q1 presented a higher value than the corresponding posterior mean probability of having a level shift in 1975Q1. We concluded that there is stronger evidence of an outlier at that point in time than of a level shift. Moreover, Atkinson, Koopman, and Shephard (1997) mention that there might be an outlier in 1975Q1. In the sub-sample from 1984Q1 to 1984Q4, we detect two other outliers. As an empirical application of the detection of shocks for the BSM, we model the quarterly number of marriages in the UK, from 1958Q1 to 1984Q4. For the period of 1965Q1 to 1970Q4, this data set was analyzed in West and Harrison (1997) and Penzer (1998). This data set is modeled using a BSM with deterministic slope. We establish the existence of two seasonal shifts: an upward seasonal shift in 1962Q1 and a downward seasonal shift in 1969Q1, which agrees with the findings in West and Harrison (1997) and Penzer (1998). For the period of 1971Q1 to 1984Q4, we detect a downward level shift in 1973Q2. The results we obtained in these empirical applications are overall in agreement with results from different methodologies. The advantage of our method is that our results were obtained with one run of the Gibbs sampler, which delivered posterior samples for all the variables of the model.


The remainder of this thesis is organized as follows: different dissimilarity measures are introduced in Section 2, among which there are some measures that are frequently used in practice, but are not aligned with our clustering objective. The main features of **time** **series** clustering are discussed in Section 3, with an emphases on hierarchical clustering and two popular measures for assessing the quality of **obtained** clusters. In Section 4, we define the minimum spanning trees in the context of community detection and formulate a permutation test to verify whether the structure of the minimum spanning tree is a result of random effects. Section 5 is devoted to numerical experiments with two phases: first, we select an appropriate dissimilarity measure, which results in better performance on synthetically generated **time** **series** and second, we use the relevant dissimilarity measure in order to form clusters with stock prices of 594 US-**based** companies. The results of the experiments are concluded in Section 6 with some discussion on potential future work.
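A minimal sketch of such a permutation test on the minimum spanning tree (the test statistic, mean MST edge weight, and the row-wise time-shuffling scheme are illustrative assumptions, not necessarily the thesis's exact design):

```python
import numpy as np
from scipy.spatial.distance import pdist, squareform
from scipy.sparse.csgraph import minimum_spanning_tree

rng = np.random.default_rng(0)

def mst_stat(series):
    """Mean edge weight of the MST built on pairwise Euclidean distances."""
    d = squareform(pdist(series))
    mst = minimum_spanning_tree(d)
    return mst.data.mean()

def permutation_pvalue(series, n_perm=200):
    """Permute each series independently in time to destroy cross-dependence;
    a small p-value suggests the MST structure is not a random effect."""
    observed = mst_stat(series)
    count = 0
    for _ in range(n_perm):
        shuffled = np.array([rng.permutation(row) for row in series])
        if mst_stat(shuffled) <= observed:
            count += 1
    return (count + 1) / (n_perm + 1)

# Ten toy "return" series sharing a common component, hence correlated
base = rng.normal(size=100)
series = np.vstack([base + 0.3 * rng.normal(size=100) for _ in range(10)])
print(permutation_pvalue(series))
```

Because the correlated series sit close together, the observed MST is much lighter than its permuted counterparts, and the p-value comes out small.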


In the third chapter, a Monte Carlo experiment is conducted to investigate the relative out-of-sample predictive ability of a class of parsimonious conditional variance models when either a structural break or long-run dependence is allowed in the conditional variance process. The results of our experiment reveal some evidence supporting the discussions in the existing literature. If the conditional variance process is stationary short or long memory in the absence of a structural break, forecasting models that are able to capture the properties of the true process are more favourable than any misspecified model. When the true short memory process is contaminated by a structural break, detection of the break may play an important role in choosing a proper window size for short-run forecasting. Further, we have found that spurious long memory may strongly dominate the true structural break in long-run forecasting when the true short memory process is highly persistent. However, it has not been easy to identify any consistent features or patterns in forecast superiority among the individual forecasting models when the structural break is located near the end of the in-sample period. This might be due to the relatively small number of observations used for estimation. Nevertheless, it can be seen that the long memory-based forecasts are generally better off than the competing short memory-based forecasts in the presence of the most recent break. On the other hand, two forecast combinations are very favourable in the presence of a structural break, regardless of the forecast horizon and the level of persistence. A number of extensions would be possible, based on the limitations of this study. For example, our simulation design can be naturally extended to account for more general non-stationary volatility processes which are subject to structural break and long memory simultaneously, or to other non-linearities.
Moreover, it would be more informative in a general sense to additionally take into account a wider variety of conditional volatility models, such as stochastic volatility models and Markov-switching models.
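The appeal of forecast combination under a break can be seen in a toy version of such an experiment. The sketch below (the break design, window lengths, and equal weighting are assumptions for illustration, not the chapter's actual setup) combines a slow long-window and a fast short-window variance forecast:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy true variance path with a structural break at mid-sample (assumed)
n = 500
true_var = np.concatenate([np.full(250, 1.0), np.full(250, 2.0)])
returns = rng.normal(0, np.sqrt(true_var))

def rolling_var(x, w):
    """Naive variance forecast: sample variance over the last w observations."""
    out = np.full(len(x), np.nan)
    for t in range(w, len(x)):
        out[t] = x[t - w:t].var()
    return out

long_f = rolling_var(returns, 100)   # smooth, but slow to adapt to the break
short_f = rolling_var(returns, 20)   # noisy, but fast to adapt
combo = 0.5 * (long_f + short_f)     # equal-weight forecast combination

def mse(f):
    m = ~np.isnan(f)
    return np.mean((f[m] - true_var[m]) ** 2)

print(mse(long_f), mse(short_f), mse(combo))
```

By convexity of squared loss, the combination's pointwise error is bounded by the average of the two individual errors, which is why combinations hedge well against an unknown break location.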


Before discussing a suitable solution to this practical problem, we want to look at some stylized facts: Figure 2.5 shows the growth rates of private households' interest–bearing financial assets in Germany in the years 1951 to 1998. In the present model, the accumulation of household savings can be achieved exclusively through financial assets—real savings are not considered. Consequently, we can compare the data in figure 2.5 with savings behavior on Agent Island. In addition, the nominal government bond interest rate is illustrated. The interesting facts displayed in this figure are that in the early years of the development of the German economy after World War II, (i) the growth rates of financial assets were substantially higher than the interest rates, and (ii) over the years they fell towards the level of nominal interest rates. Thirty years later, in the early 1980s, both values settled in the same range. Since that time the average growth rate of financial assets has been relatively stable and has coincided with the average nominal interest rates. Against the background of the present study, the interpretation of these facts is straightforward: (i) If nothing but the complete interest earnings are saved every year by households, the growth rate of their financial assets coincides with the nominal interest rate. (ii) If, in addition, positive (or negative) savings are conducted out of other income sources, the growth rate of financial assets lies somewhat above (or below) the nominal interest rate. (iii) The crucial point is that reinvested interest earnings are withdrawn from the ‘income circuit’ (Reich, 1998). If they are withdrawn in such a way, the dynamics of inflation rates are not affected by those reinvested interest incomes (Reich, 1998).
The reinvestment of interest incomes characterizes the mechanism of a savings book, which is a quite popular form of savings among German households—especially during that time. Consequently, the above-mentioned ‘vicious circle’ cannot occur on Agent Island provided that interest incomes are reinvested by household agents in each period (as in a savings book). Conversely, this can produce deflationary effects, and it can result in exponentially growing financial assets (as identified in reality as well).
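Points (i) and (ii) are simple compound-interest arithmetic and can be checked directly (the rate, initial assets, and extra saving below are assumed numbers for illustration):

```python
def asset_path(initial, rate, extra_saving, years):
    """Assets compound at `rate`; `extra_saving` is saved from other income."""
    assets = [initial]
    for _ in range(years):
        assets.append(assets[-1] * (1 + rate) + extra_saving)
    return assets

r = 0.05
pure = asset_path(100.0, r, 0.0, 30)   # only interest earnings reinvested
plus = asset_path(100.0, r, 2.0, 30)   # additional savings on top

growth_pure = pure[-1] / pure[-2] - 1  # exactly r, as in point (i)
growth_plus = plus[-1] / plus[-2] - 1  # above r, as in point (ii)
print(growth_pure, growth_plus)
```

With extra savings the growth rate exceeds the interest rate but converges back toward it as the asset stock grows, matching the historical pattern described for Germany.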


Our extended model has a tendency to stabilize itself in the long term if the fundamental trading rules outweigh the technical trading method, thanks to the introduction of FTTs. This could be exploited when bubbles and crashes occur in financial markets. Asset prices would be stabilized because their value targets are near the fundamental value, and volatility would also be minimized. Introducing a low FTT rate lets the asset price rise into a bubble while technical traders take over the market. However, prices start to fall after some time as the technical strategy grows. At that moment, volatility is minimized and the market stabilizes. Different results are achieved with a higher FTT rate. If the FTT and its consequent costs are too high, the financial system destabilizes and the price grows without limit.



Recently, many scholars have applied methods from complexity science to the analysis of time series, and have discussed the relationship between the dynamic characteristics of time series and complex network topology, which is especially suitable for research on complex systems for which a precise mathematical model cannot be established. The internal variation laws and evolution mechanisms of complex systems in various financial markets are obtained by analyzing the time series.
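One widely used mapping from a time series to a complex network is the natural visibility graph of Lacasa et al.; it is given here as a representative example of the approach, not necessarily the method this paper uses. A minimal sketch:

```python
def visibility_edges(y):
    """Natural visibility graph: nodes are time points; i and j are linked
    if every point between them lies strictly below the straight line
    from (i, y[i]) to (j, y[j])."""
    n = len(y)
    edges = set()
    for i in range(n):
        for j in range(i + 1, n):
            visible = all(
                y[k] < y[i] + (y[j] - y[i]) * (k - i) / (j - i)
                for k in range(i + 1, j)
            )
            if visible:
                edges.add((i, j))
    return edges

series = [3, 1, 4, 1, 5, 9, 2, 6]
g = visibility_edges(series)
print(sorted(g))
```

The degree distribution, clustering, and other topology of the resulting graph then serve as proxies for the dynamic characteristics of the original series.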


We conclude that in the case of dependent noise, only the asymptotic variance of the limiting process is affected, and the fastest convergence rate is N^(1/5). To deal with the case where the microstructure noise interacts with the sampling frequency, and with the empirical evidence that microstructure noise is small relative to the integrated volatility, we also consider the case of shrinking noise. Under shrinking noise, we can achieve a convergence rate faster than N^(1/5); the fastest rate depends on the rate of shrinkage of the noise. We also conduct a simulation study to examine the finite-sample properties of the realized kernel estimator under time endogeneity and the presence of microstructure noise. Our results confirm the finding that the realized kernel estimator can be adopted in a model with both time endogeneity and dependent noise. Another surprising finding is that, in the case of a small noise-to-signal ratio, the realized kernel estimator based on shrinking noise performs better in finite samples even if the data generating process admits non-shrinking noise. We also make the central limit results feasible and construct confidence bounds for the proposed estimator. Last, we apply our methodology to millisecond time-stamped trade data and illustrate that the realized kernel estimator can be adopted for data sampled at the highest possible frequency.
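For readers unfamiliar with the estimator, a bare-bones realized kernel is realized variance plus kernel-weighted autocovariance corrections. The sketch below uses a Parzen weight function; the kernel choice, bandwidth H, and noise sizes are illustrative assumptions, not this chapter's exact specification:

```python
import numpy as np

def parzen(x):
    """Parzen weight function on [0, 1]."""
    if x <= 0.5:
        return 1 - 6 * x**2 + 6 * x**3
    return 2 * (1 - x) ** 3

def realized_kernel(returns, H):
    """Realized variance plus weighted autocovariance corrections up to lag H,
    which offset the bias from microstructure noise (sketch only)."""
    r = np.asarray(returns)
    acc = np.sum(r * r)  # gamma_0: plain realized variance
    for h in range(1, H + 1):
        gamma_h = np.sum(r[h:] * r[:-h])
        acc += 2 * parzen((h - 1) / H) * gamma_h
    return acc

rng = np.random.default_rng(2)
n = 1000
efficient = rng.normal(0, 0.01, n)      # efficient price increments (IV = 0.1)
noise = rng.normal(0, 0.001, n + 1)     # i.i.d. microstructure noise
observed = efficient + np.diff(noise)   # observed returns = signal + noise diff
print(realized_kernel(observed, H=20))
```

The negative first-order autocovariance induced by the noise differences is what the weighted lag terms absorb, pulling the estimate back toward the integrated volatility.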


The simulation is run up to the end of the last quarter using historical data; from then on, the interest rates are continuously increased. From this point on, the output time series (SI Investment PR and SI Private PR) are generated by the model. The chart shows the outcome of the simulation for both target indices, together with the yield of the Swiss Bond Index as the input variable.


Ten financial time series, such as exchange rate series and the oil price, are used to test the performance of the various networks. In these extensive experiments, our primary interest is the profitable value contained in the predictions of all neural network models, and hence their performance during generalisation. The work focuses on how the network generates profits. For this reason, the neural network structure which provides the highest percentage of annualised return (AR) on out-of-sample data is considered to be the best. A new training algorithm, inspired by the SMIA and using weight decay, was utilised with the self-organised neural network; the simulation results indicated a significant improvement of the proposed training algorithm over the standard network.
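Annualised return as a model-selection criterion can be computed as follows (the directional-trading rule and toy return process are assumptions for illustration; the thesis's trading rule may differ):

```python
import numpy as np

def annualised_return(period_returns, periods_per_year=252):
    """Annualised return implied by a sequence of per-period strategy returns."""
    total = np.prod(1 + np.asarray(period_returns))
    years = len(period_returns) / periods_per_year
    return total ** (1 / years) - 1

# Toy out-of-sample evaluation: trade in the direction of a noisy forecast
rng = np.random.default_rng(3)
market = rng.normal(0.0002, 0.01, 252)                # daily market returns
signal = np.sign(market + rng.normal(0, 0.02, 252))   # noisy directional forecast
strategy = signal * market                            # long/short each day
print(annualised_return(strategy))
```

Ranking network structures by this figure, rather than by squared forecast error, is what ties model selection to profitability rather than statistical fit.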


In order to compare our results with those of Figure 4.8, we have scaled our system with the same input concentrations for Rp and RNA Polymerase [13]. The rate constants of Figure 3.1, as obtained from [18], were also modeled such that all reactions were scaled against the slowest reaction in the set. With the kinetic constants scaled as in [23] and [13], and the same input concentrations as in [13], we initialize RecA agents after just 0.5 seconds of simulation time (Figure 4.9). For this simulation, we use the parameter values given in Table 4.2. We see that the general shapes of the curves from 0-50 seconds in Figures 4.8 and 4.9 seem to be similar. However, several factors differ between the two simulation models, and we are still experimenting with the correct values for each model. Thus we are not yet able to compare the two models in enough detail to determine definitively how well they match.



The indirect inference (II) procedure is a simulation-based estimation procedure and can be understood as a generalization of the simulated method of moments approach of Duffie and Singleton (1993). It was first introduced by Smith (1993) and given its name by Gouriéroux, Monfort, and Renault (1993). It is also closely related to the method proposed by Gallant and Tauchen (1996). The method was originally proposed to deal with situations where the moments or the likelihood function of the true model are difficult to handle (so that traditional methods such as GMM and ML are difficult to implement), but the true model is amenable to data simulation. Because many continuous-time models are easy to simulate but present difficulties in the analytic derivation of moment functions and likelihoods, the II procedure has some convenient advantages for working with continuous-time models in finance.
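The core of the procedure can be sketched in a few lines: fit an auxiliary model to the observed data, then choose the structural parameter whose simulated data reproduce the same auxiliary estimate. The structural model below is a simple AR(1) and the auxiliary statistic is a lag-1 OLS slope; both are toy assumptions chosen so the sketch is self-contained:

```python
import numpy as np

def simulate_ar1(phi, n, seed):
    """Structural model (toy): AR(1), trivially easy to simulate."""
    r = np.random.default_rng(seed)
    x = np.zeros(n)
    for t in range(1, n):
        x[t] = phi * x[t - 1] + r.normal()
    return x

def auxiliary(x):
    """Auxiliary model estimate: lag-1 OLS slope."""
    return np.dot(x[1:], x[:-1]) / np.dot(x[:-1], x[:-1])

observed = simulate_ar1(0.7, 2000, seed=10)   # pretend this is the data
beta_obs = auxiliary(observed)

# Indirect inference: pick the structural phi whose simulated data reproduce
# the observed auxiliary estimate (grid search; fixed seed = common random numbers).
grid = np.linspace(0.0, 0.95, 96)
losses = [(auxiliary(simulate_ar1(p, 2000, seed=11)) - beta_obs) ** 2 for p in grid]
phi_hat = grid[int(np.argmin(losses))]
print(phi_hat)
```

In realistic applications the auxiliary model is a tractable discrete-time approximation of an intractable continuous-time model, and the grid search is replaced by a proper minimization with a weighting matrix.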


Since the first publication of The Intelligent Investor, the market has failed to reach a steady value. So either the irrational traders are in infinite supply, being driven from the market only to be immediately replaced by more irrational traders, or there is more complexity to the market that overwhelms the impact of fundamentalist traders, and perhaps the principles of fundamental value altogether. This empirical observation, coupled with applications of dynamic optimization techniques to derive financial models (e.g., the Black-Scholes options pricing formula), led economists to consider the impact of technical traders on the market in greater depth [29]. Frankel and Froot claim that financial bubbles cause a natural Bayesian response to inferior forecasting techniques, shifting technical traders from short-term forecasts to longer-term forecasts. They naturally raised the question of which type of forecaster was dominating the market at a given time within the technical trader community. Additionally, studies began to reveal that some degree of predictability exists in the market [14]. As such, utilizing technical analysis seems less like a fool's errand than value investors claim it to be. In 1991, Fama revisited the efficient market hypothesis and agreed that there was some degree of predictability in the market [24]. However, transaction costs associated with entering and exiting the market at high frequency, as well as the increased risk often associated with predictability, led him to conclude that predictability alone does not negate the efficient market hypothesis [24]. That is, technical traders may indeed accurately predict market changes, but in the long run they will fare no better than a fundamental trader, as their profits are diminished by transaction costs and increased risks.
Setting aside these intriguing debates that are likely to remain unresolved for decades to come, we examine the simpler question of whether select combinations of these behaviors can produce simulated price paths exhibiting the same properties we observe in the real market.


The results show a comparison between the proposed models and classical models for the long term, based on selected criteria of forecasting accuracy for the simulated models. The distributions of the different forecasting measures bias, RMSE, and MAPE are estimated. The results show that the Chen model is preferable for the long term over all the other models, because both forecasting measures have smaller values than for the other models. In addition, the ARIMA model performs better than the other models in terms of bias. Furthermore, the bias measure for the Yu model at the smallest variance error equals 0.05. This result indicates that the Yu model is more efficient than the other models for the long term and for all parameters.
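The three accuracy measures used here have standard definitions; a minimal sketch with toy numbers (the sample values are assumptions for illustration):

```python
import numpy as np

def bias(actual, forecast):
    """Mean forecast error; sign shows systematic over-/under-forecasting."""
    return np.mean(np.asarray(forecast) - np.asarray(actual))

def rmse(actual, forecast):
    """Root mean squared error; penalizes large misses."""
    return np.sqrt(np.mean((np.asarray(forecast) - np.asarray(actual)) ** 2))

def mape(actual, forecast):
    """Mean absolute percentage error (assumes no zeros in `actual`)."""
    a, f = np.asarray(actual, float), np.asarray(forecast, float)
    return np.mean(np.abs((f - a) / a)) * 100

actual = [100, 102, 105, 103, 108]
forecast = [101, 101, 106, 104, 107]
print(bias(actual, forecast), rmse(actual, forecast), mape(actual, forecast))
```

Comparing models on all three jointly, as done above, guards against a model that is unbiased on average but erratic (good bias, poor RMSE) or vice versa.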

Acknowledgement This is a pre-print of a contribution published in Cyclostationarity: Theory and Methods – IV; Contributions to the 10th Workshop on Cyclostationary Systems and Their Applications, February 2017, Grodek, Poland; Editors: Fakher Chaari, Jacek Leskow, Radoslaw Zimroz, Agnieszka Wylomanska, Anna Dudek; part of the Applied Condition Monitoring book series (ACM, volume 16) published by Springer, Cham. The definitive authenticated version is available online via https://doi.org/10.1007/978-3-030-22529-2_3. This research was partially supported by the PL-Grid Infrastructure. Anna Dudek has received funding from the European Union's Horizon 2020 research and innovation programme under the Marie Skłodowska-Curie grant agreement No 655394. The authors express their sincere gratitude to Prof. Patrice Bertail for his comments and ideas.


temporal/spatial wave spectra (Fig. 3 bottom right, Fig. 7 below). Superimposed on this cascading is a drift of the input (fundamental) scale to larger wavelengths, which is associated with the increase of the linearly excited wavelength with increasing PPPS diameter in the tailward direction (see above). Nonlinear saturation occurs due to energy transfer to smaller scales. Note that the transition to turbulence follows a classical route of wave cascading. Within a fluid picture, this process can be described as follows. At the boundary of the PPPS, Alfvénic perturbations are excited by the velocity anisotropy due to the field-aligned beams. In the center of the PPPS and in the wake of the plasmoid, the plasma beta is 1, so that compressibility must be taken into account. As a result, the Alfvén waves propagating along x couple to, and decay into, magnetosonic modes in the field reversal and in the wake of the plasmoid, where the (weak) magnetic field is directed along z. This process leads to evolved MHD plasma turbulence in the sense of Tsytovich (1977). In our simulation there is no dissipation scale because particle collisions are absent, so the power law extends to the smallest resolved scales.

Pedestrian traffic behaviors are characterized by randomness and flexibility, and exhibit complexity and variability across scenes. This makes it considerably difficult to gather or extract data on pedestrian behavior, so a growing number of researchers use computer simulation to study pedestrian traffic behavior. Multi-agent-based modeling and simulation is one of the most effective approaches. Batty et al. (1999) applied agent theories and methods to design a cellular automata model, and developed a simulation system based on
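The cellular-automaton style of pedestrian model mentioned here can be sketched in its simplest form: agents on a discrete corridor that advance one cell when the cell ahead is free. This one-dimensional, periodic version is an illustrative toy, far simpler than Batty et al.'s actual system:

```python
import random

random.seed(5)

def step(corridor):
    """One parallel update of a toy 1-D pedestrian CA: each agent (True)
    moves one cell right if that cell was free; the corridor wraps around."""
    n = len(corridor)
    nxt = [False] * n
    for i in range(n):
        if corridor[i]:
            j = (i + 1) % n
            if not corridor[j]:
                nxt[j] = True   # move forward
            else:
                nxt[i] = True   # blocked: stay put
    return nxt

corridor = [random.random() < 0.3 for _ in range(20)]
count = sum(corridor)           # number of pedestrians (conserved by step)
for _ in range(10):
    corridor = step(corridor)
print(sum(corridor))
```

Even this toy reproduces the basic jamming effect: above a critical density, blocked agents form stop-and-go clusters instead of free flow.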

The agent metaphor has proven to be a promising choice for building complex and adaptive software applications, because it addresses key issues in making complexity manageable already at the conceptual level. Agent technology is a rapidly developing area of research with the potential to stimulate and contribute to a broad variety of scientific fields [20]. Multi-agent models offer an alternative interpretation of classical traffic flow models as well as the development of more general and effective frameworks to model driver behaviour at a cognitive level [21]. MAS models have received increasing attention in traffic management, signal control, and route guidance. They offer advantages such as faster response, increased flexibility, robustness, resource sharing, graceful degradation, and better adaptability for integrating pre-existing and stand-alone systems.
