This paper presents the extension and application of three predictive models to time series in the financial sector, specifically data from 75 companies listed on the Mexican stock exchange. A tool that raises awareness of the potential benefits of using formal financial services would encourage greater participation in the formal system. The three statistical models used for financial time-series prediction are a regression model, a multi-layer perceptron with a linear activation function at the output, and a hidden Markov model. Experiments were conducted by finding the optimal set of parameters for each predictive model while applying it to the 75 companies. Theory, issues, challenges, and results related to the application of artificial predicting systems to financial time series, together with the performance of the methods, are presented.


SEMIFAR includes the ARIMA(p, m, 0) model and the fractional autoregressive process (Hosking, 1981; Granger and Joyeux, 1980). However, the assumption of white noise on ε in SEMIFAR ignores the possible heteroskedasticity of financial time series. Financial time series often exhibit conditional heteroskedasticity, i.e. the volatility (or conditional variance) of a financial process often depends on past information while the mean may not. Well-known models for conditional heteroskedasticity are the autoregressive conditional heteroskedastic (ARCH; Engle, 1982) and generalized ARCH (GARCH; Bollerslev, 1986) models. Since then, many extensions of the ARCH and GARCH models have been introduced into the literature. Engle, Lilien and Robins (1987) extended the ARCH model to the ARCH-in-mean (ARCH-M) model, where the conditional standard deviation also affects the mean of the observations. The ARCH-M model can be analogously generalized to a GARCH-M model. Another well-known extension of the GARCH model is the exponential GARCH (EGARCH) introduced by Nelson (1991), where the GARCH property is defined for the log-transformation of the volatility. Ling and Li (1997) introduced a FARIMA-GARCH model to capture long memory in the mean and conditional heteroskedasticity in the volatility.
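To make the conditional-heteroskedasticity idea concrete, here is a minimal simulation sketch of a GARCH(1,1) process; the parameter values (omega, alpha, beta) are illustrative, not taken from any of the cited papers:

```python
import numpy as np

def simulate_garch11(n, omega=0.1, alpha=0.1, beta=0.8, seed=0):
    """Simulate a GARCH(1,1) process:
    sigma2[t] = omega + alpha * e[t-1]**2 + beta * sigma2[t-1]."""
    rng = np.random.default_rng(seed)
    e = np.zeros(n)
    sigma2 = np.zeros(n)
    sigma2[0] = omega / (1.0 - alpha - beta)  # unconditional variance
    for t in range(1, n):
        sigma2[t] = omega + alpha * e[t - 1] ** 2 + beta * sigma2[t - 1]
        e[t] = np.sqrt(sigma2[t]) * rng.standard_normal()
    return e, sigma2

returns, cond_var = simulate_garch11(5000)
```

Because alpha + beta < 1, the process is covariance-stationary; the volatility clusters visible in `cond_var` are exactly the conditional heteroskedasticity that white noise in SEMIFAR cannot reproduce.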


performs well in financial time-series forecasting is imperative. To provide more accurate results, studies on time-series forecasting and modeling widely use combinations of different models and metaheuristic optimization approaches. Considerable research has adopted optimization methods such as the genetic algorithm (GA; Aghay Kaboli et al. 2016a, 2016b; Kaboli et al. 2016), particle swarm optimization (PSO; Aghay Kaboli et al. 2016a, 2016b), and gene expression programming (GEP; Aghay Kaboli et al. 2017a, 2017b; Aghay Kaboli et al. 2016a, 2016b). Kaboli et al. (2016) proposed the artificial cooperative search (ACS) algorithm to forecast long-term electricity consumption and numerically confirmed its effectiveness against other metaheuristic algorithms, including the GA, PSO, and cuckoo search. Modiri-Delshad et al. (2016) presented a backtracking search algorithm (BSA) and verified the reliability of the method in solving and modeling the economic dispatch (ED) problem. Aghay Kaboli et al. (2016a, 2016b) estimated electricity demand using GEP, a genetic-based, expression-driven approach, and showed that GEP outperforms the multilayer perceptron (MLP) neural network and multiple linear regression models. Recent studies on time-series forecasting largely focus on combination methods, given the distinguishing features of hybrid models (e.g., the unique modeling capability of each model), the drawbacks of single models, and the resulting improvements in forecasting accuracy. The key concept of combination theory is employing the unique merits of individual models to extract different data patterns. Importantly, the literature confirms that no individual model can universally capture data-generation processes.
In other words, no single model can fully capture all characteristics of the underlying data; combining different models, or using hybrid ones, therefore helps analyze complex patterns more accurately and completely. Further, combining various models simplifies the selection of a model appropriate to the different forms of relationships in the data and reduces the risk of choosing an inefficient one.
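As a minimal sketch of the combination idea, not any specific hybrid from the cited studies, individual forecasts can be weighted inversely to their in-sample mean squared error, so that more accurate models contribute more to the combined forecast; the model names and numbers below are hypothetical:

```python
import numpy as np

def combine_forecasts(y_true, preds):
    """Weight each model's forecast inversely to its mean squared error."""
    preds = np.asarray(preds, dtype=float)
    mse = ((preds - np.asarray(y_true)) ** 2).mean(axis=1)
    weights = (1.0 / mse) / (1.0 / mse).sum()
    return weights, weights @ preds

y = np.array([1.0, 2.0, 3.0, 4.0])
model_a = np.array([1.1, 2.1, 2.9, 4.2])  # more accurate model
model_b = np.array([0.0, 3.0, 2.0, 5.0])  # less accurate model
weights, combined = combine_forecasts(y, [model_a, model_b])
```

The inverse-MSE rule is one of the simplest combination schemes; it exploits each model's track record without requiring the models themselves to be re-estimated.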


Traditional areas in which ANNs are known to excel are pattern recognition, pattern matching, and mathematical function approximation (nonlinear regression). However, they suffer from several well-known limitations. They can become stuck in local rather than global minima, and in practice can take unacceptably long to converge. Of particular concern, especially from the perspective of financial time-series prediction, is their inability to handle non-smooth, discontinuous training data and complex mappings (associations). Another limitation of ANNs is their "black box" nature: the explanations (reasons) for their decisions are not immediately obvious, unlike with some other techniques such as decision trees.


Data clustering is one of the most popular unsupervised machine learning approaches. Clustering can help identify patterns among similar data and support solutions to many commercial problems: in a taxi-booking application, customer data can be clustered to match supply with demand; in e-commerce, to detect fraud patterns in transactions; in a dating application, to group customers; and so on. Each clustering method carries its own requirements, such as basic assumptions about the data, and analyzing data under a wrong assumption yields low-quality outcomes, so we study and compare this type of data in depth. Time-series analysis is used in many prediction tasks based on previously observed values; mixing cluster analysis with time-series data serves the initial purpose of this work, namely a better public understanding of clustering, and we hope later researchers will refer to this work, develop the theory, and apply it to wider issues. In this paper, the focus is on comparing time-series clustering algorithms on financial time-series data, including cryptocurrency prices, currency exchange rates, the Shanghai Stock Exchange 50 (SSE50), and the Stock Exchange of Thailand 50 (SET50). The paper introduces data mining, machine learning, and time-series clustering along with some related methods, laying a theoretical foundation for the formal research of this paper, and analyzes the structure of time-series clustering, which consists of several parts: distance measurement, the time-series prototype, the clustering algorithm, and clustering evaluation. The results show that the hierarchical algorithm is the most efficient for the unequal-length cryptocurrency series and SSE50, while the partitional algorithm is the most efficient for the equal-length exchange-rate series and SET50.
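The pipeline described above (distance measurement, clustering, flat cluster labels) can be sketched on toy data; this is a generic hierarchical-clustering illustration on synthetic sine/cosine series, not the paper's cryptocurrency or SET50 data:

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import pdist

rng = np.random.default_rng(1)
t = np.linspace(0, 2 * np.pi, 50)
# Two synthetic groups of noisy series standing in for two market regimes.
group_a = [np.sin(t) + 0.1 * rng.standard_normal(50) for _ in range(3)]
group_b = [np.cos(t) + 0.1 * rng.standard_normal(50) for _ in range(3)]
series = np.vstack(group_a + group_b)

dist = pdist(series, metric="euclidean")            # distance measurement
tree = linkage(dist, method="average")              # hierarchical clustering
labels = fcluster(tree, t=2, criterion="maxclust")  # flat cluster labels
```

For unequal-length series, such as the cryptocurrency data discussed above, the Euclidean metric would be replaced by an elastic measure such as dynamic time warping.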


Abstract: We propose a wavelet neural network model (neuro-wavelet) for the short-term forecast of stock returns from high-frequency financial data. The proposed hybrid model combines the inherent capability of wavelets and artificial neural networks to capture non-stationary and nonlinear attributes embedded in financial time series. A comparison study was performed on the modeling and predictive power of two traditional econometric models and four different dynamic recurrent neural network architectures. Several statistical measures and tests were applied to the forecasting estimates and standard errors to evaluate the predictive performance of all models. A Jordan network that used as inputs the coefficients of a non-decimated Haar wavelet decomposition of the high and low stock prices consistently showed superior modeling and predictive performance over the other models. The Jordan neuro-wavelet model achieved reasonable forecasting accuracy for one-, three-, and five-step-ahead horizons.
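As a sketch of the wavelet side of such a hybrid, one level of a (circularly extended) non-decimated Haar transform can be written in a few lines; the resulting approximation and detail coefficients would then serve as neural-network inputs. This is a generic illustration, not the paper's exact preprocessing:

```python
import numpy as np

def haar_swt_level1(x):
    """One level of a circular non-decimated Haar transform:
    approximation = scaled pairwise sums, detail = scaled pairwise differences."""
    x = np.asarray(x, dtype=float)
    shifted = np.roll(x, -1)  # circular extension at the boundary
    approx = (x + shifted) / np.sqrt(2)
    detail = (x - shifted) / np.sqrt(2)
    return approx, detail

prices = np.array([10.0, 10.5, 10.2, 11.0, 10.8, 11.3, 11.1, 11.6])
approx, detail = haar_swt_level1(prices)
```

The transform is redundant (no downsampling), so each coefficient series has the same length as the input, and the signal is recovered exactly as (approx + detail) / sqrt(2).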

In the last few decades, the analysis of time series has attracted much attention from statistical and machine learning perspectives [1,2], with a variety of applications in different fields [3,4]. For example, hedge-fund traders and experts in agriculture demand precise models for predicting possible trends and cycles. Although a number of techniques and models have been proposed for analyzing financial time series, there is no universal solution to this specific application because of its inherent randomness. It is also very difficult to determine which approach or model is superior to the others, since many statistical and machine learning approaches are application-oriented.

Linear stochastic models, in particular the class of ARMA models, have been considered a practical tool for financial analysis and forecasting, but they suffer from a number of serious shortcomings for studying financial fluctuations (Potter, 1995). They can only generate realizations with symmetrical cyclical fluctuations and cannot accommodate large shocks, shifting trends, or structural changes. Moreover, exogenous disturbances were superimposed on the usual linear deterministic models to mimic financial time series, often leaving significant features unexamined and unexploited. Alternative answers have therefore been sought in the nonlinear approach. The ARCH processes proposed by Engle (1982) and generalised by Bollerslev (1986) are nonlinear stochastic models that make it possible to grasp dynamics occurring within the data which might otherwise be obscured by systematic noise, time-varying volatility, and non-stationary trends. It follows that these models are currently used for analysing financial time series. Among the "ARCH-type" models, Exponential GARCH, Asymmetric Power ARCH, Threshold GARCH, IGARCH, and FIGARCH are the most popular. They are grounded in the assumption that the data are nonlinear stochastic functions of their past values. Using these models, research on financial data has pointed out widespread stochastic nonlinearity, even though the main effects seem to arise from the variances of the respective distributions. Nevertheless, some studies (Brock et al., 1991; Frank and Stengos, 1988) indicated that generalized ARCH models still leave some evidence of nonlinearity in the data. What this nonlinearity is and how it should be modelled remains an open question. Chaos theory could allow this nonlinearity to be detected, but by means of nonlinear deterministic models.

To assess the statistical validity of the performance of the PSN network, a paired t-test [57] was conducted to determine whether there is any significant difference between the proposed spiking neural network for financial time-series prediction and the MLP and DRPNN, based on the absolute value of the error on out-of-sample data. The calculated t-values showed that, for all the predicted signals, the proposed Polychronous Spiking Network technique outperforms the other neural network predictors at the α = 5% significance level for a one-tailed test. This is confirmed by the simulation results shown in Table 6 for out-of-sample data. We utilised 50% of the data as out-of-sample for the t-test experiments. These results clearly indicate that the proposed spiking neural network is significantly better than the DRPNN and MLP networks in predicting these financial time-series datasets.
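The test described above can be sketched as follows; the error arrays are synthetic stand-ins for the absolute out-of-sample errors of two predictors evaluated on the same test points:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Hypothetical absolute forecast errors of two models on identical test points.
err_a = np.abs(rng.normal(0.5, 0.1, 100))  # proposed model, smaller errors
err_b = np.abs(rng.normal(0.8, 0.1, 100))  # benchmark model, larger errors

t_stat, p_two_sided = stats.ttest_rel(err_a, err_b)
# One-tailed test of "model A's error is smaller": halve the two-sided p-value
# when the t statistic has the expected (negative) sign.
p_one_sided = p_two_sided / 2 if t_stat < 0 else 1 - p_two_sided / 2
significant = p_one_sided < 0.05
```

Pairing matters here: both models are evaluated on the same points, so the test compares error differences point by point rather than pooling two independent samples.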


This paper details an empirical analysis of financial time-series data. We consider the official daily prices from the trading floor of the New York Mercantile Exchange (NYMEX), for a specific delivery month, for the Cushing, Oklahoma West Texas Intermediate (OK WTI), Reformulated Blendstock for Oxygenate Blending (RBOB), and No. 1 heating oil futures contracts at the 2:30 p.m. close of business. The analysis proceeds from model identification and selection through estimation and diagnostic checking to forecasting, and is carried out in MATLAB [20]. The rest of the paper is organized as follows: Section 2 discusses the methods used in the empirical analysis, Section 3 briefly discusses the data, the empirical analysis, and the results obtained, and Section 4 summarizes the work and the findings.
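As a minimal sketch of the estimation and forecasting steps in such a workflow (the paper itself uses MATLAB; this is a generic Python illustration on a simulated AR(1) series, not the NYMEX data):

```python
import numpy as np

rng = np.random.default_rng(5)
phi = 0.7  # illustrative AR(1) coefficient
x = np.zeros(500)
for t in range(1, 500):
    x[t] = phi * x[t - 1] + rng.standard_normal()

# Estimation: least-squares regression of x[t] on x[t-1].
phi_hat = np.dot(x[:-1], x[1:]) / np.dot(x[:-1], x[:-1])
# Forecasting: one-step-ahead conditional mean from the last observation.
one_step_forecast = phi_hat * x[-1]
```

Identification and diagnostic checking would sit around these two steps, e.g. inspecting the sample autocorrelations of the series and of the fitted residuals.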


In this paper, we introduce a TS-binary search tree for financial time series data based on a Multiresolution ImportaNt pOints Retrieval (MINOR) method.


During this research we tried to present the structure of a neural network in general and the theoretical steps to be followed in its implementation. We focused on cash forecasting as a specific area of financial analysis in which predicting future values is of significant importance for proper cash management. We built an NN model for predicting financial time-series data and also presented the necessary parameters and the way they can be defined during this process.

Abstract: Given the complex nature of the stock market, determining market timing, that is, when to buy or sell, is a challenge for investors. There are two basic methodologies used for prediction in financial time series: fundamental and technical analysis. Fundamental analysis depends on external factors such as the economic environment and industry and company performance. Technical analysis utilizes financial time-series data such as stock prices and trading volume. Trend following (TF) is an investment strategy based on the technical analysis of market prices. Trend followers do not aim to forecast, nor do they predict specific price levels; they simply jump on an uptrend and ride it until it ends. Most trend followers determine the establishment and termination of an uptrend based on their own rules. In this paper, we propose a TF algorithm that employs multiple pairs of thresholds to determine stock market timing. Experimental results on 7 stock market indexes show that the proposed multi-threshold TF algorithm with a multi-objective function is superior to static, dynamic, and float-encoding genetic-algorithm-based TF.
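A single-threshold-pair version of the trend-following rule described above can be sketched as follows; the percentage thresholds and the price path are illustrative, and the paper's multi-threshold, multi-objective extension is not shown:

```python
def trend_follow(prices, buy_th=0.02, sell_th=0.02):
    """Enter after a rise of buy_th from the running minimum,
    exit after a drop of sell_th from the running maximum."""
    position, trades, ref = 0, [], prices[0]
    for i in range(1, len(prices)):
        p = prices[i]
        if position == 0:
            ref = min(ref, p)                 # track the running low
            if p >= ref * (1 + buy_th):       # uptrend established: buy
                position, ref = 1, p
                trades.append(("buy", i, p))
        else:
            ref = max(ref, p)                 # track the running high
            if p <= ref * (1 - sell_th):      # uptrend over: sell
                position, ref = 0, p
                trades.append(("sell", i, p))
    return trades

trades = trend_follow([100, 99, 98, 101, 103, 105, 102, 100])
```

Note that the rule never forecasts a price level: it only reacts to moves already observed, which is exactly the trend-following philosophy described above.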


the corresponding time series. Quite remarkably, however, it misses the time-reversal asymmetry reported in Lynch and Zumbach [37] and Zumbach [55]. Indeed, real financial time series are not symmetric under time reversal with respect to even-order moments. For instance, there is no leverage effect in foreign exchange rates, and their time series are not as skewed as those of indices, but they do have a time arrow. One of the indicators proposed in Lynch and Zumbach [37] is the correlation between the historical volatility σ^(h)_{δt_h}(t) and the realized volatility σ^(r)_{δt_r}(t). The historical volatility σ^(h)_{δt_h}(t) is the volatility computed from the data in the past interval [t − δt_h, t], and σ^(r)_{δt_r}(t) is the volatility computed from the data in the future interval [t, t + δt_r].
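The historical/realized-volatility correlation used as the indicator above can be sketched as follows; the regime-switching returns are synthetic and merely stand in for a series with volatility clustering:

```python
import numpy as np

def hist_real_vol_corr(returns, window):
    """Correlate volatility over the past interval [t - window, t]
    with volatility over the future interval [t, t + window]."""
    r = np.asarray(returns, dtype=float)
    vols = np.array([r[i:i + window].std() for i in range(len(r) - window + 1)])
    hist, real = vols[:-window], vols[window:]  # past window vs. following window
    return np.corrcoef(hist, real)[0, 1]

rng = np.random.default_rng(2)
# Alternating calm / turbulent blocks give persistent volatility.
sigma = np.repeat([0.5, 2.0] * 5, 200)
returns = sigma * rng.standard_normal(sigma.size)
c = hist_real_vol_corr(returns, window=50)
```

A model symmetric under time reversal would give the same statistics if the series were run backwards; comparing this correlation on the original and the reversed series is one way to expose the time arrow.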


Predicting the behavior of the stock market is always an interesting topic, not only for financial investors but also for scholars and professionals from different fields, because successful prediction can help investors gain major profits. Previous research has shown a strong connection between financial news and its impact on the movement of stock prices. This paper proposes an approach that uses time-series analysis and text mining techniques to predict daily stock market trends.

In this paper, a method is proposed for fitting financial log-return time series with Lévy α-stable distributions. It uses the program stable.exe developed by Nolan [14] and the Chambers-Mallows-Stuck algorithm for the generation of Lévy α-stable deviates [15-17]. The datasets are the daily adjusted close of the DJIA index, taken from http://finance.yahoo.com, and the daily adjusted close of the MIB30 index, taken from http://it.finance.yahoo.com, both for the period 1 January 2000 - 3 August 2006. The two datasets and the program stable.exe are freely available, so anyone can reproduce the results reported below and use the method on other datasets.
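For the symmetric case (β = 0), the Chambers-Mallows-Stuck algorithm mentioned above reduces to a few lines; this sketch generates standard symmetric α-stable deviates and is not the stable.exe program itself:

```python
import numpy as np

def cms_symmetric_stable(alpha, size, seed=0):
    """Chambers-Mallows-Stuck generator for symmetric alpha-stable deviates."""
    rng = np.random.default_rng(seed)
    v = rng.uniform(-np.pi / 2, np.pi / 2, size)  # uniform angle
    w = rng.exponential(1.0, size)                # unit exponential
    return (np.sin(alpha * v) / np.cos(v) ** (1.0 / alpha)
            * (np.cos((1.0 - alpha) * v) / w) ** ((1.0 - alpha) / alpha))

x = cms_symmetric_stable(alpha=1.7, size=100_000)
```

For alpha = 2 the formula collapses to a Gaussian; for alpha < 2 the heavy power-law tails typical of log-returns appear.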


Spectral analysis is widespread in well-developed areas such as biology, astrophysics, engineering, and telecommunications. It is mainly used in applications where a signal exhibits either oscillatory properties or pattern recurrence. Applied econometricians, however, tend to neglect the proper frequency at which to sample time-series data. The present study shows how spectral analysis can usefully be employed to address this problem. The case is illustrated with ultra-high-frequency data and the daily prices of four selected stocks listed on the São Paulo stock exchange.
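A minimal periodogram example shows the kind of frequency diagnostics meant above; the 5 Hz sinusoid in noise is purely illustrative, not the São Paulo data:

```python
import numpy as np
from scipy.signal import periodogram

fs = 100.0  # sampling frequency (illustrative)
t = np.arange(0, 10, 1 / fs)
rng = np.random.default_rng(3)
# A 5 Hz cycle buried in noise.
x = np.sin(2 * np.pi * 5 * t) + 0.5 * rng.standard_normal(t.size)

freqs, power = periodogram(x, fs=fs)
dominant = freqs[np.argmax(power)]  # recovers the 5 Hz cycle
```

Sampling below twice the dominant frequency would alias the cycle, which is precisely the sampling-frequency issue the study addresses.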


Abstract: A simple quantitative measure of self-similarity in time series in general, and in the stock market in particular, is the scaling behavior of the mean absolute size of the jumps across lags of size k. A stronger form of self-similarity entails that not only this mean absolute value but also the full distributions of lag-k jumps have a scaling behavior characterized by the same Hurst exponent. In 1963 Benoit Mandelbrot showed that cotton prices have such a strong form of (distributional) self-similarity, and for the first time introduced Lévy's stable random variables into the modeling of price records. This paper discusses the analysis of the self-similarity of high-frequency DEM-USD exchange-rate records and of the 30 main German stock price records. Distributional self-similarity is found in both cases and some of its consequences are discussed.
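The scaling measure described above can be turned into a Hurst-exponent estimate directly: fit the slope of log E|x[t+k] − x[t]| against log k. The random walk below is synthetic; for a random walk H should be close to 0.5:

```python
import numpy as np

def hurst_from_jumps(x, max_lag=20):
    """Estimate H from the scaling of mean absolute lag-k jumps:
    E|x[t+k] - x[t]| ~ k**H, so H is the slope in log-log coordinates."""
    x = np.asarray(x, dtype=float)
    lags = np.arange(1, max_lag + 1)
    mean_abs = np.array([np.abs(x[k:] - x[:-k]).mean() for k in lags])
    slope, _ = np.polyfit(np.log(lags), np.log(mean_abs), 1)
    return slope

rng = np.random.default_rng(4)
random_walk = np.cumsum(rng.standard_normal(20_000))
h = hurst_from_jumps(random_walk)
```

The stronger, distributional form of self-similarity discussed above would additionally require the whole distribution of lag-k jumps, suitably rescaled by k**H, to collapse onto one curve, not just its mean.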

It is known that the solutions of Bessel's equation are described by the Bessel functions of the first and second kind. These are typical special functions and are described by Fourier series. The Fourier series is calculated via the Fourier integral, and when calculating the Fourier integral the convolution integral is applied. Here, let us consider extending the Bessel differential equation directly to its form in non-standard analysis, that is, changing the standard real variables to non-standard variables in the sense of Nelson. The "Transfer Principle" [7] is effective for solving the equation. Then, if the convolution integral is extended to the non-standard form (see Section 2), it is sufficient to obtain the solutions.
