Forecasting volatility for financial assets has garnered significant interest among investors and professional analysts because of its importance in risk management, derivative pricing, and portfolio allocation. In recent years, the availability of high-frequency intra-day asset price data sets has enabled us to estimate the latent volatility via realized volatility (RV). Since volatility reflects future asset price uncertainty, forecasting RV is a central problem. Diverse methodologies for forecasting RV have been developed during the last decade in various academic fields and applied in real markets. Major results on RV forecasting were reviewed by McAleer and Medeiros (2008a), Andersen and Terasvirta (2009), and Shin (2018).
By exploiting hidden nonlinearity in time series data, deep learning has been widely applied to regression and classification problems and has been shown to perform very well compared with other methods. Furthermore, since neural network methods do not require normality assumptions on the population distribution, financial analysts, economists, and statisticians increasingly use these methods for data analysis (Mantri et al.).
The main characteristic of time series data is serial dependence. In particular, financial volatility data sets such as RV data sets and implied volatility data sets show very strong, persistent serial dependence, called long memory; see Section 2 and, for example, Engle and Patton (2007), Andersen and Terasvirta (2009), and Bollerslev et al.
In the recent literature, RNN models have been successfully adopted for forecasting volatilities of financial time series. Kim and Won (2018) considered a combination of the deep learning LSTM for volatility forecasting of the KOSPI (Korean stock price index) and compared the proposed model with other standard models. Liu (2019) compared volatility forecasts of the US S&P 500 index and AAPL (Nasdaq Apple stock price) by the SVM, the LSTM, and the statistical GARCH model and identified the situations in which the LSTM has better performance. Bucci (2020) studied forecasts of the monthly realized volatility of the US S&P stock price index, comparing several RNN models with econometric models; see also Lei et al.
We forecast daily RV of major stock price indices of the US and the EU. Recall that deep learning forecasts usually suffer from an out-of-scale issue: forecast values are confined to the range of the training data. The issue is generally resolved by considering ratios rather than the data themselves (Fischer and Krauss, 2018; Liu and Mehta, 2019; Oztekin et al.).
In addition to the long-memory feature of RV data sets, another important one is asymmetry, as reviewed by Corsi (2009), McAleer and Medeiros (2008a), and many others. The asymmetric feature of RV data sets remains in the RV ratio data sets: heavier right tails than left tails, as shown in Sections 2 and 5. We address the asymmetry issue by piecewise min-max (PM) normalization to scale the data. The PM normalization is shown to faithfully address the asymmetry in the RV ratio data, resulting in better forecasts than the other usual normalizations: min-max (MM) normalization and Gaussian mixture (GM) normalization.
Superiority of the proposed method is illustrated in an out-of-sample forecast comparison for RVs of 10 major US and EU stock price indices: S&P 500, Russell 2000, DJIA, NASDAQ, FTSE 100, CAC 40, DAX, AEX, SSMI, and IBEX 35. (i) The proposed RNN model with ratio transformation and PM normalization (RNN-R-PM) is the best among 6 RNN models (with or without ratio transformation, each combined with MM, GM, or PM normalization) and 4 benchmark models: the AR, the SVM, the DNN, and the CNN. (ii) The RNN is shown to be better than the DNN and the CNN when applied to the original RV with the MM normalization, as is the usual practice. (iii) Ratio transformation of RV gives the RNN better forecasts than the original RV. (iv) PM normalization yields better forecasts for the RNN than MM and GM normalizations.
The rest of the paper is organized as follows. Section 2 describes our research data. Section 3 presents three normalization methods, RNN-based methods, and the nested CV. Section 4 explains the experimental design, evaluation measures, and an implementation of the nested CV method. Section 5 provides an illustration of the modeling procedure, the CV results, and forecast performance comparison results of the RNN-based methods and the benchmark models. Finally, Section 6 concludes this work.
A realized variance is simply the sum of squares of intra-day log-returns over a unit of time, usually one day, and it is indexed by the day $t$.
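For concreteness, the standard definition can be written as follows; the symbols $r_{t,i}$ for the $i$-th intra-day log-return on day $t$, $P_{t,i}$ for the corresponding price, and $M$ for the number of intra-day returns are our notation, not necessarily the paper's original symbols:

$$ \mathrm{RV}_t = \sum_{i=1}^{M} r_{t,i}^{2}, \qquad r_{t,i} = \log P_{t,i} - \log P_{t,i-1}. $$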
Summary statistics of the RV data sets are shown in Table 1. The means of RV are all roughly 1%, and the skewness values are all much larger than 0, indicating very right-skewed distributions; this is also confirmed by (max − median) being much larger than (median − min), for example 29.66 − 0.62 > 0.62 − 0.11 for the S&P. The CoV denotes the coefficient of variation, SD/Mean. Since randomness limits the accuracy of forecasts in general, forecasting with less dispersed data is presumably advantageous, and the CoV can be utilized as a rough indicator for this. The CoV of RV is around 0.5 for every index.
From Table 1, it can be inferred that the RV data are somewhat irregular due to the high skewness and CoV. Also, it is generally reported that RNN-based forecasts suffer from the out-of-scale issue: the forecast value tends to be limited to the range seen in the training data. These concerns can be relieved by forecasting the relative ratio of RV instead of the original values (Liu and Mehta, 2019). The relative ratio of RV is the ratio of adjacent RV values, $\mathrm{RV}_t/\mathrm{RV}_{t-1}$ (see Table 2).
In addition to avoiding the out-of-scale issue and having smaller skewness and CoV than the RV data, the ratioed RV data have another important advantage: the long memory of the original RV data is resolved, which considerably improves deep learning forecasts. For this discussion, we choose the S&P 500; the conclusion of this paragraph holds for all the other assets. Figure 1 shows the time series plot of the daily RV series.
A non-negligible autocorrelation of about −0.3 remains at lag one in the ratio series, while the autocorrelations at higher lags are essentially zero (Table 2).
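As a quick illustration of this long-memory contrast, the following minimal Python sketch computes the ratio transformation and the first few sample autocorrelations; the toy series is a placeholder for an actual RV series.

```python
import numpy as np
from statsmodels.tsa.stattools import acf

# Toy positive series standing in for daily RV values (placeholder data)
rng = np.random.default_rng(0)
rv = np.exp(0.05 * rng.normal(size=1000).cumsum())

# Relative ratio of adjacent RV values: V_t = RV_t / RV_{t-1}
ratio = rv[1:] / rv[:-1]

# The RV series is strongly persistent; the ratio series is nearly white
print("ACF of RV,    lags 1-3:", np.round(acf(rv, nlags=3)[1:], 2))
print("ACF of ratio, lags 1-3:", np.round(acf(ratio, nlags=3)[1:], 2))
```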
The values passed between layers in the RNN models need to be scaled to [0, 1] or [−1, 1]. Normalization techniques can be used to scale the data and can speed up training by starting the training process from scaled data (Liu and Mehta, 2019). Since we focus on forecasting the relative ratio of RV, the normalization is applied to it. The parameters required for the normalization process are usually estimated from a reference set of values, mostly a train set; let $S$ denote the set on which the normalization is based. In this work, three kinds of normalization techniques are employed, as described in the following.
The min-max normalization (MMN) is the simplest method to adjust the range of data to lie in [0, 1] (Liu and Mehta, 2019). For a value $x$, the MMN is the affine map
$$ x \mapsto \frac{x - \min(S)}{\max(S) - \min(S)}, $$
where the minimum and maximum are taken over the reference set $S$.
It preserves the stochastic properties of the data; only the range is scaled to [0, 1]. The MMN also has the advantage of low complexity of denormalization.
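A minimal sketch of the MMN and its denormalization, with parameters estimated from the reference set (typically the training data); the function names are ours.

```python
import numpy as np

def mmn_fit(s):
    # Min-max parameters estimated from the reference set s (e.g., a train set)
    return float(np.min(s)), float(np.max(s))

def mmn(x, lo, hi):
    # Affine map sending the reference range [lo, hi] to [0, 1]
    return (np.asarray(x, dtype=float) - lo) / (hi - lo)

def mmn_inverse(y, lo, hi):
    # Denormalization: the inverse affine map back to the original scale
    return lo + np.asarray(y, dtype=float) * (hi - lo)
```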
The Gaussian mixture normalization (GMN) exploits a density estimation method using a linear superposition of Gaussian distributions. The distribution of values in $S$ is approximated by a Gaussian mixture model. We then use the well-known probability integral transform: evaluating the estimated cumulative distribution function at a data value yields a value in [0, 1] that is approximately uniformly distributed,
$$ x \mapsto \widehat{F}(x) = \sum_{k} \widehat{w}_k \, \Phi\!\left(\frac{x - \widehat{\mu}_k}{\widehat{\sigma}_k}\right), $$
where $\Phi$ is the standard normal CDF and $\widehat{w}_k$, $\widehat{\mu}_k$, $\widehat{\sigma}_k$ are the estimated mixture weights, means, and standard deviations.
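One way to implement the GMN under this probability integral transform reading: fit a Gaussian mixture to the reference set and evaluate its CDF. The choice of three components and the scikit-learn API are our assumptions, not specified by the paper.

```python
import numpy as np
from scipy.stats import norm
from sklearn.mixture import GaussianMixture

def gmn_fit(s, n_components=3, seed=0):
    # Fit a Gaussian mixture density to the 1-d reference set s
    gm = GaussianMixture(n_components=n_components, random_state=seed)
    gm.fit(np.asarray(s, dtype=float).reshape(-1, 1))
    w = gm.weights_
    mu = gm.means_.ravel()
    sd = np.sqrt(gm.covariances_.ravel())  # 1-d data: one variance per component
    return w, mu, sd

def gmn(x, w, mu, sd):
    # The mixture CDF maps values into (0, 1), approximately uniformly
    x = np.atleast_1d(np.asarray(x, dtype=float))
    return np.sum(w * norm.cdf(x[:, None], mu, sd), axis=1)
```

Denormalization requires inverting the mixture CDF, which is monotone and can be done numerically (e.g., by bisection).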
The piecewise min-max normalization (PMN) is a simple combination of the two techniques mentioned above in that the distributional feature of the data is addressed by applying different min-max scalings to the left and right halves of the data, split at the median. For a value $x$,
$$ x \mapsto \begin{cases} \dfrac{x - \min(S)}{2\,(m - \min(S))}, & x \le m, \\[2ex] \dfrac{1}{2} + \dfrac{x - m}{2\,(\max(S) - m)}, & x > m, \end{cases} $$
where $m$ is the median of $S$, so that the left half of the data is mapped to [0, 0.5] and the right half to [0.5, 1].
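A sketch of the PMN consistent with the description above, assuming the two affine pieces send [min, median] to [0, 0.5] and [median, max] to [0.5, 1]; the exact target intervals are our reading of the construction.

```python
import numpy as np

def pmn_fit(s):
    # The median, min, and max of the reference set define the two pieces
    s = np.asarray(s, dtype=float)
    return float(np.min(s)), float(np.median(s)), float(np.max(s))

def pmn(x, lo, med, hi):
    # Piecewise min-max: [lo, med] -> [0, 0.5], [med, hi] -> [0.5, 1]
    x = np.asarray(x, dtype=float)
    left = 0.5 * (x - lo) / (med - lo)
    right = 0.5 + 0.5 * (x - med) / (hi - med)
    return np.where(x <= med, left, right)

def pmn_inverse(y, lo, med, hi):
    # Inverse of the two affine pieces, split at 0.5
    y = np.asarray(y, dtype=float)
    left = lo + 2.0 * y * (med - lo)
    right = med + 2.0 * (y - 0.5) * (hi - med)
    return np.where(y <= 0.5, left, right)
```

Because the median is pinned to 0.5, the left and right tails each receive their own scale, which is how the asymmetry of the ratio data is accommodated.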
We use RNNs, which are widely used for sequential data (Karpathy et al.).
Here we briefly explain the RNN model we use. Let the input length be given as $q$. The model reads the $q$ most recent normalized ratio values one at a time, updating a hidden state $h_t = f(h_{t-1}, x_t)$, where $f$ is a recurrent cell (the LSTM or the GRU, possibly bidirectional; see the Appendix), and the final hidden state is mapped to the one-step-ahead forecast through a fully connected output layer.
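A minimal model sketch in PyTorch (the paper does not name a framework, so the framework choice is our assumption); the constructor arguments mirror the hyperparameters of Table 3: cell type, direction, number of layers L, and number of hidden units n.

```python
import torch
import torch.nn as nn

class RatioRNN(nn.Module):
    # One-step forecaster: q normalized ratio values in, one value out
    def __init__(self, cell="GRU", hidden=16, layers=2, bidirectional=False):
        super().__init__()
        rnn_cls = nn.GRU if cell == "GRU" else nn.LSTM
        self.rnn = rnn_cls(input_size=1, hidden_size=hidden,
                           num_layers=layers, batch_first=True,
                           bidirectional=bidirectional)
        out_dim = hidden * (2 if bidirectional else 1)
        self.head = nn.Linear(out_dim, 1)  # fully connected output layer

    def forward(self, x):             # x: (batch, q, 1)
        out, _ = self.rnn(x)          # hidden states at every time step
        return self.head(out[:, -1])  # forecast from the last hidden state
```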
The RNN-based methods discussed in the previous subsection have various hyperparameters, such as the RNN direction, the input length, the number of layers, and others, that need to be specified. It is important to choose a good hyperparameter combination when training an RNN-based model for a good forecast. In order to compare the forecast accuracies of two or more methods with different hyperparameters, researchers tend to use CV (Oztekin et al.).
We elaborate on how the methods described in Section 3 are applied to our data. For the purpose of forecasting future RV values, we perform CV in exactly the same way it would be done in the actual forecast process. That is, the models are trained on past values and their performances are tested on future values. This type of CV can be used not only to determine hyperparameters but also to examine the effects of the normalization techniques through the stochastic characteristics inherent in the data.
Let $D$ denote the dataset that will be used for tuning hyperparameters. First, $D$ is divided into consecutive subsets of equal length.
We describe how to construct input-output pairs for training our RNN-based models discussed in Section 3.2. Let a subset of consecutive RV values be given; we take the ratios of adjacent RV values to construct a set of relative ratio values.
Now, the normalization is based on this set of ratio values. We apply one of the normalization techniques given in Section 3.1 to its values and obtain a set of normalized ratio values.
Finally, we construct input-output pairs for RNN learning by sliding a window of length $q + 1$ along the normalized series: the first $q$ values of each window form the input and the last value is the output.
Among them, the earlier pairs are used for training and the remaining ones for validation, preserving the time order.
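A sketch of the window construction under the $q + 1$ reading above (numpy; the helper name is ours).

```python
import numpy as np

def make_pairs(z, q):
    # Slide a window of length q + 1 over the normalized ratio series z:
    # the first q values form the input, the last value is the output
    X = np.stack([z[i:i + q] for i in range(len(z) - q)])
    y = z[q:]
    return X[..., None], y  # X: (N, q, 1) for the RNN, y: (N,)
```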
We describe how forecasts are made using a trained RNN-based model. Let the subset of RV values to be forecast be given.
We obtain 1-step forecasts for each target day by feeding the preceding $q$ normalized ratio values to the trained model, denormalizing the output, and multiplying by the most recent RV value to recover the forecast of RV.
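Putting the pieces together, a hedged sketch of one forecast step; here model, norm, and denorm stand for the trained network and a fitted normalization pair (e.g., pmn/pmn_inverse above with their parameters bound), all assumptions of this sketch.

```python
import numpy as np
import torch

def forecast_rv(model, rv_hist, q, norm, denorm):
    # rv_hist: recent RV values (length >= q + 1), most recent last
    rv_hist = np.asarray(rv_hist, dtype=float)
    ratio = rv_hist[1:] / rv_hist[:-1]          # ratio transformation
    z = norm(ratio[-q:])                        # normalize the input window
    x = torch.tensor(z, dtype=torch.float32).reshape(1, q, 1)
    with torch.no_grad():
        y = model(x).item()                     # normalized ratio forecast
    return float(denorm(y)) * rv_hist[-1]       # denormalize, undo the ratio
```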
We elaborate on the CV process adopted in this work. To evaluate the forecast performance of the RNN-based method for a combination of hyperparameters, the forecast accuracy is measured on the last part of each subset, and the average accuracy over the subsets serves as the CV estimate.
For each possible hyperparameter combination, we perform the above-mentioned process to get the CV estimate. Then, based on the results from this exhaustive CV process, the combination of hyperparameters to be used for the actual forecast is determined.
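The exhaustive search itself is straightforward. Below is a sketch with a hypothetical cv_error callable that runs the CV procedure above for one combination and returns its error estimate; the grid mirrors Table 3.

```python
from itertools import product

GRID = {
    "direction": ["Uni", "Bi"],
    "cell": ["LSTM", "GRU"],
    "q": [3, 4, 5, 6, 7, 8, 9, 10],
    "L": [2, 4, 8, 16],
    "n": [4, 8, 16, 32],
}

def cv_select(cv_error, grid=GRID):
    # Evaluate every hyperparameter combination; keep the smallest CV error
    combos = [dict(zip(grid, vals)) for vals in product(*grid.values())]
    return min(combos, key=cv_error)
```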
Recall the total number of RV values for each asset from Section 2.
This subsection illustrates the methods in Sections 3 and 4 in the setup of the first paragraph of Section 5. Recall that the data lengths are the same for all assets.
Hyperparameters are first tuned as in (i), and forecasts are next made as in (ii)–(iv).
Hyperparameter tuning - nested CV method: Hyperparameters are determined by the nested CV method of Section 3.3. We use the hyperparameter settings listed in Table 3.
Weight estimation using the initial block of training data.
Weight updating using the next block of data.
Weight updating using the following block of data.
One-step forecasts for the final block of data.
Ratio transformation: transform the RV data into relative ratios of adjacent values.
PMN for normalization: apply the piecewise min-max normalization (PMN) of Section 3.1 to the ratio values, with the median, minimum, and maximum estimated from the training data.
RNN training: update the weights of the RNN model using the normalized training pairs.
Forecasting normalized data: compute one-step forecasts of the normalized ratio values.
Inverse PMN for denormalization: we apply the inverse transform of the PMN in (ii) to compute forecasts of the ratio values.
Inverse ratio transformation for forecasts of RV: each forecast ratio is multiplied by the preceding RV value to obtain the forecast of RV.
Before applying CV, we first perform a descriptive analysis on the relative ratio of RV values, covering the relative ratios corresponding to the RV values in the tuning dataset.
As discussed in Section 4.4, the main purpose of the CV is to select the optimal hyperparameters for the RNN methods. For the performance measure FA in Section 4.4, we use the MSE. The hyperparameters are the RNN direction, the RNN cell, the input length (q), the number of layers (L), and the number of hidden units (n); the candidate settings are listed in Table 3.
We perform the CV process for each normalization technique described in Section 3.1. According to the CV errors, the best hyperparameter combinations are selected for each normalization; the top three combinations are reported in the Appendix.
Here, we analyze the stochastic characteristics of the data that can be exploited in the actual forecast. The analysis is performed with the results obtained in the CV process. Since the RNN-based model forecasts the relative ratio of RV values first, we focus on the analysis of this. Specifically, we examine how the normalization techniques affect the forecast of the relative ratios.
Figures 5–7 show histograms of the normalized ratio data and of the forecasted ratios for the MMN, the GMN, and the PMN, respectively.
As we expected for the GMN, the normalized data show an approximately uniform histogram in Figure 6. Although the forecasted results again cover only the middle part of the data, the median is generally matched well while the asymmetry is neglected.
As described in Section 3.1, the PMN technique applies two different affine transformations split at the median. From Figure 7, it can be seen that the right-skewness of the data is partially alleviated by this normalization. For the forecasted results, unlike the MMN and the GMN, the mode of the data is well matched.
In Figure 8, the distributions of forecasted ratios under the three normalization techniques are compared.
We perform forecasts for the out-of-sample data subsets. The forecast accuracy is evaluated by the MAPE, the MAE, and the RMSE, where $\mathrm{RV}_t$ denotes the realized value on day $t$ and $\widehat{\mathrm{RV}}_t$ its forecast.
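Assuming the standard definitions of these measures, with $T$ the number of forecast days in the evaluation set:

$$ \mathrm{MAPE} = \frac{100}{T}\sum_{t=1}^{T}\left|\frac{\mathrm{RV}_t - \widehat{\mathrm{RV}}_t}{\mathrm{RV}_t}\right|, \qquad \mathrm{MAE} = \frac{1}{T}\sum_{t=1}^{T}\left|\mathrm{RV}_t - \widehat{\mathrm{RV}}_t\right|, \qquad \mathrm{RMSE} = \sqrt{\frac{1}{T}\sum_{t=1}^{T}\left(\mathrm{RV}_t - \widehat{\mathrm{RV}}_t\right)^2}. $$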
For the RNN methods with ratio transformation, forecasts are made for the RV elements in the test subsets by applying the inverse normalization and the inverse ratio transformation described above.
RNN-R-PM produces generally the best results. The superiority of the RNN-R-PM model holds strongly in the MAPE performance, being best for all 10 assets (Table 4), and somewhat strongly in the MAE performance, being best for 6 of the 10 assets (Table 5). In the RMSE performance, even though RNN-R-MM tends to produce the best results, RNN-R-PM is not much worse than the best model, as shown in Table 6. For all 10 assets, the RNN-R-PM is considerably better than the benchmark AR, SVM, DNN, and CNN models in all of MAPE, MAE, and RMSE, with the single exception of the AR for SSMI in RMSE.
RNN-O produces much better forecasts than the DNN and somewhat better forecasts than the CNN, except for 1 asset (the DAX) out of 10. Recall that RNN-O, the DNN, and the CNN are all applied to the original RV data with the same MM normalization. Recall also that the original RV data sets have long memory in the form of persistent strong autocorrelation, as discussed in Section 2. The RNN-O takes good advantage of this long memory and produces better forecasts than the DNN and the CNN, which do not explicitly address the serial dependence.
Ratio transformation generally yields forecast improvement. Comparing the forecast performances of RNN-O and RNN-R, R is generally better than the corresponding O, except for the RMSE of the RNN-R-PM model; for example, RNN-R-MM is better than RNN-O-MM. The better forecast performance of R over O is a consequence of the resolved long memory in the ratioed RV, as discussed in Section 2.
PM is better than MM and GM for the RNN. The superiority of PM over MM and GM holds both in RNN-O and in RNN-R in all of MAPE, MAE, and RMSE, with the exception that, in the RMSE performance for RNN-R, MM tends to be best although PM is not far behind. The better performance of PM over MM and GM is a result of better exploiting the asymmetry in the RV or in the ratioed RV as in (2).
We considered recurrent neural network (RNN) forecasts of the realized volatility (RV) of 10 stock price indices, four from the US and six from Europe. We applied three different normalization methods to the relative ratio of RV to adjust the range of the data to the RNN-based models. The piecewise min-max (PM) normalization was shown to faithfully address the asymmetric data distribution, resulting in better forecasts than the min-max and Gaussian mixture normalizations. Two RNN cell variants, the LSTM and the GRU, as well as their bidirectional versions, were utilized, and the nested cross-validation (CV) was applied to select the optimal combinations. The RNN-based model with PM normalization and ratio transformation (RNN-R-PM) outperformed the other RNN models and the benchmark models of the AR (autoregressive) model, the SVM (support vector machine), the DNN (deep neural network), and the CNN (convolutional neural network) in most of the data sets. In particular, the RNN-R-PM performed substantially better than the other models in all 10 data sets in terms of MAPE. Forecasts via the ratioed RV data were shown to be better than those based on the RV themselves.
Here, we briefly introduce the RNN structures based on the framework provided in Karpathy et al.
Cell structures for gated RNNs
Although the detailed structures for calculating the hidden states differ across cell types, the gated RNNs share the common recurrent framework introduced above.
LSTM
The LSTM, one of the variants of the RNN, is designed to prevent the gradient vanishing and exploding problems of the plain RNN. This model has cell states and uses input, forget, and output gates at each time step to control what information to throw away from and store in the cell state and how to update the old cell state. The detailed expression of the steps in the LSTM is as follows:
$$ f_t = \sigma(W_f x_t + U_f h_{t-1} + b_f), \quad i_t = \sigma(W_i x_t + U_i h_{t-1} + b_i), \quad o_t = \sigma(W_o x_t + U_o h_{t-1} + b_o), $$
$$ \tilde{c}_t = \tanh(W_c x_t + U_c h_{t-1} + b_c), \quad c_t = f_t \odot c_{t-1} + i_t \odot \tilde{c}_t, \quad h_t = o_t \odot \tanh(c_t), $$
where $\sigma$ is the sigmoid function, $\odot$ denotes elementwise multiplication, and the $W$'s, $U$'s, and $b$'s are weight matrices and bias vectors.
GRU
The GRU, introduced by Cho et al. (2014), simplifies the LSTM by merging the cell state and the hidden state and by using only two gates, the update gate and the reset gate:
$$ z_t = \sigma(W_z x_t + U_z h_{t-1} + b_z), \quad r_t = \sigma(W_r x_t + U_r h_{t-1} + b_r), $$
$$ \tilde{h}_t = \tanh(W_h x_t + U_h (r_t \odot h_{t-1}) + b_h), \quad h_t = z_t \odot h_{t-1} + (1 - z_t) \odot \tilde{h}_t, $$
where the notation is as in the LSTM.
Figure A.1: Considered structures for the RNN model: (a) unidirectional, (b) bidirectional.
Bidirectional RNN
A bidirectional RNN connects two hidden layers of opposite directions to the same output. With this structure, it is possible to simultaneously get information from backward and forward states, which otherwise have no interaction with each other (Schuster and Paliwal, 1997). The structure of a bidirectional model is illustrated in Figure A.1(b). We call the typical RNN model described above a unidirectional model, illustrated in Figure A.1(a), to distinguish it from the bidirectional model. Specifically, in the bidirectional model, the bidirectional structure is applied to the bottom layers.
According to the CV errors, we take the best three hyperparameter combinations for RNN-R, which are applied to the ratioed RV; these are given in the following tables.
The best three hyperparameter combinations for MM normalization
Index | Rank | Direction | Cell | q | L | n | Index | Rank | Direction | Cell | q | L | n |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
S&P | 1 | Uni | GRU | 10 | 2 | 8 | RUSSELL | 1 | Uni | GRU | 10 | 2 | 8 |
2 | Uni | GRU | 10 | 2 | 4 | 2 | Uni | GRU | 8 | 2 | 4 | ||
3 | Uni | GRU | 9 | 2 | 16 | 3 | Uni | GRU | 9 | 2 | 4 | ||
DJIA | 1 | Uni | LSTM | 10 | 2 | 8 | NASDAQ | 1 | Uni | LSTM | 8 | 2 | 4 |
2 | Uni | GRU | 8 | 2 | 16 | 2 | Uni | LSTM | 10 | 2 | 8 | ||
3 | Bi | GRU | 10 | 2 | 16 | 3 | Uni | LSTM | 10 | 2 | 16 | ||
FTSE | 1 | Uni | LSTM | 9 | 2 | 8 | CAC | 1 | Uni | GRU | 7 | 2 | 16 |
2 | Uni | LSTM | 9 | 2 | 16 | 2 | Uni | GRU | 6 | 2 | 16 | ||
3 | Uni | GRU | 9 | 2 | 8 | 3 | Uni | LSTM | 9 | 2 | 16 | ||
DAX | 1 | Uni | GRU | 6 | 2 | 16 | AEX | 1 | Uni | GRU | 10 | 2 | 16 |
2 | Uni | GRU | 8 | 2 | 16 | 2 | Uni | LSTM | 6 | 2 | 16 | ||
3 | Uni | GRU | 9 | 2 | 16 | 3 | Uni | GRU | 7 | 2 | 16 | ||
SSMI | 1 | Bi | GRU | 7 | 2 | 16 | IBEX | 1 | Uni | LSTM | 9 | 2 | 16 |
2 | Bi | GRU | 8 | 2 | 16 | 2 | Uni | GRU | 7 | 2 | 8 | ||
3 | Bi | GRU | 10 | 2 | 16 | 3 | Uni | GRU | 7 | 2 | 16 |
The best three hyperparameter combinations for GM normalization
Index | Rank | Direction | Cell | q | L | n | Index | Rank | Direction | Cell | q | L | n |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
S&P | 1 | Bi | GRU | 8 | 2 | 16 | RUSSELL | 1 | Bi | LSTM | 10 | 4 | 16 |
2 | Bi | GRU | 10 | 2 | 8 | 2 | Bi | GRU | 5 | 8 | 16 | ||
3 | Uni | GRU | 10 | 2 | 16 | 3 | Bi | LSTM | 5 | 2 | 16 | ||
DJIA | 1 | Bi | GRU | 6 | 2 | 8 | NASDAQ | 1 | Uni | GRU | 6 | 4 | 16 |
2 | Uni | GRU | 10 | 2 | 8 | 2 | Uni | GRU | 9 | 4 | 16 | ||
3 | Bi | GRU | 10 | 4 | 8 | 3 | Uni | LSTM | 6 | 2 | 16 | ||
FTSE | 1 | Bi | GRU | 7 | 8 | 16 | CAC | 1 | Bi | GRU | 6 | 4 | 16 |
2 | Bi | GRU | 10 | 8 | 16 | 2 | Bi | GRU | 5 | 4 | 16 | ||
3 | Bi | GRU | 8 | 4 | 16 | 3 | Bi | GRU | 9 | 2 | 16 | ||
DAX | 1 | Uni | GRU | 6 | 4 | 16 | AEX | 1 | Bi | GRU | 6 | 2 | 16 |
2 | Bi | GRU | 7 | 4 | 16 | 2 | Bi | GRU | 6 | 4 | 16 | ||
3 | Bi | GRU | 8 | 4 | 16 | 3 | Uni | GRU | 6 | 2 | 16 | ||
SSMI | 1 | Uni | GRU | 6 | 2 | 16 | IBEX | 1 | Uni | GRU | 9 | 2 | 8 |
2 | Bi | GRU | 7 | 8 | 16 | 2 | Uni | GRU | 6 | 4 | 16 | ||
3 | Uni | GRU | 10 | 4 | 16 | 3 | Bi | GRU | 7 | 2 | 16 |
The best three hyperparameter combinations for PM normalization
Index | Rank | Direction | Cell | q | L | n | Index | Rank | Direction | Cell | q | L | n |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
S&P | 1 | Uni | GRU | 8 | 2 | 16 | RUSSELL | 1 | Bi | LSTM | 10 | 4 | 16 |
2 | Bi | GRU | 10 | 2 | 4 | 2 | Bi | LSTM | 6 | 4 | 16 | ||
3 | Uni | LSTM | 10 | 2 | 4 | 3 | Bi | LSTM | 5 | 2 | 16 | ||
DJIA | 1 | Bi | GRU | 6 | 8 | 16 | NASDAQ | 1 | Bi | GRU | 10 | 4 | 16 |
2 | Bi | GRU | 7 | 8 | 16 | 2 | Bi | GRU | 8 | 8 | 16 | ||
3 | Bi | LSTM | 9 | 4 | 16 | 3 | Bi | LSTM | 7 | 4 | 16 | ||
FTSE | 1 | Bi | GRU | 7 | 4 | 8 | CAC | 1 | Bi | GRU | 10 | 8 | 16 |
2 | Bi | GRU | 6 | 4 | 16 | 2 | Bi | GRU | 5 | 4 | 16 | ||
3 | Uni | GRU | 9 | 4 | 4 | 3 | Bi | GRU | 6 | 4 | 4 | ||
DAX | 1 | Uni | LSTM | 6 | 4 | 8 | AEX | 1 | Bi | LSTM | 7 | 4 | 16 |
2 | Uni | LSTM | 5 | 4 | 16 | 2 | Uni | LSTM | 10 | 2 | 8 | ||
3 | Bi | GRU | 6 | 4 | 16 | 3 | Bi | LSTM | 5 | 4 | 16 | ||
SSMI | 1 | Bi | LSTM | 10 | 2 | 16 | IBEX | 1 | Bi | GRU | 10 | 8 | 16 |
2 | Uni | LSTM | 6 | 2 | 8 | 2 | Uni | GRU | 6 | 4 | 8 | ||
3 | Uni | LSTM | 5 | 2 | 4 | 3 | Bi | GRU | 9 | 8 | 16 |
This work was supported by the National Research Foundation of Korea (NRF) grant funded by the Korea government (MSIT) (RS-2023-00239009, 2022R1F1A1068578, RS-2023-00242528).
Summary statistics of 100·RV; the last three columns report sample autocorrelations.
Index | Mean | SD | Skewness | CoV | Min | Median | Max | |||
---|---|---|---|---|---|---|---|---|---|---|
S&P | 0.92 | 0.69 | 15.16 | 0.58 | 0.11 | 0.62 | 29.66 | 0.82 | 0.65 | 0.29 |
RUSSELL | 1.01 | 0.64 | 14.76 | 0.39 | 0.20 | 0.73 | 27.65 | 0.81 | 0.63 | 0.30 |
DJIA | 0.90 | 0.63 | 16.14 | 0.57 | 0.14 | 0.63 | 30.47 | 0.78 | 0.63 | 0.27 |
NASDAQ | 0.86 | 0.69 | 16.41 | 0.48 | 0.18 | 0.65 | 24.81 | 0.82 | 0.63 | 0.29 |
FTSE | 0.76 | 0.53 | 17.78 | 0.45 | 0.21 | 0.57 | 23.63 | 0.83 | 0.68 | 0.36 |
CAC | 0.96 | 0.80 | 15.09 | 0.48 | 0.22 | 0.85 | 26.75 | 0.81 | 0.62 | 0.28 |
DAX | 0.99 | 0.82 | 14.88 | 0.50 | 0.19 | 0.83 | 27.69 | 0.81 | 0.63 | 0.23 |
AEX | 0.86 | 0.69 | 14.65 | 0.54 | 0.21 | 0.71 | 24.53 | 0.83 | 0.66 | 0.31 |
SSMI | 0.76 | 0.62 | 19.58 | 0.42 | 0.24 | 0.62 | 25.45 | 0.82 | 0.63 | 0.31 |
IBEX | 1.11 | 0.73 | 14.89 | 0.55 | 0.28 | 0.94 | 27.24 | 0.79 | 0.58 | 0.29 |
Summary statistics of the ratio of adjacent RV values, $\mathrm{RV}_t/\mathrm{RV}_{t-1}$; the last three columns report sample autocorrelations.
Index | Mean | SD | Skewness | CoV | Min | Median | Max | |||
---|---|---|---|---|---|---|---|---|---|---|
S&P | 1.07 | 0.41 | 3.71 | 0.36 | 0.27 | 0.99 | 8.96 | −0.34 | 0.01 | −0.01 |
RUSSELL | 1.06 | 0.47 | 2.79 | 0.33 | 0.23 | 0.99 | 6.58 | −0.35 | 0.02 | 0.01 |
DJIA | 1.08 | 0.46 | 3.30 | 0.37 | 0.26 | 1.01 | 8.17 | −0.37 | −0.01 | −0.02 |
NASDAQ | 1.04 | 0.35 | 3.26 | 0.36 | 0.22 | 0.99 | 6.58 | −0.30 | 0.01 | −0.01 |
FTSE | 1.04 | 0.34 | 6.23 | 0.28 | 0.18 | 1.00 | 8.82 | −0.31 | 0.06 | 0.00 |
CAC | 1.04 | 0.32 | 4.77 | 0.30 | 0.27 | 1.04 | 7.84 | −0.32 | 0.06 | −0.01 |
DAX | 1.04 | 0.34 | 3.29 | 0.33 | 0.31 | 0.99 | 6.21 | −0.33 | 0.04 | −0.01 |
AEX | 1.03 | 0.33 | 4.98 | 0.31 | 0.24 | 1.00 | 8.14 | −0.31 | 0.06 | −0.02 |
SSMI | 0.99 | 0.31 | 7.07 | 0.25 | 0.25 | 0.99 | 8.19 | −0.30 | 0.02 | −0.03 |
IBEX | 1.04 | 0.35 | 5.56 | 0.32 | 0.27 | 1.00 | 7.34 | −0.30 | 0.04 | −0.02 |
Hyperparameter settings for CV
Parameter | Settings |
---|---|
RNN direction | Uni, Bi |
RNN cell | LSTM, GRU |
Input length (q) | 3, 4, 5, 6, 7, 8, 9, 10 |
Number of layers (L) | 2, 4, 8, 16 |
Number of hidden units (n) | 4, 8, 16, 32 |
The MAPE(%) of the forecast models
Index | Benchmarking models | RNN-O | RNN-R | |||||||
---|---|---|---|---|---|---|---|---|---|---|
AR | SVM | DNN | CNN | MM | GM | PM | MM | GM | PM | |
S&P 500 | 28.96 | 53.84 | 70.16 | 48.93 | 32.72 | 37.95 | 30.22 | 26.65 | 24.01 | 22.97 |
RUSSELL 2000 | 24.96 | 31.33 | 35.43 | 27.34 | 24.27 | 25.00 | 22.06 | 24.17 | 23.25 | 19.80 |
DJIA | 28.36 | 49.88 | 66.92 | 48.34 | 42.23 | 41.37 | 44.02 | 26.34 | 23.97 | 22.59 |
NASDAQ | 25.68 | 43.24 | 44.42 | 34.50 | 28.87 | 29.54 | 22.88 | 24.35 | 22.72 | 21.54 |
FTSE 100 | 19.46 | 24.38 | 30.74 | 21.97 | 19.70 | 20.49 | 18.14 | 18.96 | 18.71 | 17.88 |
CAC 40 | 21.25 | 28.00 | 34.39 | 25.59 | 25.52 | 26.73 | 20.79 | 20.24 | 19.84 | 18.91 |
DAX | 22.51 | 29.25 | 38.46 | 28.09 | 42.78 | 29.24 | 33.14 | 22.64 | 22.09 | 21.12 |
AEX | 21.38 | 27.87 | 32.58 | 25.03 | 24.37 | 25.05 | 21.84 | 20.52 | 20.18 | 19.18 |
SSMI | 16.17 | 18.93 | 28.39 | 17.89 | 16.79 | 17.39 | 15.66 | 16.10 | 15.38 | 15.18 |
IBEX 35 | 20.42 | 25.17 | 35.29 | 27.37 | 25.05 | 27.95 | 21.95 | 20.61 | 20.07 | 18.96 |
The MAE × 1000 of the forecast models
Index | Benchmarking models | RNN-O | RNN-R | |||||||
---|---|---|---|---|---|---|---|---|---|---|
AR | SVM | DNN | CNN | MM | GM | PM | MM | GM | PM | |
S&P 500 | 1.18 | 1.85 | 2.27 | 1.63 | 1.25 | 1.34 | 1.18 | 1.16 | 1.09 | 1.09 |
RUSSELL 2000 | 1.37 | 1.71 | 1.72 | 1.44 | 1.32 | 1.33 | 1.28 | 1.37 | 1.35 | 1.34 |
DJIA | 1.17 | 1.76 | 2.24 | 1.65 | 1.49 | 1.45 | 1.53 | 1.14 | 1.09 | 1.08 |
NASDAQ | 1.25 | 1.83 | 1.81 | 1.52 | 1.31 | 1.32 | 1.16 | 1.23 | 1.19 | 1.16 |
FTSE 100 | 1.00 | 1.20 | 1.36 | 1.08 | 0.99 | 1.01 | 0.96 | 0.98 | 1.01 | 0.98 |
CAC 40 | 1.52 | 1.76 | 2.04 | 1.66 | 1.63 | 1.66 | 1.49 | 1.48 | 1.52 | 1.46 |
DAX | 1.57 | 1.86 | 2.11 | 1.71 | 2.27 | 1.72 | 1.91 | 1.56 | 1.59 | 1.55 |
AEX | 1.40 | 1.68 | 1.77 | 1.49 | 1.46 | 1.44 | 1.39 | 1.35 | 1.41 | 1.36 |
SSMI | 0.97 | 1.13 | 1.44 | 1.03 | 0.99 | 1.00 | 0.96 | 0.99 | 0.98 | 0.98 |
IBEX 35 | 1.88 | 2.25 | 2.69 | 2.20 | 2.09 | 2.24 | 1.99 | 1.90 | 1.96 | 1.84 |
The RMSE × 1000 of the forecast models
Index | Benchmarking models | RNN-O | RNN-R | |||||||
---|---|---|---|---|---|---|---|---|---|---|
AR | SVM | DNN | CNN | MM | GM | PM | MM | GM | PM | |
S&P 500 | 1.76 | 2.38 | 2.64 | 2.10 | 1.77 | 1.84 | 1.76 | 1.76 | 1.74 | 1.77 |
RUSSELL 2000 | 1.75 | 2.21 | 2.07 | 1.83 | 1.71 | 1.72 | 1.71 | 1.78 | 1.78 | 1.78 |
DJIA | 1.83 | 2.33 | 2.65 | 2.15 | 2.02 | 1.97 | 2.12 | 1.82 | 1.82 | 1.81 |
NASDAQ | 1.72 | 2.33 | 2.17 | 1.93 | 1.75 | 1.76 | 1.70 | 1.75 | 1.74 | 1.73 |
FTSE 100 | 1.54 | 1.78 | 1.79 | 1.62 | 1.52 | 1.54 | 1.52 | 1.51 | 1.57 | 1.53 |
CAC 40 | 2.41 | 2.63 | 2.75 | 2.50 | 2.45 | 2.48 | 2.43 | 2.35 | 2.50 | 2.38 |
DAX | 2.34 | 2.65 | 2.72 | 2.43 | 2.94 | 2.42 | 2.65 | 2.32 | 2.44 | 2.38 |
AEX | 2.43 | 2.63 | 2.67 | 2.45 | 2.42 | 2.41 | 2.46 | 2.35 | 2.54 | 2.41 |
SSMI | 1.53 | 1.77 | 1.89 | 1.57 | 1.54 | 1.56 | 1.54 | 1.54 | 1.59 | 1.60 |
IBEX 35 | 3.77 | 4.17 | 4.27 | 3.94 | 3.83 | 4.03 | 3.98 | 3.71 | 3.97 | 3.75 |