Validity of the stationary bootstrap of Politis and Romano (1994) is proved for U-statistics under strong mixing. Weak and strong consistency are established for the stationary bootstrap of U-statistics. The theory is applied to a symmetry test whose test statistic is a U-statistic based on a kernel density estimator. The theory enables construction of bootstrap confidence intervals for the means of U-statistics. A Monte-Carlo experiment on the bootstrap confidence intervals confirms the asymptotic theory.
U-statistics form a broad class of nonlinear functionals and have important applications in nonparametric estimation and testing problems. Distributional properties of U-statistics have been investigated by several authors. Hoeffding (1948) and Borovskikh (1998) considered normal approximations of U-statistics in the i.i.d. case, Yoshihara (1976) proved a CLT for absolutely regular processes using a generalized covariance inequality, and Denker and Keller (1986) studied functionals of absolutely regular processes. More recently, Dehling and Wendler (2010) showed that U-statistics of strong mixing data converge to a normal limit if the kernel of the U-statistic satisfies some moment and continuity conditions.
Several attempts have been made to approximate distributions of U-statistics by bootstrapping. For i.i.d. data, validity of the bootstrap for U-statistics was established by Bickel and Freedman (1981). For dependent data, it was recently studied by Dehling and Wendler (2010), who established the validity of the bootstrap for U-statistics by proving a CLT for the circular block bootstrap version (cf. Shao and Yu, 1993) of U-statistics under absolute regularity and strong mixing. Sharipov and Wendler (2011) also investigated the nonoverlapping block bootstrap (cf. Carlstein, 1986) of U-statistics for near epoch dependent sequences of strong mixing and absolutely regular processes.
We consider another version of the block bootstrap for U-statistics under strong mixing, namely the stationary bootstrap proposed by Politis and Romano (1994). The stationary bootstrap is an extension of the circular block bootstrap in which the block length is a random variable, typically geometric. Some important applications of stationary bootstrapping are found in Mudelsee (2003) and Hwang and Shin (2011, 2012a) regarding nonparametric estimation, and in Swensen (2003) and Parker et al. (2006) regarding nonstationary time series analysis. Lahiri (1999), Nordman (2009) and Hwang and Shin (2012b) analyzed further properties of the stationary bootstrap.
The original work of Politis and Romano (1994) presented fundamental consistency and weak convergence properties of the stationary bootstrap for the sample mean under strong mixing. For the sample mean, Goncalves and White (2002) established weak consistency of the stationary bootstrap applied to near epoch dependent processes under the assumption of finite sixth moments; Goncalves and de Jong (2003) subsequently improved their results, requiring the existence of only slightly more than second moments. For the sample mean and sample variance, Hwang and Shin (2012b) recently established strong consistency of the stationary bootstrap under ψ-weak dependence (cf. Doukhan and Louhichi, 1999), a more general class of weak dependence that includes mixing, association, Gaussian sequences and Bernoulli shifts.
In this paper, asymptotic analyses are made for U-statistics under strong mixing. Weak consistency of stationary bootstrapping is established under the same condition on the block length parameter that Politis and Romano (1994) used for the sample mean. Strong consistency is established under the more restrictive condition, with a faster decay rate of the parameter, that Hwang and Shin (2012b) used for the sample mean. Our results extend the stationary bootstrap studies of Hwang and Shin (2012b) for the sample mean and sample variance to U-statistics, while restricting attention to mixing processes. The results also extend Dehling and Wendler (2010), since the stationary bootstrap extends the circular block bootstrap by letting the block length be random rather than fixed.
The asymptotic theory is applied to a symmetry test, based on the goodness-of-fit tests of Fan and Ullah (1999), for the symmetry of the marginal distribution of a stationary process. The theory also enables us to construct bootstrap confidence intervals for means of U-statistics. A Monte-Carlo experiment on bootstrap confidence intervals for some important U-statistics verifies the asymptotic theory.
The remainder of the paper is organized as follows. U-statistics and the stationary bootstrap are described in Section 2. The main theoretical results and their applications are presented in Section 3, a Monte-Carlo result is given in Section 4, and technical results and proofs are given in Section 5.
Let (X_{t})_{t∈ℤ} be a strictly stationary sequence of random variables with common distribution function F(·). In a large class of statistical problems, parameters to be estimated are of the form θ = E[Φ(X_{1}, . . . , X_{m})] with a positive integer m and a Borel function Φ : ℝ^{m} → ℝ that is symmetric and satisfies E|Φ(X_{1}, . . . , X_{m})| < ∞. For n ≥ m, a U-statistic with kernel Φ of degree m, which is a symmetric unbiased estimator of θ, is defined as:
For simplicity of presentation, we deal with the bivariate case, i.e., m = 2. Examples of such U-statistics are the sample variance, the Wilcoxon statistic, and Gini's pairwise mean difference; see Section 4. Extension to the case m > 2 is straightforward and is briefly discussed. For m = 2, the U-statistic with kernel Φ, defined by
is decomposed into the linear part and the degenerate part as:
where
This decomposition, originally used by Hoeffding (1948), is a key tool for the analysis of the U-statistics. Under the conditions of Dehling and Wendler (2010), the linear part
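Since the displays are not reproduced above, it is worth recording the standard form of this decomposition for m = 2; the following is the classical Hoeffding form, consistent with the notation Φ_{1}, Φ_{2} and U_{n,2} used below:

```latex
U_n(\Phi) = \theta + \frac{2}{n}\sum_{i=1}^{n}\Phi_1(X_i) + U_{n,2},
\qquad
U_{n,2} = \frac{2}{n(n-1)}\sum_{1\le i<j\le n}\Phi_2(X_i, X_j),
```

with the projections

```latex
\Phi_1(x) = E[\Phi(x, X_1)] - \theta,
\qquad
\Phi_2(x, y) = \Phi(x, y) - \Phi_1(x) - \Phi_1(y) - \theta,
```

so that $E[\Phi_1(X_1)] = 0$ and $E[\Phi_2(x, X_1)] = 0$ for every $x$, i.e., $U_{n,2}$ is the degenerate part.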
The stationary bootstrap of Politis and Romano (1994) is applied to approximate the distributions of the U-statistics. The method resamples blocks of random lengths. The block lengths increase as the sample size n increases, so that the weak dependence structure is captured within the resampled blocks.
To describe the stationary bootstrap procedure, we first define a new time series {X_{ni} : i ≥ 1} by a periodic extension of the observed data set {X_{1}, . . . , X_{n}}. The sequence {X_{ni} : i ≥ 1} is obtained by wrapping the data X_{1}, . . . , X_{n} around a circle and relabeling them as X_{n1}, X_{n2}, . . . . For each i ≥ 1, define X_{ni} := X_{j}, where j ∈ {1, . . . , n} is such that i = qn + j for some integer q ≥ 0. Next, from the circularly extended data, the stationary bootstrap constructs bootstrap sample {
In the sequel, P^{*} and E^{*} denote the conditional probability and the conditional expectation, respectively, given X_{1}, . . . , X_{n}. We assume that the block length parameter p, the parameter of the geometric distribution of the block lengths, goes to 0 as n → ∞. The expected block length E^{*}L_{1} is 1/p, which tends to ∞ as n → ∞.
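The resampling scheme just described can be sketched as follows; this is an illustrative implementation, not the paper's code, and the function name is ours:

```python
import random

def stationary_bootstrap_sample(data, p, rng=None):
    """Draw one stationary bootstrap pseudo-series of length n = len(data).

    Blocks start at uniformly chosen positions I_i and have geometric
    lengths L_i with mean 1/p; the data are wrapped around a circle.
    """
    rng = rng or random.Random()
    n = len(data)
    sample = []
    while len(sample) < n:
        start = rng.randrange(n)          # I_i uniform on {0, ..., n-1}
        length = 1
        while rng.random() >= p:          # L_i ~ Geometric(p), mean 1/p
            length += 1
        for k in range(length):
            sample.append(data[(start + k) % n])   # circular extension
            if len(sample) == n:          # truncate to sample size n
                break
    return sample
```

Repeating this B times and recomputing the statistic of interest on each pseudo-series gives the bootstrap distribution used throughout the paper.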
We present our main results on the weak and the strong consistency. The CLT for U-statistics of strong mixing data was established in Dehling and Wendler (2010). In proving our results we need their CLT, and hence we adopt their conditions, under which the kernel Φ is 𝒫-Lipschitz-continuous. We now state the strong mixing condition and 𝒫-Lipschitz-continuity. A stationary sequence (X_{n})_{n∈ℤ} of random variables is called strong mixing if
where α(Y, Z) = sup{|P(A ∩ B) − P(A)P(B)| : A ∈ σ(Y), B ∈ σ(Z)}.
A kernel Φ is called 𝒫-Lipschitz-continuous if there is a constant L > 0 such that
for every ε > 0 and for all pairs (X, Y) and (X′, Y) with common distribution 𝒫_{X_{1},X_{k}} for some k ∈ ℕ, or 𝒫_{X_{1}} × 𝒫_{X_{1}}. If Φ is Lipschitz-continuous, then it is also 𝒫-Lipschitz-continuous. According to Example 1.5 of Dehling and Wendler (2010), the kernel corresponding to the sample variance is 𝒫-Lipschitz-continuous. It is easy to show that the kernels corresponding to the Wilcoxon statistic and Gini's pairwise mean difference are also 𝒫-Lipschitz-continuous.
Now we introduce the bootstrapped U-statistic based on the stationary bootstrap sample {
The following two theorems state the weak and the strong consistency, respectively, of the bootstrap distribution of the stationary bootstrapped U-statistics. All asymptotic results in this work are for n → ∞.
Let (X_{n})_{n∈ℤ} be a stationary strong mixing process and Φ a 𝒫-Lipschitz-continuous kernel such that E|X_{1}|^{γ} < ∞ for some γ > 0 and
for some δ > 0 and M > 0. If α(n) = O(n^{−ρ}) for ρ > (3γδ + δ + 5γ + 2)/(2γδ), and if p → 0 and np → ∞, then
and
In Theorem 1, the conditions of 𝒫-Lipschitz-continuity, moment bounds and algebraic strong mixing ensure the CLT of the U-statistics, which is the result of Dehling and Wendler (2010, Theorem 1.8), while the conditions on p yield the validity of the stationary bootstrap of the U-statistics, as in Politis and Romano (1994, Theorems 1 and 2) for the sample mean.
Let (X_{n})_{n∈ℤ} be a stationary strong mixing process and Φ a 𝒫-Lipschitz-continuous kernel such that E|X_{1}|^{γ+ν} < ∞ for some γ > 2 and 0 < ν < 1, and
for some r > 2, δ > 0 and M > 0. If α(n) = O(n^{−ρ}) for ρ > max{γ(γ + ν)/(2ν), (3γδ + δ + 5γ + 2)/(2γδ)}, and if p = cn^{−(ν−2ε)/(2+ν)} for some c > 0 and 0 < ε < ν/2, then
and
In Theorem 2, the conditions E|X_{1}|^{γ+ν} < ∞ and α(n) = O(n^{−ρ}) for ρ > γ(γ + ν)/(2ν) give a moment inequality, which plays a key role in proving the almost sure convergence of the stationary bootstrapped mean and U-statistics; see Hwang and Shin (2012b) as well as Theorem 3 below. Theorems 3.1 and 3.2 of Hwang and Shin (2012b) show that the condition p = cn^{−(ν−2ε)/(2+ν)} is critical for the almost sure convergence.
It may be shown that the above theorems hold for a sequence of strong mixing stationary random vectors {X_{t} ∈ ℝ^{d}} and for U-statistics with kernel Φ : ℝ^{md} → ℝ of degree m. See Hoeffding (1948) and Denker (1985) for the Hoeffding decomposition of U-statistics U_{n}(Φ), where Φ : 𝒳^{m} → ℝ is a symmetric function of degree m and 𝒳 is a measurable space: the Hoeffding decomposition of (
As an application of bootstrap U-statistics, we consider a symmetry test for the distribution of X_{t}, which is based on the goodness-of-fit tests of Fan and Ullah (1999). Assume that the stationary strong mixing process (X_{t})_{t∈ℤ} has Lebesgue density f_{X}. We are interested in testing the null hypothesis H_{0}: f_{X}(u) = f_{X}(−u) almost everywhere, against the alternative hypothesis H_{1}: f_{X}(u) ≠ f_{X}(−u) on a set of positive measure. The density f_{X}(u) is estimated by the kernel density estimator f̂_{X}(u) defined as
According to Theorem 1, we have
where
Finite sample performances of the confidence interval (
which correspond to θ_{(1)} = the variance of X_{t}, θ_{(2)} = the mean of the one-sided Wilcoxon statistic, and θ_{(3)} = Gini's pairwise mean difference, respectively, where I(·) is the indicator function.
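For reference, the three degree-2 kernels and the corresponding U-statistic can be evaluated directly. The following is an illustrative O(n²) sketch; the kernel for θ_{(2)} is written as I(x + y ≤ 0), which is an assumption on our part since the defining displays are not reproduced above:

```python
from itertools import combinations

def u_statistic(data, kernel):
    """U-statistic of degree 2: average of the kernel over all pairs i < j."""
    pairs = list(combinations(data, 2))
    return sum(kernel(x, y) for x, y in pairs) / len(pairs)

# Kernels for the three parameters considered in the text:
def phi_variance(x, y):          # theta_(1): variance of X_t
    return (x - y) ** 2 / 2

def phi_wilcoxon(x, y):          # theta_(2): one-sided Wilcoxon kernel
    return 1.0 if x + y <= 0 else 0.0   # assumed form I(x + y <= 0)

def phi_gini(x, y):              # theta_(3): Gini's pairwise mean difference
    return abs(x - y)
```

With the variance kernel, `u_statistic` reproduces the usual unbiased sample variance, as the Hoeffding decomposition in Section 2 suggests.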
Confidence intervals of θ_{(1)}, θ_{(2)} and θ_{(3)} are constructed using (
where e_{t} is a sequence of iid N(0, 1) random variables. These models are denoted by AR-1, AR-2; TAR-1, TAR-2; GARCH-1, GARCH-2, respectively. The models AR-1 and AR-2 are AR(1) models; TAR-1 and TAR-2 are threshold AR(1) models; GARCH-1 and GARCH-2 are GARCH(1, 1) models. Compared with AR-1 and TAR-1, the models AR-2 and TAR-2 have stronger serial correlation; compared with GARCH-1, the model GARCH-2 has stronger conditional heteroscedasticity.
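Since the model equations are not reproduced above, the following sketch only illustrates the three model types; the coefficient values (0.5, 1.0, 0.1, 0.8, etc.) are placeholders of ours, not the paper's:

```python
import math
import random

def simulate(model, n, burn=20, seed=0):
    """Generate n observations from an AR(1), threshold AR(1), or
    GARCH(1,1) recursion driven by iid N(0,1) innovations e_t,
    with a burn-in period started at X_{-burn} = 0."""
    rng = random.Random(seed)
    x, h = 0.0, 1.0
    out = []
    for t in range(-burn, n):
        e = rng.gauss(0.0, 1.0)
        if model == "AR":        # X_t = rho * X_{t-1} + e_t
            x = 0.5 * x + e
        elif model == "TAR":     # different AR slopes on each side of 0
            x = (0.5 if x > 0 else -0.5) * x + e
        elif model == "GARCH":   # h_t = w + a*X_{t-1}^2 + b*h_{t-1}; X_t = sqrt(h_t)*e_t
            h = 1.0 + 0.1 * x * x + 0.8 * h
            x = math.sqrt(h) * e
        if t >= 0:
            out.append(x)
    return out
```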
Data X_{t}, t = −20, . . . , 0, 1, . . . , n, are generated using X_{−20} = 0 and e_{t}, t = −20, . . . , n, generated by RNNOA, an IMSL FORTRAN subroutine, and the data set {X_{t}, t = 1, . . . , n} is used for computing statistics. We consider n = 100, 400, 1600. Confidence intervals of θ_{(1)}, θ_{(2)}, θ_{(3)} with 90% nominal coverage probability are constructed using stationary bootstrapping (SB, in the sequel) and circular block bootstrapping (CBB, in the sequel) with B = 500 bootstrap repetitions. For SB, the uniform random variables I_{i} are generated by the IMSL subroutine RNUND and the L_{i} are generated from the geometric distribution via the IMSL subroutine RNGEO. The block length parameter p for SB is chosen by
and L for CBB is chosen to be an integer close to p^{−1}. The order n^{−1/3} is chosen because it is the optimal order for the sample mean, as verified by Politis and White (2004) and Patton et al. (2009).
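The tuning rule can be computed directly. In the sketch below the constant c is illustrative, chosen only to be consistent with the table values (e.g. p = .08 at n = 100 corresponds to c ≈ 0.37):

```python
def tuning(n, c=0.37):
    """Block-length tuning: p of order n^(-1/3) for SB, and L ~ 1/p for CBB.

    The proportionality constant c is an assumption for illustration,
    back-solved from the tabulated p values, not a recommendation.
    """
    p = c * n ** (-1.0 / 3.0)       # SB geometric parameter
    L = max(1, round(1.0 / p))      # CBB block length, integer near 1/p
    return p, L
```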
For each θ = θ_{(1)}, θ_{(2)}, θ_{(3)}, empirical coverage probabilities of the bootstrap confidence intervals are computed as the relative frequencies, out of 1000 repetitions, with which the true θ is contained in the corresponding confidence interval. For the AR(1) models, θ_{(1)} = 1/(1 − ρ^{2}). For the GARCH(1, 1) models, θ_{(1)} = 1/(1 − α − β). Expressions for the true parameters θ_{(i)} are not known in the other cases; they are therefore computed as averages of 100 Monte-Carlo simulated values of U_{(i)} with large n = 5000. The Monte-Carlo values should be close to the true values because, according to Theorem 1.8 of Dehling and Wendler (2010), U_{(i)} are consistent for θ_{(i)}.
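The empirical coverage computation amounts to counting, over the Monte-Carlo repetitions, how often the true parameter falls inside the interval; a minimal sketch:

```python
def empirical_coverage(theta_true, intervals):
    """Empirical coverage (%) of a list of (lower, upper) confidence
    intervals, one per Monte-Carlo repetition, for the true parameter."""
    hits = sum(1 for lo, hi in intervals if lo <= theta_true <= hi)
    return 100.0 * hits / len(intervals)
```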
Table 1 shows empirical coverage probabilities (%) and average lengths of the 90% bootstrap confidence intervals. Coverage probability performances of the confidence intervals differ substantially across parameters and data generating processes (DGP). For all combinations of the 6 DGPs and 3 parameters, coverage probabilities get closer to the nominal coverage probability 90% as n increases from 100 to 1600, which confirms the asymptotic normality (
Relative performances of the two bootstrap methods (CBB and SB) are almost the same, giving similar coverage probabilities and average lengths for all models and all parameters. Performances of the confidence intervals are insensitive to the block length parameters p (and L), in that the two values of p (and L) yield similar coverage probabilities and average lengths.
For the AR(1) and threshold AR(1) models, coverage performances are similar for all 3 parameters: if serial correlation is not strong, as in AR-1 and TAR-1, coverage probabilities are reasonably close to 90% for all n = 100, 400, 1600; if serial correlation is strong, as in AR-2 and TAR-2, all 3 confidence intervals under-cover, and n needs to be as large as 1600 for coverage probabilities to exceed 80%.
For the weakly conditionally heteroscedastic model GARCH-1, coverage performances are reasonable for all 3 parameters. For the strongly conditionally heteroscedastic model GARCH-2, coverage probabilities are acceptable only for θ_{(2)}, with all n = 100, 400, 1600, and for θ_{(3)}, with n = 400, 1600.
Let U_{n,2} and ${U}_{n,2}^{*}$ denote the degenerate parts of the U-statistic U_{n} and of the bootstrapped U-statistic ${U}_{n}^{*}$, respectively.
The following two lemmas are needed to show our Theorems 1 and 2.
(
where
Under the same assumptions as in Theorem 1, we have
where η is as in (
We have
To find upper bounds of
$|E[{E}^{*}[{\mathrm{\Phi}}_{2}({X}_{{i}_{1}}^{*},{X}_{{i}_{2}}^{*}){\mathrm{\Phi}}_{2}({X}_{{i}_{3}}^{*},{X}_{{i}_{4}}^{*})]]|$, we consider the following five cases:
Case I = {i_{1}, i_{2}, i_{3} and i_{4} lie in all different blocks},
Case II = {two of them lie in the same block and others are in different blocks},
Case III = {three of them lie in the same block and the other is in a different block},
Case IV = {two of them lie in the same block and others are in another same block},
Case V = {all i_{1}, i_{2}, i_{3} and i_{4} lie in the same block}.
Note that the probability of each case is less than or equal to 1, so we do not need to calculate these probabilities to obtain the upper bound of
$|E[{E}^{*}[{\mathrm{\Phi}}_{2}({X}_{{i}_{1}}^{*},{X}_{{i}_{2}}^{*}){\mathrm{\Phi}}_{2}({X}_{{i}_{3}}^{*},{X}_{{i}_{4}}^{*})]]|$
In Case I, we have
thus we get
In Case II, if i_{1} and i_{2} lie in the same block with i_{2} − i_{1} = k for some k > 0, we have
$|E[{E}^{*}[{\mathrm{\Phi}}_{2}({X}_{{i}_{1}}^{*},{X}_{{i}_{2}}^{*}){\mathrm{\Phi}}_{2}({X}_{{i}_{3}}^{*},{X}_{{i}_{4}}^{*})]]|$
Taking the summation over such i_{2}, we get
${\sum}_{{i}_{2}}|E[{E}^{*}[{\mathrm{\Phi}}_{2}({X}_{{i}_{1}}^{*},{X}_{{i}_{2}}^{*}){\mathrm{\Phi}}_{2}({X}_{{i}_{3}}^{*},{X}_{{i}_{4}}^{*})]]|$
and thus
In Case III, if i_{1}, i_{2}, i_{3} lie in the same block with i_{2} = i_{1} + k and i_{3} = i_{1} + l for some k, l > 0, we have
$|E[{E}^{*}[{\mathrm{\Phi}}_{2}({X}_{{i}_{1}}^{*},{X}_{{i}_{2}}^{*}){\mathrm{\Phi}}_{2}({X}_{{i}_{3}}^{*},{X}_{{i}_{4}}^{*})]]|$
Similarly, we can get
In similar ways, we can obtain the same results for other cases. Thus by (
Using the Hoeffding decomposition,
By Theorem 1 of
By Lemma 1 and Lemma 2,
$\text{Var}[\sqrt{n}{U}_{n,2}]\to 0$ and
${\text{Var}}^{*}[\sqrt{n}{U}_{n,2}^{*}]\stackrel{\text{p}}{\to}0$.
We now verify the same limiting distributions of U-statistics. By Lemma 1 and Slutsky’s Theorem, we have
Also, by Lemma 2 and Slutsky’s Theorem, we have
According to Theorem 2 of
and by the triangle inequality we obtain the convergence in probability in (
Now Theorem 3 and Lemma 3 below are stated and proved. They are used to prove our main result in Theorem 2.
Let (X_{n})_{n∈ℤ} be a stationary strong mixing process with EX_{1} = μ and E|X_{1}|^{γ+ν} < ∞ for some γ > 2 and 0 < ν < 1. If α(n) = O(n^{−ρ}) for some ρ > γ(γ + ν)/(2ν) and if p = cn^{−(ν−2ε)/(2+ν)} for some 0 < c < ∞ and 0 < ε < ν/2, then we have
and
Under strong mixing, the proof is similar to that under ψ-weak dependence, which includes mixing as well as other more general dependent sequences such as Bernoulli shifts and association. See
For a sequence of random variables {ξ_{1}, ξ_{2}, . . .}, let ζ_{n} = f(ξ_{1}, . . . , ξ_{n}) and η_{n} = g(ξ_{1}, . . . , ξ_{n}) for some functions f and g with E[ζ_{n}] = 0 and E[η_{n}] = 0. If
$E[{({\zeta}_{n}-{\eta}_{n})}^{2}\mid{\xi}^{(n)}]\stackrel{\text{a}.\text{s}.}{\to}0$
For notational simplicity, we define random variables Z_{n}(·), V_{n}(x, ·) and W_{n}(x, ·) by Z_{n} = E[(ζ_{n} − η_{n})^{2} | ξ^{(n)}], V_{n}(x, ·) = P(ζ_{n} ≤ x | ξ^{(n)}) and W_{n}(x, ·) = P(η_{n} ≤ x | ξ^{(n)}). If
${Z}_{n}\stackrel{\text{a}.\text{s}.}{\to}0$
${\zeta}_{n}\stackrel{\text{d}}{\to}?$
${\zeta}_{n}-{\eta}_{n}\stackrel{\text{p}}{\to}0$
${\eta}_{n}=({\eta}_{n}-{\zeta}_{n})+{\zeta}_{n}\stackrel{\text{d}}{\to}?$
To verify (
By Lemma 1,
$\text{Var}[\sqrt{n}{U}_{n,2}]\to 0$
${\text{Var}}^{*}[\sqrt{n}{U}_{n,2}^{*}]\stackrel{\text{a}.\text{s}.}{\to}0$
of which the right-hand term will be shown to converge to zero almost surely under our assumptions. For any ϵ > 0, by Chebyshev's inequality, we have
By Hölder's inequality, ||f g||_{1} ≤ ||f||_{p} · ||g||_{q} for 1/p + 1/q = 1, the expectation in the last expression is less than or equal to
We can observe
$E{|{\sum}_{{i}_{1},{i}_{2}=1}^{n}{\mathrm{\Phi}}_{2}({X}_{{i}_{1}},{X}_{{i}_{2}})|}^{r}=O({n}^{r+1-{\eta}^{\prime}})$
for some 0 < η′ < 1 and 0 < η″ < 1. Therefore,
Since these probabilities are summable, the almost sure convergence of ${\text{Var}}^{*}[\sqrt{n}{U}_{n,2}^{*}]$ to zero follows from the Borel–Cantelli lemma.
To verify (
Since ${\text{Var}}^{*}[\sqrt{n}{U}_{n,2}^{*}]\stackrel{\text{a}.\text{s}.}{\to}0$, we apply Lemma 3 with ${\zeta}_{n}=\sqrt{n}({U}_{n}^{*}-\theta )$ and ${\eta}_{n}=(2/\sqrt{n}){\sum}_{i=1}^{n}{\mathrm{\Phi}}_{1}({X}_{i}^{*})$.
Along with (
Coverage probabilities and average lengths of bootstrap confidence intervals.
| | | | | Coverage Probability (%) | | | | | | Average Length | | | | | |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| | | | | Variance | | Wilcoxon | | Mean Diff. | | Variance | | Wilcoxon | | Mean Diff. | |
| Model | n | p | L | CBB | SB | CBB | SB | CBB | SB | CBB | SB | CBB | SB | CBB | SB |
AR-1 | 100 | .08 | 12 | 75 | 74 | 79 | 76 | 81 | 78 | .639 | .612 | .274 | .257 | .332 | .317 |
AR-1 | 100 | .16 | 6 | 78 | 77 | 80 | 78 | 84 | 82 | .657 | .639 | .269 | .264 | .340 | .331 |
AR-1 | 400 | .05 | 20 | 86 | 84 | 87 | 86 | 85 | 85 | .369 | .362 | .152 | .148 | .184 | .180 |
AR-1 | 400 | .10 | 10 | 87 | 87 | 84 | 83 | 89 | 89 | .369 | .363 | .148 | .146 | .184 | .182 |
AR-1 | 1600 | .03 | 33 | 91 | 89 | 88 | 87 | 85 | 85 | .193 | .191 | .078 | .077 | .095 | .094 |
AR-1 | 1600 | .06 | 17 | 89 | 88 | 89 | 89 | 85 | 85 | .194 | .193 | .077 | .077 | .096 | .095 |
AR-2 | 100 | .08 | 12 | 51 | 50 | 55 | 56 | 59 | 58 | 3.360 | 3.262 | .400 | .398 | .991 | .955 |
AR-2 | 100 | .16 | 6 | 49 | 50 | 52 | 55 | 57 | 58 | 3.166 | 3.201 | .340 | .377 | .912 | .928 |
AR-2 | 400 | .05 | 20 | 74 | 73 | 75 | 76 | 81 | 80 | 2.754 | 2.712 | .290 | .296 | .717 | .707 |
AR-2 | 400 | .10 | 10 | 72 | 72 | 65 | 69 | 76 | 77 | 2.475 | 2.527 | .239 | .264 | .645 | .662 |
AR-2 | 1600 | .03 | 33 | 83 | 83 | 82 | 82 | 86 | 87 | 1.616 | 1.615 | .169 | .172 | .408 | .408 |
AR-2 | 1600 | .06 | 17 | 81 | 82 | 77 | 80 | 82 | 82 | 1.545 | 1.562 | .147 | .158 | .387 | .392 |
TAR-1 | 100 | .08 | 12 | 74 | 73 | 79 | 76 | 79 | 76 | .664 | .637 | .270 | .253 | .342 | .326 |
TAR-1 | 100 | .16 | 6 | 78 | 77 | 79 | 78 | 83 | 81 | .684 | .666 | .264 | .260 | .350 | .341 |
TAR-1 | 400 | .05 | 20 | 85 | 83 | 87 | 86 | 85 | 84 | .388 | .380 | .150 | .146 | .191 | .187 |
TAR-1 | 400 | .10 | 10 | 87 | 87 | 84 | 83 | 89 | 88 | .387 | .381 | .146 | .144 | .191 | .188 |
TAR-1 | 1600 | .03 | 33 | 90 | 89 | 88 | 87 | 86 | 86 | .203 | .201 | .077 | .076 | .099 | .098 |
TAR-1 | 1600 | .06 | 17 | 90 | 89 | 89 | 88 | 86 | 85 | .205 | .204 | .077 | .076 | .099 | .099 |
TAR-2 | 100 | .08 | 12 | 43 | 42 | 51 | 52 | 52 | 50 | 4.076 | 3.980 | .352 | .354 | 1.106 | 1.073 |
TAR-2 | 100 | .16 | 6 | 44 | 44 | 50 | 52 | 49 | 50 | 3.798 | 3.877 | .293 | .329 | 1.002 | 1.032 |
TAR-2 | 400 | .05 | 20 | 67 | 67 | 72 | 74 | 72 | 72 | 3.791 | 3.766 | .258 | .267 | .876 | .875 |
TAR-2 | 400 | .10 | 10 | 62 | 63 | 62 | 66 | 69 | 71 | 3.248 | 3.393 | .211 | .236 | .756 | .796 |
TAR-2 | 1600 | .03 | 33 | 81 | 81 | 80 | 80 | 84 | 84 | 2.392 | 2.419 | .152 | .157 | .531 | .538 |
TAR-2 | 1600 | .06 | 17 | 74 | 76 | 75 | 79 | 77 | 77 | 2.187 | 2.279 | .132 | .144 | .482 | .503 |
GARCH-1 | 100 | .08 | 12 | 73 | 71 | 82 | 81 | 80 | 78 | 1.613 | 1.532 | .175 | .166 | .564 | .534 |
GARCH-1 | 100 | .16 | 6 | 74 | 74 | 87 | 85 | 80 | 78 | 1.584 | 1.562 | .182 | .177 | .554 | .546 |
GARCH-1 | 400 | .05 | 20 | 81 | 80 | 88 | 88 | 87 | 84 | .997 | .975 | .093 | .090 | .329 | .321 |
GARCH-1 | 400 | .10 | 10 | 82 | 81 | 88 | 87 | 87 | 85 | .948 | .935 | .093 | .092 | .315 | .312 |
GARCH-1 | 1600 | .03 | 33 | 88 | 88 | 88 | 87 | 88 | 87 | .552 | .547 | .047 | .046 | .174 | .173 |
GARCH-1 | 1600 | .06 | 17 | 86 | 86 | 90 | 90 | 86 | 86 | .542 | .543 | .047 | .047 | .172 | .171 |
GARCH-2 | 100 | .08 | 12 | 41 | 41 | 83 | 81 | 62 | 60 | 13.221 | 12.503 | .175 | .165 | 1.647 | 1.596 |
GARCH-2 | 100 | .16 | 6 | 38 | 40 | 87 | 86 | 58 | 59 | 9.419 | 9.826 | .182 | .177 | 1.422 | 1.481 |
GARCH-2 | 400 | .05 | 20 | 50 | 50 | 87 | 86 | 73 | 73 | 14.767 | 14.209 | .093 | .090 | 1.268 | 1.255 |
GARCH-2 | 400 | .10 | 10 | 45 | 45 | 87 | 87 | 68 | 68 | 8.025 | 8.319 | .093 | .092 | 1.022 | 1.065 |
GARCH-2 | 1600 | .03 | 33 | 57 | 56 | 88 | 87 | 83 | 82 | 7.606 | 7.738 | .047 | .046 | .760 | .764 |
GARCH-2 | 1600 | .06 | 17 | 55 | 55 | 91 | 91 | 81 | 80 | 8.025 | 8.366 | .047 | .047 | .697 | .718 |
Note: Number of replications = 1,000, number of bootstrap repetitions = 500.