Stationary Bootstrap for U-Statistics under Strong Mixing
Commun. Stat. Appl. Methods 2015;22:81-93
Published online January 31, 2015
© 2015 Korean Statistical Society.

Eunju Hwang (a) and Dong Wan Shin (b)

(a) Department of Applied Statistics, Gachon University, Korea; (b) Department of Statistics, Ewha Womans University, Korea
Correspondence to: Dong Wan Shin
Department of Statistics, Ewha Womans University, Seoul 120-750, Korea. E-mail: shindw@ewha.ac.kr
Received December 1, 2014; Revised December 27, 2014; Accepted December 27, 2014.
Abstract

Validity of the stationary bootstrap of Politis and Romano (1994) is proved for U-statistics under strong mixing. Weak and strong consistencies are established for the stationary bootstrap of U-statistics. The theory is applied to a symmetry test which is a U-statistic based on a kernel density estimator. The theory enables construction of bootstrap confidence intervals for the means of U-statistics. A Monte Carlo experiment for bootstrap confidence intervals confirms the asymptotic theory.

Keywords : Stationary bootstrap, U-statistic, strong mixing, strong consistency, weak consistency, Monte Carlo study
1. Introduction

U-statistics form a broad class of nonlinear functionals and have important applications in nonparametric estimation and testing problems. Distributional properties of U-statistics were investigated by several authors. Hoeffding (1948) and Borovskikh (1998) considered normal approximations of U-statistics in the i.i.d. case; Yoshihara (1976) showed a CLT for absolutely regular processes using a generalized covariance inequality; and Denker and Keller (1986) studied functionals of absolutely regular processes. Dehling and Wendler (2010) recently showed that U-statistics of strong mixing data converge to a normal limit if the kernel of the U-statistic fulfills some moment and continuity conditions.

Some attempts have been made to approximate distributions of U-statistics by bootstrapping. For the i.i.d. case, validity of the bootstrap for U-statistics was established by Bickel and Freedman (1981). For dependent data, it was recently studied by Dehling and Wendler (2010), who established the validity of the bootstrap for U-statistics by proving a CLT for the circular block bootstrap version (cf. Shao and Yu, 1993) of U-statistics under absolute regularity and strong mixing. Sharipov and Wendler (2011) also investigated the nonoverlapping block bootstrap (cf. Carlstein, 1986) for U-statistics of near epoch dependent sequences of strong mixing and absolutely regular processes.

We consider another version of the block bootstrap for U-statistics under strong mixing, namely the stationary bootstrap proposed by Politis and Romano (1994). The stationary bootstrap is an extension of the circular block bootstrap that allows the block length to be a random variable, such as a geometric random variable. Some important applications of stationary bootstrapping are found in Mudelsee (2003) and Hwang and Shin (2011, 2012a) regarding nonparametric estimation, and in Swensen (2003) and Parker et al. (2006) regarding nonstationary time series analysis. Lahiri (1999), Nordman (2009) and Hwang and Shin (2012b) analyzed further properties of the stationary bootstrap.

The original work of Politis and Romano (1994) presented fundamental consistency and weak convergence properties of the stationary bootstrap for the sample mean under strong mixing. For the sample mean, Goncalves and White (2002) established weak consistency of the stationary bootstrap applied to near epoch dependent processes under the assumption of finite sixth moments; Goncalves and de Jong (2003) subsequently improved their results under the existence of only slightly more than second moments. For the sample mean and sample variance, Hwang and Shin (2012b) recently established strong consistency of the stationary bootstrap under ψ-weak dependence (cf. Doukhan and Louhichi, 1999), a more general class of weak dependence that includes mixing, association, Gaussian sequences and Bernoulli shifts.

In this paper, asymptotic analyses are made for U-statistics under strong mixing. Weak consistency of stationary bootstrapping will be established under the same condition on the block length parameter that Politis and Romano (1994) used to analyze the sample mean. Strong consistency will be established under the more restrictive condition on the parameter, with a faster decay rate, that Hwang and Shin (2012b) used to analyze the sample mean. Our results extend the stationary bootstrap studies of Hwang and Shin (2012b) for the sample mean and sample variance to U-statistics, while restricting to mixing processes. The results also extend those of Dehling and Wendler (2010), since the stationary bootstrap extends the circular block bootstrap by allowing the block length to be random rather than fixed.

The asymptotic theory is applied to a test, based on the goodness-of-fit tests of Fan and Ullah (1999), for the symmetry of the distribution of stationary processes. The asymptotic theory also enables us to construct bootstrap confidence intervals for means of U-statistics. A Monte Carlo experiment on bootstrap confidence intervals for some important U-statistics verifies the asymptotic theory.

The remainder of the paper is organized as follows. The U-statistics and the stationary bootstrap are described in Section 2. The main theoretical results and their applications are presented in Section 3, a Monte Carlo result is given in Section 4, and technical results and proofs are given in Section 5.

2. U-Statistics and Stationary Bootstrap

Let (X_t)_{t∈ℤ} be a strictly stationary sequence of random variables with common distribution function F(·). In a large class of statistical problems, the parameters to be estimated are of the form θ = E[h(X_1, . . . , X_m)] with a positive integer m and a Borel function h : ℝ^m → ℝ that is symmetric and satisfies E|h(X_1, . . . , X_m)| < ∞. For n ≥ m, a U-statistic with kernel h of degree m, which is a symmetric unbiased estimator of θ, is defined as:

U_n = \binom{n}{m}^{-1} \sum_{1 \le i_1 < \cdots < i_m \le n} h(X_{i_1}, \ldots, X_{i_m}).    (2.1)

For simplicity of presentation, we deal with the bivariate case, i.e., m = 2. Examples of such U-statistics are the sample variance, the Wilcoxon statistic, and Gini's pairwise mean difference; see Section 4. Extension to the case m > 2 is straightforward and is briefly discussed. For m = 2, the U-statistic with kernel h defined by

U_n = \frac{2}{n(n-1)} \sum_{1 \le i < j \le n} h(X_i, X_j)

is decomposed into the linear part and the degenerate part as:

U_n = \theta + \frac{2}{n} \sum_{i=1}^{n} h_1(X_i) + \frac{2}{n(n-1)} \sum_{1 \le i < j \le n} h_2(X_i, X_j),

where

h_1(x) = E[h(x, X_2)] - \theta \qquad \text{and} \qquad h_2(x, y) = h(x, y) - h_1(x) - h_1(y) - \theta.

This decomposition, originally used by Hoeffding (1948), is a key tool for the analysis of U-statistics. Under the conditions of Dehling and Wendler (2010), the linear part (2/n) Σ_{i=1}^{n} h_1(X_i) has a normal limiting distribution, and the degenerate part (2/(n(n−1))) Σ_{1≤i<j≤n} h_2(X_i, X_j) converges to zero in probability.
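As a concrete numerical illustration (not from the paper; the helper names are our own), the degree-2 U-statistic can be computed directly, and for the sample-variance kernel h(x, y) = (x − y)²/2 it coincides with the unbiased sample variance:

```python
import numpy as np

def u_statistic(x, kernel):
    """Degree-2 U-statistic: U_n = 2/(n(n-1)) * sum_{i<j} h(X_i, X_j)."""
    n = len(x)
    i, j = np.triu_indices(n, k=1)            # all index pairs with i < j
    return 2.0 / (n * (n - 1)) * kernel(x[i], x[j]).sum()

# Sample-variance kernel h(x, y) = (x - y)^2 / 2
h_var = lambda a, b: 0.5 * (a - b) ** 2

rng = np.random.default_rng(0)
x = rng.normal(size=200)
# With this kernel, U_n equals the unbiased sample variance
print(np.isclose(u_statistic(x, h_var), x.var(ddof=1)))  # True
```

The identity holds because Σ_{i<j}(x_i − x_j)² = n Σ_i (x_i − x̄)², so dividing by n(n−1) recovers the usual variance estimator with divisor n − 1.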

The stationary bootstrap of Politis and Romano (1994) is applied to approximate the distributions of the U-statistics. The method resamples blocks of random lengths; the block lengths increase with the sample size n so that the weakly dependent structure is captured in the sampled blocks.

To describe the stationary bootstrap procedure, we first define a new time series {X_{ni} : i ≥ 1} by a periodic extension of the observed data set {X_1, . . . , X_n}. The sequence {X_{ni} : i ≥ 1} is obtained by wrapping the data X_1, . . . , X_n around a circle and relabeling them as X_{n1}, X_{n2}, . . .: for each i ≥ 1, define X_{ni} := X_j, where j ∈ {1, . . . , n} is such that i = qn + j for some integer q ≥ 0. Next, from the circularly extended data, the stationary bootstrap constructs a bootstrap sample {X_1^*, . . . , X_n^*} by combining blocks {X_{I_1}, . . . , X_{I_1+L_1−1}}, {X_{I_2}, . . . , X_{I_2+L_2−1}}, . . . , with starting points I_1, I_2, . . . being i.i.d. uniform on {1, 2, . . . , n} and block lengths L_1, L_2, . . . being i.i.d. positive random variables. Stationary bootstrapping therefore extends circular block bootstrapping, in which L_1 = L_2 = · · · = ℓ for some integer ℓ > 0. Let p = 1/E(L_1). The distribution of the L_i is chosen to be the geometric distribution with success probability p in the original work of Politis and Romano (1994), so that the process {X_t^*, t = 1, 2, . . .} is stationary. The number k of blocks is chosen so that L_1 + · · · + L_{k−1} < n ≤ L_1 + · · · + L_k, and the last L_1 + · · · + L_k − n observations in the combined sample are discarded so that the bootstrap sample has size n. See Politis and Romano (1994) and Hwang and Shin (2012b) for a more detailed description.
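The resampling step above can be sketched as follows (a minimal illustration of the procedure; the function name is our own):

```python
import numpy as np

def stationary_bootstrap(x, p, rng):
    """One stationary-bootstrap resample in the sense of Politis and Romano (1994).

    Blocks start at positions drawn uniformly from {0, ..., n-1}, have i.i.d.
    Geometric(p) lengths (mean 1/p), and wrap around the circularly extended
    data; the concatenated series is truncated to length n."""
    n = len(x)
    out = []
    while len(out) < n:
        start = rng.integers(n)                  # uniform starting index
        length = rng.geometric(p)                # geometric block length
        idx = (start + np.arange(length)) % n    # circular extension
        out.extend(x[idx])
    return np.asarray(out[:n])                   # discard the overshoot

rng = np.random.default_rng(1)
x = np.arange(10.0)
xb = stationary_bootstrap(x, p=0.2, rng=rng)
print(len(xb))  # 10
```

Because each block length is geometric with mean 1/p, letting p → 0 as n → ∞ makes the expected block length diverge, as required below.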

In the sequel, P* and E* denote the conditional probability and the conditional expectation, respectively, given X_1, . . . , X_n. We assume that p → 0 as n → ∞; the expected block length E*L_1 = 1/p then tends to ∞ as n → ∞.

3. Main Results

We present our main results on the weak and the strong consistencies. The CLT for U-statistics of strong mixing data was established in Dehling and Wendler (2010). In proving our results, we need their CLT and hence their conditions, in which a P-Lipschitz-continuous kernel h is adopted. We first state the strong mixing condition and P-Lipschitz-continuity. A stationary sequence (X_n)_{n∈ℤ} of random variables is called strong mixing if

\alpha(m) = \sup\{\alpha(\sigma(X_1, \ldots, X_k), \sigma((X_j)_{j \ge k+m})) : k \in \mathbb{N}\} \to 0 \qquad \text{as } m \to \infty,

where α(G, H) = sup{|P(A ∩ B) − P(A)P(B)| : A ∈ G, B ∈ H} for σ-fields G and H.

A kernel h is called P-Lipschitz-continuous if there is a constant L > 0 such that

E[|h(X, Y) - h(X', Y)| \, 1_{\{|X - X'| \le \epsilon\}}] \le L\epsilon

for every ε > 0 and for all pairs (X, Y) and (X′, Y) with common distribution P_{X_1, X_k} for some k ∈ ℕ or P_{X_1} × P_{X_1}. If h is Lipschitz-continuous, then it is also P-Lipschitz-continuous. According to Example 1.5 of Dehling and Wendler (2010), the kernel corresponding to the sample variance is P-Lipschitz-continuous. It is easy to show that the kernels corresponding to the Wilcoxon statistic and Gini's pairwise mean difference are also P-Lipschitz-continuous.

Now we introduce the bootstrapped U-statistic based on the stationary bootstrap sample {X_1^*, . . . , X_n^*}, which is given by

U_n^* = \frac{2}{n(n-1)} \sum_{1 \le i < j \le n} h(X_i^*, X_j^*) = \theta + \frac{2}{n} \sum_{i=1}^{n} h_1(X_i^*) + \frac{2}{n(n-1)} \sum_{1 \le i < j \le n} h_2(X_i^*, X_j^*).

The following two theorems state the weak and the strong consistency of the bootstrap distribution of the stationary bootstrapped U-statistics, respectively. All asymptotic results in this work are for n → ∞.

Theorem 1. (Weak consistency)

Let (X_n)_{n∈ℤ} be a stationary strong mixing process and h a P-Lipschitz-continuous kernel such that E|X_1|^γ < ∞ for some γ > 0 and

\int\!\!\int |h(x_1, x_2)|^{2+\delta} \, dF(x_1) \, dF(x_2) \le M, \quad \int |h(x_1, x_{1+k})|^{2+\delta} \, dP_{X_1, X_{1+k}}(x_1, x_{1+k}) \le M, \quad \forall k \in \mathbb{N}_0,

for some δ > 0 and M > 0. If α(n) = O(n^{−ρ}) for ρ > (3γδ + γ + 5δ + 2)/(2γδ) and if p → 0 and np → ∞, then

|\mathrm{Var}^*[\sqrt{n}\, U_n^*] - \mathrm{Var}[\sqrt{n}\, U_n]| \to 0 \ \text{in probability},    (3.1)

and

\sup_{x \in \mathbb{R}} |P^*[\sqrt{n}(U_n^* - E^* U_n^*) \le x] - P[\sqrt{n}(U_n - \theta) \le x]| \to 0 \ \text{in probability}.    (3.2)
Remark 1

In Theorem 1, the conditions of P-Lipschitz-continuity, the moment bounds and the algebraic strong mixing rate ensure the CLT of the U-statistics, which is the result of Dehling and Wendler (2010, Theorem 1.8), while the conditions on p yield the validity of the stationary bootstrap of the U-statistics, as in Politis and Romano (1994, Theorems 1, 2) for the sample mean.

Theorem 2. (Strong consistency)

Let (X_n)_{n∈ℤ} be a stationary strong mixing process and h a P-Lipschitz-continuous kernel such that E|X_1|^{γ+ε} < ∞ for some γ > 2 and 0 < ε < 1 and

\int\!\!\int |h(x_1, x_2)|^{r+\delta} \, dF(x_1) \, dF(x_2) \le M, \quad \int |h(x_1, x_{1+k})|^{r+\delta} \, dP_{X_1, X_{1+k}}(x_1, x_{1+k}) \le M, \quad \forall k \in \mathbb{N}_0,

for some r > 2, δ > 0 and M > 0. If α(n) = O(n^{−ρ}) for ρ > max{(γ + ε)/(2ε), (3γδ + γ + 5δ + 2)/(2γδ)} and if p = c n^{−(ε−2τ)/(2+ε)} for some c > 0 and 0 < τ < ε/2, then

|\mathrm{Var}^*[\sqrt{n}\, U_n^*] - \mathrm{Var}[\sqrt{n}\, U_n]| \to 0 \ \text{almost surely},    (3.3)

and

\sup_{x \in \mathbb{R}} |P^*[\sqrt{n}(U_n^* - E^* U_n^*) \le x] - P[\sqrt{n}(U_n - \theta) \le x]| \to 0 \ \text{almost surely}.    (3.4)
Remark 2

In Theorem 2, the conditions E|X_1|^{γ+ε} < ∞ and α(n) = O(n^{−ρ}) for ρ > (γ + ε)/(2ε) give a moment inequality, which plays a key role in proving the almost sure convergence of the stationary bootstrapped mean and U-statistics; see Hwang and Shin (2012b) as well as Theorem 3 below. Theorems 3.1 and 3.2 of Hwang and Shin (2012b) show that the condition p = c n^{−(ε−2τ)/(2+ε)} is critical for the almost sure convergence.

Remark 3. (U-statistics of general degree m)

It may be shown that the above theorems hold for a sequence of strong mixing stationary random vectors {X_t ∈ ℝ^d} and for U-statistics with kernel h : ℝ^{md} → ℝ of degree m. See Hoeffding (1948) and Denker (1985) for the Hoeffding decomposition of U-statistics U_n(h), where h : 𝒳^m → ℝ is a symmetric function of degree m and 𝒳 is a measurable space: the Hoeffding decomposition of (2.1) is given by U_n(h) = \sum_{j=0}^{m} \binom{m}{j} U_n(h_j), where h_j : 𝒳^j → ℝ is a symmetric function in j arguments and the U-statistics U_n(h_j), 2 ≤ j ≤ m, are degenerate. To derive the validity of the stationary bootstrap for U-statistics in this case, a CLT for U-statistics U_n(h) of degree m under weak dependence is needed, and upper bound results for h_j, 2 ≤ j ≤ m, similar to those in Lemmas 1 and 2 below, are required. These results will be considered in our further study.

Remark 4. (Bootstrap symmetry test)

As an application of bootstrapped U-statistics, we consider a symmetry test for the distribution of X_t, based on the goodness-of-fit tests of Fan and Ullah (1999). Assume that the stationary strong mixing process (X_t)_{t∈ℤ} has Lebesgue density f_X. We are interested in testing the null hypothesis H_0: f_X(u) = f_X(−u) almost everywhere, against the alternative hypothesis H_1: f_X(u) ≠ f_X(−u) on a set of positive measure. The density f_X(u) is estimated by the kernel density estimator f̂_X(u) = (1/(nh)) Σ_{i=1}^{n} K((u − X_i)/h), where K is a kernel function and h > 0 is a smoothing parameter or bandwidth. The symmetry test is based on an appropriate estimator of the integrated squared difference I = ∫ [f_X(u) − f_X(−u)]² du. According to Fan and Ullah (1999), an estimator of I is Î_n := (4/(n²h)) Σ_{1≤i<j≤n} h_n(X_i, X_j), where h_n(X_i, X_j) = K_{X_i,X_j} − K_{X_i,−X_j} with K_{X_i,Y_j} = K[(X_i − Y_j)/h] for Y_j ∈ {X_j, −X_j}. Î_n is a degenerate U-statistic whose kernel varies with the sample size n, and the above theorems can be readily extended to such degenerate U-statistics with kernels depending on n. Thus the stationary bootstrap test statistic Î_n^* := (4/(n²h)) Σ_{1≤i<j≤n} h_n(X_i^*, X_j^*) can be shown to have the same limiting distribution as Î_n.
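The statistic Î_n can be sketched numerically as follows (our own illustration with a Gaussian kernel K and an arbitrarily chosen bandwidth; the names are hypothetical). Note that K_{X_i,−X_j} = K[(X_i + X_j)/h]:

```python
import numpy as np

def symmetry_stat(x, h):
    """I_hat_n = 4/(n^2 h) * sum_{i<j} [K((Xi-Xj)/h) - K((Xi+Xj)/h)].

    Estimates the integrated squared difference between f_X(u) and f_X(-u);
    values near zero suggest a symmetric density."""
    n = len(x)
    K = lambda u: np.exp(-0.5 * u ** 2) / np.sqrt(2 * np.pi)  # Gaussian kernel
    i, j = np.triu_indices(n, k=1)
    hn = K((x[i] - x[j]) / h) - K((x[i] + x[j]) / h)          # kernel h_n(Xi, Xj)
    return 4.0 / (n ** 2 * h) * hn.sum()

rng = np.random.default_rng(0)
sym = symmetry_stat(rng.normal(size=500), h=0.5)              # symmetric N(0, 1)
asym = symmetry_stat(rng.exponential(size=500) - 1.0, h=0.5)  # skewed density
print(sym < asym)
```

For a symmetric density the statistic fluctuates around zero, while for a skewed density it estimates a strictly positive integrated squared difference.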

Remark 5. (Confidence interval)

According to Theorem 1, we have P*[U_n^* ≤ x] = P*[√n(U_n^* − E*U_n^*) ≤ √n(x − E*U_n^*)] ≈ P[√n(U_n − θ) ≤ √n(x − E*U_n^*)] = P[U_n − θ ≤ x − E*U_n^*]. Therefore, observing P[U_n − (u_{0.95}^* − E*U_n^*) ≤ θ ≤ U_n − (u_{0.05}^* − E*U_n^*)] ≈ P*[u_{0.05}^* ≤ U_n^* ≤ u_{0.95}^*] = 0.9, we construct, for example, a 90% bootstrap confidence interval for θ:

[U_n + \bar{U}_n^* - u_{0.95}^*, \; U_n + \bar{U}_n^* - u_{0.05}^*],    (3.5)

where u_{0.05}^* and u_{0.95}^* are the 5% and 95% quantiles, respectively, of the B bootstrap replications U_n^{*(b)}, b = 1, . . . , B, and \bar{U}_n^* = B^{-1} \sum_{b=1}^{B} U_n^{*(b)} estimates E*U_n^*. The next section provides a Monte Carlo experiment in which stationary bootstrap confidence intervals are constructed for some basic U-statistics.
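Putting the pieces together, the interval can be sketched as below (a self-contained illustration, not the paper's implementation; helper names are our own):

```python
import numpy as np

def u_stat(x, kernel):
    # Degree-2 U-statistic: 2/(n(n-1)) * sum_{i<j} h(X_i, X_j)
    n = len(x)
    i, j = np.triu_indices(n, k=1)
    return 2.0 / (n * (n - 1)) * kernel(x[i], x[j]).sum()

def sb_resample(x, p, rng):
    # Stationary bootstrap: geometric blocks on the circularly extended data
    n, idx = len(x), []
    while len(idx) < n:
        s, L = rng.integers(n), rng.geometric(p)
        idx.extend((s + np.arange(L)) % n)
    return x[np.asarray(idx[:n])]

def sb_confidence_interval(x, kernel, p, B=500, level=0.90, rng=None):
    """Interval [U_n + Ubar* - u*_{(1+level)/2}, U_n + Ubar* - u*_{(1-level)/2}]."""
    rng = rng or np.random.default_rng()
    un = u_stat(x, kernel)
    reps = np.array([u_stat(sb_resample(x, p, rng), kernel) for _ in range(B)])
    lo, hi = np.quantile(reps, [(1 - level) / 2, (1 + level) / 2])
    return un + reps.mean() - hi, un + reps.mean() - lo

rng = np.random.default_rng(0)
x = rng.normal(size=200)
h_var = lambda a, b: 0.5 * (a - b) ** 2        # variance kernel
ci = sb_confidence_interval(x, h_var, p=0.1, B=200, rng=rng)
print(ci)  # should usually cover the true variance 1 for this i.i.d. sample
```

The bootstrap mean of the replications plays the role of E*U_n^* in the interval endpoints.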

4. Monte Carlo Study

Finite sample performances of the confidence interval (3.5) are investigated for some important U-statistics:

U^{(1)} = \frac{2}{n(n-1)} \sum_{1 \le i < j \le n} \frac{(X_i - X_j)^2}{2}, \quad
U^{(2)} = \frac{2}{n(n-1)} \sum_{1 \le i < j \le n} I[(X_i - X_j) > 0], \quad
U^{(3)} = \frac{2}{n(n-1)} \sum_{1 \le i < j \le n} |X_i - X_j|,

which correspond to θ^(1) = the variance of X_t, θ^(2) = the mean of the one-sided Wilcoxon statistic, and θ^(3) = Gini's pairwise mean difference, respectively, where I(·) is the indicator function.

Confidence intervals of θ^(1), θ^(2) and θ^(3) are constructed using (3.5) for data sets generated from the following time series processes:

AR(1): X_t = \phi X_{t-1} + e_t, \quad \phi = 0.5, 0.9;
TAR(1): X_t = \phi_1 X_{t-1} I(X_{t-1} > 0) + \phi_2 X_{t-1} I(X_{t-1} \le 0) + e_t, \quad (\phi_1, \phi_2) = (0.6, 0.4), (0.95, 0.85);
GARCH(1,1): X_t = \sigma_t e_t, \quad \sigma_t^2 = 1 + \alpha X_{t-1}^2 + \beta \sigma_{t-1}^2, \quad (\alpha, \beta) = (0.3, 0.3), (0.5, 0.4),

where e_t is a sequence of i.i.d. N(0, 1) random variables. These models are denoted by AR-1, AR-2; TAR-1, TAR-2; GARCH-1, GARCH-2, respectively. The models AR-1 and AR-2 are AR(1) models; TAR-1 and TAR-2 are threshold AR(1) models; GARCH-1 and GARCH-2 are GARCH(1, 1) models. The latter models AR-2, TAR-2 and GARCH-2 have stronger serial correlation or stronger conditional heteroscedasticity than the former models AR-1, TAR-1 and GARCH-1, respectively.
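The three data generating processes can be simulated as follows (our own sketch using NumPy in place of the IMSL routines; shown for the stronger-dependence variants AR-2, TAR-2, GARCH-2):

```python
import numpy as np

def generate(model, n, rng, burn=20):
    """Simulate one path of length n from the AR(1), TAR(1) or GARCH(1,1) DGPs."""
    e = rng.normal(size=n + burn)
    x = np.zeros(n + burn)
    sig2 = 1.0
    for t in range(1, n + burn):
        if model == "AR":            # X_t = 0.9 X_{t-1} + e_t
            x[t] = 0.9 * x[t - 1] + e[t]
        elif model == "TAR":         # regime depends on the sign of X_{t-1}
            phi = 0.95 if x[t - 1] > 0 else 0.85
            x[t] = phi * x[t - 1] + e[t]
        elif model == "GARCH":       # sigma_t^2 = 1 + 0.5 X_{t-1}^2 + 0.4 sigma_{t-1}^2
            sig2 = 1.0 + 0.5 * x[t - 1] ** 2 + 0.4 * sig2
            x[t] = np.sqrt(sig2) * e[t]
    return x[burn:]                  # drop the burn-in, as in the paper

rng = np.random.default_rng(0)
for m in ("AR", "TAR", "GARCH"):
    print(m, len(generate(m, 400, rng)))
```

The burn-in of 20 observations starting from X_{−20} = 0 mirrors the paper's setup.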

Data X_t, t = −20, . . . , 0, 1, . . . , n, are generated using X_{−20} = 0 and e_t, t = −20, . . . , n, generated by RNNOA, an IMSL FORTRAN subroutine, and the data set {X_t, t = 1, . . . , n} is used for computing the statistics. We consider n = 100, 400, 1600. Confidence intervals of θ^(1), θ^(2), θ^(3) with 90% nominal coverage probability are constructed using stationary bootstrapping (SB, in the sequel) and circular block bootstrapping (CBB, in the sequel) with B = 500 bootstrap repetitions. For SB, the uniform random variables I_i are generated by the IMSL subroutine RNUND and the L_i are generated from the geometric distribution via the IMSL subroutine RNGEO. The block length parameter p for SB is chosen by

p = 0.05 \, i_p \left( \frac{n}{400} \right)^{-\frac{1}{3}}, \qquad i_p = 1, 2,

and L for CBB is chosen to be an integer close to p^{−1}. The order n^{−1/3} is chosen because it is the optimal order for the sample mean, as verified by Politis and White (2004) and Patton et al. (2009).
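A quick numerical check of this rule (our own illustration) reproduces the (p, L) combinations listed in Table 1, up to rounding of p to two decimals:

```python
# p = 0.05 * i_p * (n/400)^(-1/3); L for CBB is an integer close to 1/p
for n in (100, 400, 1600):
    for ip in (1, 2):
        p = 0.05 * ip * (n / 400) ** (-1.0 / 3.0)
        print(f"n={n}, i_p={ip}: p={p:.2f}, 1/p={1 / p:.1f}")
```

For example, n = 100 and i_p = 1 gives p = 0.05 · 4^{1/3} ≈ 0.08, matching the first rows of Table 1.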

For each θ = θ^(1), θ^(2), θ^(3), empirical coverage probabilities of the bootstrap confidence intervals are computed as the relative frequencies, out of 1000 repetitions, with which the true θ is contained in the corresponding confidence interval. For the AR(1) models, θ^(1) = 1/(1 − φ²). For the GARCH(1, 1) models, θ^(1) = 1/(1 − α − β). Expressions for the true parameters θ^(i) are not known in the other cases; therefore, they are computed as averages of 100 Monte Carlo simulated values of U^(i) with large n = 5000. These Monte Carlo values should be close to the true values because, according to Theorem 1.8 of Dehling and Wendler (2010), U^(i) is consistent for θ^(i).

Table 1 shows empirical coverage probabilities (%) and average lengths of the 90% bootstrap confidence intervals. Coverage probability performances of the confidence intervals differ substantially across parameters and data generating processes (DGP). For all combinations of the 6 DGPs and 3 parameters, coverage probabilities get closer to the nominal 90% as n increases from 100 to 1600, which confirms the asymptotic normality (3.2) in Theorem 1.

Relative performances of the two bootstrap methods (CBB and SB) are almost the same, giving similar coverage probabilities and average lengths for all models and all parameters. Performances of the confidence intervals are insensitive to the block length parameters p (and L), in that the two values of p (and L) yield similar coverage probabilities and average lengths.

For the AR(1) and TAR(1) models, coverage performances are similar for all 3 parameters: if serial correlation is not strong, as in AR-1 and TAR-1, coverage probabilities are reasonably close to 90% for all n = 100, 400, 1600; if serial correlation is strong, as in AR-2 and TAR-2, all 3 confidence intervals suffer from under-coverage, and n needs to be as large as 1600 for coverage probabilities to exceed 80%.

For the weakly conditionally heteroscedastic model GARCH-1, coverage performances are reasonable for all 3 parameters. For the strongly conditionally heteroscedastic model GARCH-2, coverage probabilities are acceptable only for θ^(2) with all n = 100, 400, 1600 and for θ^(3) with n = 400, 1600.

5. Appendix

Let U_{n,2} and U_{n,2}^* be the degenerate parts of the U-statistic U_n and of its bootstrap version U_n^*, respectively, i.e.,

U_{n,2} = \frac{2}{n(n-1)} \sum_{1 \le i < j \le n} h_2(X_i, X_j), \qquad U_{n,2}^* = \frac{2}{n(n-1)} \sum_{1 \le i < j \le n} h_2(X_i^*, X_j^*).

The following two lemmas are needed to show our Theorems 1 and 2.

Lemma 1

(Dehling and Wendler, 2010) Under the same assumptions as in Theorem 1, we have

E[n U_{n,2}^2] \le \frac{4}{n(n-1)^2} \sum_{1 \le i_1 < i_2 \le n} \sum_{1 \le i_3 < i_4 \le n} |E[h_2(X_{i_1}, X_{i_2}) h_2(X_{i_3}, X_{i_4})]| \le \frac{4}{n^3} \sum_{i_1, i_2, i_3, i_4 = 1}^{n} |E[h_2(X_{i_1}, X_{i_2}) h_2(X_{i_3}, X_{i_4})]| = O(n^{-\beta}),    (5.1)

where

\beta = \min\left\{ \frac{2\rho\gamma\delta}{3\gamma\delta + \gamma + 5\delta + 2} - 1, \; 1 \right\} > 0.    (5.2)
Lemma 2

Under the same assumptions as in Theorem 1, we have

E[E^*[n U_{n,2}^{*2}]] = O(n^{-\beta}),    (5.3)

where β is as in (5.2).

Proof of Lemma 2

We have

E[E^*[n U_{n,2}^{*2}]] \le \frac{4}{n(n-1)^2} \sum_{1 \le i_1 < i_2 \le n} \sum_{1 \le i_3 < i_4 \le n} |E[E^*[h_2(X_{i_1}^*, X_{i_2}^*) h_2(X_{i_3}^*, X_{i_4}^*)]]|.

To find upper bounds of |E[E^*[h_2(X_{i_1}^*, X_{i_2}^*) h_2(X_{i_3}^*, X_{i_4}^*)]]|, we consider five cases depending on whether the indices i_1, i_2, i_3 and i_4 lie in different blocks. Let

  • Case I = {i1, i2, i3 and i4 lie in all different blocks},

  • Case II = {two of them lie in the same block and others are in different blocks},

  • Case III = {three of them lie in the same block and the other is in a different block},

  • Case IV = {two of them lie in the same block and others are in another same block},

  • Case V = {all i1, i2, i3 and i4 lie in the same block}.

Note that the probability of each case is less than or equal to 1, so we do not need to compute these probabilities to obtain the upper bound of |E[E^*[h_2(X_{i_1}^*, X_{i_2}^*) h_2(X_{i_3}^*, X_{i_4}^*)]]|.

In Case I, we have

|E[E^*[h_2(X_{i_1}^*, X_{i_2}^*) h_2(X_{i_3}^*, X_{i_4}^*)]]| \le \frac{1}{n^4} \sum_{j_1, j_2, j_3, j_4 = 1}^{n} |E[h_2(X_{j_1}, X_{j_2}) h_2(X_{j_3}, X_{j_4})]|,

thus we get

\sum_{i_1, i_2, i_3, i_4, \, \mathrm{Case\ I}} |E[E^*[h_2(X_{i_1}^*, X_{i_2}^*) h_2(X_{i_3}^*, X_{i_4}^*)]]| \le \sum_{i_1, i_2, i_3, i_4} |E[h_2(X_{i_1}, X_{i_2}) h_2(X_{i_3}, X_{i_4})]|.

In Case II, if i_1 and i_2 lie in the same block with i_2 − i_1 = k for some k > 0, we have

|E[E^*[h_2(X_{i_1}^*, X_{i_2}^*) h_2(X_{i_3}^*, X_{i_4}^*)]]| \le \frac{1}{n^3} \left| \sum_{j_1, j_3, j_4} \left[ \sum_{r_1, r_3, r_4} E[h_2(X_{j_1}, X_{j_1+k}) h_2(X_{j_3}, X_{j_4})] \, p^3 (1-p)^{r_1 + r_3 + r_4 - 3} \right] \right|.

Taking the summation over such i_2, we get

\sum_{i_2} |E[E^*[h_2(X_{i_1}^*, X_{i_2}^*) h_2(X_{i_3}^*, X_{i_4}^*)]]| \le \frac{1}{n^3} \sum_{i_1, i_2, i_3, i_4} \left[ \sum_{r_1, r_3, r_4} |E[h_2(X_{i_1}, X_{i_1+k}) h_2(X_{i_3}, X_{i_4})]| \, p^3 (1-p)^{r_1 + r_3 + r_4 - 3} \right] = \frac{1}{n^3} \sum_{i_1, i_2, i_3, i_4} |E[h_2(X_{i_1}, X_{i_2}) h_2(X_{i_3}, X_{i_4})]|,

and thus

\sum_{i_1, i_2, i_3, i_4, \, \mathrm{Case\ II}} |E[E^*[h_2(X_{i_1}^*, X_{i_2}^*) h_2(X_{i_3}^*, X_{i_4}^*)]]| \le \sum_{i_1, i_2, i_3, i_4} |E[h_2(X_{i_1}, X_{i_2}) h_2(X_{i_3}, X_{i_4})]|.

In Case III, if i_1, i_2, i_3 lie in the same block with i_2 = i_1 + k and i_3 = i_1 + l for some k, l > 0, we have

|E[E^*[h_2(X_{i_1}^*, X_{i_2}^*) h_2(X_{i_3}^*, X_{i_4}^*)]]| \le \frac{1}{n^2} \left| \sum_{j_1, j_4} \left[ \sum_{r_1, r_4} E[h_2(X_{j_1}, X_{j_1+k}) h_2(X_{j_1+l}, X_{j_4})] \, p^2 (1-p)^{r_1 + r_4 - 2} \right] \right|.

Similarly, we get

\sum_{i_1, i_2, i_3, i_4, \, \mathrm{Case\ III}} |E[E^*[h_2(X_{i_1}^*, X_{i_2}^*) h_2(X_{i_3}^*, X_{i_4}^*)]]| \le \sum_{i_1, i_2, i_3, i_4} |E[h_2(X_{i_1}, X_{i_2}) h_2(X_{i_3}, X_{i_4})]|.

In similar ways, we obtain the same bounds for the other cases. Thus, by the above bounds and (5.1), we have the desired result (5.3) of Lemma 2.

Proof of Theorem 1

Use the Hoeffding decomposition

U_n^* = \theta + \frac{2}{n} \sum_{i=1}^{n} h_1(X_i^*) + U_{n,2}^*.    (5.4)

By Theorem 1 of Politis and Romano (1994), we have

\left| \mathrm{Var}^*\left[ \frac{2}{\sqrt{n}} \sum_{i=1}^{n} h_1(X_i^*) \right] - \mathrm{Var}\left[ \frac{2}{\sqrt{n}} \sum_{i=1}^{n} h_1(X_i) \right] \right| \to 0 \ \text{in probability}.

By Lemma 1 and Lemma 2, Var[√n U_{n,2}] → 0 and Var*[√n U_{n,2}^*] → 0 in probability, respectively. Thus we have the result in (3.1).

We now verify that the U-statistics have the same limiting distributions. By Lemma 1 and Slutsky's theorem, we have

\sup_{x \in \mathbb{R}} \left| P[\sqrt{n}(U_n - \theta) \le x] - P\left[ \frac{2}{\sqrt{n}} \sum_{i=1}^{n} h_1(X_i) \le x \right] \right| \to 0.    (5.5)

Also, by Lemma 2 and Slutsky's theorem, we have

\sup_{x \in \mathbb{R}} \left| P^*[\sqrt{n}(U_n^* - \theta) \le x] - P^*\left[ \frac{2}{\sqrt{n}} \sum_{i=1}^{n} h_1(X_i^*) \le x \right] \right| \to 0 \ \text{in probability}.

According to Theorem 2 of Politis and Romano (1994), we have

\sup_{x \in \mathbb{R}} \left| P^*\left[ \frac{2}{\sqrt{n}} \sum_{i=1}^{n} h_1(X_i^*) \le x \right] - P\left[ \frac{2}{\sqrt{n}} \sum_{i=1}^{n} h_1(X_i) \le x \right] \right| \to 0 \ \text{in probability},

and by the triangle inequality we obtain the convergence in probability in (3.2).

Now Theorem 3 and Lemma 3 below are stated and proved. They are used to prove our main result in Theorem 2.

Theorem 3

Let (X_n)_{n∈ℤ} be a stationary strong mixing process with EX_1 = μ and E|X_1|^{γ+ε} < ∞ for some γ > 2 and 0 < ε < 1. If α(n) = O(n^{−ρ}) for some ρ > (γ + ε)/(2ε) and if p = c n^{−(ε−2τ)/(2+ε)} for some 0 < c < ∞ and 0 < τ < ε/2, then we have

|\mathrm{Var}^*(\sqrt{n}\, \bar{X}_n^*) - \mathrm{Var}(\sqrt{n}\, \bar{X}_n)| \to 0 \ \text{almost surely},    (5.6)

and

\sup_{x \in \mathbb{R}} |P^*\{\sqrt{n}(\bar{X}_n^* - \bar{X}_n) \le x\} - P\{\sqrt{n}(\bar{X}_n - \mu) \le x\}| \to 0 \ \text{almost surely}.    (5.7)
Proof of Theorem 3

Under strong mixing, the proof is similar to that under ψ-weak dependence, which includes mixing and other more general dependent sequences such as Bernoulli shifts and associated sequences. See Hwang and Shin (2012b) for the proof of the almost sure convergence of the stationary bootstrap variance and the stationary bootstrapped sample mean under ψ-weak dependence. The proof follows from Hwang and Shin (2012b); the detailed proof is omitted here.

Lemma 3

For a sequence of random variables {Y_1, Y_2, . . .}, let ζ_n = f(Y_1, . . . , Y_n) and η_n = g(Y_1, . . . , Y_n) for some functions f and g with E[ζ_n] = 0 and E[η_n] = 0. If E[(ζ_n − η_n)² | F_n] → 0 almost surely, where F_n = σ{Y_1, . . . , Y_n}, then

\sup_{x \in \mathbb{R}} |P(\zeta_n \le x \mid F_n) - P(\eta_n \le x \mid F_n)| \to 0 \ \text{almost surely} \qquad \text{as } n \to \infty.    (5.8)
Proof of Lemma 3

For notational simplicity, we define random variables Z_n(ω), V_n(x, ω) and W_n(x, ω) by Z_n = E[(ζ_n − η_n)² | F_n], V_n(x, ·) = P(η_n ≤ x | F_n) and W_n(x, ·) = P(ζ_n ≤ x | F_n). If Z_n → 0 almost surely, then there exists a set M ⊂ Ω such that P(M) = 1 and, for every ω ∈ M, we have Z_n(ω) → 0. For each x ∈ ℝ, suppose that V_n(x, ω) → G(x) for ω ∈ M, where G(·) is the distribution function of a random variable ξ. Conditionally on F_n(ω), η_n → ξ in distribution. Since Z_n(ω) → 0, conditionally on F_n(ω) we have ζ_n − η_n → 0 in probability, and then, by Slutsky's theorem, ζ_n = (ζ_n − η_n) + η_n → ξ in distribution, conditionally on F_n(ω). That is, W_n(x, ω) → G(x) for ω ∈ M. Therefore, |V_n(x, ω) − W_n(x, ω)| → 0 for ω ∈ M, and thus (5.8) holds.

Proof of Theorem 2

To verify (3.3), we use the Hoeffding decomposition in (5.4) and see that, by (5.6) of Theorem 3,

\left| \mathrm{Var}^*\left[ \frac{2}{\sqrt{n}} \sum_{i=1}^{n} h_1(X_i^*) \right] - \mathrm{Var}\left[ \frac{2}{\sqrt{n}} \sum_{i=1}^{n} h_1(X_i) \right] \right| \to 0 \ \text{almost surely}.

By Lemma 1, Var[√n U_{n,2}] → 0. We now show that Var*[√n U_{n,2}^*] → 0 almost surely. As in the proof of Lemma 2, we can show

\mathrm{Var}^*[\sqrt{n}\, U_{n,2}^*] \le \frac{4}{n^3} \left| \sum_{i_1, i_2, i_3, i_4 = 1}^{n} h_2(X_{i_1}, X_{i_2}) h_2(X_{i_3}, X_{i_4}) \right|,

of which the right-hand side will be shown to converge to zero almost surely under our assumptions. For any λ > 0, by Chebyshev's inequality, we have

P\left( \frac{4}{n^3} \left| \sum_{i_1, i_2, i_3, i_4 = 1}^{n} h_2(X_{i_1}, X_{i_2}) h_2(X_{i_3}, X_{i_4}) \right| > \lambda \right) \le c (\lambda n^3)^{-(2+\delta)} E\left[ \left| \sum_{i_1, i_2, i_3, i_4 = 1}^{n} h_2(X_{i_1}, X_{i_2}) h_2(X_{i_3}, X_{i_4}) \right|^{2+\delta} \right] \le c (\lambda n^3)^{-(2+\delta)} E\left[ \left| \sum_{i_1, i_2 = 1}^{n} h_2(X_{i_1}, X_{i_2}) \right|^{2+\delta} \left| \sum_{i_3, i_4 = 1}^{n} h_2(X_{i_3}, X_{i_4}) \right|^{2+\delta} \right].

By Hölder's inequality, ||fg||_1 ≤ ||f||_p · ||g||_q for 1/p + 1/q = 1, the expectation in the last expression is less than or equal to

\left[ E\left| \sum_{i_1, i_2 = 1}^{n} h_2(X_{i_1}, X_{i_2}) \right|^{p(2+\delta)} \right]^{\frac{1}{p}} \left[ E\left| \sum_{i_3, i_4 = 1}^{n} h_2(X_{i_3}, X_{i_4}) \right|^{q(2+\delta)} \right]^{\frac{1}{q}}.    (5.9)

We can observe that E|Σ_{i_1, i_2 = 1}^{n} h_2(X_{i_1}, X_{i_2})|^r = O(n^{r+1−κ}) for some 0 < κ < 1, similarly to Lemma 1, whose proof is given in Dehling and Wendler (2010). Thus (5.9) is equal to

O(n^{p(2+\delta)+1-\kappa_1})^{\frac{1}{p}} \, O(n^{q(2+\delta)+1-\kappa_2})^{\frac{1}{q}} = O\left(n^{4+2\delta+\frac{1}{p}+\frac{1}{q}-\frac{\kappa_1}{p}-\frac{\kappa_2}{q}}\right) = O\left(n^{5+2\delta-\frac{\kappa_1}{p}-\frac{\kappa_2}{q}}\right)

for some 0 < κ_1 < 1 and 0 < κ_2 < 1. Therefore,

P\left( \frac{4}{n^3} \left| \sum_{i_1, i_2, i_3, i_4 = 1}^{n} h_2(X_{i_1}, X_{i_2}) h_2(X_{i_3}, X_{i_4}) \right| > \lambda \right) = O\left(n^{-1-\delta-\frac{\kappa_1}{p}-\frac{\kappa_2}{q}}\right).

Since these probabilities are summable, the almost sure convergence of Var*[√n U_{n,2}^*] to zero follows by the Borel-Cantelli lemma, and thus (3.3) is proved.

To verify (3.4), we use (5.7) of Theorem 3 to obtain

\sup_{x \in \mathbb{R}} \left| P^*\left[ \frac{2}{\sqrt{n}} \sum_{i=1}^{n} h_1(X_i^*) \le x \right] - P\left[ \frac{2}{\sqrt{n}} \sum_{i=1}^{n} h_1(X_i) \le x \right] \right| \to 0 \ \text{almost surely}.

Since Var*[√n U_{n,2}^*] → 0 almost surely, we apply Lemma 3 by setting F_n = σ{X_1, . . . , X_n}, ζ_n = √n(U_n^* − θ) and η_n = (2/√n) Σ_{i=1}^{n} h_1(X_i^*), and we get

\sup_{x \in \mathbb{R}} \left| P^*[\sqrt{n}(U_n^* - \theta) \le x] - P^*\left[ \frac{2}{\sqrt{n}} \sum_{i=1}^{n} h_1(X_i^*) \le x \right] \right| \to 0 \ \text{almost surely}.

Along with (5.5) and the triangle inequality, we obtain the almost sure convergence in (3.4).

TABLES

Table 1

Coverage probabilities and average lengths of bootstrap confidence intervals.





                                Coverage probability (%)                     Average length
                           Variance    Wilcoxon    Mean diff.    Variance        Wilcoxon      Mean diff.
Model     n     p    L     CBB  SB     CBB  SB     CBB  SB       CBB     SB      CBB    SB     CBB    SB
AR-1      100   .08  12    75   74     79   76     81   78       .639    .612    .274   .257   .332   .317
AR-1      100   .16   6    78   77     80   78     84   82       .657    .639    .269   .264   .340   .331
AR-1      400   .05  20    86   84     87   86     85   85       .369    .362    .152   .148   .184   .180
AR-1      400   .10  10    87   87     84   83     89   89       .369    .363    .148   .146   .184   .182
AR-1      1600  .03  33    91   89     88   87     85   85       .193    .191    .078   .077   .095   .094
AR-1      1600  .06  17    89   88     89   89     85   85       .194    .193    .077   .077   .096   .095

AR-2      100   .08  12    51   50     55   56     59   58       3.360   3.262   .400   .398   .991   .955
AR-2      100   .16   6    49   50     52   55     57   58       3.166   3.201   .340   .377   .912   .928
AR-2      400   .05  20    74   73     75   76     81   80       2.754   2.712   .290   .296   .717   .707
AR-2      400   .10  10    72   72     65   69     76   77       2.475   2.527   .239   .264   .645   .662
AR-2      1600  .03  33    83   83     82   82     86   87       1.616   1.615   .169   .172   .408   .408
AR-2      1600  .06  17    81   82     77   80     82   82       1.545   1.562   .147   .158   .387   .392

TAR-1     100   .08  12    74   73     79   76     79   76       .664    .637    .270   .253   .342   .326
TAR-1     100   .16   6    78   77     79   78     83   81       .684    .666    .264   .260   .350   .341
TAR-1     400   .05  20    85   83     87   86     85   84       .388    .380    .150   .146   .191   .187
TAR-1     400   .10  10    87   87     84   83     89   88       .387    .381    .146   .144   .191   .188
TAR-1     1600  .03  33    90   89     88   87     86   86       .203    .201    .077   .076   .099   .098
TAR-1     1600  .06  17    90   89     89   88     86   85       .205    .204    .077   .076   .099   .099

TAR-2     100   .08  12    43   42     51   52     52   50       4.076   3.980   .352   .354   1.106  1.073
TAR-2     100   .16   6    44   44     50   52     49   50       3.798   3.877   .293   .329   1.002  1.032
TAR-2     400   .05  20    67   67     72   74     72   72       3.791   3.766   .258   .267   .876   .875
TAR-2     400   .10  10    62   63     62   66     69   71       3.248   3.393   .211   .236   .756   .796
TAR-2     1600  .03  33    81   81     80   80     84   84       2.392   2.419   .152   .157   .531   .538
TAR-2     1600  .06  17    74   76     75   79     77   77       2.187   2.279   .132   .144   .482   .503

GARCH-1   100   .08  12    73   71     82   81     80   78       1.613   1.532   .175   .166   .564   .534
GARCH-1   100   .16   6    74   74     87   85     80   78       1.584   1.562   .182   .177   .554   .546
GARCH-1   400   .05  20    81   80     88   88     87   84       .997    .975    .093   .090   .329   .321
GARCH-1   400   .10  10    82   81     88   87     87   85       .948    .935    .093   .092   .315   .312
GARCH-1   1600  .03  33    88   88     88   87     88   87       .552    .547    .047   .046   .174   .173
GARCH-1   1600  .06  17    86   86     90   90     86   86       .542    .543    .047   .047   .172   .171

GARCH-2   100   .08  12    41   41     83   81     62   60       13.221  12.503  .175   .165   1.647  1.596
GARCH-2   100   .16   6    38   40     87   86     58   59       9.419   9.826   .182   .177   1.422  1.481
GARCH-2   400   .05  20    50   50     87   86     73   73       14.767  14.209  .093   .090   1.268  1.255
GARCH-2   400   .10  10    45   45     87   87     68   68       8.025   8.319   .093   .092   1.022  1.065
GARCH-2   1600  .03  33    57   56     88   87     83   82       7.606   7.738   .047   .046   .760   .764
GARCH-2   1600  .06  17    55   55     91   91     81   80       8.025   8.366   .047   .047   .697   .718

Note: Number of replications = 1,000, number of bootstrap repetitions = 500.


References
  1. Bickel, PJ, and Freedman, DA (1981). Some asymptotic theory for the bootstrap. Annals of Statistics. 9, 1196-1217.
  2. Borovskikh, YV (1998). On a normal approximation of U-statistics. Theory of Probability and Its Applications. 45, 406-423.
  3. Carlstein, E (1986). The use of subseries values for estimating the variance of a general statistic from a stationary sequence. Annals of Statistics. 14, 1171-1179.
  4. Dehling, H, and Wendler, M (2010). Central limit theorem and the bootstrap for U-statistics of strong mixing data. Journal of Multivariate Analysis. 101, 126-137.
  5. Denker, M (1985). Asymptotic distribution theory in nonparametric statistics. Advanced Lectures in Mathematics. Braunschweig: Friedr. Vieweg & Sohn.
  6. Denker, M, and Keller, G (1986). Rigorous statistical procedures for data from dynamical systems. Journal of Statistical Physics. 44, 67-93.
  7. Doukhan, P, and Louhichi, S (1999). A new weak dependence condition and applications to moment inequalities. Stochastic Processes and their Applications. 84, 313-342.
  8. Fan, Y, and Ullah, A (1999). On goodness-of-fit tests for weakly dependent processes using kernel methods. Journal of Nonparametric Statistics. 11, 337-360.
  9. Goncalves, S, and de Jong, R (2003). Consistency of the stationary bootstrap under weak moment conditions. Economics Letters. 81, 273-278.
  10. Goncalves, S, and White, H (2002). The bootstrap of the mean for dependent heterogeneous arrays. Econometric Theory. 18, 1367-1384.
  11. Hoeffding, W (1948). A class of statistics with asymptotically normal distribution. Annals of Mathematical Statistics. 19, 293-325.
  12. Hwang, E, and Shin, DW (2011). Stationary bootstrapping for non-parametric estimator of nonlinear autoregressive model. Journal of Time Series Analysis. 32, 292-303.
  13. Hwang, E, and Shin, DW (2012a). Stationary bootstrap for kernel density estimators under ψ-weak dependence. Computational Statistics and Data Analysis. 56, 1581-1593.
  14. Hwang, E, and Shin, DW (2012b). Strong consistency of the stationary bootstrap under ψ-weak dependence. Statistics and Probability Letters. 82, 488-495.
  15. Lahiri, SN (1999). On second-order properties of the stationary bootstrap method for studentized statistics. Asymptotic, Nonparametrics, and Time Series, Ghosh, S, ed. New York: Marcel Dekker, pp. 683-711.
  16. Mudelsee, M (2003). Estimating Pearson's correlation coefficient with bootstrap confidence interval from serially dependent time series. Mathematical Geology. 35, 651-665.
  17. Nordman, DJ (2009). A note on the stationary bootstrap's variance. Annals of Statistics. 37, 359-370.
  18. Parker, C, Paparoditis, E, and Politis, DN (2006). Unit root testing via the stationary bootstrap. Journal of Econometrics. 133, 601-638.
  19. Patton, A, Politis, DN, and White, H (2009). Correction to "Automatic block-length selection for the dependent bootstrap" by D. Politis and H. White. Econometric Reviews. 28, 372-375.
  20. Politis, DN, and Romano, JP (1994). The stationary bootstrap. Journal of the American Statistical Association. 89, 1303-1313.
  21. Politis, DN, and White, H (2004). Automatic block-length selection for the dependent bootstrap. Econometric Reviews. 23, 53-70.
  22. Shao, QM, and Yu, H (1993). Bootstrapping the sample means for stationary mixing sequences. Stochastic Processes and Their Applications. 48, 175-190.
  23. Sharipov, OS, and Wendler, M (2011). Bootstrap for the sample mean and for U-statistics of mixing and near epoch dependent processes.
  24. Swensen, AR (2003). Bootstrapping unit root tests for integrated processes. Journal of Time Series Analysis. 24, 99-126.
  25. Yoshihara, K (1976). Limiting behavior of U-statistics for stationary, absolutely regular processes. Zeitschrift für Wahrscheinlichkeitstheorie und Verwandte Gebiete. 35, 237-252.