Stationary bootstrapping for structural break tests for a heterogeneous autoregressive model
Communications for Statistical Applications and Methods 2017;24:367-382
Published online July 31, 2017
© 2017 Korean Statistical Society.

Eunju Hwang^a and Dong Wan Shin^{1,b}

^a Department of Applied Statistics, Gachon University, Korea; ^b Department of Statistics, Ewha Womans University, Korea
Correspondence to: Department of Statistics, Ewha Womans University, 52 Ewhayeodae-gil, Seodaemun-gu, Seoul 03760, Korea. E-mail: shindw@ewha.ac.kr
Received January 18, 2017; Revised May 17, 2017; Accepted June 23, 2017.

We consider an infinite-order long-memory heterogeneous autoregressive (HAR) model, which is motivated by a long-memory property of realized volatilities (RVs), as an extension of the finite-order HAR-RV model. We develop bootstrap tests for structural mean or variance changes in the infinite-order HAR model via stationary bootstrapping. A functional central limit theorem is proved for the stationary bootstrap sample, which enables us to develop stationary bootstrap cumulative sum (CUSUM) tests: a bootstrap test for mean break and a bootstrap test for variance break. Consistencies of the bootstrap null distributions of the CUSUM tests are proved. Consistencies of the bootstrap CUSUM tests are also proved under alternative hypotheses of mean or variance changes. A Monte-Carlo simulation shows that stationary bootstrapping improves the sizes of the existing tests.

Keywords: heterogeneous autoregressive (∞) model, stationary bootstrap, structural changes, CUSUM test
1. Introduction

Corsi (2009) and Hwang and Shin (2014) proposed autoregressive models, called heterogeneous autoregressions (HAR), of realized volatility (RV) to address the long-memory properties of financial market volatilities. Corsi (2004, 2009) proposed an additive cascade model having three volatility components defined over three different time periods, called the HAR(3) model. The HAR(3) model has been shown to successfully reproduce the main empirical features of financial return volatilities such as long memory, fat tails, and self-similarity. However, as noted by Corsi (2009), the HAR(3) model has short memory with an exponentially decreasing autocorrelation function (ACF) because it can be expressed as a stationary AR(22) model. Hwang and Shin (2014) proposed a genuine long-memory HAR model with an algebraically decreasing ACF, the infinite-order HAR(∞) model, as an extension of Corsi (2009)'s HAR(3) model. They characterized stationarity conditions for the model, developed probability theory for it, and established consistency and limiting normality of the ordinary least squares estimator (OLSE) as well as forecasting results.

Long memory of realized volatility is occasionally accompanied by structural changes. The problem of testing for structural changes has long been an important issue in time series regression and dynamic economic models. For this purpose, cumulative sum (CUSUM) tests have been widely used because the change points are not known. See Brown et al. (1975), Ploberger and Krämer (1986, 1990, 1992), Qu and Perron (2007), and Deng and Perron (2008) for the CUSUM(-SQ) tests. For more efficient versions of the CUSUM tests, we refer to Xu (2013, 2015), who focused on volatility and mean change tests and proposed powerful and robust alternatives. In the long-memory HAR models, Hwang and Shin (2013, 2015) and Lee (2014) studied CUSUM tests for mean or variance breaks. In particular, Lee (2014) established a functional central limit theorem (FCLT) by proving that the HAR(∞) process is a near-epoch-dependent (NED) process, which was applied to construct a CUSUM test for mean stability and a CUSUM test for variance stability.

All the break tests for the HAR model, except the CUSUMSQ test of Hwang and Shin (2015), have undesirable size distortions. The aim of this paper is to develop bootstrap tests for mean or variance changes in the HAR(∞) model that remedy the size distortion problem. Based on the result of Lee (2014), we establish a bootstrap FCLT. Block bootstrapping methods are better suited than independent and identically distributed (iid) bootstrapping because realized volatilities have long memory. Among the various block bootstrapping methods, we consider the stationary bootstrap (SB) of Politis and Romano (1994). The SB is one of the most widely adopted block bootstrapping methods for dependent samples and is characterized by geometrically distributed random block lengths.

The partial sum process of the SB sample is shown to converge to the standard Brownian motion, which enables us to construct SB CUSUM tests: a bootstrap mean break test and a bootstrap variance break test. Asymptotic critical values can be obtained from the stationary bootstrap distributions of the CUSUM tests. Consistencies of the null bootstrap distributions of the CUSUM tests are proved. Consistencies of the bootstrap CUSUM tests are also proved under alternative hypotheses of mean or variance breaks.

A Monte-Carlo experiment is conducted to show that SB significantly improves the sizes of the CUSUM tests of Lee (2014) for mean break and for variance break, which are badly sized in finite samples. It also shows some improvement in the CUSUM test of Hwang and Shin (2013) for the mean break. The size improvement is achieved without power loss.

The remainder of the paper is organized as follows. Section 2 describes the HAR models and presents the existing results. Section 3 discusses the main results, including the SB functional central limit theorem and the bootstrap CUSUM tests. Section 4 deals with the Monte-Carlo study and Section 5 gives the concluding remarks. The Appendix provides the proofs.

2. Existing theories and methods

2.1. Heterogeneous autoregressive models

First, we describe the third-order HAR(3) model of Corsi (2009), defined by

$$Y_t = \beta_0 + \beta_1 Y_{t,h_1} + \cdots + \beta_p Y_{t,h_p} + a_t, \tag{2.1}$$

where p = 3, a_t is a sequence of regression errors,

$$Y_{t,h_j} = \frac{1}{h_j}\left(Y_{t-1} + Y_{t-2} + \cdots + Y_{t-h_j}\right), \qquad j = 1, 2, \ldots, \tag{2.2}$$

h_1 = 1, h_2 = 5, and h_3 = 22. Note that Y_{t,h_j}, j = 2, 3, are weekly and monthly moving averages, respectively. This model captures long memory in a parameter-parsimonious way through the moving averages. However, the model is theoretically a short-memory AR(22) model having an exponentially decreasing ACF and is not a genuine long-memory model.
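As an illustration of (2.2), the component regressors are trailing moving averages of lagged values and can be assembled into a design matrix; the sketch below is in Python with a hypothetical helper name (`har_components`), not code from the paper:

```python
import numpy as np

def har_components(y, horizons=(1, 5, 22)):
    """Build the HAR regressors Y_{t,h_j} = (Y_{t-1} + ... + Y_{t-h_j}) / h_j.

    Returns an (n - max(horizons)) x len(horizons) matrix whose row for time t
    holds the daily, weekly, and monthly moving averages of past observations,
    aligned with the responses y[max(horizons):].
    """
    y = np.asarray(y, dtype=float)
    hmax = max(horizons)
    cols = []
    for h in horizons:
        # trailing moving average over the h most recent lagged values
        cols.append([y[t - h:t].mean() for t in range(hmax, len(y))])
    return np.column_stack(cols)
```

Row t of the resulting matrix then supplies the regressors for Y_t in the HAR(3) regression (2.1).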

As an extension of Corsi's model (2.1), Hwang and Shin (2014) proposed an infinite-order, genuine long-memory HAR(∞) model having an algebraically decreasing ACF, defined by

$$Y_t = \beta_0 + \sum_{j=1}^{\infty} \beta_j Y_{t,h_j} + \varepsilon_t, \tag{2.3}$$

where Y_{t,h_j} is as in (2.2), {β_j : j = 0, 1, 2, …} is a sequence of real numbers tending to 0, {h_j : j = 1, 2, …} is a given sequence of positive integers increasing to ∞, and {ε_t} is a sequence of iid random variables with mean zero and variance $E[\varepsilon_t^2] = \sigma_\varepsilon^2$.

Here, we review the discussion of Hwang and Shin (2014) for the HAR(∞) process Y_t in (2.3) and adopt their assumptions on the HAR(∞). The HAR(∞) process Y_t in (2.3) can be written as an AR(∞) process:

$$Y_t = \beta_0 + \sum_{i=1}^{\infty} \varphi_i Y_{t-i} + \varepsilon_t$$

with $\alpha_j = \sum_{k=j}^{\infty} \beta_k/h_k$ for j = 1, 2, …; $\varphi_1 = \alpha_1$, $\varphi_{h_j+r} = \alpha_{j+1}$ for r = 1, 2, …, h_{j+1} − h_j and j = 1, 2, …; and h_0 = 0. We need the following condition for the stationarity of Y_t:

  • (A1) The coefficients β_j in (2.3) satisfy $\sum_{j=1}^{\infty} |\beta_j| < \infty$, and A(z) ≠ 0 for |z| ≤ 1, where the polynomial

$$A(z) = 1 - \sum_{j=1}^{\infty} \alpha_j f_j(z)$$

    with $f_j(z) = \sum_{k=h_{j-1}+1}^{h_j} z^k$ for j = 1, 2, ….

We refer to Remarks 1 and 2 of Hwang and Shin (2014) for a necessary and sufficient condition for (A1) and for the absolute summability of $\varphi_i$ and $\alpha_j$. Proposition 1, given by Hwang and Shin (2014), characterizes the stationarity of Y_t.

Proposition 1

Assume condition (A1). Then Y_t is stationary and has the one-sided infinite moving-average representation $Y_t = \mu + \sum_{k=0}^{\infty} \xi_k \varepsilon_{t-k}$, where μ = E(Y_t) and the ξ_k are recursively calculated as $\xi_0 = 1$, $\xi_k = \sum_{\ell=0}^{k-1} \xi_\ell \varphi_{k-\ell}$ for k = 1, 2, …. Moreover, the ξ_k are absolutely summable.

For the long memory property of Yt, we need the following condition:

  • (A2) For generic constants c, $\beta_j \sim c\lambda^j$ for some |λ| < 1 and $h_j \sim c\omega^j$ for some ω > 1. Here, we write $a_j \sim b_j$ to denote $a_j = b_j + o(b_j)$.

Hwang and Shin (2014) investigated the long-memory property of the HAR(∞) model; the results are stated in Propositions 2 and 3. In particular, Proposition 3 tells us that, given $h_j \sim c\omega^j$, the HAR(∞) model is of long memory with an algebraically decreasing ACF if and only if β_j decreases exponentially.

Proposition 2

Under conditions (A1) and (A2), for $\rho = (\log\omega - \log|\lambda|)/\log\omega > 1$ and generic constants c, we have (i) $|\xi_k| \sim ck^{-\rho}$ and (ii) $\gamma_k = \mathrm{Cov}(Y_t, Y_{t+k}) = \sigma_\varepsilon^2 \sum_{i=0}^{\infty} \xi_i \xi_{i+k} \sim ck^{-\rho}$.

Proposition 3

Assume (A1), $h_j \sim c\omega^j$ for some ω > 1, and $|\xi_k| \sim ck^{-\rho}$ for some ρ > 1. Then $\beta_j \sim c\lambda^j$ for $\lambda = \omega^{1-\rho} \in (0, 1)$.

Estimation theory was provided by Hwang and Shin (2014), in which the infinite-order HAR model in (2.3) is estimated by a finite pth-order model in (2.1) with p increasing as the sample size increases. They proved consistency and limiting normality of the OLSE of the HAR coefficients.

2.2. Functional central limit theorem

Recently, Lee (2014) established an FCLT for the HAR(∞) model by showing that f(Y_t) is L_2-NED on {ε_t}, where $f(x) = |x|^\nu$ or $f(x) = \mathrm{sign}(x)\,|x|^\nu$ (ν > 0). A mean break test is developed from the CUSUM of Y_t − μ and a variance break test is developed from the CUSUM of $Y_t^2 - E[Y_t^2]$, which correspond to ν = 1 and ν = 2, respectively. For notational simplicity, we denote y_t = f(Y_t).

We say that {y_t} is L_2-NED on {ε_t} if {y_t} satisfies

$$\left\| y_t - E\left(y_t \mid \mathcal{F}_{t-\ell}^{t+\ell}\right) \right\|_2 \le c_t\, d(\ell),$$

where $\mathcal{F}_{t-\ell}^{t+\ell} = \sigma\{\varepsilon_{t-\ell}, \ldots, \varepsilon_t, \varepsilon_{t+1}, \ldots, \varepsilon_{t+\ell}\}$ is the σ-algebra generated by $\{\varepsilon_{t-\ell}, \ldots, \varepsilon_{t+\ell}\}$, c_t is a sequence of positive constants, and d(ℓ) → 0 as ℓ → ∞. Denote $\|x\|_p = (E|x|^p)^{1/p}$ for 1 ≤ p < ∞ and $\|x\|_p = E|x|^p$ for 0 < p < 1. Let $\sigma_y^2 = \mathrm{Var}(y_1) + 2\sum_{t=1}^{\infty} \mathrm{Cov}(y_1, y_{t+1})$.

Proposition 4

(Lee, 2014) Assume (A1), (A2), and that one of (a) 0 < ν < 1 with ν(ρ − 1) > 1, or (b) ν ≥ 1 with ρ > 3 holds. If $\|\varepsilon_t\|_{2\nu} < \infty$ and $\sigma_y > 0$ in each case, then

$$S_{yn}(z) := \frac{1}{\sigma_y \sqrt{n}} \sum_{t=1}^{[nz]} \left\{ y_t - E(y_t) \right\} \to_d B(z),$$

where $y_t = |Y_t|^\nu$ or $y_t = \mathrm{sign}(Y_t)\,|Y_t|^\nu$ and B(z) is a standard Brownian motion for 0 ≤ z ≤ 1.

The FCLT in Proposition 4 requires conditions on ν and ρ that imply $\gamma_k \sim ck^{2d-1}$ with d < 0. This condition does not permit fractional integration I(d) with d ∈ (0, 1/2) because an I(d) process has $\gamma_k \sim ck^{2d-1}$. Our break tests below depend on the FCLT and are not valid for long-memory processes with $\gamma_k \sim ck^{2d-1}$, d ∈ (0, 1/2). We observe that, for an I(d) process y_t, according to Baillie (1996), the weak limit of S_{yn}(z) is a fractional Brownian motion B_d(z) depending on the fractional integration order d. Consequently, the usual break tests tend to be over-sized if d ∈ (0, 1/2). This makes it very difficult to distinguish between long memory and breaks, and the literature provides no satisfactory test. Therefore, the validity of our tests only under d < 0 is not a real disadvantage.

2.3. Existing break tests

Let a data set {Yt : t = 1, …, n} be given. There are two strategies of constructing tests for detecting breaks in the mean or variance of Yt during the data span {1, …, n}: one is based on the cumulative sum of observed data and the other is based on the cumulative sum of HAR-residuals. The long memory of the original sample is addressed by a consistent long-run variance estimator in the first strategy and by an HAR regression in the second strategy.

The first strategy was considered by Lee (2014). Applying the FCLT in Proposition 4, Lee (2014) constructed the CUSUM test for mean break and the CUSUM test for variance break and derived their limiting distributions. The mean break test and the variance break test are

$$Q_n^M = \frac{1}{\hat\sigma_y \sqrt{n}} \sup_{0 \le z \le 1} \left| \sum_{t=1}^{[nz]} (y_t - \bar y_n) \right| \ \text{with } y_t = Y_t, \qquad Q_n^V = \frac{1}{\hat\sigma_y \sqrt{n}} \sup_{0 \le z \le 1} \left| \sum_{t=1}^{[nz]} (y_t - \bar y_n) \right| \ \text{with } y_t = Y_t^2,$$

respectively, where $\bar y_n = (1/n) \sum_{t=1}^{n} y_t$ and

$$\hat\sigma_y^2 := \frac{1}{n} \sum_{t=1}^{n} (y_t - \bar y_n)^2 + \frac{2}{n} \sum_{t=1}^{\ell} \left(1 - \frac{t}{\ell + 1}\right) \sum_{i=1}^{n-t} (y_i - \bar y_n)(y_{i+t} - \bar y_n), \qquad \ell < n, \tag{2.4}$$

is a consistent estimator of the long-run variance $\lim_{n\to\infty} n\,\mathrm{Var}(\bar y_n)$ with bandwidth ℓ.
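A minimal sketch of this first-strategy statistic, with the Bartlett-kernel long-run variance estimator written out explicitly (the function names are ours, not the paper's):

```python
import numpy as np

def lrv_bartlett(y, ell):
    """Bartlett-kernel long-run variance estimator with bandwidth ell."""
    y = np.asarray(y, dtype=float)
    n = len(y)
    d = y - y.mean()
    s2 = d @ d / n                       # lag-0 term
    for t in range(1, ell + 1):
        gamma_t = (d[:-t] @ d[t:]) / n   # lag-t autocovariance
        s2 += 2.0 * (1.0 - t / (ell + 1)) * gamma_t
    return s2

def cusum_stat(y, ell):
    """Q_n = max_k |sum_{t<=k} (y_t - ybar)| / (sigma_hat_y * sqrt(n));
    pass y = Y_t for the mean break test and y = Y_t**2 for the variance test."""
    y = np.asarray(y, dtype=float)
    n = len(y)
    return np.abs(np.cumsum(y - y.mean())).max() / np.sqrt(lrv_bartlett(y, ell) * n)
```

The supremum over z reduces to a maximum over the n partial sums, so the statistic is a single pass over the cumulative sums.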

The second strategy was considered by Hwang and Shin (2013, 2015). The HAR(p) model (2.1) is estimated by OLS regression. Let $\hat\beta_0, \ldots, \hat\beta_p$ be the OLSEs. The break tests are based on the OLS residuals

$$\hat a_t = Y_t - \hat\beta_0 - \hat\beta_1 Y_{t,h_1} - \cdots - \hat\beta_p Y_{t,h_p}, \qquad t = h_p + 1, \ldots, n,$$

as given by

$$P_n^M = \frac{1}{\hat\sigma_a \sqrt{n - h_p}} \sup_{0 \le z \le 1} \left| \sum_{t=h_p+1}^{[nz]} \hat a_t \right|, \qquad P_n^V = \frac{1}{\hat\sigma_b \sqrt{n - h_p}} \sup_{0 \le z \le 1} \left| \sum_{t=h_p+1}^{[nz]} \hat b_t \right|, \qquad \hat b_t = \hat a_t^2 - \hat\sigma_a^2,$$

where

$$\hat\sigma_a^2 := \frac{1}{n - h_p} \sum_{t=h_p+1}^{n} \hat a_t^2, \qquad \hat\sigma_b^2 := \frac{1}{n - h_p} \sum_{t=h_p+1}^{n} \hat b_t^2.$$

The tests PnM and PnV detect breaks in mean and variance of Yt, t = 1, …, n, respectively.
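The second-strategy statistic can be sketched in the same way: fit the HAR(p) regression by OLS and apply the CUSUM to the residuals. The helper below is an illustrative implementation under our own naming, not the authors' code:

```python
import numpy as np

def har_residual_cusum(y, horizons=(1, 5, 22)):
    """P_n^M-style statistic: normalized maximal CUSUM of HAR(p) OLS residuals.

    Fits Y_t = b0 + sum_j b_j Y_{t,h_j} + a_t for t = h_p + 1, ..., n by least
    squares and returns max_k |sum a_hat| / (sigma_hat_a * sqrt(n - h_p)).
    """
    y = np.asarray(y, dtype=float)
    hp = max(horizons)
    # intercept plus one column of lagged moving averages per horizon
    X = np.column_stack(
        [np.ones(len(y) - hp)]
        + [[y[t - h:t].mean() for t in range(hp, len(y))] for h in horizons]
    )
    resp = y[hp:]
    beta, *_ = np.linalg.lstsq(X, resp, rcond=None)
    a_hat = resp - X @ beta
    sigma_a = np.sqrt((a_hat ** 2).mean())
    return np.abs(np.cumsum(a_hat)).max() / (sigma_a * np.sqrt(len(a_hat)))
```

The variance-break version would apply the same CUSUM to $\hat b_t = \hat a_t^2 - \hat\sigma_a^2$ with its own normalization.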

Large values of $Q_n^M, Q_n^V, P_n^M$, and $P_n^V$ reject the null hypothesis of parameter constancy. Critical values of the tests can be obtained from the large-sample null distributions of the tests. Under the null hypothesis of no break, according to Proposition 4 and the results of Hwang and Shin (2013, 2015), with consistent $\hat\sigma_y$, the limiting null distributions of $Q_n^M, Q_n^V, P_n^M$, and $P_n^V$ are all the supremum of a standard Brownian bridge:

$$Q_n^M,\ Q_n^V,\ P_n^M,\ P_n^V \to_d \sup_{0 \le z \le 1} |B^0(z)| \qquad \text{as } n \to \infty, \tag{2.5}$$

whose distribution function is given by

$$\Pr\left[\sup_{0 \le z \le 1} |B^0(z)| \le x\right] = 1 + 2 \sum_{k=1}^{\infty} (-1)^k e^{-2k^2 x^2}, \qquad x > 0, \tag{2.6}$$

where $B^0(z) := B(z) - zB(1)$ is a standard Brownian bridge and B(z) is a standard Brownian motion.
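The distribution function of $\sup_{0\le z\le 1}|B^0(z)|$ is the Kolmogorov distribution, whose alternating series can be truncated for numerical evaluation; a small sketch with a hypothetical helper name:

```python
import math

def sup_brownian_bridge_cdf(x, terms=100):
    """Pr[sup_{0<=z<=1} |B0(z)| <= x] = 1 + 2 * sum_{k>=1} (-1)^k exp(-2 k^2 x^2)."""
    if x <= 0.0:
        return 0.0
    return 1.0 + 2.0 * sum((-1) ** k * math.exp(-2.0 * k * k * x * x)
                           for k in range(1, terms + 1))
```

For instance, the asymptotic 5% critical value x ≈ 1.358 solves this probability equal to 0.95; the series converges extremely fast, so a few terms suffice in practice.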

3. Stationary bootstrap tests

We construct SB tests and prove their asymptotic validity. Assume that a sample {Y_t : t = 1, 2, …, n} is given. We apply the SB of Politis and Romano (1994) to the sample {Y_t : t = 1, 2, …, n} to produce an SB sample $\{Y_t^* : t = 1, 2, \ldots, n\}$.

SB versions (Qn*M,Qn*V,Pn*M,Pn*V) are constructed from the SB sample {Yt*:t=1,2,,n} in the same way that (QnM,QnV,PnM,PnV) are constructed from the original sample {Yt : t = 1, 2, … , n}.

Consistencies of the null bootstrapping distributions of (Qn*M,Qn*V,Pn*M,Pn*V) are proved. This enables us to use the quantiles of the bootstrapping distributions as critical values for the tests (QnM,QnV,PnM,PnV). The tests with SB critical values will be called stationary bootstrap tests in the sequel. Consistencies of the SB tests are proved under alternatives of mean or variance changes.

We briefly describe how to construct the SB sample from the original sample. Let {I_1, I_2, …} be independent uniform random variables on {1, 2, …, n}. Let {L_1, L_2, …} be independent geometric random variables with mean 1/ϱ, independent of {I_1, I_2, …}. For observations {Y_t : t = 1, 2, …, n}, consider the periodic extension {Y_{n,i} : i ≥ 1} obtained by wrapping the sample around a circle, with Y_{n,i} = Y_t for i = nq + t, q = 0, 1, …, and 1 ≤ t ≤ n. Define the blocks ℬ(I_j, L_j) starting at Y_{n,I_j} with block length L_j. Let κ = inf{k ≥ 1 : L_1 + ⋯ + L_k ≥ n}. Then combine the κ blocks ℬ(I_1, L_1), …, ℬ(I_κ, L_κ) and take the first n elements to get the bootstrap sample $\{Y_t^* : t = 1, 2, \ldots, n\}$.
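The construction just described can be sketched as follows: uniform block starts I_j, geometric block lengths L_j with mean 1/ϱ, and circular wrapping of the sample (the function name and interface are our own):

```python
import numpy as np

def stationary_bootstrap(y, varrho, rng=None):
    """Draw one stationary-bootstrap resample of y (Politis and Romano, 1994)."""
    rng = np.random.default_rng(rng)
    y = np.asarray(y)
    n = len(y)
    out = np.empty(n, dtype=y.dtype)
    filled = 0
    while filled < n:                        # stop once kappa blocks cover n points
        start = rng.integers(n)              # I_j ~ Uniform{0, ..., n-1}
        length = rng.geometric(varrho)       # L_j ~ Geometric(varrho), mean 1/varrho
        take = min(length, n - filled)       # keep only the first n elements overall
        out[filled:filled + take] = y[(start + np.arange(take)) % n]  # wrap circularly
        filled += take
    return out
```

The modulo indexing implements the periodic extension {Y_{n,i}}, and truncating the last block to `n - filled` elements implements taking the first n elements of the combined blocks.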

3.1. Consistencies of the null distributions of Qn*M and Qn*V

The stationary bootstrap versions of $Q_n^M$ and $Q_n^V$ are

$$Q_n^{*M} = \frac{1}{\hat\sigma_y^* \sqrt{n}} \sup_{0 \le z \le 1} \left| \sum_{t=1}^{[nz]} (y_t^* - \bar y_n^*) \right| \ \text{with } y_t^* = Y_t^*, \qquad Q_n^{*V} = \frac{1}{\hat\sigma_y^* \sqrt{n}} \sup_{0 \le z \le 1} \left| \sum_{t=1}^{[nz]} (y_t^* - \bar y_n^*) \right| \ \text{with } y_t^* = Y_t^{*2},$$

where

$$\bar y_n^* = \frac{1}{n} \sum_{t=1}^{n} y_t^*, \qquad \hat\sigma_y^{*2} := \frac{1}{n} \sum_{t=1}^{n} (y_t^* - \bar y_n^*)^2 + \frac{2}{n} \sum_{t=1}^{\ell} \left(1 - \frac{t}{\ell + 1}\right) \sum_{i=1}^{n-t} (y_i^* - \bar y_n^*)(y_{i+t}^* - \bar y_n^*), \qquad \ell < n.$$

In order to prove consistencies of the null distributions of $Q_n^{*M}$ and $Q_n^{*V}$, we first establish an FCLT for the SB CUSUM

$$S_{yn}^*(z) := \frac{1}{\sigma_{y,n}^* \sqrt{n}} \sum_{t=1}^{[nz]} \left\{ y_t^* - E^*(y_t^*) \right\}, \qquad y_t^* = Y_t^* \ \text{or} \ Y_t^{*2},$$

for 0 ≤ z ≤ 1, where

$$\sigma_{y,n}^{*2} := \mathrm{Var}^*\left( \frac{1}{\sqrt n} \sum_{t=1}^{n} y_t^* \right)$$

and E* and Var* denote the expectation and variance conditional on the sample {Y_t : t = 1, 2, …, n}. In the following theorems, the bootstrap version $S_{yn}^*(z)$ of the cumulative sum $S_{yn}(z)$ is shown to converge to the standard Brownian motion as in Theorem 1 below, from which we obtain the consistencies of the null distributions of $Q_n^{*M}$ and $Q_n^{*V}$ as in Theorem 2 below.

Theorem 1

We assume the same conditions as in Proposition 4. Under the null hypothesis of no break, if ϱ → 0 and nϱ → ∞, then, as n → ∞,

$$S_{yn}^*(\cdot) \to_{d^*} B(\cdot) \quad \text{in probability},$$

where $\to_{d^*}$ denotes convergence in distribution conditional on the sample {Y_1, …, Y_n}.

Theorem 2

Assume the same conditions as in Theorem 1. Assume further that the bandwidth ℓ is chosen so that $\hat\sigma_y^*$ is consistent. Then, as n → ∞,

$$Q_n^{*M},\ Q_n^{*V} \to_{d^*} \sup_{0 \le z \le 1} |B^0(z)| \quad \text{in probability}.$$
3.2. Consistencies of the null distributions of Pn*M and Pn*V

Let $\hat\beta_0^*, \ldots, \hat\beta_p^*$ be the OLSEs constructed from the SB sample $\{Y_1^*, \ldots, Y_n^*\}$. Let

$$\hat a_t^* = Y_t^* - \hat\beta_0^* - \hat\beta_1^* Y_{t,h_1}^* - \cdots - \hat\beta_p^* Y_{t,h_p}^*, \qquad \hat b_t^* = \hat a_t^{*2} - \hat\sigma_a^{*2}, \qquad t = h_p + 1, \ldots, n,$$
$$\hat\sigma_a^{*2} := \frac{1}{n - h_p} \sum_{t=h_p+1}^{n} \hat a_t^{*2}, \qquad \hat\sigma_b^{*2} := \frac{1}{n - h_p} \sum_{t=h_p+1}^{n} \hat b_t^{*2},$$
$$\sigma_{an}^{*2} := \mathrm{Var}^*\left( \frac{1}{\sqrt{n - h_p}} \sum_{t=h_p+1}^{n} \hat a_t^* \right), \qquad \sigma_{bn}^{*2} := \mathrm{Var}^*\left( \frac{1}{\sqrt{n - h_p}} \sum_{t=h_p+1}^{n} \hat b_t^* \right).$$

The SB versions Pn*M and Pn*V of PnM and PnV are

$$P_n^{*M} = \frac{1}{\hat\sigma_a^* \sqrt{n - h_p}} \sup_{0 \le z \le 1} \left| \sum_{t=h_p+1}^{[nz]} \hat a_t^* \right|, \qquad P_n^{*V} = \frac{1}{\hat\sigma_b^* \sqrt{n - h_p}} \sup_{0 \le z \le 1} \left| \sum_{t=h_p+1}^{[nz]} \hat b_t^* \right|.$$


Define

$$S_{an}^*(z) := \frac{1}{\sigma_{an}^* \sqrt{n - h_p}} \sum_{t=h_p+1}^{[nz]} \left\{ \hat a_t^* - E^*(\hat a_t^*) \right\}, \qquad S_{bn}^*(z) := \frac{1}{\sigma_{bn}^* \sqrt{n - h_p}} \sum_{t=h_p+1}^{[nz]} \left\{ \hat b_t^* - E^*(\hat b_t^*) \right\}$$

for 0 ≤ z ≤ 1, where $E^*(\hat a_t^*) = E^*(\hat b_t^*) = 0$. In the following theorems, the bootstrap versions $S_{an}^*(z)$ and $S_{bn}^*(z)$ of the cumulative sums $S_{an}(z)$ and $S_{bn}(z)$ are shown to converge to the standard Brownian bridge as in Theorem 3 below, from which we get the consistencies of the null distributions of $P_n^{*M}$ and $P_n^{*V}$ as in Theorem 4 below.

Theorem 3

We assume the same conditions as in Proposition 4. Under the null hypothesis of no break, if ϱ → 0, nϱ → ∞, p → ∞, and $p^{2+\epsilon} = O(n)$ for some ε > 0, then, as n → ∞,

$$S_{an}^*(\cdot) \to_{d^*} B^0(\cdot) \quad \text{in probability}, \qquad S_{bn}^*(\cdot) \to_{d^*} B^0(\cdot) \quad \text{in probability}.$$
Theorem 4

Assume the same conditions as in Theorem 3. Then, as n → ∞,

$$P_n^{*M},\ P_n^{*V} \to_{d^*} \sup_{0 \le z \le 1} |B^0(z)| \quad \text{in probability}.$$
3.3. Bootstrap tests for structural changes

Thanks to the consistencies in Theorems 2 and 4, asymptotic critical values of the mean break tests $Q_n^M$ and $P_n^M$ and the variance break tests $Q_n^V$ and $P_n^V$ can be obtained from the distributions of the bootstrap statistics $Q_n^{*M}, P_n^{*M}, Q_n^{*V}$, and $P_n^{*V}$ instead of the large-sample distributions in (2.5) and (2.6).

Let R_n be one of $Q_n^M, P_n^M, Q_n^V$, and $P_n^V$, and let $R_n^*$ be the corresponding SB version. We construct the level-α bootstrap critical value $R_n^*(\alpha)$ as the upper αth empirical quantile of the m independent stationary bootstrap test values $\{R_n^*(i) : i = 1, \ldots, m\}$, α ∈ (0, 1). The SB test then rejects the null hypothesis of mean (or variance) constancy if R_n is larger than $R_n^*(\alpha)$.
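In summary, the SB test computes R_n on the data, recomputes the statistic on m SB resamples, and rejects when R_n exceeds the empirical (1 − α) quantile of the bootstrap values. A self-contained sketch with our own naming follows; the statistic is passed in as a function, and any concrete CUSUM statistic can be plugged in:

```python
import numpy as np

def sb_resample(y, varrho, rng):
    """One stationary-bootstrap resample: uniform starts, geometric block lengths."""
    n = len(y)
    out = np.empty(n)
    filled = 0
    while filled < n:
        start = rng.integers(n)
        take = min(rng.geometric(varrho), n - filled)
        out[filled:filled + take] = y[(start + np.arange(take)) % n]
        filled += take
    return out

def sb_break_test(y, stat, alpha=0.05, varrho=0.05, m=500, seed=0):
    """Reject the no-break null if stat(y) exceeds the empirical (1 - alpha)
    quantile of the m bootstrap statistic values stat(y*_1), ..., stat(y*_m)."""
    rng = np.random.default_rng(seed)
    y = np.asarray(y, dtype=float)
    r_n = stat(y)
    r_star = np.array([stat(sb_resample(y, varrho, rng)) for _ in range(m)])
    crit = np.quantile(r_star, 1.0 - alpha)
    return r_n > crit, r_n, crit
```

Because the blocks are drawn at random positions, the resamples carry no structural change even under the alternative, which is exactly why the bootstrap critical value stays bounded while R_n diverges (Theorems 5 and 6).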

Consistencies of the stationary bootstrap tests will be proved under the alternative hypothesis of a single mean break at time t_0 ∈ {1, …, n},

$$E(Y_t) = \mu^{(1)} \ \text{for } t \le t_0 \quad \text{and} \quad E(Y_t) = \mu^{(2)} \ne \mu^{(1)} \ \text{for } t > t_0, \tag{3.1}$$

or of a variance break at time t_0 ∈ {1, …, n},

$$\mathrm{Var}(Y_t) = \sigma_{(1)}^2 \ \text{for } t \le t_0 \quad \text{and} \quad \mathrm{Var}(Y_t) = \sigma_{(2)}^2 \ne \sigma_{(1)}^2 \ \text{for } t > t_0. \tag{3.2}$$

In the following, Pr* denotes the conditional probability given the sample {Y_t : t = 1, 2, …, n}.

Theorem 5

We assume the same conditions as in Theorem 2 except for the condition of no break. Let α ∈ (0, 1). If (3.1) holds, then as n → ∞, $\Pr^*[Q_n^M > Q_n^{*M}(\alpha)] \to_p 1$. If (3.2) holds, then as n → ∞, $\Pr^*[Q_n^V > Q_n^{*V}(\alpha)] \to_p 1$.

Theorem 6

We assume the same conditions as in Theorem 4 except for the condition of no break. Let α ∈ (0, 1). If (3.1) holds, then as n → ∞, $\Pr^*[P_n^M > P_n^{*M}(\alpha)] \to_p 1$. If (3.2) holds, then as n → ∞, $\Pr^*[P_n^V > P_n^{*V}(\alpha)] \to_p 1$.

4. Monte-Carlo study

A simulation experiment is conducted to investigate finite-sample sizes and powers of the proposed break tests. Long-memory data are generated by approximating the HAR(∞) model by the HAR(7) model

$$Y_t = \sum_{j=1}^{p} \beta_j Y_{t,h_j} + e_t, \qquad p = 7, \quad t = 1, \ldots, n,$$

where e_t is a sequence of independent standard normal errors. The sample size is set to n = 1,000, 2,000, 4,000, corresponding roughly to 5 years, 10 years, and 20 years of daily data, respectively. For the power study of the mean break tests, 0.125 is added to Y_t for all t > n/2. For the power study of the variance break tests, e_t is multiplied by 1.08 for all t > n/2. The parameters for the HAR model are chosen as in Table 1: D1 and D2 are HAR(7) models with $\sum_{j=1}^{p} \beta_j = 0.9$ and λ = 0.6, 0.9, respectively; D3 and D4 are the historic HAR(3) models for the RVs of the US S&P500 and the US T-Bond, respectively, analyzed by Corsi (2009).
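The DGP above can be sketched as a direct recursion (our own helper name; the process starts at zero and a burn-in period is discarded, in the spirit of the paper's design):

```python
import numpy as np

def simulate_har(beta, horizons, n, burn=1000, seed=0):
    """Simulate Y_t = sum_j beta_j * Y_{t,h_j} + e_t with iid N(0,1) errors e_t."""
    rng = np.random.default_rng(seed)
    hmax = max(horizons)
    total = burn + n + hmax
    y = np.zeros(total)                      # zero start-up values
    e = rng.standard_normal(total)
    for t in range(hmax, total):
        # each regressor is the trailing moving average over horizon h
        y[t] = sum(b * y[t - h:t].mean() for b, h in zip(beta, horizons)) + e[t]
    return y[-n:]                            # discard burn-in
```

For example, the D3 design corresponds to `simulate_har((0.372, 0.343, 0.224), (1, 5, 22), n)`.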

The normal errors e_t are generated by RNNOA, a FORTRAN subroutine in IMSL. Data Y_t are simulated from t = −1,000 with Y_t = 0 for t ≤ −1,000. Data Y_1, …, Y_n are used for computing the mean break test statistics $Q_n^M, Q_n^{*M}, P_n^M, P_n^{*M}$ and the variance break test statistics $Q_n^V, Q_n^{*V}, P_n^V, P_n^{*V}$. For $Q_n^M, Q_n^{*M}, Q_n^V, Q_n^{*V}$, based on the CUSUM of Y_t or Y_t², the bandwidth parameter for the long-run variance estimate (2.4) is chosen as ℓ = i(n/1000)^{1/4}, i = 0, 20, 80, 160. We use the 1/4-order bandwidth because it is generally recommended for long-run variance estimation; see, for example, Schwert (1989). A wide range of values of ℓ is chosen because these tests are very sensitive to ℓ. For the P-statistics $P_n^M, P_n^{*M}, P_n^V, P_n^{*V}$, based on the residuals of the HAR fitting, the residuals are computed from the HAR(p) model with p = 2, 3, 4.

For computing the stationary bootstrap tests $Q_n^{*M}, P_n^{*M}, Q_n^{*V}, P_n^{*V}$, the block length parameter is set to ϱ = 0.005(n/1000)^{−1/3}, so that the mean block length is ϱ^{−1}. The 1/3-order parameter is chosen because it is usually optimal for parameter estimators based on block bootstrapping; see, for example, Bühlmann (2002).

Tables 2 and 3 report sizes and powers of the mean break tests and the variance break tests, respectively, based on 1,000 independent replications with m = 1,000 bootstrap samples. In computing the P-tests and P*-tests, we first consider HAR(3) fitting because the HAR(3) model is usually used in practice for analyzing realized volatilities. We observe the following.

  • SB improves the sizes of $Q_n^M, Q_n^V, P_n^M$ but does not improve the size of $P_n^V$.

  • Among the four mean break tests $Q_n^M, Q_n^{*M}, P_n^M, P_n^{*M}$, the SB test $P_n^{*M}$ based on HAR residuals is the best.

  • Among the four variance break tests $Q_n^V, Q_n^{*V}, P_n^V, P_n^{*V}$, the non-bootstrap test $P_n^V$ based on HAR residuals is the best.

We next investigate the performance of the P-tests and P*-tests in terms of the HAR estimation order p. Tables 4 and 5 report sizes and powers of the tests which are constructed under the same setup used for Tables 2 and 3. We observe the following.

  • For the mean break test, we see that, for p = 2, a (relatively) under-specified order, the P-tests tend to be over-sized, especially for D3 and D4, the historical DGPs; the SB mitigates the over-sizing for D1 and D2 but not for D3 and D4.

  • For the mean break test, we see that the power of the test with p = 4, a (relatively) over-specified order, tends to be smaller than that with p = 3.

  • For the variance break test, there is no significant difference in the size and power performances of the tests with p = 2, 3, 4.

From this experiment, we can say that SB considerably improves the sizes of the existing tests, especially the mean break tests, without power loss. No substantial improvement is achieved for the variance break test $P_n^V$ by the SB because the size and power of $P_n^V$ are already robust against the bandwidth parameter ℓ and the HAR estimation order p. The widely used HAR(3) fitting produces P-tests and P*-tests whose performance is no worse than that of the HAR(2) or HAR(4) fittings.

5. Conclusion

We established the stationary bootstrap functional central limit theorem (FCLT) for the HAR(∞) model, which is a genuine long-memory model for realized volatility in financial economics. The bootstrap version of the cumulative sum of the HAR(∞) process is shown to converge to the standard Brownian motion. Applying the FCLT, under the null hypothesis of no break, we have established consistencies of the bootstrap null distributions of the SB CUSUM tests for structural mean change and for structural variance change. Consistencies of the stationary bootstrap tests are also established under alternative hypotheses of mean break and of variance break. Monte-Carlo simulation shows that SB improves size performance of existing tests especially for mean break tests.


Acknowledgements

This study was supported by grants from the National Research Foundation of Korea (2016R1A2B4008780, NRF-2015-1006133).

Appendix : Proofs

Proof of Theorem 1

The proof proceeds in a similar way to that of Theorem 1 of Parker et al. (2006). Note that $E^* y_t^* = (1/n) \sum_{t=1}^{n} y_t = \bar y_n$. For t = 1, 2, …, n, let $K_t = \inf\{k : L_1 + \cdots + L_k \ge t\}$ and note that $K_n = \kappa$. We observe


where $M_{[nz]} = L_1 + \cdots + L_{K_{[nz]}}$, $I_{[nz]} = I_{K_{[nz]}}$, and $L_{[nz]} = L_{K_{[nz]}}$. It is well known (see Politis and Romano (1994) or Hwang and Shin (2012)) that the last $n - (L_1 + \cdots + L_{\kappa-1})$ observations in the last block ℬ(I_κ, L_κ) of the SB procedure do not affect the limiting distribution of the SB sample mean, by the memoryless property of the geometric distribution; in the same way we have $n^{-1/2} \sum_{t=1}^{M_{[nz]}-[nz]} (y_{n,\, I_{[nz]}+L_{[nz]}-t} - \bar y_n) \to_p 0$.

For j = 1, 2, …, κ, let $\hat U_j = \sum_{t=1}^{L_j} (y_{n, I_j + t - 1} - \bar y_n)$, i.e., the sum of the centered $y_{n,t}$'s belonging to the block ℬ(I_j, L_j), and let $\hat R_n(z) = n^{-1/2} \sum_{j=1}^{K_{[nz]}} \hat U_j$ for 0 ≤ z ≤ 1. For the desired result, it suffices to show $\hat R_n(\cdot) \to_{d^*} \sigma_y B(\cdot)$ in probability. The proof is based on Theorem 13.5 of Billingsley (1999), which requires the convergence of the finite-dimensional distributions and the tightness of the partial sum process, as follows: for 0 ≤ z_1 < ⋯ < z_k ≤ 1, in probability,


where $\Sigma = ((c_{i,j}))_{i,j=1,\ldots,k}$ with $c_{i,j} = \sigma_y^2 \min\{z_i, z_j\}$, and, for 0 ≤ z < u < r ≤ 1,


Verifications of (A.1) and (A.2) can be given in the same way as those on pp. 627–628 (proofs of Eqs. (34) and (35)) of Parker et al. (2006). Note that Gonçalves and de Jong (2003) proved the first-order asymptotic validity of the SB for NED processes under ϱ → 0 and nϱ → ∞, which applies to the process y_t by Lemmas 1 and 2 of Lee (2014); thus the SB variance of the NED process y_t converges in probability: $\sigma_{y,n}^{*2} := \mathrm{Var}^*\big(n^{-1/2} \sum_{t=1}^{n} y_t^*\big) \to_p \sigma_y^2$. This finishes the proof of Theorem 1.

Proof of Theorem 2

It is obvious from the result in Theorem 1.

Proof of Theorem 3

Let $\tilde a_t = Y_t^* - \hat\beta_0 - \sum_{j=1}^{p} \hat\beta_j Y_{t,h_j}^*$, and mimic the model (2.1) with the stationary bootstrap sample $\{Y_t^*\}$ as follows:


Following the same arguments as in Step 1 of the proof of Theorem 2.4 by Hwang and Shin (2013), together with the asymptotic normality of the stationary sequence {ãt : t = 1, 2, … }, we can show that


where $\tilde\sigma_a^2 = \operatorname{plim}_{n\to\infty} \mathrm{Var}^*\big(\sum_{t=1}^{n} \tilde a_t / \sqrt n\big)$.

The model (A.3) can be written as


where $\hat\zeta_0 = \hat\beta_0 + \sum_{j=1}^{p} \hat\beta_j \hat\mu$ and $\hat\mu = (1/n) \sum_{t=1}^{n} Y_t$. By the same arguments as in Step 2 of the proof of Theorem 2.4 of Hwang and Shin (2013), we can obtain that


where $X_t^* = (1, Y_{t,h_1}^* - \hat\mu, \ldots, Y_{t,h_p}^* - \hat\mu)'$, $\hat\beta = (\hat\zeta_0, \hat\beta_1, \ldots, \hat\beta_p)'$, and $\hat\beta^* = (\hat\zeta_0^*, \hat\beta_1^*, \ldots, \hat\beta_p^*)'$ with $\hat\zeta_0^* = \hat\beta_0^* + \sum_{j=1}^{p} \hat\beta_j^* \hat\mu$. The result in (A.5) is obtained by observing that


where R is the p × p matrix with (i, j)-component $E^*[(Y_{t,h_i}^* - \hat\mu)(Y_{t,h_j}^* - \hat\mu)]$ and $X_{t,0}^* = (Y_{t,h_1}^* - \hat\mu, \ldots, Y_{t,h_p}^* - \hat\mu)'$, and by the following convergence:


To prove the first convergence in distribution, for $S_{an}^*(\cdot)$ in Theorem 3, we note that

$$\hat a_t^* = Y_t^* - \hat\beta^{*\prime} X_t^*, \qquad \tilde a_t = Y_t^* - \hat\beta' X_t^*$$

and we observe, by (A.4) and (A.5)


It is clear that $\sigma_{an}^{*2} \to_p \tilde\sigma_a^2$ as n → ∞. Thus we obtain the first desired convergence in distribution.

To prove the second convergence in distribution for Sbn*(·), in the model (A.3), we can show that, uniformly in z,


and, letting $\xi_t = \tilde a_t^2 / \tilde\sigma_a^2 - 1$,


where $\phi = \phi^{1/2} \phi^{1/2} := \tilde\sigma_a^4 \lim_{n\to\infty} \mathrm{Var}\big(\sum_{t=1}^{n} \xi_t / \sqrt n\big)$. We can now observe that


and $\sigma_{bn}^{*2} \to_p \phi$ as n → ∞. More detailed verifications can be given by the same arguments as in the proof of Theorem 2.4 of Hwang and Shin (2015), along with their Lemmas 6.1 and 6.2, applied to the model (A.3) above.

Proof of Theorem 4

It is obvious from the results in Theorem 3.

Proof of Theorems 5–6

Let R_n be one of $Q_n^M, P_n^M, Q_n^V, P_n^V$ and let $R_n^*$ be the corresponding SB version. It suffices to show

$$R_n^* = O_p(1) \qquad \text{and} \qquad R_n \to \infty \quad \text{as } n \to \infty. \tag{A.6}$$

First, consider $R_n = Q_n^M$ with y_t = Y_t. To show the boundedness, note that, under the alternative, the $y_t^*$ do not have structural changes because of the random block selection. We follow the same arguments as in the proof of Theorem 1. The block sums $\hat U_j$ in the proof of Theorem 1 are also iid under the alternative; therefore, the same arguments as those in the proof of Theorem 1 provide a weak convergence of $Q_n^{*M}$ under the alternative, and hence $Q_n^{*M} = O_p(1)$.

We next show the second limit in (A.6). Let σ² = Var[y_t]. Under the alternative hypothesis with break point t_0, we write $y_t = \mu^{(1)} + \sigma(\delta_0 1\{t > t_0\} + u_t)$, where u_t is a sequence with mean zero and variance one and $\delta_0 = (\mu^{(2)} - \mu^{(1)})/\sigma > 0$. Let $Z_n(z) = \sum_{t=1}^{[nz]} (y_t - \bar y_n)$. We consider Z_n(z) for z ∈ [0, t_0/n] and z ∈ (t_0/n, 1], respectively. It can be shown straightforwardly that



$$\lambda_n(z) = \begin{cases} -\,\delta_0\, \dfrac{[nz](n - t_0)}{n}, & \text{if } 0 \le [nz] \le t_0, \\[6pt] -\,\delta_0\, \dfrac{t_0(n - [nz])}{n}, & \text{if } t_0 < [nz] \le n. \end{cases}$$

Noting that $Q_n^M = \sup_{0 \le z \le 1} |Z_n(z)| / (\hat\sigma_{y,n} \sqrt n)$, for the second desired limit it suffices to show that $\sup_{0 \le z \le 1} |\lambda_n(z)| / \sqrt n \to \infty$ as n → ∞.

For t_0 = t_0(n), we denote

$$\underline\tau := \liminf_{n\to\infty} \frac{t_0}{n}, \qquad \bar\tau := \limsup_{n\to\infty} \frac{t_0}{n}.$$

By assumption, $0 < \underline\tau \le \bar\tau < 1$. In the case $0 \le [nz] \le t_0$, we have


Also, in the case $t_0 < [nz] \le n$, we have



$$\sup_{0 \le z \le 1} \frac{1}{\sqrt n}\, |\lambda_n(z)| \ge \max\left\{ \sup_{0 \le z \le t_0/n} \frac{[nz]}{\sqrt n}\, \delta_0 (1 - \bar\tau),\ \sup_{t_0/n < z \le 1} \frac{n - [nz]}{\sqrt n}\, \delta_0\, \underline\tau \right\}.$$

Note that, since $1 - \bar\tau > 0$ and $\underline\tau > 0$, we have

$$\lim_{n\to\infty} \max\left\{ \sup_{0 \le z \le t_0/n} \frac{[nz]}{\sqrt n}\, \delta_0 (1 - \bar\tau),\ \sup_{t_0/n < z \le 1} \frac{n - [nz]}{\sqrt n}\, \delta_0\, \underline\tau \right\} = C \lim_{n\to\infty} \sqrt n\, \delta_0$$

for some positive C. Since δ_0 > 0 under the alternative hypothesis, the right-hand side is ∞. Thus the desired consistency result for $Q_n^{*M}$ is obtained. For the other statistics $R_n^* = Q_n^{*V}, P_n^{*M}$, or $P_n^{*V}$, similar arguments prove the consistencies; we omit the details because the discussions are almost the same, with y_t replaced by $Y_t^2$, $\hat a_t$, or $\hat b_t$, respectively.


Table 1

Parameters for data generating process (DGP)

DGP   β_1     β_2     β_3     β_4     β_5     β_6     β_7     Horizons
D1    0.370   0.222   0.133   0.080   0.048   0.029   0.017   h_j = 2^{j−1}, j = 1, …, 7
D2    0.173   0.155   0.140   0.126   0.113   0.102   0.092   h_j = 2^{j−1}, j = 1, …, 7
D3    0.372   0.343   0.224                                   h_1 = 1, h_2 = 5, h_3 = 22
D4    0.039   0.412   0.361                                   h_1 = 1, h_2 = 5, h_3 = 22

Table 2

Rejection rates (%) of the level 5% mean break tests

[Table body not recoverable from the source: rejection rates by sample size n for DGPs D1 (λ = 0.6), D2 (λ = 0.9), and the historic HAR(3) models D3 (S&P500) and D4 (US T-Bond).]

Note: Number of replications = 1,000, number of bootstrap replication = 1,000.

DGP = data generating process; HAR = heterogeneous autoregressive.

Table 3

Rejection rates (%) of the level 5% variance break tests

[Table body not recoverable from the source: rejection rates by sample size n for DGPs D1 (λ = 0.6), D2 (λ = 0.9), and the historic HAR(3) models D3 (S&P500) and D4 (US T-Bond).]



Note: Number of replications = 1,000, number of bootstrap replication = 1,000.

DGP = data generating process; HAR = heterogeneous autoregressive.

Table 4

Rejection rates (%) of the level 5% mean break tests PnM and Pn*M based on HAR(p) fittings

[Table body not recoverable from the source: rejection rates by HAR order p and sample size n for DGPs D1 (λ = 0.6), D2 (λ = 0.9), D3 (S&P500), and D4 (US T-Bond).]

Note: Number of replications = 1000, number of bootstrap replication = 1000.

HAR = heterogeneous autoregressive.

Table 5

Rejection rates (%) of the level 5% variance break tests PnV and Pn*V based on HAR(p) fittings

[Table body not recoverable from the source: rejection rates by HAR order p and sample size n for DGPs D1 (λ = 0.6), D2 (λ = 0.9), D3 (S&P500), and D4 (US T-Bond).]

Note: Number of replications = 1000, number of bootstrap replication = 1000.

HAR = heterogeneous autoregressive.

  1. Baillie, RT (1996). Long memory processes and fractional integration in econometrics. Journal of Econometrics. 73, 5-59.
  2. Billingsley, P (1999). Convergence of Probability Measures. New York: Wiley
  3. Brown, RL, Durbin, J, and Evans, JM (1975). Techniques for testing the constancy of regression relationships over time. Journal of the Royal Statistical Society, Series B (Methodological). 37, 149-192.
  4. Bühlmann, P (2002). Bootstraps for time series. Statistical Science. 17, 52-72.
  5. Corsi, F (2004). A Simple Long Memory Model of Realized Volatility. Lugano: University of Southern Switzerland
  6. Corsi, F (2009). A simple approximate long-memory model of realized volatility. Journal of Financial Econometrics. 7, 174-196.
  7. Deng, A, and Perron, P (2008). The limit distribution of the CUSUM of squares test under general mixing conditions. Econometric Theory. 24, 809-822.
  8. Gonçalves, S, and de Jong, R (2003). Consistency of the stationary bootstrap under weak moment conditions. Economics Letters. 81, 273-278.
  9. Hwang, E, and Shin, DW (2012). Strong consistency of the stationary bootstrap under ψ-weak dependence. Statistics and Probability Letters. 82, 488-495.
  10. Hwang, E, and Shin, DW (2013). A CUSUM test for a long memory heterogeneous autoregressive model. Economics Letters. 121, 379-383.
  11. Hwang, E, and Shin, DW (2014). Infinite-order, long-memory heterogeneous autoregressive models. Computational Statistics and Data Analysis. 76, 339-358.
  12. Hwang, E, and Shin, DW (2015). A CUSUMSQ test for structural breaks in error variance for a long memory heterogeneous autoregressive model. Statistics and Probability Letters. 99, 167-176.
  13. Lee, O (2014). The functional central limit theorem and structural change test for the HAR(∞) model. Economics Letters. 124, 370-373.
  14. Parker, C, Paparoditis, E, and Politis, DN (2006). Unit root testing via the stationary bootstrap. Journal of Econometrics. 133, 601-638.
  15. Ploberger, W, and Krämer, W (1986). On studentizing a test for structural change. Economics Letters. 20, 341-344.
  16. Ploberger, W, and Krämer, W (1990). The local power of the CUSUM and CUSUM of squares tests. Econometric Theory. 6, 335-347.
  17. Ploberger, W, and Krämer, W (1992). The CUSUM test with OLS residuals. Econometrica. 60, 271-285.
  18. Politis, DN, and Romano, JP (1994). The stationary bootstrap. Journal of the American Statistical Association. 89, 1303-1313.
  19. Qu, Z, and Perron, P (2007). Estimating and testing multiple structural changes in multivariate regressions. Econometrica. 75, 459-502.
  20. Schwert, GW (1989). Tests for unit roots: a Monte Carlo investigation. Journal of Business & Economic Statistics. 7, 147-159.
  21. Xu, KL (2013). Powerful tests of structural changes in volatility. Journal of Econometrics. 173, 126-142.
  22. Xu, KL (2015). Testing for structural change under non-stationary variances. Econometrics Journal. 18, 274-305.