On the second order property of elliptical multivariate regular variation
Communications for Statistical Applications and Methods 2024;31:459-466
Published online July 31, 2024
© 2024 Korean Statistical Society.

Moosup Kim1,a

aDepartment of Statistics, Keimyung University, Korea
Correspondence to: 1 Department of Statistics, Keimyung University, Daegu 42601, Korea. E-mail: moosupkim@kmu.ac.kr
This work was supported by the National Research Foundation of Korea (NRF) grant funded by the Korea government (MSIT): Grant No. RS-2023-00243752.
Received January 14, 2024; Revised February 23, 2024; Accepted March 14, 2024.
 Abstract
Multivariate regular variation is a popular framework for multivariate extreme value analysis. However, a suitable parametric model needs to be introduced for efficient estimation of its spectral measure. From this point of view, elliptical distributions have been employed to derive such models. On the other hand, the second order behavior of multivariate regular variation has to be specified to investigate the properties of the resulting estimator. This paper derives such a behavior by imposing a widely adopted second order regular variation condition on the representation of elliptical distributions. As a result, the second order term in the convergence to the spectral measure is characterized by a signed measure together with a regularly varying index. Moreover, this leads to the asymptotic bias of the estimator. For demonstration, the multivariate t-distribution is considered.
Keywords : multivariate regular variation, the second order behavior, spectral measure, elliptical distribution, asymptotic bias
1. Introduction

In diverse fields such as finance, meteorological hydro-risk management, and internet traffic operation, modeling multivariate extreme events is a prominent issue. Multivariate regular variation is a popular framework for multivariate extreme value analysis (cf. Cai et al., 2011; Weller and Cooley, 2014; Li and Hua, 2015; Einmahl et al., 2020). It describes the behavior of multivariate extremes in terms of their radii and directions (cf. Resnick, 2008): the directional behavior is specified by a probability measure on a measurable space of unit vectors, which is called the spectral measure. Meanwhile, the radius obeys univariate regular variation, and furthermore the radius and direction are asymptotically independent as the former gets large. However, for efficient estimation, it is crucial to introduce a suitable model for the spectral measure, which itself is not described by a finite-dimensional parameter. From this point of view, elliptical distributions have received much attention since they not only constitute a subclass of multivariate regular variation but also provide a tractable parametric model for the spectral measure and tail dependence (cf. Hult and Lindskog, 2002; Klüppelberg et al., 2007; Joe and Li, 2019).

Kim (2021) derived a density function of the spectral measure of elliptical distributions and then proposed a maximum likelihood estimator based on it. However, to investigate the properties of the resulting estimator, we need to specify how accurately the directional behavior at extreme levels in finite samples is described by the spectral measure. This paper mainly focuses on the second order property of the convergence to the spectral measure for the class of elliptical distributions: let {ft(x) : t > 0} and f(x) be real valued functions defined on a space X such that ft(x) → f(x) as t → 0 for each x ∈ X. In this case, the second order property of the convergence refers to the behavior characterized by γ > 0 and a real valued function g(x) such that

$$f_t(x) - f(x) = t^{\gamma} g(x) + t^{\gamma} o(1), \qquad \text{as } t \to 0.$$

By imposing a widely adopted second order (univariate) regular variation condition on the representation of elliptical distributions (cf. Hall, 1982; Hult and Lindskog, 2002), this paper derives the second order behaviors with respect to radii and directions of the multivariate extremes.

The remainder of this paper is organized as follows: Section 2 presents some preliminary definitions and theorem, and briefly reviews the second order property of univariate cases; Section 3 deals with the second order property of elliptical multivariate regular variation.

2. Preliminary

2.1. Elliptical multivariate regular variation

This section provides the preliminary definitions and theorem. First, it is necessary to clarify the topological space on which the spectral measure is defined. Let

$$\mathbb{S}^{d-1} := \left\{ s = (s_1, \ldots, s_d)' \in \mathbb{R}^d : s_1^2 + \cdots + s_d^2 = 1 \right\}$$

denote the unit sphere in d-dimensional space for a positive integer d ≥ 2. $\mathbb{S}^{d-1}$ is equipped with the metric that takes $|s_1 - s_2|$ as the distance between $s_1$ and $s_2$ in $\mathbb{S}^{d-1}$, where $|\cdot|$ indicates the Euclidean norm. The induced topology generates a Borel σ-field on $\mathbb{S}^{d-1}$, which is denoted by $\mathcal{B}(\mathbb{S}^{d-1})$ hereafter.

Let X be a d-dimensional random vector, and let R = |X| and S = X/R represent the radius and direction of X. Then, R and S are positive and $\mathbb{S}^{d-1}$-valued random variables, respectively. X is said to have a multivariate regularly varying tail if there exist α > 0 and a Borel probability measure Λ on $\mathbb{S}^{d-1}$ such that for x > 0 and $A \in \mathcal{B}(\mathbb{S}^{d-1})$ with Λ(∂A) = 0,

$$\lim_{t \to 0} \frac{P(R > x/t,\; S \in A)}{P(R > 1/t)} = x^{-\alpha}\,\Lambda(A) \qquad (2.1)$$

(cf. Resnick, 2008). Taking $A = \mathbb{S}^{d-1}$, the above equation reduces to

$$\lim_{t \to 0} \frac{P(R > x/t)}{P(R > 1/t)} = x^{-\alpha}, \qquad \text{for } x > 0, \qquad (2.2)$$

i.e., x ↦ P(R > x) is (univariate) regularly varying with exponent −α as x → ∞ (Bingham et al., 1987). This means that the conditional distribution of t|X| given t|X| > 1 converges to a Pareto distribution as t → 0. Meanwhile, setting x = 1 in (2.1) yields

$$\lim_{t \to 0} P(S \in A \mid R > 1/t) = \Lambda(A), \qquad (2.3)$$

viz., the conditional distribution of S given t|X| > 1 approximates Λ as t → 0. Here, α is called the tail exponent or tail index, while Λ is called the spectral measure. Furthermore, it follows from (2.1)–(2.3) that R and S are asymptotically conditionally independent given t|X| > 1 as t → 0. These features give rise to a simulation method for multivariate extremes (Kim, 2022).
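
As an informal illustration of the radius/direction decomposition and of the conditional limit (2.3), the following minimal sketch estimates Λt(A) empirically from simulated data. The bivariate t sample, the direction set A, and the threshold choice are illustrative assumptions, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative heavy-tailed sample: bivariate t-distributed vectors
# (any multivariate regularly varying sample could be used instead).
n, d, nu = 100_000, 2, 3.0
z = rng.standard_normal((n, d))
u = rng.chisquare(nu, size=n)
x = z / np.sqrt(u / nu)[:, None]

# Radius/direction decomposition: R = |X|, S = X / R.
r = np.linalg.norm(x, axis=1)
s = x / r[:, None]

# Empirical analogue of (2.3): conditional distribution of S given R large,
# with A = {s : s_1 > 1/2} as an illustrative direction set.
threshold = np.quantile(r, 0.99)            # plays the role of 1/t
extreme = r > threshold
lambda_t_A = np.mean(s[extreme, 0] > 0.5)   # estimate of Λ_t(A)
print(f"empirical Λ_t(A) from {extreme.sum()} exceedances: {lambda_t_A:.3f}")
```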

This study concentrates on the tail behavior characterized by a non-singular heavy-tailed elliptical distribution. For a comprehensive overview of elliptical distributions, refer to Hult and Lindskog (2002). Let Z = (Z1, . . . , Zd)′, where Z1, . . . , Zd are independent standard normal random variables. Then W := Z/|Z| is uniformly distributed on $\mathbb{S}^{d-1}$. In particular, if

$$X = V\,\Sigma^{1/2} W, \qquad (2.4)$$

where Σ is a symmetric positive definite matrix and V is a positive random variable, independent of W, with a regularly varying tail of tail exponent α, then, according to Theorem 3.1 of Hult and Lindskog (2002), X follows an elliptical distribution. Moreover, it exhibits a multivariate regularly varying tail, and its spectral measure can be readily shown to be

$$\Lambda(A) = \frac{E\left\{|\Sigma^{1/2}W|^{\alpha}\, I\!\left(\Sigma^{1/2}W/|\Sigma^{1/2}W| \in A\right)\right\}}{E\left\{|\Sigma^{1/2}W|^{\alpha}\right\}} \qquad \text{for } A \in \mathcal{B}(\mathbb{S}^{d-1}). \qquad (2.5)$$

Define μ(A) := P(Z/|Z| ∈ A) for $A \in \mathcal{B}(\mathbb{S}^{d-1})$. Then, Kim (2021) shows that Λ in (2.5) has a density function with respect to μ, whose form is presented in the following theorem:

Theorem 1

Let

$$\lambda(s) = \frac{\left(s'\Sigma^{-1}s\right)^{-\frac{\alpha+d}{2}}}{C(\Sigma,\alpha)}, \qquad s \in \mathbb{S}^{d-1}, \qquad \text{with } C(\Sigma,\alpha) = \int_{s \in \mathbb{S}^{d-1}} \left(s'\Sigma^{-1}s\right)^{-\frac{\alpha+d}{2}}\, d\mu(s).$$

Then, we have

$$\Lambda(A) = \int_{s \in A} \lambda(s)\, d\mu(s), \qquad \text{for } A \in \mathcal{B}(\mathbb{S}^{d-1}).$$

This theorem characterizes the limit Λ in (2.3) and can be readily applied to estimation (cf. Kim, 2021). However, to investigate the asymptotic property of the resulting estimator, the second order behaviors of (2.2)–(2.3) have to be specified. Section 3 will deal with this.
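
As a minimal numerical companion to (2.5) and Theorem 1, the sketch below approximates C(Σ, α) by Monte Carlo over W = Z/|Z|, evaluates the spectral density λ(s), and cross-checks Λ(A) computed from (2.5) against the integral representation of Theorem 1. The particular Σ, α, and test set A are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
d, alpha = 2, 3.0                                   # illustrative dimension and tail exponent
Sigma = np.array([[1.0, 0.5], [0.5, 1.0]])
Sigma_inv = np.linalg.inv(Sigma)
B = np.linalg.cholesky(Sigma)                       # a valid Σ^{1/2}: any B with BB' = Σ works here

# W = Z/|Z| is uniform on the unit sphere; averages over W approximate integrals w.r.t. μ.
m = 1_000_000
z = rng.standard_normal((m, d))
w = z / np.linalg.norm(z, axis=1, keepdims=True)

# C(Σ, α) = ∫ (s'Σ^{-1}s)^{-(α+d)/2} dμ(s), approximated by a sample average.
quad = np.einsum('ij,jk,ik->i', w, Sigma_inv, w)    # s'Σ^{-1}s for each direction
C = np.mean(quad ** (-(alpha + d) / 2))

def spectral_density(s):
    """λ(s) of Theorem 1 for a unit vector s."""
    return float(s @ Sigma_inv @ s) ** (-(alpha + d) / 2) / C

# Λ(A) for A = {s : s_1 > 1/2}: via (2.5) and via Theorem 1; the two should roughly agree.
bw = w @ B.T                                        # Σ^{1/2}W, row-wise
norm_bw = np.linalg.norm(bw, axis=1)
in_A_25 = (bw[:, 0] / norm_bw) > 0.5
Lambda_25 = np.mean(norm_bw**alpha * in_A_25) / np.mean(norm_bw**alpha)
in_A_thm1 = w[:, 0] > 0.5
Lambda_thm1 = np.mean(quad ** (-(alpha + d) / 2) * in_A_thm1) / C
print(f"Λ(A) via (2.5): {Lambda_25:.4f}   via Theorem 1: {Lambda_thm1:.4f}")
```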

2.2. The second order behavior of univariate regular variation

Before dealing with the second order behavior of multivariate regular variation, this subsection briefly reviews the univariate case (2.2) in connection with the estimation of α. Let R1, . . . , Rn be a random sample of R. For estimating α,

$$H_n = \frac{1}{k}\sum_{i=1}^{n} \left(\log R_i - \log R_{(k+1)}\right)_{+}$$

is popularly employed, where k < n is a positive integer and $R_{(k+1)}$ indicates the (k + 1)th largest value among R1, . . . , Rn (cf. Hill, 1975; Hsing, 1991). Assuming that k := kn satisfies k → ∞ and k/n → 0 as n → ∞, Hn converges in probability to 1/α as n → ∞. For establishing further asymptotic properties, we need to impose a more stringent condition on (2.2), which is called second order regular variation: for instance, letting L(x) := xαP(R > x), x > 0, we assume that there exist γ > 0, c1 > 0, and c2 ∈ ℝ such that

$$L(x) = c_1\left\{1 + c_2 x^{-\gamma} + x^{-\gamma} o(1)\right\} \qquad \text{as } x \to \infty \qquad (2.8)$$

(cf. Hall, 1982). If $\sqrt{k}\,(k/n)^{\gamma/\alpha} \to M$ as n → ∞ for some M ≥ 0, we then have

$$\sqrt{k}\left(H_n - \frac{1}{\alpha}\right) \Rightarrow \frac{Z}{\alpha} + \frac{\gamma\, c_1^{-\gamma/\alpha} c_2\, M}{\alpha(\alpha+\gamma)} \qquad \text{as } n \to \infty,$$

where Z is a standard normal variable and ⇒ denotes ‘convergence in distribution’ (cf. Hsing, 1991). Condition (2.8) is a special and prominent case of

$$\frac{L(x/t)}{L(1/t)} - 1 = t^{\gamma}\,\gamma c_2\,\frac{x^{-\gamma} - 1}{\gamma} + t^{\gamma} o(1) \qquad \text{as } t \to 0, \qquad \text{for } x > 0,$$

which is the second order property of the convergence $\lim_{t\to 0} L(x/t)/L(1/t) = 1$ for each x > 0.
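
The following sketch computes the statistic Hn above for a simulated sample; an exact Pareto tail is used, for which the second order term in (2.8) vanishes and Hn is nearly unbiased for 1/α. The sample distribution and the choice of k are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)

# Illustrative sample with a regularly varying tail: exact Pareto(α), P(R > x) = x^{-α} for x ≥ 1.
# A model with c_2 ≠ 0 in (2.8) would introduce the asymptotic bias described in the text.
alpha, n, k = 2.0, 50_000, 500
r = rng.pareto(alpha, size=n) + 1.0

r_sorted = np.sort(r)[::-1]                                   # descending order statistics
hill = np.mean(np.log(r_sorted[:k]) - np.log(r_sorted[k]))    # H_n based on the k largest values

print(f"H_n = {hill:.4f}  (target 1/α = {1/alpha:.4f})")
```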

3. Main result

For investigating the asymptotic property of the estimator of Σ in (2.4), we need to explore the second order behavior of (2.3) (see Remark 1). Let $X = V\Sigma^{1/2}W$ be the random vector in (2.4). Moreover, it is assumed that there exist α > 0, γ > 0, a1 > 0, and a2 ∈ ℝ such that

$$\bar{F}_V(x) := P(V > x) = a_1 x^{-\alpha}\left\{1 + a_2 x^{-\gamma} + o(x^{-\gamma})\right\} \qquad \text{as } x \to \infty \qquad (3.1)$$

(cf. (2.8)). Let

$$\Lambda_t(\,\cdot\,) := P(S \in \cdot \mid R > 1/t), \qquad \text{for } t > 0. \qquad (3.2)$$

Under (3.1), we are going to derive the behavior of Λt( · ) – Λ( · ) as t → 0. Let

$$\rho = \frac{E\{|\Sigma^{1/2}W|^{\alpha+\gamma}\}}{E\{|\Sigma^{1/2}W|^{\alpha}\}}, \qquad c_1 = a_1 E\{|\Sigma^{1/2}W|^{\alpha}\}, \qquad c_2 = a_2 \rho,$$
$$\Lambda^{(\gamma)}(A) := \frac{E\{|\Sigma^{1/2}W|^{\alpha+\gamma} I_A\}}{E\{|\Sigma^{1/2}W|^{\alpha+\gamma}\}} \qquad \text{with } I_A = I\!\left(\Sigma^{1/2}W/|\Sigma^{1/2}W| \in A\right), \qquad \text{for } A \in \mathcal{B}(\mathbb{S}^{d-1}),$$

and

$$\Gamma(A) := a_2 \rho\left\{\Lambda^{(\gamma)}(A) - \Lambda(A)\right\} \qquad \text{for } A \in \mathcal{B}(\mathbb{S}^{d-1}).$$
Theorem 2

Assume that Σ is a symmetric positive definite matrix and (3.1) holds. Then,

  (i) $P(|X| > 1/t) = c_1 t^{\alpha}\left(1 + c_2 t^{\gamma} + o(t^{\gamma})\right)$ as t → 0, i.e., (2.8) holds;

  (ii) for each $A \in \mathcal{B}(\mathbb{S}^{d-1})$, $\Lambda_t(A) - \Lambda(A) = t^{\gamma}\Gamma(A) + t^{\gamma}o(1)$ as t → 0, where the o(1)-term is uniformly negligible in $A \in \mathcal{B}(\mathbb{S}^{d-1})$.

Proof

We first verify (i). Since V is independent of W and $|\Sigma^{1/2}W| > 0$ with probability 1 due to the positive definiteness of Σ, (2.4) implies that for $A \in \mathcal{B}(\mathbb{S}^{d-1})$,

$$\begin{aligned}
P(|X| > 1/t,\; X/|X| \in A) &= P\!\left(|\Sigma^{1/2}W|V > 1/t,\; \Sigma^{1/2}W/|\Sigma^{1/2}W| \in A\right)\\
&= E\!\left\{I\!\left(|\Sigma^{1/2}W|V > 1/t\right) I_A\right\}\\
&= E\!\left\{E\!\left(I\!\left(|\Sigma^{1/2}W|V > 1/t\right) I_A \,\middle|\, W\right)\right\}\\
&= E\!\left\{P\!\left(V > \frac{1}{t|\Sigma^{1/2}W|} \,\middle|\, W\right) I_A\right\}\\
&= E\!\left\{\bar{F}_V\!\left(\frac{1}{t|\Sigma^{1/2}W|}\right) I_A\right\}.
\end{aligned}$$

Moreover, since there exists c > 1 such that almost surely 1/c < |Σ1/2W| < c due to positive definiteness of Σ, (3.1) implies that

$$\frac{\bar{F}_V\!\left(1/(t|\Sigma^{1/2}W|)\right)}{\bar{F}_V(1/t)} = |\Sigma^{1/2}W|^{\alpha}\,\frac{1 + a_2 t^{\gamma}|\Sigma^{1/2}W|^{\gamma} + t^{\gamma}o(1)}{1 + a_2 t^{\gamma} + t^{\gamma}o(1)} = |\Sigma^{1/2}W|^{\alpha}\left[1 + a_2 t^{\gamma}\left(|\Sigma^{1/2}W|^{\gamma} - 1\right) + t^{\gamma}o(1)\right] \qquad \text{as } t \to 0,$$

where the o(1)-terms are all bounded by a constant for sufficiently small t > 0. Thus, applying the bounded convergence theorem, we have

$$\begin{aligned}
\frac{P(|X| > 1/t,\; X/|X| \in A)}{P(V > 1/t)} &= \frac{E\!\left\{\bar{F}_V\!\left(1/(t|\Sigma^{1/2}W|)\right) I_A\right\}}{\bar{F}_V(1/t)} = E\!\left\{\frac{\bar{F}_V\!\left(1/(t|\Sigma^{1/2}W|)\right)}{\bar{F}_V(1/t)}\, I_A\right\}\\
&= E\!\left\{|\Sigma^{1/2}W|^{\alpha}\left[1 + a_2 t^{\gamma}\left(|\Sigma^{1/2}W|^{\gamma} - 1\right) + t^{\gamma}o(1)\right] I_A\right\}\\
&= E\{|\Sigma^{1/2}W|^{\alpha} I_A\} + t^{\gamma} E\!\left\{a_2\left(|\Sigma^{1/2}W|^{\alpha+\gamma} - |\Sigma^{1/2}W|^{\alpha}\right) I_A\right\} + t^{\gamma}o(1)\\
&= E\{|\Sigma^{1/2}W|^{\alpha} I_A\} + t^{\gamma} h(A) + t^{\gamma}o(1) \qquad \text{as } t \to 0,
\end{aligned}$$

where $h(A) = a_2 E\{(|\Sigma^{1/2}W|^{\alpha+\gamma} - |\Sigma^{1/2}W|^{\alpha})I_A\}$ is a signed Borel measure on $\mathbb{S}^{d-1}$ and the o(1)-terms are uniformly negligible for $A \in \mathcal{B}(\mathbb{S}^{d-1})$. Taking $A = \mathbb{S}^{d-1}$, we have that as t → 0,

$$\begin{aligned}
P(|X| > 1/t) &= P(V > 1/t)\left[E\{|\Sigma^{1/2}W|^{\alpha}\} + t^{\gamma}h(\mathbb{S}^{d-1}) + t^{\gamma}o(1)\right]\\
&= a_1 E\{|\Sigma^{1/2}W|^{\alpha}\}\, t^{\alpha}\left(1 + a_2 t^{\gamma} + t^{\gamma}o(1)\right)\left(1 + t^{\gamma}a_2(\rho - 1) + t^{\gamma}o(1)\right)\\
&= a_1 E\{|\Sigma^{1/2}W|^{\alpha}\}\, t^{\alpha}\left(1 + t^{\gamma}a_2\rho + t^{\gamma}o(1)\right)\\
&= c_1 t^{\alpha}\left(1 + c_2 t^{\gamma} + t^{\gamma}o(1)\right),
\end{aligned}$$

which asserts (i).

Next, we verify (ii). From (3.2), it follows that as t → 0,

$$\begin{aligned}
\Lambda_t(A) &= P\!\left(X/|X| \in A \,\middle|\, |X| > 1/t\right) = \frac{P\!\left(|\Sigma^{1/2}W|V > 1/t,\; \Sigma^{1/2}W/|\Sigma^{1/2}W| \in A\right)}{P\!\left(|\Sigma^{1/2}W|V > 1/t\right)}\\
&= \frac{P\!\left(|\Sigma^{1/2}W|V > 1/t,\; \Sigma^{1/2}W/|\Sigma^{1/2}W| \in A\right)/P(V > 1/t)}{P\!\left(|\Sigma^{1/2}W|V > 1/t\right)/P(V > 1/t)}\\
&= \frac{E\{|\Sigma^{1/2}W|^{\alpha} I_A\} + t^{\gamma}h(A) + t^{\gamma}o(1)}{E\{|\Sigma^{1/2}W|^{\alpha}\} + t^{\gamma}h(\mathbb{S}^{d-1}) + t^{\gamma}o(1)},
\end{aligned}$$

where the o(1)-terms are uniformly negligible in $A \in \mathcal{B}(\mathbb{S}^{d-1})$. Moreover, by applying the mean value theorem we have that as t → 0,

$$\frac{1}{E\{|\Sigma^{1/2}W|^{\alpha}\} + t^{\gamma}h(\mathbb{S}^{d-1}) + t^{\gamma}o(1)} = \frac{1}{E\{|\Sigma^{1/2}W|^{\alpha}\}} - t^{\gamma}\frac{h(\mathbb{S}^{d-1})}{\left(E\{|\Sigma^{1/2}W|^{\alpha}\}\right)^2} + t^{\gamma}o(1).$$

Thus, Λt(A) is equal to

$$\frac{E\{|\Sigma^{1/2}W|^{\alpha} I_A\}}{E\{|\Sigma^{1/2}W|^{\alpha}\}} + t^{\gamma}\left[\frac{h(A)\,E\{|\Sigma^{1/2}W|^{\alpha}\} - h(\mathbb{S}^{d-1})\,E\{|\Sigma^{1/2}W|^{\alpha} I_A\}}{\left(E\{|\Sigma^{1/2}W|^{\alpha}\}\right)^2}\right] + t^{\gamma}o(1), \qquad (3.3)$$

as t → 0, where the o(1)-term is uniformly negligible in $A \in \mathcal{B}(\mathbb{S}^{d-1})$. Moreover, we have that for every $A \in \mathcal{B}(\mathbb{S}^{d-1})$, the leading term in (3.3) is Λ(A) by (2.5) and

$$\begin{aligned}
\frac{h(A)\,E\{|\Sigma^{1/2}W|^{\alpha}\} - h(\mathbb{S}^{d-1})\,E\{|\Sigma^{1/2}W|^{\alpha}I_A\}}{\left(E\{|\Sigma^{1/2}W|^{\alpha}\}\right)^2} &= a_2\left\{\rho\Lambda^{(\gamma)}(A) - \Lambda(A) - (\rho - 1)\Lambda(A)\right\}\\
&= a_2\rho\left\{\Lambda^{(\gamma)}(A) - \Lambda(A)\right\} = \Gamma(A).
\end{aligned}$$

This proves (ii).

Theorem 2 is a strong result. Actually, it suffices for our purpose to know the second order behavior of Λt(A) – Λ(A) as t → 0 while A runs over a subclass of $\mathcal{B}(\mathbb{S}^{d-1})$ rather than the entire σ-field. For a bounded measurable function f(s) defined on $\mathbb{S}^{d-1}$, let

$$A_f(y) = \left\{s \in \mathbb{S}^{d-1} : f(s) > y\right\} \qquad \text{and} \qquad G_f(y) = \Gamma(A_f(y)) \qquad \text{for } y \in \mathbb{R}.$$

The following corollary is readily established.

Corollary 1

Under the same conditions as in Theorem 2, we have the following:

  (i) For a bounded measurable function f(s) defined on $\mathbb{S}^{d-1}$,

    $$\Lambda_t(A_f(y)) - \Lambda(A_f(y)) = t^{\gamma}G_f(y) + t^{\gamma}o(1) \qquad \text{as } t \to 0,$$

    where the o(1)-term is uniformly negligible in y ∈ ℝ;

  (ii) Moreover, if $\int_{s\in\mathbb{S}^{d-1}} f(s)\,d\Lambda(s) = 0$, then $\int_{-\infty}^{\infty} G_f(y)\,dy = a_2\rho\int_{s\in\mathbb{S}^{d-1}} f(s)\,d\Lambda^{(\gamma)}(s)$.

Proof

Since (i) is readily established by (ii) of Theorem 2, it suffices to verify (ii). Note that Gf (y) is integrable since it is bounded and Gf (y) vanishes outside a compact interval due to the boundedness of f (s). Thus, we have

$$\int_{-\infty}^{\infty} G_f(y)\,dy = a_2\rho\int_{-\infty}^{\infty}\left\{\Lambda^{(\gamma)}(A_f(y)) - \Lambda(A_f(y))\right\} dy.$$

Moreover, letting $A_f^c(y) = \mathbb{S}^{d-1}\setminus A_f(y) = \{s \in \mathbb{S}^{d-1} : f(s) \le y\}$, we have

$$\begin{aligned}
\int_{-\infty}^{\infty}\left\{\Lambda^{(\gamma)}(A_f(y)) - \Lambda(A_f(y))\right\} dy &= \int_{0}^{\infty}\left\{\Lambda^{(\gamma)}(A_f(y)) - \Lambda(A_f(y))\right\} dy + \int_{-\infty}^{0}\left\{\Lambda^{(\gamma)}(A_f(y)) - \Lambda(A_f(y))\right\} dy\\
&= \int_{0}^{\infty}\left\{\Lambda^{(\gamma)}(A_f(y)) - \Lambda(A_f(y))\right\} dy + \int_{-\infty}^{0}\left\{\Lambda(A_f^c(y)) - \Lambda^{(\gamma)}(A_f^c(y))\right\} dy\\
&= \int_{0}^{\infty}\Lambda^{(\gamma)}(A_f(y))\,dy - \int_{-\infty}^{0}\Lambda^{(\gamma)}(A_f^c(y))\,dy - \left[\int_{0}^{\infty}\Lambda(A_f(y))\,dy - \int_{-\infty}^{0}\Lambda(A_f^c(y))\,dy\right]\\
&= \int_{s\in\mathbb{S}^{d-1}} f(s)\,d\Lambda^{(\gamma)}(s),
\end{aligned}$$

where the last equality holds due to the fact that $\int_0^{\infty} \nu(A_f(y))\,dy - \int_{-\infty}^{0} \nu(A_f^c(y))\,dy = \int_{s\in\mathbb{S}^{d-1}} f(s)\,d\nu(s)$ for any probability measure ν on $\mathcal{B}(\mathbb{S}^{d-1})$, together with the assumption $\int_{s\in\mathbb{S}^{d-1}} f(s)\,d\Lambda(s) = 0$. This completes the proof.

Remark 1

The typical choice of f in Corollary 1 is the partial derivatives of log λ(s) with respect to the parameters representing Σ and α. Investigating the asymptotic property of the Σ-estimator, we encounter the integral $\int_{s\in\mathbb{S}^{d-1}} f(s)\,d\Lambda_t(s)$ for t > 0, which represents the expected value of the score function. Furthermore,

$$\lim_{t\to 0} t^{-\gamma}\left(\int_{s\in\mathbb{S}^{d-1}} f(s)\,d\Lambda_t(s) - \int_{s\in\mathbb{S}^{d-1}} f(s)\,d\Lambda(s)\right)$$

is related to the bias. According to Corollary 1, the limit is $a_2\rho\int_{s\in\mathbb{S}^{d-1}} f(s)\,d\Lambda^{(\gamma)}(s)$ under (2.4) and (3.1).
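
Since ρ and Λ^(γ) are expectations with respect to the uniform direction W, the limiting bias constant of Remark 1 can be approximated by Monte Carlo. The sketch below does this for an illustrative choice of Σ, α, γ, a2 and a bounded direction functional f (none of them taken from the paper), centered so that ∫ f dΛ = 0 as required in Corollary 1.

```python
import numpy as np

rng = np.random.default_rng(3)

# Illustrative inputs (assumptions for demonstration only).
d, alpha, gamma, a2 = 2, 3.0, 2.0, -1.0
Sigma = np.array([[1.0, 0.5], [0.5, 1.0]])
B = np.linalg.cholesky(Sigma)                          # a valid Σ^{1/2}

m = 1_000_000
z = rng.standard_normal((m, d))
w = z / np.linalg.norm(z, axis=1, keepdims=True)       # W uniform on the sphere
bw = w @ B.T                                           # Σ^{1/2}W
norm_bw = np.linalg.norm(bw, axis=1)
s = bw / norm_bw[:, None]                              # direction Σ^{1/2}W / |Σ^{1/2}W|

f_raw = s[:, 0] * s[:, 1]                              # illustrative bounded functional of the direction
rho = np.mean(norm_bw**(alpha + gamma)) / np.mean(norm_bw**alpha)

# Center f so that ∫ f dΛ = 0 (the condition used in Corollary 1(ii)).
f_mean_Lambda = np.mean(norm_bw**alpha * f_raw) / np.mean(norm_bw**alpha)
f = f_raw - f_mean_Lambda

# a2 ρ ∫ f dΛ^{(γ)} simplifies to a2 E{|Σ^{1/2}W|^{α+γ} f} / E{|Σ^{1/2}W|^{α}}.
bias_constant = a2 * np.mean(norm_bw**(alpha + gamma) * f) / np.mean(norm_bw**alpha)
print(f"rho ≈ {rho:.4f},  asymptotic bias constant ≈ {bias_constant:.4f}")
```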

Remark 2

An example related to Theorem 2 is the multivariate t-distribution: let Z = (Z1, . . . , Zd)′, where Z1, . . . , Zd are independent standard normal random variables, let Σ be a positive definite and symmetric d × d matrix, and let U follow a χ²-distribution with ν > 0 degrees of freedom. If Z and U are independent, then Z/|Z| is independent of $|Z|/\sqrt{U/\nu}$ and thus

$$\frac{\Sigma^{1/2}Z}{\sqrt{U/\nu}} = \frac{|Z|}{\sqrt{U/\nu}}\;\Sigma^{1/2}\frac{Z}{|Z|},$$

satisfies (2.4) with $V = |Z|/\sqrt{U/\nu}$. Moreover, V² follows an F-distribution with 1 and ν degrees of freedom, whose probability density function is

$$x \mapsto \frac{x^{-1/2}(1+x/\nu)^{-(1+\nu)/2}}{\nu^{1/2}\,\mathrm{Beta}(1/2,\nu/2)} \quad (x>0) \;=\; \frac{1}{\nu^{(2+\nu)/2}\,\mathrm{Beta}(1/2,\nu/2)}\, x^{-(2+\nu)/2}\left\{1 - \frac{\nu(1+\nu)}{2}\,\frac{1}{x} + \frac{o(1)}{x}\right\} \qquad \text{as } x \to \infty,$$

where Beta indicates the beta function. By integrating the above function we can check that (3.1) holds with

$$\alpha = \nu, \qquad a_1 = \frac{2\,\nu^{-(4+\nu)/2}}{\mathrm{Beta}(1/2,\nu/2)}, \qquad a_2 = -\frac{\nu^2(1+\nu)}{2(2+\nu)}, \qquad \gamma = 2.$$
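
The second order statement of Theorem 2(ii) can be probed numerically for this example: estimate Λt(A) from simulated multivariate t vectors at increasingly extreme thresholds and compare it with Λ(A) obtained from (2.5) by Monte Carlo; with γ = 2 the discrepancy should shrink roughly like t². The set A, the values of ν and Σ, and the sample sizes below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(4)

d, nu = 2, 3.0                                 # dimension and degrees of freedom (α = ν)
Sigma = np.array([[1.0, 0.5], [0.5, 1.0]])
B = np.linalg.cholesky(Sigma)                  # a valid Σ^{1/2}

# Λ(A) via (2.5) by Monte Carlo over W, with A = {s : s_1 > 1/2} (illustrative).
m = 1_000_000
w = rng.standard_normal((m, d))
w /= np.linalg.norm(w, axis=1, keepdims=True)
bw = w @ B.T
norm_bw = np.linalg.norm(bw, axis=1)
in_A = (bw[:, 0] / norm_bw) > 0.5
Lambda_A = np.mean(norm_bw**nu * in_A) / np.mean(norm_bw**nu)

# Λ_t(A) from multivariate t samples at decreasing t (increasing thresholds 1/t).
n = 2_000_000
z = rng.standard_normal((n, d))
u = rng.chisquare(nu, size=n)
x = (z / np.sqrt(u / nu)[:, None]) @ B.T       # X = Σ^{1/2} Z / sqrt(U/ν)
r = np.linalg.norm(x, axis=1)
s1 = x[:, 0] / r
for q in (0.95, 0.99, 0.999):
    thr = np.quantile(r, q)                    # plays the role of 1/t
    exc = r > thr
    Lambda_t_A = np.mean(s1[exc] > 0.5)
    print(f"1/t = {thr:8.2f}   Λ_t(A) - Λ(A) = {Lambda_t_A - Lambda_A:+.4f}")
```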
Acknowledgement

This work was supported by the National Research Foundation of Korea (NRF) grant funded by the Korea government (MSIT): Grant No. RS-2023-00243752.

References
  1. Bingham NH, Goldie CM, and Teugels JL (1987). Regular Variation, vol. 27 of Encyclopedia of Mathematics and its Applications, Cambridge University Press, Cambridge.
  2. Cai J-J, Einmahl JH, and De Haan L (2011). Estimation of extreme risk regions under multivariate regular variation. The Annals of Statistics, 39, 1803-1826.
  3. Einmahl JH, Yang F, and Zhou C (2020). Testing the multivariate regular variation model. Journal of Business & Economic Statistics, 39, 907-919.
  4. Hall P (1982). On some simple estimates of an exponent of regular variation. Journal of the Royal Statistical Society: Series B (Methodological), 44, 37-42.
  5. Hill BM (1975). A simple general approach to inference about the tail of a distribution. The Annals of Statistics, 3, 1163-1174.
  6. Hsing T (1991). On tail index estimation using dependent data. The Annals of Statistics, 19, 1547-1569.
  7. Hult H and Lindskog F (2002). Multivariate extremes, aggregation and dependence in elliptical distributions. Advances in Applied Probability, 34, 587-608.
  8. Joe H and Li H (2019). Tail densities of skew-elliptical distributions. Journal of Multivariate Analysis, 171, 421-435.
  9. Kim M (2021). Maximum likelihood estimation of spectral measure of heavy tailed elliptical distribution. In Proceedings of the Autumn Conference of the Korean Data & Information Science Society, Seoul, 119.
  10. Kim M (2022). Simulation of elliptical multivariate regular variation. The Korean Data & Information Science Society, 33, 347-357.
  11. Klüppelberg C, Kuhn G, and Peng L (2007). Estimating the tail dependence function of an elliptical distribution. Bernoulli, 13, 229-251.
  12. Li H and Hua L (2015). Higher order tail densities of copulas and hidden regular variation. Journal of Multivariate Analysis, 138, 143-155.
  13. Resnick SI (2008). Extreme Values, Regular Variation, and Point Processes, Springer Science & Business Media, Berlin.
  14. Weller GB and Cooley D (2014). A sum characterization of hidden regular variation with likelihood inference via expectation-maximization. Biometrika, 101, 17-36.