
Estimation of the exponentiated half-logistic distribution based on multiply Type-I hybrid censoring
Young Eun Jeona, Suk-Bok Kang1,a

aDepartment of Statistics, Yeungnam University, Korea
Correspondence to: 1Department of Statistics, Yeungnam University, 280 Daehak-ro, Gyeongsan, Gyeongbuk 38541, Korea. E-mail: sbkang@yu.ac.kr
Received June 27, 2019; Revised October 16, 2019; Accepted November 11, 2019.
Abstract
In this paper, we derive some estimators of the scale parameter of the exponentiated half-logistic distribution based on the multiply Type-I hybrid censoring scheme. We assume that the shape parameter λ is known. We obtain the maximum likelihood estimator of the scale parameter σ. Because the likelihood equation cannot be solved explicitly, we also estimate the scale parameter by approximating the given likelihood function using two different Taylor series expansions. We obtain Bayes estimators using a prior distribution, under the squared error loss function and the general entropy loss function (shape parameter q = −0.5, 1.0). We also derive interval estimates: the asymptotic confidence interval, the credible interval, and the highest posterior density interval. Finally, we compare the proposed estimators in the sense of the mean squared error through Monte Carlo simulation. The average lengths of 95% intervals and the corresponding coverage probabilities are also obtained.
Keywords : approximate maximum likelihood estimator, Bayes estimator, confidence interval, credible interval, exponentiated half-logistic distribution, maximum likelihood estimator, multiply Type-I hybrid censoring scheme
1. Introduction

Consider the exponentiated half-logistic distribution with probability density function (pdf)

$f(x) = \frac{\lambda}{\sigma}\left[\frac{1-\exp\left(-\frac{x}{\sigma}\right)}{1+\exp\left(-\frac{x}{\sigma}\right)}\right]^{\lambda-1}\frac{2\exp\left(-\frac{x}{\sigma}\right)}{\left[1+\exp\left(-\frac{x}{\sigma}\right)\right]^{2}}, \quad x > 0,\ \sigma, \lambda > 0,$

and cumulative distribution function (cdf)

$F(x) = \left[\frac{1-\exp\left(-\frac{x}{\sigma}\right)}{1+\exp\left(-\frac{x}{\sigma}\right)}\right]^{\lambda}, \quad x > 0,\ \sigma, \lambda > 0,$

where σ is the scale parameter and λ is the shape parameter.
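As a quick numerical illustration (ours, not part of the original derivation), the cdf above inverts in closed form, so samples can be drawn by the inverse-cdf method; the function names below are our own:

```python
import numpy as np

def ehl_cdf(x, sigma, lam):
    """cdf of the exponentiated half-logistic distribution."""
    e = np.exp(-x / sigma)
    return ((1 - e) / (1 + e)) ** lam

def ehl_ppf(u, sigma, lam):
    """Quantile function: solve F(x) = u for x in closed form."""
    t = u ** (1.0 / lam)
    return sigma * np.log((1 + t) / (1 - t))

# inverse-cdf sampling: x = F^{-1}(U) with U uniform on (0, 1)
rng = np.random.default_rng(1)
x = ehl_ppf(rng.uniform(size=100_000), 1.0, 2.0)
```

Applying `ehl_cdf` to the draws returns approximately uniform values, which is a convenient check that the two formulas are consistent inverses.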

For the special case λ = 1, this distribution is the half-logistic distribution. The half-logistic distribution has been discussed by many researchers as a typical life distribution. Balakrishnan (1985) introduced some recurrence relations for the moments and product moments of order statistics for a half-logistic distribution. Balakrishnan and Puthenpura (1986) found the best linear unbiased estimators (BLUEs) of the location and scale parameters of the half-logistic distribution through linear functions of order statistics. Balakrishnan and Wong (1991) obtained approximate maximum likelihood estimators (AMLEs) for the location and scale parameters of the half-logistic distribution with Type-II right censored samples. Balakrishnan and Chan (1992) derived the BLUE based on doubly Type-II censored samples for the scaled half-logistic distribution. Balakrishnan and Asgharzadeh (2005) discussed the AMLEs of the scaled half-logistic distribution based on progressive Type-II censored samples. Kang et al. (2008) proposed the AMLEs and maximum likelihood estimator (MLE) of the half-logistic distribution under a progressive Type-II censoring scheme. Kang et al. (2009) studied the AMLEs, the MLE, and the least squares estimator of the scale parameter under double hybrid censored samples. Torabi and Bagheri (2010) derived some properties of the extended generalized half-logistic distribution and discussed different methods to estimate its parameters based on complete and censored samples. Arora et al. (2010) studied the MLE and its asymptotic variance for the generalized half-logistic distribution based on the Type-I progressive censoring scheme with changing failure rates. Balakrishnan and Saleh (2011) derived relations for the moments of progressively Type-II censored order statistics from the half-logistic distribution with applications to inference. Lee et al. (2011) presented statistical inference based on generalized doubly Type-II hybrid censored samples from a half-logistic distribution. Kim et al. (2011) studied Bayesian estimation for the generalized half-logistic distribution under a progressive Type-II censoring scheme. Kang and Seo (2011) discussed the MLE and AMLEs of the scale parameter in an exponentiated half-logistic distribution based on the progressive Type-II censoring scheme. Giles (2012) derived bias reductions for the MLEs of the parameters of the half-logistic distribution. Seo et al. (2013) obtained the MLEs and AMLEs of the unknown parameters in a generalized half-logistic distribution under a Type-II hybrid censoring scheme. Seo and Kang (2015) proposed pivotal inference for the scaled half-logistic distribution based on progressive Type-II censored samples. Wang and Liu (2017) obtained an estimator of the unknown scale parameter of the half-logistic distribution under the Type-I progressive hybrid censoring scheme.

Epstein (1954) introduced a Type-I hybrid censoring scheme. In Type-I hybrid censoring, the test is terminated at a random time T* = min{Xr:n, T}, where Xr:n denotes the rth failure time, r and T are prefixed, and n is the sample size. However, Lee et al. (2014) introduced a multiply Type-I hybrid censoring scheme, because in many life testing experiments units are carelessly or unconsciously lost or removed from the experiment before failure.

In this paper, we deal with some estimators of the scale parameter of the exponentiated half-logistic distribution under the multiply Type-I hybrid censoring scheme when the shape parameter λ is known. In Section 2, the multiply Type-I hybrid censoring scheme is briefly explained. In Sections 3 and 4, we obtain the MLE and AMLEs using two different Taylor series expansions. Since the likelihood equation is nonlinear, we obtain the MLE numerically using the Newton-Raphson method. We also obtain the asymptotic confidence interval (CI) using the observed Fisher information matrix. In Section 5, we obtain Bayes estimators using the squared error loss function (SELF) and the general entropy loss function (GELF). Since the posterior distribution is complicated, we obtain the credible interval (CrI) and the highest posterior density (HPD) interval using Markov chain Monte Carlo (MCMC) samples. Finally, we compare the proposed estimators in the sense of the mean squared error (MSE) through Monte Carlo simulation for various censoring schemes.

2. Multiply Type-I hybrid censoring

The Type-I hybrid censoring scheme is

$\text{Case I}: \{X_{1:n} < X_{2:n} < \cdots < X_{r-1:n} < X_{r:n}\}, \quad \text{if } X_{r:n} < T,$ $\text{Case II}: \{X_{1:n} < X_{2:n} < \cdots < X_{d-1:n} < X_{d:n} < T < X_{d+1:n} < \cdots < X_{r:n}\}, \quad \text{if } d < r,\ X_{d:n} < T < X_{d+1:n},$

where r is the predetermined number of observations and T is the predetermined end time. The likelihood function is as follows.

$L \propto \prod_{i=1}^{m} f(x_{a_i:n}) \left[1-F(A)\right]^{n-a_m},$

where m denotes the number of observations (r or d) and A denotes the experiment end time (xr:n or T). However, the failure times of some units cannot always be recorded exactly: in many life testing experiments, units can fail between two points of observation with the exact times of failure unobserved. Suppose, therefore, that the experimenter fails to observe the middle l observations under a Type-I hybrid censoring scheme. Now, let

$\text{Case I}: \{X_{a_1:n} < X_{a_2:n} < \cdots < X_{a_{r-1}:n} < X_{a_r:n}\}, \quad \text{if } X_{a_r:n} < T,$ $\text{Case II}: \{X_{a_1:n} < X_{a_2:n} < \cdots < X_{a_{d-1}:n} < X_{a_d:n} < T < X_{a_{d+1}:n} < \cdots < X_{a_r:n}\}, \quad \text{if } d < r,\ X_{a_d:n} < T < X_{a_{d+1}:n}.$

Note that $X_{a_i:n}$ denotes the $a_i$th failure time, and $l_i$ the number of units censored between $a_i$ and $a_{i+1}$. That is, $l_i = a_{i+1} - a_i - 1$, where $i = 1, 2, \ldots, r-1$, and $l = \sum_{i=1}^{r-1} l_i$. Also, $a_r = r + \sum_{i=1}^{r-1} l_i$ if $r + \sum_{i=1}^{r-1} l_i < n$, and $a_r = n$ if $r + \sum_{i=1}^{r-1} l_i \ge n$, when the nth observation is not missing or lost.

Likewise, the multiply Type-I hybrid censoring scheme has two cases. In case I, the experiment is terminated when the $a_r$th failure occurs before the predetermined end time. In case II, the experiment is terminated at T when the $a_r$th failure occurs after the predetermined end time. $X_{a_d:n} < T < X_{a_{d+1}:n}$ means that the $a_d$th failure occurs before T, and no failure occurs between $X_{a_d:n}$ and T. The likelihood functions based on (2.1) and (2.2) are as follows.

$\text{Case I}: L_1 \propto \prod_{i=1}^{r} f(x_{a_i:n}) \left[1-F(x_{a_r:n})\right]^{n-a_r} \prod_{i=2}^{r} \left[F(x_{a_i:n}) - F(x_{a_{i-1}:n})\right]^{l_{i-1}} F(x_{a_1:n})^{a_1-1},$ $\text{Case II}: L_2 \propto \prod_{i=1}^{d} f(x_{a_i:n}) \left[1-F(T)\right]^{n-a_d} \prod_{i=2}^{d} \left[F(x_{a_i:n}) - F(x_{a_{i-1}:n})\right]^{l_{i-1}} F(x_{a_1:n})^{a_1-1}.$

Let $a_1 = 1$; then cases I and II can be combined and written as

$L \propto \prod_{i=1}^{u} f(x_{a_i:n}) \left[1-F(C)\right]^{n-a_u} \prod_{i=2}^{u} \left[F(x_{a_i:n}) - F(x_{a_{i-1}:n})\right]^{l_{i-1}},$

where u denotes the number of failures observed up to the experiment end time and C denotes the experiment end time. That is, u equals r or d, and C equals $x_{a_r:n}$ or T.
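For concreteness, a multiply Type-I hybrid censored sample can be simulated directly from this definition: generate all n failure times, keep only the order statistics indexed by $a_1 < a_2 < \cdots$, and stop the test at $\min(x_{a_r:n}, T)$. The sketch below is our own illustration (function names are ours), assuming the closed-form exponentiated half-logistic quantile function:

```python
import numpy as np

def ehl_ppf(u, sigma, lam):
    """Closed-form quantile function of the exponentiated half-logistic."""
    t = u ** (1.0 / lam)
    return sigma * np.log((1 + t) / (1 - t))

def multiply_type1_hybrid(n, observed_idx, r, T, sigma=1.0, lam=1.0, rng=None):
    """observed_idx lists the 1-based order-statistic indices a_1 < a_2 < ...
    that are actually recorded; the test stops at min(x_{a_r:n}, T),
    which distinguishes Case I from Case II."""
    rng = rng or np.random.default_rng()
    x = np.sort(ehl_ppf(rng.uniform(size=n), sigma, lam))  # all n failure times
    obs = x[np.asarray(observed_idx) - 1]                   # multiply censored
    stop = min(obs[r - 1], T)                               # experiment end time C
    return obs[obs <= stop], stop

# scheme used later in the simulation study: n = 20, a_i = (1:5, 7:20), r = 14
idx = list(range(1, 6)) + list(range(7, 21))
sample, C = multiply_type1_hybrid(20, idx, r=14, T=2.5,
                                  rng=np.random.default_rng(0))
```

In Case I the returned sample has exactly r entries; in Case II it is shorter and the end time equals T.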

A schematic representation of the multiply Type-I hybrid censoring scheme is presented in Figure 1.

3. Maximum likelihood estimation

Let z = x/σ. From equation (2.4), we can rewrite the likelihood function as follows.

$L \propto \sigma^{-u} \prod_{i=1}^{u} f(z_{a_i:n}) \left[1-F(z_C)\right]^{n-a_u} \prod_{i=2}^{u} \left[F(z_{a_i:n}) - F(z_{a_{i-1}:n})\right]^{l_{i-1}},$

where zC = C/σ. We can obtain the log-likelihood function using equation (3.1).

$\ln L \propto -u\ln\sigma + \sum_{i=1}^{u} \ln f(z_{a_i:n}) + (n-a_u)\ln\left(1-F(z_C)\right) + \sum_{i=2}^{u} l_{i-1}\ln\left(F(z_{a_i:n}) - F(z_{a_{i-1}:n})\right).$

The variable Z = X/σ has a standard exponentiated half-logistic distribution with pdf and cdf

$f(z) = \lambda \left[\frac{1-\exp(-z)}{1+\exp(-z)}\right]^{\lambda-1} \frac{2\exp(-z)}{\left[1+\exp(-z)\right]^{2}}, \quad F(z) = \left[\frac{1-\exp(-z)}{1+\exp(-z)}\right]^{\lambda}, \quad z > 0,\ \lambda > 0.$

Here f(z), f′(z), and F(z) satisfy

$\frac{f(z)}{F(z)} = \frac{2\lambda\exp(-z)}{1-\exp(-2z)}, \quad \frac{f'(z)}{f(z)} = \left(1-\frac{1}{\lambda}\right)\frac{f(z)}{F(z)} - \left[F(z)\right]^{\frac{1}{\lambda}}.$

On differentiating the log-likelihood function with respect to σ on (3.2) and setting the equation to zero, we obtain the estimating equation as

$\frac{\partial \ln L}{\partial \sigma} = -\frac{1}{\sigma}\left[u + \left(1-\frac{1}{\lambda}\right)\sum_{i=1}^{u}\frac{f(z_{a_i:n})}{F(z_{a_i:n})} z_{a_i:n} - \sum_{i=1}^{u}\left[F(z_{a_i:n})\right]^{\frac{1}{\lambda}} z_{a_i:n} - (n-a_u)\frac{f(z_C)}{1-F(z_C)} z_C + \sum_{i=2}^{u} l_{i-1}\left\{\frac{f(z_{a_i:n}) z_{a_i:n} - f(z_{a_{i-1}:n}) z_{a_{i-1}:n}}{F(z_{a_i:n}) - F(z_{a_{i-1}:n})}\right\}\right] = 0.$

We can find the MLE $\tilde{\sigma}$ of σ by solving equation (3.4). Since equation (3.4) cannot be solved in closed form, we use the Newton-Raphson method to obtain a numerical solution.

To construct the 100(1 – α)% CI for σ, we can use the asymptotic normality of the MLE with Var($\tilde{\sigma}$) estimated from the inverse of the observed Fisher information matrix.

From the log-likelihood function, the second derivative of the log-likelihood with respect to σ is obtained as

$-\frac{\partial^2 \ln L}{\partial \sigma^2} = \frac{D}{\sigma^2},$

where

$D = - u - ( 1 - 1 λ ) ∑ i = 1 u ( f ′ ( z a i : n ) z a i : n 2 + 2 f ( z a i : n ) z a i : n F ( z a i : n ) - f 2 ( z a i : n ) z a i : n 2 F 2 ( z a i : n ) ) + ∑ i = 1 u 1 λ F ( z a i : n ) 1 λ - 1 f ( z a i : n ) z a i : n 2 + ∑ i = 1 u 2 ( F ( z a i : n ) ) 1 λ z a i : n + ( n - a u ) ( f ′ ( z C ) z C 2 + 2 f ( z C ) z C 1 - F ( z C ) + f 2 ( z C ) z C 2 ( 1 - F ( z C ) ) 2 ) - ∑ i = 2 u l i - 1 f ′ ( z a i : n ) z a i : n 2 - f ′ ( z a i - 1 : n ) z a i - 1 : n 2 + 3 ( f ( z a i : n ) z a i : n - f ( z a i - 1 : n ) z a i - 1 : n ) F ( z a i : n ) - F ( z a i - 1 : n ) + ∑ i = 2 u l i - 1 ( f ( z a i : n ) z a i : n - f ( z a i - 1 : n ) z a i - 1 : n F ( z a i : n ) - F ( z a i - 1 : n ) ) 2 .$

Let I(σ) denote the Fisher information of σ. The Fisher information is obtained by taking the expectation of equation (3.5); since equation (3.5) is complicated, we use the observed Fisher information instead. Under some mild regularity conditions, $\tilde{\sigma}$ is approximately normal with mean σ and variance $I^{-1}(\sigma)$. In practice, we usually estimate $I^{-1}(\sigma)$ by $I^{-1}(\tilde{\sigma})$. Then $\tilde{\sigma} \approx N(\sigma, I^{-1}(\tilde{\sigma}))$. Therefore, the approximate 100(1 – α)% CI for σ is

$\tilde{\sigma} \pm z_{\frac{\alpha}{2}} \sqrt{I^{-1}(\tilde{\sigma})},$

where $z_{\alpha/2}$ is the percentile of the standard normal distribution with right-tail probability α/2.
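The whole procedure of this section can be sketched numerically. The code below is our own illustration, not the authors' implementation: it evaluates the log-likelihood (3.2) for a multiply Type-I hybrid censored sample with $a_1 = 1$, solves the score equation by Newton-Raphson with numerical derivatives standing in for the closed-form expressions (3.4) and (3.5), and builds the asymptotic CI from the observed information. The data values are hypothetical.

```python
import numpy as np

def ehl_pdf(z, lam):
    e = np.exp(-z)
    return lam * ((1 - e) / (1 + e)) ** (lam - 1) * 2 * e / (1 + e) ** 2

def ehl_cdf(z, lam):
    e = np.exp(-z)
    return ((1 - e) / (1 + e)) ** lam

def loglik(sigma, x, a, n, C, lam):
    """Log-likelihood (3.2): x are the recorded failure times, a their 1-based
    order-statistic indices (a[0] = 1 assumed), C the experiment end time."""
    z = x / sigma
    l = np.diff(a) - 1                       # l_{i-1}: failures lost in each gap
    gaps = ehl_cdf(z[1:], lam) - ehl_cdf(z[:-1], lam)
    return (-len(x) * np.log(sigma) + np.log(ehl_pdf(z, lam)).sum()
            + (n - a[-1]) * np.log1p(-ehl_cdf(C / sigma, lam))
            + (l * np.log(gaps)).sum())

def newton_mle(x, a, n, C, lam, h=1e-4):
    """Newton-Raphson on the score, started from a coarse grid maximizer."""
    f = lambda s: loglik(s, x, a, n, C, lam)
    grid = np.linspace(0.2, 5.0, 200)
    s = grid[np.argmax([f(g) for g in grid])]
    for _ in range(100):
        d1 = (f(s + h) - f(s - h)) / (2 * h)              # numerical score
        d2 = (f(s + h) - 2 * f(s) + f(s - h)) / h ** 2
        step = d1 / d2
        s -= step
        if abs(step) < 1e-9:
            break
    return s, -d2                            # MLE and observed information

# toy censored sample (hypothetical numbers, for illustration only)
x = np.array([0.3, 0.6, 1.0, 1.4, 1.9])
a = np.array([1, 2, 4, 5, 6])
shat, info = newton_mle(x, a, n=10, C=2.0, lam=1.0)
ci = (shat - 1.96 / np.sqrt(info), shat + 1.96 / np.sqrt(info))
```

The grid initialization keeps the Newton iteration inside the concave basin around the maximum, after which it converges in a handful of steps.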

4. Approximate maximum likelihood estimation

We propose AMLEs of the scale parameter of the exponentiated half-logistic distribution using two different types of Taylor series expansion, since equation (3.4) cannot be solved explicitly; see, for example, Kang and Seo (2011). Let

$\xi_{a_i:n} = F^{-1}(p_{a_i:n}) = \ln\left[\frac{1+(p_{a_i:n})^{\frac{1}{\lambda}}}{1-(p_{a_i:n})^{\frac{1}{\lambda}}}\right], \quad \xi_{a_d}^{*} = F^{-1}(p_{a_d}^{*}) = \ln\left[\frac{1+(p_{a_d}^{*})^{\frac{1}{\lambda}}}{1-(p_{a_d}^{*})^{\frac{1}{\lambda}}}\right],$

where

$p_{a_i:n} = \frac{a_i}{n+1}, \quad q_{a_i:n} = 1 - p_{a_i:n}, \quad i = 1, 2, \ldots, s-1, \qquad p_{a_d}^{*} = \frac{p_{a_d} + p_{a_d+1}}{2}, \quad q_{a_d}^{*} = 1 - p_{a_d}^{*}.$

First, we can approximate the following functions by

$\frac{f(z_{a_i:n})}{F(z_{a_i:n})} z_{a_i:n} \approx \alpha_{1i} + \beta_{1i} z_{a_i:n}, \quad \left[F(z_{a_i:n})\right]^{\frac{1}{\lambda}} z_{a_i:n} \approx \gamma_{1i} + \eta_{1i} z_{a_i:n}, \quad \frac{f(z_{a_i:n})}{1-F(z_{a_i:n})} z_{a_i:n} \approx \kappa_{1i} + \delta_{1i} z_{a_i:n}, \quad \frac{f(z_T)}{1-F(z_T)} z_T \approx \kappa_{1d}^{*} + \delta_{1d}^{*} z_T, \quad \frac{f(z_{a_i:n}) z_{a_i:n}}{F(z_{a_i:n}) - F(z_{a_{i-1}:n})} \approx \iota_{1i} + \tau_{1i} z_{a_i:n} + \omega_{1i} z_{a_{i-1}:n}, \quad \frac{f(z_{a_{i-1}:n}) z_{a_{i-1}:n}}{F(z_{a_i:n}) - F(z_{a_{i-1}:n})} \approx \upsilon_{1i} + \varrho_{1i} z_{a_i:n} + \phi_{1i} z_{a_{i-1}:n},$

where

$\alpha_{1i} = \frac{(\xi_{a_i:n})^2}{p_{a_i:n}} \frac{1+(p_{a_i:n})^{\frac{2}{\lambda}}}{2(p_{a_i:n})^{\frac{1}{\lambda}}} f(\xi_{a_i:n}), \quad \beta_{1i} = -\frac{f(\xi_{a_i:n})}{p_{a_i:n}} \left[\frac{1+(p_{a_i:n})^{\frac{2}{\lambda}}}{2(p_{a_i:n})^{\frac{1}{\lambda}}} \xi_{a_i:n} - 1\right], \quad \gamma_{1i} = -\frac{(\xi_{a_i:n})^2}{\lambda p_{a_i:n}} (p_{a_i:n})^{\frac{1}{\lambda}} f(\xi_{a_i:n}), \quad \eta_{1i} = (p_{a_i:n})^{\frac{1}{\lambda}} \left[1 + \frac{f(\xi_{a_i:n}) \xi_{a_i:n}}{\lambda p_{a_i:n}}\right], \quad \kappa_{1i} = -\frac{(\xi_{a_i:n})^2}{q_{a_i:n}} \left[f'(\xi_{a_i:n}) + \frac{f^2(\xi_{a_i:n})}{q_{a_i:n}}\right], \quad \delta_{1i} = \frac{1}{q_{a_i:n}} \left[f(\xi_{a_i:n}) + \left(f'(\xi_{a_i:n}) + \frac{f^2(\xi_{a_i:n})}{q_{a_i:n}}\right) \xi_{a_i:n}\right], \quad \kappa_{1d}^{*} = -\frac{(\xi_{a_d}^{*})^2}{q_{a_d}^{*}} \left[f'(\xi_{a_d}^{*}) + \frac{f^2(\xi_{a_d}^{*})}{q_{a_d}^{*}}\right], \quad \delta_{1d}^{*} = \frac{1}{q_{a_d}^{*}} \left[f(\xi_{a_d}^{*}) + \left(f'(\xi_{a_d}^{*}) + \frac{f^2(\xi_{a_d}^{*})}{q_{a_d}^{*}}\right) \xi_{a_d}^{*}\right], \quad \iota_{1i} = -\frac{f'(\xi_{a_i:n}) \xi_{a_i:n}^2}{p_{a_i:n} - p_{a_{i-1}:n}} + \frac{f(\xi_{a_i:n}) \xi_{a_i:n} \{f(\xi_{a_i:n}) \xi_{a_i:n} - f(\xi_{a_{i-1}:n}) \xi_{a_{i-1}:n}\}}{(p_{a_i:n} - p_{a_{i-1}:n})^2}, \quad \tau_{1i} = \frac{f'(\xi_{a_i:n}) \xi_{a_i:n} + f(\xi_{a_i:n})}{p_{a_i:n} - p_{a_{i-1}:n}} - \frac{f^2(\xi_{a_i:n}) \xi_{a_i:n}}{(p_{a_i:n} - p_{a_{i-1}:n})^2}, \quad \omega_{1i} = \frac{f(\xi_{a_i:n}) f(\xi_{a_{i-1}:n}) \xi_{a_i:n}}{(p_{a_i:n} - p_{a_{i-1}:n})^2}, \quad \upsilon_{1i} = -\frac{f'(\xi_{a_{i-1}:n}) \xi_{a_{i-1}:n}^2}{p_{a_i:n} - p_{a_{i-1}:n}} + \frac{f(\xi_{a_{i-1}:n}) \xi_{a_{i-1}:n} \{f(\xi_{a_i:n}) \xi_{a_i:n} - f(\xi_{a_{i-1}:n}) \xi_{a_{i-1}:n}\}}{(p_{a_i:n} - p_{a_{i-1}:n})^2}, \quad \varrho_{1i} = -\frac{f(\xi_{a_i:n}) f(\xi_{a_{i-1}:n}) \xi_{a_{i-1}:n}}{(p_{a_i:n} - p_{a_{i-1}:n})^2}, \quad \phi_{1i} = \frac{f'(\xi_{a_{i-1}:n}) \xi_{a_{i-1}:n} + f(\xi_{a_{i-1}:n})}{p_{a_i:n} - p_{a_{i-1}:n}} + \frac{f^2(\xi_{a_{i-1}:n}) \xi_{a_{i-1}:n}}{(p_{a_i:n} - p_{a_{i-1}:n})^2}.$

By substituting equation (4.1) into equation (3.4), we can obtain the first approximated equation (4.2).

$\text{Case 1}: \frac{\partial \ln L_1}{\partial \sigma} = -\frac{1}{\sigma}\left[s + \left(1-\frac{1}{\lambda}\right)\sum_{i=1}^{s}(\alpha_{1i} + \beta_{1i} z_{a_i:n}) - \sum_{i=1}^{s}(\gamma_{1i} + \eta_{1i} z_{a_i:n}) - (n-a_s)(\kappa_{1s} + \delta_{1s} z_{a_s:n}) + \sum_{i=2}^{s} l_{i-1}(\iota_{1i} + \tau_{1i} z_{a_i:n} + \omega_{1i} z_{a_{i-1}:n}) - \sum_{i=2}^{s} l_{i-1}(\upsilon_{1i} + \varrho_{1i} z_{a_i:n} + \phi_{1i} z_{a_{i-1}:n})\right] = 0,$ $\text{Case 2}: \frac{\partial \ln L_2}{\partial \sigma} = -\frac{1}{\sigma}\left[d + \left(1-\frac{1}{\lambda}\right)\sum_{i=1}^{d}(\alpha_{1i} + \beta_{1i} z_{a_i:n}) - \sum_{i=1}^{d}(\gamma_{1i} + \eta_{1i} z_{a_i:n}) - (n-a_d)(\kappa_{1d}^{*} + \delta_{1d}^{*} z_T) + \sum_{i=2}^{d} l_{i-1}(\iota_{1i} + \tau_{1i} z_{a_i:n} + \omega_{1i} z_{a_{i-1}:n}) - \sum_{i=2}^{d} l_{i-1}(\upsilon_{1i} + \varrho_{1i} z_{a_i:n} + \phi_{1i} z_{a_{i-1}:n})\right] = 0.$

We can solve equation (4.2), and derive the first AMLE $σ^1$.

To construct the 100(1 – α)% CI for σ, we need to compute the Fisher information matrix. However, it is complicated to calculate the exact expected values of the Fisher information from equation (4.2). We then derive the asymptotic variance using the observed Fisher information.

$\hat{\sigma}_1 = -\frac{B_1}{A_1}, \quad I^{-1}(\hat{\sigma}_1) \simeq -\left(\frac{A_1}{\hat{\sigma}_1^2} + \frac{2B_1}{\hat{\sigma}_1^3}\right)^{-1},$

where

$\text{Case 1}: A_1 = s + \left(1-\frac{1}{\lambda}\right)\sum_{i=1}^{s}\alpha_{1i} - \sum_{i=1}^{s}\gamma_{1i} - (n-a_s)\kappa_{1s} + \sum_{i=2}^{s} l_{i-1}(\iota_{1i} - \upsilon_{1i}),$ $B_1 = \left(1-\frac{1}{\lambda}\right)\sum_{i=1}^{s}\beta_{1i} x_{a_i:n} - \sum_{i=1}^{s}\eta_{1i} x_{a_i:n} - (n-a_s)\delta_{1s} x_{a_s:n} + \sum_{i=2}^{s} l_{i-1}(\tau_{1i} x_{a_i:n} + \omega_{1i} x_{a_{i-1}:n} - \varrho_{1i} x_{a_i:n} - \phi_{1i} x_{a_{i-1}:n}),$ $\text{Case 2}: A_1 = d + \left(1-\frac{1}{\lambda}\right)\sum_{i=1}^{d}\alpha_{1i} - \sum_{i=1}^{d}\gamma_{1i} - (n-a_d)\kappa_{1d}^{*} + \sum_{i=2}^{d} l_{i-1}(\iota_{1i} - \upsilon_{1i}),$ $B_1 = \left(1-\frac{1}{\lambda}\right)\sum_{i=1}^{d}\beta_{1i} x_{a_i:n} - \sum_{i=1}^{d}\eta_{1i} x_{a_i:n} - (n-a_d)\delta_{1d}^{*} T + \sum_{i=2}^{d} l_{i-1}(\tau_{1i} x_{a_i:n} + \omega_{1i} x_{a_{i-1}:n} - \varrho_{1i} x_{a_i:n} - \phi_{1i} x_{a_{i-1}:n}).$

The approximate 100(1 – α)% CI for σ is

$\hat{\sigma}_1 \pm z_{\frac{\alpha}{2}} \sqrt{I^{-1}(\hat{\sigma}_1)}.$

Second, we can approximate the functions by

$\frac{f(z_{a_i:n})}{F(z_{a_i:n})} \approx \alpha_{2i} + \beta_{2i} z_{a_i:n}, \quad \left[F(z_{a_i:n})\right]^{\frac{1}{\lambda}} \approx \gamma_{2i} + \eta_{2i} z_{a_i:n}, \quad \frac{f(z_{a_i:n})}{1-F(z_{a_i:n})} \approx \kappa_{2i} + \delta_{2i} z_{a_i:n}, \quad \frac{f(z_T)}{1-F(z_T)} \approx \kappa_{2d}^{*} + \delta_{2d}^{*} z_T, \quad \frac{f(z_{a_i:n}) z_{a_i:n} - f(z_{a_{i-1}:n}) z_{a_{i-1}:n}}{F(z_{a_i:n}) - F(z_{a_{i-1}:n})} \approx \iota_{2i} + \tau_{2i} z_{a_i:n} + \omega_{2i} z_{a_{i-1}:n},$

where

$\alpha_{2i} = \frac{f(\xi_{a_i:n})}{p_{a_i:n}} \left[\frac{1+(p_{a_i:n})^{\frac{2}{\lambda}}}{2(p_{a_i:n})^{\frac{1}{\lambda}}} \xi_{a_i:n} + 1\right], \quad \beta_{2i} = -\frac{f(\xi_{a_i:n})}{p_{a_i:n}} \frac{1+(p_{a_i:n})^{\frac{2}{\lambda}}}{2(p_{a_i:n})^{\frac{1}{\lambda}}}, \quad \gamma_{2i} = (p_{a_i:n})^{\frac{1}{\lambda}} \left[1 - \frac{f(\xi_{a_i:n}) \xi_{a_i:n}}{\lambda p_{a_i:n}}\right], \quad \eta_{2i} = \frac{(p_{a_i:n})^{\frac{1}{\lambda}} f(\xi_{a_i:n})}{\lambda p_{a_i:n}}, \quad \kappa_{2i} = \frac{1}{q_{a_i:n}} \left[f(\xi_{a_i:n}) - \left(f'(\xi_{a_i:n}) + \frac{f^2(\xi_{a_i:n})}{q_{a_i:n}}\right) \xi_{a_i:n}\right], \quad \delta_{2i} = \frac{1}{q_{a_i:n}} \left[f'(\xi_{a_i:n}) + \frac{f^2(\xi_{a_i:n})}{q_{a_i:n}}\right], \quad \kappa_{2d}^{*} = \frac{1}{q_{a_d}^{*}} \left[f(\xi_{a_d}^{*}) - \left(f'(\xi_{a_d}^{*}) + \frac{f^2(\xi_{a_d}^{*})}{q_{a_d}^{*}}\right) \xi_{a_d}^{*}\right], \quad \delta_{2d}^{*} = \frac{1}{q_{a_d}^{*}} \left[f'(\xi_{a_d}^{*}) + \frac{f^2(\xi_{a_d}^{*})}{q_{a_d}^{*}}\right], \quad \iota_{2i} = \frac{f'(\xi_{a_{i-1}:n}) \xi_{a_{i-1}:n}^2 - f'(\xi_{a_i:n}) \xi_{a_i:n}^2}{p_{a_i:n} - p_{a_{i-1}:n}} + \frac{\left(f(\xi_{a_i:n}) \xi_{a_i:n} - f(\xi_{a_{i-1}:n}) \xi_{a_{i-1}:n}\right)^2}{(p_{a_i:n} - p_{a_{i-1}:n})^2}, \quad \tau_{2i} = \frac{f'(\xi_{a_i:n}) \xi_{a_i:n} + f(\xi_{a_i:n})}{p_{a_i:n} - p_{a_{i-1}:n}} - \frac{\left(f(\xi_{a_i:n}) \xi_{a_i:n} - f(\xi_{a_{i-1}:n}) \xi_{a_{i-1}:n}\right) f(\xi_{a_i:n})}{(p_{a_i:n} - p_{a_{i-1}:n})^2}, \quad \omega_{2i} = -\frac{f'(\xi_{a_{i-1}:n}) \xi_{a_{i-1}:n} + f(\xi_{a_{i-1}:n})}{p_{a_i:n} - p_{a_{i-1}:n}} + \frac{\left(f(\xi_{a_i:n}) \xi_{a_i:n} - f(\xi_{a_{i-1}:n}) \xi_{a_{i-1}:n}\right) f(\xi_{a_{i-1}:n})}{(p_{a_i:n} - p_{a_{i-1}:n})^2}.$

By substituting equation (4.4) into equation (3.4), we can obtain the second approximated equation (4.5).

$\text{Case 1}: \frac{\partial \ln L_1}{\partial \sigma} = -\frac{1}{\sigma}\left[u + \left(1-\frac{1}{\lambda}\right)\sum_{i=1}^{s}(\alpha_{2i} + \beta_{2i} z_{a_i:n}) z_{a_i:n} - \sum_{i=1}^{s}(\gamma_{2i} + \eta_{2i} z_{a_i:n}) z_{a_i:n} - (n-a_s)(\kappa_{2s} + \delta_{2s} z_{a_s:n}) z_{a_s:n} + \sum_{i=2}^{s} l_{i-1}(\iota_{2i} + \tau_{2i} z_{a_i:n} + \omega_{2i} z_{a_{i-1}:n})\right] = 0,$ $\text{Case 2}: \frac{\partial \ln L_2}{\partial \sigma} = -\frac{1}{\sigma}\left[u + \left(1-\frac{1}{\lambda}\right)\sum_{i=1}^{d}(\alpha_{2i} + \beta_{2i} z_{a_i:n}) z_{a_i:n} - \sum_{i=1}^{d}(\gamma_{2i} + \eta_{2i} z_{a_i:n}) z_{a_i:n} - (n-a_d)(\kappa_{2d}^{*} + \delta_{2d}^{*} z_T) z_T + \sum_{i=2}^{d} l_{i-1}(\iota_{2i} + \tau_{2i} z_{a_i:n} + \omega_{2i} z_{a_{i-1}:n})\right] = 0.$

We can solve equation (4.5), and derive the second AMLE $σ^2$.

To construct the 100(1 – α)% CI for σ, we need to compute the Fisher information matrix. However, it is complicated to calculate the exact expected values of the Fisher information from equation (4.5). We then derive the asymptotic variance using the observed Fisher information.

$\hat{\sigma}_2 = \frac{-B_2 + \sqrt{B_2^2 - 4A_2C_2}}{2A_2}, \quad I^{-1}(\hat{\sigma}_2) \simeq -\left(\frac{A_2}{\hat{\sigma}_2^2} + \frac{2B_2}{\hat{\sigma}_2^3} + \frac{3C_2}{\hat{\sigma}_2^4}\right)^{-1},$

where

$Case 1 : A 2 = u + ∑ i = 2 s l i - 1 ι 2 i , B 2 = ( 1 - 1 λ ) ∑ i = 1 s α 2 i x a i : n - ∑ i = 1 s γ 2 i x a i : n - ( n - a s ) κ 2 s x a s : n + ∑ i = 2 s l i - 1 ( τ 2 i x a i : n + ω 2 i x a i - 1 : n ) , C 2 = ( 1 - 1 λ ) ∑ i = 1 s β 2 i x a i : n 2 - ∑ i = 1 s η 2 i x a i : n 2 - ( n - a s ) δ 2 s x a s : n 2 , Case 2 : A 2 = u + ∑ i = 2 d l i - 1 ι 2 i , B 2 = ( 1 - 1 λ ) ∑ i = 1 d α 2 i x a i : n - ∑ i = 1 d γ 2 i x a i : n - ( n - a d ) κ 2 d * T + ∑ i = 2 d l i - 1 ( τ 2 i x a i : n + ω 2 i x a i - 1 : n ) , C 2 = ( 1 - 1 λ ) ∑ i = 1 d β 2 i x a i : n 2 - ∑ i = 1 d η 2 i x a i : n 2 - ( n - a d ) δ 2 d * T 2 .$

Since C2 < 0, the square root of equation (4.6) is always positive. The approximate 100(1 – α)% CI for σ is

$\hat{\sigma}_2 \pm z_{\frac{\alpha}{2}} \sqrt{I^{-1}(\hat{\sigma}_2)}.$
5. Bayesian estimation

In this section, we obtain the Bayes estimator of the scale parameter of the exponentiated half-logistic distribution. We consider the prior distribution for σ as follows.

$\pi(\sigma) \propto \sigma^{-\gamma-1} \exp\left(-\frac{\beta}{\sigma}\right), \quad \sigma > 0,$

where γ > 0 and β > 0. This prior distribution, known as the inverse gamma distribution, was also used by Kang et al. (2013). By combining the likelihood function (2.4) and the prior distribution (5.1), we can obtain the posterior distribution of σ.

$\pi(\sigma \mid x) = \frac{L(\sigma)\pi(\sigma)}{\int L(\sigma)\pi(\sigma)\,d\sigma} \propto (2\lambda)^u \sigma^{-\gamma-u-1} \left[1 - \left(\frac{1-\exp\left(-\frac{C}{\sigma}\right)}{1+\exp\left(-\frac{C}{\sigma}\right)}\right)^{\lambda}\right]^{n-a_u} \times \exp\left(-\frac{1}{\sigma}\left(\sum_{i=1}^{u} x_{a_i:n} + \beta\right)\right) \left(\frac{1-\exp\left(-\frac{x_{a_1:n}}{\sigma}\right)}{1+\exp\left(-\frac{x_{a_1:n}}{\sigma}\right)}\right)^{\lambda-1} \prod_{i=1}^{u} \left(1+\exp\left(-\frac{x_{a_i:n}}{\sigma}\right)\right)^{-2} \times \prod_{i=2}^{u} \left(\frac{1-\exp\left(-\frac{x_{a_i:n}}{\sigma}\right)}{1+\exp\left(-\frac{x_{a_i:n}}{\sigma}\right)}\right)^{\lambda-1} \left[\left(\frac{1-\exp\left(-\frac{x_{a_i:n}}{\sigma}\right)}{1+\exp\left(-\frac{x_{a_i:n}}{\sigma}\right)}\right)^{\lambda} - \left(\frac{1-\exp\left(-\frac{x_{a_{i-1}:n}}{\sigma}\right)}{1+\exp\left(-\frac{x_{a_{i-1}:n}}{\sigma}\right)}\right)^{\lambda}\right]^{l_{i-1}}.$

We use the SELF and GELF to obtain the Bayes estimators. The SELF is a symmetric loss function, giving equal weight to overestimation and underestimation of the same magnitude. In some situations, however, overestimation is more serious than underestimation, or vice versa, so we also consider the GELF as an asymmetric loss function.

$L 1 ( σ , σ ^ ) = ( σ ^ - σ ) 2 , L 2 ( σ , σ ^ ) = ( σ ^ σ ) q - q ln ( σ ^ σ ) - 1 ,$

where $\hat{\sigma}$ is an estimator of the parameter σ and q is the shape parameter. For q > 0, overestimation causes more serious consequences than underestimation, and vice versa for q < 0. The Bayes estimators obtained using the SELF and GELF are as follows.

$\hat{\sigma}_S = E_{\sigma}(\sigma \mid x), \quad \hat{\sigma}_G = \left[E_{\sigma}(\sigma^{-q} \mid x)\right]^{-\frac{1}{q}}.$

If q = −1, the estimators obtained using the SELF and the GELF are the same.
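Given draws from the posterior, both point estimates in (5.3) reduce to simple sample averages. A small sketch of ours follows; the gamma draws merely stand in for an actual posterior sample:

```python
import numpy as np

def bayes_gelf(post, q):
    """GELF Bayes estimate (E[sigma^-q | x])^(-1/q); q = -1 recovers the
    posterior mean, i.e., the SELF estimate."""
    return float(np.mean(post ** (-q)) ** (-1.0 / q))

rng = np.random.default_rng(2)
post = rng.gamma(50.0, 1.0 / 50.0, size=200_000)  # stand-in posterior draws

self_est = bayes_gelf(post, -1)      # equals the posterior sample mean
gelf_neg = bayes_gelf(post, -0.5)
gelf_pos = bayes_gelf(post, 1.0)     # q > 0 guards against overestimation
```

Because the GELF estimate is a power mean of order −q, it decreases as q increases, which is how q > 0 penalizes overestimation more heavily than q < 0.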

Since the posterior distribution is complicated, we obtain the Bayes estimators using the Tierney and Kadane (1986) method.

Let L(σ; x) be the likelihood function of σ based on the n observations, and let π(σ∣x) denote the posterior distribution of σ. Then we can write the following.

$E[g(\sigma) \mid x] = \int g(\sigma)\pi(\sigma \mid x)\,d\sigma = \frac{\int e^{nl^*(\sigma)}\,d\sigma}{\int e^{nl(\sigma)}\,d\sigma},$

where

$l(\sigma) = \frac{1}{n}\left(\ln L(\sigma; x) + \ln \pi(\sigma)\right), \quad l^*(\sigma) = l(\sigma) + \frac{1}{n}\ln g(\sigma).$

The approximation method proposed by Tierney and Kadane (T-K) is as follows:

$E[\sigma \mid x] \approx \left(\frac{|\psi^*|}{|\psi|}\right)^{\frac{1}{2}} \exp\left(n\left\{l^*(\hat{\sigma}^*) - l(\hat{\sigma})\right\}\right) = \left(\frac{|\psi^*|}{|\psi|}\right)^{\frac{1}{2}} \frac{g(\hat{\sigma}^*)\pi(\hat{\sigma}^* \mid x)}{\pi(\hat{\sigma} \mid x)},$

where $\hat{\sigma}^*$ and $\hat{\sigma}$ maximize $l^*(\sigma)$ and $l(\sigma)$, respectively, and $\psi^*$ and $\psi$ are minus the inverses of the second derivatives of $l^*(\sigma)$ and $l(\sigma)$ at $\hat{\sigma}^*$ and $\hat{\sigma}$, respectively. Here,

$l(\sigma) = \frac{1}{n}\left[u\ln 2\lambda - (\gamma+u+1)\ln\sigma + (n-a_u)\ln\left(1-\left(\frac{1-\exp\left(-\frac{C}{\sigma}\right)}{1+\exp\left(-\frac{C}{\sigma}\right)}\right)^{\lambda}\right) - \frac{1}{\sigma}\left(\sum_{i=1}^{u} x_{a_i:n} + \beta\right) + (\lambda-1)\ln\left(\frac{1-\exp\left(-\frac{x_{a_1:n}}{\sigma}\right)}{1+\exp\left(-\frac{x_{a_1:n}}{\sigma}\right)}\right) + \sum_{i=2}^{u}(\lambda-1)\ln\left(\frac{1-\exp\left(-\frac{x_{a_i:n}}{\sigma}\right)}{1+\exp\left(-\frac{x_{a_i:n}}{\sigma}\right)}\right) - 2\sum_{i=1}^{u}\ln\left(1+\exp\left(-\frac{x_{a_i:n}}{\sigma}\right)\right) + \sum_{i=2}^{u} l_{i-1}\ln\left(\left(\frac{1-\exp\left(-\frac{x_{a_i:n}}{\sigma}\right)}{1+\exp\left(-\frac{x_{a_i:n}}{\sigma}\right)}\right)^{\lambda} - \left(\frac{1-\exp\left(-\frac{x_{a_{i-1}:n}}{\sigma}\right)}{1+\exp\left(-\frac{x_{a_{i-1}:n}}{\sigma}\right)}\right)^{\lambda}\right)\right], \quad l^*(\sigma) = l(\sigma) + \frac{1}{n}\ln g(\sigma).$

When we obtain the Bayes estimator ($σ^B$) by using the SELF, g(σ) is σ. When we obtain the Bayes estimator ($σ^G$) using the GELF, g(σ) is σq. By substituting equation (5.5) into equation (5.4), the Bayes estimator $σ^G$ is obtained by

$σ ^ G = ( | ψ * | | ψ | ) 1 2 ( σ ^ γ + u - q σ ^ * γ + u ) ( 1 - ( 1 - exp ( - C σ ^ * ) 1 + exp ( - C σ ^ * ) ) λ 1 - ( 1 - exp ( - C σ ^ ) 1 + exp ( - C σ ^ ) ) λ ) n - a u exp { ( 1 σ ^ - 1 σ ^ * ) ( ∑ i = 1 u x a i : n + β ) } × ( ( 1 - exp ( - x a 1 : n σ ^ * ) ) ( 1 + exp ( - x a 1 : n σ ^ ) ) ( 1 - exp ( - x a 1 : n σ ^ ) ) ( 1 + exp ( - x a 1 : n σ ^ * ) ) ) λ - 1 ∏ i = 2 u ( ( 1 - exp ( - x a i : n σ ^ * ) ) ( 1 + exp ( - x a i : n σ ^ ) ) ( 1 + exp ( - x a i : n σ ^ * ) ) ( 1 - exp ( - x a i : n σ ^ ) ) ) λ - 1 × ∏ i = 2 u [ ( 1 - exp ( - x a i : n σ ^ * ) 1 + exp ( - x a i : n σ ^ * ) ) λ - ( 1 - exp ( - x a i - 1 : n σ ^ * ) 1 + exp ( - x a i - 1 : n σ ^ * ) ) λ ( 1 - exp ( - x a i : n σ ^ ) 1 + exp ( - x a i : n σ ^ ) ) λ - ( 1 - exp ( - x a i - 1 : n σ ^ ) 1 + exp ( - x a i - 1 : n σ ^ ) ) λ ] l i - 1 ∏ i = 1 u ( 1 + exp ( - x a i : n σ ^ * ) 1 + exp ( - x a i : n σ ^ ) ) - 2 .$

When q is −1, we obtain the estimator $σ^B$.
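The T-K approximation is easy to exercise on a case where the answer is known. In the sketch below (our own illustration, with numerical second derivatives in place of closed forms), the log-posterior is an inverse gamma with known mean β/(α − 1), so the accuracy of the approximation can be checked directly; the function names and the choice (α, β) = (12, 8) are ours:

```python
import numpy as np

def tk_posterior_mean(logpost, grid):
    """Tierney-Kadane approximation of E[sigma | x] for g(sigma) = sigma:
    maximize l(s) = logpost(s) and l*(s) = logpost(s) + log(s), then combine
    the modes and curvatures as in (5.4)."""
    h = 1e-5
    def laplace(f):
        s = grid[np.argmax(f(grid))]                  # coarse start
        for _ in range(50):                           # Newton polish
            d1 = (f(s + h) - f(s - h)) / (2 * h)
            d2 = (f(s + h) - 2 * f(s) + f(s - h)) / h ** 2
            s -= d1 / d2
        return s, f(s), -1.0 / d2                     # mode, value, psi
    s0, f0, psi0 = laplace(logpost)
    s1, f1, psi1 = laplace(lambda s: logpost(s) + np.log(s))
    return float(np.sqrt(psi1 / psi0) * np.exp(f1 - f0))

# inverse-gamma "posterior": the exact mean is beta / (alpha - 1)
alpha, beta = 12.0, 8.0
lp = lambda s: -(alpha + 1.0) * np.log(s) - beta / s
est = tk_posterior_mean(lp, np.linspace(0.1, 5.0, 200))
```

With these values the approximation agrees with the exact mean 8/11 to within about one percent, illustrating the O(n⁻²) accuracy that makes the method attractive for complicated posteriors such as (5.2).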

The CrI is as follows.

$\int_{a}^{b} \pi(\sigma \mid x)\,d\sigma = 1 - \alpha.$

If the interval has equal tail probabilities, a and b are the percentiles of the posterior distribution with tail probabilities α/2 on each side. We use MCMC samples with the adaptive Metropolis-Hastings (M-H) algorithm to obtain the CrI; in the adaptive M-H algorithm, the covariance of the proposal distribution is updated during sampling. We use the normal distribution as the proposal distribution. We also obtain the HPD interval, the shortest CrI, from the MCMC samples.
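A scalar sketch of this sampler follows. It is our own illustration and simplifies two things: it only tunes a scalar proposal scale rather than adapting the full proposal covariance, and the inverse gamma target merely stands in for the posterior (5.2). The chain runs on ln σ, matching the log transformation used in the simulation study, and the equal-tail CrI and HPD interval are read off the draws:

```python
import numpy as np

def mh_logscale(logpost, n_iter=20_000, theta0=0.0, rng=None):
    """Random-walk M-H on theta = ln(sigma); the proposal sd is nudged up on
    acceptance and down on rejection (a crude scalar form of adaptation)."""
    rng = rng or np.random.default_rng()
    theta, sd, out = theta0, 0.5, np.empty(n_iter)
    lp = logpost(np.exp(theta)) + theta            # + theta: Jacobian of exp
    for i in range(n_iter):
        prop = theta + sd * rng.normal()
        lp_prop = logpost(np.exp(prop)) + prop
        if np.log(rng.uniform()) < lp_prop - lp:
            theta, lp, sd = prop, lp_prop, sd * 1.01
        else:
            sd *= 0.99
        out[i] = theta
    return np.exp(out[n_iter // 2:])               # drop first half as burn-in

def hpd_interval(draws, alpha=0.05):
    """Shortest interval containing a fraction 1 - alpha of the draws."""
    s = np.sort(draws)
    k = int(np.ceil((1 - alpha) * len(s)))
    j = int(np.argmin(s[k - 1:] - s[:len(s) - k + 1]))
    return s[j], s[j + k - 1]

# inverse-gamma stand-in target; the exact posterior mean is 8/11
lp = lambda s: -13.0 * np.log(s) - 8.0 / s
draws = mh_logscale(lp, rng=np.random.default_rng(3))
cri = np.percentile(draws, [2.5, 97.5])            # equal-tail credible interval
hpd = hpd_interval(draws)
```

By construction the HPD interval is never longer than the equal-tail interval computed from the same draws, mirroring the comparison reported in Table 4. In practice the adaptation should be frozen or diminished after a warm-up phase so that the chain targets the posterior exactly.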

6. Illustrative example and simulated results

In this section, we present an example to illustrate the methods and derive the MSEs of the proposed estimators through Monte Carlo simulation.

### 6.1. Real data

In this subsection, we use real data given by Nelson (1982). The data represent the log failure times to breakdown of an insulating fluid testing experiment (Table 1) and have been utilized by many authors. Kang and Seo (2011) demonstrated through a Kolmogorov test that the data follow the exponentiated half-logistic distribution with shape parameter λ = 1; therefore, we can use these data to illustrate the methods proposed in the previous sections.

We can obtain the MLE, AMLEs, and Bayes estimators from these data. The Bayes estimator obtained using the SELF is $\hat{\sigma}_B$, and the Bayes estimator obtained using the GELF is $\hat{\sigma}_G$. The parameters of the prior distribution are (γ, β) = (4.0, 3.0). We also obtain the mean of the parameter samples drawn by the adaptive M-H algorithm ($\hat{\sigma}_{MCMC}$). We have n = 16, r = 9, T = 2.5, and $a_i$ = (1:3, 5:16). Table 2 presents the values.

### 6.2. Simulation results

In this subsection, we compare the proposed estimators of the scale parameter σ through Monte Carlo simulation. We obtain the biases and MSEs of the proposed estimators when the shape parameter λ is known. The simulation runs 5,000 times with σ = 1. Multiply Type-I hybrid censored samples from the exponentiated half-logistic distribution are generated for sample sizes n = 20, 40 and various censoring schemes. The average lengths of the CIs based on the MLE and AMLEs are obtained by the Monte Carlo method, and the average lengths of the CrI and HPD interval are obtained from the MCMC samples with the adaptive M-H algorithm. We use the normal distribution as the proposal distribution; since its support is (−∞, ∞), we sample ln $\hat{\sigma}^{(i)}$ rather than $\hat{\sigma}^{(i)}$. The initial proposal mean is ln $\tilde{\sigma}$ and the initial variance is the estimated variance of the MLE. We generate 10,000 MCMC samples per run from the posterior distribution and obtain the 95% CrIs. We also obtain the coverage probability, that is, the proportion of times that the interval contains the true value. To obtain the Bayes estimators, we take (γ, β) = (4.0, 3.0), and the shape parameter q of the general entropy loss function is −0.5 or 1.0. Tables 3 and 4 present the simulation results.
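The quantities reported in Tables 3 and 4 are straightforward to compute once the replicated estimates and intervals are stored; a small bookkeeping helper of ours:

```python
import numpy as np

def mc_summary(estimates, lower, upper, true_sigma=1.0):
    """Bias, MSE, average interval length, and coverage over MC replications."""
    est, lo, up = (np.asarray(v, dtype=float) for v in (estimates, lower, upper))
    return {"bias": est.mean() - true_sigma,
            "mse": ((est - true_sigma) ** 2).mean(),
            "avg_length": (up - lo).mean(),
            "coverage": ((lo <= true_sigma) & (true_sigma <= up)).mean()}

# two fake replications, just to show the bookkeeping
res = mc_summary([1.1, 0.9], [0.5, 1.2], [1.5, 1.4])
```

In the actual study each row of Tables 3 and 4 would aggregate 5,000 such replications per estimator and censoring scheme.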

Table 3 shows that the MSE decreases as the sample size n increases. For fixed sample size n, the MSE generally decreases as the predetermined observation number r increases. For fixed n and r, the MSE of the estimators generally decreases as the prefixed experiment end time T increases. The MSE of the estimators also generally decreases as the shape parameter λ increases. The Bayes estimators are more efficient than the other estimators in the sense of MSE, and the AMLE $\hat{\sigma}_1$ is generally more efficient than the MLE $\tilde{\sigma}$. In particular, the Bayes estimator $\hat{\sigma}_G$ (q = 1.0) is the most efficient in the sense of MSE.

Table 4 shows that the average lengths of the intervals decrease and the coverage probabilities increase as the shape parameter λ increases. The CrI is shorter than the corresponding CIs based on the other estimators, except for the HPD interval. The CI based on the AMLE $\hat{\sigma}_1$ is also generally shorter than the corresponding CI based on the MLE $\tilde{\sigma}$.

7. Conclusion

In this paper, we derive some estimators of the scale parameter of the exponentiated half-logistic distribution based on a multiply Type-I hybrid censored sample. We obtain the MLE of σ using the Newton-Raphson method and propose AMLEs of σ based on two different types of Taylor series expansion. We also obtain Bayes estimators under the proposed prior distribution. The simulation results indicate that the proposed estimators generally perform well. In particular, the Bayes estimators perform better than the MLE and AMLEs.

Fig. 1. Multiply Type-I hybrid censoring scheme.

### Table 1

Failure log times to breakdown of an insulating fluid testing experiment

 0.270027 1.02245 1.15057 1.42311 1.54116 1.57898 1.8718 1.9947 2.08069 2.11263 2.48989 3.45789 3.48186 3.52371 3.60305 4.28895

### Table 2

The MLE and AMLEs of σ when λ = 1

$\tilde{\sigma}$ $\hat{\sigma}_1$ $\hat{\sigma}_2$ $\hat{\sigma}_B$ $\hat{\sigma}_G (q=-0.5)$ $\hat{\sigma}_G (q=1.0)$ $\hat{\sigma}_{MCMC}$
1.677980 1.649200 1.676602 1.575150 1.553874 1.494635 1.496206

MLE = maximum likelihood estimator; AMLEs = approximate maximum likelihood estimators.

### Table 3

The relative mean squared errors (biases) of the proposed estimators of σ

λ = 1

n T ai r $\tilde{\sigma}$ $\hat{\sigma}_1$ $\hat{\sigma}_2$ $\hat{\sigma}_S$ $\hat{\sigma}_G (q=-0.5)$ $\hat{\sigma}_G (q=1.0)$
20 2.5 1~5, 7~20 14 0.0485(−0.0077) 0.0481(−0.0134) 0.0486(−0.0065) 0.0373(−0.0043) 0.0366(−0.0143) 0.0360(−0.0430)
13 0.0499(−0.0109) 0.0496(−0.0165) 0.0500(−0.0101) 0.0379(−0.0062) 0.0373(−0.0169) 0.0370(−0.0470)
12 0.0528(−0.0136) 0.0524(−0.0190) 0.0528(−0.0131) 0.0393(−0.0078) 0.0387(−0.0192) 0.0386(−0.0513)

1~8, 11~20 13 0.0492(−0.0076) 0.0487(−0.0131) 0.0492(−0.0067) 0.0376(−0.0042) 0.0369(−0.0143) 0.0363(−0.0430)
12 0.0506(−0.0108) 0.0502(−0.0161) 0.0506(−0.0102) 0.0382(−0.0062) 0.0376(−0.0169) 0.0372(−0.0471)
11 0.0534(−0.0135) 0.0530(−0.0186) 0.0534(−0.0131) 0.0397(−0.0078) 0.0391(−0.0192) 0.0389(−0.0513)

1~6, 8~9, 12~20 12 0.0496(−0.0074) 0.0492(−0.0128) 0.0496(−0.0070) 0.0379(−0.0041) 0.0372(−0.0142) 0.0365(−0.0429)
11 0.0510(−0.0106) 0.0506(−0.0158) 0.0510(−0.0104) 0.0385(−0.0061) 0.0379(−0.0168) 0.0374(−0.0470)
10 0.0538(−0.0133) 0.0534(−0.0183) 0.0538(−0.0133) 0.0399(−0.0077) 0.0393(−0.0191) 0.0391(−0.0512)

3 1~5, 7~20 14 0.0452(−0.0111) 0.0448(−0.0167) 0.0453(−0.0098) 0.0350(−0.0068) 0.0346(−0.0168) 0.0344(−0.0451)
13 0.0479(−0.0124) 0.0476(−0.0179) 0.0480(−0.0116) 0.0366(−0.0073) 0.0361(−0.0179) 0.0360(−0.0480)
12 0.0517(−0.0142) 0.0514(−0.0196) 0.0517(−0.0137) 0.0387(−0.0082) 0.0381(−0.0196) 0.0381(−0.0516)

1~8, 11~20 13 0.0452(−0.0111) 0.0449(−0.0166) 0.0452(−0.0102) 0.0350(−0.0070) 0.0346(−0.0169) 0.0344(−0.0453)
12 0.0480(−0.0125) 0.0477(−0.0178) 0.0480(−0.0119) 0.0366(−0.0075) 0.0361(−0.0181) 0.0360(−0.0481)
11 0.0518(−0.0143) 0.0514(−0.0194) 0.0518(−0.0139) 0.0387(−0.0084) 0.0382(−0.0197) 0.0381(−0.0518)

1~6, 8~9, 12~20 12 0.0452(−0.0112) 0.0449(−0.0165) 0.0452(−0.0107) 0.0350(−0.0070) 0.0346(−0.0170) 0.0344(−0.0453)
11 0.0479(−0.0126) 0.0476(−0.0177) 0.0479(−0.0124) 0.0366(−0.0075) 0.0361(−0.0181) 0.0360(−0.0482)
10 0.0517(−0.0144) 0.0514(−0.0193) 0.0517(−0.0143) 0.0386(−0.0084) 0.0381(−0.0198) 0.0381(−0.0519)

40 2.5 1~10, 12~40 35 0.0212(0.0036) 0.0211(0.0007) 0.0212(0.0048) 0.0189(0.0046) 0.0186(−0.0003) 0.0181(−0.0144)
32 0.0220(−0.0002) 0.0219(−0.0032) 0.0220(0.0008) 0.0195(0.0014) 0.0193(−0.0036) 0.0189(−0.0181)
29 0.0229(−0.0035) 0.0228(−0.0064) 0.0229(−0.0027) 0.0201(−0.0009) 0.0199(−0.0063) 0.0197(−0.0219)

1~12, 16~40 35 0.0209(0.0045) 0.0207(0.0017) 0.0208(0.0055) 0.0186(0.0054) 0.0184(0.0006) 0.0178(−0.0136)
32 0.0214(0.0027) 0.0213(−0.0002) 0.0214(0.0036) 0.0191(0.0038) 0.0188(−0.0010) 0.0183(−0.0153)
29 0.0223(−0.0014) 0.0222(−0.0043) 0.0223(−0.0008) 0.0197(0.0005) 0.0195(−0.0046) 0.0192(−0.0194)

1~8, 10~12, 15~40 35 0.0209(0.0045) 0.0207(0.0017) 0.0208(0.0055) 0.0186(0.0054) 0.0183(0.0006) 0.0178(−0.0136)
32 0.0214(0.0027) 0.0213(−0.0002) 0.0214(0.0036) 0.0191(0.0038) 0.0188(−0.0010) 0.0183(−0.0153)
29 0.0223(−0.0015) 0.0222(−0.0044) 0.0223(−0.0008) 0.0197(0.0005) 0.0195(−0.0046) 0.0192(−0.0194)

3 1~10, 12~40 35 0.0199(−0.0003) 0.0198(−0.0031) 0.0199(0.0013) 0.0178(0.0009) 0.0176(−0.0037) 0.0174(−0.0172)
32 0.0207(−0.0034) 0.0206(−0.0063) 0.0207(−0.0022) 0.0184(−0.0015) 0.0182(−0.0064) 0.0181(−0.0206)
29 0.0225(−0.0041) 0.0224(−0.0070) 0.0225(−0.0033) 0.0197(−0.0014) 0.0196(−0.0068) 0.0194(−0.0223)

1~12, 16~40 35 0.0194(0.0020) 0.0193(−0.0007) 0.0194(0.0035) 0.0174(0.0029) 0.0172(−0.0016) 0.0169(−0.0150)
32 0.0201(−0.0016) 0.0200(−0.0044) 0.0201(−0.0004) 0.0179(−0.0002) 0.0178(−0.0048) 0.0175(−0.0185)
29 0.0212(−0.0036) 0.0212(−0.0064) 0.0213(−0.0028) 0.0188(−0.0014) 0.0187(−0.0064) 0.0185(−0.0211)

1~8, 10~12, 15~40 35 0.0194(0.0020) 0.0193(−0.0007) 0.0194(0.0035) 0.0174(0.0029) 0.0172(−0.0016) 0.0169(−0.0150)
32 0.0201(−0.0016) 0.0200(−0.0044) 0.0201(−0.0004) 0.0179(−0.0002) 0.0178(−0.0048) 0.0175(−0.0185)
29 0.0212(−0.0036) 0.0212(−0.0065) 0.0213(−0.0028) 0.0188(−0.0014) 0.0187(−0.0064) 0.0185(−0.0211)

λ = 2

n T $a_i$ r $\tilde{σ}$ $\hat{σ}_1$ $\hat{σ}_2$ $\hat{σ}_S$ $\hat{σ}_G (q=-0.5)$ $\hat{σ}_G (q=1.0)$

20 2.5 1~5, 7~20 14 0.0270(0.0069) 0.0268(0.0038) 0.0283(0.0211) 0.0235(0.0065) 0.0230(0.0008) 0.0222(−0.0157)
13 0.0275(0.0040) 0.0273(0.0009) 0.0288(0.0180) 0.0238(0.0041) 0.0234(−0.0017) 0.0227(−0.0184)
12 0.0280(0.0004) 0.0278(−0.0027) 0.0292(0.0143) 0.0241(0.0013) 0.0238(−0.0047) 0.0232(−0.0219)

1~8, 11~20 13 0.0317(0.0117) 0.0312(0.0086) 0.0343(0.0269) 0.0273(0.0108) 0.0267(0.0050) 0.0254(−0.0118)
12 0.0322(0.0088) 0.0317(0.0056) 0.0347(0.0238) 0.0276(0.0083) 0.0270(0.0025) 0.0259(−0.0145)
11 0.0327(0.0052) 0.0323(0.0020) 0.0352(0.0201) 0.0279(0.0055) 0.0274(−0.0005) 0.0264(−0.0180)

1~6, 8~9, 12~20 12 0.0344(0.0166) 0.0338(0.0133) 0.0373(0.0315) 0.0296(0.0151) 0.0289(0.0093) 0.0273(−0.0077)
11 0.0349(0.0136) 0.0343(0.0103) 0.0377(0.0285) 0.0299(0.0127) 0.0293(0.0068) 0.0278(−0.0104)
10 0.0354(0.0100) 0.0349(0.0067) 0.0381(0.0248) 0.0303(0.0099) 0.0297(0.0038) 0.0283(−0.0139)

3 1~5, 7~20 14 0.0245(0.0002) 0.0243(−0.0027) 0.0255(0.0142) 0.0214(0.0003) 0.0212(−0.0051) 0.0208(−0.0207)
13 0.0249(−0.0017) 0.0248(−0.0047) 0.0258(0.0121) 0.0217(−0.0011) 0.0215(−0.0066) 0.0212(−0.0227)
12 0.0258(−0.0034) 0.0257(−0.0064) 0.0268(0.0102) 0.0224(−0.0021) 0.0222(−0.0078) 0.0220(−0.0246)

1~8, 11~20 13 0.0248(0.0004) 0.0247(−0.0024) 0.0259(0.0145) 0.0217(0.0004) 0.0214(−0.0049) 0.0210(−0.0205)
12 0.0252(−0.0015) 0.0251(−0.0044) 0.0263(0.0124) 0.0219(−0.0009) 0.0217(−0.0065) 0.0214(−0.0226)
11 0.0262(−0.0032) 0.0260(−0.0061) 0.0272(0.0107) 0.0227(−0.0019) 0.0224(−0.0077) 0.0222(−0.0245)

1~6, 8~9, 12~20 12 0.0257(0.0011) 0.0255(−0.0017) 0.0270(0.0148) 0.0224(0.0010) 0.0221(−0.0043) 0.0217(−0.0200)
11 0.0261(−0.0009) 0.0259(−0.0038) 0.0273(0.0127) 0.0227(−0.0003) 0.0224(−0.0059) 0.0220(−0.0221)
10 0.0270(−0.0025) 0.0268(−0.0055) 0.0282(0.0110) 0.0234(−0.0013) 0.0231(−0.0071) 0.0228(−0.0240)

40 2.5 1~10, 12~40 35 0.0117 (0.0058) 0.0117 (0.0042) 0.0122 (0.0152) 0.0110 (0.0059) 0.0109 (0.0030) 0.0106 (−0.0054)
32 0.0118 (0.0056) 0.0117 (0.0040) 0.0123 (0.0151) 0.0111 (0.0057) 0.0110 (0.0029) 0.0107 (−0.0055)
29 0.0122 (0.0040) 0.0121 (0.0024) 0.0127 (0.0134) 0.0114 (0.0043) 0.0113 (0.0014) 0.0111 (−0.0071)

1~12, 16~40 35 0.0118 (0.0058) 0.0117 (0.0043) 0.0122 (0.0150) 0.0110 (0.0059) 0.0109 (0.0031) 0.0106 (−0.0053)
32 0.0118 (0.0058) 0.0117 (0.0042) 0.0122 (0.0150) 0.0110 (0.0059) 0.0109 (0.0031) 0.0107 (−0.0054)
29 0.0119 (0.0053) 0.0119 (0.0037) 0.0124 (0.0144) 0.0112 (0.0054) 0.0111 (0.0025) 0.0108 (−0.0059)

1~8, 10~12, 15~40 35 0.0117 (0.0058) 0.0117 (0.0042) 0.0122 (0.0148) 0.0110 (0.0059) 0.0109 (0.0031) 0.0106 (−0.0053)
32 0.0118 (0.0058) 0.0117 (0.0042) 0.0122 (0.0147) 0.0110 (0.0059) 0.0109 (0.0030) 0.0106 (−0.0054)
29 0.0119 (0.0052) 0.0118 (0.0036) 0.0124 (0.0142) 0.0112 (0.0054) 0.0111 (0.0025) 0.0108 (−0.0059)

3 1~10, 12~40 35 0.0108 (0.0029) 0.0108 (0.0015) 0.0112 (0.0124) 0.0102 (0.0027) 0.0101 (0.0001) 0.0100 (−0.0076)
32 0.0111 (0.0017) 0.0110 (0.0002) 0.0115 (0.0111) 0.0104 (0.0016) 0.0103 (−0.0010) 0.0102 (−0.0088)
29 0.0113 (0.0001) 0.0113 (−0.0015) 0.0117 (0.0094) 0.0106 (0.0003) 0.0106 (−0.0024) 0.0105 (−0.0105)

1~12, 16~40 35 0.0108 (0.0031) 0.0107 (0.0018) 0.0112 (0.0123) 0.0102 (0.0029) 0.0101 (0.0003) 0.0099 (−0.0074)
32 0.0109 (0.0027) 0.0108 (0.0014) 0.0113 (0.0119) 0.0103 (0.0026) 0.0102 (0.0001) 0.0100 (−0.0077)
29 0.0112 (0.0011) 0.0111 (−0.0004) 0.0115 (0.0101) 0.0105 (0.0011) 0.0104 (−0.0015) 0.0103 (−0.0094)

1~8, 10~12, 15~40 35 0.0108 (0.0031) 0.0107 (0.0018) 0.0112 (0.0121) 0.0102 (0.0029) 0.0101 (0.0003) 0.0099 (−0.0074)
32 0.0109 (0.0027) 0.0108 (0.0014) 0.0113 (0.0117) 0.0103 (0.0026) 0.0102 (0.0001) 0.0100 (−0.0077)
29 0.0112 (0.0011) 0.0111 (−0.0004) 0.0115 (0.0100) 0.0105 (0.0011) 0.0104 (−0.0015) 0.0103 (−0.0094)

λ = 3

n T $a_i$ r $\tilde{σ}$ $\hat{σ}_1$ $\hat{σ}_2$ $\hat{σ}_S$ $\hat{σ}_G (q=-0.5)$ $\hat{σ}_G (q=1.0)$

20 2.5 1~5, 7~20 14 0.0201(0.0108) 0.0199(0.0087) 0.0214(0.0232) 0.0182(0.0097) 0.0178(0.0054) 0.0170(−0.0074)
13 0.0204(0.0099) 0.0202(0.0078) 0.0217(0.0222) 0.0184(0.0090) 0.0181(0.0046) 0.0173(−0.0082)
12 0.0209(0.0077) 0.0207(0.0056) 0.0221(0.0200) 0.0188(0.0071) 0.0185(0.0027) 0.0178(−0.0102)

1~8, 11~20 13 0.0255(0.0251) 0.0250(0.0227) 0.0282(0.0399) 0.0229(0.0232) 0.0224(0.0187) 0.0210(0.0054)
12 0.0258(0.0242) 0.0253(0.0218) 0.0285(0.0390) 0.0232(0.0224) 0.0226(0.0179) 0.0212(0.0046)
11 0.0262(0.0221) 0.0258(0.0196) 0.0290(0.0367) 0.0236(0.0206) 0.0230(0.0160) 0.0217(0.0026)

1~6, 8~9, 12~20 13 0.0253(0.0304) 0.0249(0.0280) 0.0279(0.0448) 0.0229(0.0284) 0.0223(0.0238) 0.0207(0.0104)
12 0.0256(0.0295) 0.0252(0.0271) 0.0282(0.0439) 0.0231(0.0276) 0.0225(0.0230) 0.0210(0.0095)
11 0.0261(0.0274) 0.0257(0.0249) 0.0286(0.0417) 0.0235(0.0257) 0.0229(0.0211) 0.0215(0.0076)

3 1~5, 7~20 14 0.0173(0.0038) 0.0172(0.0021) 0.0181(0.0160) 0.0158(0.0027) 0.0156(−0.0012) 0.0152(−0.0127)
13 0.0175(0.0022) 0.0174(0.0005) 0.0183(0.0143) 0.0159(0.0014) 0.0157(−0.0025) 0.0155(−0.0142)
12 0.0179(0.0006) 0.0178(−0.0013) 0.0186(0.0123) 0.0161(0.0001) 0.0160(−0.0039) 0.0158(−0.0159)

1~8, 11~20 13 0.0191(0.0060) 0.0189(0.0043) 0.0204(0.0188) 0.0173(0.0047) 0.0171(0.0008) 0.0166(−0.0108)
12 0.0193(0.0044) 0.0191(0.0027) 0.0205(0.0170) 0.0174(0.0035) 0.0172(−0.0005) 0.0168(−0.0123)
11 0.0196(0.0027) 0.0194(0.0009) 0.0209(0.0151) 0.0177(0.0022) 0.0175(−0.0019) 0.0171(−0.0140)

1~6, 8~9, 12~20 12 0.0201(0.0082) 0.0199(0.0065) 0.0215(0.0208) 0.0182(0.0068) 0.0180(0.0029) 0.0174(−0.0087)
11 0.0203(0.0067) 0.0201(0.0048) 0.0217(0.0191) 0.0183(0.0055) 0.0181(0.0015) 0.0176(−0.0103)
10 0.0206(0.0050) 0.0204(0.0030) 0.0220(0.0172) 0.0186(0.0043) 0.0184(0.0001) 0.0179(−0.0120)

40 2.5 1~10, 12~40 35 0.0090(0.0062) 0.0089(0.0051) 0.0093(0.0137) 0.0086(0.0060) 0.0085(0.0039) 0.0083(−0.0026)
32 0.0090(0.0062) 0.0089(0.0051) 0.0093(0.0137) 0.0086(0.0060) 0.0085(0.0039) 0.0083(−0.0026)
29 0.0090(0.0061) 0.0090(0.0050) 0.0094(0.0136) 0.0086(0.0059) 0.0085(0.0038) 0.0083(−0.0027)

1~12, 16~40 35 0.0091(0.0064) 0.0091(0.0053) 0.0095(0.0137) 0.0087(0.0062) 0.0086(0.0041) 0.0084(−0.0024)
32 0.0091(0.0064) 0.0091(0.0053) 0.0095(0.0137) 0.0087(0.0062) 0.0086(0.0041) 0.0084(−0.0024)
29 0.0091(0.0064) 0.0091(0.0053) 0.0095(0.0137) 0.0087(0.0062) 0.0086(0.0040) 0.0084(−0.0024)

1~8, 10~12, 15~40 35 0.0090(0.0062) 0.0090(0.0052) 0.0093(0.0134) 0.0086(0.0061) 0.0085(0.0039) 0.0083(−0.0025)
32 0.0090(0.0062) 0.0090(0.0052) 0.0093(0.0134) 0.0086(0.0061) 0.0085(0.0039) 0.0083(−0.0025)
29 0.0090(0.0062) 0.0090(0.0051) 0.0094(0.0134) 0.0086(0.0061) 0.0085(0.0039) 0.0083(−0.0025)

3 1~10, 12~40 35 0.0079(0.0038) 0.0078(0.0030) 0.0081(0.0115) 0.0075(0.0033) 0.0075(0.0014) 0.0074(−0.0044)
32 0.0079(0.0036) 0.0079(0.0028) 0.0082(0.0112) 0.0076(0.0031) 0.0075(0.0011) 0.0074(−0.0046)
29 0.0081(0.0024) 0.0081(0.0016) 0.0084(0.0100) 0.0078(0.0020) 0.0077(0.0001) 0.0076(−0.0058)

1~12, 16~40 35 0.0079(0.0039) 0.0078(0.0031) 0.0081(0.0113) 0.0075(0.0034) 0.0075(0.0014) 0.0074(−0.0043)
32 0.0079(0.0039) 0.0078(0.0031) 0.0081(0.0113) 0.0075(0.0033) 0.0075(0.0014) 0.0074(−0.0044)
29 0.0080(0.0033) 0.0080(0.0024) 0.0082(0.0107) 0.0077(0.0028) 0.0076(0.0008) 0.0075(−0.0050)

1~8, 10~12, 15~40 35 0.0079(0.0039) 0.0078(0.0031) 0.0081(0.0112) 0.0075(0.0033) 0.0075(0.0014) 0.0074(−0.0044)
32 0.0079(0.0039) 0.0078(0.0030) 0.0081(0.0111) 0.0075(0.0033) 0.0075(0.0014) 0.0074(−0.0044)
29 0.0080(0.0033) 0.0080(0.0024) 0.0082(0.0105) 0.0076(0.0027) 0.0076(0.0008) 0.0075(−0.0050)

### Table 4

The coverage probabilities (average lengths in parentheses) of the 95% intervals

λ = 1

n T $a_i$ r CI ($\tilde{σ}$) CI ($\hat{σ}_1$) CI ($\hat{σ}_2$) CrI HPD CrI
20 2.5 1~5, 7~20 14 0.9194(0.8336) 0.9174(0.8328) 0.9182(0.8286) 0.9414(0.7534) 0.9202(0.7223)
13 0.9134(0.8585) 0.9116(0.8590) 0.9132(0.8539) 0.9430(0.7719) 0.9160(0.7386)
12 0.9144(0.8905) 0.9122(0.8919) 0.9134(0.8864) 0.9422(0.7946) 0.9170(0.7583)

1~8, 11~20 13 0.9182(0.8343) 0.9170(0.8335) 0.9180(0.8295) 0.9412(0.7537) 0.9190(0.7227)
12 0.9126(0.8592) 0.9108(0.8597) 0.9120(0.8551) 0.9426(0.7724) 0.9154(0.7390)
11 0.9134(0.8912) 0.9114(0.8927) 0.9124(0.8879) 0.9428(0.7950) 0.9176(0.7587)

1~6, 8~9, 12~20 12 0.9186(0.8348) 0.9172(0.8340) 0.9182(0.8300) 0.9424(0.7541) 0.9222(0.7231)
11 0.9126(0.8598) 0.9110(0.8602) 0.9124(0.8558) 0.9422(0.7728) 0.9162(0.7393)
10 0.9142(0.8918) 0.9120(0.8932) 0.9136(0.8887) 0.9422(0.7952) 0.9186(0.7588)

3 1~5, 7~20 14 0.9190(0.8257) 0.9170(0.8255) 0.9176(0.8204) 0.9432(0.7482) 0.9196(0.7175)
13 0.9130(0.8552) 0.9112(0.8559) 0.9128(0.8505) 0.9444(0.7698) 0.9158(0.7367)
12 0.9142(0.8892) 0.9122(0.8908) 0.9130(0.8851) 0.9432(0.7938) 0.9172(0.7576)

1~8, 11~20 13 0.9178(0.8260) 0.9164(0.8259) 0.9176(0.8211) 0.9424(0.7484) 0.9196(0.7178)
12 0.9122(0.8556) 0.9102(0.8563) 0.9116(0.8514) 0.9440(0.7702) 0.9156(0.7369)
11 0.9132(0.8896) 0.9112(0.8912) 0.9122(0.8862) 0.9436(0.7940) 0.9180(0.7578)

1~6, 8~9, 12~20 12 0.9182(0.8262) 0.9166(0.8260) 0.9178(0.8212) 0.9438(0.7484) 0.9216(0.7179)
11 0.9124(0.8557) 0.9106(0.8565) 0.9122(0.8517) 0.9420(0.7702) 0.9162(0.7370)
10 0.9142(0.8898) 0.9120(0.8914) 0.9136(0.8867) 0.9428(0.7940) 0.9192(0.7577)

40 2.5 1~10, 12~40 35 0.9378(0.5624) 0.9360(0.5611) 0.9376(0.5610) 0.9470(0.5353) 0.9358(0.5229)
32 0.9368(0.5690) 0.9358(0.5681) 0.9368(0.5671) 0.9472(0.5414) 0.9348(0.5285)
29 0.9308(0.5899) 0.9306(0.5897) 0.9310(0.5879) 0.9442(0.5596) 0.9292(0.5454)

1~12, 16~40 35 0.9382(0.5621) 0.9374(0.5608) 0.9388(0.5607) 0.9468(0.5350) 0.9350(0.5226)
32 0.9366(0.5635) 0.9354(0.5622) 0.9374(0.5617) 0.9462(0.5363) 0.9340(0.5238)
29 0.9354(0.5745) 0.9352(0.5738) 0.9352(0.5724) 0.9454(0.5461) 0.9342(0.5329)

1~8, 10~12, 15~40 35 0.9384(0.5621) 0.9372(0.5607) 0.9388(0.5607) 0.9476(0.5349) 0.9352(0.5225)
32 0.9364(0.5634) 0.9350(0.5622) 0.9366(0.5617) 0.9468(0.5363) 0.9346(0.5238)
29 0.9354(0.5744) 0.9352(0.5738) 0.9352(0.5723) 0.9454(0.5461) 0.9344(0.5328)

3 1~10, 12~40 35 0.9370(0.5458) 0.9350(0.5445) 0.9364(0.5435) 0.9456(0.5209) 0.9338(0.5094)
32 0.9360(0.5616) 0.9344(0.5610) 0.9354(0.5592) 0.9464(0.5352) 0.9334(0.5226)
29 0.9310(0.5888) 0.9310(0.5887) 0.9306(0.5867) 0.9440(0.5587) 0.9286(0.5445)

1~12, 16~40 35 0.9354(0.5431) 0.9350(0.5417) 0.9356(0.5411) 0.9450(0.5184) 0.9322(0.5070)
32 0.9358(0.5494) 0.9344(0.5484) 0.9358(0.5469) 0.9450(0.5243) 0.9322(0.5124)
29 0.9346(0.5699) 0.9342(0.5695) 0.9338(0.5676) 0.9456(0.5424) 0.9330(0.5294)

1~8, 10~12, 15~40 35 0.9354(0.5431) 0.9348(0.5417) 0.9356(0.5411) 0.9446(0.5183) 0.9310(0.5070)
32 0.9356(0.5494) 0.9340(0.5484) 0.9350(0.5469) 0.9454(0.5243) 0.9320(0.5125)
29 0.9346(0.5699) 0.9342(0.5695) 0.9338(0.5675) 0.9446(0.5423) 0.9332(0.5293)

λ = 2

n T $a_i$ r CI ($\tilde{σ}$) CI ($\hat{σ}_1$) CI ($\hat{σ}_2$) CrI HPD CrI

20 2.5 1~5, 7~20 14 0.9384(0.6119) 0.9358(0.6085) 0.9460(0.6248) 0.9448(0.5776) 0.9314(0.5630)
13 0.9364(0.6160) 0.9334(0.6128) 0.9438(0.6280) 0.9436(0.5815) 0.9298(0.5664)
12 0.9334(0.6235) 0.9318(0.6206) 0.9436(0.6346) 0.9438(0.5880) 0.9298(0.5724)

1~8, 11~20 13 0.9304(0.6183) 0.9254(0.6141) 0.9418(0.6341) 0.9248(0.5825) 0.9184(0.5677)
12 0.9284(0.6224) 0.9238(0.6184) 0.9408(0.6374) 0.9280(0.5865) 0.9148(0.5711)
11 0.9254(0.6299) 0.9210(0.6263) 0.9396(0.6443) 0.9254(0.5931) 0.9152(0.5771)

1~6, 8~9, 12~20 12 0.9352(0.6240) 0.9318(0.6192) 0.9424(0.6404) 0.9228(0.5873) 0.9248(0.5722)
11 0.9322(0.6281) 0.9298(0.6235) 0.9418(0.6438) 0.9232(0.5912) 0.9244(0.5756)
10 0.9304(0.6356) 0.9274(0.6314) 0.9386(0.6508) 0.9232(0.5979) 0.9236(0.5816)

3 1~5, 7~20 14 0.9364(0.5907) 0.9344(0.5875) 0.9442(0.5991) 0.9440(0.5599) 0.9296(0.5465)
13 0.9344(0.6007) 0.9320(0.5978) 0.9424(0.6090) 0.9430(0.5689) 0.9280(0.5548)
12 0.9316(0.6141) 0.9304(0.6115) 0.9418(0.6228) 0.9442(0.5805) 0.9284(0.5654)

1~8, 11~20 13 0.9370(0.5913) 0.9336(0.5881) 0.9432(0.5999) 0.9432(0.5604) 0.9312(0.5470)
12 0.9350(0.6013) 0.9320(0.5984) 0.9422(0.6100) 0.9448(0.5694) 0.9284(0.5552)
11 0.9318(0.6148) 0.9294(0.6121) 0.9408(0.6239) 0.9432(0.5810) 0.9288(0.5659)

1~6, 8~9, 12~20 12 0.9348(0.5923) 0.9322(0.5889) 0.9404(0.6006) 0.9424(0.5613) 0.9296(0.5478)
11 0.9318(0.6024) 0.9302(0.5992) 0.9400(0.6107) 0.9410(0.5700) 0.9280(0.5561)
10 0.9300(0.6158) 0.9278(0.6130) 0.9372(0.6248) 0.9440(0.5819) 0.9284(0.5667)

40 2.5 1~10, 12~40 35 0.9484(0.4257) 0.9478(0.4245) 0.9538(0.4324) 0.9494(0.4136) 0.9438(0.4072)
32 0.9500(0.4258) 0.9490(0.4246) 0.9538(0.4324) 0.9492(0.4137) 0.9436(0.4073)
29 0.9456(0.4270) 0.9442(0.4258) 0.9488(0.4332) 0.9464(0.4150) 0.9400(0.4085)

1~12, 16~40 35 0.9486(0.4258) 0.9486(0.4246) 0.9540(0.4320) 0.9500(0.4137) 0.9434(0.4072)
32 0.9488(0.4258) 0.9486(0.4246) 0.9538(0.4320) 0.9498(0.4137) 0.9434(0.4072)
29 0.9490(0.4260) 0.9484(0.4248) 0.9532(0.4320) 0.9500(0.4139) 0.9434(0.4075)

1~8, 10~12, 15~40 35 0.9486(0.4257) 0.9480(0.4245) 0.9532(0.4318) 0.9500(0.4137) 0.9428(0.4072)
32 0.9488(0.4257) 0.9480(0.4245) 0.9530(0.4318) 0.9504(0.4137) 0.9426(0.4072)
29 0.9494(0.4259) 0.9472(0.4247) 0.9530(0.4318) 0.9488(0.4139) 0.9446(0.4074)

3 1~10, 12~40 35 0.9458(0.4064) 0.9456(0.4053) 0.9520(0.4115) 0.9480(0.3955) 0.9414(0.3899)
32 0.9464(0.4083) 0.9450(0.4071) 0.9516(0.4129) 0.9480(0.3973) 0.9410(0.3916)
29 0.9428(0.4158) 0.9416(0.4146) 0.9468(0.4200) 0.9464(0.4047) 0.9396(0.3986)

1~12, 16~40 35 0.9462(0.4064) 0.9458(0.4053) 0.9512(0.4112) 0.9496(0.3955) 0.9416(0.3898)
32 0.9456(0.4067) 0.9450(0.4056) 0.9512(0.4114) 0.9472(0.3959) 0.9416(0.3902)
29 0.9468(0.4101) 0.9458(0.4089) 0.9506(0.4141) 0.9486(0.3991) 0.9418(0.3933)

1~8, 10~12, 15~40 35 0.9460(0.4063) 0.9458(0.4052) 0.9514(0.4110) 0.9494(0.3955) 0.9398(0.3899)
32 0.9468(0.4067) 0.9458(0.4056) 0.9508(0.4112) 0.9482(0.3960) 0.9406(0.3903)
29 0.9474(0.4100) 0.9454(0.4088) 0.9502(0.4139) 0.9476(0.3991) 0.9432(0.3933)

λ = 3

n T $a_i$ r CI ($\tilde{σ}$) CI ($\hat{σ}_1$) CI ($\hat{σ}_2$) CrI HPD CrI

20 2.5 1~5, 7~20 14 0.9446(0.5315) 0.9422(0.5287) 0.9528(0.5475) 0.9450(0.5087) 0.9376(0.4982)
13 0.9458(0.5322) 0.9434(0.5293) 0.9534(0.5477) 0.9460(0.5093) 0.9380(0.4988)
12 0.9410(0.5336) 0.9394(0.5307) 0.9506(0.5482) 0.9438(0.5107) 0.9342(0.5001)

1~8, 11~20 13 0.9454(0.5465) 0.9428(0.5422) 0.9536(0.5712) 0.9420(0.5219) 0.9390(0.5108)
12 0.9462(0.5471) 0.9438(0.5429) 0.9542(0.5714) 0.9424(0.5225) 0.9386(0.5114)
11 0.9410(0.5485) 0.9396(0.5442) 0.9502(0.5719) 0.9412(0.5238) 0.9344(0.5126)

1~6, 8~9, 12~20 12 0.9418(0.5513) 0.9396(0.5469) 0.9526(0.5759) 0.9416(0.5262) 0.9362(0.5151)
11 0.9434(0.5520) 0.9412(0.5475) 0.9526(0.5761) 0.9408(0.5268) 0.9376(0.5157)
10 0.9392(0.5534) 0.9368(0.5489) 0.9496(0.5768) 0.9398(0.5282) 0.9342(0.5168)

3 1~5, 7~20 14 0.9438(0.5005) 0.9406(0.4979) 0.9500(0.5108) 0.9464(0.4810) 0.9364(0.4723)
13 0.9446(0.5045) 0.9422(0.5019) 0.9506(0.5141) 0.9462(0.4848) 0.9382(0.4758)
12 0.9402(0.5111) 0.9384(0.5085) 0.9478(0.5202) 0.9440(0.4910) 0.9340(0.4816)

1~8, 11~20 13 0.9358(0.5031) 0.9320(0.5003) 0.9426(0.5149) 0.9388(0.4832) 0.9282(0.4743)
12 0.9358(0.5071) 0.9334(0.5042) 0.9432(0.5183) 0.9376(0.4870) 0.9290(0.4780)
11 0.9308(0.5137) 0.9294(0.5108) 0.9392(0.5245) 0.9368(0.4932) 0.9244(0.4837)

1~6, 8~9, 12~20 12 0.9330(0.5053) 0.9308(0.5023) 0.9446(0.5172) 0.9308(0.4852) 0.9264(0.4763)
11 0.9354(0.5093) 0.9334(0.5063) 0.9448(0.5206) 0.9300(0.4890) 0.9300(0.4799)
10 0.9314(0.5159) 0.9290(0.5128) 0.9418(0.5269) 0.9290(0.4950) 0.9266(0.4855)

40 2.5 1~10, 12~40 35 0.9496(0.3704) 0.9492(0.3694) 0.9552(0.3771) 0.9478(0.3624) 0.9466(0.3577)
32 0.9496(0.3704) 0.9492(0.3694) 0.9552(0.3771) 0.9480(0.3624) 0.9466(0.3577)
29 0.9490(0.3704) 0.9488(0.3695) 0.9546(0.3771) 0.9474(0.3624) 0.9448(0.3577)

1~12, 16~40 35 0.9496(0.3706) 0.9490(0.3697) 0.9552(0.3770) 0.9486(0.3627) 0.9478(0.3580)
32 0.9496(0.3706) 0.9490(0.3697) 0.9552(0.3770) 0.9486(0.3627) 0.9478(0.3580)
29 0.9494(0.3706) 0.9488(0.3697) 0.9548(0.3770) 0.9488(0.3627) 0.9476(0.3580)

1~8, 10~12, 15~40 35 0.9500(0.3705) 0.9490(0.3695) 0.9546(0.3766) 0.9490(0.3625) 0.9442(0.3578)
32 0.9500(0.3705) 0.9490(0.3695) 0.9546(0.3766) 0.9490(0.3625) 0.9442(0.3578)
29 0.9500(0.3705) 0.9490(0.3695) 0.9542(0.3766) 0.9486(0.3625) 0.9440(0.3578)

3 1~10, 12~40 35 0.9526(0.3493) 0.9516(0.3485) 0.9588(0.3546) 0.9516(0.3424) 0.9478(0.3384)
32 0.9510(0.3495) 0.9502(0.3487) 0.9568(0.3547) 0.9522(0.3425) 0.9464(0.3385)
29 0.9482(0.3511) 0.9474(0.3501) 0.9534(0.3557) 0.9498(0.3440) 0.9464(0.3400)

1~12, 16~40 35 0.9520(0.3494) 0.9512(0.3486) 0.9582(0.3544) 0.9528(0.3425) 0.9468(0.3385)
32 0.9514(0.3495) 0.9508(0.3486) 0.9580(0.3544) 0.9526(0.3425) 0.9476(0.3385)
29 0.9508(0.3498) 0.9502(0.3489) 0.9558(0.3545) 0.9524(0.3428) 0.9442(0.3388)

1~8, 10~12, 15~40 35 0.9524(0.3494) 0.9510(0.3486) 0.9586(0.3542) 0.9530(0.3425) 0.9470(0.3384)
32 0.9524(0.3494) 0.9508(0.3486) 0.9582(0.3542) 0.9530(0.3425) 0.9476(0.3385)
29 0.9514(0.3497) 0.9504(0.3489) 0.9556(0.3543) 0.9520(0.3428) 0.9456(0.3388)

CI = confidence interval; CrI = credible interval; HPD = the highest posterior density.
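Entries of this form are obtained by Monte Carlo: repeatedly generate a sample, construct the interval, and record whether it contains the true σ together with its length. The bookkeeping can be sketched as follows, using an exponential scale parameter and its normal-approximation interval as a deliberately simple stand-in for the EHL model under censoring (all names and settings are illustrative):

```python
import math
import random

def coverage_and_length(true_scale=1.0, n=40, M=2000, z=1.96, seed=11):
    # Monte Carlo estimate of the coverage probability and average length of
    # an asymptotic 95% interval.  For brevity the model is exponential
    # (MLE = sample mean, se = mean / sqrt(n)); the paper's tables use the
    # EHL model under multiply Type-I hybrid censoring instead.
    random.seed(seed)
    hits, total_len = 0, 0.0
    for _ in range(M):
        xs = [random.expovariate(1.0 / true_scale) for _ in range(n)]
        mhat = sum(xs) / n
        se = mhat / math.sqrt(n)
        lo, hi = mhat - z * se, mhat + z * se
        hits += lo <= true_scale <= hi
        total_len += hi - lo
    return hits / M, total_len / M
```

The pair `coverage_and_length()` returns is then reported as "coverage(average length)", the format used in Table 4.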

References
1. Arora SH, Bhimani GC, and Patel MN (2010). Some results on maximum likelihood estimators of parameters of generalized half logistic distribution under Type-I progressive censoring with changing failure rate, International Journal of Contemporary Mathematical Sciences, 14, 685-698.
2. Balakrishnan N (1985). Order statistics from the half logistic distribution, Journal of Statistical Computation and Simulation, 20, 287-309.
3. Balakrishnan N and Asgharzadeh A (2005). Inference for the scaled half-logistic distribution based on progressively Type-II censored samples, Communications in Statistics-Theory and Methods, 34, 78-87.
4. Balakrishnan N and Chan PS (1992). Estimation for the scaled half logistic distribution under Type-II censoring, Computational Statistics & Data Analysis, 13, 123-141.
5. Balakrishnan N and Puthenpura S (1986). Best linear unbiased estimators of location and scale parameters of the half logistic distribution, Journal of Statistical Computation and Simulation, 25, 193-204.
6. Balakrishnan N and Saleh HM (2011). Relations for moments of progressively Type-II censored order statistics from half-logistic distribution with applications to inference, Computational Statistics & Data Analysis, 55, 2775-2792.
7. Balakrishnan N and Wong KHT (1991). Approximate MLEs for the location and scale parameters of the half-logistic distribution with Type-II right-censoring, IEEE Transactions on Reliability, 40, 140-145.
8. Epstein B (1954). Truncated life tests in the exponential case, Annals of Mathematical Statistics, 25, 555-564.
9. Giles DE (2012). Bias reduction for the maximum likelihood estimators of the parameters in the half-logistic distribution, Communications in Statistics-Theory and Methods, 41, 212-222.
10. Kang SB, Cho YS, and Han JT (2008). Estimation for the half logistic distribution under progressive Type-II censoring, Communications for Statistical Applications and Methods, 15, 815-823.
11. Kang SB, Cho YS, and Han JT (2009). Estimation for the half logistic distribution based on double hybrid censored samples, Communications for Statistical Applications and Methods, 16, 1055-1066.
12. Kang SB and Seo JI (2011). Estimation in an exponentiated half logistic distribution under progressively Type-II censoring, Communications for Statistical Applications and Methods, 18, 657-666.
13. Kang SB, Seo JI, and Kim YK (2013). Bayesian analysis of an exponentiated half-logistic distribution under progressively Type-II censoring, Journal of the Korean Data and Information Science Society, 24, 1455-1464.
14. Kim YK, Kang SB, and Seo JI (2011). Bayesian estimation in the generalized half logistic distribution under progressively Type-II censoring, Journal of the Korean Data and Information Science Society, 22, 977-989.
15. Lee KJ, Park CK, and Cho YS (2011). Inference based on generalized doubly Type-II hybrid censored sample from a half logistic distribution, Communications for Statistical Applications and Methods, 18, 645-655.
16. Lee KJ, Sun HK, and Cho YS (2014). Estimation of the exponential distribution based on multiply type I hybrid censored sample, Journal of the Korean Data & Information Science Society, 25, 633-641.
17. Nelson WB (1982). Applied Life Data Analysis, John Wiley & Sons, New York.
18. Seo JI and Kang SB (2015). Pivotal inference for the scaled half logistic distribution based on progressively Type-II censored samples, Statistics & Probability Letters, 104, 109-116.
19. Seo JI, Kim YK, and Kang SB (2013). Estimation on the generalized half logistic distribution under Type-II hybrid censoring, Communications for Statistical Applications and Methods, 20, 63-75.
20. Torabi H and Bagheri F (2010). Estimation of parameters for an extended generalized half logistic distribution based on complete and censored data, Journal of the Iranian Statistical Society, 9, 171-195.
21. Tierney L and Kadane JB (1986). Accurate approximations for posterior moments and marginal densities, Journal of the American Statistical Association, 81, 82-86.
22. Wang C and Liu H (2017). Estimation for the scaled half-logistic distribution under Type-I progressively hybrid censoring scheme, Communications in Statistics-Theory and Methods, 46, 12045-12058.