Bayesian and maximum likelihood estimations from exponentiated log-logistic distribution based on progressive type-II censoring under balanced loss functions

Younshik Chung^a, Yeongju Oh^{1,b}

^a Department of Statistics, Pusan National University, Busan, Korea;
^b Statistical Methodology Division, Statistics Research Institute, Daejeon, Korea
Correspondence to: ^1 Statistical Methodology Division, Statistics Research Institute, 713, Hanbat-daero, Seo-gu, Daejeon, Republic of Korea. E-mail: oyj1928@korea.kr
Received January 22, 2021; Revised June 18, 2021; Accepted June 18, 2021.
Abstract
A generalization of the log-logistic (LL) distribution, called the exponentiated log-logistic (ELL) distribution and constructed along the lines of the exponentiated Weibull distribution, is considered. In this paper, based on progressive type-II censored samples, we derive the maximum likelihood estimators and Bayes estimators for the three parameters, the survival function, and the hazard function of the ELL distribution. Then, under the balanced squared error loss (BSEL) and the balanced linex loss (BLEL) functions, the corresponding Bayes estimators are obtained using Lindley’s approximation (see Jung and Chung, 2018; Lindley, 1980), the Tierney-Kadane approximation (see Tierney and Kadane, 1986), and Markov chain Monte Carlo (MCMC) methods (see Hastings, 1970; Gelfand and Smith, 1990). To check the convergence of the MCMC chains, the Gelman and Rubin diagnostic (see Gelman and Rubin, 1992; Brooks and Gelman, 1997) is used. On the basis of their risks, the performances of the Bayes estimators are compared with the maximum likelihood estimators in simulation studies. The results support the conclusion that the ELL distribution is an efficient model for the analysis of survival data, and that Bayes estimators under various loss functions are useful for many estimation problems.
Keywords : balanced loss functions, Lindley’s approximation, Tierney-Kadane approximation, Markov Chain Monte Carlo (MCMC) method, exponentiated log-logistic distribution, progressive type-II censoring
1. Introduction

The modeling and analysis of lifetimes play an important role in a wide variety of scientific and technological fields. Bennett (1983) discussed the usefulness of the log-logistic (LL) distribution in the analysis of survival data. The LL distribution provides a useful alternative to the Weibull distribution for modeling survival data whose hazard function is non-monotonic. The LL distribution is also very similar in shape to the log-normal distribution, but it is more convenient for survival analysis because its cumulative distribution function (cdf) can be written in closed form, in contrast with the log-normal distribution. Rosaiah et al. (2006) suggested the exponentiated log-logistic (ELL) distribution as a probability model for the lifetime of a product. Chaudhary and Kumar (2014) noted that adding one or more parameters to a distribution makes it richer and more flexible for modeling data.

In survival analysis or life-testing, units can be lost or removed from the experiment while they are still alive. Such removals may be either preassigned or out of control, and progressive censoring schemes are very effective in these situations. The out-of-control case may occur when study participants drop out of the study, while the preassigned case arises from limitations of funds or the need to save time and cost. Progressive type-II censoring enables a compromise between time consumption and the observation of some extreme values, and is thereby an important method of obtaining data in lifetime studies. Recently, Sel et al. (2018) considered Bayesian and maximum likelihood estimation for the parameters of the MacDonald extended Weibull model based on progressive type-II censoring.

The squared error loss (SEL) function leads to estimates close to the given values, but it often leads to somewhat biased estimators. Thus, to trade off goodness of fit against precision of estimation, it needs to be considered in conjunction with balanced loss functions (BLFs) (Zellner, 1994). We consider the estimation of the parameters, the survival function, and the hazard function of the ELL distribution based on progressive type-II censoring under different types of balanced loss function. The performance of these estimates is compared with that of the maximum likelihood estimators (MLEs), and we conclude that BLFs and their associated optimal Bayes estimates are likely to be useful in many estimation problems.

The Bayesian methodology requires integrations that are very difficult to calculate. Lindley’s approximation (Lindley, 1980; Jung and Chung, 2018), the Tierney-Kadane approximation (Tierney and Kadane, 1986), and Markov chain Monte Carlo (MCMC) methods, which include Gibbs sampling (Gelfand and Smith, 1990) and the Metropolis-Hastings algorithm (Hastings, 1970), are used to solve these problems. Also, to check the convergence of the MCMC chains, we use the Gelman and Rubin diagnostic (Gelman and Rubin, 1992; Brooks and Gelman, 1997).

This paper is organized as follows. In Section 2, reviews of the ELL distribution and progressive type-II censoring are presented. In Section 3, we present the MLEs and an asymptotic variance-covariance matrix of the three parameters. We introduce the corresponding Bayes estimators under the balanced squared error loss (BSEL) function and the balanced linex loss (BLEL) function in Section 4. In Section 5, the Bayes estimators are derived using Lindley’s approximation, the Tierney-Kadane approximation, and MCMC methods. In Section 6, to compare the Bayes estimators with the classical estimators, the risks of the MLEs and the Bayes estimators under the different loss functions are presented. Finally, we conclude with a few summary remarks in Section 7.

2. Reviews of exponentiated log-logistic and progressive type-II censoring

In order to get a generalized version of the log-logistic (LL) distribution, Rosaiah et al. (2006) developed a new distribution called the exponentiated log-logistic (ELL) distribution, along the lines of the exponentiated Weibull distribution.

The cdf of the ELL distribution is defined by raising the cdf of the LL distribution to the power of α > 0. The cdf of the LL distribution with a shape parameter β and a scale parameter λ is given by,

$G(t)=\dfrac{(t/\lambda)^{\beta}}{1+(t/\lambda)^{\beta}},\qquad t>0,\ \beta>0,\ \lambda>0.$ (2.1)

Therefore, the cdf of the ELL distribution with two shape parameters α, β and a scale parameter λ is given by,

$F(t)\equiv\left[G(t)\right]^{\alpha},\qquad t>0,\ \alpha>0,\ \beta>0,\ \lambda>0.$ (2.2)

For α = 1, the model reduces to the LL distribution. The probability density function (pdf), the survival function and the hazard function of the ELL distribution with three parameters α, β and λ are given, respectively, by,

$f(t)=\dfrac{\alpha\beta\,(t/\lambda)^{\alpha\beta}}{t\left[1+(t/\lambda)^{\beta}\right]^{\alpha+1}},\qquad S(t)=1-\left(\dfrac{(t/\lambda)^{\beta}}{1+(t/\lambda)^{\beta}}\right)^{\alpha},$ (2.3)

and

$h(t)=\dfrac{\alpha\beta}{t\left(1+(t/\lambda)^{\beta}\right)\left[\left(1+(t/\lambda)^{-\beta}\right)^{\alpha}-1\right]},$ (2.4)

where t > 0, α > 0, β > 0, λ > 0.
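As a quick numerical check of these formulas, the sketch below (plain Python, an illustration rather than part of the paper) codes the cdf, pdf, survival function, and hazard function of the ELL distribution; the closed-form hazard in (2.4) can be verified against f(t)/S(t), and setting α = 1 recovers the LL distribution.

```python
import math

def ll_cdf(t, beta, lam):
    # cdf of the log-logistic (LL) distribution
    r = (t / lam) ** beta
    return r / (1.0 + r)

def ell_cdf(t, alpha, beta, lam):
    # cdf of the ELL distribution: the LL cdf raised to the power alpha
    return ll_cdf(t, beta, lam) ** alpha

def ell_pdf(t, alpha, beta, lam):
    # density obtained by differentiating the ELL cdf
    r = (t / lam) ** beta
    return alpha * beta * (t / lam) ** (alpha * beta) / (t * (1.0 + r) ** (alpha + 1))

def ell_survival(t, alpha, beta, lam):
    return 1.0 - ell_cdf(t, alpha, beta, lam)

def ell_hazard(t, alpha, beta, lam):
    # hazard as f(t)/S(t); equals the closed form in the text
    return ell_pdf(t, alpha, beta, lam) / ell_survival(t, alpha, beta, lam)
```

With α = 1 the cdf reduces to the LL cdf, as stated in the text.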

In life-testing or survival analysis, the data are often censored. Type-I and type-II censoring are the most popular censoring schemes. However, these types of censoring do not allow units to be removed at any point other than the terminal point of the experiment. Thus, we consider a generalization of the classical type-II censoring scheme, known as the progressive type-II censoring scheme.

In the case of progressive type-II censored samples, n independent units are put on a test and the censoring scheme (R1, . . . , Rm) is fixed in advance. The first failure is observed at time t1, at which time R1 surviving units are removed from the experiment at random; the second failure is observed at time t2, at which time R2 surviving units are removed from the experiment at random. This process continues until the mth failure is observed, when the remaining Rm units are removed from the test. The m ordered failure times, denoted by t1 < · · · < tm, are called progressive type-II censored order statistics of size m from a sample of size n with progressive censoring scheme (R1, . . . , Rm). Then, n = m + R1 + R2 + · · · + Rm.

When a progressively type-II censored sample based on n independent units from a population with continuous lifetime distribution f (t) and its cdf F(t) is observed, the likelihood function is given by Balakrishnan and Sandhu (1995) as follows,

$L(\theta\mid\underline{t})=A\prod_{i=1}^{m}f(t_i)\left[1-F(t_i)\right]^{R_i},\qquad \underline{t}=(t_1,\ldots,t_m),$ (2.5)

where A = n(n − R1 − 1)(n − R1 − R2 − 2) · · · (n − R1 − R2 − · · · − Rm−1 − m + 1) and θ is the vector of parameters.

Therefore, based on progressive type-II censored sample from ELL distribution, the likelihood function L(α, β, λ|t) using (2.1), (2.2), and (2.5) is given by,

$L(\alpha,\beta,\lambda\mid\underline{t})\propto\alpha^{m}\beta^{m}\exp\left[\sum_{i=1}^{m}\left(\alpha\ln V_i-\ln(t_iu_i)+R_i\ln\left(1-V_i^{\alpha}\right)\right)\right],$ (2.6)

where $u_i=1+(t_i/\lambda)^{\beta}$ and $V_i=1-1/u_i$.

The log-likelihood function is proportional to,

$\ell(\alpha,\beta,\lambda\mid\underline{t})=\ln L(\alpha,\beta,\lambda\mid\underline{t})\propto\sum_{i=1}^{m}\left[\ln(\alpha\beta)+\alpha\ln V_i-\ln(t_iu_i)+R_i\ln\left(1-V_i^{\alpha}\right)\right].$ (2.7)
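The log-likelihood in (2.7) is straightforward to code. The sketch below (our own illustration, assuming the failure times t and removal counts R are Python lists) evaluates it up to the additive constant ln A:

```python
import math

def ell_loglik(params, t, R):
    """Log-likelihood (2.7) of the ELL distribution under progressive
    type-II censoring, up to the additive constant ln A."""
    alpha, beta, lam = params
    if alpha <= 0 or beta <= 0 or lam <= 0:
        return -math.inf          # outside the parameter space
    ll = 0.0
    for ti, Ri in zip(t, R):
        u = 1.0 + (ti / lam) ** beta        # u_i = 1 + (t_i/lambda)^beta
        V = 1.0 - 1.0 / u                   # V_i = G(t_i)
        ll += (math.log(alpha * beta) + alpha * math.log(V)
               - math.log(ti * u) + Ri * math.log(1.0 - V ** alpha))
    return ll
```

For a complete sample (all R_i = 0) this reduces to the usual sum of log densities.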
3. Maximum likelihood estimations

Assuming that the parameters α, β, and λ are unknown, the MLEs α̂M,β̂M and λ̂M of α, β, and λ can be obtained by solving the following simultaneous likelihood equations,

$\hat\alpha_M:\ \frac{m}{\alpha}+\sum_{i=1}^{m}\left(1-R_iS_i\right)\ln V_i=0,$ (3.1)
$\hat\beta_M:\ \frac{m}{\beta}-\sum_{i=1}^{m}\left[V_i-\frac{\alpha}{u_i}\left(1-R_iS_i\right)\right]\ln\left(\frac{t_i}{\lambda}\right)=0,$
$\hat\lambda_M:\ -\sum_{i=1}^{m}\left[-\frac{u_iV_i}{\alpha}+\left(1-R_iS_i\right)\right]\frac{\alpha\beta}{\lambda u_i}=0,$

where ui and Vi are defined as before and $Si=Viα/(1-Viα)$.

Since closed forms for α̂M, β̂M, and λ̂M cannot be obtained, the MLEs are computed numerically using the Newton-Raphson method. For given t, the MLE of the survival function (denoted by ŜM) in (2.3) and the MLE of the hazard function (denoted by ĥM) in (2.4) can be obtained by replacing α, β, and λ by α̂M, β̂M, and λ̂M, respectively.
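To illustrate the Newton-Raphson step, the sketch below (our own simplification; the paper solves all three equations in (3.1) jointly) solves the first equation for α alone, with β and λ held fixed. It uses the α-score and the second derivative of the log-likelihood with respect to α from Section 3.

```python
import math

def alpha_newton(t, R, beta, lam, alpha0=1.0, tol=1e-10, max_iter=100):
    """Newton-Raphson for the alpha-equation in (3.1), beta and lambda fixed."""
    m = len(t)
    lnV = []
    for ti in t:
        u = 1.0 + (ti / lam) ** beta
        lnV.append(math.log(1.0 - 1.0 / u))   # ln V_i
    alpha = alpha0
    for _ in range(max_iter):
        score, deriv = m / alpha, -m / alpha ** 2
        for lv, Ri in zip(lnV, R):
            Va = math.exp(alpha * lv)          # V_i^alpha
            S = Va / (1.0 - Va)                # S_i
            X = alpha * lv / (1.0 - Va)        # X_i
            score += (1.0 - Ri * S) * lv       # score equation in (3.1)
            deriv -= Ri * S * X * lv / alpha   # d^2 l / d alpha^2
        step = score / deriv
        alpha -= step
        if alpha <= 0:                         # guard against overshoot
            alpha = 1e-6
        if abs(step) < tol:
            break
    return alpha
```

For a complete sample (all R_i = 0) the root has the closed form α̂ = −m / Σ ln V_i, which gives a handy correctness check.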

Next, we compute the observed Fisher information matrix for the MLEs. It enables us to construct confidence intervals for the parameters using the asymptotic normality of the MLEs. The observed Fisher information matrix for α̂M, β̂M, and λ̂M is obtained as follows,

$I(\hat\alpha_M,\hat\beta_M,\hat\lambda_M)=\begin{bmatrix}-\frac{\partial^{2}\ell}{\partial\alpha^{2}}&-\frac{\partial^{2}\ell}{\partial\alpha\partial\beta}&-\frac{\partial^{2}\ell}{\partial\alpha\partial\lambda}\\ &-\frac{\partial^{2}\ell}{\partial\beta^{2}}&-\frac{\partial^{2}\ell}{\partial\beta\partial\lambda}\\ & &-\frac{\partial^{2}\ell}{\partial\lambda^{2}}\end{bmatrix}_{\alpha=\hat\alpha_M,\ \beta=\hat\beta_M,\ \lambda=\hat\lambda_M},$ (3.2)

where ℓ = ℓ(α, β, λ|t) is given in (2.7) and the matrix is symmetric. The asymptotic variance-covariance matrix is given by,

$\hat\Sigma=I(\hat\alpha_M,\hat\beta_M,\hat\lambda_M)^{-1}=\begin{bmatrix}\hat\sigma_{\alpha}^{2}&\hat\sigma_{\alpha\beta}&\hat\sigma_{\alpha\lambda}\\ &\hat\sigma_{\beta}^{2}&\hat\sigma_{\beta\lambda}\\ & &\hat\sigma_{\lambda}^{2}\end{bmatrix},$ (3.3)

with

$\frac{\partial^{2}\ell}{\partial\alpha^{2}}=-\frac{m}{\alpha^{2}}-\sum_{i=1}^{m}\frac{R_iS_iX_i}{\alpha}\ln V_i,$
$\frac{\partial^{2}\ell}{\partial\beta^{2}}=-\frac{m}{\beta^{2}}-\sum_{i=1}^{m}\left[1+\alpha-\alpha R_iS_i\left(1-Y_iV_i^{-1}\right)\right]\frac{W_i}{u_i\beta}\ln\left(\frac{t_i}{\lambda}\right),$
$\frac{\partial^{2}\ell}{\partial\lambda^{2}}=-\sum_{i=1}^{m}\left[\frac{u_i}{\alpha}+\left(\beta V_i-1\right)\left(1-R_iS_i+\frac{1}{\alpha}\right)+R_iS_iY_i\beta\right]\frac{\alpha\beta}{\lambda^{2}u_i},$
$\frac{\partial^{2}\ell}{\partial\alpha\,\partial\beta}=\sum_{i=1}^{m}\left[1-R_iS_i\left(1+X_i\right)\right]\frac{1}{u_i}\ln\left(\frac{t_i}{\lambda}\right),$
$\frac{\partial^{2}\ell}{\partial\alpha\,\partial\lambda}=-\sum_{i=1}^{m}\left[1-R_iS_i\left(1+X_i\right)\right]\frac{\beta}{\lambda u_i},$
$\frac{\partial^{2}\ell}{\partial\beta\,\partial\lambda}=\sum_{i=1}^{m}\left[\left(W_i-1\right)\left(1-R_iS_i+\frac{1}{\alpha}\right)+\frac{u_i}{\alpha}+R_iS_iY_i\ln\left(u_i-1\right)\right]\frac{\alpha}{\lambda u_i},$

where $X_i=S_iV_i^{-\alpha}\alpha\ln V_i$, $Y_i=S_iV_i^{-\alpha}\alpha/u_i$, and $W_i=V_i\ln(u_i-1)$. The 100(1 − a)% confidence intervals for α, β, and λ are given, respectively, by,

$\hat\alpha_M\pm z_{a/2}\sqrt{\hat\sigma_{\alpha}^{2}},\qquad \hat\beta_M\pm z_{a/2}\sqrt{\hat\sigma_{\beta}^{2}},\qquad \hat\lambda_M\pm z_{a/2}\sqrt{\hat\sigma_{\lambda}^{2}},$

where $z_{a/2}$ is the upper a/2 quantile of the standard normal distribution and $\hat\sigma_{\alpha}^{2}$, $\hat\sigma_{\beta}^{2}$, and $\hat\sigma_{\lambda}^{2}$ are defined in (3.3).
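When the analytic second derivatives above are inconvenient, the observed information can also be approximated numerically. The sketch below (function names are ours, not the paper's) builds a central-difference Hessian of the log-likelihood at the MLE, inverts its negative by Gauss-Jordan elimination, and returns the Wald intervals:

```python
import math

def numeric_hessian(f, x, h=1e-5):
    """Central-difference Hessian of a scalar function f at the point x."""
    n = len(x)
    H = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            xpp, xpm, xmp, xmm = list(x), list(x), list(x), list(x)
            xpp[i] += h; xpp[j] += h
            xpm[i] += h; xpm[j] -= h
            xmp[i] -= h; xmp[j] += h
            xmm[i] -= h; xmm[j] -= h
            H[i][j] = (f(xpp) - f(xpm) - f(xmp) + f(xmm)) / (4 * h * h)
    return H

def invert(M):
    """Gauss-Jordan inverse of a small square matrix."""
    n = len(M)
    A = [row[:] + [1.0 if i == j else 0.0 for j in range(n)]
         for i, row in enumerate(M)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        p = A[col][col]
        A[col] = [v / p for v in A[col]]
        for r in range(n):
            if r != col:
                factor = A[r][col]
                A[r] = [v - factor * w for v, w in zip(A[r], A[col])]
    return [row[n:] for row in A]

def wald_ci(loglik, mle, z=1.96):
    """95% Wald intervals from the observed information, as in (3.3)."""
    H = numeric_hessian(loglik, mle)
    info = [[-v for v in row] for row in H]   # observed information matrix
    cov = invert(info)                        # asymptotic covariance matrix
    return [(m - z * math.sqrt(cov[i][i]), m + z * math.sqrt(cov[i][i]))
            for i, m in enumerate(mle)]
```

On a quadratic log-likelihood the finite differences are exact up to rounding, which makes the routine easy to verify.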

4. Bayes estimations

Now, we deal with Bayesian estimation of the parameters, the survival function, and the hazard function of the ELL distribution under the BSEL and BLEL functions. Assume that the prior distributions of the parameters α, β, and λ are independent Gam(a1, b1), Gam(a2, b2), and Gam(a3, b3) distributions, respectively, where Gam(a, b) denotes the gamma distribution with mean a/b and variance a/b². Thus, the joint prior density function of α, β, and λ is given by,

$\pi(\alpha,\beta,\lambda)\propto\alpha^{a_1-1}\beta^{a_2-1}\lambda^{a_3-1}\exp\left(-b_1\alpha-b_2\beta-b_3\lambda\right),$ (4.1)

where α, β, λ > 0. Therefore, the joint posterior density function of α, β and λ based on the progressive type-II censored samples t = (t1, . . . , tm) from the ELL distribution in (2.6) is obtained as,

$\pi(\alpha,\beta,\lambda\mid\underline{t})\propto\alpha^{m+a_1-1}\beta^{m+a_2-1}\lambda^{a_3-1}\exp\left[-b_1\alpha-b_2\beta-b_3\lambda+\sum_{i=1}^{m}\left(\alpha\ln V_i-\ln(t_iu_i)+R_i\ln\left(1-V_i^{\alpha}\right)\right)\right],$ (4.2)

where ui and Vi are defined as before.

Zellner (1994) introduced the balanced loss function to reflect both goodness of fit and precision of estimation. Jozani et al. (2012) introduced an extended class of such balanced loss functions as follows,

$L^{q}_{\rho,w,\delta_0}\left(\delta(\theta),\hat\delta(\underline{t})\right)=w\,q(\theta)\,\rho\left(\delta_0(\underline{t}),\hat\delta(\underline{t})\right)+(1-w)\,q(\theta)\,\rho\left(\delta(\theta),\hat\delta(\underline{t})\right),$ (4.3)

where θ is the vector of parameters, q(θ) is a positive weight function, ρ is an arbitrary loss function, δ̂(t) is the estimator of δ(θ) and δ0(t) is a chosen a priori “target” estimator of δ(θ), such as maximum likelihood estimator (MLE), least-squares estimator, or unbiased estimator among others. Here, the chosen a priori “target” estimator δ0(t) in (4.3) is considered as the MLE of δ(θ) and denoted by δ̂M. Depending on the choice of w ∈ (0, 1), the relative importance of these criteria is controlled.

### 4.1. Balanced squared error loss (BSEL) function

Taking ρ(δ(θ),δ̂(t)) = (δ̂(t) − δ(θ))2 and q(θ) = 1 in (4.3), the BSEL function is given by,

$L_{BS,w,\delta_0}\left(\delta(\theta),\hat\delta(\underline{t})\right)=w\left(\hat\delta(\underline{t})-\delta_0(\underline{t})\right)^{2}+(1-w)\left(\hat\delta(\underline{t})-\delta(\theta)\right)^{2},$ (4.4)

where δ0(t) = δ̂M is the MLE of δ(θ). Then, the Bayes estimator of δ(θ) under the BSEL function in (4.4) is given by,

$\hat\delta_{BS}(\underline{t})=w\,\delta_0(\underline{t})+(1-w)\,E\left(\delta(\theta)\mid\underline{t}\right),$ (4.5)

where δ0(t) = δ̂M is the MLE of δ(θ). For example, it follows from (4.5) that the Bayes estimator of α is given by,

$\hat\alpha_{BS}=w\,\hat\alpha_M+(1-w)\,E\left(\alpha\mid\underline{t}\right),$

where α̂M is the MLE of α. Similarly, the Bayes estimators of β, λ, S(t) in (2.3), and h(t) in (2.4) under the BSEL function in (4.4) can be obtained.

### 4.2. Balanced linex loss (BLEL) function

Taking q(θ) = 1 and,

$\rho\left(\delta(\theta),\hat\delta(\underline{t})\right)=e^{c\left(\hat\delta(\underline{t})-\delta(\theta)\right)}-c\left(\hat\delta(\underline{t})-\delta(\theta)\right)-1,$ (4.7)

in (4.3), which is called the linex error loss (LEL) function (Varian, 1975), the BLEL function is given by

$L_{BL,w,\delta_0}\left(\delta(\theta),\hat\delta(\underline{t})\right)=w\left(e^{c\left(\hat\delta(\underline{t})-\delta_0(\underline{t})\right)}-c\left(\hat\delta(\underline{t})-\delta_0(\underline{t})\right)-1\right)+(1-w)\left(e^{c\left(\hat\delta(\underline{t})-\delta(\theta)\right)}-c\left(\hat\delta(\underline{t})-\delta(\theta)\right)-1\right),$ (4.8)

where δ0(t) = δ̂M is the MLE of δ(θ). In the LEL function, if c is greater than 0, the loss is asymmetric and penalizes overestimation more heavily than underestimation; otherwise, the opposite holds. For small values of |c|, the function is almost symmetric and close to the squared error loss (SEL). When |c| takes an appreciable value, the estimate may be quite different from that obtained under a symmetric SEL. The Bayes estimator of δ(θ) under the LEL function in (4.7) is given by

$\hat\delta_{L}(\underline{t})=-\frac{1}{c}\ln E\left(e^{-c\,\delta(\theta)}\mid\underline{t}\right),\qquad c\neq0.$


Therefore, the Bayes estimator of δ(θ) under BLEL function in (4.8) is given by,

$\hat\delta_{BL}(\underline{t})=-\frac{1}{c}\ln\left[w\,e^{-c\,\delta_0(\underline{t})}+(1-w)\,E\left(e^{-c\,\delta(\theta)}\mid\underline{t}\right)\right],$ (4.10)

where δ0(t) = δ̂M is the MLE of δ(θ). The LEL function is recovered from the BLEL function by setting w = 0. For example, it follows from (4.10) that the Bayes estimator of α under the BLEL function in (4.8) is given by

$\hat\alpha_{BL}=-\frac{1}{c}\ln\left[w\,e^{-c\,\hat\alpha_M}+(1-w)\,E\left(e^{-c\,\alpha}\mid\underline{t}\right)\right],$

where α̂M is the MLE of α. Similarly, the Bayes estimators of β, λ, S(t) in (2.3), and h(t) in (2.4) under the BLEL function in (4.8) can be obtained.
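Given the MLE and the posterior expectations, both balanced estimators are simple combinations. The sketch below implements (4.5) and (4.10), with the posterior expectations replaced by Monte Carlo averages over posterior draws (an assumption of ours; the paper also evaluates them by analytic approximation):

```python
import math

def bayes_bsel(delta_mle, post_draws, w):
    # (4.5): weighted average of the target estimator (MLE) and posterior mean
    post_mean = sum(post_draws) / len(post_draws)
    return w * delta_mle + (1.0 - w) * post_mean

def bayes_blel(delta_mle, post_draws, w, c):
    # (4.10): balanced linex estimator, c != 0
    post_exp = sum(math.exp(-c * d) for d in post_draws) / len(post_draws)
    return -(1.0 / c) * math.log(w * math.exp(-c * delta_mle) + (1.0 - w) * post_exp)
```

As |c| shrinks the BLEL estimator approaches the BSEL estimator, mirroring the remark that the LEL loss is almost symmetric for small |c|.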

5. Computing methods

For a real-valued function r(θ), the posterior expectation of r(θ) given t is defined as follows,

$E\left(r(\theta)\mid\underline{t}\right)=\frac{\int_{\alpha}\int_{\beta}\int_{\lambda}r(\theta)\,L(\alpha,\beta,\lambda\mid\underline{t})\,\pi(\alpha,\beta,\lambda)\,d\alpha\,d\beta\,d\lambda}{\int_{\alpha}\int_{\beta}\int_{\lambda}L(\alpha,\beta,\lambda\mid\underline{t})\,\pi(\alpha,\beta,\lambda)\,d\alpha\,d\beta\,d\lambda},$ (5.1)

where θ = (α, β, λ) is the vector of parameters. In general, however, the ratio of the two integrals in (5.1) cannot be obtained in closed form. Thus, we consider the following three approaches to approximate (5.1): Lindley’s approximation, the Tierney-Kadane approximation, and MCMC methods.

### 5.1. Lindley’s approximation

To solve the integration problem, we rely on Lindley’s method, an approximation technique for such integrals. Kim et al. (2011) and Jung and Chung (2018) used Lindley’s method for two-parameter and three-parameter Bayesian problems, respectively. Following the notation of Jung and Chung (2018) with θ = (θ1, θ2, θ3), the posterior expectation of r(θ) given data t can be approximated by,

$\hat E\left[r(\theta)\mid\underline{t}\right]\approx r(\theta)+A+\rho_1A_{123}+\rho_2A_{213}+\rho_3A_{321}+\frac{1}{2}\left[\ell^{*}_{300}B_{123}+\ell^{*}_{030}B_{213}+\ell^{*}_{003}B_{321}+2\ell^{*}_{111}\left(C_{123}+C_{213}+C_{312}\right)+\ell^{*}_{210}D_{123}+\ell^{*}_{201}D_{132}+\ell^{*}_{120}D_{213}+\ell^{*}_{102}D_{312}+\ell^{*}_{021}D_{231}+\ell^{*}_{012}D_{321}\right],$ (5.2)

where $A=\frac{1}{2}\sum_{i,j=1}^{3}r_{ij}\tau_{ij}$ and $\ell^{*}_{\eta\gamma\iota}=\partial^{3}\ell(\theta_1,\theta_2,\theta_3\mid\underline{t})/\partial\theta_1^{\eta}\partial\theta_2^{\gamma}\partial\theta_3^{\iota}$ with η, γ, ι = 0, 1, 2, 3 and η + γ + ι = 3. For i, j, k = 1, 2, 3,

$A_{ijk}=r_i\tau_{ii}+r_j\tau_{ji}+r_k\tau_{ki},$ (5.3)
$B_{ijk}=\tau_{ii}\left(r_i\tau_{ii}+r_j\tau_{ij}+r_k\tau_{ik}\right),$
$C_{ijk}=r_i\left(\tau_{ii}\tau_{jk}+2\tau_{ij}\tau_{ik}\right),$
$D_{ijk}=3r_i\tau_{ii}\tau_{ij}+r_j\left(\tau_{ii}\tau_{jj}+2\tau_{ij}^{2}\right)+r_k\left(\tau_{ii}\tau_{jk}+2\tau_{ij}\tau_{ik}\right).$

Here, $r_{ij}=\partial^{2}r(\theta)/\partial\theta_i\partial\theta_j$, $r_i=\partial r(\theta)/\partial\theta_i$, and $\rho_i=\partial\rho(\theta)/\partial\theta_i$ with ρ(θ) = ln π(θ), where π(θ) denotes the joint prior density of θ; $\tau_{ij}$ is the (i, j)th element of the inverse of the matrix $(-\ell_{ij})$, where $\ell_{ij}=\partial^{2}\ell(\theta_1,\theta_2,\theta_3\mid\underline{t})/\partial\theta_i\partial\theta_j$ for i, j = 1, 2, 3. Expression (5.2) is evaluated at (θ̂1M, θ̂2M, θ̂3M), where θ̂1M, θ̂2M, and θ̂3M denote the MLEs of θ1, θ2, and θ3, respectively.

We apply Lindley’s approximation (5.2) to our case and let (θ1, θ2, θ3) ≡ (α, β, λ) and (θ̂1M, θ̂2M, θ̂3M) ≡ (α̂M, β̂M, λ̂M). The elements τij can then be obtained from,

$\begin{bmatrix}-\frac{\partial^{2}\ell}{\partial\alpha^{2}}&-\frac{\partial^{2}\ell}{\partial\alpha\partial\beta}&-\frac{\partial^{2}\ell}{\partial\alpha\partial\lambda}\\-\frac{\partial^{2}\ell}{\partial\alpha\partial\beta}&-\frac{\partial^{2}\ell}{\partial\beta^{2}}&-\frac{\partial^{2}\ell}{\partial\beta\partial\lambda}\\-\frac{\partial^{2}\ell}{\partial\alpha\partial\lambda}&-\frac{\partial^{2}\ell}{\partial\beta\partial\lambda}&-\frac{\partial^{2}\ell}{\partial\lambda^{2}}\end{bmatrix}^{-1}=\begin{bmatrix}\tau_{11}&\tau_{12}&\tau_{13}\\ &\tau_{22}&\tau_{23}\\ & &\tau_{33}\end{bmatrix}.$

Also $ℓijk*$ in (5.2) can be obtained as follows,

$\ell^{*}_{300}=\frac{2m}{\alpha^{3}}-\frac{1}{\alpha^{2}}\sum_{i=1}^{m}R_iS_iX_i\ln V_i\left(2X_i-\alpha\ln V_i\right),$ (5.4)
$\ell^{*}_{030}=\frac{2m}{\beta^{3}}+\sum_{i=1}^{m}\left[R_iS_iY_i\left(3-2Y_iV_i^{-1}+\alpha V_i^{-1}u_i^{-1}\right)-\left(1+\alpha^{-1}-R_iS_i\right)\left(2u_i^{-1}-1\right)\right]\frac{\alpha W_i}{\beta u_i}\left(\ln\left(\frac{t_i}{\lambda}\right)\right)^{2},$
$\ell^{*}_{003}=\sum_{i=1}^{m}\left[\frac{2u_i}{\alpha}+\left(1+\alpha^{-1}-R_iS_i\right)\left(\frac{\beta^{2}V_i}{u_i}-\left(\beta V_i-1\right)\left(\beta V_i-2\right)\right)+\beta R_iS_iY_i\left(3-\beta\left(\frac{\alpha}{u_i}+3V_i-2Y_i\right)\right)\right]\frac{\alpha\beta}{\lambda^{3}u_i},$
$\ell^{*}_{210}=-\sum_{i=1}^{m}\left(2+2X_i-\alpha\ln V_i\right)\frac{R_iS_iX_i}{\alpha u_i}\ln\left(\frac{t_i}{\lambda}\right),$
$\ell^{*}_{201}=\sum_{i=1}^{m}\left(2+2X_i-\alpha\ln V_i\right)\frac{\beta R_iS_iX_i}{\alpha\lambda u_i},$
$\ell^{*}_{120}=\sum_{i=1}^{m}\left[-V_i+R_iS_i\left(\left(1+X_i\right)\left(V_i-2Y_i\right)+\frac{\alpha}{u_i}X_i\right)\right]\frac{1}{u_i}\left(\ln\left(\frac{t_i}{\lambda}\right)\right)^{2},$
$\ell^{*}_{102}=\sum_{i=1}^{m}\left[\frac{1}{\beta}-V_i+R_iS_i\left(\left(V_i-2Y_i-\frac{1}{\beta}\right)\left(X_i+1\right)+\frac{\alpha}{u_i}X_i\right)\right]\frac{\beta^{2}}{\lambda^{2}u_i},$
$\ell^{*}_{021}=\sum_{i=1}^{m}\left[\left(1+\alpha^{-1}-R_iS_i\right)\left(2-2W_i+\ln(u_i-1)\right)+R_iS_iY_iV_i^{-1}\left(2-W_i-\left(\frac{\alpha}{u_i}-2Y_i\right)\ln(u_i-1)\right)\right]\frac{\alpha W_i}{\beta\lambda u_i},$
$\ell^{*}_{012}=-\sum_{i=1}^{m}\left[\left(1+\alpha^{-1}-R_iS_i\right)\left(\left(2V_i-\frac{1}{\beta}\right)\left(1-W_i\right)+W_i\right)+\frac{u_i}{\alpha\beta}+R_iS_iY_i\left(\left(2-3W_i\right)+\left(\frac{1}{\beta}-\frac{\alpha}{u_i}+2Y_i\right)\ln(u_i-1)\right)\right]\frac{\alpha\beta}{\lambda^{2}u_i},$

and

$\ell^{*}_{111}=\sum_{i=1}^{m}\left[\left(W_i-1\right)\left(1-R_iS_i\left(1+X_i\right)\right)+R_iS_iY_i\left(2+2X_i-\alpha\ln V_i\right)\ln(u_i-1)\right]\frac{1}{\lambda u_i},$

where ui, Vi, Si, Xi, Yi and Wi are defined as before. Using the joint prior distribution (4.1), we obtain,

$\rho(\alpha,\beta,\lambda)=\ln\pi(\alpha,\beta,\lambda)=\left(a_1-1\right)\ln\alpha-b_1\alpha+\left(a_2-1\right)\ln\beta-b_2\beta+\left(a_3-1\right)\ln\lambda-b_3\lambda,$ (5.5)

and then we get

$\rho_1=\frac{a_1-1}{\alpha}-b_1,\qquad \rho_2=\frac{a_2-1}{\beta}-b_2,\qquad \rho_3=\frac{a_3-1}{\lambda}-b_3.$ (5.6)

Utilizing equations (5.2)–(5.6), we can obtain the Bayes estimators of α, β, λ, S(t), and h(t) under the balanced squared error loss (BSEL) and balanced linex loss (BLEL) functions. Here, α̂BSL, β̂BSL, λ̂BSL, ŜBSL, and ĥBSL denote the Bayes estimators of α, β, λ, S(t), and h(t) using Lindley’s approximation under the BSEL function in (4.5), and α̂BLL, β̂BLL, λ̂BLL, ŜBLL, and ĥBLL denote the corresponding Bayes estimators using Lindley’s approximation under the BLEL function in (4.10).
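The bookkeeping in (5.2) is easy to get wrong by hand, so a direct transcription into code may be useful. The sketch below (our own helper, not from the paper) evaluates (5.2) given numerical values of the derivatives, with the paper's indices 1, 2, 3 mapped to 0, 1, 2; the dictionary keys for the third derivatives, e.g. l3[(0, 0, 1)] for ℓ*210, are our own convention.

```python
def lindley_e(r_hat, r_i, r_ij, rho_i, tau, l3):
    """Lindley's approximation (5.2) evaluated at the MLE.
    r_hat: r(theta_hat); r_i, rho_i: length-3 gradients; r_ij: 3x3 Hessian of r;
    tau: 3x3 inverse of minus the log-likelihood Hessian;
    l3[(i, j, k)]: third log-likelihood derivative w.r.t. theta_i, theta_j, theta_k,
    so l3[(0,0,0)] is l*_300, l3[(0,0,1)] is l*_210, l3[(1,2,2)] is l*_012, etc."""
    def A(i, j, k):
        return r_i[i]*tau[i][i] + r_i[j]*tau[j][i] + r_i[k]*tau[k][i]
    def B(i, j, k):
        return tau[i][i]*(r_i[i]*tau[i][i] + r_i[j]*tau[i][j] + r_i[k]*tau[i][k])
    def C(i, j, k):
        return r_i[i]*(tau[i][i]*tau[j][k] + 2*tau[i][j]*tau[i][k])
    def D(i, j, k):
        return (3*r_i[i]*tau[i][i]*tau[i][j]
                + r_i[j]*(tau[i][i]*tau[j][j] + 2*tau[i][j]**2)
                + r_i[k]*(tau[i][i]*tau[j][k] + 2*tau[i][j]*tau[i][k]))
    # A = 0.5 * sum_ij r_ij tau_ij
    Aterm = 0.5 * sum(r_ij[i][j]*tau[i][j] for i in range(3) for j in range(3))
    out = r_hat + Aterm + rho_i[0]*A(0,1,2) + rho_i[1]*A(1,0,2) + rho_i[2]*A(2,1,0)
    out += 0.5*(l3[(0,0,0)]*B(0,1,2) + l3[(1,1,1)]*B(1,0,2) + l3[(2,2,2)]*B(2,1,0)
                + 2*l3[(0,1,2)]*(C(0,1,2) + C(1,0,2) + C(2,0,1))
                + l3[(0,0,1)]*D(0,1,2) + l3[(0,0,2)]*D(0,2,1)
                + l3[(0,1,1)]*D(1,0,2) + l3[(0,2,2)]*D(2,0,1)
                + l3[(1,1,2)]*D(1,2,0) + l3[(1,2,2)]*D(2,1,0))
    return out
```

With a flat prior, a linear r, and vanishing third derivatives, all correction terms vanish and the approximation returns r evaluated at the MLE, which serves as a simple sanity check.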

### 5.2. Tierney-Kadane approximation

We obtain approximate Bayes estimators of α, β, λ, S(t), and h(t) using the Tierney-Kadane approximation, an alternative approach to approximating the posterior expectation of r(θ) given t. This method is based on the Laplace approximation for the asymptotic evaluation of integrals. Recall that the posterior expectation of a function r(θ) given data t can be written as,

$E\left[r(\theta)\mid\underline{t}\right]=\iiint r(\theta)\,\pi(\alpha,\beta,\lambda\mid\underline{t})\,d\alpha\,d\beta\,d\lambda=\frac{\iiint e^{m\,g^{*}(\alpha,\beta,\lambda)}\,d\alpha\,d\beta\,d\lambda}{\iiint e^{m\,g(\alpha,\beta,\lambda)}\,d\alpha\,d\beta\,d\lambda},$ (5.7)

where $g(\alpha,\beta,\lambda)=\frac{1}{m}\left(\ell(\alpha,\beta,\lambda\mid\underline{t})+\rho(\alpha,\beta,\lambda)\right)$ and $g^{*}(\alpha,\beta,\lambda)=g(\alpha,\beta,\lambda)+\frac{1}{m}\ln r(\theta)$. Here, ℓ(α, β, λ|t) is the log-likelihood of α, β, and λ in (2.7) and ρ(α, β, λ) = ln π(α, β, λ) as in (5.5). Using the Tierney-Kadane approximation, the posterior expectation of r(θ) can be estimated by,

$\hat E\left[r(\theta)\mid\underline{t}\right]=\left(\frac{\left|\Sigma^{*}\left(\hat\alpha^{*},\hat\beta^{*},\hat\lambda^{*}\right)\right|}{\left|\Sigma\left(\hat\alpha,\hat\beta,\hat\lambda\right)\right|}\right)^{\frac{1}{2}}e^{\,m\left[g^{*}\left(\hat\alpha^{*},\hat\beta^{*},\hat\lambda^{*}\right)-g\left(\hat\alpha,\hat\beta,\hat\lambda\right)\right]},$ (5.8)

where (α̂, β̂, λ̂) and (α̂*, β̂*, λ̂*) maximize g(α, β, λ) and g*(α, β, λ), respectively, and Σ(α̂, β̂, λ̂) and Σ*(α̂*, β̂*, λ̂*) are minus the inverse Hessians of g and g* evaluated at (α̂, β̂, λ̂) and (α̂*, β̂*, λ̂*), respectively. Using equation (5.8), the approximate Bayes estimators of α, β, λ, S(t), and h(t) under the BSEL and BLEL functions are easily obtained.

For example, if r(θ) = α, then

$\hat E\left[\alpha\mid\underline{t}\right]=\left(\frac{\left|\Sigma^{*}\left(\hat\alpha^{*},\hat\beta^{*},\hat\lambda^{*}\right)\right|}{\left|\Sigma\left(\hat\alpha,\hat\beta,\hat\lambda\right)\right|}\right)^{\frac{1}{2}}e^{\,m\left[g^{*}\left(\hat\alpha^{*},\hat\beta^{*},\hat\lambda^{*}\right)-g\left(\hat\alpha,\hat\beta,\hat\lambda\right)\right]},$

where

$g(\alpha,\beta,\lambda)=\frac{1}{m}\left[m\ln(\alpha\beta)+\sum_{i=1}^{m}\left(\alpha\ln V_i-\ln(t_iu_i)+R_i\ln\left(1-V_i^{\alpha}\right)\right)+\left(a_1-1\right)\ln\alpha-b_1\alpha+\left(a_2-1\right)\ln\beta-b_2\beta+\left(a_3-1\right)\ln\lambda-b_3\lambda\right],$
$g^{*}(\alpha,\beta,\lambda)=\frac{1}{m}\left(\ell(\alpha,\beta,\lambda\mid\underline{t})+\rho(\alpha,\beta,\lambda)\right)+\frac{1}{m}\ln\alpha=g(\alpha,\beta,\lambda)+\frac{1}{m}\ln\alpha,$
$\Sigma\left(\hat\alpha,\hat\beta,\hat\lambda\right)=-\begin{bmatrix}\frac{\partial^{2}g}{\partial\alpha^{2}}&\frac{\partial^{2}g}{\partial\alpha\partial\beta}&\frac{\partial^{2}g}{\partial\alpha\partial\lambda}\\ &\frac{\partial^{2}g}{\partial\beta^{2}}&\frac{\partial^{2}g}{\partial\beta\partial\lambda}\\ & &\frac{\partial^{2}g}{\partial\lambda^{2}}\end{bmatrix}^{-1}_{\alpha=\hat\alpha,\ \beta=\hat\beta,\ \lambda=\hat\lambda},$
$\Sigma^{*}\left(\hat\alpha^{*},\hat\beta^{*},\hat\lambda^{*}\right)=-\begin{bmatrix}\frac{\partial^{2}g^{*}}{\partial\alpha^{2}}&\frac{\partial^{2}g^{*}}{\partial\alpha\partial\beta}&\frac{\partial^{2}g^{*}}{\partial\alpha\partial\lambda}\\ &\frac{\partial^{2}g^{*}}{\partial\beta^{2}}&\frac{\partial^{2}g^{*}}{\partial\beta\partial\lambda}\\ & &\frac{\partial^{2}g^{*}}{\partial\lambda^{2}}\end{bmatrix}^{-1}_{\alpha=\hat\alpha^{*},\ \beta=\hat\beta^{*},\ \lambda=\hat\lambda^{*}},$

where ui and Vi are defined as before (the matrices are symmetric). With the same argument, we can obtain Ê(β|t), Ê(λ|t), Ê(e^{−cα}|t), Ê(e^{−cβ}|t), and Ê(e^{−cλ}|t). Using these estimators, we can obtain the Bayes estimators of α, β, λ, S(t), and h(t) under the balanced squared error loss (BSEL) and balanced linex loss (BLEL) functions. Here, α̂BSTK, β̂BSTK, λ̂BSTK, ŜBSTK, and ĥBSTK denote the Bayes estimators of α, β, λ, S(t), and h(t) using the Tierney-Kadane approximation under the BSEL function in (4.5), and α̂BLTK, β̂BLTK, λ̂BLTK, ŜBLTK, and ĥBLTK denote the corresponding Bayes estimators using the Tierney-Kadane approximation under the BLEL function in (4.10).
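As a one-parameter illustration of the Tierney-Kadane form in (5.8) (a simplification of ours; the paper works with three parameters), the sketch below computes a posterior mean using golden-section maximization and a finite-difference second derivative. For a Gam(5, 2) posterior it reproduces the exact mean 2.5 to within about 0.01.

```python
import math

def maximize(f, lo, hi, iters=200):
    """Golden-section search for the maximizer of a unimodal f on [lo, hi]."""
    g = (math.sqrt(5.0) - 1.0) / 2.0
    a, b = lo, hi
    c, d = b - g * (b - a), a + g * (b - a)
    for _ in range(iters):
        if f(c) > f(d):
            b, d = d, c
            c = b - g * (b - a)
        else:
            a, c = c, d
            d = a + g * (b - a)
    return 0.5 * (a + b)

def second_deriv(f, x, h=1e-4):
    return (f(x + h) - 2.0 * f(x) + f(x - h)) / (h * h)

def tierney_kadane_mean(logpost, lo, hi):
    """Posterior mean of theta via the Tierney-Kadane form, one parameter.
    logpost plays the role of m*g; m*g* = logpost + log(theta)."""
    g = logpost
    gs = lambda th: logpost(th) + math.log(th)
    t0, t1 = maximize(g, lo, hi), maximize(gs, lo, hi)
    s0 = -1.0 / second_deriv(g, t0)     # sigma^2 for g at its maximizer
    s1 = -1.0 / second_deriv(gs, t1)    # sigma^2 for g* at its maximizer
    return math.sqrt(s1 / s0) * math.exp(gs(t1) - g(t0))
```

The gamma example is convenient because the exact posterior mean a/b is known, so the quality of the Laplace-type approximation can be checked directly.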

### 5.3. Markov chain Monte Carlo (MCMC) simulation

Markov chain Monte Carlo (MCMC) methods simulate from a complex probability distribution (the target distribution) by generating a Markov chain with the target density as its stationary density. Among them, Gibbs sampling (Gelfand and Smith, 1990) is a popular tool for the analysis of complicated statistical models. The full conditional distributions of the parameters α, β, and λ, obtained from (4.2), are as follows,

$\pi(\alpha\mid\beta,\lambda,\underline{t})\propto\alpha^{m+a_1-1}\exp\left[-b_1\alpha+\sum_{i=1}^{m}\left(\alpha\ln V_i+R_i\ln\left(1-V_i^{\alpha}\right)\right)\right],$ (5.9)
$\pi(\beta\mid\alpha,\lambda,\underline{t})\propto\beta^{m+a_2-1}\exp\left[-b_2\beta+\sum_{i=1}^{m}\left(\alpha\ln V_i-\ln(t_iu_i)+R_i\ln\left(1-V_i^{\alpha}\right)\right)\right],$ (5.10)
$\pi(\lambda\mid\alpha,\beta,\underline{t})\propto\lambda^{a_3-1}\exp\left[-b_3\lambda+\sum_{i=1}^{m}\left(\alpha\ln V_i-\ln(t_iu_i)+R_i\ln\left(1-V_i^{\alpha}\right)\right)\right].$ (5.11)

None of the full conditional distributions in (5.9), (5.10), and (5.11) reduces analytically to a well-known distribution. Therefore, the Metropolis-Hastings algorithm is useful, since sampling directly from the full conditional distributions is difficult. To generate samples of α, β, and λ from (5.9), (5.10), and (5.11), respectively, gamma distributions are used as proposal distributions. The MCMC algorithm works as follows,

• Step 1. Select initial value (α(0), β(0), λ(0)).

• Step 2. Set j = 1.

• Step 3. Using Metropolis Hastings algorithm, generate α(j) from π(α|β(j−1), λ(j−1), t) in (5.9) with gamma proposal distribution, Gam (α(j−1), 1).

• Step 4. Using Metropolis Hastings algorithm, generate β(j) from π(β|α(j), λ(j−1), t) in (5.10) with gamma proposal distribution, Gam (β(j−1), 1).

• Step 5. Using Metropolis Hastings algorithm, generate λ(j) from π(λ|α(j), β(j), t) in (5.11) with gamma proposal distribution, Gam (λ(j−1), 1).

• Step 6. Set j = j + 1.

• Step 7. Repeat Steps 3 to 6 N times.

Thus, for instance, the posterior expectation of α is given by

$\hat E(\alpha\mid\underline{t})=\frac{1}{M}\sum_{j=M+1}^{N}\alpha^{(j)},$

where M is the burn-in period and N = 2M. In this paper, we judge that convergence has been reached after 500 iterations of the MCMC algorithm and set the total number of iterations to 1,000.
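The Metropolis-Hastings step with the moving-shape Gam(previous value, 1) proposal of Steps 3–5 can be sketched as follows; because this proposal is not symmetric, the acceptance ratio includes the Hastings correction. The Gamma(3, 1) target used here is a stand-in for the full conditionals (5.9)–(5.11), which would replace log_target in practice.

```python
import math
import random

def log_gamma_pdf(x, shape, rate=1.0):
    # log density of the Gam(shape, rate) distribution
    return (shape * math.log(rate) - math.lgamma(shape)
            + (shape - 1.0) * math.log(x) - rate * x)

def mh_gamma_proposal(log_target, n_iter, x0=1.0, seed=1):
    """Metropolis-Hastings with the Gam(x_prev, 1) proposal of Steps 3-5."""
    random.seed(seed)
    x, chain = x0, []
    for _ in range(n_iter):
        y = random.gammavariate(x, 1.0)            # propose y ~ Gam(x, 1)
        log_acc = (log_target(y) - log_target(x)
                   + log_gamma_pdf(x, y) - log_gamma_pdf(y, x))  # Hastings ratio
        if math.log(random.random()) < log_acc:
            x = y
        chain.append(x)
    return chain

target = lambda v: log_gamma_pdf(v, 3.0)   # stand-in for a full conditional
chain = mh_gamma_proposal(target, 2000)
burn = len(chain) // 2                      # N = 2M: discard the first half
post_mean = sum(chain[burn:]) / (len(chain) - burn)
```

The post-burn-in average is the MCMC estimate of the posterior mean, as in the displayed estimator above.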

6. Simulation study

In the previous sections, we proposed several estimators of unknown parameters, survival function and hazard function for the three-parameter ELL distribution. To assess the performance of these estimates, we conducted a simulation study in terms of mean squared error (MSE). We used the following steps to generate a sample from this model.

• Step 1. We generated Z1, . . . , Zm from U(0, 1).

• Step 2. For given values R1, . . . , Rm, set $Q_i=Z_i^{1/(i+R_m+\cdots+R_{m-i+1})}$ for i = 1, . . . , m.

• Step 3. From Step 2, we set $U_i=1-Q_mQ_{m-1}\cdots Q_{m-i+1}$.

• Step 4. We generated a censored sample of size m from the ELL distribution using the inverse cdf as follows,

$t_i=F^{-1}(U_i)=\lambda\left(U_i^{-\frac{1}{\alpha}}-1\right)^{-\frac{1}{\beta}}.$

• Step 5. We obtained the MLEs α̂M, β̂M, and λ̂M of the parameters α, β, and λ by iteratively solving the equations in (3.1) using the Newton-Raphson method. Substituting α̂M, β̂M, and λ̂M into (2.3) and (2.4), we obtained the MLE of the survival function ŜM and the MLE of the hazard function ĥM at some t, respectively.

• Step 6. Under the BSEL function, we computed the Bayes estimates (α̂BSL, β̂BSL, λ̂BSL, ŜBSL, ĥBSL) using Lindley’s approximation and (α̂BSTK, β̂BSTK, λ̂BSTK, ŜBSTK, ĥBSTK) using the Tierney-Kadane approximation, respectively. The MCMC estimate of r(α, β, λ) under the BSEL function is evaluated as follows,

$\hat r(\alpha,\beta,\lambda)_{BS}^{MCMC}=w\,r\left(\hat\alpha_M,\hat\beta_M,\hat\lambda_M\right)+\frac{1-w}{N-M}\sum_{j=M+1}^{N}r\left(\alpha^{(j)},\beta^{(j)},\lambda^{(j)}\right),$ (6.1)

where α(j), β(j) and λ(j) are defined in Section 5.3, N is the number of iterations for Gibbs sampling and M is the burn-in period for Gibbs sampling. In (6.1), r(α, β, λ) is substituted by α, β, λ, S (t) and h(t) and the MCMC estimates of α, β, λ, S (t) and h(t) under BSEL function can be obtained.

• Step 7. Under the BLEL function, we computed the Bayes estimates (α̂BLL, β̂BLL, λ̂BLL, ŜBLL, ĥBLL) using Lindley’s approximation and (α̂BLTK, β̂BLTK, λ̂BLTK, ŜBLTK, ĥBLTK) using the Tierney-Kadane approximation, respectively. The MCMC estimates of r(α, β, λ) under the BLEL function are evaluated as follows,

$\hat r(\alpha,\beta,\lambda)_{BL}^{MCMC}=-\frac{1}{c}\ln\left(w\,e^{-c\,r\left(\hat\alpha_M,\hat\beta_M,\hat\lambda_M\right)}+\frac{1-w}{N-M}\sum_{j=M+1}^{N}e^{-c\,r\left(\alpha^{(j)},\beta^{(j)},\lambda^{(j)}\right)}\right),$ (6.2)

where α(j), β(j) and λ(j) are defined in Section 5.3, N is the number of iterations for Gibbs sampling and M is the burn-in period for Gibbs sampling. In (6.2), r(α, β, λ) is substituted by α, β, λ, S (t) and h(t) and the MCMC estimates of α, β, λ, S (t) and h(t) under BLEL function can be obtained.

• Step 8. We repeated above steps 1,000 times and computed the means and MSEs for different censoring schemes.
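Steps 1–4 above can be sketched as follows (plain Python; the seed and censoring scheme are illustrative only):

```python
import random

def progressive_type2_sample(alpha, beta, lam, R, seed=7):
    """Generate a progressive type-II censored sample from the ELL
    distribution by the algorithm in Steps 1-4."""
    random.seed(seed)
    m = len(R)
    Z = [random.random() for _ in range(m)]           # Step 1: Z_i ~ U(0, 1)
    Q = []
    for i in range(1, m + 1):                         # Step 2: Q_i = Z_i^(1/(i + R_m + ... + R_{m-i+1}))
        expo = i + sum(R[m - i:])
        Q.append(Z[i - 1] ** (1.0 / expo))
    U, prod = [], 1.0
    for i in range(1, m + 1):                         # Step 3: U_i = 1 - Q_m Q_{m-1} ... Q_{m-i+1}
        prod *= Q[m - i]
        U.append(1.0 - prod)
    # Step 4: invert the ELL cdf, t = lam * (U^(-1/alpha) - 1)^(-1/beta)
    return [lam * (u ** (-1.0 / alpha) - 1.0) ** (-1.0 / beta) for u in U]
```

By construction the U_i are increasing, so the generated failure times come out ordered, and the effective sample size satisfies n = m + R1 + · · · + Rm.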

The MLEs and all Bayes estimates relative to the BSEL (w = 0.3, 0.5, 0.7) and BLEL (w = 0.3, 0.5, 0.7; c = −1, 1) functions are computed under informative priors. Here, the hyper-parameters are set to (a1 = b1 = b2 = a3 = b3 = 2, a2 = 4). For the informative prior, the values of the hyper-parameters (a1, b1), (a2, b2), and (a3, b3) are chosen so that each prior mean equals the preassigned true value of α, β, or λ. The unknown parameters (α, β, λ) are assumed to take the true values (1, 2, 1). In addition, the estimations are conducted under various progressive type-II censoring schemes, which include complete sampling and type-II censoring. The true values of S(t) and h(t) at t = 0.5 are S(t) = 0.8 and h(t) = 0.8.

We use the convergence diagnostic proposed by Gelman and Rubin (1992) and refined by Brooks and Gelman (1997) to assess convergence. The diagnostic uses parallel chains with different initial values to test whether they all converge to the same target distribution, comparing the within-chain and between-chain variances to check how similar they are. We run five chains starting from different initial values.

To assess the convergence of the Metropolis-Hastings algorithm and Gibbs sampling, the univariate potential scale reduction factors (PSRFs) and the multivariate PSRF (MPSRF) are estimated, respectively. The MPSRF is an upper bound on the largest of the PSRFs over α, β, and λ. For MCMC convergence, we require the PSRF to be less than 1.2; if the PSRF (MPSRF) is less than 1.1, we consider the sampling variability negligible.

Figure 1 and Figure 2 are plots of the estimated PSRFs and MPSRF for the three parameters α, β, and λ. They show that the factors reach 1 after about 1,000 iterations (which we allowed for burn-in), indicating agreement between the different chains. Our simulations suggest that the chains reached their stationary distributions and were run for long enough.

Table 1 provides the average values of the estimates and the lengths of the corresponding 95% approximate confidence intervals for the parameters (entries within parentheses). The 95% approximate confidence intervals for α, β, and λ contain the true values α = 1, β = 2, and λ = 1. For fixed n, the interval lengths decrease as m increases.

Additionally, Table 2 presents the variances and covariances of α̂, β̂, and λ̂. The comparisons of the variance-covariance matrices of the estimators are made using the observed information matrix, and we study how much the estimated means and variances are affected by the censoring schemes. From the results in Table 2, it is seen that the variances of the estimates decrease as m increases, and that the scheme in which all censored items are removed at the first failure t1 performs better than the other schemes.

Tables 3–7 present the estimated means of the MLEs and of the Bayes estimates relative to the BSEL and BLEL functions for the parameters, survival function, and hazard function, using Lindley’s approximation, the Tierney-Kadane approximation, and MCMC methods, together with the corresponding MSEs for different censoring schemes. Our experiments cover several censoring schemes: complete sampling (all items failed), type-II censoring, removal of all censored items at the first failure, and general progressive type-II censoring. Tables 3 and 7 indicate that the Bayes estimates obtained using MCMC methods are better than the MLEs and the Bayes estimates using Lindley’s approximation and the Tierney-Kadane approximation. Tables 4 and 5 reveal that the Bayes estimates obtained using Lindley’s approximation are better than the MLEs and the Bayes estimates using the Tierney-Kadane approximation and MCMC methods. This holds for both the BSEL and BLEL functions. Also, the Bayes estimates relative to the BSEL and BLEL functions using Lindley’s approximation, the Tierney-Kadane approximation, and MCMC methods are closer to the preassigned values of the parameters than the MLEs in terms of MSE, for both complete and type-II censored samples. In these tables, (0 * 50) means that the total number of 0’s is 50, and (2, 0 * 7) * 5 means that the pattern (2, 0, 0, 0, 0, 0, 0, 0) is repeated 5 times.

7. Conclusions

In this paper, based on progressive type-II censored samples, we have derived the MLEs and Bayes estimators for the two shape parameters, the scale parameter, the survival function, and the hazard function of the exponentiated log-logistic (ELL) distribution. When both goodness of fit and precision of estimation are considered important, a balanced loss function is appropriate, so we considered both the balanced squared error loss (BSEL) and balanced linex loss (BLEL) functions. We used the Newton-Raphson method, Lindley’s approximation, the Tierney-Kadane approximation, and MCMC methods to compute these estimates, and the Gelman and Rubin diagnostic to check the convergence of the MCMC chains. To assess the performance of these estimates, we conducted a simulation study in terms of MSE. The results obtained from the simulation demonstrate the superiority of the Bayes estimates relative to balanced loss functions (BLFs) under informative priors.

Figures
Fig. 1. Plots of the estimated PSRF sequences for α, β and λ.
Fig. 2. Plot of the estimated MPSRF sequence based upon α, β and λ.
TABLES

### Table 1

Estimated means and lengths of 95% confidence intervals (CI) for α = 1, β = 2, and λ = 1 (n = 50, m = 25–45)

| m (degree of censoring) | Censoring scheme (R1, . . . , Rm) | α̂ (length of CI) | β̂ (length of CI) | λ̂ (length of CI) |
|---|---|---|---|---|
| 25 (50%) | (0, . . . , 25) | 1.109 (4.795) | 1.959 (4.677) | 0.975 (3.000) |
| | (25, . . . , 0) | 0.974 (3.332) | 2.310 (3.270) | 1.116 (2.262) |
| | (5, 0 * 4) * 5 | 0.985 (3.412) | 2.255 (3.719) | 1.063 (2.212) |
| 30 (40%) | (0, . . . , 20) | 1.034 (3.943) | 2.114 (4.219) | 1.020 (2.540) |
| | (20, . . . , 0) | 0.976 (3.095) | 2.275 (2.874) | 1.112 (2.089) |
| | (4, 0 * 5) * 5 | 0.981 (3.202) | 2.257 (3.276) | 1.077 (2.077) |
| 35 (30%) | (0, . . . , 15) | 1.012 (3.458) | 2.172 (3.618) | 1.033 (2.251) |
| | (15, . . . , 0) | 0.986 (2.961) | 2.241 (2.626) | 1.106 (1.974) |
| | (3, 0 * 6) * 5 | 0.985 (2.980) | 2.255 (2.908) | 1.063 (1.966) |
| 40 (20%) | (0, . . . , 10) | 0.989 (3.068) | 2.205 (3.071) | 1.068 (2.036) |
| | (10, . . . , 0) | 1.000 (2.899) | 2.228 (2.430) | 1.095 (1.837) |
| | (2, 0 * 7) * 5 | 0.985 (2.838) | 2.235 (2.585) | 1.085 (1.828) |
| 45 (10%) | (0, . . . , 5) | 0.984 (2.781) | 2.209 (2.542) | 1.088 (1.845) |
| | (5, . . . , 0) | 0.980 (2.653) | 2.221 (2.311) | 1.104 (1.783) |
| | (1, 0 * 8) * 5 | 0.986 (2.698) | 2.217 (2.371) | 1.095 (1.775) |

### Table 2

Variances and covariances for α = 1, β = 2 and λ = 1 (n=50,m=25–45)

| m (degree of censoring) | Scheme (R1, . . . , Rm) | Var(α̂) | Var(β̂) | Var(λ̂) | Cov(α̂, β̂) | Cov(α̂, λ̂) | Cov(β̂, λ̂) |
|---|---|---|---|---|---|---|---|
| 25 (50%) | (0, . . . , 25) | 1.496 | 1.423 | 0.586 | −1.222 | −0.874 | 0.767 |
| | (25, . . . , 0) | 0.723 | 0.696 | 0.333 | −0.434 | −0.390 | 0.349 |
| | (5, 0 * 4) * 5 | 0.758 | 0.900 | 0.318 | −0.577 | −0.421 | 0.392 |
| 30 (40%) | (0, . . . , 20) | 1.012 | 1.158 | 0.420 | −0.855 | −0.596 | 0.575 |
| | (20, . . . , 0) | 0.623 | 0.537 | 0.284 | −0.356 | −0.337 | 0.285 |
| | (4, 0 * 5) * 5 | 0.667 | 0.699 | 0.281 | −0.459 | −0.369 | 0.329 |
| 35 (30%) | (0, . . . , 15) | 0.778 | 0.852 | 0.330 | −0.615 | −0.454 | 0.435 |
| | (15, . . . , 0) | 0.571 | 0.449 | 0.254 | −0.315 | −0.304 | 0.249 |
| | (3, 0 * 6) * 5 | 0.578 | 0.550 | 0.252 | −0.372 | −0.319 | 0.278 |
| 40 (20%) | (0, . . . , 10) | 0.612 | 0.614 | 0.270 | −0.420 | −0.353 | 0.321 |
| | (10, . . . , 0) | 0.547 | 0.384 | 0.220 | −0.276 | −0.278 | 0.214 |
| | (2, 0 * 7) * 5 | 0.524 | 0.435 | 0.218 | −0.302 | −0.282 | 0.227 |
| 45 (10%) | (0, . . . , 5) | 0.503 | 0.420 | 0.221 | −0.303 | −0.279 | 0.239 |
| | (5, . . . , 0) | 0.458 | 0.348 | 0.207 | −0.249 | −0.250 | 0.201 |
| | (1, 0 * 8) * 5 | 0.474 | 0.366 | 0.205 | −0.263 | −0.256 | 0.206 |

### Table 3

Estimated means and MSEs (in parentheses) of MLEs and Bayes estimates relative to BSEL and BLEL functions for α = 1 (n = 50)

| Scheme | MLE | w | BSEL: Lindley | BSEL: TK | BSEL: MCMC | c | BLEL: Lindley | BLEL: TK | BLEL: MCMC |
|---|---|---|---|---|---|---|---|---|---|
| (0 * 50) | 0.670 (0.153) | 0.3 | 0.900 (0.056) | 0.811 (0.064) | 0.866 (0.039) | −1 | 0.913 (0.067) | 0.868 (0.054) | 0.904 (0.032) |
| | | | | | | 1 | 0.882 (0.051) | 0.770 (0.077) | 0.835 (0.047) |
| | | 0.5 | 0.834 (0.070) | 0.771 (0.085) | 0.810 (0.062) | −1 | 0.850 (0.077) | 0.816 (0.072) | 0.843 (0.051) |
| | | | | | | 1 | 0.814 (0.068) | 0.740 (0.097) | 0.784 (0.072) |
| | | 0.7 | 0.768 (0.095) | 0.730 (0.109) | 0.754 (0.093) | −1 | 0.782 (0.097) | 0.760 (0.097) | 0.778 (0.081) |
| | | | | | | 1 | 0.753 (0.095) | 0.711 (0.118) | 0.737 (0.102) |
| (0 * 39, 10) | 0.913 (0.116) | 0.3 | 1.121 (0.069) | 0.994 (0.041) | 1.050 (0.030) | −1 | 1.213 (0.129) | 1.084 (0.059) | 1.098 (0.038) |
| | | | | | | 1 | 1.045 (0.046) | 0.934 (0.039) | 1.010 (0.026) |
| | | 0.5 | 1.062 (0.051) | 0.971 (0.057) | 1.011 (0.044) | −1 | 1.140 (0.104) | 1.040 (0.065) | 1.051 (0.046) |
| | | | | | | 1 | 0.989 (0.028) | 0.926 (0.056) | 0.978 (0.044) |
| | | 0.7 | 1.002 (0.058) | 0.948 (0.078) | 0.972 (0.067) | −1 | 1.058 (0.093) | 0.992 (0.079) | 1.000 (0.064) |
| | | | | | | 1 | 0.949 (0.041) | 0.919 (0.076) | 0.950 (0.068) |
| (10, 0 * 39) | 0.690 (0.152) | 0.3 | 0.947 (0.057) | 0.839 (0.059) | 0.891 (0.036) | −1 | 0.964 (0.078) | 0.905 (0.051) | 0.932 (0.030) |
| | | | | | | 1 | 0.924 (0.044) | 0.793 (0.071) | 0.858 (0.043) |
| | | 0.5 | 0.874 (0.067) | 0.796 (0.079) | 0.834 (0.058) | −1 | 0.894 (0.082) | 0.848 (0.068) | 0.869 (0.048) |
| | | | | | | 1 | 0.847 (0.058) | 0.762 (0.091) | 0.806 (0.068) |
| | | 0.7 | 0.800 (0.090) | 0.754 (0.105) | 0.776 (0.090) | −1 | 0.818 (0.098) | 0.788 (0.093) | 0.802 (0.078) |
| | | | | | | 1 | 0.779 (0.087) | 0.732 (0.114) | 0.757 (0.099) |
| (2, 0 * 7) * 5 | 0.739 (0.136) | 0.3 | 1.004 (0.060) | 0.877 (0.051) | 0.931 (0.031) | −1 | 1.031 (0.089) | 0.950 (0.049) | 0.975 (0.029) |
| | | | | | | 1 | 0.971 (0.044) | 0.829 (0.060) | 0.895 (0.036) |
| | | 0.5 | 0.928 (0.060) | 0.838 (0.070) | 0.876 (0.051) | −1 | 0.957 (0.084) | 0.895 (0.062) | 0.914 (0.044) |
| | | | | | | 1 | 0.893 (0.048) | 0.801 (0.078) | 0.847 (0.058) |
| | | 0.7 | 0.852 (0.078) | 0.798 (0.093) | 0.821 (0.079) | −1 | 0.877 (0.091) | 0.836 (0.084) | 0.849 (0.069) |
| | | | | | | 1 | 0.825 (0.072) | 0.775 (0.100) | 0.801 (0.086) |

### Table 4

Estimated means and MSEs (in parentheses) of MLEs and Bayes estimates relative to BSEL and BLEL functions for β = 2 (n = 50)

| Scheme | MLE | w | BSEL: Lindley | BSEL: TK | BSEL: MCMC | c | BLEL: Lindley | BLEL: TK | BLEL: MCMC |
|---|---|---|---|---|---|---|---|---|---|
| (0 * 50) | 2.453 (0.375) | 0.3 | 2.117 (0.073) | 2.278 (0.163) | 2.127 (0.077) | −1 | 2.201 (0.104) | 2.355 (0.229) | 2.183 (0.110) |
| | | | | | | 1 | 2.087 (0.075) | 2.204 (0.112) | 2.072 (0.053) |
| | | 0.5 | 2.213 (0.123) | 2.328 (0.214) | 2.220 (0.133) | −1 | 2.287 (0.168) | 2.385 (0.270) | 2.271 (0.177) |
| | | | | | | 1 | 2.175 (0.113) | 2.267 (0.160) | 2.162 (0.091) |
| | | 0.7 | 2.309 (0.203) | 2.378 (0.273) | 2.313 (0.212) | −1 | 2.360 (0.247) | 2.413 (0.311) | 2.349 (0.253) |
| | | | | | | 1 | 2.274 (0.181) | 2.335 (0.226) | 2.265 (0.162) |
| (0 * 39, 10) | 2.149 (0.210) | 0.3 | 1.950 (0.047) | 2.068 (0.088) | 1.896 (0.066) | −1 | 2.099 (0.068) | 2.157 (0.126) | 1.946 (0.073) |
| | | | | | | 1 | 1.882 (0.058) | 1.993 (0.068) | 1.846 (0.067) |
| | | 0.5 | 2.007 (0.054) | 2.091 (0.116) | 1.968 (0.083) | −1 | 2.125 (0.092) | 2.157 (0.147) | 2.013 (0.101) |
| | | | | | | 1 | 1.939 (0.056) | 2.032 (0.091) | 1.918 (0.070) |
| | | 0.7 | 2.064 (0.093) | 2.114 (0.149) | 2.041 (0.120) | −1 | 2.140 (0.132) | 2.155 (0.171) | 2.072 (0.140) |
| | | | | | | 1 | 2.009 (0.082) | 2.075 (0.126) | 2.000 (0.096) |
| (10, 0 * 39) | 2.432 (0.375) | 0.3 | 2.058 (0.062) | 2.246 (0.151) | 2.090 (0.070) | −1 | 2.146 (0.085) | 2.331 (0.219) | 2.151 (0.102) |
| | | | | | | 1 | 2.033 (0.070) | 2.166 (0.099) | 2.029 (0.048) |
| | | 0.5 | 2.165 (0.106) | 2.299 (0.203) | 2.188 (0.124) | −1 | 2.247 (0.148) | 2.363 (0.262) | 2.243 (0.170) |
| | | | | | | 1 | 2.127 (0.100) | 2.232 (0.147) | 2.124 (0.082) |
| | | 0.7 | 2.272 (0.186) | 2.353 (0.265) | 2.286 (0.205) | −1 | 2.330 (0.232) | 2.392 (0.307) | 2.325 (0.248) |
| | | | | | | 1 | 2.234 (0.166) | 2.305 (0.215) | 2.232 (0.151) |
| (2, 0 * 7) * 5 | 2.366 (0.307) | 0.3 | 2.016 (0.052) | 2.202 (0.121) | 2.047 (0.058) | −1 | 2.123 (0.073) | 2.290 (0.181) | 2.106 (0.082) |
| | | | | | | 1 | 1.979 (0.059) | 2.122 (0.079) | 1.989 (0.044) |
| | | 0.5 | 2.116 (0.080) | 2.249 (0.164) | 2.138 (0.099) | −1 | 2.210 (0.120) | 2.314 (0.216) | 2.191 (0.136) |
| | | | | | | 1 | 2.069 (0.076) | 2.183 (0.117) | 2.078 (0.067) |
| | | 0.7 | 2.216 (0.144) | 2.296 (0.215) | 2.229 (0.164) | −1 | 2.280 (0.188) | 2.336 (0.251) | 2.266 (0.200) |
| | | | | | | 1 | 2.173 (0.126) | 2.250 (0.173) | 2.179 (0.121) |

### Table 5

Estimated means and MSEs (in parentheses) of MLEs and Bayes estimates relative to BSEL and BLEL functions for λ = 1 (n = 50)

| Scheme | MLE | w | BSEL: Lindley | BSEL: TK | BSEL: MCMC | c | BLEL: Lindley | BLEL: TK | BLEL: MCMC |
|---|---|---|---|---|---|---|---|---|---|
| (0 * 50) | 1.454 (0.276) | 0.3 | 1.205 (0.069) | 1.327 (0.138) | 1.226 (0.073) | −1 | 1.253 (0.088) | 1.376 (0.175) | 1.261 (0.094) |
| | | | | | | 1 | 1.179 (0.064) | 1.277 (0.104) | 1.190 (0.055) |
| | | 0.5 | 1.276 (0.108) | 1.363 (0.172) | 1.291 (0.117) | −1 | 1.319 (0.131) | 1.400 (0.203) | 1.322 (0.141) |
| | | | | | | 1 | 1.248 (0.098) | 1.323 (0.140) | 1.257 (0.093) |
| | | 0.7 | 1.347 (0.163) | 1.400 (0.210) | 1.356 (0.172) | −1 | 1.378 (0.184) | 1.422 (0.231) | 1.378 (0.192) |
| | | | | | | 1 | 1.324 (0.149) | 1.373 (0.186) | 1.330 (0.148) |
| (0 * 39, 10) | 1.201 (0.127) | 0.3 | 1.077 (0.031) | 1.162 (0.057) | 1.063 (0.024) | −1 | 1.137 (0.045) | 1.213 (0.078) | 1.092 (0.032) |
| | | | | | | 1 | 1.033 (0.032) | 1.116 (0.041) | 1.033 (0.018) |
| | | 0.5 | 1.112 (0.033) | 1.173 (0.073) | 1.102 (0.044) | −1 | 1.167 (0.049) | 1.211 (0.089) | 1.127 (0.054) |
| | | | | | | 1 | 1.068 (0.029) | 1.138 (0.059) | 1.075 (0.034) |
| | | 0.7 | 1.148 (0.055) | 1.184 (0.093) | 1.141 (0.071) | −1 | 1.187 (0.070) | 1.208 (0.102) | 1.159 (0.080) |
| | | | | | | 1 | 1.113 (0.048) | 1.161 (0.082) | 1.121 (0.061) |
| (10, 0 * 39) | 1.466 (0.291) | 0.3 | 1.191 (0.064) | 1.332 (0.141) | 1.227 (0.073) | −1 | 1.245 (0.086) | 1.387 (0.184) | 1.266 (0.097) |
| | | | | | | 1 | 1.163 (0.059) | 1.276 (0.103) | 1.188 (0.053) |
| | | 0.5 | 1.270 (0.103) | 1.370 (0.178) | 1.295 (0.120) | −1 | 1.320 (0.130) | 1.411 (0.213) | 1.329 (0.146) |
| | | | | | | 1 | 1.237 (0.092) | 1.325 (0.142) | 1.257 (0.093) |
| | | 0.7 | 1.348 (0.163) | 1.408 (0.220) | 1.363 (0.179) | −1 | 1.384 (0.189) | 1.434 (0.244) | 1.387 (0.201) |
| | | | | | | 1 | 1.320 (0.148) | 1.378 (0.191) | 1.334 (0.152) |
| (2, 0 * 7) * 5 | 1.395 (0.230) | 0.3 | 1.151 (0.049) | 1.286 (0.112) | 1.188 (0.056) | −1 | 1.206 (0.069) | 1.338 (0.147) | 1.223 (0.074) |
| | | | | | | 1 | 1.118 (0.045) | 1.235 (0.081) | 1.152 (0.041) |
| | | 0.5 | 1.220 (0.078) | 1.317 (0.141) | 1.247 (0.093) | −1 | 1.271 (0.100) | 1.356 (0.169) | 1.277 (0.113) |
| | | | | | | 1 | 1.186 (0.069) | 1.276 (0.112) | 1.213 (0.072) |
| | | 0.7 | 1.290 (0.124) | 1.348 (0.174) | 1.306 (0.140) | −1 | 1.326 (0.146) | 1.372 (0.193) | 1.327 (0.157) |
| | | | | | | 1 | 1.262 (0.112) | 1.321 (0.151) | 1.280 (0.118) |

### Table 6

Estimated means and MSEs (in parentheses) of MLEs and Bayes estimates relative to BSEL and BLEL functions for S(t = 0.5) = 0.8 (n = 50)

| Scheme | MLE | w | BSEL: Lindley | BSEL: TK | BSEL: MCMC | c | BLEL: Lindley | BLEL: TK | BLEL: MCMC |
|---|---|---|---|---|---|---|---|---|---|
| (0 * 50) | 0.811 (0.002) | 0.3 | 0.805 (0.002) | 0.804 (0.002) | 0.797 (0.002) | −1 | 0.805 (0.002) | 0.805 (0.002) | 0.797 (0.002) |
| | | | | | | 1 | 0.804 (0.002) | 0.804 (0.002) | 0.796 (0.002) |
| | | 0.5 | 0.807 (0.002) | 0.806 (0.002) | 0.801 (0.002) | −1 | 0.807 (0.002) | 0.807 (0.002) | 0.801 (0.002) |
| | | | | | | 1 | 0.806 (0.002) | 0.806 (0.002) | 0.800 (0.002) |
| | | 0.7 | 0.809 (0.002) | 0.808 (0.002) | 0.805 (0.002) | −1 | 0.809 (0.002) | 0.809 (0.002) | 0.805 (0.002) |
| | | | | | | 1 | 0.808 (0.002) | 0.808 (0.002) | 0.805 (0.002) |
| (0 * 39, 10) | 0.811 (0.002) | 0.3 | 0.801 (0.002) | 0.803 (0.002) | 0.791 (0.002) | −1 | 0.802 (0.002) | 0.804 (0.002) | 0.791 (0.002) |
| | | | | | | 1 | 0.800 (0.002) | 0.802 (0.002) | 0.790 (0.002) |
| | | 0.5 | 0.804 (0.002) | 0.805 (0.002) | 0.796 (0.002) | −1 | 0.804 (0.002) | 0.806 (0.002) | 0.797 (0.002) |
| | | | | | | 1 | 0.803 (0.002) | 0.805 (0.002) | 0.796 (0.002) |
| | | 0.7 | 0.807 (0.002) | 0.807 (0.002) | 0.802 (0.002) | −1 | 0.807 (0.002) | 0.808 (0.002) | 0.802 (0.002) |
| | | | | | | 1 | 0.806 (0.002) | 0.807 (0.002) | 0.802 (0.002) |
| (10, 0 * 39) | 0.817 (0.002) | 0.3 | 0.809 (0.002) | 0.809 (0.002) | 0.800 (0.002) | −1 | 0.809 (0.002) | 0.810 (0.002) | 0.800 (0.002) |
| | | | | | | 1 | 0.808 (0.002) | 0.808 (0.002) | 0.799 (0.002) |
| | | 0.5 | 0.811 (0.002) | 0.811 (0.002) | 0.805 (0.002) | −1 | 0.812 (0.002) | 0.812 (0.002) | 0.805 (0.002) |
| | | | | | | 1 | 0.811 (0.002) | 0.811 (0.002) | 0.804 (0.002) |
| | | 0.7 | 0.814 (0.002) | 0.814 (0.002) | 0.810 (0.002) | −1 | 0.814 (0.002) | 0.814 (0.002) | 0.810 (0.002) |
| | | | | | | 1 | 0.813 (0.002) | 0.813 (0.002) | 0.809 (0.002) |
| (2, 0 * 7) * 5 | 0.817 (0.002) | 0.3 | 0.808 (0.002) | 0.809 (0.002) | 0.800 (0.002) | −1 | 0.808 (0.002) | 0.810 (0.002) | 0.801 (0.002) |
| | | | | | | 1 | 0.807 (0.002) | 0.808 (0.002) | 0.800 (0.002) |
| | | 0.5 | 0.810 (0.002) | 0.811 (0.002) | 0.805 (0.002) | −1 | 0.811 (0.002) | 0.812 (0.002) | 0.805 (0.002) |
| | | | | | | 1 | 0.810 (0.002) | 0.810 (0.002) | 0.804 (0.002) |
| | | 0.7 | 0.813 (0.002) | 0.813 (0.002) | 0.810 (0.002) | −1 | 0.813 (0.002) | 0.814 (0.002) | 0.810 (0.002) |
| | | | | | | 1 | 0.813 (0.002) | 0.813 (0.002) | 0.809 (0.002) |

### Table 7

Estimated means and MSEs (in parentheses) of MLEs and Bayes estimates relative to BSEL and BLEL functions for h(t = 0.5) = 0.8 (n = 50)

| Scheme | MLE | w | BSEL: Lindley | BSEL: TK | BSEL: MCMC | c | BLEL: Lindley | BLEL: TK | BLEL: MCMC |
|---|---|---|---|---|---|---|---|---|---|
| (0 * 50) | 0.653 (0.034) | 0.3 | 0.717 (0.020) | 0.682 (0.026) | 0.724 (0.018) | −1 | 0.722 (0.019) | 0.696 (0.023) | 0.730 (0.018) |
| | | | | | | 1 | 0.711 (0.020) | 0.679 (0.026) | 0.717 (0.019) |
| | | 0.5 | 0.699 (0.023) | 0.674 (0.028) | 0.703 (0.022) | −1 | 0.703 (0.022) | 0.684 (0.026) | 0.709 (0.021) |
| | | | | | | 1 | 0.694 (0.023) | 0.672 (0.028) | 0.698 (0.022) |
| | | 0.7 | 0.680 (0.026) | 0.665 (0.030) | 0.683 (0.026) | −1 | 0.683 (0.026) | 0.672 (0.028) | 0.687 (0.025) |
| | | | | | | 1 | 0.677 (0.027) | 0.664 (0.030) | 0.680 (0.026) |
| (0 * 39, 10) | 0.701 (0.025) | 0.3 | 0.736 (0.017) | 0.711 (0.022) | 0.750 (0.015) | −1 | 0.743 (0.017) | 0.724 (0.019) | 0.757 (0.015) |
| | | | | | | 1 | 0.728 (0.018) | 0.707 (0.022) | 0.744 (0.015) |
| | | 0.5 | 0.726 (0.018) | 0.708 (0.023) | 0.736 (0.017) | −1 | 0.732 (0.018) | 0.718 (0.021) | 0.742 (0.017) |
| | | | | | | 1 | 0.720 (0.019) | 0.705 (0.022) | 0.731 (0.018) |
| | | 0.7 | 0.716 (0.020) | 0.705 (0.024) | 0.722 (0.020) | −1 | 0.720 (0.020) | 0.711 (0.022) | 0.725 (0.019) |
| | | | | | | 1 | 0.712 (0.021) | 0.703 (0.023) | 0.719 (0.020) |
| (10, 0 * 39) | 0.634 (0.040) | 0.3 | 0.704 (0.024) | 0.665 (0.032) | 0.709 (0.022) | −1 | 0.709 (0.023) | 0.681 (0.028) | 0.717 (0.021) |
| | | | | | | 1 | 0.697 (0.024) | 0.661 (0.032) | 0.702 (0.023) |
| | | 0.5 | 0.684 (0.027) | 0.656 (0.034) | 0.687 (0.026) | −1 | 0.689 (0.026) | 0.668 (0.031) | 0.694 (0.025) |
| | | | | | | 1 | 0.679 (0.028) | 0.653 (0.034) | 0.682 (0.027) |
| | | 0.7 | 0.664 (0.032) | 0.647 (0.036) | 0.666 (0.031) | −1 | 0.667 (0.031) | 0.654 (0.034) | 0.670 (0.030) |
| | | | | | | 1 | 0.660 (0.032) | 0.645 (0.037) | 0.662 (0.032) |
| (2, 0 * 7) * 5 | 0.651 (0.034) | 0.3 | 0.713 (0.021) | 0.674 (0.028) | 0.714 (0.020) | −1 | 0.719 (0.020) | 0.689 (0.025) | 0.720 (0.019) |
| | | | | | | 1 | 0.706 (0.022) | 0.671 (0.028) | 0.707 (0.020) |
| | | 0.5 | 0.695 (0.023) | 0.667 (0.030) | 0.696 (0.023) | −1 | 0.700 (0.023) | 0.678 (0.027) | 0.701 (0.022) |
| | | | | | | 1 | 0.690 (0.024) | 0.665 (0.030) | 0.691 (0.024) |
| | | 0.7 | 0.677 (0.027) | 0.661 (0.031) | 0.678 (0.027) | −1 | 0.681 (0.026) | 0.667 (0.030) | 0.681 (0.026) |
| | | | | | | 1 | 0.674 (0.028) | 0.659 (0.032) | 0.674 (0.028) |

References
1. Akaike H (1974). A new look at the statistical model identification. IEEE Transactions on Automatic Control, 19, 716-723.
2. Balakrishnan N and Sandhu RA (1995). A simple simulation algorithm for generating progressive type-II censored samples. The American Statistician, 49, 229-230.
3. Bennett S (1983). Log-logistic regression models for survival data. Journal of the Royal Statistical Society. Series C (Applied Statistics), 32, 165-171.
4. Brooks SP and Gelman A (1997). General methods for monitoring convergence of iterative simulations. Journal of Computational and Graphical Statistics, 7, 434-455.
5. Chaudhary AK and Kumar V (2014). Bayesian estimation of three parameter exponentiated log-logistic distribution. International Journal of Statistika and Mathematika, 9, 66-81.
6. Gelfand AE and Smith AFM (1990). Sampling-based approaches to calculating marginal densities. Journal of the American Statistical Association, 85, 398-409.
7. Gelman A and Rubin DB (1992). Inference from iterative simulation using multiple sequences. Statistical Science, 7, 457-511.
8. Geman S and Geman D (1984). Stochastic relaxation, Gibbs distributions, and the Bayesian restoration of images. IEEE Transactions on Pattern Analysis and Machine Intelligence, 6, 721-741.
9. Hastings WK (1970). Monte Carlo sampling methods using Markov chains and their applications. Biometrika, 57, 97-109.
10. Jojani MJ, Marchand E, and Parsian A (2012). Bayesian and robust Bayesian analysis under a general class of balanced loss functions. Statistical Papers, 53, 51-60.
11. Jung M and Chung Y (2018). Bayesian inference of three-parameter bathtub-shaped lifetime distribution. Communications in Statistics - Theory and Methods, 47, 4229-4241.
12. Kim C, Jung J, and Chung Y (2011). Bayesian estimation for the exponentiated Weibull model under Type II progressive censoring. Statistical Papers, 52, 53-70.
13. Lindley DV (1980). Approximate Bayesian methods. Trabajos de Estadistica Y de Investigacion Operativa, 31, 223-245.
14. Metropolis N, Rosenbluth AW, Rosenbluth MN, Teller AH, and Teller E (1953). Equation of state calculations by fast computing machines. The Journal of Chemical Physics, 21, 1087-1091.
15. Rosaiah K, Kantam RRL, and Kumar ChS (2006). Reliability test plans for exponentiated log-logistic distribution. Economic Quality Control, 21, 279-289.
16. Sel S, Jung M, and Chung Y (2018). Bayesian and maximum likelihood estimations from parameters of McDonald extended Weibull model based on progressive type-II censoring. Journal of Statistical Theory and Practice, 1-24.
17. Tierney L and Kadane JB (1986). Accurate approximations for posterior moments and marginal densities. Journal of the American Statistical Association, 81, 82-86.
18. Varian HR (1975). A Bayesian approach to real estate assessment. Studies in Bayesian Econometrics and Statistics in Honor of Leonard J Savage, 195-208.
19. Zellner A (1994). Bayesian and non-Bayesian estimation using balanced loss functions. Statistical Decision Theory and Related Topics V, (pp. 377-390), New York, Springer-Verlag.