
Maximum penalized likelihood estimation for a stress-strength reliability model using complete and incomplete data

Marwa Khalil Hassan1,a,b

aDepartment of Mathematics, Ain Shams University, Egypt;
bDepartment of Mathematics, Imam Abdulrahman Bin Faisal University, Saudi Arabia
Correspondence to: 1Department of Mathematics, Faculty of Education, Ain Shams University, Cairo, Egypt and Department of Mathematics, College of Science, Imam Abdulrahman Bin Faisal University, Saudi Arabia. E-mail: marwa_khalil2006@hotmail.com
Received February 13, 2018; Revised March 28, 2018; Accepted May 9, 2018.
Abstract

The two parameter negative exponential distribution has many practical applications in queuing theory, such as the service times of agents in a system, the time before your next telephone call, the time until a radioactive particle decays, the distance between mutations on a DNA strand, and the extreme values of annual snowfall or rainfall; consequently, it has many applications in reliability systems. This paper considers an estimation problem for the stress-strength model with the two parameter negative exponential distribution. We introduce a maximum penalized likelihood method and a Bayes estimator using the Lindley approximation to estimate the stress-strength model, and compare the proposed estimators with the regular maximum likelihood estimator for complete data. We also introduce a maximum penalized likelihood method and a Bayes estimator using a Markov chain Monte Carlo technique for incomplete data. A Monte Carlo simulation study is performed to compare the stress-strength model estimates. Real data are used as a practical application of the proposed model.

Keywords: maximum penalized likelihood estimator, Lindley approximation, progressive type II censored data, Bayes estimator, simulation, Markov chain Monte Carlo technique, two parameter negative exponential distribution
1. Introduction

Let X be a two parameter negative exponential distribution with scale parameter λ measuring the failure rate and location parameter μ. Then the probability density function and cumulative distribution function of X are

$f(x;\mu,\lambda)=\frac{1}{\lambda}\exp\left[-\frac{x-\mu}{\lambda}\right]I(x>\mu),\quad x>\mu,\ \mu\ge 0,\ \lambda>0$

and

$F(x;\mu,\lambda)=1-\exp\left[-\frac{x-\mu}{\lambda}\right]I(x>\mu),\quad x>\mu,$

where I(·) is the indicator function of (·), that is, I(x > μ) = 1 for all x > μ. Figure 1 shows the graph of the two parameter negative exponential distribution for different values of μ and λ; we observe that the two parameter negative exponential distribution is positively skewed.
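The density and distribution function above can be written down directly. The following is a minimal Python sketch (function names are illustrative, not from the paper):

```python
import math

def neg_exp_pdf(x, mu, lam):
    """Density of the two parameter negative exponential distribution:
    (1/lam) * exp(-(x - mu)/lam) for x > mu, zero otherwise."""
    if x <= mu:
        return 0.0
    return math.exp(-(x - mu) / lam) / lam

def neg_exp_cdf(x, mu, lam):
    """Distribution function: 1 - exp(-(x - mu)/lam) for x > mu, zero otherwise."""
    if x <= mu:
        return 0.0
    return 1.0 - math.exp(-(x - mu) / lam)
```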

The two parameter negative exponential distribution plays an important role in modeling in many fields such as queuing theory, physics, biology, hydrology, reliability, and reliability engineering. In this paper, we examine the problem of estimating the reliability parameter, i.e., R = P[Y < X]. The reliability parameter is referred to as the stress-strength model and arises in quality control, engineering statistics, and other fields. In the reliability context, the stress-strength model describes the life of a component that has a random strength variable X and is subjected to a random stress variable Y. The system fails whenever the stress exceeds the strength. The estimation of the stress-strength model when X and Y are random variables with a specified distribution has been discussed by many authors, starting with Birnbaum (1956), Basu (1964), Downtown (1973), Tong (1974, 1977), Beg (1980), Iwase (1987), and McCool (1991), and more recently by Ali et al. (2012), Greco and Ventura (2011), Rezaei et al. (2010), Wong (2012), Shahsanaei and Daneshkhah (2013), Hussian (2013), Al-Mutairi et al. (2013), Ghitany et al. (2015), and Najarzadegan et al. (2016). The estimation of the reliability parameter when X and Y are independent exponential random variables has been discussed by several authors such as Enis and Geisser (1971), Kelley et al. (1976), and Sathe and Shah (1981). For the case in which X and Y are independent two parameter negative exponential random variables, Pal et al. (2005) derived the distribution of the UMVUE of the reliability parameter; subsequently, Krishnamoorthy et al. (2007) studied the problem of hypothesis testing and interval estimation of the reliability parameter.

In this paper, we introduce a maximum penalized likelihood estimator of the reliability parameter when X and Y are independent two parameter negative exponential random variables, as well as its Bayes estimator using the Lindley approximation. This paper is organized as follows: the stress-strength parameter when X and Y are independent two parameter negative exponential random variables with parameters (λ1, μ1) and (λ2, μ2), respectively, is determined in Section 2. In Section 3, we present the maximum penalized likelihood estimator of the reliability parameter for complete data, compare it with the maximum likelihood estimator, and introduce the Bayes estimator of the reliability parameter using the Lindley approximation. In Section 4, we present the maximum penalized likelihood estimator and the Bayes estimator of the reliability parameter using a Markov chain Monte Carlo (MCMC) technique for incomplete data. The performance of these estimators is assessed by a simulation study in Section 5. A practical application is presented in Section 6. The conclusions of this paper are given in Section 7.

2. Stress-strength model

Let X and Y be independent strength and stress random variables that follow the two parameter negative exponential distribution with parameters (λ1, μ1) and (λ2, μ2), respectively. Then the stress-strength model R is defined as

$R=P[Y<X].$

For all cases of (μ1, μ2) we obtain the formula of R as:

• If μ1 = μ2, then $R=\frac{\lambda_1}{\lambda_1+\lambda_2}.$

• If μ1 ≠ μ2, then $R=\left[1-\frac{\lambda_2}{\lambda_1+\lambda_2}e^{\frac{\mu_2-\mu_1}{\lambda_2}}\right]I(\mu_1>\mu_2)+\left[\frac{\lambda_1}{\lambda_1+\lambda_2}e^{\frac{\mu_1-\mu_2}{\lambda_1}}\right]I(\mu_1<\mu_2),$

where I(·) is the indicator function.
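The piecewise formula above translates directly into code. A minimal sketch (the function name is illustrative):

```python
import math

def stress_strength_R(lam1, mu1, lam2, mu2):
    """R = P[Y < X] for independent two parameter negative exponential
    variables: X ~ NED(lam1, mu1) is the strength, Y ~ NED(lam2, mu2) the stress."""
    if mu1 == mu2:
        return lam1 / (lam1 + lam2)
    if mu1 > mu2:
        # exponent (mu2 - mu1)/lam2 is negative here
        return 1.0 - (lam2 / (lam1 + lam2)) * math.exp((mu2 - mu1) / lam2)
    # mu1 < mu2: exponent (mu1 - mu2)/lam1 is negative here
    return (lam1 / (lam1 + lam2)) * math.exp((mu1 - mu2) / lam1)
```

For example, with (λ1, λ2, μ1, μ2) = (1, 1, 1.5, 1) this returns approximately 0.6967, matching the true value used in the simulation study of Section 5.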

In the next sections, we will obtain the different estimators of R.

3. Different estimators of R for complete data

In this section, the maximum penalized likelihood estimator of the parameter R is obtained and compared with its regular maximum likelihood estimator for complete data; the Bayes estimator of the reliability parameter using the Lindley approximation is also introduced for complete data.

### 3.1. Maximum penalized likelihood estimator of R

Zheng (2013) introduced the maximum penalized likelihood estimator of the two parameter negative exponential distribution parameters because the regular maximum likelihood estimators of the distribution parameters are biased. In this subsection, we use the Zheng (2013) approach to find the maximum penalized likelihood estimator of R using the following algorithm.

Algorithm 1

Find the maximum penalized likelihood estimator of R, say R̂*, using the invariance property as follows:

• Case 1: μ1 = μ2, then $\hat{R}^*=\frac{\hat\lambda_1^*}{\hat\lambda_1^*+\hat\lambda_2^*},$

• Case 2: μ1 ≠ μ2, then $\hat{R}^*=\left[1-\frac{\hat\lambda_2^*}{\hat\lambda_1^*+\hat\lambda_2^*}e^{\frac{\hat\mu_2^*-\hat\mu_1^*}{\hat\lambda_2^*}}\right]I(\hat\mu_1^*>\hat\mu_2^*)+\left[\frac{\hat\lambda_1^*}{\hat\lambda_1^*+\hat\lambda_2^*}e^{\frac{\hat\mu_1^*-\hat\mu_2^*}{\hat\lambda_1^*}}\right]I(\hat\mu_1^*<\hat\mu_2^*),$

where $\hat\lambda_1^*,\hat\mu_1^*,\hat\lambda_2^*,\hat\mu_2^*$ are the maximum penalized likelihood estimators of λ1, μ1, λ2, μ2 (Zheng, 2013).
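Algorithm 1 can be sketched in code. The closed forms used below for the penalized estimates, λ* = n(x̄ − x₍₁₎)/(n − 1) and μ* = x₍₁₎ − (x̄ − x₍₁₎)/(n − 1), are an assumption obtained by maximizing the likelihood multiplied by the penalty term (x₍₁₎ − μ); Zheng (2013) should be consulted for the exact estimators:

```python
import math

def penalized_mle(sample):
    """Assumed Zheng-type penalized MLE of (lam, mu) for the two parameter
    negative exponential distribution from a complete sample."""
    n = len(sample)
    x1 = min(sample)                # smallest order statistic x_(1)
    xbar = sum(sample) / n
    lam = n * (xbar - x1) / (n - 1)
    mu = x1 - (xbar - x1) / (n - 1)
    return lam, mu

def r_mple(x_sample, y_sample):
    """MPLE of R by the invariance property (Algorithm 1)."""
    lam1, mu1 = penalized_mle(x_sample)
    lam2, mu2 = penalized_mle(y_sample)
    if mu1 > mu2:
        return 1.0 - (lam2 / (lam1 + lam2)) * math.exp((mu2 - mu1) / lam2)
    if mu1 < mu2:
        return (lam1 / (lam1 + lam2)) * math.exp((mu1 - mu2) / lam1)
    return lam1 / (lam1 + lam2)
```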

Now, we will study the distribution of R̂*. In case 1: μ1 = μ2,

$\hat{R}^*=\frac{\hat\lambda_1^*}{\hat\lambda_1^*+\hat\lambda_2^*}=\frac{1}{1+\hat\lambda_2^*/\hat\lambda_1^*}.$

Suppose $u=\hat\lambda_2^*/\hat\lambda_1^*$; then from the convolution rule the probability density function of U is f(u) = 1/(1 + u²), u > 0, and the moments about zero of R̂* are given by

$E\left[\hat{R}^*\right]^s=E\left[(1+U)^{-s}\right],$ which can be expressed in closed form in terms of the generalized hypergeometric function ${}_pF_q\left[\left\{\tfrac{1}{2},1,1\right\},\left\{1-\tfrac{s}{2},\tfrac{3}{2}-\tfrac{s}{2}\right\};-1\right]$,

and the mean square error (MSE) of R╠é* is given by

$\operatorname{MSE}(\hat{R}^*)=E\left[\left(\hat{R}^*-R\right)^2\right]=\frac{\lambda_1^2-(\pi-2)\lambda_2\lambda_1+\lambda_2^2}{2(\lambda_1+\lambda_2)^2}.$

In case 2: μ1 ≠ μ2,

$\hat{R}^*=\left[1-\frac{\hat\lambda_2^*}{\hat\lambda_1^*+\hat\lambda_2^*}e^{\frac{\hat\mu_2^*-\hat\mu_1^*}{\hat\lambda_2^*}}\right]I(\hat\mu_1^*>\hat\mu_2^*)+\left[\frac{\hat\lambda_1^*}{\hat\lambda_1^*+\hat\lambda_2^*}e^{\frac{\hat\mu_1^*-\hat\mu_2^*}{\hat\lambda_1^*}}\right]I(\hat\mu_1^*<\hat\mu_2^*).$

The exact distribution of R̂* is not available in this case, so we use its asymptotic distribution: as n → ∞ and m → ∞, R̂* is asymptotically normal with mean R and asymptotic variance Var(R),

$\operatorname{Var}(R)=\sum_{j=1}^{4}\sum_{i=1}^{4}\frac{\partial R}{\partial\Theta_i}\frac{\partial R}{\partial\Theta_j}I_{ij}^{-1},$

where Θ = (λ1, μ1, λ2, μ2) and $Iij-1$ is the (i, j)th element of the inverse of the Fisher information matrix Iij which is given by

$I_{ij}=\begin{pmatrix}E\left[-\frac{\partial^2 l^*(\Theta)}{\partial\lambda_1^2}\right]&E\left[-\frac{\partial^2 l^*(\Theta)}{\partial\lambda_1\partial\mu_1}\right]&E\left[-\frac{\partial^2 l^*(\Theta)}{\partial\lambda_1\partial\lambda_2}\right]&E\left[-\frac{\partial^2 l^*(\Theta)}{\partial\lambda_1\partial\mu_2}\right]\\E\left[-\frac{\partial^2 l^*(\Theta)}{\partial\mu_1\partial\lambda_1}\right]&E\left[-\frac{\partial^2 l^*(\Theta)}{\partial\mu_1^2}\right]&E\left[-\frac{\partial^2 l^*(\Theta)}{\partial\mu_1\partial\lambda_2}\right]&E\left[-\frac{\partial^2 l^*(\Theta)}{\partial\mu_1\partial\mu_2}\right]\\E\left[-\frac{\partial^2 l^*(\Theta)}{\partial\lambda_2\partial\lambda_1}\right]&E\left[-\frac{\partial^2 l^*(\Theta)}{\partial\lambda_2\partial\mu_1}\right]&E\left[-\frac{\partial^2 l^*(\Theta)}{\partial\lambda_2^2}\right]&E\left[-\frac{\partial^2 l^*(\Theta)}{\partial\lambda_2\partial\mu_2}\right]\\E\left[-\frac{\partial^2 l^*(\Theta)}{\partial\mu_2\partial\lambda_1}\right]&E\left[-\frac{\partial^2 l^*(\Theta)}{\partial\mu_2\partial\mu_1}\right]&E\left[-\frac{\partial^2 l^*(\Theta)}{\partial\mu_2\partial\lambda_2}\right]&E\left[-\frac{\partial^2 l^*(\Theta)}{\partial\mu_2^2}\right]\end{pmatrix},$

$I_{ij}^{-1}=\begin{pmatrix}\operatorname{Var}(\lambda_1^*)&\operatorname{Cov}(\lambda_1^*,\mu_1^*)&\operatorname{Cov}(\lambda_1^*,\lambda_2^*)&\operatorname{Cov}(\lambda_1^*,\mu_2^*)\\\operatorname{Cov}(\mu_1^*,\lambda_1^*)&\operatorname{Var}(\mu_1^*)&\operatorname{Cov}(\mu_1^*,\lambda_2^*)&\operatorname{Cov}(\mu_1^*,\mu_2^*)\\\operatorname{Cov}(\lambda_2^*,\lambda_1^*)&\operatorname{Cov}(\lambda_2^*,\mu_1^*)&\operatorname{Var}(\lambda_2^*)&\operatorname{Cov}(\lambda_2^*,\mu_2^*)\\\operatorname{Cov}(\mu_2^*,\lambda_1^*)&\operatorname{Cov}(\mu_2^*,\mu_1^*)&\operatorname{Cov}(\mu_2^*,\lambda_2^*)&\operatorname{Var}(\mu_2^*)\end{pmatrix}$

and

$\operatorname{Var}(\lambda_1^*)=\frac{\lambda_1^2}{n-1},\quad\operatorname{Var}(\lambda_2^*)=\frac{\lambda_2^2}{m-1},\quad\operatorname{Var}(\mu_1^*)=\frac{\lambda_1^2}{n(n-1)},\quad\operatorname{Var}(\mu_2^*)=\frac{\lambda_2^2}{m(m-1)},$

$\operatorname{Cov}(\lambda_1^*,\mu_1^*)=\operatorname{Cov}(\mu_1^*,\lambda_1^*)=\frac{\lambda_1^2}{n},\quad\operatorname{Cov}(\lambda_2^*,\mu_2^*)=\operatorname{Cov}(\mu_2^*,\lambda_2^*)=\frac{\lambda_2^2}{m},$

$\operatorname{Cov}(\lambda_1^*,\lambda_2^*)=\operatorname{Cov}(\lambda_2^*,\lambda_1^*)=0,\quad\operatorname{Cov}(\lambda_1^*,\mu_2^*)=\operatorname{Cov}(\mu_2^*,\lambda_1^*)=0,$

$\operatorname{Cov}(\mu_1^*,\lambda_2^*)=\operatorname{Cov}(\lambda_2^*,\mu_1^*)=0,\quad\operatorname{Cov}(\mu_1^*,\mu_2^*)=\operatorname{Cov}(\mu_2^*,\mu_1^*)=0,$

$\frac{\partial R}{\partial\lambda_1}=\left[-\frac{\lambda_1 e^{\frac{\mu_1-\mu_2}{\lambda_1}}}{(\lambda_1+\lambda_2)^2}+\frac{e^{\frac{\mu_1-\mu_2}{\lambda_1}}}{\lambda_1+\lambda_2}-\frac{(\mu_1-\mu_2)e^{\frac{\mu_1-\mu_2}{\lambda_1}}}{\lambda_1(\lambda_1+\lambda_2)}\right]I(\mu_1<\mu_2)+\frac{\lambda_2 e^{\frac{\mu_2-\mu_1}{\lambda_2}}}{(\lambda_1+\lambda_2)^2}I(\mu_1>\mu_2),$

$\frac{\partial R}{\partial\lambda_2}=-\frac{\lambda_1 e^{\frac{\mu_1-\mu_2}{\lambda_1}}}{(\lambda_1+\lambda_2)^2}I(\mu_1<\mu_2)+\left[\frac{\lambda_2 e^{\frac{\mu_2-\mu_1}{\lambda_2}}}{(\lambda_1+\lambda_2)^2}+\frac{(\mu_2-\mu_1)e^{\frac{\mu_2-\mu_1}{\lambda_2}}}{\lambda_2(\lambda_1+\lambda_2)}-\frac{e^{\frac{\mu_2-\mu_1}{\lambda_2}}}{\lambda_1+\lambda_2}\right]I(\mu_1>\mu_2),$

$\frac{\partial R}{\partial\mu_1}=\frac{e^{\frac{\mu_1-\mu_2}{\lambda_1}}}{\lambda_1+\lambda_2}I(\mu_1<\mu_2)+\frac{e^{\frac{\mu_2-\mu_1}{\lambda_2}}}{\lambda_1+\lambda_2}I(\mu_1>\mu_2),$

$\frac{\partial R}{\partial\mu_2}=-\frac{e^{\frac{\mu_1-\mu_2}{\lambda_1}}}{\lambda_1+\lambda_2}I(\mu_1<\mu_2)-\frac{e^{\frac{\mu_2-\mu_1}{\lambda_2}}}{\lambda_1+\lambda_2}I(\mu_1>\mu_2).$

Hence an asymptotic 100(1 − α)% confidence interval for R is given by

$R\in\left(\hat{R}^*\pm\frac{Z_{\frac{\alpha}{2}}}{\sqrt{n+m}}\sqrt{\widehat{\operatorname{Var}}(R^*)}\right),$

where Zα/2 is the upper (α/2)th quantile of the standard normal distribution and $\widehat{\operatorname{Var}}(R^*)$ is the value of Var(R*) evaluated at the maximum penalized likelihood estimators of the parameters Θ = (λ1, μ1, λ2, μ2).
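The delta-method variance above can be assembled numerically from the covariance entries and partial derivatives quoted in this subsection. A minimal sketch (it builds the plain R̂* ± z·SE interval, without the additional √(n + m) scaling; the function name and defaults are illustrative):

```python
import math

def asymptotic_ci(lam1, mu1, lam2, mu2, n, m, r_hat, z=1.96):
    """Delta-method confidence interval for R (case mu1 != mu2), using the
    variance/covariance expressions quoted in Section 3.1."""
    s = lam1 + lam2
    if mu1 < mu2:
        e1 = math.exp((mu1 - mu2) / lam1)
        dR_l1 = e1 * (lam2 / s**2 - (mu1 - mu2) / (lam1 * s))
        dR_l2 = -lam1 * e1 / s**2
        dR_m1 = e1 / s
        dR_m2 = -e1 / s
    else:
        e2 = math.exp((mu2 - mu1) / lam2)
        dR_l1 = lam2 * e2 / s**2
        dR_l2 = e2 * (lam2 / s**2 + (mu2 - mu1) / (lam2 * s) - 1.0 / s)
        dR_m1 = e2 / s
        dR_m2 = -e2 / s
    # block-diagonal covariance structure: (lam1, mu1) independent of (lam2, mu2)
    var_l1 = lam1**2 / (n - 1); var_m1 = lam1**2 / (n * (n - 1)); cov1 = lam1**2 / n
    var_l2 = lam2**2 / (m - 1); var_m2 = lam2**2 / (m * (m - 1)); cov2 = lam2**2 / m
    var_R = (dR_l1**2 * var_l1 + dR_m1**2 * var_m1 + 2 * dR_l1 * dR_m1 * cov1
             + dR_l2**2 * var_l2 + dR_m2**2 * var_m2 + 2 * dR_l2 * dR_m2 * cov2)
    half = z * math.sqrt(var_R)
    return (r_hat - half, r_hat + half)
```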

### 3.2. Bayes estimator of R

Lindley (1980) proposed an approximation technique to find the Bayes estimator of the stress-strength parameter under the squared error loss function and the Jeffreys prior (ρ) as follows.

Algorithm 2
• Find the maximum penalized likelihood estimators of Θ = (λ1, μ1, λ2, μ2) and R, say $\Theta^*=(\lambda_1^*,\mu_1^*,\lambda_2^*,\mu_2^*)$ and R̂*.

• Compute the Bayes estimator of R, say RB, as $R_B=\hat{R}^*+\frac{1}{2}\left(\sum_{i=1}^{4}\sum_{j=1}^{4}\left(\hat{R}^*_{ij}+2\hat{R}^*_i\rho_j\right)\sigma_{ij}\right)+\frac{1}{2}\left(\sum_{i=1}^{4}\sum_{j=1}^{4}\sum_{k=1}^{4}\sum_{l=1}^{4}L_{ijk}\sigma_{ij}\sigma_{kl}\hat{R}^*_l\right).$

Note: all terms and derivatives are calculated by replacing Θ = (λ1, μ1, λ2, μ2) and R with their maximum penalized likelihood estimators, where

$R_{11}=\frac{\partial^2\hat{R}^*}{\partial\lambda_1^2}=-\frac{2\lambda_2 e^{\frac{\mu_2-\mu_1}{\lambda_2}}}{(\lambda_1+\lambda_2)^3}I[\mu_1>\mu_2]+\frac{e^{\frac{\mu_1-\mu_2}{\lambda_1}}\left(-2\lambda_1^3(\lambda_2-\mu_1+\mu_2)+\lambda_1^2(\mu_1-\mu_2)(2\lambda_2+\mu_1-\mu_2)+2\lambda_2\lambda_1(\mu_1-\mu_2)^2+\lambda_2^2(\mu_1-\mu_2)^2\right)}{\lambda_1^3(\lambda_1+\lambda_2)^3}I[\mu_1<\mu_2],$

$R_{12}=R_{21}=\frac{\partial^2\hat{R}^*}{\partial\lambda_1\partial\mu_1}=-\frac{e^{\frac{\mu_2-\mu_1}{\lambda_2}}}{(\lambda_1+\lambda_2)^2}I[\mu_1>\mu_2]+\left[-\frac{(\mu_1-\mu_2)e^{\frac{\mu_1-\mu_2}{\lambda_1}}}{\lambda_1^2(\lambda_1+\lambda_2)}-\frac{e^{\frac{\mu_1-\mu_2}{\lambda_1}}}{(\lambda_1+\lambda_2)^2}\right]I[\mu_1<\mu_2],$

$R_{13}=R_{31}=\frac{\partial^2\hat{R}^*}{\partial\lambda_1\partial\lambda_2}=\frac{e^{\frac{\mu_2-\mu_1}{\lambda_2}}\left(\lambda_1(\lambda_2+\mu_1-\mu_2)-\lambda_2(\lambda_2-\mu_1+\mu_2)\right)}{\lambda_2(\lambda_1+\lambda_2)^3}I[\mu_1>\mu_2]+\frac{e^{\frac{\mu_1-\mu_2}{\lambda_1}}\left(-\lambda_1(\lambda_2-\mu_1+\mu_2)+\lambda_2(\mu_1-\mu_2)+\lambda_1^2\right)}{\lambda_1(\lambda_1+\lambda_2)^3}I[\mu_1<\mu_2],$

$R_{14}=R_{41}=\frac{\partial^2\hat{R}^*}{\partial\lambda_1\partial\mu_2}=\frac{e^{\frac{\mu_2-\mu_1}{\lambda_2}}}{(\lambda_1+\lambda_2)^2}I[\mu_1>\mu_2]+\frac{e^{\frac{\mu_1-\mu_2}{\lambda_1}}\left(\lambda_1\mu_1+\lambda_2\mu_1-(\lambda_1+\lambda_2)\mu_2+\lambda_1^2\right)}{\lambda_1^2(\lambda_1+\lambda_2)^2}I[\mu_1<\mu_2],$

$R_{22}=\frac{\partial^2\hat{R}^*}{\partial\mu_1^2}=-\frac{e^{\frac{\mu_2-\mu_1}{\lambda_2}}}{\lambda_2(\lambda_1+\lambda_2)}I[\mu_1>\mu_2]+\frac{e^{\frac{\mu_1-\mu_2}{\lambda_1}}}{\lambda_1(\lambda_1+\lambda_2)}I[\mu_1<\mu_2],$

$R_{23}=R_{32}=\frac{\partial^2\hat{R}^*}{\partial\lambda_2\partial\mu_1}=\frac{e^{\frac{\mu_2-\mu_1}{\lambda_2}}\left(\lambda_1(\mu_1-\mu_2)-\lambda_2(\lambda_2-\mu_1+\mu_2)\right)}{\lambda_2^2(\lambda_1+\lambda_2)^2}I[\mu_1>\mu_2]-\frac{e^{\frac{\mu_1-\mu_2}{\lambda_1}}}{(\lambda_1+\lambda_2)^2}I[\mu_1<\mu_2],$

$R_{24}=R_{42}=\frac{\partial^2\hat{R}^*}{\partial\mu_2\partial\mu_1}=\frac{e^{\frac{\mu_2-\mu_1}{\lambda_2}}}{\lambda_2(\lambda_1+\lambda_2)}I[\mu_1>\mu_2]-\frac{e^{\frac{\mu_1-\mu_2}{\lambda_1}}}{\lambda_1(\lambda_1+\lambda_2)}I[\mu_1<\mu_2],$

$R_{33}=\frac{\partial^2\hat{R}^*}{\partial\lambda_2^2}=\frac{e^{\frac{\mu_2-\mu_1}{\lambda_2}}\left(\lambda_2^2(\mu_1-\mu_2)(2\lambda_2-\mu_1+\mu_2)+2\lambda_1\lambda_2\left(\lambda_2(\mu_1-\mu_2)+\lambda_2^2-(\mu_1-\mu_2)^2\right)-\lambda_1^2(\mu_1-\mu_2)^2\right)}{\lambda_2^3(\lambda_1+\lambda_2)^3}I[\mu_1>\mu_2]+\frac{2\lambda_1 e^{\frac{\mu_1-\mu_2}{\lambda_1}}}{(\lambda_1+\lambda_2)^3}I[\mu_1<\mu_2],$

$R_{34}=R_{43}=\frac{\partial^2\hat{R}^*}{\partial\lambda_2\partial\mu_2}=\frac{e^{\frac{\mu_2-\mu_1}{\lambda_2}}\left(\lambda_1(\mu_2-\mu_1)+\lambda_2(\lambda_2-\mu_1+\mu_2)\right)}{\lambda_2^2(\lambda_1+\lambda_2)^2}I[\mu_1>\mu_2]+\frac{e^{\frac{\mu_1-\mu_2}{\lambda_1}}}{(\lambda_1+\lambda_2)^2}I[\mu_1<\mu_2],$

$R_{44}=\frac{\partial^2\hat{R}^*}{\partial\mu_2^2}=-\frac{e^{\frac{\mu_2-\mu_1}{\lambda_2}}}{\lambda_2(\lambda_1+\lambda_2)}I[\mu_1>\mu_2]+\frac{e^{\frac{\mu_1-\mu_2}{\lambda_1}}}{\lambda_1(\lambda_1+\lambda_2)}I[\mu_1<\mu_2],$

$\sigma_{ij}=I_{ij}^{-1},$

$\rho_1=\frac{\partial\rho}{\partial\lambda_1}=\frac{\lambda_1+2}{\lambda_1^2+4\lambda_1+2}-\frac{1}{\lambda_1}-\frac{1}{1+\lambda_1},\quad\rho_2=\frac{\partial\rho}{\partial\mu_1}=\frac{\mu_1+2}{\mu_1^2+4\mu_1+2}-\frac{1}{\mu_1}-\frac{1}{1+\mu_1},$

$\rho_3=\frac{\partial\rho}{\partial\lambda_2}=\frac{\lambda_2+2}{\lambda_2^2+4\lambda_2+2}-\frac{1}{\lambda_2}-\frac{1}{1+\lambda_2},\quad\rho_4=\frac{\partial\rho}{\partial\mu_2}=\frac{\mu_2+2}{\mu_2^2+4\mu_2+2}-\frac{1}{\mu_2}-\frac{1}{1+\mu_2},$

$R_1=\frac{\partial R}{\partial\lambda_1}=\left[-\frac{\lambda_1 e^{\frac{\mu_1-\mu_2}{\lambda_1}}}{(\lambda_1+\lambda_2)^2}+\frac{e^{\frac{\mu_1-\mu_2}{\lambda_1}}}{\lambda_1+\lambda_2}-\frac{(\mu_1-\mu_2)e^{\frac{\mu_1-\mu_2}{\lambda_1}}}{\lambda_1(\lambda_1+\lambda_2)}\right]I(\mu_1<\mu_2)+\frac{\lambda_2 e^{\frac{\mu_2-\mu_1}{\lambda_2}}}{(\lambda_1+\lambda_2)^2}I(\mu_1>\mu_2),$

$R_2=\frac{\partial R}{\partial\mu_1}=\frac{e^{\frac{\mu_1-\mu_2}{\lambda_1}}}{\lambda_1+\lambda_2}I(\mu_1<\mu_2)+\frac{e^{\frac{\mu_2-\mu_1}{\lambda_2}}}{\lambda_1+\lambda_2}I(\mu_1>\mu_2),$

$R_3=\frac{\partial R}{\partial\lambda_2}=-\frac{\lambda_1 e^{\frac{\mu_1-\mu_2}{\lambda_1}}}{(\lambda_1+\lambda_2)^2}I(\mu_1<\mu_2)+\left[\frac{\lambda_2 e^{\frac{\mu_2-\mu_1}{\lambda_2}}}{(\lambda_1+\lambda_2)^2}+\frac{(\mu_2-\mu_1)e^{\frac{\mu_2-\mu_1}{\lambda_2}}}{\lambda_2(\lambda_1+\lambda_2)}-\frac{e^{\frac{\mu_2-\mu_1}{\lambda_2}}}{\lambda_1+\lambda_2}\right]I(\mu_1>\mu_2),$

$R_4=\frac{\partial R}{\partial\mu_2}=-\frac{e^{\frac{\mu_1-\mu_2}{\lambda_1}}}{\lambda_1+\lambda_2}I(\mu_1<\mu_2)-\frac{e^{\frac{\mu_2-\mu_1}{\lambda_2}}}{\lambda_1+\lambda_2}I(\mu_1>\mu_2),$

$L_{111}=\frac{\partial^3 l^*(\Theta)}{\partial\lambda_1^3}=\frac{2(n-1)}{\lambda_1^3},\quad L_{333}=\frac{\partial^3 l^*(\Theta)}{\partial\lambda_2^3}=\frac{2(m-1)}{\lambda_2^3},$

$L_{121}=L_{211}=L_{112}=\frac{\partial^3 l^*(\Theta)}{\partial\lambda_1^2\partial\mu_1}=\frac{2n}{\lambda_1^3},\quad L_{433}=L_{334}=L_{343}=\frac{\partial^3 l^*(\Theta)}{\partial\lambda_2^2\partial\mu_2}=\frac{2m}{\lambda_2^3},$

$L_{221}=L_{122}=L_{212}=\frac{\partial^3 l^*(\Theta)}{\partial\lambda_1\partial\mu_1^2}=\frac{2n(n-1)}{\lambda_1^3},\quad L_{443}=L_{344}=L_{434}=\frac{\partial^3 l^*(\Theta)}{\partial\lambda_2\partial\mu_2^2}=\frac{2m(m-1)}{\lambda_2^3}.$

Note: other Lijk = 0.

4. Different estimators of R for incomplete data

It is very difficult to obtain complete data, and data are often incomplete. In medical, industrial, or other fields, researchers might stop their experiment before all items fail because they often do not have time to observe the lifetime of all items in the study, or because items fail due to random causes. Consequently, researchers use censored data. There are several types of censoring (Type I and II), including right, left, interval, and truncation censoring. Saracoglu et al. (2012) combined type II censoring and a progressive censoring scheme, called progressive type II censoring. This scheme allows the researcher to remove active units during the experiment and is defined as follows. Given m < n and non-negative integers P1, …, Pm such that P1 + ··· + Pm = n − m, n items are placed on the life test at the same time. At the time of the first failure, one chooses P1 items at random from the remaining n − 1 active items and discards them. Similarly, at the time of the second failure, one selects P2 of the n − P1 − 2 remaining items at random and removes them. Eventually, at the time of the mth failure, all remaining active items are removed (see Saracoglu et al. (2012) and references therein for the advantages of this censoring scheme).
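The removal scheme described above can be simulated directly. A minimal sketch (the function names are illustrative, and `draw` is any generator of a single lifetime):

```python
import random

def progressive_type2_sample(n, removals, draw):
    """Simulate a progressive type II censored sample.

    `removals` = (P1, ..., Pm) with sum(removals) = n - m.  At each observed
    failure the smallest remaining lifetime is recorded, then P_i surviving
    units are withdrawn at random; after the m-th failure nothing remains.
    """
    alive = sorted(draw() for _ in range(n))
    observed = []
    for p in removals:
        failure = alive.pop(0)          # next failure among active units
        observed.append(failure)
        for _ in range(p):              # withdraw p active units at random
            alive.pop(random.randrange(len(alive)))
    return observed
```

Since each recorded failure is the minimum of the units still on test, the observed values come out in increasing order, as in the progressively censored samples X(1:m:n), …, X(m:m:n) used below.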

### 4.1. Maximum penalized likelihood estimator of R for incomplete data

In this subsection, maximum penalized likelihood estimator of R is obtained for progressive type II censored data as follows:

Algorithm 3
• Suppose two progressive censoring schemes (n1, m1, P1, …, Pm1) and (n2, m2, Q1, …, Qm2) for two independent random variables X and Y with two parameter negative exponential distribution NED(λ1, μ1) and NED(λ2, μ2) respectively.

• Suppose two progressively censored samples as X = (X1:m1:n1, …, Xm1,m1,n1) from strength variable X and Y = (Y1:m2:n2, …, Ym2,m2,n2) from stress variable Y.

• Find the joint probability density functions of X = (X1:m1:n1, …, Xm1:m1:n1) and Y = (Y1:m2:n2, …, Ym2:m2:n2) as: $f_{(X_{1:m_1:n_1},\dots,X_{m_1:m_1:n_1})}(x_1,\dots,x_{m_1})=c_1\prod_{i=1}^{m_1}f(x_i)\left(1-F(x_i)\right)^{P_i},\qquad f_{(Y_{1:m_2:n_2},\dots,Y_{m_2:m_2:n_2})}(y_1,\dots,y_{m_2})=c_2\prod_{j=1}^{m_2}f(y_j)\left(1-F(y_j)\right)^{Q_j},$

where $c_1=n_1(n_1-P_1-1)\cdots\left(n_1-m_1-P_1-\cdots-P_{m_1-1}+1\right),\quad c_2=n_2(n_2-Q_1-1)\cdots\left(n_2-m_2-Q_1-\cdots-Q_{m_2-1}+1\right)$

are normalizing constants; see Balakrishnan and Aggarwala (2000).

• Find the penalized likelihood function $L_{pr}(\Theta)=c_1c_2(x_{1:m_1}-\mu_1)(y_{1:m_2}-\mu_2)\prod_{i=1}^{m_1}f(x_i)\left(1-F(x_i)\right)^{P_i}\prod_{j=1}^{m_2}f(y_j)\left(1-F(y_j)\right)^{Q_j}.$

• Take the logarithm of the penalized likelihood function: $l_{pr}(\Theta)=\log c_1+\log c_2+\log(x_{1:m_1}-\mu_1)+\log(y_{1:m_2}-\mu_2)-m_1\log\lambda_1-m_2\log\lambda_2-\frac{1}{\lambda_1}\sum_{i=1}^{m_1}(P_i+1)(x_i-\mu_1)-\frac{1}{\lambda_2}\sum_{j=1}^{m_2}(Q_j+1)(y_j-\mu_2).$

• Find the maximum penalized likelihood estimators of Θ = (λ1, λ2, μ1, μ2) as $\lambda_{1pr}=\frac{1}{m_1}\left[\sum_{i=1}^{m_1}(P_i+1)x_i-x_{1:m_1}\sum_{i=1}^{m_1}(P_i+1)\right],\quad\lambda_{2pr}=\frac{1}{m_2}\left[\sum_{j=1}^{m_2}(Q_j+1)y_j-y_{1:m_2}\sum_{j=1}^{m_2}(Q_j+1)\right],$ $\mu_{1pr}=x_{1:m_1}-\frac{\sum_{i=1}^{m_1}(P_i+1)x_i-x_{1:m_1}\sum_{i=1}^{m_1}(P_i+1)}{(m_1-1)\sum_{i=1}^{m_1}(P_i+1)},\quad\mu_{2pr}=y_{1:m_2}-\frac{\sum_{j=1}^{m_2}(Q_j+1)y_j-y_{1:m_2}\sum_{j=1}^{m_2}(Q_j+1)}{(m_2-1)\sum_{j=1}^{m_2}(Q_j+1)}.$

• Find the maximum penalized likelihood estimator of R, say Rpr, using the invariance property:

• Case 1: μ1 = μ2, then $R_{pr}=\frac{\lambda_{1pr}}{\lambda_{1pr}+\lambda_{2pr}},$

• Case 2: μ1 ≠ μ2, then $R_{pr}=\left[1-\frac{\lambda_{2pr}}{\lambda_{1pr}+\lambda_{2pr}}e^{\frac{\mu_{2pr}-\mu_{1pr}}{\lambda_{2pr}}}\right]I(\mu_{1pr}>\mu_{2pr})+\frac{\lambda_{1pr}}{\lambda_{1pr}+\lambda_{2pr}}e^{\frac{\mu_{1pr}-\mu_{2pr}}{\lambda_{1pr}}}I(\mu_{1pr}<\mu_{2pr}).$
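The closed forms of Algorithm 3 can be transcribed into code. A minimal sketch (it follows the printed estimators, with weights wᵢ = Pᵢ + 1 and x₍₁₎ the smallest observed failure; the function name is illustrative):

```python
def mple_progressive(obs, removals):
    """MPLE of (lam, mu) from a progressive type II censored sample `obs`
    (sorted observed failures) with removal numbers `removals` = (P1,...,Pm),
    transcribing the closed forms of Algorithm 3."""
    m = len(obs)
    w = [p + 1 for p in removals]
    x1 = obs[0]                              # first observed failure x_(1)
    s = sum(wi * xi for wi, xi in zip(w, obs))
    t = sum(w)
    lam = (s - x1 * t) / m
    mu = x1 - (s - x1 * t) / ((m - 1) * t)
    return lam, mu
```

Applying it to both the strength and stress samples gives (λ1pr, μ1pr) and (λ2pr, μ2pr), which are plugged into the two cases above to obtain Rpr.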

The asymptotic 100(1 − α)% confidence interval for R is given by

$R\in\left(R_{pr}\pm\frac{Z_{\frac{\alpha}{2}}}{\sqrt{n+m}}\sqrt{\widehat{\operatorname{Var}}(R_{pr})}\right),$

where Zα/2 is the upper (α/2)th quantile of the standard normal distribution and $\widehat{\operatorname{Var}}(R_{pr})$ is the value of Var(Rpr) evaluated at the maximum penalized likelihood estimators of the parameters Θ = (λ1, μ1, λ2, μ2), with $\operatorname{Var}(R_{pr})=\sum_{j=1}^{4}\sum_{i=1}^{4}\frac{\partial R}{\partial\Theta_i}\frac{\partial R}{\partial\Theta_j}\left(I_{ij}^{-1}\right)_{pr},$

where Θ = (λ1, μ1, λ2, μ2) and $\left(I_{ij}^{-1}\right)_{pr}$ is the (i, j)th element of the inverse of the Fisher information matrix $I_{ij}^{pr}$, which is given by

$I_{ij}^{pr}=\begin{pmatrix}E\left[-\frac{\partial^2 l_{pr}(\Theta)}{\partial\lambda_1^2}\right]&E\left[-\frac{\partial^2 l_{pr}(\Theta)}{\partial\lambda_1\partial\mu_1}\right]&E\left[-\frac{\partial^2 l_{pr}(\Theta)}{\partial\lambda_1\partial\lambda_2}\right]&E\left[-\frac{\partial^2 l_{pr}(\Theta)}{\partial\lambda_1\partial\mu_2}\right]\\E\left[-\frac{\partial^2 l_{pr}(\Theta)}{\partial\mu_1\partial\lambda_1}\right]&E\left[-\frac{\partial^2 l_{pr}(\Theta)}{\partial\mu_1^2}\right]&E\left[-\frac{\partial^2 l_{pr}(\Theta)}{\partial\mu_1\partial\lambda_2}\right]&E\left[-\frac{\partial^2 l_{pr}(\Theta)}{\partial\mu_1\partial\mu_2}\right]\\E\left[-\frac{\partial^2 l_{pr}(\Theta)}{\partial\lambda_2\partial\lambda_1}\right]&E\left[-\frac{\partial^2 l_{pr}(\Theta)}{\partial\lambda_2\partial\mu_1}\right]&E\left[-\frac{\partial^2 l_{pr}(\Theta)}{\partial\lambda_2^2}\right]&E\left[-\frac{\partial^2 l_{pr}(\Theta)}{\partial\lambda_2\partial\mu_2}\right]\\E\left[-\frac{\partial^2 l_{pr}(\Theta)}{\partial\mu_2\partial\lambda_1}\right]&E\left[-\frac{\partial^2 l_{pr}(\Theta)}{\partial\mu_2\partial\mu_1}\right]&E\left[-\frac{\partial^2 l_{pr}(\Theta)}{\partial\mu_2\partial\lambda_2}\right]&E\left[-\frac{\partial^2 l_{pr}(\Theta)}{\partial\mu_2^2}\right]\end{pmatrix},$

$\left(I_{ij}^{-1}\right)_{pr}=\begin{pmatrix}\operatorname{Var}(\lambda_{1pr})&\operatorname{Cov}(\lambda_{1pr},\mu_{1pr})&\operatorname{Cov}(\lambda_{1pr},\lambda_{2pr})&\operatorname{Cov}(\lambda_{1pr},\mu_{2pr})\\\operatorname{Cov}(\mu_{1pr},\lambda_{1pr})&\operatorname{Var}(\mu_{1pr})&\operatorname{Cov}(\mu_{1pr},\lambda_{2pr})&\operatorname{Cov}(\mu_{1pr},\mu_{2pr})\\\operatorname{Cov}(\lambda_{2pr},\lambda_{1pr})&\operatorname{Cov}(\lambda_{2pr},\mu_{1pr})&\operatorname{Var}(\lambda_{2pr})&\operatorname{Cov}(\lambda_{2pr},\mu_{2pr})\\\operatorname{Cov}(\mu_{2pr},\lambda_{1pr})&\operatorname{Cov}(\mu_{2pr},\mu_{1pr})&\operatorname{Cov}(\mu_{2pr},\lambda_{2pr})&\operatorname{Var}(\mu_{2pr})\end{pmatrix},$

and

$\operatorname{Var}(\lambda_{1pr})=\frac{\left[2(m_1-1)\sum_{i=1}^{m_1}(1+P_i)-m_1^2\right]\lambda_1^2}{m_1^2},\quad\operatorname{Var}(\lambda_{2pr})=\frac{\left[2(m_2-1)\sum_{j=1}^{m_2}(1+Q_j)-m_2^2\right]\lambda_2^2}{m_2^2},$

$\operatorname{Var}(\mu_{1pr})=\frac{\lambda_1^2}{n(n-1)},\quad\operatorname{Var}(\mu_{2pr})=\frac{\lambda_2^2}{m(m-1)},$

$\operatorname{Cov}(\lambda_{1pr},\mu_{1pr})=\operatorname{Cov}(\mu_{1pr},\lambda_{1pr})=\frac{\lambda_1^2}{\sum_{i=1}^{m_1}(1+P_i)},\quad\operatorname{Cov}(\lambda_{2pr},\mu_{2pr})=\operatorname{Cov}(\mu_{2pr},\lambda_{2pr})=\frac{\lambda_2^2}{\sum_{j=1}^{m_2}(1+Q_j)},$

$\operatorname{Cov}(\lambda_{1pr},\lambda_{2pr})=\operatorname{Cov}(\lambda_{2pr},\lambda_{1pr})=0,\quad\operatorname{Cov}(\lambda_{1pr},\mu_{2pr})=\operatorname{Cov}(\mu_{2pr},\lambda_{1pr})=0,$

$\operatorname{Cov}(\mu_{1pr},\lambda_{2pr})=\operatorname{Cov}(\lambda_{2pr},\mu_{1pr})=0,\quad\operatorname{Cov}(\mu_{1pr},\mu_{2pr})=\operatorname{Cov}(\mu_{2pr},\mu_{1pr})=0.$

### 4.2. Bayes estimation and credible interval R for incomplete data

The Bayes estimator of R is found as follows.

Assume all parameters λ1, λ2, μ1, μ2 are random variables having gamma prior distributions with parameters (ai, bi), i = 1, 2, 3, 4, respectively, where

$\pi(\lambda_1)=\frac{b_1^{a_1}}{\Gamma(a_1)}\lambda_1^{a_1-1}e^{-\lambda_1 b_1},\quad\lambda_1,a_1,b_1>0,$

$\pi(\lambda_2)=\frac{b_2^{a_2}}{\Gamma(a_2)}\lambda_2^{a_2-1}e^{-\lambda_2 b_2},\quad\lambda_2,a_2,b_2>0,$

$\pi(\mu_1)=\frac{b_3^{a_3}}{\Gamma(a_3)}\mu_1^{a_3-1}e^{-\mu_1 b_3},\quad\mu_1,a_3,b_3>0,$

$\pi(\mu_2)=\frac{b_4^{a_4}}{\Gamma(a_4)}\mu_2^{a_4-1}e^{-\mu_2 b_4},\quad\mu_2,a_4,b_4>0.$

Then the joint posterior density function of λ1, λ2, μ1, μ2 is given by

$\pi(\lambda_1,\lambda_2,\mu_1,\mu_2\mid\text{data})=\frac{L(\text{data}\mid\lambda_1,\lambda_2,\mu_1,\mu_2)\,\pi(\lambda_1)\pi(\lambda_2)\pi(\mu_1)\pi(\mu_2)}{\int_0^\infty\int_0^\infty\int_0^\infty\int_0^\infty L(\text{data}\mid\lambda_1,\lambda_2,\mu_1,\mu_2)\,\pi(\lambda_1)\pi(\lambda_2)\pi(\mu_1)\pi(\mu_2)\,d\lambda_1\,d\lambda_2\,d\mu_1\,d\mu_2}.$

But the joint posterior density function cannot be computed analytically, so we use the MCMC technique to compute the Bayes estimator of R and construct the corresponding credible interval.

We now compute the Bayes estimator using the MCMC technique when μ1 = μ2 = μ, under the assumption that the parameters λ1, λ2, μ have independent inverse gamma (IG) priors with probability density functions

$\pi_1(\lambda_1\mid a_1,b_1)=\frac{b_1^{a_1}}{\Gamma(a_1)}\lambda_1^{-a_1-1}e^{-\frac{b_1}{\lambda_1}},\quad\lambda_1>0,$

$\pi_2(\lambda_2\mid a_2,b_2)=\frac{b_2^{a_2}}{\Gamma(a_2)}\lambda_2^{-a_2-1}e^{-\frac{b_2}{\lambda_2}},\quad\lambda_2>0,$

$\pi_3(\mu\mid a_3,b_3)=\frac{b_3^{a_3}}{\Gamma(a_3)}\mu^{-a_3-1}e^{-\frac{b_3}{\mu}},\quad\mu>0,$

where a1, a2, a3, b1, b2, b3 are chosen to reflect prior knowledge about λ1, λ2, μ. The posterior probability density functions of λ1 and λ2 are

$\lambda_1\mid\mu,\lambda_2,\text{data}\sim IG\left(m_1+a_1,\ b_1+\sum_{i=1}^{m_1}(P_i+1)(x_i-\mu)\right),\qquad\lambda_2\mid\mu,\lambda_1,\text{data}\sim IG\left(m_2+a_2,\ b_2+\sum_{j=1}^{m_2}(Q_j+1)(y_j-\mu)\right)$

and

$f(\mu\mid\lambda_1,\lambda_2,\text{data})\propto(x_{1:m_1}-\mu)(y_{1:m_2}-\mu)\left(\frac{1}{\mu}\right)^{a_3-1}\exp\left[-\frac{b_3}{\mu}-\frac{1}{\lambda_1}\sum_{i=1}^{m_1}(P_i+1)(x_i-\mu)-\frac{1}{\lambda_2}\sum_{j=1}^{m_2}(Q_j+1)(y_j-\mu)\right].$

The posterior probability density function of μ does not have a standard form, so we use a normal proposal distribution with the Metropolis-Hastings algorithm to find the Bayes estimator as follows.

Algorithm 4
• Start with an initial guess $\lambda_1^{(0)},\lambda_2^{(0)},\mu^{(0)}$.

• Set t = 1.

• Generate μ(t) from f(μ | λ1, λ2, data) using the Metropolis-Hastings algorithm with the proposal distribution N(μ(t−1), 1).

• Generate $λ1(t)$ from $IG(m1+a1,b1+Σi=1m1(Pi+1)(xi-μ(t-1)))$.

• Generate $\lambda_2^{(t)}$ from $IG\left(m_2+a_2,\ b_2+\sum_{j=1}^{m_2}(Q_j+1)(y_j-\mu^{(t-1)})\right)$.

• Compute R(t).

• Set t = t + 1.

• Repeat steps 3–7, T times.

The Bayes estimator of R, say $R_{prB}$, computed under the squared error loss function, is given by

$R_{prB}=\frac{1}{T-M}\sum_{t=M+1}^{T}R^{(t)},$

where M is the burn-in period. Also, the highest probability density (HPD) 100(1 − γ)% credible interval is obtained by the method of Chen and Shao (1999).
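Algorithm 4 can be sketched as a Metropolis-Hastings-within-Gibbs sampler. This is an illustrative sketch, not the paper's code: the inverse gamma draws are made as reciprocals of gamma variates, the current (rather than previous) μ is used in the λ updates, and all function names and defaults are assumptions:

```python
import math
import random

def log_mu_density(mu, lam1, lam2, xs, ws_x, ys, ws_y, a3, b3):
    """Log of the unnormalized full conditional of mu from Section 4.2."""
    x1, y1 = min(xs), min(ys)
    if mu <= 0.0 or mu >= x1 or mu >= y1:
        return -math.inf
    val = math.log(x1 - mu) + math.log(y1 - mu) - (a3 - 1) * math.log(mu) - b3 / mu
    val -= sum(w * (x - mu) for w, x in zip(ws_x, xs)) / lam1
    val -= sum(w * (y - mu) for w, y in zip(ws_y, ys)) / lam2
    return val

def gibbs_R(xs, P, ys, Q, a=(1.0, 1.0, 1.0), b=(2.0, 2.0, 2.0), T=2000, M=500):
    """MH-within-Gibbs sketch of Algorithm 4 for R = lam1/(lam1 + lam2)."""
    ws_x = [p + 1 for p in P]
    ws_y = [q + 1 for q in Q]
    m1, m2 = len(xs), len(ys)
    mu = 0.5 * min(min(xs), min(ys))             # initial guess
    lam1 = lam2 = 1.0
    draws = []
    for t in range(T):
        # Metropolis-Hastings step for mu with a N(mu, 1) proposal
        prop = random.gauss(mu, 1.0)
        cur = log_mu_density(mu, lam1, lam2, xs, ws_x, ys, ws_y, a[2], b[2])
        new = log_mu_density(prop, lam1, lam2, xs, ws_x, ys, ws_y, a[2], b[2])
        if new > -math.inf and math.log(random.random()) < new - cur:
            mu = prop
        # if G ~ Gamma(shape, scale=1) then rate/G ~ IG(shape, rate)
        rate1 = b[0] + sum(w * (x - mu) for w, x in zip(ws_x, xs))
        rate2 = b[1] + sum(w * (y - mu) for w, y in zip(ws_y, ys))
        lam1 = rate1 / random.gammavariate(m1 + a[0], 1.0)
        lam2 = rate2 / random.gammavariate(m2 + a[1], 1.0)
        if t >= M:
            draws.append(lam1 / (lam1 + lam2))
    return sum(draws) / len(draws)           # posterior mean after burn-in
```

The returned posterior mean corresponds to R_prB under squared error loss; the retained draws could also be passed to a Chen-Shao-type routine to build the HPD credible interval.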

5. Simulation study

In this section, a simulation study is performed to compare the performance of the estimates of R obtained using different methods; all computations are carried out in Mathematica 10.

### 5.1. Simulation study for complete data

• For given values of (λ1, λ2, μ1, μ2) compute the true value of R from (2.1) or (2.2).

• For given (n, m), generate a sample of size n from (1.1) with given parameters (λ1, μ1) and a sample of size m from (1.1) with given parameters (λ2, μ2).

• Evaluate maximum penalized likelihood estimators of (λ1, λ2, μ1, μ2) and R using Algorithm 1.

• Repeat steps 2–3 N = 10^4 times and calculate the biases and MSEs.

• Calculate the asymptotic confidence interval (ACI) of R according to (3.1).

• Use steps 2–4 to compute RB using Algorithm 2.

• Calculate the biases and MSEs for the Bayes estimator.
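The simulation loop above can be sketched end to end. The paper's computations were done in Mathematica 10; the Python sketch below is illustrative only, and it re-uses the assumed Zheng-type penalized estimators from the Section 3.1 sketch:

```python
import math
import random

def simulate_bias_mse(lam1, mu1, lam2, mu2, n, m, N=1000, seed=7):
    """Repeat sampling, re-estimate R by the (assumed) penalized MLE,
    and accumulate bias and MSE against the true R of Section 2."""
    random.seed(seed)
    if mu1 == mu2:
        R = lam1 / (lam1 + lam2)
    elif mu1 > mu2:
        R = 1.0 - lam2 / (lam1 + lam2) * math.exp((mu2 - mu1) / lam2)
    else:
        R = lam1 / (lam1 + lam2) * math.exp((mu1 - mu2) / lam1)
    def pmle(sample):  # assumed penalized MLE of (lam, mu)
        k = len(sample); x1 = min(sample); xbar = sum(sample) / k
        return k * (xbar - x1) / (k - 1), x1 - (xbar - x1) / (k - 1)
    bias = mse = 0.0
    for _ in range(N):
        xs = [mu1 + random.expovariate(1.0 / lam1) for _ in range(n)]
        ys = [mu2 + random.expovariate(1.0 / lam2) for _ in range(m)]
        l1, mhat1 = pmle(xs)
        l2, mhat2 = pmle(ys)
        if mhat1 > mhat2:
            r = 1.0 - l2 / (l1 + l2) * math.exp((mhat2 - mhat1) / l2)
        elif mhat1 < mhat2:
            r = l1 / (l1 + l2) * math.exp((mhat1 - mhat2) / l1)
        else:
            r = l1 / (l1 + l2)
        bias += (r - R) / N
        mse += (r - R) ** 2 / N
    return R, bias, mse
```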

Table 1 shows the simulation study results for the complete data. We observe that for large sample sizes the estimates R̂* and RB stabilize; when μ1 = μ2 the estimates of R are closest to the true values; R is estimated poorly when μ1 < μ2; and biases and MSEs decrease as sample sizes increase.

### 5.2. Simulation study for incomplete data

In this subsection, we compare the maximum penalized likelihood estimator and the Bayes estimator (using the MCMC technique) of R in terms of bias and MSE, and give the ACIs and HPD credible intervals for progressive type II censored data:

• Use the parameter values (λ1, λ2, μ1, μ2) = (1, 1, 1.5, 0.5), (1, 1, 1.5, 1.5), and (1, 1, 2.5, 2.5).

Also use the censoring schemes (CS), each with (m, n) = (10, 30):

$\text{I}: (P_1,\dots,P_{10})=(0,0,0,0,0,0,0,0,0,20),\quad\text{II}: (P_1,\dots,P_{10})=(20,0,0,0,0,0,0,0,0,0),\quad\text{III}: (P_1,\dots,P_{10})=(2,2,2,2,2,2,2,2,2,2).$

• Find Rpr using Algorithm 3.

• Compute bias and MSE.

• Compute the ACI from (4.1).

• Find $R_{prB}$ and compute the HPD credible interval using Algorithm 4 with the censoring schemes in step 1 and the following priors:

Prior 1. ai = 0, bi = 0, i = 1, 2, 3.

Prior 2. ai = 1, bi = 2, i = 1, 2, 3.

Prior 3. ai = 2, bi = 3, i = 1, 2, 3.

Tables 2 and 3 show results of simulation study of R for incomplete data.

6. Data analysis

In this section, we present a real example to show that our proposed model can be applied in practice. Badar and Priest (1982) reported data representing the strength, measured in GPa, of single carbon fibers and impregnated 1000-carbon fiber tows. Single fibers were tested under tension at gauge lengths of 20 mm (data set 1) and 10 mm (data set 2). These data have been used previously by Raqab and Kundu (2005), Kundu and Gupta (2006), Kundu and Raqab (2009), and Asgharzadeh et al. (2011).

• Data set I: (gauge lengths of 20 mm).

1.312 1.314 1.479 1.552 1.700 1.803 1.861 1.865 1.944 1.958 1.966 1.997 2.006 2.021 2.027 2.055 2.063 2.098 2.140 2.179 2.224 2.240 2.253 2.270 2.272 2.274 2.301 2.301 2.359 2.382 2.382 2.426 2.434 2.435 2.478 2.490 2.511 2.514 2.535 2.554 2.566 2.570 2.586 2.629 2.633 2.642 2.648 2.684 2.697 2.726 2.770 2.773 2.800 2.809 2.818 2.821 2.848 2.880 2.954 3.012 3.067 3.084 3.090 3.096 3.128 3.233 3.433 3.585 3.585

• Data set II (gauge lengths of 10 mm).

1.901 2.132 2.203 2.228 2.257 2.350 2.361 2.396 2.397 2.445 2.454 2.474 2.518 2.522 2.525 2.532 2.575 2.614 2.616 2.618 2.624 2.659 2.675 2.738 2.740 2.856 2.917 2.928 2.937 2.937 2.977 2.996 3.030 3.125 3.139 3.145 3.220 3.223 3.235 3.243 3.264 3.272 3.294 3.332 3.346 3.377 3.408 3.435 3.493 3.501 3.537 3.554 3.562 3.628 3.852 3.871 3.886 3.971 4.024 4.027 4.225 4.395 5.020

We apply our model as:

• Check the validity of the two parameter negative exponential distribution for the given data sets (see Figure 2).

• Compute maximum penalized likelihood estimator and Bayes estimator of R for complete data. Table 4 shows the results.

• Compute the maximum penalized likelihood and Bayes estimator for incomplete data and corresponding confidence interval using two different progressively censored samples schemes (Table 5) and data analysis results for incomplete data (Table 6).

7. Conclusions

In this paper, we apply a maximum penalized likelihood method to estimate the stress-strength parameter based on the two parameter negative exponential distribution and compare it with the regular maximum likelihood method; the maximum penalized likelihood method is preferred because it has a smaller variance. We apply the maximum penalized likelihood method to complete and incomplete data. We also introduce a Bayes estimator using the Lindley approximation for complete data and an MCMC technique for incomplete data. A simulation study is performed to compare the different estimators in terms of biases and MSEs for complete and incomplete data. The Bayes estimate of R and the corresponding credible interval can be obtained using the Gibbs sampling technique. In addition, we apply our model to real data and recommend the use of the two parameter negative exponential distribution as a reliability model for complete and incomplete data.

Figures

Fig. 1. Two parameter negative exponential distribution.

Fig. 2. Fitted two parameter negative exponential distribution for data set I and data set II.
TABLES

### Table 1

Results of simulation study for complete data

Columns 3–5 give the maximum penalized likelihood estimator; columns 6–8 give the Bayes estimator.

(λ1, λ2, μ1, μ2) = (1, 1, 1.5, 1) and R = 0.6967, μ1 > μ2

| (n, m) | (λ1*, λ2*, μ1*, μ2*) | R̂* | Bias | MSE | RB | Bias | MSE | ACI |
|---|---|---|---|---|---|---|---|---|
| (30, 30) | (0.0738, 0.0520, 2.4090, 1.9489) | 0.9999 | 0.0050 | 0.0015 | 0.9997 | −0.0001 | 0.0000 | (0.6200, 1.0000) |
| (30, 50) | (0.0597, 0.0677, 2.4450, 1.9320) | 0.9997 | 0.0037 | 0.0011 | 0.9920 | −0.0004 | 0.0000 | (0.6101, 0.9999) |
| (30, 100) | (0.4850, 0.0772, 2.4505, 1.9222) | 0.9993 | 0.0023 | 0.0007 | 0.9989 | 0.0003 | 0.0000 | (0.6011, 0.9995) |
| (50, 50) | (0.1026, 0.0653, 2.4016, 1.9354) | 0.9996 | 0.0030 | 0.0009 | 0.9991 | −0.0004 | 0.0000 | (0.6412, 1.0000) |
| (50, 100) | (0.0567, 0.0682, 2.4444, 1.9356) | 0.9996 | 0.0020 | 0.0006 | 0.9993 | −0.0002 | 0.0000 | (0.6454, 1.0000) |
| (100, 100) | (0.0689, 0.0677, 2.4232, 1.9323) | 0.9996 | 0.0015 | 0.0004 | 0.9993 | −0.0002 | 0.0000 | (0.6600, 1.0000) |

(λ1, λ2, μ1, μ2) = (1.5, 1, 1.5, 1.5) and R = 0.6, μ1 = μ2

| (n, m) | (λ1*, λ2*, μ1*, μ2*) | R̂* | Bias | MSE | RB | Bias | MSE | ACI |
|---|---|---|---|---|---|---|---|---|
| (30, 30) | (0.1064, 0.0659, 2.9110, 2.4360) | 0.6192 | 0.0003 | 0.0000 | 0.6182 | −0.0006 | 0.0000 | (0.5999, 0.6400) |
| (30, 50) | (0.0922, 0.0764, 2.9090, 2.4234) | 0.5468 | −0.0006 | 0.0000 | 0.5457 | −0.0001 | 0.0000 | (0.5124, 0.7000) |
| (30, 100) | (0.1572, 0.0612, 2.8360, 2.4398) | 0.7197 | 0.0009 | 0.0001 | 0.7195 | −0.0001 | 0.0000 | (0.5300, 0.7500) |
| (50, 50) | (0.1417, 0.0751, 2.8666, 2.4295) | 0.6536 | 0.0005 | 0.0000 | 0.6524 | −0.0011 | 0.0000 | (0.5504, 0.7700) |
| (50, 100) | (0.1039, 0.0565, 2.8953, 2.4490) | 0.6474 | 0.0003 | 0.0000 | 0.6473 | 0.0003 | 0.0000 | (0.5965, 0.7584) |
| (100, 100) | (0.0885, 0.1064, 2.9112, 2.3955) | 0.4539 | −0.0007 | 0.0001 | 0.4519 | −0.0007 | 0.0000 | (0.4210, 0.7145) |

(λ1, λ2, μ1, μ2) = (1, 1.5, 1.5, 2) and R = 0.2420, μ1 < μ2

| (n, m) | (λ1*, λ2*, μ1*, μ2*) | R̂* | Bias | MSE | RB | Bias | MSE | ACI |
|---|---|---|---|---|---|---|---|---|
| (100, 100) | (0.0838, 0.1059, 2.4260, 3.3815) | 0.0001 | 0.2419 | 0.4419 | 0.0016 | 0.1622 | 0.4386 | (0.0001, 0.2500) |

MSE = mean square error; ACI = asymptotic confidence interval.

### Table 2

Results of simulation study for incomplete data (biases and MSE’s) of maximum penalized likelihood estimator and Bayes estimator of R

(λ1, λ2, μ1, μ2) = (1, 1, 0.5, 0.5)

| CS (P, Q) | Rpr | Bias | MSE | RprB (Prior 1) | Bias | MSE | RprB (Prior 2) | Bias | MSE | RprB (Prior 3) | Bias | MSE |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| (I, I) | 0.5520 | 0.0026 | 0.0001 | 0.5001 | −0.0001 | 0.0000 | 0.5108 | −0.0108 | 0.0001 | 0.5027 | −0.0027 | 0.0000 |
| (I, II) | 0.3480 | −0.0075 | 0.0011 | 0.4874 | 0.0126 | 0.0001 | 0.4840 | 0.0160 | 0.0002 | 0.5149 | −0.0149 | 0.0002 |
| (I, III) | 0.5338 | 0.0016 | 0.0000 | 0.4951 | 0.0049 | 0.0000 | 0.5058 | −0.0058 | 0.0000 | 0.4988 | 0.0012 | 0.0000 |
| (II, II) | 0.3125 | −0.0093 | 0.0017 | 0.4977 | 0.0023 | 0.0000 | 0.4922 | 0.0078 | 0.0000 | 0.4872 | 0.0128 | 0.0001 |
| (II, III) | 0.5245 | 0.0012 | 0.0000 | 0.4942 | 0.0058 | 0.0000 | 0.5135 | −0.0135 | 0.0001 | 0.4816 | 0.0184 | 0.0003 |
| (III, III) | 0.4017 | −0.0049 | 0.0004 | 0.5132 | −0.0132 | 0.0001 | 0.4822 | 0.0178 | 0.0003 | 0.4992 | 0.0008 | 0.0000 |

(λ1, λ2, μ1, μ2) = (1, 1, 1.5, 1.5)

| CS (P, Q) | Rpr | Bias | MSE | RprB (Prior 1) | Bias | MSE | RprB (Prior 2) | Bias | MSE | RprB (Prior 3) | Bias | MSE |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| (I, I) | 0.5753 | 0.0037 | 0.0002 | 0.4790 | 0.0209 | 0.0004 | 0.5064 | −0.0064 | 0.0000 | 0.5023 | −0.0023 | 0.0000 |
| (I, II) | 0.7423 | 0.0121 | 0.0029 | 0.4919 | 0.0082 | 0.0000 | 0.4788 | 0.0211 | 0.0004 | 0.4917 | 0.0082 | 0.0006 |
| (I, III) | 0.5731 | 0.0036 | 0.0029 | 0.5065 | −0.0065 | 0.0000 | 0.5280 | −0.0280 | 0.0007 | 0.4847 | 0.0152 | 0.0002 |
| (II, II) | 0.3524 | −0.0073 | 0.0010 | 0.5108 | −0.0106 | 0.0001 | 0.5013 | −0.0013 | 0.0000 | 0.4968 | 0.0031 | 0.0000 |
| (II, III) | 0.3085 | −0.0057 | 0.0006 | 0.4850 | 0.0144 | 0.0003 | 0.4904 | 0.0095 | 0.0000 | 0.5106 | 0.0106 | 0.0001 |
| (III, III) | 0.3461 | −0.0076 | 0.0011 | 0.5185 | −0.0185 | 0.0003 | 0.5220 | −0.0220 | 0.0004 | 0.5152 | −0.0192 | 0.0002 |

(λ1, λ2, μ1, μ2) = (1, 1, 2.5, 2.5)

| CS (P, Q) | Rpr | Bias | MSE | RprB (Prior 1) | Bias | MSE | RprB (Prior 2) | Bias | MSE | RprB (Prior 3) | Bias | MSE |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| (I, I) | 0.6344 | 0.0067 | 0.0009 | 0.4820 | 0.0179 | 0.0003 | 0.4780 | 0.0219 | 0.0004 | 0.4958 | 0.0041 | 0.0000 |
| (I, II) | 0.5930 | 0.0046 | 0.0004 | 0.5116 | −0.0116 | 0.0001 | 0.4886 | 0.0113 | 0.0001 | 0.5162 | −0.0162 | 0.0002 |
| (I, III) | 0.5433 | 0.0021 | 0.0000 | 0.4999 | 0.0001 | 0.0000 | 0.5046 | −0.0046 | 0.0000 | 0.4990 | 0.0609 | 0.0000 |
| (II, II) | 0.3009 | −0.0099 | 0.0019 | 0.5094 | −0.0094 | 0.0000 | 0.5023 | −0.0023 | 0.0000 | 0.4933 | 0.0066 | 0.0000 |
| (II, III) | 0.4359 | −0.0032 | 0.0002 | 0.4696 | 0.0303 | 0.0009 | 0.5295 | −0.0295 | 0.0008 | 0.4812 | 0.0187 | 0.0003 |
| (III, III) | 0.5191 | 0.0009 | 0.0000 | 0.5089 | −0.0089 | 0.0000 | 0.5016 | −0.0016 | 0.0000 | 0.4939 | 0.0060 | 0.0000 |

MSE = mean square error; CS = censoring schemes.

### Table 3

Comparison between ACI and HPD credible interval of R

(λ1, λ2, μ1, μ2) = (1, 1, 0.5, 0.5)

| CS | ACI | HPD (Prior 1) | HPD (Prior 2) | HPD (Prior 3) |
|---|---|---|---|---|
| (I, I) | (0.4832, 0.6207) | (0.4510, 0.5491) | (0.4607, 0.5608) | (0.4534, 0.5519) |
| (I, II) | (0.3978, 0.6021) | (0.4397, 0.5352) | (0.4366, 0.5314) | (0.4644, 0.5638) |
| (I, III) | (0.3490, 0.6509) | (0.4466, 0.5436) | (0.4562, 0.5554) | (0.4499, 0.5476) |
| (II, II) | (0.3002, 0.6997) | (0.4490, 0.5465) | (0.4440, 0.5405) | (0.4395, 0.5350) |
| (II, III) | (0.4436, 0.5563) | (0.4458, 0.5426) | (0.4631, 0.5638) | (0.4344, 0.5288) |
| (III, III) | (0.3803, 0.6196) | (0.4629, 0.5635) | (0.4349, 0.5294) | (0.4503, 0.5481) |

(λ1, λ2, μ1, μ2) = (1, 1, 1.5, 1.5)

| CS | ACI | HPD (Prior 1) | HPD (Prior 2) | HPD (Prior 3) |
|---|---|---|---|---|
| (I, I) | (0.3894, 0.6105) | (0.4320, 0.5259) | (0.4568, 0.5561) | (0.4531, 0.5515) |
| (I, II) | (0.3839, 0.6160) | (0.4435, 0.5398) | (0.4319, 0.5278) | (0.4436, 0.5399) |
| (I, III) | (0.3899, 0.6105) | (0.4568, 0.5561) | (0.4762, 0.5797) | (0.4372, 0.5322) |
| (II, II) | (0.3685, 0.6314) | (0.4608, 0.5609) | (0.4522, 0.5504) | (0.4481, 0.5455) |
| (II, III) | (0.3606, 0.6393) | (0.4850, 0.5326) | (0.4424, 0.5385) | (0.4606, 0.5607) |
| (III, III) | (0.3997, 0.6002) | (0.4677, 0.5193) | (0.4709, 0.5732) | (0.4647, 0.5657) |

(λ1, λ2, μ1, μ2) = (1, 1, 2.5, 2.5)

| CS | ACI | HPD (Prior 1) | HPD (Prior 2) | HPD (Prior 3) |
|---|---|---|---|---|
| (I, I) | (0.4190, 0.5809) | (0.4348, 0.5293) | (0.4311, 0.5248) | (0.4472, 0.5443) |
| (I, II) | (0.2338, 0.7661) | (0.4615, 0.5618) | (0.4407, 0.5365) | (0.4656, 0.5666) |
| (I, III) | (0.4190, 0.5809) | (0.4509, 0.5489) | (0.4551, 0.5541) | (0.4501, 0.5479) |
| (II, II) | (0.3724, 0.6275) | (0.4595, 0.5593) | (0.4531, 0.5515) | (0.4449, 0.5416) |
| (II, III) | (0.4389, 0.5610) | (0.4236, 0.5156) | (0.4776, 0.5814) | (0.4340, 0.5283) |
| (III, III) | (0.4501, 0.5498) | (0.4590, 0.5588) | (0.4524, 0.5507) | (0.4455, 0.5423) |

CS = censoring schemes; ACI = asymptotic confidence interval; HPD = highest probability density.

### Table 4

Data analysis results for complete data

| (λ1*, λ2*, μ1*, μ2*) | R̂* | RB | ACI |
|---|---|---|---|
| (1.3707, 1.214, 1.8792, 1.2944) | 0.70986 | 0.6807 | (0.6985, 0.7098) |

ACI = asymptotic confidence interval.

### Table 5

Data and the corresponding censored schemes

| i | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 |
|---|---|---|---|---|---|---|---|---|---|---|
| xi | 1.312 | 1.479 | 1.552 | 1.803 | 1.944 | 1.858 | 1.966 | 2.027 | 2.055 | 2.098 |
| Pi | 1 | 0 | 1 | 2 | 0 | 0 | 3 | 0 | 1 | 50 |
| yj | 1.901 | 2.132 | 2.257 | 2.361 | 2.396 | 2.445 | 2.373 | 2.523 | 2.532 | 2.570 |
| Qj | 0 | 2 | 1 | 0 | 1 | 1 | 2 | 0 | 0 | 44 |

### Table 6

Data analysis results for incomplete data

$(\lambda_{1pr},\lambda_{2pr},\mu_{1pr},\mu_{2pr})=(2.8849, 2.2840, 1.257, 1.2576)$

| Method | Estimator of R | Confidence interval |
|---|---|---|
| Maximum penalized likelihood | 0.5580 | (0.5000, 0.5800) |
| Bayes estimator (prior 1) | 0.5313 | (0.4793, 0.5834) |
| Bayes estimator (prior 2) | 0.4756 | (0.4289, 0.5222) |
| Bayes estimator (prior 3) | 0.5138 | (0.4634, 0.5641) |

References
1. Ali, MM, Pal, M, and Woo, J (2012). Estimation of P(Y < X) in a four-parameter generalized gamma distribution. Austrian Journal of Statistics. 41, 197-210.
2. Al-Mutairi, DK, Ghitany, ME, and Kundu, D (2013). Inferences on stress-strength reliability from Lindley distributions. Communications in Statistics Theory and Methods. 42, 1443-1463.
3. Asgharzadeh, A, Valiollahi, R, and Raqab, M (2011). Stress-strength reliability of Weibull distribution based on progressively censored samples. SORT. 35, 103-124.
4. Badar, MG, and Priest, AM (1982). Statistical aspects of fiber and bundle strength in hybrid composites. Progress in Science and Engineering Composites, Hayashi, T, Kawata, K, and Umekawa, S, ed. Tokyo: ICCM-IV, pp. 1129-1136
5. Balakrishnan, N, and Aggarwala, R (2000). Progressive Censoring: Theory, Methods and Applications. Boston: Birkhauser
6. Basu, AP (1964). Estimates of reliability for some distributions useful in reliability. Technometrics. 6, 215-219.
7. Beg, MA (1980). On the estimation of P(Y < X) for two-parameter exponential distribution. Metrika. 27, 29-34.
8. Birnbaum, ZW (1956). On a use of the Mann-Whitney statistic. Proceedings of Third Berkeley Symposium on Mathematical Statistics and Probability. CA: University of California Press, pp. 13-17
9. Chen, MH, and Shao, QM (1999). Monte Carlo estimation of Bayesian credible and HPD intervals. Journal of Computational and Graphical Statistics. 8, 69-92.
10. Downtown, F (1973). The estimation of P(Y < X) in the normal case. Technometrics. 15, 551-558.
11. Enis, P, and Geisser, S (1971). Estimation of the probability that P(Y < X). Journal of the American Statistical Association. 66, 162-168.
12. Ghitany, ME, Al-Mutairi, DK, and Aboukhamseen, SM (2015). Estimation of the reliability of a stress-strength system from power Lindley distributions. Communications in Statistics Simulation and Computation. 44, 118-136.
13. Greco, L, and Ventura, L (2011). Robust inference for the stress-strength reliability. Statistical Papers. 52, 773-788.
14. Hussian, MA (2013). On estimation of stress-strength model for generalized inverted exponential distribution. Journal of Reliability and Statistical Studies. 6, 55-63.
15. Iwase, K (1987). On UMVU estimators of P(Y < X) in the 2-parameter exponential case. Memoirs of the Faculty of Engineering, Hiroshima University. 9, 21-24.
16. Kelley, GD, Kelley, JA, and Schucany, WR (1976). Efficient estimation of P(Y < X) in the exponential case. Technometrics. 18, 359-360.
17. Kundu, D, and Gupta, RD (2006). Estimation of p(Y < X) for Weibull distribution. IEEE Transactions on Reliability. 55, 270-280.
18. Kundu, D, and Raqab, MZ (2009). Estimation of R = P(Y < X) for three-parameter Weibull distribution. Statistics and Probability Letters. 79, 1839-1846.
19. Krishnamoorthy, K, Mukherjee, S, and Guo, H (2007). Inference on reliability in two- parameter exponential stress-strength model. Metrika. 6, 261-273.
20. Lindley, DV (1980). Approximate Bayesian methods. Trabajos de Estadistica. 21, 223-237.
21. McCool, JI (1991). Inference on P(X < Y) In the Weibull case. Communications in Statistics Simulation and Computation. 20, 129-148.
22. Najarzadegan, H, Babaii, S, Rezaei, S, and Nadarajah, S (2016). Estimation of P(Y < X) for the Lévy distribution. Hacettepe Journal of Mathematics and Statistics. 45, 957-972.
23. Pal, M, Ali, MM, and Woo, J (2005). Estimation and testing of P(Y > X) in two-parameter exponential distributions. Statistics. 39, 415-428.
24. Raqab, MZ, and Kundu, D (2005). Comparison of different estimators of P(Y < X) for a scaled Burr Type X distribution. Communications in Statistics Simulation and Computation. 34, 465-483.
25. Rezaei, S, Tahmasbi, R, and Mahmoodi, M (2010). Estimation of P[Y < X] for generalized Pareto distribution. Journal of Statistical Planning and Inference. 140, 480-494.
26. Saracoglu, B, Kinaci, I, and Kundu, D (2012). On estimation of R = p(Y < X) for exponential distribution under progressive type-II censoring. Journal of Statistical Computation and Simulation. 82, 729-744.
27. Sathe, YS, and Shah, SP (1981). On estimation P(Y < X) for the exponential distribution. Communications in Statistics and Theory and Methods. 10, 39-47.
28. Shahsanaei, F, and Daneshkhah, A (2013). Estimation of stress-strength model in generalized linear failure rate distribution. ArXiv. 1312, 0401v1.
29. Tong, H (1974). A note on the estimation of P(Y < X) in the exponential case. Technometrics. 16, 625.
30. Tong, H (1977). On the estimation of P(Y < X) for exponential families. IEEE Transactions on Reliability. 26, 54-56.
31. Wong, A (2012). Interval estimation of P(Y < X) for generalized Pareto distribution. Journal of Statistical Planning and Inference. 142, 601-607.
32. Zheng, M (2013). Penalized Maximum Likelihood Estimation of Two-Parameter Exponential Distributions (Master's project). Minneapolis: University of Minnesota.