Classical and Bayesian studies for a new lifetime model in presence of type-II censoring
Communications for Statistical Applications and Methods 2019;26:385-410
Published online July 31, 2019
© 2019 Korean Statistical Society.

Teena Goyal^a, Piyush K. Rai^b, Sandeep K. Maurya^{1,a}

^a Department of Mathematics & Statistics, Banasthali Vidyapith, India;
^b Department of Statistics, Banaras Hindu University, India
Correspondence to: ^1 Department of Mathematics & Statistics, Banasthali Vidyapith, Rajasthan-304022, India.
E-mail: sandeepmaurya.maurya48@gmail.com
Received February 8, 2019; Revised April 11, 2019; Accepted May 8, 2019.
 Abstract

This paper proposes a new class of distributions based on exponentiating the distribution function, which makes the baseline model more flexible, and introduces a new lifetime distribution exhibiting decreasing, increasing, and bathtub hazard rates. After studying some basic statistical properties and the parameter estimation procedure for complete samples, we study point and interval estimation in the presence of type-II censored samples under both classical and Bayesian paradigms. In the Bayesian paradigm, we consider a Gibbs sampler with Metropolis-Hastings steps for estimation under two different loss functions. After simulation studies, three real datasets of different natures are used to show the suitability of the proposed model.

Keywords : LTE distribution, Gibbs sampler, M-H algorithm, Bootstrap confidence interval, HPD interval
1. Introduction

In the statistical literature, a number of lifetime models have been proposed for analyzing the uncertainty of random life phenomena. Among them, the exponential distribution is one of the oldest and most popular models because of its properties, its easy tractability, and the closed-form solutions available for parameter estimation. However, its utility is restricted by its constant hazard rate, whereas many lifetime experiments exhibit non-constant hazard behavior. Researchers have therefore developed more flexible models, most of them related in some way to the exponential model, such as the Weibull, gamma, and Lindley distributions (Lindley, 1958). Generalization and transformation are among the techniques now popular for proposing new lifetime models. Mudholkar and Srivastava (1993) proposed the three-parameter exponentiated Weibull distribution. Gupta et al. (1998) proposed the exponentiated exponential distribution (Gupta and Kundu, 2001), also known as the Lehmann type I distribution; in this technique, the cumulative distribution function is raised to a shape parameter, which increases the flexibility of the baseline model. Nadarajah and Kotz (2006) subsequently proposed the exponentiated gamma, exponentiated Frechet, and exponentiated Gumbel distributions. In this context, the quadratic rank transmutation map (QRTM) is another generalization technique, proposed by Shaw and Buckley (2007) (see Aryal and Tsokos (2009) for more details). Cordeiro et al. (2013) recently proposed a new class of distributions by adding two new shape parameters, and the Kumaraswamy generalization (KWG) of Kumaraswamy (1980) likewise adds two shape parameters. Nearly all of these extensions add parameters to the existing model: the additional parameters provide greater flexibility, but at the cost of added complexity in parameter estimation and subsequent inference.

Perhaps keeping this point in mind, Maurya et al. (2016) proposed the logarithmic transformation (LT) method to obtain a new distribution. If G(x) is the baseline cumulative distribution function (CDF), the LT transformation provides the new CDF F(x) given below:

F(x) = 1 - log[2 - G(x)]/log 2,   x > 0

They considered the exponential distribution as the baseline and named the result the LTE distribution; it has non-constant hazard rates. Another advantage of this transformation is that the new distribution remains parsimonious in parameters, because it adds no additional parameter. A more generalized form of the LT method was proposed by Pappas et al. (2012). Using this concept, Dey et al. (2017) proposed a new distribution, studied its statistical properties, and named it the αLT generalized exponential distribution (see also Nassar et al. (2018) and Dey et al. (2019) for more details about the transformation).

No model is perfect for every situation. In this spirit, our objective is to propose a new class of distributions via a transformation technique that accommodates all types of hazard rates for appropriate choices of the shape parameter. Here we propose applying the LT method to the exponentiated CDF (i.e., applying Lehmann type I within the LT technique), referred to as the generalized LT (GLT) method. The resulting distribution is expected to possess both monotone and non-monotone hazard rate shapes, depending on the parameter values. The new distribution through GLT can be obtained as follows: let X be a random variable with CDF G(x) and corresponding probability density function (PDF) g(x), taken as the baseline distribution, and let F(x) and f(x) be the CDF and PDF of the proposed GLT distribution, respectively. Then the new CDF F(x) is defined as,

F(x) = 1 - log[2 - G^α(x)]/log 2,   x > 0, α > 0

and the corresponding PDF is,

f(x) = α g(x) G^{α-1}(x) / {[2 - G^α(x)] log 2},   x > 0, α > 0.

For illustration, we consider the exponential distribution as the baseline distribution due to its simplicity and popularity in life testing problems. The CDF of the exponential distribution with scale parameter θ is G(x) = 1 - e^{-θx}; x, θ > 0, and the corresponding PDF is g(x) = θe^{-θx}.

Now, using GLT method proposed in equation (1.1), the CDF and PDF of new proposed distribution, known as GLT exponential (GLTE) distribution can easily be obtained as:

F(x) = 1 - log(2 - (1 - e^{-θx})^α)/log 2,   f(x) = αθ e^{-θx} (1 - e^{-θx})^{α-1} / {(2 - (1 - e^{-θx})^α) log 2},   x, θ, α > 0,

and its associated hazard rate is,

h(x) = αθ e^{-θx} (1 - e^{-θx})^{α-1} / {[2 - (1 - e^{-θx})^α] log[2 - (1 - e^{-θx})^α]},

where α is the shape parameter and θ is the scale parameter of the distribution.
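For quick numerical work, the CDF, PDF, and hazard rate of the GLTE distribution translate directly into code. The sketch below is our own (function names are not from the paper); the hazard is computed as f(x)/(1 - F(x)), which algebraically reduces to the closed form above.

```python
import math

LOG2 = math.log(2.0)

def glte_cdf(x, alpha, theta):
    # F(x) = 1 - log(2 - (1 - e^{-theta x})^alpha) / log 2
    g = 1.0 - math.exp(-theta * x)
    return 1.0 - math.log(2.0 - g ** alpha) / LOG2

def glte_pdf(x, alpha, theta):
    # f(x) = alpha*theta*e^{-theta x}*(1 - e^{-theta x})^{alpha-1} / ((2 - (...)^alpha) log 2)
    g = 1.0 - math.exp(-theta * x)
    return (alpha * theta * math.exp(-theta * x) * g ** (alpha - 1.0)
            / ((2.0 - g ** alpha) * LOG2))

def glte_hazard(x, alpha, theta):
    # h(x) = f(x) / (1 - F(x))
    return glte_pdf(x, alpha, theta) / (1.0 - glte_cdf(x, alpha, theta))
```

For example, `glte_hazard` agrees numerically with the closed-form hazard h(x) = αθe^{-θx}(1 - e^{-θx})^{α-1}/{[2 - (1 - e^{-θx})^α] log[2 - (1 - e^{-θx})^α]}.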

However, a researcher may receive incomplete or partially known data, because complete information on lifetimes may not always be available for estimating the parameters. Such datasets are known as censored data. In general, there are two conventional censoring schemes, type-I (time) and type-II (failure) censoring. Here, we use type-II censored samples for estimation: the experiment is terminated once a prefixed number of failures (say r) has been observed, rather than waiting for all n units placed on the life test to fail. We can therefore obtain as many failures as required to maintain the efficiency of the statistical inference. See Nelson (2003) and Lawless (2011) for estimation problems under type-I and type-II censoring schemes. Evans and Ragab (1983), Singh et al. (2005), and Kundu and Howlader (2010) used Bayesian techniques to estimate parameters under type-II censoring for different lifetime models.

The rest of the paper is organized as follows. Section 2 discusses the shapes of the CDF, PDF, and hazard rate for various parameter values of the proposed distribution. Section 3 deals with the basic statistical properties of the proposed model, and Section 4 discusses the parameter estimation procedure for complete sample data. In Section 5, point estimation methods under type-II censoring are given in both paradigms. Section 6 deals with interval estimation in both paradigms. A simulation study under complete as well as type-II censored samples is elaborated in Section 7 under both paradigms. Section 8 uses three real datasets to show the suitability of the proposed model in comparison to six other well-known lifetime models with the same nature of hazard rate, for both complete and censored cases in both paradigms. Finally, conclusions are summarized in Section 9.

2. Nature of distribution and hazard rate

The shape of a distribution is an important feature because it indicates the nature of the distribution. The CDF plot based on equation (1.3) is given in Figure 1; it shows that the proposed distribution does not possess a stochastic ordering relationship if the shape parameter α is less than the scale parameter θ. The PDF plots for various values of α and θ based on equation (1.4) are also given in Figure 1 and show that the proposed model is very flexible, with the capability to fit a wide variety of real datasets.

We now follow Glaser's (1980) lemma to study the shapes of the hazard rate. He defined the term η(t) = -f′(t)/f(t), where f(t) is the density function and f′(t) is its first derivative with respect to t, and stated that:

Lemma 1

  • If η′(t) > 0 for all t > 0, then the distribution has an increasing hazard rate (IHR).

  • If η′(t) < 0 for all t > 0, then the distribution has a decreasing hazard rate (DHR).

  • Suppose there exists t* > 0 such that η′(t) < 0 for all t ∈ (0, t*), η′(t*) = 0, and η′(t) > 0 for all t > t*, and ε = lim_{t→0} f(t) exists. Then, if ε = 0 the distribution has an IHR, and if ε = ∞ it has a bathtub hazard rate.

In our proposed distribution, we see that:

η(t) = θ - (α - 1)θe^{-θt}/(1 - e^{-θt}) - αθe^{-θt}(1 - e^{-θt})^{α-1}/[2 - (1 - e^{-θt})^α]

and

η′(t) = θ²e^{-θt}[4(α - 1) - (1 - e^{-θt})^{2α} + (4 - 2α - 2α²e^{-θt})(1 - e^{-θt})^α] / {(1 - e^{-θt})²[2 - (1 - e^{-θt})^α]²}.

Now, it can easily be checked that the following three cases may arise:

  • When α ≥ 1, from equation (2.1) we have η′(t) > 0 for all t > 0; hence the distribution has an IHR.

  • When α ≤ 0.5, we have η′(t) < 0 for all t > 0; hence the distribution has a DHR.

  • When 0.5 < α < 1, we have verified that there always exists a t* such that η′(t) < 0 for t ∈ (0, t*), η′(t*) = 0, and η′(t) > 0 for all t > t*, where t* depends on the values of α and θ; however, an exact functional form of t* in terms of α and θ could not be obtained.

It is also easy to verify from equation (1.4) that lim_{t→0} f(t) = ∞ when α < 1; hence, in this case the proposed distribution has a bathtub hazard rate. Various shapes of the hazard rate are plotted in Figure 2 to support the above conclusions. The proposed distribution therefore includes both monotone and non-monotone types of hazard rates (increasing, decreasing, and bathtub).

3. Statistical properties of the proposed distribution

The proposed GLTE model can also be obtained as a particular case of the model proposed by Dey et al. (2017), because they considered the generalized exponential distribution as the baseline in the LT transformation whereas we use the exponential distribution. In this section, some basic statistical properties of the proposed distribution, such as moments, the moment generating function (MGF), characteristic function (CHF), cumulant generating function (CGF), and Shannon entropy, are systematically discussed. One may also follow Dey et al. (2017) for more details about distributional properties.

3.1. Moments

Moments are useful for studying the nature of a distribution. To derive expressions for the moments, we first establish the following lemma.

Lemma 2. K1(θ, α, r, δ) = ∫_0^∞ x^r e^{-δx} (1 - e^{-θx})^{α-1} / [2 - (1 - e^{-θx})^α] dx = Σ_{l=0}^∞ Σ_{m=0}^{l} Σ_{n=0}^{α(m+1)-1} (-1)^{l+m+n} C(l, m) C(α(m+1)-1, n) r! / (δ + θn)^{r+1}.
Proof

Using the convergent geometric series 1/(1 + y) = Σ_{i=0}^∞ (-y)^i for |y| < 1, we get,

K1(θ, α, r, δ) = ∫_0^∞ x^r e^{-δx} (1 - e^{-θx})^{α-1} Σ_{l=0}^∞ (-1)^l [1 - (1 - e^{-θx})^α]^l dx

and using the binomial series expansion (1 - y)^b = Σ_{i=0}^∞ (-1)^i C(b, i) y^i and then simplifying, we get,

K1(θ, α, r, δ) = Σ_{l=0}^∞ Σ_{m=0}^{l} (-1)^{l+m} C(l, m) ∫_0^∞ x^r e^{-δx} (1 - e^{-θx})^{α(m+1)-1} dx = Σ_{l=0}^∞ Σ_{m=0}^{l} Σ_{n=0}^{α(m+1)-1} (-1)^{l+m+n} C(l, m) C(α(m+1)-1, n) r! / (δ + θn)^{r+1}.

(Readers may follow Graham et al. (1994) for a detailed expression of binomial series).

Using Lemma 2 above, we get the rth moment as,

E(X^r) = (αθ/log 2) K1(θ, α, r, θ).

Hence, the arithmetic mean of the proposed distribution is E(X) = (αθ/log 2) K1(θ, α, 1, θ). Similarly, other measures such as the variance, skewness, and kurtosis of a random variable X following the proposed model can easily be obtained.

3.2. Moment generating function, characteristics function and cumulant generating function

If X is a random variable following the proposed distribution with PDF defined in equation (1.4), then its MGF is given as follows,

M_X(t) = (αθ/log 2) K1(θ, α, 0, θ - t),   for t < θ.

CHF of X can be found as,

φ_X(t) = (αθ/log 2) K1(θ, α, 0, θ - it),

where i = √(-1) is the imaginary unit, and the CGF of X is found as,

K_X(t) = log(αθ/log 2) + log K1(θ, α, 0, θ - t).
3.3. Shannon entropy

Entropy measures the randomness of a system. The Shannon entropy, proposed by Shannon (1951), is defined as E[-log f(X)]. Thus, by using equation (1.4), we can write,

-log f(x) = -log(αθ/log 2) + θx - (α - 1) log(1 - e^{-θx}) + log[2 - (1 - e^{-θx})^α]

and hence,

E[-log f(X)] = -log(αθ/log 2) + (αθ²/log 2) K1(θ, α, 1, θ) + 0.5823(α - 1)/(α log 2) + (log 2)/2,

where K1(θ, α, 1, θ) is as given in Lemma 2; up to the factor αθ/log 2, it equals the mean of the proposed distribution.

4. Estimation of the parameters in presence of complete sample

In a classical set-up, we use the maximum likelihood estimators of the parameters α and θ of the proposed distribution, obtained by maximizing the likelihood function. The estimator that maximizes the likelihood function also maximizes its logarithm, and the log-likelihood is used because it is easier to work with than the likelihood itself. Let x_1, x_2, ..., x_n be n independent identically distributed random variables from the proposed distribution; then the log-likelihood (log L) function based on equation (1.4) is,

log L = log Π_{i=1}^{n} f(x_i | α, θ) = n log(αθ/log 2) + (α - 1) Σ_{i=1}^{n} log(1 - e^{-θx_i}) - θ Σ_{i=1}^{n} x_i - Σ_{i=1}^{n} log[2 - (1 - e^{-θx_i})^α].

Differentiating equation (4.1) with respect to the parameters α and θ we get,

∂log L/∂α = n/α + Σ_{i=1}^{n} log(1 - e^{-θx_i}) + Σ_{i=1}^{n} (1 - e^{-θx_i})^α log(1 - e^{-θx_i}) / [2 - (1 - e^{-θx_i})^α]

and

∂log L/∂θ = n/θ - Σ_{i=1}^{n} x_i + (α - 1) Σ_{i=1}^{n} x_i e^{-θx_i}/(1 - e^{-θx_i}) + Σ_{i=1}^{n} α x_i (1 - e^{-θx_i})^{α-1} e^{-θx_i} / [2 - (1 - e^{-θx_i})^α].

Equating these derivatives to zero yields two non-linear likelihood equations. Solving them simultaneously provides the maximum likelihood estimators (MLEs) α̂ and θ̂ of the parameters α and θ, respectively. These equations cannot be solved analytically, so numerical techniques such as the Newton-Raphson method are used to solve them. For the choice of initial guess, a contour plot technique is used.
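The paper solves the likelihood equations by Newton-Raphson; as a simple stdlib-only stand-in, the sketch below maximizes the log-likelihood of equation (4.1) by golden-section coordinate ascent on data simulated via the inverse-CDF generator of equation (7.1). All function names, bounds, and settings are our illustrative choices:

```python
import math, random

LOG2 = math.log(2.0)

def loglik(alpha, theta, xs):
    # log L from equation (4.1)
    if alpha <= 0 or theta <= 0:
        return -float("inf")
    s = len(xs) * math.log(alpha * theta / LOG2)
    for x in xs:
        g = 1.0 - math.exp(-theta * x)
        s += (alpha - 1.0) * math.log(g) - theta * x - math.log(2.0 - g ** alpha)
    return s

def golden_max(f, lo, hi, iters=60):
    # golden-section search for the maximizer of a unimodal function on [lo, hi]
    phi = (math.sqrt(5.0) - 1.0) / 2.0
    c, d = hi - phi * (hi - lo), lo + phi * (hi - lo)
    for _ in range(iters):
        if f(c) < f(d):
            lo = c
        else:
            hi = d
        c, d = hi - phi * (hi - lo), lo + phi * (hi - lo)
    return (lo + hi) / 2.0

def fit_mle(xs, lo=0.05, hi=5.0, sweeps=20):
    a, t = 1.0, 1.0                      # initial guess (the paper suggests contour plots)
    best = loglik(a, t, xs)
    for _ in range(sweeps):
        cand = golden_max(lambda v: loglik(v, t, xs), lo, hi)
        if loglik(cand, t, xs) > best:   # accept coordinate update only if log L improves
            a, best = cand, loglik(cand, t, xs)
        cand = golden_max(lambda v: loglik(a, v, xs), lo, hi)
        if loglik(a, cand, xs) > best:
            t, best = cand, loglik(a, cand, xs)
    return a, t

def sample_glte(n, alpha, theta):
    # inverse-CDF generator of equation (7.1)
    return [-math.log(1.0 - (2.0 - 2.0 ** random.random()) ** (1.0 / alpha)) / theta
            for _ in range(n)]
```

On a simulated sample of size 500 with true (α, θ) = (1, 1), the fitted pair lands near the truth, and the attained log-likelihood is at least that of the true parameters.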

The estimation of the parameters in the Bayesian paradigm in presence of a censored sample, is given in Section 5 (for point estimation) and Section 6 (for interval estimation). From these discussions, we can easily obtain the point and interval estimates for the parameters in presence of complete sample just by putting r = n in the corresponding equations.

5. Point estimation of the parameters in presence of type-II censoring

This section discusses the point estimation in the presence of type-II censoring for both a classical and Bayesian set-up. The detailed discussion is given below.

Let x(1), x(2), . . . , x(r) be the r ordered type-II right censored random observations obtained from n units placed on a life-testing experiment where each unit has its lifetime following the PDF given in equation (1.4), with the largest (nr) lifetimes having been censored. Then, the likelihood function is given by Cohen (1965) and Balakrishnan and Cohen (2014) as

L(α, θ | x) = [n!/(n - r)!] Π_{i=1}^{r} f(x_(i) | α, θ) [1 - F(x_(r) | α, θ)]^{n-r}.

In the classical set-up, maximum likelihood estimators are used; in the Bayesian framework, Bayes estimates using informative and non-informative priors under two different loss functions, namely squared error and linex, are used. The procedures are discussed systematically in the following subsections.

5.1. Classical method of estimation for parameters

From equation (5.1), the likelihood function of the distribution can be written as,

L(α, θ | x) = [n!/(n - r)!] Π_{i=1}^{r} [αθ e^{-θx_(i)} (1 - e^{-θx_(i)})^{α-1} / {(2 - (1 - e^{-θx_(i)})^α) log 2}] × [log(2 - (1 - e^{-θx_(r)})^α)/log 2]^{n-r}

and logarithmic of likelihood function can be written as,

log L = log[n!/(n - r)!] + r log(αθ/log 2) + (α - 1) Σ_{i=1}^{r} log(1 - e^{-θx_(i)}) - θ Σ_{i=1}^{r} x_(i) - Σ_{i=1}^{r} log[2 - (1 - e^{-θx_(i)})^α] + (n - r) log[log(2 - (1 - e^{-θx_(r)})^α)/log 2].

Now, the method of finding the MLEs of the parameters is the same as discussed for the complete sample in Section 4.

5.2. Bayesian method of estimation for parameters

In the Bayesian paradigm, the posterior combines two components: a prior probability and a likelihood function calculated from the statistical model for the observed data. The prior distributions of the parameters are assumed before the data are observed and may not be easy to determine. Priors can be categorized as proper or improper and, based on the available information, as informative or non-informative. Here, we use an informative Gamma(a, b) prior for α and a non-informative prior for θ, because the nature of the hazard rate of the proposed distribution depends on the shape parameter α. Therefore, the prior for parameter α is,

π(α) = a^b α^{b-1} e^{-aα}/Γ(b),   α, a, b > 0

and prior for parameter θ is,

π(θ) ∝ 1/θ,   θ > 0.

Hence, the joint prior of parameters α and θ can be written as,

π(α, θ) ∝ a^b α^{b-1} e^{-aα}/(θ Γ(b)),   θ, α, a, b > 0,

where the hyper parameters (a, b) are assumed to be known and can be evaluated by following the method suggested by Singh et al. (2013). Using the prior (given in equation (5.6)) and the sample information (via likelihood function given in equation (5.2)), the posterior density for the proposed model is,

Π(α, θ | x) = L(α, θ | x) π(α, θ) / ∫∫ L(α, θ | x) π(α, θ) dα dθ = J1/J0,

here J0=∫∫J1dαdθ. Using the prior density (given in equation (5.6)) and the sample information (via likelihood function given in equation (5.3)), the numerator of equation (5.7) can be written as,

J1 = Π_{i=1}^{r} [αθ e^{-θx_(i)} (1 - e^{-θx_(i)})^{α-1} / {(2 - (1 - e^{-θx_(i)})^α) log 2}] × [log(2 - (1 - e^{-θx_(r)})^α)/log 2]^{n-r} × α^{b-1} e^{-aα}/θ.

Marginal posterior densities of α and θ are obtained by integrating equation (5.8) with respect to θ and α, respectively. In Bayesian statistics, a loss function is used for the estimation of parameters. Here, we consider two loss functions: a symmetric one, the squared error loss function (SELF), and an asymmetric one, the linex loss function (LLF). The SELF is defined as L(θ̂, θ) = (θ̂ - θ)², where θ̂ is the Bayes estimator of θ; under SELF, the Bayes estimator is simply the posterior mean.

The linex loss function (Varian, 1975) is defined as L(θ̂, θ) = e^{c(θ̂-θ)} - c(θ̂ - θ) - 1, where c ≠ 0. The constant c determines the shape of the loss function; for small values of c, the linex loss behaves approximately like a symmetric loss. Under LLF, the Bayes estimator is given by θ̂_L = -(1/c) log(E_θ(e^{-cθ} | x)). We have computed the Bayes estimators of the unknown parameters under SELF and LLF. The Bayes estimates of α and θ under SELF are given by

α̂_S = (1/J0) ∫_0^∞ α (∫_0^∞ J1 dθ) dα

and

θ̂_S = (1/J0) ∫_0^∞ θ (∫_0^∞ J1 dα) dθ.

The Bayes estimates of α and θ under LLF are given by

α̂_L = -(1/c) log[(1/J0) ∫_0^∞ e^{-cα} (∫_0^∞ J1 dθ) dα]

and

θ̂_L = -(1/c) log[(1/J0) ∫_0^∞ e^{-cθ} (∫_0^∞ J1 dα) dθ].

It is not possible to compute equations (5.9)-(5.12) analytically; therefore, we use the Markov chain Monte Carlo (MCMC) approach to approximate them (Hastings, 1970; Robert and Casella, 2013).

5.2.1. The Metropolis-Hastings within Gibbs sampling

The Metropolis-Hastings algorithm is a general-purpose technique for sampling from complex densities (introduced by Metropolis and Ulam (1949) and Metropolis et al. (1953), and extended by Hastings (1970)); see Chib and Greenberg (1995) for a review. The Gibbs sampler (Geman and Geman, 1984; Tierney, 1994) is a special case of an MCMC algorithm in which the generated draws are always accepted. It generates a sequence of samples from the full conditional probability distributions of two or more random variables, which requires decomposing the joint posterior into the full conditional distribution of each parameter and then sampling from them. For the GLTE parameters α and θ, the priors are given in equations (5.4) and (5.5) and the joint posterior in equation (5.7). The integrations involved in the posterior and the Bayes estimators cannot be solved analytically; therefore, we simulate from the posterior density using the Gibbs sampler with Metropolis-Hastings steps, so that sample-based inference can easily be drawn. Readers may follow Gelfand and Smith (1990) and Smith and Roberts (1993) for more details.
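A minimal sketch of M-H within Gibbs for the GLTE posterior follows, using the complete-sample likelihood for brevity, the Gamma(a, b) prior on α, and the 1/θ prior on θ. Proposal scales, starting values, chain lengths, and function names are our illustrative choices, not the authors':

```python
import math, random

LOG2 = math.log(2.0)

def log_post(alpha, theta, xs, a=5.0, b=0.2):
    # log posterior up to a constant: log L + log pi(alpha) + log pi(theta)
    if alpha <= 0 or theta <= 0:
        return -float("inf")
    lp = (b - 1.0) * math.log(alpha) - a * alpha - math.log(theta)  # priors
    lp += len(xs) * math.log(alpha * theta / LOG2)
    for x in xs:
        g = 1.0 - math.exp(-theta * x)
        lp += (alpha - 1.0) * math.log(g) - theta * x - math.log(2.0 - g ** alpha)
    return lp

def mh_within_gibbs(xs, n_iter=3000, burn=1000, scale=0.15):
    alpha, theta = 1.0, 1.0
    current = log_post(alpha, theta, xs)
    draws = []
    for it in range(n_iter):
        for coord in (0, 1):              # one random-walk M-H step per coordinate
            pa = alpha + random.gauss(0.0, scale) if coord == 0 else alpha
            pt = theta + random.gauss(0.0, scale) if coord == 1 else theta
            cand = log_post(pa, pt, xs)
            if math.log(random.random()) < cand - current:
                alpha, theta, current = pa, pt, cand
        if it >= burn:
            draws.append((alpha, theta))
    return draws

def bayes_self(draws):
    # posterior means = Bayes estimates under squared error loss (SELF)
    n = len(draws)
    return sum(d[0] for d in draws) / n, sum(d[1] for d in draws) / n

def bayes_llf(draws, c=0.1):
    # linex-loss Bayes estimate: -(1/c) * log E[e^{-c * parameter} | x]
    n = len(draws)
    return (-math.log(sum(math.exp(-c * d[0]) for d in draws) / n) / c,
            -math.log(sum(math.exp(-c * d[1]) for d in draws) / n) / c)
```

By Jensen's inequality, with c > 0 the linex estimate never exceeds the posterior mean, which is a handy sanity check on the sampler output.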

6. Interval estimation of parameter in presence of type-II censoring

This section deals with classical and Bayesian confidence interval (CI) estimation. We compute asymptotic and bootstrap confidence intervals in the classical framework and the highest posterior density interval in the Bayesian framework. A detailed discussion is given in the subsequent subsections.

6.1. Classical interval estimation

Classical methods of interval estimation, namely asymptotic confidence intervals and bootstrap intervals, are discussed in the following subsections.

6.1.1. Asymptotic confidence interval

For large samples, we can obtain confidence intervals based on the diagonal elements of the inverse Fisher information matrix I^{-1}(α̂, θ̂), which provide the estimated asymptotic variances of the estimators of α and θ. Thus, two-sided 100(1 - β)% confidence intervals for α and θ can be defined as

[α̂ ± Z_{β/2} √(var(α̂))]   and   [θ̂ ± Z_{β/2} √(var(θ̂))],

where Z_{β/2} denotes the upper (β/2)th percentage point of the standard normal distribution.

The Fisher information matrix can be estimated by,

I(α̂, θ̂) = -[ [∂²log L/∂α², ∂²log L/∂α∂θ], [∂²log L/∂α∂θ, ∂²log L/∂θ²] ] evaluated at (α̂, θ̂),

where,

∂²log L/∂α² = -n/α² + 2 Σ_{i=1}^{n} (1 - e^{-θx_i})^α [log(1 - e^{-θx_i})]² / [2 - (1 - e^{-θx_i})^α]²,

∂²log L/∂θ² = -n/θ² - (α - 1) Σ_{i=1}^{n} x_i² e^{-θx_i}/(1 - e^{-θx_i})² + α Σ_{i=1}^{n} { x_i² e^{-θx_i} (1 - e^{-θx_i})^{α-1} [(α - 1) e^{-θx_i} (1 - e^{-θx_i})^{-1} - 1] (2 - (1 - e^{-θx_i})^α) + α x_i² e^{-2θx_i} (1 - e^{-θx_i})^{2α-2} } / [2 - (1 - e^{-θx_i})^α]²,

∂²log L/∂α∂θ = Σ_{i=1}^{n} x_i e^{-θx_i}/(1 - e^{-θx_i}) + Σ_{i=1}^{n} x_i e^{-θx_i} (1 - e^{-θx_i})^{α-1} [2 - (1 - e^{-θx_i})^α + 2α log(1 - e^{-θx_i})] / [2 - (1 - e^{-θx_i})^α]².
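The observed information can equivalently be approximated by central finite differences of log L, which is a convenient cross-check on the analytic second derivatives. The sketch below (our own construction, stdlib only) then forms the asymptotic intervals from the inverse of this 2 × 2 matrix:

```python
import math, random

LOG2 = math.log(2.0)

def loglik(alpha, theta, xs):
    # complete-sample log-likelihood, equation (4.1)
    s = len(xs) * math.log(alpha * theta / LOG2)
    for x in xs:
        g = 1.0 - math.exp(-theta * x)
        s += (alpha - 1.0) * math.log(g) - theta * x - math.log(2.0 - g ** alpha)
    return s

def observed_info(alpha, theta, xs, h=1e-4):
    # negative Hessian of log L via central finite differences
    f = lambda a, t: loglik(a, t, xs)
    faa = (f(alpha + h, theta) - 2.0 * f(alpha, theta) + f(alpha - h, theta)) / h ** 2
    ftt = (f(alpha, theta + h) - 2.0 * f(alpha, theta) + f(alpha, theta - h)) / h ** 2
    fat = (f(alpha + h, theta + h) - f(alpha + h, theta - h)
           - f(alpha - h, theta + h) + f(alpha - h, theta - h)) / (4.0 * h ** 2)
    return [[-faa, -fat], [-fat, -ftt]]

def asymptotic_ci(alpha_hat, theta_hat, xs, z=1.959964):
    # 95% intervals: estimate +/- z * sqrt(diagonal of the inverse information matrix)
    info = observed_info(alpha_hat, theta_hat, xs)
    det = info[0][0] * info[1][1] - info[0][1] ** 2
    var_a, var_t = info[1][1] / det, info[0][0] / det
    return ((alpha_hat - z * math.sqrt(var_a), alpha_hat + z * math.sqrt(var_a)),
            (theta_hat - z * math.sqrt(var_t), theta_hat + z * math.sqrt(var_t)))
```

As expected for asymptotic intervals, the width shrinks roughly like 1/√n as the sample size grows.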
6.1.2. Bootstrap confidence interval

Confidence intervals based on asymptotic or normal-theory assumptions can perform inadequately for small samples. Accurate intervals can be obtained using the bootstrap without the normal-theory assumption. The bootstrap method was first introduced by Efron (1979) as a general re-sampling procedure for estimating the distributions of statistics based on independent observations. Here, we discuss two types of bootstrap CIs: the percentile bootstrap (Boot-p) suggested by Efron (1982) and the studentized bootstrap (Boot-t) suggested by Peter (1988).

  • Boot-p: An algorithm for the Boot-p CI is:

    • 1. Assemble the type-II censored data x and obtain MLEs for the parameters α & θ, denoted as α^ML & θ^ML.

    • 2. Generate a type-II censored sample by using MLEs of the parameters.

    • 3. Generate B bootstrap samples from the above generated samples.

    • 4. Obtain the MLEs for each of the B bootstrap samples, denoted as {α̂*_1, θ̂*_1}, {α̂*_2, θ̂*_2}, ..., {α̂*_B, θ̂*_B}.

    • 5. Arrange these in ascending orders as {α^(1)*,α^(2)*,,α^(B)*} and {θ^(1)*,θ^(2)*,,θ^(B)*}.

    A pair of 100(1 - β)% Boot-p CIs for α & θ is given by [α̂*_(Bβ/2), α̂*_(B(1-β/2))] and [θ̂*_(Bβ/2), θ̂*_(B(1-β/2))], respectively.

  • Boot-t: Boot-p is a very simple algorithm; however, the percentile approach is not very accurate for small sample sizes. The Boot-t can then be used because it gives more accurate results than the percentile approach. The algorithm for Boot-t CIs consists of the following steps.

    • 5. Repeat steps 1-4 of the Boot-p approach.

    • 6. Compute the standard errors of the parameters as well, denoted as {ŝe*_1(α), ŝe*_1(θ)}, {ŝe*_2(α), ŝe*_2(θ)}, ..., {ŝe*_B(α), ŝe*_B(θ)}.

    • 7. Compute statistics zb*(α)=(α^b*α^ML)/se^b*(α) and zb*(θ)=(θ^b*θ^ML)/se^b*(θ), for each b = 1, 2, . . . , B.

    • 8. Arrange zb*(α) in ascending orders as {z(1)*(α),z(2)*(α),,z(B)*(α)}.

    • 9. Arrange zb*(θ) in ascending orders as {z(1)*(θ),z(2)*(θ),,z(B)*(θ)}.

    A pair of 100(1 - β)% Boot-t CIs for α & θ is given by

    [α̂_ML - z*_(B(1-β/2))(α) × ŝe(α), α̂_ML + z*_(Bβ/2)(α) × ŝe(α)]

    and

    [θ̂_ML - z*_(B(1-β/2))(θ) × ŝe(θ), θ̂_ML + z*_(Bβ/2)(θ) × ŝe(θ)],

    respectively. Refer to Davison and Hinkley (1997), Efron and Tibshirani (1994), and Carpenter and Bithell (2000) for a more detailed study.
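The Boot-p algorithm above can be sketched end-to-end. For brevity we bootstrap a complete rather than a type-II censored sample, and B, the seed, and the golden-section fitting routine are our illustrative choices:

```python
import math, random

LOG2 = math.log(2.0)

def loglik(alpha, theta, xs):
    if alpha <= 0 or theta <= 0:
        return -float("inf")
    s = len(xs) * math.log(alpha * theta / LOG2)
    for x in xs:
        g = 1.0 - math.exp(-theta * x)
        s += (alpha - 1.0) * math.log(g) - theta * x - math.log(2.0 - g ** alpha)
    return s

def golden_max(f, lo, hi, iters=40):
    phi = (math.sqrt(5.0) - 1.0) / 2.0
    c, d = hi - phi * (hi - lo), lo + phi * (hi - lo)
    for _ in range(iters):
        if f(c) < f(d):
            lo = c
        else:
            hi = d
        c, d = hi - phi * (hi - lo), lo + phi * (hi - lo)
    return (lo + hi) / 2.0

def fit_mle(xs, sweeps=10):
    a, t = 1.0, 1.0
    best = loglik(a, t, xs)
    for _ in range(sweeps):
        cand = golden_max(lambda v: loglik(v, t, xs), 0.05, 5.0)
        if loglik(cand, t, xs) > best:
            a, best = cand, loglik(cand, t, xs)
        cand = golden_max(lambda v: loglik(a, v, xs), 0.05, 5.0)
        if loglik(a, cand, xs) > best:
            t, best = cand, loglik(a, cand, xs)
    return a, t

def sample_glte(n, alpha, theta):
    # parametric resampling from the fitted GLTE model, equation (7.1)
    return [-math.log(1.0 - (2.0 - 2.0 ** random.random()) ** (1.0 / alpha)) / theta
            for _ in range(n)]

def boot_p_ci(xs, B=40, beta=0.05):
    # steps 1-5: fit, regenerate B parametric samples, refit, take percentiles
    a_hat, t_hat = fit_mle(xs)
    a_stars, t_stars = [], []
    for _ in range(B):
        ab, tb = fit_mle(sample_glte(len(xs), a_hat, t_hat))
        a_stars.append(ab)
        t_stars.append(tb)
    a_stars.sort()
    t_stars.sort()
    lo, hi = int(B * beta / 2.0), int(B * (1.0 - beta / 2.0)) - 1
    return (a_stars[lo], a_stars[hi]), (t_stars[lo], t_stars[hi])
```

B = 40 keeps the demonstration fast; in practice a much larger B (for example 1,000) would be used.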

6.2. Bayesian confidence interval

In the Bayesian philosophy, the parameter is considered a random variable, so one may ask for the probability that the parameter θ lies within a specified interval. Edwards et al. (1963) named such an interval a credible interval, and the shortest among all Bayesian credible intervals is called the highest posterior density (HPD) interval. The HPD credible interval (Chen and Shao, 1999) of the parameter θ is obtained from the ordered MCMC samples θ_(1), θ_(2), ..., θ_(N). The candidate 100(1 - β)% credible intervals for θ are (θ_(1), θ_([(1-β)N]+1)), ..., (θ_([βN]), θ_(N)), where [Y] denotes the largest integer less than or equal to Y. The HPD credible interval for θ is the candidate interval of shortest length (see Box and Tiao (1973), Edwards et al. (1963), and Sinha (1987) for a detailed study of HPD intervals).
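Given ordered MCMC draws, the Chen and Shao (1999) search for the shortest covering interval takes only a few lines (a sketch with our own names):

```python
def hpd_interval(draws, beta=0.05):
    # shortest interval among (theta_(j), theta_(j+k)) covering [(1 - beta) * N] draws
    s = sorted(draws)
    n = len(s)
    k = int((1.0 - beta) * n)
    best = min(range(n - k), key=lambda j: s[j + k] - s[j])
    return s[best], s[best + k]
```

For a roughly symmetric posterior this nearly coincides with the equal-tail credible interval; for skewed posteriors it is strictly shorter.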

7. Simulation studies

This section presents a simulation study for the proposed GLTE model in the presence of complete as well as type-II censored samples. For point estimation, we calculated the MLEs of the parameters along with their mean square errors (MSEs) in the classical set-up, as well as estimates under different loss functions along with their risks in the Bayesian case. For interval estimation, we calculated asymptotic, Boot-t, and Boot-p confidence intervals in the classical set-up and HPD intervals in the Bayesian set-up. Sample observations from the proposed model can be obtained by solving F(x) = u. Hence, from equation (1.3), we get

x = -(1/θ) log[1 - (2 - 2^u)^{1/α}],

where u is a uniform random variable from U(0, 1). For data generation, we used equation (7.1), and the true parameter values (α, θ) were taken as (0.5, 0.5), (0.8, 0.8), and (1, 1). These choices were made because, for these parameter values, the proposed distribution exhibits all of its hazard rate shapes: IHR, DHR, and bathtub.
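Equation (7.1) can be verified directly: substituting the generated x back into the CDF of equation (1.3) returns exactly 1 - u, and since 1 - U is itself U(0, 1), feeding uniform random numbers through the generator yields GLTE samples (a sketch with our own names):

```python
import math

def glte_cdf(x, alpha, theta):
    # F(x) = 1 - log(2 - (1 - e^{-theta x})^alpha) / log 2
    g = 1.0 - math.exp(-theta * x)
    return 1.0 - math.log(2.0 - g ** alpha) / math.log(2.0)

def glte_quantile(u, alpha, theta):
    # equation (7.1): x = -(1/theta) * log[1 - (2 - 2^u)^(1/alpha)], u in (0, 1)
    return -math.log(1.0 - (2.0 - 2.0 ** u) ** (1.0 / alpha)) / theta
```

Note the monotone direction: small u yields large x, which is harmless for simulation because u and 1 - u share the same uniform distribution.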

For the Bayesian analysis, we assumed the gamma prior for the shape parameter α and the non-informative prior for the scale parameter θ given in equations (5.4) and (5.5). The hyperparameters of the gamma prior are taken as a = 5 and b = 0.2 as a particular case. For the generation of posterior samples in the Bayesian paradigm, we used Gibbs sampling with M-H steps (see Section 5.2.1). The Bayes estimators of the parameters under the above priors using SELF and LLF are obtained from these simulated posterior samples. Only one choice of the loss function parameter c has been considered (c = 0.1 as a particular case) for LLF. Asymptotic, Boot-p, and Boot-t CIs are obtained under the classical set-up and HPD CIs under the Bayesian paradigm for both parameters at the 5% level of significance.

7.1. In presence of complete sample

For the complete sample case, we considered sample sizes n = 10, 30, and 50, representing small, moderate, and large samples. All point and interval estimates in both set-ups are obtained for the mentioned choices of (α, θ). The performance of these estimators, with their corresponding risks (below the estimates in brackets) under the classical and Bayesian paradigms, along with the hazard rates evaluated at these estimates, is provided in Table 1, where α_ML, θ_ML are the ML estimates of α, θ and h_ML is the estimated hazard rate at time t = 1 under the MLEs; α_S, θ_S are the Bayes estimates of α, θ and h_S is the hazard rate at time t = 1 under SELF; and, in the same fashion, the subscript L denotes Bayes estimates under LLF. Tables 2 and 3 present all four interval estimates with their coverage probabilities (cp) under both set-ups for the parameters α and θ, respectively. In Table 2, Conf_α denotes the asymptotic confidence interval, hpd_α the highest posterior density interval, pboot_α the Boot-p confidence interval, and tboot_α the Boot-t confidence interval, where LL and UL stand for the lower and upper limits of the confidence intervals for parameter α. The same notations are used for parameter θ in Table 3.

From Tables 1–3, we can conclude that,

  • The MSEs of both parameters decrease as the sample size n increases (10, 30, and 50) for all three choices of the parameters, (α, θ) = (0.5, 0.5), (0.8, 0.8), and (1, 1) (Table 1).

  • The risks of both parameters α and θ decrease as n increases under the Bayesian paradigm for all parameter choices considered in this article (Table 1).

  • Tables 2 and 3 show that the length of the HPD interval is smaller than that of the other intervals (asymptotic, Boot-p, and Boot-t) for both parameters.

  • The lengths of the intervals are in increasing order as HPD, Boot-t, asymptotic, and Boot-p for both parameters α and θ.

7.2. In presence of type-II censored sample

For the simulation study under type-II censored samples, we consider various combinations of n, the number of units placed on the life testing experiment, and r, the prefixed number of observed failures (r ≤ n), namely (n, r) = (10, 8), (30, 15), (30, 25), (50, 25), (50, 35), and (50, 45), representing early and late failures.

All point and interval estimates in both set-ups are obtained for the mentioned choices of (α, θ) and all combinations (n, r) of sample size and prefixed number of failures. Table 4 reports the performance of these estimators with the corresponding MSEs under the classical method, where α_ML and θ_ML are the ML estimates of α and θ. Table 5 shows the Bayes estimates under SELF and LLF with their corresponding risks: α_S, θ_S are Bayes estimates under SELF, α_L, θ_L are Bayes estimates under LLF, and Risk(α)_S and Risk(α)_L denote the risks under SELF and LLF, respectively. Tables 6 and 7 present all four interval estimates under both set-ups for the parameters α and θ, respectively; here Conf_α denotes the asymptotic confidence interval, hpd_α the highest posterior density interval, pboot_α the Boot-p confidence interval, and tboot_α the Boot-t confidence interval, where LL and UL stand for the lower and upper limits of the confidence intervals for parameter α, with the same notations used for parameter θ in Table 7.

From Tables 4–7, we can conclude that,

  • The MSEs of both parameters decrease as (n, r) increases (over all choices considered, i.e., (10, 8), (30, 15), (30, 25), (50, 25), (50, 35), and (50, 45)) in all three cases (α, θ) = (0.5, 0.5), (0.8, 0.8), and (1, 1) (Table 4).

  • Similar to the classical results, in the Bayesian inference the risks of both parameters α and θ decrease as n and r increase for all choices taken in this study (Table 5). The risks of both parameters are smaller under LLF than under SELF.

  • Tables 6 and 7 indicate that the length of the HPD interval is smaller than that of the other intervals (asymptotic, Boot-p, and Boot-t) for both parameters.

  • Also, the lengths of the intervals are in increasing order as HPD, Boot-t, asymptotic, and Boot-p for both parameters α and θ.

8. Real data analysis

The suitability of the proposed model can be verified in real-life situations. Here, we consider three datasets with different natures of failure rate.

8.1. Real data analysis under complete case

We considered six other well-known lifetime models capable of accommodating different types of hazard rates. The considered models are described below.

  • GDUS exponential (GDUSE) distribution, proposed by Maurya et al. (2017), having PDF f(x) = (αθ/(e − 1)) e^(−θx) (1 − e^(−θx))^(α−1) exp{(1 − e^(−θx))^α}, x > 0, θ > 0, α > 0.

    It is a very flexible model having decreasing, increasing, and bathtub-shaped hazard rates.

  • Generalized Lindley (GL) distribution, proposed by Nadarajah et al. (2011), having PDF f(x) = (αθ²/(1 + θ)) (1 + x) e^(−θx) [1 − ((1 + θ + θx)/(1 + θ)) e^(−θx)]^(α−1), x > 0, θ > 0, α > 0.

    It also has increasing, decreasing, and bathtub hazard rates.

  • Chen’s model, proposed by Chen (2000), having PDF f(x) = αθ x^(α−1) e^(x^α) exp{θ(1 − e^(x^α))}, x > 0, θ > 0, α > 0.

    This widely used bathtub-hazard model also accommodates an increasing hazard rate.

  • Gamma distribution with PDF f(x) = (1/(θ^α Γ(α))) x^(α−1) e^(−x/θ), x > 0, θ > 0, α > 0.

  • Hjorth distribution, proposed by Hjorth (1980), with PDF f(x) = [θx(1 + βx) + α] e^(−θx²/2) / (1 + βx)^((α/β)+1), x > 0, θ > 0, α > 0, β > 0.

    This well-known bathtub model accommodates increasing, decreasing, constant, and bathtub hazard rates.

  • Weibull distribution with PDF f(x) = (α/θ) (x/θ)^(α−1) e^(−(x/θ)^α), x > 0, θ > 0, α > 0.

    Both the gamma and Weibull distributions have increasing, decreasing, and constant hazard rates.
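As a quick sanity check on the densities listed above, the sketch below numerically integrates the Chen and Hjorth PDFs over (0, ∞); the function names are our own, and plugging in the Dataset 1 estimates reported later in Table 8 is only an illustrative choice, not part of the paper:

```python
import numpy as np
from scipy.integrate import quad

def chen_pdf(x, alpha, theta):
    # Chen (2000): f(x) = alpha*theta*x^(alpha-1) * exp(x^alpha + theta*(1 - e^(x^alpha)))
    xa = x ** alpha
    return alpha * theta * x ** (alpha - 1) * np.exp(xa + theta * (1.0 - np.exp(xa)))

def hjorth_pdf(x, alpha, theta, beta):
    # Hjorth (1980): f(x) = [theta*x*(1+beta*x) + alpha] * exp(-theta*x^2/2) / (1+beta*x)^(alpha/beta+1)
    return ((theta * x * (1 + beta * x) + alpha) * np.exp(-theta * x ** 2 / 2.0)
            / (1 + beta * x) ** (alpha / beta + 1))

# Both densities should integrate to 1; the parameter values are the
# Dataset 1 MLEs reported in Table 8.
area_chen, _ = quad(chen_pdf, 0, np.inf, args=(0.345, 0.147))
area_hjorth, _ = quad(hjorth_pdf, 0, np.inf, args=(0.177, 0.002, 0.090))
print(round(area_chen, 3), round(area_hjorth, 3))
```

A unit integral confirms that the reconstructed expressions are proper densities at the fitted parameter values.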

The fit of the models to the considered datasets has been measured using the negative log-likelihood (−Log L), the Kolmogorov-Smirnov (KS) test statistic with its p-value, and the Akaike information criterion (AIC) and Bayesian information criterion (BIC) as model selection criteria. The AIC and BIC are defined as

AIC = 2k − 2 log L̂,  BIC = k log(n) − 2 log L̂,

and the KS test statistic (D) is defined as

D = sup_x |F_n(x) − F(x)|,

where F_n(x) = (1/n) Σ_{i=1}^{n} I(x_i ≤ x) and I(·) is the indicator function.

In the above expressions, F_n(x) is the empirical distribution function, F(x) is the CDF, n is the sample size, k is the number of parameters, and L̂ is the maximized likelihood of the considered distribution. A smaller value of each criterion indicates a better fit (except the p-value, for which larger is better). All calculations have been done using the maximum likelihood estimates. The datasets are described in detail below.

  • Item failure data (Dataset 1): This dataset contains the failure times, recorded in weeks, of 50 items placed on test at time t = 0; it was given by Murthy et al. (2004). It exhibits a decreasing hazard rate and was analyzed by Maurya et al. (2017) and Merovci et al. (2013).

  • Flood level data (Dataset 2): The data are the exceedances of flood peaks (in m³/s) of the Wheaton River near Carcross in Yukon Territory, Canada, and consist of 72 exceedances for the years 1958–1984, rounded to one decimal place. The data were given by Choulakian and Stephens (2001) and analyzed by Merovci and Puka (2014). This dataset shows a bathtub-type hazard rate.

  • Wind speed data (Dataset 3): This dataset, given by Leiva et al. (2011), shows an IHR and consists of 31 observations of daily average wind speed (in km/hr) in July 2009 in Penco city, Chile.
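A minimal sketch of how the −Log L, AIC, BIC, and KS criteria can be computed for any fitted model; the simulated sample, the Weibull stand-in, and the helper name `fit_criteria` are our own illustrative assumptions, not the paper's code:

```python
import numpy as np
from scipy import stats

def fit_criteria(data, neg_log_lik, k, cdf):
    """AIC = 2k - 2 log L, BIC = k log(n) - 2 log L, and the KS statistic
    D = sup_x |F_n(x) - F(x)| with its p-value."""
    n = len(data)
    aic = 2 * k + 2 * neg_log_lik      # neg_log_lik is -log L, so the sign flips
    bic = k * np.log(n) + 2 * neg_log_lik
    ks, pval = stats.kstest(data, cdf)
    return aic, bic, ks, pval

# Illustration: fit a two-parameter Weibull to a simulated sample.
rng = np.random.default_rng(1)
x = rng.weibull(1.5, size=50) * 3.0
shape, _, scale = stats.weibull_min.fit(x, floc=0)
nll = -np.sum(stats.weibull_min.logpdf(x, shape, 0, scale))
aic, bic, ks, pval = fit_criteria(x, nll, k=2,
                                  cdf=lambda t: stats.weibull_min.cdf(t, shape, 0, scale))
print(round(aic, 2), round(bic, 2), round(ks, 3), round(pval, 3))
```

With n = 50 and k = 2, BIC exceeds AIC because log(50) > 2, which is the pattern visible in Table 8.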

We also plotted the scaled TTT plots, given in Figure 3, to understand the nature of the datasets. A curve above the 45° line indicates an increasing hazard rate, a curve below it a decreasing hazard rate, and a curve first below and then above it a bathtub hazard rate (see Aarset (1987) and Singh et al. (2016) for more detail about the TTT plot). Table 8 presents the maximum likelihood estimates of the parameters, the log-likelihood value, the KS statistic with its p-value, and the model selection criteria AIC and BIC for all considered datasets and distributions. Based on this table, the following conclusions can be made.

  • For the item failure data: All considered models fit this dataset at the 5% level of significance. The −Log L, KS statistic, AIC, and BIC are only marginally smaller for the GDUSE model than for the proposed one, with differences appearing in the third decimal place; the proposed model may therefore be considered comparable to the GDUSE distribution.

  • For the flood level data: All seven considered distributions fit this dataset at the 5% level of significance. The Hjorth model has the smallest value under every criterion and the proposed model the second smallest; however, the Hjorth model has one more parameter than the proposed one, and the difference in BIC is minimal.

  • For the wind speed data: All models fit this dataset at the 5% level of significance. The −Log L is smallest for the proposed model; the KS statistic is smallest for the GDUSE, proposed, and GL models, which differ only in the third decimal place; and both model selection criteria, AIC and BIC, are smallest for the proposed model.
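The scaled TTT transform behind Figure 3 is straightforward to compute; a minimal sketch with a small hypothetical sample (the function name `scaled_ttt` is ours):

```python
import numpy as np

def scaled_ttt(data):
    # Scaled total time on test (Aarset, 1987):
    # G(i/n) = [sum_{j<=i} x_(j) + (n - i) * x_(i)] / sum_j x_(j)
    x = np.sort(np.asarray(data, dtype=float))
    n = x.size
    i = np.arange(1, n + 1)
    g = (np.cumsum(x) + (n - i) * x) / x.sum()
    return i / n, g

# Plotting (u, g) against the 45-degree line gives the scaled TTT plot:
# concave above the line suggests IHR, convex below suggests DHR,
# and convex-then-concave suggests a bathtub hazard.
u, g = scaled_ttt([0.2, 0.5, 0.9, 1.7, 3.4])  # hypothetical sample
print(np.round(g, 3))
```

The curve is always non-decreasing and ends at G(1) = 1; only its shape relative to the diagonal carries the hazard-rate information.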

The log-likelihood of the proposed distribution (in the complete-sample case) is plotted against the population parameters α and θ in Figures 4 and 5. For the censored case, the log-likelihood plots are given in Figures 6 and 7 for the different values of r and n listed in Table 9 for the considered datasets. These figures show that the maximum likelihood estimates exist and are unique.

We have also considered some non-parametric fitting tools, namely the histogram, the estimated density plot, the kernel density plot, and the empirical cumulative distribution function (ECDF) plot, to validate the above results. The kernel density plot is a technique for estimating the density function from the dataset. The relative histogram, estimated density, and kernel density plots for all datasets are given in Figure 8; this figure graphically shows that the proposed distribution fits all datasets adequately. The ECDF and fitted CDF plots for all considered datasets are given in Figure 9 and provide a comparative picture that again shows that the proposed model fits all datasets.
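These non-parametric checks can be reproduced with standard tools; a hedged sketch on a simulated stand-in sample (the gamma sample below is hypothetical, not one of the paper's datasets):

```python
import numpy as np
from scipy import stats

# Hypothetical sample standing in for one of the datasets.
rng = np.random.default_rng(7)
x = np.sort(rng.gamma(2.0, 1.5, size=72))

# Gaussian kernel density estimate evaluated at the observed points.
kde = stats.gaussian_kde(x)
dens = kde(x)

# Empirical CDF: F_n(x_(i)) = i/n at the ordered observations.
ecdf = np.arange(1, x.size + 1) / x.size

# Overlaying (x, ecdf) with a fitted CDF, and the histogram with (x, dens),
# gives the style of comparison shown in Figures 8 and 9.
print(ecdf[-1], bool(np.all(dens > 0)))
```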

8.2. Real data analysis under type-II censored sample

Here, the three real datasets are used to illustrate the estimation methods discussed in this paper for the proposed distribution under type-II censored samples. The item failure, flood level, and wind speed data have sizes 50, 72, and 31, respectively. We consider various combinations of the total number of units placed on the life testing experiment, n, and the prefixed number of failures under type-II censoring, r, namely (n, r) = (50, 25), (50, 45) for the item failure data, (72, 35), (72, 60) for the flood level data, and (31, 15), (31, 25) for the wind speed data (Table 9). The MLEs and Bayes estimates of both parameters α and θ are tabulated in Table 9.
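Under type-II censoring with n units on test and only the first r ordered failures observed, the likelihood is proportional to ∏_{i≤r} f(x_(i)) · [1 − F(x_(r))]^(n−r). A hedged sketch of maximizing it, using a Weibull stand-in for the lifetime model and simulated data (all names and numerical values below are our own assumptions):

```python
import numpy as np
from scipy import stats, optimize

def type2_loglik(params, x_sorted, r, n, logpdf, logsf):
    # Type-II censored log-likelihood (combinatorial constant dropped):
    # sum_{i<=r} log f(x_(i)) + (n - r) * log S(x_(r))
    obs = x_sorted[:r]
    return np.sum(logpdf(obs, *params)) + (n - r) * logsf(obs[-1], *params)

rng = np.random.default_rng(3)
n, r = 50, 25
x = np.sort(rng.weibull(1.2, size=n) * 2.0)

# Optimize over log-parameters so shape and scale stay positive.
logpdf = lambda t, a, s: stats.weibull_min.logpdf(t, a, scale=s)
logsf = lambda t, a, s: stats.weibull_min.logsf(t, a, scale=s)
neg = lambda p: -type2_loglik(np.exp(p), x, r, n, logpdf, logsf)

res = optimize.minimize(neg, x0=np.log([1.0, 1.0]), method="Nelder-Mead")
shape_hat, scale_hat = np.exp(res.x)
print(round(shape_hat, 2), round(scale_hat, 2))
```

Passing `logpdf` and `logsf` as callables keeps the censored likelihood generic, so any candidate lifetime model can be dropped in.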

The 95% asymptotic confidence intervals, bootstrap confidence intervals (Boot-p and Boot-t), and HPD intervals for the parameters α and θ are given in Tables 10 and 11, respectively.
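For reference, the HPD interval can be read off posterior draws by the Chen and Shao (1999) approach, taking the shortest interval among all that contain 95% of the sorted sample; the sketch below uses a hypothetical gamma posterior rather than draws from the paper's Gibbs sampler:

```python
import numpy as np

def hpd_interval(draws, prob=0.95):
    # Shortest interval containing `prob` of the sorted posterior draws.
    d = np.sort(np.asarray(draws))
    m = int(np.ceil(prob * d.size))
    widths = d[m - 1:] - d[:d.size - m + 1]
    j = int(np.argmin(widths))
    return d[j], d[j + m - 1]

rng = np.random.default_rng(11)
post = rng.gamma(3.0, 1.0, size=5000)   # hypothetical right-skewed posterior
lo, hi = hpd_interval(post)
eq_lo, eq_hi = np.quantile(post, [0.025, 0.975])
# For a skewed posterior the HPD interval is shorter than the equal-tail one,
# matching the pattern seen in the interval-estimation tables.
print(hi - lo < eq_hi - eq_lo)
```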

9. Conclusion

In this paper, we proposed a new transformation technique for generating lifetime models and used it to propose a new lifetime distribution. The proposed distribution is a flexible two-parameter model, flexible both in its density and in its hazard rate, which can be increasing, decreasing, or bathtub shaped. We studied statistical properties such as moments, MGF, CHF, CGF, and Shannon entropy for the proposed model, and performed simulation studies under complete as well as type-II censored samples in the classical and Bayesian paradigms for point and interval estimation. In classical point estimation, we used the maximum likelihood method to estimate the unknown population parameters α and θ along with their MSEs; in Bayesian point estimation, we used a Gibbs sampler with Metropolis-Hastings (M-H) steps for sample generation. We used a gamma prior for the shape parameter α and a non-informative prior for the scale parameter θ, since the nature of the hazard rate of this model depends on the shape parameter. We used two loss functions, SELF (symmetric) and LLF (asymmetric), to compute the corresponding estimates and risks. For interval estimation, we computed the asymptotic confidence interval and bootstrap intervals (Boot-t and Boot-p) in the classical setup and the HPD interval in the Bayesian context. The simulation studies were carried out for parameter combinations giving the different hazard rates, together with different sample sizes and censoring schemes.

Lastly, three real datasets of different natures (IHR, DHR, and bathtub) were considered in comparison with six other models, of which four have a bathtub hazard rate (the Hjorth and Chen models being two well-known bathtub models), five a decreasing hazard rate, and six an increasing hazard rate. Our proposed model fits well against all considered models and datasets. The existence and uniqueness of the ML estimates were shown graphically, and the non-parametric tools considered, the relative histogram, kernel density, fitted density, and ECDF plots, also support our findings in favor of the proposed model. In the presence of type-II censoring, classical and Bayesian point and interval estimates were obtained for different censoring schemes.

The proposed model is therefore a very flexible model that fits a large variety of real datasets and can be recommended for use in different situations.

Figures
Fig. 1. Probability density and cumulative distribution function plot.
Fig. 2. Hazard rate function plot.
Fig. 3. Scaled TTT plots of considered data sets.
Fig. 4. Log likelihood plot for parameter α at the estimated value of θ.
Fig. 5. Log likelihood plot for parameter θ at the estimated value of α.
Fig. 6. Log likelihood plot for parameter α at the estimated value of θ in censored case.
Fig. 7. Log likelihood plot for parameter θ at the estimated value of α in censored case.
Fig. 8. Histogram, fitted density and kernel density plots of considered data sets.
Fig. 9. Empirical CDF and fitted CDF plots of considered data sets.
TABLES

Table 1

Estimates of parameters α and θ under different techniques with their risks

n | (α, θ) | αML (MSE) | θML (MSE) | hML | αS (Risk) | θS (Risk) | hS | αL (Risk) | θL (Risk) | hL
10 | (0.5, 0.5) | 0.6923 (0.3734) | 0.6856 (0.1945) | 0.6588 | 0.3918 (0.1364) | 0.4859 (0.0774) | 0.5873 | 0.3905 (0.0182) | 0.4823 (0.0004) | 0.5845
10 | (0.8, 0.8) | 1.1779 (1.307) | 1.0284 (0.2886) | 0.8352 | 0.5319 (0.4832) | 0.6758 (0.1396) | 0.7088 | 0.5296 (0.1621) | 0.6710 (0.0007) | 0.7052
10 | (1.0, 1.0) | 1.5326 (3.4045) | 1.2662 (0.3788) | 0.9791 | 0.6475 (1.5398) | 0.8195 (0.2381) | 0.8011 | 0.6445 (0.2567) | 0.8138 (0.0012) | 0.7968
30 | (0.5, 0.5) | 0.5463 (0.0233) | 0.5502 (0.0267) | 0.5851 | 0.4625 (0.0114) | 0.4933 (0.0182) | 0.5645 | 0.4619 (0.0001) | 0.4923 (0.0001) | 0.5638
30 | (0.8, 0.8) | 0.8854 (0.0735) | 0.8658 (0.0486) | 0.7649 | 0.6722 (0.0341) | 0.7371 (0.0305) | 0.7143 | 0.6708 (0.0002) | 0.7355 (0.0002) | 0.7132
30 | (1.0, 1.0) | 1.1160 (0.1282) | 1.0755 (0.0662) | 0.8983 | 0.7869 (0.067) | 0.8853 (0.0449) | 0.8154 | 0.7849 (0.0003) | 0.8832 (0.0002) | 0.8140
50 | (0.5, 0.5) | 0.5264 (0.0112) | 0.5298 (0.0138) | 0.5736 | 0.4781 (0.0075) | 0.4970 (0.0111) | 0.5617 | 0.4777 (0.0000) | 0.4964 (0.0001) | 0.5613
50 | (0.8, 0.8) | 0.8494 (0.0341) | 0.8381 (0.0372) | 0.7500 | 0.7217 (0.0211) | 0.7611 (0.0194) | 0.7196 | 0.7207 (0.0001) | 0.7601 (0.0001) | 0.7190
50 | (1.0, 1.0) | 1.0660 (0.0596) | 1.0448 (0.0353) | 0.8827 | 0.8636 (0.0391) | 0.9281 (0.0275) | 0.8313 | 0.8621 (0.0002) | 0.9268 (0.0001) | 0.8305

MSEs (for ML estimates) and risks (for Bayes estimates) are given in parentheses.

Table 2

Classical and Bayesian interval estimates of parameter α with their coverage probability

n | (α, θ) | Conf_α (LL, UL, cp) | pboot_α (LL, UL, cp) | tboot_α (LL, UL, cp) | hpd_α (LL, UL, cp)
10 | (0.5, 0.5) | 0.0885, 1.2976, 0.9709 | 0.4201, 2.8835, 0.5938 | 0.2984, 1.1678, 0.6812 | 0.1369, 0.6956, 0.8306
10 | (0.8, 0.8) | 0.0265, 2.3619, 0.9696 | 0.6821, 5.9098, 0.5797 | 0.4930, 1.9705, 0.6458 | 0.1934, 0.9351, 0.6346
10 | (1.0, 1.0) | 0.0000, 3.2483, 0.9667 | 0.8779, 8.7019, 0.5629 | 0.6421, 2.6032, 0.6222 | 0.2662, 1.1028, 0.4320
30 | (0.5, 0.5) | 0.2891, 0.8041, 0.9624 | 0.3831, 0.9879, 0.7465 | 0.3279, 0.7993, 0.7985 | 0.2774, 0.6683, 0.8831
30 | (0.8, 0.8) | 0.4378, 1.3331, 0.9614 | 0.6072, 1.6893, 0.7371 | 0.5138, 1.3220, 0.7878 | 0.3855, 0.9895, 0.8207
30 | (1.0, 1.0) | 0.5297, 1.7034, 0.9626 | 0.7618, 2.2025, 0.7303 | 0.6408, 1.6862, 0.7792 | 0.4430, 1.1677, 0.7556
50 | (0.5, 0.5) | 0.3355, 0.7164, 0.9587 | 0.3927, 0.8101, 0.7780 | 0.3568, 0.7199, 0.8119 | 0.3311, 0.6386, 0.8871
50 | (0.8, 0.8) | 0.5192, 1.1754, 0.9560 | 0.6245, 1.3596, 0.7741 | 0.5627, 1.1860, 0.8084 | 0.4801, 0.9842, 0.8579
50 | (1.0, 1.0) | 0.6373, 1.4936, 0.9590 | 0.7754, 1.7349, 0.7685 | 0.6950, 1.4989, 0.8014 | 0.5641, 1.1880, 0.8160

LL = lower limit; UL = upper limit; cp = coverage probability.


Table 3

Classical and Bayesian interval estimates of parameter θ with their coverage probability

n | (α, θ) | Conf_θ (LL, UL, cp) | pboot_θ (LL, UL, cp) | tboot_θ (LL, UL, cp) | hpd_θ (LL, UL, cp)
10 | (0.5, 0.5) | 0.1139, 1.2576, 0.9529 | 0.4508, 2.2960, 0.5707 | 0.1284, 1.1429, 0.7990 | 0.0811, 0.9493, 0.3557
10 | (0.8, 0.8) | 0.2747, 1.7905, 0.9472 | 0.7035, 3.0200, 0.5749 | 0.2644, 1.6679, 0.7997 | 0.1640, 1.2394, 0.8335
10 | (1.0, 1.0) | 0.3812, 2.1421, 0.9485 | 0.8887, 3.5409, 0.5707 | 0.3724, 2.0246, 0.7990 | 0.2476, 1.4400, 0.7692
30 | (0.5, 0.5) | 0.2719, 0.8321, 0.9504 | 0.3825, 1.0153, 0.7215 | 0.2894, 0.8195, 0.8058 | 0.2586, 0.7467, 0.9149
30 | (0.8, 0.8) | 0.4807, 1.2509, 0.9497 | 0.6244, 1.4751, 0.7352 | 0.4965, 1.2385, 0.8113 | 0.4130, 1.0792, 0.9032
30 | (1.0, 1.0) | 0.6231, 1.5286, 0.9489 | 0.7882, 1.7849, 0.7349 | 0.6389, 1.5175, 0.8070 | 0.5099, 1.2794, 0.8837
50 | (0.5, 0.5) | 0.3191, 0.7394, 0.9519 | 0.3853, 0.8359, 0.7663 | 0.3306, 0.7364, 0.8152 | 0.3187, 0.6866, 0.9042
50 | (0.8, 0.8) | 0.5465, 1.1289, 0.9506 | 0.6337, 1.2493, 0.7708 | 0.5586, 1.1275, 0.8162 | 0.5088, 1.0249, 0.9073
50 | (1.0, 1.0) | 0.7012, 1.3879, 0.9498 | 0.8018, 1.5279, 0.7700 | 0.7132, 1.3897, 0.8143 | 0.6312, 1.2366, 0.8970

LL = lower limit; UL = upper limit; cp = coverage probability.


Table 4

Classical estimates with the MSE’s of parameters α & θ

(α, θ) | n | r | αML | θML | MSE(α) | MSE(θ)
(0.5, 0.5) | 10 | 8 | 0.689 | 0.787 | 0.222 | 0.405
(0.5, 0.5) | 30 | 15 | 0.604 | 0.723 | 0.058 | 0.252
(0.5, 0.5) | 30 | 25 | 0.560 | 0.580 | 0.028 | 0.047
(0.5, 0.5) | 50 | 25 | 0.558 | 0.625 | 0.023 | 0.098
(0.5, 0.5) | 50 | 35 | 0.541 | 0.564 | 0.016 | 0.035
(0.5, 0.5) | 50 | 45 | 0.532 | 0.539 | 0.012 | 0.018
(0.8, 0.8) | 10 | 8 | 1.140 | 1.105 | 0.851 | 0.457
(0.8, 0.8) | 30 | 15 | 0.983 | 1.026 | 0.210 | 0.289
(0.8, 0.8) | 30 | 25 | 0.906 | 0.890 | 0.093 | 0.073
(0.8, 0.8) | 50 | 25 | 0.898 | 0.929 | 0.075 | 0.126
(0.8, 0.8) | 50 | 35 | 0.870 | 0.871 | 0.049 | 0.054
(0.8, 0.8) | 50 | 45 | 0.855 | 0.847 | 0.038 | 0.031
(1.0, 1.0) | 10 | 8 | 1.493 | 1.341 | 2.200 | 0.562
(1.0, 1.0) | 30 | 15 | 1.247 | 1.245 | 0.382 | 0.347
(1.0, 1.0) | 30 | 25 | 1.139 | 1.100 | 0.162 | 0.094
(1.0, 1.0) | 50 | 25 | 1.127 | 1.135 | 0.129 | 0.149
(1.0, 1.0) | 50 | 35 | 1.092 | 1.079 | 0.083 | 0.071
(1.0, 1.0) | 50 | 45 | 1.073 | 1.053 | 0.065 | 0.042

MSE = mean squared error.


Table 5

Bayesian estimation and corresponding risk of parameters α & θ under different loss function

(α, θ) | n | r | αS | αL | θS | θL | Risk(α)S | Risk(α)L | Risk(θ)S | Risk(θ)L
(0.5, 0.5) | 10 | 8 | 0.4076 | 0.4064 | 0.4768 | 0.4714 | 0.0412 | 0.0003 | 0.1335 | 0.0007
(0.5, 0.5) | 30 | 15 | 0.4416 | 0.4407 | 0.4795 | 0.4750 | 0.0154 | 0.0001 | 0.0791 | 0.0004
(0.5, 0.5) | 30 | 25 | 0.4604 | 0.4597 | 0.4926 | 0.4912 | 0.0120 | 0.0001 | 0.0268 | 0.0001
(0.5, 0.5) | 50 | 25 | 0.4680 | 0.4674 | 0.4926 | 0.4900 | 0.0104 | 0.0001 | 0.0459 | 0.0002
(0.5, 0.5) | 50 | 35 | 0.4752 | 0.4747 | 0.4933 | 0.4921 | 0.0088 | 0.0000 | 0.0227 | 0.0001
(0.5, 0.5) | 50 | 45 | 0.4789 | 0.4785 | 0.4970 | 0.4963 | 0.0078 | 0.0000 | 0.0136 | 0.0001
(0.8, 0.8) | 10 | 8 | 0.5575 | 0.5552 | 0.6270 | 0.6212 | 0.2833 | 0.0025 | 0.2136 | 0.0011
(0.8, 0.8) | 30 | 15 | 0.6105 | 0.6087 | 0.6224 | 0.6176 | 0.0609 | 0.0003 | 0.1108 | 0.0006
(0.8, 0.8) | 30 | 25 | 0.6586 | 0.6571 | 0.7104 | 0.7082 | 0.0376 | 0.0002 | 0.0410 | 0.0002
(0.8, 0.8) | 50 | 25 | 0.6763 | 0.6750 | 0.6832 | 0.6800 | 0.0327 | 0.0002 | 0.0608 | 0.0003
(0.8, 0.8) | 50 | 35 | 0.7012 | 0.7000 | 0.7267 | 0.7248 | 0.0259 | 0.0001 | 0.0344 | 0.0002
(0.8, 0.8) | 50 | 45 | 0.7164 | 0.7154 | 0.7532 | 0.7521 | 0.0226 | 0.0001 | 0.0230 | 0.0001
(1.0, 1.0) | 10 | 8 | 0.6829 | 0.6800 | 0.7649 | 0.7586 | 1.1395 | 0.2527 | 0.3531 | 0.0018
(1.0, 1.0) | 30 | 15 | 0.7068 | 0.7043 | 0.7269 | 0.7215 | 0.1374 | 0.0007 | 0.1839 | 0.0009
(1.0, 1.0) | 30 | 25 | 0.7644 | 0.7622 | 0.8459 | 0.8432 | 0.0770 | 0.0004 | 0.0613 | 0.0003
(1.0, 1.0) | 50 | 25 | 0.7903 | 0.7883 | 0.8009 | 0.7972 | 0.0659 | 0.0003 | 0.0893 | 0.0004
(1.0, 1.0) | 50 | 35 | 0.8302 | 0.8285 | 0.8728 | 0.8705 | 0.0493 | 0.0002 | 0.0497 | 0.0002
(1.0, 1.0) | 50 | 45 | 0.8549 | 0.8534 | 0.9143 | 0.9128 | 0.0414 | 0.0002 | 0.0323 | 0.0002

Table 6

Classical and Bayesian interval estimates of the parameter α

(α, θ) | n | r | Conf_α (LL, UL) | hpd_α (LL, UL) | pboot_α (LL, UL) | tboot_α (LL, UL)
(0.5, 0.5) | 10 | 8 | 0.137, 1.242 | 0.156, 0.704 | 0.622, 9.704 | 0.320, 1.191
(0.5, 0.5) | 30 | 15 | 0.239, 0.970 | 0.219, 0.694 | 0.550, 3.630 | 0.261, 0.960
(0.5, 0.5) | 30 | 25 | 0.280, 0.841 | 0.265, 0.679 | 0.498, 1.654 | 0.297, 0.817
(0.5, 0.5) | 50 | 25 | 0.302, 0.816 | 0.279, 0.675 | 0.514, 2.036 | 0.224, 0.727
(0.5, 0.5) | 50 | 35 | 0.321, 0.763 | 0.304, 0.661 | 0.503, 1.467 | 0.268, 0.698
(0.5, 0.5) | 50 | 45 | 0.333, 0.732 | 0.320, 0.650 | 0.468, 1.068 | 0.324, 0.701
(0.8, 0.8) | 10 | 8 | 0.133, 2.147 | 0.219, 0.954 | 1.327, 7.826 | 0.683, 2.710
(0.8, 0.8) | 30 | 15 | 0.338, 1.628 | 0.286, 0.975 | 0.799, 5.832 | 0.357, 1.300
(0.8, 0.8) | 30 | 25 | 0.418, 1.394 | 0.357, 0.993 | 0.819, 3.222 | 0.470, 1.344
(0.8, 0.8) | 50 | 25 | 0.453, 1.344 | 0.388, 0.993 | 0.846, 4.698 | 0.363, 1.220
(0.8, 0.8) | 50 | 35 | 0.490, 1.251 | 0.432, 0.993 | 0.812, 2.779 | 0.413, 1.132
(0.8, 0.8) | 50 | 45 | 0.512, 1.198 | 0.469, 0.986 | 0.753, 1.858 | 0.500, 1.128
(1.0, 1.0) | 10 | 8 | 0.088, 2.899 | 0.295, 1.138 | 1.907, 8.500 | 0.966, 4.050
(1.0, 1.0) | 30 | 15 | 0.390, 2.105 | 0.330, 1.131 | 1.263, 4.866 | 0.564, 2.112
(1.0, 1.0) | 30 | 25 | 0.502, 1.778 | 0.412, 1.156 | 1.041, 4.463 | 0.587, 1.706
(1.0, 1.0) | 50 | 25 | 0.545, 1.710 | 0.446, 1.168 | 1.076, 7.190 | 0.457, 1.560
(1.0, 1.0) | 50 | 35 | 0.596, 1.590 | 0.504, 1.186 | 1.023, 3.812 | 0.510, 1.428
(1.0, 1.0) | 50 | 45 | 0.627, 1.520 | 0.543, 1.192 | 0.952, 2.451 | 0.619, 1.425

LL = lower limit; UL = upper limit.


Table 7

Classical and Bayesian interval estimates of the parameter θ

(α, θ) | n | r | Conf_θ (LL, UL) | hpd_θ (LL, UL) | pboot_θ (LL, UL) | tboot_θ (LL, UL)
(0.5, 0.5) | 10 | 8 | 0.008, 1.568 | 0.043, 1.034 | 0.855, 6.211 | 0.000, 1.206
(0.5, 0.5) | 30 | 15 | 0.023, 1.423 | 0.065, 1.003 | 0.762, 7.382 | 0.000, 1.165
(0.5, 0.5) | 30 | 25 | 0.228, 0.932 | 0.208, 0.803 | 0.611, 1.938 | 0.113, 0.722
(0.5, 0.5) | 50 | 25 | 0.134, 1.117 | 0.134, 0.903 | 0.654, 5.565 | 0.000, 0.794
(0.5, 0.5) | 50 | 35 | 0.243, 0.885 | 0.227, 0.781 | 0.609, 2.494 | 0.000, 0.659
(0.5, 0.5) | 50 | 45 | 0.301, 0.777 | 0.290, 0.716 | 0.550, 1.251 | 0.216, 0.614
(0.8, 0.8) | 10 | 8 | 0.187, 2.023 | 0.105, 1.233 | 1.272, 6.995 | 0.000, 1.697
(0.8, 0.8) | 30 | 15 | 0.226, 1.827 | 0.120, 1.191 | 0.946, 6.508 | 0.000, 1.253
(0.8, 0.8) | 30 | 25 | 0.432, 1.348 | 0.334, 1.109 | 0.911, 2.446 | 0.263, 1.071
(0.8, 0.8) | 50 | 25 | 0.348, 1.512 | 0.248, 1.155 | 0.977, 5.573 | 0.000, 1.157
(0.8, 0.8) | 50 | 35 | 0.461, 1.281 | 0.383, 1.089 | 0.909, 2.948 | 0.070, 0.988
(0.8, 0.8) | 50 | 45 | 0.526, 1.168 | 0.477, 1.042 | 0.842, 1.714 | 0.399, 0.948
(1.0, 1.0) | 10 | 8 | 0.310, 2.372 | 0.187, 1.412 | 1.520, 7.547 | 0.000, 1.989
(1.0, 1.0) | 30 | 15 | 0.358, 2.133 | 0.173, 1.336 | 1.344, 7.612 | 0.000, 1.714
(1.0, 1.0) | 30 | 25 | 0.570, 1.631 | 0.422, 1.292 | 1.116, 2.823 | 0.368, 1.310
(1.0, 1.0) | 50 | 25 | 0.487, 1.784 | 0.320, 1.315 | 1.185, 5.878 | 0.000, 1.388
(1.0, 1.0) | 50 | 35 | 0.609, 1.549 | 0.482, 1.281 | 1.114, 3.317 | 0.150, 1.210
(1.0, 1.0) | 50 | 45 | 0.679, 1.428 | 0.588, 1.252 | 1.044, 2.038 | 0.528, 1.178

LL = lower limit; UL = upper limit.


Table 8

MLE, KS, AIC, and BIC statistics with p-value for fitted data sets

Dataset | Distribution | α | θ | β | −Log L | KS statistic | p-value | AIC | BIC
Dataset 1 | GDUSE | 0.563 | 0.113 | - | 150.192 | 0.082 | 0.891 | 304.385 | 308.209
Dataset 1 | Proposed | 0.612 | 0.112 | - | 150.194 | 0.089 | 0.828 | 304.389 | 308.213
Dataset 1 | GL | 0.464 | 0.148 | - | 150.517 | 0.073 | 0.951 | 305.035 | 308.859
Dataset 1 | Chen | 0.345 | 0.147 | - | 151.521 | 0.084 | 0.876 | 307.041 | 310.865
Dataset 1 | Gamma | 0.695 | 11.250 | - | 150.315 | 0.105 | 0.639 | 304.631 | 308.455
Dataset 1 | Hjorth | 0.177 | 0.002 | 0.090 | 151.960 | 0.098 | 0.718 | 309.921 | 315.657
Dataset 1 | Weibull | 0.800 | 6.969 | - | 150.677 | 0.112 | 0.559 | 305.354 | 309.178
Dataset 2 | GDUSE | 0.680 | 0.081 | - | 251.624 | 0.113 | 0.319 | 507.247 | 511.800
Dataset 2 | Proposed | 0.746 | 0.081 | - | 251.267 | 0.109 | 0.355 | 506.534 | 511.087
Dataset 2 | GL | 0.509 | 0.104 | - | 252.675 | 0.117 | 0.276 | 509.349 | 513.903
Dataset 2 | Chen | 0.350 | 0.092 | - | 253.223 | 0.093 | 0.558 | 510.446 | 514.999
Dataset 2 | Gamma | 0.838 | 14.559 | - | 251.344 | 0.103 | 0.434 | 506.689 | 511.242
Dataset 2 | Hjorth | 0.173 | 0.003 | 0.439 | 249.013 | 0.069 | 0.887 | 504.026 | 510.856
Dataset 2 | Weibull | 0.901 | 11.632 | - | 251.499 | 0.105 | 0.403 | 506.997 | 511.551
Dataset 3 | GDUSE | 2.395 | 0.721 | - | 57.277 | 0.131 | 0.665 | 118.553 | 121.421
Dataset 3 | Proposed | 2.626 | 0.714 | - | 57.131 | 0.132 | 0.653 | 118.262 | 121.130
Dataset 3 | GL | 2.070 | 0.839 | - | 57.250 | 0.132 | 0.652 | 118.500 | 121.368
Dataset 3 | Chen | 0.571 | 0.162 | - | 63.028 | 0.186 | 0.236 | 130.056 | 132.924
Dataset 3 | Gamma | 2.353 | 1.167 | - | 57.153 | 0.137 | 0.604 | 118.306 | 121.174
Dataset 3 | Hjorth | 0.160 | 0.096 | 0.000 | 60.214 | 0.178 | 0.280 | 126.428 | 130.730
Dataset 3 | Weibull | 1.495 | 3.068 | - | 58.488 | 0.156 | 0.435 | 120.976 | 123.844

MLE = maximum likelihood estimate; KS = Kolmogorov-Smirnov test; AIC = Akaike information criterion; BIC = Bayesian information criterion.


Table 9

Estimators of α and θ under different techniques

Dataset | n | r | αML | αS | αL | θML | θS | θL
Item failure data | 50 | 25 | 0.579 | 0.579 | 0.470 | 0.094 | 0.068 | 0.068
Item failure data | 50 | 45 | 0.642 | 0.570 | 0.569 | 0.122 | 0.114 | 0.114
Flood level data | 72 | 35 | 0.443 | 0.578 | 0.578 | 0.009 | 0.054 | 0.054
Flood level data | 72 | 60 | 0.493 | 0.646 | 0.645 | 0.028 | 0.068 | 0.068
Wind speed data | 31 | 15 | 0.807 | 1.154 | 1.147 | 0.076 | 0.411 | 0.410
Wind speed data | 31 | 25 | 1.166 | 1.328 | 1.321 | 0.260 | 0.530 | 0.530

Table 10

Interval estimators for α under different techniques

Dataset | n | r | Conf_α (LL, UL) | pboot_α (LL, UL) | tboot_α (LL, UL) | hpd_α (LL, UL)
Item failure data | 50 | 25 | 0.314, 0.845 | 0.408, 0.824 | 0.517, 1.020 | 0.274, 0.658
Item failure data | 50 | 45 | 0.398, 0.887 | 0.438, 1.007 | 0.341, 0.754 | 0.418, 0.785
Flood level data | 72 | 35 | 0.400, 0.949 | 0.629, 1.225 | 0.675, 1.291 | 0.330, 0.784
Flood level data | 72 | 60 | 0.459, 0.920 | 0.639, 1.241 | 0.548, 1.049 | 0.466, 0.873
Wind speed data | 31 | 15 | 0.586, 7.269 | 1.181, 5.346 | 1.798, 7.829 | 0.503, 1.882
Wind speed data | 31 | 25 | 1.093, 6.119 | 1.368, 6.974 | 1.394, 6.274 | 0.667, 2.067

LL = lower limit; UL = upper limit.


Table 11

Interval estimators for θ under different techniques

Dataset | n | r | Conf_θ (LL, UL) | pboot_θ (LL, UL) | tboot_θ (LL, UL) | hpd_θ (LL, UL)
Item failure data | 50 | 25 | 0.021, 0.167 | 0.023, 0.147 | 0.058, 0.290 | 0.014, 0.118
Item failure data | 50 | 45 | 0.072, 0.173 | 0.058, 0.134 | 0.032, 0.083 | 0.075, 0.156
Flood level data | 72 | 35 | 0.024, 0.109 | 0.029, 0.087 | 0.033, 0.095 | 0.021, 0.091
Flood level data | 72 | 60 | 0.045, 0.096 | 0.052, 0.110 | 0.038, 0.083 | 0.045, 0.094
Wind speed data | 31 | 15 | 0.458, 1.445 | 0.232, 1.061 | 0.459, 1.793 | 0.140, 0.682
Wind speed data | 31 | 25 | 0.555, 1.238 | 0.326, 1.067 | 0.359, 1.187 | 0.305, 0.788

LL = lower limit; UL = upper limit.


References
  1. Aarset MV (1987). How to identify a bathtub hazard rate. IEEE Transactions on Reliability, 36, 106-108.
  2. Aryal GR and Tsokos CP (2009). On the transmuted extreme value distribution with application. Nonlinear Analysis: Theory, Methods & Applications, 71, e1401-e1407.
  3. Balakrishnan N and Cohen AC (2014). Order Statistics and Inference: Estimation Methods, Elsevier.
  4. Box GEP and Tiao GC (1973). Bayesian Inference in Statistical Analysis, Massachusetts, Addison-Wesley.
  5. Carpenter J and Bithell J (2000). Bootstrap confidence intervals: when, which, what? A practical guide for medical statisticians. Statistics in Medicine, 19, 1141-1164.
  6. Chen MH and Shao QM (1999). Monte Carlo estimation of Bayesian credible and HPD intervals. Journal of Computational and Graphical Statistics, 8, 69-92.
  7. Chen Z (2000). A new two-parameter lifetime distribution with bathtub shape or increasing failure rate function. Statistics & Probability Letters, 49, 155-161.
  8. Chib S and Greenberg E (1995). Understanding the Metropolis-Hastings algorithm. The American Statistician, 49, 327-335.
  9. Choulakian V and Stephens MA (2001). Goodness-of-fit tests for the generalized Pareto distribution. Technometrics, 43, 478-484.
  10. Cohen AC (1965). Maximum likelihood estimation in the Weibull distribution based on complete and on censored samples. Technometrics, 7, 579-588.
  11. Cordeiro GM, Ortega EMM, and da Cunha DCC (2013). The exponentiated generalized class of distributions. Journal of Data Science, 11, 1-27.
  12. Davison AC and Hinkley DV (1997). Bootstrap Methods and Their Application, Cambridge, Cambridge University Press.
  13. Dey S, Nassar M, and Kumar D (2017). Alpha logarithmic transformed family of distributions with application. Annals of Data Science, 4, 457-482.
  14. Dey S, Nassar M, Kumar D, and Alaboud F (2019). Alpha logarithmic transformed Fréchet distribution: properties and estimation. Austrian Journal of Statistics, 48, 70-93.
  15. Edwards W, Lindman H, and Savage LJ (1963). Bayesian statistical inference for psychological research. Psychological Review, 70, 193.
  16. Efron B (1979). Bootstrap methods: another look at the jackknife. The Annals of Statistics, 7, 1-26.
  17. Efron B (1982). The Jackknife, the Bootstrap, and Other Resampling Plans, SIAM.
  18. Efron B and Tibshirani RJ (1994). An Introduction to the Bootstrap, Florida, CRC Press.
  19. Evans IG and Ragab AS (1983). Bayesian inferences given a type-2 censored sample from a Burr distribution. Communications in Statistics-Theory and Methods, 12, 1569-1580.
  20. Gelfand AE and Smith AFM (1990). Sampling-based approaches to calculating marginal densities. Journal of the American Statistical Association, 85, 398-409.
  21. Geman S and Geman D (1984). Stochastic relaxation, Gibbs distributions, and the Bayesian restoration of images. IEEE Transactions on Pattern Analysis and Machine Intelligence, 6, 721-741.
  22. Glaser RE (1980). Bathtub and related failure rate characterizations. Journal of the American Statistical Association, 75, 667-672.
  23. Graham RL, Knuth DE, and Patashnik O (1994). Concrete Mathematics: A Foundation for Computer Science (2nd ed), Mass, Addison-Wesley.
  24. Gupta RC, Gupta PL, and Gupta RD (1998). Modeling failure time data by Lehman alternatives. Communications in Statistics-Theory and Methods, 27, 887-904.
  25. Gupta RD and Kundu D (2001). Exponentiated exponential family: an alternative to gamma and Weibull distributions. Biometrical Journal, 43, 117-130.
  26. Hastings WK (1970). Monte Carlo sampling methods using Markov chains and their applications. Biometrika, 57, 97-109.
  27. Hjorth U (1980). A reliability distribution with increasing, decreasing, constant and bathtub-shaped failure rates. Technometrics, 22, 99-107.
  28. Kumaraswamy P (1980). A generalized probability density function for double-bounded random processes. Journal of Hydrology, 46, 79-88.
  29. Kundu D and Howlader H (2010). Bayesian inference and prediction of the inverse Weibull distribution for type-II censored data. Computational Statistics & Data Analysis, 54, 1547-1558.
  30. Lawless JF (2011). Statistical Models and Methods for Lifetime Data, New York, John Wiley & Sons.
  31. Leiva V, Athayde E, Azevedo C, and Marchant C (2011). Modeling wind energy flux by a Birnbaum-Saunders distribution with an unknown shift parameter. Journal of Applied Statistics, 38, 2819-2838.
  32. Lindley DV (1958). Fiducial distributions and Bayes' theorem. Journal of the Royal Statistical Society, Series B (Methodological), 20, 102-107.
  33. Maurya SK, Kaushik A, Singh RK, Singh SK, and Singh U (2016). A new method of proposing distribution and its application to real data. Imperial Journal of Interdisciplinary Research, 2, 1331-1338.
  34. Maurya SK, Kaushik A, Singh SK, and Singh U (2017). A new class of distribution having decreasing, increasing and bathtub-shaped failure rate. Communications in Statistics-Theory and Methods, 46, 10359-10372.
  35. Maurya SK, Kumar D, Singh SK, and Singh U (2018). One parameter decreasing failure rate distribution. International Journal of Statistics & Economics, 19, 120-138.
  36. Merovci F, Elbatal I, and Ahmed A (2013). Transmuted generalized inverse Weibull distribution. arXiv preprint arXiv:1309.3268.
  37. Merovci F and Puka L (2014). Transmuted Pareto distribution. ProbStat Forum, 7, 1-11.
  38. Metropolis N, Rosenbluth AW, Rosenbluth MN, Teller AH, and Teller E (1953). Equation of state calculations by fast computing machines. The Journal of Chemical Physics, 21, 1087-1092.
  39. Metropolis N and Ulam S (1949). The Monte Carlo method. Journal of the American Statistical Association, 44, 335-341.
  40. Mudholkar GS and Srivastava DK (1993). Exponentiated Weibull family for analyzing bathtub failure-rate data. IEEE Transactions on Reliability, 42, 299-302.
  41. Murthy DNP, Xie M, and Jiang R (2004). Weibull Models, Hoboken, John Wiley & Sons.
  42. Nadarajah S, Bakouch HS, and Tahmasbi R (2011). A generalized Lindley distribution. Sankhya B, 73, 331-359.
  43. Nadarajah S and Kotz S (2006). The exponentiated type distributions. Acta Applicandae Mathematica, 92, 97-111.
  44. Nassar M, Afify AZ, Dey S, and Kumar D (2018). A new extension of Weibull distribution: properties and different methods of estimation. Journal of Computational and Applied Mathematics, 335, 1-18.
  45. Nelson WB (2003). Recurrent Events Data Analysis for Product Repairs, Disease Recurrences, and Other Applications, London, SIAM.
  46. Pappas V, Adamidis K, and Loukas S (2012). A family of lifetime distributions. International Journal of Quality, Statistics, and Reliability, 2012, 1-6.
  47. Hall P (1988). Theoretical comparison of bootstrap confidence intervals. The Annals of Statistics, 16, 927-953.
  48. Robert C and Casella G (2013). Monte Carlo Statistical Methods, Springer Science & Business Media.
  49. Shannon CE (1951). Prediction and entropy of printed English. Bell System Technical Journal, 30, 50-64.
  50. Shaw WT and Buckley IRC (2007). The Alchemy of Probability Distributions: Beyond Gram-Charlier and Cornish-Fisher Expansions, and Skew-Normal and Kurtotic-Normal Distributions (research report).
  51. Sinha SK (1987). Bayesian estimation of the parameters and reliability function of a mixture of Weibull life distributions. Journal of Statistical Planning and Inference, 16, 377-387.
  52. Singh SK, Singh U, and Kumar M (2013). Estimation of parameters of exponentiated Pareto model for progressive type-II censored data with binomial removals using Markov chain Monte Carlo method. International Journal of Mathematics & Computation, 21, 88-102.
  53. Singh SK, Singh U, and Kumar M (2016). Bayesian estimation for Poisson-exponential model under progressive type-II censoring data with binomial removal and its application to ovarian cancer data. Communications in Statistics-Simulation and Computation, 45, 3457-3475.
  54. Singh U, Gupta PK, and Upadhyay SK (2005). Estimation of parameters for exponentiated-Weibull family under type-II censoring scheme. Computational Statistics & Data Analysis, 48, 509-523.
  55. Smith AFM and Roberts GO (1993). Bayesian computation via the Gibbs sampler and related Markov chain Monte Carlo methods. Journal of the Royal Statistical Society, Series B (Methodological), 55, 3-23.
  56. Tierney L (1994). Markov chains for exploring posterior distributions. The Annals of Statistics, 22, 1701-1728.
  57. Varian HR (1975). A Bayesian approach to real estate assessment. In Studies in Bayesian Econometrics and Statistics in Honor of Leonard J. Savage (pp. 195-208), North-Holland.