This paper proposes a new class of distributions based on exponentiating the distribution function, which provides a model more flexible than the baseline. It also proposes a new lifetime distribution accommodating different types of hazard rates, namely decreasing, increasing, and bathtub. After studying some basic statistical properties and the parameter estimation procedure for complete samples, we study point and interval estimation in the presence of type-II censored samples under both the classical and the Bayesian paradigm. In the Bayesian paradigm, we consider a Gibbs sampler with Metropolis-Hastings steps for estimation under two different loss functions. After simulation studies, three real datasets of varying nature are considered to show the suitability of the proposed model.
In the statistical literature, a number of lifetime models have been discussed for analyzing the uncertainty of random lifetime phenomena. Among lifetime models, the exponential distribution is one of the oldest and most popular owing to its properties, the easy tractability of parameter estimation, and its closed-form solutions. However, its utility becomes restricted because it has only a constant hazard rate, whereas lifetime experiments often exhibit non-constant hazard behavior. Researchers have therefore developed more flexible models, most of them related in some way to the exponential model, such as the Weibull, gamma, and Lindley distributions (Lindley, 1958). Generalization and transformation are among the techniques that are now popular for proposing new lifetime models. Mudholkar and Srivastava (1993) proposed the three-parameter exponentiated Weibull distribution. Gupta
Perhaps keeping this point in mind, Maurya
and they considered the exponential distribution as the baseline distribution and named the result the LTE distribution. This gives a distribution with non-constant hazard rates. Another advantage of this transformation is that the new distribution remains parsimonious in parameters, since it does not add any additional parameter. A more generalized concept of the LT method was proposed by Pappas
No model is perfect, and none is uniformly worst. In this spirit, our objective is to propose a new class of distributions through a transformation technique that accommodates all types of hazard rates for appropriate choices of the shape parameter. Here we propose applying the LT method to the exponentiated CDF (i.e., applying the LT technique to a Lehmann type-I CDF), referred to as the generalized LT (GLT) method. The resulting distribution is expected to possess both monotone and non-monotone hazard-rate shapes, depending on the choice of parameter values. The new distribution through GLT can be obtained as follows: let
and the corresponding PDF is,
From an illustrative point of view, we consider the exponential distribution as the baseline due to its simplicity and popularity in life-testing problems. The CDF of the exponential distribution with scale parameter
Now, using GLT method proposed in
and its associated hazard rate is,
where
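As a concrete illustration, the CDF, PDF, and hazard rate can be sketched numerically. The block below assumes the GLT construction G(x) = 1 − ln(2 − F(x)^α)/ln 2 applied to the exponential baseline F(x) = 1 − e^{−λx}; this functional form and the function names are our assumptions for illustration, not the paper's notation.

```python
import numpy as np

LN2 = np.log(2.0)

def glte_cdf(x, alpha, lam):
    # assumed GLT-of-exponential CDF: G(x) = 1 - ln(2 - F(x)^alpha)/ln 2
    F = 1.0 - np.exp(-lam * np.asarray(x, dtype=float))
    return 1.0 - np.log(2.0 - F**alpha) / LN2

def glte_pdf(x, alpha, lam):
    # derivative of the assumed CDF
    x = np.asarray(x, dtype=float)
    F = 1.0 - np.exp(-lam * x)
    f = lam * np.exp(-lam * x)
    return alpha * f * F**(alpha - 1.0) / (LN2 * (2.0 - F**alpha))

def glte_hazard(x, alpha, lam):
    # hazard rate h(x) = g(x) / (1 - G(x))
    return glte_pdf(x, alpha, lam) / (1.0 - glte_cdf(x, alpha, lam))
```

Under this assumed form, α = 1 recovers the plain LT construction, while varying α drives the monotone and non-monotone hazard shapes discussed in the text.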
However, a researcher may receive incomplete or partially known data, because complete information on lifetimes for estimating the parameters may not always be available. Such datasets are known as censored data. In general, there are two conventional censoring schemes, type-I (time) censoring and type-II (failure) censoring. Here, we use type-II censored samples for estimation: under this scheme the experiment terminates after a prefixed number of failures (say
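The mechanics of type-II censoring are simple to state in code: put n items on test, stop at the r-th failure, and record only the first r ordered lifetimes. A small illustrative sketch (the function name and stand-in data are ours):

```python
import random

def type2_censor(lifetimes, r):
    # type-II censoring: the test stops at the r-th failure, so only the
    # first r order statistics of the n lifetimes are observed
    return sorted(lifetimes)[:r]

random.seed(1)
n, r = 20, 12
lifetimes = [random.expovariate(1.0) for _ in range(n)]  # stand-in lifetimes
observed = type2_censor(lifetimes, r)
```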
The rest of the paper is organized as follows. Section 2 discusses the shapes of the CDF, PDF, and hazard rate for various values of the parameters of the proposed distribution. Section 3 deals with basic statistical properties of the proposed model, and Section 4 discusses the parameter estimation procedure for complete samples. In Section 5, point estimation methods under type-II censoring are given for both paradigms. Section 6 deals with interval estimation in both paradigms. A simulation study under complete as well as type-II censored samples is elaborated in Section 7 under both paradigms. Section 8 illustrates three real datasets to show the suitability of the proposed model in comparison with six other well-known lifetime models having the same nature of hazard rate, for both complete and censored cases in both paradigms. Finally, the conclusion is summarized in Section 9.
The shape of a distribution is an important feature because it indicates the nature of the distribution. The CDF plot using
We now follow Glaser's (1980) lemma to study the shapes of the hazard rate. He defined the term
In our proposed distribution, we see that:
and
Now, it can easily be checked that the following three cases may arise:
When
When
When
It is also easy to verify from
The proposed GLTE model can also be obtained through the model proposed by Dey
Moments are useful in studying the nature of a distribution. To derive the expressions for the moments, we first establish the following lemma.
As the convergent sum of an infinite geometric series,
using the expansion of series,
(Readers may follow Graham
Using the above Lemma 2, we get
Hence, the arithmetic mean of the proposed distribution is,
If
CHF of
where
Entropy is a measure of the randomness of a system. The Shannon entropy, proposed by Shannon (1951), is defined as
and hence,
where
In the classical set-up, we use the maximum likelihood estimators of the parameters
Differentiating
and
Equating these derivatives to zero yields two non-linear likelihood equations. Solving these likelihood equations simultaneously provides the maximum likelihood estimators (MLEs)
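In practice the likelihood equations have no closed-form solution and are solved numerically. The sketch below maximizes the complete-sample log-likelihood directly, assuming the GLTE density g(x) = αλe^{−λx}F(x)^{α−1}/(ln 2 (2 − F(x)^α)) with F(x) = 1 − e^{−λx} (our assumed form), and uses a crude grid search in place of a Newton-type solver; the data are simulated stand-ins.

```python
import numpy as np

LN2 = np.log(2.0)

def loglik(alpha, lam, x):
    # complete-sample log-likelihood under the assumed GLTE density
    F = 1.0 - np.exp(-lam * x)
    return float(np.sum(np.log(alpha) + np.log(lam) - lam * x
                        + (alpha - 1.0) * np.log(F)
                        - np.log(LN2) - np.log(2.0 - F**alpha)))

rng = np.random.default_rng(0)
x = rng.exponential(1.0, size=200)        # hypothetical observed sample

# crude grid search over (alpha, lambda); a Newton-type iterative solver of
# the likelihood equations would normally replace this
grid = np.linspace(0.05, 5.0, 100)
_, a_hat, l_hat = max((loglik(a, l, x), a, l) for a in grid for l in grid)
```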
Estimation of the parameters in the Bayesian paradigm in the presence of a censored sample is given in Section 5 (point estimation) and Section 6 (interval estimation). From those discussions, we can easily obtain the point and interval estimates for the parameters under complete sampling simply by putting
This section discusses point estimation in the presence of type-II censoring under both the classical and the Bayesian set-up. A detailed discussion is given below.
Let
In the classical set-up we use maximum likelihood estimators, and in the Bayesian framework we use Bayes estimates based on informative and non-informative priors under two different loss functions, namely squared error and linex. The procedures are discussed systematically in the following subsections.
From
and the logarithm of the likelihood function can be written as,
Now, the method of finding the MLEs of the parameters is the same as discussed for the complete-sample case in Section 4.
In the Bayesian paradigm, the posterior probability combines two components: a prior probability and a likelihood function calculated from the statistical model for the observed data. The prior distribution of the parameters is specified before the data are observed and may not be easy to determine. Priors are categorized as proper or improper; another categorization, based on the available prior information, distinguishes informative from non-informative priors. Here, we use an informative prior distribution for
and prior for parameter
Hence, the joint prior of parameters
where the hyperparameters (
here
Marginal posterior densities of
The linex loss function (Varian, 1975) is defined as
and
The Bayes estimate of
and
It is not possible to compute
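When these posterior expectations are analytically intractable, they are typically approximated from MCMC draws. A sketch of the estimators under the two loss functions from a posterior sample (the stand-in draws and function names are ours; the Bayes estimate under linex loss with parameter c is taken as −(1/c) ln E[e^{−cθ}], and under squared error it is the posterior mean):

```python
import numpy as np

def bayes_self(draws):
    # squared-error loss -> posterior mean
    return float(np.mean(draws))

def bayes_linex(draws, c):
    # linex loss with parameter c -> -(1/c) * ln E[exp(-c * theta)]
    return float(-np.log(np.mean(np.exp(-c * np.asarray(draws)))) / c)

rng = np.random.default_rng(42)
draws = rng.gamma(2.0, 0.5, size=5000)    # stand-in posterior draws
```

As c → 0 the linex estimate approaches the posterior mean, and for c > 0 it is pulled below it, reflecting the asymmetric penalty on overestimation.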
The Metropolis-Hastings algorithm is a general-purpose technique for sampling from complex densities (introduced by Metropolis and Ulam (1949), Metropolis
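A minimal random-walk Metropolis-Hastings sketch; the target here is an illustrative stand-in log-posterior, not the model's actual posterior, and all names are ours:

```python
import math
import random

def log_post(theta):
    # stand-in log-posterior: Gamma(shape=3, rate=2) up to a constant
    if theta <= 0.0:
        return -math.inf
    return 2.0 * math.log(theta) - 2.0 * theta

def metropolis_hastings(log_target, start, n_iter, step, seed=7):
    random.seed(seed)
    cur, cur_lp = start, log_target(start)
    draws = []
    for _ in range(n_iter):
        prop = cur + random.gauss(0.0, step)      # symmetric RW proposal
        prop_lp = log_target(prop)
        # accept with probability min(1, target ratio); the symmetric
        # proposal terms cancel in the acceptance ratio
        if math.log(random.random()) < prop_lp - cur_lp:
            cur, cur_lp = prop, prop_lp
        draws.append(cur)
    return draws

draws = metropolis_hastings(log_post, start=1.0, n_iter=20000, step=0.8)
```

Within a Gibbs scheme, a step of this form would be applied in turn to each parameter's full conditional.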
This section deals with classical and Bayesian confidence interval (CI) estimation. We compute asymptotic and bootstrap confidence intervals in the classical framework, and the highest posterior density interval for the parameters in the Bayesian framework. A detailed discussion is given in the subsequent subsections.
Classical methods of interval estimation are discussed in this subsection; asymptotic confidence intervals and bootstrap intervals are treated in the subsequent subsections.
For large samples, we can obtain confidence intervals based on the diagonal elements of the inverse Fisher information matrix
where
The Fisher information matrix can be estimated by,
where,
Confidence intervals based on asymptotic normal theory perform inadequately for small samples. More accurate intervals can be obtained via the bootstrap without the normal-theory assumption. The bootstrap method was first introduced by Efron (1979); it is a general re-sampling procedure for estimating the distributions of statistics based on independent observations. Here, we discuss two types of bootstrap CIs: the percentile bootstrap (Boot-p) suggested by Efron (1982) and the studentized bootstrap (Boot-t) suggested by Peter (1988).
1. Assemble the type-II censored data
2. Generate a type-II censored sample by using MLEs of the parameters.
3. Generate
4. Obtain MLEs for each
5. Arrange these in ascending order as {
A pair of 100(1 −
5. Repeat steps 1–4 as in the Boot-p approach.
6. Also compute the standard errors of the parameters, denoted as
7. Compute statistics
8. Arrange
9. Arrange
A pair of 100(1 −
and
respectively. Refer to Davison and Hinkley (1997), Efron and Tibshirani (1994), and Carpenter and Bithell (2000) for a more detailed study.
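The Boot-p steps above can be sketched compactly: resample with replacement, recompute the statistic, and read the interval off the ordered bootstrap replicates. The statistic here is the sample mean as a stand-in for an MLE, and the quantile indexing is the simple percentile rule; names and data are ours.

```python
import random
import statistics

def boot_p_ci(data, stat, B=2000, level=0.95, seed=3):
    # percentile bootstrap (Boot-p): resample with replacement, recompute
    # the statistic, and take the empirical quantiles of the replicates
    random.seed(seed)
    n = len(data)
    reps = sorted(stat([random.choice(data) for _ in range(n)])
                  for _ in range(B))
    a = (1.0 - level) / 2.0
    return reps[int(a * B)], reps[int((1.0 - a) * B) - 1]

data = [0.4, 1.2, 0.7, 2.1, 0.9, 1.5, 0.3, 1.1, 0.8, 1.9]  # stand-in data
lo, hi = boot_p_ci(data, statistics.mean)
```

Boot-t differs only in that each replicate is studentized by its bootstrap standard error before the quantiles are taken.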
In the Bayesian philosophy, the parameter is considered to be a random variable; we then ask what the probability is that the parameter
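From a posterior sample, the HPD interval is the shortest interval containing the required posterior mass, and a common sample-based approximation slides a fixed-coverage window over the sorted draws. A sketch (names and stand-in draws are ours):

```python
import random

def hpd_interval(draws, level=0.95):
    # shortest interval containing `level` of the sampled posterior mass:
    # among all windows of m = level * n consecutive sorted draws,
    # pick the narrowest one
    s = sorted(draws)
    m = int(round(level * len(s)))
    return min(((s[i + m - 1] - s[i], (s[i], s[i + m - 1]))
                for i in range(len(s) - m + 1)))[1]

random.seed(11)
draws = [random.gauss(0.0, 1.0) for _ in range(4000)]  # stand-in draws
lo, hi = hpd_interval(draws)
```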
This section presents a simulation study for the proposed GLTE model in the presence of complete as well as type-II censored samples. For point estimation, we calculate the MLEs of the parameters along with their mean squared errors (MSEs) in the classical set-up, as well as the Bayes estimates under different loss functions along with their risks. For interval estimation, we calculate asymptotic, Boot-t, and Boot-p confidence intervals in the classical set-up and HPD intervals in the Bayesian set-up. Sample observations from the proposed model can be obtained by solving
where
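Sample generation by inverting the CDF can be sketched as follows, assuming the GLT-of-exponential form G(x) = 1 − ln(2 − F(x)^α)/ln 2, which gives the closed-form quantile x = −ln(1 − (2 − 2^{1−u})^{1/α})/λ; this form and the function name are our assumptions for illustration.

```python
import numpy as np

def glte_quantile(u, alpha, lam):
    # assumed inverse CDF: G(x) = u  =>  F(x) = (2 - 2**(1 - u))**(1/alpha),
    # then invert the exponential baseline F(x) = 1 - exp(-lam * x)
    F = (2.0 - 2.0**(1.0 - np.asarray(u, dtype=float)))**(1.0 / alpha)
    return -np.log(1.0 - F) / lam

rng = np.random.default_rng(5)
u = rng.uniform(size=10000)
x = glte_quantile(u, alpha=2.0, lam=1.0)   # simulated GLTE-type sample
```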
For the Bayesian analysis, we assume that the shape parameter
For the complete-sample case, we considered sample sizes
From Tables 1
The MSEs of both parameters decrease with increasing sample size
The risk of both the parameters
Table 2 and Table 3 show that the length of the HPD interval is smaller than the lengths of the other intervals (asymptotic, Boot-p, and Boot-t) for both parameters.
The lengths of the intervals are also in the increasing order HPD, Boot-t, asymptotic, and Boot-p for both parameters
For the simulation study under type-II censored samples, we consider various combinations of
All estimates (point and interval) in both set-ups are obtained for the mentioned choices of parameters (
From Tables 4
The MSEs of both parameters decrease with increases in (
Similar to the classical results, in Bayesian inference, the risks of both the parameters
Table 6 and Table 7 indicate that the length of the HPD interval is smaller than the lengths of the other intervals (asymptotic, Boot-p, and Boot-t) for both parameters.
Also, the lengths of the intervals are in the increasing order HPD, Boot-t, asymptotic, and Boot-p for both parameters
The suitability of the proposed model can be verified in real-life situations. Here, we consider three different datasets with different natures of failure rate.
We considered six other well-known lifetime models capable of different types of hazard rates. Descriptions of the considered models are given below.
GDUS exponential (GDUSE) distribution proposed by Maurya
It is a very flexible model having decreasing, increasing, and bathtub-shaped hazard rates.
Generalized Lindley (GL) distribution proposed by Nadarajah
It also has increasing, decreasing, and bathtub hazard rates.
Chen’s model proposed by Chen (2000) having PDF
This is a widely used model for the bathtub hazard rate that also accommodates an increasing hazard rate.
Gamma distribution with PDF
Hjorth distribution proposed by Hjorth (1980) with PDF
This is also a well-known model for bathtub situations, with increasing, decreasing, constant, and bathtub hazard rates.
Weibull distribution with PDF
The gamma and Weibull distributions both have increasing, decreasing, and constant hazard rates.
The fit of the models to the considered datasets has been assessed on the basis of the negative log-likelihood (−Log L), the Kolmogorov-Smirnov (KS) test statistic, and
and the KS test statistic (D) is defined as
where
In the above expression,
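These fitting criteria are direct to compute once the fitted CDF and maximized log-likelihood are available. The sketch below uses the standard definitions D = max_i max{i/n − F(x_(i)), F(x_(i)) − (i−1)/n}, AIC = 2k − 2 log L, and BIC = k ln n − 2 log L; the function names are ours.

```python
import math

def ks_statistic(data, cdf):
    # D = largest discrepancy between the ECDF and the fitted CDF,
    # evaluated at the order statistics
    x = sorted(data)
    n = len(x)
    return max(max(i / n - cdf(x[i - 1]), cdf(x[i - 1]) - (i - 1) / n)
               for i in range(1, n + 1))

def aic(loglik, k):
    # Akaike information criterion for k parameters
    return 2.0 * k - 2.0 * loglik

def bic(loglik, k, n):
    # Bayesian information criterion for k parameters, n observations
    return k * math.log(n) - 2.0 * loglik
```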
Item failure data (Dataset 1): This dataset contains 50 observations of items placed on test at time
Flood level data (Dataset 2): The data concern exceedances of flood peaks (in
Wind speed data (Dataset 3): This dataset was proposed by Leiva
We also plotted the scaled TTT plot, given in Figure 3, to understand the nature of all the datasets. A TTT curve above the diagonal line indicates an increasing hazard rate, a curve below it indicates a decreasing hazard rate, and a curve first below and then above the line indicates a bathtub hazard rate (see Aarset (1987) and Singh
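The scaled TTT transform behind such a plot is straightforward to compute from the order statistics: φ(i/n) = [Σ_{j≤i} x_(j) + (n − i) x_(i)] / Σ_j x_(j). A sketch (the function name is ours):

```python
def scaled_ttt(data):
    # scaled total time on test:
    # phi(i/n) = (sum_{j<=i} x_(j) + (n - i) * x_(i)) / sum_j x_(j)
    x = sorted(data)
    n, total = len(x), sum(x)
    return [(sum(x[:i]) + (n - i) * x[i - 1]) / total
            for i in range(1, n + 1)]
```

Plotting these values against i/n gives the curve whose position relative to the diagonal diagnoses the hazard shape as described above.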
For the item failure data: all the considered models fit this dataset at the 5% level of significance. The −Log L, KS statistic, AIC, and BIC are smallest for the GDUSE model, but differ from those of the proposed model only in the third decimal place; the proposed model may therefore be considered competitive with the GDUSE distribution.
For the flood level data: all seven distributions considered fit this dataset at the 5% level of significance. On all criteria the Hjorth model has the smallest values and the proposed model the second smallest. One point to consider, however, is that the Hjorth model has more parameters than the proposed one, with minimal difference in terms of BIC.
For the wind speed data: all the models fit this dataset at the 5% level of significance. The −Log L is smallest for the proposed model, and the KS statistic is smallest for the GDUSE, proposed, and GL models; both model-selection criteria, AIC and BIC, are smallest for the proposed model.
The log-likelihood of the proposed distribution (in the complete-sample case) is plotted against the population parameters
We also considered some non-parametric fitting tools, such as the histogram, the estimated density plot, the kernel density plot, and the empirical cumulative distribution function (ECDF) plot, to validate the above results. Kernel density estimation is a technique for estimating the density function from the data. Relative histograms with estimated and kernel density plots for all datasets are given in Figure 8; this figure graphically shows that the proposed distribution fits all datasets adequately. The ECDF and fitted CDF plots for all considered datasets are given in Figure 9, which provides a comparative picture showing that the proposed model fits all datasets.
Here, the three real datasets, the item failure data, the flood level data, and the wind speed data, of sizes 50, 72, and 31, respectively, are taken to illustrate the estimation methods discussed in this paper for the proposed distribution under type-II censored samples. We consider various combinations of the prefixed number of failures
The 95% asymptotic confidence intervals, bootstrap confidence intervals (Boot-p and Boot-t), and HPD intervals for both parameters
In this paper, we proposed a new transformation technique and used it to generate a new lifetime model. The proposed distribution is a flexible two-parameter model, flexible in its density as well as in its hazard rate, which can be increasing, decreasing, or bathtub-shaped. We also studied statistical properties such as the moments, MGF, CHF, CGF, and Shannon entropy of the proposed model. We performed simulation studies under complete as well as type-II censored cases in both the classical and Bayesian paradigms, for point and interval estimation. In classical point estimation, we used the maximum likelihood method to estimate the unknown population parameters
Lastly, three real datasets of different natures (IHR, DHR, and bathtub) were considered in comparison with six other models, of which four have a bathtub nature (including the Hjorth and Chen models, two well-known bathtub models), five have decreasing, and six have increasing hazard rates. Our proposed model fits well across all the considered models and datasets. The uniqueness of the ML estimates is also shown graphically, and non-parametric tools, namely the relative histogram, kernel density, fitted density, and ECDF plots, were considered, which also support our findings in favor of the proposed model. In the presence of type-II censoring, classical and Bayesian point and interval estimates were obtained for different censoring schemes.
The proposed model is therefore a very flexible model that fits a large variety of real datasets and can be recommended in different situations.
Estimates of parameter
( | ||||||||||
---|---|---|---|---|---|---|---|---|---|---|
10 | (0.5, 0.5) | 0.6923 (0.3734) | 0.6856 (0.1945) | 0.6588 | 0.3918 (0.1364) | 0.4859 (0.0774) | 0.5873 | 0.3905 (0.0182) | 0.4823 (0.0004) | 0.5845 |
(0.8, 0.8) | 1.1779 (1.307) | 1.0284 (0.2886) | 0.8352 | 0.5319 (0.4832) | 0.6758 (0.1396) | 0.7088 | 0.5296 (0.1621) | 0.6710 (0.0007) | 0.7052 | |
(1.0, 1.0) | 1.5326 (3.4045) | 1.2662 (0.3788) | 0.9791 | 0.6475 (1.5398) | 0.8195 (0.2381) | 0.8011 | 0.6445 (0.2567) | 0.8138 (0.0012) | 0.7968 | |
30 | (0.5, 0.5) | 0.5463 (0.0233) | 0.5502 (0.0267) | 0.5851 | 0.4625 (0.0114) | 0.4933 (0.0182) | 0.5645 | 0.4619 (0.0001) | 0.4923 (0.0001) | 0.5638 |
(0.8, 0.8) | 0.8854 (0.0735) | 0.8658 (0.0486) | 0.7649 | 0.6722 (0.0341) | 0.7371 (0.0305) | 0.7143 | 0.6708 (0.0002) | 0.7355 (0.0002) | 0.7132 | |
(1.0, 1.0) | 1.1160 (0.1282) | 1.0755 (0.0662) | 0.8983 | 0.7869 (0.067) | 0.8853 (0.0449) | 0.8154 | 0.7849 (0.0003) | 0.8832 (0.0002) | 0.8140 | |
50 | (0.5, 0.5) | 0.5264 (0.0112) | 0.5298 (0.0138) | 0.5736 | 0.4781 (0.0075) | 0.4970 (0.0111) | 0.5617 | 0.4777 (0.0000) | 0.4964 (0.0001) | 0.5613 |
(0.8, 0.8) | 0.8494 (0.0341) | 0.8381 (0.0372) | 0.7500 | 0.7217 (0.0211) | 0.7611 (0.0194) | 0.7196 | 0.7207 (0.0001) | 0.7601 (0.0001) | 0.7190 | |
(1.0, 1.0) | 1.0660 (0.0596) | 1.0448 (0.0353) | 0.8827 | 0.8636 (0.0391) | 0.9281 (0.0275) | 0.8313 | 0.8621 (0.0002) | 0.9268 (0.0001) | 0.8305 |
Classical and Bayesian interval estimates of parameter
( | Conf_ | pboot_ | tboot_ | hpd_ | |||||||||
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
LL | UL | cp | LL | UL | cp | LL | UL | cp | LL | UL | cp | ||
10 | (0.5, 0.5) | 0.0885 | 1.2976 | 0.9709 | 0.4201 | 2.8835 | 0.5938 | 0.2984 | 1.1678 | 0.6812 | 0.1369 | 0.6956 | 0.8306 |
(0.8, 0.8) | 0.0265 | 2.3619 | 0.9696 | 0.6821 | 5.9098 | 0.5797 | 0.4930 | 1.9705 | 0.6458 | 0.1934 | 0.9351 | 0.6346 | |
(1.0, 1.0) | 0.0000 | 3.2483 | 0.9667 | 0.8779 | 8.7019 | 0.5629 | 0.6421 | 2.6032 | 0.6222 | 0.2662 | 1.1028 | 0.4320 | |
30 | (0.5, 0.5) | 0.2891 | 0.8041 | 0.9624 | 0.3831 | 0.9879 | 0.7465 | 0.3279 | 0.7993 | 0.7985 | 0.2774 | 0.6683 | 0.8831 |
(0.8, 0.8) | 0.4378 | 1.3331 | 0.9614 | 0.6072 | 1.6893 | 0.7371 | 0.5138 | 1.3220 | 0.7878 | 0.3855 | 0.9895 | 0.8207 | |
(1.0, 1.0) | 0.5297 | 1.7034 | 0.9626 | 0.7618 | 2.2025 | 0.7303 | 0.6408 | 1.6862 | 0.7792 | 0.4430 | 1.1677 | 0.7556 | |
50 | (0.5, 0.5) | 0.3355 | 0.7164 | 0.9587 | 0.3927 | 0.8101 | 0.7780 | 0.3568 | 0.7199 | 0.8119 | 0.3311 | 0.6386 | 0.8871 |
(0.8, 0.8) | 0.5192 | 1.1754 | 0.9560 | 0.6245 | 1.3596 | 0.7741 | 0.5627 | 1.1860 | 0.8084 | 0.4801 | 0.9842 | 0.8579 | |
(1.0, 1.0) | 0.6373 | 1.4936 | 0.9590 | 0.7754 | 1.7349 | 0.7685 | 0.6950 | 1.4989 | 0.8014 | 0.5641 | 1.1880 | 0.8160 |
LL = lower limit; UL = upper limit; cp = coverage probability.
Classical and Bayesian interval estimates of parameter
( | Conf_ | pboot_ | tboot_ | hpd_ | |||||||||
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
LL | UL | cp | LL | UL | cp | LL | UL | cp | LL | UL | cp | ||
10 | (0.5, 0.5) | 0.1139 | 1.2576 | 0.9529 | 0.4508 | 2.2960 | 0.5707 | 0.1284 | 1.1429 | 0.7990 | 0.0811 | 0.9493 | 0.3557 |
(0.8, 0.8) | 0.2747 | 1.7905 | 0.9472 | 0.7035 | 3.0200 | 0.5749 | 0.2644 | 1.6679 | 0.7997 | 0.1640 | 1.2394 | 0.8335 | |
(1.0, 1.0) | 0.3812 | 2.1421 | 0.9485 | 0.8887 | 3.5409 | 0.5707 | 0.3724 | 2.0246 | 0.7990 | 0.2476 | 1.4400 | 0.7692 | |
30 | (0.5, 0.5) | 0.2719 | 0.8321 | 0.9504 | 0.3825 | 1.0153 | 0.7215 | 0.2894 | 0.8195 | 0.8058 | 0.2586 | 0.7467 | 0.9149 |
(0.8, 0.8) | 0.4807 | 1.2509 | 0.9497 | 0.6244 | 1.4751 | 0.7352 | 0.4965 | 1.2385 | 0.8113 | 0.4130 | 1.0792 | 0.9032 | |
(1.0, 1.0) | 0.6231 | 1.5286 | 0.9489 | 0.7882 | 1.7849 | 0.7349 | 0.6389 | 1.5175 | 0.8070 | 0.5099 | 1.2794 | 0.8837 | |
50 | (0.5, 0.5) | 0.3191 | 0.7394 | 0.9519 | 0.3853 | 0.8359 | 0.7663 | 0.3306 | 0.7364 | 0.8152 | 0.3187 | 0.6866 | 0.9042 |
(0.8, 0.8) | 0.5465 | 1.1289 | 0.9506 | 0.6337 | 1.2493 | 0.7708 | 0.5586 | 1.1275 | 0.8162 | 0.5088 | 1.0249 | 0.9073 | |
(1.0, 1.0) | 0.7012 | 1.3879 | 0.9498 | 0.8018 | 1.5279 | 0.7700 | 0.7132 | 1.3897 | 0.8143 | 0.6312 | 1.2366 | 0.8970 |
LL = lower limit; UL = upper limit; cp = coverage probability.
Classical estimates with the MSE’s of parameters
( | MSE( | MSE( | ||||
---|---|---|---|---|---|---|
(0.5, 0.5) | 10 | 8 | 0.689 | 0.787 | 0.222 | 0.405 |
30 | 15 | 0.604 | 0.723 | 0.058 | 0.252 | |
25 | 0.560 | 0.580 | 0.028 | 0.047 | ||
50 | 25 | 0.558 | 0.625 | 0.023 | 0.098 | |
35 | 0.541 | 0.564 | 0.016 | 0.035 | ||
45 | 0.532 | 0.539 | 0.012 | 0.018 | ||
(0.8, 0.8) | 10 | 8 | 1.140 | 1.105 | 0.851 | 0.457 |
30 | 15 | 0.983 | 1.026 | 0.210 | 0.289 | |
25 | 0.906 | 0.890 | 0.093 | 0.073 | ||
50 | 25 | 0.898 | 0.929 | 0.075 | 0.126 | |
35 | 0.870 | 0.871 | 0.049 | 0.054 | ||
45 | 0.855 | 0.847 | 0.038 | 0.031 | ||
(1.0, 1.0) | 10 | 8 | 1.493 | 1.341 | 2.200 | 0.562 |
30 | 15 | 1.247 | 1.245 | 0.382 | 0.347 | |
25 | 1.139 | 1.100 | 0.162 | 0.094 | ||
50 | 25 | 1.127 | 1.135 | 0.129 | 0.149 | |
35 | 1.092 | 1.079 | 0.083 | 0.071 | ||
45 | 1.073 | 1.053 | 0.065 | 0.042 |
MSE = mean squared error.
Bayesian estimation and corresponding risk of parameters
( | Risk( | Risk( | Risk( | Risk( | ||||||
---|---|---|---|---|---|---|---|---|---|---|
(0.5, 0.5) | 10 | 8 | 0.4076 | 0.4064 | 0.4768 | 0.4714 | 0.0412 | 0.0003 | 0.1335 | 0.0007 |
30 | 15 | 0.4416 | 0.4407 | 0.4795 | 0.4750 | 0.0154 | 0.0001 | 0.0791 | 0.0004 | |
25 | 0.4604 | 0.4597 | 0.4926 | 0.4912 | 0.0120 | 0.0001 | 0.0268 | 0.0001 | ||
50 | 25 | 0.4680 | 0.4674 | 0.4926 | 0.4900 | 0.0104 | 0.0001 | 0.0459 | 0.0002 | |
35 | 0.4752 | 0.4747 | 0.4933 | 0.4921 | 0.0088 | 0.0000 | 0.0227 | 0.0001 | ||
45 | 0.4789 | 0.4785 | 0.4970 | 0.4963 | 0.0078 | 0.0000 | 0.0136 | 0.0001 | ||
(0.8, 0.8) | 10 | 8 | 0.5575 | 0.5552 | 0.6270 | 0.6212 | 0.2833 | 0.0025 | 0.2136 | 0.0011 |
30 | 15 | 0.6105 | 0.6087 | 0.6224 | 0.6176 | 0.0609 | 0.0003 | 0.1108 | 0.0006 | |
25 | 0.6586 | 0.6571 | 0.7104 | 0.7082 | 0.0376 | 0.0002 | 0.0410 | 0.0002 | ||
50 | 25 | 0.6763 | 0.6750 | 0.6832 | 0.6800 | 0.0327 | 0.0002 | 0.0608 | 0.0003 | |
35 | 0.7012 | 0.7000 | 0.7267 | 0.7248 | 0.0259 | 0.0001 | 0.0344 | 0.0002 | ||
45 | 0.7164 | 0.7154 | 0.7532 | 0.7521 | 0.0226 | 0.0001 | 0.0230 | 0.0001 | ||
(1.0, 1.0) | 10 | 8 | 0.6829 | 0.6800 | 0.7649 | 0.7586 | 1.1395 | 0.2527 | 0.3531 | 0.0018 |
30 | 15 | 0.7068 | 0.7043 | 0.7269 | 0.7215 | 0.1374 | 0.0007 | 0.1839 | 0.0009 | |
25 | 0.7644 | 0.7622 | 0.8459 | 0.8432 | 0.0770 | 0.0004 | 0.0613 | 0.0003 | ||
50 | 25 | 0.7903 | 0.7883 | 0.8009 | 0.7972 | 0.0659 | 0.0003 | 0.0893 | 0.0004 | |
35 | 0.8302 | 0.8285 | 0.8728 | 0.8705 | 0.0493 | 0.0002 | 0.0497 | 0.0002 | ||
45 | 0.8549 | 0.8534 | 0.9143 | 0.9128 | 0.0414 | 0.0002 | 0.0323 | 0.0002 |
Classical and Bayesian interval estimates of the parameter
( | Conf_ | hpd_ | pboot_ | tboot_ | ||||||
---|---|---|---|---|---|---|---|---|---|---|
LL | UL | LL | UL | LL | UL | LL | UL | |||
(0.5, 0.5) | 10 | 8 | 0.137 | 1.242 | 0.156 | 0.704 | 0.622 | 9.704 | 0.320 | 1.191 |
30 | 15 | 0.239 | 0.970 | 0.219 | 0.694 | 0.550 | 3.630 | 0.261 | 0.960 | |
25 | 0.280 | 0.841 | 0.265 | 0.679 | 0.498 | 1.654 | 0.297 | 0.817 | ||
50 | 25 | 0.302 | 0.816 | 0.279 | 0.675 | 0.514 | 2.036 | 0.224 | 0.727 | |
35 | 0.321 | 0.763 | 0.304 | 0.661 | 0.503 | 1.467 | 0.268 | 0.698 | ||
45 | 0.333 | 0.732 | 0.320 | 0.650 | 0.468 | 1.068 | 0.324 | 0.701 | ||
(0.8, 0.8) | 10 | 8 | 0.133 | 2.147 | 0.219 | 0.954 | 1.327 | 7.826 | 0.683 | 2.710 |
30 | 15 | 0.338 | 1.628 | 0.286 | 0.975 | 0.799 | 5.832 | 0.357 | 1.300 | |
25 | 0.418 | 1.394 | 0.357 | 0.993 | 0.819 | 3.222 | 0.470 | 1.344 | ||
50 | 25 | 0.453 | 1.344 | 0.388 | 0.993 | 0.846 | 4.698 | 0.363 | 1.220 | |
35 | 0.490 | 1.251 | 0.432 | 0.993 | 0.812 | 2.779 | 0.413 | 1.132 | ||
45 | 0.512 | 1.198 | 0.469 | 0.986 | 0.753 | 1.858 | 0.500 | 1.128 | ||
(1.0, 1.0) | 10 | 8 | 0.088 | 2.899 | 0.295 | 1.138 | 1.907 | 8.500 | 0.966 | 4.050 |
30 | 15 | 0.390 | 2.105 | 0.330 | 1.131 | 1.263 | 4.866 | 0.564 | 2.112 | |
25 | 0.502 | 1.778 | 0.412 | 1.156 | 1.041 | 4.463 | 0.587 | 1.706 | ||
50 | 25 | 0.545 | 1.710 | 0.446 | 1.168 | 1.076 | 7.190 | 0.457 | 1.560 | |
35 | 0.596 | 1.590 | 0.504 | 1.186 | 1.023 | 3.812 | 0.510 | 1.428 | ||
45 | 0.627 | 1.520 | 0.543 | 1.192 | 0.952 | 2.451 | 0.619 | 1.425 |
LL = lower limit; UL = upper limit.
Classical and Bayesian interval estimates of the parameter
( | Conf_ | hpd_ | pboot_ | tboot_ | ||||||
---|---|---|---|---|---|---|---|---|---|---|
LL | UL | LL | UL | LL | UL | LL | UL | |||
(0.5, 0.5) | 10 | 8 | 0.008 | 1.568 | 0.043 | 1.034 | 0.855 | 6.211 | 0.000 | 1.206 |
30 | 15 | 0.023 | 1.423 | 0.065 | 1.003 | 0.762 | 7.382 | 0.000 | 1.165 | |
25 | 0.228 | 0.932 | 0.208 | 0.803 | 0.611 | 1.938 | 0.113 | 0.722 | ||
50 | 25 | 0.134 | 1.117 | 0.134 | 0.903 | 0.654 | 5.565 | 0.000 | 0.794 | |
35 | 0.243 | 0.885 | 0.227 | 0.781 | 0.609 | 2.494 | 0.000 | 0.659 | ||
45 | 0.301 | 0.777 | 0.290 | 0.716 | 0.550 | 1.251 | 0.216 | 0.614 | ||
(0.8, 0.8) | 10 | 8 | 0.187 | 2.023 | 0.105 | 1.233 | 1.272 | 6.995 | 0.000 | 1.697 |
30 | 15 | 0.226 | 1.827 | 0.120 | 1.191 | 0.946 | 6.508 | 0.000 | 1.253 | |
25 | 0.432 | 1.348 | 0.334 | 1.109 | 0.911 | 2.446 | 0.263 | 1.071 | ||
50 | 25 | 0.348 | 1.512 | 0.248 | 1.155 | 0.977 | 5.573 | 0.000 | 1.157 | |
35 | 0.461 | 1.281 | 0.383 | 1.089 | 0.909 | 2.948 | 0.070 | 0.988 | ||
45 | 0.526 | 1.168 | 0.477 | 1.042 | 0.842 | 1.714 | 0.399 | 0.948 | ||
(1.0, 1.0) | 10 | 8 | 0.310 | 2.372 | 0.187 | 1.412 | 1.520 | 7.547 | 0.000 | 1.989 |
30 | 15 | 0.358 | 2.133 | 0.173 | 1.336 | 1.344 | 7.612 | 0.000 | 1.714 | |
25 | 0.570 | 1.631 | 0.422 | 1.292 | 1.116 | 2.823 | 0.368 | 1.310 | ||
50 | 25 | 0.487 | 1.784 | 0.320 | 1.315 | 1.185 | 5.878 | 0.000 | 1.388 | |
35 | 0.609 | 1.549 | 0.482 | 1.281 | 1.114 | 3.317 | 0.150 | 1.210 | ||
45 | 0.679 | 1.428 | 0.588 | 1.252 | 1.044 | 2.038 | 0.528 | 1.178 |
LL = lower limit; UL = upper limit.
MLE, KS, AIC, and BIC statistics with
Distribution | MLE | KS | AIC | BIC | |||||
---|---|---|---|---|---|---|---|---|---|
−Log L | Statistics | ||||||||
Dataset 1 | GDUSE | 0.563 | 0.113 | - | 150.192 | 0.082 | 0.891 | 304.385 | 308.209 |
Proposed | 0.612 | 0.112 | - | 150.194 | 0.089 | 0.828 | 304.389 | 308.213 | |
GL | 0.464 | 0.148 | - | 150.517 | 0.073 | 0.951 | 305.035 | 308.859 | |
Chen | 0.345 | 0.147 | - | 151.521 | 0.084 | 0.876 | 307.041 | 310.865 | |
Gamma | 0.695 | 11.250 | - | 150.315 | 0.105 | 0.639 | 304.631 | 308.455 | |
Hjorth | 0.177 | 0.002 | 0.090 | 151.960 | 0.098 | 0.718 | 309.921 | 315.657 | |
Weibull | 0.800 | 6.969 | - | 150.677 | 0.112 | 0.559 | 305.354 | 309.178 | |
Dataset 2 | GDUSE | 0.680 | 0.081 | - | 251.624 | 0.113 | 0.319 | 507.247 | 511.800 |
Proposed | 0.746 | 0.081 | - | 251.267 | 0.109 | 0.355 | 506.534 | 511.087 | |
GL | 0.509 | 0.104 | - | 252.675 | 0.117 | 0.276 | 509.349 | 513.903 | |
Chen | 0.350 | 0.092 | - | 253.223 | 0.093 | 0.558 | 510.446 | 514.999 | |
Gamma | 0.838 | 14.559 | - | 251.344 | 0.103 | 0.434 | 506.689 | 511.242 | |
Hjorth | 0.173 | 0.003 | 0.439 | 249.013 | 0.069 | 0.887 | 504.026 | 510.856 | |
Weibull | 0.901 | 11.632 | - | 251.499 | 0.105 | 0.403 | 506.997 | 511.551 | |
Dataset 3 | GDUSE | 2.395 | 0.721 | - | 57.277 | 0.131 | 0.665 | 118.553 | 121.421 |
Proposed | 2.626 | 0.714 | - | 57.131 | 0.132 | 0.653 | 118.262 | 121.130 | |
GL | 2.070 | 0.839 | - | 57.250 | 0.132 | 0.652 | 118.500 | 121.368 | |
Chen | 0.571 | 0.162 | - | 63.028 | 0.186 | 0.236 | 130.056 | 132.924 | |
Gamma | 2.353 | 1.167 | - | 57.153 | 0.137 | 0.604 | 118.306 | 121.174 | |
Hjorth | 0.160 | 0.096 | 0.000 | 60.214 | 0.178 | 0.280 | 126.428 | 130.730 | |
Weibull | 1.495 | 3.068 | - | 58.488 | 0.156 | 0.435 | 120.976 | 123.844 |
MLE = maximum likelihood estimate; KS = Kolmogorov-Smirnov test; AIC = Akaike information criterion; BIC = Bayesian information criterion.
Estimators of
Dataset | ||||||||
---|---|---|---|---|---|---|---|---|
Item failure data | 50 | 25 | 0.579 | 0.579 | 0.470 | 0.094 | 0.068 | 0.068 |
45 | 0.642 | 0.570 | 0.569 | 0.122 | 0.114 | 0.114 | ||
Flood level data | 72 | 35 | 0.443 | 0.578 | 0.578 | 0.009 | 0.054 | 0.054 |
60 | 0.493 | 0.646 | 0.645 | 0.028 | 0.068 | 0.068 | ||
Wind speed data | 31 | 15 | 0.807 | 1.154 | 1.147 | 0.076 | 0.411 | 0.410 |
25 | 1.166 | 1.328 | 1.321 | 0.260 | 0.530 | 0.530 |
Interval estimators for
Dataset | Conf_ | pboot_ | tboot_ | hpd_ | ||||||
---|---|---|---|---|---|---|---|---|---|---|
LL | UL | LL | UL | LL | UL | LL | UL | |||
Item failure data | 50 | 25 | 0.314 | 0.845 | 0.408 | 0.824 | 0.517 | 1.020 | 0.274 | 0.658 |
45 | 0.398 | 0.887 | 0.438 | 1.007 | 0.341 | 0.754 | 0.418 | 0.785 | ||
Flood level data | 72 | 35 | 0.400 | 0.949 | 0.629 | 1.225 | 0.675 | 1.291 | 0.330 | 0.784 |
60 | 0.459 | 0.920 | 0.639 | 1.241 | 0.548 | 1.049 | 0.466 | 0.873 | ||
Wind speed data | 31 | 15 | 0.586 | 7.269 | 1.181 | 5.346 | 1.798 | 7.829 | 0.503 | 1.882 |
25 | 1.093 | 6.119 | 1.368 | 6.974 | 1.394 | 6.274 | 0.667 | 2.067 |
LL = lower limit; UL = upper limit.
Interval estimators for
Dataset | Conf_ | pboot_ | tboot_ | hpd_ | ||||||
---|---|---|---|---|---|---|---|---|---|---|
LL | UL | LL | UL | LL | UL | LL | UL | |||
Item failure data | 50 | 25 | 0.021 | 0.167 | 0.023 | 0.147 | 0.058 | 0.290 | 0.014 | 0.118 |
45 | 0.072 | 0.173 | 0.058 | 0.134 | 0.032 | 0.083 | 0.075 | 0.156 | ||
Flood level data | 72 | 35 | 0.024 | 0.109 | 0.029 | 0.087 | 0.033 | 0.095 | 0.021 | 0.091 |
60 | 0.045 | 0.096 | 0.052 | 0.110 | 0.038 | 0.083 | 0.045 | 0.094 | ||
Wind speed data | 31 | 15 | 0.458 | 1.445 | 0.232 | 1.061 | 0.459 | 1.793 | 0.140 | 0.682 |
25 | 0.555 | 1.238 | 0.326 | 1.067 | 0.359 | 1.187 | 0.305 | 0.788 |
LL = lower limit; UL = upper limit.