On the comparison of cumulative hazard functions

Sangun Park1,a, Seung Ah Haa

aDepartment of Applied Statistics, Yonsei University, Korea
Correspondence to: 1Department of Applied Statistics, Yonsei University, 50-1 Yonsei-ro, Seodaemun-gu, Seoul 03722, Korea. E-mail: sangun@yonsei.ac.kr
Received August 8, 2019; Revised October 11, 2019; Accepted October 17, 2019.
Abstract
This paper proposes two distance measures between two cumulative hazard functions, obtained by comparing their difference and their ratio, respectively. We then estimate the measures and present goodness-of-fit test statistics. Since the proposed test statistics are expressed in terms of the cumulative hazard functions, we can easily place more weight on earlier (or later) departures in cumulative hazards when we wish to emphasize such departures. We also show that these test statistics perform comparably with other well-known test statistics based on the empirical distribution function for an exponential null distribution. The proposed test statistic is an omnibus test applicable to many distributions other than the exponential.
Keywords : empirical distribution function, exponential distribution, Kullback-Leibler information, Nelson-Aalen estimator, survival function
1. Introduction

Suppose that a nonnegative random variable X has a cumulative distribution function Fθ(x) with a continuous density function fθ(x), where θ is an unknown parameter. In comparing fθ(x) with a nonparametric density function estimator fn(x), the following Kullback-Leibler (KL) information has been widely considered (Arizono and Ohta, 1989; Noughabi, 2010):

$KL(f_n : f_\theta) = \int_0^\infty f_n(x) \log \frac{f_n(x)}{f_\theta(x)}\, dx,$

which is nonnegative and has the characterization property that the equality to zero holds iff fn(x) = fθ(x) almost everywhere.

To calculate (1.1), we need to determine a bandwidth-type parameter in fn(x), such as in the nonparametric kernel density function estimator or the piecewise uniform density function (Park and Park, 2003).

In survival analysis, the hazard function rather than the probability density function is of interest, where the hazard function is defined as hθ(x) = fθ(x)/(1 − Fθ(x)) and the cumulative hazard function as $H_\theta(x) = \int_0^x h_\theta(t)\, dt$. In this context, Park and Shin (2014) provided another expression for (1.1) in terms of the hazard function:

$KL(f_n : f_\theta) = \int_0^\infty f_n(x) \left( \frac{h_\theta(x)}{h_n(x)} - \log \frac{h_\theta(x)}{h_n(x)} - 1 \right) dx,$

which can be considered in comparing hθ(x) with a nonparametric hazard function estimator hn(x). However, hn(x) also involves a bandwidth-type parameter, like fn(x).

The comparison of the empirical distribution function Fe(x) with Fθ(x) has also been considered, since Fe(x) has the advantage that no bandwidth-type parameter is needed. Many studies have compared Fθ(x) and Fe(x); these include Kolmogorov-Smirnov type statistics based on their difference, likelihood ratio type test statistics based on their ratio (Zhang, 2002), and a test based on cumulative residual entropy (Baratpour and Habibi Rad, 2012). In this article, we compare Hθ(x) with a nonparametric cumulative hazard function estimator Hn(x), where Hn(x) can be obtained in two ways: the Nelson-Aalen (NA) estimator or − log(1 − Fe(x)). As stated above, the cumulative hazard function has been widely considered in survival analysis; Hjort (1990) suggested goodness-of-fit test statistics for life history data using cumulative hazard rates. In addition, Korn et al. (1997) and Klotz et al. (2010) considered cumulative hazard functions to compare mortality by plots. Anderson and Senthilselvan (1982) also used cumulative hazards to compare models in their cancer mortality studies, and Arvanitakis et al. (2004) drew plots to compare the cumulative hazards of persons with and without diabetes mellitus. Park (2017) recently proposed a test statistic based on the ratio, but its application has been limited to the Type-II censored case.

We here propose two test statistics based on the difference and ratio of the cumulative hazard functions, respectively. We first consider the squared difference of the cumulative hazard functions as

$D_n(H_n : H_\theta) = n \int_0^\infty (H_n(x) - H_\theta(x))^2\, dF_\theta(x),$

and also consider the ratio of the cumulative hazard functions by using the extension of Kullback-Leibler information to the cumulative hazard function as

$R_n(H_n : H_\theta) = n \int_0^\infty H_n(x) \left( \frac{H_\theta(x)}{H_n(x)} - \log \frac{H_\theta(x)}{H_n(x)} \right) dF_\theta(x).$

Then we can establish goodness-of-fit test statistics by using $D_n(H_n : H_{\hat{\theta}})$ and $R_n(H_n : H_{\hat{\theta}})$, where $\hat{\theta}$ is an appropriately chosen parameter estimator. We evaluate their performances as goodness-of-fit test statistics for an exponential distribution.

2. Comparison based on the difference

### 2.1. Uncensored case

Suppose that (x1:n, …, xn:n) is an ordered sample of size n from Fθ(x). The empirical distribution function is a widely known nonparametric distribution function estimator defined as

$F_e(x) = \frac{i}{n}, \quad \text{if } x_{i:n} \le x < x_{i+1:n} \text{ for } i = 0, \ldots, n,$

where x0:n = 0 and xn+1:n = ∞.
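As a quick illustration, the step function above can be evaluated directly; a minimal sketch, in which the helper name and sample values are ours, not from the paper:

```python
import numpy as np

def empirical_cdf(sample, x):
    """F_e(x) = i/n on [x_{i:n}, x_{i+1:n}): the proportion of observations <= x."""
    xs = np.sort(np.asarray(sample, dtype=float))
    return np.searchsorted(xs, x, side="right") / xs.size

sample = [0.3, 1.1, 2.0, 4.5]
print(empirical_cdf(sample, 1.1))  # two of the four observations are <= 1.1, so 0.5
```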

Since $H_\theta(x) = -\log(1 - F_\theta(x))$, the cumulative hazard function corresponding to the empirical distribution function is

$H_e(x) = -\log \left( \frac{n-i}{n} \right), \quad \text{if } x_{i:n} \le x < x_{i+1:n} \text{ for } i = 0, \ldots, n.$

However, the comparison of Hθ(x) and He(x) has the limitation that He(x) is not finite for x ≥ xn:n.

Hence, instead of He(x), we can consider the Nelson-Aalen estimator, proposed by Nelson (1972) and Aalen (1978), which takes the form

$H_{NA}(x) = \sum_{j=1}^{i} \frac{1}{n-j+1}, \quad \text{if } x_{i:n} \le x < x_{i+1:n} \text{ for } i = 1, \ldots, n,$

where $H_{NA}(x) = 0$ for $0 \le x < x_{1:n}$.

The corresponding distribution function estimator, whose survival function is called the Fleming and Harrington estimator, can be written as

$F_{NA}(x) = 1 - \exp \left( -\sum_{j=1}^{i} \frac{1}{n-j+1} \right), \quad \text{if } x_{i:n} \le x < x_{i+1:n} \text{ for } i = 1, \ldots, n.$
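The Nelson-Aalen estimator can be sketched in the same way (helper name and sample are illustrative only); note that, unlike He(x), it stays finite beyond the largest observation:

```python
import numpy as np

def nelson_aalen(sample, x):
    """H_NA(x) = sum_{j=1}^{i} 1/(n - j + 1) for x in [x_{i:n}, x_{i+1:n})
    (no ties and no censoring assumed)."""
    xs = np.sort(np.asarray(sample, dtype=float))
    n = xs.size
    i = np.searchsorted(xs, x, side="right")     # number of observations <= x
    return (1.0 / (n - np.arange(i))).sum()      # 1/n + 1/(n-1) + ... + 1/(n-i+1)

sample = [0.3, 1.1, 2.0, 4.5]
print(nelson_aalen(sample, 1.1))                 # 1/4 + 1/3
print(nelson_aalen(sample, 100.0))               # finite even beyond x_{n:n}
```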
2.1.1. Squared difference

In comparing Hθ(x) and HNA(x), we can first consider the squared difference, (HNA(x) − Hθ(x))2. Then the average squared difference can be defined as

$D_n(H_{NA} : H_\theta) = n \int_0^\infty (H_{NA}(x) - H_\theta(x))^2\, dF_\theta(x),$

which can be arranged as

$D_n(H_{NA} : H_\theta) = n \sum_{i=0}^{n} \int_{z_i}^{z_{i+1}} (H_{NA,i} + \log(1-u))^2\, du = n \sum_{i=0}^{n} H_{NA,i}^2 (z_{i+1} - z_i) + 2n \sum_{i=0}^{n} H_{NA,i} \left\{ (1-z_i)\log(1-z_i) - (1-z_{i+1})\log(1-z_{i+1}) + z_i - z_{i+1} \right\} + 2n,$

where zi = Fθ(xi:n) and HNA,i is the Nelson-Aalen estimate in [xi:n, xi+1:n).
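The closed form above can be checked numerically against the defining integral. The following is a sketch assuming a continuous null CDF and no ties; the function names are ours:

```python
import numpy as np

def na_steps(sample, F):
    """Return z_0, ..., z_{n+1} (with z_0 = 0, z_{n+1} = 1) and H_NA,0, ..., H_NA,n."""
    xs = np.sort(np.asarray(sample, dtype=float))
    n = xs.size
    z = np.concatenate(([0.0], F(xs), [1.0]))
    H = np.concatenate(([0.0], np.cumsum(1.0 / (n - np.arange(n)))))
    return z, H

def D_n(sample, F):
    """Closed-form D_n(H_NA : H_theta); F is the hypothesized continuous CDF."""
    z, H = na_steps(sample, F)
    n = H.size - 1
    # g(v) = (1 - v) * log(1 - v), extended by its limit g(1) = 0
    g = np.append((1.0 - z[:-1]) * np.log(1.0 - z[:-1]), 0.0)
    dz = np.diff(z)
    return n * np.sum(H**2 * dz) + 2.0 * n * np.sum(H * (g[:-1] - g[1:] - dz)) + 2.0 * n
```

A brute-force trapezoidal approximation of $n \int_0^1 (H_{NA} + \log(1-u))^2\, du$ agrees with the closed form.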

2.1.2. Standardized squared difference

V(HNA(x)) can be approximated by V(He(x)), which is approximately Fθ(x)/{n(1 − Fθ(x))}. Then the average standardized squared difference can be obtained as

$SD_n(H_{NA} : H_\theta) = \int_0^\infty \frac{n(1 - F_\theta(x)) (H_{NA}(x) - H_\theta(x))^2}{F_\theta(x)}\, dF_\theta(x) = n \sum_{i=0}^{n} \int_{z_i}^{z_{i+1}} \frac{1-u}{u} (H_{NA,i} + \log(1-u))^2\, du,$

which can be written as

$SD_n(H_{NA} : H_\theta) = n \left( \sum_{i=0}^{n} H_{NA,i}^2 \int_{z_i}^{z_{i+1}} \frac{1-u}{u}\, du + 2 \sum_{i=0}^{n} H_{NA,i} \int_{z_i}^{z_{i+1}} \frac{1-u}{u} \log(1-u)\, du + \int_0^1 \frac{1-u}{u} (\log(1-u))^2\, du \right).$

Equation (2.1) can be simplified to

$SD_n(H_{NA} : H_\theta) = n \left[ \sum_{i=0}^{n} H_{NA,i}^2 \left\{ \log z_{i+1} - \log z_i - (z_{i+1} - z_i) \right\} + 2 \sum_{i=0}^{n} H_{NA,i} \left\{ \mathrm{Li}_2(1-z_i) - \mathrm{Li}_2(1-z_{i+1}) + (1-z_{i+1})\log(1-z_{i+1}) - (1-z_i)\log(1-z_i) + z_{i+1} - z_i \right\} + 0.4041 \right],$

where $\mathrm{Li}_2(x)$ is the dilogarithm in Spence's convention, $\mathrm{Li}_2(x) = \int_1^x \frac{\ln t}{1-t}\, dt$. Note that we approximate the last term of equation (2.1), $2\zeta(3) - 2$, by 0.4041.
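Equation (2.1) involves only elementary functions and the dilogarithm, so it is cheap to evaluate. A sketch using SciPy, whose `scipy.special.spence` matches the dilogarithm convention used here (function names are ours):

```python
import numpy as np
from scipy.special import spence   # spence(x) = the paper's Li2(x)

def SD_n(sample, F):
    """Closed-form SD_n(H_NA : H_theta) of equation (2.1)
    (a sketch assuming a continuous null CDF F and no ties)."""
    xs = np.sort(np.asarray(sample, dtype=float))
    n = xs.size
    z = np.concatenate(([0.0], F(xs), [1.0]))
    H = np.concatenate(([0.0], np.cumsum(1.0 / (n - np.arange(n)))))

    def g(v):  # (1 - v) * log(1 - v), with limit 0 at v = 1
        return (1.0 - v) * np.log(1.0 - v) if v < 1.0 else 0.0

    total = 0.4041                     # approximates the constant 2*zeta(3) - 2
    for i in range(1, n + 1):          # the i = 0 term vanishes since H_NA,0 = 0
        dz = z[i + 1] - z[i]
        total += H[i]**2 * (np.log(z[i + 1]) - np.log(z[i]) - dz)
        total += 2.0 * H[i] * (spence(1.0 - z[i]) - spence(1.0 - z[i + 1])
                               + g(z[i + 1]) - g(z[i]) + dz)
    return n * total
```

A trapezoidal approximation of the defining integral agrees with this closed form.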

### 2.2. Censored case

In the previous section, we considered the Nelson-Aalen estimator instead of the empirical distribution function because $H_e(x)$ is not finite for $x \ge x_{n:n}$. However, if the variable is censored, we do not need to consider the case $x \ge x_{n:n}$. Hence, we consider the squared difference on $(0, C)$, and let $C = x_{r:n} < \infty$. If the variable is Type-II censored at $x_{r:n}$, where $r = 2, \ldots, n$, the empirical distribution function can be obtained as

$F_e(x) = \begin{cases} 0, & \text{if } x < x_{1:n}, \\ \frac{i}{n}, & \text{if } x_{i:n} \le x < x_{i+1:n}, \end{cases}$

for i = 0, …, r − 1.

Subsequently, the cumulative hazard function of the empirical distribution function can be written as

$H_e(x) = -\log \left( \frac{n-i}{n} \right), \quad \text{if } x_{i:n} \le x < x_{i+1:n} \text{ for } i = 0, \ldots, r-1.$

Then the average squared difference is defined as

$D_{r,n}(H_e : H_\theta) = n \int_0^C (H_e(x) - H_\theta(x))^2\, dF_\theta(x) = n \int_0^{x_{r:n}} (H_e(x) - H_\theta(x))^2\, dF_\theta(x),$

which can be written as

$D_{r,n}(H_e : H_\theta) = n \sum_{i=0}^{r-1} \int_{z_i}^{z_{i+1}} (H_{e,i} + \log(1-u))^2\, du = n \sum_{i=0}^{r-1} H_{e,i}^2 (z_{i+1} - z_i) + 2n \sum_{i=1}^{r-1} \log \left( \frac{n-i}{n} \right) \left\{ (1-z_{i+1})\log(1-z_{i+1}) - (1-z_i)\log(1-z_i) + (z_{i+1} - z_i) \right\} - n \left\{ (1-z_r)(\log(1-z_r))^2 - 2(1-z_r)\log(1-z_r) - 2z_r \right\},$

where zi = Fθ(xi:n) and He,i = − log((ni)/n).

By approximating V(He(x)) by Fθ(x)/{n(1 − Fθ(x))}, the average standardized squared difference can be obtained as

$SD_{r,n}(H_e : H_\theta) = \int_0^{x_{r:n}} \frac{n(1 - F_\theta(x)) (H_e(x) - H_\theta(x))^2}{F_\theta(x)}\, dF_\theta(x).$

Equation (2.2) can be simplified:

$SD_{r,n}(H_e : H_\theta) = n \sum_{i=0}^{r-1} \int_{z_i}^{z_{i+1}} \frac{1-u}{u} (H_{e,i} + \log(1-u))^2\, du = n \sum_{i=0}^{r-1} \left[ H_{e,i}^2 \left\{ \log z_{i+1} - \log z_i - (z_{i+1} - z_i) \right\} + 2 H_{e,i} \left\{ \mathrm{Li}_2(1-z_i) - \mathrm{Li}_2(1-z_{i+1}) + (1-z_{i+1})\log(1-z_{i+1}) - (1-z_i)\log(1-z_i) + (z_{i+1} - z_i) \right\} + \int_{z_i}^{z_{i+1}} \left\{ \frac{(\log(1-u))^2}{u} - (\log(1-u))^2 \right\} du \right].$

The last term of equation (2.3) can be derived in a straightforward manner.

3. Comparison based on the ratio

### 3.1. Uncensored case

The comparison of the cumulative hazard functions Hn(x) and Hθ(x) in terms of their ratio can be established by extending the cumulative residual Kullback-Leibler information of Baratpour and Habibi Rad (2012) as

$R_n(H_{NA} : H_\theta) = n \int_0^\infty H_{NA}(x) \left( \frac{H_\theta(x)}{H_{NA}(x)} - \log \frac{H_\theta(x)}{H_{NA}(x)} - 1 \right) dF_\theta(x),$

which can be written as

$R_n(H_{NA} : H_\theta) = n \sum_{i=0}^{n} H_{NA,i} \left\{ \log H_{NA,i}\, (z_{i+1} - z_i) - \int_{z_i}^{z_{i+1}} \log(-\log(1-u))\, du - (z_{i+1} - z_i) \right\} + n.$

We note that $R_n(H_{NA} : H_\theta)$ is nonnegative and equals zero iff $H_{NA}(x) = H_\theta(x)$ almost everywhere. Equation (3.1) can be simplified to

$R_n(H_{NA} : H_\theta) = n \sum_{i=0}^{n} H_{NA,i} \left[ \log H_{NA,i}\, (z_{i+1} - z_i) - \left\{ \mathrm{li}(1-z_{i+1}) - \mathrm{li}(1-z_i) + (1-z_i)\log(-\log(1-z_i)) - (1-z_{i+1})\log(-\log(1-z_{i+1})) + (z_{i+1} - z_i) \right\} \right] + n,$

where li(x) is the logarithmic integral function defined as $\int_0^x \frac{1}{\ln t}\, dt$.
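The simplified form of $R_n$ can likewise be evaluated with SciPy, using $\mathrm{li}(x) = \mathrm{Ei}(\ln x)$ via `scipy.special.expi`; a sketch in which the function names are ours, and the $i = 0$ term is dropped since $H_{NA,0} = 0$:

```python
import numpy as np
from scipy.special import expi

def li(x):
    """Logarithmic integral li(x) = int_0^x dt/ln t, computed as Ei(ln x); li(0) = 0."""
    return expi(np.log(x)) if x > 0.0 else 0.0

def R_n(sample, F):
    """Closed-form R_n(H_NA : H_theta) of equation (3.1); F is the continuous null CDF."""
    xs = np.sort(np.asarray(sample, dtype=float))
    n = xs.size
    z = np.concatenate(([0.0], F(xs), [1.0]))
    H = np.concatenate(([0.0], np.cumsum(1.0 / (n - np.arange(n)))))

    def a(v):  # (1 - v) * log(-log(1 - v)), with limit 0 as v -> 1
        return (1.0 - v) * np.log(-np.log(1.0 - v)) if v < 1.0 else 0.0

    total = 0.0
    for i in range(1, n + 1):
        dz = z[i + 1] - z[i]
        # int_{z_i}^{z_{i+1}} log(-log(1-u)) du via the antiderivative v log(-log v) - li(v)
        integral = a(z[i]) - a(z[i + 1]) + li(1.0 - z[i + 1]) - li(1.0 - z[i])
        total += H[i] * (np.log(H[i]) * dz - integral - dz)
    return n * total + n
```

A trapezoidal approximation of the defining integral agrees with this closed form.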

In Table 1, we provide the empirical critical values at α = 0.05 of Dn, SDn, and Rn, obtained from 100,000 Monte Carlo simulated samples for the uncensored case.
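The Table 1 values can be reproduced approximately with a smaller simulation. A vectorized sketch for $D_n$ with $n = 10$ and 20,000 (rather than 100,000) replications, exploiting the fact that the null distribution of $D_n$ does not depend on $\theta$:

```python
import numpy as np

def dn_statistic(x):
    """Vectorized D_n(H_NA : H_thetahat) for an exponential null with theta
    estimated by the sample mean; x has shape (reps, n), one sample per row."""
    reps, n = x.shape
    xs = np.sort(x, axis=1)
    theta = xs.mean(axis=1, keepdims=True)
    z = np.hstack([np.zeros((reps, 1)), 1.0 - np.exp(-xs / theta), np.ones((reps, 1))])
    H = np.concatenate([[0.0], np.cumsum(1.0 / (n - np.arange(n)))])   # H_NA,0..H_NA,n
    # g(z) = (1 - z) log(1 - z), with g(1) = 0 appended for the last column
    g = np.hstack([(1.0 - z[:, :-1]) * np.log(1.0 - z[:, :-1]), np.zeros((reps, 1))])
    dz = np.diff(z, axis=1)
    return n * ((H**2 * dz).sum(axis=1)
                + 2.0 * (H * (g[:, :-1] - g[:, 1:] - dz)).sum(axis=1)) + 2.0 * n

rng = np.random.default_rng(0)
crit = np.quantile(dn_statistic(rng.exponential(size=(20000, 10))), 0.95)
print(crit)   # Table 1 reports 2.8991 for n = 10 (100,000 replications)
```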

### 3.2. Censored case

If the variable is censored at $C$, where $C < \infty$, then we can compare $H_\theta(x)$ with $H_e(x)$. We consider the case where $C = x_{r:n}$; this is the Type-II censored case, and $H_e(x)$ is finite for $x < x_{r:n}$. For this case, Park (2017) proposed $CHKL(H_e : H_\theta)$ as

$CHKL(H_e : H_\theta) = \int_0^{x_{r:n}} H_e(x) \log \frac{H_e(x)}{H_\theta(x)}\, dx + \int_0^{x_{r:n}} H_\theta(x)\, dx - \int_0^{x_{r:n}} H_e(x)\, dx.$

We can compare the cumulative hazard functions in terms of their ratio by extending the Kullback-Leibler function as

$R_{r,n}(H_e : H_\theta) = n \int_0^{x_{r:n}} H_e(x) \left( \frac{H_\theta(x)}{H_e(x)} - \log \frac{H_\theta(x)}{H_e(x)} - 1 \right) dF_\theta(x),$

which can be written as

$R_{r,n}(H_e : H_\theta) = n \sum_{i=0}^{r-1} \left\{ H_{e,i} \log H_{e,i}\, (z_{i+1} - z_i) - H_{e,i} \int_{z_i}^{z_{i+1}} \log(-\log(1-u))\, du - H_{e,i} (z_{i+1} - z_i) + (1-z_{i+1})\log(1-z_{i+1}) - (1-z_i)\log(1-z_i) + z_{i+1} - z_i \right\}.$

We also obtained the critical value estimates of Dr,n, SDr,n, and Rr,n based on 100,000 Monte Carlo simulated samples for various censored cases, but do not summarize them in this paper.

4. Application: exponential distribution

### 4.1. Uncensored case

In this section, we consider the exponential null distribution, with density $f_\theta(x) = \exp(-x/\theta)/\theta$, whose hazard function is constant at 1/θ, and evaluate the performance of $D_n(H_{NA} : H_{\hat{\theta}})$, $SD_n(H_{NA} : H_{\hat{\theta}})$, and $R_n(H_{NA} : H_{\hat{\theta}})$, where $\hat{\theta}$ is usually chosen to be the maximum likelihood estimator, denoted $\hat{\theta}_{mle}$. We compare their performances with the following test statistics based on the empirical distribution function.

• Cramér-von Mises test statistic:

$W_n^2 = \sum_{i=1}^{n} \left( z_i - \frac{2i-1}{2n} \right)^2 + \frac{1}{12n},$

where $z_i = 1 - \exp(-x_{i:n}/\hat{\theta}_{mle})$.

• Anderson-Darling test statistic:

$A_n^2 = -\frac{2}{n} \sum_{i=1}^{n} \left\{ \left( i - \frac{1}{2} \right) \log(z_i) + \left( n - i + \frac{1}{2} \right) \log(1 - z_i) \right\} - n.$

• Test statistic based on cumulative residual entropy (Baratpour and Habibi Rad, 2012):

$T_n = \frac{1}{\hat{\theta}_m} \sum_{i=0}^{n-1} (x_{(i+1)} - x_{(i)}) \frac{n-i}{n} \log \frac{n-i}{n} + 1,$

where $\hat{\theta}_m = \sum_{i=1}^{n} x_i^2 \big/ \left( 2 \sum_{i=1}^{n} x_i \right)$ and $x_{(0)} = 0$.
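The statistic $T_n$ is simple to compute from the ordered sample with $x_{(0)} = 0$; a sketch, with a function name of our choosing:

```python
import numpy as np

def t_n(sample):
    """T_n of Baratpour and Habibi Rad (2012), as given above, with x_(0) = 0."""
    xs = np.sort(np.asarray(sample, dtype=float))
    n = xs.size
    theta_m = np.sum(xs**2) / (2.0 * np.sum(xs))      # moment-type estimator
    spacings = np.diff(np.concatenate(([0.0], xs)))   # x_(i+1) - x_(i), i = 0..n-1
    w = (n - np.arange(n)) / n                        # (n - i)/n for i = 0..n-1
    return np.sum(spacings * w * np.log(w)) / theta_m + 1.0
```

Under an exponential sample, $T_n$ should be close to zero for large $n$, since it is a distance-type statistic.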

To compare the powers of these test statistics, we consider the following alternatives, grouped by the type of their hazard functions.

• Monotone decreasing hazard: Gamma (shape parameter: 0.5), Weibull (shape parameter: 0.5), Chi-square (df 1).

• Monotone increasing hazard: Uniform, Gamma (shape parameter: 2), Weibull (shape parameter: 2), Chi-square (df 4).

• Non-monotone hazard: Log normal (shape parameter: 0.5, 1, 1.5).

We employed Monte Carlo simulation to estimate the powers against the above alternatives for n = 20, 50. Tables 2 and 3 summarize the numerical results, which show that the suggested test statistics Dn, SDn, and Rn are comparable with the conventional EDF-based test statistics $W_n^2$, $A_n^2$, and Tn. Generally, SDn performs better than Dn, just as $A_n^2$ outperforms $W_n^2$, while SDn shows performance similar to $A_n^2$. However, it is interesting that Dn performs better than every other test statistic except Tn against the uniform alternative. It is also notable that Rn outperforms both $W_n^2$ and $A_n^2$ against all increasing hazard alternatives, and that SDn outperforms both $W_n^2$ and $A_n^2$ against all decreasing hazard alternatives. Since SDn and Rn are both based on departures in cumulative hazards, we can modify them by giving more weight to later (or earlier) departures.

### 4.2. Censored case

Similarly, we also consider the Type-II censored case for the same alternatives. Here we consider the cases r = 0.5n, 0.75n, 0.9n. In this case, the appropriate estimator of θ is

$\hat{\theta}_{mle} = \frac{x_{1:n} + \cdots + x_{r:n} + (n-r)\, x_{r:n}}{r}.$
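This estimator is a one-liner; a sketch, with a function name of our choosing:

```python
import numpy as np

def theta_mle_censored(sample, r):
    """Exponential MLE under Type-II censoring at the r-th order statistic:
    (sum of the r smallest observations + (n - r) * x_{r:n}) / r."""
    xs = np.sort(np.asarray(sample, dtype=float))
    n = xs.size
    return (xs[:r].sum() + (n - r) * xs[r - 1]) / r

# With r = n (no censoring) it reduces to the sample mean.
print(theta_mle_censored([3.0, 1.0, 4.0, 2.0], 4))  # 2.5
```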

As in the previous section, we compare the performances of Dr,n, SDr,n, and Rr,n with the following test statistics.

• Cramér-von Mises test statistic:

$W_{r,n}^2 = \sum_{i=1}^{r} \left( z_i - \frac{2i-1}{2n} \right)^2 + \frac{r}{12n^2} + \frac{n}{3} \left( z_r - \frac{r}{n} \right)^3,$

where $z_i = 1 - \exp(-x_{i:n}/\hat{\theta}_{mle})$.

• Anderson-Darling test statistic:

$A_{r,n}^2 = -\frac{1}{n} \sum_{i=1}^{r} (2i-1) \left( \log z_i - \log(1 - z_i) \right) - 2 \sum_{i=1}^{r} \log(1 - z_i) - \frac{1}{n} \left\{ (r-n)^2 \log(1 - z_r) - r^2 \log z_r + n^2 z_r \right\}.$

• Censored version of the test statistic based on cumulative residual entropy (Park and Lim, 2015):

$T_{r,n} = \frac{1}{\hat{\theta}_{mle}} \sum_{i=0}^{r-1} (x_{i+1:n} - x_{i:n}) \frac{n-i}{r} \log \frac{n-i}{n} + \frac{1}{2\hat{\theta}_{mle}^2} \sum_{i=0}^{r-1} \frac{n-i}{r} (x_{i+1:n}^2 - x_{i:n}^2) + \frac{n}{r} z_r - 1.$

As in the uncensored case, we employed Monte Carlo simulation to estimate the powers against the above alternatives for n = 20, 50 when r = 0.5n, 0.75n, 0.9n. Tables 4 and 5 summarize the numerical results. These tables show that the test statistics Dr,n, SDr,n, and Rr,n are comparable with the conventional EDF-based test statistics $W_{r,n}^2$, $A_{r,n}^2$, and Tr,n. SDr,n often performs better than Dr,n, as in the uncensored case. In addition, SDr,n outperforms the other statistics, including $W_{r,n}^2$ and $A_{r,n}^2$, for all decreasing hazard alternatives, which is consistent with the uncensored case. However, against the increasing hazard alternatives, Rr,n outperforms both Dr,n and SDr,n, while $W_{r,n}^2$ and $A_{r,n}^2$ show higher powers than Rr,n. It is also notable that Dr,n shows performance similar to Tr,n. Even though the censored and uncensored cases share some common features, they still show some differences in performance.

5. An illustrative example

In this section, we consider a real-life data set consisting of the failure times of 36 appliances subjected to an automatic life test (Lawless, 1982):

11 35 49 170 329 381 708 958 1062 1167 1594 1925 1990 2223 2327 2400 2451 2471

2551 2565 2568 2694 2702 2761 2831 3034 3059 3112 3214 3478 3504 4329 6367 6976

7846 13403

and illustrate the use of the proposed tests as goodness-of-fit tests for exponentiality. Table 6 shows the values of four test statistics ($SD_n$, $R_n$, $W_n^2$, $A_n^2$) for these data and the corresponding p-values. The p-values of the four test statistics are similar, and we conclude that we cannot reject the exponential null hypothesis.
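For reference, the two EDF statistics of Section 4.1 can be recomputed from the listed failure times; a sketch whose resulting values should be close to the $W_n^2$ and $A_n^2$ entries of Table 6:

```python
import numpy as np

# Failure times of the 36 appliances (Lawless, 1982), as listed above
data = np.array([11, 35, 49, 170, 329, 381, 708, 958, 1062, 1167, 1594, 1925,
                 1990, 2223, 2327, 2400, 2451, 2471, 2551, 2565, 2568, 2694,
                 2702, 2761, 2831, 3034, 3059, 3112, 3214, 3478, 3504, 4329,
                 6367, 6976, 7846, 13403], dtype=float)

n = data.size
theta_hat = data.mean()                                   # exponential MLE
z = 1.0 - np.exp(-np.sort(data) / theta_hat)
i = np.arange(1, n + 1)

# Cramér-von Mises and Anderson-Darling statistics from Section 4.1
w2 = np.sum((z - (2 * i - 1) / (2 * n))**2) + 1.0 / (12 * n)
a2 = -2.0 / n * np.sum((i - 0.5) * np.log(z) + (n - i + 0.5) * np.log(1.0 - z)) - n
print(w2, a2)
```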

6. Conclusions

We proposed goodness-of-fit test statistics based on the comparison of cumulative hazards; these are omnibus tests applicable to many distributions. They show performance comparable to other EDF-based test statistics. Since the proposed test statistics are expressed in terms of the cumulative hazard functions, it is more straightforward to weight earlier (or later) departures in cumulative hazards than in cumulative distribution functions. It is best to give more weight to later departures when hazards increase, and to earlier departures when they do not.

Acknowledgments

This research was supported by Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Education (2018R1D1A1B07042581).

TABLES

### Table 1

Empirical critical values of Dn, SDn, and Rn obtained by Monte Carlo simulation with 100,000 replications for the uncensored case

| Statistic | n = 10 | n = 20 | n = 30 | n = 40 | n = 50 |
|---|---|---|---|---|---|
| $D_n$ | 2.8991 | 3.7004 | 4.2626 | 4.6932 | 5.0342 |
| $SD_n$ | 1.1183 | 1.2149 | 1.2523 | 1.2749 | 1.2850 |
| $R_n$ | 1.0112 | 1.1165 | 1.1814 | 1.2276 | 1.2575 |

### Table 2

Empirical powers (%) of six tests at α = 0.05 against ten alternatives of the exponential distributions based on 100,000 simulations (n = 20)

| Alternatives | $D_n$ | $SD_n$ | $R_n$ | $W_n^2$ | $A_n^2$ | $T_n$ |
|---|---|---|---|---|---|---|
| Exp(1) | 5.02 | 5.00 | 4.99 | 4.94 | 5.16 | 4.98 |
| Gamma(0.5) | 44.33 | 72.82 | 55.92 | 52.59 | 70.74 | 13.87 |
| Gamma(2) | 18.36 | 37.25 | 49.16 | 48.52 | 45.25 | 39.16 |
| Log normal(0, 0.5) | 64.77 | 98.52 | 99.59 | 99.37 | 99.42 | 85.64 |
| Log normal(0, 1) | 18.84 | 13.61 | 19.22 | 15.26 | 13.99 | 13.26 |
| Log normal(0, 1.5) | 65.52 | 64.51 | 61.82 | 61.89 | 62.67 | 43.67 |
| Weibull(0.5) | 85.66 | 96.16 | 91.53 | 90.05 | 95.70 | 53.95 |
| Weibull(2) | 71.38 | 89.10 | 94.14 | 93.41 | 92.21 | 91.99 |
| $\chi_1^2$ | 44.45 | 72.85 | 56.02 | 52.71 | 70.78 | 13.87 |
| $\chi_4^2$ | 18.46 | 37.11 | 49.21 | 48.56 | 45.32 | 39.23 |
| Unif(0, 1) | 82.38 | 65.83 | 80.05 | 67.52 | 63.18 | 92.97 |

### Table 3

Empirical powers (%) of six tests at α = 0.05 against ten alternatives of the exponential distributions based on 100,000 simulations (n = 50)

| Alternatives | $D_n$ | $SD_n$ | $R_n$ | $W_n^2$ | $A_n^2$ | $T_n$ |
|---|---|---|---|---|---|---|
| Exp(1) | 5.01 | 5.19 | 5.12 | 5.18 | 5.20 | 4.97 |
| Gamma(0.5) | 71.58 | 96.63 | 91.23 | 89.94 | 96.38 | 41.34 |
| Gamma(2) | 55.03 | 90.01 | 90.32 | 90.22 | 91.77 | 65.80 |
| Log normal(0, 0.5) | 98.83 | 100.00 | 100.00 | 100.00 | 100.00 | 99.28 |
| Log normal(0, 1) | 30.40 | 32.69 | 40.20 | 30.231 | 34.17 | 28.18 |
| Log normal(0, 1.5) | 92.15 | 93.60 | 93.34 | 93.61 | 93.37 | 85.59 |
| Weibull(0.5) | 99.29 | 99.98 | 99.95 | 99.92 | 99.99 | 94.93 |
| Weibull(2) | 99.83 | 100.00 | 100.00 | 100.00 | 100.00 | 99.98 |
| $\chi_1^2$ | 71.69 | 96.64 | 91.14 | 89.83 | 96.38 | 41.36 |
| $\chi_4^2$ | 55.48 | 90.15 | 90.51 | 90.38 | 91.86 | 66.00 |
| Unif(0, 1) | 100.00 | 99.77 | 99.98 | 98.57 | 98.65 | 100.00 |

### Table 4

Empirical powers (%) of six tests at α = 0.05 for Type-II censoring against ten alternatives of the exponential distributions based on 100,000 simulations (n = 20)

| Alternatives | r | $D_{r,n}$ | $SD_{r,n}$ | $R_{r,n}$ | $W_{r,n}^2$ | $A_{r,n}^2$ | $T_{r,n}$ |
|---|---|---|---|---|---|---|---|
| Exp(1) | 10 | 4.92 | 4.85 | 4.90 | 4.97 | 4.87 | 4.97 |
| | 15 | 4.95 | 5.00 | 5.04 | 5.03 | 5.02 | 4.99 |
| | 18 | 4.87 | 4.91 | 4.89 | 4.98 | 4.95 | 4.92 |
| Gamma(0.5) | 10 | 27.40 | 52.30 | 38.62 | 27.23 | 49.88 | 25.66 |
| | 15 | 46.24 | 65.33 | 52.88 | 42.24 | 62.72 | 42.81 |
| | 18 | 51.77 | 70.88 | 60.28 | 49.12 | 68.52 | 49.17 |
| Gamma(2) | 10 | 23.65 | 19.35 | 17.70 | 28.50 | 23.28 | 24.70 |
| | 15 | 14.80 | 25.83 | 25.67 | 36.32 | 32.90 | 18.92 |
| | 18 | 5.34 | 28.98 | 26.16 | 42.29 | 37.98 | 5.55 |
| Log normal(0, 0.5) | 10 | 87.70 | 90.30 | 40.66 | 93.88 | 93.11 | 89.12 |
| | 15 | 80.55 | 97.02 | 63.95 | 98.22 | 98.48 | 86.58 |
| | 18 | 36.99 | 98.06 | 69.07 | 99.08 | 99.18 | 56.20 |
| Log normal(0, 1) | 10 | 13.45 | 11.69 | 9.50 | 17.66 | 14.62 | 14.28 |
| | 15 | 5.12 | 8.11 | 7.57 | 12.42 | 11.24 | 6.33 |
| | 18 | 8.07 | 7.82 | 7.71 | 11.34 | 9.80 | 8.29 |
| Log normal(0, 1.5) | 10 | 5.24 | 4.63 | 4.35 | 4.97 | 4.38 | 5.15 |
| | 15 | 23.93 | 19.41 | 17.90 | 17.66 | 17.11 | 22.60 |
| | 18 | 49.74 | 42.03 | 43.09 | 38.83 | 39.42 | 48.95 |
| Weibull(0.5) | 10 | 40.20 | 65.87 | 53.00 | 40.39 | 63.72 | 38.15 |
| | 15 | 72.14 | 85.50 | 77.89 | 69.67 | 84.01 | 69.31 |
| | 18 | 84.00 | 92.89 | 88.96 | 82.92 | 92.06 | 82.14 |
| Weibull(2) | 10 | 45.75 | 40.04 | 32.19 | 52.50 | 45.40 | 47.11 |
| | 15 | 46.64 | 63.05 | 57.73 | 74.84 | 70.86 | 53.09 |
| | 18 | 16.79 | 75.36 | 68.00 | 86.32 | 82.64 | 29.27 |
| $\chi_1^2$ | 10 | 27.17 | 52.13 | 38.32 | 26.96 | 49.70 | 25.48 |
| | 15 | 46.31 | 65.02 | 52.71 | 42.37 | 62.45 | 42.95 |
| | 18 | 51.88 | 70.63 | 60.35 | 49.32 | 68.24 | 49.29 |
| $\chi_4^2$ | 10 | 23.76 | 19.56 | 17.62 | 28.74 | 23.40 | 24.76 |
| | 15 | 15.11 | 26.09 | 25.80 | 36.70 | 33.26 | 19.18 |
| | 18 | 2.64 | 29.38 | 26.34 | 42.51 | 38.24 | 5.65 |
| Unif(0, 1) | 10 | 10.04 | 6.59 | 8.99 | 10.66 | 7.76 | 10.37 |
| | 15 | 13.60 | 12.92 | 20.54 | 23.44 | 17.37 | 16.23 |
| | 18 | 4.36 | 24.10 | 33.86 | 42.67 | 32.60 | 9.34 |

### Table 5

Empirical powers (%) of six tests at α = 0.05 for Type-II censoring against ten alternatives of the exponential distributions based on 100,000 simulations (n = 50)

| Alternatives | r | $D_{r,n}$ | $SD_{r,n}$ | $R_{r,n}$ | $W_{r,n}^2$ | $A_{r,n}^2$ | $T_{r,n}$ |
|---|---|---|---|---|---|---|---|
| Exp(1) | 25 | 5.03 | 5.02 | 4.96 | 5.01 | 4.98 | 5.04 |
| | 37 | 5.02 | 4.97 | 4.90 | 4.96 | 4.92 | 4.94 |
| | 45 | 4.94 | 4.95 | 5.02 | 5.01 | 4.96 | 4.91 |
| Gamma(0.5) | 25 | 61.22 | 84.30 | 74.21 | 64.52 | 83.31 | 59.91 |
| | 37 | 77.64 | 92.87 | 86.99 | 80.49 | 92.22 | 75.53 |
| | 45 | 83.26 | 95.56 | 91.73 | 86.70 | 95.12 | 81.23 |
| Gamma(2) | 25 | 55.74 | 62.94 | 64.31 | 64.04 | 65.54 | 56.83 |
| | 37 | 55.03 | 78.31 | 79.58 | 78.20 | 81.28 | 58.73 |
| | 45 | 36.92 | 84.95 | 83.76 | 85.75 | 87.60 | 43.33 |
| Log normal(0, 0.5) | 25 | 100.00 | 100.00 | 99.58 | 100.00 | 100.00 | 100.00 |
| | 37 | 100.00 | 100.00 | 99.98 | 100.00 | 100.00 | 100.00 |
| | 45 | 99.78 | 100.00 | 99.99 | 100.00 | 100.00 | 99.92 |
| Log normal(0, 1) | 25 | 29.58 | 42.39 | 39.18 | 39.22 | 45.45 | 30.83 |
| | 37 | 11.66 | 31.24 | 28.77 | 27.10 | 35.16 | 13.68 |
| | 45 | 12.86 | 24.28 | 21.82 | 21.31 | 26.89 | 13.11 |
| Log normal(0, 1.5) | 25 | 7.89 | 6.53 | 5.60 | 7.28 | 6.12 | 7.81 |
| | 37 | 41.55 | 35.52 | 33.15 | 36.18 | 32.94 | 40.40 |
| | 45 | 79.76 | 73.90 | 74.84 | 73.26 | 72.12 | 78.90 |
| Weibull(0.5) | 25 | 80.74 | 94.24 | 89.23 | 83.33 | 93.81 | 79.88 |
| | 37 | 96.60 | 99.37 | 98.65 | 97.46 | 99.31 | 96.06 |
| | 45 | 99.30 | 99.92 | 99.82 | 99.60 | 99.91 | 99.12 |
| Weibull(2) | 25 | 88.85 | 91.55 | 91.56 | 92.45 | 92.62 | 89.30 |
| | 37 | 96.54 | 99.28 | 99.35 | 99.39 | 99.43 | 97.16 |
| | 45 | 95.11 | 99.92 | 99.91 | 99.95 | 99.94 | 96.64 |
| $\chi_1^2$ | 25 | 60.92 | 84.07 | 73.72 | 64.21 | 83.09 | 59.57 |
| | 37 | 77.23 | 92.72 | 86.76 | 80.12 | 92.01 | 75.08 |
| | 45 | 82.75 | 95.54 | 91.58 | 86.46 | 95.06 | 80.78 |
| $\chi_4^2$ | 25 | 55.54 | 62.72 | 64.20 | 63.74 | 64.40 | 56.56 |
| | 37 | 54.96 | 78.21 | 79.33 | 78.12 | 81.12 | 58.68 |
| | 45 | 36.84 | 84.87 | 83.67 | 85.77 | 87.53 | 43.46 |
| Unif(0, 1) | 25 | 16.47 | 11.78 | 16.48 | 16.84 | 13.03 | 16.68 |
| | 37 | 42.01 | 37.11 | 47.42 | 49.79 | 41.48 | 43.76 |
| | 45 | 57.91 | 73.06 | 81.12 | 85.11 | 77.90 | 62.57 |

### Table 6

Test statistics and p-values

| | $SD_n$ | $R_n$ | $W_n^2$ | $A_n^2$ |
|---|---|---|---|---|
| Value | 1.3730 | 1.2746 | 0.2995 | 1.4970 |
| p-value | 0.9615 | 0.9591 | 0.9838 | 0.9693 |

References
1. Aalen O (1978). Nonparametric inference for a family of counting processes, The Annals of Statistics, 6, 701-726.
2. Anderson JA and Senthilselvan A (1982). A two-step regression model for hazard functions, Journal of the Royal Statistical Society. Series C (Applied Statistics), 31, 44-51.
3. Arizono I and Ohta H (1989). A test for normality based on Kullback-Leibler information, The American Statistician, 43, 20-22.
4. Arvanitakis Z, Wilson RS, Bienias JL, Evans DA, and Bennett DA (2004). Diabetes mellitus and risk of Alzheimer disease and decline in cognitive function, Archives of Neurology, 61, 661-666.
5. Baratpour S and Habibi Rad A (2012). Testing goodness-of-fit for exponential distribution based on cumulative residual entropy, Communications in Statistics - Theory and Methods, 41, 1387-1396.
6. Hjort NL (1990). Goodness of fit tests in models for life history data based on cumulative hazard rates, The Annals of Statistics, 18, 1221-1258.
7. Klotz L, Zhang L, Lam A, Nam R, Mamedov A, and Loblaw A (2010). Clinical results of long-term follow-up of a large, active surveillance cohort with localized prostate cancer, Journal of Clinical Oncology, 28, 126-131.
8. Korn EL, Graubard BI, and Midthune D (1997). Time-to-event analysis of longitudinal follow-up of a survey: choice of the time-scale, American Journal of Epidemiology, 145, 72-80.
9. Nelson W (1972). Theory and applications of hazard plotting for censored failure data, Technometrics, 14, 945-965.
10. Noughabi HA (2010). A new estimator of entropy and its application in testing normality, Journal of Statistical Computation and Simulation, 80, 1151-1162.
11. Park S (2017). On the goodness-of-fit test based on the ratio of cumulative hazard functions with the Type-II censored data, Communications in Statistics - Simulation and Computation, 46, 2935-2944.
12. Park S and Lim J (2015). On censored cumulative residual Kullback-Leibler information and goodness-of-fit test with Type-II censored data, Statistical Papers, 56, 247-256.
13. Park S and Park D (2003). Correcting moments for goodness-of-fit tests based on two entropy estimates, Journal of Statistical Computation and Simulation, 73, 685-694.
14. Park S and Shin M (2014). Kullback-Leibler information of a censored variable and its applications, Statistics: A Journal of Theoretical and Applied Statistics, 48, 756-765.
15. Zhang J (2002). Powerful goodness-of-fit tests based on the likelihood ratio, Journal of the Royal Statistical Society. Series B (Statistical Methodology), 64, 281-294.