Objective Bayesian inference based on upper record values from Rayleigh distribution
Communications for Statistical Applications and Methods 2018;25:411-430
Published online July 31, 2018
© 2018 Korean Statistical Society.

Jung In Seo (a) and Yongku Kim (1,b)

(a) Department of Statistics, Daejeon University, Korea; (b) Department of Statistics, Kyungpook National University, Korea
Correspondence to: (1) Department of Statistics, Kyungpook National University, 80 Daehakro, Bukgu, Daegu 41566, Korea. E-mail: kim.1252@knu.ac.kr
Received April 6, 2018; Revised May 15, 2018; Accepted May 15, 2018.
 Abstract

The Bayesian approach is a suitable alternative for constructing appropriate models for observed record values because the number of such values is typically small. This paper provides an objective Bayesian analysis method for upper record values arising from the Rayleigh distribution. For the objective Bayesian analysis, the Fisher information matrix for the unknown parameters is derived in terms of the second derivative of the log-likelihood function by using Leibniz’s rule; subsequently, objective priors are provided, resulting in proper posterior distributions. We also examine whether these priors are probability matching priors (PMPs). In a simulation study, inference results under the provided priors are compared through Monte Carlo simulations. Through real data analysis, we reveal a limitation of the approximate confidence interval based on the maximum likelihood estimator for the scale parameter and evaluate the models under the provided priors.

Keywords : Bayesian analysis, Fisher information, objective priors, Rayleigh distribution, upper record values
1. Introduction

Observations such as survival times of objects, precipitation levels, Olympic results, or daily stock prices that exceed all previously observed values are called upper record values. This concept was introduced by Chandler (1952). Let {X1, …, Xn} be a sequence of independent and identically distributed random variables with cumulative distribution function (CDF) F(x) and probability density function (PDF) f(x). Then, Xj is an upper record value if Xj > Xi for every i < j, and the record time sequence {U(k), k ∈ ℕ} is defined as

U(k)=\begin{cases} 1\ (\text{with probability } 1), & k=1,\\ \min\{\, j : j>U(k-1),\ X_j>X_{U(k-1)} \,\}, & k\ge 2. \end{cases}
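
For illustration, upper record values and their record times can be extracted from an observed sequence with a short routine; the following Python sketch (the function name and layout are ours, not from the paper) applies the definition above to the lung cancer data analyzed in Section 4.2.

```python
import numpy as np

def upper_records(x):
    """Return (record_times, record_values) of the upper record values in x.

    x[0] is always a record (k = 1); x[j] is a record if it exceeds the
    current maximum of x[0], ..., x[j-1].
    """
    times, values = [], []
    current_max = -np.inf
    for j, xj in enumerate(x, start=1):
        if xj > current_max:
            times.append(j)
            values.append(xj)
            current_max = xj
    return np.array(times), np.array(values)

# Example with the lung cancer survival times analyzed in Section 4.2
data = [6.96, 9.30, 6.96, 7.24, 9.30, 4.90, 8.42, 6.05, 10.18, 6.82,
        8.58, 7.77, 11.94, 11.25, 12.94, 12.94]
print(upper_records(data))  # record values: 6.96, 9.30, 10.18, 11.94, 12.94
```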

Statistical inference based on record values is limited by small sample sizes, although modelling small samples is an important issue in statistical applications. In addition, the likelihood function for the unknown parameters and the predictive likelihood function, provided by Arnold et al. (1998) and Basak and Balakrishnan (2003) respectively, can yield inappropriate inference results for small sample sizes when the corresponding likelihood equations are solved in the maximum likelihood method. To overcome these limitations, Wang et al. (2015) proposed a new inference method based on pivotal quantities in the family of proportional reversed hazard distributions based on record values. Wang and Ye (2015) provided bias-corrected estimators and exact confidence intervals (CIs) for the unknown parameters of the Weibull distribution based on upper record values. The Bayesian approach can be a useful alternative for small sample sizes if sufficient prior information is available. Jaheen (2003) developed a Bayesian inference under a subjective prior for the unknown parameters of the Gompertz distribution based on upper record values. Madi and Raqab (2004) provided a subjective Bayesian inference to predict future upper record values based on observed upper record values from the Pareto distribution.

However, subjective Bayesian approaches cannot be properly used in situations in which little or no prior information is available. In this case, the Bayesian inference can rely on the noninformative or objective priors. The most widely used noninformative priors are the Jeffreys prior (Jeffreys, 1961) and the reference prior (Bernardo, 1979; Berger and Bernardo, 1989, 1992). In addition, the probability matching prior (PMP) introduced by Welch and Peers (1963) has gained recent popularity due to its frequentist properties.

This article provides an objective Bayesian approach based on noninformative priors to estimate the unknown parameters of the two-parameter Rayleigh distribution with the CDF

F(x)=1-\exp\left[-\frac{(x-\mu)^2}{2\sigma^2}\right]

and the PDF

f(x)=\frac{x-\mu}{\sigma^2}\exp\left[-\frac{(x-\mu)^2}{2\sigma^2}\right],\qquad x>\mu,\ \sigma>0,

where μ is the location parameter and σ is the scale parameter. The Rayleigh distribution was first considered by Rayleigh (1880) as the distribution of the amplitude resulting from the addition of harmonic oscillations. This distribution has since been applied in many fields such as communication engineering and electrovacuum devices (Polovko, 1968; Dyer and Whisenand, 1973). Another important characteristic of this distribution is that its failure rate function is an increasing linear function of time. Therefore, some authors have employed this distribution to construct statistical models fitting real data. Raqab and Madi (2002) discussed the predictive distribution of the total testing time up to a certain failure in a future sample, as well as the remaining testing time until all items in the original sample have failed when doubly censored data are observed. Wu et al. (2006) derived the Bayes estimator of the scale parameter and the Bayes predictors of future observations when progressively Type-II censored data are observed. Kim and Han (2009) derived the Bayes estimator of the scale parameter and the reliability function based on multiply Type-II censored data. Lee et al. (2011) constructed a Bayes estimator of the lifetime performance of products and proposed a Bayesian test to assess this performance when progressively Type-II censored data are observed. Soliman and Al-Aboud (2008) provided a subjective Bayesian inference method for the scale parameter and the reliability and failure rate functions based on record values. Seo and Kim (2017) provided a noninformative prior with partial information to estimate the unknown parameters and predict future upper record values.
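
For reference, the CDF and PDF above translate directly into code, and samples can be drawn by inverting the CDF, x = μ + σ√(−2 log U) with U ~ Uniform(0, 1). The following sketch uses our own function names and is only an illustration of these formulas.

```python
import numpy as np

def rayleigh_cdf(x, mu, sigma):
    """F(x) = 1 - exp(-(x - mu)^2 / (2 sigma^2)) for x > mu, and 0 otherwise."""
    x = np.asarray(x, dtype=float)
    return np.where(x > mu, 1.0 - np.exp(-(x - mu) ** 2 / (2.0 * sigma ** 2)), 0.0)

def rayleigh_pdf(x, mu, sigma):
    """f(x) = (x - mu)/sigma^2 * exp(-(x - mu)^2 / (2 sigma^2)) for x > mu."""
    x = np.asarray(x, dtype=float)
    return np.where(x > mu,
                    (x - mu) / sigma ** 2 * np.exp(-(x - mu) ** 2 / (2.0 * sigma ** 2)),
                    0.0)

def rayleigh_rvs(mu, sigma, size, rng=None):
    """Draw samples by inversion: x = mu + sigma * sqrt(-2 log U), U ~ Uniform(0, 1)."""
    rng = np.random.default_rng(rng)
    u = rng.uniform(size=size)
    return mu + sigma * np.sqrt(-2.0 * np.log(u))
```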

This article focuses on inference based on objective priors to avoid the risk of inappropriate prior information and to reduce the effort of obtaining sufficient prior information. To develop such a method, a closed form of the Fisher information matrix for the unknown parameters (μ, σ) is needed, because popular objective priors such as the Jeffreys and reference priors, as well as the PMPs, are obtained from the Fisher information matrix. We derive the Fisher information matrix for (μ, σ) in terms of the second derivative of the log-likelihood function using Leibniz’s rule and develop an objective Bayesian analysis method.

The rest of this paper is organized as follows. Section 2 provides the Fisher information matrix in terms of the second derivative of the log-likelihood function and the preferred objective priors (the Jeffreys and reference priors and the second-order PMP) for the unknown parameters (μ, σ). Section 3 investigates properties of the posterior distributions under the provided priors. Section 4 assesses the proposed objective Bayesian analysis method through Monte Carlo simulations and applies the method to a set of survival data for lung cancer patients. Section 5 concludes the article.

2. Objective priors

This section provides the Fisher information matrix for unknown parameters (μ,σ) of the Rayleigh distribution based on the upper record values for deriving the objective priors and then proposes objective Bayesian models under the derived priors.

Let XU(i) be the ith upper record value from a PDF with an unknown parameter θ. Then, the Fisher information for θ is given by

I(\theta)=E\left[\left(\frac{\partial}{\partial\theta}\log f_{X_{U(i)}}(x)\right)^2\right],    (2.1)

where

f_{X_{U(i)}}(x)=\frac{1}{\Gamma(i)}\left[-\log\left(1-F(x)\right)\right]^{i-1}f(x)

is the marginal density function of XU(i) provided in Ahsanullah (1995). Under certain regularity conditions, the Fisher information (2.1) is given by

I(\theta)=-E\left(\frac{\partial^2}{\partial\theta^2}\log f_{X_{U(i)}}(x)\right),    (2.3)

which is computationally convenient compared with (2.1). To employ the Fisher information (2.3), the interchangeability of the differentiation and integration operators with respect to θ is a necessary condition. In some cases, this interchangeability does not hold when the support of a probability distribution depends on an unknown parameter, for example, in the Laplace distribution with a location parameter (Burkschat and Cramer, 2012) and the uniform distribution on (0, θ) for some θ > 0 (Romano and Siegel, 1986). The following results show that the integration and differentiation operators are interchangeable here even though the support of the two-parameter Rayleigh distribution depends on the location parameter μ.

Proposition 1

Let XU(i) be the ith upper record value from a two-parameter Rayleigh distribution. Then,

\frac{\partial}{\partial\mu}\int_{\mu}^{\infty}f_{X_{U(i)}}(x)\,dx=\int_{\mu}^{\infty}\frac{\partial}{\partial\mu}f_{X_{U(i)}}(x)\,dx=0    (2.4)

and

\frac{\partial^2}{\partial\mu^2}\int_{\mu}^{\infty}f_{X_{U(i)}}(x)\,dx=\int_{\mu}^{\infty}\frac{\partial^2}{\partial\mu^2}f_{X_{U(i)}}(x)\,dx=0    (2.5)

for i > 1, where

f_{X_{U(i)}}(x)=\frac{(x-\mu)^{2i-1}}{2^{i-1}\sigma^{2i}\Gamma(i)}\exp\left[-\frac{(x-\mu)^2}{2\sigma^2}\right],\qquad x>\mu,\ \sigma>0    (2.6)

is the marginal density function of XU(i).

Proof

Suppose that the marginal density function (2.6) is integrable over an arbitrary finite subinterval (μ, b). Then, we have

\frac{\partial}{\partial\mu}\int_{\mu}^{\infty}f_{X_{U(i)}}(x)\,dx=\lim_{b\to\infty}\frac{\partial}{\partial\mu}\int_{\mu}^{b}f_{X_{U(i)}}(x)\,dx.    (2.7)

By Leibniz’s rule, the right-hand side in (2.7) then becomes

\lim_{b\to\infty}\frac{\partial}{\partial\mu}\int_{\mu}^{b}f_{X_{U(i)}}(x)\,dx
=\lim_{b\to\infty}\left[\int_{\mu}^{b}\frac{\partial}{\partial\mu}f_{X_{U(i)}}(x)\,dx+f_{X_{U(i)}}(b)\frac{\partial b}{\partial\mu}-f_{X_{U(i)}}(\mu)\frac{\partial\mu}{\partial\mu}\right]
=\lim_{b\to\infty}\int_{\mu}^{b}\frac{\partial}{\partial\mu}f_{X_{U(i)}}(x)\,dx
=\lim_{b\to\infty}\frac{\partial}{\partial\mu}\left[F_{X_{U(i)}}(b)-F_{X_{U(i)}}(\mu)\right]=0,

where FXU(i)(·) is the CDF of XU(i). Therefore, the relationship in (2.4) holds. The relationship in (2.5) can be proved in the same way.

Let

g_{X_{U(i)}}(x)=\frac{\partial}{\partial\mu}f_{X_{U(i)}}(x).

Then, we have

g_{X_{U(i)}}(x)=\frac{(x-\mu)^{2i-2}}{2^{i-1}\sigma^{2i}\Gamma(i)}\left[\left(\frac{x-\mu}{\sigma}\right)^2-(2i-1)\right]\exp\left[-\frac{(x-\mu)^2}{2\sigma^2}\right].

Suppose that the function (2.8) is integrable over an arbitrary finite subinterval (μ, b). Then,

\frac{\partial}{\partial\mu}\int_{\mu}^{\infty}g_{X_{U(i)}}(x)\,dx=\lim_{b\to\infty}\frac{\partial}{\partial\mu}\int_{\mu}^{b}g_{X_{U(i)}}(x)\,dx
=\lim_{b\to\infty}\left[\int_{\mu}^{b}\frac{\partial}{\partial\mu}g_{X_{U(i)}}(x)\,dx+g_{X_{U(i)}}(b)\frac{\partial b}{\partial\mu}-g_{X_{U(i)}}(\mu)\frac{\partial\mu}{\partial\mu}\right].

For i = 1,

\frac{\partial}{\partial\mu}\int_{\mu}^{\infty}g_{X_{U(i)}}(x)\,dx\ne 0

because gXU(i)(μ) ∂μ/∂μ ≠ 0; however, for i > 1,

\frac{\partial}{\partial\mu}\int_{\mu}^{\infty}g_{X_{U(i)}}(x)\,dx=\lim_{b\to\infty}\frac{\partial}{\partial\mu}\int_{\mu}^{b}g_{X_{U(i)}}(x)\,dx=\lim_{b\to\infty}\frac{\partial}{\partial\mu}\left[f_{X_{U(i)}}(\mu)-f_{X_{U(i)}}(b)\right]=0.

This completes the proof.

Remark 1

By (2.4) in Proposition 1, we can obtain the result

E\left(\frac{\partial}{\partial\mu}\log f_{X_{U(i)}}(x)\right)=\int_{\mu}^{\infty}\left(\frac{\partial}{\partial\mu}\log f_{X_{U(i)}}(x)\right)f_{X_{U(i)}}(x)\,dx=\int_{\mu}^{\infty}\frac{\partial}{\partial\mu}f_{X_{U(i)}}(x)\,dx=0.

In addition, in

\frac{\partial^2}{\partial\mu^2}\log f_{X_{U(i)}}(x)=\frac{\partial}{\partial\mu}\left[\frac{1}{f_{X_{U(i)}}(x)}\frac{\partial}{\partial\mu}f_{X_{U(i)}}(x)\right]=\frac{\frac{\partial^2}{\partial\mu^2}f_{X_{U(i)}}(x)}{f_{X_{U(i)}}(x)}-\left(\frac{\partial}{\partial\mu}\log f_{X_{U(i)}}(x)\right)^2,

by taking the expectation, we have

E\left[\frac{\partial^2}{\partial\mu^2}\log f_{X_{U(i)}}(x)\right]=\int_{\mu}^{\infty}\frac{\partial^2}{\partial\mu^2}f_{X_{U(i)}}(x)\,dx-\int_{\mu}^{\infty}\left(\frac{\partial}{\partial\mu}\log f_{X_{U(i)}}(x)\right)^2 f_{X_{U(i)}}(x)\,dx.

Therefore,

E\left[\left(\frac{\partial}{\partial\mu}\log f_{X_{U(i)}}(x)\right)^2\right]=-E\left[\frac{\partial^2}{\partial\mu^2}\log f_{X_{U(i)}}(x)\right]

by (2.5) in Proposition 1.

Remark 2

Let XU(1), …, XU(k) be the upper record values from the Rayleigh distribution with the PDF (2.2). Then, the likelihood function based on XU(1), …, XU(k) is given by

L(\mu,\sigma)=f\left(x_{U(k)}\right)\prod_{i=1}^{k-1}\frac{f\left(x_{U(i)}\right)}{1-F\left(x_{U(i)}\right)}.    (2.10)

Then, we have

\frac{\partial}{\partial\mu}\log L(\mu,\sigma)=\frac{\partial}{\partial\mu}\left[\sum_{i=1}^{k}\log f\left(x_{U(i)}\right)-\sum_{i=1}^{k-1}\log\left(1-F\left(x_{U(i)}\right)\right)\right].

By taking the expectation,

E\left(\frac{\partial}{\partial\mu}\log L(\mu,\sigma)\right)=\sum_{i=1}^{k}E\left[\frac{\partial}{\partial\mu}\log f\left(x_{U(i)}\right)\right]-\sum_{i=1}^{k-1}E\left[\frac{\partial}{\partial\mu}\log\left(1-F\left(x_{U(i)}\right)\right)\right].

Then, the first term is zero by Remark 1, and the second term is also zero because −log(1 − F(xU(i))) has the standard exponential distribution. Therefore,

E\left(\frac{\partial}{\partial\mu}\log L(\mu,\sigma)\right)=0.

Similarly, we can obtain the following result:

E\left[\frac{\partial^2}{\partial\mu^2}\log L(\mu,\sigma)\right]=-E\left[\left(\frac{\partial}{\partial\mu}\log L(\mu,\sigma)\right)^2\right].

The result (2.11) can be proved through direct integrations. The expectation of the partial derivative of the log-likelihood function is given by

E\left(\frac{\partial}{\partial\mu}\log L(\mu,\sigma)\right)=E\left[\frac{X_{U(k)}-\mu}{\sigma^2}-\sum_{i=1}^{k}\frac{1}{X_{U(i)}-\mu}\right].    (2.12)

Let y = (xμ)2/(2σ2). Then, the expectations of the right-hand side in (2.12) are obtained as

E\left(X_{U(k)}-\mu\right)=\int_{\mu}^{\infty}(x-\mu)\,f_{X_{U(k)}}(x)\,dx=\frac{\sqrt{2}\,\sigma}{(k-1)!}\int_{0}^{\infty}y^{k-\frac{1}{2}}e^{-y}\,dy=\sqrt{2}\,\sigma\left(k-\frac{1}{2}\right)h(k)

and

E\left(\frac{1}{X_{U(i)}-\mu}\right)=\int_{\mu}^{\infty}\frac{1}{x-\mu}\,f_{X_{U(i)}}(x)\,dx=\frac{1}{\sqrt{2}\,\sigma\,\Gamma(i)}\int_{0}^{\infty}y^{i-\frac{3}{2}}e^{-y}\,dy=\frac{h(i)}{\sqrt{2}\,\sigma},

where

h(i)=\frac{\Gamma(i-1/2)}{\Gamma(i)}.

Therefore, the expectation of the partial derivative of the log-likelihood function is

E\left(\frac{\partial}{\partial\mu}\log L(\mu,\sigma)\right)=0

by the relationship

\sum_{j=1}^{n}\frac{\Gamma(j-1/2)}{\Gamma(j)}=\frac{2\,\Gamma(n+1/2)}{\Gamma(n)}.
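
The identity and the resulting zero score expectation can be checked numerically with scipy's gamma function; this is a small verification sketch under our own variable names (σ is set to 1 without loss of generality).

```python
import numpy as np
from scipy.special import gamma

def h(i):
    # h(i) = Gamma(i - 1/2) / Gamma(i)
    return gamma(i - 0.5) / gamma(i)

for n in range(1, 11):
    lhs = sum(gamma(j - 0.5) / gamma(j) for j in range(1, n + 1))
    rhs = 2.0 * gamma(n + 0.5) / gamma(n)
    # E(d/dmu log L) = sqrt(2)(k - 1/2) h(k)/sigma - sum_i h(i)/(sqrt(2) sigma); sigma = 1
    score = np.sqrt(2) * (n - 0.5) * h(n) - sum(h(i) for i in range(1, n + 1)) / np.sqrt(2)
    print(n, np.isclose(lhs, rhs), np.isclose(score, 0.0))
```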

By Remark 2, the Fisher information matrix for (μ, σ) can be written in terms of the second derivative of the log-likelihood function as

I(\mu,\sigma)=-\begin{bmatrix} E\left(\frac{\partial^2}{\partial\mu^2}\log L(\mu,\sigma)\right) & E\left(\frac{\partial^2}{\partial\mu\,\partial\sigma}\log L(\mu,\sigma)\right)\\ E\left(\frac{\partial^2}{\partial\sigma\,\partial\mu}\log L(\mu,\sigma)\right) & E\left(\frac{\partial^2}{\partial\sigma^2}\log L(\mu,\sigma)\right) \end{bmatrix},    (2.14)

where

-\frac{\partial^2}{\partial\mu^2}\log L(\mu,\sigma)=\frac{1}{\sigma^2}+\sum_{i=2}^{k}\frac{1}{\left(x_{U(i)}-\mu\right)^2},
-\frac{\partial^2}{\partial\mu\,\partial\sigma}\log L(\mu,\sigma)=-\frac{\partial^2}{\partial\sigma\,\partial\mu}\log L(\mu,\sigma)=\frac{2\left(x_{U(k)}-\mu\right)}{\sigma^3},
-\frac{\partial^2}{\partial\sigma^2}\log L(\mu,\sigma)=\frac{3\left(x_{U(k)}-\mu\right)^2}{\sigma^4}-\frac{2k}{\sigma^2}.

Then, the Fisher information (2.14) is obtained as

I(\mu,\sigma)=\frac{1}{\sigma^2}\begin{bmatrix} 1+\frac{1}{2}\sum_{i=2}^{k}\frac{1}{i-1} & 2\sqrt{2}\left(k-\frac{1}{2}\right)h(k)\\ 2\sqrt{2}\left(k-\frac{1}{2}\right)h(k) & 4k \end{bmatrix}    (2.15)

from the expectations (2.13) and

E\left[\left(X_{U(k)}-\mu\right)^2\right]=\int_{\mu}^{\infty}(x-\mu)^2 f_{X_{U(k)}}(x)\,dx=\frac{2\sigma^2}{\Gamma(k)}\int_{0}^{\infty}y^{k}e^{-y}\,dy=2k\sigma^2,
E\left[\frac{1}{\left(X_{U(i)}-\mu\right)^2}\right]=\int_{\mu}^{\infty}\frac{1}{(x-\mu)^2}f_{X_{U(i)}}(x)\,dx=\frac{1}{2\sigma^2\Gamma(i)}\int_{0}^{\infty}y^{i-2}e^{-y}\,dy=\frac{1}{2(i-1)\sigma^2}

for i > 1. Finally, by using the relationship

\psi(n+1)=-C+\sum_{j=1}^{n}\frac{1}{j},

the Fisher information matrix (2.15) is given by

I(\mu,\sigma)=\frac{1}{\sigma^2}\begin{bmatrix} 1+\frac{1}{2}\left(\psi(k)+C\right) & 2\sqrt{2}\left(k-\frac{1}{2}\right)h(k)\\ 2\sqrt{2}\left(k-\frac{1}{2}\right)h(k) & 4k \end{bmatrix},    (2.16)

where ψ(·) is the digamma function and C is Euler’s constant. Based on the Fisher information matrix (2.16) and the asymptotic normality of the maximum likelihood estimator (MLE), we can obtain the approximate 100(1 − α)% CIs for μ and σ based on the MLEs μ̂ and σ̂, which maximize the likelihood function (2.10), as

\left(\hat{\mu}-Z_{\frac{\alpha}{2}}\sqrt{\mathrm{Var}(\hat{\mu})},\ \hat{\mu}+Z_{\frac{\alpha}{2}}\sqrt{\mathrm{Var}(\hat{\mu})}\right)

and

\left(\hat{\sigma}-Z_{\frac{\alpha}{2}}\sqrt{\mathrm{Var}(\hat{\sigma})},\ \hat{\sigma}+Z_{\frac{\alpha}{2}}\sqrt{\mathrm{Var}(\hat{\sigma})}\right),    (2.17)

where Zα/2 denotes the upper α/2 point of the standard normal distribution, and Var(μ̂) and Var(σ̂) are the diagonal elements of the asymptotic variance-covariance matrix of the MLEs obtained by inverting the Fisher information matrix (2.16). The approximate CI (2.17) for σ can have a negative lower bound even though the support of σ is positive; the Bayesian method proposed in the subsequent sections overcomes this limitation.
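
To make the construction concrete, the following sketch computes the MLEs by numerically maximizing the record likelihood (2.10) and builds the approximate CIs (2.17) from the inverse of the Fisher information matrix (2.16). The function names, the optimizer, and the starting values are our own choices, so the resulting numbers are illustrative rather than the paper's reported values.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import gamma, digamma
from scipy.stats import norm

def neg_log_lik(theta, r):
    """Negative log-likelihood (2.10) of upper records r = (x_U(1), ..., x_U(k))."""
    mu, sigma = theta
    if sigma <= 0 or mu >= r[0]:
        return np.inf
    return -(np.sum(np.log(r - mu)) - 2 * len(r) * np.log(sigma)
             - (r[-1] - mu) ** 2 / (2 * sigma ** 2))

def fisher_info(mu, sigma, k):
    """Fisher information matrix (2.16); C is Euler's constant."""
    C = -digamma(1.0)
    h_k = gamma(k - 0.5) / gamma(k)
    off = 2.0 * np.sqrt(2.0) * (k - 0.5) * h_k
    return np.array([[1.0 + 0.5 * (digamma(k) + C), off],
                     [off, 4.0 * k]]) / sigma ** 2

def approx_ci(r, level=0.95):
    k = len(r)
    fit = minimize(neg_log_lik, x0=[r[0] / 2.0, np.std(r)], args=(r,),
                   method="Nelder-Mead")
    mu_hat, sigma_hat = fit.x
    var = np.diag(np.linalg.inv(fisher_info(mu_hat, sigma_hat, k)))
    z = norm.ppf(0.5 + level / 2.0)
    return [(est - z * np.sqrt(v), est + z * np.sqrt(v))
            for est, v in zip((mu_hat, sigma_hat), var)]

records = np.array([6.96, 9.30, 10.18, 11.94, 12.94])
print(approx_ci(records))  # approximate 95% CIs for mu and sigma
```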

Based on the Fisher information (2.16), we provide the objective priors (the Jeffreys and reference priors and the second-order PMP) here.

The Jeffreys prior is proportional to the square root of the determinant of the Fisher information. Therefore, the Jeffreys prior for (μ,σ) is given by

\pi_{J}(\mu,\sigma)\propto\sqrt{\frac{4}{\sigma^4}\left\{k\left(1+\frac{1}{2}\left(\psi(k)+C\right)\right)-2\left[\left(k-\frac{1}{2}\right)h(k)\right]^2\right\}}\propto\frac{1}{\sigma^2}.    (2.18)

Note that the Jeffreys prior may lead to some undesirable frequentist properties in the presence of nuisance parameters (Bernardo and Smith, 1994). The following theorems provide a reference prior for (μ, σ) and examine the frequentist properties of the provided priors by checking whether they satisfy the second-order PMP criterion.

Theorem 1

The reference prior for (μ,σ) is

\pi_{R}(\mu,\sigma)\propto\frac{1}{\sigma},    (2.19)

regardless of the parameter that is of interest.

Proof

This is proved by using the algorithm provided by Berger and Bernardo (1989). We first give a proof procedure when μ is the parameter of interest. Define the conditional reference prior for σ given μ as

\pi(\sigma\mid\mu)=\sqrt{I_{22}(\mu,\sigma)}=\frac{2\sqrt{k}}{\sigma},

where Iij is the (i, j) entry of the Fisher information (2.16). Choose a sequence of compact sets Ωi = (d1i, d2i) × (d3i, d4i) for (μ,σ) such that d1i, d3i → 0, d2ixU(1), and d4i →∞ as i→∞. Then, the normalizing constant K1i(μ) is given by

K_{1i}(\mu)=\left[\int_{d_{3i}}^{d_{4i}}\pi(\sigma\mid\mu)\,d\sigma\right]^{-1}=\frac{1}{2\sqrt{k}\left(\log d_{4i}-\log d_{3i}\right)}

and the following proper prior is obtained:

p_{i}(\sigma\mid\mu)=K_{1i}(\mu)\,\pi(\sigma\mid\mu)\,1_{(d_{3i},d_{4i})}(\sigma)=\frac{1}{\sigma\left(\log d_{4i}-\log d_{3i}\right)}1_{(d_{3i},d_{4i})}(\sigma),

where 1Ω denotes the indicator function on Ω. Therefore, when μ is the parameter of interest, the marginal reference prior for μ and the reference prior for (μ, σ) are respectively given by

\pi_{i}(\mu)=\exp\left[\frac{1}{2}\int_{d_{3i}}^{d_{4i}}p_{i}(\sigma\mid\mu)\log\left(\frac{|I(\mu,\sigma)|}{I_{22}(\mu,\sigma)}\right)d\sigma\right]
=\exp\left\{\frac{1}{2\left(\log d_{4i}-\log d_{3i}\right)}\int_{d_{3i}}^{d_{4i}}\frac{1}{\sigma}\left[-2\log\sigma+\log\left(1+\frac{1}{2}\left(\psi(k)+C\right)-\frac{2\left(\left(k-\frac{1}{2}\right)h(k)\right)^2}{k}\right)\right]d\sigma\right\}
=\exp\left\{\frac{1}{2}\log\left[\frac{1+\left(\psi(k)+C\right)/2-2\left(\left(k-\frac{1}{2}\right)h(k)\right)^2/k}{d_{4i}d_{3i}}\right]\right\}\propto 1

and

\pi^{R_1}(\mu,\sigma)=\lim_{i\to\infty}\left[\frac{K_{1i}(\mu)\,\pi_{i}(\mu)}{K_{1i}(\mu_{0})\,\pi_{i}(\mu_{0})}\right]\pi(\sigma\mid\mu)\propto\frac{1}{\sigma},

where μ0 is any fixed point.

When σ is the parameter of interest, a similar procedure is implemented with the conditional reference prior

\pi(\mu\mid\sigma)=\sqrt{I_{11}(\mu,\sigma)}=\frac{1}{\sigma}\sqrt{1+\frac{1}{2}\left(\psi(k)+C\right)}.

From the above sequence of compact sets, it follows that

K_{2i}(\sigma)=\left[\int_{d_{1i}}^{d_{2i}}\pi(\mu\mid\sigma)\,d\mu\right]^{-1}=\frac{\sigma}{\left(d_{2i}-d_{1i}\right)\sqrt{1+\left(\psi(k)+C\right)/2}}

and

p_{i}(\mu\mid\sigma)=K_{2i}(\sigma)\,\pi(\mu\mid\sigma)\,1_{(d_{1i},d_{2i})}(\mu)=\frac{1}{d_{2i}-d_{1i}}1_{(d_{1i},d_{2i})}(\mu).

Then, the marginal reference prior for σ is given by

\pi_{i}(\sigma)=\exp\left[\frac{1}{2}\int_{d_{1i}}^{d_{2i}}p_{i}(\mu\mid\sigma)\log\left(\frac{|I(\mu,\sigma)|}{I_{11}(\mu,\sigma)}\right)d\mu\right]
=\exp\left\{\frac{1}{2\left(d_{2i}-d_{1i}\right)}\int_{d_{1i}}^{d_{2i}}\left(-2\log\sigma+\log\left[4k-\frac{8\left(\left(k-\frac{1}{2}\right)h(k)\right)^2}{1+\left(\psi(k)+C\right)/2}\right]\right)d\mu\right\}\propto\frac{1}{\sigma},

and the reference prior for (σ, μ) is obtained as

\pi^{R_2}(\sigma,\mu)=\lim_{i\to\infty}\left[\frac{K_{2i}(\sigma)\,\pi_{i}(\sigma)}{K_{2i}(\sigma_{0})\,\pi_{i}(\sigma_{0})}\right]\pi(\mu\mid\sigma)\propto\frac{1}{\sigma},

where σ0 is any fixed point. Note that the reference priors have the same form regardless of the parameters of interest. Therefore, the notation πR(μ,σ) is used for both reference priors. This completes the proof.

Theorem 2

The second-order PMP has the form of 1/σ. Therefore, the reference prior (2.19) is the second-order PMP, while the Jeffreys prior (2.18) is not.

Proof

The formula for finding the second-order PMP for the multi-parameter case is provided in Peers (1965). When μ is the parameter of interest, the second-order PMP should satisfy the following partial differential equation:

\frac{\partial}{\partial\sigma}\left[\frac{I_{12}(\mu,\sigma)\,\pi(\cdot)}{I_{22}(\mu,\sigma)\sqrt{M_{1}}}\right]-\frac{\partial}{\partial\mu}\left[\frac{\pi(\cdot)}{\sqrt{M_{1}}}\right]=0,    (2.21)

where π(·) is a joint prior distribution for (μ,σ) and

M_{1}=I_{11}(\mu,\sigma)-\frac{\left[I_{12}(\mu,\sigma)\right]^2}{I_{22}(\mu,\sigma)}.

When σ is the parameter of interest, the partial differential equation (2.21) is modified as

\frac{\partial}{\partial\mu}\left[\frac{I_{12}(\mu,\sigma)\,\pi(\cdot)}{I_{11}(\mu,\sigma)\sqrt{M_{2}}}\right]-\frac{\partial}{\partial\sigma}\left[\frac{\pi(\cdot)}{\sqrt{M_{2}}}\right]=0,    (2.22)

where

M_{2}=I_{22}(\mu,\sigma)-\frac{\left[I_{12}(\mu,\sigma)\right]^2}{I_{11}(\mu,\sigma)}.

Since the Fisher information matrix (2.16) does not depend on μ, both partial differential equations (2.21) and (2.22) reduce to

\frac{\partial}{\partial\sigma}\left[\sigma\,\pi(\cdot)\right]=0.

Therefore, the prior distribution π(·) should have the form of 1/σ. This completes the proof.

Remark 3

The reference prior (2.19) is the same as the reference prior with partial information provided in Seo and Kim (2017).

The following section investigates the properties of the posteriors under the provided priors.

3. Properties of the posterior distribution

The posterior distribution under the Jeffreys prior (2.18) is

\pi_{J}(\mu,\sigma\mid\mathbf{x})=\frac{L(\mu,\sigma)\,\pi_{J}(\mu,\sigma)}{\int_{\mu}\int_{\sigma}L(\mu,\sigma)\,\pi_{J}(\mu,\sigma)\,d\sigma\,d\mu}=c_{1}^{-1}\sigma^{-2k-2}\exp\left[-\frac{\left(x_{U(k)}-\mu\right)^2}{2\sigma^2}\right]\prod_{i=1}^{k}\left(x_{U(i)}-\mu\right),    (3.1)

where c1 is the normalizing constant, given by

c_{1}=\int_{0}^{x_{U(1)}}\int_{0}^{\infty}\sigma^{-2k-2}\exp\left[-\frac{\left(x_{U(k)}-\mu\right)^2}{2\sigma^2}\right]\prod_{i=1}^{k}\left(x_{U(i)}-\mu\right)d\sigma\,d\mu.    (3.2)

By Remark 3, the posterior distribution under the reference prior (2.19) is the same as that of Seo and Kim (2017). For comparison, we also present this result based on the reference prior (2.19):

\pi_{R}(\mu,\sigma\mid\mathbf{x})=\frac{L(\mu,\sigma)\,\pi_{R}(\mu,\sigma)}{\int_{\mu}\int_{\sigma}L(\mu,\sigma)\,\pi_{R}(\mu,\sigma)\,d\sigma\,d\mu}=c_{2}^{-1}\sigma^{-2k-1}\exp\left[-\frac{\left(x_{U(k)}-\mu\right)^2}{2\sigma^2}\right]\prod_{i=1}^{k}\left(x_{U(i)}-\mu\right),    (3.3)

where

c_{2}=\int_{0}^{x_{U(1)}}\int_{0}^{\infty}\sigma^{-2k-1}\exp\left[-\frac{\left(x_{U(k)}-\mu\right)^2}{2\sigma^2}\right]\prod_{i=1}^{k}\left(x_{U(i)}-\mu\right)d\sigma\,d\mu.    (3.4)

Seo and Kim (2017) proved that the posterior distribution (3.3) is proper by showing that the normalizing constant (3.4) is integrable for μ and σ. In the same way, we prove that the posterior distribution (3.1) is proper.

By integrating out σ from the normalizing constant (3.2), we have

c_{1}=2^{k-\frac{1}{2}}\Gamma\left(k+\frac{1}{2}\right)\int_{0}^{x_{U(1)}}\frac{1}{\left(x_{U(k)}-\mu\right)^{2k+1}}\prod_{i=1}^{k}\left(x_{U(i)}-\mu\right)d\mu.

In addition, because the inequality

\prod_{i=1}^{k}\left(x_{U(i)}-\mu\right)\le\left(x_{U(k)}-\mu\right)^{k}

holds, we can obtain the following result:

\int_{0}^{x_{U(1)}}\frac{1}{\left(x_{U(k)}-\mu\right)^{2k+1}}\prod_{i=1}^{k}\left(x_{U(i)}-\mu\right)d\mu\le\int_{0}^{x_{U(1)}}\frac{\left(x_{U(k)}-\mu\right)^{k}}{\left(x_{U(k)}-\mu\right)^{2k+1}}d\mu=\frac{1}{k}\left[\left(x_{U(k)}-x_{U(1)}\right)^{-k}-x_{U(k)}^{-k}\right]<\infty.

Therefore, the posterior distribution (3.1) is proper.

Theorem 3

The marginal posterior distributions for μ under the Jeffreys prior (2.18) and the reference prior (2.19) are

\pi_{J}(\mu\mid\mathbf{x})=\int_{0}^{\infty}\pi_{J}(\mu,\sigma\mid\mathbf{x})\,d\sigma=\frac{2^{k-\frac{1}{2}}\Gamma\left(k+\frac{1}{2}\right)}{c_{1}\left(x_{U(k)}-\mu\right)^{2k+1}}\prod_{i=1}^{k}\left(x_{U(i)}-\mu\right)    (3.5)

and

\pi_{R}(\mu\mid\mathbf{x})=\int_{0}^{\infty}\pi_{R}(\mu,\sigma\mid\mathbf{x})\,d\sigma=\frac{2^{k-1}\Gamma(k)}{c_{2}\left(x_{U(k)}-\mu\right)^{2k}}\prod_{i=1}^{k}\left(x_{U(i)}-\mu\right),    (3.6)

respectively.

However, a Markov chain Monte Carlo (MCMC) technique should be applied because the marginal posterior distributions (3.5) and (3.6) cannot be reduced analytically to any well-known distribution. Seo and Kim (2017) considered the uniform distribution on (0, xU(1)) as a proposal distribution in the Metropolis-Hastings algorithm and obtained satisfactory results. The Metropolis-Hastings algorithm is therefore applied to generate MCMC samples μi (i = 1, …, N) from the marginal posterior distributions (3.5) and (3.6), as sketched below.
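
A minimal sketch of this step is given below: an independence Metropolis-Hastings sampler with a Uniform(0, x_U(1)) proposal targeting the marginal posterior (3.6) under the reference prior; switching the exponent from 2k to 2k + 1 gives the Jeffreys version (3.5). The helper names and the chain length are our own choices.

```python
import numpy as np

def log_marg_post_mu(mu, r, prior="reference"):
    """Unnormalized log marginal posterior of mu, cf. (3.5) and (3.6)."""
    k = len(r)
    power = 2 * k if prior == "reference" else 2 * k + 1
    if mu <= 0 or mu >= r[0]:
        return -np.inf
    return np.sum(np.log(r - mu)) - power * np.log(r[-1] - mu)

def sample_mu(r, n_iter=11000, prior="reference", rng=None):
    """Independence Metropolis-Hastings with a Uniform(0, x_U(1)) proposal."""
    rng = np.random.default_rng(rng)
    mu = r[0] / 2.0
    chain = np.empty(n_iter)
    for t in range(n_iter):
        cand = rng.uniform(0.0, r[0])
        # The uniform proposal density cancels in the acceptance ratio
        log_ratio = log_marg_post_mu(cand, r, prior) - log_marg_post_mu(mu, r, prior)
        if np.log(rng.uniform()) < log_ratio:
            mu = cand
        chain[t] = mu
    return chain

records = np.array([6.96, 9.30, 10.18, 11.94, 12.94])
mu_chain = sample_mu(records, prior="reference", rng=1)
```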

Theorem 4

The marginal posterior distributions for σ under the Jeffreys prior (2.18) and the reference prior (2.19) are, respectively,

\pi_{J}(\sigma\mid\mathbf{x})=\int_{0}^{x_{U(1)}}\pi_{J}(\sigma\mid\mu,\mathbf{x})\,\pi_{J}(\mu\mid\mathbf{x})\,d\mu

and

\pi_{R}(\sigma\mid\mathbf{x})=\int_{0}^{x_{U(1)}}\pi_{R}(\sigma\mid\mu,\mathbf{x})\,\pi_{R}(\mu\mid\mathbf{x})\,d\mu,

where the corresponding conditional posterior density functions are given by

\pi_{J}(\sigma\mid\mu,\mathbf{x})=\frac{\left(x_{U(k)}-\mu\right)^{2k+1}}{2^{k-\frac{1}{2}}\Gamma\left(k+\frac{1}{2}\right)}\sigma^{-2k-2}\exp\left[-\frac{\left(x_{U(k)}-\mu\right)^2}{2\sigma^2}\right],    (3.7)
\pi_{R}(\sigma\mid\mu,\mathbf{x})=\frac{\left(x_{U(k)}-\mu\right)^{2k}}{2^{k-1}\Gamma(k)}\sigma^{-2k-1}\exp\left[-\frac{\left(x_{U(k)}-\mu\right)^2}{2\sigma^2}\right].    (3.8)

Remark 4

The conditional posterior density function (3.7) is the PDF of the square root inverse gamma distribution with shape parameter k + 1/2 and scale parameter (xU(k) − μ)², and the conditional posterior density function (3.8) is the PDF of the square root inverse gamma distribution with shape parameter k and scale parameter (xU(k) − μ)².

By Remark 4, the MCMC samples σi (i = 1, …, N) can be generated from the corresponding square root inverse gamma distribution as soon as the MCMC samples μi (i = 1, …, N) are generated from the marginal posterior distribution for μ. Then, the Bayes estimators of μ and σ under the squared error loss function (SELF) are obtained respectively as

\hat{\mu}_{B}=\frac{1}{N-M}\sum_{i=M+1}^{N}\mu_{i}

and

\hat{\sigma}_{B}=\frac{1}{N-M}\sum_{i=M+1}^{N}\sigma_{i},

where M is the number of burn-in samples. The subscript B is replaced by JB and RB under the Jeffreys prior (2.18) and the reference prior (2.19), respectively. The highest posterior density (HPD) credible intervals (CrIs) for μ and σ are constructed by the method provided in Chen and Shao (1998), as sketched below.
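
Continuing the sketch above, the σ draws, the Bayes estimates, and the HPD CrIs can be obtained as follows. A draw from the square root inverse gamma kernel in (3.7) or (3.8) is taken as σ = 1/√G with G ~ Gamma(shape, rate = (x_U(k) − μ)²/2), where the shape is k + 1/2 under the Jeffreys prior and k under the reference prior, and the HPD routine follows the empirical approach of Chen and Shao (1998). Names and tuning are ours.

```python
import numpy as np

def sample_sigma_given_mu(mu_chain, r, prior="reference", rng=None):
    """Draw sigma_t | mu_t from the conditional posteriors (3.7)/(3.8).

    sigma = 1/sqrt(G) with G ~ Gamma(shape, rate), shape = k (reference)
    or k + 1/2 (Jeffreys), rate = (x_U(k) - mu)^2 / 2.
    """
    rng = np.random.default_rng(rng)
    k = len(r)
    shape = k if prior == "reference" else k + 0.5
    rate = (r[-1] - mu_chain) ** 2 / 2.0
    g = rng.gamma(shape, 1.0 / rate)  # numpy parameterizes gamma by a scale = 1/rate
    return 1.0 / np.sqrt(g)

def hpd_interval(samples, level=0.95):
    """Empirical HPD interval: shortest interval covering a fraction `level` of draws."""
    s = np.sort(samples)
    m = int(np.floor(level * len(s)))
    widths = s[m:] - s[:len(s) - m]
    j = np.argmin(widths)
    return s[j], s[j + m]

# mu_chain: output of sample_mu() from the earlier sketch; M burn-in draws discarded
M = 1000
records = np.array([6.96, 9.30, 10.18, 11.94, 12.94])
sigma_chain = sample_sigma_given_mu(mu_chain, records, prior="reference", rng=2)
mu_B, sigma_B = mu_chain[M:].mean(), sigma_chain[M:].mean()
print(mu_B, sigma_B, hpd_interval(mu_chain[M:]), hpd_interval(sigma_chain[M:]))
```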

4. Application

This section assesses the validity of the proposed analysis method through Monte Carlo simulations and real data analysis.

4.1. Simulation study

This subsection reports the mean squared errors (MSEs) and biases of the proposed estimators, as well as the coverage probabilities (CPs) and average lengths (ALs) of the proposed intervals at the 0.95 level, to assess their validity. The upper record values are generated from the two-parameter Rayleigh distribution with μ = 0.5 and σ = 1 for k = 5(2)15, that is, k = 5, 7, …, 15 (see the sketch below). All results based on 1,000 simulations are displayed in Figures 1 and 2.
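
The record samples in this design can be generated without simulating long raw sequences: because −log(1 − F(X_U(i))) is the ith arrival time of a unit-rate Poisson process, X_U(i) = μ + σ√(2 G_i) with G_i a cumulative sum of standard exponential variables. The following sketch illustrates this under the simulation settings above; it is our illustration, not the authors' code.

```python
import numpy as np

def rayleigh_upper_records(k, mu=0.5, sigma=1.0, rng=None):
    """Generate k upper record values from the two-parameter Rayleigh distribution.

    Uses -log(1 - F(X_U(i))) = cumulative sum of iid Exp(1) variables,
    so X_U(i) = mu + sigma * sqrt(2 * G_i).
    """
    rng = np.random.default_rng(rng)
    g = np.cumsum(rng.exponential(size=k))
    return mu + sigma * np.sqrt(2.0 * g)

rng = np.random.default_rng(2018)
for k in range(5, 16, 2):  # k = 5(2)15
    r = rayleigh_upper_records(k, rng=rng)
    print(k, np.round(r, 3))
```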

From Figures 1 and 2, we can see that the Bayes estimators under the reference prior (2.19) show the best performance in terms of MSE and bias. In addition, the HPD CrIs under the reference prior (2.19) match their nominal levels well. The HPD CrIs under the Jeffreys prior (2.18) and the approximate CIs based on the MLEs have lower CPs than the nominal levels, but the CPs approach the nominal levels as the number of record values k increases. The HPD CrIs under the priors (2.18) and (2.19) have smaller ALs than the approximate CIs based on the MLEs, and the ALs under the two priors differ little. These results indicate that the proposed objective Bayesian method is superior to the corresponding maximum likelihood counterpart in terms of frequentist properties.

4.2. Real data

In this subsection, we analyze a real data set that represents survival times in days for a group of lung cancer patients, as provided in Lawless (1982):

6.96, 9.30, 6.96, 7.24, 9.30, 4.90, 8.42, 6.05, 10.18, 6.82, 8.58, 7.77, 11.94, 11.25, 12.94, 12.94.

We can observe the following upper record values:

6.96, 9.30, 10.18, 11.94, 12.94,

which have been analyzed by several authors. Soliman and Al-Aboud (2008) showed that the Rayleigh distribution with a scale parameter fits the observed record data. Seo and Kim (2017) applied an objective Bayesian method under the reference prior with partial information to the observed record data and showed that the proposed Bayesian model fits the data well. Here we focus on comparing the Bayesian models under the Jeffreys prior (2.18) and the reference prior (2.19). Tables 1 and 2 report numerical results and posterior probabilities (PPs) of the HPD CrIs as well as estimation results for the unknown parameters based on the observed upper record values. As mentioned in Remark 3, the reference prior (2.19) has the same form as the reference prior with partial information provided in Seo and Kim (2017), and their study proved that the Markov chains under this prior mix well and converge to the stationary distribution very quickly. Therefore, we do not report convergence diagnostics for the generated MCMC samples.

Tables 1 and 2 show that the Bayes estimates based on the generated MCMC samples and the numerical results are very close to each other. In addition, the 95% HPD CrIs satisfy their PPs well. It is worth noting that the lower bound of the approximate 95% CI based on the MLE σ̂ is negative in Table 2, although the support of σ is positive. This can occur because the approximate CI for σ relies on the asymptotic normality of the MLE. Therefore, it is natural to prefer the Bayesian inference.

The quality of the models under the derived priors can be evaluated through posterior predictive checking. If the model is adequate, data drawn from the fitted model, namely replications, should look similar to the observed data. Let Xrep be a replication from a fitted model. Then, the Bayesian predictive density function of Xrep under a prior distribution π(θ) is given by

f_{X^{\mathrm{rep}}}\left(x^{\mathrm{rep}}\mid\mathbf{x}\right)=\int_{\Theta}f_{X^{\mathrm{rep}}}\left(x^{\mathrm{rep}}\mid\theta\right)\pi(\theta\mid\mathbf{x})\,d\theta,

where fXrep(xrep | θ) is the density function of Xrep given θ. Let Xrep ≡ XU(i)rep denote the replication from the model under the Jeffreys prior (2.18). Then, the MCMC sample XU(i)rep(j) is obtained from the density function fXU(i)rep(xU(i)rep | μj, σj), with μj and σj generated from the joint posterior distribution (3.1). Therefore, the replications of the observed upper record values are given by

X_{U(i)}^{\mathrm{rep}}=\frac{1}{N-M}\sum_{j=M+1}^{N}X_{U(i)}^{\mathrm{rep}(j)},\qquad i=1,\ldots,k.

The replications from the model under the reference prior (2.19) can be obtained similarly. These replications are reported in Table 3. As in Seo and Kim (2017), we evaluate the Bayesian models through four discrepancy statistics:

D_{1}=X_{U(1)}^{\mathrm{rep}},\quad D_{2}=X_{U(5)}^{\mathrm{rep}},\quad D_{3}=\mathrm{Mean}\left(X_{U(i)}^{\mathrm{rep}}\right),\quad D_{4}=\mathrm{SD}\left(X_{U(i)}^{\mathrm{rep}}\right),\qquad i=1,\ldots,k.
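
A sketch of the replication step is given below: for each retained posterior draw (μ_j, σ_j), a fresh record sample of size k is simulated, the draws are averaged to form X_U(i)^rep, and the four discrepancy statistics are computed from the simulated sets. It reuses the posterior chains from the earlier sketches, and the names are ours.

```python
import numpy as np

def replicate_records(mu_chain, sigma_chain, k, M=1000, rng=None):
    """Posterior predictive replications of the k observed upper records.

    For each retained draw (mu_j, sigma_j), simulate a record sample of
    size k; return the replicated samples and their average over draws.
    """
    rng = np.random.default_rng(rng)
    draws = list(zip(mu_chain[M:], sigma_chain[M:]))
    reps = np.empty((len(draws), k))
    for j, (mu_j, sigma_j) in enumerate(draws):
        g = np.cumsum(rng.exponential(size=k))
        reps[j] = mu_j + sigma_j * np.sqrt(2.0 * g)
    return reps, reps.mean(axis=0)

# mu_chain, sigma_chain: posterior draws from the earlier sketches
reps, rep_mean = replicate_records(mu_chain, sigma_chain, k=5, M=1000, rng=3)
D1, D2 = reps[:, 0], reps[:, 4]                      # first and fifth replicated records
D3, D4 = reps.mean(axis=1), reps.std(axis=1, ddof=1)  # mean and SD across each replicate
```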

Under the provided priors (2.18) and (2.19), we present the histograms and kernel densities of the discrepancy statistics in Figures 3–6.

Table 3 shows that the replications under the Jeffreys prior (2.18) are closer to the observed upper record values than those under the reference prior (2.19). Figures 3–6 show little difference between the models under the priors (2.18) and (2.19) for D1. In addition, the model under the Jeffreys prior (2.18) performs better than that under the reference prior (2.19) for D2. In contrast, the model under the reference prior (2.19) performs better than that under the Jeffreys prior (2.18) for D3 and D4. However, the differences are not significant.

5. Conclusions

This paper provides an objective Bayesian analysis method based on the objective priors (the Jeffreys and reference priors and the second-order PMP) for the unknown parameters of the two-parameter Rayleigh distribution when upper record values are observed. To obtain the objective priors, we derived the Fisher information matrix for the unknown parameters in terms of the second derivative of the log-likelihood function using Leibniz’s rule. In the simulation study, we showed that the model under the reference prior (2.19) is superior to that under the Jeffreys prior (2.18) and to the corresponding maximum likelihood counterpart in terms of frequentist properties. In addition, we showed a limitation of the approximate CI based on the MLE through real data analysis. Based on these results, we recommend the objective Bayesian method under the reference prior (2.19) in the absence of prior information.

Figures
Fig. 1. Simulation results.
Fig. 2. Simulation results.
Fig. 3. (a) Histogram and kernel density of D1 under the Jeffreys prior (2.18) and (b) histogram and kernel density of D1 under the reference prior (2.19).
Fig. 4. (a) Histogram and kernel density of D2 under the Jeffreys prior (2.18) and (b) histogram and kernel density of D2 under the reference prior (2.19).
Fig. 5. (a) Histogram and kernel density of D3 under the Jeffreys prior (2.18) and (b) histogram and kernel density of D3 under the reference prior (2.19).
Fig. 6. (a) Histogram and kernel density of D4 under the Jeffreys prior (2.18) and (b) histogram and kernel density of D4 under the reference prior (2.19).
TABLES

Table 1

Estimates and the corresponding 95% CIs and HPD CrIs for μ

                         μ̂ (MLE)              μ̂_JB                μ̂_RB
Estimate (numerical)     5.205                 4.196                3.867
Estimate (MCMC)          -                     4.205                3.863
95% CI / HPD CrI         (−7.651, 18.062)      (0.906, 6.872)       (0.514, 6.684)
PP                       -                     0.949                0.951

CI = confidence interval; HPD = highest posterior density; CrIs = credible intervals; MCMC = Markov chain Monte Carlo; PP = posterior probability; JB = Jeffreys prior; RB = reference prior.


Table 2

Estimates and the corresponding 95% CIs and HPD CrIs for σ

                         σ̂ (MLE)              σ̂_JB                σ̂_RB
Estimate (numerical)     2.446                 2.835                3.109
Estimate (MCMC)          -                     2.831                3.113
95% CI / HPD CrI         (−1.662, 6.554)       (1.451, 4.584)       (1.663, 5.409)
PP                       -                     0.949                0.951

CI = confidence interval; HPD = highest posterior density; CrIs = credible intervals; MCMC = Markov chain Monte Carlo; PP = posterior probability; JB = Jeffreys prior; RB = reference prior.


Table 3

Replications of the observed upper record values under the provided priors

i              1       2       3       4       5
π_J(μ, σ)      7.75    9.52    10.85   11.96   12.93
π_R(μ, σ)      7.76    9.71    11.17   12.39   13.46

References
  1. Ahsanullah, M (1995). Record Statistics. New York: Nova Science Publishers.
  2. Arnold, BC, Balakrishnan, N, and Nagaraja, HN (1998). Records. New York: Wiley.
  3. Basak, P, and Balakrishnan, N (2003). Maximum likelihood prediction of future record statistics. Mathematical and Statistical Methods in Reliability, 7, 159-175.
  4. Berger, JO, and Bernardo, JM (1989). Estimating a product of means: Bayesian analysis with reference priors. Journal of the American Statistical Association, 84, 200-207.
  5. Berger, JO, and Bernardo, JM (1992). On the development of reference priors (with discussion). Bayesian Statistics 4 (Bernardo, JM, et al., eds). Oxford: Oxford University Press, pp. 35-60.
  6. Bernardo, JM (1979). Reference posterior distributions for Bayesian inference (with discussion). Journal of the Royal Statistical Society, Series B, 41, 113-147.
  7. Bernardo, JM, and Smith, AFM (1994). Bayesian Theory. Chichester: Wiley.
  8. Burkschat, M, and Cramer, E (2012). Fisher information in generalized order statistics. Statistics, 46, 719-743.
  9. Chen, MH, and Shao, QM (1998). Monte Carlo estimation of Bayesian credible and HPD intervals. Journal of Computational and Graphical Statistics, 8, 69-92.
  10. Chandler, KN (1952). The distribution and frequency of record values. Journal of the Royal Statistical Society, Series B, 14, 220-228.
  11. Dyer, DD, and Whisenand, CW (1973). Best linear unbiased estimator of the parameter of the Rayleigh distribution - Part I: small sample theory for censored order statistics. IEEE Transactions on Reliability, 22, 27-34.
  12. Jaheen, ZF (2003). A Bayesian analysis of record statistics from the Gompertz model. Applied Mathematics and Computation, 145, 307-320.
  13. Jeffreys, H (1961). Theory of Probability and Inference. London: Cambridge University Press.
  14. Kim, C, and Han, K (2009). Estimation of the scale parameter of the Rayleigh distribution with multiply type-II censored sample. Journal of Statistical Computation and Simulation, 79, 965-976.
  15. Lee, WC, Wu, JW, Hong, ML, Lin, LS, and Chan, RL (2011). Assessing the lifetime performance index of Rayleigh products based on the Bayesian estimation under progressive type II right censored samples. Journal of Computational and Applied Mathematics, 235, 1676-1688.
  16. Lawless, JF (1982). Statistical Models and Methods for Lifetime Data. New York: Wiley.
  17. Madi, MT, and Raqab, MZ (2004). Bayesian prediction of temperature records using the Pareto model. Environmetrics, 15, 701-710.
  18. Peers, HW (1965). On confidence sets and Bayesian probability points in the case of several parameters. Journal of the Royal Statistical Society, Series B, 27, 9-16.
  19. Polovko, AM (1968). Fundamentals of Reliability Theory. New York: Academic Press.
  20. Rayleigh, L (1880). On the resultant of a large number of vibrations of the same pitch and of arbitrary phase. Philosophical Magazine and Journal of Science, 10, 73-78.
  21. Raqab, MZ, and Madi, MT (2002). Bayesian prediction of the total time on test using doubly censored Rayleigh data. Journal of Statistical Computation and Simulation, 72, 781-789.
  22. Romano, JP, and Siegel, AF (1986). Counterexamples in Probability and Statistics. Wadsworth and Brooks/Cole.
  23. Seo, JI, and Kim, Y (2017). Objective Bayesian analysis based on upper record values from two-parameter Rayleigh distribution with partial information. Journal of Applied Statistics, 44, 2222-2237.
  24. Soliman, AA, and Al-Aboud, FM (2008). Bayesian inference using record values from Rayleigh model with application. European Journal of Operational Research, 185, 659-672.
  25. Wu, SJ, Chen, DH, and Chen, ST (2006). Bayesian inference for Rayleigh distribution under progressive censored sample. Applied Stochastic Models in Business and Industry, 22, 269-279.
  26. Wang, BX, and Ye, ZS (2015). Inference on the Weibull distribution based on record values. Computational Statistics and Data Analysis, 83, 26-36.
  27. Wang, BX, Yu, K, and Coolen, FPA (2015). Interval estimation for proportional reversed hazard family based on lower record values. Statistics and Probability Letters, 98, 115-122.
  28. Welch, BL, and Peers, HW (1963). On formulae for confidence points based on integrals of weighted likelihoods. Journal of the Royal Statistical Society, Series B, 35, 318-329.