How to Improve Classical Estimators via Linear Bayes Method?
Commun. Stat. Appl. Methods 2015;22:531-542
Published online November 30, 2015
© 2015 Korean Statistical Society.

Lichun Wang

Department of Mathematics, Beijing Jiaotong University, China
Correspondence to: Lichun Wang
Department of Mathematics, Beijing Jiaotong University, Beijing 100044, China. E-mail: lchwang@bjtu.edu.cn
Received October 17, 2015; Revised October 26, 2015; Accepted October 26, 2015.
 Abstract

In this survey, we use the normal linear model to demonstrate the use of the linear Bayes method. The superiority of the linear Bayes estimator (LBE) over the classical UMVUE and MLE is established in terms of the mean squared error matrix (MSEM) criterion. Compared with the usual Bayes estimator (obtained by the MCMC method), the proposed LBE is simple and easy to use, and numerical results are presented to illustrate its performance. We also examine the applications of the linear Bayes method to some other distributions, including the two-parameter exponential family, the uniform distribution and the inverse Gaussian distribution, and finally make some remarks.

Keywords : linear Bayes method, MCMC method, MSEM criterion, normal linear model, two-parameter exponential family, uniform distribution, inverse Gaussian distribution
1. Introduction

The linear Bayes method was originally proposed by Hartigan (1969), who suggests that in Bayesian statistics one can replace a completely specified prior distribution by an assumption on just a few moments of the distribution. It has subsequently been discussed by Rao (1973) from a linear optimization viewpoint. Lamotte (1978) later develops a class of linear estimators, called Bayes linear estimators, by searching among all linear estimators for those with least average total mean squared error. Goldstein (1983) considers the problem of modifying the linear Bayes estimator for the mean of a distribution of unknown form using a sample variance estimate. Heiligers (1993) studies the relationship between linear Bayes estimation and minimax estimation in linear models with partial parameter restrictions. Hoffmann (1996) proposes a well-described subclass of Bayes linear estimators for the unknown parameter vector in the linear regression model with ellipsoidal parameter constraints and obtains a necessary and sufficient condition ensuring that the considered Bayes linear estimators improve the least squares estimator over the whole ellipsoid regardless of the selected generalized risk function. In the framework of empirical Bayes, Samaniego and Vestrup (1999) and Pensky and Ni (2000) respectively construct linear empirical Bayes estimators and establish their superiorities over standard and traditional estimators. In application fields, Busby et al. (2005) propose the application of Bayes linear methodology to uncertainty evaluation in reservoir forecasting. Zhang and Wei (2005) derive the unique Bayes linear unbiased estimator of estimable functions for the singular linear model. Wei and Zhang (2007) employ the linear Bayes procedure to define the Bayes linear minimum risk estimator in a linear model and discuss its superiorities. Recently, Zhang et al. (2011, 2012) extend the research on the linear Bayes estimator to the partitioned linear model and multivariate linear models, respectively.

In this paper, along the same line as in Wang and Singh (2014), we use the normal linear model as an example to demonstrate how to apply the linear Bayes method to simultaneously estimate all the parameters involved in the model, and we elaborate on its advantages and potential disadvantages.

Let W be a known p-dimensional subspace of ℝⁿ. Suppose that we observe the random vector

Y ~ N_n(μ, σ²I_n),    μ ∈ W, σ² > 0.    (1.1)

This model is called the normal linear model as defined by Arnold (1980) and adopted by many other authors as well. Let X be a basis matrix for W. Then X is an n × p matrix of rank p, and there exists a unique β ∈ ℝ^p such that μ = Xβ. Hence, an equivalent version of the normal linear model can be presented, and we observe the random vector

Y ~ N_n(Xβ, σ²I_n),    β ∈ ℝ^p, σ² > 0.    (1.2)

Define β̂ = (X′X)⁻¹X′Y and σ̂² = (‖Y‖² − ‖Xβ̂‖²)/(n − p), and note that (β̂, σ̂²) is a complete sufficient statistic for the above linear model. Hence, the classical estimators for the parameters β and σ² are β̂ and σ̂², which are the uniformly minimum variance unbiased estimators (UMVUE), i.e., best in the unbiased class in the sense of minimizing mean squared error.
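The classical estimators above can be computed in a few lines. The following is a minimal numerical sketch (the design matrix and responses are made-up illustrative values, not data from this survey):

```python
import numpy as np

# Hypothetical data: n = 4 observations, p = 2 (intercept and slope).
X = np.array([[1.0, 0.0],
              [1.0, 1.0],
              [1.0, 2.0],
              [1.0, 3.0]])
Y = np.array([0.0, 1.0, 2.0, 4.0])
n, p = X.shape

# UMVUE of beta: the least squares estimator (X'X)^{-1} X'Y.
beta_hat = np.linalg.solve(X.T @ X, X.T @ Y)

# UMVUE of sigma^2: (||Y||^2 - ||X beta_hat||^2) / (n - p).
sigma2_hat = (Y @ Y - (X @ beta_hat) @ (X @ beta_hat)) / (n - p)

print(beta_hat)    # approximately [-0.2, 1.3]
print(sigma2_hat)  # approximately 0.15
```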

From the Bayesian viewpoint, note that in most cases past experience about the parameters β and σ² is available. Let f₀(β, σ²) be the joint prior of β and σ², and let the loss function be

L(θ̂, θ) = (θ̂ − θ)′D(θ̂ − θ),    (1.3)

where D is a positive definite matrix and θ̂ denotes an estimate of the vector θ = (β′, σ²)′. Then, by virtue of Bayes' theorem, the usual Bayes estimators (UBE) for β and σ², say β̂_UB and σ̂²_UB, can be calculated by

β̂_UB = ∫∫ β g(β, σ²|y) dβ dσ²,    σ̂²_UB = ∫∫ σ² g(β, σ²|y) dβ dσ²,

where g(β, σ²|y) denotes the conditional joint posterior density of β and σ² given Y. However, it is difficult to handle complicated or non-standard integrations. Normally, in these cases approximate Bayes estimators are suggested, such as Lindley's approximation and Tierney and Kadane's approximation; see Lindley (1980) and Tierney and Kadane (1986) for details. Simulation-based methods such as the Gibbs sampling procedure and the Metropolis method have also emerged in the past twenty years; see Martinez and Martinez (2007). Traditional Bayes estimators (UBE) are somewhat complicated and inconvenient to use in these situations.

In the following, enlightened by Rao (1973), we employ the linear Bayes method to propose a linear Bayes estimator (LBE) for the parameters β and σ² simultaneously, and investigate its superiorities. We also extend our discussion to the application of the linear Bayes method to some other useful distributions.

The survey is organized as follows: In Section 2 we define the LBE for the parameter vector θ = (β′, σ²)′ and establish its superiorities over the classical UMVUE and MLE. Numerical comparisons between the LBE and the usual Bayes estimator (UBE) are presented in Section 3. Extended discussions and remarks are made in Section 4.

Throughout this paper, for two nonnegative definite matrices A₁ and A₂ of the same size, we say A₁ ≥ A₂ if and only if A₁ − A₂ is a nonnegative definite matrix.

2. Linear Bayes Estimator and Its Superiorities

2.1. The proposed LBE

Denote θ = (β′, σ²)′. In what follows we assume that the prior G(θ) belongs to the distribution family

𝒢 = {G(θ): E[‖β‖² + (σ²)²] < ∞}.    (2.1)

Put T = (β̂′, σ̂²)′ and define the linear Bayes estimator (LBE) of θ, say θ̂_LB, to be of the form θ̃ = BT + b satisfying

R(θ̂_LB, θ) = min_{B,b} E_{(Y,θ)} L(θ̃, θ)    and    E_{(Y,θ)}(θ̂_LB − θ) = 0,    (2.2)

where B and b are (p + 1) × (p + 1) and (p + 1) × 1 undetermined matrices, respectively, E_{(Y,θ)} denotes the expectation with respect to the joint distribution of Y and θ, and the loss function is given by (1.3).

Thus, we have the following conclusion.

Theorem 1

Let θ̂_LB be defined by (2.2). If n ≥ p + 1, then

θ̂_LB = T − W[W + Cov(θ)]⁻¹(T − Eθ),    (2.3)

where W = E[Cov(T|θ)] = diag((X′X)⁻¹Eσ², 2Eσ⁴/(n − p)).

Proof

From the constraint E_{(Y,θ)}(θ̃ − θ) = 0, we know b = Eθ − B·E_{(Y,θ)}(T). Note that

E(Y,)(T)=E[E(T?)]=E(,2)=E.

Hence b = Eθ − B·Eθ, and accordingly we have

R(θ̃, θ) = E_{(Y,θ)} L(θ̃, θ)
= E_{(Y,θ)}[BT + Eθ − BEθ − θ]′D[BT + Eθ − BEθ − θ]
= E_{(Y,θ)}{tr(D[B(T − Eθ) − (θ − Eθ)][B(T − Eθ) − (θ − Eθ)]′)}
= tr(D E_{(Y,θ)}[B(T − Eθ) − (θ − Eθ)][B(T − Eθ) − (θ − Eθ)]′)
= tr(DB E_{(Y,θ)}[(T − Eθ)(T − Eθ)′]B′) − tr(D Cov(θ)B′) − tr(DB Cov(θ)) + tr(D Cov(θ)).    (2.4)

For given θ, using the independence between β̂ and σ̂², we have

E_{(Y,θ)}[(T − Eθ)(T − Eθ)′] = E[Cov(T|θ)] + Cov(E(T|θ)) = W + Cov(θ),    (2.5)

where W = diag((X′X)⁻¹Eσ², 2Eσ⁴/(n − p)).

Substituting (2.5) into (2.4) and setting ∂R(θ̃, θ)/∂B to zero, we have

DB[W + Cov(θ)] − D Cov(θ) = 0,    (2.6)

which yields

B = I_{p+1} − W[W + Cov(θ)]⁻¹.

Together with b = Eθ − B·Eθ, we arrive at the conclusion of Theorem 1.
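Theorem 1 translates directly into a few lines of code. The sketch below (with arbitrary made-up prior moments; p = 1, so θ is 2-dimensional) shows the shrinkage form of the LBE: each component of T is pulled toward the prior mean Eθ by the factor W[W + Cov(θ)]⁻¹.

```python
import numpy as np

def linear_bayes(T, W, mean_theta, cov_theta):
    """Theorem 1: theta_LB = T - W [W + Cov(theta)]^{-1} (T - E theta)."""
    M = np.linalg.inv(W + cov_theta)
    return T - W @ M @ (T - mean_theta)

# Hypothetical prior moments and sampling covariance (illustrative values only).
W = np.diag([0.5, 2.0])           # W = diag((X'X)^{-1} E sigma^2, 2 E sigma^4/(n-p))
cov_theta = np.diag([1.5, 2.0])   # prior covariance of theta = (beta, sigma^2)'
mean_theta = np.array([1.0, 4.0]) # prior mean E theta
T = np.array([3.0, 6.0])          # observed (beta_hat, sigma2_hat)

theta_lb = linear_bayes(T, W, mean_theta, cov_theta)
print(theta_lb)  # shrinks T toward E theta: approximately [2.5, 5.0]
```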

Remark 1

In the definition (2.2) of θ̂_LB, if we discard the so-called unbiasedness constraint E_{(Y,θ)}(θ̂_LB − θ) = 0, then by directly computing R(θ̃, θ) and setting ∂R(θ̃, θ)/∂B = 0 and ∂R(θ̃, θ)/∂b = 0, we obtain the same expression for the LBE θ̂_LB, which means that θ̂_LB satisfies the unbiasedness condition and also performs best among linear estimators in the sense of minimizing E_{(Y,θ)}L(BT + b, θ).

2.2. The superiorities of LBE

Note that

θ̂_U = (β̂′, σ̂²)′ = ( I_p  0 ; 0  1 )(β̂′, σ̂²)′ = T,    (2.7)

where θ̂_U denotes the UMVUE of θ = (β′, σ²)′.

Theorem 2

Let θ̂_LB and θ̂_U be given by (2.2) and (2.7), respectively. If n ≥ p + 1, then θ̂_LB is superior to θ̂_U in terms of the MSEM criterion, i.e., MSEM(θ̂_LB) ≤ MSEM(θ̂_U).

Proof

Since E_{(Y,θ)}(θ̂_LB − θ) = 0, we have

MSEM(θ̂_LB) = E_{(Y,θ)}[(θ̂_LB − θ)(θ̂_LB − θ)′] = E[Cov(θ̂_LB − θ|θ)] + Cov(E[θ̂_LB − θ|θ]).    (2.8)

Denote M = [W + Cov()]?1. Then by Theorem 1 we know

MSEM(θ̂_LB) = (I − WM)W(I − WM)′ + WM Cov(θ)(WM)′
= (I − WM)W(I − MW) + WM Cov(θ)MW
= W − 2WMW + WM[W + Cov(θ)]MW
= W − WMW.    (2.9)

However,

MSEM(θ̂_U) = E_{(Y,θ)}[(θ̂_U − θ)(θ̂_U − θ)′] = E{E[(θ̂_U − θ)(θ̂_U − θ)′|θ]} = E{E[(T − θ)(T − θ)′|θ]} = W.    (2.10)

Comparing (2.9) with (2.10), we have

MSEM(θ̂_LB) ≤ MSEM(θ̂_U).    (2.11)

The proof of Theorem 2 is completed.
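The ordering in Theorem 2 can be checked numerically: the gap MSEM(θ̂_U) − MSEM(θ̂_LB) = WMW must be nonnegative definite. A small sketch with arbitrary positive definite W and Cov(θ) (hypothetical 3 × 3 values, i.e. p = 2):

```python
import numpy as np

# Hypothetical positive definite W and Cov(theta) (illustrative values only).
W = np.array([[2.0, 0.3, 0.0],
              [0.3, 1.0, 0.0],
              [0.0, 0.0, 4.0]])
C = np.array([[1.0, 0.2, 0.0],
              [0.2, 2.0, 0.0],
              [0.0, 0.0, 0.5]])

M = np.linalg.inv(W + C)
msem_lb = W - W @ M @ W   # (2.9)
msem_u = W                # (2.10)

# The gap MSEM(theta_U) - MSEM(theta_LB) = W M W should be nonnegative definite.
gap = msem_u - msem_lb
eigvals = np.linalg.eigvalsh(gap)
print(eigvals.min() >= -1e-12)  # True
```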

Moreover, note that the MLE of θ, denoted by θ̂_ML, equals B₀T with

B₀ = ( I_p  0 ; 0  (n − p)/n ).    (2.12)

Thus,

MSEM(θ̂_ML) = B₀WB₀′ + (B₀ − I_{p+1})E(θθ′)(B₀ − I_{p+1})′ = ( X′X/Eσ²  0 ; 0  n²/{(2n + p² − 2p)Eσ⁴} )⁻¹.    (2.13)
Theorem 3

Let θ̂_LB and θ̂_ML be given by (2.2) and (2.12), respectively. If n ≥ p + 1, then θ̂_LB is superior to θ̂_ML in terms of the MSEM criterion, i.e., MSEM(θ̂_LB) ≤ MSEM(θ̂_ML).

Proof

We rewrite

MSEM(θ̂_LB) = W − WMW = [W⁻¹ + Cov⁻¹(θ)]⁻¹ = [( X′X/Eσ²  0 ; 0  n²/{(2n + p² − 2p)Eσ⁴} ) + ( 0  0 ; 0  c₀ ) + Cov⁻¹(θ)]⁻¹,    (2.14)

where c₀ = (np² − 4np − p³ + 2p²)/{2(2n + p² − 2p)Eσ⁴}.

Hence, in order to establish the MSEM superiority of θ̂_LB over θ̂_ML, it suffices to show that

( 0  0 ; 0  c₀ ) + Cov⁻¹(θ) ≥ 0,    (2.15)

for n ≥ p + 1.

Denote Cov⁻¹(θ) = S, where S = (S^{ij}) is a 2 × 2 partitioned matrix with

S¹¹ = Cov⁻¹(β) + Cov⁻¹(β)E[(σ² − Eσ²)(β − Eβ)]S²²E[(σ² − Eσ²)(β − Eβ)′]Cov⁻¹(β),
S¹² = −Cov⁻¹(β)E[(σ² − Eσ²)(β − Eβ)]S²²,
S²¹ = −S²²E[(σ² − Eσ²)(β − Eβ)′]Cov⁻¹(β),
S²² = [Var(σ²) − E[(σ² − Eσ²)(β − Eβ)′]Cov⁻¹(β)E[(σ² − Eσ²)(β − Eβ)]]⁻¹.

Thus, to prove (2.15), it is adequate to show that

|S¹¹| · (c₀ + S²² − S²¹(S¹¹)⁻¹S¹²) ≥ 0,    (2.16)

or equivalently to show that

c₀ + S²² − S²¹(S¹¹)⁻¹S¹² ≥ 0,    (2.17)

for n ≥ p + 1, where we use the fact that S¹¹ ≥ 0 and accordingly the determinant |S¹¹| is nonnegative.

Set Δ = Var(σ²) − E[(σ² − Eσ²)(β − Eβ)′]Cov⁻¹(β)E[(σ² − Eσ²)(β − Eβ)] and note that S¹¹ = Cov⁻¹(β)[Cov(β) + E[(σ² − Eσ²)(β − Eβ)](1/Δ)E[(σ² − Eσ²)(β − Eβ)′]]Cov⁻¹(β); hence we have

c₀ + S²² − S²¹(S¹¹)⁻¹S¹² = c₀ + 1/Δ − (1/Δ)E[(σ² − Eσ²)(β − Eβ)′][Cov(β) + E[(σ² − Eσ²)(β − Eβ)](1/Δ)E[(σ² − Eσ²)(β − Eβ)′]]⁻¹E[(σ² − Eσ²)(β − Eβ)](1/Δ).    (2.18)

Further, using [Σ + A(1/Δ)A′]⁻¹ = Σ⁻¹ − Σ⁻¹A[A′Σ⁻¹A + Δ]⁻¹A′Σ⁻¹, we have

c₀ + S²² − S²¹(S¹¹)⁻¹S¹² = c₀ + 1/Δ − a/Δ² + a²/{Δ²(Δ + a)},    (2.19)

where a = E[(σ² − Eσ²)(β − Eβ)′]Cov⁻¹(β)E[(σ² − Eσ²)(β − Eβ)].

Note that Δ = Var(σ²) − a; hence

1/Δ − a/Δ² + a²/{Δ²(Δ + a)} = 1/Var(σ²),

and accordingly

c₀ + S²² − S²¹(S¹¹)⁻¹S¹² = c₀ + 1/Var(σ²)
= {[(n + 4)p² − (4n + 4)p − p³ + 4n]Eσ⁴ + [4np − (n + 2)p² + p³](Eσ²)²}/{2(2n + p² − 2p)Var(σ²)Eσ⁴}
≥ [2p² + 4n − 4p](Eσ²)²/{2(2n + p² − 2p)Var(σ²)Eσ⁴}
> 0,    for n ≥ p + 1,    (2.20)

where we use the facts that Eσ⁴ ≥ (Eσ²)² and (n + 4)p² − (4n + 4)p − p³ + 4n ≥ 0 for n ≥ p + 1.

Hence, Theorem 3 has been proved.
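Two algebraic facts used in the proof can be verified mechanically: the split of the lower-right entry of W⁻¹, namely (n − p)/2 = n²/(2n + p² − 2p) + (np² − 4np − p³ + 2p²)/{2(2n + p² − 2p)} (which defines c₀ in (2.14), after factoring out 1/Eσ⁴), and the inequality (n + 4)p² − (4n + 4)p − p³ + 4n ≥ 0 for n ≥ p + 1 used in (2.20). A quick exact-arithmetic check over a grid of (n, p):

```python
from fractions import Fraction

def c0_numer(n, p):
    # numerator of c0 over the common denominator 2(2n + p^2 - 2p) E sigma^4
    return n * p**2 - 4 * n * p - p**3 + 2 * p**2

for p in range(1, 20):
    for n in range(p + 1, p + 40):
        # split of the lower-right entry of W^{-1} used in (2.14)
        lhs = Fraction(n - p, 2)
        rhs = (Fraction(n**2, 2 * n + p**2 - 2 * p)
               + Fraction(c0_numer(n, p), 2 * (2 * n + p**2 - 2 * p)))
        assert lhs == rhs
        # key inequality in (2.20); equals (p - 2)^2 at n = p + 1 and grows in n
        assert (n + 4) * p**2 - (4 * n + 4) * p - p**3 + 4 * n >= 0
print("checks passed")
```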

Remark 2

For the two-parameter exponential family given by

f(x; μ, σ) = σ⁻¹ exp{−(x − μ)/σ},

where x > μ. We assume that X_(1) ≤ X_(2) ≤ ⋯ ≤ X_(r) (2 ≤ r ≤ n) denote the type II censored sample. Define Q_i = [n − (i − 1)](X_(i) − X_(i−1)), where X_(0) = 0; then Q₁ and P = Σ_{i=2}^r Q_i are mutually independent, and (Q₁, P) is sufficient for the parameter vector (μ, σ). Setting T = (Q₁, P)′, the classical UMVUE and MLE for θ = (μ, σ)′ can be defined as follows:

θ̂_U = ( 1/n  −1/{n(r − 1)} ; 0  1/(r − 1) )T,    θ̂_ML = ( 1/n  0 ; 0  1/r )T.

Under the assumption that the prior G(θ) satisfies the condition E‖θ‖² < ∞, we can obtain the expression of the LBE θ̂_LB for the parameter vector θ = (μ, σ)′ in this case and similarly establish its superiorities over θ̂_U and θ̂_ML by virtue of the MSEM criterion. Interested readers are referred to Wang and Singh (2014) for more details.
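As a small worked example of the estimators in this remark (the censored sample below is made up for illustration):

```python
# Hypothetical type II censored sample: r = 4 observed out of n = 5,
# ordered observed values X_(1) <= ... <= X_(r).
x = [1.0, 1.5, 2.5, 4.0]
n, r = 5, len(x)

# Spacings Q_i = [n - (i-1)](X_(i) - X_(i-1)), with X_(0) = 0 (1-based i).
q = [(n - i) * (x[i] - (x[i - 1] if i > 0 else 0.0)) for i in range(r)]
Q1 = q[0]
P = sum(q[1:])

mu_u = Q1 / n - P / (n * (r - 1))   # UMVUE of mu
sigma_u = P / (r - 1)               # UMVUE of sigma
mu_ml = Q1 / n                      # MLE of mu (= X_(1))
sigma_ml = P / r                    # MLE of sigma
print(mu_u, sigma_u, mu_ml, sigma_ml)
```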

Remark 3

Let X₁, X₂, . . . , Xₙ be independently drawn from the uniform distribution U(θ₁, θ₂) with density f(x; θ₁, θ₂) = (θ₂ − θ₁)⁻¹, where θ₁ < x < θ₂. Note that X_(1) = min_{1≤i≤n} X_i and X_(n) = max_{1≤i≤n} X_i are sufficient and complete statistics; hence, setting T = (X_(1), X_(n))′, we obtain the classical UMVUE and MLE for θ = (θ₁, θ₂)′ in this case:

θ̂_U = ( n/(n − 1)  −1/(n − 1) ; −1/(n − 1)  n/(n − 1) )T,    θ̂_ML = ( 1  0 ; 0  1 )T.

Similarly, under the assumption that the prior G(θ) satisfies the condition E‖θ‖² < ∞, the expression of the LBE θ̂_LB for the parameter vector θ = (θ₁, θ₂)′ can be easily obtained, and its MSEM superiorities over θ̂_U and θ̂_ML can also be proved.
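A minimal numerical illustration of these estimators (the sample below is made up):

```python
# Hypothetical sample from U(theta1, theta2); estimators built from T = (X_(1), X_(n))'.
x = [0.2, 0.9, 0.4, 0.75, 0.5]
n = len(x)
x1, xn = min(x), max(x)

# UMVUE: theta_U = ( n/(n-1)  -1/(n-1) ; -1/(n-1)  n/(n-1) ) T
theta1_u = (n * x1 - xn) / (n - 1)
theta2_u = (n * xn - x1) / (n - 1)

# MLE: theta_ML = T, i.e. the sample extremes themselves.
theta_ml = (x1, xn)

print(theta1_u, theta2_u)  # approximately 0.025 and 1.075
```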

Remark 4

Let X₁, X₂, . . . , Xₙ be a random sample from the two-parameter inverse Gaussian distribution IG(θ₁, θ₂) with pdf

f(x; θ₁, θ₂) = {θ₂/(2πx³)}^{1/2} exp{−θ₂(x − θ₁)²/(2θ₁²x)},

where x > 0. It is easily shown that the statistics X̄ = (1/n)Σ_{i=1}^n X_i and S̃ = n/{Σ_{i=1}^n (1/X_i − 1/X̄)} are sufficient and complete. Tweedie (1957) shows that X̄ and S̃ are independent, with X̄ having an inverse Gaussian distribution with parameters θ₁ and nθ₂, and nθ₂/S̃ having a χ²_{n−1} distribution. Schwarz and Samanta (1991) give a proof of these facts using an inductive argument. Hence we obtain the classical UMVUE and MLE for the parameter θ = (θ₁, θ₂)′ as follows:

θ̂_U = ( 1  0 ; 0  (n − 3)/n )T,    θ̂_ML = ( 1  0 ; 0  1 )T,

where T = (X̄, S̃)′. Assuming that the prior G(θ) belongs to the prior family 𝒢 = {G(θ): E[θ₁² + θ₂²] < ∞, E[θ₁³θ₂⁻¹] < ∞}, we can obtain the expression of the LBE θ̂_LB for the parameter vector θ = (θ₁, θ₂)′ and prove that it prevails over the classical UMVUE and MLE under the MSEM criterion.
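A small worked example of T = (X̄, S̃)′ and the resulting estimators (the sample below is made up):

```python
# Hypothetical sample; UMVUE and MLE for IG(theta1, theta2) from T = (X_bar, S_tilde)'.
x = [1.0, 2.0, 4.0, 1.0, 2.0]
n = len(x)

x_bar = sum(x) / n
s_tilde = n / sum(1.0 / xi - 1.0 / x_bar for xi in x)

theta_u = (x_bar, (n - 3) / n * s_tilde)   # UMVUE: ( 1 0 ; 0 (n-3)/n ) T
theta_ml = (x_bar, s_tilde)                # MLE:   ( 1 0 ; 0 1 ) T
print(theta_u, theta_ml)
```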

2.3. An illustration example

To illustrate Theorem 2 and Theorem 3, we investigate the case of the two-dimensional normal linear model, i.e.,

Y ~ N(β₀1ₙ + β₁x, σ²Iₙ),

where we assume that (β₀, β₁)′ ~ N((1, 2)′, Cov(β₀, β₁)) with Cov(β₀, β₁) taking three alternative values, and σ² ~ U(a, b) with three different pairs (a, b). We also assume that (β₀, β₁) and σ² are uncorrelated, i.e., Cov(β₀, β₁, σ²) = diag(Cov(β₀, β₁), Var(σ²)), and x = (−4, −3, −2, 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16)′.

Define the percentages of improvement of θ̂_LB over θ̂_U and θ̂_ML, respectively, by

IP(θ̂_U) = tr(MSEM(θ̂_U) − MSEM(θ̂_LB))/tr(MSEM(θ̂_U))    and    IP(θ̂_ML) = tr(MSEM(θ̂_ML) − MSEM(θ̂_LB))/tr(MSEM(θ̂_ML)).

For different sample sizes n (= 5, 10, 20), the corresponding computation results for IP(θ̂_U) and IP(θ̂_ML) under the three different priors are presented in Table 1, where tr(Cov(β₀, β₁, σ²)) is used as an index of the variation of the prior information.
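The entries of Table 1 follow directly from (2.9), (2.10) and (2.13). The sketch below recomputes the n = 20 entries for the first prior, using the design points x listed above:

```python
import numpy as np

# Design: the 20 x-values given in the text; p = 2 (intercept and slope).
x = np.array([-4, -3, -2, 0, 1, 2, 3, 4, 5, 6, 7, 8,
              9, 10, 11, 12, 13, 14, 15, 16], dtype=float)
n, p = len(x), 2
X = np.column_stack([np.ones(n), x])
XtX = X.T @ X

# First prior: (beta0, beta1)' ~ N((1, 2)', ((1, 1/3), (1/3, 1))), sigma^2 ~ U(7, 9).
cov_beta = np.array([[1.0, 1 / 3], [1 / 3, 1.0]])
e_s2 = 8.0                    # E sigma^2
var_s2 = (9 - 7) ** 2 / 12    # Var sigma^2
e_s4 = var_s2 + e_s2 ** 2     # E sigma^4

def blockdiag(a, b):
    out = np.zeros((3, 3))
    out[:2, :2], out[2, 2] = a, b
    return out

W = blockdiag(np.linalg.inv(XtX) * e_s2, 2 * e_s4 / (n - p))
cov_theta = blockdiag(cov_beta, var_s2)

msem_lb = W - W @ np.linalg.inv(W + cov_theta) @ W                  # (2.9)
msem_u = W                                                          # (2.10)
msem_ml = blockdiag(np.linalg.inv(XtX) * e_s2,
                    (2 * n + p ** 2 - 2 * p) * e_s4 / n ** 2)       # (2.13)

ip_u = np.trace(msem_u - msem_lb) / np.trace(msem_u)
ip_ml = np.trace(msem_ml - msem_lb) / np.trace(msem_ml)
print(round(ip_u, 4), round(ip_ml, 4))  # 0.9066 0.8974
```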

As stated in Theorems 2 and 3, since the above priors belong to the family (2.1), both MSEM(θ̂_U) − MSEM(θ̂_LB) and MSEM(θ̂_ML) − MSEM(θ̂_LB) are always nonnegative definite. Table 1 shows three patterns. First, when the sample size n is fixed, both IP(θ̂_U) and IP(θ̂_ML) decrease as the variation of the prior increases (i.e., as tr(Cov(β₀, β₁, σ²)) becomes larger), as expected. Second, for the same prior, both IP(θ̂_U) and IP(θ̂_ML) decrease as n grows, which is natural since the sample then carries more information. Finally, IP(θ̂_U) appears larger than IP(θ̂_ML); the reason may be that MSEM(θ̂_U) ≥ MSEM(θ̂_ML) in our case.

3. Numerical Comparisons between LBE and UBE

For the model (1.2), note that under the loss L(θ̂, θ) and the prior G(θ), the usual Bayes estimator (UBE) of θ, say θ̂_UB, would be equal to E(θ|Y). In this section, for given priors G(θ), we present some numerical comparisons between the LBE θ̂_LB and the UBE θ̂_UB, the latter calculated by employing an MCMC sampling method.

Suppose p = 2 and let us consider the normal linear model

Y ~ N(β₀1ₙ + β₁x, σ²Iₙ).

Denote β = (β₀, β₁)′. We assume that σ² follows an inverse Gamma distribution with density

π(σ²; α₀, t) = {t^{α₀−1}/Γ(α₀ − 1)}(1/σ²)^{α₀} exp(−t/σ²),

and that, given σ², the conditional distribution of β is N₂(β̃₀, σ²Σ₀).

Note that the posterior density of (β, σ²) given Y is

f(β, σ²|y) ∝ (σ⁻²)^{α₀+1+n/2} exp{−(1/(2σ²))[2t + (n − 2)σ̂² + (β − β̃₀)′Σ₀⁻¹(β − β̃₀) + (β − β̂)′X′X(β − β̂)]},

where X denotes the n × 2 design matrix whose ith row is (1, x_i).

However, it is almost impossible to calculate θ̂_UB = (β̂_UB′, σ̂²_UB)′ analytically, so we have to obtain it numerically.

Note that the posterior conditional densities of β given σ² and of σ² given β are respectively proportional to

f₁(β|σ², y) ∝ exp{−(1/2)(β − μ̃)′Σ̃⁻¹(β − μ̃)},    f₂(σ²|β, y) ∝ σ^{−2(α₀+1+n/2)} exp{−c₁/(2σ²)},

where μ̃ = (Σ₀⁻¹ + X′X)⁻¹(Σ₀⁻¹β̃₀ + X′Xβ̂), Σ̃ = σ²(Σ₀⁻¹ + X′X)⁻¹ and c₁ = 2t + (n − 2)σ̂² + (β − β̃₀)′Σ₀⁻¹(β − β̃₀) + (β − β̂)′X′X(β − β̂).

The Gibbs sampler was originally developed by Geman and Geman (1984) as applied to image processing and the analysis of Gibbs distributions on a lattice. It was brought into mainstream statistics through the articles of Gelfand and Smith (1990) and Gelfand et al. (1990). The Gibbs sampler can also be shown to be a special case of the Metropolis-Hastings algorithm; see Gilks et al. (1996) and Robert and Casella (1999). In describing the Gibbs sampler, we follow the treatment in Casella and George (1992).

  • Step 1. Choose initial values of β and σ², and denote the values of β and σ² at the jth step by β_j and σ²_j, respectively.

  • Step 2. Generate β_{j+1} from f₁(β|σ²_j, y) and then σ²_{j+1} from f₂(σ²|β_{j+1}, y).

  • Step 3. Repeat Step 2 N times.

  • Step 4. Calculate the Bayes estimator of l(β, σ²) by (1/(N − m₀))Σ_{j=m₀+1}^N l(β_j, σ²_j), where l(β, σ²) denotes any function of β and σ² and m₀ is the burn-in period.
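The four steps above can be sketched as follows. The data and hyperparameters are made-up illustrative values (in particular, α₀ = 4 is chosen so that the prior moments of σ² are finite); this is a sketch of the scheme, not the exact configuration used for Table 2. Under these priors, f₂ is an inverse Gamma density with shape α₀ + n/2 and scale c₁/2, which we sample as the reciprocal of a Gamma draw.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data from Y ~ N(beta0 + beta1 x, sigma^2 I_n).
n = 50
x = np.linspace(-4.0, 16.0, n)
X = np.column_stack([np.ones(n), x])
beta_true, sigma_true = np.array([-2.0, 3.0]), 2.0
Y = X @ beta_true + sigma_true * rng.standard_normal(n)

# Assumed prior hyperparameters (illustration only).
alpha0, t = 4.0, 3.0
beta0_tilde = np.array([-2.0, 3.0])
Sigma0 = np.array([[1.0, 0.7], [0.7, 1.0]])

XtX = X.T @ X
beta_hat = np.linalg.solve(XtX, X.T @ Y)
A = np.linalg.inv(np.linalg.inv(Sigma0) + XtX)   # (Sigma0^{-1} + X'X)^{-1}
mu_tilde = A @ (np.linalg.solve(Sigma0, beta0_tilde) + XtX @ beta_hat)
sig2_hat = (Y @ Y - (X @ beta_hat) @ (X @ beta_hat)) / (n - 2)

N, m0 = 3000, 500          # chain length and burn-in period
beta_j, sig2_j = beta_hat.copy(), sig2_hat
draws = []
for j in range(N):
    # f1: beta | sigma^2, y ~ N(mu_tilde, sigma^2 A)
    beta_j = rng.multivariate_normal(mu_tilde, sig2_j * A)
    # f2: sigma^2 | beta, y ~ inverse Gamma(alpha0 + n/2, c1/2)
    db, dh = beta_j - beta0_tilde, beta_j - beta_hat
    c1 = 2 * t + (n - 2) * sig2_hat + db @ np.linalg.solve(Sigma0, db) + dh @ XtX @ dh
    sig2_j = (c1 / 2) / rng.gamma(alpha0 + n / 2)
    draws.append(np.append(beta_j, sig2_j))

theta_ub = np.mean(draws[m0:], axis=0)   # posterior means after burn-in
print(theta_ub)
```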

Note that under the above priors, it is readily seen that Eθ = (Eβ′, Eσ²)′ = (β̃₀′, t/(α₀ − 2))′ and Cov(θ) = diag({t/(α₀ − 2)}Σ₀, t²/{(α₀ − 2)²(α₀ − 3)}), since Cov(β) = E[Cov(β|σ²)] = Σ₀Eσ² and Corr(β, σ²) = 0.

In Table 2 we first calculate the values of θ̂_LB and the corresponding numerical results of θ̂_UB for different prior parameters, and then present ‖θ̂_LB − θ̂_UB‖ = {‖β̂_LB − β̂_UB‖² + (σ̂²_LB − σ̂²_UB)²}^{1/2}, which is defined as an index of the degree of approximation between θ̂_LB and θ̂_UB.

The above numerical comparisons indicate two trends: for the same prior, ‖θ̂_LB − θ̂_UB‖ tends to be smaller as the sample size gets larger; given the sample size, ‖θ̂_LB − θ̂_UB‖ increases as the prior variance becomes larger. In the process of simulation, we find that the value of ‖θ̂_LB − θ̂_UB‖ is dominated by (σ̂²_LB − σ̂²_UB)²; however, ‖β̂_LB − β̂_UB‖² is always small, which means the LBE β̂_LB is rather close to the UBE β̂_UB, while there can be a certain difference between the LBE σ̂²_LB and the UBE σ̂²_UB in our cases.

Remark 5

Two cases are considered for the two-parameter exponential family. In case (I) we assume that the parameters μ and σ have independent prior distributions, where μ follows an exponential distribution and σ has an inverted Gamma prior. In case (II) we suppose that, given σ, the conditional prior of μ is an inverted Gamma density and σ follows an inverted Gamma prior. For the above two cases, numerical simulations show that the values of ‖θ̂_LB − θ̂_UB‖ are small, which means that θ̂_LB works well as a linear approximation of θ̂_UB.

Remark 6

In the case of the uniform distribution U(θ₁, θ₂), numerical computations show that θ̂_LB works very well for both independent and non-independent priors. For example, for the single-parameter uniform distribution U(0, θ₂), we assume that the prior π(θ₂) has a finite second-order moment; mimicking the above discussions, the LBE for the parameter θ₂ is θ̂_{2,LB} = a₀X_(n) + b₀ with

a₀ = (n + 1)(n + 2)Var(θ₂)/{(n + 1)²Eθ₂² − n(n + 2)(Eθ₂)²}    and    b₀ = {(1 − a₀)n + 1}Eθ₂/(n + 1).

Specifically, let π(θ₂) = t₂^{t₁}θ₂^{−t₁−1}exp(−t₂/θ₂)/Γ(t₁); together with f(x_(n)|θ₂) and the squared loss, we know that the UBE θ̂_{2,UB} is

E(θ₂|x_(n)) = ∫_{x_(n)}^∞ θ₂^{−t₁−n} exp(−t₂/θ₂) dθ₂ / ∫_{x_(n)}^∞ θ₂^{−t₁−n−1} exp(−t₂/θ₂) dθ₂ = {t₂/(t₁ + n − 1)} · P(χ²(2(t₁ + n − 1)) ≤ 2t₂/x_(n))/P(χ²(2(t₁ + n)) ≤ 2t₂/x_(n)),

where we utilize the relationship between the inverse Gamma and the χ² distributions (Mao and Tang, 2012). For instance, let n = 5, x_(n) = 2, t₁ = 3 and t₂ = 8; simple computations show that a₀ = 1.1351, b₀ = 0.2163, P(χ²(14) ≤ 8) = 0.1107 and P(χ²(16) ≤ 8) = 0.0511. Hence, we have θ̂_{2,LB} = 2.4865 and θ̂_{2,UB} = 2.4758, which shows that the LBE is very close to the UBE.
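These numbers can be reproduced with a few lines of Python, using the closed-form χ² distribution function for even degrees of freedom, P(χ²(2k) ≤ x) = 1 − e^{−x/2}Σ_{j=0}^{k−1}(x/2)^j/j!. Tiny differences from the quoted b₀ and θ̂_{2,UB} arise from rounding in the intermediate values above.

```python
import math

# Remark 6 example: n = 5, x_(n) = 2, t1 = 3, t2 = 8.
n, xn, t1, t2 = 5, 2.0, 3, 8.0

# Moments of the inverse Gamma prior pi(theta2).
e_th2 = t2 / (t1 - 1)                      # E theta2 = 4
e_th2_sq = t2**2 / ((t1 - 1) * (t1 - 2))   # E theta2^2 = 32
var_th2 = e_th2_sq - e_th2**2              # Var theta2 = 16

a0 = (n + 1) * (n + 2) * var_th2 / ((n + 1)**2 * e_th2_sq - n * (n + 2) * e_th2**2)
b0 = ((1 - a0) * n + 1) * e_th2 / (n + 1)
lbe = a0 * xn + b0

def chi2_cdf_even(df, x):
    # P(chi2(2k) <= x) in closed form for even degrees of freedom df = 2k.
    k = df // 2
    return 1.0 - math.exp(-x / 2) * sum((x / 2)**j / math.factorial(j) for j in range(k))

ube = (t2 / (t1 + n - 1)
       * chi2_cdf_even(2 * (t1 + n - 1), 2 * t2 / xn)
       / chi2_cdf_even(2 * (t1 + n), 2 * t2 / xn))
print(round(a0, 4), round(b0, 4), round(lbe, 4), round(ube, 4))
```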

Remark 7

For the two-parameter inverse Gaussian distribution IG(θ₁, θ₂), numerical studies similarly show that the LBE is adequate.

Remark 8

In the above simulations, it should be noted that the problem of deciding when to stop the chain is an important issue and a topic of current research in MCMC methods. If the resulting sequence has not converged to the target distribution, then the estimators and inferences we get from it are suspect. Let ψ represent a characteristic of the target distribution (mean, moments, quantiles, etc.) in which we are interested. An obvious method to monitor convergence to the target distribution is to run multiple sequences of the chain and plot ψ versus the iteration number.

4. Conclusions and Remarks

This paper uses the normal linear model Y ~ N(Xβ, σ²Iₙ) as an example to investigate the application of the linear Bayes method, where we employ the linear Bayes method to simultaneously estimate the regression parameter β and the variance parameter σ², and propose a linear Bayes estimator for the parameter vector θ = (β′, σ²)′. The proposed linear Bayes estimator is shown to be superior to the classical estimators UMVUE and MLE, respectively, in terms of the mean squared error matrix criterion. Numerical simulations are presented to verify the validity of the linear Bayes estimator. The procedure used in this paper includes the normal distribution as a special case and can be extended easily to other useful distributions (such as the log-normal, the inverse Gaussian distribution and the two-parameter exponential family), which are frequently used parametric lifetime models in survival analysis and reliability theory. We also discuss and remark on the applications of the linear Bayes method to the two-parameter exponential family, the uniform distribution and the inverse Gaussian distribution. Compared with the usual Bayes estimator, we find that

  • (1) The proposed linear Bayes estimator is simple and easy to calculate, and is a good approximation in many situations; the linear Bayes method works especially well in the case of the uniform distribution.

  • (2) We can always define a linear Bayes estimator whenever there exists a sufficient statistic for the parametric model; the conclusions of Theorems 2 and 3 then still hold.

However, an advantage of the usual Bayes estimator over the linear Bayes estimator is that the former allows for noninformative (improper) priors. Of note is that the linear Bayes estimator may be an inadequate approximation in some situations, even for proper priors. Hence there is still scope for the linear Bayes method to be improved. For instance, in some cases a quadratic Bayes estimator would be a better alternative. For the case of the normal linear model, one can also consider adding other statistics into the definition of T; for example, we can replace T = (β̂′, σ̂²)′ by T₁ = (β̂′, ‖β̂‖², σ̂²)′ or T₂ = (β̂′, σ̂², σ̂⁴)′ to define a new linear Bayes estimator. We note that the loss function often plays an important role in Bayesian analysis; consequently, some interesting loss functions such as the balanced loss and the linex loss can be integrated with the linear Bayes method in future studies.

TABLES

Table 1

IP(θ̂_U) and IP(θ̂_ML) under different priors and sample sizes

Prior 1: (β₀, β₁)′ ~ N((1, 2)′, ( 1  1/3 ; 1/3  1 )), σ² ~ U(7, 9); tr(Cov(β₀, β₁, σ²)) = 7/3
  n = 5:  IP(θ̂_U) = 0.9712, IP(θ̂_ML) = 0.9542
  n = 10: IP(θ̂_U) = 0.9516, IP(θ̂_ML) = 0.9404
  n = 20: IP(θ̂_U) = 0.9066, IP(θ̂_ML) = 0.8974

Prior 2: (β₀, β₁)′ ~ N((1, 2)′, ( 6  2 ; 2  6 )), σ² ~ U(6, 10); tr(Cov(β₀, β₁, σ²)) = 40/3
  n = 5:  IP(θ̂_U) = 0.9234, IP(θ̂_ML) = 0.8779
  n = 10: IP(θ̂_U) = 0.8803, IP(θ̂_ML) = 0.8524
  n = 20: IP(θ̂_U) = 0.7706, IP(θ̂_ML) = 0.7480

Prior 3: (β₀, β₁)′ ~ N((1, 2)′, ( 18  6 ; 6  18 )), σ² ~ U(4, 12); tr(Cov(β₀, β₁, σ²)) = 124/3
  n = 5:  IP(θ̂_U) = 0.8458, IP(θ̂_ML) = 0.7537
  n = 10: IP(θ̂_U) = 0.7265, IP(θ̂_ML) = 0.6627
  n = 20: IP(θ̂_U) = 0.5369, IP(θ̂_ML) = 0.4961

Table 2

‖θ̂_LB − θ̂_UB‖ under different prior parameters and sample sizes

n = 20:
  α₀ = 2, t = 3, β̃₀ = (−2, 3)′, Σ₀ = ( 1  0.7 ; 0.7  1 ):    ‖θ̂_LB − θ̂_UB‖ = 1.3369
  α₀ = 2, t = 4, β̃₀ = (−2, 3)′, Σ₀ = ( 5  3.5 ; 3.5  5 ):    ‖θ̂_LB − θ̂_UB‖ = 1.4472
  α₀ = 2, t = 8, β̃₀ = (−2, 3)′, Σ₀ = ( 20  14 ; 14  20 ):    ‖θ̂_LB − θ̂_UB‖ = 1.8390

n = 50:
  α₀ = 2, t = 3, β̃₀ = (−2, 3)′, Σ₀ = ( 1  0.7 ; 0.7  1 ):    ‖θ̂_LB − θ̂_UB‖ = 1.0589
  α₀ = 2, t = 4, β̃₀ = (−2, 3)′, Σ₀ = ( 5  3.5 ; 3.5  5 ):    ‖θ̂_LB − θ̂_UB‖ = 1.1407
  α₀ = 2, t = 8, β̃₀ = (−2, 3)′, Σ₀ = ( 20  14 ; 14  20 ):    ‖θ̂_LB − θ̂_UB‖ = 1.4855

Here (x₁, x₂, . . . , x₂₀) is a subset of (x₁, x₂, . . . , x₅₀) = (−4, 3.4, 2.4, 0, 1, 2, 3, 4, 3.5, 0.6, 7, 8, 9, 10, 11, 12, −1.9, 14, 15, 16, 17, 18, 19, 21, 22, 23, 34, 35, −13, 17, 18.5, 19.9, 24, 28, 32, 33, 37, 39, −12, −16, −19, 44, 45, 23.4, 31.7, 33.5, 45.2, 60.7, −14.3, −17).


References
  1. Arnold, SF (1980). The Theory of Linear Models and Multivariate Analysis. New York: John Wiley & Sons.
  2. Busby, D, Farmer, CL, and Iske, A 2005. Uncertainty evaluation in reservoir forecasting by Bayes linear methodology., Algorithms for approximation, proceedings of the 5th international conference, Chester, pp.187-196.
  3. Casella, G, and George, EI (1992). An introduction to Gibbs Sampling. The American Statistician. 46, 167-174.
  4. Gelfand, AE, and Smith, AFM (1990). Sampling-based approaches to calculating marginal densities. Journal of the American Statistical Association. 85, 398-409.
  5. Gelfand, AE, Hills, SE, Racine-Poon, A, and Smith, AFM (1990). Illustration of Bayesian inference in normal data models using Gibbs sampling. Journal of the American Statistical Association. 85, 972-985.
  6. Geman, S, and Geman, D (1984). Stochastic relaxation, Gibbs distributions and the Bayesian restoration of images. IEEE Transactions PAMI. 6, 721-741.
  7. Gilks, WR, Richardson, S, and Spiegelhalter, DJ (1996). Markov Chain Monte Carlo in Practice. London: Chapman and Hall.
  8. Goldstein, M (1983). General variance modifications for linear Bayes estimators. Journal of the American Statistical Association. 78, 616-618.
  9. Hartigan, JA (1969). Linear Bayesian methods. Journal of the Royal Statistical Society, Series B. 31, 440-454.
  10. Heiligers, B (1993). Linear Bayes and minimax estimation in linear models with partially restricted parameter space. Journal of Statistical Planning and Inference. 36, 175-183.
  11. Hoffmann, K (1996). A subclass of Bayes linear estimators that are minimax. Acta Applicandae Mathematicae. 43, 87-95.
  12. Lamotte, LR (1978). Bayes linear estimators. Technometrics. 3, 281-290.
  13. Lindley, DV (1980). Approximate Bayesian methods. Trabajos de Estadistica. 21, 223-237.
  14. Mao, SS, and Tang, YC (2012). Bayesian Statistics. Beijing: China Statistics Press.
  15. Martinez, WL, and Martinez, AR (2007). Computational Statistics Handbook with MATLAB. New York: Chapman & Hall/CRC.
  16. Pensky, M, and Ni, P (2000). Extended linear empirical Bayes estimation. Communications in Statistics - Theory and Methods. 29, 579-592.
  17. Rao, CR (1973). Linear Statistical Inference and Its Applications. New York: John Wiley & Sons.
  18. Robert, CP, and Casella, G (1999). Monte Carlo Statistical Methods. New York: Springer-Verlag.
  19. Samaniego, FJ, and Vestrup, E (1999). On improving standard estimators via linear empirical Bayes methods. Statistics & Probability Letters. 44, 309-318.
  20. Schwarz, CJ, and Samanta, M (1991). An inductive proof of the sampling distributions for the MLEs of the parameters in an inverse Gaussian distributions. The American Statistician. 45, 223-235.
  21. Tierney, L, and Kadane, JB (1986). Accurate approximations for posterior moments and marginal densities. Journal of the American Statistical Association. 81, 82-86.
  22. Tweedie, MCK (1957). Statistical properties of inverse Gaussian distributions. The Annals of Mathematical Statistics. 28, 362-377.
  23. Wang, LC, and Singh, RS (2014). Linear Bayes estimator for the two-parameter exponential family under type II censoring. Computational Statistics and Data Analysis. 71, 633-642.
  24. Wei, LS, and Zhang, WP (2007). The superiorities of Bayes linear minimum risk estimation in linear model. Communications in Statistics-Theory and Methods. 36, 917-926.
  25. Zhang, WP, and Wei, LS (2005). On Bayes linear unbiased estimation of estimable functions for the singular linear model. Science in China Series A Mathematics. 7, 898-903.
  26. Zhang, WP, Wei, LS, and Chen, Y (2011). The superiorities of Bayes linear unbiased estimation in partitioned linear model. Journal of System Science and Complexity. 5, 945-954.
  27. Zhang, WP, Wei, LS, and Chen, Y (2012). The superiorities of Bayes linear unbiased estimation in multivariate linear models. Acta Mathematicae Applicatae Sinica. , 383-394.