The validity of an experiment depends on an appropriate experimental design, which can be obtained using the theory of optimum experimental design. A convenient framework for finding such a design is to select the settings of the control variables so as to minimize the variance of the parameter estimator. In an increasing number of experiments, however, a linear model cannot adequately capture the fundamental features of the data.
Observational studies involve many situations in which the outcome of an experiment is a count. An appropriate model for such data is the Poisson regression model; see McCullagh and Nelder (1989). Unfortunately, the optimal design problem is difficult in such models, because the likelihood equations are usually nonlinear functions of the parameters. As a result, the maximum likelihood estimator (MLE) has no closed-form solution. Under such circumstances, numerical methods such as Fisher scoring are employed to obtain the MLE (Searle
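As an illustration of the scoring iteration just mentioned, the following sketch fits a Poisson regression (canonical log link) by Fisher scoring. The design matrix, data and tolerances here are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def fisher_scoring_poisson(X, y, tol=1e-8, max_iter=50):
    """Fit a Poisson regression with log link by Fisher scoring.

    X : (n, p) design matrix; y : (n,) observed counts.
    Returns the MLE of the coefficient vector beta.
    """
    beta = np.zeros(X.shape[1])
    for _ in range(max_iter):
        mu = np.exp(X @ beta)             # mean under the log link
        score = X.T @ (y - mu)            # score function U(beta)
        fisher = X.T @ (mu[:, None] * X)  # Fisher information X' W X, W = diag(mu)
        step = np.linalg.solve(fisher, score)
        beta = beta + step
        if np.max(np.abs(step)) < tol:    # stop when the Newton step is tiny
            break
    return beta
```

At convergence the score is (numerically) zero, which is the defining property of the MLE that the asymptotic expansions in this paper start from.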
For non-linear models, the small-sample differential-geometric approach based on the least squares estimator has been shown to improve the conclusions of an experiment (Pronzato and Pazman, 2013). Russell
In the present work, we discuss the higher-order asymptotic expansion of the bias of the MLE for the Poisson regression model. To obtain accurate results in small-sample experiments, it is preferable to consider the variance and the bias together when constructing optimal designs (Box and Draper, 1959). To this end, we first introduce the Poisson regression model and review the asymptotic expansion of its MLE. We then discuss the bias of the MLE for the Poisson regression model and obtain the MSE using the second-order expansion of the MLE. In Section 3, we define the DM-optimality criterion and study optimal designs for special cases of the Poisson regression model based on minimization of the MSE.
The main assumption in linear regression is that the response variable is normally distributed. In numerous settings, however, the outcomes have a non-normal distribution while the random response still belongs to the exponential family. Nelder and Wedderburn (1972) introduced GLMs to describe such data. The Poisson regression model is a special case of GLMs used to model count data, where the response variable follows the Poisson distribution. Applications of this model in medicine, the social sciences and toxicology have been studied; for example, see Coxe
Suppose that
where
This model has been widely studied in literature, for example see McCullagh and Nelder (1989) and Cameron and Trivedi (2013).
For the Poisson regression model, the log likelihood can be written as
and
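The display above is not reproduced in this extract; in generic notation, and assuming the canonical log link $\mu_i=\exp(x_i^{\top}\beta)$, the Poisson log-likelihood and score take the standard forms:

```latex
\ell(\beta) = \sum_{i=1}^{n}\left\{ y_i\, x_i^{\top}\beta - \exp\!\left(x_i^{\top}\beta\right) - \log(y_i!) \right\},
\qquad
U(\beta) = \frac{\partial \ell}{\partial \beta} = \sum_{i=1}^{n}\left( y_i - \mu_i \right) x_i .
```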
The score function used to estimate the Poisson regression parameters is completely non-linear; therefore, the MLE of the parameters cannot be obtained in closed form. This causes difficulty for inference based on maximum likelihood estimation. The common approach is to use an asymptotic approximation of the score function, the derivative of the log-likelihood function, which indicates how sensitive the likelihood function is with respect to the parameters.
To denote the different orders of derivatives of the log-likelihood function, we use the following notation.
We also denote the expectation of the above expressions by
The Taylor expansion of the
Then Equation (2.1) is inverted to obtain an asymptotic expansion for
where,
The bias of
where,
It can be obtained from Equation (2.2) that the above bias, called the second order bias, is of order
To illustrate the influence of the sample size on the bias of
Consider a simple Poisson regression model, where the response mean is described in terms of an explanatory variable
Furthermore, let
In each replication, we fit the simple Poisson model and use the simulated data and the explanatory variables to compute the average bias of the parameter estimates for each combination of (
where
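The simulation procedure described above can be sketched as follows. The true parameter values, design points and replication count here are illustrative assumptions rather than the paper's exact settings.

```python
import numpy as np

def fit_poisson(X, y, tol=1e-8, max_iter=50):
    """Poisson-regression MLE (log link) via Fisher scoring."""
    beta = np.zeros(X.shape[1])
    for _ in range(max_iter):
        m = np.exp(X @ beta)
        step = np.linalg.solve(X.T @ (m[:, None] * X), X.T @ (y - m))
        beta += step
        if np.max(np.abs(step)) < tol:
            break
    return beta

def simulated_bias(beta_true, x, n_rep=2000, seed=0):
    """Monte Carlo estimate of the bias of the MLE in a simple Poisson model.

    beta_true : true (intercept, slope); x : 1-D array of design points.
    Returns the average of (estimate - true value) over n_rep replications.
    """
    rng = np.random.default_rng(seed)
    X = np.column_stack([np.ones_like(x), x])
    mu = np.exp(X @ beta_true)
    estimates = []
    for _ in range(n_rep):
        y = rng.poisson(mu)          # simulate one replication
        estimates.append(fit_poisson(X, y))
    return np.mean(estimates, axis=0) - beta_true
```

Averaging the estimation error over many replications is exactly the "average bias" computed in the study, here for one combination of sample size and parameter values.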
Figure 1(a) reports the absolute value of bias(
Figure 1(b) shows the absolute value of the bias of
As a general conclusion, the simulation results indicate that the absolute value of the bias of the MLE of the parameters is a decreasing function of the distribution means. They also show that the bias of the MLE is not zero in small-sample experiments, whereas the first-order bias is equal to zero. Therefore, it is useful to compare the simulated bias with the second-order bias expressed in Equation (2.3). Note that the true values of the parameters are
Table 1 shows that the second-order bias is closer to the simulated bias and is a better approximation for computing the bias. Therefore, we use Equation (2.2) for inference about the MLE.
The MSE is a useful tool for measuring the difference between an estimator and the true value of the parameters. The MSE expresses the quality of an estimator,
where MSE(
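In generic notation (the hatted symbol denoting the MLE; the paper's own symbols are not reproduced in this extract), the MSE combines variance and bias through the standard decomposition:

```latex
\operatorname{MSE}(\hat{\beta})
= \operatorname{E}\!\left[(\hat{\beta}-\beta)(\hat{\beta}-\beta)^{\top}\right]
= \operatorname{Var}(\hat{\beta}) + \operatorname{bias}(\hat{\beta})\operatorname{bias}(\hat{\beta})^{\top}.
```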
Note that if we apply only the first-order Taylor expansion of the score function, Equation (2.1), or ignore the second term on the right-hand side of Equation (2.2), then the bias of
In the following theorem, we obtain MSE(
The proof of Theorem 1 is given in Appendix A.
Khuri
Optimal design aims to provide the best estimation of the parameters based on a specific design criterion. Most optimality criteria are based on the variance of the parameter estimator. For the Poisson regression model, Wang
This criterion is different from the D-optimality criterion: the DM-optimality criterion is based on the determinant of the MSE, whereas the D-optimality criterion is based on the determinant of the Fisher information matrix.
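The contrast can be summarized schematically; the symbols $\xi$ for a design and $M(\xi,\beta)$ for the Fisher information matrix are our notation, not necessarily the paper's:

```latex
\xi^{*}_{D} = \arg\max_{\xi}\, \det M(\xi,\beta),
\qquad
\xi^{*}_{DM} = \arg\min_{\xi}\, \det \operatorname{MSE}\!\left(\hat{\beta};\,\xi\right).
```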
There are two essential problems: First, MSE(
For any
Classical optimization methods are often only local: they yield different results for different starting points. Therefore, in this paper we apply a hybrid method that combines two optimization algorithms to obtain the global optimal point.
First, a genetic algorithm (GA) is used to find a point near the global optimum (Chen and Ye, 2009). The GA is a global optimization algorithm based on the principles of natural selection and genetics. The process starts with a population of individuals that serves as the first generation for solving the optimization problem. Selection, crossover and mutation operations are then applied to produce the next generation, and this is repeated until a sufficiently large number of generations have passed without any improvement in the objective value. A second optimization step is needed because the GA, despite usually reaching a near-global optimum, does not guarantee the exact global optimum.
Second, a local optimization algorithm is employed to reach the global optimal point, using the point obtained by the GA as the starting point. We use the sequential quadratic programming (SQP) method implemented in the “fmincon” function of MATLAB (Fletcher and Powell, 1963).
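A comparable two-stage search can be sketched in Python with SciPy, substituting differential evolution for the GA and SciPy's SLSQP solver (an SQP implementation) for MATLAB's fmincon; the multimodal test function below is illustrative.

```python
import numpy as np
from scipy.optimize import differential_evolution, minimize

def hybrid_minimize(objective, bounds, seed=0):
    """Two-stage minimization in the spirit of the paper's GA + SQP hybrid.

    Stage 1: a global stochastic search (differential evolution, standing in
    for the genetic algorithm) locates a point near the global optimum.
    Stage 2: a gradient-based local method (SLSQP, an SQP implementation,
    standing in for fmincon) polishes that point to high precision.
    """
    rough = differential_evolution(objective, bounds, seed=seed, polish=False)
    fine = minimize(objective, rough.x, method="SLSQP", bounds=bounds)
    return fine
```

The global stage avoids being trapped in a local minimum, while the local stage supplies the fast, precise convergence that stochastic searches lack, mirroring the rationale given above.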
In this paper, we consider two special cases of the Poisson regression model. Equations (3.2) and (3.3) define the mean functions for the simple and quadratic models, respectively,
In most applications of this model, especially in toxicity studies, it is assumed that the design region is the non-negative subset of real numbers,
In this section, we consider a simple Poisson regression model with mean
For any
Substituting the above matrices into Equation (2.4), a straightforward calculation leads to the results.
We have assumed that
DM-optimal design optimizes the determinant of MSE(
Figure 2 shows the DM_{eff} of the designs obtained in Table 2.
The results show that DM-optimal designs are more efficient for small-sample experiments. DM-optimal designs are robust based on
The effect of an explanatory variable on the response variable is sometimes stronger than the simple model describes, and the quadratic model defined in (3.3) may then be suitable. Here we study DM-optimal designs for the quadratic model. According to the decreasing assumption on the mean response, we take negative values for
where
The DM_{eff} of the DM-optimal designs for the quadratic model is illustrated in Figure 3. It is observed that DM-optimal designs are more accurate for small-sample experiments.
The non-existence of an explicit form for the parameter estimators of non-linear regression models leads statisticians to apply asymptotic inference based on the likelihood function to obtain the optimal design for GLMs.
In this study, we consider the Poisson regression model, a special case of the GLMs. The results indicate that for small-sample experiments, the asymptotic properties of the parameter estimators, such as unbiasedness, may fail. Therefore, to obtain the optimal design we replace the variance with the MSE, which accounts for both variance and bias. In addition, we obtained a more precise estimation of parameter
An ongoing research activity is the investigation of Bayesian optimal designs instead of local optimal designs.
For Poisson regression model (
where
To obtain the expectation of the log-likelihood derivatives, we use a general formula given by Barndorff-Nielsen and Cox (1994, p. 146), as follows:
where
Then, we have
Moreover, we can find
Similarly,
We also get
The result is obtained by substituting the above three equations into (A.1) and a straightforward application of matrix algebra.
(a): |bias(
DM_{eff} for DM-optimal design in the simple model: (a)
DM_{eff} for DM-optimal design in the quadratic model. I:
The bias for the simple Poisson model
n | 10 | 16 | 20 | 30 | 50
---|---|---|---|---|---
Simulated bias | −0.036 | −0.023 | −0.021 | −0.001 | −0.009
Second-order bias | −0.037 | −0.023 | −0.018 | −0.012 | −0.007
Simulated bias | −1.481 | −0.536 | −0.291 | −0.072 | −0.025
Second-order bias | −0.118 | −0.073 | −0.059 | −0.039 | −0.024
DM-optimal design for the simple model,
8 | 5 | 0.1705 | 1 | 4 | 0.1669 | 1 | 4 | 0.1402 | 1 |
10 | 6 | 0.1607 | 1 | 5 | 0.1532 | 1 | 5 | 0.1328 | 1 |
15 | 8 | 0.1483 | 1 | 8 | 0.1388 | 1 | 8 | 0.1270 | 1 |
20 | 10 | 0.1396 | 1 | 10 | 0.1286 | 1 | 10 | 0.1249 | 1 |
30 | 15 | 0.1285 | 1 | 15 | 0.1250 | 1 | 15 | 0.1262 | 1 |
50 | 25 | 0.1249 | 1 | 25 | 0.1262 | 1 | 25 | 0.1290 | 1 |
DM-optimal design for the quadratic model,
8 | 4 | 2 | 0.0943 | 0.5416 | 1 | 3 | 2 | 0.0546 | 0.5079 | 1 |
10 | 4 | 3 | 0.0800 | 0.5303 | 1 | 4 | 3 | 0.0478 | 0.5017 | 1 |
15 | 6 | 4 | 0.0639 | 0.5163 | 1 | 5 | 5 | 0.0438 | 0.4983 | 1 |
20 | 8 | 6 | 0.0551 | 0.5088 | 1 | 7 | 6 | 0.0394 | 0.4934 | 1 |
30 | 10 | 10 | 0.0497 | 0.5044 | 1 | 10 | 10 | 0.0371 | 0.4911 | 1 |
50 | 17 | 16 | 0.0407 | 0.4948 | 1 | 16 | 17 | 0.0365 | 0.4907 | 1 |
8 | 4 | 2 | 0.0694 | 0.4571 | 1 | 3 | 2 | 0.0995 | 0.5760 | 1 |
10 | 4 | 3 | 0.0697 | 0.4576 | 1 | 4 | 3 | 0.0848 | 0.5652 | 1 |
15 | 6 | 4 | 0.0547 | 0.5163 | 1 | 5 | 5 | 0.0754 | 0.5560 | 1 |
20 | 8 | 6 | 0.0464 | 0.4337 | 1 | 7 | 6 | 0.0634 | 0.5488 | 1 |
30 | 10 | 10 | 0.0411 | 0.4281 | 1 | 10 | 10 | 0.0541 | 0.5409 | 1 |
50 | 17 | 16 | 0.0315 | 0.4155 | 1 | 16 | 17 | 0.0463 | 0.5336 | 1 |