A convenient approach for penalty parameter selection in robust lasso regression
Communications for Statistical Applications and Methods 2017;24:651-662
Published online November 30, 2017
© 2017 Korean Statistical Society.

Jongyoung Kim^a and Seokho Lee^{1,a}

^a Department of Statistics, Hankuk University of Foreign Studies, Korea
Correspondence to: ^1 Department of Statistics, Hankuk University of Foreign Studies, 50 Oedae-ro 54beon-gil, Mohyeon-myeon, Cheoin-gu, Yongin 17035, Korea. E-mail: lees@hufs.ac.kr
Received July 21, 2017; Revised September 20, 2017; Accepted October 14, 2017.

We propose an alternative procedure for selecting the penalty parameter in L1-penalized robust regression. The procedure is based on marginalizing a prior distribution over the penalty parameter, so the resulting objective function no longer contains the penalty parameter. In addition, the estimating algorithm automatically chooses a tentative penalty parameter from the previous estimate of the regression coefficients. The proposed approach thus bypasses cross validation and saves computing time. A variable-wise penalization variant performs best from the prediction and variable selection perspectives. Numerical studies using simulated data demonstrate the performance of our proposals, and the methods are applied to the Boston housing data. Through the simulation study and the real data application, we show that our proposals are competitive with, or better than, cross validation in prediction, variable selection, and computing time.

Keywords : adaptive lasso, cross validation, lasso, robust regression, variable selection
1. Introduction

Regularized regression is popularly used to incorporate various assumptions that a model should satisfy (Bishop, 2006; Hastie et al., 2001; Murphy, 2012). The type of regularization depends on the specific model assumption. For example, in functional regression the coefficient parameter is assumed to be a smooth function, while in sparse regression the coefficients of insignificant variables are expected to be zero. Regularization incorporates such smoothness or sparsity into the parameter estimate, typically under a penalized loss minimization framework. Consider a regression model y_i = β_0 + x_i^T β + ε_i with training data D = {(x_i, y_i) | x_i ∈ ℝ^p, y_i ∈ ℝ, i = 1, 2, …, n} and error variance var(ε_i) = σ². The objective function to be minimized is a penalized loss

L(β_0, β) = Σ_{i=1}^n ρ(r_i) + P(β; λ),      (1.1)

where r_i = (y_i − β_0 − x_i^T β)/σ. The squared loss ρ(u) = u² is used in ordinary regression. In the presence of outliers in the training data, robust loss functions, such as the Huber or bisquare loss, are often used to circumvent their harmful effects. The penalty function P(· ; ·) is chosen according to the purpose of regularization: a roughness penalty for a smooth function estimate and a sparsity-inducing penalty for a sparse estimate. Regularization and goodness-of-fit are balanced at an optimal penalty parameter λ, which is chosen for the best prediction. In the absence of test data, cross validation (CV) is commonly used to estimate the test mean squared error (MSE) in regression. CV is a generic model selection criterion that applies to many predictive models, even when the response has no concrete probabilistic ground. Despite its versatility, however, CV suffers from a heavy computational burden.

In this research, we propose an alternative to CV in robust regression with L1 penalization. The approach is based on marginalizing a prior distribution over the penalty parameter, so the resulting objective function no longer contains the penalty parameter. Its estimating equation, however, automatically produces a tentative penalty parameter, which we can use as the penalty parameter in the penalized regression estimating procedure. The idea generalizes straightforwardly to variable-wise penalization, which is often prohibitive through CV.

This paper is organized as follows. In Section 2, we introduce robust lasso regression and provide a Bayesian approach to marginalizing the penalty parameter, together with its estimating algorithm; variable-wise penalization is introduced in the same section. The performance of our proposals is demonstrated through simulation studies and the Boston housing data in Section 3. The paper ends with some remarks in Section 4.

2. Methodology

2.1. Robust lasso regression

Consider a linear model y_i = β_0 + x_i^T β + ε_i for i = 1, …, n. In the presence of outlying observations in the training dataset, a robust loss is used to reduce the outlier effect in regression (Maronna et al., 2006). When we impose regularization on the coefficient β ∈ ℝ^p, we add a penalty function on β to the empirical loss and minimize the penalized empirical loss (1.1) to find a robust coefficient estimate. Well-known robust loss functions are the Huber loss,

ρ_H(u; k) = u²  if |u| ≤ k,  and  ρ_H(u; k) = 2k|u| − k²  if |u| > k,

and Tukey’s bisquare loss,

ρ_B(u; k) = 1 − {1 − (u/k)²}³  if |u| ≤ k,  and  ρ_B(u; k) = 1  if |u| > k.

Both loss functions include an additional parameter k, which regulates the robustness and efficiency of the resulting coefficient estimate. Throughout this study, we use k = 1.345 for the Huber loss and k = 4.685 for the bisquare loss, giving 95% efficiency under normality (Maronna et al., 2006).
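As a concrete reference, the two loss functions and their associated IRLS weights w = ψ(u)/u can be sketched in Python as follows. This is an illustrative implementation, not code from the paper:

```python
import numpy as np

def huber_rho(u, k=1.345):
    """Huber loss: quadratic for |u| <= k, linear beyond."""
    u = np.asarray(u, dtype=float)
    return np.where(np.abs(u) <= k, u**2, 2 * k * np.abs(u) - k**2)

def bisquare_rho(u, k=4.685):
    """Tukey bisquare loss: bounded by 1 for |u| > k."""
    u = np.asarray(u, dtype=float)
    return np.where(np.abs(u) <= k, 1 - (1 - (u / k)**2)**3, 1.0)

def huber_weight(u, k=1.345):
    """IRLS weight w = psi(u)/u with psi = rho'; equals 2 inside [-k, k]."""
    au = np.maximum(np.abs(np.asarray(u, dtype=float)), 1e-12)
    return np.where(au <= k, 2.0, 2 * k / au)

def bisquare_weight(u, k=4.685):
    """IRLS weight for the bisquare loss; zero outside [-k, k]."""
    u = np.asarray(u, dtype=float)
    return np.where(np.abs(u) <= k, (6 / k**2) * (1 - (u / k)**2)**2, 0.0)
```

The bounded bisquare weight drops to zero for gross outliers, which is why the bisquare loss is more resistant than the Huber loss in the simulations of Section 3.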

As in traditional linear regression, we can apply regularization techniques in robust regression (El Ghaoui and Lebret, 1997; Owen, 2006). For variable selection as well as prediction enhancement, the L1 penalty P(β; λ) = λ‖β‖₁ = λ Σ_{j=1}^p |β_j| can be imposed; with the square loss ρ(u) = u², this becomes the lasso. Thus, robust lasso regression is carried out by minimizing the penalized objective function (1.1). A usual way of minimizing (1.1) is the Newton-Raphson algorithm for M-estimates in robust statistics. To look at this more closely, we take derivatives of (1.1) with respect to β_0 and β. The normal equations become

∂L/∂β_0 = −Σ_{i=1}^n w_i r_i / σ = 0,      ∂L/∂β = −Σ_{i=1}^n w_i r_i x_i / σ + λ ∂‖β‖₁/∂β = 0      (2.1)

with w_i = ψ(r_i)/r_i and ψ = ρ′. These normal equations are exactly the same as those of the weighted lasso problem

min_{β_0, β}  (1/2σ²) Σ_{i=1}^n w_i (y_i − β_0 − x_i^T β)² + λ‖β‖₁.      (2.2)
Thus, robust lasso regression (1.1), with a robust loss and L1 penalty, can be solved by iteratively optimizing the weighted lasso (2.2), where the weights w_i are updated at every iteration. In robust linear regression, the scale parameter σ should be estimated in a robust fashion. A common robust scale is the normalized median absolute deviation (MADN) of the residuals from a robust regression (Maronna et al., 2006), defined as MADN(x) = median(|x − median(x)|)/0.675. In this study, we initially fit an L1 regression estimate (least absolute deviation (LAD) estimate; Koenker, 2005) and obtain the σ estimate as the MADN of the residuals from the LAD fit.
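The MADN scale estimate is a one-liner; the sketch below (illustrative, not the authors' code) shows that a gross outlier barely moves it:

```python
import numpy as np

def madn(x):
    """Normalized median absolute deviation, a robust scale estimate;
    dividing by 0.675 (the standard normal third quartile) makes it
    consistent for the standard deviation under normality."""
    x = np.asarray(x, dtype=float)
    return np.median(np.abs(x - np.median(x))) / 0.675
```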

To choose an optimal regularization parameter λ, CV is a popular choice. In the CV procedure, the training data is split into several exclusive pieces; each piece serves in turn as a validation set while the remaining pieces are used to fit the model, and the fitted model is evaluated on the validation set by MSE. The CV score is defined as the average of these MSEs, and λ is chosen to minimize it. In the presence of outliers in the training data, an MSE-based CV score is not reliable because outliers may also fall in the validation set. Thus, a robust loss on the errors is used in the CV score computation instead of the squared errors of the traditional MSE. Generalized cross validation (GCV) is a convenient selection criterion that does not require the onerous splitting-and-fitting procedure of CV: after fitting the model to the whole training data, GCV is obtained from the hat matrix H_λ, where ŷ = H_λ y. However, GCV is not available in robust regression because the coefficient estimate is not a linear combination of the response variable in most robust regressions. Information criteria, including the Bayesian information criterion (BIC), are also frequently used for model selection. But BIC, like most other information criteria, depends on a data model, because the data distribution specifies the likelihood that forms part of the criterion; in robust regression it is difficult to assume a data distribution due to the existence of outliers.
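A robust K-fold CV loop of the kind described above can be sketched as follows; `fit_predict` is a hypothetical user-supplied fitting routine, and the Huber loss is used as the default validation score:

```python
import numpy as np

def robust_cv_score(X, y, fit_predict, lambdas, K=5, rho=None, seed=0):
    """K-fold CV in which validation errors are scored with a robust
    loss instead of squared error. `fit_predict(Xtr, ytr, Xval, lam)` is
    a hypothetical user-supplied routine returning predictions on Xval."""
    if rho is None:                        # default: Huber loss, k = 1.345
        k = 1.345
        rho = lambda u: np.where(np.abs(u) <= k, u**2,
                                 2 * k * np.abs(u) - k**2)
    n = len(y)
    folds = np.random.default_rng(seed).permutation(n) % K   # fold labels
    scores = np.zeros(len(lambdas))
    for m, lam in enumerate(lambdas):
        for j in range(K):
            tr, val = folds != j, folds == j
            pred = fit_predict(X[tr], y[tr], X[val], lam)
            scores[m] += np.mean(rho(y[val] - pred))
        scores[m] /= K
    return lambdas[int(np.argmin(scores))], scores
```

The inner loop is run once per grid point, which is exactly the computational burden the marginalization approach of Section 2.2 avoids.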

2.2. Bayesian approach by marginalizing regularization parameter

In this section, we introduce a convenient way to choose the penalty parameter under a Bayesian framework. The basic idea is to use the marginal distribution of β obtained by marginalizing out the penalty parameter. Buntine and Weigend (1991) first introduced this idea in penalized regression and logistic regression problems; here, we use the same idea in robust lasso regression. The penalty term P(β; λ) in (1.1) can be regarded as the negative logarithm of a prior distribution for β. For L1 penalization, the prior over β is a joint distribution of independent Laplace random variables

f(β | λ) = Π_{j=1}^p (λ/2) exp(−λ|β_j|) = (λ/2)^p exp(−λ‖β‖₁).
The prior of β depends on λ as a scale parameter. To remove the dependency on λ, Buntine and Weigend (1991) impose an improper Jeffreys prior on the hyperparameter λ and marginalize it out. Using the Jeffreys prior f(λ) ∝ 1/λ, the marginal prior on β becomes

f(β) = ∫_0^∞ f(β | λ) f(λ) dλ ∝ ∫_0^∞ λ^{p−1} exp(−λ‖β‖₁) dλ ∝ 1/‖β‖₁^p.
By replacing the original penalty induced from −log f(β|λ) in (1.1) with the new penalty from −log f(β), we obtain the penalized loss

L̃(β_0, β) = Σ_{i=1}^n ρ(r_i) + p log ‖β‖₁.      (2.3)
Equation (2.3) is a new objective function without λ. For the estimation of β, the normal equations from (2.3) become

∂L̃/∂β_0 = −Σ_{i=1}^n w_i r_i / σ = 0,      ∂L̃/∂β = −Σ_{i=1}^n w_i r_i x_i / σ + (p/‖β‖₁) ∂‖β‖₁/∂β = 0.      (2.4)

The above normal equations cannot be solved analytically. Note that, comparing (2.4) with (2.1) from the original objective function, the penalty parameter λ in (2.1) is replaced by the term p/‖β‖₁ in (2.4). Thus, Buntine and Weigend (1991) suggest an iterative fitting procedure for penalized least squares problems, where (β_0, β) is obtained as a weighted penalized least squares solution with penalty parameter λ = p/‖β°‖₁ = p/Σ_{j=1}^p |β_j°| computed from the previous solution β° = (β_1°, …, β_p°)^T. We can use the same idea in the robust lasso problem by iteratively solving (2.2). The updating formulas for (β_0, β) in (2.2) are obtained as a weighted lasso problem using weights w_i = ψ(r_i°)/r_i° and penalty λ = p/Σ_{j=1}^p |β_j°|, where r_i° = (y_i − β̂_0° − x_i^T β°)/σ̂.

Through simulation studies, we observed that this approach produces reliable prediction performance but is not satisfactory in variable selection. To enhance variable selection, we consider a different penalty function for β. Instead of a common penalty parameter λ for all β_j, we use a separate penalty parameter for each variable, similar to the adaptive lasso (Zou, 2006). Thus, the L1 penalty λ Σ_{j=1}^p |β_j| in (1.1) is replaced by P(β; λ) = Σ_{j=1}^p λ_j |β_j| with λ = (λ_1, …, λ_p)^T. We call the former "robust lasso" and the latter "robust adaptive lasso." This change poses a computational challenge for CV because the p penalty parameters would have to be chosen by a p-dimensional grid search, which is infeasible in practice even for a moderate number of variables. However, the same Bayesian approach is easily implemented. With variable-wise penalty parameters, the objective function becomes
L(β_0, β) = Σ_{i=1}^n ρ(r_i) + Σ_{j=1}^p λ_j |β_j|.      (2.5)
The coefficient parameters β_j (j = 1, 2, …, p) are assumed to follow a Laplace distribution f(β_j | λ_j) = (λ_j/2) exp(−λ_j|β_j|) conditional on λ_j. Similar to the robust lasso above, the marginal distribution of β_j under the Jeffreys prior f(λ_j) ∝ 1/λ_j is obtained as
f(β_j) = ∫_0^∞ (λ_j/2) exp(−λ_j|β_j|) (1/λ_j) dλ_j = 1/(2|β_j|) ∝ 1/|β_j|.
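This marginalization can be checked numerically: the integral of (λ/2) exp(−λ|b|) (1/λ) over λ ∈ (0, ∞) equals 1/(2|b|), proportional to 1/|b|. A quick sketch using plain trapezoidal quadrature, to stay dependency-free:

```python
import numpy as np

def marginal_prior(b, upper=200.0, n=400000):
    """Trapezoidal approximation of the marginal prior
    f(b) = int_0^inf (lam/2) exp(-lam|b|) (1/lam) d lam,
    truncated where lam*|b| = `upper` so the tail is negligible."""
    lam = np.linspace(0.0, upper / abs(b), n + 1)
    f = 0.5 * np.exp(-lam * abs(b))
    h = lam[1] - lam[0]
    return h * (f.sum() - 0.5 * (f[0] + f[-1]))
```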
The penalty function on β becomes −log f(β) = −log{Π_{j=1}^p f(β_j)} = Σ_{j=1}^p log|β_j| + constant. Thus, after marginalizing the λ_j out, the objective function (2.5) is modified as

L̃(β_0, β) = Σ_{i=1}^n ρ(r_i) + Σ_{j=1}^p log|β_j|.
The normal equations of this modified objective are

∂L̃/∂β_0 = −Σ_{i=1}^n w_i r_i / σ = 0,      ∂L̃/∂β_j = −Σ_{i=1}^n w_i r_i x_ij / σ + (1/|β_j|) ∂|β_j|/∂β_j = 0,

for j = 1, 2, …, p. These normal equations are exactly the same as those from (2.5) if λ_j is set to 1/|β_j|. We can therefore again employ an iterative estimation scheme. We summarize the whole procedure in the conceptual algorithm below, where a coordinate descent algorithm is used to estimate β.

Robust lasso and robust adaptive lasso by marginalizing penalty parameters

  • (Initialization) Obtain initial parameters β̂_0°, β̂° from LAD or LAD-ridge. Set σ̂ = MADN(r_i), where r_i are the residuals from the LAD fit.

  • Repeat until convergence is met.

    • Set r_i = (y_i − β̂_0° − x_i^T β̂°)/σ̂ and w_i = ψ(r_i)/r_i for i = 1, …, n. For robust lasso, set λ = |I| / Σ_{j∈I} |β̂_j°|, where I is the index set of the nonzero elements of β̂° and |I| is the number of nonzeros in β̂°. For robust adaptive lasso, set λ_j = 1/|β̂_j°|; if β̂_j° = 0, then λ_j = ∞.

    • Compute ȳ_w = Σ_{i=1}^n w_i y_i / Σ_{i=1}^n w_i and x̄_jw = Σ_{i=1}^n w_i x_ij / Σ_{i=1}^n w_i for all j = 1, 2, …, p. Then set ỹ_i = y_i − ȳ_w and x̃_ij = x_ij − x̄_jw.

    • For j = 1, 2, …, p, compute y_i^{(−j)} and update β̂_j by

      y_i^{(−j)} = ỹ_i − Σ_{l≠j} x̃_il β̂_l°,      β̂_j = soft( Σ_{i=1}^n w_i x̃_ij y_i^{(−j)} / Σ_{i=1}^n w_i x̃_ij² ,  σ̂² λ_j* / Σ_{i=1}^n w_i x̃_ij² ),

      where soft(x, t) = sign(x)(|x| − t)_+ and u_+ = max(u, 0). Here, λ_j* = λ for robust lasso and λ_j* = λ_j for robust adaptive lasso.

    • Update the intercept by β̂_0 = ȳ_w − Σ_{j=1}^p x̄_jw β̂_j.

    • If convergence is not achieved, update β̂_0° ← β̂_0 and β̂° ← β̂.
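The conceptual algorithm above can be sketched end to end as follows. This is an illustrative re-implementation, not the authors' code: the initialization uses a ridge-regularized least squares fit in place of the LAD fit, and the guards against zero denominators are our own additions.

```python
import numpy as np

def robust_adaptive_lasso(X, y, loss="bisquare", adaptive=True,
                          max_iter=100, tol=1e-6):
    """Sketch of the marginalized-penalty algorithm of Section 2.2."""
    n, p = X.shape
    k = 4.685 if loss == "bisquare" else 1.345

    def weight(u):                         # w = psi(u)/u for the chosen loss
        au = np.maximum(np.abs(u), 1e-12)
        if loss == "bisquare":
            return np.where(au <= k, (6.0 / k**2) * (1 - (u / k)**2)**2, 0.0)
        return np.where(au <= k, 2.0, 2.0 * k / au)

    def soft(x, t):                        # soft-thresholding operator
        return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

    b0 = np.median(y)                      # crude start (the paper uses LAD)
    beta = np.linalg.solve(X.T @ X + 0.1 * np.eye(p), X.T @ (y - b0))
    r = y - b0 - X @ beta
    sigma = np.median(np.abs(r - np.median(r))) / 0.675    # MADN scale

    for _ in range(max_iter):
        beta_old = beta.copy()
        w = weight((y - b0 - X @ beta) / sigma)
        nz = np.abs(beta) > 0
        if adaptive:                       # lambda_j = 1 / |beta_j|
            lam = np.where(nz, 1.0 / np.maximum(np.abs(beta), 1e-12), np.inf)
        else:                              # common lambda = |I| / sum_I |beta_j|
            lam = np.full(p, nz.sum() / max(np.abs(beta[nz]).sum(), 1e-12))
        sw = max(w.sum(), 1e-12)           # weighted centering
        yb, xb = (w @ y) / sw, (w @ X) / sw
        yt, Xt = y - yb, X - xb
        for j in range(p):                 # coordinate descent updates
            r_j = yt - Xt @ beta + Xt[:, j] * beta[j]      # partial residual
            d = max(float(w @ (Xt[:, j] ** 2)), 1e-12)
            beta[j] = 0.0 if np.isinf(lam[j]) else \
                soft(float((w * Xt[:, j]) @ r_j) / d, sigma**2 * lam[j] / d)
        b0 = yb - xb @ beta
        if np.max(np.abs(beta - beta_old)) < tol:
            break
    return b0, beta
```

Note that no penalty parameter is supplied: λ (or the λ_j) is recomputed inside the loop from the previous coefficient estimate, which is the point of the marginalization.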

2.3. Connection to existing methods

The robust adaptive lasso described in the previous section allows a variable-wise penalty parameter for each coefficient. This is reminiscent of the adaptive lasso, whose penalty term is λ Σ_{j=1}^p η_j |β_j| (Zou, 2006). Zou (2006) suggests η_j = 1/|β̃_j|^γ, where β̃_j is a consistent estimator (the least squares solution is suggested) and γ is another tuning parameter. If λ_j = λη_j and γ = 1, then λ_j = λ/|β̃_j|, which resembles the proposed approach. However, our approach keeps updating β° during the iterations, while the adaptive lasso of Zou (2006) fixes it at the least squares solution, or the ridge solution if n < p. Zou (2006) suggests CV for the choice of (λ, γ), which is computationally burdensome due to the 2-dimensional grid search. Automatic relevance determination (ARD) (MacKay, 1995; Tipping, 2001) is another approach that considers variable-wise penalization. Although ARD was originally developed for sparse kernel regression, it is easily modified for a regular regression problem. ARD assumes a Gaussian prior for β, rather than the Laplace prior of the adaptive lasso, and achieves variable selection by shrinking the prior variance of negligible variables. Its estimation can also be cast into the penalized loss framework. In ARD, however, the updating formula for λ_j depends on all coefficient estimates from the previous iteration and thus is not separable variable-wise; this causes relatively slow convergence and cannot exploit the simple thresholding scheme.

Our approach is closely related to traditional Bayesian approaches. If ρ(u) = u², i.e., in non-robust regression, the lasso can be formulated in a Bayesian framework using a Gaussian scale mixture, β_j | τ_j², λ ~ N(0, τ_j²) and τ_j² | λ ~ gamma(1, λ²/2) (Park and Casella, 2008; West, 1987). Assuming normality of the response y_i, a full Bayesian approach allows Bayesian inference for the lasso because the full posterior is available. However, a typical Markov chain Monte Carlo (MCMC) approximation does not usually produce exact zero solutions. In contrast to MCMC, an expectation-maximization (EM) algorithm allows sparse estimation under the Bayesian formulation (Figueiredo, 2003). For either MCMC or EM, the hyperparameter λ should be chosen by an empirical Bayes or evidence procedure, where λ is the maximizer of the so-called marginal likelihood, or evidence. In robust lasso, however, the marginal likelihood is not available because the Huber and biweight loss functions are not derived from any distribution. The penalty function in robust adaptive lasso can also be formulated under a Bayesian structure by assuming β_j | τ_j², λ_j ~ N(0, τ_j²), τ_j² | λ_j ~ gamma(1, λ_j²/2), and λ_j | a, b ~ inverse-gamma(a, b). This is called the hierarchical adaptive lasso (HAL; Lee et al., 2010). In the non-robust situation, the EM algorithm is similar to the proposed approach in the sense that the M-step produces the same β estimation and the E-step gives E(λ_j) = (a + 1)/(b + |β̂_j°|). If a = b = 0, the expected value of λ_j is exactly the same as in the proposed approach. However, HAL is not appropriate for robust adaptive lasso either, because HAL also assumes normality of the response. Another interesting connection is the log-penalized regression of Zou and Li (2008), who derived the logarithm penalty as a limiting version of the bridge penalty with q → 0 using local linear approximation. The difference is that our approach iterates to convergence, while Zou and Li (2008) suggest a one-step iteration only.

3. Numerical Studies

3.1. Synthetic data

Artificial data are used to test our proposals in this section. Input variables x_ij (i = 1, 2, …, n; j = 1, 2, …, p) are independently generated from N(0, 1). The intercept β_0 is set to zero; the first 10 slope parameters are generated uniformly on the interval from 1 to 2 and the remaining slopes are set to zero, i.e., β_j ~ uniform(1, 2) for j = 1, 2, …, 10 and β_j = 0 for j = 11, …, p. Response variables are constructed as y_i = x_i^T β + ε_i for i = 1, …, n. Thus, the first 10 variables are important for prediction and the remaining p − 10 variables are not. To mimic contamination of the training data, the random errors ε_i are generated from the contaminated normal distribution
ε_i ~ (1 − π) N(0, σ²) + π N(mσ², σ²).
Here, π is the contamination rate; if π = 0, there are no outliers. Outliers come from a normal distribution whose mean is shifted by an m-factor of the error variance. The error variance is set to give a signal-to-noise ratio of five on the standard deviation scale, i.e., σ = sd(x_i^T β) × 0.2. We used the glmnet() and cv.glmnet() functions in the glmnet package for CV. For the non-robust case (π = 0), the direct use of cv.glmnet() is sufficient and efficient. For the robust case, the weights w_i should be updated at every iteration, so we wrote a loop for CV and used the glmnet() function for model fitting inside the loop. We conducted 5-fold CV over 100 grid points, as suggested in the glmnet package. BIC is also considered for comparison (Chang et al., 2017; Lambert-Lacroix and Zwald, 2011), using the same grid as CV.
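The data-generating process just described can be sketched as follows (an illustrative version; the mσ² mean shift for outliers follows the description above):

```python
import numpy as np

def make_data(n, p, pi=0.1, m=5, seed=0):
    """Sketch of the Section 3.1 generator: x_ij ~ N(0,1),
    beta_j ~ uniform(1,2) for the first 10 slopes, 0 otherwise;
    errors from (1 - pi) N(0, sigma^2) + pi N(m*sigma^2, sigma^2)
    with sigma = 0.2 * sd(x_i^T beta)."""
    rng = np.random.default_rng(seed)
    X = rng.normal(size=(n, p))
    beta = np.zeros(p)
    beta[:10] = rng.uniform(1, 2, size=10)
    signal = X @ beta
    sigma = 0.2 * signal.std()             # five-to-one signal-to-noise ratio
    outlier = rng.random(n) < pi           # contamination indicators
    eps = rng.normal(0.0, sigma, n) + outlier * m * sigma**2
    return X, signal + eps, beta
```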

The first simulation considers the n > p situation. We increase the sample size n = 100, 1,000, 10,000 with fixed p = 20. The proportion of outliers is π = 0, 0.1, 0.2, 0.3, with m = 5 for outlier generation. Test data of size 10,000 are additionally generated without outliers and used to measure prediction error, calculated as the root mean squared error (RMSE). The simulation is repeated 100 times, and we report the average and standard error of the RMSEs in Table 1. We denote the square, Huber, and bisquare losses by "S", "H", and "B", respectively, "BIC" for BIC, "L" for robust lasso, and "A" for robust adaptive lasso; thus, "B.A" means robust adaptive lasso with bisquare loss. From Table 1, in the absence of outliers (π = 0), the traditional lasso with square loss under CV is best in prediction. However, as π increases, the square loss is outperformed by the robust losses. The bisquare loss is generally better than the Huber loss, illustrating the well-known fact that nonconvex losses outperform convex losses in robust regression. Overall, robust adaptive lasso performs best among the competitors, and as the sample size increases, CV and our proposals become hardly distinguishable on the standard error scale. We compute the false negative rate (FN) and false positive rate (FP) and report their averages in Table 2. FN is the proportion of coefficients falsely claimed to be zero when they are actually nonzero, while FP is the proportion falsely claimed to be nonzero when they are actually zero. FN is almost zero for all methods except at a small sample size, but FP is quite high. Note that a low FN combined with a high FP indicates that almost all variables are selected, implying low selection ability. In this regard, BIC selection performs very well in the large sample case. Table 3 presents the average computing time for each method. Robust lasso and robust adaptive lasso are enormously faster than CV, while their prediction and variable selection performance is comparable to or better than CV. Our proposals are much faster than both CV and BIC in penalized robust regression.

The second simulation considers the case where the number of variables exceeds the sample size. The sample size is fixed at n = 100 and the number of variables is set to p = 50, 100, 200, 400. In this simulation, outliers are generated with mean mσ² using the factor m = √p, so outliers lie farther from the typical data as the dimension increases. Prediction performance, variable selection performance, and computing time are summarized in Tables 4, 5, and 6, respectively. Table 4 shows that the bisquare loss produces reliable results in the presence of outliers; the Huber loss quickly deteriorates as contamination grows, demonstrating that the convex Huber loss suffers from severe outliers. In this simulation CV often produces the best results, but the adaptive lasso version of the Bayesian approach remains comparable, and the computing times in Table 6 show that the latter is far more attractive than the former. In Table 5 we provide the false rate of the slope estimates, combining false negatives and false positives instead of reporting them separately as in Table 2; the false rate is the proportion of elements of β for which a true zero is falsely claimed to be nonzero or a true nonzero is falsely claimed to be zero. Table 5 demonstrates that robust adaptive lasso with the bisquare loss outperforms CV in variable selection while remaining comparable in prediction. Note that CV aims for the model with the best prediction, while robust adaptive lasso targets variable selection as well as a good fit to the data in the high-dimensional cases.

3.2. Real data example

We applied our methods to the Boston housing data, available at http://lib.stat.cmu.edu/datasets/boston. The dataset contains medv (median value of owner-occupied homes in thousand dollars) as the response variable and 13 explanatory variables: crim (per capita crime rate by town), zn (proportion of residential land zoned for lots over 25,000 sq. ft.), indus (proportion of non-retail business acres per town), chas (Charles River dummy variable), nox (nitric oxides concentration, parts per 10 million), rm (average number of rooms per dwelling), age (proportion of owner-occupied units built prior to 1940), dis (weighted distances to five Boston employment centers), rad (index of accessibility to radial highways), tax (full-value property-tax rate per 10,000 dollars), ptratio (pupil-teacher ratio by town), black (1000 × (Bk − 0.63)², where Bk is the proportion of blacks by town), and lstat (percent of lower status of the population). There are 506 observations in the dataset. We split the dataset into two parts: the first 300 observations were used as the training dataset and the remaining 206 observations as the test dataset. To see the effect of outliers on the methods considered, we randomly selected 30 observations from the training dataset and replaced their responses with values having large residuals (five times larger than the residual standard deviation).
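The outlier-injection step can be sketched as follows; `contaminate` is a hypothetical helper, and a plain standard deviation of the response stands in for the residual scale that a preliminary fit would provide:

```python
import numpy as np

def contaminate(y_train, n_out=30, factor=5, resid_sd=None, seed=0):
    """Replace the responses of randomly chosen training observations
    with outlying values shifted by `factor` times the residual standard
    deviation, mimicking the Boston housing experiment. `resid_sd` would
    normally come from a preliminary robust fit."""
    rng = np.random.default_rng(seed)
    y = np.array(y_train, dtype=float)
    if resid_sd is None:
        resid_sd = y.std()                 # crude stand-in for residual scale
    idx = rng.choice(len(y), size=n_out, replace=False)
    y[idx] = y[idx] + factor * resid_sd
    return y, idx
```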

Tables 7 and 8 present the coefficient estimates and test RMSEs from fits to the training dataset without and with outliers, respectively. Table 7, for the case without outliers, shows that all methods produce similar coefficient estimates and test RMSEs. After outlier inclusion, however, the square loss deteriorates severely, while the Huber and biweight losses remain robust against the outliers. Interestingly, we observed that BIC is not stable against outliers in this example.

4. Conclusion and remarks

In this study, we propose an alternative to CV for robust lasso and robust adaptive lasso, using a marginal prior on the coefficients under a Bayesian framework. Through simulation studies, we demonstrate that our proposals are competitive with or better than CV in prediction, variable selection, and computing time.

In this study we limit the idea to robust regression; however, it can be applied to various other penalized predictive models. Our approach becomes especially valuable when CV is not appropriate. For example, in classification problems, the training data may suffer from mislabeled class labels, called label noise (Lee et al., 2016). In such cases, a hold-out procedure like CV is not successful because the validation set also contains label noise, and a cross-validation score computed from a validation set with label noise is not reliable for model selection. One can extend the idea in this study to such complicated problems involving label noise, where CV is not available. We leave this direction for future research.


Table 1

Simulation 1 - average of 100 test RMSEs and its standard error (in parenthesis)














Best performer for each case is highlighted.

RMSE = root mean squared error; S = square; H = Huber; B = bisquare; CV = cross validation; BIC = Bayes information criterion; L = robust lasso; A = robust adaptive lasso.

Table 2

Simulation 1 - average of false negative and false positive rates

n | False negative




n | False positive





S = square; H = Huber; B = bisquare; CV = cross validation; BIC = Bayes information criterion; L = robust lasso; A = robust adaptive lasso.

Table 3

Simulation 1 - average of 100 computing times (in seconds)

n | Computing time in seconds




S = square; H = Huber; B = bisquare; CV = cross validation; BIC = Bayes information criterion; L = robust lasso; A = robust adaptive lasso.

Table 4

Simulation 2 - average of 100 test RMSEs and its standard error (in parenthesis)


















Best performer for each case is highlighted.

RMSE = root mean squared error; S = square; H = Huber; B = bisquare; CV = cross validation; BIC = Bayes information criterion; L = robust lasso; A = robust adaptive lasso.

Table 5

Simulation 2 - average of false rate

p | False rate





Best performer is highlighted.

S = square; H = Huber; B = bisquare; CV = cross validation; BIC = Bayes information criterion; L = robust lasso; A = robust adaptive lasso.

Table 6

Simulation 2 - average of 100 computing times (in seconds)

p | Computing time in seconds





S = square; H = Huber; B = bisquare; CV = cross validation; BIC = Bayes information criterion; L = robust lasso; A = robust adaptive lasso.

Table 7

Boston housing data - estimated coefficients and test RMSEs without outliers


Test RMSE | 15.824 | 7.984 | 17.472 | 7.849 | 8.113 | 9.761 | 7.863 | 7.808 | 7.984 | 8.394 | 8.042

RMSE = root mean squared error; S = square; H = Huber; B = bisquare; CV = cross validation; BIC = Bayes information criterion; L = robust lasso; A = robust adaptive lasso.

Table 8

Boston housing data - estimated coefficients and test RMSEs after outlier inclusion


Test RMSE | 23.183 | 29.441 | 98.664 | 7.885 | 38.727 | 8.608 | 8.407 | 7.798 | 42.804 | 8.038 | 8.249

RMSE = root mean squared error; S = square; H = Huber; B = bisquare; CV = cross validation; BIC = Bayes information criterion; L = robust lasso; A = robust adaptive lasso.

  1. Bishop, CM (2006). Pattern Recognition and Machine Learning. New York: Springer
  2. Buntine, WL, and Weigend, AS (1991). Bayesian back-propagation. Complex Systems. 5, 603-643.
  3. Chang, L, Roberts, S, and Welsh, A (2017). Robust lasso regression using Tukey's biweight criterion. Technometrics. Retrieved from: https://dx.doi.org/10.1080/00401706.2017.1305299
  4. El Ghaoui, L, and Lebret, H (1997). Robust solutions to least-squares problems with uncertain data. SIAM Journal on Matrix Analysis and Applications. 18, 1035-1064.
  5. Figueiredo, MAT (2003). Adaptive sparseness for supervised learning. IEEE Transactions on Pattern Analysis and Machine Intelligence. 25, 1150-1159.
  6. Hastie, T, Tibshirani, R, and Friedman, JH (2001). The Elements of Statistical Learning: Data Mining Inference, and Prediction. New York: Springer
  7. Koenker, R (2005). Quantile Regression. Cambridge: Cambridge University Press
  8. Lambert-Lacroix, S, and Zwald, L (2011). Robust regression through the Huber's criterion and adaptive lasso penalty. Electronic Journal of Statistics. 5, 1015-1053.
  9. Lee, A, Caron, F, Doucet, A, and Holmes, C (2010). A hierarchical Bayesian framework for constructing sparsity-inducing priors (Technical report). Oxford: University of Oxford
  10. Lee, S, Shin, H, and Lee, SH (2016). Label-noise resistant logistic regression for functional data classification with an application to Alzheimer's disease study. Biometrics. 72, 1325-1335.
  11. MacKay, DJC (1995). Probable networks and plausible predictions: a review of practical Bayesian methods for supervised neural networks. Network: Computation in Neural Systems. 6, 469-505.
  12. Maronna, RA, Martin, RD, and Yohai, VJ (2006). Robust Statistics: Theory and Methods. Chichester: Wiley
  13. Murphy, KP (2012). Machine Learning: A Probabilistic Perspective. Cambridge: The MIT Press
  14. Owen, AB (2006). A robust hybrid of lasso and ridge regression (Technical report). Stanford: Stanford University
  15. Park, T, and Casella, G (2008). The Bayesian lasso. Journal of the American Statistical Association. 103, 681-686.
  16. Tipping, ME (2001). Sparse Bayesian learning and the relevance vector machine. Journal of Machine Learning Research. 1, 211-244.
  17. West, M (1987). On scale mixtures of normal distributions. Biometrika. 74, 646-648.
  18. Zou, H (2006). The adaptive lasso and its oracle properties. Journal of the American Statistical Association. 101, 1418-1429.
  19. Zou, H, and Li, R (2008). One-step sparse estimates in nonconcave penalized likelihood models. The Annals of Statistics. 36, 1509-1533.