De-biased lasso for generalized linear models: issues and promises
Abstract: In the existing literature, "de-biasing" or "de-sparsifying" the L_1-norm penalized estimator represents an important line of methods for drawing inference in high-dimensional linear models, and it has been extended to generalized linear models (GLMs). However, we find that the de-biased approach in GLMs may not fully remove the bias or deliver reliable confidence intervals. In this work, we primarily consider the case of n > p with p diverging and propose an alternative modification of the original de-biased lasso, based on directly inverting the Hessian matrix, that further reduces bias and improves confidence interval coverage. We provide theoretical justification for drawing inference on linear combinations of the regression coefficients, and conduct extensive simulations to demonstrate the improvement. We conclude that, in general, the de-biased method should be used for making inference in GLMs only when p < n, because: (1) it does not provide reliable confidence intervals when p > n; (2) when p < n but p is relatively large, it yields better results than the likelihood method, and our improved method outperforms the original de-biased approach; (3) when p is small, it yields results almost identical to the likelihood method. This is joint work with Lu Xia and Yi Li.
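As a rough illustration of the idea described above (not the authors' exact estimator), a one-step de-biasing correction for a lasso-type logistic regression fit, using direct inversion of the sample Hessian as the abstract suggests for the n > p regime, might be sketched as follows. The function name `debiased_lasso_logistic` and the variance formula are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def sigmoid(z):
    # Logistic link: P(y = 1 | x) = 1 / (1 + exp(-x'beta))
    return 1.0 / (1.0 + np.exp(-z))

def debiased_lasso_logistic(X, y, beta_init):
    """One-step de-biasing of an initial (e.g., lasso) logistic estimate
    by directly inverting the unpenalized sample Hessian.
    Illustrative sketch only; assumes n > p so the Hessian is invertible."""
    n, p = X.shape
    mu = sigmoid(X @ beta_init)              # fitted probabilities
    w = mu * (1.0 - mu)                      # GLM variance weights
    score = X.T @ (y - mu) / n               # score (gradient of log-likelihood)
    hessian = (X * w[:, None]).T @ X / n     # negative Hessian of log-likelihood
    theta = np.linalg.inv(hessian)           # direct inversion, feasible when n > p
    beta_db = beta_init + theta @ score      # de-biased (one-step corrected) estimate
    se = np.sqrt(np.diag(theta) / n)         # plug-in standard errors (assumed form)
    return beta_db, se
```

A 95% confidence interval for coefficient j would then be `beta_db[j] ± 1.96 * se[j]`; the abstract's point is that how well such intervals cover depends strongly on whether p < n and on how the inverse Hessian is approximated.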