Contents of PMS, Vol. 42, Fasc. 2, pages 283-302
DOI: 10.37190/0208-4147.00066
Published online 25.11.2022
 

Pattern recovery and signal denoising by SLOPE when the design matrix is orthogonal

T. Skalski
P. Graczyk
B. Kołodziejek
M. Wilczyński

Abstract:

The Sorted ℓ1 Penalized Estimator (SLOPE) is a relatively new convex regularization method for fitting high-dimensional regression models. SLOPE allows one to reduce the model dimension by shrinking some estimates of the regression coefficients completely to zero or by equating the absolute values of some nonzero estimates; the latter makes it possible to identify situations where some of the true regression coefficients are equal. In this article we introduce the SLOPE pattern, i.e., the set of relations between the true regression coefficients that can be identified by SLOPE. We also present new results on the strong consistency of SLOPE estimators and on the strong consistency of pattern recovery by SLOPE when the design matrix is orthogonal, and we illustrate the advantages of SLOPE clustering in the context of high-frequency signal denoising.
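
For reference, the estimator discussed above admits the following standard formulation (as in Bogdan, van den Berg, Sabatti, Su and Candès (2015), cited below; this display is taken from the SLOPE literature, not reproduced from the article itself). SLOPE penalizes the least-squares fit by the sorted ℓ1 norm with a nonincreasing sequence of tuning parameters:

\[
  \widehat{b} \in \operatorname*{arg\,min}_{b \in \mathbb{R}^p}
    \Big\{ \tfrac{1}{2} \lVert y - Xb \rVert_2^2
           + \sum_{i=1}^{p} \lambda_i \lvert b \rvert_{(i)} \Big\},
  \qquad \lambda_1 \ge \dots \ge \lambda_p \ge 0,
\]

where |b|_{(1)} ≥ ... ≥ |b|_{(p)} denote the ordered absolute values of the coordinates of b. Because the penalty depends on b only through these ordered absolute values, a minimizer can set some coordinates exactly to zero and make others exactly equal in absolute value, which is the clustering behaviour the abstract refers to.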

2010 AMS Mathematics Subject Classification: Primary 62J05; Secondary 62J07.

Keywords and phrases: linear regression, SLOPE, signal denoising.
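
As an illustration of the two effects described in the abstract (shrinking to zero and fusing absolute values), consider the orthogonal-design case: when X^T X = I, the SLOPE estimate is obtained by applying the proximal operator of the sorted ℓ1 norm to X^T y. The sketch below is our own illustration, not code from the article; it follows the well-known pool-adjacent-violators construction of this proximal operator (Bogdan et al. (2015)), and the function names are ours.

import numpy as np


def _pava_nonincreasing(z):
    # Euclidean projection of the sequence z onto the cone of
    # nonincreasing sequences (pool adjacent violators, left to right).
    sums, counts = [], []
    for val in z:
        s, c = float(val), 1
        # merge blocks while the new block's mean exceeds the previous one's
        while sums and s / c > sums[-1] / counts[-1]:
            s += sums.pop()
            c += counts.pop()
        sums.append(s)
        counts.append(c)
    out = np.empty(len(z))
    i = 0
    for s, c in zip(sums, counts):
        out[i:i + c] = s / c
        i += c
    return out


def prox_sorted_l1(v, lam):
    # Proximal operator of b -> sum_i lam[i] * |b|_(i),
    # for a nonincreasing, nonnegative weight sequence lam.
    v = np.asarray(v, dtype=float)
    signs = np.sign(v)
    a = np.abs(v)
    order = np.argsort(-a)                 # sort |v| in decreasing order
    x = np.maximum(_pava_nonincreasing(a[order] - lam), 0.0)
    out = np.empty_like(x)
    out[order] = x                         # undo the sort
    return signs * out


# With an orthogonal design, the SLOPE estimate is prox_sorted_l1(X.T @ y, lam).
v = np.array([4.0, -3.9, 1.5, -0.2])
lam = np.array([2.0, 1.5, 1.0, 0.5])
print(prox_sorted_l1(v, lam))              # [ 2.2 -2.2  0.5 -0. ]

In the final line, the two largest inputs (4.0 and -3.9) are fused to a common absolute value of 2.2 and the smallest input is shrunk exactly to zero: precisely the pattern behaviour discussed in the abstract.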

M. Bogdan, E. van den Berg, C. Sabatti, W. Su and E. J. Candès, SLOPE – adaptive variable selection via convex optimization, Ann. Appl. Statist. 9 (2015), 1103-1140.

M. Bogdan, E. van den Berg, W. Su and E. J. Candès, Statistical estimation and testing via the sorted ℓ1 norm, arXiv:1310.1969 (2013).

M. Bogdan, X. Dupuis, P. Graczyk, B. Kołodziejek, T. Skalski, P. Tardivel and M. Wilczyński, Pattern recovery by SLOPE, arXiv:2203.12086 (2022).

H. D. Bondell and B. J. Reich, Simultaneous regression shrinkage, variable selection, and supervised clustering of predictors with OSCAR, Biometrics 64 (2008), 115-123.

H. D. Bondell and B. J. Reich, Simultaneous factor selection and collapsing levels in ANOVA, Biometrics 65 (2009), 169-177.

S. Sh. Chen and D. L. Donoho, Basis pursuit, in: Proc. 1994 28th Asilomar Conference on Signals, Systems and Computers, IEEE, 1994, 41-44.

S. Sh. Chen, D. L. Donoho and M. A. Saunders, Atomic decomposition by basis pursuit, SIAM J. Sci. Comput. 20 (1998), 33-61.

X. Dupuis and P. Tardivel, Proximal operator for the sorted ℓ1 norm: Application to testing procedures based on SLOPE, hal-03177108v2 (2021).

K. Ewald and U. Schneider, Uniformly valid confidence sets based on the Lasso, Electron. J. Statist. 12 (2018), 1358-1387.

M. A. T. Figueiredo and R. Nowak, Ordered weighted ℓ1 regularized regression with strongly correlated covariates: Theoretical aspects, in: Proc. 19th Int. Conf. on Artificial Intelligence and Statistics, Proc. Mach. Learning Res. 51, 2016, 930-938.

J. Gertheiss and G. Tutz, Sparse modeling of categorial explanatory variables, Ann. Appl. Statist. 4 (2010), 2150-2180.

P. Kremer, D. Brzyski, M. Bogdan and S. Paterlini, Sparse index clones via the sorted ℓ1-norm, Quant. Finance 22 (2022), 349-366.

A. Maj-Kańska, P. Pokarowski and A. Prochenka, Delete or merge regressors for linear model selection, Electron. J. Statist. 9 (2015), 1749-1778.

K. Minami, Degrees of freedom in submodular regularization: a computational perspective of Stein’s unbiased risk estimate, J. Multivariate Anal. 175 (2020), art. 104546, 22 pp.

R. Negrinho and A. F. T. Martins, Orbit regularization, in: Advances in Neural Information Processing Systems 27, 2014, 9 pp.

Sz. Nowakowski, P. Pokarowski and W. Rejchel, Group Lasso merger for sparse prediction with high-dimensional categorical data, arXiv:2112.11114 (2021).

M.-R. Oelker, J. Gertheiss and G. Tutz, Regularization and model selection with categorical predictors and effect modifiers in generalized linear models, Statist. Model. 14 (2014), 157-177.

K. R. Rao, N. Ahmed and M. A. Narasimhan, Orthogonal transforms for digital signal processing, in: Proc. 18th Midwest Symposium on Circuits and Systems, 1975, 1-6.

U. Schneider and P. Tardivel, The geometry of uniqueness, sparsity and clustering in penalized estimation, arXiv:2004.09106 (2020).

B. G. Stokell, R. D. Shah and R. J. Tibshirani, Modelling high-dimensional categorical data using nonconvex fusion penalties, arXiv:2002.12606 (2021).

P. Tardivel, R. Servien and D. Concordet, Simple expression of the LASSO and SLOPE estimators in low-dimension, Statistics 54 (2020), 340-352.

P. Tardivel, T. Skalski, P. Graczyk and U. Schneider, The geometry of model recovery by penalized and thresholded estimators, hal-03262087 (2021).

R. Tibshirani, Regression shrinkage and selection via the lasso, J. Roy. Statist. Soc. Ser. B 58 (1996), 267-288.

X. Zeng and M. A. T. Figueiredo, Decreasing weighted sorted ℓ1 regularization, IEEE Signal Process. Lett. 21 (2014), 1240-1244.

P. Zhao and B. Yu, On model selection consistency of Lasso, J. Mach. Learn. Res. 7 (2006), 2541-2563.

H. Zou, The adaptive lasso and its oracle properties, J. Amer. Statist. Assoc. 101 (2006), 1418-1429.
