See my Google Scholar profile for bibliometric data. A full .bib file of my publications is available.
2023
R. Catellier, S. Vaiter, D. Garreau.
On the Robustness of Text Vectorizers.
Preprint. 2023. arXiv:2303.07203.
M. Dagréou, T. Moreau, S. Vaiter, P. Ablin.
A Near-Optimal Algorithm for Bilevel Empirical Risk Minimization.
Preprint. 2023. arXiv:2302.08766.
E. Pauwels, S. Vaiter.
The derivatives of Sinkhorn-Knopp converge.
SIAM J Optim. (accepted). 2023. arXiv:2207.12717.
X. Dupuis, S. Vaiter.
The Geometry of Sparse Analysis Regularization.
SIAM J Optim. (accepted). 2023. arXiv:1907.01769.
H. Ghanem, J. Salmon, N. Keriven, S. Vaiter.
Supervised learning of analysis-sparsity priors with automatic differentiation.
IEEE Signal Process Lett. (accepted). 2023. arXiv:2112.07990.
Q. Klopfenstein, Q. Bertrand, A. Gramfort, J. Salmon, S. Vaiter.
Model identification and local linear convergence of coordinate descent.
Optim Lett. (accepted). 2023. arXiv:2010.11825.
2022
J. Bolte, E. Pauwels, S. Vaiter.
Automatic differentiation of nonsmooth iterative algorithms.
NeurIPS. 2022. arXiv:2206.00457.
M. Dagréou, P. Ablin, S. Vaiter, T. Moreau.
A framework for bilevel optimization that enables stochastic and global variance reduction algorithms.
NeurIPS. 2022. arXiv:2201.13409.
T. Moreau, M. Massias, A. Gramfort, P. Ablin, P. Bannier, B. Charlier, M. Dagréou, T. Dupré la Tour, G. Durif, C. Dantas, Q. Klopfenstein, J. Larsson, E. Lai, T. Lefort, B. Malézieux, B. Moufad, B. Nguyen, A. Rakotomamonjy, Z. Ramzi, J. Salmon, S. Vaiter.
Benchopt: Reproducible, efficient and collaborative optimization benchmarks.
NeurIPS. 2022. arXiv:2206.13424. (code).
Q. Bertrand, Q. Klopfenstein, M. Massias, M. Blondel, S. Vaiter, A. Gramfort, J. Salmon.
Implicit differentiation for fast hyperparameter selection in non-smooth convex learning.
J Mach Learn Res. 23(149):1–43. 2022. arXiv:2105.01637. (code).
N. Keriven, S. Vaiter.
Sparse and Smooth: improved guarantees for Spectral Clustering in the Dynamic Stochastic Block Model.
Electron J Statist. 16(1):1330–1366. 2022. arXiv:2002.02892. doi:10.1214/22-EJS1986.
2021
Y. Traonmilin, R. Gribonval, S. Vaiter.
A theory of optimal convex regularization for low-dimensional recovery.
Preprint. 2021. HAL (03467123).
N. Keriven, A. Bietti, S. Vaiter.
On the Universality of Graph Neural Networks on Large Random Graphs.
NeurIPS. 2021. arXiv:2105.13099.
Q. Klopfenstein, S. Vaiter.
Linear Support Vector Regression with Linear Constraints.
Mach Learn. 110(7):1939–1974. 2021. arXiv:1911.02306. doi:10.1007/s10994-021-06018-2. (code).
B. Pascal, S. Vaiter, N. Pustelnik, P. Abry.
Automated data-driven selection of the hyperparameters for Total-Variation based texture segmentation.
J Math Imaging Vis. 63:923–952. 2021. arXiv:2004.09434. (code).
C. Deledalle, N. Papadakis, J. Salmon, S. Vaiter.
Block based refitting in \(\ell_{12}\) sparse regularisation.
J Math Imaging Vis. 63(2):216–236. 2021. arXiv:1910.11186. doi:10.1007/s10851-020-00993-2. (pdf).
S. Vaiter.
From optimization to algorithmic differentiation: a graph detour.
Habilitation thesis (HDR). 2021. TEL (03159975). (pdf).
2020
M. Massias, S. Vaiter, A. Gramfort, J. Salmon.
Dual Extrapolation for Sparse Generalized Linear Models.
J Mach Learn Res. 21(234):1–33. 2020. arXiv:1907.05830. (pdf). (code).
N. Keriven, A. Bietti, S. Vaiter.
Convergence and Stability of Graph Convolutional Networks on Large Random Graphs.
NeurIPS. 2020. arXiv:2006.01868. (pdf). (code).
Q. Bertrand, Q. Klopfenstein, M. Blondel, S. Vaiter, A. Gramfort, J. Salmon.
Implicit differentiation of Lasso-type models for hyperparameter optimization.
ICML. 2020. arXiv:2002.08943. (pdf). (code).
2019
C. Deledalle, N. Papadakis, J. Salmon, S. Vaiter.
Refitting solutions promoted by \(\ell_{12}\) sparse analysis regularization with block penalties.
SSVM. 2019. arXiv:1903.00741.
M. Massias, S. Vaiter, A. Gramfort, J. Salmon.
Exploiting regularity in sparse Generalized Linear Models.
SPARS. 2019. HAL (02288859). (code).
A. Barbara, A. Jourani, S. Vaiter.
Maximal Solutions of Sparse Analysis Regularization.
J Optim Theory Appl. 180(2):371–396. 2019. arXiv:1703.00192. doi:10.1007/s10957-018-1385-3. (pdf).
2018
Y. Traonmilin, S. Vaiter, R. Gribonval.
Is the 1-norm the best convex sparse regularization?
iTWIST. 2018. arXiv:1806.08690. (pdf).
Y. Traonmilin, S. Vaiter.
Optimality of 1-norm regularization among weighted 1-norms for sparse recovery: a case study on how to find optimal regularizations.
NCMIP. 2018. arXiv:1803.00773. (pdf).
S. Vaiter, G. Peyré, J. Fadili.
Model Consistency of Partly Smooth Regularizers.
IEEE Trans Inform Theory. 64(3):1725–1737. 2018. arXiv:1405.1004. doi:10.1109/TIT.2017.2713822. (pdf).
2017
A. Chambolle, P. Tan, S. Vaiter.
Accelerated Alternating Descent Methods for Dykstra-like problems.
J Math Imaging Vis. 59(3):481–497. 2017. HAL (01346532). doi:10.1007/s10851-017-0724-6. (pdf).
P. Bellec, J. Salmon, S. Vaiter.
A Sharp Oracle Inequality for Graph-Slope.
Electron J Statist. 11(2):4851–4870. 2017. arXiv:1706.06977. doi:10.1214/17-EJS1364. (pdf).
S. Vaiter, C. Deledalle, G. Peyré, J. Fadili, C. Dossal.
The Degrees of Freedom of Partly Smooth Regularizers.
Ann Inst Stat Math. 69(4):791–832. 2017. arXiv:1404.5557. doi:10.1007/s10463-016-0563-z. (pdf).
C. Deledalle, N. Papadakis, J. Salmon, S. Vaiter.
Characterizing the maximum parameter of the total-variation denoising through the pseudo-inverse of the divergence.
SPARS. 2017. arXiv:1612.03080. (pdf).
C. Deledalle, N. Papadakis, J. Salmon, S. Vaiter.
CLEAR: Covariant LEAst-square Re-fitting with applications to image restoration.
SIAM J Imaging Sci. 10(1):243–284. 2017. arXiv:1606.05158. doi:10.1137/16M1080318. (pdf).
2015
S. Vaiter, M. Golbabaee, J. Fadili, G. Peyré.
Model Selection with Low Complexity Priors.
Inf Inference. 4(3):230–287. 2015. arXiv:1307.2342. doi:10.1093/imaiai/iav005. (pdf).
S. Vaiter, G. Peyré, J. Fadili.
Low Complexity Regularization of Linear Inverse Problems.
2015. arXiv:1407.1598. doi:10.1007/978-3-319-19749-4. (pdf).
2014
C. Deledalle, S. Vaiter, J. Fadili, G. Peyré.
Stein Unbiased GrAdient estimator of the Risk (SUGAR) for multiple parameter selection.
SIAM J Imaging Sci. 7(4):2448–2487. 2014. arXiv:1405.1164. doi:10.1137/140968045. (pdf). (code).
S. Vaiter.
Low Complexity Regularizations of Inverse Problems.
PhD thesis. 2014. TEL (01026398). (pdf).
2013
S. Vaiter, C. Deledalle, G. Peyré, C. Dossal, J. Fadili.
Local Behavior of Sparse Analysis Regularization: Applications to Risk Estimation.
Appl Comput Harmon Anal. 35(3):433–451. 2013. arXiv:1204.3212. doi:10.1016/j.acha.2012.11.006. (pdf).
S. Vaiter, G. Peyré, J. Fadili, C. Deledalle, C. Dossal.
The degrees of freedom of the group Lasso for a general design.
SPARS. 2013. HAL (00926929). (pdf).
J. Fadili, G. Peyré, S. Vaiter, C. Deledalle, J. Salmon.
Stable Recovery with Analysis Decomposable Priors.
SAMPTA. 2013. arXiv:1304.4407. (pdf).
S. Vaiter, G. Peyré, J. Fadili.
Robust Polyhedral Regularization.
SAMPTA. 2013. arXiv:1304.6033. (pdf).
S. Vaiter, G. Peyré, C. Dossal, J. Fadili.
Robust Sparse Analysis Regularization.
IEEE Trans Inform Theory. 59(4):2001–2016. 2013. arXiv:1109.6222. doi:10.1109/TIT.2012.2233859. (pdf). (code).
2012
C. Deledalle, S. Vaiter, G. Peyré, J. Fadili, C. Dossal.
Unbiased Risk Estimation for Sparse Analysis Regularization.
ICIP. 2012. HAL (00662718). doi:10.1109/ICIP.2012.6467544. (pdf).
C. Deledalle, S. Vaiter, G. Peyré, J. Fadili, C. Dossal.
Proximal Splitting Derivatives for Risk Estimation.
NCMIP. 2012. HAL (0670213). doi:10.1088/1742-6596/386/1/012003. (pdf).
S. Vaiter, C. Deledalle, G. Peyré, J. Fadili, C. Dossal.
The Degrees of Freedom of the Group Lasso.
ICML (sparsity workshop). 2012. arXiv:1205.1481. (pdf).
C. Deledalle, S. Vaiter, G. Peyré, J. Fadili, C. Dossal.
Risk estimation for matrix recovery with spectral regularization.
ICML (sparsity workshop). 2012. arXiv:1205.1482. (pdf).