See my Google Scholar profile for bibliometric data. A full .bib file of my publications is available.


N. Keriven, S. Vaiter. What functions can Graph Neural Networks compute on random graphs? The role of Positional Encoding. Preprint. 2023. arXiv:2305.14814.

J. Bolte, E. Pauwels, S. Vaiter. One-step differentiation of iterative algorithms. Preprint. 2023. arXiv:2305.13768.

M. Cordonnier, N. Keriven, N. Tremblay, S. Vaiter. Convergence of Message Passing Graph Neural Networks with Generic Aggregation On Large Random Graphs. Preprint. 2023. arXiv:2304.11140.

S. Jaffard, A. Muzy, P. Reynaud-Bouret, S. Vaiter. Provable local learning rule by expert aggregation for a Hawkes network. Preprint. 2023. arXiv:2304.08061.

M. Dagréou, T. Moreau, S. Vaiter, P. Ablin. A Near-Optimal Algorithm for Bilevel Empirical Risk Minimization. Preprint. 2023. arXiv:2302.08766.

H. Ghanem, S. Vaiter, N. Keriven. Gradient Scarcity with Bilevel Optimization for Graph Learning. Preprint. 2023. arXiv:2303.13964.

R. Catellier, S. Vaiter, D. Garreau. On the Robustness of Text Vectorizers. ICML. 2023. arXiv:2303.07203.

E. Pauwels, S. Vaiter. The Derivatives of Sinkhorn-Knopp Converge. SIAM J Optim. 33(3):1494–1517. 2023. arXiv:2207.12717. doi:10.1137/22M1512703.

X. Dupuis, S. Vaiter. The Geometry of Sparse Analysis Regularization. SIAM J Optim. 33(2):842–867. 2023. arXiv:1907.01769. doi:10.1137/19M1271877.

H. Ghanem, J. Salmon, N. Keriven, S. Vaiter. Supervised learning of analysis-sparsity priors with automatic differentiation. IEEE Signal Process Lett. 30:339–343. 2023. arXiv:2112.07990. doi:10.1109/LSP.2023.3244511.

Q. Klopfenstein, Q. Bertrand, A. Gramfort, J. Salmon, S. Vaiter. Model identification and local linear convergence of coordinate descent. Optim Lett. (online first). 2023. arXiv:2010.11825.


J. Bolte, E. Pauwels, S. Vaiter. Automatic differentiation of nonsmooth iterative algorithms. NeurIPS. 2022. arXiv:2206.00457.

M. Dagréou, P. Ablin, S. Vaiter, T. Moreau. A framework for bilevel optimization that enables stochastic and global variance reduction algorithms. NeurIPS. 2022. arXiv:2201.13409.

T. Moreau, M. Massias, A. Gramfort, P. Ablin, P. Bannier, B. Charlier, M. Dagréou, T. Dupré la Tour, G. Durif, C. Dantas, Q. Klopfenstein, J. Larsson, E. Lai, T. Lefort, B. Malézieux, B. Moufad, B. Nguyen, A. Rakotomamonjy, Z. Ramzi, J. Salmon, S. Vaiter. Benchopt: Reproducible, efficient and collaborative optimization benchmarks. NeurIPS. 2022. arXiv:2206.13424. (code).

Q. Bertrand, Q. Klopfenstein, M. Massias, M. Blondel, S. Vaiter, A. Gramfort, J. Salmon. Implicit differentiation for fast hyperparameter selection in non-smooth convex learning. J Mach Learn Res. 23(149):1–43. 2022. arXiv:2105.01637. (code).

N. Keriven, S. Vaiter. Sparse and Smooth: improved guarantees for Spectral Clustering in the Dynamic Stochastic Block Model. Electron J Statist. 16(1):1330–1366. 2022. arXiv:2002.02892. doi:10.1214/22-EJS1986.


Y. Traonmilin, R. Gribonval, S. Vaiter. A theory of optimal convex regularization for low-dimensional recovery. Preprint. 2021. HAL (03467123).

N. Keriven, A. Bietti, S. Vaiter. On the Universality of Graph Neural Networks on Large Random Graphs. NeurIPS. 2021. arXiv:2105.13099.

Q. Klopfenstein, S. Vaiter. Linear Support Vector Regression with Linear Constraints. Mach Learn. 110(7):1939–1974. 2021. arXiv:1911.02306. doi:10.1007/s10994-021-06018-2. (code).

B. Pascal, S. Vaiter, N. Pustelnik, P. Abry. Automated data-driven selection of the hyperparameters for Total-Variation based texture segmentation. J Math Imaging Vis. 63:923–952. 2021. arXiv:2004.09434. (code).

C. Deledalle, N. Papadakis, J. Salmon, S. Vaiter. Block based refitting in \(\ell_{12}\) sparse regularisation. J Math Imaging Vis. 63(2):216–236. 2021. arXiv:1910.11186. doi:10.1007/s10851-020-00993-2. (pdf).

S. Vaiter. From optimization to algorithmic differentiation: a graph detour. 2021. TEL (03159975). (pdf).


M. Massias, S. Vaiter, A. Gramfort, J. Salmon. Dual Extrapolation for Sparse Generalized Linear Models. J Mach Learn Res. 21(234):1–33. 2020. arXiv:1907.05830. (pdf). (code).

N. Keriven, A. Bietti, S. Vaiter. Convergence and Stability of Graph Convolutional Networks on Large Random Graphs. NeurIPS. 2020. arXiv:2006.01868. (pdf). (code).

Q. Bertrand, Q. Klopfenstein, M. Blondel, S. Vaiter, A. Gramfort, J. Salmon. Implicit differentiation of Lasso-type models for hyperparameter optimization. ICML. 2020. arXiv:2002.08943. (pdf). (code).


C. Deledalle, N. Papadakis, J. Salmon, S. Vaiter. Refitting solutions promoted by \(\ell_{12}\) sparse analysis regularization with block penalties. SSVM. 2019. arXiv:1903.00741.

M. Massias, S. Vaiter, A. Gramfort, J. Salmon. Exploiting regularity in sparse Generalized Linear Models. SPARS. 2019. HAL (02288859). (code).

A. Barbara, A. Jourani, S. Vaiter. Maximal Solutions of Sparse Analysis Regularization. J Optim Theory Appl. 180(2):371–396. 2019. arXiv:1703.00192. doi:10.1007/s10957-018-1385-3. (pdf).


Y. Traonmilin, S. Vaiter, R. Gribonval. Is the 1-norm the best convex sparse regularization? iTWIST. 2018. arXiv:1806.08690. (pdf).

Y. Traonmilin, S. Vaiter. Optimality of 1-norm regularization among weighted 1-norms for sparse recovery: a case study on how to find optimal regularizations. NCMIP. 2018. arXiv:1803.00773. (pdf).

S. Vaiter, G. Peyré, J. Fadili. Model Consistency of Partly Smooth Regularizers. IEEE Trans Inform Theory. 64(3):1725–1737. 2018. arXiv:1405.1004. doi:10.1109/TIT.2017.2713822. (pdf).


A. Chambolle, P. Tan, S. Vaiter. Accelerated Alternating Descent Methods for Dykstra-like problems. J Math Imaging Vis. 59(3):481–497. 2017. HAL (01346532). doi:10.1007/s10851-017-0724-6. (pdf).

P. Bellec, J. Salmon, S. Vaiter. A Sharp Oracle Inequality for Graph-Slope. Electron J Statist. 11(2):4851–4870. 2017. arXiv:1706.06977. doi:10.1214/17-EJS1364. (pdf).

S. Vaiter, C. Deledalle, G. Peyré, J. Fadili, C. Dossal. The Degrees of Freedom of Partly Smooth Regularizers. Ann Inst Stat Math. 69(4):791–832. 2017. arXiv:1404.5557. doi:10.1007/s10463-016-0563-z. (pdf).

C. Deledalle, N. Papadakis, J. Salmon, S. Vaiter. Characterizing the maximum parameter of the total-variation denoising through the pseudo-inverse of the divergence. SPARS. 2017. arXiv:1612.03080. (pdf).

C. Deledalle, N. Papadakis, J. Salmon, S. Vaiter. CLEAR: Covariant LEAst-square Re-fitting with applications to image restoration. SIAM J Imaging Sci. 10(1):243–284. 2017. arXiv:1606.05158. doi:10.1137/16M1080318. (pdf).


S. Vaiter, M. Golbabaee, J. Fadili, G. Peyré. Model Selection with Low Complexity Priors. Inf. Inference. 4(3):230–287. 2015. arXiv:1307.2342. doi:10.1093/imaiai/iav005. (pdf).

S. Vaiter, G. Peyré, J. Fadili. Low Complexity Regularization of Linear Inverse Problems. 2015. arXiv:1407.1598. doi:10.1007/978-3-319-19749-4. (pdf).


C. Deledalle, S. Vaiter, J. Fadili, G. Peyré. Stein Unbiased GrAdient estimator of the Risk (SUGAR) for multiple parameter selection. SIAM J Imaging Sci. 7(4):2448–2487. 2014. arXiv:1405.1164. doi:10.1137/140968045. (pdf). (code).

S. Vaiter. Low Complexity Regularizations of Inverse Problems. 2014. TEL (01026398). (pdf).


S. Vaiter, C. Deledalle, G. Peyré, C. Dossal, J. Fadili. Local Behavior of Sparse Analysis Regularization: Applications to Risk Estimation. Appl Comput Harmon Anal. 35(3):433–451. 2013. arXiv:1204.3212. doi:10.1016/j.acha.2012.11.006. (pdf).

S. Vaiter, G. Peyré, J. Fadili, C. Deledalle, C. Dossal. The degrees of freedom of the group Lasso for a general design. SPARS. 2013. HAL (00926929). (pdf).

J. Fadili, G. Peyré, S. Vaiter, C. Deledalle, J. Salmon. Stable Recovery with Analysis Decomposable Priors. SAMPTA. 2013. arXiv:1304.4407. (pdf).

S. Vaiter, G. Peyré, J. Fadili. Robust Polyhedral Regularization. SAMPTA. 2013. arXiv:1304.6033. (pdf).

S. Vaiter, G. Peyré, C. Dossal, J. Fadili. Robust Sparse Analysis Regularization. IEEE Trans Inform Theory. 59(4):2001–2016. 2013. arXiv:1109.6222. doi:10.1109/TIT.2012.2233859. (pdf). (code).


C. Deledalle, S. Vaiter, G. Peyré, J. Fadili, C. Dossal. Unbiased Risk Estimation for Sparse Analysis Regularization. ICIP. 2012. HAL (00662718). doi:10.1109/ICIP.2012.6467544. (pdf).

C. Deledalle, S. Vaiter, G. Peyré, J. Fadili, C. Dossal. Proximal Splitting Derivatives for Risk Estimation. NCMIP. 2012. HAL (00670213). doi:10.1088/1742-6596/386/1/012003. (pdf).

S. Vaiter, C. Deledalle, G. Peyré, J. Fadili, C. Dossal. The Degrees of Freedom of the Group Lasso. ICML Workshop. 2012. arXiv:1205.1481. (pdf).

C. Deledalle, S. Vaiter, G. Peyré, J. Fadili, C. Dossal. Risk estimation for matrix recovery with spectral regularization. ICML Workshop. 2012. arXiv:1205.1482. (pdf).