List of publications
See my Google Scholar profile for bibliometric data.
2024
M. Cordonnier, N. Keriven, N. Tremblay, S. Vaiter. Convergence of Message Passing Graph Neural Networks with Generic Aggregation On Large Random Graphs. Journal of Machine Learning Research (to appear), 41pp., 2024+. arXiv:2304.11140.
H. Ghanem, S. Vaiter, N. Keriven. Gradient Scarcity with Bilevel Optimization for Graph Learning. Transactions on Machine Learning Research, 22pp., 2024. arXiv:2303.13964. openreview. Featured Certification
Y. Traonmilin, R. Gribonval, S. Vaiter. A theory of optimal convex regularization for low-dimensional recovery. Information and Inference: A Journal of the IMA, 13(2), 66pp., 2024. arXiv:2112.03540. doi:10.1093/imaiai/iaae013.
Q. Klopfenstein, Q. Bertrand, A. Gramfort, J. Salmon, S. Vaiter. Local linear convergence of proximal coordinate descent algorithm. Optimization Letters, 18:135—154, 2024. arXiv:2010.11825. doi:10.1007/s11590-023-01976-z.
F. Iutzeler, E. Pauwels, S. Vaiter. Derivatives of Stochastic Gradient Descent in parametric optimization. NeurIPS, 24pp., 2024. arXiv:2405.15894.
S. Jaffard, S. Vaiter, A. Muzy, P. Reynaud-Bouret. Provable local learning rule by expert aggregation for a Hawkes network. AISTATS, 26pp., 2024. arXiv:2304.08061. (pdf).
M. Dagréou, T. Moreau, S. Vaiter, P. Ablin. A Lower Bound and a Near-Optimal Algorithm for Bilevel Empirical Risk Minimization. AISTATS, 36pp., 2024. arXiv:2302.08766. (pdf).
B. Pascal, S. Vaiter. Risk Estimate under a Nonstationary Autoregressive Model for Data-Driven Reproduction Number Estimation. Preprint, 29pp., 2024. arXiv:2409.14937.
J. Bolte, T. Lê, E. Pauwels, S. Vaiter. Geometric and computational hardness of bilevel programming. Preprint, 32pp., 2024. arXiv:2407.12372.
S. Jaffard, P. Reynaud-Bouret, S. Vaiter. CHANI: Correlation-based Hawkes Aggregation of Neurons with bio-Inspiration. Preprint, 50pp., 2024. arXiv:2405.18828.
2023
E. Pauwels, S. Vaiter. The Derivatives of Sinkhorn-Knopp Converge. SIAM Journal on Optimization, 33(3):1494—1517, 2023. arXiv:2207.12717. doi:10.1137/22M1512703. (pdf).
X. Dupuis, S. Vaiter. The Geometry of Sparse Analysis Regularization. SIAM Journal on Optimization, 33(2):842—867, 2023. arXiv:1907.01769. doi:10.1137/19M1271877. (pdf).
H. Ghanem, J. Salmon, N. Keriven, S. Vaiter. Supervised learning of analysis-sparsity priors with automatic differentiation. IEEE Signal Processing Letters, 30:339—343, 2023. arXiv:2112.07990. doi:10.1109/LSP.2023.3244511.
N. Keriven, S. Vaiter. What functions can Graph Neural Networks compute on random graphs? The role of Positional Encoding. NeurIPS, 27pp., 2023. arXiv:2305.14814. (pdf).
J. Bolte, E. Pauwels, S. Vaiter. One-step differentiation of iterative algorithms. NeurIPS, 15pp., 2023. arXiv:2305.13768. (pdf). Spotlight paper.
R. Catellier, S. Vaiter, D. Garreau. On the Robustness of Text Vectorizers. ICML, 33pp., 2023. arXiv:2303.07203. (pdf).
2022
Q. Bertrand, Q. Klopfenstein, M. Massias, M. Blondel, S. Vaiter, A. Gramfort, J. Salmon. Implicit differentiation for fast hyperparameter selection in non-smooth convex learning. Journal of Machine Learning Research, 23(149):1—43, 2022. arXiv:2105.01637. (pdf). (code).
N. Keriven, S. Vaiter. Sparse and Smooth: improved guarantees for Spectral Clustering in the Dynamic Stochastic Block Model. Electronic Journal of Statistics, 16(1):1330—1366, 2022. arXiv:2002.02892. doi:10.1214/22-EJS1986. (pdf).
J. Bolte, E. Pauwels, S. Vaiter. Automatic differentiation of nonsmooth iterative algorithms. NeurIPS, 28pp., 2022. arXiv:2206.00457. (pdf).
M. Dagréou, P. Ablin, S. Vaiter, T. Moreau. A framework for bilevel optimization that enables stochastic and global variance reduction algorithms. NeurIPS, 43pp., 2022. arXiv:2201.13409. (pdf). (code). Oral paper.
T. Moreau, M. Massias, A. Gramfort, P. Ablin, P. Bannier, B. Charlier, M. Dagréou, T. Dupré la Tour, G. Durif, C. Dantas, Q. Klopfenstein, J. Larsson, E. Lai, T. Lefort, B. Malézieux, B. Moufad, B. Nguyen, A. Rakotomamonjy, Z. Ramzi, J. Salmon, S. Vaiter. Benchopt: Reproducible, efficient and collaborative optimization benchmarks. NeurIPS, 43pp., 2022. arXiv:2206.13424. (pdf). (code).
2021
Q. Klopfenstein, S. Vaiter. Linear Support Vector Regression with Linear Constraints. Machine Learning, 110(7):1939—1974, 2021. arXiv:1911.02306. doi:10.1007/s10994-021-06018-2. (pdf). (code).
B. Pascal, S. Vaiter, N. Pustelnik, P. Abry. Automated data-driven selection of the hyperparameters for Total-Variation based texture segmentation. Journal of Mathematical Imaging and Vision, 63:923—952, 2021. arXiv:2004.09434. (pdf). (code).
C. Deledalle, N. Papadakis, J. Salmon, S. Vaiter. Block based refitting in sparse regularisation. Journal of Mathematical Imaging and Vision, 63(2):216—236, 2021. arXiv:1910.11186. doi:10.1007/s10851-020-00993-2. (pdf).
N. Keriven, A. Bietti, S. Vaiter. On the Universality of Graph Neural Networks on Large Random Graphs. NeurIPS, 28pp., 2021. arXiv:2105.13099. (pdf).
S. Vaiter. From optimization to algorithmic differentiation: a graph detour. Habilitation thesis, 2021. TEL (03159975). (pdf).
2020
M. Massias, S. Vaiter, A. Gramfort, J. Salmon. Dual Extrapolation for Sparse Generalized Linear Models. Journal of Machine Learning Research, 21(234):1—33, 2020. arXiv:1907.05830. (pdf). (code).
N. Keriven, A. Bietti, S. Vaiter. Convergence and Stability of Graph Convolutional Networks on Large Random Graphs. NeurIPS, 25pp., 2020. arXiv:2006.01868. (pdf). (code). Spotlight paper.
Q. Bertrand, Q. Klopfenstein, M. Blondel, S. Vaiter, A. Gramfort, J. Salmon. Implicit differentiation of Lasso-type models for hyperparameter optimization. ICML, 23pp., 2020. arXiv:2002.08943. (pdf). (code).
2019
A. Barbara, A. Jourani, S. Vaiter. Maximal Solutions of Sparse Analysis Regularization. Journal of Optimization Theory and Applications, 180(2):371—396, 2019. arXiv:1703.00192. doi:10.1007/s10957-018-1385-3. (pdf).
C. Deledalle, N. Papadakis, J. Salmon, S. Vaiter. Refitting solutions promoted by sparse analysis regularization with block penalties. SSVM, 14pp., 2019. arXiv:1903.00741.
M. Massias, S. Vaiter, A. Gramfort, J. Salmon. Exploiting regularity in sparse Generalized Linear Models. SPARS, 7pp., 2019. HAL (02288859). (code).
2018
S. Vaiter, G. Peyré, J. Fadili. Model Consistency of Partly Smooth Regularizers. IEEE Transactions on Information Theory, 64(3):1725—1737, 2018. arXiv:1405.1004. doi:10.1109/TIT.2017.2713822. (pdf).
Y. Traonmilin, S. Vaiter. Optimality of 1-norm regularization among weighted 1-norms for sparse recovery: a case study on how to find optimal regularizations. NCMIP, 7pp., 2018. arXiv:1803.00773. (pdf).
Y. Traonmilin, S. Vaiter, R. Gribonval. Is the 1-norm the best convex sparse regularization? iTWIST, 3pp., 2018. arXiv:1806.08690. (pdf).
2017
P. Bellec, J. Salmon, S. Vaiter. A Sharp Oracle Inequality for Graph-Slope. Electronic Journal of Statistics, 11(2):4851—4870, 2017. arXiv:1706.06977. doi:10.1214/17-EJS1364. (pdf).
A. Chambolle, P. Tan, S. Vaiter. Accelerated Alternating Descent Methods for Dykstra-like problems. Journal of Mathematical Imaging and Vision, 59(3):481—497, 2017. HAL (01346532). doi:10.1007/s10851-017-0724-6. (pdf). (code).
C. Deledalle, N. Papadakis, J. Salmon, S. Vaiter. CLEAR: Covariant LEAst-square Re-fitting with applications to image restoration. SIAM Journal on Imaging Sciences, 10(1):243—284, 2017. arXiv:1606.05158. doi:10.1137/16M1080318. (pdf).
S. Vaiter, C. Deledalle, G. Peyré, J. Fadili, C. Dossal. The Degrees of Freedom of Partly Smooth Regularizers. Annals of the Institute of Statistical Mathematics, 69(4):791—832, 2017. arXiv:1404.5557. doi:10.1007/s10463-016-0563-z. (pdf).
C. Deledalle, N. Papadakis, J. Salmon, S. Vaiter. Characterizing the maximum parameter of the total-variation denoising through the pseudo-inverse of the divergence. SPARS, 2pp., 2017. arXiv:1612.03080. (pdf).
2015
S. Vaiter, M. Golbabaee, J. Fadili, G. Peyré. Model Selection with Low Complexity Priors. Information and Inference: A Journal of the IMA, 4(3):230—287, 2015. arXiv:1307.2342. doi:10.1093/imaiai/iav005. (pdf).
S. Vaiter, G. Peyré, J. Fadili. Low Complexity Regularization of Linear Inverse Problems. In: Pfander, G. (eds) Sampling Theory, a Renaissance, 103—152. Applied and Numerical Harmonic Analysis. Birkhäuser, Cham. 2015. arXiv:1407.1598. doi:10.1007/978-3-319-19749-4. (pdf).
2014
C. Deledalle, S. Vaiter, J. Fadili, G. Peyré. Stein Unbiased GrAdient estimator of the Risk (SUGAR) for multiple parameter selection. SIAM Journal on Imaging Sciences, 7(4):2448—2487, 2014. arXiv:1405.1164. doi:10.1137/140968045. (pdf). (code).
S. Vaiter. Low Complexity Regularizations of Inverse Problems. PhD thesis, 2014. TEL (01026398). (pdf).
2013
S. Vaiter, C. Deledalle, G. Peyré, C. Dossal, J. Fadili. Local Behavior of Sparse Analysis Regularization: Applications to Risk Estimation. Applied and Computational Harmonic Analysis, 35(3):433—451, 2013. arXiv:1204.3212. doi:10.1016/j.acha.2012.11.006. (pdf).
S. Vaiter, G. Peyré, C. Dossal, J. Fadili. Robust Sparse Analysis Regularization. IEEE Transactions on Information Theory, 59(4):2001—2016, 2013. arXiv:1109.6222. doi:10.1109/TIT.2012.2233859. (pdf). (code).
S. Vaiter, G. Peyré, J. Fadili, C. Deledalle, C. Dossal. The degrees of freedom of the group Lasso for a general design. SPARS, 1pp., 2013. HAL (00926929). (pdf).
J. Fadili, G. Peyré, S. Vaiter, C. Deledalle, J. Salmon. Stable Recovery with Analysis Decomposable Priors. SAMPTA, 4pp., 2013. arXiv:1304.4407. (pdf).
S. Vaiter, G. Peyré, J. Fadili. Robust Polyhedral Regularization. SAMPTA, 4pp., 2013. arXiv:1304.6033. (pdf).
2012
C. Deledalle, S. Vaiter, G. Peyré, J. Fadili, C. Dossal. Unbiased Risk Estimation for Sparse Analysis Regularization. ICIP, 4pp., 2012. HAL (00662718). doi:10.1109/ICIP.2012.6467544. (pdf).
C. Deledalle, S. Vaiter, G. Peyré, J. Fadili, C. Dossal. Proximal Splitting Derivatives for Risk Estimation. NCMIP, 4pp., 2012. HAL (0670213). doi:10.1088/1742-6596/386/1/012003. (pdf).
S. Vaiter, C. Deledalle, G. Peyré, J. Fadili, C. Dossal. The Degrees of Freedom of the Group Lasso. ICML Workshop Sparsity, Dictionaries and Projections in Machine Learning and Signal Processing, 4pp., 2012. arXiv:1205.1481. (pdf).
C. Deledalle, S. Vaiter, G. Peyré, J. Fadili, C. Dossal. Risk estimation for matrix recovery with spectral regularization. ICML Workshop Sparsity, Dictionaries and Projections in Machine Learning and Signal Processing, 7pp., 2012. arXiv:1205.1482. (pdf).