# Publications

## 2026

- S. Jaffard, S. Vaiter, and P. Reynaud-Bouret. **CHANI: Correlation-based Hawkes Aggregation of Neurons with bio-Inspiration.** *Journal of Machine Learning Research.* [arXiv](https://arxiv.org/abs/2405.18828)
- J. Bolte, T. Lê, E. Pauwels, and S. Vaiter. **Bilevel gradient methods and Morse parametric qualification.** *Mathematics of Operations Research.* [DOI](https://doi.org/10.1287/moor.2025.0914) [arXiv](https://arxiv.org/abs/2502.09074)
- S. Tanji, S. Vaiter, and Y. Laguel. **Fairness-informed Pareto Optimization: An Efficient Bilevel Framework.** *Preprint.* [arXiv](https://arxiv.org/abs/2601.13448)
- M. Taimeskhanov, S. Vaiter, and D. Garreau. **Towards Understanding Steering Strength.** *Preprint.* [arXiv](https://arxiv.org/abs/2602.02712)
- M. Rando, and S. Vaiter. **ZOBA: An Efficient Single-loop Zeroth-order Bilevel Optimization Algorithm.** *Preprint.* [arXiv](https://arxiv.org/abs/2601.21836)
- G. Lauga, and S. Vaiter. **Characterizations of inexact proximal operators.** *Preprint.* [arXiv](https://arxiv.org/abs/2602.02022)

## 2025

- B. Pascal, and S. Vaiter. **Risk Estimate under a Nonstationary Autoregressive Model for Data-Driven Reproduction Number Estimation.** *Signal Processing.* [DOI](https://doi.org/10.1016/j.sigpro.2025.110246) [arXiv](https://arxiv.org/abs/2409.14937)
- C. Daniele, S. Villa, S. Vaiter, and L. Calatroni. **Deep Equilibrium models for Poisson Imaging Inverse problems via Mirror Descent.** *(accepted to) SIAM Journal on Imaging Sciences.* [arXiv](https://arxiv.org/abs/2507.11461)
- J. Bolte, T. Lê, E. Pauwels, and S. Vaiter. **Geometric and computational hardness of bilevel programming.** *Mathematical Programming.* [DOI](https://doi.org/10.1007/s10107-025-02229-w) [arXiv](https://arxiv.org/abs/2407.12372)
- R. Kawata, Y. Song, A. Bietti, N. Nishikawa, T. Suzuki, S. Vaiter, and D. Wu. **From Shortcut to Induction Head: How Data Diversity Shapes Algorithm Selection in Transformers.** *Advances in Neural Information Processing Systems (NeurIPS).* [arXiv](https://arxiv.org/abs/2512.18634)
- F. E. Khoury, E. Pauwels, S. Vaiter, and M. Arbel. **Learning Theory for Kernel Bilevel Optimization.** *Advances in Neural Information Processing Systems (NeurIPS).* [arXiv](https://arxiv.org/abs/2502.08457)
- L. Chapel, R. Tavenard, and S. Vaiter. **Differentiable Generalized Sliced Wasserstein Plans.** *Advances in Neural Information Processing Systems (NeurIPS).* [arXiv](https://arxiv.org/abs/2505.22049)
- R. Chhaibi, S. Gratton, and S. Vaiter. **Faster Computation of Entropic Optimal Transport via Stable Low Frequency Modes.** *Preprint.* [arXiv](https://arxiv.org/abs/2506.14780)

## 2024

- Y. Traonmilin, R. Gribonval, and S. Vaiter. **A theory of optimal convex regularization for low-dimensional recovery.** *Information and Inference: A Journal of the IMA.* [arXiv](https://arxiv.org/abs/2112.03540)
- Q. Klopfenstein, Q. Bertrand, A. Gramfort, J. Salmon, and S. Vaiter. **Model identification and local linear convergence of coordinate descent.** *Optimization Letters.* [arXiv](https://arxiv.org/abs/2010.11825)
- H. Ghanem, S. Vaiter, and N. Keriven. **Gradient Scarcity with Bilevel Optimization for Graph Learning.** *Transactions on Machine Learning Research.* [arXiv](https://arxiv.org/abs/2303.13964)
- M. Cordonnier, N. Keriven, N. Tremblay, and S. Vaiter. **Convergence of Message Passing Graph Neural Networks with Generic Aggregation On Large Random Graphs.** *Journal of Machine Learning Research.* [arXiv](https://arxiv.org/abs/2304.11140)
- S. Jaffard, S. Vaiter, A. Muzy, and P. Reynaud-Bouret. **Provable local learning rule by expert aggregation for a Hawkes network.** *International Conference on Artificial Intelligence and Statistics (AISTATS).* [PDF](pdfs/jaffard2024provable.pdf) [arXiv](https://arxiv.org/abs/2304.08061)
- F. Iutzeler, E. Pauwels, and S. Vaiter. **Derivatives of Stochastic Gradient Descent in parametric optimization.** *Advances in Neural Information Processing Systems (NeurIPS).* [arXiv](https://arxiv.org/abs/2405.15894)
- M. Dagréou, T. Moreau, S. Vaiter, and P. Ablin. **A Near-Optimal Algorithm for Bilevel Empirical Risk Minimization.** *International Conference on Artificial Intelligence and Statistics (AISTATS).* [PDF](pdfs/dagreou2024near.pdf) [arXiv](https://arxiv.org/abs/2302.08766)
- M. Dagréou, P. Ablin, S. Vaiter, and T. Moreau. **How to compute Hessian-vector products?** *International Conference on Learning Representations (ICLR) Blogposts Track.*

## 2023

- E. Pauwels, and S. Vaiter. **The Derivatives of Sinkhorn-Knopp Converge.** *SIAM Journal on Optimization.* [DOI](https://doi.org/10.1137/22M1512703) [PDF](pdfs/pauwels2023derivatives.pdf) [arXiv](https://arxiv.org/abs/2207.12717)
- H. Ghanem, J. Salmon, N. Keriven, and S. Vaiter. **Supervised learning of analysis-sparsity priors with automatic differentiation.** *IEEE Signal Processing Letters.* [DOI](https://doi.org/10.1109/LSP.2023.3244511) [arXiv](https://arxiv.org/abs/2112.07990)
- X. Dupuis, and S. Vaiter. **The Geometry of Sparse Analysis Regularization.** *SIAM Journal on Optimization.* [DOI](https://doi.org/10.1137/19M1271877) [PDF](pdfs/dupuis2023geometry.pdf) [arXiv](https://arxiv.org/abs/1907.01769) [HAL](https://hal.science/hal-02169356)
- N. Keriven, and S. Vaiter. **What functions can Graph Neural Networks compute on random graphs? The role of Positional Encoding.** *Advances in Neural Information Processing Systems (NeurIPS).* [PDF](pdfs/keriven2023what.pdf) [arXiv](https://arxiv.org/abs/2305.14814)
- R. Catellier, S. Vaiter, and D. Garreau. **On the Robustness of Text Vectorizers.** *International Conference on Machine Learning (ICML).* [arXiv](https://arxiv.org/abs/2303.07203)
- J. Bolte, E. Pauwels, and S. Vaiter. **One-step differentiation of iterative algorithms.** *Advances in Neural Information Processing Systems (NeurIPS).* [PDF](pdfs/bolte2023one.pdf) [arXiv](https://arxiv.org/abs/2305.13768)
- M. Dagréou, T. Moreau, S. Vaiter, and P. Ablin. **Borne inférieure de complexité et algorithme quasi-optimal pour la minimisation de risque empirique bi-niveaux.** *Colloque Francophone de Traitement du Signal et des Images (GRETSI).*
- M. Cordonnier, N. Keriven, N. Tremblay, and S. Vaiter. **Convergence of Message Passing Graph Neural Networks with Generic Aggregation on Random Graphs.** *Colloque Francophone de Traitement du Signal et des Images (GRETSI).*
- M. Cordonnier, N. Keriven, N. Tremblay, and S. Vaiter. **Convergence of Message Passing Graph Neural Networks with Generic Aggregation on Random Graphs.** *Graph Signal Processing workshop.*

## 2022

- N. Keriven, and S. Vaiter. **Sparse and Smooth: improved guarantees for Spectral Clustering in the Dynamic Stochastic Block Model.** *Electronic Journal of Statistics.* [DOI](https://doi.org/10.1214/22-EJS1986) [PDF](pdfs/keriven2022dsbm.pdf) [arXiv](https://arxiv.org/abs/2002.02892) [HAL](https://hal.science/hal-02484970)
- Q. Bertrand, Q. Klopfenstein, M. Massias, M. Blondel, S. Vaiter, A. Gramfort, and J. Salmon. **Implicit differentiation for fast hyperparameter selection in non-smooth convex learning.** *Journal of Machine Learning Research.* [PDF](pdfs/bertrand2022implicit.pdf) [arXiv](https://arxiv.org/abs/2105.01637)
- T. Moreau, M. Massias, A. Gramfort, P. Ablin, P.-A. Bannier, B. Charlier, M. Dagréou, T. Dupré la Tour, G. Durif, C. F. Dantas, Q. Klopfenstein, J. Larsson, E. Lai, T. Lefort, B. Malézieux, B. Moufad, B. T. Nguyen, A. Rakotomamonjy, Z. Ramzi, J. Salmon, and S. Vaiter. **Benchopt: Reproducible, efficient and collaborative optimization benchmarks.** *Advances in Neural Information Processing Systems (NeurIPS).* [PDF](pdfs/moreau2022benchopt.pdf) [arXiv](https://arxiv.org/abs/2206.13424)
- M. Dagréou, P. Ablin, S. Vaiter, and T. Moreau. **A framework for bilevel optimization that enables stochastic and global variance reduction algorithms.** *Advances in Neural Information Processing Systems (NeurIPS).* [arXiv](https://arxiv.org/abs/2201.13409)
- J. Bolte, E. Pauwels, and S. Vaiter. **Automatic differentiation of nonsmooth iterative algorithms.** *Advances in Neural Information Processing Systems (NeurIPS).* [PDF](pdfs/bolte2022automatic.pdf) [arXiv](https://arxiv.org/abs/2206.00457)
- M. Dagréou, P. Ablin, S. Vaiter, and T. Moreau. **Algorithmes stochastiques et réduction de variance grâce à un nouveau cadre pour l'optimisation bi-niveaux.** *Colloque Francophone de Traitement du Signal et des Images (GRETSI).*

## 2021

- B. Pascal, S. Vaiter, N. Pustelnik, and P. Abry. **Automated data-driven selection of the hyperparameters for Total-Variation based texture segmentation.** *Journal of Mathematical Imaging and Vision.* [PDF](pdfs/pascal2021automated.pdf) [arXiv](https://arxiv.org/abs/2004.09434)
- Q. Klopfenstein, and S. Vaiter. **Linear Support Vector Regression with Linear Constraints.** *Machine Learning.* [DOI](https://doi.org/10.1007/s10994-021-06018-2) [arXiv](https://arxiv.org/abs/1911.02306) [HAL](https://hal.science/hal-02349160)
- C.-A. Deledalle, N. Papadakis, J. Salmon, and S. Vaiter. **Block based refitting in ℓ12 sparse regularisation.** *Journal of Mathematical Imaging and Vision.* [DOI](https://doi.org/10.1007/s10851-020-00993-2) [PDF](pdfs/deledalle2021block.pdf) [arXiv](https://arxiv.org/abs/1910.11186) [HAL](https://hal.science/hal-02330441)
- N. Keriven, A. Bietti, and S. Vaiter. **On the Universality of Graph Neural Networks on Large Random Graphs.** *Advances in Neural Information Processing Systems (NeurIPS).* [arXiv](https://arxiv.org/abs/2105.13099)
- S. Vaiter. **From optimization to algorithmic differentiation: a graph detour.** *HDR Thesis.* [PDF](pdfs/vaiter2021hdr.pdf)

## 2020

- M. Massias, S. Vaiter, A. Gramfort, and J. Salmon. **Dual Extrapolation for Sparse Generalized Linear Models.** *Journal of Machine Learning Research.* [PDF](pdfs/massias2020dual.pdf) [arXiv](https://arxiv.org/abs/1907.05830) [HAL](https://hal.science/hal-02263500)
- N. Keriven, A. Bietti, and S. Vaiter. **Convergence and Stability of Graph Convolutional Networks on Large Random Graphs.** *Advances in Neural Information Processing Systems (NeurIPS).* [PDF](pdfs/keriven2020convergence.pdf) [arXiv](https://arxiv.org/abs/2006.01868)
- Q. Bertrand, Q. Klopfenstein, M. Blondel, S. Vaiter, A. Gramfort, and J. Salmon. **Implicit differentiation of Lasso-type models for hyperparameter optimization.** *International Conference on Machine Learning (ICML).* [PDF](pdfs/bertrand2020implicit.pdf) [arXiv](https://arxiv.org/abs/2002.08943)

## 2019

- A. Barbara, A. Jourani, and S. Vaiter. **Maximal Solutions of Sparse Analysis Regularization.** *Journal of Optimization Theory and Applications.* [DOI](https://doi.org/10.1007/s10957-018-1385-3) [PDF](pdfs/barbara2019maximal.pdf) [arXiv](https://arxiv.org/abs/1703.00192) [HAL](https://hal.science/hal-01467965)
- C.-A. Deledalle, N. Papadakis, J. Salmon, and S. Vaiter. **Refitting solutions promoted by ℓ12 sparse analysis regularization with block penalties.** *International Conference on Scale Space and Variational Methods in Computer Vision (SSVM).* [arXiv](https://arxiv.org/abs/1903.00741) [HAL](https://hal.science/hal-02059006)
- M. Massias, S. Vaiter, A. Gramfort, and J. Salmon. **Exploiting regularity in sparse Generalized Linear Models.** *Signal Processing with Adaptive Sparse Structured Representations (SPARS).* [HAL](https://hal.science/hal-02288859)

## 2018

- S. Vaiter, G. Peyré, and J. Fadili. **Model Consistency of Partly Smooth Regularizers.** *IEEE Transactions on Information Theory.* [DOI](https://doi.org/10.1109/TIT.2017.2713822) [PDF](pdfs/vaiter2018model.pdf) [arXiv](https://arxiv.org/abs/1405.1004) [HAL](https://hal.science/hal-00987293)
- Y. Traonmilin, and S. Vaiter. **Optimality of 1-norm regularization among weighted 1-norms for sparse recovery: a case study on how to find optimal regularizations.** *New Computational Methods for Inverse Problems (NCMIP).* [PDF](pdfs/traonmilin2018optimality.pdf) [arXiv](https://arxiv.org/abs/1803.00773) [HAL](https://hal.science/hal-01720871)
- Y. Traonmilin, S. Vaiter, and R. Gribonval. **Is the 1-norm the best convex sparse regularization?** *iTWIST.* [PDF](pdfs/traonmilin2018is.pdf) [arXiv](https://arxiv.org/abs/1806.08690) [HAL](https://hal.science/hal-01819219)

## 2017

- S. Vaiter, C.-A. Deledalle, G. Peyré, J. Fadili, and C. Dossal. **The Degrees of Freedom of Partly Smooth Regularizers.** *Annals of the Institute of Statistical Mathematics.* [DOI](https://doi.org/10.1007/s10463-016-0563-z) [PDF](pdfs/vaiter2017dof.pdf) [arXiv](https://arxiv.org/abs/1404.5557) [HAL](https://hal.science/hal-00981634)
- C.-A. Deledalle, N. Papadakis, J. Salmon, and S. Vaiter. **CLEAR: Covariant LEAst-square Re-fitting with applications to image restoration.** *SIAM Journal on Imaging Sciences.* [DOI](https://doi.org/10.1137/16M1080318) [PDF](pdfs/deledalle2017clear.pdf) [arXiv](https://arxiv.org/abs/1606.05158) [HAL](https://hal.science/hal-01333295)
- A. Chambolle, P. Tan, and S. Vaiter. **Accelerated Alternating Descent Methods for Dykstra-like problems.** *Journal of Mathematical Imaging and Vision.* [DOI](https://doi.org/10.1007/s10851-017-0724-6) [PDF](pdfs/chambolle2017accelerated.pdf) [HAL](https://hal.science/hal-01346532)
- P. Bellec, J. Salmon, and S. Vaiter. **A Sharp Oracle Inequality for Graph-Slope.** *Electronic Journal of Statistics.* [DOI](https://doi.org/10.1214/17-EJS1364) [PDF](pdfs/bellec2017sharp.pdf) [arXiv](https://arxiv.org/abs/1706.06977) [HAL](https://hal.science/hal-01544680)
- C.-A. Deledalle, N. Papadakis, J. Salmon, and S. Vaiter. **Characterizing the maximum parameter of the total-variation denoising through the pseudo-inverse of the divergence.** *Signal Processing with Adaptive Sparse Structured Representations (SPARS).* [PDF](pdfs/deledalle2017characterizing.pdf) [arXiv](https://arxiv.org/abs/1612.03080) [HAL](https://hal.science/hal-01412059)

## 2015

- S. Vaiter, M. Golbabaee, J. Fadili, and G. Peyré. **Model Selection with Low Complexity Priors.** *Information and Inference: A Journal of the IMA.* [DOI](https://doi.org/10.1093/imaiai/iav005) [PDF](pdfs/vaiter2015model.pdf) [arXiv](https://arxiv.org/abs/1307.2342) [HAL](https://hal.science/hal-00842603)
- S. Vaiter, G. Peyré, and J. Fadili. **Low Complexity Regularization of Linear Inverse Problems.** *Book chapter in Sampling Theory, a Renaissance.* [DOI](https://doi.org/10.1007/978-3-319-19749-4) [PDF](pdfs/vaiter2015low.pdf) [arXiv](https://arxiv.org/abs/1407.1598) [HAL](https://hal.science/hal-01018927)

## 2014

- C.-A. Deledalle, S. Vaiter, J. Fadili, and G. Peyré. **Stein Unbiased GrAdient estimator of the Risk (SUGAR) for multiple parameter selection.** *SIAM Journal on Imaging Sciences.* [DOI](https://doi.org/10.1137/140968045) [PDF](pdfs/deledalle2014sugar.pdf) [arXiv](https://arxiv.org/abs/1405.1164) [HAL](https://hal.science/hal-00987295)
- S. Vaiter. **Low Complexity Regularizations of Inverse Problems.** *PhD Thesis.* [PDF](pdfs/vaiter2014phd.pdf) [SITE](vaiter2014phd/)

## 2013

- S. Vaiter, G. Peyré, C. Dossal, and J. Fadili. **Robust Sparse Analysis Regularization.** *IEEE Transactions on Information Theory.* [DOI](https://doi.org/10.1109/TIT.2012.2233859) [PDF](pdfs/vaiter2013robust.pdf) [arXiv](https://arxiv.org/abs/1109.6222) [HAL](https://hal.science/hal-00627452)
- S. Vaiter, C.-A. Deledalle, G. Peyré, C. Dossal, and J. Fadili. **Local Behavior of Sparse Analysis Regularization: Applications to Risk Estimation.** *Applied and Computational Harmonic Analysis.* [DOI](https://doi.org/10.1016/j.acha.2012.11.006) [PDF](pdfs/vaiter2013local.pdf) [arXiv](https://arxiv.org/abs/1204.3212) [HAL](https://hal.science/hal-00687751)
- S. Vaiter, G. Peyré, and J. Fadili. **Robust Polyhedral Regularization.** *SAMPTA.* [PDF](pdfs/vaiter2013polyhedral.pdf) [arXiv](https://arxiv.org/abs/1304.6033) [HAL](https://hal.science/hal-00816377)
- J. Fadili, G. Peyré, S. Vaiter, C.-A. Deledalle, and J. Salmon. **Stable Recovery with Analysis Decomposable Priors.** *SAMPTA.* [PDF](pdfs/fadili2013stableb.pdf) [arXiv](https://arxiv.org/abs/1304.4407)
- S. Vaiter, G. Peyré, and J. Fadili. **Robustesse au bruit des régularisations polyhédrales.** *Colloque Francophone de Traitement du Signal et des Images (GRETSI).*
- J. Fadili, G. Peyré, S. Vaiter, C.-A. Deledalle, and J. Salmon. **Reconstruction Stable par Régularisation Décomposable Analyse.** *Colloque Francophone de Traitement du Signal et des Images (GRETSI).*
- S. Vaiter, G. Peyré, J. Fadili, C.-A. Deledalle, and C. Dossal. **The degrees of freedom of the group Lasso for a general design.** *Signal Processing with Adaptive Sparse Structured Representations (SPARS).* [PDF](pdfs/vaiter2013dof.pdf) [HAL](https://hal.science/hal-00926929)

## 2012

- C.-A. Deledalle, S. Vaiter, G. Peyré, J. Fadili, and C. Dossal. **Unbiased Risk Estimation for Sparse Analysis Regularization.** *International Conference on Image Processing (ICIP).* [DOI](https://doi.org/10.1109/ICIP.2012.6467544) [PDF](pdfs/deledalle2012unbiased.pdf) [HAL](https://hal.science/hal-00662718)
- C.-A. Deledalle, S. Vaiter, G. Peyré, J. Fadili, and C. Dossal. **Proximal Splitting Derivatives for Risk Estimation.** *New Computational Methods for Inverse Problems (NCMIP).* [DOI](https://doi.org/10.1088/1742-6596/386/1/012003) [PDF](pdfs/deledalle2012proximal.pdf) [HAL](https://hal.science/hal-0670213)
- S. Vaiter, C.-A. Deledalle, G. Peyré, J. Fadili, and C. Dossal. **The Degrees of Freedom of the Group Lasso.** *International Conference on Machine Learning (ICML) Workshop Sparsity, Dictionaries and Projections in Machine Learning and Signal Processing.* [PDF](pdfs/vaiter2012dof.pdf) [arXiv](https://arxiv.org/abs/1205.1481) [HAL](https://hal.science/hal-00695292)
- C.-A. Deledalle, S. Vaiter, G. Peyré, J. Fadili, and C. Dossal. **Risk estimation for matrix recovery with spectral regularization.** *International Conference on Machine Learning (ICML) Workshop Sparsity, Dictionaries and Projections in Machine Learning and Signal Processing.* [PDF](pdfs/deledalle2012risk.pdf) [arXiv](https://arxiv.org/abs/1205.1482) [HAL](https://hal.science/hal-00695326)
