Publications
See my Google Scholar profile for bibliometric data. Download the BibTeX file here.
2025
- Risk Estimate under a Nonstationary Autoregressive Model for Data-Driven Reproduction Number Estimation. Signal Processing.
- Learning Theory for Kernel Bilevel Optimization. Advances in Neural Information Processing Systems (NeurIPS).
- Geometric and computational hardness of bilevel programming. Mathematical Programming.
- From Shortcut to Induction Head: How Data Diversity Shapes Algorithm Selection in Transformers. Advances in Neural Information Processing Systems (NeurIPS).
- Faster Computation of Entropic Optimal Transport via Stable Low Frequency Modes. Preprint.
- Differentiable Generalized Sliced Wasserstein Plans. Advances in Neural Information Processing Systems (NeurIPS).
- Deep Equilibrium models for Poisson Imaging Inverse problems via Mirror Descent. Preprint.
- Bilevel gradient methods and Morse parametric qualification. Preprint.
2024
- Provable local learning rule by expert aggregation for a Hawkes network. International Conference on Artificial Intelligence and Statistics (AISTATS).
- Model identification and local linear convergence of coordinate descent. Optimization Letters.
- How to compute Hessian-vector products? International Conference on Learning Representations (ICLR) Blogposts Track.
- Gradient Scarcity with Bilevel Optimization for Graph Learning. Transactions on Machine Learning Research.
- Derivatives of Stochastic Gradient Descent in parametric optimization. Advances in Neural Information Processing Systems (NeurIPS).
- Convergence of Message Passing Graph Neural Networks with Generic Aggregation On Large Random Graphs. Journal of Machine Learning Research.
- CHANI: Correlation-based Hawkes Aggregation of Neurons with bio-Inspiration. Preprint.
- A theory of optimal convex regularization for low-dimensional recovery. Information and Inference: A Journal of the IMA.
- A Near-Optimal Algorithm for Bilevel Empirical Risk Minimization. International Conference on Artificial Intelligence and Statistics (AISTATS).
2023
- What functions can Graph Neural Networks compute on random graphs? The role of Positional Encoding. Advances in Neural Information Processing Systems (NeurIPS).
- The Geometry of Sparse Analysis Regularization. SIAM Journal on Optimization.
- The Derivatives of Sinkhorn-Knopp Converge. SIAM Journal on Optimization.
- Supervised learning of analysis-sparsity priors with automatic differentiation. IEEE Signal Processing Letters.
- One-step differentiation of iterative algorithms. Advances in Neural Information Processing Systems (NeurIPS).
- On the Robustness of Text Vectorizers. International Conference on Machine Learning (ICML).
- Convergence of Message Passing Graph Neural Networks with Generic Aggregation on Random Graphs. Graph Signal Processing workshop.
- Convergence of Message Passing Graph Neural Networks with Generic Aggregation on Random Graphs. Colloque Francophone de Traitement du Signal et des Images.
- Borne inférieure de complexité et algorithme quasi-optimal pour la minimisation de risque empirique bi-niveaux. Colloque Francophone de Traitement du Signal et des Images.
2022
- Sparse and Smooth: improved guarantees for Spectral Clustering in the Dynamic Stochastic Block Model. Electronic Journal of Statistics.
- Implicit differentiation for fast hyperparameter selection in non-smooth convex learning. Journal of Machine Learning Research.
- Benchopt: Reproducible, efficient and collaborative optimization benchmarks. Advances in Neural Information Processing Systems (NeurIPS).
- Automatic differentiation of nonsmooth iterative algorithms. Advances in Neural Information Processing Systems (NeurIPS).
- Algorithmes stochastiques et réduction de variance grâce à un nouveau cadre pour l'optimisation bi-niveaux. Colloque Francophone de Traitement du Signal et des Images.
- A framework for bilevel optimization that enables stochastic and global variance reduction algorithms. Advances in Neural Information Processing Systems (NeurIPS).
2021
- On the Universality of Graph Neural Networks on Large Random Graphs. Advances in Neural Information Processing Systems (NeurIPS).
- Linear Support Vector Regression with Linear Constraints. Machine Learning.
- From optimization to algorithmic differentiation: a graph detour. PhD Thesis.
- Block based refitting in \(\ell_{12}\) sparse regularisation. Journal of Mathematical Imaging and Vision.
- Automated data-driven selection of the hyperparameters for Total-Variation based texture segmentation. Journal of Mathematical Imaging and Vision.
2020
- Implicit differentiation of Lasso-type models for hyperparameter optimization. International Conference on Machine Learning (ICML).
- Dual Extrapolation for Sparse Generalized Linear Models. Journal of Machine Learning Research.
- Convergence and Stability of Graph Convolutional Networks on Large Random Graphs. Advances in Neural Information Processing Systems (NeurIPS).
2019
- Refitting solutions promoted by \(\ell_{12}\) sparse analysis regularization with block penalties. International Conference on Scale Space and Variational Methods in Computer Vision (SSVM).
- Maximal Solutions of Sparse Analysis Regularization. Journal of Optimization Theory and Applications.
- Exploiting regularity in sparse Generalized Linear Models. Signal Processing with Adaptive Sparse Structured Representations (SPARS).
2018
- Optimality of 1-norm regularization among weighted 1-norms for sparse recovery: a case study on how to find optimal regularizations. New Computational Methods for Inverse Problems (NCMIP).
- Model Consistency of Partly Smooth Regularizers. IEEE Transactions on Information Theory.
- Is the 1-norm the best convex sparse regularization? iTWIST.
2017
- The Degrees of Freedom of Partly Smooth Regularizers. Annals of the Institute of Statistical Mathematics.
- Characterizing the maximum parameter of the total-variation denoising through the pseudo-inverse of the divergence. Signal Processing with Adaptive Sparse Structured Representations (SPARS).
- CLEAR: Covariant LEAst-square Re-fitting with applications to image restoration. SIAM Journal on Imaging Sciences.
- Accelerated Alternating Descent Methods for Dykstra-like problems. Journal of Mathematical Imaging and Vision.
- A Sharp Oracle Inequality for Graph-Slope. Electronic Journal of Statistics.
2015
- Model Selection with Low Complexity Priors. Information and Inference: A Journal of the IMA.
- Low Complexity Regularization of Linear Inverse Problems.
2014
- Stein Unbiased GrAdient estimator of the Risk (SUGAR) for multiple parameter selection. SIAM Journal on Imaging Sciences.
- Low Complexity Regularizations of Inverse Problems. PhD Thesis.
2013
- The degrees of freedom of the group Lasso for a general design. Signal Processing with Adaptive Sparse Structured Representations (SPARS).
- Stable Recovery with Analysis Decomposable Priors. SAMPTA.
- Robustesse au bruit des régularisations polyhédrales. Colloque Francophone de Traitement du Signal et des Images.
- Robust Sparse Analysis Regularization. IEEE Transactions on Information Theory.
- Robust Polyhedral Regularization. SAMPTA.
- Reconstruction Stable par Régularisation Décomposable Analyse. Colloque Francophone de Traitement du Signal et des Images.
- Local Behavior of Sparse Analysis Regularization: Applications to Risk Estimation. Applied and Computational Harmonic Analysis.
2012
- Unbiased Risk Estimation for Sparse Analysis Regularization. International Conference on Image Processing (ICIP).
- The Degrees of Freedom of the Group Lasso. International Conference on Machine Learning (ICML) Workshop on Sparsity, Dictionaries and Projections in Machine Learning and Signal Processing.
- Risk estimation for matrix recovery with spectral regularization. International Conference on Machine Learning (ICML) Workshop on Sparsity, Dictionaries and Projections in Machine Learning and Signal Processing.
- Proximal Splitting Derivatives for Risk Estimation. New Computational Methods for Inverse Problems (NCMIP).