Schedule of LearnGraph22

PDF version: here

Monday, November 7th, 2022

09:00am – 09:45am: Welcome to the CIRM!

09:45am – 10:30am: Sophie Achard (CNRS, Université Grenoble Alpes)

Title: Statistical comparisons of spatio-temporal networks (abstract) (slides)

10:30am – 11:00am: Coffee break

11:00am – 11:45am: Nicolas Verzelen (INRAE Montpellier)

Title: Ranking experts in crowdsourcing problems (abstract) (slides)

12:30pm – 02:00pm: Lunch

02:00pm – 04:00pm: Free discussion

The library (with several private rooms inside) is available for discussions, along with the Chapelle room and the billiard room.

04:00pm – 04:45pm: Robert Beinert (TU Berlin)

Title: Linear Gromov-Wasserstein distances (abstract) (slides)

04:45pm – 05:15pm: Coffee break

05:15pm – 06:00pm: David Shuman (Olin College)

Title: Signal Processing on the Permutahedron: Tight Spectral Frames for Ranked Data Analysis (abstract) (slides)

06:00pm – 07:30pm: Poster session I with welcome cocktail

07:30pm: Dinner

Tuesday, November 8th, 2022

09:00am – 09:45am: Haggai Maron (NVIDIA Research)

Title: Subgraph-based networks for expressive, efficient, and domain-independent graph learning (abstract) (slides)

09:45am – 10:30am: Dorina Thanou (EPFL)

Title: The inductive bias of message passing neural networks (abstract)

10:30am – 11:00am: Coffee break

11:00am – 11:45am: Simon Barthelmé (CNRS, Université Grenoble Alpes)

Title: Graph Sampling using Determinantal Point Processes (abstract) (slides)

12:30pm – 02:00pm: Lunch

02:00pm – 04:00pm: Free discussion

04:00pm – 04:45pm: Kimon Fountoulakis (University of Waterloo)

Title: Graph Attention Retrospective (abstract) (slides)

04:45pm – 05:15pm: Coffee break

05:15pm – 06:00pm: Titouan Vayer (Inria Lyon)

Title: Towards Compressive Recovery of Sparse Precision Matrices (abstract) (slides)

06:00pm – 06:45pm: Dimitri Van De Ville (EPFL)

Title: Graph Signal Processing to Quantify Brain Structure-Function Coupling (abstract) (slides)

07:30pm: Dinner

Wednesday, November 9th, 2022

09:00am – 09:45am: Pierre-Antoine Absil (UC Louvain)

Title: Graph-regularized matrix completion (abstract) (slides)

09:45am – 10:30am: Fabienne Castell (Aix-Marseille Université)

Title: Spectral estimation through Kirchhoff's random forests (abstract) (slides)

10:30am – 11:00am: Coffee break

11:00am – 11:45am: Xiaowen Dong (University of Oxford)

Title: On the stability of spectral graph filters and beyond (abstract) (slides)

12:30pm – 02:00pm: Lunch

02:00pm – 04:00pm: Free discussion

04:00pm – 04:45pm: Clément Vignac (EPFL)

Title: DiGress: Discrete Denoising diffusion for graph generation (abstract) (slides)

04:45pm – 05:15pm: Coffee break

05:15pm – 06:00pm: Yohann de Castro (Université Claude Bernard)

Title: Markov Random Geometric Graph (MRGG): A Growth Model for Temporal Dynamic Networks (abstract) (slides)

06:00pm – 07:30pm: Poster session II with cocktails

07:30pm: Dinner

Thursday, November 10th, 2022

09:00am – 09:45am: Edwige Cyffers (Inria)

Title: Differential Privacy for Decentralized Optimization (abstract) (slides)

09:45am – 10:30am: Ulrike von Luxburg (University of Tübingen)

Title: Clustering with Tangles (abstract)

10:30am – 11:00am: Coffee break

11:00am – 11:45am: Nicolas Courty (Université Bretagne Sud)

Title: Optimal transport for graph-signal processing (abstract) (slides)

12:30pm – 02:00pm: Lunch

02:00pm – 07:30pm: Free afternoon

Trip to the Calanques (bring suitable shoes!)

07:30pm: Dinner "Bouillabaisse"

Friday, November 11th, 2022

09:00am – 09:45am: Nicolas Keriven (CNRS, GIPSA-Lab)

Title: Graph Neural Networks on Large Random Graphs: Convergence, Stability, Universality (abstract)

09:45am – 10:30am: Pierre Borgnat (CNRS, ENS Lyon)

Title: Metric Learning for attributed graphs (abstract)

10:30am – 11:00am: Coffee break

11:00am – 11:45am: Elvin Isufi (TU Delft)

Title: Graph Neural Networks over Random Graphs (abstract)

12:30pm – 02:00pm: Lunch

02:00pm: Goodbye!

Abstracts

Pierre-Antoine Absil (UC Louvain)

Title: Graph-regularized matrix completion

Abstract: There is broad interest in developing recommender systems that guide the choice of users between several available items. For example, as popularized by the Netflix prize, the items can be movies and the users can be customers. Each customer has rated some movies, and the recommendation task is to predict how much the customers would like the movies they did not rate, so as to make a personalized recommendation. A popular model posits that the movies-by-users matrix of ratings is approximately low-rank. This leads to the mathematical problem of finding a matrix of low rank that is optimal, in the sense that it agrees as well as possible (according to some criterion, e.g., mean square error) with the measured entries. Oftentimes, complementary information about the items or users is available that can be encoded as a graph. This yields a computational task termed graph-regularized low-rank matrix completion. Upstream, this topic will lead us into the realm of low-rank optimization, involving concepts of optimization on manifolds. Downstream, we will look at some applications of current interest that depart from classical recommendation tasks.
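For readers who want to experiment, here is a minimal Python sketch of the underlying optimization problem (an illustration, not the speaker's method, which works on manifolds of low-rank matrices): fit a rank-r factorization \(UV^\top\) to the observed entries while penalizing roughness of the user factor on a user graph via \(\mathrm{tr}(V^\top L V)\). The toy data, sizes, and plain gradient descent are all assumptions.

    import numpy as np

    rng = np.random.default_rng(0)
    m, n, r, lam, lr = 30, 20, 3, 0.1, 0.01
    M = rng.standard_normal((m, r)) @ rng.standard_normal((r, n))  # ground-truth ratings
    Omega = rng.random((m, n)) < 0.4                               # observed entries
    B = np.triu(rng.random((n, n)) < 0.2, 1)
    A = (B | B.T).astype(float)                                    # toy user-similarity graph
    L = np.diag(A.sum(1)) - A                                      # its Laplacian

    U = 0.1 * rng.standard_normal((m, r))                          # movie factor
    V = 0.1 * rng.standard_normal((n, r))                          # user factor
    for _ in range(2000):
        R = Omega * (U @ V.T - M)            # residual on observed entries only
        U -= lr * (R @ V)                    # gradient of the data-fit term
        V -= lr * (R.T @ U + lam * L @ V)    # data fit + graph penalty tr(V^T L V)

    R = Omega * (U @ V.T - M)
    print("RMSE on observed entries:", np.sqrt((R**2).sum() / Omega.sum()))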

Sophie Achard (CNRS, Université Grenoble Alpes)

Title: Statistical comparisons of spatio-temporal networks

Abstract: In the scenario where multiple instances of networks with the same nodes are available and nodes are attached to spatial features, it is worth combining both sources of information in order to explain the role of the nodes. Explaining node roles in complex networks is very difficult, yet crucial in different application scenarios such as social science, neuroscience, computer science… Many efforts have been made to quantify hubs, revealing particular nodes in a network using a given structural property. Yet, for spatio-temporal networks, the identification of node roles remains largely unexplored. In this talk, I will show the limitations of classical methods on a real dataset of brain connectivity, comparing healthy subjects to coma patients. Then, I will present recent work using an equivalence relation on nodal structural properties. The comparison of graphs with the same node set is evaluated with a new similarity score based on graph structural patterns. This score provides a nodal index to determine the distinctiveness of node roles within a graph family. Finally, illustrations on different datasets concerning human brain functional connectivity will be described.

Simon Barthelmé (CNRS, Université Grenoble Alpes)

Title: Graph Sampling using Determinantal Point Processes

Abstract: A "graph signal" is a signal that is indexed by the nodes of a graph. For instance, in a social network, every node represents a user, every edge represents friendship, and an example of a graph signal may be the political leaning of each user. Since people tend to be friends with others with similar beliefs, we know that the signal is likely to be "smooth on the graph", meaning that it does not vary too much locally. For such smooth signals, we may be able to reconstruct the entire signal by measuring only at a small set of locations. We can think of this task as performing a survey on the graph, or as subsampling the nodes. Determinantal Point Processes (DPPs) offer a very generic way of dealing with such subsampling problems. In this talk I'll describe DPPs and show how they can be used to sample nodes in large graphs.

Robert Beinert (TU Berlin)

Title: Linear Gromov-Wasserstein distances

Abstract: Gromov-Wasserstein (GW) distances are a generalization of the well-established Wasserstein distances from optimal transport and make it possible to compare arbitrary metric measure spaces such as weighted or annotated graphs. In this talk, we propose a linear version of GW, which is based on the geometric structure of the GW space. Numerical examples illustrate that linear GW can replace the expensive computation of pairwise GW distances in applications like graph and shape classification.
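For reference, the order-2 GW distance between two metric measure spaces \(\mathbb{X}=(X,d_X,\mu)\) and \(\mathbb{Y}=(Y,d_Y,\nu)\) can be written as

\[ \mathrm{GW}(\mathbb{X},\mathbb{Y}) = \min_{\pi \in \Pi(\mu,\nu)} \Big( \int_{X\times Y}\int_{X\times Y} \big| d_X(x,x') - d_Y(y,y') \big|^{2} \,\mathrm{d}\pi(x,y)\,\mathrm{d}\pi(x',y') \Big)^{1/2}, \]

where \(\Pi(\mu,\nu)\) is the set of couplings of \(\mu\) and \(\nu\). Roughly speaking, the linear version discussed in the talk replaces the expensive pairwise minimizations by transports to a single fixed reference space.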

Pierre Borgnat (CNRS, ENS Lyon)

Title: Metric Learning for attributed graphs

Abstract: The choice of good distances and similarity measures between objects is important for many machine learning methods. Therefore, many metric learning algorithms have been developed in recent years, mainly for Euclidean data, in order to improve the performance of classification or clustering methods. However, due to difficulties in establishing computable, efficient and differentiable distances between attributed graphs, few metric learning algorithms adapted to graphs have been developed, despite the strong interest of the community. In this work, we address this question by proposing a new Simple Graph Metric Learning (SGML) model with few trainable parameters, which we base on Simple Convolutional Neural Networks and elements of optimal transport theory. This model allows us to build an appropriate distance from a database of labeled (attributed) graphs to improve the performance of simple classification algorithms such as k-NN. Some properties of the distance will be discussed. In particular, this distance can be trained efficiently (with good scaling) while maintaining good performance, as illustrated by the experimental study that we conducted. Joint work with Yacouba Kaloga and Amaury Habrard.

Fabienne Castell (Aix-Marseille Université)

Title: Spectral estimation through Kirchhoff's random forests

Abstract: Let \(L\) be the Laplacian of a non-oriented edge-weighted graph \(G\), with spectral measure \(\sigma\). Associated with \(L\) and an extra parameter \(q\), we can construct a random rooted spanning forest on \(G\), the so-called Kirchhoff forest. We show that this forest can be used to obtain good estimates of the Stieltjes transform of \(\sigma\), at a cost which is almost linear in the number of nodes of \(G\). These estimates can then be used to estimate the cdf of \(\sigma\). This is joint work with P.O. Amblard, S. Barthelmé, A. Gaudillière, C. Mélot, M. Quattropani, N. Tremblay.
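A minimal Monte Carlo sketch of the connection, under simplifying assumptions (unweighted toy graph): the root set of the forest sampled by Wilson's algorithm with killing rate \(q\) satisfies \(\mathbb{E}[\#\mathrm{roots}] = \sum_i q/(q+\lambda_i) = q\,\mathrm{tr}((L+qI)^{-1})\), which is, up to normalization, \(q\) times the Stieltjes transform of \(\sigma\) evaluated at \(-q\).

    import numpy as np

    def forest_roots(A, q, rng):
        # Root set of the random spanning forest, via Wilson's algorithm
        # with killing: the walk is absorbed (current node becomes a root)
        # with probability q / (q + degree).
        n = A.shape[0]; deg = A.sum(1)
        in_forest = np.zeros(n, dtype=bool)
        nxt = np.full(n, -1)
        roots = []
        for i in range(n):
            u = i
            while not in_forest[u]:
                if rng.random() < q / (q + deg[u]):
                    in_forest[u] = True; nxt[u] = -1; roots.append(u)
                else:
                    nxt[u] = rng.choice(n, p=A[u] / deg[u])
                    u = nxt[u]
            u = i
            while not in_forest[u]:      # freeze the loop-erased trajectory
                in_forest[u] = True
                u = nxt[u]
        return roots

    rng = np.random.default_rng(0)
    n, q = 40, 1.0
    B = np.triu(rng.random((n, n)) < 0.15, 1)
    A = (B | B.T).astype(float)
    L = np.diag(A.sum(1)) - A
    mc = np.mean([len(forest_roots(A, q, rng)) for _ in range(500)])
    exact = np.sum(q / (q + np.linalg.eigvalsh(L)))  # q * tr((L + qI)^{-1})
    print(mc, "vs", exact)                           # the two should be close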

Nicolas Courty (Université Bretagne Sud)

Title: Optimal transport for graph-signal processing

Abstract: In this talk I will discuss how a variant of the classical optimal transport problem, known as the Gromov-Wasserstein distance, can help in designing learning tasks over graphs, and makes it possible to transpose classical signal processing or data analysis tools, such as dictionary learning or online change detection, to learning over those types of structured objects. Both theoretical and practical aspects will be discussed.

Edwige Cyffers (Inria)

Title: Differential Privacy for Decentralized Optimization

Abstract: With the increasing collection and mining of sensitive data, privacy is a key ethical issue in machine learning. It has been shown that personal information can leak from machine learning models even though the data is not directly accessible. Differential Privacy (DP) has emerged as the gold standard for mathematically quantifying the level of exposure of data. Simultaneously, decentralized optimization is becoming popular for its scalability and efficiency. Intuitively, it should also provide better privacy guarantees, as users only observe the messages sent by their neighbors in the communication graph. We present a relaxation of local differential privacy adapted to this setting, which we call Network Differential Privacy. We provide an analysis of optimization tasks based on gossip and random walk algorithms for various graphs. In particular, we show that this relaxation indeed leads to guarantees that nearly bridge the gap between the trusted-aggregator setting and the completely adversarial one. We illustrate these improved utility-privacy trade-offs in experiments on real-world data.
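A toy sketch of the kind of protocol analyzed here (not the paper's exact algorithms): randomized gossip averaging on an assumed ring topology, where every exchanged message is perturbed with Gaussian noise before being sent. The noise scale and pairwise-averaging rule are illustrative choices.

    import numpy as np

    rng = np.random.default_rng(0)
    n, T, sigma = 20, 200, 0.1
    x = rng.standard_normal(n)                   # private local values
    target = x.mean()                            # quantity to estimate jointly
    neighbors = {i: [(i - 1) % n, (i + 1) % n] for i in range(n)}  # ring graph

    z = x.copy()
    for _ in range(T):
        i = rng.integers(n)
        j = rng.choice(neighbors[i])             # random gossip pair
        mi = z[i] + sigma * rng.standard_normal()  # each message is perturbed
        mj = z[j] + sigma * rng.standard_normal()  # before being sent
        z[i] = z[j] = (mi + mj) / 2              # noisy pairwise average
    print("utility (distance to true mean):", abs(z.mean() - target))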

Yohann de Castro (Université Claude Bernard)

Title: Markov Random Geometric Graph (MRGG): A Growth Model for Temporal Dynamic Networks

Abstract: In this talk, we introduce a growth model for relatively dense networks based on Markovian latent variables. We show that one can infer the latent Markovian dynamic and the latent pairwise distances between nodes, and predict the probabilities of connection of a new node entering the network. This talk is based on the article: "Markov Random Geometric Graph (MRGG): A Growth Model for Temporal Dynamic Networks", with Q. Duchemin, Electronic Journal of Statistics, Volume 16, Pages 671-699, 2022.
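A toy generator in this spirit (a sketch, not the paper's model, which lives on a sphere and allows general Markov kernels and envelopes; the circle, Gaussian increments, and exponential envelope below are assumptions):

    import numpy as np

    rng = np.random.default_rng(0)
    n = 200
    # Latent Markov dynamic on the circle: each new node's position is the
    # previous one plus a random increment.
    x = np.zeros(n)
    for i in range(1, n):
        x[i] = (x[i - 1] + rng.normal(0, 0.3)) % (2 * np.pi)

    def envelope(d):               # connection probability, decreasing in distance
        return np.exp(-3 * d)

    A = np.zeros((n, n))
    for i in range(n):             # node i enters at time i and connects to the past
        for j in range(i):
            d = np.pi - abs(np.pi - abs(x[i] - x[j]))  # geodesic distance on circle
            A[i, j] = A[j, i] = rng.random() < envelope(d)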

Xiaowen Dong (University of Oxford)

Title: On the stability of spectral graph filters and beyond

Abstract: Data collected in network domains, hence supported by an (irregular) graph rather than a (regular) grid-like structure, are becoming pervasive. Typical examples include gene expression data associated with a protein-protein interaction graph, or behaviours of a group of individuals in a social network. Graph-based signal processing and machine learning are recent techniques that have been developed to handle such graph-structured data and have seen applications in such diverse fields as drug discovery, fake news detection, and traffic prediction. However, a theoretical understanding of the robustness of these models against perturbation to the input graph domain has been lacking. In this talk, I will present our results on the stability bounds of spectral graph filters as well as other recent work on the robustness of graph machine learning models, which together will contribute to the deployment of these models in real-world scenarios.
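To make the notion of stability concrete, here is a small numerical experiment in the spirit of the talk (not taken from it): compare a spectral filter \(h(L) = V\,h(\Lambda)\,V^\top\) with a heat-kernel response before and after perturbing a single edge, against the size of the perturbation. The toy graph, the filter choice, and the one-edge perturbation are assumptions.

    import numpy as np

    def heat_filter(L, t=1.0):
        # Spectral graph filter h(L) = V h(Lambda) V^T with h(x) = exp(-t x).
        w, V = np.linalg.eigh(L)
        return (V * np.exp(-t * w)) @ V.T

    rng = np.random.default_rng(0)
    n = 30
    B = np.triu(rng.random((n, n)) < 0.2, 1)
    A = (B | B.T).astype(float)
    L = np.diag(A.sum(1)) - A
    Ap = A.copy()
    Ap[0, 1] = Ap[1, 0] = 1 - Ap[0, 1]           # flip one edge
    Lp = np.diag(Ap.sum(1)) - Ap
    print("filter change:", np.linalg.norm(heat_filter(L) - heat_filter(Lp), 2),
          "vs perturbation size:", np.linalg.norm(L - Lp, 2))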

Kimon Fountoulakis (University of Waterloo)

Title: Graph Attention Retrospective

Abstract: Graph-based learning is a rapidly growing sub-field of machine learning with applications in social networks, citation networks, and bioinformatics. One of the most popular types of models is graph attention networks. These models were introduced to allow a node to aggregate information from the features of neighbor nodes in a non-uniform way, in contrast to simple graph convolution, which does not distinguish the neighbors of a node. In this paper, we theoretically study this expected behaviour of graph attention networks. We prove multiple results on the performance of the graph attention mechanism for the problem of node classification in a contextual stochastic block model. Here the features of the nodes are obtained from a mixture of Gaussians and the edges from a stochastic block model, where the features and the edges are coupled in a natural way. First, we show that in an “easy” regime, where the distance between the means of the Gaussians is large enough, graph attention maintains the weights of intra-class edges and significantly reduces the weights of inter-class edges. As a corollary, we show that this implies perfect node classification independent of the weights of inter-class edges. However, a classical argument shows that in the “easy” regime, the graph is not needed at all to classify the data with high probability. In the “hard” regime, we show that every attention mechanism fails to distinguish intra-class from inter-class edges. We evaluate our theoretical results on synthetic and real-world data.
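For concreteness, a minimal generator for a contextual stochastic block model of the kind used in this analysis (a sketch; all parameter values are assumptions):

    import numpy as np

    rng = np.random.default_rng(0)
    n, d, p, q, dist = 200, 5, 0.2, 0.05, 2.0
    y = rng.choice([-1, 1], size=n)                    # two balanced classes
    mu = dist * np.ones(d) / np.sqrt(d)
    X = y[:, None] * mu + rng.standard_normal((n, d))  # Gaussian mixture features
    same = np.equal.outer(y, y)                        # same-class indicator
    P = np.where(same, p, q)                           # SBM edge probabilities
    A = (rng.random((n, n)) < P).astype(float)
    A = np.triu(A, 1); A = A + A.T                     # undirected, no self-loops
    # "Easy"-regime intuition: when `dist` is large, the features alone
    # separate the classes, and attention can downweight inter-class edges.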

Elvin Isufi (TU Delft)

Title: Graph Neural Networks over Random Graphs

Abstract: Graph neural networks (GNNs) model nonlinear representations in graph data, with applications in distributed agent coordination, control, and planning, among others. Current GNNs assume scenarios with deterministic graphs but ignore random topological changes that occur due to the environment, human factors, or external attacks. Motivated by the latter, we developed a stochastic framework for GNNs, intersecting it with random graph theory and stochastic optimisation. This framework has the following three thrusts. First, we investigated the stability of GNNs to stochastic graph perturbations. The stability result indicates that GNNs can maintain performance under mild random perturbations and identifies the role played by the graph stochasticity, filter property, and NN architecture. Second, we put forth stochastic graph neural networks (SGNNs) that account for the graph stochasticity during training. SGNNs make each node rely on its neighbors with some uncertainty and thus become robust to random topological changes. A statistical analysis of the variance of the SGNN output is conducted to identify how random SGNN realizations deviate from the optimized expectation. While providing theoretical implications, there is no guarantee about these random deviations, and undesirable performance may appear in individual realizations even when the expected performance is satisfactory. Hence, third, we proposed a learning strategy for SGNNs with constrained variance that searches for a balance between the expected performance and the stochastic deviation. It adheres to solving a variance-constrained stochastic optimization problem, where we introduce a primal-dual learning algorithm for a saddle-point solution. We show that stochastic deviations can be explicitly controlled and thus individual SGNN performance can be guaranteed. The talk will be based on the following three papers:

  • Gao, Zhan, Elvin Isufi, and Alejandro Ribeiro. "Stability of graph convolutional neural networks to stochastic perturbations." Signal Processing 188 (2021): 108216.
  • Gao, Zhan, Elvin Isufi, and Alejandro Ribeiro. "Stochastic graph neural networks." IEEE Transactions on Signal Processing 69 (2021): 4428-4443.
  • Gao, Zhan, and Elvin Isufi. "Learning Stochastic Graph Neural Networks with Constrained Variance." arXiv preprint arXiv:2201.12611 (2022).
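Complementing the papers above, a rough sketch of the random-topology setting (not code from these papers): a graph-convolution layer evaluated on a random realization of the graph, with each edge kept independently, and the output variance estimated across realizations. The drop probability, layer form, and sizes are illustrative assumptions.

    import numpy as np

    def sgnn_layer(A, X, W, p_drop, rng):
        # One graph-convolution layer on a random realization of the graph:
        # each edge is kept independently with probability 1 - p_drop.
        mask = rng.random(A.shape) > p_drop
        mask = np.triu(mask, 1); mask = mask | mask.T   # symmetric edge dropping
        return np.maximum((A * mask) @ X @ W, 0.0)      # ReLU(A_r X W)

    rng = np.random.default_rng(0)
    n, f = 50, 8
    B = np.triu(rng.random((n, n)) < 0.2, 1)
    A = (B | B.T).astype(float)
    X = rng.standard_normal((n, f))
    W = rng.standard_normal((f, f)) / np.sqrt(f)
    outs = np.stack([sgnn_layer(A, X, W, 0.1, rng) for _ in range(200)])
    print("mean output variance across realizations:", outs.var(axis=0).mean())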

Nicolas Keriven (CNRS, GIPSA-Lab)

Title: Graph Neural Networks on Large Random Graphs: Convergence, Stability, Universality

Abstract: In this talk, we will discuss some theoretical properties of Graph Neural Networks (GNNs) on large graphs. Indeed, most existing analyses of GNNs are purely combinatorial and may fail to reflect their behavior regarding large-scale structures in graphs, such as communities of well-connected nodes or manifold structures. To address this, we assume that the graphs of interest are generated with classical models of random graphs. We first give non-asymptotic convergence bounds of GNNs toward "continuous" equivalents as the number of nodes grows. We then study their stability to small deformations of the underlying random graph model, a crucial property in traditional CNNs. Finally, we study their universality and approximation power, and show how some recent GNNs are more powerful than others. This is joint work with Samuel Vaiter (CNRS) and Alberto Bietti (NYU).

Haggai Maron (NVIDIA Research)

Title: Subgraph-based networks for expressive, efficient, and domain-independent graph learning

Abstract: While message-passing neural networks (MPNNs) are the most popular architectures for graph learning, their expressive power is inherently limited. In order to gain increased expressive power while retaining efficiency, several recent works apply MPNNs to subgraphs of the original graph. As a starting point, the talk will introduce the Equivariant Subgraph Aggregation Networks (ESAN) architecture, which is a representative framework for this class of methods. In ESAN, each graph is represented as a set of subgraphs, selected according to a predefined policy. The sets of subgraphs are then processed using an equivariant architecture designed specifically for this purpose. I will then present a recent follow-up work that revisits the symmetry group suggested in ESAN and suggests that a more precise choice can be made if we restrict our attention to a specific popular family of subgraph selection policies. We will see that using this observation, one can make a direct connection between subgraph GNNs and Invariant Graph Networks (IGNs), thus providing new insights into subgraph GNNs' expressive power and design space.
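A schematic rendering of the bag-of-subgraphs idea (a sketch under simplifying assumptions, not the ESAN architecture itself, which uses richer equivariant layers): represent a graph by its node-deleted subgraphs, encode each with a shared toy message-passing layer, and aggregate over the set.

    import numpy as np

    def mpnn(A, X):
        # A toy message-passing layer, shared across all subgraphs.
        return np.maximum((A + np.eye(len(A))) @ X, 0.0)

    def subgraph_bag_forward(A, X):
        # Node-deletion selection policy: one subgraph per deleted node.
        n = A.shape[0]
        reps = []
        for v in range(n):
            Av = A.copy(); Av[v, :] = 0; Av[:, v] = 0   # delete node v's edges
            reps.append(mpnn(Av, X).sum(axis=0))        # pooled subgraph embedding
        return np.sum(reps, axis=0)                     # aggregate over the bag

    rng = np.random.default_rng(0)
    B = np.triu(rng.random((10, 10)) < 0.3, 1)
    A = (B | B.T).astype(float)
    X = rng.standard_normal((10, 4))
    print(subgraph_bag_forward(A, X))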

David Shuman (Olin College)

Title: Signal Processing on the Permutahedron: Tight Spectral Frames for Ranked Data Analysis

Abstract: Ranked data sets, where m judges/voters specify a preference ranking of n objects/candidates, are increasingly prevalent in contexts such as political elections, computer vision, recommender systems, and bioinformatics. The vote counts for each ranking can be viewed as a data vector of length n! lying on the permutahedron, which is a Cayley graph of the symmetric group with vertices labeled by permutations and an edge when two permutations differ by an adjacent transposition. Leveraging combinatorial representation theory and recent progress in signal processing on graphs, we investigate a novel, scalable transform method to interpret and exploit structure in ranked data. We represent data on the permutahedron using an overcomplete dictionary of atoms, each of which captures both smoothness information about the data (typically the focus of spectral graph decomposition methods in graph signal processing) and structural information about the data (typically the focus of symmetry decomposition methods from representation theory). These atoms have a more naturally interpretable structure than any known basis for signals on the permutahedron, and they form a Parseval frame, ensuring beneficial numerical properties such as energy preservation. We develop specialized algorithms and open software that take advantage of the symmetry and structure of the permutahedron to improve the scalability of the proposed method, making it more applicable to the high-dimensional ranked data found in applications.
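A small sketch of the object at the center of the talk: the permutahedron of \(S_4\) built as a Cayley graph (vertices are permutations, edges are adjacent transpositions) together with its Laplacian, on which ranked-data vectors live. The frame construction itself is beyond this snippet.

    from itertools import permutations
    import numpy as np

    n = 4
    perms = list(permutations(range(n)))            # n! = 24 vertices
    index = {p: i for i, p in enumerate(perms)}
    A = np.zeros((len(perms), len(perms)))
    for p in perms:
        for k in range(n - 1):                      # adjacent transpositions
            q = list(p); q[k], q[k + 1] = q[k + 1], q[k]
            A[index[p], index[tuple(q)]] = 1
    L = np.diag(A.sum(1)) - A                       # Laplacian of the permutahedron
    # A ranked-data vector assigns a vote count to each of the n! rankings
    # and can now be analyzed with any graph signal processing tool on L.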

Dorina Thanou (EPFL)

Title: The inductive bias of message passing neural networks

Abstract: Recent literature has extensively analyzed the representation power of graph neural networks and ways to build more expressive architectures than the standard message-passing neural networks (MPNNs). Yet, MPNNs remain the most widely used class of networks thanks to their favorable computational complexity and good empirical performance. This talk will focus on the inductive bias of MPNNs and shed some light on the question: inside the class of functions that MPNNs can learn, which functions do they tend to learn? We will show that MPNNs tend to learn representations of rooted subtrees that are smooth with respect to the Weisfeiler-Lehman similarity measure. This bias contributes to the good empirical performance achieved on real-world datasets. Overall, our analysis provides insight into the generalization abilities of graph networks and, in particular, the conditions under which they can generalize to larger graphs than those they have been trained on.

Dimitri Van De Ville (EPFL)

Title: Graph Signal Processing to Quantify Brain Structure-Function Coupling

Abstract: State-of-the-art magnetic resonance imaging (MRI) provides unprecedented opportunities to study brain structure (anatomy) and function (physiology). Based on such data, graph representations can be built where nodes are associated with brain regions and edge weights with the strengths of structural or functional connections. In particular, structural graphs capture major neural pathways in white matter, while functional graphs map out statistical interdependencies between pairs of regional activity traces. Network analysis of these graphs has revealed emergent system-level properties of brain structure or function, such as efficiency of communication and modular organization. In this talk, graph signal processing (GSP) will be presented as a novel framework to integrate brain structure, contained in the structural graph, with brain function, characterized by activity traces that can be considered as time-dependent graph signals. Such a perspective makes it possible to define novel, meaningful graph-filtering operations on brain activity that take into account the smoothness of signals on the anatomical backbone. This leads to a new measure of “coupling” between structure and function, termed the structural decoupling index (SDI), based on how activity is expressed on structural graph harmonics of different graph frequencies. To provide statistical inference, we extend the well-known Fourier phase-randomization method for generating surrogate data to the graph setting. The SDI reveals a behaviorally relevant spatial gradient, where sensory regions tend to be more coupled with structure, and high-level cognitive ones less so. In addition, SDI maps are informative both for task decoding and individual fingerprinting, pointing again toward the different involvement of unimodal and transmodal regions, respectively. Finally, recent work will highlight how the spatial resolution of GSP brain graphs can be increased to the voxel level, representing a few hundred thousand nodes.
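A schematic sketch of the SDI computation (an illustration of the idea, not the paper's pipeline): graph-Fourier-transform regional activity with the structural harmonics, split the spectrum at a cutoff into structure-coupled (low-frequency) and structure-decoupled (high-frequency) parts, and take the per-region energy ratio. The simple cutoff and input shapes are assumptions.

    import numpy as np

    def structural_decoupling_index(L_struct, activity, cutoff):
        # activity: (regions x time) matrix of regional activity traces.
        w, V = np.linalg.eigh(L_struct)          # structural graph harmonics
        coeff = V.T @ activity                   # graph Fourier transform
        low = V[:, :cutoff] @ coeff[:cutoff]     # structure-coupled component
        high = V[:, cutoff:] @ coeff[cutoff:]    # structure-decoupled component
        return np.linalg.norm(high, axis=1) / np.linalg.norm(low, axis=1)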

Titouan Vayer (Inria Lyon)

Title: Towards Compressive Recovery of Sparse Precision Matrices

Abstract: We consider the problem of learning a graph modeling the statistical relations of the \(d\) variables of a dataset with \(n\) samples. Standard approaches amount to searching for a precision matrix \(\boldsymbol\Theta\) representative of a Gaussian graphical model that adequately explains the data. However, most maximum-likelihood-based estimators usually require storing the \(d^{2}\) values of the empirical covariance matrix, which is often too costly in a high-dimensional setting. In this work we adopt a "compressive" viewpoint and aim to estimate a sparse \(\boldsymbol\Theta\) from a sketch of the data, i.e., a low-dimensional vector of size \(m \ll d^{2}\) carefully designed from the data using nonlinear random features. Under certain assumptions on the spectrum of \(\boldsymbol\Theta\), we show that it is theoretically possible to estimate it robustly from a sketch of size \(m=\mathcal{O}\left((d+2k)\ln(d)\right)\), where \(k\) is the maximal number of edges of the underlying graph, and with an error that decreases in \(\mathcal{O}(n^{-1/2})\). These guarantees are inspired by compressed sensing theory and involve restricted isometry properties and instance-optimal decoders. Our estimator requires solving a non-convex inverse problem, and we investigate the possibility of achieving practical recovery with a variant of the Davis-Yin three-operator splitting algorithm. We compare our approach with "Graphical LASSO"-type estimators on synthetic datasets. Finally, we discuss the limitations and perspectives of this work, which partially answers some questions but also opens many others.
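A toy sketch of the compressive viewpoint (the paper's sketching operator and decoder are more elaborate; sizes and the identity-covariance toy data are assumptions): m quadratic random features of the data that approximate measurements \(a_j^\top \boldsymbol\Sigma a_j\) of the covariance without ever forming the \(d\times d\) matrix.

    import numpy as np

    rng = np.random.default_rng(0)
    n, d, m = 1000, 30, 300                  # note: m is much less than d**2
    X = rng.standard_normal((n, d))          # toy data (identity covariance)
    A_feat = rng.standard_normal((m, d))     # random feature directions a_j
    # The sketch: m quadratic features of the empirical covariance, computed
    # directly from the data stream.
    z = ((X @ A_feat.T) ** 2).mean(axis=0)   # z_j approximates a_j^T Sigma a_j
    # Recovery (not shown) would search for a sparse precision matrix whose
    # implied covariance matches z, e.g. via three-operator splitting.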

Nicolas Verzelen (INRAE Montpellier)

Title: Ranking experts in crowdsourcing problems

Abstract: We consider the problem of estimating latent positions in a one-dimensional torus from a graph of pairwise affinities. The observed affinity between a pair of items is modeled as a noisy observation of a function \(f(x_i^*,x_j^*)\) of the latent positions \(x_i^*, x_j^*\) of the two items on the torus. The affinity function \(f\) is unknown, and it is only assumed to fulfill some shape constraints ensuring that \(f(x,y)\) is large when the distance between \(x\) and \(y\) is small, and vice versa. This non-parametric modeling includes in particular random geometric graphs and seriation problems. We introduce an estimation procedure that, with high probability, provably localizes all the latent positions up to a maximum error whose rate is proven to be minimax optimal. A computationally efficient variant of the procedure is also analyzed under some more restrictive assumptions.

Clément Vignac (EPFL)

Title: DiGress: Discrete Denoising diffusion for graph generation

Abstract: This work introduces DiGress, a discrete denoising diffusion model for generating graphs with categorical node and edge attributes. Our model defines a diffusion process that progressively edits a graph with noise (adding or removing edges, changing the categories), and a graph transformer network that learns to revert this process. With these two ingredients in place, we reduce distribution learning over graphs to a simple sequence of classification tasks. We further improve sample quality by proposing a new Markovian noise model that preserves the marginal distribution of node and edge types during diffusion, and by adding auxiliary graph-theoretic features derived from the noisy graph at each diffusion step. Finally, we propose a guidance procedure for conditioning the generation on graph-level features. Overall, DiGress achieves state-of-the-art performance on both molecular and non-molecular datasets, with up to 3x validity improvement on a dataset of planar graphs. In particular, it is the first model that scales to the large GuacaMol dataset containing 1.3M drug-like molecules without using a molecule-specific representation such as SMILES or fragments.
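For intuition, a minimal sketch of the marginal-preserving noise model described above (the transition-matrix form is standard for such discrete diffusions; the specific marginals below are made up): with probability \(\alpha_t\) a category is kept, otherwise it is resampled from the dataset marginals, so the marginal distribution of types is a fixed point of the noise.

    import numpy as np

    def marginal_noise_transition(alpha_t, marginals):
        # Q_t[i, j]: probability that category i becomes category j after
        # one noising step: keep with prob alpha_t, else draw from marginals.
        k = len(marginals)
        return alpha_t * np.eye(k) + (1 - alpha_t) * np.outer(np.ones(k), marginals)

    m = np.array([0.7, 0.2, 0.1])   # hypothetical edge-type marginals
    Q = marginal_noise_transition(0.9, m)
    print(m @ Q)                    # equals m: the marginals are preserved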

Ulrike von Luxburg (University of Tübingen)

Title: Clustering with Tangles

Abstract: Originally, tangles were invented as an abstract tool in mathematical graph theory to prove the famous graph minor theorem. In this talk, I will showcase the potential of tangles in machine learning applications. Given a collection of cuts of any dataset, tangles aggregate these cuts to point in the direction of a dense structure. As a result, a cluster is softly characterized by a set of consistent pointers. This highly flexible approach can solve clustering problems in various setups, ranging from questionnaires and community detection in graphs to clustering points in metric spaces.

Date: 2022-10-06 Thu 16:47

Author: Samuel Vaiter

Created: 2022-11-11 Fri 09:08
