New GNN architecture
A large majority of Graph Neural Networks (GNNs) follow the message-passing paradigm, where node states are updated based on aggregated neighbour messages. In this post, we describe Co-operative GNNs (Co-GNNs), a new type of message-passing architecture in which every node is viewed as a player that can choose to ‘listen’, ‘broadcast’, ‘listen & broadcast’, or ‘isolate’. Standard message passing is the special case where every node ‘listens & broadcasts’ to all of its neighbours. We show that Co-GNNs are asynchronous, more expressive than standard message-passing GNNs, and can mitigate common problems of message passing such as over-squashing and over-smoothing.
This post was co-authored with Ben Finkelshtein, Ismail Ceylan, and Xingyue Huang and is based on the paper B. Finkelshtein et al., Cooperative Graph Neural Networks (2023) arXiv:2310.01267.
Graph Neural Networks (GNNs) are a popular class of architectures used for learning on graph-structured data such as molecules, biological interactomes, and social networks. The majority of GNNs follow the message-passing paradigm [1], where, at every layer, nodes exchange information along the edges of the graph. The state of every node is updated using a permutation-invariant aggregation operation (typically a sum or a mean) over the messages sent by adjacent nodes [2].
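To make this concrete, here is a minimal sketch of one such layer in PyTorch. The class name `MeanAggregationLayer`, the linear message and update maps, and the choice of mean aggregation are our own illustrative assumptions, not the specific model from the paper:

```python
import torch
import torch.nn as nn

class MeanAggregationLayer(nn.Module):
    """One message-passing layer: each node's state is updated from the
    mean of its neighbours' messages (a permutation-invariant aggregation)."""
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.message = nn.Linear(in_dim, out_dim)           # turns neighbour states into messages
        self.update = nn.Linear(in_dim + out_dim, out_dim)  # combines own state with the aggregate

    def forward(self, x, edge_index):
        # x: [num_nodes, in_dim]; edge_index: [2, num_edges] with rows (source, destination)
        src, dst = edge_index
        msgs = self.message(x)[src]                          # one message per edge
        agg = torch.zeros(x.size(0), msgs.size(1))
        agg.index_add_(0, dst, msgs)                         # sum of messages arriving at each node
        deg = torch.zeros(x.size(0)).index_add_(0, dst, torch.ones(dst.size(0)))
        agg = agg / deg.clamp(min=1).unsqueeze(-1)           # mean aggregation (isolated nodes keep zeros)
        return torch.relu(self.update(torch.cat([x, agg], dim=-1)))

x = torch.randn(4, 8)                      # 4 nodes with 8-dimensional features
edge_index = torch.tensor([[0, 1, 1, 2],   # source nodes
                           [1, 0, 2, 1]])  # destination nodes
out = MeanAggregationLayer(8, 16)(x, edge_index)  # shape [4, 16]
```

Because `index_add_` sums incoming messages regardless of edge order, the update is invariant to any permutation of a node's neighbours, which is exactly the property the paradigm requires.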
While the message-passing paradigm has been very influential in graph ML, it has well-known theoretical and practical limitations. The formal equivalence of message-passing graph neural networks (MPNNs) to the Weisfeiler-Leman (WL) graph isomorphism test [3] places a theoretical upper bound on their expressive power. As a result, distinguishing even very simple non-isomorphic graphs (such as a 6-cycle and two triangles in the example below) is impossible by means of message passing without additional information such as positional or structural encodings…
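This failure is easy to reproduce. The snippet below is a self-contained sketch of WL colour refinement, the test that bounds MPNN expressiveness; the function name `wl_colors` and the fixed number of refinement rounds are illustrative choices of ours:

```python
from collections import Counter

def wl_colors(adj, rounds=3):
    """WL colour refinement: repeatedly re-colour each node by hashing its
    own colour together with the sorted multiset of its neighbours' colours."""
    colors = {v: 0 for v in adj}  # uniform initial colouring (no node features)
    for _ in range(rounds):
        colors = {
            v: hash((colors[v], tuple(sorted(colors[u] for u in adj[v]))))
            for v in adj
        }
    return Counter(colors.values())  # histogram of final colours

# A 6-cycle ...
cycle6 = {i: [(i - 1) % 6, (i + 1) % 6] for i in range(6)}
# ... and two disjoint triangles: both graphs are 2-regular on 6 nodes.
triangles = {0: [1, 2], 1: [0, 2], 2: [0, 1],
             3: [4, 5], 4: [3, 5], 5: [3, 4]}

print(wl_colors(cycle6) == wl_colors(triangles))  # True: WL cannot tell them apart
```

Since every node in both graphs has degree 2, the refinement assigns identical colours everywhere and the two colour histograms coincide, so any MPNN bounded by this test must map the two graphs to the same representation.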