GNN-VPA: A Variance-Preserving Aggregation Strategy for Graph Neural Networks
Language of the title:
English
Original book title:
International Conference On Learning Representations (ICLR 2024)
Original abstract:
The success of graph neural networks (GNNs), and of message-passing neural networks in particular, critically depends on the functions employed for message aggregation and graph-level readout. Using signal propagation theory, we propose a variance-preserving aggregation function, which maintains the expressivity of GNNs while improving learning dynamics. Our results could pave the way towards normalizer-free or self-normalizing GNNs.
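A minimal sketch of the signal-propagation idea behind the abstract: for n i.i.d. unit-variance neighbor messages, sum aggregation inflates the output variance by roughly n and mean aggregation shrinks it by roughly 1/n, whereas scaling the sum by 1/sqrt(n) keeps it near 1. The 1/sqrt(n) scaling here is an illustrative assumption about the variance-preserving aggregator, not a verbatim reproduction of the paper's definition.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 16        # number of neighbor messages at a node
d = 100_000   # message dimensionality, large so variance estimates are stable

# i.i.d. messages with zero mean and unit variance
msgs = rng.standard_normal((n, d))

# Variance of the aggregated signal under three aggregators:
var_sum = np.var(msgs.sum(axis=0))                  # ~ n   (variance grows)
var_mean = np.var(msgs.mean(axis=0))                # ~ 1/n (variance shrinks)
var_vpa = np.var(msgs.sum(axis=0) / np.sqrt(n))     # ~ 1   (variance preserved)

print(var_sum, var_mean, var_vpa)
```

Keeping the variance of aggregated messages near 1 across layers is what would let deep GNNs train without explicit normalization layers, which is the motivation for the "normalizer-free" remark in the abstract.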