From Theory to Practice: Rethinking Green and Martin Kernels for Unleashing Graph Transformers

ICML 2025

Abstract

Graph Transformers (GTs) have emerged as a powerful alternative to message-passing neural networks, yet their performance depends heavily on effectively embedding structural inductive biases. In this work, we introduce novel structural encodings (SEs) grounded in a rigorous analysis of random walks (RWs), leveraging Green and Martin kernels that we have carefully redefined for AI applications while preserving their mathematical essence. These kernels capture the long-term behavior of RWs on graphs and allow for enhanced representation of complex topologies, including non-aperiodic and directed acyclic substructures. Empirical evaluations across eight benchmark datasets demonstrate strong performance on diverse tasks, notably in the molecular and circuit domains. We attribute this performance boost to the improved ability of our kernel-based SEs to encode intricate structural information, thereby strengthening the global attention and inductive bias within GTs. This work highlights the effectiveness of theoretically grounded kernel methods in advancing Transformer-based models for graph learning.
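Background on the classical objects behind these SEs: for a random walk with transition matrix P, the Green kernel sums the n-step transition probabilities, G(x, y) = Σ_{n≥0} Pⁿ[x, y], i.e., the expected number of visits to y for a walk started at x; the Martin kernel is the normalization K(x, y) = G(x, y) / G(o, y) against a reference node o. The sketch below computes a truncated Green kernel as a node-pair encoding; it is a minimal illustration of the classical definition, not the paper's implementation, and the function name and truncation depth `num_steps` are assumptions chosen for the example.

```python
import numpy as np

def truncated_green_kernel(adj: np.ndarray, num_steps: int = 16) -> np.ndarray:
    """Truncated Green kernel G_K = sum_{n=0}^{K} P^n, where P is the
    row-stochastic random-walk transition matrix of the graph.

    Entry (x, y) approximates the expected number of visits to node y
    by a random walk started at node x within K steps. A Martin-style
    kernel could then be formed as G[:, y] / G[o, y] for a reference
    node o (not done here)."""
    deg = adj.sum(axis=1, keepdims=True)
    P = adj / np.maximum(deg, 1.0)   # rows of isolated nodes stay zero
    n = adj.shape[0]
    G = np.eye(n)                    # n = 0 term: P^0 = I
    Pn = np.eye(n)
    for _ in range(num_steps):
        Pn = Pn @ P                  # accumulate P^1, ..., P^K
        G += Pn
    return G

# Example: 4-cycle graph; diag(G) gives a per-node return statistic
# that could serve as a simple structural encoding.
A = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]], dtype=float)
G = truncated_green_kernel(A, num_steps=8)
print(np.diag(G))  # equal entries, by the symmetry of the cycle
```

Note that on a 4-cycle the walk is periodic (returns are only possible at even steps), which is exactly the kind of non-aperiodic structure the abstract says the redefined kernels are meant to handle; the truncated sum above remains well defined in that case even though the infinite series would require the paper's modifications.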
