
A collection of papers studying and improving the expressiveness of graph neural networks (GNNs).
This collection gathers papers that study and enhance the expressiveness of graph neural networks (GNNs). Topics include the representational power and learning capabilities of GNNs, surveys of GNN expressive power, boosting cycle counting and subgraph isomorphism counting power, link prediction via subgraph sketching, collaboration-aware networks, nested GNNs, subgraph GNNs and their extensions, and the representation of linear programs with GNNs. Researchers and practitioners can use these papers as an entry point into the evolving landscape of GNN expressiveness research.
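A recurring baseline in this literature is that standard message-passing GNNs are at most as expressive as the 1-dimensional Weisfeiler-Leman (1-WL) color refinement test. As an illustrative sketch (not code from any of the collected papers; the function name and graph choices are ours), the snippet below runs 1-WL on two classic non-isomorphic graphs, a hexagon and two disjoint triangles, and shows that they receive identical color histograms, so no 1-WL-bounded GNN can distinguish them:

```python
from collections import Counter

def wl_colors(adj, rounds=3):
    """1-WL color refinement: iteratively hash each node's color
    together with the sorted multiset of its neighbors' colors."""
    colors = {v: 0 for v in adj}
    for _ in range(rounds):
        colors = {
            v: hash((colors[v], tuple(sorted(colors[u] for u in adj[v]))))
            for v in adj
        }
    # The graph-level signature is the histogram of final node colors.
    return Counter(colors.values())

# Hexagon C6: a single 6-cycle.
c6 = {i: [(i - 1) % 6, (i + 1) % 6] for i in range(6)}
# Two disjoint triangles (2 x C3): same size, same degree sequence.
two_c3 = {0: [1, 2], 1: [0, 2], 2: [0, 1],
          3: [4, 5], 4: [3, 5], 5: [3, 4]}

# Every node has degree 2, so 1-WL assigns both graphs the same
# histogram even though C6 has a 6-cycle and 2xC3 has triangles.
print(wl_colors(c6) == wl_colors(two_c3))  # True
```

This failure to detect cycles of different lengths is exactly the gap that the papers on cycle counting power and subgraph GNNs aim to close.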
