Graph Machine Learning – Telegram
Everything about graph theory, computer science, machine learning, etc.


If you have something worth sharing with the community, reach out @gimmeblues, @chaitjo.

Admins: Sergey Ivanov; Michael Galkin; Chaitanya K. Joshi
Organizational update

We are very happy to share that Chaitanya K. Joshi has agreed to become one of the admins of the channel. He has already been involved in several posts here and has written some interesting blog posts. He is currently a PhD student at the University of Cambridge, supervised by Prof. Pietro Liò. His research explores the intersection of Graph and Geometric Deep Learning with applications in biology and drug discovery. He previously worked on Graph Neural Network architectures and applications in Combinatorial Optimization at the NTU Graph Deep Learning Lab and at A*STAR, Singapore, together with Prof. Xavier Bresson. Please welcome Chaitanya, and if you have something to share, do not hesitate to reach out to him.
📃 Fresh Picks from ArXiv
The past week on the GraphML ArXiv digest: A flurry of new survey papers, GNNs for molecular property prediction and NLP/KG, as well as new avenues in GNN modelling.

📚 Surveys:
- Generative models for molecular discovery: Recent advances and challenges. ft. Wengong Jin, Tommi Jaakkola, Regina Barzilay.
- Explainability in Graph Neural Networks: An Experimental Survey.
- A Survey on Deep Graph Generation: Methods and Applications.
- Knowledge Graph Embedding Methods for Entity Alignment: An Experimental Review.
- Few-Shot Learning on Graphs: A Survey.

🧬 GNNs for Science:
- Protein Representation Learning by Geometric Structure Pretraining. ft. Jian Tang.
- Multimodal Learning on Graphs for Disease Relation Extraction. ft. Marinka Zitnik.
- MolNet: A Chemically Intuitive Graph Neural Network for Prediction of Molecular Properties.
- Simulating Liquids with Graph Networks.

🗣 GNNs for NLP and Knowledge Graphs:
- A Unified Framework for Rank-based Evaluation Metrics for Link Prediction in Knowledge Graphs. ft. Mikhail Galkin.
- Context-Dependent Anomaly Detection with Knowledge Graph Embedding Models.
- AdaLoGN: Adaptive Logic Graph Network for Reasoning-Based Machine Reading Comprehension.
- HeterMPC: A Heterogeneous Graph Neural Network for Response Generation in Multi-Party Conversations.

🌐 GNN Modelling and Applications:
- GRAND+: Scalable Graph Random Neural Networks. ft. Jie Tang.
- Graph Representation Learning with Individualization and Refinement. ft. Lee Wee Sun.
- Graph Augmentation Learning.
- SoK: Differential Privacy on Graph-Structured Data.
- Incorporating Heterophily into Graph Neural Networks for Graph Classification.
- Supervised Contrastive Learning with Structure Inference for Graph Classification.

(If I forgot to mention your paper, please shoot me a message and I will update the post. We will try to resume the 'Fresh Picks from ArXiv' series and post it every Monday morning!)
🏆 Inductive Link Prediction Challenge 2022

Team PyKEEN launches an open Inductive Link Prediction Challenge (ILPC 2022) for Knowledge Graphs to streamline community efforts in developing inductive graph representation learning methods.

For years, link prediction in KGs was done exclusively in the transductive setup, i.e., training and inference are performed on the same graph, so one can train a shallow entity embedding matrix. What do you do if your graph gets updated? Usually, retrain the whole pipeline. The emergence of GNNs paved the way for inductive models that do not necessarily need trainable entity embeddings to perform standard graph tasks.

In the inductive setup, training and inference graphs are disjoint - having trained a model on a training graph, participants are asked to predict links over a new, unseen inference graph. This renders shallow embeddings from the training graph rather useless - you cannot make use of them in the new, disjoint graph. Hence, we need better ways to obtain entity embeddings that work for unseen nodes as well as for seen, trainable ones. Looks like a job for GNNs, right?
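To make the contrast concrete, here is a minimal, purely illustrative PyTorch sketch (my own toy code, not the official ILPC baselines): a transductive model relies on an entity embedding lookup that simply cannot handle entities outside its training vocabulary, whereas an inductive encoder builds entity representations from the inference graph itself, reusing only the relation embeddings shared between the two graphs.

```python
# Toy sketch (not the official ILPC code): transductive vs. inductive entity encoders.
import torch
import torch.nn as nn

class TransductiveScorer(nn.Module):
    """Shallow embeddings: one trainable vector per training entity.
    Breaks as soon as the inference graph contains entities unseen during training."""
    def __init__(self, num_train_entities, num_relations, dim=64):
        super().__init__()
        self.ent = nn.Embedding(num_train_entities, dim)
        self.rel = nn.Embedding(num_relations, dim)

    def score(self, h, r, t):
        # DistMult-style triple score <e_h, e_r, e_t>
        return (self.ent(h) * self.rel(r) * self.ent(t)).sum(-1)

class InductiveEncoder(nn.Module):
    """No entity lookup table: every node starts from the same learned vector and is
    refined by message passing over whichever graph it is given, so the same weights
    transfer to an unseen inference graph (only the relation vocabulary is shared)."""
    def __init__(self, num_relations, dim=64, num_layers=2):
        super().__init__()
        self.rel = nn.Embedding(num_relations, dim)
        self.node_init = nn.Parameter(torch.randn(dim))
        self.updates = nn.ModuleList(nn.Linear(2 * dim, dim) for _ in range(num_layers))

    def encode(self, num_entities, edge_index, edge_type):
        h = self.node_init.expand(num_entities, -1)
        src, dst = edge_index
        for lin in self.updates:
            msg = torch.relu(lin(torch.cat([h[src], self.rel(edge_type)], dim=-1)))
            agg = torch.zeros_like(h).index_add_(0, dst, msg)  # sum incoming messages
            h = torch.relu(h + agg)
        return h  # usable for the training graph and the unseen inference graph alike
```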

The challenge offers two new inductive link prediction datasets - small and large - where the larger one is challenging even for modern GNNs; two baselines; a standardized evaluation protocol; and a codebase to start from.

More details on the inductive setup and how to submit:

- Medium blog post
- Official Github repo
- arxiv pre-print
Fresh Picks from ArXiv
The past week on GraphML arXiv: scaling up GNNs, heterophily, expressivity, sparse equivariant graph networks, and applications ranging from particle physics to electronic health records.

Scaling up GNNs:
- Towards Training Billion Parameter Graph Neural Networks for Atomic Simulations ft. Open Catalyst Project team.
- PipeGCN: Efficient Full-Graph Training of Graph Convolutional Networks with Pipelined Feature Communication

Heterophily:
- Meta-Weight Graph Neural Network: Push the Limits Beyond Global Homophily
- Exploiting Neighbor Effect: Conv-Agnostic GNNs Framework for Graphs with Heterophily

Theory:
- GraphCoCo: Graph Complementary Contrastive Learning
- Fine-Tuning Graph Neural Networks via Graph Topology induced Optimal Transport
- SpeqNets: Sparsity-aware Permutation-equivariant Graph Networks ft. Christopher Morris.
- Twin Weisfeiler-Lehman: High Expressive GNNs for Graph Classification
- Exploring High-Order Structure for Robust Graph Structure Learning

Surveys:
- Graph Neural Networks in Particle Physics: Implementations, Innovations, and Challenges ft. Savannah Thais.
- Encoder-Decoder Architecture for Supervised Dynamic Graph Learning: A Survey
- A systematic approach to random data augmentation on graph neural networks

Applications:
- 3D Human Pose Estimation Using Möbius Graph Convolutional Networks ft. Emanuele Rodola.
- Graph-Text Multi-Modal Pre-training for Medical Representation Learning
- Sequence-to-Sequence Knowledge Graph Completion and Question Answering
- Deep Reinforcement Learning Guided Graph Neural Networks for Brain Network Analysis
- Ethereum Fraud Detection with Heterogeneous Graph Neural Networks
- Duality-Induced Regularizer for Semantic Matching Knowledge Graph Embeddings ft. Shuiwang Ji.

(If I forgot to mention your paper, please shoot me a message and I will update the post.)
ICML 2022 Workshops Announced
The list of accepted workshops that will take place at this year's ICML has recently been announced.

Some workshops relevant to this group include:
- AI for Science
- Workshop on Machine Learning in Computational Design
- Topology, Algebra, and Geometry in Machine Learning
- ICML 2022 Workshop on Computational Biology
- The First Workshop on Pre-training: Perspectives, Pitfalls, and Paths Forward
SpeqNets: Sparsity-aware Permutation-equivariant Graph Networks

Christopher Morris (McGill University and Mila), joint work with Gaurav Rattan, Sandra Kiefer (RWTH Aachen), and Siamak Ravanbakhsh (McGill University and Mila)

Standard graph neural networks have clear limitations in approximating permutation-equivariant functions over graphs, i.e., their expressive power is bounded by the 1-WL (1,2). Hence, more expressive, higher-order graph neural networks have recently emerged, e.g., (1,3), which overcome these limitations.

However, they either operate on k-order tensors or consider all k-node subgraphs, implying an exponential dependence on k in memory requirements, and they do not adapt to the sparsity of the graph. In (4), we introduce a new class of heuristics for the graph isomorphism problem, the (k,s)-WL, which offers more fine-grained control over the tradeoff between expressivity and scalability.

Essentially, the algorithm is a variant of the local k-WL (5) but only considers specific tuples to avoid the exponential memory complexity of the k-WL. Concretely, the algorithm only considers k-tuples, or subgraphs on k nodes, with at most s connected components, effectively exploiting the potential sparsity of the underlying graph. On the theoretical side, we show how varying k and s leads to a tradeoff between scalability and expressivity.
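As a rough illustration of which objects the (k,s)-WL tracks (a sketch based only on the description above, not the authors' implementation), the snippet below enumerates the k-tuples whose induced subgraph has at most s connected components; on sparse graphs this set is much smaller than the full set of n^k tuples maintained by the k-WL.

```python
# Illustrative sketch: enumerate the tuples a (k,s)-WL-style algorithm would consider,
# i.e., k-tuples whose induced subgraph has at most s connected components.
from itertools import product
import networkx as nx

def ks_tuples(G: nx.Graph, k: int, s: int):
    for tup in product(G.nodes, repeat=k):
        sub = G.subgraph(set(tup))  # induced subgraph on the tuple's (distinct) nodes
        if nx.number_connected_components(sub) <= s:
            yield tup

G = nx.cycle_graph(6)                         # a sparse 6-node graph
total = 6 ** 3                                # tuples the full 3-WL would track
kept = sum(1 for _ in ks_tuples(G, k=3, s=1))
print(f"3-WL tracks {total} tuples; a (3,1)-WL-style restriction keeps only {kept}.")
```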

Further, we derive a new hierarchy of permutation-equivariant graph neural networks, denoted SpeqNets, based on the above combinatorial insights, reaching universality in the limit. These architectures vastly reduce computation times compared to standard higher-order graph networks in the supervised node- and graph-level classification and regression regime, significantly improving on standard graph neural network and graph kernel architectures in predictive performance.


(1) Weisfeiler and Leman Go Neural: Higher-order Graph Neural Networks. Christopher Morris, Martin Ritzert, Matthias Fey, William L. Hamilton, Jan Eric Lenssen, Gaurav Rattan, Martin Grohe, AAAI 2019.
(2) How Powerful are Graph Neural Networks? Keyulu Xu, Weihua Hu, Jure Leskovec, Stefanie Jegelka, ICLR 2019.
(3) Provably Powerful Graph Networks. Haggai Maron, Heli Ben-Hamu, Hadar Serviansky, Yaron Lipman, NeurIPS 2019.
(4) SpeqNets: Sparsity-aware Permutation-equivariant Graph Networks (https://arxiv.org/abs/2203.13913). Christopher Morris, Gaurav Rattan, Sandra Kiefer, Siamak Ravanbakhsh, Geometrical and Topological Representation Learning (GT-RL, ICLR 2022).
(5) Weisfeiler and Leman go sparse: Towards scalable higher-order graph embeddings. Christopher Morris, Gaurav Rattan, Petra Mutzel, NeurIPS 2020.
Fresh Picks from ArXiv
The past week on GraphML arXiv: Hypergraph NNs, GNNs are dynamic programmers, latent graph learning, 3D equivariant molecule generation, and a new GNN library for Keras.

Hypergraph Neural Networks:
- Message Passing Neural Networks for Hypergraphs
- Hypergraph Convolutional Networks via Equivalency between Hypergraphs and Undirected Graphs ft. Yu Rong.
- Preventing Over-Smoothing for Hypergraph Neural Networks

Theory:
- Graph Neural Networks are Dynamic Programmers ft. Petar Veličković.
- OrphicX: A Causality-Inspired Latent Variable Model for Interpreting Graph Neural Networks
- Shift-Robust Node Classification via Graph Adversarial Clustering ft. Jiawei Han.
- Mutual information estimation for graph convolutional neural networks
- Graph-in-Graph (GiG): Learning interpretable latent graphs in non-Euclidean domain for biological and healthcare applications ft. Michael Bronstein.

🏐 Equivariance and 3D Graphs:
- Equivariant Diffusion for Molecule Generation in 3D ft. Max Welling.
- 3D Equivariant Graph Implicit Functions

📚 Libraries and Surveys:
- GNNkeras: A Keras-based library for Graph Neural Networks and homogeneous and heterogeneous graph processing ft. Franco Scarselli.
- Graph Neural Networks in IoT: A Survey

🔨 Applications:
- Graph similarity learning for change-point detection in dynamic networks ft. Xiaowen Dong.
- Multilingual Knowledge Graph Completion with Self-Supervised Adaptive Graph Alignment ft. Yizhou Sun.
- A Simple Yet Effective Pretraining Strategy for Graph Few-shot Learning
- Pretraining Graph Neural Networks for few-shot Analog Circuit Modeling and Design ft. Pieter Abbeel.

(If I forgot to mention your paper, please shoot me a message and I will update the post.)
Equilibrium Graph Pooling

In graph-level prediction tasks, be it graph classification, graph regression, or something else, we usually do some kind of graph pooling to aggregate the representations of all nodes into a single vector. It has to be a permutation-invariant function, so we do not have much choice apart from the standard mean / max / sum / min / median.

Fabian Fuchs in his new blog post asks:

“Have we found the global optimum of how to do global aggregation or are we stuck in a local minimum?”

In the new work, the authors propose Equilibrium Aggregation for global graph pooling. The idea brings together two subfields of deep learning: learning on sets (you have probably heard about Janossy pooling, Deep Sets, and Self-Attention) and implicit layers (equilibrium models and Neural ODEs, for example).

Equilibrium Aggregation computes the pooled vector y as the minimizer argmin_y E(x, y) of an energy defined as a sum of potentials F(x_i, y) between each set element and y, plus a regularization term. The potential function is parameterized by a neural net and, for starters, might be implemented as a DeepSets-style MLP. By varying the potential function, you can also recover vanilla sum/max/mean/median pooling.
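Here is a minimal toy re-implementation of that idea (my own sketch, not the authors' code): the pooled vector y is found by a few inner gradient steps on an energy that sums a learned potential F(x_i, y) over the set elements plus a quadratic regularizer on y.

```python
# Toy sketch of Equilibrium Aggregation: pool a set by minimizing a learned energy over y.
import torch
import torch.nn as nn

class EquilibriumPooling(nn.Module):
    def __init__(self, in_dim, out_dim, inner_steps=10, inner_lr=0.1, reg=1e-2):
        super().__init__()
        # potential F(x_i, y) -> scalar, parameterized by a small MLP
        self.potential = nn.Sequential(
            nn.Linear(in_dim + out_dim, 64), nn.SiLU(), nn.Linear(64, 1)
        )
        self.out_dim = out_dim
        self.inner_steps, self.inner_lr, self.reg = inner_steps, inner_lr, reg

    def energy(self, x, y):
        # E(x, y) = sum_i F(x_i, y) + reg * ||y||^2
        y_rep = y.expand(x.size(0), -1)
        return self.potential(torch.cat([x, y_rep], dim=-1)).sum() + self.reg * (y ** 2).sum()

    def forward(self, x):                          # x: [num_nodes, in_dim]
        y = torch.zeros(1, self.out_dim, requires_grad=True)
        for _ in range(self.inner_steps):          # inner-loop gradient descent on the energy
            grad, = torch.autograd.grad(self.energy(x, y), y, create_graph=True)
            y = y - self.inner_lr * grad
        return y.squeeze(0)                        # pooled, permutation-invariant vector

pool = EquilibriumPooling(in_dim=16, out_dim=32)
graph_vector = pool(torch.randn(7, 16))            # 7 node embeddings -> one graph vector
```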

Generally speaking, the idea of using DeepSets for aggregation can be traced back to the original GraphSAGE, but it did not have much theoretical justification back then.

Experimentally, using Equilibrium Aggregation as the global pooling function (particularly with a GCN message-passing backbone) leads to significant improvements on MolPCBA and several graph-level toy tasks.

So far, Equilibrium Aggregation does not bring much benefit when used as the message aggregation function inside a GNN layer, and it does not support edge features in global pooling - but those could be cool extensions and your next research project 😉

Check out Fabian’s post for more details!
Fresh Picks from ArXiv
The past week on GraphML arXiv: Dynamics, generalization and structure-aware generation for molecules, learning graph combinatorial optimization, and more.

⚛️ Molecular Graphs:
- How Robust are Modern Graph Neural Network Potentials in Long and Hot Molecular Dynamics Simulations? ft. Johannes Gasteiger, Stephan Günnemann.
- How Do Graph Networks Generalize to Large and Diverse Molecular Systems? ft. Johannes Gasteiger, Stephan Günnemann, Open Catalyst Project Team.
- In-Pocket 3D Graphs Enhance Ligand-Target Compatibility in Generative Small-Molecule Creation

💼 Graph Combinatorial Optimization:
- Learning to solve Minimum Cost Multicuts efficiently using Edge-Weighted Graph Convolutional Neural Networks
- Learning-Based Approaches for Graph Problems: A Survey

🌐 Miscellaneous:
- Graph-based Approximate NN Search: A Revisit
- Multi-Modal Hypergraph Diffusion Network with Dual Prior for Alzheimer Classification
- C3KG: A Chinese Commonsense Conversation Knowledge Graph
- Graph Neural Networks Designed for Different Graph Types: A Survey
- Equilibrium Aggregation: Encoding Sets via Optimization ft. Fabian Fuchs.

(If I forgot to mention your paper, please shoot me a message and I will update the post.)
Announcing the Learning on Graphs Conference

A brand new venue for the Graph/Geometric Machine Learning community!

Why? See the blogpost: https://michael-bronstein.medium.com/announcing-the-learning-on-graphs-conference-c63caed7347

The LoG Conference key facts:
- Covers work broadly related to machine learning on graphs and geometry
- Proceedings track published in PMLR
- Also has a non-archival extended abstract track
- Double blind review process on OpenReview
- Top reviewers receive monetary rewards
- First year: virtual, December 9-12, 2022, free to attend.

Call for papers: https://logconference.github.io/cfp/

Stay updated via Twitter: https://twitter.com/LogConference
Or LinkedIn: https://www.linkedin.com/company/log-conference

Advisory board:
Regina Barzilay (MIT), Xavier Bresson (NUS), Michael Bronstein (Oxford/Twitter), Stephan Günnemann (TUM), Stefanie Jegelka (MIT), Jure Leskovec (Stanford), Pietro Liò (Cambridge), Jian Tang (MILA/HEC Montreal), Jie Tang (Tsinghua), Petar Veličković (DeepMind), Soledad Villar (JHU), Marinka Zitnik (Harvard).

Organizers:
Yuanqi Du (DP Technology), Hannes Stärk (MIT), Derek Lim (MIT), Chaitanya Joshi (Cambridge), Andreea-Ioana Deac (Mila), Iulia Duta (Cambridge), Joshua Robinson (MIT).
Fresh Picks from ArXiv - ICLR Workshops Special Edition
The past week on GraphML arXiv: Lots and lots of graph ML for drug discovery papers + graph generation, hypergraphs, subgraphs, and more!

💊 Drug Discovery
- Deep Sharpening Of Topological Features For De Novo Protein Design ft. Bruno Correia, Michael Bronstein, Andreas Loukas
- Decoding Surface Fingerprints For Protein-Ligand Interactions ft. Bruno Correia, Michael Bronstein, Pietro Liò
- Physics-Informed Deep Neural Network For Rigid-Body Protein Docking ft. Bruno Correia, Michael Bronstein
- Evaluating Generalization in GFlowNets for Molecule Design ft. Yoshua Bengio, Michael Bronstein
- Torsional Diffusion for Molecular Conformer Generation ft. Regina Barzilay, Tommi Jaakkola
- Graph Anisotropic Diffusion For Molecules ft. Michael Bronstein

🕸 Graph Generation
- SPECTRE: Spectral Conditioning Helps to Overcome the Expressivity Limits of One-shot Graph Generators ft. Andreas Loukas
- Explanation Graph Generation via Pre-trained Language Models: An Empirical Study with Contrastive Learning ft. Mohit Bansal

🔨 GNN Models
- Simplicial Attention Networks ft. Cris Bodnar, Pietro Lio
- Graph Pooling for Graph Neural Networks: Progress, Challenges, and Opportunities
- Graph Ordering Attention Networks
- Expressiveness and Approximation Properties of Graph Neural Networks
- Efficient Representation Learning of Subgraphs by Subgraph-To-Node Translation

🚗 Applications
- Learning to Solve Travelling Salesman Problem with Hardness-adaptive Curriculum ft. Wenwu Zhu
- Principled inference of hyperedges and overlapping communities in hypergraphs
- Graph Enhanced BERT for Query Understanding ft. Jiliang Tang

(If I forgot to mention your paper, please shoot me a message and I will update the post.)
Can graph neural networks understand chemistry?

🎦 Video:
https://www.youtube.com/watch?v=jrVXJykB8qc

A talk by Dominique Beaini on their recent work and the 'maze analogy' for graph representation learning.

Covering papers on Principal Neighbourhood Aggregation, Directional GNNs, and Graph Transformers, this talk touches on several sub-areas of recent advances in GNN architectures - WL testing and expressivity, positional encodings, anisotropy, spectral techniques, fully connected message passing, etc.
Knowledge Graph Conference 2022

The premier venue on industrial applications of KGs starts on Monday and runs the whole week of May 2-6! KGC 2022 has collected a stellar line-up of speakers, including Jure Leskovec (Stanford), Ora Lassila (AWS), Bryan Perozzi (Google), and Yu Liu (Meta), as well as talks from the big companies that use KGs daily in their products, such as LinkedIn, Meta, Microsoft, Netflix, Nvidia, and Pinterest, and, of course, the majority of graph database vendors like Stardog, neo4j, Ontotext, TigerGraph, and Franz. The conference takes place physically in NYC, but you can also join remotely in a hybrid fashion.
GNNs + ⚽ = 🏆

The NeurIPS deadline has passed and we are back to posting!

If you thought that sophisticated GNNs for modelling trajectories are only used for molecular dynamics and arcane quantum simulations, fear not! Here is a cool practical application with very high potential outreach: Graph Imputer, by DeepMind and Liverpool FC (YNWA and checkmate, Man City), predicts the trajectories of football players (and the ball).

The graph consists of 23 nodes and gets updated with a standard message-passing encoder and a special time-dependent LSTM. The dataset is quite novel, too - it consists of 105 English Premier League matches (about 90 minutes each), all players and the ball were tracked at 25 fps, and the resulting training trajectory sequences encode about 9.6 seconds of gameplay.
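For intuition, here is a heavily simplified sketch of that recipe (my own guess at the overall structure from the description above, not DeepMind's model): at every frame, each of the 23 nodes exchanges messages with every other node, and a per-node LSTM carries the hidden state through time to predict the next positions.

```python
# Heavily simplified sketch of a "message passing + per-node LSTM" trajectory model
# over 23 nodes; illustration only, not the actual Graph Imputer architecture.
import torch
import torch.nn as nn

class TrajectoryStep(nn.Module):
    def __init__(self, pos_dim=2, hid=64):
        super().__init__()
        self.msg = nn.Sequential(nn.Linear(2 * pos_dim, hid), nn.ReLU(), nn.Linear(hid, hid))
        self.cell = nn.LSTMCell(pos_dim + hid, hid)
        self.readout = nn.Linear(hid, pos_dim)      # predict each node's displacement

    def forward(self, pos, state):                  # pos: [23, 2]; state: (h, c), each [23, hid]
        n = pos.size(0)
        # messages on the fully connected 23-node graph, aggregated by mean
        pairs = torch.cat([pos.unsqueeze(1).expand(n, n, -1),
                           pos.unsqueeze(0).expand(n, n, -1)], dim=-1)
        agg = self.msg(pairs).mean(dim=1)
        h, c = self.cell(torch.cat([pos, agg], dim=-1), state)
        return pos + self.readout(h), (h, c)        # next positions and updated temporal state

step = TrajectoryStep()
pos = torch.randn(23, 2)
state = (torch.zeros(23, 64), torch.zeros(23, 64))
for _ in range(240):                                # roll out ~9.6 s of play at 25 fps
    pos, state = step(pos, state)
```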

The paper is easy to read and has numerous football illustrations, check it out! Sports tech is actively growing these days, and football analysts can now go even deeper in studying their competitors. Will EPL clubs compete for GNN and Graph ML researchers in the upcoming transfer windows? Time to create our own transfermarkt? 😉
Denoising Diffusion Is All You Need

The breakthrough in Denoising Diffusion Probabilistic Models (DDPM) happened about 2 years ago. Since then, we have observed dramatic improvements in generation tasks: GLIDE, DALL-E 2, and the recent Imagen for images, Diffusion-LM in language modeling, diffusion for video sequences, and even diffusion for reinforcement learning.

Diffusion might be the biggest trend in GraphML in 2022 - particularly when applied to drug discovery, molecule and conformer generation, and quantum chemistry in general. Often, it is paired with the latest advancements in equivariant GNNs. Recent cool works that you'd want to take a look at include:

- Equivariant Diffusion for Molecule Generation in 3D (Hoogeboom et al, ICML 2022)
- Generative Coarse-Graining of Molecular Conformations (Wang et al, ICML 2022)
- GeoDiff: a Geometric Diffusion Model for Molecular Conformation Generation (Xu et al, ICLR 2022)
- Torsional Diffusion for Molecular Conformer Generation (Jing and Corso et al, 2022)

Where can you learn more about DDPMs and their (quite advanced) mathematics? Luckily, there is a good batch of new educational blog posts with step-by-step illustrations of the diffusion process and its implementation - try them (and see the small sketch after the links below)!

- The Annotated Diffusion Model by Niels Rogge and Kashif Rasul (HuggingFace)
- Improving Diffusion Models as an Alternative To GANs by Arash Vahdat and Karsten Kreis (NVIDIA)
- What are Diffusion Models by Lilian Weng (OpenAI)
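If you want a tiny taste before diving into those posts, the forward (noising) half of a DDPM fits in a few lines - this is just the standard closed-form q(x_t | x_0) = N(sqrt(ᾱ_t) x_0, (1 - ᾱ_t) I) that the denoising network is later trained to invert (a generic sketch, not tied to any of the graph papers above):

```python
# The DDPM forward (noising) process: sample x_t directly from x_0 in closed form.
import torch

T = 1000
betas = torch.linspace(1e-4, 0.02, T)              # standard linear noise schedule
alphas_bar = torch.cumprod(1.0 - betas, dim=0)     # abar_t = prod_{s<=t} (1 - beta_s)

def q_sample(x0, t, noise=None):
    """Sample x_t ~ q(x_t | x_0) = N(sqrt(abar_t) * x_0, (1 - abar_t) * I)."""
    noise = torch.randn_like(x0) if noise is None else noise
    ab = alphas_bar[t]
    return ab.sqrt() * x0 + (1.0 - ab).sqrt() * noise

x0 = torch.randn(8, 3)                             # e.g., 8 atoms with 3D coordinates
x_t = q_sample(x0, t=500)                          # the network is trained to predict the noise
```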
Workshop on Mining and Learning with Graphs @ ECML / PKDD 2022

The MLG workshop is co-located with ECML PKDD 2022 and will take place in Grenoble (France) on Sept. 23 (physical venue 🎉). Keynote speakers will be Soledad Villar (Johns Hopkins University) and Nils Kriege (University of Vienna). You can submit pretty much anything related to learning or data mining with and on graphs. Previous works (aka "lessons learnt") and early-idea papers are also very much welcome.

The deadline is June 20th - a perfect chance to finish up that project you wanted to submit to NeurIPS but ran a little bit late on 😉
A new computational fabric for Graph Neural Networks

“Graph Neural Networks (GNNs) typically align their computation graph with the structure of the input graph. But are graphs the right computational fabric for GNNs? A recent line of papers challenges this assumption by replacing graphs with more general objects coming from the field of algebraic topology, which offer multiple theoretical and computational advantages.”

A new Medium post by Michael Bronstein, Cristian Bodnar, and Fabrizio Frasca
tl;dr: The new Learning on Graphs Conference (LoG) is looking for more reviewers! We have a special emphasis on review quality via high monetary rewards, a more focused conference topic, and low reviewer load (max 3 papers). But for this we need your help! Sign up here: https://forms.gle/QFQmCSRN3zwFw9hz9

-----

LoG will take place virtually from 9 - 12 December 2022 and covers papers broadly related to graphs and geometry, as described in our Call for Papers: https://logconference.org/. Here are a few (tentative) details of the reviewing process:

Reviewer Rewards:
Area chairs rate the quality of each review in terms of constructiveness. The 20 highest-rated reviewers will receive an expected reward of $1500, funded by our generous sponsors. The exact number of reviewers that receive an award and the award amount are subject to change and might increase if sponsor revenue is greater than expected. The top reviewer (who is willing to do so) will be invited to talk about reviewing at the conference.

Review Process:
Submissions will be double-blind; we will use OpenReview to host papers and allow for public discussion. Comments posted by reviewers will remain anonymous.

Tentative timeline:
- Sep 9th: Abstract submission ~3 months before the conference.
- Sep 9th - 16th: reviewers bid for papers until paper submission deadline.
- Sep 16th: Paper submission deadline.
- Sep 16th - 17th: Paper-reviewer matching based on bids using the Toronto system.
- Sep 17th - Oct 20th: Main review period.
- Oct 20th - Nov 3rd: 2 weeks author and reviewer discussion, and paper revision period on OpenReview.
- Nov 3rd - 10th: 1 week reviewer and area chair discussion.
- Nov 10th - 24th: Easy decisions get accepted/rejected by area chairs. Unclear decisions and ethics concerns get escalated to Program Chairs/Senior Area Chairs.
- Nov 24th: Final decisions released.
- Nov 30th: Camera ready deadline.
- Dec 9th: Conference starts.

If you would like to review for LoG and are qualified, please sign up here. We would be very grateful to have you on board!
GraphGPS: Navigating Graph Transformers
Invited post by Ladislav Rampášek

In 2021, graph transformers (GTs) won several molecular property prediction challenges thanks to alleviating many issues pertaining to vanilla message-passing GNNs. Here, we try to organize the numerous freshly developed GT models into a single GraphGPS framework to enable general, powerful, and scalable graph transformers with linear complexity for all types of Graph ML tasks.

With GraphGPS, we managed to scale Graph Transformers to much larger graphs and get SOTA on several competitive benchmarks, e.g., 0.07 MAE on ZINC. Positional and structural encodings are necessary for graph Transformers, encoding "where" a node is and "what" its neighborhood looks like, respectively. Bonus: they even make MPNNs provably more powerful! We organize them into local, global, and relative types.

Key observation: it is better to combine an MPNN layer and a Transformer layer into a single block: this helps with over-smoothing and allows plug-and-play linear global attention, e.g., Performer. In fact, linear attention enables graph transformers to scale to graphs dramatically larger than typical molecules - we confirm it easily works on graphs with 5K nodes without any special batching!
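A condensed sketch of that hybrid layer (a simplified re-implementation for illustration only; the real thing lives in the GraphGPS repo linked below): run a local message-passing branch and a global attention branch in parallel on the same node features, then combine them with a feed-forward block.

```python
# Simplified sketch of a GPS-style layer: local MPNN branch + global attention branch in parallel.
import torch
import torch.nn as nn

class GPSLayerSketch(nn.Module):
    def __init__(self, dim, heads=4):
        super().__init__()
        self.local_msg = nn.Linear(dim, dim)   # toy stand-in for an MPNN layer (e.g., GatedGCN/GINE)
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)  # swap in Performer for linear attention
        self.ffn = nn.Sequential(nn.Linear(dim, 2 * dim), nn.ReLU(), nn.Linear(2 * dim, dim))
        self.norm1, self.norm2, self.norm3 = (nn.LayerNorm(dim) for _ in range(3))

    def forward(self, x, edge_index):          # x: [num_nodes, dim], with PE/SE already added to features
        src, dst = edge_index
        # local branch: aggregate messages only from graph neighbours
        local = torch.zeros_like(x).index_add_(0, dst, torch.relu(self.local_msg(x[src])))
        # global branch: attention over all node pairs, ignoring the edges
        glob, _ = self.attn(x.unsqueeze(0), x.unsqueeze(0), x.unsqueeze(0))
        h = self.norm1(x + local) + self.norm2(x + glob.squeeze(0))
        return self.norm3(h + self.ffn(h))

layer = GPSLayerSketch(dim=64)
x = torch.randn(50, 64)
edge_index = torch.randint(0, 50, (2, 200))
out = layer(x, edge_index)                     # [50, 64]
```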

Putting these three ingredients together - positional/structural encodings, a choice of MPNN layer, and a Transformer layer combined into one block - gives the blueprint for our GraphGPS: General, Powerful, Scalable graph Transformer. Plain numbers:

🚀 400% faster than previous graph transformers;
📈 Scaling to batches of graphs up to 10,000 nodes each thanks to linear attention models;
🛠 The GraphGPS library allows you to combine any MPNN with any Transformer and any positional/structural encoding.

Find more details in:
- Medium blog post with a deep-dive into GraphGPS: https://mgalkin.medium.com/graphgps-navigating-graph-transformers-c2cc223a051c
- arxiv preprint: https://arxiv.org/abs/2205.12454
- Github repo: https://github.com/rampasek/GraphGPS
2nd Open Catalyst Challenge at NeurIPS 2022

The largest benchmark for equivariant GNNs has announced its 2nd edition, to be co-located with NeurIPS 2022. From the official announcement:

“This year's challenge focuses on the same task -- Initial Structure to Relaxed Energy (IS2RE) -- as last year. The primary differences are: 1) instead of two tracks, we will have a single track where using the IS2RE data and/or the Structure-to-Energy-Forces (S2EF) 2M training data is allowed. 2) A new test-challenge split will be released in September specifically for this year's challenge.”

Using the additional S2EF data as a training signal leads to consistently better performance, so you can now properly scale up and invest a few thousand GPU / TPU / Graphcore hours! (pun intended, yours truly wrote this message on a basic GPU-free laptop). Well, Open Catalyst is a notoriously infrastructure-demanding challenge.