Graph Machine Learning – Telegram
Everything about graph theory, computer science, machine learning, etc.


If you have something worth sharing with the community, reach out @gimmeblues, @chaitjo.

Admins: Sergey Ivanov; Michael Galkin; Chaitanya K. Joshi
GNN User Group Videos 2021

The NVIDIA and AWS DGL teams wish you a wonderful holiday season and a Happy New Year. To stay connected with their 1,000+ members, you can follow their Slack channel. You may also watch the replays from the 25 global speakers who made the meetups possible in 2021.


12/9/2021 Session: Neptune ML: Graph Machine Learning meets Graph Database (Dr. Xiang Song, AWS AI Research and Education Lab & Joy Wang, Amazon Neptune) and Atomistic Line Graph Neural Network for improved materials property predictions (Dr. Kamal Choudhary, National Institute of Standards and Technology (NIST), Maryland).

10/28/2021 Session: Large-scale GNN training with DGL (Da Zheng Ph.D., Amazon) and New Trends and Results in Graph Federated Learning (Prof. Carl Yang, Emory University).

9/30/2021 Session: Unified Tensor - Enabling GPU-centric Data Access for Efficient Large Graph GNN Training (Seungwon Min, University of Illinois at Urbana-Champaign) and Challenges and Thinking in Go-production of GNN + DGL (Dr. Jian Zhang, AWS Shanghai AI Lab and AWS Machine Learning Solution Lab).

7/29/2021 Session: DGL 0.7 release (Dr. Minjie Wang, Amazon), Storing Node Features in GPU memory to speedup billion-scale GNN training (Dr. Dominique LaSalle, NVIDIA), Locally Private Graph Neural Networks (Sina Sajadmanesh, Idiap Research Institute, Switzerland) and Graph Embedding and Application in Meituan (Mengdi Zhang, Meituan).

6/24/2021 Session: Binary Graph Neural Networks and Dynamic Graph Models (Mehdi Bahri, Imperial College London) and Simplifying large-scale visual analysis of tricky data & models with GPUs, graphs, and ML (Leo Meyerovich, Graphistry Inc).

5/27/2021 Session: Graphite: GRAPH-Induced feaTure Extraction for Point Cloud Registration (Mahdi Saleh, TUM), Optimizing Graph Transformer Networks with Graph-based Techniques (Loc Hoang, University of Texas at Austin) and Encoding the Core Business Entities Using Meituan Brain (Mengdi Zhang, Meituan).

4/29/2021 Session: Boost then Convolve: Gradient Boosting Meets Graph Neural Networks (Dr. Sergey Ivanov, Criteo, Russia) and Inductive Representation Learning of Temporal Networks via Causal Anonymous Walks (Prof. Pan Li, Purdue University).

3/25/2021 Session: Therapeutics Data Commons: Machine Learning Datasets and Tasks for Therapeutics (Prof. Marinka Zitnik & Kexin Huang, Harvard University) and The Transformer Network for the Traveling Salesman Problem (Prof. Xavier Bresson, Nanyang Technological University (NTU), Singapore).

2/25/2021 Session: Gunrock: Graph Analytics on GPU (Dr. John Owens, University of California, Davis), NVIDIA CuGraph - An Open-Source Package for Graphs (Dr. Joe Eaton, NVIDIA) and Exploitation on Learning Mechanism of GNN (Dr. Chuan Shi, Beijing University of Posts and Telecommunications).

1/28/2021 Session: A Framework For Differentiable Discovery of Graph Algorithms (Dr. Le Song, Georgia Tech).
What does 2022 hold for Geometric & Graph ML?

Michael Bronstein and Petar Veličković released a huge post summarizing the state of Graph ML in 2021 and predicting possible breakthroughs in 2022. Moreover, the authors conducted a large-scale community study and interviewed many prominent researchers, discussing 11 key aspects:

- Rising importance of geometry in ML
- Message passing GNNs are still dominating
- Differential equations power new GNN architectures
- Old ideas from signal processing, neuroscience, and physics strike back
- Modeling complex systems with higher-order structures
- Reasoning, axiomatisation, and generalisation are still challenging
- Graphs in Reinforcement Learning
- AlphaFold 2 is a paradigm shift in structural biology
- Progress in graph transformer architectures
- Drug discovery with Geometric and Graph ML
- Quantum ML + graph-based methods

And 130 references 😉
Postdoc position at University of Milano-Bicocca with Dimitri Ognibene

The University of Milano-Bicocca is offering one postdoctoral position on machine learning and AI applied to understanding and countering social media threats to teenagers. The successful candidate will be involved in COURAGE, a multidisciplinary project funded by the Volkswagen Foundation.

The position is research-only, starting in April 2022.

Application Closes: T.B.D. (about mid-February)

Starts: April - May 2022

Duration: 1.5 years

Salary: €3,100 per month after tax

Please contact us for expression of interest and preliminary information:

dimitri.ognibene@unimib.it (PI)

f.lomonaco5@campus.unimib.it

Topics: Social media, NLP, machine learning, cognitive models, opinion dynamics, computer vision, reinforcement learning, graph neural network, recommender systems, network analysis

Project: COURAGE

Informal Description: https://sites.google.com/site/dimitriognibenehomepage/jobs
Job openings - ML for molecule design at Roche, Zurich

The Swiss ML group within Prescient Design/Roche is looking for (graph) ML and (computational/structural) biology (BIO) researchers. Apply if you want to use your neural network skills to design macromolecules & help improve lives.

More information: bit.ly/3HbqQ0w (ML scientist), bit.ly/3BG0ra2 (computational biologist)

Team: www.prescient.design, a member of the Roche group
Location: Zurich/Switzerland
Relevant work: bit.ly/36mimae
Timing: asap
Contact: andreas.loukas[at]roche.com

Keywords: machine learning, graph neural networks, geometric deep learning, generative models, invariant/equivariant neural nets, reinforcement learning, protein design, rosetta
New Release of DGL v0.8

One of the most prominent libraries for graph machine learning, DGL, has recently released v0.8, with new features as well as improvements in system performance. The highlights are:

- A major update of the mini-batch sampling pipeline (a usage sketch follows below).
- Significant acceleration and code simplification of heterogeneous GNNs and link-prediction models.
- GNNLens: a DGL-empowered tool for GNN explainability.
- New functions to create, transform and augment graph datasets, e.g. in graph contrastive learning or repurposing a graph for different tasks.
- DGL-Go: a new command-line tool for quick GNN training experiments with SOTA models.

https://www.dgl.ai/release/2022/03/01/release.html
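To give a flavour of the revamped pipeline, here is a minimal sketch of two-layer neighbour sampling with the unified DataLoader. This is our toy example on a random graph, not code from the release notes, so double-check names against the v0.8 docs:

```python
import dgl
import torch

# Toy homogeneous graph with random features/labels as stand-ins for a real dataset.
g = dgl.rand_graph(1000, 5000)
g.ndata["feat"] = torch.randn(1000, 16)
g.ndata["label"] = torch.randint(0, 3, (1000,))
train_nids = torch.arange(800)

# Sample up to 10 neighbours per node for each of the 2 GNN layers.
sampler = dgl.dataloading.NeighborSampler([10, 10])

# v0.8 unifies node-, edge- and graph-wise loading under one DataLoader.
dataloader = dgl.dataloading.DataLoader(
    g, train_nids, sampler, batch_size=64, shuffle=True, drop_last=False
)

for input_nodes, output_nodes, blocks in dataloader:
    x = blocks[0].srcdata["feat"]    # features of all sampled source nodes
    y = blocks[-1].dstdata["label"]  # labels of the seed nodes
    # the forward/backward pass of your GNN over `blocks` goes here
    break
```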
Recent Advances in Efficient and Scalable Graph Neural Networks

A new research blogpost by Chaitanya K. Joshi, overviewing the toolbox that helps Graph Neural Networks scale to real-world graphs and real-time applications.

Training and deploying GNNs to handle real-world graph data poses several theoretical and engineering challenges:
1. Giant Graphs – Memory Limitations
2. Sparse Computations – Hardware Limitations
3. Graph Subsampling – Reliability Limitations

The blogpost introduces three simple but effective ideas in the 'toolbox' for developing efficient and scalable GNNs:
- Data Preparation - From sampling large-scale graphs to CPU-GPU hybrid training via historical node embedding lookups.
- Efficient Architectures - Graph-augmented MLPs for scaling to giant networks, and efficient graph convolution designs for real-time inference on batches of graph data.
- Learning Paradigms - Combining Quantization Aware Training (low-precision model weights and activations) with Knowledge Distillation (improving efficient GNNs using expressive teacher models) to minimize inference latency while maximizing performance (a distillation-loss sketch follows below).

Blogpost: https://www.chaitjo.com/post/efficient-gnns/
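To make the last item concrete, below is a hedged sketch of the kind of logit-based knowledge distillation loss the post refers to (Hinton-style soft targets); the temperature and weighting values are illustrative placeholders, not numbers from the blogpost:

```python
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.5):
    """Cross-entropy on hard labels + KL between temperature-softened logits."""
    hard = F.cross_entropy(student_logits, labels)
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=-1),
        F.softmax(teacher_logits / T, dim=-1),
        reduction="batchmean",
    ) * (T * T)  # rescale so gradient magnitudes match the hard-label term
    return alpha * hard + (1 - alpha) * soft
```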
Learning on Graphs with Missing Node Features

A new paper and associated blogpost by Emanuele Rossi and Prof. Michael Bronstein.

Graph Neural Networks typically run under the assumption that a full set of features is available for all nodes. In real-world scenarios, features are often only partially available (in social networks, for example, age and gender may be known only for a small subset of users). Feature Propagation is an efficient and scalable approach for handling missing features in graph machine learning applications, and it works surprisingly well despite its simplicity.
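At its core, the algorithm alternates diffusion over the normalized adjacency with re-clamping of the observed features. A minimal dense NumPy sketch of that loop (our simplification: the paper works with sparse matrices, and the iteration count here is just a reasonable default):

```python
import numpy as np

def feature_propagation(adj, x, known_mask, num_iters=40):
    """adj: (n, n) adjacency; x: (n, d) features; known_mask: (n,) bool."""
    deg = adj.sum(1)
    with np.errstate(divide="ignore"):
        d_inv_sqrt = np.where(deg > 0, deg ** -0.5, 0.0)
    a_norm = adj * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]  # D^-1/2 A D^-1/2

    x = np.where(known_mask[:, None], x, 0.0)  # zero-init the missing rows
    x_known = x.copy()
    for _ in range(num_iters):
        x = a_norm @ x                       # diffuse
        x[known_mask] = x_known[known_mask]  # re-clamp observed features
    return x
```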

📝 Blog Post: https://bit.ly/3ILn1Rl
💻 Code: https://bit.ly/3J9ftbr
🎥 Recording: https://bit.ly/3CbBvHW
📖 Slides: https://bit.ly/3Mh5geW
📜 Paper: https://bit.ly/3Kgo4JE
The Exact Class of Graph Functions Generated by Graph Neural Networks by Mohammad Fereydounian (UPenn), Hamed Hassani (UPenn), Javid Dadashkarimi (Yale), and Amin Karbasi (Yale).

ArXiv: https://arxiv.org/abs/2202.08833

A recent pre-print discussing the connections between Graph Neural Networks (GNNs) and Dynamic Programming (DP).

The paper asks: Given a graph function, defined on an arbitrary set of edge weights and node features, is there a GNN whose output is identical to the graph function?

They show that many graph problems, e.g. min-cut value, max-flow value, and max-clique size, can be represented by a GNN. Additionally, there exist simple graphs for which no GNN can correctly find the length of the shortest paths between all nodes (a classic DP problem).

The paper's main claim is that this negative example shows that DP and GNNs are misaligned, even though (conceptually) they follow very similar iterative procedures.
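For context on why the alignment is claimed at all: the Bellman-Ford DP update is exactly a synchronous min-aggregation 'message passing' over edges, as in this toy sketch (our illustration, not code from the paper):

```python
import math

def bellman_ford(num_nodes, edges, source):
    """Single-source shortest paths as min-aggregation 'message passing'.
    edges: list of (u, v, weight) directed edges."""
    dist = [math.inf] * num_nodes
    dist[source] = 0.0
    for _ in range(num_nodes - 1):  # n-1 rounds of synchronous updates
        for u, v, w in edges:
            # message from u to v; aggregation is a min over incoming messages
            if dist[u] + w < dist[v]:
                dist[v] = dist[u] + w
    return dist

print(bellman_ford(4, [(0, 1, 1.0), (1, 2, 2.0), (0, 2, 5.0), (2, 3, 1.0)], 0))
# [0.0, 1.0, 3.0, 4.0]
```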

This claim has been hotly debated by Graph ML Twitter, with many interesting perspectives, e.g. see the original Tweet and subsequent discussions by L. Cotta and P. Veličković.
'Graph Neural Networks through the lens of algebraic topology, differential geometry, and PDEs'

A recent talk by Prof. Michael Bronstein (University of Oxford, Twitter), delivered in-person at the Computer Laboratory, University of Cambridge.

The talk is centred around the idea that graphs can be viewed as a discretisation of an underlying continuous manifold. This physics-inspired approach opens up a trove of tools from differential geometry, algebraic topology, and differential equations that have so far been largely unexplored in graph ML.
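As a toy illustration of this viewpoint (ours, not taken from the talk): the graph heat equation dX/dt = -LX, integrated with explicit Euler steps, is the continuous-time ancestor of several diffusion-based GNN layers in this line of work.

```python
import numpy as np

def heat_diffusion(adj, x, t=1.0, num_steps=100):
    """Explicit-Euler integration of dX/dt = -L X on a graph.
    adj: (n, n) adjacency; x: (n, d) initial node signal."""
    lap = np.diag(adj.sum(1)) - adj  # combinatorial Laplacian L = D - A
    tau = t / num_steps              # small step size for stability
    for _ in range(num_steps):
        x = x - tau * (lap @ x)
    return x
```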

Recording: https://www.cl.cam.ac.uk/seminars/wednesday/video/20220309-1500-t170978.html
Associated Blogpost: https://towardsdatascience.com/graph-neural-networks-beyond-weisfeiler-lehman-and-vanilla-message-passing-bc8605fa59a
Organizational update

We are very happy to share that Chaitanya K. Joshi has agreed to be one of the admins of the channel. He has already been involved in several posts here and has written interesting blog posts. He is currently a PhD student at the University of Cambridge, supervised by Prof. Pietro Liò. His research explores the intersection of Graph and Geometric Deep Learning with applications in biology and drug discovery. He previously worked on Graph Neural Network architectures and applications in Combinatorial Optimization at the NTU Graph Deep Learning Lab and at A*STAR, Singapore, together with Prof. Xavier Bresson. Please welcome Chaitanya, and if you have something to share, do not hesitate to reach out to him.
📃 Fresh Picks from ArXiv
The past week on the GraphML ArXiv digest: A flurry of new survey papers, GNNs for molecular property prediction and NLP/KG, as well as new avenues in GNN modelling.

📚 Surveys:
- Generative models for molecular discovery: Recent advances and challenges. ft. Wengong Jin, Tommi Jaakkola, Regina Barzilay.
- Explainability in Graph Neural Networks: An Experimental Survey.
- A Survey on Deep Graph Generation: Methods and Applications.
- Knowledge Graph Embedding Methods for Entity Alignment: An Experimental Review.
- Few-Shot Learning on Graphs: A Survey.

🧬 GNNs for Science:
- Protein Representation Learning by Geometric Structure Pretraining. ft. Jian Tang.
- Multimodal Learning on Graphs for Disease Relation Extraction. ft. Marinka Zitnik.
- MolNet: A Chemically Intuitive Graph Neural Network for Prediction of Molecular Properties.
- Simulating Liquids with Graph Networks.

🗣 GNNs for NLP and Knowledge Graphs:
- A Unified Framework for Rank-based Evaluation Metrics for Link Prediction in Knowledge Graphs. ft. Mikhail Galkin.
- Context-Dependent Anomaly Detection with Knowledge Graph Embedding Models.
- AdaLoGN: Adaptive Logic Graph Network for Reasoning-Based Machine Reading Comprehension.
- HeterMPC: A Heterogeneous Graph Neural Network for Response Generation in Multi-Party Conversations.

🌐 GNN Modelling and Applications:
- GRAND+: Scalable Graph Random Neural Networks. ft. Jie Tang.
- Graph Representation Learning with Individualization and Refinement. ft. Lee Wee Sun.
- Graph Augmentation Learning.
- SoK: Differential Privacy on Graph-Structured Data.
- Incorporating Heterophily into Graph Neural Networks for Graph Classification.
- Supervised Contrastive Learning with Structure Inference for Graph Classification.

(If I forgot to mention your paper, please shoot me a message and I will update the post. We will try to resume the 'Fresh Picks from ArXiv' series every Monday morning!)
🏆 Inductive Link Prediction Challenge 2022

Team PyKEEN launches an open Inductive Link Prediction Challenge (ILPC 2022) for Knowledge Graphs to streamline community efforts in developing inductive graph representation learning methods.

For years, link prediction in KGs was done exclusively in the transductive setup, i.e., training and inference are performed on the same graph, so one could train a shallow entity embedding matrix. What do you do if your graph gets updated? Usually, retrain the whole pipeline. The emergence of GNNs paved the way for inductive models that do not necessarily need trainable entity embeddings to perform standard graph tasks.

In the inductive setup, training and inference graphs are disjoint: having trained a model on a training graph, participants are asked to predict links over a new, unseen inference graph. This renders shallow embeddings from the training graph rather useless, since they cannot be applied to the new, disjoint graph. Hence, we need better ways to obtain entity embeddings that work for unseen nodes as well as for seen, trainable ones. Looks like a job for GNNs, right?
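To see what that means in code, here is a heavily simplified sketch (all names are ours, and this is not the PyKEEN API): only relation embeddings are trainable, and entity vectors are computed on the fly from the graph at hand, so the same model can encode an unseen inference graph.

```python
import torch
import torch.nn as nn

class InductiveEncoder(nn.Module):
    """Entities get no trainable embeddings: their vectors are derived from
    the graph structure plus shared, trainable relation embeddings."""

    def __init__(self, num_relations, dim):
        super().__init__()
        self.rel = nn.Embedding(num_relations, dim)  # shared across graphs
        self.update = nn.Linear(dim, dim)

    def forward(self, num_entities, triples):
        # triples: (m, 3) LongTensor of (head, relation, tail);
        # for brevity we only pass messages head -> tail.
        h = torch.zeros(num_entities, self.rel.embedding_dim)
        for _ in range(2):  # two rounds of message passing
            msg = torch.zeros_like(h)
            msg.index_add_(0, triples[:, 2], h[triples[:, 0]] + self.rel(triples[:, 1]))
            h = torch.relu(self.update(msg))
        return h  # entity representations for link scoring on any graph
```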

The challenge offers two new inductive link prediction datasets - small and large - where the larger one is challenging even for modern GNNs; two baselines; a standardized evaluation protocol; and a codebase to start from.

More details on the inductive setup and submission details:

- Medium blog post
- Official Github repo
- arxiv pre-print
Fresh Picks from ArXiv
The past week on GraphML arXiv: scaling up GNNs, heterophily, expressivity, sparse equivariant graph networks, and applications ranging from particle physics to electronic health records.

Scaling up GNNs:
- Towards Training Billion Parameter Graph Neural Networks for Atomic Simulations ft. Open Catalyst Project team.
- PipeGCN: Efficient Full-Graph Training of Graph Convolutional Networks with Pipelined Feature Communication

Heterophily:
- Meta-Weight Graph Neural Network: Push the Limits Beyond Global Homophily
- Exploiting Neighbor Effect: Conv-Agnostic GNNs Framework for Graphs with Heterophily

Theory:
- GraphCoCo: Graph Complementary Contrastive Learning
- Fine-Tuning Graph Neural Networks via Graph Topology induced Optimal Transport
- SpeqNets: Sparsity-aware Permutation-equivariant Graph Networks ft. Christopher Morris.
- Twin Weisfeiler-Lehman: High Expressive GNNs for Graph Classification
- Exploring High-Order Structure for Robust Graph Structure Learning

Surveys:
- Graph Neural Networks in Particle Physics: Implementations, Innovations, and Challenges ft. Savannah Thais.
- Encoder-Decoder Architecture for Supervised Dynamic Graph Learning: A Survey
- A systematic approach to random data augmentation on graph neural networks

Applications:
- 3D Human Pose Estimation Using Möbius Graph Convolutional Networks ft. Emanuele Rodola.
- Graph-Text Multi-Modal Pre-training for Medical Representation Learning
- Sequence-to-Sequence Knowledge Graph Completion and Question Answering
- Deep Reinforcement Learning Guided Graph Neural Networks for Brain Network Analysis
- Ethereum Fraud Detection with Heterogeneous Graph Neural Networks
- Duality-Induced Regularizer for Semantic Matching Knowledge Graph Embeddings ft. Shuiwang Ji.

(If I forgot to mention your paper, please shoot me a message and I will update the post.)
ICML 2022 Workshops Announced
The list of accepted workshops that will take place at this year's ICML has been announced recently.

Some workshops relevant to this group include:
- AI for Science
- Workshop on Machine Learning in Computational Design
- Topology, Algebra, and Geometry in Machine Learning
- ICML 2022 Workshop on Computational Biology
- The First Workshop on Pre-training: Perspectives, Pitfalls, and Paths Forward
SpeqNets: Sparsity-aware Permutation-equivariant Graph Networks

Christopher Morris (McGill University and Mila), joint work with Gaurav Rattan, Sandra Kiefer (RWTH Aachen), and Siamak Ravanbakhsh (McGill University and Mila)

Standard graph neural networks have clear limitations in approximating permutation-equivariant functions over graphs, i.e., their expressive power is bounded by the 1-WL (1,2). Hence, more expressive, higher-order graph neural networks have recently emerged, e.g., (1,3), which overcome these limitations.

However, they either operate on k-order tensors or consider all k-node subgraphs, implying an exponential dependence on k in memory requirements, and they do not adapt to the sparsity of the graph. In (4), we introduce a new class of heuristics for the graph isomorphism problem, the (k,s)-WL, which offers more fine-grained control over the tradeoff between expressivity and scalability.

Essentially, the algorithm is a variant of the local k-WL (5), but it only considers specific tuples to avoid the exponential memory complexity of the k-WL. Concretely, the algorithm only considers k-tuples, or subgraphs on k nodes, with at most s connected components, effectively exploiting the potential sparsity of the underlying graph. On the theoretical side, we show how varying k and s leads to a tradeoff between scalability and expressivity.
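To make the restriction concrete, here is a deliberately naive enumeration of the tuples the (k,s)-WL keeps, written with networkx; this is our illustration, and the paper's implementation is of course far more efficient:

```python
import itertools
import networkx as nx

def ks_tuples(graph, k, s):
    """Yield the k-tuples of nodes whose induced subgraph has at most
    s connected components -- the only tuples the (k,s)-WL maintains."""
    for tup in itertools.product(graph.nodes, repeat=k):
        sub = graph.subgraph(set(tup))
        if nx.number_connected_components(sub) <= s:
            yield tup

g = nx.path_graph(5)            # the path 0-1-2-3-4
kept = list(ks_tuples(g, 2, 1))
print(len(kept), "of", 5 ** 2)  # 13 of 25: far fewer on sparse graphs
```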

Further, we derive a new hierarchy of permutation-equivariant graph neural networks, denoted SpeqNets, based on the above combinatorial insights, reaching universality in the limit. These architectures vastly reduce computation times compared to standard higher-order graph networks in supervised node- and graph-level classification and regression, while significantly improving on standard graph neural network and graph kernel architectures in predictive performance.


(1) Weisfeiler and Leman Go Neural: Higher-order Graph Neural Networks. Christopher Morris, Martin Ritzert, Matthias Fey, William L. Hamilton, Jan Eric Lenssen, Gaurav Rattan, Martin Grohe, AAAI 2019.
(2) How Powerful are Graph Neural Networks? Keyulu Xu, Weihua Hu, Jure Leskovec, Stefanie Jegelka, ICLR 2019.
(3) Provably Powerful Graph Networks. Haggai Maron, Heli Ben-Hamu, Hadar Serviansky, Yaron Lipman, NeurIPS 2019.
(4) SpeqNets: Sparsity-aware Permutation-equivariant Graph Networks (https://arxiv.org/abs/2203.13913). Christopher Morris, Gaurav Rattan, Sandra Kiefer, Siamak Ravanbakhsh, Geometrical and Topological Representation Learning (GT-RL, ICLR 2022).
(5) Weisfeiler and Leman go sparse: Towards scalable higher-order graph embeddings. Christopher Morris, Gaurav Rattan, Petra Mutzel, NeurIPS 2020.
Fresh Picks from ArXiv
The past week on GraphML arXiv: Hypergraph NNs, GNNs are dynamic programmers, latent graph learning, 3D equivariant molecule generation, and a new GNN library for Keras.

Hypergraph Neural Networks:
- Message Passing Neural Networks for Hypergraphs
- Hypergraph Convolutional Networks via Equivalency between Hypergraphs and Undirected Graphs ft. Yu Rong.
- Preventing Over-Smoothing for Hypergraph Neural Networks

Theory:
- Graph Neural Networks are Dynamic Programmers ft. Petar Veličković.
- OrphicX: A Causality-Inspired Latent Variable Model for Interpreting Graph Neural Networks
- Shift-Robust Node Classification via Graph Adversarial Clustering ft. Jiawei Han.
- Mutual information estimation for graph convolutional neural networks
- Graph-in-Graph (GiG): Learning interpretable latent graphs in non-Euclidean domain for biological and healthcare applications ft. Michael Bronstein.

🏐 Equivariance and 3D Graphs:
- Equivariant Diffusion for Molecule Generation in 3D ft. Max Welling.
- 3D Equivariant Graph Implicit Functions

📚 Libraries and Surveys:
- GNNkeras: A Keras-based library for Graph Neural Networks and homogeneous and heterogeneous graph processing ft. Franco Scarselli.
- Graph Neural Networks in IoT: A Survey

🔨 Applications:
- Graph similarity learning for change-point detection in dynamic networks ft. Xiaowen Dong.
- Multilingual Knowledge Graph Completion with Self-Supervised Adaptive Graph Alignment ft. Yizhou Sun.
- A Simple Yet Effective Pretraining Strategy for Graph Few-shot Learning
- Pretraining Graph Neural Networks for few-shot Analog Circuit Modeling and Design ft. Pieter Abbeel.

(If I forgot to mention your paper, please shoot me a message and I will update the post.)
Equilibrium Graph Pooling

In graph-level prediction tasks, be it graph classification, graph regression, or something else, we usually apply some kind of graph pooling to aggregate node representations into a single vector. This has to be a permutation-invariant function, so we do not have much choice beyond the standard mean / max / sum / min / median.

Fabian Fuchs in his new blog post asks:

“Have we found the global optimum of how to do global aggregation or are we stuck in a local minimum?”

In the new work, they propose Equilibrium Aggregation for global graph pooling. The idea brings together two subfields of deep learning: Learning on Sets (you’ve probably heard about Janossy pooling, Deep Sets and Self-Attention) and Implicit layers (Equilibrium models and Neural ODEs, for example).

Equilibrium Aggregation computes the pooled representation as the energy minimizer y* = argmin_y Σᵢ F(xᵢ, y) + R(y), where F is a pairwise potential and R is a regularizer term. The potential function is parameterized by a neural net and, for starters, might be implemented as a DeepSets-style MLP. By varying the potential function you can also recover vanilla sum/max/mean/median pooling.
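Here is a minimal PyTorch sketch of the idea, with our simplifications: the inner argmin is approximated by a few unrolled gradient steps, and the potential is a tiny MLP (the paper uses a more careful inner optimization):

```python
import torch
import torch.nn as nn

class EquilibriumAggregation(nn.Module):
    """Pool a set {x_i} into y* = argmin_y sum_i F(x_i, y) + lam * ||y||^2,
    approximating the argmin with unrolled gradient descent."""

    def __init__(self, in_dim, out_dim, lam=1.0, steps=10, lr=0.1):
        super().__init__()
        self.potential = nn.Sequential(
            nn.Linear(in_dim + out_dim, 64), nn.Softplus(), nn.Linear(64, 1)
        )
        self.lam, self.steps, self.lr, self.out_dim = lam, steps, lr, out_dim

    def forward(self, x):  # x: (n, in_dim) node embeddings of one graph
        y = torch.zeros(self.out_dim, requires_grad=True)
        for _ in range(self.steps):
            pair = torch.cat([x, y.expand(x.size(0), -1)], dim=-1)
            energy = self.potential(pair).sum() + self.lam * (y ** 2).sum()
            (grad,) = torch.autograd.grad(energy, y, create_graph=True)
            y = y - self.lr * grad  # unrolled inner optimization step
        return y  # graph-level representation
```

Because the inner loop is unrolled with create_graph=True, the potential's parameters receive gradients through the optimization path, which is what lets the pooling be trained end to end.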

Generally speaking, the idea of using DeepSets-style aggregation can be traced back to GraphSAGE itself, but back then it did not have much theoretical justification.

Experimentally, using equilibrium aggregation as the global pooling function (particularly with a GCN message-passing backbone) leads to significant improvements on MOL-PCBA and several graph-level toy tasks.

So far, equilibrium aggregation does not bring much benefit when used as a message aggregation function inside a GNN layer, and it does not support edge features in global pooling - but those could be cool extensions and your next research project 😉

Check out Fabian’s post for more details!
Fresh Picks from ArXiv
The past week on GraphML arXiv: Dynamics, generalization and structure-aware generation for molecules, learning graph combinatorial optimization, and more.

⚛️ Molecular Graphs:
- How Robust are Modern Graph Neural Network Potentials in Long and Hot Molecular Dynamics Simulations? ft. Johannes Gasteiger, Stephan Günnemann.
- How Do Graph Networks Generalize to Large and Diverse Molecular Systems? ft. Johannes Gasteiger, Stephan Günnemann, Open Catalyst Project Team.
- In-Pocket 3D Graphs Enhance Ligand-Target Compatibility in Generative Small-Molecule Creation

💼 Graph Combinatorial Optimization:
- Learning to solve Minimum Cost Multicuts efficiently using Edge-Weighted Graph Convolutional Neural Networks
- Learning-Based Approaches for Graph Problems: A Survey

🌐 Miscellaneous:
- Graph-based Approximate NN Search: A Revisit
- Multi-Modal Hypergraph Diffusion Network with Dual Prior for Alzheimer Classification
- C3KG: A Chinese Commonsense Conversation Knowledge Graph
- Graph Neural Networks Designed for Different Graph Types: A Survey
- Equilibrium Aggregation: Encoding Sets via Optimization ft. Fabian Fuchs.

(If I forgot to mention your paper, please shoot me a message and I will update the post.)
Announcing the Learning on Graphs Conference

A brand new venue for the Graph/Geometric Machine Learning community!

Why? See the blogpost: https://michael-bronstein.medium.com/announcing-the-learning-on-graphs-conference-c63caed7347

The LoG Conference key facts:
- Covers work broadly related to machine learning on graphs and geometry
- Proceedings track published in PMLR
- Also has a non-archival extended abstract track
- Double blind review process on OpenReview
- Top reviewers receive monetary rewards
- First year: virtual, December 9-12, 2022, free to attend.

Call for papers: https://logconference.github.io/cfp/

Stay updated via Twitter: https://twitter.com/LogConference
Or LinkedIn: https://www.linkedin.com/company/log-conference

Advisory board:
Regina Barzilay (MIT), Xavier Bresson (NUS), Michael Bronstein (Oxford/Twitter), Stephan Günnemann (TUM), Stefanie Jegelka (MIT), Jure Leskovec (Stanford), Pietro Liò (Cambridge), Jian Tang (MILA/HEC Montreal), Jie Tang (Tsinghua), Petar Veličković (DeepMind), Soledad Villar (JHU), Marinka Zitnik (Harvard).

Organizers:
Yuanqi Du (DP Technology), Hannes Stärk (MIT), Derek Lim (MIT), Chaitanya Joshi (Cambridge), Andreea-Ioana Deac (Mila), Iulia Duta (Cambridge), Joshua Robinson (MIT).
Fresh Picks from ArXiv - ICLR Workshops Special Edition
The past week on GraphML arXiv: lots and lots of graph ML for drug discovery papers + graph generation, hypergraphs, subgraphs, and more!

💊 Drug Discovery
- Deep Sharpening Of Topological Features For De Novo Protein Design ft. Bruno Correia, Michael Bronstein, Andreas Loukas
- Decoding Surface Fingerprints For Protein-Ligand Interactions ft. Bruno Correia, Michael Bronstein, Pietro Lio
- Physics-Informed Deep Neural Network For Rigid-Body Protein Docking ft. Bruno Correia, Michael Bronstein
- Evaluating Generalization in GFlowNets for Molecule Design ft. Yoshua Bengio, Michael Bronstein
- Torsional Diffusion for Molecular Conformer Generation ft. Regina Barzilay, Tommi Jaakkola
- Graph Anisotropic Diffusion For Molecules ft. Michael Bronstein

🕸 Graph Generation
- SPECTRE: Spectral Conditioning Helps to Overcome the Expressivity Limits of One-shot Graph Generators ft. Andreas Loukas
- Explanation Graph Generation via Pre-trained Language Models: An Empirical Study with Contrastive Learning ft. Mohit Bansal

🔨 GNN Models
- Simplicial Attention Networks ft. Cris Bodnar, Pietro Lio
- Graph Pooling for Graph Neural Networks: Progress, Challenges, and Opportunities
- Graph Ordering Attention Networks
- Expressiveness and Approximation Properties of Graph Neural Networks
- Efficient Representation Learning of Subgraphs by Subgraph-To-Node Translation

🚗 Applications
- Learning to Solve Travelling Salesman Problem with Hardness-adaptive Curriculum ft. Wenwu Zhu
- Principled inference of hyperedges and overlapping communities in hypergraphs
- Graph Enhanced BERT for Query Understanding ft. Jiliang Tang

(If I forgot to mention your paper, please shoot me a message and I will update the post.)
Can graph neural networks understand chemistry?

🎦 Video:
https://www.youtube.com/watch?v=jrVXJykB8qc

A talk by Dominique Beaini on their recent work and the 'maze analogy' for graph representation learning.

Covering papers on Principle Neighbourhood Aggregation, Directional GNNs, and Graph Transformers, this talk touches several sub-areas of recent advances in GNN architectures - WL testing and expressivity, positional encodings, anisotropy, spectral techniques, fully connected message passing, etc.