An Overview of Graph Neural Networks (GNN)

Carlo C.
6 min read · Dec 17, 2023


[Image: GNN, generated by the author with DALL·E 3]

Neural networks are computational models inspired by the workings of the human brain, capable of learning from complex and unstructured data such as images, text, audio, and video. However, many kinds of data cannot easily be represented by a traditional neural network, in particular data with a graph structure. A graph is a collection of nodes and edges that represent the entities and relationships of a system, respectively. Graphs are everywhere: social networks, recommendation systems, computational chemistry, molecular biology, cybersecurity, and much more. How can we harness the power of neural networks to analyze and learn from this graph-structured data?
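As a concrete illustration of what such data looks like, a small graph can be stored as a plain adjacency list that maps each node to its neighbors. This is only a minimal sketch; the node names are invented for the example.

```python
# A tiny social-network-style graph stored as an adjacency list.
# Node names and edges are invented purely for illustration.
graph = {
    "alice": ["bob", "carol"],
    "bob":   ["alice", "dave"],
    "carol": ["alice"],
    "dave":  ["bob"],
}

# The neighbors of a node are a simple dictionary lookup.
print(graph["alice"])  # ['bob', 'carol']
```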

The answer is: with graph neural networks (GNNs). GNNs are a class of neural networks that operate directly on graphs, exploiting the structural and semantic information of nodes and edges. GNNs are able to learn a vector representation (or embedding) of nodes, which captures their characteristics and context in the graph. This representation can then be used for various tasks, such as node classification, link prediction, graph generation, spatial and temporal reasoning, and much more.

GNNs are a very active and rapidly evolving field of research, with many challenges and opportunities. In this article, I want to provide an overview of GNNs, illustrating their operating principles, their applications, their differences from traditional neural networks, and their key concepts and terminology. In particular, I will focus on three types of GNNs: Graph Convolutional Networks (GCNs), Graph Attention Networks (GATs), and Temporal Graph Networks (TGNs). These types were chosen because they represent some of the most innovative and influential ideas in the field, and because they cover a wide range of scenarios and applications.

Graph Convolutional Networks (GCN)

Convolutions are mathematical operations that transform an input signal into an output signal while preserving some properties of the original signal, such as locality, stationarity, and compositionality. Convolutions are widely used in neural networks for the analysis of images, text, audio, and video, which can be seen as signals defined on regular grids. Graphs, however, do not have such a regular structure: they are irregular and variable. How can we apply convolutions to graphs?

Graph Convolutional Networks (GCNs) are a class of neural networks for graphs that use convolution to learn a vector representation of the nodes of a graph. The basic idea of GCNs is to define a convolution operator on graphs, which allows the information of nodes and their neighbors to be aggregated efficiently and invariantly. There are several ways to define a convolution operator on graphs, depending on how you model the relationships between nodes, how you weight the neighbors, and how you combine their information. The two main families are spectral methods, which define the convolution through the eigendecomposition of the graph Laplacian, and spatial methods, which aggregate features directly over a node's neighborhood; a minimal sketch of a widely used propagation rule is shown below.
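As a hedged illustration, here is a minimal NumPy sketch of the propagation rule popularized by Kipf and Welling's GCN, H' = sigma(D^-1/2 (A + I) D^-1/2 H W), where A is the adjacency matrix and D the degree matrix of the self-looped graph. The toy adjacency matrix and feature sizes are arbitrary.

```python
import numpy as np

def gcn_layer(A, H, W):
    """One GCN propagation step: H' = ReLU(D^-1/2 (A + I) D^-1/2 H W)."""
    A_hat = A + np.eye(A.shape[0])            # add self-loops
    deg = A_hat.sum(axis=1)                   # degrees of the self-looped graph
    D_inv_sqrt = np.diag(deg ** -0.5)         # D^-1/2
    A_norm = D_inv_sqrt @ A_hat @ D_inv_sqrt  # symmetrically normalized adjacency
    return np.maximum(A_norm @ H @ W, 0)      # aggregate, transform, ReLU

# Toy graph: 4 nodes, 3 input features per node, 2 output features.
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 1],
              [0, 1, 0, 0],
              [0, 1, 0, 0]], dtype=float)
H = np.random.rand(4, 3)
W = np.random.rand(3, 2)
print(gcn_layer(A, H, W).shape)  # (4, 2)
```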

GCNs learn a vector representation (or embedding) of each node that captures its characteristics and context in the graph, and these embeddings can then be fed to downstream tasks such as node classification, link prediction, and graph generation (a toy illustration follows). Some examples of GCN-based algorithms are GraphSage, PinSage, Graph Isomorphism Network, Graph U-Net, and many others.
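As a rough sketch of those downstream uses (not any specific library's API), a candidate link can be scored by the similarity of its two endpoint embeddings, and a node can be classified with a small linear head on top of its embedding; all dimensions here are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)
embeddings = rng.random((4, 2))      # toy embeddings for 4 nodes (e.g. from a GCN)

# Link prediction: score a candidate edge by the dot product of its endpoints.
link_score = embeddings[0] @ embeddings[3]

# Node classification: a linear head mapping each embedding to 3 class scores.
W_cls = rng.random((2, 3))
predicted_labels = (embeddings @ W_cls).argmax(axis=1)

print(link_score, predicted_labels)
```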

PinSage, for example, is a GCN-based algorithm that uses convolution to learn a vector representation of the nodes of a graph. What is unique about PinSage is that it uses random walks to sample the neighbors of each node, instead of uniform or weighted sampling. A random walk is a stochastic process that moves from one node to another by following the edges of the graph at random. Random walks make it possible to explore a larger and more diverse portion of the graph, and to assign greater importance to the neighbors that are most relevant to the target node (a simplified sketch is shown below). PinSage was designed to work on very large graphs, such as Pinterest’s, and to generate personalized recommendations for users.
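Here is a simplified, illustrative sketch of random-walk neighbor sampling in the spirit of PinSage: the neighbors that short random walks visit most often are treated as the most important ones. The visit-count weighting below is a simplification for illustration, not the exact procedure from the paper.

```python
import random
from collections import Counter

def random_walk_neighbors(graph, start, walk_length=3, num_walks=200, top_k=2):
    """Rank the neighbors of `start` by how often short random walks visit them
    (a simplified, PinSage-style importance sampling)."""
    visits = Counter()
    for _ in range(num_walks):
        node = start
        for _ in range(walk_length):
            neighbors = graph[node]
            if not neighbors:
                break
            node = random.choice(neighbors)   # follow a random edge
            if node != start:
                visits[node] += 1
    return visits.most_common(top_k)          # the most frequently visited nodes

graph = {"a": ["b", "c"], "b": ["a", "d"], "c": ["a"], "d": ["b"]}
print(random_walk_neighbors(graph, "a"))
```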

Graph Attention Networks (GAT)

Attention is a mechanism that allows a model to focus on the relevant parts of an input signal while ignoring the irrelevant ones. Attention is widely used in neural networks for the analysis of text, audio, and video, which can be seen as sequences of elements. However, graphs are not sequences; they are irregular and variable structures. How can we apply attention to graphs?

Graph Attention Networks (GATs) are a class of graph neural networks that use attention to learn a vector representation of the nodes of a graph. The basic idea of GATs is to define an attention mechanism on graphs that assigns a weight to each neighbor of a node based on its relevance to the target node. In this way, GATs aggregate the information of nodes and their neighbors selectively and adaptively, taking into account both the structure and the content of the graph.

GATs use the spatial approach to aggregate neighbor information: they compute a similarity (attention) score between the target node and each of its neighbors, and then normalize the scores with a softmax function. Different scoring functions can be used, such as the dot product, the scaled dot product, or the additive, concatenation-based form of the original GAT paper. GATs can also use multiple attention heads to learn different representations of the nodes, and then concatenate or average them to obtain the final representation; a minimal single-head sketch is shown below.
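The following is a minimal, illustrative PyTorch sketch of a single GAT-style attention head using the additive scoring of the original paper; the tensor shapes and the toy adjacency matrix are assumptions made for the example.

```python
import torch
import torch.nn.functional as F

def gat_attention(h, adj, W, a):
    """A single GAT-style attention head (simplified sketch).
    h:   (N, F_in) node features        adj: (N, N) adjacency with self-loops
    W:   (F_in, F_out) shared linear map  a: (2 * F_out,) attention vector"""
    Wh = h @ W                                               # transform features
    N = Wh.size(0)
    # e_ij = LeakyReLU(a^T [Wh_i || Wh_j]) for every node pair (i, j)
    pairs = torch.cat([Wh.unsqueeze(1).expand(N, N, -1),
                       Wh.unsqueeze(0).expand(N, N, -1)], dim=-1)
    e = F.leaky_relu(pairs @ a, negative_slope=0.2)
    e = e.masked_fill(adj == 0, float("-inf"))               # keep only real edges
    alpha = torch.softmax(e, dim=1)                          # normalize per node
    return alpha @ Wh                                        # weighted aggregation

# Toy example: 3 nodes, 4 input features, 2 output features per node.
h = torch.rand(3, 4)
adj = torch.tensor([[1, 1, 0], [1, 1, 1], [0, 1, 1]])
W, a = torch.rand(4, 2), torch.rand(4)
print(gat_attention(h, adj, W, a).shape)  # torch.Size([3, 2])
```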

Like GCNs, GATs learn node embeddings that capture the characteristics and context of each node in the graph, and these can be used for node classification, link prediction, graph generation, spatial and temporal reasoning, and much more. Some examples of GAT-based algorithms are Graph Transformer, Graph Attention U-Net, Graph Attention Autoencoder, and many others.

Temporal Graph Networks (TGN)

A temporal graph is a graph that changes over time, both in its structure and in its content. A temporal graph can represent dynamic phenomena such as social interactions, financial transactions, chemical reactions, biological processes, and much more. How can we learn from a temporal graph?

Temporal Graph Networks (TGNs) are a class of graph neural networks that use time as a fundamental dimension for learning a vector representation of the nodes of a temporal graph. The basic idea of TGNs is to model the dynamics of nodes and edges over time, taking their properties and changes into account. TGNs embed memory and time into GNNs in order to capture temporal dependencies and the evolution of nodes and edges.

TGNs use an encoder module to learn a vector representation of the nodes based on their static and dynamic characteristics, and an aggregation module to update those representations based on the nodes' interactions with their neighbors over time. A memory module stores and retrieves the state of each node according to its temporal relevance, and a decoder module generates predictions from the node representations and the current time; a simplified sketch of the memory component is shown below.
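As a highly simplified, illustrative sketch of the memory idea (not the full TGN architecture), each node can keep a memory vector that a recurrent cell updates whenever the node takes part in an interaction. The GRU-based update, the class name, and the dimensions below are assumptions made for the example.

```python
import torch
import torch.nn as nn

class TinyNodeMemory(nn.Module):
    """A simplified sketch of a TGN-style memory module: each node keeps a
    memory vector that a GRU cell updates whenever the node is in an event."""

    def __init__(self, num_nodes, memory_dim, message_dim):
        super().__init__()
        self.memory = torch.zeros(num_nodes, memory_dim)  # one memory per node
        self.cell = nn.GRUCell(message_dim, memory_dim)   # learned update rule

    def update(self, node_ids, messages):
        """Update the memories of `node_ids` with incoming event `messages`."""
        new_memory = self.cell(messages, self.memory[node_ids])
        self.memory[node_ids] = new_memory.detach()       # persist the new state
        return new_memory

# Toy example: 10 nodes, 8-dimensional memories, 6-dimensional event messages.
memory = TinyNodeMemory(num_nodes=10, memory_dim=8, message_dim=6)
events = torch.rand(3, 6)                                 # three interactions
updated = memory.update(torch.tensor([0, 4, 7]), events)
print(updated.shape)  # torch.Size([3, 8])
```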

TGNs learn node embeddings that capture the characteristics and context of each node in the temporal graph, and these can be used for tasks such as node classification, link prediction, graph generation, and spatial and temporal reasoning. Some examples of TGN-based algorithms are DyRep, JODIE, Know-Evolve, and many others.

Conclusion

In this article, I presented an overview of graph neural networks (GNNs), a class of neural networks that operate directly on graphs, exploiting the structural and semantic information of nodes and edges. I illustrated the operating principles, applications, differences from traditional neural networks, and key concepts and terminology of GNNs. In particular, I focused on three types of GNNs: Graph Convolutional Networks (GCNs), Graph Attention Networks (GATs), and Temporal Graph Networks (TGNs).

GNNs are a very active and rapidly evolving field of research, with many challenges and opportunities. By learning node embeddings that capture the characteristics and context of each node, they support tasks such as node classification, link prediction, graph generation, and spatial and temporal reasoning, and they have proven effective and efficient in many domains, including social networks, recommender systems, computational chemistry, molecular biology, cybersecurity, and many others.
