Transformer Models in Graph and Tabular Data
Title: Transformer Models in Graph and Tabular Data
DNr: Berzelius-2023-56
Project Type: LiU Berzelius
Principal Investigator: Tianze Wang <tianzew@kth.se>
Affiliation: Kungliga Tekniska högskolan
Duration: 2023-05-24 – 2023-12-01
Classification: 10201
Keywords:

Abstract

Transformer models have emerged as one of the most popular state-of-the-art Deep Learning (DL) architectures, employed predominantly in natural language processing (NLP) and computer vision (CV) tasks. In contrast to traditional sequential models such as Recurrent Neural Networks, the Transformer dispenses with recurrence and relies solely on the attention mechanism, enabling parallelization across the positions within a training example. Transformer models have consistently achieved state-of-the-art results on a wide range of NLP tasks. However, their application to graph representation learning and tabular data representation learning has received less attention, with graph neural networks and gradient-boosted trees still dominating the benchmarks in these two areas, respectively. This project explores the use of Transformer models for graph and tabular data representation learning. Overcoming the challenges posed by the non-independent and identically distributed (non-i.i.d.) nature of graphs is crucial for successfully leveraging Transformer models in graph representation learning. Similarly, the heterogeneous nature of tabular datasets raises several unresolved issues for effective representation learning with Transformer models. The outcomes of this project have the potential to advance the state of the art in graph and tabular data representation learning while expanding our understanding of how Transformer models can be applied across domains.
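
As a concrete illustration of the attention mechanism mentioned above, the following minimal NumPy sketch implements scaled dot-product attention, Attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V. It is included here only for exposition and is not part of the project plan; the function and variable names (scaled_dot_product_attention, q, k, v) are chosen for this example.

    # Minimal sketch of scaled dot-product attention, the core operation of the
    # Transformer architecture discussed above. Illustrative only; names such as
    # scaled_dot_product_attention are chosen for this example.
    import numpy as np

    def softmax(x, axis=-1):
        # Numerically stable softmax along the given axis.
        x = x - x.max(axis=axis, keepdims=True)
        e = np.exp(x)
        return e / e.sum(axis=axis, keepdims=True)

    def scaled_dot_product_attention(q, k, v):
        # q, k: arrays of shape (num_positions, d_k); v: (num_positions, d_v).
        # Attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V
        d_k = q.shape[-1]
        scores = q @ k.T / np.sqrt(d_k)      # pairwise similarities, all positions at once
        weights = softmax(scores, axis=-1)   # attention weights per query position
        return weights @ v                   # weighted sum of values

    # Example: 5 positions with 8-dimensional queries, keys, and values.
    rng = np.random.default_rng(0)
    q = rng.normal(size=(5, 8))
    k = rng.normal(size=(5, 8))
    v = rng.normal(size=(5, 8))
    out = scaled_dot_product_attention(q, k, v)
    print(out.shape)  # (5, 8): every position attends to all others in parallel

Because every output position is produced by the same matrix multiplications over all positions, there is no step-by-step recurrence between positions, which is the parallelization property the abstract refers to.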