Multi-Task Multi-Modal Meta-Learning
Title: Multi-Task Multi-Modal Meta-Learning
SNIC Project: Berzelius-2022-98
Project Type: LiU Berzelius
Principal Investigator: Marcus Liwicki
Affiliation: Luleå tekniska universitet
Duration: 2022-04-29 – 2022-11-01
Classification: 10207


Learning one task at a time is the standard approach in Machine Learning (ML): large problems are decomposed into smaller, independent sub-tasks that are learned separately, and the combined results are presented. Multi-Task Learning (MTL), in contrast, proposes training the sub-tasks simultaneously to achieve better performance. The underlying idea is that if the smaller tasks share what they learn, each may become easier to learn than it would be in isolation. Integrating knowledge across different domains is an essential feature of human learning, and learning paradigms such as transfer learning, meta-learning, and multi-task learning mirror this process by exploiting prior knowledge for new tasks, encouraging faster learning and good generalization.

The project aims to develop efficient and autonomous multi-task learning architectures by exploiting concepts from meta-learning. Insights from both meta-learning and multi-task learning can be fused to achieve the best of both worlds: efficient training of multiple heterogeneous tasks, a feature of MTL, and quick adaptation to new tasks, a feature of meta-learning. The focus is on a fully automated MTL architecture, i.e., one that, based on the type of the task, reuses an already-trained network (or a part of it) across multiple tasks to integrate and learn a new task. The challenge is to handle heterogeneous tasks and multi-modal inputs. Overall, this project will address open research questions on knowledge sharing in multi-task learning arrangements. In addition, we propose a novel modular generic learning network, which gives the flexibility to add many tasks during both training and inference.

In this project, we intend to train large deep learning models that cater to multiple tasks simultaneously, using a substantial multi-task dataset, Taskonomy (Zamir et al.); the full dataset is 11.16 TB, but we will begin with its tiny version, which is around 400 GB. The idea is to validate the proposed approach on at least three datasets, including Taskonomy, NYU v2, and KITTI. The datasets are discussed in the Data Management section.
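The modular idea described above can be sketched as hard parameter sharing: a shared backbone ("trunk") produces features consumed by task-specific heads, and registering a new task only attaches a new head while the shared parameters are reused. The sketch below is purely illustrative, assuming a toy numpy implementation; all class names, dimensions, and task names are hypothetical and not the project's actual architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

class SharedTrunk:
    """Hypothetical shared backbone: one linear layer with a ReLU."""
    def __init__(self, in_dim, feat_dim):
        self.W = rng.standard_normal((in_dim, feat_dim)) * 0.1

    def forward(self, x):
        # Features shared by all tasks.
        return np.maximum(x @ self.W, 0.0)

class MultiTaskModel:
    """Trunk plus a dictionary of task-specific heads."""
    def __init__(self, in_dim, feat_dim):
        self.trunk = SharedTrunk(in_dim, feat_dim)
        self.heads = {}

    def add_task(self, name, out_dim):
        # New tasks can be attached at any time (training or inference);
        # only the head is new, the trunk is reused.
        feat_dim = self.trunk.W.shape[1]
        self.heads[name] = rng.standard_normal((feat_dim, out_dim)) * 0.1

    def forward(self, x, task):
        return self.trunk.forward(x) @ self.heads[task]

# Illustrative tasks (names are placeholders, not project deliverables).
model = MultiTaskModel(in_dim=8, feat_dim=16)
model.add_task("depth", out_dim=1)          # e.g. per-pixel depth regression
model.add_task("segmentation", out_dim=5)   # e.g. 5-class segmentation

x = rng.standard_normal((4, 8))             # a batch of 4 inputs
depth = model.forward(x, "depth")
seg = model.forward(x, "segmentation")
print(depth.shape, seg.shape)               # (4, 1) (4, 5)
```

In a real system the trunk would be a pretrained deep network and the heads would be trained jointly with task-weighted losses; the point here is only that task addition touches the head dictionary, not the shared weights.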