Semi-supervised multi-task deep learning
||Miquel Marti <firstname.lastname@example.org>||
||Kungliga Tekniska högskolan||
||2022-09-01 – 2023-03-01||
In the context of autonomous driving (AD), a complete understanding of the world is needed, which can be achieved by applying several computer vision techniques. Multitask learning (MTL) is a natural direction towards complete scene understanding in a compact and resource-efficient way: computation is shared between tasks, and results often improve as well. Three main challenges appear in current approaches to MTL: (i) only partially labelled data is available for training, (ii) training becomes a joint optimization problem over tasks with different characteristics, and (iii) the network architecture must be designed with both performance and resource usage in mind when deciding which layers to share. We will start by setting up a multitask baseline network for the AD problem. Then, we will explore semi-supervised learning techniques to overcome the problem of partial labelling, as well as the particularities that such techniques introduce in a multitask setting.
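One common way to handle partial labelling in a multitask objective is to mask out, per task, the samples that carry no label and average the loss only over the labelled ones. The sketch below illustrates this idea; the function name, the task names, and the fixed task weights are illustrative assumptions, not part of the proposed method.

```python
import numpy as np

def masked_multitask_loss(per_sample_losses, label_masks, task_weights):
    """Combine per-task losses into one joint objective, counting only
    the samples that actually carry a label for each task.

    per_sample_losses: dict task -> array of shape (N,) with per-sample losses
    label_masks:       dict task -> boolean array of shape (N,), True if labelled
    task_weights:      dict task -> scalar weight in the joint objective
    """
    total = 0.0
    for task, losses in per_sample_losses.items():
        mask = label_masks[task]
        if mask.any():  # skip tasks with no labels in this batch
            total += task_weights[task] * losses[mask].mean()
    return total

# Toy batch of 4 samples: detection is labelled everywhere,
# segmentation only on the first two samples (task names are illustrative).
losses = {
    "detection":    np.array([0.5, 0.3, 0.2, 0.4]),
    "segmentation": np.array([1.0, 2.0, 9.9, 9.9]),  # unlabelled entries ignored
}
masks = {
    "detection":    np.array([True, True, True, True]),
    "segmentation": np.array([True, True, False, False]),
}
weights = {"detection": 1.0, "segmentation": 0.5}

# 1.0 * mean(0.5, 0.3, 0.2, 0.4) + 0.5 * mean(1.0, 2.0) = 0.35 + 0.75 = 1.1
print(masked_multitask_loss(losses, masks, weights))
```

In practice the per-sample losses would come from each task head of the shared network, and how to set (or learn) the task weights is itself part of the joint optimization challenge mentioned above.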