Self-supervised learning for multimodal perception systems
Title: Self-supervised learning for multimodal perception systems
DNr: Berzelius-2023-209
Project Type: LiU Berzelius
Principal Investigator: Carl Lindström <carlinds@chalmers.se>
Affiliation: Chalmers tekniska högskola
Duration: 2023-08-23 – 2024-03-01
Classification: 10207
Keywords:
Abstract
The advancement of autonomous vehicle technology has the potential to revolutionize the
transportation industry and greatly improve road safety. However, the successful
implementation of this technology depends heavily on the ability of autonomous vehicles to
accurately and efficiently perceive their environment, often using a combination of camera
and lidar sensors. Current perception systems, although sophisticated, still face
significant challenges in detecting and interpreting complex traffic scenarios, particularly in
tight areas and under poor lighting conditions.
This project aims to explore and develop novel deep learning-based perception
systems that can overcome these challenges. Specifically, the project will investigate self-
supervised learning strategies that leverage vast amounts of unlabeled data to develop more
accurate and robust perception models. By reducing the reliance on manually annotated
data, this approach can potentially speed up the development of more advanced perception
systems, enabling autonomous vehicles to detect and respond to their surroundings
more accurately in real time. We believe self-supervised learning will be an
essential element of future autonomous systems and can significantly boost their
performance.
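As a concrete illustration of the kind of self-supervised strategy described above, the sketch below implements a cross-modal contrastive (InfoNCE) objective between paired camera and lidar embeddings, using only NumPy. This is one common approach to pretraining on unlabeled sensor data, not the project's specific method; the function names, embedding shapes, and temperature value are illustrative assumptions.

```python
import numpy as np

def info_nce_loss(img_emb, lidar_emb, temperature=0.1):
    """Symmetric InfoNCE loss over paired camera/lidar embeddings.

    Row i of img_emb and row i of lidar_emb come from the same scene
    (a positive pair); all other rows in the batch act as negatives.
    No labels are needed, only the pairing of the two sensor streams.
    """
    # L2-normalise so dot products are cosine similarities
    img = img_emb / np.linalg.norm(img_emb, axis=1, keepdims=True)
    lid = lidar_emb / np.linalg.norm(lidar_emb, axis=1, keepdims=True)

    logits = img @ lid.T / temperature       # (B, B) similarity matrix
    labels = np.arange(len(logits))          # positives lie on the diagonal

    def cross_entropy(lg, y):
        lg = lg - lg.max(axis=1, keepdims=True)  # numerical stability
        log_prob = lg - np.log(np.exp(lg).sum(axis=1, keepdims=True))
        return -log_prob[np.arange(len(y)), y].mean()

    # Symmetric: camera-to-lidar and lidar-to-camera retrieval
    return 0.5 * (cross_entropy(logits, labels) + cross_entropy(logits.T, labels))

# Toy check: aligned modalities should score a lower loss than random pairs
rng = np.random.default_rng(0)
img = rng.normal(size=(8, 32))
loss_random = info_nce_loss(img, rng.normal(size=(8, 32)))
loss_aligned = info_nce_loss(img, img)  # perfectly aligned embeddings
print(loss_aligned < loss_random)
```

In a full pipeline the embeddings would come from learned image and point-cloud encoders, and minimising this loss by gradient descent pulls the two modalities into a shared representation space without any manual annotation.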