Title: Perception for Autonomous Vehicles
DNr: Berzelius-2024-456
Project Type: LiU Berzelius
Principal Investigator: John Folkesson <johnf@kth.se>
Affiliation: Kungliga Tekniska högskolan
Duration: 2025-01-01 – 2025-07-01
Classification: 20201
Homepage: https://smarc.se
Keywords:

Abstract

Within the research group we currently have six PhD students. Four are working on perception for autonomous driving, and two are working on perception for autonomous underwater vehicles; one of those is a WASP PhD student and another is an affiliated WASP PhD student. One postdoc is a recent WASP graduate. In addition, there are four Master's students who are assisting and being supervised by the PhD students: as of this writing, one is working on underwater perception with sonar and three are working on prediction for autonomous driving. Two new PhD students in underwater perception will join the group in the next six months, and more Master's thesis students will join next spring.

For the underwater vehicles, sonar is the primary sensor. We are currently adapting camera-based neural rendering to the sonar sensor, with very promising results. Shekhar (WASP) will continue this effort along with Li and the two new PhD students to be added soon. We are investigating diffusion models to reduce the noise and improve the resolution of the sonar signals. In addition, underwater navigation is problematic, and we are exploring ways to improve it using sonar, including SLAM and point cloud registration techniques based on neural networks.

For the driving task, the sensor data typically come from cameras, lidar, and radar. There are normally multiple agents, and the risk of collision is a major concern. The students are working to find better representations of the data using neural networks, with the goal of better modeling and predicting multi-agent scenarios. We are currently investigating whether multi-task learning of representations of history can improve performance with limited training data. We are also interested in testing the use of foundation models in the prediction task. Yi Yang (affiliated WASP) is the main user, collaborating with Ci, Ajinkya, and Qingwen. Kei, a Master's student and research engineer, is also contributing greatly to this effort, and Shamoun and Simon are carrying out projects using deep learning for motion prediction with Yi.