Title: Perception for Autonomous Vehicles
DNr: Berzelius-2024-241
Project Type: LiU Berzelius
Principal Investigator: John Folkesson <johnf@kth.se>
Affiliation: Kungliga Tekniska högskolan
Duration: 2024-07-01 – 2025-01-01
Classification: 20201
Homepage: https://smarc.se
Keywords:

Abstract

Within the research group we have six PhD students: four working on perception for autonomous driving and two on perception for autonomous underwater vehicles. In addition, four Master's students are assisting and being supervised by the PhD students; two of them work on underwater perception with sonar and two on prediction in autonomous driving. Three new PhD students in underwater perception will join the group in the next six months, one of them within WASP. Two of the Master's students will graduate very soon but may still need to run some final experiments. One of the PhD students (Yiping Xie - WASP) has defended his thesis and will very likely become a postdoc in the group by the summer.

For the underwater vehicles, sonar is the primary sensor we are using. We are currently adapting camera-based neural rendering to the sonar sensor, with very promising results; Yiping successfully defended his thesis on this topic, and it forms the basis of continued research in our group. We are also looking at using diffusion models to reduce noise and improve resolution in the sonar signals. In addition, underwater navigation is problematic, and we are investigating ways to improve navigation using sonar, including SLAM and point cloud registration techniques based on neural networks. That work will be carried out by the three new PhD students, including a new WASP student.

For the driving task, the sensor data typically come from cameras, lidar, and radar. There are normally multiple agents, and the risk of collision is a major concern. The students are working to find better representations of the data using neural networks, with the goal of better modeling and predicting multi-agent scenarios. We are currently investigating whether multi-task learning of representations of history can improve performance with limited training data. We are also interested in testing the use of foundation models in the prediction task.
Yi Yang (affiliated with WASP) is the main user, collaborating with Ci, Ajinkya, and Qingwen. Kei, a Master's student and research engineer, is also contributing greatly to the effort, and Shamoun is doing a project course using deep learning for motion prediction.