Title: Perception for Autonomous Vehicles
DNr: Berzelius-2023-154
Project Type: LiU Berzelius
Principal Investigator: John Folkesson <>
Affiliation: Kungliga Tekniska högskolan
Duration: 2023-07-01 – 2024-01-01
Classification: 20201


Within the research group we have four PhD students: two working on perception for autonomous driving and two on perception for autonomous underwater vehicles. For the driving task the sensor data typically come from cameras, lidar, and radar; there are normally multiple agents, and the risk of collision is a major concern. The students are working to find better representations of the data using neural networks, with the goal of better modeling and predicting multi-agent scenarios.

For the underwater vehicles, sonar is the primary sensor. We are currently adapting camera-based neural rendering to the sonar sensor, with very promising results. In addition, underwater navigation is problematic, and we are investigating ways to improve it using sonar, including SLAM and point cloud registration techniques based on neural networks.

Differentiable rendering (including neural rendering) has recently been applied in computer graphics, combined with deep learning and in particular neural scene representations. Its advantage is that no 3D annotation, which is expensive to acquire, is needed: one needs only 2D annotations on images and can leverage differentiable rendering to compute a loss in the 2D camera images and back-propagate the gradients to whatever quantities are to be optimized. We have a similar situation in underwater perception with sidescan sonars, multibeam imaging sonars (forward-looking sonars), and even multibeam echo sounders. Registering imaging sonar data to a 3D seafloor map is difficult, but it is no longer needed if we can exploit the idea of differentiable rendering.
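The mechanism can be sketched with a toy example. Everything below is hypothetical and for illustration only, not our actual sonar model: a 1-D seafloor height profile is rendered to a sonar-like intensity image by an ad-hoc slope-based function, and the profile is then recovered by gradient descent on the image-space loss alone, with no registration of the image to a 3D map.

```python
import numpy as np

def render(heights):
    # Toy renderer: intensity falls off with local slope magnitude, a crude
    # stand-in for how sonar return strength depends on the incidence angle
    # at the seafloor. Smooth in the heights, hence differentiable.
    slope = np.diff(heights)
    return 1.0 / (1.0 + slope**2)

def loss_and_grad(heights, observed):
    slope = np.diff(heights)
    pred = 1.0 / (1.0 + slope**2)
    resid = pred - observed
    loss = 0.5 * np.sum(resid**2)          # loss lives in image space
    dpred_dslope = -2.0 * slope / (1.0 + slope**2) ** 2
    dloss_dslope = resid * dpred_dslope    # chain rule through the renderer
    # slope_i = h[i+1] - h[i], so scatter the gradient back onto the heights
    grad = np.zeros_like(heights)
    grad[1:] += dloss_dslope
    grad[:-1] -= dloss_dslope
    return loss, grad

rng = np.random.default_rng(0)
true_h = np.cumsum(rng.normal(0.0, 0.1, 64))   # ground-truth profile
observed = render(true_h)                       # the "measured" sonar image

h = 0.01 * rng.normal(size=64)                  # initial flat-ish guess
initial_loss, _ = loss_and_grad(h, observed)
for _ in range(2000):
    loss, grad = loss_and_grad(h, observed)
    h -= 0.5 * grad                             # gradient step on the map
print(f"image loss: {initial_loss:.4f} -> {loss:.4f}")
```

In practice an autodiff framework replaces the hand-derived gradient, and the optimized quantity would be a full 3D bathymetry or a neural scene representation rather than a 1-D profile, but the flow is the same: render, compare in image space, back-propagate.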