Perception for Autonomous Vehicles
|John Folkesson <firstname.lastname@example.org>
|Kungliga Tekniska högskolan
|2024-01-01 – 2024-07-01
Within the research group we have five PhD students: three are working on perception for autonomous driving, while two are working on perception for autonomous underwater vehicles.
For the driving task the sensor data typically come from cameras, lidar, and radar. There are normally multiple agents in the scene, and the risk of collision is a major concern. The students are working to find better representations of the data using neural networks, with the goal of better modeling and predicting multi-agent scenarios.
For the underwater vehicles, sonar is the primary sensor we are using. We are currently working on adapting neural rendering, originally developed for cameras, to the sonar sensor, with very promising results. In addition, underwater navigation is problematic, and we are looking into ways to improve it using the sonar, including SLAM and neural-network-based point cloud registration techniques.
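As background for the point cloud registration work, a classical building block is the closed-form Kabsch/SVD alignment: given corresponding points, it recovers the rigid transform between two clouds (neural approaches typically learn the correspondences that feed such a step). The sketch below is a minimal NumPy illustration with synthetic data, not code from our pipeline:

```python
import numpy as np

def register_point_clouds(P, Q):
    """Estimate the rigid transform (R, t) with Q ~ R @ P + t,
    given corresponding points P, Q of shape (N, 3), via Kabsch/SVD."""
    p_mean, q_mean = P.mean(axis=0), Q.mean(axis=0)
    H = (P - p_mean).T @ (Q - q_mean)          # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))     # guard against a reflection
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = q_mean - R @ p_mean
    return R, t

# Toy check: recover a known rotation about z and a translation.
rng = np.random.default_rng(0)
P = rng.normal(size=(100, 3))
theta = 0.3
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0, 0.0, 1.0]])
t_true = np.array([0.5, -1.0, 2.0])
Q = P @ R_true.T + t_true
R_est, t_est = register_point_clouds(P, Q)
```

With noise-free correspondences the estimate matches the ground truth to numerical precision; in practice the correspondences come from a matching step and the solve is wrapped in an iterative or learned loop.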
Differentiable rendering (including neural rendering) has recently been applied in computer graphics, combining deep learning with neural scene representations. Its main advantage is that expensive 3D annotation is not needed: with only 2D annotations on images, one can compute a loss in the 2D camera images and back-propagate the gradients to whatever parameters need to be optimized.
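As a toy illustration of this idea, the sketch below "renders" a single 3D point through a pinhole camera, measures a 2D reprojection loss against observed pixels, and back-propagates an analytic gradient to the 3D point itself; the focal length, baseline, and point coordinates are made-up assumptions, and a real differentiable renderer would optimize a full scene representation the same way:

```python
import numpy as np

f = 500.0  # assumed focal length in pixels

def project(p, cam_x):
    """Pinhole projection of 3D point p from a camera at (cam_x, 0, 0)."""
    x, y, z = p[0] - cam_x, p[1], p[2]
    return np.array([f * x / z, f * y / z])

def loss_and_grad(p, observations):
    """Squared 2D reprojection error and its analytic gradient w.r.t. p."""
    L, g = 0.0, np.zeros(3)
    for cam_x, obs in observations:
        x, y, z = p[0] - cam_x, p[1], p[2]
        ru, rv = f * x / z - obs[0], f * y / z - obs[1]
        L += ru**2 + rv**2
        g += np.array([2 * ru * f / z,                       # dL/dx
                       2 * rv * f / z,                       # dL/dy
                       -2 * (ru * f * x + rv * f * y) / z**2])  # dL/dz
    return L, g

# Two cameras (baseline 0.5, assumed) observe an unknown 3D point;
# only their 2D pixel observations are used as supervision.
p_true = np.array([0.2, -0.1, 3.0])
observations = [(0.0, project(p_true, 0.0)), (0.5, project(p_true, 0.5))]

p0 = np.array([0.0, 0.0, 2.0])   # initial guess for the 3D point
L0, _ = loss_and_grad(p0, observations)
p = p0.copy()
for _ in range(2000):            # plain gradient descent on the 2D loss
    _, g = loss_and_grad(p, observations)
    p -= 1e-6 * g
Lf, _ = loss_and_grad(p, observations)
```

The 2D loss drops as the 3D estimate moves toward the observed rays, without any 3D ground truth ever being used, which is exactly the appeal for seafloor mapping.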
We have a similar situation in underwater perception with sidescan sonars, multibeam imaging sonars (forward-looking sonars), and even multibeam echo sounders. Registering imaging sonar data to a 3D seafloor map is difficult, but that step is no longer needed if we can utilize the idea of differentiable rendering.