Title: Multi-view methods for 6D pose estimation using differential rendering
DNr: Berzelius-2021-16
Project Type: LiU Berzelius
Principal Investigator: Hampus Åström <hampus.astrom@cs.lth.se>
Affiliation: Lunds universitet
Duration: 2021-07-01 – 2022-01-01
Classification: 10207
Keywords:

Abstract

This project aims to extend our work on 6D pose estimation for robotic applications. We train a neural network for 6D pose estimation with a novel multi-view loss function that is inspired by the VSD (Visible Surface Discrepancy) metric and relies on a differentiable renderer and CAD models. Our method resolves pose ambiguities without using predefined symmetries. It is trained only on synthetic data and evaluated on real-world RGB images from the T-LESS dataset, which contains highly symmetric objects common in industrial settings. In a pending paper submission to IROS we show that our solution can replace the codebook in a state-of-the-art approach with comparable pose recall. The codebook approach has so far had the shortest inference time in the field; our approach reduces inference time further while a) avoiding discretization and b) requiring a much smaller memory footprint. In this project we will extend our method with explicit translation estimation through the deep neural network, broadening the method's capabilities and removing the reliance on bounding-box based methods to determine translation.
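To illustrate the kind of objective described above, the Python sketch below shows a VSD-inspired, multi-view render-and-compare loss. It is not the project's actual implementation: the differentiable depth renderer render_depth(cad_model, pose, camera) is a hypothetical placeholder, and the tolerance tau and sigmoid sharpness are illustrative parameters. The sigmoid is a smooth surrogate for VSD's hard per-pixel indicator, so the loss remains differentiable with respect to the estimated pose.

    # Minimal sketch of a VSD-inspired multi-view loss (illustrative only).
    # Assumes a hypothetical differentiable renderer `render_depth` that
    # returns an (H, W) depth map with zeros at background pixels and
    # propagates gradients back to the pose parameters.

    import torch

    def vsd_inspired_loss(render_depth, cad_model, pose_est, pose_gt, cameras,
                          tau=0.02, sharpness=50.0):
        """Average a smoothed visible-surface discrepancy over several views.

        pose_est, pose_gt : (4, 4) homogeneous object-to-camera transforms
        cameras           : iterable of per-view camera parameters
        tau               : misalignment tolerance in scene units (e.g. metres)
        sharpness         : slope of the sigmoid replacing VSD's step function
        """
        losses = []
        for cam in cameras:
            d_est = render_depth(cad_model, pose_est, cam)  # (H, W), 0 = background
            d_gt = render_depth(cad_model, pose_gt, cam)

            # Pixels where either rendering shows the object surface.
            visible = (d_est > 0) | (d_gt > 0)
            if visible.sum() == 0:
                continue  # object not visible in this view

            # Smooth surrogate for the VSD indicator |d_est - d_gt| > tau.
            discrepancy = torch.sigmoid(sharpness * (torch.abs(d_est - d_gt) - tau))
            losses.append(discrepancy[visible].mean())

        # Assumes the object is visible in at least one view.
        return torch.stack(losses).mean()

Averaging the per-view discrepancies over several renderings is what lets such a loss score symmetric objects by their visible surfaces rather than by a single canonical pose, which is why no predefined symmetries are needed.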