Title: Embodied AI for Autonomous Robots
DNr: Berzelius-2025-241
Project Type: LiU Berzelius
Principal Investigator: Olov Andersson <olovand@kth.se>
Affiliation: Kungliga Tekniska högskolan
Duration: 2025-08-04 – 2026-03-01
Classification: 20201
Homepage: http://www.kth.se/profile/olovand
Keywords:

Abstract

My new KAW/WASP group at KTH (4 WASP-funded researchers) will research neural network models for embodying AI in autonomous robots and vehicles, which will be important for the future AI needs of Swedish industry. Machine learning approaches are increasingly used in robotic applications, whether for manipulation of objects (e.g., part assembly, warehouses) or for navigation of autonomous vehicles and other mobile robots. Advances in large pre-trained "foundation models" for perception and planning, in addition to large language models (LLMs), have led to major improvements in learning capability by enabling robots to draw on common-sense knowledge and reason about an open set of objects they have not been trained on. Such "embodied AI" systems, which reason over robot perception and actions, are becoming an increasingly feasible alternative to modular, engineered approaches in robotics.

However, to realize this we need to adapt these models, which have been trained on disconnected internet text and image data, to trajectories of real-world (3D) sensor data. So far, most work in embodied AI has targeted static manipulation problems, but here we will adapt these models and explore their use in real autonomous mobile robots, including real-world experiments with our Spot quadruped from Boston Dynamics.

We have extensive expertise in this area, including having previously won the DARPA Subterranean (SubT) Challenge on autonomy in harsh environments, as well as recent publications at the intersection of AI and robotics. See my KTH site for background on my group (https://www.kth.se/profile/olovand) and Google Scholar for recent publications (https://scholar.google.com/citations?user=1lCMaQgAAAAJ&hl=en).