Reinforcement Learning of Industrial Robotics Tasks
|Project|Reinforcement Learning of Industrial Robotics Tasks|
|Allocation|SNIC Small Compute|
|Contact|Matthias Mayr <firstname.lastname@example.org>|
|Duration|2021-12-21 – 2023-01-01|
Instead of relying only on planning and knowledge-driven methods to create successful and robust robot tasks, we are exploring the possibility of learning elements of these tasks with reinforcement learning. This is especially needed for contact-rich tasks such as assembly. Currently, the learning procedure relies largely on a robot simulation software (DART) to evaluate different parameter configurations and compute the corresponding rewards. The reinforcement learning problems are typically formulated as multi-objective problems, because objectives such as "maximum safety" usually at least partially conflict with "maximum task reward". That is, the most successful solution is most often not the safest one, and an operator needs to choose a compromise between them.
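As a rough illustration of this multi-objective trade-off, the sketch below evaluates a set of candidate parameter configurations against two competing objectives (task reward vs. safety) and extracts the Pareto-optimal set from which an operator could pick a compromise. The `simulate` function, the `force` parameter, and the reward shapes are hypothetical stand-ins for a real simulator such as DART, not the project's actual setup.

```python
def simulate(params):
    """Stub for a simulator rollout: returns (task_reward, safety_reward).

    Hypothetical shapes: task reward peaks at force = 0.8, while
    safety decreases monotonically with applied force.
    """
    force = params["force"]
    task_reward = 1.0 - (force - 0.8) ** 2
    safety_reward = 1.0 - force
    return task_reward, safety_reward

def pareto_front(results):
    """Keep configurations that are not dominated in both objectives."""
    front = []
    for i, (_, ti, si) in enumerate(results):
        dominated = any(
            tj >= ti and sj >= si and (tj > ti or sj > si)
            for j, (_, tj, sj) in enumerate(results)
            if j != i
        )
        if not dominated:
            front.append(results[i])
    return front

# Sweep a single parameter and present the non-dominated trade-offs.
candidates = [{"force": f / 10} for f in range(11)]
results = [(p,) + simulate(p) for p in candidates]
front = pareto_front(results)
for params, task_r, safety_r in sorted(front, key=lambda r: r[1]):
    print(f"force={params['force']:.1f}  task={task_r:.2f}  safety={safety_r:.2f}")
```

With these toy reward shapes, configurations with force above 0.8 are dominated (both less successful and less safe), so only the genuine trade-off region remains for the operator to choose from.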
The research aims at creating robot systems that can utilize both planning and reasoning techniques (deductive methods) as well as machine learning (inductive methods) to obtain task parameterizations that execute successfully on real robot systems such as the one at LTH, LU.