Safe and Secure Federated Reinforcement Learning
Title: Safe and Secure Federated Reinforcement Learning
DNr: Berzelius-2024-394
Project Type: LiU Berzelius
Principal Investigator: Arunava Naha <arunava.naha@liu.se>
Affiliation: Linköpings universitet
Duration: 2024-10-06 – 2025-05-01
Classification: 10207
Homepage: https://liu.se/en/employee/aruna32
Keywords:

Abstract

I have joined the Department of Electrical Engineering, Linköping University, as a WASP faculty member. Having secured funding, I am establishing my own research group, which will include two Ph.D. students, two postdoctoral researchers, and two to three master's thesis students. The group's research will focus on developing safe, resilient, and privacy-preserving federated reinforcement learning (FRL) methods for closed-loop control systems. To evaluate the performance of the algorithms we develop, we will conduct extensive numerical simulations. FRL requires simulating multiple clients, each using one GPU and two neural networks (one actor and one critic), together with a central server that likewise uses one GPU and an actor-critic pair. These simulations are expected to generate large amounts of data and to demand substantial GPU computation. For a team of five to six members, I estimate a total usage of approximately 10,000 GPU-hours.
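
The client-server simulation setup described above, with per-client actor-critic networks and a central server that aggregates them, can be sketched roughly as follows. This is a minimal illustration only, assuming PyTorch models, FedAvg-style parameter averaging, and hypothetical network sizes; none of these choices, nor the project's safety and privacy mechanisms, are specified in the abstract.

```python
# Minimal sketch of one federated aggregation round for actor-critic clients.
# Assumptions (not taken from the project description): PyTorch, FedAvg-style
# averaging, small MLP networks; local RL training is left as a placeholder.
import copy
import torch
import torch.nn as nn


def make_actor(obs_dim: int, act_dim: int) -> nn.Module:
    # Small MLP policy network (hypothetical architecture).
    return nn.Sequential(nn.Linear(obs_dim, 64), nn.Tanh(), nn.Linear(64, act_dim))


def make_critic(obs_dim: int) -> nn.Module:
    # Small MLP value network (hypothetical architecture).
    return nn.Sequential(nn.Linear(obs_dim, 64), nn.Tanh(), nn.Linear(64, 1))


@torch.no_grad()
def fedavg(models: list) -> dict:
    # Average the parameters of identically shaped client models (FedAvg).
    avg = copy.deepcopy(models[0].state_dict())
    for key in avg:
        avg[key] = torch.stack([m.state_dict()[key].float() for m in models]).mean(dim=0)
    return avg


def federated_round(server_actor, server_critic, client_actors, client_critics):
    # 1) Broadcast the current server weights to every client.
    for actor, critic in zip(client_actors, client_critics):
        actor.load_state_dict(server_actor.state_dict())
        critic.load_state_dict(server_critic.state_dict())
    # 2) (Placeholder) each client would run local RL updates on its own GPU here.
    # 3) Aggregate the client weights back into the server models.
    server_actor.load_state_dict(fedavg(client_actors))
    server_critic.load_state_dict(fedavg(client_critics))


if __name__ == "__main__":
    obs_dim, act_dim, n_clients = 8, 2, 4
    server_actor, server_critic = make_actor(obs_dim, act_dim), make_critic(obs_dim)
    client_actors = [make_actor(obs_dim, act_dim) for _ in range(n_clients)]
    client_critics = [make_critic(obs_dim) for _ in range(n_clients)]
    federated_round(server_actor, server_critic, client_actors, client_critics)
    print("Completed one federated aggregation round over", n_clients, "clients.")
```

In the planned experiments, step 2 would occupy one GPU per client (plus one for the server), which is what drives the estimated GPU-hour requirement.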