Safeguarding Federated Learning-based Autonomous Driving
Title: Safeguarding Federated Learning-based Autonomous Driving
DNr: Berzelius-2025-108
Project Type: LiU Berzelius
Principal Investigator: Sheng Liu <shengliu@kth.se>
Affiliation: Kungliga Tekniska högskolan
Duration: 2025-04-01 – 2025-10-01
Classification: 10202
Homepage: https://nss.proj.kth.se/
Keywords:
Abstract
Federated Learning (FL) has emerged as a promising solution for privacy-preserving autonomous driving, specifically for camera-based Road Condition Classification (RCC) systems. Cooperative deep learning model training harnesses the distributed sensing, computing, and communication resources on board vehicles without sharing sensitive image data. However, the collaborative nature of FL-RCC frameworks introduces new vulnerabilities. In particular, Targeted Label Flipping Attacks (TLFAs) allow malicious clients (vehicles) to deliberately alter their training data labels, thereby compromising the inference performance of the learned model. Despite their critical safety implications (e.g., a vehicle that misclassifies a slippery, dangerous road as pristine may exceed the recommended speed), studies of TLFAs on FL-based RCC systems are largely missing. Our work is threefold: 1) disclosing the vulnerability of existing FL-RCC systems through analysis of specific poisoning attacks; 2) introducing a novel label-distance-based metric to more precisely quantify the safety risks posed by TLFAs; and 3) proposing a defensive mechanism that leverages neuron-wise analysis of the output layer to mitigate TLFA effects. We will run extensive experiments across various RCC tasks, evaluation metrics, baselines, and deep learning models to demonstrate both the severity of TLFAs on FL-RCC systems and the effectiveness of our scheme in mitigating their impact.
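To make the attack and the metric concrete, the following is a minimal sketch of a targeted label flip on a malicious client's local dataset, together with a toy label-distance risk score. The class set, its ordering, and all function names here are illustrative assumptions, not the project's actual implementation.

```python
from typing import List

# Assumed ordinal road-condition classes, from least to most hazardous
# (hypothetical labeling for illustration only).
CLASSES = ["dry", "wet", "snowy", "icy"]

def flip_labels(labels: List[int], src: int, dst: int) -> List[int]:
    """TLFA sketch: a malicious client remaps every sample of the
    source class to the target class before local training."""
    return [dst if y == src else y for y in labels]

def label_distance_risk(true_label: int, pred_label: int) -> int:
    """Toy label-distance metric: the safety risk of a misclassification
    grows with the ordinal gap between true and predicted conditions."""
    return abs(true_label - pred_label)

# Example: the attacker relabels "icy" (3) samples as "dry" (0),
# the most safety-critical flip under this ordering.
clean = [0, 1, 3, 3, 2]
poisoned = flip_labels(clean, src=3, dst=0)
print(poisoned)                    # icy samples now carry the "dry" label
print(label_distance_risk(3, 0))   # maximal ordinal gap for this class set
```

Under such a metric, flipping "icy" to "dry" scores higher risk than flipping "wet" to "dry", which is the intuition behind weighting TLFA damage by label distance rather than by plain misclassification rate.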