Constrained Reinforcement Learning with Large Language Models
Title: Constrained Reinforcement Learning with Large Language Models
DNr: Berzelius-2024-321
Project Type: LiU Berzelius
Principal Investigator: Rishi Hazra <rishi.hazra@oru.se>
Affiliation: Örebro universitet
Duration: 2024-09-01 – 2025-03-01
Classification: 10201
Keywords:
Abstract
This project seeks to advance constrained Reinforcement Learning by integrating Large Language Models (LLMs) with probabilistic logic-based safety shields. These safety shields will constrain the LLMs to produce only actions that comply with safety norms and other user-defined standards.
We plan to implement this integration by employing probabilistic logic shields, as described in "Safe Reinforcement Learning via Probabilistic Logic Shields" by Yang et al. (IJCAI 2023). Additionally, we will utilize techniques from "SayCanPay: Heuristic Planning with Large Language Models using Learnable Domain Knowledge" by Hazra et al. (AAAI 2024) to further refine the functionality of LLM-based agents. This integrated methodology is expected to significantly improve the safety of LLMs in navigating decision-making scenarios.
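To make the intended mechanism concrete, the sketch below shows how a probabilistic logic shield can reweight an LLM's distribution over a finite set of candidate actions by per-action safety probabilities, in the spirit of Yang et al. (IJCAI 2023). All names and tensors here are illustrative assumptions, not the project's actual code; in the project, the safety probabilities would come from a probabilistic logic program rather than a hand-coded vector.

```python
import torch

def shield_action_distribution(llm_logits: torch.Tensor,
                               p_safe: torch.Tensor) -> torch.Tensor:
    """Reweight an LLM's action distribution with per-action safety
    probabilities (probabilistic logic shielding, cf. Yang et al. 2023):

        pi_shielded(a | s)  proportional to  P(safe | s, a) * pi_LLM(a | s)

    llm_logits: unnormalised LLM scores over a finite candidate-action set
    p_safe:     probability that each candidate action is safe, assumed to
                be produced by a probabilistic logic model (hypothetical here)
    """
    pi_llm = torch.softmax(llm_logits, dim=-1)      # base LLM policy
    unnormalised = pi_llm * p_safe                  # down-weight unsafe actions
    return unnormalised / unnormalised.sum(dim=-1, keepdim=True)

# Toy usage: three candidate actions scored by the LLM; the second is
# judged almost certainly unsafe (P(safe) = 0.05) by the shield.
logits = torch.tensor([1.2, 2.0, 0.3])
p_safe = torch.tensor([0.95, 0.05, 0.90])
print(shield_action_distribution(logits, p_safe))
```

Under this reweighting, actions the logic model deems unsafe retain little probability mass, while the relative preferences of the LLM are preserved among the safe actions.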
The first part of the project, data generation, requires a CPU-intensive cluster and is being carried out on NAISS (project NAISS 2024/22-1025). The second part, training the LLMs on the generated dataset, requires GPU-intensive compute.