Parallel Search in a Reduced Memory Setting
Title: |
Parallel Search in a Reduced Memory Setting |
DNr: |
NAISS 2025/22-1329 |
Project Type: |
NAISS Small Compute |
Principal Investigator: |
Oliver Harold Jörgensen <oliver.harold.jorgensen@liu.se> |
Affiliation: |
Linköpings universitet |
Duration: |
2025-09-29 – 2026-10-01 |
Classification: |
10212 |
Keywords: |
|
Abstract
Automated Planning often relies on parallel heuristic search algorithms to handle large and complex problem instances by distributing computation across multiple processors. However, the scalability of these methods, such as GRAZHDA*, is frequently constrained by the memory required to store large open and closed lists, especially as state spaces grow or solutions become longer. To tackle this challenge, we propose to combine GRAZHDA* with Tree Compression—a state storage technique from database systems that exploits commonalities in state representations by compactly storing shared prefixes in a trie-like structure. This approach has shown promising results in sequential search, reducing memory consumption while incurring only modest runtime overhead.
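To make the idea concrete, the following is a minimal sketch (with hypothetical names, not our planned implementation) of trie-based state storage: states that share a prefix of variable assignments share trie nodes, so compressed insertion and duplicate detection happen in a single pass.

```python
class TrieNode:
    """One level of the trie; children are keyed by a variable's value."""
    __slots__ = ("children",)

    def __init__(self):
        self.children = {}  # value of the next state variable -> child node


class StateTrie:
    """Stores fixed-length state vectors; shared prefixes are stored once."""

    def __init__(self):
        self.root = TrieNode()

    def insert(self, state):
        """Insert a state (tuple of ints). Returns True if the state was new."""
        node, is_new = self.root, False
        for value in state:
            child = node.children.get(value)
            if child is None:
                child = TrieNode()
                node.children[value] = child
                is_new = True  # at least one fresh node => state not seen before
            node = child
        return is_new


trie = StateTrie()
print(trie.insert((0, 1, 2)))  # True: first insertion
print(trie.insert((0, 1, 2)))  # False: duplicate detected along existing path
print(trie.insert((0, 1, 3)))  # True: shares the prefix (0, 1) with the first
```

In this toy version, the second and third insertions walk the same two nodes as the first, which is the source of the space savings when many reachable states agree on early variable assignments.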
In this project, we will adapt and integrate Tree Compression within the parallel GRAZHDA* search framework for the domain-independent planning setting. Our technical focus will be on correct handling of state expansion, duplicate detection, and synchronized work distribution among processors—the points at which parallelization can conflict with compressed storage. We will implement and optimize this combination, addressing the challenges of concurrency and efficient access to the shared structure.
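One reason compressed storage composes well with hash-distributed search is that work distribution in the HDA* family is decided by a state hash alone. A minimal sketch, assuming plain Zobrist hashing (GRAZHDA* refines which features feed the hash via graph partitioning, but the owner-assignment mechanism is the same; all names here are illustrative):

```python
import random

NUM_VARS, DOMAIN_SIZE, NUM_RANKS = 4, 5, 8

# One random 64-bit key per (variable, value) pair, fixed up front
# and shared by all processors.
rng = random.Random(0)
zobrist = [
    [rng.getrandbits(64) for _ in range(DOMAIN_SIZE)]
    for _ in range(NUM_VARS)
]


def owner(state):
    """Map a state (tuple of ints) to the rank that owns it."""
    h = 0
    for var, val in enumerate(state):
        h ^= zobrist[var][val]  # XOR of the keys of the state's features
    return h % NUM_RANKS


# Identical states always hash to the same rank, so duplicate detection
# can stay local to one processor's (compressed) closed list.
assert owner((0, 1, 2, 3)) == owner((0, 1, 2, 3))
```

Because each rank sees a disjoint slice of the state space, each can maintain its own compressed trie without cross-processor synchronization on lookups, which is the interaction we intend to exploit and evaluate.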
We plan an empirical evaluation using a suite of standard planning benchmarks and domain-independent planners, running on the Tetralith cluster under a range of deliberately constrained memory settings. Our experiments will measure solution quality, runtime, parallel efficiency, and memory usage, and will compare our approach against both (i) conventional parallel GRAZHDA* using hash-table state storage and (ii) strong sequential baselines. Previous work has shown that Tree Compression yields substantial space savings in sequential search, albeit with varying efficacy depending on the domain's state structure and finite-domain representation (FDR) encoding.
With compute time on Tetralith, we aim to (1) determine whether memory savings from Tree Compression enable parallel search to scale to previously unsolvable instances, and (2) analyze how compression interacts with parallel workload distribution. The outcomes will inform scalable design strategies for memory-limited planning, and results, code, and benchmarks will be disseminated to accelerate further research in this area.