High-dimensional Bayesian optimization with foundational assumption
|Project title|High-dimensional Bayesian optimization with foundational assumption|
|Allocation|NAISS Small Compute|
|Principal investigator|Carl Hvarfner <email@example.com>|
|Project period|2023-10-03 – 2024-11-01|
High-dimensional problems have long been considered the Achilles' heel of Bayesian optimization (BO) algorithms. The well-documented curse of dimensionality, or boundary issue, has spawned a collection of algorithms that work around it by assuming either effective low-dimensional subspaces or additivity, or by resorting to localized BO variants. In this paper, we make the distinction between the dimensionality and the complexity of BO problems, and show that as long as the problem is presumed to be of reasonable and constant complexity, as measured by the RKHS norm on the space of black-box functions, increasing the dimensionality does not substantially degrade performance. Our findings are complemented by state-of-the-art results on problems with both dense and sparse effective subspaces, as well as on real-world high-dimensional optimization problems.
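To make the notion of complexity concrete, the sketch below computes the RKHS norm of a kernel expansion f = Σᵢ αᵢ k(xᵢ, ·), for which ‖f‖²_H = αᵀKα, under a squared-exponential kernel. This is a minimal numpy illustration, not the paper's implementation; the kernel choice, the √d lengthscale scaling, and all variable names are assumptions made here for demonstration.

```python
import numpy as np

def rbf_kernel(X, Y, lengthscale=1.0):
    # Squared-exponential kernel k(x, y) = exp(-||x - y||^2 / (2 l^2)).
    sq_dists = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-sq_dists / (2.0 * lengthscale**2))

def rkhs_norm(alpha, X, lengthscale=1.0):
    # For f = sum_i alpha_i k(x_i, .), the RKHS norm is sqrt(alpha^T K alpha).
    K = rbf_kernel(X, X, lengthscale)
    return float(np.sqrt(alpha @ K @ alpha))

rng = np.random.default_rng(0)
alpha = rng.normal(size=8)
for d in (2, 20, 200):
    X = rng.uniform(size=(8, d))
    # On [0, 1]^d, squared pairwise distances grow linearly in d, so a
    # lengthscale proportional to sqrt(d) keeps kernel values, and hence
    # the norm of a fixed expansion, on a comparable scale across dimensions.
    print(d, rkhs_norm(alpha, X, lengthscale=np.sqrt(d)))
```

The √d lengthscale scaling illustrates the kind of assumption under which complexity stays roughly constant while dimensionality grows, which is the regime the abstract argues vanilla BO handles well.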