Adaptive large-scale optimization of bosonic quantum information processing

## Abstract

We apply for continuation at Berzelius through our KAW/WACQT affiliation.
Quantum information processing relies on quantum state interconversion: information is encoded in quantum states, which are transformed by quantum operations to enable computation. Continuous-variable systems offer a viable route to universal quantum computing together with high resilience to noise. Gaussian operations are particularly useful since they are essentially "free" (experimentally easy) to implement in an optical setting and are readily available in many other experimental platforms. However, Gaussian operations alone are not universal, so the required resource states must be non-Gaussian. The objective therefore becomes to identify relevant non-Gaussian resource states, and to identify experimental protocols for interconversion between different non-Gaussian resources. Fortunately, a number of resource states have already been generated in experiment, but efficient and robust interconversion protocols still need to be discovered. This is the main problem we address in our research.
The goal of our project is to discover such experimentally viable interconversion protocols through large-scale optimization. However, the cost function (the fidelity between the protocol output and the target state) is extremely expensive to calculate, the phase space is high-dimensional, and the dimensionality scales exponentially with the number of protocol input states. Available state-of-the-art software such as QuTiP can take days to evaluate a single cost function, while optimization requires thousands to millions of evaluations. It is therefore infeasible to use QuTiP to solve the optimization problem even for a single input state, whereas we need to evaluate hundreds of different states for each protocol, using multiple input states. Our goal is therefore also to develop software that is efficient enough to tackle this immense problem.
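To make the scaling problem concrete, the following is a minimal NumPy sketch (not the FidelityOptim implementation) of the pure-state fidelity cost function and of how the Hilbert-space dimension grows with the number of modes; the Fock cutoff and the coherent-state amplitudes are illustrative assumptions.

```python
# Toy sketch: fidelity between truncated coherent states, and the
# exponential growth of the multi-mode Hilbert-space dimension.
import numpy as np
from math import exp, factorial, sqrt

def coherent(alpha, N):
    """Coherent state |alpha>, truncated to the first N Fock amplitudes."""
    return np.array([exp(-abs(alpha)**2 / 2) * alpha**n / sqrt(factorial(n))
                     for n in range(N)], dtype=complex)

def fidelity(psi, phi):
    """Fidelity |<psi|phi>|^2 between two pure states."""
    return abs(np.vdot(psi, phi))**2

N = 20                                     # per-mode Fock cutoff (assumption)
F = fidelity(coherent(0.3, N), coherent(0.7, N))
# Analytically |<a|b>|^2 = exp(-|a-b|^2) for coherent states, here exp(-0.16).

# A k-mode state lives in a space of dimension N**k, so the cost of a single
# cost-function evaluation grows exponentially with the number of modes.
dims = [N**k for k in (1, 2, 3)]           # 20, 400, 8000
```

Even this single-mode evaluation is cheap only because the state is trivial; realistic protocol simulations additionally involve propagating the input states through the protocol circuit, which dominates the cost.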
We have developed the software FidelityOptim, which is many orders of magnitude more efficient than QuTiP, enabling thousands of cost-function evaluations per second instead of roughly one per day. This is made possible by an extremely efficient implementation of the cost function. The number of necessary evaluations is also reduced by powerful optimization routines. Still, the only feasible way to scale the problem up to multi-mode conversion is to rely on GPUs, which is why the main calculations in FidelityOptim are implemented as efficient CUDA kernels. As a result, FidelityOptim is more than capable of handling these difficult optimization problems, as demonstrated in our previous NSC/NAISS/SNIC projects (Berzelius-2023-239, C3SE 2022/1-6, C3SE 2023/1-3), where we identified viable conversion protocols for some long-sought non-Gaussian resource states, published in PRX Quantum (https://journals.aps.org/prxquantum/abstract/10.1103/PRXQuantum.2.010327) and in Physical Review A (https://journals.aps.org/pra/abstract/10.1103/PhysRevA.105.062446). In Berzelius-2023-239 we extended FidelityOptim with multi-mode support and demonstrated an important "no-go" result for the experimental realization of deterministic protocols.
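The batching strategy behind the GPU speed-up can be sketched as follows in NumPy; FidelityOptim's CUDA kernels parallelize this same pattern on the device, and the batch size, random candidate states, and single-photon target below are illustrative assumptions only.

```python
# Toy sketch: scoring thousands of candidate protocol outputs against a
# target state with a single matrix-vector product, the operation a CUDA
# kernel parallelizes across states and Fock amplitudes.
import numpy as np

rng = np.random.default_rng(42)
N, batch = 32, 4096                  # Fock cutoff and batch size (assumptions)

# One normalized candidate state per row.
states = rng.normal(size=(batch, N)) + 1j * rng.normal(size=(batch, N))
states /= np.linalg.norm(states, axis=1, keepdims=True)

target = np.zeros(N, dtype=complex)
target[1] = 1.0                      # single-photon Fock state as target

# |<target|psi_i>|^2 for the whole batch in one call.
fidelities = np.abs(states @ target.conj())**2
best = int(fidelities.argmax())      # index of the best candidate
```

Evaluating the batch as one dense linear-algebra operation, rather than looping over candidates, is what allows throughput to reach thousands of evaluations per second once the same pattern runs on a GPU.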
The goal of the continuation project is to evaluate a new probabilistic multi-mode protocol through large-scale optimization, identifying experimentally viable conversions. Success would enable a resource theory for universal quantum computing.