Title: Adaptive optimization of bosonic quantum information processing
DNr: Berzelius-2023-8
Project Type: LiU Berzelius
Principal Investigator: Patric Holmvall <patric.holmvall@physics.uu.se>
Affiliation: Chalmers tekniska högskola
Duration: 2023-02-28 – 2023-09-01
Classification: 10302
Keywords:

Abstract

The interconversion of quantum states is at the heart of quantum information processing. Restrictions imposed by the quantum systems employed lead to a hierarchy among the operations one can apply, because some are easier or less costly to perform than others. The consequence is that certain operations and states are resources that are costly to obtain. For continuous-variable, or bosonic, systems the resource of importance is most often non-Gaussianity: Gaussian operations and states are considered easy to obtain or perform and are thus free. Interconversions in this framework are then conversions between resource states using free operations. Finding the optimal conversion, or a bound on the convertibility between resources, is therefore of utmost importance for any application involving quantum information.

However, current studies in this framework are restricted to one-to-one deterministic conversions. We want to extend this by studying "N to M" conversions (N > M) and by lifting the requirement that the conversion be deterministic. Several interesting applications, such as distillation protocols and cascading protocols, fall into this category. Specifically, we want to use adaptive, machine-learning (ML) inspired methods to find the parameters that bring our input resource as close as possible to the desired states.

These methods have been implemented in our GPU-based code, which we have previously used successfully on SNIC clusters (C3SE 2022/1-6 and its predecessors) to produce a publication in PRX Quantum (https://journals.aps.org/prxquantum/abstract/10.1103/PRXQuantum.2.010327) and, more recently, in Physical Review A (https://journals.aps.org/pra/abstract/10.1103/PhysRevA.105.062446). Apart from carrying out the research described above, we are also extending our code to further improve its parallelization and ML optimization.
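The adaptive parameter-optimization idea can be illustrated with a minimal sketch (this is not the project's GPU code; the truncation dimension, the coherent target state, and the finite-difference gradient ascent are all illustrative assumptions): a free Gaussian operation, here a single real displacement, is tuned so that its output state maximizes fidelity with a fixed target state.

```python
import numpy as np
from scipy.linalg import expm

DIM = 25  # Fock-space truncation (illustrative choice, not from the application)

# Annihilation operator in the truncated Fock basis
a = np.diag(np.sqrt(np.arange(1, DIM)), k=1)
adag = a.conj().T

def displace(alpha):
    """Displacement operator D(alpha) = exp(alpha a^dag - alpha* a)."""
    return expm(alpha * adag - np.conj(alpha) * a)

vacuum = np.zeros(DIM)
vacuum[0] = 1.0

# Hypothetical target: a coherent state |alpha0>, standing in for a resource state
alpha0 = 1.0
target = displace(alpha0) @ vacuum

def fidelity(theta):
    """Overlap |<target| D(theta) |vacuum>|^2 for a real displacement theta."""
    return abs(np.vdot(target, displace(theta) @ vacuum)) ** 2

# Adaptive loop: finite-difference gradient ascent on the fidelity
theta, lr, eps = 0.0, 0.5, 1e-4
for _ in range(200):
    grad = (fidelity(theta + eps) - fidelity(theta - eps)) / (2 * eps)
    theta += lr * grad

print(theta, fidelity(theta))
```

In the full problem the single parameter is replaced by the many parameters of a multimode Gaussian circuit, and the finite-difference step by automatic differentiation on the GPU, but the structure of the loop is the same.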