Efficient Techniques for GPU Acceleration of Multi-Shot Quantum Computing Simulations

08/07/2023
by Jun Doi et al.

Quantum computers are becoming practical for numerous applications. However, simulating quantum computing on classical computers remains both demanding and useful, because current quantum computers are constrained by limited resources, hardware restrictions, instability, and noise. Improving the performance of quantum computing simulation on classical computers will contribute to the development of quantum computers and their algorithms. Such simulations require long run times, especially for quantum circuits with a large number of qubits or when a large number of shots must be simulated, as in noise simulations or circuits with intermediate measurements. Graphics processing units (GPUs) are well suited to accelerating quantum computer simulations by exploiting their computational power and high-bandwidth memory, and they offer a large advantage for circuits with relatively many qubits. However, GPUs are inefficient at simulating multi-shot runs with noise, because the randomness limits parallelization. In addition, GPUs are at a disadvantage for circuits with a small number of qubits because of the large overhead of GPU kernel launches. In this paper, we introduce optimization techniques for multi-shot simulations on GPUs. We gather multiple simulation shots into a single GPU kernel execution to reduce launch overhead, scheduling the randomness caused by noise across shots. In addition, we introduce shot-branching, which reduces computation and memory usage for multi-shot simulations. Using these techniques, we achieve roughly a 10x speedup over previous implementations.
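As a rough illustration of the shot-batching idea described above, and not the authors' implementation, the following CUDA sketch applies one noise-sampled single-qubit gate to the statevectors of many shots in a single kernel launch rather than one launch per shot. The kernel name batched_rx, the per-shot angle array, and all sizes are hypothetical choices for this example.

    #include <cuda_runtime.h>
    #include <cuComplex.h>
    #include <cstdio>
    #include <cmath>

    // One launch applies RX(theta[shot]) on qubit `target` to every shot's
    // statevector, instead of launching a separate kernel per shot.
    __global__ void batched_rx(cuDoubleComplex *states, const double *thetas,
                               int num_qubits, int target, int num_shots)
    {
        long long dim   = 1LL << num_qubits;   // statevector length per shot
        long long pairs = dim >> 1;            // amplitude pairs touched by a 1-qubit gate
        long long gid   = blockIdx.x * (long long)blockDim.x + threadIdx.x;
        if (gid >= pairs * num_shots) return;

        long long shot = gid / pairs;          // which shot this thread serves
        long long p    = gid % pairs;          // which amplitude pair within that shot

        // Insert a 0 bit at position `target` to get i0; set that bit for i1.
        long long mask = (1LL << target) - 1;
        long long i0   = ((p & ~mask) << 1) | (p & mask);
        long long i1   = i0 | (1LL << target);

        cuDoubleComplex *sv = states + shot * dim;
        double c = cos(thetas[shot] * 0.5), s = sin(thetas[shot] * 0.5);
        cuDoubleComplex a0 = sv[i0], a1 = sv[i1];

        // RX(theta) = [[c, -i*s], [-i*s, c]] applied to the pair (a0, a1)
        sv[i0] = make_cuDoubleComplex(c * cuCreal(a0) + s * cuCimag(a1),
                                      c * cuCimag(a0) - s * cuCreal(a1));
        sv[i1] = make_cuDoubleComplex(s * cuCimag(a0) + c * cuCreal(a1),
                                      c * cuCimag(a1) - s * cuCreal(a0));
    }

    int main()
    {
        const int num_qubits = 5, num_shots = 1024, target = 2;   // hypothetical sizes
        const long long dim = 1LL << num_qubits;

        cuDoubleComplex *d_states;
        double *d_thetas;
        cudaMalloc(&d_states, sizeof(cuDoubleComplex) * dim * num_shots);
        cudaMalloc(&d_thetas, sizeof(double) * num_shots);
        cudaMemset(d_states, 0, sizeof(cuDoubleComplex) * dim * num_shots);
        cudaMemset(d_thetas, 0, sizeof(double) * num_shots);
        // Initializing each shot to |0...0> and sampling per-shot noise angles
        // are omitted here for brevity.

        long long work = (dim >> 1) * num_shots;
        int threads = 256;
        int blocks = (int)((work + threads - 1) / threads);
        batched_rx<<<blocks, threads>>>(d_states, d_thetas, num_qubits, target, num_shots);
        cudaDeviceSynchronize();
        printf("one kernel launch covered %d shots\n", num_shots);

        cudaFree(d_states);
        cudaFree(d_thetas);
        return 0;
    }

In a full simulator the per-shot gate sequences would also be scheduled on the host so that shots sharing the same sampled noise can reuse work; that sharing is what the shot-branching technique mentioned in the abstract addresses.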
