More Practical and Adaptive Algorithms for Online Quantum State Learning

06/01/2020
by Yifang Chen, et al.

Online quantum state learning is a problem recently proposed by Aaronson et al. (2018), in which the learner sequentially predicts n-qubit quantum states based on given measurements and the noisy outcomes observed on those states. The algorithms in previous work are worst-case optimal in general but fail to achieve tighter bounds in certain simpler or more practical cases. In this paper, we develop algorithms that advance the online learning of quantum states. First, we show that the Regularized Follow-the-Leader (RFTL) method with Tsallis-2 entropy achieves an O(√(MT)) total loss relative to the best state in hindsight on the first T measurements, where M is the maximum rank of the measurements. This regret bound depends only on the maximum rank M of the measurements rather than on the number of qubits, and thus takes advantage of low-rank measurements. Second, we propose a parameter-free algorithm based on a classical adaptive learning-rate schedule that achieves a regret depending on the loss of the best state in hindsight, and thus takes advantage of low-noise outcomes. Beyond these more adaptive bounds, we also show that our RFTL with Tsallis-2 entropy algorithm can be implemented efficiently on near-term quantum computing devices, which was not achievable in previous work.
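To make the RFTL method described above concrete, below is a minimal numerical sketch. It relies on the fact that the Tsallis-2 regularizer Tr(ρ²) is the squared Frobenius norm of ρ, so the RFTL step reduces to projecting the scaled negative cumulative subgradient onto the set of density matrices in Frobenius norm. The L1 loss, the lazy-projection form, and all names (RFTLTsallis2, project_to_density_matrix, eta) are illustrative assumptions rather than the paper's exact formulation.

```python
import numpy as np

def project_to_density_matrix(H):
    """Frobenius projection of a Hermitian matrix H onto the set of
    density matrices {rho : rho >= 0, Tr(rho) = 1}: diagonalize, then
    project the eigenvalues onto the probability simplex."""
    w, V = np.linalg.eigh(H)
    # Simplex projection of the eigenvalue vector (Duchi et al., 2008).
    u = np.sort(w)[::-1]
    cssv = np.cumsum(u) - 1.0
    idx = np.arange(1, len(u) + 1)
    last = np.nonzero(u - cssv / idx > 0)[0][-1]
    theta = cssv[last] / (last + 1.0)
    w_proj = np.maximum(w - theta, 0.0)
    return (V * w_proj) @ V.conj().T

class RFTLTsallis2:
    """Sketch of RFTL with the Tsallis-2 entropy regularizer
    (constant factors are absorbed into the learning rate eta)."""
    def __init__(self, dim, eta=0.1):
        self.eta = eta
        self.G = np.zeros((dim, dim), dtype=complex)   # cumulative subgradient
        self.rho = np.eye(dim, dtype=complex) / dim    # maximally mixed start

    def predict(self, E):
        """Predicted outcome Tr(E rho) for a two-outcome measurement E."""
        return float(np.real(np.trace(E @ self.rho)))

    def update(self, E, b):
        """Observe noisy outcome b, incur the L1 loss |Tr(E rho) - b|,
        and take a lazy-projection RFTL step."""
        pred = self.predict(E)
        self.G += np.sign(pred - b) * E                # L1-loss subgradient
        self.rho = project_to_density_matrix(-self.eta * self.G)
        return abs(pred - b)
```

A hypothetical run on random rank-1 measurements (so the maximum rank is M = 1) could look like this; the target state and measurements here are synthetic:

```python
rng = np.random.default_rng(0)
d = 4                                        # a 2-qubit example
psi = rng.normal(size=d) + 1j * rng.normal(size=d)
psi /= np.linalg.norm(psi)
target = np.outer(psi, psi.conj())           # unknown pure state

learner = RFTLTsallis2(dim=d, eta=0.5)
for t in range(200):
    v = rng.normal(size=d) + 1j * rng.normal(size=d)
    v /= np.linalg.norm(v)
    E = np.outer(v, v.conj())                # rank-1 measurement
    b = float(np.real(np.trace(E @ target))) # noiseless outcome for the demo
    learner.update(E, b)
```

Diagonalizing and projecting the eigenvalues onto the simplex is the standard way to compute the Frobenius projection onto density matrices; this is one reason the Tsallis-2 regularizer leads to cheap updates compared with regularizers such as the von Neumann entropy.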
