Stochastic Gradient Descent for Hybrid Quantum-Classical Optimization


Frederik Wilde
July 9, 2020 4:00 PM

In hybrid quantum-classical methods, the output of a quantum computer is used by a classical optimizer to tune the execution parameters of that quantum computer. This process is iterated to find an optimal set of parameters for a given computational goal. Such methods are promising applications of NISQ devices, since part of the computation is done classically and limited coherence times can be tolerated. In this setting, gradient-based optimization requires the evaluation of expectation values at the end of the quantum circuit. In this talk I will present our recent investigations [1910.01155], which explore the fact that the estimation of expectation values leads to a form of stochastic gradient descent. I will introduce the basic concept of hybrid quantum-classical methods and explain how gradients can be computed in this setting. I will list potential sources of stochasticity and show how they can lead to more resource-efficient optimization. I will discuss convergence guarantees for different loss functions and algorithms, and present our numerical results as well as the tools used to obtain them.

______________________________________________

Zoom meeting details

Topic: Quantum Information and Quantum Computing Working Group
Time: July 09, 2020, 04:00 PM Warsaw
Join Zoom Meeting: QIQCWG-ZOOM
Meeting ID: 922 2710 3826
Password: bQ,mfjpB!

If you encounter any problems with connecting to the Zoom meeting, please email filip.b.maciejewski@gmail.com directly.
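To give a flavor of the idea discussed in the abstract, here is a minimal, purely illustrative Python sketch (not the implementation from [1910.01155]). It assumes a toy single-qubit circuit whose expectation value is ⟨Z⟩ = cos(θ) after an RY(θ) rotation, estimates that expectation from a finite number of measurement shots, differentiates it with the parameter-shift rule, and feeds the noisy gradient into gradient descent, which thereby becomes stochastic gradient descent. All names and parameters are chosen for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

def estimate_expectation(theta, shots=50):
    """Shot-based estimate of <Z> for RY(theta)|0>, where exactly <Z> = cos(theta).

    Each shot yields +1 with probability (1 + cos(theta)) / 2, else -1,
    mimicking projective measurement on a quantum device.
    """
    p_plus = (1.0 + np.cos(theta)) / 2.0
    outcomes = rng.choice([1.0, -1.0], size=shots, p=[p_plus, 1.0 - p_plus])
    return outcomes.mean()

def parameter_shift_gradient(theta, shots=50):
    # Parameter-shift rule: dE/dtheta = (E(theta + pi/2) - E(theta - pi/2)) / 2.
    # With finite shots, each term is a noisy estimate, so the gradient is stochastic.
    plus = estimate_expectation(theta + np.pi / 2, shots)
    minus = estimate_expectation(theta - np.pi / 2, shots)
    return (plus - minus) / 2.0

# Minimize the loss E(theta) = cos(theta) by SGD; the minimum is at theta = pi.
theta, learning_rate = 0.5, 0.2
for _ in range(200):
    theta -= learning_rate * parameter_shift_gradient(theta)

print(np.cos(theta))  # final loss, close to the minimum value -1
```

Reducing the number of shots makes each iteration cheaper on the device at the cost of noisier gradients; the trade-off between shots per estimate and convergence is exactly the kind of resource question the talk addresses.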