Venue: I.2.35
Organizer: Institut für Mathematik

Description:
Stochastic gradient descent (SGD) is the engine beneath the hood of many machine learning algorithms, but research into its acceleration is still in its infancy. It has long been known that Nesterov’s accelerated gradient descent achieves the optimal convergence rate in the deterministic setting, but such methods are difficult to apply to stochastic gradient descent. In this talk, we will discuss existing accelerated SGD algorithms, tricks to achieve the optimal convergence rate, and ongoing research into acceleration methods for SGD.

Speaker: Derek Driggs (University of Cambridge)
Contact: Senka Haznadar (senka.haznadar@aau.at)
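
Background (standard textbook material, not part of the announcement): for an L-smooth convex objective f with minimizer x*, plain gradient descent converges at rate O(1/k), while Nesterov’s accelerated method attains the optimal O(1/k^2) rate. A minimal sketch of the accelerated update, in our own notation:

    % Nesterov's accelerated gradient descent for L-smooth convex f
    % (standard form; the notation here is illustrative, not from the abstract)
    \begin{align*}
      x_{k+1} &= y_k - \tfrac{1}{L}\,\nabla f(y_k), \\
      t_{k+1} &= \frac{1 + \sqrt{1 + 4t_k^2}}{2}, \qquad t_0 = 1, \\
      y_{k+1} &= x_{k+1} + \frac{t_k - 1}{t_{k+1}}\,(x_{k+1} - x_k),
    \end{align*}
    % which guarantees f(x_k) - f(x^*) \le \frac{2L\|x_0 - x^*\|^2}{(k+1)^2},
    % i.e. O(1/k^2), versus O(1/k) for plain gradient descent.

The difficulty the abstract alludes to is that replacing the exact gradient ∇f(y_k) with a stochastic estimate breaks this analysis, since the momentum step can amplify gradient noise.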