Talk as part of the Doctoral Seminar in Mathematics by Timo Welti (University of Vienna)
Venue: I.2.01
Organizer: Institut für Mathematik

Title: Why deep artificial neural networks overcome the curse of dimensionality in PDE approximation

Abstract: In recent years deep artificial neural networks (DNNs) have been employed very successfully in numerical simulations for a multitude of computational problems including, for example, object and face recognition, natural language processing, fraud detection, computational advertisement, and numerical approximations of partial differential equations (PDEs). Such numerical simulations indicate that DNNs seem to be able to overcome the curse of dimensionality in the sense that the number of real parameters used to describe the DNN grows at most polynomially in both the reciprocal of the prescribed approximation accuracy and the dimension of the function which the DNN aims to approximate in such computational problems. While there is a large number of rigorous mathematical approximation results for artificial neural networks in the scientific literature, there are only a few special situations in which results in the literature can rigorously explain the success of DNNs when approximating high-dimensional functions. In this talk it is revealed that DNNs do indeed overcome the curse of dimensionality in the numerical approximation of Kolmogorov PDEs with constant diffusion and nonlinear drift coefficients. The presented ideas for proving this crucially […]

Speaker: Timo Welti
Contact: simone gahleitner (simone.gahleitner@aau.at)
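To make the curse-of-dimensionality claim from the abstract concrete, here is a minimal illustrative sketch (not from the talk; all constants and exponents are hypothetical): a classical tensor-product grid with N points per axis requires N^d parameters, i.e. exponential growth in the dimension d, whereas a bound of the kind described in the abstract allows the parameter count to grow only polynomially in d and in the reciprocal 1/eps of the approximation accuracy.

```python
# Illustrative comparison only. The polynomial bound below uses made-up
# constants C, p, q; the talk's actual bounds are not reproduced here.

def grid_parameters(points_per_axis: int, dimension: int) -> int:
    """Parameter count of a tensor-product grid: exponential in the dimension."""
    return points_per_axis ** dimension

def polynomial_bound(dimension: int, eps: float,
                     C: float = 1.0, p: int = 2, q: int = 2) -> float:
    """Hypothetical DNN parameter bound: polynomial in d and in 1/eps."""
    return C * dimension ** p * (1.0 / eps) ** q

# Exponential growth quickly dwarfs the polynomial bound as d increases.
for d in (1, 10, 100):
    print(d, grid_parameters(10, d), polynomial_bound(d, eps=0.1))
```

Even for d = 100, the hypothetical polynomial bound stays at 10^6 parameters, while the grid would need 10^100 — this gap is what "overcoming the curse of dimensionality" refers to.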