
Vincent Fortuin

As a Branco Weiss Fellow, Dr. Vincent Fortuin will study the role of prior knowledge in Bayesian deep learning, working towards more data-efficient learning approaches for critical applications, where model robustness and calibrated uncertainty estimates are crucial. To this end, his research will combine ideas from classical Bayesian statistics, representation learning, and meta-learning into a unified framework that unlocks new applications on small datasets.

Background

Nationality
Germany and the Netherlands

Academic Career

  • Research group leader, ELPIS group, Helmholtz AI, Munich, Germany, 2023-present
  • Postdoctoral researcher in Machine Learning, University of Cambridge, United Kingdom, 2022-2023
  • PhD in Machine Learning, ETH Zürich, Switzerland, 2017-2021
  • MSc in Computational Biology and Bioinformatics, ETH Zürich, Switzerland, 2015-2017
  • BSc in Molecular Life Science, University of Hamburg, Germany, 2012-2015

Major Awards

  • Research Fellowship of St John’s College, Cambridge, 2022
  • SNF Postdoc.Mobility Fellowship, 2022
  • PhD Fellowship from the Swiss Data Science Center, 2019
  • Willi Studer Award, 2018
  • Excellence Scholarship of ETH Zürich, 2015

Research

Branco Weiss Fellow Since
2022

Research Category
Computer Science

Research Location
Helmholtz AI, Munich, Germany

Background

Machine learning, especially deep learning, has made remarkable progress in recent years and has permeated many areas of modern life. However, most current deep learning models are still data-hungry, which is why the most salient advances have come in areas where huge datasets are readily available, such as image recognition and language processing. This is partially due to the community’s focus on “universal” learning algorithms, which can learn from any kind of data without prior assumptions. An alternative approach is to make use of the prior knowledge available in many important domains, such as medical or scientific applications, which promises to make the resulting methods not only more performant but also more data-efficient. Bayesian statistics prescribes how to use such prior knowledge optimally in a learning problem; integrating these ideas into deep learning models has given rise to the nascent field of Bayesian deep learning.
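As a compact statement of the Bayesian recipe referenced here (standard textbook notation, not specific to this project): given a prior over model parameters and a likelihood for the observed data, Bayes’ theorem yields a posterior, and predictions average over that posterior rather than committing to a single point estimate of the weights:

```latex
% Posterior over parameters \theta given data D:
% prior times likelihood, normalized by the evidence p(D).
p(\theta \mid \mathcal{D}) = \frac{p(\mathcal{D} \mid \theta)\, p(\theta)}{p(\mathcal{D})},
\qquad
p(\mathcal{D}) = \int p(\mathcal{D} \mid \theta)\, p(\theta)\, d\theta

% Predictive distribution for a new input x^*: average the model's
% predictions over the posterior instead of using one weight setting.
p(y^{*} \mid x^{*}, \mathcal{D}) = \int p(y^{*} \mid x^{*}, \theta)\, p(\theta \mid \mathcal{D})\, d\theta
```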

However, previous research has shown that commonly used priors in Bayesian deep learning are suboptimal and can lead to various pathologies, and that better priors can be identified with model selection and meta-learning techniques. These preliminary results have strong implications for how prior knowledge from different application areas can be incorporated into modern machine learning models.
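To make “model selection” concrete: one common criterion for comparing candidate priors is the marginal likelihood, i.e. the evidence p(D) from the equations above. Below is a minimal, hedged sketch in Python that uses Bayesian linear regression as a tractable stand-in for a Bayesian neural network (where the same quantity must instead be approximated, e.g. with a Laplace approximation); all names and values are illustrative, not taken from the actual research.

```python
# Hedged sketch: score candidate priors by their log marginal likelihood.
# Bayesian linear regression admits this quantity in closed form.
import numpy as np
from scipy.stats import multivariate_normal

rng = np.random.default_rng(0)

# Toy data: targets generated from a linear function plus noise.
n, d = 50, 3
X = rng.normal(size=(n, d))
w_true = np.array([1.5, -2.0, 0.5])
sigma = 0.3                                   # observation noise std
y = X @ w_true + sigma * rng.normal(size=n)

def log_marginal_likelihood(X, y, prior_var, noise_var):
    """log p(y | X) for y = X w + eps with prior w ~ N(0, prior_var * I).
    Marginalizing out w gives y ~ N(0, prior_var * X X^T + noise_var * I)."""
    cov = prior_var * X @ X.T + noise_var * np.eye(len(y))
    return multivariate_normal.logpdf(y, mean=np.zeros(len(y)), cov=cov)

# Compare candidate prior variances; higher evidence = better-supported prior.
for prior_var in [0.01, 1.0, 100.0]:
    lml = log_marginal_likelihood(X, y, prior_var, sigma**2)
    print(f"prior variance {prior_var:6.2f}: log evidence = {lml:8.2f}")
```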

Details of Research

Dr. Vincent Fortuin aims to extend his previous research on Bayesian deep learning to discover better ways of designing priors for Bayesian neural networks and deep Gaussian processes. He also plans to combine his expertise in representation learning and meta-learning into a unified framework in which reusable representations are learned on unlabeled data and then used as inputs to Bayesian models in downstream applications with small labeled datasets. Moreover, both the representation learning stage and the downstream predictive models can incorporate Bayesian prior knowledge, specified from existing domain expertise or meta-learned from related tasks. Taken together, this framework aims to achieve robust performance and calibrated uncertainty estimation in critical applications with small datasets, unlocking application areas where deep learning has so far not been feasible and thus delivering on promises the machine learning community has made in recent years.
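A minimal sketch of the two-stage pattern described above, under simplifying assumptions: PCA stands in for the much richer representation learners in the actual research, and the downstream predictor is Bayesian linear regression, whose posterior and predictive uncertainty are available in closed form. All names and values are illustrative.

```python
# Hedged sketch: learn a representation on plentiful unlabeled data,
# then fit a small Bayesian predictor on top of the learned features.
import numpy as np

rng = np.random.default_rng(1)

# Stage 1: "representation learning" on unlabeled data (PCA via SVD
# as a simple stand-in for a learned encoder).
X_unlabeled = rng.normal(size=(1000, 20))
mean = X_unlabeled.mean(axis=0)
_, _, Vt = np.linalg.svd(X_unlabeled - mean, full_matrices=False)
encode = lambda X: (X - mean) @ Vt[:5].T      # project to 5 learned features

# Stage 2: Bayesian linear regression on a *small* labeled dataset.
X_small = rng.normal(size=(15, 20))
y_small = X_small[:, 0] - 0.5 * X_small[:, 1] + 0.1 * rng.normal(size=15)

Phi = encode(X_small)
alpha, noise_var = 1.0, 0.01                  # prior precision, noise variance
# Standard conjugate update: posterior over weights is N(mu, Sigma).
Sigma = np.linalg.inv(alpha * np.eye(Phi.shape[1]) + Phi.T @ Phi / noise_var)
mu = Sigma @ Phi.T @ y_small / noise_var

# Predictive mean and variance for a new point: calibrated uncertainty
# is exactly what the small-data regime demands.
phi_new = encode(rng.normal(size=(1, 20)))
pred_mean = phi_new @ mu
pred_var = noise_var + phi_new @ Sigma @ phi_new.T
print(f"prediction: {pred_mean.item():.3f} +/- {np.sqrt(pred_var.item()):.3f}")
```

In the envisioned framework, the prior over the downstream weights (here a fixed isotropic Gaussian) would itself encode domain knowledge or be meta-learned from related tasks, rather than being hand-set as in this toy example.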