News

[03.12.2022]

Two new pre-prints are available:

[10.10.2022]

We are excited that our paper “Cautious Bayesian MPC: Regret Analysis and Bounds on the Number of Unsafe Learning Episodes” was accepted and will be published in the IEEE Transactions on Automatic Control in August 2023.

[29.06.2022]

We contributed to the session “A Tutorial on Nonlinear Model Predictive Control: What Advances Are On the Horizon?” at the American Control Conference, Atlanta, 2022. The corresponding paper will be published in the conference proceedings and can also be found here.

[18.05.2022]

I am happy to share that our work on connecting predictive safety filters and control barrier functions, “Predictive control barrier functions: Enhanced safety mechanisms for learning-based control”, will be published in the IEEE Transactions on Automatic Control in May 2023.

[30.03.2022]

Our paper “Learning-based Moving Horizon Estimation through Differentiable Convex Optimization Layers” has been accepted for the 4th Annual Learning for Dynamics & Control Conference at Stanford and was selected for oral presentation.

[13.12.2021]

Our paper “A predictive safety filter for learning-based control of constrained nonlinear dynamical systems” was selected as an Editor’s Choice article in Automatica in July 2021.


Videos

[27.09.2021]

Spotlight talk at the IROS 2021 Workshop on Safe Real-World Robot Autonomy, summarizing some of our more recent work on predictive safety filters.

[25.02.2021]

A predictive safety filter for a miniature racing application and its combination with imitation learning to safely learn an expert policy; a preprint is available.

[19.01.2021]

I presented the concept of predictive safety filters as part of the Autonomy Talks at ETH. Since the talks were held online due to the pandemic, the presentation is available to watch online.


About me

I am working at Bosch Research on the safe control of uncertain dynamical systems. Before that, I was a postdoctoral researcher in the Intelligent Control Systems group at IDSC, ETH. My research interests are Safe Learning Control, Model Predictive Control, and Model-based Reinforcement Learning; see also my Google Scholar profile.

For a detailed CV, see my LinkedIn profile.