A. Mesbah, K.P. Wabersich, A.P. Schoellig, M.N. Zeilinger, S. Lucia, T.A. Badgwell, and J.A. Paulson: Fusion of machine learning and MPC under uncertainty: What advances are on the horizon? American Control Conference, Atlanta, 2022.
This paper provides an overview of the recent research efforts on the integration of machine learning and model predictive control under uncertainty. The paper is organized around four major categories: learning models from system data and prior knowledge; learning control policy parameters from closed-loop performance data; learning efficient approximations of iterative online optimization from policy data; and learning optimal cost-to-go representations from closed-loop performance data. In addition to reviewing the relevant literature, the paper also offers perspectives for future research in each of these areas.
J. Köhler, K. P. Wabersich, J. Berberich, and M. N. Zeilinger: State space models vs. multi-step predictors in predictive control: Are state space models complicating safe data-driven designs? e-Print arXiv:2203.15471, 2022.
This paper contrasts recursive state space models and direct multi-step predictors for linear predictive control. We provide a tutorial exposition for both model structures to solve the following problems: 1. stochastic optimal control; 2. system identification; 3. stochastic optimal control based on the estimated model. Throughout the paper, we provide detailed discussions of the benefits and limitations of these two model parametrizations for predictive control and highlight the relation to existing works. Additionally, we derive a novel (partially tight) constraint tightening for stochastic predictive control with parametric uncertainty in the multi-step predictor.
K. P. Wabersich: Predictive Mechanisms for Safe Learning in Control Systems. Doctoral Thesis.
The increasing impact of data-driven technologies across various industries has sparked renewed interest in using learning-based approaches to automatically design and optimize control systems. While recent success stories from the field of reinforcement learning (RL) suggest an immense potential of such approaches, missing safety certificates still confine learning-based methods to simulation environments or fail-safe laboratory conditions. To this end, Part A of this dissertation introduces a predictive safety filter that enhances existing, potentially unsafe learning-based controllers with safety guarantees. The underlying method is based on model predictive control (MPC) theory and ensures constraint satisfaction through an optimization-based safety mechanism that provides a safe backup control law at all times. To enable the efficient design of the proposed predictive safety filter from system data, this thesis extends available robustification methods from MPC to support diverse system classes through different model assumptions. This part of the thesis specifically introduces the core concepts for closed-loop chance constraint satisfaction using simple linear system models with data-driven uncertainties and learning-based linear model estimates with unbounded process noise. Moreover, uncertain system models with significant nonlinear effects are efficiently supported through a prediction mechanism that exploits confident subsets of the state and input space. Further developments of these techniques, also outlined in this thesis, cover distributed systems and illustrate the predictive safety filter in a miniature racing application. Compared with existing safety frameworks based on control barrier function theory, predictive safety filters avoid the computationally difficult task of deriving a control barrier function and thereby provide favorable scalability properties toward large-scale and distributed systems.
Despite the seemingly different concepts of predictive safety filters and control barrier functions, this thesis establishes and formalizes the theoretical relations between the two approaches through a so-called ‘predictive control barrier function’, further enabling the recovery of infeasible nonlinear predictive control problems in an asymptotically stable fashion. While predictive safety filters offer a high degree of modularity in terms of safety and task-specific objectives, this separation can render a rigorous performance analysis difficult. To this end, Part B introduces specialized learning-based MPC controllers for accelerated learning towards a distinct goal. Even if the objective function is explicitly available, the design of an MPC controller requires an accurate prediction model, often in combination with a terminal constraint and objective function to compensate for short prediction horizons. Part B tackles the difficult design task of these components from three different angles. It first introduces a learning-based improvement of established and safe MPC controllers for asymptotic stabilization tasks through a stochastic tube-based MPC mechanism that supports probabilistic regression models. While this makes it possible to take advantage of available system data for accurate predictions, insufficient prior knowledge or a deficient initial database requires additional mechanisms to efficiently acquire new data. To automate this identification process, the contributions of Part B continue with the question of how a controller can efficiently explore the system and when to transition from exploration to exploitation of available information. The proposed solution to these questions is based on posterior sampling theory and results in a computationally efficient active learning MPC formulation, which provides finite-time performance guarantees.
The last contribution of this part addresses performance degradation of an MPC controller caused by short prediction horizons, which is present even in the case of perfectly known prediction models. To overcome this limitation, Part B develops a data-driven mechanism to iteratively improve the terminal cost and terminal set of an MPC problem by leveraging system trajectories. During training, the proposed method efficiently handles model uncertainties and constraint violations to support learning-based prediction models and poorly performing initial controllers. This is achieved through a soft-constrained MPC formulation supporting polytopic state constraints.
A. Didier, K. P. Wabersich, and M. N. Zeilinger: Adaptive Model Predictive Safety Certification for Learning-based Control. e-Print arXiv:2109.13033, 2021.
We propose an adaptive Model Predictive Safety Certification (MPSC) scheme for learning-based control of linear systems with bounded disturbances and uncertain parameters with known bounds. An MPSC is a modular framework, which can be used in combination with any learning-based controller to ensure state and input constraint satisfaction of a dynamical system by solving an online optimisation problem. By continuously connecting the current system state with a safe terminal set using a robust tube, safety can be ensured. Thereby, the main sources of conservative safety interventions are model uncertainties and short planning horizons. We develop an adaptive mechanism to improve the system model, which leverages set-membership estimation to guarantee recursively feasible and non-decreasing safety performance improvements. In order to accommodate short prediction horizons, iterative safe set enlargements using previously computed robust backup plans are proposed. Finally, we illustrate the increase of the safety performance through the parameter and safe set adaptation for numerical examples with up to 16 state dimensions.
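The set-membership update that drives the adaptive mechanism can be illustrated in one dimension: each measurement of a system with bounded noise restricts the unknown parameter to an interval, and intersecting these intervals shrinks the uncertainty set without ever excluding the true parameter. A minimal sketch under the assumption of a hypothetical scalar system x+ = theta*x + u + w with |w| <= w_max (the paper works with multivariable systems and polytopic parameter sets):

```python
# Set-membership estimation sketch for x+ = theta*x + u + w, |w| <= w_max:
# each transition constrains theta to an interval, and the parameter set is
# the intersection of all such intervals. (Scalar toy example; not the
# paper's general polytopic formulation.)

w_max = 0.1
theta_lo, theta_hi = 0.0, 2.0   # initial known bounds on theta

def update(theta_lo, theta_hi, x, u, x_next):
    if abs(x) < 1e-9:
        return theta_lo, theta_hi      # no information when x = 0
    # |x_next - theta*x - u| <= w_max  =>  theta in [(r - w_max)/x, (r + w_max)/x]
    r = x_next - u
    lo, hi = (r - w_max) / x, (r + w_max) / x
    if x < 0:
        lo, hi = hi, lo                # dividing by a negative flips the interval
    return max(theta_lo, lo), min(theta_hi, hi)

# Hypothetical data generated by theta = 0.8 with admissible noise:
theta_true = 0.8
data = [(1.0, 0.0, 0.85), (2.0, -0.5, 1.05), (-1.0, 0.2, -0.68)]
for x, u, x_next in data:
    theta_lo, theta_hi = update(theta_lo, theta_hi, x, u, x_next)
print(round(theta_lo, 3), round(theta_hi, 3))
```

The resulting interval always contains the true parameter, and its width never increases, which is the monotonicity property the safety adaptation relies on.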
S. Muntwiler, K. P. Wabersich, and M. N. Zeilinger: Learning-based Moving Horizon Estimation through Differentiable Convex Optimization Layers. e-Print arXiv:2109.03962, 2021.
For control, it is essential to obtain an accurate estimate of the current system state, based on uncertain sensor measurements and existing system knowledge. An optimization-based moving horizon estimation (MHE) approach uses a dynamical model of the system and further makes it possible to integrate physical constraints on system states and uncertainties to obtain a trajectory of state estimates. In this work, we address the problem of state estimation for constrained linear systems with parametric uncertainty. The proposed approach makes use of differentiable convex optimization layers to formulate an MHE state estimator for systems with uncertain parameters. This formulation allows us to obtain the gradient of a squared output error, based on sensor measurements and state estimates, with respect to the uncertain system parameters, and to update the belief over the parameters online using stochastic gradient descent (SGD). In a numerical example of estimating temperatures of a group of manufacturing machines, we show the performance of learning the unknown system parameters and the benefits of integrating physical state constraints in the MHE formulation.
Kim P. Wabersich and Melanie N. Zeilinger: Nonlinear learning-based model predictive control supporting state and input dependent model uncertainty estimates. International Journal of Robust and Nonlinear Control, DOI: 10.1002/rnc.5688, 2021.
While model predictive control (MPC) methods have proven their efficacy when applied to systems with safety specifications and physical limitations, their performance heavily relies on an accurate prediction model. As a consequence, a significant effort in the design of MPC controllers is dedicated to the modeling part and often requires advanced physical expertise. In order to facilitate the controller design, we present an MPC scheme supporting nonlinear learning-based prediction models, that is, data-driven models with probabilistic parameter uncertainties. A tube-based MPC formulation in combination with an additional implicit state and input constraint forces the closed-loop system to be operated in domains of sufficient model confidence, thereby ensuring asymptotic stability and constraint satisfaction at a prespecified level of probability. Furthermore, by relying on tube-based MPC concepts, the proposed learning-based MPC formulation offers a general framework for addressing different problem classes, such as economic MPC, while providing a general interface to probabilistic prediction models based, for example, on Bayesian regression or Gaussian processes. A design procedure is proposed for approximately linear systems and the efficiency of the method is illustrated using numerical examples.
Kim P. Wabersich, Raamadaas Krishnadas, and Melanie N. Zeilinger: A soft constrained MPC formulation enabling learning from trajectories with constraint violations. IEEE Control Systems Letters, DOI: 10.1109/LCSYS.2021.3087968, 2021.
In practical model predictive control (MPC) implementations, constraints on the states are typically softened to ensure feasibility despite unmodeled disturbances. In this work, we propose a soft constrained MPC formulation supporting polytopic terminal sets in half-space and vertex representation, which significantly increases the feasible set while maintaining asymptotic stability in case of constraint violations. The proposed formulation allows for leveraging system trajectories that violate state constraints to iteratively improve the MPC controller’s performance. To this end, we apply convex optimization techniques to obtain a data-driven terminal cost and set, which result in a quadratic MPC problem.
Kim P. Wabersich and Melanie N. Zeilinger: Predictive control barrier functions: Enhanced safety mechanisms for learning-based control. e-Print arXiv:2105.10241, 2021 | IEEE Transactions on Automatic Control, DOI: 10.1109/TAC.2022.3175628, Early Access Version, 2023.
While learning-based control techniques often outperform classical controller designs, safety requirements limit the acceptance of such methods in many applications. Recent developments address this issue through so-called predictive safety filters, which assess whether a proposed learning-based control input can lead to constraint violations and modify it if necessary to ensure safety for all future time steps. The theoretical guarantees of such predictive safety filters rely on the model assumptions, and minor deviations can lead to failure of the filter, putting the system at risk. This paper introduces an auxiliary soft-constrained predictive control problem that is always feasible at each time step and asymptotically stabilizes the feasible set of the original safety filter, thereby providing a recovery mechanism in safety-critical situations. This is achieved by a simple constraint tightening in combination with a terminal control barrier function. By extending discrete-time control barrier function theory, we establish that the proposed auxiliary problem provides a ‘predictive’ control barrier function. The resulting algorithm is demonstrated using numerical examples.
Andrea Carron, Kim P. Wabersich, and Melanie N. Zeilinger: Plug-and-Play Distributed Safety Verification for Linear Control Systems with Bounded Uncertainties. IEEE Transactions on Control of Network Systems, DOI: 10.1109/TCNS.2021.3074218, 2021.
Control, and in particular learning-based control, is challenging in large-scale and safety-critical system networks due to interactions between subsystems, which can potentially be time-varying. This paper presents a plug-and-play safety framework that can be applied together with high-performance control algorithms, e.g. those emerging from learning techniques. The framework ensures constraint satisfaction for a network of uncertain linear control systems during control and learning, as well as during topology changes in the form of agents joining or leaving the network with prior plug-in or plug-out requests. The presented approach is based on safe sets and tube-based tracking controllers, which can be designed in a distributed fashion, i.e., using only local information. The presented plug-and-play procedure requires only the update of the safe sets of the agents directly involved in the plug-and-play operation, making it computationally efficient compared to completely redesigning all safe sets. The capabilities of the safety framework are illustrated using numerical examples.
Ben Tearle, Kim P. Wabersich, Andrea Carron, and Melanie N. Zeilinger: A predictive safety filter for learning-based racing control. e-Print arXiv:2102.11907, 2021 | IEEE Robotics and Automation Letters, DOI: 10.1109/LRA.2021.3097073, 2021.
The growing need for high-performance controllers in safety-critical applications like autonomous driving has been motivating the development of formal safety verification techniques. In this paper, we design and implement a predictive safety filter that is able to maintain vehicle safety with respect to track boundaries when paired with any potentially unsafe control signal, such as those found in learning-based methods. A model predictive control (MPC) framework is used to create a minimally invasive algorithm that certifies whether a desired control input is safe and can be applied to the vehicle, or provides an alternate input to keep the vehicle in bounds. To this end, we provide a principled procedure to compute a safe and invariant set for nonlinear dynamic bicycle models using efficient convex approximation techniques. To fully support an aggressive racing performance without conservative safety interventions, the safe set is extended in real time through predictive control backup trajectories. Applications to assisted manual driving and deep imitation learning on a miniature remote-controlled vehicle demonstrate the safety filter’s ability to ensure vehicle safety during aggressive maneuvers.
K. P. Wabersich and M. N. Zeilinger: Performance and safety of Bayesian model predictive control: Scalable model-based RL with guarantees. e-Print arXiv:2006.03483, 2020
Despite the success of reinforcement learning (RL) in various research fields, relatively few algorithms have been applied to industrial control applications. The reason for this unexplored potential is partly related to the significant required tuning effort, the large number of required learning episodes, i.e. experiments, and the limited availability of RL methods that can address high-dimensional and safety-critical dynamical systems with continuous state and action spaces. By building on model predictive control (MPC) concepts, we propose a cautious model-based reinforcement learning algorithm to mitigate these limitations. While the underlying policy of the approach can be efficiently implemented in the form of a standard MPC controller, data-efficient learning is achieved through posterior sampling techniques. We provide a rigorous performance analysis of the resulting ‘Bayesian MPC’ algorithm by establishing Lipschitz continuity of the corresponding future reward function and by bounding the expected number of unsafe learning episodes using an exact penalty soft-constrained MPC formulation. The efficiency and scalability of the method are illustrated using a 100-dimensional server cooling example and a nonlinear 10-dimensional drone example by comparing the performance against nominal posterior MPC, which is commonly used for data-driven control of constrained dynamical systems.
K. P. Wabersich and M. N. Zeilinger: Bayesian model predictive control: Efficient model exploration and regret bounds using posterior sampling. e-Print arXiv:2005.11744, 2020 | 2nd Annual Conference on Learning for Dynamics and Control (2020).
Tight performance specifications in combination with operational constraints make model predictive control (MPC) the method of choice in various industries. As the performance of an MPC controller depends on a sufficiently accurate objective and prediction model of the process, a significant effort in the MPC design procedure is dedicated to modeling and identification. Driven by the increasing amount of available system data and advances in the field of machine learning, data-driven MPC techniques have been developed to facilitate the MPC controller design. While these methods are able to leverage available data, they typically do not provide principled mechanisms to automatically trade off exploitation of available data and exploration to improve and update the objective and prediction model. To this end, we present a learning-based MPC formulation using posterior sampling techniques, which provides finite-time regret bounds on the learning performance while being simple to implement using off-the-shelf MPC software and algorithms. The performance analysis of the method is based on posterior sampling theory and its practical efficiency is illustrated using a numerical example of a highly nonlinear dynamical car-trailer system.
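The posterior-sampling idea can be illustrated on a hypothetical scalar system x+ = theta*x + u + w: sample a model from the current posterior, act optimally for the sampled model, then update the posterior with the observed transition. The sketch below replaces the paper's MPC policy with a simple certainty-equivalent law and is only meant to show the Thompson-sampling loop, not the actual algorithm:

```python
import math
import random

# Thompson-sampling loop for x+ = theta*x + u + w, w ~ N(0, noise_std^2),
# with a conjugate Gaussian posterior over the unknown theta.
# (Toy illustration; the paper solves an MPC problem with the sampled model
# instead of the deadbeat law u = -theta_sample * x used here.)
random.seed(0)
theta_true, noise_std = 0.9, 0.1
mean, var = 0.0, 1.0            # Gaussian prior over theta

def posterior_update(mean, var, x, u, x_next):
    # Conjugate Gaussian update for the regression y = theta*x + w, y = x_next - u.
    y = x_next - u
    new_var = 1.0 / (1.0 / var + x * x / noise_std**2)
    new_mean = new_var * (mean / var + x * y / noise_std**2)
    return new_mean, new_var

x = 1.0
for episode in range(50):
    theta_sample = random.gauss(mean, math.sqrt(var))  # sample a model
    u = -theta_sample * x                              # act optimally for the sample
    x_next = theta_true * x + u + random.gauss(0.0, noise_std)
    mean, var = posterior_update(mean, var, x, u, x_next)
    x = x_next

print(round(mean, 3), round(var, 5))
```

Because early samples are drawn from a wide posterior, the controller automatically excites the system; as the posterior concentrates, the sampled models (and hence the policy) converge, which is the exploration/exploitation trade-off the regret analysis quantifies.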
S. Muntwiler*, K. P. Wabersich*, L. Hewing, and M. N. Zeilinger: Data-Driven Distributed Stochastic Model Predictive Control with Closed-Loop Chance Constraint Satisfaction. e-Print arXiv:2004.02907, 2020 | European Control Conference (ECC), Rotterdam, The Netherlands, 2021.
Distributed model predictive control methods for uncertain systems often suffer from considerable conservatism and can tolerate only small uncertainties, due to the use of robust formulations that are amenable to distributed design and computational methods. In this work, we propose a distributed stochastic model predictive control (DSMPC) scheme for dynamically coupled linear discrete-time systems subject to unbounded additive disturbances that are potentially correlated in time. An indirect feedback formulation ensures recursive feasibility of the MPC problem, and a data-driven, distributed and optimization-free constraint tightening approach allows for exact satisfaction of chance constraints during closed-loop control, addressing typical sources of conservatism. The computational complexity of the proposed controller is similar to nominal distributed MPC. The approach is finally demonstrated in simulations for the temperature control of a large-scale data center subject to randomly varying computational loads.
L. Hewing, K. P. Wabersich, M. Menner, and M. N. Zeilinger: Learning-Based Model Predictive Control: Toward Safe Learning in Control. Annual Review of Control, Robotics, and Autonomous Systems 269-296, 2020.
Recent successes in the field of machine learning, as well as the availability of increased sensing and computational capabilities in modern control systems, have led to a growing interest in learning and data-driven control techniques. Model predictive control (MPC), as the prime methodology for constrained control, offers a significant opportunity to exploit the abundance of data in a reliable manner, particularly while taking safety constraints into account. This review aims at summarizing and categorizing previous research on learning-based MPC, i.e., the integration or combination of MPC with learning methods, for which we consider three main categories. Most of the research addresses learning for automatic improvement of the prediction model from recorded data. There is, however, also an increasing interest in techniques to infer the parameterization of the MPC controller, i.e., the cost and constraints, that lead to the best closed-loop performance. Finally, we discuss concepts that leverage MPC to augment learning-based controllers with constraint satisfaction properties.
S. Muntwiler, K. P. Wabersich, A. Carron, and M. N. Zeilinger: Distributed model predictive safety certification for learning-based control. e-Print arXiv:1911.01832, 2019 | IFAC World Congress 2020.
While distributed algorithms provide advantages for the control of complex large-scale systems by requiring a lower local computational load and less local memory, it is a challenging task to design high-performance distributed control policies. Learning-based control algorithms offer promising opportunities to address this challenge, but generally cannot guarantee safety in terms of state and input constraint satisfaction. A recently proposed safety framework for centralized linear systems ensures safety by matching the learning-based input online with the initial input of a model predictive control law capable of driving the system to a terminal set known to be safe. We extend this idea to derive a distributed model predictive safety certification (DMPSC) scheme, which is able to ensure state and input constraint satisfaction when applying any learning-based control algorithm to uncertain distributed linear systems with dynamic couplings. The scheme is based on a distributed tube-based model predictive control (MPC) concept, where subsystems negotiate local tube sizes among neighbors in order to mitigate the restrictiveness of the safety approach. In addition, we present a technique for generating a structured ellipsoidal robust positive invariant tube. In numerical simulations, we show that the safety framework ensures constraint satisfaction for an initially unsafe control policy and improves overall control performance compared to robust distributed MPC.
K. P. Wabersich, L. Hewing, A. Carron, and M. N. Zeilinger: Probabilistic model predictive safety certification for learning-based control. e-Print arXiv:1906.10417, 2019 | IEEE Transactions on Automatic Control, DOI: 10.1109/TAC.2021.3049335, Early Access Version.
Reinforcement learning (RL) methods have demonstrated their efficiency in simulation environments. However, many applications for which RL offers great potential, such as autonomous driving, are also safety critical and require a certified closed-loop behavior in order to meet safety specifications in the presence of physical constraints. This paper introduces a concept, called probabilistic model predictive safety certification (PMPSC), which can be combined with any RL algorithm and provides provable safety certificates in terms of state and input chance constraints for potentially large-scale systems. The certificate is realized through a stochastic tube that safely connects the current system state with a terminal set of states that is known to be safe. A novel formulation in terms of a convex receding horizon problem allows a recursively feasible real-time computation of such probabilistic tubes, despite the presence of possibly unbounded disturbances. A design procedure for PMPSC relying on Bayesian inference and recent advances in probabilistic set invariance is presented. Using a numerical car simulation, the method and its design procedure are illustrated by enhancing a simple RL algorithm with safety certificates.
K. P. Wabersich and M. N. Zeilinger: A predictive safety filter for learning-based control of constrained nonlinear dynamical systems. e-Print arXiv:1812.05506, 2018 | Automatica, Volume 129, July 2021.
The transfer of reinforcement learning (RL) techniques into real-world applications is challenged by safety requirements in the presence of physical limitations. Most RL methods, in particular the most popular algorithms, do not support explicit consideration of state and input constraints. In this paper, we address this problem for nonlinear systems with continuous state and input spaces by introducing a predictive safety filter, which is able to turn a constrained dynamical system into an unconstrained safe system to which any RL algorithm can be applied ‘out-of-the-box’. The predictive safety filter receives the proposed control input and decides, based on the current system state, if it can be safely applied to the real system, or if it has to be modified. Safety is thereby established by a continuously updated safety policy, which is based on a model predictive control formulation using a data-driven system model and considering state and input dependent uncertainties.
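The filter logic (accept the proposed input whenever a safe backup plan still exists afterwards, otherwise fall back to the backup law) can be sketched for a hypothetical scalar integrator; the paper instead solves an MPC problem with a data-driven model and state- and input-dependent uncertainties:

```python
# Predictive safety filter sketch for the toy system x+ = x + u with
# constraints |x| <= 1 and |u| <= 0.5. (Hypothetical example; the paper's
# filter solves a full MPC problem rather than this rollout check.)

X_MAX, U_MAX, HORIZON = 1.0, 0.5, 5

def backup(x):
    # Safe backup law: steer toward the origin as fast as the input bound allows.
    return max(-U_MAX, min(U_MAX, -x))

def is_safe(x, u):
    # Certify u by checking that applying it, followed by the backup law,
    # keeps the state within constraints over the horizon.
    x = x + u
    for _ in range(HORIZON):
        if abs(x) > X_MAX:
            return False
        x = x + backup(x)
    return abs(x) <= X_MAX

def safety_filter(x, u_learn):
    u_learn = max(-U_MAX, min(U_MAX, u_learn))
    return u_learn if is_safe(x, u_learn) else backup(x)

print(safety_filter(0.0, 0.3))   # harmless learning-based input passes unchanged
print(safety_filter(0.9, 0.5))   # unsafe input is overridden by the backup law
```

From the perspective of the learning algorithm, `safety_filter` simply wraps the environment: any input can be proposed, and the closed loop stays within the constraint set.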
L. Hewing, K. P. Wabersich and M. N. Zeilinger: Recursively feasible stochastic model predictive control using indirect feedback. e-Print arXiv:1812.05506, 2018 | Automatica, Volume 119, September 2020.
We present a stochastic model predictive control (MPC) method for linear discrete-time systems subject to possibly unbounded and correlated additive stochastic disturbance sequences. Chance constraints are treated in analogy to robust MPC using the concept of probabilistic reachable sets for constraint tightening. We introduce an initialization of each MPC iteration which is always recursively feasible and guarantees chance constraint satisfaction for the closed-loop system, which is typically challenging for systems under unbounded disturbances. Under an i.i.d. zero-mean assumption, we provide an average asymptotic performance bound. A building control example illustrates the approach in an application with time-varying, correlated disturbances.
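The constraint-tightening idea behind probabilistic reachable sets can be sketched in one dimension: the error between true and nominal state accumulates disturbance, and the nominal state constraint is tightened by a quantile of the error spread at each prediction step. The following toy computation assumes an i.i.d. Gaussian disturbance and hypothetical numbers; the paper additionally treats correlated disturbances:

```python
import math

# Error dynamics e_{k+1} = phi*e_k + w_k, w_k ~ N(0, sigma^2) i.i.d., e_0 = 0.
# The chance constraint Pr(|x_k| <= x_max) >= 0.9 is enforced on the nominal
# state z_k via the tightened constraint |z_k| <= x_max - c_k, where c_k is a
# quantile of the error distribution at step k.

phi, sigma, x_max = 0.7, 0.1, 1.0
z90 = 1.6449   # standard normal quantile: Pr(|w| <= z90*std) is about 0.90

def error_std(k):
    # Var(e_k) = sigma^2 * sum_{i=0}^{k-1} phi^(2i)
    return math.sqrt(sigma**2 * sum(phi**(2 * i) for i in range(k)))

tightening = [z90 * error_std(k) for k in range(6)]
nominal_bounds = [x_max - c for c in tightening]
print([round(b, 3) for b in nominal_bounds])
```

Because the error variance converges geometrically for |phi| < 1, the tightening saturates after a few steps instead of growing without bound, which keeps the tightened problem feasible over long horizons.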
K. P. Wabersich and M. N. Zeilinger: Linear model predictive safety certification for learning-based control. 57th IEEE Conference on Decision and Control (CDC), Miami Beach, FL, 2018.
While it has been repeatedly shown that learning-based controllers can provide superior performance, they often lack safety guarantees. This paper aims at addressing this problem by introducing a model predictive safety certification (MPSC) scheme for linear systems with additive disturbances. The scheme verifies safety of a proposed learning-based input and modifies it as little as necessary in order to keep the system within a given set of constraints. Safety is thereby related to the existence of a model predictive controller (MPC) providing a feasible trajectory towards a safe target set. A robust MPC formulation accounts for the fact that the model is generally uncertain in the context of learning, which allows proving constraint satisfaction at all times under the proposed MPSC strategy. The MPSC scheme can be used to expand any potentially conservative set of safe states for learning, and we provide an iterative technique for enlarging the safe set. Finally, a practical data-based design procedure for MPSC is proposed using scenario optimization.
K. P. Wabersich and M. N. Zeilinger: Scalable synthesis of safety certificates from data with application to learning-based control. European Control Conference (ECC), Limassol, Cyprus, 2018.
The control of complex systems faces a trade-off between high performance and safety guarantees, which in particular restricts the application of learning-based methods to safety-critical systems. A recently proposed framework to address this issue is the use of a safety controller, which guarantees to keep the system within a safe region of the state space. This paper introduces efficient techniques for the synthesis of a safe set and control law, which offer improved scalability properties by relying on approximations based on convex optimization problems. The first proposed method requires only an approximate linear system model and Lipschitz continuity of the unknown nonlinear dynamics. The second method extends the results by showing how a Gaussian process prior on the unknown system dynamics can be used in order to reduce conservatism of the resulting safe set. We demonstrate the results with numerical examples, including an autonomous convoy of vehicles.
L. Hewing*, A. Carron*, K. P. Wabersich and M. N. Zeilinger: On a correspondence between probabilistic and robust invariant sets for linear systems. European Control Conference (ECC), Limassol, Cyprus, 2018.
Dynamical systems with stochastic uncertainties are ubiquitous in the field of control, with linear systems under additive Gaussian disturbances a most prominent example. The concept of probabilistic invariance was introduced to extend the widely applied concept of invariance to this class of problems. Computational methods for their synthesis, however, are limited. In this paper we present a relationship between probabilistic and robust invariant sets for linear systems, which enables the use of well-studied robust design methods. Conditions are shown, under which a robust invariant set, designed with a confidence region of the disturbance, results in a probabilistic invariant set. We furthermore show that this condition holds for common box and ellipsoidal confidence regions, generalizing and improving existing results for probabilistic invariant set computation. We finally exemplify the synthesis for an ellipsoidal probabilistic invariant set. Two numerical examples demonstrate the approach and the advantages to be gained from exploiting robust computations for probabilistic invariant sets.
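The correspondence can be made concrete in the scalar case: design a robust invariant set for a confidence region of the Gaussian disturbance, and one-step probabilistic invariance follows with the same confidence level. A minimal sketch with hypothetical numbers:

```python
import math

# Scalar example x+ = a*x + w, w ~ N(0, sigma^2): a robust invariant set
# designed for the confidence region |w| <= c with Pr(|w| <= c) >= p is a
# probabilistic invariant set with probability p.

a, sigma, p = 0.8, 0.1, 0.95

def twosided_quantile(p, sigma):
    # Smallest c with Pr(|w| <= c) >= p for w ~ N(0, sigma^2), via bisection
    # on the CDF expressed through math.erf.
    lo, hi = 0.0, 10.0 * sigma
    for _ in range(80):
        mid = 0.5 * (lo + hi)
        if math.erf(mid / (sigma * math.sqrt(2.0))) >= p:
            hi = mid
        else:
            lo = mid
    return hi

c = twosided_quantile(p, sigma)
x_bound = c / (1.0 - abs(a))   # robust invariant set {|x| <= x_bound} for |w| <= c

# One-step probabilistic invariance: for any |x| <= x_bound we have
# |a*x| <= x_bound - c, so Pr(|a*x + w| <= x_bound) >= Pr(|w| <= c) = p.
worst_next_margin = x_bound - abs(a) * x_bound   # equals c up to rounding
print(round(c, 4), round(x_bound, 4))
```

The point of the correspondence is that `x_bound` is obtained purely by robust design applied to the truncated disturbance, so well-studied robust synthesis tools carry over to the probabilistic setting.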
K. P. Wabersich, F. A. Bayer, M. A. Müller and F. Allgöwer: Economic model predictive control for robust periodic operation with guaranteed closed-loop performance. European Control Conference (ECC), Limassol, Cyprus, 2018.
We present an economic model predictive control scheme for general nonlinear systems based on a terminal cost and a terminal constraint set. We study in particular systems which are optimally operated at some periodic orbit. Besides recursive feasibility of the control scheme, we provide an asymptotic average performance bound which is no worse than the performance value of the system’s optimal periodic orbit. By means of a certain (strict) dissipativity assumption, asymptotic convergence to the optimal periodic orbit is shown. Using a tube-based approach, we extend our method to become applicable in the presence of unknown but bounded disturbances. In addition, we propose the concept of robust optimal periodic operation and show how it can essentially improve the closed-loop performance using a simple supply chain network example.
2015 – 2017
K. P. Wabersich and M. N. Zeilinger: Model predictive safety certificates from data for learning-based control. Second Max-Planck ETH Workshop on Learning Control, 2017.
K. P. Wabersich: Robust economic model predictive control for periodic operation. Master thesis, 2017.
We consider economic control of systems that are optimally operated at some periodic orbit; in particular, our motivation is to economically control supply chain networks. First, we provide a detailed analysis of optimal periodic operation for linear periodic time-varying systems with convex cost functionals. For piecewise linear cost functionals, we derive an explicit linear programming formulation to verify optimal closed-loop operation at a given periodic orbit, as well as suboptimal operation for any other feasible system trajectory off that orbit. Furthermore, we present a novel economic model predictive control scheme for general nonlinear systems based on a terminal cost and a terminal constraint set. Besides recursive feasibility and asymptotic stability of the control scheme, we rigorously prove an asymptotic average performance bound that is no worse than the performance value of the system's optimal periodic orbit. Using a tube-based approach, we extend our method to the case of unknown but bounded disturbances. In addition, we propose the concept of robust optimal periodic operation, which turns out to substantially improve the closed-loop performance for the supply chain example considered in the presence of disturbances. Throughout this work, we illustrate each new concept using a simple supply chain model. Lastly, we perform an in-depth experimental analysis of a more complex supply chain network consisting of a supplier, a transportation network, and three retail stores, comparing our nominal and robust control schemes with an existing method that is free of terminal costs and terminal sets.
J. Schlechtriemen*, K. P. Wabersich* and K. Kuhnert: Wiggling through complex traffic: Planning trajectories constrained by predictions. IEEE Intelligent Vehicles Symposium, 2016.
The vision of autonomous driving is gradually becoming reality. Still, executing the driving task safely and comfortably in all possible environments, for instance highway, city, or rural road scenarios, remains challenging. In this paper we present a novel approach to planning trajectories for autonomous vehicles. We focus on the problem of planning a trajectory given a specific behavior option, e.g. merging into a specific gap at a highway entrance or a roundabout. To this end, we explicitly take arbitrary road geometry and prediction information about other traffic participants into account. We extend prior contributions in this field by providing a flexible problem description and a trajectory planner that does not specialize to distinct classes of maneuvers beforehand. Using a carefully chosen representation of the dynamic free space, the method can consider multiple lanes, including the predicted dynamics of other traffic participants, while remaining real-time capable. The combination of these properties in one general planning method constitutes the novelty of the proposed approach. We demonstrate the capability of our algorithm to plan safe trajectories in real time, both in simulation and in real traffic.
K. P. Wabersich and M. Toussaint: Advancing Bayesian Optimization: The Mixed-Global-Local (MGL) Kernel and Length-Scale Cool Down. Workshop on Bayesian Optimization at NIPS 2016.
Bayesian Optimization (BO) has become a core method for solving expensive black-box optimization problems. While much research has focused on the choice of the acquisition function, we focus on online length-scale adaptation and the choice of kernel function. Instead of choosing hyperparameters by maximizing the likelihood on past data, we propose to use the acquisition function to decide on hyperparameter adaptation more robustly and with a view toward future optimization progress. Further, we propose a particular kernel function that includes non-stationarity and local anisotropy, and thereby implicitly integrates the efficiency of local convex optimization with global Bayesian optimization. Comparisons to state-of-the-art BO methods underline the efficiency of these mechanisms on global optimization benchmarks.
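The acquisition-driven loop this work builds on can be sketched as follows. This is a generic GP-UCB loop with a plain stationary RBF kernel and a fixed length scale, not the proposed MGL kernel or adaptation scheme; the objective and all constants are illustrative assumptions.

```python
import numpy as np

def rbf(x1, x2, ls=0.2):
    """Stationary squared-exponential kernel with unit signal variance."""
    return np.exp(-0.5 * ((x1[:, None] - x2[None, :]) / ls) ** 2)

def gp_posterior(X, y, Xq, jitter=1e-6):
    """GP posterior mean and variance at query points Xq."""
    L = np.linalg.cholesky(rbf(X, X) + jitter * np.eye(len(X)))
    Kq = rbf(X, Xq)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    v = np.linalg.solve(L, Kq)
    return Kq.T @ alpha, np.maximum(1.0 - np.sum(v**2, axis=0), 1e-12)

f = lambda x: -(x - 0.3) ** 2                          # toy objective to maximize
grid = np.linspace(0.0, 1.0, 201)
X = np.array([0.0, 1.0])                               # initial design
y = f(X)
for _ in range(15):
    mu, var = gp_posterior(X, y, grid)
    x_next = grid[np.argmax(mu + 2.0 * np.sqrt(var))]  # GP-UCB acquisition
    X, y = np.append(X, x_next), np.append(y, f(x_next))
best = X[np.argmax(y)]
print(best)
```

The paper's contribution sits precisely where this sketch is crude: the fixed `ls` would instead be cooled down over iterations, and the stationary `rbf` replaced by the Mixed-Global-Local kernel.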
K. P. Wabersich and M. Toussaint: Automatic Testing and MiniMax Optimization of System Parameters for Best Worst-Case Performance. IEEE/RSJ International Conference on Intelligent Robots and Systems, 2015.
Robotic systems typically have numerous parameters, e.g. the choice of planning algorithm, real-valued parameters of motion and vision modules, and control parameters. We consider the problem of optimizing these parameters for the best worst-case performance over a range of environments. To this end, we first propose to evaluate system parameters by adversarially optimizing over environment parameters to find particularly hard environments. This is then nested in a game-theoretic minimax optimization setting, where an outer loop aims to find the best worst-case system parameters. For both optimization levels we use Bayesian global optimization (GP-UCB), which provides the necessary confidence bounds to handle the stochasticity of the performance. We compare our method (Nested Minimax) with an existing relaxation method, which we adapted to our setting. By construction, our approach provides more robustness to performance stochasticity. We demonstrate the method for planning-algorithm selection on a pick'n'place application and for control-parameter optimization on a triple inverted pendulum for robustness to adversarial perturbations.
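The nested structure can be caricatured with a deterministic grid search (the paper uses GP-UCB surrogates at both levels to cope with expensive, noisy evaluations; the quadratic performance function here is purely illustrative):

```python
import numpy as np

# Hypothetical performance J(theta, env); higher is better for the system.
thetas = np.linspace(0.0, 1.0, 101)  # candidate system parameters
envs = np.linspace(0.0, 1.0, 101)    # adversarial environment parameters
J = -(thetas[:, None] - 0.5 * envs[None, :]) ** 2

worst = J.min(axis=1)                  # inner loop: hardest environment per theta
best_theta = thetas[np.argmax(worst)]  # outer loop: best worst-case parameter
print(best_theta, worst.max())         # -> 0.25 -0.0625
```

The worst-case optimum (theta = 0.25) differs from the optimum against any single fixed environment, which is exactly the gap between nominal tuning and the minimax tuning the paper targets.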
Supervised Student Projects
- Optimism in the face of uncertainty in Model Predictive Control, Edoardo Ghigone, Semester Project, 2021
- A soft constrained model predictive safety filter, Ziyad Sheebaelhand, Semester Project, 2021
- Model Predictive Safety Certification for Autonomous Racing, Benjamin Tearle, Master Thesis, 2020
- Asymptotic Stability of Soft Constrained MPC with Polytopic Terminal Sets, Raamadaas Krishnadas, Semester Project, 2020
- Learning-Based Control for Constrained Systems using Thompson Sampling and Scenario Optimization, Aneri Muni, Semester Project, 2020
- Efficient design of learning-based economic model predictive control: Indirect feedback and lazy tube computation, Ueli Wechsler, Master Thesis, 2019
- A model predictive safety filter with online constraint tightening, Max Hürlimann, Master Thesis, 2019
- A fast and scalable Bayesian sum of squares safety filter, Alexandre Didier, Semester Project, 2019
- Distributed model predictive safety certification for learning-based control, Simon Muntwiler, Master Thesis, 2019
- Safety certificates from large data sets using sum of squares with application to learning-based control, Alexandre Didier, Bachelor Thesis, 2018
- Design and implementation of a library for safe reinforcement learning, Max Hürlimann, Semester Project, 2018
- Model predictive safe sets, Twan van der Sijs, Semester Project, 2018
- On relations between safe learning and robust control, Alexandre Didier, Studies on Mechatronics, 2018