Data-driven learning of feedback policies for robust model predictive control: An approximation-theoretic view

December 15 @ 12:00 PM - 1:00 PM

Model Predictive Control (MPC) is a widely used optimization-based framework for feedback control synthesis, with mature theory and practice in the linear setting. Yet computational tractability remains a key bottleneck, particularly for robust nonlinear min-max MPC, because solving a (robust) optimization problem at every time step is expensive and often intractable in practice. Explicit or approximate MPC circumvents this by replacing the online optimization with a function evaluation, but learning accurate and robust approximate feedback policies is challenging. This talk presents new computationally tractable, data-driven, approximation-theoretic methods for robust (min-max) MPC in low- to moderate-dimensional nonlinear systems. The approach leverages tools from approximation theory and modern deep learning theory to learn feedback policies with pre-assigned guarantees on the uniform learning error. In practice, the technique achieves a roughly 20,000-fold speed-up over standard MPC techniques.
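The core idea of explicit/approximate MPC, replacing a per-step online optimization with an offline-learned function, can be sketched on a toy scalar problem. The system parameters, the one-step horizon, and the polynomial surrogate below are all illustrative assumptions, not the speaker's actual method:

```python
# Toy sketch of approximate explicit MPC for the scalar system x+ = a*x + b*u
# with a quadratic one-step cost and an input bound. Offline, we sample the
# optimal policy on a grid of states and fit a cheap surrogate; online, the
# optimization is replaced by a single function evaluation.
import numpy as np

a, b, r, u_max = 1.2, 1.0, 0.1, 1.0  # hypothetical plant/cost parameters

def mpc_policy(x):
    """'Online' solve: minimize (a*x + b*u)**2 + r*u**2 subject to |u| <= u_max.
    For this scalar problem the minimizer is the clipped unconstrained optimum."""
    u_unc = -a * b * x / (b**2 + r)
    return np.clip(u_unc, -u_max, u_max)

# --- Offline phase: sample the policy and fit a polynomial surrogate --------
x_train = np.linspace(-2.0, 2.0, 401)
u_train = mpc_policy(x_train)
coeffs = np.polyfit(x_train, u_train, deg=7)  # least-squares polynomial fit

def approx_policy(x):
    """'Online' evaluation: one polynomial evaluation, no optimization."""
    return np.polyval(coeffs, x)

# --- Check the uniform approximation error over the training domain ---------
x_test = np.linspace(-2.0, 2.0, 2001)
err = np.max(np.abs(approx_policy(x_test) - mpc_policy(x_test)))
print(f"max policy approximation error on [-2, 2]: {err:.4f}")
```

In a realistic setting the surrogate would be a neural network rather than a polynomial, and the point of the talk's approximation-theoretic machinery is precisely to make the uniform error `err` satisfy a pre-assigned bound by construction, rather than checking it empirically as done here.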


Speaker: Siddhartha Ganguly


Biography:


Siddhartha Ganguly is currently a postdoctoral researcher in the Department of Applied Mathematics and Physics at Kyoto University, Japan, and will soon join the School of Aerospace Engineering at the Georgia Institute of Technology, USA, as a postdoctoral researcher. He completed his Ph.D. at the Centre for Systems and Control, IIT Bombay. His current research interests lie in optimal transport and machine learning with applications to control theory, as well as optimal control and robust optimization for mechanical and aerospace systems.
