Optimization in dynamical systems
In recent years, there has been a surge of exciting research activity at the interface of optimization (in particular polynomial, semidefinite, and sum of squares optimization) and the theory of dynamical systems. In this talk, we focus on two problem classes at this interface.

In part (I), after a brief review of nonnegative polynomials and Lyapunov functions, we present hardness results and semidefinite programming (SDP) based relaxations for the problem of testing asymptotic stability of (i) polynomial dynamics in continuous time, and (ii) switched linear dynamics in discrete time. The latter problem is equivalent to computing the joint spectral radius of a finite set of matrices. Building on connections to standard concepts in automata theory, we give a meta-theorem that characterizes all SDPs that upper bound the joint spectral radius, as well as approximation guarantees on the bounds returned by many of these SDP families.

In part (II), we introduce a new class of optimization problems whose constraints are imposed by the trajectories of a dynamical system. As a concrete example, we consider the problem of optimizing a linear function over the set of points in a polyhedron whose trajectories never leave it under the action of a linear, or switched linear, dynamical system. We show that under some conditions this problem can be solved in polynomial time by linear programming, and under others it can be approximated in polynomial time by semidefinite programming.

The talk is meant to be self-contained. Joint work (in different subsets) with Oktay Gunluk, Raphael Jungers, Pablo Parrilo, and Mardavij Roozbehani.
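To make the joint spectral radius concrete, here is a toy brute-force illustration (not one of the SDP relaxations discussed in the talk): for any product A_w of length k, rho(A_w)^(1/k) lower bounds the joint spectral radius, and max over length-k products of ||A_w||^(1/k) upper bounds it for any submultiplicative norm. The matrix pair below is a classical example whose joint spectral radius is known to equal the golden ratio; the pair and the word length are illustrative choices, not taken from the talk.

```python
from itertools import product

# Classical pair of 2x2 matrices whose joint spectral radius (JSR)
# equals the golden ratio (1 + sqrt(5)) / 2 (illustrative example).
A1 = ((1.0, 1.0), (0.0, 1.0))
A2 = ((1.0, 0.0), (1.0, 1.0))

def matmul(X, Y):
    """2x2 matrix product."""
    return tuple(
        tuple(sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2))
        for i in range(2)
    )

def spectral_radius(M):
    """Largest eigenvalue modulus of a 2x2 matrix."""
    tr = M[0][0] + M[1][1]
    det = M[0][0] * M[1][1] - M[0][1] * M[1][0]
    disc = tr * tr - 4.0 * det
    if disc >= 0.0:  # real eigenvalues
        r = disc ** 0.5
        return max(abs((tr + r) / 2.0), abs((tr - r) / 2.0))
    return det ** 0.5  # complex conjugate pair: |lambda|^2 = det

def inf_norm(M):
    """Induced infinity norm: maximum absolute row sum."""
    return max(abs(M[0][0]) + abs(M[0][1]), abs(M[1][0]) + abs(M[1][1]))

def jsr_bounds(mats, max_len):
    """Brute-force bounds on the JSR from all products up to max_len:
    lower = max_w rho(A_w)^(1/|w|), upper = min_k max_{|w|=k} ||A_w||^(1/k)."""
    lower, upper = 0.0, float("inf")
    for k in range(1, max_len + 1):
        worst_norm = 0.0
        for word in product(mats, repeat=k):
            P = word[0]
            for M in word[1:]:
                P = matmul(P, M)
            lower = max(lower, spectral_radius(P) ** (1.0 / k))
            worst_norm = max(worst_norm, inf_norm(P))
        upper = min(upper, worst_norm ** (1.0 / k))
    return lower, upper

lb, ub = jsr_bounds([A1, A2], 8)
print(lb, ub)  # brackets the golden ratius, roughly 1.618
```

The lower bound here is already tight (the product A1*A2 achieves it), while the norm-based upper bound converges only slowly in the word length; the SDP families discussed in the talk give much stronger upper bounds.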
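For part (II), a minimal sketch of why linear programming can enter: for a stable linear system, the set of points in a polyhedron whose trajectories never leave it can be cut out by finitely many of the constraints "the k-th iterate is still in the polyhedron," after which optimizing a linear function over it is an ordinary LP. The system, the box, and the tiny vertex-enumeration solver below are all illustrative choices, not the talk's actual algorithm.

```python
from itertools import combinations

# Discrete-time linear system x(t+1) = A x(t): a 0.9-scaled rotation
# (contractive in the Euclidean norm; illustrative choice).
A = ((0.72, 0.54), (-0.54, 0.72))

# Polyhedron P: the box |x1| <= 1, |x2| <= 1, stored as rows g with g.x <= 1.
box = [(1.0, 0.0), (-1.0, 0.0), (0.0, 1.0), (0.0, -1.0)]

def apply(M, x):
    return (M[0][0] * x[0] + M[0][1] * x[1], M[1][0] * x[0] + M[1][1] * x[1])

def matmul(X, Y):
    return tuple(
        tuple(sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2))
        for i in range(2)
    )

# A trajectory starting at x stays in the box iff g . (A^k x) <= 1 for all
# k >= 0.  Since ||A|| = 0.9 in the Euclidean norm, every x in the box has
# ||A^k x|| <= 0.9^k * sqrt(2) < 1 for k >= 4, so those constraints are
# automatic: the constraints for k = 0..3 describe the invariant set exactly.
constraints = []
Ak = ((1.0, 0.0), (0.0, 1.0))  # A^0
for k in range(4):
    for g in box:
        # g . (A^k x) <= 1 rewritten as (g A^k) . x <= 1
        constraints.append((g[0] * Ak[0][0] + g[1] * Ak[1][0],
                            g[0] * Ak[0][1] + g[1] * Ak[1][1]))
    Ak = matmul(A, Ak)

def maximize(c, halfspaces, tol=1e-9):
    """Tiny 2D LP by vertex enumeration: over a bounded feasible polytope,
    the optimum is attained where two constraints hold with equality."""
    best, argbest = None, None
    for (a1, b1), (a2, b2) in combinations(halfspaces, 2):
        det = a1 * b2 - a2 * b1
        if abs(det) < tol:
            continue
        x = ((b2 - b1) / det, (a1 - a2) / det)  # solve the 2x2 system = 1
        if all(h[0] * x[0] + h[1] * x[1] <= 1.0 + tol for h in halfspaces):
            val = c[0] * x[0] + c[1] * x[1]
            if best is None or val > best:
                best, argbest = val, x
    return best, argbest

val, xstar = maximize((1.0, 1.0), constraints)
print(val, xstar)
```

In practice one would hand the accumulated linear constraints to a standard LP solver; the point of the sketch is only that the dynamical constraint "never leaves the polyhedron" collapses, under the stated contraction assumption, to finitely many linear inequalities.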