LMIs

what are LMIs?

Inequalities generalize equations. For example, when we aim to find \(x\) that solves,

\[x^2 - 4 = 0\]

we obtain the pair of solutions \(x = \pm 2\), but when we write

\[x^2 - 4 \leq 0\]

we suddenly have an entire interval, \([-2, 2]\), as the solution set; you get a nontrivial set of solutions. To be fair, you can also get a nontrivial set of solutions from an equation (consider the equation above with another variable, say \(y\), included). I am not going to say much about why we even care about these (equations or inequalities); for now, let us assume that we do. The inequality above is a quadratic inequality, as the unknown appears squared. Another feature of its solution set is that it is convex; this will have important algorithmic implications. Note that if we flip the direction of the inequality, the resulting solution set is not convex: you get two disjoint (unbounded) intervals, \(x \leq -2\) and \(x \geq 2\).

In higher dimensions (when you have more variables to worry about), recognizing what such sets look like is highly nontrivial; in particular, it is far from obvious how to recognize or understand the set of feasible solutions of a given system of polynomial inequalities. But there is a class of such inequalities that is especially convenient to work with. Here is an example: what is the set of triples \((x, y, z)\) such that \(xy - z^2 > 0\) and \(x > 0\)? Well, we can write this equivalently as the condition that the symmetric matrix with \(x, y\) on the diagonal and \(z\) on the off-diagonal,

\[\begin{pmatrix} x & z \\ z & y \end{pmatrix},\]

be positive definite. This is a linear matrix inequality, where the ordering on matrices is the one induced by the positive (semi)definite cone; we call this the PD or PSD ordering. In particular, a linear matrix inequality is a ``linear'' inequality in the matrix variable, but it is generally a quadratic inequality in the scalar variables constituting that matrix. That is the magic of (linear) matrix inequalities; more on this soon!
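The claimed equivalence, that \(xy - z^2 > 0\) and \(x > 0\) hold exactly when the matrix above is positive definite, follows from Sylvester's criterion (positive leading principal minors). Here is a small numerical sanity check of it using NumPy; the helper name `is_pd` is my own, not a standard API.

```python
import numpy as np

def is_pd(M):
    # A symmetric matrix is positive definite iff all its eigenvalues are > 0.
    # eigvalsh computes the eigenvalues of a symmetric (Hermitian) matrix.
    return bool(np.all(np.linalg.eigvalsh(M) > 0))

# Sample random triples (x, y, z) and check the equivalence:
#   [[x, z], [z, y]] positive definite  <=>  x > 0 and x*y - z^2 > 0
rng = np.random.default_rng(0)
for _ in range(10_000):
    x, y, z = rng.uniform(-2, 2, size=3)
    M = np.array([[x, z], [z, y]])
    assert is_pd(M) == (x > 0 and x * y - z * z > 0)
```

Note that \(y > 0\) need not be imposed separately: if \(x > 0\) and \(xy > z^2 \geq 0\), then \(y > 0\) follows automatically.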