Empirical Properties of Optima in Free Semidefinite Programs
2021
The branch of convex optimization called semidefinite programming is based on linear matrix inequalities (LMIs), namely, inequalities of the form $$ L_A (X) = I - A_1 X_1 - \dots - A_g X_g \succeq 0. $$ Here the $X_j$ are real numbers, and the set of solutions to such an inequality is called a spectrahedron. Such an inequality also makes sense when the $X_j$ are symmetric matrices of any size $n \times n$ and enter the formula through tensor products $A_j \otimes X_j$ with the $A_j$. The solution set of $L_A (X) \succeq 0$ is then called a free spectrahedron, since it contains matrices of all sizes and the defining "linear pencil" is "free" of the sizes of the matrices. Linear pencils play a central role in the burgeoning area of free analysis.
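To make the membership test concrete, here is a minimal numpy sketch (not from the paper; the function names are our own) that evaluates the pencil $L_A(X) = I - \sum_j A_j \otimes X_j$ at a tuple of symmetric matrices and decides membership in the free spectrahedron by checking that the smallest eigenvalue is nonnegative:

```python
import numpy as np

def eval_pencil(A, X):
    """Evaluate L_A(X) = I - sum_j kron(A_j, X_j) for lists of
    symmetric d x d matrices A and symmetric n x n matrices X."""
    d, n = A[0].shape[0], X[0].shape[0]
    L = np.eye(d * n)
    for Aj, Xj in zip(A, X):
        L -= np.kron(Aj, Xj)
    return L

def in_free_spectrahedron(A, X, tol=1e-9):
    """X lies in the free spectrahedron iff L_A(X) is positive
    semidefinite, i.e. its smallest eigenvalue is >= 0 (up to tol)."""
    return np.linalg.eigvalsh(eval_pencil(A, X)).min() >= -tol
```

For example, the zero tuple always lies in the free spectrahedron, since $L_A(0) = I \succ 0$, while scaling a point far enough in a fixed direction eventually violates the inequality.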
In this article, we report on numerically observed properties of the extreme points obtained from optimizing a linear functional $\ell$ over a free spectrahedron restricted to matrices $X_j$ of fixed size $n \times n$. We generate approximately 7 million cases (varying $g$, $A_j$, $n$, and $\ell$) and record properties of the resulting optimizers $X^\ell$.
Of course, the optimizers we find are always Euclidean extreme points, but surprisingly, in many reasonable parameter ranges, over 99.9\% are also free extreme points. Moreover, the dimension of the active constraint, $\ker L_A(X^\ell)$, is about twice what we expected. Another distinctive pattern concerns whether or not the optimizing tuple $(X_1^\ell, \dots, X_g^\ell)$ is reducible, i.e., can be simultaneously block diagonalized.
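A standard way to test reducibility numerically (a sketch of one common technique, not necessarily the paper's procedure) is to compute the dimension of the commutant $\{T : TX_j = X_jT \text{ for all } j\}$: a commutant of dimension 1 certifies that the tuple is irreducible, while a larger commutant signals a common invariant subspace, so the tuple can be simultaneously block diagonalized (with minor caveats over the reals for complex/quaternionic structure):

```python
import numpy as np

def commutant_dimension(X, tol=1e-8):
    """Dimension of the commutant {T : T X_j = X_j T for all j},
    computed as the nullspace dimension of the stacked linear maps
    vec(T) -> vec(X_j T - T X_j), via singular values."""
    n = X[0].shape[0]
    # With column-stacking vec, X_j T - T X_j corresponds to
    # kron(I, X_j) - kron(X_j^T, I) acting on vec(T).
    rows = [np.kron(np.eye(n), Xj) - np.kron(Xj.T, np.eye(n)) for Xj in X]
    s = np.linalg.svd(np.vstack(rows), compute_uv=False)
    return int(np.sum(s < tol))
```

For instance, a single diagonal matrix with distinct entries has a 2-dimensional commutant (it is reducible), whereas the pair $\sigma_x, \sigma_z$ of Pauli matrices commutes only with scalars, giving dimension 1.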
In addition, we give an algorithm that can be used to represent a given element of a free spectrahedron as a matrix convex combination of free extreme points; the representation produced satisfies a low, Carathéodory-like bound on the number of free extreme points needed.