Safe Feature Elimination for Non-negativity Constrained Convex Optimization

2019 
Inspired by recent work on safe feature elimination for 1-norm regularized least-squares, we develop strategies to eliminate features from convex optimization problems with non-negativity constraints. Our strategy is safe in the sense that it removes a feature/coordinate from the problem only when that coordinate is guaranteed to be zero at a solution. To perform feature elimination, we use an accurate, but not necessarily optimal, primal–dual feasible pair, which makes our methods robust and applicable to ill-conditioned problems. We supplement our feature elimination procedure with a method for constructing an accurate dual feasible point from an accurate primal feasible point; this allows us to run a first-order method to find an accurate primal feasible point, construct the corresponding dual feasible point, and then perform feature elimination. Under reasonable conditions, our feature elimination strategy eventually eliminates all zero features from the problem. As an application of our methods, we show how safe feature elimination can be used to robustly certify the uniqueness of solutions to nonnegative least-squares problems. We give numerical examples on a well-conditioned synthetic nonnegative least-squares problem and on a set of 40,000 extremely ill-conditioned problems arising in a microscopy application.
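
To make the strategy concrete, the sketch below illustrates one standard way such a safe test can be carried out for the nonnegative least-squares problem min_{x ≥ 0} (1/2)‖Ax − b‖², whose dual is max{ −(1/2)‖θ‖² − θᵀb : Aᵀθ ≥ 0 }. Because the dual objective is 1-strongly concave, any primal–dual feasible pair (x, θ) confines the dual optimum θ* to a ball of radius √(2·gap(x, θ)) around θ, and any coordinate j with aⱼᵀθ > √(2·gap)·‖aⱼ‖ then satisfies aⱼᵀθ* > 0, which forces x*ⱼ = 0 by complementary slackness. This is a minimal illustration, not the paper's algorithm: the projected-gradient solver, the synthetic data, and the Moreau-projection construction of the dual feasible point (itself an NNLS solve, used here only for demonstration) are all assumptions of the sketch.

```python
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(0)
m, n = 100, 50
A = rng.standard_normal((m, n))
x_true = np.maximum(rng.standard_normal(n), 0.0)   # roughly half exact zeros
b = A @ x_true + 0.01 * rng.standard_normal(m)

# Step 1: accurate (not optimal) primal feasible point via projected gradient.
step = 1.0 / np.linalg.norm(A, 2) ** 2
x = np.zeros(n)
for _ in range(200):
    x = np.maximum(x - step * (A.T @ (A @ x - b)), 0.0)

# Step 2: dual feasible point.  Here we project the residual Ax - b onto the
# dual-feasible cone {theta : A^T theta >= 0} via Moreau decomposition; this
# projection is itself an NNLS solve and stands in for the paper's cheaper
# construction, which is not reproduced here.
rho = A @ x - b
z, _ = nnls(A, -rho)          # z = argmin_{z >= 0} ||rho + A z||
theta = rho + A @ z           # A.T @ theta >= 0 up to round-off

# Step 3: duality gap and gap-safe radius.  The dual objective is 1-strongly
# concave, so ||theta - theta*|| <= sqrt(2 * gap).
gap = 0.5 * rho @ rho - (-0.5 * theta @ theta - theta @ b)
radius = np.sqrt(2.0 * max(gap, 0.0))

# Step 4: safe test.  a_j^T theta - radius * ||a_j|| > 0 guarantees
# a_j^T theta* > 0, hence x*_j = 0 by complementary slackness.
scores = A.T @ theta - radius * np.linalg.norm(A, axis=0)
eliminated = scores > 0
print(f"gap = {gap:.3e}; safely eliminated {eliminated.sum()} of {n} features")
```

Eliminated coordinates can be dropped and the reduced problem re-solved; as the primal point improves, the gap shrinks and, as the abstract claims under reasonable conditions, the test eventually certifies every zero feature.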