An incremental descent method for multi-objective optimization.

2021 
Current state-of-the-art multi-objective optimization solvers, which compute the gradients of all $m$ objective functions at every iteration, produce after $k$ iterations a measure of proximity to critical conditions that is upper-bounded by $O(1/\sqrt{k})$ when the objective functions have $L$-Lipschitz continuous gradients; i.e., they require $O(m/\epsilon^2)$ gradient and function evaluations to drive this measure of proximity below a target $\epsilon$. We reduce this to $O(1/\epsilon^2)$ with a method that requires only a constant number of gradient and function evaluations per iteration; thus, we obtain for the first time a multi-objective descent-type method whose query complexity is unaffected by increasing values of $m$. To this end, we identify a new multi-objective descent direction, which we name the \emph{central descent direction}, and propose an incremental approach. We establish robustness properties of the central descent direction, derive measures of proximity to critical conditions, and show that the incremental strategy for finding solutions to the multi-objective problem attains convergence properties unattained by previous methods. To the best of our knowledge, this is the first method to achieve this without additional a priori information on the structure of the problem, such as is exploited by scalarization techniques, and without prior knowledge of the regularity of the objective functions other than Lipschitz continuity of the gradients.
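To make the cost comparison concrete, the sketch below contrasts the two query models the abstract describes: a full-gradient step that evaluates all $m$ objective gradients per iteration versus an incremental step that evaluates only one. This is a minimal illustration of the incremental flavour on a toy bi-objective problem, not the paper's central-descent-direction method; the function names, step size, and cyclic sampling rule are assumptions for illustration only.

```python
import numpy as np

def incremental_mo_descent(grads, x0, step=0.05, iters=2000):
    """Illustrative sketch: per iteration, query the gradient of only ONE
    of the m objectives (here in cyclic order), so the per-iteration cost
    is constant in m. This mimics the O(1/eps^2) query model described in
    the abstract; the paper's actual central descent direction differs."""
    x = np.asarray(x0, dtype=float)
    m = len(grads)
    for k in range(iters):
        g = grads[k % m](x)      # one gradient evaluation per iteration
        x = x - step * g
    return x

# Toy bi-objective problem: f1(x) = ||x - a||^2, f2(x) = ||x - b||^2.
# Its Pareto set is the segment between a and b.
a = np.array([1.0, 0.0])
b = np.array([-1.0, 0.0])
grads = [lambda x: 2.0 * (x - a), lambda x: 2.0 * (x - b)]

x_star = incremental_mo_descent(grads, x0=[3.0, 3.0])
# The iterates settle near the Pareto segment between a and b.
```

For this quadratic example the cyclic scheme contracts toward a point on the segment joining the two minimizers, while a full-gradient scheme would need both gradient evaluations at every step to achieve a comparable decrease.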