Solving Stochastic Optimization with Expectation Constraints Efficiently by a Stochastic Augmented Lagrangian-Type Algorithm

2022 
This paper considers the problem of minimizing a convex expectation function subject to a set of convex inequality expectation constraints. We propose a stochastic augmented Lagrangian-type algorithm, namely the stochastic linearized proximal method of multipliers, to solve this convex stochastic optimization problem. This algorithm can be roughly viewed as a hybrid of stochastic approximation and the traditional proximal method of multipliers. Under mild conditions, we show that this algorithm exhibits O(K^{-1/2}) expected convergence rates for both objective reduction and constraint violation when the algorithm parameters are properly chosen, where K denotes the number of iterations. Moreover, we show that, with high probability, the algorithm has an O(log(K) K^{-1/2}) constraint violation bound and an O(log^{3/2}(K) K^{-1/2}) objective bound. Numerical results demonstrate that the proposed algorithm is efficient.
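The flavor of such a hybrid scheme can be illustrated on a toy problem. The sketch below is not the paper's algorithm; it is a minimal stochastic approximation loop with an augmented Lagrangian-type multiplier update, under assumed sample distributions and an assumed O(k^{-1/2}) step size, for minimizing E[(x - a)^2] subject to E[x - b] <= 0 with a ~ N(1, 0.1) and b ~ N(0.5, 0.1) (so the optimum is x* = 0.5):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy instance (assumed for illustration):
#   minimize  E[(x - a)^2],  a ~ N(1, 0.1)
#   subject to E[x - b] <= 0, b ~ N(0.5, 0.1)
# i.e. minimize (x - 1)^2 s.t. x <= 0.5; optimum x* = 0.5.

def grad_f(x, a):
    """Stochastic gradient of one objective sample (x - a)^2."""
    return 2.0 * (x - a)

def g(x, b):
    """One stochastic sample of the constraint function."""
    return x - b

K = 20000
x, lam = 0.0, 0.0   # primal iterate and Lagrange multiplier
rho = 1.0           # augmented-Lagrangian penalty parameter

for k in range(1, K + 1):
    eta = 1.0 / np.sqrt(k)          # assumed O(k^{-1/2}) step size
    a = rng.normal(1.0, 0.1)
    b = rng.normal(0.5, 0.1)
    gk = g(x, b)
    # Gradient of the sampled augmented Lagrangian in x
    # (the constraint gradient is 1 for this toy instance).
    dx = grad_f(x, a) + max(lam + rho * gk, 0.0)
    x = x - eta * dx                # stochastic primal step
    lam = max(lam + eta * gk, 0.0)  # projected dual ascent step

print(x)  # should approach the optimum x* = 0.5
```

A single pass of fresh samples per iteration, diminishing steps, and a nonnegative multiplier update are the essential ingredients; the paper's method additionally linearizes within a proximal framework to obtain the stated expected and high-probability bounds.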