Guided Policy Search using Sequential Convex Programming for Initialization of Trajectory Optimization Algorithms.

2021 
Nonlinear trajectory optimization algorithms have been developed to handle optimal control problems with nonlinear dynamics and nonconvex constraints in trajectory planning. The performance and computational efficiency of many trajectory optimization methods are sensitive to the initial guess, i.e., the initial trajectory that these iterative algorithms require. Motivated by this observation, we tackle the initialization problem for trajectory optimization via policy optimization. To optimize a policy, we propose a guided policy search method with two key components: i) a trajectory update and ii) a policy update. The trajectory update solves, offline, a large number of trajectory optimization problems from different initial states via Sequential Convex Programming (SCP), taking a single SCP step per problem to generate each trajectory iterate. In addition to these iterates, we generate further trajectories around each iterate via a feedback control law. All of these trajectories are then used by a stochastic gradient descent algorithm to update the neural network policy, i.e., the policy update step. As a result, the trained policy can generate trajectory candidates that are close to optimality and feasibility and that provide excellent initial guesses for trajectory optimization methods. We validate the proposed method on a real-world 6-degree-of-freedom powered descent guidance problem for a reusable rocket.
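To make the two-step loop concrete, below is a minimal, self-contained sketch in Python/NumPy. It is not the authors' implementation: the linear toy dynamics, the closed-form least-squares subproblem (standing in for one SCP step), the pseudoinverse-based feedback gain, and the tiny two-layer policy are all illustrative assumptions; a real implementation would re-linearize the nonlinear 6-DoF dynamics at each iterate and solve the resulting convex subproblem with an SOCP/QP solver.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative problem sizes; the paper's example is 6-DoF powered descent.
NX, NU, T = 6, 3, 20

# A linear toy system stands in for the convexified dynamics that each SCP
# iteration would produce; with nonlinear dynamics, A and B would be
# re-linearized about the current trajectory iterate.
A = np.eye(NX) + 0.1 * np.diag(np.ones(NX - 1), k=1)
B = 0.1 * rng.normal(size=(NX, NU))
RHO = 1e-2  # control-effort weight in the convex subproblem


def scp_step(x0):
    """One convex subproblem: minimize sum_k ||x_k||^2 + RHO ||u_k||^2
    subject to x_{k+1} = A x_k + B u_k, solved in closed form by least
    squares. Stands in for a single SCP step of the real solver."""
    # Stack the horizon: X = G x0 + H U, with X = (x_1, ..., x_T).
    G = np.vstack([np.linalg.matrix_power(A, k + 1) for k in range(T)])
    H = np.zeros((T * NX, T * NU))
    for k in range(T):
        for j in range(k + 1):
            H[k * NX:(k + 1) * NX, j * NU:(j + 1) * NU] = (
                np.linalg.matrix_power(A, k - j) @ B)
    U = np.linalg.solve(H.T @ H + RHO * np.eye(T * NU), -H.T @ G @ x0)
    X = (G @ x0 + H @ U).reshape(T, NX)
    states = np.vstack([x0, X[:-1]])          # x_0, ..., x_{T-1}
    return states, U.reshape(T, NU)           # paired with u_0, ..., u_{T-1}


def feedback_rollouts(xs, us, n_samples=4, noise=0.05):
    """Extra trajectories around the SCP iterate via a proportional feedback
    law u = u* + K (x - x*); the gain below is a crude assumption, not the
    paper's feedback design."""
    K = -0.5 * np.linalg.pinv(B)
    data = []
    for _ in range(n_samples):
        x = xs[0] + noise * rng.normal(size=NX)
        for k in range(T):
            u = us[k] + K @ (x - xs[k])
            data.append((x.copy(), u))
            x = A @ x + B @ u
    return data


class Policy:
    """Tiny two-layer network u = W2 tanh(W1 x), regressed onto the guiding
    trajectories by plain SGD (the policy-update step)."""

    def __init__(self, hidden=32):
        self.W1 = 0.1 * rng.normal(size=(hidden, NX))
        self.W2 = 0.1 * rng.normal(size=(NU, hidden))

    def __call__(self, x):
        return self.W2 @ np.tanh(self.W1 @ x)

    def sgd_step(self, x, u, lr=1e-2):
        h = np.tanh(self.W1 @ x)
        e = self.W2 @ h - u                   # imitation error
        self.W2 -= lr * np.outer(e, h)
        dh = (self.W2.T @ e) * (1.0 - h ** 2)
        self.W1 -= lr * np.outer(dh, x)


policy = Policy()
for _ in range(50):                           # guided policy search iterations
    dataset = []
    for _ in range(8):                        # problems from different x0
        x0 = rng.normal(size=NX)
        xs, us = scp_step(x0)                 # trajectory update: one SCP step
        dataset += list(zip(xs, us))
        dataset += feedback_rollouts(xs, us)  # neighbours via feedback law
    rng.shuffle(dataset)
    for x, u in dataset:                      # policy update via SGD
        policy.sgd_step(x, u)
```

The structural points the sketch preserves are the ones the abstract emphasizes: a single convex step per problem per outer iteration, additional feedback-generated trajectories around each iterate, and a shuffled stochastic gradient pass over all collected state-control pairs to train the policy.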