
Backtracking line search

In (unconstrained) minimization, a backtracking line search, a search scheme based on the Armijo–Goldstein condition, is a line search method to determine the maximum amount to move along a given search direction. It involves starting with a relatively large estimate of the step size for movement along the search direction, and iteratively shrinking the step size (i.e., 'backtracking') until a decrease of the objective function is observed that adequately corresponds to the decrease that is expected, based on the local gradient of the objective function.

Given a starting position $\mathbf{x}$ and a search direction $\mathbf{p}$, the task of a line search is to determine a step size $\alpha > 0$ that adequately reduces the objective function $f : \mathbb{R}^n \to \mathbb{R}$ (assumed smooth), i.e., to find a value of $\alpha$ that reduces $f(\mathbf{x} + \alpha\,\mathbf{p})$ relative to $f(\mathbf{x})$. However, it is usually undesirable to devote substantial resources to finding a value of $\alpha$ that precisely minimizes $f$. This is because the computing resources needed to find a more precise minimum along one particular direction could instead be employed to identify a better search direction. Once an improved starting point has been identified by the line search, another line search will ordinarily be performed in a new direction. The goal, then, is just to identify a value of $\alpha$ that provides a reasonable amount of improvement in the objective function, rather than to find the actual minimizing value of $\alpha$.

The backtracking line search starts with a large estimate of $\alpha$ and iteratively shrinks it. The shrinking continues until a value is found that is small enough to provide a decrease in the objective function that adequately matches the decrease that is expected to be achieved, based on the local function gradient $\nabla f(\mathbf{x})$.

Define the local slope of the function of $\alpha$ along the search direction $\mathbf{p}$ as $m = \mathbf{p}^{\mathrm{T}} \nabla f(\mathbf{x})$. It is assumed that $\mathbf{p}$ is a unit vector in a direction in which some local decrease is possible, i.e., it is assumed that $m < 0$.

Based on a selected control parameter $c \in (0, 1)$, the Armijo–Goldstein condition tests whether a step-wise movement from a current position $\mathbf{x}$ to a modified position $\mathbf{x} + \alpha\,\mathbf{p}$ achieves an adequately corresponding decrease in the objective function. The condition is fulfilled if

$$ f(\mathbf{x} + \alpha\,\mathbf{p}) \le f(\mathbf{x}) + \alpha\,c\,m. $$
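For concreteness, the condition can be checked numerically. The following Python sketch is illustrative only: the callables f and grad_f (hypothetical names, not from the article) are assumed to return the objective value and its gradient at a point.

```python
import numpy as np

def armijo_condition(f, grad_f, x, p, alpha, c=0.5):
    """Return True if the Armijo-Goldstein condition
    f(x + alpha*p) <= f(x) + alpha*c*m holds, where m = p^T grad_f(x)."""
    m = p @ grad_f(x)                      # local slope along the search direction
    assert m < 0, "p must be a descent direction (m < 0)"
    return f(x + alpha * p) <= f(x) + alpha * c * m

# Example: quadratic objective with a unit steepest-descent direction
f = lambda x: float(x @ x)
grad_f = lambda x: 2.0 * x
x0 = np.array([1.0, 2.0])
g = grad_f(x0)
p = -g / np.linalg.norm(g)
print(armijo_condition(f, grad_f, x0, p, alpha=1.0, c=0.5))
```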
This condition, when used appropriately as part of a line search, can ensure that the step size is not excessively large. However, this condition is not sufficient on its own to ensure that the step size is nearly optimal, since any value of $\alpha$ that is sufficiently small will satisfy the condition.

Thus, the backtracking line search strategy starts with a relatively large step size, and repeatedly shrinks it by a factor $\tau \in (0, 1)$ until the Armijo–Goldstein condition is fulfilled. The search will terminate after a finite number of steps for any positive values of $c$ and $\tau$ that are less than 1. For example, Armijo used 1/2 for both $c$ and $\tau$ in a paper he published in 1966.

Starting with a maximum candidate step size value $\alpha_0 > 0$, using search control parameters $\tau \in (0, 1)$ and $c \in (0, 1)$, the backtracking line search algorithm can be expressed as follows:
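A minimal Python sketch of this loop is given below; as above, f and grad_f are hypothetical callables for the objective and its gradient, and the function returns the accepted step size.

```python
import numpy as np

def backtracking_line_search(f, grad_f, x, p, alpha0=1.0, tau=0.5, c=0.5):
    """Shrink the candidate step size by a factor tau until the
    Armijo-Goldstein condition is satisfied, then return that step size."""
    m = p @ grad_f(x)                      # local slope along p; must be negative
    if m >= 0:
        raise ValueError("p is not a descent direction")
    fx = f(x)
    alpha = alpha0
    # Any c and tau in (0, 1) guarantee termination after finitely many shrinks.
    while f(x + alpha * p) > fx + alpha * c * m:
        alpha *= tau
    return alpha

# Example use with a simple quadratic and a unit steepest-descent direction
f = lambda x: float(x @ x)
grad_f = lambda x: 2.0 * x
x = np.array([3.0, -4.0])
g = grad_f(x)
p = -g / np.linalg.norm(g)
alpha = backtracking_line_search(f, grad_f, x, p)
print(alpha, f(x + alpha * p))
```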

[ "Beam search", "Line search", "Incremental heuristic search" ]
Parent Topic
Child Topic
    No Parent Topic