Lecture Notes

Rates of Convergence

By Akshay Agrawal. Last updated Jan. 22, 2019.

The rate of convergence of an iterative optimization algorithm quantifies how quickly the algorithm converges to a fixed point. Let $x_0, x_1, x_2, \ldots$ be the iterates produced by the iteration $x_{k+1} = f(x_k)$, with initial iterate $x_0$. Let $x^\star$ denote the fixed point obtained by iterating $f$. Let $e_0, e_1, e_2, \ldots$ be the errors of the iteration, that is, $e_k = \|x_k - x^\star\|$.
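
As a concrete illustration (not from the original notes), the following Python sketch runs the fixed-point iteration for $f(x) = \cos(x)$, whose fixed point is $x^\star \approx 0.7390851$, and records the errors $e_k$:

    import math

    def f(x):
        return math.cos(x)

    x = 1.0                      # initial iterate x_0
    x_star = 0.7390851332151607  # fixed point of cos (found by iterating to convergence)

    errors = []
    for k in range(20):
        errors.append(abs(x - x_star))  # e_k = |x_k - x*|
        x = f(x)                        # x_{k+1} = f(x_k)

    print(errors[:5])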

Suppose there exists a constant $C > 0$ and an exponent $q \geq 1$ such that, for sufficiently large $k$,

$$e_{k+1} \leq C e_k^q.$$

If $q = 1$ and $C < 1$, the iteration is said to converge linearly to its fixed point; if $q = 2$, it converges quadratically. Additionally, if $q = 1$ and $e_{k+1}/e_k \to 1$, the iteration is said to converge sublinearly, and if $q = 1$ and $e_{k+1}/e_k \to 0$, it converges superlinearly.
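
These definitions can be made operational by estimating $q$ empirically from the tail of an error sequence, using $\log e_{k+1} \approx \log C + q \log e_k$, so that $q \approx \log(e_{k+2}/e_{k+1}) / \log(e_{k+1}/e_k)$. The helper below is a hypothetical sketch, not part of the notes:

    import math

    def estimate_order(errors):
        """Estimate the exponent q from the last three errors."""
        e0, e1, e2 = errors[-3:]
        return math.log(e2 / e1) / math.log(e1 / e0)

    # Linearly convergent errors, with C = 0.5 and q = 1:
    linear = [0.5 ** k for k in range(1, 10)]
    print(estimate_order(linear))   # ~1.0

    # Quadratically convergent errors, with e_{k+1} = e_k^2:
    quad = [1e-1, 1e-2, 1e-4, 1e-8]
    print(estimate_order(quad))     # ~2.0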

It is instructive to unroll the recursion in the above inequality. If $q = 1$, then we have

$$e_k \leq C e_{k-1} \leq C^2 e_{k-2} \leq \cdots \leq C^k e_0.$$

For this reason, linear convergence is sometimes referred to as geometric convergence. Note also that for $C < 1$, a log-linear plot of error versus iterations decays linearly, and obtaining an accuracy on the order of $\epsilon$ requires on the order of $\log(1/\epsilon)$ iterations.
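
A quick numerical check of the iteration count, under the assumed values $C = 1/2$ and $e_0 = 1$: solving $C^k e_0 \leq \epsilon$ for $k$ gives $k \geq \log(e_0/\epsilon) / \log(1/C)$.

    import math

    C, e0, eps = 0.5, 1.0, 1e-8
    k_needed = math.ceil(math.log(e0 / eps) / math.log(1 / C))
    print(k_needed)  # 27, since log2(1e8) ~ 26.6

    # Verify directly by unrolling e_k <= C^k e_0.
    e, k = e0, 0
    while e > eps:
        e *= C
        k += 1
    print(k)  # matches k_needed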

If $q = 2$, then

$$e_k \leq C e_{k-1}^2 \leq C (C e_{k-2}^2)^2 = C^3 e_{k-2}^4 \leq \cdots \leq C^{2^k - 1} e_0^{2^k} = \frac{1}{C} (C e_0)^{2^k}.$$

Even though this convergence is termed quadratic, a log-linear plot of quadratically convergent errors versus iterations actually decays as an exponential: the number of correct digits roughly doubles at each iteration. Provided $C e_0 < 1$, obtaining an $\epsilon$-suboptimal iterate in a quadratically convergent regime requires on the order of $\log \log (1/\epsilon)$ iterations.
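
A standard quadratically convergent iteration (chosen here for illustration; the notes do not single one out) is Newton's method for $x^2 - 2 = 0$, that is, $x_{k+1} = (x_k + 2/x_k)/2$:

    import math

    x = 1.5
    x_star = math.sqrt(2.0)
    for k in range(5):
        print(k, abs(x - x_star))   # e_k = |x_k - sqrt(2)|
        x = 0.5 * (x + 2.0 / x)     # Newton step for x^2 - 2 = 0

    # Errors: ~8.6e-2, ~2.5e-3, ~2.1e-6, ~1.6e-12, then machine precision,
    # consistent with e_{k+1} <= C e_k^2: the digits of accuracy roughly double.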
