Critical Point Finding with Newton-MR by Analogy to Computing Square Roots
Understanding the behavior of algorithms for solving the optimization problem (hereafter OP) of minimizing a differentiable loss function (OP1) is enhanced by knowledge of the critical points of that loss function, i.e., the points where the gradient is zero. Here, we describe a solution to the problem of finding critical points by posing and solving three optimization problems: 1) minimizing the norm of the gradient (OP2), 2) minimizing the difference between the pre-conditioned update direction and the gradient (OP3), and 3) minimizing the norm of the gradient along the update direction (OP4). The result is a recently introduced algorithm for optimizing invex functions, Newton-MR, which turns out to be highly effective at finding the critical points of the loss surfaces of neural networks. We precede this derivation with an analogous, but simpler, derivation of a nested-optimization algorithm for computing square roots, obtained by combining Heron's method with Newton-Raphson division.
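To make the analogy concrete, here is a minimal Python sketch under our own naming (functions such as `nr_reciprocal`, `heron_sqrt`, and `newton_mr_sketch` are illustrative, not taken from the paper). The inner Newton-Raphson loop approximates a reciprocal, Heron's outer loop uses it in place of the division, and the critical-point loop is a simplified stand-in for Newton-MR: a dense least-squares solve stands in for the minimum-residual (MINRES-type) subproblem of OP3, and a plain backtracking search on the gradient norm stands in for the line search of OP4.

```python
import numpy as np

def nr_reciprocal(a, y0, iters=20):
    """Newton-Raphson division: approximate 1/a via y_{k+1} = y_k (2 - a y_k)."""
    y = y0
    for _ in range(iters):
        y = y * (2.0 - a * y)
    return y

def heron_sqrt(a, iters=20):
    """Heron's method for sqrt(a): x_{k+1} = (x_k + a/x_k) / 2, with the
    division a/x_k replaced by a * (1/x_k) from the inner Newton-Raphson loop."""
    x = a if a >= 1.0 else 1.0                       # crude initial guess
    for _ in range(iters):
        recip = nr_reciprocal(x, y0=1.0 / (x + 1.0)) # inner solve; y0 chosen so it converges
        x = 0.5 * (x + a * recip)
    return x

def newton_mr_sketch(grad, hess, x0, iters=50, alpha0=1.0):
    """Simplified sketch of the Newton-MR idea (not the paper's exact algorithm):
    choose p to reduce ||H p + g|| (here via a least-squares solve, OP3-style),
    then shrink the step until the gradient norm decreases (OP4-style)."""
    x = x0
    for _ in range(iters):
        g = grad(x)
        H = hess(x)
        p, *_ = np.linalg.lstsq(H, -g, rcond=None)   # stand-in for the MR subproblem
        alpha = alpha0
        while np.linalg.norm(grad(x + alpha * p)) > np.linalg.norm(g) and alpha > 1e-12:
            alpha *= 0.5                             # backtrack on the gradient norm
        x = x + alpha * p
    return x

print(heron_sqrt(2.0))   # approximately 1.41421356
```

On a convex quadratic, for example, `newton_mr_sketch` recovers the exact critical point in one step, since the least-squares solve reproduces the Newton direction; the sketch is meant only to show how the nested subproblems fit together.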