A Continuation Method for Discrete Optimization and its Application to Nearest Neighbor Classification

02/10/2018
by Ali Shameli, et al.

The continuation method is a popular approach in non-convex optimization and computer vision. The main idea is to start from a simple function that can be minimized efficiently, and to gradually transform it into the more complicated original objective function; the solution of the simpler problem is then used as the starting point for solving the original problem. We present a continuation method for discrete optimization problems. Ideally, the evolved function should be hill-climbing friendly and should have the same global minima as the original function. We show that the proposed continuation method is the best affine approximation of a transformation that is guaranteed to produce a hill-climbing friendly function with the same global minima. We demonstrate the effectiveness of the technique on nearest neighbor classification. Although nearest neighbor methods are often competitive in terms of sample efficiency, their computational complexity in the test phase has been a major obstacle to their applicability in big-data problems. Using the proposed continuation method, we derive an improved graph-based nearest neighbor algorithm that is simple to understand and easy to implement. The computational complexity of the method in the test phase scales gracefully with the size of the training set, a property that is particularly important in big-data applications.
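To make the idea concrete, below is a minimal, hypothetical Python sketch of a generic continuation loop: a smooth surrogate f_smooth is linearly interpolated toward the original discrete objective f_target, and a greedy hill climb is warm-started from the previous stage's solution. The linear homotopy, the hill_climb helper, and the neighbors function are illustrative assumptions; the paper's actual transformation (the affine approximation described above) is not reproduced here.

    def hill_climb(f, x0, neighbors, max_iters=1000):
        # Greedy local search: move to the best improving neighbor until stuck.
        x = x0
        for _ in range(max_iters):
            best = min(neighbors(x), key=f, default=x)
            if f(best) >= f(x):
                break  # local minimum of f reached
            x = best
        return x

    def continuation_minimize(f_smooth, f_target, x0, neighbors, steps=10):
        # Anneal from the easy surrogate (t = 0) to the original objective
        # (t = 1), warm-starting each hill climb from the previous solution.
        x = x0
        for k in range(steps + 1):
            t = k / steps
            f_t = lambda z, t=t: (1.0 - t) * f_smooth(z) + t * f_target(z)
            x = hill_climb(f_t, x, neighbors)
        return x

    # Toy usage (illustrative only): minimize a rugged objective over bit
    # strings of length n, using single-bit flips as the neighborhood.
    n = 20
    f_target = lambda x: sum(x) + 5 * (sum(x) % 3)  # rugged discrete objective
    f_smooth = lambda x: sum(x)                     # easy surrogate
    flips = lambda x: [x[:i] + (1 - x[i],) + x[i + 1:] for i in range(len(x))]
    x_star = continuation_minimize(f_smooth, f_target, (1,) * n, flips)

The warm-starting is what distinguishes this from running hill climbing on f_target directly: the coarse surrogate steers the search toward a basin that persists as the objective sharpens.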
