When optimizing nonlinear objectives is no harder than linear objectives

04/11/2018
by Daniel Alabi et al.

Most systems and learning algorithms optimize average performance or average loss, in part for reasons of computational complexity. However, many objectives of practical interest are more complex than a simple average. Examples include balancing performance or loss with fairness across groups of people, and balancing precision with recall. We prove that, from a computational perspective, fairly general families of such complex objectives are not significantly harder to optimize than standard averages: we give polynomial-time reductions, i.e., algorithms that optimize complex objectives by making calls to a linear optimizer. The families covered include arbitrary continuous functions of average group performances as well as convex objectives. We illustrate with applications to fair machine learning, fair optimization, and F1-scores.
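To make the reduction idea concrete, here is a minimal sketch (not the paper's actual algorithm; all names and the weight grid are illustrative) of optimizing the nonlinear F1-score by repeated calls to a linear optimizer. F1 is a nonlinear function of the average counts TP, FP, FN, but for fixed weights, minimizing a weighted sum of false positives and false negatives is a linear objective, and an outer search over weight ratios recovers a good F1.

```python
import numpy as np

# Synthetic scores and noisily correlated binary labels.
rng = np.random.default_rng(0)
scores = rng.random(1000)
labels = (scores + 0.3 * rng.standard_normal(1000)) > 0.5

def linear_oracle(w_fp, w_fn):
    """Linear optimizer: minimize w_fp*FP + w_fn*FN over
    threshold classifiers (linear in the error counts)."""
    best_t, best_cost = 0.0, float("inf")
    for t in np.unique(scores):
        pred = scores >= t
        fp = np.sum(pred & ~labels)
        fn = np.sum(~pred & labels)
        cost = w_fp * fp + w_fn * fn
        if cost < best_cost:
            best_t, best_cost = t, cost
    return best_t

def f1(t):
    """Nonlinear objective: F1-score of the threshold classifier at t."""
    pred = scores >= t
    tp = np.sum(pred & labels)
    fp = np.sum(pred & ~labels)
    fn = np.sum(~pred & labels)
    return 2 * tp / (2 * tp + fp + fn)

# Outer loop over weight ratios; every inner call optimizes a
# *linear* objective, yet the best result is chosen by F1.
best = max((linear_oracle(w, 1 - w) for w in np.linspace(0.05, 0.95, 19)),
           key=f1)
print(f"best threshold: {best:.3f}, F1: {f1(best):.3f}")
```

The outer grid here is a crude stand-in for the paper's reduction machinery; the point is only that each call the nonlinear procedure makes is to an optimizer for a linear objective.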
