Faster 0-1-Knapsack via Near-Convex Min-Plus-Convolution
We revisit the classic 0-1-Knapsack problem, in which we are given n items with their weights and profits as well as a weight budget W, and the goal is to find a subset of items of total weight at most W that maximizes the total profit. We study pseudopolynomial-time algorithms parameterized by the largest profit of any item, p_max, and the largest weight of any item, w_max. Our main results are algorithms for 0-1-Knapsack running in time Õ(n w_max p_max^2/3) and Õ(n p_max w_max^2/3), improving upon an algorithm in time O(n p_max w_max) by Pisinger [J. Algorithms '99]. In the regime p_max ≈ w_max ≈ n (and W ≈ OPT ≈ n^2) our algorithms are the first to break the cubic barrier n^3. To obtain our result, we give an efficient algorithm to compute the min-plus convolution of near-convex functions. More precisely, we say that a function f: [n] → 𝐙 is Δ-near convex with Δ ≥ 1 if there is a convex function f̆ such that f̆(i) ≤ f(i) ≤ f̆(i) + Δ for every i. We design an algorithm computing the min-plus convolution of two Δ-near convex functions in time Õ(nΔ). This tool can replace the usage of the prediction technique of Bateni, Hajiaghayi, Seddighin and Stein [STOC '18] in all applications we are aware of, and we believe it has wider applicability.
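To make the two central definitions concrete, here is a small illustrative sketch (not the paper's Õ(nΔ) algorithm): a naive quadratic-time min-plus convolution, and a routine that computes the smallest Δ for which a given sequence is Δ-near convex by measuring its maximum vertical distance above its lower convex hull. All function names are our own for illustration.

```python
def min_plus_convolution(f, g):
    """Naive O(n*m) min-plus convolution: h[k] = min over i+j=k of f[i] + g[j].
    The paper's contribution is computing this in Õ(nΔ) when f and g are
    Δ-near convex; this brute-force version is only a reference point."""
    n, m = len(f), len(g)
    h = [float("inf")] * (n + m - 1)
    for i in range(n):
        for j in range(m):
            h[i + j] = min(h[i + j], f[i] + g[j])
    return h


def near_convexity_delta(f):
    """Smallest Δ such that f is Δ-near convex: the tightest convex minorant
    f̆ of f is its lower convex hull, so Δ = max_i (f(i) - f̆(i))."""
    pts = list(enumerate(f))
    # Build the lower convex hull with a monotone-chain scan.
    hull = []
    for x, y in pts:
        while len(hull) >= 2:
            (x1, y1), (x2, y2) = hull[-2], hull[-1]
            # Pop hull[-1] if it lies on or above the segment hull[-2] -> (x, y).
            if (x2 - x1) * (y - y1) - (y2 - y1) * (x - x1) <= 0:
                hull.pop()
            else:
                break
        hull.append((x, y))
    # Evaluate the hull at every index by linear interpolation.
    delta, k = 0.0, 0
    for x, y in pts:
        while k + 1 < len(hull) and hull[k + 1][0] <= x:
            k += 1
        if k + 1 < len(hull):
            (x1, y1), (x2, y2) = hull[k], hull[k + 1]
            breve = y1 + (y2 - y1) * (x - x1) / (x2 - x1)
        else:
            breve = hull[k][1]
        delta = max(delta, y - breve)
    return delta
```

For example, a convex (here, linear) sequence has Δ = 0, while `[0, 3, 1, 6, 4]` sits up to 3.5 above its lower hull, so it is 3.5-near convex but no better.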