Computational Polarization: An Information-theoretic Method for Resilient Computing
We introduce an error-resilient distributed computing method based on an extension of the channel polarization phenomenon to distributed algorithms. The method leverages an algorithmic split operation that transforms two identical compute nodes into a slow and a fast worker, which parallels the channel split operation in Polar Codes. This operation preserves the average runtime, analogous to the conservation of Shannon capacity in channel polarization. By leveraging a recursive construction in a similar spirit to the Fast Fourier Transform, this method synthesizes virtual compute nodes with dispersed return-time distributions, which we call computational polarization. We show that the runtime distributions form a functional martingale process, identify their limiting distributions in closed form together with non-asymptotic convergence rates, and prove strong convergence results in Banach spaces. We provide an information-theoretic lower bound on the overall runtime of any coded computation method and show that the computational polarization approach asymptotically achieves the optimal runtime for computing linear functions. An important advantage is the near-linear-time decoding procedure, which is significantly cheaper than decoding Maximum Distance Separable (MDS) codes.
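The following is a minimal illustrative sketch, not the paper's construction: it assumes the 2x2 split maps two i.i.d. worker runtimes (T1, T2) to a fast virtual node with return time min(T1, T2) and a slow one with max(T1, T2), which preserves the sum T1 + T2 and hence the average runtime, consistent with the conservation property stated above. The recursive, FFT-style application and the Monte Carlo experiment are hypothetical choices made here for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def polarize(runtimes):
    """Recursively apply an assumed min/max split, FFT-style, to a vector of
    runtimes whose length is a power of two; returns the virtual-node runtimes."""
    n = len(runtimes)
    if n == 1:
        return runtimes
    half = n // 2
    fast = np.minimum(runtimes[:half], runtimes[half:])  # faster virtual node
    slow = np.maximum(runtimes[:half], runtimes[half:])  # slower virtual node
    return np.concatenate([polarize(fast), polarize(slow)])

# Monte Carlo over 16 physical workers with exponentially distributed runtimes.
samples = np.array([polarize(rng.exponential(1.0, size=16)) for _ in range(10_000)])
print("mean physical runtime :", 1.0)
print("mean virtual runtimes :", np.round(samples.mean(axis=0), 2))
# The total (and hence average) runtime is conserved at every level, while the
# individual virtual nodes spread toward very fast and very slow return times,
# illustrating the polarization effect described in the abstract.
```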