Differentiate Everything with a Reversible Domain-Specific Language

03/10/2020
by Jin-Guo Liu, et al.

Traditional machine-instruction-level reverse-mode automatic differentiation (AD) suffers from a space overhead that is linear in time, needed to trace back the computational state; this caching is also a source of poor time performance. In reversible programming, a program can be executed bidirectionally, so no extra machinery is needed to trace back the computational state. This paper answers the question of how practical it is to implement machine-instruction-level reverse-mode AD in a reversible programming language. By implementing sparse matrix operations and several machine learning applications in our reversible eDSL NiLang, and benchmarking their performance against state-of-the-art AD frameworks, we find the answer to be a clear yes. NiLang is an open-source, r-Turing-complete reversible eDSL in Julia. It gives users the flexibility to trade off time, space, and energy, rather than caching data onto a global tape. Its manageable memory allocation also makes it a good tool for differentiating GPU kernels. We also discuss the challenges we face on the way toward energy-efficient, rounding-error-free reversible computing, mainly from the instruction-set and hardware perspectives.
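To make the idea concrete, here is a minimal sketch of the reversible-programming style the abstract describes, using NiLang's `@i` macro and inverse operator `~`. The toy function `imul` is our own illustrative example, not taken from the paper, and the `NiLang.AD.gradient` call follows the package's documented interface; exact signatures may differ between NiLang versions.

```julia
using NiLang, NiLang.AD

# A reversible program: `out!` is updated only by invertible
# instructions (here `+=`), so the computation can be undone
# step by step without recording a tape.
@i function imul(out!, x, y)
    out! += x * y
end

# Forward execution; the updated arguments are returned.
out, x, y = imul(0.0, 2.0, 3.0)   # (6.0, 2.0, 3.0)

# Backward execution via the inverse program `~imul`,
# which restores the original computational state.
(~imul)(out, x, y)                # (0.0, 2.0, 3.0)

# Reverse-mode AD reuses this backward pass: differentiate
# the output with respect to the first argument.
NiLang.AD.gradient(Val(1), imul, (0.0, 2.0, 3.0))  # (1.0, 3.0, 2.0)
```

Because the inverse program reconstructs intermediate states on the fly, the gradient computation needs no global tape; the only memory cost is whatever the programmer explicitly allocates, which is the time/space/energy tradeoff the abstract refers to.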
