Recursion, Probability, Convolution and Classification for Computations

07/22/2019
by Mircea Namolaru, et al.

The main motivation of this work is practical: to offer computationally and theoretically scalable ways of structuring large classes of computations. It started from attempts to optimize R code for machine learning/artificial intelligence algorithms on huge data sets that, due to their size, must be handled in an incremental (online) fashion. Our targets are large classes of relational (attribute-based), mathematical (index-based), and graph computations. We wanted to use powerful computation representations that emerged in AI (artificial intelligence)/ML (machine learning), such as BNs (Bayesian networks) and CNNs (convolutional neural networks). For the classes of computation we address, and for our HPC (high-performance computing) needs, the current solutions for translating computations into such representations need to be extended. Our results show that the classes of computation we target can be tree-structured, with a probability distribution (defining a DBN, i.e., a Dynamic Bayesian Network) associated with the tree. Moreover, this DBN may be viewed as a recursive CNN (convolutional neural network). Within this tree-like structure, classification into classes whose size is bounded by a parameter may be performed. These results are at the core of very powerful, yet highly practical, algorithms for restructuring and parallelizing computations. The mathematical background required for an in-depth presentation, exposing the full generality of our approach, is the subject of a subsequent paper. In this paper, we work in a limited (but important) framework that can be understood with rudiments of linear algebra and graph theory. The focus is on applicability: most of this paper discusses the usefulness of our approach for solving hard compilation problems related to automatic parallelism.
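The abstract only sketches these ideas; as a rough illustration of the kind of tree restructuring it alludes to, the following minimal R sketch (R being the setting the work started from) recasts a sequential reduction over a large data set as a balanced binary tree of partial results, so that subtrees can be computed in parallel and new blocks folded in incrementally. The function tree_reduce and the blocking scheme are our own illustrative assumptions, not the paper's construction.

    # Hypothetical illustration, not the paper's algorithm: restructure a
    # sequential reduction as a balanced binary tree of partial results.
    tree_reduce <- function(parts, op = `+`) {
      # parts: a list of partial results (e.g., per-block sums)
      while (length(parts) > 1) {
        pairs <- split(parts, ceiling(seq_along(parts) / 2))  # group in twos
        parts <- lapply(pairs, function(p) Reduce(op, p))     # combine pairs
      }
      parts[[1]]
    }

    # Usage: per-block partial sums (parallelizable), combined tree-wise.
    x <- runif(1e6)
    blocks <- split(x, ceiling(seq_along(x) / 1e4))  # 100 blocks of 10,000
    partials <- lapply(blocks, sum)
    stopifnot(all.equal(tree_reduce(partials), sum(x)))

Under this reading, an online update amounts to appending a new block's partial result and recombining along the tree, which is what makes such a restructuring attractive for incremental computation.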
