STANNIS: Low-Power Acceleration of Deep Neural Network Training Using Computational Storage

02/17/2020
by Ali HeydariGorji, et al.

This paper proposes a framework for distributed, in-storage training of neural networks on clusters of computational storage devices. Such devices not only contain hardware accelerators but also eliminate data movement between the host and storage, resulting in both improved performance and power savings. More importantly, this in-storage style of training ensures that private data never leaves the storage devices, while the sharing of public data remains fully under the framework's control. Experimental results show up to 2.7x speedup and a 69% reduction in energy consumption, with no significant loss in accuracy.
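The abstract does not spell out the training protocol, but the core idea, each storage device refining a shared model on data that never leaves it, with only parameters crossing the host/storage boundary, can be sketched briefly. The following is a minimal, hypothetical NumPy sketch, not STANNIS itself: names such as InStorageWorker and training_round are invented for illustration, and a simple logistic-regression update stands in for the hardware-accelerated neural-network training that the real framework runs inside the drives.

```python
# Hypothetical sketch of the in-storage training idea the abstract describes:
# each computational storage device trains on its own private data, and only
# model parameters (never raw samples) cross the storage/host boundary.
# All names here are illustrative, not from the paper.

import numpy as np

class InStorageWorker:
    """One computational storage device holding private data locally."""

    def __init__(self, X, y, dim, lr=0.1):
        self.X, self.y = X, y          # private data: never leaves the device
        self.lr = lr

    def train_local(self, w_global, steps=10):
        """Refine the global weights on local data (logistic-regression SGD)."""
        w = w_global.copy()
        for _ in range(steps):
            p = 1.0 / (1.0 + np.exp(-self.X @ w))        # predictions
            grad = self.X.T @ (p - self.y) / len(self.y) # mean gradient
            w -= self.lr * grad
        return w                       # only parameters cross the boundary


def training_round(workers, w_global):
    """Host aggregates per-device updates; raw data is never transferred."""
    updates = [wk.train_local(w_global) for wk in workers]
    return np.mean(updates, axis=0)


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    dim = 5
    w_true = rng.normal(size=dim)
    workers = []
    for _ in range(4):                 # four storage devices, disjoint data
        X = rng.normal(size=(200, dim))
        y = (X @ w_true > 0).astype(float)
        workers.append(InStorageWorker(X, y, dim))

    w = np.zeros(dim)
    for _ in range(20):                # 20 synchronization rounds
        w = training_round(workers, w)

    acc = np.mean([(1 / (1 + np.exp(-wk.X @ w)) > 0.5) == wk.y
                   for wk in workers])
    print(f"mean accuracy across devices: {acc:.3f}")
```

The privacy property the abstract emphasizes falls out of the structure: the host only ever sees parameter vectors, so private samples stay on their device by construction.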
