Energy-efficient DNN Inference on Approximate Accelerators Through Formal Property Exploration

07/25/2022
by Ourania Spantidi, et al.

Deep Neural Networks (DNNs) are heavily utilized in modern applications and are putting energy-constrained devices to the test. To mitigate high energy consumption, approximate computing has been employed in DNN accelerators to balance the accuracy-energy trade-off. However, the approximation-induced accuracy loss can be very high and drastically degrade the performance of the DNN. Therefore, there is a need for a fine-grained mechanism that assigns specific DNN operations to approximation in order to maintain acceptable DNN accuracy while also achieving low energy consumption. In this paper, we present an automated framework for weight-to-approximation mapping that enables formal property exploration for approximate DNN accelerators. At the MAC unit level, our experimental evaluation surpassed already energy-efficient mappings by more than 2x in terms of energy gains, while also supporting significantly finer-grained control over the introduced approximation.
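To make the idea of a weight-to-approximation mapping concrete, the sketch below shows one simple way such a mapping could look in practice. It is not the paper's actual framework: the approximation modes, the magnitude-based heuristic, and the threshold values are assumptions made purely for illustration, whereas the paper derives its mappings through formal property exploration.

# Hypothetical sketch of a weight-to-approximation mapping (illustration only,
# not the framework described in the paper). Each weight of a layer is assigned
# to one mode of a reconfigurable approximate MAC unit, so that high-magnitude
# weights are computed exactly and low-magnitude weights use cheaper, more
# aggressive approximation.
import numpy as np

# Assumed modes of a hypothetical MAC unit: a larger index means more
# aggressive approximation and lower energy per multiplication.
approx_modes = {0: "exact", 1: "low_approx", 2: "high_approx"}

def map_weights_to_modes(weights: np.ndarray, thresholds=(0.5, 0.1)) -> np.ndarray:
    """Assign an approximation mode to every weight based on its magnitude.

    Relative magnitudes above thresholds[0] stay exact, those above
    thresholds[1] use mild approximation, and the rest use the most
    aggressive mode. In a real flow, these thresholds would come from an
    exploration step rather than being fixed constants.
    """
    mags = np.abs(weights) / (np.abs(weights).max() + 1e-12)
    modes = np.full(weights.shape, 2, dtype=np.int8)  # default: most approximate
    modes[mags > thresholds[1]] = 1                   # moderately important weights
    modes[mags > thresholds[0]] = 0                   # most important weights stay exact
    return modes

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    layer_weights = rng.normal(scale=0.2, size=(8, 8))
    mode_map = map_weights_to_modes(layer_weights)
    for m, name in approx_modes.items():
        print(f"{name}: {(mode_map == m).sum()} weights")

Running the sketch prints how many weights of a toy 8x8 layer fall into each assumed approximation mode, which conveys the kind of fine-grained, per-weight control the abstract refers to.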
