Sparse training is emerging as a promising avenue for reducing the compu...
Convolutional neural networks (CNNs) have found many applications in tas...
Creating high performance implementations of deep learning primitives on...
Graph Neural Networks (GNNs) use a fully-connected layer to extract feat...
Deep Neural Networks (DNNs) have revolutionized many aspects of our live...
At the heart of deep learning training and inferencing are computational...
Ensemble learning is a prevalent method in machine learnin...
We propose K-TanH, a novel, highly accurate, hardware efficient approxim...
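The K-TanH abstract is truncated here, so its actual construction is not available in this text. As a generic illustration of the idea of a hardware-friendly tanh approximation, the sketch below builds a piecewise-linear lookup table (all names, bin counts, and ranges are my assumptions, not the paper's method):

```python
import numpy as np

def make_tanh_lut(n_bins=64, x_max=4.0):
    """Precompute breakpoints and per-segment slope/intercept for a
    piecewise-linear tanh approximation on [-x_max, x_max].
    (Illustrative only; not the K-TanH construction.)"""
    xs = np.linspace(-x_max, x_max, n_bins + 1)
    ys = np.tanh(xs)
    slopes = (ys[1:] - ys[:-1]) / (xs[1:] - xs[:-1])
    intercepts = ys[:-1] - slopes * xs[:-1]
    return xs, slopes, intercepts

def tanh_approx(x, xs, slopes, intercepts):
    """Evaluate the table approximation; saturate outside the table range."""
    x = np.asarray(x, dtype=np.float64)
    x_clipped = np.clip(x, xs[0], xs[-1])
    idx = np.clip(np.searchsorted(xs, x_clipped, side="right") - 1,
                  0, len(slopes) - 1)
    return slopes[idx] * x_clipped + intercepts[idx]
```

With 64 segments over [-4, 4], the maximum error against `np.tanh` stays well below 0.01; a hardware implementation would replace the float index search with bit-field extraction from the input's binary representation.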
Low-precision is the first order knob for achieving higher Artificial In...
Deep neural networks (DNNs) have been enormously successful in tasks...
Reduced precision computation for deep neural networks is one of the key...
This paper presents the first comprehensive empirical study demonstratin...
As deep learning methods form a critical part in commercially important ...
The state-of-the-art (SOTA) for mixed precision training is dominated by...
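This abstract on mixed precision training is cut off before any detail. The core publicly known trick in this area is loss scaling: gradients are multiplied by a large constant so small values survive float16, then unscaled before the full-precision weight update. A minimal sketch (function names and the static-scale choice are my assumptions, not this paper's method):

```python
import numpy as np

def scaled_sgd_step(w, grad_fn, lr=0.01, loss_scale=1024.0):
    """One SGD step with static loss scaling (illustrative sketch).

    grad_fn(w, scale) returns the gradient of the *scaled* loss; casting
    it to float16 models the low-precision backward pass. The scale is
    divided back out in higher precision before updating the weights.
    """
    g16 = grad_fn(w, loss_scale).astype(np.float16)  # low-precision grads
    g32 = g16.astype(np.float32) / loss_scale        # unscale in fp32
    if not np.all(np.isfinite(g32)):                 # overflow: skip step
        return w
    return w - lr * g32
```

A gradient of 1e-8 underflows to zero in float16 (smallest subnormal is about 6e-8), so without scaling the update is lost; with a scale of 1024 it survives.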
The exponential growth in use of large deep neural networks has accelera...
Imitation learning algorithms learn viable policies by imitating an expe...
Sub-8-bit representations of DNNs incur some discernible loss of accuracy...
We propose a novel fine-grained quantization (FGQ) method to ternarize p...
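The FGQ abstract is truncated, so its exact thresholds and grouping are not given here. To illustrate what ternarization means, the sketch below uses the common ternary-weight heuristic (threshold at 0.7 times the mean absolute weight, scale equal to the mean of the surviving magnitudes) and applies it per small group, in the fine-grained spirit the name suggests; all specifics are assumptions, not FGQ itself:

```python
import numpy as np

def ternarize(w, delta_scale=0.7):
    """Map a weight vector to the three values {-alpha, 0, +alpha}
    using a standard ternary-weight heuristic (not FGQ's exact rule)."""
    w = np.asarray(w, dtype=np.float64)
    delta = delta_scale * np.mean(np.abs(w))
    mask = np.abs(w) > delta                       # weights worth keeping
    alpha = np.mean(np.abs(w[mask])) if mask.any() else 0.0
    return alpha * np.sign(w) * mask

def ternarize_grouped(w, group_size=4):
    """Fine-grained variant: ternarize each contiguous group of weights
    independently, so each group gets its own scale alpha."""
    w = np.asarray(w, dtype=np.float64).ravel()
    out = np.empty_like(w)
    for i in range(0, len(w), group_size):
        out[i:i + group_size] = ternarize(w[i:i + group_size])
    return out
```

For example, `ternarize([0.9, -0.8, 0.05, -0.02])` zeroes the two small weights and maps the large ones to ±0.85, the mean of their magnitudes.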
We propose a cluster-based quantization method to convert pre-trained fu...
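This abstract names cluster-based quantization but is cut off before the details. A widely known instance of the idea is 1-D k-means over the weights, where each weight is replaced by its cluster centroid so only a few distinct values need to be stored. The sketch below shows that generic approach (function name, iteration count, and initialization are my assumptions, not this paper's method):

```python
import numpy as np

def kmeans_quantize(w, n_clusters=4, n_iter=20, seed=0):
    """Quantize weights by 1-D k-means (illustrative sketch): each weight
    is replaced by its nearest cluster centroid, so the tensor holds only
    n_clusters distinct values plus a small index per weight."""
    w = np.asarray(w, dtype=np.float64).ravel()
    rng = np.random.default_rng(seed)
    centroids = rng.choice(w, size=n_clusters, replace=False)
    for _ in range(n_iter):
        # Assign each weight to its nearest centroid.
        assign = np.argmin(np.abs(w[:, None] - centroids[None, :]), axis=1)
        # Recompute each centroid as its cluster mean (skip empty clusters).
        for k in range(n_clusters):
            if np.any(assign == k):
                centroids[k] = w[assign == k].mean()
    assign = np.argmin(np.abs(w[:, None] - centroids[None, :]), axis=1)
    return centroids[assign], centroids
```

After quantization the tensor contains at most `n_clusters` distinct values, which is what makes the representation compact.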