Scalable Data Augmentation for Deep Learning

03/22/2019
by Yuexi Wang, et al.

Scalable Data Augmentation (SDA) provides a framework for training deep learning models using auxiliary hidden layers, with scalable MCMC available for network training and inference. SDA offers a number of computational advantages over traditional algorithms: it avoids backtracking and local modes, and it can perform optimization with stochastic gradient descent (SGD) in TensorFlow. Standard deep neural networks with logit, ReLU and SVM activation functions are straightforward to implement. To illustrate our architectures and methodology, we use Pólya-Gamma logit data augmentation on a number of standard datasets. Finally, we conclude with directions for future research.
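For readers unfamiliar with the Pólya-Gamma trick the abstract builds on, the sketch below illustrates the standard Pólya-Gamma Gibbs sampler for Bayesian logistic regression (Polson, Scott & Windle, 2013), the building block behind the logit augmentation mentioned above. This is a self-contained NumPy illustration, not the authors' implementation: the names `sample_pg` and `pg_gibbs_logit` are our own, and the PG(1, c) draw uses a truncated infinite-sum representation, which is an approximation (a dedicated Pólya-Gamma sampler would be more accurate).

```python
# Minimal sketch of Pólya-Gamma logit data augmentation (NumPy only).
# Assumptions: y in {0, 1}, prior beta ~ N(b0, B0); function names are
# illustrative, and sample_pg is a truncated-series approximation.
import numpy as np

rng = np.random.default_rng(0)


def sample_pg(c, trunc=200):
    """Approximate draws from PG(1, c_i) for each entry of c, using the
    truncated sum  omega = (1/(2 pi^2)) sum_k g_k / ((k - 1/2)^2 + c^2/(4 pi^2)),
    with g_k ~ Exponential(1). Truncating at `trunc` terms is approximate."""
    c = np.atleast_1d(c)
    k = np.arange(1, trunc + 1)[:, None]             # series indices, shape (trunc, 1)
    g = rng.exponential(1.0, size=(trunc, c.size))   # Gamma(1, 1) = Exp(1) draws
    denom = (k - 0.5) ** 2 + (c[None, :] ** 2) / (4 * np.pi ** 2)
    return (g / denom).sum(axis=0) / (2 * np.pi ** 2)


def pg_gibbs_logit(X, y, n_iter=1000, b0=None, B0=None):
    """Gibbs sampler for logistic regression via Pólya-Gamma augmentation."""
    n, p = X.shape
    b0 = np.zeros(p) if b0 is None else b0
    B0inv = np.eye(p) if B0 is None else np.linalg.inv(B0)
    kappa = y - 0.5                      # transformed response, y in {0, 1}
    beta = np.zeros(p)
    draws = np.empty((n_iter, p))
    for t in range(n_iter):
        # 1) Augment: omega_i | beta ~ PG(1, x_i' beta).
        omega = sample_pg(X @ beta)
        # 2) Update: beta | omega, y is exactly Gaussian.
        V = np.linalg.inv(X.T @ (omega[:, None] * X) + B0inv)
        m = V @ (X.T @ kappa + B0inv @ b0)
        beta = rng.multivariate_normal(m, V)
        draws[t] = beta
    return draws


# Toy usage: simulate logit data and check the posterior mean recovers beta.
X = rng.normal(size=(500, 3))
true_beta = np.array([1.0, -2.0, 0.5])
y = (rng.uniform(size=500) < 1.0 / (1.0 + np.exp(-X @ true_beta))).astype(float)
print(pg_gibbs_logit(X, y, n_iter=500)[250:].mean(axis=0))  # discard burn-in
```

The key point the augmentation exploits is that, conditional on the Pólya-Gamma variables, the logit likelihood becomes Gaussian in beta, so both Gibbs steps have closed-form conditionals.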
