SMU: smooth activation function for deep networks using smoothing maximum technique

11/08/2021
by   Koushik Biswas, et al.

Deep learning researchers have a keen interest in proposing novel activation functions that can boost network performance. A good choice of activation function can significantly improve network performance. Handcrafted activations are the most common choice in neural network models, and ReLU is the most popular in the deep learning community due to its simplicity, though it has some serious drawbacks. In this paper, we propose a novel activation function based on a smooth approximation of known activation functions such as Leaky ReLU, which we call the Smooth Maximum Unit (SMU). Replacing ReLU with SMU, we obtain a 6.22% improvement on the CIFAR100 dataset with the ShuffleNet V2 model.
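To illustrate the idea named in the title, here is a minimal sketch of a smoothing-maximum approximation applied to Leaky ReLU. It uses the erf-based smooth maximum max(a, b) ≈ ((a + b) + (a − b)·erf(μ(a − b))) / 2; the parameter values `alpha` and `mu` are illustrative defaults, not the paper's tuned settings, and the exact formulation here is an assumption based on the abstract's description rather than the paper's definition.

```python
import math

def leaky_relu(x, alpha=0.25):
    """Leaky ReLU is max(x, alpha * x): non-smooth at x = 0."""
    return max(x, alpha * x)

def smu(x, alpha=0.25, mu=1.0):
    """Sketch of a Smooth Maximum Unit: a smooth approximation
    of Leaky ReLU using an erf-based smooth maximum.
    alpha, mu are illustrative, not the paper's settings."""
    a, b = x, alpha * x
    # smooth max(a, b) ~= ((a + b) + (a - b) * erf(mu * (a - b))) / 2
    return ((a + b) + (a - b) * math.erf(mu * (a - b))) / 2

# Away from zero, smu closely matches leaky_relu; near zero it is
# smooth, which avoids the kink in ReLU-style activations.
print(smu(5.0), leaky_relu(5.0))    # nearly identical
print(smu(-5.0), leaky_relu(-5.0))  # nearly identical
```

Larger `mu` makes the approximation tighter around zero; as `mu` grows, `smu` converges pointwise to `leaky_relu`.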
