Squashing activation functions in benchmark tests: towards eXplainable Artificial Intelligence using continuous-valued logic

10/17/2020
by Daniel Zeltner, et al.

Over the past few years, deep neural networks have shown excellent results in multiple tasks; however, there is still an increasing need to address the problem of interpretability to improve model transparency, performance, and safety. Achieving eXplainable Artificial Intelligence (XAI) by combining neural networks with continuous logic and multi-criteria decision-making tools is one of the most promising ways to approach this problem: such a combination reduces the black-box nature of neural models. The continuous logic-based neural model uses so-called Squashing activation functions, a parametric family of functions that satisfies natural invariance requirements and contains rectified linear units as a particular case. This work demonstrates the first benchmark tests that measure the performance of Squashing functions in neural networks. Three experiments were carried out to examine their usability, and their performance was compared with that of the most popular activation functions across five different network types. Performance was determined by measuring accuracy, loss, and time per epoch. These experiments and benchmarks show that Squashing functions are usable in practice and perform comparably to conventional activation functions. In a further experiment, nilpotent logical gates were implemented to demonstrate that simple classification tasks can be solved successfully and with high performance. The results indicate that, thanks to the embedded nilpotent logical operators and the differentiability of the Squashing function, it is possible to solve classification problems where other commonly used activation functions fail.
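To make the idea concrete, below is a minimal PyTorch sketch of a simplified one-parameter Squashing activation together with nilpotent-style logical gates built on top of it. The specific form S_β(x) = softplus_β(x) − softplus_β(x − 1), the class name `Squashing`, the gate functions, and the default β are illustrative assumptions, not the authors' implementation: the parametric family in the paper carries additional shift and scale parameters. The sketch only captures the core property, namely a differentiable approximation of the cut function cut(x) = min(max(x, 0), 1) that converges to the piecewise-linear, ReLU-like cut as β → ∞.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class Squashing(nn.Module):
    """Simplified one-parameter Squashing activation (illustrative sketch).

    S_beta(x) = softplus_beta(x) - softplus_beta(x - 1) smoothly
    approximates the cut function clamp(x, 0, 1) and converges to it as
    beta -> infinity. Unlike the hard cut, it is differentiable everywhere,
    so gradients stay well defined during training.

    NOTE: the parametric family in the paper includes additional shift and
    scale parameters that this sketch omits.
    """

    def __init__(self, beta: float = 5.0):
        super().__init__()
        self.beta = beta

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Difference of two smooth-ReLU (softplus) terms; in the
        # beta -> infinity limit this is max(0, x) - max(0, x - 1),
        # i.e. clamp(x, 0, 1).
        return F.softplus(x, beta=self.beta) - F.softplus(x - 1.0, beta=self.beta)


def nilpotent_and(x: torch.Tensor, y: torch.Tensor, act: nn.Module) -> torch.Tensor:
    # Smooth version of the Lukasiewicz conjunction cut(x + y - 1).
    return act(x + y - 1.0)


def nilpotent_or(x: torch.Tensor, y: torch.Tensor, act: nn.Module) -> torch.Tensor:
    # Smooth version of the Lukasiewicz disjunction cut(x + y).
    return act(x + y)


if __name__ == "__main__":
    act = Squashing(beta=50.0)  # large beta approximates hard Boolean logic
    x = torch.tensor([0.0, 0.0, 1.0, 1.0])
    y = torch.tensor([0.0, 1.0, 0.0, 1.0])
    print(nilpotent_and(x, y, act))  # approx. [0, 0, 0, 1]
    print(nilpotent_or(x, y, act))   # approx. [0, 1, 1, 1]
```

With a large β the gates reproduce Boolean AND/OR on {0, 1} inputs almost exactly while remaining differentiable, which is what allows such logical structure to be embedded in a network and trained by gradient descent.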
