
HoSNN: Adversarially-Robust Homeostatic Spiking Neural Networks with Adaptive Firing Thresholds

08/20/2023
by Hejia Geng et al.
The Regents of the University of California

Spiking neural networks (SNNs) offer promise for efficient and powerful neurally inspired computation. Like other types of neural networks, however, SNNs face the severe issue of vulnerability to adversarial attacks. We present the first study that draws inspiration from neural homeostasis to develop a bio-inspired solution that counters the susceptibility of SNNs to adversarial attacks. At the heart of our approach is a novel threshold-adapting leaky integrate-and-fire (TA-LIF) neuron model, which we use to construct the proposed adversarially robust homeostatic SNN (HoSNN). Distinct from traditional LIF models, our TA-LIF model incorporates a self-stabilizing dynamic thresholding mechanism, curtailing adversarial noise propagation and safeguarding the robustness of HoSNNs in an unsupervised manner. Theoretical analysis is presented to shed light on the stability and convergence properties of the TA-LIF neurons, underscoring their superior dynamic robustness under input distributional shifts over traditional LIF neurons. Remarkably, without explicit adversarial training, our HoSNNs demonstrate inherent robustness on CIFAR-10, with accuracy improving to 72.6%. Furthermore, with minimal FGSM adversarial training, our HoSNNs surpass previous models by 29.99% on CIFAR-10. Our findings offer a new perspective on harnessing biological principles to bolster the adversarial robustness and defense of SNNs, paving the way to more resilient neuromorphic computing.
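To make the thresholding mechanism concrete, below is a minimal Python/NumPy sketch of a leaky integrate-and-fire neuron with a spike-driven adaptive threshold. The update rule, parameter names (tau_theta, theta_rest, alpha), and constants are illustrative assumptions standing in for the paper's TA-LIF dynamics, not its exact formulation.

import numpy as np

def ta_lif_step(v, theta, x, tau_v=10.0, tau_theta=50.0,
                theta_rest=1.0, alpha=0.5, dt=1.0):
    """One discrete-time step of a threshold-adapting LIF neuron (illustrative).

    v          membrane potential
    theta      current firing threshold
    x          input current at this step
    tau_v      membrane time constant
    tau_theta  threshold adaptation time constant
    theta_rest baseline (resting) threshold
    alpha      threshold increment per spike (assumed rule)
    """
    # Leaky integration of the input current.
    v = v + dt / tau_v * (-v + x)
    # Spike if the membrane potential crosses the adaptive threshold.
    spike = float(v >= theta)
    # Hard reset after a spike.
    v = v * (1.0 - spike)
    # Homeostasis: each spike raises the threshold, which then decays
    # back toward its resting value, damping activity bursts driven by
    # perturbed (e.g., adversarial) inputs.
    theta = theta + alpha * spike + dt / tau_theta * (theta_rest - theta)
    return v, theta, spike

# Usage: run a single neuron on a noisy input train.
rng = np.random.default_rng(0)
v, theta = 0.0, 1.0
spikes = []
for t in range(200):
    v, theta, s = ta_lif_step(v, theta, x=rng.normal(1.2, 0.3))
    spikes.append(s)
print(f"firing rate: {np.mean(spikes):.3f}, final threshold: {theta:.3f}")

Because the threshold rises with firing and relaxes only slowly, sustained input perturbations are met with a higher effective firing barrier, which is the homeostatic intuition the abstract describes.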



Related research

- Adversarial Defense via Neural Oscillation inspired Gradient Masking (11/04/2022): Spiking neural networks (SNNs) attract great attention due to their low ...
- Securing Deep Spiking Neural Networks against Adversarial Attacks through Inherent Structural Parameters (12/09/2020): Deep Learning (DL) algorithms have gained popularity owing to their prac...
- A Comprehensive Analysis on Adversarial Robustness of Spiking Neural Networks (05/07/2019): In this era of machine learning models, their functionality is being thr...
- Unleashing the Potential of Spiking Neural Networks for Sequential Modeling with Contextual Embedding (08/29/2023): The human brain exhibits remarkable abilities in integrating temporally ...
- Adversarial Training and Provable Robustness: A Tale of Two Objectives (08/13/2020): We propose a principled framework that combines adversarial training and...