Robotic and Generative Adversarial Attacks in Offline Writer-independent Signature Verification

04/14/2022
by Jordan J. Bird, et al.

This study explores how robots and generative approaches can be used to mount successful false-acceptance adversarial attacks on signature verification systems. Initially, a convolutional neural network topology and data augmentation strategy are explored and tuned, producing an 87.12% accurate model for the verification of 2,640 human signatures. Two robots are then tasked with forging 50 signatures, where 25 are used for the verification attack and the remaining 25 are used for tuning of the model to defend against them. Adversarial attacks on the system show that there exists an information security risk; the Line-us robotic arm can fool the system 24% of the time and the iDraw 2.0 robot 32% of the time, with a conditional GAN succeeding around 30% of the time. Following transfer learning of robotic and generative data, adversarial attacks are reduced below the model threshold by both robots and the GAN. It is observed that tuning the model reduces the risk of attack by robots to 8%, and that conditional generative adversarial attacks can be reduced to 4%. Images are presented, and 5% ...
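The pipeline summarised above is, in essence, a binary image-classification CNN trained with data augmentation and then fine-tuned (via transfer learning) on robotic and GAN-generated forgeries. Below is a minimal sketch of that general approach, assuming TensorFlow/Keras; the layer stack, input resolution, augmentation settings, and training schedule are illustrative assumptions, not the tuned topology reported in the paper.

```python
# A minimal sketch (not the authors' published code) of a writer-independent
# offline signature-verification CNN with light data augmentation. All sizes
# and hyperparameters below are assumptions for illustration only.
import tensorflow as tf
from tensorflow.keras import layers, models

IMG_SHAPE = (128, 256, 1)  # assumed greyscale signature crops (height, width, channels)

def build_verifier() -> tf.keras.Model:
    """Binary classifier: genuine (1) vs. forged signature image (0)."""
    model = models.Sequential([
        layers.Input(shape=IMG_SHAPE),
        # Light geometric augmentation, standing in for the tuned strategy in the paper.
        layers.RandomRotation(0.02),
        layers.RandomTranslation(0.05, 0.05),
        layers.Conv2D(32, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Conv2D(64, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Conv2D(128, 3, activation="relu"),
        layers.GlobalAveragePooling2D(),
        layers.Dense(128, activation="relu"),
        layers.Dropout(0.5),
        layers.Dense(1, activation="sigmoid"),
    ])
    model.compile(optimizer="adam",
                  loss="binary_crossentropy",
                  metrics=["accuracy"])
    return model

# Defensive tuning by transfer learning: after training on human signatures,
# the same model is fine-tuned on robotic/GAN forgeries labelled as negatives.
# model = build_verifier()
# model.fit(human_signature_ds, epochs=20)   # hypothetical dataset of genuine vs. forged human signatures
# model.fit(robot_gan_forgery_ds, epochs=5)  # hypothetical dataset of robotic and generative forgeries
```

The second `fit` call mirrors the defensive tuning step described in the abstract, in which forged samples from the robots and the conditional GAN are added as negative training examples to the already-trained verifier.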
