Practical Fast Gradient Sign Attack against Mammographic Image Classifier

01/27/2020
by Ibrahim Yilmaz, et al.

Artificial intelligence (AI) has been a topic of major research for many years. Especially with the emergence of deep neural networks (DNNs), these studies have been tremendously successful. Today, machines are capable of making faster, more accurate decisions than humans. Thanks to the great development of machine learning (ML) techniques, ML has been used in many different fields such as education, medicine, malware detection, and autonomous cars. In spite of this degree of interest and much successful research, ML models are still vulnerable to adversarial attacks. Attackers can manipulate clean data in order to fool ML classifiers and achieve their desired target. For instance, a benign sample can be modified to appear malicious, or a malicious one altered to appear benign, while the modification cannot be recognized by a human observer. This can lead to financial losses, serious injuries, and even deaths. The motivation behind this paper is to emphasize this issue and raise awareness. Therefore, we demonstrate the security gap of a mammographic image classifier against adversarial attack. We use mammographic images to train our model and evaluate its performance in terms of accuracy. We then poison the original dataset and generate adversarial samples that are misclassified by the model. Using the structural similarity index (SSIM), we analyze the similarity between clean images and adversarial images. Finally, we show how successful the attack is under different poisoning factors.
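The attack named in the title is the fast gradient sign method (FGSM), which perturbs each pixel by a small step epsilon in the direction of the sign of the loss gradient. As a minimal sketch of the pipeline the abstract describes, the code below assumes a trained PyTorch classifier `model`, images normalized to [0, 1], and scikit-image for SSIM; `epsilon` stands in for the poisoning factor, and all names are illustrative rather than the authors' implementation.

```python
import torch
import torch.nn.functional as F
from skimage.metrics import structural_similarity as ssim

def fgsm_attack(model, images, labels, epsilon):
    """Craft adversarial images: x_adv = x + epsilon * sign(grad_x loss)."""
    images = images.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(images), labels)
    loss.backward()
    # Step every pixel in the direction that increases the classifier's loss.
    adv = images + epsilon * images.grad.sign()
    # Keep pixel intensities within the valid [0, 1] range.
    return adv.clamp(0.0, 1.0).detach()

def ssim_to_clean(clean, adv):
    """SSIM between a clean image and its adversarial counterpart.

    Expects 2-D grayscale arrays (e.g. tensor.squeeze().numpy()) in [0, 1].
    """
    return ssim(clean, adv, data_range=1.0)
```

Larger values of epsilon make misclassification more likely but lower the SSIM score, i.e., the perturbation becomes easier for a human observer to notice; this trade-off is what the paper evaluates across different poisoning factors.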


Related research

09/07/2023 - Experimental Study of Adversarial Attacks on ML-based xApps in O-RAN
Open Radio Access Network (O-RAN) is considered as a major step in the e...

07/13/2023 - A Scenario-Based Functional Testing Approach to Improving DNN Performance
This paper proposes a scenario-based functional testing approach for enh...

01/30/2020 - Adversarial Attacks on Convolutional Neural Networks in Facial Recognition Domain
Numerous recent studies have demonstrated how Deep Neural Network (DNN) ...

02/08/2016 - Practical Black-Box Attacks against Machine Learning
Machine learning (ML) models, e.g., deep neural networks (DNNs), are vul...

07/28/2023 - Adversarial training for tabular data with attack propagation
Adversarial attacks are a major concern in security-centered application...

11/04/2018 - FAdeML: Understanding the Impact of Pre-Processing Noise Filtering on Adversarial Machine Learning
Deep neural networks (DNN)-based machine learning (ML) algorithms have r...

02/03/2021 - TAD: Trigger Approximation based Black-box Trojan Detection for AI
An emerging amount of intelligent applications have been developed with ...
