Mitigating Adversarial Attacks in Deepfake Detection: An Exploration of Perturbation and AI Techniques

02/22/2023
by Saminder Dhesi et al.

Deep learning is a crucial aspect of machine learning, but it also leaves these techniques vulnerable to adversarial examples, which appear across a wide range of applications. Such examples can even be aimed at deceiving humans, enabling the creation of false media such as deepfakes, which are often used to shape public opinion and damage the reputation of public figures. This article explores the concept of adversarial examples, which consist of perturbations added to clean images or videos, and their ability to deceive DL algorithms. The proposed approach achieved a precision of 76.2%.
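As a rough illustration of how such perturbations work, the sketch below applies the fast gradient sign method (FGSM), a standard way to craft adversarial examples. The linear "detector" here is a hypothetical stand-in, not the paper's model: a small bounded perturbation in the direction of the loss gradient measurably lowers the detector's confidence in the true label.

```python
import numpy as np

# Minimal FGSM sketch (hypothetical setup, not the article's actual model):
# perturb a clean input x by x_adv = x + eps * sign(grad_x loss), which
# increases the detector's loss while keeping the change small and bounded.

rng = np.random.default_rng(0)
w = rng.normal(size=16)  # fixed weights of a toy logistic "detector"
b = 0.0

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def predict(x):
    # Probability the detector assigns to the label "real"
    return sigmoid(w @ x + b)

def fgsm(x, y, eps):
    # For binary cross-entropy, the input gradient is (p - y) * w
    p = predict(x)
    grad_x = (p - y) * w
    return x + eps * np.sign(grad_x)

x = rng.normal(size=16)   # a "clean" input
y = 1.0                   # true label: real
x_adv = fgsm(x, y, eps=0.5)

# The perturbation is bounded by eps in every coordinate...
assert np.max(np.abs(x_adv - x)) <= 0.5 + 1e-12
# ...yet the detector's confidence in the true label drops.
assert predict(x_adv) < predict(x)
print(f"clean: {predict(x):.3f}  adversarial: {predict(x_adv):.3f}")
```

In image settings the same idea is applied pixel-wise with a small `eps`, so the perturbed image looks unchanged to a human while the model's output shifts.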
