Adversarial Attacks and Defenses: Investigating How AI Systems Can Be Manipulated Through Adversarial Inputs and Methods to Defend Against Them
Author(s): Gaurav Kashyap
Publication #: 2412095
Date of Publication: 10.02.2021
Country: USA
Pages: 1-6
Published In: Volume 7 Issue 1 February-2021
DOI: https://doi.org/10.5281/zenodo.14540931
Abstract
Artificial intelligence (AI) systems have advanced significantly and are now deployed in critical fields such as finance, healthcare, and autonomous driving. Their widespread use has, however, exposed a serious weakness: their vulnerability to adversarial attacks. These attacks apply small, carefully crafted perturbations to input data that cause AI models to misbehave or produce incorrect predictions, often in ways imperceptible to humans. This paper examines the nature of adversarial attacks on AI systems, how they are constructed, their ramifications, and the various defense strategies that have been proposed to protect AI models. By summarizing the main adversarial attack methods and defenses, we aim to improve understanding of, and resilience against, adversarial threats in practical applications.
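To make the idea of "small, carefully crafted perturbations" concrete, the sketch below applies the Fast Gradient Sign Method (FGSM), one of the best-known attack families in this literature, to a toy logistic-regression classifier. The model, weights, and epsilon value are illustrative assumptions chosen for this sketch, not taken from the paper, and epsilon is deliberately large so the effect is visible in two dimensions.

```python
import numpy as np

def sigmoid(z):
    """Logistic function mapping a score to a probability."""
    return 1.0 / (1.0 + np.exp(-z))

def fgsm_perturb(x, y, w, b, epsilon):
    """Perturb input x by epsilon in the direction that increases the loss.

    Uses the analytic gradient of the binary cross-entropy loss for a
    logistic-regression model: dL/dx = (sigmoid(w.x + b) - y) * w.
    FGSM takes only the sign of that gradient, scaled by epsilon.
    """
    grad_x = (sigmoid(w @ x + b) - y) * w
    return x + epsilon * np.sign(grad_x)

# Toy model and input (illustrative values, not from the paper).
w = np.array([2.0, -1.0])
b = 0.0
x = np.array([0.5, -0.5])  # true label y = 1
y = 1.0

x_adv = fgsm_perturb(x, y, w, b, epsilon=1.0)

print(sigmoid(w @ x + b) > 0.5)      # clean input: classified as class 1
print(sigmoid(w @ x_adv + b) > 0.5)  # perturbed input: prediction flips
```

In realistic settings (e.g., images), epsilon is small enough that the perturbed input looks identical to a human, yet the gradient-aligned perturbation is still sufficient to flip the model's decision.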
Keywords: Artificial Intelligence (AI), Adversarial Attack, Adversarial Defense