Authors: Divya Jayant Sarode, Divya Prakash Surwade
DOI: 10.46335/IJIES.2025.10.8.2
Abstract – The rapid advancement of artificial intelligence (AI) applications has brought security challenges, particularly in the form of adversarial machine learning (AML) attacks. As organizations worldwide invest in developing their own large language models and AI-driven applications, concerns over data security and model integrity have grown significantly. AML attacks pose a serious threat by manipulating machine learning models, often leading to a drastic decline in their accuracy and reliability. These attacks are especially alarming in critical domains such as healthcare and autonomous transportation, where compromised AI systems can have severe real-world consequences.
This paper systematically explores various AML attack strategies, categorizing them based on adversarial techniques and tactics. It also examines their impact on machine learning models and highlights vulnerabilities that attackers exploit. Additionally, we review open-source tools designed to test AI and ML systems against adversarial threats, providing organizations with practical solutions for security assessment. By presenting a comprehensive analysis and actionable security recommendations, this study aims to assist organizations in safeguarding their machine learning models and ensuring robust AI deployment in real-world applications.
Received on: 09 May, 2025; Revised on: 15 June, 2025; Published on: 17 June, 2025
Innovative Scientific Publication,
Nagpur, 440036, India
Email: ijiesjournal@gmail.com, journalijies@gmail.com
© Copyright 2025 IJIES