  Published Paper Details:

  Paper Title

Navigating The Adversarial Landscape: A Comprehensive Survey of Threats and Safeguards in Machine Learning

  Authors

  Prof. Shital Jade,  Aditya Kadam,  Vipul Chaudhari,  Janhavi Chaudhari

  Keywords

Machine Learning Security, Robustness, Vulnerabilities, White-Box Attacks, Black-Box Attacks, Transfer Attacks, Physical Attacks, Defense Mechanisms, Adversarial Training, Robust Optimization, Feature Denoising, Certified Defense

  Abstract


In the vast landscape of machine learning, the emergence of adversarial threats has cast a shadow over the reliability and security of deployed models. With the proliferation of sophisticated attacks aimed at undermining the integrity of machine learning systems, the imperative for robust defenses has never been more pronounced. Against this backdrop, this paper embarks on a comprehensive journey through the adversarial landscape, surveying the myriad threats and safeguards that define the contemporary discourse in machine learning security. Under the banner of "Navigating the Adversarial Landscape," this survey endeavors to shed light on the intricate interplay between adversarial attacks and defensive strategies. By analyzing the anatomy of adversarial threats and examining the effectiveness of existing defenses, it seeks to equip readers with a nuanced understanding of the challenges and opportunities inherent in safeguarding machine learning systems. We delve into the details of adversarial attacks, encompassing a spectrum of techniques ranging from subtle perturbations to outright manipulations. From white-box to black-box attacks, and from transfer to physical attacks, we unravel the diverse tactics employed by adversaries to subvert machine learning systems. Amidst the looming specter of adversarial threats, however, glimmers of hope emerge through the pursuit of robust defense mechanisms. Through adversarial training, robust optimization, and certified defenses, among other strategies, researchers endeavor to fortify machine learning models against adversarial incursions. Ultimately, the quest to navigate the adversarial landscape represents not only a technical challenge but also a moral imperative in safeguarding the integrity and trustworthiness of machine learning systems.
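The abstract refers to subtle input perturbations and to adversarial training among the surveyed defenses. The survey itself provides no code; the following is a minimal, illustrative sketch of FGSM-style adversarial training in PyTorch, in which the model, data, epsilon value, and helper names are assumptions made here for demonstration rather than anything specified by the authors.

```python
# Illustrative sketch only: FGSM adversarial example generation and a single
# adversarial-training step. Not the survey's own implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F

def fgsm_perturb(model, x, y, epsilon=0.03):
    # Fast Gradient Sign Method: x_adv = x + epsilon * sign(grad_x L(model(x), y)).
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    x_adv = x_adv + epsilon * x_adv.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()  # keep inputs in the valid [0, 1] range

def adversarial_training_step(model, optimizer, x, y, epsilon=0.03):
    # One optimization step on an even mix of clean and FGSM-perturbed inputs.
    model.train()
    x_adv = fgsm_perturb(model, x, y, epsilon)
    optimizer.zero_grad()
    loss = 0.5 * (F.cross_entropy(model(x), y) + F.cross_entropy(model(x_adv), y))
    loss.backward()
    optimizer.step()
    return loss.item()

if __name__ == "__main__":
    # Toy setup with random data, only to show the calling pattern.
    model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
    optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
    x = torch.rand(8, 1, 28, 28)       # batch of "images" scaled to [0, 1]
    y = torch.randint(0, 10, (8,))     # random class labels
    print(adversarial_training_step(model, optimizer, x, y))
```

Stronger defenses surveyed by the paper, such as multi-step (PGD) adversarial training, robust optimization, and certified defenses, follow the same pattern of training against worst-case perturbed inputs rather than this single-step variant.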

  IJCRT's Publication Details

  Unique Identification Number - IJCRTAF02082

  Paper ID - 260989

  Page Number(s) - 408-412

  Published in - Volume 12 | Issue 5 | May 2024

  DOI (Digital Object Identifier) -   

  Publisher Name - IJCRT | www.ijcrt.org | ISSN : 2320-2882

  E-ISSN Number - 2320-2882

  Cite this article

  Prof. Shital Jade, Aditya Kadam, Vipul Chaudhari, Janhavi Chaudhari, "Navigating The Adversarial Landscape: A Comprehensive Survey of Threats and Safeguards in Machine Learning", International Journal of Creative Research Thoughts (IJCRT), ISSN: 2320-2882, Volume 12, Issue 5, pp. 408-412, May 2024, Available at: http://www.ijcrt.org/papers/IJCRTAF02082.pdf
