See the sections in Language Models on:
Resources
- 10.4 Adversarial Examples, from Interpretable Machine Learning: A Guide for Making Black Box Models Explainable by Christoph Molnar
- Adversarial Attacks on Neural Networks: Exploring the Fast Gradient Sign Method
- Adversarial attacks with FGSM (Fast Gradient Sign Method) - PyImageSearch
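All three resources cover the Fast Gradient Sign Method (FGSM): perturb the input by a small step `eps` in the sign of the loss gradient with respect to the input, which maximally increases the loss under an L-infinity budget. As a quick refresher, here is a minimal NumPy sketch on a hypothetical logistic-regression model (the model, weights, and values are illustrative assumptions, not taken from the articles above):

```python
import numpy as np

def fgsm_perturb(x, w, b, y, eps):
    """FGSM for a binary logistic-regression model (illustrative).

    Takes one step of size eps in the sign of the gradient of the
    binary cross-entropy loss with respect to the input x.
    """
    # Model prediction: sigmoid(w . x + b)
    p = 1.0 / (1.0 + np.exp(-(np.dot(w, x) + b)))
    # Gradient of the BCE loss w.r.t. the input is (p - y) * w
    grad_x = (p - y) * w
    # FGSM step: each coordinate moves eps in the gradient's sign,
    # the worst-case direction under an L-infinity constraint
    return x + eps * np.sign(grad_x)

# Hypothetical demo: nudge an input labeled y=1 toward class 0
w = np.array([2.0, -1.0])
b = 0.0
x = np.array([1.0, 0.5])
x_adv = fgsm_perturb(x, w, b, y=1.0, eps=0.1)
```

After the step, the model's confidence in the true label drops even though `x_adv` differs from `x` by at most 0.1 per coordinate; against deep networks the gradient is obtained by backpropagation rather than a closed form, as the PyImageSearch tutorial shows.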
See also: