Attacks on NLP Models: Prompt Injection & More

October 10, 2025 by karamdeep1990@gmail.com

Prompt injection, model poisoning, and defensive prompt engineering.