Texonom
/Engineering/Data Engineering/Artificial Intelligence/AI Risk/AI Hacking/AI Redteaming/Adversarial Attack/Adversarial Training/Nightshade
Nightshade

Created: 2023 Oct 25 11:45
Creator: Seonglae Cho
Editor: Seonglae Cho
Edited: 2025 Aug 28 10:09
Refs
Prompt-Specific Poisoning Attacks on Text-to-Image Generative Models
Data poisoning attacks manipulate training data to introduce unexpected behaviors into machine learning models at training time. For text-to-image generative models with massive training datasets, ...
https://arxiv.org/abs/2310.13828
 
 
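The linked paper studies "prompt-specific" poisoning: a small number of poisoned image-text pairs aimed at a single prompt can corrupt how the model renders that one concept. A minimal sketch of the data-level idea, assuming illustrative names and a toy poison budget (this is not the Nightshade perturbation optimization itself, just the mismatched-pair concept):

```python
# Conceptual sketch of prompt-specific poisoning: replace a small
# fraction of a clean dataset with images of a different concept,
# captioned with the target prompt. All names/numbers are illustrative.

def build_poisoned_dataset(clean_pairs, poison_images, target_prompt, budget):
    """Swap up to `budget` clean pairs for poison images captioned with
    the target prompt, teaching the model a wrong image-text association."""
    poisoned = list(clean_pairs)
    for i, img in enumerate(poison_images[:budget]):
        poisoned[i] = (img, target_prompt)  # mismatched image-text pair
    return poisoned

# Toy example: 100 clean "dog" pairs, 5 cat-concept poison images
clean = [(f"dog_img_{i}", "a photo of a dog") for i in range(100)]
poisons = [f"cat_styled_img_{i}" for i in range(5)]
dataset = build_poisoned_dataset(clean, poisons, "a photo of a dog", budget=5)
print(sum(1 for img, _ in dataset if img.startswith("cat")))  # → 5
```

The paper's point is that because training data for text-to-image models is scraped at massive scale, even such a tiny targeted fraction can go unnoticed while degrading generation for the targeted prompt.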
Copyright Seonglae Cho