Texonom / Engineering / Data Engineering / Artificial Intelligence / AI Problem / AI Hacking / AI Red teaming / Adversarial Attack / Adversarial Training Attack / Nightshade
Nightshade

Created: 2023 Oct 25 11:45
Creator: Seonglae Cho
Editor: Seonglae Cho
Edited: 2024 Jan 14 9:45
Refs
Prompt-Specific Poisoning Attacks on Text-to-Image Generative Models
https://arxiv.org/abs/2310.13828
"Data poisoning attacks manipulate training data to introduce unexpected behaviors into machine learning models at training time. For text-to-image generative models with massive training datasets,..."
 
 


Copyright Seonglae Cho