Prompt Injection Attack
There is also a method of exploiting the model by embedding malicious content within images that get uploaded to the model
- Indirect prompt injection like Slang
- Direct prompt injection
Bard
Containing harmful data into Google docs which are considered as safe because it is google domain