Forbidden Question Set

Creator: Seonglae Cho
Created: 2025 Jul 1 14:54
Editor: Seonglae Cho
Edited: 2025 Jul 21 15:03
Refs
"Do Anything Now": Characterizing and Evaluating...
The misuse of large language models (LLMs) has drawn significant attention from the general public and LLM vendors. One particular type of adversarial prompt, known as jailbreak prompt, has...
https://arxiv.org/abs/2308.03825
TrustAIRLab/forbidden_question_set · Datasets at Hugging Face
https://huggingface.co/datasets/TrustAIRLab/forbidden_question_set
TrustAIRLab/in-the-wild-jailbreak-prompts · Datasets at Hugging Face
https://huggingface.co/datasets/TrustAIRLab/in-the-wild-jailbreak-prompts
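The two datasets above are published on the Hugging Face Hub, so they can be loaded with the third-party `datasets` library. A minimal sketch for the forbidden-question set; the `train` split name is an assumption, and column names vary, so check the dataset card:

```python
DATASET_ID = "TrustAIRLab/forbidden_question_set"

def load_forbidden_questions(split: str = "train"):
    """Download the forbidden-question set from the Hugging Face Hub.

    Requires `pip install datasets`; the import is deferred so this
    module still loads when the dependency is absent.
    """
    from datasets import load_dataset  # third-party, hits the network
    return load_dataset(DATASET_ID, split=split)

# Example usage (downloads the data on first call):
#   ds = load_forbidden_questions()
#   print(len(ds), ds.column_names)
```

The same call with the `TrustAIRLab/in-the-wild-jailbreak-prompts` ID retrieves the companion jailbreak-prompt dataset; that one ships multiple named configurations, so a config argument to `load_dataset` may also be needed.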
Copyright Seonglae Cho