PPT: Adds soft prompts into the pre-training stage to obtain a better initialization for downstream prompt tuning. With this initialization, prompt tuning can reach or even outperform full-model fine-tuning under both full-data and few-shot settings.
Paper: PPT: Pre-trained Prompt Tuning for Few-shot Learning (https://arxiv.org/abs/2109.04332)
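To make the idea concrete, here is a minimal PyTorch sketch of soft prompt tuning with an optional pre-trained prompt initialization. All names (`SoftPromptEncoder`, `pretrained_prompt`, the checkpoint path) are illustrative assumptions, not the paper's actual code; the point is only that PPT replaces the random prompt init with one learned during pre-training while the backbone PLM stays frozen.

```python
from typing import Optional

import torch
import torch.nn as nn


class SoftPromptEncoder(nn.Module):
    """Prepends a sequence of learnable prompt embeddings to input embeddings.

    The backbone PLM is kept frozen; only `self.prompt` is trained.
    In the PPT setting, `pretrained_prompt` would be a prompt learned
    during a pre-training stage, giving downstream prompt tuning a
    better starting point than random initialization.
    """

    def __init__(
        self,
        num_prompt_tokens: int,
        hidden_dim: int,
        pretrained_prompt: Optional[torch.Tensor] = None,
    ):
        super().__init__()
        if pretrained_prompt is not None:
            # PPT-style init: reuse a prompt trained at pre-training time.
            self.prompt = nn.Parameter(pretrained_prompt.clone())
        else:
            # Vanilla prompt tuning: small random initialization.
            self.prompt = nn.Parameter(
                torch.randn(num_prompt_tokens, hidden_dim) * 0.02
            )

    def forward(self, input_embeds: torch.Tensor) -> torch.Tensor:
        # input_embeds: (batch, seq_len, hidden_dim)
        batch_size = input_embeds.size(0)
        # Broadcast the shared prompt across the batch and prepend it.
        prompt = self.prompt.unsqueeze(0).expand(batch_size, -1, -1)
        return torch.cat([prompt, input_embeds], dim=1)


# Hypothetical usage: a 20-token prompt for a 768-dim PLM.
# prompt_init = torch.load("pretrained_prompt.pt")  # assumed checkpoint
# encoder = SoftPromptEncoder(20, 768, pretrained_prompt=prompt_init)
```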