SFT

Creator: Seonglae Cho
Created: 2023 Jul 15 17:06
Edited: 2025 Feb 4 16:46

Supervised Fine-Tuning

Datasets for AI come in three types

  • Problems with solutions → SFT
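In practice an SFT dataset is a collection of (prompt, completion) records. A minimal sketch of such records and how one is flattened into a single training sequence (the field names and eos token here are illustrative assumptions, not a fixed schema):

```python
import json

# Hypothetical SFT records: each pairs a problem (prompt) with its solution (completion).
records = [
    {"prompt": "What is 2 + 2?", "completion": "2 + 2 = 4."},
    {"prompt": "Name the capital of France.", "completion": "Paris."},
]

def to_training_text(record, eos="</s>"):
    # Concatenate prompt and completion into one sequence; during SFT
    # the loss is typically computed on the completion part only.
    return f"{record['prompt']}\n{record['completion']}{eos}"

# JSONL is a common on-disk format for such datasets.
jsonl = "\n".join(json.dumps(r) for r in records)
print(to_training_text(records[0]))
```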
 
 
 
 
SFT Memorizes, RL Generalizes (AI Memory, Model Generalization, OOD)
While the provocative title is not exactly correct, the paper offers insight that extends even to multimodality.
SFT Memorizes, RL Generalizes: A Comparative Study of Foundation Model Post-training
SFT rarely alters the underlying model capabilities which means practitioners can unintentionally remove a model’s safety wrapper by merely fine-tuning it on a superficially unrelated task
openreview.net
OOD generalization is crucial given the wide range of real-world scenarios in which these models are being used, while output diversity refers to the model’s ability to generate varied outputs and is important for a variety of use cases
RLHF generalizes better than SFT to new inputs, particularly as the distribution shift between train and test becomes larger. However, RLHF significantly reduces output diversity compared to SFT across a variety of measures, implying a tradeoff in current LLM fine-tuning methods between generalization and diversity.
arxiv.org
Supervised Fine-tuning Trainer
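Under the hood, SFT is ordinary next-token cross-entropy, usually masked so that only the completion tokens contribute to the loss. A toy sketch of that masked objective (illustrative numbers; this is not TRL's actual implementation):

```python
import math

def sft_loss(token_probs, loss_mask):
    # Negative log-likelihood averaged over supervised (completion) tokens.
    # token_probs: probability the model assigns to each target token.
    # loss_mask: 1 for completion tokens, 0 for prompt tokens (no loss on the prompt).
    losses = [-math.log(p) for p, m in zip(token_probs, loss_mask) if m]
    return sum(losses) / len(losses)

# Two prompt tokens (masked) followed by two completion tokens (supervised).
probs = [0.9, 0.8, 0.5, 0.25]
mask  = [0,   0,   1,   1]
print(sft_loss(probs, mask))
```

Raising a completion token's probability lowers the loss; the prompt tokens are ignored entirely, which is why SFT only teaches the model to imitate the given solutions.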
 
 
