Model Generalization

Creator
Seonglae Cho
Created
2023 May 2 1:41
Edited
2025 Jun 2 11:33

Central goal of machine learning (Interpolation + Extrapolation)

The goal is to predict unseen data: a model's generalization ability is its capability to adapt properly to new, previously unseen inputs.
Bias-Variance Trade-off
Balancing bias against variance (controlling model complexity) to improve model generalization.
OOD generalization is crucial given the wide range of real-world scenarios in which these models are used. Output diversity, the model's ability to generate varied outputs, is important for a variety of use cases.
RLHF generalizes better than SFT to new inputs, particularly as the distribution shift between train and test becomes larger. However, RLHF significantly reduces output diversity compared to SFT across a variety of measures, implying a tradeoff in current LLM fine-tuning methods between generalization and diversity.

Recommendations