DPO

Created: 2023 Sep 24 4:20
Creator: Seonglae Cho
Edited: 2025 Jul 2 14:46
Refs
RLHF
SFT
PPO
KTO

Direct Preference Optimization

DPO literally upweights the preferred response while downweighting the unpreferred one, which is a very simple mechanism.
It can learn human preferences without RL, using significantly less memory than the PPO architecture used in RLHF.
The model directly incorporates the user's pairwise preferences into training, using a set of preference pairs and their logits.
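A minimal sketch of the DPO loss in PyTorch, assuming per-sequence log-probabilities of the chosen and rejected responses (summed over response tokens) are already computed for both the policy and a frozen reference model; the function name and the β default are illustrative, not taken from a specific library.

```python
import torch
import torch.nn.functional as F

def dpo_loss(policy_chosen_logps: torch.Tensor,
             policy_rejected_logps: torch.Tensor,
             ref_chosen_logps: torch.Tensor,
             ref_rejected_logps: torch.Tensor,
             beta: float = 0.1) -> torch.Tensor:
    """All inputs have shape (batch,); beta controls deviation from the reference model."""
    # Implicit rewards: how much more likely each response is under the policy than the reference
    chosen_rewards = beta * (policy_chosen_logps - ref_chosen_logps)
    rejected_rewards = beta * (policy_rejected_logps - ref_rejected_logps)
    # Logistic loss on the margin: upweights the chosen response, downweights the rejected one
    loss = -F.logsigmoid(chosen_rewards - rejected_rewards)
    return loss.mean()
```

Because the loss only needs log-probabilities from the policy and a frozen reference model, no reward model or PPO rollout machinery is required, which is where the memory savings come from.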
 
https://arxiv.org/pdf/2402.01306
 
 

Implementation

Self-Rewarding Language Models

DPO Datasets

sDPO from Upstage

 
 

Recommendations