DPO

Creator
Seonglae Cho
Created
2023 Sep 24 4:20
Edited
2025 Nov 7 11:23
Refs
Refs
RLHF
SFT
PPO
KTO

Direct Preference Optimization

It literally upweights the preferred response while downweighting the unpreferred one, a remarkably simple mechanism.
It can learn human preferences without RL, using significantly less memory than the
PPO
architecture in RLHF.
The model incorporates human pairwise preferences directly into training, using a set of preference pairs and their logits.
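As a minimal sketch, the DPO objective reduces to a log-sigmoid loss over the difference of policy-vs-reference log-ratios for the preferred and unpreferred responses. The function below assumes per-sequence log-probabilities are already computed; the names are illustrative, not from the paper.

```python
import math

def dpo_loss(policy_chosen_logp, policy_rejected_logp,
             ref_chosen_logp, ref_rejected_logp, beta=0.1):
    # Implicit reward = beta * log-ratio between policy and frozen reference
    chosen_reward = beta * (policy_chosen_logp - ref_chosen_logp)
    rejected_reward = beta * (policy_rejected_logp - ref_rejected_logp)
    margin = chosen_reward - rejected_reward
    # -log sigmoid(margin): minimized when the chosen response outscores the rejected one
    return -math.log(1.0 / (1.0 + math.exp(-margin)))
```

Minimizing this loss pushes the policy to raise the probability of the chosen response relative to the reference while lowering it for the rejected one, with `beta` controlling how far the policy may drift from the reference.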
 
https://arxiv.org/pdf/2402.01306
 
 

Implementation

towardsdatascience.com
arxiv.org
Direct Preference Optimization (paper review)
Direct Preference Optimization: Your Language Model is Secretly a Reward Model (paper review)

Self-Rewarding Language Models

Paper page - Self-Rewarding Language Models

DPO Datasets

argilla/OpenHermesPreferences · Datasets at Hugging Face

sDPO from upstage

sDPO: Don’t Use Your Data All at Once
As development of large language models (LLM) progresses, aligning them with human preferences has become increasingly important. We propose stepwise DPO (sDPO), an extension of the recently popularized direct preference optimization (DPO) for alignment tuning. This approach involves dividing the available preference datasets and utilizing them in a stepwise manner, rather than employing it all at once. We demonstrate that this method facilitates the use of more precisely aligned reference models within the DPO training framework. Furthermore, sDPO trains the final model to be more performant, even outperforming other popular LLMs with more parameters.
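The training loop described in the abstract can be sketched as follows: partition the preference data, and after each step promote the just-trained model to be the reference for the next step. The `train_dpo` callable and partitioning scheme here are assumptions for illustration, not the paper's exact setup.

```python
def sdpo(initial_model, preference_data, num_steps, train_dpo):
    """Stepwise DPO sketch: each step trains against a progressively
    better-aligned reference model rather than the frozen SFT model."""
    # Split the preference dataset into num_steps roughly equal chunks
    chunks = [preference_data[i::num_steps] for i in range(num_steps)]
    reference = initial_model
    policy = initial_model
    for chunk in chunks:
        # train_dpo is assumed to return a new policy trained with DPO
        # on this chunk against the current reference
        policy = train_dpo(policy, reference, chunk)
        reference = policy  # next step uses the updated model as reference
    return policy
```

The key design point is that the reference model is no longer static: each chunk is optimized against the previous step's output, which the paper argues yields a more precisely aligned reference at every stage.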

Toxicity reduction interpretation

DPO reduces toxicity not by suppressing a few neurons, but via distributed activation shifts across all MLP neurons. It operates through the balanced action of four neuron groups:
  • TP↓: toxicity-aligned, positive activation → decreased
  • TN↓: toxicity-aligned, negative activation → decreased
  • AP↑: anti-toxicity-aligned, positive activation → increased (anti-toxicity reinforcement)
  • AN↑: anti-toxicity-aligned, negative activation → increased (anti-toxicity reinforcement)
Patching the activations of all four groups to their post-DPO values reproduces or exceeds DPO's effect; in contrast, patching only the toxic neurons has minimal effect.
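The four-group taxonomy above can be made concrete with a small sketch: classify each MLP neuron by the sign of its alignment with a toxicity probe direction and the sign of its activation. The function and thresholds are illustrative assumptions, not the paper's implementation.

```python
def classify_neuron(alignment, activation):
    """Assign a neuron to TP/TN/AP/AN based on (hypothetical)
    toxicity-probe alignment and activation sign."""
    group = "T" if alignment > 0 else "A"   # toxicity- vs anti-toxicity-aligned
    sign = "P" if activation > 0 else "N"   # positive vs negative activation
    return group + sign
```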
This contrasts with earlier work claiming that DPO merely suppresses a few toxic neurons (arxiv.org).
 
 

Recommendations