DPO

Creator
Seonglae Cho
Created
2023 Sep 24 4:20
Edited
2024 Aug 8 2:19
Refs
RLHF
SFT
PPO

Direct Preference Optimization

Learns human preferences without RL, using far less memory than the PPO-based architecture in RLHF.
A set of user preference pairs is reflected directly in model training: the model's logits on the chosen and rejected responses drive the objective.
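The objective needs no reward-model rollout: each preference pair contributes a logistic loss on the margin between the policy's and the frozen reference model's log-probabilities of the chosen and rejected answers. A minimal PyTorch sketch, assuming the per-sequence log-probs have already been computed:

```python
# Minimal sketch of the DPO loss (Rafailov et al., 2023).
# Inputs are (batch,) tensors of summed sequence log-probs log pi(y|x)
# under the trainable policy and the frozen reference model.
import torch
import torch.nn.functional as F

def dpo_loss(policy_chosen_logps: torch.Tensor,
             policy_rejected_logps: torch.Tensor,
             ref_chosen_logps: torch.Tensor,
             ref_rejected_logps: torch.Tensor,
             beta: float = 0.1) -> torch.Tensor:
    # Implicit reward of each completion: beta * log( pi_theta(y|x) / pi_ref(y|x) )
    chosen_rewards = beta * (policy_chosen_logps - ref_chosen_logps)
    rejected_rewards = beta * (policy_rejected_logps - ref_rejected_logps)
    # Push the chosen reward above the rejected one via a logistic (log-sigmoid) loss
    return -F.logsigmoid(chosen_rewards - rejected_rewards).mean()
```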
{'prompt': '<|im_start|>system\nYou are an AI assistant. You will be given a task. You must generate a detailed and long answer.<|im_end|>\n<|im_start|>user\nGenerate an approximately fifteen-word sentence that describes all this data: Midsummer House eatType restaurant; Midsummer House food Chinese; Midsummer House priceRange moderate; Midsummer House customer rating 3 out of 5; Midsummer House near All Bar One<|im_end|>\n<|im_start|>assistant\n', 'chosen': 'Midsummer House is a moderately priced Chinese restaurant with a 3/5 customer rating, located near All Bar One.<|im_end|>\n', 'rejected': ' Sure! Here\'s a sentence that describes all the data you provided:\n\n"Midsummer House is a moderately priced Chinese restaurant with a customer rating of 3 out of 5, located near All Bar One, offering a variety of delicious dishes."<|im_end|>\n'}
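The record above is a typical DPO training example: a ChatML-formatted prompt with a chosen and a rejected completion. A rough sketch of training on such records with Hugging Face TRL's DPOTrainer follows; the model name is a placeholder, and argument names have shifted across TRL releases (e.g. `processing_class` was previously `tokenizer`), so treat this as illustrative rather than copy-paste ready.

```python
# Hedged sketch: DPO fine-tuning on prompt/chosen/rejected records with TRL.
from datasets import Dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import DPOConfig, DPOTrainer

model_name = "Qwen/Qwen2-0.5B-Instruct"  # placeholder: any SFT'd causal LM
model = AutoModelForCausalLM.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)

# Preference pairs in the same prompt / chosen / rejected layout as the record above
train_dataset = Dataset.from_list([
    {"prompt": "...", "chosen": "...", "rejected": "..."},
])

args = DPOConfig(output_dir="dpo-out", beta=0.1, per_device_train_batch_size=2)
trainer = DPOTrainer(
    model=model,
    ref_model=None,  # None -> TRL clones a frozen reference model internally
    args=args,
    train_dataset=train_dataset,
    processing_class=tokenizer,  # named `tokenizer=` in older TRL releases
)
trainer.train()
```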
 
 
 

sDPO (stepwise DPO) from Upstage

 
 
 

Implementation

Self-Rewarding Language Models

DPO Datasets

 
 

Recommendations