Language Model RL uses sequence-level rewards (a single reward for the complete answer), but the actual training update is applied at the token level (sketch below)
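
A minimal PyTorch sketch of this setup (function and tensor names are illustrative, not from the source): the one scalar reward per answer is broadcast to every answer token, so the policy-gradient loss is still computed token by token.

```python
import torch

def token_level_pg_loss(token_logprobs: torch.Tensor,    # (B, T) log pi(y_t | x, y_<t)
                        sequence_rewards: torch.Tensor,  # (B,)  one reward per answer
                        answer_mask: torch.Tensor) -> torch.Tensor:  # (B, T) 1 on answer tokens
    """REINFORCE-style loss: a sequence reward broadcast to every token."""
    # Broadcast the single per-answer reward to every answer-token position.
    per_token_advantage = sequence_rewards.unsqueeze(1) * answer_mask  # (B, T)
    # Reward-weighted log-likelihood, averaged over the answer tokens.
    loss = -(token_logprobs * per_token_advantage).sum() / answer_mask.sum().clamp(min=1)
    return loss
```
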
Stabilizing an MoE like a 'dense model' by freezing the expert routing (router/gating parameters) during RL training (sketch below)
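
A minimal sketch of one way to freeze routing in PyTorch; identifying router parameters by the substrings `router`/`gate` in their names is an assumption, not something stated in the source.

```python
import torch.nn as nn

def freeze_expert_routing(model: nn.Module) -> None:
    """Freeze router/gating parameters so expert assignment stays fixed during RL updates."""
    for name, param in model.named_parameters():
        if "router" in name or "gate" in name:  # naming convention is an assumption
            param.requires_grad = False
```
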
The token-level objective is a first-order approximation of the sequence-level objective, and the approximation holds only when both of the following are small (see the derivation sketch after this list):
- Training–Inference Discrepancy (the gap between the token probabilities computed by the rollout/inference engine and by the training framework)
- Policy Staleness (the difference between the rollout policy and the learning policy)
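
A sketch of the first-order argument in LaTeX; the notation is assumed rather than taken from the source. Write each per-token importance ratio between the learner policy and the rollout policy as $1+\epsilon_t$ and expand the sequence-level product.

```latex
% r_t: per-token ratio between the learner policy \pi_\theta and the rollout policy
% \pi_{\mathrm{roll}} (inference engine, possibly a few updates stale).
\[
  r_t \;=\; \frac{\pi_\theta(y_t \mid x, y_{<t})}{\pi_{\mathrm{roll}}(y_t \mid x, y_{<t})}
  \;=\; 1 + \epsilon_t
\]
% The sequence-level ratio is the product of the token ratios; to first order:
\[
  \prod_{t=1}^{T} r_t \;=\; \prod_{t=1}^{T} (1+\epsilon_t)
  \;=\; 1 + \sum_{t=1}^{T} \epsilon_t + O(\epsilon^2)
\]
% The token-level objective keeps only the linear \sum_t \epsilon_t part, so it agrees
% with the sequence-level objective only when every \epsilon_t is small, i.e. when the
% training-inference discrepancy and the policy staleness are both small.
```
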

Seonglae Cho