Preference Alignment: RLHF and DPO
Author: Mai Khoi TIEU (@tieukhoimai)
Reinforcement Learning from Human Feedback
Reinforcement Learning from Human Feedback (RLHF) is an advanced approach that leverages human preferences to train and enhance the quality of language models. This framework combines elements of reinforcement learning and supervised learning, allowing systems to learn and make decisions in a manner that aligns more closely with human preferences. Unlike traditional reinforcement learning methods, where models learn from rewards generated through interactions with the environment, RLHF uses human feedback as the source of guidance for the model. This feedback helps the system navigate complex decision-making processes and align with human expectations. RLHF can be observed in a variety of applications across different domains, ranging from recommendation systems and natural language processing to robotics and autonomous vehicles. By incorporating human feedback into the training process, RLHF has the potential to enhance model performance, improve user experience, and contribute to the development of responsible and ethical AI technologies.
RLHF is a multi-stage process that utilizes human guidance to effectively train AI models. The core steps involved are as follows [1]:
Step 1: Pretraining a Language Model
The process begins with a language model that has already been pre-trained using conventional training methods; various existing language models, such as BERT, RoBERTa, T5, or GPT, can serve as this initial model and starting point for the RLHF process. This step falls under supervised learning and is often referred to as Supervised Fine-Tuning (SFT).
The choice of the pre-trained language model may vary, ranging from smaller models to modern architectures with billions of parameters.
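As a rough illustration of this step, the sketch below runs a single SFT update with Hugging Face Transformers. The checkpoint name, prompt, and demonstration text are placeholders; a real setup would iterate over a demonstration dataset and typically mask the prompt tokens out of the loss.

```python
# Minimal SFT sketch (hypothetical prompt/demonstration; "gpt2" is a placeholder checkpoint).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

prompt = "Summarize: The cat sat on the mat."
demonstration = " A cat rested on a mat."

# Standard causal-LM objective: maximize the likelihood of the human-written
# demonstration given the prompt (labels = input_ids; the model shifts them internally).
inputs = tokenizer(prompt + demonstration, return_tensors="pt")
outputs = model(**inputs, labels=inputs["input_ids"])

outputs.loss.backward()  # one supervised fine-tuning step (optimizer omitted for brevity)
```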
Step 2: Gathering Data and Training a Reward Model
Data for this stage involves human interaction: users or experts provide feedback and evaluations on outputs generated by various LMs. The collected data is used to train a reward model (RM), or preference model, which is what distinguishes RLHF from previous techniques; its primary goal is to provide the reward function optimized within the reinforcement learning framework. The training process for the RM [2] is described below:
The SFT model is prompted with prompts $x$ to generate pairs of answers $(y_1, y_2) \sim \pi^{SFT}(y \mid x)$. Human labelers then evaluate these pairs, expressing a preference for one answer, denoted $y_w$ (the preferred completion) and $y_l$ (the dispreferred completion) for the prompt $x$.
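Such a comparison is commonly stored as a simple record of the prompt together with the chosen and rejected completions; the example below is a hypothetical illustration of this format.

```python
# A hypothetical preference record: one prompt x, the preferred completion y_w ("chosen"),
# and the dispreferred completion y_l ("rejected").
comparison = {
    "prompt": "Explain photosynthesis in one sentence.",
    "chosen": "Plants convert sunlight, water, and CO2 into sugars and oxygen.",
    "rejected": "Photosynthesis is when plants eat sunlight for breakfast.",
}
```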
To optimize the RM, we assume that the human preferences are governed by a latent reward model $r^*(x, y)$, which remains inaccessible. We can express the human preference distribution using the Bradley-Terry model as follows:

$$p^*(y_1 \succ y_2 \mid x) = \frac{\exp\left(r^*(x, y_1)\right)}{\exp\left(r^*(x, y_1)\right) + \exp\left(r^*(x, y_2)\right)}$$
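As a small numerical sanity check of the Bradley-Terry model (with made-up reward values), the snippet below shows that the preference probability is simply the logistic function applied to the reward difference.

```python
import math

def bradley_terry_prob(r1: float, r2: float) -> float:
    """Probability that completion y1 is preferred over y2, given latent rewards
    r1 = r*(x, y1) and r2 = r*(x, y2). Equal to sigmoid(r1 - r2)."""
    return math.exp(r1) / (math.exp(r1) + math.exp(r2))

# Hypothetical rewards: the first completion scores higher, so it is
# preferred with probability sigma(1.3 - 0.2) ~= 0.75.
print(bradley_terry_prob(1.3, 0.2))
```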
Given access to a static dataset of comparisons $\mathcal{D} = \{(x^{(i)}, y_w^{(i)}, y_l^{(i)})\}_{i=1}^{N}$ sampled from this preference distribution, we can parametrize a reward model $r_\phi(x, y)$ and estimate its parameters via maximum likelihood.
Framing this problem as binary classification, we can formulate the loss function for training the RM as:

$$\mathcal{L}_R(r_\phi, \mathcal{D}) = -\mathbb{E}_{(x, y_w, y_l) \sim \mathcal{D}}\left[\log \sigma\left(r_\phi(x, y_w) - r_\phi(x, y_l)\right)\right]$$
where $\sigma$ is the logistic function. In the context of language models, the network $r_\phi(x, y)$ is typically initialized from the SFT model $\pi^{SFT}(y \mid x)$ and augmented with a linear layer on top of the final transformer layer, producing a single scalar prediction for the reward value.
Moreover, to ensure the reward function has lower variance, prior work often normalizes the rewards such that $\mathbb{E}_{x, y \sim \mathcal{D}}\left[r_\phi(x, y)\right] = 0$ for all $x$.
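Putting the pieces above together, a minimal sketch of the reward head and its pairwise loss in PyTorch might look as follows; the class and function names, tensor shapes, and values are illustrative rather than a reference implementation.

```python
import torch
import torch.nn.functional as F

class RewardHead(torch.nn.Module):
    """Linear layer on top of the final transformer hidden states that maps
    the last token's representation to a single scalar reward r_phi(x, y)."""
    def __init__(self, hidden_size: int):
        super().__init__()
        self.score = torch.nn.Linear(hidden_size, 1)

    def forward(self, last_hidden_state: torch.Tensor) -> torch.Tensor:
        # last_hidden_state: (batch, seq_len, hidden_size) -> (batch,)
        return self.score(last_hidden_state[:, -1, :]).squeeze(-1)

def reward_model_loss(r_chosen: torch.Tensor, r_rejected: torch.Tensor) -> torch.Tensor:
    """Pairwise Bradley-Terry loss: -log sigma(r_phi(x, y_w) - r_phi(x, y_l))."""
    return -F.logsigmoid(r_chosen - r_rejected).mean()

# Hypothetical hidden states for 3 chosen and 3 rejected completions.
head = RewardHead(hidden_size=16)
h_chosen, h_rejected = torch.randn(3, 10, 16), torch.randn(3, 10, 16)
loss = reward_model_loss(head(h_chosen), head(h_rejected))
loss.backward()
```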
Step 3: Fine-tuning the LM with Reinforcement Learning
During the reinforcement learning (RL) phase, the learned reward function is used to provide feedback to the language model. This process requires two language models (LMs): one from the initial phase, which we refer to as the SFT model, and another, which we will denote as the Proximal Policy Optimization (PPO) model.
Initially, a new prompt $x$ is introduced as input for the process. Using this prompt, we generate pairs of responses from the SFT model, which represents the base policy $\pi^{SFT}(y \mid x)$. The output can be viewed as a probability distribution over the vocabulary conditioned on the input prompt. Human labelers then evaluate these responses, selecting a preferred response $y_w$ and a dispreferred response $y_l$.
Next, the PPO model generates text for the newly introduced prompt. Once the text has been produced, the trained reward model (RM) evaluates the generated segment and provides the reward signal used to update the PPO model. The optimization is formulated as:

$$\max_{\pi_\theta} \; \mathbb{E}_{x \sim \mathcal{D},\, y \sim \pi_\theta(y \mid x)}\left[r_\phi(x, y)\right] - \beta\, \mathbb{D}_{\mathrm{KL}}\left[\pi_\theta(y \mid x) \,\|\, \pi_{\mathrm{ref}}(y \mid x)\right]$$
where $\beta$ is a hyperparameter controlling the deviation from the base reference policy $\pi_{\mathrm{ref}}$, namely the initial SFT model $\pi^{SFT}$. The KL-divergence constraint is crucial as it prevents the PPO model from diverging too far from the distribution on which the reward model is accurate, while also maintaining generation diversity and avoiding mode collapse onto a single high-reward response.
The reward maximized in each PPO update can then be expressed as follows:

$$r(x, y) = r_\phi(x, y) - \beta\left(\log \pi_\theta(y \mid x) - \log \pi_{\mathrm{ref}}(y \mid x)\right)$$
This allows us to refine the PPO model iteratively, ensuring that the generated outputs remain aligned with human preferences as indicated by the reward model.
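As a rough sketch of how this reward shaping is often implemented, the snippet below combines the RM score with the KL penalty at the sequence level; the function name and the numeric values are hypothetical.

```python
import torch

def kl_shaped_reward(reward_rm: torch.Tensor,
                     logprob_policy: torch.Tensor,
                     logprob_ref: torch.Tensor,
                     beta: float = 0.1) -> torch.Tensor:
    """Reward fed to PPO: the RM score minus a KL penalty that keeps pi_theta
    close to the reference (SFT) policy pi_ref, i.e.
    r(x, y) = r_phi(x, y) - beta * (log pi_theta(y|x) - log pi_ref(y|x))."""
    return reward_rm - beta * (logprob_policy - logprob_ref)

# Hypothetical per-sequence values for a batch of two sampled responses.
r_phi = torch.tensor([2.1, 0.7])           # reward model scores
logp_theta = torch.tensor([-35.2, -50.8])  # sequence log-probs under the PPO policy
logp_ref = torch.tensor([-36.0, -49.5])    # sequence log-probs under the SFT policy
print(kl_shaped_reward(r_phi, logp_theta, logp_ref))
```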
Direct Preference Optimization
Despite the effectiveness of the Proximal Policy Optimization (PPO) method, it has a significant drawback: it requires training a completely separate model, specifically the RM, leading to high costs and the necessity for large amounts of additional data. With Direct Preference Optimization (DPO), we eliminate the use of the RM for aligning LLMs, which reduces costs associated with data generation and resource utilization. DPO simplifies the training process by creating a dataset of human preference pairs, each consisting of a prompt and two options: one preferred and one dispreferred. The LLM is then fine-tuned to maximize the likelihood of generating text segments that align with human preferences while minimizing undesirable outputs. This approach effectively improves the quality of outputs based on directly observed human choices.
The key in DPO is the introduction of a KL-constrained optimization, allowing us to derive in closed form the optimal policy that maximizes the KL-constrained reward [3]:

$$\pi_r(y \mid x) = \frac{1}{Z(x)}\, \pi_{\mathrm{ref}}(y \mid x)\, \exp\!\left(\frac{1}{\beta}\, r(x, y)\right)$$
where the partition function $Z(x)$ is defined as:

$$Z(x) = \sum_{y} \pi_{\mathrm{ref}}(y \mid x)\, \exp\!\left(\frac{1}{\beta}\, r(x, y)\right)$$
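To make this closed-form solution concrete, the toy example below evaluates $\pi_r$ over a small discrete set of candidate responses, where $Z(x)$ is just the normalizing sum; all numbers are hypothetical.

```python
import torch

def optimal_policy(pi_ref: torch.Tensor, rewards: torch.Tensor, beta: float) -> torch.Tensor:
    """Closed-form KL-constrained optimum on a toy discrete candidate set:
    pi_r(y|x) = pi_ref(y|x) * exp(r(x, y) / beta) / Z(x)."""
    unnormalized = pi_ref * torch.exp(rewards / beta)
    return unnormalized / unnormalized.sum()  # dividing by Z(x) normalizes

# Hypothetical reference distribution over 4 candidate responses and their rewards.
pi_ref = torch.tensor([0.40, 0.30, 0.20, 0.10])
rewards = torch.tensor([0.0, 1.0, 2.0, 3.0])
print(optimal_policy(pi_ref, rewards, beta=1.0))
# Higher-reward responses are up-weighted relative to pi_ref; a smaller beta
# sharpens the resulting distribution further.
```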
The most crucial aspect to note is that, from this policy $\pi_r$, we can easily derive the reward function $r(x, y)$ by taking the logarithm of both sides and rearranging:

$$r(x, y) = \beta \log \frac{\pi_r(y \mid x)}{\pi_{\mathrm{ref}}(y \mid x)} + \beta \log Z(x)$$
We can then immediately compute the reward difference between the preferred and dispreferred completions, in which the intractable partition function $Z(x)$ cancels out:

$$r(x, y_w) - r(x, y_l) = \beta \log \frac{\pi_r(y_w \mid x)}{\pi_{\mathrm{ref}}(y_w \mid x)} - \beta \log \frac{\pi_r(y_l \mid x)}{\pi_{\mathrm{ref}}(y_l \mid x)}$$
Returning to the equation for the preference distribution (the Bradley-Terry model), we can rewrite it such that each instance of $r(x, y)$ is replaced by the expression above:

$$p(y_w \succ y_l \mid x) = \sigma\!\left(\beta \log \frac{\pi_r(y_w \mid x)}{\pi_{\mathrm{ref}}(y_w \mid x)} - \beta \log \frac{\pi_r(y_l \mid x)}{\pi_{\mathrm{ref}}(y_l \mid x)}\right)$$
This equation does not require a reward model to optimize the policy according to the probability distribution of human preferences. Instead, we can work directly on the policy itself to enhance its quality. Finally, we express the loss function as follows:

$$\mathcal{L}_{\mathrm{DPO}}(\pi_\theta; \pi_{\mathrm{ref}}) = -\mathbb{E}_{(x, y_w, y_l) \sim \mathcal{D}}\left[\log \sigma\!\left(\beta \log \frac{\pi_\theta(y_w \mid x)}{\pi_{\mathrm{ref}}(y_w \mid x)} - \beta \log \frac{\pi_\theta(y_l \mid x)}{\pi_{\mathrm{ref}}(y_l \mid x)}\right)\right]$$
At this stage, we have an equation that compares the probabilities assigned by the reference policy $\pi_{\mathrm{ref}}$ and the new policy $\pi_\theta$ to a selected response $y_w$ and a non-selected response $y_l$. Our task is to optimize these probabilities such that $y_w$ is favored, indicating that the policy is improving in its ability to produce the preferred responses compared to those that are not preferred.
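A minimal sketch of this loss in PyTorch, assuming the summed sequence log-probabilities under $\pi_\theta$ and the frozen reference $\pi_{\mathrm{ref}}$ have already been computed, might look like the following; the function name and numeric values are illustrative.

```python
import torch
import torch.nn.functional as F

def dpo_loss(logp_theta_w, logp_theta_l, logp_ref_w, logp_ref_l, beta=0.1):
    """DPO objective for a batch of preference pairs. Each argument is the summed
    log-probability of the chosen (w) or rejected (l) response under the policy
    pi_theta or the frozen reference pi_ref."""
    chosen_logratio = logp_theta_w - logp_ref_w
    rejected_logratio = logp_theta_l - logp_ref_l
    return -F.logsigmoid(beta * (chosen_logratio - rejected_logratio)).mean()

# Hypothetical sequence log-probabilities for two preference pairs.
logp_theta_w = torch.tensor([-40.0, -55.0])
logp_theta_l = torch.tensor([-42.5, -53.0])
logp_ref_w = torch.tensor([-41.0, -54.0])
logp_ref_l = torch.tensor([-41.5, -54.5])
print(dpo_loss(logp_theta_w, logp_theta_l, logp_ref_w, logp_ref_l))
```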
In conclusion, DPO offers several advantages over RLHF:
Elimination of the Reward Model: By removing the need for a separate reward model, DPO relies on high-quality data to effectively differentiate between favorable and unfavorable responses. This simplification conserves valuable time and resources.
Swift Adaptation: DPO facilitates quick adaptation to new preference data, avoiding the need to retrain a separate reward model.
Dual Focus on Responses: Additionally, DPO enables the model to learn not only which responses are desirable but also to recognize and steer clear of undesirable ones. This dual focus enhances the model's ability to refine its interactions.
Overall, DPO ultimately results in improved performance in generating contextually relevant and appropriate responses, making it a robust approach for optimizing language model behavior.
References
1. Lambert, N., Castricato, L., von Werra, L., & Havrilla, A. (2022). Illustrating Reinforcement Learning from Human Feedback (RLHF). Hugging Face Blog. https://huggingface.co/blog/rlhf
2. Ziegler, D. M., Stiennon, N., Wu, J., Brown, T. B., Radford, A., Amodei, D., Christiano, P., & Irving, G. (2020). Fine-tuning language models from human preferences. arXiv preprint arXiv:1909.08593. https://arxiv.org/abs/1909.08593
3. Rafailov, R., Sharma, A., Mitchell, E., Manning, C. D., Ermon, S., & Finn, C. (2023). Direct preference optimization: Your language model is secretly a reward model. Advances in Neural Information Processing Systems, 36, 53728-53741.