Fine-tune Better Chat Models with Distilled Identity Preference Optimization (IPO)


Mistral 7B aligned with IPO

To become chat models, pre-trained large language models (LLMs) are fine-tuned on large datasets of instructions/questions paired with expected answers. While this simple fine-tuning yields convincing chat models, their answers may still be incoherent, biased, unethical, or unsafe from a human perspective. This is why we usually perform an additional training step to better align the LLM with human preferences.

This alignment can be done with reinforcement learning from human feedback (RLHF). As demonstrated by OpenAI with the success of ChatGPT, RLHF can yield state-of-the-art chat models. However, RLHF is expensive to run: it requires large datasets annotated by humans and the training of several auxiliary models (a reference model and a reward model).

As a simpler and cheaper alternative to RLHF, direct preference optimization (DPO) has recently been applied with success to align LLMs, such as Hugging Face’s Zephyr and Intel’s Neural Chat.

In this article, based on work by Google DeepMind, we will see that, while RLHF and DPO perform well at aligning LLMs, they are far from optimal given the datasets used for training. DeepMind also demonstrates why DPO is prone to overfitting. I'll explain, in plain English, how the alternative proposed by DeepMind, the identity preference optimization (IPO) objective, is simpler and better designed than RLHF and DPO to learn from the training data.
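To make the contrast concrete, here is a minimal sketch of both objectives on a batch of preference pairs (the function and variable names are my own, not DeepMind's code). Each input is the summed token log-probability of an answer under the policy being trained or under the frozen reference model:

```python
import torch.nn.functional as F

def dpo_and_ipo_losses(policy_chosen_logps, policy_rejected_logps,
                       ref_chosen_logps, ref_rejected_logps, beta=0.1):
    # h: how much more the policy prefers the chosen answer over the
    # rejected one, relative to the reference model (a log-ratio difference)
    h = (policy_chosen_logps - ref_chosen_logps) \
        - (policy_rejected_logps - ref_rejected_logps)

    # DPO: logistic loss on beta * h. The loss keeps decreasing as h grows,
    # so with (near-)deterministic preferences the policy can drift
    # arbitrarily far from the reference model -- the overfitting
    # DeepMind points out.
    dpo_loss = -F.logsigmoid(beta * h).mean()

    # IPO: squared regression of h onto the finite target 1 / (2 * beta)
    # (beta plays the role of tau in the paper), so the optimum stays at a
    # bounded distance from the reference model.
    ipo_loss = ((h - 1.0 / (2.0 * beta)) ** 2).mean()

    return dpo_loss, ipo_loss
```

The difference is the shape of the loss as a function of h: DPO's logistic loss has no finite minimizer, while IPO's squared loss is minimized at h = 1/(2β), which is what keeps the regularization toward the reference model effective even when the preference data is deterministic.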

In the following sections, I show how to use IPO with a training recipe close to the one Hugging Face used to train the Zephyr models.

I have also implemented a notebook demonstrating IPO training for Mistral 7B. You can find it here:

Get the notebook (#31)
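As a preview, here is a minimal sketch of such a run with TRL's DPOTrainer, which exposes the IPO objective through its loss_type argument. The hyperparameters, the output path, and the my_prefs.jsonl file are illustrative placeholders, the dataset is assumed to already contain string "prompt", "chosen", and "rejected" columns, and note that recent trl releases move beta and loss_type into a DPOConfig:

```python
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer, TrainingArguments
from trl import DPOTrainer

model_name = "mistralai/Mistral-7B-v0.1"
model = AutoModelForCausalLM.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token  # Mistral has no pad token by default

# Placeholder: any preference dataset with "prompt", "chosen", and
# "rejected" string columns works here.
train_dataset = load_dataset("json", data_files="my_prefs.jsonl", split="train")

training_args = TrainingArguments(
    output_dir="mistral-7b-ipo",     # illustrative output path
    per_device_train_batch_size=1,
    gradient_accumulation_steps=8,
    learning_rate=5e-7,              # illustrative hyperparameters
    num_train_epochs=1,
)

trainer = DPOTrainer(
    model,
    ref_model=None,      # trl builds the frozen reference copy itself
    args=training_args,
    beta=0.1,            # tau in the IPO paper
    loss_type="ipo",     # swap the default DPO loss for the IPO loss
    train_dataset=train_dataset,
    tokenizer=tokenizer,
)
trainer.train()
```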

The paper by DeepMind describing IPO is on arXiv:

A General Theoretical Paradigm to Understand Learning from Human Preferences

RLHF and DPO rely on similar training data: prompts paired with at least two candidate answers rated by humans (or LLMs). The answers are paired so that, in a…
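For illustration, a single training example in such a dataset could look like the following (the field names follow trl's convention; the content is invented):

```python
# A hypothetical preference pair: "chosen" was rated better than "rejected".
preference_record = {
    "prompt": "Explain why the sky is blue.",
    "chosen": "Sunlight scatters off air molecules, and shorter (blue) "
              "wavelengths scatter far more strongly, so the sky looks blue.",
    "rejected": "The sky is blue because it reflects the ocean.",
}
```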
