Title: Tulu 3: Pushing Frontiers in Open Language Model Post-Training
Authors: Nathan Lambert, Jacob Morrison, Valentina Pyatkin, Shengyi Huang, Hamish Ivison, Faeze Brahman, Lester James V. Miranda, Alisa Liu, Nouha Dziri, Shane Lyu, Yuling Gu, Saumya Malik, Victoria Graf, Jena D. Hwang, Jiangjiang Yang, Ronan Le Bras, Oyvind Tafjord, Chris Wilhelm, Luca Soldaini, Noah A. Smith, Yizhong Wang, Pradeep Dasigi, Hannaneh Hajishirzi
Published: 22nd November 2024 (Friday) @ 18:44:04
Link: http://arxiv.org/abs/2411.15124v5
Abstract
Language model post-training is applied to refine behaviors and unlock new skills across a wide range of recent language models, but open recipes for applying these techniques lag behind proprietary ones. The underlying training data and recipes for post-training are simultaneously the most important pieces of the puzzle and the portion with the least transparency. To bridge this gap, we introduce Tulu 3, a family of fully-open state-of-the-art post-trained models, alongside its data, code, and training recipes, serving as a comprehensive guide for modern post-training techniques. Tulu 3, which builds on Llama 3.1 base models, achieves results surpassing the instruct versions of Llama 3.1, Qwen 2.5, Mistral, and even closed models such as GPT-4o-mini and Claude 3.5-Haiku. The training algorithms for our models include supervised finetuning (SFT), Direct Preference Optimization (DPO), and a novel method we call Reinforcement Learning with Verifiable Rewards (RLVR). With Tulu 3, we introduce a multi-task evaluation scheme for post-training recipes with development and unseen evaluations, standard benchmark implementations, and substantial decontamination of existing open datasets on said benchmarks. We conclude with analysis and discussion of training methods that did not reliably improve performance. In addition to the Tulu 3 model weights and demo, we release the complete recipe, including datasets for diverse core skills, a robust toolkit for data curation and evaluation, the training code and infrastructure, and, most importantly, a detailed report for reproducing and further adapting the Tulu 3 approach to more domains.
Tülu 3: The next era in open post-training - a post by Nathan Lambert summarising Tülu 3
Three of the Core Ideas / Contributions of Tülu 3
Taken from "Tülu 3: The next era in open post-training" by Nathan Lambert
- Scaling preference data: For too long the open community has been relying mostly on one dataset, UltraFeedback, with only 60k samples to do DPO. We kept scaling our pipelines and got to effective datasets of over 300k prompts (just 30% of our SFT size). In the future, I expect preference datasets to have about the same number of prompts as SFT.
- On-policy preference data: Something we know closed labs are doing is using extensive completions from their own models to do post-training. Part of this is because they need to take less risk from a legal perspective (i.e. they can't distill from GPT-4 because of terms of service), but also it has been proven effective again and again. We saw the same gains.
- Reinforcement learning with verifiable rewards (RLVR): The most exciting part is that we added a whole new style of RL training to the general post-training paradigm. It is closely related to methods like VinePPO or Quiet-STaR, but it is implemented on top of DPO models and improves average performance, not just one evaluation score at the cost of others.
- The idea is simple. We replace the reward model in traditional RLHF with a scoring function that outputs a positive reward if the answer to the prompt is correct. For now, this is limited to math and precise instruction following in our toolkit, but we are going to extend it to code and experiment with learned verifiers as well!
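To make the RLVR idea concrete, here is a minimal Python sketch of what such a scoring function could look like for math answers: the learned reward model is swapped for a deterministic check against the ground-truth answer. The function names, the answer-extraction heuristic, and the reward value are illustrative assumptions, not the actual Tulu 3 / open-instruct code.

```python
# Hedged sketch of an RLVR-style verifiable reward for math prompts.
# All names and the reward constant are illustrative assumptions.
import re


def extract_final_answer(completion: str) -> str | None:
    """Grab the last number mentioned in a completion (GSM8K-style heuristic)."""
    numbers = re.findall(r"-?\d+(?:\.\d+)?", completion.replace(",", ""))
    return numbers[-1] if numbers else None


def verifiable_reward(completion: str, ground_truth: str, value: float = 10.0) -> float:
    """Return a fixed positive reward if the extracted answer matches the ground truth, else 0."""
    predicted = extract_final_answer(completion)
    return value if predicted is not None and predicted == ground_truth.strip() else 0.0


# This scalar would stand in for the reward model's score inside a standard
# RLHF policy-gradient loop (e.g. PPO) run on top of the DPO checkpoint.
print(verifiable_reward("Adding the two amounts gives 42.", "42"))  # 10.0
print(verifiable_reward("I believe the answer is 41.", "42"))       # 0.0
```

Because the reward is a simple programmatic check rather than a learned model, there is no reward model to over-optimize on these skills, which is consistent with the claim above that RLVR lifts targeted evaluations without trading off others.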