By the OpenLM team, September 26, 2023
We release OpenLM, a simple and minimalist PyTorch codebase for training medium-sized language models. OpenLM is designed to maximize GPU utilization and training speed, and is easy to modify for new language model research and applications.
We validate OpenLM by training two language models, OpenLM-1B and OpenLM-7B, on 1.6T and 1.25T tokens of text, respectively. We evaluate these models on standard zero-shot text classification and multiple-choice tasks and find that OpenLM-1B outperforms many popular, similarly sized models such as OPT-1.3B and Pythia-1B. OpenLM-7B achieves similar performance to LLaMA-7B and MPT-7B.
In this blogpost, we briefly describe the training data, model, evaluation setup, and overall results. We also describe exciting future work we plan to pursue with these models and our OpenLM framework.
Model and Data Release
All models and training data (tokenized and shuffled) are available on Hugging Face at the following links:
We are working on releasing intermediate checkpoints.
We train our models on a collection of text totaling 1.6T tokens. The training data comes from the following sources:
| Source | Tokens | Proportion |
| --- | --- | --- |
| Pile of Law | 27.1B | 1.7% |
We perform no additional preprocessing and take the data as-is from the original sources. To train on these sources, we use a simple data mix: 72.6% RedPajama and 27.4% everything else, following the distribution of data in the table above.
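This two-way mix can be implemented with plain weighted sampling. A minimal sketch (the `sample_source` helper and the `"other"` bucket name are illustrative; only the 72.6%/27.4% split comes from the post):

```python
import random

# Sampling weights from the post: 72.6% RedPajama, 27.4% all other sources.
MIX = {"redpajama": 0.726, "other": 0.274}

def sample_source(rng: random.Random) -> str:
    """Pick the source of the next training example according to the data mix."""
    return rng.choices(list(MIX), weights=list(MIX.values()), k=1)[0]

rng = random.Random(0)
draws = [sample_source(rng) for _ in range(100_000)]
frac_redpajama = draws.count("redpajama") / len(draws)
print(f"empirical RedPajama fraction: {frac_redpajama:.3f}")  # close to 0.726
```

In practice the "other" bucket would itself be split proportionally among the remaining sources, but the top-level draw works the same way.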
The models we train follow the basic architecture proposed by LLaMA, with two differences: we use the GPT-NeoX tokenizer, which we found effective in early experiments, and we use LayerNorm instead of RMSNorm, since we have not yet added a fused RMSNorm operation.
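The LayerNorm/RMSNorm difference is small. A pure-Python sketch of the two normalizations (learnable affine parameters omitted for brevity; a real implementation would use the fused PyTorch ops):

```python
import math

def layer_norm(x, eps=1e-5):
    """LayerNorm: subtract the mean, then divide by the standard deviation."""
    mean = sum(x) / len(x)
    var = sum((v - mean) ** 2 for v in x) / len(x)
    return [(v - mean) / math.sqrt(var + eps) for v in x]

def rms_norm(x, eps=1e-5):
    """RMSNorm: divide by the root-mean-square only; no mean subtraction."""
    rms = math.sqrt(sum(v * v for v in x) / len(x) + eps)
    return [v / rms for v in x]

x = [1.0, 2.0, 3.0, 4.0]
print([round(v, 3) for v in layer_norm(x)])
print([round(v, 3) for v in rms_norm(x)])
```

RMSNorm drops the mean-centering step, which is why a fused kernel for it is slightly cheaper; the outputs coincide only when the input already has zero mean.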
The 1B model is trained with AdamW (LR 1e-3, weight decay 0.1) on 128 A100 40GB GPUs, with a global batch size of 2M tokens.
The 7B model is trained with AdamW (LR 3e-4, weight decay 0.1) on 256 A100 40GB GPUs, with a global batch size of 4M tokens.
The training speed for the 7B model is 2300 tokens/s/GPU. For sharded distributed training we use PyTorch FSDP (Fully Sharded Data Parallel).
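These numbers imply the rough scale of the 7B run. A back-of-the-envelope sketch (ignoring restarts, evaluation, and other overhead, so the wall-clock figure is a lower bound):

```python
# Back-of-the-envelope figures for the 7B run, using the numbers above.
gpus = 256
tokens_per_sec_per_gpu = 2300
global_batch_tokens = 4_000_000
total_tokens = 1.25e12

tokens_per_sec = gpus * tokens_per_sec_per_gpu  # aggregate throughput
per_gpu_batch = global_batch_tokens // gpus     # tokens per GPU per optimizer step
days = total_tokens / tokens_per_sec / 86_400   # idealized wall-clock time

print(f"{per_gpu_batch} tokens/GPU/step, ~{days:.0f} days for 1.25T tokens")
```

At ~589K aggregate tokens/s, 1.25T tokens works out to roughly 25 GPU-cluster days under these idealized assumptions.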
Aside from the model, the codebase closely follows OpenCLIP, which has been tested at scales of around 1,000 GPUs.
During training, we track validation loss on a held-out set consisting of recent papers by the authors of the OpenLM library, breaking news from the time of development, and the OpenLM codebase itself.
After training, we use LLM Foundry to evaluate model performance on the 13 zero-shot tasks used to evaluate MPT-7B and LLaMA-7B in the MPT-7B release. We additionally evaluate 5-shot MMLU performance.
Here, we display the validation loss for up to 1T tokens of training for both the OpenLM-1B and 7B models:
Here, we display the zero-shot evaluation results of OpenLM-1B throughout training:
| OpenLM-1B | 250B tokens | 500B tokens | 750B tokens | 1T tokens | 1.25T tokens | 1.5T tokens | 1.6T tokens |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Training progress | 16% complete | 31% complete | 47% complete | 63% complete | 78% complete | 94% complete | 100% complete |
As a comparison, here are the zero-shot results of similarly sized baselines. Our model achieves similar performance to OPT-IML-1.3B, an instruction-tuned model.
Next, we display the zero-shot evaluation results of OpenLM-7B throughout training:
| OpenLM-7B | 275B tokens | 500B tokens | 675B tokens | 775B tokens | 1T tokens | 1.25T tokens |
| --- | --- | --- | --- | --- | --- |
| Training progress | 17% complete | 31% complete | 42% complete | 48% complete | 63% complete | 78% complete |
Consistent with the validation loss, our models continue to improve in zero-shot performance even late in training. At 1.25T tokens, OpenLM-7B matches or outperforms LLaMA-7B and MPT-7B on 7 out of 11 tasks.
OpenLM has already enabled new language modeling research, for example in the development of low-risk language models trained on permissively licensed text. We plan to use OpenLM to support a variety of new research directions, including multimodal models, mixture of experts, and dataset composition. We also plan to scale up OpenLM so it supports training larger models.
Team and acknowledgements
The OpenLM team currently consists of: Suchin Gururangan*, Mitchell Wortsman*, Samir Yitzhak Gadre, Achal Dave, Maciej Kilian, Weijia Shi, Jean Mercat, Georgios Smyrnis, Gabriel Ilharco, Matt Jordan, Reinhard Heckel, Alex Dimakis, Ali Farhadi, Vaishaal Shankar, Ludwig Schmidt.
The code is based heavily on open-clip, developed by a team including Ross Wightman, Romain Beaumont, Cade Gordon, Mehdi Cherti, and Jenia Jitsev, and on open-flamingo, developed by a team including Anas Awadalla and Irena Gao. Additional inspiration comes from lit-llama.
We thank Stability AI for providing the compute for this project, the RedPajama team for their dataset, Sarah Pratt for logo design, IFML, and Toyota Research Institute. We also thank the following people for helpful advice and feedback throughout the project: Jonathan Frankle, Daniel King, Luca Soldaini.