trlX is a distributed training framework designed from the ground up for fine-tuning large language models with reinforcement learning, using either a provided reward function or a reward-labeled dataset.
Training support for 🤗 Hugging Face models is provided by Accelerate-backed trainers, allowing users to fine-tune causal and T5-based language models of up to 20B parameters, such as facebook/opt-6.7b, EleutherAI/gpt-neox-20b, and google/flan-t5-xxl. For models beyond 20B parameters, trlX provides NVIDIA NeMo-backed trainers that leverage efficient parallelism techniques to scale effectively.
The following RL algorithms are currently implemented, each available through both the Accelerate and NeMo trainers: PPO (Proximal Policy Optimization) and ILQL (Implicit Language Q-Learning).

🧀 CHEESE: collect human annotations for your RL application with our human-in-the-loop data collection library.

Install trlX from source:
git clone https://github.com/CarperAI/trlx.git
cd trlx
pip install torch --extra-index-url https://download.pytorch.org/whl/cu118
pip install -e .
For more usage see the examples. You can also try the Colab notebooks. Latest runs of the examples are on our Weights & Biases.
You can train a model using a reward function or a reward-labeled dataset.
Using a reward function:
trainer = trlx.train('gpt2', reward_fn=lambda samples, **kwargs: [sample.count('cats') for sample in samples])
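As a slightly fuller sketch, a reward function can score samples with an external model instead of a keyword count. The example below uses a 🤗 transformers sentiment pipeline; the specific classifier and the use of its positive-class probability are illustrative assumptions, not part of trlX's API:

from transformers import pipeline
import trlx

# Sentiment classifier used to score generated text (illustrative choice of model)
sentiment_fn = pipeline('sentiment-analysis', model='lvwerra/distilbert-imdb', top_k=None)

def reward_fn(samples, **kwargs):
    # One scalar reward per sample: the probability of positive sentiment
    rewards = []
    for scores in sentiment_fn(samples):
        rewards.append(next(s['score'] for s in scores if s['label'] == 'POSITIVE'))
    return rewards

trainer = trlx.train('gpt2', reward_fn=reward_fn)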
For reward model training refer to our autocrit library.
Using a reward-labeled dataset:
trainer = trlx.train('EleutherAI/gpt-j-6B', samples=['dolphins', 'geese'], rewards=[1.0, 100.0])
Using a prompt-completion dataset:
trainer = trlx.train('gpt2', samples=[['Question: 1 + 2 Answer:', '3'], ['Question: Solve this equation: ∀n>0, s=2, sum(n ** -s). Answer:', '(pi ** 2)/ 6']])
The trained model is a wrapper over the given autoregressive model, so you can generate from it directly:
trainer.generate(**tokenizer('Q: Who rules the world? A:', return_tensors='pt'), do_sample=True)
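The generation call above assumes a tokenizer object is in scope. A minimal sketch of obtaining one, assuming the trainer keeps a reference to its tokenizer (otherwise load it with AutoTokenizer from the base model name):

from transformers import AutoTokenizer

# Either reuse the trainer's tokenizer (attribute name is an assumption) ...
tokenizer = getattr(trainer, 'tokenizer', None)
# ... or fall back to loading it from the base model
if tokenizer is None:
    tokenizer = AutoTokenizer.from_pretrained('gpt2')

inputs = tokenizer('Q: Who rules the world? A:', return_tensors='pt')
trainer.generate(**inputs, do_sample=True)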
Configure hyperparameters through a training config, for example to fine-tune a larger model:
from trlx.data.default_configs import default_ppo_config
config = default_ppo_config()
config.model.model_path = 'EleutherAI/gpt-neox-20b'
config.tokenizer.tokenizer_path = 'EleutherAI/gpt-neox-20b'
config.train.seq_length = 2048
trainer = trlx.train(config=config, reward_fn=lambda samples, **kwargs: [len(sample) for sample in samples])
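The other trainers follow the same pattern; as a sketch, assuming trlx.data.default_configs also exposes default_ilql_config (available in recent trlX versions, though names may differ between releases):

from trlx.data.default_configs import default_ilql_config

# Same pattern as above, but for ILQL on a reward-labeled dataset
ilql_config = default_ilql_config()
ilql_config.model.model_path = 'gpt2'
ilql_config.tokenizer.tokenizer_path = 'gpt2'
trainer = trlx.train(config=ilql_config, samples=['dolphins', 'geese'], rewards=[1.0, 100.0])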
To reduce memory usage (if you're experiencing CUDA Out of Memory errors), first try the lowest setting for each of the following hyperparameters and then increase them gradually (a consolidated snippet follows these settings):
# micro batch size per gpu
config.train.batch_size = 1
# freeze all transformer layers
config.model.num_layers_unfrozen = 0
# maximum sample length, prompts or samples longer than that will be truncated
config.train.seq_length = 128
# micro batch size for sampling (specific for PPO)
config.method.chunk_size = 1
# use an additional Q-head (specific for ILQL)
config.method.two_qs = False
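Putting the settings above together, a low-memory starting point built on the default PPO config might look like this; relax each value one at a time as memory allows:

from trlx.data.default_configs import default_ppo_config

config = default_ppo_config()
config.train.batch_size = 1           # micro batch size per gpu
config.model.num_layers_unfrozen = 0  # freeze all transformer layers
config.train.seq_length = 128         # truncate prompts/samples beyond this length
config.method.chunk_size = 1          # micro batch size for sampling (PPO)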
Save the resulting model as a Hugging Face pretrained language model:
trainer.save_pretrained('/path/to/output/folder/')
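The saved folder is a regular 🤗 Hugging Face checkpoint, so for a causal model it can be loaded back with transformers (loading the tokenizer from the same folder is an assumption; if it was not saved there, load it from the base model name instead):

from transformers import AutoModelForCausalLM, AutoTokenizer

# Reload the fine-tuned model as an ordinary Hugging Face model
model = AutoModelForCausalLM.from_pretrained('/path/to/output/folder/')
tokenizer = AutoTokenizer.from_pretrained('/path/to/output/folder/')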
Use 🤗 Accelerate to launch distributed training:
accelerate config # choose DeepSpeed option
accelerate launch examples/simulacra.py
For NeMo-backed training, follow the setup instructions in the NeMo README.
python examples/nemo_ilql_sentiments.py
For more usage, see the NeMo README.
Use Ray Tune to launch a hyperparameter sweep:
ray start --head --port=6379
python -m trlx.sweep --config configs/sweeps/ppo_sweep.yml --accelerate_config configs/accelerate/ddp.yaml --num_gpus 4 examples/ppo_sentiments.py
To benchmark a fork or branch against reference runs:
python -m trlx.reference octocat/trlx-fork:fix-branch
trlX uses the standard Python logging library to log training information to the console. The default logger is set to the INFO level, which means that INFO, WARNING, ERROR, and CRITICAL level messages will be printed to standard output.
To change the log level directly, you can use the verbosity setter. For example, to set the log level to WARNING, use:
import trlx
trlx.logging.set_verbosity(trlx.logging.WARNING)
This will suppress INFO level messages, but still print WARNING, ERROR, and CRITICAL level messages.
You can also control logging verbosity by setting the TRLX_VERBOSITY environment variable to one of the standard logging level names:
- CRITICAL (trlx.logging.CRITICAL)
- ERROR (trlx.logging.ERROR)
- WARNING (trlx.logging.WARNING)
- INFO (trlx.logging.INFO)
- DEBUG (trlx.logging.DEBUG)
export TRLX_VERBOSITY=WARNING
By default, tqdm progress bars are used to display training progress. You can disable them by calling trlx.logging.disable_progress_bar() and re-enable them with trlx.logging.enable_progress_bar().
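For instance, a quieter setup combining the controls described in this section:

import trlx

# Keep only WARNING and above, and drop the tqdm progress bars
trlx.logging.set_verbosity(trlx.logging.WARNING)
trlx.logging.disable_progress_bar()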
Messages can be formatted with greater detail by calling trlx.logging.enable_explicit_format(). This injects call-site information into each log, which may be helpful for debugging:
[2023-01-01 05:00:00,000] [INFO] [ppo_orchestrator.py:63:make_experience] [RANK 0] Message...
💡 Tip: To reduce the amount of logging output, you might find it helpful to change the log levels of third-party libraries used by trlX. For example, try adding transformers.logging.set_verbosity_error() to the top of your trlX scripts to silence verbose messages from the transformers library (see their logging docs for more details).
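For example, at the top of a training script:

import transformers

# Silence everything below ERROR from the transformers library
transformers.logging.set_verbosity_error()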
For development, check out these guidelines and also read our docs.

To cite trlX:
@inproceedings{havrilla-etal-2023-trlx,
title = "trl{X}: A Framework for Large Scale Reinforcement Learning from Human Feedback",
author = "Havrilla, Alexander and
Zhuravinskyi, Maksym and
Phung, Duy and
Tiwari, Aman and
Tow, Jonathan and
Biderman, Stella and
Anthony, Quentin and
Castricato, Louis",
booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.emnlp-main.530",
doi = "10.18653/v1/2023.emnlp-main.530",
    pages = "8578--8595",
}
Many thanks to Leandro von Werra for trl, a library that initially inspired this repo.