NeMo-Aligner uses custom trainers to coordinate all aspects of training. There are currently 4 custom trainers:
- SupervisedTrainer: for SFT, SteerLM, and reward model training.
- DPOTrainer: for DPO training.
- CriticServerTrainer: trains the RL critic via PyTriton requests. Depending on the configuration, it also runs the reward model.
- PPOTrainer: performs RLHF PPO training. Because PPO depends on components such as the critic, this trainer sends inference and training requests via PyTriton to the CriticServerTrainer to run inference on and train the critic.
See the example configurations in the conf folder for an explanation of the different configurations we support. Note that any configuration specified in the .yaml file overwrites the corresponding setting in the model configuration loaded from the pretrained checkpoint.
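The override semantics can be pictured as a deep merge where the .yaml values win. The sketch below is illustrative only (NeMo-Aligner itself uses OmegaConf/Hydra for this); all config keys and values here are hypothetical:

```python
# Illustrative sketch (not NeMo-Aligner code): values from the .yaml file
# take precedence over the configuration restored from a pretrained checkpoint.
def deep_override(checkpoint_cfg: dict, yaml_cfg: dict) -> dict:
    """Return a config where every key present in yaml_cfg overwrites checkpoint_cfg."""
    merged = dict(checkpoint_cfg)
    for key, value in yaml_cfg.items():
        if isinstance(value, dict) and isinstance(merged.get(key), dict):
            merged[key] = deep_override(merged[key], value)  # recurse into nested sections
        else:
            merged[key] = value  # the .yaml value wins
    return merged

# Hypothetical values for illustration only.
checkpoint_cfg = {"optim": {"lr": 1e-4, "name": "adamw"}, "precision": "bf16"}
yaml_cfg = {"optim": {"lr": 5e-6}}  # e.g. a lower fine-tuning learning rate

merged = deep_override(checkpoint_cfg, yaml_cfg)
# merged keeps "precision" and "optim.name" from the checkpoint,
# but "optim.lr" comes from the .yaml file.
```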
Our custom trainers will only call predefined APIs on the model passed in. These APIs are defined in alignable_interface.py.
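The trainer-model contract above can be sketched as follows. This is a minimal illustration of the pattern only; the class and method names are hypothetical, not the actual signatures in alignable_interface.py:

```python
# Sketch of the pattern: a trainer that only calls a fixed set of model APIs.
# All names here are hypothetical stand-ins, not NeMo-Aligner's real interface.
class AlignableModelSketch:
    """Stand-in for a model implementing the trainer-facing interface."""

    def prepare_for_training_step(self):
        self.training = True

    def get_loss_and_metrics(self, batch):
        # Toy loss: mean of the batch values.
        loss = sum(batch) / len(batch)
        return loss, {"loss": loss}

    def finish_training_step(self):
        self.training = False


class SupervisedTrainerSketch:
    """The trainer never reaches into model internals; it only calls the interface."""

    def __init__(self, model):
        self.model = model

    def train_single_step(self, batch):
        self.model.prepare_for_training_step()
        loss, metrics = self.model.get_loss_and_metrics(batch)
        self.model.finish_training_step()
        return metrics


trainer = SupervisedTrainerSketch(AlignableModelSketch())
metrics = trainer.train_single_step([1.0, 2.0, 3.0])
```

Because the trainer depends only on this narrow interface, any model that implements it can be swapped in without changing trainer code.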
- Supervised Fine Tuning Training: train_gpt_sft.py with gpt_sft.yaml.
- DPO Training: train_gpt_dpo.py with gpt_dpo.yaml.
- Reward Model Training: train_reward_model.py with training_rm.yaml.
- Reward Model Inference: serve_reward_model.py with inference_rm.yaml.
- PPO Critic Server: serve_ppo_critic.py with gpt_ppo_critic.yaml.
- PPO Actor Training: train_gpt_ppo_actor.py with gpt_ppo_actor.yaml.
To run a full RLHF PPO job, we need to start both the CriticServerTrainer and PPOTrainer.
Please see RLHFTraining.md.
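The actor/critic split can be pictured with the sketch below. In the real system the round-trip between PPOTrainer and CriticServerTrainer goes over PyTriton between two separate jobs; here an in-process stub stands in for the network hop, and all names and the toy value function are illustrative:

```python
# Sketch of the PPO actor <-> critic-server interaction. An in-process stub
# replaces the PyTriton round-trip; all names here are hypothetical.
class CriticServerStub:
    """Plays the role of the CriticServerTrainer behind PyTriton."""

    def __init__(self):
        self.train_requests = 0

    def infer(self, rollout_tokens):
        # Toy value estimate per rollout (a real critic would run a model).
        return [len(tokens) * 0.1 for tokens in rollout_tokens]

    def train(self, rollout_tokens, rewards):
        # A real server would update the critic; here we just count requests.
        self.train_requests += 1


class PPOActorSketch:
    """Plays the role of the PPOTrainer, which owns the actor policy."""

    def __init__(self, critic_client):
        self.critic = critic_client

    def ppo_step(self, rollout_tokens, rewards):
        values = self.critic.infer(rollout_tokens)       # inference request to the critic
        advantages = [r - v for r, v in zip(rewards, values)]
        self.critic.train(rollout_tokens, rewards)       # train request to the critic
        return advantages


critic = CriticServerStub()
actor = PPOActorSketch(critic)
advantages = actor.ppo_step([[1, 2], [3, 4, 5]], rewards=[1.0, 0.0])
# advantages reflect reward minus the critic's value estimate per rollout
```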