# Documentation

## Custom Trainers

NeMo-Aligner uses custom trainers to coordinate all aspects of training. There are currently four custom trainers:

1. `SupervisedTrainer`: for SFT, SteerLM, and reward modeling.
2. `DPOTrainer`: for DPO training.
3. `CriticServerTrainer`: trains the RL critic via PyTriton requests. Depending on the configuration, it also runs the reward model.
4. `PPOTrainer`: performs RLHF PPO training. Because PPO has components such as the critic, this trainer sends train and inference requests via PyTriton to the `CriticServerTrainer`, which trains and runs inference on the critic (see the sketch below).
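To illustrate this request flow, here is a minimal client-side sketch using PyTriton's `ModelClient`. The server address, endpoint name (`critic_infer`), tensor names, and shapes are assumptions for illustration only; the real bindings are defined by `CriticServerTrainer`.

```python
import numpy as np
from pytriton.client import ModelClient

# Assumed server address, endpoint, and tensor names; the real ones are
# defined by CriticServerTrainer's PyTriton bindings.
with ModelClient("localhost:8000", "critic_infer") as client:
    tokens = np.zeros((4, 128), dtype=np.int64)   # dummy batch of token ids
    result = client.infer_batch(sentences=tokens)
    values = result["values"]                     # critic value estimates
```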

## Configuration guide

See the example configurations in the `conf` folder for an explanation of the different configurations we support. Note that every value specified in the `.yaml` file overrides the corresponding value in the model configuration loaded from the pretrained checkpoint.
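NeMo configurations are OmegaConf-based, so the override precedence can be pictured with a small sketch; the keys and values below are purely illustrative, not actual NeMo-Aligner settings.

```python
from omegaconf import OmegaConf

# Illustrative stand-in for the model config restored from a checkpoint.
checkpoint_cfg = OmegaConf.create({"model": {"hidden_size": 4096, "lr": 1e-5}})

# Illustrative stand-in for values specified in the .yaml file.
yaml_cfg = OmegaConf.create({"model": {"lr": 5e-6}})

# Later arguments take precedence, so the .yaml value wins.
merged = OmegaConf.merge(checkpoint_cfg, yaml_cfg)
print(merged.model.lr)           # 5e-06
print(merged.model.hidden_size)  # 4096 (kept from the checkpoint)
```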

## APIs

Our custom trainers only call predefined APIs on the model passed in. These APIs are defined in `alignable_interface.py`.
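To convey the idea of such a contract, here is a hypothetical sketch of an alignable-model interface; the method names below are assumptions for illustration and not the actual API in `alignable_interface.py`.

```python
from abc import ABC, abstractmethod

class AlignableModel(ABC):
    """Hypothetical sketch of an alignable-model contract; the real
    method names in alignable_interface.py may differ."""

    @abstractmethod
    def prepare_for_training(self) -> None:
        """Put the model into training mode before a training step."""

    @abstractmethod
    def get_loss_and_metrics(self, batch) -> tuple:
        """Run a forward pass and return (loss, metrics) for one batch."""

    @abstractmethod
    def prepare_for_inference(self) -> None:
        """Put the model into inference mode before generation/scoring."""
```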

## Launching scripts and their description

To run a full RLHF PPO job, we need to start both the `CriticServerTrainer` and the `PPOTrainer`, as sketched below.
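For intuition on why both processes are needed, the following sketch shows the server side: a critic endpoint exposed with PyTriton that the `PPOTrainer` process can call. The endpoint name, tensor names, and the trivial `infer_critic` function are hypothetical placeholders, not the actual `CriticServerTrainer` implementation.

```python
import numpy as np
from pytriton.decorators import batch
from pytriton.model_config import ModelConfig, Tensor
from pytriton.triton import Triton

@batch
def infer_critic(sentences: np.ndarray):
    # Hypothetical placeholder for the critic's forward pass:
    # return one value estimate per sequence in the batch.
    values = np.zeros((sentences.shape[0], 1), dtype=np.float32)
    return {"values": values}

with Triton() as triton:
    triton.bind(
        model_name="critic_infer",  # assumed endpoint name
        infer_func=infer_critic,
        inputs=[Tensor(name="sentences", dtype=np.int64, shape=(-1,))],
        outputs=[Tensor(name="values", dtype=np.float32, shape=(1,))],
        config=ModelConfig(max_batch_size=8),
    )
    triton.serve()  # blocks; serves requests from the PPOTrainer process
```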

## RLHF Training architecture and details

Please see `RLHFTraining.md`.