fix: load_best_model_at_end error when load_in_8bit is True
Ref: huggingface/peft#394. Loading a quantized checkpoint into a non-quantized Linear8bitLt layer is not supported, so call module.cuda() before module.load_state_dict().
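A minimal sketch of the ordering the fix enforces. The helper name `load_quantized_checkpoint` is illustrative, not part of the actual patch; the point is only that `module.cuda()` must run first, because the CPU-to-GPU transfer is what initializes the Linear8bitLt layer's 8-bit weight representation, and only then can an already-quantized state dict be loaded into it:

```python
def load_quantized_checkpoint(module, state_dict):
    # Order matters: module.cuda() converts Linear8bitLt weights into
    # their 8-bit form; loading a quantized checkpoint before that step
    # attempts to load int8 tensors into a non-quantized layer and fails.
    module.cuda()                       # initialize 8-bit buffers first
    module.load_state_dict(state_dict)  # then load the quantized weights
```

Reversing these two calls reproduces the `load_best_model_at_end` error this commit addresses.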