Releases: OpenBMB/BMTrain
BMTrain v1.0.0
What's Changed
- Use PyTorch's hook mechanism to refactor the ZeRO, checkpoint, pipeline, and communication implementations by @zkh2016 in #128 #159
- Add BF16 support by @Achazwl in #136
- Tensor parallel implementation by @Achazwl @zkh2016 @MayDomine in #153
- Async save state_dict by @zkh2016 in #171
- `AdamOffloadOptimizer` can save the whole gathered state by @MayDomine in #184
- New tests for the new version of BMTrain by @Achazwl @JerryYin777 @MayDomine
Full Changelog: 0.2.3...1.0.0
BMTrain v0.2.3
What's Changed
- Get rid of the torch cpp extension when compiling by @MayDomine @Achazwl in #115 #132
- GitHub Actions CI/CD by @MayDomine in #115
- The loss scale can now be managed more dynamically via min and max loss scale by @Achazwl in #129
- Fix `bmt.load(model)` OOM with torch >= 1.12 by @MayDomine in #115
- `AdamOffloadOptimizer` can choose the AVX flag automatically at runtime by @MayDomine in #115
- BMTrain is now fully compatible with torch 2.0 by @MayDomine in #115
Full Changelog: 0.2.2...0.2.3
BMTrain v0.2.2
What's Changed
- Undo a deletion of detach from a previous version by @Achazwl in #69
- Avoid empty state when adjusting the loss scale by @Achazwl in #68
- Fix running BMTrain on a single GPU without torchrun by @Achazwl in #70
- Fix inspector grad when a tensor is not recorded in some layers by @Achazwl in #90
- Fix: make the load stream wait for the default stream after init_parameters by @Achazwl in #78
- Temporary fix for bmtrain + opendelta load state dict by @Achazwl in #77
- Support multiple inputs/outputs in TransformerBlockList by @Achazwl in #92 #91
Full Changelog: 0.2.1...0.2.2
BMTrain v0.2.1
What's Changed
- fix output shape mismatch after CheckpointBlock by @Achazwl in #64
- add tests for grad accumulation and the state_dict interface by @MayDomine in #61
- fix inspect grad mean/std from None to 0 by @Achazwl in #60
Full Changelog: 0.2.0...0.2.1
v0.2.0
Update Log 0.2.0
New Features
1. Added an Optimizer Manager to support various optimizer algorithms.
Before 0.2.0, the optimizer was tightly coupled to the "loss scaler", so users could not use multiple optimizers at the same time when training a model in fp16.
======= Before 0.2.0 =======
for iteration in range(1000):
    # zero grad
    optimizer.zero_grad()
    # ...
    # loss scale and backward
    loss = optimizer.loss_scale(loss)
    loss.backward()
    # optimizer step
    bmtrain.optim_step(optimizer, lr_scheduler)
`bmtrain.optim_step` allows only one optimizer and at most one `lr_scheduler`, which cannot handle more complex scenarios.
======= After 0.2.0 =======
# create a new instance of optimizer manager
optim_manager = bmtrain.optim.OptimManager(loss_scale=1024)
# let optim_manager handle all the optimizers and (optionally) their corresponding lr_schedulers
optim_manager.add_optimizer(optimizer, lr_scheduler)
# add_optimizer can be called multiple times to add other optimizers.
for iteration in range(1000):
    # zero grad
    optim_manager.zero_grad() # calling zero_grad for each optimizer
    # ...
    # loss scale and backward
    optim_manager.backward(loss)
    # optimizer step
    optim_manager.step()
Starting from BMTrain 0.2.0, we provide `OptimManager` to manage optimizers and lr_schedulers. `OptimManager` supports managing multiple optimizers and lr_schedulers at the same time, and allows setting the loss scale independently. `OptimManager` can also manage PyTorch native optimizers, such as SGD and AdamW.
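As a quick illustration of the above, the sketch below drives one BMTrain optimizer and one PyTorch-native optimizer through a single `OptimManager`. The toy fp16 model, the parameter split, and the optimizer constructor arguments (learning rates, `AdamOffloadOptimizer` settings) are illustrative assumptions; only the `OptimManager` calls follow the pattern documented above.
```python
import torch
import bmtrain as bmt

bmt.init_distributed(seed=0)

# Toy fp16 model split into two parts purely for illustration.
encoder = torch.nn.Linear(128, 128).half().cuda()
head = torch.nn.Linear(128, 2).half().cuda()

# One BMTrain optimizer and one native PyTorch optimizer.
opt_encoder = bmt.optim.AdamOffloadOptimizer(encoder.parameters(), lr=1e-4)
opt_head = torch.optim.SGD(head.parameters(), lr=1e-2)

optim_manager = bmt.optim.OptimManager(loss_scale=1024)
optim_manager.add_optimizer(opt_encoder)   # the lr_scheduler argument is optional
optim_manager.add_optimizer(opt_head)      # a second, PyTorch-native optimizer

for iteration in range(1000):
    optim_manager.zero_grad()              # zero_grad on every managed optimizer
    x = torch.randn(32, 128, dtype=torch.half, device="cuda")
    loss = head(encoder(x)).float().mean()
    optim_manager.backward(loss)           # applies the loss scale, then backward
    optim_manager.step()                   # steps every optimizer (and its scheduler, if any)
```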
2. Pipeline Parallelism
In this version, BMTrain has added a new kind of parallel algorithm: pipeline parallelism.
To enable pipeline parallelism, one line of code needs to be modified.
======= ZeRO =======
layers = bmt.TransformerBlockList([
    # ...
])
======= Pipeline =======
layers = bmt.PipelineTransformerBlockList([
    # ...
])
Replacing `TransformerBlockList` with `PipelineTransformerBlockList` allows the parallel algorithm to switch from ZeRO to pipeline parallelism. The number of stages in the pipeline can be set by passing the `pipe_size` parameter to `bmtrain.init_distributed`.
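For example, a minimal sketch of both changes together, assuming 4 pipeline stages (the 4-stage choice and the seed value are illustrative):
```python
import bmtrain as bmt

# pipe_size sets the number of pipeline stages, as described above;
# 4 stages and seed=0 are illustrative choices, not requirements.
bmt.init_distributed(seed=0, pipe_size=4)

layers = bmt.PipelineTransformerBlockList([
    # ... the same transformer layers as in the ZeRO version ...
])
```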
3. Others
- Supports BF16 (a usage sketch follows this list).
- Tensors recorded in the inspector support backward propagation.
- Adds new tests.
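A minimal BF16 sketch, assuming the parameter dtype is simply taken from the tensor handed to `bmt.DistributedParameter`; the module, sizes, and initialization flow here are illustrative, not the only way to enable BF16:
```python
import torch
import bmtrain as bmt

bmt.init_distributed(seed=0)

class BF16Module(bmt.DistributedModule):
    def __init__(self):
        super().__init__()
        # Assumption: the parameter dtype follows the tensor passed in,
        # so creating it in torch.bfloat16 makes this a BF16 parameter.
        self.weight = bmt.DistributedParameter(
            torch.empty(1024, 1024, dtype=torch.bfloat16)
        )

    def forward(self, x):
        return x @ self.weight

model = BF16Module()
bmt.init_parameters(model)  # allocate/synchronize the distributed parameter
```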
What's Changed
- Fix bug: `require_grad_` is now usable for parameters in CheckpointBlock by @MayDomine in #42
- Refactor: loss_scaler and optimizer by @Achazwl in #43
- Pipeline parallelism for BMTrain TransformerBlockList by @MayDomine in #40
- Auto test & FIX bugs by @Achazwl in #45
- Pipeline speedup by @Achazwl in #44
- Fix typo by @Achazwl in #47
- add bf16 support by @Achazwl in #49
- fix Adam API change in torch>=1.12.0 by @Achazwl in #53
- allow applying a loss function to inspector tensors by @Achazwl in #51
Full Changelog: 0.1.8...0.2.0
Release 0.1.8 patch 1
What's Changed
Full Changelog: 0.1.8...0.1.8.post1
Release v0.1.8
What's Changed
- Support the maximize parameter for adam when dtype is torch.half by @alphaGem in #35
- add iter to make TransformerBlockList Iterable by @MayDomine in #37
- Support PyTorch 1.12.0 #38
- Set default rank and world_size when bmtrain is not initialized. #38
New Contributors
Full Changelog: 0.1.7...0.1.8
v0.1.7 Patch
BMTrain v0.1.7 was unable to release GPU memory in some cases, causing OOM problems. This patch fixes it.
What's Changed
- FIX: release the parameter in some special cases by @MayDomine in #32
Full Changelog: 0.1.7...0.1.7.post1
Release v0.1.7
What's Changed
- NEW: add ZeRO-2 by @MayDomine in #29
- FIX: load optimizer state dict
New Contributors
- @MayDomine made their first contribution in #29
Full Changelog: 0.1.6...0.1.7
Release v0.1.6
What's Changed
- FIX: load state dict by @a710128 in #26
- FIX: remove CUDA events from optimizer state_dict. FIX: F.adam maximize… by @a710128 in #25
Full Changelog: 0.1.5...0.1.6