From 177c5846f739af8ca920ab6a5f0afd4439ab925a Mon Sep 17 00:00:00 2001
From: fsx950223
Date: Sat, 8 Oct 2022 14:54:38 +0800
Subject: [PATCH] update doc

---
 efficientdet/README.md     | 2 +-
 efficientdet/tf2/README.md | 4 ++--
 2 files changed, 3 insertions(+), 3 deletions(-)

diff --git a/efficientdet/README.md b/efficientdet/README.md
index ec7ed3cb8..c9a49a77b 100644
--- a/efficientdet/README.md
+++ b/efficientdet/README.md
@@ -335,7 +335,7 @@ If you want to do inference for custom data, you can run

 You should check more details of runmode which is written in caption-4.

-## 9. Train on multi GPUs.
+## 9. Training on single node GPUs.

 Create a config file for the PASCAL VOC dataset called voc_config.yaml and put this in it.

diff --git a/efficientdet/tf2/README.md b/efficientdet/tf2/README.md
index dc52771f1..4d37a41a4 100644
--- a/efficientdet/tf2/README.md
+++ b/efficientdet/tf2/README.md
@@ -260,11 +260,11 @@ Finetune needs to use --pretrained_ckpt.
 If you want to continue to train the model, simply re-run the above command
 because the `num_epochs` is a maximum number of epochs. For example, to reproduce
 the result of efficientdet-d0, set `--num_epochs=300` then run the command multiple
 times until the training is finished.

-## 9. Train on multi GPUs.
+## 9. Training on single node GPUs.

 Just add ```--strategy=gpus```

-## 10. Train on multi node GPUs.
+## 10. Training on multi node GPUs.

 Following scripts will start a training task with 2 nodes.

 Start Chief training node.
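
Note for readers of the renamed sections: the headings above refer to single-node training via the `--strategy=gpus` flag and to a chief/worker multi-node setup. Below is a minimal, hypothetical Python sketch, not the repo's actual `train.py`, showing how such a `--strategy` flag can map onto `tf.distribute` strategies; the `multi_node` choice, the script structure, and the toy Keras model are assumptions for illustration only.

```python
# Hypothetical sketch of a --strategy flag mapping onto tf.distribute strategies.
# Not the repo's train.py; names other than "gpus" are illustrative assumptions.
import argparse

import tensorflow as tf


def build_strategy(name: str) -> tf.distribute.Strategy:
    """Return a distribution strategy for the requested mode."""
    if name == "gpus":
        # Single node, all visible GPUs (section 9 above).
        return tf.distribute.MirroredStrategy()
    if name == "multi_node":
        # Multiple nodes; each process reads its role (chief/worker) from
        # the TF_CONFIG environment variable (section 10 above).
        return tf.distribute.MultiWorkerMirroredStrategy()
    # Default: no distribution, run on the default device.
    return tf.distribute.get_strategy()


def main() -> None:
    parser = argparse.ArgumentParser()
    parser.add_argument("--strategy", default="", choices=["", "gpus", "multi_node"])
    args = parser.parse_args()

    strategy = build_strategy(args.strategy)
    print("Replicas in sync:", strategy.num_replicas_in_sync)

    # Variables must be created under the strategy scope so they are
    # mirrored across replicas; a toy model stands in for EfficientDet here.
    with strategy.scope():
        model = tf.keras.Sequential([tf.keras.layers.Dense(1)])
        model.compile(optimizer="sgd", loss="mse")


if __name__ == "__main__":
    # For the multi-node case, the chief and each worker node would run the
    # same command after exporting their own TF_CONFIG, e.g. (illustrative):
    # TF_CONFIG='{"cluster": {"chief": ["host1:1111"], "worker": ["host2:1112"]},
    #             "task": {"type": "chief", "index": 0}}'
    main()
```

In a real two-node run like the one the "Start Chief training node." context line refers to, each node would launch the same training command with its own `TF_CONFIG` (role `chief` on one node, `worker` on the other), as sketched in the trailing comment.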