Update llama_npu_opt_lora.sh (#8439)
Galaxy1458 authored May 15, 2024
1 parent 6902c3e · commit 11aba32
Showing 1 changed file with 2 additions and 2 deletions.
llm/llama/npu/llama_npu_opt_lora.sh
@@ -38,7 +38,7 @@ python -u -m paddle.distributed.launch \
     --dataset_name_or_path "data/" \
     --output_dir "./output/lora_bf16_llama_N1C8" \
     --per_device_train_batch_size 2 \
-    --gradient_accumulation_steps 16 \
+    --gradient_accumulation_steps 32 \
     --per_device_eval_batch_size 1 \
     --eval_accumulation_steps 1 \
     --max_steps ${max_steps} \
@@ -57,7 +57,7 @@ python -u -m paddle.distributed.launch \
     --eval_with_do_generation false \
     --metric_for_best_model "accuracy" \
     --recompute false \
-    --tensor_parallel_degree 8 \
+    --tensor_parallel_degree 4 \
     --pipeline_parallel_degree 1 \
     --zero_padding 0 \
     --sequence_parallel 1 \
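For context on what these two flags trade off: a back-of-the-envelope check of the effective training configuration before and after this commit. This assumes the N1C8 suffix in the output directory means one node with 8 NPUs (an assumption, not stated in the diff); with that, the data-parallel degree is num_devices / (tensor_parallel_degree * pipeline_parallel_degree), and the effective global batch size is per_device_train_batch_size * gradient_accumulation_steps * data-parallel degree. A minimal sketch, not part of the repo:

#!/bin/bash
# Hypothetical sanity check for commit 11aba32, assuming N1C8 = 1 node x 8 NPUs.
num_devices=8

# Before: tensor_parallel_degree=8, gradient_accumulation_steps=16
dp_before=$((num_devices / (8 * 1)))   # data-parallel degree = 1
global_before=$((2 * 16 * dp_before))  # 2 * 16 * 1 = 32

# After: tensor_parallel_degree=4, gradient_accumulation_steps=32
dp_after=$((num_devices / (4 * 1)))    # data-parallel degree = 2
global_after=$((2 * 32 * dp_after))    # 2 * 32 * 2 = 128

echo "effective global batch size: before=${global_before}, after=${global_after}"

Under that 8-NPU assumption, halving the tensor-parallel degree frees devices for data parallelism, and together with the doubled gradient accumulation the effective global batch size grows fourfold; the commit itself does not state the motivation.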
