
[AutoParallel] Add benchmark for llama-7b-dy2st. #8559

Merged: 2 commits into PaddlePaddle:develop on Jun 7, 2024

Conversation

GhostScreaming (Contributor) commented:

PR types

Others

PR changes

Others

Description

Add a llama2-7b benchmark config to test_tipc.


paddle-bot bot commented Jun 6, 2024

Thanks for your contribution!


codecov bot commented Jun 6, 2024

Codecov Report

All modified and coverable lines are covered by tests ✅

Project coverage is 53.86%. Comparing base (b0a8cdd) to head (96afcaa).
Report is 246 commits behind head on develop.

Additional details and impacted files
@@             Coverage Diff             @@
##           develop    #8559      +/-   ##
===========================================
- Coverage    53.86%   53.86%   -0.01%     
===========================================
  Files          620      620              
  Lines        97081    97110      +29     
===========================================
+ Hits         52296    52304       +8     
- Misses       44785    44806      +21     


"per_device_eval_batch_size": 2,
"tensor_parallel_degree": 1,
"pipeline_parallel_degree": 1,
"sharding": "stage1",
Contributor commented:

Dynamic-graph stage1 is consistent with static-graph stage2.

"tensor_parallel_degree": 1,
"pipeline_parallel_degree": 1,
"sharding": "stage1",
"sharding_parallel_config": "enable_stage1_overlap",
Contributor commented:

"enable_stage1_overlap" --> "enable_stage2_overlap"

"pipeline_parallel_degree": 1,
"sharding": "stage1",
"sharding_parallel_config": "enable_stage1_overlap",
"tensor_parallel_config": "enable_delay_scale_loss enable_mp_async_allreduce enable_mp_skip_c_identity enable_mp_fused_linear_param_grad_add",
Contributor commented:

"tensor_parallel_config": "enable_delay_scale_loss enable_mp_async_allreduce enable_mp_skip_c_identity enable_mp_fused_linear_param_grad_add",
-->
"tensor_parallel_config": "enable_mp_async_allreduce",

"sharding": "stage1",
"sharding_parallel_config": "enable_stage1_overlap",
"tensor_parallel_config": "enable_delay_scale_loss enable_mp_async_allreduce enable_mp_skip_c_identity enable_mp_fused_linear_param_grad_add",
"pipeline_parallel_config": "enable_delay_scale_loss enable_release_grads disable_partial_send_recv",
Contributor commented:

Pipeline parallelism is not enabled, so setting "pipeline_parallel_config": "" (empty) is sufficient.

"weight_decay": 0.01,
"bf16": true,
"fp16_opt_level": "O2",
"amp_master_grad": true,
Contributor commented:

Add:
"amp_custom_black_list": ["reduce_sum", "c_softmax_with_cross_entropy"],
"amp_custom_white_list": ["lookup_table", "lookup_table_v2"],

@zhiqiu (Collaborator) left a comment:

LGTM

@ZHUI merged commit 162d8d3 into PaddlePaddle:develop on Jun 7, 2024
9 of 12 checks passed
4 participants