
[Doc] how to configure regarding to stage-level #9727

Merged: 5 commits into dmlc:master from doc-stage-level on Oct 30, 2023

Conversation

@wbo4958 (Contributor) commented on Oct 27, 2023

No description provided.

doc/tutorials/spark_estimator.rst (outdated, resolved):
@@ -128,7 +128,7 @@ Write your PySpark application
 ==============================

 Below snippet is a small example for training xgboost model with PySpark. Notice that we are
-using a list of feature names and the additional parameter ``device``:
+using a list of feature names instead of vector features and the additional parameter ``device``:
Member commented:

I think this is a little bit confusing: what's "vector features"? Also, in "and the additional parameter ...", is the "and" a continuation of "vector features" or of "list of feature names"?
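For context, here is a hedged sketch of what the revised sentence means in practice: feature columns passed as a list of names rather than a single assembled vector column, together with the ``device`` parameter. The column names and the ``train_df`` DataFrame are assumptions for illustration, not code from this PR:

```python
from xgboost.spark import SparkXGBClassifier

# Feature columns are given as a list of names rather than a single
# assembled vector column; ``device`` requests GPU training.
classifier = SparkXGBClassifier(
    features_col=["age", "income", "clicks"],  # hypothetical column names
    label_col="label",
    device="cuda",
)
model = classifier.fit(train_df)  # train_df: an assumed Spark DataFrame
```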

doc/tutorials/spark_estimator.rst (resolved)
doc/tutorials/spark_estimator.rst (outdated, resolved)
doc/tutorials/spark_estimator.rst (outdated, resolved)
Comment on lines 203 to 204:

    By executing the aforementioned command, the XGBoost application will be submitted with python environment created by pip or conda,
    specifying a request for 1 GPU and 12 CPUs per executor. During the ETL phase, a total of 12 tasks will be executed concurrently.

Member suggested change:

    The submit command sends the Python environment created by pip or conda along with the specification of GPU allocation. During the ETL phase, a total of 12 tasks will be executed concurrently.
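For reference, a minimal sketch of the executor resource setup the quoted passage describes: 1 GPU and 12 CPUs per executor, so that with one CPU per task the ETL phase runs 12 concurrent tasks. The configuration keys are standard Spark properties; the session setup itself is illustrative, not taken from the PR:

```python
from pyspark.sql import SparkSession

# 12 CPU cores and 1 GPU per executor; with spark.task.cpus=1 the ETL
# stage can run 12 concurrent tasks on each executor.
spark = (
    SparkSession.builder
    .config("spark.executor.cores", "12")
    .config("spark.task.cpus", "1")
    .config("spark.executor.resource.gpu.amount", "1")
    .getOrCreate()
)
```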

doc/tutorials/spark_estimator.rst (resolved)
@trivialfis merged commit fa65cf6 into dmlc:master on Oct 30, 2023 (21 of 25 checks passed).
@wbo4958 deleted the doc-stage-level branch on January 16, 2024.