[pyspark] sort qid for SparkRanker #8497
Conversation
@WeichenXu123 @trivialfis please help review it.
python-package/xgboost/spark/core.py
Outdated
@@ -729,6 +729,10 @@ def _fit(self, dataset):
        else:
            dataset = dataset.repartition(num_workers)

        if self.isDefined(self.qid_col) and self.getOrDefault(self.qid_col):
            # XGBoost requires qid to be sorted for each partition
            dataset = dataset.sortWithinPartitions(alias.qid)
Nit: add `ascending=True` explicitly.
Done
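The fix above relies on Spark sorting rows by qid inside each partition independently, with no global shuffle. As a plain-Python sketch of that semantics (the helper name `sort_within_partitions` is ours, not a Spark API), each partition is sorted on the qid key in ascending order, mirroring `sortWithinPartitions(alias.qid, ascending=True)`:

```python
# Hypothetical stand-in for Spark's sortWithinPartitions: sort rows by qid
# inside each partition independently; partitions themselves are untouched.
def sort_within_partitions(partitions, key):
    return [sorted(part, key=key) for part in partitions]

# Two partitions of (qid, feature) rows, with qid out of order.
partitions = [
    [(2, "a"), (0, "b"), (1, "c")],
    [(9, "x"), (4, "y")],
]
sorted_parts = sort_within_partitions(partitions, key=lambda row: row[0])
# Each partition is now non-decreasing in qid, as XGBoost requires.
```

Note that this only guarantees per-partition ordering; rows for the same qid that land in different partitions stay separate, which is acceptable here because XGBoost checks ordering per worker.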
(Vectors.sparse(3, {1: 8.0, 2: 9.5}), 2, 1),
(Vectors.dense(1.0, 2.0, 3.0), 0, 0),
(Vectors.dense(4.0, 5.0, 6.0), 1, 0),
(Vectors.dense(9.0, 4.0, 8.0), 2, 0),
Nit: do we need to hardcode such a long data list? We could hardcode 4 rows and use `[ ... ] * 100` instead.
Done.
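The reviewer's suggestion above can be sketched as follows: hardcode a handful of base rows and replicate them with list multiplication rather than writing out a long literal. (The row values here are illustrative placeholders, not the actual test fixtures.)

```python
# Hypothetical sketch: a few hardcoded (features, label, qid) rows,
# replicated to get a larger DataFrame payload without a long literal.
base_rows = [
    ([1.0, 2.0, 3.0], 0, 0),
    ([4.0, 5.0, 6.0], 1, 0),
    ([9.0, 4.0, 8.0], 2, 0),
    ([1.0, 2.0, 3.0], 0, 1),
]
rows = base_rows * 100  # 400 rows, cycling through the 4 base rows
```

The resulting `rows` list could then be passed to `session.createDataFrame(...)` in place of the long hardcoded list.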
ranker = SparkXGBRanker(qid_col="qid", num_workers=2)
assert ranker.getOrDefault(ranker.objective) == "rank:pairwise"
model = ranker.fit(self.ranker_df_train_1)
model.transform(self.ranker_df_test).collect()
What's the purpose of this test?
To test whether SparkXGBRanker throws an exception.
)
self.ranker_df_train_1 = self.session.createDataFrame(
    [
        (Vectors.sparse(3, {1: 1.0, 2: 5.5}), 0, 9),
How did you produce this data and the expected result? Please try not to use hardcoded results.
Yeah, the qid is in descending order. Without the fix, it throws an exception: `../src/data/data.cc:486: Check failed: non_dec: qid must be sorted in non-decreasing order along with data.`
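The invariant behind that `data.cc:486` check can be sketched in a few lines: the qid values a worker receives must be non-decreasing. (The helper `qid_is_non_decreasing` is ours for illustration, not an XGBoost API.)

```python
# Minimal sketch of the invariant XGBoost enforces on ranking input:
# qid must be non-decreasing within the data fed to each worker.
def qid_is_non_decreasing(qids):
    return all(a <= b for a, b in zip(qids, qids[1:]))

descending = [9, 2, 0]          # like the test fixture above: rejected
ascending = sorted(descending)  # after sortWithinPartitions: accepted
```

This is why the descending qid column in the hardcoded test data reliably reproduced the crash before the `sortWithinPartitions` fix.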
@hcho3 please help merge it. Thanks!
@wbo4958 Could you please change the tests to NOT use hardcoded results?
Hi @trivialfis, for this case the test I added checks whether the PySpark application crashes, so I think it's OK to hardcode the data, since the data straightforwardly shows the scenario that can crash the process.
pred_result = model.transform(self.ranker_df_test).collect()

for row in pred_result:
    assert np.isclose(row.prediction, row.expected_prediction, rtol=1e-3)
@wbo4958 This is not only checking exception.
This test is moved from https://github.com/dmlc/xgboost/pull/8497/files#diff-3b3ca1f9bd10767b61c3eab170a027b67408881dcf57e4e992c2caa47d660ff5L386-L407, I didn't change it
Ah ... That's a headache. I'm blocked by these tests and don't know how to recreate them...
Yes, we can have a follow-up PR to refactor these tests by not hardcoding them.
* [pyspark] sort qid for SparkRanker * resolve comments Co-authored-by: Bobby Wang <[email protected]>
To fix #8491