Enable pytest failures on warnings on FutureWarnings (Replace deprecated geopandas.dataset module) #1360
Conversation
  gpu_dataframe = cuspatial.from_geopandas(host_dataframe)
  # The df size is 8kb of cudf rows and 217kb of the geometry column
- assert gpu_dataframe.memory_usage().sum() == 224945
+ assert gpu_dataframe.memory_usage().sum() == 216793
This test is now essentially the same as test_memory_usage_large
below. Is it OK to deduplicate?
It's the same size because there are no geoseries in the geodataframe other than "geometry", right? The logic is different.
Ah yes, you're correct. `gpd.read_file(gpd.datasets.get_path("naturalearth_lowres"))` also has pop_est, continent, name, iso_a3, and gdp_md_est columns. Is it OK if this test no longer has those columns?
This does change the logic; previously the test was also validating that the cudf series are being read with `from_geopandas`. I don't think another test verifies this.
Sorry, coming back to this now.

> cudf series

Did you mean geopandas series? It looks like that may be tested in python/cuspatial/cuspatial/tests/test_from_geopandas.py, and I don't see `cuspatial.from_geopandas` supporting a `cudf.Series`.
Thanks for finally fixing the datasets warnings.
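For reference, the pytest setting that promotes these warnings to test failures looks roughly like this (a sketch; the exact section name and location in cuspatial's config files may differ):

```ini
# pytest.ini (or the equivalent [tool.pytest.ini_options] table in pyproject.toml)
[pytest]
filterwarnings =
    error::FutureWarning
    error::DeprecationWarning
```

With this in place, any test that exercises a deprecated code path such as `geopandas.datasets` fails instead of passing with a warning in the log.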
/merge
Description

Continuing the effort in rapidsai/build-planning#26 by enabling `FutureWarning`s and `DeprecationWarning`s to raise errors in CI. The primary change here is to replace `geopandas.datasets` usage with the files it used (the data in https://github.com/geopandas/geopandas/tree/main/geopandas/tests/data).

Checklist