diff --git a/CONTRIBUTING.rst b/CONTRIBUTING.rst index 693f0ede..8e15b6ed 100644 --- a/CONTRIBUTING.rst +++ b/CONTRIBUTING.rst @@ -4,8 +4,7 @@ Contributing ============ -Contributions are welcome, and they are greatly appreciated! Every little bit -helps, and credit will always be given. +Contributions are welcome, and they are greatly appreciated! Every little bit helps, and credit will always be given. You can contribute in many ways: @@ -26,21 +25,17 @@ If you are reporting a bug, please include: Fix Bugs ~~~~~~~~ -Look through the GitHub issues for bugs. Anything tagged with "bug" and "help -wanted" is open to whoever wants to implement it. +Look through the GitHub issues for bugs. Anything tagged with "bug" and "help wanted" is open to whoever wants to implement it. Implement Features ~~~~~~~~~~~~~~~~~~ -Look through the GitHub issues for features. Anything tagged with "enhancement" -and "help wanted" is open to whoever wants to implement it. +Look through the GitHub issues for features. Anything tagged with "enhancement" and "help wanted" is open to whoever wants to implement it. Write Documentation ~~~~~~~~~~~~~~~~~~~ -RavenPy could always use more documentation, whether as part of the -official RavenPy docs, in docstrings, or even on the web in blog posts, -articles, and such. +RavenPy could always use more documentation, whether as part of the official RavenPy docs, in docstrings, or even on the web in blog posts, articles, and such. Submit Feedback ~~~~~~~~~~~~~~~ @@ -90,7 +85,7 @@ Ready to contribute? Here's how to set up `ravenpy` for local development. $ flake8 ravenpy tests $ black --check ravenpy tests - $ python setup.py test # or `pytest` + $ pytest tests $ tox To get flake8, black, and tox, just pip install them into your virtualenv. 
diff --git a/HISTORY.rst b/HISTORY.rst index 2edbe13a..df8f0a6d 100644 --- a/HISTORY.rst +++ b/HISTORY.rst @@ -4,7 +4,19 @@ History 0.12.4 (unreleased) ------------------- -* In tests, set xclim' missing value option to ``skip``. As of xclim 0.45, missing value checks are applied to ``fit`` indicator, meaning that parameters will be set to None if missing values are found in the fitted time series. Wrap calls to ``fit`` with ``xclim.set_options(check_missing="skip")`` to reproduce the previous behavior of xclim. + +Breaking changes +^^^^^^^^^^^^^^^^ +* In tests, set `xclim`'s missing value option to ``skip``. As of `xclim` v0.45, missing value checks are applied to the ``fit`` indicator, meaning that parameters will be set to `None` if missing values are found in the fitted time series. Wrap calls to ``fit`` with ``xclim.set_options(check_missing="skip")`` to reproduce the previous behavior of xclim. +* The `_determine_upstream_ids` function under `ravenpy.utilities.geoserver` has been removed as it was a duplicate of `ravenpy.utilities.geo.determine_upstream_ids`. The latter function is now used in its place. + +Internal changes +^^^^^^^^^^^^^^^^ +* `RavenPy` now accepts a `RAVENPY_THREDDS_URL` for setting the URL globally to the THREDDS-hosted climate data service. Defaults to `https://pavics.ouranos.ca/twitcher/ows/proxy/thredds`. +* `RavenPy` processes and tests that depend on remote GeoServer calls now allow for optional server URL and file location targets. The server URL can be set globally with the following environment variable: + * `RAVENPY_GEOSERVER_URL`: URL to the GeoServer-hosted vector/raster data. Defaults to `https://pavics.ouranos.ca/geoserver`. This environment variable was previously called `GEO_URL` but was renamed to narrow its scope to `RavenPy`. + * `GEO_URL` is still supported for backward compatibility but may eventually be removed in a future release. 
+* `RavenPy` has temporarily pinned `xarray` below v2023.9.0 due to incompatibilities with `xclim` v0.45.0. 0.12.3 (2023-08-25) ------------------- diff --git a/docs/installation.rst b/docs/installation.rst index ba264ef3..3ffe686a 100644 --- a/docs/installation.rst +++ b/docs/installation.rst @@ -5,8 +5,7 @@ Installation Anaconda Python Installation ---------------------------- -For many reasons, we recommend using a `Conda environment `_ -to work with the full RavenPy installation. This implementation is able to manage the harder-to-install GIS dependencies, like `GDAL`. +For many reasons, we recommend using a `Conda environment `_ to work with the full RavenPy installation. This implementation is able to manage the harder-to-install GIS dependencies, like `GDAL`. Begin by creating an environment: @@ -26,8 +25,7 @@ RavenPy can then be installed directly via its `conda-forge` package by running: (ravenpy) $ conda install -c conda-forge ravenpy -This approach installs the `Raven `_ binary directly to your environment `PATH`, -as well as installs all the necessary Python and C libraries supporting GIS functionalities. +This approach installs the `Raven `_ binary directly to your environment `PATH`, as well as installs all the necessary Python and C libraries supporting GIS functionalities. Python Installation (pip) ------------------------- @@ -71,10 +69,22 @@ Once downloaded/compiled, the binary can be pointed to manually (as an absolute $ export RAVENPY_RAVEN_BINARY_PATH=/path/to/my/custom/raven +Customizing remote service datasets +----------------------------------- + +A number of functions and tests within `RavenPy` are dependent on remote services (THREDDS, GeoServer) for providing climate datasets, hydrological boundaries, and other data. These services are provided by `Ouranos `_ through the `PAVICS `_ project and may be subject to change in the future.
+ +If for some reason you wish to use alternate services, you can set the following environment variables to point to your own instances of THREDDS and GeoServer: + +.. code-block:: console + + $ export RAVENPY_THREDDS_URL=https://my.domain.org/thredds + $ export RAVENPY_GEOSERVER_URL=https://my.domain.org/geoserver + Development Installation (from sources) --------------------------------------- -The sources for RavenPy can be obtained from the GitHub repo: +The sources for `RavenPy` can be obtained from the GitHub repo: .. code-block:: console diff --git a/environment.yml b/environment.yml index 1e032c61..72d82be3 100644 --- a/environment.yml +++ b/environment.yml @@ -39,7 +39,7 @@ dependencies: - shapely - spotpy - statsmodels - - xarray + - xarray >=2022.12.0,<2023.9.0 # xarray v2023.9.0 is incompatible with xclim<=0.45.0 - xclim >=0.43 - xesmf - xskillscore diff --git a/pyproject.toml b/pyproject.toml index 3ce6e7ba..30c3ef3a 100644 --- a/pyproject.toml +++ b/pyproject.toml @@ -56,7 +56,7 @@ dependencies = [ "scipy", "spotpy", "statsmodels", - "xarray>=2022.12.0,<2023.9.0", # xarray v2023.9.0 is incompatible with xclim<=0.45.0 "xclim>=0.43.0", "xskillscore" ] diff --git a/ravenpy/extractors/forecasts.py b/ravenpy/extractors/forecasts.py index 2fab110f..a1dc0a49 100644 --- a/ravenpy/extractors/forecasts.py +++ b/ravenpy/extractors/forecasts.py @@ -1,8 +1,11 @@ import datetime as dt import logging +import os import re +import warnings from pathlib import Path from typing import Any, List, Tuple, Union +from urllib.parse import urljoin import pandas as pd import xarray as xr @@ -19,6 +22,13 @@ LOGGER = logging.getLogger("PYWPS") +# Can be set at runtime with `$ env RAVENPY_THREDDS_URL=https://xx.yy.zz/thredds/ ...`.
+THREDDS_URL = os.environ.get( + "RAVENPY_THREDDS_URL", "https://pavics.ouranos.ca/twitcher/ows/proxy/thredds/" +) +if not THREDDS_URL.endswith("/"): + THREDDS_URL = f"{THREDDS_URL}/" + def get_hindcast_day(region_coll: fiona.Collection, date, climate_model="GEPS"): """Generate a forecast dataset that can be used to run raven. @@ -38,15 +48,41 @@ def get_hindcast_day(region_coll: fiona.Collection, date, climate_model="GEPS"): def get_CASPAR_dataset( - climate_model: str, date: dt.datetime + climate_model: str, + date: dt.datetime, + thredds: str = THREDDS_URL, + directory: str = "dodsC/birdhouse/disk2/caspar/daily/", ) -> Tuple[ xr.Dataset, List[Union[Union[DatetimeIndex, Series, Timestamp, Timestamp], Any]] ]: - """Return CASPAR dataset.""" + """Return CASPAR dataset. + + Parameters + ---------- + climate_model : str + Type of climate model, for now only "GEPS" is supported. + date : dt.datetime + The date of the forecast. + thredds : str + The thredds server url. Default: "https://pavics.ouranos.ca/twitcher/ows/proxy/thredds/" + directory : str + The directory on the thredds server where the data is stored. Default: "dodsC/birdhouse/disk2/caspar/daily/" + + Returns + ------- + xr.Dataset + The forecast dataset. + """ + if thredds[-1] != "/": + warnings.warn( + "The thredds url should end with a slash. Appending it to the url." 
+ ) + thredds = f"{thredds}/" if climate_model == "GEPS": d = dt.datetime.strftime(date, "%Y%m%d") - file_url = f"https://pavics.ouranos.ca/twitcher/ows/proxy/thredds/dodsC/birdhouse/disk2/caspar/daily/GEPS_{d}.nc" + file_location = urljoin(directory, f"GEPS_{d}.nc") + file_url = urljoin(thredds, file_location) ds = xr.open_dataset(file_url) # Here we also extract the times at 6-hour intervals as Raven must have # constant timesteps and GEPS goes to 6 hours @@ -66,14 +102,37 @@ def get_CASPAR_dataset( def get_ECCC_dataset( climate_model: str, + thredds: str = THREDDS_URL, + directory: str = "dodsC/datasets/forecasts/eccc_geps/", ) -> Tuple[ Dataset, List[Union[Union[DatetimeIndex, Series, Timestamp, Timestamp], Any]] ]: - """Return latest GEPS forecast dataset.""" + """Return latest GEPS forecast dataset. + + Parameters + ---------- + climate_model : str + Type of climate model, for now only "GEPS" is supported. + thredds : str + The thredds server url. Default: "https://pavics.ouranos.ca/twitcher/ows/proxy/thredds/" + directory : str + The directory on the thredds server where the data is stored. Default: "dodsC/datasets/forecasts/eccc_geps/" + + Returns + ------- + xr.Dataset + The forecast dataset. + """ + if thredds[-1] != "/": + warnings.warn( + "The thredds url should end with a slash. Appending it to the url." + ) + thredds = f"{thredds}/" + if climate_model == "GEPS": # Eventually the file will find a permanent home, until then let's use the test folder. 
- file_url = "https://pavics.ouranos.ca/twitcher/ows/proxy/thredds/dodsC/datasets/forecasts/eccc_geps/GEPS_latest.ncml" - + file_location = urljoin(directory, "GEPS_latest.ncml") + file_url = urljoin(thredds, file_location) ds = xr.open_dataset(file_url) # Here we also extract the times at 6-hour intervals as Raven must have # constant timesteps and GEPS goes to 6 hours @@ -130,9 +189,10 @@ def get_subsetted_forecast( times: Union[dt.datetime, xr.DataArray], is_caspar: bool, ) -> xr.Dataset: - """ + """Get Subsetted Forecast. + This function takes a dataset, a region and the time sampling array and returns - the subsetted values for the given region and times + the subsetted values for the given region and times. Parameters ---------- @@ -143,14 +203,12 @@ def get_subsetted_forecast( times : dt.datetime or xr.DataArray The array of times required to do the forecast. is_caspar : bool - True if the data comes from Caspar, false otherwise. - Used to define lat/lon on rotated grid. + True if the data comes from Caspar, false otherwise. Used to define lat/lon on rotated grid. Returns ------- xr.Dataset The forecast dataset. - """ # Extract the bounding box to subset the entire forecast grid to something # more manageable diff --git a/ravenpy/utilities/forecasting.py b/ravenpy/utilities/forecasting.py index 1d2f6fb6..75eb8da6 100644 --- a/ravenpy/utilities/forecasting.py +++ b/ravenpy/utilities/forecasting.py @@ -6,9 +6,12 @@ """ import datetime as dt import logging +import os import tempfile +import warnings from pathlib import Path from typing import List, Union +from urllib.parse import urlparse import climpred import xarray as xr @@ -20,6 +23,13 @@ LOGGER = logging.getLogger("PYWPS") +# Can be set at runtime with `$ env RAVENPY_THREDDS_URL=https://xx.yy.zz/thredds/ ...`. 
+THREDDS_URL = os.environ.get( + "RAVENPY_THREDDS_URL", "https://pavics.ouranos.ca/twitcher/ows/proxy/thredds/" +) +if not THREDDS_URL.endswith("/"): + THREDDS_URL = f"{THREDDS_URL}/" + def climatology_esp( config, @@ -391,9 +401,10 @@ def ensemble_prediction( hindcast_from_meteo_forecast = ensemble_prediction -def compute_forecast_flood_risk(forecast: xr.Dataset, flood_level: float): - """Returns the empirical exceedance probability for each forecast day based - on a flood level threshold. +def compute_forecast_flood_risk( + forecast: xr.Dataset, flood_level: float, thredds: str = THREDDS_URL +) -> xr.Dataset: + """Returns the empirical exceedance probability for each forecast day based on a flood level threshold. Parameters ---------- @@ -402,12 +413,19 @@ def compute_forecast_flood_risk(forecast: xr.Dataset, flood_level: float): flood_level : float Flood level threshold. Will be used to determine if forecasts exceed this specified flood threshold. Should be in the same units as the forecasted streamflow. + thredds : str + The thredds server url. Default: "https://pavics.ouranos.ca/twitcher/ows/proxy/thredds/" Returns ------- xr.Dataset Time series of probabilities of flood level exceedance. """ + if thredds[-1] != "/": + warnings.warn( + "The thredds url should end with a slash. Appending it to the url." 
+ ) + thredds = f"{thredds}/" # ---- Calculations ---- # # Ensemble: for each day, calculate the percentage of members that are above the threshold @@ -429,12 +447,13 @@ def compute_forecast_flood_risk(forecast: xr.Dataset, flood_level: float): forecast.where(forecast > flood_level).notnull() / 1.0 ) # This is needed to return values instead of floats + domain = urlparse(thredds).netloc + out = pct.to_dataset(name="exceedance_probability") - out.attrs["source"] = "PAVICS-Hydro flood risk forecasting tool, pavics.ouranos.ca" + out.attrs["source"] = f"PAVICS-Hydro flood risk forecasting tool, {domain}" out.attrs["history"] = ( - "File created on " - + dt.datetime.utcnow().strftime("%Y-%m-%d %H:%M:%S") - + "UTC on the PAVICS-Hydro service available at pavics.ouranos.ca" + f"File created on {dt.datetime.utcnow().strftime('%Y-%m-%d %H:%M:%S')} " + f"UTC on the PAVICS-Hydro service available at {domain}." ) out.attrs[ "title" diff --git a/ravenpy/utilities/geoserver.py b/ravenpy/utilities/geoserver.py index 8e239757..5ea4d60b 100644 --- a/ravenpy/utilities/geoserver.py +++ b/ravenpy/utilities/geoserver.py @@ -1,12 +1,12 @@ -""" -GeoServer interaction operations. +"""GeoServer interaction operations. Working assumptions for this module: * Point coordinates are passed as shapely.geometry.Point instances. * BBox coordinates are passed as (lon1, lat1, lon2, lat2). * Shapes (polygons) are passed as shapely.geometry.shape parsable objects. * All functions that require a CRS have a CRS argument with a default set to WGS84. -* GEO_URL points to the GeoServer instance hosting all files. +* GEOSERVER_URL points to the GeoServer instance hosting all files. +* For legacy reasons, we also accept the `GEO_URL` environment variable. TODO: Refactor to remove functions that are just 2-lines of code. For example, many function's logic essentially consists in creating the layer name. 
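The trailing-slash normalization added throughout this changeset matters because of how `urllib.parse.urljoin` resolves relative references: without a final slash, the last path segment of the base URL is treated as a file name and dropped. A minimal illustration (the host name here is hypothetical):

```python
from urllib.parse import urljoin

# With a trailing slash, the relative path is appended under the base path.
with_slash = urljoin("https://example.org/geoserver/", "wfs")
# → "https://example.org/geoserver/wfs"

# Without it, "geoserver" is treated as a file name and replaced.
without_slash = urljoin("https://example.org/geoserver", "wfs")
# → "https://example.org/wfs"
```

This is why the module-level constants and the `_fix_server_url` helper both append a `/` when one is missing, rather than leaving the URL as the user supplied it.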
@@ -18,7 +18,7 @@ import urllib.request import warnings from pathlib import Path -from typing import Iterable, Optional, Sequence, Tuple, Union +from typing import Dict, Iterable, Optional, Sequence, Tuple, Union from urllib.parse import urljoin from requests import Request @@ -46,11 +46,18 @@ Intersects = None wfs_Point = None -# Do not remove the trailing / otherwise `urljoin` will remove the geoserver path. -# Can be set at runtime with `$ env GEO_URL=https://xx.yy.zz/geoserver/ ...`. -GEO_URL = os.getenv("GEO_URL", "https://pavics.ouranos.ca/geoserver/") +from .geo import determine_upstream_ids + +# Can be set at runtime with `$ env RAVENPY_GEOSERVER_URL=https://xx.yy.zz/geoserver/ ...`. +# For legacy reasons, we also accept the `GEO_URL` environment variable. +GEOSERVER_URL = os.getenv( + "RAVENPY_GEOSERVER_URL", + os.getenv("GEO_URL", "https://pavics.ouranos.ca/geoserver/"), +) +if not GEOSERVER_URL.endswith("/"): + GEOSERVER_URL = f"{GEOSERVER_URL}/" -# We store the contour of different hydrobasins domains +# We store the contour of different HydroBASINS domains hybas_dir = Path(__file__).parent.parent / "data" / "hydrobasins_domains" hybas_pat = "hybas_lake_{domain}_lev01_v1c.zip" @@ -59,6 +66,15 @@ hybas_domains = {dom: hybas_dir / hybas_pat.format(domain=dom) for dom in hybas_regions} +def _fix_server_url(server_url: str) -> str: + if not server_url.endswith("/"): + warnings.warn( + "The GeoServer url should end with a slash. Appending it to the url." + ) + return f"{server_url}/" + return server_url + + def _get_location_wfs( bbox: Optional[ Tuple[ @@ -75,7 +91,7 @@ def _get_location_wfs( ] ] = None, layer: str = None, - geoserver: str = GEO_URL, + geoserver: str = GEOSERVER_URL, ) -> dict: """Return leveled features from a hosted data set using bounding box coordinates and WFS 1.1.0 protocol. @@ -98,6 +114,8 @@ def _get_location_wfs( dict A GeoJSON-derived dictionary of vector features (FeatureCollection). 
""" + geoserver = _fix_server_url(geoserver) + wfs = WebFeatureService(url=urljoin(geoserver, "wfs"), version="2.0.0", timeout=30) if bbox and point: @@ -133,7 +151,7 @@ def _get_location_wfs( def _get_feature_attributes_wfs( attribute: Sequence[str], layer: str = None, - geoserver: str = GEO_URL, + geoserver: str = GEOSERVER_URL, ) -> str: """Return WFS GetFeature URL request for attribute values. @@ -157,6 +175,8 @@ def _get_feature_attributes_wfs( ----- Non-existent attributes will raise a cryptic DriverError from fiona. """ + geoserver = _fix_server_url(geoserver) + params = dict( service="WFS", version="2.0.0", @@ -173,7 +193,7 @@ def _filter_feature_attributes_wfs( attribute: str, value: Union[str, float, int], layer: str, - geoserver: str = GEO_URL, + geoserver: str = GEOSERVER_URL, ) -> str: """Return WFS GetFeature URL request filtering geographic features based on a property's value. @@ -193,6 +213,7 @@ def _filter_feature_attributes_wfs( str WFS request URL. """ + geoserver = _fix_server_url(geoserver) try: attribute = str(attribute) @@ -215,71 +236,11 @@ def _filter_feature_attributes_wfs( return Request("GET", url=urljoin(geoserver, "wfs"), params=params).prepare().url -def _determine_upstream_ids( - fid: str, - df: pd.DataFrame, - *, - basin_field: str, - downstream_field: str, - basin_family: Optional[str] = None, -) -> pd.DataFrame: - """Return a list of upstream features by evaluating the downstream networks. - - Parameters - ---------- - fid : str - feature ID of the downstream feature of interest. - df : pd.DataFrame - A Dataframe comprising the watershed attributes. - basin_field : str - The field used to determine the id of the basin according to hydro project. - downstream_field : str - The field identifying the downstream sub-basin for the hydro project. - basin_family : str, optional - Regional watershed code (For HydroBASINS dataset). - - Returns - ------- - pd.DataFrame - Basins ids including `fid` and its upstream contributors. 
- """ - - def upstream_ids(bdf, bid): - return bdf[bdf[downstream_field] == bid][basin_field] - - # Note: Hydro Routing `SubId` is a float for some reason and Python float != GeoServer double. Cast them to int. - if isinstance(fid, float): - fid = int(fid) - df[basin_field] = df[basin_field].astype(int) - df[downstream_field] = df[downstream_field].astype(int) - - # Locate the downstream feature - ds = df.set_index(basin_field).loc[fid] - if basin_family is not None: - # Do a first selection on the main basin ID of the downstream feature. - sub = df[df[basin_family] == ds[basin_family]] - else: - sub = None - - # Find upstream basins - up = [fid] - for b in up: - tmp = upstream_ids(sub if sub is not None else df, b) - if len(tmp): - up.extend(tmp) - - return ( - sub[sub[basin_field].isin(up)] - if sub is not None - else df[df[basin_field].isin(up)] - ) - - def get_raster_wcs( coordinates: Union[Iterable, Sequence[Union[float, str]]], geographic: bool = True, layer: str = None, - geoserver: str = GEO_URL, + geoserver: str = GEOSERVER_URL, ) -> bytes: """Return a subset of a raster image from the local GeoServer via WCS 2.0.1 protocol. @@ -302,6 +263,8 @@ def get_raster_wcs( bytes A GeoTIFF array. """ + geoserver = _fix_server_url(geoserver) + (left, down, right, up) = coordinates if geographic: @@ -338,7 +301,7 @@ def get_raster_wcs( def hydrobasins_upstream(feature: dict, domain: str) -> pd.DataFrame: - """Return a list of hydrobasins features located upstream. + """Return a list of HydroBASINS features located upstream. 
Parameters ---------- @@ -369,7 +332,7 @@ def hydrobasins_upstream(feature: dict, domain: str) -> pd.DataFrame: df = gpd.read_file(filename=req, engine="pyogrio") # Filter upstream watersheds - return _determine_upstream_ids( + return determine_upstream_ids( fid=feature[basin_field], df=df, basin_field=basin_field, @@ -448,7 +411,7 @@ def filter_hydrobasins_attributes_wfs( attribute: str, value: Union[str, float, int], domain: str, - geoserver: str = GEO_URL, + geoserver: str = GEOSERVER_URL, ) -> str: """Return a URL that formats and returns a remote GetFeatures request from the USGS HydroBASINS dataset. @@ -471,6 +434,8 @@ def filter_hydrobasins_attributes_wfs( str URL to the GeoJSON-encoded WFS response. """ + geoserver = _fix_server_url(geoserver) + lakes = True level = 12 @@ -488,8 +453,8 @@ def get_hydrobasins_location_wfs( Union[str, float, int], ], domain: str = None, - geoserver: str = GEO_URL, -) -> str: + geoserver: str = GEOSERVER_URL, +) -> Dict[str, Union[str, int, float]]: """Return features from the USGS HydroBASINS data set using bounding box coordinates. For geographic raster grids, subsetting is based on WGS84 (Long, Lat) boundaries. @@ -506,10 +471,11 @@ def get_hydrobasins_location_wfs( Returns ------- - str + dict A GeoJSON-encoded vector feature. - """ + geoserver = _fix_server_url(geoserver) + lakes = True level = 12 layer = f"public:USGS_HydroBASINS_{'lake_' if lakes else ''}{domain}_lev{str(level).zfill(2)}" @@ -528,7 +494,7 @@ def hydro_routing_upstream( fid: Union[str, float, int], level: int = 12, lakes: str = "1km", - geoserver: str = GEO_URL, + geoserver: str = GEOSERVER_URL, ) -> pd.Series: """Return a list of hydro routing features located upstream. @@ -548,6 +514,8 @@ def hydro_routing_upstream( pd.Series Basins ids including `fid` and its upstream contributors. 
""" + geoserver = _fix_server_url(geoserver) + wfs = WebFeatureService(url=urljoin(geoserver, "wfs"), version="2.0.0", timeout=30) layer = f"public:routing_{lakes}Lakes_{str(level).zfill(2)}" @@ -560,7 +528,7 @@ def hydro_routing_upstream( df = gpd.read_file(resp) # Identify upstream features - df_upstream = _determine_upstream_ids( + df_upstream = determine_upstream_ids( fid=fid, df=df, basin_field="SubId", @@ -581,7 +549,7 @@ def get_hydro_routing_attributes_wfs( attribute: Sequence[str], level: int = 12, lakes: str = "1km", - geoserver: str = GEO_URL, + geoserver: str = GEOSERVER_URL, ) -> str: """Return a URL that formats and returns a remote GetFeatures request from hydro routing dataset. @@ -603,8 +571,9 @@ def get_hydro_routing_attributes_wfs( ------- str URL to the GeoJSON-encoded WFS response. - """ + geoserver = _fix_server_url(geoserver) + layer = f"public:routing_{lakes}Lakes_{str(level).zfill(2)}" return _get_feature_attributes_wfs( attribute=attribute, layer=layer, geoserver=geoserver @@ -616,7 +585,7 @@ def filter_hydro_routing_attributes_wfs( value: Union[str, float, int] = None, level: int = 12, lakes: str = "1km", - geoserver: str = GEO_URL, + geoserver: str = GEOSERVER_URL, ) -> str: """Return a URL that formats and returns a remote GetFeatures request from hydro routing dataset. @@ -640,8 +609,9 @@ def filter_hydro_routing_attributes_wfs( ------- str URL to the GeoJSON-encoded WFS response. - """ + geoserver = _fix_server_url(geoserver) + layer = f"public:routing_{lakes}Lakes_{str(level).zfill(2)}" return _filter_feature_attributes_wfs( attribute=attribute, value=value, layer=layer, geoserver=geoserver @@ -655,7 +625,7 @@ def get_hydro_routing_location_wfs( ], lakes: str, level: int = 12, - geoserver: str = GEO_URL, + geoserver: str = GEOSERVER_URL, ) -> dict: """Return features from the hydro routing data set using bounding box coordinates. 
@@ -677,8 +647,9 @@ def get_hydro_routing_location_wfs( ------- dict A GeoJSON-derived dictionary of vector features (FeatureCollection). - """ + geoserver = _fix_server_url(geoserver) + layer = f"public:routing_{lakes}Lakes_{str(level).zfill(2)}" if not wfs_Point and not Intersects: diff --git a/ravenpy/utilities/testdata.py b/ravenpy/utilities/testdata.py index 9189c7e7..40f23ac6 100644 --- a/ravenpy/utilities/testdata.py +++ b/ravenpy/utilities/testdata.py @@ -1,7 +1,6 @@ """Tools for searching for and acquiring test data.""" import hashlib import logging -import os import re import warnings from pathlib import Path
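Taken together, the URL handling introduced across these modules follows one pattern: prefer the new `RAVENPY_*` environment variable, fall back to the legacy name where one exists (`GEO_URL` for GeoServer), then to the hard-coded default, and normalize the result to end with a slash. A sketch of that resolution as a standalone function (for illustration only; the modules inline this at import time):

```python
import os

def resolve_server_url(new_var: str, legacy_var: str, default: str, env=None) -> str:
    """Prefer `new_var`, fall back to `legacy_var`, then `default`,
    normalizing the result to end with a slash (as `urljoin` requires)."""
    env = os.environ if env is None else env
    url = env.get(new_var, env.get(legacy_var, default))
    if not url.endswith("/"):
        url = f"{url}/"
    return url

# Mirrors the resolution in ravenpy/utilities/geoserver.py:
geoserver_url = resolve_server_url(
    "RAVENPY_GEOSERVER_URL", "GEO_URL", "https://pavics.ouranos.ca/geoserver/"
)
```

With neither variable set this yields the PAVICS default; with only `GEO_URL` set the legacy value wins, preserving backward compatibility until that variable is removed.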