Pandas deprecations - offset warning #353

Merged · 10 commits · Jul 16, 2024
10 changes: 5 additions & 5 deletions config.default.yaml
@@ -48,7 +48,7 @@ demand_data:
update_data: true # if true, the workflow downloads the energy balances data saved in data/demand/unsd/data again. Turn on for the first run.
base_year: 2019

-other_industries: false # Whether or not to include industries that are not specified. some countries have has exageratted numbers, check carefully.
+other_industries: false # Whether or not to include industries that are not specified. some countries have has exaggerated numbers, check carefully.
aluminium_year: 2019 # Year of the aluminium demand data specified in `data/AL_production.csv`


@@ -57,12 +57,12 @@ enable:
retrieve_irena: true #If true, downloads the IRENA data

fossil_reserves:
-oil: 100 #TWh Maybe reduntant
+oil: 100 #TWh Maybe redundant


export:
h2export: [10] # Yearly export demand in TWh
-store: true # [True, False] # specifies wether an export store to balance demand is implemented
+store: true # [True, False] # specifies whether an export store to balance demand is implemented
store_capital_costs: "no_costs" # ["standard_costs", "no_costs"] # specifies the costs of the export store. "standard_costs" takes CAPEX of "hydrogen storage tank type 1 including compressor"
export_profile: "ship" # use "ship" or "constant"
ship:
@@ -157,7 +157,7 @@ sector:
district_heating:
potential: 0.3 #maximum fraction of urban demand which can be supplied by district heating
#increase of today's district heating demand to potential maximum district heating share
-#progress = 0 means today's district heating share, progress=-1 means maxumzm fraction of urban demand is supplied by district heating
+#progress = 0 means today's district heating share, progress=-1 means maximum fraction of urban demand is supplied by district heating
progress: 1
#2020: 0.0
#2030: 0.3
@@ -219,7 +219,7 @@ sector:
efficiency_heat_gas_to_elec: 0.9

dynamic_transport:
enable: false # If "True", then the BEV and FCEV shares are obtained depening on the "Co2L"-wildcard (e.g. "Co2L0.70: 0.10"). If "False", then the shares are obtained depending on the "demand" wildcard and "planning_horizons" wildcard as listed below (e.g. "DF_2050: 0.08")
enable: false # If "True", then the BEV and FCEV shares are obtained depending on the "Co2L"-wildcard (e.g. "Co2L0.70: 0.10"). If "False", then the shares are obtained depending on the "demand" wildcard and "planning_horizons" wildcard as listed below (e.g. "DF_2050: 0.08")
land_transport_electric_share:
Co2L2.0: 0.00
Co2L1.0: 0.01
6 changes: 3 additions & 3 deletions config.pypsa-earth.yaml
@@ -203,7 +203,7 @@ atlite:
dy: 0.3 # cutout resolution
# The cutout time is automatically set by the snapshot range. See `snapshot:` option above and 'build_cutout.py'.
# time: ["2013-01-01", "2014-01-01"] # to manually specify a different weather year (~70 years available)
-# The cutout spatial extent [x,y] is automatically set by country selection. See `countires:` option above and 'build_cutout.py'.
+# The cutout spatial extent [x,y] is automatically set by country selection. See `countries:` option above and 'build_cutout.py'.
# x: [-12., 35.] # set cutout range manual, instead of automatic by boundaries of country
# y: [33., 72] # manual set cutout range

@@ -352,8 +352,8 @@ monte_carlo:
add_to_snakefile: false # When set to true, enables Monte Carlo sampling
samples: 9 # number of optimizations. Note that number of samples when using scipy has to be the square of a prime number
sampling_strategy: "chaospy" # "pydoe2", "chaospy", "scipy", packages that are supported
-seed: 42 # set seedling for reproducibilty
-# Uncertanties on any PyPSA object are specified by declaring the specific PyPSA object under the key 'uncertainties'.
+seed: 42 # set seedling for reproducibility
+# Uncertainties on any PyPSA object are specified by declaring the specific PyPSA object under the key 'uncertainties'.
# For each PyPSA object, the 'type' and 'args' keys represent the type of distribution and its argument, respectively.
# Supported distributions types are uniform, normal, lognormal, triangle, beta and gamma.
# The arguments of the distribution are passed using the key 'args' as follows, tailored by distribution type
3 changes: 1 addition & 2 deletions scripts/add_export.py
@@ -14,7 +14,6 @@

import logging
import os
-from pathlib import Path

import geopandas as gpd
import numpy as np
@@ -164,7 +163,7 @@ def create_export_profile():

# Resample to temporal resolution defined in wildcard "sopts" with pandas resample
sopts = snakemake.wildcards.sopts.split("-")
-export_profile = export_profile.resample(sopts[0]).mean()
+export_profile = export_profile.resample(sopts[0].casefold()).mean()

# revise logger msg
export_type = snakemake.params.export_profile
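Context for the change above: recent pandas releases (2.2+) deprecate uppercase offset aliases such as "3H" in favour of lowercase "3h", so the alias taken from the `sopts` wildcard is lowercased with `casefold()` before resampling. A minimal sketch of the idea, using a made-up hourly series rather than the real export profile:

```python
import pandas as pd

# Hypothetical hourly stand-in for the export profile built in add_export.py.
idx = pd.date_range("2013-01-01", periods=48, freq="h")
export_profile = pd.Series(range(48), index=idx, dtype=float)

sopt = "3H"  # temporal-resolution wildcard as it might appear in the config

# export_profile.resample(sopt).mean()  # FutureWarning on pandas >= 2.2: 'H' is deprecated
export_profile = export_profile.resample(sopt.casefold()).mean()  # "3h" avoids the warning
print(export_profile.head())
```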
7 changes: 2 additions & 5 deletions scripts/build_industry_demand.py
@@ -10,9 +10,8 @@
import os
from itertools import product

-import numpy as np
import pandas as pd
-from helpers import read_csv_nafix, sets_path_to_root, three_2_two_digits_country
+from helpers import mock_snakemake, read_csv_nafix, sets_path_to_root

_logger = logging.getLogger(__name__)

@@ -50,8 +49,6 @@ def country_to_nodal(industrial_production, keys):

if __name__ == "__main__":
if "snakemake" not in globals():
-from helpers import mock_snakemake, sets_path_to_root

os.chdir(os.path.dirname(os.path.abspath(__file__)))

snakemake = mock_snakemake(
@@ -125,7 +122,7 @@ def country_to_nodal(industrial_production, keys):
snakemake.input["base_industry_totals"], index_col=[0, 1]
)

-production_base = cagr.applymap(lambda x: 1)
+production_base = cagr.map(lambda x: 1)
production_tom = production_base * growth_factors

# non-used line; commented out
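The `applymap`-to-`map` switch reflects pandas 2.1 deprecating `DataFrame.applymap` in favour of the elementwise `DataFrame.map`. A small illustration with a placeholder DataFrame (not the actual CAGR table read by the script):

```python
import pandas as pd

# Placeholder growth-rate table; the real one comes from the industry growth CSV.
cagr = pd.DataFrame({"aluminium": [0.02, 0.03], "steel": [0.01, 0.04]}, index=["MA", "NG"])

# Old spelling, emits a FutureWarning on pandas >= 2.1:
# production_base = cagr.applymap(lambda x: 1)

# New elementwise API: same shape and index, every cell set to 1.
production_base = cagr.map(lambda x: 1)
print(production_base)
```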
8 changes: 4 additions & 4 deletions scripts/prepare_sector_network.py
@@ -2250,7 +2250,7 @@ def average_every_nhours(n, offset):
# logger.info(f'Resampling the network to {offset}')
m = n.copy(with_time=False)

-snapshot_weightings = n.snapshot_weightings.resample(offset).sum()
+snapshot_weightings = n.snapshot_weightings.resample(offset.casefold()).sum()
m.set_snapshots(snapshot_weightings.index)
m.snapshot_weightings = snapshot_weightings

@@ -2259,11 +2259,11 @@ def average_every_nhours(n, offset):
for k, df in c.pnl.items():
if not df.empty:
if c.list_name == "stores" and k == "e_max_pu":
-pnl[k] = df.resample(offset).min()
+pnl[k] = df.resample(offset.casefold()).min()
elif c.list_name == "stores" and k == "e_min_pu":
-pnl[k] = df.resample(offset).max()
+pnl[k] = df.resample(offset.casefold()).max()
else:
-pnl[k] = df.resample(offset).mean()
+pnl[k] = df.resample(offset.casefold()).mean()

return m

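In `average_every_nhours` the same lowercased offset drives every resample; note (unchanged by this PR) that upper-bound series such as `e_max_pu` are aggregated with `min()` and lower bounds with `max()`, so the coarser snapshots stay conservative. A rough standalone sketch under those assumptions:

```python
import pandas as pd

offset = "4H"  # n-hourly averaging offset as it might arrive from the sopts wildcard

idx = pd.date_range("2013-01-01", periods=8, freq="h")
e_max_pu = pd.Series([1.0, 0.9, 0.8, 1.0, 0.7, 1.0, 1.0, 0.6], index=idx)

# Taking the smallest upper bound within each 4-hour block keeps the coarsened
# constraint feasible; a plain mean could overstate the available headroom.
coarse_upper = e_max_pu.resample(offset.casefold()).min()
print(coarse_upper)
```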
10 changes: 5 additions & 5 deletions test/config.test1.yaml
@@ -48,20 +48,20 @@ demand_data:
update_data: true # if true, the workflow downloads the energy balances data saved in data/demand/unsd/data again. Turn on for the first run.
base_year: 2019

-other_industries: false # Whether or not to include industries that are not specified. some countries have has exageratted numbers, check carefully.
+other_industries: false # Whether or not to include industries that are not specified. some countries have has exaggerated numbers, check carefully.
aluminium_year: 2019 # Year of the aluminium demand data specified in `data/AL_production.csv`


enable:
retrieve_cost_data: true # if true, the workflow overwrites the cost data saved in data/costs again

fossil_reserves:
-oil: 100 #TWh Maybe reduntant
+oil: 100 #TWh Maybe redundant


export:
h2export: [120] # Yearly export demand in TWh
-store: true # [True, False] # specifies wether an export store to balance demand is implemented
+store: true # [True, False] # specifies whether an export store to balance demand is implemented
store_capital_costs: "no_costs" # ["standard_costs", "no_costs"] # specifies the costs of the export store "standard_costs" takes CAPEX of "hydrogen storage tank type 1 including compressor"
export_profile: "ship" # use "ship" or "constant"
ship:
@@ -156,7 +156,7 @@ sector:
district_heating:
potential: 0.3 #maximum fraction of urban demand which can be supplied by district heating
#increase of today's district heating demand to potential maximum district heating share
-#progress = 0 means today's district heating share, progress=-1 means maxumzm fraction of urban demand is supplied by district heating
+#progress = 0 means today's district heating share, progress=-1 means maximum fraction of urban demand is supplied by district heating
progress: 1
#2020: 0.0
#2030: 0.3
@@ -217,7 +217,7 @@ sector:
efficiency_heat_gas_to_elec: 0.9

dynamic_transport:
-enable: false # If "True", then the BEV and FCEV shares are obtained depening on the "Co2L"-wildcard (e.g. "Co2L0.70: 0.10"). If "False", then the shares are obtained depending on the "demand" wildcard and "planning_horizons" wildcard as listed below (e.g. "DF_2050: 0.08")
+enable: false # If "True", then the BEV and FCEV shares are obtained depending on the "Co2L"-wildcard (e.g. "Co2L0.70: 0.10"). If "False", then the shares are obtained depending on the "demand" wildcard and "planning_horizons" wildcard as listed below (e.g. "DF_2050: 0.08")
land_transport_electric_share:
Co2L2.0: 0.00
Co2L1.0: 0.01