add codespell linter #763

Merged · 1 commit merged on Jun 13, 2023
16 changes: 8 additions & 8 deletions .pre-commit-config.yaml
@@ -26,14 +26,14 @@ repos:
- id: isort
args: ["--profile", "black", "--filter-files"]

# # Find common spelling mistakes in comments and docstrings
# - repo: https://github.com/codespell-project/codespell
# rev: v2.2.1
# hooks:
# - id: codespell
# args: ['--ignore-regex="\b[A-Z]+\b"'] # Ignore capital case words, e.g. country codes
# types_or: [python, rst, markdown]
# files: ^(actions|doc)/
# Find common spelling mistakes in comments and docstrings
- repo: https://github.com/codespell-project/codespell
rev: v2.2.4
hooks:
- id: codespell
args: ['--ignore-regex="(\b[A-Z]+\b)"', '--ignore-words-list=fom,appartment,bage,ore,setis,tabacco,berfore'] # Ignore capital case words, e.g. country codes
types_or: [python, rst, markdown]
files: ^(scripts|doc)/

# Formatting with "black" coding style
- repo: https://github.com/psf/black
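The `--ignore-regex` argument in the new codespell hook above is what keeps fully capitalised tokens such as country codes from being flagged. As a quick illustration of what that pattern matches (the sample text below is invented, not repository content), a short Python check:

```python
import re

# same pattern as in the hook: runs of capital letters, e.g. country codes
ignore_pattern = re.compile(r"(\b[A-Z]+\b)")

text = "Load data for NG and GH, then fix the 'accidentaly' typo."
print(ignore_pattern.findall(text))  # ['NG', 'GH'] -> codespell skips these tokens
```

The `--ignore-words-list` entries (fom, ore, setis, ...) are presumably spellings that codespell would otherwise flag but are legitimate identifiers or abbreviations in this repository.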
2 changes: 1 addition & 1 deletion doc/how_to_contribute.rst
@@ -54,7 +54,7 @@ To contribute a test:
Performance-profiling
---------------------
Performance profiling is important to understand bottlenecks and
the accordinly optimize the speed in PyPSA-Earth. We use the Python build-in
the accordingly optimize the speed in PyPSA-Earth. We use the Python built-in
`cProfiler`, custom decorators on single functions and analysis tools
like `snakeviz <https://jiffyclub.github.io/snakeviz/>`_. See a detailed example
in `this discussion #557 <https://github.com/pypsa-meets-earth/pypsa-earth/discussions/557>`_.
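For context on the profiling workflow mentioned in the paragraph above, here is a minimal, self-contained sketch of the built-in `cProfile`/`pstats` pattern (the `busy_work` function is a stand-in, not project code):

```python
import cProfile
import pstats

def busy_work(n: int = 200_000) -> float:
    """Stand-in workload; replace with the function or rule to profile."""
    return sum(i ** 0.5 for i in range(n))

profiler = cProfile.Profile()
profiler.enable()
busy_work()
profiler.disable()

# print the ten most expensive calls; dump_stats() writes a file snakeviz can open
stats = pstats.Stats(profiler).sort_stats("cumulative")
stats.print_stats(10)
profiler.dump_stats("busy_work.prof")
```

Running `snakeviz busy_work.prof` then opens the interactive visualisation referenced in the linked discussion.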
1 change: 0 additions & 1 deletion doc/index.rst
@@ -188,4 +188,3 @@ Documentation
learning_materials
project_structure_and_credits
talks_and_papers

2 changes: 1 addition & 1 deletion doc/introduction.rst
@@ -80,7 +80,7 @@ PyPSA-Earth work is released under multiple licenses:
* Configuration files are mostly licensed under `CC0-1.0 <https://creativecommons.org/publicdomain/zero/1.0/>`_.
* Data files are licensed under different licenses as noted below.

Invididual files contain license information in the header or in the `dep5 <.reuse/dep5>`_.
Individual files contain license information in the header or in the `dep5 <.reuse/dep5>`_.
Additional licenses and urls of the data used in PyPSA-Earth:

.. csv-table::
2 changes: 1 addition & 1 deletion doc/learning_materials.rst
@@ -33,7 +33,7 @@ PyPSA Introduction (essential)
Data science basics (essential)
--------------------------------

- Fabian Neumann just shared with the world the possibly best training material for `"Data Science fo Energy System Modelling" <https://fneum.github.io/data-science-for-esm/intro.html>`_. This is a free multi-week course preparing you for all you need for PyPSA-Earth.
- Fabian Neumann just shared with the world the possibly best training material for `"Data Science for Energy System Modelling" <https://fneum.github.io/data-science-for-esm/intro.html>`_. This is a free multi-week course preparing you for all you need for PyPSA-Earth.
- Refresh your Python knowledge by watching `CSDojo's playlist <https://www.youtube.com/c/CSDojo/playlists>`_. His content is excellent as introduction. You will learn in effective short videos the python basics such as variables If/else statements, functions, lists, for loops, while loops, dictionaries, classes and objects, boolean, list comprehensions, sets - put your hands on and write some test scripts as the video suggests. (~3h)
- Familiarize yourself with numpy and panda dataframes. In the Python-based PyPSA tool, we do not work with Excel. Powerful panda dataframes are our friends. `Here <https://www.coursera.org/learn/python-data-analysis>`__ is an extensive 30h course that provides a great introduction if this is completely unfamiliar to you.
- `Introduction to Unix-shell <https://swcarpentry.github.io/shell-novice/>`_ - "Use of the shell is fundamental to a wide range of advanced computing tasks, including high-performance computing and automated workflow. These lessons will introduce you to this powerful tool." (optional 4h, to become a pro)
12 changes: 8 additions & 4 deletions doc/release_notes.rst
@@ -18,11 +18,15 @@ E.g. if a new rule becomes available describe how to use it `snakemake -j1 run_t

* Add merge and replace functionalities when adding custom powerplants `PR #739 <https://github.com/pypsa-meets-earth/pypsa-earth/pull/739>`__. "Merge" combined the powerplantmatching data with new custom data. "Replace" allows to use fully self-collected data.

* Add functionality of attaching existing renewable caapcities from custom_powerplants.csv. `PR #744 <https://github.com/pypsa-meets-earth/pypsa-earth/pull/744>`__. If custom_powerplants are enabled and custom_powerplants.csv contains wind or solar powerplants, then p_nom and p_nom_min for renewables are extracted from custom_powerplants.csv, aggregated for eacg bus, and set.
* Add functionality of attaching existing renewable caapcities from custom_powerplants.csv. `PR #744 <https://github.com/pypsa-meets-earth/pypsa-earth/pull/744>`__. If custom_powerplants are enabled and custom_powerplants.csv contains wind or solar powerplants, then p_nom and p_nom_min for renewables are extracted from custom_powerplants.csv, aggregated for each bus, and set.

* Fix dask parallel computations for e.g. cutouts calculations. Now again more than 1 core will be used when available that can lead to ~8x speed ups with 8 cores `PR #734 <https://github.com/pypsa-meets-earth/pypsa-earth/pull/734>`__ and `PR #761 <https://github.com/pypsa-meets-earth/pypsa-earth/pull/761>`__.

* Enable the usage of custom rules. Custom rule files must be specified in the config as a list, e.g. custom rules: ["my_rules.smk"]. Empty by default (i.e. no custom rules). `PR #755 <https://github.com/pypsa-meets-earth/pypsa-earth/pull/755>`__
* Add the usage of custom rules. Custom rule files must be specified in the config as a list, e.g. custom rules: ["my_rules.smk"]. Empty by default (i.e. no custom rules). `PR #755 <https://github.com/pypsa-meets-earth/pypsa-earth/pull/755>`__

* Add trailing whitespace linter which removes unnecessary tabs when running `pre-commit` `PR #762 <https://github.com/pypsa-meets-earth/pypsa-earth/pull/762>`__

* Add codespell linter which corrects word spellings `PR #763 <https://github.com/pypsa-meets-earth/pypsa-earth/pull/763>`__

PyPSA-Earth 0.2.1
=================
@@ -49,7 +53,7 @@ PyPSA-Earth 0.2.0

* Add new config test design. It is now easy and light to test multiple configs `PR #466 <https://github.com/pypsa-meets-earth/pypsa-earth/pull/466>`__

* Revision of documenation `PR #471 <https://github.com/pypsa-meets-earth/pypsa-earth/pull/471>`__
* Revision of documentation `PR #471 <https://github.com/pypsa-meets-earth/pypsa-earth/pull/471>`__

* Move to new GADM version `PR #478 <https://github.com/pypsa-meets-earth/pypsa-earth/pull/478>`__

@@ -134,7 +138,7 @@ PyPSA-Earth 0.2.0
PyPSA-Earth 0.1.0
=================

Model rebranded from PyPSA-Africa to PyPSA-Earth. Model is part of the now called PyPSA meets Earth initiative which hosts mutliple projects.
Model rebranded from PyPSA-Africa to PyPSA-Earth. Model is part of the now called PyPSA meets Earth initiative which hosts multiple projects.

**New features and major changes (10th September 2022)**

4 changes: 2 additions & 2 deletions scripts/_helpers.py
@@ -45,8 +45,8 @@ def sets_path_to_root(root_directory_name):
break
# if repo_name NOT current folder name for 5 levels then stop
if n == 0:
print("Cant find the repo path.")
# if repo_name NOT current folder name, go one dir higher
print("Can't find the repo path.")
# if repo_name NOT current folder name, go one directory higher
else:
upper_path = os.path.dirname(os.path.abspath(".")) # name of upper folder
os.chdir(upper_path)
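The hunk above touches `sets_path_to_root`, which walks up the directory tree looking for the repository folder and gives up after a fixed number of levels. A minimal sketch of the same idea using `pathlib` — an illustration, not the project's actual helper:

```python
from pathlib import Path
from typing import Optional

def find_repo_root(repo_name: str, max_levels: int = 5) -> Optional[Path]:
    """Walk upwards from the current directory looking for `repo_name`."""
    current = Path.cwd().resolve()
    for _ in range(max_levels):
        if current.name == repo_name:
            return current
        if current.parent == current:  # reached the filesystem root
            return None
        current = current.parent
    return None  # give up, mirroring the 5-level limit in the hunk above
```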
2 changes: 1 addition & 1 deletion scripts/add_electricity.py
@@ -789,7 +789,7 @@ def add_nice_carrier_names(n, config):
if not (set(renewable_carriers) & set(extendable_carriers["Generator"])):
logger.warning(
"No renewables found in config entry `extendable_carriers`. "
"In future versions, these have to be explicitely listed. "
"In future versions, these have to be explicitly listed. "
"Falling back to all renewables."
)

2 changes: 1 addition & 1 deletion scripts/base_network.py
@@ -360,7 +360,7 @@ def _set_countries_and_substations(inputs, config, n):
# Compares two lists & makes list value true if at least one is true
buses["substation_off"] = offshore_b | offshore_hvb

# Busses without country tag are removed OR get a country tag if close to country
# Buses without country tag are removed OR get a country tag if close to country
c_nan_b = buses.country.isnull()
if c_nan_b.sum() > 0:
c_tag = get_country(buses.loc[c_nan_b])
2 changes: 1 addition & 1 deletion scripts/build_bus_regions.py
@@ -272,7 +272,7 @@ def get_id(coords):
)

if offshore_regions:
# if a offshore_regions exists excute below
# if a offshore_regions exists execute below
pd.concat(offshore_regions, ignore_index=True).to_file(
snakemake.output.regions_offshore
)
2 changes: 1 addition & 1 deletion scripts/build_demand_profiles.py
@@ -35,7 +35,7 @@
Description
-----------

The rule :mod:`build_demand` creates load demand profiles in correspondance of the buses of the network.
The rule :mod:`build_demand` creates load demand profiles in correspondence of the buses of the network.
It creates the load paths for GEGIS outputs by combining the input parameters of the countries, weather year, prediction year, and SSP scenario.
Then with a function that takes in the PyPSA network "base.nc", region and gadm shape data, the countries of interest, a scale factor, and the snapshots,
it returns a csv file called "demand_profiles.csv", that allocates the load to the buses of the network according to GDP and population.
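The docstring above describes allocating a country-level demand profile to network buses according to GDP and population. The snippet below is a hypothetical sketch of that weighting idea — the 50/50 GDP/population blend and all numbers are invented, and the real rule works from GEGIS outputs and GADM shapes:

```python
import pandas as pd

buses = pd.DataFrame(
    {"gdp": [120.0, 60.0, 20.0], "pop": [5.0, 3.0, 2.0]},
    index=["bus0", "bus1", "bus2"],
)
alpha = 0.5  # assumed blend between GDP share and population share
weights = (
    alpha * buses["gdp"] / buses["gdp"].sum()
    + (1 - alpha) * buses["pop"] / buses["pop"].sum()
)

national_load = pd.Series([100.0, 110.0, 95.0])  # hourly country-level demand in MW
demand_profiles = pd.DataFrame({bus: national_load * w for bus, w in weights.items()})
print(demand_profiles.sum(axis=1))  # sums back to the national profile
```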
10 changes: 5 additions & 5 deletions scripts/build_osm_network.py
@@ -190,7 +190,7 @@ def merge_stations_same_station_id(
# initialize list of cleaned buses
buses_clean = []

# initalize the number of buses
# initialize the number of buses
n_buses = 0

for g_name, g_value in buses.groupby(by="station_id"):
@@ -588,7 +588,7 @@ def _split_linestring_by_point(linestring, points):
Parameters
----------
lstring : LineString
Linestring of the line to be splitted
Linestring of the line to be split
points : list
List of points to split the linestring

@@ -622,11 +622,11 @@ def fix_overpassing_lines(lines, buses, distance_crs, tol=1):
Geodataframe of substations
tol : float
Tolerance in meters of the distance between the substation and the line
below which the line will be splitted
below which the line will be split
"""

lines_to_add = [] # list of lines to be added
lines_to_split = [] # list of lines that have been splitted
lines_to_split = [] # list of lines that have been split

lines_epsgmod = lines.to_crs(distance_crs)
buses_epsgmod = buses.to_crs(distance_crs)
@@ -770,7 +770,7 @@ def built_network(inputs, outputs, config, geo_crs, distance_crs, force_ac=False
bus_country_list = buses["country"].unique().tolist()

# it may happen that bus_country_list contains entries not relevant as a country name (e.g. "not found")
# difference can't give negative values; the following will return only releant country names
# difference can't give negative values; the following will return only relevant country names
no_data_countries = list(set(country_list).difference(set(bus_country_list)))

if len(no_data_countries) > 0:
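The `fix_overpassing_lines` hunk above splits a line wherever a substation lies within `tol` metres of it. The geometric core of that operation can be sketched with shapely's `snap` + `split` (an illustration with made-up coordinates, not the project's implementation):

```python
from shapely.geometry import LineString, Point
from shapely.ops import snap, split

line = LineString([(0.0, 0.0), (10.0, 0.0)])
substation = Point(4.0, 0.3)  # lies ~0.3 m from the line in a projected CRS
tol = 1.0                     # tolerance in metres, as in fix_overpassing_lines

if line.distance(substation) <= tol:
    # snapping inserts a vertex on the line at the substation, so split() can cut there
    segments = split(snap(line, substation, tol), substation)
    print([seg.wkt for seg in segments.geoms])  # two segments meeting at the substation
```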
4 changes: 2 additions & 2 deletions scripts/build_powerplants.py
@@ -142,7 +142,7 @@ def convert_osm_to_pm(filepath_ppl_osm, filepath_ppl_pm):
"wave": "Other",
"geothermal": "Geothermal",
"solar": "Solar",
# "Hard Coal" follows defauls of PPM
# "Hard Coal" follows defaults of PPM
"coal": "Hard Coal",
"gas": "Natural Gas",
"biomass": "Bioenergy",
@@ -200,7 +200,7 @@ def convert_osm_to_pm(filepath_ppl_osm, filepath_ppl_pm):
)

# All Hydro objects can be interpreted by PPM as Storages, too
# However, everithing extracted from OSM seems to belong
# However, everything extracted from OSM seems to belong
# to power plants with "tags.power" == "generator" only
osm_ppm_df = pd.DataFrame(
data={
2 changes: 1 addition & 1 deletion scripts/build_renewable_profiles.py
@@ -374,7 +374,7 @@ def rescale_hydro(plants, runoff, normalize_using_yearly, normalization_year):
yearlyavg_runoff_by_plant.loc[normalization_buses].groupby("country").sum()
)

# common country indeces
# common country indices
common_countries = normalize_using_yearly.columns.intersection(
grouped_runoffs.index
)
2 changes: 1 addition & 1 deletion scripts/build_shapes.py
@@ -167,7 +167,7 @@
outlogging=False,
):
"""
Function to retrive a specific layer id of a geopackage for a selection of countries
Function to retrieve a specific layer id of a geopackage for a selection of countries

Parameters
----------
4 changes: 2 additions & 2 deletions scripts/clean_osm_data.py
@@ -126,7 +126,7 @@ def add_line_endings_tosubstations(substations, lines):
def set_unique_id(df, col):
"""
Create unique id's, where id is specified by the column "col"
The steps below create unique bus id's without loosing the original OSM bus_id
The steps below create unique bus id's without losing the original OSM bus_id

Unique bus_id are created by simply adding -1,-2,-3 to the original bus_id
Every unique id gets a -1
@@ -662,7 +662,7 @@ def integrate_lines_df(df_all_lines, distance_crs):
clean_circuits(df)
clean_cables(df)

# analyse each row of voltage and requency and match their content
# analyse each row of voltage and frequency and match their content
split_and_match_voltage_frequency_size(df)

# fill the circuits column for explode
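The `set_unique_id` docstring earlier in this file's diff describes de-duplicating bus ids by appending -1, -2, -3 while keeping the original OSM id recoverable as the prefix. A compact pandas sketch of that scheme (illustrative only, not the actual function):

```python
import pandas as pd

df = pd.DataFrame({"bus_id": ["B1", "B1", "B2", "B1", "B2"]})

# number occurrences within each original bus_id, then append the counter as a suffix
suffix = df.groupby("bus_id").cumcount() + 1
df["bus_id_unique"] = df["bus_id"] + "-" + suffix.astype(str)
print(df["bus_id_unique"].tolist())  # ['B1-1', 'B1-2', 'B2-1', 'B1-3', 'B2-2']
```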
2 changes: 1 addition & 1 deletion scripts/non_workflow/zenodo_handler.py
@@ -48,7 +48,7 @@
"access_right": "open",
"license": {"id": "cc-by-4.0"},
"keywords": ["Macro Energy Systems", "Power Systems"],
} # more opton visisble at Zenodo REST API https://developers.zenodo.org/#introduction
} # more options visible at Zenodo REST API https://developers.zenodo.org/#introduction


#############
2 changes: 1 addition & 1 deletion scripts/retrieve_databundle_light.py
@@ -496,7 +496,7 @@ def get_best_bundles_by_category(
# check if non-empty dictionary
if dict_n_matched:
# if non-empty, then pick bundles until all countries are selected
# or no mor bundles are found
# or no more bundles are found
dict_sort = sorted(dict_n_matched.items(), key=lambda d: d[1])

current_matched_countries = []
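The comments above describe picking data bundles until every requested country is covered or no bundle adds anything. A generic greedy sketch of that kind of selection (bundle names and country sets are invented; the real function also ranks bundles by category and match counts):

```python
def pick_bundles(bundles: dict, requested: set) -> list:
    """Greedily add the bundle covering the most still-missing countries."""
    selected, covered = [], set()
    while covered < requested:
        best = max(bundles, key=lambda name: len(bundles[name] & (requested - covered)))
        gain = bundles[best] & (requested - covered)
        if not gain:  # no remaining bundle adds a missing country
            break
        selected.append(best)
        covered |= gain
    return selected

bundles = {"africa_full": {"NG", "GH", "BJ"}, "west_africa": {"NG", "GH"}, "benin": {"BJ"}}
print(pick_bundles(bundles, {"NG", "GH", "BJ"}))  # ['africa_full']
```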
4 changes: 2 additions & 2 deletions scripts/simplify_network.py
@@ -632,7 +632,7 @@ def drop_isolated_nodes(n, threshold):
generators_mean_final = n.generators.p_nom.mean()

logger.info(
f"Dropped {len(i_to_drop)} buses. A resulted load discrepancy is {(100 * ((load_mean_final - load_mean_origin)/load_mean_origin)):2.1}% and {(100 * ((generators_mean_final - generators_mean_origin)/generators_mean_origin)):2.1}% for average load and generation capacity, respectivelly"
f"Dropped {len(i_to_drop)} buses. A resulted load discrepancy is {(100 * ((load_mean_final - load_mean_origin)/load_mean_origin)):2.1}% and {(100 * ((generators_mean_final - generators_mean_origin)/generators_mean_origin)):2.1}% for average load and generation capacity, respectively"
)

return n
@@ -716,7 +716,7 @@ def merge_isolated_nodes(n, threshold, aggregation_strategies=dict()):
generators_mean_final = n.generators.p_nom.mean()

logger.info(
f"Merged {len(i_suffic_load)} buses. Load attached to a single bus with discrepancies of {(100 * ((load_mean_final - load_mean_origin)/load_mean_origin)):2.1E}% and {(100 * ((generators_mean_final - generators_mean_origin)/generators_mean_origin)):2.1E}% for load and generation capacity, respectivelly"
f"Merged {len(i_suffic_load)} buses. Load attached to a single bus with discrepancies of {(100 * ((load_mean_final - load_mean_origin)/load_mean_origin)):2.1E}% and {(100 * ((generators_mean_final - generators_mean_origin)/generators_mean_origin)):2.1E}% for load and generation capacity, respectively"
)

return clustering.network, busmap
Expand Down