Fix some typos and formatting in the documentation #957

Merged: 6 commits, Jan 12, 2024

6 changes: 3 additions & 3 deletions doc/data_workflow.rst
@@ -10,7 +10,7 @@ Data used by the model

To properly model any region of the Earth, PyPSA-Earth downloads and fetches different data that are explained in detail in this section. Here we'll look into architecture of the data workflow while practical hand-ons are given in the :ref:`Tutorial <tutorial>` section.

- Two major parts of the energy modeling workflow are preparing of power grid layout and climate inputs. Apart of that, PyPSA-Earth is relying of a number of environmental, economical and technological datasets.
+ Two major parts of the energy modeling workflow are preparing of power grid layout and climate inputs. Apart from that, PyPSA-Earth is relying of a number of environmental, economical and technological datasets.

1. Grid topology data
===================================
@@ -83,9 +83,9 @@ Technological
4. Pre-calculated datasets
===================================

- There are some datasets which were prepared to ensure smooth run of the model. However, they may (and in some cases) must be replaced by custom ones.
+ There are some datasets which were prepared to ensure smooth run of the model. However, they may (and, in some cases, must) be replaced by custom ones.

- * **natura.tiff** contains geo-spatial data on location of protected and reserved areas and may be used as mask the exclude such areas when calculating the renewable potential by `build_renewable_profiles` rule. The `natura` flag in the configuration file allows to switch-on this option while presence of the `natura.tiff` in the `resources` folder is needed to run the model.
+ * **natura.tiff** contains geo-spatial data on location of protected and reserved areas and may be used as a mask to exclude such areas when calculating the renewable potential by `build_renewable_profiles` rule. The `natura` flag in the configuration file allows to switch-on this option while presence of the `natura.tiff` in the `resources` folder is needed to run the model.

Currently the pre-build file is calculated for Africa, global `natura.tiff` raster is under development.
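
For orientation, this is roughly how such an exclusion raster can be applied with Atlite when computing eligible areas. It is a minimal sketch, not the actual `build_renewable_profiles` implementation; the cutout path, the regions file and the CRS/resolution values are assumptions.

.. code:: python

    import atlite
    import geopandas as gpd
    from atlite.gis import ExclusionContainer

    # Assumed inputs: a prepared weather cutout and a GeoJSON with onshore regions.
    cutout = atlite.Cutout("cutouts/africa-2013-era5.nc")
    regions = gpd.read_file("resources/regions_onshore.geojson")

    # Register natura.tiff as an exclusion layer: raster cells flagged as protected
    # areas are removed from the land eligible for renewable installations.
    excluder = ExclusionContainer(crs=3035, res=100)
    excluder.add_raster("resources/natura.tiff", nodata=0)

    # Eligible share of each weather grid cell per region after applying the mask.
    availability = cutout.availabilitymatrix(regions.geometry, excluder)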

6 changes: 3 additions & 3 deletions doc/how_to_docs.rst
@@ -10,12 +10,12 @@ How to docs?
We add the code documentation along the way.
You might think that cost a lot of time and is not efficient - but that's not really true anymore!
Documenting with great tools makes life much easier for YOU and YOUR COLLABORATORS and speed up the overall process.
- Using `Readthedocs <https://docs.readthedocs.io/en/stable/intro/getting-started-with-sphinx.html>`_ and its add
- on `sphinx.ext.autodoc <https://www.sphinx-doc.org/en/master/usage/extensions/autodoc.html>`_ we document in our
+ Using `Readthedocs <https://docs.readthedocs.io/en/stable/intro/getting-started-with-sphinx.html>`_ and its add-on
+ `sphinx.ext.autodoc <https://www.sphinx-doc.org/en/master/usage/extensions/autodoc.html>`_ we document in our
code scripts which then will automatically generate the documentation you might see here.
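
As a pointer to what autodoc picks up: a plain function with a numpydoc-style docstring is enough, and the rendered documentation page is generated from it. The function below is purely illustrative and not part of PyPSA-Earth.

.. code:: python

    def filter_substations(df, min_voltage_kv=110):
        """
        Drop substations below a voltage threshold.

        Parameters
        ----------
        df : pandas.DataFrame
            Raw substation table with a ``voltage`` column in kV.
        min_voltage_kv : float, optional
            Smallest voltage class to keep, by default 110 kV.

        Returns
        -------
        pandas.DataFrame
            The filtered substation table.
        """
        return df[df["voltage"] >= min_voltage_kv]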

Thank you Eric Holscher & team for your wonderful *Readthedocs* open source project.
- You can find an emotional speak by Eric `here <https://www.youtube.com/watch?v=U6ueKExLzSY>`_.
+ You can find an emotional speech by Eric `here <https://www.youtube.com/watch?v=U6ueKExLzSY>`_.

Structure and Syntax example
-----------------------------
17 changes: 5 additions & 12 deletions doc/structure.rst
@@ -10,18 +10,11 @@ The structure

The main workflow structure built within PyPSA-Earth is as follows:

- **1. Download and filter data**: first raw input data shall be downloaded. PyPSA-Earth provides automated
- procedures to successfully download all the needed data from scratch, such as OpenStreetMap data,
- specific potential of renewable sources, population, GDP, etc. Moreover, raw data shall be filtered to remove non-valid data and normalize the data
- gathered from multiple sources.

- **2. Populate data**: filtered data are then processed by means of specific methods to derive the
- main input data of the optimization methods, such as renewable energy production, demand, etc.
- Just to give an example, when the option is enabled, the renewable energy source potential
- is transformed into time series for desired locations by using the tool `Atlite <https://github.com/PyPSA/atlite/>`_.

- **3. Create network model**: once the necessary model inputs are drawn, then the network model is
- developed using `PyPSA <https://github.com/PyPSA/PyPSA>`_
+ **1. Download and filter data**: first raw input data shall be downloaded. PyPSA-Earth provides automated procedures to successfully download all the needed data from scratch, such as OpenStreetMap data, specific potential of renewable sources, population, GDP, etc. Moreover, raw data shall be filtered to remove non-valid data and normalize the data gathered from multiple sources.

+ **2. Populate data**: filtered data are then processed by means of specific methods to derive the main input data of the optimization methods, such as renewable energy production, demand, etc. Just to give an example, when the option is enabled, the renewable energy source potential is transformed into time series for desired locations by using the tool `Atlite <https://github.com/PyPSA/atlite/>`_.

+ **3. Create network model**: once the necessary model inputs are drawn, then the network model is developed using `PyPSA <https://github.com/PyPSA/PyPSA>`_

**4. Solve network**: execute the optimization for the desired problem, e.g. dispatch, planning, etc.
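
To make steps 2-4 concrete, here is a deliberately tiny, self-contained sketch: a synthetic availability profile stands in for the Atlite output of step 2, and a one-bus PyPSA model is built and optimized. All names, costs and the solver are illustrative, and the snippet uses the older-style `lopf` call the documentation refers to (newer PyPSA versions offer `n.optimize()`).

.. code:: python

    import numpy as np
    import pandas as pd
    import pypsa

    # Step 2 (stand-in): in the real workflow this profile comes from Atlite.
    snapshots = pd.date_range("2013-01-01", periods=24, freq="h")
    wind_profile = pd.Series(np.random.rand(24), index=snapshots)

    # Step 3: a minimal one-bus network model.
    n = pypsa.Network()
    n.set_snapshots(snapshots)
    n.add("Bus", "site")
    n.add("Load", "demand", bus="site", p_set=100.0)  # MW, flat toy demand
    n.add("Generator", "wind", bus="site", p_nom_extendable=True,
          capital_cost=1000.0, marginal_cost=0.0, p_max_pu=wind_profile)
    n.add("Generator", "backup", bus="site", p_nom_extendable=True,
          capital_cost=500.0, marginal_cost=50.0)

    # Step 4: joint capacity expansion and dispatch optimization.
    n.lopf(pyomo=False, solver_name="glpk")
    print(n.generators.p_nom_opt)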

12 changes: 5 additions & 7 deletions doc/tutorial.rst
@@ -128,7 +128,7 @@ The snakemake included in the conda environment pypsa-earth can be used to execu

Starting with essential usability features, the implemented PyPSA-Earth `Snakemake procedure <https://github.com/pypsa-meets-earth/pypsa-earth/blob/main/Snakefile>`_ that allows to flexibly execute the entire workflow with various options without writing a single line of python code. For instance, you can model the world energy system or any subset of countries only using the required data. Wildcards, which are special generic keys that can assume multiple values depending on the configuration options, help to execute large workflows with parameter sweeps and various options.
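
Targeted execution boils down to asking snakemake for one output file instead of the final result. A rough illustration (the target path and core count are examples only, and snakemake must be available on the PATH):

.. code:: python

    import subprocess

    # Build only one intermediate target of the workflow instead of everything.
    subprocess.run(["snakemake", "-j", "1", "networks/base.nc"], check=True)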

- You can execute some parts of the workflow in case you are interested in some specific it's parts.
+ You can execute some parts of the workflow in case you are interested in some specific parts.
E.g. power grid topology may be extracted and cleaned with the following command which refers to the script name:

.. code:: bash
@@ -160,7 +160,7 @@ Apart of that, it's worth to check that there is a proper match between the temp

It could be helpful to keep in mind the following points:

- 1. the cutout name should be the same across the whole configuration file (there are several entries, one under under `atlite` and some under each of the `renewable` parameters);
+ 1. the cutout name should be the same across the whole configuration file (there are several entries, one under `atlite` and some under each of the `renewable` parameters);

2. the countries of interest defined with `countries` list in the `config.yaml` should be covered by the cutout area;
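
A quick way to verify point 1 is to compare the cutout names programmatically. The sketch below assumes the usual layout of `config.yaml`, with cutouts defined under `atlite` and one `cutout` entry per technology under `renewable`; adapt the keys if your configuration differs.

.. code:: python

    import yaml

    with open("config.yaml") as f:
        config = yaml.safe_load(f)

    # Cutouts declared under the `atlite` section...
    declared = set(config["atlite"]["cutouts"])

    # ...must match what each renewable technology refers to.
    for tech, settings in config["renewable"].items():
        assert settings["cutout"] in declared, f"{tech}: unknown cutout {settings['cutout']}"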

@@ -190,7 +190,7 @@ These steps are required to use CDS API which allows an automatic file download

The `build_cutout` flag should be set `true` to generate the cutout. After the cutout is ready, it's recommended to set `build_cutout` to `false` to avoid overwriting the existing cutout by accident. The `snapshots` values set when generating the cutout, will determine the temporal parameters of the cutout. Accessible years which can be used to build a cutout depend on ERA5 data availability. `ERA5 page <https://www.ecmwf.int/en/forecasts/datasets/reanalysis-datasets/era5>`_ explains that the data is available from 1950 and updated continuously with about 3 month delay while the data on 1950-1978 should be treated as preliminary as that is a rather recent development.

- After the first run, if you don't change country and don't need to increase a considered time span wider than the one you created the cutout with, you may set to false both `retrieve_databundle` and `build_cutout`.
+ After the first run, if you don't change country and don't need to increase a considered time span wider than the one you created the cutout with, you may set both `retrieve_databundle` and `build_cutout` to false.

Spatial extent
^^^^^^^^^^^^^^
@@ -202,7 +202,7 @@ There is also option to set the cutout extent specifying `x` and `y` values dire
Temporal extent
^^^^^^^^^^^^^^^

- If you create the cutout for a certain year (let's say 2013) and want to run scenarios for a subset of this year, you don't need to rerun the `build_cutout` as the cutout still contains all the hours of 2013. The workflow will automatically subset the cutout archive to extract data for the particular timeframe of interest. If you instead you want to run the 2014 scenario, then rerun `build_cutout` is needed.
+ If you create the cutout for a certain year (let's say 2013) and want to run scenarios for a subset of this year, you don't need to rerun the `build_cutout` as the cutout still contains all the hours of 2013. The workflow will automatically subset the cutout archive to extract data for the particular timeframe of interest. If you instead you want to run the 2014 scenario, then it is needed to rerun `build_cutout`.

In case you need model a number of years, a convenient approach may be to create the cutout for the whole period under interest (e.g. 2013-2015) so that you don't need to build any additional cutouts. Note, however, that the disk requirements increase in this case.
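
For reference, building such a multi-year cutout directly with Atlite looks roughly like this; in PyPSA-Earth the `build_cutout` rule does the equivalent from the configuration. The path and the spatial extent below are placeholders.

.. code:: python

    import atlite

    cutout = atlite.Cutout(
        "cutouts/region-2013-2015-era5.nc",
        module="era5",
        x=slice(-12.0, 35.0),          # longitude extent, illustrative
        y=slice(33.0, 72.0),           # latitude extent, illustrative
        time=slice("2013", "2015"),    # three full years in one cutout
    )
    cutout.prepare()  # downloads ERA5 data via the CDS API (requires ~/.cdsapirc)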

@@ -227,7 +227,7 @@ To validate the data obtained with PyPSA-Earth, we recommend to go through the p
Simulation procedure
--------------------

- It may be recommended to check the following quantities the validation:
+ It may be recommended to check the following quantities in the validation:

#. inputs used by the model:

@@ -256,8 +256,6 @@ Data availability for many parts of the world is still quite limited. Usually th

* `BP <https://www.bp.com/en/global/corporate/energy-economics/statistical-review-of-world-energy.html>`_ Statistical Review of World Energy;

- * International Energy Agency `IEA <https://www.iea.org/data-and-statistics>`_;

* `Ember <https://ember-climate.org/data/data-explorer/>`_ Data Explorer.


2 changes: 1 addition & 1 deletion scripts/solve_network.py
@@ -49,7 +49,7 @@
-----------

Total annual system costs are minimised with PyPSA. The full formulation of the
- linear optimal power flow (plus investment planning
+ linear optimal power flow (plus investment planning)
is provided in the
`documentation of PyPSA <https://pypsa.readthedocs.io/en/latest/optimal_power_flow.html#linear-optimal-power-flow>`_.
The optimization is based on the ``pyomo=False`` setting in the :func:`network.lopf` and :func:`pypsa.linopf.ilopf` function.
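
In its simplest form the call described here looks as follows; the network file name and the solver are placeholders, and the full `solve_network.py` adds further functionality (custom constraints, iterative transmission expansion via `ilopf`, etc.).

.. code:: python

    import pypsa

    n = pypsa.Network("networks/elec.nc")  # illustrative input network
    n.lopf(n.snapshots, pyomo=False, solver_name="cbc")
    n.export_to_netcdf("results/elec_solved.nc")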