This repository contains the code used to obtain all results of the paper Pavel Perezhogin, Laure Zanna, Carlos Fernandez-Granda, "Generative data-driven approaches for stochastic subgrid parameterizations in an idealized ocean model", published in JAMES.
The main idea of the paper is to build stochastic subgrid parameterizations of mesoscale eddies using a generative approach from Machine Learning (ML). A subgrid parameterization accounts for the missing physics of the eddies that are not resolved on the grid. An efficient parameterization makes it possible to simulate turbulent flows on a coarse computational grid. A turbulent flow represented on a coarse grid lacks information about the state of the subgrid eddies, which introduces uncertainty into the missing forcing induced by these eddies. Here we aim to produce samples from the distribution of all possible subgrid forcings consistent with the resolved flow:
An example of many possible realizations of the subgrid forcing for a fixed resolved flow is shown below:
The animation is produced with the GAN model in notebooks/Animation.ipynb.
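To make the sampling idea concrete, here is a minimal PyTorch sketch; the toy architecture, shapes, and names below are illustrative stand-ins, not the repository's GAN/VAE models. Feeding different latent noise vectors to a conditional generator for the same resolved flow yields different subgrid forcing realizations.

```python
import torch
import torch.nn as nn

# Toy stand-in for a conditional generator: (coarse PV, latent noise) -> subgrid forcing.
# The repository's GAN/VAE architectures differ; this only illustrates the sampling pattern.
class ToyGenerator(nn.Module):
    def __init__(self, channels=2):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(2 * channels, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, channels, 3, padding=1),
        )

    def forward(self, q, z):
        return self.net(torch.cat([q, z], dim=1))

gen = ToyGenerator()
q_coarse = torch.randn(1, 2, 64, 64)  # placeholder for a two-layer coarse-grid PV snapshot

# Resampling the latent noise gives many forcing realizations for the SAME resolved flow
with torch.no_grad():
    forcings = [gen(q_coarse, torch.randn_like(q_coarse)) for _ in range(10)]
```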
Online simulations with generative models (GAN, VAE) reveal better numerical stability compared to the baseline GZ model (Guillaumin and Zanna 2021):
The animation is produced using notebooks/Animate-solution.ipynb.
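Here "online" means that the trained model is called inside the coarse-resolution pyqg simulation at every time step. A minimal sketch of this coupling, assuming pyqg's q_parameterization hook and a placeholder forcing function standing in for a trained generative model:

```python
import numpy as np
import pyqg

def stochastic_forcing(m):
    # Placeholder for a trained generative model: it should return a PV forcing
    # with the same shape as m.q; the amplitude here is arbitrary.
    return 1e-8 * np.random.randn(*m.q.shape)

# Coarse-resolution two-layer QG model with the stochastic subgrid term
# added to the PV tendency; run length is illustrative.
m = pyqg.QGModel(nx=64, tmax=10 * 360 * 86400, q_parameterization=stochastic_forcing)
m.run()
```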
See notebooks/JAMES_figures.ipynb.
In case the dataset in the cloud is not working, download it from Zenodo!
- Google-Colab/dataset.ipynb - Description of the dataset containing the training data and hires/lores simulations (a loading sketch is given after this list)
- Google-Colab/training.ipynb - An example of training the generative subgrid models
- Google-Colab/offline-analysis.ipynb - Predicting and plotting the subgrid forcing, comparing spectral properties of the generated fields, and computing offline metrics
- Google-Colab/online-simulations.ipynb - Running online simulations with pretrained subgrid models on GPUs, comparing kinetic energy (KE), KE spectra, and snapshots, and computing online metrics
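As a rough sketch of how the dataset can be opened outside Colab with xarray: the path below is a placeholder; the actual cloud store URL is given in Google-Colab/dataset.ipynb, and a local copy can be downloaded from Zenodo instead.

```python
import xarray as xr

# Placeholder path: substitute the cloud zarr URL from dataset.ipynb
# or the path to a local copy downloaded from Zenodo.
ds = xr.open_zarr('path/to/eddy_dataset.zarr')

print(ds)             # dimensions, coordinates, attributes
print(ds.data_vars)   # e.g. coarse PV and subgrid forcing fields (names depend on the dataset)
```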
cd scripts
and check that the Slurm configuration is consistent with your HPC:
python -c "from slurm_helpers import *; create_slurm('','test.py')"
cat launcher.sh
Run each script and pay attention to BASIC_FOLDER, SCRIPT_PATH, and so on:
python run_reference.py
- Coarse-grain the high-resolution simulations with the function coarsegrain_reference_dataset (an illustrative sketch of this operation is given below)
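A rough illustration of what such a coarse-graining step does (filter, then subsample), using gcm_filters and xarray; the file path, variable names, and factor-4 coarsening below are assumptions for the sketch, not the repository's actual settings.

```python
import xarray as xr
import gcm_filters

ds = xr.open_dataset('highres_snapshots.nc')   # placeholder path to high-resolution output
factor = 4                                     # illustrative coarsening factor

# Gaussian filter on a regular grid, followed by subsampling to the coarse grid
filt = gcm_filters.Filter(
    filter_scale=factor,
    dx_min=1,
    filter_shape=gcm_filters.FilterShape.GAUSSIAN,
    grid_type=gcm_filters.GridType.REGULAR,
)
q_filtered = filt.apply(ds.q, dims=['y', 'x'])            # assumes a PV variable 'q' with dims y, x
q_coarse = q_filtered.coarsen(x=factor, y=factor).mean()  # coarse-grid representation
```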
python run_forcing_datasets.py
python train_parameterizations.py
python run_parameterized.py
python compute_online_metrics.py
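As a rough illustration of one such online metric, here is an (un-normalized) isotropic kinetic-energy spectrum of a saved run; the file path and variable/dimension names ('u', 'v', 'lev', 'time') are assumptions for the sketch, not the script's actual output format.

```python
import numpy as np
import xarray as xr

ds = xr.open_dataset('parameterized_run.nc')   # placeholder path to an online run
u = ds.u.isel(lev=0, time=-1).values           # upper-layer velocities of the last snapshot
v = ds.v.isel(lev=0, time=-1).values

ny, nx = u.shape
KE_hat = 0.5 * (np.abs(np.fft.fft2(u))**2 + np.abs(np.fft.fft2(v))**2)

# Bin the 2D spectral energy by integer isotropic wavenumber
kx = np.fft.fftfreq(nx) * nx
ky = np.fft.fftfreq(ny) * ny
k = np.sqrt(kx[None, :]**2 + ky[:, None]**2)
kbins = np.arange(1, nx // 2)
spectrum = np.array([KE_hat[(k >= kb) & (k < kb + 1)].sum() for kb in kbins])
```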
pip install numpy matplotlib xarray aiohttp requests zarr pyfftw gcm_filters pyqg cmocean gplearn
- Install PyTorch
- Optionally, install pyqg_parameterization_benchmarks:
pip install git+https://github.com/m2lines/pyqg_parameterization_benchmarks.git
git clone https://github.com/m2lines/pyqg_generative.git
pip install --editable .