Deployed 2dcf230 with MkDocs version: 1.4.3
Patrick Emami committed Aug 2, 2023
1 parent 2636a8e commit 588bf54
Showing 5 changed files with 59 additions and 24 deletions.
21 changes: 13 additions & 8 deletions index.html
@@ -623,16 +623,19 @@ <h3 id="load-a-benchmark-dataset">Load a benchmark dataset<a class="headerlink"
</span><span id="__span-0-17"><a id="__codelineno-0-17" name="__codelineno-0-17" href="#__codelineno-0-17"></a> <span class="c1"># ...</span>
</span></code></pre></div>
<h2 id="installation">Installation<a class="headerlink" href="#installation" title="Permanent link"></a></h2>
<p>If you aren't going to pretrain or evaluate models and just want access to the provided dataloaders, model code, metrics computation, etc., install the package with:</p>
<div class="language-bash highlight"><pre><span></span><code><span id="__span-1-1"><a id="__codelineno-1-1" name="__codelineno-1-1" href="#__codelineno-1-1"></a>pip<span class="w"> </span>install<span class="w"> </span>buildings_bench
</span></code></pre></div>
<h3 id="full-installation">Full installation<a class="headerlink" href="#full-installation" title="Permanent link"></a></h3>
<p>Otherwise, clone this repository and install it in editable mode in a virtual environment or a conda environment.</p>
<ol>
<li>Create an environment with <code>python&gt;=3.8</code>, for example: <code>conda create -n buildings_bench python=3.8</code>.</li>
<li>Install the package in editable mode with
<div class="language-bash highlight"><pre><span></span><code><span id="__span-2-1"><a id="__codelineno-2-1" name="__codelineno-2-1" href="#__codelineno-2-1"></a>git<span class="w"> </span>clone<span class="w"> </span>https://github.com/NREL/BuildingsBench.git
</span><span id="__span-2-2"><a id="__codelineno-2-2" name="__codelineno-2-2" href="#__codelineno-2-2"></a><span class="nb">cd</span><span class="w"> </span>BuildingsBench
</span><span id="__span-2-3"><a id="__codelineno-2-3" name="__codelineno-2-3" href="#__codelineno-2-3"></a>pip<span class="w"> </span>install<span class="w"> </span>-e<span class="w"> </span><span class="s2">&quot;.[benchmark]&quot;</span>
</span></code></pre></div></li>
</ol>
<h3 id="installing-faiss-gpu">Installing faiss-gpu<a class="headerlink" href="#installing-faiss-gpu" title="Permanent link"></a></h3>
<p>Due to a PyPI limitation, we have to install <code>faiss-gpu</code> (for KMeans) by directly downloading the wheel from <a href="https://github.com/kyamagu/faiss-wheels/releases/">https://github.com/kyamagu/faiss-wheels/releases/</a>.
Download the wheel for the Python version you are using, then install it in your environment.</p>
@@ -641,7 +644,7 @@ <h3 id="installing-faiss-gpu">Installing faiss-gpu<a class="headerlink" href="#i
</span><span id="__span-3-2"><a id="__codelineno-3-2" name="__codelineno-3-2" href="#__codelineno-3-2"></a>
</span><span id="__span-3-3"><a id="__codelineno-3-3" name="__codelineno-3-3" href="#__codelineno-3-3"></a>pip<span class="w"> </span>install<span class="w"> </span>faiss_gpu-1.7.3-cp38-cp38-manylinux2014_x86_64.whl
</span></code></pre></div>
<h3 id="optional-installing-lightgbm">[Optional] Installing LightGBM<a class="headerlink" href="#optional-installing-lightgbm" title="Permanent link"></a></h3>
<p>If running the LightGBM baseline, you will need to install LightGBM.
Follow the instructions <a href="https://pypi.org/project/lightgbm/">here</a> for your OS.
Then, <code>pip install skforecast</code>.</p>
@@ -662,11 +665,13 @@ <h2 id="download-the-datasets-and-metadata">Download the datasets and metadata<a
<ul>
<li><code>metadata.tar.gz</code></li>
</ul>
<p>Download and untar all files, which will create a new directory called <code>BuildingsBench</code>. <strong>This is the data directory, which is different from the code repository, although both are called "BuildingsBench".</strong>
See the README file <code>BuildingsBench/metadata/README.md</code> (in <code>metadata.tar.gz</code>) for more information about how the BuildingsBench dataset directory should be organized.</p>
<h3 id="environment-variables">Environment variables<a class="headerlink" href="#environment-variables" title="Permanent link"></a></h3>
<p>Set the environment variable <code>BUILDINGS_BENCH</code> to the path where the data directory <code>BuildingsBench</code> is located (created when untarring the data files). <strong>This is not the path to the code repository.</strong></p>
<div class="language-bash highlight"><pre><span></span><code><span id="__span-4-1"><a id="__codelineno-4-1" name="__codelineno-4-1" href="#__codelineno-4-1"></a><span class="nb">export</span><span class="w"> </span><span class="nv">BUILDINGS_BENCH</span><span class="o">=</span>/path/to/BuildingsBench
</span></code></pre></div>
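A quick sanity check on the variable (the <code>metadata</code> folder name comes from <code>metadata.tar.gz</code>; <code>/path/to/BuildingsBench</code> is a placeholder):

```shell
# Placeholder path -- point this at the untarred data directory,
# not at the cloned code repository.
export BUILDINGS_BENCH=/path/to/BuildingsBench

# The data directory should contain the metadata folder from metadata.tar.gz.
if [ -d "$BUILDINGS_BENCH/metadata" ]; then
    echo "BUILDINGS_BENCH looks correct"
else
    echo "warning: $BUILDINGS_BENCH/metadata not found" >&2
fi
```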
<h4 id="wandb">Wandb<a class="headerlink" href="#wandb" title="Permanent link"></a></h4>
<p>If using <code>wandb</code>, set the following:</p>
<ul>
<li><code>WANDB_ENTITY</code>: your wandb username</li>
42 changes: 36 additions & 6 deletions running/index.html
@@ -590,7 +590,7 @@ <h2 id="getting-started">Getting Started<a class="headerlink" href="#getting-sta
<p>To use these scripts with your model, you'll need to register it with our platform.</p>
<h3 id="registering-your-model">Registering your model<a class="headerlink" href="#registering-your-model" title="Permanent link"></a></h3>
<p>Please see this <a href="https://github.com/NREL/BuildingsBench/blob/main/tutorials/registering_your_model_with_the_benchmark.ipynb">step-by-step tutorial</a> for a Jupyter Notebook version of the following instructions.</p>
<p>Make sure to have installed the benchmark in editable mode: <code>pip install -e ".[benchmark]"</code></p>
<ol>
<li>Create a file called <code>your_model.py</code> with your model's implementation, and make your model a subclass of the base model in <code>./buildings_bench/models/base_model.py</code>. Make sure to implement the abstract methods: <code>forward</code>, <code>loss</code>, <code>load_from_checkpoint</code>, <code>predict</code>, <code>unfreeze_and_get_parameters_for_finetuning</code>.</li>
<li>Place this file under <code>./buildings_bench/models/your_model.py</code>.</li>
@@ -610,14 +610,44 @@ <h3 id="registering-your-model">Registering your model<a class="headerlink" href
</span><span id="__span-1-10"><a id="__codelineno-1-10" name="__codelineno-1-10" href="#__codelineno-1-10"></a><span class="k">[transfer_learning]</span>
</span><span id="__span-1-11"><a id="__codelineno-1-11" name="__codelineno-1-11" href="#__codelineno-1-11"></a><span class="c1"># override any of the default transfer_learning argparse args here</span>
</span></code></pre></div>
See <code>./configs/TransformerWithTokenizer-S.toml</code> for an example.</p>
<h3 id="pretraining">Pretraining<a class="headerlink" href="#pretraining" title="Permanent link"></a></h3>
<p><code>python3 scripts/pretrain.py --config your_model.toml</code></p>
<h4 id="without-slurm">Without SLURM<a class="headerlink" href="#without-slurm" title="Permanent link"></a></h4>
<p>The script <code>pretrain.py</code> is implemented with PyTorch <code>DistributedDataParallel</code>, so it must be launched from the command line with <code>torchrun</code>, and the argument <code>--disable_slurm</code> must be passed.
See <code>./scripts/pretrain.sh</code> for an example.</p>
<div class="language-bash highlight"><pre><span></span><code><span id="__span-2-1"><a id="__codelineno-2-1" name="__codelineno-2-1" href="#__codelineno-2-1"></a><span class="ch">#!/bin/bash</span>
</span><span id="__span-2-2"><a id="__codelineno-2-2" name="__codelineno-2-2" href="#__codelineno-2-2"></a>
</span><span id="__span-2-3"><a id="__codelineno-2-3" name="__codelineno-2-3" href="#__codelineno-2-3"></a><span class="nb">export</span><span class="w"> </span><span class="nv">WORLD_SIZE</span><span class="o">=</span><span class="m">1</span>
</span><span id="__span-2-4"><a id="__codelineno-2-4" name="__codelineno-2-4" href="#__codelineno-2-4"></a><span class="nv">NUM_GPUS</span><span class="o">=</span><span class="m">1</span>
</span><span id="__span-2-5"><a id="__codelineno-2-5" name="__codelineno-2-5" href="#__codelineno-2-5"></a>
</span><span id="__span-2-6"><a id="__codelineno-2-6" name="__codelineno-2-6" href="#__codelineno-2-6"></a>torchrun<span class="w"> </span><span class="se">\</span>
</span><span id="__span-2-7"><a id="__codelineno-2-7" name="__codelineno-2-7" href="#__codelineno-2-7"></a><span class="w"> </span>--nnodes<span class="o">=</span><span class="m">1</span><span class="w"> </span><span class="se">\</span>
</span><span id="__span-2-8"><a id="__codelineno-2-8" name="__codelineno-2-8" href="#__codelineno-2-8"></a><span class="w"> </span>--nproc_per_node<span class="o">=</span><span class="nv">$NUM_GPUS</span><span class="w"> </span><span class="se">\</span>
</span><span id="__span-2-9"><a id="__codelineno-2-9" name="__codelineno-2-9" href="#__codelineno-2-9"></a><span class="w"> </span>--rdzv-backend<span class="o">=</span>c10d<span class="w"> </span><span class="se">\</span>
</span><span id="__span-2-10"><a id="__codelineno-2-10" name="__codelineno-2-10" href="#__codelineno-2-10"></a><span class="w"> </span>--rdzv-endpoint<span class="o">=</span>localhost:0<span class="w"> </span><span class="se">\</span>
</span><span id="__span-2-11"><a id="__codelineno-2-11" name="__codelineno-2-11" href="#__codelineno-2-11"></a><span class="w"> </span>scripts/pretrain.py<span class="w"> </span>--config<span class="w"> </span>TransformerWithGaussian-S<span class="w"> </span>--disable_slurm
</span></code></pre></div>
<p>The argument <code>--disable_slurm</code> is not needed if you are running this script on a Slurm cluster as a batch job.</p>
<p>This script will automatically log outputs to <code>wandb</code> if the environment variables <code>WANDB_ENTITY</code> and <code>WANDB_PROJECT</code> are set. Otherwise, pass the argument <code>--disable_wandb</code> to disable logging to <code>wandb</code>.</p>
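For example (both values below are placeholders for your own wandb account and project):

```shell
# Placeholder values -- substitute your own wandb username and project name.
export WANDB_ENTITY=my-username
export WANDB_PROJECT=buildings_bench
echo "wandb logging target: $WANDB_ENTITY/$WANDB_PROJECT"
```

If the two variables are unset, pass <code>--disable_wandb</code> instead.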
<h4 id="with-slurm">With SLURM<a class="headerlink" href="#with-slurm" title="Permanent link"></a></h4>
<p>To launch pretraining as a SLURM batch job:</p>
<div class="language-bash highlight"><pre><span></span><code><span id="__span-3-1"><a id="__codelineno-3-1" name="__codelineno-3-1" href="#__codelineno-3-1"></a><span class="nb">export</span><span class="w"> </span><span class="nv">WORLD_SIZE</span><span class="o">=</span><span class="k">$((</span><span class="nv">$SLURM_NNODES</span><span class="w"> </span><span class="o">*</span><span class="w"> </span><span class="nv">$SLURM_NTASKS_PER_NODE</span><span class="k">))</span>
</span><span id="__span-3-2"><a id="__codelineno-3-2" name="__codelineno-3-2" href="#__codelineno-3-2"></a><span class="nb">echo</span><span class="w"> </span><span class="s2">&quot;WORLD_SIZE=&quot;</span><span class="nv">$WORLD_SIZE</span>
</span><span id="__span-3-3"><a id="__codelineno-3-3" name="__codelineno-3-3" href="#__codelineno-3-3"></a><span class="nb">export</span><span class="w"> </span><span class="nv">MASTER_PORT</span><span class="o">=</span><span class="k">$(</span>expr<span class="w"> </span><span class="m">10000</span><span class="w"> </span>+<span class="w"> </span><span class="k">$(</span><span class="nb">echo</span><span class="w"> </span>-n<span class="w"> </span><span class="nv">$SLURM_JOBID</span><span class="w"> </span><span class="p">|</span><span class="w"> </span>tail<span class="w"> </span>-c<span class="w"> </span><span class="m">4</span><span class="k">))</span>
</span><span id="__span-3-4"><a id="__codelineno-3-4" name="__codelineno-3-4" href="#__codelineno-3-4"></a>
</span><span id="__span-3-5"><a id="__codelineno-3-5" name="__codelineno-3-5" href="#__codelineno-3-5"></a><span class="nb">echo</span><span class="w"> </span><span class="s2">&quot;NODELIST=&quot;</span><span class="si">${</span><span class="nv">SLURM_NODELIST</span><span class="si">}</span>
</span><span id="__span-3-6"><a id="__codelineno-3-6" name="__codelineno-3-6" href="#__codelineno-3-6"></a><span class="nv">master_addr</span><span class="o">=</span><span class="k">$(</span>scontrol<span class="w"> </span>show<span class="w"> </span>hostnames<span class="w"> </span><span class="s2">&quot;</span><span class="nv">$SLURM_JOB_NODELIST</span><span class="s2">&quot;</span><span class="w"> </span><span class="p">|</span><span class="w"> </span>head<span class="w"> </span>-n<span class="w"> </span><span class="m">1</span><span class="k">)</span>
</span><span id="__span-3-7"><a id="__codelineno-3-7" name="__codelineno-3-7" href="#__codelineno-3-7"></a><span class="nb">export</span><span class="w"> </span><span class="nv">MASTER_ADDR</span><span class="o">=</span><span class="nv">$master_addr</span>
</span><span id="__span-3-8"><a id="__codelineno-3-8" name="__codelineno-3-8" href="#__codelineno-3-8"></a><span class="nb">echo</span><span class="w"> </span><span class="s2">&quot;MASTER_ADDR=&quot;</span><span class="nv">$MASTER_ADDR</span>
</span><span id="__span-3-9"><a id="__codelineno-3-9" name="__codelineno-3-9" href="#__codelineno-3-9"></a>
</span><span id="__span-3-10"><a id="__codelineno-3-10" name="__codelineno-3-10" href="#__codelineno-3-10"></a>srun<span class="w"> </span>python3<span class="w"> </span>scripts/pretrain.py<span class="w"> </span><span class="se">\</span>
</span><span id="__span-3-11"><a id="__codelineno-3-11" name="__codelineno-3-11" href="#__codelineno-3-11"></a><span class="w"> </span>--config<span class="w"> </span>TransformerWithGaussian-S
</span></code></pre></div>
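The <code>MASTER_PORT</code> line above derives a port in the 10000+ range from the last four characters of the SLURM job ID, which helps concurrent jobs avoid port collisions; with a hypothetical job ID:

```shell
# Hypothetical job ID, for illustration only
SLURM_JOBID=4567890
# last four characters of the job ID, added to 10000
MASTER_PORT=$(expr 10000 + $(echo -n $SLURM_JOBID | tail -c 4))
echo $MASTER_PORT  # 17890
```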
<h3 id="zero-shot-stlf">Zero-shot STLF<a class="headerlink" href="#zero-shot-stlf" title="Permanent link"></a></h3>
<p>This script, <code>scripts/zero_shot.py</code>, and the transfer learning script, <code>scripts/transfer_learning_torch.py</code>, do not use <code>DistributedDataParallel</code>, so they can be run without <code>torchrun</code>.</p>
<p><code>python3 scripts/zero_shot.py --config TransformerWithGaussian-S --checkpoint /path/to/checkpoint.pt</code></p>
<h3 id="transfer-learning-for-stlf">Transfer Learning for STLF<a class="headerlink" href="#transfer-learning-for-stlf" title="Permanent link"></a></h3>
<p><code>python3 scripts/transfer_learning_torch.py --config TransformerWithGaussian-S --checkpoint /path/to/checkpoint.pt</code> </p>



2 changes: 1 addition & 1 deletion search/search_index.json

Large diffs are not rendered by default.

18 changes: 9 additions & 9 deletions sitemap.xml
@@ -2,47 +2,47 @@
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
<url>
<loc>https://nrel.github.io/BuildingsBench/</loc>
<lastmod>2023-08-02</lastmod>
<changefreq>daily</changefreq>
</url>
<url>
<loc>https://nrel.github.io/BuildingsBench/running/</loc>
<lastmod>2023-08-02</lastmod>
<changefreq>daily</changefreq>
</url>
<url>
<loc>https://nrel.github.io/BuildingsBench/tutorials/</loc>
<lastmod>2023-08-02</lastmod>
<changefreq>daily</changefreq>
</url>
<url>
<loc>https://nrel.github.io/BuildingsBench/API/data/buildings_bench-data/</loc>
<lastmod>2023-08-02</lastmod>
<changefreq>daily</changefreq>
</url>
<url>
<loc>https://nrel.github.io/BuildingsBench/API/models/buildings_bench-models/</loc>
<lastmod>2023-08-02</lastmod>
<changefreq>daily</changefreq>
</url>
<url>
<loc>https://nrel.github.io/BuildingsBench/API/utilities/buildings_bench-evaluation/</loc>
<lastmod>2023-08-02</lastmod>
<changefreq>daily</changefreq>
</url>
<url>
<loc>https://nrel.github.io/BuildingsBench/API/utilities/buildings_bench-tokenizer/</loc>
<lastmod>2023-08-02</lastmod>
<changefreq>daily</changefreq>
</url>
<url>
<loc>https://nrel.github.io/BuildingsBench/API/utilities/buildings_bench-transforms/</loc>
<lastmod>2023-08-02</lastmod>
<changefreq>daily</changefreq>
</url>
<url>
<loc>https://nrel.github.io/BuildingsBench/API/utilities/buildings_bench-utils/</loc>
<lastmod>2023-08-02</lastmod>
<changefreq>daily</changefreq>
</url>
</urlset>
Binary file modified sitemap.xml.gz
