
Commit
Merge pull request #138 from initze/version-bump
version bump
initze authored Jun 3, 2024
2 parents 0c0b4a6 + c74c160 commit a12106c
Showing 3 changed files with 20 additions and 9 deletions.
7 changes: 7 additions & 0 deletions CHANGELOG.md
@@ -1,5 +1,12 @@
# Changelog

## [0.10.3] - 2024-06-03

### Changed

- use Typer for CLI tools
- simplified UDM application for Planet data masking

## [0.10.2] - 2024-05-08

### Added
20 changes: 12 additions & 8 deletions README.md
@@ -28,21 +28,23 @@ mamba install gdal>=3.6 -c conda-forge

This will pull the CUDA 12 build of PyTorch. If you are running CUDA 11, you need to switch to the corresponding PyTorch packages afterwards by running `pip3 install torch==2.2.0+cu118 torchvision==0.17.0+cu118 --index-url https://download.pytorch.org/whl/cu118`.
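
If you are unsure which build ended up in your environment, a quick sanity check (assumes PyTorch is already installed):

```bash
# Print the PyTorch version, the CUDA toolkit it was built against, and
# whether a GPU is visible. A version like "2.2.0+cu118" is the CUDA 11.8 build.
python -c "import torch; print(torch.__version__, torch.version.cuda, torch.cuda.is_available())"
```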

### Additional packages

#### cucim

You can install cucim to speed up postprocessing. cucim uses the GPU to perform the binary erosion of edge artifacts, which runs a lot faster than the standard CPU implementation in scikit-image.

`pip install --extra-index-url=https://pypi.nvidia.com cucim-cu11==24.4.*`

For installation instructions for other CUDA versions, see:

<https://docs.rapids.ai/install>
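
As a rough illustration of what cucim buys you (a minimal sketch, not the project's actual postprocessing code; assumes cucim, CuPy, and a CUDA GPU are available, and the mask here is synthetic):

```bash
# Hedged sketch: binary erosion on the GPU via cucim's scikit-image-compatible API.
python - <<'EOF'
import numpy as np
import cupy as cp
from cucim.skimage.morphology import binary_erosion

# Synthetic stand-in for a segmentation mask with edge artifacts.
mask = np.zeros((1024, 1024), dtype=bool)
mask[256:768, 256:768] = True

eroded = cp.asnumpy(binary_erosion(cp.asarray(mask)))  # compute on GPU, copy back
print("pixels before/after erosion:", mask.sum(), eroded.sum())
EOF
```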
## System and Data Setup

### Option 1 - Singularity container

<https://cloud.sylabs.io/library/initze/aicore/thaw_slump_segmentation>

The container contains all requirements to run the processing code; Singularity must be installed on the host.

@@ -52,7 +54,9 @@ singularity shell --nv --bind <your bind path> thaw_slump_segmentation.sif

```bash
singularity shell --nv --bind <your bind path> thaw_slump_segmentation.sif
```
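
To fetch the image in the first place, one option is pulling it from the Sylabs library (the URI below is inferred from the link above; the default output filename may differ on your setup):

```bash
# Pull the container image from the Sylabs cloud library; by default this
# writes a file such as thaw_slump_segmentation_latest.sif.
singularity pull library://initze/aicore/thaw_slump_segmentation
```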

### Option 2 - Anaconda

### Environment setup

We recommend creating a new conda environment from the provided environment.yml file:

```bash
conda env create -n aicore -f environment.yml
```
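
Afterwards, activate the environment before running any of the tools:

```bash
conda activate aicore
```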
@@ -65,6 +69,7 @@ conda env create -n aicore -f environment.yml
2. copy/move data into <DATA_DIR>/auxiliary (e.g. prepared ArcticDEM data)

### Set gdal paths in system.yml file

#### Linux

@@ -115,13 +120,13 @@ Hello tobi

### Data Preprocessing for Planet data

#### Setting up all required files for training and/or inference

```bash
python setup_raw_data.py --data_dir <DATA_DIR>
```

#### Setting up required files for training

```bash
python prepare_data.py --data_dir <DATA_DIR>
```

@@ -133,7 +138,6 @@ python prepare_data.py --data_dir <DATA_DIR>

```bash
python download_s2_4band_planet_format.py --s2id <IMAGE_ID> --data_dir <DATA_DIR>
```
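
Taken together, a typical preprocessing run follows the order above (a hypothetical sequence; `./data` and `<IMAGE_ID>` are placeholders, and the Sentinel-2 download is only needed when working with S2 scenes):

```bash
# Hypothetical end-to-end preprocessing; adjust paths and IDs to your setup.
python setup_raw_data.py --data_dir ./data
python prepare_data.py --data_dir ./data
python download_s2_4band_planet_format.py --s2id <IMAGE_ID> --data_dir ./data
```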

### Training a model

2 changes: 1 addition & 1 deletion pyproject.toml
@@ -1,6 +1,6 @@
[project]
name = "thaw-slump-segmentation"
version = "0.10.2"
version = "0.10.3"
description = "Thaw slump segmentation workflow using PlanetScope data and pytorch"
authors = [
{ name = "Ingmar Nitze", email = "[email protected]" },
