Deep learning for multi-modal classification of cloud, shadow and land cover scenes in high-resolution satellite imagery, implemented in Keras, as described in:

Shendryk, Y., Rist, Y., Ticehurst, C. and Thorburn, P. (2019). "Deep learning for multi-modal classification of cloud, shadow and land cover scenes in PlanetScope and Sentinel-2 imagery." ISPRS Journal of Photogrammetry and Remote Sensing 157: 124-136.
This work exists thanks to CSIRO's Digiscape Future Science Platform.
You will need:

- conda
- git
To install:

- Clone the repository.

  ```
  cd path/to/where/project/lives
  git clone --recursive https://github.com/yurithefury/ChipClassification.git
  ```
- Install the conda environment on your local machine:

  ```
  conda env create -f env.yaml
  ```

  This will create an environment called `keras`. If one already exists, you will have to edit the first line of `env.yaml` to something else, e.g. `name: veryniceenvironment`. If your computer does NOT have a GPU, you will have to edit `env.yaml` so that the line `- keras-gpu=2.2.4` reads `- keras=2.2.4`. You can verify the install with the snippet below.
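To confirm that the environment resolved correctly, a minimal check along these lines should work (assuming the default environment name `keras`):

```python
# Run inside the activated environment, e.g. after `conda activate keras`.
# Keras should import cleanly and report the pinned version.
import keras
print(keras.__version__)  # expected: 2.2.4
```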
This script creates the datasets for training the final models used for inference.
This is an example of how to perform inference using a single Keras model and the `utils.inference` module. It works on TIF scenes. If you run this you should be able to look at the output in the `scrap/` subdirectory.
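The exact API of `utils.inference` is best read from the source; as a rough illustration only, here is a minimal sketch of chip-wise inference on a TIF scene using plain Keras and rasterio. The model path, scaling factor and chip size are placeholders, not the repository's actual settings.

```python
import numpy as np
import rasterio
from keras.models import load_model

model = load_model("models/my_model.h5")         # hypothetical model path

with rasterio.open("scene.tif") as src:          # hypothetical scene
    img = src.read()                             # (bands, rows, cols)
img = np.moveaxis(img, 0, -1).astype("float32")  # to (rows, cols, bands)
img /= 10000.0                                   # hypothetical reflectance scaling

# Cut the scene into fixed-size chips and classify each chip.
chip = 128                                       # hypothetical chip size
rows, cols = img.shape[0] // chip, img.shape[1] // chip
chips = np.array([
    img[r * chip:(r + 1) * chip, c * chip:(c + 1) * chip]
    for r in range(rows) for c in range(cols)
])
probs = model.predict(chips, batch_size=32)      # (n_chips, n_classes)
print(probs.shape)
```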
A small utility script to check for NaNs in an HDF5 file.
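The core of such a check is small enough to sketch; the file name below is an example only:

```python
import h5py
import numpy as np

def check_nans(path):
    """Report NaN counts for every floating-point dataset in an HDF5 file."""
    with h5py.File(path, "r") as f:
        def visit(name, obj):
            if isinstance(obj, h5py.Dataset) and np.issubdtype(obj.dtype, np.floating):
                n = int(np.isnan(obj[...]).sum())  # loads the dataset into memory
                if n:
                    print(f"{name}: {n} NaNs")
        f.visititems(visit)

check_nans("dataset.h5")  # hypothetical file name
```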
Exactly what you'd think.
A Bash script for testing whether your environment works and whether Keras can find your GPUs.
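An equivalent check can be done from Python with the TensorFlow backend (TF 1.x era, matching Keras 2.2.4); this is a generic snippet, not the script itself:

```python
# Lists all devices TensorFlow can see; GPUs show up with device_type "GPU".
from tensorflow.python.client import device_lib

devices = device_lib.list_local_devices()
print([d.name for d in devices if d.device_type == "GPU"])
```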
A training script for use with DistributedDuctTape; it can also be run as a standalone script.
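For orientation, a generic standalone Keras training loop on an HDF5 chip dataset looks roughly like the sketch below; the dataset names (`images`, `labels`), the architecture and the multi-label sigmoid head are assumptions, not this script's actual configuration.

```python
import h5py
from keras.models import Sequential
from keras.layers import Conv2D, GlobalAveragePooling2D, Dense

with h5py.File("train.h5", "r") as f:           # hypothetical dataset file
    x = f["images"][...]                        # (n, rows, cols, bands)
    y = f["labels"][...]                        # (n, n_classes), multi-hot

model = Sequential([
    Conv2D(32, 3, activation="relu", input_shape=x.shape[1:]),
    Conv2D(64, 3, activation="relu"),
    GlobalAveragePooling2D(),
    Dense(y.shape[1], activation="sigmoid"),    # multi-label scene classes
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(x, y, batch_size=32, epochs=10, validation_split=0.1)
```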
Contains some QGIS style-files for visualisation of outputs.
Inference-ready models live in here.
This is a collection of functions for turning geospatial raster data into HDF5 datasets suitable for machine learning tasks. It is a one-file module. See the source code for docs.
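As a rough illustration of the idea (not this module's actual API; all names are examples), turning GeoTIFFs into an HDF5 chip dataset can look like this:

```python
import h5py
import numpy as np
import rasterio

def rasters_to_h5(tif_paths, out_path, chip=128):
    """Cut each GeoTIFF into fixed-size chips and append them to one HDF5 dataset."""
    with h5py.File(out_path, "w") as f:
        dset = None
        for path in tif_paths:
            with rasterio.open(path) as src:
                img = np.moveaxis(src.read(), 0, -1)    # (rows, cols, bands)
            rows, cols = img.shape[0] // chip, img.shape[1] // chip
            chips = np.array([
                img[r * chip:(r + 1) * chip, c * chip:(c + 1) * chip]
                for r in range(rows) for c in range(cols)
            ])
            if dset is None:                            # create a resizable dataset
                dset = f.create_dataset("images", data=chips,
                                        maxshape=(None,) + chips.shape[1:],
                                        chunks=True)
            else:                                       # grow and append
                dset.resize(dset.shape[0] + len(chips), axis=0)
                dset[-len(chips):] = chips

rasters_to_h5(["scene.tif"], "chips.h5")                # hypothetical paths
```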
Here you can find the T-PS and T-S2 datasets, which are also available at Mendeley Data. The A-PS data can be found at Planet: Understanding the Amazon from Space.