Releases: IGNF/myria3d

Production Assets [arbitrary commit]

11 Jul 08:46

This release tracks an arbitrary commit and should be ignored as a code reference.
It exists to document and publish a few models and their assets.
For the latest production-ready code for training and inference, use the main branch!

We attach the latest trained models for convenience, with no guarantee of performance. Better distribution channels will be used in the future, along with disclosure of the training data and more complete documentation of performance.

Assets for a trained model

  • A ModelCard with information on context, data, training, and performance.
  • The trained multiclass model's Lightning checkpoint.
  • A configuration file that can be used for prediction on unseen data.
  • A color scale to visualize the predicted classification in CloudCompare.

Model Zoo

  • 20230930_60k_basic_targetted_epoch37_Myria3DV3.4.0 [best][staging]
  • proto151_V2.0_epoch_100_Myria3DV3.1.0 [production]
  • proto151_V1.0_epoch_40_Myria3DV3.0.1 [deprecated]
  • proto151_V0.0_epoch_056_Myria3DV2.3.0 [deprecated]

V3.3.0 - Ignore artefacts

07 Feb 06:23
4e94bc0
  • First external contribution: documentation of instructions for using Docker and GPUs in the Windows Subsystem for Linux. Thank you @jistiak!
  • Implementation of a way to ignore points with a specific class (e.g., class 65 = artefacts). This behavior is the new default. See the documentation for inference options.
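The class-ignoring behavior can be sketched as a simple filter over the point cloud. This is a minimal illustration of the idea, not myria3d's actual implementation; only the class code 65 for artefacts comes from the release note.

```python
# Sketch: drop points of an ignored class before inference.
# Illustrative only -- not myria3d's actual code or parameter names.
ARTEFACT_CLASS = 65  # class code for artefacts, per the release note

def drop_ignored_points(points, ignored_classes=(ARTEFACT_CLASS,)):
    """Return only the points whose classification is not ignored.

    `points` is a list of dicts with at least a "classification" key.
    """
    return [p for p in points if p["classification"] not in ignored_classes]

cloud = [
    {"x": 0.0, "y": 0.0, "classification": 2},   # ground
    {"x": 1.0, "y": 0.5, "classification": 65},  # artefact -> dropped
    {"x": 2.0, "y": 1.0, "classification": 5},   # vegetation
]
kept = drop_ignored_points(cloud)
```

In the real pipeline the predictions for dropped points still have to be interpolated back or marked in the output LAS; this sketch only shows the filtering step.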

New Contributors

  • @jistiak made their first contribution in #53 (Cheers!)

Full Changelog: V3.2.5...V3.3.0

V3.2.5 Ship a trained model in code

07 Feb 06:18
3290b24
Pre-release

A trained model and its configuration are shipped directly under the trained_model_assets directory, for the convenience of users and to facilitate CI/CD.
The memory footprint is slightly reduced during interpolation.

What's Changed

  • Embedding a default trained model and config in the code by @MichelDaab in #48
  • Optimize memory usage during interpolation+saving by @leavauchier in #50

Full Changelog: V3.2.0...V3.2.5

V3.2.0 - Control over name of predicted dimensions in LAS

07 Feb 06:15

One can now control how the predicted classification and entropy are saved in a LAS file, including the possibility to override the Classification channel directly.
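The idea behind configurable output names can be sketched as follows. The parameter names and the dict standing in for LAS dimensions are illustrative assumptions, not myria3d's actual config keys; only the default names PredictedClassification and Entropy and the Classification-override behavior come from the release notes.

```python
# Sketch: save predictions under configurable LAS dimension names.
# Parameter names are illustrative, not myria3d's actual config keys.
def save_predictions(las_dims, preds, entropy,
                     classification_name="PredictedClassification",
                     entropy_name="Entropy"):
    """Store predictions into a dict standing in for LAS dimensions.

    Setting classification_name="Classification" directly overrides
    the standard Classification channel, as the release note describes.
    """
    las_dims[classification_name] = list(preds)
    las_dims[entropy_name] = list(entropy)
    return las_dims

# Default names: predictions land in new channels, Classification is kept.
dims = {"Classification": [2, 2, 5]}
save_predictions(dims, preds=[2, 5, 5], entropy=[0.1, 0.4, 0.2])

# Overriding the standard channel instead:
overridden = save_predictions({"Classification": [2, 2, 5]},
                              preds=[2, 5, 5], entropy=[0.1, 0.4, 0.2],
                              classification_name="Classification")
```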

What's Changed

  • V3.2.0: control of output channels names (PredictedClassification & Entropy) by @CharlesGaydon in #46

Full Changelog: V3.0.2...V3.2.0

V3.0.2 - PyG-based implementation of RandLA-Net

07 Feb 06:13

Re-implementation of the model in the PyTorch Geometric framework, validated by the PyG community via a PR.

This makes it possible to feed the model variable-size point clouds and avoids the data degradation caused by subsampling, which significantly improves model accuracy.
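PyTorch Geometric supports variable-size inputs by concatenating all points of a mini-batch and tracking which cloud each point belongs to with a batch index vector. A dependency-free sketch of that batching idea (PyG's actual Batch class does this, and much more, with tensors):

```python
# Sketch of PyG-style batching: concatenate variable-size point clouds
# and record point ownership with a batch index. Illustrative only.
def collate_clouds(clouds):
    """Flatten a list of clouds (lists of xyz tuples) into one list of
    points plus a parallel batch index identifying the source cloud."""
    points, batch_index = [], []
    for i, cloud in enumerate(clouds):
        points.extend(cloud)                 # clouds may differ in size
        batch_index.extend([i] * len(cloud)) # one index per point
    return points, batch_index

clouds = [
    [(0.0, 0.0, 1.2), (1.0, 0.0, 1.5)],                   # 2 points
    [(5.0, 5.0, 0.1), (5.5, 5.0, 0.2), (6.0, 5.5, 0.3)],  # 3 points
]
points, batch_index = collate_clouds(clouds)
```

Because no cloud has to be padded or subsampled to a fixed size, every input point survives into the batch, which is what avoids the data degradation mentioned above.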

Full Changelog: V2.4...V3.0.2

V2.4 - HDF5 datasets, refactors.

23 Aug 10:36
Pre-release

Full Changelog: V2.3.0...V2.4

V2.3.0

20 Jun 13:29
a222e18

Robust evaluation at test time, backward-compatible with previous models.

What's Changed

  • By-receptive-field interpolation at predict time; parameterization of transforms; PointNet++ support by @CharlesGaydon in #27
  • V2.3.0 eval time interpolation by @CharlesGaydon in #28

Full Changelog: V2.1.0...V2.3.0

V2.2.0

13 Jun 15:50

Key changes:

  • The final interpolation of predictions to the full point cloud happens by receptive field. If multiple logits were predicted for the same point, they are sum-reduced before computing probabilities, which enables test-time shifts.
  • The test loop is simplified, and IoU is applied on a per-sample basis, on points for which the model produced logits. The drawback is that the effect of test-time shifts can no longer be evaluated.
  • PointNet (whose current implementation required same-size clouds) is replaced by PointNet++ (adapted from the PyG implementation), which accepts full point clouds without subsampling.
  • Transforms are now defined via configs, in three groups: preparations, augmentations, and normalizations.
  • Clouds that are smaller than expected (i.e. less than 50x50 meters due to border effects) have their positions normalized by a constant, to avoid dilation of small clouds. This yields slightly worse results for previously trained models on these areas, but the effect is minor since the constant is chosen to equal the receptive field size.
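The per-point logit reduction described in the first bullet can be sketched with a plain-Python softmax. This is a minimal illustration of the reduction step under the assumption of per-class logit lists, not the actual myria3d code:

```python
import math

def softmax(logits):
    """Numerically stable softmax over a list of logits."""
    m = max(logits)
    exps = [math.exp(v - m) for v in logits]
    total = sum(exps)
    return [e / total for e in exps]

def reduce_and_predict(logits_per_receptive_field):
    """Sum the logits predicted for one point by several overlapping
    receptive fields, then compute probabilities once on the sums."""
    n_classes = len(logits_per_receptive_field[0])
    summed = [sum(logits[c] for logits in logits_per_receptive_field)
              for c in range(n_classes)]
    return softmax(summed)

# One point covered by two overlapping receptive fields (as happens
# with test-time shifts); logits are summed before the softmax:
probas = reduce_and_predict([[2.0, 0.5, -1.0],
                             [1.5, 1.0, -0.5]])
```

Summing logits before the softmax (rather than averaging per-field probabilities) is what makes the aggregation a single coherent prediction per point.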

Full Changelog: V2.1.0...V2.2.0

V2.0.0 - Myria3d

17 May 15:11
a930c35

A test suite covers the typical use cases: training and prediction from the CLI, successive train+test, a dry run with RandLaNet, and overfitting tests with RandLaNet and PointNet to ensure the models are trainable.

The torch-points-kernels dependency is removed and replaced using pyg, which adds some complexity to the code but simplifies installation of the virtual environment. The resulting code is backward-compatible with previous models and fully tested for regressions (IoU is unchanged on a 15 km² test set).

Corrections to the Dockerfile are also implemented; in particular, CUDA images were broken by a CUDA update and needed to be adjusted.

Workflows make good use of caching functionalities, both from Docker and from the GitHub environment.

Requirements files are simplified, and dependencies are installed without redundant command lines. The torchmetrics version is pinned, because pytorch-lightning would otherwise use a newer, non-backward-compatible version.

Full Changelog: V1.7.1...V2.0.0

V1.7.1 - Better Doc, Clearer Architecture

05 Apr 15:07

More documentation, including the hydra configs.
Changes in package architecture for better cohesion.

Full Changelog: V1.6.13...V1.7.0