Monocular training:
You can download the entire raw KITTI dataset by running:
wget -i kitti_archives_to_download.txt -P kitti_data/
Then unzip with
cd kitti_data
unzip "*.zip"
cd ..
Warning: the dataset weighs about 175GB, so make sure you have enough disk space to unzip it too!
Our default settings expect that you have converted the png images to jpeg with this command, which also deletes the raw KITTI .png files:
find kitti_data/ -name '*.png' | parallel 'convert -quality 92 -sampling-factor 2x2,1x1,1x1 {.}.png {.}.jpg && rm {}'
- [ ] add train & inference script
- [ ] add KITTI dataloader
- [ ] add distributed training
- [ ] add loss function
- [ ] add log printing
- Download the required libraries, including OpenCV and libtorch.
- Download my converted TorchScript model, or convert your trained model to TorchScript yourself. If you are not yet familiar with TorchScript, please check the official docs.
- Prepare a sample image and change its path in main.cpp.
- If you don't have a GPU available, comment out the CUDA options in CMakeLists.txt.
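For orientation, a minimal CMakeLists.txt for a libtorch + OpenCV project might look like the sketch below. The project name, source file, and the CUDA-related lines are assumptions for illustration, not the repository's actual file; the commented lines at the bottom are the kind of CUDA options you would disable for a CPU-only build.

```cmake
cmake_minimum_required(VERSION 3.10)
project(depth_inference)

# CMAKE_PREFIX_PATH (or Torch_DIR) should point at your unzipped libtorch.
find_package(Torch REQUIRED)
find_package(OpenCV REQUIRED)

add_executable(depth_inference main.cpp)
target_link_libraries(depth_inference ${TORCH_LIBRARIES} ${OpenCV_LIBS})
set_property(TARGET depth_inference PROPERTY CXX_STANDARD 14)

# Hypothetical CUDA options: comment these out if you have no available GPU.
# find_package(CUDA REQUIRED)
# add_definitions(-DUSE_CUDA)
```

Configure with something like `cmake -DCMAKE_PREFIX_PATH=/path/to/libtorch ..` so `find_package(Torch)` can locate the libtorch CMake config files.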
| Model | Language | 3D Packing | Inference time / im | Link |
| --- | --- | --- | --- | --- |
| packnet_32 | libtorch | Yes | | download |
| packnet_32 | python | Yes | | download |
If you're familiar with Docker, you can run this project without installing those libraries. Please remember to install nvidia-docker, because this project needs GPU access.
You can follow to_jit.py to create your own TorchScript model, or use my converted models directly. We provide three converted models:

- monodepth2 (FP32)
- packnet-sfm (FP16)
- packnet-sfm (ONNX)

We also offer an ONNX file that can be accelerated with TensorRT; the related demo code will be released soon.
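The conversion done by to_jit.py presumably follows the standard `torch.jit.trace` workflow; the sketch below shows that workflow under assumptions of mine (the `export_torchscript` helper, the stand-in network, and the input resolution are illustrative, not the project's actual code).

```python
import torch
import torch.nn as nn

def export_torchscript(model: nn.Module, example: torch.Tensor, path: str):
    """Trace `model` with a representative input and save a TorchScript file."""
    model.eval()                 # disable dropout / batch-norm updates
    with torch.no_grad():
        traced = torch.jit.trace(model, example)
    traced.save(path)            # loadable from C++ via torch::jit::load
    return traced

if __name__ == "__main__":
    # Stand-in network; replace with the real depth model and its input size.
    net = nn.Sequential(nn.Conv2d(3, 8, 3, padding=1), nn.ReLU())
    dummy = torch.rand(1, 3, 192, 640)  # a common KITTI training resolution
    export_torchscript(net, dummy, "model.pt")
```

The saved file is what the C++ side loads with `torch::jit::load`; tracing records the operations executed for the example input, so models with input-dependent control flow would need `torch.jit.script` instead.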