Installation

Docker Container

While it is possible to create a virtual or Anaconda environment directly and install all the required dependencies, it is recommended to run the code inside a Docker container.

Prerequisites

The code requires a PC running Ubuntu with an NVIDIA GPU that supports CUDA 11.3.

To run the code inside a Docker container, Docker and NVIDIA-Docker must also be installed:

  1. Install Docker following the official instructions.
  2. Run the official post-installation steps for Docker in Linux.
  3. Install NVIDIA-Docker following the official instructions.
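
To verify that Docker can access the GPU, you can run nvidia-smi inside a CUDA base image (a minimal check; it assumes the nvidia/cuda:11.3.1-base-ubuntu20.04 tag is still available on Docker Hub, but any CUDA-enabled image works):

nvidia-docker run --rm --gpus all nvidia/cuda:11.3.1-base-ubuntu20.04 nvidia-smi

If the command prints a table listing your GPU(s), the container runtime is set up correctly.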

Installation Procedure

Follow these steps to create the Docker image used for running the PADLoC source code.

  1. Clone the GitHub repository.

    git clone https://github.com/robot-learning-freiburg/PADLoC
  2. Build the Docker image.

    cd PADLoC
    docker build -f docker/Dockerfile --tag padloc .
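
To confirm that the image was built successfully, list the local images filtered by the chosen tag:

docker image ls padloc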

Running the Docker Container

Use the docker run command to start the container. Some useful flags and arguments are shown here. For more arguments and additional information on the docker run command, please refer to the official documentation.

docker run \
    [-it] \
    [--rm] \
    [--gpus GPUS] \
    [-m MEM] \
    [--shm-size=SHM] \
    [-v /HOST/PATH:/GUEST/PATH[:ro] ] \
    [--name NAME] \
    IMG
| Argument | Description |
| --- | --- |
| -it | Opens an interactive terminal. |
| --rm | Automatically removes the container when it exits. |
| --gpus GPUS | Provides access to the host's GPUs. Use --gpus all to allow access to all available GPUs, or restrict access to specific GPUs with --gpus '"device=0,2"' (note the use of both single and double quotes). |
| -m MEM | The maximum amount of memory the container is allowed to use, given as an integer with a suffix of b (bytes), k (kilobytes), m (megabytes) or g (gigabytes). |
| --shm-size=SHM | The maximum amount of shared memory the container is allowed to use, given as an integer with a suffix of b (bytes), k (kilobytes), m (megabytes) or g (gigabytes). |
| -v /HOST/PATH:/GUEST/PATH[:ro] | Mounts the local path /HOST/PATH from the host as a volume in the container, with mount point /GUEST/PATH. Optionally, :ro can be appended to the mapping to make the volume read-only. |
| --name NAME | Assigns a name to the running container instead of an auto-generated one. Useful for referring to the container in later commands. |
| IMG | Name of the Docker image to run. |

⚠️ The source code is not copied into the Docker image when built and must therefore be mounted to the /padloc directory using the -v volume specification when running the container. See the example usage.

Example Usage

Run the padloc Docker container with access to all GPUs, 64 GB of memory and 16 GB of shared memory, mounting the source code, dataset, checkpoint and output directories as volumes.

docker run \
    -it \
    --gpus all \
    -m 64g \
    --shm-size=16g \
    -v '/path/to/source/code/':/padloc \
    -v '/path/to/kitti/dataset/':/data \
    -v '/path/to/cp/':/cp \
    -v '/path/to/output/':/output \
    --name padloc \
    padloc
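
Because the container was given a fixed name, it can be referred to directly afterwards. For example (standard Docker commands, shown here for convenience):

# open an additional shell in the running container
docker exec -it padloc bash

# restart and re-attach to the container after it has exited
docker start -ai padloc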

Pre-trained Models

You can find the pre-trained model weights for PADLoC here. Download the .tar file and place it in the directory that will then be mounted into the Docker container.

⚠️ There is no need to extract the tarball, since the model loading method uses it directly.
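
For example, with the checkpoint mount from the example above, the downloaded file only needs to be placed on the host side of that mount (the file name padloc.tar is a placeholder; keep whatever name the download provides):

# place the weights in the host directory that is mounted at /cp
mv ~/Downloads/padloc.tar /path/to/cp/
# inside the container the checkpoint is then available as /cp/padloc.tar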

Custom environment

If you wish to set up your own virtual or Anaconda environment, install the dependencies listed in the environment.yaml file.

Then, install the following packages in the same way as in the Dockerfile (a sketch is shown after the list):

  • OpenPCDet
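
A minimal sketch of this setup, assuming a Conda-based workflow and that the environment defined in environment.yaml is named padloc (check the name field of that file); OpenPCDet is built from source following its own installation instructions:

# create and activate the environment from the provided file
conda env create -f environment.yaml
conda activate padloc

# install OpenPCDet from source
git clone https://github.com/open-mmlab/OpenPCDet.git
cd OpenPCDet
python setup.py develop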

⚠️ Versions of Open3D >= 0.15 have a different implementation of RANSAC that results in poor registration accuracy. Please make sure to install a version of Open3D between 0.12.0 and 0.14.2 for the best results.

⚠️ Versions of SPConv >= 2.2 are not compatible with the provided pre-trained weights.
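
If you manage these two dependencies with pip, the constraints above can be expressed as version pins (the spconv-cu113 package name assumes CUDA 11.3; adjust it to your CUDA version):

pip install 'open3d>=0.12.0,<0.15'
pip install 'spconv-cu113<2.2'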