Implementation of the paper LSENeRF.
- Create an environment with python=3.8 or from environment.yml
- Install with the commands below:
python -m pip install torch==2.0.1+cu117 torchvision==0.15.2+cu117 --extra-index-url https://download.pytorch.org/whl/cu117
python -m pip install -e .
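As a quick post-install sanity check (a minimal sketch, not part of the repo), you can verify that the installed packages resolve without fully importing them:

```python
import importlib.util

def check_install(packages=("torch", "torchvision")):
    """Map each package name to whether the import system can find it."""
    return {pkg: importlib.util.find_spec(pkg) is not None for pkg in packages}

if __name__ == "__main__":
    for pkg, found in check_install().items():
        print(f"{pkg}: {'OK' if found else 'missing'}")
```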
Refer to the data repo to format either an EVIMOv2 or LSENeRF scene. To train a model, update the --data
argument in the corresponding training script and run it:
# to train a LSENeRF scene
bash scripts/train_lse_data.sh
# to train a EVIMOv2 scene
bash scripts/train_evimo.sh
You can choose which method to run by changing the configuration variables at the top of train_evimo.sh and train_lse_data.sh.
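The method switch is typically a few variables at the top of the script. The snippet below is a hypothetical sketch of that layout (variable names like METHOD and DATA_PATH are illustrative; check the actual script for the real names):

```shell
# Hypothetical top-of-script configuration (names are illustrative):
METHOD=lsenerf            # which method/configuration to train
DATA_PATH=/path/to/scene  # a scene formatted per the data repo

ns-train "$METHOD" --data "$DATA_PATH"
```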
To see all available parameters, run:
ns-train lsenerf -h
These scripts run camera optimization before evaluation. Update the experiment path in each script before running; the example path included there shows the expected format. To evaluate a non-embedding method, run:
bash scripts/eval.sh
To evaluate an embedding method, run:
bash scripts/emb_eval.sh
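Concretely, the edit before evaluation is usually just pointing a path variable at a finished training run. This is a hypothetical sketch (EXP_PATH and the path layout are illustrative; the real variable name is defined inside the eval scripts):

```shell
# Hypothetical: point the evaluation script at a trained experiment.
# The actual variable name and directory layout live in scripts/eval.sh.
EXP_PATH=outputs/my-scene/lsenerf/run-timestamp

bash scripts/eval.sh        # or scripts/emb_eval.sh for embedding methods
```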