[ECCV2024] CityGaussian: Real-time High-quality Large-Scale Scene Rendering with Gaussians

Yang Liu, He Guan, Chuanchen Luo, Lue Fan, Naiyan Wang, Junran Peng, Zhaoxiang Zhang
Institute of Automation, Chinese Academy of Sciences; University of Chinese Academy of Sciences


The advancement of real-time 3D scene reconstruction and novel view synthesis has been significantly propelled by 3D Gaussian Splatting (3DGS). However, effectively training large-scale 3DGS and rendering it in real time across various scales remains challenging. This paper introduces CityGaussian (CityGS), which employs a novel divide-and-conquer training approach and Level-of-Detail (LoD) strategy for efficient large-scale 3DGS training and rendering. Specifically, the global scene prior and adaptive training data selection enable efficient training and seamless fusion. Based on the fused Gaussian primitives, we generate different detail levels through compression, and realize fast rendering across various scales through the proposed block-wise detail level selection and aggregation strategy. Extensive experimental results on large-scale scenes demonstrate that our approach attains state-of-the-art rendering quality, enabling consistent real-time rendering of large-scale scenes across vastly different scales. Please visit our Project Page for more details.


📰 News

[2024.08.20] Updated the Custom Dataset Instructions!

[2024.08.05] Our code is now available! Feel free to try it out!

[2024.07.18] The camera-ready version is now available on arXiv, with more insights included.

🥏 Model of CityGaussian

This repository contains the official implementation of the paper "CityGaussian: Real-time High-quality Large-Scale Scene Rendering with Gaussians". Star ⭐ us if you like it!

Training Pipeline

Rendering Pipeline

🔧 Usage

Note that configs for the five large-scale scenes (MatrixCity, Rubble, Building, Residence, and Sci-Art) have been prepared in the config folder. Data for these datasets can be prepared according to Data Preparation. For COLMAP, we recommend directly using our generated results.
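If you organize the data yourself, the layout below is a minimal sketch assuming the standard COLMAP convention used by 3DGS-style pipelines; the rubble folder name is only an example:

data/rubble/
├── images/       # input photographs
└── sparse/0/     # COLMAP reconstruction (cameras.bin, images.bin, points3D.bin)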

Installation

a. Clone the repository

# clone repository
git clone --recursive https://github.com/DekuLiuTesla/CityGaussian.git
cd CityGaussian
mkdir data  # store your dataset here
mkdir output  # store your output here

b. Create virtual environment

# create virtual environment
conda create -yn citygs python=3.9 pip
conda activate citygs

c. Install PyTorch

  • Tested on PyTorch==2.0.1

  • You must install the build that matches your nvcc version (check with nvcc --version)

  • For CUDA 11.8

    pip install torch==2.0.1 torchvision==0.15.2 torchaudio==2.0.2 --index-url https://download.pytorch.org/whl/cu118
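
  • For other CUDA versions, pick the matching wheel from the official PyTorch index. For example, under CUDA 11.7 the command should be as follows (please verify against the PyTorch install matrix):

    pip install torch==2.0.1 torchvision==0.15.2 torchaudio==2.0.2 --index-url https://download.pytorch.org/whl/cu117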

d. Install requirements

pip install -r requirements.txt

e. Install the tailored LightGaussian for LoD

cd LargeLightGaussian
pip install submodules/compress-diff-gaussian-rasterization
ln -s /path/to/data /path/to/LargeLightGaussian/data
ln -s /path/to/output /path/to/LargeLightGaussian/output
cd ..

Prepare Config Files

If you use your own dataset, please follow the Custom Dataset Instructions to prepare it. We also provide templates in ./config and LargeLightGaussian/scripts.

Training and Vanilla Rendering

To train a scene, configure the hyperparameters of the pretraining and finetuning stages in your YAML files, then replace COARSE_CONFIG and CONFIG in run_citygs.sh. The max_block_id, out_name, and TEST_PATH in run_citygs.sh should also be set according to your dataset. Then you can train your scene simply by running:

bash scripts/run_citygs.sh

This script will also render and evaluate the result without LoD.
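
For reference, the variables to edit in run_citygs.sh might look like the sketch below. The config names, block count, and test path are illustrative, not the repository's defaults:

# illustrative excerpt of scripts/run_citygs.sh
COARSE_CONFIG="rubble_coarse"   # config of the pretraining (coarse) stage
CONFIG="rubble_c9_r4"           # config of the block-wise finetuning stage
max_block_id=8                  # highest block index, e.g. 9 blocks indexed 0..8
out_name="rubble_c9_r4"         # name of the output folder under ./output
TEST_PATH="data/rubble/val"     # path to the test set of your dataset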

Rendering with LoD

First, generate the LoD levels with the following commands:

cd LargeLightGaussian
bash scripts/run_prune_finetune_$your_scene.sh
bash scripts/run_distill_finetune_$your_scene.sh
bash scripts/run_vectree_quantize_$your_scene.sh
cd ..

After that, configure the LoD settings in another YAML file, and replace CONFIG, TEST_PATH, and out_name in run_citygs_lod.sh with your own values. You can then render the scene with LoD by running:

bash scripts/run_citygs_lod.sh

Note that the LoD selection is now based on the Nyquist sampling rate instead of a manually defined distance threshold. This modification enables better generalization and anti-aliasing performance.
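
As with the training script, the relevant variables in run_citygs_lod.sh might be set as follows (values are illustrative):

# illustrative excerpt of scripts/run_citygs_lod.sh
CONFIG="rubble_c9_r4_lod"       # config containing the LoD settings
out_name="rubble_c9_r4_lod"     # name of the output folder under ./output
TEST_PATH="data/rubble/val"     # path to the test set of your dataset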

Viewer

The web viewer is borrowed from Gaussian Lightning. Take the Rubble scene as an example: to render it without LoD, use the following command:

python viewer.py output/rubble_c9_r4

To render the scene with LoD, you can use the following command:

# copy cameras.json first for direction initialization
cp output/rubble_c9_r4/cameras.json output/rubble_c9_r4_lod/
python viewer.py config/rubble_c9_r4_lod.yaml

📝 TODO List

  • First Release.
  • Release CityGaussian code.
  • Release ColMap results of main datasets.
  • Release detailed instruction for custom dataset usage.
  • Release checkpoints on main datasets.

📄 License

This work is licensed under the Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.

🤗 Citation

If you find this repository useful, please cite it with the following BibTeX entry.

@article{liu2024citygaussian,
  title={Citygaussian: Real-time high-quality large-scale scene rendering with gaussians},
  author={Liu, Yang and Guan, He and Luo, Chuanchen and Fan, Lue and Wang, Naiyan and Peng, Junran and Zhang, Zhaoxiang},
  journal={arXiv preprint arXiv:2404.01133},
  year={2024}
}

👏 Acknowledgements

This repo benefits from 3DGS, LightGaussian, and Gaussian Lightning. Thanks for their great work!