CenterGrasp: Object-Aware Implicit Representation Learning for Simultaneous Shape Reconstruction and 6-DoF Grasp Estimation
This repository provides the source code for the paper "CenterGrasp: Object-Aware Implicit Representation Learning for Simultaneous Shape Reconstruction and 6-DoF Grasp Estimation"; see the project website for more details. Please cite the paper as follows:
@article{chisari2024centergrasp,
title={CenterGrasp: Object-Aware Implicit Representation Learning for Simultaneous Shape Reconstruction and 6-DoF Grasp Estimation},
shorttitle={CenterGrasp},
author={Chisari, Eugenio and Heppert, Nick and Welschehold, Tim and Burgard, Wolfram and Valada, Abhinav},
journal={IEEE Robotics and Automation Letters (RA-L)},
year={2024}
}
For centergrasp
conda create --name centergrasp_g_env python=3.8
conda activate centergrasp_g_env
pip install torch==1.13.1+cu117 torchvision==0.14.1+cu117 torchaudio==0.13.1 --extra-index-url https://download.pytorch.org/whl/cu117
# pip install torch==1.13.1+cpu torchvision==0.14.1+cpu torchaudio==0.13.1 --extra-index-url https://download.pytorch.org/whl/cpu
pip install kaolin==0.14.0 -f https://nvidia-kaolin.s3.us-east-2.amazonaws.com/torch-1.13.1_cu117.html
git clone git@github.com:PRBonn/manifold_python.git
cd manifold_python
git submodule update --init
make install
cd CenterGrasp  # return to the root of the CenterGrasp repository
pip install -e .
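To quickly verify the environment before moving on, the following sketch (not part of the official instructions; the file name check_env.py is only a suggestion) checks that torch, CUDA, and kaolin are importable:
# check_env.py - quick sanity check of the centergrasp_g_env environment
import torch
import kaolin

print("torch:", torch.__version__)            # expect 1.13.1+cu117 (or +cpu)
print("cuda available:", torch.cuda.is_available())
print("kaolin:", kaolin.__version__)          # expect 0.14.0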
For GIGA
pip install cython
pip install torch-scatter -f https://data.pyg.org/whl/torch-1.13.1+cu117.html
# pip install torch-scatter -f https://data.pyg.org/whl/torch-1.13.1+cpu.html
pip install catkin-pkg --extra-index-url https://rospypi.github.io/simple/
git clone git@github.com:chisarie/GIGA.git
cd GIGA
git checkout baseline
pip install -e .
python scripts/convonet_setup.py build_ext --inplace
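As an optional sanity check (not part of GIGA's instructions), you can verify that the torch-scatter dependency installed correctly for your torch build:
# quick check that torch-scatter installed correctly
import torch_scatter
print("torch-scatter:", torch_scatter.__version__)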
Follow the instructions in the README.md of GIGA's repository to download GIGA's pretrained models and object meshes (from data.zip), which are needed to run the evaluations.
Next, download the robot description and the YCB scenes description, and extract both folders under ~/datasets/.
In case you want to train CenterGrasp yourself, you can either download the pre-generated data (165 GB) and extract it under ~/datasets/, or generate it yourself (see below).
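Either way, you can quickly confirm what ended up under ~/datasets/ with a small listing script; this is only a convenience sketch, and the exact folder names depend on the archives you extracted:
# list the folders currently available under ~/datasets/
from pathlib import Path

datasets_dir = Path.home() / "datasets"
for entry in sorted(datasets_dir.iterdir()):
    print(entry.name)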
If you want to generate your own training data, follow these steps in order.
First, download the pre-generated raw and processed data from GIGA's repository, and place the four folders under ~/datasets/giga/.
Then, run the following commands. The data will be saved in the datasets directory in your home folder.
python scripts/download_cc_texture.py # Around 30GB will be downloaded
python scripts/make_grasp_labels.py --num-workers 4
python scripts/make_sgdf_dataset.py --num-workers 4
python scripts/make_rgb_dataset.py --headless --raytracing --num-workers 4 --mode train
python scripts/make_rgb_dataset.py --headless --raytracing --num-workers 4 --mode valid
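Generation can take a long time; as a rough sanity check (a sketch only, assuming the scripts write their output under ~/datasets/ as described above), you can count the files produced in each subfolder:
# count generated files per subfolder of ~/datasets/
from pathlib import Path

datasets_dir = Path.home() / "datasets"
for subdir in sorted(p for p in datasets_dir.iterdir() if p.is_dir()):
    n_files = sum(1 for f in subdir.rglob("*") if f.is_file())
    print(f"{subdir.name}: {n_files} files")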
Please download the pretrained weights (sgdf decoder, rgb encoder), extract them, and place them in the ckpt_sgdf and ckpt_rgb folders respectively, at the root of the repository. These are the models trained on the GIGA set of objects.
To reproduce the results from the paper (Table II), run the following commands. If you want to evaluate a different checkpoint, remember to change the --rgb-model CLI argument (see the example after the commands below).
python scripts/run_evals_shape.py
python scripts/run_evals_grasp.py
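For example, to evaluate a different rgb checkpoint (the id below is a placeholder, not an actual run):
python scripts/run_evals_grasp.py --rgb-model <your-rgb-checkpoint-id>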
To train your own policies instead of using the pretrained checkpoints, do the following:
python scripts/train_sgdf.py --log-wandb
Modify configs/rgb_train_specs.json -> EmbeddingCkptPath with the checkpoint id that you just trained (a sketch of the relevant entry is shown after the command below). Now you can use those embeddings to train the rgbd model:
python scripts/train_rgbd.py --log-wandb
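For reference, the relevant entry in configs/rgb_train_specs.json looks roughly like the fragment below; the checkpoint id is a placeholder, and the other keys of the file are omitted here:
{
    "EmbeddingCkptPath": "<your-sgdf-checkpoint-id>"
}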
To reproduce the results on the GraspNet-1B dataset (Table III in the paper), please check out the folder centergrasp/graspnet/. Note that you first need to download the GraspNet-1B dataset. The checkpoints trained on the GraspNet-1B data are here: sgdf decoder, rgb encoder.