
TGRS paper "prompt-driven building footprint extraction in aerial images with offset-building model"


TGRS preprint paper

MIT License



Extract Building Footprint from Aerial Images with Offset-Building Model (OBM)

Extract building footprints like a human
Explore the docs »

View Demo · Report Bug · Request Feature

Table of Contents
  1. What is OBM?
  2. Built With
  3. Getting Started
  4. Usage
  5. Workflow and Results
  6. License
  7. Contact

What is OBM?

Figure: (a) OBM model; (b) ROAM module

We are the first to propose an interactive model for footprint extraction, and we design a series of Distance NMS (DNMS) algorithms tailored to the Building Footprint Extraction (BFE) problem. We also design new metrics for evaluating interactive BFE models.

Based on the accurate extraction of roofs and roof-to-footprint offsets, we can reconstruct buildings' relative height maps.

(a) illustrates our architecture, which inherits from the Segment Anything Model (SAM).

(b) shows the structure of our core module, the Reference Offset Adaptive Module (ROAM). During inference, each building is classified by the length of its base-head offset and then roams to the corresponding adaptive head; the final offset is determined jointly by the base head and the adaptive head.
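As a rough, unofficial sketch of the roaming idea described above (the length thresholds, number of heads, scales, and fusion rule here are all hypothetical; the real adaptive heads are learned):

```python
import math

# Hypothetical length thresholds (pixels) separating the adaptive heads;
# ROAM's actual bucketing is learned and may differ.
BINS = [(0.0, 10.0), (10.0, 30.0), (30.0, float("inf"))]

def adaptive_head(offset, scale):
    # Stand-in for a learned, bucket-specific regression head.
    dx, dy = offset
    return (dx * scale, dy * scale)

def roam(base_offsets):
    """Route each building's base-head offset to a length-specific
    adaptive head, then fuse the two predictions (plain average here)."""
    fused = []
    for dx, dy in base_offsets:
        length = math.hypot(dx, dy)
        bucket = next(i for i, (lo, hi) in enumerate(BINS) if lo <= length < hi)
        ax, ay = adaptive_head((dx, dy), scale=1.0 + 0.1 * bucket)
        fused.append(((dx + ax) / 2, (dy + ay) / 2))
    return fused

print(roam([(3.0, 4.0), (12.0, 16.0), (30.0, 40.0)]))
```

Each bucket "roams" to its own head, so short and long offsets are refined by specialists rather than one shared regressor.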

Our model reaches a new SOTA:

  • On the open BONAI dataset, offset error drops by 16.99% and roof boundary IoU rises by 13.15%.
  • Without extra training, we tested all models on a newly annotated dataset for generalization, improving by 20.29% in offset vectors and 76.36% in offset angles.
  • The DNMS series brings extra gains in all situations.

(back to top)

Built With

Our work builds on the Segment Anything Model (SAM), MMDetection, and BONAI.

(back to top)

Pretrained OBM weights are available at OBM weight.

Getting Started

This section gives a quick start for OBM.

Prerequisites

The code is built on an early version of MMDetection and was trained on a server with 6x RTX 3090 GPUs.

  • Ensure your CUDA installation and your PyTorch version are compatible:
    nvidia-smi

Our verified environment: PyTorch 1.7.0, CUDA 11.1.
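A small, self-contained helper (ours, not part of the repo) that compares your local versions against the verified pair before you launch training:

```python
# Compare local torch/CUDA versions against the combination the authors
# report as working (PyTorch 1.7.0, CUDA 11.1).
VERIFIED = {"torch": "1.7.0", "cuda": "11.1"}

def matches_verified(torch_version: str, cuda_version: str) -> bool:
    """True when both versions start with the verified release numbers,
    ignoring local build suffixes such as '+cu111'."""
    return (torch_version.split("+")[0].startswith(VERIFIED["torch"])
            and cuda_version.startswith(VERIFIED["cuda"]))

print(matches_verified("1.7.0+cu111", "11.1"))  # True
print(matches_verified("1.13.1", "11.7"))       # False
```

Pass in the values reported by `torch.__version__` and `torch.version.cuda`; if the check fails, install a matching wheel before running the training scripts.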

Installation

NOTE: Please follow the installation instructions of BONAI and of the early version of MMDetection it depends on.

Usage

  • Train on your own dataset:
bash tools/dist_train.sh configs/obm_seg_fintune/smlcdr_obm_pretrain.py 6 # train with ROAM
bash tools/dist_train.sh configs/obm_seg/obm_seg_b.py 6 # train without ROAM
  • Run inference on your dataset:
python tools/test_offset.py --config configs/obm_seg/obm_seg_b.py

WARNING: OUR TRAINING DOES NOT SUPPORT LORA-STYLE FINETUNING; THE BACKBONE IS TRAINED AS WELL. PLEASE SET `samples_per_gpu = 1` !!!

WARNING: PLEASE ALSO SET `samples_per_gpu = 1` DURING INFERENCE !!!
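In an MMDetection-style config this setting lives in the `data` dict; a minimal illustrative fragment (only `samples_per_gpu` is mandated by the warnings above, the other field is a tunable example):

```python
# Fragment of an MMDetection-style config: both training and inference
# must run with one sample per GPU.
data = dict(
    samples_per_gpu=1,   # required by OBM for training and inference
    workers_per_gpu=2,   # dataloader workers; tune freely
)
```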

  • Improve offset quality:
# uses the function fixangle()
## model = 'max' selects DNMS
## model = 'guassia_std' selects soft-DNMS
python tools/postprocess_offset.py
  • Visualize your results:
# two visualization functions are provided:
## 1: vis_3d() for relative height maps
## 2: vis_boundary_offset() for roof and footprint boundaries
python tools/visual_offset.py

# to visualize the results of LOFT in BONAI
python tools/visual_instance_seg.py
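For intuition only: in a single off-nadir image, all roof-to-footprint offsets point in roughly the same direction, so angle-fixing postprocessing can snap each predicted angle toward a scene-level estimate while keeping each offset's length. The sketch below is our own rough illustration of that idea; it is not the DNMS/soft-DNMS implementation in `tools/postprocess_offset.py`:

```python
import math

def fix_angles(offsets, weights=None, mode="max"):
    """Snap every offset's angle to one scene-level reference angle.
    mode='max'  -> angle of the highest-weight (default: longest) offset
    mode='mean' -> weighted circular mean of all angles
    Lengths are preserved; only directions change."""
    angles = [math.atan2(dy, dx) for dx, dy in offsets]
    lengths = [math.hypot(dx, dy) for dx, dy in offsets]
    w = weights if weights is not None else lengths
    if mode == "max":
        ref = angles[max(range(len(w)), key=w.__getitem__)]
    else:
        sx = sum(wi * math.cos(a) for wi, a in zip(w, angles))
        sy = sum(wi * math.sin(a) for wi, a in zip(w, angles))
        ref = math.atan2(sy, sx)
    return [(l * math.cos(ref), l * math.sin(ref)) for l in lengths]

# A slightly misaligned short offset is rotated onto the dominant direction.
print(fix_angles([(10.0, 0.0), (5.0, 0.5)], mode="max"))
```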

(back to top)

Workflow and Results

Our model simulates how a human annotates a footprint.

1. In the first stage, we input an image with prompts that indicate buildings' rough locations, using box prompts as an example:

2. OBM then outputs a roof segmentation and a roof-to-footprint offset for each prompt.

3. Finally, we drag the roof to its footprint via the offset.

  • We provide two kinds of operations: one directly produces footprints, and the other produces relative height maps.
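Step 3 is just a polygon translation; a minimal sketch (coordinates and helper name are illustrative, and the offset length is also what the relative height maps are built from):

```python
# Translating a roof polygon by its predicted roof-to-footprint offset
# yields the footprint polygon (pixel coordinates).
def drag_roof(roof_polygon, offset):
    dx, dy = offset
    return [(x + dx, y + dy) for x, y in roof_polygon]

roof = [(10.0, 10.0), (20.0, 10.0), (20.0, 18.0), (10.0, 18.0)]
footprint = drag_roof(roof, (-3.0, 5.0))
print(footprint)  # [(7.0, 15.0), (17.0, 15.0), (17.0, 23.0), (7.0, 23.0)]
```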

(back to top)

License

Distributed under the MIT License. See LICENSE.txt for more information.

(back to top)

Contact

(back to top)

Offset tokens are hard to train; they are very sensitive to the training settings. If you have any problems training the offset tokens, please contact me at likai211#mails.ucas.ac.cn or kaili37-c#my.cityu.edu.hk. I think my experience with failed trainings will help you train your model. You can also contact me about any building-related problem or collaboration.
