This repository contains supplementary material for our article "Improving Radar Human Activity Classification Using Synthetic Data with Image Transformation", published in MDPI Sensors as part of the Special Issue "Advances in Radar Sensors". In the article, we introduce RACPIT: Radar Activity Classification with Perceptual Image Transformation, a deep-learning approach to human activity classification that uses FMCW radar and is enhanced with synthetic data.
We use Range Doppler Maps (RDMs) as the basis for our input data. These can be either real data acquired with Infineon's Radar sensors for IoT or synthetic data generated from kinematic data with the following model:
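For orientation, here is a minimal NumPy sketch of how an RDM can be obtained from one raw FMCW frame: a range FFT over fast time followed by a Doppler FFT over slow time. The windowing and scaling here are simplifying assumptions; the repository's actual preprocessing may differ in these details:

```python
import numpy as np

def range_doppler_map(frame: np.ndarray) -> np.ndarray:
    """Compute a magnitude RDM from one FMCW frame of shape (chirps, samples)."""
    chirps, samples = frame.shape
    # 2D Hann window over slow time (chirps) and fast time (samples)
    window = np.hanning(chirps)[:, np.newaxis] * np.hanning(samples)[np.newaxis, :]
    # Range FFT over fast time; keep positive range bins only
    range_fft = np.fft.fft(frame * window, axis=1)[:, : samples // 2]
    # Doppler FFT over slow time, shifted so zero Doppler sits in the center
    rdm = np.fft.fftshift(np.fft.fft(range_fft, axis=0), axes=0)
    return np.abs(rdm)
```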
We further preprocess the RDMs by stacking them over time and summing over the Doppler and range axes to obtain range and Doppler spectrograms, respectively:
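A minimal sketch of this marginalization, assuming magnitude RDMs stacked along a leading time axis. Summing magnitudes rather than complex values matches the incoherent marginalization selected in the preprocessing commands below:

```python
import numpy as np

# Hypothetical stack of magnitude RDMs: (time, doppler, range)
rdms = np.abs(np.random.randn(128, 64, 96))

range_spectrogram = rdms.sum(axis=1)    # marginalize Doppler -> (time, range)
doppler_spectrogram = rdms.sum(axis=2)  # marginalize range   -> (time, doppler)

# Optionally convert to dB, matching --value "db" below
range_spectrogram_db = 20 * np.log10(range_spectrogram + 1e-12)
```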
We train our image transformation networks with an adapted version of Perceptual Losses for Real-Time Style Transfer and Super-Resolution.
Since we are working with radar data, we replace VGG16 as the perceptual network with our two-branch convolutional neural network from Domain Adaptation Across Configurations of FMCW Radar for Deep Learning Based Human Activity Classification.
If we train the image transformation networks with real data as input and synthetic data as ground truth, they learn a denoising behavior.
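As a rough sketch of the resulting objective, a feature-reconstruction loss in the spirit of Johnson et al. might look as follows. This assumes a frozen perceptual network exposing intermediate activations through a hypothetical `features()` accessor, which is not necessarily the interface used in this repository:

```python
import torch
import torch.nn.functional as F

def feature_reconstruction_loss(transformer, perceptual_net, x_real, y_synthetic):
    """Sketch of a perceptual (feature reconstruction) loss.

    `transformer` denoises real spectrograms; `perceptual_net` is the
    frozen classification CNN. `features()` is a hypothetical accessor
    returning a list of intermediate activations.
    """
    y_hat = transformer(x_real)
    with torch.no_grad():                        # targets need no gradient
        target_feats = perceptual_net.features(y_synthetic)
    pred_feats = perceptual_net.features(y_hat)  # gradients flow to transformer
    return sum(F.mse_loss(p, t) for p, t in zip(pred_feats, target_feats))
```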
The code is written in PyTorch and based on Daniel Yang's implementation of Perceptual loss.
Data preprocessing is heavily based on xarray. You can take a closer look at it in our example.
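For a flavor of why xarray helps here: named dimensions make the marginalization above self-documenting. The dimension names below are illustrative assumptions, not necessarily those used in the repository:

```python
import numpy as np
import xarray as xr

# Hypothetical labeled stack of magnitude RDMs
rdms = xr.DataArray(
    np.abs(np.random.randn(128, 64, 96)),
    dims=("time", "doppler", "range"),
)

range_spectrogram = rdms.sum(dim="doppler")  # axes selected by name
doppler_spectrogram = rdms.sum(dim="range")
```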
- Python 3.8
- PyTorch 1.7.0
- xarray
- NumPy
- Pandas
- Matplotlib
- CUDA 11.0 (for GPU training)
Radar data can be batch-preprocessed and stored for faster training:
$ python utils/preprocess.py --raw "/path/to/data/raw" --output "/path/to/data/real" --value "db" --marginalize "incoherent"
$ python utils/preprocess.py --raw "/path/to/data/raw" --output "/path/to/data/synthetic" --synthetic --value "db" --marginalize "incoherent"
After this, you can train the CNN that will serve as the perceptual network:
$ python main.py --log "cnn" train-classify --range --config "I" --gpu 0 --no-split --dataset "/path/to/data/synthetic"
Then you can train the image transformation networks:
$ python main.py --log "trans" train-transfer --range --config "I" --gpu 0 --visualize 5 --input "/path/to/data/real" --output "/path/to/data/synthetic" --recordings first --model "models/cnn.model"
And finally, you can test the whole pipeline:
$ python main.py test --range --config "I" --gpu 0 --visualize 10 --dataset "/path/to/data/real" --recordings last --transformer "models/trans.model" --model "models/cnn.model"
If you use RACPIT's code or refer to the publication in your research, please cite our work as follows:
@Article{s22041519,
  AUTHOR = {Hernang{\'o}mez, Rodrigo and Visentin, Tristan and Servadei, Lorenzo and Khodabakhshandeh, Hamid and Sta{\'n}czak, S{\l}awomir},
  TITLE = {Improving Radar Human Activity Classification Using Synthetic Data with Image Transformation},
  JOURNAL = {Sensors},
  VOLUME = {22},
  YEAR = {2022},
  NUMBER = {4},
  ARTICLE-NUMBER = {1519},
  URL = {https://www.mdpi.com/1424-8220/22/4/1519},
  ISSN = {1424-8220},
  DOI = {10.3390/s22041519}
}