Jie Xu, Tao Du, Michael Foshey, Beichen Li, Bo Zhu, Adriana Schulz, Wojciech Matusik
ACM Transactions on Graphics, 38(4): 42:1-42:12 (SIGGRAPH), 2019
This repository implements the code for the paper Learning to Fly: Computational Controller Design for Hybrid UAVs with Reinforcement Learning (SIGGRAPH 2019).
- `SimulationUI`: rigid body simulation to visualize the controller's performance. (C++)
- `Training`: code to train the controller with our reinforcement learning method. (Python)
- `Firmware`: modified Ardupilot firmware code to implement our controller on real hardware. (C++)
## SimulationUI

- The code visualizes the controller's performance in a realistic simulation environment and provides keyboard interaction to control the hybrid UAV.
- The code has been tested on `Ubuntu 16.04` and `Ubuntu 18.04`.
- Compiling the code requires `cmake >= 3.1.0`. Please install it by following the instructions in this link before compiling the code.
- Compile the code:
  - Enter the `SimulationUI` folder.
  - Create a build folder and enter it:

    ```
    mkdir build; cd build;
    ```

  - Run cmake and make to compile the code:

    ```
    cmake -DCMAKE_BUILD_TYPE=Release ../
    make -j4
    ```

  - If cmake fails, please install the missing libraries listed in the error message.
- Run the code.
- Keyboard commands:
  - We provide a set of keyboard commands so you can easily control the hybrid UAV:
    - `p`: start/pause the simulation. You need to press `p` after the window is created.
    - `w`/`s`: increase/decrease velocity z (vertical velocity; a negative value means going up).
    - `a`/`d`: increase/decrease the target yaw angle.
    - `up`/`down`: increase/decrease velocity x (forward velocity).
    - `left`/`right`: increase/decrease velocity y (side velocity).
  - You can use the mouse to zoom in/out (scroll wheel) and to change the camera view (press and move).
  - For more keyboard operations, please refer to the code.
  - Note that there is a dead zone for velocity z: the controller is not triggered until velocity z is smaller than -0.5 (see the C++ sketch after this list).
- Tail-sitter: we also provide the config file and the pretrained controller for a tail-sitter. You can modify the code in `DesignViewer.cpp` to load its config file, and replace the controller parameter file `projects/copter_simulation/Controller/NN/NNModelParameters.h` with its controller file (which you can find in `projects/copter_simulation/Controller/NN/TailSitter/`).
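
To make the dead-zone note above concrete, here is a minimal C++ sketch of the idea. It is an illustration only, not code from `SimulationUI`: the names `kDeadZoneVz` and `apply_dead_zone` are hypothetical, and only the -0.5 threshold and the sign convention (negative velocity z means going up) come from the list above.

```cpp
#include <iostream>

// Hypothetical sketch of the velocity-z dead zone described above.
// Only the -0.5 threshold and the sign convention (negative z = up)
// are taken from this README; all names here are made up.
const double kDeadZoneVz = -0.5;

// Commands inside the dead zone are treated as zero, so the vertical
// controller only engages once the command passes the threshold.
double apply_dead_zone(double target_vz) {
    return (target_vz < kDeadZoneVz) ? target_vz : 0.0;
}

int main() {
    std::cout << apply_dead_zone(-0.3) << "\n";  // 0    (inside the dead zone)
    std::cout << apply_dead_zone(-1.2) << "\n";  // -1.2 (controller engaged)
    return 0;
}
```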
## Training

- The code has been tested on `Ubuntu 16.04`.
- The code relies on a modified version of OpenAI baselines at `[email protected]:eanswer/openai_baselines.git`. To run the code, please first clone that repository:

  ```
  git clone [email protected]:eanswer/openai_baselines.git
  ```

  and then follow its `README.md` to install baselines.
- We provide two models in the `data` folder.
- To start training a controller for a model, for example QuadPlane, in one thread:

  ```
  python run.py --model QuadPlane
  ```

  or in multiple threads (e.g. 4):

  ```
  mpirun -np 4 python run.py --model QuadPlane
  ```

  You can also add `--visualize` to visualize the training in `Mujoco`. (If you don't have `Mujoco` installed, this option doesn't work.)
- The trained models are saved in the `results` folder.
- To test a trained model, for example `model_afterIter_400.ckpt`, you can run:

  ```
  python run.py --model QuadPlane --controller model_afterIter_400 --play
  ```
- To save a trained model in a C++ format that can be plugged into the `SimulationUI` code or the `ardupilot` firmware code (see the sketch after this list for what such a header might look like), run:

  ```
  python save.py --controller model_afterIter_400
  ```
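
For orientation, the exported C++ format is a header of network parameters in the spirit of `NNModelParameters.h`. The snippet below is a hand-written sketch of what such a generated header could contain, assuming a small fully connected policy; the namespace, array names, layer sizes, and values are all hypothetical, not the actual output of `save.py`.

```cpp
// Hypothetical sketch of a generated parameter header in the spirit of
// NNModelParameters.h. All names, sizes, and values below are made up
// for illustration; the real header is produced by save.py.
#pragma once

namespace nn_policy {

// Layer sizes of a small fully connected policy network.
constexpr int kInputDim  = 3;   // e.g. velocity errors (x, y, z)
constexpr int kHiddenDim = 4;
constexpr int kOutputDim = 2;   // e.g. actuator commands

// Row-major weight matrices and bias vectors, dumped by the exporter.
constexpr float kW1[kHiddenDim][kInputDim] = {
    { 0.12f, -0.34f,  0.05f},
    {-0.21f,  0.08f,  0.44f},
    { 0.30f,  0.17f, -0.26f},
    {-0.02f, -0.11f,  0.09f},
};
constexpr float kB1[kHiddenDim] = {0.01f, -0.03f, 0.00f, 0.02f};

constexpr float kW2[kOutputDim][kHiddenDim] = {
    { 0.25f, -0.18f,  0.07f, 0.33f},
    {-0.40f,  0.12f,  0.29f, 0.06f},
};
constexpr float kB2[kOutputDim] = {0.00f, 0.01f};

}  // namespace nn_policy
```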
## Firmware

- The firmware code is based on the Ardupilot codebase and has been tested on Pixhawk 1.0.
- First, plug the model description file generated by the training code (it should be named `AP_MotorsPolicyDefinition.h`) into the firmware codebase (a sketch of how such a header might be evaluated on board follows this list):

  ```
  cp AP_MotorsPolicyDefinition.h ../../Firmware/ardupilot/libraries/AP_Motors/
  ```
- Connect the Pixhawk 1.0 to your computer with a USB cable.
- Enter the `ArduCopter` folder: `cd Firmware/ardupilot/ArduCopter`.
- Compile and upload the firmware code:

  ```
  make px4-v2-upload
  ```

- Please refer to the official Ardupilot documentation for more details.
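
To connect the two sketches above: once a parameter header is in place, the firmware needs a forward pass over those weights to turn state inputs into actuator commands. The standalone C++ sketch below shows one way this could look, reusing the made-up `nn_policy` names from the earlier sketch; it assumes ReLU hidden units and is not the actual Ardupilot integration.

```cpp
#include <algorithm>
#include <array>
#include <iostream>

// Standalone sketch of evaluating the hypothetical nn_policy parameters
// shown earlier. This is not the actual Ardupilot integration; it only
// illustrates a plain fully connected forward pass.
namespace nn_policy {
constexpr int kInputDim = 3, kHiddenDim = 4, kOutputDim = 2;
// Weights/biases as in the earlier sketch (zero-filled here for brevity).
constexpr float kW1[kHiddenDim][kInputDim] = {};
constexpr float kB1[kHiddenDim] = {};
constexpr float kW2[kOutputDim][kHiddenDim] = {};
constexpr float kB2[kOutputDim] = {};
}  // namespace nn_policy

std::array<float, nn_policy::kOutputDim>
evaluate_policy(const std::array<float, nn_policy::kInputDim>& input) {
    using namespace nn_policy;
    // Hidden layer: affine transform followed by ReLU (the activation
    // is an assumption; the trained network may use another one).
    float hidden[kHiddenDim];
    for (int i = 0; i < kHiddenDim; ++i) {
        float acc = kB1[i];
        for (int j = 0; j < kInputDim; ++j) acc += kW1[i][j] * input[j];
        hidden[i] = std::max(acc, 0.0f);
    }
    // Output layer: affine transform producing the actuator commands.
    std::array<float, kOutputDim> output;
    for (int i = 0; i < kOutputDim; ++i) {
        float acc = kB2[i];
        for (int j = 0; j < kHiddenDim; ++j) acc += kW2[i][j] * hidden[j];
        output[i] = acc;
    }
    return output;
}

int main() {
    const auto cmd = evaluate_policy({0.1f, -0.2f, 0.0f});
    std::cout << cmd[0] << " " << cmd[1] << "\n";  // all zeros with stub weights
    return 0;
}
```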
## Citation

If you find our paper or code useful, please consider citing:
```bibtex
@article{xu2019learning,
  title={Learning to fly: computational controller design for hybrid UAVs with reinforcement learning},
  author={Xu, Jie and Du, Tao and Foshey, Michael and Li, Beichen and Zhu, Bo and Schulz, Adriana and Matusik, Wojciech},
  journal={ACM Transactions on Graphics (TOG)},
  volume={38},
  number={4},
  pages={1--12},
  year={2019},
  publisher={ACM New York, NY, USA}
}
```