Single Camera Person Tracking
To run only the person tracking module on a single camera, run the following:
```bash
roslaunch tracking single_camera_tracking_node.launch enable_people_tracking:=true enable_object:=false enable_pose:=false sensor_type:=<SENSOR_TYPE>
```
where you should replace `<SENSOR_TYPE>` with `kinect2`, `zed`, `realsense`, or `azure` depending on your image (see below for additional notes on the Zed and Azure). Omitting the `sensor_type` parameter will default to `kinect2`. Please note that the Zed, RealSense, and Azure are supported only on the newest version of OpenPTrack.
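For example, to run only person tracking on a RealSense, substitute the sensor type directly:
```bash
roslaunch tracking single_camera_tracking_node.launch enable_people_tracking:=true enable_object:=false enable_pose:=false sensor_type:=realsense
```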
By default, all the modules are set to `true`. For example, you can launch all the modules with a Kinect v2 by running:
```bash
roslaunch tracking single_camera_tracking_node.launch enable_pose:=true enable_people_tracking:=true enable_object:=true
```
or simply:
```bash
roslaunch tracking single_camera_tracking_node.launch
```
You can activate or deactivate modules by setting the corresponding flags to `true` or `false`, as in the example below. Note: depending on your graphics processor, running pose annotation alongside the other two modules may be too demanding for your node.
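For instance, to keep person and object tracking but skip pose annotation on a GPU-constrained node:
```bash
roslaunch tracking single_camera_tracking_node.launch enable_people_tracking:=true enable_object:=true enable_pose:=false
```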
Initial tests have found that our default person detection algorithm does not work well with the Zed camera. While we are working on this issue, we have added an option to perform YOLO-based person tracking, which filters object detections to look for people. To use YOLO-based person tracking with a single camera Zed, set `yolo_based_people_tracking:=true`:
```bash
roslaunch tracking single_camera_tracking_node.launch enable_people_tracking:=true enable_object:=false enable_pose:=false yolo_based_people_tracking:=true sensor_type:=zed
```
Note that `enable_people_tracking` should also be set to `true`.
OpenPTrack v2.2 provides preliminary support for the Kinect Azure. Currently, only single camera person tracking is available; the object and pose modules are not supported for this sensor.
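For example, person tracking on an Azure uses the same launch command as above with the unsupported modules disabled:
```bash
roslaunch tracking single_camera_tracking_node.launch enable_people_tracking:=true enable_object:=false enable_pose:=false sensor_type:=azure
```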
When running person tracking, the ground plane will be detected automatically. A reflective or dark-colored floor can sometimes cause tracking to work poorly or not at all; this can be solved by setting the ground plane manually. See here for more information.
Once the command is running, an RViz window will open automatically. RViz is used for visualizing data in OPT; see here for instructions on visualizing data in OPT. You can choose your topics and subsequently save the configuration. OPT v2 comes with a default RViz configuration: SingleCameraTracking.rviz.
You also have the option to view the image as captured by the sensor by checking the SimpleImage option.
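If you want to reload the saved configuration outside of the launch file, a minimal sketch is below; it assumes SingleCameraTracking.rviz lives in a conf/ folder inside the tracking package, so adjust the path to match your installation:
```bash
# Launch rviz directly with the default OPT single-camera configuration.
# The conf/ location is an assumption; point -d at wherever your copy of
# SingleCameraTracking.rviz is stored.
rosrun rviz rviz -d $(rospack find tracking)/conf/SingleCameraTracking.rviz
```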