WebVMT Video Pose Use Case #2
One way to do the right thing for any relevant application domain or standard would be to follow the same path as we did for defining the coordinate systems -
Another possibility: recognized values could be in a codelist maintained on the OGC definition service. In GeoPose sequences, the extra verbosity is not a factor because the information only appears once in the series or stream header.
Many thanks for your feedback here and in the GeoPose SWG meeting on Friday (19/2/21). Both implementation options are feasible, though the latter has advantages in terms of accuracy, modularity for live streaming, and brevity.
Consensus seems to be:
The proposal for pose in WebVMT is:
No further discussion, so closing.
Question: How can GeoPose be integrated with timed video metadata to record camera location and orientation for moving images on the web?
Background
Moving object trajectories can be represented as WebVMT paths by recording location periodically and using interpolation to calculate intermediate values at any instant during the media timeline, a design aligned with OGC Moving Features. A camera pose feature has been proposed to extend this process to calculate GeoPose by recording camera orientation details, based on discussion in the Spatial Data on the Web meeting on 25 June 2019 in Leuven.
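The interpolation step described above can be sketched as follows. This is an illustrative example only, not part of the WebVMT specification: the sample tuples and function names are assumptions, and real trajectories would use geodesic rather than naive linear interpolation over long intervals.

```python
def lerp(a: float, b: float, t: float) -> float:
    """Linear interpolation between a and b for t in [0, 1]."""
    return a + (b - a) * t

def interpolate_location(sample_a, sample_b, time):
    """Estimate (lat, lng) at `time`, given two (time, lat, lng) samples
    that bracket it on the media timeline."""
    t_a, lat_a, lng_a = sample_a
    t_b, lat_b, lng_b = sample_b
    t = (time - t_a) / (t_b - t_a)  # fraction of the interval elapsed
    return lerp(lat_a, lat_b, t), lerp(lng_a, lng_b, t)

# Samples at 0 s and 10 s on the media timeline; query at 2.5 s.
loc = interpolate_location((0.0, 51.0, -1.0), (10.0, 51.1, -1.2), 2.5)
```

For short sample intervals, as in the dashcam example below, linear interpolation in latitude/longitude is usually an adequate approximation.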
Use Case
Consider the interval between two consecutive sample times A and B. A video camera moves from location A with a known orientation/pose to location B with another known pose. How can this be represented using GeoPose in a way that allows intermediate values to be determined during the interval?
There are (at least) two possible approaches.
Both approaches have pros and cons depending on the specific details of the use case, such as whether real-time streaming is required.
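One common way to determine intermediate orientations between poses A and B is spherical linear interpolation (slerp) of unit quaternions. The sketch below is a plain-Python illustration under the assumption that each pose's orientation is available as a unit quaternion (w, x, y, z); it is not taken from the WebVMT or GeoPose drafts.

```python
import math

def slerp(q0, q1, t):
    """Spherical linear interpolation between unit quaternions (w, x, y, z)."""
    dot = sum(a * b for a, b in zip(q0, q1))
    # Flip one quaternion if needed so we interpolate along the shortest arc.
    if dot < 0.0:
        q1 = tuple(-c for c in q1)
        dot = -dot
    if dot > 0.9995:
        # Nearly parallel: fall back to normalised linear interpolation.
        q = tuple(a + (b - a) * t for a, b in zip(q0, q1))
        n = math.sqrt(sum(c * c for c in q))
        return tuple(c / n for c in q)
    theta = math.acos(dot)
    s0 = math.sin((1 - t) * theta) / math.sin(theta)
    s1 = math.sin(t * theta) / math.sin(theta)
    return tuple(s0 * a + s1 * b for a, b in zip(q0, q1))

# Identity to a 90-degree yaw; halfway should be a 45-degree yaw.
half = slerp((1.0, 0.0, 0.0, 0.0),
             (math.sqrt(0.5), 0.0, 0.0, math.sqrt(0.5)), 0.5)
```

Slerp gives constant angular velocity across the interval, which matters when samples are sparse relative to the camera's rotation rate.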
Examples
Front-facing dashcam
The dashcam calculates location from GNSS (global navigation satellite system) and heading from a compass, and these data can be captured in timed video metadata. Pose only needs to be calculated in 2D as the vehicle is on the ground, and low precision is sufficient as the camera has a wide field of view.
Drone with gimballed camera
The drone (unmanned aerial vehicle) calculates location from GNSS, height from an altimeter, orientation from a compass and gyro, and camera orientation from the gimbal controller. 3D pose is required as the camera is airborne, and more precision is needed due to its zoom capability, which can reduce the field of view.
Related Issues
i. Location may be sampled regularly every few seconds (<1Hz);
ii. Camera gimbals may move quickly and sporadically, so their pose can remain unchanged for many minutes and then change rapidly within a few tens of milliseconds (~10-100Hz);
iii. Image stabilisation systems may produce pose data at millisecond rates or faster (>1000Hz).