Location Estimation

Kshitj Kabeer edited this page Jan 18, 2020 · 3 revisions

Once the object detection algorithm produces a vector containing the data of all valid objects, the height measurement from the LiDAR sensor and the camera model are used to build matrices that transform coordinates from the image frame into the camera frame (`scaleUp * invCamMatrix` in code)[1], then into the MAV frame (`camToQuad` in code), and finally into the global frame (`quadToGlob` in code). The GeographicLib library then converts these global coordinates into latitude-longitude. The resulting coordinates are relayed to the ground station for map generation and also shared with the other MAVs.
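The chain of frame transforms above can be sketched as follows. Note that the intrinsics, the mounting rotation, and the yaw angle below are illustrative placeholders, not the project's actual calibration values:

```python
import numpy as np

# Hypothetical camera intrinsics (fx, fy, cx, cy) -- placeholder values,
# not the real calibration.
cam_matrix = np.array([[600.0,   0.0, 320.0],
                       [  0.0, 600.0, 240.0],
                       [  0.0,   0.0,   1.0]])
inv_cam_matrix = np.linalg.inv(cam_matrix)

# Height above ground from the LiDAR; with the quad nearly horizontal,
# scaleUp reduces to height * identity (see note [1]).
height = 10.0
scale_up = height * np.eye(3)

# Pixel coordinates of a detected object, in homogeneous form.
pixel = np.array([400.0, 300.0, 1.0])

# Image frame -> camera frame: scaleUp * invCamMatrix.
p_cam = scale_up @ inv_cam_matrix @ pixel

# Camera frame -> MAV frame (camToQuad): an assumed fixed mounting
# rotation for a downward-facing camera.
cam_to_quad = np.array([[ 0.0, -1.0,  0.0],
                        [-1.0,  0.0,  0.0],
                        [ 0.0,  0.0, -1.0]])
p_quad = cam_to_quad @ p_cam

# MAV frame -> global/local-ENU frame (quadToGlob): here just a yaw
# rotation; a full implementation would use the quad's full attitude.
yaw = np.deg2rad(30.0)
quad_to_glob = np.array([[np.cos(yaw), -np.sin(yaw), 0.0],
                         [np.sin(yaw),  np.cos(yaw), 0.0],
                         [        0.0,          0.0, 1.0]])
p_glob = quad_to_glob @ p_quad
```

`p_glob` is the object's position in a local frame centered on the MAV; the final step (lat-long conversion) is handled by GeographicLib.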

[1] NOTE: This is an approximation that holds when the quad is nearly horizontal. In that case the scaleUp matrix reduces to a diagonal matrix with each diagonal entry equal to the quad's height above the ground.
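For reference, the lat-long conversion step (solved exactly by GeographicLib's geodesic routines) can be approximated for short, MAV-scale offsets with a flat-earth model. This is a hedged sketch, not the project's actual conversion code:

```python
import math

R_EARTH = 6378137.0  # WGS-84 equatorial radius in metres


def local_to_latlon(lat0_deg, lon0_deg, east_m, north_m):
    """Flat-earth approximation: convert a local east/north offset from a
    known origin (lat0, lon0) into latitude-longitude. GeographicLib solves
    this 'direct' geodesic problem exactly; this suffices for small offsets.
    """
    dlat = math.degrees(north_m / R_EARTH)
    dlon = math.degrees(east_m / (R_EARTH * math.cos(math.radians(lat0_deg))))
    return lat0_deg + dlat, lon0_deg + dlon
```

At the equator, one degree of latitude corresponds to roughly 111.32 km, so a 111319.49 m northward offset from (0, 0) maps to approximately (1.0, 0.0).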
