Hey everyone! Nice to see a community-maintained SLAM project with active discussions :)
TL;DR:
Trying SLAM on a mobile-phone video. Keypoints are detected, but no landmarks, so no map is generated.
Any tips or ideas? What is the best approach for debugging?
Description:
I am just diving into the code of this repository. I have read through the documentation and many of the discussions, and the tutorial example runs on all viewers (tested with the aist_entrance_hall examples).
Now I'd like to get the SLAM system running on a video file recorded on my mobile phone, a Samsung Galaxy A23.
shelf-iridescence_key_point_cut_sized.mp4
When running the video with estimated intrinsic parameters, no map was created. So I followed the OpenCV Python camera-calibration tutorial and obtained the correct focal length, camera center, and distortion parameters.
Current YAML Config:
As seen in Feature extraction and projections #411, there are several parameters that influence keypoints and landmarks. I tried tuning the parameters listed under Config.Mapping, as well as using a smaller video size.
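For context, the parameters in question live in a YAML config of roughly this shape. Every value below is an illustrative placeholder in the style of the tutorial configs, not my actual settings, and some key names may differ between versions:

```yaml
# Illustrative template only -- all values are placeholders.
Camera:
  name: "Galaxy A23"
  setup: "monocular"
  model: "perspective"
  fx: 1000.0        # from calibration
  fy: 1000.0
  cx: 640.0
  cy: 360.0
  k1: 0.0           # distortion coefficients from calibration
  k2: 0.0
  p1: 0.0
  p2: 0.0
  k3: 0.0
  fps: 30.0
  cols: 1280
  rows: 720
  color_order: "RGB"

Feature:
  max_num_keypoints: 2000
  scale_factor: 1.2
  num_levels: 8
  ini_fast_threshold: 20
  min_fast_threshold: 7

Mapping:
  baseline_dist_thr_ratio: 0.02   # one of the knobs I experimented with
```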
While keypoints are found in the video, they do not appear stable enough to produce landmarks and a map.
The keypoints also seem oddly shifted to the right, even with the calibrated camera.
Questions:
Does anyone have an idea why the mapping fails and what needs to change for it to work?
Is there a good approach to debugging such an issue?
(Currently I am exploring the code in run_video_slam and collecting information about the keypoints and landmarks.)