Tensorflow and the intel camera #9430
Hi @garceling The RealSense SDK has a TensorFlow compatibility wrapper for Python and examples for it that cover object detection and querying the XYZ coordinates of detected objects. https://github.com/IntelRealSense/librealsense/tree/master/wrappers/tensorflow
Hi, I was following the link you sent above. I am a beginner, so I am confused about which example shows the object detection. I need it to be able to detect the object and also label it.
Object detection is introduced in Part 1 of the examples. I do not have information on label creation with the SDK's TensorFlow wrapper though. If labels are a vital requirement, there is alternatively a guide in the link below for using a RealSense D435 camera with TensorFlow to perform custom object detection with a point cloud and then label it. https://github.com/jediofgever/PointNet_Custom_Object_Detection The label creation guidance for this tutorial is in the article below: https://github.com/jediofgever/PointNet_Custom_Object_Detection/blob/master/PREPARE_DATA.md
Hi, I finally got the object detection and distance measurement to work for the RealSense depth camera. However, it can only measure distance accurately and recognize objects up to 30 m. Is this an issue with the program/code I am using, or can the Intel D435 only have a range of about 30 m?
The D435 camera model's official depth sensing maximum range is 10 meters, though accuracy starts to noticeably drift beyond 3 meters from the camera. This is less of a problem when analysis is based on RGB images, which do not have that depth sensing limitation. The D455 model has twice the accuracy over distance of the D435, and so has the same accuracy at 6 meters from the camera that the D435 / D435i has at 3 meters. The D455 has a minimum depth sensing distance of 0.4 meters though, compared to the 0.1 m of the D435/i, so it is less suited to close-range depth sensing.
Can PyTorch be used for object detection?
Hello @Larry0607 The tutorial in the link below about object detection with PyTorch and YOLOv3 may be helpful to you. Links to further guides can be found at the bottom of the article.
Hello @Larry0607 Do you require further assistance with this case, please? Thanks!
Hi @garceling The resources in the link below will hopefully be helpful for answering your question.
Hi @garceling Do you require further assistance with this case, please? Thanks!
Hi, I was playing around with the Intel camera, and one thing I noticed was that the camera provides an accurate measurement of the distance of objects located at the center. However, for objects located towards the edge of the camera's field of view, the distance measurement becomes inaccurate. Does this error have to do with my code, or is it related to the Intel camera?
If you are using the get_distance instruction in Python then the effect that you describe is a known phenomenon. More information about this is in the link below.
Hi, thank you. I was also wondering if it is possible to implement object tracking on a live video with the Intel RealSense camera.
The article from the official RealSense blog in the link below about different kinds of tracking is a useful introduction to the subject. https://www.intelrealsense.com/types-of-tracking-overview/ After that article, the one in the link below about object tracking that provides example Python code would be worth looking at. https://learnopencv.com/object-tracking-using-opencv-cpp-python/ Another approach would be to use a neural network that is trained to identify particular objects on an RGB video image and keep track of them, like the YOLOv4 and TensorFlow example for Python in the link below. https://github.com/LeonLok/Multi-Camera-Live-Object-Tracking Once you decide upon a preferred approach to object tracking, I will be happy to assist with further questions about that subject if you have them. Good luck!
Hi @garceling Do you require further assistance with this case, please? Thanks!
Yes, sorry to bother you again, but in response to the inaccuracy of the get_distance function you linked me to this solution: #7395 (comment) I'm still confused. To fix the inaccuracy, do I just add the rs.align code that they have? Sorry, I am super new at this.
It's no trouble at all. The RealSense user in that link was suggesting to use the script that is beneath the line in that comment that says "Please see below my final stream config code that makes me not receive this error". If you need an example of Python depth-to-color alignment code that you can be confident is correct though, you could try the RealSense SDK's own official align-depth2color.py example script.
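For reference, a minimal sketch of the depth-to-color alignment approach along the lines of the SDK's align-depth2color.py example. The function name and stream settings are illustrative, and pyrealsense2 plus a connected camera are assumed:

```python
def read_aligned_distance(pixel_x, pixel_y):
    """Start a stream, align depth to the color frame, and read one
    distance value (in meters) at a color-image pixel.

    Requires the pyrealsense2 package and a connected RealSense camera;
    the 640x480 @ 30 FPS settings below are just common defaults.
    """
    import pyrealsense2 as rs  # imported here so the module loads without the SDK

    pipeline = rs.pipeline()
    cfg = rs.config()
    cfg.enable_stream(rs.stream.depth, 640, 480, rs.format.z16, 30)
    cfg.enable_stream(rs.stream.color, 640, 480, rs.format.bgr8, 30)
    pipeline.start(cfg)
    try:
        # Map the depth frame onto the color frame so depth pixels and
        # color pixels correspond, which is what fixes the edge inaccuracy.
        align = rs.align(rs.stream.color)
        frames = align.process(pipeline.wait_for_frames())
        depth = frames.get_depth_frame()
        return depth.get_distance(pixel_x, pixel_y)
    finally:
        pipeline.stop()
```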
Okay, thank you. I was wondering if it is possible to feed a pre-recorded video into the Intel camera. I want it to be able to measure the distance of objects not just in real-time, but also through a pre-recorded video.
You can record camera data into a bag file, which is like a video recording of camera data, and the RealSense SDK treats it as though it is a live camera feed.
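A short sketch of recording a bag file from a script, using the SDK's enable_record_to_file config option. The function name and frame-rate assumption are illustrative; pyrealsense2 and a connected camera are required:

```python
def record_bag(path, seconds=5):
    """Record the camera's streams into a .bag file that the SDK can
    later replay as though it were a live camera.

    Requires pyrealsense2 and a connected RealSense camera.
    """
    import pyrealsense2 as rs  # imported here so the module loads without the SDK

    pipeline = rs.pipeline()
    cfg = rs.config()
    cfg.enable_record_to_file(path)  # everything streamed is written to the bag
    pipeline.start(cfg)
    try:
        # Pull frames for roughly `seconds` seconds, assuming ~30 FPS.
        for _ in range(int(seconds * 30)):
            pipeline.wait_for_frames()
    finally:
        pipeline.stop()
```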
Oh okay. So is there a way to convert an mp4 file into a bag file then?
You could read an mp4 video file into OpenCV, which the RealSense SDK is fully compatible with, and access the frames that way.
Hi, I just recorded a bag file. How do I edit my code, so that instead of taking a live stream, it will work with a bag file instead?
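As a rough illustration (function name is hypothetical; the opencv-python package is assumed), reading an mp4 frame by frame with OpenCV might look like this. Note that an mp4 carries no depth data, so only RGB-based detection would apply to those frames:

```python
def iter_video_frames(path):
    """Yield successive frames from a video file (e.g. an mp4) via OpenCV.

    Requires the opencv-python package. Each yielded frame is a NumPy
    BGR image that can be fed to an RGB-based object detector.
    """
    import cv2  # imported here so the module loads without OpenCV installed

    cap = cv2.VideoCapture(path)
    try:
        while True:
            ok, frame = cap.read()
            if not ok:      # end of file or a read error
                break
            yield frame
    finally:
        cap.release()
```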
Programs that use live-streaming (including the SDK sample programs) can be modified to use a bag file as their data source instead of a live camera by adding the cfg.enable_device_from_file instruction to a script. An example of a Python version of this adaptation is below:
You could alternatively store the bag's path in a variable:
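Both variants might be sketched as follows (the function name and the recording.bag path are illustrative; pyrealsense2 is assumed). Passing the path in as a variable keeps the script reusable for different recordings:

```python
def start_from_bag(bag_path="recording.bag"):
    """Start an SDK pipeline that replays a recorded bag file instead of
    opening a live camera.

    Requires pyrealsense2; the default bag path is just an example.
    """
    import pyrealsense2 as rs  # imported here so the module loads without the SDK

    pipeline = rs.pipeline()
    cfg = rs.config()
    # The only change from a live-stream script: point the config at the file.
    cfg.enable_device_from_file(bag_path)
    pipeline.start(cfg)
    return pipeline  # wait_for_frames() etc. now read from the recording
```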
Hi @garceling Do you require further assistance with this case, please? Thanks!
Hi, I just have one more question. I am trying to measure the angle of an object from the Intel camera. I found this code online:
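(The code referred to above was not preserved in this thread.) For what it's worth, one common way to get the horizontal angle of a detected object uses the pinhole-camera relation tan(angle) = (pixel_x - ppx) / fx, where ppx (principal point) and fx (focal length in pixels) come from the stream intrinsics, e.g. profile.as_video_stream_profile().get_intrinsics() in pyrealsense2. A small self-contained sketch with made-up intrinsic values, not necessarily the code the user found:

```python
import math

def horizontal_angle_deg(pixel_x, ppx, fx):
    """Angle in degrees between the optical axis and the ray through a pixel,
    from the pinhole model: tan(angle) = (pixel_x - ppx) / fx.

    pixel_x : horizontal pixel coordinate of the detected object
    ppx     : principal point x (from the camera intrinsics)
    fx      : focal length in pixels (from the camera intrinsics)
    """
    return math.degrees(math.atan2(pixel_x - ppx, fx))

# Illustrative values (ppx=320, fx=600): a pixel at the principal point
# is at 0 degrees; pixels to the right give positive angles.
```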
Hi @garceling Do you require further assistance with this case, please? Thanks!
I ran the first program, object detection.py, and got the following error. I checked my TensorFlow installation for errors. I would like to ask your opinion!
Hi @666tua tensorflow/tensorflow#57375 suggests running the command below before running your code in order to resolve the error in PredictCost().
Issue Description
Hi, I have spent weeks trying to do object detection and distance measurement with the Intel camera. Essentially, the camera will detect and label an object and measure its distance from the camera. The program must use TensorFlow Lite for object detection. Has anyone done a similar project or found anything online that could help me? I am stuck and desperately need help. I also cannot find any code online for real-time object detection with the Intel camera and TensorFlow. Does anyone have any ideas on how to get that to work? Thank you, and I appreciate any help.