Problems with tracker.update_with_detections(detections) #1215
Comments
Hi @CodingMechineer 👋 Let's do one quick test - does installing
@CodingMechineer, we accidentally shipped a tracking bug in
@CodingMechineer Could you share with us the exact version of the model and the video file you are using?
@SkalskiP Sure! Please let me know if there is an issue on my end.
Hi @CodingMechineer 👋 The tracker uses detection overlap and motion model prediction to estimate which detections represent the same object in sequential frames. It then filters out what it can't match. While the details are a bit complicated, a quick way to influence the result is to increase the object area shown to the tracker. So, my quick suggestion: check if padding the boxes solves your problem. That means, insert a padding step before passing the detections to the tracker. Here's how it looks on my end. This way, all holes are detected, even after tracking. Does that solve your problem? 😉
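A minimal sketch of that padding idea (not the exact snippet from this comment), assuming a supervision version that exposes `sv.pad_boxes`; the pixel values are made up and would need tuning:

```python
import supervision as sv

tracker = sv.ByteTrack()

def update_tracker_with_padded_boxes(
    detections: sv.Detections, px: int = 20, py: int = 20
) -> sv.Detections:
    # Enlarge every box by a few pixels on each side so consecutive
    # detections of the same fast-moving object overlap more before matching.
    detections.xyxy = sv.pad_boxes(xyxy=detections.xyxy, px=px, py=py)
    return tracker.update_with_detections(detections)
```

The padding here only serves the matching step; if the enlarged coordinates shouldn't end up in the annotated output, pad a copy of the detections instead of mutating them in place.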
That's unfortunate. Here are the next steps to try:
Are you always running this on videos, or on a stream too? If both, I wonder if it performs similarly on the live stream and on the video of the same stream.
@LinasKo, do you have any idea why this happens?
@SkalskiP, no. I dug for an hour or so, plotted sequential detections (there's typically >50% overlap, yet they disappear). I played with some values, but I'd need to plot/print out steps of the algorithm to learn how it sees the world.
I may run this on a stream in the future. Currently, I only run it on video files. To do the same task, I also tried the Ultralytics library. That works completely fine, so I am continuing with that. The code looks something like this:
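A minimal sketch of such an Ultralytics tracking loop (not the author's exact code), assuming a `best.pt` weights file and the built-in `bytetrack.yaml` tracker config:

```python
from ultralytics import YOLO

model = YOLO("best.pt")  # placeholder weights file

# persist=True keeps track IDs across frames; stream=True yields results lazily.
for result in model.track(
    source="video.mp4", tracker="bytetrack.yaml", persist=True, stream=True
):
    boxes = result.boxes
    if boxes.id is not None:
        print(boxes.id.int().tolist(), boxes.cls.tolist(), boxes.conf.tolist())
```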
Maybe this can help you. Please let me know if I can do anything else for you.
@CodingMechineer btw if you use
@LinasKo and @CodingMechineer, is this issue still active?
Yup, we'll need to look at this in the future.
Hi @CodingMechineer, @SkalskiP, and @LinasKo. I took a look at it with @CodingMechineer's code and it seems like the tracker is working as expected. It unfortunately fails for @CodingMechineer because the motion predictor (Kalman filter) in the tracker uses the first 2 frames of a track to determine the speed and direction of an object, so for the first association between the initial detection frame and the second detection frame the tracker uses the overlap between the two frames, NOT the overlap between the prediction and the second frame.

In this specific video, the objects move very quickly, so for the first 2 frames of some tracks there is almost no overlap (the ones that are not tracked are <30%; it needs to be >30%), meaning no track gets established. The tracker must then start over with an entirely new track for that object because it could not establish a motion model, and so for the next frame it has no hope of the overlap between frame 1 and frame 3 being more than 30% in this example.

I improved the performance for this example by setting minimum_matching_threshold to 0.9 when initializing the tracker.

For your specific example @CodingMechineer, if the tracking performance is essential to your project, I believe you would either need to record at a higher framerate or slow down the device being used to sort peanuts. Both of these options would reduce the amount an object moves between frames. The last thing we could try is to initialize the motion model to be going, in general, from left to right, if this specific location is the only place you need to run this code. This would allow the tracker to better pick up on objects in the first two frames. Unfortunately, it would require changing the source code a bit and messing around with the Kalman filter, something that I am willing to help with, but we would probably not be able to put it into the supervision API.

@LinasKo @SkalskiP This issue seems to come from how ByteTrack is designed to be flexible for varied and unexpected tracking scenarios. If we wanted to fix the tracking for these types of repetitive, predictable computer vision tasks, it would be better to design a second tracker that can better handle high-speed, predictable motion for tasks like this.
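A minimal sketch of that parameter change, assuming a supervision release whose `ByteTrack` exposes `minimum_matching_threshold` and `frame_rate` (the values are illustrative):

```python
import supervision as sv

# A higher minimum_matching_threshold lets detections with less overlap still
# be matched to a track, which helps when objects move far between frames.
tracker = sv.ByteTrack(
    minimum_matching_threshold=0.9,
    frame_rate=30,  # set to the actual FPS of the source video
)
```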
Thank you for your detailed investigation @rolson24! I made the same observation regarding the framerate as you explained. With the video from the example, I had the stated problem only with the Roboflow library but not with Ultralytics. However, when the device runs faster at the same framerate, I get the same problems with the Ultralytics library as well. Thus, I must make sure the movement of the objects between frames is small enough that the tracking works satisfactorily. The object movement between frames was probably close to the matching threshold, so it still worked with Ultralytics but not with Roboflow. In summary, I need to make sure my framerate is high enough so that the object overlap is big enough and the tracking works accordingly. Hence, there is no need to change the source code. Thanks everybody for your help!
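As a rough back-of-envelope check of that framerate conclusion (my own approximation, not from this thread): if a box of width `w` pixels moves `v` pixels per second, the per-frame overlap is roughly `1 - v / (w * fps)`, so the framerate has to satisfy the bound computed below.

```python
def min_fps_for_overlap(speed_px_per_s: float, box_width_px: float,
                        min_overlap: float = 0.3) -> float:
    # Per-frame displacement is speed / fps; the remaining overlap fraction is
    # about 1 - displacement / box_width, so solve for fps at the minimum overlap.
    return speed_px_per_s / (box_width_px * (1.0 - min_overlap))

# Example: a 100 px wide object moving at 2000 px/s needs roughly
# 2000 / (100 * 0.7) ≈ 29 fps to keep about 30% overlap between frames.
print(min_fps_for_overlap(2000, 100))
```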
Search before asking
Bug
Somehow, I lose predicted bounding boxes in this line:
tracker.update_with_detections(detections)
In the plot from Ultralytics, everything is fine. However, after the line above is executed, I lose some bounding boxes. In this example, I lose two.
This is the plot from Ultralytics, how it should be:
This is the plot after the Roboflow labeling; some predictions are missing:
Can somebody help me with this issue?
Environment
Minimal Reproducible Example
Code:
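A minimal sketch of the kind of pipeline described in this issue (not the author's exact code), assuming an Ultralytics YOLO model and supervision's `ByteTrack`; the file names are placeholders and the debug prints mirror the console output below:

```python
import supervision as sv
from ultralytics import YOLO

model = YOLO("best.pt")  # placeholder weights
tracker = sv.ByteTrack()
box_annotator = sv.BoxAnnotator()

for frame in sv.get_video_frames_generator("video.mp4"):  # placeholder video
    result = model.predict(frame)[0]
    print("CLS_YOLO-model:", result.boxes.cls)

    detections = sv.Detections.from_ultralytics(result)
    print("ClassID_Supervision_1:", detections.class_id)

    # The reported problem: some detections disappear after this call.
    detections = tracker.update_with_detections(detections)
    print("ClassID_Supervision_2:", detections.class_id)

    labels = [
        f"{class_name} {confidence:.2f}"
        for class_name, confidence
        in zip(detections.data["class_name"], detections.confidence)
    ]
    print(f"{len(detections)} detections, Labels: {labels}")

    annotated_frame = box_annotator.annotate(scene=frame.copy(), detections=detections)
```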
Prints in console:
CLS_YOLO-model: tensor([1., 1., 1., 1.], device='cuda:0') --> class IDs from the predicted bounding boxes
ClassID_Supervision_1: [1 1 1 1] --> converted into Supervision
ClassID_Supervision_2: [1 1] --> after the tracker method, class IDs are lost
ClassID_Supervision_3: [1 1]
2 detections, Labels: ['Spot 0.87', 'Spot 0.86']
Additional
No response
Are you willing to submit a PR?