Can I inform the initial bounding box visibilities? #1767
Comments
If this is not possible, a workaround could be achieved if Albumentations returned the "perceived" visibility after the transformation. That way, I could calculate the "real" visibility as the product of the initial and perceived visibilities.
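The workaround described above — treating the "real" visibility as the product of the initial visibility (from slicing) and the "perceived" visibility (reported by the transform) — can be sketched as follows. The function names are illustrative, not part of the Albumentations API:

```python
def real_visibilities(initial_vis, perceived_vis):
    """Elementwise product: the fraction of each *original* object that is
    still visible after both slicing and augmentation."""
    return [i * p for i, p in zip(initial_vis, perceived_vis)]

def keep_mask(initial_vis, perceived_vis, min_real=0.4):
    """Which boxes still meet a 'real' minimum-visibility threshold."""
    return [v >= min_real for v in real_visibilities(initial_vis, perceived_vis)]

# Two boxes, 80% and 100% visible after slicing; the augmentation then
# kept 60% and 30% of them respectively. Real visibilities are ~0.48
# and 0.30, so only the first passes a 40% threshold.
print(keep_mask([0.8, 1.0], [0.6, 0.3]))  # [True, False]
```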
I do not understand the question yet. As I understand it, you crop parts from the image, and bounding boxes that are not 100% contained in the crop get truncated, right? And this becomes an issue, or not? Could you provide some code?
Hi @ternaus, thanks for the reply. Yes, you are correct: they get truncated, so their visibility is not 100% to start with. Unfortunately I don't think providing code will make things any clearer, because this is more of a workflow problem. So let me give a hypothetical situation:

1 - Imagine my dataset has 2000x2000 px images. Everything up to this point happens before training the model. It's dataset pre-processing and has nothing to do with Albumentations.

5 - Now I'll start training a model and use Albumentations. Then comes the question: given that I want to work with a minimal visibility of 40%, which value of
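Assuming visibilities multiply (real = initial × perceived), the arithmetic behind this question can be sketched as follows. The helper is hypothetical, not something Albumentations provides; it computes the per-box threshold a transform would need to enforce so that the combined visibility stays above the desired floor:

```python
def required_perceived_visibility(target_real, initial):
    """Hypothetical helper: given a target 'real' visibility floor and a box's
    initial visibility (fraction already visible after slicing), return the
    minimum 'perceived' visibility the transform must preserve so that
    initial * perceived >= target_real. Capped at 1.0, since a transform
    cannot make a box more visible than it already is."""
    if initial <= 0:
        raise ValueError("initial visibility must be positive")
    return min(1.0, target_real / initial)

# A box already truncated to 80% visibility: to keep a 40% real visibility,
# the transform must preserve at least 0.4 / 0.8 = 50% of it.
print(required_perceived_visibility(0.4, 0.8))  # 0.5
```

Note this threshold differs per box, which is exactly why a single global cutoff applied after the transform cannot express the requirement.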
My Question
Is it possible to tell a transformation what the initial visibility of each bounding box is?
As far as I know, transformations always assume that the objects are 100% visible before the transformation. But in real life, that is not always the case.
Additional Context
I work with object detection on very high-resolution images. As a pre-processing step, the images in the training dataset have to be sliced before they can be used by the model. During this pre-processing, the visibility of many bounding boxes becomes less than 100%. Of course, I can calculate those values, but is there a way to use them with Albumentations?
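The initial visibilities mentioned above can be computed during the slicing step itself. A minimal sketch (the function and coordinate convention are assumptions for illustration, not Albumentations API): for each full-image box, the initial visibility in a given tile is the area of the box–tile intersection divided by the box's full area.

```python
def slice_visibilities(boxes, tile):
    """For each full-image box, return the fraction of its area that falls
    inside one tile of the sliced image. Boxes and tile are (x1, y1, x2, y2)
    in full-image pixel coordinates."""
    tx1, ty1, tx2, ty2 = tile
    out = []
    for x1, y1, x2, y2 in boxes:
        ix1, iy1 = max(x1, tx1), max(y1, ty1)   # intersection corners
        ix2, iy2 = min(x2, tx2), min(y2, ty2)
        inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
        area = (x2 - x1) * (y2 - y1)
        out.append(inter / area if area > 0 else 0.0)
    return out

# Two boxes against the top-left 1000x1000 tile of a 2000x2000 image:
# one straddles the tile corner (a quarter inside), one is fully inside.
print(slice_visibilities([(900, 900, 1100, 1100), (0, 0, 100, 100)],
                         (0, 0, 1000, 1000)))  # [0.25, 1.0]
```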