
work with partial annotation #8

Open
dalessioluca opened this issue Apr 29, 2020 · 2 comments
Labels
enhancement New feature or request

Comments

@dalessioluca
Collaborator

dalessioluca commented Apr 29, 2020

If a few images are partially annotated, then we can do supervised learning on those annotations.
Given an integer_mask_annotation, I would:

  1. compute the target bounding boxes, centroids, and widths/heights (using skimage)
  2. identify which voxel is responsible for each target bounding box.
  3. add a regression loss between each target bounding box and the inferred bounding box (i.e. tx_map, ty_map, tw_map, th_map, which are all in (0,1)). Note that only a few voxels will be "labelled", so the regression loss should be "masked".
  4. all locations inside the target bounding box should incur a loss between p_map and the target probability. The target probability is 1 at the center of the bounding box and 0 at every other location in the box, i.e. the probability is pushed up at the center and down at the periphery.
  5. identify the inferred bounding box with the largest IoU with the target bounding box. For that bounding box, add a cross-entropy classification loss between the inferred and target masks.
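Steps 1 and 2 above could be sketched roughly as follows. This is only an illustration, assuming a 2D integer mask and a regular grid of cells; the helper names (`targets_from_mask`, `responsible_cell`) and the `cell_size` parameter are hypothetical, not part of the codebase.

```python
import numpy as np
from skimage.measure import regionprops

def targets_from_mask(integer_mask):
    """Step 1: extract one target (center, width, height) per annotated
    object in an integer-valued instance mask."""
    targets = []
    for region in regionprops(integer_mask):
        min_r, min_c, max_r, max_c = region.bbox
        cy, cx = region.centroid
        targets.append({
            "bx": cx, "by": cy,       # box center (centroid)
            "bw": max_c - min_c,      # box width
            "bh": max_r - min_r,      # box height
        })
    return targets

def responsible_cell(target, cell_size):
    """Step 2: the grid cell containing the target centroid is the one
    responsible for regressing that bounding box."""
    return int(target["by"] // cell_size), int(target["bx"] // cell_size)
```

The regression loss of step 3 would then be applied only at the cells returned by `responsible_cell`, with every other location masked out.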

Note:
For most images there will be no annotation at all, and even when an annotation is present it is only partial. The code therefore needs to be written so that this labelled loss defaults to zero in most cases.
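One way to get that default-to-zero behaviour is to mask the loss and normalize by the number of labelled locations, as in this minimal sketch (the function name and array shapes are assumptions, not from the codebase):

```python
import numpy as np

def masked_regression_loss(pred, target, mask):
    """Squared-error loss applied only where mask == 1. When the image
    carries no annotation (mask is all zeros), the loss is exactly 0.0,
    so unannotated images contribute no gradient."""
    n_labelled = mask.sum()
    if n_labelled == 0:
        return 0.0
    return float((mask * (pred - target) ** 2).sum() / n_labelled)
```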

@dalessioluca
Collaborator Author

The data loader would need to be changed to handle both cases: annotation present and annotation absent.
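A minimal sketch of such a dataset, assuming the mask is simply `None` when no annotation exists (the class name and constructor signature are hypothetical):

```python
class PartiallyAnnotatedDataset:
    """Pairs each image with an optional integer mask. Returns None for
    the mask when no annotation exists, so the training loop can skip
    the supervised loss for that sample."""

    def __init__(self, images, masks_by_index):
        self.images = images
        self.masks = masks_by_index  # dict: image index -> integer mask

    def __len__(self):
        return len(self.images)

    def __getitem__(self, idx):
        # .get returns None for unannotated indices
        return self.images[idx], self.masks.get(idx)
```

With a PyTorch DataLoader this would also need a custom collate function, since the default one cannot batch `None` values.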

@dalessioluca
Collaborator Author

Another possibility is to have:

  1. aggressive pre-processing
  2. pre-training based on disks (see line 380 in https://github.com/spacetx/spacetx-research/blob/38_OMG/MODULES/vae_parts.py)

When a model is pretrained, we should report the mean and std of each channel in the dataset used for training. Users will then be responsible for matching that mean and std in order to get the most out of the pretrained model.
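Matching the reported statistics could look like this sketch, assuming channel-last images; the function name and the 1e-8 guard against zero variance are my own choices, not from the repository:

```python
import numpy as np

def normalize_to_pretrained(image, pretrained_mean, pretrained_std):
    """Rescale each channel of a (H, W, C) image so its mean/std match
    the per-channel statistics reported for the pretraining dataset."""
    mean = image.mean(axis=(0, 1))
    std = image.std(axis=(0, 1))
    z = (image - mean) / (std + 1e-8)  # per-channel standardization
    return z * np.asarray(pretrained_std) + np.asarray(pretrained_mean)
```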

@dalessioluca dalessioluca added the enhancement New feature or request label Jun 13, 2020