
[MMpose V1.0.0] Several questions about json result on pipeline #2227

Closed
ThomaswellY opened this issue Apr 14, 2023 · 5 comments
ThomaswellY commented Apr 14, 2023

Hi everyone! I'm running into some trouble with the json results. The pipeline runs smoothly and successfully, but how to expand the info printed in the output json has stumped me. I'll list the questions below, and I'd appreciate any support from you, my friends.

  1. train and val pipeline
     I am using the coco dataset, and the val_evaluator is like:

     val_evaluator = dict(
         type='CocoMetric',
         ann_file=f'{data_root}annotations/person_keypoints_val.json',
         score_mode='bbox_rle')

     I found that the loss recorded in the log is less than 0:
     {
         "lr": 9.909819639278549e-05,
         "data_time": 0.025734996795654295,
         "loss": -78.74157875061036,
         "loss_kpt": -78.74157875061036,
         "acc_pose": 0.8585847159376572,
         "time": 0.45228503227233885,
         "epoch": 1,
         "memory": 1799,
         "step": 50
     }
     and here are the curves recorded by TensorBoard, through setting:

     vis_backends = [
         dict(type='LocalVisBackend'),
         dict(type='TensorboardVisBackend'),
     ]
     [screenshot: TensorBoard loss curves]
     Is that a normal phenomenon, or does it only happen when type='CocoMetric'?

  2. test pipeline
    the record in the log is shown as follows:

{
    "coco/AP": 0.5661935819079941,
    "coco/AP .5": 0.8071986260011533,
    "coco/AP .75": 0.6276648388391648,
    "coco/AP (M)": 0.5420095587014345,
    "coco/AP (L)": 0.6174008618580263,
    "coco/AR": 0.6130382775119617,
    "coco/AR .5": 0.8480861244019139,
    "coco/AR .75": 0.6698564593301436,
    "coco/AR (M)": 0.5789907312049433,
    "coco/AR (L)": 0.6639087018544936,
    "data_time": 0.009200310707092286,
    "time": 0.06942817211151123
}

I wonder, is it possible to return a testing log like this (containing more detailed info for each image, so I can do the evaluation on my own):

{
    'sample_idx': 0,
    'img_shape': (224, 224),
    'scale_factor': (1.607361963190184, 1.610062893081761),
    'ori_shape': (159, 163),
    'num_classes': 4,
    'img_path': '/home/xxx/xxx.jpg',
    'gt_label': {'label': tensor([1])},
    'pred_label': {'label': tensor([1]),
                   'score': tensor([0.0142, 0.9636, 0.0114, 0.0107])}
}

This is actually what I get in mmclassification dev-1.X on a multi-classification task by setting cfg.out-item='pred' and cfg.out='xxx.pkl'.
I wonder whether I can achieve similar results here and get a testing log/json/pkl that contains both bboxes and keypoints?
Sincerely looking forward to your response. Thanks in advance~
@ly015

@ThomaswellY ThomaswellY reopened this Apr 14, 2023
@ThomaswellY ThomaswellY changed the title [MMpose V1.0.0]Several questions about json result on train、val、test pipeline [MMpose V1.0.0]Several questions about json result on pipeline Apr 14, 2023
Tau-J (Collaborator) commented Apr 14, 2023

  1. The loss going negative is normal if you are using RLE (see the sketch below). According to your test results, your model was trained successfully.
  2. You can obtain the result by specifying format_only and outfile_prefix in CocoMetric; please refer to here for more details.
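
To see why a likelihood-based loss can dip below zero: RLE trains by minimizing a negative log-likelihood over a continuous density, and the log-density of a sharply peaked distribution can exceed zero, so its negative goes below zero. A minimal NumPy sketch of the effect (an illustration only, not MMPose's actual RLE loss):

```python
import numpy as np

def gaussian_nll(error, sigma):
    """Negative log-likelihood of a 1-D Gaussian residual.

    When sigma is small, the density at the mode exceeds 1,
    so log p > 0 and the NLL goes negative.
    """
    return 0.5 * np.log(2 * np.pi * sigma**2) + error**2 / (2 * sigma**2)

print(gaussian_nll(error=0.01, sigma=1.0))   # ~0.92  (positive)
print(gaussian_nll(error=0.01, sigma=0.05))  # ~-2.06 (negative: sharp density)
```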

ThomaswellY (Author) replied:

> 1. The loss going negative is normal if you are using RLE. According to your test results, your model was trained successfully.
> 2. You can obtain the result by specifying format_only and outfile_prefix in CocoMetric; please refer to here for more details.

Hi, I have set test_evaluator like this:

test_evaluator = dict(
    type='CocoMetric',
    format_only=True,
    outfile_prefix='/home/yhao/project/mmpose_1.X/work-dir/test_result/prefix',
    ann_file=f'{data_root}annotations/person_keypoints_val.json',
    score_mode='bbox_rle')

and got 'prefix.keypoints.json', which contains dicts like the one shown below:
[
    {
        "category_id": 1,
        "image_id": 458755,
        "keypoints": [ ...... ],
        "score": 2.3132200241088867
    },
    ……
]
However, the bbox is still missing. Is there any way to get bbox info in addition to the keypoints?
By the way, I have tried modifying tools/test.py according to open-mmlab/mmpretrain#1307,
and the resulting pkl contains abundant info, including:
{
    'pred_instances': {
        'bbox_scores': ...,
        'keypoints_scores': ...,
        'keypoints': ...,
        'bbox': ...,
    }
}
Do you think my procedure is reasonable? And is the pkl I got reliable?
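
For reference, a minimal sketch of inspecting such a dump. The path and the list-of-per-sample-dicts layout are assumptions based on the structure printed above, not a documented MMPose interface:

```python
import pickle

# Hypothetical output path; use whatever you passed to tools/test.py.
with open('work-dir/test_result/results.pkl', 'rb') as f:
    results = pickle.load(f)

# Assuming a list of per-sample dicts shaped like the structure above.
for sample in results:
    pred = sample['pred_instances']
    print(pred['bbox'], pred['bbox_scores'], pred['keypoints'])
```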

ThomaswellY (Author) commented:

@Tau-J Hi, do you think the approach from the mmpretrain link can be used to solve my problem?

Tau-J (Collaborator) commented Apr 24, 2023

Sorry for the late reply. The bbox can easily be calculated from the keypoints via np.amax() and np.amin(). I think we can enhance compute_metrics() in CocoMetric. Specifically, we can add bbox to valid_kpts:

valid_kpts = defaultdict(list)

Here is an example of calculating bbox.
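
As a minimal sketch of that calculation (plain NumPy; the (K, 3) [x, y, score] keypoint layout is the usual COCO convention and an assumption here, not code taken from CocoMetric):

```python
import numpy as np

def bbox_from_keypoints(keypoints, min_score=0.0):
    """Derive an xyxy bbox from a (K, 3) array of [x, y, score] keypoints.

    Only keypoints scoring above min_score contribute, so low-confidence
    predictions don't inflate the box. Returns None if nothing is valid.
    """
    kpts = np.asarray(keypoints, dtype=np.float32)
    valid = kpts[kpts[:, 2] > min_score, :2]
    if valid.size == 0:
        return None
    x1, y1 = np.amin(valid, axis=0)
    x2, y2 = np.amax(valid, axis=0)
    return np.array([x1, y1, x2, y2], dtype=np.float32)
```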

Would you like to create a PR to support this?

ThomaswellY (Author) replied:

> Sorry for the late reply. The bbox can easily be calculated from the keypoints via np.amax() and np.amin(). I think we can enhance compute_metrics() in CocoMetric. Specifically, we can add bbox to valid_kpts:
>
> valid_kpts = defaultdict(list)
>
> Here is an example of calculating bbox.
>
> Would you like to create a PR to support this?

Yeah, I'd like to

@Tau-J Tau-J closed this as completed Apr 26, 2023