[MMPose v1.0.0] Several questions about JSON results in the pipeline #2227
Hi, I have set the test evaluator like this:
@Tau-J Hi, do you think the link from mmpretrain can be used to solve my problem?
Sorry for the late reply.
Here is an example of calculating the bbox. Would you like to create a PR to support this?
Yeah, I'd like to.
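The bbox-calculation example the maintainer links to is not quoted in this thread; a minimal sketch of deriving a bounding box from predicted keypoints might look like the following (the function name, score threshold, and padding heuristic are all assumptions, not the MMPose API):

```python
def keypoints_to_bbox(keypoints, scores, score_thr=0.3, padding=1.25):
    """Return an (x1, y1, x2, y2) box around the confident keypoints.

    keypoints: list of (x, y) pairs; scores: per-keypoint confidences.
    Points below `score_thr` are ignored; the tight box is expanded by
    `padding` around its center, a common heuristic for pose boxes.
    """
    pts = [(x, y) for (x, y), s in zip(keypoints, scores) if s >= score_thr]
    if not pts:
        return None  # no keypoint is confident enough to anchor a box
    xs = [p[0] for p in pts]
    ys = [p[1] for p in pts]
    x1, y1, x2, y2 = min(xs), min(ys), max(xs), max(ys)
    cx, cy = (x1 + x2) / 2, (y1 + y2) / 2
    w, h = (x2 - x1) * padding, (y2 - y1) * padding
    return (cx - w / 2, cy - h / 2, cx + w / 2, cy + h / 2)
```

With `padding=1.0` the function returns the tight box around the confident keypoints; larger values leave a margin for the detector-style crop.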
Hi everyone! I'm running into some trouble with the JSON results. The pipeline runs smoothly and successfully, but I'm stuck on how to expand the info printed in the output JSON. I'll list my questions below and would appreciate your support.
train and val pipeline
I am using the COCO dataset, and the val metric is:

```python
val_evaluator = dict(
    type='CocoMetric',
    ann_file=f'{data_root}annotations/person_keypoints_val.json',
    score_mode='bbox_rle')
```
I found that the loss recorded in the log is less than 0:
```json
{
    "lr": 9.909819639278549e-05,
    "data_time": 0.025734996795654295,
    "loss": -78.74157875061036,
    "loss_kpt": -78.74157875061036,
    "acc_pose": 0.8585847159376572,
    "time": 0.45228503227233885,
    "epoch": 1,
    "memory": 1799,
    "step": 50
}
```
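For what it's worth, a negative loss is not necessarily a bug for likelihood-based keypoint losses such as RLE (which `score_mode='bbox_rle'` hints at): the loss is a negative log-likelihood of a continuous density, and a density value can exceed 1, so its negative log can go below zero. A minimal Gaussian sketch (illustrative only, not the RLE loss itself):

```python
import math

def gaussian_nll(error, sigma):
    """Negative log-likelihood of a residual under N(0, sigma^2).

    A continuous density can exceed 1 when sigma is small, so the NLL
    (the training loss) can legitimately be negative.
    """
    return 0.5 * math.log(2 * math.pi * sigma ** 2) + error ** 2 / (2 * sigma ** 2)
```

For example, `gaussian_nll(0.0, 1.0)` is positive, while `gaussian_nll(0.0, 0.01)` is already negative: as the model grows confident (small sigma) and accurate (small error), the loss keeps decreasing past zero.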
And here are the curves recorded by TensorBoard, through setting:

```python
vis_backends = [
    dict(type='LocalVisBackend'),
    dict(type='TensorboardVisBackend'),
]
```
Is that a normal phenomenon, or does it only happen with `type='CocoMetric'`?
test pipeline
The record in the log is shown as follows:
```json
{
    "coco/AP": 0.5661935819079941,
    "coco/AP .5": 0.8071986260011533,
    "coco/AP .75": 0.6276648388391648,
    "coco/AP (M)": 0.5420095587014345,
    "coco/AP (L)": 0.6174008618580263,
    "coco/AR": 0.6130382775119617,
    "coco/AR .5": 0.8480861244019139,
    "coco/AR .75": 0.6698564593301436,
    "coco/AR (M)": 0.5789907312049433,
    "coco/AR (L)": 0.6639087018544936,
    "data_time": 0.009200310707092286,
    "time": 0.06942817211151123
}
```
I wonder, is it possible to return a testing log like this (containing more detailed info on each image, so I can do the evaluation on my own):
```python
{
    'sample_idx': 0,
    'img_shape': (224, 224),
    'scale_factor': (1.607361963190184, 1.610062893081761),
    'ori_shape': (159, 163),
    'num_classes': 4,
    'img_path': '/home/xxx/xxx.jpg',
    'gt_label': {'label': tensor([1])},
    'pred_label': {'label': tensor([1]),
                   'score': tensor([0.0142, 0.9636, 0.0114, 0.0107])}
}
```
This is actually what I get in mmclassification dev-1.x on a multi-class classification task by setting `cfg.out-item='pred'` and `cfg.out='xxx.pkl'`.
I wonder whether I can achieve similar results here, and get a testing log/JSON/pkl that contains the bboxes and keypoints?
Sincerely looking forward to your response. Thanks in advance~
@ly015