
After adding visualization code to test.py, the displayed results are very poor. #27

Open
liuqinglong110 opened this issue Feb 23, 2019 · 0 comments
Hello, I added code inside the test_net() function of test.py to draw the detected bboxes on the image, but the displayed results are very poor — almost no detections show up. However, when I run test.py, the printed metrics closely match those in the README. Could you please check whether there is a problem with the visualization code I added?

Here is the printed output:
Finished loading model!
loading annotations into memory...
Done (t=0.46s)
creating index...
index created!
val2017 gt roidb loaded from /media/sfpc/hdd01/works/01member/liuqinglong/PyTorch/CFENet/data/coco_cache/val2017_gt_roidb.pkl
=> Total 5000 images to test.
Begin to evaluate
100%|██████████████████████████████████████████████████████████████████████████████████████| 5000/5000 [02:48<00:00, 31.19it/s]
Evaluating detections
Collecting Results......
Writing results json to eval/COCO/detections_val2017_results.json
Loading and preparing results...
DONE (t=1.59s)
creating index...
index created!
Running per image evaluation...
useSegm (deprecated) is not None. Running bbox evaluation
Evaluate annotation type bbox
DONE (t=37.24s).
Accumulating evaluation results...
DONE (t=4.10s).

31.1
[40.57291121621083, 23.09799931924321, 27.079142799861117, 35.90445758180694, 59.03516009559062, 57.60104185561858, 62.10953584207819, 28.346934670516937, 15.879438226694829, 12.970908505667131, 55.03553105170638, 52.27166928240076, 32.502803148994516, 17.25144519844492, 22.92283574501527, 60.524507358106575, 55.43074100407569, 48.30292656303936, 39.25437644104135, 38.50729999007073, 53.78198667257775, 66.53508687457537, 57.069485827333544, 55.48621623615131, 8.226423098688626, 28.16240625576513, 6.334530659774646, 18.606222843150782, 25.961159722050674, 47.57286849340589, 15.05620897410914, 20.209121557732768, 24.63198468746786, 24.948788811398522, 16.490687186409396, 26.951079648205422, 39.290302826940234, 26.357816912911865, 33.12220267646468, 18.87029605885143, 18.62972706367258, 25.819311087372977, 17.209735294725846, 6.039470642626763, 5.763638765228907, 31.060914252819543, 15.184937468529409, 12.08085800131153, 30.269102490882066, 22.767590542989772, 17.840366849920855, 13.39895571754999, 25.849177952756648, 41.56957495448163, 35.43302914486948, 30.78429116989319, 18.60925156812012, 39.383414853228544, 18.458530393041997, 39.194152267453966, 23.626842896745135, 54.8987088811993, 47.88809867681634, 50.07943886845173, 45.832888186451086, 13.44602209563323, 41.22519880390803, 25.22350059982525, 51.13318211095752, 30.908740159536514, 25.331203019967546, 28.81247030147889, 44.56079493823082, 6.662690165252809, 38.242017965681576, 23.895685283325612, 24.197714298866003, 36.17901336876259, 0.0, 9.289938403610515]
~~~~ Summary metrics ~~~~
 Average Precision  (AP) @[ IoU=0.50:0.95 | area=   all | maxDets=100 ] = 0.311
 Average Precision  (AP) @[ IoU=0.50      | area=   all | maxDets=100 ] = 0.494
 Average Precision  (AP) @[ IoU=0.75      | area=   all | maxDets=100 ] = 0.330
 Average Precision  (AP) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.126
 Average Precision  (AP) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.337
 Average Precision  (AP) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.476
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets=  1 ] = 0.260
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets= 10 ] = 0.411
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets=100 ] = 0.435
 Average Recall     (AR) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.183
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.482
 Average Recall     (AR) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.644
Wrote COCO eval results to: eval/COCO/detection_results.pkl
Detect time per image: 0.023s
Nms time per image: 0.005s
Total time per image: 0.029s
FPS: 34.986 fps

I added the following code at line 90 of test.py:
        _t['misc'].tic()
        for j in range(1, num_classes):
            inds = np.where(scores[:, j] > thresh)[0]
            if len(inds) == 0:
                all_boxes[j][i] = np.empty([0, 5], dtype=np.float32)
                continue
            c_bboxes = boxes[inds]
            c_scores = scores[inds, j]
            c_dets = np.hstack((c_bboxes, c_scores[:, np.newaxis])).astype(np.float32, copy=False)

            soft_nms = cfg.test_cfg.soft_nms
            keep = nms(c_dets, cfg.test_cfg.iou, force_cpu=soft_nms)
            keep = keep[:cfg.test_cfg.keep_per_class]
            c_dets = c_dets[keep, :]
            all_boxes[j][i] = c_dets
#********** The 6 lines below are the visualization code I added **********#
            for id in range(len(c_dets)):
                ymin, xmin, ymax, xmax = c_dets[id,0:4]
                (left, right, top, bottom) = (xmin, xmax, ymin, ymax)
                cv2.rectangle(img, (int(left), int(top)), (int(right), int(bottom)), (255,0,0), 2)
            cv2.imshow('img', img)
            cv2.waitKey(1)
#***************************************************************************#
        if max_per_image > 0:
            image_scores = np.hstack([all_boxes[j][i][:, -1] for j in range(1,num_classes)])
            if len(image_scores) > max_per_image:
                image_thresh = np.sort(image_scores)[-max_per_image]
                for j in range(1, num_classes):
                    keep = np.where(all_boxes[j][i][:, -1] >= image_thresh)[0]
                    all_boxes[j][i] = all_boxes[j][i][keep, :]
        nms_time = _t['misc'].toc()
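For reference, here is a minimal standalone sketch of how I expected the detection columns to be interpreted before drawing — assuming the usual SSD-style `(xmin, ymin, xmax, ymax, score)` row layout for `c_dets` (the helper name `dets_to_draw_list` and the 0.3 score threshold are just for illustration, not from the repository):

```python
import numpy as np

def dets_to_draw_list(c_dets, score_thresh=0.3):
    """Convert an (N, 5) array of [xmin, ymin, xmax, ymax, score] rows
    into integer corner pairs suitable for cv2.rectangle.

    Note: if the layout really is (xmin, ymin, xmax, ymax), then
    unpacking the first four columns as (ymin, xmin, ymax, xmax)
    swaps x and y and draws boxes in the wrong place.
    """
    corners = []
    for xmin, ymin, xmax, ymax, score in c_dets:
        if score < score_thresh:
            continue  # skip low-confidence detections to reduce clutter
        corners.append(((int(xmin), int(ymin)), (int(xmax), int(ymax))))
    return corners

# Example with made-up values: one confident and one low-score detection.
dets = np.array([[10.4, 20.7, 110.2, 220.9, 0.95],
                 [5.0,  5.0,  15.0,  15.0,  0.05]], dtype=np.float32)
print(dets_to_draw_list(dets))  # [((10, 20), (110, 220))]
```

Each returned pair could then be passed directly to `cv2.rectangle(img, pt1, pt2, (255, 0, 0), 2)`. If the boxes were produced on the resized network input, they would also need scaling back to the original image size before drawing.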

