diff --git a/README.md b/README.md
index c4287511..74f21d8d 100644
--- a/README.md
+++ b/README.md
@@ -191,17 +191,17 @@ PPQ is tested with models from mmlab-classification, mmlab-detection, mmlab-sega
 | fsaf | Detection | 32 imgs | pplnn | bbox_mAP | 36.5% | 36.6% | 37.4% |
 | mask_rcnn | Detection | 32 imgs | pplnn | bbox_mAP | 37.7% | 37.6% | 37.9% |
 | ---- | ---- | ---- | ---- | ---- | ---- | ---- | ---- |
-| deeplabv3 | Segamentation | 32 imgs | conservative | aAcc / mIoU | 96.13% / 78.81% | 96.14% / 78.89% | 96.17% / 79.12% |
-| deeplabv3plus | Segamentation | 32 imgs | conservative | aAcc / mIoU | 96.27% / 79.39% | 96.26% / 79.29% | 96.29% / 79.60% |
-| fcn | Segamentation | 32 imgs | conservative | aAcc / mIoU | 95.75% / 74.56% | 95.62% / 73.96% | 95.68% / 72.35% |
-| pspnet | Segamentation | 32 imgs | conservative | aAcc / mIoU | 95.79% / 77.40% | 95.79% / 77.41% | 95.83% / 77.74% |
+| deeplabv3 | Segmentation | 32 imgs | conservative | aAcc / mIoU | 96.13% / 78.81% | 96.14% / 78.89% | 96.17% / 79.12% |
+| deeplabv3plus | Segmentation | 32 imgs | conservative | aAcc / mIoU | 96.27% / 79.39% | 96.26% / 79.29% | 96.29% / 79.60% |
+| fcn | Segmentation | 32 imgs | conservative | aAcc / mIoU | 95.75% / 74.56% | 95.62% / 73.96% | 95.68% / 72.35% |
+| pspnet | Segmentation | 32 imgs | conservative | aAcc / mIoU | 95.79% / 77.40% | 95.79% / 77.41% | 95.83% / 77.74% |
 | ---- | ---- | ---- | ---- | ---- | ---- | ---- | ---- |
 | srcnn | Editing | 32 imgs | conservative | PSNR / SSIM | 27.88% / 79.70% | 27.88% / 79.07% | 28.41% / 81.06% |
 | esrgan | Editing | 32 imgs | conservative | PSNR / SSIM | 27.84% / 75.20% | 27.49% / 72.90% | 27.51% / 72.84% |

 * PPQ(sim) stands for PPQ quantization simulator's result.
 * Dispatcher stands for dispatching policy of PPQ.
-* Classification models are evaluated with ImageNet, Detection and Segamentation models are evaluated with the COCO dataset, Editing models are evaluated with DIV2K dataset.
+* Classification models are evaluated on ImageNet; Detection and Segmentation models are evaluated on the COCO dataset; Editing models are evaluated on the DIV2K dataset.
 * All calibration datasets are randomly picked from training data.

 ### License
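
The final note above states that calibration sets are drawn at random from the training data. Below is a minimal sketch of that sampling step, assuming a torch-style dataset; `train_set`, its size, and the input shape are illustrative placeholders (not taken from the benchmark), and the PPQ call that would consume `calib_loader` is omitted since its exact signature varies across PPQ releases.

```python
import random

import torch
from torch.utils.data import DataLoader, Subset, TensorDataset

# Stand-in for a real training set; the size and input shape are
# illustrative assumptions, not values from the PPQ benchmark.
train_set = TensorDataset(torch.randn(1024, 3, 224, 224))

# Randomly pick 32 samples from the training data, matching the
# "32 imgs" calibration setting reported in the table.
calib_indices = random.sample(range(len(train_set)), k=32)
calib_loader = DataLoader(Subset(train_set, calib_indices), batch_size=32)
```

The resulting `calib_loader` would then be handed to whichever quantization entry point the installed PPQ version exposes.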