Two training modes are currently included: conventional training and semi-siamese training. Edit the configuration of the chosen mode by following the steps below, and then train a face recognition model in that mode.
We use MS-Celeb-1M-v1c for conventional training. To enable open-set evaluation, we try our best to remove the identities that may overlap between this dataset and all of the test sets, resulting in a training set of 72,778 identities and about 3.28M images. The final identity list can be found in MS-Celeb-1M-v1c-r_id_list.txt. The format of the training list should be the same as MS-Celeb-1M-v1c-r_train_list.txt. The shallow training set MS-Celeb-1M-v1c-Shallow is formed by randomly selecting two images per identity from MS-Celeb-1M-v1c (a sketch of this selection is shown below); the selected image list can be downloaded as MS-Celeb-1M-v1c-r-shallow_train_list.txt. The training set for masked face recognition (MS-Celeb-1M-v1c-Mask) includes the original face images of each identity in MS-Celeb-1M-v1c, as well as the corresponding masked face images generated by FMA-3D.
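For illustration, such a shallow list can be derived from the full one with a short script like the following. It assumes each line of the training list is `<image_path> <identity_label>`, which is a common convention; check MS-Celeb-1M-v1c-r_train_list.txt for the exact format. The output file name is hypothetical.

```python
import random
from collections import defaultdict

def make_shallow_list(full_list_path, shallow_list_path, images_per_id=2, seed=0):
    """Randomly keep `images_per_id` images per identity (assumed list format:
    one "<image_path> <identity_label>" pair per line)."""
    id2lines = defaultdict(list)
    with open(full_list_path) as f:
        for line in f:
            _, label = line.split()
            id2lines[label].append(line)
    rng = random.Random(seed)
    with open(shallow_list_path, 'w') as f:
        for lines in id2lines.values():
            f.writelines(rng.sample(lines, min(images_per_id, len(lines))))

make_shallow_list('MS-Celeb-1M-v1c-r_train_list.txt', 'my_shallow_train_list.txt')
```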
Align the face images to 112x112 using face_align.py.
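face_align.py is the authoritative implementation; as a hedged sketch of the idea, the widely used 5-point similarity-transform alignment to a 112x112 crop looks like this (the reference landmarks are the common ArcFace template, and the function name is ours):

```python
import cv2
import numpy as np
from skimage.transform import SimilarityTransform

# Target positions of (left eye, right eye, nose, left mouth corner,
# right mouth corner) in the 112x112 crop.
REFERENCE_LANDMARKS = np.array(
    [[38.2946, 51.6963], [73.5318, 51.5014], [56.0252, 71.7366],
     [41.5493, 92.3655], [70.7299, 92.2041]], dtype=np.float32)

def align_face(image, landmarks):
    """Warp `image` so that its five detected `landmarks` (a 5x2 array)
    map onto the reference template; returns a 112x112 crop."""
    tform = SimilarityTransform()
    tform.estimate(np.asarray(landmarks, dtype=np.float32), REFERENCE_LANDMARKS)
    return cv2.warpAffine(image, tform.params[:2], (112, 112))
```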
Edit the configuration in backbone_conf.yaml. A detailed description of the configuration can be found in backbone_def.py.
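As an illustration only (the exact keys each backbone expects are listed in backbone_def.py), an entry typically specifies the embedding size and the spatial size of the last feature map:

```yaml
MobileFaceNet:
    feat_dim: 512   # dimension of the output face embedding
    out_h: 7        # height of the last feature map (4 for upper-half faces, see below)
    out_w: 7        # width of the last feature map
```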
Edit the configuration in head_conf.yaml. A detailed description of the configuration can be found in head_def.py.
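Again as an illustration only (the key names below are assumptions; see head_def.py for the keys each head actually expects), a margin-based head entry might look like:

```yaml
ArcFace:
    feat_dim: 512     # must match the backbone's feat_dim
    num_class: 72778  # number of identities in the training set
    margin: 0.5       # additive angular margin (assumed key name)
    scale: 64         # feature scale s (assumed key name)
```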
Edit the configuration in train.sh. A detailed description of the configuration can be found in train.py.
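A hedged sketch of what train.sh may contain; the flag names here are illustrative, and the authoritative list is the argument parser in train.py:

```sh
python train.py \
    --data_root '/path/to/msceleb_aligned' \
    --train_file 'MS-Celeb-1M-v1c-r_train_list.txt' \
    --backbone_type 'MobileFaceNet' \
    --backbone_conf_file '../backbone_conf.yaml' \
    --head_type 'ArcFace' \
    --head_conf_file '../head_conf.yaml' \
    --lr 0.1 \
    --batch_size 512 \
    --out_dir 'out_dir' \
    2>&1 | tee log.log
```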
sh train.sh
- To train a model using only the upper half of the face (model2 in 3.4), set the last parameter of 'ImageDataset' to True and change the backbone's 'out_h' to 4 (see the sketch after these notes).
- To train the masked face recognition model (model3 in 3.4), simply change the training set to MS-Celeb-1M-v1c-Mask, which includes 72,778 identities and about 6.56M images (each of the ~3.28M original images plus its masked counterpart).
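A hedged sketch of the first note above; the real signature of ImageDataset lives in this repo's data pipeline (the import path below is assumed), and the point is only that its final argument toggles cropping to the upper half of the face:

```python
from data_processor.train_dataset import ImageDataset  # path assumed

# Ordinary training: full 112x112 faces.
full_face_set = ImageDataset('/path/to/msceleb_aligned',
                             'MS-Celeb-1M-v1c-r_train_list.txt', False)

# Upper-half training (model2): with the last argument True, the dataset
# returns only the top half of each face, so the backbone's 'out_h' in
# backbone_conf.yaml must be reduced from 7 to 4 to match.
upper_face_set = ImageDataset('/path/to/msceleb_aligned',
                              'MS-Celeb-1M-v1c-r_train_list.txt', True)
```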
The models and training logs mentioned in our technical report are listed below; click a link to download them. For MegaFace, we report the accuracy of the last checkpoint; for the other benchmarks, we report the accuracy of the best checkpoint.
Backbone | LFW | CPLFW | CALFW | AgeDB | MegaFace | Params | MACs | Models&Logs |
---|---|---|---|---|---|---|---|---|
MobileFaceNet | 99.57 | 83.33 | 93.82 | 95.97 | 90.39 | 1.19M | 227.57M | Google,Baidu:bmpn |
Resnet50-ir | 99.78 | 88.20 | 95.47 | 97.77 | 96.67 | 43.57M | 6.31G | Google,Baidu:8ecq |
Resnet152-irse | 99.85 | 89.72 | 95.56 | 98.13 | 97.48 | 71.14M | 12.33G | Google,Baidu:2d0c |
HRNet | 99.80 | 88.89 | 95.48 | 97.82 | 97.32 | 70.63M | 4.35G | Google,Baidu:t9eo |
EfficientNet-B0 | 99.55 | 84.72 | 94.37 | 96.63 | 91.38 | 33.44M | 77.83M | Google,Baidu:sgja |
TF-NAS-A | 99.75 | 85.90 | 94.87 | 97.23 | 94.42 | 39.59M | 534.41M | Google,Baidu:kq2v |
GhostNet | 99.65 | 83.52 | 93.93 | 95.70 | 89.42 | 26.76M | 194.49M | Google,Baidu:6dg1 |
Attention-56 | 99.88 | 89.18 | 95.65 | 98.12 | 97.75 | 98.96M | 6.34G | Google,Baidu:f93u |
Attention-92(MX) | 99.82 | 90.33 | 95.88 | 98.08 | 98.09 | 134.56M | 10.62G | Google,Baidu:3ura |
ResNeSt50 | 99.80 | 89.98 | 95.55 | 97.98 | 97.08 | 76.79M | 5.55G | Google,Baidu:3ura |
ReXNet_1.0 | 99.65 | 84.68 | 94.58 | 96.70 | 93.17 | 15.20M | 429.64M | Google,Baidu:3ura |
- MegaFace denotes MegaFace rank-1 accuracy.
- Params and MACs are computed with THOP, as sketched below.
- MX denotes mixed-precision training with apex.
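A minimal sketch of how the Params/MACs columns can be reproduced with THOP; the stand-in model below is ours, whereas in this repo the backbone would be built via backbone_def.py:

```python
import torch
import torch.nn as nn
from thop import profile  # pip install thop

# Stand-in model; build the real backbone via backbone_def.py instead.
model = nn.Sequential(nn.Conv2d(3, 64, 3, stride=2, padding=1),
                      nn.ReLU(),
                      nn.AdaptiveAvgPool2d(1),
                      nn.Flatten(),
                      nn.Linear(64, 512))
dummy = torch.randn(1, 3, 112, 112)  # one aligned 112x112 RGB face
macs, params = profile(model, inputs=(dummy,))
print(f'MACs: {macs / 1e6:.2f}M, Params: {params / 1e6:.2f}M')
```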
Supervisory Head | LFW | CPLFW | CALFW | AgeDB | MegaFace (rank-1) | Models&Logs |
---|---|---|---|---|---|---|
AM-Softmax | 99.58 | 83.63 | 93.93 | 95.85 | 88.92 | Google,Baidu:pe3n |
AdaM-Softmax | 99.58 | 83.85 | 93.50 | 96.02 | 89.40 | Google,Baidu:rcrk |
AdaCos | 99.65 | 83.27 | 92.63 | 95.38 | 82.95 | Google,Baidu:3sef |
ArcFace | 99.57 | 83.68 | 93.98 | 96.23 | 88.39 | Google,Baidu:aujd |
MV-Softmax | 99.57 | 83.33 | 93.82 | 95.97 | 90.39 | Google,Baidu:fcpd |
CurricularFace | 99.60 | 83.03 | 93.75 | 95.82 | 87.27 | Google,Baidu:iru3 |
CircleLoss | 99.57 | 83.42 | 94.00 | 95.73 | 88.75 | Google,Baidu:mj00 |
NPCFace | 99.55 | 83.80 | 94.13 | 95.87 | 89.13 | Google,Baidu:2hih |
MagFace | 99.53 | 84.32 | 94.03 | 95.82 | 89.85 | Google,Baidu:2hih |
Training Mode | LFW | CPLFW | CALFW | AgeDB | Models&Logs |
---|---|---|---|---|---|
Conventional Training | 91.77 | 61.56 | 76.52 | 73.90 | Google,Baidu:j4ve |
Semi-siamese Training | 99.38 | 82.53 | 91.78 | 93.60 | Google,Baidu:n630 |
Model | Rank1 | Rank3 | Rank5 | Rank10 | Models&Logs | Note |
---|---|---|---|---|---|---|
model1 | 27.03 | 34.90 | 38.45 | 43.22 | Google,Baidu:vp7e | Trained on MS-Celeb-1M-v1c |
model2 | 71.40 | 76.60 | 78.62 | 81.05 | Google,Baidu:b7tk | Trained on the upper half of faces in MS-Celeb-1M-v1c |
model3 | 78.45 | 83.20 | 84.89 | 86.92 | Google,Baidu:pcio | Trained on MS-Celeb-1M-v1c-Mask |
model4 | 79.20 | 83.67 | 85.28 | 87.24 | Google,Baidu:d9ii | Concatenation of model2 and model3 features (see the sketch below) |
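A hedged sketch of how model4's fused feature can be formed: embed a test face with model2 (upper-half model) and model3 (masked-face model), L2-normalize each embedding, and concatenate. Normalizing before concatenation is our assumption, a common choice so that both parts contribute equally:

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def fused_feature(model2, model3, upper_face, masked_face):
    f2 = F.normalize(model2(upper_face), dim=1)   # (N, D2)
    f3 = F.normalize(model3(masked_face), dim=1)  # (N, D3)
    return torch.cat([f2, f3], dim=1)             # (N, D2 + D3)
```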
To add a new backbone (a hypothetical example follows this list):
- Define the network under the directory backbone.
- Create the object in backbone_def.py.
- Add the configuration in backbone_conf.yaml.
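For instance, a minimal (toy) backbone following these steps could look like the sketch below; the class name and file name are ours, not part of the repo. It would then be registered in backbone_def.py and given an entry in backbone_conf.yaml with the three parameters it takes:

```python
# backbone/MyBackbone.py (hypothetical)
import torch.nn as nn

class MyBackbone(nn.Module):
    """Toy backbone: maps a 3x112x112 face to a feat_dim-D embedding."""
    def __init__(self, feat_dim, out_h, out_w):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(3, 64, 3, stride=2, padding=1),
            nn.BatchNorm2d(64),
            nn.ReLU(inplace=True),
            nn.AdaptiveAvgPool2d((out_h, out_w)))
        self.fc = nn.Linear(64 * out_h * out_w, feat_dim)

    def forward(self, x):
        return self.fc(self.conv(x).flatten(1))
```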
To add a new head (a hypothetical example follows this list):
- Define the new head under the directory head.
- Create the object in head_def.py.
- Add the configuration in head_conf.yaml.
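Similarly, a minimal (toy) head, here a plain cosine-similarity classifier standing in for the margin-based heads shipped with the repo; the name and the exact forward interface are assumptions:

```python
# head/MyHead.py (hypothetical)
import torch
import torch.nn as nn
import torch.nn.functional as F

class MyHead(nn.Module):
    """Cosine classifier: logits = s * cos(theta)."""
    def __init__(self, feat_dim, num_class, scale=32.0):
        super().__init__()
        self.scale = scale
        self.weight = nn.Parameter(torch.empty(num_class, feat_dim))
        nn.init.xavier_uniform_(self.weight)

    def forward(self, feats, labels=None):
        # `labels` is kept for interface parity with margin-based heads,
        # which need it to apply the margin to the target class.
        cos = F.linear(F.normalize(feats), F.normalize(self.weight))
        return self.scale * cos
```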
To add a new training mode (a sampler sketch follows this list):
- Add the new data sampler in train_dataset.py.
- Create a new folder named after the new training mode in this directory.
- Implement the training procedure in train.py.
- Add the configuration in train.sh.
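As a hypothetical example of the sampler step, an identity-balanced batch sampler of the kind a pair-based mode (such as semi-siamese training) relies on; the class is ours and would be passed to a DataLoader as batch_sampler:

```python
import random
from collections import defaultdict
from torch.utils.data import Sampler

class IdentityBalancedSampler(Sampler):
    """Yields batches of `ids_per_batch` identities x `images_per_id` images."""
    def __init__(self, labels, ids_per_batch=256, images_per_id=2):
        self.id2idx = defaultdict(list)
        for idx, label in enumerate(labels):
            self.id2idx[label].append(idx)
        self.ids = [i for i, v in self.id2idx.items()
                    if len(v) >= images_per_id]
        self.ids_per_batch = ids_per_batch
        self.images_per_id = images_per_id

    def __iter__(self):
        ids = self.ids[:]
        random.shuffle(ids)
        for b in range(0, len(ids) - self.ids_per_batch + 1,
                       self.ids_per_batch):
            batch = []
            for identity in ids[b:b + self.ids_per_batch]:
                batch.extend(random.sample(self.id2idx[identity],
                                           self.images_per_id))
            yield batch

    def __len__(self):
        return len(self.ids) // self.ids_per_batch
```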