DARTS: Differentiable Architecture Search
This paper addresses the scalability challenge of architecture search by formulating the task in a differentiable manner. Unlike conventional approaches of applying evolution or reinforcement learning over a discrete and non-differentiable search space, our method is based on the continuous relaxation of the architecture representation, allowing efficient search of the architecture using gradient descent. Extensive experiments on CIFAR-10, ImageNet, Penn Treebank and WikiText-2 show that our algorithm excels in discovering high-performance convolutional architectures for image classification and recurrent architectures for language modeling, while being orders of magnitude faster than state-of-the-art non-differentiable techniques. Our implementation has been made publicly available to facilitate further research on efficient architecture search algorithms.
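To make the continuous relaxation concrete, below is a minimal PyTorch sketch of a DARTS-style mixed operation: the discrete choice among candidate operations is replaced by a softmax over learnable architecture parameters, so the choice itself becomes differentiable. The `MixedOp` class, its `alpha` parameter, and the three-operation candidate set are illustrative assumptions for this sketch, not MMRazor's actual implementation.

```python
# A minimal sketch of DARTS's continuous relaxation (names are illustrative).
import torch
import torch.nn as nn
import torch.nn.functional as F


class MixedOp(nn.Module):
    """Continuous relaxation of a discrete choice among candidate ops.

    The categorical choice is replaced by a softmax over architecture
    parameters ``alpha``, so the output is a weighted sum of all candidate
    operations and the search becomes differentiable in ``alpha``.
    """

    def __init__(self, channels: int):
        super().__init__()
        # A small, hypothetical candidate set; the paper searches over a
        # larger set (separable convs, dilated convs, pooling, etc.).
        self.ops = nn.ModuleList([
            nn.Identity(),  # skip connection
            nn.Conv2d(channels, channels, 3, padding=1, bias=False),
            nn.MaxPool2d(3, stride=1, padding=1),
        ])
        # One architecture weight per candidate operation.
        self.alpha = nn.Parameter(torch.zeros(len(self.ops)))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        weights = F.softmax(self.alpha, dim=0)  # relax the discrete choice
        return sum(w * op(x) for w, op in zip(weights, self.ops))


if __name__ == "__main__":
    op = MixedOp(channels=16)
    x = torch.randn(2, 16, 8, 8)
    y = op(x)
    # Gradients flow to both the op weights and the architecture
    # parameters alpha, so alpha can be updated by gradient descent.
    y.mean().backward()
    print(y.shape, op.alpha.grad)
```

In the full algorithm the network weights and the architecture parameters are optimized in a bilevel fashion (weights on the training loss, alphas on the validation loss), after which the strongest operation per edge is kept to derive the discrete subnet.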
| Dataset  | Params (M) | FLOPs (G) | Top-1 Acc (%) | Top-5 Acc (%) | Subnet  | Config | Download     | Remarks          |
| :------- | :--------- | :-------- | :------------ | :------------ | :------ | :----- | :----------- | :--------------- |
| CIFAR-10 | 3.42       | 0.48      | 97.32         | 99.94         | mutable | config | model \| log | MMRazor searched |
| CIFAR-10 | 3.83       | 0.55      | 97.27         | 99.98         | mutable | config | model \| log | official         |
@inproceedings{liu2018darts,
  title={DARTS: Differentiable Architecture Search},
  author={Liu, Hanxiao and Simonyan, Karen and Yang, Yiming},
  booktitle={International Conference on Learning Representations},
  year={2019}
}