NNI v3.0 Preview Release (v3.0rc1)
Web Portal
- New look and feel
Neural Architecture Search
- Breaking change: `nni.retiarii` is no longer maintained and tested. Please migrate to `nni.nas` (a minimal migration sketch follows this list).
  - Inherit `nni.nas.nn.pytorch.ModelSpace`, rather than use `@model_wrapper`.
  - Use `nni.choice`, rather than `nni.nas.nn.pytorch.ValueChoice`.
  - Use `nni.nas.experiment.NasExperiment` and `NasExperimentConfig`, rather than `RetiariiExperiment`.
  - Use `nni.nas.model_context`, rather than `nni.nas.fixed_arch`.
  - Please refer to the quickstart for more changes.
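A minimal migration sketch in the style of the quickstart; the layers, labels and the sample dict are illustrative, not a prescribed recipe:

```python
import torch
import torch.nn as nn
import nni
from nni.nas.nn.pytorch import ModelSpace, LayerChoice, MutableLinear

class MyModelSpace(ModelSpace):  # previously: an nn.Module decorated with @model_wrapper
    def __init__(self):
        super().__init__()
        # Candidate operators; the label names this mutation point.
        self.conv = LayerChoice([
            nn.Conv2d(1, 32, 3, padding=1),
            nn.Conv2d(1, 32, 5, padding=2),
        ], label='conv')
        # nni.choice replaces nni.nas.nn.pytorch.ValueChoice.
        self.fc = MutableLinear(32, nni.choice('hidden_dim', [64, 128]))

    def forward(self, x):
        return self.fc(self.conv(x).mean(dim=(2, 3)))

# Instantiate one concrete architecture with nni.nas.model_context
# (replacing nni.nas.fixed_arch); this sample dict is illustrative.
with nni.nas.model_context({'conv': 0, 'hidden_dim': 64}):
    model = MyModelSpace()
print(model(torch.randn(2, 1, 28, 28)).shape)  # torch.Size([2, 64])
```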
- A refreshed experience for constructing model spaces.
  - Enhanced debuggability via the `freeze()` and `simplify()` APIs (see the sketch after this list).
  - Enhanced expressiveness with `nni.choice`, `nni.uniform`, `nni.normal`, etc.
  - Enhanced customization experience with `MutableModule`, `ModelSpace` and `ParametrizedModule`.
  - Search spaces with constraints are now supported.
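For instance, continuing the sketch above: `simplify()` flattens a space into its labeled mutables, and `freeze()` materializes one architecture from a sample. Continuous hyperparameters can be declared analogously with `nni.uniform` and `nni.normal`. The sample values below are illustrative:

```python
space = MyModelSpace()
print(space.simplify())  # maps labels such as 'conv' and 'hidden_dim' to their mutables
model = space.freeze({'conv': 0, 'hidden_dim': 64})  # a plain PyTorch model for this sample
```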
- Improved robustness and stability of strategies.
  - Supported search space types are now enriched for PolicyBasedRL, ENAS and Proxyless.
  - Each step of one-shot strategies can be executed alone: model mutation, evaluator mutation and training.
  - Most multi-trial strategies now support specifying a seed for reproducibility (see the sketch after this list).
  - The performance of strategies has been verified on a set of benchmarks.
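A sketch of a reproducible multi-trial run, reusing `MyModelSpace` from the earlier sketch; the `seed` argument follows the note above (its exact name is an assumption), and the dummy metric is a placeholder:

```python
import nni
from nni.nas.evaluator import FunctionalEvaluator
from nni.nas.experiment import NasExperiment
from nni.nas.strategy import Random

def evaluate_model(model):
    # Placeholder evaluation; a real evaluator would train and test the model.
    nni.report_final_result(0.0)

strategy = Random(seed=42)  # seed support per the note above; argument name assumed
exp = NasExperiment(MyModelSpace(), FunctionalEvaluator(evaluate_model), strategy)
exp.run(port=8081)
```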
- Strategy/engine middleware (a hypothetical sketch follows this list).
  - Filtering, replicating, deduplicating or retrying models submitted by any strategy.
  - Merging or transforming models before execution (e.g., CGO).
  - Arbitrarily long chains of middleware.
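A hypothetical sketch of a middleware chain; the class names (`Chain`, `Filter`, `Deduplication`) and their signatures are assumptions for illustration, not a verified API surface:

```python
from nni.nas.strategy import Random
from nni.nas.strategy.middleware import Chain, Deduplication, Filter  # names assumed

strategy = Chain(
    Random(),                    # base strategy producing models
    Filter(lambda model: True),  # keep only models passing a predicate
    Deduplication(),             # avoid re-running models already submitted
)
```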
- New execution engine.
  - Improved debuggability via `SequentialExecutionEngine`: trials can run in a single process and breakpoints are effective (see the sketch after this list).
  - The old execution engine is now decomposed into an execution engine and a model format.
  - Enhanced extensibility of execution engines.
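A debugging sketch reusing names from the sketches above; it assumes `NasExperimentConfig` accepts the execution engine by name, which is an assumption about the config surface rather than a documented call:

```python
from nni.nas.experiment import NasExperiment, NasExperimentConfig

# 'sequential' assumed to select SequentialExecutionEngine; trials then run
# in the current process, so breakpoints and pdb work as usual.
config = NasExperimentConfig('sequential')
exp = NasExperiment(MyModelSpace(), FunctionalEvaluator(evaluate_model), Random(), config)
exp.run()
```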
- NAS profiler and hardware-aware NAS (see the sketch after this list).
  - New profilers profile a model space and quickly compute a profiling result for a sampled architecture or a distribution of architectures (`FlopsProfiler`, `NumParamsProfiler` and `NnMeterProfiler` are officially supported).
  - Profilers can be assembled with arbitrary strategies, both multi-trial and one-shot.
  - Profilers are extensible; strategies can be assembled with arbitrary customized profilers.
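A sketch of profiling a space, assuming the `FlopsProfiler` constructor takes the space plus a sample input and that `profile()` accepts a sample dict; the module path and signatures are assumptions:

```python
import torch
from nni.nas.profiler.pytorch.flops import FlopsProfiler  # module path assumed

space = MyModelSpace()
profiler = FlopsProfiler(space, torch.randn(1, 1, 28, 28))   # signature assumed
print(profiler.profile({'conv': 0, 'hidden_dim': 64}))       # FLOPs of one sampled architecture
```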
Compression
- The compression framework has been refactored; the new framework import path is `nni.contrib.compression` (a config sketch follows this list).
  - Configuration keys have been refactored to support more detailed compression settings. view doc
  - Support fusing multiple compression methods. view doc
  - Support distillation as a basic compression component. view doc
  - Support more compression targets, such as `input`, `output` and any registered parameters. view doc
  - Support compressing any module type by customizing module settings. view doc
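For example, a one-shot pruning config under the new framework; `L1NormPruner` and the `sparse_ratio` key belong to the refactored interface, while the exact `compress()` call is hedged in the comment:

```python
import torch.nn as nn
from nni.contrib.compression.pruning import L1NormPruner

model = nn.Sequential(nn.Linear(64, 128), nn.ReLU(), nn.Linear(128, 10))
config_list = [{
    'op_types': ['Linear'],
    'sparse_ratio': 0.5,  # refactored key; the old framework used 'sparsity'
}]
pruner = L1NormPruner(model, config_list)
_, masks = pruner.compress()  # wrapped model plus generated masks (signature assumed)
```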
- Pruning (a workflow sketch follows this list)
  - Pruner interfaces have been fine-tuned for ease of use. view doc
  - Support configuring `granularity` in pruners. view doc
  - Support different masking modes: multiplying by zero or adding a large negative value.
  - Support manually setting dependency groups and global groups. view doc
  - A new, more powerful pruning speedup has been released; applicability and robustness are greatly improved. view doc
  - The end-to-end transformer compression tutorial has been updated and achieves more aggressive compression. view doc
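Continuing the sketch above, the new speedup consumes the generated masks and rewrites the graph so the model actually shrinks; the import path and call below are assumptions:

```python
import torch
from nni.compression.pytorch.speedup.v2 import ModelSpeedup  # path assumed

dummy_input = torch.randn(8, 64)
ModelSpeedup(model, dummy_input, masks).speedup_model()  # physically removes masked channels
```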
- Quantization
- Distillation
  - `DynamicLayerwiseDistiller` and `Adaptive1dLayerwiseDistiller` are supported.
- Compression documents are now updated for the new framework; for the old version, please view the v2.10 doc.
- New compression examples are under `nni/examples/compression`:
  - Create an evaluator: `nni/examples/compression/evaluator`
  - Prune a model: `nni/examples/compression/pruning`
  - Quantize a model: `nni/examples/compression/quantization`
  - Fusion compression: `nni/examples/compression/fusion`
Training Services
- Breaking change: NNI v3.0 cannot resume experiments created by NNI v2.x
- Local training service:
  - Reduced latency of creating trials
  - Fixed "GPU metric not found"
  - Fixed bugs about resuming trials
- Remote training service:
  - `reuse_mode` now defaults to `False`; setting it to `True` falls back to the v2.x remote training service (a config sketch follows this list)
  - Reduced latency of creating trials
  - Fixed "GPU metric not found"
  - Fixed bugs about resuming trials
  - Supported viewing trial logs on the web portal
  - Supported automatic recovery after temporary server failures (network fluctuation, out of memory, etc.)
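For example, in the Python experiment API (a sketch; the field mirrors the `reuseMode` option in the YAML experiment config):

```python
from nni.experiment import Experiment

experiment = Experiment('remote')
# v3.0 default; set to True to fall back to the v2.x remote training service.
experiment.config.training_service.reuse_mode = False
```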