Releases: intel/e2eAIOK
Intel® End-to-End AI Optimization Kit release v1.2
Highlights
This release introduces three new capabilities: RecDP-AutoFE, RecDP-LLM, and DeltaTuner.
- RecDP-AutoFE provides automatic feature engineering to generate new features for any tabular dataset. This capability has been shown to achieve accuracy competitive with, or better than, hand-crafted data scientist solutions.
- RecDP-LLM is a one-stop solution for LLM data preparation. It provides a Ray- and Spark-enhanced parallel data pipeline for pretraining data cleaning, RAG text extraction/splitting/indexing, and fine-tuning data quality evaluation and enhancement.
- DeltaTuner is an extension for PEFT that improves LLM fine-tuning speed through multiple optimizations, including new delta-tuning algorithms and use of the compact model constructor DE-NAS to construct or modify compact delta layers in a hardware-aware, train-free manner.
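The automatic feature engineering idea behind RecDP-AutoFE can be illustrated with a toy sketch: starting from a tabular dataset, candidate features are generated mechanically (here, pairwise products of numeric columns) rather than by hand. This is a concept illustration only; the function and column names are hypothetical, and this is not the RecDP-AutoFE API.

```python
# Toy sketch of automatic feature engineering on a tabular dataset.
# NOT the RecDP-AutoFE API; it only illustrates the idea of generating
# new candidate features (pairwise products of numeric columns) from an
# existing table. All names are hypothetical.
from itertools import combinations

def generate_interaction_features(rows, numeric_cols):
    """Add a product feature for every pair of numeric columns."""
    enriched = []
    for row in rows:
        new_row = dict(row)
        for a, b in combinations(numeric_cols, 2):
            new_row[f"{a}_x_{b}"] = row[a] * row[b]
        enriched.append(new_row)
    return enriched

data = [{"price": 2.0, "qty": 3.0}, {"price": 5.0, "qty": 1.0}]
out = generate_interaction_features(data, ["price", "qty"])
```

A real AutoFE pass would generate many such candidate transformations and keep only those that improve a downstream model's validation score.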
Versions and Components
- PyTorch >= 1.13.1
- Python 3.10
- PEFT 0.4.0
- PySpark 3.4.1
- Ray 2.7.1
Links
- https://github.com/intel/e2eAIOK
- https://pypi.org/project/e2eAIOK-deltatuner/1.2.0/
- https://pypi.org/project/e2eAIOK-recdp/1.2.0/
Full Changelog: https://github.com/intel/e2eAIOK/commits/v1.2
Intel® End-to-End AI Optimization Kit release v1.1
Highlights
This release introduces a new component: Model Adaptor. It adopts transfer learning methodologies to reduce training time, improve inference throughput, and reduce data labeling effort by taking advantage of public pretrained models and datasets. The three methods in Model Adaptor are Finetuner, Distiller, and Domain Adapter. Currently, Model Adaptor supports ResNet, BERT, GPT-2, and 3D U-Net models, covering the image classification, natural language processing, and medical segmentation domains.
This release provides the following major features:
- Model Adaptor Finetuner
- Model Adaptor Distiller
- Model Adaptor Domain Adapter
- Support for Hugging Face models in training-free NAS
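The Distiller listed above is based on knowledge distillation, whose core idea can be sketched in a few lines of plain Python: the student's loss blends a temperature-softened cross-entropy against the teacher's output distribution with the usual hard-label cross-entropy. The names and the `alpha`/`temperature` defaults here are illustrative, not the Model Adaptor interface.

```python
# Toy sketch of the knowledge-distillation idea behind a distiller.
# Pure-Python illustration, not the Model Adaptor API.
import math

def softmax(logits, temperature=1.0):
    exps = [math.exp(l / temperature) for l in logits]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(student_logits, teacher_logits, hard_label,
                      temperature=2.0, alpha=0.5):
    """alpha blends the soft (teacher) loss with the hard (label) loss."""
    s_soft = softmax(student_logits, temperature)
    t_soft = softmax(teacher_logits, temperature)
    # cross-entropy between teacher and student softened distributions
    soft_loss = -sum(t * math.log(s) for t, s in zip(t_soft, s_soft))
    # standard cross-entropy against the ground-truth label
    s_hard = softmax(student_logits)
    hard_loss = -math.log(s_hard[hard_label])
    return alpha * soft_loss + (1 - alpha) * hard_loss

loss = distillation_loss([2.0, 0.5], [1.8, 0.4], hard_label=0)
```

Because the teacher's softened distribution carries information about class similarity that a one-hot label does not, the student can often match the teacher's accuracy with far fewer parameters.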
Improvements
- Updated demo with colab click-to-run support
- Updated docker with jupyter support
Papers and Blogs
- The Parallel Universe Magazine - Accelerate AI Pipelines with New End-to-End AI Kit
- Multi-Model, Hardware-Aware Train-Free Neural Architecture Search
- SigOpt Blog - Enhance Multi-Model Hardware-Aware Train-Free NAS with SigOpt
- The Intel® SIHG4SR Solution for the ACM RecSys Challenge 2022
Versions and Components
- TensorFlow 2.10.0
- PyTorch 1.5, 1.12
- Intel® Extension for TensorFlow 2.10.x
- Intel® Extension for PyTorch 0.2, 1.12.x
- Horovod 0.26
- Python 3.9.12
Links
Full Changelog: https://github.com/intel/e2eAIOK/commits/v1.1
Intel® End-to-End AI Optimization Kit release v1.0
Highlights
This release introduces a new component: DE-NAS, a multi-model, hardware-aware, training-free neural architecture search module that extends model optimization to more domains. DE-NAS supports CNN, ViT, NLP, and ASR models, and leverages training-free scores to construct compact models directly on CPU clusters.
This release provides the following major features:
- Multi-model, hardware-aware, training-free NAS framework
- Pluggable search strategy
- Training-free scoring for candidate evaluation
- CNN, ViT, NLP, ASR DE-NAS recipes
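The training-free search loop described above can be sketched as follows: sample candidate architectures from a search space and rank them with a cheap zero-cost proxy instead of training each one. The toy proxy here (parameter count under a hardware budget) is a hypothetical stand-in for DE-NAS's actual scoring, and the search space is invented for illustration.

```python
# Minimal sketch of a training-free, hardware-aware NAS loop.
# The proxy score is a placeholder, not DE-NAS's real score.
import random

def sample_candidate(rng):
    """A candidate is just (depth, width) in this toy search space."""
    return {"depth": rng.randint(2, 12), "width": rng.choice([64, 128, 256])}

def proxy_score(cand, param_budget=500_000):
    """Train-free proxy: favor capacity within a hardware budget."""
    params = cand["depth"] * cand["width"] ** 2
    if params > param_budget:
        return float("-inf")  # violates the hardware-aware constraint
    return params  # bigger is better, as long as it fits the budget

def search(n_samples=100, seed=0):
    rng = random.Random(seed)
    candidates = [sample_candidate(rng) for _ in range(n_samples)]
    return max(candidates, key=proxy_score)  # no training step involved

best = search()
```

Because scoring a candidate requires no gradient steps, the whole search runs cheaply on CPU clusters, which is the point of the train-free approach.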
Improvements
- New Dockerfile with PyTorch 1.12 support
- New CI/CD workflows support
- Updated data processing with RecDP for DLRM
- Automated packaging and delivery
Versions and Components
- TensorFlow 2.10
- PyTorch 1.5, 1.10, 1.12
- Intel® Extension for TensorFlow 2.10.x
- Intel® Extension for PyTorch 0.2, 1.10.x, 1.12.x
- Horovod 0.26
- Spark 3.1
- Python 3.x
Links
- https://github.com/intel/e2eAIOK
- https://pypi.org/project/e2eAIOK
- https://hub.docker.com/repository/docker/e2eaiok/e2eaiok-tensorflow
- https://hub.docker.com/repository/docker/e2eaiok/e2eaiok-pytorch
Full Changelog: https://github.com/intel/e2eAIOK/commits/v1.0
Intel® End-to-End AI Optimization Kit release v0.2
Intel® End-to-End AI Optimization Kit is a composable toolkit for E2E AI optimization that delivers high-performance, lightweight networks and models efficiently on commodity hardware such as CPUs, with the aim of making E2E AI pipelines faster, easier, and more accessible.
Highlights
This release introduces four new, deeply optimized end-to-end AI workflows: the computer vision model ResNet, the speech recognition model RNN-T, the NLP model BERT, and the reinforcement learning model MiniGo, each delivering optimized performance on CPU. The major optimizations are improved scale-out capability on distributed CPU nodes, plus built-in model optimization and automatic hyperparameter tuning with Smart Democratization Advisor (SDA).
This release provides the following highlighted features:
- Single click AI solution deployment in distributed CPU clusters
- Enhanced Smart Democratization Advisor (SDA)
- Optimized popular models ResNet, RNN-T, BERT, and MiniGo on CPU
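The automatic hyperparameter tuning idea behind SDA can be sketched as a simple search loop: draw configurations from a search space, evaluate an objective, and keep the best trial. This pure-Python random search over a synthetic objective is only an illustration; the search space, objective, and names are hypothetical and this is not the SDA interface.

```python
# Toy sketch of automatic hyperparameter tuning via random search.
# Illustrative only; not the SDA interface.
import random

SEARCH_SPACE = {
    "learning_rate": [1e-3, 1e-2, 1e-1],
    "batch_size": [32, 64, 128],
}

def objective(cfg):
    """Stand-in for a real training run: a synthetic score to maximize."""
    return -(cfg["learning_rate"] - 1e-2) ** 2 - (cfg["batch_size"] - 64) ** 2

def tune(n_trials=50, seed=0):
    rng = random.Random(seed)
    best_cfg, best_val = None, float("-inf")
    for _ in range(n_trials):
        cfg = {k: rng.choice(v) for k, v in SEARCH_SPACE.items()}
        val = objective(cfg)
        if val > best_val:
            best_cfg, best_val = cfg, val
    return best_cfg

best = tune()
```

A production tuner would replace the random sampler with a smarter strategy (e.g. Bayesian optimization) and the synthetic objective with a real training-and-validation run.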
Improvements
- Easy clustering deployment script
- Click-to-run optimized AI pipelines
- Updated data processing with RecDP for DLRM
- Step-by-step guides and demos
Versions and Components
- TensorFlow 2.5, 2.10
- PyTorch 1.10
- Horovod 0.23, 0.26
- Spark 3.1
- Python 3.x
Links
Full Changelog: https://github.com/intel/e2eAIOK/commits/v0.2
Intel® End-to-End AI Optimization Kit release v0.1
Highlights
- First release of Smart Democratization Advisor (SDA)
- End to End AI pipeline of 3 Recommender System models: DLRM, DIEN, WnD
Contributors
- @xuechendi made their first contribution in #3
- @Jian-Zhang made their first contribution in #6
- @zigzagcai made their first contribution in #7
- @csdingbin made their first contribution in #8
- @XinyaoWa made their first contribution in #12
- @Peach-He made their first contribution in #2
- @tianyil1 made their first contribution in #18
Full Changelog: https://github.com/intel/e2eAIOK/commits/v0.1