- Neural Coder (Intel Neural Compressor Plug-in): One-Click, No-Code Solution (Pat's Keynote IntelON 2022) (Sep 2022)
- Alibaba Cloud and Intel Neural Compressor Deliver Better Productivity for PyTorch Users [Chinese version] (Sep 2022)
- Efficient Text Classification with Intel Neural Compressor (Sep 2022)
- Dynamic Neural Architecture Search with Intel Neural Compressor (Sep 2022)
- Easy Quantization in PyTorch Using Fine-Grained FX (Sep 2022)
- One-Click Enabling of Intel Neural Compressor Features in PyTorch Scripts (Aug 2022)
- Deep learning inference optimization for Address Purification (Aug 2022)
- Accelerate AI Inference without Sacrificing Accuracy (Jun 2022)
- PyTorch Inference Acceleration with Intel® Neural Compressor (Jun 2022)
- Intel and Hugging Face Partner to Democratize Machine Learning Hardware Acceleration (Jun 2022)
- Intel® Neural Compressor oneAPI (Jun 2022)
- Intel® Deep Learning Boost - Boost Network Security AI Inference Performance in Google Cloud Platform (GCP) (Apr 2022)
- Intel Neural Compressor (INC) as a PyTorch ecosystem project (Apr 2022)
- Dynamic Quantization with Intel Neural Compressor and Transformers (Mar 2022)
- New instructions in the Intel® Xeon® Scalable processors combined with optimized software frameworks enable real-time AI within network workloads (Feb 2022)
- Quantizing ONNX Models using Intel® Neural Compressor (Feb 2022)
- Quantize AI Model by Intel® oneAPI AI Analytics Toolkit on Alibaba Cloud (Feb 2022)
- Intel Neural Compressor Quantization with SigOpt (Jan 2022)
- AI Performance and Productivity with Intel® Neural Compressor (Jan 2022)
- Ease-of-use quantization for PyTorch with Intel® Neural Compressor (Jan 2022)
- Intel Neural Compressor Tutorial on BiliBili (Dec 2021)
- Faster AI/ML Results With Intel Neural Compressor (Dec 2021)
- Prune Once for All: Sparse Pre-Trained Language Models (Nov 2021)
- Faster, Easier Optimization with Intel® Neural Compressor (Nov 2021)
- Accelerate Deep Learning with Intel® Extension for TensorFlow* (Oct 2021)
- Intel® Neural Compressor: A Scalable Quantization Tool for ONNX Models (Oct 2021)
- A "Double Play" for MLPerf™ Inference Performance Gains with 3rd Generation Intel® Xeon® Scalable Processors (Sep 2021)
- Optimize TensorFlow Pre-trained Model for Inference (Jun 2021)
- 3D Digital Face Reconstruction Solution enabled by 3rd Gen Intel® Xeon® Scalable Processors (Apr 2021)
- Accelerating Alibaba Transformer model performance with 3rd Gen Intel® Xeon® Scalable Processors (Ice Lake) and Intel® Deep Learning Boost (Apr 2021)
- MLPerf™ Performance Gains Abound with latest 3rd Generation Intel® Xeon® Scalable Processors (Apr 2021)
- Using Low-Precision Optimizations for High-Performance DL Inference Applications (Apr 2021)
- Quantization support for ONNX using LPOT (Low Precision Optimization Tool) (Mar 2021)
- DL Boost Quantization with CERN's 3D-GANs model (Feb 2021)
- Reduced Precision Strategies for Deep Learning: 3DGAN Use Case - presentation on 4th IML Machine Learning Workshop (Oct 2020)
- Intel Neural Compressor (Sep 2020)
- Lower Numerical Precision Deep Learning Inference and Training (May 2018)
- Highly Efficient 8-bit Low Precision Inference of Convolutional Neural Networks with IntelCaffe (May 2018)