From f5ae4187a0ce0031d9fcf88493aedeaa8414f0f3 Mon Sep 17 00:00:00 2001
From: ShifaAbu
Date: Mon, 16 Sep 2024 21:40:04 +0300
Subject: [PATCH] Added Intel Gaudi to Accelerator Setup Guide (#6543)

Added Intel Gaudi to the list of accelerators in the setup guide.

Co-authored-by: sakell
Co-authored-by: Logan Adams <114770087+loadams@users.noreply.github.com>
---
 docs/_tutorials/accelerator-setup-guide.md | 8 ++++++++
 1 file changed, 8 insertions(+)

diff --git a/docs/_tutorials/accelerator-setup-guide.md b/docs/_tutorials/accelerator-setup-guide.md
index 6f50afe139a3..75c20134b5b7 100644
--- a/docs/_tutorials/accelerator-setup-guide.md
+++ b/docs/_tutorials/accelerator-setup-guide.md
@@ -9,6 +9,7 @@ tags: getting-started training accelerator
 - [Intel Architecture (IA) CPU](#intel-architecture-ia-cpu)
 - [Intel XPU](#intel-xpu)
 - [Huawei Ascend NPU](#huawei-ascend-npu)
+- [Intel Gaudi](#intel-gaudi)
 
 # Introduction
 DeepSpeed supports different accelerators from different companies. Setup steps to run DeepSpeed on certain accelerators might be different. This guide allows user to lookup setup instructions for the accelerator family and hardware they are using.
@@ -246,3 +247,10 @@ accelerator: npu
 
 ## Multi-card parallel training using Huawei Ascend NPU
 To perform model training across multiple Huawei Ascend NPU cards using DeepSpeed, see the examples provided in [DeepSpeed Examples](https://github.com/microsoft/DeepSpeedExamples/blob/master/training/cifar/cifar10_deepspeed.py).
+
+# Intel Gaudi
+PyTorch models can be run on Intel® Gaudi® AI accelerator using DeepSpeed. Refer to the following user guides to start using DeepSpeed with Intel Gaudi:
+* [Getting Started with DeepSpeed](https://docs.habana.ai/en/latest/PyTorch/DeepSpeed/Getting_Started_with_DeepSpeed/Getting_Started_with_DeepSpeed.html#getting-started-with-deepspeed)
+* [DeepSpeed User Guide for Training](https://docs.habana.ai/en/latest/PyTorch/DeepSpeed/DeepSpeed_User_Guide/DeepSpeed_User_Guide.html#deepspeed-user-guide)
+* [Optimizing Large Language Models](https://docs.habana.ai/en/latest/PyTorch/DeepSpeed/Optimizing_LLM.html#llms-opt)
+* [Inference Using DeepSpeed](https://docs.habana.ai/en/latest/PyTorch/DeepSpeed/Inference_Using_DeepSpeed.html#deepspeed-inference-user-guide)
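
As a concrete starting point alongside the new section, the sketch below shows roughly what a single DeepSpeed training step on Gaudi can look like. It is a minimal illustration, not taken from the patch or the linked Habana guides: it assumes the Intel Gaudi software stack (`habana_frameworks`) and DeepSpeed are installed and that DeepSpeed's accelerator detection picks up the HPU device; the model, batch size, config values, and script name are placeholders.

```python
"""Minimal, illustrative DeepSpeed training step targeting Intel Gaudi (HPU).

Assumptions (not from the patch above): the Intel Gaudi software stack
(habana_frameworks) and DeepSpeed are installed, and DeepSpeed's accelerator
auto-detection selects HPU. Model, data, and config values are placeholders.
"""
import torch
import torch.nn.functional as F
import deepspeed
import habana_frameworks.torch.core  # loads the HPU backend into PyTorch

ds_config = {
    "train_batch_size": 8,  # illustrative value
}

model = torch.nn.Linear(512, 10)  # placeholder model
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

# DeepSpeed wraps the model and optimizer; with the Gaudi stack present, the
# engine is expected to place parameters on the detected HPU device.
model_engine, optimizer, _, _ = deepspeed.initialize(
    model=model,
    optimizer=optimizer,
    config=ds_config,
)

# One synthetic training step on the engine's device (expected to be hpu:N).
x = torch.randn(8, 512).to(model_engine.device)
y = torch.randint(0, 10, (8,)).to(model_engine.device)

loss = F.cross_entropy(model_engine(x), y)
model_engine.backward(loss)
model_engine.step()
print(f"device={model_engine.device} loss={loss.item():.4f}")
```

Such a script would typically be launched with the DeepSpeed launcher (for example, `deepspeed gaudi_example.py`, where the script name is hypothetical); the Habana guides linked in the patch remain the authoritative reference and cover multi-card training, LLM optimization, and inference in detail.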