From 22d012e4f71ca3ad46195d9053ce1d816dfce15e Mon Sep 17 00:00:00 2001
From: David Kinder
Date: Tue, 3 Sep 2024 12:49:28 -0400
Subject: [PATCH] doc: fix headings, spelling, inter-doc references (#365)

* fix skipped heading levels
* fix misspellings, remove use of "please"
* fix inter-doc references to work with github.io rendering

Signed-off-by: David B. Kinder
---
 scripts/nvidia/README.md | 57 ++++++++++++++++++++++++++++++++++++---------------------
 1 file changed, 36 insertions(+), 21 deletions(-)

diff --git a/scripts/nvidia/README.md b/scripts/nvidia/README.md
index e1054d3f..5966c358 100644
--- a/scripts/nvidia/README.md
+++ b/scripts/nvidia/README.md
@@ -1,25 +1,25 @@
-# QuickSatrt Guide
+# NVIDIA GPU Quick-Start Guide
 
 Ver: 1.0
 Last Update: 2024-Aug-21
 Author: [PeterYang12](https://github.com/PeterYang12)
 E-mail: yuhan.yang@intel.com
 
-This document is a quickstart guide for GenAIInfra deployment and test on NVIDIA GPU platform.
+This document is a quick-start guide for deploying and testing GenAIInfra on the NVIDIA GPU platform.
 
 ## Prerequisite
 
-GenAIInfra uses Kubernetes as the cloud native infrastructure. Please follow the steps below to prepare the Kubernetes environment.
+GenAIInfra uses Kubernetes as the cloud native infrastructure. Follow these steps to prepare the Kubernetes environment.
 
-#### Setup Kubernetes cluster
+### Set up a Kubernetes cluster
 
-Please follow [Kubernetes official setup guide](https://github.com/opea-project/GenAIInfra?tab=readme-ov-file#setup-kubernetes-cluster) to setup Kubernetes. We recommend to use Kubernetes with version >= 1.27.
+Follow the official [Kubernetes setup guide](https://kubernetes.io/docs/setup/) to set up Kubernetes. We recommend Kubernetes version >= 1.27.
 
-#### To run GenAIInfra on NVIDIA GPUs
+### Run GenAIInfra on NVIDIA GPUs
 
-To run the workloads on NVIDIA GPUs, please follow the steps.
+To run the workloads on NVIDIA GPUs, follow these steps.
 
-1. Please check the [support matrix](https://docs.nvidia.com/ai-enterprise/latest/product-support-matrix/index.html) to make sure that environment meets the requirements.
+1. Check the [support matrix](https://docs.nvidia.com/ai-enterprise/latest/product-support-matrix/index.html) to make sure your environment meets the requirements.
 
 2. [Install the NVIDIA GPU CUDA driver and software stack](https://developer.nvidia.com/cuda-downloads).
 
@@ -28,15 +28,23 @@ To run the workloads on NVIDIA GPUs, please follow the steps.
 4. [Install the NVIDIA GPU device plugin for Kubernetes](https://github.com/NVIDIA/k8s-device-plugin).
 
 5. [Install helm](https://helm.sh/docs/intro/install/)
-NOTE: Please make sure you configure the appropriate container runtime based on the type of container runtime you installed during Kubernetes setup.
+NOTE: Make sure you configure the container runtime to match the one you installed during Kubernetes setup.
 
-## Usages
+## Usage
 
-#### Use GenAI Microservices Connector (GMC) to deploy and adjust GenAIExamples on NVIDIA GPUs
+### Use GenAI Microservices Connector (GMC) to deploy and adjust GenAIExamples on NVIDIA GPUs
 
 #### 1. Install the GMC Helm Chart
 
-**_NOTE_**: Before installingGMC, please export your own huggingface tokens, Google API KEY and Google CSE ID. If you have pre-defined directory to save the models on you cluster hosts, please also set the path.
+**_NOTE_**: Before installing GMC, export your own Hugging Face token, Google API key, and Google CSE ID. If you have a pre-defined directory for saving models on your cluster hosts, also set that path.
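+
+Once the variables shown below are set, an optional sanity check (a sketch, assuming `curl` is available and the host can reach huggingface.co) is to verify the Hugging Face token against the `whoami-v2` endpoint:
+
+```
+# Should print your account details; an "Invalid credentials" response means the token is wrong
+curl -H "Authorization: Bearer ${YOUR_HF_TOKEN}" https://huggingface.co/api/whoami-v2
+```
+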
 ```
 export YOUR_HF_TOKEN=
@@ -45,21 +45,21 @@
 export YOUR_GOOGLE_CSE_ID=
 export MOUNT_DIR=
 ```
 
-Here also provides a simple way to install GMC using helm chart `./install-gmc.sh`
+Here is a simple way to install GMC using the Helm chart: run `./install-gmc.sh`.
 
-> WARNING: the install-gmc.sh may fail due to OS distributions.
+> WARNING: `install-gmc.sh` may fail on some OS distributions.
 
-For more details, please refer to [GMC installation](https://github.com/opea-project/GenAIInfra/blob/main/microservices-connector/README.md) to get more details.
+Refer to [GMC installation](../../microservices-connector/README.md) for more details.
 
-#### 2.Use GMC to compose a ChatQnA Pipeline
+#### 2. Use GMC to compose a ChatQnA Pipeline
 
-Please refer to [Usage guide for GMC](https://github.com/opea-project/GenAIInfra/blob/main/microservices-connector/usage_guide.md) for more details.
+Refer to [Usage guide for GMC](../../microservices-connector/usage_guide.md) for more details.
 
-Here provides a simple script to use GMC to compose ChatQnA pipeline.
+Here is a simple script that uses GMC to compose a ChatQnA pipeline.
 
 #### 3. Test ChatQnA service
 
-Please refer to [GMC ChatQnA test](https://github.com/opea-project/GenAIInfra/blob/main/microservices-connector/usage_guide.md#use-gmc-to-compose-a-chatqna-pipeline)
+Refer to [GMC ChatQnA test](../../microservices-connector/usage_guide.md#use-gmc-to-compose-a-chatqna-pipeline).
 
-Here provides a simple way to test the service. `./gmc-chatqna-test.sh`
+Here is a simple way to test the service: run `./gmc-chatqna-test.sh`.
 
 #### 4. Delete ChatQnA and GMC
@@ -71,4 +71,11 @@ kubectl delete ns chatqa
 ```
 
 ## FAQ and Troubleshooting
 
-The scripts are only tested on baremental **Ubuntu22.04** with **NVIDIA H100**. Please report an issue if you meet any issue.
+The scripts have only been tested on bare metal **Ubuntu 22.04** with **NVIDIA H100**. Report an issue if you run into any problems.
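+
+If ChatQnA pods stay in `Pending` because no GPU is allocatable, a quick first check (a sketch; the resource name assumes the NVIDIA device plugin from the prerequisites is installed) is:
+
+```
+# Each GPU node should list nvidia.com/gpu under Capacity and Allocatable
+kubectl describe nodes | grep -i 'nvidia.com/gpu'
+```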