Add Nginx - docker in CodeTrans #609

Merged
merged 12 commits on Aug 29, 2024
4 changes: 3 additions & 1 deletion CodeTrans/README.md
@@ -20,7 +20,7 @@ Currently we support two ways of deploying Code Translation services on docker:
docker pull opea/codetrans:latest
```

2. Start services using the docker images `built from source`: [Guide](./docker)
2. Start services using the docker images `built from source`: [Guide](./docker/xeon/README.md)

### Setup Environment Variable

@@ -34,6 +34,8 @@ To set up environment variables for deploying Code Translation services, follow
# Example: no_proxy="localhost, 127.0.0.1, 192.168.1.1"
export no_proxy="Your_No_Proxy"
export HUGGINGFACEHUB_API_TOKEN="Your_Huggingface_API_Token"
# Example: NGINX_PORT=80
export NGINX_PORT=${your_nginx_port}
```

2. If you are in a proxy environment, also set the proxy-related environment variables:
6 changes: 6 additions & 0 deletions CodeTrans/docker/docker_build_compose.yaml
@@ -22,3 +22,9 @@ services:
dockerfile: comps/llms/text-generation/tgi/Dockerfile
extends: codetrans
image: ${REGISTRY:-opea}/llm-tgi:${TAG:-latest}
nginx:
build:
context: GenAIComps/comps/nginx/docker
dockerfile: ./Dockerfile
extends: codetrans
image: ${REGISTRY:-opea}/nginx:${TAG:-latest}
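
With this entry in place, the Nginx image can also be built through the compose file alongside the other services, rather than with a standalone `docker build`. A usage sketch (run from the `CodeTrans/docker` directory, assuming `GenAIComps` has already been cloned there as the build context above requires):

```bash
# Build only the nginx service image defined in docker_build_compose.yaml
docker compose -f docker_build_compose.yaml build nginx
```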
62 changes: 49 additions & 13 deletions CodeTrans/docker/gaudi/README.md
@@ -13,7 +13,7 @@ git clone https://github.com/opea-project/GenAIComps.git
cd GenAIComps
```

### 2. Build the LLM Docker Image with the following command
### 2. Build the LLM Docker Image

```bash
docker build -t opea/llm-tgi:latest --no-cache --build-arg https_proxy=$https_proxy --build-arg http_proxy=$http_proxy -f comps/llms/text-generation/tgi/Dockerfile .
@@ -34,29 +34,50 @@ cd GenAIExamples/CodeTrans/docker/ui
docker build -t opea/codetrans-ui:latest --build-arg https_proxy=$https_proxy --build-arg http_proxy=$http_proxy -f ./docker/Dockerfile .
```

### 5. Build Nginx Docker Image

```bash
cd GenAIComps/comps/nginx/docker
docker build -t opea/nginx:latest --build-arg https_proxy=$https_proxy --build-arg http_proxy=$http_proxy -f ./Dockerfile .
```

Then run the command `docker images`; you should see the following Docker images:

- `opea/llm-tgi:latest`
- `opea/codetrans:latest`
- `opea/codetrans-ui:latest`
- `opea/nginx:latest`

## 🚀 Start Microservices

### Setup Environment Variables

Since `compose.yaml` consumes some environment variables, you need to set them up in advance, as shown below. Notice that `LLM_MODEL_ID` indicates the LLM model used for the TGI service.
To set up environment variables for deploying Code Translation services, follow these steps:

```bash
export no_proxy=${your_no_proxy}
export http_proxy=${your_http_proxy}
export https_proxy=${your_https_proxy}
export LLM_MODEL_ID="HuggingFaceH4/mistral-7b-grok"
export TGI_LLM_ENDPOINT="http://${host_ip}:8008"
export HUGGINGFACEHUB_API_TOKEN=${your_hf_api_token}
export MEGA_SERVICE_HOST_IP=${host_ip}
export LLM_SERVICE_HOST_IP=${host_ip}
export BACKEND_SERVICE_ENDPOINT="http://${host_ip}:7777/v1/codetrans"
```
1. Set the required environment variables:

```bash
# Example: host_ip="192.168.1.1"
export host_ip="External_Public_IP"
# Example: no_proxy="localhost, 127.0.0.1, 192.168.1.1"
export no_proxy="Your_No_Proxy"
export HUGGINGFACEHUB_API_TOKEN="Your_Huggingface_API_Token"
# Example: NGINX_PORT=80
export NGINX_PORT=${your_nginx_port}
```

2. If you are in a proxy environment, also set the proxy-related environment variables:

```bash
export http_proxy="Your_HTTP_Proxy"
export https_proxy="Your_HTTPs_Proxy"
```

3. Set up other environment variables:

```bash
source ../set_env.sh
```
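
To confirm that `set_env.sh` exported everything the Nginx service consumes, a quick check such as the following may help (a minimal sketch; the variable names match those referenced in `compose.yaml`):

```bash
# Print the Nginx-related variables consumed by compose.yaml;
# each line should show a non-empty value.
env | grep -E '^(FRONTEND_SERVICE_(IP|PORT)|BACKEND_SERVICE_(NAME|IP|PORT)|NGINX_PORT)='
```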

### Start Microservice Docker Containers

@@ -93,9 +114,24 @@ curl http://${host_ip}:7777/v1/codetrans \
-d '{"language_from": "Golang","language_to": "Python","source_code": "package main\n\nimport \"fmt\"\nfunc main() {\n fmt.Println(\"Hello, World!\");\n}"}'
```

4. Nginx Service

```bash
curl http://${host_ip}:${NGINX_PORT}/v1/codetrans \
-H "Content-Type: application/json" \
-d '{"language_from": "Golang","language_to": "Python","source_code": "package main\n\nimport \"fmt\"\nfunc main() {\n fmt.Println(\"Hello, World!\");\n}"}'
```
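
Both requests should return the same translated Python code, since Nginx simply proxies `/v1/codetrans` to the backend on port 7777. A status-code check is a quick way to confirm the proxy path (a minimal sketch):

```bash
# Expect "200" if the Nginx proxy reaches the backend successfully.
curl -s -o /dev/null -w "%{http_code}\n" \
    http://${host_ip}:${NGINX_PORT}/v1/codetrans \
    -H "Content-Type: application/json" \
    -d '{"language_from": "Golang","language_to": "Python","source_code": "package main\n\nimport \"fmt\"\nfunc main() {\n fmt.Println(\"Hello, World!\");\n}"}'
```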

## 🚀 Launch the UI

### Launch with the original port

Open this URL `http://{host_ip}:5173` in your browser to access the frontend.

### Launch with Nginx

If you want to launch the UI using Nginx, open this URL: `http://{host_ip}:{NGINX_PORT}` in your browser to access the frontend.
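
The Nginx container routes traffic using the `FRONTEND_SERVICE_*` and `BACKEND_SERVICE_*` variables exported earlier; the actual config template ships with GenAIComps in `comps/nginx/docker`. Conceptually, the rendered configuration resembles the following sketch (illustrative only, not the shipped file):

```bash
# Illustrative sketch only -- the real template lives in GenAIComps/comps/nginx/docker.
# Conceptually, the container renders a server block that sends UI traffic to the
# frontend and API traffic to the CodeTrans backend:
cat <<'EOF'
server {
    listen 80;

    # UI traffic goes to the Svelte frontend
    location / {
        proxy_pass http://${FRONTEND_SERVICE_IP}:${FRONTEND_SERVICE_PORT};
    }

    # API traffic goes to the CodeTrans megaservice
    location /v1/${BACKEND_SERVICE_NAME} {
        proxy_pass http://${BACKEND_SERVICE_IP}:${BACKEND_SERVICE_PORT};
    }
}
EOF
```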

![image](https://github.com/intel-ai-tce/GenAIExamples/assets/21761437/71214938-819c-4979-89cb-c03d937cd7b5)

Here is an example of translating a code snippet.
19 changes: 19 additions & 0 deletions CodeTrans/docker/gaudi/compose.yaml
@@ -64,6 +64,25 @@ services:
- BASE_URL=${BACKEND_SERVICE_ENDPOINT}
ipc: host
restart: always
codetrans-gaudi-nginx-server:
image: ${REGISTRY:-opea}/nginx:${TAG:-latest}
container_name: codetrans-gaudi-nginx-server
depends_on:
- codetrans-gaudi-backend-server
- codetrans-gaudi-ui-server
ports:
- "${NGINX_PORT:-80}:80"
environment:
- no_proxy=${no_proxy}
- https_proxy=${https_proxy}
- http_proxy=${http_proxy}
- FRONTEND_SERVICE_IP=${FRONTEND_SERVICE_IP}
- FRONTEND_SERVICE_PORT=${FRONTEND_SERVICE_PORT}
- BACKEND_SERVICE_NAME=${BACKEND_SERVICE_NAME}
- BACKEND_SERVICE_IP=${BACKEND_SERVICE_IP}
- BACKEND_SERVICE_PORT=${BACKEND_SERVICE_PORT}
ipc: host
restart: always

networks:
default:
5 changes: 5 additions & 0 deletions CodeTrans/docker/set_env.sh
@@ -9,3 +9,8 @@ export TGI_LLM_ENDPOINT="http://${host_ip}:8008"
export MEGA_SERVICE_HOST_IP=${host_ip}
export LLM_SERVICE_HOST_IP=${host_ip}
export BACKEND_SERVICE_ENDPOINT="http://${host_ip}:7777/v1/codetrans"
export FRONTEND_SERVICE_IP=${host_ip}
export FRONTEND_SERVICE_PORT=5173
export BACKEND_SERVICE_NAME=codetrans
export BACKEND_SERVICE_IP=${host_ip}
export BACKEND_SERVICE_PORT=7777
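
Note that `NGINX_PORT` is not set by this script; the platform READMEs export it separately. A typical setup sequence might therefore look like the following sketch (placeholder values shown):

```bash
# Run from CodeTrans/docker/<platform>, e.g. CodeTrans/docker/xeon
export host_ip="192.168.1.1"                    # your machine's external IP
export HUGGINGFACEHUB_API_TOKEN="Your_HF_Token" # your Hugging Face API token
export NGINX_PORT=80                            # host port for the Nginx proxy
source ../set_env.sh
```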
69 changes: 57 additions & 12 deletions CodeTrans/docker/xeon/README.md
@@ -42,29 +42,50 @@ cd GenAIExamples/CodeTrans/docker/ui
docker build -t opea/codetrans-ui:latest --build-arg https_proxy=$https_proxy --build-arg http_proxy=$http_proxy -f ./docker/Dockerfile .
```

### 5. Build Nginx Docker Image

```bash
cd GenAIComps/comps/nginx/docker
docker build -t opea/nginx:latest --build-arg https_proxy=$https_proxy --build-arg http_proxy=$http_proxy -f ./Dockerfile .
```

Then run the command `docker images`; you should see the following Docker images:

- `opea/llm-tgi:latest`
- `opea/codetrans:latest`
- `opea/codetrans-ui:latest`
- `opea/nginx:latest`

## 🚀 Start Microservices

### Setup Environment Variables

Since `compose.yaml` consumes some environment variables, you need to set them up in advance, as shown below. Notice that `LLM_MODEL_ID` indicates the LLM model used for the TGI service.
To set up environment variables for deploying Code Translation services, follow these steps:

```bash
export no_proxy=${your_no_proxy}
export http_proxy=${your_http_proxy}
export https_proxy=${your_https_proxy}
export LLM_MODEL_ID="HuggingFaceH4/mistral-7b-grok"
export TGI_LLM_ENDPOINT="http://${host_ip}:8008"
export HUGGINGFACEHUB_API_TOKEN=${your_hf_api_token}
export MEGA_SERVICE_HOST_IP=${host_ip}
export LLM_SERVICE_HOST_IP=${host_ip}
export BACKEND_SERVICE_ENDPOINT="http://${host_ip}:7777/v1/codetrans"
```
1. Set the required environment variables:

```bash
# Example: host_ip="192.168.1.1"
export host_ip="External_Public_IP"
# Example: no_proxy="localhost, 127.0.0.1, 192.168.1.1"
export no_proxy="Your_No_Proxy"
export HUGGINGFACEHUB_API_TOKEN="Your_Huggingface_API_Token"
# Example: NGINX_PORT=80
export NGINX_PORT=${your_nginx_port}
```

2. If you are in a proxy environment, also set the proxy-related environment variables:

```bash
export http_proxy="Your_HTTP_Proxy"
export https_proxy="Your_HTTPs_Proxy"
```

3. Set up other environment variables:

```bash
source ../set_env.sh
```

### Start Microservice Docker Containers

@@ -100,3 +121,27 @@ curl http://${host_ip}:7777/v1/codetrans \
-H "Content-Type: application/json" \
-d '{"language_from": "Golang","language_to": "Python","source_code": "package main\n\nimport \"fmt\"\nfunc main() {\n fmt.Println(\"Hello, World!\");\n}"}'
```

4. Nginx Service

```bash
curl http://${host_ip}:${NGINX_PORT}/v1/codetrans \
-H "Content-Type: application/json" \
-d '{"language_from": "Golang","language_to": "Python","source_code": "package main\n\nimport \"fmt\"\nfunc main() {\n fmt.Println(\"Hello, World!\");\n}"}'
```

## 🚀 Launch the UI

### Launch with the original port

Open this URL `http://{host_ip}:5173` in your browser to access the frontend.

### Launch with Nginx

If you want to launch the UI using Nginx, open this URL: `http://{host_ip}:{NGINX_PORT}` in your browser to access the frontend.
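
If the page does not load, it may help to confirm the Nginx container is running and inspect its logs (a quick troubleshooting sketch; the container name matches the `compose.yaml` above):

```bash
# Verify the Nginx container is up, then tail its recent logs
docker ps --filter name=codetrans-xeon-nginx-server
docker logs --tail 50 codetrans-xeon-nginx-server
```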

![image](https://github.com/intel-ai-tce/GenAIExamples/assets/21761437/71214938-819c-4979-89cb-c03d937cd7b5)

Here is an example of translating a code snippet.

![image](https://github.com/intel-ai-tce/GenAIExamples/assets/21761437/be543e96-ddcd-4ee0-9f2c-4e99fee77e37)
20 changes: 20 additions & 0 deletions CodeTrans/docker/xeon/compose.yaml
@@ -59,6 +59,26 @@ services:
- BASE_URL=${BACKEND_SERVICE_ENDPOINT}
ipc: host
restart: always
codetrans-xeon-nginx-server:
image: ${REGISTRY:-opea}/nginx:${TAG:-latest}
container_name: codetrans-xeon-nginx-server
depends_on:
- codetrans-xeon-backend-server
- codetrans-xeon-ui-server
ports:
- "${NGINX_PORT:-80}:80"
environment:
- no_proxy=${no_proxy}
- https_proxy=${https_proxy}
- http_proxy=${http_proxy}
- FRONTEND_SERVICE_IP=${FRONTEND_SERVICE_IP}
- FRONTEND_SERVICE_PORT=${FRONTEND_SERVICE_PORT}
- BACKEND_SERVICE_NAME=${BACKEND_SERVICE_NAME}
- BACKEND_SERVICE_IP=${BACKEND_SERVICE_IP}
- BACKEND_SERVICE_PORT=${BACKEND_SERVICE_PORT}
ipc: host
restart: always

networks:
default:
driver: bridge
17 changes: 16 additions & 1 deletion CodeTrans/tests/test_codetrans_on_gaudi.sh
@@ -19,7 +19,7 @@ function build_docker_images() {
git clone https://github.com/opea-project/GenAIComps.git

echo "Build all the images with --no-cache, check docker_image_build.log for details..."
service_list="codetrans codetrans-ui llm-tgi"
service_list="codetrans codetrans-ui llm-tgi nginx"
docker compose -f docker_build_compose.yaml build ${service_list} --no-cache > ${LOG_PATH}/docker_image_build.log

docker pull ghcr.io/huggingface/tgi-gaudi:2.0.1
@@ -37,6 +37,12 @@ function start_services() {
export MEGA_SERVICE_HOST_IP=${ip_address}
export LLM_SERVICE_HOST_IP=${ip_address}
export BACKEND_SERVICE_ENDPOINT="http://${ip_address}:7777/v1/codetrans"
export FRONTEND_SERVICE_IP=${ip_address}
export FRONTEND_SERVICE_PORT=5173
export BACKEND_SERVICE_NAME=codetrans
export BACKEND_SERVICE_IP=${ip_address}
export BACKEND_SERVICE_PORT=7777
export NGINX_PORT=80

sed -i "s/backend_address/$ip_address/g" $WORKPATH/docker/ui/svelte/.env

@@ -108,6 +114,15 @@ function validate_megaservice() {
"mega-codetrans" \
"codetrans-gaudi-backend-server" \
'{"language_from": "Golang","language_to": "Python","source_code": "package main\n\nimport \"fmt\"\nfunc main() {\n fmt.Println(\"Hello, World!\");\n}"}'

# test the megaservice via nginx
validate_services \
"${ip_address}:80/v1/codetrans" \
"print" \
"mega-codetrans-nginx" \
"codetrans-gaudi-nginx-server" \
'{"language_from": "Golang","language_to": "Python","source_code": "package main\n\nimport \"fmt\"\nfunc main() {\n fmt.Println(\"Hello, World!\");\n}"}'

}

function validate_frontend() {
18 changes: 16 additions & 2 deletions CodeTrans/tests/test_codetrans_on_xeon.sh
@@ -19,10 +19,10 @@ function build_docker_images() {
git clone https://github.com/opea-project/GenAIComps.git

echo "Build all the images with --no-cache, check docker_image_build.log for details..."
service_list="codetrans codetrans-ui llm-tgi"
service_list="codetrans codetrans-ui llm-tgi nginx"
docker compose -f docker_build_compose.yaml build ${service_list} --no-cache > ${LOG_PATH}/docker_image_build.log

docker pull ghcr.io/huggingface/text-generation-inference:1.4
docker pull ghcr.io/huggingface/text-generation-inference:sha-e4201f4-intel-cpu
docker images && sleep 1s
}

@@ -36,6 +36,12 @@ function start_services() {
export MEGA_SERVICE_HOST_IP=${ip_address}
export LLM_SERVICE_HOST_IP=${ip_address}
export BACKEND_SERVICE_ENDPOINT="http://${ip_address}:7777/v1/codetrans"
export FRONTEND_SERVICE_IP=${ip_address}
export FRONTEND_SERVICE_PORT=5173
export BACKEND_SERVICE_NAME=codetrans
export BACKEND_SERVICE_IP=${ip_address}
export BACKEND_SERVICE_PORT=7777
export NGINX_PORT=80

sed -i "s/backend_address/$ip_address/g" $WORKPATH/docker/ui/svelte/.env

@@ -109,6 +115,14 @@ function validate_megaservice() {
"codetrans-xeon-backend-server" \
'{"language_from": "Golang","language_to": "Python","source_code": "package main\n\nimport \"fmt\"\nfunc main() {\n fmt.Println(\"Hello, World!\");\n}"}'

# test the megaservice via nginx
validate_services \
"${ip_address}:80/v1/codetrans" \
"print" \
"mega-codetrans-nginx" \
"codetrans-xeon-nginx-server" \
'{"language_from": "Golang","language_to": "Python","source_code": "package main\n\nimport \"fmt\"\nfunc main() {\n fmt.Println(\"Hello, World!\");\n}"}'

}

function validate_frontend() {