diff --git a/ChatQnA/docker_compose/intel/cpu/xeon/README.md b/ChatQnA/docker_compose/intel/cpu/xeon/README.md
index 5eca0d284..d3540bcb8 100644
--- a/ChatQnA/docker_compose/intel/cpu/xeon/README.md
+++ b/ChatQnA/docker_compose/intel/cpu/xeon/README.md
@@ -216,7 +216,14 @@ cd GenAIExamples/ChatQnA/ui
 docker build --no-cache -t opea/chatqna-conversation-ui:latest --build-arg https_proxy=$https_proxy --build-arg http_proxy=$http_proxy -f ./docker/Dockerfile.react .
 ```

-Then run the command `docker images`, you will have the following 7 Docker Images:
+### 9. Build Nginx Docker Image
+
+```bash
+cd GenAIComps
+docker build -t opea/nginx:latest --build-arg https_proxy=$https_proxy --build-arg http_proxy=$http_proxy -f comps/nginx/Dockerfile .
+```
+
+Then run the command `docker images`; you will have the following 8 Docker images:

 1. `opea/dataprep-redis:latest`
 2. `opea/embedding-tei:latest`
@@ -225,6 +232,7 @@ Then run the command `docker images`, you will have the following 7 Docker Image
 5. `opea/llm-tgi:latest` or `opea/llm-vllm:latest`
 6. `opea/chatqna:latest` or `opea/chatqna-without-rerank:latest`
 7. `opea/chatqna-ui:latest`
+8. `opea/nginx:latest`

 ## 🚀 Start Microservices

@@ -267,57 +275,30 @@ For users in China who are unable to download models directly from Huggingface,

 ### Setup Environment Variables

-Since the `compose.yaml` will consume some environment variables, you need to setup them in advance as below.
-
-**Export the value of the public IP address of your Xeon server to the `host_ip` environment variable**
-
-> Change the External_Public_IP below with the actual IPV4 value
-
-```
-export host_ip="External_Public_IP"
-```
-
-**Export the value of your Huggingface API token to the `your_hf_api_token` environment variable**
-
-> Change the Your_Huggingface_API_Token below with tyour actual Huggingface API Token value
+1. Set the required environment variables:

-```
-export your_hf_api_token="Your_Huggingface_API_Token"
-```
+   ```bash
+   # Example: host_ip="192.168.1.1"
+   export host_ip="External_Public_IP"
+   # Example: no_proxy="localhost, 127.0.0.1, 192.168.1.1"
+   export no_proxy="Your_No_Proxy"
+   export HUGGINGFACEHUB_API_TOKEN="Your_Huggingface_API_Token"
+   # Example: NGINX_PORT=80
+   export NGINX_PORT=${your_nginx_port}
+   ```

-**Append the value of the public IP address to the no_proxy list**
+2. If you are in a proxy environment, also set the proxy-related environment variables:

-```bash
-export your_no_proxy=${your_no_proxy},"External_Public_IP"
-```
+   ```bash
+   export http_proxy="Your_HTTP_Proxy"
+   export https_proxy="Your_HTTPS_Proxy"
+   ```

-```bash
-export no_proxy=${your_no_proxy}
-export http_proxy=${your_http_proxy}
-export https_proxy=${your_http_proxy}
-export EMBEDDING_MODEL_ID="BAAI/bge-base-en-v1.5"
-export RERANK_MODEL_ID="BAAI/bge-reranker-base"
-export LLM_MODEL_ID="Intel/neural-chat-7b-v3-3"
-export TEI_EMBEDDING_ENDPOINT="http://${host_ip}:6006"
-export TEI_RERANKING_ENDPOINT="http://${host_ip}:8808"
-export TGI_LLM_ENDPOINT="http://${host_ip}:9009"
-export vLLM_LLM_ENDPOINT="http://${host_ip}:9009"
-export LLM_SERVICE_PORT=9000
-export REDIS_URL="redis://${host_ip}:6379"
-export INDEX_NAME="rag-redis"
-export HUGGINGFACEHUB_API_TOKEN=${your_hf_api_token}
-export MEGA_SERVICE_HOST_IP=${host_ip}
-export EMBEDDING_SERVICE_HOST_IP=${host_ip}
-export RETRIEVER_SERVICE_HOST_IP=${host_ip}
-export RERANK_SERVICE_HOST_IP=${host_ip}
-export LLM_SERVICE_HOST_IP=${host_ip}
-export BACKEND_SERVICE_ENDPOINT="http://${host_ip}:8888/v1/chatqna"
-export DATAPREP_SERVICE_ENDPOINT="http://${host_ip}:6007/v1/dataprep"
-export DATAPREP_GET_FILE_ENDPOINT="http://${host_ip}:6007/v1/dataprep/get_file"
-export DATAPREP_DELETE_FILE_ENDPOINT="http://${host_ip}:6007/v1/dataprep/delete_file"
-```
+3. Set up other environment variables:

-Note: Please replace with `host_ip` with you external IP address, do not use localhost.
+   ```bash
+   source ./set_env.sh
+   ```

 ### Start all the services Docker Containers

@@ -449,7 +430,7 @@ docker compose -f compose_vllm.yaml up -d

    ```bash
    # vLLM Service
-   curl http://${your_ip}:9000/v1/chat/completions \
+   curl http://${host_ip}:9000/v1/chat/completions \
    -X POST \
    -d '{"query":"What is Deep Learning?","max_tokens":17,"top_p":1,"temperature":0.7,"frequency_penalty":0,"presence_penalty":0, "streaming":false}' \
    -H 'Content-Type: application/json'
@@ -465,88 +446,98 @@ docker compose -f compose_vllm.yaml up -d
    }'
    ```

-9. Dataprep Microservice(Optional)
+9. Nginx Service
+
+   ```bash
+   curl http://${host_ip}:${NGINX_PORT}/v1/chatqna \
+     -H "Content-Type: application/json" \
+     -d '{"messages": "What is the revenue of Nike in 2023?"}'
+   ```

-   If you want to update the default knowledge base, you can use the following commands:
+10. Dataprep Microservice (Optional)

-   Update Knowledge Base via Local File [nke-10k-2023.pdf](https://github.com/opea-project/GenAIComps/blob/main/comps/retrievers/redis/data/nke-10k-2023.pdf). Or
-   click [here](https://raw.githubusercontent.com/opea-project/GenAIComps/main/comps/retrievers/redis/data/nke-10k-2023.pdf) to download the file via any web browser.
-   Or run this command to get the file on a terminal.
+If you want to update the default knowledge base, you can use the following commands:
+
+Update Knowledge Base via Local File [nke-10k-2023.pdf](https://github.com/opea-project/GenAIComps/blob/main/comps/retrievers/redis/data/nke-10k-2023.pdf). Or
+click [here](https://raw.githubusercontent.com/opea-project/GenAIComps/main/comps/retrievers/redis/data/nke-10k-2023.pdf) to download the file via any web browser.
+Or run this command to get the file on a terminal.
-   ```bash
-   wget https://raw.githubusercontent.com/opea-project/GenAIComps/main/comps/retrievers/redis/data/nke-10k-2023.pdf
-
-   ```
-
-   Upload:
+```bash
+wget https://raw.githubusercontent.com/opea-project/GenAIComps/main/comps/retrievers/redis/data/nke-10k-2023.pdf
+
+```
+
+Upload:

-   ```bash
-   curl -X POST "http://${host_ip}:6007/v1/dataprep" \
-   -H "Content-Type: multipart/form-data" \
-   -F "files=@./nke-10k-2023.pdf"
-   ```
+```bash
+curl -X POST "http://${host_ip}:6007/v1/dataprep" \
+  -H "Content-Type: multipart/form-data" \
+  -F "files=@./nke-10k-2023.pdf"
+```

-   This command updates a knowledge base by uploading a local file for processing. Update the file path according to your environment.
+This command updates a knowledge base by uploading a local file for processing. Update the file path according to your environment.

-   Add Knowledge Base via HTTP Links:
+Add Knowledge Base via HTTP Links:

-   ```bash
-   curl -X POST "http://${host_ip}:6007/v1/dataprep" \
-   -H "Content-Type: multipart/form-data" \
-   -F 'link_list=["https://opea.dev"]'
-   ```
+```bash
+curl -X POST "http://${host_ip}:6007/v1/dataprep" \
+  -H "Content-Type: multipart/form-data" \
+  -F 'link_list=["https://opea.dev"]'
+```

-   This command updates a knowledge base by submitting a list of HTTP links for processing.
+This command updates a knowledge base by submitting a list of HTTP links for processing.

-   Also, you are able to get the file list that you uploaded:
+You can also get the list of files that you uploaded:

-   ```bash
-   curl -X POST "http://${host_ip}:6007/v1/dataprep/get_file" \
-   -H "Content-Type: application/json"
-   ```
+```bash
+curl -X POST "http://${host_ip}:6007/v1/dataprep/get_file" \
+  -H "Content-Type: application/json"
+```

-   Then you will get the response JSON like this. Notice that the returned `name`/`id` of the uploaded link is `https://xxx.txt`.
-
-   ```json
-   [
-     {
-       "name": "nke-10k-2023.pdf",
-       "id": "nke-10k-2023.pdf",
-       "type": "File",
-       "parent": ""
-     },
-     {
-       "name": "https://opea.dev.txt",
-       "id": "https://opea.dev.txt",
-       "type": "File",
-       "parent": ""
-     }
-   ]
-   ```
+Then you will get a JSON response like this. Notice that the returned `name`/`id` of the uploaded link is `https://xxx.txt`.
+
+```json
+[
+  {
+    "name": "nke-10k-2023.pdf",
+    "id": "nke-10k-2023.pdf",
+    "type": "File",
+    "parent": ""
+  },
+  {
+    "name": "https://opea.dev.txt",
+    "id": "https://opea.dev.txt",
+    "type": "File",
+    "parent": ""
+  }
+]
+```

-   To delete the file/link you uploaded:
+To delete the file/link you uploaded:

-   The `file_path` here should be the `id` get from `/v1/dataprep/get_file` API.
+The `file_path` here should be the `id` obtained from the `/v1/dataprep/get_file` API.
-   ```bash
-   # delete link
-   curl -X POST "http://${host_ip}:6007/v1/dataprep/delete_file" \
-   -d '{"file_path": "https://opea.dev.txt"}' \
-   -H "Content-Type: application/json"
-
-   # delete file
-   curl -X POST "http://${host_ip}:6007/v1/dataprep/delete_file" \
-   -d '{"file_path": "nke-10k-2023.pdf"}' \
-   -H "Content-Type: application/json"
-
-   # delete all uploaded files and links
-   curl -X POST "http://${host_ip}:6007/v1/dataprep/delete_file" \
-   -d '{"file_path": "all"}' \
-   -H "Content-Type: application/json"
-   ```
+```bash
+# delete link
+curl -X POST "http://${host_ip}:6007/v1/dataprep/delete_file" \
+  -d '{"file_path": "https://opea.dev.txt"}' \
+  -H "Content-Type: application/json"
+
+# delete file
+curl -X POST "http://${host_ip}:6007/v1/dataprep/delete_file" \
+  -d '{"file_path": "nke-10k-2023.pdf"}' \
+  -H "Content-Type: application/json"
+
+# delete all uploaded files and links
+curl -X POST "http://${host_ip}:6007/v1/dataprep/delete_file" \
+  -d '{"file_path": "all"}' \
+  -H "Content-Type: application/json"
+```

 ## 🚀 Launch the UI

+### Launch with the default port
+
 To access the frontend, open the following URL in your browser: http://{host_ip}:5173. By default, the UI runs on port 5173 internally. If you prefer to use a different host port to access the frontend, you can modify the port mapping in the `compose.yaml` file as shown below:

 ```yaml
@@ -557,6 +548,10 @@ To access the frontend, open the following URL in your browser: http://{host_ip}
   - "80:5173"
 ```

+### Launch with Nginx
+
+To launch the UI through Nginx, open `http://${host_ip}:${NGINX_PORT}` in your browser to access the frontend.
+
 ## 🚀 Launch the Conversational UI (Optional)

 To access the Conversational UI (react based) frontend, modify the UI service in the `compose.yaml` file. Replace `chaqna-xeon-ui-server` service with the `chatqna-xeon-conversation-ui-server` service as per the config below:
diff --git a/ChatQnA/docker_compose/intel/cpu/xeon/compose.yaml b/ChatQnA/docker_compose/intel/cpu/xeon/compose.yaml
index ef64a3098..6f253093d 100644
--- a/ChatQnA/docker_compose/intel/cpu/xeon/compose.yaml
+++ b/ChatQnA/docker_compose/intel/cpu/xeon/compose.yaml
@@ -178,6 +178,25 @@ services:
       - DELETE_FILE=${DATAPREP_DELETE_FILE_ENDPOINT}
     ipc: host
     restart: always
+  chaqna-xeon-nginx-server:
+    image: ${REGISTRY:-opea}/nginx:${TAG:-latest}
+    container_name: chaqna-xeon-nginx-server
+    depends_on:
+      - chaqna-xeon-backend-server
+      - chaqna-xeon-ui-server
+    ports:
+      - "${NGINX_PORT:-80}:80"
+    environment:
+      - no_proxy=${no_proxy}
+      - https_proxy=${https_proxy}
+      - http_proxy=${http_proxy}
+      - FRONTEND_SERVICE_IP=${FRONTEND_SERVICE_IP}
+      - FRONTEND_SERVICE_PORT=${FRONTEND_SERVICE_PORT}
+      - BACKEND_SERVICE_NAME=${BACKEND_SERVICE_NAME}
+      - BACKEND_SERVICE_IP=${BACKEND_SERVICE_IP}
+      - BACKEND_SERVICE_PORT=${BACKEND_SERVICE_PORT}
+    ipc: host
+    restart: always

 networks:
   default:
diff --git a/ChatQnA/docker_compose/intel/cpu/xeon/set_env.sh b/ChatQnA/docker_compose/intel/cpu/xeon/set_env.sh
index d2cfd86d3..3373eb9fd 100644
--- a/ChatQnA/docker_compose/intel/cpu/xeon/set_env.sh
+++ b/ChatQnA/docker_compose/intel/cpu/xeon/set_env.sh
@@ -22,3 +22,8 @@ export BACKEND_SERVICE_ENDPOINT="http://${host_ip}:8888/v1/chatqna"
 export DATAPREP_SERVICE_ENDPOINT="http://${host_ip}:6007/v1/dataprep"
 export DATAPREP_GET_FILE_ENDPOINT="http://${host_ip}:6007/v1/dataprep/get_file"
 export DATAPREP_DELETE_FILE_ENDPOINT="http://${host_ip}:6007/v1/dataprep/delete_file"
+export FRONTEND_SERVICE_IP=${host_ip}
+export FRONTEND_SERVICE_PORT=5173
+export BACKEND_SERVICE_NAME=chatqna
+export BACKEND_SERVICE_IP=${host_ip}
+export BACKEND_SERVICE_PORT=8888
diff --git a/ChatQnA/docker_compose/intel/hpu/gaudi/README.md b/ChatQnA/docker_compose/intel/hpu/gaudi/README.md
index ec8e3ad09..fab6f1046 100644
--- a/ChatQnA/docker_compose/intel/hpu/gaudi/README.md
+++ b/ChatQnA/docker_compose/intel/hpu/gaudi/README.md
@@ -192,7 +192,14 @@ cd GenAIExamples/ChatQnA/ui
 docker build --no-cache -t opea/chatqna-conversation-ui:latest --build-arg https_proxy=$https_proxy --build-arg http_proxy=$http_proxy -f ./docker/Dockerfile.react .
 ```

-Then run the command `docker images`, you will have the following 7 Docker Images:
+### 10. Build Nginx Docker Image
+
+```bash
+cd GenAIComps
+docker build -t opea/nginx:latest --build-arg https_proxy=$https_proxy --build-arg http_proxy=$http_proxy -f comps/nginx/Dockerfile .
+```
+
+Then run the command `docker images`; you will have the following 8 Docker images:

 - `opea/embedding-tei:latest`
 - `opea/retriever-redis:latest`
@@ -201,6 +208,7 @@ Then run the command `docker images`, you will have the following 7 Docker Image
 - `opea/dataprep-redis:latest`
 - `opea/chatqna:latest` or `opea/chatqna-guardrails:latest` or `opea/chatqna-without-rerank:latest`
 - `opea/chatqna-ui:latest`
+- `opea/nginx:latest`

 If Conversation React UI is built, you will find one more image:

@@ -251,51 +259,30 @@ For users in China who are unable to download models directly from Huggingface,

 ### Setup Environment Variables

-Since the `compose.yaml` will consume some environment variables, you need to setup them in advance as below.
+1. Set the required environment variables:

-```bash
-export no_proxy=${your_no_proxy}
-export http_proxy=${your_http_proxy}
-export https_proxy=${your_http_proxy}
-export EMBEDDING_MODEL_ID="BAAI/bge-base-en-v1.5"
-export RERANK_MODEL_ID="BAAI/bge-reranker-base"
-export LLM_MODEL_ID="Intel/neural-chat-7b-v3-3"
-export LLM_MODEL_ID_NAME="neural-chat-7b-v3-3"
-export TEI_EMBEDDING_ENDPOINT="http://${host_ip}:8090"
-export TEI_RERANKING_ENDPOINT="http://${host_ip}:8808"
-export TGI_LLM_ENDPOINT="http://${host_ip}:8005"
-export vLLM_LLM_ENDPOINT="http://${host_ip}:8007"
-export vLLM_RAY_LLM_ENDPOINT="http://${host_ip}:8006"
-export LLM_SERVICE_PORT=9000
-export REDIS_URL="redis://${host_ip}:6379"
-export INDEX_NAME="rag-redis"
-export HUGGINGFACEHUB_API_TOKEN=${your_hf_api_token}
-export MEGA_SERVICE_HOST_IP=${host_ip}
-export EMBEDDING_SERVICE_HOST_IP=${host_ip}
-export RETRIEVER_SERVICE_HOST_IP=${host_ip}
-export RERANK_SERVICE_HOST_IP=${host_ip}
-export LLM_SERVICE_HOST_IP=${host_ip}
-export BACKEND_SERVICE_ENDPOINT="http://${host_ip}:8888/v1/chatqna"
-export DATAPREP_SERVICE_ENDPOINT="http://${host_ip}:6007/v1/dataprep"
-export DATAPREP_GET_FILE_ENDPOINT="http://${host_ip}:6007/v1/dataprep/get_file"
-export DATAPREP_DELETE_FILE_ENDPOINT="http://${host_ip}:6007/v1/dataprep/delete_file"
-
-export llm_service_devices=all
-export tei_embedding_devices=all
-```
+   ```bash
+   # Example: host_ip="192.168.1.1"
+   export host_ip="External_Public_IP"
+   # Example: no_proxy="localhost, 127.0.0.1, 192.168.1.1"
+   export no_proxy="Your_No_Proxy"
+   export HUGGINGFACEHUB_API_TOKEN="Your_Huggingface_API_Token"
+   # Example: NGINX_PORT=80
+   export NGINX_PORT=${your_nginx_port}
+   ```

-To specify the device ids, "llm_service_devices" and "tei_embedding_devices"` can be set as "0,1,2,3" alike. More info in [gaudi docs](https://docs.habana.ai/en/latest/Orchestration/Multiple_Tenants_on_HPU/Multiple_Dockers_each_with_Single_Workload.html).
+2. If you are in a proxy environment, also set the proxy-related environment variables:

-If guardrails microservice is enabled in the pipeline, the below environment variables are necessary to be set.
+   ```bash
+   export http_proxy="Your_HTTP_Proxy"
+   export https_proxy="Your_HTTPS_Proxy"
+   ```

-```bash
-export GURADRAILS_MODEL_ID="meta-llama/Meta-Llama-Guard-2-8B"
-export SAFETY_GUARD_MODEL_ID="meta-llama/Meta-Llama-Guard-2-8B"
-export SAFETY_GUARD_ENDPOINT="http://${host_ip}:8088"
-export GUARDRAIL_SERVICE_HOST_IP=${host_ip}
-```
+3. Set up other environment variables:

-Note: Please replace `host_ip` with your external IP address, do **NOT** use localhost.
+   ```bash
+   source ./set_env.sh
+   ```

 ### Start all the services Docker Containers

@@ -463,7 +450,7 @@ For validation details, please refer to [how-to-validate_service](./how_to_valid

    ```bash
    # vLLM-on-Ray Service
-   curl http://${your_ip}:9000/v1/chat/completions \
+   curl http://${host_ip}:9000/v1/chat/completions \
    -X POST \
    -d '{"query":"What is Deep Learning?","max_tokens":17,"presence_penalty":1.03,"streaming":false}' \
    -H 'Content-Type: application/json'
@@ -479,74 +466,82 @@ For validation details, please refer to [how-to-validate_service](./how_to_valid
    }'
    ```

-9. Dataprep Microservice(Optional)
-
-   If you want to update the default knowledge base, you can use the following commands:
-
-   Update Knowledge Base via Local File Upload:
+9. Nginx Service

    ```bash
-   curl -X POST "http://${host_ip}:6007/v1/dataprep" \
-   -H "Content-Type: multipart/form-data" \
-   -F "files=@./nke-10k-2023.pdf"
+   curl http://${host_ip}:${NGINX_PORT}/v1/chatqna \
+     -H "Content-Type: application/json" \
+     -d '{"messages": "What is the revenue of Nike in 2023?"}'
    ```

-   This command updates a knowledge base by uploading a local file for processing. Update the file path according to your environment.
+10. Dataprep Microservice (Optional)

-   Add Knowledge Base via HTTP Links:
+If you want to update the default knowledge base, you can use the following commands:

-   ```bash
-   curl -X POST "http://${host_ip}:6007/v1/dataprep" \
-   -H "Content-Type: multipart/form-data" \
-   -F 'link_list=["https://opea.dev"]'
-   ```
+Update Knowledge Base via Local File Upload:

-   This command updates a knowledge base by submitting a list of HTTP links for processing.
+```bash
+curl -X POST "http://${host_ip}:6007/v1/dataprep" \
+  -H "Content-Type: multipart/form-data" \
+  -F "files=@./nke-10k-2023.pdf"
+```

-   Also, you are able to get the file/link list that you uploaded:
+This command updates a knowledge base by uploading a local file for processing. Update the file path according to your environment.

-   ```bash
-   curl -X POST "http://${host_ip}:6007/v1/dataprep/get_file" \
-   -H "Content-Type: application/json"
-   ```
+Add Knowledge Base via HTTP Links:

-   Then you will get the response JSON like this. Notice that the returned `name`/`id` of the uploaded link is `https://xxx.txt`.
-
-   ```json
-   [
-     {
-       "name": "nke-10k-2023.pdf",
-       "id": "nke-10k-2023.pdf",
-       "type": "File",
-       "parent": ""
-     },
-     {
-       "name": "https://opea.dev.txt",
-       "id": "https://opea.dev.txt",
-       "type": "File",
-       "parent": ""
-     }
-   ]
-   ```
+```bash
+curl -X POST "http://${host_ip}:6007/v1/dataprep" \
+  -H "Content-Type: multipart/form-data" \
+  -F 'link_list=["https://opea.dev"]'
+```

-   To delete the file/link you uploaded:
+This command updates a knowledge base by submitting a list of HTTP links for processing.

-   ```bash
-   # delete link
-   curl -X POST "http://${host_ip}:6007/v1/dataprep/delete_file" \
-   -d '{"file_path": "https://opea.dev.txt"}' \
-   -H "Content-Type: application/json"
-
-   # delete file
-   curl -X POST "http://${host_ip}:6007/v1/dataprep/delete_file" \
-   -d '{"file_path": "nke-10k-2023.pdf"}' \
-   -H "Content-Type: application/json"
-
-   # delete all uploaded files and links
-   curl -X POST "http://${host_ip}:6007/v1/dataprep/delete_file" \
-   -d '{"file_path": "all"}' \
-   -H "Content-Type: application/json"
-   ```
+You can also get the list of files/links that you uploaded:
+
+```bash
+curl -X POST "http://${host_ip}:6007/v1/dataprep/get_file" \
+  -H "Content-Type: application/json"
+```
+
+Then you will get a JSON response like this. Notice that the returned `name`/`id` of the uploaded link is `https://xxx.txt`.
+
+```json
+[
+  {
+    "name": "nke-10k-2023.pdf",
+    "id": "nke-10k-2023.pdf",
+    "type": "File",
+    "parent": ""
+  },
+  {
+    "name": "https://opea.dev.txt",
+    "id": "https://opea.dev.txt",
+    "type": "File",
+    "parent": ""
+  }
+]
+```
+
+To delete the file/link you uploaded:
+
+```bash
+# delete link
+curl -X POST "http://${host_ip}:6007/v1/dataprep/delete_file" \
+  -d '{"file_path": "https://opea.dev.txt"}' \
+  -H "Content-Type: application/json"
+
+# delete file
+curl -X POST "http://${host_ip}:6007/v1/dataprep/delete_file" \
+  -d '{"file_path": "nke-10k-2023.pdf"}' \
+  -H "Content-Type: application/json"
+
+# delete all uploaded files and links
+curl -X POST "http://${host_ip}:6007/v1/dataprep/delete_file" \
+  -d '{"file_path": "all"}' \
+  -H "Content-Type: application/json"
+```

 10. Guardrails (Optional)

@@ -559,6 +554,8 @@ curl http://${host_ip}:9090/v1/guardrails\

 ## 🚀 Launch the UI

+### Launch with the default port
+
 To access the frontend, open the following URL in your browser: http://{host_ip}:5173. By default, the UI runs on port 5173 internally. If you prefer to use a different host port to access the frontend, you can modify the port mapping in the `compose.yaml` file as shown below:

 ```yaml
@@ -569,11 +566,9 @@ To access the frontend, open the following URL in your browser: http://{host_ip}
   - "80:5173"
 ```

-![project-screenshot](../../../../assets/img/chat_ui_init.png)
+### Launch with Nginx

-Here is an example of running ChatQnA:
-
-![project-screenshot](../../../../assets/img/chat_ui_response.png)
+To launch the UI through Nginx, open `http://${host_ip}:${NGINX_PORT}` in your browser to access the frontend.

 ## 🚀 Launch the Conversational UI (Optional)

@@ -604,6 +599,12 @@ Once the services are up, open the following URL in your browser: http://{host_i
   - "80:80"
 ```

+![project-screenshot](../../../../assets/img/chat_ui_init.png)
+
+Here is an example of running ChatQnA:
+
+![project-screenshot](../../../../assets/img/chat_ui_response.png)
+
 Here is an example of running ChatQnA with Conversational UI (React):

 ![project-screenshot](../../../../assets/img/conversation_ui_response.png)
diff --git a/ChatQnA/docker_compose/intel/hpu/gaudi/compose.yaml b/ChatQnA/docker_compose/intel/hpu/gaudi/compose.yaml
index 6689efc6f..e5aa98713 100644
--- a/ChatQnA/docker_compose/intel/hpu/gaudi/compose.yaml
+++ b/ChatQnA/docker_compose/intel/hpu/gaudi/compose.yaml
@@ -187,6 +187,25 @@ services:
       - DELETE_FILE=${DATAPREP_DELETE_FILE_ENDPOINT}
     ipc: host
     restart: always
+  chaqna-gaudi-nginx-server:
+    image: ${REGISTRY:-opea}/nginx:${TAG:-latest}
+    container_name: chaqna-gaudi-nginx-server
+    depends_on:
+      - chaqna-gaudi-backend-server
+      - chaqna-gaudi-ui-server
+    ports:
+      - "${NGINX_PORT:-80}:80"
+    environment:
+      - no_proxy=${no_proxy}
+      - https_proxy=${https_proxy}
+      - http_proxy=${http_proxy}
+      - FRONTEND_SERVICE_IP=${FRONTEND_SERVICE_IP}
+      - FRONTEND_SERVICE_PORT=${FRONTEND_SERVICE_PORT}
+      - BACKEND_SERVICE_NAME=${BACKEND_SERVICE_NAME}
+      - BACKEND_SERVICE_IP=${BACKEND_SERVICE_IP}
+      - BACKEND_SERVICE_PORT=${BACKEND_SERVICE_PORT}
+    ipc: host
+    restart: always

 networks:
   default:
diff --git a/ChatQnA/docker_compose/intel/hpu/gaudi/set_env.sh b/ChatQnA/docker_compose/intel/hpu/gaudi/set_env.sh
index 968f49492..d3b805953 100644
--- a/ChatQnA/docker_compose/intel/hpu/gaudi/set_env.sh
+++ b/ChatQnA/docker_compose/intel/hpu/gaudi/set_env.sh
@@ -21,3 +21,8 @@ export BACKEND_SERVICE_ENDPOINT="http://${host_ip}:8888/v1/chatqna"
 export DATAPREP_SERVICE_ENDPOINT="http://${host_ip}:6007/v1/dataprep"
 export DATAPREP_GET_FILE_ENDPOINT="http://${host_ip}:6007/v1/dataprep/get_file"
 export DATAPREP_DELETE_FILE_ENDPOINT="http://${host_ip}:6007/v1/dataprep/delete_file"
+export FRONTEND_SERVICE_IP=${host_ip}
+export FRONTEND_SERVICE_PORT=5173
+export BACKEND_SERVICE_NAME=chatqna
+export BACKEND_SERVICE_IP=${host_ip}
+export BACKEND_SERVICE_PORT=8888
diff --git a/ChatQnA/docker_compose/nvidia/gpu/README.md b/ChatQnA/docker_compose/nvidia/gpu/README.md
index 7e3966a7f..dd21def27 100644
--- a/ChatQnA/docker_compose/nvidia/gpu/README.md
+++ b/ChatQnA/docker_compose/nvidia/gpu/README.md
@@ -134,7 +134,14 @@ docker build --no-cache -t opea/chatqna-react-ui:latest --build-arg https_proxy=
 cd ../../../..
 ```

-Then run the command `docker images`, you will have the following 7 Docker Images:
+### 10. Build Nginx Docker Image
+
+```bash
+cd GenAIComps
+docker build -t opea/nginx:latest --build-arg https_proxy=$https_proxy --build-arg http_proxy=$http_proxy -f comps/nginx/Dockerfile .
+```
+
+Then run the command `docker images`; you will have the following 8 Docker images:

 1. `opea/embedding-tei:latest`
 2. `opea/retriever-redis:latest`
@@ -142,8 +149,8 @@ Then run the command `docker images`, you will have the following 7 Docker Image
 4. `opea/llm-tgi:latest`
 5. `opea/dataprep-redis:latest`
 6. `opea/chatqna:latest`
-7. `opea/chatqna-ui:latest`
-8. `opea/chatqna-react-ui:latest`
+7. `opea/chatqna-ui:latest` or `opea/chatqna-react-ui:latest`
+8. `opea/nginx:latest`

 ## 🚀 Start MicroServices and MegaService

@@ -161,33 +168,30 @@ Change the `xxx_MODEL_ID` below for your needs.

 ### Setup Environment Variables

-Since the `compose.yaml` will consume some environment variables, you need to setup them in advance as below.
+1. Set the required environment variables:

-```bash
-export no_proxy=${your_no_proxy}
-export http_proxy=${your_http_proxy}
-export https_proxy=${your_http_proxy}
-export EMBEDDING_MODEL_ID="BAAI/bge-base-en-v1.5"
-export RERANK_MODEL_ID="BAAI/bge-reranker-base"
-export LLM_MODEL_ID="Intel/neural-chat-7b-v3-3"
-export TEI_EMBEDDING_ENDPOINT="http://${host_ip}:8090"
-export TEI_RERANKING_ENDPOINT="http://${host_ip}:8808"
-export TGI_LLM_ENDPOINT="http://${host_ip}:8008"
-export REDIS_URL="redis://${host_ip}:6379"
-export INDEX_NAME="rag-redis"
-export HUGGINGFACEHUB_API_TOKEN=${your_hf_api_token}
-export MEGA_SERVICE_HOST_IP=${host_ip}
-export EMBEDDING_SERVICE_HOST_IP=${host_ip}
-export RETRIEVER_SERVICE_HOST_IP=${host_ip}
-export RERANK_SERVICE_HOST_IP=${host_ip}
-export LLM_SERVICE_HOST_IP=${host_ip}
-export BACKEND_SERVICE_ENDPOINT="http://${host_ip}:8888/v1/chatqna"
-export DATAPREP_SERVICE_ENDPOINT="http://${host_ip}:6007/v1/dataprep"
-export DATAPREP_GET_FILE_ENDPOINT="http://${host_ip}:6007/v1/dataprep/get_file"
-export DATAPREP_DELETE_FILE_ENDPOINT="http://${host_ip}:6007/v1/dataprep/delete_file"
-```
+   ```bash
+   # Example: host_ip="192.168.1.1"
+   export host_ip="External_Public_IP"
+   # Example: no_proxy="localhost, 127.0.0.1, 192.168.1.1"
+   export no_proxy="Your_No_Proxy"
+   export HUGGINGFACEHUB_API_TOKEN="Your_Huggingface_API_Token"
+   # Example: NGINX_PORT=80
+   export NGINX_PORT=${your_nginx_port}
+   ```
+
+2. If you are in a proxy environment, also set the proxy-related environment variables:

-Note: Please replace with `host_ip` with you external IP address, do **NOT** use localhost.
+   ```bash
+   export http_proxy="Your_HTTP_Proxy"
+   export https_proxy="Your_HTTPS_Proxy"
+   ```
+
+3. Set up other environment variables:
+
+   ```bash
+   source ./set_env.sh
+   ```

 ### Start all the services Docker Containers

@@ -292,58 +296,68 @@ docker compose up -d
    }'
    ```

-9. Dataprep Microservice(Optional)
-
-   If you want to update the default knowledge base, you can use the following commands:
-
-   Update Knowledge Base via Local File Upload:
+9. Nginx Service

    ```bash
-   curl -X POST "http://${host_ip}:6007/v1/dataprep" \
-   -H "Content-Type: multipart/form-data" \
-   -F "files=@./nke-10k-2023.pdf"
+   curl http://${host_ip}:${NGINX_PORT}/v1/chatqna \
+     -H "Content-Type: application/json" \
+     -d '{"messages": "What is the revenue of Nike in 2023?"}'
    ```

-   This command updates a knowledge base by uploading a local file for processing. Update the file path according to your environment.
+10. Dataprep Microservice (Optional)

-   Add Knowledge Base via HTTP Links:
+If you want to update the default knowledge base, you can use the following commands:

-   ```bash
-   curl -X POST "http://${host_ip}:6007/v1/dataprep" \
-   -H "Content-Type: multipart/form-data" \
-   -F 'link_list=["https://opea.dev"]'
-   ```
+Update Knowledge Base via Local File Upload:

-   This command updates a knowledge base by submitting a list of HTTP links for processing.
+```bash
+curl -X POST "http://${host_ip}:6007/v1/dataprep" \
+  -H "Content-Type: multipart/form-data" \
+  -F "files=@./nke-10k-2023.pdf"
+```

-   Also, you are able to get the file list that you uploaded:
+This command updates a knowledge base by uploading a local file for processing. Update the file path according to your environment.

-   ```bash
-   curl -X POST "http://${host_ip}:6007/v1/dataprep/get_file" \
-   -H "Content-Type: application/json"
-   ```
+Add Knowledge Base via HTTP Links:

-   To delete the file/link you uploaded:
+```bash
+curl -X POST "http://${host_ip}:6007/v1/dataprep" \
+  -H "Content-Type: multipart/form-data" \
+  -F 'link_list=["https://opea.dev"]'
+```

-   ```bash
-   # delete link
-   curl -X POST "http://${host_ip}:6007/v1/dataprep/delete_file" \
-   -d '{"file_path": "https://opea.dev"}' \
-   -H "Content-Type: application/json"
-
-   # delete file
-   curl -X POST "http://${host_ip}:6007/v1/dataprep/delete_file" \
-   -d '{"file_path": "nke-10k-2023.pdf"}' \
-   -H "Content-Type: application/json"
-
-   # delete all uploaded files and links
-   curl -X POST "http://${host_ip}:6007/v1/dataprep/delete_file" \
-   -d '{"file_path": "all"}' \
-   -H "Content-Type: application/json"
-   ```
+This command updates a knowledge base by submitting a list of HTTP links for processing.
+
+You can also get the list of files that you uploaded:
+
+```bash
+curl -X POST "http://${host_ip}:6007/v1/dataprep/get_file" \
+  -H "Content-Type: application/json"
+```
+
+To delete the file/link you uploaded:
+
+```bash
+# delete link
+curl -X POST "http://${host_ip}:6007/v1/dataprep/delete_file" \
+  -d '{"file_path": "https://opea.dev"}' \
+  -H "Content-Type: application/json"
+
+# delete file
+curl -X POST "http://${host_ip}:6007/v1/dataprep/delete_file" \
+  -d '{"file_path": "nke-10k-2023.pdf"}' \
+  -H "Content-Type: application/json"
+
+# delete all uploaded files and links
+curl -X POST "http://${host_ip}:6007/v1/dataprep/delete_file" \
+  -d '{"file_path": "all"}' \
+  -H "Content-Type: application/json"
+```

 ## 🚀 Launch the UI

+### Launch with the default port
+
 To access the frontend, open the following URL in your browser: http://{host_ip}:5173. By default, the UI runs on port 5173 internally. If you prefer to use a different host port to access the frontend, you can modify the port mapping in the `compose.yaml` file as shown below:

 ```yaml
@@ -354,6 +368,10 @@ To access the frontend, open the following URL in your browser: http://{host_ip}
   - "80:5173"
 ```

+### Launch with Nginx
+
+To launch the UI through Nginx, open `http://${host_ip}:${NGINX_PORT}` in your browser to access the frontend.
+
 ## 🚀 Launch the Conversational UI (Optional)

 To access the Conversational UI (react based) frontend, modify the UI service in the `compose.yaml` file. Replace `chaqna-ui-server` service with the `chatqna-react-ui-server` service as per the config below:

@@ -384,3 +402,11 @@ Once the services are up, open the following URL in your browser: http://{host_i
 ```

 ![project-screenshot](../../../assets/img/chat_ui_init.png)
+
+Here is an example of running ChatQnA:
+
+![project-screenshot](../../../assets/img/chat_ui_response.png)
+
+Here is an example of running ChatQnA with Conversational UI (React):
+
+![project-screenshot](../../../assets/img/conversation_ui_response.png)
diff --git a/ChatQnA/docker_compose/nvidia/gpu/compose.yaml b/ChatQnA/docker_compose/nvidia/gpu/compose.yaml
index 5e50214d1..218e11bec 100644
--- a/ChatQnA/docker_compose/nvidia/gpu/compose.yaml
+++ b/ChatQnA/docker_compose/nvidia/gpu/compose.yaml
@@ -197,6 +197,25 @@ services:
       - DELETE_FILE=${DATAPREP_DELETE_FILE_ENDPOINT}
     ipc: host
     restart: always
+  chaqna-nginx-server:
+    image: ${REGISTRY:-opea}/nginx:${TAG:-latest}
+    container_name: chaqna-nginx-server
+    depends_on:
+      - chaqna-backend-server
+      - chaqna-ui-server
+    ports:
+      - "${NGINX_PORT:-80}:80"
+    environment:
+      - no_proxy=${no_proxy}
+      - https_proxy=${https_proxy}
+      - http_proxy=${http_proxy}
+      - FRONTEND_SERVICE_IP=${FRONTEND_SERVICE_IP}
+      - FRONTEND_SERVICE_PORT=${FRONTEND_SERVICE_PORT}
+      - BACKEND_SERVICE_NAME=${BACKEND_SERVICE_NAME}
+      - BACKEND_SERVICE_IP=${BACKEND_SERVICE_IP}
+      - BACKEND_SERVICE_PORT=${BACKEND_SERVICE_PORT}
+    ipc: host
+    restart: always

 networks:
   default:
diff --git a/ChatQnA/docker_compose/nvidia/gpu/set_env.sh b/ChatQnA/docker_compose/nvidia/gpu/set_env.sh
index 3b3d43336..49a7ad7d8 100644
--- a/ChatQnA/docker_compose/nvidia/gpu/set_env.sh
+++ b/ChatQnA/docker_compose/nvidia/gpu/set_env.sh
@@ -21,3 +21,8 @@ export BACKEND_SERVICE_ENDPOINT="http://${host_ip}:8888/v1/chatqna"
 export DATAPREP_SERVICE_ENDPOINT="http://${host_ip}:6007/v1/dataprep"
 export DATAPREP_GET_FILE_ENDPOINT="http://${host_ip}:6007/v1/dataprep/get_file"
 export DATAPREP_DELETE_FILE_ENDPOINT="http://${host_ip}:6007/v1/dataprep/delete_file"
+export FRONTEND_SERVICE_IP=${host_ip}
+export FRONTEND_SERVICE_PORT=5173
+export BACKEND_SERVICE_NAME=chatqna
+export BACKEND_SERVICE_IP=${host_ip}
+export BACKEND_SERVICE_PORT=8888
diff --git a/ChatQnA/docker_image_build/build.yaml b/ChatQnA/docker_image_build/build.yaml
index 4dd1d3b74..906f6fcf7 100644
--- a/ChatQnA/docker_image_build/build.yaml
+++ b/ChatQnA/docker_image_build/build.yaml
@@ -137,3 +137,9 @@ services:
       dockerfile: Dockerfile.cpu
     extends: chatqna
     image: ${REGISTRY:-opea}/vllm:${TAG:-latest}
+  nginx:
+    build:
+      context: GenAIComps
+      dockerfile: comps/nginx/Dockerfile
+    extends: chatqna
+    image: ${REGISTRY:-opea}/nginx:${TAG:-latest}
diff --git a/ChatQnA/tests/test_compose_on_gaudi.sh b/ChatQnA/tests/test_compose_on_gaudi.sh
index d40f7ad1d..e98a76311 100644
--- a/ChatQnA/tests/test_compose_on_gaudi.sh
+++ b/ChatQnA/tests/test_compose_on_gaudi.sh
@@ -20,7 +20,7 @@ function build_docker_images() {
     git clone https://github.com/huggingface/tei-gaudi

     echo "Build all the images with --no-cache, check docker_image_build.log for details..."
-    service_list="chatqna chatqna-ui dataprep-redis embedding-tei retriever-redis reranking-tei llm-tgi tei-gaudi"
+    service_list="chatqna chatqna-ui dataprep-redis embedding-tei retriever-redis reranking-tei llm-tgi tei-gaudi nginx"
     docker compose -f build.yaml build ${service_list} --no-cache > ${LOG_PATH}/docker_image_build.log

     docker pull ghcr.io/huggingface/tgi-gaudi:2.0.1
@@ -52,6 +52,12 @@ function start_services() {
     export DATAPREP_DELETE_FILE_ENDPOINT="http://${ip_address}:6009/v1/dataprep/delete_file"
     export llm_service_devices=all
     export tei_embedding_devices=all
+    export FRONTEND_SERVICE_IP=${ip_address}
+    export FRONTEND_SERVICE_PORT=5173
+    export BACKEND_SERVICE_NAME=chatqna
+    export BACKEND_SERVICE_IP=${ip_address}
+    export BACKEND_SERVICE_PORT=8888
+    export NGINX_PORT=80

     sed -i "s/backend_address/$ip_address/g" $WORKPATH/ui/svelte/.env
diff --git a/ChatQnA/tests/test_compose_on_xeon.sh b/ChatQnA/tests/test_compose_on_xeon.sh
index feba1039f..b7275cf8e 100644
--- a/ChatQnA/tests/test_compose_on_xeon.sh
+++ b/ChatQnA/tests/test_compose_on_xeon.sh
@@ -19,7 +19,7 @@ function build_docker_images() {
     git clone https://github.com/opea-project/GenAIComps.git && cd GenAIComps && git checkout "${opea_branch:-"main"}" && cd ../

     echo "Build all the images with --no-cache, check docker_image_build.log for details..."
-    service_list="chatqna chatqna-ui chatqna-conversation-ui dataprep-redis embedding-tei retriever-redis reranking-tei llm-tgi"
+    service_list="chatqna chatqna-ui chatqna-conversation-ui dataprep-redis embedding-tei retriever-redis reranking-tei llm-tgi nginx"
     docker compose -f build.yaml build ${service_list} --no-cache > ${LOG_PATH}/docker_image_build.log

     docker pull ghcr.io/huggingface/tgi-gaudi:2.0.1
@@ -50,6 +50,12 @@ function start_services() {
     export DATAPREP_SERVICE_ENDPOINT="http://${ip_address}:6007/v1/dataprep"
     export DATAPREP_GET_FILE_ENDPOINT="http://${ip_address}:6007/v1/dataprep/get_file"
     export DATAPREP_DELETE_FILE_ENDPOINT="http://${ip_address}:6007/v1/dataprep/delete_file"
+    export FRONTEND_SERVICE_IP=${ip_address}
+    export FRONTEND_SERVICE_PORT=5173
+    export BACKEND_SERVICE_NAME=chatqna
+    export BACKEND_SERVICE_IP=${ip_address}
+    export BACKEND_SERVICE_PORT=8888
+    export NGINX_PORT=80

     sed -i "s/backend_address/$ip_address/g" $WORKPATH/ui/svelte/.env
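
After bringing one of the updated compose stacks up, the new Nginx entry point can be smoke-tested from the host. The following is a minimal sketch, assuming `set_env.sh` has been sourced (so `host_ip` is set) and `NGINX_PORT` is left at its default of 80; serving the UI from the root path is an assumption based on the `FRONTEND_SERVICE_*` variables passed into the Nginx container:

```bash
# Smoke test for the Nginx front door (sketch; assumes set_env.sh was sourced).
# The root path is expected to proxy to the UI (assumption based on FRONTEND_SERVICE_*).
curl -sf "http://${host_ip}:${NGINX_PORT:-80}/" -o /dev/null \
  && echo "UI reachable through Nginx"

# /v1/chatqna is proxied to the ChatQnA mega-service on port 8888, per the README changes above.
curl -s "http://${host_ip}:${NGINX_PORT:-80}/v1/chatqna" \
  -H "Content-Type: application/json" \
  -d '{"messages": "What is Deep Learning?"}'
```

If both calls succeed, the `FRONTEND_SERVICE_*` and `BACKEND_SERVICE_*` variables are being consumed correctly by the Nginx container.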