add VDMS retriever microservice for v0.9 Milestone (#539)
* add VDMS retriever microservice

Signed-off-by: s-gobriel <[email protected]>

* add retrieval gateway and logger back to init

Signed-off-by: s-gobriel <[email protected]>

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

* use 5009 in CI

Signed-off-by: BaoHuiling <[email protected]>

* change index_name to collection_name

Signed-off-by: s-gobriel <[email protected]>

* fix var name

Signed-off-by: BaoHuiling <[email protected]>

* use index name all

Signed-off-by: BaoHuiling <[email protected]>

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

* add deps

Signed-off-by: BaoHuiling <[email protected]>

* changes to address code reviews

Signed-off-by: s-gobriel <[email protected]>

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

* resolve docarray

Signed-off-by: s-gobriel <[email protected]>

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

* add optional docarray embeddoc constraints

Signed-off-by: s-gobriel <[email protected]>

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

* fix bug in comment

Signed-off-by: BaoHuiling <[email protected]>

* import DEBUG

Signed-off-by: BaoHuiling <[email protected]>

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

---------

Signed-off-by: s-gobriel <[email protected]>
Signed-off-by: BaoHuiling <[email protected]>
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
Co-authored-by: chen, suyue <[email protected]>
Co-authored-by: BaoHuiling <[email protected]>
Co-authored-by: XuhuiRen <[email protected]>
5 people authored Sep 4, 2024
1 parent 01886fe commit 445c9b1
Showing 14 changed files with 643 additions and 8 deletions.
10 changes: 7 additions & 3 deletions comps/retrievers/langchain/README.md
@@ -6,14 +6,18 @@

Overall, this microservice provides robust backend support for applications requiring efficient similarity searches, playing a vital role in scenarios such as recommendation systems, information retrieval, or any other context where precise measurement of document similarity is crucial.

## Retriever Microservice with Redis
# Retriever Microservice with Redis

For details, please refer to this [readme](redis/README.md)

## Retriever Microservice with Milvus
# Retriever Microservice with Milvus

For details, please refer to this [readme](milvus/README.md)

## Retriever Microservice with PGVector
# Retriever Microservice with PGVector

For details, please refer to this [readme](pgvector/README.md)

# Retriever Microservice with VDMS

For details, please refer to this [readme](vdms/README.md)
169 changes: 169 additions & 0 deletions comps/retrievers/langchain/vdms/README.md
@@ -0,0 +1,169 @@
# Retriever Microservice

This retriever microservice is a highly efficient search service designed for handling and retrieving embedding vectors. It operates by receiving an embedding vector as input and conducting a similarity search against vectors stored in a VectorDB database. Users must specify the VectorDB's host, port, and the index/collection name, and the service searches within that index to find documents with the highest similarity to the input vector.

The service primarily utilizes similarity measures in vector space to rapidly retrieve contextually similar documents. The vector-based retrieval approach is particularly suited for handling large datasets, offering fast and accurate search results that significantly enhance the efficiency and quality of information retrieval.

Overall, this microservice provides robust backend support for applications requiring efficient similarity searches, playing a vital role in scenarios such as recommendation systems, information retrieval, or any other context where precise measurement of document similarity is crucial.

# Visual Data Management System (VDMS)

VDMS is a storage solution for efficient access to big "visual" data. It aims to achieve cloud scale by searching for relevant visual data via visual metadata stored as a graph, and by enabling machine-friendly enhancements to visual data for faster access.

VDMS offers the functionality of a VectorDB. It provides multiple engines to index large numbers of embeddings and to search them for similarity. Depending on the use case, the chosen engine trades off indexing speed, search speed, total memory footprint, and search accuracy.

VDMS also includes a graph database to store the metadata associated with each vector embedding and to retrieve it through relationships ranging from simple to very complex.

In summary, VDMS supports:

- K nearest neighbor search
- Euclidean distance (L2) and inner product (IP)
- Libraries for indexing and computing distances: TileDBDense, TileDBSparse, FaissFlat (default), FaissIVFFlat, Flinng
- Embeddings for text, images, and video
- Vector and metadata searches
- Scalability to allow for definition of different relationships across the metadata
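
As a quick illustration of these options, here is a minimal sketch that creates a VDMS-backed vector store through the `langchain_community` integration (the same one this microservice builds on) and runs a K nearest neighbor search. It assumes a VDMS instance is already reachable on `localhost:55555` and uses hypothetical collection and model names; treat it as a sketch rather than the microservice's actual code.

```python
# Illustrative sketch only -- host, port, collection name, and model are assumptions.
from langchain_community.vectorstores.vdms import VDMS, VDMS_Client
from langchain_huggingface import HuggingFaceEmbeddings

client = VDMS_Client(host="localhost", port=55555)  # assumed VDMS endpoint

store = VDMS(
    client=client,
    embedding=HuggingFaceEmbeddings(model_name="BAAI/bge-base-en-v1.5"),
    collection_name="rag-vdms",   # hypothetical collection name
    engine="FaissFlat",           # default engine from the list above
    distance_strategy="L2",       # Euclidean distance; "IP" selects inner product
)

# K nearest neighbor search over whatever has been ingested into the collection.
hits = store.similarity_search("What does VDMS support?", k=4)
for doc in hits:
    print(doc.page_content)
```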

# 🚀1. Start Microservice with Python (Option 1)

To start the retriever microservice, you must first install the required Python packages.

## 1.1 Install Requirements

```bash
pip install -r requirements.txt
```

## 1.2 Start TEI Service

```bash
export LANGCHAIN_TRACING_V2=true
export LANGCHAIN_API_KEY=${your_langchain_api_key}
export LANGCHAIN_PROJECT="opea/retriever"
model=BAAI/bge-base-en-v1.5
revision=refs/pr/4
volume=$PWD/data
docker run -d -p 6060:80 -v $volume:/data -e http_proxy=$http_proxy -e https_proxy=$https_proxy --pull always ghcr.io/huggingface/text-embeddings-inference:cpu-1.5 --model-id $model --revision $revision
```

## 1.3 Verify the TEI Service

```bash
curl 127.0.0.1:6060/embed \
-X POST \
-d '{"inputs":"What is Deep Learning?"}' \
-H 'Content-Type: application/json'
```

## 1.4 Setup VectorDB Service

You need to set up your own VectorDB service (VDMS in this example) and ingest your knowledge documents into the vector database.

For VDMS, you can start a Docker container with the following command.
Remember to ingest data into it manually; a hedged sketch of one way to do so follows the command.

```bash
docker run -d --name="vdms-vector-db" -p 55555:55555 intellabs/vdms:latest
```
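
As a rough guide only, the following sketch ingests a couple of text snippets into VDMS using the `langchain_community` vector store and `add_texts`. The collection name, embedding model, and sample texts are assumptions for illustration; the collection name must match the `INDEX_NAME` you later pass to the retriever.

```python
# Hypothetical ingestion sketch -- adjust host, port, collection, and model to your setup.
from langchain_community.vectorstores.vdms import VDMS, VDMS_Client
from langchain_huggingface import HuggingFaceEmbeddings

client = VDMS_Client(host="localhost", port=55555)  # VDMS container started above

store = VDMS(
    client=client,
    embedding=HuggingFaceEmbeddings(model_name="BAAI/bge-base-en-v1.5"),
    collection_name="rag-vdms",  # must match the INDEX_NAME used by the retriever
)

# Embed and store a few documents so the retriever has something to search.
store.add_texts(
    [
        "VDMS stores vector embeddings together with graph metadata.",
        "Deep learning is a subset of machine learning based on neural networks.",
    ]
)
```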

## 1.5 Start Retriever Service

```bash
export TEI_EMBEDDING_ENDPOINT="http://${your_ip}:6060"
python langchain/retriever_vdms.py
```

# 🚀2. Start Microservice with Docker (Option 2)

## 2.1 Setup Environment Variables

```bash
export RETRIEVE_MODEL_ID="BAAI/bge-base-en-v1.5"
export INDEX_NAME=${your_index_name or collection_name}
export TEI_EMBEDDING_ENDPOINT="http://${your_ip}:6060"
export LANGCHAIN_TRACING_V2=true
export LANGCHAIN_API_KEY=${your_langchain_api_key}
export LANGCHAIN_PROJECT="opea/retrievers"
```

## 2.2 Build Docker Image

```bash
cd ../../
docker build -t opea/retriever-vdms:latest --build-arg https_proxy=$https_proxy --build-arg http_proxy=$http_proxy -f comps/retrievers/langchain/vdms/docker/Dockerfile .
```

To start a Docker container, you have two options:

- A. Run Docker with CLI
- B. Run Docker with Docker Compose

You can choose one as needed.

## 2.3 Run Docker with CLI (Option A)

```bash
docker run -d --name="retriever-vdms-server" -p 7000:7000 --ipc=host -e http_proxy=$http_proxy -e https_proxy=$https_proxy -e INDEX_NAME=$INDEX_NAME -e TEI_EMBEDDING_ENDPOINT=$TEI_EMBEDDING_ENDPOINT opea/retriever-vdms:latest
```

## 2.4 Run Docker with Docker Compose (Option B)

```bash
cd langchain/vdms/docker
docker compose -f docker_compose_retriever.yaml up -d
```

# 🚀3. Consume Retriever Service

## 3.1 Check Service Status

```bash
curl http://localhost:7000/v1/health_check \
-X GET \
-H 'Content-Type: application/json'
```

## 3.2 Consume Embedding Service

To consume the Retriever Microservice, you can generate a mock embedding vector of length 768 with Python.

```bash
your_embedding=$(python -c "import random; embedding = [random.uniform(-1, 1) for _ in range(768)]; print(embedding)")
curl http://${your_ip}:7000/v1/retrieval \
-X POST \
-d "{\"text\":\"What is the revenue of Nike in 2023?\",\"embedding\":${your_embedding}}" \
-H 'Content-Type: application/json'
```
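
If you prefer Python to curl, the sketch below issues the same request with the `requests` library (not listed in `requirements.txt`, so install it separately). The endpoint and payload mirror the curl example above; the host and port are assumptions.

```python
# Same request as the curl example above, assuming the retriever listens on localhost:7000.
import random

import requests

embedding = [random.uniform(-1, 1) for _ in range(768)]  # mock 768-dimensional embedding
payload = {"text": "What is the revenue of Nike in 2023?", "embedding": embedding}

response = requests.post(
    "http://localhost:7000/v1/retrieval",
    json=payload,
    timeout=30,
)
response.raise_for_status()
print(response.json())  # retrieved documents returned by the service
```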

You can also set search parameters for the retriever. The examples below cover the supported `search_type` values: `similarity` (with `k`), `similarity_distance_threshold` (with `k` and `distance_threshold`), `similarity_score_threshold` (with `k` and `score_threshold`), and `mmr` (with `k`, `fetch_k`, and `lambda_mult`).

```bash
your_embedding=$(python -c "import random; embedding = [random.uniform(-1, 1) for _ in range(768)]; print(embedding)")
curl http://localhost:7000/v1/retrieval \
-X POST \
-d "{\"text\":\"What is the revenue of Nike in 2023?\",\"embedding\":${your_embedding},\"search_type\":\"similarity\", \"k\":4}" \
-H 'Content-Type: application/json'
```

```bash
your_embedding=$(python -c "import random; embedding = [random.uniform(-1, 1) for _ in range(768)]; print(embedding)")
curl http://localhost:7000/v1/retrieval \
-X POST \
-d "{\"text\":\"What is the revenue of Nike in 2023?\",\"embedding\":${your_embedding},\"search_type\":\"similarity_distance_threshold\", \"k\":4, \"distance_threshold\":1.0}" \
-H 'Content-Type: application/json'
```

```bash
your_embedding=$(python -c "import random; embedding = [random.uniform(-1, 1) for _ in range(768)]; print(embedding)")
curl http://localhost:7000/v1/retrieval \
-X POST \
-d "{\"text\":\"What is the revenue of Nike in 2023?\",\"embedding\":${your_embedding},\"search_type\":\"similarity_score_threshold\", \"k\":4, \"score_threshold\":0.2}" \
-H 'Content-Type: application/json'
```

```bash
your_embedding=$(python -c "import random; embedding = [random.uniform(-1, 1) for _ in range(768)]; print(embedding)")
curl http://localhost:7000/v1/retrieval \
-X POST \
-d "{\"text\":\"What is the revenue of Nike in 2023?\",\"embedding\":${your_embedding},\"search_type\":\"mmr\", \"k\":4, \"fetch_k\":20, \"lambda_mult\":0.5}" \
-H 'Content-Type: application/json'
```
2 changes: 2 additions & 0 deletions comps/retrievers/langchain/vdms/__init__.py
@@ -0,0 +1,2 @@
# Copyright (C) 2024 Intel Corporation
# SPDX-License-Identifier: Apache-2.0
49 changes: 49 additions & 0 deletions comps/retrievers/langchain/vdms/docker/Dockerfile
@@ -0,0 +1,49 @@

# Copyright (C) 2024 Intel Corporation
# SPDX-License-Identifier: Apache-2.0

FROM langchain/langchain:latest

ARG ARCH="cpu"

RUN apt-get update -y && apt-get install -y --no-install-recommends --fix-missing \
libgl1-mesa-glx \
libjemalloc-dev \
iputils-ping \
vim

RUN useradd -m -s /bin/bash user && \
mkdir -p /home/user && \
chown -R user /home/user/

COPY comps /home/user/comps

# RUN chmod +x /home/user/comps/retrievers/langchain/vdms/run.sh

USER user
RUN pip install --no-cache-dir --upgrade pip && \
if [ ${ARCH} = "cpu" ]; then pip install torch torchvision --index-url https://download.pytorch.org/whl/cpu; fi && \
pip install --no-cache-dir -r /home/user/comps/retrievers/langchain/vdms/requirements.txt

RUN pip install -U langchain
RUN pip install -U langchain-community

RUN pip install --upgrade huggingface-hub

ENV PYTHONPATH=$PYTHONPATH:/home/user

ENV HUGGINGFACEHUB_API_TOKEN=dummy

ENV USECLIP 0

ENV no_proxy=localhost,127.0.0.1

ENV http_proxy=""
ENV https_proxy=""

WORKDIR /home/user/comps/retrievers/langchain/vdms

#ENTRYPOINT ["/home/user/comps/retrievers/langchain/vdms/run.sh"]
#ENTRYPOINT ["/bin/bash"]

ENTRYPOINT ["python", "retriever_vdms.py"]
32 changes: 32 additions & 0 deletions comps/retrievers/langchain/vdms/docker/docker_compose_retriever.yaml
@@ -0,0 +1,32 @@
# Copyright (C) 2024 Intel Corporation
# SPDX-License-Identifier: Apache-2.0

version: "3.8"

services:
tei_xeon_service:
image: ghcr.io/huggingface/text-embeddings-inference:cpu-1.5
container_name: tei-xeon-server
ports:
- "6060:80"
volumes:
- "./data:/data"
shm_size: 1g
command: --model-id ${RETRIEVE_MODEL_ID}
retriever:
image: opea/retriever-vdms:latest
container_name: retriever-vdms-server
ports:
- "7000:7000"
ipc: host
environment:
no_proxy: ${no_proxy}
http_proxy: ${http_proxy}
https_proxy: ${https_proxy}
INDEX_NAME: ${INDEX_NAME}
LANGCHAIN_API_KEY: ${LANGCHAIN_API_KEY}
restart: unless-stopped

networks:
default:
driver: bridge
16 changes: 16 additions & 0 deletions comps/retrievers/langchain/vdms/requirements.txt
@@ -0,0 +1,16 @@
docarray[full]
easyocr
einops
fastapi
langchain-community
langchain-core
langchain-huggingface
opentelemetry-api
opentelemetry-exporter-otlp
opentelemetry-sdk
prometheus-fastapi-instrumentator
pymupdf
sentence_transformers
shortuuid
uvicorn
vdms