adding embedding support for CLIP based models for VideoRAGQnA example for v0.9 (#538)

* clip embedding support

Signed-off-by: srinarayan-srikanthan <[email protected]>

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

Signed-off-by: srinarayan-srikanthan <[email protected]>

* test script for embedding

Signed-off-by: srinarayan-srikanthan <[email protected]>

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

Signed-off-by: srinarayan-srikanthan <[email protected]>

* fix freeze workflow (#522)

Signed-off-by: Sun, Xuehao <[email protected]>
Signed-off-by: srinarayan-srikanthan <[email protected]>

* Fix Dataprep Potential Error in get_file (#540)

* fix get file error & refine logs

Signed-off-by: letonghan <[email protected]>

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

---------

Signed-off-by: letonghan <[email protected]>
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
Signed-off-by: srinarayan-srikanthan <[email protected]>

* Support SearchedDoc input type in LLM for No Rerank Pipeline (#541)

Signed-off-by: letonghan <[email protected]>
Signed-off-by: srinarayan-srikanthan <[email protected]>

* Add dependency for pdf2image and OCR processing (#421)

Signed-off-by: srinarayan-srikanthan <[email protected]>

* Add local_embedding return 768 length to align with chatqna example (#313)

Signed-off-by: Chendi.Xue <[email protected]>
Signed-off-by: srinarayan-srikanthan <[email protected]>

* add telemetry doc (#536)

Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
Signed-off-by: srinarayan-srikanthan <[email protected]>

* Add video-llama LVM microservice under lvms  (#495)

Signed-off-by: BaoHuiling <[email protected]>
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
Signed-off-by: srinarayan-srikanthan <[email protected]>

* Fix the data load issue for structured files (#505)

Signed-off-by: XuhuiRen <[email protected]>
Signed-off-by: srinarayan-srikanthan <[email protected]>

* Add finetuning component (#502)

Signed-off-by: Xinyu Ye <[email protected]>
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
Co-authored-by: lkk <[email protected]>
Co-authored-by: test <[email protected]>
Co-authored-by: root <[email protected]>
Co-authored-by: Letong Han <[email protected]>
Signed-off-by: srinarayan-srikanthan <[email protected]>

* add torchvision into requirements (#546)

Signed-off-by: chensuyue <[email protected]>
Signed-off-by: srinarayan-srikanthan <[email protected]>

* Use Gaudi base images from Dockerhub (#526)

* Use Gaudi base images from Dockerhub

Signed-off-by: Abolfazl Shahbazi <[email protected]>

* Fixing the malformed tag

Signed-off-by: Abolfazl Shahbazi <[email protected]>

* fix another malformed tag

Signed-off-by: Abolfazl Shahbazi <[email protected]>

---------

Signed-off-by: Abolfazl Shahbazi <[email protected]>
Signed-off-by: srinarayan-srikanthan <[email protected]>

* Add toxicity detection microservice (#338)

* Add toxicity detection microservice

Signed-off-by: Qun Gao <[email protected]>

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

* Modification to toxicity plugin PR  (#432)

* changed microservice to use Service.GUARDRAILS and input/output to TextDoc

Signed-off-by: Tyler Wilbers <[email protected]>

* simplify dockerfile to use langchain

Signed-off-by: Tyler Wilbers <[email protected]>

* sort requirements

Signed-off-by: Tyler Wilbers <[email protected]>

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

---------

Signed-off-by: Tyler Wilbers <[email protected]>
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>

* Minor SPDX header update (#434)

Signed-off-by: Abolfazl Shahbazi <[email protected]>

* Remove 'langsmith' per code review (#534)

Signed-off-by: Abolfazl Shahbazi <[email protected]>

* Add toxicity detection microservices with E2E testing

Signed-off-by: Qun Gao <[email protected]>

---------

Signed-off-by: Qun Gao <[email protected]>
Signed-off-by: Tyler Wilbers <[email protected]>
Signed-off-by: Abolfazl Shahbazi <[email protected]>
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
Co-authored-by: Abolfazl Shahbazi <[email protected]>
Co-authored-by: Tyler W <[email protected]>
Signed-off-by: srinarayan-srikanthan <[email protected]>

* rename script and use 5xxx

Signed-off-by: BaoHuiling <[email protected]>
Signed-off-by: srinarayan-srikanthan <[email protected]>

* add proxy for build

Signed-off-by: BaoHuiling <[email protected]>
Signed-off-by: srinarayan-srikanthan <[email protected]>

* fixed commit issues

Signed-off-by: srinarayan-srikanthan <[email protected]>

* Fix docarray constraint

Signed-off-by: srinarayan-srikanthan <[email protected]>

* updated docarray

Signed-off-by: srinarayan-srikanthan <[email protected]>

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

Signed-off-by: srinarayan-srikanthan <[email protected]>

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

* rm telemetry which cause error in mega

Signed-off-by: BaoHuiling <[email protected]>

* renamed dirs

Signed-off-by: srinarayan-srikanthan <[email protected]>

* renamed test

Signed-off-by: srinarayan-srikanthan <[email protected]>

---------

Signed-off-by: srinarayan-srikanthan <[email protected]>
Signed-off-by: Sun, Xuehao <[email protected]>
Signed-off-by: letonghan <[email protected]>
Signed-off-by: Chendi.Xue <[email protected]>
Signed-off-by: BaoHuiling <[email protected]>
Signed-off-by: XuhuiRen <[email protected]>
Signed-off-by: Xinyu Ye <[email protected]>
Signed-off-by: chensuyue <[email protected]>
Signed-off-by: Abolfazl Shahbazi <[email protected]>
Signed-off-by: Qun Gao <[email protected]>
Signed-off-by: Tyler Wilbers <[email protected]>
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
Co-authored-by: Sun, Xuehao <[email protected]>
Co-authored-by: Letong Han <[email protected]>
Co-authored-by: Zaili Wang <[email protected]>
Co-authored-by: Chendi.Xue <[email protected]>
Co-authored-by: Sihan Chen <[email protected]>
Co-authored-by: Huiling Bao <[email protected]>
Co-authored-by: XuhuiRen <[email protected]>
Co-authored-by: XinyuYe-Intel <[email protected]>
Co-authored-by: lkk <[email protected]>
Co-authored-by: test <[email protected]>
Co-authored-by: root <[email protected]>
Co-authored-by: chen, suyue <[email protected]>
Co-authored-by: Abolfazl Shahbazi <[email protected]>
Co-authored-by: qgao007 <[email protected]>
Co-authored-by: Tyler W <[email protected]>
17 people authored Sep 4, 2024
1 parent 8325d5d commit 2a53e25
Showing 22 changed files with 409 additions and 0 deletions.
1 change: 1 addition & 0 deletions comps/cores/proto/docarray.py
@@ -64,6 +64,7 @@ class EmbedDoc(BaseDoc):
fetch_k: int = 20
lambda_mult: float = 0.5
score_threshold: float = 0.2
constraints: Optional[Union[Dict[str, Any], None]] = None


class EmbedMultimodalDoc(EmbedDoc):
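The new `constraints` field is consumed by the CLIP embedding service added later in this commit. A minimal sketch (not part of the diff) of how it might be populated, using the filter format that `filtler_dates()` produces; the date value is hypothetical:

```python
# Sketch only, assuming the comps package from this repository is importable.
from comps import EmbedDoc

doc = EmbedDoc(
    text="what happened last friday",
    embedding=[0.12, -0.05, 0.33],               # truncated example vector
    constraints={"date": ["==", "2024-08-30"]},  # hypothetical exact-day filter
)
print(doc.constraints)
```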
1 change: 1 addition & 0 deletions comps/dataprep/milvus/prepare_doc_milvus.py
@@ -133,6 +133,7 @@ def ingest_data_to_milvus(doc_path: DocPath, embedder):
)

content = document_loader(path)

if logflag:
logger.info("[ ingest data ] file content loaded")

52 changes: 52 additions & 0 deletions comps/embeddings/multimodal_clip/README.md
@@ -0,0 +1,52 @@
# Multimodal CLIP Embeddings Microservice

The Multimodal CLIP Embedding Microservice converts text strings and images into vector embeddings for use in machine learning and data processing workflows. It uses CLIP-based models to produce embeddings that capture the semantic content of the input text and images, making it well suited to multimodal data processing, information retrieval, and similar applications.

Key Features:

**High Performance**: Optimized for quick and reliable conversion of textual data and image inputs into vector embeddings.

**Scalability**: Built to handle high volumes of requests simultaneously, ensuring robust performance even under heavy loads.

**Ease of Integration**: Provides a simple and intuitive API, allowing for straightforward integration into existing systems and workflows.

**Customizable**: Supports configuration and customization to meet specific use case requirements, including different embedding models and preprocessing techniques.

Users can configure and build embedding-related services according to their needs.

## 🚀1. Start Microservice with Docker

### 1.1 Build Docker Image

#### Build Langchain Docker

```bash
cd ../../..
docker build -t opea/embedding-multimodal:latest --build-arg https_proxy=$https_proxy --build-arg http_proxy=$http_proxy -f comps/embeddings/multimodal_clip/docker/Dockerfile .
```

### 1.2 Run Docker with Docker Compose

```bash
cd comps/embeddings/multimodal_clip/docker
docker compose -f docker_compose_embedding.yaml up -d
```

## 🚀2. Consume Embedding Service

### 2.1 Check Service Status

```bash
curl http://localhost:6000/v1/health_check \
-X GET \
-H 'Content-Type: application/json'
```

### 2.2 Consume Embedding Service

```bash
curl http://localhost:6000/v1/embeddings \
-X POST -d '{"text":"Sample text"}' \
-H 'Content-Type: application/json'

```
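The same request can be issued from code. A minimal Python client sketch (not part of the commit), assuming the service is reachable on localhost:6000 and returns an `EmbedDoc`-style JSON body with `text`, `embedding`, and `constraints` fields as defined elsewhere in this commit:

```python
# Sketch only: POST a text query to the multimodal CLIP embedding microservice.
import requests

resp = requests.post(
    "http://localhost:6000/v1/embeddings",
    json={"text": "Sample text"},
    headers={"Content-Type": "application/json"},
    timeout=30,
)
resp.raise_for_status()
payload = resp.json()
print(len(payload["embedding"]))   # CLIP ViT-B/32 text embeddings have 512 dimensions
print(payload.get("constraints"))  # date constraints parsed from the prompt, if any
```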
2 changes: 2 additions & 0 deletions comps/embeddings/multimodal_clip/__init__.py
@@ -0,0 +1,2 @@
# Copyright (C) 2024 Intel Corporation
# SPDX-License-Identifier: Apache-2.0
29 changes: 29 additions & 0 deletions comps/embeddings/multimodal_clip/docker/Dockerfile
@@ -0,0 +1,29 @@
# Copyright (C) 2024 Intel Corporation
# SPDX-License-Identifier: Apache-2.0

FROM langchain/langchain:latest

ARG ARCH="cpu"

RUN apt-get update -y && apt-get install -y --no-install-recommends --fix-missing \
libgl1-mesa-glx \
libjemalloc-dev \
vim

RUN useradd -m -s /bin/bash user && \
mkdir -p /home/user && \
chown -R user /home/user/

USER user

COPY comps /home/user/comps

RUN pip install --no-cache-dir --upgrade pip && \
if [ ${ARCH} = "cpu" ]; then pip install torch torchvision --index-url https://download.pytorch.org/whl/cpu; fi && \
pip install --no-cache-dir -r /home/user/comps/embeddings/multimodal_clip/requirements.txt

ENV PYTHONPATH=$PYTHONPATH:/home/user

WORKDIR /home/user/comps/embeddings/multimodal_clip

ENTRYPOINT ["python", "embedding_multimodal.py"]
22 changes: 22 additions & 0 deletions comps/embeddings/multimodal_clip/docker/docker_compose_embedding.yaml
@@ -0,0 +1,22 @@
# Copyright (C) 2024 Intel Corporation
# SPDX-License-Identifier: Apache-2.0

version: "3.8"

services:
embedding:
image: opea/embedding-multimodal:latest
container_name: embedding-multimodal-server
ports:
- "6000:6000"
ipc: host
environment:
no_proxy: ${no_proxy}
http_proxy: ${http_proxy}
https_proxy: ${https_proxy}
LANGCHAIN_API_KEY: ${LANGCHAIN_API_KEY}
restart: unless-stopped

networks:
default:
driver: bridge
86 changes: 86 additions & 0 deletions comps/embeddings/multimodal_clip/embedding_multimodal.py
@@ -0,0 +1,86 @@
# Copyright (C) 2024 Intel Corporation
# SPDX-License-Identifier: Apache-2.0

import datetime
import os
import time
from typing import Union

from dateparser.search import search_dates
from embeddings_clip import vCLIP

from comps import (
EmbedDoc,
ServiceType,
TextDoc,
opea_microservices,
register_microservice,
register_statistics,
statistics_dict,
)


def filtler_dates(prompt):

base_date = datetime.datetime.today()
today_date = base_date.date()
dates_found = search_dates(prompt, settings={"PREFER_DATES_FROM": "past", "RELATIVE_BASE": base_date})

if dates_found is not None:
for date_tuple in dates_found:
date_string, parsed_date = date_tuple
date_out = str(parsed_date.date())
time_out = str(parsed_date.time())
hours, minutes, seconds = map(float, time_out.split(":"))
year, month, day_out = map(int, date_out.split("-"))

rounded_seconds = min(round(parsed_date.second + 0.5), 59)
parsed_date = parsed_date.replace(second=rounded_seconds, microsecond=0)

iso_date_time = parsed_date.isoformat()
iso_date_time = str(iso_date_time)

if date_string == "today":
constraints = {"date": ["==", date_out]}
elif date_out != str(today_date) and time_out == "00:00:00": ## exact day (example last friday)
constraints = {"date": ["==", date_out]}
elif (
date_out == str(today_date) and time_out == "00:00:00"
): ## when search_dates interprets words as dates, the output is today's date + time 00:00:00
constraints = {}
else: ## Interval of time: last 48 hours, last 2 days, ...
constraints = {"date_time": [">=", {"_date": iso_date_time}]}
return constraints

else:
return {}


@register_microservice(
name="opea_service@embedding_multimodal",
service_type=ServiceType.EMBEDDING,
endpoint="/v1/embeddings",
host="0.0.0.0",
port=6000,
input_datatype=TextDoc,
output_datatype=EmbedDoc,
)
@register_statistics(names=["opea_service@embedding_multimodal"])
def embedding(input: TextDoc) -> EmbedDoc:
start = time.time()

if isinstance(input, TextDoc):
# Handle text input
embed_vector = embeddings.embed_query(input.text).tolist()[0]
res = EmbedDoc(text=input.text, embedding=embed_vector, constraints=filtler_dates(input.text))

else:
raise ValueError("Invalid input type")

statistics_dict["opea_service@embedding_multimodal"].append_latency(time.time() - start, None)
return res


if __name__ == "__main__":
embeddings = vCLIP({"model_name": "openai/clip-vit-base-patch32", "num_frm": 4})
opea_microservices["opea_service@embedding_multimodal"].start()
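The `filtler_dates` helper above maps natural-language time references in a prompt to metadata constraints for retrieval. An illustrative sketch of that mapping (not part of the commit), assuming the module is importable; the prompts are examples and the concrete dates depend on when it runs:

```python
# Illustration only: expected shape of the constraints returned by filtler_dates().
from embedding_multimodal import filtler_dates  # assumes the module is on PYTHONPATH

print(filtler_dates("show me the clips from last friday"))
# e.g. {"date": ["==", "2024-08-30"]}  -> exact-day filter

print(filtler_dates("what happened in the last 48 hours"))
# e.g. {"date_time": [">=", {"_date": "2024-09-02T10:15:00"}]}  -> time-interval filter

print(filtler_dates("describe the warehouse video"))
# {}  -> no date reference, so no constraint
```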
50 changes: 50 additions & 0 deletions comps/embeddings/multimodal_clip/embeddings_clip.py
@@ -0,0 +1,50 @@
# Copyright (C) 2024 Intel Corporation
# SPDX-License-Identifier: Apache-2.0

import torch
import torch.nn as nn
from einops import rearrange
from transformers import AutoProcessor, AutoTokenizer, CLIPModel

model_name = "openai/clip-vit-base-patch32"

clip = CLIPModel.from_pretrained(model_name)
processor = AutoProcessor.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)


class vCLIP(nn.Module):
def __init__(self, cfg):
super().__init__()

self.num_frm = cfg["num_frm"]
self.model_name = cfg["model_name"]

def embed_query(self, texts):
"""Input is list of texts."""
text_inputs = tokenizer(texts, padding=True, return_tensors="pt")
text_features = clip.get_text_features(**text_inputs)
return text_features

def get_embedding_length(self):
return len(self.embed_query("sample_text"))

def get_image_embeddings(self, images):
"""Input is list of images."""
image_inputs = processor(images=images, return_tensors="pt")
image_features = clip.get_image_features(**image_inputs)
return image_features

def get_video_embeddings(self, frames_batch):
"""Input is list of list of frames in video."""
self.batch_size = len(frames_batch)
vid_embs = []
for frames in frames_batch:
frame_embeddings = self.get_image_embeddings(frames)
frame_embeddings = rearrange(frame_embeddings, "(b n) d -> b n d", b=len(frames_batch))
# Normalize, mean aggregate and return normalized video_embeddings
frame_embeddings = frame_embeddings / frame_embeddings.norm(dim=-1, keepdim=True)
video_embeddings = frame_embeddings.mean(dim=1)
video_embeddings = video_embeddings / video_embeddings.norm(dim=-1, keepdim=True)
vid_embs.append(video_embeddings)
return torch.cat(vid_embs, dim=0)
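A short usage sketch (not part of the commit) for the `vCLIP` wrapper above; the frames are synthetic placeholders standing in for decoded video frames, and the config mirrors the one used in `embedding_multimodal.py`:

```python
# Sketch only: exercising vCLIP with dummy text and frames.
import numpy as np
from PIL import Image

from embeddings_clip import vCLIP  # assumes the module is on PYTHONPATH

model = vCLIP({"model_name": "openai/clip-vit-base-patch32", "num_frm": 4})

text_emb = model.embed_query(["a forklift moving pallets"])
print(text_emb.shape)  # torch.Size([1, 512]) for CLIP ViT-B/32

frames = [Image.fromarray(np.zeros((224, 224, 3), dtype=np.uint8)) for _ in range(4)]
video_emb = model.get_video_embeddings([frames])  # one clip of four frames
print(video_emb.shape)  # torch.Size([1, 512]); normalized, mean-pooled over frames
```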
14 changes: 14 additions & 0 deletions comps/embeddings/multimodal_clip/requirements.txt
@@ -0,0 +1,14 @@
dateparser
docarray[full]
einops
fastapi
huggingface_hub
langchain
open_clip_torch
opentelemetry-api
opentelemetry-exporter-otlp
opentelemetry-sdk
prometheus-fastapi-instrumentator
sentence_transformers
shortuuid
uvicorn
6 changes: 6 additions & 0 deletions comps/finetuning/README.md
@@ -92,10 +92,12 @@ Assuming a training file `alpaca_data.json` is uploaded, it can be downloaded in

```bash
# upload a training file

curl http://${your_ip}:8015/v1/finetune/upload_training_files -X POST -H "Content-Type: multipart/form-data" -F "files=@./alpaca_data.json"

# create a finetuning job
curl http://${your_ip}:8015/v1/fine_tuning/jobs \

-X POST \
-H "Content-Type: application/json" \
-d '{
@@ -104,18 +106,22 @@ curl http://${your_ip}:8015/v1/fine_tuning/jobs \
}'

# list finetuning jobs

curl http://${your_ip}:8015/v1/fine_tuning/jobs -X GET

# retrieve one finetuning job
curl http://localhost:8015/v1/fine_tuning/jobs/retrieve -X POST -H "Content-Type: application/json" -d '{
"fine_tuning_job_id": ${fine_tuning_job_id}}'

# cancel one finetuning job


curl http://localhost:8015/v1/fine_tuning/jobs/cancel -X POST -H "Content-Type: application/json" -d '{
"fine_tuning_job_id": ${fine_tuning_job_id}}'

# list checkpoints of a finetuning job
curl http://${your_ip}:8015/v1/finetune/list_checkpoints -X POST -H "Content-Type: application/json" -d '{"fine_tuning_job_id": ${fine_tuning_job_id}}'


```
Empty file.
7 changes: 7 additions & 0 deletions comps/finetuning/handlers.py
@@ -53,7 +53,9 @@ def update_job_status(job_id: FineTuningJobID):
status = str(job_status).lower()
# Ray status "stopped" is OpenAI status "cancelled"
status = "cancelled" if status == "stopped" else status

logger.info(f"Status of job {job_id} is '{status}'")

running_finetuning_jobs[job_id].status = status
if status == "finished" or status == "cancelled" or status == "failed":
break
@@ -105,7 +107,9 @@ def handle_create_finetuning_jobs(request: FineTuningJobsRequest, background_tas
)
finetune_config.General.output_dir = os.path.join(JOBS_PATH, job.id)
if os.getenv("DEVICE", ""):

logger.info(f"specific device: {os.getenv('DEVICE')}")

finetune_config.Training.device = os.getenv("DEVICE")

finetune_config_file = f"{JOBS_PATH}/{job.id}.yaml"
@@ -120,6 +124,7 @@ def handle_create_finetuning_jobs(request: FineTuningJobsRequest, background_tas
# Path to the local directory that contains the script.py file
runtime_env={"working_dir": "./"},
)

logger.info(f"Submitted Ray job: {ray_job_id} ...")

running_finetuning_jobs[job.id] = job
@@ -172,7 +177,9 @@ async def save_content_to_local_disk(save_path: str, content):
content = await content.read()
fout.write(content)
except Exception as e:

logger.info(f"Write file failed. Exception: {e}")

raise Exception(status_code=500, detail=f"Write file {save_path} failed. Exception: {e}")


Empty file added comps/finetuning/jobs/.gitkeep
Empty file.
12 changes: 12 additions & 0 deletions comps/finetuning/lanuch.sh
@@ -0,0 +1,12 @@
# Copyright (C) 2024 Intel Corporation
# SPDX-License-Identifier: Apache-2.0

if [[ -n "$RAY_PORT" ]];then
export RAY_ADDRESS=http://127.0.0.1:$RAY_PORT
ray start --head --port $RAY_PORT
else
export RAY_ADDRESS=http://127.0.0.1:8265
ray start --head
fi

python finetuning_service.py