
Weaviate Multi-Modal Search

This example application spins up a Weaviate instance using the multi2vec-clip module, imports a few sample images (you can add your own images, too!), and provides a simple search frontend built in React using the Weaviate JS client.
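The multi2vec-clip setup runs as two containers: Weaviate itself and a CLIP inference service it calls for vectorization. The fragment below is a hedged sketch of the relevant docker-compose services following Weaviate's module documentation; the exact image tags and settings are in this repository's docker-compose.yml.

```yaml
# Sketch only -- check docker-compose.yml for the exact versions used here.
services:
  weaviate:
    image: semitechnologies/weaviate
    ports:
      - "8080:8080"
    environment:
      ENABLE_MODULES: multi2vec-clip
      DEFAULT_VECTORIZER_MODULE: multi2vec-clip
      CLIP_INFERENCE_API: http://multi2vec-clip:8080
  multi2vec-clip:
    image: semitechnologies/multi2vec-clip:sentence-transformers-clip-ViT-B-32-multilingual-v1
```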

It is a minimal example using only 5 images, but you can add any number of images yourself!

Prerequisites to run it yourself

  • Docker & Docker Compose
  • Bash
  • Node.js and npm/yarn if you also want to run the frontend

Run it yourself

  1. Start Weaviate with docker-compose up -d
  2. Import the schema (the script waits for Weaviate to be ready) with bash ./import/curl/create_schema.sh
  3. Import the images with bash ./import/curl/import.sh
  4. To run the frontend, navigate to the ./frontend folder and run yarn && yarn start. Your browser should open at http://localhost:3000 automatically.
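Once everything is running, a text search is a single GraphQL nearText query against Weaviate, which is essentially what the React frontend sends. The helper below is a hedged sketch of building such a request body; the Image class and filename field are assumptions here, so check ./import/curl/create_schema.sh for the actual schema names.

```python
import json

# Default local GraphQL endpoint after docker-compose up -d
WEAVIATE_URL = "http://localhost:8080/v1/graphql"

def build_near_text_query(concept: str, limit: int = 3) -> str:
    # Match free text against the CLIP-vectorized images.
    # "Image" and "filename" are assumptions; see create_schema.sh.
    return (
        '{ Get { Image(nearText: {concepts: ["%s"]}, limit: %d) '
        '{ filename _additional { certainty } } } }' % (concept, limit)
    )

def as_request_body(query: str) -> str:
    # Weaviate's GraphQL endpoint expects a JSON body: {"query": "..."}
    return json.dumps({"query": query})

# Example: print a body you could POST to WEAVIATE_URL with curl or requests.
print(as_request_body(build_near_text_query("a dog playing in the snow")))
```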

How to run with your own images

Simply add your images to the ./images folder before running the import script. The script looks for the .jpg file extension, but Weaviate supports other image types as well; you can adapt the script to pick those up if you like.
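For reference, what the curl import script does for each image boils down to base64-encoding the file and sending it as a data object to Weaviate's REST /v1/objects endpoint. A minimal Python sketch of that step, assuming an Image class with image and filename properties (the actual names come from ./import/curl/create_schema.sh):

```python
import base64
from pathlib import Path

def encode_image(path: str) -> str:
    # Weaviate's blob data type expects the file's raw bytes as base64.
    return base64.b64encode(Path(path).read_bytes()).decode("ascii")

def to_data_object(path: str, class_name: str = "Image") -> dict:
    # "Image" class and "image"/"filename" properties are assumptions;
    # check create_schema.sh for the schema this repo actually creates.
    return {
        "class": class_name,
        "properties": {
            "image": encode_image(path),
            "filename": Path(path).name,
        },
    }
```

Each resulting object would then be POSTed as JSON to http://localhost:8080/v1/objects, which is what the curl-based import script does.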

Model Credits

This demo uses the clip-ViT-B-32-multilingual-v1 model from SBERT.net. Shout-out to Nils Reimers and his colleagues for the great Sentence Transformers models.

Image credits

The images used in this demo are licensed as follows: