Multimodal CLIP Embedding Microservice¶
The Multimodal CLIP Embedding Microservice converts textual and visual data into high-dimensional vector embeddings. These embeddings capture the semantic essence of the input, enabling robust applications in multimodal data processing, information retrieval, recommendation systems, and more.
Start Microservice¶
Build Docker Image¶
To build the Docker image, execute the following commands:
# Move three levels up to the repository root (the Docker build context)
cd ../../..
docker build -t opea/embedding:latest \
--build-arg https_proxy=$https_proxy \
--build-arg http_proxy=$http_proxy \
-f comps/embeddings/src/Dockerfile .
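Once the build finishes, you can optionally verify that the image is available locally:
docker images opea/embedding:latest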
Run Docker with Docker Compose¶
cd comps/embeddings/deployment/docker_compose/
# Start only the CLIP embedding service, in detached mode
docker compose up clip-embedding-server -d
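To confirm the container came up cleanly, you can follow its logs (the service name matches the compose entry above):
docker compose logs -f clip-embedding-server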
Consume Embedding Service¶
Check Service Status¶
Verify that the embedding service is running properly by checking its health status with this command:
curl http://localhost:6000/v1/health_check \
-X GET \
-H 'Content-Type: application/json'
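In scripts or CI pipelines it is often useful to block until the service is ready. Here is a minimal sketch against the same endpoint, using curl's -f flag so that non-2xx responses count as failures:
# Poll the health endpoint every 2 seconds until it responds successfully
until curl -sf http://localhost:6000/v1/health_check > /dev/null; do
  echo "Waiting for the embedding service..."
  sleep 2
done
echo "Embedding service is up."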
Use the Embedding Service API¶
The service supports OpenAI API-compatible requests.
Single Text Input:
curl http://localhost:6000/v1/embeddings \
-X POST \
-d '{"input":"Hello, world!"}' \
-H 'Content-Type: application/json'
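Assuming the response follows the OpenAI embeddings schema (a data array whose entries carry an embedding vector) and that jq is installed, you can inspect the dimensionality of the returned vector:
curl -s http://localhost:6000/v1/embeddings \
-X POST \
-d '{"input":"Hello, world!"}' \
-H 'Content-Type: application/json' | jq '.data[0].embedding | length'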
Multiple Texts with Parameters:
curl http://localhost:6000/v1/embeddings \
-X POST \
-d '{"input":["Hello, world!","How are you?"], "dimensions":100}' \
-H 'Content-Type: application/json'
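Under the same schema assumption, a quick sanity check that the batch request returns one embedding per input and that the dimensions parameter was honored:
curl -s http://localhost:6000/v1/embeddings \
-X POST \
-d '{"input":["Hello, world!","How are you?"], "dimensions":100}' \
-H 'Content-Type: application/json' \
| jq '{count: (.data | length), dims: (.data[0].embedding | length)}'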