# 🌟 Embedding Microservice with TEI
This guide walks you through starting, deploying, and consuming the TEI-based Embeddings Microservice. 🚀
## 📦 1. Start Microservice with `docker run`
### 🔹 1.1 Start Embedding Service with TEI
Start the TEI service, replacing `your_port` and `model` with your desired values:

```bash
your_port=8090
model="BAAI/bge-large-en-v1.5"

docker run -p $your_port:80 -v ./data:/data --name tei-embedding-serving \
  -e http_proxy=$http_proxy -e https_proxy=$https_proxy --pull always \
  ghcr.io/huggingface/text-embeddings-inference:cpu-1.5 --model-id $model
```
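The first startup downloads the model weights, which can take a few minutes. You can follow the container logs to confirm the server is ready (the container name matches the `--name` flag above):

```bash
# Follow the TEI container logs; the server is ready once it reports it is listening
docker logs -f tei-embedding-serving
```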
Test the TEI service. Run the following command to check that the service is up and running:

```bash
curl localhost:$your_port/v1/embeddings \
  -X POST \
  -d '{"input":"What is Deep Learning?"}' \
  -H 'Content-Type: application/json'
```
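If the service is up, the response is a JSON body in the OpenAI embeddings format. The exact fields can vary by TEI version, but the shape is roughly the following (values are illustrative, and the vector is truncated):

```json
{
  "object": "list",
  "data": [{ "object": "embedding", "embedding": [0.012, -0.034, "..."], "index": 0 }],
  "model": "BAAI/bge-large-en-v1.5",
  "usage": { "prompt_tokens": 7, "total_tokens": 7 }
}
```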
### 🔹 1.2 Build Docker Image and Run Docker with CLI
Build the Docker image for the embedding microservice:
```bash
cd ../../../
docker build -t opea/embedding:latest \
  --build-arg https_proxy=$https_proxy --build-arg http_proxy=$http_proxy \
  -f comps/embeddings/src/Dockerfile .
```
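If the build succeeds, the image should be visible in your local image list:

```bash
# Confirm the opea/embedding image was built
docker images opea/embedding
```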
Run the embedding microservice and connect it to the TEI service:
```bash
docker run -d --name="embedding-tei-server" \
  -p 6000:6000 \
  -e http_proxy=$http_proxy -e https_proxy=$https_proxy \
  --ipc=host \
  -e TEI_EMBEDDING_ENDPOINT=$TEI_EMBEDDING_ENDPOINT \
  -e EMBEDDING_COMPONENT_NAME="OPEA_TEI_EMBEDDING" \
  opea/embedding:latest
```
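Note that `TEI_EMBEDDING_ENDPOINT` must be exported before running the command above, and it should point at the TEI service started in 1.1. For example, on Linux (reusing the `your_port` value from earlier):

```bash
# Use a routable host IP; "localhost" would resolve to the microservice container itself
export host_ip=$(hostname -I | awk '{print $1}')
export TEI_EMBEDDING_ENDPOINT="http://${host_ip}:${your_port}"
```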
## 📦 2. Start Microservice with `docker compose`
Deploy both the TEI Embedding Service and the Embedding Microservice using Docker Compose.
🔹 Steps:
Set environment variables:
```bash
export host_ip=${your_ip_address}
export EMBEDDING_MODEL_ID="BAAI/bge-base-en-v1.5"
export TEI_EMBEDDER_PORT=8090
export EMBEDDER_PORT=6000
export TEI_EMBEDDING_ENDPOINT="http://${host_ip}:${TEI_EMBEDDER_PORT}"
```
Navigate to the Docker Compose directory:
```bash
cd comps/embeddings/deployment/docker_compose/
```
Start the services:
```bash
docker compose up tei-embedding-serving tei-embedding-server -d
```
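You can then check that both containers are up and healthy:

```bash
docker compose ps
```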
## 📦 3. Consume Embedding Service
### 🔹 3.1 Check Service Status
Verify the embedding service is running:
```bash
curl http://localhost:6000/v1/health_check \
  -X GET \
  -H 'Content-Type: application/json'
```
### 🔹 3.2 Use the Embedding Service API
The API is compatible with the OpenAI API.
**Single Text Input**
```bash
curl http://localhost:6000/v1/embeddings \
  -X POST \
  -d '{"input":"Hello, world!"}' \
  -H 'Content-Type: application/json'
```
**Multiple Text Inputs with Parameters**
```bash
curl http://localhost:6000/v1/embeddings \
  -X POST \
  -d '{"input":["Hello, world!","How are you?"], "dimensions":100}' \
  -H 'Content-Type: application/json'
```
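To sanity-check a response, you can extract the length of the first embedding vector with `jq` (assuming the OpenAI-style response shape):

```bash
curl -s http://localhost:6000/v1/embeddings \
  -X POST \
  -d '{"input":"Hello, world!"}' \
  -H 'Content-Type: application/json' | jq '.data[0].embedding | length'
```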
## ✨ Tips for Better Understanding
- **Port Mapping:** Ensure the ports are correctly mapped to avoid conflicts with other services.
- **Model Selection:** Choose a model appropriate for your use case, such as `BAAI/bge-large-en-v1.5` or `BAAI/bge-base-en-v1.5`.
- **Environment Variables:** Set `http_proxy` and `https_proxy` if you are behind a proxy.
- **Data Volume:** The `-v ./data:/data` flag mounts the host `data` directory into the container, so downloaded model weights persist across container restarts.