LVM Microservice with PredictionGuard

This service uses Prediction Guard for Visual Question Answering (VQA). Prediction Guard provides access to hosted open models with seamlessly integrated safeguards, including factual consistency checks, toxicity filters, and more.

Table of Contents

  1. Start Microservice

  2. Consume LVM Service


Start Microservice

Build Docker Image

First, build the generic LVM microservice Docker image:

cd ../../../
docker build -t opea/lvm:latest \
  --build-arg https_proxy=$https_proxy \
  --build-arg http_proxy=$http_proxy \
  -f comps/lvms/src/Dockerfile .
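Optionally, confirm the image is now available locally (this uses only the standard Docker CLI):

docker images opea/lvm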

Run with Docker Compose

Deploy the PredictionGuard LVM service using Docker Compose.

  1. Export the required environment variables. You can get an API key from Prediction Guard:

    export PREDICTIONGUARD_PORT=9399
    export PREDICTIONGUARD_API_KEY=${your_predictionguard_api_key}
    
  2. Navigate to the Docker Compose directory and start the service (a status check is sketched after these steps):

    cd comps/lvms/deployment/docker_compose/
    docker compose up predictionguard-service -d
    
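After the stack starts, you can verify that the service is up and inspect its logs with standard Docker Compose commands (run from the same directory; predictionguard-service is the service name used above):

docker compose ps predictionguard-service
docker compose logs predictionguard-service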

Consume LVM Service

Once the service is running, you can send requests to the API.

Use the LVM Service API

Send a POST request containing a base64-encoded image and a text prompt.

curl http://localhost:9399/v1/lvm \
  -X POST \
  -H 'Content-Type: application/json' \
  -d '{"image": "iVBORw0KGgoAAAANSUhEUgAAAAoAAAAKCAYAAACNMs+9AAAAFUlEQVR42mP8/5+hnoEIwDiqkL4KAcT9GO0U4BxoAAAAAElFTkSuQmCC", "prompt": "What is this?"}'
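The image value above is a small base64-encoded PNG. To query the service with your own image, encode a local file first. A minimal sketch, assuming a hypothetical image.png in the current directory (on macOS, use base64 -i image.png instead of -w 0):

# Encode the image without line wrapping (GNU coreutils syntax)
base64_img=$(base64 -w 0 image.png)

curl http://localhost:9399/v1/lvm \
  -X POST \
  -H 'Content-Type: application/json' \
  -d "{\"image\": \"${base64_img}\", \"prompt\": \"What is this?\"}"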