Deploying VisualQnA on Intel® Gaudi® Processors

This document outlines the deployment process for a VisualQnA application utilizing the GenAIComps microservice pipeline on an Intel Gaudi server. The steps include Docker image creation, container deployment via Docker Compose, and service execution to integrate microservices such as the LVM service. The Docker images will be published to Docker Hub, which will simplify the deployment process for this service.

Table of Contents

  1. VisualQnA Quick Start Deployment

  2. Validate Microservices

  3. Launch the UI

VisualQnA Quick Start Deployment

This section describes how to quickly deploy and test the VisualQnA service manually on an Intel® Gaudi® processor. The basic steps are:

  1. Build Docker Images

  2. Setup Environment Variables

  3. Deploy the Services Using Docker Compose

Build Docker Images

First of all, you need to build the Docker images locally. This step can be skipped once the Docker images are published to Docker Hub.

Please refer to the table below to build the different microservices from source:

| Microservice  | Deployment Guide                    |
| ------------- | ----------------------------------- |
| MegaService   | Build MegaService Docker Image      |
| LVM and NGINX | Build LVM and NGINX Docker Images   |
| vLLM or TGI   | Build vLLM or Pull TGI Gaudi Image  |
| UI            | Basic UI build guide                |
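
For reference, a typical image build looks like the following. This is a minimal sketch: the repository is the OPEA GenAIExamples project, but the Dockerfile path is an assumption, so follow the linked guides above for the authoritative steps.

git clone https://github.com/opea-project/GenAIExamples.git
cd GenAIExamples/VisualQnA
# Build the VisualQnA MegaService image (Dockerfile path assumed)
docker build -t opea/visualqna:latest -f Dockerfile .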

Then run the command docker images; you should see the following Docker images:

  1. opea/vllm-gaudi:latest

  2. ghcr.io/huggingface/tgi-gaudi:2.3.1 (Optional)

  3. opea/lvm:latest

  4. opea/visualqna:latest

  5. opea/visualqna-ui:latest

  6. opea/nginx

Setup Environment Variables

Since the compose.yaml file consumes several environment variables, you need to set them up in advance as shown below.

source set_env.sh

Note: Please replace host_ip with your external IP address; do not use localhost.
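
For example, a minimal setup might look like the following. HUGGINGFACEHUB_API_TOKEN is an assumption for gated model downloads; check set_env.sh for the exact variables it expects.

export host_ip=$(hostname -I | awk '{print $1}')  # your external IP, not localhost
export HUGGINGFACEHUB_API_TOKEN="your_hf_token"   # assumed to be needed for model downloads
source set_env.sh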

Deploy the Services Using Docker Compose

cd GenAIExamples/VisualQnA/docker_compose/intel/hpu/gaudi/
docker compose -f compose.yaml up -d
# to use TGI as the LVM serving backend instead, run:
docker compose -f compose_tgi.yaml up -d

NOTE: Users need at least one Gaudi card to run VisualQnA successfully.

After running docker compose, check that all the containers it launched have started.
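
For example, you can list each container's status and port mappings with:

docker ps --format "table {{.Names}}\t{{.Status}}\t{{.Ports}}"

All services should report an Up status before you proceed.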

Validate Microservices

Follow the instructions below to validate each microservice.

Note: If you see an “Internal Server Error” from the curl command, wait a few minutes for the microservice to be ready and then try again.
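
One way to check readiness is to follow the model server's logs until the model has finished loading. The container name below is an assumption; use docker ps to find the actual name on your system.

docker logs -f vllm-gaudi-server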

  1. LVM Microservice

    http_proxy="" curl http://${host_ip}:9399/v1/lvm -XPOST -d '{"image": "iVBORw0KGgoAAAANSUhEUgAAAAoAAAAKCAYAAACNMs+9AAAAFUlEQVR42mP8/5+hnoEIwDiqkL4KAcT9GO0U4BxoAAAAAElFTkSuQmCC", "prompt":"What is this?"}' -H 'Content-Type: application/json'
    
  2. MegaService

curl http://${host_ip}:8888/v1/visualqna -H "Content-Type: application/json" -d '{
    "messages": [
      {
        "role": "user",
        "content": [
          {
            "type": "text",
            "text": "What'\''s in this image?"
          },
          {
            "type": "image_url",
            "image_url": {
              "url": "https://www.ilankelman.org/stopsigns/australia.jpg"
            }
          }
        ]
      }
    ],
    "max_tokens": 300
    }'
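
To query the MegaService with a local image instead of a URL, one option is to base64-encode the file and pass it as a data URI. This is a sketch assuming the service accepts OpenAI-style data URLs; my_image.jpg is a placeholder.

# encode a local image (data-URI support assumed)
IMG_B64=$(base64 -w 0 ./my_image.jpg)
curl http://${host_ip}:8888/v1/visualqna -H "Content-Type: application/json" -d '{
    "messages": [
      {
        "role": "user",
        "content": [
          {"type": "text", "text": "What is in this image?"},
          {"type": "image_url", "image_url": {"url": "data:image/jpeg;base64,'"${IMG_B64}"'"}}
        ]
      }
    ],
    "max_tokens": 300
    }'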

Launch the UI

To access the frontend, open the following URL in your browser: http://${host_ip}:5173. By default, the UI listens on port 5173 inside the container. If you prefer to access the frontend on a different host port, modify the port mapping in the compose.yaml file as shown below:

  visualqna-gaudi-ui-server:
    image: opea/visualqna-ui:latest
    ...
    ports:
      - "80:5173"