Example DBQnA Deployment on AMD GPU (ROCm)

This document outlines the deployment process for the DBQnA application, which generates a SQL query and its output from a natural-language question, using the GenAIComps microservice pipeline on an AMD GPU (ROCm). This example includes the following sections:

DBQnA Quick Start Deployment

This section describes how to quickly deploy and test the DBQnA service manually on AMD GPU (ROCm). The basic steps are:

  1. Access the Code

  2. Generate a HuggingFace Access Token

  3. Configure the Deployment Environment

  4. Deploy the Service Using Docker Compose

  5. Check the Deployment Status

  6. Test the Pipeline

  7. Cleanup the Deployment

Access the Code

Clone the GenAIExamples repository and access the DBQnA AMD GPU (ROCm) Docker Compose files and supporting scripts:

git clone https://github.com/opea-project/GenAIExamples.git
cd GenAIExamples/DBQnA/docker_compose/

Check out a released version, such as v1.3:

git checkout v1.3

Generate a HuggingFace Access Token

Some HuggingFace resources, such as some models, are only accessible if you have an access token. If you do not already have a HuggingFace access token, you can create one by first creating an account by following the steps provided at HuggingFace and then generating a user access token. The remaining steps cover container deployment via Docker Compose and service execution to integrate the microservices. We will publish the Docker images to Docker Hub soon, which will simplify the deployment process for this service.
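As a minimal sketch, you can export the token and fail early if it is missing. Note that `HUGGINGFACEHUB_API_TOKEN` is an assumed variable name for illustration; use whatever name `set_env.sh` prompts for:

```shell
# Placeholder token for illustration only -- substitute your real token.
# HUGGINGFACEHUB_API_TOKEN is an assumed variable name; set_env.sh may
# prompt for a different one.
export HUGGINGFACEHUB_API_TOKEN="${HUGGINGFACEHUB_API_TOKEN:-hf_replace_me}"

# Fail early if the token is unset or empty.
if [ -z "${HUGGINGFACEHUB_API_TOKEN}" ]; then
  echo "HUGGINGFACEHUB_API_TOKEN is not set" >&2
  exit 1
fi
echo "HuggingFace token is set"
```

Exporting the token before running `set_env.sh` lets the script pick it up instead of prompting interactively.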

Configure the Deployment Environment

To set up environment variables for deploying DBQnA service, source the set_env.sh script in this directory:

source amd/gpu/rocm/set_env.sh

The set_env.sh script will prompt for required and optional environment variables used to configure the DBQnA service based on TGI. If a value is not entered, the script will use a default. It will also generate a .env file defining the resulting configuration.
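After sourcing the script, a quick sanity check might look like the following sketch. The variable names are taken from the test command later in this document; the fallback values here are placeholders, not the script's actual defaults:

```shell
# Placeholder fallbacks; set_env.sh normally exports these for you.
export host_ip="${host_ip:-127.0.0.1}"
export DBQNA_TEXT_TO_SQL_PORT="${DBQNA_TEXT_TO_SQL_PORT:-9090}"
export POSTGRES_USER="${POSTGRES_USER:-postgres}"
export POSTGRES_DB="${POSTGRES_DB:-postgres}"

# Print each variable so misconfigured values are easy to spot.
for v in host_ip DBQNA_TEXT_TO_SQL_PORT POSTGRES_USER POSTGRES_DB; do
  eval "printf '%s=%s\n' \"$v\" \"\$$v\""
done
```

If any line prints an empty value, re-run `source amd/gpu/rocm/set_env.sh` before deploying.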

Deploy the Service Using Docker Compose

To deploy the DBQnA service, execute the docker compose up command with the appropriate arguments. For a default deployment, execute:

cd amd/gpu/rocm/
docker compose -f compose.yaml up -d

The DBQnA Docker images should automatically be downloaded from the OPEA registry and deployed on the AMD GPU (ROCm) host.

Check the Deployment Status

After running docker compose, check if all the containers launched via docker compose have started:

docker ps -a

For the default deployment, the following four containers should be running: dbqna-tgi-service, postgres, text2sql, and text2sql-react-ui.
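The expected service names can be checked with a small loop like the sketch below. Container names may carry a Compose project prefix, so the live `docker ps` call is shown as a comment:

```shell
# Service names from the default compose.yaml; actual container names
# may be prefixed with the Compose project name.
expected_services="dbqna-tgi-service postgres text2sql text2sql-react-ui"

for svc in $expected_services; do
  # For a live check, replace the echo with:
  #   docker ps --filter "name=$svc" --format '{{.Names}}: {{.Status}}'
  echo "expecting container for service: $svc"
done
```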

Test the Pipeline

Once the DBQnA service is running, test the pipeline using the following command:

curl http://${host_ip}:${DBQNA_TEXT_TO_SQL_PORT}/v1/texttosql \
    -X POST \
    -d '{"input_text": "Find the total number of Albums.","conn_str": {"user": "'${POSTGRES_USER}'","password": "'${POSTGRES_PASSWORD}'","host": "'${host_ip}'", "port": "5442", "database": "'${POSTGRES_DB}'"}}' \
    -H 'Content-Type: application/json'
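The same request can be issued with the JSON body built separately, which makes the payload easier to read and edit. The fallback values below are placeholders for illustration; use the values exported by set_env.sh:

```shell
# Placeholder fallbacks for illustration only.
host_ip="${host_ip:-127.0.0.1}"
DBQNA_TEXT_TO_SQL_PORT="${DBQNA_TEXT_TO_SQL_PORT:-9090}"

# Build the request body once, then hand it to curl.
payload=$(cat <<EOF
{
  "input_text": "Find the total number of Albums.",
  "conn_str": {
    "user": "${POSTGRES_USER:-postgres}",
    "password": "${POSTGRES_PASSWORD:-testpwd}",
    "host": "${host_ip}",
    "port": "5442",
    "database": "${POSTGRES_DB:-postgres}"
  }
}
EOF
)
echo "$payload"

# Uncomment to send the request against a running deployment:
# curl -s "http://${host_ip}:${DBQNA_TEXT_TO_SQL_PORT}/v1/texttosql" \
#   -X POST -H 'Content-Type: application/json' -d "$payload"
```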

Cleanup the Deployment

To stop the containers associated with the deployment, execute the following command:

docker compose -f compose.yaml down

All the DBQnA containers will be stopped and then removed on completion of the "down" command.

DBQnA Docker Compose Files

The default compose file, compose.yaml, uses TGI as the serving framework:

| Service Name       | Image Name                                                |
| ------------------ | --------------------------------------------------------- |
| dbqna-tgi-service  | ghcr.io/huggingface/text-generation-inference:2.4.1-rocm  |
| postgres           | postgres:latest                                           |
| text2sql           | opea/text2sql:latest                                      |
| text2sql-react-ui  | opea/text2sql-react-ui:latest                             |

DBQnA Service Configuration for AMD GPUs

The table below gives an overview of the DBQnA services used in the example Docker Compose file. Each row lists a distinct service, the image(s) that can provide it, whether it is optional, and a concise description of its function within the deployment architecture.

| Service Name       | Possible Image Names                                      | Optional | Description                                                                                      |
| ------------------ | --------------------------------------------------------- | -------- | ------------------------------------------------------------------------------------------------ |
| dbqna-tgi-service  | ghcr.io/huggingface/text-generation-inference:2.4.1-rocm  | No       | Specific to the TGI deployment, focuses on text generation inference using AMD GPU (ROCm) hardware. |
| postgres           | postgres:latest                                           | No       | Provides the relational database backend for storing and querying data used by the DBQnA pipeline. |
| text2sql           | opea/text2sql:latest                                      | No       | Handles text-to-SQL conversion tasks.                                                             |
| text2sql-react-ui  | opea/text2sql-react-ui:latest                             | No       | Provides the user interface for the DBQnA service.                                                |