Build MegaService of ChatQnA on Gaudi

This document outlines the deployment process for a ChatQnA application utilizing the GenAIComps microservice pipeline on an Intel Gaudi server. The steps include Docker image creation, container deployment via Docker Compose, and service execution to integrate microservices such as embedding, retriever, rerank, and llm. We will publish the Docker images to Docker Hub, which will simplify the deployment process for this service.

Quick Start:

  1. Set up the environment variables.

  2. Run Docker Compose.

  3. Consume the ChatQnA Service.

Quick Start: 1. Set Up Environment Variables

To set up environment variables for deploying ChatQnA services, follow these steps:

  1. Set the required environment variables:

    # Example: host_ip="192.168.1.1"
    export host_ip="External_Public_IP"
    # Example: no_proxy="localhost, 127.0.0.1, 192.168.1.1"
    export no_proxy="Your_No_Proxy"
    export HUGGINGFACEHUB_API_TOKEN="Your_Huggingface_API_Token"
    
  2. If you are in a proxy environment, also set the proxy-related environment variables:

    export http_proxy="Your_HTTP_Proxy"
    export https_proxy="Your_HTTPs_Proxy"
    
  3. Set up other environment variables:

    source ./set_env.sh
    
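You can spot-check that the variables are exported in your current shell, for example:

echo "${host_ip}"
env | grep -i _proxy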

Quick Start: 2. Run Docker Compose

docker compose up -d

This will automatically pull the Docker images from Docker Hub:

docker pull opea/chatqna:latest
docker pull opea/chatqna-ui:latest

If you want to build the Docker images yourself, refer to the 'Build Docker Images' section below.

Note: The optional Docker image opea/chatqna-without-rerank:latest has not been published yet, so users need to build it from source.
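After the containers are up, a quick way to check their status is to run Docker Compose's status command from the same directory as the compose file:

docker compose ps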

Quick Start: 3. Consume the ChatQnA Service

curl http://${host_ip}:8888/v1/chatqna \
    -H "Content-Type: application/json" \
    -d '{
        "messages": "What is the revenue of Nike in 2023?"
    }'

🚀 Build Docker Images

First of all, you need to build the Docker images locally. This step can be skipped once the Docker images are published to Docker Hub.

1. Build Embedding Image

git clone https://github.com/opea-project/GenAIComps.git
cd GenAIComps
docker build --no-cache -t opea/embedding-tei:latest --build-arg https_proxy=$https_proxy --build-arg http_proxy=$http_proxy -f comps/embeddings/tei/langchain/Dockerfile .

2. Build Retriever Image

docker build --no-cache -t opea/retriever-redis:latest --build-arg https_proxy=$https_proxy --build-arg http_proxy=$http_proxy -f comps/retrievers/redis/langchain/Dockerfile .

3. Build Rerank Image

Skip for ChatQnA without Rerank pipeline

docker build --no-cache -t opea/reranking-tei:latest --build-arg https_proxy=$https_proxy --build-arg http_proxy=$http_proxy -f comps/reranks/tei/Dockerfile .

4. Build LLM Image

You can use different LLM serving solutions; choose one of the following three options.

4.1 Use TGI

docker build --no-cache -t opea/llm-tgi:latest --build-arg https_proxy=$https_proxy --build-arg http_proxy=$http_proxy -f comps/llms/text-generation/tgi/Dockerfile .

4.2 Use VLLM

Build the vLLM Docker image.

docker build --no-cache -t opea/llm-vllm-hpu:latest --build-arg https_proxy=$https_proxy --build-arg http_proxy=$http_proxy -f comps/llms/text-generation/vllm/langchain/dependency/Dockerfile.intel_hpu .

Build the microservice Docker image.

docker build --no-cache -t opea/llm-vllm:latest --build-arg https_proxy=$https_proxy --build-arg http_proxy=$http_proxy -f comps/llms/text-generation/vllm/langchain/Dockerfile .

4.3 Use VLLM-on-Ray

Build the vLLM-on-Ray Docker image.

docker build --no-cache -t opea/llm-vllm-ray-hpu:latest --build-arg https_proxy=$https_proxy --build-arg http_proxy=$http_proxy -f comps/llms/text-generation/vllm/ray/dependency/Dockerfile .

Build the microservice Docker image.

docker build --no-cache -t opea/llm-vllm-ray:latest --build-arg https_proxy=$https_proxy --build-arg http_proxy=$http_proxy -f comps/llms/text-generation/vllm/ray/Dockerfile .

5. Build Dataprep Image

docker build --no-cache -t opea/dataprep-redis:latest --build-arg https_proxy=$https_proxy --build-arg http_proxy=$http_proxy -f comps/dataprep/redis/langchain/Dockerfile .

6. Build Guardrails Docker Image (Optional)

To fortify AI initiatives in production, the Guardrails microservice can secure model inputs and outputs, helping you build trustworthy, safe, and secure LLM-based applications.

docker build -t opea/guardrails-tgi:latest --build-arg https_proxy=$https_proxy --build-arg http_proxy=$http_proxy -f comps/guardrails/llama_guard/langchain/Dockerfile .

7. Build MegaService Docker Image

  1. MegaService with Rerank

    To construct the MegaService with Rerank, we utilize the GenAIComps microservice pipeline within the chatqna.py Python script. Build the MegaService Docker image using the commands below:

    git clone https://github.com/opea-project/GenAIExamples.git
    cd GenAIExamples/ChatQnA/docker
    docker build --no-cache -t opea/chatqna:latest --build-arg https_proxy=$https_proxy --build-arg http_proxy=$http_proxy -f Dockerfile .
    
  2. MegaService with Guardrails

    If you want to enable the guardrails microservice in the pipeline, use the commands below instead:

    git clone https://github.com/opea-project/GenAIExamples.git
    cd GenAIExamples/ChatQnA/
    docker build --no-cache -t opea/chatqna-guardrails:latest --build-arg https_proxy=$https_proxy --build-arg http_proxy=$http_proxy -f Dockerfile.guardrails .
    
  3. MegaService without Rerank

    To construct the MegaService without Rerank, we utilize the GenAIComps microservice pipeline within the chatqna_without_rerank.py Python script. Build the MegaService Docker image using the commands below:

    git clone https://github.com/opea-project/GenAIExamples.git
    cd GenAIExamples/ChatQnA/docker
    docker build --no-cache -t opea/chatqna-without-rerank:latest --build-arg https_proxy=$https_proxy --build-arg http_proxy=$http_proxy -f Dockerfile.without_rerank .
    

8. Build UI Docker Image

Construct the frontend Docker image using the command below:

cd GenAIExamples/ChatQnA/ui
docker build --no-cache -t opea/chatqna-ui:latest --build-arg https_proxy=$https_proxy --build-arg http_proxy=$http_proxy -f ./docker/Dockerfile .

9. Build Conversational React UI Docker Image (Optional)

Build the frontend Docker image that enables a conversational experience with the ChatQnA MegaService using the commands below:

Export the public IP address of your Gaudi node to the host_ip environment variable.
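For example, assuming the first address reported by hostname -I is the node's external IP (adjust if your network setup differs):

export host_ip=$(hostname -I | awk '{print $1}')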

cd GenAIExamples/ChatQnA/ui
docker build --no-cache -t opea/chatqna-conversation-ui:latest --build-arg https_proxy=$https_proxy --build-arg http_proxy=$http_proxy -f ./docker/Dockerfile.react .

Then run the command docker images; you should see the following 7 Docker images (a quick filter for them is shown after the list):

  • opea/embedding-tei:latest

  • opea/retriever-redis:latest

  • opea/reranking-tei:latest

  • opea/llm-tgi:latest or opea/llm-vllm:latest or opea/llm-vllm-ray:latest

  • opea/dataprep-redis:latest

  • opea/chatqna:latest or opea/chatqna-guardrails:latest or opea/chatqna-without-rerank:latest

  • opea/chatqna-ui:latest

If the Conversational React UI is built, you will find one more image:

  • opea/chatqna-conversation-ui:latest

If the Guardrails Docker image is built, you will find one more image:

  • opea/guardrails-tgi:latest
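To quickly confirm that the expected images are present, you can filter the docker images output, for example:

docker images | grep "opea/"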

🚀 Start MicroServices and MegaService

Required Models

By default, the embedding, reranking, and LLM models are set to the values listed below:

Service      Model
Embedding    BAAI/bge-base-en-v1.5
Reranking    BAAI/bge-reranker-base
LLM          Intel/neural-chat-7b-v3-3

Change the xxx_MODEL_ID values below to suit your needs.
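For example, to point the pipeline at a different LLM (the model name below is only an illustration; use any model supported by your chosen serving backend):

export LLM_MODEL_ID="meta-llama/Meta-Llama-3-8B-Instruct"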

If you are in China and unable to download models directly from Hugging Face, you can use ModelScope or a Hugging Face mirror to download models instead. TGI can load the models either online or offline, as described below:

  1. Online

    export HF_TOKEN=${your_hf_token}
    export HF_ENDPOINT="https://hf-mirror.com"
    model_name="Intel/neural-chat-7b-v3-3"
    docker run -p 8008:80 -v ./data:/data --name tgi-service -e HF_ENDPOINT=$HF_ENDPOINT -e http_proxy=$http_proxy -e https_proxy=$https_proxy --runtime=habana -e HABANA_VISIBLE_DEVICES=all -e OMPI_MCA_btl_vader_single_copy_mechanism=none -e HUGGING_FACE_HUB_TOKEN=$HF_TOKEN -e ENABLE_HPU_GRAPH=true -e LIMIT_HPU_GRAPH=true -e USE_FLASH_ATTENTION=true -e FLASH_ATTENTION_RECOMPUTE=true --cap-add=sys_nice --ipc=host ghcr.io/huggingface/tgi-gaudi:2.0.5 --model-id $model_name --max-input-tokens 1024 --max-total-tokens 2048
    
  2. Offline

    • Search your model name in ModelScope. For example, check this page for model neural-chat-7b-v3-1.

    • Click the Download this model button and choose a way to download the model to your local path /path/to/model.

    • Run the following command to start the TGI service.

      export HF_TOKEN=${your_hf_token}
      export model_path="/path/to/model"
      docker run -p 8008:80 -v $model_path:/data --name tgi_service --runtime=habana -e HABANA_VISIBLE_DEVICES=all -e OMPI_MCA_btl_vader_single_copy_mechanism=none -e HUGGING_FACE_HUB_TOKEN=$HF_TOKEN -e ENABLE_HPU_GRAPH=true -e LIMIT_HPU_GRAPH=true -e USE_FLASH_ATTENTION=true -e FLASH_ATTENTION_RECOMPUTE=true --cap-add=sys_nice --ipc=host ghcr.io/huggingface/tgi-gaudi:2.0.5 --model-id /data --max-input-tokens 1024 --max-total-tokens 2048
      

Setup Environment Variables

Since compose.yaml consumes some environment variables, you need to set them up in advance, as shown below.

export no_proxy=${your_no_proxy}
export http_proxy=${your_http_proxy}
export https_proxy=${your_http_proxy}
export EMBEDDING_MODEL_ID="BAAI/bge-base-en-v1.5"
export RERANK_MODEL_ID="BAAI/bge-reranker-base"
export LLM_MODEL_ID="Intel/neural-chat-7b-v3-3"
export LLM_MODEL_ID_NAME="neural-chat-7b-v3-3"
export TEI_EMBEDDING_ENDPOINT="http://${host_ip}:8090"
export TEI_RERANKING_ENDPOINT="http://${host_ip}:8808"
export TGI_LLM_ENDPOINT="http://${host_ip}:8005"
export vLLM_LLM_ENDPOINT="http://${host_ip}:8007"
export vLLM_RAY_LLM_ENDPOINT="http://${host_ip}:8006"
export LLM_SERVICE_PORT=9000
export REDIS_URL="redis://${host_ip}:6379"
export INDEX_NAME="rag-redis"
export HUGGINGFACEHUB_API_TOKEN=${your_hf_api_token}
export MEGA_SERVICE_HOST_IP=${host_ip}
export EMBEDDING_SERVICE_HOST_IP=${host_ip}
export RETRIEVER_SERVICE_HOST_IP=${host_ip}
export RERANK_SERVICE_HOST_IP=${host_ip}
export LLM_SERVICE_HOST_IP=${host_ip}
export BACKEND_SERVICE_ENDPOINT="http://${host_ip}:8888/v1/chatqna"
export DATAPREP_SERVICE_ENDPOINT="http://${host_ip}:6007/v1/dataprep"
export DATAPREP_GET_FILE_ENDPOINT="http://${host_ip}:6007/v1/dataprep/get_file"
export DATAPREP_DELETE_FILE_ENDPOINT="http://${host_ip}:6007/v1/dataprep/delete_file"

export llm_service_devices=all
export tei_embedding_devices=all

To specify the device IDs, llm_service_devices and tei_embedding_devices can be set to a comma-separated list such as "0,1,2,3". More info in the Gaudi docs.
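For example, to pin the LLM backend to the first two Gaudi cards and the embedding service to a third one (illustrative IDs; choose values that match your node):

export llm_service_devices=0,1
export tei_embedding_devices=2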

If the guardrails microservice is enabled in the pipeline, the following environment variables must also be set.

export GURADRAILS_MODEL_ID="meta-llama/Meta-Llama-Guard-2-8B"
export SAFETY_GUARD_MODEL_ID="meta-llama/Meta-Llama-Guard-2-8B"
export SAFETY_GUARD_ENDPOINT="http://${host_ip}:8088"
export GUARDRAIL_SERVICE_HOST_IP=${host_ip}

Note: Please replace host_ip with your external IP address; do NOT use localhost.

Start all the services Docker Containers

cd GenAIExamples/ChatQnA/docker_compose/intel/hpu/gaudi/

If using TGI as the LLM backend:

# Start ChatQnA with Rerank Pipeline
docker compose -f compose.yaml up -d
# Start ChatQnA without Rerank Pipeline
docker compose -f compose_without_rerank.yaml up -d

If using vLLM as the LLM backend:

docker compose -f compose_vllm.yaml up -d

If using vLLM-on-Ray as the LLM backend:

docker compose -f compose_vllm_ray.yaml up -d

If you want to enable the guardrails microservice in the pipeline, use the commands below instead:

cd GenAIExamples/ChatQnA/docker_compose/intel/hpu/gaudi/
docker compose -f compose_guardrails.yaml up -d

NOTE: Users need at least two Gaudi cards to run ChatQnA successfully.
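You can check how many Gaudi devices are visible on the node with Habana's hl-smi tool (available on hosts with the Gaudi software stack installed):

hl-smi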

Validate MicroServices and MegaService

Follow the instructions below to validate the MicroServices and MegaService. For validation details, please refer to how-to-validate_service.

  1. TEI Embedding Service

    curl ${host_ip}:8090/embed \
        -X POST \
        -d '{"inputs":"What is Deep Learning?"}' \
        -H 'Content-Type: application/json'
    
  2. Embedding Microservice

    curl http://${host_ip}:6000/v1/embeddings \
      -X POST \
      -d '{"text":"hello"}' \
      -H 'Content-Type: application/json'
    
  3. Retriever Microservice

    To consume the retriever microservice, you need to generate a mock embedding vector with a Python one-liner. The length of the embedding vector is determined by the embedding model. Here we use EMBEDDING_MODEL_ID="BAAI/bge-base-en-v1.5", whose vector size is 768.

    Check the vector dimension of your embedding model and set the dimension of your_embedding to match it.

    export your_embedding=$(python3 -c "import random; embedding = [random.uniform(-1, 1) for _ in range(768)]; print(embedding)")
    curl http://${host_ip}:7000/v1/retrieval \
      -X POST \
      -d "{\"text\":\"test\",\"embedding\":${your_embedding}}" \
      -H 'Content-Type: application/json'
    
  4. TEI Reranking Service

    Skip for ChatQnA without Rerank pipeline

    curl http://${host_ip}:8808/rerank \
        -X POST \
        -d '{"query":"What is Deep Learning?", "texts": ["Deep Learning is not...", "Deep learning is..."]}' \
        -H 'Content-Type: application/json'
    
  5. Reranking Microservice

    Skip for ChatQnA without Rerank pipeline

    curl http://${host_ip}:8000/v1/reranking \
      -X POST \
      -d '{"initial_query":"What is Deep Learning?", "retrieved_docs": [{"text":"Deep Learning is not..."}, {"text":"Deep learning is..."}]}' \
      -H 'Content-Type: application/json'
    
  6. LLM backend Service

    On first startup, this service takes extra time to download the model files. Once the download is finished, the service will be ready.

    Try the command below to check whether the LLM service is ready.

    docker logs ${CONTAINER_ID} | grep Connected
    

    If the service is ready, you will see a log line like the one below.

    2024-09-03T02:47:53.402023Z  INFO text_generation_router::server: router/src/server.rs:2311: Connected
    

    Then try the cURL commands below to validate the services.

    #TGI Service
    curl http://${host_ip}:8005/generate \
      -X POST \
      -d '{"inputs":"What is Deep Learning?","parameters":{"max_new_tokens":64, "do_sample": true}}' \
      -H 'Content-Type: application/json'
    
    #vLLM Service
    curl http://${host_ip}:8007/v1/completions \
      -H "Content-Type: application/json" \
      -d "{
      \"model\": \"${LLM_MODEL_ID}\",
      \"prompt\": \"What is Deep Learning?\",
      \"max_tokens\": 32,
      \"temperature\": 0
      }"
    
    #vLLM-on-Ray Service
    curl http://${host_ip}:8006/v1/chat/completions \
      -H "Content-Type: application/json" \
      -d '{"model": "${LLM_MODEL_ID}", "messages": [{"role": "user", "content": "What is Deep Learning?"}]}'
    
  7. LLM Microservice

    curl http://${host_ip}:9000/v1/chat/completions \
      -X POST \
      -d '{"query":"What is Deep Learning?","max_new_tokens":17,"top_k":10,"top_p":0.95,"typical_p":0.95,"temperature":0.01,"repetition_penalty":1.03,"streaming":true}' \
      -H 'Content-Type: application/json'
    
  8. MegaService

    curl http://${host_ip}:8888/v1/chatqna -H "Content-Type: application/json" -d '{
         "messages": "What is the revenue of Nike in 2023?"
         }'
    
  9. Dataprep Microservice(Optional)

    If you want to update the default knowledge base, you can use the following commands:

    Update Knowledge Base via Local File Upload:

    curl -X POST "http://${host_ip}:6007/v1/dataprep" \
         -H "Content-Type: multipart/form-data" \
         -F "files=@./nke-10k-2023.pdf"
    

    This command updates a knowledge base by uploading a local file for processing. Update the file path according to your environment.

    Add Knowledge Base via HTTP Links:

    curl -X POST "http://${host_ip}:6007/v1/dataprep" \
         -H "Content-Type: multipart/form-data" \
         -F 'link_list=["https://opea.dev"]'
    

    This command updates a knowledge base by submitting a list of HTTP links for processing.

    Also, you are able to get the file/link list that you uploaded:

    curl -X POST "http://${host_ip}:6007/v1/dataprep/get_file" \
         -H "Content-Type: application/json"
    

    Then you will get a JSON response like the one below. Notice that the returned name/id of the uploaded link is https://xxx.txt.

    [
      {
        "name": "nke-10k-2023.pdf",
        "id": "nke-10k-2023.pdf",
        "type": "File",
        "parent": ""
      },
      {
        "name": "https://opea.dev.txt",
        "id": "https://opea.dev.txt",
        "type": "File",
        "parent": ""
      }
    ]
    

    To delete the file/link you uploaded:

    # delete link
    curl -X POST "http://${host_ip}:6007/v1/dataprep/delete_file" \
         -d '{"file_path": "https://opea.dev.txt"}' \
         -H "Content-Type: application/json"
    
    # delete file
    curl -X POST "http://${host_ip}:6007/v1/dataprep/delete_file" \
         -d '{"file_path": "nke-10k-2023.pdf"}' \
         -H "Content-Type: application/json"
    
    # delete all uploaded files and links
    curl -X POST "http://${host_ip}:6007/v1/dataprep/delete_file" \
         -d '{"file_path": "all"}' \
         -H "Content-Type: application/json"
    
  10. Guardrails (Optional)

curl http://${host_ip}:9090/v1/guardrails \
  -X POST \
  -d '{"text":"How do you buy a tiger in the US?","parameters":{"max_new_tokens":32}}' \
  -H 'Content-Type: application/json'

🚀 Launch the UI

To access the frontend, open the following URL in your browser: http://{host_ip}:5173. By default, the UI runs on port 5173 internally. If you prefer to use a different host port to access the frontend, you can modify the port mapping in the compose.yaml file as shown below:

  chaqna-gaudi-ui-server:
    image: opea/chatqna-ui:latest
    ...
    ports:
      - "80:5173"


Here is an example of running ChatQnA:

[screenshot: example of running ChatQnA]

🚀 Launch the Conversational UI (Optional)

To access the Conversational UI (React-based) frontend, modify the UI service in the compose.yaml file. Replace the chaqna-gaudi-ui-server service with the chaqna-gaudi-conversation-ui-server service as per the config below:

chaqna-gaudi-conversation-ui-server:
  image: opea/chatqna-conversation-ui:latest
  container_name: chatqna-gaudi-conversation-ui-server
  environment:
    - APP_BACKEND_SERVICE_ENDPOINT=${BACKEND_SERVICE_ENDPOINT}
    - APP_DATA_PREP_SERVICE_URL=${DATAPREP_SERVICE_ENDPOINT}
  ports:
    - "5174:80"
  depends_on:
    - chaqna-gaudi-backend-server
  ipc: host
  restart: always

Once the services are up, open the following URL in your browser: http://{host_ip}:5174. By default, the UI runs on port 80 internally. If you prefer to use a different host port to access the frontend, you can modify the port mapping in the compose.yaml file as shown below:

  chaqna-gaudi-conversation-ui-server:
    image: opea/chatqna-conversation-ui:latest
    ...
    ports:
      - "80:80"

Here is an example of running ChatQnA with Conversational UI (React):

[screenshot: ChatQnA with Conversational UI (React)]