Build Mega Service of AudioQnA on AMD ROCm GPU¶
This document outlines the deployment process for an AudioQnA application using the GenAIComps microservice pipeline on a server with an AMD ROCm GPU platform.
Build Docker Images¶
1. Build Docker Image¶
Create application install directory and go to it:
mkdir ~/audioqna-install && cd ~/audioqna-install
Clone the repository GenAIExamples (the default repository branch “main” is used here):
git clone https://github.com/opea-project/GenAIExamples.git
If you need to use a specific branch/tag of the GenAIExamples repository, check it out after cloning (replace v1.3 with the desired value):
git clone https://github.com/opea-project/GenAIExamples.git && cd GenAIExamples && git checkout v1.3
We remind you that when using a specific version of the code, you need to use the README from that version.
Go to build directory:
cd ~/audioqna-install/GenAIExamples/AudioQnA/docker_image_build
Clean up the GenAIComps repository if it was previously cloned into this directory. This is necessary if a build was performed earlier and the GenAIComps folder exists and is not empty:
rm -rf GenAIComps
Clone the repository GenAIComps (the default repository branch “main” is used here):
git clone https://github.com/opea-project/GenAIComps.git
cd GenAIComps
We remind you that when using a specific version of the code, you need to use the README from this version.
Set the list of images for the build (from the file build.yaml).
To deploy a vLLM-based or TGI-based application, set the service list as follows:
vLLM-based application
service_list="vllm-rocm whisper speecht5 audioqna audioqna-ui"
TGI-based application
service_list="whisper speecht5 audioqna audioqna-ui"
Optional. Pull TGI Docker Image (Do this if you want to use TGI)
docker pull ghcr.io/huggingface/text-generation-inference:2.3.1-rocm
Build Docker Images
docker compose -f build.yaml build ${service_list} --no-cache
After the build, check the list of images with the command:
docker image ls
The list of images should include:
vLLM-based application:
opea/vllm-rocm:latest
opea/whisper:latest
opea/speecht5:latest
opea/audioqna:latest
TGI-based application:
ghcr.io/huggingface/text-generation-inference:2.3.1-rocm
opea/whisper:latest
opea/speecht5:latest
opea/audioqna:latest
Deploy the AudioQnA Application¶
Docker Compose Configuration for AMD GPUs¶
To enable GPU support for AMD GPUs, the following configuration is added to the Docker Compose file:
compose_vllm.yaml - for the vLLM-based application
compose.yaml - for the TGI-based application
shm_size: 1g
devices:
- /dev/kfd:/dev/kfd
- /dev/dri/:/dev/dri/
cap_add:
- SYS_PTRACE
group_add:
- video
security_opt:
- seccomp:unconfined
This configuration forwards all available GPUs to the container. To use a specific GPU, specify its cardN and renderDN device IDs. For example:
shm_size: 1g
devices:
- /dev/kfd:/dev/kfd
- /dev/dri/card0:/dev/dri/card0
- /dev/dri/renderD128:/dev/dri/renderD128
cap_add:
- SYS_PTRACE
group_add:
- video
security_opt:
- seccomp:unconfined
How to Identify GPU Device IDs:
Use AMD GPU driver utilities to determine the correct cardN and renderDN IDs for your GPU.
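As a concrete sketch, the DRM device nodes can be listed directly, and the card/render pairing derived from the index (the only assumptions here are the example card index and the convention that render node numbering starts at 128):

```shell
# List the DRM device nodes exposed by the driver; each GPU appears as a
# cardN / renderDN pair under /dev/dri. Guarded so the command also works
# on hosts without a GPU.
ls /dev/dri 2>/dev/null || echo "no /dev/dri nodes visible on this host"

# Render nodes are conventionally numbered from 128, so card0 pairs with
# renderD128, card1 with renderD129, and so on:
card_index=0
render_node="renderD$((128 + card_index))"
echo "$render_node"
```

When several cards are installed, tools such as rocm-smi can help correlate a GPU with its PCI bus ID before picking the card index.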
Set deploy environment variables¶
Setting variables in the operating system environment:¶
Set variable HUGGINGFACEHUB_API_TOKEN:¶
### Replace the string 'your_huggingfacehub_token' with your HuggingFacehub repository access token.
export HUGGINGFACEHUB_API_TOKEN='your_huggingfacehub_token'
Set variable values in the set_env*.sh file:¶
Go to Docker Compose directory:
cd ~/audioqna-install/GenAIExamples/AudioQnA/docker_compose/amd/gpu/rocm
The example uses the Nano text editor. You can use any convenient text editor:
If you use vLLM¶
nano set_env_vllm.sh
If you use TGI¶
nano set_env.sh
If you are in a proxy environment, also set the proxy-related environment variables:
export http_proxy="Your_HTTP_Proxy"
export https_proxy="Your_HTTPs_Proxy"
Set the values of the variables:
HOST_IP, HOST_IP_EXTERNAL - These variables set the name/address used by the application services to reach each other and the outside world.
If your server uses only an internal address and is not accessible from the Internet, then the values for these two variables will be the same and the value will be equal to the server’s internal name/address.
If your server uses only an external, Internet-accessible address, then the values for these two variables will be the same and the value will be equal to the server’s external name/address.
If your server is located on an internal network, has an internal address, but is accessible from the Internet via a proxy/firewall/load balancer, then HOST_IP will be the internal name/address of the server, and HOST_IP_EXTERNAL will be the external name/address of the proxy/firewall/load balancer in front of the server.
We set these values in the set_env*.sh file.
Variables with names like "*_PORT" - These variables set the port numbers used for network connections to the application services. The values shown in set_env.sh and set_env_vllm.sh were used for development and testing of the application and are configured for that environment. Adjust them according to the network access rules of your environment's server, and make sure they do not overlap with ports already used by other applications.
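As a quick way to review the configured ports before starting the services, all "_PORT" assignments can be grepped out of the env script. A minimal sketch (the file name and port values below are stand-ins created for illustration, not the real defaults):

```shell
# Create a small stand-in env file so the commands below run as written;
# in practice you would grep the real set_env.sh or set_env_vllm.sh.
cat > set_env_example.sh <<'EOF'
export AUDIOQNA_TGI_SERVICE_PORT=3006
export AUDIOQNA_BACKEND_SERVICE_PORT=3008
EOF

# List every port assignment so it can be checked against ports already
# in use on the host:
grep -E '_PORT=' set_env_example.sh
```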
Set variables with the set_env*.sh script¶
If you use vLLM¶
. set_env_vllm.sh
If you use TGI¶
. set_env.sh
Start the services:¶
If you use vLLM¶
docker compose -f compose_vllm.yaml up -d
If you use TGI¶
docker compose -f compose.yaml up -d
All containers should be up and running, and none should be restarting:
If you use vLLM:¶
audioqna-vllm-service
whisper-service
speecht5-service
audioqna-backend-server
audioqna-ui-server
If you use TGI:¶
audioqna-tgi-service
whisper-service
speecht5-service
audioqna-backend-server
audioqna-ui-server
Validate the Services¶
1. Validate the vLLM/TGI Service¶
If you use vLLM:¶
DATA='{"model": "Intel/neural-chat-7b-v3-3", '\
'"messages": [{"role": "user", "content": "What is Deep Learning?"}], "max_tokens": 256}'
curl http://${HOST_IP}:${AUDIOQNA_VLLM_SERVICE_PORT}/v1/chat/completions \
-X POST \
-d "$DATA" \
-H 'Content-Type: application/json'
Check the response from the service. It should be similar to the following JSON:
{
"id": "chatcmpl-142f34ef35b64a8db3deedd170fed951",
"object": "chat.completion",
"created": 1742270316,
"model": "Intel/neural-chat-7b-v3-3",
"choices": [
{
"index": 0,
"message": {
"role": "assistant",
"content": "",
"tool_calls": []
},
"logprobs": null,
"finish_reason": "length",
"stop_reason": null
}
],
"usage": { "prompt_tokens": 66, "total_tokens": 322, "completion_tokens": 256, "prompt_tokens_details": null },
"prompt_logprobs": null
}
If the response contains meaningful text in the value of the "choices.message.content" key, the vLLM service is considered successfully launched.
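To pull just that field out of the response for a quick check, the JSON can be post-processed. A sketch (the sample response string below is made up, and the sed pattern assumes the compact single-line output with no escaped quotes inside the content field; jq -r '.choices[0].message.content' is the more robust option if jq is installed):

```shell
# A made-up compact response used to illustrate the extraction:
RESPONSE='{"choices":[{"index":0,"message":{"role":"assistant","content":"Deep Learning is a branch of machine learning."}}]}'

# Crude extraction of the answer text with sed:
TEXT=$(printf '%s' "$RESPONSE" | sed -E 's/.*"content":"([^"]*)".*/\1/')
echo "$TEXT"
```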
If you use TGI:¶
DATA='{"inputs":"What is Deep Learning?",'\
'"parameters":{"max_new_tokens":256,"do_sample": true}}'
curl http://${HOST_IP}:${AUDIOQNA_TGI_SERVICE_PORT}/generate \
-X POST \
-d "$DATA" \
-H 'Content-Type: application/json'
Check the response from the service. It should be similar to the following JSON:
{
"generated_text": " "
}
If the response contains meaningful text in the value of the "generated_text" key, the TGI service is considered successfully launched.
2. Validate MegaServices¶
Test the AudioQnA megaservice by recording a .wav file, encoding the file into the base64 format, and then sending the base64 string to the megaservice endpoint. The megaservice will return a spoken response as a base64 string. To listen to the response, decode the base64 string and save it as a .wav file.
# voice can be "default" or "male"
curl http://${HOST_IP}:3008/v1/audioqna \
-X POST \
-d '{"audio": "UklGRigAAABXQVZFZm10IBIAAAABAAEARKwAAIhYAQACABAAAABkYXRhAgAAAAEA", "max_tokens":64, "voice":"default"}' \
-H 'Content-Type: application/json' | sed 's/^"//;s/"$//' | base64 -d > output.wav
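The encode step can be sketched end to end. In this sketch the placeholder file stands in for a real recording (sample.wav is an example name), and base64 -w 0 is the GNU coreutils way to get a single-line string (macOS uses base64 -i with no wrapping by default):

```shell
# Stand-in for a real recording; replace with your own .wav file.
printf 'RIFFxxxxWAVEfmt ' > sample.wav

# Encode to a single-line base64 string suitable for the JSON payload:
AUDIO_B64=$(base64 -w 0 sample.wav)

# Build the request body; the curl call (commented out here) matches the
# megaservice request shown above:
printf '{"audio": "%s", "max_tokens": 64, "voice": "default"}' "$AUDIO_B64" > request.json
# curl http://${HOST_IP}:3008/v1/audioqna -X POST -d @request.json \
#   -H 'Content-Type: application/json' | sed 's/^"//;s/"$//' | base64 -d > output.wav
```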
3. Validate MicroServices¶
# whisper service
curl http://${HOST_IP}:7066/v1/asr \
-X POST \
-d '{"audio": "UklGRigAAABXQVZFZm10IBIAAAABAAEARKwAAIhYAQACABAAAABkYXRhAgAAAAEA"}' \
-H 'Content-Type: application/json'
# speecht5 service
curl http://${HOST_IP}:7055/v1/tts \
-X POST \
-d '{"text": "Who are you?"}' \
-H 'Content-Type: application/json'
4. Stop the Application¶
If you use vLLM¶
cd ~/audioqna-install/GenAIExamples/AudioQnA/docker_compose/amd/gpu/rocm
docker compose -f compose_vllm.yaml down
If you use TGI¶
cd ~/audioqna-install/GenAIExamples/AudioQnA/docker_compose/amd/gpu/rocm
docker compose -f compose.yaml down