Build and deploy the CodeTrans Application on AMD GPU (ROCm)

Build images

Build the LLM Docker Image

### Cloning repo
git clone https://github.com/opea-project/GenAIComps.git
cd GenAIComps

### Build Docker image
docker build -t opea/llm-tgi:latest --build-arg https_proxy=$https_proxy --build-arg http_proxy=$http_proxy -f comps/llms/text-generation/tgi/Dockerfile .

Build the MegaService Docker Image

### Cloning repo
git clone https://github.com/opea-project/GenAIExamples
cd GenAIExamples/CodeTrans

### Build Docker image
docker build -t opea/codetrans:latest --build-arg https_proxy=$https_proxy --build-arg http_proxy=$http_proxy -f Dockerfile .

Build the UI Docker Image

cd GenAIExamples/CodeTrans/ui
### Build UI Docker image
docker build -t opea/codetrans-ui:latest --build-arg https_proxy=$https_proxy --build-arg http_proxy=$http_proxy -f ./docker/Dockerfile .
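
After all three builds finish, you can check that the images are available locally. This is only a quick sanity check; the tags match the ones used in the build commands above:

### List the freshly built CodeTrans images
docker images | grep -E 'opea/(llm-tgi|codetrans|codetrans-ui)'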

Deploy CodeTrans Application

Features of the Docker Compose setup for AMD GPUs

  1. The Compose file forwards the GPU devices to the TGI service container with the following directives:

shm_size: 1g
devices:
  - /dev/kfd:/dev/kfd
  - /dev/dri/:/dev/dri/
cap_add:
  - SYS_PTRACE
group_add:
  - video
security_opt:
  - seccomp:unconfined

With this configuration, all GPUs are passed through to the container. To pass through only a specific GPU, use its specific device names cardN and renderDN.

For example:

shm_size: 1g
devices:
  - /dev/kfd:/dev/kfd
  - /dev/dri/card0:/dev/dri/card0
  - /dev/dri/renderD128:/dev/dri/renderD128
cap_add:
  - SYS_PTRACE
group_add:
  - video
security_opt:
  - seccomp:unconfined

To find out which cardN and renderDN device IDs correspond to the same physical GPU, use the AMD GPU driver utilities, as shown in the example below.
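
One way to establish this mapping, assuming the ROCm utilities are installed on the host, is to compare PCI bus addresses: rocm-smi reports the bus ID of each GPU, and the symlinks under /dev/dri/by-path carry the same address in their names:

### Show the PCI bus ID of every GPU known to the ROCm driver
rocm-smi --showbus

### Each by-path symlink ties a cardN or renderDN node to a PCI bus address,
### so matching the addresses shows which nodes belong to which GPU
ls -l /dev/dri/by-path/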

Go to the directory with the Docker compose file

cd GenAIExamples/CodeTrans/docker_compose/amd/gpu/rocm

Set environment variables

Set the required values in the file GenAIExamples/CodeTrans/docker_compose/amd/gpu/rocm/set_env.sh. The purpose of each parameter is described in the comment above the corresponding variable assignment.

chmod +x set_env.sh
. set_env.sh
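
For reference, the script exports the host address and the service ports used in the validation commands below. The following is only an illustrative sketch; the port values are placeholders, and the actual set_env.sh in the repository defines the complete set of variables:

### Illustrative values only - adjust to your environment and to the real set_env.sh
export HOST_IP=$(hostname -I | awk '{print $1}')  # address the services are reachable on
export CODETRANS_TGI_SERVICE_PORT=8008            # TGI text-generation service
export CODETRANS_LLM_SERVICE_PORT=9000            # LLM microservice
export CODETRANS_BACKEND_SERVICE_PORT=7777        # CodeTrans MegaService backend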

Run services

docker compose up -d
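
The TGI service needs some time to download the model on first start. You can check the container status and follow the logs until the services report that they are ready:

### List the containers started by this Compose file and their state
docker compose ps

### Follow the logs; stop with Ctrl+C once the TGI service reports it is ready
docker compose logs -f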

Validate the MicroServices and MegaService

Validate TGI service

curl http://${HOST_IP}:${CODETRANS_TGI_SERVICE_PORT}/generate \
  -X POST \
  -d '{"inputs":"    ### System: Please translate the following Golang codes into  Python codes.    ### Original codes:    '\'''\'''\''Golang    \npackage main\n\nimport \"fmt\"\nfunc main() {\n    fmt.Println(\"Hello, World!\");\n    '\'''\'''\''    ### Translated codes:","parameters":{"max_new_tokens":17, "do_sample": true}}' \
  -H 'Content-Type: application/json'

Validate LLM service

curl http://${HOST_IP}:${CODETRANS_LLM_SERVICE_PORT}/v1/chat/completions \
  -X POST \
  -d '{"query":"    ### System: Please translate the following Golang codes into  Python codes.    ### Original codes:    '\'''\'''\''Golang    \npackage main\n\nimport \"fmt\"\nfunc main() {\n    fmt.Println(\"Hello, World!\");\n    '\'''\'''\''    ### Translated codes:"}' \
  -H 'Content-Type: application/json'

Validate MegaService

curl http://${HOST_IP}:${CODETRANS_BACKEND_SERVICE_PORT}/v1/codetrans \
  -H "Content-Type: application/json" \
  -d '{"language_from": "Golang","language_to": "Python","source_code": "package main\n\nimport \"fmt\"\nfunc main() {\n    fmt.Println(\"Hello, World!\");\n}"}'