# Build Mega Service of MultimodalQnA for AMD ROCm

This document outlines the deployment process for a MultimodalQnA application utilizing the [GenAIComps](https://github.com/opea-project/GenAIComps.git) microservice pipeline on an AMD server with ROCm GPUs. The steps include Docker image creation, container deployment via Docker Compose, and service execution to integrate microservices such as `multimodal_embedding` (which employs the [BridgeTower](https://huggingface.co/BridgeTower/bridgetower-large-itm-mlm-gaudi) model as the embedding model), `multimodal_retriever`, `lvm`, and `multimodal-data-prep`. We will publish the Docker images to Docker Hub soon, which will simplify the deployment process for this service.

## Setup Environment Variables

Since `compose.yaml` consumes several environment variables, you need to set them up in advance. Use the provided `set_env.sh` script (`. set_env.sh`) to set all required environment variables.

**Export the value of the public IP address of your server to the `host_ip` environment variable.**

> Note: Replace `host_ip` with your external IP address; do not use `localhost`.

## 🚀 Build Docker Images

### 1. Build embedding-multimodal-bridgetower Image

Build the embedding-multimodal-bridgetower Docker image (ensure `EMBEDDER_PORT` is set in your environment, e.g. via the `set_env.sh` script described above):

```bash
git clone https://github.com/opea-project/GenAIComps.git
cd GenAIComps
docker build --no-cache -t opea/embedding-multimodal-bridgetower:latest --build-arg EMBEDDER_PORT=$EMBEDDER_PORT --build-arg https_proxy=$https_proxy --build-arg http_proxy=$http_proxy -f comps/third_parties/bridgetower/src/Dockerfile .
```

Build the embedding microservice image:

```bash
docker build --no-cache -t opea/embedding:latest --build-arg https_proxy=$https_proxy --build-arg http_proxy=$http_proxy -f comps/embeddings/src/Dockerfile .
```

### 2. Build LVM Images

Build the lvm-llava image:

```bash
docker build --no-cache -t opea/lvm-llava:latest --build-arg https_proxy=$https_proxy --build-arg http_proxy=$http_proxy -f comps/lvms/src/integrations/dependency/llava/Dockerfile .
```

The image list and the `lvm` validation step below also use an `opea/lvm:latest` wrapper image; see the note after step 4 for a suggested build command.

### 3. Build retriever-multimodal-redis Image

```bash
docker build --no-cache -t opea/retriever:latest --build-arg https_proxy=$https_proxy --build-arg http_proxy=$http_proxy -f comps/retrievers/src/Dockerfile .
```

### 4. Build dataprep-multimodal-redis Image

```bash
docker build --no-cache -t opea/dataprep:latest --build-arg https_proxy=$https_proxy --build-arg http_proxy=$http_proxy -f comps/dataprep/src/Dockerfile .
```
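> Note: the image list and the `lvm` validation step below expect an `opea/lvm:latest` image in addition to `opea/lvm-llava:latest`, but no build step for it is given above. Assuming the standard GenAIComps repository layout (an assumption, not stated in this document), it can be built from the `GenAIComps` root with:

```bash
# Suggested command; verify the Dockerfile path in your GenAIComps checkout.
docker build --no-cache -t opea/lvm:latest --build-arg https_proxy=$https_proxy --build-arg http_proxy=$http_proxy -f comps/lvms/src/Dockerfile .
```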
### 5. Build MegaService Docker Image

To construct the Mega Service, we utilize the [GenAIComps](https://github.com/opea-project/GenAIComps.git) microservice pipeline within the [multimodalqna.py](../../../../multimodalqna.py) Python script.

Build the MegaService Docker image via the command below:

```bash
git clone https://github.com/opea-project/GenAIExamples.git
cd GenAIExamples/MultimodalQnA
docker build --no-cache -t opea/multimodalqna:latest --build-arg https_proxy=$https_proxy --build-arg http_proxy=$http_proxy -f Dockerfile .
cd ../..
```

### 6. Build UI Docker Image

Build the frontend Docker image via the command below:

```bash
cd GenAIExamples/MultimodalQnA/ui/
docker build --no-cache -t opea/multimodalqna-ui:latest --build-arg https_proxy=$https_proxy --build-arg http_proxy=$http_proxy -f ./docker/Dockerfile .
cd ../../../
```

### 7. Pull TGI AMD ROCm Image

```bash
docker pull ghcr.io/huggingface/text-generation-inference:2.4.1-rocm
```

Then run `docker images`; you should see the following Docker images:

1. `opea/dataprep:latest`
2. `ghcr.io/huggingface/text-generation-inference:2.4.1-rocm`
3. `opea/lvm:latest`
4. `opea/lvm-llava:latest`
5. `opea/retriever:latest`
6. `opea/embedding:latest`
7. `opea/embedding-multimodal-bridgetower:latest`
8. `opea/multimodalqna:latest`
9. `opea/multimodalqna-ui:latest`

## 🚀 Start Microservices

### Required Models

By default, the multimodal-embedding and LVM models are set to the defaults listed below:

| Service   | Model                                       |
| --------- | ------------------------------------------- |
| embedding | BridgeTower/bridgetower-large-itm-mlm-gaudi |
| LVM       | llava-hf/llava-1.5-7b-hf                    |
| LVM       | Xkev/Llama-3.2V-11B-cot                     |

> Note: On an AMD ROCm system, `Xkev/Llama-3.2V-11B-cot` is recommended to run on `ghcr.io/huggingface/text-generation-inference:2.4.1-rocm`.

### Start All the Services

> Before running the docker compose command, you need to be in the folder that has the docker compose yaml file.

```bash
cd GenAIExamples/MultimodalQnA/docker_compose/amd/gpu/rocm
. set_env.sh
docker compose -f compose.yaml up -d
```

> Note: Replace `host_ip` with your external IP address; do not use `localhost`.

> Note: To limit access to a subset of GPUs, pass each device individually using one or more `--device /dev/dri/renderD<node>` options, where `<node>` is the card index, starting from 128. See the [ROCm documentation on restricting GPU access](https://rocm.docs.amd.com/projects/install-on-linux/en/latest/how-to/docker.html#docker-restrict-gpus) for details.

Example of device isolation for 1 GPU:

```
- /dev/dri/card0:/dev/dri/card0
- /dev/dri/renderD128:/dev/dri/renderD128
```

Example of device isolation for 2 GPUs:

```
- /dev/dri/card0:/dev/dri/card0
- /dev/dri/renderD128:/dev/dri/renderD128
- /dev/dri/card1:/dev/dri/card1
- /dev/dri/renderD129:/dev/dri/renderD129
```

More information about accessing and restricting AMD GPUs can be found in the [ROCm documentation](https://rocm.docs.amd.com/projects/install-on-linux/en/latest/how-to/docker.html#docker-restrict-gpus).

### Validate Microservices

1. embedding-multimodal-bridgetower

   ```bash
   curl http://${host_ip}:${EMBEDDER_PORT}/v1/encode \
       -X POST \
       -H "Content-Type: application/json" \
       -d '{"text":"This is example"}'
   ```

   ```bash
   curl http://${host_ip}:${EMBEDDER_PORT}/v1/encode \
       -X POST \
       -H "Content-Type: application/json" \
       -d '{"text":"This is example", "img_b64_str": "iVBORw0KGgoAAAANSUhEUgAAAAoAAAAKCAYAAACNMs+9AAAAFUlEQVR42mP8/5+hnoEIwDiqkL4KAcT9GO0U4BxoAAAAAElFTkSuQmCC"}'
   ```
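   As a quick sanity check, the response should contain an embedding vector whose length matches the retriever test below (512 dimensions). A minimal sketch, assuming the response JSON carries the vector in an `embedding` field (an assumption, not confirmed by this document):

   ```bash
   # Count the embedding dimensions in the /v1/encode response.
   # "embedding" is an assumed field name; inspect the raw payload if it differs.
   curl -s http://${host_ip}:${EMBEDDER_PORT}/v1/encode \
       -X POST \
       -H "Content-Type: application/json" \
       -d '{"text":"This is example"}' \
       | python3 -c "import sys, json; print(len(json.load(sys.stdin).get('embedding', [])))"
   # expected output: 512, matching the retriever test below
   ```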
2. embedding

   ```bash
   curl http://${host_ip}:$MM_EMBEDDING_PORT_MICROSERVICE/v1/embeddings \
       -X POST \
       -H "Content-Type: application/json" \
       -d '{"text" : "This is some sample text."}'
   ```

   ```bash
   curl http://${host_ip}:$MM_EMBEDDING_PORT_MICROSERVICE/v1/embeddings \
       -X POST \
       -H "Content-Type: application/json" \
       -d '{"text": {"text" : "This is some sample text."}, "image" : {"url": "https://github.com/docarray/docarray/blob/main/tests/toydata/image-data/apple.png?raw=true"}}'
   ```

3. retriever-multimodal-redis

   ```bash
   export your_embedding=$(python3 -c "import random; embedding = [random.uniform(-1, 1) for _ in range(512)]; print(embedding)")
   curl http://${host_ip}:7000/v1/multimodal_retrieval \
       -X POST \
       -H "Content-Type: application/json" \
       -d "{\"text\":\"test\",\"embedding\":${your_embedding}}"
   ```

4. lvm-llava

   ```bash
   curl http://${host_ip}:${LLAVA_SERVER_PORT}/generate \
       -X POST \
       -H "Content-Type: application/json" \
       -d '{"prompt":"Describe the image please.", "img_b64_str": "iVBORw0KGgoAAAANSUhEUgAAAAoAAAAKCAYAAACNMs+9AAAAFUlEQVR42mP8/5+hnoEIwDiqkL4KAcT9GO0U4BxoAAAAAElFTkSuQmCC"}'
   ```

5. lvm

   ```bash
   curl http://${host_ip}:9399/v1/lvm \
       -X POST \
       -H 'Content-Type: application/json' \
       -d '{"retrieved_docs": [], "initial_query": "What is this?", "top_n": 1, "metadata": [{"b64_img_str": "iVBORw0KGgoAAAANSUhEUgAAAAoAAAAKCAYAAACNMs+9AAAAFUlEQVR42mP8/5+hnoEIwDiqkL4KAcT9GO0U4BxoAAAAAElFTkSuQmCC", "transcript_for_inference": "yellow image", "video_id": "8c7461df-b373-4a00-8696-9a2234359fe0", "time_of_frame_ms":"37000000", "source_video":"WeAreGoingOnBullrun_8c7461df-b373-4a00-8696-9a2234359fe0.mp4"}], "chat_template":"The caption of the image is: '\''{context}'\''. {question}"}'
   ```

   ```bash
   curl http://${host_ip}:9399/v1/lvm \
       -X POST \
       -H 'Content-Type: application/json' \
       -d '{"image": "iVBORw0KGgoAAAANSUhEUgAAAAoAAAAKCAYAAACNMs+9AAAAFUlEQVR42mP8/5+hnoEIwDiqkL4KAcT9GO0U4BxoAAAAAElFTkSuQmCC", "prompt":"What is this?"}'
   ```

   Also, validate the LVM microservice with empty retrieval results:

   ```bash
   curl http://${host_ip}:9399/v1/lvm \
       -X POST \
       -H 'Content-Type: application/json' \
       -d '{"retrieved_docs": [], "initial_query": "What is this?", "top_n": 1, "metadata": [], "chat_template":"The caption of the image is: '\''{context}'\''. {question}"}'
   ```

6. dataprep-multimodal-redis

   Download a sample video, image, and audio file, and create a caption:

   ```bash
   export video_fn="WeAreGoingOnBullrun.mp4"
   wget http://commondatastorage.googleapis.com/gtv-videos-bucket/sample/WeAreGoingOnBullrun.mp4 -O ${video_fn}

   export image_fn="apple.png"
   wget https://github.com/docarray/docarray/blob/main/tests/toydata/image-data/apple.png?raw=true -O ${image_fn}

   export caption_fn="apple.txt"
   echo "This is an apple." > ${caption_fn}

   export audio_fn="AudioSample.wav"
   wget https://github.com/intel/intel-extension-for-transformers/raw/main/intel_extension_for_transformers/neural_chat/assets/audio/sample.wav -O ${audio_fn}
   ```
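   As an optional sanity check (a suggestion, not part of the original steps), confirm that all four sample files were downloaded or created before proceeding:

   ```bash
   # All four files should exist and be non-empty.
   ls -lh ${video_fn} ${image_fn} ${caption_fn} ${audio_fn}
   ```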
   Test the dataprep microservice with transcript generation. This command updates the knowledge base by uploading a local video (.mp4) and an audio (.wav) file:

   ```bash
   curl --silent --write-out "HTTPSTATUS:%{http_code}" \
       ${DATAPREP_GEN_TRANSCRIPT_SERVICE_ENDPOINT} \
       -H 'Content-Type: multipart/form-data' \
       -X POST \
       -F "files=@./${video_fn}" \
       -F "files=@./${audio_fn}"
   ```

   Also, test the dataprep microservice with generating an image caption using the lvm microservice:

   ```bash
   curl --silent --write-out "HTTPSTATUS:%{http_code}" \
       ${DATAPREP_GEN_CAPTION_SERVICE_ENDPOINT} \
       -H 'Content-Type: multipart/form-data' \
       -X POST -F "files=@./${image_fn}"
   ```

   Now, test the microservice by posting a custom caption along with an image:

   ```bash
   curl --silent --write-out "HTTPSTATUS:%{http_code}" \
       ${DATAPREP_INGEST_SERVICE_ENDPOINT} \
       -H 'Content-Type: multipart/form-data' \
       -X POST -F "files=@./${image_fn}" -F "files=@./${caption_fn}"
   ```

   You can also get the list of all files that you have uploaded:

   ```bash
   curl -X POST \
       -H "Content-Type: application/json" \
       ${DATAPREP_GET_FILE_ENDPOINT}
   ```

   The response is a Python-style list. Note that the name of each uploaded file, e.g. `videoname.mp4`, becomes `videoname_uuid.mp4`, where `uuid` is a unique ID assigned to each upload; the same file uploaded twice will receive two different `uuid`s:

   ```bash
   [
       "WeAreGoingOnBullrun_7ac553a1-116c-40a2-9fc5-deccbb89b507.mp4",
       "WeAreGoingOnBullrun_6d13cf26-8ba2-4026-a3a9-ab2e5eb73a29.mp4",
       "apple_fcade6e6-11a5-44a2-833a-3e534cbe4419.png",
       "AudioSample_976a85a6-dc3e-43ab-966c-9d81beef780c.wav"
   ]
   ```

   To delete all uploaded files along with the data indexed with `$INDEX_NAME` in Redis:

   ```bash
   curl -X POST \
       -H "Content-Type: application/json" \
       -d '{"file_path": "all"}' \
       ${DATAPREP_DELETE_FILE_ENDPOINT}
   ```

7. MegaService

   ```bash
   curl http://${host_ip}:8888/v1/multimodalqna \
       -H "Content-Type: application/json" \
       -X POST \
       -d '{"messages": "What is the revenue of Nike in 2023?"}'
   ```

   ```bash
   curl http://${host_ip}:8888/v1/multimodalqna \
       -H "Content-Type: application/json" \
       -d '{"messages": [{"role": "user", "content": [{"type": "text", "text": "hello, "}, {"type": "image_url", "image_url": {"url": "https://www.ilankelman.org/stopsigns/australia.jpg"}}]}, {"role": "assistant", "content": "opea project! "}, {"role": "user", "content": "chao, "}], "max_tokens": 10}'
   ```
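If any of the validation steps above fails, a useful final check (a suggestion, not part of the original steps) is to confirm that all containers are still running and inspect the logs of any unhealthy service:

```bash
# Run from GenAIExamples/MultimodalQnA/docker_compose/amd/gpu/rocm,
# the same directory used to start the stack.
docker compose -f compose.yaml ps

# <service> is a placeholder; use a service name shown by `ps` above.
docker compose -f compose.yaml logs <service>
```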