ChatQnA Benchmarking¶
This folder contains a collection of Kubernetes manifest files for deploying the ChatQnA service across scalable nodes. It includes a comprehensive benchmarking tool that enables throughput analysis to assess inference performance.
By following this guide, you can run benchmarks on your deployment and share the results with the OPEA community.
Purpose¶
We aim to run these benchmarks and share them with the OPEA community for three primary reasons:
To offer insights on inference throughput in real-world scenarios, helping you choose the best service or deployment for your needs.
To establish a baseline for validating optimization solutions across different implementations, providing clear guidance on which methods are most effective for your use case.
To inspire the community to build upon our benchmarks, allowing us to better quantify new solutions in conjunction with current leading LLMs, serving frameworks, etc.
Metrics¶
The benchmark reports the following metrics:
Number of Concurrent Requests
End-to-End Latency: P50, P90, P99 (in milliseconds)
End-to-End First Token Latency: P50, P90, P99 (in milliseconds)
Average Next Token Latency (in milliseconds)
Average Token Latency (in milliseconds)
Requests Per Second (RPS)
Output Tokens Per Second
Input Tokens Per Second
Results are displayed in the terminal and saved as a CSV file named 1_stats.csv for easy export to spreadsheets.
Deployment¶
Prerequisites¶
Kubernetes installation: Use kubespray or other official Kubernetes installation guides.
Helm installation: Follow the Helm documentation to install Helm.
Setup Hugging Face Token
To access models and APIs from Hugging Face, set your token as an environment variable:
export HF_TOKEN="insert-your-huggingface-token-here"
Prepare Shared Models (Optional but Strongly Recommended)
Downloading models simultaneously to multiple nodes in your cluster can overload resources such as network bandwidth, memory and storage. To prevent resource exhaustion, it’s recommended to preload the models in advance.
pip install -U "huggingface_hub[cli]"
sudo mkdir -p /mnt/models
sudo chmod 777 /mnt/models
huggingface-cli download --cache-dir /mnt/models Intel/neural-chat-7b-v3-3
export MODEL_DIR=/mnt/models
Once the models are downloaded, you can consider the following methods for sharing them across nodes:
Persistent Volume Claim (PVC): This is the recommended approach for production setups. For more details on using PVC, refer to PVC.
Local Host Path: For simpler testing, ensure that each node involved in the deployment follows the steps above to prepare the models locally. After preparing the models, pass
--set global.modelUseHostPath=${MODEL_DIR}
in the deployment command (see the sanity check below).
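When using the local host path option, a quick per-node sanity check (a minimal sketch; the directory name assumes the Hugging Face cache layout created by the download command above) is to confirm the model snapshot exists:

ls ${MODEL_DIR}/models--Intel--neural-chat-7b-v3-3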
Add OPEA Helm Repository:
python deploy.py --add-repo
Label Nodes
python deploy.py --add-label --num-nodes 2
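To confirm the labels were applied before deploying (a generic kubectl check; the exact label key is whatever deploy.py assigns):

kubectl get nodes --show-labels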
Deployment Scenarios¶
The examples below are based on a two-node setup. You can adjust the number of nodes with the --num-nodes option.
By default, these commands use the default namespace. To specify a different namespace, use the --namespace flag with the deploy, uninstall, and kubernetes commands. Additionally, update the namespace field in benchmark.yaml before running the benchmark test.
For additional configuration options, run python deploy.py --help
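For example, a hypothetical two-node deployment into a dedicated benchmark namespace could look like this (the namespace name is illustrative; remember to mirror it in benchmark.yaml):

python deploy.py --hf-token $HF_TOKEN --model-dir $MODEL_DIR --num-nodes 2 --namespace benchmark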
Case 1: Baseline Deployment with Rerank¶
Deploy Command (with node number, Hugging Face token, model directory specified):
python deploy.py --hf-token $HF_TOKEN --model-dir $MODEL_DIR --num-nodes 2 --with-rerank
Uninstall Command:
python deploy.py --uninstall
Case 2: Baseline Deployment without Rerank¶
python deploy.py --hf-token $HF_TOKEN --model-dir $MODEL_DIR --num-nodes 2
Case 3: Tuned Deployment with Rerank¶
python deploy.py --hf-token $HF_TOKEN --model-dir $MODEL_DIR --num-nodes 2 --with-rerank --tuned
Benchmark¶
Test Configurations¶
| Key | Value |
|---|---|
| Workload | ChatQnA |
| Tag | V1.1 |
Models configuration
| Key | Value |
|---|---|
| Embedding | BAAI/bge-base-en-v1.5 |
| Reranking | BAAI/bge-reranker-base |
| Inference | Intel/neural-chat-7b-v3-3 |
Benchmark parameters
| Key | Value |
|---|---|
| LLM input tokens | 1024 |
| LLM output tokens | 128 |
Number of test requests for different scheduled node counts:
| Node count | Concurrency | Query number |
|---|---|---|
| 1 | 128 | 640 |
| 2 | 256 | 1280 |
| 4 | 512 | 2560 |
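In other words, concurrency scales at 128 concurrent requests per scheduled node, and the query number is fixed at 5x the concurrency.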
More detailed configuration can be found in the configuration file benchmark.yaml.
Test Steps¶
Use kubectl get pods to confirm that all pods are READY before starting the test.
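As an optional shortcut (standard kubectl, not part of the deploy script; add -n <namespace> if you deployed outside default), you can block until every pod reports Ready:

kubectl wait --for=condition=Ready pod --all --timeout=600s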
Upload Retrieval File¶
Before testing, upload the specified file to ensure the LLM input has a token length of 1K.
Get files:
wget https://raw.githubusercontent.com/opea-project/GenAIEval/main/evals/benchmark/data/upload_file_no_rerank.txt
wget https://raw.githubusercontent.com/opea-project/GenAIEval/main/evals/benchmark/data/upload_file.txt
Retrieve the ClusterIP of the chatqna-data-prep service.
kubectl get svc
Expected output:
chatqna-data-prep ClusterIP xx.xx.xx.xx <none> 6007/TCP 51m
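Rather than copying the address by hand, you can capture it into the cluster_ip variable used in the commands below (a standard kubectl jsonpath query against the service shown above):

cluster_ip=$(kubectl get svc chatqna-data-prep -o jsonpath='{.spec.clusterIP}')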
Use the following cURL command to upload the file, choosing the variant that matches your deployment:
cd GenAIEval/evals/benchmark/data
# RAG with Rerank
curl -X POST "http://${cluster_ip}:6007/v1/dataprep" \
-H "Content-Type: multipart/form-data" \
-F "files=@./upload_file.txt"
# RAG without Rerank
curl -X POST "http://${cluster_ip}:6007/v1/dataprep" \
-H "Content-Type: multipart/form-data" \
-F "files=@./upload_file_no_rerank.txt"
Run Benchmark Test¶
Run the benchmark test using:
bash benchmark.sh -n 2
The -n argument specifies the number of test nodes. Required dependencies will be installed automatically the first time the benchmark runs.
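The same script covers the other rows of the test-request table above; for a four-node deployment, for example:

bash benchmark.sh -n 4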
Data collection¶
All test results are saved to the folder GenAIEval/evals/benchmark/benchmark_output.
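To take a quick look at the collected metrics from the shell (a minimal sketch; the exact subdirectory layout depends on the run):

# Locate the per-run stats files under the output folder
find GenAIEval/evals/benchmark/benchmark_output -name "*_stats.csv"
# Pretty-print one of them as an aligned table (path is illustrative)
column -s, -t < GenAIEval/evals/benchmark/benchmark_output/1_stats.csv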
Teardown¶
After completing the benchmark, use the following commands to clean up the environment:
Remove Node Labels:
python deploy.py --delete-label
Delete the OPEA Helm Repository:
python deploy.py --delete-repo