OPEA Project
OPEA Overview
OPEA Project Architecture
Microservices: Flexible and Scalable Architecture
Megaservices: A Comprehensive Solution
Gateways: Customized Access to Mega- and Microservices
Next Step
Open Platform for Enterprise AI (OPEA) Framework Draft Proposal
1. Summary
2. Introduction
3. Framework Components, Architecture and Flow
4. Assessing GenAI components and flows
5. Grading Structure
6. Reference flows
Appendix A – Draft OPEA Specifications
Getting Started with OPEA
Understanding OPEA’s Core Components
Prerequisites
Create and Configure a Virtual Server
Deploy the ChatQnA Solution
Interact with ChatQnA
What’s Next
Get Involved
GenAI Examples
ChatQnA Sample Guide
Overview
Purpose
Key Implementation Details
How It Works
Expected Output
Validation Matrix and Prerequisites
Architecture
Microservice Outline and Diagram
Deployment
ChatQnA Deployment Options
Troubleshooting
Monitoring
Set Up the Prometheus Server
Set Up the Grafana Dashboard
Summary and Next Steps
AgentQnA Sample Guide
Overview
Purpose
How It Works
Deployment
Single Node
CodeGen Sample Guide
Overview
Purpose
How It Works
Deployment
CodeGen Deployment Options
Generative AI Examples
Introduction
Architecture
Getting Started
Deployment Guide
Supported Examples
Contributing to OPEA
Additional Content
Examples
AgentQnA Application
Agents for Question Answering
Single node on-prem deployment with Docker Compose on Xeon Scalable processors
Single node on-prem deployment of AgentQnA on Gaudi
Retrieval tool for agent
AudioQnA Application
AudioQnA Application
AudioQnA Accuracy
AudioQnA Benchmarking
Build Mega Service of AudioQnA on Xeon
Build Mega Service of AudioQnA on Gaudi
Deploy AudioQnA in a Kubernetes Cluster
Deploy AudioQnA in Kubernetes Cluster on Xeon and Gaudi
AudioQnA
AvatarChatbot Application
AvatarChatbot Application
Build Mega Service of AvatarChatbot on Xeon
Build Mega Service of AvatarChatbot on Gaudi
ChatQnA Application
ChatQnA Application
ChatQnA Accuracy
ChatQnA Benchmarking
ChatQnA Deployment
ChatQnA Benchmarking
Build and deploy ChatQnA Application on AMD GPU (ROCm)
Build Mega Service of ChatQnA on AIPC
Build Mega Service of ChatQnA on Xeon
Build Mega Service of ChatQnA (with Pinecone) on Xeon
Build Mega Service of ChatQnA (with Qdrant) on Xeon
Build MegaService of ChatQnA on Gaudi
How to Check and Validate Microservices in a GenAI Example
Build MegaService of ChatQnA on NVIDIA GPU
Deploy ChatQnA in Kubernetes Cluster
Deploy ChatQnA in Kubernetes Cluster on Xeon and Gaudi
Deploy ChatQnA in Kubernetes Cluster on Single Node environment (Minikube)
ChatQnA Conversational UI
ChatQnA Customized UI
CodeGen Application
Code Generation Application
CodeGen Accuracy
CodeGen Benchmarking
Build and deploy CodeGen Application on AMD GPU (ROCm)
Validate the MicroServices and MegaService
Build MegaService of CodeGen on Xeon
Build MegaService of CodeGen on Gaudi
Deploy CodeGen in Kubernetes Cluster
Deploy CodeGen in a Kubernetes Cluster
Deploy CodeGen with ReactUI
Code Gen
Code Gen
CodeTrans Application
Code Translation Application
CodeTrans Benchmarking
Build and deploy CodeTrans Application on AMD GPU (ROCm)
Validate the MicroServices and MegaService
Build Mega Service of CodeTrans on Xeon
Build Mega Service of CodeTrans on Gaudi
Deploy CodeTrans in Kubernetes Cluster
Deploy CodeTrans in a Kubernetes Cluster
Code Translation
DBQnA Application
DBQnA Application
Deploy on Intel Xeon Processor
DBQnA React Application
DocIndexRetriever Application
DocRetriever Application
DocRetriever Application with Docker
DocRetriever Application with Docker
DocSum Application
Document Summarization Application
Build and deploy DocSum Application on AMD GPU (ROCm)
Build Mega Service of Document Summarization on Intel Xeon Processor
Build MegaService of Document Summarization on Gaudi
Deploy DocSum in Kubernetes Cluster
Deploy DocSum in Kubernetes Cluster
Deploy DocSum with ReactUI
Document Summary
Doc Summary React
Doc Summary
EdgeCraftRAG Application
Edge Craft Retrieval-Augmented Generation
FaqGen Application
FAQ Generation Application
FaqGen Accuracy
FaqGen Benchmarking
Build and deploy FaqGen Application on AMD GPU (ROCm)
Build Mega Service of FAQ Generation on Intel Xeon Processor
Build MegaService of FAQ Generation on Gaudi
Deploy FaqGen in Kubernetes Cluster
Deploy FaqGen with ReactUI
FAQ Generation React
FAQ Generation
GraphRAG Application
GraphRAG Application
ChatQnA Conversational UI
ChatQnA Customized UI
InstructionTuning Application
Instruction Tuning
Deploy Instruction Tuning Service on Xeon
Deploy Instruction Tuning Service on Gaudi
MultimodalQnA Application
MultimodalQnA Application
Build Mega Service of MultimodalQnA on Xeon
Build Mega Service of MultimodalQnA on Gaudi
ProductivitySuite Application
Productivity Suite Application
Build Mega Service of Productivity Suite on Xeon
🔐 Keycloak Configuration Setup
🚀 Deploy ProductivitySuite with ReactUI
Productivity Suite React UI
RerankFinetuning Application
Rerank Model Finetuning
Deploy Rerank Model Finetuning Service on Xeon
Deploy Rerank Model Finetuning Service on Gaudi
SearchQnA Application
SearchQnA Application
Build Mega Service of SearchQnA on Xeon
Build Mega Service of SearchQnA on Gaudi
Deploy SearchQnA in a Kubernetes Cluster
Neural Chat
Text2Image Application
Text-to-Image Microservice
Deploy Text-to-Image Service on Xeon
Deploy Text-to-Image Service on Gaudi
Text2Image Customized UI
Translation Application
Translation Application
Build Mega Service of Translation on Xeon
Build MegaService of Translation on Gaudi
Deploy Translation in Kubernetes Cluster
Deploy Translation in a Kubernetes Cluster
Language Translation
VideoQnA Application
VideoQnA Application
Build Mega Service of VideoQnA on Xeon
VisualQnA Application
Visual Question Answering
VisualQnA Benchmarking
Build Mega Service of VisualQnA on Xeon
Build MegaService of VisualQnA on Gaudi
Deploy VisualQnA in Kubernetes Cluster
Deploy VisualQnA in a Kubernetes Cluster
WorkflowExecAgent Application
Workflow Executor Agent
Validate Workflow Agent Microservice
Legal Information
License
Citation
Docker Images
Example images
Microservice images
Supported Examples
ChatQnA
CodeGen
CodeTrans
DocSum
Language Translation
SearchQnA
VisualQnA
VideoQnA
RerankFinetuning
InstructionTuning
DocIndexRetriever
AgentQnA
AudioQnA
FaqGen
MultimodalQnA
ProductivitySuite
GenAI Microservices
Generative AI Components (GenAIComps)
GenAIComps
Installation
MicroService
MegaService
Gateway
Contributing to OPEA
Additional Content
Legal Information
License
Citation
Agent Microservice
Agent Microservice
1. Overview
🚀2. Start Agent Microservice
🚀3. Validate Microservice
🚀4. Provide your own tools
5. Customize agent strategy
Plan Execute
RAG Agent
Animation Microservice
Avatar Animation Microservice
🚀1. Start Microservice with Docker (Option 1)
1.1 Build the Docker images
1.2 Set environment variables
🚀2. Run the Docker container
2.1 Run Wav2Lip Microservice
2.2 Run Animation Microservice
🚀3. Validate Microservice
3.1 Validate Wav2Lip service
3.2 Validate Animation service
Asr Microservice
ASR Microservice
🚀1. Start Microservice with Python (Option 1)
🚀2. Start Microservice with Docker (Option 2)
Chathistory Microservice
📝 Chat History Microservice
🛠️ Features
⚙️ Implementation
📝 Chat History Microservice with MongoDB
Set Up Environment Variables
🚀Start Microservice with Docker
✅ Invoke Microservice
Cores Microservice
Telemetry for OPEA
Metrics
Tracing
Visualization
Dataprep Microservice
Dataprep Microservice
Install Requirements
Use LVM (Large Vision Model) for Summarizing Image Data
Dataprep Microservice with Redis
Dataprep Microservice with Milvus
Dataprep Microservice with Qdrant
Dataprep Microservice with Pinecone
Dataprep Microservice with PGVector
Dataprep Microservice with VDMS
Dataprep Microservice with Multimodal
Dataprep Microservice with Milvus
🚀1. Start Microservice with Python (Option 1)
🚀2. Start Microservice with Docker (Option 2)
🚀3. Consume Microservice
🚀4. Troubleshooting
Multimedia to Text Services
Prerequisites
Getting Started
Validate Microservices
How to Stop/Remove Services
Test Data for Document Summarization
Overview
Source of Test Data
Description of Test Data
Files
Usage
License
Dataprep Microservice for Multimodal Data with Redis
🚀1. Start Microservice with Python (Option 1)
🚀2. Start Microservice with Docker (Option 2)
🚀3. Check Microservice Status
🚀4. Consume Microservice
Dataprep Microservice with Neo4J
🚀Start Microservice with Python
🚀Start Microservice with Docker
Invoke Microservice
Dataprep Microservice with Neo4J
Set Up Environment Variables
🚀Start Microservice with Docker
Invoke Microservice
Dataprep Microservice with PGVector
🚀1. Start Microservice with Python (Option 1)
🚀2. Start Microservice with Docker (Option 2)
🚀3. Consume Microservice
Dataprep Microservice with Pinecone
🚀Start Microservice with Python
🚀Start Microservice with Docker
Invoke Microservice
Dataprep Microservice with Qdrant
🚀Start Microservice with Python
🚀Start Microservice with Docker
Invoke Microservice
Dataprep Microservice with Redis
🚀1. Start Microservice with Python (Option 1)
🚀2. Start Microservice with Docker (Option 2)
🚀3. Check Microservice Status
🚀4. Consume Microservice
Dataprep Microservice with VDMS
🚀1. Start Microservice with Python (Option 1)
🚀2. Start Microservice with Docker (Option 2)
🚀3. Check Microservice Status
🚀4. Consume Microservice
Multimodal Dataprep Microservice with VDMS
🚀1. Start Microservice with Python (Option 1)
🚀2. Start Microservice with Docker (Option 2)
🚀3. Check Microservice Status
🚀4. Consume Microservice
Embeddings Microservice
Embeddings Microservice
Embeddings Microservice with TEI
Embeddings Microservice with Mosec
Embeddings Microservice with Multimodal
Embeddings Microservice with Multimodal Clip
Embeddings Microservice with Prediction Guard
Build Mosec Endpoint Docker Image
Build Embedding Microservice Docker Image
Launch Mosec Endpoint Docker Container
Launch Embedding Microservice Docker Container
Run Client Test
Embedding Server
1. Introduction
2. Quick Start
Multimodal Embeddings Microservice
🚀1. Start Microservice with Python (Option 1)
🚀2. Start Microservice with Docker (Option 2)
🚀3. Consume Embedding Service
Multimodal CLIP Embeddings Microservice
🚀1. Start Microservice with Docker
🚀2. Consume Embedding Service
Embedding Generation Prediction Guard Microservice
🚀 Start Microservice with Docker
🚀 Consume Embeddings Service
Embeddings Microservice with LangChain TEI
🚀1. Start Microservice with Python (Option 1)
🚀2. Start Microservice with Docker (Option 2)
🚀3. Consume Embedding Service
Embeddings Microservice with Llama Index TEI
🚀1. Start Microservice with Python (Option 1)
🚀2. Start Microservice with Docker (Option 2)
🚀3. Consume Embedding Service
Feedback_management Microservice
🗨 Feedback Management Microservice
🛠️ Features
⚙️ Implementation
🗨 Feedback Management Microservice with MongoDB
Set Up Environment Variables
🚀Start Microservice with Docker
Finetuning Microservice
Fine-tuning Microservice
🚀1. Start Microservice with Python (Option 1)
🚀2. Start Microservice with Docker (Option 2)
🚀3. Consume Finetuning Service
🚀4. Descriptions for Finetuning parameters
Guardrails Microservice
Trust and Safety with LLM
Bias Detection Microservice
Introduction
Future Development
🚀1. Start Microservice with Python (Option 1)
🚀2. Start Microservice with Docker (Option 2)
🚀3. Get Status of Microservice
🚀4. Consume Microservice Pre-LLM/Post-LLM
Factuality Check Prediction Guard Microservice
🚀 Start Microservice with Docker
Set Up Environment Variables
Build Docker Images
Start Service
🚀 Consume Factuality Check Service
Guardrails Microservice
🚀1. Start Microservice with Python (Option 1)
🚀2. Start Microservice with Docker (Option 2)
🚀3. Consume Guardrails Service
PII Detection Microservice
NER strategy
ML strategy
Input and output
🚀1. Start Microservice with Python (Option 1)
🚀2. Start Microservice with Docker (Option 2)
🚀3. Get Status of Microservice
🚀4. Consume Microservice
PII Detection Prediction Guard Microservice
🚀 Start Microservice with Docker
Set Up Environment Variables
Build Docker Images
Start Service
🚀 Consume PII Detection Service
Prompt Injection Detection Prediction Guard Microservice
🚀 Start Microservice with Docker
Set Up Environment Variables
Build Docker Images
Start Service
🚀 Consume Prompt Injection Detection Service
Toxicity Detection Microservice
Introduction
🚀1. Start Microservice with Python (Option 1)
🚀2. Start Microservice with Docker (Option 2)
🚀3. Get Status of Microservice
🚀4. Consume Microservice Pre-LLM/Post-LLM
Toxicity Checking Prediction Guard Microservice
🚀 Start Microservice with Docker
Set Up Environment Variables
Build Docker Images
Start Service
🚀 Consume Toxicity Check Service
Guardrails Microservice
🚀1. Start Microservice with Python (Option 1)
🚀2. Start Microservice with Docker (Option 2)
🚀3. Consume Guardrails Service
Image2image Microservice
Image-to-Image Microservice
🚀1. Start Microservice with Python (Option 1)
1.1 Install Requirements
1.2 Start Image-to-Image Microservice
🚀2. Start Microservice with Docker (Option 2)
2.1 Build Images
2.2 Start Image-to-Image Service
3. Test Image-to-Image Service
Image2video Microservice
Image-to-Video Microservice
🚀1. Start Microservice with Python (Option 1)
1.1 Install Requirements
1.2 Start SVD Service
1.3 Start Image-to-Video Microservice
🚀2. Start Microservice with Docker (Option 2)
2.1 Build Images
2.2 Start SVD and Image-to-Video Service
Intent_detection Microservice
Intent Detection Microservice by TGI
🚀1. Start Microservice with Python (Option 1)
🚀2. Start Microservice with Docker (Option 2)
🚀3. Consume Microservice
Llms Microservice
TGI FAQGen LLM Microservice
🚀1. Start Microservice with Docker
🚀2. Consume LLM Service
vLLM FAQGen LLM Microservice
🚀1. Start Microservice with Docker
🚀2. Consume LLM Service
Document Summary TGI Microservice
🚀1. Start Microservice with Python 🐍 (Option 1)
🚀2. Start Microservice with Docker 🐳 (Option 2)
🚀3. Consume LLM Service
Document Summary vLLM Microservice
🚀1. Start Microservice with Python 🐍 (Option 1)
🚀2. Start Microservice with Docker 🐳 (Option 2)
🚀3. Consume LLM Service
LLM Microservice
Validated LLM Models
Clone OPEA GenAIComps
Prerequisites
🚀1. Start Microservice with Python (Option 1)
🚀2. Start Microservice with Docker (Option 2)
🚀3. Consume LLM Service
LLM Native Microservice
🚀1. Start Microservice
🚀2. Consume LLM Service
LLM Native Microservice
🚀1. Start Microservice
🚀2. Consume LLM Service
Introduction
Get Started
Build Docker Image
Run the Ollama Microservice
Consume the Ollama Microservice
Prediction Guard Introduction
Get Started
Consume the Prediction Guard Microservice
TGI LLM Microservice
🚀1. Start Microservice with Python (Option 1)
🚀2. Start Microservice with Docker (Option 2)
🚀3. Consume LLM Service
vLLM Endpoint Service
🚀1. Set up Environment Variables
🚀2. Set up vLLM Service
🚀3. Set up LLM microservice
vLLM Endpoint Service
🚀1. Set up Environment Variables
🚀2. Set up vLLM Service
🚀3. Set up LLM microservice
LM-Eval Microservice
CPU service
Lvms Microservice
LVM Microservice
🚀 Start Microservice with Docker
LVM Microservice
🚀1. Start Microservice with Python (Option 1)
🚀2. Start Microservice with Docker (Option 2)
LVM Prediction Guard Microservice
🚀1. Start Microservice with Python
🚀2. Start Microservice with Docker (Option 2)
🚀3. Consume LVM Service
LVM Microservice
🚀1. Start Microservice with Docker
✅ 2. Test
♻️ 3. Clean
Nginx Microservice
Nginx for Microservice Forwarding
🚀1. Build Docker Image
🚀2. Environment Settings
🚀3. Start Nginx Service
🚀4. Consume Forwarded Service
Prompt_registry Microservice
🧾 Prompt Registry Microservice
🛠️ Features
⚙️ Implementation
🧾 Prompt Registry Microservice with MongoDB
Set Up Environment Variables
🚀Start Microservice with Docker
Ragas Microservice
Reranks Microservice
Reranking Microservice
🛠️ Features
⚙️ Implementation
Reranking Microservice with fastRAG
🚀1. Start Microservice with Python (Option 1)
🚀2. Start Microservice with Docker (Option 2)
✅ 3. Invoke Reranking Microservice
Reranking Microservice with Mosec
Build Reranking Mosec Image
Build Reranking Microservice Image
Launch Mosec Endpoint Image Container
Launch Reranking Microservice Image Container
✅ Invoke Reranking Microservice
Reranking Microservice via TEI
🚀1. Start Microservice with Python (Option 1)
🚀2. Start Microservice with Docker (Option 2)
✅3. Invoke Reranking Microservice
Rerank Microservice with VideoQnA
🚀1. Start Microservice with Docker
✅ 2. Invoke Reranking Microservice
♻️ 3. Cleaning the Container
Retrievers Microservice
Retriever Microservice
Retriever Microservice with Redis
Retriever Microservice with Milvus
Retriever Microservice with PGVector
Retriever Microservice with Pathway
Retriever Microservice with Qdrant
Retriever Microservice with VDMS
Retriever Microservice with Multimodal
Retriever Microservice with Milvus
🚀Start Microservice with Python
🚀Start Microservice with Docker
🚀Consume Retriever Service
Retriever Microservice
🚀1. Start Microservice with Python (Option 1)
🚀2. Start Microservice with Docker (Option 2)
🚀3. Consume Retriever Service
Retriever Microservice with Neo4J
🚀Start Microservice with Python
🚀Start Microservice with Docker
🚀Consume Retriever Service
Retriever Microservice with Neo4J
🚀Start Microservice with Docker
Invoke Microservice
Retriever Microservice with Pathway
🚀Start Microservices
Retriever Microservice
🚀1. Start Microservice with Python (Option 1)
🚀2. Start Microservice with Docker (Option 2)
🚀3. Consume Retriever Service
Retriever Microservice with Qdrant
🚀1. Start Microservice with Python (Option 1)
🚀2. Start Microservice with Docker (Option 2)
🚀3. Consume Retriever Service
Retriever Microservice
🚀1. Start Microservice with Python (Option 1)
🚀2. Start Microservice with Docker (Option 2)
🚀3. Consume Retriever Service
Retriever Microservice
🚀1. Start Microservice with Python (Option 1)
🚀2. Start Microservice with Docker (Option 2)
🚀3. Consume Retriever Service
Retriever Microservice
Visual Data Management System (VDMS)
🚀1. Start Microservice with Python (Option 1)
🚀2. Start Microservice with Docker (Option 2)
🚀3. Consume Retriever Service
Text2image Microservice
Text-to-Image Microservice
🚀1. Start Microservice with Python (Option 1)
1.1 Install Requirements
1.2 Start Text-to-Image Microservice
🚀2. Start Microservice with Docker (Option 2)
2.1 Build Images
2.2 Start Text-to-Image Service
3. Test Text-to-Image Service
Texttosql Microservice
🛢 Text-to-SQL Microservice
🛠️ Features
⚙️ Implementation
🛢🔗 Text-to-SQL Microservice with LangChain
🚀 Start Microservice with Python(Option 1)
🚀 Start Microservice with Docker (Option 2)
✅ Invoke the Microservice
Tts Microservice
GPT-SoVITS Microservice
Build the Image
Start the Service
Test
TTS Microservice
1.2 Start SpeechT5 Service/Test
1.3 Start TTS Service/Test
🚀2. Start Microservice with Docker (Option 2)
Vectorstores Microservice
Vectorstores Microservice
Vectorstores Microservice with Redis
Vectorstores Microservice with Qdrant
Vectorstores Microservice with PGVector
Vectorstores Microservice with Pinecone
Vectorstores Microservice with Pathway
Vectorstores Microservice with Milvus
Vectorstores Microservice with LanceDB
Vectorstores Microservice with Chroma
Vectorstores Microservice with VDMS
Start Chroma server
Introduction
Getting Started
Start LanceDB Server
Setup
Usage
Start Milvus server
1. Configuration
2. Run Milvus service
Start the Pathway Vector DB Server
Configuration
Building and running
Health check the vector store
Start PGVector server
1. Download Pgvector image
2. Configure the username, password and dbname
3. Run Pgvector service
Pinecone setup
1. Create a Pinecone account
2. Get the API key
3. Create the index at https://app.pinecone.io/
Start Qdrant server
1. Download Qdrant image
2. Run Qdrant service
Start Redis server
1. Download Redis image
2. Run Redis service
Start VDMS server
1. Download VDMS image
2. Run VDMS service
Web_retrievers Microservice
Web Retriever Microservice
Start Microservice with Docker
Deploying GenAI
GenAIInfra
Overview
Prerequisite
Set up a Kubernetes cluster
(Optional) Run GenAIInfra on Intel Gaudi hardware
Usages
Use GenAI Microservices Connector (GMC) to deploy and adjust GenAIExamples
Use helm charts to deploy
Additional Content
Development
Prerequisites
Testing
pre-commit testing
Legal Information
License
Citation
Release Branches
1. Create release candidate branch
2. Create images with release tag
3. Test helm charts
4. Test GMC
5. Publish images
Installation Guides
GenAI-microservices-connector (GMC) Installation
GenAI-microservices-connector (GMC)
Install GMC
Use GMC to compose a ChatQnA Pipeline
Kubernetes Installation Options
Kubernetes Installation using AWS EKS Cluster
Prerequisites
Create AWS EKS Cluster in AWS Console
Uploading images to an AWS Private Registry
Kubernetes installation demo using kubeadm
Node configuration
Step 0. Clean up the environment
Step 1. Install relevant components
Step 2. Create the k8s cluster
Step 3. (Optional) Reset Kubernetes cluster
NOTES
Kubernetes installation using Kubespray
Node preparation
Prerequisites
Step 1. Set up Kubespray and Ansible
Step 2. Build your own inventory
Step 3. Define Kubernetes configuration
Step 4. Deploy Kubernetes
Step 5. Create kubectl configuration
Quick reference
Authentication and Authorization
Authentication and authorization
Istio-based implementation for cloud native environments
APISIX-based implementation for cloud native environments
Authentication and Authorization with APISIX and an OIDC-based Identity Provider (Keycloak)
Prerequisites
Update values
Install
Usage
Uninstall
Leveraging Istio to compose an OPEA Pipeline with authentication and authorization enabled
Prerequisite
Perform authentication and authorization via Bearer JWT tokens and curl
Perform authentication and authorization via oauth2-proxy and OIDC provider and UI
Helm Charts
Helm charts for deploying GenAI Components and Examples
Table of Contents
Helm Charts
Deploy with Helm charts
Helm Charts Options
Using HPA (autoscaling)
Using Persistent Volume
Using Private Docker Hub
Generate manifests from Helm Charts
CI guidelines for helm charts
Table of Contents
Infra Setup
Add new test case
HorizontalPodAutoscaler (HPA) support
Table of Contents
Introduction
Pre-conditions
Gotchas
Enable HPA
Verify
Monitoring support
Table of Contents
Introduction
Pre-conditions
Install
Verify
agent
Deploy
Verify
Options
asr
(Option 1): Installing the chart separately
(Option 2): Installing the chart with dependencies automatically
Verify
Values
chathistory-usvc
(Option 1): Installing the chart separately
(Option 2): Installing the chart with dependencies automatically
Verify
Values
data-prep
(Option 1): Installing the chart separately
(Option 2): Installing the chart with dependencies automatically
Verify
Values
Milvus support
embedding-usvc
(Option 1): Installing the chart separately
(Option 2): Installing the chart with dependencies automatically
Verify
Values
gpt-sovits
Install the chart
Verify
Values
guardrails-usvc
(Option 1): Installing the chart separately
(Option 2): Installing the chart with dependencies automatically
Verify
Values
llm-uservice
(Option 1): Installing the chart separately
(Option 2): Installing the chart with dependencies automatically
Verify
Values
lvm-uservice
(Option 1): Installing the chart separately
(Option 2): Installing the chart with dependencies automatically
Verify
Values
mongodb
Install the Chart
Verify
Values
prompt-usvc
(Option 1): Installing the chart separately
(Option 2): Installing the chart with dependencies automatically
Verify
Values
redis-vector-db
Install the Chart
Verify
Values
reranking-usvc
(Option 1): Installing the chart separately
(Option 2): Installing the chart with dependencies automatically
Verify
Values
retriever-usvc
(Option 1): Installing the chart separately
(Option 2): Installing the chart with dependencies automatically
Verify
Values
Milvus support
speecht5
Installing the Chart
Verify
Values
tei
Installing the Chart
Verify
Values
teirerank
Installing the Chart
Verify
Values
tgi
Installing the Chart
Verify
Values
tts
(Option 1): Installing the chart separately
(Option 2): Installing the chart with dependencies automatically
Verify
Values
vllm
Installing the Chart
Verify
Values
web-retriever
(Option 1): Installing the chart separately
(Option 2): Installing the chart with dependencies automatically
Verify
Values
whisper
Installing the Chart
Verify
Values
AgentQnA
Deploy
Verify
AudioQnA
Installing the Chart
Verify
Values
ChatQnA
Installing the Chart
Verify
Values
Troubleshooting
ChatQnA Troubleshooting
A function to get the endpoint of a service
Define the namespace of the service
Upload a file to the database
Get the embedding of the input
Get the retrieved documents
Rerank the documents
TGI Q and A
References
CodeGen
Installing the Chart
Verify
Values
CodeTrans
Installing the Chart
Verify
Values
DocSum
Installing the Chart
Verify
Values
FaqGen
Verify
Values
VisualQnA
Verify
Values
Kubernetes Addons
Deploy Kubernetes add-ons for OPEA
Intel® Gaudi® Base Operator for Kubernetes
How-To Set Up Observability for OPEA Workloads in Kubernetes
Prepare
1. Set Up Prometheus & Grafana
2. Metrics for Gaudi Hardware (v1.16.2)
3. Metrics for OPEA applications
4. Metrics for PCM (Intel® Performance Counter Monitor)
More dashboards
Memory Bandwidth Exporter
Setup
More Flags for the Memory Bandwidth Exporter
Microservices Connector
genai-microservices-connector (GMC)
Description
Architecture
Personas
Getting Started
Troubleshooting GMC Custom Resource (CR)
Usage guide for genai-microservices-connector (GMC)
Use GMC to compose a ChatQnA Pipeline
Use GMC to adjust the ChatQnA Pipeline
Use GMC to delete the ChatQnA Pipeline
Use GMC and Istio to compose an OPEA Pipeline with authentication and authorization enabled
ChatQnA Use Cases in Kubernetes Cluster via GMC
Using prebuilt images
Deploy ChatQnA pipeline
Helm chart for genai-microservices-connector (GMC)
Installing the GMC Helm Chart
Check the installation result
Next step
Uninstall
Pipeline Proxy
OPEA Pipeline Proxy
Features
Build
Deployment
Development
Guardrails
Architecture
Deployment
Scripts
Scripts and tools
NVIDIA GPU Quick-Start Guide
Prerequisite
Usages
FAQ and Troubleshooting
Deploy Autoscaling Ray Cluster with KubeRay in Kubernetes Cluster
Install KubeRay
Start Ray Cluster with Autoscaling
Delete Ray Cluster with Autoscaling
Uninstall KubeRay
Evaluating GenAI
GenAIEval
Installation
Evaluation
lm-evaluation-harness
bigcode-evaluation-harness
Kubernetes platform optimization
Benchmark
Features
How to use
Grafana Dashboards
Additional Content
Legal Information
License
Citation
Kubernetes Platform Optimization with Resource Management
Introduction
NRI Plugins
Install
Validate policy status
Configure
Validate CPU affinity and hardware alignment in containers
Remove a policy
NRI topology-aware resource policy
OPEA Benchmark Tool
Features
Table of Contents
Installation
Prerequisites
Usage
Configuration
Test Suite Configuration
Test Cases
Auto-Tuning for ChatQnA: Optimizing Resource Allocation in Kubernetes
Key Features
Usage
Configuration Files
Hardware_info.json
chatqna_neuralchat_rerank_latest.yaml
Tuning Config Parameters
Output
Auto-Tuning for ChatQnA: Optimizing Accuracy by Tuning Model Related Parameters
Prepare Dataset
Run the Tuning script
Set up Prometheus and Grafana to visualize microservice metrics
1. Set up Prometheus
1.1 Node Metrics (optional)
1.2 Intel® Gaudi® Metrics (optional)
2. Set up Grafana
3. Import Grafana Dashboard
StressCli
stresscli.py
Prerequisites
Installation
Usage
Locust scripts for OPEA ChatQnA
Configuration file
Basic Usage
HELMET: How to Evaluate Long-context Language Models Effectively and Thoroughly
Quick Links
Setup
Data
Running evaluation
Model-based evaluation
Adding new models
Adding new tasks
Others
Contacts
Citation
CRAG Benchmark for Agent QnA systems
Overview
Getting started
CRAG dataset
Launch agent QnA system
Run CRAG benchmark
Use LLM-as-judge to grade the answers
AutoRAG to evaluate RAG system performance
Service preparation
RAG evaluation
Notes
🚀 QuickStart
Installation
Launch Service of LLM-as-a-Judge
Writing your first test case
Acknowledgements
🚀 QuickStart
Installation
Launch a LLM Service
Example 1: TGI
Example 2: OPEA LLM
Predict
Evaluate
Evaluation Methodology
Introduction
Prerequisite
Environment
MultiHop (English dataset)
Launch Service of RAG System
Launch Service of LLM-as-a-Judge
Prepare Dataset
Evaluation
CRUD (Chinese dataset)
Prepare Dataset
Launch Service of RAG System
Evaluation
Acknowledgements
Metric Card for BLEU
Metric Description
Intended Uses
How to Use
Inputs
Output Values
Examples
Limitations and Bias
Citation
Further References
RAGAAF (RAG assessment - Annotation Free)
Key features
Run RAGAAF
1. Data
2. Launch endpoint on Gaudi
3. Model
4. Metrics
5. Evaluation
Customizations
OPEA adaptation of ragas (LLM-as-a-judge evaluation of Retrieval Augmented Generation)
User data
Launch a Hugging Face endpoint on Intel’s Gaudi machines
Run OPEA ragas pipeline using your desired list of metrics
Troubleshooting
Developer Guides
Coding Guides
OPEA API Service Spec (v1.0)
OPEA Mega Service API
OPEA Micro Service API
Documentation Guides
Documentation Guidelines
Markdown vs. reStructuredText
Documentation Organization
Headings
Content Highlighting
Lists
Multi-Column Lists
Tables
File Names and Commands
Branch-Specific File Links
Internal Cross-Reference Linking
Non-ASCII Characters
Include Content from Other Files
Code and Command Examples
Images
Tabs, Spaces, and Indenting
Background Colors
Drawings
Alternative Tabbed Content
Instruction Steps
First Instruction Step
Second Instruction Step
Documentation Generation
Drawings Using Graphviz
Simple Directed Graph
Adding Edge Labels
Tables
Finite-State Machine
OPEA Documentation Generation
Documentation Overview
Set Up the Documentation Working Folders
Install the Documentation Tools
Documentation Presentation Theme
Run the Documentation Processors
Doc Build Troubleshooting
Publish Content
Document Versioning
Filter Expected Warnings
OPEA Community
Community Support
Resources
Contributing Guides
Contribution Guidelines
All The Ways To Contribute
Support
Contributor Covenant Code of Conduct
OPEA Project Code Owners
GenAIComps Repository Code Owners
GenAIEval Repository Code Owners
GenAIExamples Repository Code Owners
GenAIInfra Repository Code Owners
docs Repository Code Owners
Continuous Integration (CI/CD) owners
Reporting a Vulnerability
Script Usage Notice
Roadmaps
OPEA 2024 - 2025 Roadmap
May 2024
June 2024
July 2024
Aug 2024
Sep 2024
Q4 2024
Q1 2025
OPEA CI/CD Roadmap
Milestone 1 (May, Done)
Milestone 2 (June)
Milestone 3 (July)
Milestone 4 (Aug)
Project Governance
Technical Charter (the “Charter”) for OPEA, a Series of LF Projects, LLC
1. Mission and Scope of the Project
2. Technical Steering Committee
3. TSC Voting
4. Compliance with Policies
5. Community Assets
6. General Rules and Operations
7. Intellectual Property Policy
8. Amendments
Technical Steering Committee (TSC)
Technical Steering Committee Members
Contributor Covenant Code of Conduct
Our Pledge
Our Standards
Enforcement Responsibilities
Scope
Enforcement
Enforcement Guidelines
Attribution
Reporting a Vulnerability
Script Usage Notice
RFC Proposals
Request for Comments (RFCs)
24-05-16 GenAIExamples-001 Using MicroService to Implement ChatQnA
24-05-16 OPEA-001 Overall Design
24-05-24 OPEA-001 Code Structure
24-06-21-OPEA-001-DocSum_Video_Audio
24-06-21-OPEA-001-Guardrails-Gateway
24-07-11-OPEA-Agent
24-08-02-OPEA-AIAvatarChatbot
24-08-07 OPEA-001 OPEA GenAIStudio
24-08-20-OPEA-001-AI Gateway API
24-08-21-GenAIExample-002-Edge Craft RAG
24-10-02-GenAIExamples-001-Image_and_Audio_Support_in_MultimodalQnA
RFC Template
Release Notes
OPEA Release Notes v1.1
What’s New in OPEA v1.1
Highlights
Notable Changes
Full Changelogs
Contributors
Contributing Organizations
Individual Contributors
OPEA Release Notes v1.0
What’s New in OPEA v1.0
Details
OPEA Release Notes v0.9
What’s New in OPEA v0.9
Details
OPEA Release Notes v0.8
What’s New in OPEA v0.8
Details
Thanks to these contributors
OPEA Release Notes v0.7
OPEA Highlights
GenAIExamples
GenAIComps
GenAIEvals
GenAIInfra
OPEA Release Notes v0.6
OPEA Highlight
GenAIExamples
GenAIComps
GenAIEvals
GenAIInfra
Contributing to OPEA
Additional Content
OPEA Frequently Asked Questions
What is OPEA’s mission?
What is OPEA?
What problems are faced by GenAI deployments within the enterprise?
Why now?
How does it compare to other options for deploying GenAI solutions within the enterprise?
Will OPEA reference implementations work with proprietary components?
What does the OPEA acronym stand for?
How do I pronounce OPEA?
What initial companies and open-source projects joined OPEA?
What is Intel contributing?
When you say Technical Conceptual Framework, what components are included?
What are the different ways partners can contribute to OPEA?
Where can partners see the latest draft of the Conceptual Framework spec?
Is there a cost for joining?
Do I need to be a Linux Foundation member to join?
Where can I report a bug or vulnerability?
CodeGen Example Deployment Options
The following deployment options are available, depending on your hardware and environment:
Single Node
Gaudi AI Accelerator
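For orientation, a single-node Gaudi deployment typically comes down to cloning GenAIExamples, exporting a few environment variables, and starting the stack with Docker Compose. The sketch below is a minimal outline, not the authoritative procedure: the directory layout, the set_env.sh location, and the 7778 megaservice port are assumptions based on recent GenAIExamples releases, so confirm each against the Gaudi guide linked above before running it.

```bash
# Minimal sketch of a single-node CodeGen deployment on a Gaudi host.
# Paths, script names, and the service port are assumptions based on
# recent GenAIExamples releases; verify them against the linked guide.
git clone https://github.com/opea-project/GenAIExamples.git
cd GenAIExamples/CodeGen/docker_compose/intel/hpu/gaudi

# Token the model-serving container uses to pull models from Hugging Face.
export HUGGINGFACEHUB_API_TOKEN="<your-hf-token>"
# Host IP the microservices advertise to one another.
export host_ip=$(hostname -I | awk '{print $1}')

# Populate model names, ports, and endpoint URLs for the services
# (script location varies by release; adjust the relative path as needed).
source ../../../set_env.sh

# Launch the CodeGen megaservice and its supporting microservices.
docker compose up -d

# Once the containers are healthy, exercise the megaservice endpoint.
curl http://${host_ip}:7778/v1/codegen \
  -H "Content-Type: application/json" \
  -d '{"messages": "Write a Python function that adds two numbers."}'
```

If the curl call returns generated code, the stack is up; if not, docker compose logs is usually the quickest way to find the failing service.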