OPEA™
OPEA Project
OPEA Overview
OPEA Project Architecture
Microservices: Flexible and Scalable Architecture
Megaservices: A Comprehensive Solution
Gateways: Customized Access to Mega- and Microservices
Next Step
Open Platform for Enterprise AI (OPEA) Framework Draft Proposal
Getting Started with OPEA
Prerequisites
Understanding OPEA’s Core Components
Visual Guide to Deployment
Setup ChatQnA Parameters
Set the required environment variables:
Deploy ChatQnA Megaservice and Microservices
Interact with ChatQnA Megaservice and Microservice
What’s Next
GenAI Examples
ChatQnA Sample Guide
Overview
Purpose
Key Implementation Details
How It Works
Architecture
Deployment
Troubleshooting
Monitoring
ChatQnA Example Deployment Options
Single Node
Kubernetes
Cloud Native
Generative AI Examples
Introduction
Architecture
Getting Started
Supported Examples
Contributing to OPEA
Additional Content
Legal Information
License
Citation
Docker Images
Example images
Microservice images
Supported Examples
ChatQnA
AgentQnA Application
Agents for Question Answering
Retrieval tool for agent
AudioQnA Application
AudioQnA Accuracy
Build MegaService of AudioQnA on Xeon
Build MegaService of AudioQnA on Gaudi
Deploy AudioQnA in a Kubernetes Cluster
Deploy AudioQnA in Kubernetes Cluster on Xeon and Gaudi
AudioQnA
ChatQnA Application
ChatQnA Accuracy
ChatQnA Benchmarking
ChatQnA Deployment
Build MegaService of ChatQnA on AIPC
Build MegaService of ChatQnA on Xeon
Build MegaService of ChatQnA (with Qdrant) on Xeon
Build MegaService of ChatQnA on Gaudi
How to Check and Validate Microservices in the GenAI Example
Build MegaService of ChatQnA on NVIDIA GPU
Deploy ChatQnA in Kubernetes Cluster
Deploy ChatQnA in Kubernetes Cluster on Xeon and Gaudi
ChatQnA Conversational UI
ChatQnA Customized UI
CodeGen Application
Code Generation Application
CodeGen Accuracy
Build MegaService of CodeGen on Xeon
Build MegaService of CodeGen on Gaudi
Deploy CodeGen in Kubernetes Cluster
Deploy CodeGen in a Kubernetes Cluster
Deploy CodeGen with ReactUI
Code Gen
CodeTrans Application
Code Translation Application
Build MegaService of CodeTrans on Xeon
Build MegaService of CodeTrans on Gaudi
Deploy CodeTrans in Kubernetes Cluster
Deploy CodeTrans in a Kubernetes Cluster
Code Translation
DBQnA Application
Deploy on Intel Xeon Processor
DBQnA React Application
DocIndexRetriever Application
DocRetriever Application
DocRetriever Application with Docker
DocSum Application
Document Summarization Application
Build MegaService of Document Summarization on Intel Xeon Processor
Build MegaService of Document Summarization on Gaudi
Deploy DocSum in Kubernetes Cluster
Deploy DocSum with ReactUI
Doc Summary React
Doc Summary
FaqGen Application
FAQ Generation Application
FaqGen Accuracy
Build MegaService of FAQ Generation on Intel Xeon Processor
Build MegaService of FAQ Generation on Gaudi
Deploy FaqGen in Kubernetes Cluster
Deploy FaqGen with ReactUI
Doc Summary React
FAQ Generation
InstructionTuning Application
Instruction Tuning
Deploy Instruction Tuning Service on Xeon
Deploy Instruction Tuning Service on Gaudi
MultimodalQnA Application
Build MegaService of MultimodalQnA on Xeon
Build MegaService of MultimodalQnA on Gaudi
ProductivitySuite Application
Productivity Suite Application
Build MegaService of Productivity Suite on Xeon
🔐 Keycloak Configuration Setup
🚀 Deploy ProductivitySuite with ReactUI
Productivity Suite React UI
RerankFinetuning Application
Rerank Model Finetuning
Deploy Rerank Model Finetuning Service on Xeon
Deploy Rerank Model Finetuning Service on Gaudi
SearchQnA Application
Build MegaService of SearchQnA on Xeon
Build MegaService of SearchQnA on Gaudi
Deploy SearchQnA in a Kubernetes Cluster
Neural Chat
Text2Image Application
Text2Image Customized UI
Translation Application
Build MegaService of Translation on Xeon
Build MegaService of Translation on Gaudi
Deploy Translation in Kubernetes Cluster
Deploy Translation in a Kubernetes Cluster
Language Translation
VideoQnA Application
Build MegaService of VideoQnA on Xeon
VisualQnA Application
Visual Question and Answering
Build MegaService of VisualQnA on Xeon
Build MegaService of VisualQnA on Gaudi
Deploy VisualQnA in Kubernetes Cluster
Deploy VisualQnA in a Kubernetes Cluster
GenAI Microservices
Generative AI Components (GenAIComps)
GenAIComps
MicroService
MegaService
Gateway
Contributing to OPEA
Additional Content
Legal Information
License
Citation
Agent Microservice
Agent Microservice
Plan Execute
RAG Agent
ASR Microservice
Chat History Microservice
📝 Chat History Microservice
📝 Chat History Microservice with MongoDB
Core Microservices
Telemetry for OPEA
Dataprep Microservice
Dataprep Microservice
Dataprep Microservice with Milvus
Dataprep Microservice for Multimodal Data with Redis
Dataprep Microservice with Neo4J
Dataprep Microservice with PGVector
Dataprep Microservice with Pinecone
Dataprep Microservice with Qdrant
Dataprep Microservice with Redis
Dataprep Microservice with VDMS
Multimodal Dataprep Microservice with VDMS
Embeddings Microservice
Embeddings Microservice
Build Mosec Endpoint Docker Image
Embedding Server
Multimodal Embeddings Microservice
Multimodal CLIP Embeddings Microservice
Embedding Generation Prediction Guard Microservice
Embeddings Microservice with Langchain TEI
Embeddings Microservice with Llama Index TEI
Feedback Management Microservice
🗨 Feedback Management Microservice
🗨 Feedback Management Microservice with MongoDB
Finetuning Microservice
Fine-tuning Microservice
Guardrails Microservice
Trust and Safety with LLM
Bias Detection Microservice
Factuality Check Prediction Guard Microservice
🚀 Start Microservice with Docker
🚀 Consume Factuality Check Service
Guardrails Microservice
PII Detection Microservice
PII Detection Prediction Guard Microservice
🚀 Start Microservice with Docker
🚀 Consume PII Detection Service
Prompt Injection Detection Prediction Guard Microservice
🚀 Start Microservice with Docker
🚀 Consume Prompt Injection Detection Service
Toxicity Detection Microservice
Toxicity Checking Prediction Guard Microservice
🚀 Start Microservice with Docker
🚀 Consume Toxicity Check Service
Guardrails Microservice
Image-to-Video Microservice
🚀1. Start Microservice with Python (Option 1)
🚀2. Start Microservice with Docker (Option 2)
Intent Detection Microservice
Intent Detection Microservice by TGI
LLMs Microservice
TGI FAQGen LLM Microservice
Document Summary TGI Microservice
LLM Microservice
LLM Native Microservice
Introduction
Prediction Guard Introduction
TGI LLM Microservice
vLLM Endpoint Service
vLLM-Ray Endpoint Service
LM-Eval Microservice
LVMs Microservice
LVM Microservice
LVM Prediction Guard Microservice
LVM Microservice
Nginx Microservice
Nginx for Microservice Forwarding
Prompt Registry Microservice
🧾 Prompt Registry Microservice
🧾 Prompt Registry Microservice with MongoDB
Ragas Microservice
Reranks Microservice
Reranking Microservice
Reranking Microservice with fastRAG
Reranking Microservice with Mosec
Reranking Microservice via TEI
Rerank Microservice with VideoQnA
Retrievers Microservice
Retriever Microservice
Retriever Microservice with Milvus
Retriever Microservice
Retriever Microservice with Neo4J
Retriever Microservice with Pathway
Retriever Microservice
Retriever Microservice with Qdrant
Retriever Microservice
Text-to-Image Microservice
🚀1. Start Microservice with Python (Option 1)
🚀2. Start Microservice with Docker (Option 2)
3. Test Text-to-Image Service
Text-to-SQL Microservice
🛢 Text-to-SQL Microservice
🛢🔗 Text-to-SQL Microservice with Langchain
TTS Microservice
GPT-SoVITS Microservice
TTS Microservice
Vector Stores Microservice
Start Chroma server
Start LanceDB Server
Start Milvus server
Start the Pathway Vector DB Server
Start PGVector server
Pinecone setup
Start Qdrant server
Start Redis server
Start VDMS server
Web Retriever Microservice
Deploying GenAI
GenAIInfra
Overview
Prerequisite
Usages
Additional Content
Development
Prerequisites
Testing
pre-commit testing
Legal Information
License
Citation
Release Branches
1. Create release candidate branch
2. Create images with release tag
3. Test helm charts
4. Test GMC
5. Publish images
Installation Guides
GenAI Microservices Connector (GMC) Installation
Kubernetes Installation Options
Kubernetes Installation using AWS EKS Cluster
Kubernetes Installation demo using kubeadm
Kubernetes Installation using Kubespray
Authentication and Authorization
Authentication and authorization
Authentication and Authorization with APISIX and OIDC based Identity provider (Keycloak)
Leveraging Istio to compose an OPEA Pipeline with authentication and authorization enabled
Helm Charts
Helm charts for deploying GenAI Components and Examples
HorizontalPodAutoscaler (HPA) support
asr
chathistory-usvc
data-prep
embedding-usvc
guardrails-usvc
llm-uservice
mongodb
prompt-usvc
redis-vector-db
reranking-usvc
retriever-usvc
speecht5
tei
teirerank
tgi
tts
vllm
web-retriever
whisper
ChatQnA
ChatQnA Troubleshooting
CodeGen
CodeTrans
DocSum
Kubernetes Addons
Deploy Kubernetes add-ons for OPEA
Intel® Gaudi® Base Operator for Kubernetes
How-To Setup Observability for OPEA Workload in Kubernetes
Memory Bandwidth Exporter
Microservices Connector
genai-microservices-connector (GMC)
Troubleshooting GMC Custom Resource (CR)
Usage guide for genai-microservices-connector (GMC)
ChatQnA Use Cases in Kubernetes Cluster via GMC
Helm chart for genai-microservices-connector (GMC)
Pipeline Proxy
OPEA Pipeline Proxy
Guardrails
Scripts
Scripts and tools
NVIDIA GPU Quick-Start Guide
Deploy Autoscaling Ray Cluster with KubeRay in Kubernetes Cluster
Evaluating GenAI
GenAIEval
Installation
Evaluation
Benchmark
Additional Content
Legal Information
License
Citation
Kubernetes Platform Optimization with Resource Management
Introduction
NRI Plugins
Install
Validate policy status
Configure
Validate CPU affinity and hardware alignment in containers
Remove a policy
NRI topology-aware resource policy
OPEA Benchmark Tool
Features
Table of Contents
Installation
Usage
Configuration
Auto-Tuning for ChatQnA: Optimizing Resource Allocation in Kubernetes
Key Features
Usage
Configuration Files
Output
Auto-Tuning for ChatQnA: Optimizing Accuracy by Tuning Model Related Parameters
Prepare Dataset
Run the Tuning script
Setup Prometheus and Grafana to visualize microservice metrics
1. Setup Prometheus
2. Setup Grafana
3. Import Grafana Dashboard
StressCli
stresscli.py
Locust scripts for OPEA ChatQnA
Configuration file
Basic Usage
CRAG Benchmark for Agent QnA systems
Overview
Getting started
CRAG dataset
Launch agent QnA system
Run CRAG benchmark
Use LLM-as-judge to grade the answers
AutoRAG to evaluate the RAG system performance
Service preparation
RAG evaluation
Notes
Evaluation Methodology
Introduction
Prerequisite
MultiHop (English dataset)
CRUD (Chinese dataset)
Acknowledgements
Metric Card for BLEU
Metric Description
Intended Uses
How to Use
Limitations and Bias
Citation
Further References
OPEA adaption of ragas (LLM-as-a-judge evaluation of Retrieval Augmented Generation)
User data
Launch HuggingFace endpoint on Intel’s Gaudi machines
Run OPEA ragas pipeline using your desired list of metrics
Troubleshooting
Developer Guides
Coding Guides
OPEA API Service Spec (v0.9)
Documentation Guides
Documentation Guidelines
Drawings Using Graphviz
OPEA Documentation Generation
OPEA Community
Community Support
Resources
Contributing Guides
Contribution Guidelines
OPEA Project Code Owners
Reporting a Vulnerability
Roadmaps
OPEA 2024 - 2025 Roadmap
OPEA CI/CD Roadmap
Project Governance
Technical Charter (the “Charter”) for OPEA a Series of LF Projects, LLC
Technical Steering Committee (TSC)
Contributor Covenant Code of Conduct
Reporting a Vulnerability
RFC Proposals
Request for Comments (RFCs)
Release Notes
OPEA Release Notes v1.0
What’s New in OPEA v1.0
Details
OPEA Release Notes v0.9
What’s New in OPEA v0.9
Details
OPEA Release Notes v0.8
What’s New in OPEA v0.8
Details
Thanks to these contributors
OPEA Release Notes v0.7
OPEA Highlights
GenAIExamples
GenAIComps
GenAIEvals
GenAIInfra
OPEA Release Notes v0.6
OPEA Highlight
GenAIExamples
GenAIComps
GenAIEvals
GenAIInfra
OPEA Frequently Asked Questions
What is OPEA’s mission?
What is OPEA?
What problems are faced by GenAI deployments within the enterprise?
Why now?
How does it compare to other options for deploying GenAI solutions within the enterprise?
Will OPEA reference implementations work with proprietary components?
What does the OPEA acronym stand for?
How do I pronounce OPEA?
What initial companies and open-source projects joined OPEA?
What is Intel contributing?
When you say Technical Conceptual Framework, what components are included?
What are the different ways partners can contribute to OPEA?
Where can partners see the latest draft of the Conceptual Framework spec?
Is there a cost for joining?
Do I need to be a Linux Foundation member to join?
Where can I report a bug or vulnerability?
Deploy Kubernetes add-ons for OPEA