Docker Images
The list of released OPEA Docker images on https://hub.docker.com/ contains all relevant images from the GenAIExamples, GenAIComps, and GenAIInfra projects. Expect more publicly available images in future releases.
Take ChatQnA as an example. ChatQnA is a chatbot application service based on the Retrieval-Augmented Generation (RAG) architecture. It consists of multiple microservice images: opea/embedding, opea/retriever, opea/reranking-tei, opea/llm-textgen, opea/dataprep, opea/chatqna, opea/chatqna-ui, and opea/chatqna-conversation-ui (optional). Other services are similar; see the corresponding README for details.
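Once published, these images can be pulled programmatically as well as with the Docker CLI. The snippet below is a minimal sketch using the Docker SDK for Python (docker-py); it assumes the ChatQnA images listed above are available under the opea namespace with a latest tag, so substitute the tag that matches your release.

```python
# Minimal sketch: pull the ChatQnA-related images with the Docker SDK for Python.
# Assumes `pip install docker`, a running Docker daemon, and that the images
# below are published under the `opea` namespace with a `latest` tag.
import docker

CHATQNA_IMAGES = [
    "opea/embedding",
    "opea/retriever",
    "opea/reranking-tei",
    "opea/llm-textgen",
    "opea/dataprep",
    "opea/chatqna",
    "opea/chatqna-ui",
    "opea/chatqna-conversation-ui",  # optional conversation UI
]

client = docker.from_env()
for name in CHATQNA_IMAGES:
    image = client.images.pull(name, tag="latest")
    print(f"pulled {image.tags}")
```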
Example images
- AudioQnA gateway. Uses language modeling to generate answers to user queries by converting audio input to text, then uses text-to-speech (TTS) to convert those answers back to speech for interaction.
- AudioQnA gateway (multilingual). Uses language modeling to generate answers to user queries by converting multilingual audio input to text, then uses multilingual text-to-speech (TTS) to convert those answers back to speech for interaction.
- AudioQnA UI entry. Enables seamless interaction with users.
- AvatarChatbot gateway. Interacts with users by understanding their questions and providing relevant answers.
- ChatQnA gateway. Interacts with users to understand their questions and provide relevant answers.
- ChatQnA React UI. Facilitates interaction with users, enabling chat-based Q&A with conversation history stored in the browser's local storage.
- ChatQnA UI entry. Facilitates interaction with users to answer questions.
- CodeGen gateway. Provides automatic creation of source code from high-level representations.
- CodeGen Gradio UI entry. Interacts with users to generate source code from high-level descriptions or inputs.
- CodeGen React UI. Interacts with users to generate appropriate code based on the current user input.
- CodeGen UI entry. Facilitates interaction with users, automatically generating code from their descriptions.
- CodeTrans gateway. Provides services to convert source code written in one programming language into an equivalent version in another.
- CodeTrans UI entry. Facilitates interaction with users, translating code from one programming language into another.
- DocRetriever gateway. Matches a user query against a set of free-text records using different methods.
- DocSum gateway. Provides a service that captures the gist and important details of the original text.
- DocSum Gradio UI entry. Interacts with users to summarize documents and text: upload files or paste text to generate concise summaries.
- DocSum React UI entry. Lets users upload a file or paste text, then click "Generate Summary" to get a condensed summary; the view automatically scrolls to the bottom as the summary is generated.
- DocSum UI entry. Facilitates interaction with users for document summarization.
- Edge Craft RAG (EC-RAG) gateway. Provides a customizable, production-ready retrieval-augmented generation system optimized for edge solutions.
- Edge Craft RAG (EC-RAG) server. Provides a customizable, production-ready retrieval-augmented generation system optimized for edge solutions.
- Edge Craft RAG (EC-RAG) UI entry. Ensures high-quality, performant interactions tailored for edge environments.
- Edge Craft RAG (EC-RAG) Gradio UI entry. Interacts with users to provide a customizable, production-ready retrieval-augmented generation system optimized for edge solutions.
- GraphRAG gateway. Processes local and global queries using knowledge graphs extracted from source documents.
- GraphRAG React UI entry. Facilitates interaction with users, enabling queries and providing relevant answers using knowledge graphs.
- GraphRAG UI entry. Interacts with users to facilitate queries and provide relevant answers using knowledge graphs.
- MultimodalQnA gateway. Dynamically solves problems by obtaining the most relevant multimodal information (frames, text, and/or subtitles) from the user's video collection.
- MultimodalQnA UI entry. Enables easy interaction with users; answers to questions are generated from videos uploaded by users.
- Productivity Suite React UI server. Interacts with users to upload documents and inputs, enabling seamless productivity workflows.
- SearchQnA gateway. Provides services to retrieve accurate and relevant answers to user queries from knowledge bases or datasets.
- SearchQnA UI entry. Facilitates interaction with users to answer questions.
- Translation gateway. Provides language translation services.
- Translation UI entry. Facilitates language translation interactions with users.
- VideoQnA gateway. Retrieves videos based on user prompts and interacts with users.
- VideoQnA UI entry. Interacts with users to retrieve videos based on user prompts.
- VisualQnA gateway. Outputs answers in natural language based on combinations of images and questions.
- VisualQnA UI entry. Interacts with users to answer questions based on a combination of images and queries.
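Each gateway image above exposes an application entry point once deployed. The snippet below is a minimal sketch of querying a locally running ChatQnA gateway; the port (8888), route (/v1/chatqna), and payload shape are assumptions based on common ChatQnA deployment defaults and should be confirmed against the ChatQnA README for your release.

```python
# Minimal sketch: query a running ChatQnA gateway.
# The host, port, route, and payload shape are assumptions; verify them in the
# ChatQnA README for the release you deploy.
import requests

GATEWAY_URL = "http://localhost:8888/v1/chatqna"  # assumed default endpoint

response = requests.post(
    GATEWAY_URL,
    json={"messages": "What is OPEA?"},  # assumed request schema
    timeout=120,
)
response.raise_for_status()
print(response.text)
```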
Microservice images
- OPEA agent microservice for GenAI applications.
- OPEA agent microservice UI entry for GenAI applications.
- OPEA Avatar Animation microservice for GenAI applications.
- OPEA Audio-Speech-Recognition microservice for GenAI applications.
- OPEA Chat History microservice, based on a MongoDB database and designed to allow users to store, retrieve, and manage chat conversations.
- OPEA microservice base image.
- OPEA data preparation microservice for GenAI applications.
- OPEA mosec embedding microservice for GenAI applications.
- OPEA multimodal embedding microservice based on BridgeTower for GenAI applications.
- OPEA multimodal embedding microservice based on BridgeTower for GenAI applications on Gaudi.
- OPEA mosec embedding microservice based on the LangChain framework for GenAI applications.
- OPEA feedback management microservice, based on a MongoDB database, for GenAI applications.
- OPEA fine-tuning microservice for GenAI applications.
- OPEA fine-tuning microservice for GenAI applications on Gaudi.
- OPEA fine-tuning microservice based on Xtune for GenAI applications on the Arc A770.
- OPEA GPT-SoVITS service for GenAI applications.
- OPEA guardrail microservice for GenAI applications.
- OPEA guardrail microservice providing bias detection for GenAI applications.
- OPEA guardrail microservice providing factuality checking via PredictionGuard for GenAI applications.
- OPEA guardrail microservice providing hallucination detection for GenAI applications.
- OPEA guardrail microservice providing prompt injection detection via PredictionGuard for GenAI applications.
- OPEA guardrail microservice providing PII detection for GenAI applications.
- OPEA guardrail microservice providing toxicity detection for GenAI applications.
- OPEA image-to-image microservice for GenAI applications.
- OPEA image-to-image microservice for GenAI applications on Gaudi.
- OPEA image-to-video microservice for GenAI applications.
- OPEA image-to-video microservice for GenAI applications on Gaudi.
- OPEA Large Language Model (LLM) service based on intel-extension-for-pytorch, providing specialized optimizations such as paged attention and ROPE fusion.
- OPEA LLM microservice built upon the docsum Docker image for GenAI applications.
- OPEA LLM microservice built upon the eval Docker image for GenAI applications.
- OPEA FAQ generation microservice, designed to generate frequently asked questions from document input using the Hugging Face Text Generation Inference (TGI) framework.
- OPEA LLM microservice built upon the textgen Docker image for GenAI applications.
- OPEA LLM microservice built upon the textgen Docker image for GenAI applications on Gaudi2.
- OPEA LLM microservice built upon the textgen Docker image for GenAI applications on Gaudi2, with Phi4 optimization.
- OPEA large visual model (LVM) microservice for GenAI applications.
- OPEA microservice running Llama Vision as a large visual model (LVM) server for GenAI applications.
- OPEA microservice running Llama Vision Guard as a large visual model (LVM) server for GenAI applications.
- OPEA microservice running Llama Vision with DeepSpeed as a large visual model (LVM) server for GenAI applications.
- OPEA microservice running LLaVA as a large visual model (LVM) server for GenAI applications.
- OPEA microservice running LLaVA as a large visual model (LVM) server for GenAI applications on Gaudi2.
- OPEA microservice running PredictionGuard as a large visual model (LVM) server for GenAI applications.
- OPEA microservice running Video-Llama as a large visual model (LVM) server for GenAI applications.
- OPEA nginx microservice for GenAI applications.
- OPEA Pathway microservice for GenAI applications.
- OPEA Prompt Registry microservice, based on a MongoDB database and designed to store and retrieve users' preferred prompts.
- OPEA reranking microservice for GenAI applications.
- OPEA retrieval microservice for GenAI applications.
- OPEA SpeechT5 service for GenAI applications.
- OPEA SpeechT5 service for GenAI applications on Gaudi2.
- OPEA struct-to-graph service for GenAI applications.
- OPEA Text-to-Cypher microservice for GenAI applications on Gaudi2.
- OPEA Text-to-Graph microservice for GenAI applications.
- OPEA text-to-image microservice for GenAI applications.
- OPEA text-to-image microservice for GenAI applications on Gaudi.
- OPEA text-to-image microservice UI entry for GenAI applications.
- OPEA text-to-SQL (Structured Query Language) microservice for GenAI applications.
- OPEA text-to-SQL (Structured Query Language) microservice React UI entry for GenAI applications.
- OPEA Text-to-Speech microservice for GenAI applications.
- Deploys and serves LLMs based on the vLLM project.
- Deploys and serves LLMs on Arc based on the vLLM project.
- Deploys and serves LLMs on Gaudi2 based on the vLLM project.
- Deploys and serves LLMs with the OpenVINO framework based on the vLLM project.
- Deploys and serves LLMs on AMD ROCm based on the vLLM project.
- OPEA microservice that generates lip movements from audio files, with Pathway, for GenAI applications.
- OPEA microservice that generates lip movements from audio files, with Pathway, for GenAI applications on Gaudi2.
- OPEA retrieval microservice based on the Chroma vector database for GenAI applications.
- OPEA Whisper service for GenAI applications.
- OPEA Whisper service for GenAI applications on Gaudi2.
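Individual microservice images can also be started directly for local testing. The snippet below is a minimal sketch using the Docker SDK for Python; the image name opea/whisper, the tag, and the port mapping are assumptions, so take the real values (and any required environment variables) from that microservice's README.

```python
# Minimal sketch: start a single OPEA microservice container for local testing.
# The image name, tag, and port are placeholders; take the real values from the
# microservice's README (some services also need environment variables such as
# model IDs or API tokens).
import docker

IMAGE = "opea/whisper:latest"   # assumed image name and tag
SERVICE_PORT = 7066             # placeholder port; confirm in the README

client = docker.from_env()
container = client.containers.run(
    IMAGE,
    name="opea-whisper-test",
    detach=True,
    ports={f"{SERVICE_PORT}/tcp": SERVICE_PORT},
)
print(f"started {container.name} ({container.short_id})")
```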