Docker Images
A list of released OPEA docker images is available on https://hub.docker.com/; it contains all relevant images from the GenAIExamples, GenAIComps, and GenAIInfra projects. Expect more publicly available images in future releases.
Take ChatQnA as an example. ChatQnA is a chatbot application service based on the Retrieval-Augmented Generation (RAG) architecture. It consists of multiple microservices: opea/embedding, opea/retriever, opea/reranking-tei, opea/llm-textgen, opea/dataprep, opea/chatqna, opea/chatqna-ui, and (optionally) opea/chatqna-conversation-ui. Other services are similar; see the corresponding README for details.
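Since these images are published under the `opea` namespace on Docker Hub, the ChatQnA composition above can be fetched image by image. As a minimal sketch, the loop below prints the pull commands rather than running them, so you can review before pulling; the `:latest` tag is an assumption — match it to the OPEA release you deploy.

```shell
# Sketch: emit pull commands for the ChatQnA microservice images listed above.
# The ":latest" tag is an assumption -- pick the tag matching your OPEA release.
for img in opea/embedding opea/retriever opea/reranking-tei \
           opea/llm-textgen opea/dataprep opea/chatqna opea/chatqna-ui; do
  echo "docker pull ${img}:latest"
done
```

To actually pull, pipe the output to `sh`, or replace `echo` with the bare `docker pull` invocation.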
Example images
| Example Images | Dockerfile | Description |
| --- | --- | --- |
| | | The docker image serves as an audioqna gateway, using language modeling to generate answers to user queries by converting audio input to text, then using text-to-speech (TTS) to convert those answers back to speech for interaction. |
| | | The docker image serves as the audioqna UI entry, enabling seamless interaction with users. |
| | | The docker image serves as an audioqna gateway, using language modeling to generate answers to user queries by converting multilingual audio input to text, then using multilingual text-to-speech (TTS) to convert those answers back to speech for interaction. |
| | | The docker image serves as an avatarchatbot gateway, interacting with users by understanding their questions and providing relevant answers. |
| | | The docker image serves as a chatqna gateway, interacting with users by understanding their questions and providing relevant answers. |
| | | The docker image provides a user interface for chat-based Q&A using React, supporting interaction with users and continued conversations with a history stored in the browser's local storage. |
| | | The docker image serves as the chatqna UI entry, facilitating interaction with users for question answering. |
| | | The docker image provides a user interface for chat-based Q&A using React, supporting interaction with users and continued conversations with a history stored in the browser's local storage. |
| | | This docker image encapsulates chatqna's LLM service to secure model inputs and outputs. Guardrails proactively prevent models from interacting with insecure content and signal in time to stop insecure behavior. |
| | | The docker image serves as the codegen gateway, providing automatic creation of source code from a higher-level representation. |
| | | The docker image serves as the codegen UI entry, facilitating interaction with users for automatically generating code from a user's description. |
| | | The docker image provides a user interface for Codegen using React, generating the appropriate code based on the current user input. |
| | | The docker image serves as a codetrans gateway, converting source code written in one programming language into an equivalent version in another. |
| | | The docker image serves as the codetrans UI entry, facilitating interaction with users for translating one programming language into another. |
| | | The docker image serves as a DocRetriever gateway, using different methods to match user queries against a set of free-text records. |
| | | The docker image serves as a docsum gateway, capturing the main points and essential details of the original text. |
| | | The docker image serves as the docsum UI entry, facilitating interaction with users for document summarization. |
| | | The docker image provides a user interface for document summarization using React. Users can upload a file or paste text, then click "Generate Summary" to get a condensed summary; the view automatically scrolls to the bottom of the summary. |
| | | The docker image provides a user interface for summarizing documents and text as a Dockerized frontend application. Users can upload files or paste text to generate summaries. |
| | | The docker image serves as an Edge Craft RAG (EC-RAG) gateway, delivering a customizable, production-ready Retrieval-Augmented Generation system optimized for edge solutions. |
| | | The docker image serves as the Edge Craft RAG (EC-RAG) UI entry, ensuring high-quality, performant interactions tailored for edge environments. |
| | | The docker image serves as an Edge Craft RAG (EC-RAG) server, delivering a customizable, production-ready Retrieval-Augmented Generation system optimized for edge solutions. |
| | | The docker image serves as a faqgen gateway, automatically generating comprehensive, natural-sounding Frequently Asked Questions (FAQs) from documents, legal texts, customer inquiries, and other sources. |
| | | The docker image serves as the faqgen UI entry point for easy interaction with users, generating FAQs from pasted question text. |
| | | The docker image provides a user interface for generating FAQs using React, from uploaded files or pasted text. |
| | | The docker image serves as a GraphRAG gateway, leveraging a knowledge graph derived from source documents to address both local and global queries. |
| | | The docker image serves as the GraphRAG UI entry, facilitating interaction with users. |
| | | The docker image provides a user interface for GraphRAG using React. |
| | | The docker image serves as a multimodalqna gateway, dynamically fetching the most relevant multimodal information (frames, transcripts, and/or subtitles) from the user's video collection to answer the query. |
| | | The docker image serves as the multimodalqna UI entry point for easy interaction with users; answers to questions are generated from videos uploaded by users. |
| | | The docker image provides a user interface for the Productivity Suite application using React, allowing interaction by uploading documents and inputs. |
| | | The docker image serves as the searchqna gateway, retrieving accurate and relevant answers to user queries from a knowledge base or dataset. |
| | | The docker image serves as the searchqna UI entry, facilitating interaction with users for question answering. |
| | | The docker image serves as the translation gateway, providing language translation. |
| | | The docker image serves as the translation UI entry, facilitating interaction with users for language translation. |
| | | The docker image serves as a videoqna gateway, interacting with the user by retrieving videos based on user prompts. |
| | | The docker image serves as the user interface entry point for videoqna, facilitating interaction with the user and retrieving videos based on user prompts. |
| | | The docker image serves as a visualqna gateway, outputting answers in natural language based on a combination of images and questions. |
| | | The docker image serves as the user interface portal for VisualQnA, facilitating interaction with the user and outputting answers in natural language based on a combination of images and questions from the user. |
Microservice images
| Microservice Images | Dockerfile | Description |
| --- | --- | --- |
| | | The docker image exposes the OPEA agent microservice for GenAI application use. |
| | | The docker image exposes the OPEA agent microservice UI entry for GenAI application use. |
| | | The docker image exposes the OPEA Audio-Speech-Recognition microservice for GenAI application use. |
| | | The docker image exposes the OPEA Avatar Animation microservice for GenAI application use. |
| | | The docker image exposes the OPEA Chat History microservice, based on a MongoDB database and designed to let users store, retrieve, and manage chat conversations. |
| | | The docker image exposes the OPEA dataprep microservice for GenAI application use. |
| | | The docker image exposes the OPEA mosec embedding microservice for GenAI application use. |
| | | The docker image exposes the OPEA mosec embedding microservice, based on the LangChain framework, for GenAI application use. |
| | | The docker image exposes the OPEA multimodal embedding microservice based on BridgeTower for GenAI application use. |
| | | The docker image exposes the OPEA multimodal embedding microservice based on BridgeTower for GenAI application use on Gaudi. |
| | | The docker image exposes the OPEA feedback management microservice, which uses a MongoDB database, for GenAI application use. |
| | | The docker image exposes the OPEA Fine-tuning microservice for GenAI application use. |
| | | The docker image exposes the OPEA Fine-tuning microservice for GenAI application use on Gaudi. |
| | | The docker image exposes the OPEA GPT-SoVITS service for GenAI application use. |
| | | The docker image exposes the OPEA guardrail microservice for GenAI application use. |
| | | The docker image exposes the OPEA guardrail microservice providing toxicity detection for GenAI application use. |
| | | The docker image exposes the OPEA guardrail microservice providing PII detection for GenAI application use. |
| | | The docker image exposes the OPEA guardrail microservice providing injection detection with Prediction Guard for GenAI application use. |
| | | The docker image exposes the OPEA guardrail microservice providing hallucination detection for GenAI application use. |
| | | The docker image exposes the OPEA guardrail microservice providing factuality checks with Prediction Guard for GenAI application use. |
| | | The docker image exposes the OPEA guardrail microservice providing bias detection for GenAI application use. |
| | | The docker image exposes the OPEA Image-to-Image microservice for GenAI application use on Gaudi. |
| | | The docker image exposes the OPEA Image-to-Image microservice for GenAI application use. |
| | | The docker image exposes the OPEA image-to-video microservice for GenAI application use on Gaudi. |
| | | The docker image exposes the OPEA image-to-video microservice for GenAI application use. |
| | | The docker image exposes the OPEA LLM microservice, built on the textgen docker image, for GenAI application use. |
| | | The docker image exposes the OPEA LLM microservice, built on the textgen docker image, for GenAI application use on Gaudi2. |
| | | The docker image exposes the OPEA LLM microservice, built on the eval docker image, for GenAI application use. |
| | | The docker image exposes the OPEA LLM microservice, built on the docsum docker image, for GenAI application use. |
| | | This docker image builds a frequently-asked-questions microservice using the Hugging Face Text Generation Inference (TGI) framework; the microservice accepts document input and generates a FAQ. |
| | | The docker image exposes the OPEA large visual model (LVM) microservice for GenAI application use. |
| | | The docker image exposes the OPEA microservice running LLaVA as a large visual model (LVM) server for GenAI application use. |
| | | The docker image exposes the OPEA microservice running Video-Llama as a large visual model (LVM) for GenAI application use. |
| | | The docker image exposes the OPEA microservice running Prediction Guard as a large visual model (LVM) server for GenAI application use. |
| | | The docker image exposes the OPEA microservice running LLaVA as a large visual model (LVM) service for GenAI application use on Gaudi2. |
| | | The docker image exposes the OPEA microservice running Llama Vision as the base large visual model service for GenAI application use. |
| | | The docker image exposes the OPEA microservice running Llama Vision with DeepSpeed as the base large visual model service for GenAI application use. |
| | | The docker image exposes the OPEA microservice running Llama Vision Guard as the base large visual model service for GenAI application use. |
| | | The docker image exposes the OPEA Prompt Registry microservice, based on a MongoDB database and designed to store and retrieve users' preferred prompts. |
| | | The docker image exposes the OPEA reranking microservice for GenAI application use. |
| | | The docker image exposes the OPEA retrieval microservice for GenAI application use. |
| | | The docker image exposes the OPEA text-to-image microservice for GenAI application use. |
| | | The docker image exposes the OPEA text-to-image microservice for GenAI application use on Gaudi. |
| | | The docker image exposes the OPEA text-to-image microservice UI entry for GenAI application use. |
| | | The docker image exposes the OPEA text-to-SQL (Structured Query Language) microservice for GenAI application use. |
| | | The docker image exposes the OPEA text-to-SQL microservice React UI entry for GenAI application use. |
| | | The docker image exposes the OPEA Text-To-Speech microservice for GenAI application use. |
| | | The docker image exposes the OPEA SpeechT5 service for GenAI application use. |
| | | The docker image exposes the OPEA SpeechT5 service on Gaudi2 for GenAI application use. |
| | | The docker image exposes the OPEA GPT-SoVITS service for GenAI application use. |
| | | The docker image exposes the OPEA nginx microservice for GenAI application use. |
| | | The docker image exposes the OPEA Vectorstores microservice with Pathway for GenAI application use. |
| | | The docker image exposes the OPEA microservice for generating lip movements from audio files for GenAI application use. |
| | | The docker image exposes the OPEA microservice for generating lip movements from audio files for GenAI application use on Gaudi2. |
| | | The docker image, powered by vllm-project, deploys and serves vLLM models on Arc. |
| | | The docker image, powered by vllm-project, deploys and serves vLLM models with the OpenVINO framework. |
| | | The docker image, powered by vllm-project, deploys and serves vLLM models on Gaudi2. |
| | | The docker image, powered by vllm-project, deploys and serves vLLM models. |
| | | The docker image exposes the OPEA Whisper service on Gaudi2 for GenAI application use. |
| | | The docker image exposes the OPEA Whisper service for GenAI application use. |
| | | The docker image exposes the OPEA retrieval microservice based on the Chroma vector database for GenAI application use. |