VideoQnA
Note
This guide is in its early development and is a work-in-progress with placeholder content.
Overview
VideoQnA is a framework that retrieves videos matching a user-provided prompt. It uses video embeddings to perform vector similarity search in Intel's VDMS vector database, and it runs all operations on Intel Xeon CPUs. The pipeline supports long-form videos and time-based search.
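To make the retrieval idea concrete, below is a minimal, self-contained sketch of cosine-similarity search over stored video embeddings. It uses NumPy stand-ins rather than the VDMS client; the 512-dimensional vectors, the toy data, and the top_k_videos helper are illustrative assumptions, not part of the VideoQnA code.

```python
# Minimal sketch of vector similarity search over video embeddings.
# The embedding dimension (512) and the random data are placeholders.
import numpy as np

def top_k_videos(query_emb: np.ndarray, video_embs: np.ndarray, k: int = 3):
    """Return indices of the k stored video embeddings most similar
    to the query embedding, ranked by cosine similarity."""
    q = query_emb / np.linalg.norm(query_emb)
    v = video_embs / np.linalg.norm(video_embs, axis=1, keepdims=True)
    scores = v @ q                      # cosine similarity per stored video
    return np.argsort(scores)[::-1][:k]

# Toy example: 100 stored videos, 512-dim embeddings.
rng = np.random.default_rng(0)
store = rng.normal(size=(100, 512))
query = rng.normal(size=512)
print(top_k_videos(query, store))
```

In the actual pipeline this search is delegated to VDMS; the sketch only illustrates the ranking principle.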
Purpose
Efficient Search: Utilizes video embeddings for accurate and efficient retrieval.
Long-form Video Support: Capable of handling extensive video archives and time-based searches.
Microservice Architecture: Built on GenAIComps, incorporating microservices for embedding, retrieval, reranking, and language model integration (see the sketch after this list).
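The sketch below gives a concrete picture of how such a microservice composition could look as a chain of HTTP calls. Every hostname, port, path, and JSON field here is a hypothetical placeholder; the real service contracts are defined by the GenAIComps components in the Docker Compose deployment.

```python
# Hedged sketch of chaining embedding -> retrieval -> reranking -> LVM.
# All endpoints and payload shapes below are hypothetical placeholders,
# not the actual GenAIComps service APIs.
import requests

def answer(query: str) -> str:
    # Embed the text query (hypothetical endpoint).
    emb = requests.post("http://embedding:6000/v1/embeddings",
                        json={"text": query}).json()
    # Retrieve candidate videos by similarity (hypothetical endpoint).
    hits = requests.post("http://retriever:7000/v1/retrieval",
                         json={"embedding": emb["embedding"]}).json()
    # Rerank the candidates against the query (hypothetical endpoint).
    best = requests.post("http://reranking:8000/v1/reranking",
                         json={"query": query, "docs": hits["docs"]}).json()
    # Ask the LVM to answer using the top video as context (hypothetical).
    out = requests.post("http://lvm:9000/v1/lvm",
                        json={"query": query, "video": best["top"]}).json()
    return out["answer"]
```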
How It Works
VideoQnA runs the GenAIComps microservice pipeline on an Intel Xeon server. Deployment involves building the Docker images, launching the containers with Docker Compose, and starting the services that make up the pipeline: embedding, retriever, reranking, and LVM. At ingestion time, videos are converted into feature vectors using mean aggregation and stored in the VDMS vector store. When a user submits a query, the system performs a similarity search in the vector store to retrieve the best-matching videos. The retrieved videos are then sent to the Large Vision Model (LVM) for inference, which uses them as supplemental context to answer the query.
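The mean-aggregation step can be pictured as follows. This is an illustrative sketch under the assumption that aggregation averages per-frame embeddings; the embed_frame function is a hypothetical stand-in for whatever visual encoder the pipeline actually uses.

```python
# Illustrative sketch of mean aggregation: one fixed-length vector per
# video, obtained by averaging per-frame embeddings. `embed_frame` is a
# hypothetical stand-in for the pipeline's actual visual encoder.
import numpy as np

def embed_frame(frame: np.ndarray) -> np.ndarray:
    # Placeholder encoder: a real pipeline would call a vision model here.
    return frame.mean(axis=(0, 1))  # average color as a toy "embedding"

def video_embedding(frames: list[np.ndarray]) -> np.ndarray:
    """Mean-aggregate per-frame embeddings into a single video vector."""
    per_frame = np.stack([embed_frame(f) for f in frames])
    return per_frame.mean(axis=0)

# Toy example: 8 random RGB frames of size 224x224.
rng = np.random.default_rng(1)
frames = [rng.random((224, 224, 3)) for _ in range(8)]
vec = video_embedding(frames)
print(vec.shape)  # (3,) with the toy encoder; a real encoder gives a larger dim
```

Because every video collapses to a single fixed-length vector, stored videos of any duration can be compared against a query embedding with one similarity computation, which is what makes long-form search tractable.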
Deployment
To deploy on an Intel Xeon processor, see the Xeon deployment guide.
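Once the Compose stack is up, a quick smoke test from Python could look like the following. The host, port (8888), endpoint path (/v1/videoqna), and payload shape are assumptions here; confirm them against the Xeon deployment guide before relying on this.

```python
# Hedged smoke test for a running deployment. The URL and JSON fields
# below are assumptions -- verify them in the deployment guide.
import requests

resp = requests.post(
    "http://localhost:8888/v1/videoqna",
    json={"messages": "What is the man wearing?", "stream": False},
    timeout=120,
)
resp.raise_for_status()
print(resp.json())
```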