# AudioQnA Application
AudioQnA is an example that demonstrates the integration of Generative AI (GenAI) models for performing question answering (QnA) on audio files, with the added ability to speak the responses aloud. The example showcases how to convert audio input to text using Automatic Speech Recognition (ASR), generate answers to user queries with a large language model (LLM), and convert those answers back to speech using Text-to-Speech (TTS).
The AudioQnA example is implemented using the component-level microservices defined in GenAIComps. The flow chart below shows the information flow between different microservices for this example.
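For orientation, here is a minimal Python sketch of one round trip through that ASR -> LLM -> TTS flow. The host, port (`3008`), route (`/v1/audioqna`), and base64 audio payload format are assumptions based on a typical AudioQnA deployment, not guaranteed values; verify them against the gateway configuration in your `compose.yaml`.

```python
# Minimal sketch of querying the AudioQnA gateway with a spoken question.
# Endpoint, port, and payload/response formats are assumptions; check your
# deployment's compose.yaml for the actual values.
import base64

import requests

AUDIOQNA_URL = "http://localhost:3008/v1/audioqna"  # assumed gateway endpoint

# Read a spoken question and base64-encode it for the request payload.
with open("question.wav", "rb") as f:
    audio_b64 = base64.b64encode(f.read()).decode("utf-8")

# The gateway transcribes the audio (ASR), answers with the LLM,
# and returns the spoken answer as base64-encoded audio (TTS).
response = requests.post(
    AUDIOQNA_URL,
    json={"audio": audio_b64, "max_tokens": 64},
    timeout=120,
)
response.raise_for_status()

# Assume the response body is the base64-encoded WAV answer; adjust this
# if your deployment wraps the audio in a JSON object instead.
with open("answer.wav", "wb") as f:
    f.write(base64.b64decode(response.text.strip('"')))
```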
## Deploy AudioQnA Service
The AudioQnA service can be deployed on either Intel Gaudi2 or Intel Xeon Scalable processors.
### Deploy AudioQnA on Gaudi
Refer to the Gaudi Guide for instructions on deploying AudioQnA on Gaudi.
### Deploy AudioQnA on Xeon
Refer to the Xeon Guide for instructions on deploying AudioQnA on Xeon.
## Supported Models
### ASR
The default model is `openai/whisper-small`. All other models in the Whisper family are also supported, such as `openai/whisper-large-v3`, `openai/whisper-medium`, `openai/whisper-base`, and `openai/whisper-tiny`.
To replace the model, edit `compose.yaml` and add a `command` line that passes the name of the model you want to use:
```yaml
services:
  whisper-service:
    ...
    command: --model_name_or_path openai/whisper-tiny
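```

After restarting the service, you can sanity-check the swapped-in model with a short transcription request. The sketch below assumes the Whisper microservice exposes `/v1/asr` on port `7066` and accepts base64-encoded audio; both the route and port are assumptions, so confirm them against the `whisper-service` entry in your `compose.yaml`.

```python
# Minimal sketch to verify the replacement Whisper model. The port (7066),
# route (/v1/asr), and payload/response fields are assumptions; check the
# whisper-service entry in compose.yaml for the real values.
import base64

import requests

with open("sample.wav", "rb") as f:
    audio_b64 = base64.b64encode(f.read()).decode("utf-8")

resp = requests.post(
    "http://localhost:7066/v1/asr",
    json={"audio": audio_b64},
    timeout=60,
)
resp.raise_for_status()
print(resp.json())  # expected to contain the transcribed text
```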
### TTS
The default model is `microsoft/SpeechT5`. Replacing the TTS model is not currently supported; more models under commercial licenses will be added in the future.
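Although the model itself cannot be swapped, the TTS microservice can still be exercised directly for testing. The sketch below assumes a `/v1/tts` route on port `7055` that takes plain text and returns base64-encoded audio; all of these are assumptions to verify against your deployment's `compose.yaml`.

```python
# Minimal sketch for exercising the SpeechT5 TTS microservice directly.
# The port (7055), route (/v1/tts), and payload/response formats are
# assumptions; confirm them in compose.yaml before relying on this.
import base64

import requests

resp = requests.post(
    "http://localhost:7055/v1/tts",
    json={"text": "Hello, this is AudioQnA speaking."},
    timeout=60,
)
resp.raise_for_status()

# Assume the response body carries base64-encoded WAV audio.
with open("tts_out.wav", "wb") as f:
    f.write(base64.b64decode(resp.text.strip('"')))
```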