# Deploying the Language Detection Service

This document provides a comprehensive guide to deploying the language-detection microservice pipeline on Intel platforms.

This guide covers three deployment methods:

- [πŸš€ 1. Quick Start with Docker Compose](#quick-start-with-docker-compose): The recommended method for a fast and easy setup.
- [πŸš€ 2. Manual Step-by-Step Deployment (Advanced)](#manual-step-by-step-deployment-advanced): For users who want to build and run each container individually.
- [πŸš€ 3. Start Microservice with Python](#start-microservice-with-python): For users who prefer to run the language-detection microservice directly with Python scripts.

## πŸš€ 1. Quick Start with Docker Compose

This method uses Docker Compose to start all necessary services with a single command. It is the fastest and easiest way to get the service running.

### 1.1. Access the Code

Clone the repository and navigate to the deployment directory:

```bash
git clone https://github.com/opea-project/GenAIComps.git
cd GenAIComps/comps/language_detection/deployment/docker_compose
```

### 1.2. Deploy the Service

Start the service in detached mode:

```bash
docker compose -f compose.yaml up language-detection -d
```

### 1.3. Validate the Service

#### 1.3.1. Pipeline Mode

The input request consists of the answer that has to be translated and a prompt containing the user's query.

**Example Input**

```bash
curl -X POST -H "Content-Type: application/json" -d @- http://localhost:8069/v1/language_detection <
```
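Since the example request body is not shown in full above, the following is a minimal sketch of how such a JSON payload could be constructed and inspected before sending it to the endpoint. The field names (`text` and `prompt`) and the sample values are assumptions for illustration only; consult the service's API schema for the actual keys.

```python
import json

# Hypothetical request payload for the pipeline mode of the
# language-detection service. Field names are assumed, not confirmed:
#   "text"   - the answer that has to be translated
#   "prompt" - the prompt containing the user's query
payload = {
    "text": "Hola, estoy bien. ¿En qué puedo ayudarte hoy?",
    "prompt": "### Question: How are you today?",
}

# Serialize to JSON, e.g. to pipe into the curl command shown in this guide.
body = json.dumps(payload)
print(body)
```

A payload like this could be saved to a file (or piped via stdin) and submitted with the `curl` command from the validation step.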