PolyLingua

A production-ready translation service built with OPEA (Open Platform for Enterprise AI) components, featuring a modern Next.js UI and a microservices architecture.

Table of Contents

  1. Components

  2. Quick Start

  3. Configuration

  4. Development

  5. Operations

  6. Troubleshooting

  7. Resources

  8. Support

Components

  1. vLLM Service - High-performance LLM inference engine for model serving

  2. LLM Microservice - OPEA wrapper providing standardized API

  3. PolyLingua Megaservice - Orchestrator that formats prompts and routes requests

  4. UI Service - Next.js 14 frontend with React and TypeScript

  5. Nginx - Reverse proxy for unified access
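
A translation request flows through these components as follows (simplified view; ports taken from the default configuration described below):

Browser ──► Nginx (:80) ──┬──► UI Service (Next.js)
                          └──► PolyLingua Megaservice (:8888)
                                  └──► LLM Microservice (:9000)
                                          └──► vLLM Service (:8028)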

🚀 Quick Start

Prerequisites

  • Docker and Docker Compose

  • Git

  • HuggingFace Account (for model access)

  • 8GB+ RAM recommended

  • ~10GB disk space for models

1. Clone and Setup

# Clone the repository, then enter the project directory
git clone <repository-url>
cd PolyLingua

# Configure environment variables
./set_env.sh

You’ll be prompted for:

  • HuggingFace API Token - Get from https://huggingface.co/settings/tokens

  • Model ID - Default: swiss-ai/Apertus-8B-Instruct-2509 (translation-optimized model)

  • Host IP - Your server’s IP address

  • Ports and proxy settings

2. Build Images

./deploy/build.sh

This builds:

  • Translation backend service

  • Next.js UI service

3. Start Services

./deploy/start.sh

Wait for the services to initialize (~2-5 minutes on the first run while models download).

4. Access the Application

  • Web UI: http://localhost:80

  • API Endpoint: http://localhost:8888/v1/translation

5. Test the Service

./deploy/test.sh

Or test manually:

curl -X POST http://localhost:8888/v1/translation \
  -H "Content-Type: application/json" \
  -d '{
    "language_from": "English",
    "language_to": "Spanish",
    "source_language": "Hello, how are you today?"
  }'
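
Or, equivalently, in Python using only the standard library (a sketch; assumes the megaservice is reachable at http://localhost:8888):

# smoke_test.py - hypothetical helper script, not part of the repo
import json
import urllib.request

payload = {
    "language_from": "English",
    "language_to": "Spanish",
    "source_language": "Hello, how are you today?",
}

req = urllib.request.Request(
    "http://localhost:8888/v1/translation",
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
    method="POST",
)

with urllib.request.urlopen(req, timeout=120) as resp:
    print(json.loads(resp.read()))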

📋 Configuration

Environment Variables

Key variables in .env:

Variable       Description                    Default
-----------    ----------------------------   ---------------------------------
HF_TOKEN       HuggingFace API token          Required
LLM_MODEL_ID   Model to use for translation   swiss-ai/Apertus-8B-Instruct-2509
MODEL_CACHE    Directory for model storage    ./data
host_ip        Server IP address              localhost
NGINX_PORT     External port for web access   80

See .env.example for full configuration options.
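
For reference, this is roughly how a Python service can consume these variables with the documented defaults (a sketch using the names from the table above; the actual polylingua.py may read its configuration differently):

# Sketch only - variable names and defaults from the table above
import os

HF_TOKEN = os.environ["HF_TOKEN"]  # required; raises KeyError if unset
LLM_MODEL_ID = os.environ.get("LLM_MODEL_ID", "swiss-ai/Apertus-8B-Instruct-2509")
MODEL_CACHE = os.environ.get("MODEL_CACHE", "./data")
HOST_IP = os.environ.get("host_ip", "localhost")
NGINX_PORT = int(os.environ.get("NGINX_PORT", "80"))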

Supported Models

The service works with any HuggingFace text generation model. Recommended models:

  • swiss-ai/Apertus-8B-Instruct-2509 - Multilingual translation (default)

  • haoranxu/ALMA-7B - Specialized translation model

🛠️ Development

Project Structure

PolyLingua/
├── polylingua.py         # PolyLingua megaservice (backend orchestrator)
├── requirements.txt      # Python dependencies
├── Dockerfile            # Backend container definition
├── docker-compose.yaml   # Multi-service orchestration
├── set_env.sh            # Environment setup script
├── .env.example          # Environment template
├── ui/                   # Next.js frontend
│   ├── app/              # Next.js app directory
│   ├── components/       # React components
│   ├── Dockerfile        # UI container definition
│   └── package.json      # Node dependencies
└── deploy/               # Deployment scripts
    ├── nginx.conf        # Nginx configuration
    ├── build.sh          # Image build script
    ├── start.sh          # Service startup script
    ├── stop.sh           # Service shutdown script
    └── test.sh           # API testing script

Running Locally (Development)

Backend:

# Install dependencies
pip install -r requirements.txt

# Set environment variables
export LLM_SERVICE_HOST_IP=localhost
export LLM_SERVICE_PORT=9000
export MEGA_SERVICE_PORT=8888

# Run service
python polylingua.py

Frontend:

cd ui
npm install
npm run dev

API Reference

POST /v1/translation

Translate text between languages.

Request:

{
  "language_from": "English",
  "language_to": "Spanish",
  "source_language": "Your text to translate"
}

Note: source_language carries the text to translate; language_from and language_to name the source and target languages.

Response:

{
  "model": "polylingua",
  "choices": [
    {
      "index": 0,
      "message": {
        "role": "assistant",
        "content": "Translated text here"
      },
      "finish_reason": "stop"
    }
  ],
  "usage": {}
}
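
The translated text lives at choices[0].message.content; a minimal extraction helper (a sketch based on the response shape documented above; translated_text is a hypothetical name, not part of the repo):

# Pull the translated text out of a /v1/translation response dict
def translated_text(response: dict) -> str:
    # Index into the documented response shape
    return response["choices"][0]["message"]["content"]

# e.g. translated_text(resp) -> "Translated text here"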

🔧 Operations

View Logs

# All services
docker compose logs -f

# Specific service
docker compose logs -f polylingua-xeon-backend-server
docker compose logs -f polylingua-ui-server

Stop Services

./deploy/stop.sh

Update Services

# Rebuild images
./deploy/build.sh

# Restart services
docker compose down
./deploy/start.sh

Clean Up

# Stop and remove containers
docker compose down

# Remove volumes (including model cache)
docker compose down -v

🐛 Troubleshooting

Service won’t start

  1. Check if ports are available:

    sudo lsof -i :80,8888,9000,8028,5173
    
  2. Verify environment variables:

    cat .env
    
  3. Check service health:

    docker compose ps
    docker compose logs
    

Model download fails

  • Ensure HF_TOKEN is set correctly

  • Check internet connection

  • Verify model ID exists on HuggingFace

  • Check disk space in MODEL_CACHE directory

Translation errors

  • Wait for the vLLM service to fully initialize (check logs, or use the polling sketch after this list)

  • Verify LLM service is healthy: curl http://localhost:9000/v1/health

  • Check vLLM service: curl http://localhost:8028/health
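
A readiness-polling sketch (health endpoints taken from the bullets above; adjust hosts and ports to match your .env):

# wait_ready.py - hypothetical helper; polls until both services answer
import time
import urllib.request

ENDPOINTS = [
    "http://localhost:9000/v1/health",  # LLM microservice
    "http://localhost:8028/health",     # vLLM service
]

for url in ENDPOINTS:
    while True:
        try:
            with urllib.request.urlopen(url, timeout=5) as resp:
                if resp.status == 200:
                    print(f"ready: {url}")
                    break
        except OSError:
            pass  # not reachable yet (connection refused, timeout, ...)
        time.sleep(5)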

UI can’t connect to backend

  • Verify BACKEND_SERVICE_ENDPOINT in .env

  • Check if backend is running: docker compose ps

  • Test the API directly using the curl example from the Quick Start section

🔗 Resources

  • OPEA Project: https://github.com/opea-project

  • OPEA Documentation: https://opea-project.github.io

📧 Support

For issues and questions:

  • Open an issue on GitHub

  • Check existing issues for solutions

  • Review OPEA documentation


Built with OPEA - Open Platform for Enterprise AI 🚀