PolyLingua¶
A production-ready translation service built with OPEA (Open Platform for Enterprise AI) components, featuring a modern Next.js UI and microservices architecture.
Components¶
vLLM Service - High-performance LLM inference engine for model serving
LLM Microservice - OPEA wrapper providing standardized API
PolyLingua Megaservice - Orchestrator that formats prompts and routes requests
UI Service - Next.js 14 frontend with React and TypeScript
Nginx - Reverse proxy for unified access
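In outline, the megaservice's job is to turn a translation request into an LLM prompt and forward it to the LLM microservice. A minimal sketch of that step, with a hypothetical prompt template (the actual service defines its own wording):

```python
def format_translation_prompt(language_from: str, language_to: str, source_text: str) -> str:
    """Build an instruction prompt for the LLM microservice.

    The template below is an illustrative assumption; the real
    megaservice defines its own prompt wording.
    """
    return (
        f"Translate the following text from {language_from} to {language_to}.\n"
        f"Text: {source_text}\n"
        f"Translation:"
    )
```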
🚀 Quick Start¶
Prerequisites¶
Docker and Docker Compose
Git
HuggingFace Account (for model access)
8GB+ RAM recommended
~10GB disk space for models
1. Clone and Setup¶
cd PolyLingua
# Configure environment variables
./set_env.sh
You’ll be prompted for:
HuggingFace API Token - Get from https://huggingface.co/settings/tokens
Model ID - Default: swiss-ai/Apertus-8B-Instruct-2509 (translation-optimized model)
Host IP - Your server's IP address
Ports and proxy settings
2. Build Images¶
./deploy/build.sh
This builds:
Translation backend service
Next.js UI service
3. Start Services¶
./deploy/start.sh
Wait for services to initialize (~2-5 minutes for first run as models download).
4. Access the Application¶
Web UI: http://localhost:80
API Endpoint: http://localhost:8888/v1/translation
5. Test the Service¶
./deploy/test.sh
Or test manually:
curl -X POST http://localhost:8888/v1/translation \
-H "Content-Type: application/json" \
-d '{
"language_from": "English",
"language_to": "Spanish",
"source_language": "Hello, how are you today?"
}'
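The same request can be sent from Python using only the standard library; the endpoint, field names, and response shape below are taken from this document's examples:

```python
import json
import urllib.request

def build_request(text, language_from, language_to):
    # Note: per the API schema, the source text travels in "source_language".
    return {
        "language_from": language_from,
        "language_to": language_to,
        "source_language": text,
    }

def translate(text, language_from="English", language_to="Spanish",
              endpoint="http://localhost:8888/v1/translation"):
    payload = json.dumps(build_request(text, language_from, language_to)).encode()
    req = urllib.request.Request(
        endpoint, data=payload, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    # The response follows the OpenAI-style chat format (see API Reference below).
    return body["choices"][0]["message"]["content"]
```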
📋 Configuration¶
Environment Variables¶
Key variables in .env:
| Variable | Description | Default |
|---|---|---|
| HF_TOKEN | HuggingFace API token | Required |
| Model ID variable | Model to use for translation | swiss-ai/Apertus-8B-Instruct-2509 |
| MODEL_CACHE | Directory for model storage | See .env.example |
| Host IP variable | Server IP address | See .env.example |
| Web port variable | External port for web access | 80 |
See .env.example for full configuration options.
Supported Models¶
The service works with any HuggingFace text generation model. Recommended models:
swiss-ai/Apertus-8B-Instruct-2509 - Multilingual translation (default)
haoranxu/ALMA-7B - Specialized translation model
🛠️ Development¶
Project Structure¶
PolyLingua/
├── polylingua.py # Backend polylingua service
├── requirements.txt # Python dependencies
├── Dockerfile # Backend container definition
├── docker-compose.yaml # Multi-service orchestration
├── set_env.sh # Environment setup script
├── .env.example # Environment template
├── ui/ # Next.js frontend
│ ├── app/ # Next.js app directory
│ ├── components/ # React components
│ ├── Dockerfile # UI container definition
│ └── package.json # Node dependencies
└── deploy/ # Deployment scripts
├── nginx.conf # Nginx configuration
├── build.sh # Image build script
├── start.sh # Service startup script
├── stop.sh # Service shutdown script
└── test.sh # API testing script
Running Locally (Development)¶
Backend:
# Install dependencies
pip install -r requirements.txt
# Set environment variables
export LLM_SERVICE_HOST_IP=localhost
export LLM_SERVICE_PORT=9000
export MEGA_SERVICE_PORT=8888
# Run service
python polylingua.py
Frontend:
cd ui
npm install
npm run dev
API Reference¶
POST /v1/translation¶
Translate text between languages. Note that the source text itself is passed in the source_language field.
Request:
{
"language_from": "English",
"language_to": "Spanish",
"source_language": "Your text to translate"
}
Response:
{
"model": "polylingua",
"choices": [
{
"index": 0,
"message": {
"role": "assistant",
"content": "Translated text here"
},
"finish_reason": "stop"
}
],
"usage": {}
}
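The translated text sits inside the first choice's message. Pulling it out of the documented response shape in Python:

```python
# Sample response body, matching the structure documented above.
response = {
    "model": "polylingua",
    "choices": [
        {
            "index": 0,
            "message": {"role": "assistant", "content": "Translated text here"},
            "finish_reason": "stop",
        }
    ],
    "usage": {},
}

# The translation is the content of the first choice's message.
translated = response["choices"][0]["message"]["content"]
print(translated)  # Translated text here
```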
🔧 Operations¶
View Logs¶
# All services
docker compose logs -f
# Specific service
docker compose logs -f polylingua-xeon-backend-server
docker compose logs -f polylingua-ui-server
Stop Services¶
./deploy/stop.sh
Update Services¶
# Rebuild images
./deploy/build.sh
# Restart services
docker compose down
./deploy/start.sh
Clean Up¶
# Stop and remove containers
docker compose down
# Remove volumes (including model cache)
docker compose down -v
🐛 Troubleshooting¶
Service won’t start¶
Check if ports are available:
sudo lsof -i :80,8888,9000,8028,5173
Verify environment variables:
cat .env
Check service health:
docker compose ps
docker compose logs
Model download fails¶
Ensure HF_TOKEN is set correctly
Check internet connection
Verify model ID exists on HuggingFace
Check disk space in MODEL_CACHE directory
Translation errors¶
Wait for vLLM service to fully initialize (check logs)
Verify LLM service is healthy:
curl http://localhost:9000/v1/health
Check vLLM service:
curl http://localhost:8028/health
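These two health checks can be automated with a small readiness poll (stdlib only; URLs taken from the curl commands above, retry timings are arbitrary):

```python
import time
import urllib.error
import urllib.request

# Endpoints from the troubleshooting steps above; ports assume the default config.
SERVICES = {
    "llm": "http://localhost:9000/v1/health",
    "vllm": "http://localhost:8028/health",
}

def is_ready(url, timeout=2.0):
    """Return True if the service answers with HTTP 200."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.status == 200
    except (urllib.error.URLError, OSError):
        return False

def wait_for(url, attempts=30, delay=10.0):
    """Poll until the service is ready (vLLM can take minutes on first start)."""
    for _ in range(attempts):
        if is_ready(url):
            return True
        time.sleep(delay)
    return False
```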
UI can’t connect to backend¶
Verify BACKEND_SERVICE_ENDPOINT in .env
Check if backend is running:
docker compose ps
Test API directly:
curl http://localhost:8888/v1/translation
📧 Support¶
For issues and questions:
Open an issue on GitHub
Check existing issues for solutions
Review OPEA documentation
Built with OPEA - Open Platform for Enterprise AI 🚀