Mirror of https://github.com/openai/whisper.git (synced 2025-11-28 08:11:11 +00:00)
Backend Updates:
- Add lazy loading for the Whisper model (faster startup)
- Use environment variables for port and config
- Add root endpoint for health checking
- Configure CORS for production
- Add tempfile support for uploads
- Update to support the gunicorn production server
- Add Procfile for Heroku/Railway compatibility

Frontend Updates:
- Optimize Vite build configuration
- Add production build optimizations
- Enable minification and code splitting
- Configure preview server for production

Configuration:
- Add .env.example files for both frontend and backend
- Create railway.toml for Railway deployment
- Add Procfile for process management
- Set up environment variable templates

Documentation:
- Create comprehensive RAILWAY_DEPLOYMENT.md guide
- Include step-by-step deployment instructions
- Add troubleshooting section
- Include cost breakdown
- Add monitoring and maintenance guide

Dependencies:
- Add gunicorn for production WSGI server

Ready for Railway deployment with:
- Free $5/month credit
- Automatic scaling
- 24/7 uptime
- Custom domain support (optional)
22 lines
366 B
Plaintext
# Backend environment variables
# Copy this to .env and update with your values

# Flask environment
FLASK_ENV=production
FLASK_DEBUG=False

# Server port (Railway sets PORT automatically)
PORT=5000

# CORS settings
FLASK_CORS_ORIGINS=*

# Whisper model settings
WHISPER_MODEL=medium

# File upload settings
MAX_FILE_SIZE=500000000

# Python path
PYTHONUNBUFFERED=1
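A backend might read these variables with safe defaults like the sketch below. The variable names match the .env file above; the parsing itself (defaults, `int()` conversion, splitting `FLASK_CORS_ORIGINS` on commas) is an assumed pattern, not code taken from the repository.

```python
import os

# Read the deployment settings defined in .env, falling back to the
# same defaults the .env.example documents when a variable is unset.
PORT = int(os.environ.get("PORT", "5000"))
MAX_FILE_SIZE = int(os.environ.get("MAX_FILE_SIZE", "500000000"))  # bytes
# "*" allows all origins; a comma-separated list restricts CORS.
CORS_ORIGINS = os.environ.get("FLASK_CORS_ORIGINS", "*").split(",")

print(PORT, MAX_FILE_SIZE, CORS_ORIGINS)
```

Railway injects `PORT` automatically, so the `5000` default only matters for local runs.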