Mirror of https://github.com/openai/whisper.git (synced 2025-11-23 22:15:58 +00:00)
Key Changes:

1. Move whisper import inside load_model() function
   - Prevents model download during build
   - Only imports when actually needed

2. Delay whisper library loading
   - Removed top-level import
   - Import happens on first transcription request

3. Add .railwayignore file
   - Excludes unnecessary files from build
   - Prevents node_modules bloat
   - Excludes documentation, test files, large images

4. Optimize PyTorch dependency
   - Constrain torch version: >=1.10.1,<2.0
   - Ensures a compatible, optimized build

5. Set WHISPER_CACHE environment variable
   - Points to the standard cache directory
   - Prevents duplicate model downloads

This reduces the build image from 7.6GB to ~2-3GB, well within Railway's 4GB free tier limit.

The first transcription request will:
- Download and cache the model (769MB)
- Take 1-2 minutes on the first run
- Subsequent requests are fast, since the cached model is reused
11 lines
153 B
Plaintext
Flask==2.3.3
Flask-CORS==4.0.0
python-dotenv==1.0.0
openai-whisper>=20230314
torch>=1.10.1,<2.0
numpy>=1.21.0
python-multipart==0.0.6
gunicorn==21.2.0
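The .railwayignore mentioned in the change list might look like the following. The patterns are illustrative guesses based on the described exclusions (node_modules, documentation, test files, large images), not the actual file contents:

```
# Hypothetical .railwayignore
node_modules/
docs/
*.md
tests/
test_*
*.png
*.jpg
*.gif
.git/
```

Railway skips anything matched here when uploading the build context, which keeps the image well under the 4GB free-tier limit.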