Mirror of https://github.com/openai/whisper.git, synced 2025-11-26 15:35:57 +00:00
Key Changes:

1. Move the whisper import inside the load_model() function
   - Prevents the model download from running during the build
   - The import happens only when actually needed

2. Delay whisper library loading
   - Removed the top-level import
   - The import now happens on the first transcription request

3. Add a .railwayignore file
   - Excludes unnecessary files from the build
   - Prevents node_modules bloat
   - Excludes documentation, test files, and large images

4. Optimize the PyTorch dependency
   - Constrain the torch version: >=1.10.1,<2.0
   - Ensures a compatible, optimized build

5. Set the WHISPER_CACHE environment variable
   - Points to the standard cache directory
   - Prevents duplicate model downloads

These changes reduce the build image from 7.6GB to ~2-3GB, well within Railway's 4GB free-tier limit.

The first transcription request will:
- Download and cache the model (769MB)
- Take 1-2 minutes on the first run
- Subsequent requests are instant
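A .railwayignore for change 3 might look like the following; the exact entries are assumptions, since the description only names node_modules, documentation, test files, and large images:

```
node_modules/
docs/
tests/
*.md
*.png
*.jpg
```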
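Change 4 is a one-line constraint in requirements.txt; the surrounding entries are illustrative:

```
openai-whisper
torch>=1.10.1,<2.0
```

Capping torch below 2.0 keeps pip from resolving to the much larger 2.x wheels during the build.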
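The lazy-import pattern behind changes 1 and 2 can be sketched as follows. The real whisper call is shown only in a comment, and a stdlib module stands in so the sketch runs anywhere; caching via functools.lru_cache is an assumption (the change description does not say how repeated calls are deduplicated):

```python
import functools
import importlib

@functools.lru_cache(maxsize=1)
def load_model(name: str = "decimal"):
    # In the real service this body would be:
    #     import whisper                    # deferred heavy import
    #     return whisper.load_model("small")
    # A stdlib module stands in here so the sketch is self-contained.
    return importlib.import_module(name)

# Nothing heavy is imported at module load time; the first call does
# the import, and later calls reuse the cached result instead of
# re-importing (or, in the real app, re-downloading the model).
model = load_model()
assert load_model() is model
```

Because the import is inside the function, a build step that merely imports the app module never pulls in torch or triggers a model download.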
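For change 5, note that whisper's load_model() accepts a download_root argument; wiring the WHISPER_CACHE environment variable through to it is app-specific, so this helper is a hypothetical sketch rather than a whisper feature:

```python
import os

def whisper_cache_dir() -> str:
    # Hypothetical helper: honor WHISPER_CACHE when set, otherwise fall
    # back to whisper's default cache location (~/.cache/whisper).
    return os.environ.get("WHISPER_CACHE",
                          os.path.expanduser("~/.cache/whisper"))

# Usage (assumes the openai-whisper package is installed):
# model = whisper.load_model("small", download_root=whisper_cache_dir())
```

Pointing every process at the same directory is what prevents the duplicate model downloads mentioned above.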