Completely containerized FFmpeg processing API that starts with one command. No database setup, no Redis configuration, no manual steps - everything is automated and production-ready out of the box.
RendDiff is proud to build upon FFmpeg, the incredible open-source project that has powered video technology for decades. We're deeply grateful to the FFmpeg community for their tireless work in creating the most comprehensive multimedia framework in existence.
This project is our contribution back to the community — making FFmpeg's power accessible through a simple, scalable API with zero configuration.
FFmpeg is incredibly powerful but requires deep expertise to use effectively. RendDiff bridges this gap by providing a clean REST API that handles all the complexity — from infrastructure setup to process management — with absolutely zero configuration required.
Start with one Docker command. PostgreSQL, Redis, migrations, health checks - everything is automated and production-ready instantly.
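As a minimal sketch, the one-command startup generally looks like this; the repository URL and compose layout here are assumptions, while the port and health endpoint match the examples later on this page.

```bash
# Bring up the full stack: API, Celery workers, PostgreSQL, Redis, and monitoring.
# Repository URL and service layout are illustrative assumptions.
git clone https://github.com/your-org/renddiff.git
cd renddiff
docker-compose up -d

# Confirm the services are healthy
docker-compose ps
curl http://localhost:8080/api/v1/health
```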
Celery workers with a Redis queue handle massive workloads. Real-time progress via SSE, comprehensive job tracking, and auto-scaling support.
Local filesystem or S3-compatible storage (AWS, MinIO, DigitalOcean). Seamless switching between backends with no code changes.
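As a hedged illustration of that switch, backend selection would typically be a matter of environment configuration; the variable names below are assumptions for the sketch, not documented RendDiff settings.

```bash
# Local filesystem backend (variable names are illustrative only)
STORAGE_BACKEND=local
LOCAL_STORAGE_PATH=/data/media

# S3-compatible backend (AWS S3, MinIO, DigitalOcean Spaces)
STORAGE_BACKEND=s3
S3_ENDPOINT_URL=https://minio.example.com
S3_BUCKET=renddiff-media
S3_ACCESS_KEY=changeme
S3_SECRET_KEY=changeme
```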
Built-in VMAF, PSNR, and SSIM analysis. Make data-driven decisions about encoding quality vs. file size trade-offs.
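For context, these metrics come from FFmpeg's own filters; the commands below show roughly how VMAF, PSNR, and SSIM are computed with plain FFmpeg (an FFmpeg build with libvmaf is assumed), independent of RendDiff's API surface.

```bash
# VMAF: compare the encoded file (first input) against the reference (second input)
ffmpeg -i encoded.mp4 -i reference.mp4 -lavfi "[0:v][1:v]libvmaf" -f null -

# PSNR and SSIM use the same two-input pattern with their respective filters
ffmpeg -i encoded.mp4 -i reference.mp4 -lavfi "[0:v][1:v]psnr" -f null -
ffmpeg -i encoded.mp4 -i reference.mp4 -lavfi "[0:v][1:v]ssim" -f null -
```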
API key authentication, IP whitelisting, rate limiting, HSTS, CSP headers. Production-grade security out of the box.
Prometheus metrics, Grafana dashboards, structured JSON logs. Monitor performance, track errors, optimize workflows.
FFmpeg remains the core encoder — GenAI provides intelligent optimization and enhancement without replacing FFmpeg's proven encoding engine.
⚡ Requires an NVIDIA GPU and a GPU-enabled Docker runtime (NVIDIA Container Toolkit)
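Before enabling the GenAI services, it's worth confirming that Docker can actually see the GPU; the CUDA image tag below is only an example.

```bash
# Verify GPU visibility from inside a container (requires the NVIDIA Container Toolkit)
docker run --rm --gpus all nvidia/cuda:12.3.2-base-ubuntu22.04 nvidia-smi
```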
Stop wrestling with FFmpeg command lines. Start shipping features. RendDiff handles the complexity so you can focus on building great applications.
```bash
# Convert video with zero configuration needed
curl -X POST http://localhost:8080/api/v1/convert \
  -H "X-API-Key: your-api-key" \
  -H "Content-Type: application/json" \
  -d '{
    "input_path": "/input/video.mp4",
    "output_path": "/output/converted.mp4",
    "operations": [
      {"type": "transcode", "codec": "h264", "crf": 23},
      {"type": "scale", "width": 1920, "height": 1080}
    ]
  }'

# Response with job tracking
{
  "job_id": "550e8400-e29b-41d4-a716-446655440000",
  "status": "processing",
  "progress": 0.0
}
```
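With the returned job_id, progress can be streamed over SSE using the endpoint shown in the monitoring examples below; the one-shot status route is an assumption about the API's naming and may differ.

```bash
# Stream live progress events for the job (Server-Sent Events)
curl -N http://localhost:8080/api/v1/jobs/550e8400-e29b-41d4-a716-446655440000/progress

# One-shot status check (route name assumed; consult the API docs for the exact path)
curl -H "X-API-Key: your-api-key" \
  http://localhost:8080/api/v1/jobs/550e8400-e29b-41d4-a716-446655440000
```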
```bash
# AI-powered smart encoding pipeline
curl -X POST http://localhost:8080/api/genai/v1/pipeline/smart-encode \
  -H "X-API-Key: your-api-key" \
  -H "Content-Type: application/json" \
  -d '{
    "video_path": "/input/video.mp4",
    "quality_preset": "high",
    "optimization_level": 2
  }'

# AI upscaling with Real-ESRGAN
curl -X POST http://localhost:8080/api/genai/v1/enhance/upscale \
  -H "X-API-Key: your-api-key" \
  -H "Content-Type: application/json" \
  -d '{
    "video_path": "/input/video.mp4",
    "scale_factor": 4,
    "model_variant": "RealESRGAN_x4plus"
  }'
```
```bash
# Scale API instances
docker-compose up -d --scale api=3

# Scale workers for heavy processing
docker-compose up -d --scale worker=8

# Scale with AI workers (GPU required)
docker-compose -f docker-compose.genai.yml up -d --scale worker-genai=4

# Monitor scaling metrics
curl http://localhost:3000/api/dashboards/worker-status
```
```bash
# Health check endpoint
curl http://localhost:8080/api/v1/health

# Prometheus metrics
curl http://localhost:9090/metrics

# Real-time job progress via SSE
curl -N http://localhost:8080/api/v1/jobs/{job_id}/progress

# Grafana dashboards (admin/admin)
# - API Performance: Request metrics and response times
# - Worker Status: Celery worker health and queue status
# - System Resources: CPU, memory, and disk usage
# - GenAI Metrics: AI model performance and GPU utilization
```
Whether it's code, documentation, bug reports, or feature ideas — every contribution helps make RendDiff better for everyone.
⭐ Star & Fork on GitHub