Container-based services with auto-scaling, health checks, and zero-downtime deployments.
Every service on RaidFrame runs as a container. You bring a Dockerfile or let RaidFrame detect your stack automatically. Services are deployed with zero-downtime rolling updates and scale automatically based on load.
Web services receive HTTP/HTTPS traffic through the load balancer and are assigned a public URL.

```sh
rf services create api --type web --port 8080
```
Workers are background processes with no inbound traffic; they connect to databases, queues, and other services over the private network.

```sh
rf services create processor --type worker
```
Cron services run on a schedule, ideal for cleanup tasks, report generation, and data sync.

```sh
rf services create daily-report --type cron --schedule "0 9 * * *"  # every day at 09:00
```
Static sites are served directly from the global CDN, with no container runtime.

```sh
rf services create docs --type static --build-dir ./out
```
| Plan | CPU | Memory | Storage | Price |
|---|---|---|---|---|
| Starter | Shared | 512 MB | 1 GB | Free |
| Standard | 1 vCPU | 1 GB | 5 GB | $7/mo |
| Pro | 2 vCPU | 4 GB | 20 GB | $25/mo |
| Pro Plus | 4 vCPU | 8 GB | 40 GB | $50/mo |
| Performance | 8 vCPU | 16 GB | 80 GB | $100/mo |
| Performance XL | 16 vCPU | 32 GB | 160 GB | $200/mo |
| Dedicated | 32 vCPU | 64 GB | 320 GB | $400/mo |
Change a service's plan at any time:

```sh
rf services scale api --resources pro
```
Every web service has a health check. RaidFrame sends HTTP requests to your health endpoint and only routes traffic to healthy instances.
```yaml
services:
  web:
    health_check:
      path: /api/health
      interval: 10s
      timeout: 5s
      retries: 3
      start_period: 30s
```
If an instance fails health checks, it's replaced automatically. During deployments, old instances stay live until new ones pass health checks.
Deployments use a rolling update strategy by default:
```sh
rf deploy --watch
```

```
Building... ████████████████████ 100% (28s)
Starting... 2/2 instances ready
Health check... ✓ All passing
Draining old... Waiting for connections to close (30s)
✓ Deployment complete (v42)
```
Run one-time setup commands before your main process starts. Common use case: database migrations.
```yaml
services:
  web:
    init:
      - command: npx prisma migrate deploy
        timeout: 120s
      - command: node scripts/seed-cache.js
    command: node server.js
```
Init containers run sequentially. If any fails, the deployment is rolled back.
Run multiple processes in a single service when they share the same codebase and scaling characteristics:
```yaml
services:
  web:
    processes:
      server: node server.js
      scheduler: node scheduler.js
```
Both processes share the same container, environment, and resources.
Connect directly to a running container:
```sh
rf ssh web
```

```
Connecting to web (instance i-abc123, us-east-1)...
root@web-abc123:/app#
```
Run a one-off command:
```sh
rf exec web "npx prisma studio"
rf exec web "python manage.py shell"
```
Port forward to your local machine:
```sh
rf port-forward web 9229:9229
```
Now connect your local debugger (VS Code, Chrome DevTools) to localhost:9229 to debug the production process directly.
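For a Node service, the forwarded port is only useful if the process was started with the inspector enabled. A sketch of the service config (the `command` key appears in the init example above; `--inspect` is Node's standard inspector flag):

```yaml
services:
  web:
    # Bind the V8 inspector so the forwarded port 9229 reaches it.
    # Enable this only while actively debugging.
    command: node --inspect=0.0.0.0:9229 server.js
```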
Set CPU and memory limits to prevent a single service from consuming all available resources:
```yaml
services:
  api:
    resources:
      cpu: 2
      memory: 4GB
      memory_swap: 8GB
```
If a service exceeds its memory limit, it's OOM-killed and restarted automatically. The event appears in your logs and triggers an alert.
Restart a whole service, a single instance, or everything:

```sh
rf services restart api                       # all instances of one service
rf services restart api --instance i-abc123   # a single instance
rf services restart --all                     # every service
```
Services automatically restart on crash with exponential backoff: 1s, 2s, 4s, 8s, up to 5 minutes.
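The restart schedule above is a simple doubling curve. A sketch (the 5-minute cap comes from the text; the exact formula is an assumption about how the sequence is generated):

```javascript
// Crash-restart backoff: 1s, 2s, 4s, 8s, ... capped at 300s (5 minutes).
function backoffSeconds(crashCount) {
  return Math.min(2 ** crashCount, 300);
}
```

The cap keeps a persistently crashing service from backing off forever, so a fix deployed later still gets picked up within five minutes.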