Compute & Services

Container-based services with auto-scaling, health checks, and zero-downtime deployments.

Overview

Every service on RaidFrame runs as a container. You bring a Dockerfile or let RaidFrame detect your stack automatically. Services are deployed with zero-downtime rolling updates and scale automatically based on load.

Service Types

Web Services

Receive HTTP/HTTPS traffic through the load balancer. Assigned a public URL.

rf services create api --type web --port 8080

Worker Services

Background processes with no inbound traffic. Connect to databases, queues, and other services over the private network.

rf services create processor --type worker

Cron Services

Run on a schedule. Ideal for cleanup tasks, report generation, and data sync.

rf services create daily-report --type cron --schedule "0 9 * * *"
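The --schedule flag takes a standard five-field cron expression (minute, hour, day of month, month, day of week), so "0 9 * * *" runs every day at 09:00. A minimal sketch of how the fields map (cronFields is illustrative, not part of the rf CLI):

```javascript
// Standard five-field cron syntax: minute hour day-of-month month day-of-week.
// "0 9 * * *" => minute 0, hour 9, every day => daily at 09:00.
function cronFields(expr) {
  const [minute, hour, dayOfMonth, month, dayOfWeek] = expr.trim().split(/\s+/);
  return { minute, hour, dayOfMonth, month, dayOfWeek };
}
```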

Static Services

Static files served directly from the global CDN. No container runtime.

rf services create docs --type static --build-dir ./out

Instance Types

Plan            CPU       Memory   Storage   Price
Starter         Shared    512 MB   1 GB      Free
Standard        1 vCPU    1 GB     5 GB      $7/mo
Pro             2 vCPU    4 GB     20 GB     $25/mo
Pro Plus        4 vCPU    8 GB     40 GB     $50/mo
Performance     8 vCPU    16 GB    80 GB     $100/mo
Performance XL  16 vCPU   32 GB    160 GB    $200/mo
Dedicated       32 vCPU   64 GB    320 GB    $400/mo

rf services scale api --resources pro

Health Checks

Every web service has a health check. RaidFrame sends HTTP requests to your health endpoint and only routes traffic to healthy instances.

services:
  web:
    health_check:
      path: /api/health
      interval: 10s
      timeout: 5s
      retries: 3
      start_period: 30s

If an instance fails health checks, it's replaced automatically. During deployments, old instances stay live until new ones pass health checks.

Zero-Downtime Deployments

Deployments use a rolling update strategy by default:

  1. New instances are started with the new code
  2. Health checks must pass before receiving traffic
  3. Traffic gradually shifts from old to new instances
  4. Old instances are terminated after a drain period

rf deploy --watch
Building...        ████████████████████ 100% (28s)
Starting...        2/2 instances ready
Health check...    ✓ All passing
Draining old...    Waiting for connections to close (30s)
✓ Deployment complete (v42)

Init Containers

Run one-time setup commands before your main process starts. Common use case: database migrations.

services:
  web:
    init:
      - command: npx prisma migrate deploy
        timeout: 120s
      - command: node scripts/seed-cache.js
    command: node server.js

Init containers run sequentially. If any fails, the deployment is rolled back.

Multi-Process Services

Run multiple processes in a single service when they share the same codebase and scaling characteristics:

services:
  web:
    processes:
      server: node server.js
      scheduler: node scheduler.js

Both processes share the same container, environment, and resources.

SSH & Live Debugging

Connect directly to a running container:

rf ssh web
Connecting to web (instance i-abc123, us-east-1)...
root@web-abc123:/app#

Run a one-off command:

rf exec web "npx prisma studio"
rf exec web "python manage.py shell"

Port forward to your local machine:

rf port-forward web 9229:9229

Now connect your local debugger (VS Code, Chrome DevTools) to localhost:9229 to debug the production process directly.

Resource Limits

Set CPU and memory limits to prevent a single service from consuming all available resources:

services:
  api:
    resources:
      cpu: 2
      memory: 4GB
      memory_swap: 8GB

If a service exceeds its memory limit, it's OOM-killed and restarted automatically. The event appears in your logs and triggers an alert.

Restart Policies

rf services restart api
rf services restart api --instance i-abc123
rf services restart --all

Services automatically restart on crash with exponential backoff: 1s, 2s, 4s, 8s, up to 5 minutes.
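The delay before the nth consecutive restart therefore doubles each time until it hits the cap; a one-line sketch of the schedule:

```javascript
// Backoff before the nth consecutive restart: 1s, 2s, 4s, 8s, ... capped at
// 300 seconds (5 minutes). The counter resets once the service runs cleanly.
function restartDelaySeconds(attempt) {
  return Math.min(2 ** (attempt - 1), 300);
}
```

By this schedule the first restart happens after 1 second and the 5-minute cap is reached by the tenth consecutive crash.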