Native cron scheduling, worker processes, and managed job queues.
Schedule recurring tasks with standard cron syntax:
```bash
rf cron add "0 9 * * *" "node scripts/daily-digest.js" --name daily-digest

✓ Cron job created: daily-digest
Schedule:  0 9 * * * (daily at 9:00 AM UTC)
Command:   node scripts/daily-digest.js
Next run:  2026-03-17T09:00:00Z
```
Or define in raidframe.yaml:
```yaml
services:
  daily-digest:
    type: cron
    schedule: "0 9 * * *"
    command: node scripts/daily-digest.js
    timeout: 300s
    retries: 2

  cleanup:
    type: cron
    schedule: "0 3 * * *"
    command: node scripts/cleanup.js

  weekly-report:
    type: cron
    schedule: "0 10 * * MON"
    command: python scripts/report.py
```
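For intuition, here's a minimal sketch (not part of the RaidFrame SDK) of how a 5-field cron expression like the ones above maps onto a concrete UTC time. It handles only literal values and `*`, not ranges, lists, or steps:

```typescript
// Day-name aliases as used in expressions like "0 10 * * MON".
const DAY_NAMES: Record<string, number> = {
  SUN: 0, MON: 1, TUE: 2, WED: 3, THU: 4, FRI: 5, SAT: 6,
};

// Returns true if `date` (interpreted in UTC) matches the cron expression.
// Simplified sketch: supports only literal field values and "*".
function cronMatches(expr: string, date: Date): boolean {
  const [min, hour, dom, month, dow] = expr.trim().split(/\s+/);

  const fieldMatches = (
    field: string,
    value: number,
    names?: Record<string, number>,
  ): boolean => {
    if (field === "*") return true;
    const key = field.toUpperCase();
    const n = names && key in names ? names[key] : parseInt(field, 10);
    return n === value;
  };

  return (
    fieldMatches(min, date.getUTCMinutes()) &&
    fieldMatches(hour, date.getUTCHours()) &&
    fieldMatches(dom, date.getUTCDate()) &&
    fieldMatches(month, date.getUTCMonth() + 1) &&
    fieldMatches(dow, date.getUTCDay(), DAY_NAMES)
  );
}

// 2026-03-16 is a Monday, so the weekly-report schedule matches at 10:00 UTC.
console.log(cronMatches("0 10 * * MON", new Date(Date.UTC(2026, 2, 16, 10, 0)))); // true
console.log(cronMatches("0 9 * * *", new Date(Date.UTC(2026, 2, 17, 9, 0))));     // true
```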
```bash
# List all cron jobs
rf cron list

# View execution history
rf cron logs daily-digest

# Trigger manually
rf cron run daily-digest

# Pause without deleting
rf cron pause daily-digest

# Resume
rf cron resume daily-digest

# Delete
rf cron remove daily-digest
```
```bash
rf cron logs daily-digest --last 10

RUN ID    STATUS     DURATION  STARTED
run_a8f3  ✓ success  12s       2026-03-16 09:00:01
run_b2c1  ✓ success  14s       2026-03-15 09:00:00
run_c4d2  ✗ failed   8s        2026-03-14 09:00:01
run_d5e3  ✓ success  11s       2026-03-13 09:00:00
```
View logs for a specific run:
```bash
rf cron logs daily-digest --run run_c4d2
```
Workers are background processes that run continuously:
```yaml
services:
  email-worker:
    type: worker
    command: node workers/email.js
    scaling:
      min: 1
      max: 10
      target_queue_depth: 50

  image-processor:
    type: worker
    command: python workers/resize.py
    resources:
      cpu: 2
      memory: 4GB
```
Workers scale based on queue depth, a custom metric, or a fixed instance count. There are no timeout limits; workers run as long as you need.
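As an illustration of queue-depth scaling, here is a hypothetical sketch of the decision rule, assuming the controller targets `target_queue_depth` pending jobs per instance (the actual RaidFrame controller is internal and may differ):

```typescript
// Hypothetical autoscaling sketch: derive a desired instance count from the
// current queue depth, clamped to the configured min/max bounds.
interface ScalingConfig {
  min: number;
  max: number;
  targetQueueDepth: number; // pending jobs each instance should cover
}

function desiredInstances(queueDepth: number, cfg: ScalingConfig): number {
  const raw = Math.ceil(queueDepth / cfg.targetQueueDepth);
  return Math.min(cfg.max, Math.max(cfg.min, raw));
}

// Matches the email-worker config above: min 1, max 10, target depth 50.
const cfg: ScalingConfig = { min: 1, max: 10, targetQueueDepth: 50 };
console.log(desiredInstances(0, cfg));    // 1  (never scales below min)
console.log(desiredInstances(175, cfg));  // 4  (ceil(175 / 50))
console.log(desiredInstances(2000, cfg)); // 10 (capped at max)
```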
RaidFrame includes a built-in job queue. No external service needed.
Enqueue jobs from your application:

```typescript
import { Queue } from "@raidframe/sdk";

const emailQueue = new Queue("emails");

await emailQueue.add("welcome-email", {
  to: "[email protected]",
  template: "welcome",
  data: { name: "Alice" },
});
```
Process them with a worker:

```typescript
import { Worker } from "@raidframe/sdk";

const worker = new Worker("emails", async (job) => {
  await sendEmail(job.data.to, job.data.template, job.data.data);
});

worker.on("completed", (job) => console.log(`Sent: ${job.id}`));
worker.on("failed", (job, err) => console.error(`Failed: ${job.id}`, err));
```
Configure queue behavior in raidframe.yaml:

```yaml
queues:
  emails:
    type: queue
    max_retries: 3
    retry_backoff: exponential
    dead_letter: true
    timeout: 60s
    concurrency: 10

  uploads:
    type: queue
    max_retries: 5
    timeout: 300s
    concurrency: 3
    priority: true
```
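To make `retry_backoff: exponential` concrete, here is a hypothetical sketch assuming a 1-second base delay that doubles on each attempt (the real base delay and any jitter are internal to RaidFrame):

```typescript
// Hypothetical exponential backoff: delay doubles per failed attempt,
// starting from an assumed 1s base.
function retryDelayMs(attempt: number, baseMs = 1000): number {
  return baseMs * 2 ** attempt;
}

// With max_retries: 3, a persistently failing job would be retried at roughly:
for (let attempt = 0; attempt < 3; attempt++) {
  console.log(`retry ${attempt + 1} after ~${retryDelayMs(attempt)}ms`);
}
// → ~1000ms, ~2000ms, ~4000ms; after the final failure the job lands in the
//   dead-letter queue (dead_letter: true).
```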
```bash
rf queues info emails

Queue: emails
Pending:      42
Active:       3
Completed:    1,847 (24h)
Failed:       12 (24h)
Dead letter:  2
Workers:      3 instances
```
```bash
# Retry failed jobs
rf queues retry emails --all

# Purge dead letter queue
rf queues purge emails --dead-letter

# Drain queue (process remaining, stop accepting new)
rf queues drain emails
```
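The drain semantics can be sketched as follows. This is an illustrative in-memory model, not RaidFrame's implementation: a draining queue rejects new jobs while the backlog is processed to completion.

```typescript
// Hypothetical model of "drain": stop accepting new jobs, but keep
// processing the existing backlog until the queue is empty.
class DrainableQueue<T> {
  private jobs: T[] = [];
  private draining = false;

  add(job: T): boolean {
    if (this.draining) return false; // new jobs rejected once draining starts
    this.jobs.push(job);
    return true;
  }

  drain(process: (job: T) => void): number {
    this.draining = true;
    let processed = 0;
    while (this.jobs.length > 0) {
      process(this.jobs.shift()!);
      processed++;
    }
    return processed;
  }
}

const q = new DrainableQueue<string>();
q.add("job-1");
q.add("job-2");
console.log(q.drain((j) => console.log(`processing ${j}`))); // 2
console.log(q.add("job-3")); // false — queue is draining
```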
Run a job at a specific future time:
```bash
rf jobs schedule "node scripts/migrate-data.js" --at "2026-03-20T02:00:00Z" --name data-migration

✓ Job scheduled: data-migration
Runs at:  2026-03-20T02:00:00Z (3 days from now)
Command:  node scripts/migrate-data.js
```
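Conceptually, a one-off scheduled job is just a delay until the `--at` timestamp followed by a single execution. A minimal sketch of that delay calculation (not RaidFrame's scheduler):

```typescript
// Milliseconds from `now` until the ISO-8601 timestamp `at`,
// clamped to 0 if the timestamp is already in the past.
function msUntil(at: string, now: Date = new Date()): number {
  return Math.max(0, Date.parse(at) - now.getTime());
}

// From 2026-03-17T09:00Z to 2026-03-20T02:00Z is 2 days 17 hours.
const now = new Date("2026-03-17T09:00:00Z");
const delay = msUntil("2026-03-20T02:00:00Z", now);
console.log(`${delay / 3_600_000} hours until run`); // 65 hours
```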