
Free Background Jobs Without URL Pinging Hacks

Stop using cron-job.org to ping your API routes. RaidFrame gives you native cron jobs and worker processes for free — no timeouts, no external services, no hacks.


RaidFrame Team

March 16, 2026 · 7 min read

TL;DR — Most hosting platforms don't support background jobs natively. You end up using external URL pingers or cramming work into serverless functions that timeout after 10 seconds. RaidFrame gives you native cron jobs and long-running worker processes on the free tier. No hacks. No separate billing. Define your schedule, deploy, done.

Why is running a cron job so hard in 2026?

You'd think scheduling a task to run every hour would be a solved problem. It is — if you own a server. But if you're on Vercel, Netlify, or any serverless platform, you're out of luck.

Serverless functions are designed to handle HTTP requests, not run background work. They spin up, respond, and die. There's no persistent process to run your email digest at 6am or clean up expired sessions every night.

So developers hack around it. And the hacks are terrible.

The URL pinging hack (and why it breaks)

Here's the pattern thousands of developers use today:

  1. Write an API route like /api/cron/send-emails
  2. Sign up for cron-job.org, EasyCron, or UptimeRobot
  3. Configure it to GET your endpoint every hour
  4. Hope it works
// src/app/api/cron/send-emails/route.ts
// The "please ping me" pattern
export async function GET(request: Request) {
  // Verify the request is from your cron service
  const authHeader = request.headers.get("authorization");
  if (authHeader !== `Bearer ${process.env.CRON_SECRET}`) {
    return new Response("Unauthorized", { status: 401 });
  }
 
  // Do the actual work — but hurry, you have 10 seconds
  await sendDailyDigestEmails();
 
  return new Response("OK");
}

This is fragile in at least five ways:

  • Timeouts: Vercel functions time out after 10s (free) or 60s (Pro). If your job takes longer, it dies mid-execution.
  • No retries: If the ping fails, your job just doesn't run. No retry logic. No alerting.
  • No monitoring: You can't see if the job succeeded or failed without building your own logging.
  • Security: You're exposing an HTTP endpoint that triggers work. One leaked secret and anyone can trigger it.
  • Cold starts: The function might spend 2-3 seconds booting before it even starts your job.
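If you're stuck with this pattern for now, at least harden the secret check. A plain !== comparison leaks timing information; Node's crypto.timingSafeEqual compares in constant time. A minimal sketch (isAuthorized is our own helper name, not part of any framework):

```javascript
// Constant-time check of the cron secret, instead of a plain !== comparison.
import { timingSafeEqual } from "node:crypto";

export function isAuthorized(authHeader, secret) {
  const expected = Buffer.from(`Bearer ${secret}`);
  const received = Buffer.from(authHeader ?? "");
  // timingSafeEqual throws if the lengths differ, so compare lengths first
  return (
    received.length === expected.length && timingSafeEqual(received, expected)
  );
}
```

You'd call it from the route handler in place of the string comparison above. It doesn't fix the deeper problems (timeouts, retries, monitoring), but it closes the cheapest attack.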

What about Vercel's built-in cron?

Vercel added cron support via vercel.json. It looks promising until you read the fine print.

{
  "crons": [{
    "path": "/api/cron/cleanup",
    "schedule": "0 * * * *"
  }]
}

The limitations:

  • Still runs as a serverless function — same timeout limits apply
  • Free tier: 1 cron job, triggered at most once per day
  • Pro tier: 10 cron jobs, with a minimum interval of one minute between runs
  • No long-running tasks — your function timeout is the hard ceiling
  • If the function fails, there's no automatic retry

You can't process a queue of 500 images in a function that dies after 60 seconds.


How RaidFrame handles background jobs

On RaidFrame, background jobs are a first-class feature. Not an afterthought. Not a workaround. Your cron jobs and worker processes run alongside your app in the same deployment, same logs, same dashboard.

Native cron jobs

Define your schedule in raidframe.yaml or via the CLI:

# raidframe.yaml
# Alternative to CLI commands — define your cron jobs declaratively in config
app:
  name: my-saas
  command: npm start
 
cron:
  - name: daily-email-digest
    schedule: "0 6 * * *"        # Every day at 6am UTC
    command: node jobs/send-digest.js
 
  - name: cleanup-sessions
    schedule: "0 */4 * * *"      # Every 4 hours
    command: node jobs/cleanup.js
 
  - name: sync-analytics
    schedule: "*/15 * * * *"     # Every 15 minutes
    command: python scripts/sync.py

Or add one from the CLI:

rf cron add "0 6 * * *" "node jobs/send-digest.js" --name daily-digest

No external service. No URL pinging. No timeout limits. Your job runs as a real process with access to your environment variables, your database, your file system.

Long-running worker processes

For queue-based work — processing uploads, sending emails, generating reports — you need a persistent worker, not a cron job.

# raidframe.yaml
app:
  name: my-saas
  command: npm start
 
workers:
  - name: email-worker
    command: node workers/email.js
 
  - name: upload-processor
    command: node workers/uploads.js

Workers run continuously alongside your web process. Same deployment. Same rf deploy. No separate service with separate billing.

Real code: common background job patterns

Pattern 1: Daily email digest (cron)

// jobs/send-digest.js
import { db } from "../lib/database.js";
import { sendEmail } from "../lib/email.js";
import { renderDigest } from "../lib/digest.js"; // hypothetical helper that renders the digest HTML used below
 
async function sendDailyDigest() {
  const users = await db.user.findMany({
    where: { digestEnabled: true },
  });
 
  for (const user of users) {
    const updates = await db.activity.findMany({
      where: {
        teamId: user.teamId,
        createdAt: { gte: new Date(Date.now() - 86400000) },
      },
    });
 
    if (updates.length > 0) {
      await sendEmail({
        to: user.email,
        subject: `${updates.length} updates from your team`,
        html: renderDigest(updates),
      });
    }
  }
 
  console.log(`Sent digest to ${users.length} users`);
}
 
sendDailyDigest()
  .then(() => process.exit(0))
  .catch((err) => {
    console.error("Digest failed:", err);
    process.exit(1);
  });

No timeout. Processes 10 users or 10,000 users — it runs until it's done.
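At 10,000 users you'll probably want bounded concurrency rather than strictly sequential sends. A generic sketch of a small concurrency limiter (runWithConcurrency is our own helper, not a RaidFrame or library API):

```javascript
// Run an array of async tasks with at most `limit` in flight at once.
// Results come back in the same order as the input tasks.
async function runWithConcurrency(tasks, limit) {
  const results = [];
  let next = 0;

  async function worker() {
    // Each worker repeatedly claims the next unstarted task until none remain.
    while (next < tasks.length) {
      const i = next++;
      results[i] = await tasks[i]();
    }
  }

  await Promise.all(
    Array.from({ length: Math.min(limit, tasks.length) }, worker)
  );
  return results;
}
```

In the digest job you'd wrap each user's send in a task and run, say, 10 at a time, instead of awaiting one email before starting the next.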

Pattern 2: Queue worker (process uploads)

// workers/uploads.js
import { db } from "../lib/database.js";
import { processImage } from "../lib/image.js";
import { uploadToStorage } from "../lib/storage.js";
 
async function processQueue() {
  while (true) {
    // findFirst + update is fine with a single worker; with multiple workers,
    // claim jobs atomically (e.g. a conditional update guarded on status)
    const job = await db.uploadQueue.findFirst({
      where: { status: "pending" },
      orderBy: { createdAt: "asc" },
    });
 
    if (!job) {
      await sleep(5000); // Poll every 5 seconds
      continue;
    }
 
    await db.uploadQueue.update({
      where: { id: job.id },
      data: { status: "processing" },
    });
 
    try {
      const resized = await processImage(job.filePath, {
        width: 1200,
        format: "webp",
      });
      await uploadToStorage(resized, job.outputPath);
      await db.uploadQueue.update({
        where: { id: job.id },
        data: { status: "complete" },
      });
    } catch (err) {
      await db.uploadQueue.update({
        where: { id: job.id },
        data: { status: "failed", error: err.message },
      });
    }
  }
}
 
const sleep = (ms) => new Promise((r) => setTimeout(r, ms));

processQueue().catch((err) => {
  console.error("Worker crashed:", err);
  process.exit(1); // exit non-zero so the platform can restart the worker
});

This worker runs forever. On Vercel, this is impossible. On RaidFrame, it's just another process.
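One detail worth handling in any long-running worker is graceful shutdown: on redeploy, platforms typically send SIGTERM before killing the process, and you want the in-flight job to finish first. A sketch of the pattern (the flag-checking loop is our own structure, not a RaidFrame requirement):

```javascript
// Stop picking up new jobs once SIGTERM arrives; finish the current one first.
let shuttingDown = false;
process.on("SIGTERM", () => {
  shuttingDown = true;
});

// Wraps a "handle one job" function in a loop that respects the flag.
async function runLoop(handleNextJob) {
  let processed = 0;
  while (!shuttingDown) {
    await handleNextJob();
    processed++;
  }
  return processed; // loop exits cleanly; the in-flight job already completed
}
```

In the upload worker above, the equivalent change is checking the same flag at the top of the while (true) loop.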

Pattern 3: Go scheduled cleanup

// jobs/cleanup.go
package main
 
import (
    "context"
    "fmt"
    "log"
    "os"
    "time"
 
    "github.com/jackc/pgx/v5"
)
 
func main() {
    conn, err := pgx.Connect(context.Background(), os.Getenv("DATABASE_URL"))
    if err != nil {
        log.Fatal(err)
    }
    defer conn.Close(context.Background())
 
    cutoff := time.Now().Add(-24 * time.Hour)
    result, err := conn.Exec(context.Background(),
        "DELETE FROM sessions WHERE expires_at < $1", cutoff)
    if err != nil {
        log.Fatal(err)
    }
 
    fmt.Printf("Cleaned up %d expired sessions\n", result.RowsAffected())
}

Schedule it in your config with schedule: "0 3 * * *" to run it at 3am daily. Whether it takes 200ms or 20 minutes doesn't matter.

Platform comparison

| Feature | Vercel | Netlify | Railway | RaidFrame |
| --- | --- | --- | --- | --- |
| Cron jobs | Limited (vercel.json) | No | Requires separate worker service | Native (raidframe.yaml) |
| Worker processes | No | No | Separate service + billing | Same deployment |
| Max job duration | 10-300s | 10-26s | Unlimited (separate service) | Unlimited |
| Free tier includes jobs | 1 job, 1x/day | No | No free workers | Yes |
| Monitoring/logs | Basic | No | Separate dashboard | Same dashboard |
| Retry on failure | No | No | Manual | Automatic |

FAQ

Can I use Redis-based queues like BullMQ?

Yes. Add Redis with rf add redis, and your REDIS_URL is set automatically. Run BullMQ, Celery, Sidekiq, or any queue library as a worker process.

What happens if my cron job fails?

RaidFrame automatically retries failed cron jobs with exponential backoff. Failures show up in your logs and dashboard. No silent failures.

Is there a limit on how many cron jobs I can run?

Free tier includes 5 cron jobs and 1 worker process. Pro tier is unlimited.

Can I run cron jobs more frequently than every minute?

Standard cron syntax bottoms out at one-minute granularity. For sub-minute scheduling, use a worker process with a timer loop.
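For example, a sketch of that timer loop in Node (the every helper is our own; it subtracts the task's runtime so slow runs don't drift the schedule):

```javascript
// workers/heartbeat.js (hypothetical) — run `task` every `intervalMs`,
// compensating for however long the task itself takes.
const sleep = (ms) => new Promise((r) => setTimeout(r, ms));

async function every(intervalMs, task, maxRuns = Infinity) {
  for (let run = 0; run < maxRuns; run++) {
    const started = Date.now();
    try {
      await task();
    } catch (err) {
      console.error("task failed:", err); // log, but keep the loop alive
    }
    const elapsed = Date.now() - started;
    await sleep(Math.max(0, intervalMs - elapsed));
  }
}
```

Call it with your task and a 10-second interval, register the file as a worker, and you have sub-minute scheduling without touching cron.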

Do workers share resources with my web app?

Workers run in the same deployment but have their own resource allocation. A runaway worker won't starve your web process.

How do I test cron jobs locally?

# Run the job manually
node jobs/send-digest.js
 
# Or use the CLI
rf cron run daily-digest --local

Stop pinging URLs. Deploy your background jobs the right way — start for free.

