Product — Compute

Containers that scale themselves.

Push your code. RaidFrame builds it, deploys it, and auto-scales it across 14+ global regions. From zero to millions of requests, with no infrastructure to manage.

14+ global regions · <10s scale-up time · 99.99% uptime SLA · $0 starting price
How it works

From git push to production.

Three commands. No Dockerfile required. No infrastructure to configure.

terminal
$ rf deploy
Detecting stack... Node.js (Next.js 16)
Building... ████████████████████ 100% (28s)
Deploying... Rolling update (2 instances)
Health check... Passing
Live at https://my-app.raidframe.app
$ rf add postgres
PostgreSQL 16 provisioned
DATABASE_URL injected
$ rf logs
14:23:01 [web] GET /api/users 200 12ms
14:23:02 [web] POST /api/orders 201 45ms
Step 01

Push your code

Connect your GitHub repo or run rf deploy from the CLI. RaidFrame auto-detects your stack — Node.js, Python, Go, Ruby, Rust, Java, or any Dockerfile.

Step 02

We build & deploy

Docker images are built with layer caching (10s rebuilds). Rolling deployments ensure zero downtime. Health checks gate traffic to new instances.

Step 03

Auto-scale on demand

CPU, memory, or request-based scaling. New instances launch in under 10 seconds. Scale to zero when idle, burst to hundreds during traffic spikes.

Compatibility

Every stack. Every framework.

Bring a Dockerfile or let us auto-detect. If it runs in a container, it runs on RaidFrame.

Next.js
Node.js
Python
Django
Go
Ruby
Rails
Rust
Java
Spring
PHP
Laravel
.NET
Bun
Deno
Elixir
Hono
FastAPI
Pricing

Instance types

Start free. Scale to dedicated 32-vCPU machines. Flat per-service pricing — no per-request charges, no bot traffic bills.

| Plan | CPU | RAM | Storage | Price |
| --- | --- | --- | --- | --- |
| Starter (Free tier) | Shared | 512 MB | 1 GB | Free |
| Standard | 1 vCPU | 1 GB | 5 GB | $7/mo |
| Pro | 2 vCPU | 4 GB | 20 GB | $25/mo |
| Pro Plus | 4 vCPU | 8 GB | 40 GB | $50/mo |
| Performance | 8 vCPU | 16 GB | 80 GB | $100/mo |
| Performance XL | 16 vCPU | 32 GB | 160 GB | $200/mo |
| Dedicated | 32 vCPU | 64 GB | 320 GB | $400/mo |

All plans include: auto-scaling, zero-downtime deploys, SSL, private networking, and logs. No hidden fees.

Capabilities

Everything your app needs to run.

Production infrastructure out of the box. No plugins, no add-ons, no third-party integrations required.

Auto-scaling

Scale from 0 to hundreds of instances on CPU, memory, requests, or custom metrics. New instances healthy in under 10 seconds. Configurable cooldown prevents flapping.
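The cooldown behavior can be pictured with a short sketch. The class name, thresholds, and defaults below are illustrative, not RaidFrame's actual implementation:

```python
import time

class CooldownScaler:
    """Illustrative scale decision: scale up fast, scale down only
    after a cooldown so brief dips in load don't cause flapping."""

    def __init__(self, cpu_target=0.7, cooldown_s=300):
        self.cpu_target = cpu_target       # scale up above this utilization
        self.cooldown_s = cooldown_s       # 5-minute scale-down cooldown
        self.last_scale_down = float("-inf")

    def decide(self, cpu_util, instances, now=None):
        """Return the desired instance count for the current reading."""
        now = time.monotonic() if now is None else now
        if cpu_util > self.cpu_target:
            return instances + 1                      # react immediately
        if cpu_util < self.cpu_target / 2:
            if now - self.last_scale_down >= self.cooldown_s:
                self.last_scale_down = now
                return max(instances - 1, 0)          # may reach zero
        return instances                              # hold steady
```

Scale-up reacts instantly; scale-down is rate-limited by the cooldown, mirroring the default 5-minute window.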

Zero-downtime deploys

Rolling deployments with health check gating. New instances must pass health checks before receiving traffic. Old instances drain gracefully. Every deploy is reversible.
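On the application side, health-check gating only requires exposing an endpoint for the platform to probe. A minimal stdlib sketch, assuming an HTTP probe path such as /healthz (the path is an assumption, not documented here):

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

READY = {"ok": True}  # flip to False while warming up or draining

class HealthHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/healthz":
            code = 200 if READY["ok"] else 503  # 503 -> receives no traffic
            self.send_response(code)
            self.end_headers()
            self.wfile.write(b"ok" if code == 200 else b"draining")
        else:
            self.send_response(404)
            self.end_headers()

    def log_message(self, *args):  # keep the demo output quiet
        pass

# To serve: HTTPServer(("", 8080), HealthHandler).serve_forever()
```

Returning a non-200 status during warm-up or drain is what lets the platform hold back (or shift away) traffic without dropping requests.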

Multi-region

Deploy to 14+ regions worldwide. Automatic failover routes traffic to the nearest healthy region. Add a region with one command: rf regions add eu-west-1.

Docker & Buildpacks

Bring a Dockerfile or let RaidFrame auto-detect your stack. Layer caching means rebuilds in seconds. Multi-stage builds supported for optimized images.
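As an example of the multi-stage pattern mentioned above, a typical Node build might look like this (a generic sketch, not RaidFrame-specific; paths and scripts depend on your project):

```dockerfile
# Build stage: install dev dependencies and compile
FROM node:22-alpine AS build
WORKDIR /app
COPY package*.json ./
RUN npm ci
COPY . .
RUN npm run build

# Runtime stage: copy only the compiled output for a smaller image
FROM node:22-alpine
WORKDIR /app
COPY --from=build /app/dist ./dist
COPY --from=build /app/node_modules ./node_modules
CMD ["node", "dist/server.js"]
```

Because dependencies are copied before the source, the `npm ci` layer is reused on rebuilds unless package files change.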

Preview environments

Every pull request gets a full isolated environment with its own URL and database branch. Reviewers test real deployments, not code diffs. Auto-cleanup on merge.

Private networking

All services communicate over encrypted private networks. Internal traffic never touches the public internet. Reference other services by name: http://api.internal:8080.

SSH & debugging

Drop into a running container with rf ssh. Run one-off commands with rf exec. Port-forward to your local machine for remote debugging with VS Code or Chrome DevTools.

Scale to zero

Idle services scale to zero to save costs. First request triggers a cold start in under 3 seconds. Configure idle timeout per service. Only pay when your app is actually running.

Init containers

Run migrations, seed caches, or warm up data before your main process starts. Init containers run sequentially — if any fails, the deployment rolls back automatically.
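The sequential-with-rollback behavior can be sketched as follows; the function and the (run, undo) step shape are illustrative, not RaidFrame's API:

```python
def run_init_steps(steps):
    """Run init steps in order. If one fails, undo the completed
    steps in reverse order and re-raise, mirroring the automatic
    rollback described above. Each step is a (run, undo) pair of
    callables -- an illustrative shape, not a real API."""
    done = []
    for run, undo in steps:
        try:
            run()
        except Exception:
            for _, u in reversed(done):
                u()  # roll back what already succeeded
            raise
        done.append((run, undo))
```

A failed migration, for example, would trigger the undo of every step that ran before it, leaving the previous deployment untouched.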

Comparison

Why not just use AWS / Vercel / Railway?

| Feature | AWS ECS | Vercel | Railway | RaidFrame |
| --- | --- | --- | --- | --- |
| Deploy time | Minutes | ~60s | ~60s | ~30s |
| Auto-scaling | Config-heavy | Serverless only | Manual | Built-in |
| Pricing model | Usage-based | Per-seat + usage | Usage-based | Flat per-service |
| Managed databases | RDS (separate) | Third-party | Built-in | Built-in |
| SSH access | Via SSM | No | No | Yes |
| Preview environments | Manual | Yes | No | Yes + DB branch |
| Background jobs | Fargate tasks | No | Separate service | Native |
| Setup complexity | Days | Minutes | Minutes | Minutes |

Frequently asked questions

How fast does auto-scaling react?

New instances spin up in under 10 seconds. Scale-down uses a configurable cooldown (default 5 minutes) to prevent flapping.

Can I use my own Dockerfile?

Yes. Bring a Dockerfile or let RaidFrame auto-detect your stack with buildpacks. Node, Python, Go, Ruby, Rust, Java, PHP, .NET — all supported.

What happens during a deployment?

Rolling deployments: new instances are health-checked, then traffic shifts gradually. Old instances drain connections. Zero downtime, every time.

Is there a free tier?

Yes. The Starter plan includes a shared-CPU instance with 512 MB RAM. No credit card required. No time limit.

Can I deploy to multiple regions?

Yes. Deploy to 14+ regions with rf regions add. Traffic is routed to the nearest healthy region automatically.

How is pricing different from Vercel/AWS?

Flat per-service pricing. No per-request charges, no bandwidth overage, no per-seat fees. Bot traffic doesn't affect your bill.

Deploy your first service.

Free tier available. No credit card. No infrastructure setup. Just push and go.