
Microservices Architecture in 2026: A Practical Guide

When to use microservices, when to stay monolithic, and how to split a growing app without creating a distributed mess. Real patterns, not theory.


RaidFrame Team

February 17, 2026 · 5 min read

TL;DR — Don't start with microservices. Start with a monolith. Split when you have a clear reason: independent deployment, independent scaling, or team autonomy. In 2026, the best approach is a modular monolith that can be decomposed later. When you do split, deploy on a platform that makes inter-service communication trivial.

The monolith-first rule

Nearly every successful microservice architecture started as a monolith. Netflix, Amazon, Uber — they all split after reaching scale, not before.

Starting with microservices means:

  • Distributed debugging before you have users
  • Network failures between services that should be function calls
  • Deployment pipelines for 5 services instead of 1
  • Schema coordination across databases
  • Distributed transactions for simple operations

Start with one service. Split when it hurts.

When to split

Split a service out when you have at least two of these:

  1. Different scaling needs — your image processor needs GPU, your API doesn't
  2. Different deployment cadence — the billing team deploys daily, the auth team deploys weekly
  3. Team ownership boundaries — team A owns search, team B owns checkout
  4. Technology mismatch — ML pipeline in Python, API in Go
  5. Isolation requirement — payment processing needs PCI compliance, the blog doesn't

If you only have one reason, a modular monolith is probably better.


The modular monolith

Organize your monolith as independent modules with clear boundaries:

my-app/
├── modules/
│   ├── auth/
│   │   ├── routes.ts
│   │   ├── service.ts
│   │   └── repository.ts
│   ├── billing/
│   │   ├── routes.ts
│   │   ├── service.ts
│   │   └── repository.ts
│   ├── orders/
│   │   ├── routes.ts
│   │   ├── service.ts
│   │   └── repository.ts
│   └── notifications/
│       ├── service.ts
│       └── workers.ts
├── shared/
│   ├── database.ts
│   └── middleware.ts
└── server.ts

Rules:

  • Modules communicate through interfaces, not direct imports
  • Each module owns its database tables
  • No cross-module joins
  • Shared code lives in /shared

When you eventually split, each module becomes its own service with minimal refactoring.
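In code, the "communicate through interfaces" rule can look something like this — a TypeScript sketch with illustrative names (`BillingService`, `OrdersService` are not from the structure above):

```typescript
// The orders module depends on an interface, never on billing's internals.
export interface BillingService {
  chargeCustomer(customerId: string, amountCents: number): Promise<{ ok: boolean }>;
}

// modules/billing/service.ts exports a concrete implementation.
export class StripeBilling implements BillingService {
  async chargeCustomer(customerId: string, amountCents: number) {
    // ...call the payment provider here; stubbed for this sketch
    return { ok: amountCents > 0 };
  }
}

// modules/orders/service.ts receives the interface via constructor injection,
// so replacing in-process billing with a remote service later only changes
// the wiring in server.ts, not the orders code.
export class OrdersService {
  constructor(private billing: BillingService) {}

  async checkout(customerId: string, totalCents: number): Promise<string> {
    const result = await this.billing.chargeCustomer(customerId, totalCents);
    return result.ok ? "paid" : "payment_failed";
  }
}
```

The interface is the seam: when billing becomes its own service, only the implementation behind `BillingService` changes to an HTTP client.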

Splitting patterns

Pattern 1: Extract a worker

The easiest first split. Move background processing out of your API:

# raidframe.yaml
services:
  api:
    type: web
    port: 3000
    scaling:
      min: 2
      max: 10
 
  worker:
    type: worker
    command: node worker.js
    scaling:
      min: 1
      max: 5
      target_queue_depth: 50

The API enqueues jobs. The worker processes them. They share a database and Redis queue. This is the simplest microservice split and solves 80% of "my API is slow because of background processing" problems.
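A minimal sketch of that split in TypeScript — an in-memory array stands in for the shared Redis queue here, and the job shape is illustrative:

```typescript
// Shared job shape between API and worker.
type Job = { type: string; payload: Record<string, unknown> };

// Stand-in for the Redis queue both services connect to.
const queue: Job[] = [];

// API side: push the job and return immediately, keeping request latency low.
function enqueue(job: Job): void {
  queue.push(job);
}

// Worker side: pull jobs off the queue and do the slow work out of band.
async function processNext(): Promise<string | null> {
  const job = queue.shift();
  if (!job) return null;
  // ...resize the image, send the email, generate the report, etc.
  return `processed:${job.type}`;
}
```

The API never waits on the slow work; the worker scales independently on queue depth, matching the `target_queue_depth` setting above.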

Pattern 2: Extract by domain

Split a specific business domain into its own service:

services:
  api:
    type: web
    port: 3000
 
  search:
    type: web
    port: 8080
    resources:
      cpu: 4
      memory: 8GB
 
  payments:
    type: web
    port: 8081

The API calls search and payments over the private network:

// api → search service
const results = await fetch("http://search.internal:8080/query", {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify({ q: "laptop" }),
}).then(r => r.json());

On RaidFrame, services discover each other automatically via service.internal hostnames. No service mesh, no Consul, no DNS configuration.
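In practice you'll want a thin helper around these internal calls so every service gets the same timeout and error handling. This sketch assumes the `service.internal` hostname convention; the helper names and the 2-second timeout are illustrative:

```typescript
// Build an internal URL from the service name, port, and path.
function internalUrl(service: string, port: number, path: string): string {
  return `http://${service}.internal:${port}${path}`;
}

// POST JSON to a sibling service and parse the response.
async function callService<T>(
  service: string,
  port: number,
  path: string,
  body: unknown,
): Promise<T> {
  const res = await fetch(internalUrl(service, port, path), {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(body),
    // Fail fast: internal calls should be quick; don't hang the caller.
    signal: AbortSignal.timeout(2000),
  });
  if (!res.ok) throw new Error(`${service} returned ${res.status}`);
  return res.json() as Promise<T>;
}

// Usage: const results = await callService("search", 8080, "/query", { q: "laptop" });
```

Centralizing this also gives you one place to add retries or tracing headers later, instead of patching every call site.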

Pattern 3: Database per service

The hardest split. Each service gets its own database:

databases:
  api-db:
    engine: postgres
  search-db:
    engine: postgres
  payments-db:
    engine: postgres

Sync data between services via events:

// payments service publishes
await pubsub.publish("payment.completed", { order_id: "o_123", amount: 99.99 });
 
// api service subscribes
pubsub.subscribe("payment.completed", async (event) => {
  await db.query("UPDATE orders SET status = 'paid' WHERE id = $1", [event.data.order_id]);
});

Inter-service communication

Method | Use when | Latency
HTTP (REST/gRPC) | Synchronous request/response | 1-10ms internal
Message queue | Async processing, decoupling | 10-100ms
Pub/Sub | Event broadcasting | 10-50ms
Shared database | Simple reads (avoid for writes) | < 1ms

On RaidFrame, all services in a project share a private encrypted network. Internal HTTP calls between services add < 1ms of latency.

What NOT to do

  • Don't use microservices for a CRUD app. If your app is forms and database queries, a monolith is faster to build and easier to maintain.
  • Don't create a service per database table. That's not microservices, that's a distributed monolith with network calls instead of function calls.
  • Don't use Kubernetes for fewer than 10 services. RaidFrame or any container platform handles this without the operational overhead.
  • Don't do distributed transactions. Design for eventual consistency or keep tightly-coupled operations in the same service.
  • Don't build a service mesh for 3 services. Private networking with health checks is enough.

FAQ

How many services should I have?

As few as possible. Most apps under $1M ARR should have 1-5 services. More services = more complexity.

Should I use gRPC or REST?

REST for external APIs. gRPC for internal service-to-service communication where latency and type safety matter. Both work on RaidFrame.

How do I handle authentication across services?

Pass JWT tokens between services. Each service validates the token independently. Don't create an "auth service" that every request must hit.
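Here's a sketch of what that independent validation can look like with HS256 and Node's built-in crypto. In practice you'd likely reach for a maintained JWT library and asymmetric keys; the function names here are illustrative, and shared-secret distribution is out of scope:

```typescript
import { createHmac, timingSafeEqual } from "node:crypto";

// base64url encoding, as required by the JWT format.
const b64url = (buf: Buffer) =>
  buf.toString("base64").replace(/\+/g, "-").replace(/\//g, "_").replace(/=+$/, "");

// Issue a token (done once, e.g. by the login endpoint).
export function signToken(payload: object, secret: string): string {
  const header = b64url(Buffer.from(JSON.stringify({ alg: "HS256", typ: "JWT" })));
  const body = b64url(Buffer.from(JSON.stringify(payload)));
  const sig = b64url(createHmac("sha256", secret).update(`${header}.${body}`).digest());
  return `${header}.${body}.${sig}`;
}

// Each service runs this locally on every request — no network hop to a
// central auth service.
export function verifyToken(token: string, secret: string): object | null {
  const [header, body, sig] = token.split(".");
  if (!header || !body || !sig) return null;
  const expected = b64url(createHmac("sha256", secret).update(`${header}.${body}`).digest());
  const a = Buffer.from(sig);
  const b = Buffer.from(expected);
  // Constant-time comparison to avoid timing attacks.
  if (a.length !== b.length || !timingSafeEqual(a, b)) return null;
  return JSON.parse(Buffer.from(body, "base64url").toString());
}
```

Because validation is stateless, the auth check adds no inter-service latency and no single point of failure.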

What about service discovery?

On RaidFrame, services are discoverable at service-name.internal. No Consul, no Eureka, no DNS configuration.

When should I introduce a message queue?

When you need to decouple producers from consumers, handle work asynchronously, or retry failed operations. Use RaidFrame's built-in queues — no external service needed.

