Threshold alerts, anomaly detection, and multi-channel notifications.
```shell
rf alerts create \
  --name "High Error Rate" \
  --metric error_rate \
  --service api \
  --threshold "> 5%" \
  --window 5m \
  --notify slack,email
```

```
✓ Alert created: High Error Rate
  Trigger: api.error_rate > 5% for 5 minutes
  Notify: #ops (Slack), [email protected]
```
Trigger when a metric crosses a threshold:
```shell
rf alerts create --name "CPU Spike" --metric cpu_percent --service web --threshold "> 90%" --window 3m
rf alerts create --name "Memory Critical" --metric memory_percent --service api --threshold "> 95%" --window 1m
rf alerts create --name "Slow Responses" --metric response_time_p99 --service api --threshold "> 1000ms" --window 5m
rf alerts create --name "Queue Backlog" --metric queue_depth --service tasks --threshold "> 1000" --window 10m
rf alerts create --name "Disk Full" --metric disk_percent --threshold "> 85%" --window 5m
```
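The `--window` flag means a threshold breach must persist before the alert fires, which filters out momentary spikes. The exact evaluation rule isn't documented here; a minimal sketch, assuming one sample per minute and that every sample in the window must breach:

```python
from collections import deque

def make_threshold_checker(threshold: float, window_samples: int):
    """Return a checker that fires only after `window_samples` consecutive
    readings exceed `threshold` -- a stand-in for the semantics of
    --threshold "> 5%" --window 5m, assuming one sample per minute."""
    recent = deque(maxlen=window_samples)

    def check(value: float) -> bool:
        recent.append(value)
        # Fire only when the window is full and every sample breaches.
        return len(recent) == window_samples and all(v > threshold for v in recent)

    return check

check = make_threshold_checker(threshold=5.0, window_samples=5)
readings = [2.1, 6.3, 7.0, 8.2, 9.1, 8.8]  # error_rate % per minute
fired = [check(v) for v in readings]
print(fired)  # [False, False, False, False, False, True]
```

Only the sixth reading fires: it is the first moment at which the last five samples are all above 5%.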
RaidFrame learns your traffic patterns and alerts on deviations:
```shell
rf alerts create --name "Traffic Anomaly" --metric request_count --service web --type anomaly --sensitivity medium
```
There is no threshold to set: RaidFrame detects unusual spikes or drops automatically based on historical patterns.
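RaidFrame's actual anomaly model isn't specified here. To make the idea concrete, here is a minimal z-score sketch: flag the current value when it deviates from the historical mean by more than some number of standard deviations (the `sensitivity` parameter below is an illustrative stand-in for low/medium/high):

```python
import statistics

def is_anomalous(history: list[float], current: float, sensitivity: float = 3.0) -> bool:
    """Flag `current` when it sits more than `sensitivity` standard
    deviations from the historical mean. A toy stand-in for whatever
    model RaidFrame actually uses."""
    mean = statistics.fmean(history)
    stdev = statistics.pstdev(history)
    if stdev == 0:
        return current != mean
    return abs(current - mean) / stdev > sensitivity

# Hourly request counts hovering around 1000:
history = [980, 1010, 995, 1005, 990, 1020, 1000, 985]
print(is_anomalous(history, 1005))  # normal traffic -> False
print(is_anomalous(history, 3500))  # sudden spike  -> True
```

A real detector would also account for seasonality (daily and weekly traffic cycles), which a single mean/stdev pair cannot capture.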
Auto-notify on deployment events:
```shell
rf alerts create --name "Deploy Notify" --type deployment --events start,success,failure,rollback --notify slack
```
Monitor external endpoints:
```shell
rf alerts create --name "Site Down" --type uptime --url https://myapp.com --interval 30s --notify pagerduty
```
Register the channels that alerts can notify:

```shell
rf alerts channel add slack --webhook https://hooks.slack.com/services/xxx --name ops-channel
rf alerts channel add email --address [email protected] --name oncall
rf alerts channel add email --address [email protected] --name escalation
rf alerts channel add pagerduty --integration-key xxx --name critical
rf alerts channel add webhook --url https://myapi.com/alerts --secret xxx --name custom
```
Webhook payload:
```json
{
  "alert": "High Error Rate",
  "status": "firing",
  "service": "api",
  "metric": "error_rate",
  "value": 8.2,
  "threshold": 5,
  "started_at": "2026-03-16T14:30:00Z",
  "project": "my-saas",
  "environment": "production",
  "dashboard_url": "https://dash.raidframe.com/my-team/my-saas/alerts/a_123"
}
```
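On the receiving side, you'll want to verify that a request really came from RaidFrame before acting on it. How the `--secret` is applied isn't documented here; a common webhook pattern is an HMAC-SHA256 of the raw body sent in a request header. A minimal sketch under that assumption (the header name and signing scheme are hypothetical):

```python
import hashlib
import hmac
import json

SECRET = b"xxx"  # the value passed to --secret when registering the channel

def verify_signature(body: bytes, signature_header: str) -> bool:
    """Compare a hex HMAC-SHA256 of the raw body against the header value.
    The signing scheme is an assumption, not documented RaidFrame behavior."""
    expected = hmac.new(SECRET, body, hashlib.sha256).hexdigest()
    # compare_digest avoids leaking information through timing differences.
    return hmac.compare_digest(expected, signature_header)

def handle_alert(body: bytes, signature_header: str) -> str:
    if not verify_signature(body, signature_header):
        return "rejected"
    payload = json.loads(body)
    return f'{payload["alert"]} is {payload["status"]} on {payload["service"]}'

body = json.dumps({"alert": "High Error Rate", "status": "firing",
                   "service": "api", "value": 8.2}).encode()
sig = hmac.new(SECRET, body, hashlib.sha256).hexdigest()
print(handle_alert(body, sig))  # High Error Rate is firing on api
```

Always compute the HMAC over the raw request bytes, not over a re-serialized copy of the parsed JSON, since key ordering and whitespace differences would change the digest.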
Opsgenie is also supported as a channel:

```shell
rf alerts channel add opsgenie --api-key xxx --name oncall
```
```shell
# List all alerts
rf alerts list

# View alert history
rf alerts history "High Error Rate"

# Mute an alert temporarily
rf alerts mute "High Error Rate" --duration 2h --reason "Known deploy in progress"

# Delete an alert
rf alerts delete "High Error Rate"

# Test a notification channel
rf alerts test ops-channel
```
Route alerts to different channels based on severity and duration:
```yaml
alerts:
  high-error-rate:
    metric: error_rate
    service: api
    threshold: "> 5%"
    window: 5m
    escalation:
      - after: 0m
        notify: [slack:ops-channel]
      - after: 15m
        notify: [email:oncall, pagerduty:critical]
      - after: 30m
        notify: [email:escalation, slack:leadership]
```
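The intended reading of an escalation ladder like this: each step's channels start being notified once the alert has been firing past that step's `after` mark. Whether later steps replace or add to earlier ones isn't stated; a minimal sketch assuming steps are cumulative:

```python
def channels_due(escalation: list[dict], minutes_firing: int) -> list[str]:
    """Return every channel whose escalation step has been reached,
    assuming steps accumulate rather than replace earlier ones."""
    due = []
    for step in escalation:
        if minutes_firing >= step["after"]:
            due.extend(step["notify"])
    return due

ladder = [
    {"after": 0,  "notify": ["slack:ops-channel"]},
    {"after": 15, "notify": ["email:oncall", "pagerduty:critical"]},
    {"after": 30, "notify": ["email:escalation", "slack:leadership"]},
]
print(channels_due(ladder, 5))   # ['slack:ops-channel']
print(channels_due(ladder, 20))  # adds the on-call email and PagerDuty
```

At 5 minutes only the Slack channel is paged; by 20 minutes the on-call rotation joins; past 30 minutes leadership is looped in as well.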
Get notified before hitting spending limits:
```shell
rf budget set 500 --notify email:[email protected] --at 50%,80%,100%
```

```
✓ Budget set: $500/mo
  Alerts at: $250 (50%), $400 (80%), $500 (100%)
  Notify: [email protected]
```