Alerts & Notifications
Get notified immediately when your routes experience failures, high latency, unauthorized access attempts, or rate limit issues.

What Are Alerts?
Alerts monitor your routes in real time and send notifications when specific conditions are met. Use alerts to:
- 🚨 Detect API integration failures instantly
- ⏱️ Monitor response time degradation
- 🔒 Track unauthorized access attempts
- 📊 Catch rate limit violations
- 🛠️ Resolve issues proactively before customers complain
Alert Types
KnoxCall supports four types of alerts:

1. Request Failures
What it monitors: Failed requests (5xx status codes)
Common scenarios:
- Backend API is down (502, 503, 504)
- Internal server errors (500)
- Backend timeout issues
2. High Latency
What it monitors: Request response times (P95 percentile)
Common scenarios:
- Slow backend database queries
- Network issues
- Service degradation
Percentiles explained (see the sketch below):
- P50 (median): 50% of requests are faster than this
- P95: 95% of requests are faster than this (catches the slowest 5%)
- P99: 99% of requests are faster than this (extreme outliers)
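If percentiles are unfamiliar, the snippet below (plain Python, not KnoxCall code) shows how P50/P95/P99 would be computed from a batch of response times, and why a P95 alert reacts to the slowest 5% of traffic rather than the typical request.

```python
# Rough illustration of what P50/P95/P99 mean, using Python's built-in
# statistics module (illustrative only, not KnoxCall code).
from statistics import quantiles

latencies_ms = [120, 95, 110, 130, 480, 105, 125, 2200, 115, 140]

# quantiles(..., n=100) returns the 1st..99th percentile cut points.
cuts = quantiles(latencies_ms, n=100)
p50, p95, p99 = cuts[49], cuts[94], cuts[98]

print(f"P50={p50:.0f}ms  P95={p95:.0f}ms  P99={p99:.0f}ms")
# P95 = 95% of requests were faster than this value, so a P95 alert
# ignores the median and reacts to the slowest 5% of traffic.
```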
3. Rate Limit Exceeded
What it monitors: Requests hitting your configured rate limits
Common scenarios:
- Client sending too many requests
- Runaway script or bot
- DDoS attempt
4. Unauthorized Client
What it monitors: Requests from non-whitelisted IP addresses
Common scenarios:
- Security breach attempt
- Client using wrong IP after server migration
- Misconfigured firewall
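Conceptually, the check behind this alert type is an IP whitelist lookup. The sketch below is only an illustration of that idea in Python (the whitelist addresses are made up), not KnoxCall's implementation.

```python
# Conceptual sketch of a whitelist check (illustrative only).
import ipaddress

# Example whitelist: a documentation-range subnet and a single host.
whitelist = [
    ipaddress.ip_network("203.0.113.0/24"),
    ipaddress.ip_network("198.51.100.7/32"),
]

def is_authorized(client_ip: str) -> bool:
    """Return True if the client IP falls inside any whitelisted network."""
    ip = ipaddress.ip_address(client_ip)
    return any(ip in network for network in whitelist)

print(is_authorized("203.0.113.42"))  # True  -> request allowed
print(is_authorized("192.0.2.9"))     # False -> would raise an Unauthorized Client alert
```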
Notification Channels
Alerts can be sent through multiple channels.

Email Notifications
Configuration: add the recipient email addresses for notifications.

SMS Notifications
Configuration: add the recipient phone numbers for notifications.

Slack Notifications
Configuration: provide a Slack incoming webhook URL (it starts with https://hooks.slack.com).
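Before attaching a webhook to an alert, it can be worth confirming the URL accepts messages at all. Here is a minimal sketch using Python's requests library and Slack's standard incoming-webhook payload; the webhook URL is a placeholder to replace with your own.

```python
# Minimal check that a Slack incoming webhook URL accepts messages.
# Replace the placeholder URL with the webhook you plan to use in KnoxCall.
import requests

webhook_url = "https://hooks.slack.com/services/XXX/YYY/ZZZ"  # placeholder

resp = requests.post(webhook_url, json={"text": "KnoxCall alert channel test"})
resp.raise_for_status()  # Slack returns 200 with body "ok" on success
print(resp.status_code, resp.text)
```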
Creating Your First Alert
Step 1: Navigate to Alerts
- Click Monitoring in the sidebar
- Select Alerts
- Click + Create Alert
Step 2: Choose a Template
KnoxCall provides pre-configured templates.

Integration Failure (Recommended)
- Alert on any 5xx error
- Severity: High
- Cooldown: 15 minutes
- Great for catching backend issues immediately

Other templates include:
- Alert on 5+ failures within 5 minutes. Severity: Critical. Cooldown: 30 minutes. Best for production environments.
- Alert when P95 latency exceeds 2 seconds. Severity: Medium. Cooldown: 20 minutes. Good for performance monitoring.
- Immediate alert on an unauthorized IP. Severity: High. Cooldown: 60 minutes. Essential for security.
Step 3: Configure Alert
Route: Select the route to monitor
Severity: Choose how urgent the alert is:
- Low: Informational, review later
- Medium: Important, check within hours
- High: Urgent, check within 30 minutes
- Critical: Emergency, check immediately
Step 4: Configure Notifications
- Enable Email
- Enable SMS (optional)
- Enable Slack (optional)

At least one channel is required. You can enable multiple channels to ensure alerts are seen.
Step 5: Custom Message Templates (Optional)
Customize the notification content. The following variables are available (for example, in the email subject template):
- {{ALERT_NAME}} - Alert name
- {{ROUTE_NAME}} - Route name
- {{SEVERITY}} - low/medium/high/critical
- {{TIMESTAMP}} - When the alert triggered
- {{CONDITION_DESCRIPTION}} - Human-readable condition
- {{TRIGGER_DETAILS}} - Specific trigger info (errors, latency, etc.)
- {{ALERT_ID}} - Alert ID
- {{ROUTE_ID}} - Route ID
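As an illustration only (the product's default template isn't reproduced here), a subject template combining these variables might look like:

```
[{{SEVERITY}}] {{ALERT_NAME}} triggered on {{ROUTE_NAME}} at {{TIMESTAMP}}
```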
Step 6: Create Alert
Click Create Alert. The alert is now active and monitoring your route! 🎉

Alert States
Alerts have three states:

1. OK (Green)
Meaning: Condition is not met; everything is normal.

2. Triggered (Red)
Meaning: Condition met, notification sent.

3. Cooldown (Yellow)
Meaning: Recently triggered, waiting for the cooldown period to expire. The cooldown:
- Prevents notification spam
- Gives time to fix issue
- Won’t trigger again until cooldown expires
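The cooldown behaviour is easiest to see as a small state cycle. The sketch below is a conceptual illustration of OK → Triggered → Cooldown, not KnoxCall's implementation; the threshold and cooldown values are example numbers taken from the templates above.

```python
# Conceptual sketch of the alert state cycle (illustrative only).
import time

COOLDOWN_SECONDS = 15 * 60   # e.g. the 15-minute template default
FAILURE_THRESHOLD = 5        # e.g. 5 failures within the window

last_triggered = None        # None means the alert has never fired

def evaluate(failures_in_window: int) -> str:
    """Return the alert state for the current evaluation window."""
    global last_triggered
    now = time.time()

    # Cooldown: recently triggered, so suppress repeat notifications.
    if last_triggered is not None and now - last_triggered < COOLDOWN_SECONDS:
        return "COOLDOWN"

    # Triggered: condition met and no active cooldown -> notify.
    if failures_in_window >= FAILURE_THRESHOLD:
        last_triggered = now
        return "TRIGGERED"  # notifications would be sent here

    # OK: condition not met.
    return "OK"
```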
Advanced Configuration
Condition Schemas
Different alert types have different configuration options:

Request Failures Schema
High Latency Schema
Rate Limit Exceeded Schema
Unauthorized Client Schema
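The individual schema contents aren't reproduced above. Purely as a rough illustration of how the settings discussed in this guide (thresholds, time windows, percentiles, status codes) could fit together, here is a hypothetical shape expressed as Python dicts; every field name is an assumption for illustration, not KnoxCall's actual schema.

```python
# Hypothetical condition shapes -- field names are illustrative only,
# not KnoxCall's actual configuration schema.
request_failures_condition = {
    "type": "request_failures",
    "status_codes": [500, 502, 503, 504],  # multi-status code filtering
    "threshold": 5,                        # failures needed to trigger
    "window_minutes": 5,                   # evaluation window
}

high_latency_condition = {
    "type": "high_latency",
    "percentile": "p95",                   # p50, p95, or p99
    "threshold_ms": 2000,                  # trigger when P95 exceeds 2s
}
```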
Multi-Status Code Filtering
Include only specific error codes (for example, 502, 503, and 504 rather than all 5xx responses).

Latency Percentiles
Choose which percentile the latency condition evaluates: P50 (median), P95, or P99 (see the definitions under High Latency above).

Alert Management
Viewing Alert Status
Navigate to: Monitoring → Alerts

List view shows:
- Alert name and description
- Route name
- Current state (OK / Triggered / Cooldown)
- Severity
- Trigger count (24h)
- Last triggered time
- Enabled/Disabled status
Filter by:
- Severity: Low, Medium, High, Critical
- State: OK, Triggered, Cooldown
- Enabled: All, Enabled, Disabled
- Alert Type: Failures, Latency, Rate Limit, Unauthorized
- Activity: 0 triggers, 1-5, 6-20, 20+
Viewing Alert Details
Click an alert name to see:

Overview:
- Current state
- Trigger history graph
- Recent triggers list
Configuration:
- Alert type and conditions
- Notification channels
- Cooldown and aggregation settings
For each recent trigger:
- When the alert triggered
- Notification delivery status
- Error details that triggered alert
Editing Alerts
- Navigate to alert details
- Click Edit Alert
- Modify settings:
- Change threshold
- Update notification emails/phones
- Adjust cooldown
- Change severity
- Click Save Changes
Disabling Alerts
Temporarily disable:
- Navigate to alert details
- Toggle Enabled switch to OFF
- Alert stops monitoring (won’t trigger)
Common reasons to disable:
- Scheduled maintenance
- Known issue being fixed
- Testing changes without spam
To re-enable:
- Toggle Enabled switch to ON
- Alert resumes monitoring
Deleting Alerts
- Navigate to alert details
- Click Delete Alert
- Confirm deletion
Alert Logs
View alert trigger history.

Navigate to: Monitoring → Alert Logs

Shows:
- When alert triggered
- Which route
- Severity
- Trigger details (error messages, latency values, etc.)
- Notification channels used
- Delivery status (sent, failed)
Filter by:
- Date range
- Route
- Severity
- Alert name
Use alert logs to:
- Audit notification history
- Troubleshoot missed alerts
- Analyze incident patterns
Best Practices
1. Start with Templates
✅ Use built-in templates when creating your first alerts.

Why:
- Pre-configured with sensible defaults
- Battle-tested thresholds
- Clear descriptions
2. Set Appropriate Thresholds
Too sensitive (spam): alerting on every single failure within a one-minute window floods your channels. If alerts fire too often, raise the threshold, widen the window, or lengthen the cooldown (see Troubleshooting below).

3. Use Severity Correctly
Critical:
- Production payment processing down
- Complete API outage
- Security breach attempt

High:
- Partial service degradation
- Single integration failing
- Elevated error rate

Medium:
- Performance degradation
- Non-critical API slow
- Occasional errors

Low:
- Informational
- Minor issues
- For tracking/trending
4. Configure Cooldowns
Problem: Alert triggers every minute → 60 emails in 1 hour
Solution: Use a cooldown
- Critical alerts: 30-60 minutes
- High alerts: 15-30 minutes
- Medium alerts: 60 minutes
- Low alerts: 2-4 hours
5. Use Multiple Notification Channels
Redundancy strategy:
- Email might be missed
- SMS ensures immediate attention
- Slack allows team collaboration
6. Test Your Alerts
Before going live:
1. Create a test alert with a low threshold.
2. Trigger the condition (e.g., send a request that returns 500; see the sketch below).
3. Verify notifications are received:
   - Check email inbox
   - Check SMS received
   - Check Slack message
4. Adjust configuration if needed.
5. Set production thresholds.
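One simple way to trigger the failure condition during a test is to send a few requests you know will fail. The sketch below assumes a KnoxCall route URL of your own pointing at a backend path that returns 500; both values are placeholders.

```python
# Send a handful of failing requests to trip a low-threshold test alert.
# The route URL and failing path are placeholders for your own setup.
import requests

route_url = "https://your-knoxcall-route.example.com/intentionally-broken"

for i in range(5):
    resp = requests.get(route_url, timeout=10)
    print(f"request {i + 1}: status {resp.status_code}")  # expect 5xx responses
```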
7. Monitor Alert Logs
Weekly review:
- Which alerts triggered most?
- Any false positives?
- Any missed incidents?
8. Document Your Alerts
In the alert description, document what the alert means and how to respond. Benefits:
- Team knows what the alert means
- Clear action steps
- Faster incident resolution
Common Alert Scenarios
Scenario 1: Backend API Outage
Problem: Stripe API completely down
Alert Configuration: Request Failures alert with a threshold of 3 failures
What happens:
- 3 failures → Alert triggers
- Notifications sent to all channels
- Team investigates
- Issue resolved or escalated to Stripe
- 30 minutes later, if still failing, alert again
Scenario 2: Gradual Performance Degradation
Problem: Database queries getting slower over time
Alert Configuration: High Latency alert on P95 latency with a 1.5 s threshold
What happens:
- P95 latency hits 1.5s → Alert triggers
- Team reviews logs
- Identifies slow query
- Optimizes or adds caching
- Latency returns to normal
Scenario 3: Security Incident
Problem: Unknown IP trying to access an internal API
Alert Configuration: Unauthorized Client alert
What happens:
- Unauthorized IP makes request → Immediate alert
- Security team reviews
- IP blocked if malicious
- Client contacted if legitimate (e.g., moved servers)
Scenario 4: Rate Limit Abuse
Problem: Client's script has gone rogue and is hitting rate limits
Alert Configuration: Rate Limit Exceeded alert (5 hits within 10 minutes)
What happens:
- Client hits rate limit 5 times in 10 min → Alert
- Review which client
- Contact client
- They fix infinite loop
- Rate limit stops triggering
Troubleshooting
Issue: “Alert not triggering”
Check:
- Alert is enabled (not disabled)
- Route is active (not disabled)
- Condition threshold is correct (not too high)
- Check alert state (might be in cooldown)
Issue: “Too many notifications”
Cause: Threshold too low or cooldown too short
Fix:
- Increase threshold: 1 → 5
- Increase cooldown: 5 minutes → 30 minutes
- Increase window: 1 minute → 5 minutes
Issue: “Not receiving email notifications”
Check:
- Email addresses correct (no typos)
- Check spam folder
- Email channel enabled
- Alert logs show “sent” status
Issue: “Slack notifications not working”
Check:
- Webhook URL correct (starts with https://hooks.slack.com)
- Slack channel enabled in alert config
- Webhook not revoked in Slack settings
Related Features
- API Logs: View detailed request history that triggered alerts
- Analytics: Visualize trends and patterns in alert triggers
- Audit Logs: Track who created/modified alerts
Next Steps
- API Logs: View requests that triggered alerts
- Analytics: Analyze alert patterns and trends
- Audit Logs: Track alert configuration changes
- Routes: Configure routes to monitor
📊 Statistics
- Level: beginner to intermediate
- Time: 15 minutes
🏷️ Tags
alerts, monitoring, notifications, incidents, email, sms, slack