Overview
The DNSRadar API implements rate limiting to ensure fair usage and maintain service quality for all users. Rate limits are applied per API key. All rate limits use a sliding window algorithm, meaning your available quota replenishes continuously rather than resetting at fixed intervals.
Rate Limit Tiers
Standard Endpoints
Most API endpoints follow these rate limits:

Read Operations
- 20 requests per minute
- Applies to all GET requests

Write Operations
- 5 requests per minute
- Applies to POST, PUT, PATCH, and DELETE requests

Bulk Import Endpoints
The following endpoints have higher limits to support bulk operations:

Monitor Creation
- 250 requests per minute
- Applies to: POST /monitors, POST /monitors/bulk

The POST /monitors/bulk endpoint accepts up to 1,000 monitors per request, allowing you to theoretically import 250,000 monitors per minute when used efficiently.

Rate Limit Headers
Every API response includes headers that provide information about your current rate limit status:

X-RateLimit-Limit
The maximum number of requests you can make per minute for this endpoint.
Example: 20

X-RateLimit-Remaining
The number of requests remaining in the current rate limit window.
Example: 15

X-RateLimit-Reset
Unix timestamp (in seconds) indicating when the rate limit window resets.
Example: 1704636000

Example Response Headers
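Using the example values above (the X-RateLimit-Remaining name appears later in this guide; the limit and reset header names are assumed to follow the same X-RateLimit-* convention), a successful response might carry:

```
HTTP/1.1 200 OK
Content-Type: application/json
X-RateLimit-Limit: 20
X-RateLimit-Remaining: 15
X-RateLimit-Reset: 1704636000
```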
Handling Rate Limits
Rate Limit Exceeded Response
When you exceed the rate limit, you'll receive a 429 Too Many Requests response:
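The exact response body is not reproduced here; a typical JSON error payload for a 429 might look like the following (field names are illustrative, not confirmed by the API reference):

```
HTTP/1.1 429 Too Many Requests
Content-Type: application/json
X-RateLimit-Remaining: 0

{
  "error": "rate_limit_exceeded",
  "message": "Rate limit exceeded. Please retry after the window resets."
}
```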
Best Practices
Monitor Rate Limit Headers
Always check the X-RateLimit-Remaining header and proactively slow down requests before hitting the limit.
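As a minimal sketch of that advice (the helper name and threshold are our own, not part of the API), the pause before the next request can be derived from the response headers:

```python
import time


def throttle_delay(headers, threshold=2, now=None):
    """Return seconds to wait before the next request, based on the
    X-RateLimit-Remaining and X-RateLimit-Reset response headers.

    Returns 0.0 while plenty of quota remains; otherwise waits until
    the reported reset time.
    """
    remaining = int(headers.get("X-RateLimit-Remaining", "1"))
    if remaining > threshold:
        return 0.0
    reset_at = int(headers.get("X-RateLimit-Reset", "0"))
    now = time.time() if now is None else now
    return max(0.0, reset_at - now)
```

Call `time.sleep(throttle_delay(response.headers))` after each request to slow down gracefully instead of running into a 429.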
Implement Exponential Backoff
When you receive a 429 response, wait before retrying with increasing delays between attempts.
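A simple way to implement this is to precompute the delay schedule; the sketch below (base, cap, and jitter values are our own choices) doubles the delay on each retry and adds a little jitter so many clients don't retry in lockstep:

```python
import random


def backoff_delays(max_retries=5, base=1.0, cap=60.0):
    """Yield exponentially increasing delays (in seconds, with jitter)
    for retrying after a 429 Too Many Requests response."""
    for attempt in range(max_retries):
        delay = min(cap, base * (2 ** attempt))
        # Add up to 10% jitter to avoid synchronized retries.
        yield delay + random.uniform(0, delay * 0.1)
```

A retry loop would sleep for each yielded delay in turn, giving up once the generator is exhausted.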
Use Bulk Endpoints
When creating multiple monitors, use the POST /monitors/bulk endpoint instead of making individual POST /monitors requests.
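As a sketch, assuming a hypothetical post(path, body) helper for authenticated requests, batching in chunks of 1,000 (the documented per-request maximum) might look like:

```python
def chunked(items, size=1000):
    """Split a list into consecutive batches of at most `size` items."""
    for i in range(0, len(items), size):
        yield items[i:i + size]


def import_monitors(post, monitors):
    """Instead of one POST /monitors call per monitor, send monitors
    in batches of up to 1,000 via the bulk endpoint."""
    for batch in chunked(monitors):
        post("/monitors/bulk", {"monitors": batch})
```

Importing 2,500 monitors this way takes 3 requests instead of 2,500, staying well inside the 250 requests-per-minute bulk limit.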
Cache GET Requests
Cache responses from GET requests when appropriate to reduce unnecessary API calls.
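One lightweight approach (the class and TTL value are our own, not an API feature) is a small time-to-live cache keyed by request path:

```python
import time


class TTLCache:
    """Cache GET responses for a short time to avoid spending rate
    limit quota on repeated identical requests."""

    def __init__(self, ttl=30.0):
        self.ttl = ttl
        self._store = {}

    def get(self, path, fetch, now=None):
        """Return a cached response for `path` if it is still fresh;
        otherwise call `fetch(path)` and cache the result."""
        now = time.time() if now is None else now
        hit = self._store.get(path)
        if hit is not None and now - hit[1] < self.ttl:
            return hit[0]
        value = fetch(path)
        self._store[path] = (value, now)
        return value
```

Only cache endpoints whose data tolerates being slightly stale, and keep the TTL short relative to how often the underlying data changes.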
Distribute Requests Over Time
Instead of sending bursts of requests, distribute them evenly across the rate limit window.
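A simple pacing helper (our own sketch, not an API feature) enforces a minimum interval between calls, derived from the per-minute limit:

```python
import time


def paced(requests_per_minute):
    """Return a wait() function that sleeps just long enough to spread
    calls evenly across the rate limit window instead of bursting."""
    interval = 60.0 / requests_per_minute
    last = [None]  # timestamp of the most recent (paced) request

    def wait(now=None, sleep=time.sleep):
        now = time.monotonic() if now is None else now
        if last[0] is not None:
            delay = max(0.0, last[0] + interval - now)
            if delay:
                sleep(delay)
            now += delay
        last[0] = now

    return wait
```

For the standard 20-requests-per-minute read limit, calling wait() before each GET spaces requests at least 3 seconds apart.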

