# Rate limiting
Reqon provides adaptive rate limiting that learns from API responses and respects rate limit headers.
## Source-level configuration

```
source API {
  auth: bearer,
  base: "https://api.example.com",
  rateLimit: {
    requestsPerMinute: 60,
    strategy: "pause"
  }
}
```
## Rate limit options

| Option | Description | Default |
|---|---|---|
| `requestsPerMinute` | Maximum requests per minute | `60` |
| `strategy` | How to handle limits | `"pause"` |
| `maxWait` | Maximum wait time (ms) | `60000` |
## Strategies

### Pause strategy

Wait when the rate limit is reached:

```
source API {
  auth: bearer,
  base: "https://api.example.com",
  rateLimit: {
    requestsPerMinute: 60,
    strategy: "pause"
  }
}
```
When the limit is reached:

1. Reqon pauses execution
2. Waits until the rate limit window resets
3. Continues with the next request
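Conceptually, the pause strategy behaves like a sliding-window limiter. The sketch below is illustrative only (not Reqon's implementation); the class name and the fake-clock demo are hypothetical:

```python
import time
from collections import deque

class PauseLimiter:
    """Sliding-window limiter: block until the oldest request leaves the window."""

    def __init__(self, requests_per_minute, window=60.0,
                 clock=time.monotonic, sleep=time.sleep):
        self.limit = requests_per_minute
        self.window = window
        self.clock = clock
        self.sleep = sleep
        self.sent = deque()  # timestamps of recent requests

    def acquire(self):
        now = self.clock()
        # Drop timestamps that have aged out of the window
        while self.sent and now - self.sent[0] >= self.window:
            self.sent.popleft()
        if len(self.sent) >= self.limit:
            # Pause until the oldest request leaves the window, then proceed
            self.sleep(self.window - (now - self.sent[0]))
            self.sent.popleft()
        self.sent.append(self.clock())

# Demo with a fake clock: 2 requests/minute, so the third call must pause 60 s
t, waits = [0.0], []
def fake_sleep(seconds):
    waits.append(seconds)
    t[0] += seconds
limiter = PauseLimiter(2, clock=lambda: t[0], sleep=fake_sleep)
limiter.acquire()
limiter.acquire()  # still inside the budget, no wait
limiter.acquire()  # budget exhausted: pauses for the full window
```

Note that the limiter only waits when the budget is actually exhausted; the first two calls go through immediately.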
### Throttle strategy

Slow down requests proactively:

```
source API {
  auth: bearer,
  base: "https://api.example.com",
  rateLimit: {
    requestsPerMinute: 60,
    strategy: "throttle"
  }
}
```
Automatically spaces requests to stay within limits.
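The spacing is simple arithmetic: a per-minute budget turns into a fixed gap between requests. A minimal sketch (the function name is hypothetical, not part of Reqon):

```python
def throttle_interval(requests_per_minute: float) -> float:
    """Minimum spacing between requests that keeps the per-minute budget."""
    return 60.0 / requests_per_minute

# 60 requests/minute -> one request every second
interval_60 = throttle_interval(60)
# 100 requests/minute -> one request every 0.6 seconds
interval_100 = throttle_interval(100)
```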
### Fail strategy

Throw an error when the limit is reached:

```
source API {
  auth: bearer,
  base: "https://api.example.com",
  rateLimit: {
    requestsPerMinute: 60,
    strategy: "fail"
  }
}
```
Use with error handling:

```
action FetchWithRateLimitHandling {
  get "/data"
  match response {
    { error: "rate_limit" } -> retry { delay: 60000 },
    _ -> continue
  }
}
```
## Response header support
Reqon automatically reads standard rate limit headers:
| Header | Description |
|---|---|
| `X-RateLimit-Limit` | Maximum requests allowed |
| `X-RateLimit-Remaining` | Requests remaining in the window |
| `X-RateLimit-Reset` | When the window resets |
| `Retry-After` | Seconds to wait before retrying |
### Header parsing

```
HTTP/1.1 429 Too Many Requests
X-RateLimit-Limit: 100
X-RateLimit-Remaining: 0
X-RateLimit-Reset: 1705752000
Retry-After: 60
```
Reqon automatically:

- Pauses for 60 seconds (from `Retry-After`)
- Updates internal limit tracking
- Retries the request
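The extraction step amounts to reading a few header values. A rough Python sketch (illustrative only; it assumes `Retry-After` is in delta-seconds form, though the header can also carry an HTTP-date):

```python
def parse_rate_limit_headers(headers: dict) -> dict:
    """Pull the standard rate limit fields out of a response's headers.

    Missing headers yield None; a real client would also handle the
    HTTP-date form of Retry-After.
    """
    def to_int(name):
        value = headers.get(name)
        return int(value) if value is not None else None

    return {
        "limit": to_int("X-RateLimit-Limit"),
        "remaining": to_int("X-RateLimit-Remaining"),
        "reset": to_int("X-RateLimit-Reset"),
        "retry_after": to_int("Retry-After"),
    }

# Headers from the 429 response above
info = parse_rate_limit_headers({
    "X-RateLimit-Limit": "100",
    "X-RateLimit-Remaining": "0",
    "X-RateLimit-Reset": "1705752000",
    "Retry-After": "60",
})
```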
## Adaptive rate limiting
Reqon learns from API responses:
```
source API {
  auth: bearer,
  base: "https://api.example.com",
  rateLimit: {
    requestsPerMinute: 100, // Initial estimate
    strategy: "pause",
    adaptive: true // Learn from responses
  }
}
```
With `adaptive: true`:
- Reqon monitors response headers
- Adjusts request pacing dynamically
- Backs off before hitting limits
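One common way to pace adaptively is to spread the remaining budget over the time left in the window. A hypothetical sketch of that idea (not Reqon's actual algorithm):

```python
def adaptive_delay(remaining: int, reset_in: float, floor: float = 0.0) -> float:
    """Spread the remaining request budget evenly over the rest of the window.

    remaining: requests left per X-RateLimit-Remaining
    reset_in:  seconds until the window resets
    """
    if remaining <= 0:
        return reset_in  # budget exhausted: wait out the window
    return max(floor, reset_in / remaining)

# 30 requests left and 15 s until reset -> pace at one request per 0.5 s
delay = adaptive_delay(30, 15.0)
# nothing left -> back off for the remaining 12 s of the window
backoff = adaptive_delay(0, 12.0)
```

Because the delay grows as the budget shrinks, the client slows down before it ever hits the limit.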
## Per-endpoint rate limits
Some APIs have different limits per endpoint:
```
mission APISync {
  source API {
    auth: bearer,
    base: "https://api.example.com",
    rateLimit: { requestsPerMinute: 100 }
  }

  action FetchUsers {
    // Standard endpoint - uses the default limit
    get "/users"
  }

  action FetchReports {
    // Heavy endpoint - tighter limit
    get "/reports" {
      rateLimit: { requestsPerMinute: 10 }
    }
  }
}
```
## Combining with pagination

```
get "/items" {
  paginate: offset(offset, 100),
  until: length(response.items) == 0
}
```
Rate limiting applies to each page request, not just the action.
## Combining with retry

```
source API {
  auth: bearer,
  base: "https://api.example.com",
  rateLimit: {
    requestsPerMinute: 60,
    strategy: "pause"
  }
}

action Fetch {
  get "/data" {
    retry: {
      maxAttempts: 5,
      backoff: exponential
    }
  }
}
```
Order of operations:

1. The rate limiter checks whether the request is allowed
2. If not, it pauses (based on the strategy)
3. The request is made
4. If it fails, retry logic kicks in
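That ordering can be sketched as a loop where the limiter gates every attempt, including retries. Illustrative Python only; the function and the flaky demo request are hypothetical, not Reqon internals:

```python
import time

def fetch_with_limits(request, acquire, max_attempts=5,
                      base_delay=1.0, sleep=time.sleep):
    """Gate every attempt through the rate limiter; back off between retries."""
    for attempt in range(max_attempts):
        acquire()                   # steps 1-2: ask the limiter, pausing if needed
        try:
            return request()        # step 3: make the request
        except Exception:
            if attempt == max_attempts - 1:
                raise               # retries exhausted
            sleep(base_delay * 2 ** attempt)  # step 4: exponential backoff

# Demo: the request fails twice, then succeeds
acquires, delays, attempts = [], [], []
def flaky_request():
    attempts.append(1)
    if len(attempts) < 3:
        raise RuntimeError("transient failure")
    return "ok"
result = fetch_with_limits(flaky_request, lambda: acquires.append(1),
                           sleep=delays.append)
```

The key point: the limiter is consulted on every attempt, so a retry storm can never blow through the rate budget.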
## Handling 429 responses

Even with rate limiting, you might hit limits. Handle gracefully:

```
action RobustFetch {
  get "/data"
  match response {
    { code: 429 } -> retry {
      maxAttempts: 5,
      backoff: exponential,
      initialDelay: 60000 // Wait 1 minute
    },
    _ -> continue
  }
}
```
## Multiple sources with different limits

```
mission MultiSourceSync {
  source HighVolumeAPI {
    auth: bearer,
    base: "https://high-volume.api.com",
    rateLimit: { requestsPerMinute: 1000 }
  }

  source LowVolumeAPI {
    auth: bearer,
    base: "https://limited.api.com",
    rateLimit: { requestsPerMinute: 10 }
  }

  action FetchBoth {
    // These respect their respective limits
    get HighVolumeAPI "/items"
    get LowVolumeAPI "/items"
  }
}
```
## Monitoring rate limits

Track rate limit status:

```
action MonitoredFetch {
  get "/data"
  match response {
    { code: 429, headers: h } -> {
      store {
        endpoint: "/data",
        hitLimit: true,
        retryAfter: h["Retry-After"],
        timestamp: now()
      } -> rateLimitLogs
      retry { delay: h["Retry-After"] * 1000 }
    },
    _ -> continue
  }
}
```
## Best practices

### Start conservative

```
// Good: start below the actual limit
source API {
  rateLimit: { requestsPerMinute: 50 } // API allows 60
}

// Risky: at or above the limit
source API {
  rateLimit: { requestsPerMinute: 60 } // Exactly at the limit
}
```
### Use pause for critical syncs

```
source API {
  rateLimit: {
    requestsPerMinute: 60,
    strategy: "pause" // Ensures completion
  }
}
```
### Use throttle for background jobs

```
source API {
  rateLimit: {
    requestsPerMinute: 60,
    strategy: "throttle" // Smooth, predictable pacing
  }
}
```
### Set a reasonable `maxWait`

```
source API {
  rateLimit: {
    requestsPerMinute: 60,
    strategy: "pause",
    maxWait: 300000 // 5 minutes max wait
  }
}
```
### Combine with a circuit breaker

```
source API {
  auth: bearer,
  base: "https://api.example.com",
  rateLimit: {
    requestsPerMinute: 60,
    strategy: "pause"
  },
  circuitBreaker: {
    failureThreshold: 5,
    resetTimeout: 30000
  }
}
```
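For intuition, a circuit breaker of this shape trips open after a run of consecutive failures and lets a trial request through once the reset timeout elapses. A minimal sketch under those assumptions (illustrative only, not Reqon's implementation):

```python
import time

class CircuitBreaker:
    """Open after N consecutive failures; allow a trial call after the timeout."""

    def __init__(self, failure_threshold=5, reset_timeout=30.0,
                 clock=time.monotonic):
        self.failure_threshold = failure_threshold
        self.reset_timeout = reset_timeout
        self.clock = clock
        self.failures = 0
        self.opened_at = None  # None while the circuit is closed

    def allow(self) -> bool:
        if self.opened_at is None:
            return True
        # Half-open: permit a trial request once the timeout has elapsed
        return self.clock() - self.opened_at >= self.reset_timeout

    def record_success(self):
        self.failures = 0
        self.opened_at = None

    def record_failure(self):
        self.failures += 1
        if self.failures >= self.failure_threshold:
            self.opened_at = self.clock()

# Demo with a fake clock: trip after 2 failures, recover after 30 s
t = [0.0]
breaker = CircuitBreaker(failure_threshold=2, reset_timeout=30.0,
                         clock=lambda: t[0])
breaker.record_failure()
open_before = breaker.allow()   # one failure: still closed
breaker.record_failure()        # second failure: circuit opens
open_after = breaker.allow()
t[0] = 30.0                     # timeout elapses
half_open = breaker.allow()
```

The breaker and the rate limiter are complementary: the limiter paces healthy traffic, while the breaker stops hammering an API that is failing outright.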
## Troubleshooting

### Still hitting rate limits

Lower your configured limit:

```
source API {
  rateLimit: {
    requestsPerMinute: 30, // Lower than the API's limit
    strategy: "pause"
  }
}
```
### Requests too slow

Check whether the throttle strategy is too aggressive:

```
// If using throttle, switch to pause
source API {
  rateLimit: {
    requestsPerMinute: 60,
    strategy: "pause" // Only waits when needed
  }
}
```
### Inconsistent API limits

Use adaptive mode:

```
source API {
  rateLimit: {
    requestsPerMinute: 60,
    strategy: "pause",
    adaptive: true
  }
}
```