Guides
Rate Limits
Every Outfame API key has rate limits — per-minute caps, burst limits, and daily maximums. This guide covers what those limits are for each plan, how to read the rate limit headers in responses, and how to build retry logic that won't blow up in production.
Limits by plan
| Plan | Requests per minute | Burst limit | Daily maximum |
|---|---|---|---|
| Basic | 100 | 20 req/sec | 50,000 |
| Pro | 500 | 100 req/sec | 250,000 |
| Turbo | 2,000 | 400 req/sec | 1,000,000 |
| Enterprise | 10,000 | 2,000 req/sec | Unlimited |
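One way to sanity-check these numbers: on every metered plan, the daily maximum works out to 500 minutes (about 8.3 hours) of sustained traffic at the full per-minute rate, so anything that runs flat-out around the clock needs throttling well below the per-minute cap. A quick back-of-the-envelope check using the table above:

```python
# Minutes of sustained full-rate traffic before the daily maximum is hit.
plans = {
    "Basic": (100, 50_000),
    "Pro": (500, 250_000),
    "Turbo": (2_000, 1_000_000),
}

for name, (rpm, daily_max) in plans.items():
    minutes = daily_max / rpm
    print(f"{name}: {minutes:.0f} minutes (~{minutes / 60:.1f} hours) at {rpm} req/min")
```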
Limits are per API key. If you use multiple keys, each one has its own independent quota. OAuth tokens share the limit of the organization they belong to.
Rate limit headers
Every response includes these headers so you always know where you stand:
| Header | What it tells you | Example |
|---|---|---|
| `X-RateLimit-Limit` | Max requests allowed per minute. | 500 |
| `X-RateLimit-Remaining` | How many requests you have left in this window. | 347 |
| `X-RateLimit-Reset` | Unix timestamp when the window resets. | 1707487200 |
| `X-RateLimit-Burst-Limit` | Max requests per second. | 100 |
| `X-RateLimit-Burst-Remaining` | Burst requests you have left. | 88 |
Example response headers
HTTP/1.1 200 OK
Content-Type: application/json
X-RateLimit-Limit: 500
X-RateLimit-Remaining: 347
X-RateLimit-Reset: 1707487200
X-RateLimit-Burst-Limit: 100
X-RateLimit-Burst-Remaining: 88
429 Too Many Requests
Hit the limit and you'll get a 429 status code. The response includes a Retry-After header telling you exactly how many seconds to wait.
HTTP/1.1 429 Too Many Requests
Content-Type: application/json
Retry-After: 12
X-RateLimit-Limit: 500
X-RateLimit-Remaining: 0
X-RateLimit-Reset: 1707487200
{
"error": {
"code": "rate_limit_exceeded",
"message": "Rate limit exceeded. Please retry after 12 seconds.",
"retry_after": 12,
"doc_url": "https://docs.outfame.com/guides/rate-limits"
}
}
Retry strategies
Exponential backoff with jitter
This is what you want in production. Exponential backoff prevents you from hammering the API, and the random jitter stops multiple clients from retrying at the exact same moment (the "thundering herd" problem).
Node.js
async function apiCallWithRetry(fn, maxRetries = 3) {
for (let attempt = 0; attempt <= maxRetries; attempt++) {
try {
return await fn();
} catch (error) {
if (error.status !== 429 || attempt === maxRetries) {
throw error;
}
// Use Retry-After header if available
const retryAfter = error.headers?.["retry-after"];
const baseDelay = retryAfter
? parseInt(retryAfter) * 1000
: Math.pow(2, attempt) * 1000;
// Add jitter: 0-25% random additional delay
const jitter = baseDelay * Math.random() * 0.25;
const delay = baseDelay + jitter;
console.log(`Rate limited. Retrying in ${Math.round(delay)}ms (attempt ${attempt + 1}/${maxRetries})`);
await new Promise(resolve => setTimeout(resolve, delay));
}
}
}
// Usage
const accounts = await apiCallWithRetry(() =>
outfame.accounts.list({ platform: "instagram" })
);
Python
import time
import random

import outfame
def api_call_with_retry(fn, max_retries=3):
for attempt in range(max_retries + 1):
try:
return fn()
except outfame.RateLimitError as e:
if attempt == max_retries:
raise
retry_after = e.retry_after or (2 ** attempt)
jitter = retry_after * random.uniform(0, 0.25)
delay = retry_after + jitter
print(f"Rate limited. Retrying in {delay:.1f}s (attempt {attempt + 1}/{max_retries})")
time.sleep(delay)
# Usage
accounts = api_call_with_retry(
lambda: client.accounts.list(platform="instagram")
)
Proactive rate limit management
Even better than handling 429s gracefully? Avoiding them entirely. Track the headers and slow down before you hit the wall.
class RateLimitedClient {
constructor(outfame) {
this.outfame = outfame;
this.remaining = Infinity;
this.resetAt = 0;
}
async request(fn) {
// Wait if we're close to the limit
if (this.remaining < 5) {
const waitMs = Math.max(0, this.resetAt - Date.now());
if (waitMs > 0) {
console.log(`Proactively waiting ${waitMs}ms to avoid rate limit`);
await new Promise(resolve => setTimeout(resolve, waitMs));
}
}
const response = await fn();
// Update rate limit state from headers
this.remaining = parseInt(response.headers["x-ratelimit-remaining"] || "100");
this.resetAt = parseInt(response.headers["x-ratelimit-reset"] || "0") * 1000;
return response;
}
}
Endpoint-specific limits
A few endpoints have their own rate limits on top of the global per-key limit:
| Endpoint | Additional limit | Why |
|---|---|---|
| `POST /v1/targeting/suggestions` | 10 per hour per account | AI inference is compute-heavy. |
| `POST /v1/accounts` | 20 per hour | Account creation involves external validation. |
| `GET /v1/analytics/audience` | 60 per hour per account | Audience data is aggregated in near-real-time. |
| `POST /v1/webhooks` | 30 per hour | Includes URL validation on each request. |
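Because these extra caps are all hourly, you can stay under them with a simple client-side throttle that tracks call timestamps per endpoint over a sliding one-hour window. A minimal sketch — the endpoint names and caps come from the table above, but the `EndpointThrottle` class itself is illustrative, not part of the SDK:

```python
import time
from collections import defaultdict, deque

# Hourly caps from the table above. These are tracked per key here;
# per-account limits would need the account id folded into the bucket key.
HOURLY_CAPS = {
    "POST /v1/targeting/suggestions": 10,
    "POST /v1/accounts": 20,
    "GET /v1/analytics/audience": 60,
    "POST /v1/webhooks": 30,
}

class EndpointThrottle:
    """Tracks call timestamps per endpoint over a sliding one-hour window."""

    def __init__(self, caps=HOURLY_CAPS):
        self.caps = caps
        self.calls = defaultdict(deque)

    def allow(self, endpoint):
        cap = self.caps.get(endpoint)
        if cap is None:
            return True  # no endpoint-specific cap; global limits still apply
        window = self.calls[endpoint]
        now = time.monotonic()
        # Drop timestamps that have aged out of the one-hour window.
        while window and now - window[0] > 3600:
            window.popleft()
        if len(window) >= cap:
            return False
        window.append(now)
        return True
```

Call `allow()` before each request to one of these endpoints; a `False` means wait for the window to roll over rather than burn a guaranteed 429.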
Best practices
- Cache responses — Analytics data updates every few minutes at most. Cache locally and skip redundant calls.
- Use webhooks instead of polling — Subscribe to webhook events and let the data come to you.
- Batch when you can — Pull lists with higher `limit` values (up to 100) instead of making tons of small requests.
- Watch the headers — Track `X-RateLimit-Remaining` and throttle before you hit zero, not after.
- Always use backoff on retries — Never retry a 429 immediately. Exponential backoff with jitter is the standard for a reason.
- Split read and write keys — Use one API key for high-frequency reads (analytics polling) and another for writes (account management). That way a burst of reads won't block your writes.
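The caching advice above can be as simple as a TTL dict in front of your read endpoints. A minimal sketch — the cache key and the 5-minute TTL are illustrative choices, not SDK features:

```python
import time

class TTLCache:
    """Caches fetched values for a fixed number of seconds to skip redundant calls."""

    def __init__(self, ttl_seconds=300):
        self.ttl = ttl_seconds
        self.store = {}

    def get_or_fetch(self, key, fetch):
        entry = self.store.get(key)
        now = time.monotonic()
        if entry is not None and now - entry[0] < self.ttl:
            return entry[1]  # still fresh; no API call made
        value = fetch()
        self.store[key] = (now, value)
        return value

# Usage: analytics data updates every few minutes at most, so a
# 5-minute TTL avoids redundant calls without serving stale numbers.
# cache = TTLCache(ttl_seconds=300)
# audience = cache.get_or_fetch(
#     ("audience", account_id),
#     lambda: client.analytics.audience(account_id=account_id),
# )
```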
Requesting a limit increase
Need higher limits? Two paths:
- Upgrade your plan — Fastest way. Check API pricing for what each tier gets you.
- Talk to us — For Enterprise plans or anything custom, email developers@outfame.com with your use case and expected volume.