Rate Limits

Every Outfame API key has rate limits — per-minute caps, burst limits, and daily maximums. This guide covers what those limits are for each plan, how to read the rate limit headers in responses, and how to build retry logic that won't blow up in production.


Limits by plan

| Plan | Requests per minute | Burst limit | Daily maximum |
| --- | --- | --- | --- |
| Basic | 100 | 20 req/sec | 50,000 |
| Pro | 500 | 100 req/sec | 250,000 |
| Turbo | 2,000 | 400 req/sec | 1,000,000 |
| Enterprise | 10,000 | 2,000 req/sec | Unlimited |

Limits are per API key. If you use multiple keys, each one has its own independent quota. OAuth tokens share the limit of the organization they belong to.
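One practical consequence of the per-minute cap: if you pace requests at least 60,000 ms divided by your plan's limit apart, you will never exceed it. A minimal sketch (the function name is ours, not part of the SDK):

```javascript
// Minimum spacing between requests to stay under a per-minute cap.
// E.g. a Pro key (500 req/min) needs at least 120 ms between calls.
function paceIntervalMs(requestsPerMinute) {
  return Math.ceil(60000 / requestsPerMinute);
}
```

Note that pacing alone does not account for the burst limit; short spikes above the per-second cap can still trigger a 429 even when you are under the per-minute budget.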


Rate limit headers

Every response includes these headers so you always know where you stand:

| Header | What it tells you | Example |
| --- | --- | --- |
| X-RateLimit-Limit | Max requests allowed per minute. | 500 |
| X-RateLimit-Remaining | How many requests you have left in this window. | 347 |
| X-RateLimit-Reset | Unix timestamp when the window resets. | 1707487200 |
| X-RateLimit-Burst-Limit | Max requests per second. | 100 |
| X-RateLimit-Burst-Remaining | Burst requests you have left. | 88 |

Example response headers

HTTP/1.1 200 OK
Content-Type: application/json
X-RateLimit-Limit: 500
X-RateLimit-Remaining: 347
X-RateLimit-Reset: 1707487200
X-RateLimit-Burst-Limit: 100
X-RateLimit-Burst-Remaining: 88

429 Too Many Requests

Hit the limit and you'll get a 429 status code. The response includes a Retry-After header telling you exactly how many seconds to wait.

HTTP/1.1 429 Too Many Requests
Content-Type: application/json
Retry-After: 12
X-RateLimit-Limit: 500
X-RateLimit-Remaining: 0
X-RateLimit-Reset: 1707487200

{
  "error": {
    "code": "rate_limit_exceeded",
    "message": "Rate limit exceeded. Please retry after 12 seconds.",
    "retry_after": 12,
    "doc_url": "https://docs.outfame.com/guides/rate-limits"
  }
}
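If you handle 429s yourself (for example, with raw fetch rather than an SDK), read Retry-After defensively; the header is only present on rate-limit responses. A sketch (parseRetryAfter is our helper name, not an SDK function):

```javascript
// Returns the Retry-After value in seconds, or null if missing/unparsable.
// Works with both a fetch Headers object and a plain lowercase-keyed object.
function parseRetryAfter(headers) {
  const raw = typeof headers.get === "function"
    ? headers.get("retry-after")
    : headers["retry-after"];
  const seconds = parseInt(raw, 10);
  return Number.isNaN(seconds) ? null : seconds;
}
```

When this returns null, fall back to exponential backoff as shown in the next section.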

Retry strategies

Exponential backoff with jitter

This is what you want in production. Exponential backoff prevents you from hammering the API, and the random jitter stops multiple clients from retrying at the exact same moment (the "thundering herd" problem).

Node.js

async function apiCallWithRetry(fn, maxRetries = 3) {
  for (let attempt = 0; attempt <= maxRetries; attempt++) {
    try {
      return await fn();
    } catch (error) {
      if (error.status !== 429 || attempt === maxRetries) {
        throw error;
      }

      // Use Retry-After header if available
      const retryAfter = error.headers?.["retry-after"];
      const baseDelay = retryAfter
        ? parseInt(retryAfter) * 1000
        : Math.pow(2, attempt) * 1000;

      // Add jitter: 0-25% random additional delay
      const jitter = baseDelay * Math.random() * 0.25;
      const delay = baseDelay + jitter;

      console.log(`Rate limited. Retrying in ${Math.round(delay)}ms (attempt ${attempt + 1}/${maxRetries})`);
      await new Promise(resolve => setTimeout(resolve, delay));
    }
  }
}

// Usage
const accounts = await apiCallWithRetry(() =>
  outfame.accounts.list({ platform: "instagram" })
);

Python

import time
import random

import outfame

def api_call_with_retry(fn, max_retries=3):
    for attempt in range(max_retries + 1):
        try:
            return fn()
        except outfame.RateLimitError as e:
            if attempt == max_retries:
                raise

            retry_after = e.retry_after or (2 ** attempt)
            jitter = retry_after * random.uniform(0, 0.25)
            delay = retry_after + jitter

            print(f"Rate limited. Retrying in {delay:.1f}s (attempt {attempt + 1}/{max_retries})")
            time.sleep(delay)

# Usage
accounts = api_call_with_retry(
    lambda: client.accounts.list(platform="instagram")
)

Proactive rate limit management

Even better than handling 429s gracefully? Avoiding them entirely. Track the headers and slow down before you hit the wall.

class RateLimitedClient {
  constructor(outfame) {
    this.outfame = outfame;
    this.remaining = Infinity;
    this.resetAt = 0;
  }

  async request(fn) {
    // Wait if we're close to the limit
    if (this.remaining < 5) {
      const waitMs = Math.max(0, this.resetAt - Date.now());
      if (waitMs > 0) {
        console.log(`Proactively waiting ${waitMs}ms to avoid rate limit`);
        await new Promise(resolve => setTimeout(resolve, waitMs));
      }
    }

    const response = await fn();

    // Update rate limit state from headers
    this.remaining = parseInt(response.headers["x-ratelimit-remaining"] || "100");
    this.resetAt = parseInt(response.headers["x-ratelimit-reset"] || "0") * 1000;

    return response;
  }
}

Endpoint-specific limits

A few endpoints have their own rate limits on top of the global per-key limit:

| Endpoint | Additional limit | Why |
| --- | --- | --- |
| POST /v1/targeting/suggestions | 10 per hour per account | AI inference is compute-heavy. |
| POST /v1/accounts | 20 per hour | Account creation involves external validation. |
| GET /v1/analytics/audience | 60 per hour per account | Audience data is aggregated in near-real-time. |
| POST /v1/webhooks | 30 per hour | Includes URL validation on each request. |
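When calling one of these endpoints repeatedly, it helps to track the hourly budget client-side so you never burn the quota in a tight loop. A minimal sketch (HourlyBudget is our name, not part of the SDK; it keeps a rolling one-hour window of call timestamps):

```javascript
// Client-side bookkeeping for endpoints with their own hourly caps.
class HourlyBudget {
  constructor(limit) {
    this.limit = limit;    // max calls per rolling hour
    this.timestamps = [];  // ms timestamps of recent calls
  }

  // Records the call and returns true if budget remains, else false.
  tryConsume(now = Date.now()) {
    const hourAgo = now - 60 * 60 * 1000;
    this.timestamps = this.timestamps.filter(t => t > hourAgo);
    if (this.timestamps.length >= this.limit) return false;
    this.timestamps.push(now);
    return true;
  }
}
```

For example, a budget of 10 for POST /v1/targeting/suggestions lets you skip (or queue) the eleventh call in an hour instead of receiving a 429 for it.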

Best practices

  • Cache responses — Analytics data updates every few minutes at most. Cache locally and skip redundant calls.
  • Use webhooks instead of polling — Subscribe to webhook events and let the data come to you.
  • Batch when you can — Pull lists with higher limit values (up to 100) instead of making tons of small requests.
  • Watch the headers — Track X-RateLimit-Remaining and throttle before you hit zero, not after.
  • Always use backoff on retries — Never retry a 429 immediately. Exponential backoff with jitter is the standard for a reason.
  • Split read and write keys — Use one API key for high-frequency reads (analytics polling) and another for writes (account management). That way a burst of reads won't block your writes.
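The caching bullet can be as little as a TTL map in front of the client. A sketch (TtlCache is our name; pick a TTL that matches how often the underlying data actually updates):

```javascript
// Tiny TTL cache: returns the cached value if fresher than ttlMs,
// otherwise calls loader() and stores the result. If loader returns
// a promise, the promise itself is cached, so concurrent callers share it.
class TtlCache {
  constructor(ttlMs) {
    this.ttlMs = ttlMs;
    this.entries = new Map(); // key -> { value, storedAt }
  }

  get(key, loader, now = Date.now()) {
    const hit = this.entries.get(key);
    if (hit && now - hit.storedAt < this.ttlMs) return hit.value;
    const value = loader();
    this.entries.set(key, { value, storedAt: now });
    return value;
  }
}
```

Typical use with the SDK might look like `cache.get("audience:" + accountId, () => client.analytics.audience(accountId))` with a five-minute TTL, though the key scheme is up to you.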

Requesting a limit increase

Need higher limits? Two paths:

  1. Upgrade your plan — Fastest way. Check API pricing for what each tier gets you.
  2. Talk to us — For Enterprise plans or anything custom, email developers@outfame.com with your use case and expected volume.