Overview

The InstaView API implements production-ready rate limiting to ensure fair usage and system stability. Rate limits are enforced per API key with two-tier protection:
  • Per-minute limits: Burst protection (default: 60 requests/minute)
  • Per-hour limits: Quota enforcement (default: 2,000 requests/hour)
Rate limits can be customized per API key or per company based on your subscription plan.
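The two-tier scheme above can also be mirrored client-side, so your application stops itself before the API has to. The following sketch is illustrative only (it is not part of any InstaView SDK) and uses the default limits from this page:

```javascript
// Client-side throttle mirroring the two-tier limits: a request is
// allowed only if both the per-minute and per-hour windows have budget.
class DualWindowThrottle {
  constructor(perMinute = 60, perHour = 2000) {
    this.perMinute = perMinute;
    this.perHour = perHour;
    this.timestamps = []; // send times (ms) within the last hour
  }

  // Returns true if a request may be sent at time `now` (ms since epoch).
  tryAcquire(now = Date.now()) {
    // Drop entries older than one hour.
    this.timestamps = this.timestamps.filter(t => now - t < 3600000);
    const lastMinute = this.timestamps.filter(t => now - t < 60000).length;
    if (lastMinute >= this.perMinute) return false;      // minute window full
    if (this.timestamps.length >= this.perHour) return false; // hour window full
    this.timestamps.push(now);
    return true;
  }
}
```

Call `tryAcquire()` before each request and pause briefly when it returns false; pass your plan's actual limits to the constructor.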

How It Works

Our rate limiting system provides:
  • Fair enforcement: Smooth limit enforcement that prevents burst abuse
  • High performance: Minimal overhead (~2-5ms per request)
  • High availability: System stays available even during technical issues
  • Full visibility: Rate limit information in every response header

Response Headers

All API responses include rate limit information in HTTP headers:
X-RateLimit-Limit-Minute: 60
X-RateLimit-Limit-Hour: 2000
X-RateLimit-Remaining-Minute: 45
X-RateLimit-Remaining-Hour: 1850
X-RateLimit-Reset-Minute: 1705320120
X-RateLimit-Reset-Hour: 1705323720
Header Descriptions:
  • X-RateLimit-Limit-Minute: Maximum requests allowed per minute
  • X-RateLimit-Limit-Hour: Maximum requests allowed per hour
  • X-RateLimit-Remaining-Minute: Requests remaining this minute
  • X-RateLimit-Remaining-Hour: Requests remaining this hour
  • X-RateLimit-Reset-Minute: Unix timestamp when minute limit resets
  • X-RateLimit-Reset-Hour: Unix timestamp when hour limit resets
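These headers let you pause proactively instead of waiting for a 429. A minimal sketch (the helper name is ours; it accepts anything with a `.get()` method, such as a fetch `Headers` object):

```javascript
// Compute how long to pause (in ms) before the next request, based on
// the per-minute rate limit headers documented above.
function pauseBeforeNext(headers, nowMs = Date.now()) {
  const remaining = Number(headers.get("X-RateLimit-Remaining-Minute"));
  const resetSec = Number(headers.get("X-RateLimit-Reset-Minute"));
  if (remaining > 0) return 0; // budget left: no pause needed
  // Minute budget exhausted: wait until the window resets.
  return Math.max(0, resetSec * 1000 - nowMs);
}
```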

Rate Limit Response

When you exceed the rate limit, you’ll receive:
HTTP/1.1 429 Too Many Requests
Retry-After: 60
{
  "statusCode": 429,
  "message": "Rate limit exceeded",
  "error": "Too Many Requests",
  "retryAfter": 60,
  "limit": 60,
  "remaining": 0,
  "reset": 1705320120
}
Response Fields:
  • retryAfter: Seconds to wait before retrying
  • limit: The limit that was exceeded
  • remaining: Always 0 when rate limited
  • reset: Unix timestamp when the limit resets
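One way to turn these fields into a wait time, sketched here with a hypothetical helper (the fallback chain is our suggestion, not prescribed by the API):

```javascript
// Derive a wait time (in seconds) from a 429 response body like the
// one above: prefer retryAfter, fall back to the reset timestamp,
// then to one full minute window.
function waitFromRateLimitBody(body, nowSec) {
  if (typeof body.retryAfter === "number") return body.retryAfter;
  if (typeof body.reset === "number") return Math.max(0, body.reset - nowSec);
  return 60; // conservative fallback: one full minute window
}
```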

Subscription Plan Limits

Rate limits are automatically set based on your subscription plan:
Plan        Requests/Minute   Requests/Hour   Burst Size
Trial       30                500             50
Starter     120               5,000           200
Business    300               15,000          500
Scale       600               30,000          1,000
Enterprise  Custom            Custom          Custom
Rate limits are automatically applied based on your active subscription. Contact your account manager to upgrade your plan or request custom limits.
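The table above can double as client-side configuration. This lookup (values copied from this page; the helper itself is illustrative) gives the minimum spacing between requests that stays under both windows:

```javascript
// Plan limits from the table above.
const PLAN_LIMITS = {
  trial:    { perMinute: 30,  perHour: 500,   burst: 50 },
  starter:  { perMinute: 120, perHour: 5000,  burst: 200 },
  business: { perMinute: 300, perHour: 15000, burst: 500 },
  scale:    { perMinute: 600, perHour: 30000, burst: 1000 },
};

// Minimum delay (ms) between evenly spaced requests that respects
// both the per-minute and per-hour limits for a plan.
function minDelayMs(plan) {
  const { perMinute, perHour } = PLAN_LIMITS[plan];
  return Math.ceil(Math.max(60000 / perMinute, 3600000 / perHour));
}
```

Note that on lower tiers the hourly quota, not the per-minute limit, is the binding constraint (e.g. Trial: 7,200 ms between requests rather than 2,000 ms).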

Best Practices

Implement request queuing in your application to avoid overwhelming the API:
class RequestQueue {
  constructor(concurrency = 5) {
    this.concurrency = concurrency;
    this.running = 0;
    this.queue = [];
  }
  
  async add(fn) {
    while (this.running >= this.concurrency) {
      await new Promise(resolve => this.queue.push(resolve));
    }
    
    this.running++;
    
    try {
      return await fn();
    } finally {
      this.running--;
      const next = this.queue.shift();
      if (next) next();
    }
  }
}

// Usage
const queue = new RequestQueue(5);
await queue.add(() => fetch('/api/endpoint'));
Cache frequently accessed data to reduce API calls:
class APIClient {
  constructor(apiKey, cacheTTL = 60000) {
    this.apiKey = apiKey;
    this.cache = new Map();
    this.cacheTTL = cacheTTL;
  }
  
  async get(url) {
    const cached = this.cache.get(url);
    
    if (cached && Date.now() - cached.timestamp < this.cacheTTL) {
      return cached.data;
    }
    
    const response = await fetch(url, {
      headers: { 'Authorization': `Bearer ${this.apiKey}` }
    });
    
    // Only cache successful responses; never serve errors from cache
    if (!response.ok) {
      throw new Error(`Request failed with status ${response.status}`);
    }
    
    const data = await response.json();
    this.cache.set(url, { data, timestamp: Date.now() });
    
    return data;
  }
}
When possible, batch multiple operations together:
// ❌ Multiple individual requests
for (const candidate of candidates) {
  await api.post('/candidates', candidate);
}

// ✅ Better: Process in controlled batches
async function processBatch(items, batchSize = 10, delayMs = 100) {
  for (let i = 0; i < items.length; i += batchSize) {
    const batch = items.slice(i, i + batchSize);
    await Promise.all(batch.map(item => api.post('/candidates', item)));
    
    if (i + batchSize < items.length) {
      await new Promise(resolve => setTimeout(resolve, delayMs));
    }
  }
}
Add small delays between requests to avoid bursts:
async function sleep(ms) {
  return new Promise(resolve => setTimeout(resolve, ms));
}

async function processWithDelay(items, delayMs = 200) {
  const results = [];
  
  for (const item of items) {
    results.push(await processItem(item));
    await sleep(delayMs);
  }
  
  return results;
}
Be prepared for potential errors with exponential backoff:
async function exponentialBackoff(fn, maxRetries = 5) {
  let delay = 1000;
  
  for (let attempt = 0; attempt < maxRetries; attempt++) {
    try {
      return await fn();
    } catch (error) {
      if (attempt === maxRetries - 1) throw error;
      
      await new Promise(resolve => setTimeout(resolve, delay));
      delay *= 2;
    }
  }
}

Monitoring Your Usage

Use the response headers to monitor your API usage:
const response = await fetch("/api/endpoint", {
  headers: {
    Authorization: `Bearer ${apiKey}`,
  },
});

const minuteLimit = response.headers.get("X-RateLimit-Limit-Minute");
const minuteRemaining = response.headers.get("X-RateLimit-Remaining-Minute");
const minuteReset = response.headers.get("X-RateLimit-Reset-Minute");

console.log(`Rate limit: ${minuteRemaining}/${minuteLimit} remaining`);
console.log(`Resets at: ${new Date(minuteReset * 1000).toISOString()}`);

Handling Rate Limits

Implement exponential backoff when you receive a 429 response:
async function makeRequestWithRetry(url, options, maxRetries = 3) {
  for (let attempt = 0; attempt < maxRetries; attempt++) {
    const response = await fetch(url, options);

    if (response.status === 429) {
      // Header values are strings; parse, and fall back to exponential backoff
      const retryAfter = Number(
        response.headers.get("Retry-After") ?? Math.pow(2, attempt)
      );

      console.log(`Rate limited. Retrying in ${retryAfter}s...`);
      await sleep(retryAfter * 1000);
      continue;
    }

    return response;
  }

  throw new Error("Max retries exceeded");
}

Custom Rate Limits

If you need custom rate limits for your use case:
  1. Contact Your Account Manager: discuss your specific requirements
  2. Provide Usage Estimates: share expected request volumes and patterns
  3. Configuration: we’ll configure custom limits for your API keys
  4. Testing: test with your new limits before going live

Contact Us

If you have specific rate limit requirements or questions:
