Overview
The InstaView API implements production-ready rate limiting to ensure fair usage and system stability. Rate limits are enforced per API key with two-tier protection:

- Per-minute limits: Burst protection (default: 60 requests/minute)
- Per-hour limits: Quota enforcement (default: 2,000 requests/hour)
How It Works
Our rate limiting system provides:

- Fair enforcement: Smooth limit enforcement that prevents burst abuse
- High performance: Minimal overhead (~2-5ms per request)
- High availability: System stays available even during technical issues
- Full visibility: Rate limit information in every response header
Response Headers
All API responses include rate limit information in HTTP headers:

- X-RateLimit-Limit-Minute: Maximum requests allowed per minute
- X-RateLimit-Limit-Hour: Maximum requests allowed per hour
- X-RateLimit-Remaining-Minute: Requests remaining this minute
- X-RateLimit-Remaining-Hour: Requests remaining this hour
- X-RateLimit-Reset-Minute: Unix timestamp when the minute limit resets
- X-RateLimit-Reset-Hour: Unix timestamp when the hour limit resets
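As an illustration, the headers above can be read into a plain dictionary. This is a minimal sketch: the header names are the ones documented here, and the sample values are made up for the example.

```python
def parse_rate_limit_headers(headers):
    """Extract the documented X-RateLimit-* values from a response's headers."""
    return {
        "limit_minute": int(headers["X-RateLimit-Limit-Minute"]),
        "limit_hour": int(headers["X-RateLimit-Limit-Hour"]),
        "remaining_minute": int(headers["X-RateLimit-Remaining-Minute"]),
        "remaining_hour": int(headers["X-RateLimit-Remaining-Hour"]),
        "reset_minute": int(headers["X-RateLimit-Reset-Minute"]),
        "reset_hour": int(headers["X-RateLimit-Reset-Hour"]),
    }

# Example with header values as they might appear on a real response:
info = parse_rate_limit_headers({
    "X-RateLimit-Limit-Minute": "60",
    "X-RateLimit-Limit-Hour": "2000",
    "X-RateLimit-Remaining-Minute": "57",
    "X-RateLimit-Remaining-Hour": "1840",
    "X-RateLimit-Reset-Minute": "1700000060",
    "X-RateLimit-Reset-Hour": "1700003600",
})
```

Your HTTP client's response object will expose these headers as a case-insensitive mapping; only the string-to-int conversion matters here.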
Rate Limit Response
When you exceed the rate limit, you’ll receive a 429 response with a body containing:

- retryAfter: Seconds to wait before retrying
- limit: The limit that was exceeded
- remaining: Always 0 when rate limited
- reset: Unix timestamp when the limit resets
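A rate-limited response body with the fields above might look like this (the values shown are illustrative):

```json
{
  "retryAfter": 12,
  "limit": 60,
  "remaining": 0,
  "reset": 1700000060
}
```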
Subscription Plan Limits
Rate limits are automatically set based on your subscription plan:

| Plan | Requests/Minute | Requests/Hour | Burst Size |
|---|---|---|---|
| Trial | 30 | 500 | 50 |
| Starter | 120 | 5,000 | 200 |
| Business | 300 | 15,000 | 500 |
| Scale | 600 | 30,000 | 1,000 |
| Enterprise | Custom | Custom | Custom |
Best Practices
Use Request Queues
Implement request queuing in your application to avoid overwhelming the API:
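A minimal sketch of such a queue, using only the standard library: a single worker thread drains the queue and pauses between calls, so requests are serialized and never burst. The `min_interval` pacing value is an assumption you would derive from your plan's per-minute limit (e.g. 60 requests/minute → 1.0 s).

```python
import queue
import threading
import time

class RequestQueue:
    """Run submitted callables one at a time, with a fixed gap between them."""

    def __init__(self, min_interval=1.0):
        self.min_interval = min_interval
        self.tasks = queue.Queue()
        worker = threading.Thread(target=self._run, daemon=True)
        worker.start()

    def submit(self, fn, *args):
        """Enqueue a call; returns a one-slot queue the caller can block on."""
        done = queue.Queue(maxsize=1)
        self.tasks.put((fn, args, done))
        return done

    def _run(self):
        while True:
            fn, args, done = self.tasks.get()
            done.put(fn(*args))          # deliver the result to the caller
            time.sleep(self.min_interval)  # pace before the next request
```

Usage: `rq.submit(call_api, item_id).get()` blocks until the worker has made that call, so concurrent parts of your application share one paced pipeline.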
Cache Responses
Cache frequently accessed data to reduce API calls:
Batch Operations
When possible, batch multiple operations together:
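The core of batching is grouping item IDs so each request carries many of them; the sketch below shows the client-side chunking. Whether the API exposes a bulk endpoint, and its maximum batch size (100 here), are assumptions to check against your plan.

```python
def chunk(items, size):
    """Split a list of item IDs into batches of at most `size` items."""
    return [items[i:i + size] for i in range(0, len(items), size)]

# 250 IDs at 100 per (hypothetical) bulk request: 3 API calls instead of 250.
batches = chunk(list(range(250)), 100)
```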
Add Delays Between Requests
Add small delays between requests to avoid bursts:
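A sketch of even pacing: divide the minute across your per-minute allowance and sleep that long after each call, so requests are spread out rather than fired back to back.

```python
import time

def paced_calls(calls, per_minute=60):
    """Invoke each callable in turn, evenly spaced to stay under `per_minute`."""
    delay = 60.0 / per_minute  # e.g. 60 requests/minute -> one per second
    results = []
    for call in calls:
        results.append(call())
        time.sleep(delay)  # spread requests evenly instead of bursting
    return results
```

Set `per_minute` from the X-RateLimit-Limit-Minute header (or your plan's table above) rather than hard-coding it.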
Implement Exponential Backoff
Be prepared for potential errors with exponential backoff:
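A minimal retry wrapper along these lines: on a 429 it waits for the server's retryAfter hint when present, otherwise doubles a base delay each attempt, with a little jitter so many clients don't retry in lockstep. The dict-shaped `response` (with `status` and `retryAfter` keys) is a stand-in for your HTTP client's response object.

```python
import random
import time

def with_backoff(request_fn, max_attempts=5, base_delay=1.0):
    """Retry `request_fn` while it is rate limited, doubling the wait each time."""
    for attempt in range(max_attempts):
        response = request_fn()
        if response.get("status") != 429:
            return response
        # Prefer the server's retryAfter hint; otherwise back off exponentially,
        # adding jitter to avoid synchronized retries across clients.
        wait = response.get("retryAfter") or base_delay * (2 ** attempt)
        time.sleep(wait + random.uniform(0, 0.1))
    raise RuntimeError(f"still rate limited after {max_attempts} attempts")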
Monitoring Your Usage
Use the response headers to monitor your API usage:Handling Rate Limits
Implement exponential backoff when you receive a 429 response:Custom Rate Limits
If you need custom rate limits for your use case:1
Contact Your Account Manager
Discuss your specific requirements
2
Provide Usage Estimates
Share expected request volumes and patterns
3
Configuration
We’ll configure custom limits for your API keys
4
Testing
Test with your new limits before going live
Contact Us
If you have specific rate limit requirements or questions:- Email: [email protected]
- For Enterprise customers: Contact your account manager