# Rate Limits

The CODITECT API uses rate limiting to ensure fair usage and platform stability.
## Rate Limit Tiers
| Tier | Requests/Minute | Requests/Hour | Requests/Day |
|---|---|---|---|
| Free | 60 | 1,000 | 10,000 |
| Pro | 300 | 10,000 | 100,000 |
| Enterprise | 1,000 | 50,000 | 500,000 |
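One simple way to stay inside a tier's per-minute budget is to space requests evenly across the minute. The sketch below uses the requests-per-minute figures from the table above; the pacing class itself is an illustrative client-side convention, not part of any CODITECT SDK:

```python
import time

# Requests-per-minute budgets from the tier table above.
TIER_RPM = {"free": 60, "pro": 300, "enterprise": 1000}

class PacedClient:
    """Spaces calls evenly so a tier's per-minute budget is never exceeded.

    Hypothetical helper for illustration only.
    """

    def __init__(self, tier: str):
        self.min_interval = 60.0 / TIER_RPM[tier]  # seconds between requests
        self._last_call = 0.0

    def wait_turn(self) -> None:
        """Block until enough time has passed since the previous request."""
        now = time.monotonic()
        elapsed = now - self._last_call
        if elapsed < self.min_interval:
            time.sleep(self.min_interval - elapsed)
        self._last_call = time.monotonic()
```

For the Pro tier this yields one request every 0.2 seconds at most, which keeps you under 300 requests/minute without ever hitting a 429.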
## Rate Limit Headers

Every API response includes rate limit headers:

```http
X-RateLimit-Limit: 300
X-RateLimit-Remaining: 295
X-RateLimit-Reset: 1704794460
```

| Header | Description |
|---|---|
| `X-RateLimit-Limit` | Maximum requests allowed per window |
| `X-RateLimit-Remaining` | Requests remaining in current window |
| `X-RateLimit-Reset` | Unix timestamp when the limit resets |
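These headers let a client throttle proactively instead of waiting for a 429. The header names below are the ones documented above; the sleep-until-reset policy itself is a client-side choice, shown here as a sketch:

```python
import time

def throttle_from_headers(headers, now=None):
    """Return seconds to sleep before the next request.

    Reads the X-RateLimit-* headers documented above. When the window is
    exhausted, waits until the Unix timestamp in X-RateLimit-Reset.
    """
    remaining = int(headers.get("X-RateLimit-Remaining", 1))
    reset = int(headers.get("X-RateLimit-Reset", 0))
    now = time.time() if now is None else now
    if remaining <= 0:
        return max(0.0, reset - now)  # wait until the window resets
    return 0.0
```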
## Rate Limit Response

When you exceed the rate limit:

```http
HTTP/1.1 429 Too Many Requests
Retry-After: 45
```

```json
{
  "error": {
    "code": "rate_limit_exceeded",
    "message": "Too many requests. Please wait before retrying.",
    "details": {
      "limit": 300,
      "reset_at": "2026-01-09T10:35:00Z",
      "retry_after": 45
    }
  }
}
```
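The retry delay appears both in the `Retry-After` header and in the body's `details.retry_after` field, as shown above. A small helper can prefer the header and fall back to the body; the 60-second default is an assumption, not a documented value:

```python
def retry_delay(response) -> int:
    """Seconds to wait after a 429 response.

    Prefers the Retry-After header, falls back to details.retry_after in
    the JSON error body, and assumes 60 seconds if neither is present.
    """
    header = response.headers.get("Retry-After")
    if header is not None:
        return int(header)
    try:
        return int(response.json()["error"]["details"]["retry_after"])
    except (KeyError, ValueError):
        return 60  # assumed fallback, not a documented default
```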
## Endpoint-Specific Limits

Some endpoints have additional limits:

| Endpoint | Limit | Window |
|---|---|---|
| `POST /auth/login` | 10 | Per minute |
| `POST /auth/register` | 5 | Per minute |
| `POST /licenses/acquire` | 20 | Per minute |
| `POST /webhooks` | 10 | Per hour |
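Per-endpoint caps like these are a natural fit for a client-side sliding-window limiter that tracks recent call timestamps. The class below is an illustrative sketch, not part of any CODITECT SDK; the `max_calls`/`window` values mirror the table (e.g. 10 calls per 60 seconds for `POST /auth/login`):

```python
import time
from collections import deque

class SlidingWindowLimiter:
    """Allows at most max_calls within any rolling window of `window` seconds.

    Hypothetical client-side helper for the endpoint caps above.
    """

    def __init__(self, max_calls: int, window: float):
        self.max_calls = max_calls
        self.window = window
        self._calls = deque()  # timestamps of recent allowed calls

    def allow(self, now=None) -> bool:
        """Record and permit a call if the window has capacity."""
        now = time.monotonic() if now is None else now
        # Drop timestamps that have aged out of the window.
        while self._calls and now - self._calls[0] >= self.window:
            self._calls.popleft()
        if len(self._calls) < self.max_calls:
            self._calls.append(now)
            return True
        return False
```

Before calling a capped endpoint, check `limiter.allow()` and delay the request when it returns `False`, rather than consuming your budget on guaranteed 429s.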
## Best Practices

### 1. Implement Exponential Backoff

```python
import random
import time

def api_call_with_retry(func, max_retries=5):
    for attempt in range(max_retries):
        try:
            response = func()
            if response.status_code == 429:
                retry_after = int(response.headers.get('Retry-After', 60))
                # Add jitter to prevent thundering herd
                sleep_time = retry_after + random.uniform(0, 10)
                time.sleep(sleep_time)
                continue
            return response
        except Exception:
            # Back off exponentially on transport errors, with jitter
            wait = (2 ** attempt) + random.uniform(0, 1)
            time.sleep(wait)
    raise RuntimeError("Max retries exceeded")
```
### 2. Cache Responses

```python
import time
from functools import lru_cache

@lru_cache(maxsize=100)
def _get_plans_cached(cache_key):
    """Cache plans; a new cache_key every 5 minutes expires old entries."""
    return client.plans.list()

def get_plans():
    cache_key = int(time.time() / 300)  # 5-minute buckets
    return _get_plans_cached(cache_key)
```
### 3. Batch Operations

Instead of issuing individual requests:

```python
# Bad: multiple requests
for user_id in user_ids:
    client.users.get(user_id)

# Good: single batch request
client.users.list(ids=user_ids)
```
### 4. Monitor Rate Limit Headers

```python
import logging

logger = logging.getLogger(__name__)

def check_rate_limits(response):
    remaining = int(response.headers.get('X-RateLimit-Remaining', 0))
    if remaining < 10:
        logger.warning(f"Rate limit running low: {remaining} remaining")
```
## Handling 429 Responses

### Python

```python
import time

import requests

def make_request(url, headers):
    while True:
        response = requests.get(url, headers=headers)
        if response.status_code == 429:
            retry_after = int(response.headers.get('Retry-After', 60))
            print(f"Rate limited. Waiting {retry_after} seconds...")
            time.sleep(retry_after)
            continue
        return response
```
### JavaScript

```javascript
async function makeRequest(url, options) {
  while (true) {
    const response = await fetch(url, options);
    if (response.status === 429) {
      const retryAfter = parseInt(response.headers.get('Retry-After') || '60', 10);
      console.log(`Rate limited. Waiting ${retryAfter} seconds...`);
      await new Promise(r => setTimeout(r, retryAfter * 1000));
      continue;
    }
    return response;
  }
}
```
## Request Enterprise Limits

For higher limits, contact sales@coditect.ai with:
- Your organization ID
- Expected request volume
- Use case description