Rate Limits
Understand API rate limits, quotas, and best practices for handling rate limiting in your AmanahAgent integrations.
Overview
Rate limits protect the AmanahAgent API from abuse and ensure fair usage across all customers. They define the maximum number of requests you can make within specific time windows.
🎯 Why Rate Limits?
- Prevent API abuse and maintain service quality
- Ensure fair resource allocation across all users
- Protect against accidental infinite loops or runaway scripts
- Maintain consistent performance and reliability
Rate Limit Tiers
Free Plan
Perfect for testing and small projects
Pro Plan
For growing businesses and applications
Enterprise
Custom limits for large-scale operations
⚠️ WhatsApp Business API Limits
In addition to our API rate limits, WhatsApp Business API has its own messaging limits:
- New Business: 250 messages per day (increases based on phone number quality rating)
- Template Messages: Subject to WhatsApp's 24-hour messaging window rules
- Media Files: Limited to 16MB per file with supported formats only
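Because oversized media is rejected by WhatsApp, it can help to validate file size client-side before sending. Below is a minimal sketch assuming a Node.js client and a local file path; the helper name is hypothetical and it only checks size, not format:

import { statSync } from 'node:fs';

const MAX_WHATSAPP_MEDIA_BYTES = 16 * 1024 * 1024; // 16MB WhatsApp media limit

// Hypothetical pre-flight check before handing a file to your own upload logic
function assertMediaWithinLimit(filePath) {
  const { size } = statSync(filePath);
  if (size > MAX_WHATSAPP_MEDIA_BYTES) {
    throw new Error(`Media file is ${size} bytes, which exceeds the 16MB WhatsApp limit`);
  }
}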
Rate Limit Headers
Every API response includes headers that help you track your current rate limit status and plan your requests accordingly.
| Header | Description | Example |
|---|---|---|
| X-RateLimit-Limit | Maximum requests allowed in the current window | 100 |
| X-RateLimit-Remaining | Requests remaining in the current window | 45 |
| X-RateLimit-Reset | Unix timestamp when the rate limit resets | 1705320000 |
| X-RateLimit-Window | Rate limit window duration in seconds | 3600 |
| Retry-After | Seconds to wait before making another request (only when rate limited) | 300 |
Example Response Headers
HTTP/1.1 200 OK
Content-Type: application/json
X-RateLimit-Limit: 100
X-RateLimit-Remaining: 45
X-RateLimit-Reset: 1705320000
X-RateLimit-Window: 3600
Date: Mon, 15 Jan 2024 10:30:00 GMT

{
  "message_id": "msg_abc123xyz789",
  "status": "sent"
}
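For a quick check during development, you can read these headers straight off a fetch response and log them. A minimal sketch (a fuller tracking class appears under Best Practices below):

// Log the rate limit headers returned with every API response
function logRateLimitStatus(response) {
  const limit = response.headers.get('X-RateLimit-Limit');
  const remaining = response.headers.get('X-RateLimit-Remaining');
  const reset = Number(response.headers.get('X-RateLimit-Reset'));

  console.log(
    `Rate limit: ${remaining}/${limit} remaining, resets at ${new Date(reset * 1000).toISOString()}`
  );
}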
HTTP Response Codes
When you exceed rate limits, the API returns specific HTTP status codes to help you handle the situation appropriately.
429 Too Many Requests
Returned when you exceed your rate limit. The response includes a Retry-After header indicating how long to wait before making another request.
Example Response
HTTP/1.1 429 Too Many Requests
Content-Type: application/json
X-RateLimit-Limit: 100
X-RateLimit-Remaining: 0
X-RateLimit-Reset: 1705320000
Retry-After: 300

{
  "error": {
    "code": "RATE_LIMIT_EXCEEDED",
    "message": "Rate limit exceeded. Please wait 300 seconds before making another request.",
    "details": {
      "limit": 100,
      "window": "1 hour",
      "reset_time": "2024-01-15T11:00:00Z"
    }
  }
}
503 Service Unavailable
Returned during periods of high load when our servers are temporarily overwhelmed. This is different from rate limiting and indicates a temporary service issue.
Example Response
HTTP/1.1 503 Service Unavailable
Content-Type: application/json
Retry-After: 60

{
  "error": {
    "code": "SERVICE_UNAVAILABLE",
    "message": "Service temporarily unavailable. Please retry after 60 seconds.",
    "details": {
      "reason": "high_load",
      "retry_after": 60
    }
  }
}
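Both 429 and 503 responses carry a Retry-After header, but it is worth distinguishing them in logs and metrics: 429 means your client is over its quota, while 503 is a temporary service-side issue. A minimal sketch of that branching, assuming the fetch-based setup used elsewhere on this page:

// Decide how to react to a response based on its status code
function classifyRetry(response) {
  const retryAfter = parseInt(response.headers.get('Retry-After'), 10) || 60;

  if (response.status === 429) {
    // Over quota: slow down and respect the server-provided wait time
    return { reason: 'rate_limited', waitSeconds: retryAfter };
  }

  if (response.status === 503) {
    // Temporary service issue: retry after the suggested delay
    return { reason: 'service_unavailable', waitSeconds: retryAfter };
  }

  return null; // No retry needed
}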
Best Practices for Handling Rate Limits
1. Implement Exponential Backoff
When you receive a 429 response, wait progressively longer between retries to avoid overwhelming the API.
JavaScript Example
async function sendMessageWithRetry(messageData, maxRetries = 3) {
  let retryCount = 0;

  while (retryCount < maxRetries) {
    try {
      const response = await fetch('https://api.amanahagent.cloud/v1/messages', {
        method: 'POST',
        headers: {
          'Authorization': 'Bearer YOUR_API_KEY',
          'Content-Type': 'application/json'
        },
        body: JSON.stringify(messageData)
      });

      if (response.status === 429) {
        // Respect the server-provided wait time, falling back to exponential backoff
        const retryAfter = parseInt(response.headers.get('Retry-After')) || (2 ** retryCount);
        console.log(`Rate limited. Waiting ${retryAfter} seconds...`);
        await sleep(retryAfter * 1000);
        retryCount++;
        continue;
      }

      if (!response.ok) {
        throw new Error(`HTTP ${response.status}: ${response.statusText}`);
      }

      return await response.json();
    } catch (error) {
      if (retryCount === maxRetries - 1) {
        throw error;
      }

      const backoffTime = (2 ** retryCount) * 1000; // Exponential backoff
      console.log(`Request failed. Retrying in ${backoffTime / 1000} seconds...`);
      await sleep(backoffTime);
      retryCount++;
    }
  }

  // All retries were consumed by rate limiting
  throw new Error('Max retries exceeded');
}

function sleep(ms) {
  return new Promise(resolve => setTimeout(resolve, ms));
}
Python Example
import time

import requests
from requests.adapters import HTTPAdapter
from urllib3.util.retry import Retry


def create_session_with_retries():
    session = requests.Session()

    # Define retry strategy with exponential backoff
    retry_strategy = Retry(
        total=3,
        status_forcelist=[429, 500, 502, 503, 504],
        backoff_factor=2,  # Exponential backoff
        allowed_methods=["POST"],  # POST is not retried by default, so allow it explicitly
        raise_on_status=False,
    )

    adapter = HTTPAdapter(max_retries=retry_strategy)
    session.mount("http://", adapter)
    session.mount("https://", adapter)
    return session


def send_message_with_retry(message_data):
    session = create_session_with_retries()

    response = session.post(
        'https://api.amanahagent.cloud/v1/messages',
        headers={
            'Authorization': 'Bearer YOUR_API_KEY',
            'Content-Type': 'application/json',
        },
        json=message_data,
        timeout=30,
    )

    if response.status_code == 429:
        retry_after = int(response.headers.get('Retry-After', 60))
        print(f"Rate limited. Waiting {retry_after} seconds...")
        time.sleep(retry_after)
        return send_message_with_retry(message_data)

    response.raise_for_status()
    return response.json()
2. Monitor Rate Limit Headers
Always check rate limit headers in responses to proactively manage your API usage and avoid hitting limits.
class RateLimitManager {
  constructor() {
    this.limits = {};
  }

  updateLimits(response) {
    this.limits = {
      limit: parseInt(response.headers.get('X-RateLimit-Limit')),
      remaining: parseInt(response.headers.get('X-RateLimit-Remaining')),
      reset: parseInt(response.headers.get('X-RateLimit-Reset')),
      window: parseInt(response.headers.get('X-RateLimit-Window'))
    };
  }

  shouldThrottle() {
    // No header data yet: nothing to throttle against
    if (!Number.isFinite(this.limits.remaining)) return false;

    // If less than 10% of requests remain, slow down
    const usagePercent = (this.limits.limit - this.limits.remaining) / this.limits.limit;
    return usagePercent > 0.9;
  }

  getThrottleDelay() {
    if (!this.shouldThrottle()) return 0;

    const now = Math.floor(Date.now() / 1000);
    const timeUntilReset = Math.max(0, this.limits.reset - now);

    // No requests left: wait for the window to reset
    if (this.limits.remaining === 0) return timeUntilReset * 1000;

    // Distribute remaining requests evenly over the remaining time
    return (timeUntilReset / this.limits.remaining) * 1000;
  }
}

// Usage
const rateLimitManager = new RateLimitManager();

async function makeRequest(url, options) {
  // Check if we should throttle before sending
  const delay = rateLimitManager.getThrottleDelay();
  if (delay > 0) {
    await sleep(delay);
  }

  const response = await fetch(url, options);
  rateLimitManager.updateLimits(response);
  return response;
}
3. Implement Request Queuing
Use a queue system to manage API requests and ensure you stay within rate limits, especially for high-volume applications.
class APIQueue {
  constructor(requestsPerMinute = 100) {
    this.queue = [];
    this.processing = false;
    this.requestsPerMinute = requestsPerMinute;
    this.minInterval = (60 * 1000) / requestsPerMinute; // ms between requests
    this.lastRequestTime = 0;
  }

  async enqueue(requestFunction) {
    return new Promise((resolve, reject) => {
      this.queue.push({ request: requestFunction, resolve, reject });
      this.processQueue();
    });
  }

  async processQueue() {
    if (this.processing || this.queue.length === 0) {
      return;
    }

    this.processing = true;

    while (this.queue.length > 0) {
      const { request, resolve, reject } = this.queue.shift();

      try {
        // Ensure minimum interval between requests
        const now = Date.now();
        const timeSinceLastRequest = now - this.lastRequestTime;
        const delay = Math.max(0, this.minInterval - timeSinceLastRequest);

        if (delay > 0) {
          await sleep(delay);
        }

        const result = await request();
        this.lastRequestTime = Date.now();
        resolve(result);
      } catch (error) {
        reject(error);
      }
    }

    this.processing = false;
  }
}

// Usage
const apiQueue = new APIQueue(90); // 90 requests per minute (under the limit)

// Queue requests instead of making them directly
const sendMessage = (messageData) => {
  return apiQueue.enqueue(async () => {
    const response = await fetch('https://api.amanahagent.cloud/v1/messages', {
      method: 'POST',
      headers: {
        'Authorization': 'Bearer YOUR_API_KEY',
        'Content-Type': 'application/json'
      },
      body: JSON.stringify(messageData)
    });

    if (!response.ok) {
      throw new Error(`HTTP ${response.status}: ${response.statusText}`);
    }

    return response.json();
  });
};
4. Optimize Request Patterns
✅ Efficient Patterns
- Batch operations when possible
- Cache frequently accessed data (see the caching sketch after these lists)
- Use webhooks instead of polling
- Implement client-side rate limiting
- Spread requests evenly over time
- Use appropriate pagination limits
❌ Inefficient Patterns
- Making rapid sequential requests
- Polling APIs frequently without need
- Retrying immediately after failures
- Not caching API responses
- Sending duplicate requests
- Ignoring rate limit headers
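As a concrete example of the caching pattern above, a small in-memory cache with a time-to-live avoids spending requests on data that rarely changes. This is a minimal sketch; the /v1/contacts path and the 5-minute TTL are illustrative assumptions, not documented values:

// Simple in-memory cache with a time-to-live, keyed by URL
const cache = new Map();
const CACHE_TTL_MS = 5 * 60 * 1000; // assumed 5-minute TTL for illustration

async function cachedGet(url) {
  const entry = cache.get(url);
  if (entry && Date.now() - entry.fetchedAt < CACHE_TTL_MS) {
    return entry.data; // Served from cache, no API request consumed
  }

  const response = await fetch(url, {
    headers: { 'Authorization': 'Bearer YOUR_API_KEY' }
  });
  if (!response.ok) {
    throw new Error(`HTTP ${response.status}: ${response.statusText}`);
  }

  const data = await response.json();
  cache.set(url, { data, fetchedAt: Date.now() });
  return data;
}

// Usage: repeated calls within the TTL hit the cache instead of the API
// (the /v1/contacts endpoint below is a hypothetical example)
// const contacts = await cachedGet('https://api.amanahagent.cloud/v1/contacts');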
Upgrading for Higher Limits
If your application requires higher rate limits, consider upgrading your plan or contacting us for custom enterprise limits.
🚀 Upgrade to Pro
Get 10x higher rate limits and advanced features for growing applications.
- 1,000 requests per hour
- 25,000 messages per month
- Priority support
- Advanced analytics
⭐ Enterprise Solutions
Custom rate limits and dedicated infrastructure for large-scale operations.
- Custom rate limits
- Dedicated infrastructure
- 24/7 dedicated support
- SLA guarantees
Monitoring Your Usage
Track your API usage and rate limit consumption through our dashboard and analytics API.
Dashboard Analytics
View real-time and historical usage data in your AmanahAgent dashboard.
- Current Usage: 847 requests this hour
- Remaining: 153 requests available
- Reset Time: 23m until limit resets
Usage API
Query your current rate limit status programmatically.
GET https://api.amanahagent.cloud/v1/usage/rate-limits

{
  "current_limits": {
    "requests_per_minute": {
      "limit": 100,
      "used": 23,
      "remaining": 77,
      "reset_time": "2024-01-15T10:31:00Z"
    },
    "requests_per_hour": {
      "limit": 1000,
      "used": 847,
      "remaining": 153,
      "reset_time": "2024-01-15T11:00:00Z"
    },
    "messages_per_month": {
      "limit": 25000,
      "used": 12458,
      "remaining": 12542,
      "reset_time": "2024-02-01T00:00:00Z"
    }
  },
  "plan": "pro",
  "usage_history": {
    "last_24h": 2847,
    "last_7d": 18945,
    "last_30d": 12458
  }
}
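For example, you could poll this endpoint periodically and warn when hourly usage approaches the limit. A minimal sketch; the field names match the sample response above, and the 90% threshold is an arbitrary choice:

// Warn when hourly usage crosses a chosen threshold (90% here, picked arbitrarily)
async function checkHourlyUsage() {
  const response = await fetch('https://api.amanahagent.cloud/v1/usage/rate-limits', {
    headers: { 'Authorization': 'Bearer YOUR_API_KEY' }
  });
  if (!response.ok) {
    throw new Error(`HTTP ${response.status}: ${response.statusText}`);
  }

  const usage = await response.json();
  const hourly = usage.current_limits.requests_per_hour;

  if (hourly.used / hourly.limit > 0.9) {
    console.warn(`Approaching hourly limit: ${hourly.used}/${hourly.limit} requests used`);
  }

  return hourly;
}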