# Rate Limits & Data Retention
Throughput limits and automatic data expiration policies.
## Rate limits
mail-catcher does not impose application-level rate limits. Throughput is governed by AWS service limits:
| Resource | Default Limit | Notes |
|---|---|---|
| Lambda concurrent executions | 1,000 per region | Shared across all Lambda functions in the account |
| DynamoDB read capacity | On-demand (auto-scaling) | No pre-provisioned capacity to manage |
| S3 request rate | 5,500 GET/s per prefix | More than sufficient for typical usage |
For most E2E testing and email ingestion use cases, these defaults are well above what you'll need.
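If a bulk test run does approach the Lambda concurrency quota, pacing requests on the client side is usually enough. Here is a minimal sketch of a concurrency limiter for a test harness (this is illustrative only; mail-catcher itself imposes no such limit, and the function name is hypothetical):

```typescript
// Run an async function over items with at most `limit` in flight at once.
// Useful for keeping a bulk test sender under AWS-side concurrency quotas.
async function mapWithConcurrency<T, R>(
  items: T[],
  limit: number,
  fn: (item: T) => Promise<R>,
): Promise<R[]> {
  const results: R[] = new Array(items.length);
  let next = 0;
  async function worker(): Promise<void> {
    while (next < items.length) {
      const i = next++; // safe: no await between the check and the increment
      results[i] = await fn(items[i]);
    }
  }
  const workers = Math.min(limit, items.length);
  await Promise.all(Array.from({ length: workers }, worker));
  return results;
}

// Example: send 4 test emails with at most 2 concurrent requests.
mapWithConcurrency([1, 2, 3, 4], 2, async (x) => x * 2).then((r) =>
  console.log(r),
);
```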
## Data retention
mail-catcher is designed as an ephemeral testing tool. All data auto-expires:
| Data | Retention | Mechanism | Location |
|---|---|---|---|
| Email metadata | 7 days | DynamoDB TTL | EmailsTable |
| Raw .eml files | 8 days | S3 lifecycle rule | EmailBucket/incoming/ |
| Parsed attachments | 8 days | S3 lifecycle rule | EmailBucket/attachments/ |
| API keys | Permanent | Manual revocation only | ApiKeysTable |
The 1-day buffer between DynamoDB (7 days) and S3 (8 days) ensures raw files remain accessible until after their index entries expire.
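The DynamoDB side of this scheme amounts to writing an expiry timestamp on each item at ingest time. A minimal sketch (the function name is illustrative; the 7-day window matches the table above, and DynamoDB TTL attributes must be epoch seconds):

```typescript
// Compute a DynamoDB TTL value: epoch seconds, `retentionDays` from ingest.
// DynamoDB deletes the item shortly after this timestamp passes.
const RETENTION_DAYS = 7;

function ttlFor(ingestedAt: Date, retentionDays: number = RETENTION_DAYS): number {
  const expiresAtMs = ingestedAt.getTime() + retentionDays * 24 * 60 * 60 * 1000;
  return Math.floor(expiresAtMs / 1000); // DynamoDB TTL expects epoch seconds
}

// An email ingested now expires exactly 7 days (604,800 seconds) later.
const now = new Date();
console.log(ttlFor(now) - Math.floor(now.getTime() / 1000)); // 604800
```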
## Customizing retention
You can adjust retention periods by modifying the infrastructure code in packages/infra/src/index.ts:
- DynamoDB TTL: change the `ttl` calculation in the ingest handler
- S3 lifecycle: change the `expiration` setting on the bucket lifecycle rule
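As a rough illustration of those two changes, extending both retention periods while preserving the 1-day buffer might look like the following (variable names and the lifecycle-rule shape are assumptions, not the exact code in `packages/infra/src/index.ts`):

```typescript
// Hypothetical retention settings; the real infra code may name these differently.
const DYNAMO_RETENTION_DAYS = 14;                    // was 7
const S3_RETENTION_DAYS = DYNAMO_RETENTION_DAYS + 1; // keep the 1-day buffer (was 8)

// Shape of an S3 lifecycle rule as accepted by PutBucketLifecycleConfiguration.
const lifecycleRule = {
  ID: "expire-incoming",
  Status: "Enabled",
  Filter: { Prefix: "incoming/" },
  Expiration: { Days: S3_RETENTION_DAYS },
};

console.log(lifecycleRule.Expiration.Days); // 15
```

Keeping the S3 value one day above the DynamoDB value preserves the guarantee that raw files outlive their index entries.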
After changing, redeploy:
```
bun run deploy:dev
```

Existing records keep their original expiration. New records use the updated values.