Export usage data to S3
Last validated:
The Aperture dashboard shows recent usage data, but it is not designed for long-term retention. Exporting usage data to S3-compatible storage gives you a durable record of every LLM session for compliance auditing, cost analysis, and custom reporting. You can also configure a retention policy to control how long Aperture keeps capture data locally.
When require_export is enabled in the retention configuration, Aperture only purges local capture data after it has been successfully exported to S3.
Aperture's S3 exporter periodically writes usage records to the bucket you configure as CBOR-encoded, zstd-compressed files. It supports Amazon S3, Google Cloud Storage, MinIO, Backblaze B2, and other S3-compatible services.
Prerequisites
Before you begin, ensure you have the following:
- An Aperture instance with at least one provider configured.
- Admin access to the Aperture configuration.
- An S3-compatible bucket with write access. You need the bucket name, region, and credentials (access key ID and secret).
Configure the S3 exporter
Open the Settings page in the Aperture dashboard and add an exporters section to your configuration:
```json
"exporters": {
  "s3": {
    "bucket_name": "aperture-exports",
    "region": "us-east-1",
    "access_key_id": "<aws-access-key-id>",
    "access_secret": "<aws-secret-access-key>"
  }
}
```
Setting bucket_name to a non-empty value enables the S3 exporter. Aperture begins exporting usage records automatically on the next export cycle.
Use a non-Amazon S3-compatible service
Aperture also supports S3-compatible storage services beyond AWS. For Google Cloud Storage, MinIO, Backblaze B2, or any other service with an S3-compatible API, use the same configuration and add an endpoint field set to the service's S3-compatible API URL:
```json
"exporters": {
  "s3": {
    "endpoint": "https://storage.googleapis.com",
    "bucket_name": "aperture-exports",
    "region": "us-east-1",
    "access_key_id": "<access-key-id>",
    "access_secret": "<secret-key>"
  }
}
```
The region field is required even for non-AWS services because the AWS SDK validates it.
Customize export behavior
You can adjust how frequently Aperture exports data, how many records it includes per batch, and where objects are written within the bucket using the every, limit, and prefix fields:
```json
"exporters": {
  "s3": {
    "bucket_name": "aperture-exports",
    "region": "us-east-1",
    "access_key_id": "<aws-access-key-id>",
    "access_secret": "<aws-secret-access-key>",
    "prefix": "prod",
    "every": 1800,
    "limit": 2000
  }
}
```
The following table summarizes these fields:
| Field | Default | Description |
|---|---|---|
| prefix | "" | Optional. Path prefix for S3 objects. Must not end with /. |
| every | 3600 | Seconds between export cycles. The example above exports every 30 minutes. |
| limit | 1000 | Maximum records per export batch. Aperture caps this at 10000. |
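As a rough sanity check on these settings, you can confirm that one batch per cycle keeps pace with your expected capture volume. The sketch below is a hypothetical helper, not part of Aperture; it only does the arithmetic implied by the table above.

```python
# Hypothetical sizing check (not an Aperture API): can `limit` records
# per `every` seconds keep up with an expected capture rate?

def exporter_keeps_up(every_s: int, limit: int, captures_per_hour: float) -> bool:
    """True if one full batch per export cycle can drain the expected volume."""
    cycles_per_hour = 3600 / every_s
    max_per_hour = limit * cycles_per_hour  # records exportable per hour
    return max_per_hour >= captures_per_hour

# With the example settings (every=1800, limit=2000), Aperture can export
# up to 4000 records per hour, so 3500 captures/hour is sustainable.
print(exporter_keeps_up(1800, 2000, 3500))  # True
```

If the check fails, either shorten the interval with every or raise limit (up to the 10000 cap).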
Export file format
Aperture exports usage data as CBOR-encoded, zstd-compressed files. CBOR is a binary serialization format with a JSON-like data model that can also carry raw binary data, such as images and audio from multimodal LLM requests.
Each export file is named ts-aperture-export-<unix-timestamp>.cbor.zstd and contains a sequence of CBOR-encoded records. Each record includes the following top-level fields:
| Field | Type | Description |
|---|---|---|
| id | integer | Record identifier. |
| ver | integer | Schema version; currently 2. |
| timestamp | timestamp | When the LLM request occurred. |
| identity | object | User and node identification: login name, stable node ID, and tags. |
| model | string | LLM model name. |
| api_type | string | API type, such as oai_completions or ant_messages. |
| usage | object | Token counts: input, output, cached, and reasoning tokens. |
| estimated_cost | object | Estimated dollar cost of the request. Omitted if unavailable. |
| duration_ms | integer | Request duration in milliseconds. |
| capture_id | string | Unique identifier for this capture. |
| session_id | string | Session grouping identifier. |
| path | string | API endpoint path. |
| status_code | integer | HTTP response status code. |
| capture | object | Full request and response data, including headers, processed JSON bodies, tool use data, and, optionally, raw binary request and response bodies. |
To read export files, decompress them with a zstd library, then decode the CBOR sequence. Common libraries include cbor2 and zstandard for Python and fxamacker/cbor and klauspost/compress for Go; the zstd CLI tool can also handle decompression.
Verify the export
After saving the configuration, wait for the next export cycle (based on your every setting) and check the S3 bucket for new objects. The objects appear under the configured prefix (if set) with the .cbor.zstd file extension.
If no objects appear after the expected interval, check the Aperture server logs for S3-related errors such as authentication failures or permission issues.
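If you script this verification, you can filter the object keys returned by your cloud CLI or SDK against the export naming scheme described above. The helper below is hypothetical (not part of Aperture) and uses only the ts-aperture-export-<unix-timestamp>.cbor.zstd pattern from this page.

```python
# Hypothetical helper: given object keys listed from the bucket, keep only
# Aperture export files and extract the Unix timestamp from each name.
import re

EXPORT_KEY = re.compile(r"(?:.*/)?ts-aperture-export-(\d+)\.cbor\.zstd$")

def export_timestamps(keys: list[str]) -> list[int]:
    """Return sorted Unix timestamps for keys matching the export scheme."""
    return sorted(int(m.group(1))
                  for key in keys
                  if (m := EXPORT_KEY.match(key)))

keys = ["prod/ts-aperture-export-1700000000.cbor.zstd",
        "prod/other-file.txt",
        "prod/ts-aperture-export-1700001800.cbor.zstd"]
print(export_timestamps(keys))  # [1700000000, 1700001800]
```

Comparing the newest timestamp against the current time tells you whether the most recent export cycle (per your every setting) actually landed.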
Next steps
- Build a custom webhook to send real-time event data to your own services.
- Review the Aperture dashboard reference for details on the built-in usage views.
- Refer to the exporters configuration reference for the complete field reference.