Export usage data to S3

Aperture by Tailscale is currently in beta.

The Aperture dashboard shows recent usage data, but it is not designed for long-term retention. You can configure a retention policy to control how long Aperture keeps capture data locally. Exporting usage data to S3-compatible storage gives you a durable record of every LLM session for compliance auditing, cost analysis, and custom reporting.

When require_export is enabled in the retention configuration, Aperture only purges local capture data after it has been successfully exported to S3.
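
For example, the relevant retention setting might look like the following. The exact shape of the retention section depends on your existing configuration, so treat this as an illustrative sketch:

"retention": {
  "require_export": true
}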

Aperture's S3 exporter periodically writes usage records as CBOR-encoded, zstd-compressed files to the bucket you configure. It supports Amazon S3, Google Cloud Storage, MinIO, Backblaze B2, and other S3-compatible services.

Prerequisites

Before you begin, ensure you have the following:

  • An Aperture instance with at least one provider configured.
  • Admin access to the Aperture configuration.
  • An S3-compatible bucket with write access. You need the bucket name, region, and credentials (access key ID and secret).

Configure the S3 exporter

Open the Settings page in the Aperture dashboard and add an exporters section to your configuration:

"exporters": {
  "s3": {
    "bucket_name": "aperture-exports",
    "region": "us-east-1",
    "access_key_id": "<aws-access-key-id>",
    "access_secret": "<aws-secret-access-key>"
  }
}

Setting bucket_name to a non-empty value enables the S3 exporter. Aperture begins exporting usage records automatically on the next export cycle.

Use a non-Amazon S3-compatible service

For Google Cloud Storage, MinIO, Backblaze B2, or any other service with an S3-compatible API, use the same configuration and add an endpoint field set to the service's S3-compatible API URL:

"exporters": {
  "s3": {
    "endpoint": "https://storage.googleapis.com",
    "bucket_name": "aperture-exports",
    "region": "us-east-1",
    "access_key_id": "<access-key-id>",
    "access_secret": "<secret-key>"
  }
}

The region field is required even for non-AWS services because the AWS SDK validates it.
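
As another illustration, a self-hosted MinIO deployment might use a configuration like the following. The endpoint URL and credentials are placeholders; for services that ignore region, any valid-looking value such as us-east-1 is typically fine:

"exporters": {
  "s3": {
    "endpoint": "https://minio.example.com:9000",
    "bucket_name": "aperture-exports",
    "region": "us-east-1",
    "access_key_id": "<minio-access-key>",
    "access_secret": "<minio-secret-key>"
  }
}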

Customize export behavior

You can adjust how frequently Aperture exports data and how many records it includes per batch using the every and limit fields:

"exporters": {
  "s3": {
    "bucket_name": "aperture-exports",
    "region": "us-east-1",
    "access_key_id": "<aws-access-key-id>",
    "access_secret": "<aws-secret-access-key>",
    "prefix": "prod",
    "every": 1800,
    "limit": 2000
  }
}

The following table summarizes these fields:

| Field | Default | Description |
| --- | --- | --- |
| prefix | "" | (Optional) Path prefix for S3 objects. Must not end with /. |
| every | 3600 | Seconds between export cycles. The example above exports every 30 minutes. |
| limit | 1000 | Maximum records per export batch. Aperture caps this at 10000. |

Export file format

Aperture exports usage data as CBOR-encoded, zstd-compressed files. CBOR is a binary serialization format with a JSON-like data model; unlike JSON, it can natively carry binary data such as images and audio from multimodal LLM requests.

Each export file is named ts-aperture-export-<unix-timestamp>.cbor.zstd and contains a sequence of CBOR-encoded records. Each record includes the following top-level fields:

| Field | Type | Description |
| --- | --- | --- |
| id | integer | Record identifier. |
| ver | integer | Schema version; currently 2. |
| timestamp | timestamp | When the LLM request occurred. |
| identity | object | User and node identification: login name, stable node ID, and tags. |
| model | string | LLM model name. |
| api_type | string | API type, such as oai_completions or ant_messages. |
| usage | object | Token counts: input, output, cached, and reasoning tokens. |
| estimated_cost | object | Estimated dollar cost of the request. Omitted if unavailable. |
| duration_ms | integer | Request duration in milliseconds. |
| capture_id | string | Unique identifier for this capture. |
| session_id | string | Session grouping identifier. |
| path | string | API endpoint path. |
| status_code | integer | HTTP response status code. |
| capture | object | Full request and response data, including headers, processed JSON bodies, tool use data, and, optionally, raw binary request and response bodies. |
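
For illustration, a single decoded record might look like the following when rendered as JSON. The top-level fields match the table above, but the nested key names and all values are illustrative, and the capture object is omitted for brevity:

{
  "id": 4182,
  "ver": 2,
  "timestamp": "2025-01-15T09:30:00Z",
  "identity": {"login": "alice@example.com", "node_id": "nEXAMPLE1234", "tags": ["tag:dev"]},
  "model": "example-model",
  "api_type": "ant_messages",
  "usage": {"input": 1200, "output": 340, "cached": 0, "reasoning": 0},
  "estimated_cost": {"usd": 0.0123},
  "duration_ms": 2150,
  "capture_id": "cap-8f2e41",
  "session_id": "sess-1d9c07",
  "path": "/v1/messages",
  "status_code": 200
}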

To read export files, decompress with a zstd library, then decode the CBOR sequence. Common libraries include cbor2 and zstandard for Python, fxamacker/cbor and klauspost/compress for Go, and the zstd CLI tool for decompression.
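
As a minimal sketch using the Python libraries mentioned above (pip install zstandard cbor2; the filename is an example):

import io

import cbor2
import zstandard

# Decompress the zstd stream and decode CBOR records one at a time
# until the stream is exhausted.
with open("ts-aperture-export-1700000000.cbor.zstd", "rb") as f:
    reader = io.BufferedReader(zstandard.ZstdDecompressor().stream_reader(f))
    while True:
        try:
            record = cbor2.load(reader)
        except EOFError:
            break
        print(record["timestamp"], record["model"], record["usage"])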

Verify the export

After saving the configuration, wait for the next export cycle (based on your every setting) and check the S3 bucket for new objects. The objects appear under the configured prefix (if set) with the .cbor.zstd file extension.

If no objects appear after the expected interval, check the Aperture server logs for S3-related errors such as authentication failures or permission issues.
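
If you have boto3 installed, a quick spot check from Python might look like this. The bucket name, region, and prefix are examples, and for a non-AWS service you would also pass endpoint_url to the client:

import boto3

# List exported objects under the configured prefix.
s3 = boto3.client("s3", region_name="us-east-1")
resp = s3.list_objects_v2(Bucket="aperture-exports", Prefix="prod/")
for obj in resp.get("Contents", []):
    print(obj["Key"], obj["Size"], obj["LastModified"])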

Next steps