Aperture how-to guides

Create a custom webhook integration to send Aperture event data to your own services.

Check the status of Aperture quota buckets and manually refill balances using the Aperture dashboard.

Configure Aperture grants to control which models each user or group can access.

Configure Aperture to export LLM usage data to an Amazon S3 bucket for compliance, analysis, and long-term retention.

Configure Aperture grants to control which MCP tools, resources, and templates users can access.

Send AI request data from Aperture to Cerbos for fine-grained authorization decisions on LLM access.

Route AI usage data from Aperture to Cribl for processing and forwarding to your observability destinations.

Send AI tool use data from Aperture to Oso for authorization decisions and observability.

Configure a shared quota bucket in Aperture to cap total AI spending across your organization.

Configure quota buckets in Aperture to set spending limits for individual users.

Configure a pre-request hook to inspect, modify, or block LLM requests before they reach the provider.
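To make "pre-request hook" concrete, the sketch below shows the kind of service such a hook could call: it receives a chat-style request body, blocks it when the prompt contains a disallowed term, and otherwise allows it through. The `/hook` path, the payload shape, and the allow/block response format are illustrative assumptions, not Aperture's actual hook contract; see the guide for the real interface.

```python
# Hypothetical pre-request hook service. The /hook path, request payload,
# and {"action": ...} response shape are illustrative assumptions, not
# Aperture's actual hook contract.
from fastapi import FastAPI, Request

app = FastAPI()

# Example policy: block prompts that appear to contain secrets.
BLOCKED_TERMS = {"api_key", "password"}

@app.post("/hook")
async def pre_request_hook(request: Request) -> dict:
    body = await request.json()
    # Assume an OpenAI-style chat payload: {"model": ..., "messages": [...]}.
    text = " ".join(m.get("content", "") for m in body.get("messages", []))
    if any(term in text.lower() for term in BLOCKED_TERMS):
        return {"action": "block", "reason": "prompt contains a disallowed term"}
    # Returning the (optionally modified) body lets the proxy forward it on.
    return {"action": "allow", "request": body}
```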

Configure a self-hosted or locally running LLM server as a provider in Aperture so your team can access private models through your tailnet.

Configure a Vertex AI Express provider in Aperture with a Google Cloud API key so your team can access Gemini models through your tailnet without managing service accounts.

Configure administrator roles for managing Aperture settings and accessing all user data.

Configure an Amazon Bedrock provider in Aperture so your team can access foundation models through AWS.

Configure an Anthropic provider in Aperture so your team can access Claude models.

Configure a Google Gemini provider in Aperture so your team can access Gemini models using the direct Gemini API.

Configure an OpenAI provider in Aperture so your team can access GPT models.

Configure an OpenRouter provider in Aperture so your team can access models from multiple providers through a single aggregator.

Configure a Vercel AI Gateway provider in Aperture so your team can access models from multiple LLM providers.

Configure a Vertex AI provider in Aperture with a GCP service account and key file so your team can access Gemini and Claude models.

Configure Claude Code to route requests through your Aperture proxy.

Configure OpenAI Codex to route requests through your Aperture proxy.

Configure Gemini CLI, Roo Code, Cline, and other OpenAI-compatible tools to route requests through Aperture.
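For tools that speak the OpenAI API, routing through Aperture generally comes down to pointing the client at the proxy's base URL instead of api.openai.com. Below is a minimal sketch using the official openai Python SDK; the proxy hostname and placeholder key are assumptions for illustration, and the guide covers the tool-specific settings (environment variables or config files) for Gemini CLI, Roo Code, Cline, and others.

```python
# Minimal sketch: any OpenAI-compatible client can target an Aperture proxy
# by overriding the base URL. The hostname below is a placeholder, and whether
# a real API key is required depends on your Aperture configuration.
from openai import OpenAI

client = OpenAI(
    base_url="https://aperture.example.ts.net/v1",  # placeholder proxy address
    api_key="placeholder",  # the proxy typically holds the real provider key
)

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "Hello through the proxy"}],
)
print(response.choices[0].message.content)
```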

Configure OpenCode to route requests through your Aperture proxy.