Aperture how-to guides
Build a custom webhook
Create a custom webhook integration to send Aperture event data to your own services.
Check and refill budgets
Check the status of Aperture quota buckets and manually refill balances using the Aperture dashboard.
Control model access
Configure Aperture grants to control which models each user or group can access.
Export usage data to S3
Configure Aperture to export LLM usage data to an Amazon S3 bucket for compliance, analysis, and long-term retention.
Grant access to MCP tools
Configure Aperture grants to control which MCP tools, resources, and templates users can access.
Integrate Cerbos with Aperture
Send AI request data from Aperture to Cerbos for fine-grained authorization decisions on LLM access.
Integrate Cribl with Aperture
Route AI usage data from Aperture to Cribl for processing and forwarding to your observability destinations.
Integrate Oso with Aperture
Send AI tool use data from Aperture to Oso for authorization decisions and observability.
Set a team-wide budget
Configure a shared quota bucket in Aperture to cap total AI spending across your team.
Set per-user spending limits
Configure quota buckets in Aperture to set spending limits for individual users.
Set up a guardrail
Configure a pre-request hook to inspect, modify, or block LLM requests before they reach the provider.
Set up a self-hosted provider
Configure a self-hosted or locally running LLM server as a provider in Aperture so your team can access private models through your tailnet.
Set up a Vertex AI Express provider
Configure a Vertex AI Express provider in Aperture with a Google Cloud API key so your team can access Gemini models through your tailnet without managing service accounts.
Set up admin access
Configure administrator roles for managing Aperture settings and accessing all user data.
Set up Amazon Bedrock
Configure an Amazon Bedrock provider in Aperture so your team can access foundation models through AWS.
Set up Anthropic
Configure an Anthropic provider in Aperture so your team can access Claude models.
Set up Google Gemini
Configure a Google Gemini provider in Aperture so your team can access Gemini models using the direct Gemini API.
Set up OpenAI
Configure an OpenAI provider in Aperture so your team can access GPT models.
Set up OpenRouter
Configure an OpenRouter provider in Aperture so your team can access models from multiple providers through a single aggregator.
Set up Vercel AI Gateway
Configure a Vercel AI Gateway provider in Aperture so your team can access models from multiple LLM providers.
Set up Vertex AI with Aperture
Configure a Vertex AI provider in Aperture with a GCP service account and key file so your team can access Gemini and Claude models through Aperture.
Use Claude Code with Aperture
Configure Claude Code to route requests through your Aperture proxy.
Use Codex with Aperture
Configure OpenAI Codex to route requests through your Aperture proxy.
Use OpenAI-compatible tools with Aperture
Configure Gemini CLI, Roo Code, Cline, and other OpenAI-compatible tools to route requests through Aperture.
Use OpenCode with Aperture
Configure OpenCode to route requests through your Aperture proxy.