Set up LLM providers

Aperture by Tailscale is currently in beta.

Each provider requires API credentials and a corresponding entry in the Aperture configuration. After you configure a provider, any LLM client set up to use Aperture can automatically access models from that provider. Refer to the provider compatibility reference for the full matrix of supported providers, API formats, and compatibility flags. The following guides cover setup for each supported provider:
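To illustrate the pattern of pairing a provider entry with its credentials, here is a minimal sketch. It is not Aperture's actual schema; every field name below is an assumption for illustration, and the real syntax for each provider is covered in its guide:

```yaml
# Hypothetical sketch only -- field names are illustrative,
# not Aperture's actual configuration schema.
providers:
  openai:
    # Credential referenced from the environment (assumed convention).
    api_key: "${OPENAI_API_KEY}"
  anthropic:
    api_key: "${ANTHROPIC_API_KEY}"
```

Once an entry like this exists, clients that route requests through Aperture can use that provider's models without ever holding the provider credential themselves.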

Configure an OpenAI provider in Aperture so your team can access GPT models.

Configure an Anthropic provider in Aperture so your team can access Claude models.

Configure a Google Gemini provider in Aperture so your team can access Gemini models using the Gemini API directly.

Configure a Vertex AI provider in Aperture with a GCP service account and key file so your team can access Gemini and Claude models through Aperture.

Configure a Vertex AI Express provider in Aperture with a Google Cloud API key so your team can access Gemini models through your tailnet without managing service accounts.

Configure an Amazon Bedrock provider in Aperture so your team can access foundation models through AWS.

Configure an OpenRouter provider in Aperture so your team can access models from multiple providers through a single aggregator.

Configure a Vercel AI Gateway provider in Aperture so your team can access models from multiple LLM providers.

Configure a self-hosted or locally running LLM server as a provider in Aperture so your team can access private models through your tailnet.