Set up LLM providers
Each provider requires API credentials and an entry in the Aperture configuration. After you configure a provider, any LLM client set up to use Aperture can automatically access models from that provider. Refer to the provider compatibility reference for the full matrix of supported providers, API formats, and compatibility flags.
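As a rough illustration of the pattern described above, a provider entry might look like the following sketch. The field names (providers, name, api_key) and the YAML format are hypothetical placeholders, not Aperture's actual schema; consult the individual provider guides below for the real configuration syntax.

```yaml
# Hypothetical sketch only: field names and structure are illustrative,
# not Aperture's actual configuration schema.
providers:
  - name: openai
    api_key: ${OPENAI_API_KEY}      # each provider needs its own credentials
  - name: anthropic
    api_key: ${ANTHROPIC_API_KEY}
```

Once an entry like this exists, clients pointed at Aperture can reach that provider's models without further per-client setup.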
Set up OpenAI
Configure an OpenAI provider in Aperture so your team can access GPT models.
Set up Anthropic
Configure an Anthropic provider in Aperture so your team can access Claude models.
Set up Google Gemini
Configure a Google Gemini provider in Aperture so your team can access Gemini models using the direct Gemini API.
Set up Vertex AI
Configure a Vertex AI provider in Aperture with a GCP service account and key file so your team can access Gemini and Claude models through Aperture.
Set up a Vertex AI Express provider
Configure a Vertex AI Express provider in Aperture with a Google Cloud API key so your team can access Gemini models through your tailnet without managing service accounts.
Set up Amazon Bedrock
Configure an Amazon Bedrock provider in Aperture so your team can access foundation models through AWS.
Set up OpenRouter
Configure an OpenRouter provider in Aperture so your team can access models from multiple providers through a single aggregator.
Set up Vercel AI Gateway
Configure a Vercel AI Gateway provider in Aperture so your team can access models from multiple LLM providers.
Set up a self-hosted provider
Configure a self-hosted or locally running LLM server as a provider in Aperture so your team can access private models through your tailnet.