Provider compatibility

Last validated:

Aperture by Tailscale is currently in beta.

Aperture routes LLM requests to multiple providers, each with different base URLs, authentication methods, and API formats. This reference lists the supported providers, their compatibility flags, authorization types, and pricing options.

This page is part of the Aperture reference documentation. For field-level configuration details, refer to the Aperture configuration reference. For step-by-step provider setup, refer to the set up LLM providers guides. To configure coding agents to connect through Aperture, refer to the set up LLM clients guides.

Provider matrix

The following table summarizes how to configure each supported provider type:

| Provider | Base URL | Authorization | Compatibility flags | cost_basis | Guide |
|---|---|---|---|---|---|
| OpenAI | https://api.openai.com/ | bearer | openai_chat, openai_responses | openai | Set up |
| Anthropic | https://api.anthropic.com | x-api-key | anthropic_messages | anthropic | Set up |
| Google Gemini | https://generativelanguage.googleapis.com | x-goog-api-key | gemini_generate_content | google | Set up |
| Vertex AI (Gemini) | https://aiplatform.googleapis.com | bearer | google_generate_content | vertex | Set up |
| Vertex AI (Anthropic) | https://aiplatform.googleapis.com | bearer | google_raw_predict | vertex | Set up |
| Vertex AI Express | https://aiplatform.googleapis.com | x-goog-api-key | google_generate_content | vertex | Set up |
| Amazon Bedrock | https://bedrock-runtime.<region>.amazonaws.com | bearer | bedrock_model_invoke | bedrock | Set up |
| OpenRouter | https://openrouter.ai/api/ | bearer | openai_chat (default) | openrouter | Set up |
| Vercel AI Gateway | https://ai-gateway.vercel.sh | bearer | openai_chat, openai_responses | vercel | Set up |
| Self-hosted | Your server URL | bearer (default) | openai_chat (default) | N/A | Set up |
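As an illustration, a minimal provider entry following the first row of the matrix might look like the following sketch (the model name is a placeholder; substitute your own key and models):

```json
{
  "providers": {
    "openai": {
      "baseurl": "https://api.openai.com/",
      "apikey": "<your-key>",
      "authorization": "bearer",
      "models": ["<model-name>"],
      "compatibility": {
        "openai_chat": true,
        "openai_responses": true
      },
      "cost_basis": "openai"
    }
  }
}
```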

Compatibility flags

The compatibility object in a provider configuration specifies which API formats the provider supports. These flags determine which endpoints Aperture exposes for the provider's models.

| Flag | Type | Default | Description |
|---|---|---|---|
| openai_chat | boolean | true | Supports /v1/chat/completions |
| openai_responses | boolean | false | Supports /v1/responses |
| anthropic_messages | boolean | false | Supports /v1/messages |
| gemini_generate_content | boolean | false | Supports the direct Gemini API (generativelanguage.googleapis.com) |
| bedrock_model_invoke | boolean | false | Supports Amazon Bedrock format |
| google_generate_content | boolean | false | Supports Vertex AI Gemini format (aiplatform.googleapis.com) |
| google_raw_predict | boolean | false | Supports Vertex AI raw predict for Anthropic models |
| bedrock_converse | boolean | false | Supports Amazon Bedrock Converse API format |
| experimental_gemini_cli_vertex_compat | boolean | false | Rewrites short-form model paths for Gemini CLI compatibility with Vertex AI. Experimental; behavior may change. |

Enable the flags that match the API formats your provider supports. For providers that serve models from multiple vendors (such as Vertex AI with both Gemini and Anthropic models), enable multiple flags.
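For example, a Vertex AI provider serving both Gemini and Anthropic models might enable both Vertex flags. This is a sketch; the provider key, key file path, and model placeholders are illustrative:

```json
{
  "providers": {
    "vertex": {
      "baseurl": "https://aiplatform.googleapis.com",
      "apikey": "keyfile::/path/to/service-account.json",
      "models": ["<gemini-model>", "<anthropic-model>"],
      "compatibility": {
        "google_generate_content": true,
        "google_raw_predict": true
      },
      "cost_basis": "vertex"
    }
  }
}
```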

Additional provider fields

In addition to baseurl, apikey, authorization, models, compatibility, cost_basis, name, add_headers, and model_cost_map, providers support the following optional fields:

| Field | Type | Default | Description |
|---|---|---|---|
| description | string | "" | Human-readable description of the provider. |
| preference | integer | 0 | Routing priority. Higher values are preferred when multiple providers serve the same model. |
| disabled | boolean | false | Excludes the provider from routing and the /v1/models endpoint. Use to temporarily disable a provider without removing its configuration. |
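A sketch combining these fields (provider names, URLs, and model names are illustrative): two providers serve the same model, the one with the higher preference is preferred, and a disabled backup stays in the configuration but is excluded from routing.

```json
{
  "providers": {
    "primary": {
      "baseurl": "https://api.example.com/",
      "apikey": "<your-key>",
      "models": ["shared-model"],
      "description": "Primary endpoint",
      "preference": 10
    },
    "backup": {
      "baseurl": "https://backup.example.com/",
      "apikey": "<your-key>",
      "models": ["shared-model"],
      "description": "Backup endpoint, temporarily out of rotation",
      "disabled": true
    }
  }
}
```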

Authorization types

Different providers require different authorization header formats. Set the authorization field on the provider to specify which format to use.

| Value | Header format | Used by |
|---|---|---|
| bearer | Authorization: Bearer <key> | OpenAI and most providers |
| x-api-key | x-api-key: <key> | Anthropic |
| x-goog-api-key | x-goog-api-key: <key> | Google Gemini, Vertex AI Express |

The authorization field is not required for all providers. For example, Vertex AI authenticates with a service account key file (referenced with the keyfile:: prefix) rather than an API key; refer to set up a Vertex AI provider for step-by-step configuration instructions. Vertex AI Express uses x-goog-api-key with a standard API key; refer to set up a Vertex AI Express provider for details.
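For instance, an Anthropic provider would set authorization to x-api-key, so Aperture sends x-api-key: <key> instead of an Authorization: Bearer header. A minimal sketch, with the model name as a placeholder:

```json
{
  "providers": {
    "anthropic": {
      "baseurl": "https://api.anthropic.com",
      "apikey": "<your-key>",
      "authorization": "x-api-key",
      "models": ["<model-name>"]
    }
  }
}
```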

Custom headers

Some providers require additional headers beyond the standard authorization field. Use add_headers on the provider to include custom headers in every request Aperture sends to that provider. Each entry is a string in "Header-Name: value" format:

```json
{
  "providers": {
    "example-provider": {
      "baseurl": "https://api.example.com/",
      "apikey": "<your-key>",
      "authorization": "bearer",
      "models": ["model-name"],
      "add_headers": [
        "Custom-Header: value"
      ]
    }
  }
}
```


Cost basis

Aperture estimates the dollar cost of every LLM request. Cost estimates power quotas, hook metadata, and the per-model pricing shown in the Aperture dashboard.

Aperture auto-infers pricing for known providers based on the provider's compatibility flags (for example, anthropic_messages maps to Anthropic pricing). For providers where auto-inference does not apply, you can set cost_basis explicitly on the provider.

| cost_basis value | Pricing source |
|---|---|
| anthropic | Anthropic API list prices |
| openai | OpenAI API list prices |
| google | Google Gemini API list prices |
| bedrock | AWS Bedrock default pricing |
| bedrock-us | AWS Bedrock US-region pricing |
| bedrock-eu | AWS Bedrock EU-region pricing |
| vertex | Google Vertex AI pricing |
| azure | Azure OpenAI standard pricing |
| azure-eu | Azure OpenAI EU-region pricing |
| openrouter | OpenRouter pricing |
| vercel | Vercel AI Gateway pricing |

To disable auto-inference globally, set auto_cost_basis to false at the top level of the configuration.

```json
{
  "auto_cost_basis": false,
  "providers": {
    "anthropic": {
      "cost_basis": "anthropic"
    }
  }
}
```

When auto_cost_basis is false, only providers with an explicit cost_basis produce cost estimates.

Model cost map

When a model name does not appear in the pricing database (for example, after adding a new or custom model), you can use model_cost_map to map it to a known model for pricing purposes:

```json
{
  "providers": {
    "anthropic": {
      "cost_basis": "anthropic",
      "model_cost_map": [
        {"match": "claude-opus-9-*", "as": "claude-opus-4-6"},
        {"match": "claude-*-preview*", "as": "claude-sonnet-4-5", "adjustment": 1.1}
      ]
    }
  }
}
```

In this example, requests to any claude-opus-9-* model are priced like claude-opus-4-6, and preview models are priced like claude-sonnet-4-5 with a 10% markup.

Each entry supports the following fields:

  • match: Glob pattern against the model name. Uses Go's path.Match syntax, where * matches any sequence of non-separator characters and ? matches a single character.
  • as: Replacement model name for the pricing lookup.
  • adjustment: Price multiplier (optional, default 1.0). Use 1.5 to mark up 50%.

Aperture uses the first matching entry.