Provider compatibility
Aperture routes LLM requests to multiple providers, each with different base URLs, authentication methods, and API formats. This reference lists the supported providers, their compatibility flags, authorization types, and pricing options.
This page is part of the Aperture reference documentation. For field-level configuration details, refer to the Aperture configuration reference. For step-by-step provider setup, refer to the set up LLM providers guides. To configure coding agents to connect through Aperture, refer to the set up LLM clients guides.
Provider matrix
The following table summarizes how to configure each supported provider type:
| Provider | Base URL | Authorization | Compatibility flags | cost_basis | Guide |
|---|---|---|---|---|---|
| OpenAI | https://api.openai.com/ | bearer | openai_chat, openai_responses | openai | Set up |
| Anthropic | https://api.anthropic.com | x-api-key | anthropic_messages | anthropic | Set up |
| Google Gemini | https://generativelanguage.googleapis.com | x-goog-api-key | gemini_generate_content | google | Set up |
| Vertex AI (Gemini) | https://aiplatform.googleapis.com | bearer | google_generate_content | vertex | Set up |
| Vertex AI (Anthropic) | https://aiplatform.googleapis.com | bearer | google_raw_predict | vertex | Set up |
| Vertex AI Express | https://aiplatform.googleapis.com | x-goog-api-key | google_generate_content | vertex | Set up |
| Amazon Bedrock | https://bedrock-runtime.<region>.amazonaws.com | bearer | bedrock_model_invoke | bedrock | Set up |
| OpenRouter | https://openrouter.ai/api/ | bearer | openai_chat (default) | openrouter | Set up |
| Vercel AI Gateway | https://ai-gateway.vercel.sh | bearer | openai_chat, openai_responses | vercel | Set up |
| Self-hosted | Your server URL | bearer (default) | openai_chat (default) | N/A | Set up |
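To show how the matrix columns map to configuration fields, the following is a minimal sketch of an OpenAI provider entry (the provider key and model name are placeholders; substitute your own):

```json
{
  "providers": {
    "openai": {
      "baseurl": "https://api.openai.com/",
      "apikey": "<your-key>",
      "authorization": "bearer",
      "models": ["<model-name>"],
      "compatibility": {
        "openai_chat": true,
        "openai_responses": true
      },
      "cost_basis": "openai"
    }
  }
}
```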
Compatibility flags
The compatibility object in a provider configuration specifies which API formats the provider supports. These flags determine which endpoints Aperture exposes for the provider's models.
| Flag | Type | Default | Description |
|---|---|---|---|
| openai_chat | boolean | true | Supports /v1/chat/completions |
| openai_responses | boolean | false | Supports /v1/responses |
| anthropic_messages | boolean | false | Supports /v1/messages |
| gemini_generate_content | boolean | false | Supports the direct Gemini API (generativelanguage.googleapis.com) |
| bedrock_model_invoke | boolean | false | Supports Amazon Bedrock format |
| google_generate_content | boolean | false | Supports Vertex AI Gemini format (aiplatform.googleapis.com) |
| google_raw_predict | boolean | false | Supports Vertex AI raw predict for Anthropic models |
| bedrock_converse | boolean | false | Supports Amazon Bedrock Converse API format |
| experimental_gemini_cli_vertex_compat | boolean | false | Rewrites short-form model paths for Gemini CLI compatibility with Vertex AI. Experimental; behavior may change. |
Enable the flags that match the API formats your provider supports. For providers that serve models from multiple vendors (such as Vertex AI with both Gemini and Anthropic models), enable multiple flags.
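For example, a Vertex AI provider serving both Gemini and Anthropic models might enable two flags at once (a sketch showing only the compatibility object; a full entry also needs apikey and models):

```json
{
  "providers": {
    "vertex": {
      "baseurl": "https://aiplatform.googleapis.com",
      "compatibility": {
        "google_generate_content": true,
        "google_raw_predict": true
      }
    }
  }
}
```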
Additional provider fields
In addition to baseurl, apikey, authorization, models, compatibility, cost_basis, name, add_headers, and model_cost_map, providers support the following optional fields:
| Field | Type | Default | Description |
|---|---|---|---|
description | string | "" | Human-readable description of the provider. |
preference | integer | 0 | Routing priority. Higher values are preferred when multiple providers serve the same model. |
disabled | boolean | false | Excludes the provider from routing and the /v1/models endpoint. Use to temporarily disable a provider without removing its configuration. |
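As an illustrative fragment (provider names are hypothetical, and required fields such as baseurl, apikey, and models are omitted for brevity), two providers serving the same model could be prioritized and toggled like this:

```json
{
  "providers": {
    "anthropic-primary": {
      "description": "Primary Anthropic account",
      "preference": 10
    },
    "anthropic-backup": {
      "description": "Backup account, kept configured but inactive",
      "preference": 0,
      "disabled": true
    }
  }
}
```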
Authorization types
Different providers require different authorization header formats. Set the authorization field on the provider to specify which format to use.
| Value | Header format | Used by |
|---|---|---|
| bearer | Authorization: Bearer <key> | OpenAI and most providers |
| x-api-key | x-api-key: <key> | Anthropic |
| x-goog-api-key | x-goog-api-key: <key> | Google Gemini, Vertex AI Express |
The authorization field is not required for all providers. For example, Vertex AI authenticates with a service account key file instead of an API key: set apikey to the key file path prefixed with keyfile::. Refer to set up a Vertex AI provider for step-by-step configuration instructions. Vertex AI Express uses x-goog-api-key with a standard API key. Refer to set up a Vertex AI Express provider for details.
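The two authentication styles look like this side by side (a sketch; the key file path is a placeholder, and other required fields are omitted):

```json
{
  "providers": {
    "anthropic": {
      "authorization": "x-api-key",
      "apikey": "<your-anthropic-key>"
    },
    "vertex": {
      "apikey": "keyfile::/path/to/service-account.json"
    }
  }
}
```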
Custom headers
Some providers require additional headers beyond the standard authorization field. Use add_headers on the provider to include custom headers in every request Aperture sends to that provider. Each entry is a string in "Header-Name: value" format:
```json
{
  "providers": {
    "example-provider": {
      "baseurl": "https://api.example.com/",
      "apikey": "<your-key>",
      "authorization": "bearer",
      "models": ["model-name"],
      "add_headers": [
        "Custom-Header: value"
      ]
    }
  }
}
```
Cost basis
Aperture estimates the dollar cost of every LLM request. Cost estimates power quotas, hook metadata, and the per-model pricing shown in the Aperture dashboard.
Aperture auto-infers pricing for known providers based on the provider's compatibility flags (for example, anthropic_messages maps to Anthropic pricing). For providers where auto-inference does not apply, you can set cost_basis explicitly on the provider.
| cost_basis value | Pricing source |
|---|---|
| anthropic | Anthropic API list prices |
| openai | OpenAI API list prices |
| google | Google Gemini API list prices |
| bedrock | AWS Bedrock default pricing |
| bedrock-us | AWS Bedrock US-region pricing |
| bedrock-eu | AWS Bedrock EU-region pricing |
| vertex | Google Vertex AI pricing |
| azure | Azure OpenAI standard pricing |
| azure-eu | Azure OpenAI EU-region pricing |
| openrouter | OpenRouter pricing |
| vercel | Vercel AI Gateway pricing |
To disable auto-inference globally, set auto_cost_basis to false at the top level of the configuration.
```json
{
  "auto_cost_basis": false,
  "providers": {
    "anthropic": {
      "cost_basis": "anthropic"
    }
  }
}
```
When auto_cost_basis is false, only providers with an explicit cost_basis produce cost estimates.
Model cost map
When a model name does not appear in the pricing database (for example, after adding a new or custom model), you can use model_cost_map to map it to a known model for pricing purposes:
```json
{
  "providers": {
    "anthropic": {
      "cost_basis": "anthropic",
      "model_cost_map": [
        {"match": "claude-opus-9-*", "as": "claude-opus-4-6"},
        {"match": "claude-*-preview*", "as": "claude-sonnet-4-5", "adjustment": 1.1}
      ]
    }
  }
}
```
In this example, requests to any claude-opus-9-* model are priced like claude-opus-4-6, and preview models are priced like claude-sonnet-4-5 with a 10% markup.
Each entry supports the following fields:
- match: Glob pattern matched against the model name. Uses Go's path.Match syntax, where * matches any sequence of non-separator characters and ? matches a single character.
- as: Replacement model name for the pricing lookup.
- adjustment: Price multiplier (optional, default 1.0). Use 1.5 to mark up 50%.
Aperture uses the first matching entry.
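Because only the first matching entry applies, place narrower patterns before broader ones. In this sketch (hypothetical model names), a specific preview model is priced without the markup that the catch-all preview pattern would otherwise apply:

```json
{
  "model_cost_map": [
    {"match": "claude-opus-9-preview", "as": "claude-opus-4-6"},
    {"match": "claude-*-preview*", "as": "claude-sonnet-4-5", "adjustment": 1.1}
  ]
}
```

A request to claude-opus-9-preview matches the first entry and stops there; any other preview model falls through to the second entry and is priced with the 10% markup.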