Kubernetes operator
The Tailscale Kubernetes operator lets you:
- Expose Services in your Kubernetes cluster to your Tailscale network (known as a tailnet)
- Securely connect to the Kubernetes control plane (kube-apiserver) via an API server proxy, with or without authentication
- Egress from a Kubernetes cluster to an external service on your tailnet
- Deploy subnet routers and exit nodes on Kubernetes
Setting up the Kubernetes operator
Prerequisites
The Tailscale Kubernetes operator must be configured with OAuth client credentials. The operator uses these credentials to manage devices via the Tailscale API and to create auth keys for itself and the devices it manages.
- In your tailnet policy file, create the ACL tags tag:k8s-operator and tag:k8s, and make tag:k8s-operator an owner of tag:k8s. If you want your Services to be exposed with tags other than the default tag:k8s, create those as well and make tag:k8s-operator an owner:

  "tagOwners": {
    "tag:k8s-operator": [],
    "tag:k8s": ["tag:k8s-operator"],
  }

- Create an OAuth client in the OAuth clients page of the admin console. Create the client with Devices write scope and the tag tag:k8s-operator.
Installation
A default operator installation creates a tailscale namespace, an operator Deployment in the tailscale namespace, RBAC for the operator, and the ProxyClass and Connector Custom Resource Definitions.
Helm
The Tailscale Kubernetes operator's Helm charts are available from two chart repositories.

The https://pkgs.tailscale.com/helmcharts repository contains well-tested charts for stable Tailscale versions. Helm charts and container images for a new stable Tailscale version are released a few days after the official release. This is done to avoid releasing image versions with potential bugs in the core Linux client or core libraries.

The https://pkgs.tailscale.com/unstable/helmcharts repository contains charts with the very latest changes, published in between official releases.

The charts in both repositories are different versions of the same chart, and you can upgrade from one to the other.
To install the latest Tailscale Kubernetes operator from https://pkgs.tailscale.com/helmcharts in the tailscale namespace:
- Add https://pkgs.tailscale.com/helmcharts to your local Helm repositories:

  helm repo add tailscale https://pkgs.tailscale.com/helmcharts

- Update your local Helm cache:

  helm repo update

- Install the operator, passing the OAuth client credentials that you created earlier:

  helm upgrade \
    --install \
    tailscale-operator \
    tailscale/tailscale-operator \
    --namespace=tailscale \
    --create-namespace \
    --set-string oauth.clientId=<OAuth client ID> \
    --set-string oauth.clientSecret=<OAuth client secret> \
    --wait
Static manifests with kubectl
- Download the Tailscale Kubernetes operator manifest file from the tailscale/tailscale repo.

- Edit your version of the manifest file:

  - Find # SET CLIENT ID HERE and replace it with your OAuth client ID.
  - Find # SET CLIENT SECRET HERE and replace it with your OAuth client secret. The OAuth client secret is case-sensitive.

  For both the client ID and secret, quote the value to avoid any potential YAML misinterpretation of unquoted strings. For example, use:

  client_id: "k123456CNTRL"
  client_secret: "tskey-client-k123456CNTRL-abcdef"

  instead of:

  client_id: k123456CNTRL
  client_secret: tskey-client-k123456CNTRL-abcdef

- Apply the edited file to your Kubernetes cluster:

  kubectl apply -f manifest.yaml
Validation
Verify that the Tailscale operator has joined your tailnet. Open the Machines page of the admin console and look for a node named tailscale-operator, tagged with the tag:k8s-operator tag. It may take a minute or two for the operator to join your tailnet, due to the time required to download and start the container image in Kubernetes.
Exposing a Kubernetes cluster workload to your tailnet (cluster ingress)
You can use the Tailscale Kubernetes operator to expose a Kubernetes cluster workload to your tailnet in three ways:

- Create a LoadBalancer type Service with the tailscale loadBalancerClass that fronts your workload
- Annotate an existing Service that fronts your workload
- Create an Ingress resource fronting the Service or Services for the workloads you wish to expose
We currently do not support exposing a cluster workload using a Service of type ExternalName, either directly or as an Ingress backend.
Exposing a cluster workload via a tailscale Load Balancer Service
Create a new Kubernetes Service of type LoadBalancer:

- Set spec.type to LoadBalancer.
- Set spec.loadBalancerClass to tailscale.

Once provisioning is complete, the Service status will show the fully-qualified domain name of the Service in your tailnet. You can view the Service status by running kubectl get service <service name>.
You should also see a new node with that name appear in the Machines page of the admin console.
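The two settings above can be sketched as a complete manifest. The selector and ports here are assumptions for an illustrative workload whose Pods carry the label app: nginx:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx
spec:
  selector:
    app: nginx                   # hypothetical label on your workload's Pods
  ports:
    - port: 80
      targetPort: 80
  type: LoadBalancer             # step 1
  loadBalancerClass: tailscale   # step 2
```

Applying this manifest causes the operator to provision a proxy for the Service and expose it on your tailnet.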
Exposing a cluster workload by annotating an existing Service
If the Service you want to expose already exists, you can expose it to Tailscale using object annotations.

Edit the Service and, under metadata.annotations, add the annotation tailscale.com/expose with the value "true". Note that "true" is quoted because annotation values are strings, and an unquoted true will be incorrectly interpreted as a boolean.

In this mode, Kubernetes doesn't tell you the Tailscale machine name. You can look up the node in the Machines page of the admin console to learn its machine name. By default, the machine name of an exposed Service is <k8s-namespace>-<k8s-servicename>, but it can be changed.
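As a sketch, the annotated metadata would look like this (my-app is a hypothetical existing Service):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-app                      # hypothetical existing Service
  annotations:
    tailscale.com/expose: "true"    # quoted: annotation values are strings
```

You can also add the annotation in place with kubectl annotate service my-app tailscale.com/expose=true; annotation values set via the CLI are always stored as strings.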
Exposing a Service using Ingress

You can use the Tailscale Kubernetes operator to expose an Ingress resource in your Kubernetes cluster to your tailnet. When configured using an Ingress resource, you also get the ability to identify callers using HTTP headers injected by the Ingress proxy.

Ingress resources only support TLS, and are only exposed over HTTPS. You must enable HTTPS on your tailnet.

Edit the Ingress resource you want to expose to use the Ingress class tailscale:

- Set spec.ingressClassName to tailscale.
- Set tls.hosts to the desired host name of the Tailscale node. Only the first label is used. See custom machine names for more details.

For example, to expose an Ingress resource nginx to your tailnet:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: nginx
spec:
  defaultBackend:
    service:
      name: nginx
      port:
        number: 80
  ingressClassName: tailscale
  tls:
    - hosts:
        - nginx
The backend is HTTP by default. To use HTTPS on the backend, either set the port name to https or the port number to 443:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: nginx
spec:
  defaultBackend:
    service:
      name: nginx
      port:
        name: https
  ingressClassName: tailscale
---
apiVersion: v1
kind: Service
metadata:
  name: nginx
spec:
  ports:
    - name: https
      port: 443
      targetPort: 443
  type: ClusterIP
A single Ingress resource can be used to front multiple backend Services:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress
spec:
  ingressClassName: tailscale
  rules:
    - http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: ui-svc
                port:
                  number: 80
          - path: /api
            pathType: Prefix
            backend:
              service:
                name: api-svc
                port:
                  number: 80
Currently, the only supported Ingress path type is Prefix. Requests for paths with other path types will be routed according to Prefix rules.
Exposing a Service to the public internet using Ingress and Tailscale Funnel

You can also use the Tailscale Kubernetes operator to expose an Ingress resource in your Kubernetes cluster to the public internet using Tailscale Funnel. To do so:

- Add a tailscale.com/funnel: "true" annotation:

  apiVersion: networking.k8s.io/v1
  kind: Ingress
  metadata:
    name: funnel
    annotations:
      tailscale.com/funnel: "true"
  spec:
    defaultBackend:
      service:
        name: funnel
        port:
          number: 80
    ingressClassName: tailscale
    tls:
      - hosts:
          - funnel
- Update the ACLs for your tailnet to allow Kubernetes operator proxy devices to use Tailscale Funnel. Add a node attribute to allow nodes created by the operator to use Funnel:

  "nodeAttrs": [
    {
      "target": ["tag:k8s"], // tag that the Tailscale operator uses to tag proxies; defaults to 'tag:k8s'
      "attr": ["funnel"],
    },
    ...
  ]
Note that even if your policy has the funnel attribute assigned to autogroup:member (which is the default), you still need to add it to the tag used by the proxies, because autogroup:member does not include tagged nodes.
Removing a Service

Any of the following actions remove a Kubernetes Service you exposed from your tailnet:

- Delete the Service entirely
- If you are using the tailscale.com/expose annotation, remove the annotation
- If you are using an Ingress resource, delete it, or change or unset spec.ingressClassName

Deleting a Service's Tailscale node in the admin console does not clean up the Kubernetes state associated with that Service.
Accessing the Kubernetes control plane using an API server proxy
You can use the Tailscale Kubernetes operator to expose and access the Kubernetes control plane (kube-apiserver) over Tailscale.
The Tailscale API server proxy can run in one of two modes:

- In auth mode, requests from the tailnet that are proxied to the Kubernetes API server are additionally impersonated using the sender's tailnet identity. Kubernetes RBAC can then be used to configure granular API server permissions for individual tailnet identities or groups.
- In noauth mode, requests from the tailnet are proxied to the Kubernetes API server, but not authenticated. This mechanism can be used in combination with another authentication and authorization mechanism, such as an authenticating proxy provided by an external IdP or a cloud provider.
Prerequisites
The API server proxy runs as part of the same process as the Tailscale Kubernetes operator and is reached via the same tailnet node. It is exposed on port 443. Ensure that your ACLs grant all devices and users who want to access the API server via the proxy access to the Tailscale Kubernetes operator. For example, to allow all tailnet devices tagged with tag:k8s-readers access to the proxy, create an ACL rule like this:
{
"action": "accept",
"src": ["tag:k8s-readers"],
"dst": ["tag:k8s-operator:443"]
}
Being able to access the proxy over the tailnet does not grant tailnet users any default permissions on Kubernetes API server resources. Tailnet users can only access API server resources that they have been explicitly authorized to access via Kubernetes RBAC.
To use a Tailscale Kubernetes API server proxy, you need to enable HTTPS for your tailnet.
Configuring the API server proxy in auth mode
Installation
Helm
If you are installing Tailscale Kubernetes operator with Helm, you can install the proxy in auth mode by passing --set-string apiServerProxyConfig.mode=true
flag to the install command:
helm upgrade \
--install \
tailscale-operator \
tailscale/tailscale-operator \
--namespace=tailscale \
--create-namespace \
--set-string oauth.clientId=<OAuth client ID> \
--set-string oauth.clientSecret=<OAuth client secret> \
--set-string apiServerProxyConfig.mode="true" \
--wait
Static manifests with kubectl
If you are installing the Tailscale Kubernetes operator using static manifests:

- Set the APISERVER_PROXY env var in the Tailscale Kubernetes operator deployment manifest to "true":

  - name: APISERVER_PROXY
    value: "true"
-
Download and apply RBAC for the API server proxy from the tailscale/tailscale repo.
Configuring authentication and authorization
The API server proxy in auth mode impersonates requests from the tailnet to the Kubernetes API server. You can then use Kubernetes RBAC to control which API server resources tailnet identities are allowed to access.

The impersonation is applied as follows:

- If the user who sends a request to the Kubernetes API server via the proxy is in a tailnet user group for which API server proxy ACL grants have been configured for that proxy instance, the request is impersonated as coming from the Kubernetes group specified in the grant. Additionally, it is impersonated as coming from a Kubernetes user whose name matches the tailnet user's name.
- If ACL grants are not used and the node from which the request is sent is tagged, the request is impersonated as coming from a Kubernetes group whose name matches the tag.
- If ACL grants are not used and the node from which the request is sent is not tagged, the request is impersonated as coming from a Kubernetes user whose name matches the sender's tailnet username.
Impersonating Kubernetes groups with ACL grants
You can use ACL grants to configure what Kubernetes API server resources Tailscale user groups are allowed to access.
For example, to give tailnet user group group:prod
cluster admin access and give tailnet user group group:k8s-readers
read permissions for most Kubernetes resources:
- Update your ACL grants:

  "grants": [
    {
      "src": ["group:prod"],
      "dst": ["tag:k8s-operator"],
      "app": {
        "tailscale.com/cap/kubernetes": [{
          "impersonate": {
            "groups": ["system:masters"],
          },
        }],
      },
    },
    {
      "src": ["group:k8s-readers"],
      "dst": ["tag:k8s-operator"],
      "app": {
        "tailscale.com/cap/kubernetes": [{
          "impersonate": {
            "groups": ["tailnet-readers"],
          },
        }],
      },
    },
  ]

  - grants.src is the Tailscale user group to which the grant applies.
  - grants.dst must be the tag of the Tailscale Kubernetes operator.
  - system:masters is a Kubernetes group that has default RBAC bindings in all clusters. Kubernetes creates a default ClusterRole cluster-admin that allows all actions against all Kubernetes API server resources, and a ClusterRoleBinding cluster-admin that binds the cluster-admin ClusterRole to the system:masters group.
  - tailnet-readers is a Kubernetes group that you will bind the default Kubernetes view ClusterRole to in the following step. (Note that Kubernetes group names do not refer to existing identities in Kubernetes; they do not need to be precreated before you use them in (Cluster)RoleBindings.)

- Bind tailnet-readers to the view ClusterRole:

  kubectl create clusterrolebinding tailnet-readers-view --group=tailnet-readers --clusterrole=view
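If you prefer a declarative manifest to the kubectl create clusterrolebinding command, an equivalent sketch:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: tailnet-readers-view
subjects:
  - kind: Group
    name: tailnet-readers              # group name used in the ACL grant
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: view                           # built-in read-only ClusterRole
  apiGroup: rbac.authorization.k8s.io
```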
Impersonating Kubernetes groups with tagged tailnet nodes

If the request is sent from a tagged device, it is impersonated as coming from a Kubernetes group whose name matches the tag. For example, a request from a tailnet node tagged with tag:k8s-readers will be authenticated by the API server as coming from a Kubernetes group tag:k8s-readers.
You can create Kubernetes (Cluster)Roles and (Cluster)RoleBindings to configure what permissions the group should have, or bind an existing (Cluster)Role to the group.
For example, to grant nodes tagged with tag:k8s-readers read-only access to most Kubernetes resources, you can bind the Kubernetes group tag:k8s-readers to the default Kubernetes view ClusterRole:
kubectl create clusterrolebinding tailnet-readers --group="tag:k8s-readers" --clusterrole=view
Impersonating Kubernetes users

If the request is not sent from a tagged device, it is impersonated as coming from a Kubernetes user named the same as the sender's tailnet user.

You can then create Kubernetes (Cluster)Roles and (Cluster)RoleBindings to configure what permissions the user should have, or bind an existing (Cluster)Role to the user.

For example, to allow tailnet user alice@tailscale.com read-only access to most Kubernetes resources, you can bind the Kubernetes user alice@tailscale.com to the default Kubernetes view ClusterRole like so:
kubectl create clusterrolebinding alice-view --user="alice@tailscale.com" --clusterrole=view
Configuring kubeconfig

You can run the following CLI command to configure your kubeconfig for authentication with kubectl via the Tailscale Kubernetes API server proxy:

tailscale configure kubeconfig <operator-hostname>

By default, the hostname for the operator node is tailscale-operator.
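The resulting kubeconfig is roughly of this shape (a sketch only; example.ts.net is a hypothetical tailnet domain, and the exact output of tailscale configure kubeconfig may differ):

```yaml
apiVersion: v1
kind: Config
clusters:
  - name: tailscale-k8s
    cluster:
      # the operator's MagicDNS name; TLS is terminated by the proxy
      server: https://tailscale-operator.example.ts.net
contexts:
  - name: tailscale-k8s
    context:
      cluster: tailscale-k8s
      user: tailscale-auth
current-context: tailscale-k8s
users:
  - name: tailscale-auth
    user: {}   # no client credentials: identity comes from the tailnet connection
```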
Configuring API server proxy in noauth mode
The noauth mode of the API server proxy is useful if you want to use Tailscale to provide access to the Kubernetes API server over the tailnet, but want to keep using your existing authentication and authorization mechanisms.
Installation
Helm
If you are installing Tailscale Kubernetes operator with Helm, you can install the proxy in auth mode by passing --set-string apiServerProxyConfig.mode=noauth
flag to the install command:
helm upgrade \
--install \
tailscale-operator \
tailscale/tailscale-operator \
--namespace=tailscale \
--create-namespace \
--set-string oauth.clientId=<OAuth client ID> \
--set-string oauth.clientSecret=<OAuth client secret> \
--set-string apiServerProxyConfig.mode="noauth" \
--wait
Static manifests with kubectl
If you are installing the Tailscale Kubernetes operator using static manifests:

- Set the APISERVER_PROXY env var in the Tailscale Kubernetes operator deployment manifest to "noauth":

  - name: APISERVER_PROXY
    value: "noauth"
Authentication and authorization

When run in noauth mode, the API server proxy exposes the Kubernetes API server to the tailnet but does not provide authentication. You can use the proxy endpoint, <hostname of the Tailscale operator>:443, in place of the Kubernetes API server address, and set up authentication and authorization on top of it using any other mechanism, such as an authenticating proxy provided by your managed Kubernetes provider or IdP.
Exposing a tailnet service to your Kubernetes cluster (cluster egress)
You can make services that are external to your cluster, but available on your tailnet, accessible to your Kubernetes cluster workloads by making the associated tailnet node reachable from the cluster.

You can configure the operator to set up an in-cluster egress proxy for a tailnet node by creating a Kubernetes Service that specifies the tailnet node either by its Tailscale IP address or by its MagicDNS name. In both cases, your cluster workloads refer to the tailnet service by the Kubernetes Service name.
Expose a tailnet node to your cluster using its Tailscale IP address
- Create a Kubernetes Service of type ExternalName, annotated with the Tailscale IP address of the tailnet node you want to make available:

  apiVersion: v1
  kind: Service
  metadata:
    annotations:
      tailscale.com/tailnet-ip: <Tailscale IP address>
    name: rds-staging # service name
  spec:
    externalName: placeholder # any value - will be overwritten by operator
    type: ExternalName

The value of the tailscale.com/tailnet-ip annotation can be either a tailnet IPv4 or IPv6 address, for either a Tailscale node or a route in a Tailscale subnet. IP ranges are not supported.
Expose a tailnet node to your cluster using its Tailscale MagicDNS name
- Ensure that MagicDNS is enabled for your tailnet.

- Create a Kubernetes Service of type ExternalName, annotated with the MagicDNS name of the tailnet node that you wish to make available:

  apiVersion: v1
  kind: Service
  metadata:
    annotations:
      tailscale.com/tailnet-fqdn: <Tailscale MagicDNS name>
    name: rds-staging # service name
  spec:
    externalName: placeholder # any value - will be overwritten by operator
    type: ExternalName

Note that the value of the tailscale.com/tailnet-fqdn annotation must be the full MagicDNS name of the tailnet service (not just the hostname). The final dot is optional.
Validation
Wait for the Tailscale Kubernetes operator to update spec.externalName of the Kubernetes Service that you created. The external name should be set to the Kubernetes DNS name of another Kubernetes Service, in the tailscale namespace, that fronts the egress proxy. The proxy is responsible for routing traffic to the exposed tailnet node over the tailnet.

Once the external name is updated, workloads in your cluster can access the exposed tailnet service by referring to the Kubernetes DNS name of the Service that you created.
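For example, a test workload could reach the tailnet service through the egress Service's cluster DNS name. This sketch assumes the rds-staging Service from the examples above lives in the default namespace and fronts something listening on a hypothetical port 5432:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: egress-test
spec:
  restartPolicy: Never
  containers:
    - name: client
      image: busybox
      # rds-staging resolves in-cluster; traffic flows through the
      # egress proxy to the tailnet node
      command: ["nc", "-z", "rds-staging.default.svc.cluster.local", "5432"]
```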
Exposing a Service in one cluster to another cluster (cross-cluster connectivity)

You can use the Tailscale Kubernetes operator to expose a Service in one cluster to another cluster. This is done by exposing the Service on destination cluster A to the tailnet (cluster ingress), and connecting from a source Service in cluster B to the tailnet (cluster egress) in order to access the Service running in cluster A.

This needs to be configured for each ingress and egress pair of Services. To set this up for access via ingress to a Service in cluster A, with routing via egress from a Service in cluster B:

- Set up Ingress in cluster A for the Service you wish to access.
- Expose the external Service (running in cluster A) in cluster B using its Tailscale IP address, with an annotation on the external Service.
Shared cluster egress and cluster ingress proxy configuration
Configuration options in this section apply to both cluster egress and cluster ingress proxies (configured via a Service or an Ingress).
The API server proxy currently runs as part of the same process as the Kubernetes operator. You can use the available operator configuration options to configure the API server proxy parameters.
Customizing ACL tags
Currently, cluster ingress and cluster egress proxies join your tailnet as separate Tailscale devices, tagged with one or more ACL tags.

The Tailscale operator must be an owner of all the proxy tags: if you want to tag a proxy device with tag:foo,tag:bar, the tagOwners section of the tailnet policy file must list tag:k8s-operator as one of the owners of both tag:foo and tag:bar.

Currently, ACL tags cannot be modified once a proxy has been created.
Default tags
By default, a proxy device joins your tailnet tagged with the ACL tag tag:k8s. You can modify the default tag or tags when installing the operator.

If you install the operator with Helm, you can use .proxyConfig.defaultTags in the Helm values file. If you install the operator with static manifests, you can set the PROXY_TAGS env var in the deployment manifest.

Multiple tags must be passed as a comma-separated string, that is, tag:foo,tag:bar.
Tags for individual proxies
To override the default tags for an individual proxy device, set the tailscale.com/tags annotation on the Service or Ingress resource that tells the operator to create the proxy, with a comma-separated list of the desired tags.

For example, setting tailscale.com/tags: "tag:foo,tag:bar" results in the proxy device having the tags tag:foo and tag:bar.
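Sketched on an annotated ingress Service (my-app and its tags are illustrative):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-app
  annotations:
    tailscale.com/expose: "true"
    tailscale.com/tags: "tag:foo,tag:bar"  # overrides the default tag:k8s
```

Remember that tag:k8s-operator must be listed as an owner of both tags in the tagOwners section of your policy file.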
Using custom machine names
Cluster ingress and egress proxies support overriding the hostname they announce when registering with Tailscale. For Services, a custom hostname can be set via the tailscale.com/hostname annotation. For Ingresses, a custom hostname can be set via the .spec.tls.hosts field (only the first value is used).

Note that this only sets a custom OS hostname reported by the node. The actual machine name will differ if there is already a device on the network with the same name.

Machine names are subject to the constraints of DNS: they can be up to 63 characters long, must start and end with a letter, and may consist only of letters, numbers, and hyphens.
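For example, to have a Service join the tailnet as my-app-prod instead of the default <k8s-namespace>-<k8s-servicename> name (my-app and my-app-prod are illustrative):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-app
  annotations:
    tailscale.com/expose: "true"
    tailscale.com/hostname: "my-app-prod"  # hostname announced on the tailnet
```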
Cluster resource customization using the ProxyClass Custom Resource

Tailscale operator v1.60 and newer provides the ability to customize the configuration of cluster resources created by the operator, using the ProxyClass Custom Resource Definition.

You can specify cluster resource configuration, such as custom labels and resource requests, using a ProxyClass Custom Resource.
You can then:

- Apply configuration from a particular ProxyClass to cluster resources created for a tailscale Ingress or Service, using a tailscale.com/proxy-class=<proxy-class-name> label on the Ingress or Service.
- Apply configuration from a particular ProxyClass to cluster resources created for a Connector, using the connector.spec.proxyClass field.
The following example demonstrates how to use a ProxyClass that specifies custom labels and a node selector to be applied to Pods for a tailscale Ingress, a cluster egress proxy, and a Connector:

- Create a ProxyClass resource:

  apiVersion: tailscale.com/v1alpha1
  kind: ProxyClass
  metadata:
    name: prod
  spec:
    statefulSet:
      pod:
        labels:
          team: eng
          environment: prod
        nodeSelector:
          beta.kubernetes.io/os: "linux"

- Create a tailscale Ingress with a tailscale.com/proxy-class=prod label:

  apiVersion: networking.k8s.io/v1
  kind: Ingress
  metadata:
    name: my-app
    labels:
      tailscale.com/proxy-class: "prod"
  spec:
    rules:
      ...
    ingressClassName: tailscale

- Create a cluster egress Service with a tailscale.com/proxy-class=prod label:

  apiVersion: v1
  kind: Service
  metadata:
    annotations:
      tailscale.com/tailnet-ip: <tailnet-ip>
    labels:
      tailscale.com/proxy-class: "prod"
    name: my-tailnet-service
  spec:
    ...

- Create a Connector that refers to the 'prod' ProxyClass:

  apiVersion: tailscale.com/v1alpha1
  kind: Connector
  metadata:
    name: prod
  spec:
    proxyClass: prod
    ...
You can find all available ProxyClass configuration options on GitHub →
Deploying exit nodes and subnet routers on Kubernetes using the Connector Custom Resource

The Tailscale Kubernetes operator installation includes a Connector Custom Resource Definition. A Connector can be used to configure the operator to deploy a Tailscale node that acts as a subnet router, an exit node, or both.

For example, you can deploy a Connector that acts as a subnet router and exposes to your tailnet the cluster Service CIDRs, or some cloud service CIDRs that are reachable from the cluster but not publicly accessible.

To create a Connector that exposes the 10.40.0.0/14 CIDR to your tailnet:
- (Optional) Set the tag of the Connector node to be auto-approved. By default, the node is tagged with tag:k8s. A custom tag or tags can be set via .connector.spec.tags in step 2. If you set a custom tag, you must also ensure that the operator is an owner of that tag.

- Create a Connector Custom Resource:

  apiVersion: tailscale.com/v1alpha1
  kind: Connector
  metadata:
    name: ts-pod-cidrs
  spec:
    hostname: ts-pod-cidrs
    subnetRouter:
      advertiseRoutes:
        - "10.40.0.0/14"

- Wait for the Connector resources to get created:

  $ kubectl get connector ts-pod-cidrs
  NAME           SUBNETROUTES   ISEXITNODE   STATUS
  ts-pod-cidrs   10.40.0.0/14   false        ConnectorCreated

- (Optional) If you did not configure the route to be auto-approved in step 1, open the Machines page of the admin console and manually approve the newly created ts-pod-cidrs node to advertise the 10.40.0.0/14 route.

- (Optional, Linux clients only) Ensure that clients that need to access resources in the subnet have accepted the advertised route.
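A Connector can also advertise itself as an exit node. A sketch, assuming the exitNode field from the Connector CRD (check the CRD reference for your operator version):

```yaml
apiVersion: tailscale.com/v1alpha1
kind: Connector
metadata:
  name: ts-exit
spec:
  hostname: ts-exit
  exitNode: true   # advertise this node as an exit node
```

Like subnet routes, an exit node must be approved in the admin console unless its tag is auto-approved.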
You can find all available Connector configuration options on GitHub →
Supported versions
Operator and proxies
We recommend that you use the same version for the operator and the proxies, because we currently run the majority of our tests with matching versions. We do, however, support a version skew of up to four minor versions (for example, operator v1.62.0 is compatible with proxy v1.58.0, but not with anything older).
Kubernetes versions
The oldest currently supported version of Kubernetes is v1.23.0.
CNI compatibility
The operator creates proxies that configure custom routing and forwarding rules only in each proxy Pod's network namespace.

Because the proxying is implemented in the proxy Pod's namespace, the routing and firewall configuration on the Node (for example, using iptables, eBPF, or any other mechanism) doesn't affect the proxies.
This means that the proxies work with most CNI configurations out of the box.
Cilium in kube-proxy replacement mode
You must enable bypassing the socket load balancer in Pods' namespaces if you run Cilium in kube-proxy replacement mode and want to do one or more of the following:

- Expose a Kubernetes Service to your tailnet as a Tailscale LoadBalancer Service
- Expose a Kubernetes Service to your tailnet using the tailscale.com/expose annotation
- Expose a Service CIDR range via a Connector

This is needed because when Cilium runs in kube-proxy replacement mode with socket load balancing in Pods' namespaces enabled, connections from Pods to ClusterIPs go over a TCP socket (instead of going out via the Pods' veth devices) and thus bypass the Tailscale firewall rules attached to netfilter hooks.
Troubleshooting
Using logs
If you are experiencing issues with your installation, it might be useful to take a look at the operator logs.
For ingress and egress proxies and the Connector, the operator creates a single-replica StatefulSet in the tailscale namespace that is responsible for proxying traffic to and from the tailnet. If the StatefulSet has been successfully created, you should also take a look at the logs of its Pod.
Operator logs
You can increase the operator's log level to get debug logs.

To set the log level to debug for an operator deployed using Helm, run:
helm upgrade --install \
operator tailscale/tailscale-operator \
--set operatorConfig.logging=debug
If you deployed the operator using static manifests, you can set the OPERATOR_LOGGING environment variable for the operator's Deployment to debug.

To view the logs, run:
kubectl logs deployment/operator --namespace tailscale
Proxy logs
To get logs for the proxy created for an Ingress resource, run:
kubectl logs --selector=tailscale.com/parent-resource-type=ingress \
--selector=tailscale.com/parent-resource=<ingress-name> \
--selector=tailscale.com/parent-resource-ns=<ingress-namespace> \
--namespace tailscale
To get logs for a proxy created for an ingress or egress Service, run:
kubectl logs --selector=tailscale.com/parent-resource-type=svc \
--selector=tailscale.com/parent-resource=<service-name> \
--selector=tailscale.com/parent-resource-ns=<service-namespace> \
--namespace tailscale
To get logs for a proxy created for a Connector, run:
kubectl logs --selector=tailscale.com/parent-resource-type=connector \
--selector=tailscale.com/parent-resource=<connector-name> \
--namespace tailscale
Troubleshooting TLS connection errors
If you are connecting to a workload exposed to the tailnet over Ingress, or to the Kubernetes API server over the operator's API server proxy, you can sometimes run into TLS connection errors. Check the following, in sequence:

- HTTPS is not enabled for the tailnet. To use tailscale Ingress or the API server proxy, you must ensure that HTTPS is enabled for your tailnet.
- A LetsEncrypt certificate has not yet been provisioned. If HTTPS is enabled, the errors are most likely related to the LetsEncrypt certificate provisioning flow. For each tailscale Ingress resource, the operator deploys a Tailscale node that runs a TLS server, provisioned with a LetsEncrypt certificate for the MagicDNS name of the node. For the API server proxy, the operator also runs an in-process TLS server that proxies tailnet traffic to the Kubernetes API server; this server is provisioned with a LetsEncrypt certificate for the MagicDNS name of the operator.

  In both cases, the certificates are provisioned lazily, the first time a client connects to the server. Provisioning takes some time, so you might see TLS timeout errors on the first connection.

  You can take a look at the logs to follow the certificate provisioning process:

  - For the API server proxy, review the operator logs.
  - For Ingress, review the proxy logs.

  There is nothing you can currently do to prevent the first client connection sometimes erroring. Do reach out if this is causing issues for your workflow.
- You have hit LetsEncrypt rate limits. If the connection does not succeed even after the first attempt, verify that you have not hit LetsEncrypt rate limits. If a limit has been hit, you will see the error returned from LetsEncrypt in the logs. We are currently working on making it less likely for users to hit LetsEncrypt rate limits. See the related discussion in tailscale/tailscale#11119.
Troubleshooting cluster egress/cluster ingress proxies
The proxy pod is deployed in the tailscale namespace and has a name of the form ts-<annotated-service-name>-<random-string>.

If there are issues reaching the external service, verify that the proxy pod is properly deployed:

- Review the logs of the proxy pod.
- Review the logs of the operator. You can do this by running kubectl logs deploy/operator --namespace tailscale. The log level can be configured using the OPERATOR_LOGGING environment variable in the operator's manifest file.
- Verify that the cluster workload is able to send traffic to the proxy pod in the tailscale namespace.
Limitations
- There are no dashboards or metrics. We are interested to hear what metrics you would find useful — do reach out.
- The container images, charts or manifests are not signed. We are working on this.
- The static manifests are currently only available from tailscale/tailscale codebase. We are working to improve this flow.
Cluster ingress
- Tags are only considered during initial provisioning. That is, editing tailscale.com/tags on an already exposed Service doesn't update the tags until you clean up and re-expose the Service.
- The requested machine name is only considered during initial provisioning. That is, editing tailscale.com/hostname on an already exposed Service doesn't update the machine name until you clean up and re-expose the Service.
- Cluster ingress using the Kubernetes Ingress resource requires TLS certificates. Currently, the certificates are provisioned on the first connection, which means that the first connection might be slow or even time out.
API server proxy
- The API server proxy runs inside of the cluster. If your cluster is non-functional or is unable to schedule pods, you may lose access to the API server proxy.
- API server proxy requires TLS certificates. Currently the certificates are provisioned on the first API call via the proxy. This means that the first call might be slow or even time out.
Cluster egress
- Cluster workloads can only access the exposed tailnet node by the Kubernetes Service name of the egress proxy. We currently do not make MagicDNS names of the exposed nodes resolve within the cluster. As a result, if you use Tailscale to provision certificates, you may see certificate name mismatch errors. We are working on this.
- Egress to external services supports using an IPv4 or IPv6 address for a single route in the tailscale.com/tailnet-ip annotation, but not IP ranges.
- Egress to external services currently only supports clusters where privileged pods are permitted (that is, GKE Autopilot is not supported).
Glossary
Proxy
In the context of this document, a proxy is the Tailscale node deployed for each user-configured component that the operator manages (such as a tailscale Ingress or a Connector).

The proxy is deployed as a StatefulSet in the operator's namespace (defaults to tailscale). The StatefulSet's name is prefixed by a portion of the configured component's name.

If you need to reliably refer to the proxy's StatefulSet, you can use label selectors. For example, to find the StatefulSet for a tailscale Ingress resource named ts-ingress in the prod namespace, you can run:
$ kubectl get statefulset \
--namespace tailscale \
--selector="tailscale.com/managed=true,tailscale.com/parent-resource-type=ingress,tailscale.com/parent-resource=ts-ingress,tailscale.com/parent-resource-ns=prod"
The tailscale.com/parent-resource-type label is set to svc for a Service and to connector for a Connector.
The tailscale.com labels are also propagated to the Pods.