Where is my GKE master?

Julio Diez
Google Cloud - Community
Dec 17, 2020



Google Kubernetes Engine (GKE) clusters can be configured in different ways that make accessing the control plane, your Kubernetes master or API server, a non-trivial task. You will see that what matters is not only where the master is, but also where you are. I will explore the different scenarios and configurations to help you decide on your access model and troubleshoot your setup.

Scenarios

You need tools like kubectl to access the Kubernetes API and manage your cluster workloads, and the way your GKE cluster is configured imposes restrictions on that access. First, let's look at the different places from which you may want to access a GKE master. The following figure shows them.

Figure 1. Accessing the GKE master from different environments
  1. Internet: this represents accessing the master from a (non-GCP) public IP, e.g. your laptop at home, although it could also be your corporate machine when going through the Internet.
  2. Cluster nodes: these are your worker nodes accessing the master, typically the kubelet agent that runs on each node.
  3. VMs in cluster's VPC: other VMs in the same cluster's VPC network, for example a management host to reach a private API endpoint. This option has more sub-scenarios that may be relevant in some cases, e.g. if we consider the subnet or region where VMs are located. I will delve into these later.
  4. VMs in another VPC: similar to the previous point, but from a VPC different from the cluster's. These VPCs are not connected to each other.
  5. Cloud Shell: this is the online shell environment for Google Cloud that you access with your browser, with pre-installed tools like gcloud and kubectl.
  6. On-premises: companies typically connect their private IP space to GCP VPCs through a VPN or Interconnect link. This case may be your corporate machine or CI/CD tooling managing your cluster.

Now let's explain the different configurations for GKE and how you can get access or not from these places.

Public clusters

By default GKE clusters are created with public IPs for master and worker nodes. Create a cluster and retrieve the Kubernetes credentials to gain access:

$ gcloud container clusters create test-cluster
$ gcloud container clusters get-credentials test-cluster

Run a kubectl command to verify that you have access:

$ kubectl get nodes -o name
node/gke-test-cluster-default-pool-f89318a3-5bg5
node/gke-test-cluster-default-pool-f89318a3-nfsp
node/gke-test-cluster-default-pool-f89318a3-rwb1

You can also check the public API endpoint where the master is listening:

$ kubectl cluster-info
Kubernetes master is running at https://104.155.43.13
...
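
If you prefer, you can get the same endpoint from the cluster resource itself; the endpoint field holds the master's public IP:

$ gcloud container clusters describe test-cluster --format="value(endpoint)"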

Assuming you have the necessary credentials, you will have access from most of the environments since the API endpoint is public:

  1. Internet: ✅
    You probably already know this, since you likely ran this test from your laptop.
  2. Cluster nodes: ✅
    Sure, cluster nodes have public IPs, and they need access to the master.
  3. VMs in cluster’s VPC: ✅
    Any VM with a public IP can get access to the cluster.
  4. VMs in another VPC: ✅
    Same comment as above, the VPC here doesn't make a difference.
  5. Cloud Shell: ✅
    Cloud Shell is just a managed VM with a public IP.
  6. On-premises: ❌
    This is the only case without access to the cluster. Remember that here we consider corporate machines with only private access to GCP; a corporate machine with a public IP would fall under option 1 (Internet).

Public clusters facilitate administration. The master API is publicly accessible over TLS provided you have the credentials, and you can harden worker nodes with GCP firewall rules.
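
For example, a sketch of restricting SSH to the worker nodes could look like the following; the network, node tag and corporate range are assumptions you should replace with your own values:

$ # Assumed values: default network, 203.0.113.0/24 corporate range, gke-test-cluster-node tag
$ gcloud compute firewall-rules create allow-ssh-from-corp --network=default --direction=INGRESS --action=ALLOW --rules=tcp:22 --source-ranges=203.0.113.0/24 --target-tags=gke-test-cluster-node --priority=900
$ gcloud compute firewall-rules create deny-ssh-from-others --network=default --direction=INGRESS --action=DENY --rules=tcp:22 --source-ranges=0.0.0.0/0 --target-tags=gke-test-cluster-node --priority=1000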

Public clusters + Authorized networks

You may want to restrict access to the master API to a known set of IP addresses, to limit potential attacks or unwanted access, e.g. in case credentials are stolen. Update your cluster using GKE's authorized networks feature and your desired CIDR ranges (e.g. your ISP-provided IP range):

$ gcloud container clusters update test-cluster --enable-master-authorized-networks --master-authorized-networks cidr1,cidr2...

Take into account that this can only block IPs from outside Google; IPs from Google Cloud will continue to have access.
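
You can double-check what the cluster currently allows by describing it; masterAuthorizedNetworksConfig is the field holding the authorized ranges:

$ gcloud container clusters describe test-cluster --format="yaml(masterAuthorizedNetworksConfig)"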

Let's summarize it. The symbol ❓ means that only authorized IP ranges will get access:

  1. Internet: ❓
  2. Cluster nodes: ✅
  3. VMs in cluster’s VPC: ✅
  4. VMs in another VPC: ✅
  5. Cloud Shell: ✅
  6. On-premises: ❌

Using authorized networks with public clusters doesn't really add more security, since a potential attacker can always spin up a VM in GCP to gain access from there.

Private clusters

In a private cluster, worker nodes only get internal IP addresses. However, the control plane has a private endpoint and a public endpoint. Access to the public endpoint can be disabled when creating a private cluster, as we will see later.

$ gcloud container clusters create priv-cluster --enable-ip-alias --enable-private-nodes --master-ipv4-cidr 172.16.0.0/28 --no-enable-master-authorized-networks
$ gcloud container clusters get-credentials priv-cluster [--internal-ip]

Access to the private endpoint goes through an internal load balancer (ILB). That means you will need to enable global access if you want to reach the control plane from a GCP region different from the one where it is deployed:

$ gcloud container clusters update priv-cluster --enable-master-global-access
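
You can verify both endpoints, and whether global access is enabled, by describing the cluster; the field names below are those exposed by the clusters describe output:

$ gcloud container clusters describe priv-cluster --format="yaml(privateClusterConfig.publicEndpoint,privateClusterConfig.privateEndpoint,privateClusterConfig.masterGlobalAccessConfig)"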

With a public and a private endpoint you can get access from anywhere, with some caveats:

  1. Internet: ✅
  2. Cluster nodes: ✅
    They will use the private endpoint.
  3. VMs in cluster’s VPC:
    ◾ Public endpoint: ✅
    ◾ Private endpoint:
    🔸 VMs in same region as the cluster: ✅
    🔸 VMs in different region: ❌
    🔸 VMs in different region, global access: ✅
  4. VMs in another VPC: ✅
  5. Cloud Shell: ✅
  6. On-premises:
    Similar to option 3:
    🔸 VPN in same region as the cluster: ✅
    🔸 VPN in different region: ❌
    🔸 VPN in different region, global access: ✅

Private clusters add the flexibility to administer your cluster from on-premises, and nodes and pods are isolated from the Internet by default. You can use Cloud NAT to provide Internet access for your private nodes.
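
A minimal Cloud NAT setup for that could look like this sketch; the router and NAT names, the network and the region are assumptions to adapt to your environment:

$ # Assumed names, network and region
$ gcloud compute routers create nat-router --network=default --region=europe-west1
$ gcloud compute routers nats create nat-config --router=nat-router --region=europe-west1 --auto-allocate-nat-external-ips --nat-all-subnet-ip-ranges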

Private clusters + Authorized networks

As with public clusters, you can use GKE’s authorized networks feature with private clusters to restrict access to the master API. The behavior is a bit different though:

$ gcloud container clusters create priv-cluster --enable-ip-alias --enable-private-nodes --master-ipv4-cidr 172.16.0.0/28 --enable-master-authorized-networks --master-authorized-networks cidr1,cidr2...

Whereas for public clusters only IPs from outside Google can be blocked, for private clusters everything except the cluster's subnet can be blocked with authorized networks.

  1. Internet: ❓
  2. Cluster nodes: ✅
  3. VMs in cluster’s VPC:
    ◾ Public endpoint: ❓
    ◾ Private endpoint:
    🔸 VMs in same region & subnet as the cluster: ✅
    🔸 VMs in same region & different subnet: ❓
    🔸 VMs in different region: ❌
    🔸 VMs in different region, global access: ❓
  4. VMs in another VPC: ❓
  5. Cloud Shell: ❓
  6. On-premises:
    🔸 VPN in same region as the cluster: ❓
    🔸 VPN in different region: ❌
    🔸 VPN in different region, global access: ❓

With private clusters and authorized networks you gain finer access control over your cluster and improve security by using access control lists.

Private clusters + private endpoint

As mentioned before, access to the public endpoint can be disabled. In this case, the authorized networks feature is required:

$ gcloud container clusters create priv-cluster --enable-ip-alias --enable-private-nodes --enable-private-endpoint --master-ipv4-cidr 172.16.0.0/28 --enable-master-authorized-networks --master-authorized-networks cidr1,cidr2...

  1. Internet: ❌
  2. Cluster nodes: ✅
  3. VMs in cluster’s VPC:
    ◾ Public endpoint: ❌
    ◾ Private endpoint:
    🔸 VMs in same region & subnet as the cluster: ✅
    🔸 VMs in same region & different subnet: ❓
    🔸 VMs in different region: ❌
    🔸 VMs in different region, global access: ❓
  4. VMs in another VPC: ❌
  5. Cloud Shell: ❌
  6. On-premises:
    🔸 VPN in same region as the cluster: ❓
    🔸 VPN in different region: ❌
    🔸 VPN in different region, global access: ❓

If you don't need public access to your cluster, or want to avoid it entirely, disable the public endpoint. This setup is probably the most secure, and also the least flexible.

Setups with VPC Peering

The control plane’s VPC network is located in a project managed by Google. This VPC is connected to your cluster's VPC with VPC Network Peering. VPC Network Peering is not transitive: if network-b is peered with network-a and with network-c, but those two are not directly connected, network-a cannot communicate with network-c over the peerings.
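
You can see this peering from your project; the peeringName field tells you which peering GKE created for the cluster (the cluster name is the one used above, and the default network is assumed):

$ gcloud container clusters describe priv-cluster --format="value(privateClusterConfig.peeringName)"
$ gcloud compute networks peerings list --network=default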

This situation often happens in hub & spoke architectures that use VPC Peering. Imagine network-a is a hub VPC, providing network services like VPN connection to on-premises, network-b is a spoke VPC running services like GKE, and network-c is the control plane’s VPC network. From the network hub or any other network connected to it, like on-premises, it is not possible to reach the GKE private endpoint directly.

In these cases you can deploy a jump host in the GKE VPC to allow access to the Kubernetes API, but that is a topic for another article.

Troubleshooting points

To close this article, here are some common issues and tips related to GKE clusters and their control plane:

  • GKE is integrated with OAuth, so make sure your users or service accounts have proper IAM roles (like Kubernetes Engine Developer) and access scopes.
  • Make sure to enable master global access if you want to reach the private endpoint from a region different from the one where it is deployed.
  • If you connect to the GKE private endpoint through VPN, export custom routes to the control plane's VPC so that the master's return traffic can reach the source (see the sketch after this list).
  • If your VMs only have private IPs you will probably need Private Google Access (PGA) to reach Google Cloud APIs (e.g. to access container images from Artifact Registry). PGA is enabled by default in private clusters except for Shared VPC clusters.
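
As a sketch of the custom routes point, you can enable route export on the peering that GKE created (the one you can find through peeringName, as shown earlier); the peering placeholder and the network name are assumptions:

$ # Export custom routes (e.g. routes learned over your VPN) to the control plane's VPC
$ gcloud compute networks peerings update PEERING_NAME --network=default --export-custom-routes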

I hope this information will clarify how to configure your GKE cluster. Thanks for reading!
