There are few clear visuals explaining the shared responsibility model for GKE. It is easy to understand what is included in IaaS and SaaS: with IaaS, Google is responsible only for the hardware, storage, and network, whereas with SaaS, Google is responsible for everything except content and user authentication. GKE is neither of these and falls in between.
For GKE, at a high level, Google is responsible for protecting:
The underlying infrastructure, including hardware, firmware, kernel, OS, storage, network, and more. This includes encrypting data at rest by default, encrypting data in transit, using custom-designed hardware, laying private network cables, protecting data centers from physical access, and following secure software development practices.
The Kubernetes distribution. GKE provides the latest upstream versions of Kubernetes and supports several minor versions. Providing updates to these, including patches, is Google’s responsibility.
The nodes’ operating system, such as Container-Optimized OS (COS) or Ubuntu. GKE promptly makes patches to these images available, and if you have node auto-upgrade enabled, they are deployed automatically. Note that this is the operating system of the node itself, not the operating system running inside your containers.
The control plane. In GKE, Google manages the control plane, which includes the master VMs, the API server, and other components running on those VMs, as well as the etcd database. This includes upgrades and patching, scaling, and repairs, all backed by an SLO.
Google Cloud integrations, for IAM, Cloud Audit Logging, Stackdriver, Cloud Key Management Service, Cloud Security Command Center, etc. These integrations make the same security controls available to IaaS workloads across Google Cloud available on GKE as well.
Protecting workloads is still your responsibility
What is covered so far is the underlying infrastructure that runs your workload. Application security and other protections for your nodes and workloads remain your responsibility. These include:
Kubernetes configurations for your workloads
Setting up network policies to restrict pod-to-pod traffic
Setting up pod security policies to restrict pod capabilities
Keeping any software installed on your nodes patched/updated
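As an illustration of the network policy item above, here is a minimal sketch of a Kubernetes NetworkPolicy that first denies all ingress traffic to pods in a namespace and then allows only labeled frontend pods to reach the backend. The namespace and labels (`my-app`, `app=frontend`, `app=backend`) are hypothetical and would need to match your own workloads.

```yaml
# Default-deny: block all ingress to pods in this namespace
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
  namespace: my-app            # hypothetical namespace
spec:
  podSelector: {}              # an empty selector matches all pods
  policyTypes:
    - Ingress
---
# Allow traffic to backend pods only from pods labeled app=frontend
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-frontend-to-backend
  namespace: my-app
spec:
  podSelector:
    matchLabels:
      app: backend
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: frontend
```

Note that network policies are only enforced when a network policy provider (such as Calico, which GKE supports) is enabled on the cluster.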
There are other commercial distributions, such as Rancher, that allow Kubernetes deployments to be cloud-agnostic. Most of these platforms focus on speed and agility, so it is worth evaluating their shared responsibility models as well.
At Araali, our goal is to make it easy for teams to take care of this shared responsibility matrix with a few simple commands (demo).