In a recent report, 94% of respondents said they experienced a Kubernetes-related security incident. Misconfigurations are the most common kind of Kubernetes vulnerability, reported by 70% of the surveyed companies [1]. What’s one attractive target for cybercriminals? The Kubernetes control plane.
Teams must harden the perimeter of nodes, masters, core components, APIs, and public-facing pods. Otherwise, they can’t defend clusters against existing and potential vulnerabilities. Here are 10 best practices to help you secure your Kubernetes control plane and speed up the deployment process.
10 tips to secure your Kubernetes control plane
1. Use Kubernetes Role-Based Access Control (RBAC)
Take advantage of RBAC to control who has access to the Kubernetes API and, once they’re in, which permissions they have. You’ll find RBAC enabled by default in Kubernetes version 1.6 and up. Since Kubernetes combines authorization modules, once RBAC is on you can disable the legacy Attribute-Based Access Control (ABAC).
When setting permissions, pick namespace-specific ones over cluster-wide ones. Even when your team is busy debugging, it’s better not to give anyone cluster administrator privileges – a leaked or misused cluster-admin credential compromises the entire cluster.
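Here’s a minimal sketch of what namespace-scoped permissions can look like. The namespace, role, and user names below are placeholders, not references to a real setup:

```yaml
# Namespace-scoped Role: read-only access to pods in the "staging"
# namespace. All names here are illustrative placeholders.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: staging
  name: pod-reader
rules:
- apiGroups: [""]               # "" means the core API group
  resources: ["pods"]
  verbs: ["get", "list", "watch"]
---
# Bind the Role to a single user - the grant applies only within "staging".
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: read-pods
  namespace: staging
subjects:
- kind: User
  name: jane                    # placeholder user
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io
```

Because this uses a Role and RoleBinding rather than their ClusterRole equivalents, the permissions can never leak beyond the one namespace.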
2. Bet on isolation
Don’t expose your Kubernetes nodes directly to public networks. Trust me; it’s a bad idea. The best place for your nodes is a separate network with no direct connection to the general corporate network.
Another important isolation best practice is to separate the Kubernetes control and data traffic. You don’t want them to flow through the same pipe. Open access to the data plane results in open access to the control plane.
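Node-level segmentation happens at the infrastructure layer, but you can apply the same principle inside the cluster too. As a complementary sketch – it assumes a CNI plugin that enforces NetworkPolicy, such as Calico or Cilium, and an illustrative namespace name – a default-deny policy blocks all pod traffic until you explicitly allow it:

```yaml
# Default-deny: blocks all ingress and egress for every pod in the
# "staging" namespace until explicit allow rules are added.
# Only enforced if your CNI plugin supports NetworkPolicy.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-all
  namespace: staging            # placeholder namespace
spec:
  podSelector: {}               # empty selector = every pod in the namespace
  policyTypes:
  - Ingress
  - Egress
```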
3. Avoid deploying objects to the default namespace
In Kubernetes, namespaces provide a mechanism for isolating resource groups within one cluster. They’re a great fit for environments where many users are spread across multiple teams or projects.
All objects that have no namespace assigned to them end up in the default namespace, which makes it easier for an attacker to deploy malicious containers right next to your most critical workloads. I recommend creating dedicated namespaces for the objects in your deployment [2].
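A minimal sketch of the idea – the namespace, workload, and image names are placeholders:

```yaml
# Create a dedicated namespace and pin the workload to it explicitly,
# so nothing silently lands in "default".
apiVersion: v1
kind: Namespace
metadata:
  name: payments
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: payments-api
  namespace: payments           # explicit namespace assignment
spec:
  replicas: 2
  selector:
    matchLabels:
      app: payments-api
  template:
    metadata:
      labels:
        app: payments-api
    spec:
      containers:
      - name: api
        image: payments-api:1.0 # placeholder image
```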
4. Steer clear of risky Service types
Don’t expose workloads through Service types such as NodePort or LoadBalancer unless they genuinely need to be reachable from outside. Instead, expose internal services through ClusterIP. This is how you prevent a malicious actor from discovering your cluster’s infrastructure components.
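For illustration, an internal-only Service could look like this – the names and ports are placeholders:

```yaml
# ClusterIP Service: reachable only from inside the cluster,
# unlike NodePort or LoadBalancer.
apiVersion: v1
kind: Service
metadata:
  name: payments-api
  namespace: payments
spec:
  type: ClusterIP               # the default type, stated explicitly
  selector:
    app: payments-api
  ports:
  - port: 80                    # port exposed inside the cluster
    targetPort: 8080            # container port receiving the traffic
```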
5. Encrypt secrets
Did you know that secrets aren’t actually encrypted at rest by default in vanilla Kubernetes? Managed Kubernetes services like GKE do encrypt secrets at rest.
Why is encrypting secrets important? If anyone gains access to your key-value store, they get access to everything in your cluster – including all cluster secrets in plain text. Encrypting the cluster state store is the best way to secure your cluster against data-at-rest exfiltration.
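In self-managed clusters, encryption at rest is configured through an EncryptionConfiguration file passed to the API server with the --encryption-provider-config flag. A minimal sketch, with a placeholder key:

```yaml
# EncryptionConfiguration for the API server. The secret value is a
# placeholder - generate a real one with:
#   head -c 32 /dev/urandom | base64
apiVersion: apiserver.config.k8s.io/v1
kind: EncryptionConfiguration
resources:
- resources:
  - secrets
  providers:
  - aescbc:
      keys:
      - name: key1
        secret: <BASE64_ENCODED_32_BYTE_KEY>  # placeholder
  - identity: {}                # fallback for reading pre-existing plaintext data
```

Note that only secrets written after the change get encrypted, so existing secrets need to be rewritten once the configuration is in place.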
6. Secure access to etcd
Access to etcd is equivalent to root permission on the entire cluster. That makes it the most important control plane component to secure.
Ensure that communication with etcd is encrypted and that clients use certificate-based authentication. To limit the attack surface, ideally only the API server should have access to etcd. The Kubernetes documentation on operating etcd clusters describes how to set this up.
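As a sketch, on a kubeadm-style cluster the API server’s etcd client TLS settings look like the fragment below – the certificate paths are typical kubeadm defaults and may differ in your environment. On the etcd side, starting etcd with --client-cert-auth=true enforces certificate-based client authentication.

```yaml
# Fragment of a kube-apiserver static pod manifest: encrypted,
# certificate-authenticated communication with etcd.
spec:
  containers:
  - name: kube-apiserver
    command:
    - kube-apiserver
    - --etcd-servers=https://127.0.0.1:2379
    - --etcd-cafile=/etc/kubernetes/pki/etcd/ca.crt
    - --etcd-certfile=/etc/kubernetes/pki/apiserver-etcd-client.crt
    - --etcd-keyfile=/etc/kubernetes/pki/apiserver-etcd-client.key
```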
7. Don’t mount container runtime sockets in your containers
Why should you care whether your deployments have container runtime (CRI) sockets mounted in containers? Mounting docker.sock, containerd.sock, or crio.sock increases the chance of an attacker gaining root privileges on the host and control over the container runtime. To avoid this, remove the /var/run/<CRI>.sock hostPath volume.
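To make the anti-pattern concrete, this is the kind of volume definition to look for and remove – the pod and image names are placeholders:

```yaml
# Anti-pattern: mounting the container runtime socket into a pod.
# A compromised container with this mount can drive the runtime on the host.
apiVersion: v1
kind: Pod
metadata:
  name: risky-pod               # placeholder name
spec:
  containers:
  - name: app
    image: nginx                # placeholder image
    volumeMounts:
    - name: docker-sock
      mountPath: /var/run/docker.sock
  volumes:
  - name: docker-sock
    hostPath:
      path: /var/run/docker.sock  # remove this hostPath volume
```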
8. Running containers without a read-only root file system? Think twice
Are your containers running without a read-only root file system? A read-only file system prevents malicious binaries from writing to the system or taking it over. You can ensure that containers use only a read-only filesystem by setting readOnlyRootFilesystem to true in the container’s securityContext definition.
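A minimal sketch – the pod and image names are placeholders, and an emptyDir volume covers the paths the application legitimately writes to:

```yaml
# Container-level securityContext enforcing a read-only root filesystem.
apiVersion: v1
kind: Pod
metadata:
  name: readonly-pod            # placeholder name
spec:
  containers:
  - name: app
    image: nginx                # placeholder image
    securityContext:
      readOnlyRootFilesystem: true
    volumeMounts:
    - name: tmp
      mountPath: /tmp           # writable scratch space for the app
  volumes:
  - name: tmp
    emptyDir: {}
```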
9. Secure access to the Kubernetes control plane
To get an extra layer of security features like multi-factor authentication, use a third-party authentication provider. And to fully secure your control plane access, avoid managing users at the API server level. Instead, use a solution from your cloud provider, like AWS Identity and Access Management (IAM). If you can’t use your cloud provider’s IAM, choose OpenID Connect (OIDC) alongside an SSO provider you’re familiar with [3].
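For illustration, OIDC authentication is wired up through API server flags like the ones below – the issuer URL, client ID, and claim names are placeholders that depend on your SSO provider:

```yaml
# Fragment of a kube-apiserver static pod manifest enabling OIDC.
spec:
  containers:
  - name: kube-apiserver
    command:
    - kube-apiserver
    - --oidc-issuer-url=https://sso.example.com  # placeholder issuer
    - --oidc-client-id=kubernetes                # placeholder client ID
    - --oidc-username-claim=email
    - --oidc-groups-claim=groups
```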
10. Create a rolling update strategy
To keep your Kubernetes security airtight, build a rolling update strategy. Rolling updates minimize your application’s downtime by updating pods incrementally. Check the Kubernetes documentation on rolling update deployments for more information.
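Here’s a sketch of what an explicit rolling update strategy looks like in a Deployment – the names, image, and the exact maxUnavailable/maxSurge values are placeholders to tune for your workload:

```yaml
# Deployment fragment: pods are replaced incrementally during an update,
# keeping most replicas serving traffic at all times.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: payments-api            # placeholder name
spec:
  replicas: 4
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1         # at most one pod down during the rollout
      maxSurge: 1               # at most one extra pod above desired count
  selector:
    matchLabels:
      app: payments-api
  template:
    metadata:
      labels:
        app: payments-api
    spec:
      containers:
      - name: api
        image: payments-api:1.1 # placeholder image
```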
Another point is running vulnerability scans at runtime. Your cluster faces the risk of supply chain attacks, and to handle them you need to understand what actually made it into your cluster – even if you scanned all the deployment artifacts during CI/CD. Agent-based security solutions do well here, often better than “agentless” ones.
Achieve Kubernetes control plane security with expert help
As the Kubernetes ecosystem evolves, so do its security concerns. Keeping up with changes is time-consuming, and once vulnerabilities pile up, engineers are forced to prioritize many items at once.
CAST AI’s Security Report Best Practices feature checks clusters against industry best practices, Kubernetes recommendations, and CIS Kubernetes benchmarks – and then automatically prioritizes the findings to set you on the right track from the start.
Scan your Kubernetes cluster against configuration and security best practices, and find out how to secure it optimally.
References
- [1] – Red Hat
- [2] – Kubernetes docs
- [3] – Kubernetes docs
FAQ
What is the Kubernetes control plane?
A Kubernetes cluster is made of worker machines called nodes, which host pods. The control plane manages the worker nodes and the pods in the cluster. In production environments, the control plane runs across multiple machines, and a cluster runs multiple nodes for fault tolerance and high availability.
What does the Kubernetes control plane do?
The control plane includes components that make global decisions about the cluster – for example, scheduling pods onto nodes. It also detects and responds to cluster events – for instance, the control plane starts up a new pod when a deployment’s replicas field is unsatisfied.
What are control plane nodes?
In a Kubernetes cluster, control plane nodes (also called master nodes) run the services required to control the cluster.
Why monitor the Kubernetes control plane?
The Kubernetes control plane serves as the source of truth and the communication hub for the entire Kubernetes stack. To make sure everything is running smoothly, teams need a control plane monitoring solution that covers the API server, the controller manager, the scheduler, and etcd (the key-value store).