Understanding Amazon Elastic Container Service for Kubernetes (EKS)
Amazon Elastic Container Service for Kubernetes (EKS) provides a managed Kubernetes service. Amazon does the undifferentiated heavy lifting, such as provisioning the cluster, performing upgrades and patching. EKS is not a proprietary AWS fork of Kubernetes in any way; it is compatible with existing plugins and tooling, which means you can migrate any standard Kubernetes application to EKS without changes to your code base. You connect to your EKS cluster with kubectl in the same way you would to a self-hosted Kubernetes cluster.
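As an illustration, a kubeconfig user entry for an EKS cluster typically delegates authentication to the AWS CLI; the cluster name below is a placeholder. Once this entry is in place, ordinary kubectl commands work unchanged:

```yaml
# Kubeconfig "users" entry for an EKS cluster (name is illustrative).
users:
- name: my-eks-cluster
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1beta1
      command: aws
      args: ["eks", "get-token", "--cluster-name", "my-eks-cluster"]
```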
At this stage, EKS is only loosely integrated with other AWS services, though this is expected to change over time as EKS adoption increases. That said, Kubernetes is much more popular than either Elastic Beanstalk or ECS.
Managed Control Plane
EKS provides a managed control plane, which includes the Kubernetes master nodes, the API server and the etcd persistence layer. As part of the highly available control plane, you get three masters and three etcd nodes, and AWS provisions automatic backup snapshotting of the etcd nodes alongside automated scaling.
With EKS, AWS is responsible for maintaining the master nodes for you, provisioning them across multiple Availability Zones for redundancy. As your workload increases, AWS adds master capacity for you; if you were running your own Kubernetes cluster, you would have to scale the control plane yourself as you added worker nodes.
EKS runs a network topology that integrates tightly with your Amazon Virtual Private Cloud (VPC). EKS uses a Container Network Interface (CNI) plugin that gives pods native VPC networking in place of the typical Kubernetes overlay network. This plugin allows you to treat your EKS deployment as just another part of your existing AWS infrastructure: network access control lists, route tables and subnets are all available to the Kubernetes applications running in EKS.
Each pod gets an IP address on an Elastic Network Interface, and these addresses belong to the CIDR block of the subnet where the worker node is deployed. In the diagram above, you can see the IP addresses assigned to the virtual Ethernet adapter on each pod. These pod IP addresses are fully routable within the VPC, and they comply with all the policies and access controls at the network level, so things like security groups and ACLs remain in effect. On each EC2 instance, or worker node, Kubernetes runs a DaemonSet that hosts the CNI plugin. This plugin is a thin layer that communicates with a node-local control plane, which maintains a pool of available IP addresses. When the kubelet on a node schedules a pod, it asks the CNI plugin to allocate an IP address. At this point, the CNI plugin grabs a secondary IP address from the pool and associates it with the pod. It then hands that configuration back to the kubelet.
EKS-Optimized AMI
The EKS-optimized AMI is based on Amazon Linux 2. It comes pre-configured to work with EKS out of the box, with all the required services pre-installed, including Docker, the kubelet and the AWS IAM Authenticator. When you provision your EKS worker nodes with the AWS-supplied CloudFormation template, it launches the worker nodes with an EC2 user data script that bootstraps each node with the configuration it needs to join your EKS cluster automatically.
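The networking behaviour described above can be made concrete with a small sketch. The code below is a simplification, not the real amazon-vpc-cni-k8s implementation; the class and function names are invented for illustration, though the ENI limits shown are the published values for those two instance types, and the max-pods formula, ENIs × (IPs per ENI − 1) + 2, is the standard EKS calculation:

```python
# Simplified sketch of the VPC CNI allocation flow; not the real
# amazon-vpc-cni-k8s daemon, just an illustration of the idea.
import ipaddress

# Per-instance-type ENI limits as (ENIs, IPv4 addresses per ENI).
ENI_LIMITS = {"m5.large": (3, 10), "t3.medium": (3, 6)}

def max_pods(instance_type):
    """Standard EKS max-pods formula: ENIs * (IPs per ENI - 1) + 2."""
    enis, ips_per_eni = ENI_LIMITS[instance_type]
    return enis * (ips_per_eni - 1) + 2

class IpPool:
    """Warm pool of secondary IPs kept by the node-local control plane."""
    def __init__(self, subnet_cidr, warm_count):
        hosts = ipaddress.ip_network(subnet_cidr).hosts()
        self.free = [str(next(hosts)) for _ in range(warm_count)]
        self.assigned = {}

    def allocate(self, pod):
        # Conceptually: the kubelet schedules a pod and asks the CNI
        # plugin for an address; the plugin draws one from the pool.
        ip = self.free.pop()
        self.assigned[pod] = ip
        return ip

    def release(self, pod):
        # When the pod terminates, its address returns to the pool.
        self.free.append(self.assigned.pop(pod))

pool = IpPool("10.0.1.0/24", warm_count=5)
print(max_pods("m5.large"))    # 29 pods on an m5.large
print(pool.allocate("web-0"))  # an address from 10.0.1.0/24
```

Because every pod consumes a real VPC address, the number of pods a node can host is bounded by its ENI limits, which is why max-pods matters when sizing worker nodes.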
Amazon EKS provides a straightforward way to run a managed Kubernetes cluster on AWS. Because it is compatible with open-source Kubernetes, workloads can be safely migrated to any other Kubernetes cluster at any time. It is worth mentioning that for users who rely on solutions for centralized management of Kubernetes clusters, it makes sense to go with EKS rather than an option such as ECS, since EKS exposes the same API as open-source Kubernetes.
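Because EKS exposes the standard Kubernetes API, an ordinary manifest applies to EKS exactly as it would to any other cluster. The Deployment below is a generic example; the names and image are illustrative, not specific to EKS:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 3
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: nginx:1.25
        ports:
        - containerPort: 80
```

Running kubectl apply -f on this file against an EKS cluster requires no EKS-specific changes.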