
How to deploy more pods on kubernetes nodes


Vishnu Sunkari
California USA

How to deploy maximum number of pods on EKS nodes

AWS EKS supports native VPC networking through the Amazon VPC Container Network Interface (CNI) plugin for Kubernetes. With this plugin, each Kubernetes pod has the same IP address inside the pod as it has on the VPC network.
The Amazon VPC CNI plugin runs on each of your EC2 nodes as a DaemonSet named aws-node.

This is a great feature, but it introduces a limit on the number of pods per EC2 node. Whenever you deploy a pod on an EKS worker node, the CNI plugin assigns it a secondary IP address from the VPC subnet and attaches that address to the instance's network interface. You can find the number of IP addresses per network interface for each instance type here: https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-eni.html#AvailableIpPerENI

In summary, there are three limits we need to be aware of:

- the maximum number of network interfaces (ENIs) the instance type supports,
- the maximum number of IPv4 addresses per ENI, and
- the Kubernetes recommended limit of 110 pods per node.

For example:

Limit                          m5.4xlarge   m5.xlarge
AWS Recommended Limit          234          58
Kubernetes Recommended Limit   110          110
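The AWS limits in the table come from a simple formula based on the ENI and IP-per-ENI figures for each instance type (ENI counts below are taken from the EC2 documentation linked above):

```shell
# AWS formula for the ENI-based max pods per node:
#   maxPods = ENIs * (IPv4 addresses per ENI - 1) + 2
# (one IP per ENI is reserved for the interface itself;
#  +2 accounts for host-networked pods such as aws-node and kube-proxy)

# m5.4xlarge: 8 ENIs, 30 IPv4 addresses each
echo $((8 * (30 - 1) + 2))   # 234

# m5.xlarge: 4 ENIs, 15 IPv4 addresses each
echo $((4 * (15 - 1) + 2))   # 58
```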

Because of the Kubernetes recommended limit above, EKS restricts max pods to just 110 on an m5.4xlarge node, even though its ENIs can support 234. Newer versions of eksctl expose a maxPodsPerNode configuration parameter (https://eksctl.io/usage/eks-managed-nodes/ — search the page for maxPodsPerNode) that lets you push past the default to the full 234 pods. Below is a sample cluster config manifest using maxPodsPerNode:

apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig

metadata:
  name: dev-cluster
  region: ap-northeast-1
  version: "1.23"
  tags:
    "Stage": "dev"
    "Type": "eks"

managedNodeGroups:
  - name: generalCompute-4
    instanceType: m5.4xlarge
    minSize: 3
    maxSize: 20
    maxPodsPerNode: 234
    desiredCapacity: 3
    volumeSize: 200
    updateConfig:
      maxUnavailablePercentage: 10
    
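Once the node group is up, you can confirm the new limit by checking the pod capacity the kubelet advertises. A minimal sketch, assuming the manifest above is saved as cluster.yaml and kubectl is configured against the cluster:

```shell
# Create the cluster and node group from the manifest above
eksctl create cluster -f cluster.yaml

# Verify each node now advertises 234 pods of capacity
kubectl get nodes -o custom-columns=NAME:.metadata.name,PODS:.status.capacity.pods
```

These commands require a live AWS account and cluster, so expect the create step to take several minutes.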
