Kubernetes Service and kube-proxy

Kubernetes Service has several types (ClusterIP, NodePort, LoadBalancer), while kube-proxy can operate in different modes (userspace, iptables, IPVS). But how does network traffic flow from end to end, from the client to the backend pods, as it navigates through Kubernetes Service and kube-proxy?

Kubernetes Service types build on top of one another. So if you use a LoadBalancer-type service, it comes with a ClusterIP and a NodePort as well (with certain exceptions depending on the cloud provider). Kubernetes Service defines the components that handle traffic from the edge to the service’s virtual, logical IP address:

  1. Edge traffic reaches the load balancer. The load balancer spreads the traffic to different nodes via NodePort.

  2. Each node receives the traffic via its NodePort, and forwards that to the service’s virtual IP address, the ClusterIP.

  3. The ClusterIP receives the traffic. Then what?
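For concreteness, the stacking of Service types shows up in a single manifest. The sketch below uses a hypothetical `web` Service; the name, selector, and port values are illustrative only, and the `clusterIP` and external load balancer address are assigned by the cluster and cloud provider at creation time.

```yaml
# Hypothetical LoadBalancer Service: the three layers live in one object.
apiVersion: v1
kind: Service
metadata:
  name: web            # illustrative name
spec:
  type: LoadBalancer   # also allocates a NodePort and a ClusterIP
  selector:
    app: web           # pods backing the Service (its Endpoints)
  ports:
  - port: 80           # the port on the ClusterIP
    targetPort: 8080   # the container port on each backend pod
    nodePort: 30080    # illustrative; auto-assigned from 30000-32767 if omitted
```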

As we know, a Kubernetes Service’s ClusterIP is only a virtual IP. It serves as a “gateway” for several Endpoint IP addresses belonging to the Kubernetes pods.

So when traffic reaches the ClusterIP, what happens next depends on kube-proxy’s operating mode: in iptables mode, netfilter DNAT rules on the node rewrite the destination from the ClusterIP to one of the endpoint (pod) IPs; in IPVS mode, the kernel’s IP Virtual Server performs that load balancing instead, scaling better for large numbers of services; in the legacy userspace mode, kube-proxy itself accepts the connection and proxies it to a pod.

Hence, kube-proxy defines the components that handle traffic from the virtual ClusterIP to the actual pods.
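In iptables mode, kube-proxy renders one DNAT rule per endpoint, each guarded by a `statistic mode random` match: with n endpoints, rule i is tried with probability 1/(n-i), and the last rule always matches, so every endpoint ends up equally likely. A minimal Python sketch of that selection scheme (the pod IPs are made up):

```python
import random

def pick_endpoint(endpoints, rng=random):
    """Mimic kube-proxy's iptables-mode rule chain: rule i matches
    with probability 1/(n - i); the final rule is unconditional."""
    n = len(endpoints)
    for i, ep in enumerate(endpoints):
        if rng.random() < 1.0 / (n - i):
            return ep
    return endpoints[-1]  # never reached: the last probability is 1

# Hypothetical pod endpoint IPs behind one ClusterIP.
endpoints = ["10.244.0.5:8080", "10.244.1.7:8080", "10.244.2.3:8080"]

# Over many trials, each endpoint is chosen roughly 1/3 of the time:
# 1/3 directly, then 1/2 of the remaining 2/3, then all of the last 1/3.
counts = {ep: 0 for ep in endpoints}
for _ in range(30000):
    counts[pick_endpoint(endpoints)] += 1
```

This is why adding or removing a single endpoint forces kube-proxy to rewrite the probabilities on every rule in the chain, one reason IPVS mode (which keeps backends in a kernel hash table) handles churn more gracefully.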