Introduction

Kubernetes networking often feels abstract until you look at what actually exists inside a node. In reality, a pod is built from standard Linux networking primitives: network namespaces, virtual ethernet pairs, routing tables, and iptables rules.

This blog focuses on understanding real pod-to-pod networking by tracing the packet path through a Kubernetes node and across nodes using Linux networking primitives. It is the first part of a networking deep-dive series; upcoming blogs will cover Kubernetes Services and kube-proxy internals, DNS flow with CoreDNS, NetworkPolicy enforcement, external north–south traffic, and service-to-pod versus pod-to-service packet behavior.

The Mental Model

At a high level, each pod looks like this:

Fig: Mental model of a pod, showing the network namespace, veth pair, and host routing

Every packet entering or leaving a pod passes through these layers.

What Actually Exists Inside a Pod

A Kubernetes pod is not a VM or a special network device. It is simply a group of containers sharing one Linux network namespace, one IP address, and one routing table:

Fig: The pod receives an IP from the cluster's pod network and is scheduled on the worker-1 node

The Pause Container

Kubernetes creates a small "pause" container first. Its only job is to hold the network namespace alive. All application containers join this namespace.

Why it matters: because the namespace is anchored to the pause container, the pod keeps its IP address and shared network stack even when application containers restart.

Fig: Pause container holding the pod network namespace
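
One hedged way to see this on the node itself is to list the network namespaces with lsns; the exact output depends on your container runtime, but the pause process typically shows up as the holder:

# on the worker node, as root: list network namespaces and the process anchoring each one
lsns -t net
# pod namespaces usually show a pause (or /pause) command in the COMMAND column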

Pod Interface (eth0)

Inside the pod, running ip a and ip route reveals the network configuration:

ip a
ip route
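
If you would rather not open a shell inside the pod, the same commands can be run from your workstation with kubectl exec (the pod name web-pod below is a placeholder, and the image must contain the ip binary):

kubectl exec web-pod -- ip a
kubectl exec web-pod -- ip route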

You will see:

Fig: Interfaces inside the Pod network namespace — lo for localhost and eth0 as the Pod's virtual NIC. eth0@if21 with 10.44.0.6 confirms a veth pair connected to the worker node.
Fig: Pod routing table — all traffic goes out via eth0 to the node-side gateway (default via 10.44.0.0). The 10.32.0.0/12 route means pod-network traffic is directly reachable through eth0.
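
For readers skimming without the screenshots, the output looks roughly like this (addresses are taken from the figures above and from this demo's Weave setup; yours will differ):

# ip a (abridged): eth0 is one end of a veth pair, and @if21 names the peer's ifindex on the host
2: eth0@if21: <BROADCAST,MULTICAST,UP,LOWER_UP> ...
    inet 10.44.0.6/12 scope global eth0
# ip route: everything leaves via eth0
default via 10.44.0.0 dev eth0
10.32.0.0/12 dev eth0 scope link src 10.44.0.6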

The veth Pair — The Bridge Between Pod and Node

The most important concept in Kubernetes networking is the veth pair.

A veth pair acts like a virtual cable. For this demo we use Weave Net (same-node examples) and Flannel (cross-node examples), but the concept remains the same across other CNIs such as Calico and Cilium.

[ pod eth0 ] <====> [ host vethwe-XXXX ]

One end lives in the pod namespace, and the other end lives in the node's root namespace. When a packet leaves the pod through eth0, it immediately appears on the host-side veth interface:

Fig: Veth pair connectivity between pod and node
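
The @if21 suffix on the pod's eth0 is the ifindex of its peer in the host namespace, so you can match the two ends by hand. The index and host-side interface name below come from the figures and are otherwise illustrative:

# inside the pod: note the peer ifindex after '@if'
ip link show eth0
# on the node: the interface with that index is the host side of the pair
ip link | grep '^21:'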

Host Network Namespace — Where Routing Happens

Once traffic reaches the host namespace, normal Linux routing takes over. The CNI plugin typically handles assigning the pod its IP, creating the veth pair, and installing host routes for the pod CIDR:

Fig: Pod CIDR routed via the Weave interface — the host routing table shows pod networks (10.32.0.0/12) routed through the weave interface created by the CNI.

This means that all pod-to-pod traffic is handled within the node's host network namespace first, and only leaves the physical NIC when the destination pod is on another node.
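
A quick way to confirm this on the node is to ask the routing table which interface owns the pod CIDR. The weave interface name and the 10.32.0.0/12 range come from this demo's Weave setup:

# on the worker node: which device does pod-network traffic use?
ip route show | grep 10.32
# expected shape (Weave): 10.32.0.0/12 dev weave proto kernel scope link src <node's weave IP>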

Pod-to-Pod Traffic — Same Node

When two pods run on the same node, traffic remains local to that node's networking stack. The packet never leaves the host or touches the physical NIC.

Observing Same-Node Pod Traffic with tcpdump

To verify that pod-to-pod traffic stays local on the same node, we run tcpdump on the worker node while generating traffic between two pods.
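
A minimal sketch of the capture, assuming two pods named pod-a and pod-b on worker-1 and a destination pod IP of 10.44.0.7 (all placeholders):

# terminal 1, on worker-1: capture ICMP on every interface, printing the interface name
tcpdump -ni any icmp
# terminal 2: generate traffic between the two pods
kubectl exec pod-a -- ping -c 3 10.44.0.7
# the echo request/reply appears only on the two vethwepl* interfaces, never on the physical NIC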

Fig: Same-node pod-to-pod ICMP seen on vethwepl* interfaces (Weave)

The vethwepl* interfaces represent the host-side veth pairs created by the Weave CNI. The packet enters through the source pod's veth interface and is forwarded directly to the destination pod's veth interface within the host network namespace.

Key takeaway: Same-node pod communication uses local Layer-3 forwarding and does not leave the node's physical network interface.

Pod-to-Pod Traffic — Different Nodes

When pods run on different nodes, traffic leaves the source pod via eth0, crosses the veth into the host namespace, travels through the CNI's inter-node path to the remote node, and is then routed to the destination pod's eth0.

Observing Cross-Node Pod Traffic with tcpdump

To verify pod-to-pod traffic across nodes, run tcpdump on both worker nodes while generating traffic from a pod on one node to a pod on another.

Fig: Cross-node pod-to-pod communication captured with tcpdump on both worker nodes
Fig: Cross-node pod traffic on the flannel.1 VXLAN overlay interface
flannel.1 Out IP 10.244.1.5 > 10.244.2.5
flannel.1 In  IP 10.244.2.5 > 10.244.1.5

flannel.1 is the VXLAN overlay interface. Flannel routes traffic through this interface only when the destination pod is on another node. Same-node pod traffic stays on cni0 and never hits flannel.1.

Full packet path:
vethSRC → cni0 → flannel.1 → ens3 (underlay) → ens3 → flannel.1 → cni0 → vethDST

When pods run on different nodes, Flannel uses a VXLAN overlay to carry traffic between hosts. In this case, packets leave the local bridge and enter the flannel.1 interface before being sent across the physical network.

The presence of packets on flannel.1 confirms that traffic is being tunneled through Flannel's overlay network.
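
To reproduce this, you can watch both the overlay and the underlay at the same time. The interface names below match this demo's Flannel setup, and 8472/udp is Flannel's default VXLAN port:

# on either worker node: the inner pod-to-pod packets on the overlay interface
tcpdump -ni flannel.1 icmp
# the same packets, VXLAN-encapsulated, on the physical NIC
tcpdump -ni ens3 udp port 8472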

Why Many CNIs Prefer L3 Routing

Modern CNIs often rely on L3 routing instead of Linux bridges because it offers lower latency, less encapsulation overhead, clearer packet visibility for debugging, and better scalability.

Some CNIs still use overlays or bridges depending on configuration, but the core Linux packet flow remains unchanged.
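
As a hedged illustration of the routing-based approach, a node running an L3 CNI such as Calico typically carries one /32 route per local pod plus a route per remote node's pod CIDR. The interface names and addresses below are made up for the example:

# output of ip route on such a node (abridged)
# local pods: one host route per pod, pointing straight at its host-side veth
10.42.1.5 dev cali12ab34cd scope link
10.42.1.6 dev cali56ef78gh scope link
# remote pods: the other node's pod CIDR, routed via that node's underlay IP
10.42.2.0/24 via 192.168.1.12 dev ens3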

Frequently Asked Questions (FAQ)

How does pod-to-pod networking work in Kubernetes?

Each pod gets its own Linux network namespace with a virtual ethernet (veth) pair connecting it to the host. Traffic leaves the pod through eth0, crosses the veth boundary into the host namespace, and is routed by the CNI plugin either locally (same node) or across the network (different nodes).

What is a veth pair in Kubernetes networking?

A veth pair is a virtual ethernet cable with two ends. One end (eth0) lives inside the pod's network namespace and the other end (e.g., vethwe-XXXX) lives in the host's root namespace. Packets sent from the pod exit through eth0 and arrive at the host-side veth interface.

What is the role of the pause container in a Kubernetes pod?

The pause container is the first container created in every pod. Its sole purpose is to hold the network namespace alive so that all application containers can share the same network stack, IP address, and localhost. The pod retains its IP even if application containers restart.

Does same-node pod traffic leave the physical NIC?

No. When two pods are on the same node, traffic is forwarded locally within the host network namespace using Layer-3 routing. The packet travels from one pod's veth interface to the other pod's veth interface without ever touching the physical network interface.

How does cross-node pod-to-pod traffic work with Flannel?

Flannel uses a VXLAN overlay. Traffic exits the source pod, enters the cni0 bridge, then passes through the flannel.1 VXLAN interface. The packet is encapsulated and sent over the physical network (ens3) to the destination node, where it is decapsulated and forwarded to the target pod.

Why do modern CNIs prefer L3 routing over Linux bridges?

L3 routing offers lower latency, reduced encapsulation overhead, clearer packet visibility for debugging, and better scalability compared to bridge-based or overlay approaches. CNIs like Calico can use pure L3 routing with BGP for high-performance production clusters.

Conclusion

Kubernetes networking is ultimately built on familiar Linux networking concepts such as namespaces, veth pairs, routing tables, and packet filtering. By following the real packet path — from a pod's eth0 through the host network namespace and across nodes when required — you gain a clear mental model that applies to any CNI.

Understanding these fundamentals makes troubleshooting easier, since networking issues can be analyzed by observing how packets actually move rather than relying only on Kubernetes abstractions.

Written By

Ujwal Budha

Hello, I am Ujwal Budha. I currently work as a Jr. Cloud Engineer at Adex International Pvt. Ltd., where I build scalable cloud infrastructure and automate deployment workflows. As an AWS Certified Solutions Architect Associate, I enjoy sharing knowledge through technical blogs and helping others on their cloud journey.