If you run Kubernetes, you may well get used to logfiles full of impenetrable
nonsense being churned out continuously… To say it can be quite
“verbose” is an understatement.

You may also be tempted to silence or ignore some of these messages in whatever
tool you use to manage your logging - after all, everything seems to be working
fine.

That’s where I’ve been with the “TLS handshake” errors that have been spamming
my log server for the last three months. But as my mouse was hovering over
the ‘filter’ button, my better side took over and I figured I should do
something about it.

As luck would have it, it was relatively easy to fix some of them. But it
did take a few Google searches to narrow it down - so I present it here for your
benefit.

The problem may not be obvious - but it’s there. The MTU of the virtual
interface is the same as the MTU of the physical interface. The problem is
this: the virtual interfaces use VXLAN (Virtual eXtensible LAN) to manage the
virtual overlay networking between pods - and the VXLAN encapsulation adds a
few bytes to every packet for things like virtual network IDs. So the actual
maximum transmission unit available to the pods is less than 1500.

Your MTU on the containers should be (real MTU - 50), to allow for the VXLAN
overhead. In my case, that means they ought to be 1450, not 1500.
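The arithmetic behind that 50 bytes is easy to sanity-check: VXLAN wraps each
original frame in a new outer Ethernet, IP and UDP header, plus its own VXLAN
header. A quick sketch (the ping commands in the comments are just one way to
observe the problem on a live cluster; `<pod-ip>` is a placeholder):

```shell
# VXLAN encapsulation overhead, per packet (assuming IPv4 outer headers):
#   outer Ethernet header: 14 bytes
#   outer IPv4 header:     20 bytes
#   outer UDP header:       8 bytes
#   VXLAN header:           8 bytes
overhead=$((14 + 20 + 8 + 8))
echo "VXLAN overhead: ${overhead} bytes"     # 50

# A 1500-byte physical MTU therefore leaves this much for pod traffic:
echo "usable pod MTU: $((1500 - overhead))"  # 1450

# One way to see the problem directly: with the Don't Fragment bit set,
# payloads that fit in 1450 get through, while full-size ones do not:
#   ping -M do -s 1422 <pod-ip>   # 1422 + 28 bytes of ICMP/IP header = 1450
#   ping -M do -s 1472 <pod-ip>   # 1472 + 28 = 1500 - typically blackholed
```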

The exact solution is going to depend on the virtual networking provider that
you use on your Kubernetes installation (Calico, Flannel, whatever). In my
case, Rancher deployments use the Canal Container Networking Interface (CNI)
provider by default.

To change the MTU, I need to edit the canal-config configmap in the kube-system
namespace:

kubectl -n kube-system edit configmap canal-config

And I need to add an entry "mtu": 1450, under the "type": "calico" line:
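For reference, the relevant part of the configmap ends up looking something
like this - a sketch only, since the exact JSON (plugin list, versions, other
keys) varies between Canal releases:

```yaml
cni_network_config: |-
  {
    "name": "k8s-pod-network",
    "cniVersion": "0.3.1",
    "plugins": [
      {
        "type": "calico",
        "mtu": 1450,
        ...
      }
    ]
  }
```

One caveat: the new MTU generally only applies to pod interfaces created after
the canal pods have picked up the change, so long-running pods may need to be
recreated before the errors stop.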

And the result? Not quite “no more” TLS error spam in my logs - a significant
reduction, but far from eliminated. So it looks like that was the cause of
some, but not all, of my Kubernetes networking woes:
[Chart: rate of TLS errors - the cursor shows when the MTU fix was applied]

The search for other problems goes on; if I find anything interesting,
I’ll write it up :-).