HTTP is a flexible thing. Unlike its networking neighbors – TCP and IP – it is almost limitless in the information it can carry from point A to point B. While IP and TCP are required to adhere to very strict, inflexible standards that define – to the bit – what values can be used, HTTP takes a laissez-faire approach to the data it carries. Text. Binary. JSON. XML. Encrypted. Plain-text.

Like the honey badger, HTTP don't care. It will carry it all – and more.

One of the ways in which HTTP is constantly flexing its, well, flexibility is in its rarely-seen-by-users headers. This is the metadata carried by every HTTP request and response. It shares everything from content type to content length to authorization tokens to breadcrumbs that tattle on who you are and where you've been – whether you want it to or not.
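To make that concrete, here's a minimal sketch of parsing a raw header block into that metadata. The header names and values are illustrative, not taken from any particular exchange:

```python
def parse_headers(raw):
    """Parse a raw, CRLF-delimited HTTP header block into a dict.
    Header names are case-insensitive, so normalize to lowercase."""
    headers = {}
    for line in raw.split("\r\n"):
        if ":" not in line:
            continue  # skip blank lines and the request/status line
        name, _, value = line.partition(":")
        headers[name.strip().lower()] = value.strip()
    return headers

raw = (
    "Content-Type: application/json\r\n"      # what the body is
    "Content-Length: 42\r\n"                  # how big it is
    "Authorization: Bearer example-token\r\n" # who you are
    "Referer: https://example.com/\r\n"       # where you've been
)
meta = parse_headers(raw)
```

Nothing in HTTP constrains what goes into those name/value pairs beyond basic grammar – which is exactly the flexibility (and, later, the risk) this post is about.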

This is important to note because, as we've seen in the container space, HTTP headers are growing into not only a mechanism for transporting data between clients and services, but also a means of sharing the metadata that lets these fast-moving environments scale so very efficiently.

Of growing note is the notion of a service mesh and, with it, the addition of custom HTTP headers that carry operational information. This blog from Buoyant – the company behind one of the two leading open-source service mesh implementations – illustrates the reliance on HTTP headers for sharing the telemetry needed to correlate traces across the highly complex set of service-to-service transactions that make up a single HTTP request and response pair.

For those not interested in reading the entire aforementioned blog, here’s the most relevant bit – highlighting is mine:

While we at Buoyant like to describe all of the additional tracing data that linkerd provides as “magic telemetry sprinkles for microservices”, the reality is that we need a small amount of request context to wire the traces together. That request context is established when linkerd receives a request, and, for HTTP requests, it is passed via HTTP headers when linkerd proxies the request to your application. In order for your application to preserve request context, it needs to include, without modification, all of the inbound l5d-ctx-* HTTP headers on any outbound requests that it makes.

It should be noted that the referenced custom HTTP headers are only some of those used for sharing telemetry in these highly distributed systems. As noted in the blog, the l5d-sample header can be used to adjust tracing sample rates. So HTTP headers aren't only being used to share information; they're being used to provide operational control over the system.
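In code, the propagation rule from the quote above amounts to copying headers verbatim onto outbound requests. A minimal sketch – the helper name is mine; only the l5d-ctx-* and l5d-sample header names come from the blog:

```python
L5D_CTX_PREFIX = "l5d-ctx-"

def outbound_headers(inbound, sample_rate=None):
    """Copy every inbound l5d-ctx-* header, unmodified, onto an
    outbound request so linkerd can wire the traces back together."""
    out = {name: value for name, value in inbound.items()
           if name.lower().startswith(L5D_CTX_PREFIX)}
    if sample_rate is not None:
        # l5d-sample adjusts the tracing sample rate -- a header acting
        # as operational control, not just operational data.
        out["l5d-sample"] = str(sample_rate)
    return out
```

Note the asymmetry: the context headers are passed through untouched, while l5d-sample is something a client can set to change the system's behavior.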

Let that sink in for a moment. HTTP headers are being used to control the behavior of operational systems. Remember this; it will be important in a couple of paragraphs.

Rather than separate the control plane from the data plane, in this instance both planes are transported simultaneously, and it falls to the endpoints to separate form from function, as it were. Because this particular solution relies on a service-mesh concept – in which every inbound and outbound request from a service passes through a proxy – this is easily accomplished. The proxy can filter out the operational HTTP headers and act on them before forwarding the request (or response) on to its intended recipient. It can also add operational instructions of its own, as well as insert telemetry to help match up the traces later.
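That separation step can be sketched in a few lines. The prefix list is illustrative; a real proxy would drive this from its own configuration:

```python
OPERATIONAL_PREFIXES = ("l5d-",)  # illustrative, not a real proxy's config

def split_planes(headers):
    """Separate control-plane (operational) headers from data-plane
    (application) headers, as a service-mesh proxy might do before
    acting on the former and forwarding only what's intended."""
    control, data = {}, {}
    for name, value in headers.items():
        target = control if name.lower().startswith(OPERATIONAL_PREFIXES) else data
        target[name] = value
    return control, data
```

Everything in `control` is interpreted by the proxy; everything in `data` flows through to the application.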

Application networking, too, is becoming a common thing in container environments. While it's always been a thing (at least for those of us in the world of programmable proxies), it's now rising with greater frequency as the need for greater flexibility grows. Ingress controllers are, at their core, programmable proxies that enable routing based not only on IP addresses or FQDNs, but on application-specific data most commonly carried by HTTP headers. Versioning, steering, scaling. All these functions of an ingress controller are made possible by HTTP and its don't-care attitude toward headers.
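The routing half of that is simple to sketch. The header names, rules, and backend names below are all hypothetical, standing in for whatever an ingress controller's configuration would declare:

```python
# Each rule: if the named header equals the value, steer to that backend.
RULES = [
    ("x-api-version", "2", "app-v2"),    # versioning
    ("x-canary", "true", "app-canary"),  # steering a canary cohort
]

def choose_backend(headers, default="app-v1"):
    """Pick a backend the way an ingress controller might: on
    application data carried in HTTP headers, not just IPs or FQDNs."""
    lowered = {name.lower(): value for name, value in headers.items()}
    for header, value, backend in RULES:
        if lowered.get(header) == value:
            return backend
    return default
```

The input to the routing decision is attacker-suppliable text, which is precisely why the next point matters.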

Sadly, HTTP headers are also their own attack vector. It is incumbent upon us, then, to carefully consider the ramifications of relying on HTTP headers not only to share operational data but to control operational behavior. HTTP headers are a wildcard (seriously, read the BNF) and universally text-based. That makes them easy not only to modify but to manipulate into carrying malicious commands that are consumed by a growing number of intermediate and endpoint devices and systems.
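One basic mitigation is refusing to act on header values that contain control characters at all. A conservative sketch, loosely based on the field-value grammar in RFC 7230 (the obs-text byte range is deliberately excluded here):

```python
import re

# Printable ASCII plus space and horizontal tab; notably, no CR or LF,
# which header-injection / response-splitting attacks depend on.
_SAFE_FIELD_VALUE = re.compile(r"[\t\x20-\x7e]*")

def is_safe_header_value(value):
    """Return True only if the header value is free of control characters."""
    return _SAFE_FIELD_VALUE.fullmatch(value) is not None
```

This doesn't make header-driven control safe on its own – it just refuses the most obvious smuggling tricks before any system acts on the value.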

If that does not terrify you, you haven’t been paying attention.

Luckily, the use of HTTP headers as both control plane and data plane is primarily limited to containerized systems. This means they are generally tucked away behind several public-facing points of control that afford organizations the ability to mitigate the threat of their overly generous nature. An architectural approach that secures the inbound (north-south) path can provide the necessary protection against exploitation. No, we haven't seen anyone try to exploit them. Yet. But we've already seen too many breaches thanks to HTTP headers; it's better to be safe than sorry.

HTTP is rising to become not only the primary protocol for apps, services, and devices, but also the transport for telemetry, tracking, and operational commands. It's an exciting time, but we need to temper that "we can do anything" with "but let's do it securely" if we're to avoid operational disasters.