Who is F5?

Hardware’s Innovation Theorem

Lori MacVittie

Published July 25, 2016

I recently finished reading Fermat’s Last Theorem (Simon Singh) which, you might be surprised to learn, was full of drama, intrigue, murder, and mistresses. Really. The history of math is quite full of people who, despite their prodigious understanding of concepts we mere mortals don’t truly grok, are just people after all.

But that’s not really the point today. The point is that for over 350 years, mathematicians have been driven to try to prove (or disprove) what was known as Fermat’s Last Theorem. It was really more of a conjecture: the brilliant (amateur) mathematician noted that while the Pythagorean Theorem was nearly axiomatic, it only worked for squares. That is, you can’t find whole-number solutions to aⁿ + bⁿ = cⁿ when “n” is any number greater than 2. What really drove mathematicians wild, apparently, was that Fermat noted he had a truly delightful proof of this but the margin of the book in which he was commenting was too small for it.
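
To make the claim concrete, here’s a quick brute-force sketch in Python (my own illustration, not anything from Singh’s book): within a small search range, solutions for n = 2 turn up readily, while none appear for n = 3.

```python
# Brute-force search for whole-number solutions to a**n + b**n == c**n
# within a small range. This is a sketch, of course, not a proof.
def count_solutions(n, limit=50):
    """Count pairs (a, b) with a <= b <= limit where a**n + b**n is a
    perfect n-th power of some c <= limit."""
    nth_powers = {c**n for c in range(1, limit + 1)}
    count = 0
    for a in range(1, limit + 1):
        for b in range(a, limit + 1):
            if a**n + b**n in nth_powers:
                count += 1
    return count

# For n = 2, Pythagorean triples like (3, 4, 5) show up; for n = 3, nothing.
print("n=2 solutions in range:", count_solutions(2))
print("n=3 solutions in range:", count_solutions(3))
```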

And they couldn’t find the proof anywhere.

So mathematicians set about trying to prove or disprove it. Long story short, someone finally did. But in order to do that, he had to combine two completely different disciplines of math, disciplines that did not exist when Fermat made his claim. Some of those disciplines can be traced to attempts to solve Fermat’s Last Theorem as well as other challenging mathematical problems. One of the disciplines used to solve Fermat’s Last Theorem was the study of elliptic curves. If that sounds familiar, it’s because elliptic curves are the foundation of ECC (Elliptic Curve Cryptography), which is increasingly favored today as a replacement for older, more vulnerable encryption schemes.

Basically, one of the benefits of solving a problem in one mathematical discipline is that it often spurs innovation in other, related but distinctly separate, mathematical disciplines.

It turns out that when you build your own hardware to ensure the capacity and speed needed for those services deployed on the north-south data center runway, you also have to build out the software that goes along with it. See, hardware by itself is just resources, for the most part. (That, too, is changing, but that’s a blog for another day.) The reality is that hardware provides resources, and software is the magic that turns those resources into consumables that are ultimately used by the services that secure and deliver apps every second of every day across the Internet. So when someone heralds the arrival of new hardware in the networking world, they are also heralding the arrival of new software. Because without the software, custom hardware doesn’t do much.

Now here’s where the innovation crosses the hardware-software divide. That software can be lifted and shifted from its original hardware to commodity hardware (COTS): your general purpose servers, so named because they aren’t really optimized for anything, since they have to support everything. But the software that was previously running on purpose-built hardware is optimized, and the tricks and tips the engineers have learned and tweaked over time get transferred to the software version, too.

And many of them are actually more applicable than you’d think. See, there are chips from folks like Intel that are used in custom-built hardware and are also present in commodity systems. For the most part, though, the performance- and capacity-enhancing characteristics of those chips go unused by most software because, well, it wasn’t written with that hardware in mind. But some systems were, and that means that when that software is lifted and shifted to commodity hardware, it retains a lot of its performance and capacity advantages over other software built to do the same thing without using the special hardware.
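
As a rough sketch of what “written with that hardware in mind” can look like in practice: hardware-aware software often probes for a CPU feature at startup and binds the fastest available code path once. Everything here, from the “aes” flag check to the two stub implementations, is an illustrative stand-in, not any particular product’s code.

```python
# A pattern common in hardware-aware software: detect a CPU feature at
# startup, then dispatch to the fastest code path available.
def detect_feature(flag="aes"):
    """Best-effort CPU flag check via Linux /proc/cpuinfo; returns False
    on platforms where that file doesn't exist."""
    try:
        with open("/proc/cpuinfo") as f:
            return any(flag in line.split() for line in f
                       if line.startswith("flags"))
    except OSError:
        return False

def encrypt_accelerated(data):
    raise NotImplementedError("would use the hardware-assisted path")

def encrypt_portable(data):
    raise NotImplementedError("would fall back to plain software")

# Bind the dispatch once, at startup, so the hot path pays no per-call cost.
encrypt = encrypt_accelerated if detect_feature() else encrypt_portable
```

The point of the pattern is the one-time binding: the decision about the hardware is made once, and the rest of the software simply calls `encrypt` without caring which platform it landed on.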

Trying to solve the performance and capacity challenges associated with software (way back in the 1990s) led to the extensive use of hardware in the network, including new internal architectures related to how data was passed around the system. Those tricks and techniques are now being translated back into software to improve performance and capacity. When folks are tasked with designing high-speed, high-capacity software because no existing platform supports the hardware they’re developing for, they come up with new ways to do things. They challenge old assumptions and discover better ways of manipulating, inspecting, and modifying data as it passes through the system. They figure out new algorithms and better data structures that improve memory management and protocol parsing.
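
One of those “better data structures” ideas can be sketched in a few lines of Python: parsing fields out of a packet buffer with memoryview slices, which reference the original bytes rather than copying them, a zero-copy trick in the spirit of what high-speed systems do. The 4-byte length-prefix framing here is a made-up example protocol, not any real wire format.

```python
import struct

def parse_frames(buf):
    """Yield zero-copy views of length-prefixed frames in buf.
    Each frame is a 4-byte big-endian length followed by that many bytes."""
    view = memoryview(buf)
    offset = 0
    while offset + 4 <= len(view):
        (length,) = struct.unpack_from("!I", view, offset)
        offset += 4
        yield view[offset:offset + length]  # a view, not a copy
        offset += length

# Two frames packed into one buffer: "hello" (5 bytes) and "app" (3 bytes).
packet = struct.pack("!I", 5) + b"hello" + struct.pack("!I", 3) + b"app"
frames = [bytes(f) for f in parse_frames(packet)]
print(frames)  # [b'hello', b'app']
```

The payoff is that inspecting or routing a frame never duplicates the underlying buffer; only a caller who truly needs an independent copy pays for one with `bytes(f)`.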

While most people don’t associate hardware with innovation, the reality is that, just like in math, solving a problem in one discipline leads to innovation in other disciplines. That’s something we see all the time as we lift and shift the software that is BIG-IP from our custom hardware platforms to the commodity hardware used in private, on-premise, and public cloud environments. The lift and shift requires work; the software has to be adapted to fit into virtualized, containerized, and cloudified form factors. But the innovations resulting from the new hardware remain, providing for faster, more scalable, and more efficient operation on commoditized platforms, too.

Developing new hardware, and adapting software to its new capabilities, ultimately means innovation in the software, whether that software is running on custom or commodity hardware. And that’s why it’s exciting when new hardware is introduced. Because it’s the harbinger of innovation.

That’s Hardware’s Innovation Theorem. Just as solving Fermat’s Last Theorem is going to lead to more innovation across mathematics and echo those advances into the realm of cryptography and security, solving the challenge of adapting software to new custom hardware will lead to more innovation in that software and echo across the realm of on-premise and cloud data centers for years to come.