When Gordon Moore calculated in the mid-1960s that the number of components per chip would double every year or so – with a corresponding decrease in the cost per component – it wasn’t clear whether this was based upon robust scientific data, or just a theory that the whole semiconductor industry got behind and treated as a viable road map.

More than three decades later, it no longer mattered: the theory had proved itself, as silicon chips continued to shrink and get cheaper every couple of years.
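The doubling Moore described compounds quickly. A minimal sketch of the arithmetic – the baseline year, starting count and two-year doubling period here are illustrative assumptions, not figures from this article:

```python
def transistors(year, base_year=1971, base_count=2300, doubling_period=2):
    """Projected component count if doubling holds every `doubling_period` years.

    Assumes an illustrative baseline of 2,300 components in 1971.
    """
    return base_count * 2 ** ((year - base_year) // doubling_period)

# Ten doublings over twenty years multiplies the count by 1,024:
for y in (1971, 1981, 1991, 2001, 2011):
    print(y, transistors(y))
```

Run with different `doubling_period` values to see how sensitive the projection is: stretching the period from two years to three cuts the forty-year growth factor dramatically.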

Dead end ahead

Perhaps it wasn’t driven solely by an industry road map; software developers and users also demanded more as the years went by. Either way, user value has grown to the extent that we now have devices you can hold in your hand that are more powerful than the massive mainframes of old.

There are physical limitations to how much further this can go, however. And, despite considerable research into finding an alternative to silicon, none has yet materialised.

As Paolo Gargini, chairman of the industry body overseeing the road map for semiconductors, comments: "Top-of-the-line microprocessors currently have circuit features that are around 14 nanometres across… that’s already smaller than most viruses." He goes on to question whether anything smaller would qualify as a viable device, given the quantum uncertainties surrounding electrons of that size, which would make the transistors unreliable at best.

And so, finally, it looks like the chips are down for Moore’s Law.

Built on obsolescence

On one level, this is not surprising. Technology has driven change since the Stone Age, and it's an industry built on obsolescence: think of metals coming on the scene, and poor old stone being slowly edged out.

One could argue that we are always somewhere on an S curve. This is particularly true of new technologies. There is no neat, linear pattern to growth: successful new technologies hit a point where their growth accelerates exponentially, then, inevitably, that growth slows as markets mature. Then a new technology adopts the same pattern. And repeat.
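The S-curve described above is conventionally modelled with the logistic function: growth is near zero at first, fastest at the midpoint, then flattens as the market matures. A minimal sketch, with illustrative parameters that are assumptions rather than anything from the article:

```python
import math

def s_curve(t, L=1.0, k=1.0, t0=0.0):
    """Logistic function: slow start, rapid middle, saturating top.

    L is the ceiling (mature market size), k the steepness,
    t0 the midpoint where growth is fastest.
    """
    return L / (1 + math.exp(-k * (t - t0)))

# Adoption creeps, accelerates through the midpoint at t0=5, then plateaus:
samples = [round(s_curve(t, t0=5), 3) for t in range(0, 11)]
```

Plotting `samples` gives the familiar shape: each successful technology traces its own copy of this curve, and a successor starts a new one.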

So rather than mourn the death of Moore's Law, I'm more interested in looking at what will emerge next. Robotics and artificial intelligence, 3D printing, low-power sensors associated with the internet of things, not to mention biotechnologies – all are at varying distances from mainstream adoption, but, without doubt, there is a Gordon Moore of the 21st century in a garage somewhere, on the cusp of a breakthrough.

Why should we care? I guess because there are recent parallels in our industry and a mindset we might benefit from adopting, if we haven’t already.

Born digital

When digital transformation started having a real impact on commerce and culture in the early 2000s, most businesses that weren’t born digital saw it merely as a marketing and communications ‘channel’: a display ad here, a post there. They learned to use digital to be more responsive – to react to customers live and interactively – then went back to the day job.

In the meantime, the businesses born digital have transformed categories end-to-end. Transport with Uber, holidays with Airbnb, home-delivery meals with aggregators like Deliveroo, Facebook for connecting with people, Google for information, Amazon for everything else. We know this.

I’ve written here before that 2015 felt like the year when the current crop of technological innovation seemed to slow and consolidate (top of the S-curve, anyone?). Rather than relax and put our feet up, I suggest we spend at least a few moments weighing what the emerging technologies might represent: hype, or a genuine threat or opportunity for our category. Let’s assume that at least some of them will reach mainstream maturity sometime from 2020 onwards.

As Shekhar Borkar, head of Intel’s advanced microprocessor research, puts it: "The ideas are out there. Our job is to engineer them."