Unfortunately, The Economist does not analyze the fundamental reasons for the security problems faced by the IT industry and all users of information technology. And if you don't understand the problem, how can you possibly fix it?

One way to see the problem is to reason about the correctness of algorithms. Software engineers and "programmers" (the amateur version of a software engineer) churn out lots of algorithms while developing both standard and bespoke software systems. These algorithms "appear" to work while being subjected to some test input data. But testing is usually by no means comprehensive, simply because the space of possible input data is astronomically large.
Just think of four input variables, each of which is a 32-bit integer. To exhaustively test the correctness of the algorithm, the developer or tester would have to generate 2^(4*32) = 2^128 input data records. That is about 340,000,000,000,000,000,000,000,000,000,000,000,000 records! Obviously, this is not feasible. So what is done in reality?
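To make the scale concrete, here is a minimal back-of-the-envelope calculation in C; the rate of one billion tests per second is an assumption for illustration:

    #include <stdio.h>

    int main(void) {
        /* 2^128 = 2^64 * 2^64 input combinations for four 32-bit variables */
        double two_pow_64 = 18446744073709551616.0;    /* 2^64 */
        double tests      = two_pow_64 * two_pow_64;   /* about 3.4e38 */
        double per_second = 1e9;    /* assumed: one billion tests per second */
        double years = tests / per_second / (365.25 * 24.0 * 3600.0);
        printf("combinations: %.3e\n", tests);
        printf("years needed: %.3e\n", years);         /* about 1.1e22 years */
        return 0;
    }

That works out to roughly 10^22 years, many orders of magnitude longer than the age of the universe.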

Managers pressure developers to "finally get it done", and the result is that something like five test records are created and checked before the algorithm (software) is put into production.

The problem is that attackers/hackers/cyber warriors may exploit the faults in these algorithms for their nefarious purposes. In many cases this allows the attacker to completely "take over" systems, because real-world systems are very often brittle. More on this below.

The only way to address this problem perfectly is by means of mathematical proof, which is of course very, very expensive to perform on anything but trivially small/limited algorithms.
Computer science has developed some tool automation (proof assistants, model checkers) to help software engineers with this task.
Still, this approach is almost never taken in real-world systems development.

The "L4" operating system of Dresden University and UNSW is an example of a mathematically proven correct
operating system kernel. INRIA also does some interesting work on proven correct compilers.
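To illustrate the underlying idea in miniature: a correctness property can be written down and then checked mechanically. The hedged C sketch below merely *samples* the property with runtime assertions; tools such as Isabelle/HOL (used for seL4) and Coq (used for CompCert) prove such properties for every possible input. The average() function here is a made-up example:

    #include <assert.h>
    #include <limits.h>
    #include <stdio.h>

    /* Property: returns an integer average of a and b without ever overflowing. */
    static int average(int a, int b) {
        /* the naive (a + b) / 2 overflows for large a and b;
           this formulation cannot                             */
        return a / 2 + b / 2 + (a % 2 + b % 2) / 2;
    }

    int main(void) {
        /* runtime assertions only sample the 2^64 possible input pairs... */
        assert(average(4, 6) == 5);
        assert(average(INT_MAX, INT_MAX) == INT_MAX);   /* no overflow */
        puts("sampled checks passed");
        /* ...whereas a machine-checked proof covers every single pair */
        return 0;
    }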

A less-than-perfect approach is to hire excellent software engineers, who also happen not to be the cheapest ones on the market.
Have them review algorithm code and hope they will "somehow ferret out" the bugs and thereby the security issues.
The success of this depends very much on the quality of said software engineers and on their real authority
to make sometimes drastic changes to the software code under review. That implies that management must have
a strong, supportive relationship with their senior software engineers. Too often, managers are simply cheapskates
who don't want to work like that.
Also, this approach does not ensure 100% security; it merely improves the odds in favour of security.

There exist some more excellent technology approaches to shore up security, namely firewalls, sandboxing and
memory-safe programming languages. These techniques revolve around the principle of "least privilege to
perform the job". So, a firewall will allow external connections only to those corporate servers which are actually
intended to provide services to customers, suppliers and the like. Employee PCs will simply be unreachable from outside.
This comes with a big caveat, though: never expect your entire corporate network to be friendly. Odds are
that one or a couple of PCs will always be infected by malware. The solution to that issue is to build
"security compartments", similar to the compartments of a ship. This saves the corporation from a "total sink".
Sandboxing locks algorithms ("programs") into a confined space and thereby limits the damage a hacker can do via
a compromised program. For example, buggy (= dangerously insecure) PDF viewer programs (often used by Chinese intelligence
to get an entry point into targeted computers) have absolutely no business reading Excel files.
Sandboxes such as "AppArmor" or "Sandboxie" can provide this confinement.
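As a hedged minimal sketch of what kernel-enforced sandboxing means, here is seccomp "strict" mode on Linux, the crudest form of confinement (AppArmor profiles express much richer policies, such as "may read PDF files but not Excel files"):

    #include <stdio.h>
    #include <unistd.h>
    #include <sys/prctl.h>
    #include <sys/syscall.h>
    #include <linux/seccomp.h>

    int main(void) {
        printf("before sandbox: any syscall allowed\n");
        fflush(stdout);    /* flush while we still can do anything */

        /* from here on, the kernel permits only read(), write(), _exit()
           and sigreturn(); any other syscall kills the process           */
        if (prctl(PR_SET_SECCOMP, SECCOMP_MODE_STRICT) != 0) {
            perror("prctl");
            return 1;
        }

        const char msg[] = "inside sandbox: write() still works\n";
        write(STDOUT_FILENO, msg, sizeof msg - 1);

        /* fopen()/socket()/execve() here would mean instant SIGKILL;
           exit via the plain exit syscall (exit_group is not allowed) */
        syscall(SYS_exit, 0);
    }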
Similarly, memory-safe programming languages can limit security issues to certain parts of a program.
That is, the memory structures are protected by the programming language and/or its runtime system itself;
an out-of-bounds access raises an error instead of silently corrupting memory. For example, a bug in a web
trading server might expose a parts database to the attacker, but the orders and customer databases will remain secure.
Unfortunately, some programming languages, such as the popular "C" language, typically "open up the kingdom"
to the attacker once a single bug is found and exploited. "C", developed at AT&T's Bell Labs, is
actually a very dangerous regression from the ALGOL days of the 1960s, when language implementations had bounds checking to protect against exactly this.
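A hedged minimal sketch of that failure mode (function name and buffer size are invented for illustration):

    #include <stdio.h>
    #include <string.h>

    /* The classic C failure: strcpy() performs no bounds checking. */
    static void handle_request(const char *input) {
        char name[16];
        strcpy(name, input);    /* BUG: anything longer than 15 chars + NUL
                                   silently overwrites adjacent stack memory,
                                   including the return address -- the classic
                                   entry point for a complete takeover        */
        printf("hello, %s\n", name);
    }

    int main(void) {
        /* in a memory-safe language this raises an exception;
           in C the behaviour is undefined and, too often, exploitable */
        handle_request("this request is far longer than sixteen bytes");
        return 0;
    }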

These days, Java, C# and languages such as Sappeur provide memory-safety, which is very valuable in defending
a system against cyber attack.

Why anything critical, like a power grid, is attached to the internet eludes me. Private networks are an obvious solution. They are not absolutely impenetrable, but it takes a high level of negligence for a breach to happen. It would also be wise to rotate security personnel with some frequency; prolonged success makes guardians lazy and careless.

If only it were as easy as you make it out to be. Real-world businesses need to be connected with their supplier networks and their customers for plenty of business reasons.

Just think of a just-in-time auto manufacturing operation, where parts are ordered almost constantly based on the inflow of car orders.

Or think of Boeing and their vast 787 supplier network. They need to exchange lots of structured and unstructured data to keep this operation working nicely.

Energy companies can greatly improve their economics by offering real-time prices to their customers. The only way to do this is to connect grid management in some way with millions of customers by means of a computer network.

The core problem is that many managers these days perceive software engineers and systems administrators as a kind of "data janitor" and consequently don't heed their advice.

I was referring only to genuinely key installations, like the power grid, or water supply systems, or the telephone network. The country will not face a disaster if the typical company has a temporary problem with its IT system; it happens every time an upgrade is installed. If worse came to worst, calls to suppliers to enter the order, or a typical order, into their own systems could work. It would slow things down, but better that than a complete stoppage.

Just in time yields inventory savings, but at the cost of robustness. There has to be a balance that recognizes the unfortunate fact that things go wrong. Of course, if your suppliers are overseas, you are carrying the cost of a warehouse in every ship.

The problem with too many of the suits who mind the store is that they neither know, nor seem to want to learn, how their businesses run on a practical level. Perhaps it's time to burn down all the Business Schools.

Whenever you look at the kind of software that gets produced, you need to bear one thing in mind. For the guys who are writing it, there are just two priorities -- and those determine how their performance is rated, how big their raises/bonuses are, and whether they get promoted. And, therefore, that is where all their focus is.
Those two priorities are:
-- functionality
-- time to market
Those are, after all, what drive sales.
Note that there is nothing in here about the other obvious features that you might think management would care about:
-- security
-- reliability
-- performance
From which we conclude that management really doesn't care about those. Or only cares about them when they fail so noticeably that sales are impacted. And until that management outlook changes, the problems will not be avoided/resolved.
We can write code which mostly avoids all those issues. The methods for doing so are hardly unknown. But not if the people who pay the bills don't decide that they really do care about them -- and reward those programmers (and managers of programmers) who do it right. Including accepting that writing good code, and testing it to make sure it really is good, takes time. And money. Instead, we will have management fads regarding programming methodologies. Fads which get abandoned, or at least gutted, the instant they are seen to impact the two existing priorities.
In short, there isn't really a problem with how to write better, more secure, more efficient, more reliable systems. There is a management problem, pure and simple.

I don't think it is possible to unwind just-in-time manufacturing. In other words, it is cheaper to hire some great software engineers and security experts to make the systems actually secure.
What is not going to work is to run IT "lean" in the sense that you can do it with amateurs. Doing so is an invitation to cyber crooks.
Of course many companies attempt exactly that: do IT on a shoestring and wire it up to the internet, then whine when somebody messes things up. It's a Darwinian learning process...

Sometimes I also blame everything on management and their myopic views.
But before you can fix a problem, you need to discuss what can be done in theory: the different social and technological approaches, their costs and their efficiencies.
Also, many software development grunts themselves still do not understand that "testing" will almost never be sufficient to *prove* security. We have seen lots of systems that ran in production with great success for more than a decade and still turned out to have exploitable bugs. Which means that these bugs were only ever triggered by well-crafted attack data/code, not by "ordinary" production data.
Having said that, there is one useful security testing technique called "fuzzing". It helps to find exploitable bugs, but of course it does NOT prove the absence of exploitable bugs. In other words, it won't find all bugs.
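For the curious, a fuzz target for LLVM's libFuzzer looks roughly like this; parse_record() is a hypothetical stand-in for whatever code you want to bombard with malformed inputs:

    #include <stddef.h>
    #include <stdint.h>

    /* hypothetical function under test; in real life you would link the
       harness against the actual parser you want to hammer with inputs */
    static int parse_record(const uint8_t *data, size_t size) {
        if (size >= 4 && data[0] == 'R' && data[1] == 'E' &&
            data[2] == 'C' && data[3] == ':')
            return 1;    /* toy "parser": insists on a 4-byte header */
        return 0;
    }

    /* libFuzzer calls this millions of times with mutated inputs and
       watches for crashes, hangs and sanitizer violations            */
    int LLVMFuzzerTestOneInput(const uint8_t *data, size_t size) {
        parse_record(data, size);
        return 0;    /* non-zero return values are reserved by libFuzzer */
    }

Compile with clang -fsanitize=fuzzer,address and let it run; every crash it finds is a candidate for exactly the kind of bug that a decade of "ordinary" production data never triggers.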
My goal was to educate readers about the problem and about possible solutions, but certainly I don't have a "fix" for cynics, ignoramuses and bean counters (the "you can only manage what you can measure" idiots).
I do still believe there are some people with moderate amounts of "good faith" around in many corporations. Those are the folks I was addressing.