Wednesday, 20 March 2013

Research lines that lead nowhere (II): unnecessary experiments

Hi rats,

When I was a young muroid, my collaborators and I proved
that, for the state estimation of pairs of coherent states of the form
|α⟩|α*⟩, entangling measurements are more efficient than LOCC
measurements, contrary to the case |α⟩|α⟩. The result was curious,
and I was convinced that we would manage to publish it in a nice journal (at
that time, PRA was a nice journal).

Then, something happened. My supervisor told me that he had
contacted an experimental group which was willing to prepare pairs of coherent
states and perform the optimal LOCC and entangling measurements.

I couldn’t believe my ears. An experimentalist, the superior
species, wanted to test our result!! I was joyous and jubilant, because I was not
fully convinced by our rigorous mathematical proof. My supervisor and I were so
happy indeed, that we held hands and sang and danced together. Then Christopher Robin and Doraemon came
with an apple pie, and we all had a nice meal under the shade of the Magic Oak Tree, in Sugarcandyland (Bromley South).

In the real world, however, I was confused and angry. We
already knew what was going to
happen. What did those experimentalists expect? That quantum mechanics was
going to break in an experiment involving two coherent states, a beam splitter
and homodyne measurements? That once the setup was complete, the skies would
open and a deep voice would say: “thou shalt not finish that experiment!”?

Well, the experiment was completed, and, behold, they
measured what the theory predicted. Once more, quantum mechanics (and the world!)
was saved.

This post is about futile experiments like mine which are perhaps
a bit too subtle for an Ig-Nobel prize. You know what experiments I’m
talking about: the kind which make you scream “for the glory of Cavendish!!”
when you see them featured on the cover of renowned scientific journals. The
kind which the theorists involved describe as: “…and then we performed the
experiment. I’m sorry”.

Well, I’ve had enough. I won’t stay silent while promising QI
theorists and experimentalists waste their talents in meaningless
collaborations. Did you know that people in other fields (e.g.: organic chemistry) conduct experiments to actually advance
the theory? It’s time to kick some asses.

Before starting my monthly rant, though, let me clarify
what this post is not about.

In this post I’m going to discuss six experiments. I won’t
criticize the theoretical results underlying these experiments (well, just one),
or the technical ability and innovation of the experimentalists who carried
them out. What I will question here is rather the need to perform such experiments. So if I happened to single
out one of your papers and at any time you feel that I’m undermining your work,
please come back to this paragraph and re-read it as many times as necessary.

And then be honest: do we really need more experiments
like…?

1) Experimental demonstration of 2, 3, 5, 6, 8-photon
entanglement.

Contrary to popular claims, we don’t have a use for generic
entanglement, so most of these results have no practical application (what is the
usefulness of an 8-party GHZ state!?). One could argue that entangling a vast
number of particles may be theoretically impossible due to collapse theories,
etc., and that it is interesting to see how far we can go. Even so, photons are
a very bad candidate to look for violations of quantum mechanics; massive
particles seem to me a better choice.
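For reference, and to be fair to the reader who has never met one, the n-party GHZ state in question is nothing more exotic than the equal superposition of all-zeros and all-ones:

```latex
% n-party GHZ state: equal superposition of all-zeros and all-ones
\left|\mathrm{GHZ}_n\right\rangle
  = \frac{1}{\sqrt{2}}\left( |0\rangle^{\otimes n} + |1\rangle^{\otimes n} \right)
% e.g. for n = 8: (|00000000\rangle + |11111111\rangle)/\sqrt{2}
```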

Where does this obsession with entangling photons come from? How
many photons will have to get entangled before the topic dies out? For Christ’s
sake, somebody write a paper showing how to entangle n+1 photons from n
entangled photons, and stop this madness for good!

2) Experimental estimation of the dimension of classical and
quantum systems.

The story begins with an interesting theoretical study of
the correlations generated by classical and quantum systems of dimension d in prepare-and-measure
scenarios, followed by a complicated optics experiment where the authors certify dimension four. Unfortunately, certifying dimension four is not that difficult: I can do it with my balls an abacus, or a mango and a watermelon. And there's more! I can remember 9-digit phone numbers, so I can certify dimension 10^9. Don’t study quantum optics, study me!!*
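In case you doubt the abacus: here is a toy numpy check (my own sketch, not the paper's method). In a prepare-and-measure scenario, the statistics p(b|x) of any classical d-level system factor through its d messages, so the matrix p(b|x) has rank at most d; conversely, observing rank-4 statistics "certifies" dimension at least four, abacus included.

```python
import numpy as np

rng = np.random.default_rng(0)

# Classical d-level system in a prepare-and-measure scenario:
# preparation x encodes a message m with probability p(m|x), the
# measurement decodes m into an outcome b with probability p(b|m), so
#   p(b|x) = sum_m p(m|x) p(b|m)   =>   rank[p(b|x)] <= d.
d = 3
enc = rng.dirichlet(np.ones(d), size=5)   # p(m|x): 5 preparations
dec = rng.dirichlet(np.ones(4), size=d)   # p(b|m): 4 outcomes
P = enc @ dec                             # observed statistics p(b|x)
print(np.linalg.matrix_rank(P))           # at most 3, whatever we try

# The abacus: 4 perfectly distinguishable positions, read out exactly.
# Rank-4 statistics certify dimension at least 4.
abacus = np.eye(4)
print(np.linalg.matrix_rank(abacus))
```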

3) Experimental demonstration of entanglement sudden death

This project was born dead. The authors present an
experimental demonstration of entanglement sudden death for a two-qubit state
subject to amplitude decay and phase damping channels. In case you’re not
familiar with ESD, here’s the theory of the paper in three pictures:

The ellipsoid represents the set of separable states; the extremes of the stick, the initial and the final quantum state after repeated iteration of the quantum channel. (a) If the channel converges to a point in the boundary of the set of separable states, for certain initial states, the system will enter the ellipsoid in finite time (ESD). (b) For some others, it won't. (c) However, if the map converges to a point in the interior of the set, you will always observe ESD.

Fascinating. Let us now discuss the need for an experiment. The authors
claim that “photons are a useful experimental tool for demonstrating [ESD] and,
more generally, for investigating quantum channels like [the amplitude decay
channel], as the decoherence mechanisms can be implemented in a controlled
manner”.

Of course, this is all bullshit, because these two channels
are defined mathematically, so one can perform a simple
analytical study of the properties of the states which undergo such
transformations (which the authors actually do). Implementing these channels in
an optical scenario is not going to add any insight, just experimental errors. And as for ESD verification, an in-depth pub study of the different ways to touch
an olive with a toothpick is equally revealing and much tastier.
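Indeed, the "simple analytical study" fits in a screenful of numpy. A minimal sketch (mine, with the standard textbook ingredients: local amplitude damping and Wootters' concurrence, not the authors' code) showing a two-qubit state whose entanglement dies at finite damping strength:

```python
import numpy as np

def amplitude_damping(gamma):
    """Kraus operators of the single-qubit amplitude-damping channel."""
    K0 = np.array([[1, 0], [0, np.sqrt(1 - gamma)]])
    K1 = np.array([[0, np.sqrt(gamma)], [0, 0]])
    return [K0, K1]

def damp_both(rho, gamma):
    """Apply amplitude damping locally to each qubit of a 2-qubit state."""
    out = np.zeros((4, 4), dtype=complex)
    for A in amplitude_damping(gamma):
        for B in amplitude_damping(gamma):
            K = np.kron(A, B)
            out += K @ rho @ K.conj().T
    return out

def concurrence(rho):
    """Wootters' concurrence of a 2-qubit density matrix."""
    sy = np.array([[0, -1j], [1j, 0]])
    Y = np.kron(sy, sy)
    R = rho @ Y @ rho.conj() @ Y
    lam = np.sort(np.sqrt(np.abs(np.linalg.eigvals(R).real)))[::-1]
    return max(0.0, lam[0] - lam[1] - lam[2] - lam[3])

# Initial state sqrt(1/3)|00> + sqrt(2/3)|11>: its entanglement
# vanishes at gamma = sqrt(1/2) ~ 0.707 (ESD), long before the
# state itself has fully decayed.
psi = np.array([np.sqrt(1 / 3), 0, 0, np.sqrt(2 / 3)])
rho0 = np.outer(psi, psi)
for gamma in (0.0, 0.5, 0.8):
    print(gamma, round(concurrence(damp_both(rho0, gamma)), 4))
```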

4) Violation of Bell’s inequality in Josephson phase qubits

The CHSH inequality (there are so many Bell inequalities,
why does everyone choose the same?) has been violated with photons,
ions and cold atoms. So what? Violating CHSH with two
yoghourt cans tied with a string is hardly surprising if you allow for locality
or detection loopholes. The actual challenge is to violate local realism, i.e., to implement a loophole-free Bell test. If
you’re an experimentalist with a genuine interest in nonlocality, don’t waste
your time and ours with more non-conclusive games and go for the real thing
once and for all.
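For calibration, the "unsurprising" part takes a screenful of numpy (my sketch): the maximally entangled state with the standard measurement settings already sits at Tsirelson's bound 2√2 ≈ 2.83, above the local bound of 2. The only question worth an experiment is closing the loopholes.

```python
import numpy as np

Z = np.array([[1, 0], [0, -1]])
X = np.array([[0, 1], [1, 0]])

# Maximally entangled state |phi+> = (|00> + |11>)/sqrt(2)
phi = np.array([1, 0, 0, 1]) / np.sqrt(2)
rho = np.outer(phi, phi)

def E(A, B):
    """Correlator <A (x) B> on the state rho."""
    return np.trace(rho @ np.kron(A, B)).real

# Standard CHSH settings: Alice measures Z, X; Bob the diagonal bases.
B0 = (Z + X) / np.sqrt(2)
B1 = (Z - X) / np.sqrt(2)
S = E(Z, B0) + E(Z, B1) + E(X, B0) - E(X, B1)
print(S)  # Tsirelson bound 2*sqrt(2); local hidden variables stop at 2
```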

And don’t make me speak of contextuality experiments; there
the “loophole” turns into Madonna’s vagina.

5) Experimental test of a quantum time-machine model

Synopsis: the authors come up with a model for quantum time
machines, mathematically equivalent to teleportation with post-selection. Then,
they decide to make an experiment to “test the predictions of the theory”.

OH-MY-GOD! A time travel experiment!! Our heroes travel back
to 1955 in a modified DeLorean and accidentally seduce their own mothers in a
thought-provoking adventure of self-discovery*.

*More concretely, the discovery that you’re inclined to
practise incest.

Well… no. Rather, they perform a very expensive and time-consuming
experiment of quantum teleportation with post-selection. Then they verify that,
indeed, quantum teleportation gives the same predictions as their time-machine
model, which by definition gives the same predictions as quantum teleportation.
The paradox is therefore solved in a self-consistent way, Marty McFly’s right
hand reappears and he can finally wank return to 1985.

I strongly recommend that the authors travel back in time and remove
the experimental part from their letter. Not for me, or for you, but for the students.
Think of them and their bleeding eyes when they read your paper!

6) An experimental test for non-local realism

Here an experiment to violate Leggett’s model of
crypto-nonlocality is carried out. This experiment, as well as any other
one trying to disprove Leggett’s model, is pointless: if a set of bipartite
correlations p(a,b|x,y) is compatible with Leggett’s axioms, then it must
correspond to the statistics generated by a two-qubit separable state (see arxiv:1303.5124). This implies that any experiment
showing entanglement between two photons is a refutation of Leggett’s model.
Since two-photon entanglement has been verified ad nauseam, Gröblacher et al.’s experiment was not necessary. This
is a case where the theory was simply not advanced enough to embark on an
experiment.

Enough blood for today. You have already seen several
examples of unjustifiable waste of tax-payer’s money in pointless experiments.
Yet many authors of these papers are respectable figures in QI. What is
happening?

When the scientific community acts bizarrely, dig in and you
will find an important journal at the bottom of the trash-bin.

Some years ago, an unhealthy paradigm of research in QI was
established via journal feedback: the duty of theorists is to develop results
which are experimentally testable with current technology, while
experimentalists are expected to come up with ways to implement the protocols
which the theorists devise.

Play by the rules of the game, and you will get rewarded: if
you’re a theorist, you will get published in prestigious journals, like Nature
or Science, where theoretical Physics hardly ever appears (and if it appears,
it is usually in embarrassing forms). If
you’re an experimentalist, you can claim that your technical achievements are
actually interesting (read: “practical”) for quantum information processing.

The negative side, of course, is that many theorists are limiting
their theoretical research to subjects where “experimental investigations” can
be carried out straightforwardly. Most worryingly, the paradigm has driven
experimentalists to theorist hunting,
a recent mass phenomenon that I invite you to contemplate at your next
theoretical seminar: hordes of experimentalists, sitting on the back row,
breathing anxiously, their claws ready to trap any theoretician who can tell them what the hell to do with their current optical setup. It is
precisely this obsession with implementing experimentally whatever is fashionable
in theoretical circles that leads to surrealistic situations where four different groups report experimental boson sampling in the same week.

These are my final messages:

1) Experimentalists: for many of you, the real motivation is
the experimental control of quantum systems. Such is a noble enterprise; be
proud of your work and stop forcing QI applications into your papers.

2) Theorists: not every theoretical discovery must be
complemented with an experiment. E.g.: it is possible to control where soldier
crabs walk by projecting predator shadows at them; hence, in theory, one can build
a computer using swarms of crabs rather than electric currents. However, no serious researcher
would attempt to perpetrate such a stupi-. Oh, no.

I agree wholeheartedly with you concerning the existence of a disease, but I think your diagnosis fails to find the root cause. Most importantly, I violently disagree with your characterisation of the boson sampling experiments as unnecessary fashion-following. The theoretical advance behind these experiments was amazing (the discovery of a complexity class between BPP and BQP that could be computed with linear optics), and these experiments are the closest thing we have to a quantum computer that cannot be classically simulated.

Or would you disagree that it is a worthy goal to make an experiment whose outcomes we are unable to predict?

Of course, the existence of 4 simultaneous experiments about it is troubling, but I think this is YOUR fault (and mine also, of course): the experimentalists are desperate to measure anything of importance, and when something appears they immediately jump for it. It is the theoreticians' fault that important experiments are so rare, that the experimentalists have to reduce themselves to measuring all kinds of crap in order to not close down their labs.

PS: About research lines that lead nowhere, do you plan to talk about hidden variables eventually?

Please, read again the ninth paragraph: I'm not criticizing theoretical boson sampling (and I agree that it is an important theoretical result).

Regarding predictability, [in the paper that I read] the authors used a normal computer to calculate the permanents of the unitaries and then compared them with the experimental statistics. So, like in the other experiments, they knew in advance what they were going to measure.
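(For scale: "calculating the permanents" is no feat of prophecy at small photon numbers. A sketch of mine, not the authors' code, of Ryser's formula, the kind of routine one would use to predict few-photon boson sampling statistics, since the output amplitudes are permanents of submatrices of the interferometer unitary.)

```python
from itertools import combinations

def permanent(A):
    """Permanent of an n x n matrix via Ryser's formula, O(2^n * n^2)."""
    n = len(A)
    total = 0.0
    for k in range(1, n + 1):
        for cols in combinations(range(n), k):
            prod = 1.0
            for row in A:
                prod *= sum(row[j] for j in cols)
            total += (-1) ** k * prod
    return (-1) ** n * total

# Sanity check: the permanent of the all-ones n x n matrix is n!
print(permanent([[1, 1], [1, 1]]))   # 2.0
print(permanent([[1, 1, 1]] * 3))    # 6.0
```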

>the experimentalists are desperate to measure anything
>of importance

Many experimentalists I know are creative and intelligent individuals who could achieve great results without a theorist on their backs. It's a pity that in this field they have become so dependent on theorists.

>PS: About research lines that lead nowhere, do you plan
>to talk about hidden variables eventually?

I did not say that the outcomes of these experiments were not predictable. I said that they were the closest thing we had, and the goal was to eventually produce unpredictable results. Come on, you can calculate by hand the permanents needed to simulate the experiment with three photons, but they say that 20 photons is already on the verge of unpredictability.

And you don't just wake up, read the arXiv: "Humm, good proposal, guess I'll build a 20-photon interferometer today". That's hard. You've got to start somewhere. And even the "easy" three-photon case took them about 2 years to do. Because it required new experimental techniques, better "experimental control of quantum systems. Such is a noble enterprise".

I don't think that boson sampling falls into the category of pointless experiments (it's a first step, even though the results are still predictable). In the post, I mention the topic as an example of "theorist hunting". I believe, and it seems that you agree with me, that having four groups working on essentially the same experiment is a waste of human resources. So we actually agree on a couple of things.

We disagree in that you blame the theorists (in particular, me!!) for not developing interesting material for the experimentalists to realize, i.e., you seem to support the paradigm that I refer to in my post. I think that it is not my job to tell the experimentalists what to do (my job is to find interesting theoretical results), and it is certainly not their job to follow my orders. Experimentalists are grown-up scientists who can come up with interesting ideas by themselves; I find it pitiful that in order to publish them they have to find a connection with state-of-the-art theory.

A minor clarification is in order: indeed, we tried to use our balls for the dimensionality certification experiment. The editor did not like it. He said that we had misunderstood something about the editorial line of Phys. Rev. X.

In order to offer some guidance, the editor suggested using the balls of a more prominent scientist within the community. Only after countless refusals from the top guys in quantum info to participate in the experiment did we decide to implement it with photons. This turned out to be the right decision, as we later noticed that photons allowed us to implement the quantum version of it and therefore submit to the Nature pub. group.

As usual, I think Miguel is being too polite, especially when it comes to the "boson sampling" experiments.

On the experimental side, these are meaningless stunts that are fairly simple to implement, as evidenced by the 4 different teams jumping on them. Furthermore, the techniques used there are explicitly non-scalable: they need to post-select the shit out of these things just to get the photons required. The probability of getting n photons goes like 1/2^n, so this whole game is pointless (same goes for most linear optics demonstrations). So I take back the claim that they're simple to implement: you need patience and not much else.

On the theory side, there are big problems too. The biggest is the lack of fault-tolerance for this model. As many have argued before, if you have a model of computation (like real-number computation) that isn't fault-tolerant, any conclusions derived from it should be viewed with some skepticism. Specifically, if the linear optics model of computation studied in the boson sampling paper (originally developed by Gurvits, it seems) requires universality to achieve fault-tolerance, it cannot properly be considered an easier model.

Given that boson sampling is not scalable, and so takes exponential effort as anon 08:48 points out, it is pretty ridiculous to say that "these experiments are the closest thing we have to a quantum computer that cannot be classically simulated", as anon 04:26 did. If you want to see a paper that really shows something that currently can't be classically simulated, look at http://arxiv.org/pdf/1101.2659.pdf At least it couldn't easily be simulated then, though perhaps with more work on a classical computer it may be in range now. Boson sampling, which is something previously studied by Gurvits, is currently a computer science problem rather than a practical experimental approach, barring major advances in optics.

The Rat is right on. But I disagree with the Rat on his first claim. I want to encourage experimentalists to improve their ability to control quantum systems, and creating weird entangled states can just be viewed as a way of measuring their progress. So long as it requires a nontrivial experimental improvement to make the state, it is worthwhile.

I agree with the previous two comments concerning the experiments on boson sampling and that it's not clear at the moment how interesting the computational model really is, given we do not know whether it can be made fault tolerant.

However I'd like to point out there _are_ interesting theoretical results in the Aaronson-Arkhipov paper, in particular that if there is an efficient classical algorithm for sampling from the output distribution of quantum mechanical systems (e.g. linear optical networks) then PP is contained in BPP^NP (which is very unlikely). This is also achieved in the Bremner-Jozsa-Shepherd paper (using the IQP model instead) and I think it's really a great result. Aaronson and Arkhipov even make progress on the very interesting open problem of ruling out efficient classical algorithms that can only _approximately_ sample quantum distributions, connecting it to a lot of cool stuff in classical complexity theory and random matrix theory (random self-reducibility of the permanent, anti-concentration inequalities, ...).

So I see the value of the two papers in giving evidence against efficient classical simulations of quantum mechanics (of very different flavour from the hardness of factoring). It's a pity they ended up being considered as papers about intermediate models of computation and fueling a new generation of pointless experiments.

More generally, quantum information "science" assumes that quantum mechanics is the correct physical theory. Then, most theorists prove things of the form "If XXX then YYY", and if they're responsible, nothing needs to be checked in the lab. There are a few exceptions, like determining experimentally whether the noise model in a certain system admits fault-tolerant quantum computation. Strangely, few would question the importance of demonstrating experimental capability, but very very few authors would state that as the main result of their papers. Debbie

Of course, once you've proven "If XXX then YYY", there are lots of other questions that can make the result more or less important. For example, is XXX likely to be true in any system anyone might build? Is the chain of reasoning from XXX to YYY so convoluted that for all practical purposes we can often have XXX but not YYY because of non-idealities in real systems, even though the result is mathematically true? Is the effect so small that nobody should ever care about it anyway? These are all questions that you can answer with an experiment to demonstrate your theory. I think the Rat's real concern is that (like in his example with coherent states), often the fact that an experiment will match the theory has already been established. Then you end up doing stuff like labeling a half-wave plate a "quantum gun" and calling it time-travel.

To interpolate a literary note into this discussion (which is a very interesting and important discussion, as it seems to me), we read in Victor Hugo's Les Travailleurs de la Mer (Toilers of the Sea, 1866) the mathematically celebrated passage:

(It was slack water, but the tide was beginning to make itself felt, the moment was favorable for setting out.)-------------------

Here the image of a flowing tide, and in particular the word "étale", both were adopted by Alexander Grothendieck in geometrizing domains of mathematics that formerly were perceived algebraically.

-------------------"A different image came to me a few weeks ago. The unknown thing to be known appeared to me as some stretch of earth or hard marl, resisting penetration. . . the sea advances insensibly in silence, nothing seems to happen, nothing moves, the water is so far off you hardly hear it. . . yet it finally surrounds the resistant substance."-------------------

Already for quantum systems engineers — and for string theorists? — the tide of geometric dynamics is flowing vigorously (hmmm … perhaps a hybrid discipline of étale dynamics is being born?).

Under any name, we may all reasonably anticipate (or at least hope!) that the tide of 21st century geometric dynamics is beginning to flow strongly enough, and far enough, as to launch the ship-careers of many young 21st century quantum researchers!

The 1980s assertion "genomics has nothing to do with genetics" is unassailably correct … yet history has shown us that this point-of-view is so self-limiting as to be wrong-headed.

Perhaps the conviction that "quantum mechanics is the study of trajectories in Hilbert space" has this same trait … it is unassailably correct, yet perhaps we should be concerned that it has been excessively limiting?

To say the same thing … a little more directly … it is far from evident (theoretically, or experimentally, or in engineering practice) that exact unitary evolution on finite-yet-exponential-dimension Hilbert state-manifolds is a scalably realizable limit of relativistic gauge field theories — which are (apparently?) the sole quantum-dynamical systems that Nature provides.

So despite their undeniable algebraic convenience, perhaps finite-dimension Hilbert spaces are not the sole (or even the most fertile) grounds for quantum information theory? That would be good news for young quantum researchers, because the old QIT roadmaps are lagging so dismayingly, that new STEM avenues are welcome, eh "anonymous"?

"Experimentalists: for many of you, the real motivation is the experimental control of quantum systems. Such is a noble enterprise"

That's exactly what this:

"Experimental demonstration of 2, 3, 5, 6, 8-photon entanglement."

is all about.

Except for that, I mostly agree. Part of the reason why experimentalists publish those papers is that it helps pay for the actual tech development that would otherwise go unfunded. It might not have filtered through to your sewer yet, but actual quantum computers are still far away. You need to build up some excitement and keep the funding agencies interested if you ever want to arrive there.

It seems that what you're suggesting is that experimentalists should lock themselves into their labs and not come out until they can factor a 300-digit prime number. Once they do, you will probably tell us that factoring wasn't ever interesting anyway.

>It seems that what you're suggesting is that >experimentalists should lock themselves into their labs >and not come out until they can factor a 300-digit prime >number.

Not at all. I think that experimental quantum control is an important subject, that should be funded independently of quantum computation, and I will be happy to see experimentalists reporting their advances.

Most importantly, I want the experimentalists to advertise their actual breakthroughs, rather than hide them behind a “QI application” label.

When you invent a new algorithm for quantum chemistry, you do not report your result as: “calculation of the angle of the water molecule up to eight decimal places”. Computing the shape of water is just an example of what you can do with your new method, not your main achievement.

The same holds for experiments. If you invent a new experimental tool that enhances your quantum control over many-photon systems (and, in particular, it allows you to generate a seven-photon entangled state), the title of your paper should be “Revolutionary experimental technique for quantum control”, not “Entanglement of seven photons”.

On an unrelated note, I don't need a quantum computer to factor a 300-digit prime number.

> On an unrelated note, I don't need a quantum computer to factor a 300-digit prime number.

Does that mean you've found a method for factoring -prime- numbers on classical computers? Now that would be revolutionary. My bad.

But to get back to your initial point: what's wrong with the Josephson paper then? The novelty was obviously that they extended their quantum control capabilities to entangling two phase qubits. And a Bell test is, no matter whether you think that's a bit medieval (which I do), still the litmus test for demonstrating entanglement.

You're now saying, the title should instead have been "New method for entangling two Josephson phase qubits". But what does "new" mean? Nothing, without context. So should they have put the whole method section of their paper in the title?

>But to get back to your initial point: what's wrong with >the Josephson paper then? The novelty was obviously that >they extended their quantum control capabilities to >entangling two phase qubits. And a Bell test is, no matter >whether you think that's a bit medieval (which I do), >still the litmus test for demonstrating entanglement.

In principle, you can use self-testing or rigidity to certify, from a Bell inequality violation, that the state and measurements that you have in the lab are close to what the theory predicts. However, with a CHSH violation of 2.07 (barely beyond the classical value of 2) you cannot certify much in a device-independent way.

To the authors' credit, the experiment does show quantum control: their state has high overlap with the singlet, and their measurements have high fidelity. These parameters are not derived from the CHSH violation, though, but through other methods, like quantum tomography.

The Bell test thus seems superfluous in this paper. That is, IF the goal is to evidence quantum control. Because, from the conclusion and the Supplementary Material, the aim of the authors seems to be rather to convince the reader that they could violate local realism without loopholes in a similar experiment where the qubits are separated by 10 m.

I sincerely hope that the authors conduct such an experiment successfully in the near future. Still, I don't see the need to implement a simulacrum of a CHSH experiment; at this stage, it was enough to prepare the state and check the fidelities.

>You're now saying, the title should instead have been "New >method for entangling two Josephson phase qubits". But >what does "new" mean? Nothing, without context. So should >they have put the whole method section of their paper in >the title?

Using "new" in the title is not a good idea (to be fair, neither is using "revolutionary"); APS wouldn't allow it, for instance. How about "Quantum Control of blah blah VIA blah blah"? Hinting at (rather than fully describing) what makes your technique different from other methods will attract the attention of other researchers in the area.

Perhaps I've misunderstood the main point being made in the original post, but you appear to adopt the position that experimentation is unnecessary in the light of theoretical results ["not every theoretical discovery must be complemented with an experiment." ].

This position seems counter to the point of science. Theory must always suffer the disadvantage of requiring proof: ideas cannot live on your recommendation alone. Experimentation is the only means of validating theoretical efforts and, even when experiments don't impress you, they serve the greater purpose of validating the consistency of science. Yes, experiments that fail to have known theoretical descriptions are, in some sense, more interesting, but they are not the only ones that are necessary.

The point of the post is that experimentation is not ALWAYS necessary.

E.g.: it was clear from the beginning that nothing can be learned about time machines by making a teleportation experiment.

E.g. (2): biologists already knew that they could control swarms of soldier crabs with shadows. It was not necessary to build logic gates with crabs to show that you can do "crab computation" (no matter how funny!).

"This part of mechanics was cultivated by the ancients in the five powers which relate to manual arts, who considered gravity (it not being a manual power) no otherwise than as it moved weights by those powers. Our design not respecting arts, but philosophy, and our subject NOT MANUAL but natural powers... for all the difficulty of philosophy seems to consist in this: from the phænomena of motions to investigate the forces of nature, AND THEN from these forces to demonstrate the other phenomena..."

Also, I think I agree with the Travis Humble guy above, even if you think they are trivial, you should in principle perform the experiments that prove your theory. At least, if I understood correctly what Galileo said!

"you should in principle perform the experiments that prove your theory"

Sure, but the experiments referred to here are on the same level as drawing a circle with a compass and then performing a complicated quantum optics experiment to verify that the points that were drawn really obey x^2+y^2=1. You are not getting any new insight into anything.

For example: there are some experiments that test the commutation relations of creation and annihilation operators. Instead, the right experiment would be to test the regime of validity of using these operators to describe processes of absorption and addition of photons. If you are in the valid regime, it is obvious that the commutation relations hold.

There is no need to check experimentally every theoretical step of a theory; this is nonsensical. By the same reasoning, we should check every day that a stone will fall to the ground according to Newton's law (well, who knows, it could stop mid-air!).

x^2+y^2=1 is mathematics, not physics :) on the other hand, the physics would be: how close is my compass to an ideal compass, then you can use quantum optical experiments to test with high precision whether you have actually implemented x^2+y^2=1 with good approx. hehe

for what concerns a, a^dag, i don't know well enough these experiments, are you referring to the stuff done by bellini et al?

and for gravity: in principle yes, every time you release a stone and it falls to the ground you are once again confirming newton, so you can decrease your estimated probability that the law is wrong, but who knows, some day it may not fall! hehe again!

What a wonderful, and funny, post. I especially liked the sexist references ;)

After reading this, my former view of the QI physics mafia is confirmed again. So glad that I left physics 2 years ago.

Compare what is produced nowadays by theoretical and experimental physicists, with the experiments and theories developed 100 years ago. The establishment of impact factors and citation hunting might be the reason for the massive decay of quality in physics.

This is the story of the obituary for Alexander Grothendieck (one of the greatest mathematicians and visionaries of the last century) written by two great mathematicians: John Tate (an Abel Prize winner) and David Mumford (a Fields Medallist). :)

Good post. Eventually Nature and PRL will notice the problem. I would put it equivalently, I think, but totally humorlessly: quantum information/computation/mechanics is making a transition from Science to a system of Engineering rules. We're part way through the process inasmuch as, for example, one can now buy off-the-shelf components that not long ago would have taken a year to build, but the creation of good engineering rules is hard. Once the rules are in place, Engineers construct stuff not to test a theory but to use it.