I understand that a reversible computer does not dissipate heat via Landauer's principle whilst running: the memory state at all times is a bijective function of the state at any other time.

However, I have been thinking about what happens when a reversible computer initializes. Consider the state of the physical system that the memory is built from just before power-up and initialization. By dint of the reversibility of the underlying microscopic physical laws, this state stays encoded in the overall system's state even when it is effectively "wiped out" as the computer initializes and replaces it with a state representing the initialized memory (set to, say, all noughts).

So it seems to me that if $M$ bits is the maximum memory a reversible algorithm will need to call on throughout its working, then by the reasoning of Landauer's principle we shall ultimately need to do work $M\, k\,T\,\log 2$ to "throw the excess entropy out of the initialized system".
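This cost is easy to tabulate; a minimal sketch in Python (the choice of $M$ and $T$ below is purely illustrative):

```python
from math import log

K_B = 1.380649e-23  # Boltzmann constant, J/K


def landauer_cost(bits, temperature):
    """Minimum work (joules) to erase `bits` bits at `temperature` kelvin,
    per Landauer's principle: W = M * k * T * ln 2."""
    return bits * K_B * temperature * log(2)


# Example: erasing 1 GiB of memory at room temperature (300 K)
# comes out to a tiny amount of work, on the order of 1e-11 J.
w = landauer_cost(8 * 2**30, 300.0)
print(f"{w:.3e} J")
```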

Question 1: Is my reasoning so far right? If not, please say why.

Now, specializing to quantum computers, this seems to imply some enormous initialization energy figures. Suppose we have a system with $N$ qubits, so that the quantum state space has $2^N$ basis states $\left|j\right>$. Suppose further that, for the sake of argument, the physics and engineering of the system is such that the system state throughout the running of the system only assumes "digitized" superpositions, i.e. sums of the form:

$$\psi = \frac{1}{\mathcal{N}}\sum_{j=1}^{2^N} x_j \exp\left(i\,\pi\sum_{k=1}^{K}\frac{p_{k,j}}{2^{k-1}}\right)\left|j\right>$$

where $x_j, \;p_{k,j}\in\{0,1\}$, $K$ is a fixed number of phase-digitization bits, and ${\cal N}$ is the appropriate normalization. To encode the beginning state that is wiped out at power-up and initialization, it seems to me that the Landauer-principle-behested work needed is $2^N \,k\,T\,\log 2$. This figure reaches about 6000 kg of mass-equivalent energy (roughly humanity's yearly energy consumption) at around 140 qubits, assuming we build our computer in deep space to take advantage of, say, a 10 K working temperature.
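For concreteness, here is a quick sanity check of that arithmetic, again in Python; it gives roughly 1500 kg of mass equivalent at exactly $N = 140$, crossing the ~6000 kg mark by $N \approx 142$:

```python
from math import log

K_B = 1.380649e-23  # Boltzmann constant, J/K
C = 2.99792458e8    # speed of light, m/s


def init_cost_kg(n_qubits, temperature):
    """Landauer cost of erasing 2^N bits, one per basis state of the
    'digitized superposition', expressed as a mass equivalent (kg)
    via E = m c^2."""
    energy = 2**n_qubits * K_B * temperature * log(2)  # joules
    return energy / C**2


mass_140 = init_cost_kg(140, 10.0)
mass_142 = init_cost_kg(142, 10.0)
print(f"N=140: {mass_140:.0f} kg, N=142: {mass_142:.0f} kg")
```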

Question 2: Given that we could build a quantum computer with 140 qubits in "digital" superpositions as above, do we indeed need such initialization energies?

One can see where arguments like this might go. For example, Paul Davies thinks that similar complexity calculations limit future quantum computers, because their complexity (information content) will have to respect the Bekenstein bound: P. C. W. Davies, "The implications of a holographic universe for quantum information science and the nature of physical law", Fluctuation and Noise Letters 7, no. 4 (2007) (see also http://arxiv.org/abs/quant-ph/0703041).

Davies points out that it is the Kolmogorov complexity that will be relevant, and so takes this as an indication that only certain "small" subspaces of the full quantum space spanned by a high number of qubits will be accessible to real quantum computers. Likewise, in my example, I assumed this kind of limitation to be the "digitization" of the superposition weights, but I assumed that all of the qubits could be superposed independently. Maybe there would necessarily be correlations between the superposition coefficients in real quantum computers.

I think we would likewise hit the Landauer constraint by this reasoning, but at a considerably lower number of qubits.

Last Question: Am I applying Landauer's principle to the quantum computer in the right way? If my arguments fail, why do they?

Interestingly, I found a paper ("unfortunately" in French), here, which states that, in the case of a hybrid optomechanical system (see fig. 2, page 7, and fig. 5, page 10), the work necessary to initialize a qubit is proportional to the Rabi frequency, which plays the role of a temperature (formula 23, page 12).
–
Trimok, Nov 2 '13 at 10:04

Yes, no, and no. Initializing one qubit dissipates $kT$ of energy, and thus initializing $N$ qubits dissipates an energy of $NkT$. (Note that if the energy did not scale linearly with the number of qubits, this would likely give rise to all kinds of contradictions!) This is closely related to the question of whether $N$ qubits "contain" $N$ bits or $2^N$ bits of information (and typically $N$ is the more appropriate answer) -- e.g., arxiv.org/abs/quant-ph/0507242 contains some arguments about that.
–
Norbert Schuch, Nov 2 '13 at 12:55

1 Answer
1

No, the reasoning is wrong. When you need an $N$-qubit state, no matter whether entangled or not, you just initialize $N$ qubits and then apply a unitary (reversible) transformation. So the entropy produced during initialization is proportional to the amount of information, without exponents.
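This prescription can be sketched numerically. The following hypothetical 3-qubit example (plain NumPy, not any particular quantum-computing library) prepares a maximally entangled GHZ state from independently initialized qubits using only unitary gates:

```python
import numpy as np

# Start from N freshly initialized qubits (state |000>) and reach an
# entangled state purely by unitary (hence reversible) gates:
# a Hadamard followed by two CNOTs.

I = np.eye(2)
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)  # Hadamard gate
CNOT = np.array([[1, 0, 0, 0],                # control on the first of
                 [0, 1, 0, 0],                # the two qubits it acts on
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])

# |000>: three qubits, each initialized independently (cost ~ N).
state = np.zeros(8)
state[0] = 1.0

# H on qubit 0, then CNOT(0,1), then CNOT(1,2).
state = np.kron(np.kron(H, I), I) @ state
state = np.kron(CNOT, I) @ state
state = np.kron(I, CNOT) @ state

# Result: the GHZ state (|000> + |111>)/sqrt(2), maximally entangled,
# yet no further erasure (no extra Landauer cost) was incurred after
# the N single-qubit initializations.
print(np.round(state, 3))
```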

For an explanation of the perennial "how many states do N qubits have?" question (and a refutation of the respective misconceptions), refer to How many states can an n-qubit quantum computer store?, although there are several places on the site where different users have written nearly the same thing about superpositions/linear combinations.

What about the reasoning above question 1? Why is that wrong? Is the energy not $M\,k\,T\,\log 2$? Also, could you please explain "you just initialize N qubits and then apply a unitary transformation (reversible). So, entropy produced during the initialization is proportional to the amount of information, without exponents" a bit more? I mean, the amount of information, if you like, is what I need. Are you saying that the initialisation work needed is simply $N\,k\,T\,\log 2$ for qubits as well?
–
WetSavannaAnimal aka Rod Vance, Oct 3 '14 at 14:33

If one qubit costs some constant amount of entropy (of whatever value), then the entropy output is proportional to the number of qubits ∼ the amount of information. The "physics and engineering… is such that the system state throughout the running of the system only assumes 'digitized' superpositions" reasoning is based on arbitrary assumptions having nothing to do with quantum computing, and doesn't prove anything. A system of $N$ qubits admits exactly $2^N$ mutually orthogonal basis states, although these $2^N$ states can exist in coherent superposition. Any set of more than $2^N$ states is linearly dependent.
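That last claim is a basic dimension count; here is a small NumPy sketch for $N = 3$ (the seed and sample count are arbitrary choices for illustration):

```python
import numpy as np

# An N-qubit state space has dimension 2^N, so any collection of
# 2^N + 1 states must be linearly dependent. Check for N = 3.

rng = np.random.default_rng(0)
dim = 2**3  # state-space dimension for 3 qubits

# Draw dim + 1 = 9 random complex states and normalize each one.
states = rng.normal(size=(dim + 1, dim)) + 1j * rng.normal(size=(dim + 1, dim))
states /= np.linalg.norm(states, axis=1, keepdims=True)

# The rank is capped by the dimension, so 9 states cannot all be
# linearly independent in an 8-dimensional space.
rank = np.linalg.matrix_rank(states)
print(rank)  # 8, i.e. at most dim independent states
```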
–
Incnis Mrsi, Oct 16 '14 at 13:27