Is it really all about the unconscious? An interesting discussion, much of it around the value of the Freudian view: powerful insight into unfathomable complexity or literary stuff of no therapeutic value?

Shahidha Bari makes an impassioned case for the depth of Freud’s essential insights; Barry C Smith says Freud actually presents the motives and workings of the unconscious as too much like those of the conscious mind. Richard Bentall says it’s the conscious mind that is the real mystery; unconsciousness is the norm for non-human beings. Along the way we hear about some interesting examples of how the conscious mind seems to be just a rationalising module for decisions made elsewhere. Quote back to people opinions they never actually voiced, and they will devise justifications for them.

I think the separation between conscious and unconscious often gets muddled with the difference between explicit and inexplicit thinking. It’s surely possible to think consciously without thinking in words, but the borderline between wordless conscious thought and unconscious processes is perhaps hard to pin down.

30 Comments

The spectrum of the non-conscious, subconscious and conscious is physically driven by the body, with all of its autonomic activity and needs, so that the creature can move somatically through the world in pursuit of the three or four f’s. People who ignore the body will never understand this spectrum, or the interaction between body and brain, which of course evolved together.

It is true that the conscious mind is the real mystery, since it is what makes us capable of considering what we call the unconscious. But what is interesting is how the unconscious versus the conscious guides our thoughts and actions. The automatisms of living things began some 4 billion years ago; self-consciousness is only a few million years old. What happened in that short period of time to make us so different from animals?
At the animal level, motivations are mostly related to survival (of the individual and the species).
At the human level, consciousness makes us capable of making free choices. But what are human motivations? Life/death drives? The pursuit of happiness? Ego valorization? Anxiety limitation? …
This looks like an important aspect of the conscious-versus-unconscious question.

3. Hunt says:

At the risk of introducing the computation metaphor again, it’s just so tempting to draw the analogy between conscious and unconscious and foreground and background processing in multitasking operating systems. There is a foreground (conscious) process and numerous background (unconscious) processes. The bg processes aren’t idle (unless they are); their effects are seen, and sometimes they even seep through into the fg process. Sometimes the fg process notices actions being taken without its own agency (e.g. the moment you realize you’ve driven all the way to work without thinking about it). There could also be some kind of ordered queue of processes, i.e. the process just behind the fg process is more conscious than those further down the queue.

This also introduces the possibility that when one “loses” consciousness, e.g. in dreaming, it’s actually just a shift in the priority queue of processes: the primary conscious process moves to the bg, and one or more bg processes move forward. Have you ever noticed “yourself” in a dream?
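The queue idea can be put in toy code (a cartoon of the analogy, not a claim about real schedulers; all the names and priority numbers below are invented):

```python
import heapq

class Process:
    """A toy 'mental process': a name plus a priority (lower = nearer the foreground)."""
    def __init__(self, name, priority):
        self.name = name
        self.priority = priority
    def __lt__(self, other):
        return self.priority < other.priority

class Mind:
    """The process at the head of the priority queue is the 'conscious' one."""
    def __init__(self, processes):
        self.queue = list(processes)
        heapq.heapify(self.queue)
    def foreground(self):
        return self.queue[0].name
    def reprioritize(self, name, new_priority):
        # Model 'losing consciousness' as a reordering, not a shutdown.
        for p in self.queue:
            if p.name == name:
                p.priority = new_priority
        heapq.heapify(self.queue)

mind = Mind([Process("waking self", 0),
             Process("dream narrative", 5),
             Process("breathing", 9)])
awake = mind.foreground()             # "waking self" is in the foreground
mind.reprioritize("waking self", 10)  # falling asleep: demote the waking self
asleep = mind.foreground()            # "dream narrative" surfaces
```

Nothing is switched off when “consciousness is lost” in this picture; the head of the queue simply changes.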

4. john davey says:

Hunt
“At the risk of introducing the computation metaphor again, it’s just so tempting to draw the analogy between conscious and unconscious and foreground and background processing in multitasking operating systems.

Ah – not only is consciousness computational, it’s open source too! A Linux-based system! No wonder it’s so reliable.

There was never any difference in substance between “foreground” and “background” in Unix/Linux speak. Specifically, it was about the relationship of file descriptors to teletype devices (i.e. keyboards and screens controlled by users). If Linux were invented tomorrow we probably wouldn’t use the terms at all – we’d more likely say ‘interactive’ and ‘non-interactive’, depending on whether input and output were being consumed from an unpredictable source. That doesn’t stop, of course, ‘interactive’ or ‘foreground’-style programs being run in a ‘non-interactive’ or ‘background’ way by the simple expedient of input/output redirection. It never did stop background and foreground processes being the same thing. It’s probably an inherited idea from the early days of Unix – probably useful for teaching – but it never had real significance.

Hell, SIGHUP is even ignored by daemons these days when you don’t run ‘nohup.’ It used to kill them. There really is no difference.

5. Hunt says:

john,
That there is little difference between fg and bg processes may just add to the analogy. How sure are we that there is a great difference between conscious and unconscious processes of the mind? Peripherals bound to the active process correspond to the somatic attachment of the body to the conscious mind, while unconscious processes of the dreaming state are detached from the body by complete atonia (usually). This was no doubt one of the first things evolution made certain of: that our ancestors wouldn’t get up and sing and dance all night with the lions, tigers and bears.
Dreaming is a time when all processes of the brain can imaginatively cavort together in the safety of a paralyzed body.

The metaphor extends to how bg or non-interactive processes can still have tangible effects, e.g. a bg process that updates a screen timer. Getting back to my previous example of driving to work unconsciously: it can’t be true that unconscious processes are never in control of our bodies. You arrive at work realizing that you’ve been thinking about an argument with a coworker, not about driving. But something drove you! Otherwise you would have it in conscious recall.

I think different people, on different occasions, make use of this multi-threaded aspect of consciousness, usually without knowing it. In fact, if you didn’t stop to reflect on the oddness of being driven to work by another agent, the events of the day would just flow seamlessly without notice.

One final disconcerting note: there is no reason these “unconscious” processes are not actually conscious. How would you know? We’re so used to thinking that “we” (whoever that is) own whatever goes on in our bodies that we assume we’re the only ones home. While you were thinking of the argument, another “you”, unable to express it verbally (because it doesn’t have the right peripherals), was intently driving you to work.

6. John Davey says:

Hunt

The overwhelming evidence – discussed at length here on these pages – is that minds are not computational, because computation cannot generate consciousness. Computation is also extrinsic, which to my mind is an even bigger obstacle.

Tell me, Hunt – and if you can answer this question I’ll give you my eternal gratitude – if I designed a computer that used the Sun as a component, what intrinsic property of the Sun would alter because of it? If I used you as an element in my computer, how would you know?

There are unacceptable answers to this question which I will preempt. One is to say that a “neuron” does not change intrinsically because it is part of a brain.

This cannot be asserted because no one knows how brains work. To claim it because current physical or chemical models may not support the idea is totally unjustified as well.

Brains and minds are the domain of science. Computers live in the land of definition; there is nothing unknown about them. So you cannot respond to the question “how do computers acquire intrinsic properties?” with some mumbo jumbo about brain components being in the same position.

The situation is not symmetric. Brains and minds acquire intrinsic properties by an as-yet unknown mechanism. What I want from you is an explanation as to how computers could do it, bearing in mind there are no hidden phenomena waiting to be discovered about them.

7. Hunt says:

john,
To be clear, using the computational metaphor doesn’t imply endorsing computationalism. I haven’t made any such claim. Postulating a multi-process consciousness is just interesting and possibly useful, just as comparing birds to planes is interesting and useful.

However…

By intrinsic property of minds and brains I’m assuming you mean things like emotional states or declarations of intentions. You make the fixed assumption that these things are real, like physical properties. So not only are you cutting me off from introducing speculative assumptions, you’re making assumptions yourself to solidify your position.

Ultimately I think the trouble spot in discussions like this is the unwillingness to introduce the possibility of radical meaninglessness: the possibility that even your most intense emotion might ultimately have no more import than a string of digits passed from one computer to another. If we reduce ourselves to computers, then we are just machines – a tautology. Most people who haven’t embraced some form of existentialism, determined to mine meaning out of the world by themselves, don’t want to face that.

8. Richard says:

Hunt (#5)
In my experience, thinking that you were ‘driving to work unconsciously’ is a common mistake, unless there is something wrong with you. Driving to work requires continuous conscious participation (no matter how slight) in allowing the necessary somatic activity to take place; without it you would stop and/or crash. The fact that you do not remember the journey is down to the repetitive activity and there being nothing particularly noticeable or out of the ordinary worth remembering. It is part of the spectrum from non-conscious to conscious – but still conscious.

9. Hunt says:

Richard,
It’s open to interpretation, and probably varies person to person. And of course I’m not talking about split personalities or psychosis. Some may call it daydreaming, but there is definitely a process where you can be doing (at least) two things at once in a more or less independent fashion, and neither process appears to be very informed by the other, except perhaps at a moment of interrupt (another computer analogy) when primary consciousness must attend to an emergency.

10. Richard says:

Hunt
To be clear, if you are unconscious, to however slight a degree, you are not conscious or semi-conscious, and therefore would not be able to attend consciously to anything, let alone an emergency.
I am fine with your computer metaphors/analogies, although they can be misinterpreted.

11. John Davey says:

Hunt

No, by ‘intrinsic’ I mean intrinsic in a natural sense. Brains have an inherent, intrinsic contiguity. They are made of the same stuff in a contiguous 1.5kg blob : they generate only one conscious mind, meaning that consciousness has a relationship to intrinsic features of a natural brain.

Computational states are solely extrinsic. They need a symbol manipulator to make sense of them, and as such it’s the symbol manipulator (human beings) that defines the extent and scope of the computational machine. Thus I can include in my computer the Sun, the car next door, or Mount Everest.

This means that conscious mental states – which we know to be intrinsic – cannot arise from systems whose scope is extrinsic, such as a book, or a painting… or a computer program.

It’s not just consciousness that suffers at the hands of a lack of intrinsicity: no coherent internal mental state can be supported in a system whose extent is arbitrarily decided by the observer. There are no naturally occurring computers, and if you think there are, I’d like an example.

12. Christophe says:

Comparing intrinsic mental states to extrinsic computational states can be done at a simpler level: biological states vs computational states. The chemical processing within a paramecium avoiding an acidic area is intrinsic to the living entity, and calls for a much simpler philosophical background than human consciousness.
Evolution has built human consciousness up within living entities. Both life and consciousness have mysterious natures, and the intrinsicness of meaning generation in living entities is obviously simpler to address than that in the human mind.
Understanding the nature of life would probably bring significant openings on the nature of consciousness. As R. Brooks wrote in 2001, “we might be missing something fundamental and currently unimagined in our models of biology”. Focusing on consciousness looks to me like putting the cart before the horse.

13. Hunt says:

Christophe provides me with a useful “way out” of the task, provided a person accepts the simplifying assumptions he provides: that it is simpler and equivalent to ask why biological states are intrinsic, yet computational ones are extrinsic.

Why does a protist avoiding acid seem to fulfill the requirement while an algorithm simulating the (surely simple enough) process doesn’t? (And if it isn’t simple enough, choose a biological process that is, like the lac operon.) It can’t be as simple as just proposing input and output transducers interfacing with the computation, right? Yet, in a strict sense, that does seem to be all there is to it, since a machine operating in the world, processing input and “doing things,” is as intrinsic as required. Note that I didn’t say “generating output”, since that introduces the opportunity to say the output is still open to interpretation.

And this small caveat is enough to make me suspicious that this is all really just a semantic game of “hide the homunculus”: if one is fool enough to let it into the conversation, then one loses the argument.

Yet I suspect this is all deeply unsatisfying to john, and it is to me as well, since minds seem to be intrinsic just sitting there, without input and without doing anything, whereas computations, disconnected from I/O, seem to be open to interpretation. So we’re back where we started.
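For concreteness, the acid-avoiding protist can be written down as a toy sense-act loop, with input and output transducers and nothing else (a sketch only; the pH values and one-dimensional pond are invented):

```python
def avoid_acid(position, ph):
    """One sense-act cycle: read the local pH (input transducer) and move away
    from acid (output transducer). Unknown positions default to neutral water."""
    here = ph.get(position, 7.0)
    if here >= 7.0:
        return position                        # comfortable: stay put
    left = ph.get(position - 1, 7.0)
    right = ph.get(position + 1, 7.0)
    return position - 1 if left > right else position + 1   # flee toward higher pH

# A one-dimensional pond with acid at one end (values are made up):
pond = {0: 4.0, 1: 5.0, 2: 7.2}
pos = 0
for _ in range(5):                             # a few sense-act cycles
    pos = avoid_acid(pos, pond)
# After a few cycles the agent sits somewhere non-acidic.
```

Whether running this on a machine wired to real pH sensors and motors would make it “intrinsic” in john’s sense is, of course, exactly the point in dispute.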

14. john davey says:

Hunt

“Christophe provides me with a useful ‘way out’ of the task, provided a person accepts the simplifying assumptions he provides: that it is simpler and equivalent to ask why biological states are intrinsic”

Sorry, I don’t think this is even remotely up for grabs. It’s pretty straightforward.

A chemical reaction is not observer-relative. Computation IS observer-relative, like a painting of a tree or a view of a sunset. There is no computation without the observer and his agents: the observer and his agents are part of the computational process. You don’t need an observer – or the idea of an observer – for a chemical reaction to take place.

It is the observer who created the computation by mapping an arbitrary physical attribute (the voltage level in the chip; the up or down state of a steam valve; the shape of a symbol on a piece of paper) to the symbol in the first place. The observer (or his agents), whether real or idealized, has to be there, for otherwise there are no mapping rules and no computations.

Going back to my first comment, no. 1:
All of the physical, intrinsic functions of the body are the causation – the homunculi, if you like – of what drives our brain, via the connectome, for all actions, needs, deeds and seeds. This is how consciousness evolved from unconscious activity, and it is the physical duality and interaction we experience between our non-conscious and conscious nervous spectrum.

16. Stephen says:

@John 14

Take the example of a refrigerator with an internal computer to control when the compressor comes on and when the defrost heater comes on. It is clearly computational, but its operation is also intrinsic by your definition. It doesn’t need anyone to observe its functionality for it to occur.

Your last paragraph seems to imply that because the creator of the refrigerator had to provide the symbolic mapping, it must be extrinsic. That would then mean that anything created by a person is automatically extrinsic, which is a stretch. I’m not sure that was what you meant.

Getting back to the original issue of the relative functions of the unconscious and conscious: it seems we make far too much of the primacy of the conscious in defining ourselves. We are both our conscious and our unconscious, each part contributing to the whole in its own way, and each highly integrated with the other.

17. john davey says:

Stephen

“Take an example of a refrigerator with an internal computer to control when the compressor comes on and when the defrost heater comes on. It is clearly computational, but it’s operation is also intrinsic by your definition. It doesn’t need anyone to observe its functionality for it to occur.”

You need to distinguish computation from the physics that implements the computation. All the physics that goes on in the refrigerator is intrinsic. There is no computation whatsoever. A Martian who was unaware of computation would simply say “this is a machine that controls cooling”. He wouldn’t see a computation going on: just a cooling system going on and off. He would look at the controlling chip and say “here is a piece of silicon. It has electrical currents flowing through it”. He wouldn’t see any computing.

There is only computation once you feel the need to say that a computation is going on in the controlling chip. In the absence of such an observer (which can be the idea of an observer, or the agent of an observer), a computer chip is just a piece of silicon with certain electrical properties.

18. john davey says:

Stephen

“Your last paragraph seems to imply that because the creator of the refrigerator had to provide the symbolic mapping, it must be extrinsic. That would then mean that anything created by a person is automatically extrinsic, which is a stretch. I’m not sure that was what you meant.”

You only provide symbolic mapping rules when you intend to create a computational machine – that is, a machine (consisting of self-contained, physically intrinsic properties) that is intended to represent a computational process. A computational process is a mathematical process, after all – such processes do not subsist in the universe of real things, but in the domain of mathematical objects.

The representation of the computational process is extrinsic to the (physical) machine that we describe as the computer, as only we know that the machine is being used in such a way. Likewise, there is nothing about a potato peeler’s physical makeup that could be described as “intrinsic” to peeling potatoes. It is merely that its physical construction lends itself to the task we have in mind for it.

20. Stephen says:

John

Sure, a computer can be seen as just a bunch of elementary particles with energy levels and relative locations. It’s just that looking at it that way isn’t very useful. It can also be seen as registers changing values as it computes new ones. It’s the same thing, just viewed from a different perspective. Looking at it from a higher level lets you see the organization of all of those elementary particles. The computations exist in that organization. If an observer isn’t aware of the computations, it doesn’t mean they don’t exist.

21. john davey says:

Stephen

You’re right, but you’re failing to spot the difference between the two.

The physical characteristics of a silicon chip are intrinsic to it. The computational aspects (“higher level”, as you call it) are extrinsic to it. The latter require an observer or (as I said before) the “idea” of an observer. Without the observer, or the idea of one, there is no computation.

There are lots of resources out there that explain this far better than I can. Searle has a good deal to say about it, and he has a knack of making things easy to understand. Perhaps you should read some Searle and he will make the distinction clearer in your mind.

Maybe I can make it simpler. It’s like this: to change a physical thing is to change its intrinsic characteristics. Let’s say I have a spinning top and it’s spinning in an anticlockwise direction. If I reverse the spin to clockwise, I’m changing an intrinsic property of the spinning top.

If I’m making a computer out of spinning tops I can say “anticlockwise spin represents 0; clockwise spin represents 1”. So in the above example my spinning top goes from a value of 0 to a value of 1.

I then decide I’m not happy about this so I say “no – let clockwise spin represent 0 and anticlockwise represent 1”. Now currently my top is spinning in an anticlockwise direction, so in the new scheme this represents a value of 1 – not 0, as under the old scheme.

So the extrinsic characteristics of the spinning top have altered – the computational value has gone from 0 to 1 – despite the fact that the physical characteristics have not altered at all.

I hope this makes it clearer and demonstrates to you that there is no such thing as an inherent computational scheme. All are extrinsic.
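The whole spinning-top argument fits in a few lines of code (a sketch; the scheme names are mine):

```python
# The physical fact, which nothing below ever changes:
physical_state = "anticlockwise"

# Two observer-chosen mapping rules for reading the same physics as a bit:
scheme_old = {"anticlockwise": 0, "clockwise": 1}
scheme_new = {"anticlockwise": 1, "clockwise": 0}

value_old = scheme_old[physical_state]   # 0 under the first convention
value_new = scheme_new[physical_state]   # 1 under the second convention

# Same intrinsic physics; the 'computational value' flips by observer fiat.
```

The dictionaries live with the observer, not with the top: nothing in the top’s physics picks out one mapping over the other.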

So you are right when you say we can think of a silicon chip as computing as well as being a physical system. But we cannot possibly state that the computation is intrinsic. It is extrinsic. As I said, a Martian who had only knowledge of physics and none of computation would not need any further account to explain why the fridge worked: the physical account would suffice.

Why does this matter? Because the mental states of persons are intrinsic to the brains that produce them. If computation cannot be intrinsic – and it simply cannot be – then neither can mental states be produced by computation. It is a stumbling block that John Searle spotted a long, long time ago.

22. Stephen says:

John

I guess we just disagree. I just don’t see much extrinsic/intrinsic difference between a bunch of neurons organized to produce consciousness and a bunch of transistors organized to grind through some code. The biggest difference is that we understand computers from top to bottom and our knowledge of the workings of the brain is very murky and incomplete. The temptation to assign mystical explanations to fill in the gaps can be strong.

23. john davey says:

Stephen

“I guess we just disagree. I just don’t see much extrinsic/intrinsic difference between a bunch of neurons organized to produce consciousness and a bunch of transistors organized to grind through some code.”

This seems to be a standard mistake that a lot of people make. It goes like this: “if I can’t provide an explanation for a fundamental problem in computationalism, I’ll look for a similar feature in brains and argue, by some bizarre principle of the double negative, that by being wrong twice I’m being right once”.

There is no epistemological symmetry between brains and computers. Computers are objects of definition : they are totally, completely understood. There is nothing about them that is not known. Brains are objects of scientific enquiry and are almost entirely not yet understood.

So to the question “how do computers acquire intrinsic attributes?” the answer must be provided, in total, using only the principles of computation. The answer is not simply to say “I don’t see how brains do it if computers can’t”, which is effectively what you just said.

Brains create intrinsic features by an as-yet unknown mechanism. In time it may well be identified. But the current level of scientific ignorance does not protect you in any way from answering basic questions about computers, about which there is zero scientific ignorance. If you cannot account for how computation produces intrinsic features, then you are admitting that the proposition is a dead duck.

To my knowledge, no one ever has or ever will – it’s impossible by definition.

24. john davey says:

Stephen

“The temptation to assign mystical explanations to fill in the gaps can be strong.”

I do hope you aren’t applying this pat gibberish to me. It seems to me that computationalism is the new de facto mysticism: instead of us being blobs of protein in a meaningless universe (more or less my viewpoint), computationalist fantasy seems driven towards proving that humans are angelic blocks of mathematics.

25. Stephen says:

You seem to be putting a lot of words in my mouth. What I said was that the computations of a computer are intrinsic to it. It doesn’t need an observer to make it do what it does, or to interpret the results, once it has been created.

Your spinning top analogy doesn’t work, because the same thing applies to the brain. Usually a faster neural firing rate means more, but we could easily envision a brain where a longer inter-spike period means more.

26. john davey says:

“What I said was that the computations of a computer were intrinsic to it.”

Computation is not intrinsic, by its very definition.

“Your spinning top analogy doesn’t work because the same thing applies to the brain.”

There it goes again! It’s remorseless… I actually don’t think a lot of people realise they’re doing it: “I can’t answer a question about computation, so I’ll point out that the ‘same’ issue arises with brains.” Wrong. Brains and computers are NOT equivalent, epistemologically speaking. You cannot answer a question about the inability of computation to do something by pointing out that you don’t see how the brain does it either. Computation is not expanding; it is a limited, fully-scoped area of mathematics. If you can’t answer a question about it now, you’re never going to.

Brains are not restricted to the insignificant, trifling scope of computational behaviour. They are real objects of the universe, limited only by the capabilities of that universe – not by some branch of mathematics. You don’t know how they work, and don’t claim that you do – therefore don’t purport to be able to claim that they “can’t do x” because you can’t see why.

27. Hunt says:

“Brains are not restricted to the insignificant, trifling scope of computational behaviour. They are real objects of the universe, limited only by the capabilities of that universe – not by some branch of mathematics. You don’t know how they work, and don’t claim that you do – therefore don’t purport to be able to claim that they ‘can’t do x’ because you can’t see why.”

Let’s say you can apply a hypothetical scope to brains, described by the fact that they are composed of neurons, and neurons are cells of prescribed behavior. Is this an ironclad argument? No, of course not, but it does have the virtue of forcing the question: your argument relies on some unknown natural (or supernatural) property that cells must have that prevents a mathematical or programmatic abstraction. Nobody knows what that might be.

If neurons (cells) did have an abstract simulation, then the same argument would apply to them: the simulation is open to interpretation. Therefore no network of simulated neurons, no matter how complex, would have any intrinsic value, or “meaning”.
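For concreteness, here is the sort of thing “a network of simulated neurons” means in practice: a toy leaky integrate-and-fire update (all weights, inputs and the threshold below are invented for illustration):

```python
def step(potentials, weights, inputs, threshold=1.0, leak=0.9):
    """One update of a toy layer of leaky integrate-and-fire neurons:
    decay each membrane potential, add weighted input, then spike (emit 1)
    and reset to zero when the threshold is crossed."""
    new_potentials, spikes = [], []
    for v, w, x in zip(potentials, weights, inputs):
        v = leak * v + w * x            # leak, then integrate
        if v >= threshold:
            spikes.append(1)
            v = 0.0                     # reset after firing
        else:
            spikes.append(0)
        new_potentials.append(v)
    return new_potentials, spikes

p = [0.0, 0.0]
p, spikes = step(p, weights=[0.6, 0.2], inputs=[2.0, 2.0])
# neuron 0 receives 1.2 and fires; neuron 1 receives 0.4 and does not
```

On john’s view the 0s and 1s here mean “spikes” only because we say so; on the computationalist view, enough of these updates, suitably connected, is all a brain is doing anyway. The code itself settles nothing, but it shows what the two sides are arguing over.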

28. john davey says:

“Your argument relies on some unknown natural (or supernatural) property that cells must have that prevents a mathematical or programmatic abstraction.”

I’m not going to comment on the “supernatural” allegation. It’s the last resort of a computational scoundrel.

Not all mathematical relations are computational; nor is it a requirement of the universe that all phenomena must be modellable in mathematical terms. That is a dogma, unsupported by anything more than centuries of physics propaganda, physics being the self-styled cheerleader of rational thought.

It may well be that certain components of mental life are modellable. One would certainly hope so. But we know already that no amount of mathematics generates the irreducible. No amount of mathematics generates the semantics of time or of space, for instance. To that extent the universe is already ‘unmathematizable’. It is already mathematically insoluble, as “what is time?” and “what is space?” are perfectly valid questions which physics is incapable of answering. It doesn’t have the tools.

“Therefore, no network of simulated neurons, no matter how complex, would have any intrinsic value, or “meaning”.”

A network of neurons has an intrinsic nature. I don’t know where you got the impression I was on about ‘meaning’ – maybe the same place you got the idea I was being ‘supernatural’? A brain is a wired-together mass of matter with an internal contiguity and coherence. It’s tied together with real physical forces (not the observer-related strings of computational systems). It’s free to create consciousness because it’s not a computer. It’s a real thing.

It makes no sense to computationalists that it does so, and that’s because they just can’t visualise a world in which brains aren’t computers. They have to see a brain as a series of dots linked by mathematical equations. They are incapable of seeing the brain as a lump of stuff that just does things. There is a fixation with neurons in particular, and an industry has now developed around this biological artefact about which actually very little is known – but that doesn’t stop all the network-style diagrams and the computational simulations. It’s pie in the sky: learning to fly before you can even crawl.

29. Hunt says:

I’m not implying anything about you! And even if I were, why would that be a slight? Plenty of people believe in the supernatural, philosophers even. Even being a non-theistic dualist is a perfectly respectable position to take, unpopular though it may be.

You seem sold on the intrinsic/extrinsic distinction. I remain skeptical that there’s anything there. First, it’s unclear to me that this is anything other than a word game. All things “exist in the world” and “do things”. I’m not sure the idea that you can abstract a computer “doing things” to a computation says anything different than that you can interpret a brain “doing things” however you like.

Second, I’m not sure that what you call intrinsic cannot emerge from complex computation. You counter by saying that everything is known about computation, which should be news to CS departments the world over. How is this any different than saying we know how matter interacts, therefore we know everything about all material combinations?

Finally, it’s an obscurantist argument. In that way it does bear some resemblance to substance dualism. We already know brains are complex organs composed of neurons, synapses, endocrine secretions, etc. There actually is nothing terribly mysterious about these processes – certainly nothing preventing us from simulating them on a computer. Why then jump to a complication? Now we have an intrinsic/extrinsic divide. Why not just stick to what seems obvious: that mental phenomena arise from essentially understood cellular processes, and that, given enough computational power, there is no reason these things could not be simulated computationally, down at least to the level that we’re interested in?

30. john davey says:

Hunt

“You seem sold on the intrinsic/extrinsic distinction.”

Not just me – it’s a major topic of philosophy. See my spinning top example in #21; I can’t make it clearer than that. If that doesn’t convince you, try to find Searle on the subject (as well as on other things). He is clear and precise. His description of the confusion between ‘subjective’ and ‘objective’ is masterly.

If that doesn’t make it clear to you, nothing will, as you have no intention of being persuaded there is a difference. If you think there is no difference, elaborate on the point; don’t just say ‘I’m not convinced there is a difference’ without further argument. Otherwise I shall just assume you don’t have one.

Incidentally, I think I asked you to define ‘time’, didn’t I? I’m still waiting… I think you maintained it wasn’t irreducible.

“All things “exist in the world” and “do things””

But not in the same way.

” I’m not sure the idea that you can abstract a computer “doing things” to a computation says anything different than that you can interpret a brain “doing things” however you like.”

And the reason for that conclusion is? See #21.

“which should be news to CS departments the world over”

What isn’t known? Is the Turing machine in a state of flux? Don’t confuse the developing technology of computers with the principles on which it is based. The principles of computation are static: fully scoped. The growing of oranges is a developing branch of farming, but oranges are still oranges.

“How is this any different than saying we know how matter interacts,”

This would be news to physics departments the world over. The correct statement is: “we have a series of material pictures of matter interaction that seem accurate, apart from those areas where they are not accurate and where the different and competing pictures totally disagree (but that doesn’t matter, because we don’t have the technology or the resources to resolve them), notwithstanding the limited scope of energy resources and detection tools available to us, which make probable the existence of large amounts of phenomena, as yet undetected, that contradict current theoretical models and leave physics as open-ended as ever”. Not quite as snappy.

The last point matters. Physics has no end point. Computation is different: its principles can be derived and analysed indefinitely, but those principles themselves are an end point. It’s going nowhere. Von Neumann is Von Neumann; the Turing model is the Turing model.

The other point, of course, is that matter acts intrinsically and computers don’t. Until you see the difference, which is pretty straightforward, there is not much further to be said.