The point is that the complexity of such a system goes beyond the realm of computer science, and I'm not just talking about the algorithmic core (although wouldn't such a system *be* the algorithmic core in this case anyway?). You're talking from a "physics" point of reference here (I assume you are/were a physics student, which would explain a lot about the horror of this discussion), which doesn't translate to real-world possibility and doesn't take into account the limitations of computer science. Yes, you are right, there is more than just an algorithmic core in robots. They have arms/legs to move about with, for example. There are other mechanical functions. There are even connections with wires that transmit data and commands from the core to these parts! BUT even with all these other mechanical functions, they still abide by the basic laws of computer science: Turing machines describe the limit of what is possible. Neural networks, and the construction of them, lie beyond those limits.

OrangeRakoon wrote:You keep ignoring or missing the whole point that robots exist in reality and follow all the same laws that apply to organic matter, so it is entirely plausible for a sufficiently complex robot to be conscious.

Except a sufficiently complex robot is itself implausible. You may as well replace the word robot with ham sandwich.

Warmij wrote:You're talking from a "physics" point of reference here (I assume you are/were a physics student, which would explain a lot about the horror of this discussion)...

Yes, I studied physics.

Warmij wrote:...which doesn't translate to real-world possibility and doesn't take into account the limitations of computer science. Yes, you are right, there is more than just an algorithmic core in robots. They have arms/legs to move about with, for example. There are other mechanical functions. There are even connections with wires that transmit data and commands from the core to these parts! BUT even with all these other mechanical functions, they still abide by the basic laws of computer science: Turing machines describe the limit of what is possible. Neural networks, and the construction of them, lie beyond those limits.

I disagree entirely. Why is a robot that is made up of more than just a Turing machine limited to only the behaviour of a Turing machine?

That article you linked to includes a relevant paragraph admitting the real possibility:

With this in mind, we can still speculate about whether non-biological machines that support consciousness can exist, but we must realize that these machines may need to duplicate the essential electrochemical processes (whatever those may be) that are occurring in the brain during conscious states. If this were possible at all without organic materials, it would presumably require more than Turing machines, which are purely syntactic processors (symbol manipulators), and digital simulations, which may lack the necessary physical mechanisms.

Although I still disagree with the final assertion that digital simulations may lack the necessary physical mechanisms. If the simulation fails for that reason, it is surely simply because those physical mechanisms also need to be included in the simulation.

I still don't understand, in the assertion that simulation =/= reality, what the distinction actually is.

barrybarryk wrote:Except a sufficiently complex robot is itself implausible. You may as well replace the word robot with ham sandwich.

OrangeRakoon wrote:Why is a robot that is made up of more than just a Turing machine limited to only the behaviour of a Turing machine?

That article you linked to includes a relevant paragraph admitting the real possibility:

With this in mind, we can still speculate about whether non-biological machines that support consciousness can exist, but we must realize that these machines may need to duplicate the essential electrochemical processes (whatever those may be) that are occurring in the brain during conscious states. If this were possible at all without organic materials, it would presumably require more than Turing machines, which are purely syntactic processors (symbol manipulators), and digital simulations, which may lack the necessary physical mechanisms.

This alone shows you don't actually understand.

Turing machines describe the limits of mechanical devices. The computational core can do whatever it wants within those limits. "More than a Turing machine" isn't something that can be built to form the required complexity. When he says "it would presumably require more than Turing machines", he isn't saying "it's possible if you go beyond Turing machines", because you can't go beyond Turing machines in this situation. He's saying "it would require that, so it's impossible".

Because the level of complexity found in organic living beings cannot be replicated artificially?

Maybe if you had millions of nuclear engineers working at an atomic level for hundreds or thousands of years, but we're talking about what's within the realms of real possibility here, not "what is physically possible at a physical level without taking reality into account".

Regardless, the complexity of organic living beings may have arisen through unguided evolutionary means in roughly 4 billion years from the simplest form, but it is not entirely unreasonable to imagine an artificial process being many orders of magnitude quicker. Look how far we have come in just ~100 years, since before the beginnings of modern AI research. We're a good way through that 4 billion years already.

Warmij: I don't see what the fundamental difference between a sufficiently precise simulation and reality is. I'm not being an edgelord here; it's just that we know that on a fundamental level the Universe obeys certain mathematical and statistical laws, and if a system x is known to obey the set of mathematical laws Y then I don't see why it matters to x whether Y is implemented directly by the Universe or indirectly by a computer: the behaviours encoded are the same, no?
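To sketch what I mean (a toy example of my own, with made-up numbers, not anything from real research): take the law for free fall, x(t) = ½gt². The Universe "implements" it directly; the little program below implements it indirectly, step by step, and the behaviour encoded is the same up to discretisation error.

```python
# Toy sketch: a known physical law implemented by a computer rather than
# "by the Universe". Semi-implicit Euler integration of free fall from rest.

def simulate_fall(g=9.81, t_end=1.0, dt=1e-4):
    """Integrate dv/dt = g, dx/dt = v from rest; return position at t_end."""
    x, v = 0.0, 0.0
    for _ in range(int(round(t_end / dt))):
        v += g * dt   # the same law, applied one small step at a time
        x += v * dt
    return x

analytic = 0.5 * 9.81 * 1.0 ** 2   # what the Universe "computes": x = 1/2 g t^2
simulated = simulate_fall()        # what the program computes
print(abs(simulated - analytic))   # a tiny discretisation error, nothing more
```

Shrink `dt` and the two answers agree to whatever precision you like; the system being simulated can't "tell" which implementation it is running on.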

Biological systems are complex, but not infinitely so. We can't simulate them to arbitrary precision today, but it won't take us forever to get there. I think we'll see OpenWorm make real strides towards a full 'digital life form' over the next decade.

If you fundamentally feel that reality is in some sense 'special' then I'm not sure I'll be able to convince you, other than to say that I work as a researcher at the boundary between physics, computer science, and human biology - I simulate how medicines affect the building blocks of the human body - and the sentiment doesn't make a great deal of sense to me personally.

I hear there is some resentment towards this kind of reductive world-view across the proverbial campus though, so if your background happens to be in philosophy or the humanities I understand if you don't care what I think!

The electrochemical processes in the brain are so complex it'd be practically impossible to replicate at an atomic level without unrealistic numbers of engineers in an unrealistic amount of time.

The non-distinction you both keep making between reality and simulation is silly. It's like me writing a novel about a fantasy world in extreme, atomic detail (which would be impossible) and then saying "look, it's there, in the book". It's written down and showcased, but the universe doesn't actually exist - it's all just words on paper.

As for me, I'm purely focused on computer science, and the possibility of consciousness when viewed from a computer science perspective (the only reasonable one, since that's where such a thing would be implemented anyway), and all the evidence says no. (My background is a university degree in Computer Science, with research into the philosophy of artificial intelligence.)

Also, OpenWorm is literally just a computational model. They describe it as a "digital lifeform" on their main page, but their "Our goal" page is much more realistic.

barrybarryk wrote: I've been programming for over twenty years and I've never, ever seen a computer behave unexpectedly with enough research. Such a thing would stand out.

OK, I've been programming a long time too, so let's think about it together! You present some input data to a neural net and carefully trace through how that data travels through the net, how various neuron-neuron connection weightings get tweaked as a result, and what the resulting output is. Is this enough IYO to prove the system isn't conscious? Because doing the same for brains is certainly an ultimate goal of neuroscience -- far off, but we're slowly getting there with very simple organisms.

My point wasn't really supposed to be that the system would in some sense go against its programming, but sorry if it came across like that; I was trying to say only that 1. a sufficiently complex machine learning algorithm could prove comparably non-trivial to a real nervous system to trace in this way and 2. such a trace wouldn't be strong evidence that the system isn't conscious. How data travels through these systems can be pretty complex; we aren't talking about a giant switch/case.
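For concreteness, here's the sort of trace I mean, on the smallest possible "net" - a single sigmoid neuron (a toy of my own, obviously nothing like a production system): every intermediate value and every weight tweak is fully inspectable, which is exactly my point - the inspectability by itself tells you nothing either way about consciousness.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def traced_step(w, b, x, target, lr=0.5):
    """One forward pass plus one gradient update; every value is visible."""
    z = w * x + b                              # pre-activation
    y = sigmoid(z)                             # output of the 'net'
    loss = (y - target) ** 2                   # squared error
    dz = 2.0 * (y - target) * y * (1.0 - y)    # backprop through the sigmoid
    return w - lr * dz * x, b - lr * dz, loss  # tweaked weight and bias

w, b = 0.1, 0.0
w, b, loss1 = traced_step(w, b, x=1.0, target=1.0)
w, b, loss2 = traced_step(w, b, x=1.0, target=1.0)
# loss2 < loss1: the connection weightings moved the right way,
# and we can point at exactly which arithmetic made them do so.
```

Scale that up to millions of weights and the trace is still mechanically possible, just non-trivial - which is all I was claiming.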

Back to Warmij:

Warmij wrote:The electrochemical processes in the brain are so complex it'd be practically impossible to replicate at an atomic level without unrealistic numbers of engineers in an unrealistic amount of time.

Agreed, but do we need to do it atom-by-atom? Atom-by-atom is just a thought experiment. Fine, neuron-by-neuron is still unrealistic for the moment, but I suspect there are further optimisations to be made there; evolution is messy and the brain is probably not the most efficient way to achieve consciousness. Perhaps certain parts of the brain can be modelled sufficiently well by mathematical functions on sets of input signals, or even optimised away entirely. I don't know; we're now straying a bit too far outside of my field for me to have real thoughts on those ideas.
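As an illustration of "mathematical functions on sets of input signals" (this is the standard leaky integrate-and-fire abstraction from textbooks, not a claim about how the brain really works, and the parameter values are made up): a neuron's entire chemistry gets replaced by one little update rule, and you still get recognisable spiking behaviour.

```python
def lif_spikes(inputs, tau=10.0, v_thresh=1.0, v_reset=0.0, dt=1.0):
    """Leaky integrate-and-fire: dv/dt = (-v + input)/tau, spike at threshold."""
    v, spikes = 0.0, []
    for i in inputs:
        v += (-v + i) * dt / tau   # leaky integration of the input signal
        if v >= v_thresh:
            spikes.append(1)       # fire...
            v = v_reset            # ...and reset
        else:
            spikes.append(0)
    return spikes

# A steady drive above threshold makes it fire repeatedly; no drive, no spikes.
print(sum(lif_spikes([2.0] * 50)))   # some spikes
print(sum(lif_spikes([0.0] * 50)))   # none
```

Whether abstractions at this level are "good enough" for the purposes we're arguing about is, of course, exactly the open question.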

EDIT (to tie that paragraph back to the main discussion): But however we fudge it, for me to call it a 'person' (or whatever) the model just has to be good enough to represent real brain behaviour in a way I can't distinguish. If I can have a chat to it about how its day went without it crashing or telling me it likes to eat pasta every other sentence, I'll start to suspect there's a spark in there somewhere. So if it can convince me for the rest of my life, I'll die thinking it's probably conscious, and that's that! Knowing it was a system of, say, 'only' a few million virtual neurons connected to various black box 'optimised brain function modules' wouldn't make me treat it any differently.

Warmij wrote:The non-distinction you both keep making between reality and simulation is silly...

Sure, OK. Your objections on the whole seem fairly 'philosophical' in nature, which is fine. My experience is that mathematical models work for predicting reality, but I don't really claim to have further thoughts on the deeper "is it real though?" problem, because that question just hasn't been meaningful in anything I've ever worked on. My answer will always be "Why does it matter?", I suppose! With that in mind, I'm happy to call time on this here, but thanks for chatting to me! (I'll still read your reply to this if you make one, but I might not respond myself.) I'll finish up with more OpenWorm though, because it's a cool project(!):

Warmij wrote:Also, OpenWorm is literally just a computational model. They describe it as a "digital lifeform" on their main page, but their "Our goal" page is much more realistic.

Sure, I follow the project fairly closely -- a computational biologist friend of mine is a code contributor. The project is in its infancy and there is a lot of groundwork to be laid - they are currently focusing on dynamics simulations of the worm's muscular system - but their ultimate goal is to perform a cell-by-cell simulation of the entire organism. I think they'll get there! Consider subscribing to their low-volume mailing list; whether or not you think it'll one day be digital life or not, I think you'll still probably find it quite interesting!

Karl wrote:OK, I've been programming a long time too, so let's think about it together! You present some input data to a neural net and carefully trace through how that data travels through the net, how various neuron-neuron connection weightings get tweaked as a result, and what the resulting output is. Is this enough IYO to prove the system isn't conscious? Because doing the same for brains is certainly an ultimate goal of neuroscience -- far off, but we're slowly getting there with very simple organisms.

No, largely because neural nets have little to do with consciousness outside of their geography. It's why most of these posts are word salad. Intelligence, sentience and consciousness are all different things. Neural nets are a simplified model of how our neurons work (more particularly, how we think neurons work, today); they're a stepping stone to understanding intelligence, not consciousness, and are an accurate recreation of neither. The last two are little more than abstract psychology and philosophy theories with little to no bearing on Comp Sci or AI research outside of the still-hypothetical Strong AI. Even Searle, the guy who came up with the Chinese Room thought experiment we were talking about, thinks a program is incapable of recreating consciousness (a mind). And his explanation has nothing to do with how powerful or accurate the program is, just that programming as a concept, as we have it now, cannot break through from syntactic symbols to semantic symbols, so it will never, ever understand what it is doing.

Is there any reason you think it is possible to prove consciousness? Of anything outside yourself? Or even yourself? Descartes didn't say: "he doubts, he thinks, therefore I am".
Even for people, the state we call consciousness is assumed, not proven. It's the base assumption that everything else is built on. With the key being we're not conscious because of how we act, we're conscious because of how we think. Whether or not it's demonstrable to the outside world is a whole other problem, because one must assume there is an outside world.

barrybarryk wrote:With the key being we're not conscious because of how we act, we're conscious because of how we think. Whether or not it's demonstrable to the outside world is a whole other problem because one must assume there is an outside world.

Agree with everything you've said, and whilst I agree with that we are conscious because of how we think and actions do not make us conscious, generally actions and behaviours that humans exhibit could only ever really be rationalized if there was a 'self'. Not that this proves there is such thing as a 'self' or that sentience is experienced by other people, but I'd say it's highly likely. But then again, that's more philosophy than CompSci.

Warmij: Sure, I look forward to reading it. I mentioned before, but please forgive me if I don't reply, because this is getting very 'philosophy' now and I'm not really equipped for it.

Same for you barrybarryk, but I'll respond to this part of your post because I'm interested in hearing your further thoughts on it:

barrybarryk wrote:The last two are little more than abstract psychology and philosophy theories with little to no bearing on Comp Sci or AI research [...] Is there any reason you think it is possible to prove consciousness?

OK, sure -- you're right, we should define terms and stop dancing around that issue. (And this is interesting, because I think this hits the nail on the head for me, but clearly you are coming at it from the polar opposite point of view to me!) No, I don't think we can prove or disprove that a system that seems conscious is, because 'consciousness' and 'sentience' are ill-defined and don't appear to have any physical meaning. What can we measure? We can probably measure whether a system learns things with sufficient intelligence to be convincingly human, and we can probably measure whether it appears outwardly to have a sense of self (i.e. can refer to its own memories, express desires, and so forth).

Without being 2edgy, I think I lean towards the view that the 'mind' is an illusory by-product of a sufficiently complex brain monitoring its own thought processes. But I try not to think too hard about it, because I don't feel the question is physically meaningful or scientifically helpful, given we aren't able to capture a 'mind' or measure 'mindness'.

That's just it, intelligence is a different concept. Making a computer intelligent (even jumping straight to true intelligence rather than the appearance of, assuming it can exist without the rest of the mind) won't spontaneously cause it to develop desires or wants etc. Those functions come from complex relationships in our mind, complex semantic relationships. Without those, no matter how intelligent or complex you make the model, it will never understand what it is doing and vital cognitive functions like reasoning, sapience etc will always be off the table. It may still give an answer to queries, but it'll lack anything along the lines of wisdom in forming that answer. It'll still be just a really impressive calculator.

Karl wrote:What can we measure? We can probably measure whether a system learns things with sufficient intelligence to be convincingly human

It's worth mentioning that this is only half of intelligence, and by far the easiest part to replicate. The greater part is absolutely the application of learned knowledge, not just the acquisition of it. Anything can learn, after all - that's effectively just storing and accessing data - but the means by which it uses that knowledge is what really defines intelligence. And that's just the core of intelligence, without really dealing with the things that interlink with it. And as others have said, these things themselves don't come close to being a consciousness or having sapience or sentience; intelligence itself is a very separate and relatively sterile thing from that.

Karl wrote:Agreed, but do we need to do it atom-by-atom? Atom-by-atom is just a thought experiment. Fine, neuron-by-neuron is still unrealistic for the moment, but I suspect there are further optimisations to be made there; evolution is messy and the brain is probably not the most efficient way to achieve consciousness. Perhaps certain parts of the brain can be modelled sufficiently well by mathematical functions on sets of input signals, or even optimised away entirely. I don't know; we're now straying a bit too far outside of my field for me to have real thoughts on those ideas.

Is it not? What is, then? I genuinely can't think of where you're getting that from, bar pure speculation. The brain is probably the most efficient - after all, a brain takes 9 months to develop (or less if you're premature) after a nice bit of Sexual Action, whereas we've been dabbling with research into artificial intelligence, as a species, for hundreds of years and haven't even created one workable brain yet. I'd say 9 months of biology taking its course is more efficient than that. And, unless we can create a magical duplication device (because it'd probably be hand-constructed neuron-by-neuron, as you say), even if we do create one eventually, the second and third will take just as long.

Not that neuron-by-neuron is feasible.

Karl wrote:EDIT (to tie that paragraph back to the main discussion): But however we fudge it, for me to call it a 'person' (or whatever) the model just has to be good enough to represent real brain behaviour in a way I can't distinguish. If I can have a chat to it about how its day went without it crashing or telling me it likes to eat pasta every other sentence, I'll start to suspect there's a spark in there somewhere. So if it can convince me for the rest of my life, I'll die thinking it's probably conscious, and that's that! Knowing it was a system of, say, 'only' a few million virtual neurons connected to various black box 'optimised brain function modules' wouldn't make me treat it any differently.

But just because you can't distinguish that there's no consciousness doesn't mean there actually is any. I mean, it doesn't really matter either way, and it'd probably be better if we just had something indistinguishable from consciousness without actually there being consciousness (because of the ethical side to discuss), but yeah.

Karl wrote:Sure, OK. Your objections on the whole seem fairly 'philosophical' in nature, which is fine. My experience is that mathematical models work for predicting reality, but I don't really claim to have further thoughts on the deeper "is it real though?" problem, because that question just hasn't been meaningful in anything I've ever worked on. My answer will always be "Why does it matter?", I suppose! With that in mind, I'm happy to call time on this here, but thanks for chatting to me! (I'll still read your reply to this if you make one, but I might not respond myself.) I'll finish up with more OpenWorm though, because it's a cool project(!):

You're right, mathematical models work for predicting reality. Important note being "predicting". They can be used for research, yes.

Warmij wrote:The brain is probably the most efficient - after all, a brain takes 9 months to develop (or less if you're premature) after a nice bit of Sexual Action, whereas we've been dabbling with research into artificial intelligence, as a species, for hundreds of years and haven't even created one workable brain yet. I'd say 9 months of biology taking its course is more efficient than that. And, unless we can create a magical duplication device (because it'd probably be hand-constructed neuron-by-neuron, as you say), even if we do create one eventually, the second and third will take just as long.

I think you're comparing unlike things here. A 9 month pregnancy can be thought of as the construction time for a brain (although arguably the brain takes much longer than that to fully develop as a fully conscious mind, as it continues developing throughout childhood). This would be equivalent to the time taken to build a machine, including "training" it, already knowing the design.

The hundreds of years of AI research is development time, which is more analogous to the time it took for humans/the brain to evolve - roughly 4 billion years.

In terms of working efficiency, it's a pretty fair assumption that the brain is not perfectly optimised. Evolution is demonstrably imperfect at designing things for specific purposes.

Warmij wrote:But just because you can't distinguish that there's no consciousness doesn't mean there actually is. I mean, it doesn't really matter either way, and it'd probably be better if we just had something indistinguishable from consciousness without actually there being consciousness

I remain unconvinced that there is any distinction, or that something indistinguishable from consciousness without being conscious can exist without an assertion that consciousness has some physical existence (which I do not believe it does).

On the subject of syntactical programming vs semantic understanding, I think a hard distinction is drawn where there is none. It seems plausible that semantics can arise from a computer program as long as it replicates those complex semantic relationships as found in the mind.

OrangeRakoon wrote:On the subject of syntactical programming vs semantic understanding, I think a hard distinction is drawn where there is none. It seems plausible that semantics can arise from a computer program as long as it replicates those complex semantic relationships as found in the mind.

Again with another "but if we assume X is possible, surely then X is possible" assertion.
Programming is syntactical. Programming languages are based on strict syntax, and the processors that run the programs on even stricter syntax. They cannot replicate complex or simple semantic relationships; it is fundamentally against how they operate.

Last edited by barrybarryk on Fri Sep 23, 2016 10:56 pm, edited 1 time in total.

The idea that meaning could arise from the execution of a program following strict syntax seems only as absurd as the idea that meaning could arise from the interactions of particles following strict laws.

OrangeRakoon wrote:I think you're comparing unlike things here. A 9 month pregnancy can be thought of as the construction time for a brain (although arguably the brain takes much longer than that to fully develop as a fully conscious mind, as it continues developing throughout childhood). This would be equivalent to the time taken to build a machine, including "training" it, already knowing the design.

The hundreds of years of AI research is development time, which is more analogous to the time it took for humans/the brain to evolve - roughly 4 billion years.

By that logic, since that is a prerequisite, humans must first have evolved to the point where we as a species are intelligent enough to then create artificial consciousness more quickly (which, again, I doubt will ever happen). So the "development time" would include those billions of years anyway. I realise this may seem to be somewhat of an absurdist point and could be applied to any software (e.g. Facebook didn't take X weeks to develop, but billions of years), but in all actuality we're talking about going from zero to consciousness here, if you're discussing evolution being "inefficient". Think about what it started from and where it's ended up. I'd love to be told of another way to go from zero to what we have now (but, then, we are told by others, and that's creationism...).

Construction time per conscious mind? Sure, we can predict that once we know how to develop one, the construction of another would be quicker than developing the first, BUT even when construction is at its peak, would it necessarily be quicker than 9 months each? I don't think so. I heavily doubt it, especially if it was created on, as suggested, a neuron-by-neuron basis (which would take forever, and that's even assuming there was a finalized blueprint to start from).

OrangeRakoon wrote:In terms of working efficiency, it's a pretty fair assumption that the brain is not perfectly optimised. Evolution is demonstrably imperfect at designing things for specific purposes.

That's because evolution doesn't "design" things (although perhaps there is/was an intelligent creator who designed the beginning of events that set evolution in motion, but perhaps there isn't/wasn't either). For what it is - essentially a bunch of atoms and chemicals thrown together with a "hey, let's see what happens" - it is pretty efficient. If you go down the design route, a neuron-by-neuron basis, or an atom-by-atom basis (which would be required for the level of complexity needed for a conscious mind), would be incredibly inefficient - almost impossible to actually produce.

OrangeRakoon wrote:I remain unconvinced that there is any distinction, or that something indistinguishable from consciousness without being conscious can exist without an assertion that consciousness has some physical existence (which I do not believe it does).

What do you mean by "physical existence"? Do you mean there are no specific "consciousness atoms" (which I would agree with)? Because if so, that's a pointless assertion. Lots of abstract things exist. You, and I, experience a stream of thought and active thinking and a sense of 'self'. If a robot was good enough to be indistinguishable, to you or I, then that would be an awesome AI. But if it doesn't itself experience the 'self', it isn't really conscious. Again, that wouldn't be testable, although you could observe its behaviour and figure out whether or not the actions it performs would make sense if there was a self (for example, does it actively seek pleasurable experiences? If there's no consciousness, why would it bother?) and come to a pretty strong conclusion.

OrangeRakoon wrote:On the subject of syntactical programming vs semantic understanding, I think a hard distinction is drawn where there is none. It seems plausible that semantics can arise from a computer program as long as it replicates those complex semantic relationships as found in the mind.

But, no. This is circular logic at its finest. A computer cannot be subjective, which is a requirement for semantic understanding. What you are arguing here is the logical equivalent of replying to "pigs cannot fly" with "but they could if they had wings".

OrangeRakoon wrote:What does a semantic relationship look like?

As in, what mechanism in the brain is semantic and not syntactic?

I'm going to answer this by explaining the difference between human thought and computers at an appropriate level.

There is a fundamental, tangible difference between performing an algorithm and consciously understanding something. For example, if I ask Siri "Where would be a good place to take my dog for a walk?", Siri doesn't actually understand what I am asking. It just takes the voice input, does speech-to-text, takes the strings, compares them with its database (which then sends requests off to other search providers), assembles an appropriate response of strings and then displays them to my device (and uses a basic text-to-speech converter to output a response in sound form). Very, very different from something that actually knows what a 'dog' is (which involves real understanding, not just being able to send the string off and receive related strings) or what a 'good place' entails. For example, if I were to ask someone with no access to the internet where a good place to take my dog would be, they would think about what knowledge they have (real knowledge) about dogs (they like open spaces, running, etc.), think of places they might know (that have large areas, things to explore, etc.) and then form that subjective connection and come up with a genuinely intelligent answer. The connection between the knowledge they have about dogs and the places, the intelligent creativity they use to come up with an answer, is a semantic relationship. They do not connect to a database and perform relative searches for the words 'dog' and 'walk' to produce answers; they creatively provide the answers. (Look up the Chinese Room argument, because it's a similar thing.)
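To put the Siri half of that in code (a deliberately crude sketch of my own - real assistants are vastly more elaborate, and the keyword table and answers here are entirely made up - but the principle is the same): it's pure string matching against a table, with no model of what a 'dog' is anywhere in sight.

```python
# Hypothetical canned-answer table - pure syntax, no concept of 'dog' anywhere.
CANNED = {
    frozenset({"dog", "walk"}): "The riverside park is popular with dog owners.",
    frozenset({"weather"}): "It looks sunny today.",
}

def respond(query):
    """Match the query's words against keyword sets; no understanding involved."""
    words = set(query.lower().replace("?", "").replace(",", "").split())
    for keywords, answer in CANNED.items():
        if keywords <= words:   # every keyword appears somewhere in the query
            return answer
    return "Sorry, I didn't catch that."

print(respond("Where would be a good place to take my dog for a walk?"))
```

It "answers" the dog question, but swap in "Where should I walk my dog's enemy?" and it cheerfully gives the same reply, because all it ever saw were tokens.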

Another way of thinking about it is in terms of reasoning. If I ask Siri "Where would be a good place to take my dog", and it gives me an answer, and I ask "Why?", it probably has a pre-built generic answer to "why". But it can't actually reason as to why the place it lists would be a good place to take my dog. The best it could probably do one day is to retrieve comments by other people as to why it would be a good place to take my dog (plenty of ways it could go about doing that). But that isn't Siri reasoning, that's the other person. Computers can't be creative, they can only display other people's creativity.

In fact, that's basically all artificial intelligence will ever be - something which takes strings (or voice input, images, whatever), processes them, and outputs strings (or images, voice output, whatever). Artificial intelligence's real goal is to make any output as natural as possible, even to the point you could be fooled that there was an utter genius on a godly level inside your computer answering your questions. But, even then, it's still a sham.