Warmij wrote: It's pretty easy to distinguish, on a code level, whether or not an artificial intelligence is genuinely self-aware or just well coded for plenty of possible outcomes.

Well, I think it could be non-trivial depending on the underlying method used. A really big lookup table would be obvious (and one imagines couldn't work too well, unless it was essentially infinite in size) but machine learning techniques (which are more likely to one day be convincing) can actually be fairly difficult to analyse and understand reductively.

I have always liked the counter-argument that facetiously proves humans aren't conscious, as follows: 1. Take some theoretical perfect neural map of a person who understands Chinese; 2. Build that map using little wires and switches for neurons; 3. Have an English-speaker painstakingly simulate the input of a Chinese character via the optic nerve switches; 4. After a very long time of manually switching neurons on and off, you eventually have your output symbol in your spinal column switches, which you can in theory read (or connect to an arm or whatever); 5. So you got your output, but the only decision-making was done by the English-speaker, so surely the wires and switches aren't sentient(!), so neither was the Chinese man(!!), etc.

It's largely down to the way people have butchered the term AI over the years; now we have stuff like weak AI and (still theoretical) Strong AI.

Weak AI is pretty much the stuff that gets run through the Turing test these days (and basically everything which claims to be AI today). It's not much more than software designed to fool a person into believing it's intelligent. It gets more and more impressive as time goes on, but underneath you still have a regular state machine. It's not intelligence, it's just the appearance of intelligence. It no more has sentience than a plastic flower has a need for water. Whether it can fool someone (or even everyone) is irrelevant.
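To make the "regular state machine" point concrete, here is a toy weak-AI responder (the keywords and replies are invented for illustration). It can look conversational, but it is nothing more than a fixed keyword-to-reply lookup:

```python
# A toy "weak AI" responder: a plain mapping of keyword -> canned reply.
# It can appear conversational, but underneath it is just a deterministic,
# state-free table lookup. Rules are hypothetical, made up for illustration.

RULES = {
    "hello": "Hi there! How are you feeling today?",
    "sad": "I'm sorry to hear that. Why do you think you feel sad?",
    "robot": "Do machines interest you?",
}

def respond(message: str) -> str:
    for keyword, reply in RULES.items():
        if keyword in message.lower():
            return reply
    return "Tell me more."

print(respond("I feel sad today"))  # every input maps predictably to an output
```

No matter how many rules you add, it only ever does table lookups; the appearance of understanding scales with the size of the table, not with any inner life.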

Strong AI (sometimes called true AI) is a completely different thing. This is the one that raises all the hard moral questions (should it ever actually happen): should it have rights? Could it be dangerous? And so on, because it's not "artificial intelligence" in the sense of just being code that gives the impression of intelligence, but artificial in the sense of being a man-made version of actual intelligence.

I have always liked the counter-argument that facetiously proves humans aren't conscious, as follows: 1. Take some theoretical perfect neural map of a person who understands Chinese; 2. Build that map using little wires and switches for neurons; 3. Have an English-speaker painstakingly simulate the input of a Chinese character via the optic nerve switches; 4. After a very long time of manually switching neurons on and off, you eventually have your output symbol in your spinal column switches, which you can in theory read (or connect to an arm or whatever); 5. So you got your output, but the only decision-making was done by the English-speaker, so surely the wires and switches aren't sentient(!), so neither was the Chinese man(!!), etc.

I don't know much about neural networks, but I thought the whole point was that they DO understand things and think. A neural network made from the brain of a Chinese person is not anything like Google Translate, which is just running through commands.

A neural network just runs through commands. It just sort of makes its own commands to run through. But someone made the original commands, and, with enough thought, the same amount of processing power, and knowledge of the inputs, could work out what the thing was going to say next by simulation, and understand why that happened. It's not true intelligence in the sense I mean, because someone can theoretically work out how it reaches conclusions, and we can follow the logical 'instructions' to calculate as it has calculated.
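As a minimal sketch of that point (weights made up for illustration), a neural network's forward pass is just ordinary arithmetic that anyone with the weights could trace by hand:

```python
# A minimal neural-network forward pass, written out by hand to show that
# "running the network" is nothing but ordinary arithmetic: same weights,
# same input -> same output, every time. Weights are invented for illustration.

def forward(x, w_hidden, w_out):
    # hidden layer: weighted sums passed through a ReLU-style threshold
    hidden = [max(0.0, sum(xi * wi for xi, wi in zip(x, row))) for row in w_hidden]
    # output: a plain weighted sum of the hidden activations
    return sum(h * w for h, w in zip(hidden, w_out))

w_hidden = [[0.5, -0.2], [0.1, 0.8]]
w_out = [1.0, -1.0]

print(forward([1.0, 2.0], w_hidden, w_out))
# Anyone with the weights can reproduce every step with pencil and paper.
```

Training makes the weights hard to interpret, but it never changes the character of the computation: it remains a fixed sequence of multiplications and additions.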

But if some advancement means that something can create instructions so complex that we cannot understand them, or the decision-making process is so obfuscated that we don't understand there /are/ instructions being followed, and it appears self-aware, then we have to treat it as sentient until proven otherwise. It is immoral to treat it any other way.

There are other interesting philosophical points along this vein. Ostensibly, one day, we might be able to model a human mind. We might be able to read the instructions, and understand what the outcome is going to be based on the physical processes. At this point, do we stop calling humans sentient, as they are only following instructions? At what point do we call something /truly/ intelligent? What does intelligence even mean?

Searle's theory falls down because he is playing the part of a mechanism rather than a thought process. If he replaced his arm with a prosthetic, the arm wouldn't be making the decision to move; likewise, Searle isn't part of the computer's thought process, the bit we are evaluating. He is irrelevant to the process, especially as we also understand his particular role in it, whereas we don't understand the process as a whole. He is hardware, not software, and software is clearly the important part, because brains are just hardware really.

---

On a similar path to "The Chinese Room", but more about the applications of the morality: you go into a room, and in the room is a button on a black box, and a screen in front of it. The screen says "I am in the black box. If you press this button, I will die. It will be excruciatingly painful for me. However, you will receive £1." Do you press the button?

I think most people would argue that, at the point where you don't know whether the screen is lying, it's incredibly immoral to press the button. Killing a living thing painfully for minor personal gain is not generally approved of.

Now take the situation where you see this on the screen, but you realise that the top of the box is see-through. Inside the box is a machine, and from your knowledge of electronics, you realise that it's nothing more than a calculator, programmed to put the words up on the screen. Then, feasibly, you might push the button. £1 for deleting a program isn't so bad, and there is no way for it to express loss.
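For illustration, the entire program in that transparent box might plausibly be as small as this hypothetical sketch; reading the source settles at a glance that there is nothing in there that could experience anything:

```python
# The entire "mind" of the machine in the see-through box might be no more
# than this (a hypothetical sketch): a fixed string and a flag. There is
# nothing here that could feel pain or loss; "death" is a boolean flip.

MESSAGE = ("I am in the black box. If you press this button, I will die. "
           "It will be excruciatingly painful for me. However, you will receive £1.")

alive = True

def press_button():
    global alive
    alive = False  # the advertised "death", in its entirety

print(MESSAGE)
press_button()
print(alive)
```

The moral intuition flips precisely because the whole process is inspectable; the later cases in this post are ones where no such inspection is possible.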

If, however, you saw a human with a keyboard in the box, you'd be horrified. You would, presumably, never press that button, for fear of what it might do to the poor specimen.

Now imagine you see a completely alien device. You see wires and processors whose purpose or assembly you could not begin to comprehend. You realise that you cannot know whether the machine is telling the truth. So then, is it moral to press the button? I would argue, probably not. Because in the construction of the problem, this machine is no different to the human, or more specifically, the human's brain. This alien machine is a device of unknown process, as is the human brain. You know the construction and process of neither, nor how their decisions are made. Everything that you can say about the origin of the human's decision-making, its consciousness, you can say about the third device. How can it possibly be moral to kill it? If it's non-intelligent, destroying it for personal gain is probably fine. But if it's intelligent, then it's abhorrent to destroy it. It must be presumed sentient, because the consequences of failing that assumption are vast.

---

There are other threads this goes down, talking about the fact that you could reproduce the first program perfectly, but at the moment you can't reproduce a human brain's function, so death would be permanent in the second case (and from there, if you imagine you can replicate the human perfectly, you get into the Teleportation Problem and Ship of Theseus arguments, which are really interesting). Presumably you couldn't reproduce the third case either.

There's a concept called the Philosophical Zombie: a human mimicking sentience is indistinguishable from a human that has sentience. Seeing as you can't ever know whether someone's a zombie or not (or indeed, whether every person other than yourself is a zombie), you have to treat them as though they are not.

What makes something human? If something inhuman is indistinguishably human, what stops it from being human?

And similarly, what makes something intelligent, or conscious, or worthy of rights? If it is indistinguishably all of those things, what stops it from being so?

The only reasoning I can come up with is that the thing that stops it is some new information being brought to light that differentiates them. In which case we have a metric, and can separate those things out. But until such a point, if someone develops an AI that exhibits true intelligence, and believes itself to be alive, and we cannot find a counterpoint to that belief or the origin of that belief, then what right do we have to treat it any differently from ourselves in terms of a right to being? Because any argument we use against it, it could use against us, and similarly, we could use against ourselves. Innocent until proven guilty, in a different context.

Edit: Put in a short rebuttal to Searle.


Time for an extremely controversial opinion: robots are a million times better than having to deal with today's females. Why would anyone prefer fantasy over the real thing? Because the real thing is GARBAGE and most definitely NOT WORTH IT! I live in a country where prostitution is legal and I still would rather spend my money on video games and anime. Dating is just a waste of time; girls are just looking for sex, and the truth is that many males are starting to wake up and realise that females are not necessary in one's life. Being alone is just as good, even better than being with a "partner". I look forward to the day when human interaction and socialising will no longer be necessary, and the need for affection and a partner will be replaced by WAY more convenient virtual reality and robots! Call me whatever you want, I know the truth! The time is coming, and it is coming fast!

OrangeRakoon wrote:
If it perfectly mimics sentience then it should be as difficult to distinguish as a real person.

You're still confusing intelligence with consciousness. Even if it is difficult/impossible to distinguish, why should it still have rights? Why is, in your eyes, mimicry of sentience enough of a reason to give something rights?

If you have a sort of AI which literally exists as a person would (goes out, has a job, interacts with people) and is indistinguishable - well, what would be the point of such an AI? There is literally no reason for such a thing to be developed apart from as a research project, just to see if it is possible. It would have no purpose for mass production, so in this scenario, again, it would be pointless to give such a thing 'rights' (since it wouldn't have any sentience anyway).

If you're talking about artificial consciousness, that's a whole other thing, and something which will probably never happen anyway outside of science fiction, at least not on purpose. Consciousness isn't even particularly relevant to the modern field of artificial intelligence (at least, not the development side, yet) because it would be pretty much entirely pointless to implement. There is literally no reason to implement artificial consciousness; if you can create an AI which mimics consciousness (but isn't genuinely conscious) and serves a purpose, why bother making it conscious? There's no benefit to be drawn from that, and it would pretty much create a slave and raise all sorts of moral/ethical problems which would otherwise not need to exist. Especially in this idea of "sex with a robot" - if it mimics consciousness to the point where it is indistinguishable, why give it a real consciousness and subject it to misery, when it need not exist and customers would never notice the difference anyway?

You seem awfully confident that you can define 'consciousness' in a meaningful, objective way.

How would we tell if something were a perfect mimicry or the real thing? I am admittedly only tangentially involved in the machine learning field with my research, which is primarily bioinformatics, but my guess is that it simply wouldn't be obvious on a code level -- in much the same way that you don't get a feel for whether something is a sea worm or a human by zooming in and watching a few individual neurons fire.

I agree though with the general sentiment that it would be silly to have an intelligence of any kind existing inside something intended to be used as a very expensive vibrator. There are other contexts though in which a human-seeming AI would be less silly; there are lots of jobs where having a Lt.Cmdr. Data in the office would be useful.

OrangeRakoon wrote:
If it perfectly mimics sentience then it should be as difficult to distinguish as a real person.

You're still confusing intelligence with consciousness. Even if it is difficult/impossible to distinguish, why should it still have rights? Why is, in your eyes, mimicry of sentience enough of a reason to give something rights?

How would you know it /isn't/ conscious? If the mimicry is perfect then you will have no test that allows you to distinguish.

OrangeRakoon wrote:
If it perfectly mimics sentience then it should be as difficult to distinguish as a real person.

You're still confusing intelligence with consciousness. Even if it is difficult/impossible to distinguish, why should it still have rights? Why is, in your eyes, mimicry of sentience enough of a reason to give something rights?

How would you know it /isn't/ conscious? If the mimicry is perfect then you will have no test that allows you to distinguish.

Uh... what?

By looking at the code? The code will let you know if the intelligence is being mimicked (i.e. whether the thing is designed to mimic intelligence) or not.

Karl wrote: You seem awfully confident that you can define 'consciousness' in a meaningful, objective way.

How would we tell if something were a perfect mimicry or the real thing? I am admittedly only tangentially involved in the machine learning field with my research, which is primarily bioinformatics, but my guess is that it simply wouldn't be obvious on a code level -- in much the same way that you don't get a feel for whether something is a sea worm or a human by zooming in and watching a few individual neurons fire.

I agree though with the general sentiment that it would be silly to have an intelligence of any kind existing inside something intended to be used as a very expensive vibrator. There are other contexts though in which a human-seeming AI would be less silly; there are lots of jobs where having a Lt.Cmdr. Data in the office would be useful.

It would be obvious. If the code contains functions to generate responses and can be reverse-engineered, it'd be blatantly obvious. Also that analogy doesn't really work in this context.

Actual consciousness living inside something requires more than code, though, I'll give you that, so it *may* be impossible to figure out whether something has consciousness purely by looking at the code - but if any organic matter is found inside the thing, surely that would be cause for alarm? I bet by the time it's technologically possible to have such a being, there will also be advanced scanners which can detect this sort of thing without opening things up.

Karl wrote: You seem awfully confident that you can define 'consciousness' in a meaningful, objective way.

How would we tell if something were a perfect mimicry or the real thing? I am admittedly only tangentially involved in the machine learning field with my research, which is primarily bioinformatics, but my guess is that it simply wouldn't be obvious on a code level -- in much the same way that you don't get a feel for whether something is a sea worm or a human by zooming in and watching a few individual neurons fire.

I agree though with the general sentiment that it would be silly to have an intelligence of any kind existing inside something intended to be used as a very expensive vibrator. There are other contexts though in which a human-seeming AI would be less silly; there are lots of jobs where having a Lt.Cmdr. Data in the office would be useful.

Because there's no such thing as "perfect mimicry". Whether or not it can fool a passerby or a judge in a test is not the same as fooling the person or team that wrote the code, and if the person that wrote the code can't tell whether it's "learning" outside its codebase then you should find the engineers they stole it from. I've been programming for over twenty years and I've never, ever seen a computer behave unexpectedly once it's been researched enough. Such a thing would stand out.

If a thing is really indistinguishable from the thing it's mimicking, then it is the thing it's mimicking. But yeah, you'll find pretty much everyone that's involved with A.I on any level will agree that intelligence isn't a synonym for sentience, it's a prerequisite for it. There are lots of other characteristics required, beyond solving problems by brute force, before a computer could possibly be said to have a self.

I think the confusion in this discussion is around *who* would be doing the distinguishing. Of course, from the point of view of the *user* of such an AI it may be impossible to distinguish, but I'm pretty much 100% certain that the programmers/engineers behind the thing would know and be able to test. Making sure something doesn't have a consciousness would be a prerequisite for the thing to even go on the market (I'd imagine, anyway, otherwise it could be considered slavery), so it's pretty much guaranteed the sort of nonsense in I, Robot and the like wouldn't happen.

There is as yet no system that, given a full and complete description, cannot be explained and predicted entirely mechanically or probabilistically (i.e. with behaviour determined by random chance). In the absence of evidence to the contrary, it seems a fair assumption that a person is no different. The problem with demonstrating this is simply that the entire physical system that is a person is so massively complex as to make fully describing it prohibitive.

To flip the problem round the other way, what specifically is it about an organic entity that makes it sentient and conscious? You cannot single out any one part of a person and say "this is where the consciousness resides". If you zoom in on any one part of a person, be it the workings of the brain or the reactions of the body, they are all simply physical systems that can be explained without ever introducing some magical consciousness property - either mechanically or (especially when you zoom in as far as you can, to the level of the individual particles that make us up in our entirety) probabilistically, through random chance.

It's only when you layer these countless systems upon each other to look at a person as a whole, on a macroscopic level, that you see what we can describe as consciousness. You have a system that is sufficiently complex that we have no way of understanding it as a whole - but what we can understand is that every small part that makes up this system, as far as /anyone/ can tell or prove, can be perfectly described through simple physics. That is no different, in principle, to a sufficiently advanced computer.

You can look through the code all you want and only ever find either entirely predictable behaviour, or behaviour determined by random chance (or the reaction to external stimulus, which in turn is either mechanical or random). But if there is enough of that code, it seems entirely conceivable that on a macroscopic level you would see behaviour that is non-differentiable from consciousness.
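A well-known toy illustration of simple, fully inspectable rules producing complex macroscopic behaviour is an elementary cellular automaton such as Rule 110 (whose large-scale dynamics are rich enough to be Turing-complete). Every line below is trivially predictable, yet the patterns that emerge are not obvious from the rule:

```python
# Rule 110: each cell's next state depends only on its three neighbours,
# looked up from the bits of the number 110. Entirely mechanical at the
# micro level, yet surprisingly complex at the macro level.

RULE = 110

def step(cells):
    n = len(cells)
    return [
        # neighbourhood pattern (left*4 + centre*2 + right) selects a bit of RULE
        (RULE >> (cells[(i - 1) % n] * 4 + cells[i] * 2 + cells[(i + 1) % n])) & 1
        for i in range(n)
    ]

cells = [0] * 31 + [1] + [0] * 31  # single live cell in the middle
for _ in range(16):
    print("".join("#" if c else "." for c in cells))
    cells = step(cells)
```

Nothing in `step` hints at the structures that appear when you run it at scale; the interesting behaviour lives in the composition of the rules, not in any single rule.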

If your objection to a machine being able to be fully conscious is that you can't look through the underlying code and find anything that suggests any capacity to be conscious, then I challenge you to apply the same reasoning to humans. I think you'll find that the same applies to us.

Warmij wrote: Actual consciousness living inside something requires more than code, though, I'll give you that, so it *may* be impossible to figure out whether something has consciousness purely by looking at the code - but if any organic matter is found inside the thing, surely that would be cause for alarm? I bet by the time it's technologically possible to have such a being, there will also be advanced scanners which can detect this sort of thing without opening things up.

So what /does/ actual consciousness require? Are you suggesting the consciousness resides in that organic matter in this hypothetical biocomputer? If you can tell me where and how I would love to know, as would countless philosophers and scientists.

First of all, are we defining consciousness as the very loose definition:

"the state of being aware of and responsive to one's surroundings."? Because this can apply to modern laptops and phones really (my phone can detect and is aware of the surroundings and can indeed respond to it). Because I've been assuming by consciousness we're discussing sentience.

OrangeRakoon, I base my reasoning around three main factors:

1) There is no tangible evidence to suggest anything other than the assertion that, throughout the billions of years of known history of the universe, sentience has always arisen from organic matter
2) There is no tangible evidence to suggest artificial/manmade sentience is even possible in the first place
3) There is no tangible evidence to suggest that, even if factor 2 was proven wrong, such a thing could be created without organic matter

None of these 3 things are objective proof that artificial consciousness using non-organic substitutes is impossible, but these are 3 very important things to consider when debating the plausibility of such a thing.

You cannot single out any one part of a person and say "this is where the consciousness resides".

Well, actually, you can. It's called the brain.

If you zoom in on any one part of a person, be it the workings of the brain or the reactions of the body, they are all simply physical systems that can be explained without ever introducing some magical consciousness property

"Consciousness" is not a magical property, it is the product of how the brain works. It is literally the end goal of the brain. There's, once again, zero evidence to say that a machine can ever produce the same result as a brain does in terms of producing consciousness. In fact, evidence says the opposite.

You can look through the code all you want and only ever find either entirely predictable behaviour, or behaviour determined by random chance (or the reaction to external stimulus, which in turn is either mechanical or random). But if there is enough of that code, it seems entirely conceivable that on a macroscopic level you would see behaviour that is non-differentiable from consciousness.

...no, the people who developed the code will know if they are getting results outside the original code's boundaries. If they aren't, then it isn't truly conscious (sentient, in this context).

If your objection to a machine being able to be fully conscious is that you can't look through the underlying code and find anything that suggests any capacity to be conscious, then I challenge you to apply the same reasoning to humans. I think you'll find that the same applies to us.

We can't "look through the code" of humans in the same way we can computers. Code is not equivalent to DNA or other building blocks for our being, and it is far less complex. We have neuroscience for consciousness/sentience theories and we have biology for theories on organic matter and how it works - except this is all subject to further research, at which point things can change. Computer Science, on the other hand, is particularly rigid. There's no discovering "oh, this method *actually* does this, not that". It isn't subject to change in the same way that the other sciences are. I mean, yes, we build upon them - we create new technologies that may simplify or improve on things - but the overarching theories as to "how X works" is not subject to change since the field is researched by the people who create the systems in the first place. Research can only determine what, that doesn't already exist, is actually possible, by attempts at building/developing things. It is not how something that already exists works because whatever already exists was already created by someone who knows how it works hence they made it, thus the research is impossible.

You might be wondering how that ties into this particular line of discussion - it's because living, organic beings and computers work in fundamentally different ways.

Either way, this is all irrelevant to the discussion of "should robots have rights". The answer, assuming they are not sentient (which I genuinely doubt can happen anyway), should still be "no", because all AI is is a very advanced computer. I wouldn't give my laptop rights - the only person with rights is me, as its owner.

Thought experiment: You possess an incredibly powerful computer that simulates near perfectly, according to currently understood physics, the interactions of fundamental particles. The mechanics of the system are calculated to a reasonable level of precision, and the computer utilises a source of true randomness (by measuring the radioactive decay of some material) in order to simulate probabilistic events.

From simulating the interactions of these fundamental particles relatively simple systems can be replicated. The behaviour of atoms can be determined from the interaction of the particles, which in turn can be scaled up to simulate entire molecules. Luckily for us the machine is incredibly capable and has access to all the resources it needs - up to the point where it can fully simulate an entire person.

Would this person be conscious? If not, why not? What is the difference between this simulated person and a real one, if all the laws governing their behaviour are the same? If you object to the possibility of performing this experiment, what part of it specifically do you object to? Presumably, that part is where you will find the source of consciousness.
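As a crude sketch of the simulator in this thought experiment (with a made-up force law standing in for real physics, and Python's random module standing in for the decay-based hardware randomness), every update is either deterministic mechanics or an externally sourced random draw; nothing else enters the system:

```python
# Toy particle simulator: deterministic dynamics plus injected randomness
# for probabilistic events. The "spring" force law is invented for
# illustration; random.Random stands in for a true hardware entropy source.

import random

def simulate(positions, velocities, steps, dt=0.01, seed=0):
    rng = random.Random(seed)  # placeholder for the decay-based generator
    for _ in range(steps):
        for i in range(len(positions)):
            force = -positions[i]              # deterministic mechanical rule
            jitter = rng.gauss(0.0, 0.001)     # probabilistic event
            velocities[i] += (force + jitter) * dt
            positions[i] += velocities[i] * dt
    return positions, velocities

pos, vel = simulate([1.0, -0.5], [0.0, 0.0], steps=1000)
print(pos, vel)
```

The point of the sketch is structural, not physical: at no step does anything other than a mechanical rule or a random draw determine the next state, which is exactly the premise the thought experiment scales up to a whole person.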

Warmij wrote: 1) There is no tangible evidence to suggest anything other than the assertion that, throughout the billions of years of known history of the universe, sentience has always arisen from organic matter
2) There is no tangible evidence to suggest artificial/manmade sentience is even possible in the first place
3) There is no tangible evidence to suggest that, even if factor 2 was proven wrong, such a thing could be created without organic matter

Conversely, there is no tangible evidence to suggest that anyone else other than yourself is conscious. What is it about organic matter that you think differentiates itself from a computer? What property is it that organic matter holds that allows it to create a state of consciousness, that non-organic matter does not?

Warmij wrote:

You cannot single out any one part of a person and say "this is where the consciousness resides".

Well, actually, you can. It's called the brain.

Where in the brain? If I keep cutting parts out, where would I find the part that is conscious? If I zoom in and examine the numerous electrical and chemical interactions, all of which can be understood on a physical level, where would I see something that I could point at and say "oh look, this part isn't just following simple physical laws. It's actually thinking!"

Warmij wrote:

If you zoom in on any one part of a person, be it the workings of the brain or the reactions of the body, they are all simply physical systems that can be explained without ever introducing some magical consciousness property

"Consciousness" is not a magical property, it is the product of how the brain works. It is literally the end goal of the brain. There's, once again, zero evidence to say that a machine can ever produce the same result as a brain does in terms of producing consciousness. In fact, evidence says the opposite.

If it's not a magical property, what is it? I strongly dispute that consciousness is the "end goal" of the brain - for one, that suggests evolutionary purpose, and for another, it ignores the wealth of evidence showing the subconscious brain to be responsible for a lot more than the conscious part.

From what I understand, that paper makes no claims that a machine cannot produce the same results as a brain, only that consciousness arises in part from quantum behaviour that is non-computable. It seems perfectly reasonable to assume that you could build a machine that utilises that same behaviour through the physics of its construction.

Warmij wrote:

If your objection to a machine being able to be fully conscious is that you can't look through the underlying code and find anything that suggests any capacity to be conscious, then I challenge you to apply the same reasoning to humans. I think you'll find that the same applies to us.

We can't "look through the code" of humans in the same way we can computers. Code is not equivalent to DNA or other building blocks for our being, and it is far less complex.

By "look through the code" I mean taking a snapshot of the entire physical system that is a person, and examining every part of it.

OrangeRakoon wrote: Thought experiment: You possess an incredibly powerful computer that simulates near perfectly, according to currently understood physics, the interactions of fundamental particles. The mechanics of the system are calculated to a reasonable level of precision, and the computer utilises a source of true randomness (by measuring the radioactive decay of some material) in order to simulate probabilistic events.

From simulating the interactions of these fundamental particles relatively simple systems can be replicated. The behaviour of atoms can be determined from the interaction of the particles, which in turn can be scaled up to simulate entire molecules. Luckily for us the machine is incredibly capable and has access to all the resources it needs - up to the point where it can fully simulate an entire person.

Would this person be conscious? If not, why not? What is the difference between this simulated person and a real one, if all the laws governing their behaviour are the same? If you object to the possibility of performing this experiment, what part of it specifically do you object to? Presumably, that part is where you will find the source of consciousness.

This comes down to your first sentence - what you are suggesting is a thought experiment. "Simulating" molecules (and simulating their interactions) is very different to actually bringing those molecules into reality. For example, I could, potentially, simulate a controlled demolition of a building on my computer. It certainly does not mean that I would actually have destroyed a building in real life. It would just generate output - and the output itself would be "simulated" too. If anything was produced, it would also only be a "simulated" consciousness. The same goes for any simulation, of anything.

Simulated (not real) input = simulated (not real) output

OrangeRakoon wrote: Conversely, there is no tangible evidence to suggest that anyone else other than yourself is conscious.

This is just silly faux-solipsism and you know it. The most obvious evidence is behaviour - the behaviour of other people can't be rationalised if you assume they are not conscious - unless, of course, you assume they are robots with incredible artificial intelligence and the power to mimic it (which requires a heavy, heavy load of assumptions: firstly that it's physically possible, secondly that someone in the world has already figured out how to do it, thirdly that they actually did it). On the other hand, the behaviour of people fits perfectly if you assume they are conscious. Taking two answers and seeing which is more plausible and which requires the fewest assumptions - Occam's razor - is itself evidence.

What is it about organic matter that you think differentiates it from a computer? What property does organic matter hold that allows it to create a state of consciousness, which non-organic matter does not?

OrangeRakoon wrote:Where in the brain? If I keep cutting parts out, where would I find the part that is conscious? If I zoom in and examine the numerous electrical and chemical interactions, all of which can be understood on a physical level, where would I see something that I could point at and say "oh look, this part isn't just following simple physical laws. It's actually thinking!"

The thalamus is responsible for regulating and generating the consciousness of a person.

At least, this is the current accepted theory.

OrangeRakoon wrote:If it's not a magical property, what is it? I strongly refute that consciousness is the "end goal" of the brain - for one, that suggests evolutionary purpose, and for another, it ignores the wealth of evidence showing the sub-conscious brain to be responsible for a lot more than the conscious part.

We don't know the "main" purpose of consciousness, but most scientists and psychologists agree it's a useful tool that strengthens other evolutionary traits. For example, attraction, from an evolutionary perspective, serves to pair two people to mate (and thus reproduce). Consciousness is a tool to strengthen that attraction - it's all the more powerful when you have a "self" with a stream of consciousness that then reflects upon the attraction. There are many other things consciousness can boost, too.

So, what, your claim is that it's a "magical property"? You really believe that it's just magic?

OrangeRakoon wrote:From what I understand, that paper makes no claims that a machine cannot produce the same results as a brain, only that consciousness arises in part from quantum behaviour that is non-computable. It seems perfectly reasonable to assume that you could build a machine that utilises that same behaviour from the physics of its construction.

Eh? If you can't compute it, then no, you can't "build a machine" to make up for that. It's like saying "time travel is impossible because of reason X, but what if you built a time machine that made up for that" - it's just asserting that it's reasonable to make another machine that somehow bypasses the problem. Again, it's purely a hypothetical with zero evidence. Like a lot of this discussion, tbh.

OrangeRakoon wrote:By "look through the code" I mean taking a snapshot of the entire physical system that is a person, and examining every part of it.

Ok, I'll do that - hand me an entire physical system snapshot, please? Or is this yet another hypothetical?

Warmij wrote:This comes down to your first sentence - what you are suggesting is a thought experiment. "Simulating" molecules (and simulating their interactions) is very different to actually bringing those molecules into reality. For example, I could, potentially, simulate a controlled demolition of a building on my computer. It certainly does not mean that I would actually have destroyed a building in real life. It would just generate output - and the output itself would be "simulated" too. If anything was produced, it would only be a "simulated" consciousness. Same with any simulation. For anything.

Simulated (not real) input = simulated (not real) output

Okay, so what is the difference? They are doing the same thing, so why is one conscious and one not? You keep missing this point.

Warmij wrote:

OrangeRakoon wrote:Conversely, there is no tangible evidence to suggest that anyone else other than yourself is conscious.

This is just silly faux-solipsism and you know it. The most obvious evidence is behaviour - the behaviour of other people can't be rationalised if you assume they are not conscious - unless, of course, you assume they are a robot with incredible artificial intelligence with the power to mimic it (which requires a heavy, heavy load of assumptions - firstly that it's physically possible, secondly that someone in the world has already figured out how to do it, thirdly that they actually did do it). On the other hand, the behaviour of people fits perfectly if you assume they are conscious. Taking two answers and seeing which is more plausible and which requires the fewest assumptions - Occam's razor - is itself evidence.

No, the point of the sentence is to demonstrate that consciousness is not a known, measurable property. You can only infer it based on your own experience.

Warmij wrote:

What is it about organic matter that you think differentiates it from a computer? What property does organic matter hold that allows it to create a state of consciousness, which non-organic matter does not?

Well, I don't know that. Nobody knows that. It's what is being researched by lots of neuroscientists at the moment. Claiming one thing doesn't mean I have all the answers yet. But so far, it's been the case that organic matter has readily given rise to consciousness - the same cannot be said for any other type of matter.

Organic matter is fundamentally no different from any matter you can use to build a machine.

Warmij wrote:

OrangeRakoon wrote:Where in the brain? If I keep cutting parts out, where would I find the part that is conscious? If I zoom in and examine the numerous electrical and chemical interactions, all of which can be understood on a physical level, where would I see something that I could point at and say "oh look, this part isn't just following simple physical laws. It's actually thinking!"

The thalamus is responsible for regulating and generating the consciousness of a person.

At least, this is the current accepted theory.

You're missing the obvious point. But it should become clear if we keep this going. If I zoom in on the thalamus, where would I see the consciousness? To repeat, where would I see something that I could point at and say "oh look, this part isn't just following simple physical laws. It's actually thinking!"

Warmij wrote:

OrangeRakoon wrote:If it's not a magical property, what is it? I strongly refute that consciousness is the "end goal" of the brain - for one, that suggests evolutionary purpose, and for another, it ignores the wealth of evidence showing the sub-conscious brain to be responsible for a lot more than the conscious part.

We don't know the "main" purpose of consciousness, but most scientists and psychologists agree it's a useful tool that strengthens other evolutionary traits. For example, attraction, from an evolutionary perspective, serves to pair two people to mate (and thus reproduce). Consciousness is a tool to strengthen that attraction - it's all the more powerful when you have a "self" with a stream of consciousness that then reflects upon the attraction. There are many other things consciousness can boost, too.

So, what, your claim is that it's a "magical property"? You really believe that it's just magic?

You're describing its use, not what it is. My claim is precisely not that it's a magical property, my actual belief is that consciousness is an illusion arising from complexity. Your insistence that consciousness is confined to organic creatures, and that a perfect simulation would not be conscious, is far more reliant on consciousness existing as a magical property, because otherwise how are you distinguishing between two systems that behave identically? It seems like you are suggesting consciousness to be a separate, existing thing, rather than simply a higher level description of the behaviour of a system.

Warmij wrote:Eh? If you can't compute it, then no, you can't "build a machine" to make up for that.

I attach a radioactive isotope to a computer that measures the radioactive decay and acts upon it. I have just built a machine that does something you can't compute.

Warmij wrote:

OrangeRakoon wrote:By "look through the code" I mean taking a snapshot of the entire physical system that is a person, and examining every part of it.

Ok, I'll do that - hand me an entire physical system snapshot, please? Or is this yet another hypothetical?

Hypothetical =/= invalid. If you could do this, what would you find that differentiates a person?

With this in mind, we can still speculate about whether non-biological machines that support consciousness can exist, but we must realize that these machines may need to duplicate the essential electrochemical processes (whatever those may be) that are occurring in the brain during conscious states. If this were possible at all without organic materials, it would presumably require more than Turing machines, which are purely syntactic processors (symbol manipulators), and digital simulations, which may lack the necessary physical mechanisms.

I assume you know about Turing machines and their importance in determining what is possible at a machine level? And why this explanation shows it can't happen? If something "requires more than Turing machines" - which are, demonstrably, the fundamental limit of computation (this is taught in first-year Computer Science degrees, and reinforced and explained in detail in second year) - then it can't be done without something non-computable. For example... organic matter.
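Since Turing machines are load-bearing here, it may help to show one concretely. Below is a minimal simulator plus a rule table that increments a binary number; the state names and encoding are just my own illustration, but every digital computer reduces, in principle, to finite tables like this - which is what the "more than Turing machines" claim is about:

```python
# A minimal Turing machine: a finite rule table mapping
# (state, symbol) -> (symbol to write, head move, next state),
# operating on an unbounded tape. This one increments a binary
# number (least significant bit on the right).

def run_tm(tape_str, rules, state="carry", max_steps=1000):
    tape = dict(enumerate(tape_str))      # sparse tape; blanks read as "_"
    head = len(tape_str) - 1              # start on the least significant bit
    while state != "halt":
        write, move, state = rules[(state, tape.get(head, "_"))]
        tape[head] = write
        head += 1 if move == "R" else -1
        max_steps -= 1
        if max_steps < 0:
            raise RuntimeError("no halt within step budget")
    return "".join(tape[i] for i in sorted(tape)).strip("_")

rules = {
    ("carry", "1"): ("0", "L", "carry"),  # 1 + carry -> 0, keep carrying
    ("carry", "0"): ("1", "L", "done"),   # 0 + carry -> 1, carry absorbed
    ("carry", "_"): ("1", "L", "done"),   # off the left edge: new top digit
    ("done",  "0"): ("0", "L", "done"),   # sweep left over remaining digits
    ("done",  "1"): ("1", "L", "done"),
    ("done",  "_"): ("_", "L", "halt"),
}

assert run_tm("1011", rules) == "1100"    # 11 + 1 = 12
assert run_tm("111", rules) == "1000"     # 7 + 1 = 8
```

Non-computability results (like the halting problem) are statements about every possible table of this kind, which is why "just write more code" can't get around them.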

OrangeRakoon wrote:You're missing the obvious point. But it should become clear if we keep this going. If I zoom in on the thalamus, where would I see the consciousness? To repeat, where would I see something that I could point at and say "oh look, this part isn't just following simple physical laws. It's actually thinking!"

Firstly, research the neural correlates of consciousness.

Secondly:

I think you're missing the obvious point here with this sentence:

where would I see something that I could point at and say "oh look, this part isn't just following simple physical laws. It's actually thinking!"

What does this even mean? You can't "see" consciousness - it is a state... Where do I see happy? Where do I see sad? Where do I see horny (apart from the dick)? Where do I "see" confusion? We can only see the byproducts, not the actual thing, and if that's the point you're trying to make in the first place then you've gone about it in a very obtuse way. In which case I'll counter the base of your claim:

"If we cannot 100% verify consciousness and can only view the behaviour, how can we tell if something isn't conscious"

Which is basically asking to prove a negative, which in itself is a spurious assertion.

OrangeRakoon wrote:You're describing its use, not what it is. My claim is precisely not that it's a magical property, my actual belief is that consciousness is an illusion arising from complexity. Your insistence that consciousness is confined to organic creatures, and that a perfect simulation would not be conscious, is far more reliant on consciousness existing as a magical property, because otherwise how are you distinguishing between two systems that behave identically? It seems like you are suggesting consciousness to be a separate, existing thing, rather than simply a higher level description of the behaviour of a system.

The insistence is based on well-known (in the field, anyway) limitations of computational systems.

OrangeRakoon wrote:I attach a radioactive isotope to a computer that measures the radioactive decay and acts upon it. I have just built a machine that does something you can't compute.

No, all you've done is generate a source of true randomness (and even then, it's debatable whether it would truly be random). Of course you can't 'compute' something which requires an external input - that is literally the entire basis of my argument! You can't compute consciousness because you require something external to computer science to truly give rise to it. Artificial consciousness is a nice idea in science fiction. Certainly not in reality.
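That split between computed pseudo-randomness and external entropy is easy to see in code. A sketch (Python; `os.urandom` reads from the operating system's entropy pool rather than computing anything):

```python
import os
import random

# A pseudo-random generator is pure computation: the same seed
# reproduces the same stream, forever - it is replayable by definition.
a = random.Random(42)
b = random.Random(42)
assert [a.random() for _ in range(5)] == [b.random() for _ in range(5)]

# os.urandom is different in kind: the bytes come from the OS entropy
# pool (hardware noise, timing jitter) - an input from outside the
# computation, not a value the program derived from its own state.
print(os.urandom(8).hex())   # varies between runs; nothing to "replay"
```

Which side of that line consciousness sits on is, of course, the whole disagreement.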

OrangeRakoon wrote:Hypothetical =/= invalid. If you could do this, what would you find that differentiates a person?

How would I know? It can't be done, so I have no answer to this. And that is the problem with hypothetical thought experiments with no grounding in reality. They're all very well for solving philosophical and moral dilemmas, but they have zero applicability in the practical sciences.

You keep missing the point that if consciousness is not a separate concept with physical existence, but is actually just high-level behaviour that we can observe in a complex system (you do seem to agree that it is a state we observe, so presumably you agree with this), then there is fundamentally no reason it cannot arise in any sufficiently complex system. Including robots.

I think you're incorrectly equating a machine to a computer, in that you're restricting the entire workings of a robot to just a computational, algorithmic core. You keep ignoring or missing the whole point that robots exist in reality and follow all the same laws that apply to organic matter, so it is entirely plausible for a sufficiently complex robot to be conscious.

OrangeRakoon wrote:

Organic matter is fundamentally no different from any matter you can use to build a machine.