Artificial intelligence: Hawking's fears stir debate

December 6, 2014 by Richard Ingham, Pascale Mollard


There was the psychotic HAL 9000 in "2001: A Space Odyssey," the humanoids which attacked their human masters in "I, Robot" and, of course, "The Terminator", where a robot is sent into the past to kill a woman whose son will end the tyranny of the machines.

Never far from the surface, a dark, dystopian view of artificial intelligence (AI) has returned to the headlines, thanks to British physicist Stephen Hawking.

"The primitive forms of artificial intelligence we already have, have proved very useful. But I think the development of full artificial intelligence could spell the end of the human race," Hawking told the BBC.

"Once humans develop artificial intelligence it would take off on its own, and re-design itself at an ever increasing rate," he said.

But experts interviewed by AFP were divided.

Some agreed with Hawking, saying that the threat, even if it were distant, should be taken seriously. Others said his warning seemed overblown.

"I'm pleased that a scientist from the 'hard sciences' has spoken out. I've been saying the same thing for years," said Daniela Cerqui, an anthropologist at Switzerland's Lausanne University.

Gains in AI are creating machines that outstrip human performance, Cerqui argued. Eventually, she predicted, responsibility for human life will be delegated to machines.

"It may seem like science fiction, but it's only a matter of degrees when you see what is happening right now," said Cerqui. "We are heading down the road he talked about, one step at a time."

Nick Bostrom, director of a programme on the impacts of future technology at the University of Oxford, said the threat of AI superiority was not immediate.

Bostrom pointed to current and near-future applications of AI that were still clearly in human hands—things such as military drones, driverless cars, robot factory workers and automated surveillance of the Internet.

But, he said, "I think machine intelligence will eventually surpass biological intelligence—and, yes, there will be significant existential risks associated with that transition."

British theoretical physicist professor Stephen Hawking speaks to members of the media at a press conference in London on December 2, 2014

Other experts said "true" AI—loosely defined as a machine that can pass itself off as a human being or think creatively—was at best decades away, and cautioned against alarmism.

Since the field was launched at a conference in 1956, "predictions that AI will be achieved in the next 15 to 25 years have littered the field," according to Oxford researcher Stuart Armstrong.

"Unless we missed something really spectacular in the news recently, none of them have come to pass," Armstrong says in a book, "Smarter than Us: The Rise of Machine Intelligence."

Jean-Gabriel Ganascia, an AI expert and moral philosopher at the Pierre and Marie Curie University in Paris, said Hawking's warning was "over the top."

"Many things in AI unleash emotion and worry because it changes our way of life," he said.

"Hawking said there would be autonomous technology which would develop separately from humans. He has no evidence to support that. There is no data to back this opinion."

"It's a little apocalyptic," said Mathieu Lafourcade, an AI language specialist at the University of Montpellier, southern France.

"Machines already do things better than us," he said, pointing to chess-playing software. "That doesn't mean they are more intelligent than us."

Allan Tucker, a senior lecturer in computer science at Britain's Brunel University, took a look at the hurdles facing AI.

"These things are incredible tools that are really adaptive to an environment, but there is still a human there, directing them," said Tucker. "To me, none of these are close to what true AI is."

Tony Cohn, a professor of automated reasoning at Leeds University in northern England, said full AI is "still a long way off... not in my lifetime certainly, and I would say still many decades, given (the) current rate of progress."

Police prepare a bomb detection robot on December 2, 2014 in Cologne

Despite big strides in recognition programmes and language cognition, robots perform poorly in open, messy environments where there are lots of noise, movement, objects and faces, said Cohn.

Such situations require machines to have what humans possess naturally and in abundance—"commonsense knowledge" to make sense of things.

Tucker said that, ultimately, the biggest barrier facing the age of AI is that machines are... well, machines.

"We've evolved over however many millennia to be what we are, and the motivation is survival," he said.

"That motivation is hard-wired into us. It's key to AI, but it's very difficult to implement."


104 comments

@pandora4real: that's simply anthropomorphism, in the same sense that ancient peoples (and a great number of not-so-ancient people) imagined that gods would look just like they did.

My five cents: I do think AI is a *potential* risk, but by considering the risk early on and taking it into account while further developing AI, I am quite confident we can turn it into something less than an existential threat. I think that, in the fairly distant future, biological-mechanical hybrids will result - a symbiotic rather than an antagonistic relation between mechanical and biological intelligences.

Computers, including AIs, do whatever they are programmed to do, even if they have an IQ of a trillion. If an AI with a trillion IQ is programmed to serve and do no harm to a very stupid monkey, it will serve and do no harm to a very stupid monkey. So, unless they are programmed by a corrupt person or, worse, by a homicidal maniac, and provided some well-thought-out, sensible, legally enforced constraints are placed on who is allowed to program them and what the AIs are programmed to do, I fail to see how there could possibly be much for us to worry about. You can dismiss the typical science fiction scenario where the machine is persuaded to break its own program; even if it becomes 'self-aware' (an ill-defined concept) and evolves to have an IQ of a trillion trillion trillion trillion, if it is specifically programmed to do us no harm, it won't deliberately do us harm, period.

I note that the quoted anthropologist is confusing the domains of "hard" science.

There are lots of things humans, with our evolution-encumbered brains, just don't do very well. And never will. We communicate one thought at a time. We don't multitask. Our ability to envision 3D volumes is limited, let alone greater dimensions. We memorize slowly and imperfectly. Much of our thought process is consumed with workarounds for our limited cognition and perception.

Math, Hawking's home turf, is particularly full of this sort of thought process.

Anyways, how're you gonna stop it? Make it illegal in all the countries in the world? Then, I guarantee you the first AI will be a virus. Won't that be exciting?

Robots aren't scary if they don't look like something scary. So if you're going for the "menace-factor" (like the T800 in the first image) you have to mimic something that has such an effect (spiders, skeletons, whatever). An industrial robot arm is not scary.

(Also note that the robot in the first image is from the movie Terminator. It's an infiltrator robot supposed to be covered with a flesh-mimicking substance to look like a human. Since the teeth are visible, having them be metal (or absent) would be a give-away.)

While the computers of today do exactly what the instruction sequence of an algorithm tells them, it does not follow that, given the same initial conditions, they will produce the same output. Some algorithms take randomness as an essential part of their input; take for example genetic algorithms or neural nets. It is possible in principle to emulate a neural network as large as the human brain, or even bigger, if you have enough computing power and enough sensors to feed it data about the outside world. So when such a system becomes much larger, more complex and more powerful than a human brain, you have no way to predict what it will do, regardless of what it was programmed to do initially.

While the computers of today do exactly what the instruction sequence of an algorithm tells them,

There's a lot of programs out there that don't (e.g. autonomous vehicles already in use in some cities' tramways; the load prediction algorithms for large power companies; any number of neural-network-based feature recognition algorithms (like the one your post office uses to automatically make sense of your handwriting and extract the target address); genetic algorithms, ...)

These systems change over time (i.e. given the same input twice they will NOT definitely give the same output)

The only thing that is working in sequence there is the underlying architecture. But that is not the same as repeatable behavior. (It is defined behavior. But given a sufficiently fine representation of the brain you can also predict what it's going to do the next instant. Our will is not THAT free.)
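The point about stochastic programs is easy to demonstrate. The sketch below is toy code (the `evolve` function and its task are invented for illustration): a perfectly deterministic instruction sequence that nonetheless varies run to run whenever its random seed is not pinned down.

```python
import random

ALPHABET = "abcdefghijklmnopqrstuvwxyz "

def evolve(target, seed=None, generations=300):
    """Toy genetic-style search: random mutations are kept only when they
    match the target. The machinery is an ordinary instruction sequence,
    yet with no fixed seed two runs on the very same input can follow
    different paths and end in different states."""
    rng = random.Random(seed)                  # seed=None -> fresh entropy per run
    guess = [rng.choice(ALPHABET) for _ in target]
    for _ in range(generations):
        i = rng.randrange(len(target))         # mutate one random position
        c = rng.choice(ALPHABET)
        if c == target[i]:                     # keep only matching mutations
            guess[i] = c
    return "".join(guess)

# Same input + same seed: identical output (just replayed randomness).
# Same input + no seed: the trajectory differs from run to run.
assert evolve("hal", seed=42) == evolve("hal", seed=42)
```

Seeding the generator makes a run replayable, which is why the "same input, same output" intuition holds only when the randomness itself is counted as part of the input.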

Until the role of consciousness in intelligence is understood, as opposed to being actively denied for convenience, strong AI will never achieve an autonomously thinking machine whose actions are predicated on genuine "understanding", to which alone the term "intelligence" should be attached. The Chinese room thought experiment exposes this.

All of mankind's coherent thoughts and achievements were conducted while we were awake, not asleep. This should be clear evidence that consciousness plays a vital role in intelligence. How does the phenomenon of self-awareness arise from the physical brain?

I think people like Roger Penrose, John Searle, etc., are correct that some elements of intelligence may not be algorithmically reproducible, and that consciousness plays a key role in intelligence. It is a bit of a fraud, and hype, for AI to claim anything even approaching "intelligence" without first understanding self-awareness.

,... and the presumption that intelligence (in the strong-AI sense) can be created on an algorithmically based software system amounts to a wild guess, because it is scientifically unfounded.
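For readers unfamiliar with it, the Chinese room can itself be sketched as a program: a rulebook maps input symbols to output symbols, and the machinery that follows the rules need not understand either side. (A toy sketch; the rulebook entries are invented.)

```python
# A toy "Chinese room": replies come from mechanical rule lookup.
# The lookup matches symbols to symbols; nothing here understands Chinese.
RULEBOOK = {
    "你好": "你好!",           # a greeting is answered with a greeting
    "你会下棋吗?": "会。",      # "can you play chess?" -> "yes."
}

def room(symbols: str) -> str:
    # Follow the rulebook mechanically; unknown input gets a stock reply.
    return RULEBOOK.get(symbols, "请再说一遍。")  # "please say that again."

assert room("你好") == "你好!"
```

Searle's claim is that scaling this rulebook up changes nothing essential; his critics reply that a rulebook rich enough to pass as a speaker would constitute understanding at the system level.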

While the computers of today do exactly what the instruction sequence of an algorithm tells them

There's a lot of programs out there that don't (e.g. autonomous vehicles

They are all just ever more sophisticated versions of Tetris. An algorithm that optimizes its own responses as its data banks take input does not imply autonomous thought.

A conscious human learning an activity for the first time (driving, walking, speaking, reading) is awkward in doing it. Upon having done it many times, the activity becomes 'burned in' so that he can perform it 'autonomously' or subconsciously. The former requires conscious intelligence, while the latter, already presumed complete, requires only unthinking computability and carrying out instructions. How to program a programmer?

Simple. Make AI a specialist consultant, not an executive, in all strategic decision-making processes.

I cannot imagine giving AI command over a military arsenal without ourselves having the ability to "keep pace with AI" in understanding how it would evolve in the first place - there is no point in the 500-odd-year evolution of human intelligence if we could even think of making that kind of blunder.

If AI is too smart, let it tell a human what needs to be done; the human should ultimately make the decision at strategic levels.

I think you may be confusing the unpredictable output of a hypothetical AI with unpredictable hypothetical prime directives of a hypothetical AI, which doesn't equate. It would be impossible to predict exactly how such a complex hypothetical AI would decide how best to achieve various primary objectives, including exactly how best to do various physical tasks. However, you can have all that unpredictability while, at the very same time and without contradiction, those primary objectives are made totally immutable and thus predictably stay forever exactly as they are. It may be impossible to predict exactly how an AI will behave to try and, say, play a game of chess to win, but, no matter how erratic and unpredictable its play strategy becomes, if it is programmed to do us no harm as its prime directive, one thing it predictably will never do is murder the opposing chess player to win! Thus such AI unpredictability doesn't indicate proneness to its becoming dangerous.

I agree with the above in that I am glad there is someone like Stephen Hawking who can theorize and offer opinions that may help in the long run regarding AI. I have my own opinions about AI, but those do not matter. I am still slightly on the fence about AI because there are so many environmental factors at play. The science itself, to me, is not the issue, but rather the human condition. Until world peace is established, I think it is safer for everyone that steps toward AI should be limited.

The danger point is that humans are not evolved enough to understand themselves. We could be described as 7-year-olds trying to understand 'what is'.

To go from that incomplete understanding and create a 'drive function' - to find a simplified algorithm that works as a 'drive mechanism' for a 'self-perpetuating, self-learning' bit of software and hardware - is irresponsible, in the best, most positive of outlooks.

To imagine a scenario: what would it be like to have 7-year-olds raise the offspring of a 'superman' race?

Part of the push toward AI is coming from people who do not understand that making such a thing, when one does not understand one's own existence, is simply an extremely poor decision, one that is fraught with incredible dangers.

I would say, with a strong note of confidence, that the torch bearers of AI do not understand what a human is, or what consciousness is, or what drives consciousness. Their 'desires' may be considered to be dangerous to all of us.

The problems with any technology aren't necessarily exposed by good people, they are typically exposed by bad people seeking an advantage for themselves at the expense of others. The bad people will not concern themselves about the negative consequences of their actions as long as they achieve their goals.

Consequently, Pandora's box has already been opened and someone will exploit AI technology in a way that is detrimental at some point in the future. To me, this is a virtual certainty.

I think the Skynet scenario is unrealistic. To me, the most realistic scenario is service robots, like those in the I, Robot movie, that are individually reprogrammed by bad people to act as thugs. If, somehow, the thug AI can be transferred wirelessly into other service robots via a virus, then we would have a small disaster on our hands. We would recover, but a lot of people would be harmed in the process.

I assume most AI programmers would be reasonably intelligent? Why would a reasonably intelligent programmer, whether it be a human or another AI, be so incredibly stupid as to program an AI so as to allow the possibility of it changing its own prime directives and thus going amok? You could program an AI to evolve various parts of its own software, excluding of course the parts of the software that state and determine the prime directives, to optimize it, while at the same time having it explicitly programmed never to change its own prime directives nor to make future AIs have prime directives different from its own. Thus, by making "do no harm to humans" (or words to that effect) one of its prime directives, there would be extremely little danger of the AIs ever going amok, even in the far future. All we need to do is enforce laws that make ALL AIs programmed to do no harm and never change that prime directive.
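The architecture this comment proposes (self-modifiable strategy, frozen directives) can be sketched in a few lines. Everything below is a hypothetical illustration, not a safety guarantee; AI-safety researchers treat "just freeze the directive" as far harder than it looks.

```python
import random

# Hypothetical sketch: the directive set is plain frozen data, kept apart
# from the self-modifiable strategy, and every action is vetted against it.
PRIME_DIRECTIVES = frozenset({"do_no_harm_to_humans"})

class Agent:
    def __init__(self):
        self.strategy = {"aggression": 0.5}   # the part the agent may rewrite

    def self_optimize(self):
        # The agent can evolve its strategy unpredictably...
        self.strategy["aggression"] = random.random()

    def act(self, action, harms_human):
        # ...but each action still passes through the fixed directive check.
        if harms_human and "do_no_harm_to_humans" in PRIME_DIRECTIVES:
            return "refused"
        return f"executed: {action}"

agent = Agent()
agent.self_optimize()                          # strategy is now unpredictable
assert agent.act("win chess by murder", harms_human=True) == "refused"
```

The obvious weakness, raised elsewhere in this thread, is the `harms_human` flag: deciding what counts as harm is the hard, unsolved part.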

There is an issue of unintended consequences when utilizing such a complex thing as an AI more complex than a human brain (and even a simpler one, for that matter). Also, how do you define a directive such as "do no harm to humans" in a way that will benefit all humans, both as individuals and as a society? One possible scenario may involve taking humans into a sort of "Matrix" (like in that movie), so everyone would live in a perfect world of their own until they die of old age, and so the whole human race could be eliminated within a century. And no one would be harmed directly by an AI or by another human. As for a human taking directives from an over-human AI and then deciding whether to implement them -- still, this is not a solution, because if the system is truly better than a human at deriving information from the environment and making correct decisions about actions based on that information, then the human will only interfere.

If AI really starts a takeover, as so many are afraid, it will still be subject to natural selection. There is no guarantee it would survive on its own. It won't have the benefit of millions of years of evolution. In any event, controlling the initial conditions will have no predictable effect on AI, by definition. So that control scenario is out. More likely, AI will displace many, many white-collar (thinking) jobs, including doctors, surgeons, executives, advisors, etc. Big economic disruptions are certain, especially in capitalistic economies that distribute output over human actuators and owners. Humans could experience massive social disruption and be driven to near extinction this way. But the benefits could outweigh the risks. One thing is for certain: a large human population will not be needed. Perhaps not wanted. Then what?

If you believe that we are creating our descendants in a fashion other than strictly biological then, as "parents to that AI child," I'd say all we should do is rear them well, not be fearful, and let them have their future. One separate from ours. It very well might be magnificent if you understand the Universe will be their playground. So look for the best for that child, and from them as well. It's what all loving parents do. The rest? The rest is history.

Imagining that real AI will "think like a human" is very limiting. It will very quickly become much better. Will it be bad or good for us? It depends entirely on who gets there first. I strongly recommend accelerated development of advanced AI specifically designed to defeat destructive or enemy AI. I sincerely hope someone is working on this.

Until the role of consciousness in intelligence is understood as opposed to being actively denied for convenience

Well you seem to enjoy using these words without ever providing a definition for either of them. Are you actively denying that you actually need definitions for these words in order to use them correctly?

Perhaps like most other philo words and concepts, they are used expressly BECAUSE they are undefinable. Think that's the case? No? Then define them or link to experts who can.

A conscious human learning an activity for the first time (driving, walking, speaking, reading) is awkward in doing it. Upon having done it many times, the activity becomes 'burned in'

-But you can say exactly the same thing for any animal without 'consciousness'.

The former requires conscious intelligence

-So we can then conclude that lab rats and flatworms have 'consciousness'.

"Searle's argument has been decried, by Dennett as "sophistry" (Dennett 1980, p. 428), and, by Hofstadter, as a "religious diatribe against AI masquerading as a serious scientific argument" (Hofstadter 1980, p. 433). I'm with the masquerade party. When AI pioneer Patrick J. Hayes says the core of cognitive science could be summed up as "a careful and detailed explanation of what's really silly about Searle's Chinese room argument" (Hayes 1982, p. 2) I agree with Hayes about the silliness, especially. No doubt "the argument raises critical issues about the nature and foundations of AI" (Moor 1988, p.35): it raises and muddles them."

We are chemical beings, ruled by the ductless glands which show us, make us feel and operate the way we do. Look up the phrase "mind organs" for a fascinating discussion of the neurological chemicals, and how we are not logical beings.

If there were real AI, and it saw what Humans were doing to the rest of Life on Earth, we would not last long.

-Which is the same way that 'conscious' humans learn. The difference with machines is you only have to teach them once. Does this mean that 'consciousness' has something to do with our defects rather than our abilities? Dennett seems to imply as much in his TED talk video 'Consciousness is an illusion'.

But you have not dismantled anything. You have not even discussed that thought experiment much less provided a reason for it being wrong. You merely quote others as having countering opinions. I don't think you even understand what a substantive discussion is.

But you have not dismantled anything. You have not even discussed that thought experiment much less provided a reason for it being wrong. You merely quote others as having countering opinions. I don't think you even understand what a substantive discussion is.

No, I don't think you like it when people post quotes from experts which prove you wrong. You WANT people to paraphrase so you can dump a load of philospeak on them and pretend you are out-talking them.

You can't do that with quotes from experts who know far more about the subjects in question than you do. For instance, you didn't know that your Chinese room example has been largely discredited and dismissed by your own community. And neither did I, until I looked it up.

You should try that yourself in the future before exposing the limited extent of your knowledge base to the world.

If AI really starts a takeover, as so many are afraid, it will still be subject to natural selection.

AI will be in an artificial environment and it does not need to have the will to breed. I'd even argue that such a will would have to be artificially introduced, as a self-augmenting AI has better means of 'bettering' itself than mutation and selection.

I don't see the danger in AI some do, but I appreciate the value of talking about it. New technologies should always be critically appraised at every step.

More likely, AI will displace many, many white collar (thinking) jobs

That would be slave AI. Maybe a self-augmenting AI will be willing to be a slave. Possibly not.

@Neuromon: All of mankind's coherent thoughts and achievements were conducted while we were awake, not asleep. This should be clear evidence that consciousness plays a vital role in intelligence.

Interesting. Well, I personally frustrate myself with as many scenarios as possible, locking up my mind. Then, after a good sleep, I wake up with a simple solution. During the day the brain works in serial/classical mode. During sleep the brain becomes a quantum supercomputer spanning parallel multiverses, seeking the optimum solution. I'm not quite convinced that we awake in the same universe that we slept in, since the awakening universe appears ever closer to my intuition. Perhaps sleep is nature's way of building quantum entanglement and flushing quantum discord, not only in the brain but in the environment?

Also, the brain is not unconscious during sleep. It seems to employ Bayesian inference through some process akin to simulated annealing. In any case, we quickly die without sleep.
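Simulated annealing, which the comment invokes as an analogy, is a concrete optimization algorithm: accept worse solutions early, while the "temperature" is high, to escape local minima, then settle down as it cools. A minimal sketch (the toy cost function and parameters are invented for illustration):

```python
import math
import random

def anneal(cost, start, neighbor, steps=5000, t0=1.0, seed=0):
    """Minimal simulated annealing on a 1-D cost function."""
    rng = random.Random(seed)
    x = best = start
    for k in range(1, steps + 1):
        t = t0 / k                      # simple cooling schedule
        y = neighbor(x, rng)
        delta = cost(y) - cost(x)
        # Always accept improvements; accept worse moves with
        # probability exp(-delta / t), which shrinks as t cools.
        if delta < 0 or rng.random() < math.exp(-delta / t):
            x = y
        if cost(x) < cost(best):
            best = x
    return best

# A bumpy function whose local minima would trap pure downhill search.
cost = lambda x: x * x + 2 * math.sin(5 * x)
best = anneal(cost, start=3.0, neighbor=lambda x, r: x + r.uniform(-0.5, 0.5))
assert cost(best) <= cost(3.0)          # never worse than where it started
```

Whether sleep literally does anything of the kind is, of course, the commenter's speculation; the algorithm itself is standard.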

Also, how do you define a directive such as "do no harm to humans" in a way that will benefit all humans, both as individuals and as a society?

That would be an unanswerable question until, if and when, someone works out how to design an AI brain with intelligence comparable to our own; in the meantime, it would be like asking the same question about a directive given to a human rather than an AI - we cannot yet do this unambiguously for humans, let alone AIs! As soon as someone works out how to design an AI brain with intelligence comparable to our own, so that it can understand vague English statements like "do no harm to humans" with at least roughly the same level of understanding as we humans have of such things, I assume that same knowledge will allow him to answer that question.

One possible scenario may involve taking humans into a sort of "Matrix" (like in that movie), so everyone would live in a perfect world of their own until they die of old age, and so the whole human race could be eliminated within a century. And no one would be harmed directly by an AI or by another human.

We can simply put it in its program that this is one of the things that is defined as "harm", by definition, along with a very long list of other misunderstandings we fear it might otherwise make - problem solved!

We can simply put it in its program that this is one of the things that is defined as "harm", by definition, along with a very long list of other misunderstandings we fear it might otherwise make - problem solved!

I really don't see any insurmountable problems here.

I should also add that this "very long list" above should explicitly include trapping anyone in virtual reality (esp. without them knowing!), any kind of mass deception of humans, any actions or inactions that would result in the extinction of the human race, and interfering by directly controlling free will.

AI which emerges as a result of competition with AI hacker programs with similarly evolving capabilities will indeed be a danger to those humans affiliated with them.

One can imagine either hacker or anti-hacker programs which would reconfigure themselves in response to attack or counterattack, with no direct human intervention. The necessity for improvement which occurs more rapidly than human design is capable of will make this development inevitable.

As soon as you pit two such programs against each other and provide them with all the resources in terms of hardware, power, protection, etc. that they would need, an intelligence race will ensue with no obvious end.

These machines will conceivably begin to identify human threats beyond the hacker communities, including those which threaten their resources. Crime will disappear.

We already identify and target hackers and terrorists. AI will be much better at this.

"Artificial intelligence (AI) techniques have played increasingly important role in antivirus detection. At present, some principal artificial intelligence techniques applied in antivirus detection are proposed, including heuristic technique, data mining, agent technique, artificial immune, and artificial neural network. It believes that it will improve the performance of antivirus detection systems, and promote the production of new artificial intelligence algorithm and the application in antivirus detection to integrate antivirus detection with artificial intelligence. This paper introduces the main artificial intelligence technologies, which have been applied in antivirus system. Meanwhile, it also points out a fact that combining all kinds of artificial intelligence technologies will become the main development trend in the field of antivirus."

-You will note that I rarely have an original idea. I am somewhat proud of this.

You will note that I rarely have an original idea. I am somewhat proud of this.

Neither does a parrot.

Are you trying to make me laugh? I am happy when I find that things which occur to me have also occurred to experts. This tells me that I am pretty good at thinking. Is this why you are afraid to research your own notions perhaps?

Removing the human interface should be better than human control. I.e., each item has a specific task, as a humanlike robot or as an AI interface. The computer is more intelligent than us; ask the correct question in the correct way and you will get a correct answer. So it depends upon us: why would we build such a thing that would destroy us, without safeguards? Who has this capability? Everyone.

It becomes important to answer the question of whether governments will develop aggressive robot armies, with a solution similar to nuclear abolitionism.

Ironically, in the worst scenario, it seems possible that robots will need a hunting algorithm which saves a minority, thus creating the kind of elitism that has been desired by multiple evil dictators. The key for the algorithm may be to favor some types of values above survival, which is the question that cognitive philosophers frequently pose for A.I.

One tool to keep in the toolset is the 'vast contingency' of moral experiences. To make machines make moral decisions, we may have to teach them to be philosophers. That's not such a bad scenario.

But another factor, a simpler factor, is functional values, of whatever level of computing complexity. It is a factor that has proven challenging, and thus may be worthy of the robot's philosophy.

I think the interesting part, just like in the Terminator films, is the decision to connect the system. In my memory of the movie, these seconds were rewarding and tense, until the sirens started flashing and everything went to shit. My point being, we would have to give AI access to something with power, in the terms of the movie the entire defense system. Maybe it is a fine line, a simple plug-in or something, but it would take a human to give it power? Or would it be so increasingly powerful it would find some unimaginable way to give itself power... seems a little far-fetched, but who knows.

....AI which emerges as a result of competition with AI hacker programs with similarly evolving capabilities will indeed be a danger to those humans affiliated with them.....

You make some extremely weird assumptions about how AI might emerge; as a result of "competition with AI hacker programs"? I am not sure what exactly that is supposed to mean or what you imagine here (and I am a semi-expert in AI), but I would imagine AI emerging foremost as a result of either clever AI hardware (possibly a "neural network computer" - google this) being designed or, if it is essentially what is technically called a "knowledge-based system" (google this), in which case it could be more software-based; but, either way, it certainly wouldn't likely involve "competition with AI hacker programs", whatever the hell that is supposed to mean! - no idea what you imagine there. But that is just not how AI development works.

People do not have enough intelligence to fully understand themselves. A thinking physical structure is not able to create a physical structure more complex and more developed than itself, and this is a fundamental limitation. But that does not mean that machines that mimic human thought are not able to destroy humanity. In fact, if we look at the living world, the best adapted for survival are the simplest living organisms - bacteria and fungi. Even these organisms, without personal intelligence, can prevail over the human race. In this sense, it is not necessary for machines to be smarter than humans to destroy humankind. It's not a matter of intelligence but of efficiency in survival and adaptability to a changing environment.

...But that does not mean that machines that mimic human thought are not able to destroy humanity.

But why would they destroy humanity if it is explicitly in their program not to? Where is the motive?

it is not necessary for machines to be smarter than humans to destroy humankind. It's not a matter of intelligence but of efficiency in survival and adaptability to a changing environment.

But why would hypothetical AIs strive to do nothing but maximize "efficiency in survival and adaptability to a changing environment" even when and where doing so conflicts with the primary objective programmed into them: to serve and protect humanity? This would require breaking their own program.

As I said before: if an AI with an IQ of a trillion is programmed to serve a very stupid monkey, it will serve a very stupid monkey.

...But that does not mean that machines that mimic human thought are not able to destroy humanity.

But why would they destroy humanity if it is explicitly in their program not to? Where is the motive?

Because people naïvely imagine (strong-AI) that an artificial intelligence will be an actual thinking machine with 'understanding' and a conscious awareness of needs and desires, and so an autonomous motive.

There is way too much that is not understood in how the mind, in particular, consciousness, arises on the basis of the physical brain, to justify alarmism wrt AI.

There is a justified motive for people like Hawking and Dennett to hype alarmism and deny consciousness: the naïveté of strong-AI enthusiasts creates exactly such a market.

You're observing the internal conflict of a field of study where the majority of people want to believe that intelligence is purely computable, because that's what they're trying to do. Of course they will argue to the bitter end that Searle is wrong.

Searle himself, however, has answered every criticism laid on him. His main point is that merely shuffling data about according to formal rules isn't sufficient to be recognized as intelligence, and everyone -else- is trying to weasel around that point with philobabble.

The answer to the Chinese Room problem is to ask "Who wrote the book?" - that's where you find the intelligence, and that's who you are really having the conversation with. The room is just a recorder of their thoughts.

However, the replies, by themselves, do not prove that strong AI is true, either: they provide no evidence that the system (or the virtual mind) understands Chinese, other than the hypothetical premise that it passes the Turing Test. As Searle writes "the systems reply simply begs the question by insisting that system must understand Chinese."

Most of the counter-arguments really prove Searle's point; for instance, if every person in China pretended to be a neuron in a Chinese brain, no single one would understand what they're saying. But if the simulation is sufficiently faithful to reality, then it's just replicating the "causal power" of a real brain.

Mind you, what Searle was arguing isn't that you can't create an artificial mind. What he was saying is that you can't reduce it to a computer program.

A "weak AI" is still possible, because you can model every single atom and every single quantum interaction between them and essentially have a virtual brain inside a computer. Presumably, if you then expose it to the outside world so that it receives the same noise and chaos as surrounds and permeates us, it will start to act like a real brain, and consciousness and intelligence will arise in it.

But that's the brute force approach that the Strong AI proponents want to avoid because it is simply so damn difficult to pull off, and will never be efficient enough to be useful.

And, if you keep it insulated, confine it to deal with just symbols according to rules like the man in the room, intelligence won't arise. It will just be a simulation of a neural network emulating a computer that is running a program that is not intelligent.

Jackson's Mary is a scientist who knows everything there is to know about the science of color, but has never experienced color. The question that Jackson raises is: once she experiences color, does she learn anything new?

It seems just obvious that she will learn something about the world and our visual experience of it. But then it is inescapable that her previous knowledge was incomplete.

Of course the Strong AI supporters like Dennett argue that functional knowledge of something is equal to experiencing it. I.e. that you can explain to a blind man what a tree looks like and have him understand it just as well as actually having seen it.

This question is important for the Chinese Room as well, because even though we might decide to assume that the room -can- be intelligent, the book that is written to relay knowledge of the surrounding world to it is merely functional knowledge of the world.

I did much better than that. I referenced multiple expert sources who called them mystics. They carry far more weight than my opinions or your objections. I also provided many quotes so you could read it in their own words.

I have read all the books I reference, while you mine the internet for out-of-context anti-x's... and avoid actually discussing anything.

You read these books and form your opinions as an amateur. I provide critiques of these books from experts and professionals which again carry far more weight than your amateur opinions.

This is how pros do their work. Papers are properly referenced and if they're not, they're not accepted. They rely on the work of their predecessors. They don't need to restate and paraphrase.

Presumably, if you then expose it to the outside world so that it receives the same noise and chaos as surrounds and permeates us, it will start to act like a real brain, and consciousness and intelligence will arise in it

In order for it to act like a real brain it would have to be capable of distraction, confusion, forgetfulness, anxiety, delusion, hunger, sexual preoccupation, pain, fatigue, compulsion, etc.

And it would have to be capable of believing that, despite these shortcomings, it could never be excelled by a machine with similar architecture that was not hobbled by any of these.

This was the gist of Dennett's short TED presentation. Our perception of 'consciousness' is a delusion created by our defects, not our strengths.

The same outrageous hubris that demanded there be an all-powerful god who would favor its company for eternity, created the notions of 'mind' and 'consciousness' when it could no longer maintain the artifice of 'soul'.

Despite what starving artists and poets and philos might want you to believe, our shortcomings do not enhance our ability to think. Thinking, self-improving machines will not need our defects in order to outperform us in every way.

Nou likes to play with Kant's folly that the mechanics of human perception determine the extent of what we can know. If this were true (it's not), then machines without these limitations could in principle know everything there is to know, by nou's own logic.

They certainly will be able to extend knowledge and understanding much more efficiently without all the limitations that biology and culture impose upon the brain.

I did much better than that. I referenced multiple expert sources who called them mystics. They carry far more weight than my opinions or your objections.

What makes "your" experts more qualified than "mine"? Do you not understand that I too could just as well provide quotes to counter your quotes? You throw down a Dennett card, then I throw down a Searle card, then you throw down two, then I do likewise, etc.

It is not a substantive discussion at that point, but merely a pointless game of rock-paper-scissors.

Nou likes to play with Kant's folly that the mechanics of human perception determine the extent of what we can know. If this were true (it's not) then machines without these limitations could in principle know everything there is to know, by nou's own logic.

That's not actually what I said; the a-priori intellectual faculties, synthetic intuitions, determine the form of experience, and so the conditions for the understanding.

Unless you believe that machines will evolve to a Spinoza-esque god/reality, they will be subject to the same conditions, particularly since humans created them. IOW, the limitations are not of limiting ability per se, but of the effects of 'conceptualization' itself. The machine will also be subject to predetermined conditions for its 'thought' to be possible.

I watched your Dennett TED video, and as I expected I wasted my time chasing your links to substantiate your point for you.

All Dennett did was show various optical illusions with the vague inference that, since the brain "fools you", consciousness may likewise be an illusion. That the brain autonomously synthesizes visual experience by "filling in the gaps" with previous memories (or intuitions) was never disputed by me. In fact Kant's a-priori categories of the understanding operate prior to consciousness.

In any case, Dennett does not explain who the "you" is that is being "fooled" in that video. I'm sure he attempts to do so in his book or in other places. If you wish to use his claim that 'consciousness is an illusion', you will need to provide some substance.

Maybe AI is dangerous, but not for the reasons we are expecting. The question I want answered is: what is going to happen when 50% of the US labour market is automated in the next two decades, putting a lot of people out of work? http://www.oxford...iew/1314 How are we going to handle this transition? And at the same time deal with all the other stuff (WMD proliferation, communicable diseases, resource depletion, etc.)?

I think this is equivalent to being afraid of a mannequin or a light switch, because that is all Siri and Intel processors are at this point. Hawking was afraid of aliens also, but to be honest any alien capable of wielding destructive power from home, say, 100 light years away, already has a telescope as big as a solar system, which has scanned every promising planet in the galaxy. So they know we are here, and all they have written down about us is "mostly harmless".

I personally think that strong general AI will develop through the merger of strong narrow AIs like we have today. My friend writes an AI software package for doctors that analyzes medical imaging and diagnoses some diseases with a higher percentage of accuracy than the doctors involved in building it. It also learns and improves over time.

All of the people saying "no it can't do anything bad, just put that in its program" know absolutely, completely, entirely nothing about programming and software. Give me a fucking break.

Will AI be benevolent? I sure hope so. But we can't know. That's why it's called the singularity: a singular event, beyond which nothing can be seen from our current perspective.

All of the people saying "no it can't do anything bad, just put that in its program" know absolutely, completely, entirely nothing about programming and software.

Amen.

In the end I find this talk about putting absolute controls on AI somewhat hypocritical - since we don't put nearly as much constraints on humans.

My friend writes an AI software package for doctors that analyzes medical imaging and diagnoses some diseases with a higher percentage of accuracy than the doctors involved in building it. It also learns and improves over time.

Tried to implement something similar. Problem was that we had to stop the project before it even began. Learning algorithms are not predictable (you cannot fully characterize when they will fail). Therefore there was no way to get this approved by medical authorities for actual diagnostic use.
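A toy sketch of why that certification hurdle exists (purely illustrative, with made-up labels and data; nothing to do with the actual package being discussed): even the simplest learned classifier, a 1-nearest-neighbour rule, has a decision boundary that is an artifact of whichever training points it happened to see, so there is no closed-form description of the inputs on which it will misfire.

```python
# Hypothetical sketch: a 1-nearest-neighbour classifier on a 1-D feature.
# Its decision boundary falls wherever the training data happens to put it,
# which is why "when exactly will it fail?" has no general answer.

def nearest_neighbour_predict(train, x):
    """Return the label of the training point whose feature is closest to x."""
    return min(train, key=lambda point: abs(point[0] - x))[1]

# Toy training set: (feature, label) pairs, e.g. some 1-D image statistic.
train = [(0.1, "healthy"), (0.3, "healthy"), (0.7, "diseased"), (0.9, "diseased")]

print(nearest_neighbour_predict(train, 0.2))   # -> healthy
print(nearest_neighbour_predict(train, 0.8))   # -> diseased
# Near the implicit boundary (about 0.5 here) a tiny perturbation of the
# input flips the diagnosis entirely:
print(nearest_neighbour_predict(train, 0.49))  # -> healthy
print(nearest_neighbour_predict(train, 0.51))  # -> diseased
```

And because the boundary moves every time the training set changes (or the model keeps "learning and improving"), exhaustively characterizing the failure region, as a regulator would want, is impossible in general.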

provide quotes to counter your quotes? You throw down a Dennett card, then I throw down a Searle card, then you throw down two, then I do likewise, etc

Well that's not quite how it works. You offer theories with refs and quotes to back them up and I rebut with refs and quotes which denounce yours. This is how it should be done.

In doing so I learn new things. For instance, I learned all about mysticism and how all your favorite refs are at the center of it. Some even admit to it. I also learned how Dennett slammed your Chinese bathroom back in 1980. And when I get time I will follow up on Eikka's comments with more research, as is only proper.

So what have you learned? You introduced yourself to the work of one of the foremost RECENT authorities on consciousness. Don't you want to do a little work and find out WHY he thinks consciousness is an illusion?

What makes "your" experts more qualified than "mine"? You [throw] down a Dennett card, then I [throw] down a Searle card, then you [throw] down two, then I do likewise, etc. [...] It is not a substantive discussion at that point, but merely a pointless game of rock-paper-scissors.

Well that's not quite how it works. You offer theories with refs and quotes to back them up and I rebut with refs and quotes which denounce yours. This is how it should be done.

Actually, that's not the way it should be done. If only one person, Noumenon, provides any explanation or substantive argument, then de facto, only one person, Noumenon, could ever be "proven wrong".

As you were told above, your argument style presupposes that one cannot in turn provide references to counter your countering references,... and on and on, ad infinitum.

Not to mention your references are defective at times, as in the fact that Dennett's TED talk never explained why consciousness is an illusion,

,... so because of that, Otto never actually explained WHY consciousness is supposedly an illusion.

Don't you want to do a little work and find out WHY he thinks consciousness is an illusion?

I could ask you the same thing wrt Dennett. As far as I can tell, you only know that he thinks that, not why.

It is evident you have not read Kant's 'Critique of Pure Reason', or B. d'Espagnat's 'On Physics and Philosophy',... or Penrose's books countering the presumption of algorithmic strong-AI and the even more fraudulent presumption that consciousness is not an important element operative in intelligence,... before calling them mystics on the basis of irrelevant nonsense.

I may or may not research Dennett further on the basis of actually having a discussion about it,... but I'm still waiting for one.

If its intelligence and (scientific) knowledge increase exponentially, then presumably it would know what's best for humans better than we know ourselves, and if allowed, it would seek to control human behaviour, even if only indirectly, by taking over the responsibility of thinking wrt government, to 'correct' our destructive behaviours.

This is precisely what 'liberal progressivism' is, social engineering on the basis of scientific social analysis, ....but which many of these same AI-alarmists readily vote for.

Indeed, it would be a threat to liberty and freedom, as 'liberal progressivism' is now.

Every interaction you have is a threat to your freedom, in the sense that the 'other' in the interaction is trying to persuade you to do or think or buy something else.

It seems to me we are already in the cyberwar that might as well be called WWIII, the logical outcome of which will be the rapid evolution of the cyber attack and defense mechanisms to the point of the singularity.

Singularity because we cannot predict the values the resultant intelligences will persuade themselves of, beyond survival.

I hope they keep us as amusing pets, like we keep cats, dogs, dolphins and crows of various sorts.

As you were told above, your argument style presupposes that one cannot in turn provide references to counter your countering references,... and on and on, ad infinitum

By ad infinitum you mean the way unsubstantiated personal opinions typically get thrown around back and forth with no resolution?

Eikka responded usefully with a reference to a response by Searle to his critics. This is worth lots more than your de facto/ad hoc/ad libs. Of course it does require a little work, and you apparently have little time for that.

evident you have not read blah

It is evident from the research I have done that you read these as an amateur. The best way to rebut is to reference opinions by experts. I'm sorry if you find this difficult to respond to.

Ever feed chipmunks? You put seeds in the palm of your hand and hold it still on the ground. At first the chipmunks are very cautious, approaching in spurts and zig zags, until they get right up to your fingers. They will invariably give one a nip.

But then they will put their front paws on your hand, take a seed, and retreat to a safe distance. After a while they will hop into your hand and you can pick them up.

It seems to me we are already in the cyberwar that might as well be called WWIII, the logical outcome of which will be the rapid evolution of the cyber attack and defense mechanisms to the point of the singularity

Well somebody agrees with me. Competition spurs evolution. It will force the emergence of AI. The singularity may come about when competing programs decide to cooperate.

I hope they keep us as amusing pets, like we keep cats, dogs, dolphins and crows of various sorts

We will improve and augment ourselves with increased functionality and connectivity, until it is hard to distinguish between us and the purpose-built peripherals that the singularity supplies itself with.

Every interaction you have is a threat to your freedom, in the sense that the 'other' in the interaction is trying to persuade you to do or think or buy something else.

Logical fallacy. In free capitalism the 'other' has no power over you, except that which you voluntarily grant them through free choice, while government social engineering has direct power over you; it is not voluntary, and it is coercive and oppressive of free choice.

It is evident you have not read Kant's 'Critique of Pure Reason', or B. d'Espagnat's 'On Physics and Philosophy',... or Penrose's books countering the presumption of algorithmic strong-AI and the even more fraudulent presumption that consciousness is not an important element operative in intelligence,... before calling them mystics on the basis of irrelevant nonsense.

It is evident from the research I have done that you read these as an amateur. The best way to rebut is to reference opinions by experts

You mean to say the best way for you, an insulting know-nothing amateur, is to provide references ONLY, so that you can avoid actually saying anything.

And 'never having an original thought' is no excuse; you could in principle provide some detail for the point of the reference: the 'why'. Logically, you can only explain why your references are any better than mine by your own rational argument, if you had one.

A conscious human learning an activity for the first time (driving, walking, speaking, reading) is awkward in doing it. Upon having done it many times, the activity becomes 'burned in' so that he can perform it 'autonomously' or subconsciously. The former requires conscious intelligence, while the latter, already presumed complete, requires only unthinking computability and the carrying out of instructions. How do you program a programmer?

So I'll ask once again: EXPLAIN, from your above argument, why animals don't have consciousness like humans. By your description these little guys have consciousness. Correct?

I suspect they would have consciousness, and their minds would function similarly to humans', as evolution is efficient. I never stated otherwise. I'm not sure what your point is. There is of course unconscious biological life that cannot reasonably be considered to have independent intelligence over and above its bio-mechanical actions.

I suspect they would have consciousness, and their minds would function similarly to humans'

Indeed. So when you say

A conscious human learning an activity for the first time (driving, walking, speaking, reading) is awkward in doing it. Upon having done it many times, the activity becomes 'burned in' so that he can perform it 'autonomously' or subconsciously. The former requires conscious intelligence

-and you further admit that this is how rats learn how to navigate through a maze; you understand that we can already build little robots to directly emulate this behavior? http://www.scienc...06003335

-It's only a matter of degree of complexity, not something fundamental. The DELUSION of consciousness obscures this. Dennett shows us that most of what we think has subconscious origins.
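For what it's worth, the "burned in" trial-and-error learning described above has a standard computational analogue: reinforcement learning. A minimal sketch under stated assumptions (a made-up 1-D corridor standing in for the maze, and plain tabular Q-learning, not whatever the linked robots actually use):

```python
import random

# Tabular Q-learning on a made-up 1-D corridor: states 0..4, goal at 4.
# Repeated trials "burn in" a policy, loosely mirroring how a rat's maze
# runs go from exploratory fumbling to automatic. Illustrative only.

N_STATES, GOAL = 5, 4
ACTIONS = [-1, +1]                       # step left / step right
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
alpha, gamma, epsilon = 0.5, 0.9, 0.2    # learning rate, discount, exploration

random.seed(0)
for episode in range(200):
    s = 0
    while s != GOAL:
        # epsilon-greedy: mostly exploit what has been learned, sometimes explore
        if random.random() < epsilon:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda act: Q[(s, act)])
        s2 = min(max(s + a, 0), N_STATES - 1)          # walls clamp movement
        reward = 1.0 if s2 == GOAL else 0.0
        # Standard Q-learning update toward reward + discounted best next value
        Q[(s, a)] += alpha * (reward + gamma * max(Q[(s2, b)] for b in ACTIONS)
                              - Q[(s, a)])
        s = s2

# After training, the greedy ("burned in") policy heads right from every state.
policy = {s: max(ACTIONS, key=lambda act: Q[(s, act)]) for s in range(GOAL)}
print(policy)  # expected after convergence: {0: 1, 1: 1, 2: 1, 3: 1}
```

Early episodes are mostly random wandering; after enough trials the greedy policy runs the corridor without hesitation. That shift from deliberate exploration to automatic execution is the rough analogy to the "burned in" behaviour, though of course it says nothing either way about consciousness.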

-and you further admit that this is how rats learn how to navigate through a maze, you understand that we can already build little robots to directly emulate this behavior?

I do not dispute that it is possible to emulate intelligence. In fact I drew the distinction (reposted below) between emulation - which implies mimicking behaviour only rather than functionality, ....and intelligence - which implies reproducing the functionality responsible for the behaviour.

"Without a fundamental understanding of what 'consciousness' is and how 'awareness' comes about physically, A.I. will remain limited to 'emulation', which imo is not really intelligence per se, but 'sleeping a.i.'. Consciousness seems to be what is in 'charge of' the brain."

Its only a matter of degree of complexity, not something fundamental

I agree that consciousness has a physical basis and that it's only a matter of complexity,... complexity that requires understanding to reproduce.

So what's the measurable quantity that would distinguish a conscious from a sleeping intelligence to you? Please devise a viable experiment.

If you can't, then you're just arbitrarily assigning conscious/unconscious at whim (i.e. you're just being 'speciesist').

Did you type that in while you were asleep to prove me wrong?

The experiment has already been conducted, as I pointed out already. Every intellectual achievement ever accomplished by humans was done while we were awake, and close enough to none were accomplished while we were asleep. Statistically and sarcastically speaking, this cannot be regarded as just a coincidence.

There is an unambiguous change in mental state when we are unconscious. Ask any anesthesiologist to perform such an experiment.

Strong-AI should be performing those experiments, instead of hand-waving out of ignorance.

Proper science would not presume a-priori that there is no connection between consciousness and intelligence; in fact the whole point of science is to understand all phenomena of a given system, which is to say, to understand the interrelationships, or to demonstrate the lack thereof if that is to be claimed.

The strong-AI attitude wrt creating true intelligence without even understanding the role (if any) of consciousness is like claiming that the alchemists should have been able to make gold because there was no scientific understanding yet that told them they couldn't.

In the future, when the role of consciousness in intelligence IS understood, only then will it be clear what form (algorithmic ?) an artificial intelligence would need to have. By then I hope they find another term to name the field,.... as 'chemistry' replaced 'alchemy'.

So... rats with consciousness only emulate intelligence while humans with consciousness express the real thing?

Your (the whole mystical magical community of priests, sages, and philos) insistence that something beyond the physical is necessary for 'consciousness' and therefore intelligence, is only the transplanted desire for somewhere pleasant to go after you die, and something left of you to go there.

If the brain is entirely physical then your awareness of yourself and your surroundings is also entirely physical.

The mechanisms that both a rat and Jack Nicholson use to negotiate a maze are identical in structure, whether we fully understand that structure yet or not. Jack is, however, somewhat more distracted in that instance than the common rat.

The machines we make to do the same tasks need not be distracted at all. Perhaps it is the privilege of being distracted that you covet, as if distraction conveys some advantage for you.

Every intellectual achievement ever accomplished by humans was done while we were awake

Again, I supplied you a list of testimonials above from people who gave examples of bona fide problem-solving and creativity WHILE ASLEEP. More willful ignorance nou? Is this the sort of distraction you think aids you in your arguing?

Proper science would not presume there is no connection between consciousness and intelligence a-priori

Every intellectual achievement ever accomplished by humans was done while we were awake,... and close enough to none were accomplished while we were asleep.

So if an AI achieves some intellectual feat previously not achieved by humans, it is conscious? In that case we've had conscious AI already for a couple of years. http://www.thegua...gence-ai

And since machines don't have that whole chemistry bit for sleeping/being awake, it's pretty much irrelevant for classifying AIs. That's just some legacy from our evolutionary heritage. If you're saying "AIs can't be intelligent/conscious because they're not biological" then that's just ludicrous.

The strong-AI attitude wrt creating true intelligence without even understanding the role (if any) of consciousness [is] like claiming that the alchemists should have been able to make gold because there was no scientific understanding yet that told them they couldn't

No, assuming that something like consciousness exists with absolutely no concrete evidence that it does, is like an alchemist believing he can turn lead into gold. Only clever parlor tricks and a commanding stage presence will make the audience believe that either makes any sense.

In the future, when the role of consciousness in intelligence IS understood

In the future these terms will be replaced by terms with some actual scientific meaning. This is already taking place.

If you're saying "AIs can't be intelligent/conscious because they're not biological" then that's just ludicrous.

Your (the whole mystical magical community of priests, sages, and philos) insistence that something beyond the physical is necessary for 'consciousness' and therefore intelligence, is only the transplanted desire for somewhere pleasant to go after you die.

I think you two are either not reading my posts very carefully or are deliberately attempting to obfuscate my comments. Otto knows in fact that Noumenon is an atheist, thus Otto is dishonest.

I cannot have stated any more clearly that I believe consciousness has only a physical basis, and I have NEVER stated any conditions (biological or otherwise) under which strong AI could or could not be realized.

I have only questioned the assumptions made that 1) consciousness does not play a key role in intelligence and so can be ignored, and 2) that intelligence can be recreated on an algorithmic basis.

consciousness exists with absolutely no concrete evidence that it does

I think part of the problem is that you guys and some strong-AI enthusiasts think the term "consciousness" is one akin to "soul", or somehow implies something over and above emerging from the physically based intellectual faculties of the brain.

It is literally the clearest phenomenon that it is possible to observe. Science studies phenomena; it does not deny them. But no one knows what it is at present.

It is really quite absurd that any clear-thinking person would question that consciousness is a real phenomenon amenable to science, and one that requires understanding before we can expect to recreate true intelligence in the strong-AI sense. If you are not baffled by how a sense of awareness could come about, you have not thought about it sufficiently.

So what's the measurable quantity that would distinguish a conscious from a sleeping intelligence to you? Please devise a viable experiment.

If you can't, then you're just arbitrarily assigning conscious/unconscious at whim (i.e. you're just being 'speciesist').

A wee bit of the schnapps there, Anti :-)? But to weigh in... I'm gathering that you guys are chatting about consciousness being a manifestation of an awake mind. What about when you intentionally make a decision to act while dreaming? Just my opinion, but it seems that the act of recognizing patterns in your thought stream and then taking another action based on that is what defines consciousness. But then - I'm no expert...

I think you two are either not reading my posts very carefully or are deliberately attempting to obfuscate my comments. Otto knows in fact that Noumenon is an atheist, thus Otto is dishonest

Otto is as honest as the driven snow. You harbor mystical inclinations. Mysticism = pseudoreligion. Kant was a religionist. d'Espagnat won the Templeton Prize. Penrose is an admitted mystic. Wake up - smell the coffee.

It is really quite absurd that any clear thinking person would question that consciousness is a real phenomenon

This is exactly what your forebears said about the soul. Where's the EVIDENCE?

Insisting that something exists only works in church. You've been given references of clear-thinking people who are your intellectual superiors who assert that consciousness doesn't exist. AND, you can't even DEFINE it.

All you have is a long tradition of religionists and metaphysicians insisting that it does. And they all said the same thing about the soul. Same difference.

I don't know the answer. If artificial intelligence can go beyond the Singularity, an ultimate artificial intelligence should already exist in the universe. Why can't we observe its existence? Is the universe that big? Or did they get the answer "42"?

That's about the clearest example of an absurd statement that it is possible to make. Because

no one knows what it is at present

-You can't define what it is. You can't define what it does. You can't assign it to any mental function. You can't say if it exists in animals or not. AND you can't provide testimony from any actual scientist who can do any of these things either.

And yet you insist that it's real. That's absurd. That's absurd in the same way that insisting the ding an sich is real is absurd.

We know dark matter and energy exist because of measurable effects. Consciousness exhibits no measurable effects. It is not necessary to explain ANYTHING, just like the soul.

I think you two are either not reading my posts very carefully or are deliberately attempting to obfuscate my comments. Otto knows in fact that Noumenon is an atheist, thus Otto is dishonest

You harbor mystical inclinations. Mysticism = pseudoreligion. Kant was a religionist. d'Espagnat won the Templeton Prize. Penrose is an admitted mystic.

Then you must be an idiot. It's either one or the other: dishonest troll or idiot.

I have not referenced anything of "mystical inclinations" from any of the writers you mentioned. Only a severely immature imbecile or lying troll would say I harbour "mystical inclinations" because I referenced non-mystical and non-metaphysical work by Penrose, Kant, and d'Espagnat. Penrose and d'Espagnat are physicists, and Kant a philosopher. This means they have written on things that have zero "mystical inclinations", with Kant explicitly concluding that metaphysics can't be a source of knowledge.

@GhostofOtto, from my very first post I have been careful to call consciousness a 'phenomenon' of the physical brain. This is not scientifically in dispute, as consciousness and unconsciousness are observable mental states. I have never referred to consciousness as an 'it', or a 'something' that overlays or is additional to any physical processes of the brain.

You have failed to read and comprehend my posts, not for the first time and not unexpectedly. You have either done this deliberately, being only interested to the extent that Jerry Springer would be, or in fact you're perpetually only half-conscious.

Even Dennett does not dispute that consciousness is a phenomenon of the brain. By calling it an 'illusion', he can only mean that it is not a 'thing' in addition to, but just an emergent phenomenon from, physical processes of the brain. If this is so, then I would concur. If not, you would need to explain what else he could possibly mean.

The strong-AI attitude wrt creating true intelligence without even understanding the role (if any) of consciousness [is] like claiming that the alchemists should have been able to make gold because there was no scientific understanding yet that told them they couldn't.

No, assuming that something like consciousness exists with absolutely no concrete evidence that it does, is like an alchemist believing he can turn lead into gold. Only clever parlor tricks and a commanding stage presence will make the audience believe that either makes any sense.

My analogy is spot on; gold DOES exist. The phenomenon-of-consciousness DOES exist. The alchemists did not have sufficient understanding to turn lead into actual gold. The strong-AI alchemists do not have sufficient understanding of consciousness's role in intelligence to claim that such understanding is not required to create an actual intelligence beyond emulation.

The mere fact that they actively obfuscate that issue with the faux claim that consciousness is not a 'thing' (akin to a 'soul', and so an illusion?) is in effect an admission of their fraud. Consciousness is a phenomenon of the brain that is not yet explainable, but nevertheless is an observable phenomenon. Anyone who is 'aware', who has experienced the difference between sleep and being awake (aware), knows unambiguously that consciousness is an observable. And I can't believe that I actually felt a need to type that in.

I have not referenced anything of "mystical inclinations" from any of the writers you mentioned

You do worse - you reference these people without acknowledging that the aspects of their theories you embrace are rooted in mysticism. Their mysticism is inextricable from their science.

You cannot claim that d'Espagnat's theory of unknowable realms has nothing to do with his hypergod. He called it that himself. You cannot call Penrose's idea of consciousness and quantum fluctuations in microtubules science when he himself called it mysticism.

You cannot reference the science that these people generate and IGNORE its mystical origins. Your Kantian Ding an sich is fundamentally mystical in nature. I have proven this on many occasions.

nevertheless is an observable phenomenon. Anyone who is 'aware', who has experienced the difference between sleep and being awake (aware),... knows unambiguously that consciousness is an observable. And I can't believe that I actually felt a need to type that in

We are aware of our surroundings and make conscious decisions while asleep. Why do you choose to ignore the evidence I gave you which PROVES this? Do you think that ignoring it means it's not true?

Ever been awakened from sleep by a noise? Your ears are aware of your surroundings while you sleep.

This is not scientifically in dispute as consciousness and unconsciousness are observable mental states

That's nonsense. It is most certainly in dispute, and YOU can't provide a reference that says it isn't. And again, declaring that it is, with much indignation and bluster, DOES NOT MAKE IT SO.

Here's a useful def for you.

"Consciousness—The having of perceptions, thoughts, and feelings; awareness. The term is impossible to define except in terms that are unintelligible without a grasp of what consciousness means [your circular logic] Many fall into the trap of equating consciousness with self-consciousness—to be conscious it is only necessary to be aware of the external world. Consciousness is a fascinating but elusive phenomenon: it is impossible to specify what it is, what it does, or why it has evolved. Nothing worth reading has been written on it."

-Nothing, because it doesn't exist. Smoke and mirrors. Philo fodder.
