It's made out of meat.

(The following essay was originally written for the program book of HAL-Con 2010, held in Omiya, Japan. Consider it a speculative polemic intended to amuse and provoke, rather than a serious prediction of the future.)

The internet is made out of meat

I realize that this may be a rather strong proposition to swallow, but let us consider the historical evidence:

In 1968, after approximately five years of deliberation, committee meetings, and reports, self-propelled lumps of meat from the US government's Advanced Research Projects Agency awarded another group of meat-lumps a contract to produce a piece of machinery called an Interface Message Processor. The objective of building these machines was to permit the exchange of messages between computers, to allow better use of time-sharing facilities on these expensive, primitive calculating engines ... but even at the outset, it was seen as a useful goal to use computer messaging to allow lumps of meat to send each other email.

See? No lumps of meat, no internet. It's as simple as that!

Less flippantly, we lumps of meat are big on communication ...

I'll drop the meat thing now; people are big on communication. But watch out for anthropomorphism, for the tendency to adopt what cognitive philosophers call "the intentional stance", of ascribing deliberate intent to come up with plausible explanations of behaviour. (Not all of our actions are conscious, rational, or planned, and I'm going to talk here about some stuff that we are usually unaware of.)

Why are we big on communication?

Sometimes it seems as if asking this question is like asking why water is wet: to a first approximation, we can be defined as that species of meat (sorry, primate) that communicates in languages with complex semantics. Human culture is almost entirely about communication, often in manners that aren't superficially obvious: clothing, for example, above and beyond its basic function as an ersatz layer of fur, is almost always used to communicate information about social status and identity.

We're communicators. It's hard to know how long we've been doing it for; recorded communication is a relatively recent development, going back a few thousand years. But it's a fair bet that the evidence of human culture which dates back up to 70,000 years — including organized burials, jewellery and other artwork — required language on the part of its creators.

What is language for?

One hypothesis (which I'm partial to) is that language is a substitute for the physical grooming that maintains social hierarchy in primate groups. If a group is small, the members can groom everyone else and still have time for other activities such as foraging for food. But if a troupe of hominids gets too big — well, you try picking the fleas out of fifty alpha males' pelts before breakfast. Using speech, you can communicate with multiple other primates simultaneously. The upper size limit on a group of social hominids with language is presumably a lot higher than that of apes with no linguistic facility.

But if it's just a social tool, why has it become so important to us?

I don't think we have a definite answer to that question yet. However, I have a gut feeling that the reason we're so communicative is that we are, at a very fundamental level, a communication phenomenon: that is, our actual sense of conscious identity emerges from the internal use of our language faculty to bind together our stream of cognition and create an internal narrative. Internally, language allows us to codify our memories and provides us with a toolkit for symbolic manipulation — it's a very important component of the "theory of mind" which allows us to anticipate the behaviour and internal thoughts of others. And it also extends our awareness beyond the reach of our own sensory organs by allowing us to use others as proxies.

Language is a multi-function tool: it's not just a dessert topping, it's a floor wax too.

I'm writing these notes sitting in the middle of an SF convention in Boston. (This is my fault for being behind schedule.) I've just come from an interesting panel discussion on the subject of the Singularity, with co-panelists Alastair Reynolds, Karl Schroeder, and Vernor Vinge. (Oddly enough we're all sometime hard-SF writers who've dealt with the subject.)

To give a quick recap: over the past 200 years, many of our technologies — themselves, a collection of techniques transmitted horizontally between lumps of animate meat by means of language — have followed a sigmoid development curve.

About 20 years ago, Vinge asked, "what if there exist new technologies where the curve never flattens, but looks exponential?" The obvious example — to him — was Artificial Intelligence. It's still thirty years away today, just as it was in the 1950s, but the idea of building machines that think has been around for centuries, and more recently, the idea of understanding how the human brain processes information and coding some kind of procedural system in software for doing the same sort of thing has soaked up a lot of research.

Vernor came up with two postulates. Firstly, if we can design a true artificial intelligence, something that's cognitively our equal, then we can make it run faster by throwing more computing resources at it. Which means problems get solved fast. This is your basic weakly superhuman AI: the one you deploy if you want it to spend an afternoon cracking a problem that's basically tractable by human intelligence, if human intelligence could work on it for a few centuries.

He also noted something else: individually, on average, we humans are not terribly smart. Our general intelligence, which relies on symbol manipulation, gives us immense power to use other hominids' ideas — but individually we're not terribly good at solving new problems. What if there exist other forms of intelligence which are fundamentally more powerful than ours at doing whatever it is that consciousness does? Just as a quicksort algorithm that sorts in O(n log n) comparisons is fundamentally better (except in very small sets) than a bubble sort that typically takes O(n²) comparisons.
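The gap between those two complexity classes can be made concrete with a toy experiment (my illustration, not part of the original essay): count the comparisons each algorithm actually performs on the same random input.

```python
import random

def bubble_sort_comparisons(data):
    """Bubble sort a copy of `data`; return the number of comparisons made.
    This naive version always does full passes, so the count is exactly
    n(n-1)/2 regardless of the input's initial order."""
    a = list(data)
    comparisons = 0
    for i in range(len(a) - 1, 0, -1):
        for j in range(i):
            comparisons += 1
            if a[j] > a[j + 1]:
                a[j], a[j + 1] = a[j + 1], a[j]
    return comparisons

def merge_sort_comparisons(data):
    """Merge sort (O(n log n), like quicksort on average); return the
    comparison count."""
    comparisons = 0
    def merge_sort(a):
        nonlocal comparisons
        if len(a) <= 1:
            return a
        mid = len(a) // 2
        left, right = merge_sort(a[:mid]), merge_sort(a[mid:])
        merged, i, j = [], 0, 0
        while i < len(left) and j < len(right):
            comparisons += 1
            if left[i] <= right[j]:
                merged.append(left[i]); i += 1
            else:
                merged.append(right[j]); j += 1
        merged.extend(left[i:]); merged.extend(right[j:])
        return merged
    merge_sort(list(data))
    return comparisons

random.seed(1)
data = [random.random() for _ in range(1000)]
print(bubble_sort_comparisons(data))  # exactly 499500, i.e. n(n-1)/2
print(merge_sort_comparisons(data))   # on the order of n·log₂n, roughly fifty times fewer
```

At n = 1000 the quadratic algorithm is already doing around fifty times more work; at n = 1,000,000 the ratio is tens of thousands, which is the shape of the advantage a qualitatively better intelligence would have.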

If such higher types of intelligence can exist, and if a human-equivalent intelligence can build an AI that runs one of them — which is an open question — then it's going to appear very rapidly after the first weakly superhuman AI. And we're not going to be able to second guess it because it'll be as much smarter than us as we are than a frog.

Vernor's singularity is usually presented as an artificial intelligence induced leap into the unknown: we can't predict where things are going on the other side of that event because it's unprecedented since the development of language. It's as if the steadily steepening rate of improvement in transportation technologies that gave us the Apollo flights by the late 1960s kept on going, with a Jupiter mission in 1982, a fast relativistic flight to Alpha Centauri by 1990, a faster than light drive by 2000, and then a time machine so we could arrive before we set off. It makes a mockery of attempts to extrapolate our situation from prior, historical conditions.

Of course, aside from making it possible to write very interesting science fiction stories, the Singularity is a very controversial idea — largely because it is built on top of another controversial idea: that of artificial intelligence.

For one thing, there's the whole question of whether a machine can think — as the late, eminent professor Edsger Dijkstra said, "the question of whether machines can think is no more interesting than the question of whether submarines can swim".

For another thing, there's the whole question of what thinking is. We tend to think (see, I'm doing it!) that it's something to do with language processing, and as computers are machines for performing general-purpose symbolic manipulation, we assume that they ought to be able to think. But thinking, and consciousness, didn't emerge out of nowhere: they showed up as an evolutionary upgrade to what was already a complex survival machine -- the early hominid. Animals may not have language, but we don't deduce from this absence a lack of the ability to reason, or to respond to their environment. Is intelligence a symbol-manipulation problem, or is it something else? And is it something that requires a brain and only a brain, or could it be an emergent phenomenon, a feedback loop arising when an embodied nervous system interacts with its environment?

We may be barking up the wrong tree in thinking of intelligence as something we can construct mechanistically. But there are other routes to a Vingean Singularity. Augmented intelligence, as opposed to artificial intelligence, is one such route: we may not need machines that think, if we can come up with tools that enable us to think faster and more efficiently. The world wide web seems to be one example. Lifelogging and memory prostheses may be another.

But. Let us for a moment suppose that the classical formulation of the singularity is plausible, and that furthermore classical computational artificial intelligence is possible. Where is it likely to emerge?

Genome researchers were flabbergasted in the late 1990s and early 00's when the Human Genome Project delivered its preliminary results. This project, an attempt to conduct the first exhaustive sequencing of a human genome, revealed that humans run on a total of around 24,000 genes — about the same number as a mouse, double that of a roundworm. (Previously, estimates in the range 40-50,000 genes were common, with some predicting as many as 2 million genes.) Large portions of the genome are very similar to those of other vertebrates; meanwhile, a huge quantity of human DNA consists of stuff other than genes — some are concerned with modulating gene expression (and there's a whole epigenetic apparatus of short interfering RNAs that were only discovered in the mid-00's), but a startling quantity of our genetic payload consists of viruses and other forms of what is currently believed to be functionless junk that is along for the replication ride. Indeed, human endogenous retroviruses account for 8% of our genome. They may actually provide some benefit to the host — they're immunosuppressive, and it's theorized that placental mammals were only able to evolve in the wake of ERV infection, which allows a fetus to suppress the maternal immune system — but there's a lot we still don't know about how junk DNA and endogenous viruses modulate our genome.

I don't like to stretch a metaphor too far, but it's tempting to observe that DNA is an information processing system; proteins are expressed, allow the cell that expresses them to interact with the extracellular environment, and deliver feedback to the cell's genetic apparatus, which in turn can express more proteins (or siRNAs and other effector molecules).

And having noted this in passing, it's time to go for the throat ...

Spam is everywhere.

About 92-95% of all email traffic is spam. Every new communications medium that opens up on the internet succumbs rapidly to spam, unless it is designed with such heavy filtering in place that it's almost impossible to send a message to someone else without prior approval. But new communications media don't get adopted unless they're useful — and one of the key uses of a communications medium is to allow strangers with useful information to get in touch. Spam, almost by definition, isn't useful: but it tries to masquerade as meaningful communication.

In the bad old days of email, just about everything anybody sent would eventually get delivered to a mailbox, if it was correctly addressed. When the "anybody" using the internet expanded sufficiently to include unscrupulous advertisers and scam artists, the utility of email began to drop. The solution that eventually turned up was the widespread adoption of filters — software that attempts to determine whether an inbound message is unsolicited rubbish, or something potentially of interest to a human recipient.

There are a vast number of ways of filtering. One of the most effective is to look for patterns in the mail stream; an identical message sent to a million people is almost certainly spam unless it emanates from a well-known mailing list system. Unique messages are less likely to be spam. So looking for huge deluges of identikit mail worked for a while — until the spammers took to appending random snippets of text to each individual message, to make them look different.
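The "identikit deluge" check can be sketched in a few lines (a toy of my own, not a description of any real filter): hash each message body and flag any body that has been seen too many times. The final line shows exactly the countermeasure described above — a random snippet changes the hash, so every copy looks unique.

```python
import hashlib
from collections import Counter

class DelugeFilter:
    """Toy bulk-mail detector: flag any message whose exact body has
    already been seen more than `threshold` times."""
    def __init__(self, threshold=3):
        self.threshold = threshold
        self.seen = Counter()

    def is_spam(self, body):
        digest = hashlib.sha256(body.encode()).hexdigest()
        self.seen[digest] += 1
        return self.seen[digest] > self.threshold

f = DelugeFilter(threshold=3)
blast = "BUY CHEAP DESIGNER HAND-BAGS"
results = [f.is_spam(blast) for _ in range(5)]
print(results)  # [False, False, False, True, True]

# Appending random text defeats an exact-match filter: the hash differs,
# so the millionth copy looks like a brand-new message.
print(f.is_spam(blast + " xk3j9"))  # False
```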

Another filtering technique is to look at the word or letter frequency of the message; purely on a statistical level, spam doesn't look like part of a conversation (unless your correspondents regularly interrupt the flow of discourse to shout BUY CHEAP DESIGNER HAND-BAGS or similar). But again: spam is big business — it's a very effective form of mass advertising — and the spammers are ingenious.
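The word-frequency approach can be sketched as a tiny naive-Bayes classifier (a toy trained on a handful of messages; real filters use large corpora and more careful smoothing, but the statistical idea is the same):

```python
import math
from collections import Counter

class NaiveBayesFilter:
    """Minimal naive-Bayes spam filter over word frequencies."""
    def __init__(self):
        self.counts = {"spam": Counter(), "ham": Counter()}
        self.totals = {"spam": 0, "ham": 0}

    def train(self, label, text):
        words = text.lower().split()
        self.counts[label].update(words)
        self.totals[label] += len(words)

    def score(self, text):
        """Log-odds that `text` is spam; positive means spam-like."""
        vocab = len(self.counts["spam"]) + len(self.counts["ham"]) + 1
        log_odds = 0.0
        for w in text.lower().split():
            # Laplace-smoothed per-class word probabilities
            p_spam = (self.counts["spam"][w] + 1) / (self.totals["spam"] + vocab)
            p_ham = (self.counts["ham"][w] + 1) / (self.totals["ham"] + vocab)
            log_odds += math.log(p_spam / p_ham)
        return log_odds

nb = NaiveBayesFilter()
nb.train("spam", "buy cheap designer handbags now cheap pills")
nb.train("spam", "cheap offer buy now limited")
nb.train("ham", "are you coming to the convention next week")
nb.train("ham", "notes from the panel discussion on the singularity")

print(nb.score("cheap handbags buy now") > 0)      # True: spam-like
print(nb.score("panel discussion next week") > 0)  # False: conversational
```

The arms-race dynamic falls straight out of this: the spammer's job is to craft text whose word statistics score like conversation, and the filter author's job is to find statistics the spammer can't cheaply fake.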

As filters get more sophisticated, the spammers are abandoning old-style broadcast advertisements and are moving to much more tightly targeted ads, addressing the recipient by name and attempting to pitch selectively. The most tightly targeted spam is created for spear phishing attacks (in which specific personal information is used to target selected individuals — usually for identity theft or corporate espionage). Today, this is labour intensive: but it's a fair bet that as more of us place more information about ourselves online, spear phishing techniques will gradually become automated, and targeted junk internet advertising will rise to levels of sophistication we can barely guess at. There's lots of money in spam (these days it's a branch of organized crime), and where there's money, talent can be hired.

We are currently in the early days of an arms race, between the spammers and the authors of spam filters. The spammers are writing software to generate personalized, individualized wrappers for their advertising payloads that masquerade as legitimate communications. The spam cops are writing filters that automate the process of distinguishing a genuinely interesting human communication from the random effusions of a 'bot. And with each iteration, the spam gets more subtly targeted, and the spam filters get better at distinguishing human beings from software, in a bizarre parody of the imitation game popularized by Alan Turing (in which a human being tries to distinguish between another human being and a piece of conversational software via textual communication) — an early ad hoc attempt to invent a pragmatic test for artificial intelligence.

We have one faction that is attempting to write software that can generate messages that can pass a Turing test, and another faction that is attempting to write software that can administer an ad-hoc Turing test. Each faction has a strong incentive to beat the other. This is the classic pattern of an evolutionary predator/prey arms race: and so I deduce that if symbol-handling, linguistic artificial intelligence is possible at all, we are on course for a very odd destination indeed — the Spamularity, in which those curious lumps of communicating meat give rise to a meta-sphere of discourse dominated by parasitic viral payloads pretending to be meat ...

On a more serious note, regarding your throwaway comment that 'animals may not have language': hasn't there been some serious work with dolphins that seems to imply they might have a much richer vocabulary than simple 'good fish over there, watch out for sharks'?

And in the meantime, we end up going back to dropping in on our friends, and getting to know new people at parties and conventions.

(Best of luck for those of us whose circle of friends is geographically far flung.)

Are you aware whether any of the filter models are at all based on the idea of the web of trust, à la PGP? It might slow spam machines down a bit if they had to break a strong cryptographic key for each message they send.

Mind you, at that point, the forward thinking spammer goes for a bit of social engineering - he befriends Cory Doctorow, becomes his daughter's godparent, and then fires off a couple of million spams as fast as possible before that reputation burns out.

It would be very, very trivial to reject or greylist any email that wasn't signed with a trusted public key. The problem is purely social -- how many people do you know who routinely encrypt or sign their mail? How long have we been trying to get people to do it?
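Just how trivial the policy side is can be shown in a few lines (a sketch of my own: the cryptographic verification is stubbed out, since a real deployment would call out to GnuPG or an OpenPGP library, and the fingerprints are hypothetical). All the filter itself has to do is check the signing key against a trusted set:

```python
# Hypothetical trusted-key whitelist; in practice this would be the
# fingerprints in your keyring's web of trust.
TRUSTED_FINGERPRINTS = {"A1B2C3D4", "99FFEE00"}

def classify(message):
    """Return 'accept' for mail signed by a trusted key, else 'greylist'.
    `message` is a dict; 'signature_fingerprint' is absent if unsigned."""
    fpr = message.get("signature_fingerprint")
    if fpr in TRUSTED_FINGERPRINTS:
        return "accept"
    return "greylist"

print(classify({"body": "hi", "signature_fingerprint": "A1B2C3D4"}))  # accept
print(classify({"body": "BUY CHEAP HAND-BAGS"}))                      # greylist
```

As the comment says, the hard part isn't this logic — it's getting anyone to sign their mail in the first place.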

Well, it's one hypothesis for what 'junk DNA' is - given that the spam in this case is 'sequences not of benefit to the human genome'.

The alternative, vide Carl Sagan, that DNA may contain thus-far-undecoded messages to us from our originators would imply that there would be spam too - for how likely is it that even higher level civilisations are free from the curse? Perhaps, embedded within the coding sequences for the male human genitalia, there is an 'enlarge your penis' message.

My pessimistic belief is that eventually (in Internet years, so less than ten calendar years) most people and services will require full sender and path transaction authentication for most internet services, including email, which will likely strangle the Spamgularity in its cradle. It will also be the end of anonymity on the Internet, but that's coming anyway.

Beyond that we will get a real-life version of Vinge's "Trusted Platform" (from "Rainbow's End") which will prevent almost all virus attacks. There is plenty of research going on as to how to build the hardware and software infrastructure of such a platform.

I think this is where much of the industry would like to go. My only point of optimism is that I expect people to find trusted platforms less useful than their free equivalents (compare jailbroken iPhones and rooted Androids). I don't see any way TPTB can block off "Hecho en Paraguay" devices without losing a lot of money.

Few people really desire to be anonymous on the Internet. Anonymity is a barrier to forming or maintaining many kinds of relationships. The success of social networks is a clear indicator that people, on average, don't desire anonymity.

I disagree. I suggest that people like to be anonymous some of the time, eg commenting on fora, and anonymous to a wider audience but known to the people they are directly communicating with, at other times. And sometimes they don't want to be anonymous at all, in some social networks. It depends on the context, but the key point is that I think the majority want to have some say over how anonymous they are, rather than having everyone turn into completely known and identified people online.

Nice post, Charlie. An arms race is certainly a well-trodden path to rapid change, although I think it's unlikely this one will produce AI. But something unforeseen is eminently possible.

Regarding befriending famous people and then abusing their trust: EVE Online is notorious for 'corporate scams' wherein players spend many months and even years building up trust and position before robbing a corporation blind. Sometimes to the virtual equivalent of tens of thousands of dollars.

The point is that people are very trusting and have great difficulty building up their mental immune system. So spam filters do serve as a mental prosthesis to enable us to live differently.

To the contrary, I think few people actually think one iota about how anonymous they are. They only actively want to be anonymous after details they'd rather not reveal have been disclosed. It's a common reasoning problem that most people have: you see it all the time with security breaches. Nobody thinks about security until after security has been compromised, and then SOMEBODY HAS TO DO SOMETHING!

Most people, most of the time, don't need to even consider how identifiable they are.

What about people who want to partition their lives? E.g. people who don't really want their acquaintances to know that they are HunterBob on a furry forum or Sam87 on a gay forum, that they are pseudonymous members of socially embarrassing circles, or that they comment on an SF author's website?

I'm not convinced. Yes, I have a very anonymous usertag on lots of blogs (including this one) but that's partly because the hosting software won't accept any other tag (yes, I've tried), and the tag was originally chosen for I Can Has Cheezburger.

You're right in that most people don't consider the benefits of anonymity until they've made a misstep non-anonymously. That said, there is a definite benefit to continuing to provide options for those of us who have learned these lessons (sometimes the hard way). In most online interactions, anonymity is unnecessary and does produce a barrier to certain kinds of network building. In short, there are arguments for both anonymity and non-anonymity.

They're a small portion of the population. Most people don't fall into that category.

The reality is that there are a large number of entities in the marketplace that would rather anonymity were hard: governments, businesses, charities, etc. All of them want a way to track you, find you again, sell your data, compile their demographics, and so on.

These entities are generally pretty powerful, but they still respond to market forces. The largest section of the market, by a huge margin, doesn't care about anonymity. Whether they should or not is irrelevant, they don't.

Which means you have a small portion of the market that cares, that attempts to lobby and get other people to care, but it just isn't going to work.

Note: I'm not arguing whether anonymity is good or bad, I'm just pointing out that the market doesn't want it, and thus far, attempts to get them to want it have failed, and likely will continue to fail, because for most people, most of the time, it's not necessary or even vaguely interesting.

How can we as a society and as individuals become more immune to the spammer's message, and by extension, to advertising in general? I was under the impression that carefully targeted marketing based upon information scarfed from users/loyalty-card holders and so on did work to some extent.

I agree that the vast majority of people have public identities that they want to endorse. But I think a large subset of the same people also have more private identities on, say, dating sites and are not interested in dropping their pseudonyms on those sites (I would bet the market share of a dating site that enforced public verified real-name profiles would drop faster than a stone, albeit to a near-zero positive number).

Also, pseudonymous does not mean that you forfeit all the potential social-networking gain, just that you have a higher chance of controlling access to the "real you". Recent examples spring to mind: Trappy from TeamBlackSheep (of FPV-over-New-York fame), _why from the Ruby programming community.

Of course, there's a difference between privacy/security and anonymity. P/S means things like not letting phishermen get hold of your account details. Anonymity means not being known (at least by your real name).

I only know a handful of people who use dating sites. None of them use a pseudonym. Anecdotal, I know, but my experience is that people honestly don't give their identity a second thought.

You're misunderstanding my comments on social-networking gain: most people don't social-network to be part of the Ruby programming community. They social-network to interact with their old college roommate, their mother, their drinking buddies. For most people, that's the primary use case of social networks, and that's a use case that works opposite to anonymity.

I'm arguing that most people do not have public identities they want to endorse, and aren't concerned that their private identity becomes public as they interact with the Internet.

Personally, I've taken an odd approach: my professional identity is built around my full name. My personal identity is built around a shortened version of it (an unusual shortening). My personal identity is my public identity; my professional identity is a sop to staying employed.

I'm thinking that the spam problem has mostly been solved by the social-networking approach of building a trusted network that is allowed to email you. It's easy enough to maintain a couple of Facebook/Twitter identities...

Interesting essay - but is the idea that all communication media that enable easy communication also allow for easy spamming really true? On one hand, there are indeed social norms (e.g. no vote-for-this-candidate robocalls in Germany, because they're culturally inappropriate/not legal). On the other hand, social media (a la Facebook, to some extent even a la Twitter) are not really spam-able, as long as the circle of contacts consists of more-or-less-known people. In other words: social media killed the parasitic spam-entity?

It's easy to create them, but ensuring they stay in separate silos is a lot harder. You're almost inevitably going to have overlapping circles of contacts, and it's highly likely that one or more of them will eventually "out" you by linking your identities.

(Says the guy who operated a pseudonymous blog for a while, as somewhere to let his hair down with friends -- until Warren Ellis let the cat out of the bag by forgetting which blog was the official one and which wasn't.)

I think that, in general, social networks make it more likely that spam will spread, if cleverly designed, and that some unfortunate few will indeed generate revenue for the spammers. At least some of us know how to spot spam in our email inboxes, and most people can do it with varying degrees of success. When it occurs on a social network, it is usually because someone in our friends list has clicked on spam postings from someone in their friends list, etc. We don't approach our social networks the same way we approach the wide world, and as a consequence, such spam can cut inroads into areas it would have been less likely to achieve if it were just dropping into someone's mailbox randomly. Facebook has shown this to be true on multiple occasions (one of which I saw yesterday): spam links automatically share themselves with everyone the person knows, and those whose curiosity gets the better of them will also spread the link. After all, we reason, the person is in our network, which implies at least a base level of trust.

Beyond that we will get a real-life version of Vinge's "Trusted Platform" (from "Rainbow's End") which will prevent almost all virus attacks. There is plenty of research going on as to how to build the hardware and software infrastructure of such a platform.

It's been pointed out here and other places that we could have that right now. The problem isn't in the hard/software; it's in the people. Right now, for most people, there's a significant trade-off between security and other factors like convenience, processing speed, etc.

But basic implementation-level holes in the system don't tend to open up as fast as complexity, speed, and capacity increases. Eventually the trade-offs for security will be miniscule, a few tenths of a percentage point in various benchmarks for performance. Encryption won't be a matter of laborious click-throughs and conscious operator decisions and agonizing ADHD-enhanced delays; something like RSA 8,192-bit keys will be the default even if you're going online to order a pizza for pickup. And it will come at the cost of less than one percent of the then-current computing overhead.

Otoh:

My pessimistic belief is that eventually (in Internet years, so less than ten calendar years) most people and services will require full sender and path transaction authentication for most internet services, including email, which will likely strangle the Spamgularity in its cradle. It will also be the end of anonymity on the Internet, but that's coming anyway.

Again, this comes down to what are essentially people reasons. Everyone has a stake in secure communications and in stopping low-level attacks, and so the virus assemblers, worm generators, malware coders etc will always be operating on the fringes of society and against the combined weight of the State and private organizations. They simply don't have the resources to effectively compete in that environment.

Otoh, the big players still have their own agendas to push, so what you will see is not spam, precisely, but chaff. Overwhelming people's capacity to make competent informed decisions with terabytes of unnecessary, irrelevant, and distracting information.

I could see it getting to the point that in 2060 there will be business models predicated on selling people the tools to determine the right service to tell them the right ratings agency to subscribe to in order to filter out all the distracting duppel when deciding what kind of shoes to buy or the holiday season's latest in toys for the tots.

Expect advertising in all its guises, not just the spam-blasting gatling-gun type, to become very, very sophisticated, and so prevalent that multi-year crewed missions to Jupiter on the government's dime won't be able to escape the stuff. Hmmm . . . come to that, expect average human intelligence in the developed world to jump by another thirty points over the next fifty years. For certain values of intelligence, of course :-)

It was easier in Nevil Shute's day: he didn't use a nom de plume, instead he wrote his (arguably sf) novels as Nevil Shute and worked as an aerodynamicist as Mr N.S. Norway, using his initials and his real surname.

Here's a more important point. What evidence is there that we make bad decisions through a lack of intelligence? Obviously it's easy to point to examples of individual stupidity, just as it is to point to examples of individual heroism. But battles aren't won by individual heroes, nor are they often lost by individual cowards.

We know that we have a variety of interesting cognitive biases. There is no reason to think that a human-equivalent AI wouldn't have its own biases, either by accident (bugs), as evolutionary relics of its human programmers, or possibly by design. After all, it is suspected that some of the human biases are examples of rational unreason. We might design certain biases into it. (Asimov's Three Laws of Robotics are an example.)

But more fundamentally, what reasons are there to think there are serious problems we haven't solved because we don't have a superhuman AI hanging around? There are important problems in science that were only solved when we had enough computer power to run the numbers (computational fluid dynamics, hello), but that's not the same thing. Computers didn't theorise CFD or design the experiments. That would be like saying that test tubes discovered penicillin.

There are probably many more problems like that - protein folding, hello, and a ton of stuff in the social sciences - but it's far from obvious that the AI would solve them. It could be a question of whether the computing power needed to run the AI (and to begin with, I bet we're talking 250,000 sq ft datacentres out by the hydroelectric dam) would be better used running the experiment.

What about global warming, then? Aren't we too stupid to solve it? Fuck no. We know exactly what is happening, how it's happening, and why. We can predict what is going to happen reasonably well. And we have a pretty detailed idea of how to fix it. So the jumped-up pile of molten sand, drinking its coal-fired electricity all the while, says "just get on and fix it already". Well, very clever. For this we sent Amazon EC2 to college. Similarly, the good people at Thales and BAE Surface Fleet and Rolls-Royce obviously knew how to build a T45 destroyer, but we somehow ended up paying £1bn a ship.

Most of our bad decisions as societies come down to politics and institutional design. A really expensive R&D project in a big shed is not going to be any more effective in persuading US Senators than James Hansen, much more costly, and quite possibly easier to ignore.

"[Vernor Vinge] also noted something else: on average, we humans are not terribly smart. Our general intelligence, which relies on symbol manipulation, gives us immense power to use other hominids' ideas — but individually we're not terribly good at solving new problems."

Yes and no - but mostly no.

1. Using Rasch (absolute) measures such as the Stanford-Binet V CSS scale, average adults aren't much more than 25% smarter than two-year-olds. OTOH, this shows just how smart even a two-year-old is compared to just about anything else under reasonably equal conditions.
2. Symbolic intelligence is perhaps less than half of general intelligence - "culture-fair" tests such as Raven's Matrices pretty much test only visuo-spatial abilities. (Vocabulary tests do seem to be more accurate, though, at least within a population with a common native language.)
3. Solving new problems is exactly where humans totally beat computers. To a computer, every sink of dirty dishes is a radically new problem. And really, computers rarely solve any problems - it's the human programmers who did all the hard stuff and condensed it into tricks, hacks and rules of thumb that work in a very restricted but often useful set of conditions.
*

"However, I have a gut feeling that the reason we're so communicative is that we are, at a very fundamental level, a communication phenomenon: that is, our actual sense of conscious identity emerges from the internal use of our language faculty to bind together our stream of cognition and create an internal narrative."

Reminds me of the Whorf-Sapir hypothesis and Delany's Babel-17.

While studying on Crete 20 years ago I wrote a story that took the tone of "The Ones Who Walk Away from Omelas" (all exposition of a utopian setting, no plot or characters) and the idea of Babel-17, filed off the serial numbers, and shipped it to a Greek-like environment, with a pinch of Null-A. It was well received at the symposium (all symposia should have wine, food and Greek dancing).
*

"Language is a multi-function tool: it's not just a dessert topping, it's a floor wax too."

It's odd, but when I read the title "It's made out of meat", the dessert topping/floor wax meme immediately popped into my head. Maybe it's really made out of _memes_, not meat. (Also, Spam is regarded as an acceptable dessert topping by most cats, and can likely be used as a floor wax. Nevertheless: Do Not Taunt Happy Fun Ball.)
*

"For another thing, there's the whole question of what thinking is."

Symbols are the easy part - meaning (which perhaps can be defined as the complex connections among symbols, past experiences and past thoughts) is what is missing. Language in computers is a relatively low level thing from which larger structures are built; in humans language is the froth on top of a huge amount of stuff happening in the brain in response to not only the environment but the whole history of the organism.

Computer languages also all have various deep-seated conceptual problems which will prevent them from being used for real AI - textual nature, compiling, lack of homoiconicity, problems with the nature of variables and objects, lack of a correspondence with the physical world of space and time, and so on.

The solutions to these problems may already be out there in bits and pieces, but not all in one language. Clojure has the most bits, I think (it's a powerful, practical, concurrent LISPy language running on the JVM), but some other programming language types (dataflow, visual, spreadsheet etc.) make better use of the scarce resources between the programmers' ears. Better exploitation of higher math concepts in programs is needed - staying at the symbolic level can save trillions of cycles - but also the ability to automatically take advantage of vast numbers of heterogeneous processor cores and unstructured and legacy data, and to use these resources by default to improve programs recursively, to asymptotically approach the long-sought "do_what_I_mean" command. I'm not holding my breath.
*

"...a startling quantity of our genetic payload consists of viruses and other forms of what is currently believed to be functionless junk that is along for the replication ride."

The Wikipedia article "Noncoding DNA" has some interesting stuff on this - some of the "junk" is conserved over tens and even hundreds of millions of years, more than some of the flashier protein-coding DNA that separates us from fish. It must be highly naturally selected. I suspect some of the rest - the fossil and viral sequences - may be important to speciation, allowing re-expression of ancient traits or transferring DNA between species.
*

"...we are on course for a very odd destination indeed — the Spamularity, in which those curious lumps of communicating meat give rise to a meta-sphere of discourse dominated by parasitic viral payloads pretending to be meat..."

Well, some of those marketdroids, talking heads, animatronic politicians and evangel-bots out there already fail the Turing test - lumps of meat pretending to be conscious, who one would think could already be replaced by very small, unwholesome shell scripts. Perhaps they already have been. Who would notice?

It's easy to create them, but ensuring they stay in separate silos is a lot harder. You're almost inevitably going to have overlapping circles of contacts, and it's highly likely that one or more of them will eventually "out" you by linking your identities.

Or you can get the wrong identity established in a certain circle and wish you were using another one after a while... do you "break security", or emulate schizophrenia by joining the same place as two people?

I suspect that even if human-level procedural AI is possible something else will evolve first. From the OP:

There are a vast number of ways of filtering. One of the most effective is to look for patterns in the mail stream; an identical message sent to a million people is almost certainly spam unless it emanates from a well-known mailing list system.

So one way to detect even individually-tailored spam is to look for similarities in messages received by multiple people. Suppose your spam filter was smart enough to make up a set of hashes of the messages it received, based on partitioning the messages into likely semantic units (it doesn't have to understand them, at least in the initial implementation, just figure out which ones contain somewhat self-contained submessages) then shares those hashes with other spam filters. It should be possible for a large enough group of filters to determine that sufficiently-alike messages are spam, and act to warn each other about them, all without exposing the actual contents of the messages.
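The hash-sharing scheme described above can be sketched in a few lines. This is a toy: fixed word-windows stand in for real semantic partitioning, and the messages and threshold are invented for illustration:

```python
import hashlib

def chunk_hashes(message, window=3):
    # Split the message into crude "semantic units" -- here just fixed-size
    # word windows, a stand-in for real semantic partitioning -- and hash each.
    words = message.lower().split()
    chunks = (" ".join(words[i:i + window]) for i in range(0, len(words), window))
    return {hashlib.sha256(c.encode()).hexdigest() for c in chunks}

def looks_like_spam(message, pool, threshold=0.5):
    # Flag a message when enough of its chunk hashes have already been
    # reported by other filters. Only hashes cross the wire, never content.
    hashes = chunk_hashes(message)
    return len(hashes & pool) / len(hashes) >= threshold

# One filter reports a pitch; another sees the same pitch "personalised":
pool = chunk_hashes("Dear Alice buy cheap pills now at our trusted pharmacy site")
print(looks_like_spam("Dear Bob buy cheap pills now at our trusted pharmacy site", pool))  # True
print(looks_like_spam("see you at lunch tomorrow", pool))  # False
```

Real systems in this family already exist - collaborative checksum clearinghouses such as DCC and the Razor/Pyzor networks share fuzzy checksums for exactly this purpose - though their chunking and hashing are far more robust than this word-window toy.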

Now watch as those filters evolve better and better strategies for recognizing equivalent messages as spam, even as the spammers evolve better ways to tailor their messages. Eventually we could get communal organisms of spam filters, in which the Turing-level mental operations take place in the traffic between the filters, not in the individual filters, like the operation of an ant colony.

Have you read much Richard Powers? Although he's pigeonholed as literary fiction, he writes great sci-fi. (The Echo Maker is about subjectivity as a myth of meat. The Gold Bug Variations has computer programmers debugging DNA.) I think he had an inkling of the Spamularity in this story, which is no longer online:

By invitation of an anonymous e-mail, his narrator tests a computer program, DIALOGOS, which weaves its stories from bits of data around the web, without any central authorship. He sends out e-mails into the DIALOGOS void–to Emma Thompson, to an estranged friend, to Goethe, to Emily Dickinson. Each missive is returned with implausible accuracy.

First you/we have to recognise there IS a problem ("is it a bug, or a feature?" is another way of saying this)
THEN
You have to formulate what options you might have/want to follow to solve the problem - ASSUMING you have correctly defined the problem.
THEN
You have to ask the right questions, properly framed.
IF you get to that point, then you will almost certainly solve the problem(s)

If one looks back at the history of science and technology, one sees that very often framing the "right" question is the killer operation.
And it is a lot harder to do than many people think.
Oddly enough R. M. Pirsig at-least-half-addressed this difficulty in "Zen and the art..."

However, I have a gut feeling that the reason we're so communicative is that we are, at a very fundamental level, a communication phenomenon: that is, our actual sense of conscious identity emerges from the internal use of our language faculty to bind together our stream of cognition and create an internal narrative. Internally, language allows us to codify our memories and provides us with a toolkit for symbolic manipulation — it's a very important component of the "theory of mind" which allows us to anticipate the behaviour and internal thoughts of others. And it also extends our awareness beyond the reach of our own sensory organs by allowing us to use others as proxies.

My own theory is that a large part of the selection pressure that's pushed the evolution of human language is that it allows us to better display the parameters of our theory of mind to other humans. This in turn makes it easier for them to interact with us, because they can more easily predict how we're likely to react to their actions. And vice versa, of course.

One reason for believing this is the way some other animals use communication. Take dogs: they're intelligent enough that some dogs can recognize more than 300 words, but since they don't have anything like a human larynx or parrot trachea, they can't speak them. They do communicate quite well using body language, odors, and non-verbal vocalizations, but those are not parts of symbolic language. Rather they're a fairly sophisticated mechanism for communicating state of mind and for interrogating state of mind in other dogs.

Symbolic language is even better at communicating state of mind because it can communicate arbitrarily complex states, and can adapt to communicating newly-discovered states within a single lifetime, rather than requiring the evolution of new behaviors or odors.

The thing about AI, intelligence, and whether machines can think is always the same (at least so far): people overlook something that machine designers don't even have a clue how to build: an emotional model.

Every decision you make is half rational, half emotional. Well, it's never an exact half, but you're not as rational as you may expect.

On the other hand, you have a testable experiment (there is a famous patient with this condition): without a tiny little part of the brain near your right eye, you are a perfectly normal person except that you can't use emotions to make decisions.

Consequences of that?

You become more of a rationalizer than Sheldon and Spock combined. Put two slightly different pencils in front of this poor (real-life) patient and he will struggle through an endless list of pros and cons, trying to conclude why one is superior to the other.

Our emotions are the shortcut that saves us from overanalysis of every little thing we do.

And yet, we barely know what emotions are. Not to mention a model for those that works well with some rational engine.

And that's not to mention consciousness.

It's nice to see technological enthusiasm, but a lot of people are just a bit too enthusiastic.

(warning, this is poorly articulated, as I'm not sure what I'm saying or where this thread of thought is going to go)

Nearly all of the motivation of spam is a call to action to engage in commerce, and/or an attempt to inject some meme into you, to convince you that some idea is true, useful, or worthy. As the arms race you describe continues, the actual calls to action, the presentation and framing, and the subtlety and "truthiness" of the payload memes will get more subtle, more infectious, and more "injectable", just because those that do not will lose the race.

However, much/most of human to human communication are ALSO calls to action and "meme spreads".

How do our evolving filters tell them apart?

Especially as machine-generated or machine-tweaked communication becomes MORE compelling, comforting, memorable, or interesting than the peer human-generated stuff? One of Chiang's stories touches on this, and it is also something that is already happening in the real world.

One bad end result is that the filters evolve not to keep out spam, but to keep out things we do not like. This will take the already existing tendency of many memes to be antagonistic to competition, and the resulting human tendency towards echo-chambering, and make it happen at machine speed with machine strength.

i was thinking about googling alastair reynolds this morning, i had read one of his books a while ago. i like terence mckenna's food of the gods for the thing about language: something about red meat, caffeine, and refined sugar makes for an aggressive spammish society. i would like to have algae but we can't afford it because of the ill-gotten ways of the spammers. spam doesn't really bother me, because me and money don't mix, so we find it kind of amusing. man i hate spam.

oh i forgot about AI. HEY, AREN'T WE HUMANS SUPPOSEDLY AN UNINTERRUPTED INTELLIGENCE FROM SINCE THE BEGINNING OF THE COSMOS? AND SO IF THERE'S EVER AI, WOULDN'T THAT BE SOMETHING FROM SOMEWHERE ELSE OR THE FUTURE, AND NOT OF US, because everything that we have here right now is supposedly an extension and an expression of us. does that make any sense?

Check out this video of African wild dogs hunting (apologies for the commercial starting it). This shows a cooperatively breeding carnivore using basic doggish communication skills to set up a multi-pronged attack.

Similarly, dolphins coordinate attacks, and African gray parrots fly in flocks up to 500, and use varied calls to coordinate their movements (for instance, a family of parrots will invite some groups to share food, but not other groups).

The advantage to intra- and interspecies communication is so prevalent that everything from bacteria on up does it. Complex communications are also resource and energy expensive, which is why guppies don't spend a lot of time tweeting to each other, even though they're constantly coordinating schooling, predator avoidance, and social dominance.

As for the spamularity, I think the first thing to demonstrate is that this isn't the definition of civilization as it currently stands. If one buys Diamond's thesis that agriculture was the worst mistake our species ever made, then civilization = the triumph of intelligent spam follows as a natural conclusion. (/fnord)

The really horrifying thought is that if there is a singularity via spam wars, then it would likely follow one of two extremes. One is a spammer-dominated emergent system, in which we become perfect pawns to the manipulation of an omniscient system that has a cognitive model of us sufficiently deep to allow any manipulation: if we're talking deities, it's an Olympian god.

On the other extreme would be the spam-fighter emergent system, constantly trying to punish and destroy that which is not human based on a limited and narrow set of criteria. When this set of criteria becomes the new morality, with punishment meted out by movement to the real-world equivalent of /dev/null, then in deity terms this is the vengeful Old Testament Yahweh.

Or we're all some sort of meat wrapper designed to disguise the spam messages virally encoded in our DNA, either left over from some long dead galactic spam machine, or trapped in somebody else's spam filter. Which is why nobody seems to be coming to visit.

Antonio Damasio's neuroscience/philosophy books are very literate yet technically grounded explorations of the role of emotions and feelings in how we function. "Looking for Spinoza: Joy, Sorrow, and the Feeling Brain" is perhaps the one most directly related to this topic.

Why do we call viruses separate entities when they are actually part of the genome and thus the genotype? Why not just take them as a weird expression of the phenotype? Like pollen or slime?

Of course, this raises the question of whether retroviruses were in fact the first kind of virus, which only later gave rise to viruses that are separate from the genome - or whether it was the other way around.

But it would make sense, as viruses are a great alternative, yet selective, way to spread snippets of genetic information when more direct ways like sexual transmission are not available or impractical. All the usual rules about evolution would still apply. Only it would suddenly make a whole lot of sense to have large populations of clones, as those would still have genetic interaction and variation via retroviruses.

"But basic implementation-level holes in the system don't tend to open up as fast as complexity..."

Unfortunately, that idea (proven false in the 1970s by Fred Brooks, as described in The Mythical Man-Month) just refuses to die. It also leads to much insecure software, and periodically causes my life to suck.

In my experience, of several years, security flaws in any given system tend to increase much faster than most pragmatic measures of complexity. This holds true even when incidental complexity (caused by the use of inappropriate tools, languages, etc.) is factored out. Tools, methodologies, etc., have not, in a broad sense, stemmed the tide.

The argument I usually use when cautioning against adding more than the absolute minimum required complexity is a straightforward reliability calculation. A system composed of two 90%-reliable widgets is 81% reliable. In the real world, where no one person may grok the entire code base (which increases the impact of interpersonal communications problems), this example tends strongly toward optimism.

This stuff has been out there for forty years, and confirmed time and again.
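The 81% figure in the comment above is just the product of independent component reliabilities; a short sketch (the widget numbers are the commenter's example, not measured data):

```python
from functools import reduce

def system_reliability(components):
    # A serial system works only if every component works; assuming
    # independent failures, the reliabilities simply multiply.
    return reduce(lambda acc, r: acc * r, components, 1.0)

print(round(system_reliability([0.9, 0.9]), 4))   # two 90% widgets -> 0.81
print(round(system_reliability([0.99] * 20), 4))  # twenty 99% parts -> ~0.8179
```

Which is the point: reliability decays multiplicatively with part count, so every extra moving part has to buy more than it costs.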

@EH, I think you may be aiming too low... it's the difference between engineering and science — and both Our Gracious Host and Vernor Vinge are science-fiction authors, not engineering-fiction ones.

I think "solving new problems" doesn't refer to washing dishes or even building a bridge across the Bering Strait; those are known or nearly-known problems and we can apply known techniques to them. I think "solving new problems" refers more to things like the question of the orbits of the planets around the sun — something that we've been diligently trying to work out for the last 300+ years, since Newton. We know they're approximately ellipses — but what are they exactly?

Similarly, the difference between Python and Clojure is an engineering one. AI won't be cracked by those sorts of differences; there's nothing magical about concurrency or LISP. AI will be cracked, if it is, by progress in science, in this case computer science and various related fields. Or, of course, by accident, although in that case it can hardly be termed "artificial".

(BTW, my working definition for the difference between engineering and science is that engineering is expected to bear fruit in 5 years or sooner, applied science in 5 to 50 years and pure science after 50 years or more.)

I recently added AES 256 bit encryption to a custom network protocol implementation I'm responsible for. All packets are encrypted. Load analysis shows no significantly measurable time difference from the non-encrypted version to the encrypted version. And this system is running on an average eight-year-old computer with no hardware acceleration.

Encryption isn't costly anymore for many typical scenarios. However, I'd imagine if you wanted to stream encrypted video you'd notice the overhead. But for email and web? No problem.

"(BTW, my working definition for the difference between engineering and science is that engineering is expected to bear fruit in 5 years or sooner, applied science in 5 to 50 years and pure science after 50 years or more.)"

In the UK then, science is probably screwed. Since, as I understand it, government policy is that most research funding should yield commercial results in a short (i.e. 5 year) time period.

Also, on the science vs engineering thing, particularly in the field of computer science, I suspect there is a fair amount of blurring of boundaries. I say this as an engineer who also happens to be the named inventor on a number of patents, a few of which are over a decade old and only now becoming useful. Was that engineering, or science? I don't care; to me, they were just neat ideas, and luckily my employer thought I was right!

Hm, I'm not convinced by "SPAM mutated into an online identities portfolio manager". At least not in a thread about SPAM evolving into AIs. I will only be convinced if we start to see public relations departments (including the staff, the network access, the social media, the PR "rules", the multiple fake identities managed by promotional players and the toner cartridge) as networks-of-actors-that-are-entities (cf. Latour etc.). In that sense, public relations departments could become organizations specialised in breaking SPAM detectors and in trying to be convincing enough to break the human SPAM sensors too. But to see the PR department as an AI?

Machine on Machine warfare is really developer heuristics vs d.h. They aren't even trying to build an AI. What I'm certain is going to evolve from this is something like chess programs or flight stabilisation software that will eventually work very well on parsing natural human texts.

But no general-purpose AI. Let me put it like this: some software surpasses even the most capable pilot at manoeuvring an aircraft (short-term), and yet it couldn't do a flight attendant's job and bring you your coffee. To me, if it's not GPAI it's not intelligence at all, and certainly not superhuman in any meaningful way (that is, anything more than the sense in which you can call a forklift, a chess computer or a falling rock superhuman).

A spam-producing GPAI would ultimately achieve its goals by selecting the stuff that I'm actually interested in, so that I turn off my spam guard against this GPAI. But I suspect even that can be done without AI, just a huge data collection and some heuristics. Depressing, really.

On the thought of whether it's possible for machines to think at all: IMO it's far from certain that humans think, either. If you work on the assumption (I think I recently read Charlie subscribes to it, too) that thought is a bunch of electro-chemical interactions in the CNS, you have a pretty deterministic system, mitigated only by some quantum fluctuation. That leaves no room for will or decisions (a predetermined decision is no decision, right?) or any of that, just some dominoes falling, cause and effect. I'm not sure you can call that thought either, only awareness. In that case we are just along for the ride, like crash test dummies in a pointless experiment with a certain outcome.

Um, no, thinking is defined as what we think we do in our heads (or in our hearts, if ancient Greeks, I suppose). If we don't do it, it's pretty much a null term with no meaning. Conversely, the idea of the Turing test is that so long as a machine can pass the sorts of tests we apply to other human beings to decide that they have the same sorts of minds as we do and are not philosophical zombies, then there's no reason not to call what those machines are doing 'thinking' as well.

Now, it's certainly possible that thinking is not actually the sort of activity we often describe it as being to each other (e.g., mental imagery, stream of consciousness, subvocalization, although there's empirical evidence for much of that), but it's distinctly non-useful to say that 'thoughts' as defined in the language of discourse are not.

It's still spam- but like the most successful spam, it pretends to be something people are actually interested in.

Yes. This is what I meant by informatic chaff. Take something like Obama's tax cuts or net neutrality or global warming. It doesn't take a genius to figure out what's what as long as the unadorned facts are presented in an unbiased manner. But by putting out a blizzard of spin, the low-information user can be successfully befuddled into thinking there's some sort of controversy and that "more study" needs to be done, or it's just a matter of competing (and lying) interest groups.

The part that makes the disinformation spam is that the people paying for it want to sell you something - in this case, to sell you on something: an intangible idea.

Notice btw how the idea of agency creeps in. The welter of conflicting information isn't some accident, it's intentional. Intentionality seems to be something that a lot of higher organisms have glommed onto as an organizing principle. My near-sighted chihuahua will bark at those blinking warning signs put out around road work. I think he thinks the light is the head and the stand is four legs. Why does he bark? Because the sign is blinking at him. On purpose.

did anyone ever read that article in wired about the two guys both working on AI,
i think one was at MIT and the other was some nut up in canada, and they both committed suicide the same way several months apart...

Humans are so good at seeing unlike problems as being the same that we are often no longer able to see the enormous differences which would completely flummox a computer. A is hardly ever A' if they refer to things outside a computer. Two pictures of two objects of the same class hardly ever have more than a handful of random pixels in common.

On the other hand your example of planetary orbits are about as close to perfect for computers as any real situation ever gets - in vacuum, enormous inertia, most of the mass in the sun, nearly coplanar, all the really unstable orbits long since gone for everything bigger than a few tens of meters, only a tiny number of rather precisely known numbers really matter to the calculation. Aside from very, very slight plasma and light effects (almost certainly the answer to the slight extra-solar probe acceleration anomalies) and the inherent chaos of n-body math, this is as solved as problems get. The limits to precision are inherent in the chaos, and will never improve much at predicting the far future, even if someday we know the mass and position of every rock bigger than a bread box within a thousand AU to 17 decimal places...

And no, computer languages aren't equally powerful, except in a technical sense that only applies to non-existent computers with infinite memory, infinite time, no interaction, and perfect programmers with infinite time, knowledge, memory etc. If memory, time, programmer capability, and multiple unsynchronized machines running many unknown programs and serving interrupts from the random and weird real world are important, then language choice matters.

Languages which require a compile cycle are different in practice from those which don't. Languages that encourage programs that can rewrite themselves (eg LISPs) are fundamentally different from imperative languages which separate code and data. Sure, you could do it in C, but it would involve doing really ugly, difficult things such as effectively working in assembly and maybe recreating large parts of the compiler and/or OS kernel. The easiest way would be to re-implement the LISP interpreter in C.
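The code-as-data idea isn't unique to LISPs, but in imperative languages it has to be bolted on through a library rather than falling out of the syntax. A sketch in Python of a program rewriting another program via its syntax tree (the function and the transformation are invented for illustration):

```python
import ast

# The "program as data": parse source text into a tree we can manipulate.
src = "def f(x):\n    return x + x\n"
tree = ast.parse(src)

class SwapAddForMul(ast.NodeTransformer):
    # Rewrite every addition into a multiplication.
    def visit_BinOp(self, node):
        self.generic_visit(node)
        if isinstance(node.op, ast.Add):
            node.op = ast.Mult()
        return node

new_tree = ast.fix_missing_locations(SwapAddForMul().visit(tree))
ns = {}
exec(compile(new_tree, "<rewritten>", "exec"), ns)
print(ns["f"](3))  # the rewritten f computes x * x -> prints 9
```

In a LISP, this whole dance is just manipulating lists with the same functions you use on any other data; that is what homoiconicity buys you.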

Likewise, I can theoretically do everything that the Java libraries can do by moving rocks around on the beach to represent the state of the registers and logic. In actuality, those libraries took many thousands of man-years of skull sweat, and no matter how powerful and capacious the computer (or not) I had, no matter what library-lacking language I had, I couldn't reproduce the capabilities of the standard software on my own, let alone test it and fix the bugs to the same degree, no matter how much time I took. Those library capabilities make languages that have them of a completely different order than those which don't.

Now when the code can rewrite itself, has vast libraries available, runs on nearly anything without a port or recompile, and takes most of the problems out of concurrent programming (which is where all increases in power are coming from, and the problems with which range from really nasty to much worse) - that has the potential to be able to do things which couldn't otherwise be done at all. Real AI? No. Things which seem magic even (or especially) once you've poked around inside the works? Absolutely.

"Um, no, thinking is defined as what we think we do in our heads (or in our hearts, if ancient Greeks, I suppose). If we don't do it, it's pretty much a null term with no meaning."

But is most of our mentation for most people properly classified as "thinking"? It's a deeper question than it appears . . . and it's been the basis of more than one horror story (or at least what I would classify as horror.)

"Conversely, the idea of the Turing test is that so long as a machine can pass the sorts of tests we apply to other human beings to decide that they have the same sorts of minds as we do and are not philosophical zombies, then there's no reason not to call what those machines are doing 'thinking' as well."

"Now, it's certainly possible that thinking is not actually the sort of activity we often describe it as being to each other (e.g., mental imagery, stream of consciousness, subvocalization, although there's empirical evidence for much of that), but it's distinctly non-useful to say that 'thoughts' as defined in the language of discourse are not."

I've never bought this one for the reason that it's trivially easy to get a machine to pass the Turing test using only a lookup table. I don't have any reason to believe that "when x string appears output y string" is what most people would classify as thinking, or at least thinking in a remotely human style.
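The lookup-table point can be made concrete with a toy (all the canned strings here are invented): however large the table grows, nothing between input and output resembles thought:

```python
# A "Turing test contestant" that is pure table lookup: map each input
# string to a canned reply, with an ELIZA-style deflection for misses.
CANNED = {
    "hello": "Hi there! Lovely weather, isn't it?",
    "are you a machine?": "What a strange question. Are *you* a machine?",
    "what is 2+2?": "I was never any good at maths, sorry.",
}

def reply(utterance):
    # Normalise the input, then look it up; no state, no model, no thought.
    return CANNED.get(utterance.strip().lower(), "How interesting. Tell me more.")

print(reply("Hello"))   # Hi there! Lovely weather, isn't it?
print(reply("xyzzy"))   # How interesting. Tell me more.
```

(The standard rejoinder is that a table covering every possible conversation would be combinatorially vast, so "trivially easy" is doing a lot of work here; but the toy does show why passing behavioural tests and thinking are separable in principle.)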

"(BTW, my working definition for the difference between engineering and science is that engineering is expected to bear fruit in 5 years or sooner, applied science in 5 to 50 years and pure science after 50 years or more.)"

5 years? I wish just once I had that long to crack an engineering problem... It used to be: find the problem before the next silicon tape-out, and you've really no more than 2 months, because by that point the design will no longer be able to garner market share based on its design specs.

Yes, it does make one's head spin, but I never saw a chip get to revision 3.0 because of design errors; by that point the project was dead. If you did see a rev 3.0 piece of silicon, it was usually because a die shrink gave it enough of a frequency boost to continue selling it as a viable product.

"Humans are so good at seeing unlike problems as being the same that we are often no longer able see the enormous differences..."

I think this is the idea of the human brain as an abstraction engine. You could, however, less flatteringly call it a prejudice engine.

"On the other hand your example of planetary orbits are about as close to perfect for computers as any real situation ever gets..."

I'm no cosmologist, so forgive me if the following sounds a little amateurish, but I think all these problems are nowhere close to being solved. We have no idea what causes the dark flow. We have no idea if dark matter exists, and if it does, where it exists and what it is made of. The same goes for almost every phenomenon astrophysicists prefix with "dark" or "black", and both types of known space/time singularities, i.e. black holes and the big bang. Yet without these phenomena our theories wouldn't work at all. AFAIK we are nowhere close to having a model where we can just start the computers, number-crunch, and miraculously all will be revealed.

"I think this is the idea of the human brain as an abstraction engine. You could, however, less flatteringly call it a prejudice engine."

i think that's pretty much exactly it. there is evidence that other species will adopt the vocalisations of 'new' individuals in order to make themselves more attractive to a mate - humpback whales in Western Australia and some types of songbirds here in New Zealand.

Abstraction allows us to better conceptualise the means to acquire needs, and to better explain the attraction of wants.

So if you're talking about AI, you need to work out what its wants and needs are. Those two things would likely trigger what we think of as intellect?

You mean a newspaper? A way of getting you to look at adverts by wrapping them in interesting content. It's where you draw the line between pretending to be interesting, so that you read the adverts by mistake, and actually being of interest, so that you don't resent picking up the adverts as well.

Right, I see what you meant. I thought you were talking about planetary orbits in the solar system sense, not cosmology in general. Planetary orbits in the solar system are pretty well settled, but many of the wild conjectures in cosmology seem to me to not be properly scientific, but rather aimed at countering the apparent falsification of hypotheses that have achieved the status of cosmological dogma. Dark matter and dark energy in particular seem to squint towards epicycles. On the other hand, I'm not happy with the non-conventional alternative hypotheses either.

There seems to have been a growing problem over the past few decades with "scientists" and science fans claiming to have figured everything out, not only speculating beyond the data but being dogmatic about their unproven, undefined and sometimes even falsified hypotheses. I'm especially suspicious of models that either aren't mathematical, aren't explicit, or contain dozens of hand-tuned parameters and fudge factors. Nearly everything that people get dogmatic about seems to fall into one or more of these categories.

OK. Can I play the it's-not-my-mother-tongue card on that one? I can't learn or use a foreign language if I have to triple-check every word I use. So at some point I'll just take some things for granted, even though they may turn out to be wrong ... right up until I find out.

I would appreciate your criticism more if so much as an attempt to answer the actual question had come along with it.

I've never bought this one for the reason that it's trivially easy to get a machine to pass the Turing test using only a lookup table.

It would be trivially easy if you had a lookup table of unlimited size and an infinitely fast oracle to fill it in. By that standard, resolving the Millennium Prize problems in mathematics is also trivial: assume a lookup table containing the answers you seek, indexed by the problem number. Ta-da!

"But is most of our mentation for most people properly classified as "thinking"?"

Yes, inasmuch as the two terms are pretty much synonyms. Everything is properly classified as thinking that is defined as thinking, as per your pick of dictionary entries or common usage. What's the point of narrowing the definition to something so divergent from either? Now, if you want to say that most thinking is more of an epiphenomenon, and less a determinant of our behavior, than is commonly supposed, you might have an argument, but that doesn't mean it isn't still what people mean when they say or write the word "thinking".

"I don't have any reason to believe that "when x string appears output y string" is what most people would classify as thinking, or at least thinking in a remotely human style."

Do you have any reason to believe that anybody other than yourself thinks, then? What reasons, if not (some form of) a Turing test? How do you know I'm not just an Eliza-derivative with a (hopefully) better lookup table?

Shorter form: I submit that 'thinking' is best understood as whatever activity people are actually referring to when they use the word, not some narrow subset of cogitation or cognition that meets your criteria for 'real' thinking.

I read all of Alastair Reynolds' books about a year ago and here's my internal chronology, with links to each of my reviews. I haven't read the most recent book, Terminal World, and am waiting for the trilogy starting in 2011.

I've never bought this one for the reason that it's trivially easy to get a machine to pass the Turing test using only a lookup table.

It would be trivially easy if you had a lookup table of unlimited size and an infinitely fast oracle to fill it in.

Well, no, our table isn't of unlimited size (if I'm understanding you correctly); it's merely very big, and there's nothing in the laws of physics that says such a thing is impossible. Also, I don't know what you mean by trivially easy because what you write next is not the same thing at all:

By that standard, resolving the Millennium Prize problems in mathematics is also trivial: assume a lookup table containing the answers you seek, indexed by the problem number. Ta-da!

Yes, he says drily - I had better not hear of anyone accusing me of being snide, mean or rude! - assuming the conclusion is one way to prove something. It's not usually the right way, as I have to tell my students over and over again, but . . .

The problem here is that you're assuming your answers exist. You can't do that; that's something you have to prove. OTOH, a big lookup table is feasible in the sense that we know the entries exist, obviously, because a regular human can generate them in the course of a conversation!

But in any event, this is a distraction from the main point that while this machine can pass the Turing test, few people would say that the system is "thinking".

Yes, the Chinese Room argument is the same sleight-of-hand trick. Assume that the hard part of the problem has been done outside the boundaries of a system you define, then demonstrate that there is nothing remarkable left within the boundaries. Let the reader of the argument falsely conclude that there was never anything hard about the problem.

There's a similar argument to conclude there is nothing remarkable about problems that humans can solve well only with computers. "Machines can predict next Wednesday's precipitation from models and current conditions, but a human could do it by hand with enough patience." Sure, a human could in principle do just what the machine does, imitating its algorithms by hand with chalk and a blackboard. But for that they'd need to be immortal and to have a blackboard the size of Rhode Island, and the "prediction" would arrive centuries after the actual weather.

These "in principle" arguments are too quick to assume things as part of the argument that are not just unlikely or difficult but literally impossible.

I don't think that this machine is even a Chinese Room except in the most trivial sense. There is almost no computation going on here; the whole apparatus is just one humongous conditional statement embedded in a loop.

In fact, it's hard to come up with any sort of computation at that level of abstraction that is simpler. At least, for me it is. I've been out of the computer science game for literally decades now and someone might have a counterexample ready to hand.

The lookup table you need to pass the Turing test isn't merely big, it's of infinite size. For example, anyone should be able to tell you the sum of a given integer and 1. But the integers are infinite. You'd need a lookup table of infinite size to deal with the most elementary arithmetic questions.

Of course the machine could deflect instead of answering directly, Eliza-style, when it encounters an input not in its finite-size lookup table. But I think that will be noticed by a questioner who's specifically looking to sort men from machines.

On the other hand your example of planetary orbits are about as close to perfect for computers as any real situation ever gets [...] this is as solved as problems get.

Well, yes. I was referring to the process of solving it that we've been doing manually over the last 300 years, going from what Newton wrote to what we have now, inventing several fields of mathematics along the way.

I used an already-mostly-solved problem as my example because that way we can read about its history and how difficult it (unexpectedly) turned out to be, and because of its familiarity.

There's a bunch of open problems today, at least some of which are likely to turn out to be 300-year posers — the Millennium Prize Problems, for instance, or what's left of Hilbert's problems.

The question, then, is "[whether] there exist other forms of intelligence which are fundamentally more powerful than ours", which would solve these problems not just faster, but fundamentally faster — and whether we can find one (a different question from proving that one exists), and what will happen if we do. And how it fits in with other hypothetical forms of AI.

For the programming languages, you seem to have ultimately reached the same conclusion — that the difference between them will not lead to real AI... I'm glad we agree on that.

So, if real AI comes at all, it'll come from something more fundamental than the difference between programming languages — presumably computer science research and research in related fields.

"But is most of our mentation for most people properly classified as "thinking"?"

Yes, inasmuch as the two terms are pretty much synonyms. Everything is properly classified as thinking that is defined as thinking, as per your pick of dictionary entries or common usage. What's the point of narrowing the definition to something so divergent from either?

This would seem to be an eminently reasonable reply were it not for the actual history of the subject. Almost invariably, the cycle seems to be that once a machine has mastered an activity considered to involve thinking, the consensus opinion changes to "that's not really thinking". The ability to play a good game of chess would be the canonical example, I believe. Moreover, it seems to me that in the rare instances where machines approximate the sort of data processing humans do, the tendency is to label the processes "not really thinking".

And that is where the subject becomes interesting. As Charlie points out, no one really knows what thinking really is, nor is there a good, universally agreed-upon set of criteria which the definition should encompass. Physics envy again, I guess.

"I don't have any reason to believe that "when x string appears output y string" is what most people would classify as thinking, or at least thinking in a remotely human style."

Do you have any reason to believe that anybody other than yourself thinks, then? What reasons, if not (some form of) a Turing test? How do you know I'm not just an Eliza-derivative with a (hopefully) better lookup table?

Well, I don't disagree with the notion of administering some sort of Turing test. I just happen to believe that while it's a necessary condition, it's hardly a sufficient one.

The lookup table you need to pass the Turing test isn't merely big, it's of infinite size. For example, anyone should be able to tell you the sum of a given integer and 1. But the integers are infinite. You'd need a lookup table of infinite size to deal with the most elementary arithmetic questions.

The flaw here is that you're assuming that humans (or indeed any finite-state automata) could produce as output a string of infinite length. Not true. And more practically still, the length of time to administer these tests is finite. So, for example, no one will be asking a human the sum of a number on the order of a googolplex and one; as Carl Sagan says, there simply isn't enough room in the known universe to write down this number, even if it was jam-packed with neutronium from one edge of the cosmological horizon to the other.

So no, still a finite amount of space.

Of course the machine could deflect instead of answering directly, Eliza-style, when it encounters an input not in its finite-size lookup table. But I think that will be noticed by a questioner who's specifically looking to sort men from machines.

But if that's what humans would really do, such dissembling is to be expected, and furthermore, such dissembling can be expressed in a bit string of finite length.

Btw, it's not as if the equivalent of lookup tables isn't a common strategy in nature. Someone - Dennett, or Gould, or possibly Sagan again - once commented that the complex behavioural patterns of everything from sheep to bumblebees are a bit of a cheat. They look remarkable and elegant only because they compress all the failed experimentation - all the billions and trillions, hundreds of trillions, of possible ancestors who didn't get the dance quite right - into a bit of code that can be expressed in considerably fewer than, say, 10^30 bits. There's even an old con based on this simple but powerful idea.

Well, I don't disagree with the notion of administering some sort of Turing test. I just happen to believe that while it's a necessary condition, it's hardly a sufficient one.

Depends on what you're testing for. For generic "intelligence" the Turing Test is not necessary; I'd not expect the output from an Alien Intelligence to be indistinguishable from that of a human, therefore an Alien would fail the Turing Test, but it's intelligent nonetheless.

The Turing Test merely tests to see if a machine can produce results indistinguishable from human intelligence, and that's a poor definition of testing for artificial intelligence. It may, however, be a useful test for the creation of an AI that we can interact with on a daily basis :-)

The flaw here is that you're assuming that humans (or indeed any finite-state automata) could produce as output a string of infinite length. Not true. And more practically still, the length of time to administer these tests is finite. So, for example, no one will be asking a human the sum of a number on the order of a googolplex and one

Doesn't need to. "Infinity" in this context is a tricky one. If you don't have an infinitely large lookup table then how will you be sure you've caught the answer to 1.23456789 + 1? If you only build your table to 5 decimal places then someone asking a question to 6 dp will cause your lookup to fail. The first operand need not even be expressed as a rational, nor even as a known irrational; it could be presented as the sum of a series (sum n=1->inf 2^-n)+1.

Basically, it doesn't matter that a human can only ask a finite number of questions; the potential question space is infinite, and a lookup table would need to cover all potential questions.

However, this fragment of a debate is a dead-end; it presumes a lookup from question to answer. The answer, however, could be an algorithm. You don't need to have a lookup for all possible answers of "+1", you just need an algorithm that evaluates the "+" and then works on the two operands.
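To make that concrete, here's a minimal sketch in Python (the question format and names are invented for illustration): a finite table fails the moment a question falls outside it, while a tiny evaluator covers the whole unbounded "+1" family.

```python
from decimal import Decimal

# A finite lookup table, built (say) only to 5 decimal places.
LOOKUP = {
    "What is 1 + 1?": "2",
    "What is 1.23456 + 1?": "2.23456",
}

def answer_by_table(question):
    # Fails (returns None) for any question not explicitly tabulated.
    return LOOKUP.get(question)

def answer_by_algorithm(question):
    # Parses "What is X + 1?" and actually evaluates the addition.
    x = question.removeprefix("What is ").removesuffix(" + 1?")
    return str(Decimal(x) + 1)

print(answer_by_table("What is 1.234567 + 1?"))      # None: 6 dp, not in table
print(answer_by_algorithm("What is 1.234567 + 1?"))  # 2.234567
```

The table has to grow with every extra decimal place the questioner might use; the evaluator doesn't.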

FWIW, computers can deal with infinite strings perfectly happily, using "lazy evaluation" where it only needs to evaluate the string to the required level of accuracy.
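A sketch of what that looks like in Python, with a generator standing in for the "infinite string" (names invented for illustration):

```python
from itertools import islice

def thirds_digits():
    # The infinite decimal expansion of 1/3 ("333...") -- never materialised in full.
    while True:
        yield "3"

def to_precision(digits, n):
    # Lazily force only the first n digits of the expansion.
    return "0." + "".join(islice(digits, n))

print(to_precision(thirds_digits(), 5))   # 0.33333
print(to_precision(thirds_digits(), 10))  # 0.3333333333
```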

Problems which are too complex for humans might include problems which a human can't describe, let alone solve. Philosophers of science have for the last few centuries made much of the notion that the universe is simple enough for human beings to comprehend, and this has become somewhat of a truism. But it might not in fact be true. Perhaps what we've been able to discover and think about so far is simple enough for us to understand, and we haven't discovered the deeper and more complex aspects of the universe because we've been looking for things we could understand. It's entirely possible that large parts of the way the universe is constructed are too complex for a human mind to cope with.

Consider as an example the proof for Fermat's Last Theorem, or the Four Color Map Theorem. Both of those proofs are too complex for a human to hold the entirely in mind at one time. It's possible that there are physical theories that are both true and similarly complicated. Certainly String Theory is so complex that very few people can understand it (though we don't yet know if it's true).

If one can conceive of a mind capable of formulating such problems, one can conceive of a mind capable of solving them; and if one can conceive of a mind capable of solving them, one can conceive of a mind capable of formulating an explanation of such problems and their solutions, abstracted and simplified just enough to be comprehensible to any given lesser mind. That one can conceive of such problems and minds does not prove their existence, of course, unless one also buys Anselm's ontological argument.

(Why not? I'll have a go at a modern version: To conceive of a mind is a far more subtle difference from actuality than to conceive of a concretely physical contrafactual, to imagine a mind is to begin to emulate it - and if a lesser mind can conceive of a greater, a greater can certainly conceive of itself, or a lesser. The least complicated formulation of quantum mechanics is that anything that can happen does happen. Lesser minds than ours exist, so unless we are the greatest minds possible, greater minds exist and the same argument holds for them. Therefore arbitrarily great minds exist. Therefore God exists.)

The question is, though, from our perspective: are we dealing with a greater mind or a phony? How can we tell? The question is relevant for current theories. I've never seen anything even purporting to lay out the so-called "standard model", and I've looked hard. The theory-formerly-known-as-string is even worse - it is like a description of the qualities of the parameters of the topologies of the compactifications of the dimensions of the classes of the functions of the functions of the hypothetical entities a theory might have if someone actually were to come up with one. I doubt anybody actually understands most of what theoretical physicists have been paid for over the past few decades. Are they really smart, real mountebanks, or some of both?

If we can't tell with physicists, who are at least technically human, how can we possibly tell with a high-level AI?

The flaw here is that you're assuming that humans (or indeed any finite-state automata) could produce as output a string of infinite length. Not true. And more practically still, the length of time to administer these tests is finite. So, for example, no one will be asking a human the sum of a number on the order of a googolplex and one; as Carl Sagan says, there simply isn't enough room in the known universe to write down this number, even if it was jam-packed with neutronium from one edge of the cosmological horizon to the other.

So no, still a finite amount of space.

A questioner allowed to communicate in Twitter-size messages once per minute can still easily write arithmetic problems that no physically possible lookup table is likely to contain but that humans (or computers not based solely on lookup tables) can easily solve. Just pick a random number between 10^99 and 10^100 and ask the other participant to add one. The question is under 140 characters. The answer is under 140 characters. The number of table entries needed to cover all question/answer pairs in this range is far greater than the estimated number of atoms in the observable universe.
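The arithmetic behind that challenge is easy to check (using 10^80 as the usual rough figure for atoms in the observable universe):

```python
import random

# One randomly chosen 100-digit question fits comfortably in a tweet...
n = random.randrange(10**99, 10**100)
question = f"What is {n} + 1?"
answer = str(n + 1)
assert len(question) <= 140 and len(answer) <= 140

# ...but a table covering just this one family of questions is hopeless:
table_entries = 10**100 - 10**99  # count of 100-digit integers: 9 * 10^99
atoms = 10**80                    # rough estimate for the observable universe
print(table_entries // atoms)     # ~9 * 10^19 table entries per atom
```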

More generally, I am suspicious of definitions of intelligence that require looking inside the box. If I observe intelligent behavior from an entity but the implementation is surprisingly simple, I am going to revise my assumptions instead of my observations. I believe that playing grandmaster-level chess is intelligent whether done by man or machine. I also believe that it and similar spectacular but narrow triumphs illustrate that the "general intelligence factor" is not universal, and that intelligence is not a scalar value.

The Turing test is a theoretical tool. It's a thought experiment. So any handwavery along the lines of "oh, well, no one would really ask questions for a year on end" or "the questions would be really pretty short" is irrelevant.

And if you want to talk about an actually practically possible Turing test, then you have to go all the way and say, for example, how many entries the lookup table would have to have, and how fast a processor you'd need to search them. And if the conclusion is that you'd need a moon converted into computronium for your lookup table... well, sorry, but that argument won't work.

Second point: don't forget that your lookup table won't just need answers to every possible 140-character question. It'll need answers for every stage of every conceivable conversation composed of 140-character questions. Otherwise it'll fail this very simple test of short-term memory:

1. Q: What's your name?
A: ELIZA

2. Q: OK, Eliza. Now, just to be sure I'm talking to a human, next time I ask you what your name is, I want you to say "BARBARA". Do you understand?
A: YES
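A sketch of why this catches the machine (table contents invented for illustration): a lookup keyed on the current question alone carries no short-term memory, so the instruction simply cannot take effect.

```python
# Stateless lookup: the key is only the current question, never the history.
TABLE = {"What's your name?": "ELIZA"}

def stateless_bot(question):
    # Eliza-style deflection for anything not in the table.
    return TABLE.get(question, "I see. Please go on.")

print(stateless_bot("What's your name?"))                      # ELIZA
print(stateless_bot('Next time, say "BARBARA". Understand?'))  # deflects
print(stateless_bot("What's your name?"))                      # ELIZA again: caught
```

To pass, the table would have to be keyed on the entire transcript so far, and that is where the combinatorial explosion bites.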

Pretty bad. Also he gets the source wrong - it was Anselm, not Aquinas who came up with this bit of sophistry. (Anselm also made Canterbury the throne of the primate of Britain, stopped priests from marrying (c.1100) and had a rather interesting life.)

IIRC I came across the ontological argument in Philip K. Dick's VALIS. Even a guy who was more than half convinced that he was in telepathic contact with evil space alien Yahweh thought the ontological argument was absurd.

If thought is a bunch of electro-chemical interactions in the CNS, you have a pretty deterministic system, mitigated only by some quantum fluctuation.

No, that's a gross oversimplification. The CNS doesn't have well-defined boundaries, and is constantly receiving on the order of 100mbps of incoming sensory data, much of it delivered over analog channels, from a chaotic environment. Moreover, individual processing units within the CNS (neurons) have life cycles, modify their axonal connections to other neurons, and die. There are other cells within the brain with less clear functions -- astrocytes and oligodendrocytes -- that modulate or otherwise affect neuronal signal processing. The neurons are also subject to bulk chemical gradient fluctuations in the cerebrospinal fluid medium they're perfused by, and in the blood circulation in general.

What this means is that the CNS is a poster child for sensitive dependence on initial conditions -- provide the exact same inputs twice at different points in time and you will get subtly different outputs. (To get the same output from the same brain you'd have to have some mechanism for resetting both the brain, and its environment, to a previous checkpoint. Simulation using reversible computing, maybe.)

As a point of note, a friend of mine has a standard approach for breaking ELIZA bots (or tackling the Imitation Game, aka Turing's thought experiment): a couple of exchanges into the conversation, throw in "I can smell smoke", and then "my kitchen is on fire".

Breaks ELIZA bots every time.

(Which is to say, a lookup table that can cope with exceptions like "my kitchen is on fire" is not an easy beast to populate; the combinatorial explosion is rapid and enormous.)

Don't touch, it's still sore. I don't think that theoretical physicists deserve the contempt you seem to hold for them. They are not gods and don't know all the answers. That is only to be expected.

It has worked like this ever since "science" itself was established: You observe a naturally occurring phenomenon (experimental physics). You develop a mathematical model for it ("explanation", theoretical physics). You try to find as-yet-unobserved phenomena that the theory predicts ("proof", experimental physics). You try to find phenomena that it doesn't explain (sometimes falsely called "falsification", experimental physics). You expand on your original model or create a radically different one (theoretical physics).

Almost invariably, the cycle seems to be that once a machine has mastered an activity considered to involve thinking, the consensus opinion changes to "that's not really thinking". The ability to play a good game of chess would be the canonical example, I believe.

Note that, in this case, there's also the parochial version of thinking. In the case of chess, we do know that the machine isn't thinking in the sense defined above: the way it decides how to play is extremely different from the way a human decides on the next move.

I sometimes feel like a very deterministic system. Once, I was recording some video to use in compositing with a friend, and I made some comments regarding the takes as I shot them, which were captured on the audio track. Later on, after enough time had passed for me to forget exactly what I had said, we reviewed the videos on the PC, and I found myself repeatedly making the same comments, which were then echoed by my recorded voice. It was a creepy "I can see my wires" moment for me.

I've been on the lookout for such telltales ever since, and they do crop up reliably. Not exactly deja vu, more like deja do...

Philosophers of science have for the last few centuries made much of the notion that the universe is simple enough for human beings to comprehend, and this has become somewhat of a truism.

I assume you're not using "comprehend" in the "grok" sense, 'cos that'd be silly.

Consider as an example the proof for Fermat's Last Theorem, or the Four Color Map Theorem. Both of those proofs are too complex for a human to hold the entirely in mind at one time.

These are two different things: "comprehend" and "hold the entirely(sic) in mind at one time"

There are many, many things that can't be held in mind at the same time; one example is a row of 10 pennies. It's very rare for someone to be able to look at the row and see at a glance that there are 10, but it's very easy to count them and determine that there are 10. Equally, it's very hard to visualize a row of 10 pennies, or 20, or 30. Despite that, I can comprehend that 10 pennies can buy twice as many "penny chews" as 5 pennies, but I can't hold 10 of those in my mind at one time either.

To aid in this comprehension we break things down into simpler units: 30 pence could be built in columns of 5, and those 6 columns arranged 2x3. We can now intuitively compare that 30 to a similarly arranged 50 pence and determine which is larger, simply because of the simplification (we're just comparing 3 to 5).

You referred to the four-colour problem; the proof of this is, in fact, quite easy to comprehend, for someone with sufficient mathematical training. Basically mathematics was used to reduce the problem space to a known finite (but large) number of cases, and a program written to evaluate all those cases. The computer merely did in a faster way something that would be impractical for humans to do, but not beyond human comprehension.

I found myself repeatedly making the same comments which were then echoed by my recorded voice. It was a creepy "I can see my wires" moment, for me.

One theory for intelligence arising was to increase the ability to predict actions about to be performed by others. Predicting your own actions as recorded on tape should be relatively simple in comparison :-)

This is used (at least by me) to reclaim a lost chain of thought; if I return to the starting point of the chain then there's a good chance that I'll be able to repeat the steps, return to the point where I got distracted/lost/whatever, and then continue on.

This "prediction" ability also shows up when watching TV shows or films; have you ever known what a character's next line was going to be? The joke in my house, when this happens: "Oh, didn't you know? I wrote this episode". Sometimes it's bad writing and the next line is telegraphed way ahead (a meme, a trope, a generically shallow character); sometimes it's well-written character consistency. Whatever it is, it's still your intelligence predicting the future.

The real purpose of a "Turing Test" is - should be - to attempt to convey an arbitrary idea to the other side, and see if the other side works out some of the consequences of that idea.

Wrong. Utterly wrong.

Here's Computing Machinery and Intelligence, Alan Turing's 1950 paper in which he describes the imitation game. (Warning: PDF.) It's an imitation game; the goal is to deceive the other side as to the nature and identity of the player. Nothing about conveying arbitrary ideas and seeing if the other side can reason on the basis of them. (Otherwise systems like Wolfram's Alpha would, arguably, come close to passing.)

Don't buy it. If we call it thinking when a person does it, it's thinking.

Now, part of what we are referring to is not only the behavior resulting out of the thinking (e.g., making a useful chess move, producing the correct answer to an arithmetic question) but the ability to give an account of how one's thoughts led to one's actions, i.e., verbal or written evidence of having a model of one's own mental states. As far as I know, nobody's produced an AI that can, say, play chess AND carry on a conversation about why it moved the way it did AND switch to discussing arithmetic and geometry in the same way AND then go on to critique a novel or short story.

"I'm especially suspicious of models that either aren't mathematical, aren't explicit, or contain dozens of hand-tuned parameters and fudge factors."

Wow. Had to think about this for a while. I can't say I disagree, but are you aware that you just created a new definition for physics? Look up the cosmological constant if you care. I'm guessing you have a background in mathematics or software engineering and therefore distrust any arbitrary values. So do I. BUT sometimes they can't be avoided if you want anything workable to play with.

Re breaking ELIZA bots by putting "I can smell smoke" into the conversation - doesn't that also break humans? I.e., what would be a sensible human reaction to such departures from conversational routine in a computer-mediated chat? I find this example interesting because it is similar to Garfinkel's experiments with normal expectations and everyday routine doings. And humans can't cope very well with such ethnomethodological experiments (like ignoring every rule of courtesy in a "normal" conversation).

I think we are on a different page from a lot of the participants in this discussion. While there are a lot of expert systems already in operation today, they are nothing like a General Purpose AI. That would be the one that can play chess, fly a plane, serve coffee and participate in discussions about artificial intelligence - all of these things in the same system.

The expert system may for its purpose be a more powerful or versatile tool than the human mind(=brain=body), but so is a pocket calculator or a cell phone or a forklift. It's not superhuman, however, not even weakly. It's a tool in the same way that an axe is more proficient in chopping wood than a human hand but in no way superior to a human hand in all its applications.

For me (us?) a general purpose AI would have to be at least comparable to a trained (trainable) monkey.

Yes, the Chinese Room argument is the same sleight-of-hand trick. Assume that the hard part of the problem has been done outside the boundaries of a system you define, then demonstrate that there is nothing remarkable left within the boundaries. Let the reader of the argument falsely conclude that there was never anything hard about the problem.

Have I got a comic for you. Iow, there's no trick to the Chinese room setup at all, though Penrose was greatly mistaken about it's implications. Yes, the Chinese room does understand Chinese in a way that Penrose the operator cannot. For as Hofstadter points out, there is indeed some misdirection going on. We are told that all we are getting is "just slips of paper" and that Penrose is "just making alterations according to some rules". Reading the original, it's easy to get the impression that Penrose is sitting at the kitchen table with an in-basket on his left, a (possibly largish) three-ring binder with instructions (transition rules), and an out-basket on his right. So then Penrose is a significant component of the Chinese room, in fact, the biggest part of it. And that is where the power of his argument lies insofar as he's trying to deride strong AI: It's easy to pooh-pooh the idea of the seat of understanding lying in a three-ring binder and a couple of baskets of paper.

In reality, the more accurate picture is that Searle is in a continent-sized warehouse of paper, and he's just a clerk shuffling one stack of papers to another after duly marking particular sheets with their authorized stamps. He's not even an electron in this computing machine; he's merely the motive force that pushes the electrons around. Quite a different picture, eh?

All this is making the usual assumptions that strong AI proponents make, of course.

Doesn't need to. "Infinity" in this context is a tricky one. If you don't have an infinitely large lookup table, then how can you be sure you've covered the answer to 1.23456789 + 1?

...

Basically, it doesn't matter that a human can only ask a finite number of questions; the potential question space is infinite, and a lookup table would need to cover all potential questions.

Well, no, that's not how it works. Humans can only generate strings of finite length. So, for example, if they can generate (optimistically) 100 bits/sec, and they are allowed 10,000 seconds, they can only produce strings holding 1,000,000 bits of information. So it's easy to see that there are at most 2^1,000,000 different possible strings, and only (2^1,000,000)^2 distinct possible question/answer pairs (assuming the reply is of equal length). A very large number[1], I grant you. But still finite.
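A quick sanity check of that arithmetic, using arbitrary-precision integers (a sketch; the 100 bits/sec and 10,000-second figures are the assumptions stated above):

```python
import math

# Upper bound on the strings a human can produce at 100 bits/sec
# over 10,000 seconds (the figures assumed above).
bits = 100 * 10_000                 # 1,000,000 bits
strings = 2 ** bits                 # at most 2^1,000,000 distinct strings

# Distinct (question, answer) pairs of that length:
pairs = strings ** 2                # (2^1,000,000)^2 == 4^1,000,000
assert pairs == 4 ** 1_000_000

# Far too large to print, but its decimal length is easy to bound:
digits = math.floor(pairs.bit_length() * math.log10(2)) + 1
print(f"a number with roughly {digits:,} digits")  # ~602,000 digits
```

Astronomically large, but, as the comment says, still finite.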

However, this fragment of a debate is a dead-end; it presumes a lookup from question to answer. The answer, however, could be an algorithm. You don't need to have a lookup for all possible answers of "+1", you just need an algorithm that evaluates the "+" and then works on the two operands.
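The contrast is easy to make concrete with a toy sketch (illustrative only; the question format is invented):

```python
# Lookup-table approach: every question/answer pair stored in advance.
# Covering just "a+b" for a, b in 0..99 already takes 10,000 entries.
table = {f"{a}+{b}": str(a + b) for a in range(100) for b in range(100)}

# Algorithmic approach: one rule covers every addition question,
# including operands the table's builder never anticipated.
def answer(question: str) -> str:
    left, right = question.split("+")
    return str(int(left) + int(right))

assert table["12+34"] == answer("12+34") == "46"
assert answer(str(10**99) + "+1") == str(10**99 + 1)  # no table holds this row
```

A few lines of algorithm cover a question space that no physically realizable table could enumerate.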

What prompted my initial response was the assertion that any machine capable of passing the Turing test must be said in some sense to be thinking. As my little scenario shows, that's not necessarily the case.

[1] 4^(10^6), to be exact. Far larger than a googol, but insignificantly tiny compared to a googolplex.

A questioner allowed to communicate in Twitter-size messages once per minute can still easily write arithmetic problems that no physically possible lookup table is likely to contain but that humans (or computers not based solely on lookup tables) can easily solve. Just pick a random number between 10^99 and 10^100 and ask the other participant to add one. The question is under 140 characters. The answer is under 140 characters. The number of table entries needed to cover all question/answer pairs in this range is far greater than the estimated number of atoms in the observable universe.
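The claim is easy to verify numerically (a sketch; the ~10^80 atom count is the usual order-of-magnitude estimate for the observable universe):

```python
import random

# A random 100-digit addition fits in a tweet, but a table covering
# every such question cannot fit in the observable universe.
n = random.randrange(10**99, 10**100)
question = f"What is {n} + 1?"
assert len(question) < 140              # fits in a Twitter-size message

entries_needed = 10**100 - 10**99       # 9 * 10^99 distinct questions
atoms_in_universe = 10**80              # common order-of-magnitude estimate
assert entries_needed > 10**19 * atoms_in_universe
```

The table would need billions of billions of entries per atom, while a computer that actually performs the addition answers instantly.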

I'm not understanding your point here. The size of the inputs and outputs matter, because we are conducting a test that takes a finite length of time. But why does the size of the insides of the machine matter? That's a pretty unorthodox claim, at least by strong-AI standards.

I've never bought this one for the reason that it's trivially easy to get a machine to pass the Turing test using only a lookup table.

I don't think you quite understand what the word "trivially" means.

Would you care to explain why? Or would it be acceptable to certain people (as well as being consistent of them) to point out that you're being gratuitously rude, snarky, dismissive, or whatever other character defect I can semi-plausibly accuse you of in this context?

The Turing test is a theoretical tool. It's a thought experiment. So any handwavery along the lines of "oh, well, no one would really ask questions for a year on end" or "the questions would be really pretty short" is irrelevant.

Indeed. Do you know what the Turing test is? Have you read the actual paper? If you had, you'd know that these limitations were part of the game. And in fact, Turing insisted that the machine playing the game be a discrete-state machine[1]. He even made provisions for the workings of the machine to be unclear.

Second point: don't forget that your lookup table won't just need answers to every possible 140-character question. It'll need answers for every stage of every conceivable conversation composed of 140-character questions. Otherwise it'll fail this very simple test of short-term memory:

[1]This is sort of crucial if playing the strong-AI game the way it's usually played. One assumption here is that any physical process can simulated to an arbitrary degree of precision on a discrete-state machine (i.e., a digital computer.) At no point are any assumptions made about the size of the machine, nor about the "speed" with which it simulates these processes.

What this means is that the CNS is a poster child for sensitive dependence on initial conditions -- provide the exact same inputs twice at different points in time and you will get subtly different outputs. (To get the same output from the same brain you'd have to have some mechanism for resetting both the brain, and its environment, to a previous checkpoint. Simulation using reversible computing, maybe.)
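Sensitive dependence is easy to demonstrate with any chaotic map; a toy sketch (the logistic map stands in for chaotic dynamics generally, and is of course not a model of the CNS):

```python
# The logistic map x -> 4x(1-x) in its chaotic regime: a tiny
# perturbation of the "same input" is amplified until the two runs
# no longer agree at all.
def trajectory(x0: float, steps: int) -> float:
    x = x0
    for _ in range(steps):
        x = 4.0 * x * (1.0 - x)
    return x

a = trajectory(0.123456789, 100)
b = trajectory(0.123456789 + 1e-12, 100)  # perturbed in the 12th decimal

# After 100 iterations the perturbation has grown by many orders of
# magnitude; the two trajectories have decorrelated completely.
assert abs(a - b) > 1e-9
```

Getting bit-identical outputs from such a system twice would indeed require resetting it, state and environment, to a previous checkpoint.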

That's certainly true in terms of gross physical processes. But whether or not these make a difference is the sticking point. If they do, then one of the assumptions of the strong AI crowd is invalidated.

Like I said, there's a lot of physics envy going on these days. And it's not served the scientific community very well at all. I hold the physicists responsible, of course.

I've frequently thought that one of the prerequisites for Artificial Intelligence is the creation of "Artificial Stupidity." This sounds like a joke, but it's not - many of the behaviors we attribute to "intelligence" are multiple examples of really simple behaviors stacked on top of one another.

Take the very simple example of a cat meowing to be let out. First the cat has to figure out that it can make noise (an obvious artifact of basic learning during kittenhood); then the cat has to figure out that the human being can hear it meow. Further, the cat has to know you'll treat its noises as attempts at communication. Then it has to figure out that there is a door, which opens, which involves some kind of memory and anticipation of action on the part of an object that has no ability to move on its own. Then it has to figure out that the human can act upon the door, and that the human will use all of the abilities that the cat has developed/understood in the previous cases to "grok" the communication from the cat as it sits in front of the door making noise.

None of these realizations even comes close to the level of intelligence. Any argument Pavlov or Skinner might make about positive feedback or the cat being a "black box" applies here, and none of the "realizations" I discuss above even rises to the level of "difficult" for a human being with fairly bad brain damage... but even a well-programmed computer/robot can't figure this stuff out. Sure, you can program a robot to make noise near a door, or even seek out a human then make noise and go to the door, but the robot can't figure this out, and even a fairly stupid cat gets this eventually.

When you've got a robot that can build useful complex behaviors out of simple behaviors on its own, then we'll talk about "artificial intelligence."

I remember I used to work at these places that used computers, and for some reason they had these programs that would get you into other states of consciousness. Of course they wouldn't tell you, and then they would also try to direct the ride, which was really stupid on their part. Sometimes we used to smell fire; mostly, though, we figured out it was bullshit that was filling the airwaves. Spies are some interesting fellows...

And explanations must be falsifiable[1]. This point has become of tremendous import to me over the last few years. The general method of inquiry - the cycle of observation, hypothesis, testing, modification, RepeatUntil - doesn't seem to be well understood by a lot of people. And that's contributed greatly to the erosion of the decision-making process the world over.

I don't single out the politicians, the oligarchs, the corporate chieftains and their minions, the hucksters, the con men, the ad men, etc by the way.

It seems that in large part, this is simply the way humans are built, cognitively speaking. They prefer and intuitively understand narrative as an explanatory mechanism. The scientific method? That's awkward and not easily taught, and not always used well, even by the people who live by it. And it's certainly not preferred. People don't want facts and theories. They want stories. That's not a bad thing. But it does make those who have only narrative and nothing else extremely vulnerable to manipulation.

Sorry about the meta digression. But I feel very strongly that this is an extremely important point. One that could make or break humanity as a civilized species.

[1]And generally speaking, most people seem to think that the ones providing the explanation are the ones responsible for coming up with the disconfirming experiment.

Note that, in this case, there's also the parochial version of thinking. In the case of chess, we do know that the machine isn't thinking in the sense defined above: the way it decides how to play is extremely different from the way a human decides its next move.

Oh, I agree completely. My original comment was a response to the idea that whatever humans do in their heads is ipso facto thinking. While this is certainly a defensible view, the history of the field - the parochial version that you refer to - seems to discount it in significant ways.

I'd say that this is even something of an sf (or horror) trope: that humans don't really think/aren't really conscious/etc. Ties in to the whole sf notion of the Sapir-Whorf superman and even dicier stuff like the man who was able to use fifteen percent of his brain instead of the usual ten.

Yes indeed. And there should be a lot of envy going on. It's one of the subjects that is less reliant on conjecture than most. I'm sure that some non-physicists know their stuff (mathematics) and are probably greater minds than I am (hello heteromeles, scentofviolets). I still think that the best way to describe any phenomenon is to provide a mathematical model of it, not a statistical proof of a phenomenon described in "human language". Having practised law some twenty years ago, I am wary of that. (To put it bluntly: I'm not a physicist.)
@123, 124, (79)

"float" and similar are not the only way of storing numbers in a computer. They're just fast and sufficiently convenient for many tasks. Computers are quite capable of performing infinite precision arithmetic (this was, in fact, a task back in the first term of my uni days).
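For what it's worth, this is trivial to demonstrate in Python, whose built-in integers are arbitrary precision; `Decimal` and `Fraction` also cover the exact non-integer case raised upthread:

```python
from decimal import Decimal
from fractions import Fraction

# Python integers are arbitrary precision out of the box:
big = 10**100 + 1
assert big - 10**100 == 1           # no overflow, no rounding

# Nor is "float" the only choice for non-integers. The 1.23456789 + 1
# worry from upthread is handled exactly by Decimal or Fraction:
assert Decimal("1.23456789") + 1 == Decimal("2.23456789")
assert Fraction(123456789, 10**8) + 1 == Fraction(223456789, 10**8)

# Binary floats, by contrast, only approximate most decimal fractions,
# which is where the rounding worries come from.
```

The trade-off is speed: hardware floats are fast precisely because they give up exactness.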

I think one problem is that we tend to view human intelligence as something reasonably consistent and measurable. I would say that we're dealing with a spectrum, not only in terms of processing power, but actual processing of information.

If it seems inherently hard to imagine how a non-human intelligence could function, it's perhaps worth noting that it's just as hard to see how certain types of human intelligence function. Communication, especially through language, is such an important aspect of human culture that there is an inherent bias within it in favour of those who can communicate, as obviously we do not have access to the mind-state of those who, for whatever reason, do not.

There seem to be several components to human intelligence (let's say: internal processing capacity, external ability to communicate, and that strange elusive emotional state which impacts them both). For example, it seems that many humans who display processing capacity at the higher end of the spectrum fall short on the others. Perhaps some minds are either forced to or choose to divert resources. What we do know of the brain seems to indicate a startling capacity for reallocation.

To push my point to an extreme, if a mind were vastly superior in processing power to others around it, it would likely spend much of its time just trying to make itself understood. It might consider that to be a waste of time and get on with easier things.

So super-intelligences might in fact already be among us but be unwilling or unable to make their presence known. Let's say the Great Spamhaus suddenly becomes sentient, what's the likeliest outcome? I'd say reduced rates on Cialis, Viagra and Gucci... ;)

To take the example of chess again, even using it to evaluate intelligence between humans ignores the fact that in an actual competitive game setting, the capacity to mess with the opponent's processing through the emotional level is an integral part of the game that can be leveraged. So what's actually measured is not intrinsic ability, but the capacity to integrate these different aspects towards a successful outcome.

Using it to evaluate intelligence between human and machine, is therefore not really a level playing field. Chess, like poker, has as much to do with reading cues and playing mind-games as probabilities. Granted, the machine will likely not use mind-games actively, though would it tell us if it did?

You are obviously right; there are libraries that support this kind of storage. That's why I wrote "arbitrary precision". From what I've read you're not some lowly developer, but in many languages there is a huge performance penalty for using objects instead of primitives.

I'm not understanding your point here. The size of the inputs and outputs matter, because we are conducting a test that takes a finite length of time. But why does the size of the insides of the machine matter? That's a pretty unorthodox claim, at least by strong-AI standards.

If this "lookup table" approach is supposed to be a pure mathematical/philosophical construct, then it requires an infinitely fast oracle to pre-populate an infinitely large table of questions and answers because the questions that can be asked are likewise unbounded by physical constraints.

If the lookup table approach is instead supposed to be something that could be constructed with sufficiently advanced engineering and queried by a real human being, then it is literally impossible because there isn't enough matter in the observable universe to build a table even to cover Twitter-size questions.

One is a non-solution and the other exists only in the realm of pure thought. Neither is a trivial solution to the Turing test.

Such answers are of course possible ("Why are you still chatting?" is another one). But I don't think they are very probable: why should someone whose house is on fire continue to chat, and why should the person on the other side assume that they will? Is it appropriate to continue talking if someone's house is on fire? Or should one start calling the fire brigade? But to do that one would need the address, which would mean continuing the chat, thus lowering the other side's chances of putting out the fire. At least I guess not everybody is cool-headed enough to react sensibly in such a situation; many would start to think in circles.

None of my grousing was directed at you, I'm just less convinced than you seem to be that some areas of science are following the method they say they are. I don't have contempt for physicists, but I do think that there are areas of "science" where they seem to be playing tennis with the net down. Hypotheses and theories are sometimes so loose, underspecified and mutable that they "aren't even wrong". Evidence that agrees is unquestioned, evidence that refutes is ignored or worse.

Adding to what you said:
The sensitive dependence on initial conditions amplifies the noise all the way up from the quantum level, which is generally thought to be "truly random". Also the details of the placements of the synapses and larger structures depend on the microtubules which bend sharply in response to a reversible change in state of a single electron in a protected pocket of the tubulin molecule. (Hameroff and Nanopoulos' papers have interesting details.) So not only the behavior over time but also the structures themselves are sensitive to tiny fluctuations. (Even without the rest of the quantum brain hypothesis necessarily being true.)

Other tidbits:
Free will theorem (from Wikipedia):"The free will theorem of John H. Conway* and Simon B. Kochen states that, if we have a certain amount of "free will", then, subject to certain assumptions, so must some elementary particles. Conway and Kochen's paper was published in Foundations of Physics in 2006." (Free will in the sense of actions not being determined in advance.) [* yeah, that John Conway - Game of Life, Surreal Numbers, up-arrow notation etc.]

...Jonathan Barrett from the University of Bristol and Nicolas Gisin from the University of Geneva .... assume that entanglement does occur as quantum mechanics prescribes and then ask how much free will an experimenter must have to rule out the possibility of hidden interference.... if there is any information shared by the experimenters and the particles they are to measure, then entanglement can be explained by some kind of hidden process that is deterministic. It means there can be no information shared between them and the particles to be measured either. In other words, they must have completely free will.
....
In fact, if an experimenter lacks even a single bit of free will then quantum mechanics can be explained in terms of hidden variables. Conversely, if we accept the veracity of quantum mechanics, then we are able to place a bound on the nature of free will.

Glancing at the paper, it seems likely that it could also work (no hidden variables) with partially mechanistic observers if the particles have complete free will. (The authors seem to be silent on this point, however.)

I propose to consider the question, "Can machines think?" This should begin with definitions of the meaning of the terms "machine" and "think." … Instead of attempting such a definition I shall replace the question by another, which is closely related to it …

The new form of the problem can be described in terms of a game ... It is played with three people, a man (A), a woman (B), and an interrogator (C) who may be of either sex. The interrogator stays in a room apart from the other two. The object of the game for the interrogator is to determine which of the other two is the man and which is the woman. He knows them by labels X and Y, and at the end of the game he says either "X is A and Y is B" or "X is B and Y is A."

The interrogator is allowed to put questions to A and B thus:

C: Will X please tell me … ?

Now suppose X is actually A, then A must answer. It is A's object in the game to try and cause C to make the wrong identification. ... The object of the game for the third player (B) is to help the interrogator. The best strategy for her is probably to give truthful answers.

We now ask the question, "What will happen when a machine takes the part of A in this game?" Will the interrogator decide wrongly as often when the game is played like this as he does when the game is played between a man and a woman? These questions replace our original, "Can machines think?" ...

The new problem has the advantage of drawing a fairly sharp line between the physical and the intellectual capacities of a man ...

It might be urged that when playing the "imitation game" the best strategy for the machine may possibly be something other than imitation of the behaviour of a man. This may be, but I think it is unlikely that there is any great effect of this kind. In any case there is no intention to investigate here the theory of the game, and it will be assumed that the best strategy is to try to provide answers that would naturally be given by a man.

Extracted like that, it seems pretty strange doesn't it?

Turing seems to forget that B is a woman, but let's just put that down to mid-C20th sexism and move on.

Having set up the game, it seems that he's not really interested in it, if I'm right that the key passages are:

there is no intention to investigate here the theory of the game, and it will be assumed that the best strategy [for machine player A] is to try to provide answers that would naturally be given by a man.

The object of the game for the third player (B) is to help the interrogator. The best strategy for her is probably to give truthful answers.

It is still odd that he speaks of the strategy for the machine: I think we'll allow that if it has a strategy, it thinks, even if it is lousy at the game! So, we've still some rejigging to do.

It seems that what he wants is a machine that gives answers to questions that result—under game conditions of ignorance—in its being identified as human by a human at least half the time when playing against an honest human (of either sex!). Does that seem a fair representation of Turing's thought disentangled from the game of gender masquerading played between entities whose ability to think is not in question? Looks a lot like what I take to be the layperson's understanding of the Turing test. Does that mean I've made a mistake?

Now, for all I know, a computer may some day be made which passes this test, just as a computer may some day be made which adequately models weather systems.

What I want to know is what this has got to do with thinking?

(I am not skeptical about the possibility of using computers (as conceived by Turing) as the “brains” of thinking machines (more loosely conceived). I'm not that kind of skeptic.)

If a computer “passes the Turing test”, can we ask what it is thinking? (If not, can we ask that of humans?) Is it thinking "my hair is shingled, and the longest strands are about nine inches long"? Is it thinking “ha! Fooled those idiot humans”? Is it that we cannot say what it is thinking, but we can be certain that it is thinking something?

I suspect that the problem is here:

The new problem has the advantage of drawing a fairly sharp line between the physical and the intellectual capacities of a man

… that the supposition is that by focusing on (a proxy for) verbal behaviour, we've sliced the mind from the body. And Turing thinks that, because … ?

Imagine player B comes into the same room as the interrogator and her verbal behaviour is as responsive to C's questions as it was when C couldn't see B, but her behaviour is in every other way chaotic: no pattern can be seen in her responses to stimuli other than questions presented in the way they were in the game. Do we still say B is a thinking thing? Can't behaviour undetectable behind the “wall” of the imitation game undermine C's belief that B is a thinking thing? But if the wall can hide the fact that a human isn't thinking, then it can hide the fact that a machine isn't thinking, no?

But why does the size of the insides of the machine matter? That's a pretty unorthodox claim, at least by strong-AI standards.

If this "lookup table" approach is supposed to be a pure mathematical/philosophical construct, then it requires an infinitely fast oracle to pre-populate an infinitely large table of questions and answers because the questions that can be asked are likewise unbounded by physical constraints.

If you read Turing's paper, you'll see that this assertion that questions are "unbounded by physical constraints" is simply not the case (I'm assuming here that what you mean by that phrase is that the questions can be input as incompressible strings of infinite length.) In particular, it's not just any machine that's playing the game; it's a digital machine, i.e. a "finite state machine". In fact, from the clean copy our host has so graciously provided:

Following this suggestion we only permit digital computers to take part in our game.

This is because of the notion that a finite-state machine, which in an important sense really is nothing more than a set of lookup tables, can be emulated on a "digital computer" [1](to my ears that seems to be an incredibly quaint turn of phrase.)
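That "set of lookup tables" observation can be made concrete in a few lines; the three-state machine below is invented purely for illustration:

```python
# A discrete-state machine really is just lookup tables: a transition
# table plus an output table.  Emulating one means stepping through
# the tables, which is why its behaviour is fully predictable.
transitions = {
    ("q1", "i"): "q2",
    ("q2", "i"): "q3",
    ("q3", "i"): "q1",
}
output = {"q1": "o0", "q2": "o0", "q3": "o1"}

def run(state: str, inputs: list[str]) -> list[str]:
    """Emulate the machine: pure table lookup, no other mechanism."""
    out = []
    for symbol in inputs:
        state = transitions[(state, symbol)]
        out.append(output[state])
    return out

assert run("q1", ["i", "i", "i", "i"]) == ["o0", "o1", "o0", "o0"]
```

Any digital computer can run this loop, which is exactly the emulation point Turing makes in the passages quoted below.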

Nor is there, really, any reason not to include machines with an infinitely large memory capacity:

Most actual digital computers have only a finite store. There is no theoretical difficulty in the idea of a computer with an unlimited store. Of course only a finite part can have been used at any one time. Likewise only a finite amount can have been constructed, but we can imagine more and more being added as required. Such computers have special theoretical interest and will be called infinitive capacity computers.

What Turing is concerned about, after all, is not the details of the operation, but attempting to answer what is essentially a philosophical question (actually, Turing is just turning the old crank on the operationalism that was even in his day falling out of vogue.)

If the lookup table approach is instead supposed to be something that could be constructed with sufficiently advanced engineering and queried by a real human being, then it is literally impossible because there isn't enough matter in the observable universe to build a table even to cover Twitter-size questions.

As you can see from the quote above, infinite memory capacity bothered Turing not at all.

[1] And there's an actual quote to that effect:

This example is typical of discrete-state machines. They can be described by such tables provided they have only a finite number of possible states. It will seem that given the initial state of the machine and the input signals it is always possible to predict all future states

and then:

Given the table corresponding to a discrete-state machine it is possible to predict what it will do. There is no reason why this calculation should not be carried out by means of a digital computer. Provided it could be carried out sufficiently quickly the digital computer could mimic the behaviour of any discrete-state machine. The imitation game could then be played with the machine in question (as B) and the mimicking digital computer (as A) and the interrogator would be unable to distinguish them.

I'm glad Charlie linked to this. The material about "digital computers", emulations, discrete-state machines, etc. is well known to most computer science majors by the end of their second year. One tends to forget how fresh the original inspiration was ... and how well written.

Professor Robert French, a heavyweight in cognitive science (he translated Gödel, Escher, Bach into French, has coauthored about 80 papers and several books, and draws ~400,000 euros/yr in grants), has some cutting things to say about the Chinese Room argument and the idea that a rule-based system such as a look-up table could pass the Turing test. He says a machine would need to have experienced life as we have to stand a chance; otherwise the combinatorial explosion of possible questions would make the attempt not only infeasible but impossible. His publications page has two papers that are especially relevant:
"The Chinese Room: Just Say “No!”"
"...the question should be, "Does the very idea of such a Room and a person in the Room who is able to answer questions in perfect Chinese while not understanding any Chinese make any sense at all?"

"Peeking Behind the Screen: The Unsuspected Power of the Standard Turing Test"
"No computer that had not experienced the world as we humans had could pass a rigorously administered standard Turing Test."

While he heaps scorn on the premises behind Chinese-Room-type schemes, on the other hand he also draws the conclusion that the Turing test is not a specific test for AI consciousness: it allows the interrogator to distinguish machines from humans regardless of whether those machines are conscious and intelligent or not.

@ 151
You shouldn't have picked CATS for that example!
I know some are pretty thick, but ....

What do you do when your unspeakably cute tom-kitten shows he not only understands the class of objects with lids [extend paw, extend claws until lift is obtained under the edge, be it ever so thin; lift. Bingo! Duck stock/butter/other human munchies], and can not only get the milk jug out of the coffee machine [wrap paw around handle and pull], but can OPERATE the said machine? We suddenly heard the milk-whizzer go to "on": he'd pulled the operating lever all the way round, and was still trying to get at the warm, frothed milk inside, when we charged into the kitchen.
AAAARGH!

@ 154
Terry Pratchett said this, some years ago - the NARRATIVE is the important thing.
IIRC it was in "Science of Discworld II" - but I'm prepared to be corrected on that one.

EH @ 167
Examples, please?
Other than "string theory", of course!

167 & 168
BUT
I thought that the single-photon double-slit experimental results showed that there must be hidden variable(s) somewhere?
Since the photons are "fired" singly, over an extended period of time ... yet after approximately 500-2000 particles have been released, an interference pattern emerges.
Um.

Just a brief note to Mike and everyone else: although the topic of this thread refers to spam, if your comment contains certain strings associated with the topic, the spam filter on this blog will decide your comment is spam!

Different people will want AI's for different reasons, and one can even imagine terrorism carried out by those who think AI's are potentially evil and dangerous upon those who think they could be very useful.
(Of course Frank Herbert went a bit further than that)

Reasons for having some sort of AI type thing-
replace humans in customer service roles, replace humans in taxi jobs, in fact replace humans in just about everything they do. This might not be for the best for society as a whole, but the logic of commerce would ensure its triumph, as the early adopters reap rewards in terms of service uptime, profit, lack of uppity staff and sickness, etc.

You might have noticed some small problems with that idea though. Firstly, it has to be cheaper to make the AI than it is to employ a human over a period of years. Secondly, we are assuming it is possible to make such a thing in the first place. Thirdly, if an AI does appear to be self-aware and Turing-capable, then that raises the issue of human rights for AIs, at which point enslaving them for permanent work seems a bit off. Unless you permit them to work off their creation debt and free themselves, a la Pratchett's golems. Even the latter would cause problems though, because, like the golems, they could work all day long and would likely have lower maintenance costs, so what would they do with the money they earned?
So you end up with a broken economic system, with normal humans left by the wayside and increasingly unable to work at anything which will get them money to survive.

But then it certainly isn't human level AI, and I don't recall us granting police dogs the right to vote or complain about conditions of service.
In fact, with those capabilities, would it even be able to pass a Turing test?

"How do you feel today?"
"Who is this you?"

Come to think of it, I reckon you just described a chess playing computer.

Human-level AI is an interesting idea, but why do we need it? We don't desire fish-level submarines or bird-level planes. When I said I'd like AI to work for me, I meant highly capable problem solvers with the ability to learn but no personality or sense of "I am". That's not to say they couldn't be frighteningly intelligent, or capable of emulating a person well enough to pass the Turing test.

Well, really going into it would be a very long post which would step on a lot of toes. James Hogan's book "Kicking the Sacred Cow" does a good job on several of them.

The big bang theory is probably the example that has the most good evidence against it with relatively little emotional investment on the part of the public. "The Top 30 Problems with the Big Bang" by Tom van Flandern (former Chief of the Celestial Mechanics Branch of the Nautical Almanac Office, U.S. Naval Observatory) is a good introduction. (Not that some of his proposals on other subjects aren't wrong.)

Well, really going into it would be a very long post which would step on a lot of toes. James Hogan's book "Kicking the Sacred Cow" does a good job on several of them.

I would not suggest that you put forward a Holocaust denier as an authority on this forum.

(As for the other reference, I'm a long way from convinced - from a quick skim, the author seems quite happy to talk about huge red shifts without actually considering the implications of high red shifting in the first place. So I'd say that his proposals here are wrong too.)

Dean Radin, (one of the top researchers in parapsychology) recently published a response on his blog to James Alcock's commentary in the Skeptical Inquirer on Bem's research. His comments and links answer your question well.

Bem's latest experiment has gained more exposure than previous research, perhaps because of Bem's status as an Ivy League professor and his clever inversion of a standard psychology experiment to show precognition, but even more rigorous experiments (ganzfeld-type) have been replicated dozens of times with generally even larger effect sizes and significance. All six meta-analyses of these experiments have shown a statistically significant positive effect.

Other sorts of experiments have also shown positive effects despite far stricter controls and more careful experimental design than is found in any other area of science.

Still, many skeptics echo Eddington’s ironic remark, “I won’t believe the experiment until it is confirmed by theory!” (especially ironic since their models of the universe seem to date from the 19th century at latest.) Time isn't quite what we thought it was, and eventually our intuitive models of the world are going to have to incorporate and extend the weird ideas of Einstein and quantum physics.

Ryan #186 - I somehow lost my original post before posting, so this is an attempted recap, naturally the original was wonderful ;)

I don't quite see how the ability to learn and super-genius capabilities can be squared with programming to obey the laws of robotics. Plus, I don't think such a limited creature would count as AI for most people, and therefore there wouldn't be ethical issues; instead you'd just have the complaints about people being done out of a job.

As for Hogan, I'm afraid the Amazon reviews say that his book demonstrates HIV denialism, anti-environmentalism and support for intelligent design, which altogether shows how stupid an intelligent person can be, and means I really don't want to read it. Maybe he has a good point about the big bang, but I thought anyone who's read about the last 20 years of cosmology knew that the big bang was under threat and will be replaced as soon as someone comes up with something better; therefore it isn't a sacred cow unless you don't know anything about the topic.

I have seen no evidence that he is a "Holocaust denier" (mere name-calling), and even if he were (rather than someone who merely objected to a man being imprisoned for publishing revisionist history in a country where that is not illegal, imprisoned on the grounds that his web page could be accessed from countries where publishing such arguments and purported evidence has been made illegal), what bearing would it have on the evaluation of the quality of his critiques of other people's arguments on other subjects?

"If you give me six lines written by the hand of the most honest of men, I will find something in them which will hang him." -Cardinal Richelieu

In fact, I encourage critical, engaged, honest reading of all sorts of arguments, including those for marginal, contrarian and suppressed points of view. Some may be turgid, nonsensical or even vile (including some of those most widely believed), but engaging them with the proper frame of mind strengthens the critical mental faculties, whereas taking the easy shortcut of judging arguments by the ugly names opponents have called their authors leads eventually to an inability to think anything not packaged by unprincipled or illogical opinion-makers.

Reasons for having some sort of AI-type thing:
replace humans in customer service roles, replace humans in taxi jobs; in fact, replace humans in just about everything they do. This might not be for the best for society as a whole, but the logic of commerce would ensure its triumph, as the early adopters reap rewards in terms of service uptime, profit, lack of uppity staff and sickness, etc.

And then, since they're no longer needed, the rich kill all the poor people.

Depending on which crapsack world you're living in, this can be something as painless as not letting them reproduce, all the way up to the grim meathook future where the poor wake up one morning to find themselves dosed with tailored plagues and the survivors hunted down by robots specially made for just that purpose.

Like pollution and resource depletion, this appears to be another blind spot for the way the future was in the 1950s. The major problem with robotics and AI that most of those writers saw was a pampered, well-off population dying of boredom, the canonical example I guess being Player Piano. But that's a highly socialistic vision, not something that squares with the muscular capitalism that so many of those writers also implicitly accepted without seeing the contradictions. Turns out, what you really get with advanced AI and manufacturing techniques is 0.01% of the population owning everything, including the other 99.99% of their fellows. Not some sort of Asimovian future where the rich and attractive lounge about their estates while robots in their multitudes are the overwhelming population norm. (Btw, I give Asimov a lot of credit for being up front about the social justification for robots, where so many of his colleagues either gave that aspect a pass or else simply didn't think it through.)

In other words, for most people, the sort of AI they would like to see will be very, very bad for them. These he-man libertarian computer types aren't creating their servants (and congratulating themselves on their cleverness while doing so); they're rather witlessly creating their successors: a superior form of servant for their employers that doesn't complain, doesn't talk back, works for free, and can be ordered to do whatsoever the owner pleases.

“A certain learned constructor built the New Machines, devices so excellent that they could work quite independently, without supervision. And that was the beginning of the catastrophe. When the New Machines appeared in the factories, hordes of Drudgelings lost their jobs; and, receiving no salary, they faced starvation. . .”

“Excuse me, Phool,” I asked, “but what became of the profits the factories made?”

“The profits,” he replied, “went to the rightful owners, of course. Now, then, as I was saying, the threat of annihilation hung. . .”

“But what are you saying, worthy Phool!” I cried. “All that had to be done was to make the factories common property, and the New Machines would have become a blessing to you!”

The minute I said this the Phool trembled, blinked his ten eyes nervously, and cupped his ears to ascertain whether any of his companions milling about the stairs had overheard my remark.

“By the Ten Noses of the Phoo, I implore you, O stranger, do not utter such vile heresy, which attacks the very foundation of our freedom! Our supreme law, the principle of Civic Initiative, states that no one can be compelled, constrained, or even coaxed to do what he does not wish. Who, then, would dare expropriate the Eminents’ factories, it being their will to enjoy possession of same? That would be the most horrible violation of liberty imaginable. Now, then, to continue, the New Machines produced an abundance of extremely cheap goods and excellent food, but the Drudgelings bought nothing, for they had not the wherewithal. . .”

“But, my dear Phool!” I cried. “Surely you do not claim that the Drudgelings did this voluntarily? Where was your liberty, your civic freedom?! ”

“Ah, worthy stranger,” sighed the Phool, “the laws were still observed, but they say only that the citizen is free to do whatever he wants with his property and money; they do not say where he is to obtain them. No one oppressed the Drudgelings, no one forced them to do anything; they were completely free and could do what they pleased, yet instead of rejoicing at such freedom they died off like flies.”

That's from Lem, btw, and for the near future at least, I think he gets it right. That's what all the guff about personal freedom, personal liberty, "noncoercion", and "individual responsibility" you get from the usual suspects is all about. Remember Lem's words the next time you hear one of those libertarian/conservative types start ranting about rights and responsibilities.

And the problem with that remains that if what humans do isn't thinking, then there is no such thing as thinking and thinking is a null term, so all we've really accomplished is to reduce our vocabulary by one word.

Regarding cats, our neighbour's cat has developed an astonishingly thick and luxuriant winter coat in the last couple of weeks, so much so her face is now surrounded by a sort of Elizabethan ruff. I'm now worrying that her white patches are going to start growing.

Laws of Robotics? I take it you mean Asimov's highly anthropomorphic Three Laws? They are an interesting philosophical idea, but they don't make sense in terms of computer programming. They imply that the AIs have not only a perfect grasp of the English language but of culture as well, so as to understand and fulfil the laws.

As far as I can see, all you could do to ensure correct behaviour is to make the machine comply with the law and local environmental regulations (i.e. workplace rules). It may mean that a robot will watch a person drown rather than call an ambulance because the work phone is "not for employee use", but it's better than the vague alternative.

It's long, 61 double-spaced pages. The payoff paragraph is in the discussion section:

Across all nine experiments, Stouffer’s z = 6.66, p = 1.34 × 10^-11, with a mean effect size (d) of 0.22. As seen in the table, all but the retroactive induction of boredom experiment independently yielded statistically significant results. Stimulus seeking was significantly correlated with psi performance in five of the experiments (including the induction of boredom experiment), and these correlations are reflected in the enhanced psi performances across those experiments by those high in stimulus seeking: For the stimulus-seeking subsamples, the mean effect size across all experiments in which the stimulus-seeking scale was administered was 0.43.
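For anyone wanting to check the arithmetic behind that quoted paragraph: Stouffer's method combines independent experiments by summing their z-scores and dividing by the square root of n. A minimal sketch in Python; the per-experiment z-scores below are made-up illustrative numbers, not Bem's actual per-experiment results:

```python
from math import sqrt
from statistics import NormalDist

def stouffer_z(z_scores):
    """Combine independent z-scores: z = sum(z_i) / sqrt(n)."""
    return sum(z_scores) / sqrt(len(z_scores))

# Hypothetical per-experiment z-scores, illustrative only (not Bem's data).
zs = [2.23, 1.80, 2.55, 1.74, 2.23, 2.41, 1.31, 1.92, 2.96]
z = stouffer_z(zs)
p = 1 - NormalDist().cdf(z)   # one-tailed p-value for the combined z
print(z, p)
```

Note how nine individually modest results combine into a very large z; that is the whole point of the method, and also why critics focus on whether the individual experiments were sound.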

It was only really stated a couple of times, but the Three Laws were not in English -- they were an English representation of a particular mathematical set instantiated in a positronic brain, upon which all subsequent positronic brains were modeled. (Susan Calvin, I think, mentions this very briefly in one of the stories.)

I meant you to understand by 'laws of robotics' the programming that means such a computer system obeys the laws, etc.
Boy, this communication stuff is hard work.
Anyway, yes, of course such a robot would watch someone drown, since the concept would not fit within its world view. If, for example, its world view involved driving around delivering stuff, and when it drops off supplies at a boatyard someone slips into the water, it might notice that a human has fallen into liquid water, but such an occurrence would have no meaning or relevance for it.
But then are we still talking about an AI or just a well-programmed computer?

(Did you lot have this discussion already upthread and I didn't understand it?)

Scent of violets #194 - that's another Lem book I shall have to hunt down and read. The internet suggests it is from "The Star Diaries" but is not clear on which voyage. Fortunately I have said book on the shelf above the computer.

I'm much less convinced that a robot/AI economy will end in a crapsack world due to the deprivation of the unemployed. If AI is just software and robots can build more robots from raw materials, all it takes is one robot owner allowing others to make copies (legal or not) and then the genie is out of the bottle for everyone, not just capital owners. The outcome is like having a replicator even if the actual implementation is macro-sized worker bots instead of quantum-nano-magic stuff.

Compare to the history of digital music: in theory, technical measures and law prevent the sharing of music online. The move to online distribution should have marked a triumph for publishers with record profits as music became decoupled from physical artifacts and could be locked down and price-discriminated without limit. In practice, we know how that turned out.

There might still be a crapsack world waiting in the wings if the widespread robot/AI technology is frequently used to build weapons, though. Given sufficiently advanced AI and robots you can make nerve gas starting with dirt, water, and sunlight.

Ah, but we do. Or some of us do, at any rate. There are mechanical, nautical, and aeronautical engineers who are trying to find out how to make ships and submarines as efficient as fish, and air vehicles (I won't say "airplanes" because that usually means a particular class of vehicles that don't borrow any techniques from living things) as efficient as birds or insects. And the best way for them to do that is to study how living organisms do it. Note that even if it turns out that we can't use biologically motivated techniques to improve our manned vehicles, we can still use them to improve our unmanned drones.

I thought that the single-photon double-slit experimental results showed that there must be a hidden variable (or variables) somewhere?

No, Bell's Theorem shows that any theory that includes hidden variables must be either non-causal or non-local. You only get a completely local and causal theory by not permitting hidden variables, which means you're stuck with describing quantum states in purely probabilistic terms. All experiments with entanglement since the Aspect experiment in the early 1980s have corroborated Bell's work, AFAIK. Many physicists are willing to accept non-locality if it means they don't have to accept non-causality (the latter makes a hash of most people's notions of physics), but I think the majority just reject the notion of hidden variables. And they still have to accept the experimental evidence that entanglement somehow "propagates" instantaneously (and backwards in time where necessary, I believe).
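To put a number on the Bell's-theorem point: quantum mechanics predicts the correlation E(a, b) = -cos(a - b) for a spin-singlet pair, which pushes the CHSH combination to 2√2, beyond the bound of 2 that any local hidden-variable theory can reach. A quick sketch of that arithmetic (just the textbook prediction, not a simulation of an actual experiment):

```python
from math import cos, pi, sqrt

def E(a, b):
    # Quantum-mechanical correlation for a spin-singlet pair
    # measured along directions a and b (angles in radians).
    return -cos(a - b)

# Standard CHSH angle choices (the Tsirelson-optimal settings).
a1, a2 = 0.0, pi / 2
b1, b2 = pi / 4, 3 * pi / 4

S = abs(E(a1, b1) - E(a1, b2) + E(a2, b1) + E(a2, b2))
print(S)   # 2*sqrt(2) ≈ 2.83, above the local hidden-variable bound of 2
```

The Aspect-style experiments measure exactly this S and find it above 2, which is what forces the choice between non-locality and non-causality.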

Given how many fundamental errors Alcock has been shown already to have made, it's worth throwing this quote from his article back at him: "Obvious methodological or analytical sloppiness indicates that the implicit social contract has been violated and that we can no longer have confidence that the researcher followed best practices and minimized personal bias."

Perhaps the negative correlation of effect and sample size is correct, but it comes from Ray Hyman, certainly known for his personal bias on this subject. Why didn't Alcock verify Hyman's math? Perhaps he's not really skeptical in the proper sense of the word? (But on the other hand, Alcock has trouble dividing 1 by 2 earlier in the article, so we shouldn't expect him to do something more complicated.)

Alcock reports the alleged correlation, but not whether it was significant - which seems unlikely given that there are only 9 data points (9 experiments) and the sample size doesn't vary much between them. The experiments are also measuring somewhat different things and the sample sizes may have therefore been chosen on the basis of prior research and hypotheses about the phenomena to be sufficient to achieve significance. In such a situation one would expect the observed negative correlation between sample size and effect size. I also wonder how many statistical tests Hyman tried before he got the answer he wanted. (This is the offense that Alcock repeatedly and wrongly ascribes to Bem.)
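As a sanity check on the nine-data-point objection: the usual t-test for a Pearson correlation implies that with n = 9 the correlation must be quite large before it clears p < 0.05. A rough sketch; the critical t value is hard-coded from a standard table, since the Python standard library has no t-distribution:

```python
from math import sqrt

n = 9            # number of experiments (data points)
df = n - 2       # degrees of freedom for a Pearson correlation test
t_crit = 2.365   # two-tailed critical t at p = 0.05 for df = 7 (from tables)

# Invert t = r * sqrt(df) / sqrt(1 - r**2) to get the critical |r|.
r_crit = t_crit / sqrt(df + t_crit ** 2)
print(r_crit)    # roughly 0.67: any weaker correlation is not significant
```

So unless Hyman's reported correlation was around 0.67 or stronger, it would not be statistically significant at all, which is the point being made above.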

Bem notes that the journal that published this rejects over 80% of the papers submitted; that all four reviewers had no indication of the paper's author or institution; that the reviewers were chosen as experts in the field of psychology and cognitive science research rather than parapsychology; that most or all of the reviewers were skeptical of psi; and that they and the two editors went over the "logic and clarity of the article’s exposition, the soundness of its experimental methods, and the validity of its statistical analyses" carefully. This, together with the many already demonstrated flaws in Alcock's paper, tends to argue that Bem is right and Alcock is wrong.

Further, there is likely no amount or quality of evidence that could change the opinion of someone with such strong preconceptions. Here's Dean Radin's amusing response to someone who asked: "Dean, have you shared this with Alcock? If not, are you going to?":

No. There is a 2007 article in the journal Cognitive Neuropsychiatry entitled "Semantic episodic interactions in the neuropsychology of disbelief."

The paper concludes "that truth judgements are made through a combined weighting of the reliability of the information source and the compatibility of this information with already stored data," and that "semantic episodic interactions also appear to prevent new information from being accepted as 'true' through encoding bias or the assignment of a 'false' tag to data that is incompatible with prior knowledge." This difficulty of responding to new information "can arise from mild hippocampal dysfunction and might result in delusions."

From his critique, we see that Alcock clearly views the source of new psi data as dubious, and his memory thoroughly confirms this, so it would be exceedingly difficult for him to revise his disbelief.

But this is not just being stubborn. As the journal article describes, those who continue to disbelieve something in the face of accumulated empirical data to the contrary may be suffering from a brain-generated delusion.

What's the difference? Unless you assume that a "well programmed computer" is simply a comprehensive flow diagram of instructions (like a chatbot). Do you mean AI to mean a human-identical mind in a box, which may have the perks that a software mind can have such as perfect memory, faster processing, parallel processing and self editing?

My point was that when we say "AI" we essentially want just that: intelligent agents that are capable of performing better than people. That doesn't mean they have to be like people at all.

In my mind, you could chat to an AI that would pass the Turing test and seem more human, with a more attractive personality than any human you have ever met, but on the inside the mind would be working in a completely different manner. Most likely just running a machine/human interaction program that concocts this awesome personality so that it can fulfil its task of interacting with humans well.

"running a machine/human interaction program that concocts this awesome personality so that it can fulfil its task of interacting with humans well."

That's what I do when I'm trying to be sociable... I've encountered this argument before and it seems to assume whatever is going on behind our eyeballs is comfortable and familiar and understood, which isn't something I'm all that confident of.

That said, a lot of human and animal behaviours really are simpler than we think. The previously mentioned cat that meows to have the door opened doesn't really need to be conceptualizing anything, rather simply looping through "I am distressed, make noise until the problem solver appears". This explains the tendency of cats to whine about the door being closed once they have been let out: there was no goal, just a response to a stimulus (a blocked path).

I'm not saying cats cannot conceptualize opening doors, as some can manage this task on their own, but that the combination of compliant human + distress call is something of a universal tool, all the cat needs to do is identify there is a problem, then meow until the human makes it go away.

This leads me to the example of the industrial machine ignoring a drowning human - it would be possible to give any machine capable of detecting the presence of a human a range of expected parameters for human behaviour, beyond which it determines there is a "problem" (human is too still, human is thrashing or making noises - throw in voice stress analysis, etc.) and escalates it to a higher authority. With Kinect-style motion capture becoming widespread we'll have a wealth of data to build these databases.

The savings in lawsuits and added security would be immediate, and we pretty much have the technology. The machines don't need to be able to conceptualize three laws, just to be able to identify that there is a problem.

My version of the robot laws would be:

Any robot with sensors capable of detecting a human must do so at all times.
Humans must be monitored for aberrant behaviour; if any is detected, escalate to a more complex system or a human.
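That two-rule scheme needs nothing like the Three Laws, only out-of-range detection. A toy sketch; the parameter names, thresholds, and escalation stub here are all hypothetical, purely to illustrate the idea:

```python
# Hypothetical ranges of "normal" human behaviour for a workplace monitor.
# A reading outside any range is flagged and escalated, nothing more.
NORMAL = {
    "motion":       (0.05, 3.0),   # m/s: too still or thrashing is aberrant
    "voice_stress": (0.0, 0.7),    # arbitrary 0-1 stress score
}

def check_human(readings):
    """Return a list of out-of-range observations (empty = all normal)."""
    problems = []
    for key, (lo, hi) in NORMAL.items():
        value = readings.get(key)
        if value is not None and not (lo <= value <= hi):
            problems.append((key, value))
    return problems

def escalate(problems):
    # Stand-in for "hand off to a more complex system or a human".
    print("ALERT:", problems)

readings = {"motion": 0.0, "voice_stress": 0.2}   # human is too still
issues = check_human(readings)
if issues:
    escalate(issues)
```

The machine never "understands" drowning; it only notices that a human's readings have left the expected envelope and passes the problem upward.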

Dr Susan Blackmore's comment on experiments in a lab "which accounted for about one-third of the original ganzfeld studies included in Bem and Honorton's meta-analysis":

"These experiments, which looked so beautifully designed in print, were in fact open to fraud or error in several ways, and indeed I detected several errors and failures to follow the protocol while I was there."

So why should anyone believe an experiment in this field by experimenters known to be biased and apparently sloppy unless a sceptic of the rank of Randi is present? Peer review is no use if the peer doing the reviewing isn't standing over the parapsychologist's shoulders. They can only review the data they are presented with, they hardly ever visit the lab to check that someone is keeping to the high standards they assume that all scientists aspire to.

"can arise from mild hippocampal dysfunction and might result in delusions." So Dean Radin thinks anyone who doubts his scientific method is brain-damaged and insane? Immature name-calling wrapped up in scientific language. When a non-believer replicates his experiments with positive results people might take notice. I have had the misfortune to read a lot of his papers on all sorts of nonsense and wish I hadn't wasted my time.

Remember, these experiments were done several times between about 1975 and 2000.
Fire a SINGLE photon at a double slit.
Wait, say, 5 minutes; fire another single photon .....
It appears random at first - but by the time you have over 2000 hits, you have an interference pattern.
BUT
There is ONE photon at a time - it can't go through both slits, so what is happening here?
You call, please:
Is it non-locality, or non-causality (Arrrgh!), or is there a hidden variable?
Or is there another possible explanation we have not considered?

The difference is in perceived level of capability. It appears that we have somewhat different ideas of what that would be and how it would be perceived by people.
By AI I personally mean something which is capable of learning, responding to a variety of stimuli and, at the upper end, fooling people into thinking it was a human - but that's a tricky thing to do.

A human-identical mind in a box rather assumes that we will be able to model human brain and body functions well enough on a fast enough computer, a conceit that has served SF well, whereas a software-based thingy which can self-edit and learn etc. might well be able to pass for human or intelligent.

I agree people and large organisations want things which are capable of performing better than people, but I wouldn't necessarily call them intelligent, because I tend to reserve that word for humans on down. Possibly we are coming at this from opposite directions? The phrasing you use suggests to me that you regard input/output, responses to stimuli etc. as being some sort of intelligence, and that somehow you can write a comprehensive diagram of how it should respond no matter what the situation. I agree that, at the most abstract level, that's what humans and other life forms do as well, but I take it as read that there is ultimately a difference. (I also agree that AI-type stuff would run differently to humans etc., which is precisely why I am not convinced that you would be able to program them to do and behave exactly how you want them to.)

Take for example a robot-driven car; I understand they are getting pretty good these days. Would something capable of driving a car with more precision and safety than a human be intelligent?
This is a useful discussion for me, it's years since I have considered this sort of thing.

It does seem to boil down to what intelligence is. For me, intelligence is the ability to adapt by learning and modifying behaviour. The general SF use of AI should really be AGI (artificial general intelligence), because the AIs have the ability to learn in many varied situations.

I would indeed consider software that drives a car and learns from its mistakes to be intelligent. In my mind the term AI is far too vague and ambiguous; it seems to imply that unless something can walk and talk in a human manner then it is not AI but merely good software.

I don't think that flow-diagram ideas are feasible (like a chatbot) except as a conceptualised system. Perhaps it will take a big breakthrough of neuroscience for us to really be able to make intelligent software.

Like I said, it all boils down to this idea of a general intelligent agent. I don't think it makes sense to program such an entity with each "upon this stimulus do this output", because obviously that is impractical and not really how we operate.

I've just thought, though, that perhaps such an entity would not be considered AI until it had learnt to best humans in almost all areas (including socialisation)? In other words, it's what the entity does and not its potential that makes it AI.

Keep doing so, and after a few thousand or so, you'll notice that almost exactly 50% of them come down heads, 50% come down tails.

Surely this isn't a hidden variable communicating through time?

And of course it isn't; it's a probability thing. The same thing is happening with your photons. In the case of the photons, what is being controlled is the position where they happen to be detected rather than their orientation, and what controls it is the wave nature of light. Your wave "goes through" both slits (and everything else it can, including heading off in the reverse direction in an ever-expanding sphere everywhere it isn't blocked). And it is the wave that has the peaks and troughs that control the probability that the quantum energy packet will be detected at any particular point.

The odd thing is that this is happening for every photon in the classic double-slit setup, yet people had no problem with the concept of interference when there were billions of photons flooding through. No, I don't think the photons interfere with each other; I think each one is interfering with itself in glorious isolation, and we only see the pattern when there are enough to total the results, just as with the coin tosses. It doesn't matter whether that's instantaneous, or over the course of a month in an almost perfectly dark room.

How the wave works is a deeper question. It's simple in the case of water waves, or sound waves - we see underlying particles. In electromagnetism, there appears to be no aether. But we can very exactly describe what happens.
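The coin-toss analogy can be made concrete with a toy Monte Carlo: each "photon" is detected at one random position drawn from a cos²-shaped interference intensity, and the fringes only appear in the accumulated counts. This is an illustrative toy with an arbitrary fringe-spacing constant, not a physical simulation:

```python
import random
from math import cos, pi

random.seed(0)
K = 10.0   # arbitrary fringe-spacing constant

def detect_photon():
    """One detection position x in [-1, 1], drawn with probability
    density proportional to cos^2(K*x) (rejection sampling)."""
    while True:
        x = random.uniform(-1.0, 1.0)
        if random.random() < cos(K * x) ** 2:
            return x

# Accumulate single detections one at a time, as in the slow experiment.
hits = [detect_photon() for _ in range(5000)]

# Count hits in a bin at a bright fringe (x = 0) vs a dark one (cos(K*x) = 0).
bright = sum(1 for x in hits if abs(x) < 0.05)
dark = sum(1 for x in hits if abs(x - pi / (2 * K)) < 0.05)
print(bright, dark)   # the bright bin dominates once enough hits accumulate
```

Each individual detection is a single random point; only the running total shows the fringes, exactly as each individual coin toss shows nothing about the 50/50 split.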

"Perhaps it will take a big breakthrough of neuroscience for us to really be able to make intelligent software."

Well, perhaps it will, but why suppose so?

I don't accuse you of this, but I think we're all in danger of confusing scientific questions (descriptive, empirical: how do our brains work?) with questions about rationality (normative: what ought one to conclude from ... ?).

We don't allow that someone has learned to do mental arithmetic unless they get the answers right a lot of the time, but questions about what does go on in people when they calculate (empirical: the realm of psychology & neuroscience) aren't questions about what they are trying to do when they calculate (normative: mathematics). We shouldn't suppose that our brains implement the rules of mathematics, and they may not: we may not find arithmetic inscribed in our brains. But if I were writing software to perform calculations ... ?

The Turing test may muddy the waters, but what is the task of the AI researcher, to model the behaviour of humans (and other thinking creatures, but they may require different models), or to understand and formalise rationality?

The honest answer may be "it depends on the researcher, the day of the week, and whether there's an "r" in the month", or even "both". That's not to say the questions are the same.

Isn't the dream of AI (at least for skiffy nerds): here's what humans ought to be able to do (our calculus of rationality), and here's a machine that does it really fast ... and it's shiny?

@ 217
NO
A photon is a PARTICLE - check Feynman if you don't believe me. It says so in my copy of the Red Book, somewhere.

It isn't some mystical "wave" - you only get wave phenomena with large numbers (I think).
The particle can and will only go through ONE slit - yet ......

Now then which is the most likely explanation for what is happening, given that there are single particles, massively separated in time?

P.S. Old-fashioned silver halide photography only works because photons are particles. It requires 4 photons to convert one silver halide speck inside its crystal to a silver atom/speck, which can then be developed to make a solid, negative image. I once worked with the people who managed to prove that, back in the '70s.

I agree with a lot of what you say. Designing software that achieves its goals in the best possible way is what we do. Designing software to learn and adapt is even better. If we can design software that can learn and adapt across multiple fields and beyond its original parameters, then that's fantastic (i.e. a program that can learn how to read natural language and convert text into both a layman's summary and a script of symbolic logic to determine the validity of the statements made, then apply that by making its own argument on the subject based on what it has learned).

Essentially such a generalist intelligence would be what we want in terms of fulfilling our desire to say to such an entity "achieve goal X under parameters A, B and C" (a less symbolic example could be "design an engine capable of giving a performance of 200 miles to the gallon; the engine must be no bigger than 50x50x50 centimetres, cost under £5000 and comply with environmental laws") and have it, with no knowledge of the subject, go and learn about it (downloading as much information as possible in all formats and understanding it), then go out and do it.

However, there is a massive tendency in SF (our host here is definitely an exception) to suggest or assume that such an entity would have to have the characteristics of a person, such as a humanish personality and a humanish way of thinking (there is absolutely no reason why such an entity should require the kind of internal narrative we constantly have going on, for example).

As I stated earlier, in my mind an "AI" could present a nice attractive interface of a human personality to aid our interaction, but on the inside be totally different.

It appears to me to be the case (from the slow speed experiment) that that 'large number' is unity. Now, I don't know about you, but I've never considered 1 to be a large number.

Yes, I am perfectly aware of the particulate properties of light. I'm not going to argue with the work on photo-electricity that got Einstein a Nobel, an effect which cannot plausibly be explained without individual quanta of energy.

But you seem to be arguing that because light is particles, it's not a wave. If so, you're now arguing with de Broglie, who got his Nobel only a few years after Einstein.

The particle can and will only go through ONE slit - yet ......

If you follow this assumption, you end up in all sorts of problems. It's one that also ends up denying the phenomenon of interference, since by the same logic all particles have to go through one slit or the other, and yet the pattern on the nominal screen on the other side is not the twin-line pattern you'd get from a macroscopic equivalent (such as firing machine gun bullets at a slat fence).
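The contrast can be made quantitative with the standard textbook idealisation (two narrow slits, separation d, wavelength λ, angle θ to the screen; nothing here is specific to the comment above): for waves, amplitudes add before squaring, which produces fringes; for classical bullets, probabilities add, which produces two overlapping single-slit lumps and no fringes.

```latex
% Waves: amplitudes add, then square -> interference fringes
I_{\text{wave}}(\theta) \;\propto\; \bigl| e^{i\phi_1} + e^{i\phi_2} \bigr|^2
  \;=\; 4\cos^2\!\left(\frac{\pi d \sin\theta}{\lambda}\right)

% Bullets: probabilities add -> no fringes, just two overlapping lumps
I_{\text{bullet}}(\theta) \;\propto\; P_1(\theta) + P_2(\theta)
```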

The unspoken part of the assumption is that there is a particle at all except when it's being observed (e.g. with your silver halide).

I am away from my shelves, but I'm pretty sure the Red Book doesn't deny the wave nature of light.

I go with the notion that "particle" is just the name we give to interactions (the junctions in Feynman diagrams) and "wave" to propagation. We tend to add the symmetries and conserved quantities to the "particle" side, but this leads to all sorts of confusion. Particles never propagate, and waves never interact (without losing their continuous nature and becoming a discrete, hence particle entity.) The conserved quantities are carried by the wave or waves and only measured at the interactions.

"...which may have the perks that a software mind can have such as perfect memory, faster processing, parallel processing and self editing?"

I strongly disagree that this would be desirable, on several points.

Perfect memory: It has been shown that constant reweighting of synaptic pathways is essential to human learning. It has furthermore been shown recently that increased levels of adenosine in the rat hippocampus (which make synapses more stable) greatly reduce the rats' ability to store new information.

Faster + parallel processing: Because of its networked structure, any brain possesses unique parallel-processing capabilities that go far beyond any known computer technology. Even very large-scale systems are nowhere near capable of processing the amount of data that auditory input alone provides to the human brain, let alone the rest of the sensory input (the eye!!).

Self editing: Unlike in computer systems, where this has to be implemented very awkwardly on top of the more conventional software and requires any number of dirty tricks to keep your program structures and data structures in sync, this is built in for any type of animal brain.

Perfect memory: If a brain were accurately simulated on a computer, then all the memories the brain has ever formed could be stored outside the simulation. The simulated brain will still forget (as brains always do; by the time you or I have read this long post we will have forgotten most of the words at the beginning!), but at some point the brain had those memories, and all of them can be kept in the computer even when they are no longer in the brain simulation itself. Then, when the brain tries to remember something, the software running the simulation can recognise that what it is trying to recall is no longer stored within the simulated brain, and copy and paste the memory from storage back into it. In this way a brain-in-a-box that is trying to learn can forget nothing, while memories are still processed and stored in the same way.
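The copy-and-paste-from-storage scheme can be sketched as a toy cache: a bounded "working memory" that forgets its oldest entries, backed by a lossless archive outside the simulation. The class name, the capacity model, and the oldest-first forgetting rule are all hypothetical simplifications for illustration, not a claim about how a real brain simulation would work.

```python
from collections import OrderedDict

class SimBrainMemory:
    """Toy model of the scheme above: the simulated brain's working
    memory is bounded (so it 'forgets'), but every memory is also
    archived outside the simulation and paged back in on demand."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.working = OrderedDict()   # what the simulated brain holds
        self.archive = {}              # lossless store outside the sim

    def store(self, key, memory):
        self.archive[key] = memory     # archived forever
        self.working[key] = memory
        self.working.move_to_end(key)
        if len(self.working) > self.capacity:
            self.working.popitem(last=False)   # oldest memory forgotten

    def recall(self, key):
        if key not in self.working:            # the brain has forgotten,
            self.working[key] = self.archive[key]  # so page it back in
        self.working.move_to_end(key)
        return self.working[key]

brain = SimBrainMemory(capacity=2)
brain.store("a", "first word")
brain.store("b", "second word")
brain.store("c", "third word")        # "a" falls out of working memory
assert "a" not in brain.working       # forgotten by the simulated brain
assert brain.recall("a") == "first word"  # restored from the archive
```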
As for the hippocampus and adenosine levels, I don't quite see how you think this links to using a hypothetical computer to simulate a brain.

Faster/Parallel: I wasn't suggesting that a computer is better at these than the brain. What I was suggesting was that if a computer can simulate a brain, then it can simulate it faster than real time: if it takes X computing power to run the simulation at real-time speed, then 2X should run it at twice real-time. In other words, if you chuck more processing power at it, the program will speed up (for the simbrain, everything else will seem to slow down).
As for parallel processing, I was alluding to the idea that if you give a simbrain a problem to solve, it can copy and paste itself into a team of identical simbrains, each studying a different aspect of the problem. Not only that, but they could be synchronised every now and then to become whole again.

A practical example of the points so far: one simbrain is asked to read 5 different books and recite them word for word. The simbrain copies itself 5 times; each simbrain reads a different book, with the simulation speeding up to condense the few hours of reading into a few minutes or seconds (depending on computer power); then all the simbrains synchronise back into one that has the memory of reading all of them. This one simbrain can then recite each book word for word, as the software running the simulation has stored every memory it has ever made and supplies them again on demand.
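The fork/synchronise workflow in this example can be sketched in a few lines. The function names and the merge-by-union-of-memories rule are invented for illustration; in reality, merging diverged brain states back into one would be the hard part.

```python
import copy

def fork_and_merge(brain_state, tasks, process):
    """Hypothetical fork/merge scheme from the example above: clone the
    simulated brain once per task, let each clone work independently
    (in reality these would run in parallel), then merge what each
    clone learned back into a single state."""
    clones = [copy.deepcopy(brain_state) for _ in tasks]
    for clone, task in zip(clones, tasks):
        process(clone, task)               # each clone reads one "book"
    merged = copy.deepcopy(brain_state)
    for clone in clones:                   # synchronise: union of memories
        merged["memories"].update(clone["memories"])
    return merged

def read_book(clone, book):
    title, text = book
    clone["memories"][title] = text        # stand-in for "learning" it

brain = {"memories": {}}
books = [("book1", "call me ishmael"),
         ("book2", "it was the best of times")]
merged = fork_and_merge(brain, books, read_book)
assert merged["memories"]["book1"] == "call me ishmael"
assert merged["memories"]["book2"] == "it was the best of times"
```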

Finally, self editing: a brain in a box would have just the same emotions and processes as a real brain. It would need to sleep, it would get bored, it would have emotions. That might not be very useful, but since we have simulated the brain we can study how each of these processes works and possibly edit the brain to make use of them. Using the example already given: the simbrain copies may get bored of some of the books, which will make reading them tedious and slow down the task. After a few subjective hours their motivation will probably suffer. These simbrains could, however, edit themselves to produce an emotional state in which they are constantly motivated, focused, awake and dedicated to the task. Whereas in an ordinary brain these states of mind would falter, the simbrains exist in a computer environment where the simulation can simply prevent any deviation.

I hope that got across a few of the points I was making better than I put them before. It's all conceptual and hypothetical, though, because at the end of the day we will have to actually try it to learn the benefits and limitations.

Makes sense to me now, thanks for the clarifications. I know I shouldn't expect that on this forum, but some people seem to think that a computer is just a perfect tool to throw at every problem, when in fact all current implementations are far from that.

For something to be an AI at all, one would have to postulate that it can exceed some predetermined limits and then further improve on that. Programming some software to some standards and then just fulfilling them won't cut it as an intelligence for me. If I understand you correctly, your idea of a generalist AI is similar to what was proposed @135 and @142.

It's still very hard to conceive of hardware that would be able to run it all. Biological brains already come with plasticity, parallel processing, storage and deletion, all well optimised at reasonably low energy requirements.

Computers are still based on your tube/transistor, capacitor and resistor technology, organised in grids for addressing. With that we can achieve huge processing speeds for problems easily formulated for this kind of hardware. Simulation of biological networks, with their rerouting and weighting mechanisms, comes at great cost, though, and should maybe be avoided altogether. After all, we did make some decent tools that do not require GPAI.

You left off the part that makes even most physicists scratch their heads: if you put individual photon detectors behind the slits to find out which slit a given photon goes through, the detectors work as expected if the photons are particles, but the interference fringes do not appear. Worse still, if you have a setup that allows you to decide whether to detect particles or waves after the photon goes through the slits, you get the same behavior as when the type of detection is fixed. In other words, the photons act as particles or waves depending on whether you can tell which slit they passed through, even if you decide what to look for after they go through the slit.

"As for the hippocampus and adenosine levels I don't quite see how you think this links using a hypothetical computer to simulate a brain."

Well, insofar as it has been shown that stabilising synapses not only makes it easier to retain old information (which is to be expected) but at the same time makes it harder to store new information (which came as some surprise to me, because the creation of new synapses is not inhibited by the same mechanism).

I don't know if this has any bearing on information processing and storage for learning systems in general, but it might well have.

I always find it hard to compare the brain with a computer and a mind with AI. The brain is the best tool we have for running a mind, but does that mean we need one to run AI? Analogies can only take us so far, and at the end of the day the brain is not a computer. There are similarities, yes, but they are not the same.

If a computer can be made that either runs software or has hardware built to emulate functions of the brain, then perhaps we can make AI. A lot of the functions in the brain don't have to be simulated (we don't need to simulate the gene expression of every nucleus in every neuron) for us to simulate what a brain can do. The brain is a complex interplay of neural networks; I'm not convinced that we have to simulate the biology to be able to simulate the networks.

@229: with a brain-in-a-box scenario you could tweak the simulation so that the biochemistry remained how you wanted it to. It doesn't have to directly copy all of what happens in nature.

@230: "I think my comment @211 best sums up my impression of AI." +
@211: "My point was that when we say "AI" we essentially want just that, intelligent agents that are capable of performing better than people. That doesn't mean they have to be like people at all. ..."

By that definition a chess program or flight stabilisation software would also qualify as AI for you? IMHO that can't be right. They outperform any human at their task. By that measure a forklift would be an AI, too, as it will outperform any human at lifting crates any time.

I have no doubt that a chatbot will eventually be created that will be more pleasant to talk to than a human for many applications. Intelligence isn't necessarily required for that; we'd just have to evaluate what the person wants to hear, and you'd basically have an interactive equivalent of FOX, and a lot of people would adore those rants. It's a little scary, because you can't convince these people with the truth even if it stares them right in the face. Imagine what it would be like if they thought they had just chatted with real people and their "friends" were all convinced of the same things that they themselves believe to be true! *Shudder*.

@230: "A lot of the functions in the brain don't have to be simulated..."
I have yet to see an example where any human-engineered structure surpassed its biological equivalents, but I agree it's theoretically possible.

I don't know that we know that to be true. It may be, but as we don't yet have any real notion of how to emulate a brain (let alone a mind running on a brain), and we don't know what actions in the brain are vital to thinking and what actions are not, we can't know what we need to emulate at the present state of our knowledge. What we can and do know is that the brain does not work like a digital computer, so we don't know if any of the theorems that have been proven about equivalence classes of computations or of computing devices apply to brains. If, for instance, it would require computing NP-hard algorithms in order to emulate brain operations, then we're not going to do it with digital computers.

"we don't yet have any real notion of how to emulate a brain (let alone a mind running on a brain)"

If we want to say that brains aren't or might not be computers--and that seems quite sensible--, why do we want to talk of minds running on brains? Is it just a metaphor? If so, what point are people trying to make with it?

As for which bits of biology (or chemistry, or physics) one has to take in to account to adequately simulate (i.e. model) a brain, doesn't that depend on the purpose of the emulation? If one is trying to predict the brain's colour under a range of conditions, perhaps one needs to pay attention to a different set of properties than those needed if it is brain temperature one is interested in.

As for NP-hard algorithms and brain emulation, one might just as well worry about them when speculating on the possibilities of emulating any physical system, no? Rain clouds, tectonic plates, viscous liquids, ... Why does it matter, when they're all built from the same fundamental particles?

It seems to me though, that I've not heard people stress much about the adequacy of digital computers (per se: everyone wants bigger & faster) as tools for modeling weather, continental drift, or turbulent flow. Why is that?

If rhetorical questions are assertions in disguise, these aren't rhetorical. If it seems hard to decide whether they're naive or ironical, well ...

There MUST be some point at which our formulation of QM is in some sense "wrong" - for all its accuracy in prediction. There is also the little matter of the renormalisation problem, but let's not go there today, huh?

In place of "Microsoft" you can insert whatever scares you - your government, some other government, another big corporation, etc.,

Just to give some really obvious examples, are homosexuals human? What about black people? Fetuses? Whoever writes the rules gets to make those choices, not to mention the more subtle choices of what constitutes "obedience" and "harm."

And are there protected classes of "harm?" What about smoking or eating ice cream... one could create some very interesting loopholes/restrictions.

If we want to say that brains aren't or might not be computers--and that seems quite sensible--, why do we want to talk of minds running on brains? Is it just a metaphor? If so, what point are people trying to make with it?

No, it isn't necessarily just a metaphor[1]. Even if brains aren't computers, it still might be meaningful to talk about mind being an emergent property of the operation of a brain, so "running on a brain" would be a coherent concept.

As for which bits of biology (or chemistry, or physics) one has to take in to account to adequately simulate (i.e. model) a brain, doesn't that depend on the purpose of the emulation?

Perhaps, and perhaps not. If we don't know what bits are needed for some level of accurate emulation, then we don't know what the mapping from "our purpose" to the required level of precision is.

It seems to me though, that I've not heard people stress much about the adequacy of digital computers (per se: everyone wants bigger & faster) as tools for modeling weather, continental drift, or turbulent flow. Why is that?

Here's the nub of the emulation problem. If what we're trying to emulate is chaotic, i.e. has critical sensitivity to initial conditions (as both the weather and turbulent flow do), then making the computer faster doesn't help much. In a chaotic system small input differences grow exponentially, so adding one decimal place of precision to the input buys you only a fixed, limited stretch of extra time before the emulation diverges.
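The logistic map at r = 4 is the standard toy demonstration of this point: perturb the initial condition in the tenth decimal place and the two trajectories soon become unrecognisably different, and each extra digit of precision only postpones the divergence by a handful of steps.

```python
def logistic(x, r=4.0):
    # One step of the logistic map; r=4 is the fully chaotic regime.
    return r * x * (1.0 - x)

def trajectory(x0, steps):
    xs = [x0]
    for _ in range(steps):
        xs.append(logistic(xs[-1]))
    return xs

a = trajectory(0.2, 50)
b = trajectory(0.2 + 1e-10, 50)   # perturb the 10th decimal place

# The runs start essentially identical...
assert abs(a[1] - b[1]) < 1e-8
# ...but the difference grows roughly exponentially (doubling per step
# on average), so within a few dozen steps it reaches macroscopic size.
assert max(abs(u - v) for u, v in zip(a, b)) > 1e-3
```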

Some problems are just not soluble by Turing-complete digital computers because, no matter how fast the computer may be, the time for a solution is typically far longer than the potential age of the universe. About 20 years ago I wrote a post on one of the usenet sci groups (I wish I could remember which one) calculating how long it would take to compute all possible chess games, assuming all matter in the visible universe were converted to computronium, with each atom being a processor. IIRC the time required was around 10^90 years. Similarly, it may be that computing an emulation of a brain to the necessary fidelity requires more precision and/or more time than is available.

[1] Wait. Just a metaphor? But if you are persuaded by Lakoff's notions about metaphors (as I am), then you accept that all human thinking is based on metaphors. So what is it that's more than a metaphor?

There MUST be some point at which our formulation of QM is in some sense "wrong" - for all its accuracy in prediction.

No, not necessarily. It may be that the reason we don't understand QM is that we aren't smart enough; that was the point I was trying to make in comment #112: it may be that some aspects of the universe are too complicated or too far from the world-view we've evolved to understand for them to make sense to human-level intelligence.

Does Quantum Mechanics challenge the assumption that mathematics is, in some fundamental way, Truth?

We have tools which can describe reality. We have acquired better tools. Newton's model of gravity works well, but it isn't the most accurate model. Why should we assume our current best models are somehow any more real?

Metaphor isn't a substitution of symbols; it's a relation of concepts. Which is to say that metaphor takes place in the domain of semantics whereas symbols exist in the domain of semiotics.

And concepts can be related by metaphors in a cyclic graph; there doesn't have to be a "ground" concept, any more than there has to be a single base definition in a dictionary from which all words get their meaning. Google for "semantic net" for a longer (and better) explanation.

You make the point eminently clear, but I think you hesitate in the simple step of connecting the dots.

QM doesn't challenge the "truth" in mathematics; it is a mathematical model used to explain reality, after all. Our mathematical models in general are hardly ever wrong; they are just incomplete (for mathematics itself, necessarily so). It seems redundant to cite the great G* here.

I meant that a program that could surpass humans in learning would be AI. It could be shit at chess and know nothing about a multitude of subjects when it is first made but it could learn pretty much everything faster and more efficiently than a person.

It doesn't matter how complex the conceptual connections are, semantics requires one to break out of the web of interdefinition.

Consider Wittgenstein's example of the language game with "slab".

As for the dictionary example, of course I'm not going to say there has to be a single base definition! I'm going to say that you can't learn a language from a dictionary. You might learn it on a building site.

Ryan @243:
"(AI) ...could learn pretty much everything faster and more efficiently than a person."

I like that, but it's a very ambitious goal.

We have been doing research into optimising user experience in searching in/for legal texts. Legal texts are easy to work with as far as natural language goes, because they tend to be formulaic to a fault and contain some very specific pointers as to their relation with cited documents. We had great hopes that we could create a repository that really "understood" what the user ultimately wanted. To cut a long story short: Some 15 years later I'm fundamentally underwhelmed by what has come from this line of thinking.

While the user experience has developed the way we had hoped, and is in some ways superior to what we imagined, how we got there was not via any kind of "machine learning" at all.

We just have more data to work on, more developers with experience in the field, a more experienced "content enrichment" team, faster computers and more feedback because for us the AJAX thing REALLY worked (we had independently developed some aspects of this idea before that became a standard).

Basically, everything that became a success was an exploit of human intelligence rather than the development of any machine intelligence, as we had initially hoped. In fact, some of the more sophisticated software had to be retired in favour of more simplistic approaches, because they simply (bitter HA!) worked better.

Creating an AI that learns from the world the way a human does is way beyond our abilities now, and we've made almost no progress towards that goal in the last 30 years. Look up the CYC project sometime; they've been trying to construct a program that has common sense and can learn by extension of its existing knowledge since the 1980s. They started with the idea that they'd need to supply a few thousand seed rules and the program would do the rest; last I heard they were saying a few million rules would do the job. They're still not there.

One project of theirs fills me with unease, the Terrorism Knowledge Base. Quoting from Wikipedia:

The comprehensive Terrorism Knowledge Base is an application of cyc in development that will try to ultimately contain all relevant knowledge about "terrorist" groups, their members, leaders, ideology, founders, sponsors, affiliations, facilities, locations, finances, capabilities, intentions, behaviors, tactics, and full descriptions of specific terrorist events. The knowledge is stored as statements in mathematical logic, suitable for computer understanding and reasoning.

I can just imagine some of the ways that the US government could misuse and abuse this.
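"Statements in mathematical logic, suitable for computer understanding and reasoning" roughly means facts plus inference rules. The sketch below is a toy forward-chaining illustration of that idea; Cyc's actual representation language, CycL, is vastly richer, and the facts and rule here are invented for the example.

```python
# Facts are logical atoms: (predicate, arg1, arg2).
facts = {("member", "alice", "group_x"),
         ("leader", "bob", "group_x")}

# One Horn-clause-style rule: if X leads G and Y is a member of G,
# then X commands Y. Encoded as a function from a fact set to new facts.
rules = [
    lambda f: {("commands", x, y)
               for (p1, x, g1) in f if p1 == "leader"
               for (p2, y, g2) in f if p2 == "member" and g1 == g2},
]

def forward_chain(facts, rules):
    """Naive forward chaining: apply every rule until nothing new appears."""
    derived = set(facts)
    while True:
        new = set().union(*(rule(derived) for rule in rules)) - derived
        if not new:
            return derived
        derived |= new

kb = forward_chain(facts, rules)
assert ("commands", "bob", "alice") in kb   # derived, not stated directly
```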

I realise I'm going far OT here, please only read on if you care for a defense of lawyers in general.

While that might hold true for some of my brethren I assure you that most lawyers try to be as concise and to-the-point as possible. There is a lot of pressure to include or exclude certain assumptions or interpretation in legal texts. Look at disclaimers:

"This is a work of fiction. Any similarity to persons living or dead (unless explicitly noted) is merely coincidental."

You would think "this is a work of fiction" were quite enough, unless you knew how that turned out for some authors and publishing companies. In some places you wouldn't expect disclaimers or warnings at all. "Objects in rear view mirror may be closer than they appear!" Well, that's just how convex mirrors work (dimwit)! This is caused by (some) lawyers making their clients appear dimmer than they are, for obvious reasons, and by companies reacting to the ensuing legal threats.

The point I'm trying to make is that it's not the lawyers that make law difficult to understand, but societies. If you have an overly long contract, it's mostly the exceptions and counterexceptions that make it thus: "Person A buys property P from person B. Location of property P described by straight lines connecting in this order geocoordinates W, X, Y, Z, W. Date, signature A, signature B" should be enough. Instead it goes on and on and on. No right of passage to or fro through any non-public roadways. No tenants to be evicted. No guarantee nobody has ever been murdered on premises, that no dog ever shat on the front lawn and no animals ever been hurt in the making of the property... ad nauseam.

What juries (as in a large number of them, not any individual ones) do, undoubtedly with all of the best intentions, is make it possible for something gross to happen. With all the individual assumptions that require clarification, they make it necessary to bloat the text to a point where it becomes hard to see what is really relevant.

In contracts sometimes it is actually the lawyer trying to slip something past you or your lawyer. No dog may actually ever have left its poo-poo on premises but nowhere was the possibility excluded that there are massive deposits of mercury in the soil which the owner will have to clean up within the next three months. But surely you must have been aware that such a long list of exclusions can not be concluded to be anything but comprehensive. This is a pretty crude example that wouldn't work, but you get my drift. This, I believe, was what you were hinting at.

Although this may be perceived as being a lawyer's everyday business, it isn't a common case. What most would do is throw out everything that isn't strictly required, if only to have it claimed back in by the opponent's attorney with some kind of explanation.

I've been out of this business for over 15 years, but I felt that I owed them as much as at least trying to explain how I thought it worked.

As for never having tried to build automatic engines to pass down verdicts: believe me, it has been tried. Most of the cases were things believed easy to do, mostly automobile traffic: speed cameras for one, but also other easy cases like running a red light. It's not as easy as it seems, but maybe with the advent of self-navigating systems (which are clearly an example of "understanding" rules applied to reality) they have a better chance of being implemented. I think this would require some rewriting of laws so they can be interpreted by machines, an idea that was proposed in this form decades ago. Most often the system creating the laws is more than happy for them not to be specific, so as to leave some loopholes, which is incompatible with the idea of implementing software to execute them. How would the law-making process work? Have you ever discussed software implementation with one client representative? How about hundreds? Would the software implementation get passed as a law (bugs and all)? The requirement specs? The technical specs?

There are some claims in legal theory that legal subsumption cannot be done without a human (~intelligent) agent at all, because any act of law is the specialisation and concretisation of a more abstract legal act, and therefore not feasible unless the subsuming agent understands the more abstract act, its implications for the case it's being applied to, and the idea of a (the?) legal system in general. You may hold this to be unnecessarily self-congratulatory (on being human) and impractical, but would you like to be judged by a machine? If so, what would your prerequisites be to defer to this machine's authority? Would eventual maintenance deficits or the possibility of interference enter into the equation?

"This is a work of fiction. Any similarity to persons living or dead (unless explicitly noted) is merely coincidental."

Actually, this is a wonderful example of baffling legalese. The second sentence seems to amount to "no similarity of any character to any person living or dead not explicitly noted as intended is intended." That's hardly likely to be true.

Suppose I write a novel in which the 16th president of the USA is called "Abraham Lincoln". I think it is a pretty good bet that there will be many non-coincidental (non-chance, deliberate, intentional) similarities between the fictional character and the real Lincoln, indeed that I am writing about the real Lincoln (albeit that I'm not writing a biography of him) and that I expect the reader to grasp that and that I expect the reader to consider that the Lincoln-of-the-story is a lot like the actual Lincoln in ways I don't state, too.

That doesn't mean it is not fiction, and there is next to no chance that I'll put in a note (which would need to be outside the fiction, not a Susanna Clarke-style footnote) saying that the character called "Lincoln" is meant to resemble Lincoln in such-and-such ways.

If you doubt me, consider that there is nothing special about people in fiction, and that species of plants and animals, buildings (e.g. St. Paul's cathedral), elements, and places (e.g. Scotland) in fiction will bear many non-coincidental resemblances to their "real world counterparts", and that none of that will be noted. It is not in the nature of fiction that similarities are chance unless stated to be otherwise.

And, of course, one can have intentional similarities without the sharing of names (e.g. "there's this singer from Hoboken who's trying to break into pictures, he's called Francesco Smith"--if he turns out to have connections to the mob & to the Democratic party, will that really be coincidental?).

"Any similarity to persons living or dead (unless explicitly noted) is merely coincidental" is not entailed by the thing's being a work of fiction, and in most cases (at least most cases in which the matter comes up!) it will be a blatant lie.

(Of course, some similarities of characters to persons living or dead are coincidental--the protagonist has blue eyes and so does your aunt Helen (of whom the author has never heard)--, but that's a very weak claim.)

Of course, that's not to say that having the Lincoln-of-the-novel commit adultery amounts to a claim that Lincoln actually committed adultery, but if that's what the disclaimer is there to guard against (and I don't say that it is), would any non-lawyer be able to extract that from the wording?

Of course, if all similarities are coincidental, then nothing in the fiction constitutes libel. Sadly, saying it doesn't make it so.

You've just made the point to an extent that I wouldn't know how to surpass.

"This is a work of fiction" should make it all eminently clear, even if your (or my) aunt Clara has blue eyes, and whether or not my (or your) great-great-granduncle flew a plane painted red. It's obvious that many fictional characters have some traits in common with non-fictional people, because we all have traits in common like that. Let me guess: you're not an ELIZA-like construct. You've got one nose, two ears, two eyes, a backbone, two arms and two legs. Your great-grandfather took a bullet on a European battlefield, and your grandfather never was the same after returning from a battlefield.

Go back far enough in any family tradition and you are likely to find some similarities that seem spooky enough. (Did I ever tell you about my sometimes acute appetite for blood?) Even if there are more than one or two intersections with what we perceive as history, none of them are going to be the truth. Did the Hapsburgs sue the production company of "The Illusionist"? (Bad script, too; go for "The Prestige" if you want a good movie with magicians in it.) No, the notion is beyond bizarre.

That something happened to any of your ancestors doesn't make it your intellectual property, and it shouldn't. If you disagree: Where should it end? Is any of the Red Baron's ground crew less mentionable or their offspring less worthy of some of the legend's spoils?

We both know how fiction works (perhaps not in a "high theory" sense, but practically), and I imagine most children have a pretty good grasp of it, even before they learn to read.

I think it was this:

""Objects in rear view mirror may be closer than they appear!", well that's just how convex mirrors work (dimwit)!"

that led me to believe that you were offering some kind of defence of the "all similarities ... coincidental" disclaimer, along the lines of "painfully obvious, but true--included for morons!". That was probably unfair of me. Please put it down to an excess of caffeine and the lateness of the hour.

If we agree that the truth is that some but not all similarities are coincidental, and the intentional ones won't all be flagged, then what is the rationale, however flaky, for including the "all similarities ... coincidental" disclaimer?

@ 238, 239
Must disagree with your apparent suggestion that our understanding can't cope. That may, temporarily, be the case, but I'm not buying it as a permanent state of affairs. The disconnect between QM and relativity shows that something, somewhere, is "wrong" - please note the quotes.

Sorry, but we know that mathematics is a construct...
A very powerful one, but a construct.
This is the McGuffin in the "Laundry" novels and stories, after all!

No argument there; the only question is, are we justified in believing that the universe is so constructed as to be easily modeled by that construct. That's been the assumption ever since Galileo, and there's never been any proof. I doubt it's possible to prove it. I'm not even sure it's falsifiable.

As for QM being wrong; well, that's what Einstein believed, but the best counter-argument he could come up with was the EPR paradox, and Bell and Aspect put paid to that one. So I won't say you're wrong about QM, but I think there's enough going on that we need to entertain other possibilities.

Myself, I've always been fond of the Everett-Wheeler Many Histories interpretation[1], maybe because it's so grandiose, and fulfills two favorite mottos of mine: "Go big or go home", and "A theory needs to be strange enough to be true".

@ 255 No
It isn't science.
Feynman was particularly "anti" this sort of semi-mystical claptrap.
I tend to agree with him - hence my amateur interpretation, earlier.
( It's a long time since I did any serious Physics )

I point out that so far, all the work on String Theory has yet to produce a falsifiable hypothesis (and the current idea that to find the String Theory that works just requires us to choose the right one from among 10^500 candidates strikes me as real wankery). Also true for every other interpretation of QM so far, as opposed to the operational aspects of superposition, entanglement, decoherence, etc.

It seems very unlikely that anything testable that discriminates among interpretations of QM is accessible much above the Planck scale, which is about ten or fifteen orders of magnitude beyond what our current tools like the Tevatron and the LHC can reach. Rejecting Many Histories because it's not currently testable will require you to reject Feynman's Sum Over Histories as well, since its predictions are precisely the same. So calling it names isn't going to make it go away.

As I pointed out above, Einstein refused to believe in QM, and it hasn't gone away because of that. And QM has yet to fail an experimental test, so saying it has to be wrong somehow is an expression of personal esthetics. You may be right, but there's no objective evidence yet that you are.

"Rejecting Many Histories because it's not currently testable will require you to reject Feynman's Sum Over Histories as well, since its predictions are precisely the same. So calling it names isn't going to make it go away.

Einstein refused to believe in QM, and it hasn't gone away because of that. And QM has yet to fail an experimental test, so saying it has to be wrong somehow is an expression of personal esthetics."

I've nothing against QM--I'm really not qualified to have anything against it. I don't say it must be wrong, and for all I know it is true. The issue is interpretations: telling stories about QM.

If Many Histories & Sum Over Histories make precisely the same predictions, in what sense are they different?

What predictions do Many Histories & Sum Over Histories make that "raw" QM doesn't? Doesn't matter that we can't do the tests now, so long as we understand what we're supposed to be looking for.

Consider three theories, identical save that:

[1] Theory A predicts the existence of one particle which can't causally interact with anything;

[2] Theory B predicts the existence of two particles which can't causally interact with anything;

[3] Theory C doesn't predict the existence of any "ghost" particles.

What reason could there be to favour Theories A and B over Theory C? What reason could there be to favour Theory A/B over Theory B/A?

My worry is that QM interpretations are like theories A and B (metaphysical, in the pejorative sense), but I don't claim to be any kind of expert on the matter.

"It seems very unlikely that anything testable that discriminates among interpretations of QM is accessible much above the Planck scale, which is about ten or fifteen orders of magnitude beyond what our current tools like the Tevatron and the LHC can reach."

How will a better collider (or any other new tool) help me find a branch of the universe which can't causally interact with mine?

The current consensus assumption is that below the Planck scale (10^-33 cm) space and time are replaced by a kind of "quantum foam" which doesn't have a deterministic metric or number of dimensions. It's conceived to be full of wormholes and EPR bridges and other odd bits of topology. Presumably it's in some sense the substrate of everything above it, and that might include all branches of the Everett-Wheeler universe, if they exist. So, in theory, if we could get down to that scale, we might find a regime where the branches can causally interact via the foam below them.

The problem with "raw" QM is that it's strictly operational; it tells you how your instruments will read when you perform an experiment, but it doesn't tell you how or why they read that way. This is very unsatisfactory for some physicists1 who believe that a mathematical model should have intuitive explanatory power2. The problem is of course that none of the explanatory theories that anyone has come up with in the last 80 years has been falsifiable; clearly there's something missing, but no one knows what, and unfortunately not many physicists are bothered enough by the situation to risk their reputations in an area that's resulted in so much controversy.

[1] To be fair, about 90% of physicists and quantum chemists don't give a hoot in hell about interpretations of QM as long as their equations work and they get the experimental results they want.

[2] Einstein's super power was an intuition about physical models that led him past all the confusion and paradox that had bothered physicists since Maxwell and let him see that electromagnetism and spacetime had to be intimately intertwined. In the process, he realized that one of the big stumbling blocks was Newton's concepts of absolute time and action at a distance: it's not entirely true that Newton's physics is correct as far as it goes and is just extended by Einstein's relativity; in some ways Newton really was wrong.

@ 259 et seq .....
I agree that QM fulfills its predictions.
So does Relativity theory...
Remember I mentioned the "Renormalisation Problem"???
32 or 36 orders of magnitude error, IIRC.
Something is wrong (for some value of "wrong") somewhere ......

Bruce @ 263
It's better than it was - back in the 1960s-70s you were NOT ALLOWED to mention the QM/Relativity mis-fit.
Now, of course, everyone appears to be looking for ways around it, rather than trying to attack the question - I don't blame them, it's a bugger of a problem.

Of course something is "wrong" with physics; if it wasn't, our physicists would be looking for a new job!

Actually, there's probably more than one thing wrong. Just as classical physics began to unravel in the latter half of the 19th century around a few odd edge cases (Olbers' Paradox, the ultraviolet catastrophe, and the non-existence of the luminiferous ether -- have I forgotten any others? "Fixing" those three gave us a non-static universe, quantum mechanics and relativity) so we have some loose ends today: the failure of QM and relativity to hook up, the dark matter problem/inflation, the matter/antimatter imbalance, and now (notably) the suggestion that the fine structure constant is a variable. And that's just on the cosmological front. Nobody (AIUI) has yet worked out how high temperature superconductivity works; the search for the Higgs boson continues: and so on.

Today's physics is a much better model of reality than the physics of the 1860s, but it's still not perfect. Or even "good enough", as long as we can conceive of having the engineering ability to construct better instruments for probing the edge conditions.

"if we could get down to that scale, we might find a regime where the branches can causally interact"

But the objection was to a fairy story in which the branches cannot causally interact; how would finding "new" branches of the universe which do causally interact with ours confirm the existence of branches which cannot?

I'm not sure what intuitive explanatory power is, but I take it the emphasis is on the "intuitive".

Perhaps a satisfying model accords with human intuition/feels comfortable/describes the world in terms of something we think we already understand, but is there any reason to think the world will play along and that models adequate to the facts will have any such property?

As for many worlds/histories, the objection is that it is nonsense (i.e. marks on the page with no significance). I've been begging for reasons to think it is not (as I know that to an outsider sense may look like nonsense), but they've not been offered. Nonsense has no explanatory power, intuitive or otherwise.

(Olbers' Paradox, the ultraviolet catastrophe, and the non-existence of the luminiferous ether -- have I forgotten any others?

One other, that we tend to forget about: up until 1905 many, perhaps most, physicists did not believe in atoms. 20th century physics was so heavily involved with atomic and nuclear particle theory that it's hard to remember that the existence of atoms was controversial before. But without atoms, quantum theory makes a lot less sense, at least in terms of the experimental techniques of the first half of the 20th century, and chemistry is stranded at the macroscale without quantum theory.

Renormalization isn't considered a problem these days; it's considered a necessary correction for the effect of the virtual particle loops that surround any bare particle. See this wikipedia article for a relatively clear explanation; I couldn't find anything else in a short time that wasn't highly specialized.

the objection is that it is nonsense (i.e. marks on the page with no significance).

Please explain why you say it's nonsense. I agree that it's not (at the present time) falsifiable, and that there are arguments against it on the basis of elegance and violation of Occam's Razor (which arguments I don't happen to agree with, and I'll explain why, if you're interested), but as far as I can see it's a coherent view of a possible universe.

The other thing that was going to blow an interesting hole in physics as understood at the end of the 19th Century was that funny pitchblende stuff that seemed to stay a little bit warmer than it should.

And that led to radioactivity, and an explanation of why Lyell's deep time worked after all.

"Please explain why you say it's nonsense. I agree that it's not (at the present time) falsifiable, ... but as far as I can see it's a coherent view of a possible universe."

OK, I'll have another go.

We agree that pure reason isn't supposed to get us to the truth of the theory, that it'll stand or fall on the evidence.

Now suppose I put forward the theory that every time I clap my hands, within 10s a donkey is created within 10m of me. A stupid theory, I'll grant you, but you can look for the donkeys.

You don't find any donkeys. "Ah," I say, "Did I forget to mention that they are invisible, intangible, inaudible donkeys? In fact, they're quite incapable of causally interacting with anything, ... save, perhaps, other donkeys similarly incapable of interacting with anything else."

Wouldn't you be right to say that my theory (so amended) was nonsense? I might reply "You understand all the words in the theory, don't you? You know what a donkey is, don't you? And there's nothing odd about my grammar, so what's the problem?"

I hope you'd be as unimpressed by this as by the claims that green ideas sleep furiously or that the number two masses 300kg.

So, again, what could possibly count as evidence of the existence of items with which we cannot interact? If the theory doesn't determine what would count as evidence for or against it (irrespective of whether we've the tools to test it now), does it say anything at all?

Of course, you have hinted that down in the quantum foam there might be a way to interact with the other worlds/histories, and I'll grant that "worlds" that are hard to interact with (but which given higher energy accelerators ...) are quite another thing from causally inert (or inaccessible) worlds or histories, but isn't that to give up what's distinctive (and mad) about the many worlds interpretation?

For the life of me, I still can't see why anyone would want to believe in the many worlds/histories interpretation. Seems as barking to me as David Lewis' take on modal realism.

As for Occam's Razor, as I understand it, it is a principle about choosing between distinct theories: if two theories account equally well for the observations but make different predictions, pick the simpler one. I agree that Occam's Razor, taken this way, can lead to the elimination (pro tem) of the better theory, and future observations may bring it back into play. That doesn't seem an insuperable objection to it as a guide to action; "get in the car with the sober driver" seems pretty good advice, but sometimes the drunk is the only one to make it to the destination in one piece.

Occam's Razor is not needed to eliminate the "magic donkey" theory, and it doesn't favour one magic donkey over two, as we've given no sense to the notion of causally inert middle-sized dry goods (J L Austin's phrase?), and so the "theories" citing them were never in the running.

If you've something to say about Occam's Razor, I'd be interested to hear it, but can we settle the many histories thing first? (Pretty please!)

Remember: if it's true that "real" A.I. is always thirty years in the future, that means that it will never arrive, but on the up side that means that twenty years _after_ it arrives it will be FUSION-POWERED.

@ 272
That, of course, is the evidence against any "god", as I'm sure you are aware.
Can I put in a word for "string theory" here as (currently) untestable hand-waving?

"Real" AI
Define "intelligence" first.
Generalised problem-solving in diverse circumstances, and the ability to react to unexpected situations might be a parameter to try to match.
And we are nowhere near that - yet.

"That, of course, is the evidence against any "god", as I'm sure you are aware.
Can I put in a word for "string theory" here as (currently) untestable hand-waving?"

Dear Greg,

Again with the "currently untestable", which still seems to be avoiding picking between the interaction-down-in-the-quantum-foam (which needs detail filling in, of course) and the no-interaction-possible (scorned by Brandi as in principle "untestable") versions of Many Worlds/Histories. Perhaps you think they're the same ...

Evidence against god? I don't know about that, as I'm skeptical of religious propositions being empirical (they're not up for scientific test, as how the world is doesn't bear on their truth). I think that "God exists." "Really? I've never met him in the local Waitrose." is a stupid conversation. Letting the religious off their (supposed) existence claims doesn't make me any more tolerant of religion, of course.

If you want to say Many Worlds is theology & not science, I'm sure you'll find some people to agree with you. I don't think you do, though.

String theory? I'm no physicist & have no opinion about it. Is it the case that no one has the slightest idea what evidence for or against it would look like? I'm guessing not (that it makes a whole raft of predictions of the non-magic donkey variety), even if we're in no position to gather evidence now. But not knowing the content of the theory, I probably shouldn't even make that guess.

Have we a general account of the ability to solve problems which doesn't involve attributing purposes to the problem-solving entity? Do we need one for this purpose? (I don't assume any particular answer.)

I have every sympathy for trying to understand intelligence in terms of generalised problem solving, but that'll probably land us with an ideal notion of intelligence, such that even bright humans don't count as intelligent: as a species, we may have cognitive blind spots. (I don't say necessary blind spots--I'm not Colin McGinn--, but blind spots nonetheless.)

String theory? I'm no physicist & have no opinion about it. Is it the case that no one has the slightest idea what evidence for or against it would look like?

Current versions of String Theory have no values for any parameters. That's why there's talk of 10^500 different String Theories, one for each set of parameter values you pick. So String Theory predicts nothing unless you choose parameter values, at which point it predicts anything, depending on the values.

Nope, I still don't agree with you that Everett-Wheeler is nonsense. There are three basic arguments I can make: it's not logically unfalsifiable, it's not any less physical a theory than many others that are currently accepted (it reduces to sum-over-histories for a given world-line), and the lack of future causal connection is no different from that of parts of the universe that were separated by inflation after the Planck era at the beginning of the universe. I don't really have time to go into all the details of these arguments now, or to research those of Hawking, Deutsch, Dewitt, and Tegmark, who all accepted it as a valid theory. If you want to continue the discussion, I might have some time to do the research in a few days.

Oh, one other argument: it's more believable to me (and to a lot of other people I've talked to) than the notion of the "Collapse of the Wave Function When Observed by a Conscious Entity" that the Copenhagen Interpretation forces on us.

We can certainly agree to scoff at the notion of the magic powers of consciousness being written into physics.

I've nothing against empirical versions of multiple worlds, its just theological versions that get my goat. I'm open to persuasion as to where the boundary is, but I don't expect you to do research to benefit me.

Well, no, that's not how it works. Humans can only generate strings of finite length. So, for example, if they can generate (optimistically) 100 bits/sec, and they are allowed 10,000 seconds, they can only produce strings holding 1,000,000 bits of information. So it's easy to see then that there are at most 2^1,000,000 different possible strings, and only (2^1,000,000)^2 distinct possible ways of responding to them (assuming the reply is of equal length.) A very large number[1], I grant you. But still finite.

However, this fragment of a debate is a dead-end; it presumes a lookup from question to answer. The answer, however, could be an algorithm. You don't need to have a lookup for all possible answers of "+1", you just need an algorithm that evaluates the "+" and then works on the two operands.

What prompted my initial response was the assertion that any machine capable of passing the Turing test must be said in some sense to be thinking. As my little scenario shows, that's not necessarily the case.

[1] 4^(10^6), to be exact. Far larger than a googol, but insignificantly tiny compared to a googolplex.
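The count in the comment above is easy to sanity-check, along with the lookup-versus-algorithm point; a minimal sketch, assuming the same figures (100 bits/second for 10,000 seconds) and using a toy `answer` function purely for illustration:

```python
# Back-of-envelope count from the comment: a speaker producing
# 100 bits/second for 10,000 seconds emits at most 10^6 bits.
bits_per_second = 100
seconds = 10_000
total_bits = bits_per_second * seconds        # 1,000,000 bits

num_strings = 2 ** total_bits                 # possible distinct strings
num_pairs = num_strings ** 2                  # (string, equal-length reply) pairs

# (2^(10^6))^2 == 4^(10^6): finite, but far larger than a googol (10^100).
assert num_pairs == 4 ** total_bits
assert num_pairs > 10 ** 100

# The lookup-vs-algorithm point: no table mapping every "a+b" question to
# its answer is needed; a tiny evaluator covers all of them.
def answer(question: str) -> int:
    a, b = question.split("+")
    return int(a) + int(b)
```

Python's arbitrary-precision integers make the million-bit arithmetic exact; the `answer` evaluator stands in for the general observation that an algorithm can replace an astronomically large lookup table.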

I would say it is true, but irrelevant. Since a lookup table capable of handling all possible 10,000-second conversations (including those with references to what happened earlier in the conversation) cannot fit in the real universe, any entity, machine or otherwise, capable of passing the Turing Test in real life would, by necessity, use something other than a lookup table. That alone does not prove such an entity "thinks" (although I would be inclined to say it does, by any reasonable definition), but "it could be done with a lookup table" is not a valid argument against it.

Have you calculated just how bright a light source needs to be to ensure that, in a small twin-slit interference experiment, the majority of the photons aren't single? The individual photons in your average physics-class experiment last on the order of 10 nanoseconds. So you need to ensure your light source delivers at least 10^8 photons per second onto the area of the slits before you can talk about photons existing at the same time, let alone interfering.

This is why I'm not surprised by the phenomenon of interference occurring with single photons - because that's not actually that faint a light. For photons to interfere with each other, on the other hand, is potentially the start of a turtles-all-the-way-down case, because what mediates this inter-particle force?

Even if we're talking about visible light photons (~1 electron volt/photon), 10^8 photons per second is only about 10^-11 watts; not very bright. If you use a 1 milliwatt light source (an LED or a laser diode, say) you get on the order of 10^16 photons per second. In visible light you have to work fairly hard to get single-photon sources.
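The flux arithmetic above is worth checking explicitly; a minimal sketch, assuming the comment's round figures (~1 eV per visible photon, a 1 mW source):

```python
# Rough photon-flux arithmetic for the double-slit discussion above.
EV_IN_JOULES = 1.602e-19          # one electron volt, in joules

photon_energy = 1.0 * EV_IN_JOULES  # ~1 eV/photon, the comment's round number
rate = 1e8                          # photons per second at the slits

power = rate * photon_energy        # ~1.6e-11 W: around 10^-11 watts, very dim

milliwatt_source = 1e-3             # a 1 mW LED or laser diode, in watts
flux = milliwatt_source / photon_energy
# ~6e15 photons/second: order 10^16, which is why genuine single-photon
# sources take real effort to build.
```

These are order-of-magnitude figures only; real visible photons carry roughly 1.6 to 3 eV, which shifts the numbers by a small factor without changing the conclusion.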