AI and the Singularity.

Machines that replicate the human brain may be a possibility if we figure out well enough how the brain works.

For the sake of my statements (1-3), let's define "replicate the human brain" to mean being able to reason and make deductions in arbitrary problem domains as well as humans can. It does *not* mean having human emotions, desires, goals, motivations, or ambitions.

Quote:

"artificial general intelligence" is a totally bullshit category altogether inapplicable to software.

Your brain is a general intelligence. Why can't a machine be built with the same functionality? The universe reserves this right for biology only?

Quote:

(1) it is impossible for us to build machines which replicate the human brain because we will never ever figure out how the brain works.

- or -

(2) it is impossible for us to build machines which replicate the human brain because even after we figure out how it works we will never ever have sufficient manufacturing technology to build an equivalent machine.

- or -

(3) it is impossible for us to build machines which replicate the human brain because human brains are not just biological machines, they have magic inside, endowed by a power unavailable to humans.

Which of (1-3) is your reasoning?

Quote:

2013...can't do anything at all beyond somewhat flashy but ultimately trivial examples that just won't extend into much of anything.

Indeed, because an artificial general intelligence does not yet exist. But that is not an argument for why a general artificial intelligence cannot exist. I am interested in people's arguments (specific, objective) for why a general artificial intelligence can't exist.

I suppose the problem with SF writers is that generally they're not good enough to imagine anything particularly alien, so any non-human intelligence ends up coming across as a very pretentious human that knows a lot and has a funny name.

Sci-fi writers are *writers*. They're interested in telling a human story. Pure 100% science fiction would be boring as shit.

Dan Simmons' Hyperion series did a pretty good job of imagining something non-human relating to AI. But, in the end, it's a story told from a human perspective. It's always going to be an allegory about the human condition.

Your brain is a general intelligence. Why can't a machine be built with the same functionality? The universe reserves this right for biology only?

If that machine is a brain sim, maybe it can be built and maybe it will work and we'll call it general intelligence. But there's no general intelligence principle and the category "general intelligence" is primarily useful for bullshit.

Consider software that can solve arbitrary systems of equations efficiently, even when the system is as messy as a simulator of a hardware design. That is general enough intelligence in my book - you can make a function the solution to which is the best simulator of the data we've got (for example, cell metabolism data), then you can make a function the solution to which is the cure for cancer. You could make speech recognition out of it as well. The doomsday nutjobs I spoke of think of this differently. You give an artificial general intelligence a goal to cure cancer, and then it kills everyone because that, too, cures cancer. Or, you give it an equation to solve and it destroys the world to build a bigger computer to solve the equation, because, you see, it's "general intelligence" and it should be smart enough to do that. Now that is complete cretinism. Humans don't do this, and neither would any even remotely practical software (the kind that doesn't tell you "cancer is not epoxy and can not cure"). And solutions to equations do not magically involve trying to cut into the real world and the like.
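
To make "you can make a function the solution to which is X" concrete, here is a minimal sketch of posing "find the best simulator of the data" as an ordinary optimization problem, using scipy's general-purpose minimizer. The model family, "lab data", and loss are toy stand-ins, not a real metabolism simulator:

```python
# Minimal sketch: "find the best simulator of the data" posed as an
# ordinary optimization problem. Model, data, and loss are toy stand-ins.
import numpy as np
from scipy.optimize import minimize

observed_t = np.linspace(0.0, 5.0, 50)        # measurement times
observed_y = 2.0 * np.exp(-0.7 * observed_t)  # pretend lab data

def simulate(params, t):
    # Candidate simulator: exponential decay with two free parameters.
    a, k = params
    return a * np.exp(-k * t)

def loss(params):
    # How badly the candidate simulator fits the data (sum of squares).
    return np.sum((simulate(params, observed_t) - observed_y) ** 2)

# The "solution" is just the minimizer of the loss; nothing in this
# process gives the solver a reason, or the means, to act on the world.
result = minimize(loss, x0=[1.0, 1.0])
print(result.x)  # recovers roughly [2.0, 0.7]
```

The solver's entire universe is the loss function; reaching outside it isn't an option the search even contains.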

The doomsday nutjobs I spoke of think of this differently. You give an artificial general intelligence a goal to cure cancer, and then it kills everyone because that, too, cures cancer. Or, you give it an equation to solve and it destroys the world to build a bigger computer to solve the equation, because, you see, it's "general intelligence" and it should be smart enough to do that. Now that is complete cretinism.

Even if it is smart enough to do these things, how is it supposed to do them if we don't give it the means to do so?

The doomsday nutjobs I spoke of think of this differently. You give an artificial general intelligence a goal to cure cancer, and then it kills everyone because that, too, cures cancer. Or, you give it an equation to solve and it destroys the world to build a bigger computer to solve the equation, because, you see, it's "general intelligence" and it should be smart enough to do that. Now that is complete cretinism.

Even if it is smart enough to do these things, how is it supposed to do them if we don't give it the means to do so?

It's supposed to think up the means. It'll make a cure for cancer with some evil side effect that will make a virus kill us all. Or something. The bottom line is, pay the uneducated guy to prevent it, or we all die.

You don't have to explicitly create a god for a thing to have godlike powers. Think of the software in a bank that maintains customer accounts. High Frequency Trading algorithms may already be our first electronic gods.

I suppose the problem with SF writers is that generally they're not good enough to imagine anything particularly alien, so any non-human intelligence ends up coming across as a very pretentious human that knows a lot and has a funny name.

I too was bothered by this at one point. I was, however, lucky to stumble upon TVTropes' Blue and Orange Morality, which is a pretty good guide to works of fiction with truly alien aliens.

Not too alien, of course, since they were still conceived by human minds, but you take what you can get.

Well, at least an AI built using neural networks and patterns of connectivity taken from mammals would be this slightly weird human and not very alien, so this has merit. More merit than attributing a desire to physically kill the opponent to a very advanced chess AI, for instance, or a desire to kill everyone who might get in the way to a very advanced self-driving car AI. It seems to me that AIs in the gap between the truly super duper alien (a chess AI, a self-driving car AI, etc.) and the fairly human-like (neural network AIs based on mammalian data) are particularly unlikely to exist. The blue and orange morality strikes me as an example of some sort of half-way anthropomorphizing; if you were speaking of something derived from mammals, or even non-mammals, that'd be more anthropomorphic, and if you were speaking of something built from scratch, it would be much less so.

The blue and orange morality strikes me as an example of some sort of half-way anthropomorphizing; if you were speaking of something derived from mammals, or even non-mammals, that'd be more anthropomorphic, and if you were speaking of something built from scratch, it would be much less so.

But you're back to the fact that writers are, in the end, telling a human story. "Blue and Orange Morality" is not a tool to describe a scientifically realistic AI, it's a tool/trope to get humans reading it to consider that their own moral positions may be arbitrary.

Strangely, I think one of the more realistic depictions of AI in fiction is in Evangelion. They've got 3 custom-built AIs in separate pieces of hardware, basically firewalled off from everything else. They have an army of humans feeding/filtering data to them. They ask them pointed questions by way of skilled human operators translating the questions from the human management. "What are our chances of defeating this Angel on land?" The 3 AIs each give their own answer in a no-frills, robotic, mathematical way: "Alpha: 52%, Beta: 54%, Gamma: 22%". You need 3 separate AIs with different personalities/seeds because each AI is so complicated that you don't really understand its inner workings. A single answer may be heavily skewed by a quirk of that single AI. 3 gives you a tiebreaker.

There is no human voice for the AI. The only anthropomorphizing of the AI is on the level of a mechanic anthropomorphizing a hot rod he's working on. There is no AI haxoring the intarwebs to spread itself to pocket calculators.
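
The tiebreaker scheme is simple to state in code. A toy sketch (the three "AIs" are stand-in functions returning the canned answers above); taking the median lets two sane answers outvote one quirky one:

```python
# Toy sketch of the three-oracle tiebreaker: three independently built
# models answer the same question, and the median discards one outlier.
# The "models" here are stand-in functions, not real AIs.
import statistics

def alpha(question): return 0.52
def beta(question):  return 0.54
def gamma(question): return 0.22   # the quirky one

def consensus(question):
    answers = [model(question) for model in (alpha, beta, gamma)]
    return statistics.median(answers)   # one skewed model gets outvoted

print(consensus("What are our chances of defeating this Angel on land?"))
# -> 0.52
```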

The blue and orange morality strikes me as an example of some sort of half-way anthropomorphizing; if you were speaking of something derived from mammals, or even non-mammals, that'd be more anthropomorphic, and if you were speaking of something built from scratch, it would be much less so.

But you're back to the fact that writers are, in the end, telling a human story. "Blue and Orange Morality" is not a tool to describe a scientifically realistic AI, it's a tool/trope to get humans reading it to consider that their own moral positions may be arbitrary.

Strangely, I think one of the more realistic depictions of AI in fiction is in Evangelion. They've got 3 custom-built AIs in separate pieces of hardware, basically firewalled off from everything else. They have an army of humans feeding/filtering data to them. They ask them pointed questions by way of skilled human operators translating the questions from the human management. "What are our chances of defeating this Angel on land?" The 3 AIs each give their own answer in a no-frills, robotic, mathematical way: "Alpha: 52%, Beta: 54%, Gamma: 22%". You need 3 separate AIs with different personalities/seeds because each AI is so complicated that you don't really understand its inner workings. A single answer may be heavily skewed by a quirk of that single AI. 3 gives you a tiebreaker.

There is no human voice for the AI. The only anthropomorphizing of the AI is on the level of a mechanic anthropomorphizing a hot rod he's working on. There is no AI haxoring the intarwebs to spread itself to pocket calculators.

That sounds OK-ish, except for the whole necessity of firewalling off the AIs, presumably to prevent them from spreading themselves to pocket calculators. This strikes me as part of the same anthropomorphizing, i.e. the idea that if you ask an AI a very hard problem it will hack the internets to answer it, and possibly create some computing nanogoo that kills you. This sort of stuff is just not part of a mathematical problem unless you go to great pains to make it so. Solvers of all kinds don't give a damn; if you managed to make some solver that hacked around when trying to find a solution, it could e.g. change the problem to calculating 2*2, then print 4 and be done.

This is a real issue if you are trying to evolve some sort of software. I recall reading of some sort of evolving automated reasoning system that, as a bug, would break itself. Somehow, hacking the internet would have to be a desired solution while hacking the definition of the problem was not. That ain't going to happen by accident.

Have you ever heard about electronic circuit optimizing software that uses genetic algorithms to generate the variants to be tested? The circuits often look optimized to human experts. But some have dangling parts which make no sense, yet the circuit stops working when they're taken out. Some realtime optimizing setups, using FPGAs, have been found to be exploiting the ambient radio noise in the lab after experts could not figure out how they operated.

They're called AI because they figure out things that people cannot. We have 7 billion "computers" made with unskilled labor and we still have problems to be solved. A chess-solving AI may not specify terminating its opponent to win the game, but we still need to carefully set up the problem and environment to get an answer we can use. And we need to expect novel approaches, because that's why we build them in the first place.
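
For anyone who hasn't met them: a genetic algorithm in its simplest mutate-and-select form (no crossover) is just score, select, mutate, repeat. A minimal sketch, with the "circuit" reduced to a bit string and a toy fitness function standing in for a real simulation or physical test:

```python
# Minimal genetic-algorithm sketch: evolve a bit string toward a target.
# A real circuit optimizer scores candidates by simulating or physically
# testing them, which is exactly how unmodeled effects like ambient
# radio noise can sneak into the "solution".
import random

TARGET = [1, 0, 1, 1, 0, 0, 1, 0]

def fitness(candidate):
    # Score: how many bits match the target.
    return sum(c == t for c, t in zip(candidate, TARGET))

def mutate(candidate, rate=0.1):
    # Flip each bit with small probability.
    return [bit ^ (random.random() < rate) for bit in candidate]

# Random initial population of 20 candidate "circuits".
population = [[random.randint(0, 1) for _ in TARGET] for _ in range(20)]

for generation in range(100):
    population.sort(key=fitness, reverse=True)
    if fitness(population[0]) == len(TARGET):
        break                                        # perfect match found
    survivors = population[:5]                       # selection
    population = [mutate(s) for s in survivors * 4]  # reproduction

print(max(population, key=fitness))
```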

Have you ever heard about electronic circuit optimizing software that uses genetic algorithms to generate the variants to be tested? The circuits often look optimized to human experts. But some have dangling parts which make no sense, yet the circuit stops working when they're taken out. Some realtime optimizing setups, using FPGAs, have been found to be exploiting the ambient radio noise in the lab after experts could not figure out how they operated.

They're called AI because they figure out things that people cannot. We have 7 billion "computers" made with unskilled labor and we still have problems to be solved. A chess-solving AI may not specify terminating its opponent to win the game, but we still need to carefully set up the problem and environment to get an answer we can use. And we need to expect novel approaches, because that's why we build them in the first place.

Using radio noise is to be expected... Exploiting the environment actively runs into an issue if the optimizer finds that it can 'optimize' the goal. Among the undesired solutions in chess are things like removing enemy pieces from the internal board. Killing the physical opponent is a particularly nasty form of cheating that occurs only when you somehow get to bypass the rules of chess but not the nasty evil instincts that privilege this particular course of action over all the other cheats.
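
To put the point in code: a game-playing search only ranges over the successor states its move generator hands it. A toy sketch (the "game", its moves, and the evaluator are invented stand-ins); deleting the opponent's pieces, let alone the opponent, simply isn't in the set of options unless a bug puts it there:

```python
# Toy sketch: a game search can only pick from the successor states
# the rules generate. "Cheats" like deleting enemy pieces only appear
# if a bug lets the optimizer edit the state directly instead.
def legal_moves(state):
    # The rules of the game: the *only* transitions the search may consider.
    return [state + 1, state - 1]   # stand-in for real move generation

def evaluate(state):
    return -abs(state - 10)         # toy evaluation: closer to 10 is better

def best_move(state):
    # The optimizer's whole universe is legal_moves().
    return max(legal_moves(state), key=evaluate)

print(best_move(3))  # -> 4: it can only inch toward the goal legally
```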

For the sake of my statements (1-3), let's define "replicate the human brain" to mean being able to reason and make deductions in arbitrary problem domains as well as humans can. It does *not* mean having human emotions, desires, goals, motivations, or ambitions.

We could build the best calculating engine in the world and imbue it with the ability to solve any math problem instantaneously, but if it doesn't have the intrinsic desire to do anything without being told what to do by an external agent, it can't really be considered intelligent. Yes, it's philosophical, but intelligence without some kind of will or desire behind it is just rote calculation. Until the calculating machine becomes a feeling machine, it can't be a thinking machine.

We could build the best calculating engine in the world and imbue it with the ability to solve any math problem instantaneously, but if it doesn't have the intrinsic desire to do anything without being told what to do by an external agent, it can't really be considered intelligent. Yes, it's philosophical, but intelligence without some kind of will or desire behind it is just rote calculation. Until the calculating machine becomes a feeling machine, it can't be a thinking machine.

I'm not at all sure you need emotion to get an awareness of self. A better question IMHO is whether or not self-awareness is externally measurable. 'I think, therefore I am' is an entirely internal dialog. Do Androids Dream of Electric Sheep? If we can't figure out how to define our own consciousness to an external observer then I'm not sure we could ever do it for Colossus, HAL, or Cmdr. Data. We may be stuck with Turing: if we can't tell whether or not it's conscious, does it matter if it really is?

For really good explorations of machine intelligence along with a decent yarn I highly recommend Iain Banks.

I'm not at all sure you need emotion to get an awareness of self. A better question IMHO is whether or not self-awareness is externally measurable. 'I think, therefore I am' is an entirely internal dialog. Do Androids Dream of Electric Sheep? If we can't figure out how to define our own consciousness to an external observer then I'm not sure we could ever do it for Colossus, HAL, or Cmdr. Data. We may be stuck with Turing: if we can't tell whether or not it's conscious, does it matter if it really is?

I think the problem is more "how do we get there from here"? Somehow you need to bootstrap a basic awareness, or at least the ability to have awareness, AND an environment where it's stable AND the resources to sustain it.

My guess is that this kind of AI will co-evolve with people. It's not too unrealistic to envision the evolution of physical interfaces to the human body to the point where you can start using a computer as an augmentation device. We can already pull basic motor functions directly from the brain, right? I bet moving up the brainstem is going to be a long but IMO inevitable process. If thought is purely electro-chemical then the rest is just engineering...

edit: Banks is kind of depressing, although sometimes side-splittingly funny at the same time.

My guess is that this kind of AI will co-evolve with people. It's not too unrealistic to envision the evolution of physical interfaces to the human body to the point where you can start using a computer as an augmentation device. We can already pull basic motor functions directly from the brain, right?

We now have devices such as the ReWalk that do this. There was research in the 90's about controlling a computer with thought, but I don't know what happened to it.

There was research in the 90's about controlling a computer with thought, but I don't know what happened to it.

Misnamed. It was controlling a computer with biofeedback, not thought. AFAIK, that sort of tech has been used to help disabled people operate computers and computerized prosthetics, but it takes way too much effort for the human to learn to use it to ever be a mainstream product.

It's not the computer learning to read the human's thoughts, it's the human learning to give biofeedback to the computer.
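
Schematically, the machine's side of such an interface can be as simple as a threshold; the clever part is the human learning to drive the signal. A hypothetical sketch (the signal source and threshold value are invented for illustration):

```python
# Hypothetical biofeedback loop: the computer just thresholds a signal;
# the human learns, over many sessions, to push that signal over the
# threshold on demand. Values here are invented for illustration.
import random

def read_signal():
    # Stand-in for sampling, say, an EMG amplitude from real hardware.
    return random.uniform(0.0, 1.0)

THRESHOLD = 0.8   # tuned per user; the *user* trains toward it

for _ in range(10):
    if read_signal() > THRESHOLD:
        print("trigger: select")   # e.g. a click, or advancing a cursor
```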

We could build the best calculating engine in the world and imbue it with the ability to solve any math problem instantaneously, but if it doesn't have the intrinsic desire to do anything without being told what to do by an external agent, it can't really be considered intelligent. Yes, it's philosophical, but intelligence without some kind of will or desire behind it is just rote calculation. Until the calculating machine becomes a feeling machine, it can't be a thinking machine.

I'm not at all sure you need emotion to get an awareness of self. A better question IMHO is whether or not self-awareness is externally measurable. 'I think, therefore I am' is an entirely internal dialog. Do Androids Dream of Electric Sheep? If we can't figure out how to define our own consciousness to an external observer then I'm not sure we could ever do it for Colossus, HAL, or Cmdr. Data. We may be stuck with Turing: if we can't tell whether or not it's conscious, does it matter if it really is?

For really good explorations of machine intelligence along with a decent yarn I highly recommend Iain Banks.

IMO 'self-awareness' is over-emphasised. If it is meaningful to say that an animal or an AI is aware of its environment, there's nothing really special about it also being aware of itself. Some specific forms of self-awareness - such as an AI understanding that if it destroys the computer that is computing it, then it will cease to exist - are something humans themselves have a lot of trouble with. It seems to me that people often confuse self-awareness with qualia, or with the notion of self as the inner thing in the Cartesian theater. We perceive some of our own outputs in much the same manner as we perceive sensory data (hearing yourself think); that seems neither essential nor profoundly important.

BTW, one thing is for sure: the AI can not have *your* qualia (the ones that you have) or it would be you. It's unclear to me that qualia are at all generalizable as such. You know you have your qualia, and I know I have my qualia (or at least, from your point of view, I claim so, and there are good reasons to assume symmetry here), but what I call qualia and what you call qualia are two distinct things (I am not feeling what you feel, or I would be you, and vice versa), and 'qualia in general' do not have to be a real thing or a definable category of some kind.

Your brain is a general intelligence. Why can't a machine be built with the same functionality? The universe reserves this right for biology only?

For the moment, and for quite some time to come, apparently, yes it does. There's a huge amount of hubris in the notion that we could simply create something like an artificial brain anyway. The ones we currently have took 4 billion years of natural selection to become what they are. Thinking about that alone should give you pause; to paraphrase one JBS Haldane, the brain is not only stranger than we imagine, it's currently stranger than we can imagine.

Your brain is a general intelligence. Why can't a machine be built with the same functionality? The universe reserves this right for biology only?

For the moment, and for quite some time to come, apparently, yes it does. There's a huge amount of hubris in the notion that we could simply create something like an artificial brain anyway. The ones we currently have took 4 billion years of natural selection to become what they are. Thinking about that alone should give you pause; to paraphrase one JBS Haldane, the brain is not only stranger than we imagine, it's currently stranger than we can imagine.

Exactly. And even in the future, unless we're copying the brain, there's no 'general intelligence' principle. Even under the most extreme "tabula rasa" view of the brain, the brain still consists of parts that do specialized tasks in parallel, cobbled together by something which we are not even interested in replicating.

My guess is that this kind of AI will co-evolve with people. It's not too unrealistic to envision the evolution of physical interfaces to the human body to the point where you can start using a computer as an augmentation device.

Regardless of whether or not the machine becomes what most people in this thread are defining as "intelligent", that's precisely where we're going.

Augmented Human/Computer Intelligence is just as interesting as, if not more interesting than, godlike, anthropomorphized machines. After all, what's the distinction between a machine with true "human-like" intelligence and a person that has a machine's knowledge or problem-solving ability? Other than plentiful organs, and the fact that the former is still in the realm of science fiction while the latter is coming together piecemeal, here and now.

The latter has arguably been coming together ever since we developed speech, which allows multiple human brains to work on a task, and writing, which not only preserves information but serves as an aid to short-term memory; then came computers, which we program.

There's a huge amount of hubris in the notion that we could simply create something like an artificial brain anyway. The ones we currently have took 4 billion years of natural selection to become what they are.

It took Egyptian slaves longer to build the pyramids than it would take us today. That it took a process forever to develop something that's incidental to its function isn't really meaningful.

Once something as nebulous as a consciousness can be defined, it becomes possible (if infeasible) to implement. Since philosophers have been arguing existential questions like that for longer than we've been recording history it's unlikely we'll have a concrete definition anytime soon. Meanwhile, progress continues unabated.

That it took a process forever to develop something that's incidental to its function isn't really meaningful.

Intelligence is unarguably a trait that boosts our ability to survive and succeed. It's not an "incidental" function of the human brain at all.

The process in question here is natural selection, and it didn't "care" about selecting "intelligence" over brawn or the ability to climb trees. That we had intelligence as a trait and that it was selected isn't meaningful, as it could well have been any number of other traits that were selected, depending on circumstances at the time.

But this particular conversation is a rabbit hole. Just because it took something eons to develop naturally doesn't mean it can't be replicated in a much shorter time. Humans are good at copying if nothing else.

There's a huge amount of hubris in the notion that we could simply create something like an artificial brain anyway. The ones we currently have took 4 billion years of natural selection to become what they are. Thinking about that alone should give you pause; to paraphrase one JBS Haldane, the brain is not only stranger than we imagine, it's currently stranger than we can imagine.

There is a difference between "recreating the human brain" and "creating something functionally equivalent". We can fly, live in armored shells, mass-communicate data, know where magnetic north is, etc., all without reverse-engineering how other organisms do the same things. I am not saying that it's a trivial task to recreate something that emulates awareness closely enough that you're arguing semantics, but I think it's inevitable, and much sooner than some seem to think.

Natural selection *is* weighted random chance. Once you have the ability to set long-term goals (assuming that's possible, which it has to be), things speed up by several orders of magnitude. It may take thousands of years, but (assuming our current understanding of the physics that the mechanics of thought are based on is more or less accurate) I'm guessing hundreds or maybe even less, although I wouldn't bet on it happening in my lifetime. It's the mechanics of thought we don't understand, but I think we just need (a lot more) experimental data.

I don't think it will look like the generic Singularity folks think it will, though. I suspect it may be easier to copy an existing awareness into new bodies than to bootstrap a stable one from scratch - similar to the Edenists in the Hamilton books.

Do you think it's likely we will have the ability to "offload" memories?

The latter has arguably been coming together ever since we developed speech, which allows multiple human brains to work on a task, and writing, which not only preserves information but serves as an aid to short-term memory; then came computers, which we program.

And version control systems.

In the programming world there has been an amazing amount of improvement in the ability to work collaboratively at the purely functional level (i.e. to get code that will actually run on a computer and provide the results you desire). Between this and the ever-increasing granularity of the software stack (and how it's constantly moving towards instinctive understanding), I think that software is a good analogy, but that does not mean it's the whole thing. We are hardware as well as software, and we're just getting into the area where those interact.

One of the things I think we're semi-stuck at now, as far as data communication goes, is that (short of plugging something directly into your brain) the highest-bandwidth "interface" to a person is their optic nerve, and our ability to actually grasp fine-grained information that way is limited. Of course we can use lossy compression (smokin' hot babe means different things to you and me, but you'll still get the idea) or preallocated shared references (like languages). At some level we get this stuff from our hardware.

One of the things I am really fascinated by is the period in human evolution just before and during our learning how to speak. Lots of animals vocalize, but as far as I know we're the only living species who can choose to communicate intangibles. My bees can tell each other where the nectar flow is even if it's miles away, but they can't, say, add to their vocabulary in "software". The ability to create a mental model of something that may not even exist in the real world, and then share that model with another entity, is amazing.

I don't think it will look like the generic Singularity folks think it will, though. I suspect it may be easier to copy an existing awareness into new bodies than to bootstrap a stable one from scratch - similar to the Edenists in the Hamilton books.

Do you think it's likely we will have the ability to "offload" memories?

For one, memories aren't all there is to consciousness. Personally I think consciousness is the sum activity of the organic brain evoking the mind as a whole, so consequently, I don't believe in "artificial" intelligence at all by definition. If it's a piece of technology, it can never actually be conscious, and equally, never actually intelligent.

However, offloading memories is something that is undoubtedly possible - just not anytime particularly soon. We currently understand that there's nothing magical about how the brain stores information; neurons are polarised with patterns of stimulation which last for extremely long periods. The problem is that we don't understand the encoding format, or the various other laws surrounding it - the essential "capacity" of neurons for storing experiences, what kind of experiences, for how long, how many neurons are needed for particular kinds of memories - and so on.

We also currently lack the equipment to investigate this at a sufficiently high granularity, but we do have magnetoencephalography (MEG) and functional MRI, and those two technologies are already letting us read the raw "stuff" of neurological activity in a crude kind of way. In time, as the technology improves, I don't see why it shouldn't be possible to use non-invasive technologies to read neurological activity from outside the skull, then go some way towards matching activity in a particular region to perceived sensory experiences and translating it into an externally readable format. Again, though, that would become a whole field of study in its own right, because we can not only recall events which have happened to us, but also imagine ones that haven't.

Once/if there is a strong reason to use mechanical augmentations (e.g. the ability to provide artificial memories), it opens the door to a much richer data set for signal analysis, which means we could start doing deductive work on thought. Right now so much of what we're doing is handicapped because it's experimental and we're using the same equipment we're experimenting on.

I also want to point out that even if it takes ~10,000 years, that's a drop in the bucket compared to the ~4 billion or so it took to get to this point. A very small drop, considering where we were 10k years ago. We're just getting started.

One of the other questions that is tangential to this is: why hasn't another species on another planet already done this (or, if they have, why do we see no evidence of it, or at least of the technology it presumably depends on)? It's a scary thought, but maybe we're the first to roll snake eyes 1000x consecutively. Or maybe others have gotten here before but died before reaching the tech level where an interplanetary civilization was sustainable.

One of the other questions that is tangential to this is: why hasn't another species on another planet already done this (or, if they have, why do we see no evidence of it, or at least of the technology it presumably depends on)?

Er, it's entirely possible it has happened thousands of times...we simply have no way of detecting it.

Er, it's entirely possible it has happened thousands of times...we simply have no way of detecting it.

Unless they don't use electromagnetic tech the way we do (or are a lot more advanced in their use of it), shouldn't we expect the SETI guys to find something? I know it's a needle-in-a-haystack issue, but some radiation should be pretty easy to see. I'm not saying we'd detect every civilization out there, but if civs with our level of tech are common, shouldn't we have noticed at least one of them by now?

Well, thousands of times, maybe not. There's a lot of noise, and the universe is large. But even 9,999 instances of tech civs out there would hardly make them common. I think life itself is certain to exist outside of Earth; self-awareness would be at least an order of magnitude less common.

I know it's a needle-in-a-haystack issue, but some radiation should be pretty easy to see. I'm not saying we'd detect every civilization out there, but if civs with our level of tech are common, shouldn't we have noticed at least one of them by now?

The problem with EM signals is that they weaken as they disperse...and they are eventually indistinguishable from background noise. Barring very directed signaling, and detection of same, it's extremely unlikely we would notice any EM signaling not intentionally designed to be detected at great distance. Even if there were a million such civilizations, assuming some relatively even distribution of same, you're not just looking for a needle in a haystack; you're looking for a needle in all the haystacks, all the beaches, all the deserts, and all the oceans.
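
The dilution is just the inverse-square law: an isotropic transmitter's power spreads over a sphere of area 4*pi*d^2, so received flux falls off brutally with distance. A back-of-envelope check with illustrative numbers (a 1 MW transmitter at 100 light-years; both figures assumed):

```python
# Back-of-envelope inverse-square check: flux from an isotropic
# 1-megawatt transmitter at 100 light-years. Numbers are illustrative.
import math

P = 1.0e6                          # transmitted power in watts (assumed)
d = 100 * 9.461e15                 # 100 light-years in metres
flux = P / (4 * math.pi * d ** 2)  # power spread over a sphere of radius d
print(f"{flux:.2e} W/m^2")         # ~8.9e-32 W/m^2: vanishingly faint
```

Narrowband, deliberately beamed signals fare far better against the noise floor, which is why that's mostly what SETI searches look for.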