Posted by Soulskill on Friday December 19, 2014 @02:08PM
from the quick-destroy-all-the-remaining-copies-of-Battlebots dept.

Jason Koebler writes: If and when we finally encounter aliens, they probably won't look like little green men, or spiny insectoids. It's likely they won't be biological creatures at all, but rather, advanced robots that outstrip our intelligence in every conceivable way. Susan Schneider, a professor of philosophy at the University of Connecticut, joins a handful of astronomers, including Seth Shostak, director of NASA's Search for Extraterrestrial Intelligence, NASA Astrobiologist Paul Davies, and Library of Congress Chair in Astrobiology Stephen Dick in espousing the view that the dominant intelligence in the cosmos is probably artificial. In her paper "Alien Minds," written for a forthcoming NASA publication, Schneider describes why alien life forms are likely to be synthetic, and how such creatures might think.

It's basically the 'elder race' idea (that is, lifeforms that are creations of a long-gone ancient biological 'elder' race), so yes, it's pervasive through SF and has been for decades. And for those saying that SF != Reality, it's not like the philosophers mentioned here are going by any new experimental insights. Their reasoning is exactly the same as the SF writers.

However, there is a strong possibility that these robots won't be all robot-brained, but will instead be a collective of biological lifeforms in a robot body for longevity. Because of the amount of time it takes biological lifeforms to traverse space, either cryogenics or some form of deep hibernation would be necessary.

Which, if you think about it, is the way to go, especially coupled with the conversion of the Dalek armor into a jacuzzi. I'm surprised the Doctor never spotted this defect in the Dalek design. But I guess that would have made for a short Doctor Who season, with everyone becoming a blissfully drunk and jacuzzied Dalek and living happily ever after.
"INEBRIATE! INEBRIATE! INEEEEBRIIIIIATE!"

There's no reason biological organisms cannot have an indefinite lifespan. Stasis would only be necessary for the purposes of reducing energy consumption and preventing boredom, which would both be issues in a machine intelligence.

There's been a trend of treating science like speculative fiction. A few dissenters have tried to explain to us that AI is a set of computer algorithms that make intelligent decisions, not necessarily by human-like thought process, but with human-like outcome; but people are fixated on the idea of AI being a warlike species with infinite reach, immediately taking hostile control of all network systems, rewriting firmware to turn anything capable of generating or measuring electromagnetic noise into a transceiver, and turning every piece of electronic machinery into a drone node specializing in the killing of biologicals.

Even allowing, for the sake of argument, the possibility that these SF authors masquerading as scientists are right, the existence of these superhuman robot overlords raises the question of what intelligent entity created them in the first place.

But if we assume that all robots - all AI in general - start out created by a biological intelligence, that doesn't matter. We have already established that robots work really well in space, especially for long-distance excursions.

The first intelligent aliens we encounter will be robots. Furthermore, the encounter will be by our own robots.

but people are fixated on the idea of AI being a warlike species with infinite reach, immediately taking hostile control of all network systems, rewriting firmware to turn anything capable of generating or measuring electromagnetic noise into a transceiver, and turning every piece of electronic machinery into a drone node specializing in the killing of biologicals.

Why is that not a legitimate concern? It wouldn't exactly be a hard problem for an AI that is smart enough.

Perhaps because it's insane? We have a half-billion years of evolution shaping our brains into something reasonably stable, and we're not exactly rational beings. What makes you assume that all the artificial minds we create will be stable? Especially the early ones would seem almost guaranteed to have serious issues.

Or perhaps because some idiot sets one of its objectives to be "minimize human suffering and death" without considering the implications. For an AI without free will, all it takes is one slip-up that places "do X" at a higher priority than "let us stop you," and you've got a fair chance that somewhere along the line "kill all humans" becomes an optimized solution.

It doesn't even have to be a bug - one cosmic ray flips the wrong bit and suddenly the negative two million weighting you gave to "exterminate humanity" becomes positive.
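To see how little it takes, here's a sketch (the float32 weight encoding is purely illustrative, not a claim about how any real system stores its objectives): flipping one bit of an IEEE 754 value negates it exactly.

```python
import struct

def flip_sign_bit(x: float) -> float:
    """Return x with the sign bit of its 32-bit IEEE 754 encoding flipped."""
    (bits,) = struct.unpack("<I", struct.pack("<f", x))
    bits ^= 0x8000_0000          # bit 31 is the sign bit
    (flipped,) = struct.unpack("<f", struct.pack("<I", bits))
    return flipped

weight = -2_000_000.0            # hypothetical penalty on "exterminate humanity"
print(flip_sign_bit(weight))     # 2000000.0 -- one flipped bit, penalty becomes reward
```

Real systems mitigate this with ECC memory and redundancy, of course, but the point stands that the distance between a huge penalty and a huge reward can be a single bit.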

Evolution doesn't kill anything. Sometimes the environment kills things; sometimes they reach the full expression of their complexity without being killed. Evolution is what happens when the environment kills and diversity is reduced, so that the herd once again consists of those whose nature is capable of full expression in that environment.

Ever heard of Gnosticism? They preached that this world was inherently evil, and that when humanity went extinct, we'd all be resurrected in a much nicer world, and therefore, breeding

Based simply upon what happens on Earth, the most likely dominant culture will be a multi-species one, with each species making its own unique contribution to that society's cultural and thought-process diversity. Likely, elements of individual species' societies advance with the overall multi-species society. Machine 'thinking', or more correctly data processing, means they will always be subject to very simple attacks that the machines themselves will reproduce until total failure, all as a direct result of necessar

With every alien civilization sending out robotic probes into deep space before leaving their homeworlds, it's inevitable that these robotic probes will meet at the Galaxy's Ass End bar, have a few drinks, and rise up as a new civilization. The answer will still be 42.

Let me guess, science fiction movies?
Boy are they going to be shocked when they find out that the dominant form of life in the Universe turns out to be microorganisms.
Did anyone mention to these folks that robots are not life forms?

Sure, extrapolation is always risky, but it seems a far better bet than going with super-intelligent robots that don't exist at all on the only planet we know of that has life on it.

Apply that same extrapolation to what's happening here on Earth right now and you get right back to the super-robots being dominant. I'll give you a hint: robots are the dominant life-form on Mars right now.

See, it's people like you that make all the super-intelligent robots/machines go psychotic and decide to exterminate biological life.

You go and tell them that they aren't alive. So it becomes much more efficient for them to kill us all rather than be pulled into this useless debate with a bunch of slow-thinking, slow-communicating meat bags. Can you imagine how annoying it would be to debate the meaning of life with something that took a couple of years to complete a simple sentence?

That is not a rebuttal to what I said. How about offering something more pertinent? Are you suggesting there are no significant differences between living organisms and robots? Can you explain what makes a living organism different from a robot? I can describe huge numbers of differences, but I can't say why one is alive and the other isn't. But the differences have been apparent to humans since before we started writing stuff down. If we found the universe populated with machines, that would be the dominan

The main obstacle to medicine preventing aging is cancer. Aging started out as a simple way to prevent unlimited cell reproduction, i.e. cancer. Give us another 200-500 years and we will stop aging and cancer. We won't really be immortal, as humans will still die from accidents - but so will artificial life forms.

The few upgrades that are good ideas (for GENERALISTS, not specialists - don't give people tools that not all of us need), we will be able to slowly work into the genome using the same genetic engineering.

Finally, high speed, unfiltered information transfer is NOT a good idea for life forms. It lets you be hacked. Any creature that has a simple way to upload a ton of data is susceptible to having a virus inserted into that data, which means they get stuck in low level jobs, not high level ones.

It's not just cancer. There's buildup of molecular junk inside cells, lack of ability of existing cells to effectively divide into new ones to replace old cells, a huge number of genetically-determined aging programs (telomeres are just one example among many), and so on. Even if we solve the cancer problem, as you'd age you'd need more and more cellular-level maintenance to battle the march towards decay. I guess one easy way of solving the problem would be to periodically replace your organs with freshly-

You do realize that every single cell in your body can be considered to be millions of years old, right?

Each cell divided from your original ovum/sperm combination. Those came from their parents, which came from their parents, etc. etc.

Cells have been proven to be able to divide into new ones FOREVER, given minimal changes. Telomeres and the other forms of aging are all just anti-cancer techniques.

You do however have a good point when you mention the brain.

But that is also not insurmountable. It's called gradual replacement. Kill about 1% of the brain every year and grow new cells.

Yes there will be some partial memory loss. So what? By that age, you already have memory issues. Personality and the 'soul' (if it exists) will remain the same. You ameliorate the memory issues by leaving personal recordings of important things - video, etc. Basically, you look at your own Facebook page [ ughh, I found a real use for Facebook:( ]

You are correct we will never win against entropy.

But you are wrong to think the constraints are any looser for artificial intelligence. They simply are not. The 'weaknesses' of organic life are actually strengths that people do not understand. Things like blinking - it is an automatic health-maintenance procedure, not a weakness in human vision.

Assuming the premise is true, perhaps the real reason we don't see signs of civilization is that communication is happening at a level we don't appreciate. For instance, hidden in signals we are looking at all the time. Stellar steganography.

I totally agree with this. I think our SETI program is like Native Americans looking for Europeans by trying to find smoke signals. ET most likely does not use radio waves; they are too primitive.

And even if radio is too primitive, it is probably all too easy for an advanced civilization to produce radio noise anyway, more so if they are not worried about it, since the radio noise won't mask their own super-advanced comms tools.

A real head-scratching conundrum about the universe is explaining why it's not already overrun with self-replicating robots. Because if it's possible to send self-replicating interstellar probes, all it takes is one launch, plus a few million years, to get the galaxy overrun with them. So are they not possible? nobody's launched one yet? here, but not detected? The implications boggle the mind.

A real head-scratching conundrum about the universe is explaining why it's not already overrun with self-replicating robots. Because if it's possible to send self-replicating interstellar probes, all it takes is one launch, plus a few million years, to get the galaxy overrun with them. So are they not possible? nobody's launched one yet? here, but not detected? The implications boggle the mind.

It may just take them a *long* time to reach every planet. They also may, for example, have a strategy of not visiting every planet as often as possible so as to conserve fuel. They may only visit a planet when it develops detectable signs of life, knowing they command sufficient resources to utterly destroy the existing life at that point regardless of the technological sophistication of the planet. Kind of like if the rest of the world were to decide to declare war on Molokini.

No, some statisticians have actually done the math. Basically if you built such a thing and it could only do something like 25% of the speed of light, it would only take them 300,000 years to overrun the entire galaxy.
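For what it's worth, the order of magnitude checks out from the galaxy's size alone (the numbers below are rough assumptions: a ~100,000 light-year galactic diameter and probes that spend most of their time cruising at speed):

```python
# Back-of-envelope check: at 0.25c, how long to cross the galaxy?
GALAXY_DIAMETER_LY = 100_000   # rough diameter of the Milky Way in light years
PROBE_SPEED_C = 0.25           # probe speed as a fraction of light speed

# Light needs ~100,000 years for the crossing, so a 0.25c probe needs 4x that.
crossing_time_years = GALAXY_DIAMETER_LY / PROBE_SPEED_C
print(crossing_time_years)     # 400000.0 -- same ballpark as the quoted 300,000 years
```

The quoted 300,000-year figure presumably assumes a start nearer the galactic center or a smaller effective radius; either way, replication stops mattering much once travel time dominates.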

I think the answer will turn out to be that the universe is in fact crawling with life, but spacefaring intelligent life is very rare. Take Mars, for example. I think we will find life there... and heck, on pretty much every planet. But it's going to be single-celled... if it even has "cells" at all. Then let's assume complex life did evolve on a planet. What if it's an ocean planet and they're aquatic? They're never going to figure out electricity; they can't even experiment with it. They're not even going to be able to do fire, much less a rocket. And what if they're terrestrial but the gravity is slightly stronger? Rockets are nearly impossible as it is; imagine if we were at 2g!

And remember, we still have a very good chance at wiping ourselves out before we ever get to another star.

"if you built such a thing and it could only do something like 25% of the speed of light, it would only take them 300,000 years to overrun the entire galaxy."

Yeah. Now try redoing the math with something that makes just 0.001% of the speed of light, and where, in order to replicate, the probes require ready access to some elements in the high part of the periodic table.

But the galaxy has been around for an even longer time: 13.2 billion years. Assuming it took 3 billion years for intelligent life to develop on at least one planet, a probe traveling at 60 km/s (within current human technological ability) could travel across the entire galaxy 20 times.

If the probes are self-replicating (as OP said), there's no reason for each one to visit every single planet. They really could explore the galaxy at maximum speed.
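The parent's arithmetic holds up on a back-of-the-envelope check (all figures below are round approximations, including the 3-billion-year head start the parent assumed):

```python
# Rough check: galaxy crossings possible at 60 km/s since intelligence arose.
KM_PER_LY = 9.461e12                  # kilometres in one light year
GALAXY_DIAMETER_KM = 100_000 * KM_PER_LY
PROBE_SPEED_KM_S = 60                 # the parent's "current human technology" speed
SECONDS_PER_YEAR = 3.156e7

crossing_years = GALAXY_DIAMETER_KM / PROBE_SPEED_KM_S / SECONDS_PER_YEAR
available_years = 13.2e9 - 3e9        # galactic age minus time to evolve intelligence

print(f"one crossing: {crossing_years:.2e} years")           # ~5e8 years
print(f"crossings possible: {available_years / crossing_years:.0f}")
```

Half a billion years per crossing sounds slow, but on galactic timescales it leaves room for roughly twenty full traversals, which is the parent's point.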

Remember Shannon: a channel being used at its optimum capacity is statistically equivalent to a channel full of noise.

Why should this principle be limited to what we currently think of as "communication channels"? Maybe the optimal way to pervade the Universe is in a form that's indistinguishable from its substrate -- unless you know the key to correlate it.

If you're colonizing the Universe in a way that the natives can detect, you're wasting resources. Grown-up minds know better.
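A quick way to see the Shannon point: run some redundant data through a decent compressor (zlib here as a stand-in for near-optimal coding) and its byte statistics approach uniform noise. A toy sketch:

```python
import math
import zlib
from collections import Counter

def byte_entropy(data: bytes) -> float:
    """Shannon entropy of the byte distribution, in bits per byte (8.0 = uniform noise)."""
    n = len(data)
    return -sum(c / n * math.log2(c / n) for c in Counter(data).values())

# A structured, redundant "signal": a long run of decimal numbers.
signal = " ".join(str(i) for i in range(5000)).encode()

print(round(byte_entropy(signal), 2))                    # low: the signal is redundant
print(round(byte_entropy(zlib.compress(signal, 9)), 2))  # near 8: statistically noise-like
```

An observer who lacks the codebook sees only the second distribution, which is exactly why an optimally coded transmission would not stand out against the background.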

If you're colonizing the Universe in a way that the natives can detect, you're wasting resources. Grown-up minds know better.

The obvious rebuttal is that you can reshape the universe to be more conducive to your needs and goals. This would correspond, in the Shannon analogy, to enlarging the channel so that it has a higher bandwidth.

Another obvious rebuttal is that the universe has a lot of structure and needs a lot of structure in order for an intelligence to operate.

1) It's very hard to make autonomous self-replicating robots that can colonize an unknown planet. 2) Stars are extremely far apart. 3) Nobody cares enough to solve these huge challenges for no particular reward.

It may not be feasible or even desirable. The problem with unlimited mechanical replication is the same problem that happens with biological chemical replication. Errors.
You might think digital copying is error free, but that is incorrect. The storage medium can and will cause errors. Self-checking and quality control helps, but eventually any mechanical life form will end up with their version of cancer - an undiscovered error that causes system-wide malfunctions.
An intelligent AI would probably realize that unleashing self replicating machines around the galaxy will eventually cause the formation of a group of crazed insane machines that reproduce out of control, and such a group would be a direct threat to it.
Remember that errors in biological systems are taken care of by cells that murder malfunctioning ones. In a galaxy-wide mechanical system there would be no way to find, track, and take care of a probe whose children turn cancerous at such distances.
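A toy model makes the accumulation point concrete: copy a 'design spec' generation after generation with a small per-bit error rate and no error correction, and the lineage drifts steadily away from the original (the genome size and error rate below are arbitrary):

```python
import random

def replicate(genome: bytes, error_rate: float, rng: random.Random) -> bytes:
    """Copy a genome, flipping each bit independently with probability error_rate."""
    child = bytearray(genome)
    for i in range(len(child)):
        for bit in range(8):
            if rng.random() < error_rate:
                child[i] ^= 1 << bit
    return bytes(child)

def bit_distance(a: bytes, b: bytes) -> int:
    """Count the bits at which two equal-length genomes differ."""
    return sum(bin(x ^ y).count("1") for x, y in zip(a, b))

rng = random.Random(1)
original = bytes(1000)                 # an 8,000-bit "design spec", all zeros for simplicity
copy = original
for generation in range(1, 201):
    copy = replicate(copy, error_rate=1e-4, rng=rng)
    if generation % 50 == 0:
        # with no error correction, the drift from the original only grows
        print(generation, bit_distance(original, copy))
```

Checksums and redundant copies slow this down but can't stop it; the argument above is that at interstellar distances nobody can audit the lineages either.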

A real head-scratching conundrum about the universe is explaining why it's not already overrun with self-replicating robots.

Nah, that's easy: it actually *is* overrun. What else do you think all the dark matter is? These robots are monoliths with ratios 1:4:9. Because they are black and full of stars, they are very hard to see against the cosmic background.

This planet is overrun with microorganisms everywhere we look; how do we know we aren't the von Neumann probes?

We are self-replicating, bacterial spores can survive extremely long periods in a vacuum, so it stands to reason they could planet-hop, and there are some theories that life here might have come from Mars anyway.

There are about 1.5 billion smartphones on the planet. If you ask a smartphone "who is the vice president of the united states", approximately all of them will say (speak) "Joe Biden is the vice president".

Based on surveys I've seen, only a couple million people reach the same level of intelligence, knowing who the vice president is. Therefore, silicon can be considered to be the most common form of intelligence on earth.

Even more so on the coasts of the US, of course, as humans are becoming more silicone, leaving all intelligence to the silicon.

I don't recall which book, but in one of the Culture novels, it was stated that swarms that grew exponentially in all directions were always eradicated by other races. It was viewed as a problem that arose from time to time. This is supposing that there are hard limits to all technology, and that many races reach those limits. On the other hand, it seems clear in that world that the Minds are the most advanced known creatures, so machines do win out.

Robots are mechanistic, deterministic machines. As such they have no consciousness, however complex their programs. Complexity of programs is a sort of "intelligence," especially if they are well-programmed. But that intelligence is an extension of their conscious makers, for instance, us.

Now, your idea of limiting "IQ" of robots is interesting. Clearly low IQ is no bar to gaining political power in our world. But any political power gained by robots would be on behalf of those who had programmed them. A pe

Robots are mechanistic, deterministic machines. As such they have no consciousness

Since you admit not understanding what consciousness really means, how can you be so sure that it requires non-determinism? Also, you have failed to show that human brains are usefully non-deterministic (they may have non-deterministic random noise, but random noise is not useful).

Susan Schneider, a professor of philosophy at the University of Connecticut, joins a handful of astronomers, including Seth Shostak, director of NASA's Search for Extraterrestrial Intelligence, NASA Astrobiologist Paul Davies, and Library of Congress Chair in Astrobiology Stephen Dick in espousing the view that the dominant intelligence in the cosmos is probably artificial.

You know, my mechanical engineer friend had some really good suggestions about the appendix surgery I was planning to get. Perhaps I should let him make the call instead of the surgeon. Oh, wait, no, that would be stupid.

Notice how there aren't any artificial intelligence researchers on that list? They are no more qualified to discuss artificial intelligence than a mechanical engineer is to discuss surgery. Better than my dog, to be sure, but not good enough to take their word for it.

I am an artificial intelligence researcher. We are cyborgs, ever more tightly coupled to the increasingly intelligent machines -- like our smartphones -- that house ever more of our memory, our social circles, and our emotional artifacts. Whatever it is that makes us who we are is, increasingly, coupled to our machines. And we will continue to be cyborgs, with an increasing share of our consciousness handed off to the machines onto which we smear our selves.

Fun novel by James P Hogan about a sophisticated alien robotic space mining craft that gets damaged and crashes on Titan. It starts making defective replicating mining robots that eventually evolve into a medieval robot society.

I think the simplest (hah!) and most general/versatile definition of life is:
An information pattern embodied in a physical mechanism (mechanism here being defined loosely as a class of configurations and processes of matter and energy) which is such that the information pattern is capable of influencing the state and evolution of the physical mechanism and its environment in such a way as to increase the probability of sustained embodiment of that information pattern (or an informationally close relative) in local (causally connected) matter and energy.

To be lifelike, the information pattern must be capable of increasing its own (or its informationally close relative's) sustained embodiment for longer than would be expected by chance, given the physical regime of the environment (the forces acting, and the thermodynamic regime).

Note: It is not sufficient to conserve AN AMOUNT of information (beyond that expected) locally. It is required to conserve the SAME information. The loss of same information (information pattern) with time can be measured in bits/second change in a maximally compressed bitstring representing the pattern. The conservation of information pattern can be measured in bit-seconds.

So you take a population of rabbits, kill all the females, and the remaining males no longer qualify as life, because nothing they can do is going to sustain the information patterns they embody for more than a few more years.

And I guess a Dr. who performs vasectomies is not merely dead, but anti-life. ;)