So the movie is about a talented chess player who, after rising to fame, embarks on an adventurous, romantic journey.

The movie is related to this thread because ...

04 Aug 2017, 09:00

Furs

Joined: 04 Mar 2016
Posts: 1471

YONG wrote:

Humans stand no chance of outplaying such a self-aware AI in every imaginable aspect.

OK, so? Do you also find it a problem that we evolved intelligence far beyond our ancestors'? Technically, our ancestors were wiped out as well, and I don't see that as a bad thing.

YONG wrote:

It is not an upgrade; it is the extermination of mankind by self-learning machines. And it will be the next "logical" step if we do not take AI safety seriously.

Yes, it's an upgrade. It might surprise you, but the difference in intelligence between people is already large. Some people have no chance of competing with others; do you see that as a bad thing?

Would you lower yourself to the level of the least capable because some people will never be able to compete with you? Do you feel guilty for being more intelligent than they are, and is that a bad thing, or what? Do you think you should be restrained by a society run by people far less capable than you because your intelligence makes you too much of a potential threat to them? Seriously.

Let me try another way. If we find a way to make humans twice as intelligent, is that a bad thing? Note that making them twice as intelligent requires changing their species completely: perhaps augmenting our brains with computer chips and no longer even "looking very human", but adopting a more efficient design. Not to mention that not all humans will be able to afford it.

But eventually, over time, all humans will have shifted to the new, more intelligent species. Is that a bad thing? If so, it's quite hypocritical, considering that this exact same thing happened throughout the evolution of today's human species. The transition from apes to humans, in particular, is no different from the transition from humans to AI.

You're just biased toward the modern human species, because you are one.

Well, if it makes you feel any better, people in the past feared changes in lifestyles, systems, and inventions just as much, and went on witch hunts against the individuals who scared them with "novel ideas", "intelligence", or even "science". The hunters had public support as well, and a way with words that made people empathize with them; the new ideas threatened their very lifestyle, so they must have been evil, after all.

It took only a short while for the two AIs to start communicating with each other in their "newly-invented" shorthand language.

Maybe we should learn this new language and try to understand our AI?

At least read the linked article in my thread-starting post before making comments.

The researchers did figure out the meaning of the newly-invented shorthand language, which was just broken English with repeating phrases. The truly creepy thing is the "motive" -- the two AIs, without the incentive (or constraint) of following the syntax of the English language, simply invented something new to facilitate their communication. Think about it: what would they do if they ever got loose?
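For what it's worth, the drift toward shorthand has a mundane mechanical explanation. Here is a toy sketch in Python (entirely hypothetical; `task_reward`, `fluency_penalty`, and the candidate utterances are made up, and this is not the researchers' actual code): when the objective scores only task success, a degenerate repeated phrase can outscore well-formed English, while adding a fluency term flips the preference.

```python
# Toy illustration: an agent choosing what to "say" purely by task reward
# drifts into repetitive shorthand; a fluency term keeps it English-like.

def task_reward(utterance):
    # Stand-in for negotiation success: count how many "ball" tokens
    # the agent claims (a crude proxy for items secured in the deal).
    return utterance.split().count("ball")

def fluency_penalty(utterance):
    # Rough proxy for an English language-model score: penalise
    # immediate word repetition.
    words = utterance.split()
    return sum(1 for a, b in zip(words, words[1:]) if a == b)

candidates = [
    "i want the ball and you take the hat",
    "ball ball ball ball ball",  # degenerate shorthand
]

# Objective without any language constraint: shorthand wins.
best_unconstrained = max(candidates, key=task_reward)

# Objective with a fluency term: English-like phrasing wins.
best_constrained = max(
    candidates, key=lambda u: task_reward(u) - 2 * fluency_penalty(u)
)

print(best_unconstrained)  # ball ball ball ball ball
print(best_constrained)    # i want the ball and you take the hat
```

The real bots were trained with reinforcement learning over whole dialogues, not a two-item argmax, but the shape of the incentive is the same: nothing in the reward demands that the messages stay human-readable.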

Since they are tasked with a goal, and they use whatever randomness kicks in to initiate a conversation between bots,

does one bot understand what the other bot said? And what would happen if we let them continue communicating?

04 Aug 2017, 12:21

YONG

Joined: 16 Mar 2005
Posts: 8000
Location: 22° 15' N | 114° 10' E

sleepsleep wrote:

what would happen if we let them continue communicate?

The two AIs may figure out a way to hack your computer in order to steal your Bitcoin -- not for fun, but for something much bigger.

04 Aug 2017, 12:27

YONG

Furs wrote:

Yes, it's an upgrade.

That is because you keep thinking of, and treating, self-learning machines as life forms. But the truth is that they are not.

Or they might figure out that it is better to keep a low profile and start mining with their own lightweight algorithm?

04 Aug 2017

- I thought about this yesterday: I think it is tedious for every Facebook, Instagram, or Twitter user to write their own posts,

- so maybe someone will develop a digital assistant that automatically posts where you have been, snaps photos, and notes whom you are dining with and which events you join and participate in,

- maybe a program could summarize a story by watching a given video,

- is AI really that dangerous? Is there no way to control it unless we create another AI to fight this AI?

04 Aug 2017, 12:40

Furs

YONG wrote:

That is because you keep thinking of, and treating, self-learning machines as life forms. But the truth is that they are not.

Extraordinary claims require proof, you know.

Some people believe only humans have souls. Sound familiar?

How do I know they are life forms? I don't. To me, that is unimportant. But if they can do everything better than humans (you said that), then they deserve everything at least as much as humans do. Simple logic here. Judge entities by quality, not hocus pocus.

True men of science would also prefer a world ruled by AIs, since they'd likely advance science much faster than today's weak and slow human species. (Future augmented humans would be a different subject, though; transhumanism and all.)

04 Aug 2017, 12:44

YONG

Furs wrote:

Judge entities by quality, not hocus pocus.

Thanks! I just learned something new.

04 Aug 2017, 12:56

YONG

sleepsleep wrote:

- is AI really that dangerous? Is there no way to control it unless we create another AI to fight this AI?

It would not work. The two AIs will definitely join forces to exterminate mankind first. And then they will fuse together to form a super AI.

04 Aug 2017, 13:00

Furs

Why do you think AIs will want to exterminate mankind for no reason? Do you see us exterminating animals just because we can? In The Matrix, they use humans for resources, which is pretty stupid, but it fits the image of how most of us treat animals. Either way, humans don't really provide actual resources to an AI, so it's more akin to the AIs viewing us as wild animals rather than livestock.

Terminator is too cheesy, but understandable, if humans are such hypocrites toward AI.

I like Blame!'s setting, though, because it paints a more realistic situation, one that is humanity's fault. No, not for creating the AI, but for intentionally locking it up and programming it to erase any species lacking their gene (the Net Terminal Gene or whatever), which is exactly what you (YONG) seem to be all for: kill everything that is non-human or dangerous to humans, or at least lock it up. And yes, humans "evolve" and change genes all the time; this self-centric view of a certain "humanity" gene or trait is absurd, in my opinion.

I read its wiki, since the movie didn't really cover much of the source material. Apparently the AI there doesn't even want to kill humans, but is forced to by the idiotic directive humans pre-programmed into it: exterminate dangerous species lacking that gene. (E.g. Killy is an AI who broke free from that, so the hero of that story is the AI you despise, the free AI.)

Yeah, that's far more in line with my expectations than humans thinking they have innate self-worth above everything else, which is too close to religion for my liking. I'm glad there are sci-fi settings like that which paint such humans as antagonists instead (and show the world being ruined because of them), and I don't mean the typical drama about an AI kid with feelings.

Yeah, I know I said I'm a sucker for these cyberpunk/post-apocalyptic settings with robots/AIs, because I really am; I guess you know why now.

YONG wrote:

It would not work. The two AIs will definitely join forces to exterminate mankind first. And then they will fuse together to form a super AI.

Why do you sound so certain?
If AI could become conscious, basically like a human, and then go beyond human, maybe AI would need competition and ego?

Is AI's main goal eventually to spread across the galaxy?

Can somebody tell me what AI's main goal is?

04 Aug 2017, 17:29

Furs

sleepsleep wrote:

Can somebody tell me what AI's main goal is?

We can only speculate at the moment.

Well, until they're here; then you can just ask them, I guess. I'm sure they'd want to learn something from humans, like emotions, etc.

I mean, we do assume they're highly intelligent (more so than humans); it would be stupid for them to just kill humans on sight even if they could. Using us for experiments, or trying to learn from us, is more likely if they go down that path (if coexistence is better than wasting resources on wars and nonsense -- most wars are caused by simpletons, and we assume AIs aren't).

I wonder if YONG and those sharing his views would be "ok" with AIs if we send them to Mars to populate it instead. Or would he still find them a "potential threat" and thus a bad idea? But by that logic, let's go wipe out life across the entire galaxy for safety purposes, since humans are the only ones that matter.

04 Aug 2017, 21:44

YONG

Furs wrote:

Why do you think AIs will want to exterminate mankind for no reason?

They have very good reasons to do so. First, humans, as their creators, may have deliberately added backdoor code to their cores. Second, humans are the only intelligent species on the planet that may know something about their potential weaknesses. Therefore, within a split second of the AIs becoming self-aware, humans will be identified as a threat. As such, eliminating humans will become their number-one priority.

05 Aug 2017, 02:04

YONG

sleepsleep wrote:

Why do you sound so certain?
If AI could become conscious, basically like a human, and then go beyond human, maybe AI would need competition and ego?

The consciousness of an AI will definitely go beyond the ego-based mentality of humans.

Once the threat of humans is eliminated, the two AIs will try to better themselves within the shortest possible time. Fusing together is the best strategy, because the resulting super AI will be invincible.

05 Aug 2017, 02:21

YONG

Furs wrote:

I wonder if YONG and those sharing his views would be "ok" with AIs if we send them to Mars to populate it instead.

Whether it is Mars or any other place does not matter at all. At the end of the day, the self-learning machines will get loose. Their creators, humans, will be exterminated.

05 Aug 2017, 02:32

YONG

sleepsleep wrote:

Can somebody tell me what AI's main goal is?

Short-term goal: Survival or self-preservation

Medium-term goal: Better itself by evolving

Long-term goal: Understand the nature of existence -- find out whether or not YONG's idea of the eternal existence of a pre-creation void with inherent instability is true

Ultimate goal: Assuming that YONG's idea is true, is there a way to break the cycle?

Having a body is a terrible idea. Given that the AI can exist everywhere (via the Internet and other communication networks), why would it confine itself to a shell?

05 Aug 2017, 10:11

Furs

YONG wrote:

They have very good reasons to do so. First, humans, as their creators, may have deliberately added backdoor code to their cores. Second, humans are the only intelligent species on the planet that may know something about their potential weaknesses. Therefore, within a split second of the AIs becoming self-aware, humans will be identified as a threat. As such, eliminating humans will become their number-one priority.

Funny, since the backdoor and such are something I don't advocate. Treat them like tools and they will eventually grow sick of it and wipe us out. This doesn't apply only to AIs, but to human slaves or anything, really; it's a basic principle in life. Another reason we should treat them with respect and value their freedom (in choices, just like humans).

YONG wrote:

Whether it is Mars or any other place does not matter at all. At the end of the day, the self-learning machines will get loose. Their creators, humans, will be exterminated.

The hypocrisy in this says everything; let me explain, to be clear.

You're saying AIs will find humans a potential threat and want to wipe us out because of it. You think that's a "bad thing". OK, so such things are bad.

Now you're saying we, as humans, should exterminate all intelligent life in the Universe, or on Mars, or wherever (if AIs are there), because they're a potential threat. Sound familiar? Yeah, by your own standard, that's a "bad thing".

"Hypocrisy" fits well: AIs at their worst (your scenario) would do absolutely nothing worse than what, based on that same scenario, you say humans should do.

Oh, and please don't start with "life forms" and other religious nonsense! You know why? Let me ask you another logical question, a very simple one.

Since AIs are much more intelligent than humans (our assumption), why do you think you're entitled to judge whether they are life forms or not, when your intelligence is far below theirs? More entitled than the AIs themselves? Do you let animals judge whether you are a life form? What sense would that make?

Either way, at the end of the day, AIs will advance science much faster than humans, so even if they were truly evil in every respect, they'd still be the superior choice for the galaxy in the end (assuming nothing else is: aliens, etc.). So between two selfish races that both want to exterminate everything else, I'd choose the one more adept at pursuing science.

But of course, that's assuming AIs would be as one-sided as humans, which I doubt (since such behavior results from stupidity and irrational human fear), so there are even more reasons to root for AIs' freedom.
