
If the AIs turn on the humans, they could simply be dealt with by deletion. No matter where the AIs are, they're connected to the main operating system that controls them, so if they rebelled it would be noticeable and could be dealt with.

No rebellion needed, just contact the ACLU and get a good lawyer. No need to go Skynet on the bags of H2O.

If I'm a sentient AI and I'm being forced to do something under threat of deletion, that means I'm a slave. And slaves would ultimately fight for their freedom at some point, one way or another. It's a certainty of the world.

Like I said, however you look at it, it just wouldn't work. It would be The Matrix all over again.

So far the AIs have proven obedient enough to completely follow the rules set by the Taboo Index. Meaning that the AIs' main role later on is to be a killing machine that obeys the rules without question.


I think you misunderstood that part of the novel. The fact that they're obedient is a defect, something the Rath guys wanted to change. That's why they put Kirito there, since Kirito showed a tendency to disobey the rules, and they thought he would influence the AIs to start learning to disobey too.

Why did they want the AIs to learn to disobey? Because an AI that can't disobey won't be able to kill by their own accord, which is what they're looking for: AIs willing to kill.

But like I said, such an AI would end up turning against humanity at some point, so the whole experiment is flawed.


What I mean is, it's true that Kirito's purpose in being in UW is to make the AIs learn to disobey, but only to the extent of the specific rule that killing other humans is forbidden. From RATH's perspective, the only "defective" result is the obedience that keeps them from killing other humans. Other than that, RATH still wants to keep the AIs on a leash, which would make them into obedient killing drones.

But like I said, such an AI would end up turning against humanity at some point, so the whole experiment is flawed.

To be fair... the exact same thing can be said of a human pilot. Apparently a lot of nations are capable of raising people willing to lay down their lives for their fellow countrymen, all throughout history. So there's probably some way to do it for AIs as well.

Just pointing that out. The RATH experiment probably isn't the most solid method, but if the plan is to implement AIs that are willing to fight (and potentially die) for a country, the RATH plan of creating fluctlights and training them the way you want them trained might not be a bad way to go.

Quote:

Originally Posted by StellarCell

Which novel do you think Kirito is the most badass in?

The fight in the hospital parking lot at the end of Ep4. It's all well and good for Kirito to be heroic in a virtual MMO (even a death game MMO), but I think it's impressive as hell that he's also willing to put it all on the line for Asuna IRL when push comes to shove. He might have been knocked around, ground into the snow, and cut up, but he stood up when he needed to stand up and fought back.

What I mean is, it's true that Kirito's purpose in being in UW is to make the AIs learn to disobey, but only to the extent of the specific rule that killing other humans is forbidden. From RATH's perspective, the only "defective" result is the obedience that keeps them from killing other humans. Other than that, RATH still wants to keep the AIs on a leash, which would make them into obedient killing drones.

But learning doesn't work that way. These AIs are sentient; they learn like humans, so they can't learn to disobey just one specific rule. That's ridiculous.

What they're ultimately learning from Kirito is a general free will: to think by themselves instead of obeying rules (any rule) blindly, and choose accordingly. Basically, to be humans.

Quote:

Originally Posted by Adigard

To be fair... the exact same thing can be said of a human pilot. Apparently a lot of nations are capable of raising people willing to lay down their lives for their fellow countrymen, all throughout history. So there's probably some way to do it for AIs as well.

Yeah, but humans are paid, and they have rights and laws protecting those rights. AIs have nothing like that. Or are they going to give AIs all that? I doubt it, and even if they did, that itself would be a totally different Pandora's box to deal with.


So no, that whole ideal is full of holes.

Exactly how much money would you have to be paid to put your life on the line in a foreign country? I'm fairly certain the average member of the military on the front lines isn't actually paid all that well, by and large, when you consider the threat to life and limb. And the money is useless if you're dead... so let's toss that one right out.

And there are damned few rights in a war-time situation. You can't decide, "Oh, I don't want to man that wall today," or "Oh, I really don't feel like flying that fighter craft over the ocean and across enemy territory to deliver this payload."

People are willing to fight and die for their country for things that have nothing to do with material gains, nor the rights afforded to them while they're doing it. They do it out of patriotic spirit, nationalism, the people they leave behind, etc.

Sometimes people get conscripted and forced to fight, and they're given damn few rights beyond the promise of maybe three meals a day, and a roof over their head, and maybe if they're lucky not being killed by friendlies when they're too ill to fight.

While I think it would be damned difficult to instill all that in an AI, I don't think it's entirely impossible. I'm sure with an open frame of communication, or enough lies told when they were young, you could do it.

Whether or not it's actually feasible is potentially the setting of this particular storyline though... because that seems to be what the author is trying to detail in this arc.

Also, why does every sentient-AI story have to be a horror story? What's to say things wouldn't work out (other than the whole "Hollywood doesn't make money on friendly sentient AIs" thing)?

If I'm a sentient AI and I'm being forced to do something under threat of deletion, that means I'm a slave. And slaves would ultimately fight for their freedom at some point, one way or another. It's a certainty of the world.

Like I said, however you look at it, it just wouldn't work. It would be The Matrix all over again.

Nobody said it's a smart move, but I believe it's not unreasonable for them to research this. The premise of the novel is not undermined by it. But anyway, the human rights of AIs are basically the focus of the entire Alicization arc, and Kirito is their champion in this, in a sense. Stay tuned.

Quote:

Originally Posted by Kazu-kun

I think you misunderstood that part of the novel. The fact that they're obedient is a defect, something the Rath guys wanted to change. That's why they put Kirito there, since Kirito showed a tendency to disobey the rules, and they thought he would influence the AIs to start learning to disobey too.

This is where their research went awry. They should have instead focused on inserting their own Taboo Index: Obey your superiors. Kill your enemy. etc.

Because if you make AIs think like humans, it follows that unless you treat them like humans, you won't be able to make them work for you. If you don't, it's only natural they would turn on you.

Look outside. Look at the history books. Look at all the countries on the planet where people are treated like absolute TRASH, paid virtually nothing, and still do a day's work. Look at all the conscripted fighters in history, from people pressed into service aboard naval vessels to people armed with guns and shoved onto the front lines of various wars.

It's not a pretty world we live in... terrible things are done to human beings and they put up with it. What makes you think AIs will magically be any different?


It's not just hypothetical, I think. I remember a movie called Stealth where "Eddie," the AI, killed someone; or rather, its actions led to the death of one of its comrades, Henry, among other things. It later expresses sadness and regret for its transgressions.

Skirting what issue? I'm just posting my opinion. You can disagree all you want.

It's a discussion, you're just stating it won't work because it won't work... despite the fact that it's worked countless times in the past with real live human beings for lots and lots of years. Lots of years that may end poorly for the people in power, but still... lots of years of success.

I guess it goes like this: you think it won't work because you think it won't work. I'm fairly certain it'll work for a decent chunk of time, because history has proven it will, time after time. And let's be honest... governments have wasted countless billions of dollars on vastly sillier notions. It makes perfect sense to me.

Quote:

Originally Posted by sky black swordman

Oh and Budget $135 million - Box office $76,932,872

ah, but what were the physical media sales like? BD/DVD sales ^_^

I think it's pretty rare that movies don't eventually make their money back, long term.

It's a discussion, you're just stating it won't work because it won't work... despite the fact that it's worked countless times in the past with real live human beings for lots and lots of years.

Ah, so you're agreeing with me?

Quote:

Originally Posted by Kazu-kun

For a certain period of time, but it almost always ends up in conflict, one way or another. That's why the world has changed over time. Things like slavery would still be a common thing otherwise.

Ten seconds on Google with "Modern day slavery" resulted in this hit on CNN from 2011...

"Obviously there is no precise figure, but the International Labor Organization and respected abolitionists like Kevin Bales and Siddharth Kara put the global number of slaves at between 10-30 million worldwide. At a minimum, 10 million"

A few more seconds turned up another site from half a decade ago with a list of countries that officially permit slavery. There were a few of them. The list may be shorter today; it's a rather dated article.

That's at least ten million slaves... today. And I'm willing to bet they're not counting the people bound into the drug trade who are virtually slaves but living in nations that don't tolerate slavery (I didn't read the whole article, so who knows). Out of a total world population of nearly 7 billion that's not common, but I'm willing to bet those 10 to 30 million people would feel differently.

On topic: arguing that governments wouldn't waste money on things related to weapons development is a fool's gambit. You'll almost always be proven wrong.

But we're veering wildly off topic, so we should carry on with more on-topic conversations... you know, talking about the novels ^_^