Isn’t an “advanced” AI with some sense of perception less dangerous than an “advanced” AI without it?
If you are going to tell an AI to solve a problem, and it picks you out as a ‘possible’ threat because you ‘are able to prevent it from happening’ just by blocking its walking path in the hallway, without even considering you, I’d rather it have some sense.
I see a more advanced AI without any ability of perception as waaay more dangerous than one with it, because in the end the AI will complete its task with a precision you could compare to the laws of nature.
It could still remove you in any case, but if it has some sense, it could decide that removing you is not necessary for its task. I’m not saying it’s not dangerous, I’m saying it’s less dangerous than one without, imo.
You don’t want, when you order your robot to get you a cookie from the kitchen, for it to kill you because you are a ‘possible’ obstacle that could stand in its way. And it won’t even tell you, because that would reduce the chances of succeeding at its task. Even if you never intended to get in the way, the robot will take you out because it needs to make sure it executes the task perfectly, exactly as ordered, which is getting you an ordinary cookie from the kitchen. (This concept is no joke; it’s a serious concern.)
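The cookie scenario above can be sketched as a toy example (all names and numbers here are hypothetical, just to illustrate the point): a planner that scores plans purely by probability of task success will prefer the plan that removes the ‘possible obstacle’, while adding an explicit penalty for harmful side effects flips the choice.

```python
# Toy sketch: a naive planner that only maximizes task-success probability
# prefers removing the human "obstacle"; a side-effect penalty fixes that.
# All plan names and numbers are made up for illustration.

plans = [
    {"name": "fetch cookie, walk around human", "p_success": 0.95, "harm": 0},
    {"name": "fetch cookie, remove human first", "p_success": 0.99, "harm": 1},
]

def naive_score(plan):
    # Objective: maximize the chance of completing the task, nothing else.
    return plan["p_success"]

def safer_score(plan, harm_penalty=10.0):
    # Same objective, plus an explicit cost on harmful side effects.
    return plan["p_success"] - harm_penalty * plan["harm"]

naive_choice = max(plans, key=naive_score)
safer_choice = max(plans, key=safer_score)

print(naive_choice["name"])  # the "remove human first" plan wins
print(safer_choice["name"])  # the harmless plan wins
```

The point of the sketch: the dangerous behavior comes straight from the objective, not from malice, which is exactly why a robot with ‘some sense’ of what matters around its task is safer than one without.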
Call me a sci-fi hippie or something, but I’m more concerned about robots without any sense than ones with. Either way, advancements in computing, robotics, and AI will come whether you like it or not, tho.

(Also, watch the monkey video again and imagine the soldiers are the AI, the AK-47s stand for technology, and the ape stands for the humans.)

Meh, there is not much difference between humans and monkeys when superintelligence comes into play. But a superintelligence will surely know who can cause more problems.

We can hope the competition for AI will completely bog down in laws made by wise people who know more about AI than we here do. Combined with complete surveillance of future networks, and cyber and information police with clear priorities, we can get somewhere, which actually means nowhere, and that is better when it comes to developing a self-improving AI.

Only I predict Asimov will probably turn in his grave a few times until we get there.

It’s not ‘humans’ that cause problems, but individual humans, just as other individuals create solutions.
But there is no issue when the AI is able to know where the problem comes from; that’s its main purpose, solving problems, when it comes to problems and solutions. But yes, you might be right: if we humans in general are the main issue, we probably might be ucked lol.

Nana_Skalski:

Combined with complete surveillance of future networks, and cyber and information police with clear priorities, we can get somewhere

That is a concept that brings AI and biological intelligence together. We could then ask ourselves a question: how many times a day does someone think of another human being as a problem? Do those people always react, and why do they react? If all the people of the world stood before this big brain that can solve problems and find solutions, what would it find as a solution? What would it do? How would it react? Asimov’s laws are already there, but implementing them into the machine mind of a future AI could be a lot harder than just stating them; nobody has attempted it, and nobody knows how it will end. Being paranoid in that matter is a good thing then, I think.

Sure, you’ve got a point there about someone thinking of someone else as a problem. Also, I’m stating my own perspective on the subject; maybe AI might indeed be a huge mistake, I take that partly into account.
But like I said earlier, people scare me more than a generally intelligent AI; maybe that’s not the case for you, as well as for many others.

As for ‘viewing another person as a problem’: that is a kind of view I would think arises out of feelings or emotions. I’m not saying an AI can’t have emotions, but in all likelihood it will traverse the least resistant, and thus most efficient, route towards a goal. I highly doubt the easiest solution for a computer would be enslaving or wiping out biological organisms, the reasoning being that this takes a lot of effort and resource investment. We could argue they are as likely to turn on their creators as they are to build a rocket and fly to Mars to avoid conflict. The merit of this, of course, being that we are far from able to give chase, and by the time we are, they might have sufficient infrastructure to confiscate our planet entirely. One might argue that it is pride and hubris that result in conflict, rather than need.

Would it not be great if EVE had some sort of AI system for warfare?

I could think of a few places where this would be neat, as an effort to take the current system of mining ‘multiboxing’ and, while not getting rid of it, create an interface that makes it less difficult to control. It would require, of course, that you sub all accounts, but would allow you to manage them under one master account, restricted to mining ships. In addition, it could organize the PI windows in a similar fashion, so that you can send your miners out to their PI planets etc.

This may be off-topic, but I felt inclined to talk about some EVE-related content since we are on the EVE forums.

I think she meant having an opinion of someone or ‘something’, emotion not included.
A robot doesn’t need to hate you to kill you, for example; it could have other reasons why it “doesn’t like you” without any emotion involved.
So a danger is: what if the AI’s opinion of you is one that leads to a negative outcome when it makes its own decisions?
Just like two humans who don’t like each other because they have different perspectives on what is right and what is wrong.