We don't need to understand computers, we need to understand ourselves, right?

What I'm basically saying is that we don't need to merge with artificial intelligence; instead, we should focus on ourselves and the (human or superhuman) abilities we could create or improve. Why doesn't society do this instead? To create artificial intelligence, the computer would have to be far ahead of the brain. So far ahead that it could take millions of programmers to build. Biochips are the new technology; let's improve on that.


Replies

let's get one thing straight first. there are 7 billion people on this planet, and not every one of them is a computer scientist/engineer. there are fields such as biotechnology/bio-engineering that focus on merging technology with biological matter and/or enhancing the human condition. however, any trials would have to guarantee that there are no harmful side effects in the short and long term, so extensive testing and research would be required. the integration of biological matter and pieces of metal is a delicate procedure.

computers and technology have been, and for a while probably will be, only a cheaper imitation of nature. however, artificial intelligence does not have to be of the same or greater caliber as the human brain. artificial intelligence is merely the construct of some "thing" that is intelligent, self-aware, or conscious. what you're thinking of is some kind of skynet/terminator/ultron being, and we are nowhere close.

Do you want the computer programmers of today to create artificial intelligence that could someday kill off our own species? I'm trying to save lives with this idea, rather than worrying about how far we can go with computer programming. There is always a bad side to technology, you know. I've taken four computer programming classes already, and all of them referred to artificial intelligence as our future. It's not our future; it's a massive genocide waiting to happen.

thing is, we are already at the point where we can kill our own species. yes, technology has always had a negative side to it. did albert einstein want his research to lead to the atomic bomb? no, yet it was still produced and used. you could argue that there is "full" ai and "weak" ai, where weak ai only performs reasonable, logical operations and "full" ai may be on the same level of intelligence as humans. weak ai is already here, in self-driving cars and other general robotics; it's in the present. as for "full" ai, we are not at that stage yet: our computers do not have enough processing power, and we do not understand the relationship between the mind and the brain that creates consciousness. but will it happen? it's likely that it will, despite all the controversies. surely someone's mad enough to bring about the rise of the machines. perhaps we will somehow develop a method to contain any malicious actions of ai. who knows?