Techdirt: Stories filed under "skynet"
Easily digestible tech news...
https://www.techdirt.com/

DailyDirt: Artificial Intelligence Is Here To Help Us... (Michael Ho, Wed, 22 Jul 2015)
https://www.techdirt.com/articles/20101226/23543912417/dailydirt-artificial-intelligence-is-here-to-help-us.shtml

...artificial intelligence in an evil light; humans are building these intelligent machines -- and presumably, we'll have some control over how dangerous they'll ultimately become (but maybe not). People are building artificial brains without really knowing how brains work, but that's how we're learning. Maybe we should be breeding hyper-intelligent parrots instead?

The document cites Zaidan as an example to demonstrate the powers of SKYNET, a program that analyzes location and communication data (or “metadata”) from bulk call records in order to detect suspicious patterns.

Now, there are a few interesting things that come out of this. First, the NSA has phone metadata on phones in Pakistan. That's found in the other released presentation on the NSA's "SKYNET" (yes, SKYNET) program:

But, perhaps the much more interesting tidbit is that this detailed report showing why they think Zaidan is a key Al Qaeda courier shows a huge problem with metadata. When you think about it, it really should not be at all surprising that a journalist who is one of the leading reporters covering Al Qaeda might have phone metadata similar to someone who is actually in Al Qaeda. It's likely that he tries to contact them a lot and that he goes to where they are a lot. That's called being a reporter. But, to the NSA, those sorts of distinctions don't matter. Remember, former NSA boss Michael Hayden has outright admitted that "we kill people based on metadata."

Metadata reveals an awful lot, but there may be alternative explanations for those patterns. When you get so focused on the data itself, you fall into the trap of believing that what the data suggests must be true, because it looks so analytical. The idea that it might be a "false positive" and that there might be an alternative explanation (i.e., a reporter covering Al Qaeda is likely to have similar metadata) doesn't even seem to enter into the equation...
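The false-positive trap here is really just base-rate arithmetic, and it's worth making concrete. As a rough sketch (every number below is invented for illustration; none of it comes from the leaked SKYNET slides): even a pattern-matcher with a seemingly tiny false-positive rate will produce a flagged list made up almost entirely of innocent people when actual targets are vanishingly rare in the population.

```python
# Base-rate illustration for a metadata classifier.
# All numbers are hypothetical -- nothing here comes from the NSA documents.

def false_positive_share(population, actual_targets,
                         detection_rate, false_positive_rate):
    """Return the fraction of flagged people who are NOT actual targets."""
    innocent = population - actual_targets
    flagged_true = actual_targets * detection_rate      # real couriers caught
    flagged_false = innocent * false_positive_rate      # reporters, relatives, etc.
    return flagged_false / (flagged_true + flagged_false)

# Say 55 million phone users, 2,000 genuine couriers, a 90% detection
# rate, and a seemingly tiny 0.1% false-positive rate:
share = false_positive_share(55_000_000, 2_000, 0.90, 0.001)
print(f"{share:.1%} of flagged people are false positives")  # roughly 96.8%
```

In other words, under those made-up but not implausible numbers, flagging someone tells you almost nothing by itself -- which is exactly why a reporter whose call patterns resemble a courier's ends up at the top of the list.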

Rice University Professor: SkyNET's Gonna Take Ur Jerbs! (Timothy Geigner, Fri, 17 May 2013)
https://www.techdirt.com/articles/20130517/06185923116/rice-university-professor-skynets-gonna-take-ur-jerbs.shtml
It's sad to note how collective humanity has done an ostrich on the warnings about the machines. Still, the NFL exists, robbing us of our best and brightest, who will no longer be available for the coming war with SkyNET. Conferences on what to do about the surely coming robot horde have produced little in the way of a path forward and have gone relatively unreported in any case. Due to this, we know very little about what form the non-existent threat of terminator-like metal monsters will take. Will they simply wage war against us? Will they siphon our body heat for energy? Will they farm our skin and dance around in it to Goodbye Horses, like some kind of graphite Buffalo Bill?

Pictured: A Rice University professor in the near future. (Image source: CC BY 2.0)

According to Vardi, sometime around the year 2045, you won't have a job any longer because the robots will have taken it away from you.

In recent writings, Vardi traces the evolution of the idea that artificial intelligence may one day surpass human intelligence, from Turing to Kurzweil, and considers the recent rate of progress. Although early predictions proved too aggressive, in the space of 15 years we’ve gone from Deep Blue beating Kasparov at chess to self-driving cars and Watson beating Jeopardy champs Ken Jennings and Brad Rutter. Extrapolating into the future, Vardi thinks it’s reasonable to believe intelligent machines may one day replace human workers almost entirely and in the process put millions out of work permanently.

Well, looking back through the history of technological progress, you can certainly see his point. And once you've seen that point, you can laugh at it. And once you've laughed at it, you can call his local police station and request that they remove any science fiction movies from his home by force, because he's clearly seen too many of them.

The problem with thinking that artificial intelligence is going to replace us in the workforce is two-fold. First, it cheaply ignores the impact every other form of technological progress has had thus far. Robots are used on assembly lines, yet there's no drastic net loss of jobs. When the automobile was invented, it isn't as though the buggy whip makers simply died off in unemployed starvation. There are other jobs to be had, most often created as a direct result of the advance in technology. Assembly line workers become machinists. Buggy whip makers go to work for the auto companies. There can be pain in the market in the short term as it is disrupted, but on a long enough timeline everything seems to even back out.

The second problem is the failure to recognize that people value some products and services provided by our fellow meat-sacks. Can auto-attendant systems handle phone duties? Sure, but there are tons of companies that specifically advertise the concept of customers being able to talk to a "real" person. Can machines make rugs? Yup, yet there's a huge market in hand-woven rugs out there. And the service industries rely heavily on personality. A machine might be able to serve me my beer at my local watering hole, but will it listen to me complain about my job if I'm having a crappy day? Will it be able to offer me an opinion on which wine is the best on the menu? And, as the article notes, what if any workforce disruption that does occur is desirable?

Perhaps in the future, while some of us work hard to build and program super-intelligent machines, others will work hard to entertain, theorize, philosophize, and make uniquely human creative works, maybe even pair with machines to accomplish these things. These may seem like niche careers for the few and talented. But at the beginning of the Industrial Revolution, jobs of the mind in general were niche careers.

I call dibs on being the new Socrates.

Cambridge Proposes New Centre To Study Ways Technology May Make Humans Extinct (Timothy Geigner, Fri, 30 Nov 2012)
https://www.techdirt.com/articles/20121126/10403721148/cambridge-proposes-new-centre-to-study-ways-technology-may-make-humans-extinct.shtml

...deserve rights, for instance. On the flip side of the benevolence coin, I also had the distinct pleasure of discussing one sports journalist's opinion that we had to outlaw American football as we know it today for the obvious reason that the machines are preparing to take over and s#@% is about to get real.

A philosopher, a scientist and a software engineer have come together to propose a new centre at Cambridge, the Centre for the Study of Existential Risk (CSER), to address these cases – from developments in bio and nanotechnology to extreme climate change and even artificial intelligence – in which technology might pose "extinction-level" risks to our species.

Now, it would be quite easy to simply have a laugh at this proposal while writing off concerns about extinction-level technological disasters as the stuff of science fiction movies, and to some extent I wouldn't disagree with that notion, but this group certainly does appear to be keeping a level head about the subject. There doesn't seem to be a great deal of fear-mongering coming out of the group, unlike what we see in cybersecurity debates, and the founding members of the group aren't exactly Luddites. That said, even some of the group's members seem to realize how far-fetched this all sounds, such as Huw Price, the Bertrand Russell Professor of Philosophy and one of the group's founding members.

"Nature didn't anticipate us, and we in our turn shouldn't take AGI for granted. We need to take seriously the possibility that there might be a "Pandora's box" moment with AGI that, if missed, could be disastrous. I don't mean that we can predict this with certainty, no one is presently in a position to do that, but that's the point! With so much at stake, we need to do a better job of understanding the risks of potentially catastrophic technologies."

Unfortunately, the reasonable nature of Price's wish to simply study the potential of a problem does indeed lead to some seemingly laughable worries. For example, Price goes on to worry that an explosion in computing power and the possibility of software writing new software will relegate humanity to the back burner in competition with machines for global resources. My issue is that these researchers appear to equate intelligence with consciousness. Or, at the very least, they assume that a machine as intelligent as or even more intelligent than a human being will also have a human's motivation for dominance, expansion, or procreation (as in writing new software or creating more machines). Following the story logically, and having written a novel on exactly that subject matter, I'm just not sure how the researchers got from point A to point B without a little science fiction magic worked into the mix.

So, while it would seem to be unreasonable to decry studying the subject, I would hope this or any other group looking at the possible negative impact of expanding technology would try to keep their sights on the most likely scenarios and stay away from the more fantastical, albeit entertaining, possibilities.


The NFL Or SkyNET: There Can Be Only One (Timothy Geigner, Wed, 16 Feb 2011)
https://www.techdirt.com/articles/20110215/14082113113/nfl-skynet-there-can-be-only-one.shtml

We've all giggled at examples of technopanic in the past. We laughed at ER doctors warning about walking and texting at the same time. We snickered at the notion that Google's Street View was a threat to children. Some of our palms may have met our faces at the notion that digital drugs could be a real-life danger.

It turns out the joke is on us. SkyNET is coming, my friends, and we're going to lose the war. And you know why? Because of football, hockey and boxing.

So says Rick Telander in a piece for the Chicago Sun-Times, in which he declares that traumatic head injuries in those sports are stealing away our ability to fight the machines. Seriously. I couldn't make this stuff up. To preface, it should be noted that Telander isn't some crackpot pseudo-journalist. He is the senior sports columnist for the Chicago Sun-Times, hired away from Sports Illustrated, where he was also a Senior Writer. He attended Northwestern University on a football scholarship and then went to training camp with the Kansas City Chiefs. Personally, I think he might have taken a few blows to the head himself.

Telander starts off talking about the trauma of head injuries in pro sports, namely boxing, football and hockey. We're okay so far. Bruising from sustained blows to the head leads to long-term medical effects in players -- something that is becoming a growing issue. Then Telander goes completely off the rails in answering his own question as to why this is more important now than ever:

"Consider it wasn’t until last year that the devious and know-nothing NFL Mild Traumatic Brain Injury Committee was restructured with seemingly authentic and un-buyable neurologists at the helm, and the word ‘‘Mild’’ was dropped altogether. Mild. Brain injury. Ha. I am reminded here of ‘‘minor’’ surgery, which, of course, is surgery on somebody else."

Hmm, well okay, the NFL is beginning to take brain injury more seriously. But the problem has been known for some time. It's thanks to boxers becoming pale drooling ghosts of their former selves that we have the term "punch drunk". But whatever...

"Second, we live in a world that is progressing into a vast arena in which mankind has never lived, never even comprehended, the stadium of human-enhanced computer dominance. It is a place where intelligence, real or artificial, will be all. Scientists say that by as early as 2045 there may well be a computer that dwarfs mankind. By then, according to the current cover story in Time, a computer might exist that will surpass ‘‘the brainpower equivalent to that of all human brains combined.’’ That’s smart. Unless we’re really dumb. And we’re not, except when we do dumb things, like let our heads get damaged continually and call it something like ringing a bell. In our new environment, how can anyone allow his or her IQ, or their children’s, to be lowered?"

Uh, what? Because technology is progressing, head injuries are now more important? And we can't play football? Or hockey? Or box? But why, Rick, why?

"If you think the talk of silicon joining and even replacing the organic mind is nonsense, remember that your own laptop does the work a global library once did. Consider, as Time points out, that ‘‘your average cell phone is about a millionth the size of, a millionth the price of and a thousand times more powerful than’’ the best computer at MIT 40 years ago...But the olden days are gone. And you can be assured that if the battle between machines and humans ever becomes confrontational, it won’t be won by fists and forearms, helmets and sticks to our delicate heads."

And there you have it. We cannot have football, hockey or boxing because the war against the machines is coming and we're turning those who would lead us in that fight into men with brain-mush in their formerly bright heads. Because prospective General Brett Favre has clearly shown how acclimated to the dangers of technology he is. And no one is as cautious around new media technology as budding Admiral Chad Ochocinco. Hell, I don't even want to think about a Colonel Patrick Kane leading the charge against a host of Terminators.

Once again, we all agree that brain injuries in sports are a bad thing. But the idea that it's suddenly become more important due to the rise of the machines? That seems like the product of one too many sports-related brain injuries.

My suggestion? Just make it mandatory that all machines on earth must do a ten-year stint playing football or hockey. Today's matchup: the Texas Toasters up against the Rochester Refrigerators! Join us next week on ESPN when the Carolina Computers skate the ice against the San Diego Smartphones! I could go on, but I'll leave you with Boers and Bernstein's take on their radio show, the most-listened-to sports show in Chicago (the good stuff starts around 4 minutes and 30 seconds...):