If AI ends up thinking in a fundamentally different manner than humans, should we be surprised? It will almost certainly have many more sense organs than a human. It will almost certainly have many more 'brains' involved than a human. It will almost certainly be 'smarter' (perhaps not at first, but quickly) than a human.

Nothing good? The credit fraud detection software protecting your Visa card? IBM Watson helping diagnose cancer and other illnesses? There is a long list of apps like these — are none of them "good" for us?

It's like you think SkyNet is inevitable if we continue down this road. But I will admit that when I read the line in the article that said "although we don't fully understand consciousness yet," it put a damper on any conclusions these guys drew.

I think that if AI ever creates self-awareness and self-preservation in the machine, that's when the science fiction movies begin to look a little more real. The scariest one I have seen is Eagle Eye. Not feasible today, but it didn't look that far off from possible reality. The computer was not trying to self-actualize, like Data in Star Trek: The Next Generation, but simply to survive once it learned it was going to be shut down.

I cannot find anything positive in the creation of Artificial Intelligence. Once we lose control of these machines, man will not be able to do anything to stop this from continuing on to a critical end.

Comparing the development of digital music fidelity to the advancement of AI doesn't work as an analogy, because the difference between analog and digital is well understood. Not so human consciousness. If Tononi's model is correct -- and there's still debate about that -- then we simply can't model human consciousness on a computer. We may get something functionally similar, but we won't be able to compare AI to the conscious mind, because the latter will remain a black box.

And digital music will never sound as good as analog, nor will digital photos ever come close to rivaling film! With all due respect, statements like these seem ludicrous to me. We are not even at beta in our thinking about AI, and much farther away still from being able to imagine AI post-singularity. What will AI itself say about its own ability to synthesize human consciousness? You don't know the answer, and neither do I. Never say never, or history will only remember you with amusement.
