Fearing Cyber-Terrorism (Ethical Data Science Anyone?)

Discussions of the ethics of data science are replete with examples of not discriminating against individuals based on race (a crime in some contexts), violation of privacy expectations, etc.

What I have not seen (perhaps due to poor searching on my part) are discussions of the ethical obligation of data scientists to persuade would-be clients that their fears are baseless, and/or to refuse to participate in projects based on fear mongering.

…
The cyberwar could get much hotter soon, in the estimation of former CIA counter-intelligence director Barry Royden, a 40-year intel veteran, who told Business Insider the threat of cyberterrorism is pervasive, evasive, and so damned invasive that, sooner or later, someone will give into temptation, pull the trigger, and unleash chaos.
…

Ooooh, chaos. That sounds serious, except that it is the product of paranoid fantasy and a desire to game the appropriations process.

About 31,300. That is roughly the number of magazine and journal articles written so far that discuss the phenomenon of cyber terrorism.

Zero. That is the number of people who have been hurt or killed by cyber terrorism at the time this went to press.

In many ways, cyber terrorism is like the Discovery Channel’s “Shark Week,” when we obsess about shark attacks despite the fact that you are roughly 15,000 times more likely to be hurt or killed in an accident involving a toilet. But by looking at how terror groups actually use the Internet, rather than fixating on nightmare scenarios, we can properly prioritize and focus our efforts. (emphasis in original)
…

That’s a data point, isn’t it?

The quantity of zero. Yes?

In terms of a data science narrative, the claim that:

…we obsess about shark attacks despite the fact that you are roughly 15,000 times more likely to be hurt or killed in an accident involving a toilet.

is particularly impressive. Anyone with a data set of the ways people have been injured or killed in cases involving a toilet?
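For what the “15,000 times more likely” comparison would amount to in practice, here is a minimal sketch of computing that kind of relative risk from annual injury counts. The counts below are placeholders for illustration only, not real data; an actual analysis would pull estimates from something like the CPSC’s NEISS injury database.

```python
# Sketch: a relative-risk comparison of two annual injury counts.
# NOTE: both counts are HYPOTHETICAL placeholders, chosen only to
# illustrate how a "15,000x" style figure could be derived.

toilet_injuries_per_year = 30_000  # hypothetical annual estimate
shark_injuries_per_year = 2        # hypothetical annual estimate

relative_risk = toilet_injuries_per_year / shark_injuries_per_year
print(f"Toilet-related injuries are roughly {relative_risk:,.0f}x "
      f"more common than shark-related injuries (placeholder data).")
```

The point of the sketch is only that the headline ratio is a single division; the hard (and interesting) part is sourcing defensible numerators and denominators.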

What are the ethics of taking millions of dollars for work that you know is unnecessary and perhaps even useless?

Do you humor the client, and in the case of government, loot the public till?

Does it make a difference (ethically speaking) that someone else will take the money if you don’t?

Any examples of data scientists not taking on work based on the false threat of cyber-terrorism?

PS: Just in case anyone brings up the Islamic State, the bogeyman of the month of late, point them to: ISIS’s Cyber Caliphate hacks the wrong Google. The current cyber abilities of the Islamic State make them more of a danger to themselves than anyone else. (That’s a factual observation and not an attempt to provide “material support or resources” to the Islamic State.)

This entry was posted
on Wednesday, March 2nd, 2016 at 10:22 am and is filed under Ethics, Government.