Viewers who want to play along at home can buy a Samsung Smart TV. Samsung's related privacy policy warns that whenever the device's microphone is activated: "Please be aware that if your spoken words include personal or other sensitive information, that information will be among the data captured and transmitted to a third party." And that's before we even get into the device's built-in facial and gesture recognition capabilities.

"The self-surveillance state can be deadly."

In a frequently-asked-questions document released quickly after The Daily Beast first reported on the nuances of that privacy policy, Samsung said the captured information wouldn't be used for marketing purposes, while declining to say who those third parties might be. For anyone concerned that they hadn't consented to this type of information transfer, Samsung said the voice-recognition feature - and related data transfer - was covered by a separate privacy policy, to which users agreed when they activated the feature. Individual privacy policies also covered all of the third-party apps running on Samsung Smart TVs.

"We recommend that you refer to that application's privacy policy if you have any questions," Samsung says. It further clarifies that the voice recognition isn't stealthy: when users activate the feature, a microphone icon appears on screen.

But security experts have long warned that if a device has a feature that can be disabled, a skilled hacker can often remotely enable that feature without revealing what they've done (see VoIP Phones: Eavesdropping Alert). There's a reason why leading security experts such as Mikko Hypponen keep a Band-Aid over their laptop's webcam.

When TVs Listen

Samsung, however, certainly isn't the first company to move in the direction of monitoring the users of its products. In May 2011, Verizon - which doesn't manufacture TVs, but which does offer set-top boxes for cable TV subscribers - applied for a U.S. patent for creating a "detection zone" that would study what was happening near a television, then serve related advertising. Privacy, notably, wasn't mentioned once in the 7,500-word, 14-page patent application.

What about other TV manufacturers - might their devices also be at risk? That's not clear. But Sony, for example, sells some Android-based TVs that have voice recognition features, and any such device is at risk if someone can remotely access its built-in microphone or infect the TV's operating system with malware.

Compared with the authoritarian surveillance state envisioned by Orwell, however, reality for most of us seems much more banal. Think Internet of Things, selfie sticks, smart toys, smartphones, wearable computing devices and all of the other tools we now take for granted. Some pundits call this the self-surveillance state, on account of all of the information, images and videos that our devices now collect and transmit.

Self-Surveillance Downsides

For some, however, the self-surveillance state can be deadly. To wit, last year the Intercept published documents from 2011 and 2012 leaked by former National Security Agency contractor Edward Snowden, revealing the existence of a big data program called Skynet. That's a reference to an artificial intelligence program in the "Terminator" movies that gained sentience and then tried to wipe out humanity by building legions of Arnold Schwarzeneggers.

In this case, however, the machine-learning program was designed to study travel patterns - including countries visited and days of the week - as well as behavior-based analytics, such as infrequent incoming calls, "excessive SIM or handset swapping" or frequent power-downs, for the 55 million users of Pakistan's mobile telephone network, to try to spot terrorists. Related details about how the program may have been deployed since then remain state secrets.

But the program notes reinforce that the data handled by our Internet-connected or mobile telephony devices, as well as how, where and when we use devices, can be used to fingerprint people, track them, potentially serve them advertising - or coupons - or in the case of the U.S. drone program, remotely kill them.

At the time that the Skynet documents were created, however, the program's false positive rate was at least 0.18 percent. As Ars Technica reports, that means of the 55 million people being monitored, about 99,000 individuals would be mislabeled as being a potential "terrorist."
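That back-of-the-envelope figure is easy to verify; the sketch below simply multiplies the monitored population by the reported false-positive rate:

```python
# Rough check of the false-positive math reported by Ars Technica.
population = 55_000_000        # users of Pakistan's mobile network monitored by Skynet
false_positive_rate = 0.0018   # 0.18 percent, the program's reported lower bound

mislabeled = population * false_positive_rate
print(f"About {mislabeled:,.0f} people mislabeled as potential terrorists")
# → About 99,000 people mislabeled as potential terrorists
```

Note that 0.18 percent is described in the leaked slides as a best case; a higher real-world rate would flag proportionally more innocent people.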

About the Author

Schwartz is an award-winning journalist with two decades of experience in magazines, newspapers and electronic media. He has covered the information security and privacy sector throughout his career. Before joining Information Security Media Group in 2014, where he now serves as the executive editor of DataBreachToday and for European news coverage, Schwartz was the information security beat reporter for InformationWeek and a frequent contributor to DarkReading, among other publications. He lives in Scotland.