Proprietary Surveillance

Nonfree (proprietary) software is very often malware (designed to
mistreat the user). Nonfree software is controlled by its developers,
which puts them in a position of power over the users; that is the
basic injustice. The developers and manufacturers often exercise
that power to the detriment of the users they ought to serve.

One common form of mistreatment is to snoop on the user. This page
records clearly established cases of proprietary software that
spies on or tracks users. Manufacturers even refuse to say whether
they snoop on users for the state.

All appliances and applications that are tethered to a specific
server are snoopers by nature. We do not list them on this page
because they have their own page: Proprietary Tethers.

If you know of an example that ought to be on this page but isn't
here, please write to <webmasters@gnu.org> to inform us. Please
include the URL of a trustworthy reference or two to serve as
specific substantiation.

Introduction

For decades, the Free Software movement has been denouncing the
abusive surveillance machine of proprietary software companies such
as Microsoft and Apple. In recent years, this tendency to watch
people has spread across industries, not only in the software
business, but also in hardware. Moreover, it has also spread
dramatically away from the keyboard: into the mobile computing
industry, the office, the home, transportation systems, and the
classroom.

Aggregate or anonymized data

Many companies, in their privacy policy, have a clause that claims
they share aggregate, non-personally identifiable information with
third parties/partners. Such claims are worthless, for several
reasons:

They could change the policy at any time.

They can twist the words by distributing an “aggregate” of
“anonymized” data which can be reidentified and attributed to
individuals.

The raw data they don't normally distribute can be taken by
data breaches.

The raw data they don't normally distribute can be taken by
subpoena.
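The second point above is not hypothetical. A minimal sketch, with made-up records, of how "anonymized" rows can be joined back to names through quasi-identifiers such as ZIP code, birth date and sex:

```python
# Illustrative sketch of reidentifying "anonymized" records by joining
# them with public data on shared quasi-identifiers.
# All names and records below are fabricated for the example.

anonymized = [  # names stripped -- supposedly safe to share
    {"zip": "02138", "birth": "1945-07-31", "sex": "F", "diagnosis": "flu"},
    {"zip": "90210", "birth": "1980-01-15", "sex": "M", "diagnosis": "asthma"},
]

public_roll = [  # e.g. a voter roll, often legally purchasable
    {"name": "A. Resident", "zip": "02138", "birth": "1945-07-31", "sex": "F"},
    {"name": "B. Resident", "zip": "90210", "birth": "1980-01-15", "sex": "M"},
]

def reidentify(anon_rows, public_rows):
    """Attach names back to 'anonymized' rows sharing quasi-identifiers."""
    keys = ("zip", "birth", "sex")
    index = {tuple(p[k] for k in keys): p["name"] for p in public_rows}
    return [dict(row, name=index.get(tuple(row[k] for k in keys)))
            for row in anon_rows]

reidentified = reidentify(anonymized, public_roll)
# Each "anonymous" diagnosis is now attached to a name again.
```

This is the classic quasi-identifier attack: the distributor never released names, yet anyone holding both datasets recovers them.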

Therefore, we must not be distracted by companies' statements of
what they will do with the data they collect. The wrong is that
they collect it at all.

A downgrade to Windows 10 deleted surveillance-detection
applications. Then another downgrade inserted a general spying
program. Users noticed this and complained, so Microsoft renamed it
to give users the impression it was gone.

We can suppose Microsoft looks at users' files for the US government
on demand, though the “privacy policy” does not explicitly
say so. Will it look at users' files for the Chinese government
on demand?

It also demonstrates how you can't trust proprietary software,
because even if today's version doesn't have a malicious functionality,
tomorrow's version might add it. The developer won't remove the
malfeature unless many users push back hard, and the users can't
remove it themselves.

Spyware on Mobiles

All “Smart” Phones

According to Edward Snowden, agencies can take over
smartphones by sending hidden text messages which enable
them to turn the phones on and off, listen to the microphone,
retrieve geo-location data from the GPS, take photographs, read
text messages, read call, location and web browsing history, and
read the contact list. This malware is designed to disguise itself
from investigation.

iThings

In the latest iThings system,
“turning off” WiFi and Bluetooth the obvious way
doesn't really turn them off. A more advanced way really does turn
them off—only until 5am. That's Apple for you—“We
know you want to be spied on”.

Apple proposes a
fingerprint-scanning touch screen—which would mean no way
to use it without having your fingerprints taken. Users would have
no way to tell whether the phone is snooping on them.

Even if you disable Google Maps and location tracking, you must
disable Google Play itself to completely stop the tracking. This is
yet another example of nonfree software pretending to obey the user,
when it's actually doing something else. Such a thing would be almost
unthinkable with free software.

Samsung phones come with apps
that users can't delete, and they send so much data that their
transmission is a substantial expense for users. Said transmission,
not wanted or requested by the user, clearly must constitute spying
of some kind.

Merely asking the “consent” of users is not enough to
legitimize actions like this. At this point, most users have stopped
reading the “Terms and Conditions” that spell out what
they are “consenting” to. Google should clearly and
honestly identify the information it collects on users, instead of
hiding it in an obscurely worded EULA.

However, to truly protect people's privacy, we must prevent Google
and other companies from getting this personal information in the
first place!

Mobile Apps

Facebook offered a convenient proprietary
library for building mobile apps, which also
sent personal data to Facebook. Lots of companies built apps that
way and released them, apparently not realizing that all the personal
data they collected would go to Facebook as well.

It shows that no one can trust a nonfree program, not even the
developers of other nonfree programs.

Collecting hardware identifiers is in apparent violation of
Google's policies. But it seems that Google wasn't aware of it,
and, once informed, was in no hurry to take action. This proves
that the policies of a development platform are ineffective at
preventing nonfree software developers from including malware in
their programs.

Twenty-nine “beauty camera” apps that used to
be on Google Play had one or more malicious functionalities, such as
stealing users' photos instead of “beautifying” them,
pushing unwanted and often malicious ads on users, and redirecting
them to phishing sites that stole their credentials. Furthermore,
the user interface of most of them was designed to make uninstallation
difficult.

Users should of course uninstall these dangerous apps if they
haven't yet, but they should also stay away from nonfree apps in
general. All nonfree apps carry a potential risk because
there is no easy way of knowing what they really do.

An investigation of the 150 most popular
gratis VPN apps in Google Play found that
25% fail to protect their users’ privacy due to DNS leaks. In
addition, 85% feature intrusive permissions or functions in their
source code—often used for invasive advertising—that could
potentially also be used to spy on users. Other technical flaws were
found as well.

Often they send the machine's “advertising ID,” so that
Facebook can correlate the data it obtains from the same machine via
various apps. Some of them send Facebook detailed information about
the user's activities in the app; others only say that the user is
using that app, but that alone is often quite informative.

This spying occurs regardless of whether the user has a Facebook
account.
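The mechanism is simple: because every app on the device reports the same device-wide advertising ID, a tracker can join records from unrelated apps into a single profile. A minimal sketch, with invented app names and events:

```python
# Sketch of cross-app correlation via a shared "advertising ID".
# The apps, events, and IDs below are invented for illustration.
from collections import defaultdict

# Each app reports its events independently, but all of them tag the
# reports with the same device-wide advertising ID.
reports = [
    {"ad_id": "ad-123", "app": "FitTracker", "event": "ran 5km"},
    {"ad_id": "ad-123", "app": "NewsReader", "event": "read diet article"},
    {"ad_id": "ad-999", "app": "FitTracker", "event": "opened app"},
]

def build_profiles(reports):
    """Group events by advertising ID: one profile per device/person."""
    profiles = defaultdict(list)
    for r in reports:
        profiles[r["ad_id"]].append((r["app"], r["event"]))
    return dict(profiles)

profiles = build_profiles(reports)
# "ad-123" now links activity from two unrelated apps to one person,
# whether or not that person has an account with the tracker.
```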

Google did not intend to make these apps spy; on the contrary, it
worked in various ways to prevent that, and deleted these apps after
discovering what they did. So we cannot blame Google specifically
for the snooping of these apps.

On the other hand, Google redistributes nonfree Android apps, and
therefore shares in the responsibility for the injustice of their being
nonfree. It also distributes its own nonfree apps, such as Google Play,
which
are malicious.

Could Google have done a better job of preventing apps from
cheating? There is no systematic way for Google, or Android users,
to inspect executable proprietary apps to see what they do.

Google could demand the source code for these apps, and study
the source code somehow to determine whether they mistreat users in
various ways. If it did a good job of this, it could more or less
prevent such snooping, except when the app developers are clever
enough to outsmart the checking.

But since Google itself develops malicious apps, we cannot trust
Google to protect us. We must demand release of source code to the
public, so we can depend on each other.

The suit alleges that this was done without the users' consent.
If the fine print of the app said that users gave consent for this,
would that make it acceptable? No way! It should be flat out illegal to design
the app to snoop at all.

Currently, the app is
being pre-installed on only one phone, and the user must
explicitly opt-in before the app takes effect. However, the app
remains spyware—an “optional” piece of spyware is
still spyware.

This example illustrates how “getting the user's
consent” for surveillance is inadequate as a protection against
massive surveillance.

A research paper that investigated the privacy and security of
283 Android VPN apps concluded that “in spite of the promises
for privacy, security, and anonymity given by the majority of VPN
apps—millions of users may be unawarely subject to poor security
guarantees and abusive practices inflicted by VPN apps.”

Following is a non-exhaustive list, taken from the research paper,
of some proprietary VPN apps that track users and infringe their
privacy:

SurfEasy

Includes tracking libraries such as NativeX and Appflood,
meant to track users and show them targeted ads.

sFly Network Booster

Requests the READ_SMS and SEND_SMS
permissions upon installation, meaning it has full access to users'
text messages.

DroidVPN and TigerVPN

Requests the READ_LOGS permission to read logs
for other apps and also core system logs. TigerVPN developers have
confirmed this.

HideMyAss

Sends traffic to LinkedIn. Also, it stores detailed logs and
may turn them over to the UK government if requested.

VPN Services HotspotShield

Injects JavaScript code into the HTML pages returned to the
users. The stated purpose of the JS injection is to display ads. Uses
roughly five tracking libraries. Also, it redirects the user's
traffic through valueclick.com (an advertising website).

WiFi Protector VPN

Injects JavaScript code into HTML pages, and also uses roughly
five tracking libraries. Developers of this app have confirmed that
the non-premium version of the app does JavaScript injection for
tracking the user and displaying ads.
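Permission grabs like READ_SMS or READ_LOGS are at least visible in an app's manifest, even when the code is not. A minimal sketch of scanning a manifest for them; the manifest below is fabricated for the example, not taken from any real app:

```python
# Sketch: flag suspicious permission declarations in an Android
# manifest. The example manifest is fabricated for illustration.
import xml.etree.ElementTree as ET

SUSPICIOUS = {"android.permission.READ_SMS",
              "android.permission.SEND_SMS",
              "android.permission.READ_LOGS"}

# ElementTree expands namespaced attributes to "{uri}name" form.
ANDROID_NS = "{http://schemas.android.com/apk/res/android}"

manifest = """\
<manifest xmlns:android="http://schemas.android.com/apk/res/android">
  <uses-permission android:name="android.permission.INTERNET"/>
  <uses-permission android:name="android.permission.READ_SMS"/>
  <uses-permission android:name="android.permission.READ_LOGS"/>
</manifest>"""

def flag_permissions(manifest_xml):
    """Return the declared permissions that appear in the watch list."""
    root = ET.fromstring(manifest_xml)
    declared = {elem.get(ANDROID_NS + "name")
                for elem in root.iter("uses-permission")}
    return sorted(declared & SUSPICIOUS)
```

Of course, the manifest only shows what an app may do, not what it actually does with the access; that is exactly why source code matters.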

The article should not have described these apps as
“free”—they are not free software. The clear way
to say “zero price” is “gratis.”

The article takes for granted that the usual analytics tools are
legitimate, but is that valid? Software developers have no right to
analyze what users are doing or how. “Analytics” tools
that snoop are just as wrong as any other snooping.

A study in 2015 found that 90% of the top-ranked gratis proprietary
Android apps contained recognizable tracking libraries. For the paid
proprietary apps, it was only 60%.

The article confusingly describes gratis apps as
“free”, but most of them are not in fact free software. It also uses the
ugly word “monetize”. A good replacement for that word
is “exploit”; nearly always that will fit perfectly.

The FTC criticized this app because it asked the user to
approve sending personal data to the app developer but did not ask
about sending it to other companies. This shows the weakness of
the reject-it-if-you-dislike-snooping “solution” to
surveillance: why should a flashlight app send any information to
anyone? A free software flashlight app would not.

TV Sets

Emo Phillips made a joke: The other day a woman came up to me and
said, “Didn't I see you on television?” I said, “I
don't know. You can't see out the other way.” Evidently that was
before Amazon “smart” TVs.

Vizio TVs
collect “whatever the TV sees,” in the company CTO's own
words, and this data is sold to third parties. This is in return for
“better service” (meaning more intrusive ads?) and slightly
lower retail prices.

What is supposed to make this spying acceptable, according to him,
is that it is opt-in in newer models. But since the Vizio software is
nonfree, we don't know what is actually happening behind the scenes,
and there is no guarantee that all future updates will leave the
settings unchanged.

If you already own a Vizio smart TV (or any smart TV, for that
matter), the easiest way to make sure it isn't spying on you is
to disconnect it from the Internet, and use a terrestrial antenna
instead. Unfortunately, this is not always possible. Another option,
if you are technically oriented, is to get your own router (which can
be an old computer running completely free software), and set up a
firewall to block connections to Vizio's servers. Or, as a last resort,
you can replace your TV with another model.
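For the technically oriented option, this is the blocking logic such a router firewall would apply. A minimal sketch, using placeholder hostnames (we do not know the TV's actual server names):

```python
# Sketch of a router-side blocklist: refuse connections (or DNS
# answers) for known telemetry hosts and their subdomains.
# The hostnames here are hypothetical placeholders, not Vizio's
# actual telemetry servers.

BLOCKLIST = {"telemetry.example-tv.com", "ads.example-tv.com"}

def allow(hostname):
    """True if the TV may connect; listed hosts and all their
    subdomains are refused."""
    parts = hostname.lower().split(".")
    # Every suffix of the name, so "eu.telemetry.example-tv.com"
    # also matches the "telemetry.example-tv.com" entry.
    suffixes = {".".join(parts[i:]) for i in range(len(parts))}
    return not (suffixes & BLOCKLIST)
```

A real deployment would hook this into the router's DNS resolver or packet filter; the point is simply that the decision stays on a machine you control, running software you can inspect.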

Some web and TV advertisements play inaudible
sounds to be picked up by proprietary malware running
on other devices in range so as to determine that they
are nearby. Once your Internet devices are paired with
your TV, advertisers can correlate ads with Web activity, and other
cross-device tracking.

It is possible to turn this off, but having it enabled by default
is an injustice already.
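To make the mechanism concrete: the receiving device only has to check its microphone input for energy at the beacon's frequency. A minimal sketch using the Goertzel algorithm; the 18 kHz beacon frequency and threshold are illustrative assumptions, not documented values from any vendor:

```python
# Sketch: detecting a near-ultrasonic beacon tone in an audio clip
# with the Goertzel algorithm (single-frequency power estimate).
# The 18 kHz frequency and threshold are illustrative assumptions.
import math

RATE = 44100  # samples per second

def goertzel_power(samples, freq):
    """Relative power of `freq` in `samples` (Goertzel algorithm)."""
    k = 2.0 * math.cos(2.0 * math.pi * freq / RATE)
    s_prev = s_prev2 = 0.0
    for x in samples:
        s = x + k * s_prev - s_prev2
        s_prev2, s_prev = s_prev, s
    return s_prev2 ** 2 + s_prev ** 2 - k * s_prev * s_prev2

def beacon_present(samples, freq=18000.0, threshold=1000.0):
    return goertzel_power(samples, freq) > threshold

# A half-second clip containing an 18 kHz tone most adults can't hear,
# and a silent clip for comparison:
clip = [math.sin(2 * math.pi * 18000 * t / RATE) for t in range(RATE // 2)]
silence = [0.0] * (RATE // 2)
```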

Tivo's alliance with Viacom adds 2.3 million households
to the 600 million social media profiles the company
already monitors. Tivo customers are unaware they're
being watched by advertisers. By combining TV viewing
information with online social media participation, Tivo can now
correlate TV advertisements with online purchases, exposing all
users to new combined surveillance by default.

This shows that laws requiring products to get users' formal
consent before collecting personal data are totally inadequate.
And what happens if a user declines consent? Probably the TV will
say, “Without your consent to tracking, the TV will not
work.”

Proper laws would say that TVs are not allowed to report what the
user watches—no exceptions!

Cameras

In many cases, the video shows everyone that comes near, or merely
passes by, the user's front door.

The article focuses on how Ring used to let individual employees look
at the videos freely. It appears Amazon has tried to prevent that
secondary abuse, but the primary abuse—that Amazon gets the
video—Amazon expects society to surrender to.

When Consumer Reports tested them, it suggested that these
manufacturers promise not to look at what's in the videos. That's not
security for your home. Security means making sure they don't get to
see through your camera.

The app was reporting the temperature of the vibrator minute by
minute (thus, indirectly, whether it was surrounded by a person's
body), as well as the vibration frequency.

Note the totally inadequate proposed response: a labeling
standard with which manufacturers would make statements about their
products, rather than free software which users could have checked
and changed.

The company's statement that it was anonymizing the data may be
true, but it doesn't really matter. If it had sold the data to a data
broker, the data broker would have been able to figure out who the
user was.

Those toys also contain major security vulnerabilities; crackers
can remotely control the toys with a mobile phone. This would enable
crackers to listen in on a child's speech, and even speak into the
toys themselves.

Even though the ink subscription program may be cheaper in some
specific cases, it spies on users, and involves totally unacceptable
restrictions on the use of ink cartridges that would otherwise be in
working order.

It was very difficult for them to do this. The job would be much
easier for Amazon. And if some government such as China or the US
told Amazon to do this, or cease to sell the product in that country,
do you think Amazon would have the moral fiber to say no?

Today's technological practice does not include any way of making
a device that can obey your voice commands without potentially spying
on you. Even if it is air-gapped, it could be saving up records
about you for later examination.

GM did not get users' consent, but it could have got that easily by
sneaking it into the contract that users sign for some digital service
or other. A requirement for consent is effectively no protection.

The cars can also collect lots of other data: listening to you,
watching you, following your movements, tracking passengers' cell
phones. All such data collection should be forbidden.

But to be really safe, we must make sure the car's hardware cannot
collect any of that data, or that the software is free so we know
it won't collect any of that data.

That's easy to do because the system has no authentication
when accessed through the modem. However, even if it asked
for authentication, you couldn't be confident that Nissan
has no access. The software in the car is proprietary, which means
it demands blind faith from its users.

Even if no one connects to the car remotely, the cell phone modem
enables the phone company to track the car's movements all the time;
it is possible to physically remove the cell phone modem, though.

Tesla cars allow the company to extract
data remotely and determine the car's location
at any time. (See Section 2, paragraphs b and c of the
privacy statement.) The company says it doesn't store this
information, but if the state orders it to get the data and hand it
over, the state can store it.

The case of toll-collection systems, mentioned in this article,
is not really a matter of proprietary surveillance. These systems
are an intolerable invasion of privacy, and should be replaced with
anonymous payment systems, but the invasion isn't done by malware. The
other cases mentioned are done by proprietary malware in the car.

Spyware in Networks

This is not a malicious functionality of a program with some other
purpose; this is the software's sole purpose, and Google says so. But
Google says it in a way that encourages most people to ignore the
details. That, we believe, makes it fitting to list here.