I have stopped participating on Facebook. I’m leaving my account live (so that my post about why I’m leaving is visible), but everything will be shut off as much as possible, and the rest will be ignored. No Messenger, no more posts on my timeline, no notifications, no tagging, etc.

I’ll be spending more time on LinkedIn and Twitter. I hope you’ll follow those pages, or use the Subscribe form on the right side of my blog page.

This isn’t an easy decision because it will be harder to keep in touch with everyone in my life, not least my family (including famous daughter and grandchild) and the many friends I’ve made in my travels. But I’ve decided we must stand up.

The rest of this post explains why; if you don’t need that info, ignore it – but please keep in touch.

I’ve concluded that Facebook is incompetent about security of our data and irresponsible about the side effects of what happens when marketers, bots, and monitors interact with the site. It allows (or fails to stop) unscrupulous behavior by unseen marketers, behind the scenes or even posing as members of patient groups.

In my opinion none of us should entrust a single bit of patient information to Facebook. Of course it’s up to you: you may want to stay, all things considered, and I support you in doing what you want. But be aware of what could be going on behind the curtain.

I’ll discuss three areas, each with multiple pieces of evidence.

1. Covert marketing within patient groups

For most of us, if someone is secretly selling on Facebook it may be merely annoying. But in some cases these people have done really bad things with patient groups.

Treating people this way when they have any kind of medical or mental health problem is flat-out predatory, and I believe patients should be aware that they might want to stay away. I would. (I won’t say “should stay away” because that’s a personal choice. But I won’t stand for it being in dark alleys.)

2. Incompetence at security – and burying the evidence

An especially bad case of skullduggery and self-interest happened last July, when Wall Street was rattling swords at Facebook because FB had not been truthful to investors about the Cambridge Analytica election scandal: SEC Probes Why Facebook Didn’t Warn Sooner on Privacy Lapse (Wall Street Journal). (It’s one thing to mess with the public, but mess with Wall Street and s4!t gets serious, eh?)

Coincidentally, right when that happened, a thriving private FB #MeToo group of 15,000 sexual abuse survivors got hacked by trolls (see the Wired article How a Facebook group for sexual assault survivors became a tool for harassment), who proceeded to post vicious sexual images to certain members, privately or publicly in that group. When the admins reported it to FB, FB didn’t investigate – without warning they ERASED THE WHOLE GROUP, destroying all the evidence – not to mention all the group’s past conversations, networks of contacts, etc.

The company has gone too far, to the point where it’s time to walk away.

3. Incompetence and haphazard management of hate speech issues

Clearly, after the scandals around the 2016 elections and alt-right hate problems, Facebook needed to do something about all the fraudulent accounts and hate speech they were allowing. But rather than figuring out an approach that could have been costly – actually being careful about rules – they went for cheap and sloppy, because “careful” ain’t cheap. The result has been so dishearteningly inept that it helped nail the coffin shut on whether I could tolerate being there.

It’s summed up in two articles about their clumsy handling of censorship vs. freedom of speech – a very delicate issue in these times – which they’re trying to manage by shipping disorganized rules, written by who-knows-whom, to cheap call center personnel in the form of PowerPoint slides!

Nov 2018, Rolling Stone: “Who will fix FB?” including a sample story of a guy whose legit website got banned from FB as collateral damage during a sweep intended to erase frauds … it seems nobody checked whether the rules were working as intended! That is WICKED bad in a software company. Blind, unthinking execution of rules written by someone somewhere, carried out (the article suspects) by workers in low-priced overseas call centers. And nobody checking.

The decision to actually leave Facebook started in mid-December. (It had come up several times, but throughout 2018 it got worse and worse.) Then, right after Christmas this came out:

The leaker said FB “was exercising too much power, with too little oversight — and making too many mistakes.” Mistakes like that can cause harm; harm that happens entirely because the company is being reckless.

Beware of technology carelessly used in the pursuit of large-scale automated profits

A basic reason why business loves automation is that human intervention is costly. “It doesn’t scale,” as they say. (Specifically, to do more of it, you have to hire and train more people, pay them benefits, etc. Silicon Valley likes things you can program into a system and sell to 100 or six billion people at the same cost.)

I love automation as much as anyone (it’s been my whole career), but there are limits: you have to check that the robots aren’t going insane. Especially in cases where harm can result. Like driverless cars. Or healthcare.

Some things truly require human judgment.

Other big tech companies are getting too big and irresponsible for their britches – e.g. Amazon wants to sell its “Rekognition” face recognition software to the TSA, even though (USA Today, July) it misidentified 28 members of Congress in an ACLU test. The software said those 28 faces matched a database of arrest photos!


Are you eager to walk through that software for TSA, at your next flight? Especially if you’re not Caucasian: “Nearly 40 percent of Rekognition’s false matches in our test were of people of color, even though they make up only 20 percent of Congress.” [ACLU]

Note: TSA hasn’t bought Rekognition yet, but USA Today says local law enforcement agencies already have. Do they have I.T. experts who can adjust and evaluate such new technology??

You should have exactly this kind of worry about anyone who’s touting some amazing “AI” (artificial intelligence) as the next miracle. AI is powerful and beginning to do great things – but it must be monitored and checked for unintended harms, or the robots truly will do large-scale harm in our civilization.

Some of the investor-oriented tweets and posts I’ve seen don’t care a thing about whether the stuff is accurate – “Hey, it’s NEW! It’s gonna be great! Don’t miss out – buy some today!”

Not me – not unless a thinking human is doing a sanity check on whether it gives accurate answers.

It’s often said that with great power comes great responsibility. Actions like Facebook’s and Amazon’s go way too far, and the last straw for me was the increasingly clear picture that Facebook truly isn’t going to let the risk of harm to others slow them down.

That would be irresponsible in any walk of life; in criminal law it’s called negligence. In healthcare (where I try to lead) it especially crosses the line into “must not be tolerated” territory.

So, Facebook: as they say on Shark Tank: I’m out.

Additional reading:

“The company also writes that it never sells the data and that users are in control of the data uploaded to Facebook. This “fact check” contradicts several details Ars found in analysis of Facebook data downloads and testimony from users who provided the data.”

Comments

I understand, Nancylynn. Without a doubt Facebook knows that we’ve all gotten accustomed to connecting with each other easily and for free – so they believe they can do anything they want with us.

It’s a classic moral question: when you (and people you care about) are getting a free ride on something you value, how much do you put up with before you say “I’m out”? The past year’s news made me reach that point. :-( (That’s why I included all the detail. I didn’t decide this lightly.)

I honestly suspect Facebook (and importantly its investors) think they can get away with anything in exchange for the free ride. In this case, though, it’s not just me that’s affected – they’re being terribly (and sometimes disgustingly) predatory toward people who have a real problem. I’ve had it.

What do you think – if they were completely open and honest about what they’re doing, would everyone be okay with that? Perhaps, but in that case why be sneaky, and deny it, to Congress and everyone?

Christoph, the article’s not open access – will we fully get the point if we just read the abstract? Some of the concepts you introduce are pretty arcane compared to the thoughts of ordinary daily users. And I sure would like to know what’s behind each footnote.

Dave, I’d like to add Mayo Clinic Connect https://connect.mayoclinic.org to your list of reputable online community spaces for patients and caregivers.

Connect is an open forum for all (you don’t have to be a Mayo patient). To register, people only need an email address (for notifications), a @username, and a password. People can choose to use a pseudonym or their real name. Further information is voluntary. People can disclose as much or as little as they feel comfortable with.

While this article is about breaches and covert (bad) access to people’s information, we all know that the data and discussions that patients share in online communities are gold and can have a profound effect on people’s health and on health care. Working with the magic that happens in patient communities fuels me.

Happy New Year! I also left Facebook last month for the same reasons. They are more worried about the bottom line than about their users. If you don’t know what the product is, then you are the product. I was tired of being the product…
Keep fighting the good fight!

I understand the reason and I agree with the well articulated criticism of Facebook, but I disagree with the decision to leave. Leaving will have zero impact on Facebook, but will deprive many, many patients and clinicians of learning from you.

I have no delusions about Zuck or anyone else at FB saying “Oh no! e-Patient Dave is pissed – we better run!” They’re not my intended audience.

I hope, first, to stir discussion of the cited issues. That includes people thinking about peer health advice on social media, but also the more general public.

Second, I’m not sure all that many patients and clinicians have been reading what I say on FB. I’m not privy to those numbers, since I’m not on a “business page” there. Of course I’ve always tried to teach there, but I don’t know how big an impact I’ve had. (My posts there that got a lot of likes and responses were always the goofy giggles.)

There’s another level of this, though, that I think has more long-term import: as you know, after the 2016 election (with all the talk of skewed content in our social media feeds), I concluded the only “grown-up” solution was to take responsibility ourselves for where we get information, and not just see what drifts by. As you know, I created my own Twitter list of what I called credible publications across the spectrum.

There’s a further aspect, too – I didn’t say it in the post, but it already came up in another comment this morning, so maybe it’s more central than I realized. Your point that you and I want FB’s “microphone,” to reach everyone we can, implies the power they’ve achieved by being “the place where everyone is.” I strongly suspect that power makes them think they can do anything they want to us, because no matter what they do, we ain’t gonna leave. (See what I mean?)

All things considered I made the choice to say there is a limit to how much misuse I’ll tolerate.

In the context of the post-election news realizations, I think it’s essential for us as a civilization to stop saying “Whatever” to whatever crap floats by in our feeds. We know there’s garbage, and the remedy is not to shrug, it’s to say “Not good enough” and look elsewhere.

I deleted Facebook in 2016. The funny thing is that no patient groups discussed any of this. Most of them appear to be run by content marketers and influencers. If you really want to see how much data they have access to, go and read up on Palantir. Corporations like Palantir and Optum seek out keywords and scrape data. They have full access to Facebook. The information is used not only for deceptive marketing but to create counter-narratives so people don’t question anything. The owner of Palantir, Peter Thiel, believes that even marketing pseudoscience to desperate patients is a good thing, because there is a profit to be made.
Keep your eyes open, people! It does not look like there will be any regulations any time soon. Zuck got busted paying kids to turn over all of their FB data, and there was no reaction.

Trackbacks

[…] month I suspended my Facebook activity and posted Facebook, I’m out. Your irresponsibility with patient groups has gone too far. It’s been hard because 87 times a day I think “I gotta pop this on FB” or […]

[…] My post January 7 about why I stopped participation on Facebook has drawn lots of discussion, which is great. In this era, it’s increasingly important for us all to THINK about the information that floats past our eyeballs: “Wait – who put this here?? Do I trust it?? What if it’s wrong??” […]