This reminds me of the long conversations I used to have with family members and friends several years ago, with their continuous requests that I create my own Facebook profile so I could keep in contact with them and their activities, and share my whereabouts. I always used the same argument to reject these suggestions: "I don't want Facebook to have too much data about me, beyond what you have already provided."

I got used to the looks of disbelief, to being thought some sort of hermit, some antisocial type.

I also got tired of answering the frequent "Why don't you have Facebook?" questions.

I remember the last time I had this conversation with someone, around August of last year (2017). I had found a new love partner, and after the long intimate talks on the phone, they requested the usual "intimate pictures": not necessarily sexual, but certainly sexy. While I have no taboos with regard to my sexuality, understanding how the Internet works, I have always refused to send that type of image/video/audio, and I always tried to patiently explain my constant refusals. Unfortunately, expecting a non-tech-savvy person to understand how data moves around the Internet is mostly wishful thinking, and even if they do understand, they ultimately don't care, because the result doesn't change: you don't get to share something with them, and that affects personal interactions.

I am sure that the deletion of media files in services like Facebook was never meant to be absolute. Many of my colleagues believe the same thing I do: Facebook and other services do not actually delete data; they just mark it as "deleted" and purge it only if they need the space. It is the same way a hard drive works: you don't really delete a picture when you hit the "delete" key, nor even when you empty the "trash" folder. The data is still there, where it was; it just loses the links to the metadata.

It is sad how this information becomes news only when bad things happen.

> I am sure that the deletion of media files in services like Facebook was never meant to be absolute. Many of my colleagues believe the same thing I do: Facebook and other services do not actually delete data; they just mark it as "deleted" and purge it only if they need the space.

"The delete operation is simple – it marks the needle in the haystack store as deleted by setting a “deleted” bit in the flags field of the needle. However, the associated index record is not modified in any way so an application could end up referencing a deleted needle. A read operation for such a needle will see the “deleted” flag and fail the operation with an appropriate error. The space of a deleted needle is not reclaimed in any way. The only way to reclaim space from deleted needles is to compact the haystack (see below)."

We don't soft delete payloads at Raygun (https://raygun.com), precisely because if one of our customers wants to delete something, it's typically because they sent something they don't want a third party to have. We have PII filters and other scrubbing tools, but every now and then something might be sent by mistake.

Having said that, you'd be amazed how often folks ask for things to be undeleted (despite a big warning dialog).

It isn’t that hard to combine soft deletes with delayed hard deletes: generate a new encryption key every day for “data deleted today”, and encrypt deleted data with it. After X days, destroy the decryption key.

If you use asymmetric encryption, you can keep the group of people who can recover "deleted data" small. You could even have an independent party generate your encryption key pair, give you the encryption key, and give your customer, on request, the decryption key (I think there is a business model for a non-profit here).

Because the key is smaller, it is easier to make sure you have deleted every copy of the key than every copy of the data. The data also might be part of a larger backup that you would have to take apart and reassemble in order to delete it, or might be in a place where doing that is costly (e.g. on Amazon Glacier).
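The scheme described above (sometimes called crypto-shredding) is easy to sketch. This toy Python version, with names of my own invention, uses one key per day and a deliberately fake XOR "cipher" as a stand-in for real authenticated encryption (AES-GCM or similar); only the key-lifecycle logic is the point:

```python
import secrets
from datetime import date, timedelta
from itertools import cycle

RETENTION_DAYS = 30
day_keys = {}   # day -> key; destroying an entry "hard-deletes" that day's data

def _xor(data: bytes, key: bytes) -> bytes:
    # Placeholder cipher for illustration only; use AES-GCM in real code.
    return bytes(b ^ k for b, k in zip(data, cycle(key)))

def soft_delete(payload: bytes, today: date) -> bytes:
    """Encrypt deleted data with today's key; the plaintext is discarded."""
    key = day_keys.setdefault(today, secrets.token_bytes(32))
    return _xor(payload, key)

def undelete(blob: bytes, deleted_on: date) -> bytes:
    if deleted_on not in day_keys:
        raise KeyError("key destroyed: data is unrecoverable")
    return _xor(blob, day_keys[deleted_on])

def purge_old_keys(today: date) -> None:
    """After RETENTION_DAYS, destroy keys; old ciphertext becomes garbage."""
    cutoff = today - timedelta(days=RETENTION_DAYS)
    for day in [d for d in day_keys if d <= cutoff]:
        del day_keys[day]
```

The operational win is exactly the one argued above: the purge job only has to find every copy of a 32-byte key, not every copy of the data.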

The question is not whether we can encrypt at the storage layer. We're now talking about encryption as a soft-deletion method, which means we still need to know everywhere the data is stored at deletion time, whether we delete it outright or encrypt it with this new "deletion" key.

Thanks for raising that issue. I was somewhat confused by the mention of encryption as a soft-deletion method... it made precious little sense to me, but everybody seemed to go along with it, and I thought I was missing something fundamentally 'right' about the idea. Turns out it's not so.

If they upload a private key and delete it because they "don't want a third party to have" it, do you also guarantee it wasn't seen or cached anywhere else? I don't know the details of that product, but I usually treat anything uploaded even once as compromised from that point on.

This is the same argument people used to make for why it was fine for capabilities to be irrevocable: someone could have copied the data anyway (or whatever), so there was no point in revoking it. In reality, most of the time nobody but the host of a deleted item has access to the data, has a way to tie it to the originator, and has a motive to use it, especially without significant effort. Being able to delete things is a very important feature (not to mention a legal requirement in many countries!), and it's disturbing to me how many people seem to want to justify a world where every bit of data is saved, forever.

> Not to alleviate facebook of blame, but who's to say data on almost every other social media service isn't also just flagged for deletion?

The word "delete" has a pretty clear definition to most users. Facebook is one of the most used pieces of software in the world. If FB is allowed to lie to its users, it would indeed give a pass to just about every social media service out there.

The reason Facebook is special, and deserves special scrutiny, is because of its power. If FB establishes a bad behavior, it will become the norm.

A more prudent question is whether these tech companies should be reined in by federal privacy law. Should they be allowed to collect, trade, and analyze private data on all of their users? Where do we draw the line in terms of what's acceptable and what's not?

These are incredibly important questions. A related field is the credit bureaus, such as Equifax: global companies that store social security numbers and all sorts of other information. We need a national set of rules for these companies to follow.

I'm not keeping my hopes up, given how dysfunctional our Congress is these days.

Does it make it OK for Facebook to do it just because similar companies do it? I say no; all of them should delete something when I say to delete it. And "everyone does it" makes it a bigger problem, not a smaller one.

A lot of the big agile companies are using event sourcing. So there isn't even a model to delete. It's all events with the models being created from a snapshot of events. The event stream is usually durable and lives forever.

So with this type of system nothing is ever "deleted". It's just an event that something is deleted.

This is a common and very scalable system. You don't deal with models, you deal with events (and a model is a snapshot of events).
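A minimal sketch of what that looks like (hypothetical names, not any particular framework): the event log is append-only, a "delete" is just another event, and the current model is a fold over the log, so the "deleted" item is still sitting in the stream:

```python
# Append-only event log: nothing is ever removed; "delete" is just appended.
events = []

def record(kind, item_id, data=None):
    events.append({"kind": kind, "id": item_id, "data": data})

def current_state():
    """Rebuild the model by replaying every event (a fold over the log)."""
    state = {}
    for e in events:
        if e["kind"] == "uploaded":
            state[e["id"]] = e["data"]
        elif e["kind"] == "deleted":
            state.pop(e["id"], None)
    return state

record("uploaded", "photo1", b"...jpeg bytes...")
record("deleted", "photo1")

assert "photo1" not in current_state()          # gone from the model...
assert any(e["kind"] == "uploaded" and e["id"] == "photo1"
           for e in events)                     # ...but still in the log
```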

This is an issue even beyond event sourcing. Other companies that use a traditional model architecture have backups too. You ask for something to be deleted and they might actually delete it, but what about last week's backup? It's not deleted there.

User expectations about a deletion would probably be "make it as though this was never uploaded": no copy, no backup, no recoverable form whatsoever. And yet, if it was not deleted, and there was a problem requiring backups to be used, they would expect to never even know about it, just that all their data would remain.

It's very much against the rules in event-sourced systems to change history. But maybe that just doesn't matter. If it means you can never meet a user's expectation about privacy, you could either tell the user that everything persists indefinitely, or, when something is deleted, go back to the upload event and remove it, rebuilding history with any event related to that uploaded item ignored: putting the user above the "purity" of the software, and creating potential problems elsewhere.

Even on backups in long term storage, there could be some process of creating new copies of the backups with any needed modifications on some kind of schedule, so deletions can propagate over time.

Ultimately the challenges here are financial. We could delete things thoroughly if we were willing to pay for the developer time and other resources needed to make it work.

Why would they? They implement their system the way they want to. Also, this is a completely logical way to deal with deletions; it's what I would do, and what I have done when I built a simple CMS. I don't want an endless quarrel with a customer who "accidentally" deleted something and wants it back. I just flip the switch and it's back.

Do you mean the delete button is a lie? Why would it be a lie? Can you or someone else access the deleted video from Facebook.com? Or in another public way? Isn't it deleted from this point of view?

I am not defending Facebook in any way. I just don't understand why everybody is surprised about these things. Do you think that if you click delete on a YouTube video, it is physically deleted from all of their servers?

I recall the distinction being made very clear on LiveJournal between "deleted" content vs "purged". I would be very surprised if they were not being forthright about this. Of course this was 10 years ago, before the Russian ownership. So I do have reason to believe that not all companies act in deceitful ways when it comes to retention of user data.

It is best to never have any Chinese company store your data. They are by law (and under severe penalties) required to make all data in their possession available to government officials at any time that it is requested. Dictatorships are like that.

Chinese firms are generally required under threat of arrest to store and/or transmit all data on users. A single Chinese firm failing to delete data would have little or no negative impact within China given that they’re probably already secretly required to do exactly this. Do you mean US customer impact?

I understand your skepticism considering the behavior we've seen from some of these companies recently — but when their settlement with the FTC includes an independent company monitoring their handling of user privacy for 20 years, I think it's safe to trust them on this one.

>Snapchat servers are designed to automatically delete all Snaps after they’ve been viewed by all recipients

>Snapchat servers are designed to automatically delete all unopened Snaps after 30 days

Yeah, I don't see much difference between this and hitting delete on a file in a local file system. The data itself still sits there until the sectors get reclaimed, but there is no longer a file name or directory entry associated with it.

The database we use (Vertica) works this way: nothing is deleted; instead it is flagged as deleted. A background task may purge old data (older than X). Historical queries show the database state as it was days or weeks ago. If the background task is broken (a bug?), the data stays on disk indefinitely.

Not at all. All my reasons were technical, in that events are part of a stream, and delete is just one more event. When you reconstruct the stream, the end product is that the item is deleted. But you could recreate the item from the stream, so it is technically not deleted.

Consider companies that take daily backups. If a user asks to delete something, do they now comb through their backups (which might be offsite or in cold storage) and delete it there too? It's essentially the same thing.

Choosing to adopt a technology that makes deletions impossible absolutely shows a lack of respect for user decisions. Not building in the ability to deep delete from backups is the same. There is no technical restriction on deleting data, just company decisions that make it difficult.

Isn't every business that has backups in the same boat? Event sourcing is just like having continuous backup.

So you're saying that every company that has a backup system, and that doesn't regularly go through the backups and remove individual files because users requested it, is showing a lack of respect? So companies with offsite backups should, according to you, have policies in place where user data is also removed from those offsite backups?

For example: a soft delete may be just a stronger version of public vs private settings. The whole software infrastructure still assumes a link exists and doesn’t need to cover cases where it really isn’t there. I could see how that makes maintaining indexes etc easier.

Flipping a flag and then filtering out results down the line based on the delete setting is probably much easier than actively removing them from an index.

And if deleting is rare (it probably is), then the performance and resource impact should be minimal.

Banning accounts using this add-on (as a breach of the ToS) would be a formality for Facebook, if it became an issue in the first place (it's unlikely that a sufficient number of people will bother doing this).

Unfortunately there will be no prizes for having been right all along.

Even now, as facebook is burning, statements of how one has quit or will be quitting facebook get swept into the pile of incendiary indignation, with encouragement from all sides.

But never having used facebook, even at significant personal effort as you indicate, one is relegated from "elitist" before to "smug" now.

One day in the future a recruiter will ask why there's nothing about you on the Internet, and you will proudly be able to say: "Because I know the Internet and its dynamics that well" and they will hire you, in awe of your analytical foresight.

That's the dream anyway, because you're more likely to be reported for being suspicious. After facebook there will be another facebook, and another, and people will flock to them just the same, and you get to experience being an antisocial hermit all over again.

Now I made myself sad. "Social Media: even more depressing when you're not on them!"

"I am sure that the deletion of media files in services like Facebook has never meant to be absolute." This is very common, I'm sure. There should be a way to request or a right to request permanent deletion, by law, of one's data on site like Facebook. That said, once something is on the internet, anyone can and will archive it (see https://www.reddit.com/r/DataHoarder/). Closing an account, however, should imply permanent deletion. Companies are instead able to operate in a gray area through terms of service agreements that knowingly play on the ignorance of the end user. This common and widespread behavior is a detriment to the user and (arguably) society at-large.

Obviously I'm not privy to the details of this particular requirement, but I'm fairly certain that very few, if any, of our videos actually go away when we delete accounts (or even when we delete the videos themselves). I think this because I've seen images from SMS texts, Instagrams, Snapchats, and things of that nature used in court cases. So law enforcement must have access to that stuff somehow? But, again, I'm not privy to the technical or legal mechanisms they use to make that happen. All that said, I have seen images from services like these in court cases, and defendants have CLAIMED that they had deleted them (for whatever value of "deleted" exists on the given service).

So I'm wondering if the services actually have some sort of archiving requirement for law enforcement purposes? Maybe for a certain number of years, they have to save your data or something like that?

If there's anyone here familiar with the legal obligations of these services vis-à-vis data archiving, I'd be really interested in hearing more about what we should reasonably expect from these services in terms of deletion.

> So I'm wondering if the services actually have some sort of archiving requirement for law enforcement purposes? Maybe for a certain number of years, they have to save your data or something like that?

Apart from a handful of specific cases like financial data, the US has no general data-retention laws. You can delete stuff aggressively as long as it's based on a consistent archival policy, not one-off deletions where you risk looking like you chose a particular thing to delete to hide evidence.

You can tell this is possible in practice by looking at how common it is to have aggressive permanent-deletion policies in corporate email, at least outside of tech. A number of big US companies automatically delete read emails in employees' inboxes after N days (with N ranging from 7 (!) to 365), unless the employee specifically takes action to refile the email into a project folder with a different per-project retention policy. The goal of those policies is to reduce companies' exposure to fishing expeditions in future lawsuits by just keeping less email around. To make that effective, the policies really do delete the emails, including from any backup systems.

Given that they have figured out how to perma-delete their own old email, I believe companies could really delete user-deleted content, perhaps after some specified period of time, if they wanted to. But unlike with their own internal emails, they don't have the same incentives to be aggressive about purging that stuff from their servers. If anything, they have the opposite incentive, to keep as much user data around indefinitely as possible.

GDPR is intended to at least force service providers to give folks the right to be forgotten, which compels providers to delete data. While it only applies in Europe, it's difficult to comply without just making a general decision about honoring these requests.

Actually, GDPR only requires that any links from the data to the user should be destroyed, so that you can no longer figure out who created the data. This means that a lot of data will be left. And realistically I think that a lot of it will remain identifiable, just like anonymized data can be traced back to real users pretty easily if you have enough data points.

My understanding is that an image is by itself PII, regardless of whether or not it has any additional information associated with it. I don’t think there’s a way to retain images without contravening GDPR.

I’m not sure I understand what you’re saying, but I think you’re misreading ”Different pieces of information, which collected together can lead to the identification of a particular person, also constitute personal data.”

What that says is that, if (A,B,C) identifies a person, each of A, B, and C, in isolation, is personal data, not that you will be allowed to keep the pair (A,B) if it doesn’t.

One can mathematically cut each bit of information into units of arbitrarily small entropy. So, if taken to the letter, "this user is not Mark Zuckerberg" would be personal data. I doubt jurisprudence will go that far, but we'll see.

> Actually, GDPR only requires that any links from the data to the user should be destroyed, so that you can no longer figure out who created the data.

Not in this case, because if the photos or videos contain recognisable people then they are themselves personal data.

How far the new subject rights involving data deletion will go in practice is one of the biggest unknowns with the GDPR. Clearly, from a technical point of view, we understand that deleting a key isn't the same as deleting data from a disk, and often the same applies to deleting a file in a filesystem if the underlying storage isn't robustly wiped as well. Throw in the kinds of distributed architectures, redundancies and backup systems that many organisations use, particularly in the era of cloud-based hosting and off-site backup services, and you have an unfortunate conflict. On one side, data that isn't truly deleted still carries some risk of leaking even if it's intended to be beyond use, contrary to the spirit and possibly the letter of the new regulations. On the other, ensuring robust deletion of all copies of personal data when a suitable request is received can carry high or even prohibitive implementation costs.

> Facebook and other services do not actually delete data, they just mark it as "deleted" and purge it only if they need the space.

You may be correct, but that doesn't explain why Facebook decided to include so-called deleted files in a download of user data. Clearly these deleted files are still a part of Facebook user profiles and accessible to company data mining software. Facebook has exposed their own duplicity.

Maybe the Facebook development processes and tracking of tech debt is just shit.
First person: "I'll just flag the content and then it won't show on their timeline!"
Second person: "I'll just select all the records that belong to this account when packaging a backup. All the deleted content should be gone!"

When storage is cheap, it's rational to develop the delete flag first and think about cleanup later, which means never. The download-content feature seems like a low-priority project, and the poor intern who probably built it didn't want to figure out how each store keeps its delete flag. At least it's honest. Would you be surprised if a dd of your SD card showed your deleted photos?

<cynical view> When a _customer_ requests data to be deleted, you delete it. I'm pretty sure Facebook has complied with every user-data-deletion request they've ever got from their paying customers, because advertisers are well known for wanting access to less data about the cattle...

If your ethics are such that you believe the state should be able to view data on someone in order to help the prosecution of a crime, then you could support the retention of data on all users in order to prevent deletions made to hide criminal activity.

Such an ethic creates a moral reasoning to not comply with an individual's wishes in the immediate deletion of data.

(FWIW I'm not defending this position nor suggesting it's the case here, just you said there's no moral reason that can support it, which seems wrong; different ethical systems can provide different reasoned moral outcomes.)

There might be something to that, in the sense that I could very well see someone in a meeting ask 'well, but what do we do when another user is tagged in a photo' followed by discussion rationalising why that person shouldn't lose out because someone wants to delete data, and someone coming up with the 'solution' of effectively reference counting the data and figuring out when to actually purge the underlying file later (i.e. never)

Or possibly they just screwed up. Perhaps the "soft delete" was originally intended to allow "undelete" by the user with delayed purge, and/or single-instance storage with reference counting that they never quite got around to finishing.

> but that doesn't explain why Facebook decided to include so-called deleted files in a download of user data.

This happened because the person tasked with writing the code to build the archive forgot to include the filter for "deleted" records somewhere in the code.

I.e., they forgot the "where is_deleted = false" part in one or more DB queries like this:

select * from table where is_deleted = false;

This is the biggest problem with the "soft delete flag in database" method of deletion. Every single query writer, everywhere, forever, must always remember to include the "is_deleted" filter in their queries. And when they don't, what was deleted reappears as if it had never been deleted at all.

That is a good point, but flagging shouldn't be the end of the line for soft-deleted data. There should be a process that goes back and removes everything flagged for deletion, prioritized to guarantee deletion within a set time frame but without impacting performance. Meanwhile, most queries should go through a view that automatically masks out flagged data. It's a basic data-integrity feature that shouldn't be left to their API (which is such a fast-moving target that one developer often doesn't know what another is doing).
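Both halves of that suggestion, a masking view plus a background purge job, are cheap to sketch, e.g. with SQLite (table and column names are illustrative):

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.executescript("""
    CREATE TABLE posts (id INTEGER PRIMARY KEY, body TEXT,
                        is_deleted INTEGER NOT NULL DEFAULT 0,
                        deleted_at INTEGER);  -- unix time of the soft delete
    -- Query writers use the view, so they can't forget the filter.
    CREATE VIEW active_posts AS
        SELECT id, body FROM posts WHERE is_deleted = 0;
""")
db.execute("INSERT INTO posts (body) VALUES ('hello'), ('oops')")
db.execute("UPDATE posts SET is_deleted = 1, deleted_at = 1000 "
           "WHERE body = 'oops'")

def purge(now, grace_seconds):
    # Background job that actually removes rows once the grace period passes.
    db.execute("DELETE FROM posts WHERE is_deleted = 1 AND deleted_at <= ?",
               (now - grace_seconds,))

assert [r[1] for r in db.execute("SELECT * FROM active_posts")] == ["hello"]
purge(now=100000, grace_seconds=3600)
assert db.execute("SELECT COUNT(*) FROM posts").fetchone()[0] == 1
```

The view fixes the "every query writer must remember the filter" problem from the parent comment; the purge job fixes the "flagged forever" problem.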

Excellent: they'll be able to offer advertisers another demographic category, "People who are vocally privacy conscious, but who aren't prepared to do anything about preserving it if it means they don't get to play Farmville."

Why not just every day? Only use it as much as absolutely necessary (to communicate with people you wouldn't be able to reach otherwise) and use competitors instead. Even using FB-owned alternatives (e.g. WhatsApp) would help: while FB still gets some data, they don't get the contents (unlike FB Messenger, all chats are end-to-end encrypted), and most importantly no ad revenue. And lower revenue is what would truly change Facebook's policies.

A bunch of companies get a lot of their customers through Facebook. Those will probably support FB (financially and otherwise) as long as there are no obviously better ways to get customers. Not for any sentimental reason, just because their business relies on it.

Not necessarily. Businesses are usually fine with making things worse for themselves if it also impacts all of their competitors, preferably even more. That is, as long as a business comes out a bit ahead of its competitors, it is usually fine with making something worse for everybody.

So, even if a business currently gets a lot of revenue from Facebook, as long as it thinks its competitors are more dependent on Facebook than it is, it should be fine with Facebook declining.

It is more effective to organise for a cause than against a politician. Presidents are intentionally difficult to remove. The bar for promoting action against Facebook is lower than for prompting action against the President.

> It is sad how this information becomes news only when bad things happen.

What bad things? I feel that's the part missing from the argument. People have yet to see or hear what the negative consequences are of all that data being kept, leaked, or re-sold.

The only one they've started to hear about is the potential impact on elections, which I feel seems pretty hypothetical and weak to most people. Or maybe identity theft, but that's more related to the Equifax leak.

I think it's important to reason about the real consequences of our data no longer being private. Is it really dangerous? What's the worst that could happen? What are the chances of it happening?

Just send a link to the picture (or document, or whatever confidential information you want to share) as a password-protected resource on your own server (or even a laptop or desktop machine, if you have a globally routable IP address there). Facebook's automation is not smart enough to grab the password from the very same conversation, and even if it could, I'm sure they wouldn't do it, knowing you'd catch them in your access logs and press charges for unauthorized access.

I doubt many would object and insist on sending via a very specific medium (i.e. strictly require pics in a FB Messenger). Some, of course, may find this inconvenient.
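The self-hosting side of that suggestion can be sketched with nothing but Python's standard library: a tiny HTTP server that serves one payload only when the request carries the right secret token. The token, path, and port here are all made up for illustration:

```python
import threading
from http.server import BaseHTTPRequestHandler, ThreadingHTTPServer
from urllib.error import HTTPError
from urllib.request import urlopen

SECRET = "s3cret-token"   # share this out of band, not in the same chat
CONTENT = b"the confidential picture bytes"

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == f"/pic?key={SECRET}":
            self.send_response(200)
            self.send_header("Content-Type", "application/octet-stream")
            self.end_headers()
            self.wfile.write(CONTENT)
        else:
            self.send_error(403)   # wrong or missing token: refused and logged

    def log_message(self, fmt, *args):
        pass  # keep the demo quiet

server = ThreadingHTTPServer(("127.0.0.1", 0), Handler)
threading.Thread(target=server.serve_forever, daemon=True).start()
port = server.server_address[1]

# The recipient fetches with the token; anyone without it gets a 403.
assert urlopen(f"http://127.0.0.1:{port}/pic?key={SECRET}").read() == CONTENT
try:
    urlopen(f"http://127.0.0.1:{port}/pic")
    raise AssertionError("should have been refused")
except HTTPError as e:
    assert e.code == 403
server.shutdown()
```

In real use you would prefer HTTPS and an auth header over a query-string token (URLs tend to end up in logs), but the principle of keeping the secret out of the Facebook conversation is the same.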

Where I live, Internet providers deliberately make self-hosting anything extremely hard.

Then they often charge 5x or more their normal price to let you host things, but add lots of exceptions. For example, all providers put in the contract that they can immediately cancel your subscription if they detect you hosting anything IRC-related, no matter whether it is an IRC server, an IRC bot, or a server for an open-source IRC client...

I think a great way to explain privacy limitations to a non-tech-savvy person is to walk them through using GPG.

Once someone understands public and private keys, and webs of trust, there really isn't much left to learn. For someone who understands keypairs, the limitations of Facebook, Twitter, DRM, and the like are obvious.

It seems most of us are afraid our non-tech-savvy friends and family won't be able to wrap their heads around security, but not understanding it has gotten us into a pretty bad situation. We should really stress the importance of learning about it.

Okay, don't assume people won't be interested in interesting things. Who is this "general public", anyway? It's not a homogeneous group; it's made up of physicians, mechanics, teachers, lovers, Doomsday preppers, engineers, preachers, and all kinds of people with special interests. What I see is that if you show them how it matters to them in their special role, rather than as members of some general public, they may well take an interest. Some of them may become very deeply interested indeed, if they needed such a thing but didn't know about it until you showed them!

Honestly, it's not been that interesting to me in general. It's only interesting to me for the same reason it might be interesting to the sorts of people I enumerated: because of the ways it can be useful to me. I don't really care about how it works, in depth; I just want it to keep my stuff private. The only difference is that I have just enough technical expertise, as a programmer, to see its applicability without having it explained in a sympathetic manner.

Especially if their tech-savvy friends are confident they can learn about it - because it really isn't that complex - and if they understand that keypairs and trust are the basis for literally all digital security.

I'm fairly tech savvy (OK, I'm an expert compared to my non-tech family and friends, but not compared to people here). I even had a copy of PGP on my Windows 3.1 machine shortly after Phil Zimmermann created it. I didn't understand it then, and I don't want to understand it now. The better and easier solution has been to avoid putting stuff on the Internet that I don't want anyone to know about me.

I'm a Linux sysadmin, and GPG is horrible: complicated, complex, with a weird naming scheme and multiple programs (gpg vs. gpg2), etc. But it's a brilliant example of why all of this is so complicated. Other ideas for explaining the trust problem on the Internet are welcome.

Symmetric encryption is also popular and ubiquitous, and I don't think it can be practically explained to most non-technical people either. Encryption schemes also often use hybrids of symmetric and asymmetric encryption; the two are useful in different scenarios.

I went through the same issues with friends and loved ones wanting me to create a Facebook account. I resisted for years with the same arguments you made. It had some unfortunate consequences: https://news.ycombinator.com/item?id=16675681

How did you arrive at that conclusion? I assume Twitter retains everything as well (even "deleted" tweets) and it's all associated with an email address. Or did you mean it in the sense that far fewer people have a Twitter account?

I used to believe Twitter was better. But once you're above a certain number of active (i.e. publicly retweeting) followers, there's a pretty high chance that your tweets will end up in the feed that is used to generate the Twitter stream archives:

These are tar files that contain bz2 compressed newline separated twitter events as json. These include deletion events as well, so you can for instance easily estimate the time an auto-deleter is set to.

Yes, they're huge archives, but you could still probably process a year of these for particular targets for under $10 on EC2.

Whilst I'm impressed with Archive Team's efforts, I would be surprised if there aren't commercial Twitter stream consumers that absolutely dwarf this.

Treat everything you put on twitter as public forever and you won't go too far wrong.

Well, I never said I did believe it was private. When I said better, I should really have said better behaved in respect to deletions.

Because of the twitter stream APIs it's not. But there does seem to be a strange presumption amongst users that deleted tweets are gone from public view and cannot resurface. There are people who use tweets in all manner of ways that they really weren't designed for, some of which involve deleting them after a few minutes.

Many a public figure uses these tweet deletion apps. Some do it for more honest reasons (status count limits -- do they still exist?), others do it to limit their exposure.

In the UK at least, there have been libel cases where either the claimant or the defendant relied upon Twitter, and in at least one of these the court acknowledged that the claimant had an unfair advantage through having forgotten about a tweet deletion app attached to their account. The case proceeded and the claimant won despite the acknowledged advantage. To some, this may read as a clear message that, in the eyes of the judiciary, it's okay to delete tweets (evidence) as long as it was done through an auto deletion app whose existence the individual concerned had forgotten about.

I would not at all be surprised if some lawyers to the rich quietly suggest they install a tweet deletion app as general advice upon instruction.

Twitter is more about finding your own social graph of people you find interesting than friends/family/coworkers like Facebook is primarily about. I could have a completely anonymous persona on Twitter and get all the same content. I could use a fake name on Facebook but it wouldn't make as much sense, and I could be reverse engineered with some accuracy from just my social graph. Your family is going to tag you as family, etc. The other non family and friends content on Facebook is more watered down than on Twitter and Facebook wouldn't be worth using for that alone.

>I remember the last time I had this conversation with someone, last year (2017) around August. I found a new love partner, and after the long intimate talks on the phone, they requested the usual "intimate pictures", not necessarily sexual but certainly sexy.

Why the fuck are these a thing? Couples don't meet in real life much anymore? And how "usual" are they?

Anyone have stats on how widespread this is? My spouse and I avoid being in front of cameras naked even when we're pretty sure the camera isn't enabled. Not that anyone else would really want to see us nude, but why take a chance on accidentally recording material that could be embarrassing?

> expecting a non-tech-savvy person to understand how data moves around the Internet

Then we - the people that do have the necessary technical knowledge - have a duty to teach them what they need to know. This isn't necessarily "how data moves on the internet". Yes, this can be difficult and tedious, but understanding the risk profile for data/networks is increasingly important as networks become involved in everything.

> they ultimately don't care

Again, it's our duty to teach them why they need to care. This probably shouldn't involve a lecture on networking or data analysis, but instead tailoring an explanation to their personal situation and knowledge.

I don't think it's because they don't understand or because they don't care, it's just overwhelming. Think about it: to have any basic grasp of the security infrastructure of the internet you need a basic understanding of network connections, how HTTPS works, how files are stored on your computer, how files move between computers, how your average database works, etc.

Think about the last time you tried tinkering with something you're a noob at. Maybe it's deciding to fix your car engine yourself even though you were never a mechanic. Maybe you decided to make a complicated cake and halfway through you realized that you had overestimated your pastry skills. Try to remember the feeling of helplessness you felt at that moment, the "I have no idea what I'm doing and I wish I had never started this in the first place". In my experience that's how 90% of people feel when trying to do something technical with a computer.

A few weeks ago a colleague from HR asked me if I could make a backup of a computer because it contained some critical stuff and she wanted to be able to restore it later if necessary. I said okay, booted up a Debian live USB stick I had lying around and started dd'ing the drive to external storage. When I told her the copy was in progress she said, "But I didn't give you the password?". She was amazed when I told her that I didn't need the Windows session password to access the data on the disk. I swear I'm not making it up when I say that she asked me if I was a "hacker".
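The whole trick is that, from a live environment, the internal disk is just a block device and the OS login password never enters the picture. Roughly (device and mount paths are illustrative — always double-check them with `lsblk` first):

```shell
# image the internal disk to external storage, byte for byte,
# no Windows password required
sudo dd if=/dev/sda of=/media/usb/backup.img bs=4M status=progress conv=fsync
```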

That made me realize that there are probably many people out there who think their files are safe as long as their Windows password isn't compromised, even if the disk is not encrypted. After all, they can't access the files, so surely nobody else can? If Facebook says my photo is deleted, then surely it must be? Why wouldn't it be?

I don't think it's fair to blame these people, we've designed so many strange patterns over the past decades in software that it's difficult to keep track. Maybe having "delete" not actually delete should be considered a dark pattern. Maybe it should even be illegal.

And how would we do that? Every time I've tried to explain privacy issues to non-tech individuals, at best they consider me paranoid and at worst a fucking sociopath who doesn't have a FB profile because I can't relate to other people. I can't carry this burden and I doubt many can.

There have been horror stories over the years about identity theft, even before the emergence of social media. Has this stopped anyone outside our community from posting details about their lives online? I hardly think this whole situation with FB will change anything in the end.

I don't feel I have any obligation/duty towards anyone. If they want my opinion or ask me about an issue I'll gladly inform them. But I won't start a crusade for a better-informed society. The internet was supposed to do that and we ended up with videos of cats and wannabe celebrities posting seminude pics on Instagram. Fuck that shit.

Your view is well represented on the Internet, and is perhaps most aptly exemplified by the early jargon word “luser”, and the BOFH phenomenon. I have never, I think, really been prone to such thinking. I have never had a problem talking to ordinary people or users, or felt the immense frustration which many people have vividly described. (Note: I am a sysadmin with approximately 20 years of professional experience, and have always had a user-facing role as at least a part of my job.)

It reminds me of how in Zen Buddhism there are those who become enlightened and go off to do their own thing, and those who become enlightened and stay in the world with the rest of the ordinary unenlightened people. In the words of Alan Watts:

The understanding of Zen, the understanding of awakening, the understanding of– Well, we’ll call it mystical experiences, one of the most dangerous things in the world. And for a person who cannot contain it, it’s like putting a million volts through your electric shaver. You blow your mind and it stays blown. Now, if you go off in that way, that is what would be called in Buddhism a pratyeka- buddha—“private buddha”. He is one who goes off into the transcendental world and is never seen again. And he’s made a mistake from the standpoint of Buddhism, because from the standpoint of Buddhism, there is no fundamental difference between the transcendental world and this everyday world. The bodhisattva, you see, who doesn’t go off into a nirvana and stay there forever and ever, but comes back and lives ordinary everyday life to help other beings to see through it, too, he doesn’t come back because he feels he has some solemn duty to help mankind and all that kind of pious cant. He comes back because he sees the two worlds are the same. He sees all other beings as buddhas. He sees them, to use a phrase of G.K. Chesterton’s, “but now a great thing in the street, seems any human nod, where move in strange democracies a million masks of god.”

> I am sure that the deletion of media files in services like Facebook has never meant to be absolute. Many of my colleagues believe the same thing that I believe: Facebook and other services do not actually delete data, they just mark it as "deleted" and purge it only if they need the space.

This is a dumb conspiracy theory. Facebook has made plenty of public statements that say otherwise, and there's a whole team that works on the system that ensures every trace is erased from disks, logs, cold storage and backups when deleting content.

The accepted answer to the first link you posted explicitly calls out:

> There is one class of data that you have to delete - and that's personal data that the user doesn't want you to hold any more. There may be local laws (e.g. in the EU) that makes this a mandatory requirement (thanks Gavin)

This is exactly the type of data we're discussing here. So no, contradicting the user's expectation when handling personal data is not a "best practice".

Disclaimer: I deleted my Facebook account a couple years ago and never looked back.

That said, Facebook is just the one getting collectively stabbed with the pitchfork right now. Engineering best practices are one thing. My right to privacy is another. As an engineer I care about efficiency; as a human I care about privacy, and my rights win over any technocratic babble. Sorry if I am being harsh. I am, of course, not surprised. Engineers are lazy at best, and at worst something truly sinister is brewing.

I agree that you have the right to privacy, but there are also technical reasons why instant deletion is not always possible. If they can guarantee that the data will be gone after X days, then that's fair to me.

...as if I didn't already have enough reasons to hate that cliche, thought-terminating phrase... every situation is unique and figuring out what exactly to do for your particular one is probably the main purpose of being a software engineer.

Except when it’s not, and they want back the data they’ve deleted by mistake.

In those cases it will take a lot of support to explain that what is gone is gone. I think customers don’t have a unified vision of what deleting means, they just want what’s optimal for the situation.

> There is one class of data that you have to delete - and that's personal data that the user doesn't want you to hold any more. There may be local laws (e.g. in the EU) that makes this a mandatory requirement (thanks Gavin)

These best practices are about database records, not about files. I'd be very surprised if Facebook stored files as database blobs; those are generally stored on a separate system, and it's quite reasonable to delete the file while keeping the metadata in the database.

The most unsettling part is in Facebook's response: “We’ve heard that when accessing their information from our Download Your Information tool, some people are seeing their old videos that do not appear on their profile or Activity Log. We are investigating.” Who wants to bet against their investigation being “how to keep users from seeing it.” Anyone?

I honestly don't understand this cynicism. Facebook does not want your deleted video, and they certainly don't want to keep it given the current media frenzy, with the CEO under fire.

Every application of any complexity has features which inactivate, but don't delete data. At Facebook scale, deleting data is non-trivial, and it would be impossible to immediately delete something.

We all have bugs, including extremely critical security bugs, availability-threatening performance bugs, or many other types of bugs. It's strange that we accept those bugs as merely bugs, without assuming a backdoor, or intentional sabotage, but when it comes to personal data, suddenly it's a nefarious plot.
It's an odd position to take that Facebook is not only saving these deleted videos intentionally (for what, exactly?) but that they'll now lie to us and pretend to delete them, only removing them from the Download Your Information tool.

The GDPR prompted them to make the data resurface, so it's not impossible to track this data given a few months of warning. It's just that Facebook as a company does not have an interest in deleting data they collected.

I can't understand why you're downvoted. If you can't handle the data you collect, maybe you shouldn't collect that data in the first place? Or invest in technology and hire more engineers to handle the data you collected?

You're right that it's not due to the GDPR, but existing EU law already required them to provide all user info on request. See Max Schrems' work, specifically his 2011 complaints to the Irish Data Protection Commissioner, and the subsequent Europe v. Facebook case.

One fascinating outcome of all this fallout is that there's now a readymade excuse to stop using Facebook.

My personal observations are that a good number of people have felt 'fatigued' by Facebook for a very long time, but were also unsure of how best to extricate themselves without incurring a social penalty.

But now there's an impetus that most people can understand. I'm not sure how many people will move away or how quickly it'll happen, but the network effects Facebook capitalized on can also work in reverse: it takes just one or two very vocal privacy proponents in a friend circle pushing to get off the platform. One group I'm in recently migrated to Telegram for this very reason.

If you truly want privacy and security I would recommend Signal over Telegram -- Telegram has had controversy over its unaudited home-grown encryption protocol, some weird stuff with a very large recent ICO that seems entirely unnecessary except as a money grab, and Russian subpoenas for their master private keys.

Signal and Telegram are very different. Signal has always been an open source project that allows you to audit the source and run your own server if you so desire [1].

It’s a project that has always put security first but made some compromises for usability — very different from Telegram which has put expansion and monetization first — and it was started by Moxie Marlinspike whose views and contributions are well-known.

With Signal, the server is not a single point of failure. The Android, iOS, and desktop apps all do end-to-end encryption, so a compromised server wouldn't mean your messages are compromised.

The client would need to be compromised, and if the client is compromised, tox.chat is toast as well.

Just an anecdote. A year or two back I had a day set aside for purging my Facebook entries. I manually deleted comments and posts.

Of course there were too many to do and it was very boring, so I only spent a couple of hours at it. But that's not what's interesting. What happened was that I got a huge uptick in people commenting on some old post I made, like a profile picture change. I think Facebook saw I was slowly purging my data and reached out to my FB contacts, encouraging them to interact more with me. It was very odd.

You made changes to old posts (deleted comments), so Facebook decided that, because there were updates to those old posts, it made sense to treat them as new, and started showing them to your friends in their news feeds.

I don't believe so, at least not in the recent past. I purged my FB of all content about 18 months ago, and had to do it manually. Took several hours spread across a few weeks, whenever I could force myself to spend the time on it. For whatever reason I kept finding posts/comments for a few weeks after that; I'd go back to make sure I got everything, scouring the timelines, and there'd be something I missed somehow, quite bizarre.

I've never had a Facebook account, but friends have told me that no, there is not. That would go against their user and content retention models, I'm sure. It makes sense that Facebook would make it as difficult, tedious, and painful as possible to delete content from their platform.

The funniest part of it: all the media hype around the topic is generated... BY DATA collected by NYT/Bloomberg/TechCrunch/you name it. Those articles generate additional views and they just continue to ride this wave. And all those publications share this data with third parties (ad networks, analytics providers, CPA networks).

On top of that, you know what else do they measure? SENTIMENT. So until kicking Facebook generates more revenue - the articles will paint Facebook as a world's main evil. But the day sentiment changes you will see all the articles about Facebook following best practices.

And in the end? Some EU commission will be created and pass a law obliging sites to "show a cookie usage disclaimer", because of which 90% of sites will welcome you with an ugly popup that ruins the experience while providing 0% advantage in managing your privacy...

> On top of that, you know what else do they measure? SENTIMENT. So until kicking Facebook generates more revenue - the articles will paint Facebook as a world's main evil. But the day sentiment changes you will see all the articles about Facebook following best practices.

So what you're saying is that sites like nymag will only run stories that are profitable?

Kind of. When everyone is writing articles AGAINST Facebook, it will be very hard to 'sell' an article which SUPPORTS Facebook to the editor (because of the potential PR nightmare when the likes of 4chan start attacking you).

There is nothing I've come across, ever, that has led me to believe that Facebook, Google, Amazon, etc., ever delete anything, ever. Not even to clean up space, as some people on this thread are suggesting. Hard drive space is cheap and data is valuable. This isn't a secret, this is a fairly obvious business practice that all the big players, and most competent small players, are engaging in.

FWIW, Facebook does say that they will delete all of your data within 90 days of account deletion. I believe that indicates that they've put the engineering effort to do a full audit of data to be deleted, handle missing references across the product, and to fully delete user data from logs and backups.

> When you delete your account, people won't be able to see it on Facebook. It may take up to 90 days from the beginning of the deletion process to delete all of the things you've posted, like your photos, status updates or other data stored in backup systems.

The case from the article is trickier. My impression is the feature was just implemented with an append-only data model, which is often (maybe usually) a good engineering decision. "Secretly" from the article title feels disingenuous because Facebook never said it was deleted. As an engineer, it's frustrating that I might have to write my software to be more fragile to match the implicit expectations of how a non-technical user thinks software should work. But the frustration on the user's end is also plenty understandable here. Hopefully the gap can be closed a little on both sides by a combination of educating users and being more privacy-conscious in engineering and business decisions.

Do you think it's as easy as "rm"-ing a file away? Your data is kept internally in a multitude of different databases. Parts of it sitting in cold storage. Log files, caches. That data is split across thousands of different nodes. Each system has different data retention policies. Some databases don't permit removal of a specific record - the records must "expire" first. It really does take time to delete data.
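The gap between "deleted for the user" and "the bytes are gone" can be sketched with a toy tombstone model (purely illustrative, not anyone's actual design — many large systems write a deletion marker immediately and purge the bytes asynchronously later):

```python
import time

class Store:
    """Toy soft-deletion model: delete() writes a tombstone immediately;
    a background purge removes the bytes only after a retention window."""

    def __init__(self, purge_after=90 * 86400):   # e.g. 90 days, in seconds
        self.data = {}            # key -> (value, deleted_at or None)
        self.purge_after = purge_after

    def put(self, key, value):
        self.data[key] = (value, None)

    def delete(self, key):
        value, _ = self.data[key]
        self.data[key] = (value, time.time())     # tombstone; bytes remain

    def get(self, key):
        value, deleted_at = self.data.get(key, (None, None))
        return None if deleted_at is not None else value

    def purge(self, now=None):
        now = now if now is not None else time.time()
        for key in list(self.data):
            _, deleted_at = self.data[key]
            if deleted_at is not None and now - deleted_at >= self.purge_after:
                del self.data[key]                # only now are the bytes gone
```

Between `delete()` and `purge()`, the user sees nothing, but the content still exists — which is exactly the window where a "Download Your Information" tool could accidentally resurface it.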

True, but this isn't an excuse. It's slow to delete data because Facebook designed it that way. They could have designed for privacy and real-time deletion of data, but they didn't, because they didn't care.

> "They could have designed for privacy and real-time deletion of data"

Actually, they could not. If data is geo-replicated across multiple clusters, spread all over the place, divided into hot and cold storage layers - it's crystal clear you can't perform "real time deletion of data". Instantaneous deletion of all data, leaving no trace behind, can not happen under such complex constraints.

>> > "They could have designed for privacy and real-time deletion of data"

> Actually, they could not. If data is geo-replicated across multiple clusters, spread all over the place, divided into hot and cold storage layers - it's crystal clear you can't perform "real time deletion of data". Instantaneous deletion of all data, leaving no trace behind, can not happen under such complex constraints.

Yes, they could have. Your post is just a description of a design that can't delete data quickly. That doesn't prove that no design exists which can delete data quickly.

If Facebook had been designed with "we need to allow users to delete their data quickly and permanently" as a constraint from the beginning, it wouldn't look like the system you've described.

All you've done is pick all the things that Facebook did and say that if you do those things you can't delete data quickly. Yes, that's true--which is why Facebook would not have done those things if they cared about allowing users to delete their data.

It's interesting that you think they need those 90 days to off load your data. As if they hadn't done so before your deletion.

By the way, rm does work like that. The file will just be marked as deleted (by removing its entry from the filesystem index), but will remain on your disk for some time afterwards, from some minutes to months. If you want to ensure deletion, you should be using shred.
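For example (GNU coreutils; note that shred's guarantees are weaker on journaling filesystems and SSDs, where the drive may remap blocks behind your back):

```shell
# rm only removes the directory entry; the data blocks survive on disk
# until the filesystem happens to reuse them.
# shred overwrites the file's contents first, then unlinks it:
shred -u -n 3 secrets.txt
```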

> By the way, rm does work like that. The file will just be marked as deleted (by removing its entry from the filesystem index), but will remain on your disk for some time afterwards, from some minutes to months. If you want to ensure deletion, you should be using shred.

This is true, but it's worth noting that not overwriting your own data on a machine you physically own as an optimization is very different behavior from not overwriting someone else's data on your server when they request that you delete it.

There's nothing there which clarifies that "delete" isn't a euphemism for "flag it to no longer be displayed to users" like it is everywhere else where companies collect data on users, so you'll excuse my skepticism.

"Well, when talking about deleting we mean we do the exact same thing the file system does to a file: it flags it, but doesn't actually erase its content." Acting like a filesystem delete operation is not what people expect when they use that word.

That would be somewhat reasonable if it were just an implementation detail. But unfortunately, it's not just an implementation detail. When a filesystem has data to write and runs out of hard drive space, it overwrites the data which was flagged for deletion. But when a web 2.0 company has data to write and runs out of hard drive space, they buy more hard drive space, usually automatically.

Clearly there is a big disconnect here. It seems somebody is suggesting there should be a correlation between a user removing content from their account and Facebook destroying some of Facebook's property.

Anything submitted to Facebook is the property of Facebook. Users have no business telling Facebook to destroy Facebook property.

In short, Facebook can do anything they want with the IP content you provide to them. The terms also identify, by way of example, IP content as media you provide to them. For that material the policy is pretty clear, but what about other material? What about textual content typed into Facebook, and identified relationships? It seems this information is covered by the same policies and is IP subject to Facebook's use.

My understanding of Facebook policy is also likely dated as their terms change periodically. The current policy is dated at 30 January 2015.

These rules come from Facebook's terms of service agreement, which you must consent to in order to open or maintain your account. They are the rules not because Facebook says so, but because you say so when you agree to those terms.

> Users didn't sign or agree to anything just because they checked a checkbox next to a link to an ever-changing jumble of legalese to get past a screen. This isn't agreement, it's manufactured consent.

What is the difference? I am thinking if a person really actually cared they would have read the legal agreement before checking the checkbox in question and possibly consulted an attorney of their own. I am thinking most users absolutely do not care and agree out-right and immediately to all claims presented by Facebook. How is that not still agreement?

You can't claim users don't care about their videos not being deleted--the fact that they do care is exactly why this is in the news. They may click past a screen because they think that they don't care, but that's only because they don't understand the implications of doing so. Part of the reason is that a lot of people naively believe that a respectable company like Facebook wouldn't try to screw them over, and would behave with their best interests in mind.

It's unrealistic to expect that users will read AND understand the TOS of every website AND all of the changes to the TOS that occur over time.