According to a Washington Post story by Travis M. Andrews dated Nov. 8, Facebook wants you to upload your own explicit photos – for your own good.

This is how it works. You have explicit pictures that you worry your ex-lover might post to Facebook to embarrass you. This is your typical revenge porn scenario. To proactively prevent this, you upload your own explicit photos – those that you suspect your ex has and might post – to some type of secure Facebook portal. Facebook won't store the photos but will use some AI-driven algorithm to create a digital fingerprint of each photo, so that it can recognize the photo and block it if someone tries to post it to the platform at some later date.
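Facebook has described storing only a "hash" – a fingerprint, not the image. As a rough illustration of the concept (not Facebook's actual system, which reportedly resembles far more robust technology such as Microsoft's PhotoDNA), here is a minimal "average hash" sketch: shrink an image to a small grayscale grid, record which cells are brighter than average, and compare fingerprints by counting differing bits.

```python
def average_hash(pixels):
    """pixels: a flat list of 64 grayscale values (0-255),
    i.e. an image already shrunk to an 8x8 grid.
    Returns a 64-bit fingerprint as an integer."""
    mean = sum(pixels) / len(pixels)
    bits = 0
    for p in pixels:
        # One bit per cell: 1 if brighter than the image's mean.
        bits = (bits << 1) | (1 if p > mean else 0)
    return bits

def hamming_distance(h1, h2):
    """Number of differing bits; a small distance means the two
    images are likely the same picture, even after recompression."""
    return bin(h1 ^ h2).count("1")

# A synthetic 8x8 gradient "image" and a slightly distorted copy
# (as if re-saved at different quality) yield matching fingerprints,
# while the platform never needs to retain the pixels themselves.
img = [i * 4 for i in range(64)]
img_distorted = [min(255, p + 3) for p in img]
h1, h2 = average_hash(img), average_hash(img_distorted)
print(hamming_distance(h1, h2))  # 0 – identical fingerprint
```

The names and the hashing scheme here are illustrative assumptions; the point is only that a platform can match a re-uploaded photo from a stored fingerprint without keeping the photo itself.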

In a way, this is a very forward-leaning, prevention-focused way to address the growing revenge porn problem, rather than engaging in incident management after the fact. At the same time, it requires you to submit your naked, explicit photos (and videos, I would assume, in the next iteration of this capability) to Facebook. That is a whole lot of trust that Facebook is asking of you.

Being the old geezer that I am, the question that immediately pops up is, "How about not taking naked, explicit photos of yourself and sharing them with your significant others?" Frankly, while I intellectually understand that sexting and associated behavior is a huge problem among high school students and young people in general, I haven't quite come to terms with the idea that such behavior is common enough to warrant a specific technical solution like the one Facebook offers above. But I guess I am wrong.

This brings me to a bigger question. How far does Facebook have to go to provide solutions that check the negative effects of the (mal)intentional behaviors of its users? And not just Facebook, but any other widely used social media platform on which users generate content to share with one another.

This question goes to the heart of the disinformation campaign that Russia engaged in during the last U.S. presidential campaign. Is it Facebook's fault if a Russian operative posts a fake news story about Hillary Clinton and her alleged connections to all sorts of conspiracies? Or what if the same operative announces a Black Lives Matter protest event using incendiary language against the police? Should Facebook be responsible for ferreting these out?

Twitter is struggling with a similar challenge. Originally, Twitter prided itself on being the free speech champion on which anybody could say anything to anyone else – even anonymously – without being censored by the platform. Twitter's primary brand was that of an unfettered, neutral platform for free speech. The preamble to Twitter's rules (2009-2015) read in part, "Each user is responsible for the content he or she provides. We do not actively monitor and will not censor user content, except in limited circumstances."

However, free speech is not necessarily civil speech. Trolls of all colors have had a field day harassing Twitter users whose posts they don't agree with, launching ad hominem attacks in truly vile, hateful language. And it's not just the words used in the attacks but the frequency and number of attacks, all designed to shut the person up, not to engage in spirited debate over the merits of an idea or position. In other words, unfettered and unmoderated free speech has led to the lessening of the "freedom" of speech by forcing users to take themselves off the platform.

In short, the Twitter experience shows us that free speech without civility does not lead to neutrality. Let's turn this around: without enforcing civility, you won't have neutrality. For Facebook, the equation is similar with slightly different variables. Without enforcing transparency and accountability, it will lose trust.

But how do you enforce civility? And where do you draw the line between civility and censorship? That's the key question. One person's civil discourse could be another person's trigger. Also, does civility cover only abusive words, or does it extend to the tone and actual substance of the discussion? Does an overly cynical or mocking tone – without any apparent hateful language – constitute harassment? How about a discussion of the scientific legitimacy of eugenics, the positive aspects of the Holocaust, or justifications for assassinating the police? Are these allowed as long as they stay on the good side of George Carlin?

And the answer is necessarily, "It depends." Ugh. How unsatisfactory. How true, though.

What's acceptable and unacceptable is a function of the social and cultural context that we live in. Try insulting the king in Thailand and see what happens. Or try denying the Holocaust in Germany. Or deny that the Rape of Nanking ever happened in China. Or claim that the Comfort Women were willing prostitutes in Korea.

On a more subtle and difficult note, the whole debate about "safe spaces" is a case in point, in which different demographic groups with differing social and cultural sensitivities want a space where they won't be exposed to speech and ideas that they find offensive. Too much moderation runs the risk of turning Twitter, Facebook, and the like into virtual safe spaces catering to the sensitivities of self-appointed "Safety Officials." Sounds deservedly ominous.

Jason Lim (jasonlim@msn.com) is a Washington, D.C.-based expert on innovation, leadership and organizational culture. He has been writing for The Korea Times since 2006.