Category: case study

I am moderating what will almost certainly be a fascinating discussion about BYOD, enterprise data security and device management at the BlackBerry Experience event at Montecasino in Fourways, Johannesburg today. The event is hosted by BlackBerry in partnership with ITWeb Events and will also take place in Durban and Cape Town later this month.

The speakers at the event will include Nader Henein, BlackBerry’s Regional Director for Product Security; Izak Meyer, Director for Enterprise at BlackBerry South Africa; Wikus Viljoen, a systems analyst at Nedbank; and Andy Swanepoel, a technical specialist and manager at Nedbank.

We are going to talk a bit about BYOD trends (and the growing tendency towards more managed deployments), a better strategy for securing data on mobile devices and more. I don’t want to say too much about the content attendees can expect but I find much of it fascinating. You can follow the discussion on Twitter using the hashtag #BBExperience if you aren’t attending the event.

That the respondent in the latest High Court Facebook defamation case, M v B, was ordered to remove defamatory posts on Facebook isn’t remarkable. What is more interesting is that the case reiterates the principle that a court will not step in and proactively block future defamatory posts.

The applicant in this case, M (SAFLII redacts personal information about parties in cases it publishes in certain circumstances), brought an urgent application to the KwaZulu-Natal High Court on 9 September 2013 to order his ex-wife, B, to –

“remove all messages as contained in annexure ‘D’ to the applicant’s founding affidavit, from her Facebook page;”

“refrain from posting any defamatory statements about the applicant on her Facebook page;” and

“refrain from in any way making, publishing and/or distributing defamatory statements about the applicant.”

The urgent application was successful and M was granted an interim order which M subsequently sought to have made final. Judge Chetty’s judgment on this was delivered just over a year after the initial application was launched, on 19 September 2014.

Background

Judge Chetty gave the following background to the applications:

[3] It is necessary to sketch the brief history of the matter, and particularly the facts giving rise to the launching of the application. The applicant and the respondent are the biological parents of a minor child, a daughter P born in July 2008. At the time of the launching of the application, the child was five years old. The respondent and the applicant were never married, and at the time of the institution of these proceedings, were no longer in a relationship. P lives with the respondent. In terms of an arrangement between the parties, the applicant has contact with his child every alternate weekend from Friday afternoon until Sunday afternoon. It is not disputed that in accordance with this agreement, the applicant picked up his daughter on the weekend commencing 30 August 2013 and returned her to the respondent on Sunday 1 September 2013.

[4] During the course of this particular weekend the applicant and his daughter visited the house of a friend, and ended up staying over. During the course of the evening, other friends gathered at the house eventually resulting in P sharing a bed with an adult female, who is a pre-primary school teacher, and someone known to her as she had babysat P on previous occasions. The applicant has categorically stated that he has never had a romantic relationship with the teacher concerned. P was safely returned to her mother on the Sunday.

[5] In the week that followed, the applicant received calls from several friends drawing his attention to a posting by the respondent on Facebook, under the heading “DEBATE”. The posting reads as follows:

‘DEBATE: your ex has your daughter (5) for the weekend and is sleeping at a mates house. They all (about six adults) go jolling and your ex’s drunk, 50 yr old girl “friend” ends up sleeping with your daughter cause he doesn’t want his girl “friend” sleeping in a single bed she can share the double bed with his/your daughter! How would you feel?’

[6] It is not in dispute that at the time of this posting the respondent had 592 “Facebook friends”. A number of the respondent’s ‘friends’ responded to her posting and were critical of the behaviour of the applicant. The respondent further contributed towards the debate by making subsequent postings to that set out above. These postings or messages appear as annexure ‘A’ to the applicant’s founding papers. The initial postings resulted in a further debate with the respondent’s brother S[…] B[…], who questioned the aspersions cast by the respondent on the applicant and the teacher with whom P shared a bed. These postings appear as annexure ‘B’ to the applicant’s founding papers.

[7] In light of the postings, which the applicant regarded as defamatory and detrimental to his business reputation, he engaged his attorneys who wrote to the respondent on 4 September 2013 clarifying that during the weekend in which the applicant had access to P, at no time therein was she placed in any danger, nor was her safety compromised in any way. His attorneys then called upon the respondent to remove the offending postings (annexures ‘A’ and ‘B ‘to the founding papers) from her Facebook page by the close of business on 4 September 2013, failing which they threatened litigation.

[8] According to the respondent, she removed the offending postings by 5 September 2013. Accordingly, at the time when the application came before my colleague Nkosi J, the respondent contended in her opposing affidavit that there was no need for the application as she had long since complied with the demand and removed the postings. In support of the submission, the respondent attached an SMS received from the applicant on 5 September 2013 stating:

‘And well done on removing your false Facebook posting – you’ve saved yourself from a lawsuit. Ensure no further defamatory posts are put up or you’ll find yourself in Court!!’

[9] As is evident from the prayers sought in the Notice of Motion, notwithstanding the removal of postings in the form of annexures A and B, the applicant persisted in his application for urgent relief on the basis that the respondent had failed to take down the postings on what is referred to as her Facebook Wall, which the applicant contends “retained a partisan version of the debate”. The postings on the respondent’s Facebook Wall appeared as annexure D to the applicant’s founding affidavit. The applicant contended that the contents of annexure ‘D’ defamed him, even though the respondent has deleted the earlier postings on her Facebook page. In order to understand the applicant’s complaint, a perusal of the respondent’s Facebook Wall reflects the contents of active debate taking place between the respondent and her friends. The subject of the debate continues to be the incident relating to the applicant’s care (or neglect) of his daughter over the weekend at the end of August 2013. In particular, the opening message on the applicant’s Facebook Wall is the following:

‘This is my FB page which I can get opinions on matters close to my heart, if you don’t like it then go read someone else’s and defriend me!’

[10] This message was posted in response to earlier messages from the respondent’s brother, S[…] B[…], who it would appear, did not take kindly to the insinuations of neglect aimed at the applicant.

The Court’s decision

These facts are pretty similar to two 2013 Facebook defamation cases which I wrote about, H v W and Isparta v Richter and Another. The order directing B to remove defamatory posts from her Facebook Wall was not particularly controversial. There was some discussion about the timing of the application and B’s efforts to remove some defamatory posts, but this order was in line with Judge Willis’ judgment in H v W and Acting Judge Hiemstra’s judgment in Isparta v Richter and Another. After considering arguments from both sides, Judge Chetty found against B:

[20] Other than a denial that the postings were defamatory, the respondent does not make out any argument of the public interest in respect of the statements attributed to the applicant. I am satisfied that the applicant was entitled to approach the Court on an urgent basis at the time that he did. I am accordingly satisfied that the applicant has made out a case for first part of the rule nisi, in terms of the relief sought in prayer 2.1 of the Notice of Motion, to be confirmed.

The Court then moved on to the second part of the matter, namely whether M should be entitled to a final order, essentially, prohibiting B from publishing defamatory comments about M in the future. This may seem like a perfectly reasonable order but it is important to bear in mind that just because a comment is defamatory doesn’t mean that it is wrongful. As Judge Chetty pointed out –

[24] On the other hand, the respondent submitted that there is no basis at common law for a Court to curtail the respondent in respect of material which is not as yet known to the Court, nor has it been presented or published. As such the Court is asked to speculate on what could constitute a defamatory statement, uttered or published by the respondent against the applicant. It was correctly submitted in my view that even if the statement in the future by the respondent is defamatory of the applicant, it is equally so that not every defamatory statement is per se actionable in that the respondent may have a good defence to its publication. For example, the respondent might be under a legal duty to furnish information about the applicant in connection with an investigation of a crime, or she could be a member of a public body which places on her a social duty to make defamatory statements about the applicant. To this extent, the respondent may make defamatory statements about the applicant in circumstances where they may be a qualified privilege. Obviously it would be necessary to ascertain the nature of the occasion in order to determine whether any privilege attaches to it. The difficulty in granting such an order is evident, albeit in the context of the publication of an article, from the judgement in Roberts v The Critic Ltd & others 1919 WLD 26 at 30–31 where the Court held:

‘I think I have jurisdiction to make an order restraining the publication of a specific statement that is defamatory, but in the present case I am asked to restrain the publication of an article in so far as it is defamatory; if the applicant’s contention is correct this will come to the same thing as restraining any continuation of the article at all, because that contention is that no continuation of the article can be written that is not defamatory… . There is the grave difficulty in the way of granting an interdict restraining the publication of an article which purports to deal with a matter of great public interest, and which I have not before me. It is impossible to say what it will contain, however grave one’s suspicions may be. The respondents specifically state that the continuation will not be libellous, nor will it slander the petitioner; nor will it affect her good name and fair fame. It can only be determined upon the publication of the article if this statement be true. I think it is impossible for me to deal with it now. In the cases I have referred to the defendants insisted on the right to publish the statements complained of. The interdict must therefore be discharged.’

[25] At the same time it has also been held that it is lawful to publish a defamatory statement which is fair comment on facts that are true and in matters of public interest, as well as in circumstances where it is reasonably necessary for and relevant to the defence of one’s character or reputation. Counsel relied on the judgement of Willis J in H v W (supra) para 40 in support of his submission that Courts should not be eager to prohibit or restrict parties in respect of future conduct, of which one can only speculate in the present. The Court held that:

‘Although judges learn to be adept at reading tealeaves, they are seldom good at gazing meaningfully into crystal balls. For this reason, I shall not go so far as “interdicting and restraining the respondent from posting any information pertaining to the applicant on Facebook or any other social media”. I have no way of knowing for certain that there will be no circumstances in the future that may justify publication about the applicant.’

Although judges probably wouldn’t have a difficulty ordering a person not to do something that is clearly and unjustifiably wrongful in the future (that is largely what an interdict is for), the challenge M faced with this part of his application is that a future defamatory statement could well be justifiable and not wrongful. As I pointed out in my post, Judge Willis considered a couple of justifications in H v W –

After exploring Twitter briefly, Judge Willis turned to established South African case law, including authority for the proposition expressed by Roos that a privacy infringement can be justified in much the same way as defamation, as well as the Supreme Court of Appeal’s more recent judgment in the 2004 Mthembi-Mahanyele v Mail & Guardian case which, according to Judge Willis –

affirmed the principle that the test for determining whether the words in respect of which there is a complaint have a defamatory meaning is whether a reasonable person of ordinary intelligence might reasonably understand the words concerned to convey a meaning defamatory of the litigant concerned

The test for determining whether words published are defamatory is to ask whether a ‘reasonable person of ordinary intelligence might reasonably understand the words … to convey a meaning defamatory of the plaintiff… . The test is an objective one. In the absence of an innuendo, the reasonable person of ordinary intelligence is taken to understand the words alleged to be defamatory in their natural and ordinary meaning. In determining this natural and ordinary meaning the Court must take account not only of what the words expressly say, but also of what they imply’

Referencing one of the justifications for (or defences to) defamation, namely that the defamatory material be true and to the public benefit or in the public interest, Judge Willis drew an important distinction that is worth bearing in mind –

A distinction must always be kept between what ‘is interesting to the public’ as opposed to ‘what it is in the public interest to make known’. The courts do not pander to prurience.

The Court moved on to explore another justification, fair comment. In order to qualify as “fair comment” –

the comment “must be based on facts expressly stated or clearly indicated and admitted or proved to be true”

The person relying on this justification must prove that the comment is, indeed, fair comment, and “malice or improper motive” will defeat the defence even where the comment is demonstrably based on fact. In this particular case, the Court found that W acted maliciously and she was unable to rely on this defence.

Because defamation can be justified in appropriate circumstances and because judges can’t predict when defamatory statements will be justifiable in a particular context, proactively blocking defamatory Facebook posts is inherently problematic. Judge Chetty summarised the point:

As set out earlier this argument must fail because it is clear that not every defamatory statement made by the respondent about the applicant would be actionable.

Recent reports about hacked celebrity iCloud accounts seem to be attributable to a vulnerability in iOS’ Find My iPhone service which enabled someone trying to gain access to an iCloud account to use a brute force attack to guess the account password. A brute force attack involves guessing a large number of possible passwords until the correct one pops up and grants access. Apple usually rate-limits password attempts (in other words, Apple’s software imposes a limit on the number of password attempts before locking the account or device – something an iPhone or iPad user with small children will be familiar with). That security feature doesn’t seem to have been implemented properly, but Apple has reportedly since patched the vulnerability.

As The Next Web reported earlier today the attack may be linked to software on GitHub called iBrute that is capable of carrying out automated brute-force attacks against iCloud accounts. In this scenario, an attacker simply guesses a password again and again until they succeed. While tedious and time-consuming for a person, it’s a simple and infinitely faster process for a computer.

The as-yet unknown attacker had one other thing going for him: Apple allows an unlimited number of password guesses. Normally, systems limit the number of times someone can try to log in to a system with an incorrect password before the account is locked down entirely. Apple has since fixed that aspect of the vulnerability.
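The rate limiting described above is straightforward to sketch. This is a minimal, hypothetical illustration of the idea (all names here are illustrative, not Apple’s actual code): after a fixed number of failed attempts, an account is locked for a cooling-off period, which makes automated guessing impractical.

```python
import time

# Hypothetical server-side rate limiting sketch: after MAX_ATTEMPTS failed
# logins, an account is locked for LOCKOUT_SECONDS, defeating rapid
# brute-force guessing. Illustrative only.

MAX_ATTEMPTS = 5
LOCKOUT_SECONDS = 300

failed_attempts = {}  # account -> (failure_count, time_of_first_failure)

def check_password(account, supplied, actual):
    count, first = failed_attempts.get(account, (0, 0.0))
    if count >= MAX_ATTEMPTS:
        if time.time() - first < LOCKOUT_SECONDS:
            return "locked"    # still inside the lockout window
        count, first = 0, 0.0  # lockout expired; reset the counter
    if supplied == actual:
        failed_attempts.pop(account, None)  # success clears the record
        return "ok"
    if count == 0:
        first = time.time()  # start a new failure window
    failed_attempts[account] = (count + 1, first)
    return "wrong"
```

With a limit like this in place, an attacker gets a handful of guesses every few minutes instead of thousands per second – which is why its absence made the brute force attack feasible.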

Assuming this was the nature of the hack which exposed the celebrities’ account data, iCloud users can probably protect their accounts from similar attacks by enabling what Apple calls “two-step verification” (also known as “two-factor authentication”), and there are a number of good tutorials available for doing so.

Two-step verification protects your accounts by requiring you to supply a unique code, usually delivered to a device you own by SMS or produced by a code generator of some kind. It is a good idea to enable two-step verification wherever a service or app supports it, because it prevents a brute force attack on your password alone from succeeding.
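Those code generators typically implement time-based one-time passwords (TOTP, standardised in RFC 6238). The sketch below is a minimal illustration of the scheme rather than any particular service’s implementation: the code changes every 30 seconds and can only be produced by someone holding the shared secret.

```python
import base64
import hmac
import struct
import time

# Minimal sketch of a time-based one-time password (TOTP, RFC 6238), the
# scheme behind most code-generator apps. Illustrative only; real services
# wrap this in their own provisioning and verification logic.

def totp(secret_b32, at_time=None, step=30, digits=6):
    key = base64.b32decode(secret_b32)
    # The moving factor is the number of `step`-second intervals since epoch.
    counter = int((time.time() if at_time is None else at_time) // step)
    digest = hmac.new(key, struct.pack(">Q", counter), "sha1").digest()
    # Dynamic truncation (RFC 4226): pick 4 bytes at an offset taken from
    # the last nibble of the digest, then keep the low-order decimal digits.
    offset = digest[-1] & 0x0F
    code = int.from_bytes(digest[offset:offset + 4], "big") & 0x7FFFFFFF
    return str(code % (10 ** digits)).zfill(digits)
```

Using the RFC 6238 test key (the ASCII string `12345678901234567890` encoded in Base32), `totp("GEZDGNBVGY3TQOJQGEZDGNBVGY3TQOJQ", at_time=59)` produces the published test value `287082`. Because an attacker would need both your password and a freshly generated code, guessing passwords alone no longer grants access.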

On the first anniversary of Occupy Wall Street, protesters gathered at Washington Square Park for Occupy Town Square, and a march down Broadway to Zuccotti Park started at 6pm on September 15th.

A recent New York Police Department attempt to engage with New Yorkers serves as a reminder that crowdsourcing positive feedback doesn’t always work quite as well as you may hope, if it works at all. As Ars Technica reported:

The Twitterverse was abuzz Tuesday evening after the New York City Police Department made what it thought was a harmless request to its followers: post pictures that include NYPD officers and use the #MyNYPD hashtag.

Much to the NYPD’s surprise and chagrin, the simple tweet brought on a torrent of criticism from the Internet. The result was national coverage of hundreds of photos depicting apparent police brutality by NYPD officers, which individuals diligently tweeted with the hashtag #myNYPD.

The Ars article touches on a number of other, similar attempts to elicit positive feedback from communities and the clear trend is that the community will give you its honest assessment of what you are doing and what you represent; it won’t necessarily give you the feedback you probably want.

This isn’t necessarily a reason not to engage with your community but it does require courage. If you want honest feedback, community feedback is a terrific opportunity to get it. If, on the other hand, you don’t want to venture outside a positive reinforcement bubble, perhaps start with a different sort of campaign.

SnapChat’s privacy controls are what made it both enormously popular and troubling to its young users’ parents. When SnapChat launched, it gave users the ability to share photos and videos which promptly vanished into the ether. This appealed to its typically young and privacy conscious users because they finally had a way to share stuff with each other with impunity. This obviously bothered parents and teachers as it potentially gave their children a way to share content they shouldn’t share.

A Federal Trade Commission investigation has led to acknowledgements that content posted on SnapChat isn’t nearly as temporary as everyone may have thought. The New York Times published an article titled “Off the Record in a Chat App? Don’t Be Sure” which began with the following:

What happens on the Internet stays on the Internet.

That truth was laid bare on Thursday, when Snapchat, the popular mobile messaging service, agreed to settle charges by the Federal Trade Commission that messages sent through the company’s app did not disappear as easily as promised.

Snapchat has built its service on a pitch that has always seemed almost too good to be true: that people can send any photo or video to friends and have it vanish without a trace. That promise has appealed to millions of people, particularly younger Internet users seeking refuge from nosy parents, school administrators and potential employers.

Oversight or lie?

The FTC’s release includes the following background to its investigation and its stance:

Snapchat, the developer of a popular mobile messaging app, has agreed to settle Federal Trade Commission charges that it deceived consumers with promises about the disappearing nature of messages sent through the service. The FTC case also alleged that the company deceived consumers over the amount of personal data it collected and the security measures taken to protect that data from misuse and unauthorized disclosure. In fact, the case alleges, Snapchat’s failure to secure its Find Friends feature resulted in a security breach that enabled attackers to compile a database of 4.6 million Snapchat usernames and phone numbers.

According to the FTC’s complaint, Snapchat made multiple misrepresentations to consumers about its product that stood in stark contrast to how the app actually worked.

“If a company markets privacy and security as key selling points in pitching its service to consumers, it is critical that it keep those promises,” said FTC Chairwoman Edith Ramirez. “Any company that makes misrepresentations to consumers about its privacy and security practices risks FTC action.”

Touting the “ephemeral” nature of “snaps,” the term used to describe photo and video messages sent via the app, Snapchat marketed the app’s central feature as the user’s ability to send snaps that would “disappear forever” after the sender-designated time period expired. Despite Snapchat’s claims, the complaint describes several simple ways that recipients could save snaps indefinitely.

Consumers can, for example, use third-party apps to log into the Snapchat service, according to the complaint. Because the service’s deletion feature only functions in the official Snapchat app, recipients can use these widely available third-party apps to view and save snaps indefinitely. Indeed, such third-party apps have been downloaded millions of times. Despite a security researcher warning the company about this possibility, the complaint alleges, Snapchat continued to misrepresent that the sender controls how long a recipient can view a snap.

SnapChat, for its part, responded on its blog:

While we were focused on building, some things didn’t get the attention they could have. One of those was being more precise with how we communicated with the Snapchat community. This morning we entered into a consent decree with the FTC that addresses concerns raised by the commission. Even before today’s consent decree was announced, we had resolved most of those concerns over the past year by improving the wording of our privacy policy, app description, and in-app just-in-time notifications.

On the one hand, the FTC essentially found that SnapChat has been misleading its users about its service’s privacy practices and, on the other hand, SnapChat pointed to a communications lapse, almost as an oversight. Considering that SnapChat has always been focused on the fleeting nature of content posted on the service and the privacy benefits for its users, this doesn’t seem very plausible.

“Improved” privacy policy wording

SnapChat updated its privacy policy on 1 May. The section “Information You Provide To Us” is revealing because it qualifies Snaps’ transient nature so heavily that transience seems to be the exception, rather than the default behaviour:

We collect information you provide directly to us. For example, we collect information when you create an account, use the Services to send or receive messages, including photos or videos taken via our Services (“Snaps”) and content sent via the chat screen (“Chats”), request customer support or otherwise communicate with us. The types of information we may collect include your username, password, email address, phone number, age and any other information you choose to provide.

When you send or receive messages, we also temporarily collect, process and store the contents of those messages (such as photos, videos, captions and/or Chats) on our servers. The contents of those messages are also temporarily stored on the devices of recipients. Once all recipients have viewed a Snap, we automatically delete the Snap from our servers and our Services are programmed to delete the Snap from the Snapchat app on the recipients’ devices. Similarly, our Services are programmed to automatically delete a Chat after you and the recipient have seen it and swiped out of the chat screen, unless either one of you taps to save it. Please note that users with access to the Replay feature are able to view a Snap additional times before it is deleted from their device and if you add a Snap to your Story it will be viewable for 24 hours. Additionally, we cannot guarantee that deletion of any message always occurs within a particular timeframe. We also cannot prevent others from making copies of your messages (e.g., by taking a screenshot). If we are able to detect that the recipient has captured a screenshot of a Snap that you send, we will attempt to notify you. In addition, as for any other digital information, there may be ways to access messages while still in temporary storage on recipients’ devices or, forensically, even after they are deleted. You should not use Snapchat to send messages if you want to be certain that the recipient cannot keep a copy.

If you read the second paragraph carefully, you’ll notice the following exceptions to what most users assumed was the service’s default behaviour of permanently deleting Snaps after specified time intervals. The exceptions are quoted below.

“Similarly, our Services are programmed to automatically delete a Chat after you and the recipient have seen it and swiped out of the chat screen, unless either one of you taps to save it”

“… users with access to the Replay feature are able to view a Snap additional times before it is deleted from their device”

“… if you add a Snap to your Story it will be viewable for 24 hours”

“Additionally, we cannot guarantee that deletion of any message always occurs within a particular timeframe”

“We also cannot prevent others from making copies of your messages …”

“In addition, as for any other digital information, there may be ways to access messages while still in temporary storage on recipients’ devices or, forensically, even after they are deleted”

The last sentence emphasises just how little its users should rely on the service for meaningful privacy:

You should not use Snapchat to send messages if you want to be certain that the recipient cannot keep a copy.

Where does this leave SnapChat users?

The problem with these revelations is not that Snaps are actually accessible and may endure in some form or another. The problem is that SnapChat pitched a service that doesn’t retain its users’ content. SnapChat rose to prominence at a time when the world was reeling from revelations about unprecedented government surveillance which seemed to reach deep into a variety of online services we assumed were secure. Its promise was to protect its users’ privacy and their content from unwanted scrutiny. In many respects, SnapChat seemed to be the first of a new wave of services that placed control in users’ hands.

In the process, SnapChat misled its users fairly dramatically and that is the most troubling aspect of this story. SnapChat users relied on an assumption that their content was transient and this has turned out not to be the case at all. To put this into context, though, it doesn’t mean SnapChat is inherently less private than any other chat service (barring poor security practices). It means that SnapChat is broadly comparable to other chat services which haven’t made similar claims about the privacy of their users’ communications.

That said, a further challenge is that a significant proportion of SnapChat’s users are probably under the age of 18. Although US services are mostly concerned about children under the age of 13, due to specific laws protecting children in the United States, our law doesn’t draw this distinction. In South Africa, a person under the age of 18 is a child and is subject to special protections which SnapChat seems to have had little regard for. Not only has SnapChat arguably processed children’s personal information in a manner which would not be acceptable in our law, it has misled those children about the extent to which it protects their privacy. At the very least, they and their parents should be very concerned and circumspect about continuing to use the service.

The last couple weeks saw two spectacular lapses in judgment in corporate Twitter accounts. The first was the pornographic US Airways tweet in response to a passenger’s complaints about a delayed flight and the second was an FNB employee’s flippant tweet about an ad personality’s activities in Afghanistan.

Each incident has unfolded a little differently. Both are stark reminders about the very serious legal consequences for misguided tweets.

In the case of the US Airways tweet, it appears that the tweet was a mistake and that the employee concerned will not be fired. Here is an explanation of the incident and some commentary from Sarah and Amber on a recent Social Hour video:

Disciplinary processes were under way following an offensive tweet sent from a First National Bank Twitter account, the bank said on Wednesday.

“We can confirm that disciplinary actions are currently under way as we are following the required industrial relations processes,” FNB’s acting head of digital marketing and media, Suzanne Myburgh, said.

In both cases, the companies concerned removed the offending tweets as soon as they discovered them and apologised for the tweets. Both incidents attracted a tremendous amount of attention and both brands were praised for apologising and being transparent about their investigations into their respective incidents. The benefit of this approach has been to mitigate the reputational harm both companies faced by engaging with their followers and keeping their customers updated on their investigations.

It is worth bearing in mind that managing corporate social media profiles at scale is not a simple exercise. As Cerebra’s Mike Stopforth pointed out in his Twitter post-mortem of the FNB tweet controversy:

On yesterday’s #FNB “debacle”: The challenge of managing the complexity of a corporation in social media can’t be underestimated. (1/7)

I don’t think I would characterise the tweet as an “understandable error”. Twitter profiles as prolific as FNB’s @RBJacobs profile require careful attention to the kinds of tweets that may be published and to what extent the teams managing these profiles can inject their personalities into the corporate personality or representation of the brand online.

From a legal perspective

The legal issues here are perhaps not as exciting as the raging debate and threats but they are important nonetheless. One of the central themes in the blog posts by both companies, Playhaven and SendGrid, is that employees who fail to fulfil their obligations towards their employers can be dismissed. In both Richards’ case and that of Playhaven’s ex-employee, the individuals concerned brought their employers into disrepute through their actions and, in doing so, exposed themselves to disciplinary action.

Employees owe their employers a number of duties and can be disciplined if they fail to honour them. Employees’ duties include the duties to –

Taking into account all the circumstances – what was written; where the comments were posted; to whom they were directed, to whom they were available and last but by no means least, by whom they were said – I find that the comments served to bring the management into disrepute with persons both within and outside the employment and that the potential for damage to that reputation amongst customers, suppliers and competitors was real.

and

This case emphasizes the extent to which employees may, and may not, rely on the protection of statute in respect of their postings on the Internet. The Internet is a public domain and its content is, for the most part, open to anyone who has the time and inclination to search it out. If employees wish their opinions to remain private, they should refrain from posting them on the Internet.

FNB clearly seems to have a process in place to identify, respond to and address incidents such as this tweet. It presumably also has a sound policy framework that it will rely on when dealing with this incident. This is where a social engagement policy (what used to be called a “social media policy”, and which has evolved since then) is really important.

Although much of the focus of a social engagement policy has traditionally been on behaviours which must align with the brand, the policy also serves an important disciplinary function by clearly communicating a standard which employees using social communication tools must meet. This, in turn, ties into one of the important requirements of a sound disciplinary procedure: demonstrating that a clear standard was effectively communicated to employees who were aware of the standard and failed to meet it.

We may learn what happens to the FNB employee who published that ill-advised tweet. What is certain, though, is that this won’t be the last incident like this. We will see more incidents at other companies and the sooner companies develop effective processes to address these incidents, the better.


I spoke to Kieno Kammies on 567 CapeTalk radio this morning about a troubling trend. As you can hear from the segment, below, the concern is partly about people being photographed in suspicious ways in public. One example is a person following women around shooting video of them or taking photos without their knowledge. This isn’t so much about a person taking a photograph of a scene that happens to include women walking past but actually targeting those women.

Whether this is a privacy issue depends very much on the subject matter and the context. In this respect it comes down to legitimate expectations of privacy in the case of adults and appropriate consent when it comes to children (at least in terms of the Protection of Personal Information Act). The law that is likely to be more appropriate here is the Protection from Harassment Act which targets forms of harassment which the Act defines as follows:

“harassment” means directly or indirectly engaging in conduct that the respondent knows or ought to know-

(a) causes harm or inspires the reasonable belief that harm may be caused to the complainant or a related person by unreasonably-

(i) following, watching, pursuing or accosting of the complainant or a related person, or loitering outside of or near the building or place where the complainant or a related person resides, works, carries on business, studies or happens to be;

(ii) engaging in verbal, electronic or any other communication aimed at the complainant or a related person, by any means, whether or not conversation ensues; or

(iii) sending, delivering or causing the delivery of letters, telegrams, packages, facsimiles, electronic mail or other objects to the complainant or a related person or leaving them where they will be found by, given to, or brought to the attention of, the complainant or a related person; or

(b) amounts to sexual harassment of the complainant or a related person;

The harm the Act protects against may be “any mental, psychological, physical or economic harm”.

This Act is designed to be user-friendly, and the Regulations describe which forms to use for which steps and who to approach at each step. The Department of Justice and Constitutional Development has a comprehensive page with links to the Act, the Regulations and the various forms. The process was designed so that you don’t need an attorney to assist you (although you may have one helping you), and you need not know the harasser’s identity either. The Act creates a mechanism whereby the police may be instructed to investigate and identify the suspected harasser.

This legislation can be used for various activities which fall within the “harassment” definition, including stalking of the sort described in the segment, as well as cyber-bullying and more.

Our email providers give themselves much more convenient access to our data through their terms of service or privacy policies. On one hand, this level of access may be necessary to prevent disruptions and limit liability but, on the other hand, the permissions we, as users, grant providers like Microsoft, Google, Yahoo and others give them pretty broad access to our data without requiring them to obtain court orders or satisfy any external legal requirement.

It came out yesterday that the company had read through a user’s inbox as part of an internal leak investigation. Microsoft has spent today in damage-control mode, changing its internal policies and rushing to point out that they could have gotten a warrant if they’d needed one. By all indications, the fallout is just beginning.

Your provider is watching you

As disturbing as this is, there is a bigger picture. As The Verge’s Russell Brandom goes on to point out –

But while Microsoft is certainly having a bad week, the problem is much bigger than any single company. For the vast majority of people, our email system is based on third-party access, whether it’s Microsoft, Google, Apple or whoever else you decide to trust. Our data is held on their servers, routed by their protocols, and they hold the keys to any encryption that protects it. The deal works because they’re providing important services, paying our server bills, and for the most part, we trust them. But this week’s Microsoft news has chipped away at that trust, and for many, it’s made us realize just how frightening the system is without it.

People following the Oscar Pistorius trial in the last week would have discovered that private chats can become very public if law enforcement authorities believe they are relevant to an investigation.

Difficult listening to this whatsapp conversation between #Reeva and #OscarPistorius. So personal and intimate.

Although law enforcement authorities are required to follow various procedures to gain access to messaging and social media users’ communications, the companies operating the chat and email services we use daily don’t have this hurdle in their way if they deem it necessary to access their users’ communications.

The right to privacy in the South African Bill of Rights includes the right not to have the “privacy of [your] communications infringed”. This right is not absolute and can be (and is) limited by various laws, including the Regulation of Interception of Communications and Provision of Communication-related Information Act, which is how local law enforcement can obtain access to your communications. What this means is that, for law enforcement at least, there are checks and balances in place to protect our communications, thanks both to legislation and to service providers’ requirements.

Unfortunately, those same providers give themselves much more convenient access to your data through their terms of service or privacy policies. On one hand, this level of access may be necessary to prevent disruptions and limit liability but, on the other hand, the permissions we, as users, grant providers like Microsoft, Google, Yahoo and others give them pretty broad access to our data without requiring them to obtain court orders or satisfy any external legal requirement.

Microsoft

As The Verge pointed out, if you use Hotmail/Outlook.com, you have granted Microsoft permission to access your data. Microsoft’s Privacy Statement includes these permissions:

We may access or disclose information about you, including the content of your communications, in order to: (a) comply with the law or respond to lawful requests or legal process; (b) protect the rights or property of Microsoft or our customers, including the enforcement of our agreements or policies governing your use of the services; or (c) act on a good faith belief that such access or disclosure is necessary to protect the personal safety of Microsoft employees, customers or the public.

Because you agree to the Privacy Statement as a condition of using Microsoft’s services, you have consented to these uses of your personal information. This consent enables Microsoft to circumvent any questions about privacy infringement, because your legitimate expectation of privacy does not extend to these particular activities. This is the key rationale for a privacy policy, and the same principle applies to the permissions you grant to other providers (I’ve referred to a couple more below).

Google

Google operates an enormously popular email service, Gmail, which is also probably one of the most secure from the perspective of external surveillance and attacks. While Google holds itself out as its users’ protector from external threats, it also has the option of accessing your data because you have agreed to this when you agreed to its Privacy Policy which includes these provisions:

We use the information we collect from all of our services to provide, maintain, protect and improve them, to develop new ones, and to protect Google and our users.

…

We may combine personal information from one service with information, including personal information, from other Google services – for example to make it easier to share things with people you know.

…

We will share personal information with companies, organizations or individuals outside of Google if we have a good-faith belief that access, use, preservation or disclosure of the information is reasonably necessary to:

protect against harm to the rights, property or safety of Google, our users or the public as required or permitted by law.

These three sections are drawn from different parts of Google’s Privacy Policy and, between them, they give Google permission to share fairly comprehensive information it has about you with law enforcement authorities, as well as to use that information itself to, among other things, “protect” its services, itself and its users. “Protect” is a fairly broad term, and this is likely intentional. When you write these sorts of policy documents, you don’t want to be too prescriptive if you anticipate requiring fairly broad consents for a wide range of foreseeable risks, and to cater for unforeseen ones.

Yahoo

Yahoo’s webmail service is still very popular. While Yahoo’s privacy policy tends to be pretty good about handling users’ personal information, it also retains fairly broad permissions in its Privacy Policy (I added some emphasis):

Yahoo does not rent, sell, or share personal information about you with other people or non-affiliated companies except to provide products or services you’ve requested, when we have your permission, or under the following circumstances:

…

We believe it is necessary to share information in order to investigate, prevent, or take action regarding illegal activities, suspected fraud, situations involving potential threats to the physical safety of any person, violations of Yahoo’s terms of use, or as otherwise required by law.

Apple

Although not as popular as the other providers’ services, Apple’s tight service and software integration makes its iCloud email service a convenient option, especially because it’s possible to create an email account on iCloud without requiring another email account first (which is increasingly rare). When you use Apple’s products and services, your consents include the following:

How we use your personal information

…

We also use personal information to help us create, develop, operate, deliver, and improve our products, services, content and advertising, and for loss prevention and anti-fraud purposes.

…

We may also use personal information for internal purposes such as auditing, data analysis, and research to improve Apple’s products, services, and customer communications.

Where this leaves you

Public events like the Oscar Pistorius trial and, before it, the ongoing revelations about state surveillance programmes over the last year or so, have reminded us that our private communications are not quite as private as we may have hoped. Our privacy is protected more by obscurity, and by the fact that our communications, for the most part, are not the sorts of things others would be terribly concerned about.

Our trust and the possibility of severe reputational harm keep the likes of Google, Yahoo, Microsoft, Facebook and others generally honest although, as we have seen with Microsoft, they may be prepared to break that trust if the reason is compelling enough to them. They will invariably point to the permissions we give them in our contracts with them, and they’ll be quite right. We have agreed to this, and we will continue agreeing to this level of access to our data, because the alternatives are not nearly as convenient.