This is a blog about the future of digital media law from Laurence Kaye. Laurence runs Laurence Kaye Consulting Limited (click here), bringing insight and clarity to the complexities of the digital world.

April 20, 2010

This is a bit last minute but if you happen to be at the London Book Fair tomorrow, Laurie is chairing a discussion on e-books ('the hot legal issues'). It's at 2.30pm, in the Cromwell Room, Earls Court 1.

For those of you who won't be attending (whether prevented by volcanic ash or otherwise), we will blog about the key points discussed and we'll put the slides in Knowledge Zone on our website in due course.

The article seeks to outline the scope of the various debates over the wide-ranging principle of net neutrality. In particular, the article considers the various proposals to regulate the internet (e.g. to protect legitimate interests of copyright owners and consumers) and the counter arguments that the internet should operate free of restrictions in an open, accessible and unfettered way. You won't be able to access the newsletter unless you are a subscriber, so we have posted the article on our website here.

March 08, 2010

Last week brought the shock news that an Italian court had convicted and sentenced Google executives under Italian data protection laws in relation to user generated content (UGC) posted on a YouTube-type service operated by Google.

Bullies

The UGC in question showed a teenager with Down's syndrome being taunted and filmed by his bullies on a mobile phone. In a further act of harassment, the bullies posted the footage online for the world to see.

There is no doubt that the act of bullying itself was shameful and that the bullies should be punished for it (whether or not they were, I do not know). That is not the main point here.

Similarly, it is undisputed that the victim's privacy was invaded. Needless to say, he did not consent to being bullied in the first place, let alone being filmed and published online. In the UK, this could amount to breach of privacy (or, more accurately, the common law breach of confidence as stretched by the judiciary, together with Article 8 of the European Convention on Human Rights as given effect by the Human Rights Act 1998).

By publishing this private content, the bullies are the ones primarily in the wrong as their 'analogue' bullying becomes, in addition, cyberbullying (in the UK, the civil wrong and criminal offence of harassment). But that is not the main point either. They are primarily in the wrong, but the Italian court pointed the finger of blame beyond them to a 'bigger bully', Google.

Google

The shock factor in this case is that certain US-based Google executives were held criminally liable (in an Italian court) for hosting the UGC in question on Google's (internationally available) site. The sting in the tail is that the cause of action was under Italian data protection law (rather than, for example, defamation, privacy or e-commerce laws).

We'll need to wait for the Italian judgment to be published in order to examine the exact facts of the case and the judge's reasoning. In the meantime, let's consider the headline issues raised by the case:

applicable law and jurisdiction;

the 'hosting exemption'; and

data protection laws.

Applicable Law and Jurisdiction

Applicable law and jurisdiction in an online context is a can of worms so I will touch on it very briefly here.

Websites often state in their terms and conditions that use of the website is subject to the laws of a particular country. Despite this, certain 'mandatory provisions' of other countries' laws will override the chosen law where the website is seen to target the citizens of those other countries. Whether this happens depends on the type of law (e.g. data protection and consumer protection laws are often 'exported' in this way). This case has the added complexity of Italy being a member state of the EU and, as such, subject to certain Directives, etc.

Even if one country's laws apply, that doesn't mean that all disputes must be heard in that country's courts. There have been instances where one country has heard a dispute in its courts by applying the laws of another country. As Italy is an EU member state, questions on points of European law may be referred to the ECJ.

I should also add that the extent to which officers/directors can be held personally liable for their company's acts and omissions depends largely on the applicable law (e.g. Italian or US) and the cause of action (e.g. defamation or privacy).

Hosting Exemption

Of course, the Google executives did not - could not possibly - have known about the video in question. They did not make the video, did not post it, did not authorise it to be posted, and undoubtedly discouraged it from being posted (the terms and conditions would have made it clear that users are not permitted to upload unlawful UGC).

In considering whether Google is liable for displaying unlawful (e.g. defamatory) UGC, the question is whether it should have known about the video and removed it. This comes down to constructive, or imputed, knowledge, 'notice and take down' mechanisms, and whether Google went far enough to discharge its 'duty of care' as host of UGC. It seems that Google did, in fact, remove the video without undue delay upon notification.

Regulation 19 of the UK's E-Commerce Regulations contains a well-established exemption from liability for Google-type internet intermediaries hosting UGC. This exemption is what enables Google and countless other online companies, from ISPs like AOL to auction sites like eBay to forums like Mumsnet, to conduct their businesses. Without this exemption, they would face astronomical costs in moderating all content, and/or face countless lawsuits. This would have the effect of stifling UGC and the Internet as we know it would not exist. The US has similar provisions in its legislation. So does Italy, in line with Article 14 of the E-Commerce Directive.

The hosting exemption is, however, qualified by the requirement for Google et al to operate effective and expeditious 'notice and take-down' policies. That is, when they are notified of offensive content, they must act swiftly to remove it. This mechanism has worked to date in that it seems to strike a workable balance between fostering sharing of information (e.g. UGC) and safeguarding people's rights (e.g. privacy).

In ruling that Google's executives are personally, criminally liable for UGC hosted by a site owned by Google, the Italian court's decision would, at first glance, seem to upset this balance. Upon closer examination, however, this is not the case: one of the counts against Google was for defamation, and it was dismissed (it can safely be assumed) on the basis of the hosting exemption.

Data Protection

The indictment of the Google executives, in fact, related to the second charge made against them of unlawful processing of personal data under the Italian Privacy Code.

I do not intend to examine this charge, or the judge's reasoning, until I have had the opportunity to review the judgment in full. The main point to bear in mind is that Article 1(5) of the E-Commerce Directive expressly states that its provisions shall not apply to the Directives governing data protection. Therefore Google would not have had the protection of the hosting exemption under this count of breach of Italy's data protection laws. Recital (14) of the E-Commerce Directive explains:

"The protection of individuals with regard to the processing of personal data is solely governed by Directive 95/46/EC ...and Directive 97/66/EC ...which are fully applicable to information society services; these Directives already establish a Community legal framework in the field of personal data and therefore it is not necessary to cover this issue in this Directive in order to ensure the smooth functioning of the internal market... the implementation and application of this Directive should be made in full compliance with the principles relating to the protection of personal data, in particular as regards unsolicited commercial communication and the liability of intermediaries; this Directive cannot prevent the anonymous use of open networks such as the Internet."

Conclusion

So the indictment of the Google executives under Italian data protection law does not threaten to deprive internet intermediaries of the hosting exemption in relation to unlawful UGC. However, it does highlight that online companies must be vigilant as to the multitude of data protection laws which may govern their activities. In our experience, all companies tend to underestimate the importance of complying with data protection laws. This case may herald a turning point that sees data protection compliance rising up the agenda for online businesses.

On a different note, I think it is quite telling that, although many people left outraged comments on Google's website against the video in question in this case, not one of them actually clicked the button to flag it. As Google cannot be expected to monitor every item of UGC uploaded, so it cannot be expected to monitor every comment made in relation to each post. If, at the time, the Google site did not feature such a panic button, then the lesson to online UGC hosts is to include one. If, however, there was a panic button but users did not think to click it, then users need to be educated to do so. The UK's Click Clever Click Safe campaign drives home the message of how to use the Internet safely by following simple steps. It is aimed at children, but adults and children alike would do well to observe its 'zip it, block it, flag it' mantra.

We all need to act responsibly online, from cyberbullies (who leave cyber footprints as evidence of their bullying), to teens and adults (who can be too free with the personal information they make available online), to the internet intermediaries, who cannot simply claim ignorance and blanket exemptions against all potential liability, particularly in the field of data protection.

December 15, 2009

Copyright has been high on the agenda throughout 2009 both on a UK and EU level, with the Digital Britain Report, Content Online Platform, Pirate Bay decision and Digital Economy Bill, to name but a few examples. As the year draws to a close and we shift into 'reflective mode', you may be interested to read Laurie's article on our website, in which he pulls together the highlights from 2009 with a view to piecing together a snapshot of ‘where we are now with copyright and digital media law’.

We will follow up on many of these 'highlights' in 2010 and we will watch the progression of the various consultations, reviews and legislative initiatives (including the Digital Economy Bill) with particular interest.

On behalf of everyone at the firm, I'd like to wish you all the very best for the new year. I can think of one new year's resolution already and that will be to post more frequently!

September 23, 2009

Laurie is busy with work on SABIP and apologises for the radio silence. He'll be back and blogging soon but in the meantime, I'm afraid, you'll have to make do with me...

The correct title for this post is in fact "Net Neutrality (what is it and why should I care)"... but please don't run away.

I admit it's a cheap trick - but one that I was caught by recently when I went to a lecture given by Chris Marsden. I turned up to the friendly sounding "Future Regulation of the Internet" only to be told that in fact the seminar was on "Net Neutrality 'Lite': The European Approach".

If it weren't for the free cakes, I would have left.

But I'm glad I stayed. Net neutrality is one of those terms I often hear people use but never quite fully understand. For me, it's in that category of frightening webby concepts including folksonomies and convergence. What's more: it's about the pipes and I'm really more interested in the content. Chris managed to get me interested in it, though, and in reading around the subject after the seminar, I am on a quest to get to the bottom of it because it is clearly something that people get evangelistic about. For example, the US Federal Communications Commission has recently launched a site for the preservation of a "free and open internet", advocating net neutrality principles (see here for Mashable's commentary on the issue).

The reason that the term 'net neutrality' causes me such consternation is that it seems to mean different things to different people. Google explains it as "the principle that Internet users should be in control of what content they view and what applications they use on the Internet" (it would - it is, of course, a mere facilitator). Techies describe it as "the idea that every packet of information...carries equal importance. That is, a message from me moves no faster or slower over the Internet than a message from the Queen." Chris defined it as the principle that there is no ISP interference with the packets of data; i.e. that ISPs do not open packets to see inside and that they do not discriminate between packets.

Combining these, I understand net neutrality to be the principle that the Internet is a method of communicating packets of information from one end to another which does not: (a) investigate the packets; or (b) discriminate between the packets. This seems to me to be similar to Google's argument in the case I recently blogged about: i.e. Google, like the Internet, is a mere facilitator and is neither responsible for the content it carries nor obliged to monitor the same (the latter principle is enshrined in Art.15 of the E-Commerce Directive).

Apparently, ISPs breach the principle of net neutrality (and, possibly, their contracts with their subscribers) by advertising 'unlimited broadband' and then slowing different types of traffic to suit them. They may do this for practical purposes ('traffic management'), for selfish purposes (to save money), to get the competitive edge (to promote their own bundled services over those offered by third parties), out of the goodness of their heart (by blocking spam)...the list goes on. The interesting thing is that when an ISP selectively slows traffic, it is extremely difficult to tell whether this is a legitimate attempt to avoid congestion, a sign that the ISP has underprovisioned (i.e. advertised speeds faster than it is capable of achieving all of the time), or a deliberate attempt to slow particular traffic for a particular purpose (and then there are problems with evidencing that purpose). So far, so conspiracy theory!

Putting that can of worms aside, what interests me are the overarching principles - the way the net neutrality principle interacts with the fact that certain controls are necessary to ensure protection of consumers, content providers and other policy concerns. It is not a great leap to draw parallels with the freedom of expression -v- protection of privacy debate, or the free -v- paid-for content debate.

Also, from a legal perspective, as we often mention in our posts, offline laws apply in the online world and no amount of web-specific principles can undermine that. So the Data Retention Directive means that internet intermediaries have to keep records of communications, and the exemptions for ISPs in the E-Commerce Directive are defeated if the ISP has not observed notice and take down. I think this is what Chris meant by net neutrality 'lite'. The principle of net neutrality is all well and good but must in certain circumstances be diluted by regulations (which reflect real and practical concerns) in order to find the middle ground wherever the principle of net neutrality clashes with other equally important principles (e.g. privacy, crime prevention, consumer protection).

As always, this is a small taster of the whole debate and please forgive me any inaccuracies - it was my first foray specifically into this area. There are consumer protection, competition and so many more issues involved. But I hope that if you, like me, secretly baulked at the mention of net neutrality, you'll have found that this taster whets your appetite.

September 11, 2009

There has been some doubt in the UK as to whether search engines are 'publishers' and so responsible for: (a) displaying unlawful content in 'snippets' on the search results page; or (b) displaying a link to the page on which such content first appears.

But Mr Justice Eady has swept away this uncertainty by ruling in July that Google is a 'facilitator', not a 'publisher' and as such is not liable for defamatory text appearing in a snippet on its search results page (or, by extension, for displaying a link to the page on which the defamatory content was published in full, although Eady J did not specifically address this point).

The claimant was a company trading as "Scheidegger MIS" offering a distance learning course called "Train2Game". The claimant noticed defamatory comments on a forum provider's website, some of which also appeared in Google's search results when searching for "Train2Game".

The position regarding forum providers' liability for UGC is relatively clear (essentially they can benefit from certain exemptions as long as they implement an expeditious 'notice and take-down' policy) so I won't rehearse it here.

The decision in this case was essentially that Google is not to be regarded as publisher of the defamatory comments so it is not liable for them. As such, although Eady J considered the defences available to Google under common law, the Defamation Act 1996 and the E-Commerce Directive, it was unnecessary for Google to rely on them - it is not liable in any event because "if a person is not properly to be categorised as the publisher at common law, there is no need of a defence."

So the general position regarding search engines now appears to be that they are not publishers of the content they display on their search results page and, as such, they are not liable for any unlawful content contained therein. However, this is subject to the following provisos:

This will depend on the facts of each case and the way in which each search engine operates (e.g. search engines acting as aggregators may still run into trouble as Google News did in Belgium in 2007 - see our article here).

As a High Court decision in an interim application, this decision is not strictly binding legal precedent but is strongly persuasive and highly likely to be followed in future cases - and, perhaps more significantly, to act as a deterrent to claimants joining search engines as co-defendants in defamation claims.

Eady J made much of the fact that Google's service is fully automated and "Google has no control over the search terms entered by users of the search engine or of the material which is placed on the web by its users." It follows that "it has not authorised or caused the snippet to appear on the user's screen in any meaningful sense."

As there is no directly applicable case law and no specific legislation on the question of whether a search engine is a 'publisher' for the purposes of defamation, Eady J took the approach "to see how the relatively recent concept of a search engine can be made to fit into the traditional legal framework (unless and until specific legislation is introduced in this jurisdiction)". In doing so, he referred to the ISP cases of Bunt v Tilley and Godfrey v Demon Internet.

The idea that one party can be liable for an unlawful statement published by another party goes all the way back to 1894 to the quirky English case of Hird v Wood. In this case, a man sitting in a chair pointing to a defamatory placard was held to be liable because by pointing out the sign, he had contributed to its publication. However, in the even earlier (and even quirkier) case of Smith v Wood in 1813, a man who showed a defamatory caricature to a stranger who had knocked on his door and asked to see his etchings, was held not to have published that defamatory caricature.

Eady J did not refer to either of these cases in his judgment. If he had, I imagine he would have seen Google as the caricature artist acting passively in response to the requests of strangers, rather than the man actively pointing to a sign. Google does not choose what it 'points to' - as Eady J held, it "has no role to play in formulating the search terms" and performs the search "automatically in accordance with computer programmes." In addition, Google has an "absence of knowledge...in relation to the offending material", so on this basis it can be distinguished from the caricature artist as well.

Notice and Take-Down

One of Google's arguments in its defence in this case was that "it is practically impossible and certainly disproportionate to expect [Google] to embark on a wild goose chase in order to determine where the words complained of, or some of them, might from time to time "pop up" on the Web". Eady J agreed with this point, saying "One cannot merely press a button to ensure that the offending words will never reappear on a Google search snippet...." As such, access to the specific link the claimant complained of may be removed but this does not mean that the comments will not appear in Google's search results (e.g. if they appear on another site, Google may link to that site, unless the URL for that site is also specifically blocked).

When I Googled "train2game" to see what the fuss was about, the following message appeared at the bottom of the search results page:

So Google has removed the link to the offending comment. It is interesting to note that Google has a mechanism for people to file notices of copyright infringement (see here) but not for other types of unlawful (e.g. defamatory) content. Following this decision, there is no reason (in the UK, at least) for it to change this position.

We have previously advised a client who had written an article which aggravated members of the online community, causing them to attack him in blogs, forums, wikis, you name it. Needless to say, his 'net-rep' was in tatters and he was finding it difficult to get a job as a result. As it was fruitless to pursue all of the sites on which defamatory comments appeared, he went straight to Google to ask them to block the content. However, they said that this was not possible, for the reasons given above. Eady J's decision in this case further decreases his chances of any such remedy.

Eady J did, however, say that "there is no doubt room for debate as to what further blocking steps it would be open for it to take, or how effective they might be." We'll keep an eye out for any developments in this area.

No doubt this will be a welcome development for all search engines. It has created a degree of certainty in this area, bringing the UK into line with those EU member states which have opted to include a statutory exemption from liability for search engines (e.g. Bulgaria). However, there is still room for clarification on the notice and take-down obligations of search engines, which could be dealt with by a voluntary industry code or by the introduction of legislation.

September 01, 2009

I was quoted in the Times recently in relation to a teenage 'cyberbully', Keeley Houghton, who was jailed and issued with a restraining order for bullying another teenage girl. The reporter asked me if this case marks a turning point in online bullying cases - here are my thoughts on the issue.

Without knowing the exact facts of the case, it seems to me that Houghton must have been prosecuted under the Protection from Harassment Act 1997 ("the Act"), under which offenders can be sentenced to up to 6 months' imprisonment.

It should be noted that the victim had "been victimised by Houghton for four years...and had previously suffered a physical assault as well as damage to her home." This would have been a significant factor in leading the judge to believe that Houghton had pursued a "course of conduct" which: (a) "amounts to harassment of another" under s.1(1) of the Act; and/or (b) "causes another to fear, on at least two occasions, that violence will be used against him" under s.4(1) of the Act.

For this reason, I don't think that this case marks a turning point in online harassment or cyberbullying cases. On the facts, there was a long prior course of conduct including assault and criminal damage against the victim. It would, of course, be different if a person were sentenced for using social networks to bully another person without having also bullied that person 'offline'. The wording of the Act does not preclude an offence being committed in such a scenario - but to my knowledge nobody has been imprisoned on that basis in the UK yet.

However, this case does underline the point that people do not have a carte blanche to behave online in a way that is unacceptable offline. It also demonstrates that courts will take threats and comments made through social networking sites very seriously. Needless to say, there have been numerous online defamation cases and even a revival of the Obscene Publications Act 1959 in a lawsuit against a blogger. With the explosion in UGC and social networking sites, it is likely that online harassment cases will likewise occur more frequently, whether in civil or criminal actions.

As we've seen from people's behaviour on the gamut of websites from Facebook to Second Life, Mumsnet to Owlstalk, the online world follows the offline world in all things bad as well as good. At the same time, laws that were drafted pre-Web 2.0 are being stretched to cover all sorts of new online behaviours. Put another way, the same old laws are applied to cover the activities of the same old bullies in a new playground.

Unfortunately, as every playground has a bully, so social networking sites have spawned cyberbullying and this case shows that harassment laws are flexible enough to step in to protect the victims.

June 26, 2009

I will be blogging soon about Digital Britain so watch this space. In the meantime, Yasmin weighs up a regulatory development in the field of data retention in this post and in a related article on our website.

Landline and mobile phone providers have been required to retain certain communications data (e.g. time/length of call, name/address of caller) since 2007. New Regulations introduced a couple of months ago have extended this obligation to cover internet, email and VOIP as well, which could potentially see every post, tweet and poke being compulsorily retained for 12 months.

This move may be seen as a welcome and necessary weapon in the fight against terrorism and other serious crime which can be incited, orchestrated or even conducted online. However, others would counter that it is a threat to privacy as well as an extra compliance burden for service providers.

The new Regulations came into force on 6th April and require certain providers of telephone and internet services to retain communications data for a year. This ‘communications data’ relates to the who/when/where of a communication (but not the content) and ranges from log on/call times and durations to the names and addresses of people sending and receiving communications (callers, callees, emailers, emailees... YouTubers? Twitterers?)

Only public communications providers who are notified by the Secretary of State will be required to comply with the Regulations. It remains to be seen which companies will receive such notification, but ISPs (e.g. BT Internet) will certainly be notified, as will mobile phone providers (if they haven't already, e.g. O2) and VOIP operators (e.g. Skype). It will be interesting to see whether the Government will extend such notification to search engines (Google?) and website operators – particularly social networking sites (Facebook?), where mass communication (Twitter?) is key.

Pros

The Home Office has pointed out that communications data has long proved valuable for law enforcement purposes, in detecting crimes, investigating suspects and prosecuting offenders. Although many communications providers already retain this information in any event, they delete it as soon as their business purposes have been met (whether because of data protection legislation or the costs of storage). The Home Office argues that long running investigations, which may require communications data some time after a crime has been detected, tend to relate to the most serious crimes and as such there is a strong public interest in obliging relevant companies to preserve such evidence.

If every email, IM, tweet and post is logged along with the sender’s name, address and geographical location at the time, then law enforcers will find it easier to verify alibis, trace contacts and track movements. Criminals will be unable to rely on the perceived anonymity of the Web to disguise their activities. The Regulations send a clear message that people cannot hide behind online personalities to conduct criminal behaviour – ‘no avatar is an island’, if you like.

Cons

However, despite the stated benefits of the Regulations as a crime fighting tool, legitimate data protection concerns have been raised by privacy groups, who object to being monitored (or spied on) and criticise the measure as a step towards a ‘Big Brother’ state. As the Government has not had a good track record recently with safeguarding data, concerns over the generation and retention of increasing amounts of data are perhaps justified.

What I see as the biggest concern is the fact that the Regulations do not limit the disclosure and use of the data to investigation of the serious crimes on the basis of which the Regulations are justified. The Regulations blandly state that “Access to data retained in accordance with these Regulations may be obtained only (a) in specific cases, and (b) in circumstances in which disclosure of the data is permitted or required by law.” It is not difficult to envisage courts interpreting this provision widely and ordering disclosure in civil cases where this information would be useful – for example, defamation claims (to discover the details of a big-mouth blogger) and divorce cases (to check a cheating spouse’s phone calls). We have previously posted on how 'Norwich Pharmacal' orders have been made to disclose the contact details of certain libellous chat room participants. We may see more such instances as more companies are required to hold more data for longer, meaning more data is available for disclosure under court order. This may not necessarily be a bad thing, but it does depart from the stated purpose of the Regulations and the reasoning behind their introduction.

We should not forget that the Regulations will impose an additional compliance burden on the notified public communications providers. The extent to which this is an issue depends on the number of companies notified under the Regulations and the additional measures they will have to take to understand and implement their obligations under the Regulations. The Regulations have gone some way to addressing this by stating that the Secretary of State “may reimburse any expenses incurred by a public communications provider in complying with the provisions of these Regulations.” However, this subsidy ultimately comes from credit-crunched UK taxpayers, who may query the efficacy and efficiency of setting up the systems required to implement the Regulations.

May 22, 2009

My colleague, Yasmin Joomraty, attended a very interesting forum on 'Behavioural Targeting, Social Networking and the Challenges of Online Privacy' earlier this week. We were discussing her views and I asked her to blog about them so here follow her personal reflections on profiling, targeting and behavioural advertising...

(Yasmin writes) "I returned to my desk today to write up my take on the issues discussed at the Westminster eForum. At the forum, the Assistant Information Commissioner had mentioned the 'Personal Information Promise' on the ICO website to which companies can sign up. I Googled it to find out more. As soon as I had keyed "personal info" into the search bar, top of the list of Google's suggested search terms for my search was - you guessed it - "Personal Information Promise". In light of the comments the delegate from Phorm had made regarding search engines profiling and targeting users in more ways than Phorm would ever wish to, I chuckled to myself at this timely demonstration.

I then went to look up "privacy enhancing technologies" and, again, no sooner had I typed "privacy e" than Google had guessed what I was looking for. Handy, yes. But a little disconcerting in light of the 'challenges for online privacy' I was contemplating. Google's Privacy Overview offers an explanation:

"Google uses cookies and other technologies to enhance your online experience and to learn about how you use Google services in order to improve the quality of our services."

and

"Google’s servers automatically record information when you visit our website or use some of our products, including the URL, IP address, browser type and language, and the date and time of your request."

So that explains the customised search suggestions. Google knows my IP address and has tracked my online behaviour in order to provide me with this service - which, incidentally, I do not remember signing up for. This raises three questions for me:

What constitutes personal data? An IP address has been held to be personal data. So Google has obligations under the Data Protection Act 1998 (DPA) here.

Does it matter whether information about me constitutes personal data or not? As technologies evolve and trackers can find out more about me, should the obligations under the DPA stop at personal data? Do I have a valid objection to companies building up a profile of me which, although it does not constitute personal data, consists only of numbers and codes and is never even read by a human but simply passes through a 'black box' (as the Phorm delegate called it), yet which nevertheless corresponds to me and my habits, some of which may be private? As society comes to understand the new technologies better, there is scope for data about my behaviour finding its way to third parties and even revealing private things about me to others. For example, if a friend uses my laptop and notices that the suggested search terms and targeted ads are geared towards Botox, this may reveal something about me that is private and, if it does not constitute personal data, certainly relates to it.

Have I consented? I would describe myself as protective over my online presence and reluctant to receive marketing communications - I tend to search for opt-outs and actively select my preferences. The notion of informed consent is often debated, but it seems to me that if I find it difficult to ascertain what Google is doing with my information and how to opt out of the same, then how will the 'reasonable man', who is not actively looking and has not been notified, fare?

More worryingly, another point in Google's Privacy Overview is:

"Google collects personal information when you register for a Google service or otherwise voluntarily provide such information. We may combine personal information collected from you with information from other Google services or third parties to provide a better user experience, including customizing content for you."

Does this include/anticipate collaboration with Phorm-like behavioural trackers?

As new technologies and social attitudes converge to drive the shift in media, publishing and entertainment from a 'one to many' broadcast to the two-way dialogue of 'many to many' communication, so advertising is reaching its holy grail of targeting specific individuals with relevant messages.

An individual's online presence makes him part of the online world (consumer, broadcaster, commentator, buyer, seller, MMORPG player all in one) in a way that he never was before TV remotes had red buttons. That online presence exposes him to risks which do not apply in the offline world. These risks mostly centre on the individual's data - the type he chooses to share and the type he does not know he is sharing.

However, advertising has rich benefits and should not be unduly stifled. It is the driver for online growth and funds much of our virtual activities. It offers choice to consumers and can entertain, inform and empower."

March 13, 2009

From October this year, if you operate a website which has interactive features used by children, you will be required to vet new staff you employ to act as moderators on the website. This means that their names must be registered with the Independent Safeguarding Authority (ISA), a public body which has been set up to prevent unsuitable people from working with children. From 2010 you will also be required to ensure all existing staff moderators are ISA-registered (even if they have already undergone a CRB check).

The relevant legislation is The Safeguarding Vulnerable Groups Act 2006, which provides for the maintenance of a register of people who are banned from certain activities relating to children (a 'barred list'). Under a commencement order which came into force in January this year, this Act will apply to regulate moderators of websites with interactive features aimed at children.

You can read more about this development in the article on our website.

December 03, 2007

Back to the thorny subject of the liability of social networks, ISPs and other intermediaries for hosting or carrying illegal content. When the E-Commerce Directive introduced exemptions for hosting content, it specifically said that there was no obligation on hosts to monitor their sites for illegal content.

But industry practice is beginning to move in the opposite direction through the use of filtering technology. Take, for example, YouTube's recently introduced video identification software for vetting its content for copyright infringement.

YouTube is keen to be seen to be acting responsibly ("Like our other content policies and tools, YouTube Video Identification goes well above and beyond our legal responsibilities."). No doubt prompted in part by Viacom's proceedings, YouTube wants to send out a strong message that it doesn't condone or facilitate copyright infringement.

The software checks newly uploaded videos against a database of copyright protected content. As such, it is only as good as the database and the thoroughness of the checks. It is still in beta at present, so is bound to encounter teething troubles. It will be interesting to see if this builds bridges between YouTube and content providers, and how the courts will interpret its efforts.

At the same time, a group of big-name media and internet companies has published a set of user generated content principles, intended as good practice guidelines to serve as a benchmark for how to behave responsibly in the world of Web 2.0 (mentioned en passant in a previous post). But Google is notable by its absence as a signatory to these guidelines. Is it because the sands of legal liability for user generated content are shifting and it doesn't want to commit itself yet to a particular position?

Another type of industry standard has also been introduced recently: the Automated Content Access Protocol, or ACAP. This will enable internet content publishers to communicate permissions for access to and use of content on their sites to online intermediaries (such as search engine crawlers). Again, this represents another industry-led initiative setting standards to fill in the gaps where out-of-date legislation has proved inadequate.
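For the technically curious, ACAP works by extending the familiar robots.txt convention with its own permission fields. The following is an illustrative sketch only: the field names reflect the ACAP v1.0 specification as published, and the paths shown are hypothetical examples, not taken from any real site.

```
# Illustrative ACAP-extended robots.txt (sketch only - verify field
# names against the published ACAP v1.0 specification)

# Conventional robots.txt rules, understood by all crawlers:
User-agent: *
Disallow: /subscribers/

# ACAP equivalents, understood by ACAP-aware crawlers, allowing
# the publisher to express permissions rather than bare exclusions:
ACAP-crawler: *
ACAP-disallow-crawl: /subscribers/
ACAP-allow-crawl: /public-news/
```

The design point is backwards compatibility: crawlers that do not understand ACAP simply ignore the unfamiliar fields and fall back on the standard robots.txt rules, while ACAP-aware crawlers can honour the richer permissions the publisher has expressed.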