The Government has announced the organisations that will sit on the Executive Board of a new national body to tackle online harms in the UK.

The UK Council for Internet Safety (UKCIS) is the successor to the UK Council for Child Internet Safety (UKCCIS), with an expanded scope to improve online safety for everyone in the UK.

The Executive Board brings together expertise from a range of organisations in the tech industry, civil society and public sector.

Margot James, Minister for Digital and the Creative Industries said:

Only through collaborative action will the UK be the safest place to be online. By bringing together a wealth of expertise from a wide range of fields, UKCIS can be an example to the world on how we can work together to
face the challenges of the digital revolution in an effective and responsible way.

UKCIS has been established to allow these organisations to collaborate and coordinate a UK-wide approach to online safety.

It will contribute to the Government's commitment to make the UK the safest place in the world to be online, and will help to inform the development of the forthcoming Online Harms White Paper.

Priority areas of focus will include online harms experienced by children such as cyberbullying and sexual exploitation; radicalisation and extremism; violence against women and girls; hate crime and hate speech; and forms
of discrimination against groups protected under the Equality Act, for example on the basis of disability or race.

Carolyn Bunting, CEO of Internet Matters, said:

We are delighted to sit on the Executive Board of UKCIS where we are able to represent parents' needs in keeping their children safe online.

Online safety demands a collaborative approach and by bringing industry together we hope we can bring about real change and help everyone benefit from the opportunities the digital world has to offer.

The UKCIS Executive Board is jointly chaired by Margot James, Minister for Digital and the Creative Industries (Department for Digital, Culture, Media and Sport); Victoria Atkins, Minister for Crime, Safeguarding and
Vulnerability (Home Office); and Nadhim Zahawi, Minister for Children and Families (Department for Education). It also includes representatives from the Devolved Administrations of Scotland, Wales and Northern Ireland. Board membership will be
kept under periodic review, to ensure it represents the full range of online harms that the government seeks to tackle.

Senran Kagura Burst Re:Newal was recently delayed on the PlayStation 4 as Sony demanded that publisher XSEED remove a mode which effectively allows you to fondle its cast of indeterminately aged virtual characters against their will.

There's some speculation that Sony is clamping down on heavily sexualised content, especially after it refused the release of the bizarro dating game Super Seducer, but many assumed that this would be limited to Western territories. However, comparison screenshots of a new Japanese visual novel released this week in the East suggest it may be a company-wide policy. The pictures, comparing the PS4 with the Nintendo Switch and PC versions, show heavy censorship used to obscure sexual imagery on the PS4 only. The censorship was not present in the PS Vita version, which launched a year ago.

There's also chatter that the PlayStation maker has requested jiggle physics be removed from the PS4 version of Warriors Orochi 4, as they're present in the Nintendo Switch release but conspicuously absent from the Sony SKU. This adds evidence to the notion that Sony is shutting down this kind of content.

Meanwhile, nichegamer.com reports that the Japanese developer light recently held a live broadcast where they confirmed that Sony's new and aggressive policy against sexual themes, seemingly applied only to Japanese-made games, is actually preventing them from releasing their latest visual novel. The game, Silverio Trinity, is their latest visual novel opus, and it has sexual themes in it. Developer light noted that Sony is getting strict with its approval process, especially regarding sexual themes.

The developer noted they were hoping to release the game for PlayStation 4 soon after New Year's, as development on the game is complete; however, Sony has been reluctant to approve it. Furthermore, Sony is confusingly asking Japanese developers to plead for approval only in English, making the process even more difficult for developers whose staff only speak or write Japanese. The developer noted that if they were to release the game for Windows PC (via Steam), they could release it next week.

Anti-alcohol campaigners from the Centre for Alcohol and Tobacco Studies have urged the Advertising Standards Authority and Ofcom to ban all alcohol imagery before the 9pm watershed, claiming it has harmful effects on young people. The campaigners also complain about advertisement breaks in Coronation Street, which sometimes feature alcoholic drinks.

The group claims that alcohol imagery in TV shows and advertisements correlates directly with the number of viewers over 15 years old who drink alcohol. According to Alexander Barker:

There is strong evidence that viewing alcohol advertising or imagery affects the uptake of subsequent alcohol use in young people.

The Nottingham University-based group analysed 611 shows and 1,140 advertisement breaks between 6pm and 10pm and says that approximately half of the content broadcast featured alcohol imagery.

Google is set to be fined in Russia for not complying with Russia's list of websites to censor.

Roskomnadzor, the Russian government's internet and media censor, accused Google of ignoring a law requiring search engines to block censored content. Roskomnadzor has recorded the fact of Google's non-compliance with its duty to connect to the federal state 'information system'.

Google is now subject to a fine of up to 700,000 rubles ($10,600).

Vadim Subbotin, Roskomnadzor's deputy chief censor, said Google had three days to respond to its ruling.

Ofcom's Content Board is a committee of the main Ofcom Board. It has advisory responsibility for a wide range of content issues, including the regulation of television, radio and video-on-demand quality and standards.

Sophie joins the Content Board on a three-year term.

Sophie is a television presenter, campaigner, artist and entrepreneur. She is lead presenter of live Paralympic sport on Channel 4, and has presented events including the 2016 Rio Paralympics and the 2017 Winter Paralympics. Sophie has also
presented a range of other programmes such as Unreported World, Tricks of the Restaurant Trade and Best Laid Plans.

Sophie, who was paralysed from the chest down aged 18, has twice been voted one of the top 100 most influential people with a disability in the UK. An active campaigner, Sophie is a patron of the charity Scope and an ambassador for Human Rights
Watch.

A Northern Soul is a UK documentary classified 15 for cinema release for around twenty uses of strong language. Prior to its submission to the BBFC Sheffield City Council classified the film 12A, for its premiere, as did Hull City
Council. The film's director complained in the media about the BBFC's decision. A letter co-signed by three Hull MPs was sent to the BBFC requesting that the 15 classification be reviewed, to which David Austin responded. The film is now
classified 12A by seven local authorities (Sheffield, Hull, Leeds, Liverpool, Halifax, Southampton and Lambeth).

Update: Nottingham too

25th October 2018. See article

Nottingham City Council has joined the group of councils that have disagreed with the BBFC's 15 rating for the documentary, A Northern Soul. The film will be released locally with a Nottingham 12A rating.

Once states totalling 35% of the EU's population oppose the new Copyright Directive, they can form a "blocking minority" and kill it or cause it to be substantially refactored. With the Italians opposing the Directive because of its
draconian new internet rules (rules introduced at the last moment, which have been hugely controversial), the reputed opponents of the Directive have now crossed the 35% threshold, thanks to Germany, Finland, the Netherlands, Slovenia, Belgium
and Hungary.

Unfortunately, the opponents of Article 11 (the "link tax") and Article 13 (the copyright filters) are not united on their opposition -- they have different ideas about what they would like to see done with these provisions. If they
pull together, that could be the end of these provisions.

If you're a European, this form will let you contact your MEP quickly and painlessly and let them know how you feel about the proposals.

That's where matters stand now: a growing set of countries who think copyright filters and link taxes go too far, but no agreement yet on rejecting or fixing them.

The trilogues are not a process designed to resolve such large rifts when both the EU states and the parliament are so deeply divided.

What happens now depends entirely on how the members states decide to go forward: and how hard they push for real reform of Articles 13 and 11. The balance in that discussion has changed, because Italy changed its position. Italy changed its
position because Italians spoke up. If you reach out to your country's ministry in charge of copyright, and tell them that these Articles are a concern to you, they'll start paying attention too. And we'll have a chance to stop this terrible
directive from becoming terrible law across Europe.

The Reliant is a 2018 USA action film by Paul Munger.
Starring Eric Roberts, Kevin Sorbo and Brian Bosworth.

Economic collapse causes widespread rioting and social unrest, leaving a lovesick 19-year-old girl struggling to care for her siblings in a stretch of woods bordered by lawless anarchy, wondering why a good God would let this happen.

The film was rated R by the MPAA for some violence.

The producers weren't impressed with the rating and decided to appeal the decision, presumably seeking a PG-13 rating.

Article 13 as written threatens to shut down the ability of millions of people -- from creators like you to everyday users -- to upload content to platforms like YouTube. And it threatens to block users in the EU from viewing content that is
already live on the channels of creators everywhere. This includes YouTube's incredible video library of educational content, such as language classes, physics tutorials and other how-to's.

This legislation poses a threat to both your livelihood and your ability to share your voice with the world. And, if implemented as proposed, Article 13 threatens hundreds of thousands of jobs, European creators, businesses, artists and everyone
they employ. The proposal could force platforms, like YouTube, to allow only content from a small number of large companies. It would be too risky for platforms to host content from smaller original content creators, because the platforms would
now be directly liable for that content. We realize the importance of all rights holders being fairly compensated, which is why we built Content ID and a platform to pay out all types of content owners. But the unintended consequences of Article 13 will put this ecosystem at risk. We are committed to working with the industry to find a better way. This language could be finalized by the end of the year, so it's important to speak up now.

Please take a moment to learn more about how it could affect your channel and take action immediately. Tell the world through social media (#SaveYourInternet) and your channel why the creator economy is important and how this legislation will impact you.

A committee of MPs has claimed that the government is not taking the urgent action needed to protect democracy from fake news on Facebook and other social media.

The culture committee wants a crackdown on the manipulation of personal data, the spread of disinformation and Russian interference in elections. Tory MP Damian Collins, who chairs the committee, says he is disappointed by the response to its
latest report. Collins has accused ministers of making excuses to further delay desperately needed announcements on the ongoing issues of harmful and misleading content being spread through social media.

When the Digital, Culture, Media and Sport Committee issued its interim report on fake news in July, it claimed that the UK faced a democratic crisis founded on the manipulation of personal data.

The MPs called for new powers for the Electoral Commission - including bigger fines - and new regulation of social media firms. But of the 42 recommendations in its interim report, the committee says only three have been accepted by the
government, in its official response, published last week.

The committee has backed calls from the Electoral Commission to force social media advertisers to publish an imprint on political ads to show who had paid for them, to increase transparency. Collins also criticised the government's continued
insistence that there was no evidence of Russian interference in UK elections.

Collins said he would be raising this and other issues with Culture Secretary Jeremy Wright, when he appears before the committee on Wednesday.

A pre-roll ad seen on YouTube in June 2018 for Spotify featured a number of scenes in quick succession and tense sound effects that imitated the style of a horror film. The ad opened with a shot of three characters having breakfast. One
character said, Can you play the wakeup playlist? and they played a particular song from their phone. That was followed by a shot of another character rousing himself and saying, Turn that up. As the music was turned up, a shot showed a horror
film style doll in a dilapidated old room raising its head and tense music was played to accompany the song. Several shots followed of the doll ambushing the characters in the ad whenever they played the song and implicitly attacking them. The
final shots showed one character attempting to convince the other not to play the song. The ad showed the character taking hold of the other character's hand to stop him playing it but then the doll's hand reached out to press play. The final
shots of the ad showed the doll's face alongside text which stated, Killer songs you can't resist.

The ad was seen during a video on the YouTube channel for DanTDM, a gaming channel.

The complainant, who was a parent, said their children saw the ad and found it distressing, and objected that the ad was:

unduly distressing; and

irresponsibly targeted, because it was seen during videos that were of appeal to children.

Spotify said that the ad was intended for an adult audience and was particularly targeted towards adults aged 18 to 34. They understood that the tools provided to them by YouTube to target ads towards a particular age group and demographic used a
combination of self-identification by YouTube users and probabilistic data based on the user's behaviour across the internet. Their agency had applied relevant content exclusions including ensuring that the ad was not shown alongside shocking or
graphic content. Additionally they applied a function so that users could skip the ad after five seconds. They noted that the first encounter with the doll in the ad occurred after 12 seconds and that between 7 and 12 seconds the ad introduced
cues as to the tone of the ad so they considered that viewers would have had the opportunity to skip the ad at any point if they considered the content to be distressing.

Spotify provided information from YouTube which listed the demographic data of logged-in viewers of the YouTube channel on which the ad was seen by the complainant. They explained that the data showed that 89% of viewers of the channel
were aged 18 or over and that most (73%) were aged between 18 and 44. Only 11% of viewers were aged between 13 and 17. Spotify said that the ad had appeared prior to a video about a video game that was marketed as a stealth and horror game.

ASA Assessment: Complaints 1 & 2 upheld in part

The ASA considered that although violence was not explicitly shown in the ad, it was implied. The ad contained several scenes that were suggestive of a horror film, including tense music and scenes of characters looking scared or in distress. In
two scenes in particular, actors were shown playing the song in bed and in the shower when they were ambushed by the doll. We considered that those scenes would be seen by viewers as reminiscent of famous scenes from horror films.

We first considered whether the ad was likely to cause undue distress to adults who saw it. The ad featured shots reminiscent of a horror film. However, we considered a number of scenes, including the doll nodding its head to the rhythm of the
song and the doll's hand pressing the play button on a device that had the Spotify app open, would be seen by viewers as humorous. We considered that although some might find the ad mildly scary, most adult viewers would find the ad overall to be
humorous rather than frightening and it was unlikely to cause distress to them.

However, we did consider that the nature of the ad meant it was not suitable to be seen by children because it was likely to be distressing to them. In particular, the ad contained scenes that had tense sound effects and imagery similar to a
horror film including the implied threat of violence. The fact the ad was set inside the home, including a bedtime setting, and featured a doll, meant it was particularly likely to cause distress to children who saw it. We did not consider that
the context of the ad justified the distress. In addition, the nature of the ad as emulating a horror trailer was deliberately not made clear from the start of the ad and children were likely to be exposed to some of the potentially frightening
scenes before they, or parents viewing with them, realised that was the case. We considered the ad therefore should have been appropriately targeted to avoid the risk of children seeing it.

We considered that the ad may have been appropriate to show before content on YouTube that was unlikely to be of particular interest to children. However, when seen by the complainant the ad was juxtaposed against unrelated content for the video
game Hello Neighbour. Although the video game was marketed as a stealth horror game, it included colourful cartoonish images and was rated by the ESRB as suitable for players aged 10+ and by PEGI as suitable for players aged seven or
older. We therefore considered that it was reasonable to expect that content about Hello Neighbour was more likely to appeal to children.

The figures provided by Spotify showed that 11% of viewers of the DanTDM channel were between the ages of 13 and 17, based on viewer demographics relating to logged-in users. However, the channel made use of cartoonish imagery and included videos of
video games popular with children and media including Fortnite and The Incredibles. We noted videos on the channel were presented in an enthusiastic manner by a youthful presenter who had won an award from a children's television network. Taken
altogether, we considered that from the content of the videos and presentational style, the channel would have particular appeal to children. For those reasons we concluded that the ads had appeared before videos that were likely to be of appeal
or interest to children.

We concluded that the ad was unlikely to cause distress to adults, but that it was likely to cause undue distress to children. Therefore, because the ad had appeared before videos of appeal to children, we concluded that it had been
inappropriately targeted.

We told Spotify to ensure that future ads did not cause distress to children without justifiable reason, and to ensure ads that were unsuitable for viewing by children were appropriately targeted.

After the recent censorship purge of over 800 independent media outlets on Facebook, the Supreme Court is now hearing a case that could have ramifications for any future attempts at similar purges.

The United States Supreme Court has agreed to take a case that could change free speech on the Internet. Manhattan Community Access Corp. v. Halleck, No. 17-702, the case that it has agreed to take, will decide if the private operator of a public
access network is considered a state actor.

The case could affect how companies like Facebook, Twitter, Instagram, Google and YouTube are governed. If the Court were to issue a far-reaching ruling, it could subject such companies to First Amendment lawsuits and force them to allow a much broader scope of free speech from their users.

DeeDee Halleck and Jesus Melendez claimed that they were fired from Manhattan Neighborhood Network for speaking critically of the network. And, though the case does not involve the Internet giants, it could create a ruling that expands the First
Amendment beyond the government.

The animal campaign group Peta has taken issue with the North American retailer Canada Goose, which sells down-filled jackets. Peta writes:

To kick off our robust anti-Canada Goose campaign across the U.S. and Canada, an enormous billboard has been erected near the retailer's flagship store in Chicago. A goose, pleading for his life, now towers over one of Chi-Town's busiest
streets, reminding drivers and pedestrians alike that geese don't want to die.

Meanwhile, in Short Hills, New Jersey, geese are making their own bus-side plea that's sure to grab folks' attention.

However, not everyone is happy with the adverts, leading the advertising space company Astral to quickly take down some of them.

Peta wasn't impressed and responded:

Citing numerous complaints, the ad agency Astral Media Outdoor removed PETA's ads from several bus shelters in Toronto after they were up for just one day last month--so our legal counsel sent a letter to the agency pointing out that the
censorship violates the Canadian Charter of Rights and Freedoms, which guarantees freedom of expression, and demanding an explanation for the removal of the ads.

In a survey more about net neutrality than porn censorship, MoneySupermarket noted:

We conducted a survey of over 2,000 Brits on this and it seems that if an ISP decided to block sites, it could result in increasing numbers of Brits switching - 64 per cent of Brits would be likely to switch ISP if they put blocks in place

In reality, this means millions could be considering a switch, with nearly six million having tried to access a site that was blocked in the last week - nearly one in 10 across the country.

It's an issue even more pertinent for those aged 18 to 34, with nearly half (45 per cent) having tried to access a site that was blocked at some point.

While ISPs might block sites for various reasons, a quarter of Brits said they would switch ISP if they were blocked from viewing adult sites - with those living with partners the most likely to do so!

Now switching ISPs isn't going to help much if the BBFC, the government-appointed porn censor, has dictated that all ISPs block porn sites. But maybe these 25% of internet users will take up alternatives such as subscribing to a VPN service.

Index on Censorship is standing with our free speech friends at Flying Dog Brewery, who've just been told by the UK drinks censor that they should stop selling one of their beers because the artwork by award-winning artist Ralph Steadman might encourage immoderate drinking.

Flying Dog was told that the Portman Group deemed the artwork for its Easy IPA Session India Pale Ale could spur people to drink irresponsibly.

We think this is nonsense and are pleased Flying Dog plans to ignore this ruling.

The press release sent by Flying Dog Brewery is below:

Flying Dog Brewery Will Not Comply with Regulatory Group's Ruling on Easy IPA

Flying Dog Brewery has been defending free speech and creative expression in the United States for more than 25 years. Now, it's taking a stand in the United Kingdom.

In May 2018, the Portman Group, a third-party organization that evaluates alcohol-related marketing, allegedly received a single complaint from a person who thought that Flying Dog's Easy IPA Session India Pale Ale could be mistaken for a soft
drink.

After months of deliberation, the Portman Group issued a final ruling, claiming that the packaging artwork ...directly or indirectly encourages illegal, irresponsible or immoderate consumption, such as binge drinking, drunkenness or
drunk-driving. It will be issuing a Retailer Alert Bulletin on 15 October, which will ask retailers not to place orders for the beer.

Notwithstanding the Portman Group's ruling, Flying Dog has decided to continue to distribute Easy IPA in the United Kingdom.

Jim Caruso, Flying Dog CEO, said:

Not surprisingly, the alleged complaint -- by a sole individual -- that a product labeled 'Easy IPA Session India Pale Ale' might be mistaken for a soft drink was, we believe, correctly dismissed by the Portman Group. That should have been the end of it. However, the Portman Group then went on to ban the creative and carefree Easy IPA label art by the internationally renowned UK artist Ralph Steadman.

Steadman has illustrated all of Flying Dog's labels since 1995. In the ruling, the Portman Group claims that the artwork of this low-ABV beer could be seen as encouraging drunkenness.

Without question, over-consumption, binge drinking and drunk-driving are serious health and public safety issues, and Flying Dog has always advocated for moderation and responsible social drinking, Caruso said. At the same time, there is no
evidence to suggest that the whimsical Ralph Steadman art on the Easy IPA label causes any of those problems. We believe that British adults can think for themselves and Flying Dog, an independent U.S. craft brewer, will not honor the Portman
Group's request to discontinue shipping Easy IPA to the UK.

The drinks censors of the Portman Group tried to justify their ban in their summary release:

A complaint about Easy IPA has been upheld by the Independent Complaints Panel.

The complainant, a member of the public, believed that the drink, which is produced by Flying Dog Brewery, appealed to under 18s. While the Panel concluded that the product did not have direct appeal to under-18s, the Panel investigated whether
the product packaging encouraged immoderate consumption.

The Panel noted that the front of the can contained the terms Easy IPA and Session IPA, the latter a commonly used descriptor in the craft beer category. However, it also noted that the original meaning of the phrase was a prolonged drinking session. Although the Panel did not consider these terms to be problematic if used in the right context, it found that, when used alongside an image of an inebriated-looking creature balancing on one leg, they presented an indication of drunkenness. Accordingly, the Panel upheld the complaint.

John Timothy, Secretary to the Independent Complaints Panel, commented: We are disappointed that Flying Dog Brewery do not appear to respect the decision or the process. Producers need to be extremely sensitive about the overall impact of their
labelling. Use of a phrase that could have been innocuous on its own has taken on a different meaning when considered alongside a drunken looking character.

Until 1968 plays that had the potential to create immoral or anti-government feelings were banned by the Lord Chamberlain's office or ordered to be edited.

The V&A exhibition includes original manuscripts with notes on what needed to be changed, and letters from the Lord Chamberlain's office explaining why the edits were required.

In the exhibition there are several pieces including a manuscript about the play Saved by Edward Bond. The play tells the story of a group of young people living in poverty and includes a scene in which a baby is stoned to death.

When the Royal Court Theatre submitted the play to the censor, over 50 amendments were requested. Bond refused to cut two key scenes, stating 'it was either the censor or me -- and it was going to be the censor'. As a result, the play was banned.

Before the act was passed, playwrights got around the law by staging banned plays in members' clubs, which meant they could not be prosecuted since these were private venues. The continued success of this strategy and the reluctance to prosecute made a mockery of the Lord Chamberlain's powers and reflected the increasingly relaxed attitudes of the public towards 'shocking' material.

The first night after the Act was introduced, the rock musical Hair opened on Shaftesbury Avenue in the West End. It featured drugs, anti-war messages and brief nudity, ushering in a new age of British theatre.

As far as I can see, if a porn website verifies your age with personal data, it will probably also require you to tick a consent box with a whole load of small print that nobody ever reads. Now if that small print lets it forward all personal data, coupled with porn viewing data, to the Kremlin's dirty tricks and blackmail department, then that's OK under the Government's age verification law. So for sure some porn viewers are going to get burnt because of what the government has legislated and because of what the BBFC has implemented.

So perhaps it is not surprising that the BBFC has asked the government to pick up the tab should the BBFC be sued by people harmed by their decisions. After all it was the government who set up the unsafe environment, not the BBFC.

Margot James, Minister of State at the Department for Digital, Culture, Media and Sport, announced in Parliament:

I am today laying a Departmental Minute to advise that the Department for Digital, Culture, Media and Sport (DCMS) has received approval from Her Majesty's Treasury (HMT) to recognise a new Contingent Liability which will come into effect when
age verification powers under Part 3 of the Digital Economy Act 2017 enter force.

The contingent liability will provide indemnity to the British Board of Film Classification (BBFC) against legal proceedings brought against the BBFC in its role as the age verification regulator for online pornography.

As you know, the Digital Economy Act introduces the requirement for commercial providers of online pornography to have robust age verification controls to protect children and young people under 18 from exposure to online pornography. As the
designated age verification regulator, the BBFC will have extensive powers to take enforcement action against non-compliant sites. The BBFC can issue civil proceedings, give notice to payment-service providers or ancillary service providers, or
direct internet service providers to block access to websites where a provider of online pornography remains non-compliant.

The BBFC expects a high level of voluntary compliance by providers of online pornography. To encourage compliance, the BBFC has engaged with industry and charities and undertaken a public consultation on its regulatory approach. Furthermore, the
BBFC will ensure that it takes a proportionate approach to enforcement and will maintain arrangements for an appeals process to be overseen by an independent appeals body. This will help reduce the risk of potential legal action against the BBFC.

However, despite the effective work with industry, charities and the public to promote and encourage compliance, this is a new law and there nevertheless remains a risk that the BBFC will be exposed to legal challenge on the basis of decisions
taken as the age verification regulator or on grounds of principle from those opposed to the policy.

As this is a new policy, it is not possible to quantify accurately the value of such risks. The Government estimates a realistic risk range to be between £1m and £10m in the first year, based on the likely number and scale of legal challenges. The
BBFC investigated options to procure commercial insurance but failed to do so given difficulties in accurately determining the size of potential risks. The Government therefore will ensure that the BBFC is protected against any legal action
brought against the BBFC as a result of carrying out duties as the age verification regulator.

The Contingent Liability is required to be in place for the duration of the period the BBFC remains the age verification regulator. However, we expect the likelihood of the Contingent Liability being called upon to diminish over time as the regime settles in and relevant industries become accustomed to it. If the liability is called upon, provision for any payment will be sought through the normal Supply procedure.

It is usual to allow a period of 14 Sitting Days prior to accepting a Contingent Liability, to provide Members of Parliament an opportunity to raise any objections.

The BBFC has made a few changes to its approach since the rather ropey document published prior to the BBFC's public consultation. In general the BBFC seems a little more pragmatic about trying to get adult porn users to buy into the age
verification way of thinking. The BBFC seems supportive of the anonymously bought porn access card from the local store, and has taken a strong stance against age verification providers who reprehensibly want to record people's porn browsing,
claiming a need to provide an audit trail.

The BBFC has also decided to offer a service to certify age verification providers in the way that they protect people's data. This is again probably targeted at making adult porn users a bit more confident in handing over ID.

The BBFC's tone is a little more acknowledging of people's privacy concerns, but it is the government's law, as implemented by the BBFC, that allows the recipients of the data to use it more or less how they like. Once you tick the 'take it or leave it' consent box allowing the AV provider 'to make your user experience better', they can do what they like with your data (although GDPR does kindly let you later withdraw that consent and see what they have got on you).

Another theme that runs through the site is a rather ironic acceptance that, for all the devastation that will befall the UK porn industry, for all the lives ruined by people having their porn viewing outed, and for all the lives ruined by fraud and identity theft, the regime is somehow only about stopping young children 'stumbling on porn'... because the older, more determined, children will still know how to find it anyway.

So the BBFC has laid out its stall, and it's a little more conciliatory to porn users, but I for one will never hand over any ID data to anyone connected with servicing porn websites. I suspect that many others will feel the same. If you can't trust the biggest companies in the business with your data, what hope is there for anyone else?

There's no word yet on when all this will come into force, but the schedule seems to be 3 months after the BBFC scheme has been approved by Parliament. This approval seems scheduled to be debated in Parliament in early November, eg on 5th
November there will be a House of Lords session:

Implementation by the British Board of Film Classification of age-verification to prevent children accessing pornographic websites. Baroness Benjamin, Oral questions

The BBFC has published its Age Verification Guidance document that will underpin the implementation of internet porn censorship in the UK.

Perhaps a key section is:

5. The criteria against which the BBFC will assess that an age-verification arrangement meets the requirement under section 14(1) to secure that pornographic material is not normally accessible by those under 18 are set out below:

a. an effective control mechanism at the point of registration or access to pornographic content by the end-user which verifies that the user is aged 18 or over at the point of registration or access

b. use of age-verification data that cannot be reasonably known by another person, without theft or fraudulent use of data or identification documents, nor readily obtained or predicted by another person

c. a requirement that either a user age-verify each visit or access is restricted by controls, manual or electronic, such as, but not limited to, password or personal identification numbers. A consumer must be logged out by default unless they
positively opt-in for their log in information to be remembered

d. the inclusion of measures which authenticate age-verification data and measures which are effective at preventing use by non-human operators including algorithms

It is fascinating that the BBFC feels bots need to be banned; perhaps they need to be 18 years old too before they can access porn. I am not sure that porn sites will appreciate Google-bot being banned from their sites. I love the idea that the word 'algorithms' has been elevated to some sort of living entity.

It all smacks of being written by people who don't know what they are talking about.

In a quick read I thought the following paragraph was important:

9. In the interests of data minimisation and data protection, the BBFC does not require that age-verification arrangements maintain data for the purposes of providing an audit trail in order to meet the requirements of the act.

It rather suggests that the BBFC pragmatically accepts that convenience and buy-in from porn users is more important than making life dangerous for everybody, just in case a few teenagers get hold of an access code.

The British Board of Film Classification was designated as the age-verification regulator under Part 3 of the Digital Economy Act on 21 February 2018. The BBFC launched its consultation on the draft Guidance on Age-verification Arrangements and
draft Guidance on Ancillary Service Providers on 26 March 2018. The consultation was available on the BBFC's website and asked for comments on the technical aspects on how the BBFC intends to approach its role and functions as the
age-verification regulator. The consultation ran for 4 weeks and closed on 23 April 2018, although late submissions were accepted until 8 May 2018.

There were a total of 624 responses to the consultation. The vast majority of those (584) were submitted by individuals, with 40 submitted by organisations. 623 responses were received via email, and one was received by post. Where express
consent has been given for their publication, the BBFC has published responses in a separate document. Response summaries from key stakeholders are in part 4 of this document.

Responses from stakeholders such as children's charities, age-verification providers and internet service providers were broadly supportive of the BBFC's approach and age-verification standards. Some responses from these groups asked for
clarification to some points. The BBFC has made a number of amendments to the guidance as a result. These are outlined in chapter 2 of this document. Responses to questions raised are covered in chapter 3 of this document.

A significant number of responses, particularly from individuals and campaign groups, raised concerns about the introduction of age-verification, and set out objections to the legislation and regulatory regime in principle. Issues included
infringement of freedom of expression, censorship, problematic enforcement powers and an unmanageable scale of operation. The government's consultation on age-verification in 2016 addressed many of these issues of principle. More information
about why age-verification has been introduced, and the considerations given to the regulatory framework and enforcement powers, can be found in the 2016 consultation response by the Department for Digital, Culture, Media and Sport.

Nasty Gal Ltd stated that the model featured in the ad was a UK size eight and that her body mass index (BMI) was within the healthy range for an adult woman.

Clearcast stated that the model weighed 134lbs and was 178cm tall with a BMI of 18.8, which sat well within the healthy weight and BMI range in accordance with NHS guidelines. They said that some viewers may subjectively view the model to be too
slender, whilst others would recognise her to be of a healthy appearance, which was supported by the NHS guidelines.

ASA Assessment: Complaints upheld

The ASA considered that while the female model in the ads generally appeared to be in proportion, there were specific scenes which, because of her poses, drew attention to her slimness. For instance, the ads showed the model lying on a sun lounger stretching her arms, which emphasised their slimness and length. Furthermore, towards the end of the ads were scenes showing the model spraying mist on herself, which placed focus on her chest, where her rib cage was visible and appeared prominent.

We considered that the model appeared unhealthily underweight in those scenes and concluded that the ads were therefore irresponsible.

The ads must not be broadcast again in their current form. We told Nasty Gal Ltd to ensure that the content in their ads was prepared responsibly.

As the EU advances the new Copyright Directive towards becoming law in its 28 member-states, it's important to realise that the EU's plan will end up censoring the Internet for everyone, not just Europeans.

A quick refresher: Under Article 13 of the new Copyright Directive, anyone who operates a (sufficiently large) platform where people can post works that might be copyrighted (like text, pictures, videos, code, games, audio etc) will have to
crowdsource a database of "copyrighted works" that users aren't allowed to post, and block anything that seems to match one of the database entries.

These blacklist databases will be open to all comers (after all, anyone can create a copyrighted work): that means that billions of people around the world will be able to submit anything to the blacklists, without having to prove that they hold the copyright to their submissions (or, for that matter, that their submissions are copyrighted). The Directive does not specify any punishment for making false claims to a copyright, and a platform that decided to block someone for making repeated fake claims would run the risk of being liable to the abuser if a user posts a work to which the abuser does own the rights.
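A filter of the kind Article 13 contemplates boils down to matching every upload against a claimant-supplied blacklist. The sketch below is purely illustrative (real filters use fuzzy perceptual fingerprinting rather than exact hashes, and the function names are invented for this example); it shows both the mechanism and why false claims are so cheap: nothing in the matching step verifies ownership.

```python
import hashlib

# Hypothetical blacklist of "copyrighted works", keyed by content hash.
# Note what is missing: any proof that the submitter owns the work.
blacklist = set()

def submit_claim(work: bytes) -> None:
    """Anyone may add a work to the blacklist; no ownership check is made."""
    blacklist.add(hashlib.sha256(work).hexdigest())

def is_blocked(upload: bytes) -> bool:
    """An upload is blocked if its hash matches any blacklist entry."""
    return hashlib.sha256(upload).hexdigest() in blacklist

submit_claim(b"protest sign artwork")
print(is_blocked(b"protest sign artwork"))            # True: exact copies are caught
print(is_blocked(b"protest sign artwork, recropped"))  # False: trivially evaded
```

Exact hashing blocks exact copies regardless of context (so parody and quotation are caught too) while missing trivially modified copies, which is why deployed systems use fuzzy matching, and why the lack of any penalty for false claims matters so much.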

The major targets of this censorship plan are the social media platforms, and it's the "social" that should give us all pause.

That's because the currency of social media is social interaction between users. I post something, you reply, a third person chimes in, I reply again, and so on.

Alice posts a picture of a political march: thousands of protesters and counterprotesters, waving signs. As is common around the world, these signs include copyrighted images, whose use is permitted under US "fair use" rules that permit parody. Because Twitter enables users to communicate significant amounts of user-generated content, they'll fall within the ambit of Article 13.

Bob lives in Bulgaria, an EU member-state whose copyright law does not permit parody. He might want to reply to Alice with a quote from the Bulgarian dissident Georgi Markov, whose works were translated into English in the late 1970s and are still in copyright.

Carol, a Canadian who met Bob and Alice through their shared love of Doctor Who, decides to post a witty meme from "The Mark of the Rani," a 1985 episode in which Colin Baker travels back to witness the Luddite protests of the 19th Century.

Alice, Bob and Carol are all expressing themselves through use of copyrighted cultural works, in ways that might not be lawful in the EU's most speech-restrictive copyright jurisdictions. But because (under today's system) the platform is typically only required to respond to copyright complaints when a rightsholder objects to the use, everyone can see everyone else's posts and carry on a discussion using tools and modes that have become the norm in all our modern, digital discourse.

But once Article 13 is in effect, Twitter faces an impossible conundrum. The Article 13 filter will be tripped by Alice's lulzy protest signs, by Bob's political quotes, and by Carol's Doctor Who meme, but Twitter is only required to block Bob from seeing these infringing materials.

Should Twitter hide Alice and Carol's messages from Bob? If Bob's quote is censored in Bulgaria, should Twitter go ahead and show it to Alice and Carol (but hide it from Bob, who posted it)? What about when Bob travels outside of the EU and looks back on his timeline? Or when Alice goes to visit Bob in Bulgaria for a Doctor Who convention and tries to call up the thread? Bear in mind that there's no way to be certain where a user is visiting from, either.

The dangerous but simple option is to subject all Twitter messages to European copyright censorship, a disaster for online speech.

And it's not just Twitter, of course: any platform with EU users will have to solve this problem. Google, Facebook, LinkedIn, Instagram, TikTok, Snapchat, Flickr, Tumblr -- every network will have to contend with this.

With Article 13, the EU would create a system where copyright complainants get a huge stick to beat the internet with, where people who abuse this power face no penalties, and where platforms that err on the side of free speech will get that
stick right in the face.

As the EU's censorship plan works its way through the next steps on the way to becoming binding across the EU, the whole world has a stake -- but only a handful of appointed negotiators get a say.

If you are a European, the rest of the world would be very grateful indeed if you would take a moment to contact your MEP and urge them to protect us all in the new Copyright Directive.

The Google+ social network exposed the personal information of hundreds of thousands of people using the site between 2015 and March 2018, according to a report in the Wall Street Journal. But managers at the company chose not to go public with
the failures, because they worried that it would invite scrutiny from regulators, particularly in the wake of Facebook's security failures.

Shortly after the report was published, Google announced that it would be shutting down Google+ by August 2019. In the announcement, Google also unveiled a raft of new security features for Android, Gmail and other Google platforms, introduced as a result of the privacy failures.

Google said it had discovered the issues during an internal audit called Project Strobe. Ben Smith, Google's vice president of engineering, wrote in a blog post:

Given these challenges and the very low usage of the consumer version of Google+, we decided to sunset the consumer version of Google+.

The audit found that Google+ APIs allowed app developers to access the information of Google+ users' friends, even if that data was marked as private by the user. As many as 438 applications had access to the unauthorized Google+ data, according to the Journal.

Now, users will be given greater control over what account data they choose to share with each app. Apps will be required to inform users what data they will have access to, and users must provide explicit permission before an app gains access to it. Google is also limiting apps' ability to gain access to users' call log and SMS data on Android devices. Additionally, Google is limiting which apps can seek permission to access users' consumer Gmail data: only email clients, email backup services and productivity services will be able to access this data.

Google will continue to operate Google+ as an enterprise product for companies.

People's medical records will be combined with social and smartphone surveillance to predict who will pick up bad habits and stop them getting ill, under radical government proposals.

Matt Hancock, the health secretary, is planning a system of predictive prevention, in which algorithms will trawl data on individuals to send targeted health nags to those flagged as having propensities to health problems, such as taking up
smoking or becoming obese.

The creepy plans have already attracted privacy concerns among doctors and campaigners, who say that the project risks backfiring by scaring people or being seen to be abusing public trust in NHS handling of sensitive information.

The Online Forums Bill is a Private Members' Bill that was introduced in Parliament on 11th September 2018 under the Ten Minute Rule. The only detail published so far is a summary:

A Bill to make administrators and moderators of certain online forums responsible for content published on those forums; to require such administrators and moderators to remove certain content; to require platforms to publish information about
such forums; and for connected purposes.

The next stage for this Bill, Second reading, is scheduled to take place on Friday 26 October 2018.

There is a small petition against the bill

Stop the Online Forums Bill 2017-18 becoming law.

Thought control by politicians, backed by the mainstream media, has led to ever more sinister intrusions into people's freedom to criticize public policy and assemble into campaign groups.

Requiring platforms to publish information about closed forums and making administrators responsible for content is Orwellian and anti-democratic.

A new sculptural work, Coralarium, created by artist and environmentalist Jason deCaires Taylor, was demolished last week after it was deemed anti-Islamic. The semi-submerged artwork was criticised by religious leaders and scholars in the
Maldives, where Islam is the official religion. The depiction of human figures in art is discouraged under Islamic law.

The government ordered the destruction of the artwork, after a court ruled it to be a threat to Islamic unity and the peace and interests of the Maldivian state, despite the authorities previously granting permission.

The project by deCaires Taylor features a large steel frame with cutouts mimicking the marine world, intended to allow sea life to explore freely within it, acting as a new habitat for coral and other species. Thirty human figures were positioned on top and inside the frame at tidal level, with others submerged beneath. The sculptures were based on life-casts of people, around half of them Maldivian, with some reimagined as hybrid forms including coral or root-like elements.

Nine months in the making, its creation involved a large team of marine engineers, steel fabricators, divers and mould-makers. However, on 21 September the work was destroyed under court order with pickaxes, saws and ropes. The Coralarium structure and underwater trees remain intact but the human figures have been hacked out.

In Canada, there have been ongoing discussions and proposals about new levies and fees to compensate creators for supposed missed revenue. There have been calls to levy a tax on mobile devices such as iPhones, for example. This week the Screen
Composers Guild of Canada took things up a notch, calling for a copyright levy on all broadband data use above 15 gigabytes per month.

A proposal from the Screen Composers Guild of Canada (SCGC), put forward during last week's Government hearings, suggests simply adding a levy on Internet use above 15 gigabytes per month.

The music composers argue that this is warranted because composers miss out on public performance royalties. One of the reasons for this is that online streaming services are not paying as much as terrestrial broadcasters.

The composers SCGC represents are not the big music stars. They are the people who write music for TV-shows and other broadcasts. Increasingly these are also shown on streaming services where the compensation is, apparently, much lower. SCGC
writes:

With regard to YouTube, which is owned by the advertising company Alphabet-Google, minuscule revenue distribution is being reported by our members. Royalties from the large streaming services, like Amazon and Netflix, are 50 to 95% lower when
compared to those from terrestrial broadcasters.

Statistics like this indicate that our veteran members will soon have to seek employment elsewhere and young screen-composers will have little hope of sustaining a livelihood, the guild adds, sounding the alarm bell.

SCGC's solution to this problem is to make every Canadian pay an extra fee when they use over 15 gigabytes of data per month. This money would then be used to compensate composers and fix the so-called value gap. As a result, all Internet users
who go over the cap will have to pay more. Even those who don't watch any of the programs where the music is used.

However, SCGC doesn't see the problem and believes that 15 gigabytes is enough. People who want to avoid paying can still use email and share photos, they argue. Those who go over the cap are likely streaming videos for which composers are not properly compensated. SCGC notes:

An ISP subscription levy that would provide a minimum or provide a basic 15 gigabytes of data per Canadian household a month that would be unlevied. Lots of room for households to be able to do Internet transactions, business, share photos,
download a few things, emails, no problem.

[W]hen you're downloading and consuming over 15 gigabytes of data a month, you're likely streaming Spotify. You're likely streaming YouTube. You're likely streaming Netflix. So we think because the FANG companies will not give us access to the
numbers that they have, we have to apply a broad-based levy. They're forcing us to.

The last comment is telling. The composers guild believes that a levy is the only option because Netflix, YouTube, and others are not paying their fair share. That sounds like a licensing or rights issue between these services and the authors.
Dragging millions of Canadians into this dispute seems questionable, especially when many people have absolutely nothing to do with it.

It is a movie about an alien parasite that forcibly takes over someone's body and then starts threatening to bite heads and limbs off, so parents may be wondering if the movie is too scary for younger children.

As someone who has tracked technology and human rights over the past ten years, I am convinced that digital ID, writ large, poses one of the gravest risks to human rights of any technology that we have encountered. By Brett Solomon

The recent Fosta law in the US forces internet companies to censor anything to do with legal, adult and consensual sex work. It holds them liable for abetting sex traffickers even when they can't possibly distinguish the trafficking from the
legal sex work. The only solution is therefore to ban the use of their platforms for any personal hook ups. So indeed adult sex work websites have been duly cleansed from the US internet.

But now a woman is claiming that Facebook facilitated trafficking, when of course it's nigh on impossible for Facebook to detect such use of its networking systems. But of course that's no excuse under FOSTA.

According to a new lawsuit by an unnamed woman in Houston, Texas, Facebook's morally bankrupt corporate culture is to blame for permitting a sex trafficker to force her into prostitution after beating and raping her. She claims Facebook should be held responsible when a user on the social media platform sexually exploits another Facebook user. The lawsuit says that Facebook should have warned the woman, who was 15 years old at the time she was victimized, that its platform could be used by sex traffickers to recruit and groom victims, including children.

The lawsuit also names Backpage.com, which according to a Reuters report, hosted pictures of the woman taken by the man who victimized her after he uploaded them to the site.

The classified advertising site Backpage was shut down by federal prosecutors in April of this year.

Fisting; it's not for everyone. Certainly not for many Filipino moviegoers who apparently took offense at an independent film that used the word as its title.

Director Whammy Alcazaren's film originally titled Fisting now only goes by its much less graphic subtitle Never Tear Us Apart after festival organizer Cinema One Originals requested a title change.

The filmmakers responded by shutting down the social media accounts made for the movie and taking down other promotional materials bearing the former title.

According to a statement on Facebook, Alcazaren was willing to change the title on grounds of pragmatism:

We are doing these necessary steps so that we can continue the dialogue we wanted to have with the audience through our film, the statement reads.

The Philippine Daily Inquirer reported that the Movie and Television Review and Classification Board (MTRCB), the agency that rates films, has also flagged the film's producers for its title. Apparently, the film's producers did not submit the publicity materials for review. The MTRCB also noted in a memorandum that all publicity materials for films must be suitable for a general audience.

Never Tear Us Apart is a family drama about an aging spy who discovers that his wife was impregnated by a monster called The Shadow.

Google's parent company Alphabet has rolled out a new tool aimed at defending against attacks on free speech around the globe.

Jigsaw announced the release of a new app, Intra, designed to protect Android users against the manipulation of DNS resolutions, a practice commonly used by repressive regimes to prevent users from accessing information deemed off-limits by the state.

In Iran, for example, certain websites redirect to a government censorship page. The same is true of China's Great Firewall (GFW), which returns false and often seemingly random IP addresses in response to DNS queries for government-blocked domains. Hundreds of websites are likewise blocked in Pakistan.

Intra works, according to its creators, by simply encrypting the user's connection to the DNS server. By default, it points to Google's own DNS servers, but for users who prefer to use another (Cloudflare or IBM's Quad9, for example) those settings can be changed within the app.
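The encryption Intra applies is DNS-over-HTTPS: instead of plaintext UDP lookups that a censor can observe and spoof, queries travel inside an ordinary TLS connection to the resolver. A minimal sketch in Python (the endpoint here is Google's public DoH JSON API; any compatible resolver could be substituted, much as the app allows):

```python
import json
import urllib.request

def doh_query_url(name: str, resolver: str = "https://dns.google/resolve") -> str:
    """Build a DNS-over-HTTPS JSON query URL for an A record lookup."""
    return f"{resolver}?name={name}&type=A"

def resolve_over_https(name: str) -> list:
    """Resolve a hostname over HTTPS. A network-level censor sees only
    encrypted traffic to the resolver, not which domain was looked up,
    and cannot substitute a false answer as it can with plaintext DNS."""
    with urllib.request.urlopen(doh_query_url(name)) as resp:
        data = json.load(resp)
    # Answer type 1 denotes an A (IPv4 address) record.
    return [ans["data"] for ans in data.get("Answer", []) if ans.get("type") == 1]
```

This is a sketch of the technique, not Intra's actual implementation; the app tunnels the device's system DNS rather than exposing a Python function.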

According to CNET, DNS queries will be encrypted by default in an updated version of Android Pie. Reportedly, however, around 80 percent of Android users aren't using the latest version of the Android operating system. For those users, Intra is now available in Google Play.

A Thai sex hotel has sparked 'outrage' with Nazi-themed rooms decorated with swastikas and huge murals of Hitler overlooking the bed.

The Communist room is one of the largest at the Love Villa Hotel near Bangkok, Thailand, and is said to be extremely popular with swingers and randy groups looking for sordid orgies. The room has been condemned by Jewish communities.

Efraim Zuroff from the Simon Wiesenthal Center - an international campaign group in Los Angeles - said:

This is truly awful. It's horrendous, absolutely disgusting. It shows a complete lack of knowledge and education about Hitler, the harm he caused and the horrifying crimes that he committed in World War Two.

This is a problem throughout Asia and unfortunately I'm not at all surprised by it. Frankly, the Thai government needs to be a lot more active in preventing this kind of thing and there's no reason why they shouldn't be.

And from my knowledge of Thailand I can confirm that Thais indeed have a complete lack of knowledge and education about Hitler, the harm he caused and the horrifying crimes that he committed in World War Two. So they are certainly not guilty of knowingly setting out to outrage anybody.

Abraham Cooper, a Rabbi from California whinged:

There's no excuse in the age of Wikipedia for someone to not know that Hitler was one of the worst monsters of history.

This hotel is outrageous and beyond the pale. The rooms need to be painted over immediately and the Thai government needs to take instant action if the country wants to be taken seriously as a tourist destination.

I rather suspect that Europeans are equally unknowledgeable and unsympathetic about some of the monsters that have massacred people in Asia over the years.

New rules on audiovisual media services will apply to broadcasters, and also to video-on-demand and video-sharing platforms

MEPs voted on updated rules on audiovisual media services covering the protection of children, stricter rules on advertising, and a requirement for 30% European content in video-on-demand catalogues.

Following the final vote on this agreement, the revised legislation will apply to broadcasters, but also to video-on-demand and video-sharing platforms, such as Netflix, YouTube or Facebook, as well as to live streaming on video-sharing
platforms.

Audiovisual media services providers should have appropriate measures to combat content inciting violence, hatred and terrorism, while gratuitous violence and pornography will be subject to the strictest rules. Video-sharing platforms will now be
responsible for reacting quickly when content is reported or flagged by users as harmful.

The legislation does not include any automatic filtering of uploaded content, but, at the request of the Parliament, platforms need to create a transparent, easy-to-use and effective mechanism to allow users to report or flag content.

The new law includes strict rules on advertising, product placement in children's TV programmes and content available on video-on-demand platforms. EP negotiators also secured a personal data protection mechanism for children, imposing measures
to ensure that data collected by audiovisual media providers are not processed for commercial use, including for profiling and behaviourally targeted advertising.

Redefined limits of advertising

Under the new rules, advertising can take up a maximum of 20% of the daily broadcasting period between 6:00 and 18:00, giving broadcasters the flexibility to adjust their advertising periods. A prime-time window between 18:00 and 0:00 was also set out, during which advertising will only be allowed to take up a maximum of 20% of broadcasting time.
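The percentage caps translate directly into minute budgets per window; a quick illustrative calculation (window lengths taken from the rules above, function name invented for this example):

```python
def max_ad_minutes(window_hours: float, cap: float = 0.20) -> float:
    """Maximum advertising minutes allowed in a broadcasting window,
    given a cap expressed as a fraction of total broadcast time."""
    return window_hours * 60 * cap

# Daytime window 06:00-18:00 (12 hours) at the 20% cap:
print(max_ad_minutes(12))  # 144.0 minutes, placed at the broadcaster's discretion
# Prime-time window 18:00-24:00 (6 hours), also capped at 20%:
print(max_ad_minutes(6))   # 72.0 minutes
```

So the cap limits the total across each window rather than fixing ads to particular hours, which is the flexibility the text describes.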

30% of European content on the video-on-demand platforms' catalogues

In order to support the cultural diversity of the European audiovisual sector, MEPs ensured that 30% of content in the video-on-demand platforms' catalogues should be European.

Video-on-demand platforms are also asked to contribute to the development of European audiovisual productions, either by investing directly in content or by contributing to national funds. The level of contribution in each country should be
proportional to their on-demand revenues in that country (member states where they are established or member states where they target the audience wholly or mostly).

The legislation also includes provisions regarding accessibility, integrity of a broadcaster's signal, strengthening regulatory authorities and promoting media competences.

Next steps

The deal still needs to be formally approved by the Council of EU ministers before the revised law can enter into force. Member States have 21 months after its entry into force to transpose the new rules into national legislation.

The text was adopted by 452 votes to 132, with 65 abstentions.

Article 6a

A new section has been added to the AVMS rules re censorship:

Member States shall take appropriate measures to ensure that audiovisual media services provided by media service providers under their jurisdiction which may impair the physical, mental or moral development of minors are only made available
in such a way as to ensure that minors will not normally hear or see them. Such measures may include selecting the time of the broadcast, age verification tools or other technical measures. They shall be proportionate to the potential harm of
the programme. The most harmful content, such as gratuitous violence and pornography, shall be subject to the strictest measures.

Personal data of minors collected or otherwise generated by media service providers pursuant to paragraph 1 shall not be processed for commercial purposes, such as direct marketing, profiling and behaviourally targeted advertising.

Member States shall ensure that media service providers provide sufficient information to viewers about content which may impair the physical, mental or moral development of minors. For this purpose, media service providers shall use a system describing the potentially harmful nature of the content of an audiovisual media service. For the implementation of this paragraph, Member States shall encourage the use of co-regulation as provided for in Article 4a(1).

The Commission shall encourage media service providers to exchange best practices on co-regulatory codes of conduct. Member States and the Commission may foster self-regulation, for the purposes of this Article, through Union codes of conduct as referred to in Article 4a(2).

Article 4a suggests possible organisation of the censors assigned to the task, eg state censors, state-controlled organisations such as Ofcom, or nominally state-controlled co-regulators like the defunct ATVOD.

Article 4a(3) notes that censorial countries like the UK are free to add further censorship rules of their own:

Member States shall remain free to require media service providers under their jurisdiction to comply with more detailed or stricter rules in compliance with this Directive and Union law, including where their national independent regulatory authorities or bodies conclude that any code of conduct or parts thereof have proven not to be sufficiently effective. Member States shall report such rules to the Commission without undue delay.

We have received complaints from some viewers who were unhappy with scenes of violence in the Mick Carter prison storyline.

Response

We're aware that any scenes of violence and unpleasantness can sometimes be upsetting for some of our audience, but occasionally they are necessary to the narrative. EastEnders has a long-established relationship with its audience, who have come to expect big dramatic moments such as these, and as our regular viewers will know, the scenes in question were part of an ongoing storyline which has seen Mick pushed to his limits after he was falsely imprisoned.

We are always extremely mindful of the content within an episode and the time slot in which it is shown. All of our content, including language, must be editorially justified, and we're always careful to film and edit scenes in such a way that they do not exceed reasonable expectations for the programme -- with much of the violence being implied rather than explicit.

It's also important to note that EastEnders is a fictional drama but, like society, it's made up of many different character types. We feel the scenes in question are crucial aspects of the overall storyline of Mick's time in prison, and that
they were not included gratuitously.

Tommy Robinson has accused Sky News of editing their interview with him to make it seem like he said he didn't mind inciting fear of Muslims, a sentiment which was reported in the press, including by RT.

Robinson, with supporting videos published on YouTube showing the full interview, notes that the statement was made in answer to a different question, about a Dutch public service video warning children about the dangers of grooming. Robinson's comments were about whether that video incited fear of Muslims, not about Robinson's own actions inciting fear.

In a response video titled: Exposing Sky News lies and propaganda I will take them to court for this, Robinson states:

[The] headline that's gone all around the world says that Tommy Robinson says he doesn't care if he -- as in me -- incites fear against Muslims.

Jason Farrell wrote a piece for Sky News defending his interview, not on the grounds of its selective editing but rather over criticism that it provided a platform for Robinson and his views:

Are we not interested then in quizzing him about who he is now, and how he justifies his more recent words and actions?

Steam isn't officially available in China, but it's not officially blocked either. This in-between state still gives the censors unofficial power to ensure that Steam does not allow adult games to be sold in China.

Steam only recently stopped censoring adult games in the rest of the world, but the change of policy will not apply to China.

As part of the policy shift, Steam added two more content filtering options for users: a general Mature Content filter and an Adult Only filter. But China doesn't have the latter option, which means that Chinese users don't have access to these games at all.

Chinese media speculated that Steam is restricting adult titles from Chinese gamers to avoid getting officially blocked in the country. China's government is tightening its grip on the gaming industry and repeatedly clamps down on online content that it deems inappropriate, so Steam could be trying to keep a delicate balance: not officially available, but not officially blocked, either.

Sky TV has decided to partner with the US media rating service Common Sense Media to introduce a detailed rating system that will help parents make smarter choices about what their children watch on Sky. The new service will launch in the UK in 2019.

Since its founding in 2003, Common Sense has built the largest library of independent age-based reviews for everything kids watch, play, read and learn. The service, which will be available on Sky Q, will include in-depth information on the
prevalence of specific types of content. This includes the educational value of the show, positive messages, use of positive role models, bad language, violence, sex and drink and drugs. Each is rated on a scale of one to five depending on how
applicable it is to each show.

Jeremy Darroch, Group Chief Executive, Sky, said:

As a parent I know how reassuring it is that the Sky platform offers a safe, highly-regulated, family-friendly environment -- but we know we can always do more. Our partnership with Common Sense will help give parents greater peace of mind, helping them make smarter viewing choices for their children.

Later this year Sky Kids Safe Mode will launch on Sky Q, helping parents hand pick and ring-fence the content they want their children to watch and password protect any content they feel is unsuitable.

The announcement does not mention how this will affect Sky's relationship with the BBFC; presumably it is a bit of a snub to the cinema and video ratings provided by the BBFC.

As an example of Common Sense Media's output, I compared their comments on the Marvel superhero movie Venom with the more detailed BBFC advice:

MPAA Rated PG-13 for intense sequences of sci-fi violence and action, and for language.

What parents need to know

Parents need to know that Venom is a sci-fi action movie based on an antihero/villain from the Marvel universe. Photojournalist Eddie Brock's (Tom Hardy) life is disrupted for good when he becomes host to an alien parasite. The alien symbiote is able to take over Brock's body, giving him superpowers but also a dark alter ego called Venom. As his worried girlfriend, Anne (Michelle Williams), watches, Brock struggles with whether to escape the destructive being taking over his body or to give in to its dangerous power. This movie looks darker than most of the Marvel films; expect intense, graphic violence, strong language, and lots of scares.

Rated 15 for strong threat, horror, violence

VENOM is a US sci-fi action fantasy in which alien organisms are brought back to Earth.

Threat

There are a number of sequences in which people are threatened and attacked by the alien organisms, or by people into whose bodies the aliens have entered.

Horror sequences include the alien organisms entering people's bodies, causing their limbs to distort and their bones to crack. There is sight of injury detail, including protruding bones.

Violence

Stronger moments of violence include people being impaled by the alien organisms, sometimes with bloody detail, and people being eaten by the aliens. There is also moderate action violence throughout, including heavy punches, kicks and other
blows as well as use of tasers.

There is also infrequent strong language ('f**k'), alongside milder bad language (eg 'pussy', 'shit'). There are sequences in which live animals appear to be eaten, but no animals were harmed in the making of the film.

A new policing super-database is in the works -- and it puts our rights at serious risk. But the Home Office has failed to respond sufficiently to Liberty's concerns. We can't be part of a process that gives a free pass to the creeping
expansion of digital policing that shows contempt for our privacy rights.

On 28 September, we wrote to the Home Office telling them we can no longer take part in their Open Space civil society consultation on the Law Enforcement Data Service (LEDS) -- the Home Office's planned police super-database.

LEDS will bring together the Police National Computer and Police National Database in one place. This unprecedented development will see the Government amass deeply sensitive data for policing purposes.

It requires rigorous scrutiny and debate to make sure our personal information is protected, with robust safeguards to protect us from threats to our privacy and other fundamental rights.

The Home Office has made clear to us that the Open Space consultation will exclude discussion of our key concerns with the plan.

The information on the database will be vulnerable in many ways -- and the Home Office's plans fail to explain how police will use the system in conjunction with the creeping progression of surveillance and algorithmic policing.

The proposed system doesn't have an agreed retention policy and the police have even admitted that data they no longer have any right to hold will be transferred to the new database.

The plans even allow our data to be shared with non-policing organisations where a business case can be made.

And the Home Office has excluded from its consultation process any consideration of how the database will be linked with lawless facial recognition technology.

LEDS cannot be considered in a vacuum. This derisory consultation continues the pattern of police adding to their powers to use invasive technology without giving any regard to proper scrutiny and accountability -- or the effect on our rights.

Police forces are increasingly looking to big data to assist with law enforcement. Having enormous amounts of our personal information held in one place is a significant violation of our privacy. While the collection of a few pieces of data can
seem innocuous, combining it with other sensitive information can let the state build up a detailed and extremely intrusive personal profile on each of us.

Even more sinister are the algorithms the state is increasingly using to make important decisions about us -- leading to conclusions which may be inaccurate or biased and lack proper human oversight.

We must question how super-databases like this will be linked with lawless surveillance technologies or biased algorithmic programs that make predictions about who is likely to commit crime.

In the UK, we have a long-held principle of policing by consent. We must be able to trust the police to protect our privacy and our fundamental rights.

Kuwait's book censors have been very busy of late, banning 4,390 books since 2014, hundreds of them this year.

Recent targets include an encyclopedia with a picture of Michelangelo's David and a Disney version of The Little Mermaid. David had no fig leaf, and the mermaid, alas, wore half a bikini.

Shamayel al-Sharikh, a Kuwaiti women's activist, explained that the powers that be thought her dress was promiscuous.

Sometimes the 12-member censors committee (six Arabic readers, six English readers) that rules on books for the Ministry of Information gives a reason: The anthology Why We Write was banned because its editor, Meredith Maran, had falsely
accused her father of molestation.

In other cases, the justification is obscure, such as with The Art of Reading, by Damon Young. Maya Angelou's memoir, I Know Why the Caged Bird Sings, is forbidden in Kuwait.

One Hundred Years of Solitude, by Nobel Prize winner Gabriel García Márquez, is banned because of a scene in which a wife sees her husband naked, as is Children of Gebelawi, by Egyptian author Naguib Mahfouz, the first Arabic-language writer to win the Nobel in literature.