Archive for year: 2016

There have been lots of articles over the last week or so talking about “fake news” on Facebook, many revolving around the US election.

The ‘poster child’ of Facebook Fake News is this post: “FBI Agent Suspected in Hillary Email Leaks Found Dead…“. It appeared a few days before the US presidential election, and was shared a phenomenal number of times (567,752 according to Facebook’s API). It turned out the “Denver Guardian” does not actually exist – the site is just a shell set up to spread fake news, its domain registered anonymously.

Interesting, eh? So the fake Denver Guardian article was “several orders of magnitude more popular a story than anything any major city paper publishes on a daily basis”. And here’s a graph from that article, backing that up:

Quite a compelling chart. From that graph it looks like the Denver Guardian article is way, way more popular than anything the Boston Globe, LA Times, Chicago Tribune, and others have ever posted. Here you can see that debunking article shared on Twitter – Benedict Evans of the famous VC firm Andreessen Horowitz is retweeting it here, on an original tweet from Jay Rosen, who’s a Professor of Journalism at NYU:

408 retweets – I bet quite a few people read that post. Except… if you read into the detail properly, and check the actual data… that graph is not representative either. Here is why:

The author of the article just picked a single post, listed as ‘top story’, from each of the publications listed above, on a single day. If he’d picked an earlier day, or a different time of day, he’d have found much more popular articles; the same may well have been true a day later.

That line about “this article from a fake local paper was shared one thousand times more than material from real local papers” – strictly speaking that’s true, because “material” could mean any article. But it provides a false impression.

I spent a few minutes looking for the actual most shared posts on each of the above listed websites to remake the graph taking the actual ‘most shared’ posts. I went back to the start of September 2016. Here’s how the amended graph looks:

The “Denver Guardian” post is still very high there, but it’s not “several orders of magnitude more popular a story than anything any major city paper publishes on a daily basis”.

In other words: An article debunking fake news on Facebook actually gives a very false impression of reality itself. It was compelling enough that an NYU professor shared it, & several hundred people retweeted that. The article has itself been shared more than 1,500 times on Facebook.

The author was told that the article was wrong. He quietly updated some of it, and later added an explicit update note to the end, but most of the elements in the post are left as-is. It still says the Denver Guardian’s article is “several orders of magnitude more popular a story than anything any major city paper publishes on a daily basis”, and the graph remains intact. The NYU professor was told too, but left the RT as-is. Both probably acted with good intent, but the result is that some who read it may take it at face value, and believe the problem to be “several orders of magnitude” greater than it likely is.

Summary:

Yes, there is fake information on Facebook. Some of it is deliberate; some of it is due to simple incompetence.

If you pick the most shared ‘fake news’ article of all time on Facebook, and compare it against some moderately shared posts from reputable news outlets, the outcome is that the problem looks much greater than it may be.

Sometimes very reputable people accidentally share false information; sometimes they leave it there even after it’s noted as being not quite right.

Fake news is still a problem. If you wanted, you could probably cheat the stock market, or nudge one or two votes in an election, by pushing a piece of fake news at the right time. And, realistically, there are plenty of avenues Facebook could explore to limit the effectiveness of ‘fake news’.

Take what you read with a pinch of salt and, where you have a few moments spare, do a little of your own research to double check its validity. If it does not “pass the smell test”, maybe wait before hitting RT. But don’t overreact to the problem… it’s extremely unlikely that this fake news is “several orders of magnitude more popular a story than anything any major city paper publishes on a daily basis”.

This post simply includes 2 pieces of information: Answers to the question “Should the United Kingdom remain a member of the European Union or leave the European Union?” from the weekend immediately prior to the referendum, and answers to the same question, surveyed on 23rd August 2016 (ie. exactly 2 months after the referendum).

Responses to a survey on “the Brexit Question” immediately prior to the vote:

(data here is from 2,008 responses, weighted on the basis of demographic information from 1,485 respondents. All were in the UK, but were not qualified on eligibility to vote, or on whether they intended to vote)

Responses to a survey on “the Brexit Question” exactly 2 months after the vote:

(data here is from 1,002 responses, weighted on the basis of demographic information from 763 respondents. All were in the UK, but were not qualified on eligibility to vote, or on whether they had/had not voted)

Summary

A summary of the change in the above numbers is as follows:

Immediately prior to the referendum, the survey indicated a likely ‘Remain’ vote. As you can see from the error percentages, a “Leave” vote was within the margin of error. (ie, the poll was inconclusive)

2 months following the referendum, the ‘Remain’ percentage has grown significantly; the ‘Leave’ percentage has grown by a smaller amount (but has still grown), and the ‘Undecided’ percentage has dropped by over 10 percentage points, with roughly 80% of that ‘undecided’ pool going to ‘Remain’, and roughly 20% going to ‘Leave’. The error bars no longer overlap (ie, the respondents to the poll were, overall, in favour of remaining)

Caveats

As always, this is simply a snapshot poll, not an actual vote with real-world ramifications.

The 2 pools polled are not identical (ie. the survey respondents are not the same people)

None of the above is qualified by actual intention to vote, or whether the respondents voted or not.

Many more caveats apply. If you would like me to go into further detail on those, please do let me know.

(For full clarity: I do this purely out of interest, and fund it myself).

“What would happen if you reran the UK’s “in/out” EU Referendum today, having seen the news headlines immediately following the announcement of the result?”

I am sure many have asked the question. As you will surely know: It is not possible to answer with anything even approaching a rough degree of certainty. However, I have been running opinion polls on the Brexit question for the last year, and have carried out 3 across June (2x before, 1x after announcement of the result). While opinion polls are far from perfect, they do still help in understanding rough trends.

Below are the results of an opinion poll carried out immediately prior to the Brexit vote; followed by a poll carried out over the days immediately after.

First is a straight ‘Before & After’ comparison of the 2 polls. Below that is a little greater detail on each of the polls. Additionally, there is data gathered over the last year on ‘the Brexit’ question, to add further context to the results.

Note of Caution: I am not publishing this to suggest that there should be another referendum. I am simply publishing as I have been tracking this for the last year, and found the massive change in results very interesting.

Before & After

Below is a comparison of an opinion poll carried out over the weekend prior to the ‘Brexit’ vote, compared to the same poll carried out over the week following the announcement of results:

Summary: The poll results show a big shift from the ‘Undecided’ group to the ‘Remain’ group. The size of the ‘Leave’ group changes, but only by 4-5 percentage points.

Each poll here was anonymous, carried out among adults in the UK, via the internet. There were 2,018 responses gathered in the ‘before’ poll, and 1,092 responses in the ‘after’ poll. The data is weighted based on the internet population of the UK; that weighting is based on demographic data from 1,485 respondents in the ‘before’ poll, and 829 respondents in the ‘after’ poll.

More Detail: Result Immediately Prior to the Vote

Here were the results of the poll, carried out over the weekend immediately prior to the referendum, including additional notes:

Notes:

The ‘undecided’ group is quite high. This was a UK-wide poll, with no filter question clarifying whether respondents were registered to vote. Ie, it is representative of adults in the UK, not just voters.

Importantly: ‘Remain’ polled at 37.2%, while ‘Leave’ polled at 32.5%. You can see from the bars that each had a margin of error of +2.6/-2.5 points. Ie, the ‘Remain’ result was predicted between 34.7% and 39.8%; the ‘Leave’ result between 30.0% and 35.1%. So, according to this snapshot poll, ‘Remain’ would probably win, but there was a chance ‘Leave’ would prevail.
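As a rough sanity check, error bars of this size are broadly in line with the standard margin-of-error formula for a sample proportion. The published bars are slightly wider than the unweighted figure below, presumably because weighting reduces the effective sample size; this is a minimal, unweighted sketch, not the exact calculation used for the charts:

```python
import math

def margin_of_error(p, n, z=1.96):
    """95% margin of error for a sample proportion p with n responses (unweighted)."""
    return z * math.sqrt(p * (1 - p) / n)

# 'Remain' polled at 37.2% among 2,018 responses in the 'before' poll
moe = margin_of_error(0.372, 2018)
print(round(moe * 100, 1))  # roughly 2.1 percentage points, unweighted
```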

More Detail: Result Immediately After the Vote Result

Here is the same poll, carried out in the days immediately after the result was announced, and after newspaper headlines and politicians across the world had reacted. The data here was gathered between 24th-29th June 2016. Again, I have included additional notes:

3 notes there:

‘Remain’ has leapt enormously.

‘Leave’ has dropped, but only by 4 or 5 percentage points.

The ‘undecided’ group has dropped hugely – from 30% to 15%.

Based purely on this poll: after the referendum result was announced, and newspaper headlines and politicians had responded to it, those believing the UK should remain a member of the EU outnumbered those believing we should leave by a margin of 2:1.

More Detail: Data Gathered over the Last Year:

Below is a summary of most of the Brexit polls I have carried out over the last year, to add context, and further illustrate the large change in results. Most were carried out among just over 2,000 respondents; one or two were carried out with just over 1,000 respondents. In the early polls, I chose to omit the ‘Undecided’ group. The referendum question changed late last year, from a ‘Yes/No’ question to ‘Remain/Leave’.

Summary:

Among these polls, there is a large swing from ‘Undecided’ to ‘Remain’ following the announcement of the results.

The ‘Leave’ group has dropped, but had polled at similar levels previously.

The ‘Undecided’ group has dropped significantly.

Important Caveats

Above are simply opinion polls. As we all know, opinion polls do not necessarily predict the actual result of elections or referendums, they simply offer a snapshot of opinion among a group of respondents. The respondents above were in the UK, but not necessarily registered to vote.

Men were slightly more likely to answer ‘Remain’ than women; paradoxically men were also more likely to answer ‘Leave’ than women. Men were much more likely to give an answer one way or the other, whereas a higher percentage of women answered ‘Undecided’.

Age is a strong predictor of response. 18-24 year olds were most likely to answer ‘Remain’; 65+ year olds were more likely to answer ‘Leave’.

Weighted Results:

“Should the United Kingdom remain a member of the European Union or leave the European Union?”

Remain: 37.2%

Leave: 32.5%

Undecided: 30.3%

Results excluding ‘Undecided’

Stripping out the ‘Undecided’ portion, the results are as follows:

Remain: 53.4%

Leave: 46.6%
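The two-way figures above are simply the headline numbers renormalised over the decided respondents. A quick sketch of that arithmetic:

```python
# Weighted headline results (percentages)
remain, leave, undecided = 37.2, 32.5, 30.3

# Strip out 'Undecided' and renormalise over the decided respondents
decided = remain + leave
remain_2way = remain / decided * 100
leave_2way = leave / decided * 100

print(round(remain_2way, 1))  # 53.4
print(round(leave_2way, 1))   # 46.6
```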

Split by Age:

The ‘Remain’ group, broken down by age group, is as follows:

The ‘Leave’ group, broken down by age group, is as follows:

Split by Gender:

Appendix:

The poll ran as displayed above, shown to a random sample of users within the UK.

The question was worded as per the official question.

There are lots of caveats with data such as this. It is a ‘snapshot’ rather than a ‘prediction’.

I have asked the specific ballot question, rather than framing it as “If you were to vote today…?”.

I’ve left error bars on the results so that you can see the variability.

Phase 3: A partial caveat, that the site will be ‘archived’ or ‘mothballed’.

Phase 4: A very large caveat, that many of the recipes will move across to ‘BBC Good Food’.

The original story (phases 1 & 2) explained that this was happening to ‘streamline output’ and ‘save £15m’.

That didn’t really make sense, as it costs relatively little to keep pre-existing content on a website and, naturally, people became quite angry about this. The BBC is funded by taxpayers and license fee payers, who asked why the content they’d funded was being removed.

Mothballing?

After lots of anger about the closing of ‘BBC Food’, they put out a clarification (‘phase 3’) above. Here’s how that was communicated by their Press Office on Twitter:

You’ll note the words ‘archived’ and ‘mothballed’ in there. Neither of those sound particularly friendly to the general public. The announcement was designed (presumably) to quell the anger of all those asking why it was being closed. By using internal jargon, it cleared up very little & simply prompted more anger & questions.

Looking at what the BBC normally do when they ‘mothball’ something, this probably would not mean hiding all of the recipes from Google search. Nor does it likely mean the recipes would stick around only a short time before being deleted (there is BBC content many years old still sat there ‘mothballed’). It more likely means that the content would be excluded from the BBC’s own website search function, and that a ‘This page has been archived’ header would be tacked onto the pages:

There are exceptions to this – for example, when closing ‘bbcshop.com’, they removed the entire site & redirected users across to its replacement but, in general, they do the above: Add an ‘archive’ tag, leave it available in Google search, remove it from their own search results, and leave it to sit there accessible by the public if they try hard to find it.

The Final Clarification

Despite their ‘archiving’ announcement, people continued to be angry, and a petition to ‘save’ the site hit more than 100,000 signatures. In response, the BBC clarified still further:

The Possible Motivation

The obvious question is: Why didn’t they put out a straightforward announcement at the beginning? Surely they would have seen that “We’re moving most of the content from one of our sites to another” would prompt less backlash from their users than “We’re shutting down”.

It’s possible that what’s happening is this:

The ‘BBC Food’ section of the overall BBC site competes with ‘BBC Good Food’, which is a completely separate website.

‘BBC Good Food’ is part of BBC Worldwide, the commercial arm of the BBC which is charged with maximising profit in order to help fund the overall BBC. BBC Worldwide is a £1bn+ company, which generated just under £140m profit last year, and passed £226m across to the parent organisation (ie. theoretically keeping the license fee down).

If the BBC close ‘BBC Food’, and migrate the most valuable content across to ‘BBC Good Food’, this increases the likelihood of them making profit.

So why did the BBC simply announce that they were closing ‘BBC Food’, without explaining that they’d be moving content across to ‘BBC Good Food’?

Option B: It’s also possible that they do not quite know what’s going on themselves – they recognise they have 2 areas covering the same thing, and that the most obvious one to close is the taxpayer funded one, but they haven’t fully planned things out.

Option C: Most likely, I think, is that part of the messaging around the whole announcement was that they are closing to avoid competing directly with commercial organisations. If they had announced at the beginning that they’d be shifting taxpayer funded content across to their own commercial organisation, that puts a very negative spin on things, and would likely raise some complaints from commercial competitors.

Summary:

It’s likely BBC Food will stay around for a little while, with an ‘archive’ note at the top of pages.

Likely, longer-term, they will shift most/all of the content (or at least the most valuable, heavily accessed content) across to BBC Good Food.

In shifting that content, they are essentially moving ‘non-profit-making’ content across to an area that’s very happy to make profit (in fact, profit is its primary motive). And, of course, they also remove their own ‘regular’ site from competing against their commercial site.

All of the fuss among the general public could likely have been avoided by communicating this differently, but in doing so they’d likely have stoked a lot of anger among other newspapers, publishers, and other commercial organisations.

Twitter and Google regularly do something that – if you or I did it – would be breaking the law. They reveal the identities of people who courts have decided should not be named. If newspapers and members of the general public name them, there are very serious repercussions. Yet Google & Twitter’s algorithms seem free to do this.

Today they are doing it again, in relation to the killers of a 39-year-old woman.

One of the saddest legal cases you may ever read is that of Angela Wrightson. She was killed by two teenagers – 13 and 14 at the time – who inflicted more than 100 injuries on her. Today a judge sentenced them both to life in prison, serving a minimum of 15 years. The full, shocking, detail is here: https://www.judiciary.gov.uk/wp-content/uploads/2016/04/sentence_F_D.pdf

In considering whether Angela’s killers should remain anonymous, or whether newspapers, news media, and the general public should be allowed to reveal their identities, the judge said many things, including that naming them is likely to pose a great danger to them:

The judge’s summary was this:

I suspect some of us would agree with the above, and some would argue that they are murderers, and have brought it upon themselves. Either way, however strong they are, your feelings & my feelings are irrelevant: a judge has decided that there is a ‘real and present danger’ to these two girls, references suicide attempts, and has therefore ruled that they must remain anonymous.

Yet, clicking on the victim’s name on Twitter, which has trended for much of the day, reveals the 2 girls’ names:

Angela’s name trended across the UK. In other words, if you have used Twitter today, you were a single click away from seeing the names of two people who a judge had deemed should not be named.

And, as I have written about before, Google does a very similar thing, at the foot of search results:

This happens automatically, because both Google & Twitter have algorithms that associate related searches to each other. In other words, the algorithms are breaking the law.

I have written about this several times over the last few years:

Twitter & Google both showed photographs of one of James Bulger’s killers, when searching for broadly related terms: http://barker.co.uk/algolaw

The footballer Adam Johnson’s 15-year-old victim was named: http://barker.co.uk/algorithmlaw

In recent days, a very high profile celebrity was named by Twitter’s algorithm, when searching for the initials he had been given by the UK courts to conceal his identity.

And, again, it is happening today.

It is not right that this should happen. It is dangerous both in cases like this – for the killers, for their friends, and those associated with them – and it is most definitely not right in examples where victims are named.

This week, there was a very high profile story in the media about Adam Johnson, a footballer who was found guilty of a child sex charge. It is illegal in the UK to identify the victim of a sexual offence. There has been much said about individuals naming the victim in this case, but less said about the capability for algorithms to also do so.

Here is a quick look at whether Twitter & Google have managed to improve over the last 3 years, or whether there is still a chance they may inadvertently break the law in this way.

Twitter’s Algorithm

Adam Johnson’s name was one of the top trends in Twitter for much of the day. A click on the trending term took users to this search results page:

As you can see, there are some quite nasty ‘related searches’ displayed for his name. A click on the 2nd related search leads to this result:

I have blurred 2 entries there: A Twitter username whose account has since been deleted, and what appears to be a woman’s name. I do not know whether either of these is the victim, but it’s worrying that Twitter aren’t on top of suppressing these on such high profile trends.

(update: On checking several hours after publishing this post, and after ‘Adam Johnson’ stopped trending, some Twitter results have now been cleaned.)

Google’s Algorithm

Google’s algorithm fares a little better at first. It is only when reaching the foot of search results that their ‘related searches’ appear. Here are the results at the foot of the first search results page:

As you can see, several names are mentioned there. The first 2 names have been mentioned many, many times in the press – Adam Johnson’s former partner, and daughter. ‘adam johnson 15 year old’ is also present, but the results aren’t quite as nasty as the first set of Twitter ‘related’ searches. But, on clicking ‘stacey flounders’, and scrolling to related searches there, the following appears:

Again, as you can see, I have blurred the results partially, where Google lists a person’s name which cannot be explained by other means (2 of the other, unblurred names there have appeared in other sad news stories, and are explainable). The above is simply when clicking the name of Adam Johnson’s partner. When clicking the ‘adam johnson 15 year old’ related search, the following appears:

Again, I have blurred a result there. As you can see, all of the related results are quite nasty here, but the blurred one in particular is very worrying. Ie: As with Twitter’s algorithm, Google is specifically naming someone who may/may not be the victim in the case. Additionally, Dan Bell noted a similar issue appears in Google Image Search. As Google themselves should not know the name of the victim, again, it is worrying that a name is allowed to appear here. (I have attempted to notify them.)

Summary

It is over 3 years since I last wrote about this topic, where both Twitter’s & Google’s algorithms appeared to be displaying results which could break the law.

I do not know in the above examples if the names they display are the victim in this case (frankly I hope not). Either way, it is worrying that both Google & Twitter’s search algorithms seem still to be capable of doing something that would likely be illegal for any person in the UK to do. It is concerning both from the broad point of view of algorithms breaking the law (and causing harm to individuals), and from the narrower point of view of this individual case.

The first “brand new newspaper for 30 years” just launched: ‘The New Day’. The pitch is nice: A small team of 25 staff. Gender neutral, politically neutral, & focused on positivity. It sounds like a very light, non-business version of the FT. It was free for the first day, will be 25p for the first 2 weeks, and then 50p from there onward. Across the course of a year, that’s £182.50 at full price. As context, the UK TV license is currently £145.50.

The paper has been launched by Trinity Mirror who, including The Mirror, own 260 newspaper titles, plus their own Email Service Provider (communicator), and their own Digital Agency (rippleffect). It is 2016, and every other newspaper in the land has been talking about ‘Digital First’ for the last five years. Newspapers have finally caught up on the opportunities around email. And ‘social’ is one of the biggest channels around for newspapers.

So bearing all this in mind, it is slightly strange that The New Day has no website, is not collecting email addresses, and seems slightly slow on social media. Here are some notes on the launch:

1. No website.

I said that The New Day has no website. They also say in interviews that they have no website. But actually this isn’t quite true. They do have a website, but it’s tough to find: It doesn’t rank in Google for their name, and during the crucial launch period they were not buying paid ads on the name. Here’s what you find if you search for it on Google:

Note their sister publications don’t even rank there. Ie: The FT & The Guardian have made more money online from the New Day than they have themselves.

24 hours after their launch, someone eventually saw sense & launched a Google Ad for the brand name, but it’s a shame to have missed the ‘big interest’ launch period:

2. When you do find the website…

The team themselves say that the paper will have no website. Here’s a quote from the BBC news article covering its launch:

“It will not have a website” it says. But of course – there is a site – it’s just extremely sparse:

It’s hosted on the web hosting provider TSOhost. That’s not bad, but it’s not great. It’s the hosting provider I use for ‘non-critical’ websites (for example, this one), whereas I opt for more reliable/faster hosts where it’s critical a site stays available.

The site has obviously been hastily put together:

The background is a looping Youtube video, uploaded on the 26th February with the title “Sun Illfracombe” (sic);

The page’s code contains a hidden earlier headline, “Welcome to the New Day, the first new newspaper for 30 years.”, which has been replaced on-screen with the slightly more specific “Welcome to The New Day, the first brand new newspaper for 30 years.”

The title tag is the slightly clumsily worded “A brand new UK National Paper for women and men. Gives context, background and points of view rather than just reporting of what people probably have already heard”

You could forgive them the hurry, and for not having a fully finished website just yet. The domain names were only registered a few weeks ago and, though they say they’ve been working on it since last summer, most of the staff only joined in January or February this year according to LinkedIn. But they aren’t saying “we don’t have a website yet”, they’re saying this is the strategy: “we won’t have a website”.

This makes little sense. An app may make sense. A site simply collating their tweets may even make sense. A big box saying ‘Sign Up For Email Updates’ may make sense. If their content was utterly, radically different to most content from other online sources, it may make sense. It might even make sense to have a message saying “You won’t find our content online: We’re an offline-only publication. Buy us today”, alongside the latest front cover image. But a non-site that they’ve deliberately created simply as a holding page does not make sense.

A commenter on Facebook summed up the puzzling nature of this here:

3. Twitter

Here’s the tweet that launched the first ever cover of the newspaper into the world:

268 retweets. Not bad, right? Only… that’s not the main Twitter account that tweeted it: it’s a member of staff (actually the Exec Editor). It’s not a big deal, but doing this reduces the brand’s follower growth, in favour of growing followers (to a much lesser extent) for the member of staff whose tweet gets picked up first. I assumed they’d fix this on day two, but exactly the same happened again.

4. Facebook & other Social Media

The Facebook account is a little better. The purpose of it seems to be to gather feedback from readers. That makes some sense: Without comments, it’s tough to understand what is working & what is not (the Daily Mail, for example, is on track to hit 2 billion up/down votes on comments this year, which gives them a huge amount of insight into which articles drive engagement, and which readers are engaged). They also seem to be actively reading & responding to some comments, albeit some replies may be a little over-earnest:

Outside of Facebook & Twitter, there is no social footprint. Because there is no website, there are no ‘share on whatsapp’ or ‘share by email’ buttons. Unlike most newspapers, where every view of an article is also an opportunity for it to be shared, to bring extra traffic, and to expand audience, the New Day’s content exists in static paper form only. In an age where “How to boil an egg” by Heston Blumenthal can get more than 35,000 shares for The Guardian, that is quite a disadvantage.

5. Email

As touched on, ’email’ has finally caught up as one of the focus channels for many newspapers. Email allows a paper to form an ‘ongoing dialogue’ (or really more of a monologue) with readers, without having to rely on them remembering to visit the site, or open the app.

Generally, many newspaper sites interact with audience through the following layers, from ‘shallowest’ users through to ‘deepest’:

Social media reader.

Social follower.

Site user.

Commenter/participant.

Email subscriber/site member.

Paying customer.

In the case of the New Day, the first couple of layers exist (social media reader & social follower), and the last layer exists (newspaper purchasers), but the layers in the middle do not. Without the expense & time of building a site, ‘email subscriber’ would be the easiest way for them to bridge the odd gap between their social accounts & the paper, and to communicate with readers without simply waiting for them to turn up at the newsagent & pick up a copy. As with the ‘no website’ status, this is quite strange for a group that owns an Email Service Provider.

6. Data.

Alongside the above, having some sort of email presence would also allow the New Day team to mitigate one of the biggest downsides of their ‘no website’ strategy: They have very, very little opportunity to gather data on their audience.

In a strange way I suppose that may help their ad targets: There’s far less accountability to advertisers than online.

But from a ‘success’ point of view, whereas all of their competitors can see which journalists are most read, which articles are most popular, which are most shared, who their regular readers are, which ads are engaging, and the demographically categorised popularity of every aspect of their journalism, the New Day is reliant on guesswork, intuition, and focus groups. This is not the end of the world: It’s how many papers operated 20 years ago. Perhaps they can accurately guess the exact content that will take them to the 200,000 daily readership target they’ve set, but it would definitely be easier if they had some sort of data to judge what is/is not working.

Summary

The New Day is an interesting project. A tiny team compared to many newspapers, a different approach in terms of gender/politics to many. And the odd idea of going ‘print first’ in a world where almost every other newspaper went ‘digital first’ many years ago.

The nearest equivalent in the UK in recent years was the “i” newspaper: a cut-down version of The Independent which did not have its own website, though it benefited from the brand recognition of its parent, from the Independent’s own website, and from the i100 website that almost shared its name.

Perhaps The New Day seeks to own the gap left by the soon-to-be-discontinued Independent newspaper. They have the similar positives of an experienced staff & a wealthy owner (Trinity Mirror), but the equally enormous, in my opinion, downsides of having zero online presence, zero ability for readers to share popular content among friends & family, and zero data to understand where they are performing well, and where they could improve.