The debate following Labour Shadow Chancellor Ed Balls's commitment to restore the 50p tax rate for top earners has been emblematic of how politics and electoral campaigning are played out in the media.

First, a party leader comes out with a policy announcement, backed up by figures and often a new study assessing its financial impact or implications.

Not long after, other party leaders, using a different set of figures or perhaps taking a very different interpretation of the available data, attempt to show why their counterpart is wrong, demonstrating that in fact it is their own plan that has the taxpayers' best interests at heart.

This will often continue for months on end, with each side trying to poke holes in the other's stats, while both sets of figures are highly unlikely to show the full picture, cherry-picked to within an inch of their statistical lives.

When it comes to the 50p tax rate and its potential benefits or pitfalls, those backing each side of the argument have cited two rather different reports, released at different times.

Ed Balls/Labour:

“Latest figures from the HMRC show that people earning over £150,000 paid almost £10bn more in tax in the three years when the 50p top rate of tax was in place than was estimated at the time when the government did its assessment back in 2012.”

Chancellor George Osborne:

“The direct cost (of reducing the top rate from 50p to 45p) is only 100 million pounds a year. HMRC calculate the loss of other tax revenues may cancel that out. It raises at most a fraction of what we were told and may raise nothing at all.”

“If only life were so simple… and all the taxable income in the country was a delicious gigantic cake and all the Chancellor had to do was decide how big a slice to take. However, taxable income is a moveable cake, it’s a cake that shrinks from the taxman’s cake slice and grows again when the taxman is out of the room.”

Essentially, the main point here is that taxable income changes in response to tax rates. This is especially true in the case of the 50p tax rate, as both its introduction in 2010 by Alistair Darling and George Osborne's decision to cut it to 45p were pre-announced.

This allowed for major behavioural responses as people adjusted how they paid their taxes and bonuses and collected dividends based on the rate at the time, either by taking income early in anticipation of the impending tax hike, or forestalling and waiting to collect it later once the rate was dropped.

It was in place for such a brief time that it tells us very little about “how much it might have raised in the long run when everything had settled down”, according to the analysis by More or Less.

Data journalism site Ampp3d have done a good job in explaining the debate here, raising the issue of other factors at play, not just the revenue raised by the 50p tax rate.

Given how tough it is to draw conclusions, especially considering the uncertainty surrounding income tax revenue as a whole (according to More or Less, studies by the Institute for Fiscal Studies suggest that the revenue-maximising top rate of tax could be anywhere from as low as 30p to as high as 75p), a definitive answer is unlikely to emerge anytime soon.

In the next year we will undoubtedly have these stats thrown at us time and time again, used to back up the respective arguments.

Therefore, it's important to acknowledge that these figures are most probably being used to reinforce an ideological point of view ahead of the general election, rather than being positions reached through clear and thorough data analysis.

If you haven't already, the full version of More or Less is definitely worth listening to, available here.

What do Edward Snowden, Corn Exchanges, erotica for women and fish and chips all have in common?

They are just some of the topics that feature in the first issue of Contributoria, the new crowdfunded collaborative journalism platform launched last month.

Backed by the Guardian Media Group (GMG), Contributoria allows its community of journalists and interested readers to decide what articles they would like to see written and support each pitch accordingly – with the community involved in all stages of an article’s development.

The underlying aim is to enable the creation of transparent, high-quality collaborative journalism that might otherwise not have been produced.

I spoke to editor Sarah Hartley about how Contributoria is making collaborative, transparent journalism work.

*Interhacktives is the website run by the students on the Interactive Journalism MA at City University London.

Essentially, this post was born out of a growing desperation to keep the site updated during a slow couple of weeks, when work experience and the badly needed Christmas holidays put writing for Interhacktives on the back burner for a while.

At the same time, this quick analysis of the type of articles that do well for the website offered valuable insight for refining our content strategy and provided added focus for the year ahead.

After Adam Tinworth, our lecturer for the Social Media and Community Engagement module, pointed to what we could learn from analysing this type of data, I thought it was worth delving into the analytics a little further to find some more meaning in the metrics.

The most-read post published on Interhacktives – by some distance – was an article on the top ten tools for data journalism. Not only did it receive more pageviews, but equally as important was the fact that readers spent around two and a half times longer on that post (7 minutes and 25 seconds) than the average for all pages on the site (2 minutes and 56 seconds).
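The "two and a half times" figure checks out against the two durations quoted, as a quick bit of arithmetic shows:

```python
# Check the "two and a half times longer" claim from the analytics figures.
post_seconds = 7 * 60 + 25   # 7 min 25 s average time on the data-tools post
site_seconds = 2 * 60 + 56   # 2 min 56 s average time across all pages

ratio = post_seconds / site_seconds
print(round(ratio, 2))  # roughly 2.5
```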

This is a great example of a type of article that can live through time and keep getting pageviews months after initial publication.

As data-driven journalism becomes more popular in the industry, and as upcoming journalists join media professionals in trying to stay up to date with the skillset needed for basic data analysis and visualisation, the article's prominence is no surprise.

Google Analytics showing pageviews over time for the Top 10 Data Journalism Tools

A look at its popularity over the months shows spikes at different times; interestingly, new tweets about it from other sources continued to appear well after its initial publication.

Top Ten Data Journalism Tools

Another popular story, the third most viewed of 2013, was in fact a post written in April 2012 about making a website compliant with EU cookie law. Looking at the source of its traffic over the 13-month period, more than two out of three views came from Google, as this was a topic that bloggers and others were presumably still searching for. It is in this context that the article's enduring popularity makes sense.

Traffic source data for the third most popular post of 2013 on interhacktives

Two more recent how-to guides, on making a choropleth map and on using Raw to create advanced data visualisations, have regularly generated traffic over the last couple of months, often featuring in the trending content widget on the homepage.

Most interviews, for example, may receive attention at the time of publication, especially via social media, but are unlikely to keep generating traffic over time.

Trend over time graph for an interview article

In fact, analysis of the top 10 articles by pageviews from January 1st 2013 to February 1st 2014 shows that the majority are timeless, durable pieces of content, useful to readers beyond their publication date. They are what you would describe as 'stock content'.

“…stock content is durable. Examples of stock content include podcasts, videos, guides and research work.

Flow content is the stream of daily and sub-daily updates. For instance, news articles, surveys, live blogs and social media updates.

While flow content helps to keep newspapers or brands in the public eye, stock content drives steady and continuous traffic to websites over a long period of time. This is why it is really important not to remove good-quality archived content from a website. Good quality archived content can still drive views in if people are researching the topic, for instance.”

Articles explaining the difference between Sunni and Shia Muslims are a great example of stock content that will likely drive views long after the publication date.

Difference between sunni and shia google search

As the question undoubtedly will crop up regularly across time, any explainers on the issue will regularly attract traffic. The first two search results are from the BBC from 2009 and 2011, while the Economist’s May 2013 guide comes in third. Both websites will certainly get hits on their site regularly from this one-off explainer based on people’s searches.

In the media industry, content such as explainers, how-to guides and reviews of apps and tools is perhaps often perceived to be of secondary importance, there to accompany a major development or news piece. That may be so, but given the nature of the internet, such pieces can live much longer online than a news article and are a core part of the journalistic task of informing the public.

At Interhacktives we have perhaps been guilty of not focusing enough on this, and on the potential the website offers to create long-lasting stock content based on the skills we are regularly taught and experiment with as part of the course.

Over the next few months, that is something we should perhaps turn our attention to a little more and leave a lasting legacy on the interhacktives website, hopefully ensuring traffic for the site many months after our involvement with it ends.

I think people like the massive set of subs and the intelligent conversation. For that you don't have to be picky about formatting. It may not be the prettiest, but it works fine for many people.

If you want pretty sidebars go to Facebook, you’ll just give up the IQ level and actual answers to questions.

How Reddit looks on a browser

Honestly, I do get it. But however much I tried to force myself to use Reddit and become part of what is undoubtedly a fascinating community, we just didn't click. Even after the major controversy and fierce criticism of Reddit in the aftermath of the Boston bombings, it remains a serious social news hub: at times an excellent platform for debate, and a treasure trove of interesting and rather random material that is very useful from the perspective of a journalist and someone interested in community engagement.

Despite all that, I just could not get past its ugliness. Its browser version is clunky, archaic and uninspiring.

The turning point in my relationship with Reddit was the moment I started experimenting with iPad apps for Reddit.

There is no official client and the third party apps available are far from perfect. However, they are pretty.

Reddit for iPad

iAlien for Reddit, the free app I've been using, is available on both iPhone and iPad. While, according to reviews from more experienced Redditors, the comment editing and deleting functions within the app are seriously lacking, its smooth, slick and very attractive user interface has finally given me a window into the weird and wonderful world of Reddit that I want to browse through for hours on end.

I am slowly understanding this fascination with “the front page of the internet”, albeit in a way that most traditional redditors would consider sacrilegious.

At the moment, I'm everything the true Reddit fan despises: a lurker. However, this is just the start. As I get hooked on Reddit, I'm sure I will begin contributing as well; it's just a matter of time.

Diversifying user experience on other platforms

Beyond showing how fickle I am, my Reddit experience indicates the power that mobile and tablet apps have to diversify a social network's user experience, attracting different audiences with different ideas about and needs for the platform.

Offering an identical experience on different platforms may work for some sites, applications or social networks, but reaching out to a completely different type of users may also be worth looking into.

Another example of this is Flow, an Instagram app for the iPad. Crafted specifically for the iPad's screen, the application describes itself as "the missing iPad app for Instagram" and was made by digital design studio Codegent because they "couldn't wait any longer for an iPad compatible Instagram app so we built our own".

Flow app for iPad

Thinking beyond the uses and platforms that networks and apps were originally made for could be, at the very least, an opportunity worth looking into, and at best a huge growth area for reaching a different audience.

Last weekend, the Premier League released the annual spending on agents per club, following a commitment to make this data public each year.

The data shows the total amount each club paid to authorised agents during the period from October 1, 2012, to September 30, 2013. It wasn’t hard to find the data for the previous year and so I decided to put some basic data visualisation skills we learnt with John Burn-Murdoch to the test.

It was a really straightforward dataset (see table below), and therefore quite an easy write-up for sport websites, using hooks like the fact that the total figure of £96 million represented a record, or highlighting which club paid the most, which paid the least, and so on.

One of the easiest things to do was to compare the two years, as The Times did (£), making the point that Chelsea and Newcastle spent double this year what they had the previous one.

Given that in the last few weeks we had learnt about a couple of great data visualisation tools, namely Datawrapper and Tableau, which allow you to quickly and with minimal fuss visualise your data, I thought it was a great chance to try my hand at using them in ‘real time’.

So for the initial year on year comparison I used Datawrapper. I included the values for the clubs in the Premier League for both seasons, so did not include clubs relegated in 2012 or promoted from the Championship last season.

It was really easy, although I encountered a slight problem: I had initially uploaded the values from the CSV formatted as currency, which Datawrapper had some trouble with. I amended that, transposed the table to get each team's two seasons side by side and, simples, here's the result.
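That clean-up step can also be done before uploading. As a sketch in Python with the standard library (the club figures below are made up for illustration, not the published agent-fee data): strip the currency formatting, then transpose rows and columns.

```python
import csv
import io

# Illustrative CSV in the shape Datawrapper choked on: values stored as
# currency-formatted strings rather than plain numbers.
raw = """Club,2011-12,2012-13
Chelsea,"£4,436,325","£9,258,442"
Arsenal,"£5,625,051","£3,856,852"
"""

def parse_gbp(value: str) -> int:
    """Turn a string like '£4,436,325' into the integer 4436325."""
    return int(value.replace("\u00a3", "").replace(",", ""))

rows = list(csv.reader(io.StringIO(raw)))
header, body = rows[0], rows[1:]
cleaned = [[name] + [parse_gbp(v) for v in vals] for name, *vals in body]

# Transpose so each season becomes a row and each club a column,
# putting a club's two bars side by side in the chart.
table = [header] + cleaned
transposed = [list(column) for column in zip(*table)]
```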

If you want to take a look at the interactive version, click on the image.

What is noticeable is that it's immediately easier to see the year-on-year difference per club when the data is presented this way rather than in list or text form.

You can quickly identify that only two clubs – Arsenal and West Ham – paid less to agents this year than they had the last, while the difference between the amount Chelsea, Man City, Spurs, Liverpool and the rest spent this season is clear.

Design-wise, as there are more than 15 clubs in the chart, each with two values, it does look quite cramped, but it still does what it's supposed to, and in just a few minutes. Maybe rounding the figures to millions with a couple of decimal places would have worked better, though.

Not wanting to stop there while I was in the swing of things and quite eager to practise a little more with Tableau, I went a bit further.

Using just the latest data for 2012-2013, I tried to show how much each club spent in terms of the whole, to give a slightly different visual focus from the initial chart that looked at the change year on year.

Given how much of a no-no it would be to represent this with a pie chart, the best way to do this was probably a tree map. Again, this is a pretty simple dataset, but Tableau really does make it look like you have spent much more time and effort than you actually have.
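The idea behind a tree map is simply that each club's rectangle is sized by its share of the total. A minimal sketch of that calculation, using made-up figures rather than the published data:

```python
# Each club's rectangle in a tree map is proportional to its share of the
# total spending. Figures here are illustrative only.
spending = {
    "Chelsea": 9_258_442,
    "Man City": 7_836_000,
    "Liverpool": 6_705_000,
    "Arsenal": 3_856_852,
}

total = sum(spending.values())
shares = {club: amount / total for club, amount in spending.items()}

# Print clubs largest-first, as a tree map would lay them out.
for club, share in sorted(shares.items(), key=lambda kv: -kv[1]):
    print(f"{club:<10} {share:6.1%}")
```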

Click on the image to look at the interactive version.

The tree map works well to show the difference between the clubs in terms of the total spent and makes this clearer than a bar or column chart. I think the colour gradient for the different parts of the tree map comparing value ranges is a great feature as well.

Making these two visualisations, simple as they are, only took around 15 minutes in total. The great thing, however, is that they look like they took much more time and effort to create.

As access to publishing information has spread from a select few to, well, anyone with a smartphone, the way we discover and consume news has changed.

As the amount of content shared online becomes ever-more unmanageable to keep track of, journalists have not always found it easy to ensure that their audience receives reliable and accurate information.

Last week I spoke to Malachy Browne, news editor of Dublin-based social media news agency Storyful. I asked him what the principles behind discovering and verifying news online were and how Storyful go about separating “news from noise”, delivering verified and valid news to their clients and the wider world.

You can read the interview here on interhacktives.com, the website run by the 2013-2014 cohort of Interactive Journalism MA students at City University London.