Being fairly new to this editing business, and with the workshop being held locally, I took the opportunity to go, and found it all really useful - not only from my perspective as someone in charge of a journal, but also from the data management and publication point of view. A lot of the issues raised during the workshop, like attribution, authorship and plagiarism, apply just as readily to datasets as they do to journal articles.

The workshop was a mixture of talks and discussion sessions. We were given examples of actual cases that had been reported to COPE, and we had to discuss and decide on the best course of action. We were then told how the COPE members had responded in those particular cases - reassuringly, we were pretty much in agreement every time!

It was a pretty standard workshop format - lots of talks - but there was a wide variety of speakers from a wide spread of backgrounds, which really helped make people think about the issues involved in data visualisation. I particularly enjoyed the interactive demonstrations from the speakers from the BBC and the Financial Times - both said things that seem really obvious in retrospect, but are worth remembering when doing your own data visualisations (keep it simple and self-contained, and make sure it tells a story).

For those who are interested, I've copied my (slightly edited) notes from the workshop below. Hopefully they'll make sense!

Richard O’Beirne (Digital Strategy Group, Oxford University Press)

What is a figure? A scientific result converted into a collection of pixels

Data FAIRport initiative - aims to join and support existing communities working to make valuable scientific data 'FAIR': Findable, Accessible, Interoperable and Reusable

Hard to make visualisations scale!

Open data and APIs make it easier to understand the context behind the stories

Keep interaction simple - remember different devices are used to access content

Rowan Wilson (Research Technology Specialist, University of Oxford)

Creating cross walks for common types of research data to get it into Blender

People aren't that used to navigating around 3-dimensional data - hence an example imported into Minecraft, as a sizeable proportion of the population is comfortable with navigating around that environment

Issues to consider: confidentiality and data protection, data ownership, copyright and database rights. Open licences are good for data, but consider waiving a hard requirement for attribution, as cumbersome attribution lists will put people off reusing the data

MeshLab - a tool to convert scientific data into a format Blender can read

Felix Krawatzek (Department of Politics and International Relations, University of Oxford)

Visualising 150 years of correspondence between the US and Germany

Letters (handwritten/typed) need significant resources and time to process before they can be used

Software produced to systematically correct OCR mistakes
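The notes don't describe how that software works, but a systematic OCR correction pass often amounts to a table of known misreadings applied across the whole corpus. A minimal sketch of that idea - the substitution table and function name here are hypothetical, not from the project:

```python
import re

# Hypothetical table of common OCR confusions in typed correspondence:
# 'rn' misread as 'm', 'h' misread as 'b', 'l' misread as '1', etc.
SUBSTITUTIONS = {
    r"\bGerrnany\b": "Germany",
    r"\btbe\b": "the",
    r"\b1etter\b": "letter",
}

def correct_ocr(text: str) -> str:
    """Apply each known-misreading pattern across the whole text."""
    for pattern, replacement in SUBSTITUTIONS.items():
        text = re.sub(pattern, replacement, text)
    return text

print(correct_ocr("tbe 1etter from Gerrnany"))  # the letter from Germany
```

The advantage of doing this systematically, rather than hand-correcting each letter, is that a fix discovered in one document is applied consistently across all 150 years of material.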

Visualise the temporal dynamics of the letters

Visualisation of political attitudes

Can correlate geographic data from the corpus with census data
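As a rough illustration of that correlation step (the place names, letter counts and census figures below are invented, not from the project), the join-and-correlate pattern is short in pandas:

```python
import pandas as pd

# Invented example data: letters per county extracted from the corpus,
# and German-born population per county from a census table.
letters = pd.DataFrame({
    "county": ["Cook", "Erie", "Dane"],
    "letters_sent": [120, 45, 30],
})
census = pd.DataFrame({
    "county": ["Cook", "Erie", "Dane"],
    "german_born_pop": [470_000, 52_000, 21_000],
})

# Join on the shared geographic unit, then correlate the two columns.
merged = letters.merge(census, on="county")
correlation = merged["letters_sent"].corr(merged["german_born_pop"])
print(correlation)
```

The real work, of course, is upstream: extracting reliable place names from the OCR'd letters and matching them to the geographic units the census uses.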

Always questions about availability of time or resources

Crowdsourcing projects that tend to work are those that appeal to people's sense of wonder or their human interest. You get more richly annotated data if you can harness the power of crowds.

Zooniverse created a byline to give the public credit for their work in Zooniverse projects

Andrea Rota (Technical Lead and Data Scientist, Pattrn)

Origin of the platform: the Gaza platform - documenting atrocities of war, humanitarian and environmental crises

Paper on a survey about reproducibility: "More than 70% of researchers have tried and failed to reproduce another scientist's experiments, and more than half have failed to reproduce their own experiments."

The problem with doing closing talks is that so much of what I wanted to say had pretty much already been said by someone during the course of the day - sometimes even by me during the breakout sessions! Still, it was a really interesting workshop, with excellent discussion (despite the pall that Brexit cast over the coffee and lunchtime conversation - but that's a topic for another time).

There were three breakout sessions, and the timings meant that you could go to two of them.

I started with Group 3: Making possible and encouraging the reuse of data: incentives needed. This is my day job - taking data in from researchers, making it understandable and reusable, and figuring out ways to give them credit and rewards for doing so. And my group has been doing this for more than two decades, so I'm afraid I might have gone off on a bit of a rant. Regardless, we covered a lot, though mainly the old chestnuts: the promotion and tenure system being fixated on publications as the main academic output; the need for standards (especially for metadata - acknowledging just how difficult it would be to come up with a universal metadata standard applicable to all research data); and the fact that repositories can control the technology (to a certain extent), but culture change still needs to happen. There were some positives on the culture change front though - journals are now pushing DOIs for data, and this has had an impact on people coming to us to get DOIs.

The next breakout group I went to was Group 1: Research Data services planning, implementation and governance. What surprised me in this session (maybe it shouldn't have) was just how far advanced the UK is when it comes to research data management policies and the like, in comparison to other countries. This did mean that my UK colleagues and I got quizzed a fair bit about our experiences, which made sense. I had a slightly different perspective from most of the other attendees - being a discipline-specific repository means that we can pick and choose what data we take in, unlike institutional repositories, which have to be more general. On being asked about what other services we provide, I did manage to name-drop JASMIN, in the context of a UK infrastructure for data analysis and storage.

I think the key driver in the UK for getting research data management policies working was the Research Councils - their policies, but also their willingness to stump up the cash to fund the work. A big push for institutional repositories came from EPSRC putting the onus on research institutions to manage EPSRC-funded research data. But the increasing importance of data, and people's increased interest in it, comes from a wide range of drivers - funders, policies, journals, repositories, etc.

I understand that the talks and notes from the breakouts will be put up on the workshop website, but they weren't up at the time of writing. You can find the slides from my talk here.