By coincidence, an EPCC colleague had recently asked me the same question. So, I hit Google and Wikipedia, tracked down some candidates and decided to try the free open source AutoHotKey toolkit. In this blog post, I describe my experiences with this "scriptable desktop automation" tool.

Freeware and open source GUI test tools

Wikipedia lists a number of GUI test tools, and a Google search revealed a couple of others. However, only a few of these are free or open source:

This is the first in a series of articles by the Institute's Fellows, each covering an area of interest that relates directly both to their own work and the wider issue of software's role in research.

If the Internet went down, all historical software would cease to function, except for Microsoft Word. For an academic historian, a grant to build a high-profile web-based project is likely the biggest pot of money they will ever receive during their career. That is, if they ever receive it: few historians will even apply. Instead, most are content to work in much the same fashion as they did before the Internet came along. They go to the archives, read books and manuscripts, and write up their findings. This is their tried and tested mode of research, with costs limited to a few new books now and again, a train ticket or two to get to the archives, and refreshments while they're there.

Historical research is still largely a solo intellectual pursuit rather than a technical, team-based one. There is nothing wrong with that. Not all discovery needs to be expensive, and as a taxpayer, I find it refreshing that there are still corners of the academic world in which spending more money isn't the easiest route to career progression. For the ambitious few who rise to the challenge and put in a proposal, meanwhile, the website that results, and in some cases the hundreds of thousands of pounds of funding that come with it, has made project leaders celebrities within the field. This celebrity brings with it all the accolades and resentment one might expect from fame.

In 2010, Jeremy Fox and Owen Petchey proposed an innovative idea – fix peer review by introducing a peer-review currency, which they called PubCreds[1]. Fox and Petchey noted that peer review suffers from a tragedy of the commons, in which "individuals have every incentive to exploit the reviewer commons by submitting manuscripts, but little or no incentive to contribute reviews. The result is a system increasingly dominated by cheats (individuals who submit papers without doing proportionate reviewing), with increasingly random and potentially biased results as more and more manuscripts are rejected without external review." Their solution was to privatise the commons by introducing a currency that is earned by reviewing and spent on getting reviewed.
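The earn-and-spend mechanism can be pictured as a simple ledger. The sketch below is purely illustrative: the class name, method names and exchange rates (one credit per review, two credits per submission) are assumptions for the example, not details of the actual PubCreds proposal.

```python
class PubCredBank:
    """Toy ledger for a peer-review currency: credits are earned by
    reviewing and spent on submitting manuscripts for review."""

    REVIEW_EARNS = 1      # credits earned per completed review (assumed rate)
    SUBMISSION_COSTS = 2  # credits spent per submission (assumed rate)

    def __init__(self):
        self.balances = {}

    def credit_review(self, reviewer):
        """Reward a completed review."""
        self.balances[reviewer] = self.balances.get(reviewer, 0) + self.REVIEW_EARNS

    def submit_manuscript(self, author):
        """Charge a submission; refuse if the author has not reviewed enough.

        This is the point where free-riding on the reviewer commons stops:
        submissions must be funded by proportionate reviewing."""
        balance = self.balances.get(author, 0)
        if balance < self.SUBMISSION_COSTS:
            return False
        self.balances[author] = balance - self.SUBMISSION_COSTS
        return True


bank = PubCredBank()
bank.credit_review("alice")
bank.credit_review("alice")
print(bank.submit_manuscript("alice"))  # True: two reviews fund one submission
print(bank.submit_manuscript("bob"))    # False: no reviews, no submission
```

At these assumed rates, every submission must be backed by two completed reviews, which directly prices in the reviewing effort each manuscript consumes.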

Symptoms of the tragedy of the commons in peer review

One of the main symptoms is a slowing of the communication of science. Fox and Petchey describe others, including an increasing tendency for journals to peer review only a small fraction of the papers they receive, resulting in greater randomness in what eventually gets published. Another symptom is editors inviting many more reviewers than necessary (anecdotally, around five times as many) in order to secure the minimum number required.

The Wellcome Trust Centre for Human Genetics at the University of Oxford hosted its first Software Carpentry workshop this January. So how did the workshop go? I'm a bit biased, so to get a better idea I sent the participants a questionnaire similar to the one I used for the Software Carpentry workshop I organised previously.

Through a short set of questions, the Lindat license selector can guide you to a license that meets your software and data sharing requirements while satisfying any existing constraints on the software or data you have exploited.

By Kristian Strutt, Experimental Officer at the University of Southampton, and Dean Goodman, Geophysicist at the Geophysical Archaeometry Laboratory, UC Santa Barbara.

This article is part of our series, A Day in the Software Life, in which we ask researchers from all disciplines to discuss the tools that make their research possible.

Archaeological practice in the field seems so down to earth. The daily routine of excavation, recording of stratigraphy, finds and contexts, and understanding the different formation processes – it is what we are, and what we do.

However, it is easy to overlook the scientific aspects of our work and the way they shape how archaeology understands past human activity.

Now, ARCHER, the UK National Supercomputing Service, is to roll out an ARCHER driving test. Despite their similar names, these tests differ in nature, intent, scale and reward. In this post we compare and contrast these two supercomputer tests.

No one knows how much software is used in research. Look around any lab and you'll see software – both standard and bespoke – in use across all disciplines and by researchers of all seniorities. Software is clearly fundamental to research, but we can't prove this without evidence. This lack of evidence is why we ran a survey of researchers at 15 Russell Group universities to find out about their software use and backgrounds.

Headline figures

92% of academics use research software

69% say that their research would not be practical without it

56% develop their own software (worryingly, 21% of those have no training in software development)

70% of male researchers develop their own software, while only 30% of female researchers do

Data

The data collected during this survey is available for download from Zenodo ("S.J. Hettrick et al, UK Research Software Survey 2014", DOI:10.5281/zenodo.14809). It is licensed under a Creative Commons Attribution licence (attribution to The University of Edinburgh on behalf of the Software Sustainability Institute).