How?

Taking advantage of these things through GitHub is pretty easy. In this post I’m going to give a brief overview of how to set up a GitHub data repository.

Note: I’ll assume that you have already set up your GitHub account. If you haven’t done this, see the instructions here (for set up in the command line) or here (for the Mac GUI program) or here (for the Windows GUI program).

Store Data in the Cloud

Data basically consists of two parts: the data itself and description files that explain what the data means and how it was obtained. Both of these things can be simple text files, easily hosted on GitHub:

1. Create a new repository on GitHub by clicking on the New Repository button on your GitHub home page. A repository is just a collection of files.

2. Have GitHub create a README.md file.

3. Clone your repository to your computer.

If you are using the GitHub GUI, simply click the Clone to Mac or Clone to Windows button (depending on your operating system) on your repository's GitHub main page.

If you are using command line git:

First copy the repository’s URL. This is located on the repository’s GitHub home page near the top (it is slightly different from the page URL).

In the command line just use the git clone [URL] command. To clone the example data repository I use for this post type:
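(The URL below is a placeholder standing in for the clone URL copied from the repository's GitHub page; substitute your own.)

```
git clone https://github.com/username/example-data.git
```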

Of course you can choose which directory on your computer to put the repository in with the cd command before running git clone.

4. Fill the repository with your data and description file.

Use the README.md file as the place to describe your data, e.g. where you got it from, what project you used it for, and any notes. This file will be the first file people see when they visit your repository.

Create a Data folder in the repository and save your data in it using some text format. I prefer .csv. You can upload other types of files to GitHub, but if you save your data in a text-based format others can directly suggest changes and you can more easily track those changes.
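For example, a minimal R sketch for writing a data frame to the Data folder (the data frame mydata and the file name are placeholders):

```r
# Save a data frame as a plain-text .csv in the repository's Data folder
write.csv(mydata, file = "Data/mydata.csv", row.names = FALSE)
```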

In command line git, first change your directory to the data repository with cd. Then add your changes with git add . (the period stages all changed files in the directory). This adds your changed files to the "staging area", from which you can commit them. If you want to see which files were changed, type git status -s.
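Put together, a typical command line session for updating the repository might look something like this (the repository path and commit message are placeholders):

```
cd ~/example-data             # move into your local copy of the repository
git status -s                 # optionally check which files changed
git add .                     # stage all changed files
git commit -m "Update data"   # commit the staged changes
git push origin master        # send the commit to GitHub
```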

5. Create a cover site with GitHub Pages. This gives the data repository a nice public face. To create the page:

Click the Admin button next to your repository’s name on its GitHub main page.

Under "GitHub Pages" click Automatic Page Generator. Then choose the layout you like, add a tracking ID if you want, and publish the page.

Track Changes

GitHub will now track every change you make to all files in the data repository each time you commit the changes. The GitHub website and GUI program have a nice interface for seeing these changes.
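If you prefer the command line, Git itself can show the same history. A minimal sketch, with a placeholder file path:

```
git log --oneline Data/mydata.csv    # list the commits that changed the file
git diff HEAD~1 -- Data/mydata.csv   # compare the file to its previous commit
```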

Replication Website

Once you set up the page described in Step 5, other researchers can easily download the whole data repository as either a .tar.gz or .zip file. They can also go through your main page to the GitHub repository.
Specific data files can be directly downloaded into R with the RCurl package (and textConnection from the base package). To download my example data into R just type:
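(A sketch of the standard RCurl pattern; the raw file URL is a placeholder for the example data's raw GitHub link.)

```r
library(RCurl)

# Placeholder URL for the raw GitHub version of the .csv file
url <- "https://raw.github.com/username/example-data/master/Data/mydata.csv"

# Download the file's contents as a character string
raw_csv <- getURL(url)

# Read the string into a data frame via a text connection
example_data <- read.csv(textConnection(raw_csv))
```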

You can use this to load GitHub-based data directly into your Sweave or knitr file for replication.
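For instance, in an .Rnw file a chunk along these lines would pull the data each time the document is compiled (the chunk name and URL are placeholders):

```
<<load-data, echo=FALSE>>=
library(RCurl)
url <- "https://raw.github.com/username/example-data/master/Data/mydata.csv"
example_data <- read.csv(textConnection(getURL(url)))
@
```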

Improve Your Data Through Community Error Checking

GitHub has really made open source coding projects much easier. Anybody can view a project’s entire code and suggest improvements. This is done with a pull request. If the owner of the project’s repository likes the changes they can accept the request.
Researchers can use this same function to suggest changes to a data set. If other researchers notice an error in a data set they can suggest a change with a pull request. The owner of the data set can then decide whether or not to accept the change.
Hosting data on GitHub and using pull requests allows data to benefit from the kind of community-led error checking that has been common on wikis and open source coding projects for a while.

Comments

Have you considered how you might include the capacity for citation of this data, given you've created the data file and are interested in its reuse, and deserve appropriate credit for doing so? Perhaps you could consider the possibility of a persistent identifier such as a DOI, a Handle, or even one of the underlying identifiers within Git?

"Data basically consists of two parts, the data and description files that explain what the data means and how we obtained it. Both of these things can be simple text files" :-) Lucky you, working in a field where data can be text. I use github for code, but must upload data in a binary format (netCDF) because that's what's used (in atmospheric modeling) and it's already too big in the binary format--text would just blowout my repo size.

I just wanted to say great blog post. I had been wondering if there was a "github for data" and, as you point out, it could be GitHub :-) I wonder how that fits in with all the public data sets (http://www.quora.com/Data/Where-can-I-find-large-datasets-open-to-the-public). GitHub has a 1GB storage limit, which is fine for many purposes of course ... I guess the other concern might be GitHub disappearing in the future - might be nice if there was some way to replicate across a few different storage services ...

I think your point about a lack of integration with other public data set repositories is probably one of the bigger weaknesses of the GitHub-for-data-storage approach right now.

But it is definitely not an insurmountable problem, since fundamentally the data is just a hosted .csv file, and the link to it could easily be cross-posted in multiple places.

Re GitHub not being around in the future: this would be inconvenient from an access point of view (e.g. broken URLs in citations). However, it wouldn't be a big problem for either the data itself or its version history (all of the changes made to it). These are all recorded by Git, which is an open standard separate from GitHub, and though Git is new, I think it is reasonable to assume it will be around for a long time.

If GitHub shut down, you could easily push the entire data set, all ancillary files, and the version history to another GitHub-like service (e.g. Bitbucket) or host it yourself.
