How to download complete XML records from PubMed and extract data

Introduction

My first PubMed script (An R Script to Automatically download PubMed Citation Counts By Year of Publication) extracted yearly counts for any number of search strings using PubMed’s E-utilities. Specifically, it used the esearch function, which reports the number of hits for your search and/or the articles’ PMIDs. That method is fast and reliable if you’re only interested in the number of hits for a given query. However, PubMed’s E-utilities offer many more features than that, some of which I will use in this article to download complete article records in XML.

How it works

What’s cool about esearch is that you can tell it to save a history of the articles found by your query, and then use another function, efetch, to download that history. You do this by adding &usehistory=y to your search, which will add this element (along with some other XML tags) to the response:
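The request can be sketched in a few lines of base R. The esearch.fcgi endpoint and the usehistory parameter are real E-utilities features; the helper names and the regex-based extraction below are my own illustration, not the article’s actual code.

```r
# Pull a single tag's value out of an unparsed XML string (first match only).
extract_tag <- function(xml, tag) {
  hit <- regmatches(xml, regexpr(paste0("<", tag, ">[^<]*</", tag, ">"), xml))
  gsub(paste0("</?", tag, ">"), "", hit)
}

# Run esearch with usehistory=y and return the WebEnv key, the QueryKey
# and the total hit count. (Endpoint is NCBI's; function name is my own.)
esearch_history <- function(query) {
  base <- "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"
  url  <- paste0(base, "?db=pubmed&usehistory=y&term=",
                 URLencode(query, reserved = TRUE))
  xml  <- paste(readLines(url, warn = FALSE), collapse = "")
  list(web_env   = extract_tag(xml, "WebEnv"),
       query_key = extract_tag(xml, "QueryKey"),
       count     = as.numeric(extract_tag(xml, "Count")))
}
```

In the esearch response the top-level Count element comes before the per-term counts in the translation stack, so taking the first match is enough here.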

<WebEnv>NCID_1_90555773_130.14.18.48_5553_1335519406_1226217114</WebEnv>

Once we have extracted the WebEnv string, we just tell PubMed’s efetch to send us the articles saved in WebEnv. There’s one complication, though: PubMed “only” allows us to fetch 10,000 articles in one go, so my code includes a loop that downloads the data in batches and pastes the pieces together into valid XML. The XML cutting and pasting is done with gsub, since the unparsed XML data is just one long string. It’s not the most beautiful solution, but it seems to work.
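That loop can be sketched as follows, under the same assumptions as above (real efetch.fcgi endpoint, illustrative helper names): each batch is fetched with an increasing retstart, the <PubmedArticleSet> wrapper is stripped from each chunk, and one wrapper is put back around the joined result.

```r
# Remove everything up to and including the opening <PubmedArticleSet> tag,
# and everything from the closing tag onwards, leaving only the articles.
strip_wrapper <- function(chunk) {
  chunk <- sub(".*<PubmedArticleSet>", "", chunk)
  sub("</PubmedArticleSet>.*", "", chunk)
}

# Download `count` records from the saved history in batches of `retmax`
# (PubMed caps efetch at 10 000 records per request) and splice the
# batches into one valid XML document.
fetch_history <- function(web_env, query_key, count, retmax = 10000) {
  base  <- "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/efetch.fcgi"
  parts <- character(0)
  for (retstart in seq(0, count - 1, by = retmax)) {
    url <- paste0(base, "?db=pubmed&retmode=xml",
                  "&WebEnv=", web_env, "&query_key=", query_key,
                  "&retstart=", retstart, "&retmax=", retmax)
    parts <- c(parts,
               strip_wrapper(paste(readLines(url, warn = FALSE), collapse = "")))
  }
  paste0("<PubmedArticleSet>", paste(parts, collapse = ""), "</PubmedArticleSet>")
}
```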

Now that all the XML data is saved in one object, we just need to parse it and extract whatever PubMed field(s) we’re interested in. I’ve included a function that parses the XML and extracts journal counts, although you could use the same method to extract any field.
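A minimal sketch of that step with the XML package: the XPath below targets MedlineTA (the journal’s title abbreviation), which is one reasonable choice of node; the function name is my own, not the article’s extractJournal().

```r
library(XML)

# Parse the pasted-together XML and tally one journal name per article.
# Any other node, e.g. //Journal/Title, would work the same way.
count_journals <- function(xml_string) {
  doc      <- xmlParse(xml_string, asText = TRUE)
  journals <- xpathSApply(doc, "//MedlineTA", xmlValue)
  sort(table(journals), decreasing = TRUE)
}
```

Taking head(count_journals(xml_data), 20) would then give a top-20 list like the ones plotted below.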

One example run: Top 20 CBT journals in 2010, 2011 and all time

These two graphs were created with the following three queries (notice that I use single quotes inside my queries). This script does not have the functionality to run different queries automatically for you, so I ran my three searches individually. The R code for searchPubmed() and extractJournal() is at the end of this article.

# Get data for 2011 (this query string is illustrative)
query <- "'cognitive behavioral therapy' AND 2011[PDAT]"
cbt_2011 <- extractJournal(searchPubmed(query))

Reshaping the data and creating the plots

I needed to reshape my data a bit and combine it into one object before I could use ggplot2 to make the graphs. I did it like this:

# Add year-column to each result, then combine into one long data frame
cbt_2010$year <- "2010"
cbt_2011$year <- "2011"
cbt_data <- rbind(cbt_2010, cbt_2011)

ggplot2 code

Now that I have all my top-20 data in one object in long format, the ggplot2 code is pretty simple.

## Names for plot legend ##
my_labels <- c("2010", "2011", "All time")  # illustrative legend labels
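The plotting call itself did not survive above, so here is a minimal sketch with made-up example data. The column names (journal, count, year) are my assumptions about the long format; the real object held the top-20 journal counts.

```r
library(ggplot2)

# Made-up example data in long format, standing in for the real top-20 counts.
cbt_data <- data.frame(
  journal = rep(c("Behav Res Ther", "J Consult Clin Psychol"), 2),
  count   = c(120, 90, 130, 95),
  year    = rep(c("2010", "2011"), each = 2)
)

# Horizontal bar chart: one bar per journal, one fill colour per year.
p <- ggplot(cbt_data, aes(x = reorder(journal, count), y = count, fill = year)) +
  geom_bar(stat = "identity", position = "dodge") +
  coord_flip() +
  labs(x = NULL, y = "Number of articles", fill = "Year")
```

reorder() sorts the journals by their counts so the biggest journals end up at the top of the flipped axis.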

Reliability of the method

To check the reliability of my method, I compared the number of extracted journals to the total number of hits. These are the numbers: