
Knowing me, knowing you

Molecular biologists around the world are currently creating a high-resolution genetic map with the 1000 Genomes Project, which they hope will contain all the common gene variations in the major populations.
Increasingly, however, the bottleneck is data transfer between centers.

This 454 Life Sciences machine is an example of a next-generation sequencer and represents cheaper and more efficient DNA sequencing. This technology anchors DNA fragments to tiny resin beads (one fragment to each bead), which are put into wells on a plate. Nucleotides (A, C, G, or T) are washed over the wells, and as they are incorporated into the new DNA strand, they give off light, which is a measure of how many As, Ts, Cs or Gs have been incorporated. Image courtesy Lawrence Berkeley National Lab - Roy Kaltschmidt, photographer.
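The flow-based chemistry the caption describes can be sketched as a toy model: nucleotides are washed over the wells in a fixed cycle, and the light emitted in each flow is proportional to how many bases of that type are incorporated in a row. This is purely illustrative; real 454 flowgrams involve calibrated signal processing, and the flow order and function names here are invented for the sketch.

```python
def flowgram(template: str, flow_order: str = "TACG") -> list:
    """Simulate light intensities for each nucleotide flow.

    Each flow incorporates a run of identical bases; the recorded
    signal (here, the run length) stands in for light intensity.
    """
    signals = []
    pos = 0
    while pos < len(template):
        for base in flow_order:
            run = 0
            while pos < len(template) and template[pos] == base:
                run += 1
                pos += 1
            signals.append((base, run))  # light ~ homopolymer length
    return signals

# A template starting with two Ts lights up strongly on the first T flow.
print(flowgram("TTAG"))
```

Reading the signals back in flow order recovers the original sequence, which is how the instrument reconstructs each DNA fragment.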

In 2008, molecular biologists around the world joined forces on an ambitious three-year project: sequencing the genomes of 1,000 people. Through this effort, called the 1000 Genomes Project, they hoped to identify the common gene variants across nationalities and the genetic susceptibility to many diseases.

What they didn't fully anticipate was just how rapid the improvement in sequencing technology would be. "Our institute joined the project because we wanted to [use] second-generation sequencing technology. Now, our genome tool kit can analyze thousands more bytes of data - cheaper and faster - than three years ago," said Li Yingrui from the BGI (formerly the Beijing Genomics Institute, which dropped the name when the headquarters moved to Shenzhen).

So, when they finished sequencing the first 1,000 genomes in mid-2010, they raised the target: the project now aims to sequence 2,500 genomes, more than double the original number. The new bottleneck, though, is the efficient transfer and analysis of genetic data once a genome has been sequenced.

The data generated by the project, which is co-led by David Altshuler from the Broad Institute in Cambridge, USA, and Richard Durbin from the Sanger Institute near Cambridge in the UK, is held by and distributed from the European Bioinformatics Institute (EBI) and the US National Center for Biotechnology Information (NCBI), which is part of the US National Institutes of Health. There will also be a mirror website for data access in Shenzhen, China.

But for now, the largest sequenced datasets are often shipped between sites by mail.

"I know this is absurd"

"One genetic sequencer can generate half a terabyte of nucleotides [the basic structural units of DNA] per run in one week. There are thousands of sequencers producing data throughout the world," said Li.

"Once genetic data is processed, it is copied to hard disks and sent via mail to another institute for analysis synchronization. I know this is absurd, but this is a fact," said Li.

It may take only a week to generate half a terabyte of data, but once it is generated, researchers can spend up to two weeks copying the data to disks, mailing them, and uploading the data onto a new machine for analysis. This is because current internet bandwidth speeds are simply too slow.
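A back-of-envelope calculation shows why mail can beat the network. The sketch below compares the time to move a half-terabyte run over links of various speeds; the bandwidth figures and the 70% efficiency factor (protocol overhead, contention) are illustrative assumptions, not measurements from the project.

```python
# Rough comparison: moving 0.5 TB over a network link vs. shipping disks.
# All link speeds and the efficiency factor are illustrative assumptions.

DATASET_BYTES = 0.5 * 10**12  # half a terabyte per sequencer run


def transfer_days(bandwidth_mbps: float, efficiency: float = 0.7) -> float:
    """Days to move the dataset at the given link speed.

    `efficiency` discounts protocol overhead and contention (assumed).
    """
    bytes_per_sec = bandwidth_mbps * 10**6 / 8 * efficiency
    return DATASET_BYTES / bytes_per_sec / 86400


for mbps in (10, 50, 100, 1000):
    print(f"{mbps:>5} Mbit/s link: {transfer_days(mbps):6.2f} days")

# Shipping disks costs roughly the same few days of copying plus courier
# time regardless of dataset size - which is why it wins on slow links.
```

On a sustained 10 Mbit/s international path, the transfer alone takes the better part of a week, before any copying at either end.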

When cloud stops being cost effective

"The main issue for us is that our data sizes are so large, that the cost and difficulty of moving the data to the cloud stops it being cost effective for many jobs. We do use the cloud for the Ensembl genomes database, but only to provide [data] mirrors that are closer to users," said Phil Butcher, Head of IT at the Sanger Institute, one of the major research institutes involved in the project and located near Cambridge in the UK.

"We have looked at volunteer computing, but it has never seemed sensible because of data and network issues. We distressingly often resort to shipping hard disks around to transfer data between centers, rather than use the internet, or even via Aspera which is faster than ftp [file transfer protocol]," Richard Durbin said.

A team of scientists from labs around the world - a type of academic social network - is the way forward, Li said. Data could then be stored and analyzed in an academic computing cloud, which researchers could access remotely. The issue is pressing enough that the BGI has launched an open-access journal dedicated to the topic: GigaScience.

The show must go on

Transferring the largest sequenced datasets of human genomes for analysis is proving a challenge. Because current internet bandwidth is not high enough, scientists in many cases have to resort to sending the data by mail. Image courtesy Wikimedia Commons.

While data transfer issues continue to distress those in charge, the science coming out of the project continues at a fast pace. From the first phase of the project - when the 1,000 genomes were sequenced - the teams found that each person carries approximately 250 to 300 loss-of-function variants, which result in a gene having reduced or no function, and 50 to 100 variants previously implicated in inherited disorders.

More basically, though, the project plans to characterize over 95% of variants that have a frequency of 1% or higher in each of five major population groups (populations in or with ancestry from the Americas, East Asia, South Asia, Europe, and West Africa).
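The 1% cutoff mentioned above is a simple frequency threshold on observed alleles. The sketch below shows the idea; the variant identifiers and genotype counts are invented for illustration and do not come from the project's data.

```python
# Sketch: keeping variants at or above a 1% allele frequency, as in the
# project's goal. All counts below are invented for illustration.

def allele_frequency(ref_count: int, alt_count: int) -> float:
    """Frequency of the alternate allele among all observed alleles."""
    total = ref_count + alt_count
    return alt_count / total if total else 0.0

# (hypothetical variant id, reference-allele count, alternate-allele count)
variants = [
    ("rs_example_1", 1980, 20),   # 1.0%  -> common enough for the map
    ("rs_example_2", 1999, 1),    # 0.05% -> rare, below threshold
    ("rs_example_3", 1500, 500),  # 25%   -> common
]

common = [v for v, ref, alt in variants if allele_frequency(ref, alt) >= 0.01]
print(common)  # variants at or above the 1% threshold
```

Applying this per population group, rather than globally, is what lets the map capture variants that are common in one ancestry but rare elsewhere.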

This will form a "high-resolution genetic map" said Li. This map will then form a baseline for future studies, such as identification of genetic susceptibility to disease.

In fact, the leaps in sequencing technology have allowed the project to increase in scope. "The 1000 Genomes Project is now sampling from several more populations than were originally proposed," said Thomas Keane, researcher at the Sanger Institute.

"Now we can focus on individual ethnic groups," Li said. BGI contributes the genomes of Chinese population groups to the 1000 Genomes Project: the northern Han and the southern Han (the Han being the largest ethnic group in the world), and the Dai people.

The field of molecular biology won't stop here. Next week, iSGTW will carry a feature about a more focused project with even more genomes, the UK10K project, which will sequence parts of the genome of 10,000 people in the UK.
