The search itself looks for strings in the names of files and directories on the ftp site. This means it can be used to find, for example, all vcf files, or all files associated with a particular release date or a particular individual.

The search options allow you to include md5s in the output and to have the ftp paths point to either the NCBI or the EBI ftp site. Due to the volume of results which would be returned, the search by default excludes fastq and bam files, but you can add these file types back into the search. Currently the search will only return the first 1000 results due to the large number of files on the ftp site.

Accessibility

Many of our releases contain very large files which can be challenging to download in their entirety. Both bam and vcf files have indexes which allow subsections to be downloaded, using samtools or tabix respectively. There are descriptions of how to do this in our faq. We also now have a web-based tool within our Ensembl browser which allows you to request a 10KB subsection of these files.
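As an illustration, here is a minimal Python sketch of the command-line route, driving tabix and samtools through subprocess. Both tools must be installed and built with remote access support, and the URLs, file names, and region below are placeholders rather than real paths on our site.

    import subprocess

    # Placeholder locations -- substitute real, indexed files from the ftp site.
    VCF_URL = "http://example.org/release/chr20.vcf.gz"   # needs a .tbi index alongside
    BAM_URL = "http://example.org/data/sample.bam"        # needs a .bai index alongside
    REGION  = "20:1000000-1010000"                        # chromosome:start-end

    # tabix fetches only the requested region of a remote, indexed vcf.
    vcf_slice = subprocess.run(["tabix", "-h", VCF_URL, REGION],
                               capture_output=True, text=True, check=True).stdout

    # samtools view does the same for an indexed bam served over http.
    bam_slice = subprocess.run(["samtools", "view", "-h", BAM_URL, REGION],
                               capture_output=True, text=True, check=True).stdout

    print(len(vcf_slice.splitlines()), "vcf lines,",
          len(bam_slice.splitlines()), "bam records")

The same region syntax works directly on the command line if you prefer to skip Python entirely.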

The Data Slicer (http://browser.1000genomes.org/tools.html) needs the URL of an indexed bam or vcf file; it will then present a view of this file and a bam or vcf file to download. The Data Slicer can be accessed from the tools link in the top right-hand corner of all browser pages. It should work for any remotely accessible tabix-indexed vcf file. It will work for any indexed bam over http but may only work for ftp bams within the EBI.

You can also upload data from bam or vcf files from our ftp site. To do this you need to click on the manage your data link in the left-hand menu of a page; this is best done from the Location view. The section of the menu you need to click on is labelled attach remote file. Only bam files from the EBI ftp site will be visible, but any remotely accessible vcf accompanied by a tabix index should work. Once your file is loaded you should be able to see the SNPs or aligned reads displayed, and also share these links with others. This is described with screenshots in our Ensembl tutorial http://www.1000genomes.org/sites/100...l_20110506.doc

The browser also has a variant effect predictor tool (http://browser.1000genomes.org/tools.html) which will take up to 750 SNPs and indels in VCF format or an Ensembl-specific format. This tool provides functional consequences with respect to the current gene and regulatory annotation, including SIFT and PolyPhen predictions for any non-synonymous SNPs. You can also download
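As a hedged convenience for that 750-variant limit, a small sketch like the following (assuming a plain, uncompressed VCF; the file names are made up) keeps the header plus the first 750 records before upload:

    # Keep the vcf header plus the first 750 variant records (the upload limit above).
    # File names here are hypothetical.
    MAX_VARIANTS = 750

    kept = 0
    with open("input.vcf") as src, open("vep_input.vcf", "w") as dst:
        for line in src:
            if line.startswith("#"):       # header lines pass through untouched
                dst.write(line)
            elif kept < MAX_VARIANTS:      # then at most 750 data lines
                dst.write(line)
                kept += 1
            else:
                break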

If you have any questions about these new features or any other aspects of the project please email [email protected]

Currently I estimate (wild guess) you have ~500 complete human genomes (1500 GB)
at ~10-fold coverage, but they are scattered across lots of different formats and directories,
and it would take me ~10 hours to figure out how to find the data, decompress it, and
convert it, and another ~5 hours just to download the compressed data.

I'd like to see the estimates of others.

----------new estimates-------
They have all 1092 genomes (people, "samples") sequenced at 2-6-fold coverage
(which I assume means that they have lots of small segments (~500 nucleotides
per segment?) from the genome; those may have many errors but cover
the genome at ~2-6-fold at each position).
Critical positions, those with expected mutations, are covered more often (50-100-fold).
So they have a total of ~2e13 overlapping nucleotides.
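The arithmetic behind that total, with my own rough numbers, works out like this:

    samples  = 1092
    genome   = 3.1e9    # bases in a haploid human genome, roughly
    coverage = 6        # upper end of the 2-6 fold range

    print(f"{samples * genome * coverage:.1e} overlapping nucleotides")  # ~2.0e13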

The data is in "vcf" files with a complicated format, so I stay with my estimate
of ~10 hours of work to convert them into a workable format.

The data could be only ~700 MB; the Y chromosome came in 2 files of 29 MB compressed.
-------------------------------------------------

How would I pack the data?
I want the 1092*36.7M SNPs in 23 binary files, one per chromosome.
In the file for chromosome j, bit i of sample k's vector should be set iff that sample carries SNP i.
Then compress with gzip.
23 files, ~50 MB per file, I estimate.
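A sketch of that packing step, assuming the per-sample presence calls are already in hand as a boolean matrix (numpy's packbits does the bit layout; the sizes and file name are toy stand-ins):

    import gzip
    import numpy as np

    def pack_chromosome(presence, path):
        # presence: bool matrix of shape (n_samples, n_snps_on_this_chromosome);
        # row k, bit i is set iff sample k carries SNP i.
        packed = np.packbits(presence, axis=1)   # 8 SNPs per byte
        with gzip.open(path, "wb") as f:
            f.write(packed.tobytes())

    # Toy stand-in for one chromosome: 1092 samples, 10,000 SNPs, ~2% density.
    rng = np.random.default_rng(0)
    demo = rng.random((1092, 10_000)) < 0.02
    pack_chromosome(demo, "chr21.bits.gz")

gzip does well here simply because most bits are zero.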

Wait, I have a better idea.
You compute the genetic distance between every pair of samples: 1092^2 integers, ~4 MB.
That's just the number of set bits in the logical XOR of the two 37M-bit vectors.
Then you sort the 1092 samples circularly so the sum of the distances between neighbours
is minimal (a travelling salesman problem, typically easy to solve approximately for n=1092).
Then you compute the logical XORs of adjacent samples, which presumably have lots of zeros:
1092 binary vectors of length 37M again, but this time with much better compression
via gzip or such because of the many zeros.
I can write you the programs for encoding and decoding, if you want.
Self-expanding executable, easy to use, all automatic.
The size of that file would be a measure of the genetic variability of your set of 1092 samples.
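A toy sketch of that scheme, with a greedy nearest-neighbour tour standing in for a real TSP solver and small correlated random vectors standing in for the real 1092 x 37M bit matrix (all numbers here are illustrative):

    import gzip
    import numpy as np

    rng = np.random.default_rng(1)
    n_samples, n_snps = 40, 100_000              # stand-ins for 1092 and ~37M

    # Correlated samples: a shared base pattern plus rare private flips,
    # mimicking genomes that mostly share their variants.
    base  = rng.random(n_snps) < 0.02
    flips = rng.random((n_samples, n_snps)) < 0.002
    bits  = np.logical_xor(base, flips)

    packed = np.packbits(bits, axis=1)           # 8 SNPs per byte

    # Genetic distance = popcount of the XOR of two packed bit vectors.
    def dist(a, b):
        return int(np.unpackbits(packed[a] ^ packed[b]).sum())

    # Greedy nearest-neighbour tour as a cheap stand-in for solving the TSP.
    order = [0]
    todo = set(range(1, n_samples))
    while todo:
        nxt = min(todo, key=lambda j: dist(order[-1], j))
        order.append(nxt)
        todo.remove(nxt)

    # Store the first sample verbatim, then only the XOR with the previous one.
    stream = packed[order[0]].tobytes()
    for prev, cur in zip(order, order[1:]):
        stream += (packed[prev] ^ packed[cur]).tobytes()

    naive = len(gzip.compress(packed.tobytes()))
    delta = len(gzip.compress(stream))
    print(f"plain gzip: {naive} bytes, xor-delta gzip: {delta} bytes")

Because adjacent samples in the tour differ in few positions, the XOR stream is mostly zero bytes and gzip compresses it much harder than the raw matrix.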

We provide all our variation data in VCF format, which serves our needs quite well. If you have a better idea for your own needs, you should be able to get all the information you need from these files to do the conversion.
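For anyone attempting that conversion, here is a minimal hedged sketch of reading the GT (genotype) column of each sample from an uncompressed VCF; real release files are bgzipped and carry much richer records, so treat this as a starting point only:

    # Reduce each vcf record to one presence flag per sample: any non-reference
    # allele in the GT field counts as "this sample carries the variant".
    def vcf_presence(path):
        samples, rows = [], []
        with open(path) as f:
            for line in f:
                if line.startswith("##") or not line.strip():
                    continue
                fields = line.rstrip("\n").split("\t")
                if line.startswith("#CHROM"):
                    samples = fields[9:]        # sample names follow 9 fixed columns
                    continue
                gt_index = fields[8].split(":").index("GT")
                rows.append([
                    any(allele not in ("0", ".")
                        for allele in col.split(":")[gt_index].replace("|", "/").split("/"))
                    for col in fields[9:]
                ])
        return samples, rows

The returned rows, one presence list per variant, are exactly the kind of matrix the packing sketch above starts from.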