Not only is the BED format 0-based, it's also "half-open", meaning the start position is inclusive, but the end position is not.

So if your region starts at position 100 and ends at 101 using standard 1-based coordinates with both start and end inclusive (i.e. it's two bases long), then when you convert it to 0-based half-open coordinates for BED format, the region now starts at 99 but still ends at 101!
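For anyone bitten by this, the conversion is just "subtract one from the start"; here's a minimal Python sketch (the function name `to_bed` is made up for illustration):

```python
def to_bed(start_1based, end_1based):
    """Convert a 1-based, fully-closed interval to BED's 0-based, half-open convention."""
    return start_1based - 1, end_1based

# The two-base region at 1-based positions 100-101:
bed_start, bed_end = to_bed(100, 101)
assert (bed_start, bed_end) == (99, 101)
assert bed_end - bed_start == 2  # the length falls straight out of the subtraction
```

A nice side effect of half-open coordinates is that `end - start` is the region length with no off-by-one corrections.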

This is a popular one; DEC1 is another well-known example. You can actually tell Excel not to do that autocorrection. But since you most often get the data from biologists who may already have handled it in Excel, it's better to use a separate ID column rather than the gene name column, if one is available (you often receive both anyway). These errors can even occur in databases that you download data from or that are used for annotation, so it is good to check.
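If you want to screen incoming gene lists for this, a quick (and admittedly crude) check is to flag symbols that match Excel's date shapes, such as "2-Sep" for SEPT2 or "1-Dec" for DEC1. This is just an illustrative sketch, not an exhaustive catalogue of Excel's mangling:

```python
import re

# Excel turns gene symbols like SEPT2 or DEC1 into dates ("2-Sep", "1-Dec").
# This pattern catches the common mangled forms so they can be flagged for review.
EXCEL_DATE = re.compile(
    r"^\d{1,2}-(Jan|Feb|Mar|Apr|May|Jun|Jul|Aug|Sep|Oct|Nov|Dec)$",
    re.IGNORECASE,
)

def looks_excel_mangled(symbol):
    return bool(EXCEL_DATE.match(symbol.strip()))

symbols = ["TP53", "2-Sep", "BRCA1", "1-Dec"]
suspicious = [s for s in symbols if looks_excel_mangled(s)]
assert suspicious == ["2-Sep", "1-Dec"]
```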

If you forgive an attempt to be somewhat provocative, my two favorite mistakes are:

1 Letting academics build software

Academics need to publish papers, and one easy way to do that is to implement an algorithm, demonstrate that it works (more or less), and type it up in a manuscript. BTDT. But robust and useful software requires a bit more than that, as evidenced by the sad state of affairs in typical bioinformatics software (I think I've managed to crash every de novo assembler I've tried, for instance. Not to mention countless hours spent trying, often in vain, to get software to compile and run). Unfortunately, you don't get a lot of academic credit for improved installation procedures, testing, software manuals, or, especially, debugging of complicated errors. Much better and more productive to move on to the next publishable implementation.

2 Letting academics build infrastructure

Same argument as above, really. Academics are eager to apply for funding to build research infrastructure, but of course they aren't all that interested in doing the old and boring stuff. So although today's needs might be satisfied by a $300 FTP server, they will usually start conjecturing about tomorrow's needs instead, and embark on ambitious, blue-sky projects that might result in papers, but not in actually useful tools. And even if you get a useful database or web application up and running (and published), there is little incentive to update or improve it, and it is usually left to bitrot while the authors go off in search of the next publication.

Yeah, I don't know why it is so hard for me to remember all the great bioinformatics software that has come from industry, like, uhh, Eland, or the great standards that have come from industry, like Phred+64 FASTQ.

I am fine with point 2, but I have to disagree with 1. Your de novo assembler example is actually not a good one. De novo assembly is very complicated and highly data dependent. I doubt any assembler works for all data sets, whether developed in academia or by professional programmers.

I always wonder whether they ever check the program/code that comes with a paper. In one paper, they hardcoded the input file path in the code, which made me waste a whole afternoon figuring out what the hell was wrong with it.

@Jeremy: I'm not so sure industry is much better, and it's possible that academia is the democracy of software development: the worst, except for all the others. Also, a lot of industry software is add-ons, designed to sell something else. FWIW, Newbler seems to be one of the better assemblers out there, and CLC is at least half-decent as an analysis platform for non-informaticians.

Re-inventing the wheel. So often have I had to debug (or just replace) a bad implementation of a FASTA parser when BioPython/BioPerl have perfectly good implementations; I don't understand why no one bothers to use them. 10 minutes on Google can save you 2 days of work and other people a week of work (you save 2 days of programming, they save a week of understanding your program to find the bug).

As the Nim library docs say, "The great thing about re-inventing the wheel is that you can get a round one."

My main reason for reinventing the wheel is that I want to use much more powerful and general language: Python instead of R. Of course, if the stuff I needed was already in Python/Pandas it would be a different thing entirely.

I fully agree, re-inventing the wheel is so tempting. We are way too eager to write a few lines of code each time. Plus, because you may have convinced yourself that you can knock out the code in 15 minutes, you don't bother writing any documentation. In short, there is a very strong tendency to re-invent the wheel... many, many times!

I'll offer this one, which is a bit on the general side: deletion of data that appears to serve no purpose from the computational side, but which has importance to the biology/biologist. Often this arises from a lack of clear communication between the two individuals/teams about what the data means, exactly, and why it is relevant to the process being developed.

I gave my Amazon EC2 password to someone in my group who wanted to run something quickly (estimated cost: $2). I received the bill 2 months later: $156. This person forgot to shut down the instance. That was 8 months ago and I'm still waiting for my reimbursement... Conclusion: don't trust colleagues!

Not dealing with error conditions at all. This is one thing I really noticed when I started in bioinformatics: code that would just merrily continue when it hit incorrect data and output gibberish, or fail far away from the bad data. A debugging nightmare.

Not testing edge and corner cases for input data

Assuming that your input data is sane; I've run into all sorts of inconsistency issues with public data sets (e.g. protein domains at positions off the end of the protein). They're usually fixed promptly if you complain, but you've got to find them first.
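A cheap defensive check catches a lot of this before it poisons downstream results. Here's a sketch (the function name and tuple layout are made up) that validates domain annotations against the protein length, assuming 1-based, inclusive coordinates as is common in protein databases:

```python
def check_domains(protein_length, domains):
    """Return the domains whose coordinates fall outside the protein.

    `domains` is a list of (name, start, end) tuples using 1-based,
    inclusive coordinates.
    """
    bad = []
    for name, start, end in domains:
        if start < 1 or end > protein_length or start > end:
            bad.append((name, start, end))
    return bad

# A 300-residue protein with one domain annotated past its C-terminus:
problems = check_domains(300, [("kinase", 10, 250), ("SH3", 280, 350)])
assert problems == [("SH3", 280, 350)]
```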

One mistake: not checking whether the 0x4 bit in the FLAG column of a SAM (or BAM) file indicates the entry is unmapped. RNAME, CIGAR, and POS may be set to something non-null (an actual string!), but these are not meaningful if the 0x4 flag says the read is unmapped.
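A minimal check, assuming you've already parsed the integer FLAG field (the constant name is mine):

```python
UNMAPPED = 0x4  # SAM FLAG bit: segment unmapped

def is_mapped(flag):
    """True if the SAM FLAG field says this record is mapped."""
    return not (flag & UNMAPPED)

# Fields like RNAME, POS, and CIGAR may be populated even for unmapped reads,
# so always consult the flag before trusting them.
assert is_mapped(0)       # plain mapped read
assert not is_mapped(4)   # unmapped
assert not is_mapped(77)  # 77 = 1+4+8+64: paired, unmapped, mate unmapped, first in pair
```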

I often encounter problems related to the fact that computer scientists index their arrays starting at 0, while biologists index their sequences starting at 1. It's a simple concept that drives the noobs mad and even trips up more experienced scientists every once in a while.
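The classic failure mode, in Python for instance:

```python
seq = "ACGTACGT"

# A biologist's "position 3" (1-based) is the G.
pos = 3
assert seq[pos - 1] == "G"  # correct: convert to a 0-based index first
assert seq[pos] == "T"      # off-by-one: silently returns the wrong base
```

The nasty part is that both lines run without error; only one of them answers the biologist's question.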

Doing pathway statistics or gene set enrichment statistics and then presenting the list of gene sets as a valuable result in itself, instead of using those statistics just as a means to decide which pathways need closer evaluation.

(This is bad for many reasons: for instance, because the statistical contribution of a key regulatory gene in a pathway is equal to that of 1 out of 7 iso-enzymes that catalyze a non-relevant side reaction; because the significance of a pathway changes when you add a few non-relevant genes; and because we have many overlapping pathways.)

No, I think it is actually wrong to publish a list of pathways without further judgement; not doing that judgement is the mistake. But I have to admit that I don't really understand your examples, so maybe my English is not good enough to grasp the fine difference between poor judgement and stupid mistakes.

So this is a (very) late reply, but in case it's still helpful or someone comes across this question like I did: "rm -ir" will ask before deleting files. Maybe a little annoying to type "y" a hundred times, but better that than lose all your data to a mistyped glob, IMHO.

I too have had that moment of dread when I realized I typed rm * /folder versus rm /folder/* ! Check out some of the solutions on this forum page ( http://unix.stackexchange.com/questions/42757/make-rm-move-to-trash ), specifically trash-cli. You can set up a trash folder so after deleting files they are not completely gone and can be restored if needed. You would have to manually empty the trash folder or set up a cron job to do so on a regular basis, but this may help circumvent the nightmares listed here!

I was just deleting some unnecessary files from a dir and managed to leave a space and an asterisk at the end of the rm command. As soon as I realized what was happening I hit ctrl-c, but important files without backups were already gone. Oh well, it will only take 2-3 weeks to reproduce them. Also time to edit .bashrc following Philipp's post...

Once I did something very similar: I deleted all files and subdirectories in a directory that I thought I had in duplicate. Shortly after, I realized I was inside a symbolic link and was deleting the original data....

If I understand you correctly, you are saying that this will inflate the number of variants, since many have ambiguous positions? Interesting - do aligners generally guarantee that such ambiguous variants are consistently placed for forward and reverse reads?

BWA always places the indel at the beginning of a microsatellite. If you align the read to the reverse-complemented reference, the indel will be at the end. Many indel callers assume the BWA behavior, though there are also tools to left-align indels.

That only applies if the alignment contains only matches or mismatches, i.e. CIGAR strings composed of a single number followed by M (like 76M). For all other alignments you will need to parse the CIGAR string and build the end coordinate from the start plus the reference-consuming operations.
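As a sketch of that parsing in Python (helper names are mine): the end coordinate is the start plus the sum of the lengths of all reference-consuming CIGAR operations (M, D, N, =, X), minus one for a 1-based inclusive end. Insertions, soft clips, and hard clips do not advance the reference coordinate.

```python
import re

CIGAR_OP = re.compile(r"(\d+)([MIDNSHP=X])")
CONSUMES_REF = set("MDN=X")  # operations that advance the reference coordinate

def alignment_end(pos, cigar):
    """1-based inclusive end coordinate of an alignment, given POS and CIGAR."""
    ref_len = sum(int(n) for n, op in CIGAR_OP.findall(cigar) if op in CONSUMES_REF)
    return pos + ref_len - 1

assert alignment_end(100, "76M") == 175
# A 10 bp deletion extends the reference footprint; an insertion does not:
assert alignment_end(100, "30M10D46M") == 185
assert alignment_end(100, "30M10I36M") == 165
```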

I made one a few months ago. I launched a heavy process on a pay-per-use cluster, and it ran for a week. I thought 6 cents/hr couldn't be too much money. I received a bill for $832 USD. I'm not using that cluster again unless I estimate the total cost of the process first.

Running the bwa/GATK pipeline with a corrupt/incompletely generated bwa index of hg19. Everything still aligned, but one of the 2 mates would have its strand set incorrectly. Other than the insert-size distribution, everything seemed normal, until the TableRecalibration step downshifted all quality scores significantly and then UnifiedGenotyper called 0 SNPs. First time I've seen a problem with step 1 of a pipeline not become obvious until step 5+.

Possibly: implementing methods that magically generate p-values from non-replicated RNA-seq experiments, possibly as a result of pressure from 'experimentalists'. I really would like to know the history behind their implementation (were they forced by reviewers, or by other groups?). Now we have to explain why those p-values are bogus, and why so few significantly differentially expressed genes are detected in a non-replicated analysis.

I had another good one recently. I was executing an untested bash script to generate output from two input files on our cluster. I just let it run overnight. When I checked in the morning, it had generated about 40 TB of output (the expectation was about 20 MB). A tiny spelling mistake had led to an infinite loop. Oops. I was lucky to check when I did, because there were still a few TB of space left, so at least other jobs didn't get killed because of it..

Some really great comments here, nice to know that such things happen to all genii ;). I have to say my most painful moments relate to my assumption that data obtained elsewhere is correct in every way. I also remember, early in my career, using PDB files and realising that sometimes chains are represented more than once; when manually checking calculations involving atomic coordinates, I was utterly perplexed and wanted to break my computer. Oh, the joys of bioinformatics.

Assuming that the gene IDs in "knownGenes.gtf" from UCSC are actually gene IDs. Instead they just put the transcript ID as the gene ID.

This just caused me a bit of pain when doing read counting at the gene level. Basically, any constitutive exon in a gene with multiple splice forms was ignored, because all the reads in that exon were treated as ambiguous.

How about writing a tool and being convinced it works perfectly, so you start running it on a complete dataset instead of testing it first on a subset, and finding out, after it has run for an hour or so, that you made a tiny mistake somewhere. Sooo much time wasted that I'll never get back :P

I have spent hours, on repeated occasions, looking for a mysterious error in a Perl script that in the end was simply a "=" instead of a "==" inside an IF statement.

Another recurrent mistake: not documenting what I did and what those scripts do, in the belief that everything is so intuitive, organised, simple and natural that documentation won't be necessary. Then, some time after, I have to spend hours trying to guess what all that mess was.

This one was really good: embarking on sudo yum update when there was lots of stuff to update and swap space was very low. Ended up with a situation much like this. It took me a good four hours before I saw my desktop again.

Another fun one is when you develop a pipeline with a small test set, thinking speed over all, and then you increase your test set size and realize that you're creating TBs of temp data and using hundreds of GBs of RAM :)