Would really like to have a similar one on sharing memory across processes too, if you have the time

Comment on Python: Interleave Paired-End Reads by Nick
http://www.ngcrawford.com/2012/03/28/interleave-paired-end-reads/comment-page-1/#comment-4838
Tue, 30 Sep 2014 18:30:15 +0000

Great catch – I'll update the text. Short answer: I originally wrote the script using Python's gzip module, but removed it when I added optional piped input. A simple workaround would be to use process substitution to unzip the files on the fly. E.g.:
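A sketch of the process-substitution workaround described above. The script name `interleave_pairs.py` and the read filenames are placeholders, not the post's actual names; `cat` stands in below just to demonstrate that the pipe carries the decompressed stream.

```shell
# Hypothetical invocation (interleave_pairs.py is a placeholder name):
#   python interleave_pairs.py <(gunzip -c reads_1.fastq.gz) <(gunzip -c reads_2.fastq.gz)
#
# The same trick demonstrated with cat: <(...) expands to a named pipe
# carrying the decompressed stream, so no temporary files are written
# and the script never needs to know the input was gzipped.
echo 'ACGT' | gzip > demo.fastq.gz
cat <(gunzip -c demo.fastq.gz)
```

Note that process substitution is a bash/zsh feature; it will not work under plain `sh`.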

Comment on Python: Interleave Paired-End Reads by Tommy
http://www.ngcrawford.com/2012/03/28/interleave-paired-end-reads/comment-page-1/#comment-4584
Thu, 25 Sep 2014 16:27:01 +0000

Where is the part where you handle gzipped input?
Comment on Getting started with Ultra Conserved Elements by Getting started with Ultra Conserved Elements | The Molecular Ecologist
http://www.ngcrawford.com/2013/10/01/getting-started-with-ultra-conserved-elements/comment-page-1/#comment-1257
Tue, 01 Oct 2013 22:53:52 +0000

[…] Cross posted on ngcrawford.com […]
Comment on Bowtie2 output as BAM by How to make bowtie2 output as bam | RNA-Seq data analysis
http://www.ngcrawford.com/2012/03/14/bowtie2-output-as-bam/comment-page-1/#comment-1256
Tue, 10 Sep 2013 04:26:48 +0000

[…] How to make bowtie2 output as bam […]
Comment on Python: Multiprocessing large files by Nick
http://www.ngcrawford.com/2012/03/29/python-multiprocessing-large-files/comment-page-1/#comment-1249
Wed, 24 Jul 2013 17:04:15 +0000

Results should be overwritten at each loop, but only after you write them to an outfile. If I extended the results list as you suggest, all the results would be stored in memory, which kind of defeats the purpose of this script. Overwriting results isn't a bug, it's a feature! 😉
Comment on Python: Multiprocessing large files by at
http://www.ngcrawford.com/2012/03/29/python-multiprocessing-large-files/comment-page-1/#comment-1242
Mon, 20 May 2013 20:00:47 +0000

A mistake in the code makes the results get overwritten at each loop.

Here is the final loop, corrected:

results = []
for chunk in grouper(10, test_data):
    result = p.map(process_chunk, chunk)
    results.extend(result)
for r in results:
    print r  # replace with outfile.write()