note
Corion
<p>First of all, I would look at tuning the database, and pushing part of the work into the database. Databases are usually good at aggregation, for example, but depending on your key space size, aggregation or sorting may be prohibitive on your database server unless the indices needed for that already exist.</p>
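<p>To make the point concrete, here is a minimal sketch (in Python with an in-memory SQLite database, since no schema was given - table and column names are made up) of letting the database do the aggregation instead of pulling raw rows into your program:</p>

```python
import sqlite3

# Sketch: push the aggregation into the database instead of fetching
# every row. Table and column names here are hypothetical.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE sales (region TEXT, amount INTEGER)")
con.executemany("INSERT INTO sales VALUES (?, ?)",
                [("north", 10), ("south", 5), ("north", 7)])
# An index on the grouping column helps the server avoid a full sort:
con.execute("CREATE INDEX idx_region ON sales (region)")
rows = con.execute(
    "SELECT region, SUM(amount) FROM sales GROUP BY region ORDER BY region"
).fetchall()
# rows == [("north", 17), ("south", 5)]
```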
<p>If working with the database is not a good option, I would write the query results to disk into one or more files and simply launch the processing in parallel, one worker per file. This approach relies on all your aggregators being symmetric. For example, <c>sum()</c>, <c>count()</c> and <c>max()</c> are symmetric aggregators, because the result does not depend on the order in which you visit the rows.</p>
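<p>A small sketch of why symmetric aggregators parallelize so easily - each worker aggregates its own chunk, and the partial results merge cleanly (Python, with made-up data):</p>

```python
def aggregate_chunk(rows):
    """Partial aggregate over one chunk of rows (a list of numbers)."""
    return {"sum": sum(rows), "count": len(rows), "max": max(rows)}

def merge(a, b):
    """Combine two partial aggregates; valid because sum, count and max
    are symmetric (order-independent) aggregators."""
    return {
        "sum": a["sum"] + b["sum"],
        "count": a["count"] + b["count"],
        "max": max(a["max"], b["max"]),
    }

# Two chunk files processed independently, then merged:
part1 = aggregate_chunk([3, 1, 4])
part2 = aggregate_chunk([1, 5, 9])
total = merge(part1, part2)
# total == {"sum": 23, "count": 6, "max": 9}
```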
<p>If you need more complicated aggregators, like <c>TOP 10</c> or a <c>HAVING</c> filter, things require much more thought - the Google Sawzall and MapReduce projects have published papers on symmetric aggregators beyond the trivial cases.</p>
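<p><c>TOP N</c> is one of the cases that can still be merged, though it takes a moment to see why: the global top N is always contained in the union of the per-partition top N lists. A sketch in Python (N = 3 for brevity):</p>

```python
import heapq

def top_n(values, n):
    """Per-partition TOP N."""
    return heapq.nlargest(n, values)

def merge_top_n(partials, n):
    """The global top N must appear in some partition's top N,
    so taking TOP N over the union of partial results is exact."""
    return heapq.nlargest(n, (v for part in partials for v in part))

p1 = top_n([5, 17, 2, 9], 3)   # [17, 9, 5]
p2 = top_n([11, 3, 20, 8], 3)  # [20, 11, 8]
result = merge_top_n([p1, p2], 3)
# result == [20, 17, 11]
```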
<p>I've used small SQLite databases to produce and store the intermediate results and <c>ATTACH</c>ed the databases to create the final results from them. In my experience SQLite is not suitable for holding (and JOINing) data if the file size goes above 2GB, but that experience was with SQLite 2. Changes to the Btree backend may have improved that limit.</p>
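<p>The intermediate-database approach looks roughly like this (a Python sketch using temporary files; the table layout is invented for illustration):</p>

```python
import os
import sqlite3
import tempfile

# Sketch: write intermediate results into per-chunk SQLite files,
# then ATTACH them to one connection and combine.
tmp = tempfile.mkdtemp()
for name, rows in [("part1.db", [1, 2]), ("part2.db", [3, 4])]:
    con = sqlite3.connect(os.path.join(tmp, name))
    con.execute("CREATE TABLE t (v INTEGER)")
    con.executemany("INSERT INTO t VALUES (?)", [(r,) for r in rows])
    con.commit()
    con.close()

main = sqlite3.connect(":memory:")
main.execute("ATTACH DATABASE ? AS p1", (os.path.join(tmp, "part1.db"),))
main.execute("ATTACH DATABASE ? AS p2", (os.path.join(tmp, "part2.db"),))
total = main.execute(
    "SELECT SUM(v) FROM (SELECT v FROM p1.t UNION ALL SELECT v FROM p2.t)"
).fetchone()[0]
# total == 10
```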
<p>Without knowing <i>how</i> the data is fetched from the database and <i>how</i> the restructuring is to be done, it is hard to suggest a proper optimization. Just be warned that, for example, some tied hashes do not cope well with more than 4 GB of keys.</p>
1000794