[mkgmap-dev] Memory limits for mkgmap and splitter

Hi,
I used to process the europe.osm file (from geofabrik.de) using
splitter. It no longer finishes in a reasonable time because of the
4GB RAM limit of my hardware.
I have a suggestion that comes from my experience years ago, when I
had to crunch gigabytes of textual data using Perl. Back then I wrote
scripts that used associative arrays (hashes) for data storage, since
I needed access to the whole data set "at once" for report
generation. The trick was to test the processing on a small amount of
data and, for the big run, "tie" the associative array to BerkeleyDB.
The processing was slow, but it was possible without spending any
more time searching for other solutions.
Would it be possible to add an option where the data, instead of
being held in internal in-memory structures, is stored partly on disk
and partly in a database engine cache (see the BerkeleyDB
implementation)? I am aware that the processing would be 10 or 100
times slower, but it would not take my workstation down by grabbing
all RAM and swap space, and it could run in the background, leaving a
substantial amount of RAM and one CPU core free for other tasks.
In Perl this is the built-in "tie" functionality. I have, however, no
idea what data structures mkgmap and splitter use internally, or how
to translate the trick into Java.
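To sketch what the Perl "tie" trick might look like in Java (this is
only an illustration with a hypothetical `DiskBackedMap` class, not
actual mkgmap or splitter code): keep only a small per-key index in
RAM and append the values to a file, so memory use scales with the
number of keys rather than the size of the values.

```java
import java.io.IOException;
import java.io.RandomAccessFile;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch of a disk-backed map, analogous to tying a
// Perl hash to BerkeleyDB. Values live in an append-only file;
// only an (offset, length) pair per key is held in RAM.
public class DiskBackedMap implements AutoCloseable {
    private final RandomAccessFile file;
    private final Map<Long, long[]> index = new HashMap<>();

    public DiskBackedMap(Path path) throws IOException {
        this.file = new RandomAccessFile(path.toFile(), "rw");
    }

    public void put(long key, String value) throws IOException {
        byte[] bytes = value.getBytes(StandardCharsets.UTF_8);
        long offset = file.length();
        file.seek(offset);          // append at end of file
        file.write(bytes);
        index.put(key, new long[] { offset, bytes.length });
    }

    public String get(long key) throws IOException {
        long[] entry = index.get(key);
        if (entry == null) return null;
        byte[] bytes = new byte[(int) entry[1]];
        file.seek(entry[0]);        // jump to the stored value
        file.readFully(bytes);
        return new String(bytes, StandardCharsets.UTF_8);
    }

    @Override
    public void close() throws IOException {
        file.close();
    }

    public static void main(String[] args) throws IOException {
        Path tmp = Files.createTempFile("nodes", ".dat");
        try (DiskBackedMap map = new DiskBackedMap(tmp)) {
            map.put(42L, "node 42");
            System.out.println(map.get(42L));
        } finally {
            Files.deleteIfExists(tmp);
        }
    }
}
```

A real solution would more likely swap the internal maps for an
existing embedded store (BerkeleyDB Java Edition, or similar) rather
than hand-rolling this, but the principle is the same.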
If you think the change is trivial and can point me to the critical
section, I might see if I can get it implemented.
Any ideas?
Paul
--
Don't take life too seriously; you will never get out of it alive.
-- Elbert Hubbard