Massimiliano,
I had to update your code for it to compile (I removed "sequence" from
testpdf'). However, I don't see any significant difference in the
memory profile of either testpdf or testpdf'.
Not sure how you are watching the memory usage, but in case you didn't
know, the RTS option "+RTS -sstderr" will print out useful memory
statistics when you run your program. E.g.:
> pdf_test.exe +RTS -sstderr
gives:
  2,157,524,764 bytes allocated in the heap
    246,516,688 bytes copied during GC (scavenged)
      6,086,688 bytes copied during GC (not scavenged)
     45,107,704 bytes maximum residency (8 sample(s))

           4086 collections in generation 0 (  0.61s)
              8 collections in generation 1 (  0.67s)

            129 Mb total memory in use

  INIT  time    0.02s  (  0.00s elapsed)
  MUT   time    5.83s  (  7.48s elapsed)
  GC    time    1.28s  (  1.45s elapsed)
  RP    time    0.00s  (  0.00s elapsed)
  PROF  time    0.00s  (  0.00s elapsed)
  EXIT  time    0.00s  (  0.00s elapsed)
  Total time    7.13s  (  8.94s elapsed)

  %GC time      18.0%  ( 16.3% elapsed)

  Alloc rate    369,202,098 bytes per MUT second

  Productivity  81.8% of total user, 65.2% of total elapsed
Above you can see that 45 MB was the maximum amount of memory ever in
use - and according to the heap profiling I did, it's roughly constant.
I saw the same results when using "testpdf'".
A few tricks I've learned to reduce space usage:
* Use strict returns ( return $! ... ).
* Prefer foldl' over foldr unless you have to use foldr.
* Profile, profile, profile - understand who is hanging on to the
memory (+RTS -hc) and how it's being used (+RTS -hb).
* Use +RTS -p to understand who's doing all the allocations and
where your time is being spent.
* Approach profiling like a science experiment - make one change,
observe whether anything is different, roll back and make another
change, then observe again. Keep notes!
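To illustrate the first two tips, here's a minimal standalone sketch
(nothing to do with testpdf itself - the function names are made up
for the example). A lazy foldl builds a long chain of (+) thunks,
while foldl' forces the accumulator at each step and runs in constant
space; return $! similarly forces a value before the caller can hold
on to it as a thunk:

```haskell
import Data.List (foldl')

-- Lazy left fold: the accumulator is a growing chain of unevaluated
-- (+) thunks, which can cost a lot of heap (or stack) on big inputs.
sumLazy :: [Int] -> Int
sumLazy = foldl (+) 0

-- Strict left fold: the accumulator is forced at every step,
-- so the fold runs in constant space.
sumStrict :: [Int] -> Int
sumStrict = foldl' (+) 0

main :: IO ()
main = do
  -- return $! evaluates the sum *before* returning it, so the
  -- binding below holds a plain Int, not a deferred computation.
  n <- return $! sumStrict [1 .. 1000000]
  print n
```

Compiling this with -prof and running with +RTS -hc is a quick way to
see the difference between the two folds in the heap profile.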
Good luck!
Justin