Ian Taylor and I just posted a document discussing the results of
testing different values of cbs and gbs for a large ADMB model (SS3).
One thing we learned is that the command line options -gbs and -cbs
(and -ams) are read into an integer data type, while the code that
performs the buffer calculations (excuse my ignorance) uses a long.
This limits the inputs to values below 2^31 and the total memory
allocation to around 2 GB. On Linux a long is 64 bits, so we made a
simple change to how the command line inputs are read in (using long)
and were able to allocate 12 GB to an ADMB program. On Windows,
however, a long is only 32 bits, so this change makes no difference
there (I have no clue about Macs).
I made these changes to the xmodelm3.cpp file and attempted to add some
Doxygen comments. I did not attempt to change any of the memory
allocation warnings in gs_set.cpp that still reference 16-bit systems;
I do not feel I understand the gradstack or cmpdiff buffers well enough
to do that intelligently.
I am wondering whether this change would be useful to add to the source
code, and I am curious about the process for contributing changes and
how such changes are verified and reviewed.
Thanks,
Allan
-------------- next part --------------
An embedded and charset-unspecified text was scrubbed...
Name: xmodelm3.cpp
URL: <http://lists.admb-project.org/pipermail/developers/attachments/20100818/399deb98/attachment-0001.pl>