Michael Abshoff wrote:
> Sure, it also works for incremental builds and I do that many, many
> times a day, i.e. for each patch I merge into the Sage library. What
> gets recompiled is decided by our own dependency tracking code which we
> want to push into Cython itself. Figuring out dependencies on the fly
> without caching takes about 1s for the whole Sage library which includes
> parsing every Cython file.
>
Hm, I think I would have to look at how Sage works internally to really
understand the implications. But surely, if you can figure out the whole
dependency graph for scipy in one second, I would be more than impressed:
you would beat waf and make at their own game.
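For context, a one-second scan of a whole library is plausible when dependency extraction is purely textual. Below is a minimal sketch of that idea (this is not Sage's actual code; the regex and function name are made up for illustration): it pulls module names out of `cimport` lines with a single regex pass, no compilation involved.

```python
import re

# Hypothetical minimal scanner -- not Sage's actual dependency code.
# Matches "cimport foo" and "from pkg cimport name" at line starts.
CIMPORT_RE = re.compile(r'^\s*(?:from\s+(\S+)\s+)?cimport\s+(\S+)', re.M)

def scan_deps(source):
    """Return the set of module names a .pyx/.pxd source textually cimports."""
    deps = set()
    for frm, name in CIMPORT_RE.findall(source):
        # For "from pkg cimport x" we record the package; otherwise the module.
        deps.add(frm if frm else name)
    return deps

print(sorted(scan_deps("from sage.rings cimport ring\ncimport numpy")))
# -> ['numpy', 'sage.rings']
```

Since this never runs the compiler, scanning thousands of files is dominated by I/O, which is why a full pass can stay around a second.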
> We used to use threads for the "parallel stuff" and it is indeed racy,
> but that was mostly observed when running doctests since we only had one
> current directory. All those problems went away once we started to use
> Pyprocessing and while there is some overhead for the forks it is
> drowned out by the build time when using 2 cores.
>
Does pyprocessing work well on Windows as well? I have zero experience
with it.
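For what it's worth, pyprocessing (merged into the standard library as `multiprocessing` in Python 2.6) does support Windows, but since Windows has no fork, child processes re-import the main module, so the entry point must be guarded. A minimal sketch, with `compile_one` as a made-up stand-in for a real build task:

```python
from multiprocessing import Pool

def compile_one(name):
    # Stand-in for compiling one Cython module.
    return "built " + name

if __name__ == "__main__":
    # This guard is required on Windows: without it, each spawned child
    # re-executes the module top level and recursively launches workers.
    with Pool(2) as pool:
        print(pool.map(compile_one, ["a.pyx", "b.pyx"]))
        # -> ['built a.pyx', 'built b.pyx']
```

On Unix the guard is optional (children are forked), which is why code that works there can still break when first tried on Windows.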
> Ouch. Is that without the dependencies, i.e. ATLAS?
>
Yes - but I need to build scipy three times, once for each ATLAS variant
(if I could use numscons, it would be much better, since a library change
is handled as a dependency in scons; with distutils, the only safe way is
to rebuild from scratch for every configuration).
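To illustrate the difference: scons treats the linked library as just another build input, so a staleness check over all inputs, the library included, decides whether anything rebuilds. A rough mtime-based sketch of that idea (a hypothetical helper, not numscons' actual machinery, which hashes content rather than comparing timestamps):

```python
import os

def needs_rebuild(target, inputs):
    """True if the target is missing or any input (source files AND the
    linked library, e.g. libatlas.a) is newer than it.

    Because the library is listed as an input, switching ATLAS variants
    triggers a rebuild automatically -- no build-from-scratch needed.
    """
    if not os.path.exists(target):
        return True
    target_time = os.path.getmtime(target)
    return any(os.path.getmtime(p) > target_time for p in inputs)
```

distutils, by contrast, only compares a C file against its object file, so a changed library goes unnoticed, hence the full rebuild per configuration.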
> I was curious how you build the various version of ATLAS, i.e. no SSE,
> SSE, SSE2, etc. Do you just set the arch via -A and build them all on
> the same box? [sorry for getting slightly OT here :)]
>
This does not work: ATLAS will still use SSE if your CPU supports it,
even if you force an architecture without SSE. I tried two different
approaches: first, using a patched qemu with options to emulate a P4
without SSE, with SSE2, and with SSE3, but this did not work so well (the
generated versions are too slow, and handling virtual machines in qemu is
a bit of a pain). Now I just build on different machines, and hope I
won't need to rebuild them too often.
cheers,
David