> From: Colin Walters <address@hidden>
> On Tue, 2004-06-22 at 19:15 -0700, Tom Lord wrote:
> > GCC commits happen too fast (last I checked) to serialize them while
> > inserting tests between each one.
> > I.e., just naively dropping a "make test" call into your PQM just
> > before the "tla commit" --- probably the commit queue will grow
> > without bound (until the developers notice and say, hey, this isn't
> > working :-).
> How else would you do it? You could run the tests in parallel, which
> would scale up faster with more CPUs, but if you want to enforce the
> invariant that the test suite passes for every commit on the mainline -
> you have to run the test suite for every commit.
So, to sum up: if you want a process that says:

    while (pending patch)
        if (test is ok)
            commit
then your patch queue is rate limited by the greater of the commit
time and the test time. Test time for GCC will strongly dominate.
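A quick back-of-the-envelope model of that loop (the numbers here are
purely hypothetical, not measurements of GCC): patches arrive at some
rate, and the serial gate retires one patch per max(test, commit)
minutes. Whenever the arrival rate exceeds that service rate, the queue
grows without bound:

```python
def queue_length_after(hours, commits_per_hour, test_minutes, commit_minutes):
    """Envelope calculation: patches arrive at commits_per_hour; the gate
    services one patch per max(test, commit) minutes (assuming the commit
    of one patch can overlap the test of the next).  The queue length is
    arrivals minus completions, floored at zero."""
    service_minutes = max(test_minutes, commit_minutes)
    arrived = commits_per_hour * hours
    serviced = (hours * 60) / service_minutes
    return max(0.0, arrived - serviced)

# Hypothetical numbers: 3 commits/hour against a 2-hour test run.
# The gate retires 0.5 patches/hour, so the queue grows by 2.5/hour.
print(queue_length_after(8, commits_per_hour=3, test_minutes=120, commit_minutes=1))
# -> 20.0 patches backed up after one working day
```

The commit time barely matters in this regime; as long as the test run
dominates, the test time alone sets the throughput.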
Speeding up test times, through any combination of parallel builds,
cached build results, precompiled headers etc --- are therefore,
indeed, useful ways to raise the maximum commit rate of the above
loop.
But there's an upper bound on that commit rate and, for now anyway,
I'm just pretending we know that the maximum commit rate for that loop
is too low.
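The same arithmetic shows both why speed-ups help and why the bound
remains (again with made-up numbers, just to illustrate the shape):

```python
def max_commits_per_hour(test_minutes, commit_minutes):
    """Upper bound on the loop's throughput: one patch per
    max(test, commit) minutes, however the work is arranged."""
    return 60.0 / max(test_minutes, commit_minutes)

print(max_commits_per_hour(120, 1))  # 0.5 -- a 2-hour test run
print(max_commits_per_hour(30, 1))   # 2.0 -- a 4x test speed-up gives 4x the rate
```

Cutting the test time by 4x raises the ceiling by 4x, but it is still a
ceiling; once the test is fast enough, the commit time itself becomes
the next bound.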
That's not a bad thing to pretend. _Absent_ compilation and
test-execution speed-ups, the commit rate is already too high for the
testing to keep up with. [To be clear: that's my rough recollection
and an envelope calculation, not a proven result that I'm claiming.]
Are the compilation and test-execution speed-ups viable? In the long
run as testing procedures get more complex? I doubt it. Remember
that, ideally, these tests are repeated on several platforms.
Never mind that they can be done in parallel -- just pushing around the
bits is already going to get you in trouble.
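To see why pushing the bits around hurts, here's a hypothetical
transfer-time estimate (tree size, platform count, and link speed are
all invented for illustration):

```python
def sync_hours(tree_gib, platforms, mbit_per_s):
    """Envelope calculation: hours spent just shipping one build/test
    tree to each test platform over a link of the given speed."""
    bits = tree_gib * 8 * 1024**3
    return bits * platforms / (mbit_per_s * 1e6) / 3600

# e.g. a 1 GiB tree to 6 platforms over a 10 Mbit/s link:
print(round(sync_hours(1, 6, 10), 1))  # -> 1.4 hours per tested revision
```

That cost is paid per tested revision and scales linearly with the
number of platforms, so it eats into whatever headroom the
test-execution speed-ups bought you.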
-t