Yesterday I switched patchbot to submit jobs to my personal devel
environment while our ordinary one is being repaired. However, it looks
like this bug is back on my freshly upgraded VM host and hitting with
a vengeance:
https://bugzilla.redhat.com/show_bug.cgi?id=927032
That is why a large number of patchbot jobs aborted overnight, and it
doesn't look like it will be solved any time soon. For now I have marked
the VMs as broken, which leaves only one physical box in my Beaker.
I'm open to suggestions on where to send the patchbot jobs instead.
--
Dan Callaghan <dcallagh(a)redhat.com>
Software Engineer, Infrastructure Engineering and Development
Red Hat, Inc.

I'm pleased to announce that thanks to the efforts of Dan Callaghan and
Red Hat's IT department, HTTPS access is now enabled for beaker-project.org.
I highly recommend switching any yum repo files that refer to
beaker-project.org over to using HTTPS rather than HTTP :)
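For anyone doing the switch in bulk, something along these lines should
work (the /etc/yum.repos.d/ path is the usual location for repo files,
but check your own setup first):

```shell
# Rewrite http:// to https:// for beaker-project.org in yum repo files.
# Assumes the repo files live in /etc/yum.repos.d/; adjust the path if
# yours are elsewhere.
sudo sed -i 's|http://beaker-project\.org|https://beaker-project.org|g' \
    /etc/yum.repos.d/*.repo
```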
Cheers,
Nick.
P.S. At some point in the future we'll likely automatically redirect any
remaining HTTP traffic to HTTPS, but Gerrit doesn't like that with our
current web server configuration.
--
Nick Coghlan
Red Hat Infrastructure Engineering & Development, Brisbane
Testing Solutions Team Lead
Beaker Development Lead (http://beaker-project.org/)

My patch series to port Beaker's web UI to Twitter Bootstrap is now
complete. It's on Gerrit under the "bootstrap" topic. Included is the
patch to start using python-webassets for asset management and
minification.
http://gerrit.beaker-project.org/#/q/project:beaker+branch:develop+topic:...
All tests are passing locally on my dev box. The dogfood task is
currently failing because twill (in /distribution/beaker/setup)
apparently doesn't understand <button type="submit"/>. My solution to
that is here (as yet untested):
http://gerrit.beaker-project.org/2217
--
Dan Callaghan <dcallagh(a)redhat.com>
Software Engineer, Infrastructure Engineering and Development
Red Hat, Inc.

Just dumping a transcript of an IRC conversation between rmancy and me
about unit tests vs integration tests (in the context of the patch that
makes the createrepo command configurable:
http://gerrit.beaker-project.org/#/c/2208/).
At the moment, Beaker's unit testing is fairly minimal, so we don't have
a good, quick, confidence building set of tests to run before pushing to
Gerrit, just the full set of integration tests (which can take half an
hour or more to run, depending on the details of your system).
It's going to take some time for us to improve Beaker's unit testing
story; this just struck me as an expedient way to put on record that
this is the direction we're likely to be heading :)
Cheers,
Nick.
[11:45] <rmancy> ncoghlan, I don't understand what the objection is to
creating integration tests that test different configuration values
[11:46] <ncoghlan> rmancy: integration tests are *slow*, because they
include the full chain (client, web server, database)
[11:46] <ncoghlan> so doing exhaustive testing at the integration level
is inordinately expensive
[11:47] <ncoghlan> the idea of unit/integration/acceptance layering is
to increase confidence while minimising cost
[11:47] <ncoghlan> so you do your exhaustive testing of combinations at
the unit test layer, where it's cheap
[11:48] <ncoghlan> and use your integration tests mostly to ensure
everything is hooked up correctly
[11:48] <ncoghlan> rather than to do exhaustive testing of the
alternative inputs at each layer
[11:48] <rmancy> ncoghlan, so why don't we just abandon integration
testing of anything that is not directly testing the UI?
[11:48] <rmancy> i.e. unit tests are the new norm
[11:49] <ncoghlan> rmancy: because it hasn't been our highest priority
to date :)
[11:49] <ncoghlan> but yes, that's the direction I would eventually like
us to go
[11:49] <rmancy> ncoghlan, it doesn't need to be a priority, we just
need to say 'no more integration tests of things that aren't directly
testing the UI'
[11:50] <ncoghlan> rmancy: well, you need at least *some* integration
tests to ensure things are hooked up correctly (so one success case, one
failure case, both the web UI and the CLI)
[11:51] <ncoghlan> but yeah, that would be a good thing to include in
the developer guide :)
[11:51] <ncoghlan> I'd also like us to start moving the unit tests into
test subpackages
[11:51] <ncoghlan> rather than having the test files directly adjacent
to the main source files
[11:51] <rmancy> ncoghlan, but then if you have tested your
success/failure via the integration test, in some circumstances that
would negate the need for the unit test
[11:52] <rmancy> which is also fine as far as I'm concerned
[11:52] <ncoghlan> rmancy: ah, but once the unit tests are good enough,
you change the merge criteria from "integration tests pass" to "unit
tests pass", to reduce your cycle times
[11:53] <ncoghlan> since the unit tests should run one or two orders of
magnitude faster than the integration tests
[11:55] <ncoghlan> so yeah, since we're updating the developer guide to
add a high level style guide this sprint, it's a good idea to add
something along those lines
[11:56] <ncoghlan> (although patchbot has already taken some of the pain
away, since it's usually feasible to let that handle the testing to get
a +1 verified, and work on something else while waiting for it)
[11:56] <rmancy> ncoghlan, lately I've been thinking that patchbot does
a reasonable job to reduce those times (at least in my case...)
[11:56] <rmancy> heh, yes
[11:57] <ncoghlan> that means good unit tests become the criteria for
"run these before pushing to Gerrit, so you don't waste patchbot's time" :)
[11:57] <rmancy> ncoghlan, I agree
[11:58] <rmancy> but good integration tests still give you confidence
that you won't break soemthing in production
[11:58] <rmancy> So I think they are still important to have, even if we
get to a point where we only run them before branching a release or
something
[11:58] <ncoghlan> yeah
[11:59] <ncoghlan> actually, I think we're likely to stick with the
current flow (of patchbot running the integration tests)
[11:59] <rmancy> right
[11:59] <ncoghlan> so improving the unit tests will be about a pre-check
run by developers before pushing to Gerrit
[11:59] <rmancy> I just mean that it's important to have good
integration tests, even if you only run them once
[11:59] <ncoghlan> yup
[12:00] <ncoghlan> it's just about remembering that integration tests
are there to ensure everything is hooked up correctly, not verifying
internal details of individual components
[12:00] <ncoghlan> unless there's no way to verify those internal
details with a unit test
[12:01] <ncoghlan> it's also why the approach of using exception types
to communicate with the UI is so important
[12:01] <ncoghlan> at the moment, we *have* to do exhaustive integration
tests, since that's the only way to check the UI error handling
[12:02] <ncoghlan> whereas when the internal layers communicate with the
UI by throwing particular exceptions, then the integration tests just
need to provoke each kind of error *once*
[12:02] <ncoghlan> and ensure the appropriate message is displayed
[12:03] <ncoghlan> then the *unit* tests can take care of testing the
different ways of provoking each exception, and ensure those each have
the right message in the exception
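[A minimal sketch of the pattern ncoghlan describes; InvalidTaskError,
save_task and render_error are all illustrative names, not Beaker's
actual API.]

```python
# Sketch of the exception-based UI communication pattern described
# above; all names here are hypothetical, not Beaker's real API.

class InvalidTaskError(ValueError):
    """Raised by the internal layer when a task is rejected."""

def save_task(filename):
    # Internal layer: reports failures via the exception type rather
    # than returning UI-specific strings.
    if not filename.endswith(".rpm"):
        raise InvalidTaskError("task %r is not an RPM" % filename)
    return filename

def render_error(exc):
    # UI layer: a single place that turns exceptions into messages,
    # so an integration test only needs to provoke each error once.
    return "Error: %s" % exc

# Unit tests can then exhaustively cover the different ways of
# provoking the exception and check the message it carries:
try:
    save_task("broken.tar.gz")
except InvalidTaskError as exc:
    assert "not an RPM" in str(exc)
```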
[12:03] <ncoghlan> hmm, I think I may grab a copy of this archive and
post it to the beaker-devel list :)
[12:04] <ncoghlan> since it's important, but it may be a while before we
get it explained nicely in the developer guide
[12:08] <rmancy> ncoghlan, but the unit tests don't actually run the same
code as an integration test. So whilst a unit test will test that the
actual code that was changed works, it doesn't check that it works in
the context it will actually be called in.
[12:09] <ncoghlan> rmancy: the main thing the unit tests ensure is that
calls that used to work keep working. If an integration test fails when
the unit tests passed, it's often a sign that there's a missing unit test
[12:10] <ncoghlan> the other thing it can indicate is that a mocked API
is too permissive, so the unit tests aren't enforcing the same
constraints as the integration tests (there are pros and cons to that)
[12:11] <rmancy> ncoghlan, ok so you can fix your unit test up to catch
those kinds of issues, but surely you still want the integration test
there to continue finding non-obvious errors so that you can continually
improve your unit tests?
[12:12] <ncoghlan> that's why it's important to have both - unit tests
optimise for speed and reliability and avoiding external dependencies in
order to improve cycle times during development, but you need the
integration tests to back them up and ensure the assumptions in the unit
tests are still valid
[12:12] <rmancy> right
[12:13] <ncoghlan> and then acceptance tests for new features add a
*third* layer, which ensure that what you built is actually usable by
other humans ;)
[12:14] <rmancy> ncoghlan, right, so in the case of the createrepo
tests, adding a unit test to complement the integration test is what is
needed. I don't yet buy that they are fragile, or not worth the hassle.
[12:15] <ncoghlan> rmancy: there's an existing Bugzilla bug to say we
need unit tests for model.TaskLibrary. The bit Dan is pointing out as
fragile is starting and stopping gunicorn to pick up the config change.
[12:16] <rmancy> ncoghlan, we found a problem that I've fixed. I don't
see that as any different to other test iterations.
[12:16] <ncoghlan> https://bugzilla.redhat.com/show_bug.cgi?id=965915 btw
[12:18] <ncoghlan> rmancy: agreed, there's value in that integration
test to ensure it *is* reading the config file correctly
[12:20] <ncoghlan> however, it's not actually testing that at the moment
- you could change the code to always call "createrepo" regardless of
the settings, and it would still pass
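[A unit test along these lines would catch the gap being described here;
run_createrepo and the config key name are illustrative stand-ins, not
Beaker's actual code.]

```python
import unittest

def run_createrepo(config, repo_dir):
    # Illustrative stand-in for the code under discussion: it builds
    # the command from the configured value instead of hardcoding
    # "createrepo".
    command = config.get("beaker.createrepo_command", "createrepo")
    return [command, repo_dir]

class TestCreaterepoConfig(unittest.TestCase):
    def test_configured_command_is_used(self):
        # This fails if the code ignores the setting and always calls
        # "createrepo", which is exactly the hole described above.
        config = {"beaker.createrepo_command": "createrepo_c"}
        self.assertEqual(run_createrepo(config, "/repo"),
                         ["createrepo_c", "/repo"])

    def test_default_command(self):
        self.assertEqual(run_createrepo({}, "/repo"),
                         ["createrepo", "/repo"])
```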
[12:24] <rmancy> ncoghlan, hmm, that's true
[12:24] <ncoghlan> rmancy: more specific feedback added to
http://gerrit.beaker-project.org/#/c/2208/ :)
[12:24] <rmancy> ok, I think I need to change that test
--
Nick Coghlan
Red Hat Infrastructure Engineering & Development, Brisbane
Testing Solutions Team Lead
Beaker Development Lead (http://beaker-project.org/)

Hi all,
This bug [1] was filed and is being worked on to ensure that we do not end up
with different tasks pointing to the same RPM. Looking at the task upload/save
code, one of the first checks performed is that an RPM with the same filename
doesn't already exist on disk; Beaker would therefore reject an attempt to
upload an RPM with the same filename.
Thus, the only way we could have ended up with multiple tasks pointing to the
same RPM is if those tasks were uploaded before this check existed in Beaker.
From the implementation point of view, this is just a matter of making the
appropriate change in model.py:
- Column('rpm', Unicode(2048)),
+ Column('rpm', Unicode(255), unique=True),
Also, as commented in [2], I will check whether any existing RPM filenames exceed the new limit.
Writing the test seems to be a little non-trivial, considering the above.
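As a sketch of that pre-migration check (the helper name, and the idea of
fetching the filenames from the task table first, are my assumptions, not
Beaker's code):

```python
# Hypothetical helper for the pre-migration check described above.
# It assumes the rpm filenames have already been fetched from the
# task table; the function name is illustrative.
def find_problem_rpms(filenames, limit=255):
    """Return (duplicates, too_long) for a list of task RPM filenames."""
    seen = set()
    duplicates = set()
    too_long = []
    for rpm in filenames:
        if rpm in seen:
            duplicates.add(rpm)   # would violate the new unique constraint
        seen.add(rpm)
        if len(rpm) > limit:
            too_long.append(rpm)  # would not fit in Unicode(255)
    return duplicates, too_long
```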
[1] https://bugzilla.redhat.com/show_bug.cgi?id=998369
[2] https://bugzilla.redhat.com/show_bug.cgi?id=998369#c2
-Amit.

I did the following to get the development server up and running
with the patches pushed by Dan on Gerrit:
- Checked out http://gerrit.beaker-project.org/#/c/2200/
- Installed the following packages:
# yum -y install uglify-js lessjs python-webassets
- Checked out the submodules, after modifying the location of bootstrap's
git repo in .gitmodules:
url = git://github.com/twbs/bootstrap.git
$ git submodule sync
$ git submodule init
$ git submodule update
- Started the dev server.
-Amit.

Hello all,
Starting this week, I will post an email like this one every Monday, updating the status
of Beaker's test runs on Fedora Rawhide.
Kernel:
3.11.0-0.rc5.git1.1.fc20.x86_64
Test run summary:
882 tests, 14 failures, 26 errors, 3 skips in 2767.7s
Causes for failures:
- Incompatibility with sqlalchemy-0.8 (https://bugzilla.redhat.com/show_bug.cgi?id=989902)
- cracklib-python is broken (https://bugzilla.redhat.com/show_bug.cgi?id=998321)
- There are a few other failures/errors which seem to be unrelated to the above, but I am not
sure yet.
Notes:
- selenium-2.35.0 is out, which is needed for Beaker's Selenium tests to run with recent
Firefox versions (I will submit a patch soon to fix this).
Best,
Amit.
--
Amit Saha <http://echorand.me>
Infrastructure Engineering and Development
Red Hat, Inc.