And in addition to the new Planck jobs, C@H is the "Marathon" project for the yearly BOINC Pentathlon contest! You can find out more about the Pentathlon here.

To slightly sweeten the deal, we'd like to add a few more thank-yous to the paper, so we will again take the top 3 users and the top team (excluding the winners from last time) and add your names to the paper. Although you're welcome to start crunching Planck jobs now, the contest will only consider Planck jobs returned during the Pentathlon, which runs from May 5th to May 19th (see the link above for a countdown). We'll post an updated page this week to keep track of the leaders. Good luck everyone!

Looks like validation is a bit behind as you guys are pummeling the server with results ;) We'll make sure it catches up before the end of the contest. I will also post the page tracking the results of our contest soon.

I feel sorry for that validation server. The queue is now over 85,000. On the other hand, some stats say the volume of points, and thus the work being done, is now 4x the usual level. Perhaps we shouldn't be surprised that it's a struggle.

But looking at my own queues, it seems camb_boinc2docker are the units not getting processed. All the legacy and planck work I've returned has cleared through, but many of the boinc2docker results have been stuck since yesterday morning, even though they have a quorum of 1?

The problem for Cosmology at the moment is not work creation but validation. There's no point making more work available if the server can't deal with the results.

The validation server queue is now over 100,000.

Some work is getting through the validator, but it is hit and miss: whether a result is validated depends on whether the validator happens to have a moment free as that result lands. The validator seems to take from the top of the in-tray pile, i.e. it sees the latest arrival first. If the validator is free while your result is on top of the pile, you get points. If another WU arrives while yours is on top, yours gets covered and then progressively buried, never to see the light of day again.
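The starvation effect described above can be sketched with a small simulation. This is purely illustrative and assumes the guessed behaviour (validator pops from the top of a stack while new results arrive faster than they can be validated); the function and parameter names are hypothetical, not anything from the actual BOINC server code.

```python
def simulate(queue_type, arrivals_per_step, validations_per_step, steps):
    """Simulate a validator whose throughput is below the arrival rate.

    queue_type: "lifo" (take latest arrival first, as speculated above)
                or "fifo" (take oldest arrival first).
    Returns (validated_ids, still_pending_ids).
    """
    pending = []      # the in-tray pile; new arrivals append to the end
    validated = []
    next_id = 0
    for _ in range(steps):
        # New work units land on top of the pile.
        for _ in range(arrivals_per_step):
            pending.append(next_id)
            next_id += 1
        # The validator only has capacity for a few results per step.
        for _ in range(validations_per_step):
            if pending:
                if queue_type == "lifo":
                    validated.append(pending.pop())    # top of the pile
                else:
                    validated.append(pending.pop(0))   # bottom of the pile
    return validated, pending

# With 4x the usual load (4 arrivals per step, capacity 1), a LIFO
# validator only ever sees recent arrivals; the earliest results get
# buried and are never validated, while FIFO clears them in order.
lifo_done, lifo_stuck = simulate("lifo", 4, 1, 10)
fifo_done, fifo_stuck = simulate("fifo", 4, 1, 10)
```

Under these toy numbers the LIFO run never validates result 0 (it stays buried at the bottom of the pile), whereas the FIFO run validates results 0 through 9 in arrival order, which is why the "latest arrival first" behaviour would explain results stuck since yesterday.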

The validator is now in a downward spiral: the growing queue is probably adding to the strain as the unwieldy database tables and file records become more scrambled, adding to fragmentation on the server's hard drive. I don't know, but it may be time to temporarily block new results from being returned, to allow the validator process to clean up a bit?