All running very smoothly indeed. Just a problem with deadline scheduling which I hope we can discuss and resolve on Monday, especially with some feedback from the BOINC meeting in London.
Also some hiccups on the CERN AFS infrastructure.
I am now hoping to prioritise the writing of my paper on numerical-results reproducibility, but I am continuing to run work over the next weeks as described in my new thread "Work Unit Description" on the "Number Crunching" message board. I am also pondering how best to handle "very long" jobs, bearing in mind your feedback. And of course I shall try to keep you informed.

Regarding GPU tasks, when that happens I would suggest that they be relatively short, e.g. one hour maximum and preferably 20 to 30 minutes in duration.

I run SETI@Home and Einstein@Home. However, I do not allow GPU tasks for Einstein because they have, in the past, been very long and hogged the GPU (since there is no provision for task switching on the GPU).

Tom, excellent point. I have been running SETI almost entirely on the GPU (NVIDIA), waiting with my arms crossed and tapping my feet for LHC to start doing the same. To use the GPU, I would prefer task switching and the relevant settings to be available.
William C Wilson
São Paulo, Brazil

Well, after some discussion today and your feedback, I think I can say we will split into units of not more than one hour. This would be extremely satisfactory all round. The question is when I can do this; I think it is a top priority. It would also allow us to run over one million turns rather easily. The resources required here at CERN will be greater, as we need to submit and retrieve much more data. Nonetheless, in theory we have a plan for more capacity here with lxtrack, with more disk space and CPU. In addition, this could all be tested using our backup/test server to avoid disruption to the current rather fine service.
I'll keep you posted. Eric.

The majority of users on this project are enthusiasts who probably won't mind long, or even monster, work units, as long as the deadline is generous enough even for slower computers.

While it might be a fair amount of work initially to set up a second application for people to choose from, it would give you the flexibility to let people 'opt in' and also give you a separate queue for any future "odd" runs you might dream up :-)

I know this may be a little off topic, but I noticed that BOINC All Project Stats hasn't been updated for LHC in 302 days, according to my profile page. It shows me at 7050 credits when I actually have 46945 credits. Can you please check on that? Barry Hoover (BHoov)

The statistics for the LHC project (like those of every BOINC project) are simply generated and placed in a public location for any and every statistics site to collect and process.
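For context, BOINC projects export these statistics as gzip-compressed XML dumps at a public URL (conventionally under the project's `stats/` directory, with files such as `user.gz`), and third-party statistics sites fetch and parse them on their own schedule. Below is a minimal sketch of reading a user credit out of such a dump, assuming the standard BOINC field names (`name`, `total_credit`) and using a tiny inline sample rather than a real download; the exact layout of any given project's export is the BOINC convention, not something specific to LHC@home:

```python
import xml.etree.ElementTree as ET

# Tiny inline sample shaped like a BOINC "user" stats dump.
# Real dumps are served gzip-compressed (e.g. user.gz) and are much larger.
sample = b"""<users>
  <user>
    <id>1</id>
    <name>BHoov</name>
    <total_credit>46945.0</total_credit>
  </user>
</users>"""

def total_credit(xml_bytes, user_name):
    """Return the total_credit for user_name, or None if not found."""
    root = ET.fromstring(xml_bytes)
    for user in root.iter("user"):
        if user.findtext("name") == user_name:
            return float(user.findtext("total_credit"))
    return None

print(total_credit(sample, "BHoov"))  # prints 46945.0
```

A stats site that stops seeing fresh dumps at the old URL (for example after a project moves) would simply keep serving its last parsed snapshot, which is consistent with the stale figures described above.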

Other major players in the statistics game, BOINCstats and BOINC Combined Statistics, are up to date, so LHC is doing what it needs to do. I suspect that allprojectstats may not have updated its records when LHC changed location and URL last year. That's something you'll have to take up with them.