Yes, it could be ghosts being resent - some of my machines had developed a new crop of hauntings overnight. But it looks like it's beginning to decay now - this might be a good time to try manual updates, and help flush the remaining gremlins out of the system.

Well, as long as it's just Pocha Hauntis...
Batman: Some days you just can't get rid of a bomb.

Hopefully it's ghost task reissues, which should be returned a bit more quickly by hungry hosts, helping to clean up the database.
Results in the field and results awaiting validation have both been dropping.
Always remember.....kitties are all Angels with fur.
'Cat lives matter.'

On a side note: since switching from 6.2.19 to 6.10.58, I have noticed that my cache is not being processed in FIFO. It is all APs, about 17 days' worth, and APs with a deadline four days sooner than the ones that keep getting picked to run next are still sitting there, not getting started.

I know there are cache/queue changes along the way through the build history, but each WU has a 25-day deadline, so wouldn't it still make sense to run the soonest deadlines first (which also happen to be the ones that were acquired first)? I mean, I'm sure it works out in the end, but it's just weird.

Not true. FIFO does not necessarily equate to EDF (earliest deadline first). A task's deadline is determined by how long it is estimated to take to run. So it is possible to download a bunch of tasks yesterday estimated to take 20 hours to run, with deadlines in late January, and then a bunch today estimated to take 1 hour, with deadlines in mid-December. And don't forget, if you're running more than one project, BOINC has to balance all of them, and different projects do their time estimates and deadlines differently.
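The point about estimate-driven deadlines can be sketched in a few lines of Python (the task names, dates, and deadline delays below are invented for illustration): as soon as the deadline delay tracks the runtime estimate, download order (FIFO) and deadline order (EDF) come apart.

```python
from datetime import datetime, timedelta

# Hypothetical tasks: (name, download time, estimated runtime in hours,
# deadline delay in days). The long-estimate batch gets a generous delay;
# the short-estimate batch downloaded later gets a tight one.
tasks = [
    ("batch_A_long",  datetime(2012, 11, 20), 20, 60),  # deadline lands in late January
    ("batch_B_short", datetime(2012, 11, 21),  1, 25),  # deadline lands in mid-December
]

# Deadline = download time + estimate-dependent delay.
deadlines = {name: dl + timedelta(days=delay) for name, dl, est, delay in tasks}

fifo_order = [name for name, *_ in sorted(tasks, key=lambda t: t[1])]
edf_order = sorted(deadlines, key=deadlines.get)

print("FIFO:", fifo_order)  # download order: batch_A_long first
print("EDF: ", edf_order)   # deadline order: batch_B_short first
```

So the batch acquired first runs first under FIFO, while the batch acquired second would run first under EDF, exactly the mismatch described above.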

However, if all of the tasks you're looking at have the same time estimate, then I agree, it is weird for them not to run FIFO, which presumably is also EDF.
David
Sitting on my butt while others boldly go,
Waiting for a message from a small furry creature from Alpha Centauri.

If tasks from the same download batch don't appear to run in FIFO (and check very carefully that you haven't applied a sort order to one of the columns in BOINC Manager before jumping to that conclusion), then it's a long-standing bug which applies some slight randomisation to the display order as data is transferred from the server to the BOINC client to the BOINC Manager. In short, it's cosmetic only.

BOINC v6.10.58 is still a very old version. We applied a lot of pressure to get that bug (and many others) fixed - I forget just when. The latest versions - I'm running v7.0.38 - have had display order and running order in perfect step for a long time, possibly even since sometime in the v6.12.xx range, but I wouldn't advise upgrading just for this. Like I said, it's cosmetic only.

Thank you for the very informative insight into my observation. Since the tasks are all APs, they all have a 25-day deadline from when they were issued. In 6.2.19, they would crunch in FIFO unless, for some crazy reason, high-priority mode kicked in. I switched to 6.10.58 a few days ago and, for example, I have a pile of APs that are due Dec 3, but ones due Dec 6 were running in high priority instead. High priority has since ended, and the ones due Dec 3 and 4 still haven't been touched, but the ones due Dec 6-8 are being crunched pretty much in order.

I do notice that the sort order in Manager operates a little differently than in the older version, but I figure it will sort itself out eventually.
Linux laptop:
record uptime: 1511d 20h 19m (ended due to the power brick giving up)

I currently have 15 ghosts, all GPU units, and I'm now down to 104 units "In Progress", counting those ghosts. I'm probably down to under 2 days' worth of units for either the CPU or GPU, and the only reason I haven't run out of GPU units, other than it being a weak GPU, is that I routinely suspend GPU crunching to play games or watch movies.

Well, Turkey Day is coming up; I guess even a computer could use the break.
"Life is just nature's way of keeping meat fresh." - The Doctor

Well, since the outage I've picked up some work. I'm also getting new errors when trying to contact the Scheduler.
Still getting the timeouts, but to add to that, I'm now getting "Server returned nothing (no headers, no data)" & "Failure when receiving data from the peer".
As before, even with NNT set, it appears to depend on the wind direction & how you hold your tongue while clicking repeatedly on the retry button as to whether or not you will get a response from the Scheduler.
Grant
Darwin NT

My i7 is finally up to its full 200 WU limit. I suppose this means it finished enough Einstein GPU work to ask SETI for some, and the SETI servers were actually able to deliver it.

I also see that the five APs I got yesterday are done already, four valid and one pending.
David
Sitting on my butt while others boldly go,
Waiting for a message from a small furry creature from Alpha Centauri.

Save them (separately or together) in one or two files in BOINC's Data directory: give the files names with the extension ".cmd"

Then, double-clicking the file(s) will quickly give you an overview of how well the scheduler requests have been going.

Don't swamp Eric with data, but if a few of us (those who feel confident working with that minimalist instruction - don't bother if you're not comfortable doing that) keep an eye on his experiments and provide feedback, it may help. Remember your logs will be timestamped in your local timezone - please supply the UTC offset so he can match them up with the server changes.

After the change(s) this afternoon, I had several nodes that had empty caches but could not get a successful scheduler update. I was able to get them to start downloading some tasks by decreasing the minimum work buffer to 0.25 days. Now they are slowly getting some resent tasks.

Things seem to have started again, and this time we're talking to Synergy over the Campus data network (128.32.18.157) - anybody using a manually configured hosts file please note. We're still using setiboinc.ssl.berkeley.edu, so the proxies should pick up the change automatically.
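For anyone who does keep a manual hosts file entry for the scheduler, the updated mapping would be a single line like this (using the Campus address quoted above; remove it again once you're happy to rely on normal DNS):

```
128.32.18.157    setiboinc.ssl.berkeley.edu
```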

So far, the only difference that I've noticed (apart from the fact that it works...) is a re-allocation and download of some of the little graphics files used in Simple View.

I had not thought to check the logs that way. Quite a good idea. I took it a bit further and added a third check just for "[SETI@home] Scheduler request failed: Timeout was reached", to separate out the other failures. Then I have the .bat count the lines and give me the % failure for the total and for timeouts. So far, checking several machines that have data going back to the 5th, the failure rate is between 14% & 19% for all failures.
SETI@home classic workunits: 93,865 CPU time: 863,447 hours
Join the BP6/VP6 User Group today!
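The counting described above can also be sketched in Python rather than a .bat file. This is a minimal sketch, not the poster's actual script: the sample log lines below are invented (only the two failure messages are the ones quoted in the posts above), and real runs would feed it lines from whatever file holds your captured scheduler messages.

```python
def scheduler_stats(lines):
    """Count scheduler requests, failed requests, and timeout failures."""
    requests = failures = timeouts = 0
    for line in lines:
        if "Scheduler request" not in line:
            continue
        requests += 1
        if "Scheduler request failed" in line:
            failures += 1
            if "Timeout was reached" in line:
                timeouts += 1
    return requests, failures, timeouts

# Demo on invented sample lines (timestamps omitted for brevity):
sample = [
    "[SETI@home] Scheduler request completed: got 4 new tasks",
    "[SETI@home] Scheduler request failed: Timeout was reached",
    "[SETI@home] Scheduler request failed: Server returned nothing (no headers, no data)",
    "[SETI@home] Scheduler request completed: got 0 new tasks",
]
reqs, fails, touts = scheduler_stats(sample)
print(f"{fails}/{reqs} failed ({100 * fails / reqs:.0f}%), {touts} timeout(s)")
```

Separating the timeout count from the overall failure count is what lets you tell whether the scheduler is refusing connections outright or just responding slowly.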