And now none of the rigs has connected for half an hour.
From the looks of the Cricket graph, either things fell over again, or they are rebooting.
And so it goes.

A kitty keeps loneliness away.
More meowing, less hissing. I speak meow, do you?

Hmm... so once again they dumped lots of unsplit AP. Is AP not worth crunching, or how should we understand that?

They also probably have hundreds of terabytes of data recorded before the release of the Astropulse application. Dumping those just adds more to the pool of data processed only by the S@h Enhanced algorithms.

The production rate of AP "splitting" is probably heavily influenced by the amount of server-side blanking. The reason the AP splitters were going so slowly might possibly have been a lot of detected RADAR RFI, in which case splitting those tapes might be considered less worthwhile than cleaner data.

It is a bit troublesome that this problem keeps being re-enacted without much in the way of a learning process....

Perhaps, since there seems to be this compulsion to revisit this operational problem, what folks could do is announce something along these lines in advance:

AP disruptive release anticipated -- consider shifting over to another project until we realize it is disruptive and back off the process

Insert rueful smile/shrug here.

The production rate of AP "splitting" is probably heavily influenced by the amount of server-side blanking. The reason the AP splitters were going so slowly might possibly have been a lot of detected RADAR RFI, in which case splitting those tapes might be considered less worthwhile than cleaner data.

I thought the server-side stuff was supposed to be detecting APs that would be 100% blanked and not even send them out in the first place.

If they know which sections to inject random noise into... they know which sections need to be blanked. A simple byte-map would handle that, plus a tiny bit of splitter logic.

Basically, if your list of byte offsets and byte lengths is defined, the splitter can look at the list and say "oh, this WU that I'm splitting starts and finishes inside one span of blanked data; tell the science DB this one is bad, and I'm moving on to the next start point."
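A minimal sketch of that byte-map check, assuming a hypothetical list of (offset, length) pairs marking the blanked spans; the function name and signature are made up for illustration, not taken from the actual splitter code:

```python
# Hypothetical byte-map check: decide whether a workunit lies entirely
# inside a single blanked span, in which case the splitter can skip it
# instead of splitting and sending it out.

def fully_blanked(wu_start, wu_len, blanked_spans):
    """Return True if the WU [wu_start, wu_start + wu_len) falls
    entirely inside one (offset, length) blanked span."""
    wu_end = wu_start + wu_len
    for offset, length in blanked_spans:
        if offset <= wu_start and wu_end <= offset + length:
            return True
    return False

# Example: one blanked span covering bytes 1000..5000 of the tape.
spans = [(1000, 4000)]
print(fully_blanked(1500, 2000, spans))  # True  -> mark bad, move on
print(fully_blanked(500, 2000, spans))   # False -> split and send as usual
```

The same loop generalizes to a WU straddling the edge of a span; here only the fully-blanked case is skipped, since a partially blanked WU may still contain usable data.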

Saves 16 MiB of data transfer right there. Do that a couple thousand times... and it adds up real fast.

Linux laptop:
record uptime: 1511d 20h 19m (ended due to the power brick giving up)

Still struggling to get work, "Couldn't connect to server" is still the standard response to a Scheduler request.
Very, very few requests result in contact, and then "Failure when receiving data from the peer" tends to be the response.

Grant
Darwin NT

Still struggling to get work, "Couldn't connect to server" is still the standard response to a Scheduler request.
Very, very few requests result in contact, and then "Failure when receiving data from the peer" tends to be the response.

Seems that way, with full contact very intermittent but I just got 15 units including one AP.

If you switch to a US-based proxy, everything works: downloads are very fast and the servers give you work, so the old problem with the HE/router connection has returned for the rest of the world. So the servers really are online and working.

But as always, we use too much bandwidth, so the proxy admins kick us off very fast.

I can't even see the downloads. My cache is full in less than a minute.

Tim

Milkyway uses double precision math, and your 590 is not good at DP math, so it's no match for the ATI GPUs there. But at least you get plenty of jobs, and it will keep the GPUs warm. There are a few projects the 590 works better on, Collatz and GPUGrid for example, but be aware that each of them has its problems too.